# [Official] NVIDIA RTX 2080 Ti Owner's Club



## zhrooms

_Last Updated: February 3, 2020_

*NVIDIA GeForce® RTX 2080 Ti*

*RTX 2060 / SUPER Owner's Club*
*RTX 2070 / SUPER Owner's Club*
*RTX 2080 / SUPER Owner's Club*
*→ RTX 2080 Ti Owner's Club*

*Click here to join the discussion on Discord* or join directly through the *Discord app* with the code *kkuFR3d*



*LIFESPAN (Click SHOW)*



Spoiler



Measured between flagship releases, it took *1 Year, 6 Months* for the RTX 2080 Ti to come out after the GTX 1080 Ti.



*MSRP (Click SHOW)*



Spoiler



Adjusted for inflation. As of *January 17th, 2019*, the 300A chip can be purchased for $1049 in Europe, very close to MSRP.



*SPECS (Click SHOW)*



Spoiler






Rich (BB code):


   *Architecture* Turing
   *Chip* TU102-300-A1 / TU102-300A-A1
   *Transistors* 18,600 million
   *Die Size* 754 mm²
   *Manufacturing Process* 12nm

   *CUDA Cores* 4352
   *TMUs* 272
   *ROPs* 88
   *SM Count* 68
   *Tensor Cores* 544
   *GigaRays* 10 GR/s

   *Core Clock* 1350 MHz
   *Boost Clock* 1545 MHz
   *Memory* 11GB GDDR6
   *Memory Bus* 352-bit
   *Memory Clock* 1750 MHz / 14000 MHz
   *Memory Bandwidth* 616 GB/s
   *External Power Supply* 8 + 8 Pin
   *TDP* 250W

   *DirectX* 12.1
   *OpenGL* 4.6
   *OpenCL* 1.2
   *Vulkan* 1.2
   *CUDA Compute Capability* 7.5

   *Interface* PCIe 3.0 x16
   *Connectors* 1x HDMI 2.0b, 3x DisplayPort 1.4a, 1x USB-C with DisplayPort 1.4a (VirtualLink)
   *Dimensions* 268 x 115mm (2-Slot)

   *Special Features* NVIDIA NVLink 2-Way, Real-Time Raytracing (10 GR/s), H.265 Encode/Decode, NVIDIA G-Sync, NVIDIA VR-Ready, HDCP 2.2, Backplate

   *Price* $999 US (Founders Edition $1,199 US)

   *Release Date* September 20th, 2018







Code:


RTX 3090    | GA102-300 |  8nm | 627mm² | 28.0 BT | 5248 CCs* | 328 TMUs | 96 ROPs | 82 SMs | 1695 MHz |  24GB | 1024MB x 24 | GDDR6X | 384-bit | 936 GB/s | 350W
RTX 3080    | GA102-200 |  8nm | 627mm² | 28.0 BT | 4352 CCs* | 272 TMUs | 88 ROPs | 68 SMs | 1710 MHz |  10GB | 1024MB x 10 | GDDR6X | 320-bit | 760 GB/s | 320W
RTX 2080 Ti | TU102-300 | 12nm | 754mm² | 18.6 BT | 4352 CCs  | 272 TMUs | 88 ROPs | 68 SMs | 1635 MHz |  11GB | 1024MB x 11 | GDDR6  | 352-bit | 616 GB/s | 250W
RTX 2080 S  | TU104-450 | 12nm | 545mm² | 13.6 BT | 3072 CCs  | 192 TMUs | 64 ROPs | 48 SMs | 1815 MHz |   8GB | 1024MB x 8  | GDDR6  | 256-bit | 496 GB/s | 250W
RTX 2080    | TU104-400 | 12nm | 545mm² | 13.6 BT | 2944 CCs  | 184 TMUs | 64 ROPs | 46 SMs | 1710 MHz |   8GB | 1024MB x 8  | GDDR6  | 256-bit | 448 GB/s | 215W
GTX 1080 Ti | GP102-350 | 16nm | 471mm² | 12.0 BT | 3584 CCs  | 224 TMUs | 88 ROPs | 28 SMs | 1582 MHz |  11GB | 1024MB x 11 | GDDR5X | 352-bit | 484 GB/s | 250W
GTX 1080    | GP104-400 | 16nm | 314mm² |  7.2 BT | 2560 CCs  | 160 TMUs | 64 ROPs | 20 SMs | 1733 MHz |   8GB | 1024MB x 8  | GDDR5X | 256-bit | 320 GB/s | 180W
GTX 980 Ti  | GM200-310 | 28nm | 601mm² |  8.0 BT | 2816 CCs  | 176 TMUs | 96 ROPs | 22 SMs | 1076 MHz |   6GB |  512MB x 12 | GDDR5  | 384-bit | 336 GB/s | 250W
GTX 980     | GM204-400 | 28nm | 398mm² |  5.2 BT | 2048 CCs  | 128 TMUs | 64 ROPs | 16 SMs | 1216 MHz |   4GB |  512MB x 8  | GDDR5  | 256-bit | 224 GB/s | 165W
GTX 780 Ti  | GK110-425 | 28nm | 551mm² |  7.1 BT | 2880 CCs  | 240 TMUs | 48 ROPs | 15 SMs |  928 MHz |   3GB |  256MB x 12 | GDDR5  | 384-bit | 336 GB/s | 250W
GTX 780     | GK110-300 | 28nm | 551mm² |  7.1 BT | 2304 CCs  | 192 TMUs | 48 ROPs | 12 SMs |  900 MHz |   3GB |  256MB x 12 | GDDR5  | 384-bit | 288 GB/s | 250W
GTX 680     | GK104-400 | 28nm | 294mm² |  3.5 BT | 1536 CCs  | 128 TMUs | 32 ROPs |  8 SMs | 1058 MHz |   2GB |  256MB x 8  | GDDR5  | 256-bit | 192 GB/s | 200W
GTX 580     | GF110-375 | 40nm | 520mm² |  3.0 BT |  512 CCs  |  64 TMUs | 48 ROPs | 16 SMs |  772 MHz | 1.5GB |  128MB x 12 | GDDR5  | 384-bit | 192 GB/s | 250W
   * Can execute twice as many FP32 calculations per clock as the previous generation when executing only FP32 operations,
   hence marketed as 10496 and 8704 CUDA cores


*ASUS*


Rich (BB code):


   *Turbo* Blower  *Non-A* | 1 Fan | 2    Slot | 268mm | LED | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN 4718017135061 | PN 90YV0C40-M0NM00
   *Dual*          *Non-A* | 2 Fan | 2.7  Slot | 268mm | N/A | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN 4718017151030 | PN 90YV0C43-M0NM00
   *Dual Adv*            | 2 Fan | 2.7  Slot | 268mm | N/A | 16 Power Phases | 1575 MHz Boost | 260/312 W | Reference PCB | EAN 4718017138789 | PN 90YV0C42-M0NM00
   *Dual OC*             | 2 Fan | 2.7  Slot | 268mm | N/A | 16 Power Phases | 1635 MHz Boost | 260/312 W | Reference PCB | EAN 4718017133005 | PN 90YV0C41-M0NA00
   *ROG Strix*     *Non-A* | 3 Fan | 2.7  Slot | 305mm | RGB | 19 Power Phases | 1545 MHz Boost | 250/280 W |    Custom PCB | EAN 4718017149563 | PN 90YV0CC2-M0NM00
   *ROG Strix Adv*       | 3 Fan | 2.7  Slot | 305mm | RGB | 19 Power Phases | 1575 MHz Boost | 250/313 W |    Custom PCB | EAN 4718017149532 | PN 90YV0CC1-M0NM00
   *ROG Strix OC*        | 3 Fan | 2.7  Slot | 305mm | RGB | 19 Power Phases | 1650 MHz Boost | 260/325 W |    Custom PCB | EAN 4718017133005 | PN 90YV0CC0-M0NM00
   *ROG Strix OC White*  | 3 Fan | 2.7  Slot | 305mm | RGB | 19 Power Phases | 1770 MHz Boost | 300/360 W |    Custom PCB | EAN 4718017562881 | PN 90YV0DY3-M0NM00
   *ROG Matrix* Hybrid   | 3 Fan | 2.7  Slot | 310mm | RGB | 19 Power Phases | 1800 MHz Boost | 300/360 W |    Custom PCB | EAN 4718017290371 | PN 90YV0CC4-M0NA00

*EVGA* - *XC Black Edition/XC/XC Ultra BIOS Update*


Rich (BB code):


   *Gaming*        *Non-A* | 1 Fan | 2    Slot | 268mm | RGB | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN 4250812430878 | PN 11G-P4-2280-KR
   *Gaming*        *Non-A* | 1 Fan | 2    Slot | 268mm | RGB | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN 4250812429766 | PN 11G-P4-2380-KR
   *Black Edition* *Non-A* | 2 Fan | 2    Slot | 268mm | RGB | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN 4250812430861 | PN 11G-P4-2281-KR
   *XC Black Edition*    | 2 Fan | 2    Slot | 268mm | RGB | 16 Power Phases | 1560 MHz Boost | 260/338 W | Reference PCB | EAN 4250812430595 | PN 11G-P4-2282-KR
   *XC*                  | 2 Fan | 2    Slot | 268mm | RGB | 16 Power Phases | 1635 MHz Boost | 260/338 W | Reference PCB | EAN 4250812429551 | PN 11G-P4-2382-KR
   *XC Ultra*            | 2 Fan | 2.75 Slot | 268mm | RGB | 16 Power Phases | 1650 MHz Boost | 260/338 W | Reference PCB | EAN 4250812429544 | PN 11G-P4-2383-KR
   *XC* Hybrid           | 1 Fan | 2    Slot | 268mm | RGB | 16 Power Phases | 1635 MHz Boost | 260/338 W | Reference PCB | EAN 4250812429803 | PN 11G-P4-2384-KR
   *XC2 Ultra*           | 2 Fan | 2.75 Slot | 268mm | RGB | 16 Power Phases | 1650 MHz Boost | 260/338 W | Reference PCB | EAN 4250812429735 | PN 11G-P4-2387-KR
   *FTW3*                | 3 Fan | 2.75 Slot | 302mm | RGB | 19 Power Phases | 1545 MHz Boost | 300/373 W |    Custom PCB | EAN 4250812429728 | PN 11G-P4-2483-KR
   *FTW3 Ultra*          | 3 Fan | 2.75 Slot | 302mm | RGB | 19 Power Phases | 1755 MHz Boost | 300/373 W |    Custom PCB | EAN 4250812429711 | PN 11G-P4-2487-KR
   *FTW3 Ultra Shield*   | 3 Fan | 2.75 Slot | 302mm | RGB | 19 Power Phases | 1755 MHz Boost | 300/373 W |    Custom PCB | EAN 4250812435996 | PN 11G-P4-2487-KS
   *FTW3 Ultra* Hybrid   | 1 Fan | 2    Slot | 291mm | RGB | 19 Power Phases | 1755 MHz Boost | 300/373 W |    Custom PCB | EAN 4250812429797 | PN 11G-P4-2484-KR
   *FTW3 Ultra* WB       | Water | 2    Slot | 291mm | RGB | 19 Power Phases | 1755 MHz Boost | 300/373 W |    Custom PCB | EAN 4250812429780 | PN 11G-P4-2489-KR
   *Kingpin* Hybrid      | 1 Fan | 2    Slot | 291mm | N/A | 19 Power Phases | 1770 MHz Boost | 361/520 W |    Custom PCB | EAN 4250812429773 | PN 11G-P4-2589-KR

*GAINWARD* - *Phoenix BIOS Update* - *Phoenix GS BIOS Update*


Rich (BB code):


   *Phoenix*       *Non-A* | 3 Fan | 2.5  Slot | 292mm | RGB | 16 Power Phases | 1545 MHz Boost | 250/310 W | Reference PCB | EAN 4260183364115 | PN 426018336-4115
   *Phoenix GS*          | 3 Fan | 2.5  Slot | 292mm | RGB | 16 Power Phases | 1650 MHz Boost | 260/330 W | Reference PCB | EAN 4260183364122 | PN 426018336-4122

*GALAX | KFA2*


Rich (BB code):


   *Dual Black*    *Non-A* | 2 Fan | 2    Slot | 268mm | RGB | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN 4895147132686 | PN 28IULBUCT4NK
   *Dual Black*    *Non-A* | 2 Fan | 2    Slot | 268mm | RGB | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN 4895147131658 | PN 28IULBUCT4ND
   *Dual White*    *Non-A* | 2 Fan | 2    Slot | 268mm | RGB | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN 4895147133058 | PN 28IULBUCT4FK
   *Dual White*    *Non-A* | 2 Fan | 2    Slot | 268mm | RGB | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN 4895147133003 | PN 28IULBUCT4KK
   *OC*                  | 2 Fan | 2    Slot | 268mm | RGB | 16 Power Phases | 1620 MHz Boost | 300/380 W | Reference PCB | EAN 4895147131184 | PN 28IULBUCT4OK
   *OC*                  | 2 Fan | 2    Slot | 268mm | RGB | 16 Power Phases | 1620 MHz Boost | 300/380 W | Reference PCB | EAN 4895147131115 | PN 28IULBUCT4OC
   *OC White*            | 2 Fan | 2    Slot | 268mm | RGB | 16 Power Phases | 1635 MHz Boost | 300/380 W | Reference PCB | EAN 4895147131375 | PN 28IULBUCU4KW
   *SG*            *Non-A* | 3 Fan | 2    Slot | 308mm | RGB | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN 4895147132983 | PN 28IULBUCT2CK
   *SG*                  | 3 Fan | 2    Slot | 308mm | RGB | 16 Power Phases | 1635 MHz Boost |         W | Reference PCB | EAN 4895147131092 | PN 28IULBUCT2GC
   *HOF*                 | 3 Fan | 2.75 Slot | 330mm | RGB | 19 Power Phases | 1635 MHz Boost | 300/450 W |    Custom PCB | EAN 4895147132785 | PN 28IULBUCV6DK
   *HOF*                 | 3 Fan | 2.75 Slot | 330mm | RGB | 19 Power Phases | 1635 MHz Boost | 300/450 W |    Custom PCB | EAN 4895147132723 | PN 28IULBUCV6DH
   *HOF OC Lab* WB       | Water | 2    Slot | 320mm | N/A | 19 Power Phases | 1755 MHz Boost | 300/450 W |    Custom PCB | EAN 4895147131467 | PN 28IULBUCU6OL

*GIGABYTE* - *Gaming OC/Windforce OC BIOS Update*


Rich (BB code):


   *Turbo* Blower  *Non-A* | 1 Fan | 2    Slot | 268mm | N/A | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN 4719331303822 | PN GV-N208TTURBO-11GC
   *Turbo OC* Blower     | 1 Fan | 2    Slot | 268mm | N/A | 16 Power Phases | 1620 MHz Boost |         W | Reference PCB | EAN 4719331303686 | PN GV-N208TTURBO OC-11GC
   *AORUS Turbo* Blower  | 1 Fan | 2    Slot | 268mm | N/A | 16 Power Phases | 1650 MHz Boost | 260/300 W | Reference PCB | EAN 4719331303921 | PN GV-N208TAORUS T-11GC
   *Windforce*     *Non-A* | 3 Fan | 2    Slot | 280mm | RGB | 16 Power Phases | 1545 MHz Boost | 250/310 W | Reference PCB | EAN 4719331303815 | PN GV-N208TWF3-11GC
   *Windforce OC*        | 3 Fan | 2    Slot | 280mm | RGB | 16 Power Phases | 1620 MHz Boost | 260/366 W | Reference PCB | EAN 4719331303570 | PN GV-N208TWF3OC-11GC
   *Gaming OC*           | 3 Fan | 2.5  Slot | 287mm | RGB | 16 Power Phases | 1650 MHz Boost | 300/366 W | Reference PCB | EAN 4719331303563 | PN GV-N208TGAMING OC-11GC
   *AORUS*               | 3 Fan | 2.75 Slot | 290mm | RGB | 19 Power Phases | 1695 MHz Boost | 275/366 W |    Custom PCB | EAN 4719331303709 | PN GV-N208TAORUS-11GC
   *AORUS X*             | 3 Fan | 2.75 Slot | 290mm | RGB | 19 Power Phases | 1770 MHz Boost | 300/366 W |    Custom PCB | EAN 4719331303693 | PN GV-N208TAORUS X-11GC
   *AORUS X* Hybrid      | 0 Fan | 2    Slot | 290mm | RGB | 19 Power Phases | 1770 MHz Boost | 300/366 W |    Custom PCB | EAN 4719331303891 | PN GV-N208TAORUSX W-11GC
   *AORUS X* WB          | Water | 2    Slot | 290mm | RGB | 19 Power Phases | 1770 MHz Boost | 300/366 W |    Custom PCB | EAN 4719331303761 | PN GV-N208TAORUSX WB-11GC

*INNO3D*


Rich (BB code):


   *Jet* Blower    *Non-A* | 1 Fan | 2    Slot | 268mm | N/A | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN 0835168001169 | PN N208T1-11D6-1770705
   *Jet* Blower    *Non-A* | 1 Fan | 2    Slot | 268mm | N/A | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN 0835168000346 | PN N208T1-11D6-1150022
   *X2*                  | 2 Fan | 2    Slot | 268mm | RGB | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN 0835168000247 | PN N208T2-11D6-1150633
   *X2 OC*               | 2 Fan | 2    Slot | 268mm | RGB | 16 Power Phases | 1590 MHz Boost | 260/290 W | Reference PCB | EAN 4895223100141 | PN N208T2-11D6X-1150633
   *X3 OC*               | 3 Fan | 2    Slot | 272mm | RGB | 16 Power Phases | 1665 MHz Boost | 260/290 W | Reference PCB | EAN 4895223100158 | PN N208T3-11D6X-1150VA24
   *iChiLL Black* Hybrid | 0 Fan | 2    Slot | 272mm | RGB | 16 Power Phases | 1695 MHz Boost | 300/330 W | Reference PCB | EAN 0835168000216 | PN C208TB-11D6X-11500004
   *iChiLL Frostbite* WB | Water | 2    Slot | 268mm | RGB | 16 Power Phases | 1695 MHz Boost | 300/330 W | Reference PCB | EAN 0835168000186 | PN C208TB-11D6X-1150FROS

*MSI* - *Gaming Trio/Gaming X Trio/Sea Hawk EK X BIOS Update* *Unofficial (Source)*


Rich (BB code):


   *Ventus GP*     *Non-A* | 2 Fan | 2.5  Slot | 268mm | N/A | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN 4719072646288 | PN V371-088R
   *Ventus*        *Non-A* | 2 Fan | 2.5  Slot | 268mm | N/A | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN 4719072599300 | PN V371-040R
   *Ventus GP OC*        | 2 Fan | 2.5  Slot | 268mm | N/A | 16 Power Phases | 1635 MHz Boost | 260/290 W | Reference PCB | EAN 4719072695729 | PN V371-231R
   *Ventus OC*           | 2 Fan | 2.5  Slot | 268mm | N/A | 16 Power Phases | 1635 MHz Boost | 260/290 W | Reference PCB | EAN 4719072596828 | PN V371-002R
   *Duke*          *Non-A* | 3 Fan | 2.7  Slot | 314mm | RGB | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN               | PN 
   *Duke OC*             | 3 Fan | 2.7  Slot | 314mm | RGB | 16 Power Phases | 1665 MHz Boost | 260/290 W | Reference PCB | EAN 4719072596903 | PN V371-011R
   *Gaming Trio*         | 3 Fan | 2.7  Slot | 327mm | RGB | 17 Power Phases | 1635 MHz Boost | 300/406 W |    Custom PCB | EAN 4719072603571 | PN V371-053R
   *Gaming X Trio*       | 3 Fan | 2.7  Slot | 327mm | RGB | 17 Power Phases | 1755 MHz Boost | 300/406 W |    Custom PCB | EAN 4719072596965 | PN V371-026R
   *Sea Hawk X* Hybrid   | 1 Fan | 2    Slot | 268mm | LED | 16 Power Phases | 1755 MHz Boost | 300/330 W | Reference PCB | EAN 4719072596910 | PN V371-008R
   *Sea Hawk X* WB       | Water | 2    Slot | 305mm | RGB | 17 Power Phases | 1755 MHz Boost | 300/330 W |    Custom PCB | EAN 4719072598419 | PN V371-029R
   *Lightning*           | 3 Fan | 3    Slot | 328mm | RGB | 19 Power Phases | 1575 MHz Boost | 350/380 W |    Custom PCB | EAN 4719072628383 | PN 
   *Lightning 10th*      | 3 Fan | 3    Slot | 328mm | RGB | 19 Power Phases | 1575 MHz Boost | 350/380 W |    Custom PCB | EAN               | PN
   *Lightning Z*         | 3 Fan | 3    Slot | 328mm | RGB | 19 Power Phases | 1770 MHz Boost | 350/380 W |    Custom PCB | EAN 4719072615512 | PN V377-001R

*NVIDIA*


Rich (BB code):


   *Founders Edition*    | 2 Fan | 2    Slot | 268mm | LED | 16 Power Phases | 1635 MHz Boost | 260/320 W | Reference PCB | EAN               | PN 900-1G150-2530-000

*PALIT* - *Dual BIOS Update* - *GamingPro BIOS Update* - *GamingPro OC BIOS Update*


Rich (BB code):


   *X* Blower      *Non-A* | 1 Fan | 2    Slot | 268mm | N/A | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN               | PN NE6208T019LC-150F
   *Dual*          *Non-A* | 2 Fan | 2.5  Slot | 292mm | RGB | 16 Power Phases | 1545 MHz Boost | 250/310 W | Reference PCB | EAN 4710636270154 | PN NE6208T020LC-150A
   *GamingPro*           | 2 Fan | 2.5  Slot | 292mm | RGB | 16 Power Phases | 1575 MHz Boost | 250/310 W | Reference PCB | EAN 4710636269974 | PN NE6208TT20LC-150A
   *GamingPro OC*        | 2 Fan | 2.5  Slot | 292mm | RGB | 16 Power Phases | 1650 MHz Boost | 260/330 W | Reference PCB | EAN 4710636269981 | PN NE6208TS20LC-150A

*PNY*


Rich (BB code):


   *Blower*        *Non-A* | 1 Fan | 2    Slot | 281mm | N/A | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN 0751492633053 | PN VCG2080T11BLPPB
   *Blower*        *Non-A* | 1 Fan | 2    Slot | 281mm | N/A | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN 0751492621722 | PN VCG2080T11BLMPB
   *XLR8 Gaming OC*      | 3 Fan | 2    Slot | 314mm | RGB | 16 Power Phases | 1635 MHz Boost | 260/300 W | Reference PCB | EAN 0751492621715 | PN VCG2080T11TFMPB-O

*ZOTAC*


Rich (BB code):


   *Blower*        *Non-A* | 1 Fan | 2    Slot | 268mm | N/A | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN 4895173617614 | PN ZT-T20810A-10P
   *Twin Fan*      *Non-A* | 2 Fan | 2    Slot | 268mm | RGB | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN 4895173617195 | PN ZT-T20810G-10P
   *Triple Fan*          | 3 Fan | 2.5  Slot | 308mm | RGB | 16 Power Phases | 1605 MHz Boost | 260/300 W | Reference PCB | EAN 4895173617072 | PN ZT-T20810F-10P
   *AMP*                 | 3 Fan | 2.5  Slot | 308mm | RGB | 16 Power Phases | 1665 MHz Boost | 260/300 W | Reference PCB | EAN 4895173617058 | PN ZT-T20810D-10P
   *AMP Maxx*            | 2 Fan | 2    Slot | 298mm | RGB | 20 Power Phases | 1665 MHz Boost | 260/291 W |    Custom PCB | EAN 4895173617560 | PN ZT-T20810H-10P
   *AMP Extreme Core*    | 3 Fan | 2.5  Slot | 308mm | RGB | 20 Power Phases | 1755 MHz Boost | 300/336 W |    Custom PCB | EAN 4895173617553 | PN ZT-T20810C-10P
   *AMP Extreme*         | 3 Fan | 2.5  Slot | 308mm | RGB | 20 Power Phases | 1815 MHz Boost | 300/336 W |    Custom PCB | EAN 4895173617546 | PN ZT-T20810B-10P
   *ArcticStorm* WB      | Water | 2    Slot | 295mm | RGB | 20 Power Phases | 1575 MHz Boost | 300/336 W |    Custom PCB | EAN 4895173618079 | PN ZT-T20810K-30P

*BENCHMARKS (Click SHOW)*



Spoiler



*Source: Guru3D, HardwareCanucks, Hardware.Info, HEXUS, Kitguru, PC Games Hardware, PC Perspective, SweClockers, TechPowerUp, TweakTown*



*OWNER'S (Click SHOW)*

If you would like to be added, post in the thread with the model name and picture(s) of the card(s) inside your case. _Last Updated: January 4, 2019_



Spoiler






Rich (BB code):


   *ASUS         | Turbo             | Water           | krizby*          < Pic #1, #2            
   *ASUS         | Dual OC           | Water           | DrunknFoo*       < Pic #1, #2            
   *ASUS         | Dual OC           | Water & Shunt   | Xeq54*           < Pic #1                
   *ASUS SLI     | Strix OC          | Water           | nycgtr*          < Pic #1

   *EVGA SLI     | XC                | Stock           | Barefooter*      < Pic #1                
   *EVGA         | XC                | Stock           | dboythagr8*      < Pic #1, #2, #3, #4    
   *EVGA         | XC                | Stock           | GraphicsWhore*   < Pic #1, #2, #3, #4    
   *EVGA         | XC                | Hybrid          | methadon36*      < Pic #1, #2            
   *EVGA         | XC Ultra          | Stock           | Menthol*         < Pic #1                
   *EVGA         | XC Ultra          | Stock           | nodicaL*         < Pic #1                
   *EVGA         | XC Ultra          | Stock           | pewpewlazer*     < Pic #1                
   *EVGA         | XC Ultra          | Stock           | raider89*        < Pic #1, #2            
   *EVGA         | XC Ultra          | Water           | Johnny_Utah*     < Pic #1                
   *EVGA         | XC Ultra          | Water           | kot0005*         < Pic #1, #2            
   *EVGA         | FTW3 Ultra        | Stock           | mafia97160*      < Pic #1                
   *EVGA         | FTW3 Ultra        | Stock           | Mike211*         < Pic #1, #2            
   *EVGA         | FTW3 Ultra        | Stock           | Snoopvelo*       < Pic #1

   *GIGABYTE     | Windforce OC      | Stock           | iPDrop*          < Pic #1                
   *GIGABYTE     | Windforce OC      | Water           | nycgtr*          < Pic #1                
   *GIGABYTE     | Gaming OC         | Stock           | Talon2016*       < Pic #1                
   *GIGABYTE     | Gaming OC         | Water           | Fiercy*          < Pic #1, #2, #3        
   *GIGABYTE     | AORUS X           | Stock           | pfinch*          < Pic #1                
   *GIGABYTE     | AORUS X WB        | Water           | Aurosonic*       < Pic #1, #2            
   *GIGABYTE     | AORUS X WB        | Water           | Coldmud*         < Pic #1                
   *GIGABYTE     | AORUS X WB SLI    | Water           | J7SC*            < Pic #1, #2            
   *GIGABYTE     | AORUS X WB        | Water           | kot0005*         < Pic #1                
   *GIGABYTE     | AORUS X WB        | Water           | profundido*      < Pic #1

   *KFA2         | OC                | Stock           | domrockt*        < Pic #1, #2

   *MSI          | Duke OC           | Stock           | michaelrw*       < Pic #1                
   *MSI          | Gaming X Trio     | Stock           | cstkl1*          < Pic #1                
   *MSI          | Gaming X Trio     | Stock           | nycgtr*          < Pic #1, #2, #3

   *NVIDIA SLI   | Founders Edition  | Stock           | Baasha*          < Pic #1                
   *NVIDIA SLI   | Founders Edition  | Stock           | Ferreal*         < Pic #1                
   *NVIDIA       | Founders Edition  | Stock           | l88bastar*       < Pic #1, #2, #3        
   *NVIDIA       | Founders Edition  | Stock           | Okt00*           < Pic #1                
   *NVIDIA       | Founders Edition  | Stock           | Pepillo*         < Pic #1                
   *NVIDIA SLI   | Founders Edition  | Water           | axiumone*        < Pic #1                
   *NVIDIA       | Founders Edition  | Water           | dadunn1700*      < Pic #1                
   *NVIDIA       | Founders Edition  | Water           | Esenel*          < Pic #1, #2            
   *NVIDIA       | Founders Edition  | Water           | Glerox*          < Pic #1                
   *NVIDIA SLI   | Founders Edition  | Water           | GosuPl*          < Vid #1                
   *NVIDIA SLI   | Founders Edition  | Water           | jabtn2*          < Pic #1, #2, #3        
   *NVIDIA SLI   | Founders Edition  | Water           | kossiewossie*    < Pic #1, #2            
   *NVIDIA       | Founders Edition  | Water           | KShirza1*        < Pic #1, #2, #3        
   *NVIDIA       | Founders Edition  | Water           | Pilz*            < Pic #1, #2            
   *NVIDIA       | Founders Edition  | Water           | torqueroll*      < Pic #1

   *PALIT        | Dual              | Custom & Shunt  | iRSs*            < Pic #1                
   *PALIT        | GamingPro OC      | Water           | domrockt*        < Pic #1                
   *PALIT        | GamingPro OC      | Water & Shunt   | mollet*          < Pic #1                
   *PALIT        | GamingPro OC      | Water           | kx11*            < Pic #1, #2

   *ZOTAC        | AMP               | Stock           | jepz*            < Pic #1                
   *ZOTAC        | AMP               | Stock           | Sionel*          < Pic #1                
   *ZOTAC        | AMP               | Water           | carlhil2*        < Pic #1                
   *ZOTAC SLI    | AMP               | Water           | Djreversal*      < Pic #1                
   *ZOTAC SLI    | AMP               | Water           | Hanks552*        < Pic #1




*TECHPOWERUP | GPU-Z *

*Download TechPowerUp GPU-Z*

*NVIDIA | NVFLASH *

*Download NVIDIA NVFlash* *Official* 
*Download NVIDIA NVFlash* *Modified to allow flashing of FE*

*BIOS | ROM *

*> TechPowerUp BIOS Collection < Verified *

*> TechPowerUp BIOS Collection < Unverified*

*Important: Starting in the second half of 2019, many cards ship with a newer XUSB firmware version ID, which prevents many BIOSes in the TechPowerUp BIOS Collection from being flashed to the card through NVFlash. Be aware when purchasing a new card that you might not be able to flash a higher-power-limit BIOS onto it. Example: an MSI Gaming X Trio purchased in Q3 2018 shipped with the 0x70060003 firmware (Build Date: 2018-08-29); the same MSI card purchased a year later in Q3 2019 shipped with firmware 0x70100003 (Build Date: 2018-12-28) and also used a different EEPROM (Winbond instead of ISSI). The BIOS and firmware can, however, still be overwritten by force using an external programmer/test clip, bypassing NVFlash entirely.*

*Download Gigabyte RTX 2080 Ti Windforce (Non-A) Reference PCB (2x8-Pin) 260W x 15% Power Target BIOS (300W)* *Official (Source)*

*└ Compatible with all TU102-300 (1E04) Non-A cards ┘ *

*Download Palit RTX 2080 Ti Dual (Non-A) Reference PCB (2x8-Pin) 250W x 124% Power Target BIOS (310W)* *Official (Source)*

*└ Compatible with all TU102-300 (1E04) Non-A cards ┘ *

*Download GALAX/KFA2 RTX 2080 Ti OC Reference PCB (2x8-Pin) 300W x 127% Power Target BIOS (380W)* *Official (Source)*

*└ Compatible with all TU102-300A (1E07) cards ┘ *

*Download MSI RTX 2080 Ti Gaming X Trio Custom PCB (2x8-Pin, 1x6-Pin) 300W x 135% Power Target BIOS (406W)* *Unofficial (Source)*

*└ Compatible with Gaming Trio, Gaming X Trio and Sea Hawk EK X ┘ *

*Download ASUS RTX 2080 Ti Strix OC Custom PCB (2x8-Pin) 260W x 231% Power Target BIOS (600W)* *Unofficial (Source)*

*└ Compatible with all TU102-300A (1E07) cards ┘ *

*XOC*

*Download ASUS RTX 2080 Ti Strix OC Custom PCB (2x8-Pin) 1000W x 100% Power Target BIOS (1000W)* *Unofficial (Source)*

*└ Power Limit: Unlimited | Overvoltage: No (up to 1.093V) | Voltage/Frequency Curve Editor: Disabled | 2D Clocks: Core (Yes) / Memory (No) | Fan Adjustable: Yes ┘ *

*Download MSI RTX 2080 Ti Lightning Z Custom PCB (3x8-Pin) 1000W x 100% Power Target BIOS (1000W)* *Unofficial (Source)*

*└ Power Limit: Unlimited | Overvoltage: No (up to 1.093V) | Voltage/Frequency Curve Editor: Disabled | 2D Clocks: Core (No) / Memory (No) | Fan Adjustable: Yes ┘ *

*Download EVGA RTX 2080 Ti Kingpin Custom PCB (3x8-Pin) 2000W x 100% Power Target BIOS (2000W)* *Official (Source)*

*└ Power Limit: Unlimited | Overvoltage: Yes (up to 1.125V) | Voltage/Frequency Curve Editor: Disabled | 2D Clocks: Core (Yes) / Memory (Yes) | Fan Adjustable: No ┘ *

*Download Galax RTX 2080 Ti HOF OC Lab Custom PCB (3x8-Pin) 2000W x 100% Power Target BIOS (2000W)* *Unofficial (Source)*

*└ Power Limit: Unlimited | Overvoltage: Yes (up to 1.125V) | Voltage/Frequency Curve Editor: Enabled | 2D Clocks: Core (Yes) / Memory (Yes) | Fan Adjustable: Yes ┘ *

*FLASH | GUIDE (Click SHOW)*



Spoiler













*└ Step 01 of 27 - Download NVFlash ┘ *
*└ Step 02 of 27 - Downloads Folder ┘ *
*└ Step 03 of 27 - Open Zip File ┘ *
*└ Step 04 of 27 - Copy Files ┘ *
*└ Step 05 of 27 - Create New Folder ┘ *
*└ Step 06 of 27 - Name Folder ┘ *
*└ Step 07 of 27 - Paste Files ┘ *
*└ Step 08 of 27 - Installation Successful ┘ *
*└ Step 09 of 27 - Find BIOS ┘ *
*└ Step 10 of 27 - Download BIOS ┘ *
*└ Step 11 of 27 - Name BIOS ┘ *
*└ Step 12 of 27 - Copy or Cut BIOS ┘ *
*└ Step 13 of 27 - Paste BIOS ┘ *
*└ Step 14 of 27 - Download Successful ┘ *
*└ Step 15 of 27 - Before Flash ┘ *
*└ Step 16 of 27 - Maximum Power Limit (330W) ┘ *
*└ Step 17 of 27 - Starting Command Prompt (Administrator) ┘ *

*chdir C:\nvflash*
*└ Step 18 of 27 - Changing Directory ┘ *

*nvflash64 --protectoff*
*└ Step 19 of 27 - Disable Flash Protection ┘ *

*nvflash64 --save Partner2080TiModel.rom*
*└ Step 20 of 27 - (Optional) BIOS Backup ┘ *
*└ Step 21 of 27 - BIOS Saved ┘ *

*nvflash64 -6 Partner2080TiModel.rom*
*Y*
*└ Step 22 of 27 - Flash BIOS ┘ *

*Y*
*└ Step 23 of 27 - Confirm Update ┘ *

*exit*
*└ Step 24 of 27 - Flash Completed ┘ *
*└ Step 25 of 27 - After Flash ┘ *
*└ Step 26 of 27 - Maximum Power Limit (380W) ┘ *

*chdir C:\nvflash*
*nvflash64 --protecton*
*exit*
*└ Step 27 of 27 - (Optional) Enable Flash Protection ┘ *
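For quick reference, here is the full command sequence from the steps above in one place; nothing new is introduced, these are the exact commands shown step by step. Run them from an elevated Command Prompt, and substitute your own .rom filename.

Code:


   chdir C:\nvflash
   nvflash64 --protectoff
   nvflash64 --save Partner2080TiModel.rom
   nvflash64 -6 Partner2080TiModel.rom
   nvflash64 --protecton
   exit

The flash step (-6) asks for confirmation twice; answer Y both times. The --save backup and the final --protecton are optional.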



*OVERCLOCKING | TOOLS *

*Download ASUS GPUTweak II*

*Download EVGA Precision X1*

*Download Gainward EXPERTool*

*Download Galax/KFA2 Xtreme Tuner Plus*

*Download Gigabyte AORUS Engine*

*Download Inno3D TuneIT*

*Download MSI Afterburner*

*Download Palit ThunderMaster*

*Download PNY Velocity X*

*Download Zotac FireStorm*

*QUESTIONS | FAQ *
_Last Updated: June 2019_
*Question:* What does Non-A mean?
*Answer:* There are two GPU variants: *TU102-300* (*1E04*) and *TU102-300A* (*1E07*). Factory overclocking is prohibited on the former; it has a boost clock of *1545 MHz* and a maximum power limit of *280-310W*. The latter chip ships with varying factory overclocks and power limits of up to 520W. Flashing a 300A-based BIOS onto a 300 GPU, or vice versa, is *not possible*. Manual overclocking is possible on both.

*Question:* How do I know if I have a 300 or 300A card?
*Answer:* Check the list above, though it may be inaccurate: early batches of the ASUS Turbo card shipped with the A chip, while recent reports indicate later batches use the correct 300 chip, matching the card's advertised lack of a factory overclock. To be certain, check the *Device ID* in the main window of *GPU-Z*: *1E04* means the 300 chip, *1E07* means the "binned" 300A chip.
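The Device ID check boils down to a two-entry lookup. A minimal sketch in Python (the function and dictionary names are mine; the IDs and descriptions come from the answers above):

```python
# Map GPU-Z Device IDs to the two TU102 variants described above.
# 0x1E04 = TU102-300 ("Non-A"), 0x1E07 = TU102-300A (binned).
TU102_VARIANTS = {
    0x1E04: "TU102-300 (Non-A, no factory OC, 280-310W max power limit)",
    0x1E07: "TU102-300A (binned, factory OC allowed, higher power limits)",
}

def identify_variant(device_id: int) -> str:
    """Return the chip variant for a given Device ID, e.g. as read from GPU-Z."""
    return TU102_VARIANTS.get(device_id, "unknown (not a 2080 Ti TU102 ID)")
```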

*Question:* How do I know what power limit my card has?
*Answer:* In *GPU-Z*, click the Advanced tab, then General, and finally choose *NVIDIA BIOS*; under the Power Limit section you will see Default and Maximum.

*Question:* What does the power limit actually do?
*Answer:* Several years ago we had full voltage control; then NVIDIA introduced power and temperature limits. Once a 300 card reaches its maximum 280-310W power draw, it restricts the voltage. For instance, my 280W card could not exceed 0.913V while running a game at 5K resolution, which meant it could not reach a core clock above 1860MHz even though the GPU has a hard voltage limit of 1.093V, leaving it far from its true potential.

*Question:* How big is the performance gap between the two?
*Answer:* Simply put, a higher power limit allows a higher voltage, which in turn enables a higher overclock. While my 280W card could only reach 1860MHz at 0.913V, my other 330W card achieved 1980MHz at 1.018V, both at 75°C running the same game at 5K resolution. 
+18% power limit → +11.5% voltage → +6.5% core clock. 
In further testing, trying to score as high as possible in 'UNIGINE Superposition 1080p Extreme' at watercooled temperatures, the 280W card (0.913V at 43°C) scored *9188* (69 FPS), while the A card flashed to 380W (1.068V at 51°C) scored *10306* (77 FPS). 
+36% power limit → +17% voltage → +12% performance. 
At the time of writing, the least expensive A-chip card carries a mere 4.4% price premium. On air cooling with a 330W power limit you can get up to 6.5% better performance, justifying the slight price bump; on water, or on air at maximum fan speed, the gain scales up to 12%.
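The percentage gains quoted above follow directly from the raw numbers in this post; a quick sanity check (the helper function is mine, the figures are from the answer above):

```python
def pct_gain(new: float, old: float) -> float:
    """Percentage increase of `new` over `old`, rounded to one decimal place."""
    return round((new / old - 1) * 100, 1)

# Air-cooled comparison, same game at 5K: 280W card vs 330W card.
power_air = pct_gain(330, 280)       # ~+17.9% power limit (quoted as +18%)
voltage   = pct_gain(1.018, 0.913)   # +11.5% voltage
clock     = pct_gain(1980, 1860)     # +6.5% core clock

# Watercooled Superposition 1080p Extreme: stock 280W vs 380W flash.
power_wc  = pct_gain(380, 280)       # ~+35.7% power limit (quoted as +36%)
score     = pct_gain(10306, 9188)    # ~+12.2% performance
```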

*Question:* Is it safe to run the card at 380W?
*Answer:* Yes. As shown above, even with the higher power limit the voltage could not exceed 1.068V; the hardware limit is still 1.093V, and reaching that voltage to unlock the card's full overclocking potential would require surpassing 400W. There is also a common misconception that the 8-pin connectors are capped at 150W each; in actuality the connector does not define the power cap and is capable of more than 300W, giving every 2x8-Pin card *over 600W* of headroom.

*Question:* Is it safe to run the reference PCB at 1.093V?
*Answer:* Yes, as pointed out by *Buildzoid* during his analysis of the reference PCB: "If you're on water cooling or air cooling you have nothing to worry about, there is not much room for board partners to make a better PCB, for daily usage you can just buy a reference card and knock yourself out, really solid Vcore VRM, massive overkill on the Memory VRM, this is a really, really, really solid card". To put things into perspective, the reference board can handle above 600W, and the GPU will have hit the 1.093V limit long before that point, so no damage can be done.

*Question:* What about shunt mod?
*Answer:* Shunt modding your card will *increase the power limit* by bypassing the software limit, effectively guaranteeing 1.093V. However, it is a high-risk procedure that will likely void your warranty. I have no experience removing a shunt mod, so I cannot say how difficult that is. In short: not recommended unless you know exactly what you're doing.

*Question:* Do the custom PCB cards such as the ROG Strix and FTW3 overclock better than the reference PCB cards?
*Answer:* For daily use, no; the power limit is still the main limiting factor when overclocking. The 1.093V limit is reached long before the additional power phases would yield a higher overclock.

*Question:* What about the temperature limit?
*Answer:* This is where the custom PCB cards actually shine, not because of the extra power phases but because of their air cooling capacity. For instance, the massive MSI Gaming X Trio, the second largest card with its *327mm length, 2.7-slot form factor and 3 fans*, kept the temperature at only 51°C (124°F) just above 1900MHz, locked to 0.900V with the fans set to 100%.
Meanwhile, the Palit Dual, a *292mm 2.5-slot dual-fan card*, barely managed 60°C (140°F), even after replacing the TIM with TG Kryonaut.
That temperature delta of 9°C (16°F) is one step higher on the clock curve, meaning 15MHz faster (not even 1%), which does not matter for daily use, yet it is the polar opposite when chasing a high benchmark score.
The real utility I found for these massive coolers is near-silent gaming: the *MSI Gaming X Trio* stayed at 75°C running 1980MHz (1.043V) on the core while gaming at 5K resolution, with the fan speed at a mere 45% (1530/1220 RPM).
It made less noise than the 4x 120mm 850RPM radiator fans cooling my CPU, so we are getting watercooling noise levels from an air-cooled card running a higher clock than the fastest factory overclocked card. If that is not impressive, I don't know what is.

*Question:* My card reaches 2200MHz at just 1.000V, am I lucky?
*Answer:* No. When the GPU is not fully utilized it can freely go up to 1.093V, since it is nowhere near the power limit. To find your true maximum clock for your cooling (temperature), you need to run the most demanding benchmarks or games at up to 8K resolution, which results in low FPS and high VRAM usage.
I have seen many people running *Time Spy* complain about hitting the power limit specifically in Graphics Test 2; this is because Time Spy is a 1440p benchmark that does not max out the GPU throughout its tests.
To really bring your card to its knees, run Time Spy Extreme (4K), which is far more demanding on the hardware but is sadly locked behind a paywall. My recommendation is to instead run the *UNIGINE Superposition* 8K Optimized preset.

*Question:* How much performance will I gain by water cooling my card?
*Answer:* A lower temperature lets the GPU raise the core clock in steps of 15MHz, roughly one step per 10°C (18°F) drop, so keeping your card at 40°C (104°F) instead of 75°C (167°F) is about 4 steps, therefore 60MHz higher, which translates to roughly 3% higher performance. This is of course only possible if you do not hit the power limit first!
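That rule of thumb can be sketched as a tiny estimator (the 15MHz-per-~10°C step is the estimate from the answer above, not a published spec; real Turing boost behavior varies per card and BIOS):

```python
def clock_gain_mhz(temp_hot, temp_cool, step_mhz=15, step_deg=10):
    """Estimate extra boost clock from lowering GPU temperature.

    Assumes one ~15MHz boost step per ~10°C drop, per the rule
    of thumb above; treat the result as a rough estimate only.
    """
    steps = round((temp_hot - temp_cool) / step_deg)
    return steps * step_mhz

gain = clock_gain_mhz(75, 40)  # air (75°C) vs water (40°C)
print(f"+{gain} MHz")          # +60 MHz (about 4 steps)
print(f"~{gain / 1900 * 100:.1f}% at 1900MHz")  # ~3.2%
```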

*Question:* Are the more expensive cards binned (hand-picked)?
*Answer:* There is no evidence supporting that. For example, a Galax OC reference PCB 380W card could very well overclock higher than an EVGA FTW3 custom PCB 373W card; it is mere chance. This is why there is no point in choosing an FTW3 over the cheapest A GPU if you are going to water cool it: they will have near-identical performance if you use the Galax 380W BIOS on both.

*Question:* So which card should I buy?
*Answer:* The first thing to look at is the power limit, which rules out any non-A GPUs, as they are all locked to 280W and cannot be modified. The next thing is the cooling. The obvious answer would be a hybrid variant featuring a closed-loop liquid cooler, but unfortunately those tend to be far more expensive.
If you are looking for a GPU to cool with a water block, it does not really matter which you get as long as it is a reference PCB variant; there is no need for a custom one.
Finally, we have the air-cooled GPUs: generally the bigger the better, and triple-fan is recommended over dual-fan, but price is an issue here too, since the large custom PCB cards tend to cost a lot more.
A good compromise would be the 2 to 2.5 slot triple-fan reference PCB variants, such as: *Gainward Phoenix GS*, *Gigabyte Windforce OC*, *Inno3D X3 OC*, *MSI Duke OC*, *PNY XLR8 OC*, *Zotac Triple Fan* and *Zotac AMP*.
The GPU with the highest power limit among these is the Gigabyte Windforce OC at 366W, but all of them can safely be flashed with the Galax 380W BIOS.


----------



## Madness11

I will order an Asus or reference card ) Hope I get 40%+ more than my 1080 Ti in non-ray-tracing games)


----------



## nodicaL

I got my Founders Preorder in!
Let’s go!

980 Ti needs to retire


----------



## Cacophony

pre-ordered wooot


----------



## Madness11

Guys, is the official price 1199?? or 999?


----------



## GraphicsWhore

Pre-ordered the EVGA on Amazon.

I'm custom loop though, so will need to wait for a block. I'm almost thinking of cancelling and waiting until a block is actually available to buy but then I'm worried about the damn thing not being available anywhere.

ANY info on blocks for these things yet?


----------



## ibb27

Madness11 said:


> Guys, is the official price 1199?? or 999?


The FE variant is the OC version at $1,199; the "non-FE" Ti variant is $999.


----------



## rush2049

I officially have a pre-order for an nvidia founders edition.


----------



## zhrooms

Pre-ordered the MSI GeForce RTX 2080 Ti Gaming X Trio. But might switch to an EVGA if I can find the right model.

*Edit:* Pictures after getting the card. Click *here* or *here* for many more!


----------



## Mullinz

Anyone else find it interesting that NV-Link is supported for these cards? Will this have any significant benefit over SLI apart from bandwidth? Or is it something developers will still have to explicitly support?


----------



## ibb27

Mullinz said:


> Anyone else find it interesting that NV-Link is supported for these cards? Will this have any significant benefit over SLI apart from bandwidth? Or is it something developers will still have to explicitly support?


No one will have an answer until reviews pop up on the internet.


----------



## GraphicsWhore

zhrooms said:


> Pre-ordered MSI GeForce RTX 2080 Ti 11GB GAMING X TRIO. But if an EVGA Founders Edition shows up at a retailer I'll cancel it. Gotta love reference cards that most waterblocks fit.


Yeah in my haste I neglected to remember something obvious which is a compatible block may not even come out for this EVGA model.

If I don’t find a 3rd party FE before 20th I may try to swap with someone or just cancel.


----------



## alanthecelt

pre ordered a founders edition assuming it will have a block


----------



## Moparman

ibb27 said:


> No one will have an answer until reviews pop up on the internet.


On the Gigabyte page it shows Nvlink has to have 2 identical cards. So my hope is that Nvlink just adds the 2 cards together and works as 1 big card the way it should have been since the start of SLI.


----------



## BigMack70

Got my Founders Edition order in... $1200 sure is steep but I've been waiting for more performance at 4k for 3 and a half years... got to this performance level first with Titan X Maxwell SLI, then switched to single card 1080 Ti after I got sick of SLI's problems, and now finally get something meaningfully faster on the market. 

I very much doubt that it is "worth" the price, sadly... I expect ~50% faster than 1080 Ti in games when RTX is not a factor.


----------



## chakku

BigMack70 said:


> I expect ~50% faster than 1080 Ti in games when RTX is not a factor.


Curb your expectations hypeman this won't happen.


----------



## Djreversal

Not sure why he announced $999 for the Ti model.. you can't find it anywhere for less than like 1100 dollars.


----------



## BigMack70

chakku said:


> Curb your expectations hypeman this won't happen.


We'll find out soon enough. If they can't deliver at least ~35%+ performance improvement over 1080 Ti in games without RTX, I'll be canceling my pre-order.


----------



## Foxrun

I have two preordered; panicked at work. Did they explain NVLink or give performance estimates?


----------



## GraphicsWhore

alanthecelt said:


> pre ordered a founders edition assuming it will have a block


Same. FE finally came up on nVidia's site so I went for it and will cancel the EVGA.



Djreversal said:


> Not sure why he announced $999 for the Ti model.. you can't find it anywhere for less than like 1100 dollars.


Just suggested retail price and I think for reference cards.


----------



## chakku

BigMack70 said:


> We'll find out soon enough. If they can't deliver at least ~35%+ performance improvement over 1080 Ti in games without RTX, I'll be canceling my pre-order.


Look at what the Titan V achieved with ~43% more shader cores (albeit clocked a little lower) and HBM2, while the 2080 Ti only has 21% more cores. If you're expecting it to be an even bigger gain over the 1080 Ti than the full Volta GPU then you may want to lower your expectations to not be completely disappointed. Titan V has more GFLOPS than 2080 Ti as well. Volta/Turing aren't so vastly different that you can't compare those metrics.


----------



## Foxrun

chakku said:


> Look at what the Titan V achieved with ~43% more shader cores (albeit clocked a little lower) and HBM2, while the 2080 Ti only has 21% more cores. If you're expecting it to be an even bigger gain over the 1080 Ti than the full Volta GPU then you may want to lower your expectations to not be completely disappointed. Titan V has more GFLOPS than 2080 Ti as well. Volta/Turing aren't so vastly different that you can't compare those metrics.



So it doesn't look like it will dethrone the V?

Well, comparing specs, it does appear to be worse. Still curious how NVLink will function, as I can't find anything on it from the conference.


----------



## Rob w

Just ordered mine, founders edition 2080ti
Will be an owner when it arrives..


----------



## chakku

Foxrun said:


> So it doesn't look like it will dethrone the V?
> 
> Well, comparing specs, it does appear to be worse. Still curious how NVLink will function, as I can't find anything on it from the conference.


I would set my expectations at it matching the Titan V or slightly beating it depending on how the GDDR6 compares to the HBM2 and how the cooling compared to the V's blower fan help it boost higher.

But hey if they can manage to pull off performance gains like the Infiltrator demo without the use of dedicated hardware like the RT or Tensor cores then I'll be amazed.


----------



## Johnny_Utah

Just preordered 2 FE Ti's from Nvidia site.


----------



## Artah

*EVGA RTX 2080 Ti*



Graphics***** said:


> Pre-ordered the EVGA on Amazon.
> 
> I'm custom loop though, so will need to wait for a block. I'm almost thinking of cancelling and waiting until a block is actually available to buy but then I'm worried about the damn thing not being available anywhere.
> 
> ANY info on blocks for these things yet?


I can't find it on Amazon, can you post a link please? Thanks.


----------



## toncij

chakku said:


> Look at what the Titan V achieved with ~43% more shader cores (albeit clocked a little lower) and HBM2, while the 2080 Ti only has 21% more cores. If you're expecting it to be an even bigger gain over the 1080 Ti than the full Volta GPU then you may want to lower your expectations to not be completely disappointed. Titan V has more GFLOPS than 2080 Ti as well. Volta/Turing aren't so vastly different that you can't compare those metrics.


Exactly. While there are some changes, parallel int vs float ops are not doubling the performance. 
NVidia is using some smart and dirty marketing to sell what's not there.

At the same clock and same core count, those cards should not be faster than 5%. Now, compare 2070 and 1080Ti or TITAN V vs 2080Ti.

Being careful with expectations is highly advised.


----------



## GraphicsWhore

Artah said:


> I can't find it on Amazon, can you post a link please? Thanks.


https://www.amazon.com/gp/offer-lis...=5143&creativeASIN=B07GHXJW5W&m=ATVPDKIKX0DER

Sold out though.


----------



## chakku

toncij said:


> Now, compare 2070 and 1080Ti or TITAN V vs 2080Ti.


This is probably part of their marketing tactic too. TITAN V performance at only 1/3 the cost! Even faster in the limited number of games that support RTX! If you compare the 2080 Ti to 1080 Ti then it is incredibly overpriced, compared to the TITAN V it's a bargain.


----------



## OwnedINC

2 FE's pre-ordered!


----------



## Elmy

One 2080 Ti FE from Nvidia ordered and one Zotac 2080 Ti from Newegg ordered. Going to see which one gets here faster.... 

EK said they will have blocks ready on 9/20 so ill have to keep an eye out for that since I am on custom loop with my Titan Xp. 

I am sure hoping it's minimum 30% faster than Titan Xp. :-/ 

Need more frame rate for that 2560 X 1440p 240Hz .5ms monitor that is being worked on.


----------



## Madness11

Elmy said:


> One 2080 Ti FE from Nvidia ordered and one Zotac 2080 Ti from Newegg ordered. Going to see which one gets here faster....
> 
> EK said they will have blocks ready on 9/20 so ill have to keep an eye out for that since I am on custom loop with my Titan Xp.
> 
> I am sure hoping it's minimum 30% faster than Titan Xp. :-/
> 
> Need more frame rate for that 2560 X 1440p 240Hz .5ms monitor that is being worked on.


What price is ZOTAC bro ?))


----------



## Glerox

chakku said:


> I would set my expectations at it matching the Titan V or slightly beating it depending on how the GDDR6 compares to the HBM2 and how the cooling compared to the V's blower fan help it boost higher.
> 
> But hey if they can manage to pull off performance gains like the Infiltrator demo without the use of dedicated hardware like the RT or Tensor cores then I'll be amazed.


Also pre-ordered one. Can't wait to finally get rid of SLI...

I agree, for games without RTX, I suspect Titan V levels of performance (around 14 tflops of single precision performance)


----------



## Elmy

Madness11 said:


> What price is ZOTAC bro ?))


1199 :-(


----------



## BigMack70

chakku said:


> Look at what the Titan V achieved with ~43% more shader cores (albeit clocked a little lower) and HBM2, while the 2080 Ti only has 21% more cores. If you're expecting it to be an even bigger gain over the 1080 Ti than the full Volta GPU then you may want to lower your expectations to not be completely disappointed. Titan V has more GFLOPS than 2080 Ti as well. Volta/Turing aren't so vastly different that you can't compare those metrics.


I'll be very surprised if it isn't at least 35% faster than a 1080 Ti. As the only reference point we have so far, an overclocked 1080 Ti will yield about 45fps average in the Infiltrator demo at 4k. If we assume 2080 Ti does 60fps average based on what was shown today, that's about 33% faster and is probably a conservative estimate. That doesn't guarantee that the performance advantage there will translate linearly into a gaming average, but I think people are a little more panicked than they need to be at this point. Is this card "worth" the price? Obviously not. Is it going to be a meaningful upgrade from the 1080 Ti? I think so.


----------



## GraphicsWhore

Elmy said:


> One 2080 Ti FE from Nvidia ordered and one Zotac 2080 Ti from Newegg ordered. Going to see which one gets here faster....
> 
> EK said they will have blocks ready on 9/20 so ill have to keep an eye out for that since I am on custom loop with my Titan Xp.
> 
> I am sure hoping it's minimum 30% faster than Titan Xp. 😕
> 
> Need more frame rate for that 2560 X 1440p 240Hz .5ms monitor that is being worked on.


They did??? Where? And for what? The FE?


----------



## chakku

BigMack70 said:


> I'll be very surprised if it isn't at least 35% faster than a 1080 Ti. As the only reference point we have so far, an overclocked 1080 Ti will yield about 45fps average in the Infiltrator demo at 4k. If we assume 2080 Ti does 60fps average based on what was shown today, that's about 33% faster and is probably a conservative estimate. That doesn't guarantee that the performance advantage there will translate linearly into a gaming average, but I think people are a little more panicked than they need to be at this point. Is this card "worth" the price? Obviously not. Is it going to be a meaningful upgrade from the 1080 Ti? I think so.


The Infiltrator demo was using the tensor cores in the 2080 Ti, we have yet to see an apples to apples comparison of 'vanilla' performance between the two.


----------



## toncij

chakku said:


> The Infiltrator demo was using the tensor cores in the 2080 Ti.


For what?


----------



## chakku

toncij said:


> For what?


Upscaling:



> As an example, Huang showed off the Infiltrator demo, based on the UE4 engine, running at 78fps at 4K, instead of 38fps on a GeForce 1080 Ti. The main reason for the increase is due to the Tensor cores filling in the resolution through neural network training.


https://hexus.net/tech/news/graphics/121292-nvidia-unveils-geforce-rtx-2080-ti-rtx-2080-rtx-2070/


----------



## Madness11

Guys, what do you think the percentage gain will be?? I hope the 2080 Ti gives 40% over the 1080 Ti without ray tracing)))


----------



## toncij

chakku said:


> Upscaling:
> 
> 
> 
> https://hexus.net/tech/news/graphics/121292-nvidia-unveils-geforce-rtx-2080-ti-rtx-2080-rtx-2070/


Ohh... completely missed that part. That's such a sad little trick...
Now, without the Infiltrator, we have no performance info at all? Aside from specs.


----------



## dVeLoPe

i ordered 1x ZOTAC Gaming GeForce RTX 2080 Ti AMP

currently own a EVGA 1080Ti iCX with another clone of it sitting unused (just got a HB SLi bridge) that I can toss into the rig.

My question is will SLi 1080Ti keep up with a single 2080Ti (dont care about RayTracing bs)


----------



## Foxrun

Let's try and talk about the real thing here, NVlink. Are we finally getting 2 cards behaving as 1?


----------



## Okt00

That was pretty painful on the ol' plastic. 2080 Ti FE inbound.

Now I really have to think about upgrading the rest of the PC to keep up...


----------



## toncij

Foxrun said:


> Let's try and talk about the real thing here, NVlink. Are we finally getting 2 cards behaving as 1?


NVLink is only a faster SLI. Interconnect is ~80GB/s, way slower than memory itself. It would significantly slow you down.


----------



## Madness11

toncij said:


> NVLink is only a faster SLI. Interconnect is ~80GB/s, way slower than memory itself. It would significantly slow you down.


Bro, what do you think about the performance?)) 2080 Ti vs 1080 Ti (without ray tracing). I hope it's 40% ))


----------



## BigMack70

chakku said:


> BigMack70 said:
> 
> 
> 
> I'll be very surprised if it isn't at least 35% faster than a 1080 Ti. As the only reference point we have so far, an overclocked 1080 Ti will yield about 45fps average in the Infiltrator demo at 4k. If we assume 2080 Ti does 60fps average based on what was shown today, that's about 33% faster and is probably a conservative estimate. That doesn't guarantee that the performance advantage there will translate linearly into a gaming average, but I think people are a little more panicked than they need to be at this point. Is this card "worth" the price? Obviously not. Is it going to be a meaningful upgrade from the 1080 Ti? I think so.
> 
> 
> 
> The Infiltrator demo was using the tensor cores in the 2080 Ti, we have yet to see an apples to apples comparison of 'vanilla' performance between the two.

Seems to be a pretty fair comparison to me... Tensor cores are likely enabled in this fashion all the time with the Turing cards.


----------



## Kinaesthetic

Ordered a FE from Nvidia!


----------



## dVeLoPe

wish i could nvlink them both together lol


----------



## dboythagr8

Ordered an EVGA GeForce RTX 2080 Ti XC card. I miscalculated on the release of these cards and the 4k hdr monitors, hard. Was stuck with a 1060 as a result, but excited to jump back into the high end.


----------



## davidmoffitt

Pre-ordered. This will be the first time I have a single-GPU vs SLI build in ~10 years, assuming it performs as suggested.


----------



## E-curbi

Woohoo New Graphics Club, the Asus and EVGA 2.75slot cards look great.

But what about us 2080 guys, are we excluded from this club? 

Found it, sorry. 

https://www.overclock.net/forum/69-nvidia/1706274-official-nvidia-rtx-2080-owner-s-club.html


----------



## Foxrun

toncij said:


> NVLink is only a faster SLI. Interconnect is ~80GB/s, way slower than memory itself. It would significantly slow you down.


Alrighty, time to cancel one of the 2080ti's. Maybe both depending on how it stacks up against V.


----------



## Elmy

Graphics***** said:


> They did??? Where? And for what? The FE?


Read this tweet. 

https://twitter.com/EKWaterBlocks/status/1031617938762149895

And from stuff I have read.. All cards are FE for the 9/20 launch... Just different coolers.


----------



## chakku

BigMack70 said:


> Seems to be a pretty fair comparison to me... Tensor cores are likely enabled in this fashion all the time with the Turing cards.


DLSS won't be enabled 'all the time' it will need to be supported by the game/benchmark, just like the raytracing techniques/ATAA, PhysX, GameWorks features, etc. So no, it's not a fair comparison when you're comparing a card that isn't designed for it to one that is.

If they can implement it in the drivers like super resolution or something then maybe we'd be talking, but it's still a cheap upscaling trick and not a valid performance comparison. If you want a fair comparison then compare it to a TITAN V running that benchmark with the same setup, using its own tensor cores.


----------



## Racersnare21

Got my 2080ti FE pre-ordered. Going in my new build which upgrades from my current 290x. Still a little skeptical on if it's actually worth the price. Time will tell I guess!


----------



## Glerox

Elmy said:


> Read this tweet.
> 
> https://twitter.com/EKWaterBlocks/status/1031617938762149895
> 
> And from stuff I have read.. All cards are FE for the 9/20 launch... Just different coolers.


Where in this tweet is it saying that all cards have the same PCB?


----------



## Rokku

Do you guys think an RTX 2080 Ti will be bottlenecked by an i7 4770K? I will be upgrading to an i7 8700K soon.


----------



## GraphicsWhore

Elmy said:


> Graphics***** said:
> 
> 
> 
> They did??? Where? And for what? The FE?
> 
> 
> 
> Read this tweet.
> 
> https://twitter.com/EKWaterBlocks/status/1031617938762149895
> 
> And from stuff I have read.. All cards are FE for the 9/20 launch... Just different coolers.

Thanks! That’s great news.

Confirmation on the boards would be even better. I've got an EVGA on preorder coming 9/21 and an FE, but that isn't scheduled until 10/8. I figured these initial blocks would be strictly for reference cards, which is why I got the FE.

Can anyone confirm? I’ll cancel my FE if true.


----------



## nodicaL

BigMack70 said:


> Got my Founders Edition order in... $1200 sure is steep but I've been waiting for more performance at 4k for 3 and a half years... got to this performance level first with Titan X Maxwell SLI, then switched to single card 1080 Ti after I got sick of SLI's problems, and now finally get something meaningfully faster on the market.
> 
> I very much doubt that it is "worth" the price, sadly... I expect ~50% faster than 1080 Ti in games when RTX is not a factor.


I'm thinking about 30% increase in performance.
No way that it'll be 50%.


----------



## Kpjoslee

I hope those tensor cores will be utilized other than the purpose of ray tracing on DXR supported titles. If that is possible, then we could see some meaningful performance jumps on existing titles without DXR via driver updates.


----------



## BigMack70

chakku said:


> DLSS won't be enabled 'all the time' it will need to be supported by the game/benchmark, just like the raytracing techniques/ATAA, PhysX, GameWorks features, etc. So no, it's not a fair comparison when you're comparing a card that isn't designed for it to one that is.
> 
> If they can implement it in the drivers like super resolution or something then maybe we'd be talking, but it's still a cheap upscaling trick and not a valid performance comparison. If you want a fair comparison then compare it to a TITAN V running that benchmark with the same setup, using its own tensor cores.


Jensen pretty heavily implied that with NGX there will be a driver-level implementation of the feature for games, as he said it was all part of the "RTX platform" you run on Turing. Only time will tell if that's true. And if it's implemented at that level, then whether or not it's a "valid performance comparison" is entirely dependent on how accurate an image it generates. If it's accurate, it may not negatively affect image quality any more than things like FXAA.


----------



## rluker5

Rokku said:


> Do you guys think an RTX 2080 Ti will be bottlenecked by an i7 4770K? I will be upgrading to an i7 8700K soon.


For single player games:
Not at 4k high+ settings. Yes at 1440p med- settings (but still probably not that bad). Give or take a bit, and depending on the game, your overclocks, ram, hard drive, etc. 
My old 4770k at ~4.6 and 2400ddr3 had no problem pushing my sli 1080tis to 100% running 4k before I upgraded to the 5775c. But I've seen less than 100% gpu use on youtube with 1080p and slower ram with a single 1080 and also 1080ti.

I don't play a lot of multiplayer, so I don't know how one of those would perform with a lot of other players involved.


----------



## chakku

BigMack70 said:


> Jensen pretty heavily implied that with NGX there will be a driver-level implementation of the feature for games, as he said it was all part of the "RTX platform" you run on Turing. Only time will tell if that's true. And if it's implemented at that level, then whether or not it's a "valid performance comparison" is entirely dependent on how accurate an image it generates. If it's accurate, it may not negatively affect image quality any more than things like FXAA.


Existing neural network upscalers like NNEDI3 already make things look like an oil painting and the extremely compute heavy ones like Waifu2x still have tons of artifacts.

Also comparing apples to oranges is never a fair test, no matter how you slice it. We will wait for (hopefully) proper benchmarks and see what the result is, there's no use speculating exact performance figures but there's even less use expecting unrealistic numbers like 50% performance gains in regular use cases.


----------



## nycgtr

I'm currently traveling out of the country. I totally missed the keynote, but I did preorder 2 MSI 2080 Tis while running around. Currently running 2 Titan Xps. This should be a decent bump? Any news on that NVLink SLI bridge? I see it for preorder on NVIDIA's site.


----------



## BigMack70

chakku said:


> Existing neural network upscalers like NNEDI3 already make things look like an oil painting and the extremely compute heavy ones like Waifu2x still have tons of artifacts.


You're just reaching for reasons to be skeptical. That infiltrator demo definitely didn't look like an oil painting or an artifacted mess today.

I already said we need to wait and see how this all plays out. But you are acting like they produced vaporware and snake oil today, which they clearly did not - regardless of where the performance gains ultimately wind up.


----------



## rluker5

BigMack70 said:


> Jensen pretty heavily implied that with NGX there will be a driver-level implementation of the feature for games, as he said it was all part of the "RTX platform" you run on Turing. Only time will tell if that's true. And if it's implemented at that level, then whether or not it's a "valid performance comparison" is entirely dependent on how accurate an image it generates. If it's accurate, it may not negatively affect image quality any more than things like FXAA.


My TV interpolates frames to 60fps, does that count? It also upscales resolution, but that doesn't look good. It seems like they could AI-upscale an awful lot with this, since it is surely faster than the VIXS 40nm processor in my 2014 UHD TV, and people wouldn't notice that much.
And it would all be dependent on drivers. Not sure what I think of that, but I hope it looks good.
That's really what matters.
If they snuck in a bunch of surreptitious upscaling it could crush Pascal, and if it looked better you would need a bowl of sour grapes to complain.


----------



## animeowns

*2080 ti 2x + nvlink*

Pre-ordered 2x Founders Edition with second-business-day shipping; the NVLink bridge ships by the normal method.


----------



## crazysoccerman

In for a new 1080 Ti. I've been burned by vaporware "up to X times performance with Y technology..." too many times.


----------



## joeh4384

These cards are probably going to be pretty hot especially if Nvidia is abandoning a blower cooler for the reference design. I want to pick up a 2080ti hybrid to replace my 1080ti but I probably will hold off a while.


----------



## rx7racer

For the ones mentioning price, not everyone will be coming from a 1080 Ti.

I agree it's an absurd price, and this is nothing new from NV. Anyone who says these prices are reasonable is in denial and has been for a while.

But, alas even for me enough is enough and I want some performance which obviously isn't coming from any company but NV.

So I'm in for a Gigabyte Windforce 2080 Ti. Thumbs Up!

Screw it, I can't stand what I just did but it had to be done hahaha

Yay, Go AMD...


----------



## animeowns

joeh4384 said:


> These cards are probably going to be pretty hot especially if Nvidia is abandoning a blower cooler for the reference design. I want to pick up a 2080ti hybrid to replace my 1080ti but I probably will hold off a while.


The Inno3D iChill has a hybrid-cooled one up for pre-order.


----------



## rx7racer

I tend to agree, hating on NV for the pricing doesn't mean we can't buy them or that they are selling snake oil or you're a troll. It just means we know we are spending more than we should in a market that is stagnant and has zero options from opposing forces.

But for many as myself once was these prices are intolerable and horrible segmentation of NV's line up. Turing cores and all that are coming with a price since NV has no reason to be competitive with themselves and as most have pointed out for general rule of thumb at this point in time no one needs more than the performance of a 1080 Ti. 

Wonderful times we live in, 8 years I myself haven't purchased NV products. This is where it has got me, far enough behind performance wise I'm begging NV to stick it u...


----------



## boredgunner

Waiting for ASUS ROG Strix tri fan version. Expecting a substantial upgrade over my 1080 Ti to say the least.


----------



## NAIM101

If the RTX 2080 Ti GPU is $1,100+ I expect it to be at least a 50% improvement over the 1080 Ti. I will wait for reviews, and if it's nowhere close, I will NOT be purchasing NVIDIA GPUs anymore and will look toward purchasing consoles next month. It is already ridiculous how prices went up with GTX GPUs, blaming the Bitcoin "mining" community. Nvidia better start limiting 1x per customer like Apple did!


----------



## BigMack70

NAIM101 said:


> If the RTX 2080 Ti GPU is $1,100+ I expect it to be at least a 50% improvement over the 1080 Ti. I will wait for reviews, and if it's nowhere close, I will NOT be purchasing NVIDIA GPUs anymore and will look toward purchasing consoles next month. It is already ridiculous how prices went up with GTX GPUs, blaming the Bitcoin "mining" community. Nvidia better start limiting 1x per customer like Apple did!


The Xbox One X is probably the best overall deal on the market as far as price/performance goes. Definitely more impressive than any of this, even if the 2080 Ti is 50% faster than 1080 Ti.


----------



## renejr902

I want to buy a GeForce 2080 Ti and sell my 1080 Ti, even if the prices are way too high, BUT I still want 25% better FPS performance in non-RTX games. So do you think it's 100% sure that the 2080 Ti will give me at least 25% more FPS in 4K gaming? Thanks for your opinion.


----------



## boredgunner

NAIM101 said:


> If the RTX 2080 Ti GPU is $1,100+ I expect it to be at least a 50% improvement over the 1080 Ti. I will wait for reviews, and if it's nowhere close, I will NOT be purchasing NVIDIA GPUs anymore and will look toward purchasing consoles next month. It is already ridiculous how prices went up with GTX GPUs, blaming the Bitcoin "mining" community. Nvidia better start limiting 1x per customer like Apple did!


What on earth will consoles give you that a GTX 1080 Ti PC can't? LOL.


----------



## l88bastar

BigMack70 said:


> The Xbox One X is probably the best overall deal on the market as far as price/performance goes. Definitely more impressive than any of this, even if the 2080 Ti is 50% faster than 1080 Ti.


My XOX collects dust. My PC gets used hours and hours each day.


----------



## GraphicsWhore

renejr902 said:


> I want to buy a GeForce 2080 Ti and sell my 1080 Ti, even if the prices are way too high, BUT I still want 25% better FPS performance in non-RTX games. So do you think it's 100% sure that the 2080 Ti will give me at least 25% more FPS in 4K gaming? Thanks for your opinion.


It’s very likely but we’re all still speculating.


----------



## NewType88

boredgunner said:


> Waiting for ASUS ROG Strix tri fan version. Expecting a substantial upgrade over my 1080 Ti to say the least.


Think I'm going to get that one too, but I love how the turbo one looks !!

Are the heatsink mounting holes in the same spot as on the 1080 Ti variant? Can't really tell from the pictures. I wanna slap my Morpheus II on it, and I know it works with the 1080 Ti Strix.


----------



## MilesK

chakku said:


> The Infiltrator demo was using the tensor cores in the 2080 Ti, we have yet to see an apples to apples comparison of 'vanilla' performance between the two.


Yeah the RTX was doing "real time" ray tracing and sustaining 60 fps while doing it.

His point has merit; it's clearly faster at rendering everything else going on in this demo, by a considerable margin, even while running something that makes it look marginally better.

We just don't know if the new RTX card is taking over all shadow calculations, including things like SSAO etc., which are ungainly luxury taxes that throttle GPU performance for little return in image quality.


----------



## Neokolzia

MilesK said:


> Yeah the RTX was doing "real time" ray tracing and sustaining 60 fps while doing it.
> 
> His point has merit; it's clearly faster at rendering everything else going on in this demo, by a considerable margin, even while running something that makes it look marginally better.
> 
> We just don't know if the new RTX card is taking over all shadow calculations, including things like SSAO etc., which are ungainly luxury taxes that throttle GPU performance for little return in image quality.


The day they start being able to push real-time video smoothing with GPU AI tech, where it can insert frames between frames at a real-time user level, I'll be excited for it.

I'm skeptical of the performance increase on these cards, and waiting with bated breath, especially since not once during the presentation did they talk about actually comparable performance.
I understand under RTX loads it wouldn't matter, but that's the same as comparing FPS between cards a generation apart, where better approaches to computing tessellation than previous generations let you boast 50% performance increases when in reality it's more like 10-15%.

Obviously the ray tracing stuff is some next-gen design, but I find it hard to imagine them pushing it to the mainstream fast enough to actually make it easier on the artists.
While it's AMAZING tech and makes the workflow so much easier, the fact is that for, say, a typical AAA console game, 95%+ of the user base will not be using ray tracing, either because of incompatible hardware or because they're on console.
So they will still need to make the games beautiful without it.

I'm hoping we see the push for RTX in next-generation consoles, though, since this is a tech that positively affects game development pipelines, that's for sure


----------



## toncij

Neokolzia said:


> I'm skeptical the performance increase on these cards


You should be. Be careful of those claims about chakku being a troll; he is one of the rare people here showing they know what they're talking about. Many will be very disappointed (unless we have NV shills here, of course).

NVidia yet again pulled a very unfair marketing trick and "heavily implied" things they knew people would "see". We see what we want to see, and that's a brilliant part of NV's marketing team. Whatever anyone pulls out of their own imagination will end up easily deniable for being "only heavily implied in the eyes of those claiming it, but never intentional by NV".


----------



## Jbravo33

Preordered 2 FE Ti's and an EVGA Ti. Will resell the EVGA. The FEs are getting water cooled and will replace two Xp's. Looking forward to benching against the V. I'm guessing 30-35% above the 1080 Ti and easily over 100 fps in 4K SLI.


----------



## Drunkbag

I kinda want to order the FE, but maybe I should wait for the 2080 Ti FTW3.
I wonder how much more expensive it will be than the XC Ultra.
Oh well, I got a month to decide.


----------



## JunkaDK

*What monitor?*

Any sense in getting a 4K 60 Hz monitor that has G-Sync? G-Sync will only help if the FPS is below 60 fps, right? And I'm thinking my PC can push well above 60 fps now.


----------



## shilka

JunkaDK said:


> Any sense in getting a 4K 60 Hz monitor that has G-Sync? G-Sync will only help if the FPS is below 60 fps, right? And I'm thinking my PC can push well above 60 fps now.


If you are looking for a new monitor for gaming, a better option would be a 165 Hz 1440p display like the Asus PG279Q.
Or there is the Acer XB271HU and the AOC AG271QG.

As for G-Sync, one of the selling points is preventing screen tearing, which is a big problem at very high FPS, so yes, G-Sync helps no matter the FPS.


----------



## Shadowsong

Any confirmation if all of the initially released 3rd party cards will be reference PCB? Would like to purchase an EVGA 2080 ti, but will go FE if i can't confirm waterblock compatibility.


----------



## JunkaDK

shilka said:


> If you are looking for a new monitor for gaming, a better option would be a 165 Hz 1440p display like the Asus PG279Q.
> Or there is the Acer XB271HU and the AOC AG271QG.
> 
> As for G-Sync, one of the selling points is preventing screen tearing, which is a big problem at very high FPS, so yes, G-Sync helps no matter the FPS.


I am not playing high-FPS games.. I want to get the best looking image possible with the highest detail level possible.

I thought that G-Sync was to ensure that game FPS and screen Hz would always sync up to avoid tearing.. So if a game is running at, let's say, 70-80 FPS and the screen is only 60 Hz, I don't get how G-Sync can assist without lowering the FPS to 60?

Please correct me if I'm wrong


----------



## shilka

G-Sync locks your framerate to your monitor, so if you have a 60 Hz monitor your games won't run higher than 60 FPS.
Higher refresh rate is not only about FPS; stuff as simple as moving your mouse across the screen looks better and less choppy because of the higher refresh rate.

I play RTS games for the most part, which are, as you said, not high-FPS games, but even there the higher refresh rate makes for much better gameplay.
I know I would rather take a higher refresh rate over monitor resolution any day of the week, but that's my own personal opinion.

Playing on a 60 Hz display gives me a headache, that's how used to 165 Hz I've become.
At least consider it


----------



## JunkaDK

JunkaDK said:


> I am not playing high-FPS games.. I want to get the best looking image possible with the highest detail level possible.
> 
> I thought that G-Sync was to ensure that game FPS and screen Hz would always sync up to avoid tearing.. So if a game is running at, let's say, 70-80 FPS and the screen is only 60 Hz, I don't get how G-Sync can assist without lowering the FPS to 60?
> 
> Please correct me if I'm wrong



Found this in an article, so I guess my assumption was correct: "That being said, G-Sync doesn’t eliminate screen tearing if the rendering rate of the GPU is above your monitor’s refresh rate."
https://beebom.com/what-is-nvidia-fast-sync-enable/

So I guess Nvidia Fast Sync will help more than G-Sync
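For anyone confused by the distinction, the Fast Sync behavior the linked article describes can be sketched as a toy simulation (the numbers here are illustrative, not NVIDIA's implementation): the GPU renders uncapped, and at each refresh the display scans out the most recently completed frame.

```python
# Toy sketch of the Fast Sync idea from the linked article: the GPU renders
# uncapped, and at each refresh the display shows the latest COMPLETED frame.
# No tearing above the refresh rate, at the cost of some dropped frames.
REFRESH_HZ = 60   # display refresh rate
RENDER_FPS = 80   # GPU rendering faster than the display

refresh_times = [i / REFRESH_HZ for i in range(6)]
frame_done_times = [i / RENDER_FPS for i in range(12)]

for t in refresh_times:
    # pick the latest frame that finished at or before this refresh
    shown = max(i for i, done in enumerate(frame_done_times) if done <= t)
    print(f"refresh at {t:.4f}s shows frame {shown}")
# Frame 3 never reaches the screen: it finished and was replaced between refreshes.
```

That dropped frame is the trade-off: no tearing, but frame pacing is less even than G-Sync within its refresh range.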


----------



## shilka

If you really want 4K and you've got $2000 lying around, there is the Asus PG27UQ, which has a 144 Hz refresh rate








See my last post if you have not seen it already


----------



## JunkaDK

Gonna have a look in the mattress later


----------



## mtbiker033

Can't wait to see some real user benchmarks; my 1080 Ti is still doing wonders at 1440p, but I'm curious to compare benches.


----------



## GraphicsWhore

Any more rumblings whether the EVGA cards are custom PCB or not?


----------



## nodicaL

I know the 2080 Ti will blow my 980 Ti out of the water. 

No need for benchmarks here.


----------



## ThrashZone

Hi,
The 2080 Ti just took over the Titan Xp price slot with a better stock cooler, so it's probably already worth buying.
There's probably no need to add a water block, so it's already saving money.
We'll assume it performs better.


----------



## Shadowsong

Graphics***** said:


> Any more rumblings whether the EVGA cards are custom PCB or not?


Waiting to find out the same; I'm assuming that's the case but waiting to find out for certain before I pull the trigger! I was considering waiting for a non-reference design card, but I can't be sure a block will be made for it, and I see no Hydro Copper version advertised.


----------



## carlhil2

Pre-ordered the 2080Ti FE, will not see it til the first week of November...


----------



## Okt00

nodicaL said:


> You put it very nicely.
> I know the 2080 Ti will blow my 980 Ti out of the water.
> 
> No need for benchmarks here.


My 970 can't wait to get moved into the Linux server downstairs...

I'm interested to see the benchmarks, but that'll be to see how far it can go.




carlhil2 said:


> Pre-ordered the 2080Ti FE, will not see it til the first week of November...


Is that what the confirmation email from NVIDIA says?


----------



## carlhil2

Okt00 said:


> My 970 can't wait to get moved into the Linux server downstairs...
> 
> I'm interested to see the benchmarks, but that'll be to see how far it can go.
> 
> 
> 
> 
> Is that what the confirmation email from NVIDIA says?


Yes, says 11/5...


----------



## GraphicsWhore

Shadowsong said:


> Waiting to find out the same; I'm assuming that's the case but waiting to find out for certain before I pull the trigger! I was considering waiting for a non-reference design card, but I can't be sure a block will be made for it, and I see no Hydro Copper version advertised.


Over on EVGA forums someone mentioned that these initial cards must be reference as a custom PCB is going to take time. That makes sense but I still want confirmation from EVGA or EK before I cancel either my EVGA or FE preorder.

I tweeted EVGA about it yesterday and so far nothing. I'm not entirely clear why someone from EVGA can't answer this question in 2 seconds on Twitter or on their official forums.



carlhil2 said:


> Pre-ordered the 2080Ti FE, will not see it til the first week of November...


Are you in the U.S.? I pre-ordered yesterday and it's shipping 10/8. Pretty interesting if it has jumped an entire month overnight.


----------



## Okt00

carlhil2 said:


> Yes, says 11/5...


Well that escalated quickly. Did you order much later after the pre-order was live?


----------



## nodicaL

Okt00 said:


> carlhil2 said:
> 
> 
> 
> Yes, says 11/5...
> 
> 
> 
> Well that escalated quickly. Did you order much later after the pre-order was live?
Click to expand...

Yikes!

I preordered as soon as the event stream was over and got 9/20 like it was supposed to be


----------



## GraphicsWhore

nodicaL said:


> Yikes!
> 
> I preordered as soon as the event stream was over and got 9/20 like it was supposed to be


For me the "pre-order" button was not showing for a while after the event. I guess they had sold out of that batch or whatever? Once it re-appeared I was able to pre-order and got ETA of 10/8.

Meanwhile, the EVGA XC gaming I pre-ordered is slated for delivery on 9/21, that's why I'm desperately trying to find out if the EK blocks releasing on 9/20 will fit it. I'd love to keep that card instead of the FE and have it earlier (plus it was a bit cheaper).


----------



## bastian

I also pre-ordered a 2080 Ti and I got it during the livestream, before it was officially announced. I got a confirmation email that stated it would ship on or around 9/20/2018. But now I am seeing people saying October and November.

I called nVidia Store Support and they didn't have any info to give me, other than to confirm my order went through.

I am assuming everyone has different ETAs based on how far in the queue you got. So for people who got in early, they will get it around the official date, and other later orders get one in October/November, etc.


If you go to pre-order one now, the store does indeed say 11/12/2018 for shipping.


----------



## GraphicsWhore

bastian said:


> I also pre-ordered a 2080 Ti and I got it during the livestream, before it was officially announced. I got a confirmation email that stated it would ship on or around 9/20/2018. But now I am seeing people saying October and November.
> 
> I called nVidia Store Support and they didn't have any info to give me, other than to confirm my order went through.
> 
> I am assuming everyone has different ETAs based on how far in the queue you got. So for people who got in early, they will get it around the official date, and other later orders get one in October/November, etc.
> 
> 
> If you go to pre-order one now, the store does indeed say 11/12/2018 for shipping.


Exactly. Yours should be going out in the first batch so I'm pretty sure you're good.


----------



## bastian

This launch was artificially delayed, so I am a little surprised nVidia and the AIBs did not have more ready to go sooner. 

Even though AIBs are accepting pre-orders for custom cards and stating they should be available in September, Gamers Nexus is implying that custom cards won't really start arriving until October.


----------



## Rob w

Aquatuning just sent me an email, Water block available for pre order for RTX 2080ti and the 2080
Just for those that were concerned whether one would be available.


----------



## D. O'Connor

Guys has anyone had a confirmed shipping date for the Founders card yet? Mine is still saying order received....


----------



## GraphicsWhore

Rob w said:


> Aquatuning just sent me an email, Water block available for pre order for RTX 2080ti and the 2080
> Just for those that were concerned whether one would be available.



Thanks for posting this. The compatibility list here shows only the FE cards which may help confirm that these are the only cards using reference PCB and in that case I expect EK's blocks would have the same compatibility (only FE cards).


----------



## RobotDevil666

WOW, that's a lot of pre-orders, I bet the jacket man is happy  
Anyway, it's the FE 2080 Ti for me, hoping for an EK block soon. I managed to get in early and have a ship date of 20/09, but when I checked now it's already 25/10 
I'm well hyped, we need those benchmarks/reviews now


----------



## BigMack70

There are a great many people in this community that seem to think that the problem behind Nvidia's pricing is people who buy Nvidia's products. While there's a surface level of plausibility to this, it reflects a gross misunderstanding of economics. Nvidia is in almost all meaningful respects a monopoly in the high end GPU space. AMD cannot compete above, what, the $300 price point? So what happens as a result? Nvidia adopts monopolistic pricing strategies and charges whatever they damn well please. 

If you want to get angry about this, get angry at AMD for being an utter failure for the past five years, not at people buying Nvidia's products.


----------



## skedda

BigMack70 said:


> There are a great many people in this community that seem to think that the problem behind Nvidia's pricing is people who buy Nvidia's products. While there's a surface level of plausibility to this, it reflects a gross misunderstanding of economics. Nvidia is in almost all meaningful respects a monopoly in the high end GPU space. AMD cannot compete above, what, the $300 price point? So what happens as a result? Nvidia adopts monopolistic pricing strategies and charges whatever they damn well please.
> 
> If you want to get angry about this, get angry at AMD for being an utter failure for the past five years, not at people buying Nvidia's products.


I am not angry about this, rather annoyed. I am not going to debate "economics" with you. You seem to have too much time on your hands. If the 2080 Ti were to fail to meet Nvidia's expectations in terms of sales, I am sure they would lower the prices. They are charging these prices because they know damn well that a lot of people are going to buy them regardless. AMD is to blame as well, of course.


----------



## ClashOfClans

Are there any 4K performance numbers or predictions? Can I expect 60 fps at 4K with this card, at least in all games, when I crank the settings up to full blast? October is a long time to wait too. Not sure if I'm willing to drop $1200 for something I'll have to wait almost 2 months for. Tough first-world-problem dilemma here. 

My little GTX 1060 3GB is doing admirably at 1080p with the TV upconverting to 4K, I must say. But obviously running native 4K would be very awesome. I've paid my dues with this $150 GTX 1060 3GB since I got it on sale, so I think I will take the plunge and finally be an early adopter for once! I know after a year these things will go down in price to like $700 for the 2080 Ti and $500 for the regular 2080, but it's all good I guess. 

Oh, what do you guys think of the Ryzen 1800X and a 2080 Ti at 4K? At 4K I shouldn't see a bottleneck, I'm guessing.


----------



## BigMack70

ClashOfClans said:


> Are there any 4k performance numbers or predictions? Can I expect 60fps at 4k with this card at least in all games when I crank up the settings to full blast?


The 1080 Ti is close to being a card that can do 4k 60fps at highest settings, and this will be faster than the 1080 Ti, so it's reasonable to expect most games to play at 4k 60fps with cranked settings. That said, not all games will do that, and there's probably zero chance that you will be turning on ray tracing at 4k and getting anything resembling a playable framerate.

We don't have any usable numbers yet for comparison. The closest we have is the Infiltrator demo which, assisted by their new DLSS feature, got to 78 fps at 4K on the 2080 Ti (my 1950 MHz 1080 Ti gets 47 fps in the demo). No way to know how much of that performance delta is attributable to the DLSS feature, and no way to know how well or how frequently that feature will be implemented in actual games.
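For what it's worth, the implied uplift from those two demo numbers is easy to sanity-check (a rough sketch; 78 and 47 fps are just the demo figures above, not general benchmarks, and an unknown share of the gap is DLSS itself):

```python
# Implied uplift from the Infiltrator demo numbers quoted above:
# 78 fps (2080 Ti with DLSS) vs. 47 fps (overclocked 1080 Ti).
rtx_2080ti_fps = 78.0
gtx_1080ti_fps = 47.0

uplift = rtx_2080ti_fps / gtx_1080ti_fps - 1.0
print(f"Implied uplift: {uplift:.0%}")  # ~66%
```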


----------



## Foxrun

Well, some concerning news: turns out the Ti can't maintain 60 fps at 1080p with RT enabled in Tomb Raider and BFV. I'll get the link once I get off the toilet.

https://www.pcgamesn.com/nvidia-rtx-2080-ti-hands-on

Hopefully we will have some driver improvements but I still doubt that we will have 4k and RT this generation.


----------



## GraphicsWhore

Has this been posted yet?

https://www.techradar.com/au/reviews/nvidia-geforce-rtx-2080-ti

Not a full review but some info that bodes well for 4K/high res gaming.


----------



## Ford8484

BigMack70 said:


> The 1080 Ti is close to being a card that can do 4k 60fps at highest settings, and this will be faster than the 1080 Ti, so it's reasonable to expect most games to play at 4k 60fps with cranked settings. That said, not all games will do that, and there's probably zero chance that you will be turning on ray tracing at 4k and getting anything resembling a playable framerate.
> 
> We don't have any usable numbers yet for comparison. The closest we have is the Infiltrator demo which, assisted by their new DLSS feature, got to 78 fps at 4K on the 2080 Ti (my 1950 MHz 1080 Ti gets 47 fps in the demo). No way to know how much of that performance delta is attributable to the DLSS feature, and no way to know how well or how frequently that feature will be implemented in actual games.


That's a roughly 65% jump in performance... not bad at all. Well, considering the cost, it should be around that level (even more, if you ask me). But like you said, there's no way of knowing if these results will be more universal... something tells me they won't be, considering the lack of performance information for regular PC gaming. I've got mine pre-ordered but am really considering cancelling it... hopefully we can get some hard data on FPS performance before they ship. Ray tracing looks cool, but I think I'm with most people in saying higher 4K performance is more important.


----------



## sblantipodi

where are the benchmarks?


----------



## BigMack70

Ford8484 said:


> That's a roughly 65% jump in performance... not bad at all. Well, considering the cost, it should be around that level (even more, if you ask me). But like you said, there's no way of knowing if these results will be more universal... something tells me they won't be, considering the lack of performance information for regular PC gaming. I've got mine pre-ordered but am really considering cancelling it... hopefully we can get some hard data on FPS performance before they ship. Ray tracing looks cool, but I think I'm with most people in saying higher 4K performance is more important.


Just be sure canceling the pre order is really what you want to do if you go that route; these cards look completely sold out and back ordered which means a decision to cancel a pre-order is probably a decision not to have the option to own the card for several months. 

I figure I'll just return the card if it turns out not to have enough performance increase to make a difference over my 1080 Ti.


----------



## BigMack70

sblantipodi said:


> where are the benchmarks?


Probably behind NDA until the 20th.


----------



## Desolutional

On a slightly unrelated note, my 980 Ti has kept its value *extremely* well over the past few years. If the benches on this new card turn out to be fluff, and it's only good in RTX gameplay, then you can guarantee people will be furious.


----------



## Foxrun

Desolutional said:


> On a slightly unrelated note, my 980 Ti has kept its value *extremely* well over the past few years. If the benches on this new card turn out to be fluff, and it's only good in RTX gameplay, then you can guarantee people will be furious.


It'll be the other way around. I posted a link a few posts back, and as of right now, with current drivers and such, the 2080 Ti can't hold 60 fps at 1080p with ray tracing enabled in BFV and Tomb Raider.


----------



## rx7racer

Well, this thread went to pooh fairly quickly.

Anyway, for those of us who have pre-ordered and are officially going to be owners, can we get back on track to talking normally?

Some of the leaks do worry me, but as I said already, not everybody is coming from a 1080 Ti. I will be in heaven with what I want, which is solid 1440p performance, considering even my Fury has kept me in an OK place all these years.

What would be nice, as everyone has said, is for NV to measure the RTX brand in some previously established metrics. I agree with the many who say the figures they are giving are kind of useless for extrapolating any idea of possible performance.

I may have missed it, but for example, do we have any solid information yet on how the AI portion will be implemented in non-RT games? That alone could potentially boost CUDA core performance. Looking at what has leaked, it is worrisome that this card is supposed to handle 4K yet it's diving at 1080p; is this due to the processor or some other bottleneck, or maybe just drivers and DX ray tracing?

Many questions abound and hopefully we will know before we put it in our systems. 

It's odd NV and Jensen didn't touch on any previous metrics; it did change our focus, I'd say.


----------



## bastian

I suspect 1080 Ti owners upgrading to 2080 Ti will see a 30-40% increase.


----------



## sblantipodi

WOW, I love RTX


----------



## Glerox

Anyone wanna bet Jensen will release a Titan RTX with fully enabled TU102 with 4608 CUDA cores and 12GB VRAM before the end of the year?


----------



## sblantipodi

Glerox said:


> Anyone wanna bet Jensen will release a Titan RTX with fully enabled TU102 with 4608 CUDA cores and 12GB VRAM before the end of the year?


at those prices he can SLI it in his @ss...


----------



## ClashOfClans

I don't see the benchmarks. You would think there would be some performance data out there to entice guys to preorder.

Some are saying a 60% improvement over the GTX 1080 Ti (in games without ray tracing). That's really good if true.


----------



## Foxrun

ClashOfClans said:


> I don't see the benchmarks. You would think there would be some performance data out there to entice guys to preorder.
> 
> Some are saying a 60% improvement over the GTX 1080 Ti (in games without ray tracing). That's really good if true.


Could you link the source? I definitely want to see this since I've got two on order. I have high hopes for NVLink.


----------



## Glerox

sblantipodi said:


> at those prices he can SLI it in his @ss...


LOLLL


----------



## Jpmboy

Glerox said:


> Anyone wanna bet Jensen will release a Titan RTX with fully enabled TU102 with 4608 CUDA cores and 12GB VRAM before the end of the year?



I can only hope... 


(but cuda cores need to be > 5120)


----------



## Glerox

Jpmboy said:


> I can only hope...
> 
> 
> (but cuda cores need to be > 5120)


I don't think it's possible because the full Turing chip is 4608 CUDAs, unlike Volta.


----------



## Jpmboy

Glerox said:


> I don't think it's possible because the full Turing chip is 4608 CUDAs, unlike Volta.



:doh:
yeah - but I thought (barring a large increase in operating frequency) a CUDA core is just that, and the performance scales pretty linearly?


----------



## Glerox

Jpmboy said:


> :doh:
> yeah - but I thought (barring a large increase in operating frequency) a CUDA core is just that, and the performance scales pretty linearly?


Good question. I suppose not every CUDA core is the same, since the architecture around it, VRAM bandwidth, other types of cores, etc. can change the performance.
I guess, at the same frequency, 4608 Turing CUDAs > 5120 Volta CUDAs for gaming purposes (with RTX off, to be fair).
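Taking the "cores x clock" scaling assumption from this exchange at face value, a quick back-of-envelope comparison looks like this (the clocks are published boost specs: 1545 MHz for the reference 2080 Ti per the spec table above, 1455 MHz for the TITAN V; real performance depends on far more than this):

```python
# Naive paper-spec estimate assuming performance ~ CUDA cores x boost clock,
# per the scaling assumption discussed above.
def relative_throughput(cuda_cores: int, boost_mhz: int) -> int:
    return cuda_cores * boost_mhz

full_turing = relative_throughput(4608, 1545)  # hypothetical full TU102
titan_v = relative_throughput(5120, 1455)      # Volta TITAN V

# ~0.96: on raw cores x clock alone the two are nearly even, so any gaming
# lead for Turing would have to come from architectural changes, not core count.
print(f"{full_turing / titan_v:.2f}")
```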


----------



## gecko991

This is about to get interesting.


----------



## Jpmboy

Glerox said:


> Good question. I suppose not every CUDA core is the same, since the architecture around it, VRAM bandwidth, other types of cores, etc. can change the performance.
> *I guess, at the same frequency, 4608 Turing CUDAs > 5120 Volta CUDAs for gaming purposes (with RTX off, to be fair)*.



yes, that's the premise/promise, but thus far across a few generations any rendering scales with core count (normalizing for frequency improvements)... unfortunately all our paper-performance speculation will have to wait nearly a month for actual data. I'm gonna hold off until then (something I've said many times re: GPUs and have yet to abide by).


----------



## ClashOfClans

Foxrun said:


> Could you link the source? I definitely want to see this since I've got two on order. I have high hopes for NVLink.


Well did you see this: https://wccftech.com/nvidia-geforce-rtx-2080-ti-performance-preview-4k-100-fps/

It is saying that the 2080 Ti gets well over 100 fps at 4K with ultra settings...



> Coming straight to the performance segment, the GeForce RTX 2080 Ti is said to deliver smooth 60 FPS in multiple PC games at 4K resolution.





> In addition to the 4K gaming performance, it is also stated that some games (which they obviously can’t name explicitly) were able to run well over 100 FPS at 4K and using Ultra settings.


I think that solidified my preorder anyway. If that is true, I feel good about preordering and dropping $1200 on a graphics card. Mind you, I've never spent more than $400, usually opting for mid-range. $1200 is a lot of coin for a graphics card, but I justify it this way:

1) I'm coming from a GTX 1060 3GB, so my mind will be blown by this new card. Can't say the same for guys coming from a 1080 Ti.
2) Weren't brand-new 1080 Ti cards going for well over $1200 just a few months ago at the height of the cryptocurrency boom?
3) A 6-core i9 MacBook will cost some guy around $4k and won't be able to play 4K games at even 10 fps.


----------



## bastian

Just got another email from nVidia confirming my order and the ship date of "on or around 09/20/2018"


----------



## jincuteguy

zhrooms said:


> Pre-ordered MSI GeForce RTX 2080 Ti 11GB GAMING X TRIO. But if an EVGA Founders Edition shows up at a retailer I'll cancel it. Gotta love reference cards that most waterblocks fit.


How did you pre order this card on Newegg? It's sold out.


----------



## Shadowsong

ClashOfClans said:


> Well did you see this: https://wccftech.com/nvidia-geforce-rtx-2080-ti-performance-preview-4k-100-fps/
> 
> It is saying that the 2080 Ti gets well over 100 fps at 4K with ultra settings...
> 
> 
> 
> 
> 
> I think that solidified my preorder anyway. If that is true, I feel good about preordering and dropping $1200 on a graphics card. Mind you, I've never spent more than $400, usually opting for mid-range. $1200 is a lot of coin for a graphics card, but I justify it this way:
> 
> 1) I'm coming from a GTX 1060 3GB, so my mind will be blown by this new card. Can't say the same for guys coming from a 1080 Ti.
> 2) Weren't brand-new 1080 Ti cards going for well over $1200 just a few months ago at the height of the cryptocurrency boom?
> 3) A 6-core i9 MacBook will cost some guy around $4k and won't be able to play 4K games at even 10 fps.


Yep, pretty much the same here; I've yet to pre-order as I'm waiting on waterblock confirmation. But I'm happy to spend that amount, especially since I'm upgrading from an original Titan 6GB SLI setup. I always prefer to buy high end and keep the card(s) longer.


----------



## Blameless

ClashOfClans said:


> It is saying that the 2080 ti gets well over 100fps in 4k in ultra settings...


'Many games' for 4k60 and 'some games' for 4k100+.

Frankly, that vague level of performance is a complete given for a 2080 Ti, even by the most pessimistic estimates.

It would also describe the 1080 Ti accurately enough...I'm sure I could find at least two modern games that will run 4k at 100fps with highest in-game settings on one.


----------



## CJMitsuki

The 2080 Ti hit 33-48 fps with ray tracing at 1080p according to multiple sources at Gamescom. How that translates to performance with ray tracing disabled has yet to be seen. Most estimates put it around a 15-20% performance increase over the 1080 Ti, but those are still only estimates. Time will tell, but 20% would be quite the disappointment for the price point, to say the least. I'm hoping it's better than that, but the fps with RT on at 1080p does seem concerning. I'll wait to upgrade until after some benchmarks; if it's a 30%+ increase I will buy, but if it's 15-20%, the 1080 Ti seems like a more effective buy with prices dropping.

Sources and Info


Spoiler


----------



## ThrashZone

Jpmboy said:


> yes, that's the premise/promise, but thus far across a few generations any rendering scales with core count (normalizing for frequency improvements)... unfortunately all our paper-performance speculation will have to wait nearly a month for actual data. *I'm gonna hold off until then (something I've said many times re: GPUs and have yet to abide by)*.


Hi,
Yep, mostly impulse buying.
Resistance is futile when you have the money :thumb:


----------



## Seyumi

Hopefully EVGA will introduce the quick-disconnect AIOs this generation. They showed them off nearly TWO years ago and it's been complete radio silence ever since. These cards obviously run hot, as Nvidia now puts 2 fans instead of 1 on its reference designs, and EVGA now does triple-slot cooling on its higher-tier GPU cards when it has historically only done dual-slot cooling. This generation could probably benefit from a 240mm AIO radiator instead of just a 120mm one. Inno3D has already leaked a 240mm 2080 Ti AIO.


----------



## Fitzcaraldo

FE preorder went in before the end of the panel, even.
Now deciding which waterblock to choose for it.


----------



## GraphicsWhore

Fitzcaraldo said:


> FE preorder went in before the end of the panel even.
> Now waiting which Waterblock to choose for it.


Same. I'm deciding between the EK and Alphacool plexi version. May go with the latter just because Aquatuning is offering a pre-order. No price yet for the EK but the Alphacool is $157 and I assume EK will be similar.

Mentioned earlier in the thread I've been trying to get confirmation if any/all of the 3rd party launch cards are reference PCB.

Aquatuning.us responded but it was confusing. First they said yes, they think the initial cards are reference, but when I asked whether that meant the Alphacool blocks would fit the 3rd-party cards, they said probably not.

EVGA said they treat only Founder's as reference and to talk to EK.

EK hasn't responded but their page clearly states the compatibility as only the 2080 FE and 2080Ti FE.

FYI, the only reason I care is my pre-ordered EVGA XC 2080 Ti is slated for delivery 9/21, whereas my nVidia FE isn't due until 10/8, so I was hoping I could keep the EVGA to get this all done sooner.

At this point I think the bottom line is keeping the FE is the safest bet.


----------



## Madness11

Guys, can't wait )) Where can I pre-order to Dubai?


----------



## dante`afk

Is no one else unimpressed by this RTX marketing BS? "Awesome", "so greaaat"... and the resulting ridiculous price?

I'll wait for actual tests; a total snoozefest so far.


----------



## bastian

nVidia is obviously saving numbers for reviews. But they have already said the 2080 will beat the 1080 Ti in games, so the leap to the 2080 Ti will obviously be even bigger, not just incremental. I'm still sticking with a 30-40% performance increase going from the 1080 Ti to the 2080 Ti. There probably won't be a new, faster card above the 2080 Ti for a while; I doubt they will even do a Titan.


----------



## ClashOfClans

I'm not a big fan of how they launch a product either. It would seem they'd benefit, especially hype-wise, from giving us some concrete benchmarks now, at least some graphs for games we play today. Allowing pre-orders for a GPU that will cost over $1k, without concrete data on its performance to justify the purchase price, doesn't make any sense. I have other expensive hobbies as well, but this is just one component in my PC. The good thing with a pre-order, especially non-reference, is those seem to be shipping in October. So if this thing is amazing, I will have secured one. If not, I'll cancel the pre-order and get my money back when the reviews release in September.

But if we use logic: the 1080 Ti is 30-40% faster than a regular 1080. If it's true that a 2080 is faster than a 1080 Ti, then a 2080 Ti would logically be at least 30-40% faster than a regular 2080 (basically adding $100 for each 10% advantage to justify the $400 price difference), which would mean the 2080 Ti is probably around 60% faster than a 1080 Ti. That's a huge leap from the previous generation, I think, and would support the rumors that the 2080 Ti will handle 4K with ease.
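As a back-of-the-envelope check, chained "X% faster" claims multiply rather than add. The multipliers below are just the guesses from the post above (2080 ~15-20% over a 1080 Ti, 2080 Ti ~35-40% over a 2080), not measured numbers:

```python
def compound(*gains):
    """Compound a chain of relative speedups; 1.15 means '15% faster'."""
    total = 1.0
    for g in gains:
        total *= g
    return total

# Hypothetical multipliers from the reasoning above, not benchmarks.
low  = compound(1.15, 1.35)   # 1.15 * 1.35 = 1.5525 -> ~55% over a 1080 Ti
high = compound(1.20, 1.40)   # 1.20 * 1.40 = 1.68   -> ~68% over a 1080 Ti
print(f"2080 Ti vs 1080 Ti: +{(low - 1) * 100:.0f}% to +{(high - 1) * 100:.0f}%")
```

Which is roughly where the "around 60%" figure lands if both guesses hold.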


----------



## rx7racer

Glerox said:


> Anyone wanna bet Jensen will release a Titan RTX with fully enabled TU102 with 4608 CUDA cores and 12GB VRAM before the end of the year?


That will happen for sure; look at the Quadro RTX 8000, it's the full chip. As soon as they have excess dies, we will have it.

What would you rather make per card, $10k or a pitiful $2k-$3k?


----------



## BigMack70

ClashOfClans said:


> that is a huge leap I think from the previous generation


Let's remember that Fermi --> Kepler was about an 80-90% performance jump
Kepler --> Maxwell was about 45%
Maxwell --> Pascal was like 65-70%

If Pascal --> Turing is anything less than 40% it's horribly disappointing compared to what architectural jumps have historically provided.


----------



## BigMack70

https://blogs.nvidia.com/blog/2018/08/22/geforce-rtx-60-fps-4k-hdr-games/?linkId=100000003301637

Performance numbers starting to appear for RTX series... looks like 2080 is ballpark 40-50% faster than a 1080, and assuming this is representative of overall performance, it looks like my original statement of expecting about 50% performance improvement from 2080 Ti over 1080 Ti won't be that far off, contrary to what some naysayers have been spouting off about in this thread.


----------



## Edge0fsanity

BigMack70 said:


> https://blogs.nvidia.com/blog/2018/08/22/geforce-rtx-60-fps-4k-hdr-games/?linkId=100000003301637
> 
> Performance numbers starting to appear for RTX series... looks like 2080 is ballpark 40-50% faster than a 1080, and assuming this is representative of overall performance, it looks like my original statement of expecting about 50% performance improvement from 2080 Ti over 1080 Ti won't be that far off, contrary to what some naysayers have been spouting off about in this thread.


40-50% was about what I expected as well, 30% worst-case scenario. I felt strongly enough about it to pre-order the 2080 Ti on launch day. My guess is these cards will really shine in DX12 over Pascal as well.


----------



## chakku

BigMack70 said:


> Let's remember that Fermi --> Kepler was about an 80-90% performance jump
> Kepler --> Maxwell was about 45%
> Maxwell --> Pascal was like 65-70%
> 
> If Pascal --> Turing is anything less than 40% it's horribly disappointing compared to what architectural jumps have historically provided.


None of these numbers are even accurate. Whose ass did you pull these made-up numbers from?


----------



## Crinn

Notty said:


> Coming here and seeing all these people pre-ordering a graphics card when they don't even know how it performs, its power consumption, etc., makes me cringe. This is why Nvidia is turning into the "Apple of PC Gaming": because of guys like you.
> 
> Hope you all end up playing PC games with a 500-player base on most of them. Nvidia today just settled that high-end PC gaming is a luxury and for a niche.
> 
> I imagine a GTX/RTX 2060 will cost 450€ here. And that's a mid-range GPU on which we know we must make compromises in game settings. Disgusting.


You do realize that the pre-orders can be canceled at any time right?

If the benchmarks come out and we discover that the 20xx cards have unsatisfactory performance improvements then we can just cancel our orders at no cost to us.


----------



## BigMack70

chakku said:


> None of these numbers are even accurate, whose ass do you pull these made up numbers from?


OMG the ignorance of some people. 

GTX 580 (big Fermi) --> GTX 780 Ti (big Kepler) was 80-90%+. You can get this average by comparing the 780 to the 580 and then the 780 Ti to the 780. Alternatively, you can compare midrange Fermi (GTX 560 Ti) to midrange Kepler (GTX 680).

780 Ti (big Kepler) --> Titan X (big Maxwell) was about 45%

Titan X (big Maxwell) --> 1080 Ti/Titan Xp (big Pascal) was about 75-80%

The ~50% performance boost that it looks like we're getting from Turing - as I originally guessed by the way and which you've been trying to call snake oil on - is exactly in line with the past decade of GPU history for new architectures.

And before anyone says something stupid about "but teh GTX 680 was teh sucksessor to teh GTX 580"... go look up the die sizes on the chips. Look at the tech. Not the marketing.
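Compounding the per-generation figures claimed above (the percentages are the poster's estimates, not measured data) shows how they stack across the decade:

```python
# Poster's claimed flagship-to-flagship gains, as multipliers (estimates, not benchmarks).
gains = {
    "Fermi -> Kepler":   1.85,   # "80-90%+", midpoint
    "Kepler -> Maxwell": 1.45,   # "about 45%"
    "Maxwell -> Pascal": 1.775,  # "about 75-80%", midpoint
}

total = 1.0
for step, g in gains.items():
    total *= g
    print(f"{step}: x{g:.3f}")

# Cumulative flagship gain, GTX 580 era through 1080 Ti era, under these estimates.
print(f"Fermi -> Pascal cumulative: ~x{total:.1f}")
```

Under those assumptions the cumulative flagship jump works out to roughly 4.8x, which is the baseline the post is measuring Turing against.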


----------



## renejr902

I don't know if someone posted it already, but Turing cards should be much faster than Pascal in non-RTX games, even non-DLSS games. See this:
https://www.tomshardware.com/news/nv...zed,37679.html

Going by this Nvidia benchmark, should we still expect +35% performance for the 2080 Ti vs the 1080 Ti in non-RTX, non-DLSS games? Thanks for your opinion, and check out that impressive benchmark.


----------



## raider89

I see posts with people trying to compare numbers from the 2080 Ti to the 1080 Ti. I thought he said in the keynote that, since they calculated everything completely differently, you couldn't use that information to compare the two. To my understanding, you can't compare two different architectures using specs anyway, or did I miss something?


----------



## raider89

CJMitsuki said:


> 2080ti hit 33-48 fps with Ray Tracing at 1080p according to multiple sources at Gamescom. How that translates to what it will do with Ray Tracing disabled has yet to be seen. Most estimates are around 15-20% performance increase over 1080ti which are still only estimates. Time will tell, but 20% would be quite the disappointment for the price point to say the least. Im hoping its better than that but given the fps with RT on at 1080p does seem concerning. Ill wait to upgrade until after some benchmarks, if it is 30%+ increase I will buy but if 15-20% then 1080ti seems like a more effective buy with the dropping prices.
> 
> Sources and Info
> 
> 
> Spoiler
> 
> 
> 
> RedGamingTech with information about the gamescom sources
> 
> Shadow of the Tomb Raider at Gamescom (off screen gameplay)
> 
> Hardware Unboxed Estimates


I really wouldn't use that as any type of actual performance expectation lol.


----------



## ClashOfClans

BigMack70 said:


> https://blogs.nvidia.com/blog/2018/08/22/geforce-rtx-60-fps-4k-hdr-games/?linkId=100000003301637
> 
> Performance numbers starting to appear for RTX series... looks like 2080 is ballpark 40-50% faster than a 1080, and assuming this is representative of overall performance, it looks like my original statement of expecting about 50% performance improvement from 2080 Ti over 1080 Ti won't be that far off, contrary to what some naysayers have been spouting off about in this thread.


If it's true, then the value is worth it. They are showing the RTX 2080 getting 60fps and up in 4K. A lot of people are willing to pay for that in and of itself. Honestly, I think a 2080 is all I really need if that's the case... why am I getting a 2080 Ti if the RTX 2080 is already doing this:

https://image.ibb.co/iM4foK/20804k.jpg

Maybe I get a bit more headroom at 4K, but it would appear the 2080 Ti is targeting guys gaming at 4K on monitors with refresh rates of 75Hz and up.


----------



## GraphicsWhore

ClashOfClans said:


> If it's true than the value is worth it. They are showing the rtx 2080 getting 60fps and up in 4k. A lot of people are willing to pay for that in of itself. Honestly I think a 2080 is all I really need if that's the case...why am I getting a 2080 Ti if the rtx 2080 is already doing this:



Games are obviously going to continue to get more demanding, so eventually (and unfortunately, as is usually the case, it won't be too long from now) you'll get to a point where the 2080 averages, say, mid-40 fps in various games at 4K, whereas the Ti would be doing 60. And then you'll wish you had gotten a Ti.

Whether that's worth the extra upfront cost to you personally is another matter; also depends how long you intend to wait before getting a new card.


----------



## Zurv

pooo.. I guess it is time to buy stuff again.

I'm really bummed that HDMI 2.1 isn't there. I'm really looking forward to seeing NVLink in action for gaming. I'm not as down on SLI as everyone else seems to be, but improvements would be cool (like using the shared memory pool of both cards).
A Titan Xp (or 1080 Ti) wouldn't beat the V I'm using now, but the V is still pretty sweet. I'm 100% sure 2080 Ti NVLink will beat the V too. Clearly, if the 2080 Ti is faster than a V, then it's a no-brainer. But if it's about the same.. erp.. maybe I don't feel like redoing my loop.


I'm not liking the limits on order numbers. I'd like 4 (2 for 2 computers). I hope Bitspower gets their blocks out soon. I'm not using EK junk anymore.
I was VERY unhappy with the 1080 (I got 9 at launch, and it didn't help that Nvidia lied about 4-way support). Then a Titan came out and I sold all the 1080s.

I can sell my Titan Vs for almost what I paid for them, which pays for this upgrade.

As always, i blame @Jpmboy for making me buy all this stuff. It is clearly him and not my lack of self control.


----------



## merlin__36

Thinking about pre-ordering the FE 2080 Ti and just selling my two copper-waterblocked FE 1080 Tis if I can get higher FPS.
I could also sell the Titan X Pascal sitting in my old rig.


----------



## <sigh>

*I'm probably gonna regret this but....*

Left it a bit late, but I haven't missed a launch in a decade and I'm not starting now... FE pre-ordered; the benches had better be good!


----------



## Glerox

Zurv said:


> pooo.. I guess it is time to buy stuff again.
> 
> I'm real bummed that HDMI 2.1 isn't there. I'm really looking forward to see NVLINK in action for gaming. I'm not down like everyone else seems to be on SLI, but improvements would be cool. (like using the shared mem pool of both cards.)
> Titan Xp (or 1080 ti) would beat the V i'm using not, but the V is still pretty sweet. I'm 100% sure 2080 TI NVLINK will beat the V too. Clearly if the 2080 TI is faster than a V then it is a no brainer. But if about the same.. erp.. maybe i don't feel like redoing my loop.
> 
> 
> I'm not liking the limits on order numbers. I like i'd like 4. (2 for 2 computers). I hope bitspower gets thier blocks out soon. I'm not using EK junk anymore.
> I was VERY unhappy with the 1080 (I got 9 at launch and it didn't help that nvidia lied about 4 way support.) Then a titan came out and sold all the 1080s.
> 
> I can sell my Titan Vs at almost the cost i paid for them - which pays for this upgrade.
> 
> As always, i blame @Jpmboy for making me buy all this stuff. It is clearly him and not my lack of self control.


Hmmm I think you need to see a therapist LOL


----------



## carlhil2

Can't wait to see the OC potential...


----------



## Zurv

Glerox said:


> Hmmm I think you need to see a therapist LOL


shh.. i must be in the top 10 3dmark benchmarks because: fast computer + ???? == profit

right? i'm pretty sure that checks out.


----------



## merlin__36

Zurv said:


> shh.. i must be in the top 10 3dmark benchmarks because: fast computer + ???? == profit
> 
> right? i'm pretty sure that checks out.


What does ???? =


----------



## Zurv

merlin__36 said:


> What does ???? =


I'm not sure yet.. but when i do.. profit!


----------



## Vow3ll

EVGA GeForce RTX 2080 Ti XC ULTRA GAMING (Newegg)

edit: https://wccftech.com/nvidia-geforce-rtx-2080-gaming-performance-benchmarks-2x-over-gtx-1080-4k/


----------



## OwnedINC

dante`afk said:


> no one else unimpressed by this RTX marketing BS 'awesome' uhh so 'greaaat' and the resulting ridicolous price?
> 
> I'll wait for actual tests, totally snoozefest so far.


You'll wait because you have no choice, they're back ordered for months at this point.


----------



## emett

OwnedINC said:


> You'll wait because you have no choice, they're back ordered for months at this point.



lol, back ordered months? As soon as non reference cards appear those preorders will vanish.


----------



## BigMack70

emett said:


> lol, back ordered months? As soon as non reference cards appear those preorders will vanish.


All the pre-orders for those are sold out as well. These cards are going to be available to a few select F5 warriors only for several months, unless Nvidia reserved some stock to not be available for pre-orders and only for sale after 9/20.

Good luck getting one of these by November if you didn't pre-order.


----------



## Jpmboy

Zurv said:


> pooo.. I guess it is time to buy stuff again.
> 
> I'm real bummed that HDMI 2.1 isn't there. I'm really looking forward to see NVLINK in action for gaming. I'm not down like everyone else seems to be on SLI, but improvements would be cool. (like using the shared mem pool of both cards.)
> Titan Xp (or 1080 ti) would beat the V i'm using not, but the V is still pretty sweet. I'm 100% sure 2080 TI NVLINK will beat the V too. Clearly if the 2080 TI is faster than a V then it is a no brainer. But if about the same.. erp.. maybe i don't feel like redoing my loop.
> 
> 
> I'm not liking the limits on order numbers. I like i'd like 4. (2 for 2 computers). I hope bitspower gets thier blocks out soon. I'm not using EK junk anymore.
> I was VERY unhappy with the 1080 (I got 9 at launch and it didn't help that nvidia lied about 4 way support.) Then a titan came out and sold all the 1080s.
> 
> I can sell my Titan Vs at almost the cost i paid for them - which pays for this upgrade.
> 
> As always, i blame @*Jpmboy* for making me buy all this stuff. It is clearly him and not my lack of self control.



lol - "He started it". (But I'll probably get a couple eventually. I need to see actual performance data, not NV's marketing F news.) Take a look at Nvidia's promo slides... it looks as though the 2080's RT performance is higher than the 2080 Ti's? Unless that's just a mistake.



Glerox said:


> Hmmm I think you need to see a therapist LOL


Don't bother unless the therapist has a 65" 4K OLED monitor with 120Hz native refresh.


----------



## ITAngel

Is it worth getting one of these cards when you run 2x SLI GTX 1080 Ti on a custom water loop?


----------



## Jpmboy

ITAngel said:


> Is it worth getting one of these cards when you run 2x SLI GTX 1080 Ti on a custom water loop?


No... unless you have games that do not scale well with SLI, and unfortunately there are more and more of those.


----------



## ClashOfClans

I'm still seeing the RTX 2080 FE on Best Buy. I don't think many people think to go to Best Buy for high-end video cards, but they are there, people... guess the cat's out of the bag now, so I'm guessing they'll be sold out after I post this today. The ship date shows October for the 2080 Ti, but the 2080 is showing a ship date of September 21:

https://www.bestbuy.com/site/nvidia...ess-3-0-graphics-card/6291646.p?skuId=6291646

https://www.bestbuy.com/site/nvidia...ess-3-0-graphics-card/6291648.p?skuId=6291648

https://www.bestbuy.com/site/pny-ge...hics-card-silver-gray/6290686.p?skuId=6290686

https://www.bestbuy.com/site/msi-ge...ess-3-0-graphics-card/6290654.p?skuId=6290654

https://www.bestbuy.com/site/pny-ge...aphics-card-black-red/6290690.p?skuId=6290690


----------



## ITAngel

So pretty much skip the 2000 series cards?


----------



## alanthecelt

ITAngel said:


> So pretty much skip the 2000 series cards?


A bit early to say...
I was not impressed with SLI, so I stuck with a single 1080 Ti, which suited me, but if I wanted a 60fps experience playing on my TV rather than my monitor, it was a no-go.
So the 20 series makes sense; the 2080 Ti can only be quicker in conventional titles. I'm excited by the overclocking claims the Nvidia CEO made. I would like to know more about SLI over the wide-bandwidth connector (NVLink), but from what I've heard, the SLI experience is essentially the same as before...


----------



## Jpmboy

ITAngel said:


> So pretty much skip the 2000 series cards?



I'm not saying that at all. But 2x 1080 Ti vs 1x 2080 Ti is an easy choice. No doubt a lot of folks with one 1080 Ti will be picking up a used one at a good price shortly.


----------



## ITAngel

Jpmboy said:


> I'm not saying that at all. But 2x 1080 Ti vs 1x 2080 Ti is an easy choice. No doubt a lot of folks with one 1080 Ti will be picking up a used one at a good price shortly.


Oh lol, yeah, true... I guess I didn't mean you said that; I was just clarifying if that's what you meant. XD Damn, it's too early for me! lol


----------



## Nervoize

I am wondering how far these cards will overclock. If they go over 2000MHz easily, I might consider an upgrade. I'm currently sitting at 2176MHz with my 1080 Ti, which is more than enough to run most games over 100 fps at ultra settings at 1440p. Gotta wait till the waterblocks are released though, since I am running a custom loop and I prefer EVGA over any other brand.


----------



## GraphicsWhore

Nervoize said:


> I am wondering how far these cards will overclock. If they go over 2000+ easily, i might concider an upgrade. Currently sitting at 2176Mhz with my 1080Ti, which is more than enough to run most game over 100 fps at ultra settings 1440p. Gotta wait till the waterblocks are released though, since i am running a custom loop and i prefer EVGA over any other brand.


Well, if nVidia, EVGA, or EK could answer a simple f'ing question about whether the launch EVGA partner cards use the reference PCB, then you wouldn't have to wait, as EK and Alphacool will both have blocks for reference cards available on release day.

I've got an EVGA XC 2080 Ti on pre-order coming 9/21 but also pre-ordered an FE because of this unknown, and that's not coming until 10/8.

Yet, somehow, this is a question EVGA and nVidia can't definitively answer and EK doesn't even bother responding.



ClashOfClans said:


> I'm still seeing the rtx 2080 fe on best buy. I don't think many people think to go to best buy to get a high end video cards but they are there people...guess cats out of the bag now though so they'll be sold out I'm guessing after I post this today. Ship date shows october though for the 2080 Ti...but the 2080 is showing a ship date of September 21:
> 
> https://www.bestbuy.com/site/nvidia...ess-3-0-graphics-card/6291646.p?skuId=6291646
> 
> https://www.bestbuy.com/site/nvidia...ess-3-0-graphics-card/6291648.p?skuId=6291648



Thanks for posting. I wish I had immediately checked Best Buy for a Ti FE when I was only able to get one from the Oct 8th batch on nVidia's site; I might have caught one from the first batch.

Just pre-ordered one at Best Buy anyway, because I'd rather pick one up in-store a few minutes away than wait for shipping from nVidia. Still going to hold on to the nVidia pre-order for a bit though, and see if maybe I get lucky and the ship date changes to something earlier.


----------



## Recipe7

Word on the street is that the $999 variants, such as blower and twin-fan options, will be available on September 20th. Those of you pre-ordering with November ship dates may want to reconsider waiting. I am patiently waiting for 1) benchmarks and 2) EKWB blocks or a Hydro Copper.


----------



## GraphicsWhore

Anyone want to swap?

If you have a pre-order for 2080Ti Founder's taking delivery from initial batch (9/20) I'd do a straight swap for an EVGA 2080Ti XC Gaming (being delivered 9/21).

I'd rather have a Founder's since I'm on custom loop to ensure no issues with the first wave of blocks coming out for these and I do have a Founder's on pre-order but it's not shipping until 10/8. I'm impatient and don't want to wait.

We could put up eBay auctions with Buy It Now and immediately pay each other the same amount. 

PM me if interested.


----------



## Okt00

carlhil2 said:


> Can't wait to see the OC potential...


Yea for sure! 

https://youtu.be/_D_dmvTH4iE?t=167
In this video, at ~2:40 in, Peterson touches on overclocking...

I know there is so much marketing fluff getting thrown around; with NVIDIA following their release schedule for the next few weeks, it's pretty much grain-of-salt territory... but Peterson saying the OCs are gonna be huge was fun.


----------



## rolldog

Jpmboy said:


> lol - "He started it".  (but I'll probably get a couple eventually. I need to see actual performance data, not NV's marketing F news). Take a look at Nvidia's promo slides... looks as tho the 2080's RT performance is higher than the 2080Tis'? Unless that's just a mistake.


I was thinking the same thing, which is why I'm holding off for now. The clock speeds of the 2080 are much higher than the 2080 Ti's, but the 2080 Ti has more CUDA cores. I really want to see some third-party reviews and performance tests instead of relying on NVDA's overhyped marketing info.

NVDA and GEICO must share the same marketing team, along with Brinker Intl, parent company of Olive Garden and Red Lobster. They sure know how to make something you know tastes horrible look irresistible. I gotta hand it to them though; no one can make a commercial that good.
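On the clocks-versus-cores point above, one rough way to frame it is peak FP32 throughput: 2 FLOPs per CUDA core per cycle (FMA) at the reference boost clock. This ignores memory bandwidth, architecture, and actual sustained clocks, so it's only a sketch:

```python
def fp32_tflops(cuda_cores, boost_mhz):
    """Peak FP32 TFLOPS: 2 FLOPs (FMA) per CUDA core per clock."""
    return 2 * cuda_cores * boost_mhz / 1e6

# Reference-spec core counts and boost clocks.
rtx_2080    = fp32_tflops(2944, 1710)  # ~10.1 TFLOPS
rtx_2080_ti = fp32_tflops(4352, 1545)  # ~13.4 TFLOPS
print(f"2080 Ti / 2080 peak FP32 ratio: {rtx_2080_ti / rtx_2080:.2f}x")
```

So on paper the extra cores outweigh the 2080's clock advantage, though real game performance won't track this number exactly.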


----------



## GraphicsWhore

rolldog said:


> I was thinking the same thing, which is why I’m holding off for now. The clock speeds of the 2080 are much higher than the 2080ti, but the 2080ti has more CUDA cores. I really want to see some third party reviews and performance tests instead of relying on NVDA’s overhyped marketing info.
> 
> NVDA and GEICO must share the same marketing team, along with Brinker Intl, parent company of Olive Garden and Red Lobster. They sure know how to take something you know tastes horrible irresistible. I gotta hand it to them though, no one can make a commercial that good.


What is "irresistible" about Olive Garden and Red Lobster? lol.


----------



## zhrooms

GraphicsWhore said:


> I'd rather have a Founder's since I'm on custom loop to ensure no issues with the first wave of blocks coming out for these and I do have a Founder's on pre-order but it's not shipping until 10/8.


 
It is very unlikely that the 'EVGA XC Gaming' uses a custom PCB not compatible with EK reference blocks. Almost all cards unveiled use the reference board and 16 power phases.

The product page for the EVGA XC Gaming does not make it sound like it uses a non-reference PCB; it just has a redesigned iCX2 cooler and 2x 8-pin, just like the Founders Edition.

Also, the 'EVGA GeForce GTX 1080 Ti SC Gaming' (the XC Gaming's predecessor), as an example, was compatible with EK FE (reference PCB) blocks.


----------



## rolldog

GraphicsWhore said:


> What is "irresistible" about Olive Garden and Red Lobster? lol.


You misread my post; nothing is. But their marketing hype (commercials) makes it look irresistible, when everyone knows it's not.


----------



## rolldog

The new 2080 Ti from MSI has 2x 8-pin power connectors plus a 6-pin power connector. I've never even seen a consumer GPU with 3 power connectors on it.


----------



## BigMack70

rolldog said:


> The new 2080ti from MSI has 2 x 8 pin power connectors plus a 6 pin power connector. I've never even seen a consumer based GPU with 3 power connectors on it.


It happens occasionally. Some of the MSI Lightning models have had 3; I think the 1080 Ti Lightning even had 3x 8-pin.


----------



## rolldog

The GALAX GeForce RTX 2080 Ti 11GB HOF Limited Edition has 3 too. If I had to guess, they're probably loaded with so much RGB stuff that it needs a dedicated power connector.


----------



## BigMack70

rolldog said:


> GALAX GeForce RTX 2080 Ti 11GB HOF Limited Edition Graphics Card has 3 too. If I had to guess, they're probably overloaded with so much RGB stuff that it needs a dedicated power connector.


I think it's just so that the cards can properly support the power delivery for LN2 overclocking, but I might be wrong.
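For what it's worth, the connector ratings themselves show the headroom: per the PCIe CEM spec, the slot supplies up to 75W, a 6-pin up to 75W, and an 8-pin up to 150W. A quick sum shows what each layout allows:

```python
# Spec-rated power per source (PCIe CEM): slot 75 W, 6-pin 75 W, 8-pin 150 W.
CONNECTOR_WATTS = {"slot": 75, "6pin": 75, "8pin": 150}

def board_power_budget(*connectors):
    """Total spec-rated power for a card fed by the slot plus the given aux connectors."""
    return CONNECTOR_WATTS["slot"] + sum(CONNECTOR_WATTS[c] for c in connectors)

print(board_power_budget("8pin", "8pin"))          # 375 W (reference 2x 8-pin layout)
print(board_power_budget("8pin", "8pin", "6pin"))  # 450 W (2x 8-pin + 6-pin boards)
```

That extra 75W of rated budget over the reference layout is exactly the kind of margin an LN2-oriented board would want.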


----------



## GraphicsWhore

rolldog said:


> You misread my post, nothing is, but their marketing hype (commercials) make it look irresistible, when everyone knows it's not.


Yeah I guess I meant even their marketing can't make their crap look irresistible but I get what you're saying. After all, they are still somehow in business...


----------



## NewType88

When do Strix or FTW3 variants usually come out? 1-3 months after the launch on the 20th of Sept?


----------



## Madness11

NewType88 said:


> When do strix of ftw3 variants usually come out ? 1-3 months after launch on the 20th of sept ?


Why 3 months, bro? )) September 21, I think, or end of September.


----------



## NewType88

Madness11 said:


> wHY 3 BRO ?)) 21 september i think  or end September


I dunno, I thought some of the higher-end models came out later than the initial launch because they're more custom. I never paid attention to the 1080 Ti launch because I wasn't going to buy one.


----------



## Madness11

I bought an Asus Strix 1080 Ti when all the 1080 Tis launched.


----------



## NewType88

Oh nice ! Did you order it online or buy it in stores ?


----------



## Madness11

NewType88 said:


> Oh nice ! Did you order it online or buy it in stores ?


Bought it from a local shop, with a little overpricing )


----------



## Jpmboy

rolldog said:


> I was thinking the same thing, which is why I’m holding off for now. The clock speeds of the 2080 are much higher than the 2080ti, but the 2080ti has more CUDA cores. I really want to see some third party reviews and performance tests instead of relying on NVDA’s overhyped marketing info.
> 
> NVDA and GEICO must share the same marketing team, along with Brinker Intl, parent company of Olive Garden and Red Lobster. They sure know how to take something you know tastes horrible irresistible. I gotta hand it to them though, no one can make a commercial that good.


lol - olive garden. 





rolldog said:


> The new 2080ti from MSI has 2 x 8 pin power connectors plus a 6 pin power connector. I've never even seen a consumer based GPU with 3 power connectors on it.





BigMack70 said:


> I think it's just so that the cards can *properly support the power delivery for LN2 overclocking*, but I might be wrong.



^^ This.


----------



## rolldog

Well, it seems like some 3rd-party manufacturers might not be releasing their versions of the RTX 20 series as soon as we think. Nvidia is forcing "more than 10 graphics card makers" to "swallow contracted shipments released by Nvidia to deplete its inventories, in order to secure that they can be among the first batch of customers to get sufficient supply quotas of new-generation GPUs."


----------



## Vow3ll

Newegg just restocked and has this deal: EVGA GeForce RTX 2080 XC GAMING $749.99 (Save: $50.00)
https://www.newegg.com/Product/Prod...Homepage_Dailydeal-_-P1_14-487-404-_-08232018


----------



## ClashOfClans

Cancelled my 2080 Ti preorder. I just realized I was spending $1,300 on a video card to play one game at 4K (Fallout 4). It made no sense. And I've got older games on Steam I haven't even tackled yet, like Rise of the Tomb Raider and a bunch of Far Cry games, that the 1060 will handle fine at 1080p.

- I also considered the doubled TDP in my ITX rig (120W vs 260W), which doesn't have the best airflow, and I really don't want to wait around for a full-coverage GPU waterblock.

- My 40-inch Sony 4K IPS monitor upscales well enough that 1080p upscaled to 4K looks close to native 4K. Of course 4K looks crisper in a screenshot, but since the monitor upscales so well, it's hard to tell when gaming. This was evident when comparing FO4 quality at 4K vs 1080p upscaled to 4K.

I still think the 2080 Ti is worth $1,300 for 4K, though. Just not in my particular case, seeing as I'm more of a casual gamer right now.


----------



## toncij

It worries me a bit. Non-RTX performance numbers are nowhere to be found and changes to the SMs don't look like anything that could grant much in that department. TITAN V was "so-so" with almost 1k more FP32 cores and better memory. I don't see a trick NV might pull to get Turing to catch up with TITAN V on the same clock (let's take 2GHz as comparison).


----------



## Madness11

toncij said:


> It worries me a bit. Non-RTX performance numbers are nowhere to be found and changes to the SMs don't look like anything that could grant much in that department. TITAN V was "so-so" with almost 1k more FP32 cores and better memory. I don't see a trick NV might pull to get Turing to catch up with TITAN V on the same clock (let's take 2GHz as comparison).


Tests are coming September 14th.


----------



## keikei

ITAngel said:


> Is it worth getting one of these cards when you run 2x SLI GTX 1080 Ti on a custom water loop?


I think it's a given: if you want RT/DLSS in the few games that will have it at launch, the RTX lineup will be the way to go. Your Tis would have to work much harder for the new tech to work. We've yet to see it implemented properly since the tech is so early, so it'd be wise to wait, but as both of us are lurking in this thread, waiting is easier said than done.

Good news reading that the $999 price for the 'normal' 2080 Ti actually exists. It seemed doubtful to me when first reported.


----------



## toncij

Still, that's way too long to wait. - I'm trying to think of a reason, or even a change, that would give NV more per-core performance. Parallel integer calculations are less relevant; as a developer I usually use FP32 for integer work because of the performance difference (it's significantly faster). Those ops are also much rarer in consumer software, so I'm not sure how they expect that change to affect gamers in much volume.
NV's cache is already good enough; not much to be done there without skyrocketing the production price. The core count is 4352 or something like that, which in comparison to a Titan looks much slower, and we know how the TV performs...

With 4352 cores for the 2080 Ti and 5120 cores for the TITAN V, I'd guess Turing will deliver about 50% of the speed-up the TITAN V was. From GN's video, Sniper Elite 4 enjoyed a 64% speed-up with the TV, which puts the 2080 Ti at 32%.

And that's the best case. From my own experience, when I had a (not overclocked) TV, the speed-up in something like The Witcher 3 was about 25% in most cases.
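
That halving falls out of core counts over the 1080 Ti; here's a back-of-the-envelope sketch in Python, assuming performance scales linearly with CUDA cores at equal clocks (a big simplification, not a claim from any spec sheet):

```python
# Core counts: GTX 1080 Ti, TITAN V, RTX 2080 Ti
cores_1080ti, cores_titan_v, cores_2080ti = 3584, 5120, 4352

tv_uplift = cores_titan_v / cores_1080ti - 1   # ~43% more cores than 1080 Ti
ti_uplift = cores_2080ti / cores_1080ti - 1    # ~21% -- about half of TV's

# Scale the observed Sniper Elite 4 gain (64% on TITAN V) by that ratio
se4_estimate = 0.64 * (ti_uplift / tv_uplift)
print(f"TV uplift {tv_uplift:.0%}, 2080 Ti uplift {ti_uplift:.0%}")
print(f"Estimated SE4 speed-up on 2080 Ti: {se4_estimate:.0%}")  # ~32%
```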


----------



## stefxyz

Well, my best indicator is the Battlefield 1 figure of 84 fps at 4K HDR on the 2080 from the charts. If that's on Ultra, then I am very happy, because my Titan (Pascal), heavily OC'd to 2000 MHz and +400 on memory, does roughly 67 to 82 fps at 4K HDR on my Asus PG27UQ. That's already a ~20% OC under water; if a FE 2080 beats it, then I can expect huge gains from a watercooled, overclocked 2080 Ti with significantly more shaders, even in games that don't support the new RTX features. I am not concerned at all. Nvidia might cherry-pick the games and the settings (4K HDR), but usually they do not lie.

Even with a mild overclock, BF1 will sit clearly north of 100 fps at 4K HDR on a watercooled 2080 Ti.

My theory is that they want to get rid of their old stock, which is why they keep performance numbers low and let the internet mob do the rest. People will see cheap prices on old models and think they're getting a good deal, since performance looks mediocre on the new cards. Boy, they might very well be making a huuge mistake there....


----------



## Newbie2009

toncij said:


> It worries me a bit. Non-RTX performance numbers are nowhere to be found and changes to the SMs don't look like anything that could grant much in that department. TITAN V was "so-so" with almost 1k more FP32 cores and better memory. I don't see a trick NV might pull to get Turing to catch up with TITAN V on the same clock (let's take 2GHz as comparison).


I wouldn't worry too much; you can always cancel the order if performance is only ~25% over a 1080 Ti.

For the price, it needs to be about 50%. Imo the price is ******ed, but I will vote with my wallet either way.


----------



## Rob w

Hey, just a quick question guys: am I correct that if I ordered a 2080 Ti on the night of Gamescom, then it is a Founders Edition?
Also, is the Founders Edition higher spec, or is there something else special about it?
I feel a bit of a noob having to even ask this question.


----------



## keikei

Rob w said:


> Hey just a quick question guys, am I correct that if I ordered a 2080ti on the night of games con then it is a founders edition?
> Also, is the founders edition higher spec or is there something else special about it?
> I feel a bit of a noob having to even ask this question.



The FE is OC'd an additional 100 MHz, but everything else looks to be identical. The FE is also $1200. https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080-ti/


----------



## Mad Pistol

This will be interesting to see, indeed.

I still have a HB SLI bridge from my 1070 SLI setup, so it would be "easy" to pick up a used 1080 Ti for some SLI action... although, I'm not sure my 750-watt EVGA PSU would be up to the task, so I'd probably end up getting a 1000-1200 watt unit.

I'm conflicted. Part of this has me energized to see what the future holds, but part of me is also saying that PC components are starting to get too expensive for what they can produce. I understand that Nvidia needed to increase price to maintain margins (the GPU dies are HUGE!!!) but it still makes it even more uncomfortable for those of us that want the best.


----------



## Artah

Anyone find a company making blocks yet? I wonder if the Titan V blocks will fit; anyone have any info? I'm hoping the boards for the Ti and the regular FE are the same. I ordered 2 of each; wish I could have ordered 4 instead. I'd trade someone 2 BNIB non-Tis for a Titan V though.


----------



## alanthecelt

EK
Alphacool - pre-order
Heatkiller


----------



## ITAngel

Mad Pistol said:


> This will be interesting to see, indeed.
> 
> I still have a HB SLI bridge from my 1070 SLI setup, so it would be "easy" to pick up a used 1080 Ti for some SLI action... although, I'm not sure my 750-watt EVGA PSU would be up to the task, so I'd probably end up getting a 1000-1200 watt unit.
> 
> I'm conflicted. Part of this has me energized to see what the future holds, but part of me is also saying that PC components are starting to get too expensive for what they can produce. I understand that Nvidia needed to increase price to maintain margins (the GPU dies are HUGE!!!) but it still makes it even more uncomfortable for those of us that want the best.


I am running dual AORUS GeForce GTX 1080 Ti Waterforce WB Xtreme Edition 11G cards on a Seasonic FOCUS Gold 850W unit just fine, on a Threadripper system. But I think 750W would be pushing it.


----------



## rush2049

I see lots of people and media personalities missing some information that is already known and confirmed.
I am going to speculate a little bit on some of this, but it is highly informed.

*In regards to processor code names:*
The Quadro RTX 8000 and 6000 AND the GeForce RTX 2080 Ti use the *TU102 chip*
(4,608 CUDA cores, 576 Tensor cores, 384-bit memory bus, NVLink, 754mm²); each card is cut down from this.

The Quadro RTX 5000 AND the GeForce RTX 2080 and 2070 use the *TU104 chip*
(3,072 CUDA cores, 384 Tensor cores, 256-bit memory bus); each card is cut down from this. The # of Tensor cores may not be 100% correct.

The notable differences being the missing NVLink and the significantly different full chip sizes. I think at this point only Gamers Nexus and der8auer have shown the two different chips, when they tore down a 2080 and a 2080 Ti.

-----------------EDIT--------------------------
The 2080 also has the NVLink connector, but I suspect the TU104 chip has a cut-down / slower implementation of it. Maybe I am wrong.
-----------------END EDIT--------------------
See here for videos of the teardown at Gamescom: [two embedded videos]
The TU104 has no metal frame around the die, but that was a qualification sample card, not retail at all. The TU102 sits higher on the card, closer to the NVLink interface, and is noticeably larger.



*Regarding NVLink*: In the Nvidia DGX-2 supercomputer they have dedicated NVSwitch silicon to facilitate communication between 16 GV100 dies. This presents all the cores as a single compute resource to the main processor, and also handles all communication between the cards instead of using the PCIe bus.

I do not think we are going to get any NVSwitch silicon for the GeForce version of NVLink. However, I suspect (and this is me speculating) that the 50 GB/s bi-directional communication between two cards will make them able to appear to the operating system as a single card. _Should maybe not call it SLI anymore._ Another feature mentioned during the SIGGRAPH presentation was that NVLink allows each card to directly access the other's memory, effectively adding together their memory pools. This leads me further into believing it will work as a single graphics card when utilizing NVLink between two GeForce cards.

-------------EDIT-------------------
The 2080 and 2080 Ti both have the NVLink connector. The 2080 is for sure the TU104 chip, while the TU102 chip on the 2080 Ti is the one I speculate can perform this way with its brothers. We will have to wait and see. Perhaps the NVLink silicon is also on the TU104 chip and is just a slower link?
-------------END EDIT-------------


*Regarding differences from Pascal to Turing*:

Seeing as how Nvidia retired the GeForce Titan brand and changed it to be solely Titan,
I think the 2080 Ti is what you could consider the Titan card of this gen,
and the 2080 is what you would consider the 1080 of this gen.
The large disparity in cores between the 2080 Ti on the TU102 chip and the 2080 on the TU104 chip leaves room for another card, which in previous generations would have been called the 1080 Ti.

_As far as I am aware there was no mention of any cutting of Tensor or RT Cores for the 2080 ti from the tu102 chip. I think that the tu104 chip has less to begin with and possibly the lower cards have some disabled, but I am not sure._


*Comparing the cuda core count and FE price at launch:*
_______ -> Titan V
____ -> 5120

Geforce GTX Titan Xp -> Geforce RTX 2080 ti
3840 -> 4352
$1200 -> $1200

Geforce GTX 1080 ti -> _future card, possibly a 7nm based one_
3584 -> 
$699 ->

Geforce GTX 1080 -> Geforce RTX 2080
2560 -> 2944
$699 -> $799

Geforce GTX 1070 ti -> _future card, possibly a 7nm based one_
2432 -> 
$449 -> 

Geforce GTX 1070 -> Geforce RTX 2070
1920 -> 2304
$449 -> $599

Notice that the 2070 to 2080 core count difference is 640 cores. 
From the 2080 to the 2080 ti the core difference is 1408 cores.
There is room for another card to split that 1408-core gap in half, say with a total count of *3648 cores???* This would obviously have to be on the TU102 die and be very cut down. It could be called the *GeForce RTX 2090*.
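
The gap arithmetic above, as a quick Python check (the 3648-core "RTX 2090" is of course hypothetical):

```python
# Known Turing CUDA core counts from the announcement
rtx_2070, rtx_2080, rtx_2080_ti = 2304, 2944, 4352

print(rtx_2080 - rtx_2070)     # 640-core gap, 2070 -> 2080
print(rtx_2080_ti - rtx_2080)  # 1408-core gap, 2080 -> 2080 Ti

# Splitting the big gap in half lands a hypothetical card at:
print(rtx_2080 + (rtx_2080_ti - rtx_2080) // 2)  # 3648 cores
```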

From last gen to this current gen we have core-count bumps of 384 cores from the 1070 and 1080 to their successors, and a 512-core bump from the Titan Xp to the 2080 Ti. This solidifies my assumption about the lineup placement and what each card directly replaces.

The Volta architecture of the Titan V did not replace anything; though it was the architecture surpassing Pascal, it does not have a direct replacement in its successor architecture either. Turing does not have a compute-focused SKU at the moment. The Quadro cards I do not consider aligned with it at all.

With these prices I see a slight $100 bump over last gen, or prices remaining constant in the case of the Titan-to-2080-Ti jump. These prices do not shock me at all. The silicon is much larger, GDDR6 is newer, memory is in shortage and pricing is high at the moment, and you could always throw in inflation.

Also, the launch of the 2080 Ti was not unexpected at all if you consider it just the next Titan card, which was typically released first or closely following the standard XX80 card each generation. Perhaps the timetable was sped up a tiny bit to make the coming 7nm cards seem further away, but on that point I am hesitant to commit.



*Regarding Turing Architecture Improvements: *

It was said that the CUDA cores can execute floating-point and integer operations at the same time, meaning the two paths are no longer shared and can run independently. Typically, integer throughput was quite a bit lower than floating-point throughput in previous generations; with Turing, floating-point and integer throughput are both 14 tera-operations per second, independent of each other.
Games typically do not have too many integer calculations, but shaders and some other compute tasks do, so there are some gains right there. (This is the section of the presentation that used the buzzword *TIPS*, which I assume stands for tera integer operations per second, as opposed to *TFLOPS*, tera floating-point operations per second.) The specification of the full TU102 chip was said to be 14 TFLOPS and 14 TIPS; slightly lower, I assume, for the slightly cut-down RTX 2080 Ti.
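
That 14-TFLOPS figure is roughly recoverable from core count x 2 ops/clock (FMA) x boost clock; a sanity check in Python, borrowing the 2080 Ti's ~1545 MHz boost clock for the full chip as well (an assumption):

```python
def tflops(cuda_cores, boost_ghz):
    # FP32 throughput: each core retires one FMA (2 FLOPs) per clock
    return cuda_cores * 2 * boost_ghz / 1000

print(f"Full TU102 : {tflops(4608, 1.545):.1f} TFLOPS")   # ~14.2
print(f"RTX 2080 Ti: {tflops(4352, 1.545):.1f} TFLOPS")   # ~13.4
```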

For the VR headset owners, there is *Variable Rate Shading*, which could also be called foveated rendering: rendering at higher quality in the area where your eyes are looking. This is a very large performance boost if you are using that tech, compared to previous gen.

There could also be small architecture improvements that are as of yet unknown to me. One that was just leaked is that the L1 cache is 2.7x bigger with double the bandwidth and lower hit latency, and the L2 cache is doubled in size. That alone will definitely lead to some improvement. https://videocardz.com/newz/nvidia-upgrades-l1-and-l2-caches-for-turing



Hope this was useful for some people in explaining the business and strategy of the release. I didn't talk about the new RTX tech at all, which is all bonus for anyone buying a new RTX card.

Maybe I should have started a new thread with this info... but it's my gift to all the future 2080 Ti owners, whom I will also be among.


----------



## Barefooter

^^ Great post!

Hope you are correct about the NVLink!


----------



## Zurv

@rush2049 nice post


----------



## Zurv

Barefooter said:


> ^^ Great post!
> 
> Hope you are correct about the NVLink!



I still haven't ordered the 2080 Ti(s) yet. If that is true about NVLink, I'm in!! (Otherwise I might just stick with my Titan Vs.)


----------



## Mad Pistol

Not sure if this has been posted yet, but...

[embedded video]

If this is legit... wow. I think I want an RTX 2080 Ti now.


----------



## rush2049

I don't know if that has been posted before here. But I saw someone else post a different recording of it on reddit.

I think this video from way back in March of this year really REALLY shows off what ray tracing means for the future of games and engines:

[embedded video]

I don't remember seeing that video back when it was new, but who knew it was a foreshadowing of what was to come in real time?


----------



## CallsignVega

If the rumors are true about SLI now going to appear as one GPU via NVLink, that would be beyond epic. NVIDIA would sell a ton more cards for SLI.

But I have my doubts. I feel something that revolutionary for gaming would have been talked about at the reveal. The only thing I can say is maybe NVIDIA wanted to leave that for the reviewers. I guess we will see on Sep 14th, unless it is leaked beforehand.


----------



## GraphicsWhore

Artah said:


> Anyone find a company making blocks yet? I wonder if the Titan V blocks will fit, anyone have any info? I'm hoping that the boards for ti and reg FE is the same, I ordered 2 of each, wish I could have ordered 4 instead. I'd trade someone for 2 BINB non Ti for a Titan V though


Alphacool blocks are up for pre-order from aquatuning.us and modmymods.com. There are two models.

EK's Twitter said their pre-orders would be starting the 22nd, i.e. yesterday, but I don't see them yet.


----------



## BigMack70

CallsignVega said:


> If the rumors are true about SLI now going to appear as one GPU via NVLink, that would be beyond epic. NVIDIA would sell a ton more cards for SLI.
> 
> But I have my doubts. I feel something that revolutionary for gaming would have been talked about at the reveal. The only thing I can say is maybe NVIDIA wanted to leave that for the reviewers. I guess we will see on Sep 14th unless it is leaked before hand.


That would be two holy grails achieved in one launch... ray traced lighting and hassle-free multi-GPU... I might even be tempted to return to SLI if they pulled that off. Always loved the concept and the performance; hated the poor scaling and the limited support in games.


----------



## Ford8484

Elmy said:


> One 2080 Ti FE from Nvidia ordered and one Zotac 2080 Ti from Newegg ordered. Going to see which one gets here faster....
> 
> EK said they will have blocks ready on 9/20 so ill have to keep an eye out for that since I am on custom loop with my Titan Xp.
> 
> I am sure hoping it's minimum 30% faster than Titan Xp. :-/
> 
> Need more frame rate for that 2560 X 1440p 240Hz .5ms monitor that is being worked on.


I have the same exact cards preordered but I'm not sure which one to cancel... The FE does look nice and it has the higher default OC, but the Zotac I got on Amazon, so it will get here faster. Plus you can OC easily enough these days if the cooling is good... decisions!!


----------



## nodicaL

I'm hoping this card will be good enough to use for 1440p @ 240Hz for the more competitive FPS games.
Was getting 70-90 FPS at 1440p with BO4.


----------



## GraphicsWhore

Ford8484 said:


> I have the same exact cards preordered but not sure which one to cancel....FE does look nice and it has the higher default OC- but the Zotac I got on Amazon so it will get here faster- plus you can OC easy enough these days if the cooling is good....decisions!!


When is your FE supposed to arrive? 

If any of you have a Ti FE pre-order being delivered in Sept. that you're considering canceling, I'll pay you the price of the card, an extra $50 and shipping costs for you to ship it to me.


----------



## Rob w

The UK Nvidia site says the RTX 2080 Ti is out of stock... must be popular?


----------



## Dennybrig

Guys, from a specs standpoint (since we don't have a single benchmark to compare), what do I lose by going with the 2080 instead of the 2080 Ti? How much slower do you guys think it will be?

Of course I will buy two 2080s (since I've run SLI for the past 6 years), but I want to know what I'll lose performance-wise.
I could go with two 2080 Tis, but not immediately; I don't have the cash.


----------



## rush2049

Dennybrig said:


> Guys from a specs standpoint (cause we do not have a single benchmark to compare), what do i lose by going 2080 instead of 2080ti? how much slower do you guys think it will be.
> 
> Of course i will buy 2 2080 (cause i have sported SLI from the past 6 years) but i want to know what will i lose performance wise.
> I can go 2 2080ti but not immediately, i dont have the cash


see my previous post regarding model comparisons a page or two ago. https://www.overclock.net/forum/69-...rtx-2080-ti-owner-s-club-13.html#post27592828

You'll be missing out on a lot. Jumping from the 2080 to the 2080 Ti, there are roughly 50% more CUDA cores. Add the extra memory width, and I would expect at least 50% more performance.
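
The rough numbers behind that, sketched in Python from the published specs (14 Gbps GDDR6 on a 256-bit bus for the 2080 vs a 352-bit bus for the 2080 Ti):

```python
cores_2080, cores_2080ti = 2944, 4352
bw_2080   = 256 / 8 * 14   # 448 GB/s
bw_2080ti = 352 / 8 * 14   # 616 GB/s

print(f"CUDA cores: +{cores_2080ti / cores_2080 - 1:.0%}")   # ~+48%
print(f"Memory bandwidth: +{bw_2080ti / bw_2080 - 1:.0%}")   # ~+38%
```

Whether that translates into a full 50% more frames is another question; core-count scaling is rarely linear in games.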


----------



## KY_BULLET

I'm gonna wait this one out until I see some hard evidence/benchmarks that the 2080 Ti is in fact 35-50% faster than the 1080 Ti, with RTX on or off. I think the RTX hype is nothing more than hook, line, and sinker. I haven't researched it myself, but I'm hearing there aren't any games out there optimized to use RTX yet. Is this true?

If its proven to live up to the hype, then its game on for me! Hell, I have 2 kidneys, only really need one, right? 🙂


----------



## toncij

rush2049 said:


> I...


The 2080 Ti is not a full chip. 4352 cores is less than a full one, as is 11GB of GDDR6; the full chip is 4608 cores and would come with 12GB. Four SMs and one memory channel are disabled. - So, we could get a Titan, and there is room for a refresh too. Volta had 5120 FP32 cores on the TITAN V...

One thing: SLI was always a "single rendering entity" to a game; the driver was responsible for doing the work underneath.

Whatever NVLink makes it, it is not the NVLink from IBM servers; not the same thing. Aggregate bandwidth for a full 6-lane NVLink v2 is 300 GB/s, which will not happen on Turing consumer cards. There is no hardware for it, let alone the NVSwitch from the DGX-2, which supports 900 GB/s of bandwidth.

The DGX-1 was pretty erratic in use due to memory usage. So, no... it wouldn't be usable for the highly dynamic, latency- and consistency-dependent workload that gaming is.

To effectively use each other's memory, the GPUs would need bidirectional bandwidth matching the GDDR6 bandwidth, and I don't see (at this moment) what tech would provide that on 2 NVLink "fingers" only. Sharing GDDR6 between chips at even 160 GB/s is still only about 1/4 of the local bandwidth.
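
That fraction, sketched quickly against the 2080 Ti's 616 GB/s of local GDDR6 bandwidth (the link speeds below are the speculated/Quadro numbers from this thread, not confirmed GeForce specs):

```python
local_bw = 616  # GB/s, RTX 2080 Ti local GDDR6 bandwidth

for link_bw in (160, 100, 50):  # GB/s over the NVLink bridge
    print(f"{link_bw:>3} GB/s link -> {link_bw / local_bw:.0%} of local bandwidth")
```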


----------



## rolldog

stefxyz said:


> My theory is that they want to get rid of their old stock why they keep performance numbers low and let the internet mob do the rest. Now people will see cheap prices on old models and think they make a good deal as performance looks mediocre on the new cards. Boy they might very well make a huuge mi


As per my post on the previous page, this is directly from Nvidia’s financial disclosure. Nvidia is forcing “more than 10 graphics card makers” to "swallow contracted shipments released by Nvidia to deplete its inventories, in order to secure that they can be among the first batch of customers to get sufficient supply quotas of new-generation GPUs."

Since Nvidia’s production of GPUs ramped up for the cryptomining market, but AMD GPUs were preferred, Nvidia now has a huge surplus of 10 series graphics cards. So, Nvidia is forcing all their AIB partners to fulfill their contracts on 10 series cards, which they’re obligated to buy (this is pretty common), but they must do it if they want to secure their own supply of 20 series graphics cards, of which they have their own production targets. Soon, we’ll see a huge supply of cheap 10 series graphics cards on the market, and with an oversupply, prices will be cut, helping Nvidia capture AMD’s share of the mid-market level of GPUs.


----------



## Baasha

rush2049 said:


> *Regarding NVLink*: On the nvidia dgx-2 supercomputer they have dedicated NVSwitch silicon to facilitate communication between 16 tu102 dies. This presents all the cores as a single compute resource to the main processor and also handles all communication between the cards instead of using a PCIe bus.
> 
> I do not think that we are going to get any NVSwitch silicon for the geforce version of NVLink. However I suspect (and this is me speculating) that the 50 GB/s bi-directional communication between two cards will make them able to appear to the operating system as a single card. _Should maybe not call it SLI anymore._ Another feature that was mentioned during the Siggraph presentation was that the NVLink allows each card to directly access another's memory effectively adding together their memory pools. This leads me further into believing that it will work as a single graphics card when utilizing NVLink between two geforce cards.


Very very interesting... Why would they then limit it to two cards? Imagine running 4 GPUs with that high bandwidth! 

Anyway, can't wait to test it out.



Zurv said:


> I still didn't order the 2080 Ti(s) yet. If that is true about NVLink i'm in!! (otherwise i might just stick with my titan Vs)


Slacker! j/k 



CallsignVega said:


> If the rumors are true about SLI now going to appear as one GPU via NVLink, that would be beyond epic. NVIDIA would sell a ton more cards for SLI.
> 
> But I have my doubts. *I feel something that revolutionary for gaming would have been talked about at the reveal*. The only thing I can say is maybe NVIDIA wanted to leave that for the reviewers. I guess we will see on Sep 14th unless it is leaked before hand.


No doubt - that would be a major achievement and I too highly doubt that they would've forgotten about it during the reveal.

Also, not sure why they are delaying the launch from the announcement; this seems to be the first time (?) they've done this. Usually they announce the card and it's available right away or very shortly after, but this time it's a 30-day wait.



toncij said:


> 2080Ti is not a full chip. 4352 cores is less than a full one as is 11GB of GDDR6. The full chip is 4608 cores and would be with 12GB. One SM and one memory channel are disabled. - So, we could get a Titan. And there is room for a refresh too. Volta had 5120 FP32 cores on TITAN V...
> 
> One thing: SLI was always a "single rendering entity" to a game. Driver was responsible for doing the work underneath.
> 
> Whatever NVLink makes it, it is not the NVLink from IBM servers. Not the same thing. Aggregate bandwidth for a full 6-lane NVLink v2 is 300GB/s which will not happen on Turing consumer cards. There is no hardware, let alone NVSwitch from DGX-2 which supports 900GB/s bandwidth.
> 
> DGX-1 was pretty erratic in use due to memory usage. So, no... it wouldn't be usable for highly dynamic, latency and consistency dependent usage gaming is.
> 
> To effectively use each other's memory, GPUs would need to have the same bidirectional bandwidth covering the GDDR6 bandwidth, and I don't see (at this moment) what tech would provide that on 2 NVLink "fingers" only. Using GDDR6 between chips with even 160GB/s is still about 1/4th of the bandwidth.


Hmm.. the other odd thing is for all the people who were at Gamescom, nobody has talked about performance with NVLink/SLI - there was a machine with such a setup so I'm sure some people got to use it - yet, not a word about performance which seems... odd.


----------



## rush2049

toncij said:


> 2080Ti is not a full chip. 4352 cores is less than a full one as is 11GB of GDDR6. The full chip is 4608 cores and would be with 12GB. One SM and one memory channel are disabled. - So, we could get a Titan. And there is room for a refresh too. Volta had 5120 FP32 cores on TITAN V...
> 
> One thing: SLI was always a "single rendering entity" to a game. Driver was responsible for doing the work underneath.
> 
> Whatever NVLink makes it, it is not the NVLink from IBM servers. Not the same thing. Aggregate bandwidth for a full 6-lane NVLink v2 is 300GB/s which will not happen on Turing consumer cards. There is no hardware, let alone NVSwitch from DGX-2 which supports 900GB/s bandwidth.
> 
> DGX-1 was pretty erratic in use due to memory usage. So, no... it wouldn't be usable for highly dynamic, latency and consistency dependent usage gaming is.
> 
> To effectively use each other's memory, GPUs would need to have the same bidirectional bandwidth covering the GDDR6 bandwidth, and I don't see (at this moment) what tech would provide that on 2 NVLink "fingers" only. Using GDDR6 between chips with even 160GB/s is still about 1/4th of the bandwidth.


I didn't say the 2080 Ti was the full chip; note the section comparing core counts. That's why I specifically said "each card is cut down from this" after stating the chip specs.

I think the NVLink bandwidth we can expect is the same as what the Quadro RTX cards can do: they can be paired together with 50 GB/s of one-way bandwidth over the link, or a total of 100 GB/s, exactly like I said in my previous post.
I only expect 2 lanes, each one-way, between the cards, as there is no NVSwitch silicon chip to facilitate crossbar communication. I was getting my specifications directly from the DGX-2 documentation and white papers, not the DGX-1, which used a completely different architecture and is not directly relevant to the Turing cards coming out.

They say the Quadro RTX cards can directly use each other's memory; I take that statement from Nvidia as fact. Obviously a card wouldn't get the full bandwidth GDDR6 is capable of, but it's way better than the ~2 GB/s of the old SLI bridges.


----------



## rush2049

Baasha said:


> Very very interesting... Why would they then limit it to two cards? Imagine running 4 GPUs with that high bandwidth!
> 
> Anyway, can't wait to test it out.
> 
> 
> 
> Slacker! j/k
> 
> 
> 
> No doubt - that would be a major achievement and I too highly doubt that they would've forgotten about it during the reveal.
> 
> Also, not sure why they are delaying the launch from the announcement; this seems to be the first time (?) they've done this - usually they announce the card and it's available right away or very shortly but this time it's a 30 day wait.
> 
> 
> 
> Hmm.. the other odd thing is for all the people who were at Gamescom, nobody has talked about performance with NVLink/SLI - there was a machine with such a setup so I'm sure some people got to use it - yet, not a word about performance which seems... odd.



From the video I saw, the two GeForce RTX cards that were NVLinked were not connected to any monitor; it was just a showcase mod build. If you have footage of one running, point me that way.


I think the 2-card NVLink limitation is that the cards need (and I am quoting the white paper here) a "packet transforming chip", which they call the NVSwitch, to do cross-communication. But pairs of cards can do it by themselves, acting sort of like how a crossover Ethernet cable works without a switch.



Go a bit down this page for a diagram of how 2 cards with NVLink work: https://devblogs.nvidia.com/nvswitch-accelerates-nvidia-dgx2/


----------



## Jpmboy

Baasha said:


> Very very interesting... Why would they then limit it to two cards? Imagine running 4 GPUs with that high bandwidth!
> Anyway, can't wait to test it out.
> Slacker! j/k


Was wondering when you were gonna look to run quad Turings... a Titan "T" is coming.


----------



## rush2049

And this is the page that gives us a clue to the NVLink bandwidth the Quadro RTX cards have:
https://www.nvidia.com/en-us/design-visualization/nvlink-bridges/


The Quadro RTX 8000 and 6000 are based on TU102 and have 100 GB/s of NVLink bandwidth.

The Quadro RTX 5000 is based on TU104 and has 50 GB/s of NVLink bandwidth.


----------



## Malinkadink

Baasha said:


> Very very interesting... Why would they then limit it to two cards? Imagine running 4 GPUs with that high bandwidth!
> 
> Anyway, can't wait to test it out.
> 
> 
> 
> Slacker! j/k
> 
> 
> 
> No doubt - that would be a major achievement and I too highly doubt that they would've forgotten about it during the reveal.
> 
> Also, not sure why they are delaying the launch from the announcement; this seems to be the first time (?) they've done this - usually they announce the card and it's available right away or very shortly but this time it's a 30 day wait.
> 
> 
> 
> Hmm.. the other odd thing is for all the people who were at Gamescom, nobody has talked about performance with NVLink/SLI - there was a machine with such a setup so I'm sure some people got to use it - yet, not a word about performance which seems... odd.


They're delaying the launch because of the 10-series surplus they have now that the mining craze has died down. They figure a month lets them unload 10-series cards with some discounts, encouraging more buyers to pull the trigger, especially after the price shock of the 20-series cards, which may dissuade people from buying those in favor of a "deal" on a 10 series.


----------



## Baasha

rush2049 said:


> From the video I saw, the two Geforce RTX cards that were NVLinked was not connected to any monitor and was just a showcase mod build. If you have footage of one running point me that way.
> 
> 
> I think the 2 card NVLink limitation is that the cards need a (and I am quoting the white paper here) "packet transforming chip" which they called the NVSwitch to do cross communication. But they can do pairs of cards by themselves as they are acting sort of like how a crossover ethernet cable functions without a switch.
> 
> 
> 
> Go a bit down this page for the diagram of how 2 cards with NVLink work: https://devblogs.nvidia.com/nvswitch-accelerates-nvidia-dgx2/


Oh okay, gotcha. That explains it - thanks for the link. It says in one of the diagrams that GV100 (which is the chip in the Titan V (?)) can do up to 200 GB/s with 2x NVLink bridges - but I was told SLI was simply NOT going to work with Titan V - I was planning on getting 4 but instead just got one - essentially killing SLI.

I thought the NVLink'd system was indeed hooked up but you're right - I haven't seen any footage of it so I guess nobody really experienced it - lame!



Jpmboy said:


> was wondering when you were gonna look to run quad turings... a Titan "T" is coming.


 Any idea of the time frame for Titan T? This year or around Feb./Mar. 2019? 

Well, I got 4x 2080 Ti but it definitely seems like that's not happening (i.e. NVLink is limited to 2x GPUs) so I guess I'll just stick with 2x GPUs. Plus, with the 2x fans blowing hot air in the case, there is no way I can do a GPU sammich - the cards would die lol.



Malinkadink said:


> They're delaying the launch because of the 10-series surplus they have now that the mining craze is dying down. They figure a month gives them time to unload 10-series cards, with some discounts to encourage buyers to pull the trigger, especially after the price shock of the 20-series cards, which may dissuade people from buying those and push them toward a "deal" on a 10-series card.


Makes sense - weren't these 10 series cards at insane prices (i.e. 1080 Ti > $1000) just a few months ago?


----------



## Artah

Jpmboy said:


> No... unless you have games that do not scale well with SLI, and there are more and more of these unfortunately.


Is there a list of games somewhere that specify which ones do well in SLI and which ones don't?



alanthecelt said:


> ek
> alphacool - pre order
> heatkiller


Thanks for the info.


----------



## Malinkadink

Baasha said:


> Makes sense - weren't these 10 series cards at insane prices (i.e. 1080 Ti > $1000) just a few months ago?


Yes they were, and people were still buying them then, which explains why Nvidia figured a $1200 price tag on the 2080 Ti would sell easily. Even if a small percentage of people feel $1200 is too much and Nvidia loses a sale, it won't really matter; they'll more than make up for the lost customers with the $500 extra they're charging compared to previous Tis. They may not even lose anyone: those who were okay with paying $800 for a GPU will simply get a 2080 instead, as it's in their price range.

I'm not really sold myself. I do think $1200 is too much for a non-Titan card. Maybe it's justified by the R&D involved and the additional hardware to make ray tracing possible, but like all new technologies it won't see widespread adoption until a few years from now. If next-gen consoles have hardware-accelerated ray tracing, that will greatly speed things up in the PC space, but until then it's going to be a slow uptake.

I'm still content with the performance of my 1080 @ 1440p 165Hz with G-Sync in the games I play, and I'm not against dropping some settings to get more fps; in fact I usually end up disabling stuff that I find interferes with how easily I can spot enemies in an FPS game. In more casual, story-based adventure games I typically have no issues running max settings and getting at least 60 fps, which is more than enough for a game like that.

7nm next year will be a much nicer upgrade imo.


----------



## rush2049

Artah said:


> Is there a list of games somewhere that specify which ones do well in SLI and which ones don't?
> 
> Thanks for the info.


Not really a list, but I can tell you that there are a number of features that, if enabled, kill your SLI scaling and sometimes make it worse than a single card.

Basically anything with the word 'temporal' in the feature name (TXAA, TAA, SMAA T2x, SMAA T1x, etc.). These types of anti-aliasing use the previous frame as a reference, which forces each card to keep a full copy of the previously completed frame. The problem is that the old, slow SLI connection was barely fast enough just to assemble the final frame on one card and output it; adding the cost of transferring that completed frame back to the other card before starting the next one is what kills your scaling. Some games expose these as selectable anti-aliasing modes, and some enable them behind the scenes where you cannot turn them off.

And then some games do not get any benefit from SLI no matter what: (Dark Souls 1, Dark Souls 3, Binding of Isaac & BOI Rebirth/Afterbirth, Wolfenstein: The New Order, Dishonored, DayZ, FFX/X-2 HD, Terraria, Street Fighter V and Mortal Kombat XL; this list I borrowed from the Linus Tech Tips forums)

Some games are coded in a weird way that requires significant cooperation between your cards to run. This keeps the cards from being fully utilized; they'll run at something like 70% usage, kind of like what happens with vertical sync or a frame limiter. Luckily this issue is rare and usually gets patched out quickly.
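The frame-copy penalty described above can be sketched with a toy calculation. All numbers here are illustrative assumptions rather than measurements, and `afr_scaling` is just a made-up helper name:

```python
# Sketch of 2-way AFR (alternate-frame rendering) scaling when a temporal
# effect forces each new frame to wait for the previous completed frame to
# arrive over the bridge. Numbers are assumptions, not measured values.

def afr_scaling(render_ms, frame_mb, bridge_gb_s):
    """Speedup of 2-way AFR vs. one card when the frame copy serializes."""
    transfer_ms = frame_mb / bridge_gb_s    # 1 GB/s moves 1 MB per ms
    ideal_frame_time = render_ms / 2        # two frames in flight, ideally
    actual_frame_time = ideal_frame_time + transfer_ms
    return render_ms / actual_frame_time

frame_mb = 33     # ~4K RGBA frame buffer (3840 * 2160 * 4 bytes)
render_ms = 16.7  # one card rendering at ~60 fps

print(afr_scaling(render_ms, frame_mb, 2))   # legacy SLI bridge ~2 GB/s
print(afr_scaling(render_ms, frame_mb, 50))  # single NVLink ~50 GB/s
```

With the old bridge the copy alone costs about as much as half a frame of rendering, so the pair can come out slower than one card, matching the worse-than-single-GPU behavior described above; at NVLink-class bandwidth the copy nearly vanishes.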


Hope that answers your question.


----------



## CallsignVega

toncij said:


> 2080Ti is not a full chip. 4352 cores is less than a full one as is 11GB of GDDR6. The full chip is 4608 cores and would be with 12GB. One SM and one memory channel are disabled. - So, we could get a Titan. And there is room for a refresh too. Volta had 5120 FP32 cores on TITAN V...
> 
> One thing: SLI was always a "single rendering entity" to a game. Driver was responsible for doing the work underneath.
> 
> Whatever NVLink makes it, it is not the NVLink from IBM servers. Not the same thing. Aggregate bandwidth for a full 6-lane NVLink v2 is 300GB/s which will not happen on Turing consumer cards. There is no hardware, let alone NVSwitch from DGX-2 which supports 900GB/s bandwidth.
> 
> DGX-1 was pretty erratic in use due to memory usage. So, no... it wouldn't be usable for highly dynamic, latency and consistency dependent usage gaming is.
> 
> To effectively use each other's memory, GPUs would need to have the same bidirectional bandwidth covering the GDDR6 bandwidth, and I don't see (at this moment) what tech would provide that on 2 NVLink "fingers" only. Using GDDR6 between chips with even 160GB/s is still about 1/4th of the bandwidth.


I agree on both accounts. An RTX Titan will be here by winter, and SLI will still have all the failings it's always had. Less and less support from developers as we move forward.



Malinkadink said:


> They're delaying the launch because of the 10-series surplus they have now that the mining craze is dying down. They figure a month gives them time to unload 10-series cards, with some discounts to encourage buyers to pull the trigger, especially after the price shock of the 20-series cards, which may dissuade people from buying those and push them toward a "deal" on a 10-series card.


Agree. IMO these RTX cards are already stocked up in warehouses all over the world ready to ship.


----------



## zooterboy

I preordered the founders edition right after the announcement. Let's hope it's worth it...


----------



## dante`afk

If Nvidia were confident in their product, they'd have shown FPS counters everywhere and direct comparisons to the 1080 Ti and AMD cards. So far they're trying to sell BS with a ten-years-in-the-making crap technology that no one needs or wants. Gotta wait for Sept 14th; anything below a 50% FPS increase in non-RTX scenarios and I'll be disappointed, not worth the switch.

Heck I'm still playing Diablo 2 with my 1080Ti.


----------



## toncij

Malinkadink said:


> 7nm next year will be a much nicer upgrade imo.


That's some wishful thinking, expecting 7nm NV cards next year. With so much effort going into selling the 10 series and hyping the 20 series, I don't see NV killing Turing before 2020. And don't hold your breath if Intel or AMD fail to put pressure on them. At this moment, even 7nm Vega 20 looks to be targeting "same performance, less power usage".



CallsignVega said:


> I agree on both accounts. RTX Titan will be here by winter and SLI will still have all the failings its always had. Less and less support from developers as we move forward.
> 
> Agree. IMO these RTX cards are already stocked up in warehouses all over the world ready to ship.



Could be. Using unified memory for games with less than 1/5th of the bandwidth (100 GB/s) would cause much worse stuttering and problems than it would solve. 8GB and even 11GB is enough for most games. Going unified would behave the same way the GTX 970 did with that last 0.5GB of VRAM, where it introduced serious stuttering due to severely severed (lol) bandwidth, being 1/7th of the total. Sometimes it didn't matter because that part wasn't used; sometimes it did because it got used. So no, I don't think you'll be able to use all the VRAM as one pool.
NVLink will simply fix some SLI scenarios which were there because of the very bad copy performance. HB bridges helped a lot too, but... things were never perfect.
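As a sanity check on those fractions (spec-sheet numbers; the 100 GB/s TU102 NVLink figure is the commonly reported one and is an assumption here):

```python
# Rough ratios behind the unified-memory argument: remote VRAM reached over
# NVLink vs. local GDDR6, and the GTX 970's slow 0.5 GB segment vs. its
# fast partition. Spec-sheet figures; treat the NVLink one as an assumption.

local_gddr6 = 616   # GB/s, RTX 2080 Ti local memory bandwidth
nvlink = 100        # GB/s, reported TU102 NVLink bandwidth

remote_fraction = nvlink / local_gddr6
print(f"remote VRAM runs at {remote_fraction:.0%} of local speed")  # ~16%

# GTX 970 analogy: last 0.5 GB at ~28 GB/s vs. ~196 GB/s main partition
slow, fast = 28, 196
print(f"GTX 970 slow segment: {slow / fast:.0%} of the fast pool")  # ~14%
```

The point being: pooled VRAM over NVLink sits in roughly the same bandwidth ratio to local memory as the 970's infamous slow segment did, which is why stutter is the expected outcome.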

Keeping in mind that also, exactly because of temporal effects someone described up there, SLI will never be possible for some engines or will have artifacts and lag due to a base frame being at least 1 frame too old.

And I also think as you do: NV would be foolish not to use the opportunity to sell ~300 more cores for $1500 in a new TITAN.


----------



## Ford8484

Am I the only one on the internet who knows you can do this magical thing called cancelling a preorder, lol. All these sites are saying "don't preorder, don't preorder!!!" YOU DON'T PAY FULL PRICE WHEN YOU PREORDER!!!! Sorry, I know most of this thread knows what I'm talking about, but it's driving me nuts.


----------



## GosuPl

I've joined the club ;-) 2x 2080 Ti preordered. Time to change from the "old" 2x TITAN X Pascal ;-)


----------



## GosuPl

Ford8484 said:


> Am I the only one on the internet who knows you can do this magical thing called cancelling a preorder, lol. All these sites are saying "don't preorder, don't preorder!!!" YOU DON'T PAY FULL PRICE WHEN YOU PREORDER!!!! Sorry, I know most of this thread knows what I'm talking about, but it's driving me nuts.


Exactly. The preorder can be canceled, and the cards can even be returned within 30 days of receiving the package if it turns out the performance is not satisfactory.

But I think the RTX series will be a very successful series of cards. There is nothing to complain or laugh about; if someone wants to order one, why shouldn't they?


----------



## ClashOfClans

dante`afk said:


> If Nvidia were confident in their product, they'd have shown FPS counters everywhere and direct comparisons to the 1080 Ti and AMD cards. So far they're trying to sell BS with a ten-years-in-the-making crap technology that no one needs or wants. Gotta wait for Sept 14th; anything below a 50% FPS increase in non-RTX scenarios and I'll be disappointed, not worth the switch.
> 
> Heck I'm still playing Diablo 2 with my 1080Ti.


I think the numbers they have released are pretty telling already. The 2080 Ti is going to crush the 1080 Ti in non-RT scenarios. Look at what they said about the 2080 versus the 1080: the 2080 will surpass the 1080 Ti, which is already obvious based on the price point alone. Don't quote me on this, but the 2080 Ti is absolutely going to destroy the 1080 Ti just based on the current numbers. It's not going to be close. And we all know AA is the main setting that can reduce performance. No one seems to be excited about DLSS, which is basically going to do what AA does without the performance hit. That alone is a huge game changer. You should sell your 1080 Ti while it's worth something. For 4K the 2080 Ti will demolish the 1080 Ti. Once you turn on DLSS it's really over, and the 2080 Ti will be faster than 1080 Ti SLI.


----------



## CallsignVega

This sucks if true:

https://wccftech.com/nvidia-geforce-rtx-2080-ti-2080-2070-supply-schedule/

I literally had my Ti order in within a couple of minutes of the NVIDIA store opening, so I may make the small batch cutoff.


----------



## rush2049

CallsignVega said:


> This sucks if true:
> 
> https://wccftech.com/nvidia-geforce-rtx-2080-ti-2080-2070-supply-schedule/
> 
> I literally had my Ti order in within a couple of minutes of the NVIDIA store opening, so I may make the small batch cutoff.


My order from nvidia said shipment on 9/20 for my TI card. 
I ordered a second one 4 hours later and shipment on that was 10/5...... so yeah.....


----------



## BigMack70

CallsignVega said:


> This sucks if true:
> 
> https://wccftech.com/nvidia-geforce-rtx-2080-ti-2080-2070-supply-schedule/
> 
> I literally had my Ti order in within a couple of minutes of the NVIDIA store opening, so I may make the small batch cutoff.


I think I am in the same position... order went in just after 1 PM EDT. Don't think the store had been open very long before that. Would be nice to get it a bit early. This article also seems to confirm my rationale for pre-ordering... anyone who didn't pre-order is going to have difficulty getting a card before Christmas.

Does Nvidia typically notify you of a ship/delivery date before actually shipping? Or do you just get a "your order has shipped" email when it happens?


----------



## CallsignVega

Ya all this nonsense about people freaking out about pre-ordering is so stupid. You literally risk nothing pre-ordering. They are just envious.


----------



## GraphicsWhore

Pre-ordered one of the Alphacool blocks from modmymods.com: https://modmymods.com/alphacool-eisblock-gpx-n-plexi-nvidia-geforce-rtx-2080-2080ti-m01-11661.html

EK preorders still don't seem to be up, but I think I've made my decision. I saw a pic of the EK block and it's similar, so unless there's something special about it I'll keep the Alphacool.

Haven't built anything in a while; it's going to be fun to put a block on once I've got everything.


----------



## Krzych04650

Historically NVIDIA has always delivered great products at the high end (by this I mean the last 3 generations, just in case someone wants to bring up something from 10 years ago), so there should be little reason to think it will be different this time, but this release is indeed concerning.

Pascal set the bar really high, with the x70-class card matching the previous gen x80 Ti right away at launch and the new x80 Ti really flying away.

I am still pretty convinced that Turing is not going to disappoint in that regard, but like many people have mentioned in this thread, it is hard to see where such gains could come from. And even if there are big gains, their consistency is also a huge concern, looking at the very inconsistent gaming performance of Titan V. This is actually the thing that worries me much more than a potential lack of horsepower in Turing; the consistency of performance of the new cards is the real question, I think. Also, it seems that the usefulness of everything presented at the keynote depends entirely on support from game developers, which already answers the question of how useful it will be... while the inconsistency of Titan V shows that the increase in raw specs is not working too well either.

Maybe this is the reason why they came up with RTX right now: they couldn't get a significant and consistent improvement in performance, so they led with ray tracing to give us something else to focus on. It makes sense; if they could get a huge performance increase the easy way like with Pascal, they wouldn't have come out with all of this RTX, AI and huge expensive dies. There must be a reason why they did it, and competition is certainly not the one, because they are still competing with Maxwell in some imagined world of theirs.

I have a bad feeling that the 2080 is only going to match the 1080 Ti and the 2080 Ti is only going to be some poor 25% faster than the previous gen. This, along with already the longest gap between new GPU generations to ever happen and the massive incompetence of AMD, would mean stagnation for another 2 years. Stagnation in performance, because prices are of course increased as if something revolutionary were actually going to happen.

I don't know. I hope we get proper and consistent performance gains, but there are not many things to reinforce that hope, while the question marks are many and huge.

And if this wasn't enough, the time we need to wait for reviews after release is like the longest in the history of everything.

But if we get great gains, plus all of this ray tracing, tensor hardware and huge expensive dies on top of that, even despite the complete lack of competition and the fact that NVIDIA is and will be competing with itself for a long time, then I am really going to become a fanboy.


----------



## rush2049

well that's quite dismal. 

I prefer my cups half full.


----------



## MiniZaid

I was hoping for 120fps at 4K in Battlefield V with one 2080 Ti. But I just realized that ray tracing is like a 30-40% performance drop even on the new RTX cards.
Looks like I have to run two cards again. Here I go again, with two cards. SLI wasn't really fun with Pascal. It was great for the 700 and 900 series.


----------



## Krzych04650

MiniZaid said:


> I was hoping for 120fps at 4K in Battlefield V with one 2080 Ti. But I just realized that ray tracing is like a 30-40% performance drop even on the new RTX cards.
> Looks like I have to run two cards again. Here I go again, with two cards. SLI wasn't really fun with Pascal. It was great for the 700 and 900 series.


I didn't have much issue with my 1080s in SLI; they worked well where I expected them to, and I was even surprised by the support in games I didn't expect to work with SLI. AC Origins is the only title I didn't play because of no SLI support (which was very unfortunate, because every single AC game supports SLI perfectly; I was sure Origins would too, but it didn't, seriously didn't, to the point where even custom profiles couldn't do anything at all, like zero). Shadow of War would be the second one, but it turned out to be so flat-out bad as a game that this, rather than the poor SLI support, was the main reason I didn't play it. Other than that, no issues at all.

I can already see myself on September 14th having a massive problem choosing between 2080 SLI and a single 2080 Ti. It depends on how they perform, but this is going to be my first GPU upgrade without a simultaneous resolution increase, and there very likely won't be one for another 2 years. So if Turing delivers a proper performance increase, a single 2080 Ti should slightly surpass my 1080s at their average 75-80% scaling, and I'll have that kind of performance in all games regardless of SLI support. That should be good for now, hopefully until next gen, but if not I can always get a second one later. This is the first time a single GPU has a chance to be enough, only because the resolution won't increase, but still.


----------



## toncij

Krzych04650 said:


> Historically NVIDIA always delivered great products on the high-end (by this I mean like last 3 generations, just in case someone wants to bring up something from 10 years ago) so there should be little reason to think that it will be different this time, but this release is indeed concerning.
> 
> Pascal has set the bar really high, with x70 class card matching previous gen x80Ti card right away on launch and new gen x80Ti really flying away.
> 
> I am still pretty convinced that Turing is not going to disappoint in that regard, but like many people mentioned in this thread, it is hard to see from where such gains could come from. And even if there are big gains, their consistency is also a huge concern, looking at very inconsistent gaming performance of Titan V. This is actually the thing that worries me much more than potential lack of horsepower of Turing, the consistency of performance of new cards is the real question I think. Also it seems that the usefulness of everything that was presented on keynote is dependent entirely on the support from game developers, which already answers the question how useful it will be... while the inconsistency of Titan V shows that the increase in raw power is also not working too well.
> 
> Maybe this is the reason why they came up with RTX right now, they couldn't get significant and consistent improvement in performance, so they started with ray tracing to give us something else to focus on. It makes sense, if they could get huge performance increase the easy way like with Pascal then they wouldn't come out with all of this RTX, AI and huge expensive dies. There must be reason why they did that, and the competition is certainly not the one because they are still competing with Maxwell in some imagined world of theirs
> 
> I have a bad feeling that 2080 is only going to match 1080Ti and 2080Ti is only going to be some poor 25% faster than previous gen. This along with already the longest gap between new GPU gen releases to even happen and massive incompetence of AMD would mean a stagnation for another 2 years. Stagnation in performance, because prices are of course increased like if something revolutionary was actually going to happen.
> 
> I don't know, I hope we get proper and consistent performance gains, but there not many things to reinforce anyone in this hope, while the question marks are many and huge.
> 
> And if this wasn't enough, the time we need to wait for reviews after release is like the longest in the history of everything.
> 
> But if we get great gains and all of this ray tracing, tensor and huge expensive dies on top of that even despite complete lack of competition and the fact that NVIDIA is and will compete with itself for a long time then I am really going to become a fanboy


You're pretty much onto something there. Without shrinking the process you can only grow the chip. And the lower we go, the less benefit we get from each shrink, since complexity and development time increase. Large, complex, power-hungry, hot spots, heat, clocks... the further we move forward, the more expensive it gets.

12nm is not really a jump from 16nm; it's pretty much the same density. Then again, 16nm was not the jump you'd expect coming from 28nm either. FinFET doesn't scale the same way...

More cores bring diminishing returns, and clocks are at their limit...


----------



## CallsignVega

I think one problem we are going to run into is NVIDIA probably isn't shipping more than one card to reviewers. So no SLI tests. I know [H] ordered two cards and an NVLink, but they probably won't have tests out until mid/late-October.


----------



## snafua

CallsignVega said:


> I think one problem we are going to run into is NVIDIA probably isn't shipping more than one card to reviewers. So no SLI tests. I know [H] ordered two cards and an NVLink, but they probably won't have tests out until mid/late-October.


That's if the NVLink bridges even come out at the same time. With the 1080's they were two months later.


----------



## CallsignVega

https://www.youtube.com/watch?v=WYzihy6ptNI

----------



## nodicaL

CallsignVega said:


> https://www.youtube.com/watch?v=WYzihy6ptNI


Quality!


----------



## GraphicsWhore

“But my fuhrer you can see explosions reflected everywhere.”

Lol


----------



## D. O'Connor

Anyone received a shipping date yet on the nvidia site? I'm still getting "order received"


----------



## rush2049

D. O'Connor said:


> Anyone received a shipping date yet on the nvidia site? I'm still getting "order received"


I can't check, as the random password they give you with the order doesn't work.


----------



## snafua

D. O'Connor said:


> Anyone received a shipping date yet on the nvidia site? I'm still getting "order received"


For me the nvidia site says order received, the confirmation email says "This item will ship on or around 09/20/2018"


----------



## Bighouse

So, when do the NVIDIA RTX 2080 Ti Founders Edition ship? And, if doing a pre-order, does Nvidia bill your credit card at the time of order, or when the cards ship?


----------



## D. O'Connor

Ah ok, that's like me then. I've got 09/25 though, left it a little late lol, cheers


----------



## D. O'Connor

#Bighouse The Founders cards start shipping from the 20th, and no, they won't charge you till they send it.


----------



## Rob w

Hey Bighouse, I did not receive a shipping date for the 2080 Ti, so I contacted NV; they say the card will get billed when the order is confirmed.
So even though I ordered on the day of release, they have yet to confirm my 2080 Ti order.
The Nvidia store now shows the 2080 Ti as "out of stock"


----------



## Rob w

Rob w said:


> Hey Bighouse, I did not receive a shipping date for the 2080 Ti, so I contacted NV; they say the card will get billed when the order is confirmed.
> So even though I ordered on the day of release, they have yet to confirm my 2080 Ti order.
> The Nvidia store now shows the 2080 Ti as "out of stock"


Just checked email from ordering


----------



## Jared Pace

GraphicsWhore said:


> “But my fuhrer you can see explosions reflected everywhere.”
> 
> Lol



rofl!

Placed an order... for a plain 2080.

haha~!!


----------



## D. O'Connor

#Rob w Yeah I'm in the same boat. My e-mail has the same shipping date as yours, but I'm only seeing "order received" on the NV site. I think what they mean by "order confirmed" is when it ships.


----------



## Rob w

D. O'Connor said:


> #Rob w Yeah I'm in the same boat. My e-mail has the same shipping date as yours, but I'm only seeing "order received" on the NV site. I think what they mean by "order confirmed" is when it ships.


Totally agree.


----------



## Clukos

Krzych04650 said:


> Historically NVIDIA always delivered great products on the high-end (by this I mean like last 3 generations, just in case someone wants to bring up something from 10 years ago) so there should be little reason to think that it will be different this time, but this release is indeed concerning.
> 
> Pascal has set the bar really high, with x70 class card matching previous gen x80Ti card right away on launch and new gen x80Ti really flying away.
> 
> I am still pretty convinced that Turing is not going to disappoint in that regard, but like many people mentioned in this thread, it is hard to see from where such gains could come from. And even if there are big gains, their consistency is also a huge concern, looking at very inconsistent gaming performance of Titan V. This is actually the thing that worries me much more than potential lack of horsepower of Turing, the consistency of performance of new cards is the real question I think. Also it seems that the usefulness of everything that was presented on keynote is dependent entirely on the support from game developers, which already answers the question how useful it will be... while the inconsistency of Titan V shows that the increase in raw specs is also not working too well.
> 
> Maybe this is the reason why they came up with RTX right now, they couldn't get significant and consistent improvement in performance, so they started with ray tracing to give us something else to focus on. It makes sense, if they could get huge performance increase the easy way like with Pascal then they wouldn't come out with all of this RTX, AI and huge expensive dies. There must be reason why they did that, and the competition is certainly not the one because they are still competing with Maxwell in some imagined world of theirs
> 
> I have a bad feeling that 2080 is only going to match 1080Ti and 2080Ti is only going to be some poor 25% faster than previous gen. This along with already the longest gap between new GPU gen releases to ever happen and massive incompetence of AMD would mean a stagnation for another 2 years. Stagnation in performance, because prices are of course increased like if something revolutionary was actually going to happen.
> 
> I don't know, I hope we get proper and consistent performance gains, but there not many things to reinforce anyone in this hope, while the question marks are many and huge.
> 
> And if this wasn't enough, the time we need to wait for reviews after release is like the longest in the history of everything.
> 
> But if we get great gains and all of this ray tracing, tensor and huge expensive dies on top of that even despite complete lack of competition and the fact that NVIDIA is and will compete with itself for a long time then I am really going to become a fanboy



Adding Tensor and RT cores has a cost, the process difference isn't big, and GDDR6 isn't cheap, so just scaling CUDA cores and adding memory wasn't an option. IMO, Nvidia did all they could with Turing given the 12nm process. They are essentially selling a cut-down HEDT GPU to consumers and found smart ways to use the hardware (DLSS and denoising with Tensor cores, RTX with RT cores). Next gen should provide better RTX performance (maybe 2 RT cores per SM?) while at the same time increasing clock speeds and reducing die size, but most of that will be due to 7nm.


----------



## keikei

Question regarding the FE. Can you only get it from nvidia themselves? Also, how much does nvidia charge for tax if any? Thanks.


----------



## Rob w

keikei said:


> Question regarding the FE. Can you only get it from nvidia themselves? Also, how much does nvidia charge for tax if any? Thanks.


Tax is calculated at your local rate, i.e. the country it's going to.
Mine is 20% on top; look back a few posts and you will see my invoice.
I don't know whether the FE is restricted to Nvidia only or not.


----------



## Krzych04650

Clukos said:


> Adding Tensor and RT cores has a cost, the process difference isn't big and GDDR6 isn't cheap so just scaling cuda cores and adding memory wasn't an option. Imo, Nvidia did all they could do with Turing given the 12nm process. They are essentially selling a cut-down HEDT GPU to consumers and found smart ways to use the hardware (DLSS and denoising with tensor cores and RTX with RT cores). Next gen should provide better RTX performance (maybe 2 RT cores per SM?) while at the same time increasing clock speeds and reducing die size, but most of that will be due to 7nm.


I don't doubt that they did all they could; I even said they are giving suspiciously much with everything that is physically implemented on these cards: RT, Tensor, massive and expensive dies, etc. However, I am interested in the final product, not in excuses. It would be huge hypocrisy to accept Nvidia's excuses while I am always bashing AMD for how incompetent they are, given that every product they sell comes with five tomes of excuses, lies and fairytales about imaginary features and how you need to wait. No new process being available is certainly a very real issue and a good excuse, to the point where it is hard to even call it an excuse, but regardless, they have released a new generation of GPUs and asked heavy money for it, so I will judge them on that. If they had just come out and said "hey, there is no new process, so it is not worth investing in a new generation because it would be expensive and underperforming; here is your Pascal refresh with 20% more performance, a 20% smaller price and 20% better energy efficiency", then I would be fine with it. But they released everything like any other new generation, and even called it a "historical leap", so I am expecting one, especially at $999.

Also, what is physically on the cards is one thing, but how functional all of these things are in the real world is a different story. If everything the Tensor cores can do has to be specifically supported by game developers (which brings the number of supported games to a few per year, with no guarantee that those games will even be worth playing, so realistically you get one or two per year), and if the target for AAA games with RTX enabled is really 1080p, then all these Tensor and RT cores are going to do is take up space, draw power and increase the price for no reason, because they will be effectively useless.


----------



## GraphicsWhore

keikei said:


> Question regarding the FE. Can you only get it from nvidia themselves? Also, how much does nvidia charge for tax if any? Thanks.


Yes, at this point it looks like the FE is only coming from Nvidia. EVGA has a blower card listed in their eventual lineup (others may too), but it's still not listed as a Founders Edition card.


----------



## Ford8484

Still considering cancelling my preorder and waiting until 7nm. If it's only a 20% increase over the 1080 Ti, $1200 is a waste of money. I can easily get those extra frames by dropping shadows one notch or not using AA (depends on the game). We shall see... hopefully the 14th is the legit date independent benchmarks come out.


----------



## ThrashZone

Hi,
Yeah pretty wild 320 posts and nobody has one yet


----------



## keikei

https://www.guru3d.com/news-story/preorder-prices-geforce-rtx-2080-and-2080-ti.html


----------



## GraphicsWhore

Ford8484 said:


> Still considering cancelling my preorder and waiting until 7nm. If it's only a 20% increase over the 1080 Ti, $1200 is a waste of money. I can easily get those extra frames by dropping shadows one notch or not using AA (depends on the game). We shall see... hopefully the 14th is the legit date independent benchmarks come out.


Well if you have a 1080Ti you can recoup some of the 2080Ti cost. Also, AIBs are allegedly starting at $999 so it wouldn't be $1200 anyway but I get what you're saying - all depends on what you expect and how you game in terms of resolutions, settings, etc.


----------



## Clukos

GraphicsWhore said:


> Well if you have a 1080Ti you can recoup some of the 2080Ti cost. Also, AIBs are allegedly starting at $999 so it wouldn't be $1200 anyway but I get what you're saying - all depends on what you expect and how you game in terms of resolutions, settings, etc.



I don't think MSRP means anything for this launch tbh. No vendor is going to sell the same GPU at $200 less than the FE. Everyone will do the same thing they did with the 1070/1080 launch, ignore MSRP and price at FE + $30-100.


----------



## Vlada011

The NVIDIA Turing Founders Edition perfectly matches the EKWB RGB Monoblock for the ASUS Rampage V Edition 10 that I have.


----------



## ClashOfClans

I think the 2080 Ti will be as fast as 1080 Ti SLI with RTX off.


----------



## Nizzen

NVLink is the one thing I can't wait to see; how well will it work?


----------



## sblantipodi

ClashOfClans said:


> I think the 2080 Ti will be as fast as 1080 Ti SLI with RTX off.


LOL...


----------



## GraphicsWhore

Vlada011 said:


> The NVIDIA Turing Founders Edition perfectly matches the EKWB RGB Monoblock for the ASUS Rampage V Edition 10 that I have.


What?? How do you know that?


----------



## Ford8484

DLSS is more intriguing than ray tracing IMO. If you can get an 80-100% improvement with graphics that basically look 4K, that is huge. Really though, this whole ray-tracing thing seems moot at this point. 4K is finally becoming slightly more mainstream, and now ray tracing at 1080p? No thanks. I guess it's always great to have more options, if you look at it that way, but it just seems pointless in 2018...


----------



## Jpmboy

Rob w said:


> Just checked email from ordering



damn - VAT is crazy.


----------



## keikei

Ford8484 said:


> DLSS is more intriguing than ray tracing IMO. If you can get an 80-100% improvement with graphics that basically look 4K, that is huge. Really though, this whole ray-tracing thing seems moot at this point. 4K is finally becoming slightly more mainstream, and now ray tracing at 1080p? No thanks. I guess it's always great to have more options, if you look at it that way, but it just seems pointless in 2018...


Are you saying the improvement in image clarity is the same thing as the jump to real time shadows? They are separate hurdles.


----------



## sblantipodi

Ford8484 said:


> DLSS is more intriguing than ray tracing IMO. If you can get an 80-100% improvement with graphics that basically look 4K, that is huge. Really though, this whole ray-tracing thing seems moot at this point. 4K is finally becoming slightly more mainstream, and now ray tracing at 1080p? No thanks. I guess it's always great to have more options, if you look at it that way, but it just seems pointless in 2018...


I haven't quite understood what DLSS is...
Isn't it just a fast anti-aliasing method?

How could it improve image quality beyond ordinary anti-aliasing?

Will we return to 1080p?
Isn't that a huge step back in image quality, even with RTX on?

At 1080p, textures look way worse than at 4K...


----------



## Ford8484

sblantipodi said:


> I haven't quite understood what DLSS is...
> Isn't it just a fast anti-aliasing method?
> 
> How could it improve image quality beyond ordinary anti-aliasing?
> 
> Will we return to 1080p?
> Isn't that a huge step back in image quality, even with RTX on?
> 
> At 1080p, textures look way worse than at 4K...


There are so many unknowns with this new tech... at the end of the day, though, it's really about performance.


----------



## Clukos

sblantipodi said:


> I haven't quite understood what DLSS is...
> Isn't it just a fast anti-aliasing method?
> 
> How could it improve image quality beyond ordinary anti-aliasing?
> 
> Will we return to 1080p?
> Isn't that a huge step back in image quality, even with RTX on?
> 
> At 1080p, textures look way worse than at 4K...


It's essentially upscaling using deep learning, which results in something quite close to the real thing.
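For a feel of what plain, non-learned upscaling does, here is a toy nearest-neighbour upscaler in Python. It is purely illustrative: DLSS replaces this kind of dumb block duplication with a trained network that infers plausible high-resolution detail instead.

```python
def upscale_nearest(img, factor):
    """Nearest-neighbour upscale: each low-res pixel becomes a factor x factor block."""
    h, w = len(img), len(img[0])
    return [[img[y // factor][x // factor] for x in range(w * factor)]
            for y in range(h * factor)]

# A 2x2 "image" of pixel values, upscaled to 4x4
lowres = [[1, 2],
          [3, 4]]
hires = upscale_nearest(lowres, 2)  # each source pixel becomes a 2x2 block
```

No new information is created here, which is exactly why a learned upscaler can look so much better than this.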


----------



## sblantipodi

Clukos said:


> It's essentially upscaling using deep learning, which results in something quite close to the real thing.


Upscaling can hardly represent the real thing.


----------



## toncij

sblantipodi said:


> I haven't quite understood what DLSS is...
> Isn't it just a fast anti-aliasing method?
> 
> How could it improve image quality beyond ordinary anti-aliasing?
> 
> Will we return to 1080p?
> Isn't that a huge step back in image quality, even with RTX on?
> 
> At 1080p, textures look way worse than at 4K...


Check this one thing:
https://www.highperformancegraphics...ion1/HPG2018_AdaptiveTemporalAtnialiasing.pdf


----------



## ClashOfClans

sblantipodi said:


> LOL...


What do you mean, lol? The 1080 Ti is about the same as 980 Ti SLI when it scales... they trade blows in games, and plenty of videos show the 1080 Ti having the edge over 980 Ti SLI. So saying the 2080 Ti would be as fast as or faster than 1080 Ti SLI is not much of a stretch if we take into account the gaps from the 9 series to the 10 series.


----------



## GraphicsWhore

Just realized even if my Alphacool block fits the EVGA XC I'll have the card a few days before I get the block and I have my old system in another box which is a 2600k, Maximus Gene-Z IV and 16GBs. Technically the card should work in it though will be bottle-necked in various instances because of CPU but I may still give it a try. I'm interested to see what kind of graphics numbers it'll get vs my 1080Ti (which is blocked and on XOC BIOS).


----------



## Krzych04650

ClashOfClans said:


> What do you mean, lol? The 1080 Ti is about the same as 980 Ti SLI when it scales... they trade blows in games, and plenty of videos show the 1080 Ti having the edge over 980 Ti SLI. So saying the 2080 Ti would be as fast as or faster than 1080 Ti SLI is not much of a stretch if we take into account the gaps from the 9 series to the 10 series.


The 1080 Ti was a massive jump over the 980 Ti: in some memory-intensive games at 4K you can get 80-90% more FPS than a 980 Ti, stock vs stock, and it is almost 2x in a graphically intensive game at high resolution when you are not bottlenecked by anything else, hence why a single 1080 Ti can sometimes trade blows with 980 Ti SLI, even in games that scale well. Memory capacity and bandwidth were almost doubled, plus the jump from 28 nm to 16 nm.

Right now, not only is the 2080 Ti staying on essentially the same process (12 nm is not a massive jump, and is probably just a name), but the other changes are much smaller too. Looking at the core specs, the 2080 Ti typically has only about 20% more than the 1080 Ti, while the 1080 Ti had around 27% more than the 980 Ti. In terms of memory, the bandwidth increase is only 27% compared to 44%, while capacity stays the same versus almost doubling (although that matters less, because you can hardly ever max out these 11 gigs in real-world use).

Considering all of that, it looks like the 2080 Ti over the 1080 Ti is going to be more like the 980 Ti over the 780 Ti, about a 40% increase, rather than the 1080 Ti over the 980 Ti, which is 60-90% faster depending on the game.

So if you take, for example, 30 games and only half of them support SLI, and you average 1080 Ti SLI performance including the games with no SLI support, then the 2080 Ti is probably going to be faster on average. But it is very unlikely to match or surpass 1080 Ti SLI in SLI-supported games; it will have trouble even in ones with unsatisfying 1.6x scaling, not to mention ones with 1.8x+.

Don't get me wrong, I want the 2080 Ti to be very good, because if it isn't then I will be upgrading from 1080 SLI to 1080 Ti SLI two and a half years (!) after the Pascal launch, which would be quite ridiculous. If the 2080 Ti is only 30-35% faster than a 1080 Ti, it will be far from beating my 1080 SLI, let alone 1080 Ti SLI. Mostly it's the $999 price that creates this situation: if the 2080 Ti were $700, I would just take two even if a single card were only 20% faster than a 1080 Ti, and I would preorder them already because there would be nothing to worry about. But $999 is a freaking 5000 PLN (given the massive difference in earnings, it's as if you had to pay 4000 EUR in Germany or 4000 USD in the USA), and I am not going to pay that for something only 30% faster than a 1080 Ti; for many games that would be a downgrade from my current 1080 SLI, which I have had a very good time with.
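Those percentages can be reproduced from the commonly published spec figures (CUDA-core counts and memory bandwidth in GB/s); a quick sketch in Python:

```python
# Published CUDA-core counts and memory bandwidth (GB/s) for the cards compared above.
specs = {
    "980 Ti":  {"cores": 2816, "bw": 336},
    "1080 Ti": {"cores": 3584, "bw": 484},
    "2080 Ti": {"cores": 4352, "bw": 616},
}

def uplift(new, old):
    """Percent increase of card `new` over card `old`, per spec."""
    return {k: round((specs[new][k] / specs[old][k] - 1) * 100)
            for k in ("cores", "bw")}

print(uplift("1080 Ti", "980 Ti"))   # cores +27%, bandwidth +44%
print(uplift("2080 Ti", "1080 Ti"))  # cores +21%, bandwidth +27%
```

Raw spec deltas are of course only a crude proxy for game performance, which is the whole point of the comparison above.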


----------



## xer0h0ur

It takes real motivation to get me to wake up early on my days off for anything. I woke up early to pre-order the FE 2080 Ti the moment it went live. Now I just need EK to take the block pre-orders live and hope to god they ship them immediately. It would suck to have a new card sitting there without its block.

Really sucks that AMD has been asleep at the wheel in the dGPU space that we are paying $1200 for these toys but oh well. C'est la vie.


----------



## Ford8484

xer0h0ur said:


> It takes real motivation to get me to wake up early on my days off for anything. I woke up early to pre-order the FE 2080 Ti the moment it went live. Now I just need EK to take the block pre-orders live and hope to god they ship them immediately. It would suck to have a new card sitting there without its block.
> 
> Really sucks that AMD has been asleep at the wheel in the dGPU space that we are paying $1200 for these toys but oh well. C'est la vie.


Apparently Navi is only going to be mid-range too, so something along the lines of 1080 performance, given the time frame since its release. At least that is the "word" out there. I just hope Nvidia doesn't come out with a 2090 Ti or god knows what for like $700 in the spring that outperforms the 2080 Ti... that would be pretty shady, but it makes sense if AMD comes in with a high-end 7nm card...


----------



## shiokarai

xer0h0ur said:


> It takes real motivation to get me to wake up early on my days off for anything. I woke up early to pre-order the FE 2080 Ti the moment it went live. Now I just need EK to take the block pre-orders live and hope to god they ship them immediately. It would suck to have a new card sitting there without its block.
> 
> Really sucks that AMD has been asleep at the wheel in the dGPU space that we are paying $1200 for these toys but oh well. C'est la vie.


Sooo... are the EK preorders live or not? Your post seems to contradict itself. Went to EK site, no preorders yet?


----------



## GraphicsWhore

They are not. I suggest anyone who wants an EK preorder as quickly as possible follow them on twitter and turn on mobile notifications for their account. I assume they will tweet as soon as preorders go live. Given that they initially said it would be last week, I assume they’re close.


----------



## shiokarai

What concerns me is compatibility with the NVLink bridge: compatible or not? Last time they had an unpleasant affair with the Nvidia HB bridges - those were incompatible with the FE 1080 Ti blocks.


----------



## xer0h0ur

No, I was saying I want them to post the pre-orders now. All we got was that picture and a vague statement saying pre-orders are coming soon. That was a tweet from the 27th. I've been keeping an eye on their twitter to know when the pre-orders begin. 

https://twitter.com/EKWaterBlocks


----------



## shiokarai

xer0h0ur said:


> No, I was saying I want them to post the pre-orders now. All we got was that picture and a vague statement saying pre-orders are coming soon. That was a tweet from the 27th. I've been keeping an eye on their twitter to know when the pre-orders begin.
> 
> https://twitter.com/EKWaterBlocks


ok...

I'm sitting on a 2 x 2080 Ti + NVLink preorder here, Sept 20th delivery fortunately, but I'm still uncertain whether to cancel the preorder or not... esp. with the Skylake-X refresh coming Q4 2018 and 7nm looming in Q1 2019. There's a limit to reasonable spending, regardless of the amount involved...


----------



## toncij

shiokarai said:


> ok...
> 
> I'm sitting on a 2 x 2080 Ti + NVLink preorder here, Sept 20th delivery fortunately, but I'm still uncertain whether to cancel the preorder or not... esp. with the Skylake-X refresh coming Q4 2018 and 7nm looming in Q1 2019. There's a limit to reasonable spending, regardless of the amount involved...


Don't hold your breath on that. Even if it could be Q1 2019, and it might not be, it almost certainly won't be a consumer part but a refresh of Tesla.
7nm, being as expensive as it is, will first go into products that actually bring profits, cards in the $3-10k range, not ~$1k consumer parts.
And it probably won't be anywhere near Q1.


----------



## xer0h0ur

I'm not even touching any new Intel processors until they actually make something that isn't vulnerable to Spectre, Meltdown, Foreshadow and god knows how many other problems that simply haven't been made public or found yet.


----------



## Madness11

Guys, is a PSU (Antec HCP 850 W) enough for a 2080 Ti? (6900K CPU)


----------



## postem

Krzych04650 said:


> The 1080 Ti was a massive jump over the 980 Ti: in some memory-intensive games at 4K you can get 80-90% more FPS than a 980 Ti, stock vs stock, and it is almost 2x in a graphically intensive game at high resolution when you are not bottlenecked by anything else, hence why a single 1080 Ti can sometimes trade blows with 980 Ti SLI, even in games that scale well. Memory capacity and bandwidth were almost doubled, plus the jump from 28 nm to 16 nm.
> 
> Right now, not only is the 2080 Ti staying on essentially the same process (12 nm is not a massive jump, and is probably just a name), but the other changes are much smaller too. Looking at the core specs, the 2080 Ti typically has only about 20% more than the 1080 Ti, while the 1080 Ti had around 27% more than the 980 Ti. In terms of memory, the bandwidth increase is only 27% compared to 44%, while capacity stays the same versus almost doubling (although that matters less, because you can hardly ever max out these 11 gigs in real-world use).
> 
> Considering all of that, it looks like the 2080 Ti over the 1080 Ti is going to be more like the 980 Ti over the 780 Ti, about a 40% increase, rather than the 1080 Ti over the 980 Ti, which is 60-90% faster depending on the game.
> 
> So if you take, for example, 30 games and only half of them support SLI, and you average 1080 Ti SLI performance including the games with no SLI support, then the 2080 Ti is probably going to be faster on average. But it is very unlikely to match or surpass 1080 Ti SLI in SLI-supported games; it will have trouble even in ones with unsatisfying 1.6x scaling, not to mention ones with 1.8x+.
> 
> Don't get me wrong, I want the 2080 Ti to be very good, because if it isn't then I will be upgrading from 1080 SLI to 1080 Ti SLI two and a half years (!) after the Pascal launch, which would be quite ridiculous. If the 2080 Ti is only 30-35% faster than a 1080 Ti, it will be far from beating my 1080 SLI, let alone 1080 Ti SLI. Mostly it's the $999 price that creates this situation: if the 2080 Ti were $700, I would just take two even if a single card were only 20% faster than a 1080 Ti, and I would preorder them already because there would be nothing to worry about. But $999 is a freaking 5000 PLN (given the massive difference in earnings, it's as if you had to pay 4000 EUR in Germany or 4000 USD in the USA), and I am not going to pay that for something only 30% faster than a 1080 Ti; for many games that would be a downgrade from my current 1080 SLI, which I have had a very good time with.


Pal, we are exactly on the same page; I have 2 1080s in SLI, and I didn't bother to change to one 1080 Ti, even though a lot of games don't support SLI or have poor support; in the games where it works, SLI beats a single 1080 Ti. I would happily pay $1200 (or more, around $1500 here) to change to a single 2080 Ti that could run cooler and perform better in DX12 and non-SLI games.
If Nvidia's numbers on the 2080 are anything to go by, I made some cheap calculations:
2080 = 1080 Ti + 5%. The 1080 Ti averages 30% over the 1080, so 2080 ≈ 1.35 (1080 = 1).
Considering the 2080 Ti could attain 40% over the 1080 Ti in raw performance, it would be 1.30 * 1.4 ≈ 1.82. So if it's all true, that would be equivalent to 1.82x scaling, which is unbeatable by 1080 SLI, since I usually get 30-70% scaling, with some honorable exceptions that scale almost fully.

Another metric: considering 2080 = 1080 * 1.5 (let's assume 1.4 for a more realistic approach).
2944 CUDA cores vs 4352, so ~1.48x the cores (2080 Ti vs 2080). Throwing in 10% for eventual differences in frequency and -2% for bandwidth, let's assume 2080 Ti = 1.37 over the 2080. That means 1.3 * 1.37 ≈ 1.78 over a 1080.

Assuming more simplistic math: 1080 Ti = 1.3 x 1080, and 2080 = 1080 Ti. Keeping the same relation (1.3 for Ti vs regular, the minimum uplift I can expect), one 2080 Ti = 1.3 * 1.3 = 1.69. So in the worst scenario, a 2080 Ti means average performance of 1.69x a single 1080.

It's good enough for me to sell my 1080s and buy a 2080 Ti, but I will still wait for real benchmarks. If it's really that bad and the 2080 Ti is less than 20% over the 2080 (which I don't really expect), then I will skip it. I'm not in a hurry anyway; I just want to max out 144 Hz at 1440p.
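For anyone wanting to fiddle with the assumptions, the estimates above boil down to a few multiplications (Python; every input is the post's own guess, not a measurement):

```python
# Recomputing the rough estimates above, with a single 1080 = 1.0.
gtx1080ti = 1.30                # assumed ~30% faster than a single 1080

# Scenario 1: 2080 Ti at +40% raw performance over the 1080 Ti
est_raw   = gtx1080ti * 1.40    # about 1.82x a single 1080

# Scenario 2: 2080 Ti at 1.37x the 2080 (CUDA-core ratio minus a clock/bandwidth fudge)
est_cores = gtx1080ti * 1.37    # about 1.78x

# Scenario 3 (worst case): same Ti-vs-non-Ti gap as Pascal, 1.3 x 1.3
est_floor = 1.30 * 1.30         # about 1.69x

print(round(est_raw, 2), round(est_cores, 2), round(est_floor, 2))
```

All three land in the 1.7-1.8x band over a single 1080, which is why the conclusion hinges on whether typical SLI scaling beats that.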


----------



## postem

Madness11 said:


> Guys, is a PSU (Antec HCP 850 W) enough for a 2080 Ti? (6900K CPU)


Yep, it's more than enough. Granted, the 6900K uses a lot of power, but unless you have some heavy overclock I don't see issues.
I have 2 1080s + an overclocked 8700K (it draws less than a 6900K, but still); it's going to work just fine.

Even with a heavy OC, I don't think the 2080 Ti will go much above 350 W.
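As a sanity check, here is the rough power budget behind that answer (all figures are guesses from this discussion, not measurements):

```python
# Rough peak power budget for an 850 W PSU feeding a 2080 Ti + 6900K system.
gpu_peak   = 350   # W, heavily overclocked 2080 Ti (upper bound discussed above)
cpu_peak   = 150   # W, overclocked 6900K
rest       = 100   # W, motherboard, RAM, drives, fans
psu_rating = 850   # W, Antec HCP 850

total = gpu_peak + cpu_peak + rest
headroom = psu_rating - total
print(f"{total} W peak draw vs {psu_rating} W PSU: {headroom} W of headroom")
```

Even with pessimistic numbers there is a couple hundred watts of headroom, which also keeps the PSU in its more efficient load range.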


----------



## Madness11

postem said:


> Yep, it's more than enough. Granted, the 6900K uses a lot of power, but unless you have some heavy overclock I don't see issues.
> I have 2 1080s + an overclocked 8700K (it draws less than a 6900K, but still); it's going to work just fine.
> 
> Even with a heavy OC, I don't think the 2080 Ti will go much above 350 W.


Do you also have 850 W?


----------



## xer0h0ur

I'm so over multi-GPU, it's not even close. I tried it with Nvidia and AMD. Both always left a lot to be desired and turned out, in the end, to be more about e-peen benchmarking than getting to enjoy it in gaming. I will forever stick to buying the strongest single GPU I can afford when I am upgrading.


----------



## postem

Madness11 said:


> Do you also have 850 W?


Yes, an 850W Seasonic Gold PSU. I love Seasonic PSUs, top quality. I had a 650W one before with a single 1080; I had a massive power surge that even fried the elevator in my apartment building, and the PSU still survived another two weeks. Well, at least it kept the hardware intact.

Anyway, my two 1080s usually draw 400W combined at max, plus 150W max for the CPU. I would only change to a 1000W PSU for two 1080 Tis just for safety, but I'm still confident an 850W PSU can handle two 1080 Tis.

I used to run a 5970 on a 750W PSU, and that card was such a massive power hog, it was hitting near 400W at full load.


----------



## postem

xer0h0ur said:


> I'm so over multi-GPU, it's not even close. I tried it with Nvidia and AMD. Both always left a lot to be desired and turned out, in the end, to be more about e-peen benchmarking than getting to enjoy it in gaming. I will forever stick to buying the strongest single GPU I can afford when I am upgrading.


I have mixed feelings about SLI. When it works, it sometimes works very nicely, like Hitman with 1.8x scaling, or The Witcher 3 where I manage to get 130 FPS at 1440p; other games scale worse, like Kingdom Come or Shadow of War, where the gain is more like 30-40%.

What leaves a bad taste in the mouth is that a lot of recent games don't support it or offer very poor support, like AC Origins, which runs on some sort of virtualization and uses a renderer that is incompatible with SLI. Ports in general, or cross-platform titles, usually don't have good SLI support. Most DX12 games today are just repurposed DX11 code, with notable exceptions like Hitman and Sniper Elite 4, which use native multi-GPU rendering and offer incredibly good scaling.


----------



## ClashOfClans

Krzych04650 said:


> The 1080 Ti was a massive jump over the 980 Ti: in some memory-intensive games at 4K you can get 80-90% more FPS than a 980 Ti, stock vs stock, and it is almost 2x in a graphically intensive game at high resolution when you are not bottlenecked by anything else, hence why a single 1080 Ti can sometimes trade blows with 980 Ti SLI, even in games that scale well. Memory capacity and bandwidth were almost doubled, plus the jump from 28 nm to 16 nm.
> 
> Right now, not only is the 2080 Ti staying on essentially the same process (12 nm is not a massive jump, and is probably just a name), but the other changes are much smaller too. Looking at the core specs, the 2080 Ti typically has only about 20% more than the 1080 Ti, while the 1080 Ti had around 27% more than the 980 Ti. In terms of memory, the bandwidth increase is only 27% compared to 44%, while capacity stays the same versus almost doubling (although that matters less, because you can hardly ever max out these 11 gigs in real-world use).
> 
> Considering all of that, it looks like the 2080 Ti over the 1080 Ti is going to be more like the 980 Ti over the 780 Ti, about a 40% increase, rather than the 1080 Ti over the 980 Ti, which is 60-90% faster depending on the game.
> 
> So if you take, for example, 30 games and only half of them support SLI, and you average 1080 Ti SLI performance including the games with no SLI support, then the 2080 Ti is probably going to be faster on average. But it is very unlikely to match or surpass 1080 Ti SLI in SLI-supported games; it will have trouble even in ones with unsatisfying 1.6x scaling, not to mention ones with 1.8x+.
> 
> Don't get me wrong, I want the 2080 Ti to be very good, because if it isn't then I will be upgrading from 1080 SLI to 1080 Ti SLI two and a half years (!) after the Pascal launch, which would be quite ridiculous. If the 2080 Ti is only 30-35% faster than a 1080 Ti, it will be far from beating my 1080 SLI, let alone 1080 Ti SLI. Mostly it's the $999 price that creates this situation: if the 2080 Ti were $700, I would just take two even if a single card were only 20% faster than a 1080 Ti, and I would preorder them already because there would be nothing to worry about. But $999 is a freaking 5000 PLN (given the massive difference in earnings, it's as if you had to pay 4000 EUR in Germany or 4000 USD in the USA), and I am not going to pay that for something only 30% faster than a 1080 Ti; for many games that would be a downgrade from my current 1080 SLI, which I have had a very good time with.


If you upgrade now to 1080 Ti SLI, wouldn't that only be 1.5 years after the Pascal launch? The 1080 Ti was released in 2017, I think. Or was it 2016?

Anyway, I agree with everything you said except your predicted performance numbers. The 2080 Ti is going to be at least 50% faster overall than the 1080 Ti, and much faster still in DLSS-supported games IMO. That's just what my gut tells me right now. I think there are going to be a lot more people wishing they had preordered, because Nvidia has an ace up their sleeve that will be revealed when the reviews come out, and eBay has these cards selling for $2000 USD and up. These prices are unfortunate, but it's what happens when a company is competing against itself.

I think it's a mistake going to 1080 Ti SLI this late in the game for you anyway. It seems you would benefit more from getting one 2080 Ti now, then maybe another when prices come down a bit. But if you feel you are paying the equivalent of 4000 USD for a new 2080 Ti... that would be an insane purchase.

I did get a used 1080 Ti after canceling my 2080 Ti order, however. $1300 is too much for me after some thought, and while it's worth it for hardcore gamers, it's not for me, as I'm still heavily into older titles that run well at 4K with one 1080 Ti. I'm still on Fallout 4, which I've played on and off for almost two years now, and there are actually more older titles I would like to get into.


----------



## carlhil2

Don't know if this has already been posted. 



He says at the 33-minute mark that at 4K the 2080 should be about 50% faster than the 1080; he didn't seem too sure though, lol...


----------



## Jpmboy

eh - I'm not an avid gamer like Callsign or 99.9% of you guys. But I have had 4K for a very long time, and 4K60, even OLED (running for the past year or so) and I just can't get excited about any high frame rates on a 60Hz refresh. 4K 120 or 144 is when I'll get excited. 


What benefit is a 100FPS rate on a panel that can only refresh at 60Hz??


----------



## carlhil2

Jpmboy said:


> eh - I'm not an avid gamer like Callsign or 99.9% of you guys. But I have had 4K for a very long time, and 4K60, even OLED (running for the past year or so) and I just can't get excited about any high frame rates on a 60Hz refresh. 4K 120 or 144 is when I'll get excited.
> 
> 
> What benefit is a 100FPS rate on a panel that can only refresh at 60Hz??


Lol, true, but I often get dips into the 40s in IL-2 with lots of planes/trees at 4K. I need that 2080 Ti...


----------



## Jpmboy

carlhil2 said:


> Lol, true, but I often get dips into the 40s in IL-2 with lots of planes/trees at 4K. *I need that 2080 Ti*...


oh yeah... I do too. But I have an affliction.


----------



## Madness11

Hey guys, can someone please tell me, what is this (RTX 2080 Ti):
RTX-OPS 78T / 76T
Why does the reference have more RTX-OPS than the non-reference? And what are RTX-OPS?


----------



## Fitzcaraldo

RTX-OPS is the number of operations per unit of time that the RT cores and Tensor cores combined can deliver. Think ray tracing, DLSS and similar.

Where did you get the second metric from? Keep in mind, the FEs are OC'd this time around and thus go above the reference spec.


----------



## Rob w

I don't know, everybody keeps speculating on the performance of this card, on and on it goes!
To me, it is an upgrade from my 980 Ti = good.
To me, as a comparison to my Titan V = don't know yet.
But whatever the benchmark results are, I feel it will be an upgrade of some sort over previous cards.
No, I did not need it! It was a want for the latest tech; I was able to go for it, so I did, and as a result I will have on my doorstep (one day) an RTX 2080 Ti FE and I will love it!


----------



## Krzych04650

ClashOfClans said:


> If you upgrade now to 1080 Ti SLI, wouldn't that only be 1.5 years after the Pascal launch? The 1080 Ti was released in 2017, I think. Or was it 2016?
> 
> Anyway, I agree with everything you said except your predicted performance numbers. The 2080 Ti is going to be at least 50% faster overall than the 1080 Ti, and much faster still in DLSS-supported games IMO. That's just what my gut tells me right now. I think there are going to be a lot more people wishing they had preordered, because Nvidia has an ace up their sleeve that will be revealed when the reviews come out, and eBay has these cards selling for $2000 USD and up. These prices are unfortunate, but it's what happens when a company is competing against itself.
> 
> I think it's a mistake going to 1080 Ti SLI this late in the game for you anyway. It seems you would benefit more from getting one 2080 Ti now, then maybe another when prices come down a bit. But if you feel you are paying the equivalent of 4000 USD for a new 2080 Ti... that would be an insane purchase.
> 
> I did get a used 1080 Ti after canceling my 2080 Ti order, however. $1300 is too much for me after some thought, and while it's worth it for hardcore gamers, it's not for me, as I'm still heavily into older titles that run well at 4K with one 1080 Ti. I'm still on Fallout 4, which I've played on and off for almost two years now, and there are actually more older titles I would like to get into.


It all depends on 2080 Ti performance; I need it to at least match my 1080s at average scaling, which for me was around 80% in the games I played. And that would only bring my current average SLI performance to all games regardless of SLI support, with no real upgrade, and it would even be a downgrade from peak performance when I get 95%+ scaling, which is not that uncommon; basically games either have mediocre 50-60% scaling or great 90%+ scaling, there isn't much in between.

As for the price, this insanity, as you call it, is everyday life here. Earnings are four times smaller than in the USA (and that is probably optimistic, because the median monthly salary is around 2500 PLN net, roughly $675; the average is 3400 PLN = $918, and I am slightly below average, which isn't bad at 23), and thanks to VAT the prices are also higher; for GPUs we typically pay 30% more than in the USA. The cheapest 2080 Tis start from 5000 PLN, $1350, two median monthly salaries. How much of a median salary does it take in the US or Germany? A third? And how is that proportional to all the crying on the web? You are all crying like you had to spend your life savings, while in reality you are barely spending a weekly wage.
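For what it's worth, the affordability ratios above check out directly (a quick sketch; all inputs are the numbers quoted in the post):

```python
# Price as a multiple of local income, using the figures quoted above.
median_salary_pln = 2500     # net monthly salary, Poland
cheapest_2080ti_pln = 5000   # cheapest local 2080 Ti listings

print(cheapest_2080ti_pln / median_salary_pln)  # 2.0 median monthly salaries

# The ~30% local markup over US pricing, applied to the $999 MSRP:
msrp_usd = 999
print(round(msrp_usd * 1.30))  # roughly in line with the ~$1350 street price quoted
```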


----------



## xer0h0ur

Jpmboy said:


> eh - I'm not an avid gamer like Callsign or 99.9% of you guys. But I have had 4K for a very long time, and 4K60, even OLED (running for the past year or so) and I just can't get excited about any high frame rates on a 60Hz refresh. 4K 120 or 144 is when I'll get excited.
> 
> 
> What benefit is a 100FPS rate on a panel that can only refresh at 60Hz??


We are just barely reaching the point where I can play at a solid 60 FPS with eye candy at 4K. I'm not about to wait to see when I can finally maintain FPS at 120Hz or higher refresh rates. That feels like ages away. I'm simply happy an RTX 2080 Ti is going to let me lock to v-sync and maintain 4K performance. I am of course happy to see that its also going to push my 144Hz 1440p monitor.


----------



## Shadowsong

Do we know yet if the EVGA cards will be fine with the EK waterblocks? I'm assuming they are reference design, but you never know!


----------



## Vow3ll

EVGA RTX 2080 FTW3 - built like a brick house, and probably part of the reason behind the added cost of the 20 series. Like Buildzoid, I can't wait to see what they did for the Ti versions.


----------



## sblantipodi

Shadowsong said:


> Do we know yet if the EVGA cards will be fine with the EK waterblocks? I'm assuming they are reference design, but you never know!


As far as I know they are not reference: 13 phases on the FE and 16 phases on the EVGA.


----------



## rush2049

Man, this HotHardware interview with Tom Petersen is great:


----------



## toncij

xer0h0ur said:


> I'm not even touching any new Intel processors until they actually make something that isn't vulnerable to Specter, Meltdown, Foreshadow and god knows how many other problems that simply haven't been taken public or found.


You know that might be years and years away? CPU architectures are designed generations in advance.


----------



## rush2049

toncij said:


> You know that might be years and years away? CPU architectures are designed generations in advance.


I think that is exactly what was meant.

In my day-job we are no longer purchasing intel based servers.


----------



## shiokarai

postem said:


> I have mixed feelings over SLI. When it works, it works sometimes very very nicely, like hitman with 1.8 scaling, or Witcher 3 where i can manage to get 130fps on 1440p, or it have some kind of worse scaling like Kingdom come or shadow of war, with performance hitting between 30-40% over.
> 
> What leaves a bad taste in mouth is that a lot of recent games doesnt support or offer very bad support, like AC Origins, which run on some sort of virtualization but all uses a kind of renderer that is incompatible with SLI. Ports in general, or cross plataform usually dont have good SLI support. DX12 games, the majority of it now, are just repurposed dx11 code, with notable exceptions like Hitman and Sniper Elite 4 that uses native multirendering offering incredible good scaling.


I share the exact same sentiment... running Dishonored 2 at 6880x2880 DSR at 60fps on the PG348Q (2x 1080 Ti) was certainly epic and lovely to watch/play, but at the same time, e.g. Doom 2016 or Wolfenstein II not supporting SLI was a big letdown... Let's see what NVLink changes (if anything).


----------



## toncij

rush2049 said:


> I think that is exactly what was meant.
> 
> In my day-job we are no longer purchasing intel based servers.


We ordered 12 EPYCs last (first) gen, and this year I'm probably moving to the 16-core TR, since the 32-core has a massacred memory controller.



shiokarai said:


> I share the exact same sentiment... running Dishonored 2 at the 6880x2880 DSR with 60fps on the PG348Q (2 x 1080Ti) was certainly epic and lovely to watch/play but at the same time eg. Doom 2016 or Wolfenstein II not supporting SLI was a big letdown... Let's see what NVLink will change (if anything)


And how do you think that'd happen? NVLink can, in some cases, help with scaling, visible stuttering, but won't change a thing for unsupported cases. And it can't.


----------



## Ford8484

Not sure if anyone saw this yet: https://wccftech.com/nvidia-rtx-2080-ti-2080-2070-are-40-faster-vs-pascal-in-gaming/. Apparently it's 40% faster than the 1080 Ti without DLSS. That's OK... but for the cost, I'm not sure. If DLSS gets implemented in most AAA games I think this card will be worth it. But it sure as hell is better than some outlets claiming a 10-20% increase in normal games.


----------



## Jpmboy

Ford8484 said:


> Not sure if anyone saw this yet: https://wccftech.com/nvidia-rtx-2080-ti-2080-2070-are-40-faster-vs-pascal-in-gaming/. Apparently it's 40% faster than the 1080 Ti without DLSS. That's OK... but for the cost, I'm not sure. If DLSS gets implemented in most AAA games I think this card will be worth it. But it sure as hell is better than some outlets claiming a 10-20% increase in normal games.



that's sourced from NV PR/marketing.


----------



## Spiriva

I saw this video being posted over at a Swedish forum (the language is however not Swedish, so I have no clue what he says).
Also, I have no clue if it's real or just made-up numbers; there is no proof that this guy owns a 2080 Ti.







The following is also copied from the same Swedish forum, and taken from this video:

*PUBG*
_Ultra, 4K_
_*RTX 2080 Ti*_
Max: 76.1
Average: 61.6
Min: 47.3

*GTX 1080 Ti*
Max: 51.9
Average: 42.5
Min: 39.2

*Rainbow 6*
_Ultra, 4K_
_*RTX 2080 Ti*_
Max: 128.6
Average: 90.1
Min: 64.3

*GTX 1080 Ti*
Max: 89.2
Average: 61.6
Min: 54.7

*The Witcher 3*
_Ultra, 4K_
_*RTX 2080 Ti*_
Max: 68.5
Average: 56.4
Min: 41.2

*GTX 1080 Ti*
Max: 51.1
Average: 44.1
Min: 29.7

*The Crew 2*
_Ultra, 4K_
_*RTX 2080 Ti*_
Max: 61.6 (60 fps cap?)
Average: 58.3
Min: 56.7

*GTX 1080 Ti*
Max: 57.5
Average: 47.3
Min: 42.1

*For Honor*
_Ultra, 4K_
_*RTX 2080 Ti*_
Max: 109.6
Average: 78.3
Min: 61.3

*GTX 1080 Ti*
Max: 72.4
Average: 51.7
Min: 42.6

*Battlefield V*
_Ultra, 4K_
_*RTX 2080 Ti*_
Max: 96.6
Average: 75.2
Min: 52.3

*GTX 1080 Ti*
Max: 67.4
Average: 54.5
Min: 46.5

*GTA V*
_Ultra, 4K_
_*RTX 2080 Ti*_
Max: 119.6
Average: 78.2
Min: 58.8

*GTX 1080 Ti*
Max: 86.3
Average: 61.5
Min: 53.7

*Mass Effect Andromeda*
_Ultra, 4K_
_*RTX 2080 Ti*_
Max: 81.0
Average: 67.3
Min: 50.3

*GTX 1080 Ti*
Max: 54.6
Average: 48.4
Min: 33.7

*Far Cry 5*
_Ultra, 4K_
_*RTX 2080 Ti*_
Max: 93.1
Average: 68.9
Min: 55.6

*GTX 1080 Ti*
Max: 56.5
Average: 44.4
Min: 32.7

*Call of Duty WW2*
_Ultra, 4K_
_*RTX 2080 Ti*_
Max: 87.2
Average: 61.1
Min: 46.2

*GTX 1080 Ti*
Max: 71.7
Average: 50.0
Min: 42.1
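Taking the leaked average figures above at face value (decimal commas read as decimal points; accuracy of the leak not guaranteed), a quick sketch of the per-game and overall uplift:

```python
# (game, RTX 2080 Ti avg FPS, GTX 1080 Ti avg FPS), copied from the list above
results = [
    ("PUBG",                  61.6, 42.5),
    ("Rainbow 6",             90.1, 61.6),
    ("The Witcher 3",         56.4, 44.1),
    ("The Crew 2",            58.3, 47.3),  # possibly 60 fps capped
    ("For Honor",             78.3, 51.7),
    ("Battlefield V",         75.2, 54.5),
    ("GTA V",                 78.2, 61.5),
    ("Mass Effect Andromeda", 67.3, 48.4),
    ("Far Cry 5",             68.9, 44.4),
    ("Call of Duty WW2",      61.1, 50.0),
]

# Percent uplift per game, then the simple mean across games
for game, new, old in results:
    print(f"{game:24s} +{(new / old - 1) * 100:.0f}%")

avg = sum(new / old - 1 for _, new, old in results) / len(results)
print(f"Average uplift: +{avg * 100:.0f}%")  # roughly +38%
```

So even if these numbers are genuine, they suggest a high-30s percent average gain at 4K, not the 50%+ some posts hope for.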


----------



## CallsignVega

If those numbers are right, 2080 Ti may be slightly slower than the V.

Also, disappointing news from the Tom Petersen interview: looks like SLI works the same as before. Cancelling the second card and the NVLink bridge.


----------



## Foxrun

CallsignVega said:


> If those numbers are right, 2080 Ti may be slightly slower than the V.
> 
> Also disappointing news from the Tom interview. Looks like SLI works the same as old. Cancelling the second card and NVLink.


Eh I might cancel both and ride the V till a new titan.


----------



## Jpmboy

I'm waiting on the Titan T also (I'm trying anyway). IMO, the pair of TXPs in one rig here is still a tough act to beat.


----------



## Foxrun

Jpm, what type of overclock are you able to hit with your V? I feel one of the more exciting aspects of the 2080 Ti is its rumored OCing capability. That alone MIGHT push me over the edge.


----------



## drfouad

I ordered two Nvidia FE Tis to replace my Titan Xp. If one is enough, I will get rid of the second one.


----------



## Jpmboy

Foxrun said:


> Jpm, what type of overclock are you able to hit with your V? I feel one of the more exciting aspects of the 2080Ti is it's rumored ocing capability. That alone MIGHT push me over the edge.


It'll bench things like Time Spy Extreme at 2100 core / 1067 on the HBM2 RAM all day... games solid at 2075 and below (1440p 144Hz). For the nominal gaming I do, I prefer 1440p and 144Hz; 4K60 is pretty but not my favorite. The extended family kids actually prefer 4K60 with the SLI TXPs (X99 rig) - who's to say. Main thing is to keep the TV below 40C in my experience. I mean, if you have a 2080 Ti preordered from NV, just check it out on air... if it's not what you want, NV (Digital River?) takes returns in 30 days, no questions asked. :thumb:
I guess I just have this issue with frame rates that far exceed the panel's refresh rate.


----------



## Jbravo33

Jpmboy said:


> it'll bench things like time spy extreme at 2100 core 1067 on the HBM2 ram all day... games solid at 2075 and below (1440P 144Hz). For the nominal gaming I do, I prefer 1440P and 144Hz, 4K60 is pretty but not my favorite. The extended family kids actually prefer 4K60 with the SLI TXps (x99 rig) - who's to say . Main thing is to keep the TV below 40C in my experience. I mean, if you have a 2080Ti preordered from NV, just check it out on air... if it's not what you want, NV (digital river?) takes returns in 30d no questions asked. :thumb:
> I guess I have this issue with frame rates that far exceed the panel's refresh rate.


The X27 is calling your name... xD


----------



## Mooncheese

Jpmboy said:


> that's sourced from NV PR/marketing.


Thank you, I feel like I'm the only one who gets this. This is coming from the head of Technical Marketing, who has been known to exaggerate performance in the past. Emphasis on marketing.

I would take whatever he says with a massive grain of salt.

Especially after their cherry-picked 4K+HDR+G-Sync "benchmarks" comparing the 2080 to the 1080, when the memory-bandwidth-limited 1080's struggle at 4K became apparent once Vega benchmarks surfaced and showed the Vega 64 (and possibly the 56) faster at 4K even though it was slower at 1080p.

Basically everything AdoredTV has said: 












I have a 3440x1440 120 Hz display, the AW3418DW, and I also use my PG278Q for 2560x1440 3D Vision in games where it looks good. I've been wanting to upgrade from the 1080 Ti, but man: $1300 for a 2080 Ti that's probably 30% faster at 1440p in reality. Not at all thrilled.

I'm thinking of just skipping Turing altogether. Way to price gouge your loyal consumer base on the heels of the cryptocurrency boom (and subsequent crash, which they seem oblivious to; but they can and have created the illusion of high demand for the 2080 Ti by only selling 5k of them for pre-order, and it doesn't take a genius to see what's going on here).

GTX 1080 Ti, 50% faster than 980 Ti: $699
GTX 980 TI, 50% faster than 780 Ti: $699
RTX 2080 Ti, maybe 30% faster than 1080 Ti: $1200

33% of the die wasted on Ray Farce. Dude, I don't want to play a first-person shooter at 60 FPS at 1920x1080 on my 3440x1440 monitor just so I can see reflections in windows. Not when the alternative is playing at 3440x1440 native at 120 FPS. Same card, Ray Farce turned off. Ditto Shadow of the Tomb Raider.

Here's the real rub of Ray Farce. All of those games in your Steam and Origin library? Yeah none of them have Ray Farce. 

Everyone's talking about how massive TU102 is. It's NOT massive when you consider the fact that 33% of the die is Ray Farce dead weight. 

Look, I'm all for greater fidelity, but this crap could have waited until the node dropped to 7nm. Now, instead of getting our usual 50-60% bump in performance we get half of that, and all of the cards have been significantly marked up. Meaning: Nvidia wants you to pay for crap you will probably never turn on.

Imagine how fast these cards would be with the same die size but without 33% of said die allocated to ray tracing. The 2080 Ti would be another 30% or so faster on top of the 30-40% it is now, right in line with the 50-60% performance jumps we've seen between architectures going back to Kepler. Ditto the 2080 and 2070. Whoever insisted that now was the time for Ray Farce ought to be fired. I'm absolutely not kidding. And the attempted circulation of this meme equating Jensen Huang with visionaries of yore? I'm sorry, but NO. This is just a cash grab because AMD is missing in action. Watch them drop the actual Titan card in 3 months for $1200, drop the price of the 2080 Ti to $899, and release 7nm Volta later next year depending on what AMD does.

And the price bump, for what, ray farce?

RTX 2070 is basically the 2060 at $600 (considering it's a GTX 1080+, and the 60 card has historically been as fast as the outgoing 80 card; no benches yet, but the 2070 will sit somewhere between the 1080 and 1080 Ti, considering the 2080 is not faster than the 1080 Ti).

RTX 2080 is basically an $800 2070 (considering the 70 card has historically been as fast as / slightly faster than the outgoing 80 Ti card).

RTX 2080 Ti is basically a $1200 Titan, conveniently renamed as a nice big fat middle finger to all of us who wait out the Titan card for the 80 Ti card. Some "smart" bean counter figured: "Hey, we can just rename the Titan card the 80 Ti card, and then everyone who normally buys the 80 Ti card will be forced to pay Titan price for it! I mean, they were paying $1000 for the 1080 Ti because of the crypto boom, so they certainly have the wherewithal. We can all get a raise, and you, Mr. Huang, can get your second Bugatti Chiron for your 6th summer home on that Hawaiian island you purchased last week! They have plenty of money, there's still a lot we can squeeze out of them. A lot of them will have to eat ramen for maybe 6 months to pay the difference between $700 and $1200, but hey, do you want your second Bugatti Chiron? LET THEM EAT RAMEN!"

Fan boys flame on, someone has to call this crap out, I don't see a whole lot of it happening here.


----------



## shiokarai

toncij said:


> And how do you think that'd happen? NVLink can, in some cases, help with scaling, visible stuttering, but won't change a thing for unsupported cases. And it can't.


I HOPE it'll help somehow with bandwidth issues, especially on mainstream platforms... I don't expect it to be a miracle, unfortunately. But without commitment on Nvidia's side it's just less and less desirable to run SLI, basically a waste of money.


----------



## shiokarai

Phanteks announced their RTX blocks:

https://www.techpowerup.com/247195/...d-g2080ti-water-blocks-for-geforce-rtx-series

Available at the end of September - or so they say.


----------



## Spiriva

Mooncheese: AdoredTV is, however, the worst AMD fanboy you can find on YouTube.


Meanwhile at EK:

"EK Water Blocks
‏ @EKWaterBlocks
27 aug.

Here's a little teaser for you. Pre-orders are coming soon. "

Just put the waterblock up for preorder EK, and TAKE MY MONEY!!!


----------



## Jpmboy

Jbravo33 said:


> The X27 is calling your name... xD


Acer 27 inch? What's that for, a laptop?
JK - it's a nice panel, but I passed it up hoping for something in the high-30s to low-40-inch range. 4K/120 at 40 inches is just right.


----------



## Spiriva

Here is another one of them leaks, but again, I don't know if it's true or just random fantasy numbers:

2080ti vs 1080ti


----------



## toncij

What are the pros and cons of the X27 vs the PG27UQ?
Regarding "leaks": it's all fake. Not even AIBs have gotten drivers as of today.

Regarding performance: the TITAN V has significantly more cores. It would take some serious magic to make 4352 cores as fast as 5120, and I'm afraid everything Turing does comes mostly from Volta.


----------



## JackCY

toncij said:


> What are the pros and cons of X27 vs PG27UQ?
> Regarding "leaks" - it's all fake. Not even AIBs got drivers by today.
> 
> Regarding performance: TITAN V has significantly more cores. It would have to be some serious magic to make 4352 cores as fast as 5120, and I'm afraid, everything Turing does is mostly from Volta.


Which is mostly from Pascal, which is mostly from Maxwell.

Maxwell -> Pascal: a node shrink, that's it; the new node allowed higher clocks, and some bulk was chopped up into more pieces so it runs a little faster.
Pascal -> Volta: added some processing cores, useful for what? AI etc., and that's about it.
Volta -> Turing (RTX, not GTX): add RT cores, mmm, that's it.

Plus a sizeable price hike each generation, just because they can, with there being almost no competition in PC graphics.

Why are Volta and Turing bigger dies? Because the dedicated AI and RT hardware takes space.


----------



## Ford8484

toncij said:


> What are the pros and cons of X27 vs PG27UQ?
> Regarding "leaks" - it's all fake. Not even AIBs got drivers by today.
> 
> Regarding performance: TITAN V has significantly more cores. It would have to be some serious magic to make 4352 cores as fast as 5120, and I'm afraid, everything Turing does is mostly from Volta.


Pretty much aesthetics are the only differences between the X27 and PG27UQ. The X27 is not as tacky, imo; the Asus looks more ridiculous and has larger bezels and stand. Though the OSD on the Asus looks a bit better. I've heard the fan is louder on the Asus, but I've also heard the same about the Acer. It's really just subjective personal taste. They're identical in gaming.


----------



## Mooncheese

Everyone who can swing PG27UQ and X27, I would wait for either PG35VQ or X35. Curved 21:9 completely blows flat 16:9 away, like it's not even a fair comparison. 







60 FPS? 3440x1440 has about 40% fewer pixels than 3840x2160 (put the other way, 4K is 67% more pixels), meaning roughly 67% more framerate. It's 34% more pixels than 2560x1440, to give you an idea. That would be around 120 FPS everywhere with a 2080 Ti; meanwhile, you're going to be struggling to get 60-70 FPS at 4K.

4K is so completely overrated. 1440p is absolutely where it's at. It's perfectly sharp at a 1.5-2 ft viewing distance. I don't understand this fixation with more pixels.
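For reference, the pixel counts behind this argument work out as follows (this sketch assumes framerate scales linearly with pixel count, which real games only approximate):

```python
# Pixel counts for the resolutions discussed above
uw1440 = 3440 * 1440   # ultrawide 1440p
uhd    = 3840 * 2160   # 4K UHD
qhd    = 2560 * 1440   # 16:9 1440p

print(f"4K has {uhd / uw1440 - 1:.0%} more pixels than 3440x1440")         # 67%
print(f"3440x1440 has {1 - uw1440 / uhd:.0%} fewer pixels than 4K")        # 40%
print(f"3440x1440 has {uw1440 / qhd - 1:.0%} more pixels than 2560x1440")  # 34%

# If performance scaled perfectly with pixel count, 60 FPS at 4K would
# correspond to roughly this at 3440x1440:
print(f"{60 * uhd / uw1440:.0f} FPS")  # ~100 FPS
```

In practice GPU-bound games scale a bit worse than linearly with pixel count, which is why owners of both panels report smaller gains than the raw ratio suggests.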


----------



## BigMack70

Mooncheese said:


> Everyone who can swing PG27UQ and X27, I would wait for either PG35VQ or X35. Curved 21:9 completely blows flat 16:9 away, like it's not even a fair comparison.
> 
> https://www.youtube.com/watch?v=0afwE6GVLQw&t
> 4K is so completely overrated. 1440p is absolutely where it's at. It's perfectly sharp at 1.5-2 ft viewing distance. I don't understand this fixation with more pixels.


Large format gaming on 40"+ screens is amazing and requires more pixels. I don't, however, understand why anyone would go 4k over 1440p below 40"


----------



## Ford8484

Mooncheese said:


> Everyone who can swing PG27UQ and X27, I would wait for either PG35VQ or X35. Curved 21:9 completely blows flat 16:9 away, like it's not even a fair comparison.
> 
> https://www.youtube.com/watch?v=0afwE6GVLQw&t
> 
> 
> 60 FPS? 3440x1440 is 67% less pixels than 3840x2160, meaning 67% more framerate. It's 25% more pixels than 2560x1440 to give you an idea. That would be around 120 FPS everywhere with a 2080 Ti, meanwhile, youre going to be struggling to get 60-70 FPS at 4K.
> 
> 4K is so completely overrated. 1440p is absolutely where it's at. It's perfectly sharp at 1.5-2 ft viewing distance. I don't understand this fixation with more pixels.


I see what you're saying, but I game on the PS4 Pro a lot as well, so the X27 scales to that much better than those monitors. It's better all around in gaming imo... any news about these, though? I am curious about them.


----------



## Mooncheese

Ford8484 said:


> I see what your saying, but I game on the PS4 Pro a lot as well- so the X27 scales to that much better then these monitors. Its better all around in gaming imo...any news about these though? I can am curious about them.


But aren't you, for the most part, given the option between higher framerate and 4K? I understand the anemic CPU bottleneck makes 4K the desirable option on the PS4 and Xbox One X, but these 21:9 panels absolutely will do 16:9 at whatever resolution you desire over HDMI. It may not look as crisp as actual 4K because of interpolation, but yeah, that's probably the only valid argument for going with one of these $2k+ 4K monitors.

No news; they are slated for release either late this year or early next. Given that the industry gets most of its sales during Q4 because of the holidays, there's a very good chance they will be released before the end of the year.


----------



## rolldog

I just got a notification from Nowinstock that Newegg had EVGA 2080 Tis in stock. I added one to my cart, got to the billing page, and my stupid iPad Pro prefilled some crap in the Newegg gift card section. By the time I deleted it and found my credit card number (if I chose PayPal, it kept filling in the Newegg gift card field with my name), I got a message telling me it wasn't in stock anymore. There must be people lurking on that website waiting for new inventory, or preorders, to show up. That's ridiculous.


----------



## toncij

Regarding this pixel discussion: high PPI is there not for increased space, but for fidelity. Also, you can't simulate it; 5K is nowhere near rendering at 200% on a 1440p display. The difference is physical.
You need to see it to understand. It's not as important for games as for everything else. Let me put it this way: if someone offered me a curved 7680x2880+, 120+Hz display, either VA or IPS, at 34"+ (leaving the option of 38" at 7680x3200), I'd probably shell out *any amount of money asked*. It would be a zero-day purchase, a preorder. The 4K@120+ is nice, but scaling at 200% makes it a bit hard to justify for me; that's only 1080 of usable space, and Windows is not really great at non-integer scaling. For games I'm perfectly fine with my PG279Q, although certain games I enjoy playing on a 5K (Blizzard games all run at 60+ fps at full-quality 5K).


----------



## rolldog

I’ll buy one and decide for myself.


----------



## Ferreal

I'll still keep my preorders for two 2080 Tis to see how they compare to the Titan V.


I wonder how the Titan V will perform at ray tracing without RT cores.


----------



## ClashOfClans

Mooncheese said:


> Why don't you just wait until non FE 2080 Ti becomes available for another $200?
> 
> You do understand that 2080 isn't faster than 1080 Ti correct?


The 2080 is faster than a 1080 Ti actually; this is evident in the game benchmarks Nvidia released, which were shown without DLSS. I believe the 2070 will be slightly faster than or on par with a 1080 Ti, and that the 2080 Ti will be faster than 1080 Ti SLI.

So...

2080 Ti > 1080 Ti SLI > 2080 > 2070 > 1080 Ti > 2060 > 1080. That will be the order from fastest to slowest once the reviews come out. I expect the 1080 Ti and 2070 to possibly trade blows, and the 1080 and 2060 to be close in performance. I feel this is the most realistic scenario from an objective standpoint, rather than someone trying to take sides.


----------



## Jbravo33

Mooncheese said:


> Everyone who can swing PG27UQ and X27, I would wait for either PG35VQ or X35. Curved 21:9 completely blows flat 16:9 away, like it's not even a fair comparison.
> 
> https://www.youtube.com/watch?v=0afwE6GVLQw&t
> 
> 
> 60 FPS? 3440x1440 is 67% less pixels than 3840x2160, meaning 67% more framerate. It's 25% more pixels than 2560x1440 to give you an idea. That would be around 120 FPS everywhere with a 2080 Ti, meanwhile, youre going to be struggling to get 60-70 FPS at 4K.
> 
> 4K is so completely overrated. 1440p is absolutely where it's at. It's perfectly sharp at 1.5-2 ft viewing distance. I don't understand this fixation with more pixels.


67% more frames? That's not the case at all. I have a Dell AW34 sitting next to an Acer X27, and I don't come anywhere near a 67% increase in fps with a Titan V. Not sure where you got that from.


----------



## mustafa811

The 2080 will be 5% faster than the 1080 Ti, and this 5% may increase to 10-15% in the best-case scenario with driver updates and software optimizations. The 2080 Ti will be 50% faster than the 1080 Ti, and it may reach 65% with driver updates and optimizations.

This info is based on the latest leaks.

It might be wrong, but I don't think so... the pricing says it all.


----------



## Glerox

BigMack70 said:


> That is an absolute best case scenario for SLI and does not represent average performance at all. Average SLI performance is going go be +40% to +70% depending on what set of games actually gets tested. I'll take single GPU at 140% over SLI any day of the week. My single 1080 Ti is vastly superior to my previous Titan X Maxwell SLI setup, even though there are a few games where the Titan X SLI was a decent clip faster. The overall performance across all games is better on the 1080 Ti. Too many games just don't use SLI at all.


This. 

Ditching Titan XP SLI for 2080TI.

Will never go SLI again.


----------



## renejr902

Raytracing RTX: The way it's meant to be OFF

Tensor cores DLSS: The way it's meant to be FAKE

"4352" CUDA cores: The way it's meant to be beaten by a Titan V

$1200: The way it's meant to be greed.

A stupid consumer like me that will still buy it: the way it's meant to be ripped off. LOL! I'm just kidding, don't be offended in any way.


I will still buy a 2080 Ti, because Nvidia's marketing chief said the 2080 Ti will be 35% to 45% better in non-RTX, non-DLSS games, and it's my best option for 4K at 60+ fps.
In my opinion, the 1080 Ti always needed 30% more speed to be really playable in 4K at 60fps with very high graphics options.

After that buy, I will skip a GPU generation or two and wait for a GPU that can do 4K 60fps with RTX ON. When do you think 4K 60fps RTX ON will be possible? How many years or GPU generations will it take?
And when do you think an Nvidia 7nm GPU generation will be released? Should I wait for the 7nm Nvidia generation and skip the one after that instead? Thanks for your opinions, guys. (In general, going forward I will buy an enthusiast GPU only every 3 or 4 years, so I want a future-proof GPU when possible.)


----------



## SpacemanSpliff

I love how, all of a sudden, no one seems to remember the pricing scale that existed at the launch of Pascal... or how with the Titan V they moved Titan out of the consumer desktop price range to support their claim that it isn't a gaming card, but rather an entry-level professional card for higher learning, research, and other applications relevant to the professional sector.

Anyone else notice how the xx80 Ti is taking the price bracket vacated by Titan? How the xx80 is now priced where the Ti used to be? And so on and so forth... I have a feeling they might introduce a new budget card to the range in the near future, after the initial Turing launch. Just a hunch. Speculation? Yup... but it seems a logical conclusion looking at the bigger picture.

As for my choice of ordering one or two... I'm waiting to see some actual performance results from real-world benchmarks. I've made do with this Haswell i5 for over 4 years and stop-gapped a 1060 into the tower when my 780 shat the bed. I can survive a couple more months of waiting... of course... 5 paychecks in Nov/Dec along with the annual profit-share check after a record year where I work... I think I can justify a $4-5k build with the wife for once... or just sleep on the couch for a month, lol.


----------



## Rob w

So if I’ve got this right, we are all waiting for the official benchmarks with bated breath?
Everyone (sorry, most? some?) is just speculating and spouting good and bad about how it will perform against X and Y cards without even having any of these results. But hey, they will always be there, and I suppose it kills the cliffhanger wait we all have until release date.
For me, I went from a 980 Ti to a Titan V and got slated for it on here ("cause we all know it's a scientific card only and no good for gaming"). Well, I had far higher FPS during gaming, and benching took me into the top ten, so stuff the reviews there!!
So I blew up my Titan V (placeholder here for abuse, lol), waiting on repair, and ordered a 2080 Ti just in case I get bad news with the repair. So I don't really care how it benches, but it's gotta be better than the 980 Ti and a cheaper alternative to a replacement Titan V, even if a bit slower.
Conclusion: I may have a $3000 doorstop, but I have a 2080 Ti on order and am happy with that however it performs.
Peace brothers!


----------



## Keromyaou

Actually, I started wondering if it is even possible to run games comfortably with RTX on a 4K display with one GPU (or even a two-GPU SLI setup). According to Nvidia, the 2080 Ti can run Tomb Raider with RTX at only 30 to 70 fps on a 1080p display. 1440p requires 78% more GPU power than 1080p, and 4K requires 300% more. Since there are and will be many games more demanding than Tomb Raider, and since 30-70 fps is only barely fine rather than great, it will take at least 50-100% more GPU power than a 2080 Ti just to run most games comfortably with RTX even at 1080p. Extrapolating to 4K, that means 500-700% more GPU power than a 2080 Ti (a 170-250% increase at 1440p).

The problem is that it is getting much harder to shrink the node size now than before. The current node is 12nm, and 7nm GPUs will appear in a year. But considering how much Intel is struggling to shrink its CPU node, it is easy to imagine that going below 7nm will be very hard. The next 7nm GPUs will probably bring a 50-80% performance increase. That is only enough to run games with RTX comfortably at 1080p, and a far cry from what we need at 4K. Then what is the next thing GPU makers can do? Improved GPU design will help to some extent, but it has its limits. NVLink might improve multi-GPU performance somewhat, but as everyone knows, multi-GPU setups can be very problematic; two GPUs might be OK, but no more than two. I think a 100-150% performance increase over the 2080 Ti without multi-GPU will be possible with available technology, but to go further Nvidia will need something extraordinary. They could put two or more GPUs on one board and develop the technology to control them far more efficiently than SLI, but that would skyrocket the price. I wonder if Nvidia is developing NVLink alongside RTX precisely because they know it will be very difficult (or even impossible) to build a single GPU that can run games comfortably with RTX at 4K, or even at 1440p.
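The resolution-scaling figures in the argument above are straight pixel-ratio arithmetic (the sketch assumes required GPU power scales linearly with pixel count, which is a simplification):

```python
# Pixels at 1080p, where the 2080 Ti reportedly manages 30-70 fps with RTX
base = 1920 * 1080

# Extra GPU power needed relative to 1080p, by pure pixel count
for name, w, h in [("1440p", 2560, 1440), ("4K", 3840, 2160)]:
    extra = (w * h / base - 1) * 100
    print(f"{name}: {extra:.0f}% more GPU power needed than at 1080p")
```

1440p comes out to about 78% more pixels than 1080p, and 4K to exactly 300% more (4x), which is where the jump from "barely fine at 1080p" to "needs several times a 2080 Ti at 4K" comes from.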


----------



## keikei

Given everything we know, the biggest factor imo is the availability of games running RT. As of right now it's just a handful of titles. Will there be more adoption, or is this just a failed experiment? Is the tech too early? Will we see more games listed in 2019? There are many unanswered questions, and yet we are asked to pay a very high premium for a promise. I ask that we exercise some patience. Being a tech nerd like all the members here, easier said than done.


----------



## sblantipodi

Mooncheese said:


> Everyone who can swing PG27UQ and X27, I would wait for either PG35VQ or X35. Curved 21:9 completely blows flat 16:9 away, like it's not even a fair comparison.
> 
> https://www.youtube.com/watch?v=0afwE6GVLQw&t
> 
> 
> 60 FPS? 3440x1440 is 67% less pixels than 3840x2160, meaning 67% more framerate. It's 25% more pixels than 2560x1440 to give you an idea. That would be around 120 FPS everywhere with a 2080 Ti, meanwhile, youre going to be struggling to get 60-70 FPS at 4K.
> 
> 4K is so completely overrated. 1440p is absolutely where it's at. It's perfectly sharp at 1.5-2 ft viewing distance. I don't understand this fixation with more pixels.


image quality at 4K is two times better than 2K...
texture quality really makes the difference.


----------



## Rob w

keikei said:


> Given everything we know, the biggest factor imo is the availability of games running RT. As of right now it's just a handful of titles. Will there be more adoption or it this just a failed experiment? Is the tech too early? Will we see more games listed in 2019? There are many unanswered questions and yet we are asked to pay a very high premium for a promise. I ask we exercise some patience. Being a tech nerd like all the members here, easier said than done.


Everything we want to do has always had the hurdle of the technology not being there yet.
Every advance in tech leads to a whole new experience; of course tech is driven by ideas, so I take my hat off to these companies. Five years ago RTX was not even heard of in our community (generally). It is just so high-tech nowadays, what with 10nm and now 7nm; just how far can they keep going?
Pricing? Yeah, if we want the latest tech we've gotta pay for it, because without companies like Intel, AMD and Nvidia we would all be playing cricket and marbles, lol.


----------



## MiniZaid

CallsignVega said:


> No. Titan V is 30% faster than Titan-XP, which is 5-10% faster than 1080 Ti. All of them overclocked. Titan V is 35-40% faster than 1080Ti, so I expect the 2080 Ti to be right around the same speed as a V.


That's like the best-case scenario. If it was 35-40%, I would have gotten it, since I considered two 1080 Tis even during the crypto boom.

I literally just looked at benchmarks from 3 different sites
https://www.anandtech.com/show/12170/nvidia-titan-v-preview-titanomachy/7
https://hothardware.com/reviews/nvidia-titan-v-volta-gv100-gpu-review?page=3
https://www.hardocp.com/article/2018/03/20/nvidia_titan_v_video_card_gaming_review/17


----------



## shiokarai

oh, btw, Performance-PCs.com leaked (lel) EK blocks for RTX cards:

https://www.reddit.com/r/nvidia/comments/9c4d9f/ekwb_2080_ti_water_block_leak/

https://www.reddit.com/r/nvidia/comments/9c4gcv/ekwb_2080_ti_water_block_copper_leak/


----------



## toncij

The core count matters. If we're comparing the cards, we need them tested at the exact same clock. We can do averages if it can't be fixed.

TITAN V - 5120
2080Ti - 4352
1080Ti - 3584

Even steps of 768 cores each.

HBM2 on TITAN V is not double the GDDR6. It's about 650GB/s, much less than a full Volta TESLA's 900GB/s. HBM2 has a slight advantage in data rate, but that's less relevant here. Turing is at 616GB/s for the 2080 Ti, so the gap is only about 5.5% in favor of TITAN V.

At the same clock, in ALU limited operations, these cards are about this fast if we assume NO changes to the cache, core ops, etc:
2080Ti is 21% faster than 1080Ti. TITAN V is 17% faster than 2080Ti. That makes TITAN V 43% faster than 1080Ti.

Simply by pure core count at exactly the same clock. Now. There is a vast amount of factors that can affect these numbers. From the simple fact that every scene in a game can be different, to the fact that NV might have improved or made worse memory controllers, cache, etc. Not all jobs on a GPU are created equal, not every job is ALU limited, not every job is memory limited. Sometimes both.
But, the core count is the single most important factor after clock.

It is possible that the 2080 Ti is 42% faster than the 1080 Ti. To achieve that, NVIDIA needs to boost each core's effectiveness by ~17%. And that's not about the core only, but cache, pathways, controller... The increase is multiplicative, not additive.

But we can't know just yet. All we know is 21%. Anything more is... wait and see.
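The per-core arithmetic above can be reproduced directly. A minimal sketch using only the listed core counts, under the same same-clock, ALU-limited, no-architectural-changes premise:

```python
# Relative throughput by CUDA core count alone, assuming identical clocks
# and no architectural changes - the same premise as the post above.
cores = {"1080 Ti": 3584, "2080 Ti": 4352, "TITAN V": 5120}

def advantage(a, b):
    """Percent speedup of card a over card b from core count alone."""
    return (cores[a] / cores[b] - 1) * 100

for a, b in [("2080 Ti", "1080 Ti"), ("TITAN V", "2080 Ti"), ("TITAN V", "1080 Ti")]:
    print(f"{a} vs {b}: +{advantage(a, b):.0f}%")  # +21%, +18%, +43%
```

This is of course an upper-bound-style estimate; real games are bound by memory, cache, and fixed-function units as often as by ALUs.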


----------



## GraphicsWhore

shiokarai said:


> oh, btw, Performance-PCs.com leaked (lel) EK blocks for RTX cards:
> 
> https://www.reddit.com/r/nvidia/comments/9c4d9f/ekwb_2080_ti_water_block_leak/
> 
> https://www.reddit.com/r/nvidia/comments/9c4gcv/ekwb_2080_ti_water_block_copper_leak/


EK tweeted they'd be up for preorder around the 23rd. Still nothing. I have the full-cover Alphacool block on pre-order so I don't particularly care, just strange to me.


----------



## ClashOfClans

I'll be laughing at the naysayers when the 2080 Ti comes out and it is equivalent to or faster than 1080 Ti SLI. Don't bet against Nvidia. I don't hang around Jensen Huang a lot or whatever. I think he seems like a nice guy and he's well dressed. He's a bit of a maverick in the tech space, with the whole leather jacket and the focus on artificial intelligence. I'm not a huge fan of him by any means, but we can all sit here and agree he has that IT factor about him. He has dominated AMD, unfortunately for us consumers, and like Michael Jordan, you don't bet against him.

The 2080 Ti is going to be a feat. We are talking new performance barriers and the new tech that comes with that, like ray tracing. Have you seen how beautiful ray tracing is in the new Tomb Raider game? I for one am excited, because I know this tech will also trickle down to mid-level stuff and become more affordable to the masses and casual users in time.

I know the main complaint about the 2080 Ti is price, but while you may have spent $1200/1300 on a 2080 Ti, that is the equivalent performance of 1080 Ti SLI! Fact remains that Nvidia did release numbers where even the 2080 at 4K was beating the 1080 Ti in Mass Effect: Andromeda (without DLSS). If you want the performance of two 1080 Ti cards in one card, without the microstutter of SLI, and without the driver incompatibility associated with SLI technology, it's going to be very expensive. That's what the 2080 Ti is... 1080 Ti SLI performance in one card. I personally think they should price it at $1600. At $1200/1300 it's a steal for that kind of performance in only one card.


----------



## HowHardCanItBe

Cleaned and reopened. Folks, please behave and adhere to the terms of service of this site. It's really not that difficult.


----------



## keikei

*NVIDIA RTX 2080 Ti 3DMark Score Allegedly Leaks, 35% Faster vs 1080 Ti*


----------



## shilka

https://www.youtube.com/watch?v=dfGJpVEzUxo




----------



## sblantipodi

shiokarai said:


> oh, btw, Performance-PCs.com leaked (lel) EK blocks for RTX cards:
> 
> https://www.reddit.com/r/nvidia/comments/9c4d9f/ekwb_2080_ti_water_block_leak/
> 
> https://www.reddit.com/r/nvidia/comments/9c4gcv/ekwb_2080_ti_water_block_copper_leak/


is it better to buy an EK block or wait for the EVGA Hydro Copper?


----------



## keikei

shilka said:


> https://www.youtube.com/watch?v=dfGJpVEzUxo



Assuming these numbers are accurate, I'd like to see many more games. It looks like the NDA date may be pushed? Joker sayin' the 12th/13th?


----------



## ThrashZone

Hi,
lol I'm sure you will though, and then some


----------



## shiokarai

sblantipodi said:


> is it better to buy an EK block or wait for the EVGA Hydro Copper?



If you're concerned about warranty etc., the Hydro Copper is obviously safe in this regard (many manufacturers, on the other hand, don't honor the warranty when the cooler is removed - as far as I remember, FE cards didn't have any warranty stickers). If you're concerned about the best possible performance and silence, a custom waterblock will be better. EK blocks are great performance-wise, Heatkiller blocks are regarded as even better (and better build quality), Bitspower blocks are good too; can't comment on other brands.


----------



## Clukos

Timespy GPU score










More or less what you'd expect.


----------



## G woodlogger

I guess this is how people with high paying jobs relax


----------



## bastian

Clukos said:


> Timespy GPU score
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> More or less what you'd expect.



So 2080 Ti is around 35% faster than a 1080 Ti, stock to stock. So with overclocking, we should see even more. How anyone could be disappointed with this, I don't know.

Complain about the price all you want. Don't buy it. If you owned a business like nVidia you'd be doing the same thing. That's what happens when AMD provides hardly any competition in GPU.


----------



## Ford8484

bastian said:


> So 2080 Ti is around 35% faster than a 1080 Ti, stock to stock. So with overclocking, we should see even more. How anyone could be disappointed with this, I don't know.
> 
> Complain about the price all you want. Don't buy it. If you owned a business like nVidia you'd be doing the same thing. That's what happens when AMD provides hardly any competition in GPU.


Drivers haven't been finalized yet either. It'll probably be around Titan V performance, but with the new features like DLSS and ray tracing (which is overhyped at this point imo). I'm content with the numbers... around 40% is good, probably higher for newer games... Still need more benchmarks though.


----------



## mustafa811

Well, if you consider the leaked 3DMark test of the 2080, which scored about 10,000 while the 1080 Ti scores about 9,500, the 2080 is roughly 5% faster than the 1080 Ti. If you take that data and scale by CUDA core count, you'll find the 2080 Ti comes out around 50% faster than the 1080 Ti... just calculate it.

And since one of the NVIDIA team said it's 35% to 45%, that 50% speculation is in the right neighborhood.

These are speculations, guys. I just wanted to reply to one of the members who stated that the 2080 Ti will be 80% more powerful than the 1080 Ti; I think that figure is nonsense and could only come from a fanboy.
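That calculation can be written out. A minimal sketch, taking the leaked scores at face value and assuming performance scales linearly with CUDA core count (both big assumptions):

```python
# Naive estimate: scale the leaked 2080 score by the 2080 Ti's core advantage.
score_2080, score_1080ti = 10000, 9500   # leaked / approximate 3DMark scores
cores_2080, cores_2080ti = 2944, 4352    # CUDA core counts

est_2080ti = score_2080 * cores_2080ti / cores_2080
print(f"Estimated 2080 Ti score: {est_2080ti:.0f}")
print(f"vs 1080 Ti: +{est_2080ti / score_1080ti - 1:.0%}")
```

Linear scaling lands in the mid-50s percent, the same ballpark as the 35-45% and 50% figures being traded in the thread.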


----------



## PharmingInStyle

G woodlogger said:


> I guess this is how people with high paying jobs relax


Yes, but sometimes we make mistakes too, like when I didn't pre-order the Ti on day one. Then I checked it a day or two later and it's a November delivery, and now an email notification (in Nvidia's shop). Now I can't even relax, worrying whether I can trust sellers on eBay who say they have a reserved Sept 20 ship date Ti. If they come through on eBay, I can get the card when the early birds in the thread get theirs from Nvidia. I have to back up a bit. I don't really have to worry at all and I can relax, because even if I get scammed on eBay for 2k it doesn't affect me much at all financially. Hurt pride perhaps, and that could mildly affect my ability to kick back and enjoy the reading.

Ok, sorry for the attempt at humor, but is anyone else thinking about spending 2k on eBay on a Sept 20 reserved Ti? The extra 800 USD is not a factor, aside from appearing to be an idiot to some posters. But I missed the 9/20 Ti cards in the Nvidia shop, so am I supposed to wait months instead of weeks to get one?


----------



## ClashOfClans

PharmingInStyle said:


> Yes, but sometimes we make mistakes too like when I didn't pre order the Ti on day one. Then I check it a day or two later and it's a November delivery and now an email notification (in Nvidia's shop.) Now I can't even relax worrying if I can trust sellers on Ebay who say they have a reserved Sept 20 ship date Ti. If they come through on Ebay I can get the card when the early birds in the thread get theirs from Nvidia. I have to back up a bit. I don't really have to worry at all and I can relax. Because even if I get scammed on Ebay for 2k it doesn't affect me much at all financially. Hurt pride perhaps, and that could mildly affect my ability to kick back and enjoy the reading.
> 
> Ok sorry for the attempt at humor but anyone else thinking about spending 2k on Ebay on a Sept 20 reserved Ti? The extra 800 usd is not a factor aside from appearing to be an idiot to some posters. But I missed the 9/20 Ti cards in Nvidia shop so am I supposed to wait months instead of weeks to get one?


No, as $1200 for one video card puts it out of reach for most. That is a lot of coin to play games. That said, considering the performance of this beast, it probably should MSRP at $2000.

AIBs will have this available in October it would seem. But I would expect stock to be good come Christmas time. I will probably wait one more generation where these cards are averaging 200fps at 4k ultra settings with AA enabled.


----------



## TheGovernment

PharmingInStyle said:


> G woodlogger said:
> 
> 
> 
> I guess this is how people with high paying jobs relax
> 
> 
> 
> Yes, but sometimes we make mistakes too like when I didn't pre order the Ti on day one. Then I check it a day or two later and it's a November delivery and now an email notification (in Nvidia's shop.) Now I can't even relax worrying if I can trust sellers on Ebay who say they have a reserved Sept 20 ship date Ti. If they come through on Ebay I can get the card when the early birds in the thread get theirs from Nvidia. I have to back up a bit. I don't really have to worry at all and I can relax. Because even if I get scammed on Ebay for 2k it doesn't affect me much at all financially. Hurt pride perhaps, and that could mildly affect my ability to kick back and enjoy the reading.
> 
> Ok sorry for the attempt at humor but anyone else thinking about spending 2k on Ebay on a Sept 20 reserved Ti? The extra 800 usd is not a factor aside from appearing to be an idiot to some posters. But I missed the 9/20 Ti cards in Nvidia shop so am I supposed to wait months instead of weeks to get one?

I dunno, who cares what some jellys on a forum think, lol. Doesn't matter what you pay; if you want one, get one!
Just remember, there are people that pay thousands of dollars for speaker wire...


----------



## dVeLoPe

I have extra cards available on 9-24-18 in SOFLO


----------



## CallsignVega

dVeLoPe said:


> I have extra cards available on 9-24-18 in SOFLO


Extra cards?


----------



## Badexample

Why are you guys pre-ordering anything? I know it's part of the gaming culture, but do you guys think you'll get some special treatment from Nvidia? Now we're stuck with an overpriced card and controlled-leak performance benchmarks, LOL. The average gamer is just brainless and makes decisions based entirely on emotion... No HDMI 2.1? LOL, what a joke!


----------



## CallsignVega

Nothing will have HDMI 2.1 for quite a while, so why would 2000 series have it?


----------



## Spiriva

Badexample said:


> Why you guys pre-ordering anything? I know it is part of the game culture but you guys thinking to have some special treatments from Nvidia? Now, we're stuck with an overpriced card with controlled leak performance benchmarks LOL. The average gamer are just brainless and make decisions all about their emotions.. No HDMI 2.1? LOL What a joke!


I've got an easy fix for you: don't pre-order one.
Meanwhile, I pre-ordered, as I want one. Now if only EK would let people pre-order the blocks too... "soon".


----------



## Badexample

CallsignVega said:


> Nothing will have HDMI 2.1 for quite a while, so why would 2000 series have it?


At that price, PC gaming should be leading the way, don't you think, instead of this "nothing will have HDMI 2.1 for quite a while, so why would the 2000 series have it" attitude?


----------



## Glerox

EKWB blocks are up for pre-order.

https://www.ekwb.com/shop/water-blo...er-for-nvidia-geforce/geforce-rtx-20x0-series

I ordered a 2080ti nickel+plexi RGB and a CPU EVO block nickel+RGB


----------



## Spiriva

Glerox said:


> EKWB blocks are up for pre-ordered.
> 
> https://www.ekwb.com/shop/water-blo...er-for-nvidia-geforce/geforce-rtx-20x0-series
> 
> I ordered a 2080ti nickel+plexi RGB and a CPU EVO block nickel+RGB




Thanks for the heads up! 2* 2080ti water blocks and 2* backplates inc!


----------



## G woodlogger

To set the record straight, my comments were not about the high price but about the energy used for the % calculations. So I don't mind the price much, although it would probably be 25-30% lower with competition (just from Nvidia's profit margin). It is just not the product people expected. I think Nvidia is sandbagging because of all the old cards that need to be sold. There is also a bit of a learning curve; people have been used to just more of the same. The problem with Pascal was that it wasn't a new architecture, and it will grow old quicker than this new generation.

I am too ill to play, so I will just live through you guys.
I will not post much, I promise.


----------



## xer0h0ur

Badexample said:


> Why you guys pre-ordering anything? I know it is part of the game culture but you guys thinking to have some special treatments from Nvidia? Now, we're stuck with an overpriced card with controlled leak performance benchmarks LOL. The average gamer are just brainless and make decisions all about their emotions.. No HDMI 2.1? LOL What a joke!


Braindead logic at its finest. Game pre-orders are worthless because you can buy it at any point whenever you so please. You can't say or do the same thing with regard to new hardware that launches. Which is why people pre-order hardware...to secure it.


----------



## GraphicsWhore

Badexample said:


> Spiriva said:
> 
> 
> 
> I got an easy fix for you; dont preorder one.
> Mean while I pre ordered, as i want one. Now if only EK would let ppl pre order the blocks too....."soon".
> 
> 
> 
> See, you part of the problem (inflated prices) but I don't really give a ....

Apparently you do because here you are in the owners thread whining about those of us who preordered. These posts are getting stale. You can go and complain in a dozen other threads but coming to the owners thread to call us “problems” is douchey.



Glerox said:


> Ok so I will actually post something useful here since it's only B.S. for the last couple of pages...
> This is a "owner" thread (future owners in this case), not a "why you shouldn't buy it thread". There is Reddit for that.
> 
> EKWB blocks are up for pre-ordered.
> 
> https://www.ekwb.com/shop/water-blo...er-for-nvidia-geforce/geforce-rtx-20x0-series
> 
> I ordered a 2080ti nickel+plexi RGB and a CPU EVO block nickel+RGB


Thanks for the link. I checked the site earlier today and somehow missed it. Preordered the Acetal Nickel RGB with black backplate. I noticed that once I logged in, the price for both dropped, especially for the block (down to $154 from the advertised $189), so make sure to log in if you have an account.


----------



## keikei

So, I'm rocking a 600-watt PSU. I want to plan for the future: potential SLI with the new Ti. I'm running 3 HDDs, 2 SSDs, and 7 fans. What PSU do you guys recommend?
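For what it's worth, here's a back-of-envelope budget for a 2080 Ti SLI build; every wattage below is a rough assumption for illustration, not a measurement:

```python
# Rough power budget for a hypothetical 2080 Ti SLI build.
draw_watts = {
    "2x RTX 2080 Ti (250W TDP each)": 2 * 250,
    "high-end CPU, overclocked": 150,
    "3x HDD": 3 * 10,
    "2x SSD": 2 * 5,
    "7x case fan": 7 * 3,
    "motherboard / RAM / misc": 60,
}

total = sum(draw_watts.values())
suggested = total * 1.3  # ~30% headroom keeps the PSU near its efficiency sweet spot
print(f"Estimated load: {total}W -> suggested PSU: ~{suggested:.0f}W")
```

By that rough math a 600W unit is undersized for SLI; something in the 850-1000W range is the usual recommendation, more if you plan to raise power limits.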


----------



## Badexample

xer0h0ur said:


> Braindead logic at its finest. Game pre-orders are worthless because you can buy it at any point whenever you so please. You can't say or do the same thing with regard to new hardware that launches. Which is why people pre-order hardware...to secure it.


Braindead logic? What about none of us pre-ordering the damn thing? That would be a middle finger to NVIDIA, and the price would go down. The CEO just secured his private island and is laughing his ass off at all the numb, addicted gamers out there, lol.


----------



## carlhil2

I am just hoping that enough people fell for the disgruntled TechTubers' speculation/propaganda and cancelled their pre-orders so that I can get mine sooner than November 5th. Really, though...


----------



## GraphicsWhore

Badexample said:


> Braindead logic? What about none of us pre-order the damn thing... That would be a middle finger to NVIDIA and price will go down? The CEO just secured his Private island and laughing his ass off at all the numb addicted gamers out there lol.


I mean these are ridiculous fantasies. You expect a bunch of random people into high-end PC hardware around the world to come to some kind of gentleman's agreement that nobody will pre-order this card so the price will go down. 

As I mentioned, many of us are selling old equipment to recoup cost - I'm going to end up paying about the same as I did for a 1080Ti. Still, you want us not to pre-order so the price can fall to something you prefer?

Complain about nVidia all you want and start threads about your opinion on this but stop posting this crap in owner's threads.


----------



## xer0h0ur

Badexample said:


> Braindead logic? What about none of us pre-order the damn thing... That would be a middle finger to NVIDIA and price will go down? The CEO just secured his Private island and laughing his ass off at all the numb addicted gamers out there lol.


Yes. Braindead. When you equate two things that have no business being equated... it's braindead logic.

I'm reminded again why I stopped posting on this site. Trolls are allowed to flourish here.


----------



## Badexample

GraphicsWhore said:


> I mean these are ridiculous fantasies. You expect a bunch of random people into high-end PC hardware around the world to come to some kind of gentleman's agreement that nobody will pre-order this card so the price will go down.
> 
> As I mentioned, many of us are selling old equipment to recoup cost - I'm going to end up paying about the same as I did for a 1080Ti. Still, you want us not to pre-order so the price can fall to something you prefer?
> 
> Complain about nVidia all you want and start threads about your opinion on this but stop posting this crap in owner's threads.


Same here. I can sell my Titan XP and get the new Ti. The internet can be a powerful tool. You keep buying overpriced gaming cards with no official benchmark releases, and you guys have the gall to call me braindead?


----------



## Badexample

xer0h0ur said:


> Yes. Braindead. When you equate two things that have no business being equated...its braindead logic.
> 
> 
> I'm reminded again why I stopped posting on this site. Trolls are allowed to flourish here.


Trolls or intelligent consumers? Nobody should pre-order anything with no substance, based solely on hype. That is braindead logic. Let people be pissed at NVIDIA.


----------



## GraphicsWhore

Badexample said:


> Trolls or intelligent consumers? Nobody should pre-order anything with no substances, sorely based on hype. That is braindead logic. Let people be pissed at NVIDIA.


Go ahead and be pissed, just not in this thread. How hard is that to grasp? People have their reasons for pre-ordering. I know this card is going to outperform my current card, I'm moving to 4K soon, and I buy flagship cards every ~1.5 years, so this is no different. A benchmark isn't going to change my mind.

Many people have been waiting years to upgrade, moving from old equipment and there's no danger of not getting exponentially more performance over their current setup, so benchmarks aren't going to change theirs either.


----------



## Mad Pistol

GraphicsWhore said:


> Many people have been waiting years to upgrade, moving from old equipment and there's no danger of not getting exponentially more performance over their current setup, so benchmarks aren't going to change theirs either.


This is something to take note of. Many people are looking to upgrade because they are still on Maxwell and eyeing a 4k screen that a 1080 Ti can game on... just not perfectly.

Turing offers a shot at that. We're not 100% certain on it, but I'm willing to bet that even the RTX 2070 is an upgrade over a Titan X (Maxwell).


----------



## xer0h0ur

The reason I personally am upgrading to this card is because I am leapfrogging from an FE 1080 to this. That is a massive bump in power even if I completely ignore all of the DLSS / RTX hypetrain and propaganda. I've been using these two 4K and 1440p monitors in my sig since they were released and have been waiting this entire damn time for a reasonably affordable single GPU solution that would finally handle those suckers well. I tried my hand at Crossfire and SLI with both leaving a significantly sour taste in my mouth. So long as we as gamers have to keep relying on developers to implement support, Nvidia and AMD for SLI/Crossfire profiles, and for the damn thing to not get broken on game patches...nothing will ever draw me back into it. 

The dumbest thing of all, though? Me having to defend my goddamned purchase in an owners thread to the trolls that invade it. Guess it's time to leave for another couple of years.


----------



## Glerox

I've only been on overclock.net since Pascal, but for the older folks: was it always like this on these forums?

Same BS "don't buy it" speeches happened with the PG27UQ... I must say I'm really happy with my monitor and yes I pre-ordered the damn thing. I will be really happy with my 2080TI too and guess what, if NVIDIA or AMD release another more powerful gamer gpu next year, I will probably pre-order it too!

We don't need to defend it, it's just our choice. People saying it's increasing the price of hardware, that's just how free market works... don't blame us, blame the lack of competition. That's the real problem!

Do you truly ask yourself each time you buy something, "hmmmm maybe it's a little overpriced, I must think about the other potential buyers so I won't buy this..."
LMAO of course not. 

Anyways, that's my two cents... first and last time I defend why I pre-ordered.


----------



## carlhil2

Glerox said:


> I'm only on overclock.net since Pascal but for older folks, was it always like this on these forums?
> 
> Same BS "don't buy it" speeches happened with the PG27UQ... I must say I'm really happy with my monitor and yes I pre-ordered the damn thing. I will be really happy with my 2080TI too and guess what, if NVIDIA or AMD release another more powerful gamer gpu next year, I will probably pre-order it too!
> 
> We don't need to defend it, it's just our choice. People saying it's increasing the price of hardware, that's just how free market works... don't blame us, blame the lack of competition. That's the real problem!
> 
> Do you truly ask yourself each time you buy something, "hmmmm maybe it's a little overpriced, I must think about the other potential buyers so I won't buy this..."
> LMAO of course not.
> 
> Aneways that's my two cents... first and last time I defend why I pre-ordered.


 You missed out on the crying during the OG Titan launch, ahhh,things never change...


----------



## CallsignVega

I see this has turned into thread #112 for non-buyers to whine about the price.


----------



## ClashOfClans

The 2080 Ti is 1080 Ti SLI performance in one card, folks. That costs money. Just be patient and wait for reviews. With all these fake leaks recently, don't trust any benchmarks that aren't officially released from Nvidia before the NDA is lifted. I've estimated the 2080 Ti's performance from Nvidia's own benchmarks and from the 9-series to 10-series performance gaps. From that data we can make sound, logical guesses about what the gaps will be from the 10 series to the 20 series.

We know that the 1080 Ti is equivalent to, and often faster than, 980 Ti SLI. Therefore, the 2080 Ti should also be about as fast as, and possibly faster than, 1080 Ti SLI. We can also use pricing as a rough sanity check: since 1080 Ti SLI will cost you around $1,200, it makes sense that a 2080 Ti has the performance of two 1080 Ti cards.


----------



## Shadowsong

Not sure if everyone has seen this already, but EK waterblocks for the RTX series are up for pre-order  https://www.ekwb.com/shop/

Just pre-ordered mine for my EVGA 2080 Ti


----------



## Glerox

carlhil2 said:


> You missed out on the crying during the OG Titan launch, ahhh,things never change...


lol! ok I see, good to know hehe


----------



## Glerox

ClashOfClans said:


> Therefore, 2080 Ti will also be about as fast, and possibly faster than 1080 Ti Sli. We can also use pricing as a possible estimation. Since 1080 Ti Sli will cost you around $1200...it makes sense that a 2080 Ti has the performance of two 1080 Ti cards.


Before some angry dude tells you: the performance gain from the 10 to the 20 series seems to be lower than the gain from the 9 to the 10 series, DESPITE the cards being more expensive (except in games with the yet-to-be-understood DLSS feature). It will not be as fast as two 1080 Tis in games that scale, I can guarantee you that! However, fewer and fewer games support SLI anyways.

Don't trust Nvidia to increase the performance gain just because of a steeper price. They are here to make money and there is ZERO competition so they can ask whatever the price they want...

That doesn't change the whole point that:
1. Let's wait for official independent benchmarks.
2. Knowing all this, you're FREE to pre-order it if YOU want it, like I did.


----------



## ttnuagmada

> Therefore, 2080 Ti will also be about as fast, and possibly faster than 1080 Ti Sli. We can also use pricing as a possible estimation. Since 1080 Ti Sli will cost you around $1200...it makes sense that a 2080 Ti has the performance of two 1080 Ti cards.
> 
> Before some angry dude tells you, the performance gain from 10 to 20 series seems to be lower than the performance gain from the 9 to 10 series DESPITE being more expensive (except for games with a yet-to-be-understand DLSS feature). Don't trust Nvidia to increase the performance gain just because of a steeper price. They are here to make money and there is ZERO competition so they can ask whatever the price they want...
> 
> That doesn't change the whole point that :
> 1. Let's wait for official independant benchmarks
> 2. Knowing all this, you're FREE to pre-order it if YOU want it, like I did.


You've gotta consider that these chips are absolutely massive. The 1080 Ti was the same price as a 980 Ti even though the die size went down. This time around, the opposite is true by a significant amount. Bigger die = bigger expense.
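"Bigger die = bigger expense" is actually worse than linear, because yield drops with area too. A toy illustration; the defect density is an illustrative assumption, the die sizes are the published figures for GP102 and TU102, and real foundry economics are far messier:

```python
import math

# Good dies per 300mm wafer: gross die count (area approximation, ignoring
# edge loss) times a simple Poisson yield model.
def good_dies(die_mm2, wafer_mm=300, defects_per_cm2=0.1):
    wafer_area = math.pi * (wafer_mm / 2) ** 2
    gross = wafer_area / die_mm2
    yield_frac = math.exp(-defects_per_cm2 * die_mm2 / 100)  # mm^2 -> cm^2
    return gross * yield_frac

for name, area in [("GP102 (1080 Ti, 471mm2)", 471), ("TU102 (2080 Ti, 754mm2)", 754)]:
    print(f"{name}: ~{good_dies(area):.0f} good dies per wafer")
```

Under these assumptions the 60% larger die roughly halves the number of good dies per wafer (about 94 vs about 44), before any binning, which is why the price jump tracks the die size jump.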


----------



## Glerox

ttnuagmada said:


> You gotta consider that these chips are absolutely massive. The 1080ti was the same price as a 980ti even though the die size went down. This time around the opposite is true by a significant amount. Bigger die = bigger expense.


good point


----------



## ClashOfClans

Glerox said:


> Before some angry dude tells you, the performance gain from 10 to 20 series seems to be lower than the performance gain from the 9 to 10 series DESPITE being more expensive (except for games with a yet-to-be-understand DLSS feature). It will not be as fast as two 1080TI for scalable games, I can guarantee you that! However, fewer and fewer games support SLI aneways.


At least I gave some logical reasons why the 2080 Ti will be equivalent to, and possibly faster than, 1080 Ti SLI. You don't seem to support your argument with anything... and for some reason you guarantee it won't be faster... based on what?


----------



## Glerox

ClashOfClans said:


> At least I gave some logical conclusions to why 2080 ti will be equivalent/possibly faster than 1080 Ti Sli. You don't seem to support your argument with anything....and for some reason you guarantee it won't be faster...based on what?


Based on MANY things, too long to explain. You will see.

I get 90fps in Far Cry 5 and 120fps in BF1 with Titan XP (1080 Ti) SLI at 4K ultra.

No way a 2080 Ti will give you this. Wait and see.


----------



## ThrashZone

Hi,
As long as this new 2080 Ti kills a Titan Xp, I don't see any reason not to get one if one has the money to spend; it seems like a reasonably priced GPU to replace the Titan X line.
Then Nvidia can play around with the Titan V line and axe the Titan "X" versions altogether.


----------



## toncij

Shadowsong said:


> Not sure if everyone has seen this already, but EK waterblocks for the RTX series are up for pre-order  https://www.ekwb.com/shop/
> 
> Just Pre-ordered mine for my EVGA 2080Ti


I'm a bit worried about the gains there. EVGA is going to ship only reference cards at first, not even iCX2, let alone the FTW series or Kingpin. Putting a block on a card that's probably already starved by power and thermal limits in a fixed firmware, hmm...


----------



## Ford8484

CallsignVega said:


> I see this has turned into thread #112 to wine about the price from non buyers.


Acer X27 thread all over again


----------



## xer0h0ur

toncij said:


> I'm a bit worried about the gains there. EVGA is going to ship only reference cards, not even iCX2, let alone FTW series or Kingpin. Putting a block on a card already probably starved by power and thermal limits in a fixed firmware, hmm...


A bit early to speculate about water-cooled overclocking gains on a card that hasn't even been reviewed, much less torn down or blocked. Or are you simply looking at two power connectors and saying it's going to be power starved?


----------



## shiokarai

ThrashZone said:


> Hi,
> "I don't see any reason not to get one if one has the money to spend seems like a reasonably priced gpu to replace the titan x line
> Then nvidia can play around with the titan v line and axe out the titan "X" versions all together


I have the money and have 2 pre-ordered, but I wouldn't consider $1,199 per card anywhere close to a "reasonably priced gpu"... I understand the hype and all the "I have money", but at least let's stay realistic about the bull**** pricing of the 20xx series cards...


----------



## xer0h0ur

Obviously a "reasonably priced gpu" is subjective and will change from person to person. I'm not about to spend $3000 on Titan V but I am willing to spend $1200+200 on a 2080 Ti + EK WB/backplate. I've been on the same GPU for 2+ years. If I couldn't manage to save up $1400 in that span then I have much larger problems to deal with than Nvidia's pricing. I don't think anyone other than fanboys are defending the pricing on Turing.


----------



## raider89

Glerox said:


> Based on MANY things too long to explain. You will see
> 
> I have 90fps in Farcry 5 and 120fps in BF1 with Titan XP (1080Ti) in SLI at 4K ultra
> 
> No way a 2080TI will give you this. Wait and see



I'm betting the 2080 Ti easily gives more than that.


----------



## raider89

Badexample said:


> See, you part of the problem (inflated prices) but I don't really give a ....


You don't see any increase in prices? What are you smoking?


----------



## Fiercy

http://www.performance-pcs.com/ek-vector-rtx-backplate-black.html#Specifications

Backplate pre-order, in case anyone is looking for it; it's not listed in the site search the way the water blocks are.


----------



## toncij

xer0h0ur said:


> Bit early to speculate about water-cooled overclocking gains on a card that hasn't even been reviewed, much less torn down or blocked. Or are you simply looking at two power connectors and saying it's going to be power starved?


Well, it's a preorder, isn't it? 

But it's not completely arbitrary. TITAN V is almost identical if not identical judging by 8-phase PCB pics circling around since reveal. 6+8 is fit for about 250-260W, which is what NV stated. Anything more and I'd rather have 8+8. I might be wrong, but wouldn't you want more than stock power? Especially here with GDDR6 instead of low power HBM2?
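For reference, the nominal PCIe spec power caps behind estimates like this can be tallied quickly (a sketch; 75 W from the slot, 75 W per 6-pin and 150 W per 8-pin are the standard connector ratings, and vendors usually leave some margin below them):

```python
# Nominal power caps from the PCIe spec: 75 W from the slot,
# 75 W per 6-pin connector, 150 W per 8-pin connector.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

def board_power_cap(six_pins, eight_pins):
    """Total spec-rated power available to the card, in watts."""
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

print(board_power_cap(1, 1))  # 6+8 pin: 300 W
print(board_power_cap(0, 2))  # 8+8 pin: 375 W
```

So a 6+8 layout nominally allows 300 W, which is why a 250-260 W TDP card would be near the comfortable limit on that configuration.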


----------



## xer0h0ur

toncij said:


> Well, it's a preorder, isn't it?
> 
> But it's not completely arbitrary. TITAN V is almost identical if not identical judging by 8-phase PCB pics circling around since reveal. 6+8 is fit for about 250-260W, which is what NV stated. Anything more and I'd rather have 8+8. I might be wrong, but wouldn't you want more than stock power? Especially here with GDDR6 instead of low power HBM2?


Am I missing something here? I distinctly remember reading the 2080 Ti has a 13 phase power design and the PCB shot Nvidia has on their site shows 8+8 pin connectors anyways.


----------



## keikei

https://wccftech.com/evga-geforce-rtx-20-series-ftw3-hybrid-hydro-copper-custom-graphics-cards/
https://videocardz.com/77837/evga-unveils-hydro-copper-and-hybrid-geforce-rtx-models


----------



## ThrashZone

shiokarai said:


> I have the money and have 2 preordered but I wouldn't consider $1199 per card anywhere close to a "reasonably priced gpu"... I understand the hype and all that "I have money" but at least let's stay realistic about the bull**** pricing of 20xx series cards...


Hi,
Would you feel better if the 2080ti was the titan X series replacement naming 
Titan Xp has always been 1200.us 

People keep comparing it to the 1080ti but obviously it's not even close to its price point
But the 2080 is the same price point as the 1080ti so we'll have to wait and see how they benchmark 
You'll have to let us all know what's up along with everyone else that has preordered :thumb:


----------



## toncij

xer0h0ur said:


> toncij said:
> 
> 
> 
> Well, it's a preorder, isn't it?
> 
> But it's not completely arbitrary. TITAN V is almost identical if not identical judging by 8-phase PCB pics circling around since reveal. 6+8 is fit for about 250-260W, which is what NV stated. Anything more and I'd rather have 8+8. I might be wrong, but wouldn't you want more than stock power? Especially here with GDDR6 instead of low power HBM2?
> 
> 
> 
> Am I missing something here? I distinctly remember reading the 2080 Ti has a 13 phase power design and the PCB shot Nvidia has on their site shows 8+8 pin connectors anyways.

Hmm. 13-phase? Can you find a PCB picture?
From what I recall it was 8.
Also, on top, only 6 pins were connected to an 8-pin socket (the left one), but I can't reliably confirm that since Google Images is a mess.

As I've said, I might be wrong. If you can confirm...


----------



## NewType88

Can someone measure the distance between the screws for mounting the heatsink on a 1080 Ti vs the 2080 Ti when they get theirs? I'm hoping I can fit my Morpheus 2 on the 2080 Ti, and if it doesn't fit I'm going to be pickier about what GPU I get. I'll bring it up again after release ;p


----------



## GraphicsWhore

keikei said:


> https://wccftech.com/evga-geforce-rtx-20-series-ftw3-hybrid-hydro-copper-custom-graphics-cards/
> https://videocardz.com/77837/evga-unveils-hydro-copper-and-hybrid-geforce-rtx-models


Daamn. Wonder what the timing and pricing on the HC FTW3 will be. If it were reasonable I'd wait, but I've always had bad luck catching these when they go up for sale, and then it's that much longer of waiting.


----------



## xer0h0ur

toncij said:


> Hmm. 13-phase? Can you find a PCB picture?
> From what I recall it was 8.
> Also, on top, only 6 pins were connected to an 8-pin socket (the left one), but I can't reliably confirm that since Google Images is a mess.
> 
> As I've said, I might be wrong. If you can confirm...


I mean you literally could have seen it for yourself if you just went to Nvidia's site. 

https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080-ti/


----------



## ClashOfClans

NewType88 said:


> Can someone measure the distance between the screws for mounting the heatsink on a 1080ti vs 2080 ti when they get theirs ? I'm hoping I can fit my Morpheus 2 on the 2080 ti and if it doesn't fit I'm going to be more picky of what GPU I get. Ill bring it up again after release ;p


It's pretty obvious the 2070 will be on par with, perhaps a little faster than, the 1080 Ti; the 2080 obviously much faster than the 1080 Ti; and then the 2080 Ti will proceed to blow the 1080 Ti away.


----------



## NewType88

why are you quoting me to say that ? lol. All i know is its going to be more than my 1070.


----------



## shiokarai

ThrashZone said:


> Hi,
> Would you feel better if the 2080ti was the titan X series replacement naming
> Titan Xp has always been 1200.us
> 
> People keep comparing it to the 1080ti but obviously it's not even close to it's price point
> But the 2080 is the same price point as the 1080ti so we'll have to wait and see how they benchmark
> You'll have to let us all know what's up along with everyone else that has preordered :thumb:


You're aware that the price of the RTX 2080 Ti, and the rest of the 20-series price structure, correlates directly with the overstock of 10-series cards Nvidia has to clear out? They overshot production targets, the cryptocurrency boom of 2017 went bust, and they were left with a massive amount of unsold 10-series cards. This even came up in their recent earnings call.

This way the new RTX series is a new "level", and the 10-series GTX cards can still be sold simultaneously. It's artificial market segmentation, possible because Nvidia has a de facto monopoly in the GPU market right now. They essentially compete with themselves, i.e. they won't do themselves any harm.

You think the $1199 price of the 2080 Ti is due to removing the Titan line from the "gaming" lineup, but guess what... MSRP for the 2080 Ti is $999 - that's what Nvidia told everybody in the keynote and slides. That's not the Titan price point and never has been. The $1199 you mention is the Founders Edition premium, which may or may not correlate with the last-gen Titan price (well, technically the last-gen Titan price point is the Titan V price point...).

No need to be snarky.


----------



## Fiercy

shiokarai said:


> You think the $1199 price of the 2080 Ti is due to removing the Titan line from the "gaming" lineup, but guess what... MSRP for the 2080 Ti is $999 - that's what Nvidia told everybody in the keynote and slides. That's not the Titan price point and never has been. The $1199 you mention is the Founders Edition premium, which may or may not correlate with the last-gen Titan price (well, technically the last-gen Titan price point is the Titan V price point...).
> No need to be snarky.


Dude clearly... do your research I mean seriously:
https://www.engadget.com/2013/02/19/nvidia-gtx-titan-announce/


----------



## Shadowsong

Is anyone considering waiting a bit longer for the EVGA FTW3 Hydro Copper cards to come out? Do you think there will be much of a performance difference?

https://www.guru3d.com/news-story/evga-shows-hydro-copper-and-hybrid-geforce-rtx-models.html


----------



## sblantipodi

Going to buy a threadripper 2 2950x. If you need to speak now it's the right moment 😃


----------



## shiokarai

nevermind, sheesh


----------



## ThrashZone

Hi,
All will be settled soon enough 
When the 2080ti smokes the titan X series everything will be clear as black and white


----------



## raider89

shiokarai said:


> You think $1199 price of 2080 Ti is due to the removing titan line from the "gaming" line...


The 2080 Ti DID take the titan line. Hence the price. Do some research.


----------



## doom26464

How Did I miss this thread?

There is some gold in here!

A lot of delusional people, that's for sure. The 2070 will not even come close to matching the 1080 Ti. The 2080 and 1080 Ti will be within a margin of each other, and let's not forget 8GB of VRAM vs. 11GB of VRAM.


----------



## snafua

Rumors for benchmark release dates:

"If the NDA details are true, Nvidia has constructed a three-pronged news cycle for the big RTX launch. On September 14, editors will go live with a deep dive into the Turing architecture. On September 17 the reviews for RTX 2080 will be published, and gamers curious about the flagship RTX 2080 Ti will have to wait until September 19."

I find it funny that (if true) the Ti reviews would land just a day before the first official shipping date.

https://www.forbes.com/sites/jasone...rtx-2080-heres-when-you-can-read-the-reviews/


----------



## xer0h0ur

If there is one thing I really hate about Nvidia, it's how much they try to control their partners. It's such a classless, low-rent move that it's even more confusing to see it from the industry leader. The thing is, you see it happen across industries: Intel did it to AMD, the Patriots cheated many times over in the NFL, and now Nvidia is doing it to their industry. It's almost like a fear of losing leadership drives them to new lows to remain on top at all costs.


----------



## Foxrun

xer0h0ur said:


> If there is one thing I really hate about Nvidia, it's how much they try to control their partners. It's such a classless, low-rent move that it's even more confusing to see it from the industry leader. The thing is, you see it happen across industries: Intel did it to AMD, the Patriots cheated many times over in the NFL, and now Nvidia is doing it to their industry. It's almost like a fear of losing leadership drives them to new lows to remain on top at all costs.


I lol'd at the patriots part.


----------



## arrow0309

I've played part of the main story (for only 3-4 days) on a 4K G-Sync Predator XB271HK with my old Titan X (with some light tweaking down from Ultra settings) and it was really playable at 55-60 fps (G-Sync driven).
All this 2080 Ti SLI seems like a lot of wasted power to me.


----------



## keikei

I just want a constant 60fps in Subnautica. I know it's not the best-optimized game, but having moar power would help. Maybe the $999 price point will magically appear once this card launches. Dat be nice.


----------



## ClashOfClans

doom26464 said:


> How Did I miss this thread?
> 
> There is some gold in here!
> 
> A lot of delusional people, that's for sure. The 2070 will not even come close to matching the 1080 Ti. The 2080 and 1080 Ti will be within a margin of each other, and let's not forget 8GB of VRAM vs. 11GB of VRAM


I'm not playing any games that require more than 8GB of VRAM, even at 4K. Turning on AA at 4K with a 1080 Ti produces such a frame rate drop that VRAM is usually a non-factor, and when not running AA, not much of a frame buffer is needed.

Maybe for high-res texture mods it might be necessary, but I think VRAM amounts these days are a bit overrated. 6GB of VRAM is more than enough even at 4K, just from what I'm playing. I thought GTA V would be a VRAM beast, but at 4K with no AA it doesn't use close to 6GB.


----------



## superhead91

Thread cleaned again...

REMINDER

This is an owners thread, not a news thread or general discussion thread. Do not come here to antagonize people for spending their own money in a way they see fit. We will not hesitate to warn, infract, and thread-ban if you continue to do so.

Thanks,
super


----------



## xer0h0ur

Thank you


----------



## sblantipodi

This is not OT, not too much at least, please be patient.

I'm switching flags from blue Intel to red AMD.
My next rig will be powered by a Threadripper 2 2950X and an RTX 2080 Ti.

I upgrade my CPU platform every three to four years; in the meantime I upgrade GPUs twice.

Do you think the current RTX cards come close to saturating PCI Express gen 3 x16 bandwidth?

I ask because in a year or two we will see PCI Express gen 4, and consequently the next cards will use PCIe 4.

Will those cards be expected to saturate PCIe 3 bandwidth?


----------



## xer0h0ur

Short of having a 2080 Ti on hand to test it, it's hard to know how close it gets to saturating the PCIe 3.0 lanes' bandwidth.


----------



## BigMack70

xer0h0ur said:


> Short of having a 2080 Ti on hand to test it, hard to know how close it gets to taking up the PCI-E 3.0 lanes' bandwidth.


Yup, though I'd be surprised if it's limited in any way by PCI-e gen 3. To my knowledge the 1080 Ti isn't really close to saturating 16 lanes of PCIe gen3, and the 2080 Ti is unlikely to be a massive leap from it. 

Historically, PCI-e generations have been one of the areas of PC hardware where the tech advances faster than the hardware that needs it. It's usually 1-2 GPU generations after a PCIe spec hits market that it's really needed, even for 2-way GPU setups.


----------



## sblantipodi

BigMack70 said:


> Yup, though I'd be surprised if it's limited in any way by PCI-e gen 3. To my knowledge the 1080 Ti isn't really close to saturating 16 lanes of PCIe gen3, and the 2080 Ti is unlikely to be a massive leap from it.
> 
> Historically, PCI-e generations have been one of the areas of PC hardware where the tech advances faster than the hardware that needs it. It's usually 1-2 GPU generations after a PCIe spec hits market that it's really needed, even for 2-way GPU setups.


From some tests I have seen, the 1080 Ti was close to saturating PCIe gen 3 x8.
x16 has double the bandwidth of x8, but who knows about the next RTX...
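For rough numbers, here is how PCIe 3.0 bandwidth works out per lane count (a sketch using the spec's 8 GT/s signaling rate and 128b/130b encoding; real-world throughput runs somewhat lower due to protocol overhead):

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding, so usable
# bandwidth per lane is 8 * (128/130) / 8 bytes ≈ 0.985 GB/s.
GT_PER_S = 8.0
ENCODING = 128 / 130

def pcie3_bandwidth_gbps(lanes):
    """Approximate one-way PCIe 3.0 bandwidth in GB/s for a lane count."""
    return lanes * GT_PER_S * ENCODING / 8

print(round(pcie3_bandwidth_gbps(8), 2))   # x8  ≈ 7.88 GB/s
print(round(pcie3_bandwidth_gbps(16), 2))  # x16 ≈ 15.75 GB/s
```

So x16 is indeed exactly double x8, and a card that is "close" at x8 still has roughly 8 GB/s of headroom at x16.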


----------



## keikei

I think I may end up with a blower card. I never saw a large difference in gaming performance in prior iterations. Somehow, companies can charge an extra $200 for aftermarket fans/heatsinks; these were previously $50. The price gouging is in full effect.


----------



## ZealotKi11er

keikei said:


> I think I may end up with a blower card. I never saw a large difference in gaming performance in prior iterations. Somehow, company's can charge an extra $200 for aftermarket fans/heatsink. These were prior $50. The price gouging is in full effect.


I do not think $1000 price is real.


----------



## keikei

ZealotKi11er said:


> I do not think $1000 price is real.


Yeah, they had some blower preorders for the same amount as aftermarket. We'll see once all the cards are up for sale, but you may be correct.


----------



## Artah

Anyone else see that the ship date of the 2080 is 9/20, while the previous preorder ship date is 10/22? How did the people just ordering now get ahead of the line? Is it possible that Nvidia is "Raytracing" the pre-orders to correct the actual ship date of 9/20?


----------



## ClashOfClans

This is technically the official 2080 Ti preorder club.


----------



## Mad Pistol

Artah said:


> Anyone else see that ship date of 2080 is 9/20 and the previous preorder ship date is 10/22? How did the people just ordering now get ahead of the line. Is it possible that Nvidia is "Raytracing" the pre-orders to correct the actual ship date of 9/20?


It's possible they were pushing back preorders to make MORE people preorder so they would think they couldn't get it before Christmas unless they preordered it.

Lots of preorders.... preorder.


Have you preordered it yet?


----------



## Vow3ll

No comment necessary.... 
https://www.reddit.com/r/nvidia/comments/9dq97v/evga_rtx2080ti_icx2_misrepresentation/


----------



## white owl

Vow3ll said:


> No comment necessary....
> https://www.reddit.com/r/nvidia/comments/9dq97v/evga_rtx2080ti_icx2_misrepresentation/


A caption seems fitting...

EVGA makes a mistake and Reddit overreacts. Lol


----------



## ClashOfClans

Seems to be a lot of 2080s still available for preorder. If the 2080 overclocks well, maybe it can match stock 2080 Ti performance. That would be great. I think an overclocked 2080 is going to give a lot of headroom in games at 4K. Wonder if I should preorder one for $799 just in case the 2080 becomes the bargain chip.


----------



## Clukos

ClashOfClans said:


> Seems to be a lot of 2080s available for preorder still. If the 2080 overclocks well maybe it can match stock 2080 Ti performance. That would be great.



Impossible. The 2080 Ti has 47% more Cuda Cores and 38% higher memory bandwidth. I don't see the 2080 matching the 2080 Ti in anything GPU bound, maybe not even with LN2.
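Those percentages can be checked against the announced launch specs (a quick sketch; the 2080's 2944 CUDA cores and 448 GB/s, and the 2080 Ti's 4352 cores and 616 GB/s, are the launch figures assumed here):

```python
# Announced launch specs: RTX 2080 Ti vs RTX 2080.
ti_cores, ti_bw = 4352, 616   # CUDA cores, memory bandwidth in GB/s
v80_cores, v80_bw = 2944, 448

def pct_more(a, b):
    """How much larger a is than b, in percent."""
    return (a / b - 1) * 100

print(round(pct_more(ti_cores, v80_cores)))  # ≈48% more CUDA cores
print(round(pct_more(ti_bw, v80_bw)))        # ≈38% more memory bandwidth
```

That gap is far larger than any realistic overclock can bridge, which is the point being made above.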


----------



## dentnu

The EVGA GeForce RTX 2080 Ti FTW3 ULTRA GAMING is now up for preorder, or at least it was. The price on it is $1349.99! I preordered an FE card from Nvidia, and I'm still trying to convince myself it's worth it at $1200. The price on this card doesn't surprise me, but damn, by the time the Kingpin and the really top-end cards come out they will be $1600+. This is getting out of hand; no way the Kingpin or any of those cards is worth $1600+, hell, the FE card is overpriced at $1200. Just think what the next-gen cards are going to cost...

https://www.evga.com/products/product.aspx?pn=11G-P4-2487-KR


----------



## xer0h0ur

IMO this is the problem Nvidia created when they vacated the Titan name from the "gaming" cards. They clearly just slapped the Ti name on a Titan and allowed AIBs to come up with custom designs, which people are going to pay a premium for.


----------



## CallsignVega

I snagged one (they are still available for pre-order as I type this):

https://www.evga.com/products/product.aspx?pn=11G-P4-2487-KR

The best cooler IMO.


----------



## ClashOfClans

CallsignVega said:


> I snagged one (they are still available for pre-order as I type this):
> 
> https://www.evga.com/products/product.aspx?pn=11G-P4-2487-KR
> 
> The best cooler IMO.


$1200 to $1350. I guess it's worth it if you are on air, but I would think most people willing to spend that much have a custom loop.


----------



## TheGovernment

People are gonna run these on air? What is this, 2006?


----------



## Nizzen

TheGovernment said:


> People are gonna run these on air? What is this, 2006?


Ln2


----------



## CallsignVega

ClashOfClans said:


> $1200 to $1350. I guess it's worth it if you are on air, but I would think most people willing to spend that much have a custom loop.


The benefit of water isn't what it once was. You may garner a boost step or two over top air, which isn't much.


----------



## Ford8484

Can't wait to try Shadow of the Tomb Raider with this puppy


----------



## Glerox

I just tested BFV SLI performance with the BF1 profile... it sucks so badly!

https://youtu.be/iSgkAQxibkY

Literally getting 5-10 FPS more over a single Titan XP despite good scaling...

I don't expect a crazy improvement in the final game...

I can't wait for my 2080 Ti to get rid of SLI... it's just trouble nowadays IMO


----------



## NewType88

CallsignVega said:


> I snagged one (they are still available for pre-order as I type this):
> 
> https://www.evga.com/products/product.aspx?pn=11G-P4-2487-KR
> 
> The best cooler IMO.


Ah man, I checked last night and it wasn’t available and still isn’t now. Lucky !


----------



## toncij

NewType88 said:


> Ah man, I checked last night and it wasn’t available and still isn’t now. Lucky !


Europe didn't even get the chance.

Regarding SLI: forget it until next gen at least. If developers invest the insane amount of effort needed to run mGPU on custom solutions over the wider bus that NVLink now provides, it'll be ready for the next gen, not this one. It takes time. SLI was used because it was almost effortless, but solutions like the one Ashes uses are expensive to make. They did it only for prestige.

The only things that could revitalize multi-GPU are heavily used ray tracing, which scales great across multiple cards, and multi-chip GPU solutions once manufacturers run out of options at the miniaturization wall (7nm looks to be it) and are forced to build faster chips using interposers or MCMs, like Ryzen and some Intel + Vega products do.


----------



## xer0h0ur

CallsignVega said:


> The benefit of water isn't what it once was. You may garner a boost step or two over top air, which isn't much.


Except you're also cooling the vRAM and VRMs, maintaining overclocks, reducing noise and then there is of course the aesthetic of it. Ever since I went open loop in my builds, waterblocking my video cards is automatic.


----------



## CallsignVega

Have you had a high end air cooler before?

https://www.evga.com/technology/icx2/

vRAM and VRMs are certainly cooled; it can cool the GPU to ~50C versus ~35-40C on water, and can actually be quieter than a water pump and radiator fans (the 1080 Ti FTW3 was only 28 dB).

That 10-15C delta may get you a boost step or two, maybe a 1-2% performance increase. Certainly not enough to shout from the hilltops.

Now if you're talking about blower-style OEM coolers like the one on the 1080 Ti FE, yeah, those are terrible 84C-throttling coolers.


----------



## xer0h0ur

CallsignVega said:


> Have you had a high end air cooler before?
> 
> https://www.evga.com/technology/icx2/
> 
> vRAM and VRM's are certainly cooled, can cool the GPU to ~50C versus water ~35-40C, can actually be quieter than a water pump and radiator fans (1080Ti FTW3 was only 28 dB.)
> 
> That 10-15C delta may get you a boost step or two, maybe 1-2% performance increase. Certainly not enough to shout from the hilltops.
> 
> Now if you are talking like blower style OEM coolers like one on the 1080 Ti FE, ya those are terrible 84C throttling coolers.


Did you forget that Titan V benefited massively from vRAM overclocking? Or are you implying that overclocking the vRAM will be the same on air and water?


----------



## Nizzen

Glerox said:


> I just tested BFV SLI performance with the BF1 profile... it sucks so badly!
> 
> https://youtu.be/iSgkAQxibkY
> 
> Literally getting 5-10 FPS more over a single Titan XP despite good scaling...
> 
> I don't expect a crazy improvement in the final game...
> 
> I can't wait for my 2080 Ti to get rid of SLI... it's just trouble nowadays IMO


SLI works great here with the BF1 profile, using ultra settings. Tried 4K, no HDR: ~100fps
3440x1440: ~110-130fps
1440p: 140-160fps
1080p: 180-200fps


Maybe HDR isn't working great with the BF1 profile in BF5?


----------



## CallsignVega

xer0h0ur said:


> Did you forget that Titan V benefited massively from vRAM overclocking? Or are you implying that overclocking the vRAM will be the same on air and water?


The Titan V doesn't apply to the 2080 Ti situation. With the V, the only choices are a crap OEM blower cooler or a custom-loop water block. The V benefits much more from a water block than a high-end air-cooled 2080 Ti would.


----------



## Glerox

Nizzen said:


> Sli works great here with bf1 profile. Using ultrasettings. Tried 4k no hdr ~100fps
> 3440x1440 ~ 110-130fps
> 1440p 140-160fps
> 1080p 180-200fps.
> 
> 
> Maybe hdr not working great with bf1 profile in bf5?


Yeah I tried without HDR and I was more around 85-90fps so HDR is decreasing performance.


----------



## toncij

xer0h0ur said:


> Did you forget that Titan V benefited massively from vRAM overclocking? Or are you implying that overclocking the vRAM will be the same on air and water?


HBM overclocking is vastly different from GDDR; you get much better gains from a few MHz of OC: https://youtu.be/CPqdZZooS2g


----------



## xer0h0ur

You still didn't answer the question. Are you assuming that you can overclock the vRAM equally as well on an air cooler versus a waterblock?



toncij said:


> HBM overclocking is vastly different than GDDR, much better gains from a few MHz oc: https://youtu.be/CPqdZZooS2g


Beg to differ. The only thing that matters is how much bandwidth you're creating and Volta showed it would benefit from the added bandwidth. Who's to say Turing can't as well?


----------



## CallsignVega

xer0h0ur said:


> You still didn't answer the question. Are you assuming that you can overclock the vRAM equally as well on an air cooler versus a waterblock?


No idea how GDDR6 will respond thermally. With a 10C delta it might be 0% change or it might be 10% change. Just speculation at this point.


----------



## changboy

I own the Aorus 1080 Ti waterblock with RGB LEDs and it really shines. I can pre-order the Asus 2080 Ti Dual for September 20, but the EK block for the 2080 Ti doesn't list the Asus in its compatibility list, and the EK block doesn't have much RGB lighting, so it would feel like a downgrade from my Aorus design. I don't know if they will later make a beautiful Aorus with an RGB waterblock like that.


----------



## keikei

Anyone getting this one? Its one phat card! EVGA GeForce RTX 2080 Ti FTW3 ULTRA GAMING


----------



## CallsignVega

Yes, that is the one I have on order. Should be the best air cooled card out of all of them.


----------



## BigMack70

Any word on when the stand alone EVGA 2080 Ti hybrid cooler will be for sale? I'll likely add that to mine since I doubt my Titan X Maxwell hybrid cooler will fit this card (although, it did fit the 1080 Ti so who knows...).


----------



## Rob w

I was going to run mine on stock fans, but I have a dual-loop rig, so one loop would be redundant if I did.
Ordered the EKWB block; plus, it turns it into a slim card, allowing better airflow inside the case.


----------



## MiniZaid

Glerox said:


> I just tested BFV SLI performance with the BF1 profile... it sucks so badly!
> 
> https://youtu.be/iSgkAQxibkY
> 
> Litteraly getting 5-10 FPS more over single Titan XP despite good scaling...
> 
> I don't expect crazy improvment in the final game...
> 
> I can't wait my 2080TI to get rid of SLI... just trouble nowadays IMO


I thought you made a video that shows the performance is good in closed alpha with the SLI bits

EDIT: oh hdr is the problem. I hope they fix that on release.


----------



## Glerox

MiniZaid said:


> I thought you made a video that shows the performance is good in closed alpha with the SLI bits
> 
> EDIT: oh hdr is the problem. I hope they fix that on release.


Yeah, on the alpha I was playing the other map too. Maybe Rotterdam is more taxing.
HDR is the problem, but also fullscreen, G-Sync, TAA... all reasons why SLI performance is worse.


----------



## carlhil2

Supposedly Turing boosting without OC... https://twitter.com/VideoCardz/status/1039149716896927744


----------



## CallsignVega

Hmm, if Turing boosts that high at stock, I wonder if that's where a lot of the performance gains versus Pascal come from. Meaning an overclocked Pascal would narrow the gap with an overclocked Turing, since Pascal may have more headroom left.


----------



## toncij

CallsignVega said:


> Hnmm, if Turing is boosting that high stock, I wonder if that is where they got a lot of the performance gains versus Pascal. Meaning an overclocked Pascal will narrow the performance gap with an overclocked Turing since it may have more headroom.


Going from the typical ~1900 MHz to 2100 MHz is only about 10%, and that's just clock, which is far from translating directly into 10% more performance. And I'm being generous with 2100 versus the 2070 up there and my FE models, which sat at 1925-1950 minimum on average.
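As a quick sanity check on that math (a sketch; the clock figures are the ones quoted in this exchange, not measured values):

```python
def clock_gain_pct(base_mhz, oc_mhz):
    """Relative clock increase in percent."""
    return (oc_mhz / base_mhz - 1) * 100

# From ~1900 MHz typical boost to a hypothetical 2100 MHz OC:
print(round(clock_gain_pct(1900, 2100), 1))  # 10.5%
# From the ~1950 MHz the FE cards reportedly sat at:
print(round(clock_gain_pct(1950, 2100), 1))  # 7.7%
```

Either way the clock delta is around 8-11%, and real performance scaling is usually less than clock scaling.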


----------



## carlhil2

CallsignVega said:


> Hnmm, if Turing is boosting that high stock, I wonder if that is where they got a lot of the performance gains versus Pascal. Meaning an overclocked Pascal will narrow the performance gap with an overclocked Turing since it may have more headroom.


 Still wouldn't match the humongous clock advantage Pascal had over Maxwell though..


----------



## ottoore

The GDDR6 on these 3 cards is rated at 14 GHz, but that shows 13.6 GHz, just like a 1080 Ti. Fake.
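For context, here's how a per-pin data rate converts to total memory bandwidth on the 2080 Ti's 352-bit bus (a sketch of the standard spec-sheet formula; the 13.6 figure is just the number quoted above):

```python
def mem_bandwidth_gbps(data_rate_gbps, bus_width_bits):
    """Memory bandwidth in GB/s from per-pin data rate (Gb/s) and bus width."""
    return data_rate_gbps * bus_width_bits / 8

print(round(mem_bandwidth_gbps(14.0, 352), 1))  # 616.0 GB/s at the rated 14 Gb/s
print(round(mem_bandwidth_gbps(13.6, 352), 1))  # 598.4 GB/s at 13.6 Gb/s
```

The rated 14 Gb/s on a 352-bit bus is exactly the 616 GB/s on the spec sheet, so a reading of 13.6 would mean slightly under-rated bandwidth, not a different card.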


----------



## Ford8484

So does anyone think Nvidia will release a new Turing Titan at $3000, essentially making the Ti versions the old price of the Titans? That would be pretty lame, but I understand it, since AMD is doing nothing at the highest end. If that's the case, though, this Ti would probably be the last Ti I get; I'd just stick with the **80 cards going forward. I can't justify spending over $1000 every GPU generation, and 4K gaming will be even more viable on lower-end cards by the 3000 series....


----------



## sew333

Hello. I want to sell my 1080 Ti Aorus to buy a 2080 Ti Aorus. According to my local shop, www.x-kom.pl, the card will be in shops on 20.09; is this true? The 2080 Ti Aorus or the Gaming version.

https://www.x-kom.pl/p/445410-karta...geforce-rtx-2080-ti-gaming-oc-11gb-gddr6.html


----------



## Glerox

You guys had better be quick to cancel your preorders if bad reviews come out on September 19th, ONE day before release lol, for both the 2080 and 2080 Ti. VideoCardz says there isn't even a driver yet lol.


----------



## changboy

sew333 said:


> Hello. I want to sell my 1080 ti aorus to buy 2080 ti aorus . According to my local shop www.x-kom.pl card will be in shops on 20.09 is this true? 2080 ti aorus or gaming version.
> 
> https://www.x-kom.pl/p/445410-karta...geforce-rtx-2080-ti-gaming-oc-11gb-gddr6.html


That's not the Aorus one yet, it's just a normal Gigabyte.


----------



## changboy

Here is the teaser of the Aorus, which will come later: https://videocardz.com/newz/gigabyte-teases-aorus-geforce-rtx-2080-ti


----------



## sew333

So it's wrong info on the shop site? Will 20.09 just be Founders Editions? I want the Aorus or Gaming version of the Gigabyte 2080 Ti.


https://www.x-kom.pl/p/445410-karta...geforce-rtx-2080-ti-gaming-oc-11gb-gddr6.html


----------



## changboy

sew333 said:


> So its wrong info on shop site? 20.09 will be founder editions? I want aorus or gaming version of gigabyte 2080 ti
> 
> 
> https://www.x-kom.pl/p/445410-karta...geforce-rtx-2080-ti-gaming-oc-11gb-gddr6.html


It's the Gaming version, not the Aorus one, but if you really want it for September 20 you'd better email them to ask, because sometimes they will sell it to you as a pre-order but you won't receive it on September 20, only later. I say this because here in Canada there's a store that doesn't show whether they're sold out for September 20; it looks like all the 2080 Tis are still available, but that's not true, and if you buy one you'll receive it later because all the pre-order stock is sold out.


----------



## xer0h0ur

Glerox said:


> You guys better be quick to cancel your preorders if bad reviews are coming up on September 19th, ONE day before the release lol, for both the 2080 and 2080TI. Videocardz says there isn't even a driver yet lol.


There is no driver because Nvidia hasn't given anyone the driver yet. This is the first launch, I think, that Nvidia has NDAed like this and specifically demanded the contact information of each reviewer to approve or deny access to the driver for reviews. They even told AIBs that they have to request reviewer contact information and be approved before allowing reviewers to test their cards. 

Are you a buyer, or just another troll in this thread? Since this is an owners thread...


----------



## Barefooter

It's not really an Owner's thread though 

It should be renamed the "NVIDIA RTX 2080 Ti Speculation Club"

Then when people actually have the cards in hand start a real "NVIDIA RTX 2080 Ti Owner's Club"


----------



## ThrashZone

Hi,
Really 53 pages of speculation :thumb:

Or preorder owners


----------



## xer0h0ur

Glerox said:


> You guys better be quick to cancel your preorders if bad reviews are coming up on September 19th, ONE day before the release lol, for both the 2080 and 2080TI. Videocardz says there isn't even a driver yet lol.





Barefooter said:


> It's not really an Owner's thread though
> 
> It should be renamed the "NVIDIA RTX 2080 Ti Speculation Club"
> 
> Then when people actually have the cards in hand start a real "NVIDIA RTX 2080 Ti Owner's Club"


I mean. Just sayin. I also have the EK block and backplate pre-ordered. I'm committed.


----------



## dVeLoPe

So if I ALREADY have SLi 1080Ti what would I gain by moving to RTX 2080Ti aside from RT? or will Pascal in SLi still trump a single 2080Ti


----------



## changboy

xer0h0ur said:


> I mean. Just sayin. I also have the EK block and backplate pre-ordered. I'm committed.


Here in Canada it's not $1,200 but $1,660 lol.


----------



## Glerox

xer0h0ur said:


> Are you a buyer or just another troll in this thread though. Since this is an owners thread...


What? I guess you haven't seen my previous posts.
I pre-ordered during the conference lol. Also pre-ordered EKWB block and backplate.

That doesn't stop me from thinking that Nvidia is over-controlling this release, and I think they may have rushed it a bit.
Also, I think it's a bitter move from Nvidia to have the NDA lift only one day before the official release...
I'm not trolling, just saying.


----------



## Glerox

dVeLoPe said:


> So if I ALREADY have SLi 1080Ti what would I gain by moving to RTX 2080Ti aside from RT? or will Pascal in SLi still trump a single 2080Ti


Pascal in SLI will be better than one 2080 Ti in the 10% of games that work with SLI


----------



## raider89

dVeLoPe said:


> So if I ALREADY have SLi 1080Ti what would I gain by moving to RTX 2080Ti aside from RT? or will Pascal in SLi still trump a single 2080Ti


You should sell those two; that will give you enough for a 2080 Ti, and a single 2080 Ti will be better.


----------



## Ford8484

dVeLoPe said:


> So if I ALREADY have SLi 1080Ti what would I gain by moving to RTX 2080Ti aside from RT? or will Pascal in SLi still trump a single 2080Ti


No one knows yet; one more week until reviews. But you would obviously get better performance in games that don't support SLI, plus RTX cards support that DLSS (deep learning) tech. Apparently it doubles performance in games that support it... but it's still all vague to my knowledge.


----------



## GraphicsWhore

Barefooter said:


> It's not really an Owner's thread though
> 
> It should be renamed the "NVIDIA RTX 2080 Ti Speculation Club"
> 
> Then when people actually have the cards in hand start a real "NVIDIA RTX 2080 Ti Owner's Club"


Nah, we're not going to do that.


----------



## empyr

Does anybody know if the MSI Gaming Trio is reference PCB?


----------



## gamingarena

empyr said:


> Does anybody know if the MSI Gaming Trio is reference PCB?


Its not,


----------



## empyr

gamingarena said:


> Its not,


 Thank you  


Edit: Nevermind, found it


----------



## NAIM101

Anyone know when Nvidia will restock on their RTX 2080TI FE?


----------



## carlhil2

https://s22.q4cdn.com/364334381/files/doc_presentations/2018/09/JHH_GTCJapan2018_FINAL.pdf


----------



## toncij

empyr said:


> Thank you
> 
> 
> Edit: Nevermind, found it


Looks to be a very nice choice.


----------



## Glerox

carlhil2 said:


> https://s22.q4cdn.com/364334381/files/doc_presentations/2018/09/JHH_GTCJapan2018_FINAL.pdf


DLSS is by far the most important feature of Turing!


----------



## CallsignVega

carlhil2 said:


> https://s22.q4cdn.com/364334381/files/doc_presentations/2018/09/JHH_GTCJapan2018_FINAL.pdf


Hmm, dividing the chart up with a ruler, that's 51 FPS for 1080 Ti and 75 FPS for 2080 Ti in normal raster. So a 47% performance increase. I'm satisfied with that.
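For reference, that percentage can be sanity-checked with a quick calculation (the 51 and 75 FPS values are eyeballed off the chart with a ruler, so treat them as approximate):

```python
# Approximate FPS values read off the GTC Japan presentation chart
fps_1080ti = 51
fps_2080ti = 75

# Relative performance increase of the 2080 Ti over the 1080 Ti
increase_pct = (fps_2080ti / fps_1080ti - 1) * 100
print(f"{increase_pct:.0f}% faster")  # prints "47% faster"
```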


----------



## Newbie2009

So from what I've seen so far on these cards, the 2080 Ti will be 15-20% faster than the 1080 Ti, but with the new AI AA on Turing it will be about 40% faster, as you won't be using in-game AA like MSAA, SMAA, TAA, etc.

That's what it seems like to me.

The reviews could be messy; no doubt Nvidia will be pushing reviewers to bench the 1080 Ti with equivalent AA vs. the 2080 Ti with the new AI AA method.


----------



## Ford8484

Glerox said:


> DLSS is very the most important feature of Turing!


Yep, could be a game changer. Still odd that Nvidia really isn't marketing it as much as ray tracing, though. It's pretty clear people want better performance first, then ray tracing second. I mean, who wants 1080p 60 fps on a 4K screen? Most people buying these cards are probably not even on 1080p screens anymore... regardless, real benchmarks coming soon!!!


----------



## BigMack70

Newbie2009 said:


> So what I've seen so far on these cards the 2080ti will be 15-20% faster than the 1080ti


The numbers we've seen so far suggest the 2080 Ti is natively between 30% and 50% faster than a 1080 Ti, without considering things like DLSS etc. While you always have to take Nvidia-provided slides with a grain of salt, I very much doubt that they're going to be that far off the mark.


I'm excited to get my card next week to bump my Shadow of the Tomb Raider frames up... a 1080 Ti can't hold 60 fps very often at all at 4K with ultra settings, and is often around 40-45 fps.


----------



## keikei

Ford8484 said:


> Yep, could be a game changer. Still odd Nvidia really isnt marketing it though as much as Ray tracing. Its pretty clear people want better performance first, then ray tracing second. I mean, who wants 1080p60fps on a 4k screen? Most people buying these cards are probably not even on 1080p screens anymore....regardless, real benchmarks coming soon!!!


Interesting how this popped up while reading this thread: https://www.google.com/search?q=GAM...&sourceid=chrome&ie=UTF-8&safe=active&ssui=on



Darksiders III from Gunfire Games / THQ Nordic
Deliver Us The Moon: Fortuna from KeokeN Interactive
Fear the Wolves from Vostok Games / Focus Home Interactive
Hellblade: Senua’s Sacrifice from Ninja Theory
KINETIK from Hero Machine Studios
Outpost Zero from Symmetric Games / tinyBuild Games
Overkill’s The Walking Dead from Overkill Software / Starbreeze Studios
SCUM from Gamepires / Devolver Digital
Stormdivers from Housemarque

Other Titles Implementing DLSS


Ark: Survival Evolved from Studio Wildcard
Atomic Heart from Mundfish
Dauntless from Phoenix Labs
Final Fantasy XV from Square Enix
Fractured Lands from Unbroken Studios
Hitman 2 from IO Interactive/Warner Bros.
Islands of Nyne from Define Human Studios
Justice from NetEase
JX3 from Kingsoft
Mechwarrior 5: Mercenaries from Piranha Games
PlayerUnknown’s Battlegrounds from PUBG Corp.
Remnant: From the Ashes from Arc Games
Serious Sam 4: Planet Badass from Croteam/Devolver Digital
Shadow of the Tomb Raider from Square Enix/Eidos-Montréal/Crystal Dynamics/Nixxes
The Forge Arena from Freezing Raccoon Studios
We Happy Few from Compulsion Games / Gearbox


----------



## Ford8484

keikei said:


> Interesting how this popped up while reading this thread: https://www.google.com/search?q=GAM...&sourceid=chrome&ie=UTF-8&safe=active&ssui=on
> 
> 
> 
> Darksiders III from Gunfire Games / THQ Nordic
> Deliver Us The Moon: Fortuna from KeokeN Interactive
> Fear the Wolves from Vostok Games / Focus Home Interactive
> Hellblade: Senua’s Sacrifice from Ninja Theory
> KINETIK from Hero Machine Studios
> Outpost Zero from Symmetric Games / tinyBuild Games
> Overkill’s The Walking Dead from Overkill Software / Starbreeze Studios
> SCUM from Gamepires / Devolver Digital
> Stormdivers from Housemarque
> 
> Other Titles Implementing DLSS
> 
> 
> Ark: Survival Evolved from Studio Wildcard
> Atomic Heart from Mundfish
> Dauntless from Phoenix Labs
> Final Fantasy XV from Square Enix
> Fractured Lands from Unbroken Studios
> Hitman 2 from IO Interactive/Warner Bros.
> Islands of Nyne from Define Human Studios
> Justice from NetEase
> JX3 from Kingsoft
> Mechwarrior 5: Mercenaries from Piranha Games
> PlayerUnknown’s Battlegrounds from PUBG Corp.
> Remnant: From the Ashes from Arc Games
> Serious Sam 4: Planet Badass from Croteam/Devolver Digital
> Shadow of the Tomb Raider from Square Enix/Eidos-Montréal/Crystal Dynamics/Nixxes
> The Forge Arena from Freezing Raccoon Studios
> We Happy Few from Compulsion Games / Gearbox


Considering future games will be even more graphically demanding...this could be very promising...really curious to see how this works out.


----------



## NewType88

Ford8484 said:


> Considering future games will be even more graphically demanding...this could be very promising...really curious to see how this works out.


I hope the Witcher 3 gets some DLSS support.

With the next die shrink being 7nm, how else will performance improve after that other than optimization, with things like DLSS? 4K 60 fps is going to be a forever-moving target, it seems.


----------



## rush2049

For some reason Nvidia cancelled one of my card pre-orders.

The cancellation reason? "Other"


Kinda upset, I am going to have to wait till like nov-dec to get the second one now unless I can get them to reinstate my order.

I'll have one card and an nvlink bridge.... suck!!!


----------



## xer0h0ur

Boy you had me sweatin. I just checked my order to make sure it had not been canceled.


----------



## Glerox

rush2049 said:


> For some reason Nvidia cancelled one of my card pre-orders.
> 
> The cancellation reason? "Other"
> 
> 
> Kinda upset, I am going to have to wait till like nov-dec to get the second one now unless I can get them to reinstate my order.
> 
> I'll have one card and an nvlink bridge.... suck!!!


What the hell?!


----------



## TahoeDust

What's up fellas?...


----------



## skycake

rush2049 said:


> For some reason Nvidia cancelled one of my card pre-orders.
> 
> The cancellation reason? "Other"
> 
> 
> Kinda upset, I am going to have to wait till like nov-dec to get the second one now unless I can get them to reinstate my order.
> 
> I'll have one card and an nvlink bridge.... suck!!!


This happened to someone on Reddit and it was because their payment was declined and the email notification was caught by Gmail's spam filter. Check that, and if that's the case contact Nvidia support.


----------



## rush2049

skycake said:


> This happened to someone on Reddit and it was because their payment was declined and the email notification was caught by Gmail's spam filter. Check that, and if that's the case contact Nvidia support.


Just did.

I paid via PayPal, and when I called customer service they said that pre-orders cannot be made via PayPal. So it is likely my first order and the order for the NVLink bridge will be cancelled shortly as well. Which really sucks, since I move funds to PayPal specifically for large purchases like this and it's a pain to pull the funds back out.

-Benjamin


----------



## almsivi

rush2049 said:


> Just did.
> 
> I payed via paypal, and when I called customer service they said that pre-orders cannot be made via paypal. So it is likely my first order and the order for the NVLink bridge will be cancelled shortly as well. Which really sucks since I move funds to paypal specifically for large purchases like this and its a pain to pull the funds back out.
> 
> -Benjamin


I didn't get the notification that the order was cancelled, but I paid partially with PayPal and partially with my debit card. My order also won't be shipping out until October. Digital River is a joke. It seems insane that they don't accept PayPal for pre-orders...

This was also my second order. The day it was announced I preordered one 2080 Ti that was going to ship in September, but autofill put in the wrong shipping address. When I called support to get the address changed, they cancelled the order instead. The language barrier was clearly an issue. They did not understand it was a preorder and that their mistake cost me a month of waiting.

This was after I sold my pair of Titan Xps. So no gaming until late October for me... assuming they don't cancel this order because I used PayPal.


----------



## Glerox

Damn, I also pre-ordered with Paypal.

When I check my order status, it says order received and I have no estimated ship date... normal?


----------



## Rob w

Glerox said:


> Damn, I also pre-ordered with Paypal.
> 
> When I check my order status, it says order received and I have no estimated ship date... normal?


Yeah, that's normal; the order should also say they will email you when the order is complete.
I queried my order with NV because I did not have a shipping date; they don't take your money until the completed order is ready to ship.

Edit: just been checking my invoice and it states at the bottom that orders via PayPal are charged when received, while orders via debit/credit card are charged when shipped.
So according to that it looks like you can preorder via PayPal (it has to be a Digital River **** up!)


----------



## CallsignVega

Someone order a RTX card?


----------






## cx-ray

Rob w said:


> Yeh that’s normal, order should also say they will email you when order complete.
> I quierred my order with nv because I did not have shipping date, they don’t take your money until completed order is ready to ship.


I ordered at launch through Paypal. My CC was charged immediately. All online places appear to do this in mainland EU. Amazon is the only exception I've seen so far, that waits until shipping.


----------



## Glerox

Rob w said:


> Yeh that’s normal, order should also say they will email you when order complete.
> I quierred my order with nv because I did not have shipping date, they don’t take your money until completed order is ready to ship.
> 
> Edit, just been checking my invoice and it states at the bottom that orders via PayPal are charged when received, orders via debit/credit card charged when shipped.
> So according to that it looks like you can preorder via PayPal ( it has to be a digital river **** up!)


So no idea when I'll get my GPU... great


----------



## Pepillo

Leaked gaming performance of the 2080 and 2080 Ti:

https://videocardz.com/77983/nvidia-geforce-rtx-2080-ti-and-rtx-2080-official-performance-unveiled


----------



## Mad Pistol

Crap, this is going to make me purchase a 2080 Ti... I can already feel it happening.


----------



## carlhil2

Some are complaining about the difference in clocks between Turing reference and Pascal reference, while overlooking the fact that Pascal had close to a +700 MHz clock advantage over Maxwell....


----------



## xer0h0ur

Sounds like trolls to me. No real enthusiast complains about being able to clock higher.


----------



## BigMack70

carlhil2 said:


> Some are complaining about the difference in clocks between Turing reference and Pascal reference while overlooking the fact that Pascal had a close to +700mhz clock advantage over Maxwell....


This is the whiniest launch I can remember in the past 10 years. Everyone wants to find something to cry about, and in my opinion it's purely because Nvidia just went balls deep into their wallets. But I have no sympathy for this; Nvidia has been doing that since 2011 at minimum. So why so salty? I don't get it.

These cards are going to be 45-ish percent faster than their Pascal counterparts, a perfectly normal generational gap. They come with tons of features that are a lot better than PhysX or HairWorks. The only bad thing is the price, and yeah, it sucks, but AMD can't compete, so Nvidia gets to charge whatever they want.


----------



## bastian

People are salty because of the price. The cards are actually great. Can't wait.


----------



## BigMack70

bastian said:


> People are salty because of the price. The cards are actually great. Can't wait.


Anyone who is salty now, but who was not salty with the Kepler launch, and all Nvidia launches since, is either ignorant or a hypocrite. Nvidia started the rampant price gouging all the way back then, and this is just par for the course until the hypothetical day when AMD can once again challenge Nvidia for the performance crown.

In other words, I know price is the immediate reason people are salty, but I'd argue the real reason is that people are ignorant and haven't paid attention to what has actually been going on for the past 7 years.


----------



## NAIM101

Anyone know where I can order an Nvidia RTX 2080 Ti reference card?


----------



## Clukos

carlhil2 said:


> Some are complaining about the difference in clocks between Turing reference and Pascal reference while overlooking the fact that Pascal had a close to +700mhz clock advantage over Maxwell....


It all really depends on the headroom these cards have; this is overclock.net after all 

Maxwell had huge headroom
Pascal had some headroom, but not as much as Maxwell
Turing FE has an overclock on top of boost clocks

We won't know how much these overclock until people get their hands on the cards. If the max you can overclock over the FE OC is 30-50 MHz, that's pretty bad. I think most of the overclocking will be done on the VRAM; we don't know how well GDDR6 clocks, and that'll be interesting.


----------



## cx-ray

ericnvidia80 said:


> Hi Everyone,
> 
> Wanted to give you an update on the GeForce RTX 2080 Ti availability.
> 
> GeForce RTX 2080 Ti general availability has shifted to September 27th, a one week delay. We expect pre-orders to arrive between September 20th and September 27th.
> 
> There is no change to GeForce RTX 2080 general availability, which is September 20th.
> 
> We’re eager for you to enjoy the new GeForce RTX family! Thanks for your patience.


https://forums.geforce.com/default/...ries/geforce-rtx-2080-ti-availability-update/


----------



## xer0h0ur

Clukos said:


> It all really depends on the headroom these cards have, this is overclock.net after all
> 
> Maxwell had huge headroom
> Pascal had some headroom but not as much as Maxwell
> Turing FE has an overclock on top of boosting clocks
> 
> We won't know how much these overclock until people get their hands on the cards. If the max you can overclock over FE OC is 30-50 MHz then that's pretty bad. I think most of the overclocking will be done in VRAM, we don't know how well GDDR6 clocks and that'll be interesting.


I would bet the farm there isn't a chance in hell you can only overclock Turing 50 MHz. They increased the power envelope by 20W, upped it to 13+3 phase power delivery, and cleaned up the power delivery, reducing noise and ripple.


----------



## xer0h0ur

Also interesting to note:

"Scanner will make this process a lot more automatic and trustworthy. It will run incrementally and increase the clock speed and voltage of the GPU to build a profile of a specific card's capabilities, testing the GPU's computing ability at each speed to make sure that it's operating properly. Generally there will be arithmetic errors, rather than outright crashes, when the GPU is being operated only very slightly faster than it can support. So by making only small adjustments, Scanner can probe the limits of what the chip can handle without the crashes and reboots, and without human intervention. The whole process takes about 20 minutes."

https://arstechnica.com/gadgets/2018/09/nvidia-makes-gpu-overclocking-a-lot-smarter-with-scanner/

I'll give Scanner a try, but only after I do my own manual overclocking, for the sake of comparison. I want to see what Scanner determines is the highest stable overclock versus manual overclocking; hence I'm doing my own overclocking first, so I don't have a bias.
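The incremental probing loop the article describes can be sketched roughly like this. This is only an illustration of the idea, not Nvidia's actual implementation: the `is_stable_at` callback stands in for Scanner's internal arithmetic self-test, and the step size and offset limit are made-up values.

```python
def find_max_stable_clock(base_clock_mhz, is_stable_at, step_mhz=15, max_offset_mhz=300):
    """Probe upward in small steps, as Scanner is described doing:
    raise the clock a little, run an arithmetic workload, and keep the
    last offset whose results were still correct."""
    best_offset = 0
    for offset in range(step_mhz, max_offset_mhz + 1, step_mhz):
        if is_stable_at(base_clock_mhz + offset):
            best_offset = offset  # computation was still correct at this speed
        else:
            break  # arithmetic errors appeared; back off to the last good offset
    return base_clock_mhz + best_offset

# Toy stand-in for the hardware test: pretend the card errors above 1980 MHz
stable = find_max_stable_clock(1800, lambda clk: clk <= 1980)
print(stable)  # prints 1980
```

Because small over-steps produce wrong arithmetic rather than crashes, the loop can walk right up to the limit unattended, which is why the real tool finishes in about 20 minutes without reboots.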


----------



## GraphicsWhore

BigMack70 said:


> bastian said:
> 
> 
> 
> People are salty because of the price. The cards are actually great. Can't wait.
> 
> 
> 
> Anyone who is salty now, but who was not salty with the Kepler launch, and all Nvidia launches since, is either ignorant or a hypocrite. Nvidia started the rampant price gouging all the way back then, and this is just par for the course until the hypothetical day when AMD can once again challenge Nvidia for the performance crown.
> 
> In other words, I know price is the immediate reason people are salty, but I'd argue the real reason is that people are ignorant and don't pay attention to what's actually going on the past 7 years.

I think people are also salty because they’re being priced out of the high end GPU market. Literally, they can’t afford it. I can understand and I’m not justifying nvidia’s prices but it is what it is.


----------



## farmdve

GraphicsWhore said:


> I think people are also salty because they’re being priced out of the high end GPU market. Literally, they can’t afford it. I can understand and I’m not justifying nvidia’s prices but it is what it is.


As you can see, companies are probing the limits of how much people are willing to pay. Apple, Nvidia, Intel maybe?


----------



## Swaggerfeld

cx-ray said:


> https://forums.geforce.com/default/...ries/geforce-rtx-2080-ti-availability-update/


So, what does this actually mean? That we will be able to purchase non-FE reference 2080TIs after September 27th?


----------



## keikei

Swaggerfeld said:


> So, what does this actually mean? That we will be able to purchase non-FE reference 2080TIs after September 27th?



It sounds like you'll be able to order the card on the 20th, but they won't actually ship until the 27th.


----------



## arrow0309

I'm in, pre-ordered today a new EVGA 2080ti XC Gaming for £1200 (ebuyer.com).
Also pre-ordered yesterday the EK Vector Nickel Acetal RGB + black backplate for ~£170 (Scan.uk).


----------



## cx-ray

Swaggerfeld said:


> So, what does this actually mean? That we will be able to purchase non-FE reference 2080TIs after September 27th?


My best guess is that the launch date for the 2080 Ti has been delayed by 1 week.


----------



## xer0h0ur

That post sounds to me like they're talking about availability at retail establishments, i.e. products on the shelves. I don't think that means pre-orders, but who knows unless he clarifies the statement.


----------



## carlhil2

http://benchmark.finalfantasyxv.com/result/


----------



## NewType88

Ordered an FTW3; it says it should be available on the 23rd of next month, for that specific card that is.


----------



## Baasha

Will those of us who ordered the cards on Day 1 (i.e. Aug. 20th) still get them on Sep. 20th or will they be *shipped* only on Sep. 20th?


----------



## rush2049

Baasha said:


> Will those of us who ordered the cards on Day 1 (i.e. Aug. 20th) still get them on Sep. 20th or will they be *shipped* only on Sep. 20th?


From what I have read, pre-orders will be shipped out ASAP starting on the 20th and should be completely fulfilled by the 27th. They sold too many and don't have any available for general availability on the 20th, so they are keeping the button on 'notify me' until the 27th, when you can order them again (until they sell out again).


----------



## rx7racer

Baasha said:


> Will those of us who ordered the cards on Day 1 (i.e. Aug. 20th) still get them on Sep. 20th or will they be *shipped* only on Sep. 20th?


From what I gather and have found for my order through Amazon, they will ship the 20th; mine is supposed to arrive on Monday the 24th. But that's all anyone can tell me, so I have to assume the 20th will be the ship date for all.


----------



## Baasha

rush2049 said:


> From what I have read. Pre-orders will be shipped out asap starting on the 20th and should be completely fullfilled by the 27th. They sold too many and don't have any available for general availability on the 20th, so they are keeping the button on 'notify me' until the 27th when you can order them again (until they sell out again).





rx7racer said:


> From what I gather and have found for my order through amazon is they will ship teh 20th, mine is supposed to arrive on Monday the 24th. But that's all anyone can tell me so have to assume 20th will be ship date for all.


So nobody will have the card on the 20th? 

sigh..


----------



## xer0h0ur

carlhil2 said:


> http://benchmark.finalfantasyxv.com/result/


The best part, I'm that guy going from the bottom of that chart right to the tippy top. #CantWait


----------



## Glerox

xer0h0ur said:


> right to the tippy top. #CantWait


But really only half way up, think about it...


----------



## Krzych04650

All the performance numbers that have appeared recently look good, and I don't expect them to be much different in reviews, especially since a few serious reviewers have already said that initial results are very promising. I will certainly get a 2080 Ti; hopefully hybrid models will appear soon, because I am not interested in any of those 3-slot bricks.


----------



## Dreamliner

I will say, the Founders Edition card is the best-looking card of this generation, by far. Most (not all) of the partner boards look downright ugly. EVGA has chosen that awful clear plastic nonsense, and other designs just don't look great. The 10-series partner boards were much better looking.

I'm skipping this generation though, because without RT/DLSS games we're looking at a ~30% traditional-rendering performance boost, and I'm not paying $1,200 for that.


----------



## snafua

Teardown of an RTX 2080 Ti FE. 

There's a bit about the NDA prohibiting teardowns, which went out after the cards were already in the hands of reviewers.
How he handles the card makes me cringe =P


----------



## Dreamliner

So far, he has the most content about the new card. Apparently the reason he couldn’t get the fan shroud apart is because it is glued together.

The video he did with Jayztwocents a couple days ago I liked quite a bit.


----------



## farmdve

Dreamliner said:


> So far, he has the most content about the new card. Apparently the reason he couldn’t get the fan shroud apart is because it is glued together.
> 
> The video he did with Jayztwocents a couple days ago I liked quite a bit.


Well that sounds unfortunate. I mean...what if you needed to replace the fan, or at the very least, re-oil the bearings?


----------



## snafua

Wait.. how did we miss this? I'm confused.

https://forums.geforce.com/default/...vailability-update/?ncid=so-twi-rx20tdl-58775

Edit: Ok, so what I get from this is that the Nvidia pre-orders will start going out sometime between Sep 20 - Sep 27 (up to 1 week delay) and the general availability (non-preorder) is now Sep 27th for the Ti, still the 20th for the non-Ti.

How this affects AIB boards I'm still confused on.


----------



## bastian

Yes, if you pre-ordered and got in the Sept batch, you can expect your card to ship between Sep 20-27.

AIBs will probably be delayed as well.


----------



## Zurv

Hrmm… Bitspower has waterblocks up, so you don't have to use the crappy EK ones 

https://shop.bitspower.com/index.php?route=product/product&product_id=7049&search=2080RD

Now I kick myself for not ordering this when I could. I'm going to use NVLink, so if I can't order 2 of the same card there is no point. I have a feeling it will be late October before I get to play around with them.


----------



## tconroy135

bastian said:


> Yes, if you pre-ordered and got in the Sept batch, you can expect your card to ship between Sep 20-27.
> 
> AIBs will probably be delayed as well.


Hmm, I ordered mine about an hour after the event, not sure how long preorders were live. I hope I get mine this upcoming weekend, I'll be so bummed if I have to wait another week.


----------



## xer0h0ur

Zurv said:


> hrmm… bitspower has waterblocks up so you don't have to use the crappy EK ones
> 
> https://shop.bitspower.com/index.php?route=product/product&product_id=7049&search=2080RD
> 
> now... i kick myself for not ordering this when i could. I'm going to use NVLINK sooo if i can't order 2 of the same card there is not point.. I have a feeling it will be last Oct before i get to play around with them.


Crappy EK blocks? So you like RGB puke, with an acrylic terminal to boot? Hard pass.


----------



## empyr

Zurv said:


> hrmm… bitspower has waterblocks up so you don't have to use the crappy EK ones
> 
> https://shop.bitspower.com/index.php?route=product/product&product_id=7049&search=2080RD
> 
> now... i kick myself for not ordering this when i could. I'm going to use NVLINK sooo if i can't order 2 of the same card there is not point.. I have a feeling it will be last Oct before i get to play around with them.



EK's blocks are crappy since when?


----------



## Artah

Mad Pistol said:


> It's possible they were pushing back preorders to make MORE people preorder so they would think they couldn't get it before Christmas unless they preordered it.
> 
> Lots of preorders.... preorder.
> 
> 
> Have you preordered it yet?


That would be shady marketing tactics, Nvidia never does that...


----------



## GraphicsWhore

Zurv said:


> hrmm… bitspower has waterblocks up so you don't have to use the crappy EK ones
> 
> https://shop.bitspower.com/index.php?route=product/product&product_id=7049&search=2080RD
> 
> now... i kick myself for not ordering this when i could. I'm going to use NVLINK sooo if i can't order 2 of the same card there is not point.. I have a feeling it will be last Oct before i get to play around with them.


I've read this a couple of times but maybe the other post was you as well - why do you say EK blocks are crappy? I've had one on my 1080Ti for a year and I haven't had any issues with performance or otherwise.

I'm waiting on both EK + backplate and Alphacool full-cover for my Ti - preordered both to see which I like better.


----------






## Dreamliner

I’m not a huge fan of the vapor chamber choice. I’m sure it will be fine but I prefer at least some airflow across the PCB...


----------



## Zurv

GraphicsWhore said:


> I've read this a couple of times but maybe the other post was you as well - why do you say EK blocks are crappy? I've had one on my 1080Ti for a year and I haven't had any issues with performance or otherwise.
> 
> I'm waiting on both EK + backplate and Alphacool full-cover for my Ti - preordered both to see which I like better.


They crack all the time. People might not see it if they don't use terminals (or don't change cards, reseat, etc.). Over the 10-series cards I cracked like 7 EK blocks and broke 4-5 cards because of shorts. (I've had about 16 cards: 1080s, Titan Xps, etc.) Damn that middle screw! Even knowing that there is a problem... STILL BREAKS! Also, the standoffs that you screw the backplate onto like to unscrew, so when removing the block you end up with the screws still in the video card. Then you need pliers... then your hand slips because you are doing it in the air by yourself, and you break chips off the PCB. 
Bitspower also uses the card's default backplate, which is a nice touch. The build quality is really nice on the Bitspower too. (That said, I don't like any of these dumb colored lights on anything, which both EK and Bitspower are doing.)


If someone still wants to use EK (and maybe they've improved), you might want to think about not using plexi; that is more prone to cracking. Hmm... I guess if someone isn't doing multi-card and never plans to take the block off after they put it on, EK is fine.


----------



## xer0h0ur

I've been using nickel-plated acrylic EK blocks for years, and I removed the block from my 295X2 like 5 times, as well as opening them up to clean multiple times. Never had anything happen, nor did they have fitment issues. How are you managing to break the acrylic is my question?


----------



## empyr

Zurv said:


> they crack all the time. People might not see if they don't use terminals. (or don't change cards, reseats, etc..) Over the 10x series of cards i cracked like 7 EK blocks and broke 4-5 cards because of shorts. (I've had about 16 1080s, XP, Xp, etc cards.) Damn that middle screw! Even knowing that there is a problem... STILL BREAKS! Also the screw-holder things that you screw the backplate on likes to unscrew. So you end up, when removing the block, the screws still in the video card. Then you need pliers... then you slip you hand because you are doing it in the air by yourself and break chips off the PCB.
> bitspower also uses the default back of cards with is a nice touch. The build quality is really nice on the bitspower too. (that said, i don't like any of these dumb colored lights on anything.. that both EK and bits are doing.)
> 
> 
> if someone still wants to use EK (and maybe they improved) you might want think about not using pexi. That is more prone to cracking. Hrmm.. is guess if someone isn't doing multi cards and never plans to take the block off after they put it on- EK is fine.



Honestly, I have never personally heard of that type of issue before. I have several friends who have used multiple EK blocks over the years, including on the 1080s and Titans and haven't had any of the issues you are facing. Do you know of other people who have the same issues as you?


----------



## Dreamliner

Water sure looks good, but holy smokes the cost...and risk!!

The fans on my card barely hit 30% on full-load benchmarks and don't spin at all unless I'm gaming. No pump noise, no expensive setup, and no leak risk.

I really like the look of water, but the benefit just isn’t there for me.


----------



## Artah

Dreamliner said:


> Water sure looks good, but holy smokes the cost...and risk!!
> 
> The fans on my card barely hits 30% on full load benchmarks and don’t spin at all unless I’m gaming. No pump noise, no expensive setups and no leak risks.
> 
> I really like the look of water, but the benefit just isn’t there for me.


Do you have the 2080/2080 Ti cards? I preordered EKWB blocks and a backplate but am thinking of not putting them on if the noise is not that bad and the card stays super cool. It will definitely not look as nice, and it makes the 2x 420mm rads on my CaseLabs pedestal mostly useless.


----------



## Dreamliner

Artah said:


> you have the 2080/2080ti cards? I preordered ekwb blocks and backplate but thinking of not putting them on if the noise is not that bad and the card stays supercool. It will definitely not look as nice and makes the 2x 420mm rads on my caselabs pedestal mostly useless.


Sorry, I didn’t intend to mislead. I was referring to my 1080Ti card.


----------



## MonarchX

Where can I get an RTX 2080 Ti for pre-order so it's at my house on the 27th, or as soon as possible (in the USA)? In most places it's OUT OF STOCK... I would like a BADASS one - up to $2000, but only if that place takes PayPal Credit, and I want it ASAP! Otherwise, mainstream is what I can afford.


BTW, what's special about the Founder's Edition? The GTX 1080 / Ti Founder's cards were more expensive than the 3rd-party-cooled (way better cooled!) editions - so why get them? Do you think the RTX 2080 Ti dual-fan cooler is going to be on par with 3rd-party coolers from EVGA or MSI?


----------



## Nizzen

Zurv said:


> they crack all the time. People might not see if they don't use terminals. (or don't change cards, reseats, etc..) Over the 10x series of cards i cracked like 7 EK blocks and broke 4-5 cards because of shorts. (I've had about 16 1080s, XP, Xp, etc cards.) Damn that middle screw! Even knowing that there is a problem... STILL BREAKS! Also the screw-holder things that you screw the backplate on likes to unscrew. So you end up, when removing the block, the screws still in the video card. Then you need pliers... then you slip you hand because you are doing it in the air by yourself and break chips off the PCB.
> bitspower also uses the default back of cards with is a nice touch. The build quality is really nice on the bitspower too. (that said, i don't like any of these dumb colored lights on anything.. that both EK and bits are doing.)
> 
> 
> if someone still wants to use EK (and maybe they improved) you might want think about not using pexi. That is more prone to cracking. Hrmm.. is guess if someone isn't doing multi cards and never plans to take the block off after they put it on- EK is fine.



User failure.


I have had 17 EK GPU blocks, 3 monoblocks, and 7 normal CPU blocks. No cracks. Using both plexi and nickel/acetal, connected with quick disconnects.


Some people always have problems with everything.


----------



## TheGovernment

Nizzen said:


> Zurv said:
> 
> 
> 
> they crack all the time. People might not see if they don't use terminals. (or don't change cards, reseats, etc..) Over the 10x series of cards i cracked like 7 EK blocks and broke 4-5 cards because of shorts. (I've had about 16 1080s, XP, Xp, etc cards.) Damn that middle screw! Even knowing that there is a problem... STILL BREAKS! Also the screw-holder things that you screw the backplate on likes to unscrew. So you end up, when removing the block, the screws still in the video card. Then you need pliers... then you slip you hand because you are doing it in the air by yourself and break chips off the PCB.
> bitspower also uses the default back of cards with is a nice touch. The build quality is really nice on the bitspower too. (that said, i don't like any of these dumb colored lights on anything.. that both EK and bits are doing.)
> 
> 
> if someone still wants to use EK (and maybe they improved) you might want think about not using pexi. That is more prone to cracking. Hrmm.. is guess if someone isn't doing multi cards and never plans to take the block off after they put it on- EK is fine.
> 
> 
> 
> 
> Userfailure.
> 
> 
> I have had 17x ek gpublocks, 3x monoblocks and 7 normal cpu blocks. No cracks. Using both plexi and nicel/acetal. Connected with "quick disconnects"
> 
> 
> Some allways have problems with everything

Well, I've had/have 8 GPU blocks, 8 CPU blocks, and 4 monoblocks, and 2 GPU blocks have ended up with cracks, plus 2 CPU blocks. It's not like it never happens. In fact, most recently, the pile-of-crap TR block leaked from the seal and destroyed my 1920X and Zenith board... so that was awesome...


----------



## keikei

MonarchX said:


> Where can I get RTX 2080 Ti for pre-order to be at my house on the 27th or as soon as possible (in USA)? In most places, it's OUT OF STOCK... I would like a BADASS one - up to $2000 but only if that place takes PayPal credit and I want it ASAP! Otherwise, mainstream is what I can afford.
> 
> 
> BTW, what special about Founder's Edition? The GTX 1080 / Ti Founder's were more expensive than 3rd party-cooled (way better cooled!) editions - so why get them? Do you think RTX 2080 Ti dual-fan is going to be on par with 3rd party coolers from EVGA or MSI?


Based on a teardown from Gamers Nexus, the FE model is much harder to clean dust out of. It is not maintenance-friendly, to say the least. It wasn't something I was aware of until the video.


----------



## shiokarai

Zurv said:


> they crack all the time. People might not see if they don't use terminals. (or don't change cards, reseats, etc..) Over the 10x series of cards i cracked like 7 EK blocks and broke 4-5 cards because of shorts. (I've had about 16 1080s, XP, Xp, etc cards.) Damn that middle screw! Even knowing that there is a problem... STILL BREAKS! Also the screw-holder things that you screw the backplate on likes to unscrew. So you end up, when removing the block, the screws still in the video card. Then you need pliers... then you slip you hand because you are doing it in the air by yourself and break chips off the PCB.
> bitspower also uses the default back of cards with is a nice touch. The build quality is really nice on the bitspower too. (that said, i don't like any of these dumb colored lights on anything.. that both EK and bits are doing.)
> 
> 
> if someone still wants to use EK (and maybe they improved) you might want think about not using pexi. That is more prone to cracking. Hrmm.. is guess if someone isn't doing multi cards and never plans to take the block off after they put it on- EK is fine.


Don't use excessive force; plexi(glass) is prone to cracking, it's not acetal. I've had 8 GTX 1080 Ti blocks (Titan X Pascal blocks actually, they fit), CPU blocks, monoblocks, etc., and only 1 plexi CPU block has some cracks around the fittings, due to the PC being knocked off the table (mATX PC). But I did get a bad terminal mounting on one of the GPU blocks (it leaked) and had to return it.


----------



## BigMack70

MonarchX said:


> Where can I get RTX 2080 Ti for pre-order to be at my house on the 27th or as soon as possible (in USA)? In most places, it's OUT OF STOCK... I would like a BADASS one - up to $2000 but only if that place takes PayPal credit and I want it ASAP! Otherwise, mainstream is what I can afford.
> 
> 
> BTW, what special about Founder's Edition? The GTX 1080 / Ti Founder's were more expensive than 3rd party-cooled (way better cooled!) editions - so why get them? Do you think RTX 2080 Ti dual-fan is going to be on par with 3rd party coolers from EVGA or MSI?


If you want a 2080 Ti anytime soon you have three options, none of them good:
1) Buy one from a scalper on ebay for a ridiculous price 
2) Go talk to your local Best Buy/Microcenter/etc and see if they will have any actual physical stock on launch day 
3) Get your F5 key ready and hope that online retailers have some launch day stock that they didn't put up for pre-order which can be bought day 1 (IMO very unlikely) 


I think it's going to be a while before these cards are generally available at $1200 or less.


----------



## ThrashZone

MonarchX said:


> Where can I get RTX 2080 Ti for pre-order to be at my house on the 27th or as soon as possible (in USA)? In most places, it's OUT OF STOCK... I would like a BADASS one - up to $2000 but only if that place takes PayPal credit and I want it ASAP! Otherwise, mainstream is what I can afford.
> 
> 
> BTW, what special about Founder's Edition? The GTX 1080 / Ti Founder's were more expensive than 3rd party-cooled (way better cooled!) editions - so why get them? Do you think RTX 2080 Ti dual-fan is going to be on par with 3rd party coolers from EVGA or MSI?


Hi,
I'd monitor the EVGA 2000-series forums and also get on auto-notify for the version you're interested in buying. You've got to act fast, as others do this too.
NVIDIA knows nothing about cooling; stick with a 3-fan design unless you plan on adding a water block.
http://www.evga.com/


----------



## Artah

EVGA 2080 preorder is open on Amazon.

https://www.amazon.com/gp/product/B07GHVWMBS/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1


----------



## Foxrun

My NVLink is arriving Thursday; nothing on the Ti's yet.


----------



## bastian

NVIDIA has confirmed the delay for the Ti also affects AIB cards. At least it will give some people more time to cancel if they don't like the reviews.


----------



## ClashOfClans

BigMack70 said:


> If you want a 2080 Ti anytime soon you have three options, none of them good:
> 1) Buy one from a scalper on ebay for a ridiculous price
> 2) Go talk to your local Best Buy/Microcenter/etc and see if they will have any actual physical stock on launch day
> 3) Get your F5 key ready and hope that online retailers have some launch day stock that they didn't put up for pre-order which can be bought day 1 (IMO very unlikely)
> 
> 
> I think it's going to be a while before these cards are generally available at $1200 or less.


I'm not sure droves of people are out looking to buy a $1200 card, lol ($1300 with tax). Keeping fewer cards out in the wild simply allows NVIDIA to maintain or even increase their price point by creating demand. At $1200-1300 this is an incredibly niche product. They do this every release, and it obviously works; Apple does it, and all tech companies do it. The card will eventually come down to the $1000 price point, but I think you are right, that will take a while. Hopefully by Christmas they are at least at MSRP.


----------



## Kiros

I don't feel like there's a need to make a thread for a small and silly question, so I'll ask here. Am I the only one who feels like NVIDIA might release a card that's better than the 2080 Ti several months down the road? It just feels odd that NVIDIA showed their hand with the enthusiast and high-end cards right off the bat. With the GTX 1080, everyone expected a Ti version to come later, which it did, and then they quietly released the Titan Xp shortly after. If so, I wonder what they'll name it.


----------



## TahoeDust

Does anyone know the exact time/time zone the NDA lifts? I am ready for reviews!


----------



## MonarchX

Kiros said:


> I don't feel like there's a need to make a thread for a small and silly question so I'll ask here. Does anyone or am I the only one who feels like Nvidia might release a card that's better than the 2080 Ti several months down the road? It just kind of feels odd that Nvidia showed off their hands with the Enthusiast and high end cards right off the bat. Like with the 1080 GTX everyone expected a Ti version to come later which it did and then they silently released the Titan Xp shortly after. If so, wonder what they'll name it.


Yes, they will. It's an educated guess based on historical NVIDIA GPU release data.


----------



## Artah

Kiros said:


> I don't feel like there's a need to make a thread for a small and silly question so I'll ask here. Does anyone or am I the only one who feels like Nvidia might release a card that's better than the 2080 Ti several months down the road? It just kind of feels odd that Nvidia showed off their hands with the Enthusiast and high end cards right off the bat. Like with the 1080 GTX everyone expected a Ti version to come later which it did and then they silently released the Titan Xp shortly after. If so, wonder what they'll name it.





MonarchX said:


> Yes, they will. Its an educated guess based on historical NVidia GPU release data.


I predict that they will, but I don't see it being a significant upgrade, except maybe more memory, up to 16GB. It won't matter for the majority, but if you are reading this website, then it most likely matters to you in your pursuit of performance.


----------



## BigMack70

Artah said:


> I predict that they will but I don't see it as a significant upgrade except maybe more memory up to 16GB. It will not matter for the majority but if you are reading this on this website then most likely it will matter to you with your pursuit of performance.


Assuming they release a Turing card above this, the clear path is that they simply release a full TU102 with the full 4608 cores, 12GB of memory, etc. And it would probably be 5-10% faster depending on clocks.
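A quick back-of-the-envelope check of that 5-10% figure, using the core counts above (a sketch, not a benchmark; real uplift also depends on clocks, power limits, and memory):

```python
# Upper-bound uplift from a full TU102 vs the cut-down 2080 Ti,
# assuming performance scales linearly with CUDA core count.
cut_down_cores = 4352  # RTX 2080 Ti
full_cores = 4608      # full TU102 die

core_ratio = full_cores / cut_down_cores
print(round((core_ratio - 1) * 100, 1))  # extra cores alone: ~5.9%
```

Linear scaling with cores is optimistic, so the extra cores give roughly 6%; any clock bump on top of that would push it toward the 10% end of the bracket.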


----------



## Fiercy

2080 already being shipped with deliveries starting tomorrow... and we have to wait it seems as always...


----------



## Artah

Fiercy said:


> 2080 already being shipped with deliveries starting tomorrow... and we have to wait it seems as always...


Nothing on the 2080 ti though and I ordered the same day it was announced. My email confirmation says 10/22 though.


----------



## Foxrun

Artah said:


> Nothing on the 2080 ti though and I ordered the same day it was announced. My email confirmation says 10/22 though.


Mine says 9/20 and the nvlink shipped today. So maybe tomorrow they will ship?


----------



## Rob w

Foxrun said:


> Mine says 9/20 and the nvlink shipped today. So maybe tomorrow they will ship?


From the NV forum: NVIDIA made a statement that 2080 Ti shipments are delayed up to Sep 27th before shipping; this applies to AIBs as well. 2080s seem to be OK.


----------



## Foxrun

Rob w said:


> Nv forum, NV made statement that 2080ti shipment delayed up to 27th sep before shipping, applies to aib’s As well, 2080’s seem to be ok.


There was a news posting here a few days back that it would not impact preorders.


----------



## bastian

https://twitter.com/NVIDIAGeForce/status/1042160762230202368

Apparently, giving free hardware to streamers is also more important than paying customers.


----------



## raider89

bastian said:


> https://twitter.com/NVIDIAGeForce/status/1042160762230202368
> 
> Apparently, giving free hardware to streamers is also more important than paying customers.


Nothing wrong with that, it's free advertising.


----------



## raider89

bastian said:


> https://twitter.com/NVIDIAGeForce/status/1042160762230202368
> 
> Apparently, giving free hardware to streamers is also more important than paying customers.


Nothing wrong with that; that's like getting mad at reviewers for all the stuff they get first.


----------



## bastian

Reviewers have a job to review hardware. Streamers are not comparable to professional reviewers.


----------



## The L33t

bastian said:


> Reviewers have a job to review hardware. Streamers are not comparable to professional reviewers.


It's "paid" promotion for Nvidia. Very common.

High profile folks get access/dibs to all kinds of stuff, nothing new.


----------



## bastian

It's still bull****.


----------



## snafua

TahoeDust said:


> Does anyone know the exact time/time zone the NDA lifts? I am ready for reviews?


The NDA rumor for reviewers was:
"On September 14, editors will go live with a deep dive into the Turing architecture. On September 17 the reviews for RTX 2080 will be published, and gamers curious about the flagship RTX 2080 Ti will have to wait until September 19."

Sept 14 "Turing Deep Dive" held true.
Sept 17 "RTX2080 Reviews" - That was today, nothing I've seen
Sept 19 "RTX2080TI Reviews" - We shall see.


----------



## raider89

snafua said:


> The NDA rumor for reviewers was:
> "On September 14, editors will go live with a deep dive into the Turing architecture. On September 17 the reviews for RTX 2080 will be published, and gamers curious about the flagship RTX 2080 Ti will have to wait until September 19."
> 
> Sept 14 "Turing Deep Dive" held true.
> Sept 17 "RTX2080 Reviews" - That was today, nothing I've seen
> Sept 19 "RTX2080TI Reviews" - We shall see.



Pretty sure it's the 19th at 9 or 10 AM, for both the 2080 and 2080 Ti.


----------



## raider89

bastian said:


> Its still bull****



The streamer you're referring to still has it sitting on his desk, lol.


----------



## shiokarai

There's nothing wrong with giving free stuff to so-called "influencers" etc., but it seems there were literally so few of these GPUs produced that they needed to push availability back by a week to get the preorders fulfilled, which is kinda funny.


----------



## BigMack70

~40% performance bump from the 1080 Ti.... good enough for me. Can't wait for my card to finally move from "order received" to "shipped"

Looks like we have our first true 4k60 card!

And to all the naysayers at the start of this thread who thought Turing was snake oil: I told you so.


----------



## Krzych04650

Looks decent to me too. Averaging the 4K and 1440p performance gains vs the GTX 1080 from TechPowerUp, 3440x1440 performance should be about 1.75x a GTX 1080, so it will be as if my SLI worked everywhere with the average scaling I got from the games I played/tested so far. That's exactly what I expected.

It will only get faster as new games come out, and things like DLSS are really, really promising (not much belief in ray tracing, though). OC potential looks the same as the 1080 too.

Maybe for this price I would like it to be a bit better, but like I said, I'd rather pay the higher price right now than wait almost another year for the x80 Ti model, as it usually comes 9 to 12 months after the x80.

Although I am not very desperate for a new GPU right now, so I will wait for AIO models to appear in shops. I want this card to arrive before the end of November, so they have some time to sort out availability and release more models, because the current ones are far from impressive.
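The estimate above can be sketched numerically: take the measured gains at 1440p and 4K and interpolate for the ultrawide pixel count. The gain figures here are placeholders chosen to land near 1.75x, not TechPowerUp's actual numbers:

```python
# Interpolate a relative-performance gain for 3440x1440 between the
# 1440p and 4K data points, weighted by pixel count.
def estimate_gain(gain_1440p, gain_4k, target_px,
                  px_1440p=2560 * 1440, px_4k=3840 * 2160):
    """Linear interpolation of gain vs resolution (in pixels)."""
    t = (target_px - px_1440p) / (px_4k - px_1440p)
    return gain_1440p + t * (gain_4k - gain_1440p)

ultrawide = 3440 * 1440  # sits between 1440p and 4K in pixel count
print(round(estimate_gain(1.70, 1.80, ultrawide), 2))  # ~1.73
```

The post's simple average of the two gains corresponds to t = 0.5; pixel-weighted interpolation puts 3440x1440 closer to the 1440p end (t ≈ 0.28), so the true figure likely sits between the two estimates.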


----------



## CallsignVega

40% is slightly less than I was expecting. It looks to be right around or just slightly faster than a Titan V.


----------



## Artah

CallsignVega said:


> 40% is slightly less than I was expecting. It looks to be right around or just slightly faster than a Titan V.


Yes, but with a much smaller price tag.


----------



## rush2049

I am wondering how much driver optimization carried from Maxwell into Pascal, plus two years of refinement, helps the 1080 and 1080 Ti.

Turing is a radical architecture change, so the drivers might see great improvements with each release.

I am still getting my 2080 Ti, as it's a huge upgrade over my current Titan X (Maxwell).
I'm also looking to snag a second 2080 Ti, as I want to try out the new NVLink-based multi-GPU scaling when new modes get released.


----------



## xer0h0ur

Hold the phone. Someone said they aren't making Windows 7 drivers for the 2000 series? That is an actual deal breaker for me. I might be forced to sell or return my card then. If this is true, Nvidia you're ******ed.


----------



## Fiercy

xer0h0ur said:


> Hold the phone. Someone said they aren't making Windows 7 drivers for the 2000 series? That is an actual deal breaker for me. I might be forced to sell or return my card then. If this is true, Nvidia you're ******ed.


I mean, I could say the same about using Windows 7 in 2018... Get out and use new software, grandpa! You know there are stripped-down versions of Windows 10 and software to bring back the classic Start menu, so frankly I've no idea why you're still bothering with 7.


----------



## Foxrun

Damn I wish we had some nvlink benches!


----------



## xer0h0ur

Fiercy said:


> I mean I could say the same about using Windows 7 in 2018... Get out and use new software grandpa! You know there are stripped out versions of Windows 10 and software to return classic Start menu. So frankly I've no idea why are you still bothering with 7.


You can go ahead and swallow Microsoft's garbage if you like. So long as Windows 10 keeps routinely breaking on updates I will never use that operating system. I need system stability and Windows 10 is anything but consistent. There is a reason the 7 user base is still massive and it has to do with software compatibility and OS stability.


----------



## bastian

Remember when people made fun of those who bought brand new 1080 Ti's when they came out because of price?

Well guess what, now there are going to be a lot of people who are priced out of the 2080 Ti and want a 1080 Ti to buy for used, which means people who have a 1080 Ti can sell their GPU for a great price and get a brand new 2080 Ti for a great price.

Karma, *****es!


----------



## Fiercy

xer0h0ur said:


> You can go ahead and swallow Microsoft's garbage if you like. So long as Windows 10 keeps routinely breaking on updates I will never use that operating system. I need system stability and Windows 10 is anything but consistent. There is a reason the 7 user base is still massive and it has to do with software compatibility and OS stability.


I guess I must be the only user on the planet who has never seen any instability from Windows 10, and I mean, Windows 7 is made by Microsoft too... so I see no point in the swallowing-garbage reference.

I think you just don't like change much (it's an age problem; people get stuck in the ways of the past). And why would NVIDIA keep making drivers for an OS that Microsoft is dropping support for? It makes zero sense to me.

Besides, people keeping an old OS makes it tougher for developers as well, because they have to keep multiple OSes in mind when they develop new features.

In the end, almost no one wins.


----------



## xer0h0ur

Fiercy said:


> I guess I must be the only user on the planet who has never seen any instability from Windows 10 and I mean Windows 7 is made by Microsoft too... so I see no point in swallowing garbage reference.
> 
> I think you just don't like change much (it's an age problem people get stuck in the ways of the past). But why would Nvidia keep making drivers for OS that Microsoft is drooping support for makes 0 sense to me.
> 
> Besides people keeping old OS is making it tougher for developers as well because they have to keep multiple OS in minds when they develop new features.
> 
> In the end almost no one wins.


You must have been dropped on your head as a baby. The gaming industry HAS TO develop for DX11 and Windows 7 gamers, no matter how much you dislike it or disagree with it. They make up the 2nd-largest segment in the Windows ecosystem. It's also corporate suicide for a hardware manufacturer to completely alienate that user base by not making drivers for it. If they in fact don't make a Windows 7 driver, then they are flat out cutting their own profits off at the knees by eliminating a massive number of users as potential customers. Do you even understand business?


----------



## Dreamliner

BOOM. Reviews are out and I was exactly correct. Traditional game performance of 20-series is exactly the same boost over 10-series as 10-series was over 9-series.

2080 = 1080Ti
2080Ti = ~1.3x 1080Ti

Just by looking at that single graph from NVIDIA at GTC Japan 2018, I could easily see that the spacing of the 20-series over the 10-series was nearly identical to the spacing of the 10-series over the 9-series. I knew we were gearing up for the same speed boost, and I was right.

I also knew the idea that NVIDIA "shuffled the lineup" and the 2080Ti was the new Titan card and the 2080 was going to wipe the floor with a 1080Ti was wishful thinking at best.

I'm sure DLSS will kick the 20-series up a notch, but for every title in your Steam library today, it's ~30% gains for $1200.

DLSS and RT will be awesome in 2020.


----------



## CallsignVega

39%


----------



## xer0h0ur

Dreamliner said:


> BOOM. Reviews are out and I was exactly correct. Traditional game performance of 20-series is exactly the same boost over 10-series as 10-series was over 9-series.
> 
> 2080 = 1080Ti
> 2080Ti = ~1.3x 1080Ti
> 
> Just by looking at that single graph from Nvidia at GTC Japan 2018 I could easily see the spacing of 20-series over 10-series was near identical spacing as 10-series over 9-series. I knew we were gearing up for the same speed boost and I was right.
> 
> I also knew the idea that Nvidia “shuffled the lineup” and the 2080Ti was the new Titan card and the 2080 was going to wipe the floor with a 1080Ti was wishful thinking at best.
> 
> I’m sure DLSS will kick the 20-series up a notch, but for every title in your Steam library today, ~30% gains for $1200.
> 
> DLSS and RT will be awesome in 2020.


And it looks like a 75% increase or more for someone like me going from a 1080 to a 2080 Ti, which is exactly what I was looking for. This card, assuming I actually get a Windows 7 driver for it, will absolutely do what I expected. Now I want to see the first reviews on how well they maintain overclocks with full-cover waterblocks.
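The ~75% figure checks out if you chain the two generational ratios (a rough sketch; the multipliers below are approximate review figures, and real gains vary by title):

```python
# Relative performance multipliers compound multiplicatively:
# 1080 -> 1080 Ti (~1.30x), then 1080 Ti -> 2080 Ti (~1.35x).
def chained_gain(*ratios):
    total = 1.0
    for r in ratios:
        total *= r
    return total

total = chained_gain(1.30, 1.35)
print(round((total - 1) * 100, 1))  # ~75% faster than a GTX 1080
```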


----------



## kx11

My 2080 Ti with a water block pre-installed should ship by the 27th.


----------



## Glerox

So I'm comfortable with my 2080 Ti preorder. At least with the 2080 Ti you get a real and significant increase, and nothing will beat it for at least a year, except maybe a Titan RTX at an even higher price.
Can't wait to try the OC under water.


However, the 2080 is really a bummer IMO. It's equal to a 1080 Ti but $100 more... unless you play only games with DLSS, or you wanna play with ray tracing at 1080p, this card has no benefit... it sucks, really. The 1080 was 20% faster than the 980 Ti... we really need more competition, because this is a sad day for PC gaming.


----------



## CallsignVega

Ya the 2080 is pointless. Overall this gen is pretty disappointing. I'm basically at the same performance level as I've been with my V for 10 months.


----------



## xer0h0ur

Glerox said:


> So I'm comfortable with my 2080TI preorder. At least for the 2080TI, you get a real and significant increase and nothing will beat it for at least a year except maybe a Titan RTX but even more expensive.
> Can't wait to try the OC under water.
> 
> However, the 2080 is really a bummer IMO. It's equal to a 1080TI but 100$ more... unless you play only games with DLSS or you wanna play ray-tracing in 1080p, this card has no benefit... it sucks really. The 1080 was 20% faster than the 980TI... we really need more competition because this is a sad day for PC gaming


I kinda expected a very minimal difference between a 2080 and a 1080 Ti when I saw the CUDA core count. That card is a disappointment.


----------



## Dreamliner

CallsignVega said:


> 39%


That must be a synthetic bench. If you look at actual game benches, you’ll see ~30%.



xer0h0ur said:


> And it looks like a 75% increase or more from someone like me going 1080 to 2080 Ti. Which is exactly what I was looking for. This card, assuming I actually get a Windows 7 driver for it, will absolutely do what I expected. Now I want to see the first reviews on how well they maintain overclocks with full cover waterblocks.


I’m sure that’ll make a huge difference in temps. Even the EVGA cooler is a huge improvement over the FE cooler. I’ll admit, I wasn’t expecting that and figured this generation would cool pretty well on FE with the redesign. Looks like the FE heatsink is still pretty restrictive.


----------



## raider89

Dreamliner said:


> That must be a synthetic bench. If you look at actual game benches, you’ll see ~30%.


35-40% in the benches I've seen, which I believe were OC'd. You will see 30% at stock, but we already knew these were made to overclock, especially with the new software that makes it pretty easy, let alone the fact that they are binning all of these, so we know they will overclock well. Totally worth it; upcoming and current titles will soon have DLSS, and with that it will jump significantly. If people still aren't pleased after these benchmarks, then this is not your series. You're paying $1300 for what would be the Titan card in this initial lineup. I was very pleased with the reviews today.


----------



## BigMack70

Dreamliner said:


> That must be a synthetic bench. If you look at actual game benches, you’ll see ~30%.


That post is all games, no synthetics. At stock. It's 39% faster. 

Of course, it looks like it's anywhere from 25% to 50% depending on the title, but TPU has one of the broadest test suites and I find their numbers generally the most reliable as a result. Some reviewers with smaller test suites have a number closer to 30%.
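For context on how a broad suite turns into one headline number: reviewers typically summarize per-title performance ratios with an average, and a geometric mean handles ratios more evenly than an arithmetic one. A minimal sketch with made-up per-title ratios (not TPU's data):

```python
import math

# Geometric mean of per-title performance ratios (2080 Ti vs 1080 Ti).
def geomean(ratios):
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

per_title = [1.25, 1.32, 1.39, 1.45, 1.50]  # hypothetical results
print(round(geomean(per_title), 3))
```

With per-title spreads as wide as 25-50%, the choice of mean and the mix of titles in the suite can shift the headline figure by a point or two, which is one reason different reviewers land on different numbers.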


----------



## Ford8484

Artah said:


> yes but with a lot smaller price tag.


Still really expensive... I hope the 3080 is more substantial than the 2080 was over the 1080 Ti, which is a joke IMO. Unless DLSS is as good as NVIDIA claims and is actually implemented well, I see the 2080 as a bust. The 2080 Ti isn't bad, but hopefully there are better gains next generation with the **80 series... I'm thinking there will be once next-gen consoles launch and 4K becomes more mainstream.


----------



## Dreamliner

raider89 said:


> You're paying 1300 for what would be the titan card in this initial line up. Was very pleased with the reviews today.


This is not correct. The “lineup shuffle” was a bad theory after the announcement, and it’s a provable fallacy today. In fact, the 2080 Ti gets BEATEN by the Titan in several games. The reviews are not showing 40% 4K game benchmark gains over a 1080 Ti. Sorry.

(If you’re saying a 2080 Ti is roughly on par with last-gen Titan gaming performance, just like the 2080 vs the 1080 Ti, then I agree.)

I’ve got no doubt DLSS & RT are the future, but I’m not paying for tech I can’t use.


----------



## xer0h0ur

Like I said before, if I wasn't sitting here with a 1080 in my rig, then I would be livid about the price jump from the 1080 Ti to the 2080 Ti. Since I am instead going from a 1080 to a 2080 Ti, it's a more palatable spend for me with that leap in power. I can easily see why this launch is leaving a bad taste in 1080 Ti owners' mouths.


----------



## Ford8484

I'm hoping for RE2: Remake to support DLSS. If not, it will still be incredible with this card... I need a time machine


----------



## xer0h0ur

Dreamliner said:


> This is not correct. The “lineup shuffle” was a bad theory after the announcement and it’s a provable fallacy today. In fact, the 2080Ti gets BEATEN by the Titan in several games. The reviews are not showing 40% 4K game benchmark gains over a 1080Ti. Sorry.
> 
> (If you’re saying a 2080Ti is roughly on par with last gen Titan gaming performance, just like 2080 vs 1080Ti, then I agree)
> 
> I’ve got no doubt DLSS & RT are the future, but I’m not paying for tech I can’t use.


Oh no, a $3000 card beats a $1200 card in minimal use cases. Call the press.


----------



## Dreamliner

xer0h0ur said:


> Like I said before, if I wasn't sitting here with a 1080 in my rig then I would be livid about the price point going from 1080 Ti to 2080 Ti. Since I am instead going from a 1080 to a 2080 Ti, it's a more palatable spend for me with that leap in power. I can easily see why this launch is leaving a bad taste in 1080 Ti owners' mouths.


It has nothing to do with me being a 1080Ti owner. In fact, in another thread I mentioned I had sold all my Pascal cards, went back to a 750Ti and preordered the 2080Ti. When I realized the gains were only going to be ~30%, I rebought another 1080Ti. 

You can buy a new 1080Ti for $650 (or a used one under $450) and still think a $1200 2080Ti is worth it? No way.

I like what I see, but the pricing is ridiculous. The 2080Ti is a $799 MSRP card at best.


----------



## themadhatterxxx

xer0h0ur said:


> Like I said before, if I wasn't sitting here with a 1080 in my rig then I would be livid about the price point going from 1080 Ti to 2080 Ti. Since I am instead going from a 1080 to a 2080 Ti, it's a more palatable spend for me with that leap in power. I can easily see why this launch is leaving a bad taste in 1080 Ti owners' mouths.


Enjoy that 2x FPS boost at 1440p and higher, I'm in the same boat.


----------



## Dreamliner

Which version of the card did you guys get? I noticed EVGA went to that clear plastic, and being able to see the dust build up would drive me nutty. According to Gamers Nexus though, it seems to still cool much better than the FE version. I wonder if the vapor chamber helps or hurts it. Zero PCB airflow has to be bad, right??


----------



## raider89

Dreamliner said:


> This is not correct. The “lineup shuffle” was a bad theory after the announcement and it’s a provable fallacy today. In fact, the 2080Ti gets BEATEN by the Titan in several games. The reviews are not showing 40% 4K game benchmark gains over a 1080Ti. Sorry.
> 
> (If you’re saying a 2080Ti is roughly on par with last gen Titan gaming performance, just like 2080 vs 1080Ti, then I agree)
> 
> I’ve got no doubt DLSS & RT are the future, but I’m not paying for tech I can’t use.



1: The lineup shuffle isn't a theory.
2: 2080ti > Titan V (benchmarks show this)
3: 38% to be exact.

If you don't want to spend money on an obvious upgrade for a price that's roughly the same, fine; the 2080ti is on par with the performance increase you would expect over the previous Titan. Let alone what you already know to expect with DLSS, which is right around the corner. 

I understand the big focus was RT and DLSS and they're not usable right now, but the increased performance is still there as it sits, and it is what we would expect when comparing an old series to a new one. Let it also be known that these percentages are not even overclocked, on binned cores built to overclock nicely.

I mean, if you have a 1080ti and you're happy with what you have then sure, don't upgrade. I'm not cancelling my preorder, that's for sure  People wait 2 to 3 years to upgrade video cards anyway, so why not wait a little bit longer. This gives me all the more reason to go buy that huge 120hz 4k monitor for $1300


----------



## raider89

Dreamliner said:


> Which version of the card did you guys get? I noticed EVGA went to that clear plastic, and being able to see the dust build up would drive me nutty. According to Gamers Nexus though, it seems to still cool much better than the FE version. I wonder if the vapor chamber helps or hurts it. Zero PCB airflow has to be bad, right??


Founders Edition. I'm doing my first full custom loop and these seem to be the most compatible version.


----------



## xer0h0ur

Nvidia FE, which you already know I am dropping an EKWB block and backplate on immediately, so the air cooler is a non-factor in my use case. That thing will only be used for whatever amount of time it takes the EK parts to arrive. It should provide some entertainment for me at least to run my own benchmarks: FE, FE OCed, EK stock, EK OCed, then after all of that using Scanner to see what "max stable" OC it comes up with versus my own manual overclock. 

After all, half the fun of getting new silicon is testing the limits.


----------



## sblantipodi

Do you remember the difference between the 980Ti and the 1080Ti? Almost 80% more performance with almost double the VRAM at the same price. Now from 1080Ti to 2080Ti we get a 60% price increase, 40% more performance, and the same VRAM quantity. Big fail.
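The generational arithmetic being argued here is easy to sanity-check. A quick sketch, using the rough launch prices and gain figures quoted in this thread (approximations from the discussion, not official benchmark data):

```python
# Rough flagship-to-flagship comparison using the approximate
# figures quoted in this thread (not official benchmark data).
gens = {
    # jump: (launch price USD, perf gain vs predecessor, VRAM GB of new card)
    "980 Ti -> 1080 Ti": (699, 0.80, 11),
    "1080 Ti -> 2080 Ti": (1199, 0.40, 11),
}

for jump, (price, gain, vram) in gens.items():
    # performance points gained per $100 of launch price
    pts_per_100 = gain * 100 / price * 100
    print(f"{jump}: +{gain:.0%} at ${price} ({vram}GB) "
          f"= {pts_per_100:.1f} perf points per $100")
```

By this crude metric the Pascal jump delivered roughly three times the performance per launch dollar of the Turing jump, which is the core of the complaint above.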


----------



## raider89

sblantipodi said:


> Do you remember the difference between the 980Ti and the 1080Ti? Almost 80% more performance with almost double the VRAM at the same price. Now from 1080Ti to 2080Ti we get a 60% price increase, 40% more performance, and the same VRAM quantity. Big fail.


Compare it once DLSS is here, Big WIN


----------



## Dreamliner

raider89 said:


> Founders Edition. I'm doing my first full custom loop and these seem to be the most compatible version.


Money to burn, eh? 

Jk, custom loops look super awesome. I’m just too chicken to run water. 

Look, all I’m trying to say is Turing is not worth $1200. Pascal offered the same percentage gains over Maxwell at $699, only $50 more MSRP. 

Turing is $300 more MSRP and nobody is even selling at that price. Not worth it. Actual selling prices are almost 76% higher for ~30-35% more performance. You can’t square that math.
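That math can be sanity-checked in a few lines. The figures below are the rough numbers quoted in this thread (a ~$699 1080 Ti MSRP against typical 2080 Ti street pricing); they are assumptions from the discussion, not market data:

```python
# Price-vs-performance check using the thread's rough figures (assumed).
old_price = 699.0     # 1080 Ti launch MSRP
new_price = 1229.0    # typical 2080 Ti street price at launch (assumed)
perf_gain = 0.33      # ~30-35% faster in most launch reviews

premium = new_price / old_price - 1                    # fractional price increase
value_ratio = (1 + perf_gain) * old_price / new_price  # perf per dollar vs 1080 Ti

print(f"price premium: {premium:.0%}")         # ~76% more expensive
print(f"perf per dollar: {value_ratio:.2f}x")  # < 1.0 means worse value
```

On these assumptions, performance per dollar actually drops to about 0.76x of the 1080 Ti's, which is why the raw performance gain alone doesn't settle the value argument.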



raider89 said:


> Compare it once DLSS is here, Big WIN


DLSS will be awesome. It will be nice for you when those games start rolling out. My backlog will keep me busy long into a 2080Ti replacement.


----------



## Dreamliner

sblantipodi said:


> Do you remember the difference between the 980Ti and the 1080Ti? Almost 80% more performance with almost double the VRAM at the same price. Now from 1080Ti to 2080Ti we get a 60% price increase, 40% more performance, and the same VRAM quantity. Big fail.


Shh. Your logic is showing.


----------



## raider89

Dreamliner said:


> Money to burn, eh?
> 
> Jk, custom loops look super awesome. I’m just too chicken to run water.
> 
> Look, all I’m trying to say is Turing is not worth $1200. Pascal offered the same percentage gains over Maxwell at $699, only $50 more MSRP.
> 
> Turing is $300 more MSRP and nobody is even selling at that price. Not worth it. Actual selling prices are almost 76% higher for ~30-35% more performance. You can’t square that math.
> 
> 
> 
> DLSS will be awesome. It will be nice for you when those games start rolling out. My backlog will keep me busy long into a 2080Ti replacement.



Yes, money to burn lol. The custom loop was only $780 from EK. Liquid leaking was the reason I always backed out of doing it before, but I'm more into overclocking now, so I'm biting the bullet.

You don't buy the whole "2080ti is a Titan" thing like me and others do, hence the reason we look at people like they're crazy for saying it's overpriced when it's the same price as the previous Titan. If I looked at the 2080ti vs the 1080ti the way you do, then sure, I could see $300 more being a bit insane. But I would still buy it lmao.

The DLSS and RT hardware is there, it's on the cards. That's when you can truly compare price to performance. Right now you're basically comparing a beta to the finished product.


----------



## chakku

In for the crushed dreams of the people expecting 50% over the 1080 Ti.


----------



## xer0h0ur

chakku said:


> In for the crushed dreams of the people expecting 50% over the 1080 Ti.


Not really. 38% looks alright thus far and we have yet to see any reviews with full cover waterblocks to see what sustained overclocked 2080 Ti's can put out.


----------



## Artah

Looking pretty good: finally over 60fps at 4K in Monster Hunter World on a single card. SLI will no longer drain my pockets; I was already scraping the bottom of the barrel as it is.

https://www.ign.com/articles/2018/0...080-ti-founders-edition-review-and-benchmarks


----------



## raider89

chakku said:


> In for the crushed dreams of the people expecting 50% over the 1080 Ti.


You know with DLSS the 2080ti goes over the 50% mark.


----------



## chakku

xer0h0ur said:


> Not really. 38% looks alright thus far and we have yet to see any reviews with full cover waterblocks to see what sustained overclocked 2080 Ti's can put out.


TPU's result is a bit skewed by their odd Wolfenstein result. I'd like to see the numbers without that outlier; they would most likely align with the rest of the reviews on the net.
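The outlier effect described here is easy to demonstrate with made-up numbers. The per-game gains below are purely illustrative (not TPU's actual data); one anomalous title plays the role of the odd Wolfenstein result:

```python
import statistics

# Illustrative per-game gains of a new card vs old, as fractions.
# The last entry is the single anomalous title.
gains = [0.28, 0.30, 0.31, 0.33, 0.35, 0.85]

mean_all = statistics.mean(gains)
mean_trimmed = statistics.mean(gains[:-1])
# Geometric mean of relative performance, expressed back as a gain.
geo_mean = statistics.geometric_mean([1 + g for g in gains]) - 1

print(f"mean with outlier:    {mean_all:.1%}")
print(f"mean without outlier: {mean_trimmed:.1%}")
print(f"geometric mean:       {geo_mean:.1%}")
```

With these made-up numbers, one outlier lifts the headline average from ~31% to ~40%, roughly the gap between TPU's figure and the other launch reviews.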


----------



## chakku

raider89 said:


> You know with DLSS the 2080ti goes over the 50% mark.


Hahahahaha. That's a flawed argument, you could say the 2080 Ti is any percentage faster than the 1080 Ti if you make it an apples to oranges comparison. Benchmark all those games on Ultra with the 1080 Ti and Low settings with the 2080 Ti, you'll be amazed at the performance gains.










When NVIDIA cheat benchmarks in their drivers by forcing some graphical settings to be lower for a higher score it's a scandal in the industry. When they make a feature out of it it's celebrated. Wait what?


----------



## themadhatterxxx

Dreamliner said:


> Which version of the card did you guys get? I noticed EVGA went to that clear plastic, and being able to see the dust build up would drive me nutty. According to Gamers Nexus though, it seems to still cool much better than the FE version. I wonder if the vapor chamber helps or hurts it. Zero PCB airflow has to be bad, right??


Yeah, I watched his video and apparently the FE cooler is not that great. I bought the MSI X Gaming one... I think it is one of the only non-reference boards? One 6-pin and two 8-pins


----------



## Dreamliner

xer0h0ur said:


> Not really. 38% looks alright thus far and we have yet to see any reviews with full cover waterblocks to see what sustained overclocked 2080 Ti's can put out.


TechPowerUp is the only site I’m seeing showing gains that high. Most are much closer to 30%. At this price point, that difference doesn’t matter much as it isn’t 50%+ across the board. 

I still don’t get why we’re excited this gen beats last gen... isn’t that expected?

Since we’re quoting from TPU:



> Similar leaps in technology in the past did not trigger such hikes in graphics card prices over generation. We hence feel that in their current form, the RTX 20-series is overpriced by at least 20% across the board, and could deter not just bleeding-edge enthusiasts, but also people upgrading from older generations such as "Maxwell." For games that don't use RTX, the generational performance gains are significant, but not as big as those between "Maxwell" and "Pascal."


I personally hope this little price experiment of Nvidia fails miserably. It’d really suck if they keep upping the price as if I’m buying a 1080TiTITi and not its replacement. Imagine buying a 2180Ti for $1399 and justifying it by saying “but those gainz tho.”



themadhatterxxx said:


> Yeah, I watched his video and apparently the FE cooler is not that great. I bought the MSI X Gaming one... I think it is one of the only non-reference boards? One 6-pin and two 8-pins


Holy power connectors, Batman! MSI arguably made the best board of the 10-series. It’ll be interesting to see how the 20-series shakes out.

I also read that the FE cards don’t do auto fan-stop either...


----------



## xer0h0ur

themadhatterxxx said:


> Yeah, I watched his video and apparently the FE cooler is not that great. I bought the MSI X Gaming one... I think it is one of the only non-reference boards? One 6-pin and two 8-pins


Probably will be even worse once those RT and Tensor cores are in use. Another reason to slap a waterblock on it.


----------



## mpgxsvcd

I got an email from Amazon saying that the 2080 TI shipments were likely delayed. I ordered the "GIGABYTE GeForce RTX 2080 Ti Gaming OC 11GB Graphic Cards GV-N208TGAMING OC-11GC" as soon as the preorders went up on Amazon on 8-20-2018.


----------



## Dreamliner

mpgxsvcd said:


> I got an email from Amazon saying that the 2080 TI shipments were likely delayed. I ordered the "GIGABYTE GeForce RTX 2080 Ti Gaming OC 11GB Graphic Cards GV-N208TGAMING OC-11GC" as soon as the preorders went up on Amazon on 8-20-2018.


Yeah. It looks like all 2080 Ti cards got delayed to 9/27.


----------



## themadhatterxxx

Dreamliner said:


> I personally hope this little price experiment of Nvidia fails miserably. It’d really suck if they keep upping the price as if I’m buying a 1080TiTITi and not its replacement. Imagine buying a 2180Ti for $1399 and justifying it by saying “but those gainz tho.”


There's only one way that may happen and that's if AMD's 7nm gpu can go toe to toe with NVidia's flagship. I don't see this happening until 2020, and I would be very surprised if NVidia breaks their 2 year refresh cycle to release something next year. The only other way it happens is if the Pascal overstock clears out 6 months from now and Nvidia sees slow Turing sales for 2 quarters.


----------



## raider89

chakku said:


> Hahahahaha. That's a flawed argument, you could say the 2080 Ti is any percentage faster than the 1080 Ti if you make it an apples to oranges comparison. Benchmark all those games on Ultra with the 1080 Ti and Low settings with the 2080 Ti, you'll be amazed at the performance gains.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> When NVIDIA cheat benchmarks in their drivers by forcing some graphical settings to be lower for a higher score it's a scandal in the industry. When they make a feature out of it it's celebrated. Wait what?


It's not flawed at all. There are demo benchmarks out, and they show a drastic improvement, and it's not even finished yet. The proof is all there. Sure, you can post screenshots of some parts being blurrier than others. But it's not finished, just a demo. It's like the Tomb Raider RTX demo all over again. They are not cheating benchmarks lol, that's something an AMD fan would say.


----------



## raider89

Dreamliner said:


> TechPowerUp is the only site I’m seeing showing gains that high. Most are much closer to 30%. At this price point, that difference doesn’t matter much as it isn’t 50%+ across the board.
> 
> I still don’t get why we’re excited this gen beats last gen... isn’t that expected?
> 
> Since we’re quoting from TPU:
> 
> 
> 
> I personally hope this little price experiment of Nvidia fails miserably. It’d really suck if they keep upping the price as if I’m buying a 1080TiTITi and not its replacement. Imagine buying a 2180Ti for $1399 and justifying it by saying “but those gainz tho.”


The hikes in prices over previous gen? They're practically the same MSRP?


----------



## chakku

raider89 said:


> It's not flawed at all. There are demo benchmarks out, and they show a drastic improvement, and it's not even finished yet. The proof is all there. Sure, you can post screenshots of some parts being blurrier than others. But it's not finished, just a demo. It's like the Tomb Raider RTX demo all over again. They are not cheating benchmarks lol, that's something an AMD fan would say.


I'm not saying they're cheating benchmarks; the way people are comparing DLSS results to native resolution results, however, is in the same basket. If you're upscaling with DLSS then you're not at the native resolution, so no matter what it 1. won't be a fair comparison and 2. will never look as good as native. This gives the equivalent effect of NVIDIA (or AMD, because they have done it and been caught before too) using drivers to tweak things in benchmarks to get better scores, thus making comparisons with other results in said benchmark unfair because something has been changed.

When we're quoting performance numbers, DLSS should never be part of the equation unless you're explicitly testing DLSS on vs. off on the RTX series cards, not when they're being compared to GTX cards. So yeah, overall the 2080 Ti is ~30% faster, not 50% or whatever you'll get with DLSS.
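The "not at native resolution" objection is, at bottom, pixel arithmetic. A minimal sketch, assuming (as was widely reported at the time, and disputed in this thread) that DLSS shades a reduced internal resolution and upscales, with shading cost taken as roughly proportional to pixel count:

```python
# Pixel-count sketch of the upscaling objection: if DLSS shades a
# reduced internal resolution (an assumption here) and upscales to 4K,
# the shading workload falls roughly with the pixel count.
def pixel_count(width: int, height: int) -> int:
    return width * height

native_4k = pixel_count(3840, 2160)
internal_1440p = pixel_count(2560, 1440)

ratio = internal_1440p / native_4k
print(f"1440p is {ratio:.0%} of the pixels of native 4K")
# An fps uplift from upscaling therefore reflects reduced shading
# work, not raw GPU speed - the apples-to-oranges point above.
```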


----------



## raider89

chakku said:


> I'm not saying they're cheating benchmarks; the way people are comparing DLSS results to native resolution results, however, is in the same basket. If you're upscaling with DLSS then you're not at the native resolution, so no matter what it 1. won't be a fair comparison and 2. will never look as good as native.
> 
> When we're quoting performance numbers, DLSS should never be part of the equation unless you're explicitly testing DLSS on vs. off on the RTX series cards, not when they're being compared to GTX cards. So yeah, overall the 2080 Ti is ~30% faster, not 50% or whatever you'll get with DLSS.


I don't buy the whole "won't look as good as native" thing; we will just have to wait and see.

DLSS most certainly should be part of the testing. It's one of the main features of the cards and it's made to provide improvement. And it's 38-40%, not 30%. These results are not even overclocked.


----------



## chakku

raider89 said:


> DLSS most certainly should be part of the testing. It's one of the main features of the cards and it's made to provide improvement. And it's 38-40%, not 30%. These results are not even overclocked.


It's a bit ironic that you call me a fanboy while you consistently quote the only publication that got 38% (and now you're making up 40%?) gains over the 1080 Ti. I guess whoever gets the best results in their review is the best review site, right? I wonder who it will be for the 3000 series..

Nevertheless I'm happy with the performance of the 2080 Ti, my 1080 Ti will last a whole lot longer now.


----------



## Glerox

chakku said:


> If you're upscaling with DLSS then you're not at the native resolution, so no matter what it 1. won't be a fair comparison and 2. will never look as good as native.


You've got DLSS wrong. It does NOT lower the resolution.

DLSS does not upscale from a LOWER resolution. It upscales from the SAME NATIVE resolution by mimicking supersampling (SSAA) with some AI shenanigans too complex for me.
It's meant to replace the current TAA standard.

4K+TAA is compared to 4K+DLSS (not 1440p or 1080p), and that is an apples-to-apples comparison.

https://wccftech.com/nvidia-dlss-explained-nvidia-ngx/


----------



## chakku

Glerox said:


> You've got DLSS wrong. It does NOT lower the resolution.
> 
> DLSS does not upscale from a LOWER resolution. It upscales from the SAME NATIVE resolution by mimicking supersampling (SSAA) with some AI shenanigans too complex for me.
> It's meant to replace the current TAA standard.
> 
> 4K+TAA is compared to 4K+DLSS (not 1440p or 1080p), and that is an apples-to-apples comparison.
> 
> https://wccftech.com/nvidia-dlss-explained-nvidia-ngx/


Right, I misunderstood what DLSS was doing then. So it's even less of a comparison to the previous generation, because you're relying on DLSS being faster than TAA to make up these larger performance differences between the two. Imagine a GPU comparison where one card is running AA and one isn't. Is that a fair comparison? We have already seen that DLSS actually looks worse than TAA, and obviously more intensive AA methods tend to look better.

Enjoy that handful of games that actually use DLSS, I guess, but that's still not a metric to compare the 2080 Ti to the 1080 Ti with, and thankfully the review publications agree and published real performance figures.


----------



## raider89

chakku said:


> It's a bit ironic that you call me a fanboy while you consistently quote the only publication that got 38% (and now you're making up 40%?) gains over the 1080 Ti. I guess whoever gets the best results in their review is the best review site, right? I wonder who it will be for the 3000 series..
> 
> Nevertheless I'm happy with the performance of the 2080 Ti, my 1080 Ti will last a whole lot longer now.



I never called anyone a fanboy, I simply compared a statement to what a fanboy might say. Yes, I'm throwing around the 38% from the image posted here recently, and I use 35-40% as a more realistic range over the multiple tests I have seen myself, aside from here. 

Yes, I am happy with it too; it justifies my large 120hz 4k monitor purchase to take advantage of its power.


----------



## chakku

raider89 said:


> I never called anyone a fanboy, I simply compared a statement to what a fanboy might say. Yes, I'm throwing around the 38% from the image posted here recently, and I use 35-40% as a more realistic range over the multiple tests I have seen myself, aside from here.
> 
> Yes, I am happy with it too; it justifies my large 120hz 4k monitor purchase to take advantage of its power.


120Hz 4K monitor and a GPU that can't manage 120Hz 4K. I hope you're at least buying two?


----------



## Dreamliner

chakku said:


> 120Hz 4K monitor and a GPU that can't manage 120Hz 4K. I hope you're at least buying two?


2 monitors. Once DLSS hits he'll be getting 4K@120 on both! Once waterblocked and overclocked, I suspect sustained 14 Gigarays. Easy peasy.


----------



## tconroy135

It is a little sad that NVIDIA couldn't even release a benchmark program that includes both Ray Tracing and DLSS; it would have mitigated some of these bad reviews. I wonder if some heads are rolling in their marketing department.


----------



## raider89

chakku said:


> 120Hz 4K monitor and a GPU that can't manage 120Hz 4K. I hope you're at least buying two?



Multiple titles I've seen were able to reach 120+ fps in the tests. I can always lower some settings until DLSS is a go. But I will get 2 eventually, depending on how well NVLink turns out. I will wait for now lol.


----------



## chakku

tconroy135 said:


> It is a little sad that NVIDIA couldn't even release a benchmark program that includes both Ray Tracing and DLSS; it would have mitigated some of these bad reviews. I wonder if some heads are rolling in their marketing department.


If you want NVIDIA to give you benchmark numbers instead of third party reviewers you can always look at the review guide that was leaked with their performance metrics on it, however 'accurate' they may be compared to the reviews out today.


----------



## xer0h0ur

Everything I have seen so far in overclocking results has shown 2080 Ti's boosting to around 2100 MHz but then stepping down as low as 1800 MHz as thermals climb, thanks to the passive nature of Nvidia's fan curve on the FE. These cards are going to be nuts while sustaining higher clocks under full cover waterblocks.


----------



## Rob w

xer0h0ur said:


> Everything I have seen so far in overclocking results has shown 2080 Ti's boosting to around 2100 MHz but then stepping down as low as 1800 MHz as thermals climb, thanks to the passive nature of Nvidia's fan curve on the FE. These cards are going to be nuts while sustaining higher clocks under full cover waterblocks.


I was having second thoughts on a water block, but I am really glad I ordered one now. It looks promising to put it on water (especially chilled) lol, see what it can do then?


----------



## tconroy135

chakku said:


> If you want NVIDIA to give you benchmark numbers instead of third party reviewers you can always look at the review guide that was leaked with their performance metrics on it, however 'accurate' they may be compared to the reviews out today.


I meant a benchmark program to run. The only thing they gave us was the terrible FFXV benchmark.


----------



## Glerox

chakku said:


> Right, I misunderstood what DLSS was doing then. So it's even less of a comparison to the previous generation, because you're relying on DLSS being faster than TAA to make up these larger performance differences between the two. Imagine a GPU comparison where one card is running AA and one isn't. Is that a fair comparison? We have already seen that DLSS actually looks worse than TAA, and obviously more intensive AA methods tend to look better.
> 
> Enjoy that handful of games that actually use DLSS, I guess, but that's still not a metric to compare the 2080 Ti to the 1080 Ti with, and thankfully the review publications agree and published real performance figures.


I disagree.

Comparing AA ON to AA OFF would indeed be an unfair comparison, because one image will have ugly aliasing and the other will not.

However, comparing one method of AA to another (TAA to DLSS) is fair, because the end result on image quality will be approximately the same (no ugly aliasing).

In your photo example we see that DLSS is uglier than TAA, but this is only a DEMO.
We don't have real-world examples yet, but according to the article DLSS is supposed to be at least as good-looking as TAA with much improved performance.

Once we get compatible games and we see that DLSS is as good as TAA at removing aliasing, then it will be a fair comparison to compare the performance of TAA vs DLSS.


----------



## chakku

tconroy135 said:


> I meant a benchmark program to run. The only thing they gave us was the terrible FFXV benchmark.


The people selling you the product are the last people you should want to make a benchmark to validate the performance of said product. Weren't 3DMark working on something with Raytracing anyway?


----------



## kx11

chakku said:


> Hahahahaha. That's a flawed argument, you could say the 2080 Ti is any percentage faster than the 1080 Ti if you make it an apples to oranges comparison. Benchmark all those games on Ultra with the 1080 Ti and Low settings with the 2080 Ti, you'll be amazed at the performance gains.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> When NVIDIA cheat benchmarks in their drivers by forcing some graphical settings to be lower for a higher score it's a scandal in the industry. When they make a feature out of it it's celebrated. Wait what?





nice screenshot


how about dis ?


----------



## raider89

Dreamliner said:


> $699 ≠ $999. Also, link to $999 card please?
> 
> Are you just trolling or what? Honestly its delusional to think you're actually getting those gains. No other reviewer is showing that and EVERY review says the 20-series is ridiculously overpriced.
> 
> I'm glad the 2080Ti exists. It makes the value of my 11 month old EVGA 1080Ti FTW3 I picked up used for $320 I-N-S-A-N-E.



Titan Xp - https://www.nvidia.com/en-us/titan/titan-xp/ $1200
RTX 2080 Ti - https://www.nvidia.com/en-us/geforce/products/geforce-store/ $1199.

As for the $999, you will have to wait for the lower tier 2080ti's that are coming later.

No, these are the numbers from the benchmarks, not trolling. And I'm referring to the Titan Xp, not the V.


----------



## chakku

Glerox said:


> I disagree.
> 
> Comparing AA ON to AA OFF would indeed be an unfair comparison, because one image will have ugly aliasing and the other will not.
> 
> However, comparing one method of AA to another (TAA to DLSS) is fair, because the end result on image quality will be approximately the same (no ugly aliasing).
> 
> In your photo example we see that DLSS is uglier than TAA, but this is only a DEMO.
> We don't have real-world examples yet, but according to the article DLSS is supposed to be at least as good-looking as TAA with much improved performance.
> 
> Once we get compatible games and we see that DLSS is as good as TAA at removing aliasing, then it will be a fair comparison to compare the performance of TAA vs DLSS.


I agree with what you're saying but I feel we have different goalposts. How I read your posts suggests you're talking about a TAA vs DLSS comparison, which is fine. What I'm talking about is extracting performance gains in Turing vs Pascal based on TAA vs DLSS, where the unfair comparison is made. Either you're comparing AA methods, or you're comparing GPU performance. You shouldn't try to mix both into one and go with what comes out the other end.


----------



## xer0h0ur

Rob w said:


> I was having second thoughts on a water block, but I am really glad I ordered one now. It looks promising to put it on water (especially chilled) lol, see what it can do then?


I'm going to be like a kid in a candy store when I have the card and EK block in my hands.


----------



## BigMack70

There are way too many people in here who clearly aren't on topic for an owners thread. Mods can we clean up some of this again? This isn't a debate thread about how people spend their money. 

I also find it interesting how many people on here ignore the best test suite of games (TPU's) because they'd rather stick their fingers in their ears than listen to the facts that this card's performance, at 39% faster, is just fine as a next gen jump from the 1080 Ti.


----------



## chakku

BigMack70 said:


> There are way too many people in here who clearly aren't on topic for an owners thread. Mods can we clean up some of this again? This isn't a debate thread about how people spend their money.
> 
> I also find it interesting how many people on here ignore the best test suite of games (TPU's) because they'd rather stick their fingers in their ears than listen to the facts that this card's performance, at 39% faster, is just fine as a next gen jump from the 1080 Ti.


This isn't a 'stick our heads in the sand and rely on one review site's benchmarks because they make the product look better than every other site' thread. We have the performance numbers, be realistic and stop misrepresenting the results out there.

My point from the beginning was to curb your enthusiasm, wait for benchmarks and don't hype the product up for no reason. Now you're trying to deflate the disappointment by misleading people with results that are now out there. When will it end? When the next generation comes out and you stop pretending Turing was a significant leap?


----------



## xer0h0ur

I've given up on this thread managing to stick to actual owners talking among themselves.

Also FWIW, drivers are up for the RTX series.


----------



## chakku

xer0h0ur said:


> I've given up on this thread managing to stick to actual owners talking among themselves.


There are no owners yet. There won't be for another week minimum. How can you discuss the card as 'owners' when you don't have it in your hand? Until release day and especially before the NDA lifts for benchmark figures it's literally a speculation thread.


----------



## xer0h0ur

chakku said:


> There are no owners yet. There won't be for another week minimum. How can you discuss the card as 'owners' when you don't have it in your hand? Until release day and especially before the NDA lifts for benchmark figures it's literally a speculation thread.


We have already been through this before. I bought one, showed proof, also bought the EK block and backplate. None of which is being canceled or returned. I'm committed. The only thing that shook my confidence was FUD about supposedly no Windows 7 support which just got squashed on the driver release moments ago. They support Windows 7, 8, 8.1 and 10. 

Where is your order?


----------



## chakku

xer0h0ur said:


> We have already been through this before. I bought one, showed proof, also bought the EK block and backplate. None of which is being canceled or returned. I'm committed. The only thing that shook my confidence was FUD about supposedly no Windows 7 support which just got squashed on the driver release moments ago. They support Windows 7, 8, 8.1 and 10.
> 
> Where is your order?


I was a potential buyer, but I'm a sane person who doesn't preorder things without knowing anything about the product. Seeing these disappointing (but expected) performance figures makes me no longer want to buy it and I will stick to my 1080 Ti. Up until this point however, I was just as much of an 'owner' as you are. Just because you preordered it (without actually paying anything until it ships, mind you) you were no more an owner than I was.


----------



## raider89

chakku said:


> There are no owners yet. There won't be for another week minimum. How can you discuss the card as 'owners' when you don't have it in your hand? Until release day and especially before the NDA lifts for benchmark figures it's literally a speculation thread.


As people have already stated in this thread, people other than reviewers have the cards. There are owners of these cards right now. We have proof of purchase. 2080 Tis should ship tomorrow for a majority of us.


----------



## raider89

BigMack70 said:


> There are way too many people in here who clearly aren't on topic for an owners thread. Mods can we clean up some of this again? This isn't a debate thread about how people spend their money.
> 
> I also find it interesting how many people on here ignore the best test suite of games (TPU's) because they'd rather stick their fingers in their ears than listen to the facts that this card's performance, at 39% faster, is just fine as a next gen jump from the 1080 Ti.



Yessss, I agree. This thread does need a good cleaning.


----------



## chakku

raider89 said:


> As people have already stated in this thread, people other than reviewers have the cards. There are owners of these cards right now. We have proof of purchase. 2080 Tis should ship tomorrow for a majority of us.


Last I checked the 2080 Ti release was delayed a week?


----------



## raider89

xer0h0ur said:


> I'm going to be like a kid in a candy store when I have the card and EK block in my hands.


Same here, although I wish I were going with hard tubing instead of flex. One day I will give that a go, but I'm pretty sure I can make it look just as good if I do it right.


----------



## raider89

chakku said:


> Last I checked the 2080 Ti release was delayed a week?


Just the general market launch was delayed. Pre-orders are not affected. (From Nvidia, that is; I'm not sure about other vendors.)


----------



## Dreamliner

xer0h0ur said:


> We have already been through this before. I bought one, showed proof, also bought the EK block and backplate. None of which is being canceled or returned. I'm committed. The only thing that shook my confidence was FUD about supposedly no Windows 7 support which just got squashed on the driver release moments ago. They support Windows 7, 8, 8.1 and 10.
> 
> Where is your order?


I ordered a 2080Ti during the keynote. I was going to cancel but ended up selling it instead.

I read that none of the new RTX features will work on anything older than Windows 10. You can still play traditional games on 7/8, but RT and DLSS will only work on 10.



raider89 said:


> As people have already stated in this thread, people other than reviewers have the cards. There are owners of these cards right now. We have proof of purchase. 2080 Tis should ship tomorrow for a majority of us.


Nobody's 2080Ti (including yours) is shipping tomorrow. Nvidia pushed its own and all AIB partner cards back to the 27th.


----------



## Baasha

Did any of you get shipment confirmation for the GPUs? I got shipping confirmation for the NVLink bridges but not for the GPUs.


----------



## xer0h0ur

Basically the only cards that are shipping from 20th through 27th are the Nvidia FE pre-orders. Everything else is delayed a week.


----------



## raider89

Dreamliner said:


> I ordered a 2080Ti during the keynote. I was going to cancel but ended up selling it instead.
> 
> I read that none of the new RTX features will work on anything older than Windows 10. You can still play traditional games on 7/8, but RT and DLSS will only work on 10.
> 
> 
> Nobody's 2080Ti (including yours) is shipping tomorrow. Nvidia pushed its own and all AIB partner cards back to the 27th.


Ummm, wrong. Pre-orders from Nvidia were not pushed back.


----------



## raider89

xer0h0ur said:


> Basically the only cards that are shipping from 20th through 27th are the Nvidia FE pre-orders. Everything else is delayed a week.



This is what I got


----------



## Dreamliner

raider89 said:


> Ummm, wrong. Pre-orders from Nvidia were not pushed back.


okie dokie.

I look forward to your screenshot proof of shipment tomorrow.


----------



## chakku

BigMack70 said:


> There are way too many people in here who clearly aren't on topic for an owners thread. Mods can we clean up some of this again? This isn't a debate thread about how people spend their money.


I'm curious to know how benchmarks and new features (DLSS) aren't on topic for an owners thread?

Unless you mean mentioning any review that isn't TPU because it's not 38%, in which case sorry to burst your bubble but it's still on topic.


----------



## xer0h0ur

You people still have nothing better to do than post in a thread over tech you're not going to own? Sounds like the disqus comment section on WCCFT.


----------



## BigMack70

chakku said:


> I'm curious to know how benchmarks and new features (DLSS) aren't on topic for an owners thread?
> 
> Unless you mean mentioning any review that isn't TPU because it's not 38%, in which case sorry to burst your bubble but it's still on topic.


It's interesting that you think my post was directed at you.

I'd say that ignoring TPU's data because it doesn't fit the narrative that these cards are not much better than the 1080 Ti (a narrative pushed by non-owners seeking to justify their non-purchase) is off topic. Don't like 39% faster as an average number? Find a reliable benchmark that tests more games than TPU's. Until then, 39% is the best number available.

The 1080 Ti owners thread for folks like yourself who own that card is --> That way


----------



## Robostyle

Ugh, so much has been written here in just an hour, I can't read it all. 

Well, I think I get it. It's no surprise that people who think 100-71=39 aren't actually thinking about the money question; all in all, there are always stupid ones. That's how capitalism works. Brainwashed fanboys, always rushing to have the latest one!! :roll: 

P.S. I don't have the option to "preorder" this stuff in my country; only third-party sellers at a minimum of $1500-1700 per card. So not even a chance, nope, you've missed with that one.


----------



## BigMack70

Robostyle said:


> Its no surprise ppl who think that 100-71=39


Literally nobody has said that in this thread ever. Learn math before you post.


----------



## chakku

BigMack70 said:


> It's interesting that you think my post was directed at you.
> 
> I'd say that ignoring TPU's data because it doesn't fit a troll narrative is off topic. Don't like 39% faster as an average number? Find a reliable benchmark that tests more games than TPUs. Until then, 39% is the best number available.


Oh hey look, 38% suddenly became 39%. Give it a few more posts and it will be 40%, then 41%.

I'm not ignoring TPU's data, I'm saying it's an outlier compared to the rest of the reviews out there. You're only referencing TPU and ignoring everyone else; if anyone is cherry-picking, it is most definitely you. Guru3D also tested Wolfenstein II and got 50% faster, not 66% faster like TPU. Unfortunately Guru3D doesn't do an overall performance summary, so you can't see their overall scores being skewed by this game like TPU's are. Even with Wolfenstein II, Techspot had the 4K average at 31% (with a 55% gain in Wolfenstein II).



> The same batch of tests run at 4K extends the lead out to 31%, so on average the 2080 Ti is 31% faster than the 1080 Ti, according to our results. The most notable change here is Grand Theft Auto V, previously the 2080 Ti was just 1% faster as the system was heavily limiting performance, but now at 4K it’s able to provide 37% more frames. Again Wolfenstein II is the standout result for the 2080 Ti but it also does well in Rainbow Six Siege and No Man’s Sky as well.


Keep your head in the sand though, you're only fooling yourself.

On the subject of the Wolfenstein II results, it looks promising that NVIDIA is finally adapting their architecture to newer APIs and the features within them. As some of you may recall, the Vega 64 beat the 1080 Ti in this game, likely due to its Vulkan implementation. If the Turing performance gains are anything to go by, we should hopefully see more game developers pushing out games with these async features, which will increase performance for both Turing and Vega and leave Pascal et al. in the dust. One can at least hope.
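For what it's worth, the skew being described here is easy to demonstrate: a simple arithmetic mean of per-game gains gets pulled up by one outlier title more than a geometric mean does, which is why some outlets prefer the latter for performance summaries. The numbers below are made up purely for illustration, not taken from any review:

```python
import math

# Hypothetical per-game gains (2080 Ti over 1080 Ti) with one
# Wolfenstein II-style outlier. These ratios are illustrative only.
gains = [1.31, 1.28, 1.66, 1.33, 1.29]

# Arithmetic mean: the outlier contributes linearly.
arithmetic = sum(gains) / len(gains)

# Geometric mean: the outlier's pull is dampened.
geometric = math.prod(gains) ** (1 / len(gains))

print(f"arithmetic mean: +{(arithmetic - 1) * 100:.1f}%")
print(f"geometric mean:  +{(geometric - 1) * 100:.1f}%")
```

With a larger or more extreme outlier, the gap between the two summaries widens further, which is the crux of the TPU-versus-everyone-else argument.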


----------



## Robostyle

BigMack70 said:


> Literally nobody has said that in this thread ever. Learn math before you post.





BigMack70 said:


> That post is all games, no synthetics. At stock. It's 39% faster.


100-72 = 39! eureka!


----------



## Fiercy

Can someone tell me where you guys see the delivery date in the confirmation email from Nvidia? Mine doesn't list the date anywhere... and I can't seem to find any information when I look up my order either.


----------



## raider89

Robostyle said:


> 100-72 = 39! eureka!


That doesn't show anyone said 100-72=39. You're just putting words into people's mouths trying to figure out where 38 or 39 came from. HINT: It didn't come from the graph.


----------



## raider89

Fiercy said:


> Can someone tell me where you guys see the delivery date in the confirmation email from Nvidia? Mine doesn't list the date anywhere... and I can't seem to find any information when I look up my order either.


It's in the order confirmation email, not the order received email... if that helps.


----------



## Glerox

Just received this from Nvidia :

"Dear cow,

Thank you for your purchase from the NVIDIA online store. 

Unfortunately, we have experienced an unexpected delay in shipping your RTX 2080 Ti Founders Edition order. We’ll be back in touch shortly with your new delivery date. 

If you have any other questions or concerns about your order, please visit customer service. 

We apologize for any inconvenience and thank you again for your order. 

Sincerely, 

NVIDIA Online Store 

Customer Service"

(The part after cow is true)


----------



## Fiercy

I second that, here's what this **** LOOKS LIKE!! 


How about they compensate for this with a game or a $100 discount...


----------



## xer0h0ur

Yup, just got the same e-mail and I had ordered during the keynote.


----------



## raider89

"Sorry wife, Nvidia just killed my ability to perform tonight"


----------



## Dreamliner

raider89 said:


> Ummm, wrong. Pre-orders from Nvidia were not pushed back.





raider89 said:


> "Sorry wife, Nvidia just killed my ability to perform tonight"


hmm.


----------



## ThrashZone

Fiercy said:


> I second that, here's what this **** LOOKS LIKE!!
> 
> 
> How about they compensate for this with a game or a $100 discount...


Hi,
That is one of the reasons I don't buy from Nvidia directly; they never, that I've seen, bundle a free game with cards :/
Seems everyone else does.


----------



## BigMack70

Robostyle said:


> 100-72 = 39! eureka!


Hint: subtraction is not relevant to the calculation. Again, please learn how basic math works.
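Since the 38/39% squabble keeps resurfacing: the figure is a ratio, not a difference. A quick sketch of the arithmetic, using the relative-performance chart values being argued about (100 and 72), no new data:

```python
# If the 1080 Ti scores 72% relative to the 2080 Ti's 100%,
# the 2080 Ti's lead is a ratio, not a difference.
rtx_2080_ti = 100  # relative performance (%)
gtx_1080_ti = 72   # relative performance (%)

wrong = rtx_2080_ti - gtx_1080_ti              # 28, not how "X% faster" works
right = (rtx_2080_ti / gtx_1080_ti - 1) * 100  # ~38.9, rounds to "39% faster"

print(f"subtraction gives {wrong}%, division gives {right:.1f}%")
```

So 100 vs 72 is roughly "39% faster", and 100 vs 71 would be roughly "41% faster"; neither comes from subtraction.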


----------



## Robostyle

*raider89*
I'll step aside. But it's funny to watch how Huang ripped everyone off with these cut-down dies across the lineup, and he has the nerve to delay the launch. Not to mention all the other stuff that was revealed over the past two months regarding NV policy. Total arrogance. 
But the most ridiculous thing is that you people keep asking for more bull**** like this. I can't understand that....


----------



## raider89

Fiercy said:


> Can someone tell me where you guys see the delivery date in the confirmation email from Nvidia? Mine doesn't list the date anywhere... and I can't seem to find any information when I look up my order either.





Dreamliner said:


> hmm.


Your comment on the delay was false up until this email, so my statement was correct and you were wrong up until just now... total coincidence. The delay you referred to was regarding the general public/shelf product. Read the forum topic where the official announcement was made by them.


----------



## Glerox

raider89 said:


> "Sorry wife, Nvidia just killed my ability to perform tonight"


LOL! And I don't have any more money to buy Viagra!


----------



## Dreamliner

raider89 said:


> Your comment on the delay was false up until this email, so my statement was correct and you were wrong up until just now... total coincidence. The delay you referred to was regarding the general public/shelf product. Read the forum topic where the official announcement was made by them.


I tried to tell you but you didn't listen. The card that ships to you tomorrow is right next to the one that is 39% faster than a 1080Ti. There is no coincidence and it doesn't exist.


----------



## raider89

Robostyle said:


> *raider89*
> I'll step aside. But it's funny to watch how Huang ripped everyone off with these cut-down dies across the lineup, and he has the nerve to delay the launch. Not to mention all the other stuff that was revealed over the past two months regarding NV policy. Total arrogance.
> But the most ridiculous thing is that you people keep asking for more bull**** like this. I can't understand that....


What exactly is your issue? If you can't afford it, that's okay; if it's not readily available, that's okay. Nobody got ripped off; everyone here who is still buying it is completely satisfied with the performance numbers today. The 2080 Tis have the full-size dies, and the full complement of CUDA cores that is not readily available would not make much of a difference over these cards anyway. High End = High Price. Nvidia is not budget friendly and us enthusiasts are okay with that. They run the market, that's how this works.


----------



## raider89

Dreamliner said:


> I tried to tell you but you didn't listen. The card that ships to you tomorrow is right next to the one that is 39% faster than a 1080Ti. There is no coincidence and it doesn't exist.


Assumptions only get you so far. Overall it is 35-40% faster than the 1080 Ti, and with DLSS it is even higher. In just gaming, sure, 30%; with DLSS it is even higher. If you're not getting this then I can't help you anymore.


----------



## Dreamliner

raider89 said:


> In just gaming, sure, 30%; with DLSS it is even higher. If you're not getting this then I can't help you anymore.


Got it.


----------



## raider89

Glerox said:


> LOL! And I don't have any more money to buy Viagra!


I'm not near that age lol.


----------



## LesPaulLover

raider89 said:


> What exactly is your issue? If you can't afford it, that's okay; if it's not readily available, that's okay. Nobody got ripped off; everyone here who is still buying it is completely satisfied with the performance numbers today. The 2080 Tis have the full-size dies, and the full complement of CUDA cores that is not readily available would not make much of a difference over these cards anyway. High End = High Price. Nvidia is not budget friendly and us enthusiasts are okay with that. They run the market, that's how this works.



Nvidia is giving you less and less every generation, and here you are basically thanking them. A generational gain of 25-30% at a price increase of 70% is an absolute JOKE.

You've been conned, friend! The RTX 2080Ti is actually an "RTX Titan" and the RTX 2080 is the real 2080Ti if we look @ the pricing; Nvidia simply renamed their product stack because of the performance numbers.

Oh well, looks like I'll be going back to console gaming for the first time in a decade when the new consoles release. Paying over $1000 for a graphics card is complete nonsense, and I say this as someone who owns a $750 1080Ti Hybrid card.


----------



## LesPaulLover

raider89 said:


> Assumptions only get you so far. Overall it is 35-40% faster than the 1080 Ti, and with DLSS it is even higher. In just gaming, sure, 30%; with DLSS it is even higher. If you're not getting this then I can't help you anymore.


Go watch the Gamers Nexus RTX vids. The 2080Ti is AT ABSOLUTE BEST 30% faster than the 1080Ti, and that's only when overclocked and running @ 4K. 

In some titles, running @ 1440p, the 2080Ti was as low as *17%* faster than the 1080Ti.

At a price increase of 70%. What a JOKE


----------



## BigMack70

LesPaulLover said:


> Go watch the Gamers Nexus RTX vids. The 2080Ti is AT ABSOLUTE BEST 30% faster than the 1080Ti, and that's only when overclocked and running @


At best the 2080Ti is 50-60% faster. On average it's 35-40%. Worst case 10-20%.


----------



## raider89

LesPaulLover said:


> Go watch the Gamers Nexus RTX vids. The 2080Ti is AT ABSOLUTE BEST 30% faster than the 1080Ti, and that's only when overclocked and running @ 4K.
> 
> In some titles, running @ 1440p, the 2080Ti was as low as *17%* faster than the 1080Ti.
> 
> At a price increase of 70%. What a JOKE


Man, I can't handle any more stupidity today. If you can't read and understand the benchmarks on screen in front of you, then you're lost in la la land.

Gamers Nexus was last to post their review, it was done in a hurry, and it was complete and utter garbage. So watch the OC3D video instead. None of the videos out there are overclocked. The 2080ti is a 4K card so keep it at 4K. It's 35-40% OVERALL, 30% average in gaming, WITHOUT DLSS. End of story, you lose. Stay broke and go troll somewhere else other than a thread meant for people who purchased the card.


----------



## raider89

LesPaulLover said:


> Nvidia is giving you less and less every generation, and here you are basically thanking them. A generational gain of 25-30% at a price increase of 70% is an absolute JOKE.
> 
> You've been conned, friend! The RTX 2080Ti is actually an "RTX Titan" and the RTX 2080 is the real 2080Ti if we look @ the pricing; Nvidia simply renamed their product stack because of the performance numbers.
> 
> Oh well, looks like I'll be going back to console gaming for the first time in a decade when the new consoles release. Paying over $1000 for a graphics card is complete nonsense, and I say this as someone who owns a $750 1080Ti Hybrid card.


NO PRICE INCREASE OVER MSRP, DUMB DUMB. AMD fanbois are not welcome.


----------



## raider89

BigMack70 said:


> At best the 2080Ti is 50-60% faster. On average it's 35-40%. Worst case 10-20%.


The more they speak, the more I really think they just can't afford it.


----------



## BigMack70

*It's a little more than 35% average performance boost in gaming over the 1080 Ti at 4k. Here's a compilation of a bunch of review data for proof.*

Anyone claiming anything less than 35% average performance uplift is either unable to read, unable to do math, or is speaking of a much more limited data set.

And yes, there is a mountain of salt being tossed around because of Nvidia's garbage pricing. In this thread in particular, it's numerous 1080 Ti owners who don't like the (lack of) value proposition and are salty they can't get the increased performance.


----------



## chakku

BigMack70 said:


> In this thread in particular, numerous 1080 Ti owners who don't like the (lack of) value proposition and are salty they can't get the increased performance.


You say this like anyone who chose not to buy the 2080 Ti because the performance improvement wasn't significant simply can't afford it. It most definitely has an awful value proposition, there's no denying that. Saying stuff like 'salty' or 'jealous' or 'can't afford' just shows you have no valid argument against their decision and need to justify your own purchase with something other than raw performance of the product. Sad really, the equivalent of claiming anyone who disagrees with you is a 'hater'. Piss poor backpedalling.


----------



## GraphicsWhore

chakku said:


> BigMack70 said:
> 
> 
> 
> In this thread in particular, numerous 1080 Ti owners who don't like the (lack of) value proposition and are salty they can't get the increased performance.
> 
> 
> 
> You say this like anyone who chose not to buy the 2080 Ti because the performance improvement wasn't significant simply can't afford it. It most definitely has an awful value proposition, there's no denying that. Saying stuff like 'salty' or 'jealous' or 'can't afford' just shows you have no valid argument against their decision and need to justify your own purchase with something other than raw performance of the product. Sad really, the equivalent of claiming anyone who disagrees with you is a 'hater'. Piss poor backpedalling.

I’m with you - I don’t think people need to make it about “can’t afford” but on the other side you have people claiming everyone who preordered just did it with no concept of how GPUs work and what the reasonable expectation of performance could be.

Thing is, one of those groups has preordered and its members are going to be owners of the card posting in an owners thread; the other group seems desperate to come into the thread to basically talk *****.

If you didn't preorder the card and have no interest in ordering it, you shouldn't be here. Period. People can start as many threads as they want about the lack of value they believe these cards have without trying to make it a point here because, frankly, none of us care.


----------



## chakku

GraphicsWhore said:


> have no interest in ordering it


Fair point, as I now know the performance of the 2080 Ti and no longer have any interest in ordering it, I'll take my leave.


----------



## BigMack70

GraphicsWhore said:


> If you didn't preorder the card and have no interest in ordering it, you shouldn't be here. Period. People can start as many threads as they want about the lack of value they believe these cards have without trying to make it a point here because, frankly, none of us care.


Bingo.


----------



## Glerox

Finally we can start talking about overclocking.

Most reviewers seem to pass the +200/+800 mark on air so I guess this will be my starting point once it's under water.
GPU core overclock looks similar to Pascal. However, it seems GDDR6 overclocks more; I could never get to +800MHz on my Titan XP.

Can't wait to try!
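For anyone wondering what the +200/+800 numbers mean in practice, here is a rough sketch of what that offset works out to on a reference card. It assumes Afterburner-style offset semantics (core offset added to the 1545 MHz rated boost; memory offset added to the 7000 MHz double-data-rate reading typically shown for 14 Gbps GDDR6), which is an assumption on my part, and actual in-game clocks will depend on GPU Boost, power limit and temperature:

```python
# Reference RTX 2080 Ti figures (from the spec sheet at the top of the thread).
boost_clock = 1545   # MHz, rated boost
mem_reading = 7000   # MHz, as commonly reported for 14 Gbps GDDR6
bus_width = 352      # bits

# +200 core / +800 memory offsets (Afterburner-style, assumed semantics).
core = boost_clock + 200                     # MHz target before GPU Boost
data_rate = (mem_reading + 800) * 2 / 1000   # effective Gbps per pin
bandwidth = data_rate * bus_width / 8        # GB/s across the bus

print(f"core target: {core} MHz")
print(f"memory: {data_rate:.1f} Gbps -> {bandwidth:.1f} GB/s (stock 616)")
```

If those offsets hold under water, that is a meaningful bump over the stock 616 GB/s, which would fit the observation that GDDR6 headroom looks better than GDDR5X's.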


----------



## BigMack70

Glerox said:


> Finally we can start talking about overclocking.
> 
> Most reviewers seem to pass the +200/+800 mark on air so I guess this will be my starting point once it's under water.
> GPU core overclock looks similar to Pascal. However, it seems GDDR6 overclocks more; I could never get to +800MHz on my Titan XP.
> 
> Can't wait to try!


I just hope that the ability to modify the BIOS is more like Maxwell than Pascal... it was nice to be able to run Titan X Maxwell cards on water without power limits. It's obvious that the OC is purely power limited by the BIOS so it will be interesting to see what happens.


----------



## snafua

xer0h0ur said:


> Yup, just got the same e-mail and I had ordered during the keynote.


Same here, assume you had the estimated Sept-20 ship date.


----------



## Fiercy

snafua said:


> Same here, assume you had the estimated Sept-20 ship date.


We shall see if we end up getting different shipment days (based on the time of the order) once they start sending emails with shipment updates...

What's beyond me in this situation is how they knew about the delay days ago and are still not sure when even people who ordered minutes after preorders opened are getting cards... 
It just seems so strange and unrealistic to me...


----------



## Okt00

voltmods voltmods voltmods!


----------



## Rossman1516

I preordered my 2080ti during the keynote and got the unexpected delay email from Nvidia too. Curious to find out what comes next. I preordered a second through Best Buy and the estimated ship date is Oct 8. These are troubling times...


----------



## xer0h0ur

snafua said:


> Same here, assume you had the estimated Sept-20 ship date.


Yup


----------



## xer0h0ur

Glerox said:


> Finally we can start talking about overclocking.
> 
> Most reviewers seem to pass the +200/+800 mark on air so I guess this will be my starting point once it's under water.
> GPU core overclock looks similar to Pascal. However, it seems GDDR6 overclocks more; I could never get to +800MHz on my Titan XP.
> 
> Can't wait to try!


I've been watching Steve aka Gamers Nexus overclocking live on YouTube for the past hours. He just barely hit 2200 MHz for a split second before crashing Time Spy using the FE cooler. He's now overclocking an EVGA RTX 2080 Ti XC Ultra for comparison.


----------



## Glerox

xer0h0ur said:


> I've been watching Steve aka Gamers Nexus overclocking live on YouTube for the past hours. He just barely hit 2200 MHz for a split second before crashing Time Spy using the FE cooler. He's now overclocking an EVGA RTX 2080 Ti XC Ultra for comparison.


I was watching the same! I'm looking forward to getting a stable 2200MHz under water.

Freakin KINGPIN got his card before everyone and overclocked to 2.4GHz with LN2 to break all records.
He has some special privileges with Nvidia and 3Dmark because he has the only 2080TI officially registered in the Hall of Fame.

*Edit : If you check the scores, Steve from GN got in the top 30 today but his scores don't show up in the HOF.
https://www.3dmark.com/hall-of-fame-2/timespy+3dmark+score+extreme+preset/version+1.0/1+gpu


----------



## Okt00

Glerox said:


> I was watching the same! I'm looking forward to getting a stable 2200MHz under water.
> 
> Freakin KINGPIN got his card before everyone and overclocked to 2.4GHz with LN2 to break all records.
> He has some special privileges with Nvidia and 3Dmark because he has the only 2080TI officially registered in the Hall of Fame.
> 
> *Edit : If you check the scores, Steve from GN got in the top 30 today but his scores don't show up in the HOF.
> https://www.3dmark.com/hall-of-fame-2/timespy+3dmark+score+extreme+preset/version+1.0/1+gpu


GN's already higher at #7, I'd imagine K|NGP|N would take the whole page if it was every score.


----------



## TahoeDust

xer0h0ur said:


> I've been watching Steve aka Gamers Nexus overclocking live on YouTube for the past hours. He just barely hit 2200 MHz for a split second before crashing Time Spy using the FE cooler. He's now overclocking an EVGA RTX 2080 Ti XC Ultra for comparison.





Glerox said:


> I was watching the same! I'm looking forward to getting a stable 2200MHz under water.
> 
> Freakin KINGPIN got his card before everyone and overclocked to 2.4GHz with LN2 to break all records.
> He has some special privileges with Nvidia and 3Dmark because he has the only 2080TI officially registered in the Hall of Fame.
> 
> *Edit : If you check the scores, Steve from GN got in the top 30 today but his scores don't show up in the HOF.
> https://www.3dmark.com/hall-of-fame-2/timespy+3dmark+score+extreme+preset/version+1.0/1+gpu


I was watching Steve tonight too. I was playing with my 1080ti FTW3 Hybrid while he was overclocking his 2080ti. It demolished my card. 7400+ to 4886... https://www.3dmark.com/spy/4442539

I can't wait for my 2080ti FTW3.


----------



## Glerox

Okt00 said:


> GN's already higher at #7, I'd imagine K|NGP|N would take the whole page if it was every score.


Oh, I missed that.
But it's still weird that KingPin has the only 2080TI in the whole Top 100.


----------



## TahoeDust

Glerox said:


> Oh, I missed that.
> But it's still weird that KingPin has the only 2080TI in the whole Top 100.


Kingpin has been knighted by both nVidia and EVGA. He always gets to test and post scores first.


----------



## ClashOfClans

LesPaulLover said:


> Nvidia is giving you less and less every generation, and here you basically thanking them. A generational gain of 25-30% at a price increase of 70% is an absolute JOKE.
> 
> You've been conned, friend! The RTX 2080Ti is actually an "RTX Titan" and the RTX 2080 is the real 2080Ti if we look @ the pricing - Nvidia simply renamed their product stack because of the performance numbers.
> 
> Oh well looks like I'll be going back to console gaming for the first time in a decade when the new consoles release. Paying over $1000 for a graphics card is complete nonsense; and I say this as someone who owns a $750 1080Ti Hybrid card.


Winners are the 1080 Ti owners who don't need to upgrade. Winners are also the buyers who are happy to pay $1200 or so for a 2080 Ti. I feel the 1080 Ti is still a very relevant card, so 1080 Ti owners can rejoice that their card is on par with an $800-850 2080! The 2080 Ti is a monster at 4K though; that's what that card is for. I'll be waiting for next gen and hoping the next Ti card is 35% faster than a 2080 Ti, so hopefully a 70% jump for next gen when I upgrade from my cheap used 1080 Ti. 

I think Nvidia has a good product and I like the possibilities of ray tracing and DLSS. I'm hoping this becomes standard and more fluid in next-gen cards. Bottom line: the 2080 series is a great product, but it's fair to say it's priced a bit high, the 2080 especially, since it's around 1080 Ti performance.


----------



## skingun

I've got the same email. Delayed delivery and "we'll contact you". I ordered during the keynote and am based in the UK.


----------



## Spiriva

I read on a Swedish forum that someone already got the 2080ti


----------



## skingun

They've shipped my NV-link bridge lol


----------



## snafua

skingun said:


> They've shipped my NV-link bridge lol


Well, they got that right this time LoL. The SLI HB bridges for the 1080s took about another month after the cards shipped.


----------



## skingun

I member.


----------



## xer0h0ur

Glerox said:


> I was watching the same! I'm looking forward to getting a stable 2200MHz under water.
> 
> Freakin KINGPIN got his card before everyone and overclocked to 2.4GHz with LN2 to break all records.
> He has some special privileges with Nvidia and 3Dmark because he has the only 2080TI officially registered in the Hall of Fame.
> 
> *Edit : If you check the scores, Steve from GN got in the top 30 today but his scores don't show up in the HOF.
> https://www.3dmark.com/hall-of-fame-2/timespy+3dmark+score+extreme+preset/version+1.0/1+gpu


It's not special privileges with Nvidia. It's special privileges with EVGA; they provide everything for him, plus he works with an electronics engineering guru.


----------



## illidan2000

That is not true...
There is another one (no. 44). Take a look again!


----------



## keikei

Opinion on hybrid models (in general)?


----------



## BigMack70

keikei said:


> Opinion on hybrid models (in general)?


Can't speak for these since they aren't out but I had 2x Titan X Maxwell hybrids and a 1080 Ti hybrid and my experience with them was amazing. Not as pretty as custom loop; not quite as whisper quiet as you can make an overbuilt custom loop, but the performance is pretty much comparable with respect to keeping the cards cool enough that you don't lose any boost frequency due to temps. Only thing I ran into is that they exhaust so much hot air that you do NOT want their radiators as intake for your case. Definitely need to be exhaust.

If the FE card sounds like a vacuum cleaner in my rig I'll be buying a hybrid cooler, if I can, to slap on this one as well.


--EDIT-- Looks like DLSS could be the real deal; small visual quality loss that may not be noticeable during gameplay for large performance gains:


----------



## arrow0309

Hi, does anyone know whether the Gigabyte 2080 Ti Gaming OC has a reference-compatible PCB? I've already ordered the EK Vector RGB block and wasn't sure what to finally order; I'm looking at either the MSI Duke or this Gaming OC, both clocked to 1665 MHz by default like the EVGA XC Ultra.

However, I'd take the Gigabyte, as it's available sooner on Ebuyer.com.

I've looked everywhere but can't find any goddamned review, PCB pic, or info whatsoever.


----------



## Robostyle

raider89 said:


> What exactly is your issue?





raider89 said:


> everyone here who is still buying it is completely satisfied with the performance numbers today. Nvidia is not budget friendly and us enthusiasts are okay with that.


I thought u could figure that out. 
Moreover, I don't see anything "elite","luxurous", or "expensive" in totally gaming 9read:killing ur time, entertaining urself without gains) stuff and gear, so it would break 1000$ mark. 
It's not even a boat or supercar - imagine if Ferrari instead of offering you a car for 500K$ starts selling engine only, or chassis for 700K$? Every stuff has it's own price and position on the global market - which is dependant on overall economy, salaries, etc. And these clumsy attempts to explain the price rise away just makes me laugh

P.S. And yeah, dye is what u have in ur cola. Die is what u have on ur PCB. And no - all of RTX chips are cutted - only Quadro have their royalty to have full, uncutted silicon.


----------



## keikei

BigMack70 said:


> Can't speak for these since they aren't out but I had 2x Titan X Maxwell hybrids and a 1080 Ti hybrid and my experience with them was amazing. Not as pretty as custom loop; not quite as whisper quiet as you can make an overbuilt custom loop, but the performance is pretty much comparable with respect to keeping the cards cool enough that you don't lose any boost frequency due to temps. Only thing I ran into is that they exhaust so much hot air that you do NOT want their radiators as intake for your case. Definitely need to be exhaust.
> 
> If the FE card sounds like a vacuum cleaner in my rig I'll be buying a hybrid cooler, if I can, to slap on this one as well.
> 
> 
> --EDIT-- Looks like DLSS could be the real deal; small visual quality loss that may not be noticeable during gameplay for large performance gains:
> 
> 
> 
> Spoiler
> 
> 
> 
> https://www.youtube.com/watch?v=MMbgvXde-YA&feature=youtu.be



Yeah, I need an exhaust-style card. I made the mistake of getting a display PC case, and I need something that performs much better than the standard blower without the complexities of a loop. Thanks for the insight.


----------



## Woundingchaney

Has there been any information regarding a potential shipping date for early orders yet? Seems most everyone got the delay email (Nvidia), but no other update.


----------



## BigMack70

Woundingchaney said:


> Has there been any information regarding a potential shipping date for early orders yet? Seems most everyone got the delay email (Nvidia), but no other update.


The best info we have so far is still the post on the Nvidia forums that the cards will begin arriving between the 20th and 27th. I haven't seen any posts online that orders have begun shipping yet.


----------



## raider89

Robostyle said:


> I thought u could figure that out.
> Moreover, I don't see anything "elite","luxurous", or "expensive" in totally gaming 9read:killing ur time, entertaining urself without gains) stuff and gear, so it would break 1000$ mark.
> It's not even a boat or supercar - imagine if Ferrari instead of offering you a car for 500K$ starts selling engine only, or chassis for 700K$? Every stuff has it's own price and position on the global market - which is dependant on overall economy, salaries, etc. And these clumsy attempts to explain the price rise away just makes me laugh
> 
> P.S. And yeah, dye is what u have in ur cola. Die is what u have on ur PCB. And no - all of RTX chips are cutted - only Quadro have their royalty to have full, uncutted silicon.


Not sure what exactly the point of your post was; none of it makes any sense. The comparison was horribly inaccurate. I could see it making someone laugh who doesn't understand it, which is true in your case.

Dye or die, it doesn't make a difference; you made it obvious you knew what I meant. Although I do know the difference, my phone does not. Sorry to burst your bubble at your failed attempt to correct me, Mr. "all of RTX chips are cutted". Cutted, lol.

I think it's time to take a leave of absence.


----------



## GraphicsWhore

Amazon says my EVGA XC is "arriving tomorrow by 8pm" but the status doesn't actually show shipped yet. I have to wait for two blocks to come in and that won't be until next week anyway but would be nice to have the card tomorrow just to hold it and stroke it and call it names.


----------



## xer0h0ur

Robostyle said:


> I thought u could figure that out.
> Moreover, I don't see anything "elite","luxurous", or "expensive" in totally gaming 9read:killing ur time, entertaining urself without gains) stuff and gear, so it would break 1000$ mark.
> It's not even a boat or supercar - imagine if Ferrari instead of offering you a car for 500K$ starts selling engine only, or chassis for 700K$? Every stuff has it's own price and position on the global market - which is dependant on overall economy, salaries, etc. And these clumsy attempts to explain the price rise away just makes me laugh
> 
> P.S. And yeah, dye is what u have in ur cola. Die is what u have on ur PCB. And no - all of RTX chips are cutted - only Quadro have their royalty to have full, uncutted silicon.


Literally LOLing here. You have the nerve to correct someone else's grammar when you obviously aren't a native English speaker, making your own grammatical errors. Cutted isn't a word... and this "cutted" die you keep talking about has 5% fewer CUDA cores than the full die. You're nuts if you actually believe Nvidia would have put the full die onto the Ti. It would have cannibalized their Quadro RTX 8000, which uses the full TU102. You know, that $10,000 card Nvidia makes. Do you even understand business?


----------



## BigMack70

GraphicsWhore said:


> Amazon says my EVGA XC is "arriving tomorrow by 8pm" but the status doesn't actually show shipped yet. I have to wait for two blocks to come in and that won't be until next week anyway but would be nice to have the card tomorrow just to hold it and stroke it and call it names.


Amazon often doesn't alert to a shipping delay until the day of or the day after it is supposed to arrive. Kind of obnoxious, really.


----------



## rush2049

My NVLink arrived:

https://imgur.com/a/A40Ic6H

2x Phillips 000 screws (on back)
2x Torx T8 screws (on front)
10 plastic clips on edges


----------



## skingun

Thank you for the disassembly pictures.


----------



## Baasha

Grrr.. did anyone else get this email:

"Unfortunately, we have experienced an unexpected delay in shipping your RTX 2080 Ti Founders Edition order. We’ll be back in touch shortly with your new delivery date."

I pre-ordered the first day - WTH is going on?!?


----------



## BigMack70

Baasha said:


> Grrr.. did anyone else get this email:
> 
> "Unfortunately, we have experienced an unexpected delay in shipping your RTX 2080 Ti Founders Edition order. We’ll be back in touch shortly with your new delivery date."
> 
> I pre-ordered the first day - WTH is going on?!?


The 1 week delay applies to everyone apparently. Supposedly we should all have them by the 27th. 

https://forums.geforce.com/default/...ries/geforce-rtx-2080-ti-availability-update/


----------



## snafua

BigMack70 said:


> The 1 week delay applies to everyone apparently. Supposedly we should all have them by the 27th.
> 
> https://forums.geforce.com/default/...ries/geforce-rtx-2080-ti-availability-update/


Right. They said between the 20th and 27th, but I doubt any shipping will happen tomorrow or over the weekend. So the 24th at the earliest?
Not holding my breath that they get these out sooner.


----------



## superhead91

Thread re-opened.

Please remember, if a user is breaking the rules, report them and move on. Replying to them only makes the situation worse.


----------



## Baasha

Is Nvidia trying to compete with Asus for the worst launch award? 

This is honestly the FIRST time in 8 years that I've not had the GPUs on day 1. 

They announced it a full month in advance, which I don't remember ever happening (at least since the 580 days), and now the actual launch is delayed by a week.

Oh well...


----------



## BigMack70

Baasha said:


> Is Nvidia trying to compete with Asus for the worst launch award?
> 
> This is honestly the FIRST time in 8 years that I've not had the GPUs on day 1.
> 
> They announced it a full month in advance, which I don't remember ever happening (at least since the 580 days), and now the actual launch is delayed by a week.
> 
> Oh well...


Yeah this is the first paper launch I can remember in a while. Disappointing. Both AMD and Nvidia have been a lot better than this in recent years in having stock on day 1.


----------



## Baasha

BigMack70 said:


> Yeah this is the first paper launch I can remember in a while. Disappointing. Both AMD and Nvidia have been a lot better than this in recent years in having stock on day 1.


Yea, and then the last minute change in lifting NDA for performance results was shady IMO.

Well, we now know why they did that.

I'm still eager to try out NVLink - other than that, the 2080 Ti is a big "meh" from what I've seen online so far.

The Turing Titan OTOH, hmm.


----------



## BigMack70

Baasha said:


> Yea, and then the last minute change in lifting NDA for performance results was shady IMO.
> 
> Well, we now know why they did that.
> 
> I'm still eager to try out NVLink - other than that, the 2080 Ti is a big "meh" from what I've seen online so far.
> 
> The Turing Titan OTOH, hmm.


Assuming Turing Titan is just a full TU102 and not a different chip, it's probably only about 7%-ish faster than 2080 Ti with an extra gig of vram; not enough to differentiate it in games though would certainly be a clear chart topper for benchmarks.
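
As a rough sanity check of that estimate (assuming the full TU102 is the 72-SM / 4608-CUDA-core configuration used in the Quadro RTX 8000, and scaling by cores alone):

```python
# Back-of-the-envelope: core-count uplift of a hypothetical full-TU102 Titan
# over the 2080 Ti, ignoring clocks, bandwidth, and the extra 1GB of VRAM.
full_tu102_cores = 4608   # 72 SMs x 64 cores (Quadro RTX 8000 config)
ti_cores = 4352           # 68 SMs, per the spec table
uplift_pct = (full_tu102_cores - ti_cores) / ti_cores * 100
print(f"Extra CUDA cores at equal clocks: +{uplift_pct:.1f}%")  # ~+5.9%
```

So cores alone buy roughly +6%; getting to "7%-ish" assumes a small clock bump on top.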


----------



## Baasha

BigMack70 said:


> Assuming Turing Titan is just a full TU102 and not a different chip, it's probably only about 7%-ish faster than 2080 Ti with an extra gig of vram; not enough to differentiate it in games though would certainly be a clear chart topper for benchmarks.


The main difference I'm hoping for is that NVLink is doable on the Turing Titan - since the Titan V was gimped (i.e. no SLI/NVLink), having two Turing Titans should be quite nice.


----------



## dboythagr8

My 2080ti from EVGA has changed to "Order Shipping, Pending pickup from UPS". I did next day air, so hopefully that means tomorrow? Saturday at the latest?



BigMack70 said:


> The 1 week delay applies to everyone apparently. Supposedly we should all have them by the 27th.
> 
> https://forums.geforce.com/default/...ries/geforce-rtx-2080-ti-availability-update/



"General availability". Meaning if you did not preorder, you will not have the chance to get the card until the 27th.


----------



## xer0h0ur

Sooooo, on the Nvidia subreddit some people are reporting getting their AIB 2080 Ti's delivered or ready for pickup.


----------



## warm

ordered a gaming x trio here

any chance a 9700k or 9900k would lift those bottlenecks we saw in fps on some games over youtube?


----------



## BigMack70

warm said:


> ordered a gaming x trio here
> 
> any chance a 9700k or 9900k would lift those bottlenecks we saw in fps on some games over youtube?


Very doubtful since it's basically the same architecture as 8000 series with slightly different core arrangement. Maybe in some very well multithreaded games the 9900k would do better but I think that even at 1440p there are going to be occasional CPU-based limitations with this card, and frequently at 1080p.


----------



## Baasha

3rd-party AIB orders are getting delivered, but Nvidia hasn't even shipped the FE cards yet.

Patience is truly a virtue. On the bright side, I got my NVLink bridge.

Asus 2080 Ti delivered: https://www.reddit.com/r/nvidia/comments/9hg23z/delivery_on_point/


----------



## xer0h0ur

It sucks, to be certain. I just had no intention of paying elevated AIB prices for a cooler I was going to punt anyway for the EK waterblock.


----------



## jase78

Do you guys think there is any validity to the theory that the 2080 Ti model originally would have been the 2080?
If the Ti truly was the Titan renamed, then shouldn't the 2080 perform about the same as it? Usually the Ti performs the same or a bit better than the Titan when you take into account the better AIB coolers.


----------



## xer0h0ur

IMO the 2080 needed more CUDA cores to make it an attractive GPU over the 1080 Ti. Oh well. What is done is done.


----------



## Glerox

Baasha said:


> Grrr.. did anyone else get this email:
> 
> "Unfortunately, we have experienced an unexpected delay in shipping your RTX 2080 Ti Founders Edition order. We’ll be back in touch shortly with your new delivery date."
> 
> I pre-ordered the first day - WTH is going on?!?


Now I understand why it's been delayed! It's because Baasha ordered all of them LOL


----------



## GraphicsWhore

xer0h0ur said:


> It sucks to be certain. I just had no intention of paying elevated AIB prices for a cooler I was going to punt anyways for the EK waterblock.


So this was weird, because I paid $1,149 for my EVGA XC on Amazon, for a total of $1,220. My FE cost me $1,270 total.


----------



## xer0h0ur

GraphicsWhore said:


> So this was weird because I paid 1149 for my EVGA XC on Amazon for a total of 1220. My FE cost me 1270 total.


Weren't the AIB cards also listed for sale after the FE cards were up for pre-order? I wasn't about to risk missing out on a card with retailers while having no idea if they would be reference designs compatible with EK's launch waterblock. It's the only reason I buy FE cards.


----------



## GraphicsWhore

xer0h0ur said:


> GraphicsWhore said:
> 
> 
> 
> So this was weird because I paid 1149 for my EVGA XC on Amazon for a total of 1220. My FE cost me 1270 total.
> 
> 
> 
> Weren't the AIB cards also listed for sale after the FE cards were up for pre-order? I wasn't about to risk missing out on a card with retailers while having no idea if they would be reference designs compatible with EK's launch waterblock. Its the only reason I buy FE cards.

Same. I actually had two FEs on preorder for the same reason. One is from nvidia and one is from a local Best Buy which I can pick up in store. The latter two I won’t get until the October launch on the 8th but at this point it seems the EVGA cards are all but 100% confirmed compatible with the EK blocks. I also have the Alphacool full cover coming and that should also be compatible so I’m just waiting to see which I like better aesthetically.


----------



## LesPaulLover

xer0h0ur said:


> Literally LOLing here. You have the nerve to correct someone else's grammar when you obviously aren't a native English speaker making your own grammatical errors. Cutted isn't a word...and this "cutted" die you keep talking about has 5% less cuda cores than the full die. You're nuts if you actually believe Nvidia would have put the full die onto the Ti. It would have cannibalized their Quadro RTX 8000 that uses the full TU102. You know, that $10,000 card Nvidia makes. Do you even understand business?


The so-called "RTX 2080Ti" isn't REALLY a "xx80Ti" model card. It's a "Titan" model. All of the price and specs point to this fact; except one - you're paying $1200+ and still not getting the full amount of VRAM!


----------



## BigMack70

LesPaulLover said:


> The so-called "RTX 2080Ti" isn't REALLY a "xx80Ti" model card. It's a "Titan" model


Nvidia's naming schemes don't mean anything. Heck, it took them until the $3000 Titan V to stop labeling their Titan cards as GTX gaming cards. I don't see the point of this argument.

IMO they called their $1000 GK110 card the "Titan" because if they had called it what it really ought to have been called - the GTX 680 - they would never have gotten away with charging that price. The Titan lineup served its purpose well - allow Nvidia to jack top-tier pricing up to $1000+ and midrange pricing up to $600+. Mission accomplished, so now they've dropped the pretense.


----------



## dVeLoPe

I have 5x 2080 Tis on PRE-ORDER from launch day; will they arrive on the 27th?


----------



## LesPaulLover

Please watch AdoredTV's YouTube video series "A History of Nvidia GeForce," and maybe you'll start to understand why the 2080 Ti's price tag is absolutely ridiculous. It's not only their SMALLEST generational improvement but also one of their GREATEST generational price increases.

Unfortunately loads of people have apparently bought these cards willingly - so we'll probably see this trend perpetuate indefinitely. What will the 3080Ti bring us? 25% over the 2080Ti for $1500? Would you STILL buy it then?

And go ahead....call me "poor" all you want. I have an $800 EVGA 1080Ti FTW3 Hybrid card. I think $800 is quite a generous price to pay for a gaming GPU.


----------



## BigMack70

LesPaulLover said:


> It's not only their SMALLEST generational improvement but also one of their GREATEST generational price increases.


You only get to pick one of these things as true. They can't both be true.


If you follow Nvidia's naming conventions for their cards, then the GTX 580 was a much smaller generational improvement over the 480 than this is. If you follow the underlying tech (IMO the better way to go) and look at the chips in each architecture, then Kepler was the greatest generational price increase (a 100% increase... GK104 was twice the price of GF114 and GK110 was twice the price of GF110).

It's close to the Kepler --> Maxwell performance jump, though not quite as good. And the price hike is nowhere even remotely close to 100% like with Kepler.


----------



## xer0h0ur

If I am not buying another video card for 2 to 3 years, then I have no reason to worry about a potential price increase. I have already bought a $1500 card in the recent past, the 295X2, so it's not a foreign concept to me. The only cards that don't tempt me are the semi-pro and pro cards, although an argument can be made that this Ti is pretty much a semi-pro card. I just don't care, since I am buying it strictly for the huge CUDA core count bump. I can't even use DLSS or RT on Windows 7, so it's a non-factor until I build my next rig, which will be on W10.


----------



## mouacyk

So has anyone pushed a 2080 Ti to the max yet, on air and water?


----------



## jase78

Has anyone dug into what the difference is between Tensor cores and RT cores physically?


----------



## LesPaulLover

mouacyk said:


> So has anyone pushed a 2080 Ti to the max yet, on air and water?


On a more interesting / positive and related note - it seems like DX11 is nearing its absolute end of life.

Several 2080Ti reviewers noting that older titles (DX11) such as GTAV and The Witcher 3 are being CPU bottlenecked, at 4k ultra, by 8700k CPUs @ 5+GHz, when pushing a 2080Ti!

This is definitely promising news for ALL PC gamers! (ie - we'll FINALLY be getting games developed from the ground up on DX12 and/or Vulkan (instead of using the poor DX11 ---> DX12 code translations that we see in games like BF1)


----------



## aj_hix36

Does anyone know what the max power target and max voltage are on the MSI Trio? The FE only does 123% max power, and the EVGA XC Ultra did 130%; I'm hoping that since it is a custom card with an extra 6-pin, the max power allowed will be higher.


----------



## dboythagr8

My card shipped. Tracking number and all. From EVGA. Will be here tomorrow


----------



## xer0h0ur

LesPaulLover said:


> On a more interesting / positive and related note - it seems like DX11 is nearing its absolute end of life.
> 
> Several 2080Ti reviewers noting that older titles (DX11) such as GTAV and The Witcher 3 are being CPU bottlenecked, at 4k ultra, by 8700k CPUs @ 5+GHz, when pushing a 2080Ti!
> 
> This is definitely promising news for ALL PC gamers! (ie - we'll FINALLY be getting games developed from the ground up on DX12 and/or Vulkan (instead of using the poor DX11 ---> DX12 code translations that we see in games like BF1)


You still haven't learned anything, have you? The gaming industry literally cannot abandon DX11, no matter how much you want it to happen. Do you even realize that the entire gaming ecosystem is by and large made up of DX11-generation GPUs? They're in the business of maximizing profits, not leaving money behind.


----------



## xer0h0ur

aj_hix36 said:


> Does anyone know what the max power target, and max voltage is on the msi trio? The FE only does 123 max power, the evga xc ultra did 130, im hoping since it is a custom card with an extra 6pin even, that the max power allowed would be higher.


Eventually none of that will matter. People will wind up editing the vBIOS. At which point you're going to be GPU boost 4.0 limited by thermals. Which is why I am waterblocking the card.


----------



## aj_hix36

xer0h0ur said:


> aj_hix36 said:
> 
> 
> 
> Does anyone know what the max power target, and max voltage is on the msi trio? The FE only does 123 max power, the evga xc ultra did 130, im hoping since it is a custom card with an extra 6pin even, that the max power allowed would be higher.
> 
> 
> 
> Eventually none of that will matter. People will wind up editing the vBIOS. At which point you're going to be GPU boost 4.0 limited by thermals. Which is why I am waterblocking the card.

Well, does that make the MSI Trio with the extra 6-pin the better of the two cards, if we pretend neither can be temp capped? I'm trying to decide whether to keep the MSI Trio or the EVGA FTW3.


----------



## Pixrazor

Hey guys!
Can someone run oclmembench when they get their cards in hand? Thanks in advance!


----------



## EarlZ

I am looking at getting the MSI Gaming X Trio 2080 Ti; how much more slot space does it need compared to the MSI Gaming X 1080 Ti? I ask because I am using a microATX motherboard (Gigabyte Sniper G1 M5) with a sound card in the very last slot.


----------



## Glerox

xer0h0ur said:


> Eventually none of that will matter. People will wind up editing the vBIOS. At which point you're going to be GPU boost 4.0 limited by thermals. Which is why I am waterblocking the card.


AFAIK, Nvidia blocks the editing of the vBIOS. I haven't seen anyone able to change the vBIOS of their Pascal GPU.

The only way to unlock the power limit was to do a hardware shunt mod (which I did, and it worked). I'm sure we will have to do the same on Turing in order to unlock the power limit.


----------



## mouacyk

LesPaulLover said:


> On a more interesting / positive and related note - it seems like DX11 is nearing its absolute end of life.
> 
> *Several 2080Ti reviewers noting that older titles (DX11) such as GTAV and The Witcher 3 are being CPU bottlenecked, at 4k ultra, by 8700k CPUs @ 5+GHz, when pushing a 2080Ti!*
> 
> This is definitely promising news for ALL PC gamers! (ie - we'll FINALLY be getting games developed from the ground up on DX12 and/or Vulkan (instead of using the poor DX11 ---> DX12 code translations that we see in games like BF1)


Please provide sources for the bolded statement. What could have changed all of a sudden with DX11 drivers and APIs so that a higher resolution is now bottlenecking the CPU, instead of the GPU as usual?

First of all, you derailed the original question and then made an unsupported claim. If you want to be taken seriously, I'd suggest some sources to back it up.


----------



## arrow0309

Anyone planning to do a shunt mod on the 2080 Ti?


----------



## Glottis

xer0h0ur said:


> IMO the 2080 needed to have more cuda cores to make it an attractive GPU over the 1080 Ti. Oh well. What is done is done.


Not really, all that 2080 needs is time. As soon as more REAL DX12 and Vulkan games come out (not lame DX11->12 conversions), 2080 will take huge lead over 1080Ti. You can already see the pattern starting to emerge in Vulkan game Wolf 2 where 2080 takes a massive lead over 1080Ti and in Forza 7 where once again 2080 takes a massive lead over 1080Ti.


----------



## Rob w

arrow0309 said:


> Anyone planning to do a shunt mod on the 2080 Ti?


I’ve no doubt I will, especially as I know how to do it correctly now (lol); can't resist the temptation.
I was not planning on running this at stock!! It's got to totally trash the 1080 Ti, ha ha.


----------



## arrow0309

Rob w said:


> I’ve no doubt I will, especially as I know how to do it correctly now ( lol)can’t resist the temptation.
> I was not planning on running this at stock!! It’s got to totally trash the 1080ti ha ha.


Would you mind being more specific?
I'm also tempted; I ordered a 2080 Ti MSI Duke and a nice Vector block from Scan.


----------



## LRRP

xer0h0ur said:


> I can't even use DLSS or RT on Windows 7


Do you think Vulkan will bring DLSS and/or RT functionality to Windows 7?


----------



## Rob w

arrow0309 said:


> You mind be more specific?
> I'm also tempted, ordered a 2080 ti Msi Duke and a nice Vector from Scan.


Hi arrow,
More specific as in: I shunt modded my Titan V and blew it up! A resistor shorted against the back of the waterblock; it has been repaired, thankfully. Hence, if doing the mod with 5 mΩ resistors, make sure there is clearance under the waterblock.
I will be doing the two by the power connectors, but I need to research further, as there seems to be a further section that monitors power draw.
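
For anyone wondering why stacking resistors on the shunts raises the power limit, here is a rough sketch. The 5 mΩ values below are typical for Nvidia boards and are assumed, not measured on a 2080 Ti:

```python
# Why paralleling a resistor over the current-sense shunt raises the power cap.
# The card computes power from the voltage drop across a known tiny resistance;
# lowering that resistance makes it under-read its actual draw.
def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

stock_shunt = 0.005          # 5 mOhm stock shunt (assumed typical value)
stacked = 0.005              # 5 mOhm resistor soldered on top
effective = parallel(stock_shunt, stacked)
reported = effective / stock_shunt
print(f"Effective shunt: {effective * 1000:.2f} mOhm")
print(f"Card now reports {reported:.0%} of actual power draw")
```

With equal resistors the card reads only half its real power, effectively doubling the enforced limit, which is also why clearance under the waterblock matters: the stacked resistor sits proud of the PCB.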


----------



## GraphicsWhore

dboythagr8 said:


> My card shipped. Tracking number and all. From EVGA. Will be here tomorrow


Did you purchase directly from them?

I got my EVGA XC from Amazon and it's still saying it'll arrive by tonight but the status hasn't actually changed to "shipped" and there's no tracking info.


----------



## keikei

Anyone order a card that's not a preorder? Newegg's been sold out since yesterday. I'm not in a rush (I need a few weeks), but I hope they get restocked soon.


----------



## Crinn

keikei said:


> Anyone order a card that's not a preorder? Neweggs been soldout since yesterday. I'm not in a rush (need a few weeks), but i hope they get restocked soon.


2080Ti will not be restocked until the 27th


----------



## mouacyk

Not mine. Overclocked to 2070 MHz / 14,506 MHz for a GPU score of 38,189. Good 1080 Tis around 2100 MHz can hit ~31,500, so this puts the 2080 Ti at +21% in this benchmark, OC to OC.
https://www.3dmark.com/3dm/28713873?

Memory is cranking on this GPU -- here we see 16,200 MHz with a nice GPU score boost to 39,276, putting the lead at +25%.
https://www.3dmark.com/fs/16415224
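
Recomputing those leads from the GPU scores:

```python
# OC-to-OC comparison from the GPU scores quoted above.
score_2080ti_core_oc = 38189   # 2070 MHz core / 14,506 MHz memory
score_2080ti_mem_oc = 39276    # memory pushed to 16,200 MHz
score_1080ti_oc = 31500        # good 1080 Ti around 2100 MHz

def lead_pct(new, old):
    return (new - old) / old * 100

print(f"Core OC lead:   +{lead_pct(score_2080ti_core_oc, score_1080ti_oc):.0f}%")  # ~+21%
print(f"Memory OC lead: +{lead_pct(score_2080ti_mem_oc, score_1080ti_oc):.0f}%")   # ~+25%
```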


----------



## GraphicsWhore

mouacyk said:


> Not mine. Overclocked to 2070MHz/14,506MHz for GPU score of 38,189. Good 1080 TI's around 2100MHz can hit ~31,500 so this puts the 2080 TI at +21% in this benchmark, OC to OC.
> https://www.3dmark.com/3dm/28713873?


Interesting, thanks.

I'm also curious about how it performs with resolution scaling (as opposed to "real" resolutions) vs a 1080Ti.


----------



## keikei

mouacyk said:


> Not mine. Overclocked to 2070MHz/14,506MHz for GPU score of 38,189. Good 1080 TI's around 2100MHz can hit ~31,500 so this puts the 2080 TI at +21% in this benchmark, OC to OC.
> https://www.3dmark.com/3dm/28713873?


What % OC is that over a stock card? I heard GamersNexus got about a 15% OC on their FE. Maybe they got a binned card.



Crinn said:


> 2080Ti will not be restocked until the 27th


Thanks. I'm not in a rush, I guess, to drop dat kind of $. I'd read a few other companies are releasing other (possibly faster) versions soon.
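
For reference, here is what a 2070 MHz overclock works out to against the two spec baselines (reference boost 1545 MHz from the spec table; FE factory boost 1635 MHz). Note that GPU Boost typically runs above these clocks out of the box, which is why reviewers quote smaller OC gains:

```python
# What a 2070 MHz overclock amounts to against the two spec baselines.
oc_mhz = 2070
reference_boost = 1545   # spec-sheet boost clock
fe_boost = 1635          # FE factory-overclocked boost clock

vs_reference = (oc_mhz - reference_boost) / reference_boost * 100
vs_fe = (oc_mhz - fe_boost) / fe_boost * 100
print(f"vs reference boost: +{vs_reference:.0f}%")  # ~+34%
print(f"vs FE boost:        +{vs_fe:.0f}%")         # ~+27%
```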


----------



## GraphicsWhore

dboythagr8 said:


> My 2080ti from EVGA has changed to "Order Shipping, Pending pickup from UPS". I did next day air, so hopefully that means tomorrow? Saturday at the latest?
> 
> 
> 
> 
> "General availability". Meaning if you did not preorder, you will not have the chance to get the card until the 27th.


I pre-ordered an EVGA XC on Amazon on launch day. Just got a notice that it's delayed.

This whole time I was thinking I'd have a card but be waiting for my blocks to come in. Now my EK block is scheduled by Monday and I won't have a card. Lol damn it.


----------



## FreeElectron

Are the 2080 Ti cards binned differently for different manufacturers?
If so, which ones are better (in order, if possible)?


----------



## GraphicsWhore

FreeElectron said:


> Are the cards 2080 TIs binned differently for different manufacturers?
> If so, which ones are better (in order if possible)?


If they are I don't think it has ever been proven so the easy answer is "no."


----------



## tconroy135

GraphicsWhore said:


> If they are I don't think it has ever been proven so the easy answer is "no."


What I have read is that for the 2080 Ti there are two reported bins:

1. A bin for FE cards and factory-overclocked AIB cards
2. A bin for MSRP cards (Nvidia) and non-factory-overclocked AIB cards


----------



## rush2049

Jay just did this:

https://www.3dmark.com/spy/4449546

Time Spy: 24,317 points, which puts him in 37th place for Time Spy.



And Jay just did this:

https://www.3dmark.com/spy/4449513

which puts him in 17th place with 13,768 points in Time Spy Extreme.


That's both with 2080 Tis in NVLink.


----------



## Baasha

Glerox said:


> Now I understand why it's been delayed! It's because Baasha ordered all of them LOL


LOL.. I just got 4 



LesPaulLover said:


> The so-called "RTX 2080Ti" isn't REALLY a "xx80Ti" model card. It's a "Titan" model. All of the price and specs point to this fact; except one - you're paying $1200+ and still not getting the full amount of VRAM!


Wrong! The Turing Titan (and all Titans going forward) will be $3000 (similar to the Titan V). Prices have just shot up for everything, unfortunately.



rush2049 said:


> Jay just did this:
> 
> https://www.3dmark.com/spy/4449546
> 
> Time Spy : 24317 points. puts him in 37th place for timespy.
> 
> 
> 
> And jay just did this:
> 
> https://www.3dmark.com/spy/4449513
> 
> Puts him in 17th place with 13768 points in Timespy extreme.
> 
> 
> Thats both with 2080 ti's in NVlink



That seems pretty insane tbh.. can't wait for my cards to come in!!


----------



## raider89

Baasha said:


> Wrong! The Turing Titan (and all Titans going forward) will be $3000 (similar to the Titan V). Prices have just shot up for everything, unfortunately.
> 
> 
> 
> 
> That seems pretty insane tbh.. can't wait for my cards to come in!!


You don't know that for certain; the 2080 Ti is priced and performs as you would expect, so it makes logical sense. But there is nothing out there that shows who is right. Just speculation.


----------



## CallsignVega

LesPaulLover said:


> On a more interesting / positive and related note - it seems like DX11 is nearing its absolute end of life.
> 
> Several 2080Ti reviewers noting that older titles (DX11) such as GTAV and The Witcher 3 are being CPU bottlenecked, at 4k ultra, by 8700k CPUs @ 5+GHz, when pushing a 2080Ti!
> 
> This is definitely promising news for ALL PC gamers! (ie - we'll FINALLY be getting games developed from the ground up on DX12 and/or Vulkan (instead of using the poor DX11 ---> DX12 code translations that we see in games like BF1)


Whoever made that statement has no idea what they are talking about. 

You don't go from 187 FPS at 1080p:

[benchmark screenshot]

to 87 FPS at 4K:

[benchmark screenshot]

because of a "CPU" bottleneck, jesus.
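
A toy model makes the point: with a fixed CPU-side frame rate and a GPU rate that scales inversely with pixel count, a CPU bottleneck caps FPS but cannot make it fall at higher resolution. The numbers below are illustrative, not measured:

```python
# Toy frame-rate model: achieved FPS is the slower of a fixed CPU rate and a
# GPU rate that scales inversely with rendered pixels. Illustrative only.
def fps(cpu_cap, gpu_fps_1080p, pixel_ratio):
    return min(cpu_cap, gpu_fps_1080p / pixel_ratio)

print(fps(cpu_cap=187, gpu_fps_1080p=348, pixel_ratio=1))  # 1080p: capped by the CPU at 187
print(fps(cpu_cap=187, gpu_fps_1080p=348, pixel_ratio=4))  # 4K (4x the pixels): GPU-bound at 87
```

If the CPU were the limit in both cases, both resolutions would sit at the same cap; the halving-plus at 4K means the GPU is doing the limiting there.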


----------



## BigMack70

Have any preorders direct from nvidia started shipping yet? I've seen some EVGA and some ASUS in the wild but nothing else.


----------



## Vlada011

Guys, is this true? A member posted it on TechPowerUp.

4K performance increase and 2 generation comparison with starting MSRPs:

1.
1080Ti - 2080: 9% for same price (or +50$ for the 2080)
980Ti - 1080 was 39% for -50$
780Ti - 980 was 9% for -150$

2.
1080Ti - 2080Ti: 39% for +300$ (more like 350$ actually)
980Ti - 1080Ti was 83% (!!!) for +50$
780Ti - 980Ti was 43% for -50$

This is absolutely shocking, insane, if the comparison is true.
Upgrading a 980 Ti to a 1080 Ti gave you 83% more performance for $50 more, while going from a 1080 Ti to a 2080 Ti gives roughly 35-39% for $350 more.
What is the excuse for this, if it's true?

If it's true, we're spending a fortune and getting little back upgrading our rigs at a moment when there is no competition.
Intel is preparing to give back everything they took from us, and more, probably for less than 500 euros, because of competition: they added 2 cores, then that wasn't enough so they added 2 more and even brought back solder, and such a CPU would have cost $1,000-1,200 before Zen showed up.
NVIDIA is asking $300 more than the already very expensive GTX 1080 Ti, pushing us toward the 1080 Ti because of the price difference. And what if we buy a GTX 1080 Ti for 600-650 and AMD then beats it by 30% for the same money? What then? By that point Pascal will be more than two years old.

NVIDIA would drop the price of the RTX 2080 Ti, say "excuse me, that one was similar to Pascal," and launch a real beast, Volta 2 or whatever, for the same price as the RTX 2080 Ti. That is what would happen if the competition (AMD) succeeds.
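If the quoted figures are taken at face value, the value-per-dollar comparison can be sanity-checked with a quick sketch (the percentages and price deltas are the poster's numbers, not verified benchmarks):

```python
# 4K performance gain (%) and launch-price change ($) vs. the prior flagship,
# as quoted in the post above -- the poster's figures, not verified benchmarks.
upgrades = {
    "980 Ti -> 1080 Ti":  (83, 50),
    "1080 Ti -> 2080 Ti": (39, 300),
}

for name, (gain_pct, price_delta) in upgrades.items():
    # crude value metric: extra performance (%) per extra $100 spent
    value = gain_pct / (price_delta / 100)
    print(f"{name}: {gain_pct}% faster for ${price_delta} more "
          f"-> {value:.0f}% per extra $100")
```

By that metric the 1080 Ti step delivered roughly twelve times the performance per extra dollar of the 2080 Ti step, which is the poster's complaint in a single number.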


----------



## Baasha

BigMack70 said:


> Have any preorders direct from nvidia started shipping yet? I've seen some EVGA and some ASUS in the wild but nothing else.


Nothing yet. 

Have you called Customer Service? They just gave me the runaround saying "we will notify you."


----------



## BigMack70

Baasha said:


> Nothing yet.
> 
> Have you called Customer Service? They just gave me the runaround saying "we will notify you."


I know better than to waste my time. They aren't going to tell me anything I don't already know and I'll just get pissed off.


----------



## xer0h0ur

Glerox said:


> AFAIK, Nvidia blocks the editing of the vBIOS. I haven't seen anyone able to change the vBIOS of their Pascal GPU.
> 
> The only way to unlock the power limit was to do a hardware shunt mod (which I did, and it worked). I'm sure we will have to do the same on Turing to unlock the power limit.


Aye, shunt mod it will be then.



Rob w said:


> Hi arrow,
> More specifically: I shunt modded my Titan V and blew it up! A resistor shorted against the back of the waterblock. It has been repaired, thankfully; hence, if doing the mod with 5 mΩ resistors, make sure there is clearance under the waterblock.
> I will be doing the two from the power connectors, but I need to research further, as there seems to be a further section that monitors power draw.


What about shunt modding then applying an insulator over it to eliminate the block being able to short it?


----------



## tpi2007

tconroy135 said:


> What I have read is that for 2080 Ti there are 2 reported 'BINs."
> 
> 1. A BIN for FE Cards and Factory Overclocked AIB Cards
> 2. *A BIN for MSRP Cards (NVIDIA)* and Non-Factory Overclocked AIB Cards



The part in bold does not exist. Nvidia is not making any reference-spec products; those lower-quality chips, with 90 MHz less clock speed and 10 W less board power, will be for AIBs to sell at (or nearer to) MSRP, and those cards can't be overclocked from the factory (the buyer can still try with the usual software).


----------



## Rob w

BigMack70 said:


> Have any preorders direct from nvidia started shipping yet? I've seen some EVGA and some ASUS in the wild but nothing else.


They announced on their forum that they will give an update in the next few days.


----------



## Rob w

xer0h0ur said:


> Aye, shunt mod it will be then.
> 
> 
> 
> What about shunt modding then applying an insulator over it to eliminate the block being able to short it?


Tape is on order already; forgot to mention that.


----------



## Mario2k

The RTX 2080 Ti is a nice graphics card, but £1,200 is daylight robbery.
I hope cryptocurrency mining on GPUs dies off completely soon, so that graphics card prices fall a lot.


----------



## kx11

Vlada011 said:


> Guys, is this true? A member posted it on TechPowerUp.
> 
> 4K performance increase and 2 generation comparison with starting MSRPs:
> 
> 1.
> 1080Ti - 2080: 9% for same price (or +50$ for the 2080)
> 980Ti - 1080 was 39% for -50$
> 780Ti - 980 was 9% for -150$
> 
> 2.
> 1080Ti - 2080Ti: 39% for +300$ (more like 350$ actually)
> 980Ti - 1080Ti was 83% (!!!) for +50$
> 780Ti - 980Ti was 43% for -50$
> 
> This is absolutely shocking, insane, if the comparison is true.
> Upgrading a 980 Ti to a 1080 Ti gave you 83% more performance for $50 more, while going from a 1080 Ti to a 2080 Ti gives roughly 35-39% for $350 more.
> What is the excuse for this, if it's true?
> 
> If it's true, we're spending a fortune and getting little back upgrading our rigs at a moment when there is no competition.
> Intel is preparing to give back everything they took from us, and more, probably for less than 500 euros, because of competition: they added 2 cores, then that wasn't enough so they added 2 more and even brought back solder, and such a CPU would have cost $1,000-1,200 before Zen showed up.
> NVIDIA is asking $300 more than the already very expensive GTX 1080 Ti, pushing us toward the 1080 Ti because of the price difference. And what if we buy a GTX 1080 Ti for 600-650 and AMD then beats it by 30% for the same money? What then? By that point Pascal will be more than two years old.
> 
> NVIDIA would drop the price of the RTX 2080 Ti, say "excuse me, that one was similar to Pascal," and launch a real beast, Volta 2 or whatever, for the same price as the RTX 2080 Ti. That is what would happen if the competition (AMD) succeeds.





Why?! Everything is getting a higher price tag; you saw the $1,450 iPhone, right?



I think it's not overpriced at all. Ray-tracing cores and faster memory at the same power draw as the older generation is an amazing achievement.


----------



## Dreamliner

kx11 said:


> I think it's not overpriced at all. Ray-tracing cores and faster memory at the same power draw as the older generation is an amazing achievement.


780 Ti ($700) = 970 ($329)
980 Ti ($649) = 1070 ($379)
1080 Ti ($699) = 2080 ($699)


The xx70 is supposed to equal the previous gen's xx80 Ti, the xx80 is supposed to be ~30% faster than the xx70, and the xx80 Ti ~30% faster than the xx80. Nvidia shifted the performance of the whole 20-series DOWN and charged MORE for it. Quite amazing indeed.
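The tiering rule of thumb above compounds as follows (a minimal sketch of the poster's ~30%-per-tier claim, not measured data):

```python
# Poster's rule of thumb: each tier step (xx70 -> xx80 -> xx80 Ti) adds ~30%.
# Compounding from a baseline xx70 = 1.0:
xx70 = 1.0
xx80 = xx70 * 1.30      # ~30% over the xx70
xx80_ti = xx80 * 1.30   # ~30% over the xx80

print(round(xx80, 2), round(xx80_ti, 2))  # 1.3 1.69
```

So under the old pattern, a new xx80 Ti would land roughly 69% above the previous xx70-class card; the complaint is that the 20-series broke this ladder while raising prices.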


----------



## tconroy135

kx11 said:


> Why?! Everything is getting a higher price tag; you saw the $1,450 iPhone, right?
> 
> 
> 
> I think it's not overpriced at all. Ray-tracing cores and faster memory at the same power draw as the older generation is an amazing achievement.


I don't mind paying the high price, because I want the standard performance gain. But the RT cores and Tensor cores are just being beta tested with the 20xx series of cards.

Watch the 7nm cards have 4x the count of those cores.


----------



## Glerox

Damn, the FE cards are really power limited. It's a shame they are limited to 123% while some AIBs go up to 130%.

Now I'm just waiting for some specialist/engineer to show us which resistors we can shunt with liquid metal 
Hopefully we will know soon!


----------



## GraphicsWhore

Apparently EVGA didn't send enough stock to Amazon. A bunch of people on the EVGA forums are saying their 2080 Ti was delayed, me included. ***? I love EVGA, but this seems like a silly mistake.


----------



## dboythagr8

Got my 2080Ti. Installed and up and running. Ordered direct from EVGA.


----------



## Dreamliner

dboythagr8 said:


> Got my 2080Ti. Installed and up and running. Ordered direct from EVGA.


I think you're the first one. I ordered an EVGA one from Newegg, still waiting. Also ordered a FE version, still waiting for that too.


----------



## rush2049

Glerox said:


> Damn the FE cards are really power limited. It's a shame they are limited to +123% but some AIB go up to +130%.
> 
> Now I'm just waiting for some specialist/engineer to show us which resistors we can shunt with liquid metal
> Hopefully we will know soon!


Could we get this info added to the first post?
Which cards go to what amount on the power slider....
(This is an important overclocking factor to know)


----------



## Glerox

rush2049 said:


> Could we get this info added to the first post?
> Which cards go to what amount on the power slider....
> (This is an important overclocking factor to know)


AFAIK, so far:

FE: 123%
Asus ROG Strix: 125%
EVGA XC: 130%
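For context, those slider percentages translate into absolute board-power caps like this (a minimal sketch; the 250 W figure is the reference TDP, the FE's default is commonly reported as 260 W, and AIB defaults vary per BIOS):

```python
# Convert a power-limit slider percentage into an absolute board-power cap.
def power_cap_watts(default_tdp_w: float, slider_pct: float) -> float:
    """Board power allowed at a given slider setting, in watts."""
    return default_tdp_w * slider_pct / 100

print(power_cap_watts(250, 123))  # reference 250 W board at 123% -> 307.5
print(power_cap_watts(260, 130))  # a 260 W default with a 130% BIOS -> 338.0
```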


----------



## BigMack70

Hopefully we'll get a custom BIOS that allows 130% or more on FE cards.


----------



## ChiTownButcher

Dreamliner said:


> kx11 said:
> 
> 
> 
> I think it's not overpriced at all. Ray-tracing cores and faster memory at the same power draw as the older generation is an amazing achievement.
> 
> 
> 
> 780 Ti ($700) = 970 ($329)
> 980 Ti ($649) = 1070 ($379)
> 1080 Ti ($699) = 2080 ($699)
> 
> 
> The xx70 is supposed to equal the previous gen's xx80 Ti, the xx80 is supposed to be ~30% faster than the xx70, and the xx80 Ti ~30% faster than the xx80. Nvidia shifted the performance of the whole 20-series DOWN and charged MORE for it. Quite amazing indeed.

And this is why I won't buy one, and I hope the whole 20xx series is a financial disaster for Nvidia.


----------



## dboythagr8

Dreamliner said:


> I think you're the first one. I ordered an EVGA one from Newegg, still waiting. Also ordered a FE version, still waiting for that too.


Some pics


----------



## Dreamliner

The EVGA card looks better than I thought it would. I still have concerns about dust buildup behind the clear shroud; I wish it were black.

Can you actually shut off ALL the LEDs? EVGA loves to make cards that leave a couple of LEDs on no matter what, like the 1080 Classified having its row of white LEDs on ALL the time, or the 1080 Ti FTW3 having the FTW3 logo lit ALL the time.


----------



## Menthol

Yes, you can turn off the LEDs or change their color.


----------



## raider89

ChiTownButcher said:


> And this is why I wont buy one and hope that the whole 20xx series is a financial disaster for Nvidia


Well, I can assure you it's not a financial disaster, given the rarity and everything being sold out, lol.


----------



## ChiTownButcher

raider89 said:


> ChiTownButcher said:
> 
> 
> 
> And this is why I won't buy one, and I hope the whole 20xx series is a financial disaster for Nvidia.
> 
> 
> 
> Well, I can assure you it's not a financial disaster, given the rarity and everything being sold out, lol.

Just because a company most likely underproduced and sold out its overpriced new toy doesn't mean it's a hit. Given everything going on, it's more likely they did this to sell off all of the leftover 1080 Tis, while sliding in the fact that the 2080's performance is what we would normally have seen in a 2070, the 2080 Ti should have been the 2080, and so on.

While I like Nvidia products, this whole launch feels like a cash grab and a clearing out of old stock.


----------



## TwinParadox

Please, is there anyone who can attach (or send me in a private message) a valid BIOS dump of a 2080 Ti, so I can add UEFI/GOP support to the GOPUpdater tool?

Thanks in advance.


----------



## NewType88

@dboythagr8 Nice! Could you measure the distance between the screws that hold on the heatsink? It looks pretty accessible. I'd appreciate it!


----------



## CallsignVega

Glerox said:


> AFAIK yet :
> 
> FE : 123%
> Asus ROG Strix : 125%
> EVGA XC : 130%


I'm glad I swapped out the FE for the FTW3; the latter should have a 130% power limit. I hate constantly bouncing off that limit. The more headroom, the better!


----------



## FreeElectron

Are there any compiled review links for the RTX 2080 Ti AIB cards?


----------



## illidan2000

I'm looking for a review of the Gigabyte 2080 Ti Gaming OC.
Has anyone seen one?


----------



## Jbravo33

dboythagr8 said:


> Got my 2080Ti. Installed and up and running. Ordered direct from EVGA.


U lucky mofo! I got delayed, but I did get my NVLink bridge xD. Wah wah wah


----------



## dboythagr8

Dreamliner said:


> The EVGA card looks better than I thought it would. I still have concerns about dust buildup behind the clear shroud. I wish it was black.
> 
> Can you actually shut off ALL the LEDs? EVGA loves to make cards that leave a couple LED's on no matter what. Like the 1080 Classified having the row of white LEDs on ALL the time or the 1080Ti FTW3 having the FTW3 logo on ALL the time.


It looks great in person. And yes, you can shut off all of the LEDs; I just did it. In Precision X1 there's an LED toggle that brings up the program below. I chose "static off," and it shut everything off.

The blue you see in the pic is a reflection from the colors and other stuff.


----------



## ENTERPRISE

Great thread. I am looking to buy soon, possibly in SLI (NVLink), and will use the OP as a guide. Like most, I would like a power limit up to 130%; nobody likes limits, lol. So it looks like I will be waiting for some custom-PCB cards to be released, as they are all reference thus far, unless I missed something?


----------



## carlhil2

I am waiting on some water cooling and, hopefully, shunting results... anyone know if an EK uniblock would fit?


----------



## kx11

this WB will go nicely with my PC case


----------



## dboythagr8

ENTERPRISE said:


> Great thread. I am looking to buy soon, possibly in SLI (NVLink), and will use the OP as a guide. Like most, I would like a power limit up to 130%; nobody likes limits, lol. So it looks like I will be waiting for some custom-PCB cards to be released, as they are all reference thus far, unless I missed something?


EVGA has a BIOS update out for their 20 series cards. 

https://forums.evga.com/EVGA-GeForce-RTX-2080-Ti-2080-XCXC-Ultra-BIOS-Update-m2858793.aspx


Getting ready to apply it to mine.


----------



## xer0h0ur

@dboythagr8 You lucky SOB, I'm peanut butter and jelly. Hopefully you already pushed the overclock before updating the vBIOS to be able to tell us how much more MHz you picked up with the extra power limit.


----------



## raider89

ChiTownButcher said:


> raider89 said:
> 
> 
> 
> 
> 
> ChiTownButcher said:
> 
> 
> 
> And this is why I won't buy one, and I hope the whole 20xx series is a financial disaster for Nvidia.
> 
> 
> 
> Well, I can assure you it's not a financial disaster, given the rarity and everything being sold out, lol.
> 
> 
> Just because a company most likely underproduced and sold out its overpriced new toy doesn't mean it's a hit. Given everything going on, it's more likely they did this to sell off all of the leftover 1080 Tis, while sliding in the fact that the 2080's performance is what we would normally have seen in a 2070, the 2080 Ti should have been the 2080, and so on.
> 
> While I like Nvidia products, this whole launch feels like a cash grab and a clearing out of old stock.

That's good speculation, but false.


----------



## raider89

xer0h0ur said:


> @dboythagr8 You lucky SOB, I'm peanut butter and jelly. Hopefully you already pushed the overclock before updating the vBIOS to be able to tell us how much more MHz you picked up with the extra power limit.


Gonna be so fun putting the block on this and seeing how far we can push this.


----------



## xer0h0ur

raider89 said:


> Gonna be so fun putting the block on this and seeing how far we can push this.


I know, although the wait is bad right now. It's got me looking at water chillers, and I need to stop spending money right meow xD


----------



## BigMack70

xer0h0ur said:


> I know, although the wait is bad right now. It's got me looking at water chillers, and I need to stop spending money right meow xD


The wait wouldn't be so bad if Nvidia could just provide some concrete answers about when the cards are going to ship out. If it's going to be 1-2 weeks, I'll hold on, but if it's 3+ weeks, I want to cancel my order and just find one somewhere else. But I got in so early in the pre-order line that I don't want to cancel without good reason.


----------



## xer0h0ur

That makes two of us.


----------



## dboythagr8

Haven't updated the BIOS yet; been doing Time Spy benches. I have not touched memory yet. Power is at max (currently 112%) and fans are set to 75%. The following are GPU scores only from Time Spy.

EVGA 2080TI XC

+145 : 14,293
+165 : 14,533
+185 : 14,524
+165 : Fail
+165 w/ voltage slider at max : 14,366
+175 w/ voltage slider at max: 14,341
+185 w/ voltage slider at max: 14,465

edit: discovered a weird anomaly. For some reason my PG27UQ was running at 82 Hz vs. the 120 Hz I thought it was at. I'm not sure if that has any bearing on scores, but I rebooted and I'm now back to 120 Hz. Going to see if that makes any difference.

edit 2: ran again at +185, voltage max, and got 14,481.
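For anyone skimming, the table above reduces to a best pick like so (a quick sketch; the labels are mine, and the failed +165 retest is excluded):

```python
# GPU-only Time Spy scores per core-clock offset, copied from the post above.
scores = {
    "+145": 14293,
    "+165": 14533,
    "+185": 14524,
    "+165 @ max voltage": 14366,
    "+175 @ max voltage": 14341,
    "+185 @ max voltage": 14465,
}

best = max(scores, key=scores.get)
gain_pct = (scores[best] - scores["+145"]) / scores["+145"] * 100
print(best, scores[best])                    # +165 14533
print(f"{gain_pct:.2f}% over the +145 run")  # 1.68% over the +145 run
```

Roughly a 1.7% spread across the whole table, which is why chasing the last offset step rarely matters outside benchmarking.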


----------



## xer0h0ur

I don't think the monitor should affect the results of a synthetic benchmark. It would be weird if it did, and I would like to know why if that were the case.


----------



## animeowns

Nice, my NVLink came too.


----------



## dboythagr8

xer0h0ur said:


> I don't think the monitor should affect the results of a synthetic benchmark. It would be weird if it did, and I would like to know why if that were the case.


For sure. It was just odd, and for my peace of mind I wanted to do it again correctly.

Updated the BIOS with the 130% power limit, and it definitely helped. Ran Time Spy again at +145 and got my highest GPU score: 14,594. That's higher than when I was running at +165 on the 112%-power-limited BIOS. My overall score was 12,141, if anyone was curious.


----------



## ENTERPRISE

So I am thinking of going with this model: https://www.evga.com/products/product.aspx?pn=11G-P4-2487-KR



(EVGA FTW ULTRA)


It is a 3-slot card due to the huge heatsink, which means that with NVLink there would not be much space between the cards, which is a shame. Beasts for sure, though.


----------



## dboythagr8

Think I've settled on a stable core OC of +175. Now working on memory. Just ran +175/+700 and got a GPU score of 15,047 in Time Spy.


----------



## xer0h0ur

Is that "Scanner" utility available to the public yet? I haven't seen it. Would be interesting to see what "stable OC" it came up with versus your own testing.


----------



## xer0h0ur

According to HardOCP, Scanner is included in Precision X1; they named it "VF Curve Tuner" in the software.


----------



## Ford8484

Bloody hell, it's still at Preorder on Newegg... I want my Zotac!! The Acer X27 is hungry... anyone have theirs yet and want to share their overclocking results?


----------



## Kronos8

carlhil2 said:


> I am waiting on some water cooling and, hopefully, shunting results....anyone know if a ek uniblock would fit?


Interesting question; I also want to know that. At the moment, both the VGA Supremacy and the Thermosphere are considered incompatible with both the 2080 and 2080 Ti FE. The die size of the 2080 Ti is 31 mm x 25 mm, and the base plate of the Supremacy is 49.5 mm x 49.5 mm (https://www.ekwb.com/shop/EK-PSS/EK-PSS-3831109805145.pdf), so that is more than enough. The main issue is the mounting. Unfortunately, I could not find the dimension of the mounting holes close to the GPU die. If that dimension is not between 53.2 mm and 58.44 mm (https://www.ekwb.com/shop/EK-PSS/EK-PSS-3831109805145.pdf), mounting is not possible and a new metal bracket (???) would have to be constructed. I'll ask in the EK thread, but I have a strong feeling the official response will be "USE FULL COVER".....


----------



## stefxyz

So which custom cards can do a 130% power target? The only one I know of is the EVGA GeForce RTX 2080 Ti XC ULTRA GAMING. The Founders Edition can only do 125%...


----------



## Kronos8

carlhil2 said:


> I am waiting on some water cooling and, hopefully, shunting results....anyone know if a ek uniblock would fit?


Well, I did find the diagonal distance of the holes on the 2080 Ti, which is 98 mm, so the actual side spacing of the mounting holes is 98/√2 ≈ 69 mm. That makes it impossible to mount the VGA Supremacy without a new mounting plate from EK.
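The hole-spacing arithmetic above checks out (a quick sketch using only the figures quoted in these posts; the 53.2-58.44 mm range is the bracket spec from the earlier EK PDF):

```python
import math

# For a square mounting-hole pattern, side spacing = measured diagonal / sqrt(2).
diagonal_mm = 98.0                    # hole-to-hole diagonal reported for the 2080 Ti
side_mm = diagonal_mm / math.sqrt(2)  # ~69.3 mm

# Mounting range the EK VGA Supremacy bracket supports, per the earlier post
lo, hi = 53.2, 58.44
fits = lo <= side_mm <= hi

print(f"side spacing: {side_mm:.1f} mm, bracket fits: {fits}")
```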


----------



## TahoeDust

ENTERPRISE said:


> So I am thinking of going with this model : https://www.evga.com/products/product.aspx?pn=11G-P4-2487-KR
> 
> 
> 
> (EVGA FTW ULTRA)
> 
> 
> It is a 3-slot card due to the huge heatsink, which means that with NVLink there would not be much space between the cards, which is a shame. Beasts for sure, though.


That is the card I have pre-ordered. Hopefully just a few more weeks until it starts shipping.


----------



## Hulk1988

I got a 2080 Ti FE. The card is much better than the 1080 Ti FE, and the dual fans are fine. I am now waiting for the Trio X for NVLink; the NVLink bridge already came Thursday.



stefxyz said:


> So which custom cards can do a 130% power target? The only one I know of is the EVGA GeForce RTX 2080 Ti XC ULTRA GAMING. The Founders Edition can only do 125%...


The MSI Trio X has a 300 W default, and you can raise the PowerLimit slider to 110%, which means 330 W for the Trio X.


----------



## Glerox

This confirms that NVLink is the same old thing as SLI, just with a new name:






Shadow of the Tomb Raider crashing, LMAO. It's THE game meant to showcase the new RTX cards.


----------



## marcolisi

Hey guys, what is the best 2080 ti custom model card for overclocking ?


----------



## Foxrun

Glerox said:


> This confirms that NVLink is the same old thing as SLI, just with a new name:
> 
> https://www.youtube.com/watch?v=84OkcOYmXOk
> 
> Shadow of the Tomb Raider crashing, LMAO. It's THE game meant to showcase the new RTX cards.


Likely needs a patch etc. Did you preorder a Ti?


----------



## looniam

carlhil2 said:


> I am waiting on some water cooling and, hopefully, shunting results....anyone know if a ek uniblock would fit?


Steve at GN measured 98.5 mm diagonal in his teardown video; quickly using a measuring tape, I get 85 mm at most on the VGA Supremacy I have.


----------



## dboythagr8

xer0h0ur said:


> According to HardOCP Scanner is included in PrecisionX1 and they named it "VF Curve Tuner" in the software.



The tuner gave me +145. It takes about 15-20 minutes to run, and it only does the core clock, not memory. Going above that, I've been adjusting on my own; for gaming, +145 is rock solid, but I'm thinking I can do slightly more. Then again, benchmarking isn't always indicative of a stable OC. After downloading the new BIOS and upping to the 130% power level, the highest I got was +175 core and +700 memory, which resulted in a Time Spy GPU score of 15,047.

May try and push some more on the clocks later on, but I'd really like to actually play some games!

I did play Shadow of the Tomb Raider yesterday at a rock solid 60, 4k, maxed out, and in HDR on my PG27UQ. It was a great, high end experience. The IQ was crazy.


----------



## xer0h0ur

dboythagr8 said:


> The tuner gave me +145. It takes about 15-20 minutes to run, and it only does the core clock, not memory. Going above that, I've been adjusting on my own; for gaming, +145 is rock solid, but I'm thinking I can do slightly more. Then again, benchmarking isn't always indicative of a stable OC. After downloading the new BIOS and upping to the 130% power level, the highest I got was +175 core and +700 memory, which resulted in a Time Spy GPU score of 15,047.
> 
> May try and push some more on the clocks later on, but I'd really like to actually play some games!
> 
> I did play Shadow of the Tomb Raider yesterday at a rock solid 60, 4k, maxed out, and in HDR on my PG27UQ. It was a great, high end experience. The IQ was crazy.


Well like I said earlier, I'm peanut butter and jelly. Enjoy your card. Testing the limits and pushing it as far as it goes is fun but we are after all buying these things to game on them. Get your game on.


----------



## Glerox

Foxrun said:


> Likely needs a patch etc. Did you preorder a Ti?


Yup, but just one this time.
Titan Xp SLI was a pain.
But I respect those chasing ultimate performance. I had hoped NVLink would be a new "plug and play" mGPU solution, but it's the same as SLI.


----------



## dboythagr8

Getting 60+ fps at 4K makes such a huge difference; I cannot state it enough. The fact that you can get it at max settings makes it even sweeter.

I played Shadow of the Tomb Raider earlier on a custom preset: shadows at Ultra, LOD at Ultra, and other settings a notch higher than what the "Highest" preset offers, all at 4K with HDR enabled. The benchmark still gave me 61 fps. This is without any 20-series-specific enhancements and on the first RTX driver tree (411). Once they optimize the drivers further and start rolling out DLSS, the 2080 Ti is going to REALLY sing. Just seeing this kind of performance already is amazing; yes, it will cost you to achieve it, but what an experience.

On another note, I gave up on EVGA Precision X1; it's just not ALL the way there. I downloaded Afterburner 4.6.0 Beta 9, which has RTX support, and it's just a better experience. It still has the OC scanner too; for those who prefer Afterburner, press CTRL+F and it'll take you to the OC scanner screen.


----------



## xer0h0ur

I've always been an Afterburner guy myself but I had no idea it was already supporting RTX. Good to know.


----------



## dboythagr8

xer0h0ur said:


> I've always been an Afterburner guy myself but I had no idea it was already supporting RTX. Good to know.



Yep -- Has to be 4.6.0 Beta 9 version

https://www.guru3d.com/files-details/msi-afterburner-beta-download.html

Added NVIDIA Turing GPU architecture support:
- Added voltage control for reference-design NVIDIA GeForce RTX 20x0 series graphics cards
- Advanced GPU Boost control for NVIDIA GeForce RTX 20x0 series graphics cards. The extended voltage/frequency curve editor on GeForce RTX 20x0 family cards allows you to tune additional piecewise power/frequency-floor and temperature/frequency-floor curves; control points on those new curves allow you to control the GPU Boost power and thermal throttling algorithms more precisely than the traditional power limit and thermal limit sliders
- The hardware abstraction layer has been revamped to support multiple independent fans per GPU, owing to the dual-fan design of reference NVIDIA GeForce RTX 20x0 series cards and the introduction of native dual-fan control in NVAPI. Both fans of an NVIDIA GeForce RTX 20x0 can now be monitored independently in the hardware monitoring module and controlled synchronously in manual mode
- Added NVIDIA Scanner technology support

Improved hardware monitoring module:
- Added a thermal offset for CPU temperature monitoring on AMD Ryzen 7 2700X processors
- The "Pagefile usage" graph in the hardware monitoring module has been renamed to "Commit charge"

Improved hardware control shared memory interface. Over the past years, external applications like MSI Remote Server have used this interface to tune GPU hardware settings remotely; the improvements are intended to allow external stress-testing and automatic-overclocking applications to connect to MSI Afterburner via this interface:
- The voltage/frequency curve on NVIDIA Pascal and newer NVIDIA GPU architectures is now accessible via the hardware control shared memory interface
- A new shared memory command allows MSI Afterburner to load hardware settings from an external application without immediately applying the new settings to the GPU
- Added a notification message allowing external applications to notify MSI Afterburner about a new command written to hardware control shared memory. Without the notification, MSI Afterburner executes external commands on each hardware polling iteration, as before. Please refer to the SDK and the MACMSharedMemorySample source code for a notification message usage example
- Added hardware identification info to GPU entries in hardware control shared memory, allowing external applications to reconcile their own enumerated devices with the logical GPUs enumerated by MSI Afterburner
- Hardware control shared memory is now refreshed on delayed fan speed readback events

A new bundled MSI Overclocking Scanner application is now included in the MSI Afterburner distribution:
- MSI Overclocking Scanner is currently supported on NVIDIA RTX 20x0 series graphics cards under 64-bit operating systems only. On such systems, you may activate the scanner directly from the voltage/frequency curve editor window
- MSI Overclocking Scanner is powered by NVIDIA Scanner technology, which uses proprietary algorithms to quickly and reliably test a manually overclocked GPU for stability, or to find the maximum stable GPU overclock automatically with a single click. The scanner uses an embedded NVIDIA test load to stress the GPU


----------



## ClashOfClans

dboythagr8 said:


> Getting 60+ fps at 4K makes such a huge difference; I cannot state it enough. The fact that you can get it at max settings makes it even sweeter.
> 
> I played Shadow of the Tomb Raider earlier on a custom preset: shadows at Ultra, LOD at Ultra, and other settings a notch higher than what the "Highest" preset offers, all at 4K with HDR enabled. The benchmark still gave me 61 fps. This is without any 20-series-specific enhancements and on the first RTX driver tree (411). Once they optimize the drivers further and start rolling out DLSS, the 2080 Ti is going to REALLY sing. Just seeing this kind of performance already is amazing; yes, it will cost you to achieve it, but what an experience.
> 
> On another note, I gave up on EVGA Precision X1; it's just not ALL the way there. I downloaded Afterburner 4.6.0 Beta 9, which has RTX support, and it's just a better experience. It still has the OC scanner too; for those who prefer Afterburner, press CTRL+F and it'll take you to the OC scanner screen.


Very encouraging news; congrats on your purchase. I should have waited patiently. Even though I got a cheap 1080 Ti, it cannot max everything out at 4K. In fact, even in Fallout 4 I had to turn off AA and a couple more settings. It could also be a bug in certain parts of the cities, as it seems to hold 60 fps everywhere else, but I do wonder if the 2080 Ti can max out FO4 with the high-resolution texture pack at 4K in the cities. I'm also starting to wonder if my Ryzen 1800X is bottlenecking the GPU in the cities. If you have FO4, download the high-resolution texture pack (it's only 55 GB...) and test it for me, thanks.


----------



## Alonjar

stefxyz said:


> So which custom cards can do a 130% power target? The only one I know of is the EVGA GeForce RTX 2080 Ti XC ULTRA GAMING. The Founders Edition can only do 125%...


MSI Trio can do 130.


----------



## Hulk1988

Sorry, not true as far as I know. The default TDP is higher, at 300 W, and you can only raise the PowerLimit slider to 110%, which means 330 W; that is lower than what the new EVGA BIOS allows.


----------



## Emmett

Does anyone know if the ASUS Turbo is going to have the so-called "B"-grade chip in it, since it's a non-OC out-of-the-box card?

I don't want to get a gimped Ti.


----------



## ssgwright

I wish Gigabyte would let us have the updated nvflash build they're using.


----------



## Krzych04650

Emmett said:


> Does anyone know if the ASUS Turbo is going to have the so-called "B"-grade chip in it, since it's a non-OC out-of-the-box card?
> 
> I don't want to get a gimped Ti.


That is the idea, if the rumors are to be believed: reference clocks will mean a weaker chip. The ASUS Turbo is also a blower cooler, so it is not only reference-clocked but also at the very bottom of the pack in thermal design; if the weaker "B" chips are going anywhere, it has to be here, as there isn't really anything below it. But I don't think you should be concerned. The difference between the chips is probably not massive, and even so, you won't get anywhere with OC on this cooler anyway; maintaining the minimum advertised boost will be a success, assuming you don't ramp the fan up to a level that makes your PC launch into space.


----------



## Glerox

Awesome video from Der8auer!!


https://youtu.be/9_G7tWAh5ms



He shows how to do the shunt mod. Soldering is quite complicated; I would still do liquid metal. Haven't had any problems on the Titan XP with it.

Basically he also shows that I ordered a water block for nothing lol...

He went from 2070MHz to 2100MHz going from the stock cooler to a perfect temp of 20°C...

From what I recall the temperature scaling is even worse than Pascal.

Oh well... at least it will look good lol


----------



## xer0h0ur

Glerox said:


> Awesome video from Der8auer!!
> 
> https://youtu.be/9_G7tWAh5ms
> 
> He shows how to do the shunt mod. Soldering is quite complicated; I would still do liquid metal. Haven't had any problems on the Titan XP with it.
> 
> Basically he also shows that I ordered a water block for nothing lol...
> 
> He went from 2070MHz to 2100MHz going from the stock cooler to a perfect temp of 20°C...
> 
> From what I recall the temperature scaling is even worse than Pascal.
> 
> Oh well... at least it will look good lol


You actually make no sense. Even he said that you can see somewhere around 100+ MHz over the FE cooler while using a waterblock. It's obvious from all of the reviews I saw and read that overclocking these cards will quickly drop your clocks on the FE cooler with any sustained loads. You know, like gaming instead of just quick benchmarking. Also, you linked a video testing the 2080, and everything I am talking about is for the 2080 Ti.


----------



## Glerox

xer0h0ur said:


> You actually make no sense. Even he said that you can see somewhere around 100+ MHz over the FE cooler while using a waterblock. Its obvious from all of the reviews I saw and read that overclocking these cards will quickly drop your clocks on the FE cooler with any sustained loads. You know, like gaming instead of just quick benchmarking. Also, you linked a video testing 2080 and everything I am talking about is for the 2080 Ti.


Sorry my friend,

He says like 4 times that the temp scaling and the power scaling are bad.

He says DON'T expect to get 100MHz with a waterblock.

And finally, look at the graph, it's all in there.

2070MHz with the stock cooler.
2100MHz with the custom cooler at 20°C + the shunt mod.

That's all there is!

The benchmark he's using is quite long, so those are really the stable clock speeds.


----------



## CallsignVega

Wow that is pretty bad. A whopping 0.4% performance increase going from stock card at 100% fan to 20C dry ice. Water cooling on its last legs with these newer chips.


----------



## skingun

Hardly true. Many water-cool for the reduced noise over the overclocking benefits. Some do both.

Anyone had their Founders cards shipped yet?


----------



## stefxyz

Glerox said:


> Awesome video from Der8auer!!
> 
> https://youtu.be/9_G7tWAh5ms
> 
> He shows how to do the shunt mod. Soldering is quite complicated; I would still do liquid metal. Haven't had any problems on the Titan XP with it.
> 
> Basically he also shows that I ordered a water block for nothing lol...
> 
> He went from 2070MHz to 2100MHz going from the stock cooler to a perfect temp of 20°C...
> 
> From what I recall the temperature scaling is even worse than Pascal.
> 
> Oh well... at least it will look good lol


Whether we ordered a WB for nothing still remains to be seen once the RT cores, Tensor cores and the normal cores all work at once. I don't think we know yet whether they add heat to the mix.


----------



## massimo40mq

Good morning guys, will there be an RTX 2080 Ti 'Kingpin' Edition for liquid cooling in the EVGA store? Thank you


----------



## Foxrun

skingun said:


> Hardly true. Many water cool for the reduced noise over the overclocking benefits. Some do both.
> 
> Anyone had there founders cards shipped yet?


No. There is a small riot on the Nvidia forums as someone contacted customer support and the reply they got was along the lines of 1-2 weeks. I'm curious if this has anything to do with the tariffs. I'm teetering on canceling.


----------



## CallsignVega

skingun said:


> Hardly true. Many water cool for the reduced noise over the overclocking benefits. Some do both.
> 
> Anyone had there founders cards shipped yet?


True on a blower card. Not nearly as much this gen, seeing as the FE cards have MUCH quieter coolers than previous gen. I doubt they're even louder than a lot of people's water-cooled PCs.

I'm not saying don't water-cool; I'm just saying the benefits aren't nearly what they once were when all cards were extremely loud, ran super hot and actually scaled with voltage. Virtually none of those three apply nearly as much anymore.


----------



## Xeq54

Glerox said:


> Awesome video from Der8auer!!
> 
> https://youtu.be/9_G7tWAh5ms
> 
> He shows how to do the shunt mod. Soldering is quite complicated; I would still do liquid metal. Haven't had any problems on the Titan XP with it.
> 
> Basically he also shows that I ordered a water block for nothing lol...
> 
> He went from 2070MHz to 2100MHz going from the stock cooler to a perfect temp of 20°C...
> 
> From what I recall the temperature scaling is even worse than Pascal.
> 
> Oh well... at least it will look good lol


Why did he short the shunts? I thought shorting them is not what you are supposed to do.

I have had my 2080 Ti (ASUS Dual) since Friday and it is a solid card. However, the power limit is extremely harsh: even with the slider at 120% I can get to 1890-1940MHz max without OCing the memory, and with a memory OC it stays around 1900 max. It is stable, but the power limit does not let me go higher no matter what. I will try to play with undervolting next. But I do want to shunt mod it once I get a waterblock for it.

I will have the option to swap it for the Gaming Trio in a week, but I am not sure if I want to do it, since the PCB is just unnecessarily big. It is also not a fully custom PCB; it is clearly some kind of hybrid, with the left side being reference and the right side custom, and it will take more time for waterblocks to arrive.


----------



## GAN77

Del


----------



## GAN77

Xeq54 said:


> Why did he short the shunts ? I thought shorting them is not what you are supposed to do.
> 
> I have my 2080ti (Asus DUAL)




The 2080 Ti (ASUS Dual) is a reference PCB.
Could you possibly apply the BIOS from the EVGA XC, with its 130% power limit?

The EVGA XC 2080/2080 Ti uses the same reference PCB.


----------



## Xeq54

Is NVFlash with Turing support even available? I haven't seen any posts about it yet.


----------



## dboythagr8

ClashOfClans said:


> Very encouraging news, congrats on your purchase. I should have waited patiently. Even though I got a cheap 1080 Ti, it cannot max everything out at 4K! In fact, even on Fallout 4 I have to turn off AA and a couple more settings. It could also be a bug in certain parts of the cities, since it seems to hold 60fps everywhere else, but I do wonder if the 2080 Ti can max out FO4 with the high-resolution texture pack at 4K in the cities. Starting to wonder if the Ryzen 1800X is bottlenecking the GPU in the cities as well. If you've got FO4, download the high-resolution texture pack (it's only 55GB...) and test it for me, thanks.




Just checked for you, and sorry, I don't have Fallout 4 on PC. I must have it on Xbox One X. Yes I have so many games, I can't keep track of what platform it's on :x.


----------



## Zfast4y0u

Xeq54 said:


> Why did he short the shunts ? I thought shorting them is not what you are supposed to do.
> 
> I have my 2080ti (Asus DUAL) since friday and it is a solid card, however the power limit is extremely harsh and even with slider at 120% I can go to 1890-1940 mhz max without ocing the memory, with memory oc it stays around 1900 max. It is stable but the power limit does not let me go higher no matter what. I will try to play with undervolting next. But I do want to shunt mod it once I get waterblock for it.
> 
> I will have the option to swap it for the gaming trio in a week, but I am not sure if I wanna do it since the pcb is just unnecessarily big, it is also not fully custom pcb, it is clearly some kind of hybrid with the left side being reference and right side custom and it will take more time for waterblocks to arrive.


How does an OC'd 2080 Ti behave in the Unigine Heaven benchmark, for example? Does it hit power limits? Not sure how hard/easy the 2080 Ti will hit the power limit when RTX is thrown into the equation as well. Time will tell.


----------



## Xeq54

Zfast4y0u said:


> How does an OC'd 2080 Ti behave in the Unigine Heaven benchmark, for example? Does it hit power limits? Not sure how hard/easy the 2080 Ti will hit the power limit when RTX is thrown into the equation as well. Time will tell.


Unigine is the only 3D load I tested where the frequency can go above that, roughly to 1960-1980. I decided to stop using it for further testing since it clearly isn't representative of an average 3D load scenario anymore. Keep in mind, Heaven is 9 years old, so it is not very relevant anymore.


----------



## ssgwright

no nvflash yet and the bios from evga won't flash your card unless the models match (already tried on my gigabyte 2080)


----------



## Zfast4y0u

Xeq54 said:


> Unigine is the only 3d load I tested where the frequency can go above that, roughly to 1960 - 1980. I decided to stop using it for further testing since it clearly isn't representative of an average 3d load scenario anymore. Keep in mind, heaven is 9 years old so it is not very relevant anymore.


So, were you sitting around 80-100% power, or did you hit the power limit?


----------



## Jared Pace

Glerox said:


> Awesome video from Der8auer!!


Cool results


----------



## rauf0

Xeq54 said:


> Why did he short the shunts ? I thought shorting them is not what you are supposed to do.
> 
> I have my 2080ti (Asus DUAL) since friday and it is a solid card, however the power limit is extremely harsh and even with slider at 120% I can go to 1890-1940 mhz max without ocing the memory, with memory oc it stays around 1900 max. It is stable but the power limit does not let me go higher no matter what. I will try to play with undervolting next. But I do want to shunt mod it once I get waterblock for it.
> 
> I will have the option to swap it for the gaming trio in a week, but I am not sure if I wanna do it since the pcb is just unnecessarily big, it is also not fully custom pcb, it is clearly some kind of hybrid with the left side being reference and right side custom and it will take more time for waterblocks to arrive.


Be advised that the ASUS GeForce RTX 2080 Ti Dual O11G, 11GB GDDR6 (90YV0C41-M0NM00) is also not planned to be covered by EKWB; not sure about the rest of the manufacturers.

"Sorry, we have no plans to make a full cover water block for this position. Thank you."

Looks like the differences between Turing PCBs hinder the choice of card for liquid cooling.


----------



## Xeq54

The ASUS Dual is a fully reference-PCB card, so the block will fit. EKWB probably has not had the opportunity to verify this yet, so they most likely have it listed as incompatible.


----------



## rauf0

That would be great, as I'm looking for that card and later for a block. Thanks for the info @Xeq54


----------



## Woundingchaney

Has anyone received availability updates from Nvidia or other suppliers?


----------



## Glerox

Xeq54 said:


> Why did he short the shunts ? I thought shorting them is not what you are supposed to do.


He shorted them because some people had trouble with liquid metal in the past. I saw one guy whose shunt resistor fell out! It was apparently dissolved by the liquid metal, though I think he might have applied too much of it.

Yeah, if you short both shunt resistors, the card will detect it and enter a low-power locked mode; that's why Der8auer only shorted one.

If I had to do it again, I would still use liquid metal.

However, I don't think I will do it; he gains like 1% performance but the power consumption (measured with probes) goes through the roof... not worth it unless you want a higher score in benchmarks.
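For anyone wondering how liquid metal (or a solder bridge) on a shunt fools the card: the monitoring IC measures the voltage drop across the shunt and divides it by the resistance it assumes is installed. A few lines sketch that arithmetic; the resistance and rail values below are made-up illustrations, not actual 2080 Ti board values:

```python
# Illustrative sketch of shunt-based power sensing. The 5 mOhm shunt and
# 12 V rail are assumed example numbers, not real 2080 Ti specs.

def reported_power(actual_amps, v_rail, r_actual_mohm, r_assumed_mohm):
    """Power the monitoring IC infers from the shunt's voltage drop.

    The IC divides the measured drop by the resistance it assumes is
    installed, so lowering the effective resistance (liquid metal or a
    parallel resistor) makes the card under-report its draw and
    throttle later.
    """
    v_drop = actual_amps * (r_actual_mohm / 1000.0)   # volts across the shunt
    sensed_amps = v_drop / (r_assumed_mohm / 1000.0)  # current the IC infers
    return v_rail * sensed_amps

# 25 A on a 12 V rail through an unmodified 5 mOhm shunt reads as 300 W...
print(round(reported_power(25.0, 12.0, 5.0, 5.0), 1))  # 300.0
# ...but halving the effective resistance makes the same draw read as 150 W.
print(round(reported_power(25.0, 12.0, 2.5, 5.0), 1))  # 150.0
```

A dead short drives the sensed drop toward zero, which is presumably why the card objects when every shunt reads nothing and falls back to a low-power mode.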


----------



## bastian

Woundingchaney said:


> Has any received availability updates from Nvidia or other suppliers?


What is strange about this Ti delay is that many places already have stock of AIB cards, but it seems that Nvidia is making everyone stick to the 27th.

It is not clear why Nvidia is delaying the Ti.


----------



## rush2049

Nvidia just sent me an email saying delivery will be on October 5th.


----------



## BigMack70

rush2049 said:


> Nvidia just sent me an email saying delivery will be on October 5th.


When did you place your order?


----------



## rush2049

BigMack70 said:


> When did you place your order?


The minute the checkout button functioned on the nvidia store.


----------



## BigMack70

rush2049 said:


> The minute the checkout button functioned on the nvidia store.


Bummer. Sounds like you probably beat me by 5 minutes or so; I was hoping that those of us who were really early in line would at least get them by the 27th.

This is probably the last time I ever order direct from Nvidia. This is highly unacceptable.


----------



## GAN77

rauf0 said:


> That would be great, as im looking for that card and later for block. Thanks for info @XeG54


Reference


----------



## xer0h0ur

Glorious package from Slovenia arrived

EKWB block and backplate are ready for whenever Nvidia decides to ship my damn card


----------



## raider89

xer0h0ur said:


> Glorious package from Slovenia arrived
> 
> EKWB block and backplate are ready for whenever Nvidia decides to ship my damn card


People are getting emails of a ship date of oct 5-9


----------



## xer0h0ur

Oddly enough, not me. So I am beginning to think I am screwed.


----------



## raider89

xer0h0ur said:


> Oddly enough, not me. So I am beginning to think I am screwed.


I haven't got one yet either, but I also didn't get the delayed email until hours after everyone else. Hoping Micro Center will have some on the shelf on Thursday; if so, I will just buy one then.


----------



## Foxrun

xer0h0ur said:


> Oddly enough, not me. So I am beginning to think I am screwed.


Same, and I preordered them during the keynote. I'll try the Cambridge Micro Center on Thurs.


----------



## xer0h0ur

I've also got a Micro Center in Chicago, but I work on Thursday, so even if they had any I would miss out on one. Feelsbadman


----------



## Rob w

xer0h0ur said:


> Glorious package from Slovenia arrived
> 
> EKWB block and backplate are ready for whenever Nvidia decides to ship my damn card


EKWB still shows pre-order / no ship date.
The 2080 Ti shows a 20/9/18 ship date, but the email says delayed.

So whilst everyone else is becoming happy bunnies, nothing going on here! Grrrr!


----------



## raider89

Contacted Micro Center, waiting on them to respond if they plan on having them on the shelf Thursday, and not just pre orders.


----------



## xer0h0ur

Rob w said:


> ekwb still shows pre order/ no ship date
> 2080ti shows 20/9/18 ship but email says delayed.
> 
> so whilst everyone else is becoming happy bunnies! nothing going on here! grrrr!


Well I pre-ordered from EK minutes after the pre-order page went live and they shipped it when they said they would. Apparently EK actually fills orders in order and on time *gives Nvidia 1000 yard stare*


----------



## Foxrun

Well, I just got my blocks from EKWB and I'm not sure I want them. Might stick with an AIB cooler. This whole delay with Nvidia has mucked up my plans.


----------



## raider89

I'm hoping the first-come-first-serve on Thursday is true; just going to try and grab one that's compatible with the water block.


----------



## snafua

rush2049 said:


> Nvidia just sent me an email saying delivery will be on October 5th.


Like you I ordered during the keynote, original estimated date from the email said Sep-20.
I haven't received the Oct 5th notification yet.


----------



## Ford8484

received this today from Newegg- 
"We have been informed that we will not be receiving our inventory of the NVIDIA RTX 2080 video cards as anticipated on September 20th, 2018. The release has been delayed until September 27th, 2018. You are welcome to keep your existing pre-order and it will be processed and shipped once we receive inventory, or you can instead choose to cancel your pre-order within your Newegg Account's Order History.

Newegg is dedicated to ensuring our customers are notified and provided all necessary information to make the best shopping selection. Please keep in mind that your method of payment has not been charged for the pre-order.

If you have any questions regarding the information provided in this email, please do not hesitate to contact Newegg Customer Service through one of the convenient contact methods provided here."

Sincerely,
Your Newegg Customer Service

I just sold my 1080 Ti... Looks like I'll have to work on the PS4 Pro backlog...


----------



## xer0h0ur

"Newegg is dedicated to ensuring our customers are notified and provided all necessary information to make the best shopping selection." 

4 days late xD


----------



## snafua

xer0h0ur said:


> "Newegg is dedicated to ensuring our customers are notified and provided all necessary information to make the best shopping selection."
> 
> 4 days late xD


Well.. there is Nvidia's "We’ll be back in touch shortly with your new delivery date." - 5 days ago.
=P


----------



## skingun

Still haven't received the October 5 email. What is going on!!!


----------



## Clukos

Jared Pace said:


> Cool results


More like bad results, I guess Turing doesn't like Superposition too much

This is on a 1080 Ti OC under water (24/7)










Of course the 2080 Ti is going to smash everything but that's another thing


----------



## 8051

Is it possible that with new driver revisions the 2080 Ti's performance may increase enough to have a 50% lead over any and all stock 1080 Ti variants without overclocking?


----------



## Fiercy

skingun said:


> Still haven't received the October 5 email. What is going on!!!


Just received my email with October 5th a minute ago. So relax, you are probably fine!


----------



## Emmett

I have received NO emails of delay or otherwise since I ordered on the 20th *sigh*


----------



## Foxrun

raider89 said:


> Im hoping the first come first server on Thursday is true, just going to try and grab one thats compatible with the water block.


I called my local one and they are getting in a decent amount. Taking a personal day and going for those MSI's!


----------



## xer0h0ur

So which are the AIB cards using the reference design with the highest power limit we have confirmed? I just want to keep those cards in mind if I do manage to go to the Micro Center.


----------



## Glerox

8051 said:


> Is it possible that with new driver revisions the 2080Ti perf. may increase enough to have a 50% lead over any and all stock 1080Ti variants without overclocking?


Nope, only theoretically possible with DLSS, for hypothetical games that will support it at some unknown point in the future.


----------



## Nico67

Glerox said:


> AFAIK yet :
> 
> FE : 123%
> Asus ROG Strix : 125%
> EVGA XC : 130%


Does anyone know what TDP these percentages are based off?

The FE is supposedly 260W x 123% = 320W,

but if the others are based off 250W, then 250W x 130% = 325W?

----------



## snafua

Fiercy said:


> Just received my email with October 5 a min ago. So relax you are probably fine!


Is that a hard date or worded as "Around October 5th" again?


----------



## littledonny

BigMack70 said:


> Bummer. Sounds like you probably beat me by 5 minutes or so; I was hoping that those of us who were really early in line would at least get them by the 27th.
> 
> This is probably the last time I ever order direct from Nvidia. This is highly unacceptable.


Highly unacceptable? Good God man, calm down


----------



## Fiercy

snafua said:


> Is that a hard date or worded as "Around October 5th" again?


They actually say:
Your GeForce RTX™ 2080 Ti Founders Edition order will now ship by 10/5/18 with expedited shipping

So hopefully it will ship sooner; my EK waterblock has already shipped ;(


----------



## Foxrun

littledonny said:


> Highly unacceptable? Good God man, calm down


He has a point. A botched release and a delay of upwards of 2 weeks with no reason as to why is unacceptable when you are purchasing an item over $1000.


----------



## xorbe

PayPal users that paid over a month ago have 1,200 real reasons to demand to know what's going on; this isn't Kickstarter. As a CC buyer I'm chill with waiting -- I've not paid a penny to date, I'm just in line.


----------



## GraphicsWhore

EK block looking good. I'm really interested to compare it to the Alphacool full-cover, which is also RGB. No shipping confirmation for that yet but it's meaningless because I'm in the "Amazon delay with no ETA" camp.


----------



## raider89

Foxrun said:


> I called my local one and they are getting in a decent amount. Taking a personal day and going for those MSI's!


When I called and finally got someone, they just told me they would fill pre-orders first, so not sure how that works. Hoping they just order more than what they got in for pre-orders so I can snag one. Definitely going to go Thursday; I didn't get the October delay email, and I'm being told the expedited shipping they're giving people is 5-7 days, and that's another week.


----------



## Glerox

Nico67 said:


> Does anyone know what TDP these percentages are based off?
> 
> The FE is supposedly 260W x 123% = 320W,
> 
> but if the others are based off 250W, then 250W x 130% = 325W?


That's a good point, and it can start to get quite complicated! In the end, I guess, what matters is which one can reach the highest stable clock speed.


----------



## Edge0fsanity

Still haven't gotten a ship date from Nvidia. I guess I'll try to get one on launch day from a retailer or drive up to microcenter for it. Rather disappointed at this launch. I never had this problem as a day 1 Titan buyer from Nvidia.


----------



## Krzych04650

8051 said:


> Is it possible that with new driver revisions the 2080Ti perf. may increase enough to have a 50% lead over any and all stock 1080Ti variants without overclocking?


It will gain a few % from driver optimizations, and new games are going to prioritize new-gen GPUs, so over time you may see a 50% lead in some new games, and 2x+ with DLSS. But performance in old games and old benchmarks is not going to change beyond those few % from driver optimizations, because those games are not going to be changed or updated; outside of driver improvements there are no factors that could affect performance. Memory capacity has not changed, so Pascal is not going to run out of VRAM versus Turing, which was a big if not the main factor in widening the gap between Kepler and Maxwell, and then Maxwell and Pascal. So, like with everything Turing, it all depends on the support the Tensor and RT features are going to get: the more support, the bigger the gap. But performance in already-released games is more or less a closed case; the potential gains are single percentage digits.


----------



## xer0h0ur

Finally got the e-mail at 8:50 PM CST. If you're wondering what they sent, it's this:


----------



## BigMack70

Just got my October 5th email. So disappointing. Ordered just after 1 PM EST; basically at the front of the line, and it's another 2 weeks before I get the card. Wonder what happened that Nvidia botched this so bad.


----------



## Glerox

For those living up north, Canada Computers has the 2080 Ti in stock right now.


----------



## bastian

Glerox said:


> For those living north, canadacomputers has 2080TI in stock right now.


Still cannot release until 27th. And they don't have stock in stores or for every AIB.


----------



## Glerox

bastian said:


> Still cannot release until 27th. And they don't have stock in stores or for every AIB.


Of course they don't have every AIB lol.
They do have the Zotac and EVGA in stock in the online store, though.


----------



## Glerox

I decided to give it a go.
Ordered the ZOTAC GAMING GeForce RTX 2080 Ti Triple Fan to get the cheapest reference PCB I can, because it's going under a water block anyways.
It would save me 120 Canadian pesos over the FE.
I will cancel the FE if the Zotac ships first.


----------



## Esenel

"Your GeForce RTX™ 2080 Ti Founders Edition order will ship between October 5th and 9th via expedited delivery. As soon as your order is on its way, you will receive a shipping confirmation by e-mail."

So for me it is 05.-09.10 haha 😄
So gambling again.
My EK was already shipped yesterday 😉

But when I preordered on 23.08, the estimate already said something about early October.


----------



## Baasha

This is maddening. 

I haven't gotten any email and I've pre-ordered 4x 2080 Ti 

These muppets better ship my GPUs soon.


----------



## Foxrun

raider89 said:


> When I called and finally got someone they just told me they would fill pre orders first so not sure how that works. Hoping they just order more than what they got in for pre orders so I can snag one. Definitely going to go Thursday, i didnt get the Oct delay and im being told the expedited shipping they're giving people is 5-7 days and thats another week.


Weird they didn't mention preorders to me. Are you near the Cambridge MA one?


----------



## raider89

Foxrun said:


> Weird they didn't mention preorders to me. Are you near the Cambridge MA one?


No, I'm in Ohio, near the Sharonville location.


----------



## Ford8484

Baasha said:


> This is maddening.
> 
> I haven't gotten any email and I've pre-ordered 4x 2080 Ti
> 
> These muppets better ship my GPUs soon.


lol holy damn... 4?


----------



## dentnu

https://www.youtube.com/watch?v=V70f1Kx0QTU&t=0s

I have one of these on order and after seeing that video I feel pretty bummed out. It looks like MSI really dropped the ball on this custom PCB. I feel like just canceling my preorder from Newegg, which they say will still ship on the 27th. Problem is, if I cancel, I'm pretty sure it's going to be a while before I can buy another card, since they are all sold out and probably will be for a very long time. Thoughts on this or any advice?


----------



## raider89

Someone over on the Nvidia forums is saying the reason behind the delay is a problem with the GDDR6 modules, so they're having to fix all of them; not sure if this affects all cards or just FE models.

Found this link https://vonguru.fr/2018/09/24/court...-hausses-et-rtx-une-fin-dannee-apocalyptique/ on the forum, and here is the link to the actual post:

https://forums.geforce.com/default/...s/geforce-rtx-2080-ti-availability-update/23/

He's French, so his English is not perfect.


----------



## CallsignVega

dentnu said:


> https://www.youtube.com/watch?v=V70f1Kx0QTU&t=0s
> 
> I have one of these on order and after seeing that video I feel pretty bummed out. It looks like MSI really dropped the ball on this custom PCB. I feel like just canceling my preorder from Newegg, which they say will still ship on the 27th. Problem is, if I cancel, I'm pretty sure it's going to be a while before I can buy another card, since they are all sold out and probably will be for a very long time. Thoughts on this or any advice?


Oh yeah, it looks like nothing special. I am glad I cancelled that order and went with a FTW3.


----------



## nycgtr

dentnu said:


> https://www.youtube.com/watch?v=V70f1Kx0QTU&t=0s
> 
> I have one of these on order and after seeing that video I feel pretty bummed out. It looks like MSI really dropped the ball on this custom PCB. I feel like just canceling my preorder from Newegg, which they say will still ship on the 27th. Problem is, if I cancel, I'm pretty sure it's going to be a while before I can buy another card, since they are all sold out and probably will be for a very long time. Thoughts on this or any advice?


I saw the video. I have 2 Trios on order. If you watch der8auer's video on the 2080 on dry ice and volt-modded, you will see there are crappy gains going below ambient, even when jacking up the power 50%. What you get out of the box on air is pretty much what Turing can do tapped out. So just keep your preorder. For once I am not even going to bother buying a block for my GPUs.


----------



## dentnu

CallsignVega said:


> Oh ya looks nothing special. I am glad I canceled that order and went with an FTW3.


Yeah, I looked at the FTW3 but the price on that is really high... No way I am wasting $1500 on a video card, which is what it comes out to with shipping and taxes for me. Hope it's a beast though. Enjoy!



nycgtr said:


> I saw the video. I have 2 trios on order. If you watch debauer's video on the 2080 on dice and volt modded, you will see there is crappy gains going below ambient as well as jacking up the power 50%. What you will get out the box on air is pretty much what turing can do tapped out. So just keep your preorder. For once i am not even gonna bother buying a block for my gpus.


Yeah, I saw der8auer's video; it seems Turing is worse than Pascal in terms of overclocking. Still deciding what to do with my preorder. At this point, it's either take whatever 2080 Ti you can find or wait a long time to get one. Thanks


----------



## xer0h0ur

Am I the only person that interprets der8auer's findings completely differently? He was testing a 2080, not a 2080 Ti. His findings were all about how pointless it is to shunt mod because of the small gains at the cost of rapidly increasing power consumption.

NONE of that has anything to do with the fact that the 2080 Ti has far more CUDA cores than the 2080, and every reviewer has seen the overclocks eventually throttle once the temperature settles at its peak on the FE cooler. That makes the entire point of waterblocking Turing to maintain the overclocks, not necessarily to push the clocks higher...

Mind you, reviewers had to run the fans on full blast to even get high overclocks, and they only ran those clocks for moments while running a benchmark. I really am sick of seeing this crap lately from video card reviews. If you're going to quote X overclock on a chart with FPS, then at least prove to me that you can run those clocks non-stop. Otherwise it's the most misleadingly worthless information you can get.


----------



## GraphicsWhore

What a *****SHOW, all around.

Amazon EVGA XC delayed - still no new ETA.

Meanwhile, I emailed modmymods.com today to ask about my Alphacool Eisblock, since I pre-ordered the minute it became available. Apparently Alphacool told them there was a slight delay shipping the model I ordered (full cover, plexi, RGB). Then yesterday they sent a shipment to modmymods but told them it only contained the other two models they offer (acetal and "light" RGB), none of the Eisblock, and they're not sure when they'll ship it to them.

Not that I have a card anyway, so not a big deal, but Jesus Christ. Even if I got the card tomorrow I'd be waiting for my other block.

EK is the only one who has their ***** together it seems.


----------



## BigMack70

xer0h0ur said:


> Am I the only person that interprets der8auer's findings completely differently? He was testing a 2080, not a 2080 Ti. His findings were all about how pointless it is to shunt mod because of the small gains at the cost of rapidly increasing power consumption.
> 
> NONE of that has anything to do with the fact that the 2080 Ti has far more CUDA cores than the 2080, and every reviewer has seen the overclocks eventually throttle once the temperature settles at its peak on the FE cooler. That makes the entire point of waterblocking Turing to maintain the overclocks, not necessarily to push the clocks higher...
> 
> Mind you, reviewers had to run the fans on full blast to even get high overclocks, and they only ran those clocks for moments while running a benchmark. I really am sick of seeing this crap lately from video card reviews. If you're going to quote X overclock on a chart with FPS, then at least prove to me that you can run those clocks non-stop. Otherwise it's the most misleadingly worthless information you can get.


It just seems like the 2080 Ti is going to be another 1080 Ti, where your OC limit is 99% power and 1% temperature as long as you aren't running a blower-style cooler that just can't keep up with the thermals. But even the FE coolers look like they prevent any significant thermal throttling. Nvidia has the power limits locked down so tightly that there's just not a lot of room to do anything.

The real test will be what can be done with custom BIOS. If you could get a 150% power target or more via custom BIOS then I expect water cooling would begin to shine. But with the default 120-130% that it looks like we're getting, I haven't seen much evidence to suggest thermal limits on the air coolers that would be remedied by water.

Of course, we could all be testing this for ourselves right now if Nvidia hadn't screwed the pooch on this launch...


----------



## xer0h0ur

You're still ignoring the point I am making. Show me one single review where I can put a 2080 Ti at 2100 MHz or thereabouts, maintain that overclock, and not have to run two fans at 100% fan speed to even hit 2100 MHz or above. You're not going to find one.


----------



## mr2cam

Graphics***** said:


> What a *****SHOW, all around.
> 
> Amazon EVGA XC delayed - still no new ETA.
> 
> Meanwhile, I emailed modmymods.com today to ask about my Alphacool Eisblock, since I pre-ordered the minute it became available. Apparently Alphacool told them there was a slight delay shipping the model I ordered (full cover, plexi, RGB). Then yesterday they sent a shipment to modmymods, but told them it was only the other two models they offer (acetal, "light" RGB) and none of the Eisblock, and they're not sure when they'll ship that to them.
> 
> Not that I have a card anyway, so not a big deal, but Jesus Christ. Even if I got the card tomorrow I'd be waiting for my other block.
> 
> EK is the only one who has their ***** together it seems.


I still haven't received an e-mail about my full-cover 2080 Ti block shipping from EK; it's still processing, and it was supposed to ship out on or around the 20th.


----------



## GraphicsWhore

mr2cam said:


> I still haven't received an e-mail about my full-cover 2080 Ti block shipping from EK; it's still processing, and it was supposed to ship out on or around the 20th.


Haha damn, maybe not then. 

Which one did you get? I got the Nickel + Acetal RGB.


----------



## xer0h0ur

The block I had delivered yesterday was the Nickel + acrylic RGB block and a black backplate. Also, for once, the block was flawless. My 295X2 and 1080 blocks both had imperfections in the nickel plating that were visible through the acrylic. This block has flawless milling and no visual imperfections in the plating.


----------



## mr2cam

Graphics***** said:


> Haha damn, maybe not then.
> 
> Which one did you get? I got the Nickel + Acetal RGB.


EK-Vector RTX 2080 Ti RGB - Nickel + Plexi

EK-Vector RTX Backplate - Nickel


----------



## nycgtr

xer0h0ur said:


> You're still ignoring the point I am making. Show me one single review where I can put a 2080 Ti at 2100 MHz or thereabouts, maintain that overclock, and not have to run two fans at 100% fan speed to even hit 2100 MHz or above. You're not going to find one.


Both are the same architecture, so the same principles apply to the Ti and non-Ti. If you notice, in the latest version of NV Boost the temperature for max OC before it starts dropping clocks is in the 70s; Petersen talks about this in more than one interview. While for Pascal every 5 °C over 45 °C would see a drop in max clock, the same can't be said for Turing, which has a much higher temperature limit. Maintaining 74 °C without being very noisy with an AIB card that has a triple-fan setup and a good cooler is not difficult; hence, blocking the card becomes a moot point. At the same time, I don't think 1900s vs. 2100 MHz (which seems to be the high end of the spectrum) is going to make that big a difference in FPS either. It almost feels like you're new to this and to watercooling. Also, a complete side point: if you wanted the best GPU temps, an EK block was probably not the best choice either.
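The boost behavior being described can be sketched as a simple binning model. This is purely illustrative: the thresholds (45 °C for Pascal, mid-70s for Turing) and the ~13 MHz bin size are assumptions taken from this discussion, not NVIDIA's actual boost tables.

```python
# Illustrative model of GPU Boost temperature-based clock binning.
# Thresholds and step sizes are assumptions for illustration only:
# Pascal is modeled as losing one ~13 MHz bin per 5 C above 45 C,
# Turing as holding full boost until the mid-70s.

def effective_clock(max_boost_mhz: int, temp_c: float, arch: str) -> int:
    """Return the sustained clock after temperature-based bin drops."""
    if arch == "pascal":
        start, step_c, bin_mhz = 45, 5, 13
    elif arch == "turing":
        start, step_c, bin_mhz = 74, 5, 13
    else:
        raise ValueError(f"unknown arch: {arch}")
    if temp_c <= start:
        return max_boost_mhz
    bins_dropped = int((temp_c - start) // step_c) + 1
    return max_boost_mhz - bins_dropped * bin_mhz

# A Pascal card at 65 C has already shed several bins, while a Turing
# card at the same temperature still holds its full boost clock.
print(effective_clock(2062, 65, "pascal"))  # 1997
print(effective_clock(2100, 65, "turing"))  # 2100
```

Under this toy model, keeping a Turing card in the low 70s with a good air cooler already gives you the full boost bin, which is the argument being made against blocking.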


----------



## xer0h0ur

nycgtr said:


> Both are the same architecture, so the same principles apply to the Ti and non-Ti. If you notice, in the latest version of NV Boost the temperature for max OC before it starts dropping clocks is in the 70s; Petersen talks about this in more than one interview. While for Pascal every 5 °C over 45 °C would see a drop in max clock, the same can't be said for Turing, which has a much higher temperature limit. Maintaining 74 °C without being very noisy with an AIB card that has a triple-fan setup and a good cooler is not difficult; hence, blocking the card becomes a moot point. At the same time, I don't think 1900s vs. 2100 MHz (which seems to be the high end of the spectrum) is going to make that big a difference in FPS either. It almost feels like you're new to this and to watercooling. Also, a complete side point: if you wanted the best GPU temps, an EK block was probably not the best choice either.


Hi, welcome to the conversation. You know, the one I am having about FE cards... At no point was I making reference to AIB cards with better coolers. By all means though, continue being condescending.


----------



## nodicaL

Glerox said:


> For those living north, canadacomputers has 2080TI in stock right now.


Yeah!

I cancelled my FE preorder and got the MSI Duke OC, since it seems to have the highest reference OC, with power consumption at 260 W.

***EDIT***

yikes! Canada Computers seems to have increased the price on the Duke OC by $120. 

Hopefully people ordered it before the increase. 

Hoping it’ll be great in a water block.


----------



## tconroy135

nodicaL said:


> Yeah!
> 
> I cancelled my FE preorder and got the MSI Duke OC, since it seems to have the highest reference OC, with power consumption at 260 W.
> 
> ***EDIT***
> 
> yikes! Canada Computers seems to have increased the price on the Duke OC by $120.
> 
> Hopefully people ordered it before the increase.
> 
> Hoping it’ll be great in a water block.


Wouldn't the FE be the better choice if you plan to put a waterblock on it? Is the 'Duke' a custom PCB?


----------



## FaLLeNAn9eL

tconroy135 said:


> Wouldn't the FE be the better choice if you plan to put a waterblock on it? Is the 'Duke' a custom PCB?


From what I understand, most AIB cards have a higher power limit at 130%, while FE cards hit a brick wall at 120%.
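For reference, those percentages are relative to each card's default board power, so the absolute caps work out roughly as follows. The ~260 W baseline is taken from figures quoted in this thread; exact baselines vary per model.

```python
# Power-limit percentages are relative to each card's default board
# power. The 260 W baseline here is a figure quoted in this thread
# (the FE/Duke board power); treat it as approximate.

def max_board_power(base_w: float, limit_pct: float) -> float:
    """Watts the card may draw at a given power-limit slider setting."""
    return base_w * limit_pct / 100.0

print(max_board_power(260, 120))  # FE-style cap: 312.0 W
print(max_board_power(260, 130))  # AIB-style cap: 338.0 W
```

So the 130% AIB cards buy you roughly 25 W of extra headroom over a 120% FE at the same baseline.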


----------



## raider89

FaLLeNAn9eL said:


> From what I understand, most AIB cards have a higher power limit at 130%, while FE cards hit a brick wall at 120%.


EK shows the Duke being compatible with the waterblock they have listed, so that's what I'm going to go after on Thursday.


----------



## Fiercy

If most of them are using the reference PCB, wouldn't it be possible to somehow alter the power limit via a modded stock BIOS?

That's what I am hoping for.


----------



## GraphicsWhore

Be sure to post pics as soon as any of you have blocked your cards.


----------



## Foxrun

Graphics***** said:


> Be sure to post pics as soon as any of you have blocked your cards.


Took the day off; I'll be first in line at Micro Center and will block it as soon as I get back Thursday.


----------



## arrow0309

nodicaL said:


> Yeah!
> 
> I cancelled my FE preorder and got the MSI Duke OC, since it seems to have the highest reference OC, with power consumption at 260 W.
> 
> ***EDIT***
> 
> yikes! Canada Computers seems to have increased the price on the Duke OC by $120.
> 
> Hopefully people ordered it before the increase.
> 
> Hoping it’ll be great in a water block.





tconroy135 said:


> Wouldn't the FE be the better choice if you plan to put a waterblock on it? Is the 'Duke' a custom PCB?





raider89 said:


> EK shows the Duke being compatible with the waterblock they have listed, so that's what I'm going to go after on Thursday.


I've previously ordered the MSI 2080 Ti Duke (UK), and it is 100% reference (or better):

https://www.techpowerup.com/reviews/MSI/GeForce_RTX_2080_Ti_Duke/images/front_full.jpg

And the FE:

https://www.techpowerup.com/reviews...080_Ti_Founders_Edition/images/front_full.jpg

The EK Vector RGB + backplate is also arriving tomorrow.

And yes, it's a 260 W card, but one thing I don't really like is that the PL is raised to only 111%, vs. the FE at 123% and the EVGA XC ULTRA at 130%.


----------



## raider89

Foxrun said:


> Graphics***** said:
> 
> 
> 
> Be sure to post pics as soon as any of you have blocked your cards.
> 
> 
> 
> Took the day off; I'll be first in line at Micro Center and will block it as soon as I get back Thursday.
Click to expand...

What time are you going to get there? Need to figure out when to go to mine lol


----------



## Foxrun

raider89 said:


> What time are you going to get there? Need to figure out when to go to mine lol


They told me that multiple people have been calling to see if they were going to have some in stock... I'm going an hour before they open. They are getting the EVGA XCs, so that 130% power limit is worth waiting for.


----------



## GraphicsWhore

Foxrun said:


> They told me that multiple people have been calling to see if they were going to have some in stock... I'm going an hour before they open. They are getting the EVGA XCs, so that 130% power limit is worth waiting for.


Are they going to have any in stock though? When I check the MicroCenter stores on the website it says "sold out" at every single one for every 2080Ti.

Edit: Nm, just called my local one. Hilariously, they actually changed their initial recorded greeting to mention the RTX cards, lol.

The guy said they will have first-come, first-serve stock on Thursday. I'm trying to figure out how early I realistically want to get there. There is another MicroCenter about 25 mins away so hopefully that thins out the potential buyers in this area but if I get there at like 8 and there's already a line I'm going to be annoyed.

I forgot to ask how many cards they're actually getting - will call them back tomorrow and may have to tell my work I'll be in a bit late on Thursday.


----------



## nycgtr

Goes to my point earlier.


----------



## Badass1982

I have a question: is there any way that Nvidia COULD unlock the power limit with a BIOS/firmware update? I'm not asking if they WILL, only if it's actually doable with an update, or will this be unchangeable by firmware since it's in the hardware? Thanks, and I cannot believe they locked these cards down so much power-wise!


----------



## dentnu

nycgtr said:


> https://youtu.be/ORJfhlTAgfU
> 
> Goes to my point earlier.


Yeah, water-cooling is not worth it unless you're looking for less noise, etc. Overclocking will be the same as on air due to hitting the power limit. Hopefully we will be able to mod the BIOS very soon to remove or raise the power limits on these cards. The more testing that gets done on Turing, the less impressed I am by it. Guess this is what happens when you are king of the GPU market.


----------



## dentnu

Badass1982 said:


> I have a question: is there any way that Nvidia COULD unlock the power limit with a BIOS/firmware update? I'm not asking if they WILL, only if it's actually doable with an update, or will this be unchangeable by firmware since it's in the hardware? Thanks, and I cannot believe they locked these cards down so much power-wise!


Yeah, EVGA just released a BIOS that raised all their cards' power limits a bit. Nvidia could do the same thing and unlock or remove the power limit with a BIOS update. It will never happen, though... Our best bet is being able to mod the BIOS ourselves, but at this point who knows how long that will take. Who knows how many new types of protections are in the BIOS this time.


----------



## Glerox

nycgtr said:


> https://youtu.be/ORJfhlTAgfU
> 
> Goes to my point earlier.


Yeah, I just feel that I'm throwing $150 for the block and $50 for the backplate out the window... it will be PURELY for aesthetics... damn, that sucks...

I could still cancel the blocks, but I've planned my whole loop with the block in mind...


----------



## changboy

Aorus 2080ti triple lighting fans :

https://www.youtube.com/watch?time_continue=8&v=v4xSY6O8cqQ


----------



## BigMack70

nycgtr said:


> https://youtu.be/ORJfhlTAgfU
> 
> Goes to my point earlier.


Looks like I might be going back to air cooling for the first time in 4 years. As long as the FE cooler isn't too loud it's clear that power is the limit, not temperature.


----------



## illidan2000

Glerox said:


> Yeah, I just feel that I'm throwing $150 for the block and $50 for the backplate out the window... it will be PURELY for aesthetics... damn, that sucks...
> 
> I could still cancel the blocks, but I've planned my whole loop with the block in mind...


Noise reduction is better than 30 MHz on the GPU core.


----------



## dentnu

changboy said:


> Aorus 2080ti triple lighting fans :
> 
> https://www.youtube.com/watch?time_continue=8&v=v4xSY6O8cqQ


That's an interesting-looking card. I hate RGB, as it makes PCs look like a Christmas tree, which is ugly in my opinion, but the RGB on this card's fans looks very cool. Has anyone here owned a Gigabyte card in the past? Do they make good video cards?


----------



## nycgtr

dentnu said:


> Yeah, water-cooling is not worth it unless you're looking for less noise, etc. Overclocking will be the same as on air due to hitting the power limit. Hopefully we will be able to mod the BIOS very soon to remove or raise the power limits on these cards. The more testing that gets done on Turing, the less impressed I am by it. Guess this is what happens when you are king of the GPU market.


Well, the fact that Turing's boost drops clocks at a much higher temperature is probably the biggest knock against blocking these cards. I mean, I haven't used the included cooler on a GPU for years, but I know AIB models with good cooling setups are usually fairly quiet while keeping a card in the 70s range with good airflow.


----------



## xer0h0ur

nycgtr said:


> https://youtu.be/ORJfhlTAgfU
> 
> Goes to my point earlier.


I like Steve, and I watched him overclock that very same FE card on the air cooler on stream, but he is slightly misrepresenting his own findings by not mentioning that he needed to keep those two fans at full blast to reach his overclocks, and he was still hitting well into the 70s anyway, as this video shows. This was also while benchmarking on an open-air setup. Good luck achieving the same results inside a case. Furthermore, good luck keeping the GPU cool once those RT and Tensor cores are actually doing something in games. I still firmly believe people are grossly underestimating the potential benefit of a full-cover waterblock.

Again, we already know the cards are pegging the power limit, so you're not going to achieve higher overclocks. However, that doesn't mean you won't see legitimate results: you can see clear as day in his charts that the air-cooled FE card runs significantly lower clocks under sustained load versus keeping it cool. Which is all I have been saying from the get-go...


----------



## nycgtr

xer0h0ur said:


> I like Steve, and I watched him overclock that very same FE card on the air cooler on stream, but he is slightly misrepresenting his own findings by not mentioning that he needed to keep those two fans at full blast to reach his overclocks, and he was still hitting well into the 70s anyway, as this video shows. This was also while benchmarking on an open-air setup. Good luck achieving the same results inside a case. Furthermore, good luck keeping the GPU cool once those RT and Tensor cores are actually doing something in games. I still firmly believe people are grossly underestimating the potential benefit of a full-cover waterblock.
> 
> Again, we already know the cards are pegging the power limit, so you're not going to achieve higher overclocks. However, that doesn't mean you won't see legitimate results: you can see clear as day in his charts that the air-cooled FE card runs significantly lower clocks under sustained load versus keeping it cool. Which is all I have been saying from the get-go...


All my GPUs have been blocked for about 10 years now. Even my server has blocked GPUs, where it's not necessary. AIB coolers have come a long way since back then. I had about seven different 1080 Ti AIB models, and they were pretty impressive in terms of noise and performance, but in a case they always hit near 79-80 °C unless I cranked the fans to 80-85% or more to keep them in the low 70s. However, maintaining the 70s with almost no audible noise coming out of a case wasn't too difficult either, which is all you need to get max boost at any kind of ambient on Turing, it seems. I am not sure how well NV's new reference cooler performs, but reviews so far show it's quite an improvement. I don't buy any reference-based cards outside of Titans, as there's no choice there. IMO Turing will be short-lived, and with the price already being fairly high, adding the cost of a block for no real gain (if you have a good cooler or sufficient airflow) just makes the value even worse. Then again, I never saw watercooling as a value proposition. However, in this case it seems we can maintain similar clocks, similar performance, and similar noise levels without a block, which really makes it a hard-to-justify addition. Better to keep that money toward the next high-priced GPU from NV.


----------



## changboy

dentnu said:


> That's an interesting-looking card. I hate RGB, as it makes PCs look like a Christmas tree, which is ugly in my opinion, but the RGB on this card's fans looks very cool. Has anyone here owned a Gigabyte card in the past? Do they make good video cards?


I own the Gigabyte AORUS 1080 Ti with waterblock and RGB, and it's the best video card I ever bought. It clocks at 2062 MHz and never moves when I game, and the temperature never exceeds 35 °C on my loop. I've never run into a problem with my card, never had a crash while gaming; it's the best 1080 Ti you can get.


----------



## changboy

In the pic I just posted, the lighting is washed out by the highlights, but in fact all the lights are really red, and it looks a lot better than the pic shows.


----------



## jase78

xer0h0ur said:


> nycgtr said:
> 
> 
> 
> https://youtu.be/ORJfhlTAgfU
> 
> Goes to my point earlier.
> 
> 
> 
> I like Steve, and I watched him overclock that very same FE card on the air cooler on stream, but he is slightly misrepresenting his own findings by not mentioning that he needed to keep those two fans at full blast to reach his overclocks, and he was still hitting well into the 70s anyway, as this video shows. This was also while benchmarking on an open-air setup. Good luck achieving the same results inside a case. Furthermore, good luck keeping the GPU cool once those RT and Tensor cores are actually doing something in games. I still firmly believe people are grossly underestimating the potential benefit of a full-cover waterblock.
> 
> Again, we already know the cards are pegging the power limit, so you're not going to achieve higher overclocks. However, that doesn't mean you won't see legitimate results: you can see clear as day in his charts that the air-cooled FE card runs significantly lower clocks under sustained load versus keeping it cool. Which is all I have been saying from the get-go...
Click to expand...

I remember reading all the same ***** when Pascal came out, lol. We all put blocks on anyway, and I for one never regretted it. Still don't.
Who cares what people say on a forum? Most are jealous and wish they had 15 or 16 hundred to dump into their hobby anyway.


----------



## Glerox

xer0h0ur said:


> I like Steve, and I watched him overclock that very same FE card on the air cooler on stream, but he is slightly misrepresenting his own findings by not mentioning that he needed to keep those two fans at full blast to reach his overclocks, and he was still hitting well into the 70s anyway, as this video shows. This was also while benchmarking on an open-air setup. Good luck achieving the same results inside a case. Furthermore, good luck keeping the GPU cool once those RT and Tensor cores are actually doing something in games. I still firmly believe people are grossly underestimating the potential benefit of a full-cover waterblock.
> 
> Again, we already know the cards are pegging the power limit, so you're not going to achieve higher overclocks. However, that doesn't mean you won't see legitimate results: you can see clear as day in his charts that the air-cooled FE card runs significantly lower clocks under sustained load versus keeping it cool. Which is all I have been saying from the get-go...


Yeah you have some good points here.


----------



## Nico67

xer0h0ur said:


> I like Steve, and I watched him overclock that very same FE card on the air cooler on stream, but he is slightly misrepresenting his own findings by not mentioning that he needed to keep those two fans at full blast to reach his overclocks, and he was still hitting well into the 70s anyway, as this video shows. This was also while benchmarking on an open-air setup. Good luck achieving the same results inside a case. Furthermore, good luck keeping the GPU cool once those RT and Tensor cores are actually doing something in games. I still firmly believe people are grossly underestimating the potential benefit of a full-cover waterblock.
> 
> Again, we already know the cards are pegging the power limit, so you're not going to achieve higher overclocks. However, that doesn't mean you won't see legitimate results: you can see clear as day in his charts that the air-cooled FE card runs significantly lower clocks under sustained load versus keeping it cool. Which is all I have been saying from the get-go...


Yeah, the waterblock is not going to let you clock much higher, but it will prevent any thermal throttling and reduce power consumption, allowing you to maintain higher clocks. Gamers Nexus showed a 160 MHz decay, which was probably 335 MHz overclocked, although, as they said, the power limitation will vary. They also weren't cooling the VRM down to get more power reduction.
der8auer's video goes as far as saying a 20 °C drop cut input current from 18.5 A to 17.0 A, which was probably 20-30 W. That's more than you're likely to get from a better AIB board. It's probably going to be a bit less on 40 °C water, but with a chiller I get 26 °C max while gaming with how I run my 1080 Ti, and it helped significantly.

At the end of the day 20-30 W may not help much under heavy load, but if you add that to, say, +130%, then it may be as good as you can get.


I really hate the way they market cards; they should spell out target power and max power. It's very misleading conduct when you're trying to compare products. Speaking of which, the ASUS Strix has three 2080 Ti versions (11G, A11G and O11G); who even knows who's reviewing what anymore.
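As a sanity check on those current figures: assuming the 18.5 A to 17.0 A drop is measured on the 12 V input rails (an assumption; only input current was quoted), the savings work out as follows.

```python
# Rough check of the quoted savings: if a 20 C temperature drop cut
# input current from 18.5 A to 17.0 A, and that current is on the
# 12 V input rails (an assumption), the saved power is:

rail_v = 12.0
saved_w = rail_v * (18.5 - 17.0)
print(saved_w)  # 18.0 W from the quoted rails alone
```

That lands in the same ballpark as the 20-30 W figure above once any other draw is included.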


----------



## xer0h0ur

Nothing earth-shattering here. Just showing the EK plexi + nickel block and black backplate for anyone who was wondering.


----------



## mollet

Hi, got my Palit yesterday. Unmounted it and put the EK waterblock on. Most of the time you can just use the original backplate and don't need to buy the EK one.

I put some LM on two shunts under the 8-pin connectors and on the back side. I think it was 518 on the front and 519 on the back. Crazy thing, but there is no 517 shunt, only 516 for PCIe.


----------



## xer0h0ur

You may be playing with fire shunt modding the PCIe shunt there. You're braver than I, sir. Was that GPU-Z shot the result of GPU Boost doing its thing automatically, or of you overclocking?

Either way, bravo, sir. Any tips for someone like me who may want to shunt mod their card as well?


----------



## mollet

I didn't do the shunt mod on the PCIe one, only on the 8-pins; the PCIe one is 516. I just touched 518 + 519.

I overclocked with a +110 offset and +1000 memory (the max in MSI Afterburner).

Tips? Yeah, one tip I have after destroying my Titan V: put something around the shunt, like Vaseline, and don't put too much LM on it, because it will drop onto the solder points.
Gallium will eat the solder points and your shunts will fall off one day. This happened to me with my Titan V after only one month.

Luckily I found a very talented guy who could repair it.


----------



## xer0h0ur

Yeah, from der8auer's shunt mod I had figured out which were the 8-pin shunts.

Your wording confused me, and circling RS16 also made me believe you had modded it.

I was like, god damn, this man isn't afraid to send it xD


----------



## Shadowsong

Need some advice: do you think it's worth waiting for an EVGA FTW Hydro Copper for the better PCB, or should I just go for a reference-design board and pick up an EK block for, I'd assume, slightly better cooling?


----------



## mollet

I think this depends on your needs. People buying the Hydro Copper are mostly afraid to install the waterblock themselves.
The only card worth buying as a non-reference design is the HOF 2080 Ti Lab WC edition, as it's unlocked in power target and voltage.
Maybe wait for a Kingpin model.

I've used a lot of EK blocks so far. Never had any problems. Cooling is always around 45 °C at max overclock, no thermal throttling.
My question is: what would a better PCB do if there are still power and voltage limitations?


----------



## Shadowsong

mollet said:


> I think this depends on your needs. People buying the Hydro Copper are mostly afraid to install the waterblock themselves.
> The only card worth buying as a non-reference design is the HOF 2080 Ti Lab WC edition, as it's unlocked in power target and voltage.
> Maybe wait for a Kingpin model.
> 
> I've used a lot of EK blocks so far. Never had any problems. Cooling is always around 45 °C at max overclock, no thermal throttling.
> My question is: what would a better PCB do if there are still power and voltage limitations?


Yep, good point regarding the voltage restrictions; an extra phase design isn't going to help much if it can't be used! I have no real issue installing my own blocks; I currently have two Titans with EK blocks.

I may just look at an EVGA reference board and pick up the EK block then. Are there any other cards you know of that have unlocked power targets?


----------



## Xeq54

Ahh, lucky people with their memory OC... my Ti can do +700 max.

Thanks for the shunt + water results; it seems that shunt modding does grant about 150 MHz on top of stock thanks to the removal of the power limit, so there is a point in doing it. Can't wait for my block.


----------



## alanthecelt

Got my Alphacool block on Monday. No idea when my Ti is arriving.
Might change the water terminals to come out of the block:
https://www.overclock.net/forum/attachment.php?attachmentid=220244&thumb=1
https://www.overclock.net/forum/attachment.php?attachmentid=220246&thumb=1


----------



## ENTERPRISE

I think I will be going with the Gigabyte Windforce OC. I ordered my NVLink bridge already; nothing worse than ordering a multi-GPU setup only to find you have no bridge, lol.


----------



## ClashOfClans

littledonny said:


> BigMack70 said:
> 
> 
> 
> Bummer. Sounds like you probably beat me by 5 minutes or so; I was hoping that those of us who were really early in line would at least get them by the 27th.
> 
> This is probably the last time I ever order direct from Nvidia. This is highly unacceptable.
> 
> 
> 
> Highly unacceptable? Good God man, calm down
Click to expand...

Why? He just paid $1,300 for a graphics card. Part of that price is getting in on the action early and enjoying it before others can.


----------



## tconroy135

ClashOfClans said:


> Why? He just paid $1,300 for a graphics card. Part of that price is getting in on the action early and enjoying it before others can.


Agree 100%


----------



## GraphicsWhore

alanthecelt said:


> Got my Alphacool block on Monday. No idea when my Ti is arriving.
> Might change the water terminals to come out of the block:
> https://www.overclock.net/forum/attachment.php?attachmentid=220244&thumb=1
> https://www.overclock.net/forum/attachment.php?attachmentid=220246&thumb=1


Can you post some more pics of the block? Would appreciate it.

I pre-ordered mine from modmymods.com and Monday they told me Alphacool had failed to ship them any of this model (only the other two they offer) and weren't sure when they'd be able to. Not sure what that's about but I already have my EK block. I got both because I was going to compare them and keep the one I liked more. Also still don't have my Ti but may go to MicroCenter early tomorrow and if I can snag one I'm debating whether to even bother waiting for the Alphacool since I don't know when I might get it and I'd like to get the card blocked this weekend. I may be able to make a decision from your pics whether to wait. Thanks.


----------



## nycgtr

Got a shipping notice today from Newegg for 1x Trio, 1x Gigabyte OC, and 1x Zotac AMP, all from CA.

Nothing from my Amazon order, which has a Trio and a Gigabyte OC; it just says they will email when available. I preordered on Amazon the moment it came up, so it is first batch; it just seems like they got none.


----------



## alanthecelt

Graphics***** said:


> Can you post some more pics of the block? Would appreciate it.
> 
> I pre-ordered mine from modmymods.com and Monday they told me Alphacool had failed to ship them any of this model (only the other two they offer) and weren't sure when they'd be able to. Not sure what that's about but I already have my EK block. I got both because I was going to compare them and keep the one I liked more. Also still don't have my Ti but may go to MicroCenter early tomorrow and if I can snag one I'm debating whether to even bother waiting for the Alphacool since I don't know when I might get it and I'd like to get the card blocked this weekend. I may be able to make a decision from your pics whether to wait. Thanks.


That's all I have right now, I'm afraid; I won't be in till late tonight...
They sent me an email last week with some last-minute changes, confirming I wanted one for a Ti, and updated dispatch to Friday; it arrived directly to me in the UK on Monday.


----------



## sxr7171

xer0h0ur said:


> nycgtr said:
> 
> 
> 
> https://youtu.be/ORJfhlTAgfU
> 
> Goes to my point earlier.
> 
> 
> 
> I like Steve, and I watched him overclock that very same FE card on the air cooler on stream, but he is slightly misrepresenting his own findings by not mentioning that he needed to keep those two fans at full blast to reach his overclocks, and he was still hitting well into the 70s anyway, as this video shows. This was also while benchmarking on an open-air setup. Good luck achieving the same results inside a case. Furthermore, good luck keeping the GPU cool once those RT and Tensor cores are actually doing something in games. I still firmly believe people are grossly underestimating the potential benefit of a full-cover waterblock.
> 
> Again, we already know the cards are pegging the power limit, so you're not going to achieve higher overclocks. However, that doesn't mean you won't see legitimate results: you can see clear as day in his charts that the air-cooled FE card runs significantly lower clocks under sustained load versus keeping it cool. Which is all I have been saying from the get-go...
Click to expand...

Same deal with the Titan XP imho. I really didn't get higher clocks, ok maybe a little, but it didn't make any difference in use.

That was however with my case open on air and fans noisy as heck. When I put the block on it, I could close the case. I could have quiet. And frankly I think it was boosting more stably and staying up at higher clocks.

There was a difference in practical terms under sustained gaming with a closed case. My temps stayed under 53C in general. It's probably safer for the chip that way.

The power limit is there for a reason. These things get exponentially hotter for minor marginal clock gains. One minute you're drawing 260W, and 25MHz later you're drawing 300W. Another 25MHz and you're at 380W.

Unlocked power limits make sense for LN2 demonstrations. For people who actually play games, they've set the power to a point where we will not be burning up cards to get 1-3% gains in FPS.

I think none of this is new to us. We know what a block does from the last generation: more stable clock rates, a quieter, cooler machine. You likely will not gain more than 1-2% peak FPS, but you will get more consistent, smoother frame rates. NVIDIA has put sensible power limits in place for people who use the card to play games. For pure benchmarking, maybe there will be some Kingpin version.
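The "exponentially hotter" point above can be sketched with the standard dynamic-power approximation P ∝ f·V², since chasing higher clocks also requires more voltage. This is a toy model only; the baseline and the clock/voltage pairs below are made up for illustration, not measured 2080 Ti data:

```python
# Toy model of dynamic power scaling, P ~ f * V^2. The baseline and the
# clock/voltage pairs are hypothetical, chosen only to illustrate why a
# small clock bump can cost a disproportionate amount of power.

def scaled_power(base_w, base_mhz, base_v, mhz, v):
    """Scale a baseline board power by the f * V^2 dynamic-power model."""
    return base_w * (mhz / base_mhz) * (v / base_v) ** 2

BASE = (260.0, 1900.0, 1.000)  # 260 W at 1900 MHz, 1.000 V (assumed)

for mhz, volts in [(1925, 1.025), (1950, 1.055), (1975, 1.090)]:
    watts = scaled_power(*BASE, mhz, volts)
    print(f"{mhz} MHz @ {volts:.3f} V -> ~{watts:.0f} W")
```

Under these assumed voltage steps, each 25 MHz costs roughly 20-25 W more, which is why the power cap trips long before thermals do.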


----------



## GraphicsWhore

Foxrun said:


> They told me that multiple people have been calling to see if they were going to have some in stock... Im going an hour before they open. They are getting the EVGA XC's so that 130% power limit is worth waiting for.


So after calling yesterday and being told there would be a few in tomorrow, I call today to ask how many, and now a different person says that's pre-order stock, they have no idea when they're getting non-pre-order stock, but she heard "maybe in 2 weeks." Lmao. Going to call one more time later to try and get confirmation. One of the people I've spoken to clearly has no clue, so I need a tiebreaker.

Meanwhile, on the EVGA forums someone says they pre-ordered Aug 24th directly from EVGA and are getting their card tomorrow. Many of us who pre-ordered the same card on Amazon on the 20th don't even have a f'ing ETA.

This is a ridiculous launch.


----------



## xer0h0ur

I'm in it to game. Obviously though I am on this forum for a reason as well. I get my fun out of pushing the limits of the tech, and then I use it for what it's meant for: gaming. To that end, a waterblock is damn useful in maintaining my clocks over an entire gaming session.


----------



## sxr7171

nycgtr said:


> xer0h0ur said:
> 
> 
> 
> I like Steve and I watched him overclock that very same FE card on the air cooler on stream but he is slightly misrepresenting his own findings by not mentioning that he needed to keep those two fans on at full blast to be able to reach his overclocks and still was hitting well into the 70's anyways as this video shows. This also while benchmarking on an open air setup. Good luck achieving the same results inside a case. Furthermore good luck keeping the GPU cool once those RT and Tensor cores are actually doing something in games. I still wholesale believe people are grossly underestimating the potential benefit to a full cover waterblock.
> 
> Again, we already know the cards are pegging the power limit so you're not going to achieve higher overclocks. However that doesn't mean you won't see legitimate results as you can see clear as day in his charts that the air cooled FE card runs significantly lower clocks under sustained load versus keeping it cool. Which is all I have been saying from the get go...
> 
> 
> 
> All my gpus have been blocked for about 10 years now. Even my server has blocked gpus when it's not necessary. AIB coolers have come a long way since back then. Of my 1080 Ti AIBs, I had about 7 different models; they were pretty impressive in terms of noise and perf, but in a case they always hit near 79-80 unless I cranked the fans to 80-85% or more to keep them in the low 70s. However, maintaining 70s with nearly no audible noise coming out of a case wasn't too difficult either, which is all you need to get max boost at any kind of ambient for Turing, it seems. I am not sure how well NV's new ref cooler performs, but reviews so far show it's quite an improvement. I don't buy any ref-based cards outside of Titans, as there's no choice there. Imo Turing will be short lived, and with the price already being fairly high, adding the cost of a block for no real gain (if you have a good cooler or sufficient airflow) just makes the value even worse. Then again, I never saw watercooling as a value proposition. However, in this case it seems we can maintain similar clocks, similar perf, and similar noise levels without a block, which really makes it a hard-to-justify addition. Better to keep that money towards the next high-priced gpu from NV.
Click to expand...

I have to concede your point here as well. My second rig is a 1080 Ti in an eGPU case. The Titans have that ridiculous blower. The 1080 Ti has a triple fan. Surprisingly I can have it at 2050MHz, reasonably quiet and staying at 58C. These cards with the new air coolers should be okay on air. The only real benefit would be quietness. My eGPU case is the Akitio Node Pro and it has great airflow.

Still, even with that hybrid (Titan X) I was always worried about running that blower enough to keep the VRMs and RAM cool. So that detracted from the quietness factor.

I think ultimately this gen some of the air coolers will get the job done for me. If anything I'd entertain an aftermarket air setup with 120mm fans blowing onto the card. Quieter fans pushing lots of air. In fact I have my laptop sitting on a custom dual 140mm fan setup that is very quiet for how much air it pushes, and it keeps my laptop/CPU under 88C, which is incredible for an ultraportable.


----------



## Esenel

mollet said:


> I didn't do the shunt mod on the PCIe one, only on the 8-pins. The PCIe one is 516; I just touched 518+519.
> 
> I overclocked with a +110 offset and +1000 memory (max in MSI Afterburner).


But this would mean the shunt mod is also pointless on the Ti?
You can see +110 and +1000 without it too.
Or does it no longer clock down, due to there being no power wall?

How much has the consumption increased?
Thanks and have fun with your card!


----------



## raider89

Graphics***** said:


> So after calling yesterday and being told there would be a few in tomorrow, I call today to ask how many, and now a different person says that's pre-order stock, they have no idea when they're getting non-pre-order stock, but she heard "maybe in 2 weeks." Lmao. Going to call one more time later to try and get confirmation. One of the people I've spoken to clearly has no clue, so I need a tiebreaker.
> 
> Meanwhile, on the EVGA forums someone says they pre-ordered Aug 24th directly from EVGA and are getting their card tomorrow. Many of us who pre-ordered the same card on Amazon on the 20th don't even have a f'ing ETA.
> 
> This is a ridiculous launch.


Let us know what they say the next time you call, I'm getting the same feedback.


----------



## xer0h0ur

Esenel said:


> But this would mean the shunt mod is also pointless on the Ti?
> You can see +110 and +1000 without it too.
> Or does it no longer clock down, due to there being no power wall?
> 
> How much has the consumption increased?
> Thanks and have fun with your card!


Unless I am misreading something here, he is saying Afterburner will only allow him +110 at most. I don't know why that would be though. I've easily seen 2080 Ti's get overclocked past his reported clock without shunt modding.


----------



## Xeq54

Esenel said:


> But this would mean the shunt mod is also pointless on the Ti?
> You can see +110 and +1000 without it too.
> Or does it no longer clock down, due to there being no power wall?
> 
> How much has the consumption increased?
> Thanks and have fun with your card!


The Afterburner values for core offset do not matter due to the power limit. I can do +100 or +150, but it results in the same average clock under sustained gaming load. It fluctuates around 1875-1900MHz. Higher sustained clocks are not possible even with the power slider at max.


----------



## Rob w

xer0h0ur said:


> Unless I am misreading something here, he is saying Afterburner will only allow him +110 at most. I don't know why that would be though. I've easily seen 2080 Ti's get overclocked past his reported clock without shunt modding.


I watch a lot of the GN vids, but Steve has been pretty negative about this card right from the start?
I have a date for mine now, 5th-9th Oct, so I will wait. I truly believe this card is capable of a lot more!

Edit: he was just the same about the Titan V when he previewed that card. (I think he is being anti-NV?)


----------



## Esenel

Xeq54 said:


> Esenel said:
> 
> 
> 
> But this would mean the shunt mod is also pointless on the Ti?
> You can see +110 and +1000 without it too.
> Or does it no longer clock down, due to there being no power wall?
> 
> How much has the consumption increased?
> Thanks and have fun with your card!
> 
> 
> 
> The Afterburner values for core offset do not matter due to the power limit. I can do +100 or +150, but it results in the same average clock under sustained gaming load. It fluctuates around 1875-1900MHz. Higher sustained clocks are not possible even with the power slider at max.
Click to expand...

Yes. That is what I am saying. He did a shunt mod to overcome the power limit, but ends up at the same level as everybody else 😉
So the shunt mod would have no use, and the Turing chip has a limitation there unless you do LN2.
Read some posts back.


----------



## Xeq54

Rob w said:


> Edit; he was just the same over the titan V when he previewed that card. ( I think he is being anti NV ?)


He is a YouTuber. Currently it is trending to be anti-NV, so he is too, for more views and comments; the YouTube algorithm works just like that.


----------



## mollet

Xeq54 said:


> The Afterburner values for core offset do not matter due to the power limit. I can do +100 or +150, but it results in the same average clock under sustained gaming load. It fluctuates around 1875-1900MHz. Higher sustained clocks are not possible even with the power slider at max.


There are 3 limitations on the card.
First is temperature, which has been solved by the waterblock.
Second is the power limitation (TDP), which has been solved by the shunt mod.
The last limitation is the vcore, which cannot be changed.

So you say it doesn't make a difference, but it does. The card I have already reaches the power limit at stock settings, always limiting around 1900MHz.
I can do +200 in Afterburner but it doesn't matter. As soon as the card is near the power limit target of 115%, the card will clock down, no matter how much offset you give it.
That's why I shunt modded it: to prevent the card from reaching TDP and downclocking too early.
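The shunt mod described here works because the power controller infers power from the voltage drop across a shunt of known resistance; stacking an equal-value shunt in parallel halves the effective resistance, so the card reports half the real power. A minimal sketch, assuming a 5 mΩ stock shunt and a 12 V rail (typical values, not taken from this card's schematic):

```python
# Why stacking an equal shunt halves the reported power: the controller
# measures V_drop across the shunt and infers current as V_drop / R_assumed,
# but after the mod the real resistance is R_assumed / 2. Illustrative values.

R_STOCK = 0.005  # 5 mOhm stock shunt (assumed)

def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def reported_power(actual_w, r_actual, r_assumed=R_STOCK, v_rail=12.0):
    current = actual_w / v_rail              # real current through the shunt
    v_drop = current * r_actual              # voltage drop the controller measures
    reported_current = v_drop / r_assumed    # controller still assumes the stock value
    return reported_current * v_rail

r_modded = parallel(R_STOCK, R_STOCK)        # second 5 mOhm stacked on top -> 2.5 mOhm
print(reported_power(300.0, r_modded))       # -> 150.0: card thinks it's at half power
```

This is also why the mod is risky: the card's failsafe and throttle logic now act on a power figure that is half of what the VRM is actually delivering.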


----------



## xer0h0ur

mollet said:


> There are 3 limitations on the card.
> First is temperature, which has been solved by the waterblock.
> Second is the power limitation (TDP), which has been solved by the shunt mod.
> The last limitation is the vcore, which cannot be changed.
> 
> So you say it doesn't make a difference, but it does. The card I have already reaches the power limit at stock settings, always limiting around 1900MHz.
> I can do +200 in Afterburner but it doesn't matter. As soon as the card is near the power limit target of 115%, the card will clock down, no matter how much offset you give it.
> That's why I shunt modded it: to prevent the card from reaching TDP and downclocking too early.


God damn, that is hard news to swallow. I haven't seen a single reviewer talk about this. Why? I would imagine the YouTubers would be eating this up for breakfast, lunch and dinner as another indictment of how badly Turing is clocking.

So just to be 100% clear: you're saying that it's impossible to sustain anything even close to 2000MHz without shunt modding?


----------



## Xeq54

mollet said:


> There are 3 limitations on the card.
> First is temperature, which has been solved by the waterblock.
> Second is the power limitation (TDP), which has been solved by the shunt mod.
> The last limitation is the vcore, which cannot be changed.
> 
> So you say it doesn't make a difference, but it does. The card I have already reaches the power limit at stock settings, always limiting around 1900MHz.
> I can do +200 in Afterburner but it doesn't matter. As soon as the card is near the power limit target of 115%, the card will clock down, no matter how much offset you give it.
> That's why I shunt modded it: to prevent the card from reaching TDP and downclocking too early.


? That is exactly what I am saying. I was just describing how the card behaves without the mod, to explain to people that it does not really matter what you do without shunt modding the card; all the cards are the same.

I am sure shunt modding will help. I did the mod on my 1080, and I'm going to do it myself when I get the block. But most people don't want to do the mod for 100MHz.


----------



## mollet

Yes, to be clear: without power modding the card you will always hit the power limit and the card will throttle down, no matter what offset you give in Afterburner, and no matter whether you're running the stock cooler, the FE cooler, a hybrid or water.

The first thing to do on this generation is the shunt mod: either solder another 5mOhm shunt on top of the original one, so the card reads exactly half the power, or use liquid metal, which may break your card as shown in my other post and modify the power limit in an unforeseen way.
There is a failsafe mode on the card: if the card thinks it is drawing too little power, it will reduce speed to around 300MHz.

I'll do a double-decker shunt mod on Friday evening on the 2080 Ti. I will make a video and do some explanation.
I recommend everybody NOT to use liquid metal from Thermal Grizzly, because it has the most gallium in it and is the most dangerous for shunt modding.

Temperature is not the first issue on this generation of cards.


----------



## Rob w

mollet said:


> Yes, to be clear: without power modding the card you will always hit the power limit and the card will throttle down, no matter what offset you give in Afterburner, and no matter whether you're running the stock cooler, the FE cooler, a hybrid or water.
> 
> The first thing to do on this generation is the shunt mod: either solder another 5mOhm shunt on top of the original one, so the card reads exactly half the power, or use liquid metal, which may break your card as shown in my other post and modify the power limit in an unforeseen way.
> There is a failsafe mode on the card: if the card thinks it is drawing too little power, it will reduce speed to around 300MHz.
> 
> I'll do a double-decker shunt mod on Friday evening on the 2080 Ti. I will make a video and do some explanation.
> I recommend everybody NOT to use liquid metal from Thermal Grizzly, because it has the most gallium in it and is the most dangerous for shunt modding.
> 
> Temperature is not the first issue on this generation of cards.


I couldn't agree more!
I will be benching at stock first (when it arrives?), and then doing the shunt mod; 5mOhm resistors are sat waiting.
It will be interesting to see your video.
He did a tidy repair on your Titan by the way.


----------



## Ford8484

Xeq54 said:


> He is a youtuber. Currently it is trending to be anti NV so he is too for more views and comments as the youtube algorithm works just like that.


Exactly. Whatever trend is cool to hate on in the community these days... Don't get me wrong, ideally the 2080 Ti should be the 2080 for $800 and the 2080 should be the 2070 for $600, going by price and performance. But this is what you get when AMD doesn't up their game at the higher end (particularly a 4K-worthy card).


----------



## xer0h0ur

Man, I'm telling you. That is so weird. No one. I mean no one is saying these things in any reviews. They're all leading you to believe clock decay is only because of thermals.


----------



## Dayaks

xer0h0ur said:


> Man, I'm telling you. That is so weird. No one. I mean no one is saying these things in any reviews. They're all leading you to believe clock decay is only because of thermals.


TechPowerUp said every single 2080 Ti was power limited.

What I would like to see is how much additional power the RTX features pull. That's a significant portion of the die not being used in any of these reviews.

I would love it if someone could run a traditional benchmark below the power limit and see if one of the RTX benchmarks pegs it.


----------



## xer0h0ur

Does double-stack soldering 5mOhm resistors affect the fitment of the EK block, mollet? Please let us know. That would be the cleanest and best shunt mod by far if it fits alright. I am not worried about the block shorting anything, because I would be putting electrical tape over the double stack to insulate it. Only worried about the fitment.


----------



## Rob w

xer0h0ur said:


> Man, I'm telling you. That is so weird. No one. I mean no one is saying these things in any reviews. They're all leading you to believe clock decay is only because of thermals.


Thermals do play a major part though. The card throttles when it reaches the power target first, before any temperature throttling has happened, so the mod makes it think it's drawing less power, thereby allowing it to pull more without throttling. Then you need to keep temps down as you OC it, hence water.
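The limiter order described above (power target trips first, then temperature) can be sketched as a toy boost loop. The thresholds and step size below are invented for illustration and don't reflect NVIDIA's actual Boost algorithm:

```python
# Toy sketch of GPU Boost-style limiter priority: back off as soon as ANY
# limit is hit, and on these cards the power target is usually reached
# before the thermal target. All numbers are made up for illustration.

def next_clock(clock_mhz, power_w, temp_c,
               power_limit_w=300.0, temp_limit_c=84.0, step_mhz=15):
    if power_w >= power_limit_w:
        return clock_mhz - step_mhz, "power limited"
    if temp_c >= temp_limit_c:
        return clock_mhz - step_mhz, "thermally limited"
    return clock_mhz + step_mhz, "boosting"

# Water keeps temps low, but the power cap still forces a downclock:
print(next_clock(1950, 305.0, 45.0))  # -> (1935, 'power limited')
# Under the power cap but hot on air, temperature becomes the binding limit:
print(next_clock(1950, 280.0, 86.0))  # -> (1935, 'thermally limited')
```

The point of the sketch: a waterblock only removes the second check, so the shunt mod (which changes what `power_w` reads as) is what actually moves the first one.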


----------



## Rob w

xer0h0ur said:


> Does double-stack soldering 5mOhm resistors affect the fitment of the EK block, mollet? Please let us know. That would be the cleanest and best shunt mod by far if it fits alright. I am not worried about the block shorting anything, because I would be putting electrical tape over the double stack to insulate it. Only worried about the fitment.


That was the problem I had with my Titan, so I would assume there will be very little space on this card also! (They are similar?)
Edit: I am sure we'll see on the video?


----------



## changboy

A tech from NVIDIA explained that not every sample can handle higher voltage without degrading faster. So they built the card like this because it has to be like this. Even if you mod it, it will never bring you what you expected, and it also voids your warranty; for a $1200 card, which here for me is $1800 Canadian, you're better off keeping it the way you bought it.

This is why it's better to buy the model you need, or wait for it to go on sale. We all know there will be other 2080 Tis later with everything you need. So I will wait for the card I want; anyway, there's no game I actually play that has RTX features.


----------



## nycgtr

Well newegg shipped. Got 2 incoming monday. Amazon still on derp status.


----------



## Addsome

Hey guys, I am currently debating between the Zotac 2080ti AMP and EVGA 2080ti XC Ultra. Keep in mind the AMP is listed at $1599.00 Canadian and the XC Ultra is listed at $1699.00 Canadian. I know the AMP has better cooling capabilities as it is 3 fans vs the XC Ultra's 2 fans. What I like about the Ultra is the 130% power limit. The AMP is max 115%. Would the Ultra be worth the extra $100? Can I flash a bios to the AMP from a different card to get 130% power limit? Will Zotac themselves release a bios update like EVGA? What do you guys suggest? Thanks.


----------



## xer0h0ur

Addsome said:


> Hey guys, I am currently debating between the Zotac 2080ti AMP and EVGA 2080ti XC Ultra. Keep in mind the AMP is listed at $1599.00 Canadian and the XC Ultra is listed at $1699.00 Canadian. I know the AMP has better cooling capabilities as it is 3 fans vs the XC Ultra's 2 fans. What I like about the Ultra is the 130% power limit. The AMP is max 115%. Would the Ultra be worth the extra $100? Can I flash a bios to the AMP from a different card to get 130% power limit? Will Zotac themselves release a bios update like EVGA? What do you guys suggest? Thanks.


If the power limit is the most important factor for you then it makes no sense to put your eggs into one basket hoping for a BIOS that may never come. We already know that you can't use another card's BIOS thus far. So unless someone manages to modify the card's own BIOS then I wouldn't be too optimistic about that prospect.


----------



## Emmett

Wow. Just got a email from microcenter my pre-order will not be ready to pick up tomorrow!!!


----------



## raider89

Emmett said:


> Wow. Just got a email from microcenter my pre-order will not be ready to pick up tomorrow!!!



Which card?


----------



## Addsome

xer0h0ur said:


> If the power limit is the most important factor for you then it makes no sense to put your eggs into one basket hoping for a BIOS that may never come. We already know that you can't use another card's BIOS thus far. So unless someone manages to modify the card's own BIOS then I wouldn't be too optimistic about that prospect.


I just want the card that can get me the best OC. There are literally only 1-2 reviews online for the AMP edition and none showing proper overclocking. I would much rather get the AMP because of the price, and the fact it's factory OC'd higher than the Ultra makes it seem like a better-binned chip. The only thing holding me back is the power limit. Do we have any idea how much the power limit is affecting overclocks?


----------



## Emmett

raider89 said:


> Emmett said:
> 
> 
> 
> Wow. Just got a email from microcenter my pre-order will not be ready to pick up tomorrow!!!
> 
> 
> 
> 
> Which card?
Click to expand...

The Asus turbo. Only one left to pre-order a month ago.


----------



## xer0h0ur

Addsome said:


> I just want the card that can get me the best OC. There are literally only 1-2 reviews online for the AMP edition and none showing proper overclocking. I would much rather get the AMP because of the price, and the fact it's factory OC'd higher than the Ultra makes it seem like a better-binned chip. The only thing holding me back is the power limit. Do we have any idea how much the power limit is affecting overclocks?


I can't comment at all as to binning. That is purely silicon lottery and we are just barely now finding out how good or bad the overclocks are sustained when not thermally limited and up against the power limit. The only thing I would say is to look around for the other cards that have 115 power limit to get an idea of what that may be in a best case scenario.


----------



## Addsome

xer0h0ur said:


> I can't comment at all as to binning. That is purely silicon lottery and we are just barely now finding out how good or bad the overclocks are sustained when not thermally limited and up against the power limit. The only thing I would say is to look around for the other cards that have 115 power limit to get an idea of what that may be in a best case scenario.


Ya, that's a good idea, I'll have a look at other reviews with 115%. On the spec page the AMP edition says 260W power draw while it's only 250W for the XC Ultra. Anything we can make of that? Is the AMP actually pulling more power at 115% than the XC Ultra at 130%?


----------



## mollet

If you don't want to power mod, take the one with the highest power target.


The Asus Turbo is a non-OC card, which means no binning, and on top of that it has bad cooling. I would not recommend spending money on that card. Wait a week or two more and go for an OC one.

The double shunt should fit the EK waterblock; if not, I'll cut it, as it's only acrylic glass.


----------



## kx11

Gigabyte AORUS cards should be out within 2-3 weeks.



Waterforce is most likely in November


----------



## xer0h0ur

EVGA also said Soon™ for their Hybrid 2080 Ti.


----------



## raider89

Emmett said:


> The Asus turbo. Only one left to pre-order a month ago.


Just checking: did you mean they will or won't have it tomorrow?


----------



## Emmett

raider89 said:


> Emmett said:
> 
> 
> 
> The Asus turbo. Only one left to pre-order a month ago.
> 
> 
> 
> Just checking: did you mean they will or won't have it tomorrow?
Click to expand...

Will NOT... So pissed..


----------



## GraphicsWhore

raider89 said:


> Let us know what they say the next time you call, I'm getting the same feedback.


Called both stores in my area and both were very clear: only pre-order stock coming in tomorrow - no ETA for in-store stock. So, good thing I established that and didn't show up to MicroCenter at f'ing 8am.



Foxrun said:


> They told me that multiple people have been calling to see if they were going to have some in stock... Im going an hour before they open. They are getting the EVGA XC's so that 130% power limit is worth waiting for.


I'd call your store again to make double sure they'll have anything except pre-order stock.


----------



## vmanuelgm

xer0h0ur said:


> God damn, that is hard news to swallow. I haven't seen a single reviewer talk about this. Why? I would imagine the YouTubers would be eating this up for breakfast, lunch and dinner as another indictment of how badly Turing is clocking.
> 
> So just to be 100% clear: you're saying that it's impossible to sustain anything even close to 2000MHz without shunt modding?



Yep, impossible, since without the shunt mod/soldering you'll have a downclocking/downvolting feast to around 1875-1950. Even at stock it throttles in some games and benchmarks.

I am waiting for my block to apply the power mod. Here are some videos of the Gainward GS 2080 Ti (115% max power via Afterburner = 300W, up from 260W at 100%):


https://www.youtube.com/channel/UC5khzv4OkE4Oq2FYRR_8hRA/videos?


Once you power hardmod, you can see what the OC limit is for your card. I guess most cards will do 2050-2150 on the core at 1.093V and +800-1200 on the memory.


----------



## GraphicsWhore

vmanuelgm said:


> Yep, impossible, since without the shunt mod/soldering you'll have a downclocking/downvolting feast to around 1875-1950. Even at stock it throttles in some games and benchmarks.
> 
> I am waiting for my block to apply the power mod. Here are some videos of the Gainward GS 2080 Ti (115% max power via Afterburner = 300W, up from 260W at 100%):
> 
> 
> https://www.youtube.com/channel/UC5khzv4OkE4Oq2FYRR_8hRA/videos?
> 
> 
> Once you power hardmod, you can see what the OC limit is for your card. I guess most cards will do 2050-2150 on the core at 1.093V and +800-1200 on the memory.


Any shunt guides out already for this card?


----------



## jase78

Rob w said:


> xer0h0ur said:
> 
> 
> 
> Unless I am misreading something here, he is saying Afterburner will only allow him +110 at most. I don't know why that would be though. I've easily seen 2080 Ti's get overclocked past his reported clock without shunt modding.
> 
> 
> 
> I watch a lot of the GN vids, but Steve has been pretty negative about this card right from the start?
> I have a date for mine now, 5th-9th Oct, so I will wait. I truly believe this card is capable of a lot more!
> 
> Edit: he was just the same about the Titan V when he previewed that card. (I think he is being anti-NV?)
Click to expand...




Xeq54 said:


> Rob w said:
> 
> 
> 
> Edit: he was just the same about the Titan V when he previewed that card. (I think he is being anti-NV?)
> 
> 
> 
> He is a YouTuber. Currently it is trending to be anti-NV, so he is too, for more views and comments; the YouTube algorithm works just like that.
Click to expand...

You really think so? I would agree if we were talking about Jayz, but Gamers Nexus? Idk man, I think he's just trying to represent the majority of gamers, and going from $699 to $1200 is a hard pill to swallow. I'm sure he, and all the rest of us, would like prices to come back down to reality.

I'm sure everyone here agrees it's been a rough year or so to be a gamer in need of a graphics card. Just as the mining BS ends, we get hit with even higher prices direct from NVIDIA! I think he's just reacting to that.


----------



## vmanuelgm

Graphics***** said:


> Any shunt guides out already for this card?


You can solder 0.005 Ohm resistors over the stock ones, or you can locate the points near the controller chips and add several 2-5 Ohm resistors.

Have a read:

https://xdevs.com/guide/evga_2080tixc/


----------



## Foxrun

Graphics***** said:


> Called both stores in my area and both were very clear: only pre-order stock coming in tomorrow - no ETA for in-store stock. So, good thing I established that and didn't show up to MicroCenter at f'ing 8am.
> 
> 
> 
> I'd call your store again to make double sure they'll have anything except pre-order stock.


I did yesterday, and they told me it is first come, first served.

Well, I was wrong. I spoke to 3 different people over the past three days. The first day they said only MSI and Gigabyte. The second day it was "yes, of course we will have EVGA." Now today it's "nope, we don't know when we are getting them in, sorry!" Incompetence.


----------



## GraphicsWhore

Foxrun said:


> I did yesterday, and they told me it is first come, first served.
> 
> Well, I was wrong. I spoke to 3 different people over the past three days. The first day they said only MSI and Gigabyte. The second day it was "yes, of course we will have EVGA." Now today it's "nope, we don't know when we are getting them in, sorry!" Incompetence.


Ok, this is hilarious because I just called ANOTHER store somewhat near me and that one said they WILL have non-pre-order stock but a "very, very limited number." I must have said, "To be clear: this is in-store stock, not pre-orders?" like 10 times and he confirmed it was.

What the F*@(! LOL. This store is a good 30 mins out. If I get there butt-early and then they open the doors and say, "Oh sorry it's only pre-order stock" I am going to be pissed.

I'm thinking of calling this same store back ONE more time tonight and hopefully getting a different person. If they say no it's pre-order stock, I'm done. If they say yes we will have some, I may bite the bullet and head out at like 7am.

This is a complete circus.


----------



## arrow0309

mollet said:


> Hi, got my Palit yesterday. Unmounted it and put the EK waterblock on. Most of the time you can just use the original backplate and don't need to buy the EK one.
> 
> I put some LM on two shunts under the 8-pin connectors and on the backside. I think it was 518 on the front and 519 on the back. Crazy thing, but there is no 517 shunt, only 516 for the PCIe slot.


Man, why so much TIM on the GPU?
That's way too much.
One question: are no pads over the chokes provided (intended) to be installed with the EK blocks?



xer0h0ur said:


> God damn, that is hard news to swallow. I haven't seen a single reviewer talk about this. Why? I would imagine the YouTubers would be eating this up for breakfast, lunch and dinner as another indictment of how badly Turing is clocking.
> 
> So just to be 100% clear: you're saying that it's impossible to sustain anything even close to 2000MHz without shunt modding?


I won't be so sure, however I'll play my lottery first.



mollet said:


> Yes, to be clear: without power modding the card you will always hit the power limit and the card will throttle down, no matter what offset you give in Afterburner, and no matter whether you're running the stock cooler, the FE cooler, a hybrid or water.
> 
> The first thing to do on this generation is the shunt mod: either solder another 5mOhm shunt on top of the original one, so the card reads exactly half the power, or use liquid metal, which may break your card as shown in my other post and modify the power limit in an unforeseen way.
> There is a failsafe mode on the card: if the card thinks it is drawing too little power, it will reduce speed to around 300MHz.
> 
> I'll do a double-decker shunt mod on Friday evening on the 2080 Ti. I will make a video and do some explanation.
> I recommend everybody NOT to use liquid metal from Thermal Grizzly, because it has the most gallium in it and is the most dangerous for shunt modding.
> 
> Temperature is not the first issue on this generation of cards.


How about flashing another BIOS onto the card (reference-design PCB) with more PL headroom, like the EVGA XC Ultra (130%) or maybe the new MSI Sea Hawk X (I don't know much yet about its PL, or whether it really is a reference PCB)?
Even the FE BIOS can do 123% (320W).


----------



## Foxrun

Yeah it is. I work for the state, so I had already put in for a personal day. Now I am stuck with a day off. I might just keep my Titan until the next generation.

Just to confirm: there hasn't been a reason given for the delay, right?


----------



## arrow0309

BTW:
I went with an even lower-PL board this time, the MSI Duke (only 111% max; I hope they're going to release a new BIOS, or I'll have to improvise) and this Vector:


----------



## xer0h0ur

@arrow0309

The only people who have tried to use a different vBIOS on these cards have not been able to. No confirmed flashes of another card's vBIOS anywhere yet.


----------



## jase78

arrow0309 said:


> BTW:
> I went with an even lower-PL board this time, the MSI Duke (only 111% max; I hope they'll release a new BIOS, or I'll have to improvise) and this Vector:


I thought I read on here that even the option/compromise of flashing another card's BIOS (like on Pascal) has been patched out.


----------



## GAN77

What is this curve I see on the second EK block?


----------



## Emmett

GAN77 said:


> What is this curve I see on the second EK block?


Is the curve part smooth with the rest to the left?


----------



## Ford8484

Receiving mine tomorrow... glad Newegg is shipping it out, but their shipping terms are ridiculous. I can't pick it up at the local FedEx for some magical reason, so I have to drive 45 minutes to the main location...? Very convenient for your customers, Newegg.


----------



## hotrod717

GraphicsWhore said:


> Called both stores in my area and both were very clear: only pre-order stock coming in tomorrow - no ETA for in-store stock. So, good thing I established that and didn't show up to MicroCenter at f'ing 8am.
> 
> 
> 
> I'd call your store again to make double sure they'll have anything except pre-order stock.


That's funny; at the 1080 Ti release it was 12pm before they could technically sell the cards at MC. Day 1 card ftw. Lol.


----------



## Aussiejuggalo

Dunno if any Aussies have seen it yet but these have been listed for $2000 in most places, that's for the cheapest one.


----------



## dVeLoPe

I ordered extra cards if anyone wants them at MSRP pm


----------



## hotrod717

GAN77 said:


> What is this curve I see on the second EK block?


A QC issue: it was not properly milled, and that was not caught before nickel plating. I wouldn't mount that. RMA it.


----------



## xer0h0ur

GAN77 said:


> What is this curve I see on the second EK block?


I don't know if you're asking about my block in the bottom picture. I just know I have a smooth surface on my block. I can't comment about the other block you show in the top picture.


----------



## Ford8484

wow, Newegg is complete garbage. How many people work in their customer service? 3?


----------



## Nico67

mollet said:


> Hi, got my Palit yesterday. Unmounted it and put the EK waterblock on. Most of the time you can just use the original backplate and don't need to buy the EK one.
> 
> I put some LM on two shunts under the 8-pin connectors, and on the backside. Think it was 518 on the front and 519 on the back. Crazy thing, but there is no 517 shunt, only 516 for PCIe.
> 
> 
> 
> https://www.overclock.net/forum/attachment.php?attachmentid=220228&d=1537944063


If that SMI command works, does it scale with the power-limit slider %? If not, then SMI 300W x 130% = 390W.

Also, it seems likely that all 300A chips have a default of 260W, which would be good. But I wonder whether the MSI cards, which were supposedly 300W base at 111%, are what you would get using SMI?
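The slider math being kicked around here is just percentage scaling of the board's base power target; a quick sketch (the base wattages are this thread's figures, not confirmed vendor specs):

```python
# Power-limit arithmetic as discussed above: slider % scales the base target.
# 300 W (MSI) and 260 W (assumed 300A default) are the thread's numbers.
def power_limit_w(base_w: float, slider_pct: float) -> float:
    """Effective power limit in watts for a given slider setting."""
    return base_w * slider_pct / 100.0

print(power_limit_w(300, 130))  # 390.0 -> the "300W x 130%" case above
print(power_limit_w(260, 123))  # 319.8 -> roughly the FE's 123% = 320 W
```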


----------



## GraphicsWhore

Foxrun said:


> Yeah it is. I work for the state so I already put in for a personal day. Now I am stuck with a day off. I might just keep my titan until the next generation.
> 
> Just to confirm, there hasnt been a reason for the delay right?


Called that same store back just now. “Looks like we’re only getting the preorders.”

Lmao. 

Screw it. If I don’t get my EVGA before the 8th I’ll have a FE to pick up at Best Buy. Done chasing.


----------



## Emmett

GraphicsWhore said:


> Foxrun said:
> 
> 
> 
> Yeah it is. I work for the state so I already put in for a personal day. Now I am stuck with a day off. I might just keep my titan until the next generation.
> 
> Just to confirm, there hasnt been a reason for the delay right?
> 
> 
> 
> Called that same store back just now. “Looks like we’re only getting the preorders.”
> 
> Lmao.
> 
> Screw it. If I don’t get my EVGA before the 8th I’ll have a FE to pick up at Best Buy. Done chasing.

I would just go when they open if you're not too far away.


----------



## Dayaks

Ford8484 said:


> wow, Newegg is complete garbage. How many people work in their customer service? 3?


It used to be a good site before the Chinese buyout. They have been going downhill since.

By the way, if anyone has a Best Buy account check for a 10% off birthday coupon. Works on RTX cards.


----------



## marsel

NEW Gigabyte 2080 & 2080 Ti Gaming OC BIOS - Higher Power Limit


----------



## raider89

Emmett said:


> I would just go when they open if you're not too far away.


Yeah, I'm going when they open. They just tell me they're filling pre-orders first and don't know if they will have any left, so it seems to me like they don't have a clue what they're talking about.


----------



## GraphicsWhore

Emmett said:


> GraphicsWhore said:
> 
> 
> 
> 
> 
> Foxrun said:
> 
> 
> 
> Yeah it is. I work for the state so I already put in for a personal day. Now I am stuck with a day off. I might just keep my titan until the next generation.
> 
> Just to confirm, there hasnt been a reason for the delay right?
> 
> 
> 
> Called that same store back just now. “Looks like we’re only getting the preorders.”
> 
> Lmao.
> 
> Screw it. If I don’t get my EVGA before the 8th I’ll have a FE to pick up at Best Buy. Done chasing.
> 
> 
> I would just go when they open if you're not too far away.

25 mins or so. If I take a Lyft that's ~$30 each way, so I consider that adding about 60 bucks to the cost of the card, and the XC Ultra is listed at 1319. That's about 1450 with tax for a card I already have preordered for 1270 total.

And then to sit in front of Microcenter for, say, 2 hours for potentially no reason. Plus I realized that just because they open at 10am doesn't mean the cards will be on the shelves exactly at opening, right? Don't know exactly how that works. I'm gonna pass.


----------



## raider89

GraphicsWhore said:


> 25 mins or so. If I take a Lyft that's ~$30 each way, so I consider that adding about 60 bucks to the cost of the card, and the XC Ultra is listed at 1319. That's about 1450 with tax for a card I already have preordered for 1270 total.
> 
> And then to sit in front of Microcenter for, say, 2 hours for potentially no reason. Plus I realized that just because they open at 10am doesn't mean the cards will be on the shelves exactly at opening, right? Don't know exactly how that works. I'm gonna pass.


Damn, I'm not buying a Lyft, I will drive myself lmao. Then take my happy ass to work haha. Plus whether it's on the shelf or not doesn't matter; I normally get stuff I ask for from the back anyway.


----------



## GraphicsWhore

raider89 said:


> GraphicsWhore said:
> 
> 
> 
> 25 mins or so. If I take a Lyft that's ~$30 each way, so I consider that adding about 60 bucks to the cost of the card, and the XC Ultra is listed at 1319. That's about 1450 with tax for a card I already have preordered for 1270 total.
> 
> And then to sit in front of Microcenter for, say, 2 hours for potentially no reason. Plus I realized that just because they open at 10am doesn't mean the cards will be on the shelves exactly at opening, right? Don't know exactly how that works. I'm gonna pass.
> 
> 
> 
> Damn, I'm not buying a Lyft, I will drive myself lmao. Then take my happy ass to work haha. Plus whether it's on the shelf or not doesn't matter; I normally get stuff I ask for from the back anyway.

I live in the city; 3 blocks from work and 4 blocks from the metro. I have a car sitting in the garage of my condo and I haven’t driven in so long the battery is dead and the car isn’t even registered, lol. So I literally can’t drive.

But like I said nobody can give me a straight answer about what will and won’t be at the store and so I’m leaning towards there not being anything. I’m not even blaming the people at Microcenter - I actually think they just have no idea what they’re getting or not getting because they haven’t been told by manufacturers. Seems weird for such an unknown to exist in the supply chain in 2018 but here we are.


----------



## Glerox

Just watched GN's stream. It seems there is a special software/bios to unlock the power limit, which is quite encouraging!


----------



## Baasha

Well I finally got that silly 10/05/2018 email for delivery of the GPUs. 

I'm honestly not even excited for these cards anymore.

I wonder if EVGA will make a "Classified" edition like they did until the 900 series - with 2x 8-pin and 1x 6-pin. that should be a beastly card.


----------



## Elmy

My Zotac AMP 2080 Ti will be here tomorrow. I have had my copper/plexi EK waterblocks sitting here for a couple of days now. I probably can't break my loop apart and install the new card until Saturday morning. Might stream it on my Twitch channel at twitch.tv/Elmnator if anyone is interested.


----------



## FreeElectron

I have a couple of questions.

1- Which card manufacturers provide international warranty? (I only know of EVGA)
2- The card should be in stock today (27th), correct?


----------



## arrow0309

xer0h0ur said:


> @arrow0309
> 
> The only people that have tried to use a different vBIOS on the cards have not been able to. No confirmed flashings of another cards vBIOS anywhere yet.


Are you sure? Not even the FE BIOS? Or, failing that, a different BIOS from the same brand (like the new MSI Sea Hawk X's onto my Duke)?



marsel said:


> NEW Gigabyte 2080 & 2080 Ti Gaming OC BIOS - Higher Power Limit


This.
After EVGA, they've also acted accordingly. Nice.
Hoping for MSI to do the same.


----------



## zhrooms

Retailers in Scandinavia just pushed the release back another week, to October 5, which is a Friday; if it ships that day, it would arrive on October 8, which is 18 days after the official release date.

The cards are also terribly overpriced: the EVGA XC Gaming costs €1,440, which is 40.6% higher than the price directly from EVGA (US); as a comparison, the EVGA GTX 1080 Ti SC Gaming was 29.2% higher at release last year. (Excluding tax and so on; the % difference is what I want to focus on.)

It is nuts to me that some of you here already got your card(s) a week ago; as it looks now, I will be lucky if I get mine on Oct 8, and I placed my order 10 minutes after the reveal.
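The markup comparison above is simple percentage arithmetic; a rough sketch (prices are the post's figures, and currency conversion is deliberately ignored since the post focuses only on the percentage difference):

```python
# Back-of-the-envelope for the markup comparison in the post above.
def markup_pct(local_price: float, reference_price: float) -> float:
    """Percentage by which local_price exceeds reference_price."""
    return (local_price / reference_price - 1) * 100

# 1440 at a 40.6% markup implies a reference price of about 1024:
implied_reference = 1440 / 1.406
print(round(implied_reference))                       # 1024
print(round(markup_pct(1440, implied_reference), 1))  # 40.6
```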


----------



## vmanuelgm

arrow0309 said:


> Are you sure? Not even the FE BIOS? Or, failing that, a different BIOS from the same brand (like the new MSI Sea Hawk X's onto my Duke)?
> 
> 
> 
> This
> After EVGA, they've also acted accordingly. Nice
> Hoping for the MSI to do the same.



I just flashed the Gigabyte Gaming OC BIOS onto my Gainward 2080 Ti.

I now have 122% = 366W, but my clocks are lower.

Another guy on the Gigabyte forum flashed it on his Windforce OC and reported worse performance.

If someone has the original BIOS for the Gainward 2080 Ti Phoenix GS, please email me; I didn't back it up, thinking the Gigabyte executable wouldn't succeed, just like the EVGA one.

xDDDD


----------



## Xeq54

zhrooms said:


> It is nuts to me that some of you here already got card(s) a week ago, as it looks now I will be lucky if I get my card on Oct 8, and I placed my order 10 minutes after reveal.


It was just luck. There was an extremely limited quantity available through distributors. The retailer I bought it from received just around 10 cards (ASUS Dual) exactly on "launch" day, and they were also the only retailer in the country that had them. Since that day, they have received no further 2080 Ti stock.


----------



## carlhil2

I just ordered the Zotac AMP GeForce RTX 2080 Ti online for in-store pickup from Microcenter; once I actually pick it up tomorrow night, I will cancel my NVIDIA store pre-order with its "November 5th" estimated ship date...


----------



## Addsome

Glerox said:


> Just watched GN's stream. It seems there is a special software/bios to unlock the power limit, which is quite encouraging!


Can you link where in the stream they talk about this?


----------



## ssgwright

think it's this one


----------



## cerealkeller

I skimmed through it. They had a custom BIOS for their EVGA XOC card, which he said he got from the XOC community, with a 154% power target, I believe. Which is awesome.
I preordered an FE card, though; I hope that can be done with those too.
Anyway, I checked the EVGA forums and couldn't find anything about a custom BIOS beyond the 130% power target the cards already come with.
They did a shunt mod on the FE card, but I haven't seen the results. The video is about 3.5 hours long.


----------



## Addsome

ssgwright said:


> think it's this one


Do you have a specific timestamp? Don't really feel like watching a 3+ hour video haha


----------



## cerealkeller

Addsome said:


> Do you have a specific timestamp? Don't really feel like watching a 3+ hour video haha


it's a bit after 1 hour 31 minutes


----------



## Addsome

cerealkeller said:


> it's a bit after 1 hour 31 minutes


Thanks. The EVGA XC Ultra seems like a really good card if this BIOS leaks.


----------



## raider89

got my evga 2080ti from MC


----------



## GraphicsWhore

So it looks like my Microcenters DID get cards, but it was only the Zotac, and only a couple of them each. The EK block doesn't fit the Zotacs, so I'm glad I didn't get there early and wait. The Microcenter in Brooklyn apparently had EVGA XC Ultras, and I texted my brother to call them and see if they'll hold one; then he could pick it up and overnight it to me. But I have a feeling they won't hold one for anyone, and they'll be sold out soon (if not already).


----------



## raider89

GraphicsWhore said:


> So looks like my MicroCenters DID get cards but it was only the Zotac and only a couple of them each. EK block doesn't fit the Zotacs so I'm glad I didn't get there early and wait. The MicroCenter in Brooklyn apparently had EVGA XC Ultras and I texted my brother to call them and see if they'll hold one, then he could pick it up and overnight it to me but I have a feeling they won't hold for anyone and they'll be sold out soon (if not already).


I did the in store pickup for it and they had it for me lol


----------



## raider89

sittin pretty lol


----------



## Not A Good Idea

Just picked my 2 up: one from the Brooklyn MC and one from Westbury. Both EVGA XCs.


----------



## GraphicsWhore

Not A Good Idea said:


> Just picked my 2 up: one from the Brooklyn MC and one from Westbury. Both EVGA XCs.


Were those pre-orders?



raider89 said:


> got my evga 2080ti from MC


Nice! Out of curiosity: which store?


----------



## CallsignVega

GraphicsWhore said:


> So looks like my MicroCenters DID get cards but it was only the Zotac and only a couple of them each. EK block doesn't fit the Zotacs so I'm glad I didn't get there early and wait. The MicroCenter in Brooklyn apparently had EVGA XC Ultras and I texted my brother to call them and see if they'll hold one, then he could pick it up and overnight it to me but I have a feeling they won't hold for anyone and they'll be sold out soon (if not already).


Zotac does not make a custom-PCB card. The EK block will fit any NVIDIA-PCB card, including all of Zotac's.


----------



## GraphicsWhore

CallsignVega said:


> Zotac does not make a custom PCB card. EK block will fit any NVIDIA PCB card, including all Zotac's.


Oh, interesting. I actually thought at least the initial cards must all be reference, but EK's fit guide on their site shows their block as compatible with pretty much every aftermarket card except the Zotac.


----------



## raider89

GraphicsWhore said:


> Not A Good Idea said:
> 
> 
> 
> Just picked my 2 up: one from the Brooklyn MC and one from Westbury. Both EVGA XCs.
> 
> 
> 
> Were those pre-orders?
> 
> 
> 
> raider89 said:
> 
> 
> 
> got my evga 2080ti from MC
> 
> 
> Nice! Out of curiosity: which store?

Columbus ohio store


----------



## mr2cam

Saw GN's live stream yesterday; they flashed an XOC BIOS onto one of their cards and it allowed them to go to a 154% power limit?? Is that only with a custom-PCB card?


----------



## snafua

mr2cam said:


> Saw GN's live stream yesterday, they flashed an XOC bios on one of their cards and it allowed them to go to 154% power limit?? Is that only with a custom PCB card?


Yea, I'd like to find out if the FE cards are going to get a BIOS update. I doubt it.


----------



## Addsome

mr2cam said:


> Saw GN's live stream yesterday, they flashed an XOC bios on one of their cards and it allowed them to go to 154% power limit?? Is that only with a custom PCB card?


They flashed an EVGA 2080 Ti XC Ultra, which is a reference PCB, I'm pretty sure.


----------



## GraphicsWhore

Addsome said:


> They flashed a EVGA 2080ti XC Ultra which is a reference PCB im pretty sure.


Correct - it is.


----------



## mr2cam

Addsome said:


> They flashed a EVGA 2080ti XC Ultra which is a reference PCB im pretty sure.


Sweet, hopefully that will become available to the public soon ^_^


----------



## CallsignVega

GraphicsWhore said:


> Oh interesting. Actually I thought at least the initial cards must all be reference but EK's fit guide on their site shows their block being compatible with pretty much every aftermarket card except the Zotac.


They probably just haven't updated their list.


----------



## rush2049

Is it possible to flash another AIB partner's BIOS onto the Founders Edition, assuming they are both reference designs?
Or rather, was it possible to do that on Pascal?

I would not be against soldering another resistor onto the shunts, but I would rather keep accurate power monitoring and just have a larger, or offset, power slider.


----------



## Not A Good Idea

GraphicsWhore said:


> Were those pre-orders?


Yes, they were.


----------



## mr2cam

rush2049 said:


> Is it possible to flash another AIB partner's bioses onto the founders edition; assuming they were both reference design?
> Or, rather, was it possible on pascal to do that?
> 
> I would not be against soldering another resistor onto the shunts, but I would rather keep the accurate power monitoring and just have a larger, or offset power slider.


I would also like to know if you can flash another bios on the founders card, if not I might cancel my order and wait for an XC Ultra


----------



## ssgwright

I had a 1080 FE and I flashed both the EVGA and ASUS BIOS on it, and it was fine.


----------



## Elmy

Zotac AMP RTX 2080 Ti has arrived.


----------



## Glerox

Can somebody with a Turing card and an HDR display test whether they fixed HDR game capture, please?

Like, is the video recorded in HDR? (Probably not...)

If not, can it at least convert the video correctly for SDR playback? (On my Pascal GPU, video recorded while playing in HDR looks really washed out in SDR...)


----------



## exploiteddna

I finally got two "now in stock" emails today, one from Newegg and one from EVGA. I was 14 hours late to the Newegg email and 6 hours late to the EVGA one, so I'm sure you can guess that I missed out on both. I need to find a way to get notified when I receive an email from a certain sender or with a certain keyword or title. I have 5 email accounts on my phone and can't rely on the normal email notification, as I've conditioned myself to ignore it.


----------



## bastian

I don't understand why people buy Zotac. I'm not that desperate.


----------



## Addsome

bastian said:


> I don't understand why people buy Zotac. I'm not that desperate.


What's wrong with Zotac?


----------



## akromatic

Do we have a list of reference-PCB (Founders PCB) AIB cards?


----------



## raider89

So when do we normally get an official owners badge lol.


----------



## arrow0309

akromatic said:


> Do we have a list of reference-PCB (Founders PCB) AIB cards?


Have a look below:

I'm gonna add the new MSI 2080 Ti Sea Hawk X; I'm pretty sure (99%) it's reference AIB as well:

https://i.postimg.cc/cWc50mnn/2080-seahawk-backplate.jpg

Compared to the back of the Duke's pcb:

https://www.techpowerup.com/reviews/MSI/GeForce_RTX_2080_Ti_Duke/images/back_full.jpg


----------



## bmgjet

Do any of the 2080 Tis have DVI-D on them?
I've been trying to find any info on it and there's nothing mentioned.


----------



## kx11

overclocker.uk got some


----------



## Xeq54

Those of you who already got your cards, can you share the Hardware ID reported by Device Manager?
(Device Manager > Display adapters > right-click > Properties > Advanced > Hardware IDs.)

There should be two different Tis (one the A variant, one the lower-bin variant):

1E04
1E07 - This is the one I got.
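If you don't want to eyeball Device Manager, the Hardware IDs string can be parsed programmatically; a small sketch (the `PCI\VEN_xxxx&DEV_xxxx` layout is the standard Windows PnP format, but the A/non-A mapping below follows this thread's claim and is not an official NVIDIA table):

```python
import re

# Map the two device IDs discussed above to bin labels (thread's claim,
# not confirmed by NVIDIA).
VARIANTS = {"1E04": "TU102 (non-A bin)", "1E07": "TU102-A (A bin)"}

def classify(hardware_id: str) -> str:
    """Return the bin label for a Device Manager hardware ID string."""
    m = re.search(r"DEV_([0-9A-F]{4})", hardware_id.upper())
    if not m:
        return "no device ID found"
    return VARIANTS.get(m.group(1), "unknown device ID " + m.group(1))

print(classify(r"PCI\VEN_10DE&DEV_1E07"))  # TU102-A (A bin)
```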


----------



## Emmett

Xeq54 said:


> You people who already got their cards, can you share your Hardware ID reported by device manager ?
> (Device manager> display adapters> right click> properties> advanced> hardware IDs.
> 
> There should be two different TIs (One the A variant, one the lower bin variant):
> 
> 1E04
> 1E07 - This is the one I got.


1E07, ASUS 2080 Ti Turbo (all I could get for now)


----------



## hhbreaker

Nvflash 5.513.0

GOOD LUCK

I want someone to export the EVGA XC 2080Ti PL130% BIOS.


----------



## Xeq54

Emmett said:


> 1E07 Asus 2080 TI Turbo ( all i could get for now)


Interesting; 1E07 is the A variant, which is also on the Founders card, and the Turbo is the lowest-tier ASUS card if I'm not mistaken, with no factory OC. So where are the lower-binned chips the news talked about? Hmm.


----------



## zhrooms

hhbreaker said:


> Nvflash 5.513.0


 
Interesting; a quick search gave no results. The last official release seems to be 5.469.0 (June 18th, 2018).

_*Re-upload below with better compression and proper folder & filenames*_

nvflash_5.513.0\
*nvflash.exe
nvflash.txt
nvflash64.exe*

*EDIT: Official download from TechPowerUp - Sep 30, 2018*


----------



## skingun

So EVGA EU has a 2080 Ti card in stock, the XC Black Edition. Looks like the non-binned SKU with the lower boost clock. Meh.


----------



## Rob w

Only another week and (hopefully) my pre-order will be here! But this binning business is a bit confusing as to which is the better chip.


----------



## Xeq54

Rob w said:


> Only another week and (hopefully) my pre-order will be here! But this binning business is a bit confusing as to which is the better chip.


So far, all the cards in reviews seem to be the "A" variant, which is supposedly the higher bin.

I'm starting to think the whole story was made up, since even the ASUS Turbo has the A variant of the chip, so the difference between the A and non-A chips is most likely something else.


----------



## domrockt

skingun said:


> So EVGA EU have a 2080 Ti card in stock, XC Black Edition. Looks like the non binned SKU with lower boost clock. Meh.


You gave me a heart attack... I never rushed to a website like that... but AFAICS no stock available


----------



## keikei

https://www.guru3d.com/news-story/msi-is-working-on-a-geforce-rtx-2080-ti-lightning.html


----------



## BigMack70

My Nvidia pre-order is processing and my card was charged. Looks like Nvidia FE orders will start shipping out soon.


----------



## Jbravo33

BigMack70 said:


> My Nvidia pre-order is processing and my card was charged. Looks like Nvidia FE orders will start shipping out soon.


Samezies. Expect cards to ship today. So much for next day ship. Probably get them Monday. Same thing happened with Xp’s


----------



## Foxrun

Jbravo33 said:


> Samezies. Expect cards to ship today. So much for next day ship. Probably get them Monday. Same thing happened with Xp’s


Well damn, my card was also charged. Perfect timing, as I just sold my Titan yesterday.


----------



## GraphicsWhore

Finally got a date from Amazon for my XC: next Saturday, Oct 6th. I ordered on announcement day and was originally scheduled for last Friday, so it will have been a 2-week delay in total.


----------



## nodicaL

My 2080 Ti from Canada Computers got delayed to October 5th because of the weather in China. 

Also changed the order to EVGA XC Ultra because of the PL.


----------



## Jbravo33

Foxrun said:


> Well damn my card was also charged. Perfect timing as I just sold my Titan yesterday.


Nice. Big P or little p? And if you don't mind me asking, how much did you get? About to put my little p's on eBay this weekend. Gonna throw in the EK block and nickel backplate. Just removed the shunts and put the original shroud back on. First time the fans had ever spun; I tore them down 5 minutes after getting them. xD


----------



## zhrooms

Scandinavian retailers just delayed for a 4th time; cards are now expected October 19 at the earliest. I would not be the least bit surprised if they delay another 30 days: no cards before Black Ops 4 or Battlefield V, two massive fall releases.

20 Sep *>* 27 Sep
27 Sep *>* 05 Oct
05 Oct *>* 09 Oct
09 Oct *>* 19 Oct

*Stellar launch NVIDIA!*


----------



## Xeq54

So I tried to flash my ASUS Dual with the Founders BIOS, and I can confirm that cross-flashing BIOSes is no longer possible on Turing.

Flashing with the PCI subsystem override results in a "Board ID mismatch". Each model from each manufacturer has a unique "Board ID", even though the PCBs are the same.

Below is a comparison between my stock and FE BIOSes.


----------



## Foxrun

Jbravo33 said:


> Nice. Big P or little p? And if you don’t mind me asking how much did you get? About to put my little p’s on eBay this weekend. Gonna throw in ek block and nickel backplate. Just removed shunts and put original shroud back on. First time the fan had spun. I tore down 5 minutes after getting them. xD


Neither. I was one of those fools: big V. I got $1900 for it and will never be doing that again. I left the block on it.

Never mind, the guy backed out. Back to stage 1.


----------



## Ford8484

Received my Zotac AMP from Newegg yesterday, though I did one-day shipping. It's a beast: I was playing Origins at ultra settings, 4K, low AA, and getting frame rates in the 80s in the country areas and low-to-mid 70s in cities like Alexandria. I just used the NV scanner in MSI Afterburner to OC... though every time I OC the memory even a bit, games crash. I guess you can't OC the memory on these aftermarket cards because of the factory overclock already applied? I'm not a huge expert at OCing so I'm not sure... though I haven't tested with the latest driver that came out yesterday either...


----------



## GraphicsWhore

Ford8484 said:


> Received my Zotac Amp from Newegg yesterday- though I did one day shipping. Its a beast though- was playing Origins in Ultra settings, 4k, low AA, and getting in the 80s in the country areas- low to mid 70s in cities like Alexandria. Just did the NVscanner with MSI Afterburner to OC....though every time I OC the memory just a bit games crash...I guess you cant OC the memory with these aftermarket cards because of the factory Overclock already? I'm not a huge expert with OC'ing so I'm not sure....though I havent tested with the latest driver that came out yesterday either....


You can but each card is different in terms of what it'll take.



Xeq54 said:


> So I have tried to flash my Asus Dual with the founders bios and I can confirm that cross flashing bioses is no longer possible on Turing.
> 
> Flash with PCI subsystem override results in "Board ID mismatch". Each model from each manufacturer has unique "Board ID" even though the PCBs are the same.
> 
> Below a comparison between my stock and FE bioses.


So how was Steve from GamersNexus able to get that 150+% PL on the EVGA XC Ultra?


----------



## Xeq54

GraphicsWhore said:


> So how was Steve from GamersNexus able to get that 150+% PL on the EVGA XC Ultra?


Possibly he had an EVGA XC Ultra in the first place, and the +150% BIOS was made for the XC Ultra.


----------



## ssgwright

Xeq54 said:


> So I have tried to flash my Asus Dual with the founders bios and I can confirm that cross flashing bioses is no longer possible on Turing.
> 
> Flash with PCI subsystem override results in "Board ID mismatch". Each model from each manufacturer has unique "Board ID" even though the PCBs are the same.
> 
> Below a comparison between my stock and FE bioses.


what nvflash are you using? I can't get 5.469 to work


----------



## skingun

domrockt said:


> skingun said:
> 
> 
> 
> So EVGA EU have a 2080 Ti card in stock, XC Black Edition. Looks like the non binned SKU with lower boost clock. Meh.
> 
> 
> 
> You gave me a heart attack... I never rushed to a website like that... but AFAICS no stock available

I pre-ordered 2080 Ti cards on launch day and the lower core speed of the EVGA put me off so I'll just hold out a little longer. 

They were available at 7 AM GMT.


----------



## Xeq54

ssgwright said:


> what nvflash are you using? I can't get 5.469 to work


5.513.0, posted a couple of pages back. It has Turing support; when I tried to reflash the BIOS I extracted from my card, it worked.


----------



## gavros777

I have Titan X Maxwell SLI, OC'd to 1456 MHz, with an i7-3770K CPU at 4.5 GHz.
Is it a good choice to upgrade to a 2080 Ti right now and keep the same CPU?
I game at 4K.

I'm thinking of getting the ASUS Strix RTX 2080 Ti; is there a better choice out there?

When will it be available, by the way?


----------



## Jbravo33

Foxrun said:


> Neither. I was one of those fools, big V. I got 1900$ for it and will never be doing that again. Im left the block on it.
> 
> Nevermind, the guy back out. Back to stage 1.


Yikes! I was gonna hold on to TV a little longer. I may consider dumping that as well.


----------



## xer0h0ur

BigMack70 said:


> My Nvidia pre-order is processing and my card was charged. Looks like Nvidia FE orders will start shipping out soon.





Jbravo33 said:


> Samezies. Expect cards to ship today. So much for next day ship. Probably get them Monday. Same thing happened with Xp’s





Foxrun said:


> Well damn my card was also charged. Perfect timing as I just sold my Titan yesterday.


Same, my card was charged today at 5:25 AM. Of course, in true NVIDIA fashion, no e-mail has been received saying it shipped, or a tracking number for that matter.


----------



## CallsignVega

This launch has been a total farce and easily the worst for NVIDIA. They are getting arrogant.


----------



## BigMack70

xer0h0ur said:


> Same, my card was charged today at 5:25 AM. Of course in true Nvidia fashion no e-mail has been received regarding it being shipped or a tracking number for that matter.


My guess is ship Monday arrive Wed-Fri next week


----------



## Silent Scone

GraphicsWhore said:


> You can but each card is different in terms of what it'll take.
> 
> 
> 
> So how was Steve from GamersNexus able to get that 150+% PL on the EVGA XC Ultra?



Possibly because Jacob fed it to him in order to promote the cards.

It's a shame cross-flashing isn't going to be an option anymore; shunt modding isn't without its downsides.


----------



## changboy

Price for 2080 ti at newegg is actually ridiculous :

https://www.newegg.ca/Product/Product.aspx?Item=9SIAE9389T9463


----------



## exploiteddna

nodicaL said:


> My 2080 Ti from Canada Computers got delayed to October 5th because of the weather in China.
> 
> Also changed the order to EVGA XC Ultra because of the PL.


What’s the PL? Sry..


----------



## Asmodian

My waterblock should arrive Monday or Tuesday next week, and NVIDIA says 1-2 days of processing on the GPU. I hope to have my 2080 Ti by the end of next week.

Has anyone done the shunt mod on an FE 2080 Ti yet? Does the power-balancing circuit throttle power independently of the shunts?



CallsignVega said:


> This launch has been a total farce and easily the worst for NVIDIA. They are getting arrogant.


So true. They seem to have expected everyone to be so excited about ray tracing that it would overwhelm anything else, so they didn't bother to market the cards on anything else.

Maybe it's the lack of competition, but it could also be an effort not to discourage sales of Pascal. Emphasizing how much slower the old model is does tend to reduce its perceived value.


----------



## Rob w

Asmodian said:


> Has anyone done the shunt mod on a FE 2080 Ti yet? Does the power balancing circuit throttle power independently of the shunts?


I am looking into that as well. It looks like it has a separate power section balancing independently of the shunts, yes; the question is whether it balances power before or after the shunt. It could be doing the same as the TI unit on the Titan V and get fooled by the mod?


----------



## mr2cam

Why even do a shunt mod if we are able to flash a 154% power target bios on to the cards?


----------



## rush2049

mr2cam said:


> Why even do a shunt mod if we are able to flash a 154% power target bios on to the cards?


Exactly, we just have to figure out who can modify the BIOSes, or how, to get that target. I would much rather have an accurate measurement of the draw than fool it.


----------



## richiec77

Chatted with GN about hole spacing.

Outer set is 98.2mm, same as Titan V. Inner set is 72mm. Measured diagonally. The EKWB Thermosphere won't fit with any of its brackets.


----------



## xer0h0ur

Sure, if by chatted with Steve you mean watched his video where he takes a dial caliper and measures the hole spacing...


----------



## exploiteddna

CallsignVega said:


> This launch has been a total farce and easily the worst for NVIDIA. They are getting arrogant.


I sort of agree. The problem is, I'm in the middle of a complete system upgrade, and for the first time in my life I'm actually making decent money, but that's all gonna change in January when I go back to school to finish my dissertation. My income will be sliced in half, and I'm not good enough at saving to set the money aside and buy a card later. Furthermore, for the first time in my life I want to have all of the top current-gen hardware in my rig at the same time (excluding HEDT). I've had lots of x80 and x80 Ti cards, and lots of i7s, but never had everything top current-gen all together. So it's going to happen; I'm determined.

However, this poses a problem because I really despise what's happening in the GPU market right now. I don't care how people try to rationalize it to themselves: these cards are way overpriced. So I agree, NVIDIA have become very arrogant. Last gen we had $1200 cards because of crypto inflation; now we have a $1200 GPU because of new technology and R&D costs? And for the record, every generation of GPU has a ton of R&D overhead that goes into making the product. This generation is no different, aside from the fact that they've implemented a cutting-edge new processing technique, real-time ray tracing. So yeah, it's a bit more new tech than we typically get between generations, but the consumer already pays for R&D in the normal price of the product, so that doesn't justify doubling the price (yeah, I know it's not quite doubled, but it's close enough: 700 vs 1200). This new tech is in its infancy, doesn't give us average consumers any real benefit right now, and we can pretty much bet that 18 months from now the degree of ray tracing support will still be appreciably low. That, in combination with the tensor cores and DLSS, is worth at best an extra $100 per unit. Two years of inflation would add approximately $35. And remember, comparing the 780 Ti to the 980 Ti, we saw the same kind of performance gains as we do with the 1080 Ti vs 2080 Ti (excluding ray tracing/DLSS, of course), so that's not a reason to jack the price up either.

The long and short of it is, I agree with you that this whole launch has been a farce; it stunk from the beginning. NVIDIA are arrogant, you're right. I thought maybe if AIB cards came out at $1000 I'd be less frustrated (but really, $1000 is still too high). I love NVIDIA products and have for a long time, but I definitely feel like they've got me by the balls. And even though I'm really frustrated at this whole situation, I find myself still looking for in-stock cards so I can buy one. It's a really strange situation, to say the least.


----------



## zhrooms

mr2cam said:


> Why even do a shunt mod if we are able to flash a 154% power target bios on to the cards?





rush2049 said:


> Exactly, just have to figure out who or how to modify the bioses to have that target.


 
Correct me if I'm wrong, this is just speculation, but this is how it appears to me. Why exactly 154%? Why not 150% or 200%?

NVIDIA 2080 Ti = *1545* MHz - *250* W
NVIDIA 2080 Ti FE = *1635* MHz - *260* W

EVGA 2080 Ti XC = *1635* MHz - *260* W
EVGA 2080 Ti XC Ultra = *1650* MHz - *260* W

EVGA 2080 Ti XC = *260* W x *110*% PT = *286* W
EVGA 2080 Ti XC Ultra = *260* W x *110*% PT = *286* W

After Official EVGA BIOS Update (*110*% > *130*%)

EVGA 2080 Ti XC = *260* W x *130*% PT = *338* W
EVGA 2080 Ti XC Ultra = *260* W x *130*% PT = *338* W

After Private EVGA BIOS Update (*130*% > *154*%)

EVGA 2080 Ti XC = *260* W x *154*% PT = *400.4* W
EVGA 2080 Ti XC Ultra = *260* W x *154*% PT = *400.4* W

So it makes sense, if they simply aimed at 400 W.

Comparison with the MSI Gaming X Trio, which has an extra 6-pin.

EVGA XC Ultra = *1650* MHz - *260* W
MSI Gaming X Trio = *1755* MHz - *300* W

EVGA XC Ultra = *260* W x *130*% PT = *338* W
MSI Gaming X Trio = *300* W x *110*% PT = *330* W

From what I have seen, we absolutely need the 154%; the official 130% is not nearly enough for a stable 2100 MHz or slightly above.
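The arithmetic above is just one multiplication, so here is a quick sketch of it. The base TDP values are the AIB defaults quoted in this post, not official spec-sheet numbers, and the function name is my own:

```python
# Effective board power limit = base TDP x power-target slider.
# Base TDPs are the AIB defaults quoted above (assumed, not official).

def power_limit_watts(base_tdp_w: float, power_target_pct: float) -> float:
    return base_tdp_w * power_target_pct / 100.0

# EVGA XC / XC Ultra (260 W base) at the stock, official, and private PTs:
for pt in (110, 130, 154):
    print(f"{pt}% PT -> {power_limit_watts(260, pt):.1f} W")
# 154% of 260 W is 400.4 W, i.e. a ~400 W target.
```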


----------



## Alonjar

michaelrw said:


> last gen we had $1200 cards bc of crypto inflation, now we have $1200 gpu because new technology and R&D costs?


LOL no, we now have $1200 GPUs because the inflated crypto market proved that people are willing to pay $1200 for GPUs. Nvidia will sell at whatever price the market will bear.


----------



## richiec77

xer0h0ur said:


> Sure, if by chatted with Steve you mean watched his video where he takes a dial caliper and measures the hole spacing...


I'll send you a PM


----------



## ocvn

richiec77 said:


> Chatted with GN about holes spacing.
> 
> Outer set is 98.2mm, same as Titan V. Inside set is 72mm. Diagonally measured. EKWB thermalsphere wont fit with any of its brackets.


However, the old EK version fits. Idle 34°C, max temp 50°C with a max boost of 2130 MHz.


----------



## richiec77

ocvn said:


> however EK old version fit. idle 34, max temp 50 with max boost 2130


Oh! The... what was it called? Supremacy GPU or something.

The VGA Supremacy. Never owned one before. That's some very good information.

https://www.ekwb.com/shop/ek-vga-supremacy-nickel


----------



## ocvn

richiec77 said:


> OH!. The...what was it called? Supremacy GPU or something.
> 
> The VGA-Supremacy. Never owned one before. That's some very good information.
> 
> https://www.ekwb.com/shop/ek-vga-supremacy-nickel


https://www.ekwb.com/shop/ek-vga-supremacy-bridge-edition

Had to point a fan at the VRM because it gets quite hot during the FSU stress test.


----------



## richiec77

ocvn said:


> https://www.ekwb.com/shop/ek-vga-supremacy-bridge-edition
> 
> had to use the fan to the vrm because it will be quite hot during FSU stress test.


Shunt modded? 

I did measure the Current over the PCIe on the Titan V. During TimeSpyExtreme, GT2, it'll spike up to 35.2A on the 2 lines. VRM did totally fine bare with just a 1300 RPM fan pointed at it. 

This was at 2107 Core, 1053 Memory. 1.087v. Failed to do the same when I had it cold this morning. There I could get the core up to 2167, same HBM2 speeds. Same Voltage. Guess...Possibly up to 37-38A during the peak. Typical average is around 27-28.5A during most of the run for Graphics.


----------



## b4db0y

Anyone put an EK waterblock on their 2080 Ti? Mine runs noticeably hotter than my 1080 Ti when running Time Spy.

My 1080 Ti barely hit 50°C overclocked under load; the 2080 Ti gets up to 57°C under load with the power limit at 130%, +170 on core, and +700 on memory.


----------



## ocvn

richiec77 said:


> Shunt modded?
> 
> I did measure the Current over the PCIe on the Titan V. During TimeSpyExtreme, GT2, it'll spike up to 35.2A on the 2 lines. VRM did totally fine bare with just a 1300 RPM fan pointed at it.
> 
> This was at 2107 Core, 1053 Memory. 1.087v. Failed to do the same when I had it cold this morning. There I could get the core up to 2167, same HBM2 speeds. Same Voltage. Guess...Possibly up to 37-38A during the peak. Typical average is around 27-28.5A during most of the run for Graphics.


Trying to find out how to do the shunt mod on the MSI Trio's custom PCB... I might replace it with a reference PCB.
No NVLink in my country atm.



b4db0y said:


> Anyone put a EK waterblock on their 2080 Ti? Mine runs noticeable hotter than my 1080 Ti when running time spy.
> 
> My 1080 Ti barely hit 50 C overclocked under load, the 2080 Ti gets up to 57 C under load with power limit at 130% and +170 on core with +700 on memory.


47°C with a single run, 50°C with the stress test. Ambient is around 27°C here.
https://www.3dmark.com/spy/4512072


----------



## richiec77

ocvn said:


> try to find how to do shunt mod with MSI trio, custom pcb.... i might replace it with reference PCB board.
> 
> 
> 
> 47 with single run, 50 with stress test. ambient around 27 here
> https://www.3dmark.com/spy/4512072


Hmm. You can most likely see which shunt goes to which PCIe connection. Using a multimeter in continuity mode, you should be able to figure it out if you have the tools. 3mOhm worked much better than 5mOhm soldered on top.

Pretty good score for your ambient temps.
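For anyone weighing the shunt mod against a BIOS flash, the parallel-resistor math shows why the mod is blunt. As a sketch, assuming a 5mOhm stock shunt (common on these boards, but verify on your own PCB), stacking 3mOhm on top gives:

```python
# Why soldering a resistor on top of a shunt raises the real power ceiling.
# The 5 mOhm stock shunt value is an assumption; check your own card.

def parallel(r1_mohm: float, r2_mohm: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return r1_mohm * r2_mohm / (r1_mohm + r2_mohm)

stock = 5.0                      # mOhm, assumed stock shunt value
modded = parallel(stock, 3.0)    # 3 mOhm stacked on top -> 1.875 mOhm
scale = modded / stock           # controller now senses only 37.5% of the current

# A 260 W reported limit would then correspond to roughly this real draw:
real_ceiling = 260 / scale       # ~693 W, hence "not without its downsides"
```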


----------



## b4db0y

ocvn said:


> try to find how to do shunt mod with MSI trio, custom pcb.... i might replace it with reference PCB board.
> no nvlink in my country atm
> 
> 
> 
> 47 with single run, 50 with stress test. ambient around 27 here
> https://www.3dmark.com/spy/4512072


Wow, good stuff.

How many rads are you running? I have 1x 240mm and 1x 360mm, both 60mm thick. Just trying to compare, because I feel like I might not have mounted my waterblock correctly or something. It's a pretty hot-running card considering it's under water.


----------



## ocvn

richiec77 said:


> Hmm. Can most like see which Shunt goes to which PCIE connection. Using a Multimeter on continuity mode, you should be able to figure it out if you have the tools. 3mOhm worked much better than 5mOhm soldered on top.
> 
> Pretty good score for your ambient temps.


Thanks, trying to figure it out... Yesterday I watched a YouTube channel covering the board; he mentioned the 6-pin runs in parallel with the 8-pin. Tested it, and that's wrong: whenever I pulled the 6-pin, the card did not POST >.< . Tired of this kind of custom PCB that only allows 330W.



b4db0y said:


> Wow, good stuff.
> 
> How many rads are you running? I have 1x 240mm and 1x 360mm - both are 60mm thick. Just trying to compare because I feel like I might have not mounted my waterblock correctly or something. It's a pretty hot running card considering it's under water.


Because I run 2 cards, I set up one Monsta 480, one EK 480 45mm, and one EK 240 45mm, all in push-pull configuration. It runs quite hot btw compared with all my previous TTX/TTXP cards.


----------



## b4db0y

ocvn said:


> thanks, try to figure it out... yesterday watched 1 youtube channel regarding the board he mentioned the 6 pins run parallel with 8 pin, tested it and wrong, whenever i took out the 6 pin, the card did not post >.< . tired of the kind of custom pcb which allow 330w only
> 
> 
> 
> because i run 2 cards so i setup 1 monsta 480, 1 ek 480 45mm and 1 240 ek 45mm also. all of them in push-pull configuration. it quite hot btw compare with all my previous ttx/ ttxp


Ok, that makes sense then. I have way less radiator and I only run my fans in push. Thanks for the quick replies.


----------



## richiec77

ocvn said:


> thanks, try to figure it out... yesterday watched 1 youtube channel regarding the board he mentioned the 6 pins run parallel with 8 pin, tested it and wrong, whenever i took out the 6 pin, the card did not post >.< . tired of the kind of custom pcb which allow 330w only


This Video?






If so, he mentions there's a sense pin for detecting if the 6 pin is present. He was analyzing the photos of the card and doesn't have one in hand to measure to make certain.

If you have a MM (multimeter) you can check to see if the 6 and 8 are connected to the same shunt under the middle 8 pin. If so, then modding the 2 shunts under the PCIe connectors should unlock it.

The other issue is the fuses. No clue what value they are. Those could pop? Or?


----------



## vmanuelgm

Silent Scone said:


> Possibly because Jacob fed him it in order to promote the cards.
> 
> It's a shame cross flashing isn't going to be an option anymore, shunt modding isn't without its downsides.



I used the Gigabyte executable for updating their Gaming OC 2080 Tis and flashed my Gainward 2080 Ti with it. So there must be a way to cross flash. I haven't tried the new NVFlash yet, so if it doesn't cross flash my card back to the original BIOS it probably needs to be refined.


----------



## b4db0y

Is there a public BIOS with 154% PL for the 2080 Ti XC?


----------



## Emmett

The EK Supremacy VGA will not work, right? 58mm x 58mm max. Is there some way to adapt it?


----------



## Silent Scone

vmanuelgm said:


> I used the Gigabyte executable to update their gaming ocs 2080tis and flashed my Gainward 2080ti. So there must be a way to cross flash. I haven't tried the new nvflash yet, so if it doesn't cross flash my card to original bios it probably needs to be refined.


I tried to download Gigabytes firmware last night, but the zips were corrupted on all regions.


----------



## Emmett

Wow, I can undervolt my card to just under 1.0V and instantly hit the power limit at 1900 MHz... what???

EDIT: in Superposition 4K Optimized.
EDIT: 117%.


----------



## Clukos

Emmett said:


> wow I can undervolt my card to just under 1.0v and instantly hit power limit. at 1900.. what???
> 
> EDIT: in superposition 4k optimized.
> EDIT:117%


1000mv is the new 1100mv for Turing, try something like 800mv if you want to undervolt your GPU (the 900mv Pascal equivalent).


----------



## Kronos8

ocvn said:


> however EK old version fit. idle 34, max temp 50 with max boost 2130


Nice job!!! Good to know.
But according to the specs from your link, the minimum hole distance is 53x53mm (not diagonal).
Since the diagonal mounting distance on the card is 72mm, the square hole distance is about 51mm. That means you probably had to file down the metal base of the water block slightly, to get that extra 2mm or less.
Don't get me wrong; even if you did that, it is good to know the old VGA-Supremacy fits the 2080 Ti.
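The diagonal-to-side conversion being done above is just dividing by sqrt(2):

```python
import math

# Inner mounting holes: 72 mm measured diagonally (from the thread).
# For a square hole pattern, side = diagonal / sqrt(2).
diagonal_mm = 72.0
side_mm = diagonal_mm / math.sqrt(2)   # ~50.9 mm

# The old VGA-Supremacy lists a 53x53 mm minimum, so ~51 mm sits just under
# it, which is why the block's base may need a slight filing.
```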


----------



## Emmett

Clukos said:


> 1000mv is the new 1100mv for Turing, try something like 800mv if you want to undervolt your GPU (the 900mv Pascal equivalent).


did not realize i could go that low, thanks, helped a lot


----------



## Kronos8

richiec77 said:


> Shunt modded?
> 
> I did measure the Current over the PCIe on the Titan V. During TimeSpyExtreme, GT2, it'll spike up to 35.2A on the 2 lines. VRM did totally fine bare with just a 1300 RPM fan pointed at it.
> 
> This was at 2107 Core, 1053 Memory. 1.087v. Failed to do the same when I had it cold this morning. There I could get the core up to 2167, same HBM2 speeds. Same Voltage. Guess...Possibly up to 37-38A during the peak. Typical average is around 27-28.5A during most of the run for Graphics.


May I ask if that is an EK-Evo block mounted on Titan V?


----------



## richiec77

Kronos8 said:


> May I ask if that is an EK-Evo block mounted on Titan V?


Yes. A Supremacy EVO. The Intel bracket on the block measures 102mm on the very inside edge. Just filed a bit off each corner, then used 6-32 1" long screws, washers, and nuts. The PCB holes measure 98.2mm diagonally. Same as the 2080 Ti.

The mount is similar to how you mount an LN2 Pot where you have to snug it up into position, then check all the way around to level it out without tightening it too much. You don't want the PCB bending.


----------



## Kronos8

richiec77 said:


> Yes. A Supremacy EVO. The Intel Bracket on the block measures 102mm on the very inside edge. Just filed a bit off each corner, then use 6-32 1" long screws, washers nuts. The PCB holes measure 98.2mm diagonally. Same as the 2080 Ti.
> 
> The mount is similar to how you mount an LN2 Pot where you have to snug it up into position, then check all the way around to level it out without tightening it too much. You don't want the PCB bending.


When I found out the diagonal hole distance was 98+ mm (so the EK VGA-Supremacy was out), I was thinking about the _possibility_ of mounting the EK EVO. It was only later that I noticed the holes with a 72mm diagonal distance, so the old VGA-Supremacy was possible. Both you and @ocvn proved my thoughts correct.
Nice info!!!
:thumb: My respects :thumb:


----------



## Talon2016

Silent Scone said:


> I tried to download Gigabytes firmware last night, but the zips were corrupted on all regions.


I downloaded it the other day and it also went corrupt on me when I went to look at it this morning. I downloaded the latest Nvflash, backed up my vBIOS on my Asus Dual, and then repaired the corrupted file with WinRAR. I cross flashed my card from 120% power limit on Asus Dual to 122% on Gigabyte card with their flasher. Then I made a backup .rom of the Gigabyte 122% limit. All worked perfectly.


----------



## Silent Scone

Talon2016 said:


> I downloaded it the other day and it also went corrupt on me when I went to look at it this morning. I downloaded the latest Nvflash, backed up my vBIOS on my Asus Dual, and then repaired the corrupted file with WinRAR. I cross flashed my card from 120% power limit on Asus Dual to 122% on Gigabyte card with their flasher. Then I made a backup .rom of the Gigabyte 122% limit. All worked perfectly.


Yeah, I repaired the BIOS also, although the version of WinRAR I had initially wasn't able to. All working now.


----------



## kx11

Could someone confirm whether this RTX 2080 Ti GamingPro OC Edition 11GB GPU can be used with the Phanteks Glacier 2080 Ti waterblock?!


----------



## GAN77

kx11 said:


> could someone confirm if this RTX 2080 Ti GamingPro OC Edition 11GB gpu can be used with phanteks glacier 2080ti WB ?!


The Palit GamingPro OC uses a reference PCB.

http://gpu.watercool.de/WATERCOOL_HEATKILLER_GPU_Compatibility.pdf


----------



## xer0h0ur

The anticipation is freakin killing me here. I just want the FE cards to be in hand already so we can figure out how viable vBIOS modding is or if we can manage to flash another card's vBIOS onto it. That way I can make my decision to shunt mod or not shunt mod instead of having to take off the EK block later. 

I'm headed to a LAN party the weekend of the 20th so I need to have this all sorted out by then.


----------



## asdkj1740

What software hack or modded BIOS is Jayz using??


----------



## kx11

GAN77 said:


> Palit GamingPro OC PCB a reference
> 
> http://gpu.watercool.de/WATERCOOL_HEATKILLER_GPU_Compatibility.pdf





Thanks a lot, that means my 2080 Ti is secured, coming in hot from Singapore.


----------



## Silent Scone

asdkj1740 said:


> https://www.youtube.com/watch?v=R0TjlzK2deQ&feature=youtu.be&t=102
> what software hack//modded bios is jayz using??????


If it's anything other than the 130% BIOS posted over on EVGA forums, not one you'll be able to obtain...


----------



## asdkj1740

Silent Scone said:


> If it's anything other than the 130% BIOS posted over on EVGA forums, not one you'll be able to obtain...


Steve in his latest video said it is 154%.
And not obtainable? Not necessarily; the Pascal ASUS XOC BIOS eventually leaked.

Let's summon the old Maxwell/Kepler BIOS-flashing users!!!
WE NEED THAT BIOS!!!


----------



## arrow0309

asdkj1740 said:


> steve in his latest video said it is 154%.
> not really, pascal asus xoc is eventually leaked.
> 
> lets summon the old maxwell/kelper bios flashing users !!!!!!!!!!!!!!!!!!!!!
> WE NEED THAT BIOS!!!!!!!!!!!!!!!!!!!!!!!!!!!


*ucking hell yeah.
Guys, I'm still waiting for an MSI Duke from Scan.uk, but I'm almost changing my mind for the fourth time and about to cancel the order once again.
I do wanna get the EVGA XC Ultra; I'll pay roughly another £100 over the Duke to get it.
Whenever it arrives (eventually, 15 Oct minimum I guess), I don't know what to do really.


----------



## Talon2016

Will someone with the EVGA Ultra card please use the most recent NVFlash from a few pages back, make a backup of your vBIOS, and share it here?


----------



## xer0h0ur

You guys may trust a random Korean with a few posts but I'm not downloading that.


----------



## TahoeDust

2080 TI FTW3 will have 150% (373w) power limit. 

https://twitter.com/EVGA_JacobF/status/1045836253545979904


----------



## asdkj1740

TahoeDust said:


> 2080 TI FTW3 will have 150% (373w) power limit.
> 
> https://twitter.com/EVGA_JacobF/status/1045836253545979904


Holy s, EVGA went all out on Turing! EVGA's Pascal models were very conservative on power limit (even before the catching-fire incident).


----------



## asdkj1740

arrow0309 said:


> *ucking hell yeah
> Guys I'm still waiting for a MSI Duke from Scan.uk but I'm almost changing my mind for the fourth time and almost cancel the order once again.
> I do wanna get the EVGA XC Ultra, I'll pay another £100 roughly over the Duke to get it.
> When it'll arrive (eventually, 15 Oct min I guess), I dont know what to do really


Sorry, Duke.
The cooling capability on the Duke is weak.


----------



## Silent Scone

arrow0309 said:


> *ucking hell yeah
> Guys I'm still waiting for a MSI Duke from Scan.uk but I'm almost changing my mind for the fourth time and almost cancel the order once again.
> I do wanna get the EVGA XC Ultra, I'll pay another £100 roughly over the Duke to get it.
> When it'll arrive (eventually, 15 Oct min I guess), I dont know what to do really



I just cancelled my August 20th pre-order for the Duke. Sick of waiting for MSI to get stock in the channel.


----------



## zhrooms

*Added the unofficial Turing NVFlash to the bottom of the ORIGINAL POST; it will be updated with the official link once it is released.

EDIT: The official version is now released; the link has been updated.*


----------



## arrow0309

asdkj1740 said:


> sorry duke.
> the cooling capability on dude is weak.


Who said anything about its cooler?
I've already got a nice Vector:

https://www.overclock.net/forum/27640134-post1113.html



Silent Scone said:


> I just cancelled my August 20th pre-order for the Duke. Sick of waiting for MSI to get stock in the channel.


Ok, so?
What now?
As a matter of fact, I've cancelled, in order: a 2080 FE, a 2080 Ti Palit GamingPro OC, an EVGA XC (my bad), and (maybe) this last Duke (because of its PL).
The last one from Scan has been overdue since the 27th of this month, but I assume I could get it before 15 October.
The availability date for the XC Ultra (also from Scan) would be 15 October.

If only I knew what MSI plans for their new BIOS releases...
Or whether one can flash different BIOSes.
I dunno, maybe the best would be to go for an XC Ultra, no matter the wait and/or price.


----------



## Baasha

Is EVGA going to make a Classified version of the 2080 Ti (like the good old 980 Ti Classified and before)?


----------



## hhbreaker

This file contains the Galax (KFA2) PL127, eMTEK PL115, and EVGA XC PL130 BIOSes.
They can be applied to any reference-PCB model.
The Galax (KFA2) PL127 showed me the best results.

NVFLASH 5.513 from the previous post is a file leaked from Palit.
I'm sorry for my poor English.


----------



## zhrooms

Thank you.

If anyone is willing to try the above, could you report the actual power limit shown in your OC software?

Because the file names don't match, 
*eMTEK_PL115* is actually *eMTEK122.rom* (Is it 115 or 122%?)
*Galax(kfa2)_PL127* is actually *galaxy126.rom* (Is it 127% or 126%?)
*EVGAXC_PL130* is correct *evga130.rom*


----------



## hhbreaker

eMTEK PL115, Galax PL126 <--- those are correct.
The actual slider changes in Afterburner when applied.
Flash with: nvflash name.rom -6


----------



## GanMenglin

hhbreaker said:


> eMTEK PL115, Galax PL126 <---That's right.
> The actual slider changes in the afterburner when applied
> nvflash name.rom -6


Thank you so much for sharing those BIOSes.

I flashed the EVGA 130% BIOS to my MSI 2080 Ti Ventus OC. The power limit of the MSI was 111%; now it's 130%. I use the same core and memory clocks, but my Time Spy Extreme score is lower than before. I don't know why. It's a reference PCB.

By the way, I've tried the Gigabyte Gaming OC's new BIOS; the power limit is 300W / 366W.


----------



## coolbho3k

Does anyone know the stock power limit of the Gigabyte Gaming OC? I know they changed it to 122% but what is the base? I ordered this card and I'm waiting for it but wonder if I should've gone for the MSI Trio instead...


----------



## GanMenglin

coolbho3k said:


> Does anyone know the stock power limit of the Gigabyte Gaming OC? I know they changed it to 122% but what is the base? I ordered this card and I'm waiting for it but wonder if I should've gone for the MSI Trio instead...


From the BIOS information, it seems to be 300w/366w


----------



## coolbho3k

GanMenglin said:


> From the BIOS information, it seems to be 300w/366w


Thanks! 366W seems very decent. I originally ordered the MSI Trio, but Amazon is not shipping it out and won't give a date, so I found a Gigabyte Gaming OC instead.


----------



## GanMenglin

On my MSI 2080 Ti Ventus OC, I think the core frequency limit is 2145 MHz; it will crash if it goes higher.

What’s yours?


----------



## coolbho3k

GanMenglin said:


> My MSI 2080ti ventus oc, I think the limit of core frequency is 2145mhz, I will crash if goes higher.
> 
> What’s yours?


It's not in my hands yet, still in shipping. I'll report back when I get it.


----------



## zhrooms

zhrooms said:


> NVIDIA 2080 Ti = *1545* MHz - *250* W
> NVIDIA 2080 Ti FE = *1635* MHz - *260* W
> 
> EVGA 2080 Ti XC = *1635* MHz - *260* W
> EVGA 2080 Ti XC Ultra = *1650* MHz - *260* W
> 
> EVGA 2080 Ti XC = *260* W x *110*% PT = *286* W
> EVGA 2080 Ti XC Ultra = *260* W x *110*% PT = *286* W
> 
> After Official EVGA BIOS Update (*110*% > *130*%)
> 
> EVGA 2080 Ti XC = *260* W x *130*% PT = *338* W
> EVGA 2080 Ti XC Ultra = *260* W x *130*% PT = *338* W
> 
> After Private EVGA BIOS Update (*130*% > *154*%)
> 
> EVGA 2080 Ti XC = *260* W x *154*% PT = *400.4* W
> EVGA 2080 Ti XC Ultra = *260* W x *154*% PT = *400.4* W
> 
> So it makes sense, if they simply aimed at 400 W.
> 
> Comparison with the MSI Gaming X Trio, which has an extra 6-pin.
> 
> EVGA XC Ultra = *1650* MHz - *260* W
> MSI Gaming X Trio = *1755* MHz - *300* W
> 
> EVGA XC Ultra = *260* W x *130*% PT = *338* W
> MSI Gaming X Trio = *300* W x *110*% PT = *330* W





TahoeDust said:


> 2080 TI FTW3 will have 150% (373w) power limit.
> 
> https://twitter.com/EVGA_JacobF/status/1045836253545979904


 
He never mentions the *%*; for example, the MSI Gaming X Trio with the extra 6-pin is 300 W, not 260 W like most 2x8-pin cards, and the GTX 1080 Ti FTW3 was 280 W by default (2x8-pin).

So it is very likely the RTX 2080 Ti FTW3 is also around 280 W.

260 W x 130 % = 338 (*Confirmed by Jacob*, you do not multiply the non-OC 250 W, but the actual OC 260 W x 1.3 = 338, which makes my previous post accurate, the *Private EVGA XC 154% PT BIOS* is indeed *400 W*)

*Jacob:* _"FTW3 373W"_

This could be many different values, but it being 150% is unlikely, not that it matters.

275 W x 136 % = 374
280 W x 133 % = 372
285 W x 131 % = 373
290 W x 129 % = 374
295 W x 127 % = 375
300 W x 125 % = 375

Why *373*? I'm guessing it is because of the 2x8-Pin, each capable of 150 W, plus the motherboard 75 W, so 150 W *+* 150 W *+* 75 W *=* *375* W (GTX 1080 Ti FTW3 was 280 W x 127 % = 355 W)
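A quick sketch of the guesswork above, using the candidate base wattages from this post (the helper function is mine, just to show the arithmetic):

```python
# Which power-target % would each plausible base TDP need to hit ~373 W?
# Candidate bases are the ones listed above; none of this is official data.

def pt_needed(base_w: float, target_w: float) -> float:
    return target_w / base_w * 100.0

for base in (275, 280, 285, 290, 295, 300):
    print(f"{base} W base -> {pt_needed(base, 373):.0f}% PT")

# The simpler explanation: the connector budget.
budget = 150 + 150 + 75   # 2x 8-pin + PCIe slot = 375 W, just above 373 W
```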


----------



## animeowns

Baasha said:


> Is EVGA going to make a Classified version of the 2080 Ti (like the good old 980 Ti Classified and before)?


You can bet there will be a Kingpin version.


----------



## coolbho3k

GanMenglin said:


> My MSI 2080ti ventus oc, I think the limit of core frequency is 2145mhz, I will crash if goes higher.
> 
> What’s yours?


How is your performance with the 122% Gigabyte BIOS?

It has a non-reference PCB so I'm surprised it works on your card.


----------



## bmg2

Anyone have connections that could get this bios?

EVGA 2080 Ti XC Ultra = 260 W x 154% PT = 400.4 W


----------



## JP2012

*.... Decisions.. decisions....*

Alright guys... I'm currently interested in this one...

https://m.newegg.com/products/N82E16814133754

The issue is... well, the price... Is it WORTH it? I'm currently using a Vega 64 and it's still good in games, with zero issues... BUT is it worth selling my kidney for the card, and if so WHY? What is the actual performance in the real world, at 2K and 4K?

And please don't say "Well it has Ray Tracing... blaaa." I honestly DON'T care about that feature. In my opinion that's a gimmick, and it's not enough to drop $1.3k on the card.


----------



## arrow0309

GanMenglin said:


> Thank you so much for sharing those BIOSes.
> 
> I flashed evga 130% BIOS to my MSI 2080ti Ventus OC. The power limit of MSI was 111%, now it changes to 130%, and I use the same clock speed for core and memory, but the score of time spy extreme is lower than before. I don't know why. It's a reference PCB.
> 
> By the way, I've tried use gigabyte gaming oc new BIOS, the power limit is 300W / 366W.


Nice job mate, that info is precious; I thought it wasn't possible to change BIOSes.



GanMenglin said:


> From the BIOS information, it seems to be 300w/366w


So, you've also flashed the Gaming OC (Gigabyte custom) bios to a reference MSI?



GanMenglin said:


> My MSI 2080ti ventus oc, I think the limit of core frequency is 2145mhz, I will crash if goes higher.
> 
> What’s yours?


I'm still waiting for my Duke; sent them an email yesterday.
In that case, if you say it's possible to use different reference BIOSes, I may want to wait for it instead of cancelling the order.
Have you tried the Galax BIOS posted a few posts ago?
I'm also looking forward to getting the (new) MSI Sea Hawk X's BIOS; it's 300W (I don't know the PL though).


----------



## xer0h0ur

JP2012 said:


> Alright guys... I'm currently interested in this one...
> 
> https://m.newegg.com/products/N82E16814133754
> 
> The issue is.... Well the price.... Is it WORTH it? I'm currently using a Vega 64 and its still good in games, with zero issues...... BUT is it worth selling my kidney for the card.. and if so WHY? What is the actual performance in the real world, 2k and 4k??
> 
> And please don't say "Well it has Ray Tracing.... Blaaaa" I honestly DONT care about that feature... In my opinion that's a gimmick... That's not enough... To drop 1.3k on the card..


Well, if your avatar is accurate, a 2080 Ti should be strong enough that you no longer have to deal with that multi-GPU mess. IMO you shouldn't be touching a 2080 Ti unless you're a 1440p or 4K gamer. The gains at 1080p are so marginal it's not worth the price of admission.


----------



## nycgtr

JP2012 said:


> Alright guys... I'm currently interested in this one...
> 
> https://m.newegg.com/products/N82E16814133754
> 
> The issue is.... Well the price.... Is it WORTH it? I'm currently using a Vega 64 and its still good in games, with zero issues...... BUT is it worth selling my kidney for the card.. and if so WHY? What is the actual performance in the real world, 2k and 4k??
> 
> And please don't say "Well it has Ray Tracing.... Blaaaa" I honestly DONT care about that feature... In my opinion that's a gimmick... That's not enough... To drop 1.3k on the card..


If the price is such an issue, you shouldn't buy it; wait for 7nm or get a 1080 Ti.


----------



## xer0h0ur

arrow0309 said:


> Nice job mate, that info is precious, I thought it wasn't possible to change the BIOSes.
> 
> 
> 
> So, you've also flashed the Gaming OC (Gigabyte custom) bios to a reference MSI?
> 
> 
> 
> I'm still waiting for my Duke, sent them a mail yesterday.
> In this case if you say it's possible to use different reference bioses I may wanna wait for it instead of cancelling the order.
> Have you tried the Galax bios posted a few posts ago?
> I'm also looking forward to get the (new) MSI Sea Hawk X's bios, it's 300W (I don't know the PL though)


It's been mixed results so far. Some report you can't; some have reported you can. Some who have successfully flashed another card's vBIOS have reported losing performance despite the higher power limit. I'm going to wait things out to see if people manage to mod the vBIOS instead of trying to flash a different card's to no benefit. So far, the only people who have picked up performance are those who flashed vBIOS updates offered by their card's manufacturer, or who shunt modded.


----------



## arrow0309

xer0h0ur said:


> It's been mixed results so far. Some report you can't; some have reported you can. Some who have successfully flashed another card's vBIOS have reported losing performance despite the higher power limit. I'm going to wait things out to see if people manage to mod the vBIOS instead of trying to flash a different card's to no benefit. So far, the only people who have picked up performance are those who flashed vBIOS updates offered by their card's manufacturer, or who shunt modded.


The guy has a Ventus; it's a 250W / 1575MHz boost card, or 260W / 1635MHz if it's a Ventus OC.
So reaching 2145MHz with the increased PL from the EVGA 130% BIOS is a nice result for an entry-level TU102-300A, IMHO.
We'll see.
Gosh, I still don't know what to do: stick with the Duke OC (pretty soon) and hope for some luck, or change again and go for the SC Ultra (who knows when)?


----------



## xer0h0ur

Oh I missed that result. He is the only one I've seen now to gain performance from using another card's vBIOS. That is encouraging.


----------



## carlhil2

I used the Gigabyte update exe on my Zotac; it helped my air-cooled card move up in 3DMark.


----------



## JP2012

xer0h0ur said:


> Well, if your avatar is accurate, a 2080 Ti should be strong enough that you no longer have to deal with multi-GPU headaches. IMO you shouldn't be touching a 2080 Ti unless you're a 1440p or 4K gamer. The gains at 1080p are so marginal it's not worth the price of admission.


Well, my avatar is from when I had dual Asus RX 580 Strixes a while back... I have no idea where those cards are now!

Well, I've been gaming at 1080p for the longest time; a few months ago I picked up a 32" LG 4K monitor with FreeSync, thinking AMD was finally gonna release a good graphics card.. lol whoops ✌

I've been researching, but they ONLY talk about ray tracing... It's a bit frustrating lol..

I guess for now I'll keep playing the waiting game... and let my tax money do the thinking for me next year.


----------



## Silent Scone

GanMenglin said:


> Thank you so much for sharing those BIOSes.
> 
> I flashed evga 130% BIOS to my MSI 2080ti Ventus OC. The power limit of MSI was 111%, now it changes to 130%, and I use the same clock speed for core and memory, but the score of time spy extreme is lower than before. I don't know why. It's a reference PCB.
> 
> By the way, I've tried use gigabyte gaming oc new BIOS, the power limit is 300W / 366W.


If you're using the same offset, you may need to increase it further depending on the BIOS. You may need to reinstall the driver, too. Occasionally, when experiencing instability, the power limit would lock to 100%.



xer0h0ur said:


> Oh I missed that result. He is the only one I've seen now to gain performance from using another card's vBIOS. That is encouraging.


A few users have, myself included. Gigabyte BIOS (122%) flashed to Palit (115%).


----------



## Silent Scone

hhbreaker said:


> This file contains Galax(kfa2)_PL127, eMTEK_PL115, EVGAXC_PL130 BIOSs
> Reference PCB model can be applied to all.
> Galax(kfa2)_PL127 showed me the best results.
> 
> NVFLASH 5.513 from the previous post is a file leaked from PALIT
> I'm sorry for my poor English


I can confirm the EVGA BIOS in the zip is legitimate. Thank you for sharing.

To override the PCI Subsystem ID check, use nvflash64 -6 <filename> in an elevated command prompt. Please remember that this is a reference BIOS, so be certain it is compatible with your card before flashing.


----------



## looniam

just saw this and leaving it here for reasons.

[GPU-Z Test Build] BIOS saving on Turing


----------



## asdkj1740

Please upload a screenshot from GPU-Z.


----------



## coolbho3k

Is the base power limit also defined in the BIOS, or just the maximum percentage?

E.g., if I flash a BIOS from a card with a 300W base limit and 110% maximum (330W total) onto a card with a 250W base limit, do I get a 330W card or a 275W card?


----------



## Esenel

Silent Scone said:


> hhbreaker said:
> 
> 
> 
> This file contains Galax(kfa2)_PL127, eMTEK_PL115, EVGAXC_PL130 BIOSs
> Reference PCB model can be applied to all.
> Galax(kfa2)_PL127 showed me the best results.
> 
> NVFLASH 5.513 from the previous post is a file leaked from PALIT
> I'm sorry for my poor English
> 
> 
> 
> I can confirm the EVGA BIOS in the zip is legitimate. Thank you for sharing
> 
> To override PCI Subsystem use nvflash64 -6 <filename> in an elevated command prompt. Please remember that this is a reference BIOS so be certain it is compatible with your card before flashing.

The EVGA bios should be compatible with the FE model, shouldn't it?


----------



## looniam

Yes, that 250W board becomes a 300W card because of the firmware.


----------



## asdkj1740

coolbho3k said:


> Is the base power limit also defined in BIOS or just the maximum percent?
> 
> Eg. If I flash a BIOS from a card with 300W base limit with 110% maximum (330W total) on to a card with 250W base limit, do I get a 330W card or a 275W card?


It is set by the AIB for each model; the power limit is stored in the BIOS.
% and watts are the same thing, just presented differently.

When you flash a BIOS, the current BIOS on your card is replaced, so the power limit changes with it.
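The base-watts-times-percent relationship described above can be sketched in a few lines of Python. This is a hypothetical illustration using the numbers quoted in this thread, not values read from any real BIOS:

```python
# Hypothetical sketch of the Turing power-limit scheme as described in this
# thread: the BIOS stores a default (100%) board power in watts plus a
# maximum slider percentage, and flashing a different BIOS replaces both.

def max_board_power(default_watts: float, max_percent: float) -> float:
    """Effective power ceiling in watts with the power slider maxed out."""
    return default_watts * (max_percent / 100.0)

# EVGA XC reference BIOS: 260 W default, 130% slider
print(max_board_power(260, 130))  # -> 338.0

# A card's old 250 W base does not survive a flash; the new BIOS's
# default and maximum replace it entirely.
print(max_board_power(250, 110))  # -> 275.0
```

So, per the answer above, flashing a 300 W / 110% BIOS onto a 250 W card yields a 330 W card, not a 275 W one, because the 300 W default travels with the BIOS.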


----------



## asdkj1740

Good news: the NZXT G12 is compatible with the 2080 Ti FE PCB. The AMD bracket can be mounted on the FE PCB.


----------



## Silent Scone

hhbreaker said:


> This file contains Galax(kfa2)_PL127, eMTEK_PL115, EVGAXC_PL130 BIOSs
> Reference PCB model can be applied to all.
> Galax(kfa2)_PL127 showed me the best results.
> 
> NVFLASH 5.513 from the previous post is a file leaked from PALIT
> I'm sorry for my poor English





Esenel said:


> The EVGA bios should be compatible with the FE model, shouldn't it?


Yes, I'm using it now. Although I'm now hitting a silicon limit in Time Spy rather than a power one, around 2115MHz. Hard modding will still help stabilise clocks, though.


----------



## zhrooms

GanMenglin said:


> By the way, I've tried use gigabyte gaming oc new BIOS, the power limit is 300W / 366W.





coolbho3k said:


> 366W seems to be very decent





coolbho3k said:


> Does anyone know the stock power limit of the Gigabyte Gaming OC? I know they changed it to 122% but what is the base?


 
Not 366W. The cards with the highest power limit right now are the EVGA XC/XC Ultra after the BIOS update (260 W x 130% = *338* W) and the MSI Gaming X Trio and Sea Hawk X (300 W x 110% = *330* W).

If the Gigabyte Gaming OC is 122%, it'd be 260 W x 122% = *317* W



coolbho3k said:


> Wonder if I should've gone for the MSI Trio instead...


 
No, the Gaming X Trio has a custom PCB with an extra 6-pin connector, which means it can't be flashed with just any other BIOS; it has to be card-specific. And 330 W is not nearly enough, so it still throttles. *But*, since several partners are pushing out new BIOSes with higher power targets for several cards, one might assume MSI will do the same and raise the current 330 W limit to something higher (330-375 W). This is not a guarantee though, so the safe bet right now is to get any reference card (260 W); if the EVGA 130% Power Target BIOS works and the card goes up to 338 W, that is the best you can get.



Baasha said:


> Is EVGA going to make a Classified version of the 2080 Ti (like the good old 980 Ti Classified and before)?


 
No, the FTW3 replaced the Classified; same type of card, just a different name.



coolbho3k said:


> How is your performance with the 122% Gigabyte BIOS?
> 
> Has a non-reference PCB so I'm surprised it works on your card.


 
It is reference; check the original post of this thread (all cards are listed).



JP2012 said:


> What is the actual performance in the real world 4k??


 
Check the original post of this thread; at the bottom, click SHOW on the benchmark spoiler and you'll see 13 games and 2 synthetic tests: GTX 1080 Ti vs RTX 2080 vs RTX 2080 Ti.



arrow0309 said:


> I'm also looking forward to get the (new) MSI Sea Hawk X's bios, it's 300W (I don't know the PL though)


 
Interesting. I see now that they updated the MSI website; before it said 260 W, now 300 W for the Sea Hawk X. It is probably 110% like the Gaming X Trio, which is also 300 W, so 330 W (lower than EVGA's 338 W).



Esenel said:


> The EVGA bios should be compatible with the FE model, shouldn't it?


 
Yes, the NVIDIA Founders Edition and EVGA XC/XC Ultra are both 260 W (factory OC) 2x8-pin reference PCB cards.


----------



## Xeq54

So, I tried to flash my Asus Dual with the EVGA BIOS.

The flash was successful; however, after a restart I had no video signal via DisplayPort, and after plugging in HDMI I had low-res output. The graphics card was not recognized by Windows; it was listed as a generic VGA device in Device Manager. Thankfully, flashing back the stock Asus Dual BIOS worked and my card is still working.

Anyone else with a similar experience? The Asus Dual is a reference PCB, and other users here reported that crossflashing the EVGA BIOS worked for them.

Also, be careful.


----------



## arrow0309

Silent Scone said:


> If you're using the same offset, you may need to increase it further depending on the BIOS. You may need to reinstall the driver, too. Occasionally, when experiencing instability, the power limit would lock to 100%.
> 
> 
> 
> A few users have, myself included. Gigabyte BIOS (122%) flashed to Palit (115%).


Isn't the Gigabyte Gaming supposed to be a custom design, not reference?


----------



## Silent Scone

Xeq54 said:


> So, I tried to flash my Asus dual with the Evga bios.
> 
> Flash was successful, however after restart, I had no video signal via DisplayPort, and after plugging in HDMI I had a low res output. The Graphics card was not recognized by windows, it was listed as generic VGA device in device manager. Thankfully, flashing back the stock AsusDual bios worked and my card is still working.
> 
> Anyone else with similar experience ? The Asus Dual is reference pcb and other users here reported that crossflashing the evga bios worked for them.
> 
> Also, be careful.



Hi, the display driver needs to be reinstalled. I'm not sure you needed to flash back.


----------



## asdkj1740

Well, the card with the highest power limit is not the EVGA FTW3.
Along with a super high power limit, you also need super low temperatures.
I'm afraid only a few AIB cards can handle >350W at relatively low temperatures.


https://www.tomshw.de/2018/09/26/ge...ebertaktung-und-diese-herzlich-wenig-bringt/#
""Update from 30.09.2018: I have now been able to test a KFA2 (Galax) RTX 2080 Ti, which ships to customers with a power limit of 380 watts registered in the firmware (126% per controller possible).""


""In the end, the bigger power limit only helps well-cooled cards hold almost the same clock speeds you'd find on better-cooled cards without the extra electrical power anyway. It does not bring higher overclocking, only a similar clock at higher operating temperatures.""


----------



## Xeq54

Silent Scone said:


> Hi, the display driver needs to reinstall. I'm not sure you needed to flashback.


Well, thanks. That was not common with Pascal, so I panicked. I can confirm that I have successfully flashed the EVGA BIOS on the Asus Dual card.


----------



## carlhil2

Power used by my Zotac flashed with the Gigabyte update exe.


----------



## arrow0309

carlhil2 said:


> Power used by my Zotac flashed with the Gigabyte update exe.


I still don't get it; is a custom (Gigabyte Gaming OC) BIOS 100% working on reference AIB cards?
Or maybe I'm wrong and this Gigabyte is just another reference AIB card like the Windforce OC (although it didn't look like it).
@zhrooms
What do you think?


----------



## zhrooms

*Official NVIDIA NVFlash 5.513.0 Released*  -  Sep 30, 2018

_"This version adds support for *Turing GPUs*"_
 
 
*Official TechPowerUp GPU-Z 2.11 (Turing Test Build) Released*  -  Sep 30, 2018

_"This build should enable *BIOS saving* on *Turing GPUs* (TU102, TU104, RTX 2080, *RTX 2080 Ti*)."_
 
 
*Unofficial EVGA 130% Power Target BIOS Uploaded*  -  Sep 29, 2018

_"EVGA Reference PCB (2x8-Pin) RTX 2080 Ti XC *130*% Power Target BIOS (*338*W)"_
 
 
*> Download at the bottom of the original post <*


----------



## GAN77

Hi guys!

Please tell me the size! MSI RTX 2080 Ti Gaming X Trio.


----------



## zhrooms

carlhil2 said:


> Power used by my Zotac flashed with the Gigabyte update exe.


 
Not correct; that reading is not what you should be looking at. We have confirmation from *EVGA Jacob* that the way to calculate the wattage is to multiply the *default W* by the *% Power Target*, so if the EVGA XC is *260* W, you multiply that by 1.3 (*130*%) and get *338* W.



arrow0309 said:


> I still don't get it, it's a custom (Gigabyte Gaming OC) bios 100% working on the reference AIB's?
> Or maybe I'm wrong and this Gigabyte is still another reference AIB like the Windforce OC (although it didn't look like).


 
Both Gigabyte cards (Windforce OC and Gaming OC) *seem to be* reference boards with *2x8-Pin* and *16 Power Phases*; both are also *clocked mildly*. The Windforce OC has the same boost as the Founders Edition (1635 MHz) and the Gaming OC is 30MHz higher at 1665MHz, which is *identical to the MSI Duke OC* card (*Confirmed Reference PCB* / *2x8-Pin* / *16 Power Phases*).

I have no idea why some people claim the Gigabyte Gaming OC is a "*Custom*" card, but it's *not just* people; the *WATERCOOL HEATKILLER® GPU waterblock compatibility list* has the card listed as "*CUSTOM*" too.

Reviews of the *non-Ti card show an identical PCB to the Founders Edition*. Everything points to it being a reference card.

*Hexus RTX 2080 Gaming OC Review:* _"As this is largely a reference PCB, power is sourced via the usual 6+8-pin plugs and board TDP is a stock 225W."_
*Kitguru RTX 2080 Gaming OC Review:* _"Once we get a look at the PCB itself, it becomes clear that this uses the reference PCB design from Nvidia"_

Yet WATERCOOL lists both the non-Ti and the Ti as "*Custom*". If anyone can explain why, that would be *marvelous*.

*Picture - Kitguru NVIDIA Founders Edition PCB*
*Picture - Kitguru Gigabyte Gaming OC PCB*
*Picture - TechPowerUp Palit Gaming Pro OC PCB*

The only real difference I can see is the power connectors for the fans on the Gigabyte; that might make it incompatible with water blocks, but it certainly does not make it a custom PCB.
 


GAN77 said:


> Hi guys!
> 
> Please tell me the size! MSI RTX 2080 Ti Gaming X Trio.


 
The card is not released yet; the last I heard, *MSI expects it to ship to Europe around the middle of October*, so another two weeks before people get their hands on it.

The website states Card Dimensions: *327 x 140 x 55.6 mm*; that's all I can tell you.


----------



## arrow0309

zhrooms said:


> *Official NVIDIA NVFlash 5.513.0 Released*  -  Sep 30, 2018
> 
> _"This version adds support for *Turing GPUs*"_
> 
> 
> *Official TechPowerUp GPU-Z 2.11 (Turing Test Build) Released*  -  Sep 30, 2018
> 
> _"This build should enable *BIOS saving* on *Turing GPUs* (TU102, TU104, RTX 2080, *RTX 2080 Ti*)."_
> 
> 
> *Unofficial EVGA 130% Power Target BIOS Uploaded*  -  Sep 29, 2018
> 
> _"EVGA Reference PCB (2x8-Pin) RTX 2080 Ti XC *130*% Power Target BIOS (*338*W)"_
> 
> 
> *> Download at the bottom of the original post <*


Nice job! 
Congrats! 



zhrooms said:


> Not correct, that reading not what you should be looking at. We have confirmation by *EVGA Jacob* that the way to calculate the *W* is to multiply the *default W* with the *% Power Target*, so if EVGA XC is *260* W, then you multiply that by 1.3 (*130*%) and you get *338* W.
> 
> 
> 
> Both Gigabyte cards (Windforce OC and Gaming OC) *seems to be* reference boards with *2x8-Pin* and *16 Power Phases*, both are also *clocked mildly*, Windforce OC has the same boost as Founders Edition (1635 MHz) and the Gaming OC is 30MHz higher at 1665MHz, which is *identical to the MSI Duke OC* card (*Confirmed Reference PCB* / *2x8-Pin* / *16 Power Phases*).
> 
> I have no idea why some people claim the Gigabyte Gaming OC to be a "*Custom*" card, but it's *not just* people, *WATERCOOL HEATKILLER® GPU waterblock compatibility list* has the card listed as "*CUSTOM*" too.
> 
> Reviews of the *non-Ti card shows identical PCB to Founders Edition*. Everything points to it being a reference card.
> 
> *Hexus RTX 2080 Gaming OC Review:* _"As this is largely a reference PCB, power is sourced via the usual 6+8-pin plugs and board TDP is a stock 225W."_
> *Kitguru RTX 2080 Gaming OC Review:* _"Once we get a look at the PCB itself, it becomes clear that this uses the reference PCB design from Nvidia"_
> 
> Yet WATERCOOL lists both the non-Ti and Ti as "*Custom*". If anyone can answer why, that would be *marvelous*.
> 
> *Picture - Kitguru NVIDIA Founders Edition PCB*
> *Picture - Kitguru Gigabyte Gaming OC PCB*
> *Picture - TechPowerUp Palit Gaming Pro OC PCB*
> 
> Only real difference I can see is the power connectors on the Gigabyte for the fans, that might make it incompatible with water blocks but certainly does not make it a custom PCB.
> 
> 
> 
> The card is not released, last I heard *MSI expects it to be shipped to Europe around the middle of October*, so another 2 weeks for people to get their hands on it.
> 
> Website states, Card Dimension - *327 x 140 x 55.6 mm*, that's all I can tell you.


Maybe because we still don't have a *ucking pic of this 2080 Ti Gaming PCB yet, nor a single review whatsoever?
Yeah, I've seen that 2080 (non-Ti) review, but it's not the Ti we're looking at.
Only the Bitspower WB states it's a reference AIB card like the Windforce.
I honestly believed it was that way, but now, with the Aorus Xtreme coming soon, I thought maybe they're sharing the same custom design.
But of course, I may be wrong.


----------



## Silent Scone

GAN77 said:


> Hi guys!
> 
> Please tell me the size! MSI RTX 2080 Ti Gaming X Trio.


It's one of the longest cards available. (32.7cm)

https://www.overclockers.co.uk/msi-...ddr6-pci-express-graphics-card-gx-34e-ms.html


----------



## arrow0309

arrow0309 said:


> ... cut ...
> Only the bitspower wb state it's reference AIB like the winforce.
> ... cut ...


Not anymore: 

https://shop.bitspower.com/index.php?route=product/product&path=67_102_349&product_id=7049


----------



## zhrooms

arrow0309 said:


> Maybe because we still don't have a *ucking pic of this 2080 ti Gaming pcb yet nor a single review whatsoever?
> Yeah, I've seen that 2080 (non ti) review but it's not the ti we're looking at.
> With the Aorus Xtreme soon I thought maybe they're sharing the same custom design.
> But of course, I may be wrong.


 
We don't need a picture of the PCB. We have official pictures of the card with the cooler, the PCB has the *same dimensions as reference*, and the website says it has 13+3 Power Phases (*16*), just like the reference PCB.

The AORUS Xtreme also has official pictures, and it features a *Custom PCB (extended dimensions)* with *additional power phases*, extremely likely *19* like the FTW3 and Strix.

The RTX 2080 Ti Gaming OC is a *reference card*, end of story; the question is why it is listed as not compatible (CUSTOM) by WATERCOOL.

There are also plenty of reasons the non-Ti is comparable to the Ti, too many to list actually. But the most telling is that we know of over *50 unique cards* by now, and the biggest difference any *one* of them brings is a cooler change between non-Ti and Ti; every other card is essentially the same (*purpose/tier*).


----------



## GAN77

zhrooms said:


> Only real difference I can see is the power connectors on the Gigabyte for the fans, that might make it incompatible with water blocks but certainly does not make it a custom PCB.


Similar, but with many differences, including the components.


----------



## asdkj1740

GAN77 said:


> Similar, but many differences, including the elements


Lots of AIB cards do this, which is an upgrade.
And some AIB cards cut all the SP-Caps and POSCAPs and replace them with solid caps, which is a downgrade.


----------



## arrow0309

GAN77 said:


> Similar, but many differences, including the elements


I wouldn't count them as differences, not for BIOS flashing purposes nor for EK block compatibility.
And zhrooms is right, at least regarding the 2080 non-Ti, and of course 99% right about the 2080 Ti as well, but I still wanna see a goddamned PCB of these Gaming OCs  

Notice they share the very same main VRM (MOSFETs, chokes and POSCAPs); some different fan/RGB mounting connectors mean nothing.



asdkj1740 said:


> Lots of AIB cards do this, which is an upgrade.
> And some AIB cards cut all the SP-Caps and POSCAPs and replace them with solid caps, which is a downgrade.


I agree, like my (incoming) MSI Duke, which upgraded all the main VRM POSCAPs from 330uF to 470uF.
Almost nothing changes, but it's still an upgrade 

FE pcb:

https://www.techpowerup.com/reviews...080_Ti_Founders_Edition/images/front_full.jpg

MSI Duke pcb:

https://www.techpowerup.com/reviews/MSI/GeForce_RTX_2080_Ti_Duke/images/front_full.jpg




----------



## zhrooms

GAN77 said:


> Similar, but many differences, including the elements


 
I don't think a single thing you highlighted actually touches the waterblock; it covers the GPU, memory and (half of) the VRM. The inductors & MOSFETs that are actually touching look like reference.


----------



## xer0h0ur

If you really wanted to dissipate heat from a component that isn't making contact with the block's plate, you would just need to use a thermal pad of the proper thickness so it contacts the plate. Obviously, if it falls outside the range of the plate itself, then you can't do anything about that.


----------



## OleMortenF

Hi guys. I am a little confused about the different 2080 Ti cards.

I am thinking about getting either:
MSI GeForce RTX 2080 Ti Duke OC
or
Gigabyte GeForce RTX 2080 Ti GAMING OC
I am going to use an EK waterblock on the card and found that the waterblock will fit both of these.

Then I came across this, and I am not sure which one would be best:
MSI Duke 111% (300W base) 333W
GIGABYTE cards 122% (250W base) 305W

I also read above that the MSI Duke upgraded all the main VRM POSCAPs from 330uF to 470uF.
Does this mean the MSI Duke will be a better overclocker, or can the Gigabyte also get up to 333W with a BIOS flash?


----------



## zhrooms

OleMortenF said:


> I am thinking about getting either:
> MSI GeForce RTX 2080 Ti Duke OC
> or
> Gigabyte GeForce RTX 2080 Ti GAMING OC
> I am gonna use an EK Waterblock on the card and found the Waterblock will fit both of these.
> 
> So then I came over this, and I am not sure which one would be the best.
> MSI Duke 111% (300w base) 333w
> GIGABYTE Cards 122% (250w base) 305w
> 
> And I also read on the top here that the ) MSI Duke, upgraded all the main vrm poscaps from 330uf to 470uf.
> Does this mean that MSI Duke will be a better overclocker or can Gigabyte also be get upto 333w with a bios?


 
The MSI Duke OC is not 300 W base, but 260 W (x 111% is only 289 W).

Both cards can be flashed with the EVGA XC 130% Power Target BIOS (260 W > 338 W).

The Gigabyte Gaming OC is currently listed as not compatible with WATERCOOL blocks, for unknown reasons. So at the moment I would recommend the MSI Duke OC, but both cards use the reference PCB and will overclock the same; the only thing preventing you from overclocking further is the silicon lottery, some cards simply reach higher than others, and there is nothing you can do about it.

In my opinion it does not matter at all which card you buy at the moment (as long as it is a reference PCB and you are going to use a water block); cheapest is best, then flash it with the EVGA BIOS (338 W). (In the future it is extremely likely we will get a higher power target BIOS; for example, the EVGA FTW3 will be 373 W from the factory, and the private EVGA XC BIOS out there at 154% is 400 W.)

_(Although, I always recommend EVGA over any other brand because of the warranty; if the prices were equal it is the obvious choice, as replacing the thermal paste and cooler while keeping the warranty completely intact is amazing! Personally I want an EVGA RTX 2080 Ti blower-cooler card for as little $ as possible; it will overclock just as well on water as the FTW3 *if* we get the 400 W BIOS.)_
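The options being weighed here can be compared with a quick sketch. The wattages and percentages below are the ones quoted in these posts (Duke OC corrected to a 260 W base), not official spec-sheet figures:

```python
# Effective power ceilings for the reference-PCB options discussed in this
# thread, computed as default watts x maximum power target. All numbers are
# taken from the posts above and may not match official specs.

cards = {
    "MSI Duke OC (stock BIOS)":          (260, 111),
    "Gigabyte Gaming OC (stock BIOS)":   (260, 122),
    "Reference PCB + EVGA XC 130% BIOS": (260, 130),
    "EVGA XC private 154% BIOS":         (260, 154),
}

ceilings = {name: w * pct / 100 for name, (w, pct) in cards.items()}
for name, watts in sorted(ceilings.items(), key=lambda kv: kv[1]):
    print(f"{name:36s} {watts:5.1f} W")
```

This reproduces the figures above: roughly 289 W, 317 W, 338 W, and 400 W, which is why the cheapest reference card plus the EVGA flash is the recommendation.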


----------



## OleMortenF

Thanks a lot for answering 
There is one more thing that I am a little confused about.
When checking the reviews on Guru3D of all the different 2080 Ti cards, the MSI 2080 Ti Duke seems to be the one that overclocks the best and also gets the highest voltage.
https://www.guru3d.com/articles_pages/msi_geforce_rtx_2080_ti_duke_review,28.html

Is this only because they maybe got their hands on a silicon-lottery card?


----------



## asdkj1740

The Inno3D vapor chamber cooler is insane.
The midplate cooling the VRM and VRAM is separated from the main vapor chamber heatsink; there are only three thermal pads between the midplate and the main vapor chamber heatsink for the VRAM, and nothing for the VRM or the SP-Caps.
Meaning all the heat from the MOSFETs and SP-Caps is trapped on the flat midplate, and airflow can't reach the midplate at all because the super-large vapor chamber leaves almost no holes/spaces for air to get through.

A freaking terrible design.


----------



## EQBoss

zhrooms said:


> MSI Duke OC is not 300 W base, but 260 W. (x 111% is only 289 W)
> 
> Both cards can be flashed to EVGA XC 130% Power Target BIOS (260 W > 338 W)
> 
> The Gigabyte Gaming OC is currently listed as not compatible with WATERCOOL blocks, for unknown reasons. So at the moment I would recommend MSI Duke OC, but both cards use reference PCB and will overclock the same, the only thing preventing you from overclocking further than X will be silicon lottery, some cards simply reach higher than others, nothing you can do about it.
> 
> According to me it does not matter at all which card you buy at the moment (as long as it is reference PCB and you are going to use a water block), cheapest is best, and flash it (338 W) with the EVGA BIOS. (In the future it is extremely likely we will get a higher power target BIOS, for example the EVGA FTW3 will be 373 W from factory and the private EVGA XC one out there at 154% is 400 W)
> 
> _(Although, I always recommend EVGA over any other brand because of the warranty, if the prices were equal it is the obvious choice, replacing thermal paste and cooler while keeping warranty completely intact is amazing! Personally want an EVGA RTX 2080 Ti Blower Cooler for as little $ as posible, will overclock just as good on water as the FTW3 *if* we get the 400 W BIOS)_


Isn't the FTW3 a custom board? Wouldn't that BIOS not work on reference boards?


----------



## zhrooms

OleMortenF said:


> When checking reviews on Guru3D on all the different 2080 Ti cards, the MSI 2080 Ti Duke seems to be the one that overclocks the best and also gets the highest voltage.
> 
> Is this only because they maybe got they're hands on a silicon lottery card?


 
Yes, there are reviews where the Founders Edition overclocks higher than the ASUS Strix/MSI Gaming X Trio, which are custom PCBs with extra power phases (the MSI also has an extra 6-pin power connector); it really is all about luck. It was the exact same thing back with Pascal: my Founders Edition topped out at 2136 MHz on water, while others had their cards (reference PCB and custom PCB, on water) reach ~2150-2175 MHz at the same voltage (1.100V) and XOC BIOS (power limit removed). I got crushed in benchmark scores because of it.
 


EQBoss said:


> Isn't FTW3 a custom board? Wouldn't that not work for reference boards?


 
Maybe I was a bit unclear: there is a private BIOS out right now for the EVGA XC/XC Ultra that raises the power limit to 154% (400 W), up from the current public BIOS (130%, 338 W). So we know it is possible to run reference PCB cards at 400 W.

EVGA Jacob has also stated the FTW3 comes with a factory maximum of 373 W; (just a guess, because the GTX 1080 Ti FTW3 was 280 W) around 280 W x 133% Power Target = 372.4 W. (338 W is nothing compared to 373 W.)

So we know custom PCB cards with very high power limits are coming, and 400 W is still a limitation of the private BIOS, so it is to be expected that LN2 overclockers will get their hands on this 400 W BIOS and eventually share it with us, or even one with no power limit at all, like the Pascal XOC BIOS shared by Elmor (power limit removed, by ASUS ROG).

The point is, it is only a matter of time before we get (officially or unofficially) a BIOS with a higher power target than the current official EVGA BIOS update (130%, 338 W). We know it is possible, variants of it already exist, and we know people need it.

Personally, all I want is to be able to *benchmark* and play a *game* at the *max overclock my cooling allows* with *zero* throttling. NVIDIA Boost (thermal & power limits) is a curse.


----------



## asdkj1740

On Pascal, we could cross-flash BIOSes between reference and non-reference models; only some special BIOSes from special models like the Kingpin couldn't be cross-flashed, or would result in bricking.
The current highest-power-limit official (public) BIOS should be the Galax one reported by Tom's Hardware DE, which says it is a 380W BIOS.


I think Jay and Steve should share the 400W BIOS with the community. They should.


----------



## bmg2

Share the BIOS please! Give the rest of us a chance to compete with Steve, Jay and Paul.


----------



## EQBoss

zhrooms said:


> Maybe I was a bit unclear, there is a Private BIOS out right now for EVGA XC/XC Ultra that increases the power limit by 154% (400 W), up from the current Public BIOS (130%, 338 W). So we know it is possible to run Reference PCB cards at 400 W.
> 
> Then EVGA Jacob has stated the FTW3 comes with a factory W of 373 max, (just a guess because GTX 1080 Ti FTW3 was 280) around 280 W x 133% Power Target = 372.4 W. (338 W is nothing compared to 373 W)
> 
> So we know there are coming Custom PCB cards with a very high power limit, and 400 W is still a limitation with the Private BIOS, so it is to be expected that LN2 overclockers will get their hands on this 400 W BIOS and eventually share it with us, or even one with no power limit at all like the Pascal XOC BIOS shared by Elmor (Removed Power Limit by ASUS ROG).
> 
> Point is, only a matter of time before we get (official or unofficial) a BIOS with a higher power target than the current Official EVGA BIOS Update (130%, 338 W). We know it is possible, variants of it already exists, we know people need it.
> 
> Personally all I want is to be able to *benchmark* and play a *game* at the *max overclock my cooling allows* with *zero* throttling. NVIDIA Boost (Thermal & Power Limit) is a curse.


Couldn't agree more. I have 2 Titan X Pascals in SLI and the power limit is a huge issue, especially since there are no BIOS mods for the Titans, only 1080 Tis. And yes, I've been following this closely; I'm aware of the 400 W BIOS. I wish they'd shared it instead of teasing us, lol. Hopefully we get one soon. Still waiting for the two Founders cards; hopefully they ship tomorrow since they charged the credit cards.



asdkj1740 said:


> On Pascal we could cross-flash BIOSes between reference and non-reference models; only a few special BIOSes from special models, like the Kingpin's, couldn't be cross-flashed without bricking the card.
> The current highest-power-limit official (public) BIOS should be the Galax one reported by Tom's Hardware DE, which they say is a 380 W BIOS.
> 
> 
> I think Jay and Steve should share the 400 W BIOS with the community. They should.


Is that a custom PCB Galax?


----------



## zhrooms

asdkj1740 said:


> On Pascal we could cross-flash BIOSes between reference and non-reference models; only a few special BIOSes from special models, like the Kingpin's, couldn't be cross-flashed without bricking the card.
> The current highest-power-limit official (public) BIOS should be the Galax one reported by Tom's Hardware DE, which they say is a 380 W BIOS.
> 
> I think Jay and Steve should share the 400 W BIOS with the community. They should.


Yes, can't flash Custom PCB that has different power connectors. Also we don't know *yet* if it is possible to flash 19 Power Phase *Turing* cards with 16 Power Phase BIOS and vice versa. (EVGA XC <> EVGA FTW3 <> EVGA XC).

*Correction: * It seems we can, as someone brave decided to try it, see this post.

And no, I did read that Tom's Hardware article; it is not correct.

*Tom's Hardware DE (Google Translate) *_"I have now been able to test a *KFA2 (Galax) RTX 2080 Ti*, which also comes with a registered in the firmware *power limit of 380 watts* to the customer (*126%* per controller possible)."_

The KFA2/Galax OC card is *260* W and if the power limit is *126* %, then 260 W x 126 % = *328* W (Not 380 W).

*Correction: * GALAX was apparently crazy enough to actually do it. Their Reference PCB card really is 300/380 W straight out of the box, which is higher than the EVGA FTW3 Custom PCB card at 373 W max.


----------



## zocker

Hi,
I have a strange "issue":
the Pixel Rate and the Fill Rate don't match the values on TechPowerUp.
It's the ASUS 2080 Ti DUAL.
Is something wrong with my card?


----------



## arrow0309

zhrooms said:


> Yes, can't flash Custom PCB that has different power connectors. Also we don't know *yet* if it is possible to flash 19 Power Phase *Turing* cards with 16 Power Phase BIOS and vice versa. (EVGA XC <> EVGA FTW3 <> EVGA XC)
> 
> And no, I did read that tomshardware article, it is not correct.
> 
> *Tom's Hardware DE (Google Translate) *_"I have now been able to test a *KFA2 (Galax) RTX 2080 Ti*, which also comes with a registered in the firmware *power limit of 380 watts* to the customer (*126%* per controller possible)."_
> 
> The KFA2/Galax OC card is *260* W and if the power limit is *126* %, then 260 W x 126 % = *328* W (Not 380 W).


I think you're wrong; they may be referring to this beast, a limited and very expensive Galax-only edition (not KFA2), the RTX 2080 Ti HOF OC LAB WC EDITION:

http://galaxstore.net/GALAX-GeForce-RTX-2080Ti-HOF-OC-Lab-WC-Edition_p_180.html

I doubt it'll be any lower than the FTW3 (power limit wise) and there's no way in the world this board should only be 260W at its default (100%) power limit.


----------



## NewType88

@TahoeDust hey, did you get an email from B&H saying your FTW3 order is on backorder? Even though it's not supposed to be in stock till the 23rd? I'm hoping that's just a standard message when things don't ship within a two-week window or something.


----------



## coolbho3k

zhrooms said:


> Not 366W; the cards with the highest power limit right now are the EVGA XC/XC Ultra after the BIOS update (260 W x 130% = *338* W) and the MSI Gaming X Trio and Sea Hawk X (300 W x 110% = *330* W).
> 
> If the Gigabyte Gaming OC is 122% it'd be 260 W x 122% = *317* W


Are you sure it's not 300W x 122% = 366W? Two reports here are saying it's 366W. Can someone else who's flashed the Gigabyte BIOS show the GPU-Z result?


----------



## zhrooms

zocker said:


> The Pixel Rate and the Fill Rate are not equal to the values on Tech Power Up.
> Its the Asus 2080 Ti DUAL.
> Is something wrong with my card?!


 
No, ignore that. Card is working as intended.
 


arrow0309 said:


> I think you're wrong as they may refer to this beast and it's a limited, very expensive edition Galax only (not KFA2), the RTX 2080TI HOF OC LAB WC EDITION
> 
> I doubt it'll be any lower than the FTW3 (power limit wise) and there's no way in the world this board should only be 260W at its default (100%) power limit.


 
"May refer", no they certainly aren't. They specifically called it Galax/KFA2 which means they were talking about the OC card because the Blower card is not released yet (or any other card), and the HOF card won't see the light of day for weeks. (SG Card is Galax only)

Also, the power limit should be similar to or higher than the FTW3's because it features 3x8-Pin.
 


coolbho3k said:


> Are you sure it's not 300W x 122% = 366W? Two reports here are saying it's 366W.


 
Yes, and there are no "reports". The only cards *right now* above 260 W are the *MSI Gaming X Trio* and *MSI Sea Hawk X*, both 300 W from factory with 110% Power Limit (300 W x 110% = 330 W). And why are they *300* W? Because they have a boost clock of *1755* MHz, almost *100 MHz faster* than *MSI Duke OC*/*Gigabyte Gaming OC* at *1665* MHz.
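The arithmetic above comes up constantly in this thread, so here is a minimal sketch of it in Python. The rule is simply: the board's factory TDP times the BIOS's maximum power-target slider gives the real ceiling in watts. The card names and wattages below are just the figures quoted in this discussion, not an authoritative database.

```python
# Power-limit ceiling = factory TDP x maximum power-target slider.
# Values are the ones quoted in this thread (assumed, not official specs).

def max_power_w(default_tdp_w: float, power_target_pct: float) -> float:
    """Highest sustained board power before NVIDIA Boost starts throttling."""
    return default_tdp_w * power_target_pct / 100.0

cards = {
    "EVGA XC / XC Ultra (updated BIOS)": (260, 130),  # 338 W
    "MSI Gaming X Trio / Sea Hawk X":    (300, 110),  # 330 W
    "Gigabyte Gaming OC (if 122%)":      (260, 122),  # ~317 W
}

for name, (tdp, pt) in cards.items():
    print(f"{name}: {max_power_w(tdp, pt):.0f} W")
```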


----------



## arrow0309

zhrooms said:


> ... ...
> 
> 
> "May refer", no they certainly aren't. They specifically called it Galax/KFA2 which means they were talking about the OC card because the Blower card is not released yet (or any other card), and the HOF card won't see the light of day for weeks. (SG Card is Galax only)
> 
> Also, the power limit should be similar to or higher than the FTW3's because it features 3x8-Pin.
> 
> 
> ... ...


Nope, the Galax SG is (or soon will be) the KFA2 SG as well:

https://www.overclockers.co.uk/kfa2...ddr6-pci-express-graphics-card-gx-09s-kf.html

And it's a reference board with 2x 8pin:

So when you're talking 3x 8pin custom Galax you're talking HOF, we'll see soon. And I'm sure the EU KFA2 will have one as well, later.


----------



## zhrooms

arrow0309 said:


> Galax SG is (or will be soon) KFA2 SG as well:
> 
> https://www.overclockers.co.uk/kfa2...ddr6-pci-express-graphics-card-gx-09s-kf.html
> 
> And it's a reference board with 2x 8pin:
> 
> So when you're talking 3x 8pin custom Galax you're talking HOF, we'll see soon. And I'm sure the EU KFA2 will have one as well, later.


 
Yes, *soon, not yet*. It's not listed in any other store or on their website, and the link that came up weeks ago is still dead. I'm surprised Overclockers UK let you add it to the basket; the boost speed isn't even known.

I never said the (OC) card isn't 2x8-Pin.

Nothing to really see about the HOF OC Lab, we know almost everything about it (*3x8-Pin*, *19 Power Phases*, 320mm, *Dual BIOS*) except W/PT/Boost (How fast it is *stock*). But it will be *fast* that's for sure.

Also have you even checked the *original post*? It's all right there, everything checked thoroughly. No need to tell me which cards are reference. I know.


----------



## iamjanco

PC Builders Club over in Germany *says the following* about the Galax HOF 2080ti HOF OC LAB:



> there are three 8-pin connectors, which normally provide 150 watts each. Together with the 75 watts from the PCIe slot, the RTX 2080 Ti HOF OC Lab WC Edition can draw up to 525 watts as standard. These connectors feed 16 phases, which are distributed across the PCB.



Comes with the typically *extravagant Galax price* though.

The question is, does it have 19 phases as the OP currently states, or 16? I noticed a similar statement about 16 phases on other sites, as well as mention of an additional three phases that feed the memory. There was also mention of a 1680 MHz boost clock on a couple of sites, but I'm not sure where they're getting that from, since Galax shows TBA on their own site.


----------



## zhrooms

iamjanco said:


> The question is, does it have 19 phases as the OP currently states, or 16? I noticed a similar statement about 16 phases on other sites, as well as mention of an additional three phases that feed the memory. There was also mention of a 1680 MHz boost clock on a couple of sites, but I'm not sure where they're getting that from, since Galax shows TBA on their own site.


 
Yes, technically it is 16 + 3 = 19. So far the confirmed partners offering a Custom PCB card with 16 + 3 are ASUS (STRIX), EVGA (FTW3) and GALAX (HOF); more will join soon. MSI's Custom PCB is 14 + 3 = 17. Sadly it does not matter at all when the average user is cooling with water; you still hit a wall around 2100 MHz, just like a Founders Edition on air. These cards are for serious overclockers only.

I am ignoring the "1680 MHz" at the moment because a news article said it raised the core clock itself, not the boost, which makes no sense, and 1680 MHz Boost is not a lot, MSI already has two cards running 1755 MHz Boost, sure it is possible it will run 1680 MHz stock but I'll wait for actual confirmation.


----------



## jase78

Why do you think most manufacturers put out officially updated BIOSes at the same time? I can't remember anything like this in the past. What changed? Could this be related to the delays?


----------



## zhrooms

jase78 said:


> Why do you think most manufacturers put out officially updated BIOSes at the same time? I can't remember anything like this in the past. What changed? Could this be related to the delays?


 
It was not quite at the same time, but a few days apart; once one partner supplied an updated BIOS, the others seemed to follow. It's a crucial time to secure those pre-orders.

And part of it was definitely because of the delay; the entire RTX launch was rushed by several months, and the RTX 2080 was already delayed more than 6 months. Hopefully we won't get another shabby release like this again, but only time will tell. Expect the RTX 3080 in a year.


----------



## asdkj1740

zhrooms said:


> Yes, can't flash Custom PCB that has different power connectors. Also we don't know *yet* if it is possible to flash 19 Power Phase *Turing* cards with 16 Power Phase BIOS and vice versa. (EVGA XC <> EVGA FTW3 <> EVGA XC)
> 
> And no, I did read that tomshardware article, it is not correct.
> 
> *Tom's Hardware DE (Google Translate) *_"I have now been able to test a *KFA2 (Galax) RTX 2080 Ti*, which also comes with a registered in the firmware *power limit of 380 watts* to the customer (*126%* per controller possible)."_
> 
> The KFA2/Galax OC card is *260* W and if the power limit is *126* %, then 260 W x 126 % = *328* W (Not 380 W).


It won't brick the card. On Pascal (1070-1080 Ti) the uPI uP9511 is the most popular VRM controller, and cross-flashing comes down to that controller: same controller, no brick.
But that doesn't mean the card will work fine; sometimes cross-flashing messes up the P-states and the card misbehaves.
The "19 phase" and "16 phase" counts are all marketing. Currently on the 2080 Ti there are only two VRM layouts: 8 phases and 10 phases.
The difference is not huge, and this time NVIDIA gave us the new power-balancing tech, which may even help with flashing a BIOS built for a different number of power connectors.

On Pascal, even when we could cross-flash without bricking the card, it would sometimes mess up the power draw; I hope the new tech solves this problem.


----------



## asdkj1740

HOF, Kingpin/Classified, and Lightning BIOSes are dangerous to cross-flash; leave them alone or brick your card.


----------



## Emmett

Could someone post what I would need to type into the command line to flash my 2080 Ti Turbo to the EVGA BIOS? Does it have to be forced?


----------



## asdkj1740

Emmett said:


> could someone post what I would want to type into command line to flash my 2080 ti turbo to the evga bios? it has to be forced?


nvflash64 -6 xxx.rom

By the way, ASUS cards may need another command typed first to lift the flashing restriction.
I can't recall it exactly (protect_off, something like that); you can check the BIOS flashing post.


----------



## Emmett

asdkj1740 said:


> nvflash64 -6 xxx.rom
> 
> By the way, ASUS cards may need another command typed first to lift the flashing restriction.
> I can't recall it exactly (protect_off, something like that); you can check the BIOS flashing post.


Thanks!


----------



## Hulk1988

I tested my 2080 Ti Phoenix a lot and I am happy: 2115/8100, and I reached 13114 points in Superposition. But the BIOS is limited to 300 W only. Luckily it is a reference PCB, and I'm waiting for the 400 W BIOS now.

Someone has it already http://www.directupload.net/file/d/5227/vyaj7bz6_png.htm


----------



## CallsignVega

NewType88 said:


> @TahoeDust hey did you get an email from bh saying your FTW3 order is on backorder ? Even though its not supposed to be in stock till the 23rd ? I'm hoping that's just a standard message when things don't ship in a 2 week window or something.


I got the same email from B&H for my FTW3. Their system sends out automated updates every once in a while.


----------



## asdkj1740

Hulk1988 said:


> I tested a lot my 2080 TI Phoenix and I am happy. 2115/8100 and reached 13114 Points in Superposition. But the BIOS is limited to 300W only. Luckily it is a reference PCB and I waiting for the 400W Bios now.
> 
> Someone has it already http://www.directupload.net/file/d/5227/vyaj7bz6_png.htm


This should be 400 W max?
Would you mind uploading a screenshot of GPU-Z's "NVIDIA BIOS" page to show the power limit in watts?
Thanks.


----------



## Hulk1988

It's not my BIOS; someone has it in a German forum.

But yes, it is 400 W max. Let's see over the next days whether someone shares the "private" BIOS from EVGA.


----------



## carlhil2

Gigabyte BIOS updated onto my Zotac via the .exe.


----------



## asdkj1740

carlhil2 said:


> Gigabyte BIOS updated onto my Zotac via the .exe.


awesome!

By the way, it seems the BIOS already covers the Samsung GDDR6 variant. I don't know when Samsung G6 will start being used on the RTX 2080/2080 Ti; hopefully Samsung G6 VRAM overclocks as well after a flash as their G5 did.


----------



## carlhil2

asdkj1740 said:


> awesome!
> 
> By the way, it seems the BIOS already covers the Samsung GDDR6 variant. I don't know when Samsung G6 will start being used on the RTX 2080/2080 Ti; hopefully Samsung G6 VRAM overclocks as well after a flash as their G5 did.


Good info, will wait on that 400w, my water block isn't here yet anyways...


----------



## asdkj1740

carlhil2 said:


> Good info, will wait on that 400w, my water block isn't here yet anyways...


Those techtubers should share it with us.
We should be united and make our request to them.


----------



## Emmett

Hulk1988 said:


> I tested a lot my 2080 TI Phoenix and I am happy. 2115/8100 and reached 13114 Points in Superposition. But the BIOS is limited to 300W only. Luckily it is a reference PCB and I waiting for the 400W Bios now.
> 
> Someone has it already http://www.directupload.net/file/d/5227/vyaj7bz6_png.htm


How do you get 8100 on the memory? Afterburner only gives me +1000.


----------



## ENTERPRISE

Happy I went with a reference PCB after all; if we can get the increased power limit up to 400 W, then bonus. Just ordered 2x MSI 2080 Ti Sea Hawk X cards.


----------



## Hulk1988

Emmett said:


> How do you get 8100 on mem? afterburner only gives me +1000


I used EVGA Precision X1; you can move the slider higher than +1000.
At 8125 I get graphical glitches, so I'm now using 8000 as my daily setting.


----------



## asdkj1740

ENTERPRISE said:


> Happy I went with a reference PCB afterall, if we can get the increased PL up to 400Watts then bonus. Just ordered 2x MSI 2080Ti Sea Hawk X Cards.


The Pascal Sea Hawk has had too low a power limit.
By the way, the RTX Sea Hawk changed AIO OEM; no more Asetek.


----------



## asdkj1740

Hulk1988 said:


> I used X1 from EVGA. You can move the slider higher than +1000
> At 8125 I get graphical glitches. Using now 8000 as my daily setting.


Would you share your GPU-Z pic so that I can add it to the chart?


----------



## mikedrewsmy

Hi. Anyone with a 2080 Ti Gaming X Trio already flashed their card with an EVGA bios yet?


----------



## Vapochilled

So, both the Gigabyte Gaming OC and the MSI Duke OC are reference PCBs, correct?
If so, and if the 400 W BIOS or other BIOSes are flashable to them, I believe I should get the one with the best cooler, correct? I'm asking because I got an email from the shop where I pre-ordered and they asked: do you want the Gigabyte Gaming OC or the Duke OC?


----------



## asdkj1740

Vapochilled said:


> So, both the Gigabyte Gaming OC and the MSI Duke OC are reference PCBs, correct?
> If so, and if the 400 W BIOS or other BIOSes are flashable to them, I believe I should get the one with the best cooler, correct? I'm asking because I got an email from the shop where I pre-ordered and they asked: do you want the Gigabyte Gaming OC or the Duke OC?


Get the cheapest one. And MSI has that stupid sticker to stop you disassembling the cooler; I don't like MSI or ASUS.
Cooling, of course, is essential for GPU Boost 3.0 to work well.
Since Maxwell there has been no flashable BIOS that disables GPU Boost, although I did once see Kingpin using a GPU-Boost-disabled BIOS on the 1080 Ti FE.


----------



## Hulk1988

mikedrewsmy said:


> Hi. Anyone with a 2080 Ti Gaming X Trio already flashed their card with an EVGA bios yet?


Since the Trio X has a completely different PCB, I would NOT recommend doing that.


----------



## Hulk1988

asdkj1740 said:


> would you share your gpuz pic so that i can add it to the chart?


Sure. What exactly do you want? The front page of GPU-Z with the OC memory settings?


----------



## Vapochilled

asdkj1740 said:


> Get the cheapest one. And MSI has that stupid sticker to stop you disassembling the cooler; I don't like MSI or ASUS.
> Cooling, of course, is essential for GPU Boost 3.0 to work well.
> Since Maxwell there has been no flashable BIOS that disables GPU Boost, although I did once see Kingpin using a GPU-Boost-disabled BIOS on the 1080 Ti FE.



The Duke is the cheapest:
1130 euros for the Duke,
1160 for the Gigabyte Gaming,
EVGAs and ASUS Strixes: 1350 euros and up.

So I will get the Duke with the reference PCB and wait for the 400 W BIOS...
Otherwise I will just flash the Gigabyte BIOS onto it, because it has a much higher limit.


----------



## asdkj1740

Hulk1988 said:


> Since the Trio X has a complete different PCB I would NOT recommend to do that.


The Trio X uses the uP9512 with 8 PWM phases max, just like the FE design.
But I don't recommend doing this if you don't have a second GPU and don't know how to recover a bricked card (if it comes to that), lol.


----------



## asdkj1740

Hulk1988 said:


> Sure. What exactly you want? The front page of GPU-Z with the OC Memory settings?


https://www.overclock.net/forum/attachment.php?attachmentid=221170&stc=1&d=1538376198


----------



## Hulk1988

asdkj1740 said:


> https://www.overclock.net/forum/attachment.php?attachmentid=221170&stc=1&d=1538376198


Okay, I will do it on my lunch break.


----------



## arrow0309

ENTERPRISE said:


> Happy I went with a reference PCB afterall, if we can get the increased PL up to 400Watts then bonus. Just ordered 2x MSI 2080Ti Sea Hawk X Cards.





asdkj1740 said:


> The Pascal Sea Hawk has had too low a power limit.
> By the way, the RTX Sea Hawk changed AIO OEM; no more Asetek.


I'm curious what power limit this BIOS will have; I'm sure it's higher than the Duke's, because the Sea Hawk X is rated 300 W (vs 260 W for the Duke).



Vapochilled said:


> The Duke is the cheapest:
> 1130 euros for the Duke,
> 1160 for the Gigabyte Gaming,
> EVGAs and ASUS Strixes: 1350 euros and up.
> 
> So I will get the Duke with the reference PCB and wait for the 400 W BIOS...
> Otherwise I will just flash the Gigabyte BIOS onto it, because it has a much higher limit.


Where the hell did you find it for 1130 euros? I bought it from Scan.uk for £1,150.
But at least they let me break the stupid MSI tamper sticker to install the EK Vector (which I also bought from them).


----------



## Xeq54

arrow0309 said:


> But at least they let me break the stupid MSI tampering sticker to install the EK Vector (which I've also bought from them).


That tampering sticker holds no legal ground anywhere in the EU. Even with the seal removed, the warranty is still intact and they have to accept it and then they have 30 days to resolve the situation. They can refuse the claim afterwards, but they will have to prove that whatever you did directly caused the issue, which is next to impossible unless you break something off the PCB or something.

I had two cards successfully replaced with the sticker broken in the past. They may try to refuse it in the shop, but you just need to let them know that you know your rights, they know them too.


----------



## arrow0309

Xeq54 said:


> That tampering sticker holds no legal ground anywhere in the EU. Even with the seal removed, the warranty is still intact and they have to accept it and then they have 30 days to resolve the situation. They can refuse the claim afterwards, but they will have to prove that whatever you did directly caused the issue, which is next to impossible unless you break something off the PCB or something.
> 
> I had two cards successfully replaced with the sticker broken in the past. They may try to refuse it in the shop, but you just need to let them know that you know your rights, they know them too.


Hi,
I couldn't agree more with your POV. I've also had an MSI card with a broken "seal" (an AMD 6950 Twin Frozr III) RMA'd, and they accepted it (the e-shop then sent the card to MSI; it took way longer than I expected, but in the end it worked), sending me back a like-new refurbished one.
This time I called them (Scan.uk) and they reassured me everything's fine.
I just wanted to point out that the (possibly) higher price I paid (exactly £1,139 + shipping) may justify buying from them (I also live in North Manchester, which is near the Scan Computers shop in Bolton).


----------



## Rob w

arrow0309 said:


> Hi,
> I couldn't agree more with your POV. I've also had an MSI card with a broken "seal" (an AMD 6950 Twin Frozr III) RMA'd, and they accepted it (the e-shop then sent the card to MSI; it took way longer than I expected, but in the end it worked), sending me back a like-new refurbished one.
> This time I called them (Scan.uk) and they reassured me everything's fine.
> I just wanted to point out that the (possibly) higher price I paid (exactly £1,139 + shipping) may justify buying from them (I also live in North Manchester, which is near the Scan Computers shop in Bolton).


I use Scan quite a lot and agree they are excellent with their after-sales; two mobos in the past, and they sent the replacement before I'd even sent the faulty one back, a straight swap.


----------



## zhrooms

carlhil2 said:


> Gigabye bios upted to my Zotac via exe.


 
I've seen it before and I don't trust that reading. It makes no sense at all that Gigabyte would release an official BIOS that takes their 260 W card up to 300 W, really stupid if true. Does it even change the boost clock? And can anyone with it actually confirm it does not throttle in benchmarks? I have not seen a shred of proof that it really is 366 W.
 


Hulk1988 said:


> Waiting for the 400W Bios now.
> 
> Someone has it already http://fs1.directupload.net/images/181001/vyaj7bz6.png
> 
> Its not my BIOS. Someone has it in a german Forum.
> 
> But yes. It is 400w max. Lets see over the next days if someone shares the "private" Bios from EVGA.


 
Please link the German forum thread; that image itself is not proof enough. It could simply show 300 W because of 2x8-Pin (150+150), and the "max 400 W" could be meaningless without an actual comparison against a different BIOS (EVGA's 130% / 338 W, for example).
 


Vapochilled said:


> So, both Gigabyte Gaming OC & MSI Duke OC are Reference PCB Correct ?
> If so, and if 400W bios or other BIOS are flash-able to this, i believe i should get the one with the best cooler correct? I'm asking because i got an email from the shop where i pre-ordered and they said: Do you want the gigabyte gaming oc or the duke oc ?


 
Yes, if one cools with water it doesn't matter, but on Air you should definitely carefully consider which card (cooler) to pick. (Including taking warranty into consideration, EVGA openly allow you to swap both thermal paste and cooler while keeping warranty intact, which makes them the obvious choice)

Between the two, the MSI is a 2.7-slot, 314mm-long card (massive cooler) with bigger fans (quieter); the Gigabyte is a 2.5-slot, 287mm card. The MSI also looks a lot better, I think, and features sick RGB lighting. So for me the MSI is the obvious choice (between those two).
 


arrow0309 said:


> I'm curios what pl will this bios have, I'm sure higher than the Duke's because the Sea Hawk X is stated 300W (vs 260W of the Duke).
> 
> But at least they let me break the stupid MSI tampering sticker to install the EK Vector (which I've also bought from them).


 
Very likely the same as the Gaming X Trio, since both cards are 300 W and that card has a 110% power target, so 330 W. It would not surprise me if MSI released a BIOS update like the other partners did, but even as-is, 330 W holds up reasonably well against EVGA's (already updated) 338 W.
 


Xeq54 said:


> That tampering sticker holds no legal ground anywhere in the EU. Even with the seal removed, the warranty is still intact and they have to accept it and then they have 30 days to resolve the situation. They can refuse the claim afterwards, but they will have to prove that whatever you did directly caused the issue, which is next to impossible unless you break something off the PCB or something.
> 
> I had two cards successfully replaced with the sticker broken in the past. They may try to refuse it in the shop, but you just need to let them know that you know your rights, they know them too.





arrow0309 said:


> I've also had one Msi with broken "seal" (Amd 6950 twin frozr 3) rma and they accepted it (the eshop than send the vga to MSI, it took way longer than I expected but in the end it worked) sending me back a refurbished like new one.


 
Here is actual proof,

https://forum-en.msi.com/index.php?topic=184134.0
https://forum-en.msi.com/index.php?topic=183996.0
https://forum-en.msi.com/index.php?topic=255827.0

MSI is one of the better partners, EVGA on top. Duke OC is probably the #2 GPU I would recommend to anyone, as long as it's not too long for their cases (314mm), #1 would be EVGA XC Ultra, but it's not perfect either.


----------



## Spiriva

NA users better buy your 2080ti now, soon you will have "EU prices" in US too.


----------



## arrow0309

zhrooms said:


> Seen it before and I don't trust that reading, it makes no sense at all Gigabyte would release official BIOS that takes their 260 W card up to 300 W, really stupid if true, does it even change the boost clock? And can anyone with it actually confirm it does not throttle in benchmarks? I have not seen a shred of proof that it really is 366 W.
> 
> 
> 
> Please link the german forum thread, that image itself is not proof enough, it could simply show 300 W because of 2x8-Pin (150+150) and the max 400 W could be meaningless without an actual comparison of a different BIOS (EVGA 130% 338 W for example).
> 
> 
> 
> Yes, if one cools with water it doesn't matter, but on Air you should definitely carefully consider which card (cooler) to pick. (Including taking warranty into consideration, EVGA openly allow you to swap both thermal paste and cooler while keeping warranty intact, which makes them the obvious choice)
> 
> Between the two, the MSI is 2.7 Slot - 314mm long (massive cooler) with bigger fans (quieter), the Gigabyte is a 2.5 slot 287mm card. The MSI also looks a lot better I think and features sick RGB lighting. So for me MSI is the obvious choice (between those two).
> 
> 
> 
> Very likely same as Gaming X Trio since both cards are 300 W, and that card has a 110% power target, so 330 W, but it would not surprise me if MSI released a BIOS update like the other partners, but 330 W is still somewhat high if you compare it to EVGA (already updated) at 338 W.
> 
> 
> 
> 
> Here is actual proof,
> 
> https://forum-en.msi.com/index.php?topic=184134.0
> https://forum-en.msi.com/index.php?topic=183996.0
> https://forum-en.msi.com/index.php?topic=255827.0
> 
> MSI is one of the better partners, EVGA on top. Duke OC is probably the #2 GPU I would recommend to anyone, as long as it's not too long for their cases (314mm), #1 would be EVGA XC Ultra, but it's not perfect either.


Cheers mate!  



Spiriva said:


> https://www.youtube.com/watch?v=rOxz5Tc-YRs
> 
> NA users better buy your 2080ti now, soon you will have "EU prices" in US too.


LMAO
Btw:
In countries like Italy the (EU) prices are already sky-high.


----------



## CallsignVega

I'm fine with higher prices. Hopefully some tech manufacturing will move out of China. What they have been doing (and us letting them get away with it in the past) is tantamount to highway robbery.


----------



## StullenAndi

zhrooms said:


> Yes, and there are no "reports". The only cards *right now* above 260 W are the *MSI Gaming X Trio* and *MSI Sea Hawk X*, both 300 W from factory with 110% Power Limit (300 W x 110% = 330 W). And why are they *300* W? Because they have a boost clock of *1755* MHz, almost *100 MHz faster* than *MSI Duke OC*/*Gigabyte Gaming OC* at *1665* MHz.


Please stop talking. It's 300 W and 380 W for the Galax.


----------



## zhrooms

StullenAndi said:


> It's 300 W and 380 W for the Galax.


 
No idea what you're talking about. What is 300 W? Which Galax is 380 W?

What is that image supposed to tell anyone?


----------



## StullenAndi

Use your brain and you will find out. Just stop talking about things you don't know about; that is the meaning of the picture.


----------



## asdkj1740

StullenAndi said:


> Use your brain then you will find out. Just stop talking about things you didn´t know about, that is the meaning of the picture.


I've been studying it in the HxD editor; would you mind sharing how to read those numbers in the text?


----------



## StullenAndi

Convert the little-endian value to big-endian and use Windows Calculator to get the decimal value. Then you will know that zhrooms is wrong.


----------



## asdkj1740

StullenAndi said:


> Convert Little Endian to Big Endian and use Windows Calculator to get DEC Value. Then you know that zhrooms is wrong.


Does any online converter work for reading the NVIDIA vBIOS?
https://www.scadacore.com/tools/programming-calculators/online-hex-converter/
Which one is the big-endian value you are referring to?


Let me google what you said, lol.
So those numbers in the HxD editor are little-endian and I need to convert them to big-endian?
And how do I get the decimal value with Windows Calculator?


This is really beyond my understanding.


----------



## StullenAndi

As for my picture, there are two values in the highlighted area. Let's pick one of them: E0 93 04. It's written in little-endian; google the term if you don't know what it means. To convert to decimal we just read the bytes from the end, giving the big-endian value 04 93 E0. If you enter this into Windows Calculator in hex mode and then press DEC, you will get 300000 (mW), i.e. 300 W, and not 260 W like zhrooms said.
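If the hex-editor workflow above feels opaque, the same conversion fits in a couple of lines of Python. This assumes, as the posts here do, that the power-limit fields in the Turing vBIOS are stored as little-endian 32-bit milliwatt values; that is an observation from this thread, not an official spec.

```python
def mw_bytes_to_watts(raw: bytes) -> float:
    """Decode a little-endian milliwatt field from a vBIOS dump into watts."""
    return int.from_bytes(raw, byteorder="little") / 1000.0

# The two values highlighted in the screenshot:
print(mw_bytes_to_watts(bytes([0xE0, 0x93, 0x04])))  # 300.0 -> 300 W default
print(mw_bytes_to_watts(bytes([0x60, 0xCC, 0x05])))  # 380.0 -> 380 W maximum
```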


----------



## EdgeCrusher86

300000 = 493E0 -> E0 93 04
380000 = 5CC60 -> 60 CC 05

I suppose we are talking about the Galax/KFA² HOF here, given that the FTW3 Ultra is set to a 374 W maximum, as EVGA_Jacob mentioned on Twitter.


----------



## zhrooms

StullenAndi said:


> Use your brain and you will find out. Just stop talking about things you don't know about; that is the meaning of the picture.





asdkj1740 said:


> Does any online converter work for reading an NVIDIA vBIOS?
> https://www.scadacore.com/tools/programming-calculators/online-hex-converter/
> Which one is the big endian you are referring to?
> 
> This is really beyond my understanding.


 
Ignore him, he's wrong (and rude). The value he wants you to see is 300000 / 380000, and it means nothing.

Same thing as when people claim the Gigabyte BIOS is showing 300 W; the Gigabyte representative himself was very clear that the released BIOS is *123*%, up from 108-109% (260 W x 109% = 283 W and *260* W x *123*% = *320* W, *same as Founders Edition*).

The goal was to offer the same power limit as Founders Edition, not 300/366 W (maximum). 

If you make these claims you better *back it up* with some *real world testing* or *validation*, not a hex string from an unofficial BIOS.


----------



## StullenAndi

zhrooms said:


> If you make these claims you better *back it up* with some *real world testing* or *validation*, not a hex string from an unofficial BIOS.


:facepalm: nothing more to say.


----------



## Vipeax

StullenAndi said:


> It´s 300W and 380W for the Galax.


I performed similar "research" for my RTX 2080 yesterday (see https://www.overclock.net/forum/69-...dia-rtx-2080-owner-s-club-6.html#post27645306).










The default power limit is higher on mine, but I can only add an extra 4% on top of it, while many cards have a higher maximum setting available to them.


----------



## EdgeCrusher86

Well, the new EVGA XC series PTs (260-338W) are along the same lines, so yeah, it has to be a 300-380W PT for this Galax/KFA².


----------



## Krzych04650

I have managed to find the MSI Sea Hawk X in stock in one of the Polish shops. They accidentally listed it without the ability to buy on Saturday, so I asked them when it was going to be available and they said they have only 2 units coming on Monday. So I was checking the website like every 15 minutes today and managed to get one. The delivery time is Wednesday 03.10, so basically instant. Not sure if that is delivery time or shipment time, but one day either way doesn't really make much difference. It has been so long since the last generation's release; I am excited to play around with a new card again.


----------



## zhrooms

EdgeCrusher86 said:


> Well, the new EVGA XC series PTs (260-338W) are along the same lines, so yeah, it has to be a 300-380W PT for this Galax/KFA².


 
Yes, but it still makes no sense for Gigabyte to release an official BIOS update for their 260 W reference-PCB card that puts the default power limit at 300 W; no other brand has done that, and for good reason.

If it is in fact correct that the Gigabyte Gaming OC BIOS is 366 W and the Galax is 380 W, then it would be easy to prove with a simple benchmark run, destroying the EVGA 338 W BIOS. The Galax one would be 7 W above the unreleased FTW3 and 20 W under the private 400 W BIOS.

Show me a benchmark comparison between them and I will believe it. I can't do it myself because of the insane delays, still won't have my card for another 3 weeks (Europe).


----------



## EdgeCrusher86

Yes, 300W for 100% PT would be reasonable for an Aorus Extreme card, not below.

E: No, thank you, sir. I'll keep my Pascal Ti SLI here. The performance boost is just way too low for me to justify TITAN-like prices, and furthermore it still comes with 11GB. But I am interested anyway in reading and watching about the new stuff.


----------



## KANG_VX

Galax Default 300 MAX 380Watt










Only hitting the VCore limit; no power limit hit anymore.


----------



## zhrooms

EdgeCrusher86 said:


> I'll keep Pascal Ti SLI here. The performance boost is just way too low for me to justify TITAN like prices and furthermore still coming with 11GB.


 
Well, it is 50% faster and costs 50% more, so? With DLSS, up to twice as fast.

Also, it being 11GB is irrelevant for most; no game comes close to that at 4K. It's only needed at 5K and above, and if you play anything at 5K the framerate will be low anyway.

9 out of 10 big releases coming out do not support AFR, so you'll be losing out on 50% higher performance in those titles.

If I were you I'd sell the 1080 Ti SLI as soon as possible, while they're still worth something, and get an RTX 2080 Ti. You'd lose 50% performance in AFR titles (1/10 games) but gain 50% performance in every other title (9/10 games).


----------



## Vipeax

KANG_VX said:


> Galax Default 300 MAX 380Watt
> <img>
> 
> Only hit VCore limit no power limit hit anymore.
> 
> <img>


Well done!


----------



## asdkj1740

KANG_VX said:


> Galax Default 300 MAX 380Watt
> [IMG]https://www.img.live/images/2018/10/01/galax2.gif[/IMG]
> 
> Only hit VCore limit no power limit hit anymore.
> 
> [IMG]https://www.img.live/images/2018/10/01/galax.gif[/IMG]


Which model/variant is this? Would you link me to the corresponding official web page?
And would you mind uploading this BIOS here for others who would like to try the 380W BIOS?


----------



## EdgeCrusher86

zhrooms said:


> Well, it is 50% faster and costs 50% more, so? With DLSS, up to twice as fast.
> 
> Also, it being 11GB is irrelevant for most; no game comes close to that at 4K. It's only needed at 5K and above, and if you play anything at 5K the framerate will be low anyway.
> 
> 9 out of 10 big releases coming out do not support AFR, so you'll be losing out on 50% higher performance in those titles.
> 
> If I were you I'd sell the 1080 Ti SLI as soon as possible, while they're still worth something, and get an RTX 2080 Ti. You'd lose 50% performance in AFR titles (1/10 games) but gain 50% performance in every other title (9/10 games).


It's around 30-40% for the 2080 Ti (4K) in most cases. DLSS is nice, but the number of interesting titles (for me) is pretty much non-existent. One has to wait and see how many nice titles are going to support it in the future.

I think that when it comes to AAA titles the number supporting mGPU is growing again (for example F1 2018 or CoD BO4) - it might also be due to the fact that NV is using NVLink now. This should improve high-res gaming plus G-SYNC (HDR) use a lot. And as early tests showed, even the scaling with 16 PCIe lanes in total is nice, whereas with Pascal SLI it was very bad (for example F1 2018 4K + TAA: around +70% for 2080 Ti SLI vs. +10% for 1080 Ti SLI using an i7-8086K). I'm also using custom profiles for SLI, so the situation is all in all not that bad. The only newer game without SLI support I play is AC: Origins, and since I am using native UWQHD the framerate isn't that bad... so no DSR in that case.

And there are games that use a little under 11GB at 4K - like FF XV. One title I played back then in 4K DSR was Wildlands maxed out - around 10.3-10.5GB peak. Luckily the new consoles are expected around 2020/21, so 11GB should be enough for this gen in 99% of cases. Anyway, it sucks that we have been stuck around a 12GB maximum for 5 years, especially since RTX 2080 Ti SLI would often make a good 5K60 setup, and I am sure titles like FF XV would consume at least 12GB then.

I'm going to wait for the next gen and decide then whether I am going to buy one or two cards. One old Ti will then be used in my second system. But if they continue to increase prices with performance, we should expect an RTX 3080 Ti for 2k, as it should be around 60-70% faster vs. the RTX 2080 Ti.


----------



## asdkj1740

How do we locate the corresponding little-endian value in the BIOS that refers to the power limit?
Like, how do we know "e0 93 04 00 60 cc 05 00" is the power limit, and how do we locate such numbers in the HxD editor?

Thank you, this thread is insanely informative!


----------



## asdkj1740

I would not be surprised if Galax did this even on their cheapest model (on RTX cards the cheapest Galax model has all SP caps and POS caps replaced with FP5K solid caps, and the MOSFETs seem to be changed too).

Galax has been doing this for years. For example, on the 1070, their cheapest-quality PCB (even worse than the FE) still got a very high power limit BIOS.
Galax is insane.


----------



## Vipeax

asdkj1740 said:


> how to locate the correspondent little endian on the bios that is refering to the power limit??
> like, how do we know "e0 93 04 00 60 cc 05 00" is the power limit and how to loacte such numbers on the hxd ediitor?
> 
> thank you this thread is insanely informative!


I can't speak for the others, but the way I did it was by cross-referencing the 'stock' and increased-power-limit BIOS versions from EVGA.
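Roughly, the idea looks like this in Python (a toy sketch; the byte strings below stand in for real ROM dumps, which you'd read with `open(..., "rb").read()`, and are not the actual EVGA files):

```python
# Cross-reference two vBIOS dumps: every offset where they differ is a
# candidate location for the value that was raised (e.g. the power limit).
def diff_offsets(a: bytes, b: bytes) -> list:
    """Return the offsets at which two equal-length dumps differ."""
    return [i for i in range(min(len(a), len(b))) if a[i] != b[i]]

# Stand-ins for open("stock.rom", "rb").read() etc.; not real EVGA ROMs.
stock = bytes([0x00, 0xE0, 0x93, 0x04, 0x00, 0xFF])
raised = bytes([0x00, 0x60, 0xCC, 0x05, 0x00, 0xFF])

for off in diff_offsets(stock, raised):
    print(f"offset 0x{off:02X}: {stock[off]:02X} -> {raised[off]:02X}")
```

With real dumps, the differing bytes cluster around the power-table entries, which you then decode as little-endian milliwatt values.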


----------



## vmanuelgm

zhrooms said:


> Yes, but it still makes no sense for Gigabyte to release an official BIOS update for their 260 W reference-PCB card that puts the default power limit at 300 W; no other brand has done that, and for good reason.
> 
> If it is in fact correct that the Gigabyte Gaming OC BIOS is 366 W and the Galax is 380 W, then it would be easy to prove with a simple benchmark run, destroying the EVGA 338 W BIOS. The Galax one would be 7 W above the unreleased FTW3 and 20 W under the private 400 W BIOS.
> 
> Show me a benchmark comparison between them and I will believe it. I can't do it myself because of the insane delays, still won't have my card for another 3 weeks (Europe).


I managed a better result with the Gigabyte BIOS on air than with the original one from Gainward, because of its power limit.

But to see these cards perform as they should, a power mod and watercooling are needed. I recorded this video of the Shadow of the Tomb Raider bench, with power mod (soldered), watercooling and the original BIOS from Gainward (115% = 300W).


----------



## asdkj1740

Vipeax said:


> I can't speak for the others, but the way I did it is by performing a cross-reference check between the 'stock' and increased power limit BIOS versions of EVGA.


Would you mind showing me the steps?
Really appreciated!


----------



## Vipeax

asdkj1740 said:


> would you mind showing me the steps??
> really appreciated!!





















The rest requires generic knowledge of hex, the way computers store data etc...
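For example, once you know a card's rated wattage you can simply search a dump for its little-endian milliwatt encoding. A toy Python sketch (the dump bytes here are made up for illustration, not from a real vBIOS):

```python
# Find every occurrence of a wattage (encoded as a 4-byte little-endian
# milliwatt value) inside a BIOS dump.
def find_mw(dump: bytes, watts: int) -> list:
    """Return all offsets where watts*1000 appears as 4-byte LE."""
    needle = (watts * 1000).to_bytes(4, "little")
    hits, start = [], 0
    while (i := dump.find(needle, start)) != -1:
        hits.append(i)
        start = i + 1
    return hits

# Toy dump containing the 300W and 380W values discussed in this thread.
dump = bytes.fromhex("00ff" + "e0930400" + "60cc0500" + "ff00")
print(find_mw(dump, 300))  # [2]
print(find_mw(dump, 380))  # [6]
```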


----------



## asdkj1740

Vipeax said:


> The rest requires generic knowledge of hex, the way computers store data etc...


well i know nothing about what you said lol.
thank you so much i will start learning this.


----------



## asdkj1740

http://www.galax.com/en/graphics-card/20-series.html
Which one is the 380W max one?


----------



## Artah

Anyone else order EKWB water blocks for this GPU and not received them yet? The ship date was supposed to be 9/7/2018...


----------



## GraphicsWhore

Artah said:


> Anyone else order ekwb water blocks for this GPU and have not gotten it yet? Ship date was supposed to be 9/7/2018...


Ordered when their pre-orders went up, but the shipping date was never 9/7/2018. The first batch went out on 9/21 or possibly 9/20, so I'm not sure where you're seeing the 9/7 date.

Anyway, mine was shipped 9/21 and I received 9/24.


----------



## zhrooms

asdkj1740 said:


> Would you mind uploading this bios here for others who would like to try the 380w bios?


 
Posted it in the original post next to the EVGA one at the bottom.
 


EdgeCrusher86 said:


> It's around 30-40% for the 2080 Ti (4K) in most cases. DLSS is nice, but the number of interesting titles (for me) is pretty much non-existent. One has to wait and see how many nice titles are going to support it in the future.


 
The 13 games & 2 benchmark tools in the original post beg to differ. I don't care in the least that some reviewers are using *questionable methods* (Games/Measuring/Memory/Overclock/Processor) and get a lower %; when one reviewer gets 30% in Metro while another gets 50%, the higher % is a lot more likely to be correct.

As soon as I get my card I will try to figure out why some reviewers got lower performance in a given game, but until then I have to rely on what we have.









 


EdgeCrusher86 said:


> And there are games that use a little under 11GB for 4K - like FF XV. One title I played back then in 4K DSR was *Wildlands* maxed out - around 10.3-10.5GB peak. Luckily the new consoles are expected to release around 2020/21 so 11GB should be enough for this gen in 99%. Anyway it sucks when we are around 12GB maximum for 5 years especially RTX 2080 Ti SLI would be often a good 5K60 setup and I am sure that titles like FF XV would consume at least 12GB then.


 
*Trash* titles don't count.









One game (FF XV), which is barely popular, really? Of course there are exceptions, but if 99 out of 100 games don't reach even 9GB, you can safely say it is completely fine with _just_ 11GB. I highly doubt consoles will have any effect on PC games next gen either, but yes, it is true that 5K would be playable with more VRAM. It technically already is, but you can't max settings.
 


EdgeCrusher86 said:


> But if they continue to increase the prices with performance we shall expect RTX 3080 Ti for 2k as it should be around 60-70% faster vs. RTX 2080 Ti.


 
The RTX 3080 should release in about a year for $700-800 MSRP and be ~30% faster than the RTX 2080 Ti (*if things go back to the way they were*). Big IF!
 


vmanuelgm said:


> I managed a better result with Gigabyte bios on air because of its power limit than with the original one from Gainward.
> 
> But to see the cards perform as they should, power mod and watercooling are needed.


 
Any BIOS would produce improved results if you're coming from stock, and you don't need water cooling; the goal is to eliminate any throttling at max OC caused by the power limit.
 


asdkj1740 said:


> Which one is the 380w max one?


 
None; the BIOS has no identification (as far as I know), and the only released card is the OC card by KFA2. But that one supposedly has a boost of 1620 MHz, which is slower than the Founders Edition, so it having a 300 W BIOS is very unlikely.

It seems to me it simply leaked directly from Galax (for whatever reason, and to whom, we don't know), and we don't know which card it was meant for. Either way, a few people here have tested it, so it is not dangerous. That's basically as much as we know about it so far. I'd like someone to validate it being 380 W by running UNIGINE Superposition in Extreme & 8K mode at max overclock (above 2100 MHz) and showing a detailed graph (Hardware Monitor) of the core clock from MSI Afterburner, then repeating with the EVGA 338 W one. If the EVGA throttles and the Galax one does not, it is proven legit.


----------



## ralle_h

KANG_VX said:


> Galax Default 300 MAX 380Watt
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Only hit VCore limit no power limit hit anymore.


Can you tell us which card exactly that was? I'm asking to check the compatibility of the vBIOS with my card (display connectivity and so on).

Thanks in advance


----------



## Artah

GraphicsWhore said:


> Ordered when their pre-orders went up but the shipping date was never 9/7/2018. The first batch went out on 9/21 or possibly 9/20, so not sure where you're seeing the 9/7 date.
> 
> Anyway, mine was shipped 9/21 and I received 9/24.


Oops, got the date wrong; that's when I ordered it. On my invoice it says Estimated Delivery Date: 19 September 2018, and I still don't have it.


----------



## StullenAndi

Vipeax said:


> I performed similar "research" for my RTX2080 yesterday.


Thanks for sharing.



asdkj1740 said:


> which model/variant is this, would you link me to the correspondent official web page?
> and would you mind uploading this bios here for others who would like to try the 380w bios?


If you go back a few pages in this thread you can find a zip file attached to one post, containing 3 different BIOS files. The Galax is one of them.


----------



## KANG_VX

ralle_h said:


> Can you tell us which card exactly that was? I'm asking to check the compatibility of the vBIOS with my card (display connectivity and so on).
> 
> Thanks in advance


Zotac AMP flashed with the Galax BIOS.


----------



## coolbho3k

zhrooms said:


> Ignore him, he's wrong (and rude). The value he wants you to see is 300000 / 380000, and it means nothing.
> 
> Same thing as when people claim the Gigabyte BIOS is showing 300 W; the Gigabyte representative himself was very clear that the released BIOS is *123*%, up from 108-109% (260 W x 109% = 283 W and *260* W x *123*% = *320* W, *same as Founders Edition*).
> 
> The goal was to offer the same power limit as Founders Edition, not 300/366 W (maximum).
> 
> If you make these claims you better *back it up* with some *real world testing* or *validation*, not a hex string from an unofficial BIOS.


Gigabyte rep just confirmed on here the Gaming OC BIOS is 300 / 366. https://www.overclock.net/forum/27646942-post7.html


----------



## zhrooms

coolbho3k said:


> Gigabyte rep just confirmed on here the Gaming OC BIOS is 300 / 366.


 
Yeah, saw that. You can check the original post for the wattage values of different cards now; lots are missing, but that is because most cards aren't released yet, anywhere. Still *weeks* away.


----------



## GraphicsWhore

If anyone flashes their XC Gaming with the Galax BIOS plz post results. I will when I get the card Saturday and block it up.


----------



## Xeq54

Well, I have flashed my card from the EVGA XC 338W BIOS to the Galax 380W BIOS. The BIOS indeed goes to that value; temps are higher, and the 3DMark score is a little bit higher: 7080 vs 7220 with the Galax BIOS pushed to the max.

But the card still throttles in Time Spy Extreme, especially in the second GPU test. Since this is a 380W BIOS, even with the 400W one we will still be limited. The scaling really is not great; 50W more for 50 more MHz is not anything special.

I do believe that with a shunt mod it will be better, but I can also see this card going up to and above 500W when unrestricted, and I wouldn't recommend doing that without a full-cover block. I have recorded peaks of 150% in GPU-Z when running Time Spy Extreme. With this BIOS, that is about a 460 watt peak, after which the frequency plummets instantly.


----------



## vmanuelgm

zhrooms said:


> Any BIOS would output improved results if you're coming from stock, and you don't need water cooling, the goal is to eliminate any throttling at max OC caused by the power limit.




Hey mate, what I am saying is that with both overclocked to the limit under air, Gigabyte's BIOS performs better since it permits more power.

And I can't agree with you about the water. If you power mod and leave your card under air, you'll have throttling for sure if you want to stay above 2000MHz continuously. The Ti gets really hot. I had my card under air (Gainward GS 3-fan design) for one day after modding it, and saw a feast of throttling and temperatures reaching 88 degrees with games like Crysis 3. Now that I have the block, my card won't go above 48-50 degrees, even in the worst conditions.


Have a look:








And then have a look under watercooling:


----------



## zhrooms

Xeq54 said:


> Well, I have flashed my card from the EVGA XC 338W BIOS to the Galax 380W BIOS. The BIOS indeed goes to that value; temps are higher, and the 3DMark score is a little bit higher: 7080 vs 7220 with the Galax BIOS pushed to the max.
> 
> But the card still throttles in Time Spy Extreme, especially in the second GPU test.


 
Could you show us the throttling in Time Spy Extreme? Like this:

In MSI Afterburner's Hardware Monitor you can add/remove/rename basically everything. Changing the graph limits (min/max) helps show off the core clock MHz; if you know you'll be running around 2100MHz, set 2000-2200.


----------



## Xeq54

Sure, here you go. Notice the drop in the second test. The GPU shoots up to max PT and beyond: 131% power usage, during which it throttles heavily, and there is no way to hold 2GHz since the voltage drops below 1V. I have seen peaks of up to 149% during other runs, which is around 460 watts with this BIOS. That's a peak now; with a shunt mod, that can become the norm. So I do not think shunt modding is safe on air, for any air-cooled card, not even the TRIO or EVGA.


----------



## bmg2

Xeq54 said:


> Well I have flashed my card from the Evga XC 338W bios to the Galax 380w bios. The bios indeed goes to that value, temps are higher, and 3d mark score is a little bit higher 7080 vs 7220 with the galax bios pushed to the max.
> 
> But the card still throttles in timespy extreme, especially in the second gpu test. Since this is a 380w bios, even with the 400w, we will still be limited. The scaling really is not great. 50w more for 50 more mhz is not anything special.
> 
> I do believe that with shunt mod it will be better, but I can also see this card going up to and above 500W when unrestricted, and I wouldn't recommend doing that without a fullcover block. I have recorded peaks of 150% in GPUz when running timespy extreme. With this bios, that is about 460 watt peak after which the frequency plummets instantly.


nvm, I see the bios on the first page.


----------



## BigMack70

Xeq54 said:


> Well I have flashed my card from the Evga XC 338W bios to the Galax 380w bios. The bios indeed goes to that value, temps are higher, and 3d mark score is a little bit higher 7080 vs 7220 with the galax bios pushed to the max.
> 
> But the card still throttles in timespy extreme, especially in the second gpu test. Since this is a 380w bios, even with the 400w, we will still be limited. The scaling really is not great. 50w more for 50 more mhz is not anything special.
> 
> I do believe that with shunt mod it will be better, but I can also see this card going up to and above 500W when unrestricted, and I wouldn't recommend doing that without a fullcover block. I have recorded peaks of 150% in GPUz when running timespy extreme. With this bios, that is about 460 watt peak after which the frequency plummets instantly.


Seems like there really isn't much potential to push these cards beyond their stock capabilities... Disappointing but not unexpected. 

If Nvidia ever gets around to actually shipping my pre-order I can begin testing, but all signs suggest I'm just running this with the stock cooler and BIOS.


----------



## trippinonprozac

Hey guys,

What is considered a solid, stable overclock on these cards? I am seeing a lot of mixed reports that people are struggling to hold 2GHz on air due to throttling. Presumably this is power limits being reached?

I have a Zotac AMP'd card and I am well in excess of 2100MHz core, with no throttling, on air and stock volts.

Did I get lucky in the silicon lottery this time around?


----------



## TahoeDust

trippinonprozac said:


> Hey guys,
> 
> What is considered a solid stable overclock on these cards? I am seeing a lot of mixed reports that people are struggling to hold 2ghz on air due to throttling. Presumably this is power limits being reached?
> 
> I have a Zotac amp'd card and I am well in excess of 2100mhz core with no throttling on air and stock volts?
> 
> Did I get lucky in the silicon lottery this time around?


Sounds like it. Lets see some benchmarks scores.


----------



## Addsome

trippinonprozac said:


> Hey guys,
> 
> What is considered a solid stable overclock on these cards? I am seeing a lot of mixed reports that people are struggling to hold 2ghz on air due to throttling. Presumably this is power limits being reached?
> 
> I have a Zotac amp'd card and I am well in excess of 2100mhz core with no throttling on air and stock volts?
> 
> Did I get lucky in the silicon lottery this time around?


My Zotac AMP just shipped from Best Buy and it should be here tomorrow. Hoping I get this lucky in the silicon lottery. I have a Gigabyte Windforce I picked up on Saturday but I haven't opened it. Guess it's time to return it.


----------



## trippinonprozac

Here are a couple I ran the other night. I havent tweaked much so there is probably more in this card.

Timespy extreme - https://www.3dmark.com/spy/4520294

Timespy - https://www.3dmark.com/spy/4549815


----------



## Addsome

trippinonprozac said:


> Here are a couple I ran the other night. I havent tweaked much so there is probably more in this card.
> 
> Timespy extreme - https://www.3dmark.com/spy/4520294
> 
> Timespy - https://www.3dmark.com/spy/4549815


Is your card stable at 2100mhz? Is it not being power limited and throttled down?


----------



## trippinonprozac

Addsome said:


> Is your card stable at 2100mhz? Is it not being power limited and throttled down?


Temps are around 55-60C when used for extended periods (custom fan curve). It's a little above 2100MHz for most use, boosting as high as about 2150MHz in some cases.


----------



## xer0h0ur

vmanuelgm said:


> Hey mate, I am saying both overclocked to the limit under air, Gigabyte's bios performs better since it permits more power.
> 
> And can't agree with you about the water. If you power mod and leave your card under air, u'll have throttling for sure if you want to be above 2000MHz continuosly. The Ti gets really hotty. I had my card under air (Gainward GS 3 fan design) for one day after modding it, and saw a feast of throttling and temperatures reaching 88 degrees with games like Crysis 3. Now I have the block, my card won't go above 48-50 degrees, even in the worst conditions.
> 
> 
> Have a look:
> 
> 
> https://youtu.be/kAkLNGgSQYA
> 
> 
> And then have a look under watercooling:
> 
> 
> https://youtu.be/eYTo8yoXxxI


Some people simply won't acknowledge the benefits of Turing under a waterblock. I've given up on it. You're just going to be banging your head on a wall trying to get the message across.


----------



## Addsome

trippinonprozac said:


> temps are around 55-60c when used for extended periods (custom fan curve). Its a little above 2100mhz for most use. Boosts as high as about 2150mhz in some cases.


What's your core overclock at? Seems like you won the silicon lottery, my friend. The only cards I've seen sustain above 2100 are shunt-modded with water cooling.


----------



## trippinonprozac

Addsome said:


> What's your core overclock at? Seems like you won the silicon lottery, my friend. The only cards I've seen sustain above 2100 are shunt-modded with water cooling.


It was at +122, but I found that +110 boosts to a minimum of 2100MHz on stock volts and is stable.

I have settled there. I was considering throwing a block on it but its near silent when gaming at these clocks so I think Ill leave it as is.


----------



## Addsome

trippinonprozac said:


> It was at +122 but I found that +110 boost to a minimum of 2100mhz on stock volts and is stable.
> 
> I have settled there. I was considering throwing a block on it but its near silent when gaming at these clocks so I think Ill leave it as is.


Any coil whine or weird fan noises? I've been hearing complaints on the Nvidia subreddit.


----------



## trippinonprozac

Addsome said:


> Any coil whine or weird fan noises? Been hearing complaints on the Nvidia subreddit.


None mate.

I have had many cards that suffer it but not a peep out of this one!


----------



## dVeLoPe

https://www.amazon.com/gp/product/B07GFTJCW5/

I have 1x ZOTAC GAMING GeForce RTX 2080 Ti AMP on hand right now if anyone is interested

https://www.overclock.net/forum/147...g-geforce-rtx-2080-ti-amp-zt-t20810d-10p.html


----------



## Not A Good Idea

Well finally received my ek block and back plates.


----------



## Glerox

No shipping yet... W.T.F. Nvidia


----------



## tconroy135

Glerox said:


> No shipping yet... W.T.F. Nvidia


^^ This a 1000 times over!


----------



## EQBoss

Glerox said:


> No shipping yet... W.T.F. Nvidia


They should start shipping tomorrow. I was already charged on Friday, so tomorrow is the last day of their promised 1-2 business days after charge. I hope. Could be fooling myself though; this launch is some low-tier BS. EK blocks came a week ago, lul.


----------



## vmanuelgm

xer0h0ur said:


> Some people simply won't acknowledge the benefits of Turing under a waterblock. I've given up on it. You're just going to be banging your head on a wall trying to get the message across.



My videos can't be more illustrative.





trippinonprozac said:


> It was at +122 but I found that +110 boost to a minimum of 2100mhz on stock volts and is stable.
> 
> I have settled there. I was considering throwing a block on it but its near silent when gaming at these clocks so I think Ill leave it as is.



Record a video so we can see those sustained clocks in games like Witcher 3, Crysis 3, etc., for more than 15 minutes.

I also got above 7400 on air at a lower clock.




Not A Good Idea said:


> Well finally received my ek block and back plates.


According to the EK manual the chokes don't need thermal pads. Don't forget to remove that blue plastic from the thermal pads.


----------



## snafua

Did anybody that pre-ordered an AIB card (e.g. not from the Nvidia store) get one delivered? I see some here went and picked them up at brick-and-mortar stores like Microcenter, but I wanted to confirm whether any cards went out from EVGA, Newegg, or another online store that was accepting pre-orders.


----------



## Addsome

snafua said:


> Did anybody that pre-ordered an AIB card (eg. Not from the Nvidia store) get one delivered? I see some here went and picked them up at Brick and Mortar stores like Microcenter, but wanted to confirm if any cards went out from EVGA, Newegg, or some other online store that was accepting pre-orders.


Best Buy Canada shipped my Zotac 2080 Ti AMP edition today and it should be delivered tomorrow. I bought it on the 27th.


----------



## Not A Good Idea

@Vmanuel Yeah, I realized the chokes didn't need them when I test-fitted the block. Nope, didn't forget; I just hate getting thermal pads stuck to my hands while I apply the MX-4, so I keep the backing on until the last minute.


----------



## arrow0309

Not A Good Idea said:


> Well finally received my ek block and back plates.


Hi, the EK block won't make contact with anything except the GPU, the GDDR6 memory and the VRM's MOSFETs.
So any other pads you've placed, like the original ones (they were even thicker, if you notice), are useless.


----------



## Xeq54

trippinonprozac said:


> It was at +122 but I found that +110 boost to a minimum of 2100mhz on stock volts and is stable.
> 
> I have settled there. I was considering throwing a block on it but its near silent when gaming at these clocks so I think Ill leave it as is.


Your Time Spy Extreme score is just about 100 points higher than mine, so I doubt you had 2100+ sustained during the whole run.

Please open Afterburner and leave it running during the Time Spy Extreme run. Then take a screenshot and show us the frequency graph.


----------



## kot0005

KANG_VX said:


> Galax Default 300 MAX 380Watt
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Only hit VCore limit no power limit hit anymore.



Is that the HoF or a KFA2? They are using Samsung modules on a non-HOF?? I highly doubt it. It must be the HOF.


----------



## xer0h0ur

EQBoss said:


> They should start shipping tomorrow. Was charged already Friday so tomorrow is last day from their promised 1-2 business days after charge. I hope. Could be fooling myself tho, this launch is some low tier bs. EK blocks came a week ago lul.


Yup, same.


----------



## LunaP

Ugh still no news on XSPC GPU blocks


----------



## arrow0309

kot0005 said:


> Is that HoF or kfa2 ? They are using samsung modules on non hof ?? I highly doubt it. It must be the hof.


First of all, HOF is not a brand, it's a model (Hall Of Fame edition), and I suppose both of them (Galax and KFA2) will release it eventually, at least the air-cooled edition.
But it's still early.
Second, that entry is only memory support (for the future) for Samsung GDDR6 (entry 1); I'm sure the second entry is for the Micron memory they're actually using.
And third, it seems the BIOS is from a reference Galax model (don't know, maybe from an SG Edition?).


----------



## cstkl1

MSI Trio.

Waiting for Vincent to release the waterblocks.

It boosts at 1960-2010 MHz. It hovers there because of the three thermal barriers at 50, 60 and 70°C; ambient temp lately is 34-36°C.

Superposition is all over the place: can be 9500 one run, 10k another.


----------



## arrow0309

cstkl1 said:


> MSI trio
> 
> waiting for vincent to release the waterblocks.
> 
> 1960-2010..
> 
> it hovers because of the 3 thermal barriers.. the 50, 60, 70.. ambient temp latel 34-36..
> 
> 
> 
> superposition is all over.. can be 9500 one run.. another at 10k..


Excuse me, who is gonna release blocks for the Trio?


----------



## kot0005

arrow0309 said:


> First of all HOF is not a brand, it's a model (Hall Of Fame edition) and I suppose both of them (Galax and KFA2) will release it eventually, at least the air cooled edition.
> But it's still early.
> Second, that's only a memory support (future) for the Samsung gddr6 (entry 1), I'm sure the second entry is for the Micron memory support (they're using actually).
> And third it seems the bios is of a reference Galax model (don't know, maybe from a SG Edition?).



Yeah, I know HOF is a model from Galax lol.

Ah right, yeah, no way they're using Samsung modules yet... would be nice if they did though. They would OC like crazy if they were anything like GDDR5 and 5X.


----------



## cstkl1

arrow0309 said:


> cstkl1 said:
> 
> 
> 
> MSI trio
> 
> waiting for vincent to release the waterblocks.
> 
> 1960-2010..
> 
> it hovers because of the 3 thermal barriers.. the 50, 60, 70.. ambient temp latel 34-36..
> 
> 
> 
> superposition is all over.. can be 9500 one run.. another at 10k..
> 
> 
> 
> Excuse me, who is gonna release blocks for the Trio?

bitspower


----------



## Rob w

I ordered the EKWB block, but it's looking like the 2080 Ti will arrive first; the block still says pre-order/processing.


----------



## themadhatterxxx

zhrooms said:


> Posted it in the original post next to the EVGA one at the bottom.
> 
> RTX 3080 should release in about a year for $700-800 MSRP and be ~30% faster than RTX 2080 Ti. (*If things go back to the way they were*) Big IF!




Bwahahahahahaha, tell us why NVIDIA would break their 2-year refresh cycle, compete against their own products, and undercut them on price.


----------



## domenic

snafua said:


> Did anybody that pre-ordered an AIB card (eg. Not from the Nvidia store) get one delivered? I see some here went and picked them up at Brick and Mortar stores like Microcenter, but wanted to confirm if any cards went out from EVGA, Newegg, or some other online store that was accepting pre-orders.


EVGA Ti XC ULTRA GAMING ordered on Friday from Amazon (sweet 24 months 0% financing), ETA for delivery is today. Upgrading from a Pascal Titan X purchased exactly 24 months ago when it was released. EK blocks on old and new.


----------



## GraphicsWhore

If you got a card from Amazon and it was delayed, check your ETA again. My EVGA XC, originally estimated for Oct 6th (Saturday), changed to Friday, and I just looked again a few minutes ago: the card has shipped and is being delivered tomorrow.

Of course I already have plans after work tomorrow. I could skip but I don't want to flake on friends and tell them it's for a video card, lol. Thursday I'm definitely foregoing my work Happy Hour though. Going to block this bad boy up and hopefully get everything up and running smoothly that night.


----------



## Foxrun

domenic said:


> EVGA Ti XC ULTRA GAMING ordered on Friday from Amazon (sweet 24 months 0% financing), ETA for delivery is today. Upgrading from a Pascal Titan X purchased exactly 24 months ago when it was released. EK blocks on old and new.


Lucky, mine still says delayed


----------



## Vipeax

domenic said:


> EVGA Ti XC ULTRA GAMING ordered on Friday from Amazon (sweet 24 months 0% financing), ETA for delivery is today. Upgrading from a Pascal Titan X purchased exactly 24 months ago when it was released. EK blocks on old and new.


Could you please use GPU-Z to dump the default BIOS and share it here?
https://www.techpowerup.com/forums/attachments/gpu-z-exe.107772/


----------



## nycgtr

My Amazon order just says to wait for an email. Anyway, I got the Trio yesterday from Newegg. Swapped out the XP SLI. It's a fast card but not mindblowing. Seems all the titles I've been playing were not friends with SLI, so I did get a boost in every game, in the 10-15 fps region.


----------



## Vipeax

nycgtr said:


> Anyway I Got the trio yesterday from newegg.


Please share that BIOS too.


----------



## dVeLoPe

nycgtr said:


> MY amazon order just says wait for an email. Anyway I Got the trio yesterday from newegg. Swapped out the xp sli. It's a fast card but not mindblowing. Seems all the titles I've been playing were not friends with sli, so I did get a boost in every game. In the 10-15 fps region.


To confirm, 1x 2080 Ti = 2x Titan XP performance?

I don't own a 4K monitor, only a 1440p one that I've never used; still rocking my 144Hz 1080p.

I can't decide if I should go SLI or a single 2080 Ti.


----------



## Vipeax

dVeLoPe said:


> I dont own a 4k monitor only a 1440p one and I never used it yet still rocking my 144hz 1080p
> 
> I cant decide if i should go SLi or singe 2080Ti


What? A $2500+ GPU setup running on a what, $200 monitor these days? Monitors last way longer than graphics cards, so to be honest anyone running a 2080Ti should be running ASUS' 4K 144Hz monitor or at the very minimum any of the ultra wide 1440p displays like the Alienware.


----------



## cstkl1

Rob w said:


> I ordered the EKwb block but it’s looking like the 2080ti will arrive first, block still says pre order/processing.


Bitspower already shipped out their blocks to many retailers...

Just waiting now for their base plate for the MSI Trio.


----------



## nycgtr

dVeLoPe said:


> to confirm 1x 2080Ti = 2x TitanXP performance?
> 
> I dont own a 4k monitor only a 1440p one and I never used it yet still rocking my 144hz 1080p
> 
> I cant decide if i should go SLi or singe 2080Ti


Not at all. My XPs ran at 2050-2080 MHz boost on average. I would say it's about a 20% increase over one XP. If you were to compare against XP SLI, that would heavily depend on scaling; for games that do scale, it's not even close. With that said, I would still say one 2080 Ti is better than two XPs in SLI overall, due to scaling issues in most games. However, the 2080 Ti is still a 3440 HFR card at best, or 4K 60 if you're into that. I have a 4K G-Sync 60 and a 144. I did buy enough cards to NVLink SLI, but I think I'm gonna pass on SLI moving forward. My second XP spent most of its life doing nothing, just like the X Pascal before it lol.


----------



## cstkl1

Does anybody here know how to test out DLSS in the FFXV benchmark?

Can't find the option.


----------



## dVeLoPe

Vipeax said:


> What? A $2500+ GPU setup running on a what, $200 monitor these days? Monitors last way longer than graphics cards, so to be honest anyone running a 2080Ti should be running ASUS' 4K 144Hz monitor or at the very minimum any of the ultra wide 1440p displays like the Alienware.


I don't like ultrawide, and I just bought a Dell 24" G-Sync 165Hz 1440p that I was gonna start using when my step-up 2080 Ti arrives.

Wasn't sure if one 2080 Ti could drive a 1440p 165Hz panel to its limits, but I think I will go with SLI next gen and stick to one for now.


----------



## Artah

nycgtr said:


> Not at all. My xps ran at 2050-2080 boost on avg. I would say its about a 20% increase from 1 xp. If you were to compare vs xp sli that would heavily depend on scaling for games that do scale it's not even close. With that said, I would still say 1 2080ti is better than 2xps in sli OVERALL due to scaling issues in most games. However, the 2080ti is still a 3440 HFR card at best or 4k 60 if your into that. I have a 4k gsync 60 and 144. I did buy enough cards to NVlink sli but I think imma pass on sli moving forward. My 2nd xp spent most of its life doing nothing just like the x pascal before it lol.


For the first time in a long time I'm only going to use one card at 4K; I hope it's enough for the games that I play. I needed the other x16 slot for VROC RAID 10. I really hope I don't need to buy two more cards for my wife's and my computers.



Rob w said:


> I ordered the EKwb block but it’s looking like the 2080ti will arrive first, block still says pre order/processing.


I actually got my blocks yesterday. I emailed support about it and they said they had issues with their shipping confirmation. They should have liquid cooled their servers with EKWB blocks to make sure they don't fail...



arrow0309 said:


> Hi, the EK block won't make contact to anything except the gpu, the gddr6 mem and the vrm's mosfets.
> So any other pad you've placed like those originals (they were even thicker if you notice) are useless.


This is bad. You're saying that even the pads that come with the EKWB block are useless because they're not thick enough? What if you stack the stock pads and the ones that come with the block?


----------



## exploiteddna

domenic said:


> EVGA Ti XC ULTRA GAMING ordered on Friday from Amazon (sweet 24 months 0% financing), ETA for delivery is today. Upgrading from a Pascal Titan X purchased exactly 24 months ago when it was released. EK blocks on old and new.


How did you know when Amazon had them available for pre-order? Were you using nowinstock.net or some other method? I missed the window for the EVGAs on Friday and now I fear it will be another few weeks of staring at this webpage until they're available to order again (and I miss it, again).


----------



## dVeLoPe

michaelrw said:


> how did you know when amazon had them available for preorder? were you using nowinstock.net or some other method? I missed the window for the EVGAs on friday and now i fear it will be another few weeks of staring at this webpage until theyre available to order again (and i miss it, again).


If you're interested in a Zotac AMP, I've made a thread about it in the Video section.


----------



## Addsome

Which would you guys keep: the Gigabyte 2080 Ti Windforce OC or the Zotac 2080 Ti AMP edition? And why?


----------



## domenic

Vipeax said:


> Could you please use GPU-Z to dump the default BIOS and share it here?
> https://www.techpowerup.com/forums/attachments/gpu-z-exe.107772/


Will do - the video card just arrived, but the EK block won't be here until tomorrow. Once I get everything going tomorrow I will dump the BIOS. I think an update exists on the EVGA website to bump the power limit to 130%. I will flash it first and then dump that.


----------



## Silent Scone

Need that 400W BIOS.


----------



## GraphicsWhore

Addsome said:


> What would you guys keep? Gigabyte 2080ti Windforce OC or Zotac 2080 ti Amp edition? And why?


See if you can find benchmarks comparing their temps and noise levels. Realistically they'll perform about the same but while you're unlikely to get any significant headroom in clocks from one over the other during OC, it's the silicon lottery as usual so I would run them through tests and see what's up. Plus one might have real bad coil whine or something.


----------



## Vipeax

domenic said:


> Will do - video card just arrived but EB block won't be here until tomorrow. Once I get everything going tomorrow I will dump the BIOS. I think an update exists on the EVGA website to bump to 130% power. I will flash it first and then dump that.


Excellent! And no, please share the default one before you flash. I already extracted the 130% one from the updater myself (see https://drive.google.com/drive/folders/10VgqTD1N5l81-0ZiUXQ2YcPPllRcBM3t), but I haven't gotten my hands on the default one yet.


----------



## Addsome

GraphicsWhore said:


> See if you can find benchmarks comparing their temps and noise levels. Realistically they'll perform about the same but while you're unlikely to get any significant headroom in clocks from one over the other during OC, it's the silicon lottery as usual so I would run them through tests and see what's up. Plus one might have real bad coil whine or something.


The Zotac AMP is looking like one of the coolest cards this generation based on reviews, and it also has a higher boost clock out of the box. I can't seem to find much on the Gigabyte Windforce. I've just been reading online that Zotac is usually a lower-quality brand and the fans may be an issue. Unfortunately, if I open the boxes I cannot return the GPUs unless I pay a 15% restocking fee, which I really do not want to do. Is Gigabyte a much better company than Zotac?


----------



## carlhil2

Silent Scone said:


> Need that 400W BIOS.


Nice score, you on water?


----------



## Silent Scone

carlhil2 said:


> Nice score, you on water?


Yes, EK block


----------



## themadhatterxxx

Addsome said:


> Zotac Amp is looking like one of the coolest cards this generation based on reviews. The Zotac Amp also has a higher boost clock out of the box. I cant seem to find much on the Gigabyte Windforce. I've just been reading online that Zotac is usually a lower quality brand and fans may be an issue. Unfortunately if I open the boxes, I cannot return the GPUs unless I pay a 15% restocking fee which I really do not want to do. Is Gigabyte a lot better of a company than Zotac?


They both suck; the heatsinks on both are thin... the Gigabyte Windforce heatsink is thinner.


----------



## GraphicsWhore

Silent Scone said:


> Need that 400W BIOS.


Nice work.

Which variant and BIOS is this? Also, any quirks putting the EK block on? Thanks.


----------



## xer0h0ur

Vipeax said:


> What? A $2500+ GPU setup running on a what, $200 monitor these days? Monitors last way longer than graphics cards, so to be honest anyone running a 2080Ti should be running ASUS' 4K 144Hz monitor or at the very minimum any of the ultra wide 1440p displays like the Alienware.


I just puked a little bit in my mouth. 1440p ultrawide has zero draw for me. Nothing. Zip. Nada. Hard pass. As for 4K I have no reason to do anything other than locking my framerate at 60 FPS and enjoying it. Perhaps in the future when 120/144Hz 4K monitors I like are less expensive I will consider upgrading. Being that new titles are so taxing at 4K I have no real motivation to upgrade yet.


----------



## ENTERPRISE

themadhatterxxx said:


> They both suck, the heatsink on both are thin...the gigabyte windforce heatsink is thinner.



Do you have any information to back up your claim of poor performance? It is one thing to say they have smaller heatsinks, which is true, but it's another to just say "they suck". Not very informative, and nothing to back up that claim.

Also, once I receive my MSI Seahawk X cards I will dump the BIOS for anyone interested. It's likely not anything up to 400W, but it's always good to have a library of BIOSes.

After some testing with the stock BIOS on the Seahawks, I will have to see how well they perform with a 400W BIOS. Just have to wait for them to come into stock.


----------



## Vipeax

xer0h0ur said:


> Perhaps in the future when 120/144Hz 4K monitors I like are less expensive I will consider upgrading.


Sounds like you don't run 2080Ti's in SLI...


----------



## GraphicsWhore

ENTERPRISE said:


> You have any information to back up your claim if poor performance ? It is one thing to say they have smaller heatsinks, which is true, but its another to just say ''They suck''. Not very informative and nothing to back up that claim.
> 
> 
> Also once I receive my MSI Seahawk X cards I will dump the BIOS for anyone interested, its likely not anthing up to 400Watts but always good to have a library of BIOS's.
> 
> 
> 
> After some testing with the stock BIOS on the SeaHawks I will have to see how well they perform with a 400Watt BIOS. Just have to wait for them to come into stock


Where is this 400W BIOS though? And whose is it?


----------



## xer0h0ur

Vipeax said:


> Sounds like you don't run 2080Ti's in SLI...


To each their own, but I will never again go through that multi-GPU aids. I've already experienced it with AMD and Nvidia. It's never worth the money, the hassle or the headaches. Having to depend on them for driver profiles, on developers implementing support, and even worse, on making that support remotely efficient, is not something I will tolerate again. Even then you're still hoping and praying that the game developer doesn't break it in subsequent game patches.

Until AMD or Nvidia comes up with some way for multiple GPUs to work without needing game developer support, I will forever stick to buying the single strongest video card I can afford at the time of purchase.


----------



## exploiteddna

Addsome said:


> What would you guys keep? Gigabyte 2080ti Windforce OC or Zotac 2080 ti Amp edition? And why?


both.. keep one, sell me the other xd


----------



## Silent Scone

I have a second card on order, but in truth I'm still very tempted to cancel. There are still many pacing issues. The interface might have changed, but the AFR performance hasn't. Some DX12 titles seem to be doing OK, but simply not enough to warrant the cost.


----------



## LunaP

xer0h0ur said:


> To each their own but I will never ever again go through that multi-gpu aids. I've already experienced it with AMD and Nvidia. Its is never worth the money, the hassle or the headaches. Having to depend on them for driver profiles, developers implementing support and even worse making it remotely efficient support is not something I will tolerate again. Even then you're still hoping and praying that the game developer doesn't break it on subsequent game patches.
> 
> Until AMD or Nvidia comes up with some way of multiple GPUs working without needing game developer support I will forever stick to buying the single strongest video card I can afford at the time of purchase.


I guess it's a case of user scenarios. I've played tons of games using SLI that never had any issues or required Nvidia Inspector. The issue has almost always been with games that didn't support it, where the developers were either lazy and assumed the GPU would auto-scale without their intervention, or decided it was someone else's problem (the games that didn't support it usually stated whether or not they'd bother writing SLI support).

MMOs it's great for, single-player games can definitely push higher, games that don't fully support it still get a bump, and for the rest there's always a quick tutorial on one or two settings in Inspector to update or create a custom profile and manually activate it.

For me personally, I always want to be able to max out my settings and enjoy smooth gameplay, but again, to each their own. From your "aids" comment I feel like there's little to back that up. Blame is on the developers, not the cards, if a game has zero support, or in rare cases on companies that go out of their way to disable the option. Most of your issues sound driver-related, which doesn't matter whether you run SLI or a single card, since there will always be bad or rushed drivers that break things. Saying something is never worth it discredits your argument, given how widely it's used. And given this is the fourth time (I think; you've been posting this since the days of the original Titan) you've pasted this exact same message in the past few years, it's pretty clear you haven't touched SLI in a while; some research can go a long way. Not saying everyone should SLI, but there's no reason to say it's 100% pointless and that there's zero reason for anyone to ever use it.


----------



## domenic

Vipeax said:


> Excellent! And no, please share the default one before you flash. I already extracted the 130% one from the updater myself (see https://drive.google.com/drive/folders/10VgqTD1N5l81-0ZiUXQ2YcPPllRcBM3t), but I haven't gotten my hands on the default one yet.


Will do. I disassembled the card this afternoon and am all ready for the water block to show up tomorrow. I hope the placement of the fittings on the block is the same, or I will need to re-shape my hard tubing.

Any recommendations or requests for a particular benchmark program after I get it all put together and overclocked? Man, I wish that modded BIOS would float into the ether so we could go above 130%...


----------



## Vipeax

xer0h0ur said:


> To each their own but I will never ever again go through that multi-gpu aids. I've already experienced it with AMD and Nvidia. Its is never worth the money, the hassle or the headaches.


Look, I agree with you. My point was about buying 2 2080Tis to put them in SLI for a 144Hz 1080p(!) monitor.


----------



## Vipeax

domenic said:


> Man I wish that modded BIOS would float into the either so we could go above 130%...


You can flash the 380W one already. Also, as I explained here: https://www.overclock.net/forum/69-...dia-rtx-2080-owner-s-club-8.html#post27648156 the percentages aren't useful to talk about. +30% with a base TDP of 180W would still be nothing, while 10% on a base TDP of 300W would put you at 330W.
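To put numbers on that point, here is a quick illustrative sketch (the function name is made up, not anything from a real tool): the slider percentage only matters relative to the base TDP the BIOS defines.

```python
# A power-limit slider percentage only means something relative to the
# base TDP set in the BIOS; the same percentage on different bases gives
# very different absolute wattages.

def power_limit_watts(base_tdp_w, slider_pct):
    # Absolute board power allowed at a given slider setting.
    return base_tdp_w * slider_pct / 100.0

# +30% on a 180W base is still less than +10% on a 300W base:
print(power_limit_watts(180, 130))  # 234.0
print(power_limit_watts(300, 110))  # 330.0
```

So comparing BIOSes by their percentage caps alone is meaningless; only the resulting wattage matters.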


----------



## Squeegie00

With the modded EVGA BIOS on my Palit 2080 Ti I get https://www.3dmark.com/spy/4564674

Will try the other BIOS tomorrow and hope for better luck.


----------



## xer0h0ur

LunaP said:


> I guess its a case of user scenarios, I've played tons of games using SLI that never had any issues or required nvidia inspector. The issue has almost always been for games that didn't support it that the developers were either lazy and assumed the gpu would auto scale without their intervention and coding to use it or that it was someone elses problem ( for those games that didn't though they normally stated whether or not they'd both writing support for SLI )
> 
> MMO's its great, single player games can definitely push higher, games that don't fully support it still get a bump, and for the limited variety there's always a quick tutorial on 1-2 areas in inspector to update/create a custom profile to manually activate it.
> 
> For me personally I always wanna be able to max out my settings and enjoy smooth gameplay, but again to each their own, just from your "aids" comment I feel like theres little to back that up. Blame is with the developers not the cards if the game has 0 support, or in rare cases that companies go out of their way to disable the option. Most of your issues sound driver related though, which doesn't matter if you run sli or single cards in that case since there will always be bad/rushed drivers that break other things. Saying something is never worth it discredits your argument given the vast usage of it. Though given this is the (4th time? I think since you've been pasting this since the days of the original Titan)? you've pasted this exact same message in the past few years its sort of clear that you haven't touched or been in the SLI game for a while, some research can go a long way. Not saying everyone should SLI, but 0 reason to say its 100% pointless and that there's 0 reason for anyone ever to use it.


It's cute that you were apparently annoyed enough by my comment to do a forum search to see how many times I have said this before. If you must know, I have used SLI on 690 and 780 Ti's, and Crossfire on a 295X2 as well as a 295X2 + 290X. Both were bad experiences. I've always had an affinity for dual-GPU video cards ever since the 3dfx Voodoo 5 5500, which I also owned. You're welcome to go through the hassle all you want. I've given up on it entirely until the aforementioned occurs.


----------



## FarisLeonhart

Thanks for the BIOS. Got it working with the Zotac 2080 Ti AMP and ran an overclock of +200 core / +970 memory. Temps under load with fans at 85% are between 50-61°C. The overclock worked well in Shadow of the Tomb Raider (4K Ultra), and I'm gonna test more games now. (Max core clock: 2160 MHz)

Bought mine from the Sofmap retailer in Japan.


----------



## Hulk1988

Flashed the 380W BIOS on my 2080 Ti Phoenix GS.

I am getting 5-8% more FPS in every game. Awesome!


----------



## sblantipodi

Is the RTX 2080 Ti Dual a solid card?

Can I buy it without problems?


----------



## EQBoss

I would never recommend SLI to pretty much any normal gamer; imo it's only for enthusiasts. In a case where the fastest card can't do what you want, SLI is basically your only option, if you can get it working. Hoping we see better methods than AFR in the future with NVLink. I've had 780 Ti SLI, 980 Ti SLI, and currently Titan X (Pascal) SLI, and when it works, it's really nice. Some games really perform well. I play at 4K 60, and in games like Tomb Raider, Witcher and Far Cry my fps is around 80-100, no stuttering or anything. Can't do that with one card. Modded Skyrim drops me below 60; with two, a stable 60. It's always good to get the best single GPU you can, and SLI only the best cards where a single GPU just can't net you more performance. It does require tweaking, and support is pretty low-tier these days, so it's not for everyone.


----------



## Addsome

FarisLeonhart said:


> Thanks for the bios, got it working with Zotac 2080 Ti AMP and had an overclock of +200 core /+970 memory clock. Temps with fans at 85% load are between 50-61c, overclock worked so well with (Shadow of the Tomb Raider | 4K Ultra) and gonna test more games now. (Max core clock: 2160)
> 
> Bought mine from Sofmap retailer at Japan.


Which BIOS did you flash? The Zotac AMPs seem really good this generation.



michaelrw said:


> both.. keep one, sell me the other xd


I would, but I'm currently located in Canada, unless we can work that out.


----------



## FarisLeonhart

Addsome said:


> Which bios did you flash? The Zotac AMPs seem really good this generation.


The Galax 380W BIOS. Sadly only SotTR accepts my current high overclock ;-; seems I need to lower it a little, especially on air. (Fan RPM can reach a max of 3500 at 100%.)


----------



## tconroy135

EQBoss said:


> I would never recommend SLI to pretty much any normal gamer, imo it's only for enthusiasts. In a case where the fastest card can't do what you want sli is basically your only option. If you can get it working. Hoping we see better methods then afr in the future with nvlink. Had 780 ti SLI 980 ti SLI and currently Titan X (Pascal) SLI and when it works, it's really nice. Some games really perform well. I play in 4k 60 and in games like tomb raider, witcher, far cry my fps is around 80-100 fps, no stuttering or anything. Can't do that with one card. Modded skyrim drops me below 60, with 2 stable 60. It's always good to get the best single gpu you can and sli only the best cards where a single gpu just can't net you more performance. It does require tweaking, and support is pretty low tier these days so it's not for everyone.


The only use for SLI is when you have money to blow and the game you want to play has proven SLI integration. I remember back in the day when I went SLI for Metro because I knew the performance scaling was decent and I wanted to use tessellation.

In the end, it is a solution when you want to do a tech demo in your own home. I could see that with these cards when it comes to ray tracing. Let's say Cyberpunk 2077 has excellent ray tracing with a huge performance cost, but 2080 Ti SLI will get you 4k60, then it will be a good option for enthusiasts.


----------



## R1amddude

SLI is for those who don't complain and just do it. The performance is there if you need it and you don't mind putting the time in to make sure it works correctly. If not, then stick to your single card and let the true SLI enthusiast gamers do their thing, and stop complaining; it's getting old at this point.


----------



## R1amddude

EQBoss said:


> I would never recommend SLI to pretty much any normal gamer, imo it's only for enthusiasts. In a case where the fastest card can't do what you want sli is basically your only option. If you can get it working. Hoping we see better methods then afr in the future with nvlink. Had 780 ti SLI 980 ti SLI and currently Titan X (Pascal) SLI and when it works, it's really nice. Some games really perform well. I play in 4k 60 and in games like tomb raider, witcher, far cry my fps is around 80-100 fps, no stuttering or anything. Can't do that with one card. Modded skyrim drops me below 60, with 2 stable 60. It's always good to get the best single gpu you can and sli only the best cards where a single gpu just can't net you more performance. It does require tweaking, and support is pretty low tier these days so it's not for everyone.



I could not agree more with everything you just said. My friend ended up in SLI with two 1080s because he found a good deal on a second one and wanted to turn up the settings in Mass Effect Andromeda.


----------



## Addsome

So I just saw a post from a Gigabyte rep talking about power limits here: http://forum.gigabyte.us/thread/5197/2080-gaming-higher-power-limit?page=6
He basically says, "2080 Ti's generally have 8+8, 75+150+150 = 375. That's why you see max around 373. Anyone with an 8+8 and running above 375W is very risky. You will be going over the rated power for either PCIe slot or an 8 pin power connector... Not a good idea." Does that mean the 380W Galax BIOS is not safe to run?
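The rep's 375W figure comes straight from the PCIe connector ratings. A tiny sketch of that budget math (names here are just for illustration): up to 75W from the slot, 75W per 6-pin, 150W per 8-pin.

```python
# Rated board-power budget from the PCIe spec figures the rep cites:
# 75W from the slot, plus 75W per 6-pin and 150W per 8-pin connector.
SLOT_W = 75
CONNECTOR_W = {"6pin": 75, "8pin": 150}

def rated_budget_watts(connectors):
    # Sum the slot allowance and each auxiliary connector's rating.
    return SLOT_W + sum(CONNECTOR_W[c] for c in connectors)

print(rated_budget_watts(["8pin", "8pin"]))  # 375, hence the ~373W cap
```

Cards that ship with higher stock power limits (like some HOF models) add a third connector for exactly this reason.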


----------



## coolbho3k

Addsome said:


> So just saw a post from a gigabyte rep talking about power limits here: http://forum.gigabyte.us/thread/5197/2080-gaming-higher-power-limit?page=6
> He basically says, "2080 Ti's generally have 8+8, 75+150+150 = 375. That's why you see max around 373. Anyone with an 8+8 and running above 375W is very risky. You will be going over the rated power for either PCIe slot or an 8 pin power connector... Not a good idea." Does that mean the 380W Galax bios is not safe to run?


I thought you could pull much more than 150W from a single 8-pin connector on a quality PSU; it's only risky with a low-wattage, poor-quality PSU. That said, I doubt a manufacturer will want to release an official BIOS that exceeds this by a lot, because they don't want to pay up when you burn your house down.


----------



## dentnu

Addsome said:


> So just saw a post from a gigabyte rep talking about power limits here: http://forum.gigabyte.us/thread/5197/2080-gaming-higher-power-limit?page=6
> He basically says, "2080 Ti's generally have 8+8, 75+150+150 = 375. That's why you see max around 373. Anyone with an 8+8 and running above 375W is very risky. You will be going over the rated power for either PCIe slot or an 8 pin power connector... Not a good idea." Does that mean the 380W Galax bios is not safe to run?


Honestly, no BIOS other than the one made for your card is safe to run. The minute you flash some other card's BIOS, you do so at your own risk. I would not run the 380W Galax BIOS on any card that is on air. The Galax BIOS is for a custom PCB that is made to support that many watts. I have no idea how many watts the reference board can take, so if you are not on water, be careful.


----------



## xer0h0ur

So...there is that


----------



## Addsome

dentnu said:


> Honestly, no bios other than the one that is for your card is safe to run. The minute you flash the bios with some other cards bios you do so at your own risk. I would not run the 380 watts Galax bios on any card that is on air. The Galax bios is for a custom PCB board that is made to support that many watts. I have no idea how many watts the reference board can take but if you are not on water be careful.


Are you sure the Galax BIOS is for a custom PCB? Which Galax card is it for?


----------



## Asmodian

coolbho3k said:


> I thought you can pull much more than 150W from a single 8 pin connector on a quality PSU. It's just risky if you have a low wattage, poor quality PSU. That said I doubt a manufacturer will want to release an official BIOS that exceeds this by a lot because they don't want to pay up when you burn your house down.


Yes, Gigabyte is not going to say it is OK but up to 300W is probably fine with a good power supply (do not use daisy chained connectors).

This is why I think paralleling another 0.005 ohm shunt on top of the ones for the PCI-E connectors, to double the effective limit, is a good idea. A 0.003 or even 0.002 ohm shunt would open it up even more, but I would rather throttle than go above 300W per connector. Nothing should burst into flames if someone starts FurMark or something. 

I have melted an 8-pin connector when encoding video with x264 on a seriously overclocked i7-5960X, so it is possible. I ripped the smoldering plastic off the motherboard with pliers and after I got a power supply with both 8-pin and 4-pin aux motherboard power it has been fine, though I have to be very careful when plugging in the 8-pin now. I would prefer not to do that to my video card.
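The shunt-stacking math above can be sketched quickly (assuming a 5 mOhm stock shunt; actual values vary by board). The card senses current from the voltage drop across the shunt, so paralleling a second shunt lowers the sensed resistance, the card under-reads its draw, and the BIOS limit effectively scales by R_stock / R_parallel:

```python
# Illustrative shunt-mod math; R_STOCK = 0.005 ohm is an assumed common value.
def parallel(r1: float, r2: float) -> float:
    """Combined resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

def effective_limit_w(bios_limit_w: float, r_stock: float, r_added: float) -> float:
    """Real power drawn before the card believes it has hit the BIOS limit."""
    return bios_limit_w * (r_stock / parallel(r_stock, r_added))

R_STOCK = 0.005  # ohms
print(effective_limit_w(330, R_STOCK, 0.005))  # equal shunt halves R: limit doubles to 660.0
print(effective_limit_w(330, R_STOCK, 0.003))  # more aggressive
print(effective_limit_w(330, R_STOCK, 0.002))  # even more so
```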


----------



## Addsome

Asmodian said:


> Yes, Gigabyte is not going to say it is OK but up to 300W is probably fine with a good power supply (do not use daisy chained connectors).
> 
> This is why I think another 0.005 ohm shunt on top of the ones for the PCI-E lanes to double the output is a good idea. A 0.003 or even 0.002 ohm shunt would open it up even more but I would rather throttle than go above 300W per connector. Nothing should burst into flames if someone starts furmark or something.
> 
> I have melted an 8-pin connector when encoding video with x264 on a seriously overclocked i7-5960X, so it is possible. I ripped the smoldering plastic off the motherboard with pliers and after I got a power supply with both 8-pin and 4-pin aux motherboard power it has been fine, though I have to be very careful when plugging in the 8-pin now. I would prefer not to do that to my video card.


I know the minimum recommended power supply for a 2080 Ti is 650W, which is what I have right now; mine is also Gold rated, but it is 6 years old. Do you think it would be safe running the 380W BIOS on it?


----------



## gavros777

Hello, is EVGA's 2080 Ti FTW3 card the best card for overclocking?

What 2080 Ti do you guys recommend buying?

Also, EVGA's old GPU fans were screaming at 100% speed; are its new fans as quiet as the Accelero Xtreme 3/4's at 100% speed?


----------



## Porter_

count me in! ordered the ZOTAC AMP 2080 Ti on newegg today, should be delivered on Friday.


----------



## EQBoss

Addsome said:


> So just saw a post from a gigabyte rep talking about power limits here: http://forum.gigabyte.us/thread/5197/2080-gaming-higher-power-limit?page=6
> He basically says, "2080 Ti's generally have 8+8, 75+150+150 = 375. That's why you see max around 373. Anyone with an 8+8 and running above 375W is very risky. You will be going over the rated power for either PCIe slot or an 8 pin power connector... Not a good idea." Does that mean the 380W Galax bios is not safe to run?


Cables can accept more power than what they are rated for; 5 watts over the limit is nothing to worry about. This is assuming you have good cooling to deal with 380 watts and a good PSU to deliver it. The more power you run through cables, the hotter they get; running too much can melt the cables and indeed cause a fire. I personally wouldn't run anything higher than 400 W through two 8-pins for safety, but you can probably do 450 without issues.


----------



## Addsome

EQBoss said:


> Cables can accept more power then what they are rated for. 5 watts over the limit is nothing to worry about. This is assuming you have good cooling to deal with 380 watts and a good psu to deliver 380 watts. More power you run through cables the hotter the cables get. Running too much can melt the cables and indeed cause a fire. I personally wouldn't run anything higher then 400 through 2 8 pins for safety, but you can probably do 450 without issues.


Would a 6-year-old 650W Gold-rated PSU be enough to run the 380W BIOS?


----------



## Baasha

Anyone running these in SLI/NVLink yet? How's the experience thus far?


----------



## Hulk1988

I have NVLink with 2x 2080 Ti. Nothing special, and the same as SLI. It gets very hot, and I would recommend doing this with water or two Hybrid versions.


----------



## Vlada011

One guy said he sold his GTX 1080 Ti, added $700, and bought an RTX 2080 Ti... for maybe 25%.
If you play games at 2x AA on a GTX 1080 Ti, then swap in an RTX 2080 Ti and increase AA to 8x, you will have slightly lower or the same fps as before; that's the difference.

I can't understand spending i9-9900K + Maximus Hero money on the difference between a GTX 1080 Ti and an RTX 2080 Ti.

I recommend that gamers who can't keep up with NVIDIA's prices, and even those who can but don't want to support pricing that will keep them from enjoying a high-end GPU for 3 to 5 years, delay their upgrade and use the same card 18-24 months longer. A lot of money will be left over for other hardware, and nothing important will change in gaming within 18 months.


----------



## Rob w

Well, now I don't know what's happening. NV says order processing! EK says order processing! The delivery company says an order is being delivered today! Who the heck from? Talk about a cliffhanger, lol.


----------



## Silent Scone

R1amddude said:


> SLI is for those who dont complain and just do it. The performance is there if you need it, and dont mind putting the time in to make sure it works correctly. If not, then stick to your single card and let the true SLI elite enthusiast gamers do their thing, and stop complaining, its getting old at this point.


As humble as that may be (or not be), it's simply not true. If games do not have native support, then modifying SLI bits more often than not hurts performance or introduces visual anomalies. There is nothing elitist about investing in something that is fundamentally under-supported and under-developed. 

Not to play the "I've been building PCs for 30 years" card, but I'm very familiar with SLI and have been using it since NVIDIA's implementation launched on the 6800.


----------



## StullenAndi

dentnu said:


> .I would not run the 380 watts Galax bios on any card that is on air. The Galax bios is for a custom PCB board that is made to support that many watts. I have no idea how many watts the reference board can take but if you are not on water be careful.


That's not true; the BIOS is from a card with the reference design. Take a look at the picture in the following link.

https://www.tomshw.de/2018/10/02/wa...kuehlungen-nicht-die-beste-idee-ist-igorslab/

And for everyone that is scared to flash the Galax BIOS, here is version F3 with 300/366W from the Gigabyte RTX 2080 Ti Gaming OC:

https://filehorst.de/d/chilFCJC


----------



## cstkl1

Got DLSS working for the FFXV bench tool.

Apparently it's two .bat files provided by NVIDIA for press, both for 4K custom: one for TAA, another for DLSS. 

On another note, the FFXV benchmark tool seems to have been updated since the February release; the file size is smaller.


----------



## carlhil2

StullenAndi said:


> Thats not true, bios is from card with ref design. Take a look at the picture in the following link.
> 
> https://www.tomshw.de/2018/10/02/wa...kuehlungen-nicht-die-beste-idee-ist-igorslab/
> 
> And for everyone that is scary to flash the galax bios. Here is Version F3 with 300/366W from Gigabyte RTX2080Ti Gaming OC
> 
> https://filehorst.de/d/chilFCJC


The Gigabyte BIOS is the one that I have been using with my air-cooled Zotac; my FC water block arrives tomorrow... https://www.3dmark.com/spy/4562893 2100 MHz actually ends up running at 2040 because of temps.


----------



## Kutalion

arrow0309 said:


> Hi, the EK block won't make contact to anything except the gpu, the gddr6 mem and the vrm's mosfets.
> So any other pad you've placed like those originals (they were even thicker if you notice) are useless.


Hello, 
can you please make a support ticket on https://www.ekwb.com/create-a-support-ticket/
EK would like to know what's wrong there. Also would be great if you'd send me a ticket number so we can get RnD and head of support on it as well.


----------



## ENTERPRISE

Seems to be a good amount of stock shortages over here in the UK; some have an expected restock ETA of 30th November. Crazy.


----------



## GraphicsWhore

Kutalion said:


> Hello,
> can you please make a support ticket on https://www.ekwb.com/create-a-support-ticket/
> EK would like to know what's wrong there. Also would be great if you'd send me a ticket number so we can get RnD and head of support on it as well.


What's the deal here? Are you saying someone made a whoopsie on the design and you released a block that doesn't make contact with what it should? I have the EK block and backplate and am supposed to block my EVGA XC tomorrow.


----------



## zhrooms

StullenAndi said:


> Tom's Hardware (DE) igorsLAB testing KFA2/Galax RTX 2080 Ti OC


 
*Very important article*, but difficult to read (because of the translation).

Reference PCB with *320* W Power Limit (FE BIOS) = *21.5* Amp with peaks up to *30* (On the PCIe 12v). 
And an additional *5.0* Amp from the PEG Slot (*Max 6.25* / 75 W). With peaks of up to *7.5*. 

Watts, Avg. Total - *330* W. 
Watts, Peak. Total - *440-460* W.

Reference PCB with *380* W Power Limit (Galax BIOS) = *24.0* Amp with peaks up to *32* (On the PCIe 12v). 
And an additional *6.0* Amp from the PEG Slot (*Max 6.25* / 75 W). With peaks of up to *9.0*!

Watts, Avg. Total - *375* W. 
Watts, Peak. Total - *480-500!* W.

*Conclusion:* This is safe; an average of 375 W total, 32 Amp on the PCIe 12v and 6 Amp on the PEG slot. As long as you have a 600W PSU capable of 50 Amp on the +12V (single rail) you should be fine.
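For context, the wattage figures above follow from P = V × I on the 12 V rails. A quick sanity check (amp values are those quoted from the article; the rail is assumed to be exactly 12 V):

```python
RAIL_V = 12.0  # nominal PCIe 12 V rail

def watts(amps: float) -> float:
    return RAIL_V * amps

# FE BIOS (320 W limit), average draw
print(watts(21.5) + watts(5.0))   # 258 W over the 8-pins + 60 W from the slot = 318.0

# Galax BIOS (380 W limit), worst-case peaks
print(watts(32.0) + watts(9.0))   # 384 W + 108 W = 492.0, in line with the 480-500 W peaks
```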
_________________________________

I have a question for you by the way,

Did the KFA2 card come shipped with the 126% 300/380W BIOS? 

_"For this I have the BIOS of the KFA2 / Galax RTX 2080 Ti OC secured and initially the firmware of the Founders Edition."_

By saying "secured" he means saved it? And by "initially" he means he flashed (initiated) it? He got the card with the 380 W BIOS and saved it, then flashed the FE BIOS to it to compare against the actual FE card, then he flashed back to the 380 W?

If it did, then we know the source of the GALAX BIOS that was uploaded here, and it makes the KFA2/Galax cards the best cards you can buy (if you do not intend to switch BIOS), with an even higher power limit than the most expensive custom PCB cards (so far).


----------



## Krzych04650




----------



## zhrooms

Krzych04650 said:


> (Box of MSI RTX 2080 Ti Sea Hawk X)


 
Sick, dude. Show us the GPU-Z BIOS window please; the MSI Gaming X Trio, which is also 300W stock, only had a 110% power limit (for reviewers a few weeks back). So I am assuming your card will have the same (300 W × 110% = 330 W), but who knows.


----------



## StullenAndi

zhrooms said:


> _"For this I have the BIOS of the KFA2 / Galax RTX 2080 Ti OC secured and initially the firmware of the Founders Edition."_
> 
> By saying "secured" he means saved it? And by "initially" he means he flashed (initiated) it? He got the card with the 380 W BIOS and saved it, then flashed the FE BIOS to it to compare against the actual FE card, then he flashed back to the 380 W?
> 
> If it did, then we know the source of the GALAX BIOS that was uploaded here, and it makes the KFA2/Galax cards the best cards you can buy (if you do not intend to switch BIOS), even higher power limit than the the most expensive custom PCB cards (so far).


I can't speak for your translation, but what the author did was save the stock KFA2 BIOS with 380W and flash a Founders Edition BIOS onto the KFA2 card. So he has the FE BIOS on both cards (FE and KFA2) to compare chip quality. And yes, the 380W BIOS was delivered with the KFA2 card.



Krzych04650 said:


> Sea Hawk Picture


Please share the stock bios file.


----------



## cstkl1

on another note.. should be [@Kutalion]



yippy.. block for MSI Gaming X trio.


----------



## profundido

Silent Scone said:


> Need that 400W BIOS.


Awesome ! This is where things start to get interesting for me personally because I'm used to seeing 17K something score with my Titan X (Pascal) SLI and I'm looking to replace that with 1 card only in the future.

May I ask what exact card and conditions you used to achieve this result ?


----------



## zhrooms

StullenAndi said:


> *Author backed up the stock KFA2 Bios with 380W* and flashed a Founders Edtion Bios on the KFA2 Card.


 
Yes, that's what it looked like. I'm still not entirely convinced he did not just download the GALAX BIOS, but it seems plausible the card actually ships with that BIOS stock; the MSI Sea Hawk X hybrid reference card is 300 W (2x 8-pin), so it is not impossible for GALAX to push a 300/380 W BIOS on their reference (air-cooled) card.

The KFA2/Galax OC should find its way out to people soon so they can confirm; even if I'm 90% sure right now, it ain't 100%.



Krzych04650 said:


> (Box of MSI RTX 2080 Ti Sea Hawk X)





StullenAndi said:


> Please share the stock bios file.


 
Agreed; the BIOS is supposedly worse than EVGA's, so it's practically worthless, but might as well upload it, why not.


----------



## nycgtr

Well, did some testing on the Trio last night with 110% power (330W). With the fan at 100% I got a max temp of 63°C and could do +115 stable on the core; +120 was bench-stable but not game-stable. It boosts as high as 2175 but on average sat at 2130-2150.


----------



## ilmazzo

xer0h0ur said:


> So...there is that


I heard some heads falling on the floor


----------



## nycgtr

cstkl1 said:


> on another note.. should be [@Kutalion]
> 
> 
> 
> yippy.. block for MSI Gaming X trio.


Or just get the Bitspower one coming out, if recent experience is anything to go by.

#1 you won't lose the awesome backplate on the Trio
#2 it will actually make proper contact lol
#3 it will cost slightly less or the same.


----------



## Xeq54

nycgtr said:


> #2 it will actually make proper contact lol


So does the EK block. It makes contact with VRM, VRAM and the die. You do not need anything else.


----------



## nycgtr

Xeq54 said:


> So does the EK block. It makes contact with VRM, VRAM and the die. You do not need anything else.


I've bought enough EK Kool-Aid to stay away. Their Gigabyte Extreme block for the 1080 Ti didn't even contact the VRM, and I had that block and card among others. So I would rather stick to companies that give me a better product for my money.


----------



## Addsome

zhrooms said:


> *Very important article*, but difficult to read (because of the translation).
> 
> Reference PCB with *320* W Power Limit (FE BIOS) = *21.5* Amp with peaks up to *30* (On the PCIe 12v).
> And an additional *5.0* Amp from the PEG Slot (*Max 6.25* / 75 W). With peaks of up to *7.5*.
> 
> Watts, Avg. Total - *330* W.
> Watts, Peak. Total - *440-460* W.
> 
> Reference PCB with *380* W Power Limit (Galax BIOS) = *24.0* Amp with peaks up to *32* (On the PCIe 12v).
> And an additional *6.0* Amp from the PEG Slot (*Max 6.25* / 75 W). With peaks of up to *9.0*!
> 
> Watts, Avg. Total - *375* W.
> Watts, Peak. Total - *480-500!* W.
> 
> *Conclusion* This is safe, average of 375 W total, 32 Amp PCIe 12v and 6 Amp PEG Slot. As long as you have a 600W PSU capable of 50 Amp on the +12V (single) you should be fine.
> _________________________________
> 
> I have a question for you by the way,
> 
> Did the KFA2 card come shipped with the 126% 300/380W BIOS?
> 
> _"For this I have the BIOS of the KFA2 / Galax RTX 2080 Ti OC secured and initially the firmware of the Founders Edition."_
> 
> By saying "secured" he means saved it? And by "initially" he means he flashed (initiated) it? He got the card with the 380 W BIOS and saved it, then flashed the FE BIOS to it to compare against the actual FE card, then he flashed back to the 380 W?
> 
> If it did, then we know the source of the GALAX BIOS that was uploaded here, and it makes the KFA2/Galax cards the best cards you can buy (if you do not intend to switch BIOS), even higher power limit than the the most expensive custom PCB cards (so far).


I currently have a 6-year-old Corsair TX750M. Will it be good enough, or do you think it has degraded too much to run the 380W BIOS?


----------



## BigMack70

Finally... FedEx says Friday 10/5 arrival.


----------



## Maintenance Bot

BigMack70 said:


> Finally... FedEx says Friday 10/5 arrival.


About time; at least we don't have to wait through the weekend.


----------



## zhrooms

Addsome said:


> I currently have a 6 year old Corsair TX750M. Will it be good enough or do you think it has degraded too much to run the 380W bios?


 
Looks fine to me; it's a 750W 80 Plus Bronze PSU with dual 8-pin, 82% efficient at 750W (roughly 900W from the wall) and 55 Amps on the +12V (62 on the box). 

_"At 82.3% efficiency, the unit is extremely close to failing the Bronze rating, but it does hang in there."_
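To put that efficiency figure in context, wall draw is simply DC output divided by efficiency (a rough estimate; real efficiency varies with load):

```python
def wall_draw_w(dc_output_w: float, efficiency: float) -> float:
    """AC power drawn from the wall for a given DC output."""
    return dc_output_w / efficiency

print(round(wall_draw_w(750, 0.823)))  # ~911 W from the wall at full rated load
```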


----------



## profundido

Addsome said:


> I currently have a 6 year old Corsair TX750M. Will it be good enough or do you think it has degraded too much to run the 380W bios?


It's borderline. You can try, but with an overclocked CPU, during max-load peaks in the middle of a game or benchmark your computer might just instantly power itself off. That's about the worst that could happen.


----------



## xer0h0ur

THANK YOU JESUS. FINALLY!


----------



## tconroy135

If thermals are kept in check I wonder if the 380w Bios will cause any degradation.


----------



## nycgtr

tconroy135 said:


> If thermals are kept in check I wonder if the 380w Bios will cause any degradation.


Tbh I wouldn't bother; it seems to be a lot of hassle to be flashing another card's BIOS (not sure if anyone has a BIOS switch on their card) for such cruddy gains.


----------



## tconroy135

xer0h0ur said:


> THANK YOU JESUS. FINALLY!


What was the time of your order?


----------



## xer0h0ur

tconroy135 said:


> What was the time of your order?


8/20/18 @ 1:06 PM central US time.

Fedex tracking number shows delivery date being on Friday.


----------



## tconroy135

xer0h0ur said:


> 8/20/18 @ 1:06 PM central US time.
> 
> Fedex tracking number shows delivery date being on Friday.


Cool, mine 'shipped' as well; more like they notified FedEx it was coming, but it says Friday delivery, so fingers crossed.


----------



## Krzych04650

Very thoughtful, MSI


----------



## xer0h0ur

tconroy135 said:


> Cool mine 'shipped' as well, more like they notified FedEx it was coming, but says Friday delivery, so fingers crossed.


Major corporations don't go to logistics companies. Logistics companies go to them to make the pickups.


----------



## arrow0309

Artah said:


> ... Cut ...
> 
> This is bad, you're saying that even the pads that come with ekwb are useless because it's not thick enough? How about if you stack the stock and the one that comes with the block?





Kutalion said:


> Hello,
> can you please make a support ticket on https://www.ekwb.com/create-a-support-ticket/
> EK would like to know what's wrong there. Also would be great if you'd send me a ticket number so we can get RnD and head of support on it as well.


Is it possible you all misunderstood what I was trying to say to the user who placed pads (all the same 1mm thickness) everywhere, just like under his original air cooler: not only on the two main VRM power stages, but on the chokes, POSCAPs, and various other MOSFETs and controllers?
I never said the EK block won't contact the parts it's supposed to, like the GPU, VRAM, and the two main VRM sections (MOSFETs only), one on the left side and one (longer) on the right side. 
I know other block manufacturers (Bitspower first among them) provide pads for everything else (like chokes and POSCAPs), but I honestly don't think they should, and I respect (and will install it in) EK's way. 
Not to mention the backplate and some other pads on the back of the card; they'll also add a little more cooling to the VRM. 



GraphicsWhore said:


> What's the deal here? Are you saying someone made a whoopsie on the design and you released a block that doesn't make contact with what it should? I have the EK block and backplate and supposed to block my EVGA XC tomorrow.


Don't worry mate, go on with your purchase and installation. 
I still have to do mine (waiting forever here in the UK for an MSI Duke from Scan), so I can't share anything about the block's contact yet; I'll however use different (better) 1mm VRM pads from Gelid Extreme (12 W/mK). 



Xeq54 said:


> So does the EK block. It makes contact with VRM, VRAM and the die. You do not need anything else.


This 
Cool bro


----------



## mr2cam

Just got my shipping notification today also; the card will be here Friday, and my waterblock and backplate just shipped out too. I might have everything installed by the weekend.


----------



## Hulk1988

Krzych04650 said:


> Very thoughtful, MSI


oh no.... and now? at the top?


----------



## Addsome

zhrooms said:


> Looks fine to me, it's a 750W 80 Plus Bronze PSU with dual 8-Pin, 82% at 750W (900W from the Wall) and 55 Amp (62 Box).
> 
> _"At 82.3% efficiency, the unit is extremely close to failing the Bronze rating, but it does hang in there."_


Ok thanks 



profundido said:


> Its borderline. You can try but with an overclocked cpu during max load peaks in the middle of game or benchmark your computer might just instantly power itself off. That's about the worst that could happen


So you're saying I should just try it? If the PSU is not enough, the worst that will happen is it will just shut off? Will anything be damaged? I was thinking of getting the EVGA G3 750W, but idk.


----------



## Krzych04650

Hulk1988 said:


> oh no.... and now? at the top?


Rear, with the CPU AIO to the front. The best thing about all this is that the CPU AIO was in the front originally; I moved it to the top to make room for the GPU radiator so it could have fresh air directly, and only after I installed the GPU did I realize the tubes are too short. 

Also, I have my PC in a different room on 10m cables (for many reasons, mostly because I am terribly sensitive to noises like coil whine), and apparently the USB-C on the 2080 Ti caused issues; Windows started to go crazy, saying there were too many USB hubs connected. I had to disable the Nvidia port in Device Manager to bring things back to normal.

So I got the card 3 hours ago and I haven't even started testing it yet.


----------



## ottoore

Silent Scone said:


> Yes, EK block


Could you overclock your GDDR6 any higher after watercooling?


----------



## Silent Scone

profundido said:


> Awesome ! This is where things start to get interesting for me personally because I'm used to seeing 17K something score with my Titan X (Pascal) SLI and I'm looking to replace that with 1 card only in the future.
> 
> May I ask what exact card and conditions you used to achieve this result ?


Water, 22c ambient and 380W Galax BIOS with shunt to both 6 and 8 pin. Shunt didn't really offer much on ambient, though.


----------



## tconroy135

xer0h0ur said:


> Major corporations don't go to logistics companies. Logistics companies go to them to make the pickups.


No, that was my point: the package is not in the hands of FedEx yet, so there is still every chance that my package shipping from Minnesota will get an adjusted delivery date of Saturday.


----------



## Silent Scone

ottoore said:


> Could you overclock yours gddr6 any higher after watercooling?


Honestly, I didn't bother pushing it much further on air because of the negligible gains, but it tops out now around 2100MHz.


----------



## BigBeard86

Has anyone installed the Kraken G12 bracket on the EVGA XC Ultra and been able to retain the PCB plate and backplate, with proper AIO contact?


----------



## illidan2000

Are the Gigabyte 2080 Tis (Gaming and Windforce) real, or are they just a dream (or fake)?

No one has seen any of these cards: no reviews, nothing actually available in stores.


----------



## Krzych04650

Here are GPU-Z screenshots for MSI Sea Hawk X


















I cannot upload BIOS because GPU-Z says that "BIOS reading is not supported on this device"

So it is confirmed that the power limit is the same as for Trio X.

Overclocking doesn't look overly promising so far; going anywhere over 1.000V causes the GPU to hit the power limit frequently, which results in lower performance than with max stable clocks at 1.000V. At least this is the behavior in the Fire Strike Extreme stress test loop. Looks like the fight is going to be for a stable 2100 MHz. I am so glad I canceled the MSI Duke order; with its terribly low power limit it would be hard to get a stable 2000, if not lower.

As for the card itself, the good things are of course the temperatures (40-ish under load) and the very good magnetic fan on the radiator; at max speed, which is around 2200 RPM, it is literally two times quieter than the stock fans on my Corsair H115i, which max out at around the same RPM. The turbine fan on the GPU is controlled with the same slider as the radiator fan, so this may be a problem for some, but it only makes a gentle humming noise with no rattling or anything like that; it isn't really louder than the radiator fan. The AIO on the GPU is a Corsair H55, so if you want to know anything about pump performance and noise, look for info on that cooler. The fans certainly have potential for very quiet operation.

The bad thing is obviously the pathetic tube length; the tubes don't even reach the front of a small mid-tower case like the Fractal Define R5. The shroud also bends easily in some places, but not because the plastic is bad; it is just not reinforced where it should be. For example, it bends on the edge near the AIO shield, but if they had just mounted it to the AIO shield with a screw in the middle, it wouldn't. So this is some sloppy design.

Not a bad card, especially compared to all of these 3-slot ugly bricks, and of course I am keeping it, especially since I managed to get it so early, but EVGA is going to have a very easy win with their hybrid model.


----------



## Addsome

illidan2000 said:


> are 2080ti gigabyte (gaming and windforce) real or they are just a dream ? (or fake!)
> 
> no one has seen any of these cards. No reviews, nothing on stores really available.


I have an unopened 2080 Ti Windforce that I got on Sept 28 in Canada.


----------



## nycgtr

illidan2000 said:


> are 2080ti gigabyte (gaming and windforce) real or they are just a dream ? (or fake!)
> 
> no one has seen any of these cards. No reviews, nothing on stores really available.


I have 2 of them as of today, so no, they aren't fake.


----------



## GraphicsWhore

nycgtr said:


> I have 2 of them as of today so no they arent fake.


Nah man, that's just your imagination


----------



## nycgtr

GraphicsWhore said:


> Nah man, that's just your imagination


Yeah, seems my 2nd Trio via Amazon is really going to be imaginary at this point. For the sake of having matching cards, I may just sell off the first Trio I got. I need to bin these other 2 to see how well they boost. While I do have the NVLink bridge, I am not completely sold on going NVLink yet; getting tired of eating the cost of two cards for a 30% chance of both cards functioning.


----------



## raider89

So should I get the Bitspower waterblock or the EK one? People are saying the EK one doesn't make contact on everything?


----------



## Krzych04650

Addsome said:


> I have an unopened 2080ti windforce that I got on Sept 28 in Canada.


Sell it to someone who cannot stand waiting and has too much money, for 1.5x of what you paid USD


----------



## xer0h0ur

raider89 said:


> So should I get the bitpower waterblock or the ek one? People saying the ek one doesnt make contact on everything?


I don't even understand where that is coming from. The EK blocks cool the same things they have been cooling since as far back as I have been buying their blocks. The GPU, VRMs and GDDR.


----------



## xermalk

Do I need to flash a BIOS to get past +1000 memory in MSI Afterburner? My Zotac 2080 Ti AMP seems to handle +1000 as if it was meant to come like that by default.

Edit: changed over to the EVGA software, and all is well.

Seems to be running fine at +1150 mem / +130 core with the 115% power limit on the Zotac AMP :thumb:
Now to begin the wait for waterblock reviews; the Zotac card isn't exactly quiet with a 115% power limit, and 130 will likely be worse.


----------



## nycgtr

raider89 said:


> So should I get the bitpower waterblock or the ek one? People saying the ek one doesnt make contact on everything?


Bitspower. Better quality, with an extra O-ring and standoffs included; it's a no-brainer. It's compatible with the stock backplate as well, so it saves you the hassle of buying one.


----------



## Addsome

Krzych04650 said:


> Sell it to someone who cannot stand waiting and has too much money, for 1.5x of what you paid USD


I've had it on my local Craigslist for about a week, but no one seems to want it, or they just give lowball offers below what I paid. I only put it at $90 above what I paid. I don't really feel like getting scammed on eBay; might just return it to the store soon.


----------



## dante`afk

Has anyone gotten any info from vendors on when stock is going to be available again? MSI/EVGA/ASUS etc.?


----------



## StullenAndi

Krzych04650 said:


> I cannot upload BIOS because GPU-Z says that "BIOS reading is not supported on this device"


Right now only nvflash works to save your BIOS file.


----------



## Swaggerfeld

CallsignVega said:


> I'm glad I swapped out the FE for the FTW3. The latter should have 130% power limit. I hate constantly bouncing off that limit. The more headroom the better!


Were you planning on going with SLI'd FTW3 cards? I'm going to need to pick up a motherboard with a ton of spacing. I am trying to avoid watercooling personally but am curious about thermals on the top card in an SLI setup while on air.


----------



## Krzych04650

Time Spy: 2100 MHz at 1.025V vs 2010 MHz at 0.925V is a 1.3% difference in graphics score. The power limitations are just mad. But there has to be some real reason behind it; nobody did that just because...


----------



## NewType88

Krzych04650 said:


> Time Spy, 2100 MHz 1.025V vs 2010 MHz 0.925V, 1.3% difference in graphics score. The power limitations are just mad. But there has to be some real reason behind it, nobody did that just because...


Do you know how much wattage you're pulling under full load with your OC'd CPU and GPU?


----------



## raider89

xer0h0ur said:


> I don't even understand where that is coming from. The EK blocks cool the same things they have been cooling since as far back as I have been buying their blocks. The GPU, VRMs and GDDR.


It's from a recent post in this thread a few pages back: something about the EK ones not touching some areas they're supposed to, even with thicker pads, while the Bitspower one does.


----------



## raider89

nycgtr said:


> Bitspower. Better quality, extra oring and standoff included. It's a no brainer. Compatible with stock back plate as well so save you the hassle of buying one.


Thank you!


----------



## xermalk

https://www.3dmark.com/3dm/29050856
+130 core / +1150 mem seems to be the limit on my card.

And some *nasty* coil whine at certain framerate/load levels; I can easily hear it when I stand 2-3m away from my Define S case.


----------



## xer0h0ur

raider89 said:


> Its from a recent post on this topic a few pages back. Something about the ek ones not touching some areas its supposed to even with thicker pads, but the bitspower one does.


Even though there is someone in this thread using an EK-blocked and shunt-modded card? How exactly did he manage to not overheat his components if that were the case?


----------



## Dayaks

Has anyone tested power usage in rasterized benchmarks vs RTX ones? I imagine RTX would use more if the load were balanced so the CUDA cores aren't held back.


----------



## arrow0309

nycgtr said:


> I have 2 of them as of today so no they arent fake.


Hi, are you going to water cool them?
If yes, can you post a pic of the board?
I'd love to see one; they should be reference design, but many have questioned that and classified them as custom.


----------



## nycgtr

arrow0309 said:


> Hi, gonna go water cooling them?
> If yes can you post a pic of the board?
> I'd love to see one, they should be reference design but then many questioned it and classified them as custom.


They are reference-based PCBs. I opened one that came yesterday; one came today. I was rather hoping I'd end up with 2 Trios.


----------



## exploiteddna

Was able to get an MSI Duke OC about an hour ago from Newegg. I'm considering putting it up for auction on eBay, as I really want an EVGA XC Ultra (or maybe a non-reference design like the FTW). Regardless, the card will be going into a water loop. I don't know as much about all the variants as many here do, and I'm wondering if my desire for an XC Ultra is just me being a snob: is there really no difference, and does putting them under water make them all the same (i.e. the cooler design is useless for me)? So, can anyone tell me what benefits, if any, there may be to getting an XC Ultra or FTW3 instead of sticking with the Duke OC?


----------



## dVeLoPe

michaelrw said:


> was able to get an MSI Duke OC about an hour ago from Newegg.. Im considering putting it up for auction on ebay, as I really want an EVGA XC Ultra (or maybe a non-reference design like FTW).. regardless the card will be going into a water loop. I dont know as much about all the variants like many here do, and im wondering if my desire for an XC Ultra is just me being a snob and theres not any difference, and if having them under water will make them all the same (i.e. cooler design is useless for me).. so, can anyone tell me what benefits, if any, there may be for getting an XC ultra or FTW3 instead of sticking with the Duke OC



Throw it on eBay, sell it for a profit, then buy mine lol. But I'm pretty sure all the cards are, give or take, the same FPS-wise.


----------



## Krzych04650

NewType88 said:


> Do you know how much wattage you're pulling under full load on your OC'd CPU and GPU?


For 2100 MHz 1.025V wallmeter says 470-510W total power draw, HWiNFO64 says 300-330W for GPU.
For 2010 MHz 0.925V wallmeter says 400-440W total power draw, HWiNFO says 240-270W for GPU.
CPU is fluctuating between 50 and 75 W.

So the power draw difference is huge for at most a 2% performance difference. I will probably work on a stable 2000 MHz clock at minimal voltage. Better to do that than get angry at clocks going all over the place.
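For reference, the efficiency trade-off described above can be sketched in a few lines of Python (a rough illustration only; the wattages are midpoints of the ranges reported in this post, and the ~1.3% score delta comes from the earlier Time Spy comparison):

```python
# Rough perf-per-watt comparison of the two operating points.
def efficiency(rel_perf, gpu_watts):
    """Relative performance per watt (arbitrary units)."""
    return rel_perf / gpu_watts

# 2100 MHz @ 1.025 V: ~315 W GPU draw, baseline performance (1.000)
# 2010 MHz @ 0.925 V: ~255 W GPU draw, ~1.3% lower graphics score
high_clock = efficiency(1.000, 315)
undervolt = efficiency(0.987, 255)

gain = (undervolt / high_clock - 1) * 100
print(f"Undervolted point is about {gain:.0f}% more efficient per watt")
```

So the undervolt trades roughly 1.3% of performance for on the order of a 20% improvement in performance per watt, which is why chasing 2100 MHz is mostly wasted power.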


----------



## Krzych04650

michaelrw said:


> was able to get an MSI Duke OC about an hour ago from Newegg.. Im considering putting it up for auction on ebay, as I really want an EVGA XC Ultra (or maybe a non-reference design like FTW).. regardless the card will be going into a water loop. I dont know as much about all the variants like many here do, and im wondering if my desire for an XC Ultra is just me being a snob and theres not any difference, and if having them under water will make them all the same (i.e. cooler design is useless for me).. so, can anyone tell me what benefits, if any, there may be for getting an XC ultra or FTW3 instead of sticking with the Duke OC


The Duke has the lowest power limit available, 284W if I remember correctly. This means that if you don't get a good undervolt, you won't even be able to maintain 2000 MHz. The EVGA FTW3 is going to have a 373W power limit, so that is an entire 100W more. That said, the actual performance difference you are going to get out of this is like 4%. Scaling is really not great: the card quickly eats all the extra power, and what you get is 50W more power draw, +10C higher temperatures and like a 2% performance gain. You know, I get sick of hearing the "no difference" or "not worth it" talk, but here there really is not much to be done. You can run an entire overclocking campaign like GamersNexus or JayzTwoCents did, with shunt modding and blowing a freaking air conditioner on the GPUs, and all you get is like a 7% higher graphics score vs an undervolted 2000 MHz.


----------



## exploiteddna

dVeLoPe said:


> throw it on ebay sell it for a profit then buy mine lol but im pretty sure all the cards are give or take the same FPS wise


Haha yeah, sorry we weren't able to find a middle ground. Funny thing is we both had the same view, just on different sides of it. Yeah, I figured most are gonna be similar. I guess maybe the non-reference ones may be different, but they haven't even released full specs on the FTW line yet (even though EVGA have said they plan to ship preorders this Friday).


----------



## xer0h0ur

Duke was also confirmed to be able to take a 130% power limit BIOS flash...and he already said he's going to be under water


----------



## exploiteddna

Krzych04650 said:


> Duke has the lowest power limit available, 284W if I remember correctly. This means if you don't get a good undervolt, you won't even be able to maintain 2000 MHz. EVGA FTW3 is going to have 373W power limit, so that is entire 100W more. That said, the actual difference in performance that you are going to get out of this is like 4%. Scaling is really not great, card is quickly eating all the extra power and what you get is 50W more power draw, +10C temperature increase and like 2% performance gain.


I don't mind flashing the BIOS to circumvent the power limit, though. Not sure if anyone has tried it on one of the reference MSI cards. Yeah, maybe I should sell it, I don't know. Man, it took so long to even get this one. At least the build/upgrade is still waiting on a new EK GPU block and a 9900K (and maybe a Z390 board), so it's not like I can do the build yet anyway.
I wonder if I could just flash the BIOS, though. I'm not concerned about the VRM or my PSU being able to handle it.

edit: 289W boost, per OP

edit 2: The other good thing is that I grabbed this from Newegg in stock, not preorder; it says it ships tomorrow. Also I didn't have to pay sales tax on it. I have a preorder with Amazon for a Windforce OC (283W boost) not expected to ship until the end of the month, and that one has an extra $100 in sales tax.


----------



## Krzych04650

michaelrw said:


> i dont mind flashing the bios though to circumvent the power limit. not sure if anyone has tried on one of the reference MSI cards. yeah maybe i should sell it idk.. man it took so long to even get this one.. at least the build/upgrade is still waiting on a new ek gpu block, and a 9900k (and maybe a z390 board).. so its not like i can do the build yet anyways.. but still..
> i wonder if i could just flash the bios though.. im not concerned about the VRM or my PSU being able to handle it, though.
> 
> edit: 289W boost, per OP


Like I said, there is not much to gain, while waiting is very, very unhealthy  Just keep the card. If you can flash it to a higher power limit and get it under water, then you won't be anywhere behind cards like my Sea Hawk.


----------



## exploiteddna

Krzych04650 said:


> Like I said, there is not too much to gain, while waiting is very, very unhealthy  Just keep the card. If you can flash it to higher power limit and get it under water then you won't be anywhere behind cards like my Sea Hawk.


Well then I guess it will depend on whether I can successfully flash it. I'll have to comb through the thread again and see what the status of all that is.


----------



## Johnny_Utah

Couple of pics of EVGA 2080Ti XC Ultra (Got lucky, found one at Microcenter on the 27th but waiting for the second card):


----------



## Addsome

michaelrw said:


> Well then I guess it will depend on whether I can successfully flash it. I'll have to comb through the thread again and see what the status of all that is.


It's very easy to flash. Lots of people are flashing the 380W Galax BIOS and reporting good results.


----------



## exploiteddna

Addsome said:


> It's very easy to flash. Lots of people are flashing the 380W Galax BIOS and reporting good results.


So what would you do if you were in my position, especially if you were gonna put it under water? (Yeah, I'm that guy that asks everyone else what to do because I can't make hard decisions all by myself xd)


----------



## exploiteddna

Johnny_Utah said:


> Couple of pics of EVGA 2080Ti XC Ultra (Got lucky, found one at Microcenter on the 27th but waiting for the second card):


Dang, nice one Johnny. I feel like I just saw some posts of yours, maybe that was over on the EVGA forums. You're really lucky to have nabbed an XC Ultra, from Microcenter no less. It baffles me that any store has these on the shelves.

But man, that IO bracket is a behemoth. Please, we're gonna need a low-profile bracket, like, asap.


----------



## dVeLoPe

I have an XC Black Edition Gaming (11G-P4-2282-KR) and plan on leaving it stock. What would the FPS difference be between that and a heavily modded card at 1440p?


----------



## Krzych04650

Default GPU Boost behavior is really chaotic. When it hits the power limit, it violently drops clocks, but not voltage. There were moments in Time Spy when it dropped clocks by as much as 200 MHz below target, while voltage only dropped by something like 30 mV. That makes no sense at all; a 200 MHz throttle allows for dropping at least 100 mV.

So I stress tested every frequency step from 1995 to 2100 MHz (not getting above 2100 with a 330W power limit anyway) to find the stable voltage at each, and assigned a specific voltage to each frequency so that throttling actually makes sense: dropping voltage proportionally with clocks, instead of clocks alone. So 2100 is 1.025, 2085 is 0.987, 2070 is 0.975, ..., 2010 is 0.918 and 1995 is 0.900. No need to go lower.

It brought my Time Spy graphics score up by 500 points, almost hitting 16 000. There is not much more I can do until there is a reliable way to get these rumored 373-380W BIOS flashes.
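The flattened curve idea can be sketched as a simple lookup (my own illustrative helper, not how MSI Afterburner actually stores its curve; the measured points are the ones listed above, with linear interpolation assumed for the untested steps in between):

```python
# Measured stable points from the post (MHz -> V); intermediate
# steps are approximated by linear interpolation.
STABLE_POINTS = [
    (1995, 0.900),
    (2010, 0.918),
    (2070, 0.975),
    (2085, 0.987),
    (2100, 1.025),
]

def voltage_for(clock_mhz):
    """Lowest tested voltage believed stable at clock_mhz."""
    pts = STABLE_POINTS
    if clock_mhz <= pts[0][0]:
        return pts[0][1]
    if clock_mhz >= pts[-1][0]:
        return pts[-1][1]
    for (f0, v0), (f1, v1) in zip(pts, pts[1:]):
        if f0 <= clock_mhz <= f1:
            t = (clock_mhz - f0) / (f1 - f0)
            return round(v0 + t * (v1 - v0), 3)

print(voltage_for(2100))  # 1.025
print(voltage_for(2040))  # roughly halfway between 0.918 and 0.975
```

This way, a power-limit throttle down to, say, 2040 MHz drops voltage proportionally instead of holding ~1.0 V at a reduced clock.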



dVeLoPe said:


> I have an XC BLACK EDITION GAMING, 11G-P4-2282-KR and plan on leaving it stock what would the FPS difference be between that and a heavily modded card in 1440p?


Run some Time Spy and check the graphics score on stock. 16500-16600 is around what you can get after modding. Around 16000 is what you get if you take your time tweaking with what you have (330-338W bioses) and 15500 is quick OC or Scanner OC.


----------



## dVeLoPe

Krzych04650 said:


> Default GPU Boost behavior is really chaotic. When it hits power limit, it violently drops clocks, but not voltage. There were moments in Time Spy when I was dropping clocks by even as much as 200 MHz vs target, while voltage was only dropped by something like 30 mV. Makes no sense at all, 200 MHz throttle allows for dropping at least 100 mV.
> 
> So I have stress tested all frequencies from 1995 to 2100 (not getting above 2100 with 330W power limit anyway) for stable voltages to assign specific voltage to each frequency so throttling actually makes some sense and is dropping voltage proportionally to clocks, not just clocks. So 2100 is 1.025, 2085 is 0.987, 2070 is 0.975, ... ,2010 is 0.918 and 1995 is 0.900. No need to go lower.
> 
> It brought up my Time Spy graphics score by 500 points, almost hitting 16 000. Not much more that I can possibly do until there are any reliable ways to get these rumored 373-380W bios flashes.


Can you revert the card back to stock and post a before/after of your exact 3DMark results pages, please?


----------



## GraphicsWhore

Won’t have time tonight but going on the block tomorrow


----------



## Addsome

Krzych04650 said:


> Default GPU Boost behavior is really chaotic. When it hits power limit, it violently drops clocks, but not voltage. There were moments in Time Spy when I was dropping clocks by even as much as 200 MHz vs target, while voltage was only dropped by something like 30 mV. Makes no sense at all, 200 MHz throttle allows for dropping at least 100 mV.
> 
> So I have stress tested all frequencies from 1995 to 2100 (not getting above 2100 with 330W power limit anyway) for stable voltages to assign specific voltage to each frequency so throttling actually makes some sense and is dropping voltage proportionally to clocks, not just clocks. So 2100 is 1.025, 2085 is 0.987, 2070 is 0.975, ... ,2010 is 0.918 and 1995 is 0.900. No need to go lower.
> 
> It brought up my Time Spy graphics score by 500 points, almost hitting 16 000. Not much more that I can possibly do until there are any reliable ways to get these rumored 373-380W bios flashes.
> 
> 
> 
> Run some Time Spy and check the graphics score on stock. 16500-16600 is around what you can get after modding. Around 16000 is what you get if you take your time tweaking with what you have (330-338W bioses) and 15500 is quick OC or Scanner OC.


The Galax 380W BIOS is already in the wild, why don't you flash that?


----------



## domenic

Vipeax said:


> Excellent! And no, please share the default one before you flash. I already extracted the 130% one from the updater myself (see https://drive.google.com/drive/folders/10VgqTD1N5l81-0ZiUXQ2YcPPllRcBM3t), but I haven't gotten my hands on the default one yet.


Done & uploaded to the database here - https://www.techpowerup.com/vgabios/204144/204144


----------



## xer0h0ur

Look at all these brave souls using EK waterblocks since we have people assuring us the components aren't making contact with the block's pads. /sarcasm


----------



## Krzych04650

dVeLoPe said:


> can you revert the card back to stock and post a before/after of your exact 3dmark results page please


Stock 300W PL vs stock 330W PL vs basic OC vs the optimized boost curve I was talking about. Look at graphics test 2, because it is the one that hits the power limit hard (test 1 is not as demanding) and forces the first three settings to throttle significantly below 2000 MHz, even down to 1905, while the last setting holds 2025 minimum, mostly 2070.

(SysInfo scanning is off because it takes ages every time)
https://www.3dmark.com/compare/spy/4577924/spy/4577948/spy/4577972/spy/4577992


----------



## dVeLoPe

Krzych04650 said:


> Stock 300W PL vs Stock 330W PL vs Basic OC vs Optimized boost curve I was talking about. Look at graphics test 2, because this is the one that strikes power limit hard (test 1 is not as demanding) and forces first three settings to throttle significantly below 2000 MHz, even down to 1905, while the last setting is 2025 minimum, mostly 2070.
> 
> (SysInfo scanning is off because it takes ages every time)
> https://www.3dmark.com/compare/spy/4577924/spy/4577948/spy/4577972/spy/4577992


Aside from benchmark numbers, what is your actual in-game FPS difference? Just trying to figure out if it's worth the hassle or if I should leave my card stock.


----------



## Krzych04650

dVeLoPe said:


> aside from benchmark numbers what is your actual in game fps difference just trying to figure out if its worth the hassle or to leave my card stock


The FPS difference is relative to the FPS count. For example, let's assume the gain from overclocking is 10%. If you had 50 FPS at stock you will have 55 FPS after OC, a 5 FPS difference; but if you had 150 FPS at stock, you will have 165 with OC, a 15 FPS difference. So there is no such thing as "you will have X more FPS". Take benchmark scores, calculate the percentage increase from OC, and then you can apply it to anything you want. It is a basic and not entirely accurate method, because different games react differently to OC, but you will get the basic idea.
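The percentage reasoning above in code form (trivial, but it makes the point concrete):

```python
def fps_after_oc(stock_fps, gain_pct):
    """Apply a percentage OC gain to a stock FPS figure."""
    return round(stock_fps * (1 + gain_pct / 100), 1)

# The same 10% gain gives very different absolute FPS deltas:
print(fps_after_oc(50, 10))   # 55.0  (+5 FPS)
print(fps_after_oc(150, 10))  # 165.0 (+15 FPS)
```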


----------



## R1amddude

Looks like Friday or Sat is when my 2080 Ti comes in... eBay it for $2000 or keep it for my Ryzen 2700 build... decisions, decisions...


----------



## GraphicsWhore

R1amddude said:


> Looks like Friday or Sat to decide when my 2080ti comes in... ebay for $2000 or keep it for my Ryzen 2700 build... decisions decisions...


Is this USD? If so, you ain’t getting 2000 dude.


----------



## cstkl1

2010 MHz avg on GPU test 1, 1965 avg on GPU test 2.

Can't wait to get a waterblock and break 17k/8k graphics scores.

Thermal throttling badly on GPU test 1; GPU test 2 is thermal + power limit throttling.


----------



## Elmy

Best I could do so far: 14,084 in Time Spy...

https://www.3dmark.com/spy/4579132


----------



## Vapochilled

Krzych04650 said:


> FPS difference is relative to FPS count. For example let's assume the gain from overclocking is 10%. If you had 50 FPS on stock then you will have 55 FPS after OC = 5 FPS difference, but if you had 150 FPS on stock, then you will have 165 with OC = 15 FPS difference. So there is not such thing as "you will have x more FPS". Take benchmark scores, calculate percentage increase from OC and then you can apply it to anything you want. It is a very basic and not accurate way because different games react differently to OC, but you will get basic idea.





Guys, just to highlight something already known that seems to be forgotten in these moments of anxiety.
What matters most is not only the max or average FPS, but the 1% low percentile.
When you push for a maximum overclock, the minimum FPS values tend to drop, making the experience worse: the average and max will be higher, but frametime spikes increase (bad) and the lows get lower (bad).

I believe we should focus our benchmarks on something that also gives us this feedback.
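Definitions of "1% low" vary between tools; one common version is the average FPS of the slowest 1% of frames, which can be computed from a frametime log like this (illustrative sketch, synthetic data):

```python
def one_percent_low(frametimes_ms):
    """Average FPS of the slowest 1% of frames (one common '1% low' metric)."""
    worst = sorted(frametimes_ms, reverse=True)
    n = max(1, len(worst) // 100)          # slowest 1%, at least one frame
    avg_ms = sum(worst[:n]) / n
    return 1000 / avg_ms

# 99 smooth frames at 10 ms (100 FPS) plus one 50 ms spike:
frames = [10.0] * 99 + [50.0]
avg_fps = sum(1000 / f for f in frames) / len(frames)
print(round(avg_fps, 1))        # 99.2
print(one_percent_low(frames))  # 20.0
```

A single 50 ms spike barely moves the average (about 99 FPS here) but drags the 1% low down to 20 FPS, which is exactly the "worse experience" effect described above.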


----------



## arrow0309

nycgtr said:


> They are reference based pcb. I opened one that came yesterday, one came today. I was more hoping I'd end up with 2 trios.


Hi, just to be sure before I order one: did you get the Gaming ones or the Windforce?
I'm only interested in the Gaming model; I didn't specifically ask and you didn't specify.
I already have the EK block.


----------



## SimonOcean

xer0h0ur said:


> Look at all these brave souls using EK waterblocks since we have people assuring us the components aren't making contact with the block's pads. /sarcasm


I am potentially interested in your post, but with the sarcasm it is a little obtuse... Are you saying that there is a problem with EK's RTX Vector waterblocks? Do you have any threads that you can link that describe or illustrate the problem? Genuinely interested as I ordered the EK RTX waterblock, have it ready to install, but am waiting for delivery of my GPU. If the block is no good I have time to order a Hydro Copper from EVGA.

Thank you.


----------



## gavros777

Which 2080 Ti brand is best for overclocking?


----------



## ilmazzo

gavros777 said:


> which 2080 ti brand is best for overclocking?


None, since they are all power limited.


----------



## dpoverlord

ilmazzo said:


> gavros777 said:
> 
> 
> 
> which 2080 ti brand is best for overclocking?
> 
> 
> 
> none since they are all power limited

Interesting, I was considering upgrading my 1080ti. So it's just cosmetic?


----------



## Xeq54

SimonOcean said:


> I am potentially interested in your post, but with the sarcasm it is a little obtuse... Are you saying that there is a problem with EK's RTX Vector waterblocks? Do you have any threads that you can link that describe or illustrate the problem? Genuinely interested as I ordered the EK RTX waterblock, have it ready to install, but am waiting for delivery of my GPU. If the block is no good I have time to order a Hydro Copper from EVGA.
> 
> Thank you.


There is no problem. I have the block too; I checked the VRMs and GDDR6 and all make contact. When I removed the block, all the pads had indentations from the components they were supposed to cool.

I think this whole rumor was started by one person many pages back: he thought the chokes needed to be cooled and placed pads on them, which were not making contact. But that is by design and how it should be.


----------



## illidan2000

nycgtr said:


> I have 2 of them as of today so no they arent fake.


Windforce or Gaming OC?


----------



## zhrooms

tconroy135 said:


> If thermals are kept in check I wonder if the 380w Bios will cause any degradation.


 
Unlikely, Gigabyte has 366 W BIOS on their Reference Card (Gaming OC) and KFA2/Galax has 380 W BIOS on their Reference Card (OC, not confirmed).
 


dVeLoPe said:


> I have an XC BLACK EDITION GAMING, 11G-P4-2282-KR and plan on leaving it stock what would the FPS difference be between that and a heavily modded card in 1440p?
> Aside from benchmark numbers what is your actual in game fps difference just trying to figure out if its worth the hassle or to leave my card stock





nycgtr said:


> Tbh I wouldn't bother seems to be a lot of hassle to be flashing another cards bios (no sure if anyone has a bios switch on their card) for like such cruddy gains.


 
A lot of hassle? It takes like 30 seconds and is completely safe.

260W = 94.1 FPS
320W = 100.8 FPS +7.1%
380W = 104.2 FPS +3.4% (+10.7% from 260W)

Most users never touch overclocking; they just install the card and play. Compare that person to someone who simply switches BIOS and turns the power limit up to 127% (380W): 110 FPS instead of 100 FPS, from doing something that takes 30 seconds and is virtually risk-free.
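Those percentages check out against the quoted FPS figures; a two-line sanity check (numbers taken from this post):

```python
# Percentage gains between the power limits quoted above.
def pct_gain(base_fps, new_fps):
    """Percent FPS increase from base_fps to new_fps, one decimal."""
    return round((new_fps / base_fps - 1) * 100, 1)

fps = {260: 94.1, 320: 100.8, 380: 104.2}  # power limit (W) -> FPS

print(pct_gain(fps[260], fps[320]))  # 7.1
print(pct_gain(fps[320], fps[380]))  # 3.4
print(pct_gain(fps[260], fps[380]))  # 10.7
```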
 


Addsome said:


> So your saying I should just try it? If the psu is not enough, the worst that will happen is it will just shut off? Will anything be damaged? I was thinking of getting the EVGA g3 750w but idk


 
Your PSU has more than enough juice; no issues running the 380 W BIOS. A PSU should shut off when its safeguards are triggered, to prevent any damage.
 


Krzych04650 said:


> I cannot upload BIOS because GPU-Z says that "BIOS reading is not supported on this device"





StullenAndi said:


> Right now only nvflash is working to save your bios file.


 
Are you sure? GPU-Z was specifically updated to enable saving of Turing BIOS.
 


Krzych04650 said:


> Overclocking doesn't look overly promising so far, going anywhere over 1.000V causes the GPU to hit power limit frequently which results in lower performance than with max stable clocks at 1.000V. At least this is the behavior in Fire Strike Extreme Stress Test loop. Looks like the fight is going to be for stable 2100 MHz. I am so glad I canceled MSI Duke order, with its terribly low power limit it would be hard to get stable 2000, if not lower.


 
I don't know what you expected, we've known for 3 weeks now that 320 W Founders Edition is throttling like crazy, and for almost a week now we've known even Galax 380W BIOS is throttling. 
 


Krzych04650 said:


> The power limitations are just mad. But there has to be some real reason behind it, nobody did that just because...


 
No, NVIDIA did that just "because", Boost is and has always been a plague for enthusiasts/overclockers, no different this time.
 


michaelrw said:


> Was able to get an MSI Duke OC about an hour ago.. I really want an EVGA XC Ultra (or maybe a non-reference design like FTW).. regardless the card will be going into a water loop. I dont know as much about all the variants like many here do, and im wondering if my desire for an XC Ultra is just me being a snob and theres not any difference, and if having them under water will make them all the same (i.e. cooler design is useless for me).. so, can anyone tell me what benefits, if any, there may be for getting an XC ultra or FTW3 instead of sticking with the Duke OC


 
The EVGA XC Gaming and EVGA XC Ultra are the same card with a different cooler; the Ultra has a much thicker one (3-slot), probably the best air cooler on the market right now, so it's the *last* card you should buy if you are going to replace the cooler with a waterblock.

The MSI Duke OC and EVGA XC Ultra are also the same card underneath the cooler. At the moment the biggest difference is the BIOS: EVGA has an official update that does not void the warranty and brings it up to 338 W, which is higher than the MSI Duke OC. But if you don't mind *risking* the warranty, you could just keep the Duke OC and flash the 380W Galax BIOS, and it will be way faster than the EVGA XC Ultra (unless you also flash that one, in which case they're both the same).

Last, we have the warranty. EVGA *openly* allows you to replace the thermal paste and cooler while keeping the warranty completely intact. MSI does too, but not publicly; they try to scare you with a sticker on one screw to restrict disassembly, but under EU law they are still forced to accept an RMA. So basically you're safe either way, just extra safe with EVGA (that's why I always favor/recommend them over any other partner). If I were you I'd just keep the MSI because you already have it; it's not worth waiting weeks just to potentially save the card in a future RMA process, which is *extremely unlikely* to be needed in the first place. Actually killing a card is much harder than most think: you could drop it on the floor without the heavy cooler (from a reasonable height) ten times, bake it in the oven, hold it over the sink, and it will very likely be perfectly fine. Patience is key when replacing thermal paste or a cooler.
 


michaelrw said:


> but man.. that IO bracket is behemoth. Please, we're gonna need a low-pro bracket..like, asap


 
Or simply don't buy the Ultra in the first place. As said earlier, the whole point of that card is the massive air cooler; if you're going to water cool, just buy the Black or XC and stay far away from the Ultra.
 


Krzych04650 said:


> Not much more that I can possibly do until there are any reliable ways to get these rumored 373-380W bios flashes.


 
Have you not been reading the thread? It is extremely easy, fast and safe to flash the Galax 380W. Also, what do you mean rumored? The Galax 380W BIOS leaked almost a week ago and has now been tested by 10+ members here; I've hosted and linked it at the bottom of the original post.
 


xer0h0ur said:


> Look at all these brave souls using EK waterblocks since we have people assuring us the components aren't making contact with the block's pads. /sarcasm


 
Yeah, that makes no sense. The amount of respect and trust EK has earned over the years is huge; there is no way they would make such a mistake, especially not on a flagship release like this. They'd have been gone years ago if they frequently killed cards.
 


arrow0309 said:


> Hi, just to be sure before I order one, did you get those Gaming ones or the Windforce?
> I'm interested to get the Gaming model only, I've already the EK block.


 
The Gaming OC has the official 366 W BIOS update, while the Windforce does not (at least not yet). But if you're willing to risk the warranty you could just flash the Windforce with the Galax 380 W BIOS, and it'd be even faster than the Gaming OC (which you can also flash, though).

Other than that same card, different cooler.


----------



## arrow0309

Hi Zhrooms, I know; let's say I'd like to get the Gaming one.
But I still want a single piece of proof that they're the same PCB (Windforce and Gaming), and that this mythical Gaming is yet another reference AIB board.


----------



## ilmazzo

dpoverlord said:


> Interesting, I was considering upgrading my 1080ti. So it's just cosmetic?


What I said doesn't decide whether to upgrade from a 1080 Ti or not; only you can judge whether the $600-700 minimum to upgrade is worth it. I only said that, since these cards are power hungry and the BIOSes are kept safe by the power limit, no brand does significantly better than another. Maybe you can find one model with a higher power limit than another (like Duke vs Trio), but it is nothing more than that; there has been no "magic brand" for at least 7-8 years in GPUs, to be honest (leaving aside Kingpins, since those are not mainstream cards). It comes down to the silicon, so it is luck.


----------



## GAN77

Any info about cards with Samsung GDDR6 memory?
Why just Micron?


----------



## Globespy

dpoverlord said:


> Interesting, I was considering upgrading my 1080ti. So it's just cosmetic?


As someone who had a 2080 Ti (returned to NVIDIA within the 30-day no-questions-asked period), I can't stress enough how much of a mistake this would be.
Is it faster? Sure.
But only slightly, and unless you game at 4K the gains are minimal.

Certainly not at all worth the upgrade at the insane price point - I could have sold my 2080ti for a massive profit, but I've always detested scalpers and the card isn't worth even close to $1200 let alone over $2000 which even EVGA/ASUS are trying to charge. Pretty disgusting.

Ray Tracing is not going to be more than a novelty in this generation of cards, and DLSS is being torn apart so even those of us who hoped we could at least fall back on that are left holding our 'you know whats'.....

I'm sure I will get flack for this post, but I put my money where my mouth is and pre-ordered within an hour of the launch, and really wanted this to be a success story, but it's pretty obvious that it's not, and I am GLAD that I can invest that money elsewhere in my system.

The entire YouTube community has nothing to do with these cards other than seeing who has the best synthetic benchmark scores. If that's your thing and you aren't buying the card for significant gaming performance gains over your 1080 Ti (which you won't get unless you play on a 4K monitor), then go for it!

I game at 1440p 144Hz. Sure, there's a 2160p 144Hz monitor out there, but it looks like even two 2080 Tis using the new NVLink still won't hold a consistent 144-150 FPS.

I hope that RTX/DLSS and the new architecture come into their own and gain traction, truly I do.
But NVIDIA isn't doing a good job of winning the community's support, and I strongly suspect we will see an updated version next year (given the multiple reviews showing the 2080 Ti has 'more' that NVIDIA has purposely left out at this stage), and a new generation the year after.

Good luck with your decision.
If you enjoy gaming at high FPS (1440p/1080p 144Hz monitors and above) then your 1080 Ti is the best bet until the next generation of GPUs, which will come sooner rather than later.


----------



## profundido

Addsome said:


> Ok thanks
> 
> 
> 
> So your saying I should just try it? If the psu is not enough, the worst that will happen is it will just shut off? Will anything be damaged? I was thinking of getting the EVGA g3 750w but idk


Exactly, I would try it (talking purely about the maximum wattage load). I have witnessed this phenomenon on my own computer in the past. A power supply advertises a certain maximum wattage, like 750W in your case, but it can only sustain and continuously deliver a percentage of that, depending on its quality. Under load the requested wattage peaks, so there may be a moment when the computer requests more than the power supply can deliver at that very second. Poof, the computer is suddenly powered off regardless of what you were doing.

So you see, it does not damage the hardware components, because it's the opposite of a power grid failure, where too much is suddenly delivered for your computer to handle. The only thing that can break is the software on your computer: if your game or OS happened to be in the middle of writing critical files when the unexpected power-off occurs, they may need a repair at worst.

Either way, let me clarify that it is bad practice to run at your PSU's limit. It's not only a matter of "will I cross that limit or not?"; in general, for good power efficiency your total maximum draw should be around 70% of the advertised rating. In your case that means the power supply should be at least 850W, preferably Silver/Gold rated.
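The 70% sizing rule above as a quick calculation (the 70% figure is this post's rule of thumb, not a hard specification):

```python
def recommended_psu_watts(peak_system_draw_w, headroom=0.70):
    """Size the PSU so peak system draw sits at ~70% of its rating."""
    return peak_system_draw_w / headroom

# A system peaking around 600 W at the wall:
print(round(recommended_psu_watts(600)))  # 857 -> buy an 850 W+ unit
```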

Also, the load question aside: even if your total draw never exceeded what the 750W power supply can deliver, that is not your biggest risk here. The quality and age of the PSU make it prone to dying at some point, and when that happens it can permanently damage valuable components in your computer. For this reason alone, skimping on a power supply is a bad idea. It makes no sense to put expensive components like a $1200 GPU at such risk!

In my years on a technical service desk I've seen quite a few computers where the PSU had blown up because of collected dust, overcurrent, or simply old age, and trust me, it isn't a pretty sight. You hear a loud pop, and then everything inside is blown up and black. Usually there is little to save.

My advice: invest in any decent silver/gold PSU with enough wattage, and put a surge protector between your computer and the wall. One thing I particularly like about the Corsair AXi series is that their Link software shows the wattage being pulled from the wall live in a graph. It's very interesting to learn which operations pull exactly what power, without having to put a meter at the wall and go look all the time (which only gives you a momentary reading, not a view over time).

https://www.corsair.com/us/en/blog/using-link-with-a-corsair-axi-digital-power-supply

I've compared this software functionality with an independent meter at the wall and it turns out to be perfectly accurate. Early versions of the software used to cause extra unwanted CPU usage, but that was solved a while ago. Highly recommended monitoring functionality now.

PS: I run not only a 5+ GHz 8700K but also two GPUs in SLI at max OC with watercooling. During SOTTR gameplay, for instance, my total wattage goes up to 800-900W sustained (with both GPUs at 100% load), so when building my computer long ago I chose a Corsair AX1200i to make sure I'm never close to maximum power. Even though I assumed back then it would be overkill, it turns out to be just optimal and nothing more. When browsing the internet it falls all the way back to 200W, and a good PSU handles this with optimal efficiency.


----------



## profundido

Silent Scone said:


> Water, 22c ambient and 380W Galax BIOS with shunt to both 6 and 8 pin. Shunt didn't really offer much on ambient, though.


Thank you. EK waterblock, and what exact graphics card? Do you have an idea of the final increase just from the shunt mod on water? Is it worth it, or more like 2% again?


----------



## profundido

Swaggerfeld said:


> Were you planning on going with SLI'd FTW3 cards? I'm going to need to pick up a motherboard with a ton of spacing. I am trying to avoid watercooling personally but am curious about thermals on the top card in an SLI setup while on air.


I've done it in the past and it's a no-no. Then I tried AIO coolers, but they just move the bottleneck to another place inside your case. Finally I went full custom watercooling, and I can say that for SLI it's the only way to prevent thermal throttling.


----------



## profundido

cstkl1 said:


> 2010 avg on gpu test 1 ....1965 avg on gpu test 2
> 
> cant wait to get some waterblock and break 17k/8k for graphic scores.
> 
> thermal throttle badly on gpu test 1. gpu test 2 thermal+powerlimit throttle.


Nice result! What exact card did you use?


----------



## Zemach

Gigabyte Windforce RTX 2080 Ti, using the Galax BIOS, TDP 126% (380W).
Time Spy test results:
https://www.3dmark.com/3dm/29048627


----------



## StullenAndi

zhrooms said:


> Are you sure? GPU-Z was specifically updated to enable saving of Turing BIOS


Only on testbuild I guess.

https://www.techpowerup.com/forums/threads/gpu-z-test-build-bios-saving-on-turing.248059/


----------



## G woodlogger

GAN77 said:


> Any info about Samsung GDDR6 memory cards?
> Why just micron?


NVIDIA has a deal with Micron to only use their memory for 2 months, or something like that.


----------



## BigMack70

zhrooms said:


> Have you not been reading the thread? It is extremely easy, fast and safe to flash to the Galax 380W, also what rumored? Galax 380W was leaked almost a week ago and now tested by 10+ members here, I've hosted and linked it at the bottom of the original post.


Any chance you could post the steps to flash a new BIOS onto a card in the OP?


----------



## raider89

xer0h0ur said:


> Even though there is someone in this thread using an EK blocked and shunt modded card? How exactly did he manage to not overheat his components if that was the case?


From what I'm getting from others, it was a rumor, or someone not knowing how to install it. But I would say that just because some parts of the card aren't touching doesn't mean it would overheat, lol.


----------



## GAN77

G woodlogger said:


> NVIDIA has a deal with Micron to only use their memory for 2 months, or something like that.


Thank you for the information!


----------



## zhrooms

gavros777 said:


> which 2080 ti brand is best for overclocking?





ilmazzo said:


> none since they are all power limited





dpoverlord said:


> Interesting, I was considering upgrading my 1080ti. So it's just cosmetic?


 
No, they have different power limits out of the box. The baseline is the Founders Edition at 320W, but many cards are at 300W or even less, which is bad; even the 320W offered by the Founders Edition is far from enough to prevent throttling. The ideal BIOS is 400W and above (on a card capable of that), which no card has so far.

1. KFA2/Galax OC = *380W* (Reference PCB)
2. EVGA FTW3 = *373W* (Custom PCB)
3. Gigabyte Gaming OC = *366W* (Reference PCB)
4. EVGA XC/XC Ultra = *338W* (Reference PCB)
5. MSI Gaming X Trio/Sea Hawk X = *330W* (Custom PCB)
6. ROG Strix = *325W* (Custom PCB)
7. NVIDIA Founders Edition = *320W* (Reference PCB)
X. The rest *300W* or lower (some confirmed/unconfirmed)

All reference cards can be flashed to the KFA2/Galax 380W BIOS, so if you're willing to risk it, buying the cheapest reference card is probably the best option.

Most air coolers perform about the same; there won't be a 10°C+ difference between one 3-fan cooler and another.

If you don't want to replace the BIOS, then the list applies; I'd probably recommend the Gigabyte Gaming OC over the KFA2/Galax, because it's not 100% confirmed that the latter actually ships with the 380W BIOS.

The question is not "which is best at overclocking" but rather "which card throttles the least". All cards use the same GPU and voltage (that we know of right now), so they technically overclock the same; it's just luck if you get one that manages a slightly higher overclock.
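For a rough picture of how power-limit percentages map to watts, a quick sketch, assuming the ~300W base value that makes the 127% slider setting mentioned elsewhere in the thread land at roughly 380W (check your own card's base limit with GPU-Z rather than trusting this number):

```python
def power_limit_watts(base_w, limit_pct):
    """Convert a power-limit slider percentage to an absolute wattage."""
    return base_w * limit_pct / 100

# 127% on an assumed 300W base lands near the oft-quoted 380W ceiling.
print(power_limit_watts(300, 127))  # 381.0
print(power_limit_watts(300, 110))  # 330.0
```

The same arithmetic run backwards (target watts divided by base) tells you which slider percentage to set for a given wattage target.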
 


arrow0309 said:


> I still want one single proof that they're the same pcb (Windforce and Gaming)


 
1. Website does not mention it being Custom PCB
2. Product pictures show the dimensions of a Reference PCB card
3. Both Non-Ti and Ti have the exact same PCB dimensions and cooler
4. Non-Ti version is reviewed and confirmed to be Reference PCB
5. The Boost clock value matches other Reference PCB cards
6. Card was shipped with 260W BIOS like any other Reference PCB card

There is absolutely nothing indicating that it would be a custom PCB. EK and Watercool do not list it as compatible with their waterblocks, but that doesn't mean it is not a reference PCB (the non-Ti is also listed as incompatible, yet is confirmed to be a reference card).
 


ilmazzo said:


> What I said goes over the decision to upgrade or not from a 1080ti, you only can judge if the 6-700$ dollar at minimum to upgrade worth it or not.


 
Not worth it for most here, I'm sure; small difference in most games at 1440p, even at high refresh rates.

RTX 2080 Ti Overclocked can push *over 150 FPS* in ArmA 3, Fallout 4, Grand Theft Auto V, The Division and The Witcher 3 at *max settings*.

The card has literally outgrown high-refresh 1440p. If you play exclusively at that resolution, buy an RTX 2080 Ti, and don't plan to upgrade your monitor for years, then there'd be no reason at all to upgrade from it to an RTX 3080.

Personally I play single-player games at the highest resolution I can while maintaining 60 FPS, which means 4K and above: newer games at 4K and older games at even higher resolutions, all at 60 FPS (with the card overclocked, on water, and with the 380W BIOS).
 


profundido said:


> That means in your case your power supply should be at least 850W preferably silver/gold.


 
No, the difference between bronze/silver/gold/platinum is negligible; on paper, sure, you can measure it, but in the real world it doesn't mean much. Running a single GPU, even overclocked (380W), is more than fine on a 600W unit; only if you push both the CPU and GPU to the limits will you be stretching it, and even then games don't fully max out your system.

It's also been shown that getting those last 20-30 MHz requires a decent amount of power limit, which you could simply skip if you don't want to push the PSU: settle for 1-2% less GPU performance and avoid those nasty 500W peaks.

He will be fine on the TX750M, even pushing the limits, but if he wants to run it completely risk free, just lower the power limit a bit. It's really that simple.
 


StullenAndi said:


> Only on testbuild I guess.
> 
> https://www.techpowerup.com/forums/threads/gpu-z-test-build-bios-saving-on-turing.248059/


 
Yes, I posted that twice in the thread and have had it linked in the original post since it was uploaded to TechPowerUp, but I guess people don't bother to read.

Also surprised there are barely any BIOS uploads at the TechPowerUp BIOS Collection.

The cards I know are out right now (seen people say/show they have them).

ASUS Turbo _(On TechPowerUp)_
ASUS Dual _(On TechPowerUp)_
EVGA Black
EVGA XC/XC Ultra _(On TechPowerUp)_
GAINWARD Phoenix _(On TechPowerUp)_
GIGABYTE Windforce
GIGABYTE Gaming _(On TechPowerUp)_
INNO3D X2 OC
MSI Duke _(On TechPowerUp)_
MSI Gaming X Trio
MSI Sea Hawk X
NVIDIA Founders Edition
ZOTAC Triple Fan
ZOTAC AMP

Cards I am not sure I have seen people with yet,

COLORFUL iGame Advanced
GALAX/KFA2 OC
INNO3D Gaming OC X3
MSI Ventus
PALIT GamingPro _(On TechPowerUp)_
PNY XLR8 Gaming OC

There should be at least 15 unique BIOSes out there right now; TechPowerUp only has 7.


----------



## carlhil2

Put water block on my Zotac yesterday, boosts to 2040 stock..


----------



## zhrooms

BigMack70 said:


> Any chance you could post the steps to flash a new BIOS onto a card in the OP?


 
I don't know the steps myself, as I don't have a card and likely won't for another 2 weeks minimum, even though I placed my pre-order like 10 minutes after reveal. Seen multiple dudes (Scandinavian) in the last few days get the card I ordered, but I guess the store I placed my order at is the spawn of satan, and I will avoid them in the future for sure.

The only way I can put the steps up right now is if some helpful members here write the steps for me (and agree upon them together).

So to anyone that has done it, write the steps (other members confirm them), preferably also for new card owners to try them for verification, then I'll be confident enough to push them out there for everyone.


----------



## carlhil2

Stock and OC average clocks during Valley loop..


----------



## nycgtr

zhrooms said:


> Unlikely, Gigabyte has 366 W BIOS on their Reference Card (Gaming OC) and KFA2/Galax has 380 W BIOS on their Reference Card (OC, not confirmed).
> 
> 
> 
> 
> A lot of hassle? Takes like 30 seconds and completely safe.
> 
> 260W = 94.1 FPS
> 320W = 100.8 FPS +7.1%
> 380W = 104.2 FPS +3.4% (+10.7% from 260W)
> 
> Most users never touch overclocking, just install the card and play, so comparing performance from that person to someone who simply switches BIOS and turns up power limit to 127% (380W), 110 FPS instead of 100 FPS, from doing something that takes 30 seconds and virtually risk free.
> 
> 
> 
> Your PSU has more than enough juice, no issues running 380 W BIOS, a PSU should shut off when safeguards are triggered, to prevent any damage.
> 
> 
> 
> 
> Are you sure? GPU-Z was specifically updated to enable saving of Turing BIOS.
> 
> 
> 
> I don't know what you expected, we've known for 3 weeks now that 320 W Founders Edition is throttling like crazy, and for almost a week now we've known even Galax 380W BIOS is throttling.
> 
> 
> 
> No, NVIDIA did that just "because", Boost is and has always been a plague for enthusiasts/overclockers, no different this time.
> 
> 
> 
> The EVGA XC Gaming & EVGA XC Ultra are the same cards, just different cooler, the Ultra has a way thicker one (3 Slot), it's probably the best air cooler on the market right now, so it's the *last* card you should buy if you are going to replace it with a waterblock.
> 
> MSI Duke OC and EVGA XC Ultra are also the same cards, underneath the cooler, at the moment the biggest difference is the BIOS, EVGA has an official update that does not void warranty, that brings it up to 338 W, which is higher than MSI Duke OC. But if you don't care about *risking* warranty you could just keep Duke OC and flash the 380W Galax BIOS and it will be way faster than EVGA XC Ultra (unless you also flash it, then they're both the same).
> 
> Last we have the warranty, EVGA *openly* allows you to replace thermal paste and cooler while keeping warranty completely intact, MSI does that too but not publicly, they try to scare you by placing a sticker on one screw to restrict disassembly, but they are forced under EU law to still accept it (RMA). So basically you're safe either way, just extra safe with EVGA (That's why I always favor/recommend them over any other partner). If I were you I'd just keep the MSI because you already have it, not worth waiting weeks just to potentially save you the card in a future RMA process, which is *extremely unlikely* to happen in the first place, actually killing a card is way harder than most think, you can drop it on the floor without the heavy cooler (from a reasonable height) 10 times, put it in the oven, over the sink, will very likely be perfectly fine. Patience is key when replacing thermal paste/cooler.
> 
> 
> 
> Or you simply don't buy the Ultra in the first place, as earlier said the whole point of the card is the massive Air Cooler, if you're going to water cool just buy the Black or XC, stay far away from the Ultra.
> 
> 
> 
> Have you not been reading the thread? It is extremely easy, fast and safe to flash to the Galax 380W, also what rumored? Galax 380W was leaked almost a week ago and now tested by 10+ members here, I've hosted and linked it at the bottom of the original post.
> 
> 
> 
> Ye that makes no sense, the amount of respect and trust earned by EK over the years is crazy, there is no way they would ever make such a mistake, especially not on a flagship release like this. They'd be gone years ago if they frequently killed cards.
> 
> 
> 
> Gaming OC has the official 366 W BIOS update, while Windforce does not (at least not yet), but if you're willing to risk warranty you could just flash the Windforce with Galax 380 W BIOS and it'd be even faster than Gaming OC (which you can also flash though).
> 
> Other than that same card, different cooler.


I've done plenty of BIOS flashing on KPEs and Classifieds. From what I see others posting, 380W vs 330W isn't a huge gain in actual numbers, not the 100+ on Fire Strike. I've had GPUs die suddenly; I am not going to create $1200 bricks because the card took a dump with the wrong manufacturer's BIOS on it, for minimal gains.


----------



## exploiteddna

Addsome said:


> The galax 380W bios is already in the wild, why dont you flash that?


Where? I found this pack of BIOS files from this post, but I don't think any of those is the 380W one, is it?



Vipeax said:


> Excellent! And no, please share the default one before you flash. I already extracted the 130% one from the updater myself (see https://drive.google.com/drive/folders/10VgqTD1N5l81-0ZiUXQ2YcPPllRcBM3t), but I haven't gotten my hands on the default one yet.


Thanks for sharing all the BIOSes you've collected so far.



vmanuelgm said:


> I managed a better result with Gigabyte bios on air because of its power limit than with the original one from Gainward.
> But to see the cards perform as they should, power mod and watercooling are needed. I recorded this video of Shadow of Tomb Raider bench, with power mod (soldered), watercooling and original bios from Gainward (115%=300w)
> https://youtu.be/i041QNZWkdE


Can you give me any helpful info on the soldered power mod? Not that I want to go soldering my new $1250 card, but I used to love doing voltmods back in the day, so who knows, maybe I'll consider this.



xer0h0ur said:


> Look at all these brave souls using EK waterblocks since we have people assuring us the components aren't making contact with the block's pads. /sarcasm


Can you point me to some posts that demonstrate the problems people are having with EK blocks fitting the reference 2080 Ti cards?


----------



## GraphicsWhore

carlhil2 said:


> Put water block on my Zotac yesterday, boosts to 2040 stock..


Does it stay there under load?


----------



## carlhil2

Graphics***** said:


> Does it stay there under load?


Nope, average during Valley was 2022mhz..


----------



## GraphicsWhore

carlhil2 said:


> Nope, average during Valley was 2022mhz..


Oh but it looks like you're just on 100% PL? Have you raised it to max? And I assume that's the stock Zotac BIOS?

Can't wait to get the f(#* out of work today and go home and slap the EK block on my XC. Going to run it through stock BIOS and then flash the Galax.


----------



## zhrooms

nycgtr said:


> From what I see others posting the 380 watt vs 330 watt isn't a huge gain in actual numbers not the 100+ on firestrike.


 
It's not supposed to be a _huge_ gain; the card does not overclock higher, it just throttles less. You gain a few extra percent of performance from it. I'm an enthusiast and I care about that; the average Joe does not.
 


michaelrw said:


> Where?
> 
> Any helpful info on the soldered power mod? not that i want to go soldering my new 1250$ card, but i used to love doing voltmods back in the day so who knows maybe ill consider this
> 
> Cn you point me to some posts that demonstrate the problems people are having with EK blocks fitting the reference 2080Ti cards


 
Everything you need (Tools/BIOS) is in the original post of this thread at the bottom.

I don't see any reason to physically mod your card unless you're pushing for high benchmark scores, as the 380W BIOS removes most throttling. It's been shown that it still throttles a lot in Time Spy Extreme, but in most other workloads barely at all. _(Correct me if I'm wrong)_

There are no problems with the EK blocks, just misunderstandings/misinformation.


----------



## ottoore

Silent Scone said:


> Honestly, I didn't bother pushing it much too far on air just because of the negligible gains, but it tops out now around 2100MHz


Thank you. I would have liked to know if there was some sort of scaling from reducing memory temps.


----------



## GAN77

ottoore said:


> Thank you. I would have liked to know if there was some sort of scaling reducing mems temps.


It was covered earlier in the thread: Boost steps with temperature every 10 degrees, and at every step you lose 15-17 MHz.
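The stepping behavior described above can be modeled very roughly. The reference temperature and per-step drop below are illustrative guesses based only on this post ("10 degrees per step, minus 15-17 MHz"), not published NVIDIA figures:

```python
def boosted_clock(max_clock_mhz, temp_c, ref_temp_c=35, step_c=10, drop_mhz=15):
    """Estimate the boost clock after GPU Boost temperature stepping.

    Each full step_c degrees above ref_temp_c costs roughly drop_mhz.
    """
    steps = max(0, (temp_c - ref_temp_c) // step_c)
    return max_clock_mhz - steps * drop_mhz

print(boosted_clock(2100, 35))  # 2100 (cool card, no steps yet)
print(boosted_clock(2100, 65))  # 2055 (three 10-degree steps down)
```

This is why watercooling raises sustained clocks even without touching the offset: it keeps the card below the temperature steps entirely.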


----------



## Fiercy

So guys, I don't know if it was mentioned here, but the Gigabyte Windforce card does not support the EK waterblock. The PCB is about 1mm longer than it should be, which is not the worst of it.

One of the fan headers is right where the block should make tight contact. I had to basically bend the pins on it and then isolate them with electrical tape to put the block on normally.

In a pinch it can be done on the Gigabyte, but I am wondering if I can bend those pins back later so I can keep the warranty.


Do you think I should have just waited for my reference card?


----------



## Addsome

zhrooms said:


> I don't know the steps myself, as I don't have a card and likely won't for another 2 weeks minimum, even though I placed my pre-order like 10 minutes after reveal. Seen multiple dudes (Scandinavian) in the last few days get the card I ordered, but I guess the store I placed my order at is the spawn of satan, and I will avoid them in the future for sure.
> 
> The only way I can put the steps up right now is if some helpful members here write the steps for me (and agree upon them together).
> 
> So to anyone that has done it, write the steps (other members confirm them), preferably also for new card owners to try them for verification, then I'll be confident enough to push them out there for everyone.


Its the same as pascal. I just followed the instructions from this post: https://www.overclock.net/forum/69-nvidia/1627212-how-flash-different-bios-your-1080-ti.html

Step 1: Extract nvflash to a folder

Step 2: Open elevated command prompt and cd to extracted folder

Step 3: Turn protection off so you can flash the BIOS.
nvflash64 --protectoff

Step 4: Backup original bios.
nvflash64 --save filename.rom

Step 5: Flash new bios.
nvflash64 -6 biosfilename.rom

I'm not sure whether we have to run the .bat file to apply the proper power limit, though; can anyone confirm?
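The three nvflash steps above can also be wrapped in a small script. This is only a sketch around the exact commands listed in the post (it assumes `nvflash64` is on your PATH and that you run it from an elevated prompt); the dry-run default prints the commands instead of executing them, so nothing touches the card until you opt in.

```python
import subprocess

def flash_commands(backup_rom, new_rom):
    """Return the nvflash command sequence from the steps above."""
    return [
        ["nvflash64", "--protectoff"],        # step 3: disable write protection
        ["nvflash64", "--save", backup_rom],  # step 4: back up the original BIOS
        ["nvflash64", "-6", new_rom],         # step 5: flash the new BIOS
    ]

def flash(backup_rom, new_rom, dry_run=True):
    cmds = flash_commands(backup_rom, new_rom)
    for cmd in cmds:
        if dry_run:
            print(" ".join(cmd))  # inspect before committing
        else:
            subprocess.run(cmd, check=True)  # requires an elevated prompt
    return cmds

flash("original.rom", "galax380.rom")  # dry run: prints the three commands
```

Keep the saved `original.rom` somewhere safe; it's your only way back to stock.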



michaelrw said:


> Where? I found this pack of bios files from this post, but i dont think any of those is 380W is it?
> 
> thanks for sharing all the bios youve collected so far.
> 
> 
> can you give me any helpful info on the soldered power mod? not that i want to go soldering my new 1250$ card, but i used to love doing voltmods back in the day so who knows maybe ill consider this
> 
> 
> can you point me to some posts that demonstrate the problems people are having with EK blocks fitting the reference 2080Ti cards


The galax 380W bios is at the bottom of OP.


----------



## GraphicsWhore

For those of you who have blocked: did you re-apply thermal paste? Brand new card so I figure probably not necessary but factory TIM jobs can be questionable.


----------



## OleMortenF

Fiercy said:


> So guys don’t know it was mentioned here but the gigabyte wind force card does not support ek waterblock. The PCB is about 1mm longer then it should be which is not the worst of it.
> 
> One of the fan headers is right where’re the block should make tight contact. I had to basically bend the pins on it and then isolate them with electrical scotch to put the block on normally.
> 
> In a bit she’ll it can be done on the gigabyte but I am wondering if I can bend those pins back later so I can have warranty.
> 
> 
> Do you think I should have just waited for my reference card?


Seriously? Is it the 2080 Ti?
GIGABYTE GeForce RTX 2080 Ti WINDFORCE OC 11G is on the EK Compatibility list
https://www.ekwb.com/configurator/waterblock/3831109810477


----------



## zhrooms

Fiercy said:


> So guys don’t know it was mentioned here but the *gigabyte wind force* card does *not* support *ek* waterblock.
> 
> *One of the fan headers* is right where’re the block should make tight contact. I had to basically *bend the pins* on it to put the block on normally.
> 
> *It can be done* on the gigabyte but I am wondering if I can bend those pins back later so I can have warranty.


 


OleMortenF said:


> Seriously? Is it the 2080 Ti?
> GIGABYTE GeForce RTX 2080 Ti WINDFORCE OC 11G is on the EK Compatibility list


 
*Edit:* Fiercy actually has Gaming OC, not Windforce?


----------



## ottoore

GAN77 said:


> The topic was written, temperature step boost 10 degrees. At every step, minus 15-17 MHz.


We're talking about mems not core.


----------



## TahoeDust

EVGA just posted the boost clocks for the 2080 TI FTW3... 1755MHz

That is a lot higher than everyone else's, but it probably still maxes out the same as everyone else's.


----------



## zhrooms

TahoeDust said:


> EVGA just posted the boost clocks for the 2080 TI FTW3... 1755MHz
> 
> That is a lot higher than everyone else's, but it probably still maxes as the same as everyone else.


 
No, MSI has 3 cards all with 1755MHz already, Gaming X Trio, Sea Hawk X and Sea Hawk EK X. Check the original post for the list.


----------



## Zemach

zhrooms said:


> We knew the Gaming OC was not on the compatiblity list, only the Windforce, apparently it is both. Just made this masterpiece gif to show the problem.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> The above is OC but Windforce is the same, look at the official Windforce promo pic, right above the G in Gigabyte you see the 4-pin connector.
> Also check out this pic from WCCFTech of their review of the Gaming OC, closeup pic of the connector.
> Next to the 8-Pin connectors on the EVGA XC there is no 4-pin.
> 
> _R.I.P Gigabyte_


Gaming OC








Windforce


----------



## coolbho3k

Got my Gigabyte Gaming OC yesterday. Best it can do after flashing the Galax BIOS is +100 on the core. Otherwise I get crashing in 3dmark. Average boost is around 2000 MHz on max power target with this BIOS and +100. Pretty disappointing lottery pick.


----------



## zhrooms

Zemach said:


> Windforce


 







*Now this is juicy!*

So you're saying @Fiercy is lying? 

But seriously, he literally believes he has Windforce when he actually has Gaming OC?


----------



## Zemach

zhrooms said:


> *Now this is juicy!*
> 
> So you're saying @Fiercy is lying?
> 
> But seriously, he literally believes he has Windforce when he actually has Gaming OC?


----------



## zhrooms

Zemach said:


> (Pictures of Windforce)


 
Dude chill with the image spam, *we believe you!* By the looks of his PCB he has Gaming OC (with Windforce cooler), not the actual Windforce card.

Thanks for the first 2 pics, they clearly show the difference between the cards, and why Gaming OC is not compatible (because of the 4-Pin) which Windforce does not have (and is compatible).


----------



## Addsome

What's the command to check my wattage limit, to make sure the new BIOS is working?


----------



## Zemach

zhrooms said:


> Dude chill with the image spam, *we believe you!* By the looks of his PCB he has Gaming OC (with Windforce cooler), not the actual Windforce card.
> 
> Thanks for the first 2 pics, they clearly show the difference between the cards, and why Gaming OC is not compatible (because of the 4-Pin) which Windforce does not have (and is compatible).


I think it's a Gaming OC, not a Windforce; I have both, and the Gaming OC and Windforce PCBs are not the same.


----------



## SimonOcean

Xeq54 said:


> There is no problem. I have the block too, I checked the vrms and GDDR6 and all make contact. When I removed the block all the pads had indentations from the components they were supposed to cool.
> 
> I think this whole rumor started by one person many pages back, he thought that the chokes needed to be cooled and placed pads on the chokes which were not making contact. But that is by design and its how it should be.


Thanks for that Xeq54.

I know that EK had well-publicised problems with various attempts at Threadripper CPU blocks and monoblocks. However, my experience with their GPU blocks has been positive, so this scare story surprised me; and as you say, especially on a flagship product. I don't think their reputation would survive another massive screw-up so soon after the Threadripper debacle (which they owned up to reasonably quickly).

Really, people ought to be responsible about putting out rubbish stories that could trash a company's reputation when they can't legitimately back up the criticism.


----------



## Silent Scone

Krzych04650 said:


> Default GPU Boost behavior is really chaotic. When it hits power limit, it violently drops clocks, but not voltage. There were moments in Time Spy when I was dropping clocks by even as much as 200 MHz vs target, while voltage was only dropped by something like 30 mV. Makes no sense at all, 200 MHz throttle allows for dropping at least 100 mV.
> 
> So I have stress tested all frequencies from 1995 to 2100 (not getting above 2100 with 330W power limit anyway) for stable voltages to assign specific voltage to each frequency so throttling actually makes some sense and is dropping voltage proportionally to clocks, not just clocks. So 2100 is 1.025, 2085 is 0.987, 2070 is 0.975, ... ,2010 is 0.918 and 1995 is 0.900. No need to go lower.
> 
> It brought up my Time Spy graphics score by 500 points, almost hitting 16 000. Not much more that I can possibly do until there are any reliable ways to get these rumored 373-380W bios flashes.
> 
> 
> 
> Run some Time Spy and check the graphics score on stock. 16500-16600 is around what you can get after modding. Around 16000 is what you get if you take your time tweaking with what you have (330-338W bioses) and 15500 is quick OC or Scanner OC.



It's possible to dial a lot of that out through the VF curve.
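The per-frequency voltage table Krzych04650 describes is essentially a VF curve. As a sketch only (Afterburner's curve editor does this graphically), here is that idea in code, using just the frequency/voltage pairs quoted above; the linear interpolation between points is my own simplification:

```python
# Frequency (MHz) -> stable voltage (V), pairs taken from the quoted post.
VF_POINTS = [(1995, 0.900), (2010, 0.918), (2070, 0.975),
             (2085, 0.987), (2100, 1.025)]

def voltage_for(freq_mhz):
    """Linearly interpolate a voltage target for a given frequency."""
    pts = sorted(VF_POINTS)
    if freq_mhz <= pts[0][0]:
        return pts[0][1]   # clamp below the lowest tested point
    if freq_mhz >= pts[-1][0]:
        return pts[-1][1]  # clamp above the highest tested point
    for (f0, v0), (f1, v1) in zip(pts, pts[1:]):
        if f0 <= freq_mhz <= f1:
            t = (freq_mhz - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)

print(voltage_for(2100))  # 1.025
print(voltage_for(2040))  # midway between the 2010 and 2070 points
```

The payoff described in the quote is that when the power limit forces a clock drop, voltage drops proportionally with it instead of staying pinned high.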


----------



## Silent Scone

https://www.3dmark.com/hall-of-fame-2/timespy+3dmark+score+performance+preset/version+1.0/2+gpu

Stand aside, Imperialism. One card still on air, both top out around the same (2115-2130MHz).


----------



## kx11

ordered Phanteks WB and they updated their comp. list and included many GPUs 



http://www.phanteks.com/PH-GB2080TiFE.html


----------



## Alonjar

Well dang it, guys... I managed to reserve a Gigabyte Windforce 2080 Ti at Microcenter, but all the info I'm reading makes it sound like a pretty bad card compared to most others when it comes to overclocked performance. Their listing was super generic, just said Gigabyte overclocked triple fan; I thought I was getting a Gaming OC, but checking the manufacturer model # it looks like it's the Windforce.

I really don't mind spending an extra $50-100 to get a much better card. Should I just cancel the reservation and continue waiting impatiently for a better one to show up somewhere? On the monitor I play on, a 5-10 fps difference could be significant...


----------



## raider89

Trying to keep up with the replies on here, but are people with the 2080 Ti XC Ultra flashing the 380W BIOS? I have the EVGA 130% BIOS currently; not sure if the Galax BIOS is higher?


----------



## gavros777

Which 3-fan brand is the quietest?
I wanna buy the EVGA FTW3, but I had a bad experience with a very loud dual-fan GPU of theirs in the past.

Are the 3-fan GPUs as quiet as the Accelero Xtreme 3/4 fans?


----------



## xer0h0ur

SimonOcean said:


> I am potentially interested in your post, but with the sarcasm it is a little obtuse... Are you saying that there is a problem with EK's RTX Vector waterblocks? Do you have any threads that you can link that describe or illustrate the problem? Genuinely interested as I ordered the EK RTX waterblock, have it ready to install, but am waiting for delivery of my GPU. If the block is no good I have time to order a Hydro Copper from EVGA.
> 
> Thank you.





Xeq54 said:


> There is no problem. I have the block too, I checked the vrms and GDDR6 and all make contact. When I removed the block all the pads had indentations from the components they were supposed to cool.
> 
> I think this whole rumor started by one person many pages back, he thought that the chokes needed to be cooled and placed pads on the chokes which were not making contact. But that is by design and its how it should be.


This. It was sarcasm aimed at the people making up a false narrative. The EK blocks are just fine. I will be mounting mine tomorrow soon as I get my card delivered.


----------



## FarisLeonhart

While playing FFXV (4K, Ultra), power reached 386W, averaging 320-362W (98% GPU load), with the Galax BIOS and fans at 100%.

https://i.imgur.com/nWjDyVE.png

core clock avg: 2085-2100


----------



## arrow0309

Silent Scone said:


> https://www.3dmark.com/hall-of-fame-2/timespy+3dmark+score+performance+preset/version+1.0/2+gpu
> 
> Stand aside, Imperialism. One card still on air, both top out around the same (2115-2130MHz).


Nice score, congrats! 

:specool:



kx11 said:


> ordered Phanteks WB and they updated their comp. list and included many GPUs
> 
> 
> 
> http://www.phanteks.com/PH-GB2080TiFE.html


A fancy solution for both the 2080 and 2080 Ti on the reference PCB.
Do they sell their own backplate as well, or are you supposed to keep the original one?



gavros777 said:


> which brand with 3 fans is the most quiet?
> I wanna buy the evga ftw3 but i had a bad experience with a very loud dual fan gpu of them i bought in the past.
> 
> Are the 3 fan gpus as quite as the accelero xtreme 3/4 fans?


Asus Strix if you like it and wanna spend the extra buck for a nice custom: 

https://www.techpowerup.com/reviews/ASUS/GeForce_RTX_2080_Ti_Strix_OC/32.html
https://www.guru3d.com/articles_pages/asus_geforce_rtx_2080_ti_rog_strix_preview,10.html

Or grab a Duke if you like the (cheaper yet unavailable) reference design:

https://www.techpowerup.com/reviews/MSI/GeForce_RTX_2080_Ti_Duke/32.html


----------



## gamingarena

nycgtr said:


> Well did some testing on the trio last night. with 110% power 330w. Fan at 100% I got a max temp of 63c could do +115 stable on the core. 120 was bench but not game stable. Boosts as high as 2175 but on avg sat at 2130-2150.


How loud are the fans at 100%? FE blower-card loud, or tolerable?


----------



## Fiercy

Sorry for the confusion guys I my self belived that my card is windforce but it’s not it’s 2080ti oc. 

So I guess me buying what’s available on Newegg didn’t really worked out for me. 

Question to anyone here who knows will I be able to repair or bend those pins back? In case I need to put air cooler back ?

And what was that top connection fan or rgb light?

Thank you. This things is beautiful


----------



## carlhil2

Silent Scone said:


> https://www.3dmark.com/hall-of-fame-2/timespy+3dmark+score+performance+preset/version+1.0/2+gpu
> 
> Stand aside, Imperialism. One card still on air, both top out around the same (2115-2130MHz).


Very nice, you have some very good scores in 3DMark. Good job with your 2080 Tis. If you had a 16/18-core chip, you really would be crushing it. Me jelly...


----------



## Krzych04650

Uploading MSI Seak Hawk X bios


----------



## gavros777

newegg lists a gigabyte 2080 ti combo with a psu in stock for sale at 1335 usd.
should i wait for the evga ftw3 or get this one?
(By the way will it be months before the evga ftw3 goes in stock and for sale?)


----------



## coolbho3k

FarisLeonhart said:


> While playing FFXV (4K-Ultra) power reached 386w and avg 362w (98% gpu load) galaxy bios and fans at 100%.
> 
> https://i.imgur.com/nWjDyVE.png
> 
> core clock avg: 2085-2100


What are your OC settings?


----------



## exploiteddna

zhrooms said:


> No, NVIDIA did that just "because", Boost is and has always been a plague for enthusiasts/overclockers, no different this time.


I remember the first gen to have boost clocks.. we used to call them power states, or power levels, something like that. I know in the extreme OC community we would just force a certain power state so the card would always run at full clocks, or set it up so it could only go to idle or full.. not all this crap in between.


zhrooms said:


> The EVGA XC Gaming & EVGA XC Ultra are the same cards, just different cooler, the Ultra has a way thicker one (3 Slot), it's probably the best air cooler on the market right now, so it's the *last* card you should buy if you are going to replace it with a waterblock.
> 
> MSI Duke OC and EVGA XC Ultra are also the same cards, underneath the cooler, at the moment the biggest difference is the BIOS, EVGA has an official update that does not void warranty, that brings it up to 338 W, which is higher than MSI Duke OC. But if you don't care about *risking* warranty you could just keep Duke OC and flash the 380W Galax BIOS and it will be way faster than EVGA XC Ultra (unless you also flash it, then they're both the same).
> 
> Last we have the warranty, EVGA *openly* allows you to replace thermal paste and cooler while keeping warranty completely intact, MSI does that too but not publicly, they try to scare you by placing a sticker on one screw to restrict disassembly, but they are forced under EU law to still accept it (RMA). So basically you're safe either way, just extra safe with EVGA (That's why I always favor/recommend them over any other partner). If I were you I'd just keep the MSI because you already have it, not worth waiting weeks just to potentially save you the card in a future RMA process, which is *extremely unlikely* to happen in the first place, actually killing a card is way harder than most think, you can drop it on the floor without the heavy cooler (from a reasonable height) 10 times, put it in the oven, over the sink, will very likely be perfectly fine. Patience is key when replacing thermal paste/cooler.
> 
> Or you simply don't buy the Ultra in the first place, as earlier said the whole point of the card is the massive Air Cooler, if you're going to water cool just buy the Black or XC, stay far away from the Ultra.
> 
> Have you not been reading the thread? It is extremely easy, fast and safe to flash to the Galax 380W, also what rumored? Galax 380W was leaked almost a week ago and now tested by 10+ members here, I've hosted and linked it at the bottom of the original post.


Thing is, I'm not in the EU, I'm in the US. Will MSI honor the warranty after I've removed the stock cooler and installed a water block? I honestly forgot that was one of the main reasons I've used EVGA for many generations now. Flashing the BIOS I'm not that concerned about.. I've done it in the past, used to flash BIOSes all the time when I was doing LN2 benching, with private BIOSes from manufacturers. But yeah, I'm a little less of a risk taker now than I once was, and back then I was getting a 7970 Lightning for $500, so it wasn't quite as risky.. $1200+ is a whole new level of risk. A backup BIOS would definitely be a nice addition, on the off chance something gets corrupted and you're unable to flash back to vanilla. Obviously I don't expect MSI to honor the warranty if they receive it back with a different BIOS on it. As long as water blocks are allowed, and I don't damage it so badly that I can't reflash the stock BIOS, I should be good to go. 
Finally, I also want to be careful about pushing that much power into the components. I'm not so concerned about the GPU itself, but rather some of the regulators and other ICs that may not like the additional power. It's set up to handle up to 375 W. I mean, even 400 would probably be fine, but as always you just need to be mindful of what you're doing and watch for any warning signs.
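The 375 W figure mentioned above follows from the PCIe power-delivery allowances: 75 W from the x16 slot plus 150 W per 8-pin connector. A minimal sketch of that budget (spec figures, purely illustrative):

```python
# Illustrative PCIe power-budget math (spec allowances, not measurements).
SLOT_W = 75        # PCIe x16 slot allowance
EIGHT_PIN_W = 150  # per 8-pin PEG connector

def board_power_budget(num_8pin: int) -> int:
    """In-spec board power for a card with the given number of 8-pin plugs."""
    return SLOT_W + num_8pin * EIGHT_PIN_W

print(board_power_budget(2))  # 8+8 pin reference 2080 Ti -> 375
```

In practice cards routinely exceed the per-connector allowance when a high-power BIOS raises the limit, which is exactly the concern raised here.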


----------



## FarisLeonhart

coolbho3k said:


> What are your OC settings?


I found +900 mem and +170 core to be stable on all games right now, I tried 950/180 but some games crashed. 

https://i.imgur.com/NyLMYGD.png


----------



## coolbho3k

FarisLeonhart said:


> I found +900 mem and +170 core to be stable on all games right now, I tried 950/180 but some games crashed.
> 
> https://i.imgur.com/NyLMYGD.png


Damn, that is a good oc.


----------



## FarisLeonhart

coolbho3k said:


> Damn, that is a good oc.


Thank you! Yeah, I'm so happy with it, I was just being greedy trying to reach 950-1K mem lol, and this is on air! I might be able to reach 1K mem under water, but it's not worth paying for since the difference in performance is very small here (temps 50-60 C). I avoid the noise by using closed headphones while the fans are at 100%. (Zotac 2080 Ti AMP)


----------



## nycgtr

gamingarena said:


> How loud are the fans at 100%? FE blower card loud? or tolerable!


like half the noise of a titan blower


----------



## dentnu

gavros777 said:


> newegg lists a gigabyte 2080 ti combo with a psu in stock for sale at 1335 usd.
> should i wait for the evga ftw3 or get this one?
> (By the way will it be months before the evga ftw3 goes in stock and for sale?)


Yea, I have been waiting for my 2080 Ti Gaming X Trio to ship from Newegg for over 2 weeks now, and I am 7th on the list at this point, but they have no idea when they will get more. I am thinking of just buying this combo too, but I have no idea if this card is any good. Can I please get some honest feedback from a few of you who might have this card? Is it worth it, should I pull the trigger?


----------



## raider89

FarisLeonhart said:


> Thank you! yeah I'm so happy with it I was just greedy trying to reach 950-1K mem lol and this is on air! I might be able to reach 1K mem under water but not worth paying for since difference in performance is very small here. (temps 50-60c) I avoid noise by using closed headphones while fans at 100%. (Zotac 2080 Ti AMP)


I have the 2080 Ti XC Ultra and I flashed the Galax BIOS, but it won't let me run Precision X1. Not sure what that's all about, I wonder if it's just me. Might have to use another program. I flashed back to the 130% EVGA BIOS but can only get +85 on the core clock, and I notice it's hitting the power limit, so I believe I need to retry the Galax BIOS and use Afterburner or something.

I also noticed you don't have the voltage slider raised, are we supposed to keep that at 0?


----------



## nycgtr

arrow0309 said:


> Nice score, congrats!
> 
> :specool:
> 
> 
> 
> Fancy solution for both the 2080 and 2080ti's on ref pcb.
> Do they sell their own bp as well or you're supposed to maintain the original one?
> 
> 
> 
> Asus Strix if you like it and wanna spend the extra buck for a nice custom:
> 
> https://www.techpowerup.com/reviews/ASUS/GeForce_RTX_2080_Ti_Strix_OC/32.html
> https://www.guru3d.com/articles_pages/asus_geforce_rtx_2080_ti_rog_strix_preview,10.html
> 
> Or grab a Duke if you like the (cheaper yet unavailable) reference design:
> 
> https://www.techpowerup.com/reviews/MSI/GeForce_RTX_2080_Ti_Duke/32.html


phanteks gpu blocks work with stock backplates.


----------



## Alonjar

gavros777 said:


> newegg lists a gigabyte 2080 ti combo with a psu in stock for sale at 1335 usd.
> should i wait for the evga ftw3 or get this one?
> (By the way will it be months before the evga ftw3 goes in stock and for sale?)


I just snagged one myself. The PSU in the bundle is eligible for return, so you can just return the PSU for your money back once you receive it. I placed the order 20 minutes ago, and the status just changed to "Packaging" and gave an estimated ship date of Oct 5 (tomorrow)


----------



## Alonjar

dentnu said:


> Yea I have been waiting for my 2080 Ti Gaming X trio to ship from Newegg over 2 weeks now and I am 7 on the list to get at this point but they have no idea when they will get more. I am thinking of just buying this combo also but have no idea if this card is any good. Can I please get some honest feedback from a few of you that might have this card. Is it worth it should I pull the trigger?


I don't have one, but the stats (power limit and stock boost clock) with the new official BIOS for the Gaming OC vs the MSI Trio X are:

GIGABYTE Gaming OC   122% (300W base)   366W max   1665 MHz boost
MSI Trio             110% (300W base)   330W max   1755 MHz boost
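For reference, the wattage ceilings quoted above are just the base power multiplied by the power-limit slider percentage; a minimal sketch:

```python
def max_power_w(base_w: float, limit_pct: float) -> float:
    """Board power ceiling at a given power-limit slider percentage."""
    return base_w * limit_pct / 100.0

# Figures from the post above
print(max_power_w(300, 122))  # -> 366.0 (Gigabyte Gaming OC)
print(max_power_w(300, 110))  # -> 330.0 (MSI Trio)
```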


----------



## cstkl1

nycgtr said:


> Or just get the bitspower one coming out. Recent experience if anything to go by.
> 
> #1 you wont loose the awesome backplate on the trio
> #2 it will actually make proper contact lol
> #3 it will cost slightly less or the same.


Yup, got info; will get it on the 22nd of Oct.


----------



## cstkl1

profundido said:


> Nice result ! What exact card did you use ?


Stock MSI Trio.. it's pretty warm right now, so the card hits its thermal limit easily..


----------



## exploiteddna

I'm amazed that it's still available. I would grab one in place of my Duke OC, but Gigabyte is not waterblock friendly; from what I've seen they don't really like the coolers being removed. Oh well.. wish it was an FTW3 combo, xd yeah right


----------



## FarisLeonhart

raider89 said:


> I have the 2080ti XC Ultra and I flashed the Galax bios and it wont let me run Precision X1, not sure what thats all about I wonder if its just me. Might have to use another program, I flashed back to the 130% evga bios but can only get 85 on the core clock and I notice its hitting the power limit so I believe I need to re try to galax and use afterburner or something.
> 
> I also noticed you dont have the voltage slider higher, are we suppposed to keep that at 0?


my evga precision x1 version is 0.2.8.0, I dunno if i have to play with voltage so i left it at 0.


----------



## Alonjar

michaelrw said:


> im amazed that its still available. i would grab one of them in place of my Duke OC, but gigabyte is not waterblock friendly, dont rly like the coolers to be removed from what ive seen. oh well.. wish it was an FTW3 combo xd yeah right


To be fair, I suppose its entirely possible that they dont actually have the cards and the bundle just isnt properly calculating their stock levels... I guess if my order sits in "packaging" limbo for the next few weeks I'll know I messed up, lol


----------



## dentnu

Alonjar said:


> To be fair, I suppose its entirely possible that they dont actually have the cards and the bundle just isnt properly calculating their stock levels... I guess if my order sits in "packaging" limbo for the next few weeks I'll know I messed up, lol


I spoke with Newegg and they just told me its in stock and ready to be shipped. They have the cards but are only selling them in the combo. Your order will ship tomorrow if it says packaging. Newegg does not charge your card or move to package a card unless they have it and it is ready to ship.


----------



## gavros777

dentnu said:


> I spoke with Newegg and they just told me its in stock and ready to be shipped. They have the cards but are only selling them in the combo. Your order will ship tomorrow if it says packaging. Newegg does not charge your card or move to package a card unless they have it and it is ready to ship.


Wow, thanks for the info, I bought one too. I can't wait months for the EVGA FTW3 since one of my Titan X Maxwell SLI cards just died on me.


----------



## EQBoss

FarisLeonhart said:


> Thank you! yeah I'm so happy with it I was just greedy trying to reach 950-1K mem lol and this is on air! I might be able to reach 1K mem under water but not worth paying for since difference in performance is very small here. (temps 50-60c) I avoid noise by using closed headphones while fans at 100%. (Zotac 2080 Ti AMP)


How many rads are you using in your setup?


----------



## arrow0309

Krzych04650 said:


> Uploading MSI Seak Hawk X bios


Hi, thanks! 
I might wanna try it on my "incoming" Duke, what are the power values exactly (default power / max power watts, pl %)? 
Also what is the gaming stable oc you managed to get with this bios?


----------



## torqueroll

Checking in. Got my card just now and I ordered on August 20. My EK block got here almost two weeks ago.


----------



## Edge0fsanity

Anyone running the 380w galax bios on the stock FE cooler? What were your results? How are the temps?

My card comes in today and as soon as i'm done testing the card i'm going to want to put a different bios on and start OC'ing. Just curious if the stock cooler is strong enough to deal with the added temps. I have a full cover waterblock for it but i won't be putting that on until the case i ordered earlier this year ships which could be anywhere from weeks to months from now.


----------



## raider89

Anyone having any luck on overclocking the 2080ti XC Ultra?


----------



## illidan2000

torqueroll said:


> Checking in. Got my card just now and I ordered on August 20. My EK block got here almost two weeks ago.


congrats


----------



## Fiercy

So what's a good overclock (MHz) for these cards? Mine hovers around 2160-2175 in Shadow of the Tomb Raider. Is that an OK overclock? 

So far I've only updated to the new Gigabyte bios they released.


----------



## FarisLeonhart

EQBoss said:


> How many rads are you using in your setup?


One, for CPU only. GPU on triple fans.
https://www.ekwb.com/shop/ek-coolstream-xe-240-double


----------



## dentnu

Fiercy said:


> So what's the good overclock MHZ for a card mine hovers around 2160-2175 in Shadow of The Tomb raider. Is that an OK overclock?
> 
> So far I've only updated to the new Gigabyte bios they released.


Wow, that's a pretty big overclock if it's stable at those clocks. What card do you have?


----------



## asdkj1740

Krzych04650 said:


> Uploading MSI Seak Hawk X bios


Need some folks to read out the power limit, please!


----------



## Fiercy

dentnu said:


> wow that a pretty big overclock if its stable at those clocks. What card do you have?


It's under an EK waterblock. It's the Gigabyte OC version; I found out recently there is actually a big difference between the Windforce OC and the plain OC. For some reason the PCB on mine didn't fully fit the waterblock out of the box... but I made it work.


----------



## tamas970

asdkj1740 said:


> needs some folks to read the power limit please!


Second that, I'd be very much interested! I am deciding between the sea hawk and the Gaming X Trio, water vs massive air. I also wonder how well the PCB's (VRM, etc) are cooled.


----------



## asdkj1740

Got a 2080 Ti Gigabyte Gaming OC to play with.
With the old 260 W stock BIOS, I got 78 C at 2000 RPM in FurMark (0xAA, 1080p) after around 30 minutes at 28 C ambient.

Please be aware of your cooling setup before flashing a high-power BIOS.


----------



## Fiercy

Here's the fps I got in Assassin's Creed® Odyssey Ultrawide Everything on Max. 

Clock speed 2175.

I am happy with this!


----------



## dentnu

Fiercy said:


> It's under EK waterblock. It's gigabyte oc version as I found recently there is actually a big difference between a windforce oc and just oc for some reason pcb from mine didn't fully fit waterblock out of the box... but I made it work.


I almost got that one as part of a combo on Newegg yesterday, but after reading a bunch of threads all over the internet about people's RMAs being refused for opening the card and just replacing the thermal paste, I changed my mind, as I do not want to deal with that type of BS if something happens to the card once I put a water block on it. It seems Gigabyte really does not want you to take their cards apart: from what I read, a few people said they had no warranty stickers on their cards, nor did they tell Gigabyte they took them apart, but Gigabyte still knew they had changed the thermal paste, which is nuts.


----------



## Hulk1988

asdkj1740 said:


> needs some folks to read the power limit please!


MSI Trio 300W/330W Custom PCB
MSI Sea Hawk EK Watercooling 300W/330W Custom PCB
MSI Sea Hawk X Hybrid 300W/330W Reference PCB

The Hybrid version will be the best on the current market because it uses the reference PCB, so you can flash the 380W BIOS and you will stay at 50C.


----------



## Woundingchaney

My card should be arriving sometime today!


----------



## ilmazzo

asdkj1740 said:


> got a 2080ti gigabyte gaming oc to play with.
> with the old bios 260w stock, i got 78c at 2000rpm on furmark 0xaa 1080p for around 30mins with 28c ambient temp.
> 
> please be aware of your cooling setup before flashing high power bios.


I get surprised every time I see someone still using furmark, honestly


----------



## asdkj1740

ilmazzo said:


> I get surprised every time I see someone still using furmark, honestly


for cooling and power testing it is the best.


----------



## asdkj1740

Hulk1988 said:


> MSI Trio 300W/330W Custom PCB
> MSI Sea Hawk EK Watercooling 300W/330W Custom PCB
> MSI Sea Hawk X Hybrid 300W/330W Reference PCB
> 
> The Hybrid version will be the best on the current market because it is reference and you can flash to the 380W Bios and you will stay at 50C.


As always, the MSI Sea Hawk is freakishly conservative on power limit.
Thank you, dude.


----------



## asdkj1740

tamas970 said:


> Second that, I'd be very much interested! I am deciding between the sea hawk and the Gaming X Trio, water vs massive air. I also wonder how well the PCB's (VRM, etc) are cooled.


MSI air cooling is weak when it comes to high power usage, namely >250 W.
Even the new version of the Asus Strix cooler can't beat the cheapest 120 mm AIO.
AIO >>>>>>>>>>>>> air cooler.


----------



## BigBeard86

I installed the Kraken G12 bracket on my EVGA 2080 Ti XC Ultra. Idle temps are 30 C and full load 46 C. The card is OC'd to 2050 MHz stable, but it constantly hits the power limit when trying to go higher, even with the 130% EVGA BIOS.

I read here about the Galax BIOS... where can I get it, how do I flash it, and how much more OC did you guys get from it? Did anyone flash it on an EVGA card? Is it generally safe, considering I am watercooled and have a high-end Corsair 1000 W PSU?


----------



## HeadlessKnight

Got the Zotac AMP Edition since it was the only one in stock. But too bad, I got a very leaky chip: it hits 1995 MHz max boost at stock but throttles like crazy, I've seen it go as low as 1785 MHz. Any other BIOS with a higher power limit tested and working on this particular card?


----------



## Ford8484

FYI guys, if anyone is playing AC Odyssey at 4K with this card: set everything to Very High, and turn the clouds down from Ultra to Very High. Ubisoft as usual ships games that are not optimized, and they don't articulate which settings tank performance; the visual difference between Very High and Ultra clouds is basically a placebo, but the performance cost isn't. Do this and set AA to Low, and I'm averaging 72 FPS in native 4K with a 7700K.


----------



## carlhil2

HeadlessKnight said:


> Got Zotac AMP Edition since it was the only one at stock. But too bad I got a very leaky chip, hits 1995 MHz max boost at stock but throttles like crazy, I've seen it go as low as 1785 MHz. Any other BIOS with higher power limit tested and working on that particular card?


I use the Gigabyte BIOS on mine, haven't had any issues so far... I am at 2115 core / 7800 memory in games at the moment, same for benches.. my GPU boosts to 2040 stock, averages about 2025 under load...


----------



## tamas970

HeadlessKnight said:


> Got Zotac AMP Edition since it was the only one at stock. But too bad I got a very leaky chip, hits 1995 MHz max boost at stock but throttles like crazy, I've seen it go as low as 1785 MHz. Any other BIOS with higher power limit tested and working on that particular card?


Can you post temps, including VRM? The 1080 Ti AMP Edition, to my knowledge, ran its FETs burning hot...


----------



## Esenel

There she is 🙂
Blocking in the evening 🙂


----------



## Addsome

FarisLeonhart said:


> I found +900 mem and +170 core to be stable on all games right now, I tried 950/180 but some games crashed.
> 
> https://i.imgur.com/NyLMYGD.png


Nice man, I got the AMP edition too, flashed the Galax BIOS, and managed to overclock +140 core and +850 mem.



Fiercy said:


> So what's the good overclock MHZ for a card mine hovers around 2160-2175 in Shadow of The Tomb raider. Is that an OK overclock?
> 
> So far I've only updated to the new Gigabyte bios they released.


Thats a monster overclock my friend. Most cards struggle to go over 2100


----------



## domrockt

I ordered my KFA2 a few days ago; they're available on Amazon Europe for just 1249€ at the moment.
So now the waiting game is on.


----------



## raider89

BigBeard86 said:


> i installed the kraken g12 bracket on my evga 2080ti xc ultra. idle temps are 30c and full load 46c. the card is oced to 2050mhz stable, but constantly hits power limit when trying to go higher, even with 130% evga bios.
> 
> I read here about the galax bios...where can i get it, how do i flash it, and how mch more oc did you guys get on it? did anyone flash it on the evga card? is it generally safe, considering i am watercooled and have high end corsair 1000watt psu?



I have the same card but on air but I wasn't successful at getting much overclock. I got the galax bios from this link https://www.overclock.net/forum/attachment.php?attachmentid=220918&d=1538254397

I used these steps to flash it:

Step 1: Extract nvflash to a folder

Step 2: Open elevated command prompt and cd to extracted folder

Step 3: Turn protection off so you can flash the BIOS.
nvflash64 --protectoff

Step 4: Backup original bios.
nvflash64 --save filename.rom

Step 5: Flash new bios.
nvflash64 -6 biosfilename.rom

However, I was not able to use Precision X1; I had to use Afterburner, but didn't have much luck overclocking. The person who was using this BIOS with a nice overclock was using EVGA Precision X1 version 0.2.8, so you might want to try that one if the latest didn't work.


----------



## GraphicsWhore

raider89 said:


> I have the same card but on air but I wasn't successful at getting much overclock. I got the galax bios from this link https://www.overclock.net/forum/attachment.php?attachmentid=220918&d=1538254397
> 
> I used these steps to flash it .
> 
> Step 1: Extract nvflash to a folder
> 
> Step 2: Open elevated command prompt and cd to extracted folder
> 
> Step 3: Turn protection off so you can flash the BIOS.
> nvflash64 --protectoff
> 
> Step 4: Backup original bios.
> nvflash64 --save filename.rom
> 
> Step 5: Flash new bios.
> nvflash64 -6 biosfilename.rom
> 
> However I was not able to use Precision x1 I had to use afterburner but didnt have luck overclocking. The person who was using tis bios with a nice overclock was using EVGA Precision X1 version 0.2.8 so might want to try that one if the latest didnt work.


When you say "didn't have luck overclocking", what kinds of numbers are we talking about?


----------



## raider89

GraphicsWhore said:


> When you say "didn't have luck overclocking", what kinds of numbers are we talking about?


I was able to get 700 on memory and 85 on clock.


----------



## HeadlessKnight

carlhil2 said:


> I use the Gigabyte on mine, haven't had any issues so far...I am at 2115-7800 memory in games at the moment, same for benches..my gpu boosts to 2040 stock, averages about 2025 under load...


Thanks I will try out the GBT bios then. That's a really nice chip you have there. 



tamas970 said:


> Can you post temps, including VRM? The 1080 Ti AMP edition to my knowledge kept its FET's burning...


Using stock BIOS it never goes above 67 C in a warm room.
As for VRM temps, I can't really help much since the card doesn't have VRM temp sensors and I don't have the tools to measure it, sorry.


----------



## iamjanco

I wanted the FTW3, but by the time I paid for it, sales tax, and a block, I would have been at ~$1,660, so I took advantage of another opportunity that was ~$140 on top of that.

I guess it's okay for me to join the "club" now (tentative shipping date Oct. 19th):


----------



## FarisLeonhart

Addsome said:


> Nice man, I got the AMP edition too and flashed with galax bios I managed to overclock +140 core and +850 mem.
> 
> 
> 
> Thats a monster overclock my friend. Most cards struggle to go over 2100


Thanks! It seems memory overclocking gets difficult over +900 in most cases; he should try other games, because I was only able to get +950/+185 in SotTR.


----------



## Jbravo33

I canceled my FE’s yesterday. They truly killed the excitement for me. Will wait for titan. I do have 2 ekwb rgb blocks that got delivered today. I will hold onto nvlink. If anyone is interested pm for the blocks. I’ll take pics when I get to work. As far as EK blocks quality I’ve had no issues with 2 of them on Xp’s and one on my titan V.


----------



## Krzych04650

I have flashed the bios of my Sea Hawk with 380W Galax one. 










Power limit is still hit hard in benchmarks like Time Spy, but in games throttling point was moved significantly higher and now I am maintaining stable 2130-2145 MHz at 1.056V after reaching stable temperature (47C at 100% fan speed). On 330W I was throttling down to 2055-2070 MHz and had to keep voltage considerably below 1.000V to even achieve that.

The ways of GPU Boost are still mostly mysterious and I am not entirely satisfied, but in real world usage (games) clocks are much better and much more stable.


----------



## tamas970

HeadlessKnight said:


> Using stock BIOS it never goes above 67 C in a warm room.
> As for VRM temps, I can't really help much since the card doesn't have VRM temp sensors and I don't have the tools to measure it, sorry.


Thank you anyway! I suspect VRM troubles are behind the OC limitations; I hope people with thermal cameras will jump on these cards. Unfortunately I haven't seen any decent thermal studies on any 2080 Tis so far.


----------



## nycgtr

iamjanco said:


> I wanted the FTW3, but by the time I paid for it, sales tax, and a block, I would have been at $1600--, so I took advantage of another opportunity that was $200 on top of that.
> 
> I guess it's okay for me to join the "club" now (tentative shipping date Oct. 19th):
> 
> View attachment 222130


Oh boy lol. I saw that in their shop and I thought who would pay that much for a 2080ti on a block. Literally what I paid for a used V with a block.


----------



## asdkj1740

Krzych04650 said:


> I have flashed the bios of my Sea Hawk with 380W Galax one.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Power limit is still hit hard in benchmarks like Time Spy, but in games throttling point was moved significantly higher and now I am maintaining stable 2130-2145 MHz at 1.056V after reaching stable temperature (47C at 100% fan speed). On 330W I was throttling down to 2055-2070 MHz and had to keep voltage considerably below 1.000V to even achieve that.
> 
> The ways of GPU Boost are still mostly mysterious and I am not entirely satisfied, but in real world usage (games) clocks are much better and much more stable.


Even der8auer, using an external device to hack the card, and GamersNexus, using a shunt mod under liquid metal with the 400W XOC BIOS, couldn't fully disable the power limit, meaning NVIDIA must have done something further to block the workarounds we had on Pascal or even Maxwell.
Turing has new power monitoring, no more INA3221, so the old trick is dead. Sadly NVIDIA is more and more restrictive towards users.


----------



## cgcross

raider89 said:


> Anyone having any luck on overclocking the 2080ti XC Ultra?


All the scores I hit were with a 2080 TI XC Ultra with the Galax BIOS. 


https://www.reddit.com/r/overclocking/comments/9litw1/2_behind_kingpin_isnt_bad/


----------



## asdkj1740

Damn, wrote a lot but got a 404 and it was all gone.

Let me rewrite it in short:
2080 Ti Gigabyte Gaming OC with the 290 W max BIOS, 8700K at 1.35 V / 5 GHz, Antec NeoEco II 550 W (dual 12 V rails at 30 A each, 540 W max combined, OEM by Delta), 29 C ambient.
I was playing a game and got a shutdown after ~1 hour. According to real-time monitoring in HWiNFO64, the GPU was drawing ~290 W while the CPU drew ~80 W.
The rear ventilation grille on the power supply was super hot, so I guess I triggered the PSU's over-temperature protection, because I couldn't immediately reboot the PC.

Be careful about PSU cooling.


----------



## iamjanco

nycgtr said:


> Oh boy lol. I saw that in their shop and I thought who would pay that much for a 2080ti on a block. Literally what I paid for a used V with a block.


Chuckle, me <--- :kookoo:

If it doesn't work out, I could always trade it for a V


----------



## BigBeard86

Can someone please tell me where to get the Galax BIOS and how to flash it on my watercooled 2080 Ti XC Ultra?


----------



## kx11

BigBeard86 said:


> can someone please tell me where to get the galax bios and how ot flash it on my watercooled 2080ti xc ultra?



first page dude


----------



## kx11

what if someone got the Bios of this one ? 



















who's willing to flash it on his GPU ?!


----------



## exploiteddna

Alonjar said:


> I'm playing on [email protected] monitor so a 5-10 fps difference could be significant...


Which one? But also, if there's a 5-10 FPS improvement at 1080p/60 Hz, then obviously you won't get the same increase on your panel. 



Fiercy said:


> Sorry for the confusion guys I my self belived that my card is windforce but it’s not it’s 2080ti oc.
> 
> So I guess me buying what’s available on Newegg didn’t really worked out for me.
> 
> Question to anyone here who knows will I be able to repair or bend those pins back? In case I need to put air cooler back ?
> 
> And what was that top connection fan or rgb light?
> 
> Thank you. This things is beautiful


I wonder if the light all being pushed to the end of the card is normal. Also, not sure how I feel about the plexi that extends over the PCB, letting you see the PCB.. I like how Phanteks did it, where the plexi is just for seeing the liquid chamber.



FarisLeonhart said:


> While playing FFXV (4K/Ultra), power reached 386 W and averaged 320-362 W (98% GPU load), Galax BIOS and fans at 100%.
> 
> https://i.imgur.com/nWjDyVE.png
> 
> core clock avg: 2085-2100


what card do you have?




Krzych04650 said:


> Uploading MSI Sea Hawk X BIOS


Awesome, many thanks. I may use this BIOS on my Duke if I decide not to use the Galax 380W, and that way keep the BIOS within the same manufacturer. We'll see. Thanks.


----------



## raider89

cgcross said:


> All the scores I hit were with a 2080 TI XC Ultra with the Galax BIOS.
> 
> 
> https://www.reddit.com/r/overclocking/comments/9litw1/2_behind_kingpin_isnt_bad/


Did you use precision x1?


----------



## FarisLeonhart

michaelrw said:


> what card do you have?


Zotac 2080 Ti AMP

AC Odyssey is a 99% constant 60 fps at 4K/Ultra <3 thankfully (low AA, DoF off).


----------



## cgcross

raider89 said:


> Did you use precision x1?


Afterburner, not a fan of X1 interface


----------



## xermalk

cgcross said:


> Afterburner, not a fan of X1 interface


I just wish you could go over +1000 on memory with Afterburner. There have been a few people overclocking them to 1200-1400 MHz.
My coil-whining Zotac AMP seems to handle +1200 just fine with Precision X1. But I'm going to have to return the card on Monday.


----------



## Asmodian

Since it sounds like the shunt mod is a no-go with these cards, I am quite disappointed. Very annoying really.

Does anyone have a mod that removes the power limit (besides Actually Hardcore Overclocking's)? I saw that der8auer's Strix went into safety mode with a shunt mod that should only have doubled the power limit. A BIOS flash for an extra 60 W is not much. 

I don't want to remove the shunts and solder resistors to completely fool the power circuit. It would be good for the GPU to know how much each 8-pin was pulling relative to the others. I am tempted to solder shunts onto my FE card in the hope that it doesn't go into safety mode... but it probably would, and that is a lot of risk to take if it is unlikely to work.


----------



## rush2049

I am getting constant crashes after 30-40 min of gameplay in Forza Horizon 4, on driver 416.16 and the October Windows 10 update.

At first I thought the overclock I applied using the scanner wasn't stable... darn driver sketchiness.


----------



## Cuylertech

Jbravo33 said:


> I canceled my FEs yesterday. They truly killed the excitement for me. Will wait for the Titan. I do have 2 EKWB RGB blocks that got delivered today. I will hold onto the NVLink. If anyone is interested, PM me for the blocks. I'll take pics when I get to work. As far as EK block quality goes, I've had no issues with 2 of them on XPs and one on my Titan V.


PM'd. I'm local to you in Boston and would like to grab one.


----------



## cgcross

xermalk said:


> I just wish you could go over +1000 on memory with Afterburner. There have been a few people overclocking them to 1200-1400 MHz.
> My coil-whining Zotac AMP seems to handle +1200 just fine with Precision X1. But I'm going to have to return the card on Monday.


The little I used X1 it seemed I could only get to about 1025 anyhow, and I'm not nearly as familiar with editing the fan curve in X1. Afterburner was an easy choice with that.


----------



## Cuylertech

cgcross said:


> The little I used X1 it seemed I could only get to about 1025 anyhow, and I'm not nearly as familiar with editing the fan curve in X1. Afterburner was an easy choice with that.


Do you have the issue with X1 being weird with the Galax BIOS? Mine takes a good 2-3 minutes to load, and then if I close it, it'll crash.


----------



## cgcross

Cuylertech said:


> Do you have the issue with X1 being weird with the Galax BIOS? Mine takes a good 2-3 minutes to load, and then if I close it, it'll crash.


Haven't, but haven't used it much either. If you have the new driver I've seen all kinds of weird issues reported since it came out. I'll load X1 up run a scan and see if I can get it near what Afterburner did for me.


----------



## nodicaL

Good news for people who ordered from Canada Computers.
Just got my shipping confirmation from Canada Comp., and pick up confirm from FedEx.

It'll arrive on Tuesday.

Shipping from Markham, Ontario.

Last time I called, they told me that the ETA for the EVGA RTX 2080 Ti XC Ultra would be 2 weeks from the initial Oct 5.
Glad they were wrong here.

Really happy to have this before BO4 launch.


----------



## coolbho3k

raider89 said:


> I was able to get 700 on memory and 85 on clock.


I can only get up to +70 on core, but my card (Gigabyte Gaming OC) is +15 at stock, so I guess +85 total. Haven't tried to push memory yet. 366 W Gigabyte BIOS. In-game I get ~1950 MHz average with peaks in the low 2000s. Maybe we have the "brown cards"


----------



## kx11

Tomorrow my Palit GamingPro OC 2080 Ti arrives, and from reading lots of comments it seems to have a solid BIOS. I don't feel good about flashing another BIOS.


----------



## Hulk1988

UNIGINE Benchmarks 

Place 4. Could you try it as well with your card, please? I have no water block, everything on air.


----------



## raider89

Cuylertech said:


> cgcross said:
> 
> 
> 
> The little I used X1 it seemed I could only get to about 1025 anyhow, and I'm not nearly as familiar with editing the fan curve in X1. Afterburner was an easy choice with that.
> 
> 
> 
> Do you have the issue with X1 being weird with the Galax BIOS? Mine takes a good 2-3 minutes to load, and then if I close it, it'll crash.
Click to expand...

I can't run X1 on the Galax BIOS at all. Afterburner works fine.


----------



## R1amddude

Hey guys, got my 2080 Ti today. Bad news is I can't play Fallout 4 or Mass Effect Andromeda. Both games keep crashing, and this is with no OC. I've reinstalled the new drivers and it still keeps doing it. Looks like Nvidia's drivers need some work; it is most definitely a driver issue.


----------



## raider89

R1amddude said:


> Hey guys, got my 2080 Ti today. Bad news is I can't play Fallout 4 or Mass Effect Andromeda. Both games keep crashing, and this is with no OC. I've reinstalled the new drivers and it still keeps doing it. Looks like Nvidia's drivers need some work; it is most definitely a driver issue.


If it were a driver issue it would be a widespread issue. Might have to RMA.


----------



## R1amddude

raider89 said:


> If it was a driver issue it should be a huge issue. might have to rma



I submitted the report to Nvidia online. The card works perfectly until the game crashes, so I highly doubt it's the card. Also, Windows and web pages work perfectly after the crash.


----------



## Esenel

*2080Ti FE*

Hi all,

If I try to flash my FE BIOS to either the GX or the XC bios of the OP I get the following error message (Attached).
Any ideas?

Otherwise my Timespy Extreme scores are ok. GPU: 7 571

Thanks in advance!


----------



## TahoeDust

R1amddude said:


> I submitted the report to Nvidia online. The card works perfectly until the game crashes, so I highly doubt it's the card. Also, Windows and web pages work perfectly after the crash.


Did you use DDU to uninstall your old drivers? If not, you may want to give that a try.

Restart into Safe Mode
Run DDU
Restart
Install new drivers
Restart


----------



## Sparc145

Anyone able to get the Galax 380w bios flashed on their FE? I'm getting Board ID mismatch:



Code:


.\nvflash64.exe -6 2080TIGX126.rom
NVIDIA Firmware Update Utility (Version 5.513.0)
Copyright (C) 1993-2018, NVIDIA Corporation. All rights reserved.

Checking for matches between display adapter(s) and image(s)...

Adapter: Graphics Device      (10DE,1E07,10DE,12A4) H:--:NRM  S:00,B:06,D:00,F:00

EEPROM ID (9D,7014) : ISSI IS25WP080 1.65-1.95V 8192Kx1S, page

WARNING: Firmware image PCI Subsystem ID (10DE.12FA)
  does not match adapter PCI Subsystem ID (10DE.12A4).
WARNING: None of the firmware image compatible Board ID's
match the Board ID of the adapter.
  Adapter Board ID:        0049
  Firmware image Board ID: 007E

Please press 'y' to confirm override of PCI Subsystem ID's:
Overriding PCI subsystem ID mismatch

NOTE: Exception caught.
Nothing changed!

ERROR: Board ID mismatch


----------



## rush2049

R1amddude said:


> Hey guys got my 2080ti today. Bad news is I cant play Fallout 4 or Mass Effect Andromeda. Both games keep on crashing, and this is with no OC. Ive reinstalled the new drivers and it still keeps on doing it. Looks like Nvidias drivers need some work, it is most def a driver issue.


Use a driver from before version 416.16; I get crashes with it as well.


----------



## Alonjar

Will an RTX 2080 Ti be throttled or limited by running at PCIe 3.0 x8 vs x16?


----------



## Asmodian

I do not seem to be able to flash the Galax 380W BIOS on my Founders Edition 2080 Ti. 



> C:\Tools\nvflash_5.513.0>nvflash64 2080TIGX126.rom -6
> NVIDIA Firmware Update Utility (Version 5.513.0)
> Copyright (C) 1993-2018, NVIDIA Corporation. All rights reserved.
> 
> 
> Checking for matches between display adapter(s) and image(s)...
> 
> Adapter: Graphics Device (10DE,1E07,10DE,12A4) H:--:NRM S:00,B:65,D:00,F:00
> 
> 
> 
> EEPROM ID (9D,7014) : ISSI IS25WP080 1.65-1.95V 8192Kx1S, page
> 
> 
> WARNING: Firmware image PCI Subsystem ID (10DE.12FA)
> does not match adapter PCI Subsystem ID (10DE.12A4).
> WARNING: None of the firmware image compatible Board ID's
> match the Board ID of the adapter.
> Adapter Board ID: 0049
> Firmware image Board ID: 007E
> 
> Please press 'y' to confirm override of PCI Subsystem ID's:
> Overriding PCI subsystem ID mismatch
> 
> NOTE: Exception caught.
> Nothing changed!
> 
> 
> 
> 
> ERROR: Board ID mismatch


----------



## Asmodian

Double post.


----------



## Addsome

Esenel said:


> Hi all,
> 
> If I try to flash my FE BIOS to either the GX or the XC bios of the OP I get the following error message (Attached).
> Any ideas?
> 
> Otherwise my Timespy Extreme scores are ok. GPU: 7 571
> 
> Thanks in advance!


Press y


----------



## R1amddude

rush2049 said:


> Use a driver from before version 416.16; I get crashes with it as well.



Went to the 411 Sept 27th driver. Now I have this weird line/blur towards the top of my screen in games and on web pages. I hope Nvidia rolls out a hotfix soon; this is the worst launch ever for drivers. Gonna go back to the first driver ever released, hopefully that works properly :doh:


----------



## R1amddude

Never mind, I figured out the weird display line/blur/separation towards the top of the screen in games and web pages. If you get it (I had it when I first installed the card, before I installed new drivers but after I deleted the old ones), all you have to do to get rid of it is shut down the computer. If you just restart, it reappears. Don't ask me how I thought to shut down to see if it would work, I'm just smart like that. Note that I have a 43" 4K Dell display and an AMD 2700 on an Asus X370 MB.


----------



## l88bastar

Coming from a Titan X Pascal, the 2080 Ti FE is a pretty sexy card wedged into my ITX rig. Been having fun with it all day; a little too drunk now to continue playing FPS though


----------



## BigMack70

Can someone else test Battlefield 4 with their 2080 Ti? If I put *any* overclock on my card at all, the game crashes at the menus. I can run 30-minute benchmark loops and several other games at +170/+1000, but BF4 crashes at the menus at +140/+0

Is my card just a turd?


----------



## iamjanco

I'm looking for the manufacturer's part number found on the BIOS chip(s) of a 2080 Ti built on the reference PCB. Has anyone managed to make note of that yet?

Edit: never mind, found it in Asmodian's code above.


----------



## Edge0fsanity

Got my 2080 Ti FE today and started testing on air. The card won't do anything past 2055 MHz on air. Based on the posts in here it looks like I got a lottery loser. How much benefit did you all get from putting a full-cover waterblock on it? Not interested in what some reviewer said, looking for real-world results. I was planning to flash the Galax BIOS as well to put an end to the throttling.

Debating on whether I want to send this back; I'm not paying $1250 for a lottery loser. I've already paid another $200 for a block and backplate on top of that...

On the plus side, this single card plays Destiny 2 @ 3440x1440/120 every bit as well as 2 TXPs in SLI, and in some areas it's much better. Seems to hold right around 2000 MHz with the stock BIOS throttling. Haven't had a chance to check any other games.


----------



## BigMack70

Edge0fsanity said:


> Got my 2080 Ti FE today. The card won't do anything past 2055 MHz on air. Based on the posts in here it looks like I got a lottery loser. How much benefit did you all get from putting a full-cover waterblock on it? Not interested in what some reviewer said, looking for real-world results.
> 
> Debating on whether I want to send this back; I'm not paying $1250 for a lottery loser. I've already paid another $200 for a block and backplate on top of that...
> 
> On the plus side, this single card plays Destiny 2 @ 3440x1440/120 every bit as well as 2 TXPs in SLI, and in some areas it's much better. Seems to hold right around 2000 MHz with the stock BIOS throttling. Haven't had a chance to check any other games.


My card looks like it won't do anything over 1900 MHz on air (average sustained clock) in several games, so I'd say you look like a millionaire in comparison...

I'm also debating whether or not I want to send mine back. Despite the "best possible" performance, I'm extremely disappointed with my card. 

This is one of the most finicky cards I've ever had. It can do 2050 MHz in benches but 1900 MHz in games? Dafuq?


----------



## TahoeDust

l88bastar said:


> Coming from a Titan X Pascal, the 2080 Ti FE is a pretty sexy card wedged into my ITX rig. Been having fun with it all day; a little too drunk now to continue playing FPS though


That is an awesome rig man. I had a real hard time on my current build deciding between ITX and full blown water...I went bigish water. Your's makes me wonder what if...

#alsodrunk

...sadly waiting on my 2080ti ftw3


----------



## Edge0fsanity

BigMack70 said:


> My card looks like it won't do anything over 1900 MHz on air (average sustained clock) in several games, so I'd say you look like a millionaire in comparison...
> 
> I'm also debating whether or not I want to send mine back. Despite the "best possible" performance, I'm extremely disappointed with my card.
> 
> This is one of the most finicky cards I've ever had. It can do 2050 MHz in benches but 1900 MHz in games? Dafuq?


I'm starting to throw benches at it and I'm having the opposite problem. Had to drop my OC 40 MHz, and now it's averaging around 1900 MHz in Time Spy while it throttles like crazy.

Probably have to flash the Galax BIOS to get a true measure of the upper limit. This is annoying as hell. These cards are screaming for a custom BIOS that disables the power limit and Boost 4.0, or whatever version it's on now.


----------



## BigMack70

Edge0fsanity said:


> I'm starting to throw benches at it and I'm having the opposite problem. Had to drop my OC 40 MHz, and now it's averaging around 1900 MHz in Time Spy while it throttles like crazy.
> 
> Probably have to flash the Galax BIOS to get a true measure of the upper limit. This is annoying as hell. These cards are screaming for a custom BIOS that disables the power limit and Boost 4.0, or whatever version it's on now.


How do your cards fare in the battlefield games?


----------



## R1amddude

BigMack70 said:


> My card looks like it won't do anything over 1900 MHz on air (average sustained clock) in several games, so I'd say you look like a millionaire in comparison...
> 
> I'm also debating whether or not I want to send mine back. Despite the "best possible" performance, I'm extremely disappointed with my card.
> 
> This is one of the most finicky cards I've ever had. It can do 2050 MHz in benches but 1900 MHz in games? Dafuq?



I haven't tried any benches yet, but I'm in the mid-upper 1900s in Mass Effect Andromeda. +150 core crashed for me so I went down to +130 core to be safe. Haven't touched my memory yet (don't really want/need to). Founders Edition stock BIOS, MSI Afterburner.


----------



## R1amddude

I'm about to take apart the stock cooler and put on some proper Grizzly thermal paste; I'll post back with results on that. 70°C with 100% fan speed could be better, IMO.


----------



## zlatanselvic

Founders Edition here, trying to install the 130% BIOS. 

Error: Board ID Mismatch.

Has anyone solved this issue?


----------



## Fiercy

I don't know about benches because I actually play games on my card, but I have a stable 2160 MHz with a water block in the new Assassin's Creed and Tomb Raider. 
That's on the 122% Gigabyte OC BIOS. 

Not sure if I need to flash another BIOS, from what I've read. 

Seems like clocks vary a lot from card to card.


----------



## l88bastar

TahoeDust said:


> That is an awesome rig man. I had a real hard time on my current build deciding between ITX and full blown water...I went bigish water. Your's makes me wonder what if...
> 
> #alsodrunk
> 
> ...sadly waiting on my 2080ti ftw3


I just came from a large WC rig with SLI.....got tired of all the headaches and wanted something simple.
Loving this tiny rig so far


----------



## nycgtr

https://www.3dmark.com/fs/16588085

Broke 9k with the 380 W BIOS on a Gigabyte Windforce. Kinda really wish I could flash another BIOS for my Trio, sighhh


----------



## cstkl1

nycgtr said:


> https://www.3dmark.com/fs/16588085
> 
> Broke 9k with the 380 watt bios on a gigabyte windforce. Kinda really wish I could flash another bios for my trio sighhh


I wouldn't bother with FS on Win 10, bro.

Beat ya with a lower-clocked stock MSI card at 2025 lol

https://www.3dmark.com/fs/16556554

It's just too flaky.


----------



## ocvn

nycgtr said:


> https://www.3dmark.com/fs/16588085
> 
> Broke 9k with the 380 watt bios on a gigabyte windforce. Kinda really wish I could flash another bios for my trio sighhh


https://www.3dmark.com/fs/16491048

Both my Trio Xs with the 330 W BIOS. +100 core / +1000 mem with MSI Afterburner.


----------



## nycgtr

Interesting. Gonna slap the trio in now and see.


----------



## cstkl1

nycgtr said:


> Interesting. Gonna slap the trio in now and see.


It's not the card, it's just FS on Win 10. 
Look at the CPU/combined score...

Trashing your 7900X @ 4.8 vs my [email protected]

FS is just flaky on the physics and combined scores.


----------



## animeowns

Oh, so we are allowed to flash BIOSes on Turing cards? I know Pascal was locked down. My 2x NVIDIA Founders Edition 2080 Ti cards just shipped.


----------



## Edge0fsanity

BigMack70 said:


> How do your cards fare in the battlefield games?


I don't play those. All I have right now is Destiny 2 installed; currently downloading other games for testing purposes. I did a new mobo and CPU, and converted to full M.2 storage about 2 months ago, and haven't gotten around to installing everything again. I'll post results tomorrow when I get a chance to do more testing.


----------



## Edge0fsanity

animeowns said:


> Oh, so we are allowed to flash BIOSes on Turing cards? I know Pascal was locked down. My 2x NVIDIA Founders Edition 2080 Ti cards just shipped.


Only manufacturer BIOSes can be flashed, nothing custom. People are generally flashing the Gigabyte and Galax BIOSes for a higher power limit; it does nothing else.


----------



## EQBoss

zlatanselvic said:


> Founders Edition here, trying to install the 130% BIOS.
> 
> Error: Board ID Mismatch.
> 
> Has anyone solved this issue?


Damn, and here I have 2 cards arriving Tuesday.


----------



## Guiler

Yeah, having the same issue trying to flash an FE card to get a higher power limit. If anyone figures out a workaround, it would be appreciated.


----------



## asdkj1740

zlatanselvic said:


> Founders Edition here, trying to install the 130% BIOS.
> 
> Error: Board ID Mismatch.
> 
> Has anyone solved this issue?


What command have you typed?
Have you typed "-6"?


----------



## zlatanselvic

-6 is correct, I used that command.


----------



## Guiler

asdkj1740 said:


> What command have you typed?
> Have you typed "-6"?


Yeah I have, and I also made sure it wasn't set to protected. Seems like a lot of FE cards are having the same issue.


----------



## carlhil2

That Gigabyte .exe BIOS update helped me break 16000 in Time Spy @ 2130, moving into the top 10 for a minute..


----------



## EQBoss

Guiler said:


> Yeah I have, also made sure it wasn't set to protected. Seems like a lot of FE cards are having the same issue.


Haven't seen one FE card with a flashed BIOS.

Do the commands -5 -6 work? Or -j -5 -6?


----------



## Sparc145

Doesn't for me. It appears they changed the flag behavior.



Code:


.\nvflash64.exe -j -5 -6 2080TIGX126.rom
NVIDIA Firmware Update Utility (Version 5.513.0)
Copyright (C) 1993-2018, NVIDIA Corporation. All rights reserved.


Command format not recognized

Press 'Enter' to list of the available commands, or 'Q' to quit.




Code:


.\nvflash64.exe -5 -6 2080TIGX126.rom
NVIDIA Firmware Update Utility (Version 5.513.0)
Copyright (C) 1993-2018, NVIDIA Corporation. All rights reserved.


Command format not recognized

Press 'Enter' to list of the available commands, or 'Q' to quit.


----------



## BigBeard86

BigMack70 said:


> Can someone else test Battlefield 4 with their 2080 Ti? If I put *any* overclock on my card at all, the game crashes at the menus. I can run 30 minute benchmark loops and several other games at +170/+1000 but BF4 crashes at the menus at +140/+0
> 
> Is my card just a turd?


I was playing for about an hour today at 4K DSR (1440p monitor), and this game gets 150+ fps on Ultra at that resolution. No issues with crashing at all.


----------



## Zammin

Ordered my 2080Ti the other day. Was originally planning on getting a founders card from Nvidia but who knows when they will be available again. I ended up going for an EVGA XC Gaming for the 130% power limit (plus it was on special with a free mechanical keyboard and free postage haha).

Got an EK Vector waterblock and backplate coming with it, but I saw yesterday that Phanteks released their GPU blocks for the 20 series and now I'm debating because the Phanteks block looks miles better aesthetically but in the past with the 10 series the EK blocks were better performing. I can see this time that the EK block has 5 more microchannels than the Phanteks block but that's all I really have to go off.

It's hard this early on when there aren't any reviews for any of the waterblocks yet; it's hard enough to even find an image of a block fitted to a 20 series card.

Anyone in this thread got their card under water yet?


----------



## Esenel

Zammin said:


> Ordered my 2080Ti the other day. Was originally planning on getting a founders card from Nvidia but who knows when they will be available again. I ended up going for an EVGA XC Gaming for the 130% power limit (plus it was on special with a free mechanical keyboard and free postage haha).
> 
> Got an EK Vector waterblock and backplate coming with it, but I saw yesterday that Phanteks released their GPU blocks for the 20 series and now I'm debating because the Phanteks block looks miles better aesthetically but in the past with the 10 series the EK blocks were better performing. I can see this time that the EK block has 5 more microchannels than the Phanteks block but that's all I really have to go off.
> 
> It's hard this early on when there aren't any reviews for any of the waterblocks yet; it's hard enough to even find an image of a block fitted to a 20 series card.
> 
> Anyone in this thread got their card under water yet?


----------



## mouacyk

All copper, nice.


----------



## Zammin

Esenel said:


> View attachment 222226
> 
> View attachment 222228


Nice! Thanks for the pic. I was having so much trouble finding ones of the block installed; EK's advertising is mostly renders.

May I ask what kind of temperatures you're seeing at 100% load? Wanting to know so I can compare to mine once it's in, to make sure everything is seated okay.


----------



## KShirza1

Just got mine today! Did a little bypass surgery to test it out. Now to install the vector block.


----------



## Zammin

cstkl1 said:


> on another note.. should be [@Kutalion]
> 
> 
> 
> yippy.. block for MSI Gaming X trio.


I found this funny because prior to buying my 2080Ti, I actually sent a message to Erik through the EK facebook page to ask if they were making a block for the Gaming X Trio. He told me they would not be making one at all. Literally less than a week later the Sea Hawk X EK is announced lmao.


----------



## Zammin

KShirza1 said:


> Just got mine today! Did a little bypass surgery to test it out. Now to install the vector block.


Awesome, glad I found this thread. Interested to see what kind of temps you get as well.

I probably won't be putting mine in until the 9900K releases. My system is all hard tubing, so it will be easier to do it all at once.


----------



## enigmatic23

On my EVGA XC with an EK block, I am getting around 45°C in Assassin's Creed Odyssey after hours of playing. Running the Galax BIOS at 2050 MHz and 8000 memory. Dual 360 rads with a D5 at full speed and fans at 1100 RPM. Voltage-limited in GPU-Z the entire time, and power barely breaks 300 W. Ran Superposition, and under full load in 8K or 4K temps were still below 46°C while pulling around 360 W. https://benchmark.unigine.com/results/rid_a8be45ca4d48457d90779371e7bc69b6


----------



## Zammin

enigmatic23 said:


> On my EVGA XC with an EK block, I am getting around 45°C in Assassin's Creed Odyssey after hours of playing. Running the Galax BIOS at 2050 MHz and 8000 memory. Dual 360 rads with a D5 at full speed and fans at 1100 RPM. Voltage-limited in GPU-Z the entire time, and power barely breaks 300 W. Ran Superposition, and under full load in 8K or 4K temps were still below 46°C while pulling around 360 W. https://benchmark.unigine.com/results/rid_a8be45ca4d48457d90779371e7bc69b6


Cheers for that. It's good to know what to expect. Sounds like these run a little hotter than Pascal; my current cooling setup is very similar to yours, and my Strix 1080 Ti OC usually tops out at 38-40°C with a Phanteks block.

This might be a noob question as I'm new to this thread, but do you find you are getting better performance with the Galax BIOS over the latest EVGA one with the 130% power limit? I hadn't heard of the Galax BIOS flash on Turing until today, but it has me interested.


----------



## xer0h0ur

So, has anyone tried flashing a vBIOS onto an FE card yet? That's the boat I'm in now after putting my EK block on it.


----------



## NAIM101

Esenel said:


> View attachment 222226
> 
> View attachment 222228




I have the EKWB all-in-one custom waterblock solution, and your tubing looks pro grade.


----------



## enigmatic23

Zammin said:


> Cheers for that. It's good to know what to expect. Sounds like these run a little hotter than Pascal; my current cooling setup is very similar to yours, and my Strix 1080 Ti OC usually tops out at 38-40°C with a Phanteks block.
> 
> This might be a noob question as I'm new to this thread, but do you find you are getting better performance with the Galax BIOS over the latest EVGA one with the 130% power limit? I hadn't heard of the Galax BIOS flash on Turing until today, but it has me interested.


Didn't even get a chance to try it; I did a few synthetic tests in 3DMark with the 130% and was power-limited, so I went ahead and flashed it. Great temps! Very easy to do the flash, and it gives just a bit extra.


----------



## Asmodian

I decided to do a shunt mod after all, power limits are too annoying. :devil:

I had a bit of trouble with the shunt on the front of the card. 

If I run Heaven my card goes into safe mode (300 MHz), but Time Spy (and everything else I have tested) runs fine. 

Time Spy crashes if I go much above +125 MHz but it happily runs at 2070 MHz / 8000 MHz:
Time Spy graphics score: 15823
Time Spy Extreme graphics score: 6729

I recommend using 0.006+ ohm shunts on top of the stock ones to keep it from going into safe mode.
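To see why a 0.006+ Ω value is a reasonable compromise: stacking a second shunt in parallel with the stock one lowers the resistance the power-monitoring circuit measures, so the card under-reads its own draw. A sketch of the math, assuming 5 mΩ stock shunts (an assumption — verify the value on your own PCB):

```python
def parallel(r1: float, r2: float) -> float:
    """Effective resistance of two shunt resistors stacked in parallel."""
    return r1 * r2 / (r1 + r2)

STOCK_SHUNT = 0.005   # ohms; assumed stock value, check your own card
ADDED_SHUNT = 0.006   # ohms; the "0.006+" value suggested above

r_eff = parallel(STOCK_SHUNT, ADDED_SHUNT)
reported_fraction = r_eff / STOCK_SHUNT       # share of real power the card sees
effective_multiplier = 1 / reported_fraction  # real draw when it thinks it's at the limit

print(f"effective shunt:  {r_eff * 1000:.2f} mOhm")
print(f"card reports only {reported_fraction:.0%} of actual power")
print(f"effective limit:  ~{effective_multiplier:.2f}x stock")
```

Under these assumed values the card reads roughly 55% of its real draw, i.e. an effective ~1.8x power limit. Stacking a lower-value shunt pushes the multiplier higher, which may be exactly what trips the safe mode described above.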


----------



## cgcross

Asmodian said:


> I decided to do a shunt mod after all, power limits are too annoying. :devil:
> 
> I recommend using 0.006+ ohm shunts on top of the stock ones to keep it from going into safe mode.


I did an LM shunt mod but assumed I needed the 2 on top... so it's the one on top closest to the 8-pins and the one on the back?

Timespy Graphics Score 16975
Timespy Extreme Graphics Score 7927


----------



## zlatanselvic

So I guess no BIOS mods for us Founders guys?


----------



## Asmodian

cgcross said:


> I did a LM shunt mod but assumed I needed the 2 on top... so its the one on top closest to the 8 pins and the one on back?


That is what I found with an ohm meter... you have much better scores than I do but I don't get power throttling anymore, it never goes above ~80% power now.


----------



## cgcross

Asmodian said:


> That is what I found with an ohm meter... you have much better scores than I do but I don't get power throttling anymore, it never goes above ~80% power now.


I'll LM-shunt the third one to see if the scores get better, why not!?

My card was ridiculously good on air with no shunts... the shunts didn't add much. That's why I was curious about yours.


----------



## Hulk1988

It seems like the new 416.xx driver blocks a BIOS update now. Has anyone with a 411.xx driver version + FE card tried it yet?


----------



## Zammin

enigmatic23 said:


> Didn't even get a chance to try it; I did a few synthetic tests in 3DMark with the 130% and was power-limited, so I went ahead and flashed it. Great temps! Very easy to do the flash, and it gives just a bit extra.


Thanks for the info man. 

Every little bit helps. Once I have mine up and running I'll do some testing with the EVGA standard BIOS on water and post here, then I might have a go at flashing the BIOS to the Galax one if need be.

It's good to have a thread like this to compile all this new information; looking elsewhere on the web, information on watercooling RTX cards is scarce, with all these products being so new. Mostly what comes up is marketing/news articles and the odd YouTube video, like the one Gamers Nexus did with the hybrid mod.

If anyone in this thread has the Phanteks block I would love to hear your results with that one to compare. I have a feeling it'll be the same as last gen where the EK block will be a couple degrees better performing, but damn Phanteks nailed the aesthetic. No machine marks on the block or plexi, and the plexi is crystal clear with the sides blacked/chromed out by aluminium plates.


----------



## cstkl1

Hulk1988 said:


> It seems like the new 416.xx driver blocks a BIOS update now. Has anyone with a 411.xx driver version + FE card tried it yet?


Only for FE??

Wondering, has anybody flashed the Galax BIOS on the MSI Trio??

I have almost zero knowledge of anything engineering...

So the question is: is that fuse on each of the PCIe connectors on the MSI Trio an issue for flashing a BIOS mod??


----------



## cstkl1

Zammin said:


> Ordered my 2080Ti the other day. Was originally planning on getting a founders card from Nvidia but who knows when they will be available again. I ended up going for an EVGA XC Gaming for the 130% power limit (plus it was on special with a free mechanical keyboard and free postage haha).
> 
> Got an EK Vector waterblock and backplate coming with it, but I saw yesterday that Phanteks released their GPU blocks for the 20 series and now I'm debating because the Phanteks block looks miles better aesthetically but in the past with the 10 series the EK blocks were better performing. I can see this time that the EK block has 5 more microchannels than the Phanteks block but that's all I really have to go off.
> 
> It's hard this early on when there aren't any reviews for any of the waterblocks yet; it's hard enough to even find an image of a block fitted to a 20 series card.
> 
> Anyone in this thread got their card under water yet? :)


Isn't the Phanteks one double the weight?
🤔


----------



## Zammin

cstkl1 said:


> Isn't the Phanteks one double the weight?
> 🤔


Yeah they are normally a fair bit heavier.


----------



## cstkl1

BTW, somebody just beat those silly scores by GM/Linus/J2C with water.

TYVM for saving the OC community.


----------



## Gustavoh

Hello everybody! New to the forum and to the thread; I just got a Gainward Phoenix GS. So far I've managed to run +130 on the core and +700 on the memory, but it was hitting over 80C (in an NZXT H400i case). With a custom fan curve, I've settled at around 80C under full load, averaging around 1950 MHz, but the noise is a PITA (fans nearing 100%).

So I'm thinking about putting it under water. According to the thread's first page, the Gainward Phoenix GS is a reference PCB, but it does not show up as compatible with EK's or Phanteks' waterblocks on their websites. Is that because it really isn't compatible (despite the reference PCB), or because EK and Phanteks haven't had a chance to confirm it yet (the card seems to have only just come out in Europe)?

Thanks for the help!


----------



## animeowns

If I purchase a 2080 Ti on eBay, will NVIDIA honor the warranty if it's new and unopened? I need a third one for a secondary PC.


----------



## Edge0fsanity

Hulk1988 said:


> It seems like that the new driver 416.xx blocked a Bios update now. Has someone a 411.xx driver version + FE card and tried it yet?


Just tried the Galax BIOS with 411.70 and it does not work: Board ID mismatch.


----------



## Hulk1988

Igor from Tom's Hardware Germany said that the delay is connected to a BIOS update for the FE. That would make this card most likely useless, for me at least.


----------



## eeeven

In yesterday's livestream at Gamers Nexus, he said there is an XOC BIOS for 2080 Ti cards available and that it can be found here at OC.net. I can't find it; can someone help me?

Has anybody already successfully flashed another non-FE BIOS onto the FE card? Some people are getting errors doing that: Board ID mismatch.


----------



## Diverge

So is everyone going to return their FE cards? Mine is power limited, so I can't get more than 1900-2000 MHz... I guess there's 30 days to hope something changes. We should all bombard NVIDIA with requests for a higher-power-limit BIOS.


----------



## BigMack70

BigBeard86 said:


> I was playing for about an hour today at 4k dsr (1440p monitor) and this game gets 150+ fps on ultra at that resolution. no issues with crashing at all.


At native 4K the framerate isn't quite that high; closer to a 120 fps average. What clocks were you at? Around 1900 / 8000 seems to be the best I can get stable in BF4.


----------



## torqueroll

Edge0fsanity said:


> just tried the galax bios with 411.70 and it does not work. Board ID mismatch.


Same issue here. Tried both the galax and evga bios. I have FE card from Nvidia.


----------



## bmg2

torqueroll said:


> Same issue here. Tried both the galax and evga bios. I have FE card from Nvidia.


Did you turn write protect off? nvflash64 --protectoff
I uninstalled the drivers before flashing, since the newly flashed card was going to look like a different graphics card anyway. nvflash64 -6 newbios.rom. The -6 overrides the mismatch... make sure the BIOS is compatible with your hardware first!
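For reference, the full sequence I'd suggest is below. This is only a sketch: the filenames are placeholders, and you should verify that your nvflash64 build actually supports these switches before running anything. Always dump a backup first.

```shell
# Sketch of a flashing session; filenames are placeholders.
nvflash64 --save backup.rom    # back up the current vBIOS first
nvflash64 --protectoff         # disable the EEPROM write protection
nvflash64 -6 newbios.rom       # flash, confirming through the ID-mismatch prompt
# If something goes wrong, reflash the backup the same way:
# nvflash64 -6 backup.rom
```

And again: only do this with a BIOS that matches your board's power design.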


----------



## Edge0fsanity

bmg2 said:


> Did you turn write protect off? nvflash64 --protectoff
> I uninstalled the drivers before flashing, since the newly flashed card was going to look like a different graphics card anyway. nvflash64 -6 newbios.rom. The -6 overrides the mismatch... make sure the BIOS is compatible with your hardware first!


I did not uninstall the drivers first. I did use --protectoff, and -6 does not override it; you get the option to proceed with the override, followed by an error. Were you able to get a BIOS working on an FE, or are you using an AIB card?


----------



## torqueroll

bmg2 said:


> Did you turn write protect off? nvflash64 --protectoff
> I uninstalled the drivers before flashing, since the newly flashed card was going to look like a different graphics card anyway. nvflash64 -6 newbios.rom. The -6 overrides the mismatch... make sure the BIOS is compatible with your hardware first!


I tried uninstalling the drivers, but no luck. Same mismatch error with 'nvflash64 --protectoff' and 'nvflash64 -6 xxx.rom'.


----------



## nycgtr

Diverge said:


> So is everyone going to return their FE cards? Mine is power limited, so can't get more than 1900-2000 MHz... I guess there's 30 days to hope something changes... We should all bombard Nvidia with requests for a higher power limit bios.


Meh, TBH if you look at the FPS numbers at 2000 vs 2100 there's really not much difference. I've got three cards here; the worst one, even with the 380 W BIOS, does 2020 at best, and compared to the 2130 on another that's only a 1.2 to 1.7 fps difference at best.


----------



## animeowns

nycgtr said:


> Meh, TBH if you look at the FPS numbers at 2000 vs 2100 there's really not much difference. I've got three cards here; the worst one, even with the 380 W BIOS, does 2020 at best, and compared to the 2130 on another that's only a 1.2 to 1.7 fps difference at best.



I am keeping my cards when they get here. NVIDIA finally shipped two of my cards, but I need to order a third for the second PC. Will NVIDIA honor the warranty if I buy a 2080 Ti FE from eBay, new and unopened?


----------



## SaccoSVD

Hi fellow OC'ers,

After watching all the reviews online about the 20 series, I decided to go the second-hand GTX 1080 Ti AORUS Waterforce Xtreme Edition route instead (I have a 980 Ti SC).

I kind of like the 20 series in terms of performance, but the price is insane, pretty much in line with what the reviewers say.

I would love to hear your verdict on the 2080 Ti, because things here are a bit more detailed than from YouTube outlets.


----------



## ksd8808

Fiercy said:


> It's under EK waterblock. It's gigabyte oc version as I found recently there is actually a big difference between a windforce oc and just oc for some reason pcb from mine didn't fully fit waterblock out of the box... but I made it work.


Hi Fiercy, are you currently using a Gaming OC rather than a Windforce OC? Could you tell me how the EK waterblock conflicted with the PCB, and how you managed to make it work?

A friend of mine owns a Gaming OC and wants to install an EK waterblock onto it; however, he is wondering if it will work, since the PCB of the Gaming OC is a little different from the Founders Edition's. Your experience would be valuable. Thank you.


----------



## GraphicsWhore

Is the EVGA backplate compatible with the EK block, does anyone know? I installed the block and the EK backplate, but EVGA's backplate looks better IMO.

Also, I fired the card up, used DDU, and installed drivers. All games crash on start, some referencing a Direct3D error. Tried the latest EVGA BIOS and the Galax one. Temps and clocks are low, so I figured it was definitely software related.

Installed Windows on a blank SSD and installed drivers; sure enough, everything works fine.

I really don't want to have to reinstall Windows on my main drive, but I might have to. Feels like I've tried everything. So f'ing annoying.


----------



## EQBoss

IIRC, -5 used to override board IDs and -6 the PCI subsystem ID. If they changed the flags, then maybe we need different ones? It would really suck if we can't flash anything onto it.


----------



## cstkl1

nycgtr said:


> Diverge said:
> 
> 
> 
> So is everyone going to return their FE cards? Mine is power limited, so can't get more than 1900-2000 MHz... I guess there's 30 days to hope something changes... We should all bombard Nvidia with requests for a higher power limit bios.
> 
> 
> 
> Meh tbh if you were to look at fps numbers at 2000 vs 2100 it's really not much difference. I got 3 here and the worst one even with the 380watt bios at best is 2020, compared to the 2130 on the other it's a 1.2 fps to 1.7 at best.

The MSI 1755 boost spec is really a joke. I flashed another BIOS more in line with FE clocks..

It's still only boosting to 1890, so that 1755 means nothing. Looking at the volt/frequency curve, nothing changed.

Oh yeah, no issue flashing the Galax 380 W BIOS onto the Trio.. just the first fan will ramp up, and the second one had to be ramped manually.

Other than that it seems to work. It also seems a bit pointless with my card, which I think even under water may only be stable around 2025-2055.

BTW, since nobody has talked about it: NVIDIA abandoned their more-than-a-decade-old 12.5 MHz clock step in favor of 15 MHz steps for Turing.


----------



## R1amddude

Still testing my Founders Edition, but clocks can vary wildly from game to game. So far, for an air-cooled card, I think mine is higher quality than my 1080 Ti FE. Play around with the clocks for each game: it appears my +130 OC for Andromeda did not sit well with MechWarrior Online (power % and clocks were bouncing up and down); I lowered the OC and it stayed at around 1920 MHz. My 1080 Ti had a different feel to its OC capabilities.


----------



## xer0h0ur

Graphics***** said:


> Is the EVGA backplate compatible with the EK block, does anyone know? I installed the block and the EK backplate, but EVGA's backplate looks better IMO.
> 
> Also, I fired the card up, used DDU, and installed drivers. All games crash on start, some referencing a Direct3D error. Tried the latest EVGA BIOS and the Galax one. Temps and clocks are low, so I figured it was definitely software related.
> 
> Installed Windows on a blank SSD and installed drivers; sure enough, everything works fine.
> 
> I really don't want to have to reinstall Windows on my main drive, but I might have to. Feels like I've tried everything. So f'ing annoying.


I DDUed my 1080's drivers and plugged in the stock FE card then installed the latest Windows 7 driver for the 2080 Ti. I ran some 3dmark benches and played some games to verify the card wasn't a dud before waterblocking it. Finally I shut the system down, installed the EKWB and backplate and plugged her into the loop and system. When I started the rig back up I was immediately BSODing upon logging into Windows. Of course the first thing that came to mind was something went wrong on the waterblock and/or backplate installation. Except all I had to do was boot into safe mode and DDU the drivers again which got everything working just fine again.


----------



## Edge0fsanity

R1amddude said:


> Still testing my founders edition but clocks can vary wildly from game to game. So far for an air cooled card I think mine is higher quality than my 1080ti FE. Play around with the clocks for each game, it appears my +130 OC for Andromeda did not sit well with Mechwarrior online, power % and clocks were bouncing up and down, lowered the OC and it stayed above 2000 mhz more consistently. My 1080ti had a different feel to its OC capabilities.


I'm finding it's very different from OCing Pascal. You pretty much have to tweak the voltage/frequency curve at every level if you want the most out of the card on a stock BIOS. I found that my max OC was +170 any time the card throttled down to around 993 mV or less; however, +200 is perfectly fine at any voltage above that.
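The per-voltage tuning I'm describing can be sketched as a simple rule. The numbers below are just my card's results, not universal values; every chip will have its own threshold and offsets.

```python
def offset_for_voltage(v_mv: int) -> int:
    """Core-clock offset (MHz) to apply at a given voltage point.

    Illustrates tuning the V/F curve per level: on my card, points at or
    below ~993 mV only hold +170, while higher-voltage points are fine
    at +200. Your card's numbers will differ.
    """
    return 170 if v_mv <= 993 else 200

print(offset_for_voltage(993))   # 170
print(offset_for_voltage(1043))  # 200
```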


----------



## xer0h0ur

I don't know why people were saying it was going to be pointless to waterblock these cards. I've only had it installed for one day and I already noticed right away a huge sustained clock difference between the air cooler and having it blocked. I'm talking like a 200MHz difference if not more from use case to use case.


----------



## Fiercy

xer0h0ur said:


> I DDUed my 1080's drivers and plugged in the stock FE card then installed the latest Windows 7 driver for the 2080 Ti. I ran some 3dmark benches and played some games to verify the card wasn't a dud before waterblocking it. Finally I shut the system down, installed the EKWB and backplate and plugged her into the loop and system. When I started the rig back up I was immediately BSODing upon logging into Windows. Of course the first thing that came to mind was something went wrong on the waterblock and/or backplate installation. Except all I had to do was boot into safe mode and DDU the drivers again which got everything working just fine again.


You should invest some time in learning about Windows 10 LTSC (I use it myself):
1) No Windows Store and all its apps
2) No Cortana
3) No OneDrive
4) No feature updates (only security ones)

On the waterblock side, though, I totally agree: I was able to go from 71 fps (ultrawide) in the benchmark to a stable 82 in Assassin's Creed Odyssey.


----------



## exploiteddna

Esenel said:


> If I try to flash my FE BIOS to either the GX or the XC bios of the OP I get the following error message
> 
> 
> 
> Sparc145 said:
> 
> 
> 
> Anyone able to get the Galax 380w bios flashed on their FE? I'm getting Board ID mismatch:
> 
> 
> 
> Asmodian said:
> 
> 
> 
> I do not seem to be able to flash the Galax 380W BIOS on my Founders Edition 2080 Ti.
> 
> 
> 
> 
> 

Just a thought... are you using the latest nvflash, and are you issuing the command to remove protection?



Jbravo33 said:


> I canceled my FE’s yesterday. They truly killed the excitement for me. Will wait for titan. I do have 2 ekwb rgb blocks that got delivered today. I will hold onto nvlink. If anyone is interested pm for the blocks. I’ll take pics when I get to work. As far as EK blocks quality I’ve had no issues with 2 of them on Xp’s and one on my titan V.


Which versions do you have? Nickel-Plexi? I'm interested if so.



Alonjar said:


> Will a RTX 2080ti be throttled or limited by running PCIE 3.0 x8 vs x16?


Steve from GN posed this question a few weeks ago, IIRC, but I can't remember if he ever answered it or did any testing.


----------



## Hulk1988

Wow! My friend tested a 2080 Ti FE which he got two weeks ago from a company for review, and the BIOS flash works. Not a nice move, NVIDIA. I think that is the reason for the delay.


----------



## BigMack70

xer0h0ur said:


> I don't know why people were saying it was going to be pointless to waterblock these cards. I've only had it installed for one day and I already noticed right away a huge sustained clock difference between the air cooler and having it blocked. I'm talking like a 200MHz difference if not more from use case to use case.


+200 MHz sounds a bit unlikely; what BIOS are you using? Using Unigine Valley to test, my card cold (40-45C at start of benchmark) does about 2025 MHz. That drops about 100 MHz down to 1920 MHz at 75C. So I figure best case if I water cool this thing is +100 MHz sustained clocks, at least on stock BIOS.
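That ~100 MHz drop actually lines up with the usual GPU Boost rule of thumb: the card sheds roughly one ~15 MHz bin for every ~5 C of core temperature. A quick back-of-the-envelope estimate (the bin size and temperature step are community rules of thumb, not an NVIDIA spec):

```python
def estimate_sustained_clock(cold_clock_mhz, cold_temp_c, hot_temp_c,
                             mhz_per_bin=15, deg_per_bin=5):
    """Estimate the clock after thermal throttling.

    GPU Boost drops roughly one bin (~15 MHz) for every ~5 C of core
    temperature; these constants are rule-of-thumb values, not a spec.
    """
    bins_lost = max(0, (hot_temp_c - cold_temp_c) // deg_per_bin)
    return cold_clock_mhz - bins_lost * mhz_per_bin

# 2025 MHz at 45 C cold; at 75 C this predicts about 1935 MHz,
# close to the ~1920 MHz I actually see.
print(estimate_sustained_clock(2025, 45, 75))
```

Which is why dropping to 40-50 C water temps mostly just buys back those thermal bins, unless the power limit moves too.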


----------



## R1amddude

xer0h0ur said:


> I don't know why people were saying it was going to be pointless to waterblock these cards. I've only had it installed for one day and I already noticed right away a huge sustained clock difference between the air cooler and having it blocked. I'm talking like a 200MHz difference if not more from use case to use case.



Gonna have to agree with you. At 65-71 C (where it's at when I game) it likes to stay in the low-to-mid 1900s, 1965 if I'm lucky.


----------



## Dayaks

cgcross said:


> Asmodian said:
> 
> 
> 
> That is what I found with an ohm meter... you have much better scores than I do but I don't get power throttling anymore, it never goes above ~80% power now.
> 
> 
> 
> I'll LM-shunt the third one to see if it gets better, why not!?
> 
> My card was ridiculously good on air with no shunts... the shunts didn't add much. That's why I was curious about yours.

You don't want to do all three; one of them is for the PCIe slot...
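For anyone wondering why a shunted card reads ~80% power, the arithmetic is simple: the controller converts the shunt's voltage drop to current assuming the original resistance, so putting a resistor in parallel (or lowering the resistance with liquid metal) scales the reported power by R_mod / R_original. A quick sketch with assumed values (5 mΩ is a typical sense shunt, not a confirmed figure for this board):

```python
def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def reported_power_fraction(r_original: float, r_modified: float) -> float:
    # The power controller still assumes r_original, so the power it
    # reports is the true draw scaled by r_modified / r_original.
    return r_modified / r_original

r_orig = 0.005                   # assumed 5 mOhm sense shunt
r_mod = parallel(r_orig, 0.005)  # stack an identical 5 mOhm on top
print(round(reported_power_fraction(r_orig, r_mod), 3))  # 0.5 -> card reports half its real draw
```

The LM version is the unpredictable one, since you have no idea what resistance you actually ended up with.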


----------



## xer0h0ur

BigMack70 said:


> +200 MHz sounds a bit unlikely; what BIOS are you using? Using Unigine Valley to test, my card cold (40-45C at start of benchmark) does about 2025 MHz. That drops about 100 MHz down to 1920 MHz at 75C. So I figure best case if I water cool this thing is +100 MHz sustained clocks, at least on stock BIOS.


Yes, it's that much of a clock drop or more on an FE 2080 Ti unless you're blasting both fans at 100%, and even then you can't sustain the clocks you get waterblocked. My card was boosting just shy of 2000 MHz on the EK block without overclocking it at all. Using the original FE vBIOS, of course, as I haven't had any confirmation yet of someone being able to flash another BIOS onto it to try it for myself.


----------



## BigMack70

xer0h0ur said:


> Yes, its that much of a clock drop or more on an FE 2080 Ti unless you're blasting both fans @ 100% and even then you can't sustain the clocks you get waterblocked. My card was boosting just shy of 2000MHz on the EK block without overclocking it at all. Using the original FE vBIOS of course as I haven't had any confirmation yet on someone being able to flash another bios onto it to try it for myself.


So you're getting 2100+ on water sustained?


----------



## GraphicsWhore

xer0h0ur said:


> Graphics***** said:
> 
> 
> 
> Is EVGA backplate compatible with EK block, anyone know? I put the block and EK backplate but EVGA’s backplate looks better IMO.
> 
> Also, fired the card up and used DDU, installed drivers. All games crash on start, some referencing direct 3d error. Tried latest EVGA BIOS and Galax. Temps and clocks are low so figured it was definitely software related.
> 
> Installed windows on blank SSD and installed drivers, sure enough everything works fine.
> 
> Really don’t want to have to reinstall Windows on my main drive but might have to. Tried everything I feel like. So f’ing annoying.
> 
> 
> 
> I DDUed my 1080's drivers and plugged in the stock FE card then installed the latest Windows 7 driver for the 2080 Ti. I ran some 3dmark benches and played some games to verify the card wasn't a dud before waterblocking it. Finally I shut the system down, installed the EKWB and backplate and plugged her into the loop and system. When I started the rig back up I was immediately BSODing upon logging into Windows. Of course the first thing that came to mind was something went wrong on the waterblock and/or backplate installation. Except all I had to do was boot into safe mode and DDU the drivers again which got everything working just fine again.

Yeah, the first time I DDU'ed was not in safe mode; the second time it was. But I don't think it's driver related.

DirectX seems to have been completely borked on my main drive thanks to the October Windows update, which you have to install to use NVIDIA 416.16.

Ridiculous. I guess worst-case scenario I just reinstall Windows on my main drive and start fresh. I'd have to redownload/reinstall like 400 GB of games, but thankfully my other games are spread across 3 other SSDs.


----------



## R1amddude

BigMack70 said:


> So you're getting 2100+ on water sustained?



Im getting tempted to water cool my 2080ti lol.


----------



## FreeElectron

When will the 2080TIs be available (for MSRP prices)?


----------



## exploiteddna

Wasn't expecting this until Tuesday. What a pleasant surprise! Count me in, bois :muscle:

Not bad for no preorder; I just ordered on Wednesday.


----------



## GraphicsWhore

FreeElectron said:


> When will the 2080TIs be available (for MSRP prices)?


You're talking about the $999 price? Pretty sure never.

By the way, I just want to say I'm happy a bunch of us have our 2080 Tis, so we can finally silence all the "This isn't an owners thread, you guys don't even have the cards!" douchebags.


----------



## BigMack70

R1amddude said:


> Im getting tempted to water cool my 2080ti lol.


If 2100+ sustained clocks are possible on water I'm likely water cooling mine as well.


----------



## xer0h0ur

BigMack70 said:


> So you're getting 2100+ on water sustained?


2070 MHz, to be exact, is as far as I have pushed a sustained clock so far. It took me so damn long to get the waterblock installed and the card introduced into the loop that I only got a little over two hours of benchmarking and gaming to test how stable the clock was; I'm talking gaming stability, not sustaining the clocks themselves. I am currently running it with the GDDR6 given the full beans (+1000 MHz), +123% power limit, +100 mV, and +150 MHz on the GPU clock, which only occasionally boosts up to 2100 MHz in benchmarks, but in actual gaming it boosts to and sustains 2070 MHz. I am certainly power limited by the vBIOS.

FWIW I still need to toy with the actual curve instead of just using the sliders and I really hope we end up being able to flash a higher power limit vBIOS onto these FE cards to get a little extra performance out of them.


----------



## FreeElectron

Graphics***** said:


> You’re talking about the $999 price? Pretty sure never.
> 
> By the way I just want to say I’m happy a bunch of us have our 2080Tis so that we can finally appease all the “This isn’t an owners thread u guys don’t even have the cards!” douche bags.


OK, but those prices are insane.


----------



## BigMack70

xer0h0ur said:


> I really hope we end up being able to flash a higher power limit vBIOS onto these FE cards to get a little extra performance out of them.


Me too. It's the last piece of the puzzle before I decide to water cool. If I go water, I want to be able to get at least a 400 W BIOS on the card; 450 W would be ideal. I just re-read the GN article where they did their hybrid mod: their clocks were impressive, but performance in games was exactly the same as on air because of power limits.


----------



## Krzych04650

BigMack70 said:


> If 2100+ sustained clocks are possible on water I'm likely water cooling mine as well.


With my Sea Hawk (flashed to the 380 W BIOS), I am getting a sustained 2130 MHz at 1.056 V in most games, at around 47 C with 100% fan speed. That means tons of heat, because if I keep it stock with the 300 W limitation I get 51 C max at 30% fan speed (700 RPM, the Sea Hawk's minimum). For example, AC: Syndicate with everything maxed out, Gameworks and MSAA, maintains this 2130 clock and even wants to boost to 2145 sometimes, but I had to cut that from the boost curve because it is my core limitation; I cannot OC higher than 2130. Even very power- and temperature-heavy games that draw more power than others and make the GPU a few degrees hotter, like The Witcher 3, do not drop below 2100 MHz.

This is not possible in benchmarks like Time Spy, but in gaming, 2100+ is sustainable with the 380 W BIOS and watercooling.
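Those numbers also line up with simple dynamic-power scaling, P ≈ C·V²·f (leakage ignored): going from a ~1900 MHz point at ~0.993 V to 2130 MHz at 1.056 V asks for roughly 27% more power, which is about the headroom the 380 W BIOS adds over 300 W. A rough sketch (the 0.993 V figure is borrowed from an earlier post in the thread, not my own measurement):

```python
def relative_dynamic_power(v1: float, f1: float, v2: float, f2: float) -> float:
    """Dynamic switching power scales roughly with V^2 * f (leakage ignored)."""
    return (v2 / v1) ** 2 * (f2 / f1)

# ~1900 MHz @ 0.993 V vs my watercooled 2130 MHz @ 1.056 V
ratio = relative_dynamic_power(0.993, 1900, 1.056, 2130)
print(round(ratio, 2))  # ~1.27, i.e. roughly 27% more power for the higher point
```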


----------



## Foxrun

I'm getting worse performance in Monster Hunter, Killing Floor 2 with PhysX, and No Man's Sky than I did with my Titan V.


----------



## mr2cam

Is anyone able to flash a different bios to the FE cards?


----------



## Diverge

mr2cam said:


> Is anyone able to flash a different bios to the FE cards?


I don't think anyone has flashed an FE card that was sold at retail, so I guess NVIDIA patched out the ability.


----------



## xer0h0ur

The reason it's usually harder to sustain clocks in synthetics than in gaming is that games rarely produce the same level of sustained GPU usage that a benchmark run does.


----------



## GraphicsWhore

FreeElectron said:


> Graphics***** said:
> 
> 
> 
> You’re talking about the $999 price? Pretty sure never.
> 
> By the way I just want to say I’m happy a bunch of us have our 2080Tis so that we can finally appease all the “This isn’t an owners thread u guys don’t even have the cards!” douche bags.
> 
> 
> 
> Ok. but, those prices are insane

Oh no, those are third-party sellers; ignore that price. I thought you were talking about the $999 price from NVIDIA's keynote.

The real MSRP, $1,150+ depending on the model, should be available relatively soon. I've been reading about people being able to place preorders on Amazon again.

Keep tabs on nowinstock.net and other sites.


----------



## BigBeard86

Will the Galax BIOS work with my EVGA XC Ultra?

Can someone please remind me of the flashing commands? Do I have to type "-6", or can I just use the plain nvflash command?


----------



## zlatanselvic

Has anyone found a way to flash the Founders BIOS yet?

Still stuck in the same spot: my card looks like it could push 2100+ if I could get some power to it!


----------



## GraphicsWhore

BigBeard86 said:


> Will the galax bios work with my evga xc ultra?
> 
> Can someone pelase remind me of the flashing commands? do I have to type in "-6" or can I just use nvflash command?


Yes, it should.

nvflash64 --protectoff

nvflash64 -6 FILENAME.ROM


----------



## raider89

2040-2075 during games seems to be my max; any higher and it crashes. And that's at +85 on the core.


----------



## xer0h0ur

Has anyone else tried using the automatic overclocking tool yet? I still need to give that a go.


----------



## Esenel

Although I cannot flash any BIOS onto the FE card, it is pretty fast.

Rank 20 Time Spy graphics score:
16,283 points
https://www.3dmark.com/spy/4611453


----------



## Ford8484

xer0h0ur said:


> Has anyone else tried using the automatic overclocking tool yet? I still need to give that a go.


Yeah, I use the MSI Afterburner one for the Zotac AMP. It seems to work pretty well. I prefer using it and not obsessing over getting the exact ideal OC... otherwise you just toy with the graphics settings forever and never play the games, lol.


----------



## exploiteddna

I think I'm going to hold off on opening the Duke OC for now, until I see if I can land a different card in the next week or so. Idk, it's so hard not to open it.

I'm still amazed that people are/were buying Amazon preorders on eBay for $1,600 that won't ship until the end of the month, when they could have just waited and gotten their own even faster.


----------



## raider89

Esenel said:


> Although I cannot flash any BIOS on the FE card it is pretty fast.
> 
> Rank 20 TimeSpy graphics score
> 16283 points
> https://www.3dmark.com/spy/4611453


I can't get anywhere close to that score, lol.


----------



## Fiercy

Wow, I am amazed by NVIDIA Scanner! It established the limits of my card almost perfectly: it showed +139 PASS, and I know it can do around +149 tops, but only semi-stable.

It also showed I can't go above 2175, which is where I knew my limit was.

So I guess in my case it very accurately pointed to the highest stable result. I recommend you guys try it.


----------



## xer0h0ur

Which card do you have, @Fiercy?


----------



## shadow85

I am contemplating selling both of my ASUS GTX 1080 Ti STRIX OC cards for a single RTX 2080 Ti, for the simple fact that some games, like AC Odyssey, will never support SLI.

I really would like to play the new ACO at more than the 39 fps I am currently getting at 4K max + HDR.


----------



## Diverge

People who are having success with the OC Scanner: what build of Windows are you on?

I'm trying to figure out why I can't use the OC Scanner on my system, while manually OCing works just fine in benches and games. I'm on a freshly wiped system using the Windows 10 October 2018 build, and I'm wondering if that is contributing to my issues.


----------



## BigMack70

OC Scanner is a buggy mess for me using the newest MSI Afterburner beta. It's given me an entire range, from +33 MHz to +160 MHz, with all other variables the same. Any time it gives me above +100 MHz, it's not 100% game stable (it fails BF4).

I also got basically the same Time Spy Extreme scores at +170/+1000, +150/+1000, +130/+1000, and with the OC Scan results that were higher than +90.

So at the moment I'm just running +130/+1000.


----------



## xer0h0ur

That is the one thing I haven't tried yet. I gotta play some games on my 4K monitor again to see how that is going to shake out. 

Here is some impressive food for thought, though. I got into 4K gaming at the moment 60 Hz 4K monitors were hitting the market. Back then I was using a 295X2, and I later tri-fired it along with a 290X. Let's just say that my overclocked 2080 Ti takes a steaming dump on the 295X2 + 290X:

You can basically ignore the CPU score, since I went from a 4930K back then to the 4960X that is in the rig now. But that graphics score, though!

https://www.3dmark.com/fs/16589074 <------2080 Ti
https://www.3dmark.com/fs/5358452 <------295X2 + 290X

I more or less had abandoned playing at 4K back then because it was only viable when crossfire was scaling well and drivers worked. Now I can actually get back to it and enjoy it without the headaches of crossfire or sli.


----------



## raider89

xer0h0ur said:


> That is the one thing I haven't tried yet. I gotta play some games on my 4K monitor again to see how that is going to shake out.
> 
> Here is some impressive food for thought though. I got into 4K gaming at the moment that 60Hz 4K monitors were hitting the market. Back then I was using a 295X2 and I later tri-fired it along with a 290X. Lets just say that my overclocked 2080 Ti takes a steaming dump on the 295X2 + 290X:
> 
> You can basically ignore the CPU score since I went from a 4930K back then to a 4960X that is in the rig now. But that graphics score tho
> 
> https://www.3dmark.com/fs/16589074 <------2080 Ti
> https://www.3dmark.com/fs/5358452 <------295X2 + 290X
> 
> I more or less had abandoned playing at 4K back then because it was only viable when crossfire was scaling well and drivers worked. Now I can actually get back to it and enjoy it without the headaches of crossfire or sli.


I don't know what I'm doing wrong, lol; my scores are so low compared to others here. I have the 2080 Ti XC Ultra using the 130% EVGA BIOS.


----------



## xer0h0ur

Well, remember I was showing Fire Strike Extreme scores for the sake of a fair comparison between that old setup and now. I even dropped the clocks on my RAM to emulate the older setup as closely as possible. I wouldn't have the faintest idea how I would score on Windows 10 with Time Spy.


----------



## nycgtr

Got my Bitspower blocks in today. They really stepped it up for this gen, it seems; even the screws show great attention to detail.


----------



## zhrooms

Hulk1988 said:


> The Hybrid version will be the best on the current market because it is reference and you can flash to the 380W Bios and you will stay at 50C.


 
Yes, the MSI Sea Hawk X is currently the only hybrid out right now, and it is indeed compatible with the Galax/KFA2 380 W BIOS. It costs a bit more, but it's without question the best card out right now, *if you switch the BIOS*.
 


asdkj1740 said:


> Even with der8auer using an external device to hack the card, the power limit still can't be fully disabled, meaning NVIDIA must have done something further to block us from breaking through like we could on Pascal or even Maxwell.
> Turing has new power monitoring, no more INA3221, so the old trick is dead. Sadly, NVIDIA is more and more restrictive toward users.


 
I think it is too early to tell; the card is barely even out. Of course the old trick won't work, but it's probably just a matter of time until there is a new "trick".

It took Elmor three weeks after the reviews dropped to upload the XOC BIOS for the GTX 1080 Ti (close to the release of the STRIX card), and this launch is catastrophic in comparison. Have some faith!
 


asdkj1740 said:


> 290 W max BIOS, 8700K at 1.35 V / 5 GHz, Antec NeoECO II 550W (dual 12 V rails at 30 A each, 540 W max combined; OEM by Delta), 29 C ambient temp.
> I was playing a game and got a shutdown after ~1 hour. The back ventilation grille on the power supply was super hot, so I guess I triggered my PSU's over-temperature protection.


 
That is a garbage PSU, though. A Founders Edition (320 W) overclocked on water peaks around 37.5 A, and 29 C ambient is high; I'm not the least bit surprised the PSU gave up after an hour. The dual-rail design is the issue. I like your style though, a $1,200 graphics card and a $50 PSU.







 


kx11 said:


> Tomorrow my Palit GamingPro OC 2080 Ti arrives, and from reading lots of comments it seems to have a solid BIOS. I don't feel good about flashing another BIOS.


 
I don't know which comments you've read, but it's not solid; it's worse than the Founders Edition (300W vs 320W). Meanwhile Gigabyte is at 366W and Galax/KFA2 at 380W.

But if you don't care *that much* about overclocking it does not matter.
 


BigMack70 said:


> My card looks like it won't do anything over 1900 MHz on air (avg sustained clock) in several games.
> 
> This is one of the most finnicky cards I've ever had. Can do 2050 MHz in benches but 1900 MHz games? dafuq?


 
Card? BIOS?
Power Limit?
Which Games?

Preferably show MSI Afterburner graph so we can see power limit, temperature, usage and core clock in a game, where you claim it can barely even do 1900MHz.
 


Gustavoh said:


> I'm thinking about putting it under water. According to the thread's first page, the Gainward Phoenix GS is a reference PCB, but it does not show up as compatible with EK's nor Phantek's waterblocks on their websites. Is it because it really isn't compatible (despite the reference PCB) or is it because EK and Phantek haven't gotten a chance to confirm it (the card seems to have just come out in Europe)?


 
Yes, it is listed as Unknown or Missing, so it is simply not confirmed. Pictures of the card from Gamescom show the NVIDIA logo directly on the PCB, which means it is NVIDIA reference. Check for yourself: does it say NVIDIA in white text at the bottom?

Also, if you would not mind, it would be very helpful if you could remove the cooler and take a picture of it for us here; it helps other people just like yourself figure out if their card is compatible. If you check the original post, where the *Reference PCB* text is underlined it is a link and you can see the PCB. But the Gainward Phoenix and a few other cards are still missing, with no reviews and very few (if any) owners here. You might be the only one.



eeeven said:


> In yesterday's live-stream at Gamers Nexus, he said that there is an XOC BIOS for 2080 Ti cards available and that it can be found here at oc.net. I can't find it.


 
There is no XOC BIOS yet, as far as we know. What we do have is the supposed *Galax/KFA2 RTX 2080 Ti OC* BIOS, which has the highest power limit so far (except the private 400W one used by public figures such as well-known overclockers and youtubers), but it still limits overclocking. I have it hosted & linked at the bottom of the original post of this thread.
 


ksd8808 said:


> Would you please tell me how did the EK waterblock conflict with the PCB, and how did you manage to make it work?
> 
> Because a friend of mine owns an Gaming OC and wants to install an EK waterblock onto it, however he is wondering if it would work since the PCB of gaming OC is a little bit different from the founder edition's.


 
Yes, he has the Gaming OC; click *here* to see what the problem is. It is a reference card but with an added 4-pin connector at the top for the cooler (I believe it is for RGB; he asked that question himself, so neither of us knows, but other partner cards usually power the fans from the 4-pin at the bottom). What he did was basically destroy it (remove the black connector itself & bend the pins) so the block would fit.

Obviously not recommended; the warranty is very likely shot, and it will be difficult to sell the card for a good price if the cooler is not fully functional (RGB). Unless he actually manages to repair it, which is not impossible, but I doubt you could call it easy.
 


cstkl1 said:


> I had no issue flashing the Galax 380W BIOS (2x8-Pin) on the Gaming X Trio (2x8-Pin, 1x6-Pin), other than the first fan will ramp up and the second one had to do manual ramp, seems to work fine.


 
Interesting, what I am curious about is if you can restore the card using the original BIOS, so that the fans work as intended again? Do you mind testing that out and telling us how it went?
 


FreeElectron said:


> When will the 2080TIs be available (for MSRP prices)?


 
Probably when they actually get in stock. Right now every store is fighting for the few cards being sent out; as soon as the shelves start to fill up, prices will lower.


----------



## xer0h0ur

The only gripes I had about the EKWB and backplate I received were that apparently EK is too cheap to include a printout of the instructions anymore. They always used to have one. There were also errors with diagrams where you had to figure out for yourself where the screw went. The greatest annoyance was that they forgot to give me the screws altogether for the backplate. The only reason I managed to install it was because I had left over screws from the various blocks and backplates I have ordered from them over the years.


----------



## exploiteddna

FreeElectron said:


> Ok. but, those prices are insane


They will come down, trust me. When sales decline and 10-series stock begins to dwindle, you will see the MSRP become more prevalent. Save this post to mark my words.


----------



## enigmatic23

xer0h0ur said:


> The only gripes I had about the EKWB and backplate I received were that apparently EK is too cheap to include a printout of the instructions anymore. They always used to have one. There were also errors with diagrams where you had to figure out for yourself where the screw went. The greatest annoyance was that they forgot to give me the screws altogether for the backplate. The only reason I managed to install it was because I had left over screws from the various blocks and backplates I have ordered from them over the years.


Same here. Backplate screws missing, and using the online instructions was a pain. Luckily it all went together with a few extra screws, and temps are decent. Can't break 50°C with the Galax BIOS, with ambient around 26°C


----------



## xer0h0ur

enigmatic23 said:


> Same here. Backplate screws missing and using the online instructions was a pain. Luckily it all went together with a few extra screws and temps are decent. Cant break 50c with Galax bios and ambient is around 26c


I haven't gone past 47c but I still have plenty of games to try out yet.


----------



## marcolisi

Hi guys, is it possible to SLI a titan x pascal with a msi 2080 ti trio ?


----------



## Fiercy

xer0h0ur said:


> Which card to you have @Fiercy


Gigabyte 2080 Ti OC, and very happy with it, although the waterblock didn't fit it exactly and I heard some bad things about their RMA process. At least it clocks well.


I am running 1809 Windows build myself so for me it causes no trouble with the Scanner.


----------



## BigMack70

zhrooms said:


> Card? BIOS?
> Power Limit?
> Which Games?
> 
> Preferably show MSI Afterburner graph so we can see power limit, temperature, usage and core clock in a game, where you claim it can barely even do 1900MHz.


Nvidia Founders card/BIOS. Here's what a round of BF4 looks like. If I push clocks any higher it becomes unstable. There are a few games I've tried that will sit higher than 1920 but most games pretty much sit at 1920 and fluctuate between 1890 and 1965, exactly like BF4. It seems like it's a fairly representative game, and it's the most sensitive I've found to instability so far so it's been a good test. A few games will run for a while at +170 on my card but BF4 will hard lock upon loading the menus at anything higher than +140. The BF4 menus also freeze on my OC scanner profile that it found (which is roughly equivalent to a +150 or +160 offset depending on where you look in the curve).


----------



## xer0h0ur

Not surprised you're seeing clocks drop with 98% GPU usage. I still have to try a game that will peg my GPU consistently that hard.


----------



## BigMack70

xer0h0ur said:


> Not surprised you're seeing clocks drop with 98% GPU usage. I still have to try a game that will peg my GPU consistently that hard.


I take it 1440p doesn't do that so much? Pretty much every game at 4k maxes out my GPU all the time unless I enable vsync.


----------



## mr2cam

raider89 said:


> I don't know what im doing wrong lol, my scores are so low compared to others here. I have the 2080ti xc ultra using 130%evga bios.


Ya that is quite low, I have the exact same setup as you and I did 13912 with gsync on


----------



## xer0h0ur

BigMack70 said:


> I take it 1440p doesn't do that so much? Pretty much every game at 4k maxes out my GPU all the time unless I enable vsync.


I plan on giving some titles a go at 4K tonight so I will be able to give you a better idea of how that will go for me.


----------



## zhrooms

BigMack70 said:


> Here's what a round of BF4 looks like.


 
Here is what I want you to do, open *Afterburner* > Properties > Monitoring > *Select*: *GPU core clock* > *Edit*: Graph limits to 1800 - 2200 > Hit: OK to Save and Close

Now repeat the *Select* & *Edit* with these,

*Usage* (75 - 100)
*Memory Clock* (7000 - 9000)
*Power* (75 - 150)
*Framerate* (0 - 300)

Makes it easier to see the detail of what is going on, and it is important.

Please do this and upload a new image, as well as a second *game* (demanding), just for a minute or two running around will suffice.

_(You can disable the ones you do not want by unchecking the checkmark left of the graph names. *Example:* Temp Limit, Voltage Limit, No Load Limit and so on.)_
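If anyone prefers logging these values outside Afterburner, `nvidia-smi` can dump the same metrics as CSV (e.g. `--query-gpu=clocks.gr,utilization.gpu,power.draw --format=csv -l 1`). A minimal parser sketch; the sample line below is made up, not a real capture:

```python
import csv
import io

# Sample output shape from:
#   nvidia-smi --query-gpu=clocks.gr,utilization.gpu,power.draw --format=csv -l 1
# The line below is a fabricated example for illustration.
sample = (
    "clocks.current.graphics [MHz], utilization.gpu [%], power.draw [W]\n"
    "1920 MHz, 98 %, 310.5 W\n"
)

def parse_row(text: str) -> dict:
    """Parse one nvidia-smi CSV snapshot into numeric fields."""
    rows = list(csv.reader(io.StringIO(text)))
    # Each value comes with a unit suffix ("1920 MHz"); keep the number only.
    values = [v.strip().split()[0] for v in rows[1]]
    return {
        "core_mhz": int(values[0]),
        "usage_pct": int(values[1]),
        "power_w": float(values[2]),
    }

print(parse_row(sample))
```

Handy for graphing a whole gaming session afterwards instead of screenshotting the Afterburner window.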


----------



## l88bastar

marcolisi said:


> Hi guys, is it possible to SLI a titan x pascal with a msi 2080 ti trio ?



no


----------



## cstkl1

BigMack70 said:


> OC scanner is a bugged mess for me using the newest MSI Afterburner beta. It's given me an entire range, from +33 MHz to +160 MHz with all other variables being the same. Any time it gives me above +100 MHz, it's not 100% game stable (fails BF4).
> 
> I also got basically the same Timespy Extreme scores at +170/+1000, +150/+1000, +130/+1000, and the OC scan results that were higher than +90.
> 
> So at the moment I'm just running +130/+1000.





zhrooms said:


> Interesting, what I am curious about is if you can restore the card using the original BIOS, so that the fans work as intended again? Do you mind testing that out and telling us how it went?

restored. all working like normal

So nobody gonna address that fake 1755 boost that does nothing? lol


----------



## BigMack70

zhrooms said:


> Here is what I want you to do, open *Afterburner* > Properties > Monitoring > *Select*: *GPU core clock* > *Edit*: Graph limits to 1800 - 2200 > Hit: OK to Save and Close
> 
> Now repeat the *Select* & *Edit* with these,
> 
> *Usage* (75 - 100)
> *Memory Clock* (7000 - 9000)
> *Power* (75 - 150)
> *Framerate* (0 - 300)
> 
> Makes it easier to see the detail of what is going on, and it is important.
> 
> Please do this and upload a new image, as well as a second *game* (demanding), just for a minute or two running around will suffice.
> 
> _(You can disable the ones you do not want by unchecking the checkmark left of the graph names. *Example:* Temp Limit, Voltage Limit, No Load Limit and so on.)_


I genuinely have no idea what you are trying to accomplish by effectively zooming in on what was already posted. The game sits at 1920 MHz most of the time, as shown. It fluctuates slightly +/- two clock steps (1890 - 1950 MHz) for very brief moments before returning to 1920.

And you can either believe me or not when I say this is typical of most games I've tested. I don't really care. Your belief or disbelief doesn't make my dud of a card clock any better.


----------



## exploiteddna

nycgtr said:


> https://www.3dmark.com/fs/16588085
> 
> Broke 9k with the 380 watt bios on a gigabyte windforce. Kinda really wish I could flash another bios for my trio sighhh


whats up with the trio? is it not amenable to third party bios ROMs?


----------



## zhrooms

michaelrw said:


> whats up with the trio? is it not amenable to third party bios ROMs?


 
Seems to work fine according to @cstkl1 (Post #1 & #2)

Minor fan issue but other than that didn't brick the card. Curious about the performance increase.


----------



## EQBoss

xer0h0ur said:


> I haven't gone past 47c but I still have plenty of games to try out yet.


Nice temp, whats the rad setup?


----------



## zhrooms

BigMack70 said:


> I genuinely have no idea what you are trying to accomplish by effectively zooming in on what was already posted. The game sits at 1920 MHz most of the time, as shown. It fluctuates slightly +/- two clock steps (1890 - 1950 MHz) for very brief moments before returning to 1920.
> 
> And you can either believe me or not when I say this is typical of most games I've tested. I don't really care. Your belief or disbelief doesn't make my dud of a card clock any better.


 
I do believe everything in your image. 

Zooming in just shines a brighter light on what we want to see, so we can confirm with higher accuracy that, for example, the GPU usage fits with the core clock and framerate. *Several of those graphs go hand in hand; if one changes, so does one or two more.* And when your graphs have the wrong graph limits, we can't see it (or only in limited detail).

Your image can be helpful to people, which means the higher accuracy your image has the better, I'd be more confident sharing an image with detail than rough averages. That's all.


----------



## Edge0fsanity

I'm going to have to return my FE as soon as I can find a replacement. I just can't deal with the horrendous coil whine this card makes. It's very audible over the card's fans running at full speed. It's driving me nuts, and putting it into a quiet waterloop will make it 10x worse. I've owned a lot of cards over the years and never had coil whine like this. Combine that with below-average clocks and a locked BIOS, and this card lost the lottery bad.

I wish they would just release the titan already. I've had much better experiences buying those.


----------



## cstkl1

ASUS released a GPU tweak utility just for rtx

they have osd. pretty nice font.. here is the kicker.. it has adverts.. *** right.


----------



## cstkl1

Edge0fsanity said:


> I'm going to have to return my FE as soon as i can find a replacement. I just can't deal with the horrendous coil whine this card makes. Its very audible over the cards fans running at full speed. Its driving me nuts and putting it into a quiet waterloop will make it 10x worse. I've owned a lot of cards over the years, never had coil whine like this. Combine that with below average clocks and a locked bios and this card lost the lottery bad.
> 
> I wish they would just release the titan already. I've had much better experiences buying those.


if it's based off the quadro rtx 8000.. not much of an improvement

4608 cuda cores vs 4352 and 576 tensor cores vs 544


----------



## Zammin

xer0h0ur said:


> The only gripes I had about the EKWB and backplate I received were that apparently EK is too cheap to include a printout of the instructions anymore. They always used to have one. There were also errors with diagrams where you had to figure out for yourself where the screw went. The greatest annoyance was that they forgot to give me the screws altogether for the backplate. The only reason I managed to install it was because I had left over screws from the various blocks and backplates I have ordered from them over the years.





enigmatic23 said:


> Same here. Backplate screws missing and using the online instructions was a pain. Luckily it all went together with a few extra screws and temps are decent. Cant break 50c with Galax bios and ambient is around 26c


Jeez, I hope mine comes with the screws.. I don't have any spares to use. That seems like a major oversight on EK's part :/

Has anyone here gotten the backplate and received the screws with it? If it's a common problem, I should call the store that I ordered it from and ask them to check the contents of the box.


----------



## Zammin

Apologies if this has been covered or if I've misunderstood something, but I'm a bit confused with some of the wattage ratings vs power limits with different BIOS versions. I read on the first page that the EVGA XC/Ultra BIOS is 338W max and the Gigabyte Gaming OC BIOS 366W max, but from what I remember the EVGA cards have a higher power limit (130%) than the Gigabyte cards (which I thought was 123%). Does this mean that the power target range isn't standardised across difference reference cards and the Gigabyte card is capable of achieving a higher power limit?
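As far as I understand it, the power target percentage is relative to each card's own default power limit, so the sliders aren't comparable across vendors. A quick check with the numbers above (the default wattages are inferred from the max values, not confirmed specs):

```python
def max_watts(default_watts: float, limit_pct: int) -> float:
    """Power target sliders scale each card's own default power limit."""
    return default_watts * limit_pct / 100

# Inferred defaults: EVGA XC 338W / 1.30 = 260W, Gigabyte 366W / 1.23 ≈ 297.6W.
evga = max_watts(260, 130)        # 338.0 W at 130%
gigabyte = max_watts(297.6, 123)  # ~366.0 W at 123%

# So a higher percentage does not imply a higher absolute power limit.
print(evga, round(gigabyte))
```

Which would mean yes: the range is not standardized, and the Gigabyte BIOS can reach a higher absolute limit despite the smaller percentage.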


----------



## Asmodian

Zammin said:


> Jeez I hope mine comes with the screws.. I don't have any spares to use. That seems like a major oversight on EKs part :/


They tape them into the flap on one side of the box, it is easy to miss them unless you open the box on both ends. My back plate from EKWB did come with screws. I have to agree that the instructions were pretty bad this time (they didn't include them but there is a pdf on the product page).


----------



## enigmatic23

Zammin said:


> Jeez I hope mine comes with the screws.. I don't have any spares to use. That seems like a major oversight on EKs part :/
> 
> Has anyone here gotten the backplate and received the screws with it? if its a common problem I should call the store that I ordered it from and ask them to check the contents of the box.


ok just checked my backplate box again and they are well hidden in the recess of the fold on one end of the box. False alarm at least for me


----------



## nycgtr

Got my Bitspower Lotan blocks in. Some quick rough pics. Nicer than last gen's. Works with the stock backplate, and they included an extra O-ring, standoffs, and what I assume they call Bitspower premium pads; Fujipoly is no longer included.


----------



## Edge0fsanity

cstkl1 said:


> if based off the quadro rtx 8000.. not much of a improvement
> 
> 4608 cuda cores vs 4352 and 576 tensor cores vs 544

Not interested in the Titan for the performance so much as the buying experience. Buy it on launch and it ships next day without delays. Chips are better binned. Never had issues with one.

I'm just frustrated with this. I could live with the low clocks and throttling; that isn't going to affect games much. It's the coil whine that is going to force me to send it back. It really is awful. Such a pain to deal with this and then find a replacement when there is no stock.


----------



## Zammin

Asmodian said:


> They tape them into the flap on one side of the box, it is easy to miss them unless you open the box on both ends. My back plate from EKWB did come with screws. I have to agree that the instructions were pretty bad this time (they didn't include them but there is a pdf on the product page).





enigmatic23 said:


> ok just checked my backplate box again and they are well hidden in the recess of the fold on one end of the box. False alarm at least for me


That's good to know, thanks for the confirmation guys.



nycgtr said:


> Got my bitspower lotan blocks in. Some quick rough pics. Nicer than last gens. Works with stock backplates and included an extra oring stand off and I assume what they call bitspower premium pads no longer fujipoly included.


That looks good as, let us know how it performs once it's in the loop.


----------



## raider89

mr2cam said:


> Ya that is quite low, I have the exact same setup as you and I did 13912 with gsync on

What oc are you doing? and what temps


----------



## ocvn

cstkl1 said:


> the msi 1755 really a joke. flashed another bios more inline with FE clocks..
> 
> its still only boosting to 1890. so that 1755 means nothing. looking at the volt/freq curve nothing changed
> 
> oh yeah no issue flashing to galax 380 bios on trio.. just the first fan will ramp up and second one had to do manual ramp.
> 
> other than that seems to work. also seems a bit pointless with my card which i think even under water maybe can be stable only arnd 2025-2055
> 
> btw since nobody talked about it.. nvidia abandon their more than a decade clockgen of 12.5 to 15 for turing.


Thanks for your info, will try it on my Trio X in the next few days.


----------



## nycgtr

cstkl1 said:


> the msi 1755 really a joke. flashed another bios more inline with FE clocks..
> 
> its still only boosting to 1890. so that 1755 means nothing. looking at the volt/freq curve nothing changed
> 
> oh yeah no issue flashing to galax 380 bios on trio.. just the first fan will ramp up and second one had to do manual ramp.
> 
> other than that seems to work. also seems a bit pointless with my card which i think even under water maybe can be stable only arnd 2025-2055
> 
> btw since nobody talked about it.. nvidia abandon their more than a decade clockgen of 12.5 to 15 for turing.


Interesting. You got me wanting to try this. What do you mean by manual ramp? Auto fan stopped working? There's 3 fans


----------



## exploiteddna

Edge0fsanity said:


> Not interested in the Titan for the performance so much as the buying experience. Buy it on launch and it ships next day without delays. Chips are better binned. Never had issues with one.
> 
> I'm just frustrated with this. I could live with the low clocks and throttling, that isn't going to effect games much. It's the coil whine that is going to force me to send it back. It really is awful. Such a pain in the ass to deal with this then find a replacement when there is no stock.


i mean.. this is the first coil whine ive heard of for 2080 ti. you prob just got bad luck. try to return it. but its not like the whole line of reference 2080ti are suffering from whine.



nycgtr said:


> Got my bitspower lotan blocks in. Some quick rough pics. Nicer than last gens. Works with stock backplates and included an extra oring stand off and I assume what they call bitspower premium pads no longer fujipoly included.


nice


----------



## Section31

Nice block. Also nice that it supports the original backplate; not having to buy a backplate is a big saving. Hope performance is good.

I'm waiting for the black nickel Heatkiller 2080 Ti blocks to come in. Need to find a friend's computer using air cooling to test out the 2080 Ti before the block comes and I start the installation process.


----------



## xer0h0ur

Okay, after playing some games that were pegging my GPU into the high 90s of usage, it was dropping my clock to 2055 MHz, bouncing between 49 and 50°C.


----------



## Asmodian

3DMark graphics scores comparing my 2080 Ti vs. Titan X (Pascal), both on the same system and at almost the same clocks (2055 vs. 2050). This is also about the max overclock on both of my GPUs.

Time Spy shows a much larger increase compared to Fire Strike. It is not surprising, but these GPUs only offer a significant upgrade for newer games.

Time Spy Extreme: 7562 (+44.5%)
Time Spy: 15684 (+40.3%)
Fire Strike Ultra: 9117 (+17.3%)
Fire Strike Extreme: 18350 (+14.3%)


----------



## Emmett

xer0h0ur said:


> Okay after playing some games that were pegging my GPU into the high 90's usage it was dropping my clock to 2055MHz bouncing between 49 and 50c temps.


at what voltage was your card running?


----------



## Rob w

My 2080 Ti FE and EK block arrived Thursday; normally I would have rushed to get them in the rig, but to be honest the excitement got lost somewhere!
So they are still sitting in their boxes unopened. If I don't install them soon, I think I will just sell them and maybe skip this gen.
Decisions, decisions.


----------



## gavros777

Is the gigabyte card the only one that offers 4 year warranty?
One of my Titan X Maxwell cards died just a couple of months after its 3-year warranty expired, so 4 years seems a lot better than 3 to me.


----------



## asdkj1740

gavros777 said:


> Is the gigabyte card the only one that offers 4 year warranty?
> One of my titan x maxwell died just couple months after its 3 year warranty expired, so 4 years seem a lot better than 3 to me.


Strictly speaking it is 3+1; it requires buyers to register the card online within 30 days to get that +1 extended year of warranty cover.
Zotac has 5 years, and again, it is 2+3, requiring online registration within a limited time.


----------



## alex1990

Hi all! Does anyone have a modded or updated BIOS for the MSI Trio? My OCed card hits the power limit, and the 330W max is not enough.
I hit the 110% limit in BF1.


----------



## EdgeCrusher86

Beware that with the GALAX 380W BIOS you may DAMAGE YOUR MAINBOARD: fully overclocked, the card pulls 6 A on average, and peaks go far beyond 8 A out of the PCI-E slot, while the spec says 5.5 A. If you set the card to 116% PT (348W), you're within the specification.
That can also be the case with Gigabyte's F3 BIOS (366W) or the EVGA FTW3 Ultra (373W), for example, or of course the XOC 400W one.

https://translate.google.com/transl...cht-die-beste-idee-ist-igorslab/2/&edit-text=



Tomshw.de -> Google Translate said:


> What earned AMD and the Radeon RX 480 some criticism has to be said here too: the power consumption of 380 watts would not have been 100% compliant even with the two 8-pin power connectors. But while the external ATX plugs still have the fattest reserves, on the motherboard slot the fun is eventually over. And I also realized why MSI caps at 350 watts: the balancing between the rails is then no longer quite so easy to manage.
> 
> Fully overclocked, at 380 watts there are already 6 amps on average on the motherboard slot instead of the allowed 5.5 amps. And that is only the average! The peaks go far beyond 8 amps. It won't wreck a normal motherboard right away, but other components can also suffer from such wild currents, and then it is the on-board sound that chirps all the misery into your ear.
> 
> If the power target is reduced to a maximum of "only" 350 watts, the current flow on the motherboard drops back to the permitted 5.5 amps. That would be the compromise I would offer the arch-conservatives in bean-counter mode.
> The risk here is likely to be limited, because whoever buys such a card generally does not use a cheap, wobbly 4-layer motherboard from the bargain bin, but the finest multilayer board in the mid three-digit range. That can handle it easily.


Bockwurst is a cooked sausage by the way.
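The slot figures above follow directly from P = I × V; a quick sanity check, assuming (my assumption, standard for GPU slot power but not stated in the article) that the draw is on the slot's 12 V rail:

```python
# Sanity check of the slot-current figures above using P = I * V.
# Assumption: the card's slot power comes from the 12 V rail;
# the PCI-SIG current limit cited in the article is 5.5 A.

SLOT_VOLTAGE = 12.0   # volts

def slot_power_w(current_a, voltage=SLOT_VOLTAGE):
    """Watts drawn from the slot at a given current."""
    return current_a * voltage

print(slot_power_w(5.5))  # 66.0 W -> the in-spec ceiling
print(slot_power_w(6.0))  # 72.0 W -> Igor's measured average at a 380 W PT
print(slot_power_w(8.0))  # 96.0 W -> the transient peaks he observed
```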


----------



## zhrooms

Rob w said:


> My 2080ti FE and ek block arrived Thursday, normally I would have rushed to get it in the rig but to be honest the excitement got lost somewhere!
> So they are still sat in their boxes unopened? If I don’t install them soon then I think I will just sell them and maybe just miss this gen.
> Decisions decisions?


 
If you're not even in a rush to get it into the rig, maybe you shouldn't keep it. For me the choice is easy, I *need* the performance for high refresh rate 1440p and 4-8K.

Secondly, there are 10 reports in this thread alone that the retail Founders Edition can *not* be flashed. That is a deal breaker for many, especially on water, where temps are not restricting the card; the lower (320W) power limit becomes a big annoyance, making the card several percent slower.
 


gavros777 said:


> One of my Titan X Maxwells died just a couple of months after its 3-year warranty expired, so 4 years seems a lot better than 3 to me.


 
That was just very bad luck; the odds of a card dying are *extremely low*, so it happening to you again is like one in a million. It is *not* sound reasoning to base your next purchase on an extended warranty.
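To put the "one in a million" point on rough footing: independent failures multiply. A small sketch, where the per-card failure rate is an illustrative assumption and not a figure from this thread:

```python
# Rough illustration of the point above: two independent card failures
# are far less likely than one. The 2% per-card rate over the warranty
# period is an assumption for illustration only.

p_fail = 0.02                      # assumed chance one card dies in ~3 years
p_both = p_fail ** 2               # independent failures multiply

print(f"one card: {p_fail:.2%}")   # 2.00%
print(f"two cards: {p_both:.4%}")  # 0.0400% -> about 1 in 2500
```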
 


alex1990 said:


> Does anyone have a modded or updated BIOS for the MSI Trio? My OCed card sits at the power limit, and the max 330W is not enough


 
Yes, @cstkl1 successfully flashed his *Gaming X Trio*. Read his post.

Both @ocvn and @nycgtr have shown interest in flashing their cards as well, so that makes you the third. More people need to try it before I am confident putting the information in the original post.








 I am still waiting on my X Trio.


----------



## OBR

Hi, I have a problem with flashing my FE 2080 Ti; I tried everything but still get a "device ID mismatch" error. Is it possible to flash the FE edition? Thanks


----------



## Edge0fsanity

michaelrw said:


> i mean.. this is the first coil whine ive heard of for 2080 ti. you prob just got bad luck. try to return it. but its not like the whole line of reference 2080ti are suffering from whine.


Yeah, real bad luck. I'm going to try and get them to replace it as a warranty issue i think. We'll see how that goes, i've got 30 days regardless to get my money back and find a replacement.


----------



## Edge0fsanity

OBR said:


> Hi, I have a problem with flashing my FE 2080 Ti; I tried everything but still get a "device ID mismatch" error. Is it possible to flash the FE edition? Thanks


The BIOS is locked down on the FE for now, so no flashing it. Hopefully someone comes up with a workaround.


----------



## asdkj1740

https://www.techpowerup.com/vgabios...&version=&interface=&memType=&memSize=&since=
More and more BIOSes are getting uploaded to the TPU BIOS collection.
We still need that XOC 400W BIOS from EVGA.


----------



## Esenel

Edge0fsanity said:


> Yeah, real bad luck. I'm going to try and get them to replace it as a warranty issue i think. We'll see how that goes, i've got 30 days regardless to get my money back and find a replacement.


My 2080Ti FE also has coil whine :-(
Really annoying due to a completely silent loop :-/

But so did my MSI Gaming X 1080.
So I will not replace it, because who knows if the next one has the same issue as well.


But temps are fine on the EK block:

After 15 min of looping TimeSpy Extreme graphics test 2 I get 46°C GPU, ±1°C depending on the loop's fan speed.
And the EK backplate also distributes the heat quite well.


----------



## cstkl1

Edge0fsanity said:


> cstkl1 said:
> 
> 
> 
> 
> 
> Edge0fsanity said:
> 
> 
> 
> I'm going to have to return my FE as soon as i can find a replacement. I just can't deal with the horrendous coil whine this card makes. Its very audible over the cards fans running at full speed. Its driving me nuts and putting it into a quiet waterloop will make it 10x worse. I've owned a lot of cards over the years, never had coil whine like this. Combine that with below average clocks and a locked bios and this card lost the lottery bad.
> 
> I wish they would just release the titan already. I've had much better experiences buying those.
> 
> 
> 
> if based off the Quadro RTX 8000.. not much of an improvement
> 
> 4608 cuda cores vs 4352 and 576 tensor cores vs 544
> 
> 
> Not interested in the Titan for the performance so much as the buying experience. Buy it on launch and it ships next day without delays. Chips are better binned. Never had issues with one.
> 
> I'm just frustrated with this. I could live with the low clocks and throttling, that isn't going to effect games much. It's the coil whine that is going to force me to send it back. It really is awful. Such a pain in the ass to deal with this then find a replacement when there is no stock.

yeah, I'd be raging mad..
1. FE was late to ship out while for a lot of AIB cards you could just walk into a store and buy one
2. NVIDIA store warranty limitation
3. And now they did a BIOS protect??

too many strikes


----------



## Xeq54

So, it's finished: mounted the block + shunt modded it with LM.

The shunt mod was very tricky; I had to redo it 4 times, removing a bit of LM each time until it stopped going into safe mode (300MHz lock). Only about a third of the width of the shunt is covered with LM now; otherwise it went into safe mode right away.

TimeSpy Extreme score went up by about 200 points, not much, but the frequency is stable now at 2100 mostly, with occasional dips to 2085. Voltage stays locked at 1.093.
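For anyone curious why partial LM coverage was the fix: the liquid metal forms a resistor in parallel with the current-sense shunt, so the controller under-reads power. A rough sketch, where both resistance values are illustrative guesses and not measurements from this card:

```python
# Why partial LM coverage mattered: the controller reads power from the
# voltage across a shunt resistor. LM bridging the shunt adds a parallel
# path, lowering effective resistance, so the card under-reports power.
# Resistor values below are illustrative assumptions only.

def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

R_SHUNT = 0.005          # 5 mOhm sense resistor (typical order of magnitude)
R_LM = 0.010             # assumed resistance of the liquid-metal bridge

r_eff = parallel(R_SHUNT, R_LM)
underread = r_eff / R_SHUNT          # fraction of true power the card sees

print(f"effective shunt: {r_eff * 1000:.2f} mOhm")
print(f"card reads {underread:.0%} of true power")
# Too much LM (too low R_LM) under-reads so hard the card treats it as a
# sensor fault and drops to safe mode -- hence removing LM until it booted.
```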


----------



## Edge0fsanity

Esenel said:


> My 2080Ti FE has also coil whine :-(
> Really annoying due to a complete silent loop :-/
> 
> But so had my MSI Gaming X 1080 as well.
> So I will not replace it, because who knows if the next one has /(not) the same issue as well.
> 
> 
> But temps are fine on EK Block:
> 
> After 15 Min TimeSpy Extreme graphics test 2 Loop I get 46°C GPU +-1°C depending on the loops fan speed.
> And the EK backplate also distributes the heat quite well.


Every card has coil whine if you listen for it. My TXPs would squeal noticeably when applying load to them cold, but as soon as they warmed up it would go away unless you opened the case and stuck your head close to them. This 2080 Ti however sounds like a damn buzz saw. My system is completely air cooled atm while I'm in between builds. With every fan in the case and the GPU cranked to max RPM, it's still audible over them with the case door closed and the PC sitting on the floor across the room.

I want to put my block on it so bad and just ignore the problem, but I know it'll drive me insane.


----------



## Edge0fsanity

Xeq54 said:


> So, its finished, mounted the block + shunt modded it with LM.
> 
> Shunt mod was very tricky, had to redo it 4 times, removing a bit of LM each time until it stopped going into safe mode (300mhz lock), only about third of the width of the shunt is covered with LM now, otherwise it went into safe mode right away.
> 
> TimeSpy Extreme score went up by about 200 points, not much, but the frequency is stable now at 2100 mostly, with occasional dips to 2085. Voltage stays locked at 1.093.


any pics of the lm coverage once you got it working correctly?


----------



## gavros777

I game at 4K 60Hz with an i7 3770K at 4.5GHz.
Is my setup good enough for the 2080 Ti?
Also, in future games that use 8 cores, would an i7 with hyperthreading be able to handle them?


----------



## cstkl1

nycgtr said:


> Interesting. You got me wanting to try this. What do you mean by manual ramp? Auto fan stopped working? There's 3 fans


In default there are two fan controllers, same as the FE.

Two fans are tied to one controller and a single fan to the other, at lower RPM, so I'm going to assume those are the bigger ones.
What got affected by the flash was fan 2; it was on 0, no movement on auto, so it needs to be set manually. The 1st fan however was at the default 30% (I think that's the NVIDIA FE default).


----------



## cstkl1

might as well disclose this...

hmm, don't use EVGA X1

the clocks don't stay on your presets every time you boot

open MSI AB, hit Ctrl+F, and watch the frequency/voltage curve.

Affected are MSI AB/EVGA X1.
btw this is why my scores are higher at lower clocks.. I check the voltage curve every time.
It tends to downclock a lot, or sometimes upclock, giving you false instability.

The latest version of ASUS GPU Tweak II for RTX seems to be the only one not affected.. but damn that advert thingy on the OSD.. but seriously awesome FONT for the OSD...


----------



## cstkl1

gavros777 said:


> I game at 4K 60Hz with an i7 3770K at 4.5GHz.
> Is my setup good enough for the 2080 Ti?
> Also, in future games that use 8 cores, would an i7 with hyperthreading be able to handle them?


should be fine.. except high-FPS racing games.... the problem with the 3770K is DMI 2.0 on the PCH..... so just don't run a lot of things via the PCH.


----------



## Addsome

EdgeCrusher86 said:


> Beware that with that GALAX 380W BIOS you may DAMAGE YOUR MAINBOARD, because fully overclocked the card pulls 6A on average, and peaks far beyond 8A, out of the PCIe slot, while the spec allows 5.5A. If you set the card to 116% PT (348W), you're within the specification.
> The same can apply to Gigabyte's F3 BIOS (366W), the EVGA FTW3 Ultra BIOS (373W), or of course the 400W XOC one.
> 
> https://translate.google.com/transl...cht-die-beste-idee-ist-igorslab/2/&edit-text=
> 

So how do you determine if your motherboard can handle the extra amps?


----------



## cstkl1

EdgeCrusher86 said:


> Beware that with that GALAX 380W BIOS you may DAMAGE YOUR MAINBOARD, because fully overclocked the card pulls 6A on average, and peaks far beyond 8A, out of the PCIe slot, while the spec allows 5.5A. If you set the card to 116% PT (348W), you're within the specification.
> The same can apply to Gigabyte's F3 BIOS (366W), the EVGA FTW3 Ultra BIOS (373W), or of course the 400W XOC one.
> 
> https://translate.google.com/transl...cht-die-beste-idee-ist-igorslab/2/&edit-text=
> 

re-edit.. read it again

that site is quoting the PCIe SIG spec

interesting... nobody can custom mod the BIOS atm??


----------



## Bighouse

What's with the ridiculous pricing for the RTX 2080 Ti on Amazon? Is this the cryptocurrency crowd thing again, or something else?


----------



## Addsome

cstkl1 said:


> that site is quoting the PCIe SIG spec
> 150 watts for each 8-pin
> 75 watts for the PCIe slot
> so he assumes it's 80 watts from the slot
> 
> he doesn't know the 8-pin can take more than 150, and it's a PCIe SIG spec.


So there's nothing to worry about then?



Bighouse said:


> What's with the ridiculous pricing for the RTX 2080 Ti on Amazon? Is this the cryptocurrency crowd thing again, or something else?


Those are third-party sellers trying to scalp, since the cards are sold out everywhere.


----------



## Glerox

Are all FE cards making coil whine? Mine is arriving Tuesday, so if it's all the cards I might as well return it straight away.


----------



## raider89

EdgeCrusher86 said:


> Beware that with that GALAX 380W BIOS you may DAMAGE YOUR MAINBOARD, because fully overclocked the card pulls 6A on average, and peaks far beyond 8A, out of the PCIe slot, while the spec allows 5.5A. If you set the card to 116% PT (348W), you're within the specification.
> The same can apply to Gigabyte's F3 BIOS (366W), the EVGA FTW3 Ultra BIOS (373W), or of course the 400W XOC one.
> 
> https://translate.google.com/transl...cht-die-beste-idee-ist-igorslab/2/&edit-text=
> 
> 
> 
> Bockwurst is a cooked sausage by the way.


I can't see this damaging our boards.


----------



## GosuPl

https://www.3dmark.com/compare/spy/4532387/spy/4619301#

380W BIOS. But on air with 100% fan RPM the GPU hits 80°C. On liquid cooling this will be great.


----------



## EQBoss

Damn, I'm considering just returning the cards since you can't flash them. But I don't really want to wait for more cards. At least I'll know for future reference to stay away from Founders; NVIDIA didn't lock the 10 series...


----------



## GosuPl

EQBoss said:


> Damn I'm considering just returning the cards since you can't flash them. But I don't really want to wait for more cards. At least I'll know for future reference to stay away from founders, nvidia didn't lock their 10 series...


Can't flash? I just posted a link to a 2080 Ti with the 380W BIOS flashed on a reference PCB. You can flash your 2080 Ti FE.


----------



## raider89

GosuPl said:


> Can't flash? I just posted a link to a 2080 Ti with the 380W BIOS flashed on a reference PCB. You can flash your 2080 Ti FE.


We haven't seen anyone that's been able to flash a BIOS on an FE 2080 Ti.


----------



## Guiler

raider89 said:


> We haven't seen anyone that's been able to flash a BIOS on an FE 2080 Ti.


Honestly I don't *think* it would be too hard to hex edit the BIOS image to put in the FE board ID. I have no idea where to even start, but I bet someone here knows how to do it.


----------



## raider89

Guiler said:


> Honestly I don't *think* it would be too hard to hex edit the BIOS image to put in the FE board ID. I have no idea where to even start, but I bet someone here knows how to do it.


I'm sure someone will figure something out down the line, but it's pretty well locked down right now.


----------



## zhrooms

It is time to get this list going, requirement is to *make a new post* in this thread with *picture(s) of the card* in your *case.*

(List is located at the bottom of the original post)​






 

(Last Updated October 7, 2018) Full list below if anyone is interested; almost all are missing pictures! For example, we have 4 members with the Gaming X Trio, which is by far the biggest card out right now. It is amazing looking, but not a single picture!


Code:


09-22-2018	https://www.overclock.net/forum/27633860-post875.html	dboythagr8	EVGA	XC	Pics	https://www.overclock.net/forum/27634000-post881.html	https://www.overclock.net/forum/27634356-post892.html
09-28-2018	https://www.overclock.net/forum/27641936-post1179.html	Emmett	ASUS	Turbo			
09-29-2018	https://www.overclock.net/forum/27643794-post1243.html	Talon2016	ASUS	Dual			
09-30-2018	https://www.overclock.net/forum/27645744-post1325.html	zocker	ASUS	Dual			
09-28-2018	https://www.overclock.net/forum/27642372-post1195.html	Xeq54	ASUS+Shunt	Dual	Pics	https://www.overclock.net/forum/27656066-post1927.html	
09-22-2018	https://www.overclock.net/forum/27634038-post883.html	Menthol	EVGA	XC Ultra	Pics	https://www.overclock.net/forum/27634038-post883.html	
09-27-2018	https://www.overclock.net/forum/27640876-post1149.html	raider89	EVGA	XC Ultra			
09-29-2018	https://www.overclock.net/forum/27643332-post1225.html	b4db0y	EVGA	XC			
10-06-2018	https://www.overclock.net/forum/27655486-post1887.html	mr2cam	EVGA	XC Ultra			
10-06-2018	https://www.overclock.net/forum/27654980-post1854.html	Foxrun	EVGA	XC Ultra			
10-05-2018	https://www.overclock.net/forum/27653510-post1725.html	cgcross	EVGA+Shunt	XC Ultra			
09-27-2018	https://www.overclock.net/forum/27640890-post1153.html	Not A Good Idea	EVGA+WB	XC (SLI)	Pics	https://www.overclock.net/forum/27647684-post1427.html	
10-03-2018	https://www.overclock.net/forum/27648274-post1448.html	Graphics*****	EVGA+WB	XC			
10-02-2018	https://www.overclock.net/forum/27648628-post1463.html	domenic	EVGA+WB	XC Ultra			
10-03-2018	https://www.overclock.net/forum/27650650-post1584.html	Johnny_Utah	EVGA+WB	XC Ultra	Pics	https://www.overclock.net/forum/27650650-post1584.html	
10-05-2018	https://www.overclock.net/forum/27653180-post1706.html	BigBeard86	EVGA+WB	XC Ultra			
10-06-2018	https://www.overclock.net/forum/27654306-post1798.html	enigmatic23	EVGA+WB	XC			
09-27-2018	https://www.overclock.net/forum/27640628-post1140.html	vmanuelgm	Gainward	Phoenix			
10-02-2018	https://www.overclock.net/forum/27649022-post1486.html	Hulk1988	Gainward	Phoenix			
10-06-2018	https://www.overclock.net/forum/27654574-post1814.html	Gustavoh	Gainward	Phoenix			
10-04-2018	https://www.overclock.net/forum/27651254-post1619.html	Zemach	Gigabyte	Windforce + GOC	Pics	https://www.overclock.net/forum/27651822-post1645.html	
10-05-2018	https://www.overclock.net/forum/27652480-post1676.html	Alonjar	Gigabyte	Windforce			
10-04-2018	https://www.overclock.net/forum/27651588-post1637.html	Fiercy	Gigabyte+WB	Gaming OC	Pics	https://www.overclock.net/forum/27651588-post1637.html	https://www.overclock.net/forum/27652302-post1663.html
09-30-2018	https://www.overclock.net/forum/27645636-post1317.html	asdkj1740	Inno3D	X2 OC	Pics	https://www.overclock.net/forum/27645636-post1317.html	
09-29-2018	https://www.overclock.net/forum/27643306-post1221.html	ocvn	MSI	Gaming X Trio	Pics	https://www.overclock.net/forum/27643306-post1221.html	
09-29-2018	https://www.overclock.net/forum/27644370-post1265.html	GanMenglin	MSI	Ventus OC			
10-02-2018	https://www.overclock.net/forum/27647950-post1441.html	cstkl1	MSI	Gaming X Trio			
10-01-2018	https://www.overclock.net/forum/27648366-post1451.html	nycgtr	MSI	Gaming X Trio			
10-03-2018	https://www.overclock.net/forum/27649752-post1517.html	Krzych04650	MSI	Sea Hawk X			
10-06-2018	https://www.overclock.net/forum/27650574-post1575.html	michaelrw	MSI	Duke			
10-07-2018	https://www.overclock.net/forum/27655910-post1918.html	alex1990	MSI	Gaming X Trio			
10-05-2018	https://www.overclock.net/forum/27653120-post1701.html	Woundingchaney	NVIDIA	Founders Edition			
10-05-2018	https://www.overclock.net/forum/27653684-post1738.html	rush2049	NVIDIA	Founders Edition			
10-05-2018	https://www.overclock.net/forum/27653818-post1748.html	R1amddude	NVIDIA	Founders Edition			
10-05-2018	https://www.overclock.net/forum/27653858-post1753.html	Sparc145	NVIDIA	Founders Edition			
10-06-2018	https://www.overclock.net/forum/27653968-post1761.html	I88bastar	NVIDIA	Founders Edition	Pics	https://www.overclock.net/forum/27653968-post1761.html	
10-06-2018	https://www.overclock.net/forum/27653976-post1762.html	BigMack70	NVIDIA	Founders Edition			
10-06-2018	https://www.overclock.net/forum/27654054-post1771.html	zlatanselvic	NVIDIA	Founders Edition			
10-06-2018	https://www.overclock.net/forum/27654122-post1783.html	Guiler	NVIDIA	Founders Edition			
10-06-2018	https://www.overclock.net/forum/27654672-post1819.html	Diverge	NVIDIA	Founders Edition			
10-07-2018	https://www.overclock.net/forum/27655974-post1921.html	OBR	NVIDIA	Founders Edition			
10-08-2018	https://www.overclock.net/forum/27654754-post1826.html	animeowns	NVIDIA (SLI)	Founders Edition			
10-05-2018	https://www.overclock.net/forum/27653900-post1756.html	Asmodian	NVIDIA+Shunt	Founders Edition			
10-05-2018	https://www.overclock.net/forum/27652844-post1687.html	torqueroll	NVIDIA+WB	Founders Edition	Pics	https://www.overclock.net/forum/27652844-post1687.html	
10-06-2018	https://www.overclock.net/forum/27652908-post1688.html	Edge0fsanity	NVIDIA+WB	Founders Edition			
10-05-2018	https://www.overclock.net/forum/27653234-post1711.html	Esenel	NVIDIA+WB	Founders Edition	Pics	https://www.overclock.net/forum/27654252-post1792.html	
10-06-2018	https://www.overclock.net/forum/27654286-post1795.html	KShirza1	NVIDIA+WB	Founders Edition	Pics	https://www.overclock.net/forum/27654286-post1795.html	
10-06-2018	https://www.overclock.net/forum/27654314-post1800.html	xer0h0ur	NVIDIA+WB	Founders Edition			
10-04-2018	https://www.overclock.net/forum/27655840-post1915.html	Rob w	NVIDIA+WB	Founders Edition			
09-30-2018	https://www.overclock.net/forum/27644786-post1284.html	Silent Scone	Palit	GamingPro			
10-02-2018	https://www.overclock.net/forum/27648916-post1483.html	Squeegie00	Palit	GamingPro			
10-06-2018	https://www.overclock.net/forum/27653772-post1745.html	kx11	Palit	GamingPro			
09-26-2018	https://www.overclock.net/forum/27639064-post1052.html	mollet	Palit+WB+Shunt	GamingPro	Pics	https://www.overclock.net/forum/27639064-post1052.html	
09-27-2018	https://www.overclock.net/forum/27641478-post1168.html	Elmy	Zotac	AMP	Pics	https://www.overclock.net/forum/27641478-post1168.html	
09-27-2018	https://www.overclock.net/forum/27642420-post1197.html	Ford8484	Zotac	AMP			
10-01-2018	https://www.overclock.net/forum/27647150-post1405.html	KANG_VX	Zotac	AMP			
10-01-2018	https://www.overclock.net/forum/27647484-post1415.html	trippinonprozac	Zotac	AMP			
10-02-2018	https://www.overclock.net/forum/27647498-post1417.html	Addsome	Zotac	AMP			
10-02-2018	https://www.overclock.net/forum/27649012-post1485.html	FarisLeonhart	Zotac	AMP			
10-03-2018	https://www.overclock.net/forum/27650404-post1560.html	xermalk	Zotac	AMP			
10-05-2018	https://www.overclock.net/forum/27653196-post1707.html	HeadlessKnight	Zotac	AMP			
10-05-2018	https://www.overclock.net/forum/27649410-post1502.html	Porter_	Zotac	AMP			
09-30-2018	https://www.overclock.net/forum/27644590-post1282.html	carlhil2	Zotac+WB	AMP


----------



## Fiercy

Here are two night pictures of the card in the build.

I am really happy with EK block and how my card turned out Overclock wise!


----------



## xer0h0ur

Emmett said:


> at what voltage was your card running?


The voltage I was seeing when I was paying attention to it was 1.05v. It may have gone just barely past that at some point but that was the voltage I was seeing while the GPU was pegged at 96-98% usage.


----------



## mr2cam

Anyone else noticing quite a bit higher temps compared to your 1080 Ti? I had a Founders 1080 Ti and the max temp on water (360 and 240 rads cooling both CPU and GPU) was about 40°C; now I'm hitting as high as 53°C. I figured the extra power would add more heat. Might take my block back apart and make sure I'm getting a good spread.


----------



## xer0h0ur

Esenel said:


> My 2080Ti FE has also coil whine :-(
> Really annoying due to a complete silent loop :-/
> 
> But so had my MSI Gaming X 1080 as well.
> So I will not replace it, because who knows if the next one has /(not) the same issue as well.
> 
> 
> But temps are fine on EK Block:
> 
> After 15 Min TimeSpy Extreme graphics test 2 Loop I get 46°C GPU +-1°C depending on the loops fan speed.
> And the EK backplate also distributes the heat quite well.


My FE 2080 Ti also has coil whine, but you can sometimes minimize it by conditioning the card. My FE 1080 also had coil whine, and what I did was leave Counter-Strike open for 24 hours with the framerate unlocked, so it was shooting up to 1000 FPS and bringing on the whine. That minimized it so it was not nearly as loud anymore.


----------



## OBR

The BIOS on the FE is not locked, because I can flash my own BIOS (the same device ID) successfully. The problem is in the new WinFlash: the options for device ID mismatch override (-4 -5) are missing.


----------



## EQBoss

GosuPl said:


> Cant flash? I just put link for 2080Ti with 380W bios flashed on reference PCB. You can flash your 2080Ti FE.


Haven't seen a flashed FE yet; you can flash a reference PCB, but FEs have some extra lock on them.



Guiler said:


> Honestly I don't *think* it would be too hard to hex edit the BIOS image to put in the FE board ID. I have no idea where to even start, but I bet someone here knows how to do it.


Well, I remember the -5 command used to bypass the board ID, but it doesn't work anymore; they must have changed the flags.


----------



## nycgtr

zhrooms said:


> It is time to get this list going, requirement is to *make a new post* in this thread with *picture(s) of the card* in your *case.*
> ​



Trio compared to the 2080 Ti Windforce, a looped Titan Xp, and an old KPE I've got laying around.​


----------



## raider89

OBR said:


> The BIOS on the FE is not locked, because I can flash my own BIOS (the same device ID) successfully. The problem is in the new WinFlash: the options for device ID mismatch override (-4 -5) are missing.


locked


----------



## Asmodian

Guiler said:


> Honestly I don't *think* it would be too hard to hex edit the BIOS image to put in the FE board ID. I have no idea where to even start, but I bet someone here knows how to do it.


Without the proper encryption key you cannot flash it. That is what changed between Maxwell and Pascal: encrypted BIOS, so you cannot flash hex-edited BIOSes.

edit: It isn't actually encrypted, but it needs to be signed, and any change to the BIOS requires creating a new signature. We don't have a version of NVFlash that bypasses this signature check anymore.
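The signature point can be illustrated generically. This is a conceptual sketch only: SHA-256 stands in for the vendor's signing step, whereas NVIDIA's actual scheme uses asymmetric signatures that nobody outside NVIDIA can regenerate.

```python
# Conceptual sketch of why hex-editing a signed BIOS fails verification.
# SHA-256 here is a stand-in for the vendor's signing process; the point
# is only that any byte change breaks the check.

import hashlib

def sign(image: bytes) -> bytes:
    return hashlib.sha256(image).digest()      # stand-in for vendor signing

def verify(image: bytes, signature: bytes) -> bool:
    return hashlib.sha256(image).digest() == signature

bios = bytearray(b"POWER_TARGET=320W ...")
sig = sign(bytes(bios))                        # produced at build time

assert verify(bytes(bios), sig)                # untouched image passes

bios[13:17] = b"380W"                          # the hex edit everyone wants
assert not verify(bytes(bios), sig)            # one edit, verification fails
```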


----------



## cstkl1

for da sake of pic


----------



## exploiteddna

Edge0fsanity said:


> Yeah, real bad luck. I'm going to try and get them to replace it as a warranty issue i think. We'll see how that goes, i've got 30 days regardless to get my money back and find a replacement.


Shouldn't be hard, as long as you have $1300 in funds/credit. I didn't preorder anything and my card arrived yesterday. I'd just return it flat out.


----------



## Xero717

Has anyone successfully flashed the Galax Ref BIOS on an EVGA 2080 Ti XC Ultra?


----------



## exploiteddna

EdgeCrusher86 said:


> Beware that with that GALAX 380W BIOS you may DAMAGE YOU'RE MAINBOARD because if fully overclocked the card pulls 6A avg. and far beyond 8A out of the PCI-E bus - 5.5A says the spec. So if you're setting the card to 116% PT (348W), your'e within the specification.
> That can also be the case with GIGABYTE'S F3 BIOS (366W) or the EVGA FTW3 Ultra (373W) for example or of course the XOC 400W one.
> 
> https://translate.google.com/transl...cht-die-beste-idee-ist-igorslab/2/&edit-text=
> 
> 
> 
> Bockwurst is a cooked sausage by the way.


Thanks for the info. I mean, it's a pretty interesting article, but I would like to hear some input from people more experienced with electrical engineering.. maybe Buildzoid (even though he's just a kid and doesn't have any formal training). I was under the impression that the current draw remained the same on the PCIe slot (based on an earlier video from Buildzoid and/or Steve's interviews with NVIDIA) and that it was the 8-pin connectors that would supply the added current. Remember the formula P = IV: Wattage/Power (P, measured in watts) = Current (I, measured in amps) × Voltage (V, measured in volts). If power increases, then current, voltage, or both must also increase.
But again, I was under the impression that the power regulators kept the PCIe slot power in check, and that it was the 8-pin power connectors (and the 6-pin on some cards) that were handling most of the increased power.
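The split being described can be sketched with the nominal spec budgets (66 W from the slot, 150 W per 8-pin). How a real card balances its rails depends on its VRM design, so treat this as illustrative only:

```python
# Sketch of the power-split question above: by P = I * V, if the slot's
# share is capped, extra board power must come from the 8-pin connectors.
# 66 W (5.5 A * 12 V) slot budget and 150 W per 8-pin are the nominal
# spec figures; actual rail balancing is up to the card's VRM.

SLOT_W = 5.5 * 12.0        # 66 W in-spec slot budget

def eight_pin_load(total_w, slot_w=SLOT_W, connectors=2):
    """Per-connector draw if the slot stays exactly at its budget."""
    return (total_w - slot_w) / connectors

print(eight_pin_load(320.0))   # 127.0 W each -> comfortably within 150 W
print(eight_pin_load(380.0))   # 157.0 W each -> past the nominal 150 W
```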


----------



## xer0h0ur

michaelrw said:


> Thanks for the info. I mean, it's a pretty interesting article, but I would like to hear some input from people more experienced with electrical engineering.. maybe Buildzoid (even though he's just a kid and doesn't have any formal training). I was under the impression that the current draw remained the same on the PCIe slot (based on an earlier video from Buildzoid and/or Steve's interviews with NVIDIA) and that it was the 8-pin connectors that would supply the added current. Remember the formula P = IV: Wattage/Power (P, measured in watts) = Current (I, measured in amps) × Voltage (V, measured in volts). If power increases, then current, voltage, or both must also increase.
> But again, I was under the impression that the power regulators kept the PCIe slot power in check, and that it was the 8-pin power connectors (and the 6-pin on some cards) that were handling most of the increased power.


Knowledge is knowledge. Age or a piece of paper doesn't make said knowledge any more or less correct.


----------



## Esenel

I asked myself whether, as a Founders owner, I am the idiot now for not being able to flash the BIOS.

So I compared my results to the leaderboard and found that it does not really matter.
The chips are already at their maximum, even with the 320 W limit.

Without flashing, shunt mod or extreme cooling the difference in performance is negligible.

Difference to Silent Scone with 380W Bios is around 1.5%

https://www.3dmark.com/compare/spy/4574376/spy/4625652
https://www.3dmark.com/compare/spy/4579631/spy/4611609

Difference to GamersNexus with shunt mod and crazy cooling is 3.5%
https://www.3dmark.com/compare/spy/4600448/spy/4611609

I would say as long as you are blocking your Founders you are fine.
Otherwise get a custom card, due to the annoying fan :-D


----------



## xer0h0ur

I would like to be able to drop in a higher power limit onto my FE card but my waterblocked results are perfectly fine and I am getting what I expected more or less.


----------



## zhrooms

Xero717 said:


> Has anyone successfully flashed the Galax Ref BIOS on an EVGA 2080 Ti XC Ultra?


 
You can safely flash the XC Ultra with the GALAX BIOS. Both are NVIDIA Reference PCB (2x8-Pin).
 


Esenel said:


> I compared my results to the leading board and figured out that it does not really matter.
> 
> Without flashing, shunt mod or extreme cooling the difference in performance is negligible.
> 
> Difference to Silent Scone with 380W Bios is around 1.5% I would say as long as you are blocking your Founders you are fine.


 
Igor begs to differ.

*The Witcher 3, 4K, Ultra
KFA2 380W on Water - 104.2 FPS (+3.4%)
NVFE 320W on Water - 100.8 FPS

Forza Motorsport 7, 4K, Ultra
KFA2 380W on Water - 137.8 FPS (+2.7%)
NVFE 320W on Water - 134.2 FPS *

3.4% is not that little considering the Ti is about 30% faster than the RTX 2080.

*NVFE 320W on Water - 100.8 FPS (+31.8%)
RTX 2080 - 76.5 FPS

KFA2 380W on Water - 104.2 FPS (+36.2%)
RTX 2080 - 76.5 FPS*

That is a difference of 4.4 percentage points, or 13.84% in relative terms.

Roughly, in Fallout 4, Grand Theft Auto V, The Witcher 3, and World of Warcraft, each ~115 MHz step on the processor is worth about 1% higher performance.

If you can get let's say 4% from the GALAX BIOS, that'd be like a 450MHz higher overclock, from 4.55GHz to 5GHz. _(Not saying this is how it works, just trying to put it into perspective, obviously CPU and GPU performance is separated.)_

*Getting up to 4% higher GPU performance simply from a $0 thirty second BIOS flash is a big deal in my opinion.*
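For anyone checking the math, the percentages above fall straight out of Igor's FPS numbers (values copied from the Witcher 3 comparison):

```python
def pct_gain(new_fps: float, base_fps: float) -> float:
    """Relative gain of new_fps over base_fps, in percent."""
    return (new_fps / base_fps - 1) * 100

nvfe, kfa2, rtx2080 = 100.8, 104.2, 76.5  # The Witcher 3, 4K, Ultra

bios_gain = pct_gain(kfa2, nvfe)       # ~3.4%: 380W BIOS vs 320W FE
lead_fe   = pct_gain(nvfe, rtx2080)    # ~31.8%: FE's lead over the 2080
lead_kfa2 = pct_gain(kfa2, rtx2080)    # ~36.2%: KFA2's lead over the 2080

# "4.4% / 13.84%": the lead over the 2080 grows by ~4.4 points,
# which is ~13.84% of the FE's lead in relative terms (4.4 / 31.8).
extra_points = lead_kfa2 - lead_fe
```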


----------



## Esenel

zhrooms said:


> Igor begs to differ.
> 
> *The Witcher 3, 4K, Ultra
> KFA2 380W on Water - 104.2 FPS (+3.4%)
> NVFE 320W on Water - 100.8 FPS
> 
> Forza Motorsport 7, 4K, Ultra
> KFA2 380W on Water - 137.8 FPS (+2.7%)
> NVFE 320W on Water - 134.2 FPS *
> 
> 3.4% is not that little considering the Ti is about 30% faster than the RTX 2080.
> 
> *NVFE 320W on Water - 100.8 FPS (+31.8%)
> RTX 2080 - 76.5 FPS
> 
> KFA2 380W on Water - 104.2 FPS (+36.2%)
> RTX 2080 - 76.5 FPS*
> 
> 4.4% / 13.84%.
> 
> Roughly in Fallout 4, Grand Theft Auto V, The Witcher 3 and World of Warcraft, each ~115 MHz step on the processor is 1% higher performance.
> 
> If you can get let's say 4% from the GALAX BIOS, that'd be like a 450MHz higher overclock, from 4.55GHz to 5GHz. _(Not saying this is how it works, just trying to put it into perspective, obviously CPU and GPU performance is separated.)_
> 
> *Getting up to 4% higher GPU performance simply from a $0 thirty second BIOS flash is a big deal in my opinion.*


Yes, I know his opinion. I also read his articles.
He also mentions that this extra performance comes at a cost.

On the one hand you have around 60 W higher power consumption, and on the other hand too much current through the motherboard slot.

So we get 2.7%-3.4% more FPS for ~18.75% higher power consumption.
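That trade-off in numbers, as a rough sketch using the figures above ("payoff" here just means the share of added power that actually shows up as frames):

```python
def pct_increase(new: float, base: float) -> float:
    """Relative increase of new over base, in percent."""
    return (new / base - 1) * 100

power_up = pct_increase(380, 320)      # 18.75% more board power
fps_up   = pct_increase(104.2, 100.8)  # ~3.4% more FPS (Witcher 3, 4K)

# Only about a fifth of the extra power comes back as frames.
payoff = fps_up / power_up
```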

In my opinion it is nice to have this for benchmarks, but you do not need it for actual gaming.

Don't get me wrong. I would love to climb higher in this damned ladder, but I am also very happy with my card 


And as the owner of just a 165 Hz WQHD screen, I find it hard to decide whether I should OC at all.

There we come to games and their behaviours:

Battlefield 1 WQHD - Ultra:
180-200 fps and sometimes 80% GPU usage. 
So "too" powerful.

Civilization VI WQHD - Highest Settings:
~100 fps in the middle of a map with a bored GPU and a CPU or Engine limitation. On the edge of a map 300 fps+
So no need to OC.

Assassin's Creed Origins WQHD - Very High Preset:
~100 fps in Alexandria with ~90% GPU usage and a CPU or Engine limitation.
~145 fps outside with ~95% GPU usage.
I will use the OC I think.

Vermintide 2 WQHD - Extreme Preset:
145 -200 fps depending on the location.
I will use the OC here as well I think.

Guild Wars 2 WQHD - Highest Settings:
~100 fps, with the GPU downclocked at around 30% usage :-D
So no need for OC as well.

The Witcher 3 I am not able to test yet.
I have a black starting screen due to a driver issue, according to the NVIDIA forum.

Or just get the 4k 144 Hz screen?

Decisions, decisions :-D

Best regards.


----------



## exploiteddna

cstkl1 said:


> yeah i be raging mad..
> 1. FE late to ship out when alot of AIB u can just walk in stores and buy
> 2. NVIDIA store warantty limitation
> 3. And now the did a bios protect??
> 
> too many strikes


i wanna feel bad for people, and i sort of do, but i would never buy an FE card. Nvidia can stick to making good GPUs and let the AIB partners do what they do best.



zhrooms said:


> Gaming X Trio which is by far the biggest card out right now, it is amazing looking but not a single picture!


pretty sure i just saw photos of it .. here and here



Fiercy said:


> Here are two night pictures of the card in the build.
> I am really happy with EK block and how my card turned out Overclock wise!


nice man, what are the other expansion cards in there?



xer0h0ur said:


> Knowledge is knowledge. Age or a piece of paper doesn't make said knowledge any more or less correct.


age: yes, it's more likely that someone who is older has had more training in their respective field. no offense should be taken from that.
"piece of paper": someone who has had enough training in electrical engineering to have a "piece of paper" saying as much is likely to be more informed on the subject than someone without one ("piece of paper" being a metaphor for a diploma/degree, I think).
are there exceptions? obviously. plenty of people are self-taught at what they do.
also, I never said Buildzoid's info was inaccurate, nor did I say "can't trust this kid because he doesn't have a degree in electrical engineering." It was just a cautionary statement suggesting you may want to proceed with caution if you choose to risk your $1200 hardware relying solely on his analysis. I recommended him for "input from people more experienced with electrical engineering." FWIW, I'll be the first to admit that he's far ahead of my understanding of ICs and circuits. My 'piece of paper' doesn't say electrical engineering on it, though, and I only took 2 semesters of physics, for maybe a grand total of 4 weeks of training in electrical concepts.
but let's not get into some philosophical debate on why college is unnecessary; the comment you're harping on was rather innocuous


----------



## tamas970

zhrooms said:


> Igor begs to differ.
> 
> *The Witcher 3, 4K, Ultra
> KFA2 380W on Water - 104.2 FPS (+3.4%)
> NVFE 320W on Water - 100.8 FPS
> 
> Forza Motorsport 7, 4K, Ultra
> KFA2 380W on Water - 137.8 FPS (+2.7%)
> NVFE 320W on Water - 134.2 FPS *
> 
> 3.4% is not that little considering the Ti is about 30% faster than the RTX 2080.
> 
> *Getting up to 4% higher GPU performance simply from a $0 thirty second BIOS flash is a big deal in my opinion.*


To me, that 4% increase doesn't seem like a breakthrough with almost 20% more power invested. What you do pay is increased wear on the cooling system and on your ears.
I'd be more interested in the minimum framerates; I suspect there might be a bigger improvement in the 99th-percentile department.


----------



## Xero717

zhrooms said:


> You can safely flash the XC Ultra with the GALAX BIOS. Both are NVIDIA Reference PCB (2x8-Pin).


Thanks!


----------



## exploiteddna

anyone know where to get, or seen posted anywhere, the BIOS for the 2080 Ti Sea Hawk EK X?
also, when i go to the BIOS database on TPU, i only find one BIOS there, and it's for the FE card.


----------



## Addsome

michaelrw said:


> anyone know where to get, or seen posted anywhere, the bios for the 2080Ti sea hawk EK X?
> also when i go to the bios database on tpu, i only find 1 bios there, and it's for the FE card.


Set the card model to RTX 2080 Ti, then under brand choose unverified uploads. That should have all the BIOS files.


----------



## Emmett

Would the GALAX or EVGA bios work on my Asus Turbo if I were to block it?
I only ask because the Turbo has one fewer DisplayPort connector.
If it would not work, I would like to know because I can still return the card.

It's kinda neat because it could be a true single slot blocked card if a single slot bracket were made for it.


----------



## AngryLobster

zhrooms said:


> You can safely flash the XC Ultra with the GALAX BIOS. Both are NVIDIA Reference PCB (2x8-Pin).
> 
> 
> 
> Igor begs to differ.
> 
> *The Witcher 3, 4K, Ultra
> KFA2 380W on Water - 104.2 FPS (+3.4%)
> NVFE 320W on Water - 100.8 FPS
> 
> Forza Motorsport 7, 4K, Ultra
> KFA2 380W on Water - 137.8 FPS (+2.7%)
> NVFE 320W on Water - 134.2 FPS *
> 
> 3.4% is not that little considering the Ti is about 30% faster than the RTX 2080.
> 
> *NVFE 320W on Water - 100.8 FPS (+31.8%)
> RTX 2080 - 76.5 FPS
> 
> KFA2 380W on Water - 104.2 FPS (+36.2%)
> RTX 2080 - 76.5 FPS*
> 
> 4.4% / 13.84%.
> 
> Roughly in Fallout 4, Grand Theft Auto V, The Witcher 3 and World of Warcraft, each ~115 MHz step on the processor is 1% higher performance.
> 
> If you can get let's say 4% from the GALAX BIOS, that'd be like a 450MHz higher overclock, from 4.55GHz to 5GHz. _(Not saying this is how it works, just trying to put it into perspective, obviously CPU and GPU performance is separated.)_
> 
> *Getting up to 4% higher GPU performance simply from a $0 thirty second BIOS flash is a big deal in my opinion.*


Is this a troll attempt? 15-20% increased power consumption for 3-4FPS is a big deal? For people actually gaming this makes absolutely no difference.


----------



## BigMack70

Holy crap Nvidia has worked some magic with this card and Vulkan. I'm getting 140fps+ averages in Wolfenstein 2 and DOOM maxed out at 4k. Wowwwwww


----------



## NAIM101

Anyone know if the ASUS DUAL RTX 2080 Ti is a reference design card? The reason I ask is that I want to add a waterblock to it. Thanks


----------



## TahoeDust

I'm wondering how long it is going to take for FTW3 waterblocks, and who will have the first... EVGA or EK?


----------



## carlhil2

Zotac Gaming AMP WC flashed with the Gigabyte update .exe BIOS. By the way, did any FE owners attempt the Gigabyte .exe update on their card?


----------



## xkm1948

Is it just me, or is the OC scanner built into Afterburner not working? Well, it worked the first few times, then it always got stuck at 645 MHz core. Anyone else experiencing similar issues?


----------



## exploiteddna

NAIM101 said:


> Anyone know if the ASUS DUAL RTX 2080TI is a reference design card? Reason why I ask is I want to add waterblock on it. Thanks


should be ref, yes.



TahoeDust said:


> I'm wondering how long it is going to take for FTW3 waterblocks and who will have the first?....EVGA or EK?


someone asked a week or two ago and they didn't have an ETA for FTW3 blocks. I imagine it won't be for a month or so, given that the FTW3 preorders haven't even shipped yet. See the thread on the EVGA forums as to when the preorders ship.



carlhil2 said:


> Zotac Gaming AMP WC flashed with Giga update exe. bios. by the way, did any FE owners attempt the Giga exe. update on their card?


what's the "Giga exe" BIOS?

also, Alphacool already has a block for sale? interesting


----------



## carlhil2

michaelrw said:


> should be ref, yes.
> 
> 
> someone asked a week or two ago and they didnt have an ETA for FTW3 blocks. I imagine it wont be for a month or so, given that the ftw3 preorders havent even shipped yet. see thread on evga forums as to when the preorders ship
> 
> 
> whats the "Giga exe" bios


The Gigabyte BIOS update .exe, 366 W...


----------



## KShirza1

RTX 2080 Ti block and back plate installed


----------



## fida

RTX 2080 ti FW with Kraken G12 + Kraken X72 AIO


----------



## zlatanselvic

Any updates on founders flashing?


----------



## Zammin

carlhil2 said:


> Zotac Gaming AMP WC flashed with Giga update exe. bios. by the way, did any FE owners attempt the Giga exe. update on their card?


I was just looking at these Alphacool waterblocks; they actually look really good. The most microchannels I've seen on any of the RTX blocks to come out so far. How are your temps with it? And does it work with the backplate that comes with your card?


----------



## nycgtr

Loop still settling. Anyway, I decided to forgo SLI. I tried 5 reference-based cards and blocked the best one. I'll block the second best and throw it in the wife's rig. I will probably swap to the Strix when it's available; a bigger GPU would look better for single-GPU. I haven't been single-GPU since SLI/CrossFire became a thing. If NVLink shows promise later on I will possibly go back to it. I tested 2 cards together, and aside from synthetics I can't say there was much benefit outside of Tomb Raider. As for the block, I am getting 45C or so after looping Valley for about 30 minutes; water temp sits at 29C idle, so not too bad, not amazing either. My clocks tend to be 2100-2130 for the most part.


----------



## carlhil2

Zammin said:


> I was just looking at these Alphacool waterblocks, they actually look really good. Most amount of microchannels I've seen on any of the RTX blocks to come out so far. How are your temps with it? and does it work with the back plate that comes with your card?


+90 gives 2130 for my GPU. 77F ambient, after several runs of 3DMark: 42C; depending on my ambient I have seen up to 46C. It's not bad, and yes, with the right screws the backplate will fit...


----------



## Emmett

Anyone running a Maximus x Apex?

Just put in a Zotac 2080 Ti AMP. It's in the first slot, no other cards, and I only get x8.

Tried clearing CMOS and reflashing the BIOS. Still x8.

I had been running two cards at x8 each. Not sure what happened.


----------



## Zammin

carlhil2 said:


> +90 is 2130 for my gpu, 77f ambient, after several runs og 3DMark, 42c, depending on my ambient, have seen up to 46c, it's not bad, and, yes, with right screws, backplate will fit...


Sounds pretty good. The GPU I have on order is the EVGA XC Gaming. I called EVGA and they said the screws are M2.5 for the backplate. I'm yet to confirm what thread the Alphacool block uses. If I can confirm it will work I may cancel my EK order and get one of those in the plexi variant. Not sure yet though. It's kinda expensive because no-one sells Alphacool in Australia, I could only find them in stock on ModmyMods and the shipping costs $50ish (USD). All up it will be about $250-$260 AUD.


----------



## nycgtr

Zammin said:


> Sounds pretty good. The GPU I have on order is the EVGA XC Gaming. I called EVGA and they said the screws are M2.5 for the backplate. I'm yet to confirm what thread the Alphacool block uses. If I can confirm it will work I may cancel my EK order and get one of those in the plexi variant. Not sure yet though. It's kinda expensive because no-one sells Alphacool in Australia, I could only find them in stock on ModmyMods and the shipping costs $50ish (USD). All up it will be about $250-$260 AUD.


My bitspower uses the factory backplate. I just used the bitspower screws where the factory ones would go


----------



## Zammin

nycgtr said:


> My bitspower uses the factory backplate. I just used the bitspower screws where the factory ones would go


Cheers, good to know all options. Information on blocks other than the EK one is pretty scarce right now with many of them having only just released. I can't even find an instructions PDF for the Alphacool block to check what thread the screws are.


----------



## Krzych04650

https://www.3dmark.com/spy/4597097

Time Spy 16360 graphics score with the 380W BIOS and a max 44C GPU temp on water cooling; I cannot get anything more. I will wait for some serious winter, get like a 5C ambient temp, and then try again  

Considering that the FE stock score is 13500, a 21% increase is not bad.


----------



## Zammin

I just asked Aquatuning UK through their facebook page (because they never respond to my emails) about the Alphacool block's screw size. Apparently they are M3 screws, so I guess if I used one with the EVGA XC Gaming's back plate I would need to purchase additional screws, since the EVGA card uses M2.5 screws.. :/


----------



## kx11

My build is X299, watercooled; the CPU is a delidded 7900X and the GPU is a Palit GamingPro OC, which I tried to watercool. The temps were above 75C under load, while my CPU hardly reaches 60C even though it's running 4.6 GHz. This makes me feel the 2080 Ti is defective with watercooling, since many people have reported the same issue. I'll reinstall the original air cooling solution and see if the temps are good or not. The GPU can get through benchmarks just fine, but any amount of OC will make it crash in-game.


----------



## Zammin

kx11 said:


> so my build is x299 watercooled , cpu is 7900x delidd and the gpu is Palit gaming pro OC which i tried to watercool it and pretty much the temps were above 75c under load while my CPU hardly reaches 60c even though it's running 4.6ghz , this makes feel that 2080ti is defecttwith watercooling since many people reported the same issue , I'll reinstall the original air cooling solution and see if the temps are good or not , the gpu can get through benchmarks just fine however any amount of OC will make it crash in-game


The Palit card is a reference board yeah? Which block do you have? Just a guess but maybe the block might not be seated correctly or maybe not enough thermal paste? I had something similar with my Phanteks block on my 1080Ti Strix. Temps went from 50+ to 38-40 after reseating the block and using more thermal paste than I did the last time. Just an idea.


----------



## kx11

Zammin said:


> kx11 said:
> 
> 
> 
> so my build is x299 watercooled , cpu is 7900x delidd and the gpu is Palit gaming pro OC which i tried to watercool it and pretty much the temps were above 75c under load while my CPU hardly reaches 60c even though it's running 4.6ghz , this makes feel that 2080ti is defecttwith watercooling since many people reported the same issue , I'll reinstall the original air cooling solution and see if the temps are good or not , the gpu can get through benchmarks just fine however any amount of OC will make it crash in-game
> 
> 
> 
> The Palit card is a reference board yeah? Which block do you have? Just a guess but maybe the block might not be seated correctly or maybe not enough thermal paste? I had something similar with my Phanteks block on my 1080Ti Strix. Temps went from 50+ to 38-40 after reseating the block and using more thermal paste than I did the last time. Just an idea.

it is a reference board and the block is from Phanteks. I'll try again and reseat the GPU


----------



## Zammin

kx11 said:


> it is a reference board and the block is from phanteks , I'll try again and reset the gpu


Best of luck man. Hope the temps improve afterward.


----------



## kx11

Zammin said:


> kx11 said:
> 
> 
> 
> it is a reference board and the block is from phanteks , I'll try again and reset the gpu
> 
> 
> 
> Best of luck man. Hope the temps improve afterward.


could it be a broken thermal sensor? i hope not


thanx man


----------



## Talon2016

FE RTX 2080 Ti owners could always just buy a cheap $20 USB programmer and test clip and hardware-flash the chip to whatever vBIOS they like. I did this with my laptop 1070 with a modded vBIOS with a power limit of 151/171 W (slider) and got amazing results. Unless NVIDIA has blocked this ability, I see no reason why a programmer wouldn't work here as well.


----------



## Esenel

zlatanselvic said:


> Any updates on founders flashing?


Not working.
Not for Igor of TomsHW.de either; see post #69:

https://www.tomshw.de/community/thr...ere-alternative-igorslab.415/page-4#post-4971

Doesn't matter.
The gain is just minimal.


----------



## Hulk1988

Unconfirmed information!!! RUMOR!

Partner cards must prevent BIOS flashing with the next wave of graphics cards. Orders from NVIDIA.

That would mean that all "older" reference cards have a higher value.


----------



## gavros777

NVIDIA must have received a ton of dead Maxwell cards with flashed BIOSes to become so strict from Pascal onwards.
A Maxwell card of mine died recently too, but I was AFK 24/7 in GTA Online (for around 4 weeks) to make some damn GTA money, as grinding in that game is a total pain.


----------



## profundido

cgcross said:


> All the scores I hit were with a 2080 TI XC Ultra with the Galax BIOS.
> 
> 
> https://www.reddit.com/r/overclocking/comments/9litw1/2_behind_kingpin_isnt_bad/


wow, very impressive ! gj


----------



## cstkl1

Got a second Trio today.

This one clocks way higher, 2100 easy, and stabilizes around 2025-2040,
but the VRAM can only be maxed out at 15500.

Funny how the other card does 16300 easy.

Let's see how water turns out on the 22nd.


----------



## stefxyz

Esenel said:


> I asked myself the question if I as Founders owner are the idiot now, due to not being able to flash the bios.
> 
> So I compared my results to the leading board and figured out that it does not really matter.
> The chips are at their maximum, also with a limit of 320 Watt.
> 
> Without flashing, shunt mod or extreme cooling the difference in performance is negligible.
> 
> Difference to Silent Scone with 380W Bios is around 1.5%
> 
> https://www.3dmark.com/compare/spy/4574376/spy/4625652
> https://www.3dmark.com/compare/spy/4579631/spy/4611609
> 
> Difference to GamersNexus with shunt mod and crazy cooling is 3.5%
> https://www.3dmark.com/compare/spy/4600448/spy/4611609
> 
> I would say as long as you are blocking your Founders you are fine.
> Otherwise get a Custom Card due to the annyoing fan :-D


Good results. How much memory OC?


----------



## xermalk

No more bios flashing?

This makes me question sending back my Zotac 2080 Ti AMP, as it has annoying coil whine. But as it can be heard outside of a Define S and through my headphones, I don't think I would be able to stand the noise.

The card clocks really well too, doing 8300 MHz on memory and sitting around 2025-2050 MHz at 70% fan speed in Heaven.


----------



## kot0005

Xeq54 said:


> So, its finished, mounted the block + shunt modded it with LM.
> 
> Shunt mod was very tricky, had to redo it 4 times, removing a bit of LM each time until it stopped going into safe mode (300mhz lock), only about third of the width of the shunt is covered with LM now, otherwise it went into safe mode right away.
> 
> Timespy extreme score went up by about 200 points, not much, but the frequency is stable now at 2100 mostly, with ocassional dips to 2085. Voltage stays locked at 1.093.


That LM is gonna eat your solder joints... use a wire.


----------



## Esenel

stefxyz said:


> Esenel said:
> 
> 
> 
> I asked myself the question if I as Founders owner are the idiot now, due to not being able to flash the bios.
> 
> So I compared my results to the leading board and figured out that it does not really matter.
> The chips are at their maximum, also with a limit of 320 Watt.
> 
> Without flashing, shunt mod or extreme cooling the difference in performance is negligible.
> 
> Difference to Silent Scone with 380W Bios is around 1.5%
> 
> https://www.3dmark.com/compare/spy/4574376/spy/4625652
> https://www.3dmark.com/compare/spy/4579631/spy/4611609
> 
> Difference to GamersNexus with shunt mod and crazy cooling is 3.5%
> https://www.3dmark.com/compare/spy/4600448/spy/4611609
> 
> I would say as long as you are blocking your Founders you are fine.
> Otherwise get a Custom Card due to the annyoing fan :-D
> 
> 
> 
> Good results. How much memory OC?

+1000 on memory
+145 on core

As I am using MSI Afterburner I cannot go higher on memory.

Cheers


----------



## GosuPl

https://scontent-frx5-1.xx.fbcdn.ne...=15d01b0df8bcc0cbd652621ecc61f99b&oe=5C5D5FF3

I want the 380W BIOS, but still can't flash it on the FE ;-)


----------



## Edge0fsanity

Hulk1988 said:


> Not confirmed information!!! RUMOR!
> 
> Partner cards must prevent Bios flash with the next wave of graphic cards. Order from NVidia.
> 
> That would mean that all "older" reference cards have a higher value.


figured that was coming

i'll probably just order a ftw3 whenever i can after i return my FE. 373w should be enough.


----------



## Monstieur

How many MHz more do you guys get with the 380W BIOS versus the 338W BIOS? I assume the performance increase is negligible compared to the power consumption.


----------



## BigMack70

Esenel said:


> As I am using MSI Afterburner I cannot go higher on memory.


Do any of the other OC tools (like EVGA Precision) let you go above +1000 on memory?


----------



## exploiteddna

KShirza1 said:


> RTX 2080 Ti block and back plate installed


what's that plastic thing plugged into your two 8-pin connectors that wraps around the back of the card?



nycgtr said:


> Loop still settling. Anyways I decided to forgo the sli. I tried 5 ref based cards and blocked the best one. Will block the 2nd best and throw in the wifes rig. I will probably swap to the strix when its available. Bigger gpu would look better for single gpu. Haven't been single gpu since Sli/crossfire became a thing. If nvlink shows promise later on I will go back to it possibly. I tested 2 cards together and aside from synthetics I can't say there was much benefit outside of tomb raider. As for the block I am getting 45c or so after looping valley for about 30mins. water temp sits at 29c idle so not too bad not amazing either. My clocks tend to be 2100-2130 for the most part.


what are the two LCD displays you've got there? some Aquaero, and something else


----------



## Esenel

BigMack70 said:


> Esenel said:
> 
> 
> 
> As I am using MSI Afterburner I cannot go higher on memory.
> 
> 
> 
> Do any of the other OC tools (like EVGA Precision) let you go above +1000 on memory?

I thought EVGA Precision allows more.
Will test it on Thursday.

I also need a new power supply 😄
The be quiet! DPP 650 W shuts down the PC during Rise of the Tomb Raider with the i7-8086K @ 5.2 GHz and the GPU maxed out, on multi-rail.

Igor suggested trying OC mode.
I also have to test that one.


----------



## profundido

michaelrw said:


> anyone know where to get, or seen posted anywhere, the bios for the 2080Ti sea hawk EK X?
> also when i go to the bios database on tpu, i only find 1 bios there, and it's for the FE card.


Considering the excellent results from people flashing the GALAX BIOS on the Sea Hawk X Hybrid, I assume the new Sea Hawk EK X will be an excellent card for people with existing custom watercooling setups: block already in place, (presumably) compatible with the GALAX BIOS, and most likely yielding excellent results with a good bin and the extra 6-pin power connector.

Curious to see the first results from these


----------



## cx-ray

michaelrw said:


> whats that plastic thing plugged into your two 8pin connectors that wraps around the back of the card?


https://www.evga.com/articles/01051/evga-powerlink/


----------



## EQBoss

Man, lots of reports of 2080 Tis artifacting out of the box. Sure hope mine aren't duds; feeling a bit nervous. If they are, I might as well return the cards for a refund. Maybe get EVGA down the line.


----------



## ENTERPRISE

EQBoss said:


> Man lots of reports of 2080 tis artifacting out of the box. Sure hope mine aren't duds, feeling a bit nervous. If they are might as well return the cards for a refund. Maybe get evga down the line.



I have not seen this; where are these reports?


----------



## EQBoss

ENTERPRISE said:


> I have not seen this, where are these reports ?


A page of posts on the nvidia forums about the 2080 tis artifacting out of the box. Memory seems to be the issue.


----------



## iamjanco

See the following Google search results: *2080 artifact site:forums.geforce.com*

While some of the complaints about artifacting might be valid, a number of them could be related to the system the card is in (e.g., config, psu, psu rails in use, driver fix required, etc.). A shortlist from those search results follows, as well as one from PugetSystems:

*RTX 2080 Artifacting in some games. Is it just me?*

*Image artifacts with RTX 2080 in Photoshop CC 2018 (Fixed with 411.70 NVIDIA driver)*

*RTX 2080 TI FE artefacts*

*2080 Ti Fe - Artifacts + Crash (BSOD)*

*Asus Dual RTX 2080 Ti artifacts and crash*

*Artifacts in pubg and game crashed in bf1 but had no artifacts*

*Asus Dual RTX 2080 TI Screen freez/artifacts then BSOD randomly (solved)*


----------



## ENTERPRISE

Interesting, I will have to read up, but let's hope we can chalk it up to something software-related.


----------



## BigMack70

OK, I think I managed to fiddle around with the custom curve to squeeze about 1% extra performance out of the card compared to a flat offset. Turns out I can get away with a larger offset at lower voltages, so I start at +185 at 0.8V and taper off to +125 by 1.025V.

I still find this card to be super weird and finicky. With that core offset, +1000 on the memory, and a 123% power target, it only gets +13% graphics score in Time Spy (stock vs OC).

I'm not sure why I can't get closer to +20% real-world performance when the card claims to be power limited no matter what settings it is run at. Are others able to get +20% or better on their scores without doing things like shunt mods?

I am starting to get the impression that even though this card is power limited, it's a very inefficient design that converts half or less of its increased power draw into actual performance.
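The tapered curve described above can be sketched as a simple linear interpolation between the two breakpoints. The +185 MHz at 0.8V and +125 MHz at 1.025V endpoints are the poster's settings; the straight-line taper in between is an assumption:

```python
def offset_mhz(voltage: float,
               v_lo: float = 0.800, off_lo: float = 185.0,
               v_hi: float = 1.025, off_hi: float = 125.0) -> float:
    """Core-clock offset applied at a given voltage point on the V/F curve."""
    if voltage <= v_lo:
        return off_lo       # full offset below the low breakpoint
    if voltage >= v_hi:
        return off_hi       # reduced offset at and above the high breakpoint
    t = (voltage - v_lo) / (v_hi - v_lo)  # position between breakpoints, 0..1
    return off_lo + t * (off_hi - off_lo)
```

For example, the midpoint of the curve (0.9125V) would get roughly a +155 MHz offset under this assumption.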


----------



## Asmodian

I get artifacts in a VERY old OpenGL screen saver and XPlane 11 but in nothing else I have tested. Another OpenGL screen saver from a similar era is fine. I don't think it is memory, it seems like the 416.16 drivers and/or Windows 1809 to me.


----------



## skingun

Having seen the reports of artifacts I decided to fire up Destiny 2. I've only tested one of my cards and everything seems fine. Will test the second card later this week.

Overwatch has crashed a couple of times due to a rendering issue but it never crashes the PC so I think this is not hardware related.


----------



## dmasteR

Been patiently waiting for my MSI Trio  

Much needed upgrade from the 980Ti...


----------



## Esenel

BigMack70 said:


> OK I think I managed to fiddle around with custom curve to squeeze about 1% extra performance out of the card compared to flat offset. Turns out I can get away with additional offset at lower voltages. So I start at +185 at 0.8V and taper off to +125 by 1.025V.
> 
> Still find this card to be super weird and finnicky. With that core offset, +1000 on the memory, and 123% power target, it only gets +13% graphics score in Time Spy (stock vs OC).
> 
> I'm not sure why I can't manage to get something closer to +20% real world performance when the card claims to be power limited no matter what settings it is otherwise run at. Are others able to get +20% or better on their scores without doing things like shunt mods?
> 
> I am starting to get the impression that even though this card is power limited, it's a very inefficient design that gets half or less of its increased power draw in actual performance improvement.


My guess is you are still on air with your GPU?
I just use the flat offset but on water.
These are my scores from stock to OC.

https://www.3dmark.com/compare/spy/4611609/spy/4597387

Stock nearly your OC score? That's strange Oo
No shunt mod needed. Still Founders bios.


----------



## Krzych04650

Esenel said:


> I asked myself the question if I as Founders owner are the idiot now, due to not being able to flash the bios.
> 
> So I compared my results to the leading board and figured out that it does not really matter.
> The chips are at their maximum, also with a limit of 320 Watt.
> 
> Without flashing, shunt mod or extreme cooling the difference in performance is negligible.
> 
> Difference to Silent Scone with 380W Bios is around 1.5%
> 
> https://www.3dmark.com/compare/spy/4574376/spy/4625652
> https://www.3dmark.com/compare/spy/4579631/spy/4611609
> 
> Difference to GamersNexus with shunt mod and crazy cooling is 3.5%
> https://www.3dmark.com/compare/spy/4600448/spy/4611609
> 
> I would say as long as you are blocking your Founders you are fine.
> Otherwise get a Custom Card due to the annoying fan :-D


It is true. I just switched back to my original Sea Hawk X BIOS because I was getting some serious stuttering issues with the 380W BIOS. Not specifically in Time Spy, but there were games faring far worse than my 1080 SLI in frametimes and perceived smoothness. I fought it for two days with no success, so the only thing left was to switch the BIOS back, and so far the issue is fixed.

The performance difference isn't really something to worry about. The best I can get now in Time Spy is 15950 (after a simple OC), compared to 16300 on the 380W BIOS (after hours and hours of tweaking the curve), so that's 2%, 2070-2100 MHz vs 2145-2160. And that's the highest number I've got; the Superposition benchmark doesn't even show a difference, 13350 for the 330W BIOS and 13450 for the 380W, which is well within margin of error. The difference in actual gameplay will be none, because I play with a locked framerate, so this potential 2% difference vanishes: the load is never exactly 99%, more like high 80s or low 90s, to keep some performance headroom for more demanding areas of the game.

If not for these stuttering issues I would have kept the 380W BIOS of course; there is little reason to refuse free performance that can be applied in like 30 seconds, even if it is only 2%, plus you mostly get rid of frequency throttling, which is very comforting. But the stuttering was really bad in some scenarios. Don't know why or how, but switching back helped.

So it is not like you lose a lot by getting an FE and being unable to flash the 380W BIOS. Cooling is a far bigger issue for you.
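The percentage comparisons in the post above are easy to reproduce. A minimal Python sketch using the scores quoted there (15950 vs 16300 in Time Spy, 13350 vs 13450 in Superposition); the helper name is made up for illustration:

```python
def pct_gain(base: float, new: float) -> float:
    """Percentage improvement of `new` over `base`."""
    return (new - base) / base * 100.0

# 330W BIOS vs 380W BIOS scores quoted above:
timespy = pct_gain(15950, 16300)   # Time Spy graphics score
superpos = pct_gain(13350, 13450)  # Superposition score
print(f"Time Spy: +{timespy:.1f}%, Superposition: +{superpos:.1f}%")
# -> Time Spy: +2.2%, Superposition: +0.7%
```

The Superposition delta sits within typical run-to-run variance, which supports the "margin of error" reading above.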


----------



## Sionel

Zotac 2080 Ti AMP Edition checking in. Love this thing, so glad I bit the bullet and bought one, upgrading from an MSI 1080 Gaming X bought back in 2016. Question for anyone else that might have one and has modded it for liquid cooling: I'm looking to get into custom looping my system, and I'm wondering if anyone has used an FE block on it. Everywhere I have read says it's an FE board, I just don't want to bite the bullet and buy one only to have it not work, or destroy the card in the process.


----------



## snafua

I finally got my FE late Friday and just now got a chance to install and check things out.

Sorry, maybe a remedial question that goes back to Pascal.
I have a few old games I was checking at 4k and the GPU doesn't boost above base.
Is it the game just not throwing enough at the card to keep it busy? 
I don't see any cpu threads maxing out and made sure vsync was off.


----------



## dmasteR

snafua said:


> I finally got my FE late Friday and just now got a chance to install and check things out.
> 
> Sorry, maybe a remedial question that goes back to Pascal.
> I have a few old games I was checking at 4k and the GPU doesn't boost above base.
> Is it the game just not throwing enough at the card to keep it busy?
> I don't see any cpu threads maxing out and made sure vsync was off.


Have you tried using prefer maximum performance?


----------



## axiumone

My two FE's arrived today. One has a dead fan. Yay.


----------



## Baasha

Just got the cards today 

Let's see how they perform.


----------



## gopalbose0

*The wait is over. Received my Zotac 2080 ti Amp*

Received my 2080 Ti yesterday. Took more than an hour to fit due to its length and the stiff cables on my power supply.
Runs fine stock and I am getting about what I expected over the 1080 Ti.
Does EVGA X1's RGB control work for other users? Only the Zotac FireStorm app allows me to change colors, and it's extremely buggy; I'm lucky to get by just changing colors, overclocking is an afterthought.
How well does X1's OC Scanner work for you? What did you set your memory up to? Does yours allow setting the speed of the third fan?
At 100% fan it's loud even with my AC's fan making 50dB of noise, but using the stock curve it didn't even cross 50°C with the fans peaking at 1350 RPM.
Boosts up to 2100 MHz and then stays stable in the 1900s, stock everything.


----------



## carlhil2

gopalbose0 said:


> Received my 2080 ti yesterday. Took more than 1 hr getting it to fit due to its long size and hard cables on my Power supply.
> Runs fine stock and am getting about what I expected over the 1080 ti.
> Does evga x1 rgb control work for other users ? Only the zotac firestorm app allows me to change color and its extremely buggy I am lucky to get by changing colors overclocking is an after thought.
> How well does x1's oc scanner work for you? What did you set your memory up to ? Does yours allow setting the speed of third fan ?
> At 100% fan its loud even with my ac's fan making 50db noise but using stock curve it didn't even cross 50 C with the fans peaking at 1350rpm.
> Boosts upto 2100mhz and then stays stable at 1900s stock everything.


Nice GPU. My Zotac's max boost is 2040 at stock; a +90 offset gives me 2130 MHz. I am water cooled...


----------



## GraphicsWhore

Finally got the card blocked and had a few mins to run benches. This is an EVGA XC with their 130% PL BIOS, +120 and +1000 on core/mem and voltage maxed in Precision. First attached is Heaven Extreme after 5 mins. Same numbers after Firestrike. Hit 2100 a few times but averaged 2085 at first and 2070 as time went on while staying in high 40's. Pretty pleased. Only had a chance to reinstall Shadow of the Tomb Raider and FH4 but on max everything at 3440x1440 they're both butter. Love it. Can't wait for the RTX patch.

Will run some more tests and then check the Galax BIOS. On a somewhat related note I've never had a RGB block but this one from EK is perfect. The lights aren't around the entire block so it's subtle enough to be cool but not Christmas Tree'ish. Also syncs with my Maximus mobo via Aura which is nice.


----------



## carlhil2

Graphics***** said:


> Finally got the card blocked and had a few mins to run benches. This is an EVGA XC with their 130% PL BIOS, +120 and +1000 on core/mem and voltage maxed in Precision. First attached is Heaven Extreme after 5 mins. Same numbers after Firestrike. Hit 2100 a few times but averaged 2085 at first and 2070 as time went on while staying in high 40's. Pretty pleased. Only had a chance to reinstall Shadow of the Tomb Raider and FH4 but on max everything at 3440x1440 they're both butter. Love it. Can't wait for the RTX patch.
> 
> Will run some more tests and then check the Galax BIOS. On a somewhat related note I've never had a RGB block but this one from EK is perfect. The lights aren't around the entire block so it's subtle enough to be cool but not Christmas Tree'ish. Also syncs with my Maximus mobo via Aura which is nice.


Run the OC Scanner and see what clocks it gives you...


----------



## Baasha

OSD doesn't show up with Precision X1 for me.

Also, how do we run OC Scanner - is that built into X1?


----------



## ESRCJ

Alright, so I can confirm that BIOS flashing does not work on my 2080 Ti FEs either. These cards are horrible overclockers as well. I'm likely going to return them. All of this waiting and I get this... no thanks.


----------



## KShirza1

Card blocked and running. Seeing about mid 40's on load.


----------



## cletus-cassidy

Anyone have any insight on the MSI Ventus 2080 TI? It just went up on Amazon and I snagged a pre-order: https://www.amazon.com/gp/product/B07HWW7NCW/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1

Specific question: Is this a reference card such that EK or Watercool Heatkiller full GPU water block will fit?

Never owned an MSI GPU. Don't love their MBs, but seems that some here have had ok GPU experiences?


----------



## Zammin

cletus-cassidy said:


> Anyone have any insight on the MSI Ventus 2080 TI? It just went up on Amazon and I snagged a pre-order: https://www.amazon.com/gp/product/B07HWW7NCW/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1
> 
> Specific question: Is this a reference card such that EK or Watercool Heatkiller full GPU water block will fit?
> 
> Never owned an MSI GPU. Don't love their MBs, but seems that some here have had ok GPU experiences?


It's listed in the compatibility list for the Phanteks FE blocks, so it would be a reference PCB like 90% of the other ones that are currently available. Here's a link: http://www.phanteks.com/PH-GB2080TiFE.html

Some block manufacturers haven't updated their lists yet but the Phanteks list seems to cover just about everything. There may be another list online of all reference PCBs.


----------



## torqueroll

Monstieur said:


> How many MHz more do you guys get with the 380W BIOS versus the 338W BIOS? I assume the performance increase is negligible compared to the power consumption.


I am definitely going to try it, but after seeing how little actual difference there is between my OC at the 100/110/123% power limits, there is such a huge power and heat increase for negligible performance. I would only use it for benchmarking.


----------



## cletus-cassidy

torqueroll said:


> I am definitely going to try it but after seeing how little of an actual difference there is between my OC limited at 100,110,123% power limit there is a such a huge power and heat increase for negligible performance. I would only use it for benchmarking.


Many thanks. Just to confirm, if it fits Phanteks, then it's reference, and it should fit the others (EK, Watercool)?


----------



## cletus-cassidy

Emmett said:


> Would the GALAX or EVGA bios work on my Asus Turbo if I were to block it?
> I only ask because the turbo has one less display port connector.
> If it would not work, I would like to know because I can still return the card.
> 
> It's kinda neat because it could be a true single slot blocked card if a single slot bracket were made for it.


Did you get another BIOS working on the Turbo? I managed to find an open box version on Newegg that I will be putting under water. Will only bother if I can add another BIOS with a higher TDP so curious on your experience.


----------



## Zammin

cletus-cassidy said:


> Many thanks. Just to confirm, if it fits Phanteks, then it's reference, and it should fit the others (EK, Watercool)?


Very likely yes, but do be aware there are some instances where an AIB card's fan header interferes with the block fitment, however the only example I am aware of is the EK Vector blocks with the Gigabyte Gaming OC model.

Heatkiller/Watercool should have their own list of compatible cards as well.


----------



## kx11

After a two-day battle with the temps of my Palit GamingPro OC under load while watercooled, I finally got them down from 80°C under load to 60°C with the silent fan mode.




Now I want to know if this SOTTR benchmark is good: ultra graphics + TAA + 4K.

Hopefully all that is good. OCing the GPU to something like +170 core / +500 mem will crash any app.


----------



## ocvn

zhrooms said:


> If you're not even in a rush to get it into the rig, maybe you shouldn't keep it. For me the choice is easy, I *need* the performance for High Refresh Rate 1440p and 4-8K.
> 
> Secondly, there are 10 reports in this thread alone that the retail Founders Edition can *not* be flashed. Which is a deal breaker for many, especially on water where the temps are not restricting the card; it makes the lower (320W) power limit a big annoyance. A several % slower card.
> 
> 
> 
> That was just very bad luck, the odds of a card dying is *extremely low*, so it happening to you again is like one in a million. It is *not* sound reasoning to base your next purchase on extended warranty.
> 
> 
> 
> Yes, @cstkl1 successfully flashed his *Gaming X Trio*. Read his post.
> 
> Both @ocvn and @nycgtr has shown interest in flashing their cards as well, so that makes you the third one. Need more people to try it before I am confident putting the information in the original post.
> 
> 
> 
> 
> 
> 
> 
> 
> I am still waiting on my X Trio.


Confirmed, the Trio X is a normal board. I flashed the Galax 380W BIOS to the Trio X without any problem. Testing now.


----------



## profundido

kx11 said:


> after a 2 day battle with the temps of my Palit gaming pro OC under load while watercooled i finally got the temps dwon from 80c under load to 60c under load with silent fans mode
> 
> 
> 
> 
> now i want to know if this SOTTR benchmark is good , ultra graphics + TAA + 4k



Awwww, that is soooo painful to watch. I ran the same benchmark on my two-year-old SLI of 2x Titan X (first-gen Pascal, slower than a 1080 Ti) and I get 89 fps at ultra graphics + SMAA + 4K.

My heart bleeds !


----------



## MonarchX

I am no different from anyone else, but I'm over-worked and too tired to keep refreshing Amazon and NewEgg site to find an affordable (~$1350) RTX 2080 Ti from a good maker like MSI, Gigabyte, ASUS, EVGA, or even NVidia (not Zotac or Palit stuff...) in USA to be shipped to me ASAP. If someone out there or here sees this sweat-blood message and feels... something.? but most importantly knows of some niche secret spot where I can get what I want, well then... I guess you can let me know? Maybe? I don't know... I'm going to go to bed before it's time to go to work and there is a black cat outside - she is very beautiful...


----------



## GraphicsWhore

carlhil2 said:


> Run the OC Scanner and see what clocks it gives you...


How the hell do I use it? I clicked it once before and remember seeing some bars but didn't take the opportunity to figure out what I was looking at. On my EVGA 1080Ti I only used AfterBurner.


----------



## GraphicsWhore

kx11 said:


> after a 2 day battle with the temps of my Palit gaming pro OC under load while watercooled i finally got the temps dwon from 80c under load to 60c under load with silent fans mode
> 
> 
> hopefully all that is good , OC the GPU to something like 170core / 500mem will crash any app


80??? Damn. What was going on there?


----------



## nycgtr

MonarchX said:


> I am no different from anyone else, but I'm over-worked and too tired to keep refreshing Amazon and NewEgg site to find an affordable (~$1350) RTX 2080 Ti from a good maker like MSI, Gigabyte, ASUS, EVGA, or even NVidia (not Zotac or Palit stuff...) in USA to be shipped to me ASAP. If someone out there or here sees this sweat-blood message and feels... something.? but most importantly knows of some niche secret spot where I can get what I want, well then... I guess you can let me know? Maybe? I don't know... I'm going to go to bed before it's time to go to work and there is a black cat outside - she is very beautiful...


Check your pm.


----------



## kx11

Graphics***** said:


> 80??? Damn. What was going on there?





The thermal pads were placed wrong; fixed it.


----------



## Emmett

cletus-cassidy said:


> Emmett said:
> 
> 
> 
> Would the GALAX or EVGA bios work on my Asus Turbo if I were to block it?
> I only ask because the turbo has one less display port connector.
> If it would not work, I would like to know because I can still return the card.
> 
> It's kinda neat because it could be a true single slot blocked card if a single slot bracket were made for it.
> 
> 
> 
> Did you get another BIOS working on the Turbo? I managed to find an open box version on Newegg that I will be putting under water. Will only bother if I can add another BIOS with a higher TDP so curious on your experience.

I did not as I am still considering returning it


----------



## GraphicsWhore

Emmett said:


> I did not as I am still considering returning it


If you block the card you're still going to get good performance if you can't flash EVGA or Galax BIOS. What's the max power on that card btw?


----------



## dentnu

Heads up everyone, looks like Newegg finally got a shipment of MSI 2080 Ti Trios today. My order is finally in the packaging stage and is expected to ship today. I would keep an eye on Newegg as they might have gotten extra cards and will be putting them up on their site soon.


----------



## Ferreal

Received my 2080ti's yesterday. I'm upgrading from a Titan V, so I wasn't expecting a big jump. The improvements in PQ surprised me, I'm impressed.

SOTTR ran amazing in NVLink and I was doubling my FPS. One 2080ti is also impressive when comparing to the $3k Titan V.

For some reason I cannot get 144hz to work. Running 4k HDR 10bit full RGB @ 98hz looks amazing with the RTX's, so no complaints here.

This is all without the most important features of the RTX lineup: Raytracing and DLSS. 

Well worth the wait.


----------



## Neon01

Wonder what the odds are of a Gigabyte Gaming OC water block being released. Somewhat interested to see what mine can do under water, but have to say I'm not feeling a burning need to water cool this time around. The OC I'm able to get under air isn't bad, and the stock fans are very quiet.


----------



## nycgtr

Neon01 said:


> Wonder what the odds are of a Gigabyte Gaming OC water block being released. Somewhat interested to see what mine can do under water, but have to say I'm not feeling a burning need to water cool this time around. The OC I'm able to get under air isn't bad, and the stock fans are very quiet.


Just remove the fan jumper and bend the pins. It's not too hard to undo.


----------



## gavros777

Ordered my card from Newegg last Thursday, and damn, FedEx has had my card close to my location since 12:05 AM this morning and is holding it to deliver on the 11th since that was the initial estimate.
Other shipping companies deliver items as soon as they can, why is FedEx being so FedEx? lol.


----------



## kx11

Neon01 said:


> Wonder what the odds are of a Gigabyte Gaming OC water block being released. Somewhat interested to see what mine can do under water, but have to say I'm not feeling a burning need to water cool this time around. The OC I'm able to get under air isn't bad, and the stock fans are very quiet.



They did mention an Aorus Xtreme waterblock is coming out after the Aorus cards, so maybe November.


----------



## Emmett

GraphicsWhore said:


> Emmett said:
> 
> 
> 
> I did not as I am still considering returning it
> 
> 
> 
> If you block the card you're still going to get good performance if you can't flash EVGA or Galax BIOS. What's the max power on that card btw?

AB goes to 120% with the Turbo. I think it's a good sample. I can set 950mV and it will maintain 1985 in a game with nothing showing in perfcap. If I do the same on the Zotac I get a voltage perfcap notice. The Turbo also does +1000 on memory. Have not tried higher. So I may yet block it.




gavros777 said:


> Ordered my card from newegg last thursday and damn Fedex has my card from 12:05 am this morning close to my location and is holding it to deliver it on the 11th as that was the initial estimate.
> Other shipping companies deliver items as soon as they can, why fedex is being so fedex? lol.


FedEx always delivers early by me. It's UPS that will hold, even if they get it DAYS early.


----------



## xer0h0ur

Ferreal said:


> Received my 2080ti's yesterday. I'm upgrading from a Titan V, so I wasn't expecting a big jump. The improvements in PQ surprised me, I'm impressed.
> 
> SOTTR ran amazing in NVLink and I was doubling my FPS. One 2080ti is also impressive when comparing to the $3k Titan V.
> 
> For some reason I cannot get 144hz to work. Running 4k HDR 10bit full RGB @ 98hz looks amazing with the RTX's, so no complaints here.
> 
> This is all without the most important features of the RTX lineup: Raytracing and DLSS.
> 
> Well worth the wait.


Nvidia's drivers are a hot mess right now. I can't set my BenQ XL2730Z to 144Hz or else I get a permanent black screen. Even when setting it to 120Hz to avoid this, I will often be greeted with a black screen anyway when Windows boots, until I blindly sign in and the desktop loads, at which point a signal is finally fed to the monitor. I heard it's a problem affecting all monitors using the same AU Optronics panels, and it possibly extends to more of their panels as well. I just know this only started happening with the last 2 or 3 driver releases and they haven't fixed it yet.


----------



## zhrooms

*Asked about increasing the power limit of the Gaming X Trio.* 
                                                                        *(And the other cards)*
 


MSI's Online Customer Service System said:


> _*The power limit is designed according to the graphics card performance and safety. For your graphics card, we do not suggest to increase the power limit reluctantly, for this may cause the damage to the hardware of your card and decrease the service life.*_


  Does not mean much, looks like a generic response.


----------



## Baasha

Ferreal said:


> Received my 2080ti's yesterday. I'm upgrading from a Titan V, so I wasn't expecting a big jump. The improvements in PQ surprised me, I'm impressed.
> 
> SOTTR ran amazing in NVLink and I was doubling my FPS. One 2080ti is also impressive when comparing to the $3k Titan V.
> 
> For some reason I cannot get 144hz to work. Running 4k HDR 10bit full RGB @ 98hz looks amazing with the RTX's, so no complaints here.
> 
> This is all without the most important features of the RTX lineup: Raytracing and DLSS.
> 
> Well worth the wait.


SOTTR scales really well but other games that otherwise scaled really well (I was using 4x Titan Xp and more recently 2x Titan Xp) do not do so with 2x 2080 Ti and NVLink.

Can you test some other games and corroborate? I tried Witcher 3, BF1, Ghost Recon Wildlands etc.

Single GPU games such as Odyssey and Origins perform WAY better than the Titan Xp - about 30% better so far.

I shudder to think if my 6950X @ 4.30Ghz is "bottlenecking" these 2x 2080 Ti - I don't think so but...


----------



## gavros777

Emmett said:


> FedEx always delivers early by me. It's UPS that will hold. Even if they get DAYS early


I remember FedEx delivering early to me too in the past; strange that they're holding my card this time.


----------



## GraphicsWhore

gavros777 said:


> I remembered fedex delivering early to me too in the past, strange they're holding my card this time.


If it's at a local facility can't you pick it up?


----------



## gavros777

GraphicsWhore said:


> If it's at a local facility can't you pick it up?


The FedEx facility is around 70 miles away from me; I wish it was closer.


----------



## Nico67

Emmett said:


> I did not as I am still considering returning it


What was the issue, a device ID error like the FE cards? It could be that you need to try a different DP port, in case they shifted the port numbering when dropping the extra port. Interested myself, as a single-slot card would be very useful.


----------



## Djreversal

GraphicsWhore said:


> Finally got the card blocked and had a few mins to run benches. This is an EVGA XC with their 130% PL BIOS, +120 and +1000 on core/mem and voltage maxed in Precision. First attached is Heaven Extreme after 5 mins. Same numbers after Firestrike. Hit 2100 a few times but averaged 2085 at first and 2070 as time went on while staying in high 40's. Pretty pleased. Only had a chance to reinstall Shadow of the Tomb Raider and FH4 but on max everything at 3440x1440 they're both butter. Love it. Can't wait for the RTX patch.
> 
> Will run some more tests and then check the Galax BIOS. On a somewhat related note I've never had a RGB block but this one from EK is perfect. The lights aren't around the entire block so it's subtle enough to be cool but not Christmas Tree'ish. Also syncs with my Maximus mobo via Aura which is nice.




what card was this?


----------



## kx11

Alright, here's my Palit GamingPro OC 2080 Ti under a Phanteks waterblock, running nicely with the Galax BIOS, which allowed me to push the power limit to 126%, although the OC offset wasn't any better than with the original BIOS (+70 core / +100 mem maximum).


----------



## nodicaL

Official EVGA RTX 2080 Ti XC Ultra owner!

https://imgur.com/asXOk8o
https://imgur.com/BNZC4Xo


----------



## Emmett

Nico67 said:


> Emmett said:
> 
> 
> 
> I did not as I am still considering returning it
> 
> 
> 
> What was the issue, device ID error like the FE cards? could be that you need to try a different DP port in case they shifted the port numbering when dropping the extra port. Interested myself as single slot card would be very useful

I never tried to flash it. Was hoping to see if someone else did first.


----------



## GraphicsWhore

Djreversal said:


> GraphicsWhore said:
> 
> 
> 
> Finally got the card blocked and had a few mins to run benches. This is an EVGA XC with their 130% PL BIOS, +120 and +1000 on core/mem and voltage maxed in Precision. First attached is Heaven Extreme after 5 mins. Same numbers after Firestrike. Hit 2100 a few times but averaged 2085 at first and 2070 as time went on while staying in high 40's. Pretty pleased. Only had a chance to reinstall Shadow of the Tomb Raider and FH4 but on max everything at 3440x1440 they're both butter. Love it. Can't wait for the RTX patch.
> 
> Will run some more tests and then check the Galax BIOS. On a somewhat related note I've never had a RGB block but this one from EK is perfect. The lights aren't around the entire block so it's subtle enough to be cool but not Christmas Tree'ish. Also syncs with my Maximus mobo via Aura which is nice.
> 
> 
> 
> 
> 
> what card was this?

Mentioned in my post: EVGA XC (Gaming). EK block and backplate.

How the crap do you use Precision X1? Do I just hit Scan and let it run? Am I supposed to be running a test after I hit scan? I hit it and it just seemed to run indefinitely until my core reached 2160 and driver crashed. Now what?

I modified the VF curve and saved but when I loaded it just snapped to something else.

How do I change all the curve points after one to be the same?

This is the least intuitive software ever.

Edit: see attachment. This is after hitting scan and just letting it run for a while. The VF curve keeps changing but this time when I hit 2145 I canceled because last time it crashed at 2160.

What is this telling me? 2145 @ 1013mV and 40 degrees my ass.


----------



## ESRCJ

Can anyone convince me that I should keep my two FEs? How many of you are able to complete a run of Time Spy with a peak core clock of 2100MHz or higher on air? Neither of my cards can do that. Since I can't raise the power limit without shunt modding (LM produces volatile results, and soldering is something I've never done), these cards feel vastly underwhelming and the excitement is somewhat gone. I wanted to throw some blocks on them and put them in a massive loop. Overclocking is very important to me and I'll take every percentage I can get.
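For context on what a shunt mod actually changes, the math is just parallel resistance: bridging a resistor across the current-sense shunt lowers the sensed voltage, so the card under-reports its power draw. A minimal Python sketch with illustrative values (the 5 mΩ shunt and 8 mΩ stacked resistor are assumptions for the example, not measured values from these cards):

```python
def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

R_SHUNT = 0.005                      # original sense shunt (ohms), assumed
R_MOD = 0.008                        # resistor stacked on top (ohms), assumed
r_eff = parallel(R_SHUNT, R_MOD)     # ~3.08 mOhm after the mod
underread = r_eff / R_SHUNT          # card now senses ~62% of the real current
effective_limit = 320 / underread    # a 320 W cap then allows ~520 W actual draw
print(f"{r_eff * 1000:.2f} mOhm -> effective limit ~{effective_limit:.0f} W")
```

The flip side is that every reported power figure (and the protection based on it) is scaled down by the same factor, which is part of why shunt-modded results get volatile.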


----------



## nodicaL

For others wondering about the EVGA XC Ultra flash to 380W bios.
It worked flawlessly for me.


----------



## Pilz

I've had mine for a few days now and it's been running fine. Has anyone gotten EVGA's X1 to keep the settings after a reboot? What about flashing the EVGA XC Ultra Bios? I've seen a lot of failed attempts without a bypass so far. Waiting on my 9900K + Asus ROG Extreme motherboard now. 

Sorry for the crappy picture of my 1080FTW and 2080Ti it was dark and I was tired.


----------



## Pilz

nodicaL said:


> For others wondering about the EVGA XC Ultra flash to 380W bios.
> It worked flawlessly for me.


Did you flash it to the FE card? Did you have the day 1 driver installed beforehand?


----------



## GraphicsWhore

Pilz said:


> Did you flash it to the FE card? Did you have the day 1 driver installed beforehand?


380W is the Galax BIOS and he flashed it to the EVGA XC Ultra.


----------



## Schramm

Got my 2080 Ti FE, blocked it all up, and the thing is so noisy with coil whine. I even put it on its own separate PSU, a Corsair AX1200, to see if the Corsair RMx 650 just wasn't up to the task, and that didn't help at all. If I'm not mistaken, I can't really do anything to reduce it, correct? Is anyone else having this issue with FE cards?


----------



## raider89

Pilz said:


> nodicaL said:
> 
> 
> 
> For others wondering about the EVGA XC Ultra flash to 380W bios.
> It worked flawlessly for me.
> 
> 
> 
> Did you flash it to the FE card? Did you have the day 1 driver installed beforehand?
Click to expand...

He's talking about flashing the Galax BIOS to the XC Ultra card. No flashing on FE atm.


----------



## raider89

GraphicsWhore said:


> Djreversal said:
> 
> 
> 
> 
> 
> GraphicsWhore said:
> 
> 
> 
> Finally got the card blocked and had a few mins to run benches. This is an EVGA XC with their 130% PL BIOS, +120 and +1000 on core/mem and voltage maxed in Precision. First attached is Heaven Extreme after 5 mins. Same numbers after Firestrike. Hit 2100 a few times but averaged 2085 at first and 2070 as time went on while staying in high 40's. Pretty pleased. Only had a chance to reinstall Shadow of the Tomb Raider and FH4 but on max everything at 3440x1440 they're both butter. Love it. Can't wait for the RTX patch.
> 
> Will run some more tests and then check the Galax BIOS. On a somewhat related note I've never had a RGB block but this one from EK is perfect. The lights aren't around the entire block so it's subtle enough to be cool but not Christmas Tree'ish. Also syncs with my Maximus mobo via Aura which is nice.
> 
> 
> 
> 
> 
> what card was this?
> 
> 
> Mentioned in my post: EVGA XC (Gaming). EK block and backplate.
> 
> How the crap do you use Precision X1? Do I just hit Scan and let it run? Am I supposed to be running a test after I hit scan? I hit it and it just seemed to run indefinitely until my core reached 2160 and driver crashed. Now what?
> 
> I modified the VF curve and saved but when I loaded it just snapped to something else.
> 
> How do I change all the curve points after one to be the same?
> 
> This is the least intuitive software ever.
> 
> Edit: see attachment. This is after hitting scan and just letting it run for a while. The VF curve keeps changing but this time when I hit 2145 I canceled because last time it crashed at 2160.
> 
> What is this telling me? 2145 @ 1013mV and 40 degrees my ass.

The scanner curve does not take voltage, memory, or power target into account; it only gives you a core clock curve.


----------



## ESRCJ

Schramm said:


> Got my 2080ti fe blocked it all up and the thing is so noisy with coil whine. Even put it on its own separate psu a corsiar ax1200 to see if the corsair rmx650 was just not up to the task and that didn't help at all. If i'm not mistaken I can't really do anything to reduce it correct? Is anyone else having this issue with FE cards?
> 
> https://www.youtube.com/watch?v=P_fQWyd4_mg


I had noticeable coil whine with my Titan XP. It's definitely more noticeable with a block since you don't have those loud FE fans blowing and covering up some of that noise. The coil whine will be worse with higher FPS, at least that's what I noticed. My advice is to use headphones when gaming.


----------



## iamjanco

"For now, we are the new world record" --der8auer, with respect to the competition from kingpin @ 10:18







I'll also leave this one here:

*OMG .. MSI RTX 2080 Ti Sea Hawk X in UK Amazon for 1838 GBP /2500 USD*


----------



## BigMack70

gridironcpj said:


> How many of you are able to complete a run of TimeSpy with a peak core clock speed of 2100MHz or higher on air?


I've not seen anyone able to maintain greater than 2100 MHz on air in TimeSpy. These cards are overclocking duds, even worse than Pascal.

One thing seems certain... if AMD or Nvidia releases a GPU and claims in the announcement that it's "built for overclocking", you know that it's going to be a turd at overclocking.


----------



## Zammin

kx11 said:


> Alright, here's my Palit GamingPro OC 2080 Ti under a Phanteks waterblock, running nicely with the Galax BIOS, which allowed me to push the power limit to 126%, although the OC offset wasn't any better than with the original BIOS (+70 core / +100 mem maximum)


That block looks so good. I'm gonna miss the way my Strix 1080 Ti Phanteks block looks in my build


----------



## Pilz

GraphicsWhore said:


> Pilz said:
> 
> > Did you flash it to the FE card? Did you have the day 1 driver installed beforehand?
> 
> 380W is the Galax BIOS and he flashed it to the EVGA XC Ultra.




raider89 said:


> Pilz said:
> 
> > nodicaL said:
> > 
> > > For others wondering about the EVGA XC Ultra flash to the 380W BIOS: it worked flawlessly for me.
> > 
> > Did you flash it to the FE card? Did you have the day 1 driver installed beforehand?
> 
> He's talking about flashing the Galax BIOS to the XC Ultra card. No flashing on the FE at the moment.

Thanks guys, my mistake; I must've missed where he mentioned the card.

For what it's worth, my FE card can do 2100-2160MHz, but over 2100MHz it sometimes becomes unstable.

I get crashes with the memory at anything over +700, sadly.


----------



## Pilz

Zammin said:


> kx11 said:
> 
> > Alright, here's my Palit GamingPro OC 2080 Ti under a Phanteks waterblock, running nicely with the Galax BIOS, which allowed me to push the power limit to 126%, although the OC offset wasn't any better than with the original BIOS (+70 core / +100 mem maximum)
> 
> That block looks so good. I'm gonna miss the way my Strix 1080 Ti Phanteks block looks in my build 😞

I wonder why they left a gap in the backplate right behind the GPU die.


----------



## cstkl1

Decided not to share that wallpaper... lol


----------



## Neon01

Couple questions for the experts here. Just downloaded 3dMark (free version) and installed it to run TimeSpy benchmark and see how I'm stacking up and if everything is running smoothly. I've no experience with 3dMark at all, so this is all new to me. I OC'ed my Gigabyte Gaming 2080ti with +130/+450 in MSI Afterburner v4.6 Beta 9 with maxed power and temp targets and set the fan to a constant 75%. 

The results I got seem disappointing considering the numbers everyone else around here is flaunting:

Overall: 10802
Graphics Score: 14602
CPU Score: 4364

Are these just plain bad scores, or is this a case of my CPU holding me back? I know my 4770K (OC'ed to 4.2GHz) is probably getting pretty long in the tooth, and in truth I planned to do a full system upgrade near Black Friday, but is my GPU looking poor as well? I did notice that the core clock stayed pretty close to 1870MHz most of the way through the benchmark. The results screen listed the core clock at "2070MHz", but looking at the AB graphs it only hit that for a moment, and most of the time it was between 1850-1920MHz (lowest for the first test). I also noticed that most of the posts in this thread indicate that my Gigabyte should be able to go to 130% on the power limit, but AB seems to cap at 111%. Is there a way to unlock the actual limits of the vBIOS? I thought AB used to have some sort of unlock that you had to write into the config file, but it's been a minute since I tinkered with that.

For the record, my GPU temp never got above 60C at constant 75% fan, so it doesn't seem to be working that hard...

Any help is appreciated.


----------



## EQBoss

Got my 2 FEs today, but didn't have time to work on my machine. Hoping they're not duds lol


----------



## MrTOOSHORT

Neon01 said:


> Couple questions for the experts here. Just downloaded 3dMark (free version) and installed it to run TimeSpy benchmark and see how I'm stacking up and if everything is running smoothly. I've no experience with 3dMark at all, so this is all new to me. I OC'ed my Gigabyte Gaming 2080ti with +130/+450 in MSI Afterburner v4.6 Beta 9 with maxed power and temp targets and set the fan to a constant 75%.
> 
> The results I got seem disappointing considering the numbers everyone else around here are flaunting:
> 
> Overall: 10802
> Graphics Score: 14602
> CPU Score: 4364
> 
> Are these just plain bad scores, or is this a case of my CPU holding me back? I knew my 4770k (OCed to 4.2GHz) is probably getting pretty long in the tooth, and in truth I planned to do a full system upgrade near black Friday, but is my GPU looking poor as well? I did notice that the core clock stayed pretty close to 1870mhz most of the way through the benchmark. The results screen listed the core clock at "2070MHz", but looking at the AB graphs it only hit that for a moment, and most of the time it was between 1850-1920 (lowest for the first test). I also noticed that most of the posts in this thread indicate that my Gigabyte should be able to go to +130% on power limit, but AB seems to cap at +111%. Is there a way to unlock the actual limits of the vbios? I thought AB used to have some sort of unlock that you had to write into the config file, but it's been a minute since I tinkered with that.
> 
> For the record, my GPU temp never got above 60C at constant 75% fan, so it doesn't seem to be working that hard...
> 
> Any help is appreciated.



Looks OK to me. OC'ing the CPU higher will get you a bit more GPU score in TimeSpy too, but the GPU score is what's important in 3DMark.
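For context on why the overall number sags while the graphics score looks fine: as far as I recall from UL's technical guide, the overall TimeSpy score is a weighted harmonic mean of the graphics and CPU scores, so a weak CPU score drags the total down hard even when the GPU is healthy. Treat the exact weights below as an assumption:

```python
# TimeSpy overall score modeled as a weighted harmonic mean of the graphics
# and CPU scores. Weights 0.85/0.15 are per UL's technical guide (assumed).
def timespy_overall(graphics: float, cpu: float,
                    w_g: float = 0.85, w_c: float = 0.15) -> float:
    return 1 / (w_g / graphics + w_c / cpu)

# Neon01's scores from above: graphics 14602, CPU 4364.
print(round(timespy_overall(14602, 4364)))  # lands right around the reported 10802
```

Plugging in the scores from the post above reproduces the reported overall almost exactly, which is why a 4770K caps the total even with a strong 2080 Ti graphics score.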


----------



## famich

BigMack70 said:


> I've not seen anyone able to maintain greater than 2100 MHz on air in TimeSpy. These cards are overclocking duds, even worse than Pascal.
> 
> One thing seems certain... if AMD or Nvidia releases a GPU and claims in the announcement that it's "built for overclocking", you know that it's going to be a turd at overclocking.


Gainward GS here; my 4K TimeSpy run peaked at 2140 on air cooling, fans at 80%. I know three other guys who did the same with Gainward on air. Most probably binned chips on the GS cards.


----------



## ssgwright

Neon01 said:


> Couple questions for the experts here. Just downloaded 3dMark (free version) and installed it to run TimeSpy benchmark and see how I'm stacking up and if everything is running smoothly. I've no experience with 3dMark at all, so this is all new to me. I OC'ed my Gigabyte Gaming 2080ti with +130/+450 in MSI Afterburner v4.6 Beta 9 with maxed power and temp targets and set the fan to a constant 75%.
> 
> The results I got seem disappointing considering the numbers everyone else around here are flaunting:
> 
> Overall: 10802
> Graphics Score: 14602
> CPU Score: 4364
> 
> Are these just plain bad scores, or is this a case of my CPU holding me back? I knew my 4770k (OCed to 4.2GHz) is probably getting pretty long in the tooth, and in truth I planned to do a full system upgrade near black Friday, but is my GPU looking poor as well? I did notice that the core clock stayed pretty close to 1870mhz most of the way through the benchmark. The results screen listed the core clock at "2070MHz", but looking at the AB graphs it only hit that for a moment, and most of the time it was between 1850-1920 (lowest for the first test). I also noticed that most of the posts in this thread indicate that my Gigabyte should be able to go to +130% on power limit, but AB seems to cap at +111%. Is there a way to unlock the actual limits of the vbios? I thought AB used to have some sort of unlock that you had to write into the config file, but it's been a minute since I tinkered with that.
> 
> For the record, my GPU temp never got above 60C at constant 75% fan, so it doesn't seem to be working that hard...
> 
> Any help is appreciated.


Your graphics score is good for a 2080 Ti, but the overall score is crap. What CPU are you running?


----------



## frosthesnowman

Emmett said:


> I never tried to flash it. Was hoping to see if somoene else did first


Exiting lurk mode here for a minute.

I took the chance and flashed my ASUS TURBO-RTX2080TI-11G with the "GALAX Reference PCB (2x8-Pin) RTX 2080 Ti 300W x 126% Power Target BIOS (380W)" linked on the first page of this thread.

It worked. 

Both DisplayPort connectors still work, didn't test the HDMI or USB-C.

It's an older system with a 180mm Radiator in push-pull config and EKWB block for the gpu. There is a separate 120mm push-pull AIO for the CPU.

A decent little performance increase for an hour or two of work. Temps went up by about 3c under load. Need to work on the overclock further.


----------



## GraphicsWhore

Pilz said:


> Zammin said:
> 
> > kx11 said:
> > 
> > > Alright, here's my Palit GamingPro OC 2080 Ti under a Phanteks waterblock, running nicely with the Galax BIOS, which allowed me to push the power limit to 126%, although the OC offset wasn't any better than with the original BIOS (+70 core / +100 mem maximum)
> > 
> > That block looks so good. I'm gonna miss the way my Strix 1080 Ti Phanteks block looks in my build 😞
> 
> I wonder why they left a gap in the backplate right behind the GPU die.

To let it breathe? Very strange and would make me nervous if I’m on water. The EK backplate has a thermal pad on that spot.


----------



## Neon01

ssgwright said:


> your graphics score is good for at 2080ti but the overall score is crap what cpu are you running?


i7-4770k. It's got a very mild OC on it (4.2GHz)

Just ran again after I realized I didn't have the Nvidia Control Panel power management setting on "Prefer Max Performance". Bumped the core OC to +145MHz and ran it again; it didn't make much difference:

10838 overall
14639 graphics
4386 CPU

It really seems like I'm being limited by that 111% power limit, as AB shows me in a power-limit condition nearly the whole time, and I'm bouncing between 100% and 110% power. Can that be unlocked without running a custom BIOS? I've read in this thread that the stock Gigabyte Gaming BIOS is good to 120% for 360W.
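Worth remembering that the power-limit slider is just a percentage of whatever base power target the card's BIOS reports, so you can translate slider values to watts directly. The 300W base below is an assumption taken from the BIOS figures quoted in this thread:

```python
# Convert a power-limit slider percentage to watts. The limit is a
# multiplier on the BIOS base power target (300W assumed, per this thread).
def power_target_watts(base_w: float, limit_percent: float) -> float:
    return base_w * limit_percent / 100

for pct in (100, 111, 120, 126):
    # e.g. 126% of 300W -> 378W, i.e. the "380W" Galax BIOS
    print(f"{pct}% of 300W -> {power_target_watts(300, pct):.0f}W")
```

So a 111% cap on a 300W base means roughly 333W, versus 360W at 120%; the slider ceiling comes from the BIOS, which is why people flash a different one.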


----------



## Pilz

GraphicsWhore said:


> To let it breathe? Very strange and would make me nervous if I’m on water. The EK backplate has a thermal pad on that spot.


On the stock FE cooler and EK block there's a thermal pad in that spot which is obviously enclosed, so I thought it was odd that they left it open. I'm pretty sure all of my recent GPUs with factory backplates have that section closed; maybe it's for "style".

Nvidia sure loves to overuse thermal paste too. This was mine when I removed the factory cooler. I never used the card before stripping off the stock cooler and replacing it with my EKWB block + backplate.


I'm also getting very inconsistent "1-click overclocks" with X1, as you can see here. I ran these tests back to back after a reboot. One shows the speeds in game where it's running at a constant 2130MHz, but I did have a crash or two despite passing the "test".


----------



## Ferreal

Baasha said:


> SOTTR scales really well but other games that otherwise scaled really well (I was using 4x Titan Xp and more recently 2x Titan Xp) do not do so with 2x 2080 Ti and NVLink.
> 
> Can you test some other games and corroborate? I tried Witcher 3, BF1, Ghost Recon Wildlands etc.
> 
> Single GPU games such as Odyssey and Origins perform WAY better than the Titan Xp - about 30% better so far.
> 
> I shudder to think if my 6950X @ 4.30Ghz is "bottlenecking" these 2x 2080 Ti - I don't think so but...


I found out that I can enable 144hz in game only, desktop has to be either 120hz or 98hz.

7980xe
Acer x27
4k HDR 144hz
MS October update 1809 + 416.16 drivers

SOTTR - perfect scaling
Witcher 3 - good scaling - I didn't play it for too long but it was around 90% usage on both cards. 
> BF1 - this game did the worst. On DX11 I was getting between 50%-72% most of the time; it never reached 80%. Definitely playable though, and the PQ is awesome. Turned on DX12 and it was unplayable. FPS was jumping everywhere.
Battlefront 2 - I played this game the most. NVLink did not work at all on DX12. On DX11 - perfect scaling - playing this game in 4k 144hz and getting over 100fps is just amazing, everything is sharp, smooth and FAST! Black crush is still there but it is a lot better than what it was on the Titan V (waiting for that Acer firmware).

9980xe is coming soon if you're looking to upgrade.


----------



## Nico67

frosthesnowman said:


> Exiting lurk mode here for a minute.
> 
> I took the chance and flashed my ASUS TURBO-RTX2080TI-11G with the "GALAX Reference PCB (2x8-Pin) RTX 2080 Ti 300W x 126% Power Target BIOS (380W)" linked on the first page of this thread.
> 
> It worked.
> 
> Both DisplayPort connectors still work, didn't test the HDMI or USB-C.
> 
> It's an older system with a 180mm Radiator in push-pull config and EKWB block for the gpu. There is a separate 120mm push-pull AIO for the CPU.
> 
> A decent little performance increase for an hour or two of work. Temps went up by about 3c under load. Need to work on the overclock further.


Nice, I'll keep an eye out for Turbos as well. Got an EVGA XC on pre-order, but there's very little availability here.


----------



## arrow0309

kx11 said:


> Alright, here's my Palit GamingPro OC 2080 Ti under a Phanteks waterblock, running nicely with the Galax BIOS, which allowed me to push the power limit to 126%, although the OC offset wasn't any better than with the original BIOS (+70 core / +100 mem maximum)


Nice and sleek rig. However, only one 360 radiator? That explains the high temperatures of your 2080 Ti you were posting. It's way too little cooling mass IMHO given the other components as well (Skylake-X and VRM cooling), and you're not even living in Norway or northern Canada. I wonder what temp the liquid inside your loop gets to?


----------



## nerfon

Has anyone tried flashing the old Nvidia 2080 Ti FE BIOS https://www.techpowerup.com/vgabios/203753/nvidia-rtx2080ti-11264-180829 over the locked retail one?


----------



## ENTERPRISE

Boooo, looks like my MSI Sea Hawk X cards have an ETA of 30th November. Let's hope they come soon. Nvidia and their poor yields.


----------



## BlueHeisenberg

nerfon said:


> Has anyone tried flashing the old Nvidia 2080 Ti FE BIOS https://www.techpowerup.com/vgabios/203753/nvidia-rtx2080ti-11264-180829 over the locked retail one?


That might be a good idea, but I think it will fail like the Galax one.

Yesterday I installed the EK Vector WB with the backplate on my 2080 Ti FE, and now the temps are around 45 degrees Celsius with +1100 on memory and +160 on core, but it obviously hits the power limit.

I tried to flash the Galax BIOS, but no luck, same as everyone 😞

I also tried to put liquid metal on the shunt resistors, but I couldn't manage to get it to stick...


----------



## Zammin

arrow0309 said:


> Nice and sleek rig. However, only one 360 radiator? That explains the high temperatures of your 2080 Ti you were posting. It's way too little cooling mass IMHO given the other components as well (Skylake-X and VRM cooling), and you're not even living in Norway or northern Canada. I wonder what temp the liquid inside your loop gets to?


It looks like he has a 280(?) in the top as well, but it's just out of sight.


----------



## nerfon

My EK Vector is coming tomorrow. Yeah, I feel our only option for this new board revision of the Founders will be hard mods with solder, or just selling them when we find some other 2080 Ti in stock.


----------



## arrow0309

ENTERPRISE said:


> Boooo, looks like my MSI Seahawk X Cards have an ETA of 30th November, Lets hope they come soon. Nvidia and their poor yeilds


Yeah, we're messed up pretty badly in the UK with these MSI cards, especially from Scan; no ETA info whatsoever, even for the Duke I've had on preorder since 20th September.


----------



## Pilz

arrow0309 said:


> kx11 said:
> 
> > Alright, here's my Palit GamingPro OC 2080 Ti under a Phanteks waterblock, running nicely with the Galax BIOS, which allowed me to push the power limit to 126%, although the OC offset wasn't any better than with the original BIOS (+70 core / +100 mem maximum)
> 
> Nice and sleek rig. However, only one 360 radiator? That explains the high temperatures of your 2080 Ti you were posting. It's way too little cooling mass IMHO given the other components as well (Skylake-X and VRM cooling), and you're not even living in Norway or northern Canada. I wonder what temp the liquid inside your loop gets to?

Here I was thinking a 480mm*60mm with push-pull per loop was the minimum. I have each loop running like that and my coolant hovers around 28-30°C on the hot end depending on my pump speed, fans at 1200rpm. I can't imagine cooling the 2080 Ti with a single 360mm, or even cooling both that and an X299 monoblock with a 360+240 setup, because we all know how Skylake-X stays super cool /S


----------



## GraphicsWhore

frosthesnowman said:


> Exiting lurk mode here for a minute.
> 
> I took the chance and flashed my ASUS TURBO-RTX2080TI-11G with the "GALAX Reference PCB (2x8-Pin) RTX 2080 Ti 300W x 126% Power Target BIOS (380W)" linked on the first page of this thread.
> 
> It worked.


Did you do anything special to unlock the voltage slider in AB? Mine is grayed out. I did the usual thing of changing the profile text file and checking the box but no luck. Also, when I try to use the voltage curve all points are at the very bottom. If I bring them up to about where i had my 1080Ti - with relatively steady increase and 2085 at about 1050mV - AB is telling me I'm doing the equivalent of like +1100 to the core. Thing is completely messed up. Using the latest version.


----------



## micluha

Hi everyone!
Has someone tried flashing a Galax Bios to Duke from MSI?

I only got an additional 90MHz from OC'ing with that poor PL (112%). Now I'm thinking of trying the BIOS from Galax, but I'm not sure about it.
Does it make any sense? Will the gain be noticeable in 4K gaming?

Thx in advance from Belarus.

PS My GPU is on air.


----------



## GraphicsWhore

micluha said:


> Does it make any sense? Will the gain be noticeable in 4K gaming?


Short answer: no.


----------



## ocvn

micluha said:


> Hi everyone!
> Has someone tried flashing a Galax Bios to Duke from MSI?
> 
> I only got an additional 90MHz from OC'ing with that poor PL (112%). Now I'm thinking of trying the BIOS from Galax, but I'm not sure about it.
> Does it make any sense? Will the gain be noticeable in 4K gaming?
> 
> Thx in advance from Belarus.
> 
> PS My GPU is on air.


I flashed the Galax 380W and Giga 366W BIOSes to both my Trio X cards without any problem.


----------



## micluha

GraphicsWhore said:


> Short answer: no.


Thx you! If this is true.


----------



## nycgtr

micluha said:


> Thx you! If this is true.


I flashed like 7 cards and I can tell you the gain is very minimal; you will boost higher, but the FPS difference is like 1.x.


----------



## Artah

Can someone post a Unigine Superposition 4K Optimized (DirectX) benchmark with a single and/or SLI 2080 Ti, please, and list your OC/voltage using driver 416.16? Thank you.


----------



## Pilz

GraphicsWhore said:


> frosthesnowman said:
> 
> > Exiting lurk mode here for a minute.
> > 
> > I took the chance and flashed my ASUS TURBO-RTX2080TI-11G with the "GALAX Reference PCB (2x8-Pin) RTX 2080 Ti 300W x 126% Power Target BIOS (380W)" linked on the first page of this thread.
> > 
> > It worked.
> 
> Did you do anything special to unlock the voltage slider in AB? Mine is grayed out. I did the usual thing of changing the profile text file and checking the box, but no luck. Also, when I try to use the voltage curve, all points are at the very bottom. If I bring them up to about where I had my 1080 Ti, with a relatively steady increase and 2085 at about 1050mV, AB tells me I'm doing the equivalent of like +1100 to the core. The thing is completely messed up. Using the latest version.

Have you tried using MSI Afterburner or EVGA's X1? I think the voltage slider is rather a placebo of sorts, since moving it shows no real changes that I've seen so far running up to 2205MHz, though obviously hitting the power limit causes stability issues.


----------



## arrow0309

Pilz said:

> Here I was thinking a 480mm*60mm with push-pull per loop was the minimum. I have each loop running like that and my coolant hovers around 28-30°C on the hot end depending on my pump speed, fans at 1200rpm. I can't imagine cooling the 2080 Ti with a single 360mm, or even cooling both that and an X299 monoblock with a 360+240 setup, because we all know how Skylake-X stays super cool /S

I agree. I'm using a 360 (XSPC RX V3) top, pull, internal and a 420 (Alphacool Monsta) in push, external, and I don't even have a monoblock, only a Heatkiller IV on the CPU and another Heatkiller on the Titan X Pascal. I'll swap the GPU full cover for the EK Vector.


----------



## xer0h0ur

Pilz said:


> Have you tried using MSI Afterburner or EVGA's X1? I think the voltage slider is rather a placebo of sorts, since moving it shows no real changes that I've seen so far running up to 2205MHz, though obviously hitting the power limit causes stability issues.


From what I observe so far, the voltage slider does nothing in terms of adding voltage, but it does seem to stabilize the voltage at high-end overclocks where you would be on the edge of crashing without bumping the slider all the way. Maybe it's just me, but that is my general observation.


----------



## GraphicsWhore

Pilz said:


> Have you tried using MSI Afterburner or EVGA's X1? I think the voltage slider is rather a placebo of sorts, since moving it shows no real changes that I've seen so far running up to 2205MHz, though obviously hitting the power limit causes stability issues.


I was referring to AfterBurner ("AB"). Since I can't unlock the voltage slider I went to X1 and like all EVGA software it's kind of a mess. Doesn't save profiles after rebooting, voltage curve does whatever it wants if you set it manually and the "Scan" function doesn't do anything for me except start showing ridiculous clock rates after a few mins and pushing my card until it crashes.

But I've also got the weird AB issue with the voltage curve where it's telling me I'm trying to add +1000 to the clock because I moved it up to what is actually a fairly average, level curve topping out at 1050mV @ 2070. Like ***.


----------



## Edge0fsanity

Managed to grab an ASUS Dual OC off Newegg this morning to replace my FE. Should have it by tomorrow afternoon. Hope I don't get another potato with bad coil whine...


----------



## kx11

arrow0309 said:


> Nice and sleek rig. However, only one 360 radiator? That explains the high temperatures of your 2080 Ti you were posting. It's way too little cooling mass IMHO given the other components as well (Skylake-X and VRM cooling), and you're not even living in Norway or northern Canada. I wonder what temp the liquid inside your loop gets to?



Thanks!

It's a 360 rad + 240 rad. The bad temps from before were caused by placing the thermal pads on the wrong parts; I followed the EKWB manual and it worked. Now it reaches 60°C max under load at 4K ultra settings.

The CPU temps never pass 60°C, averaging 50°C under load and 26-35°C idle while it's OC'd to 4.6GHz.

So I think I'm good overall.


----------



## xer0h0ur

kx11 said:


> Thanks!
> 
> It's a 360 rad + 240 rad. The bad temps from before were caused by placing the thermal pads on the wrong parts; I followed the EKWB manual and it worked. Now it reaches 60°C max under load at 4K ultra settings.
> 
> The CPU temps never pass 60°C, averaging 50°C under load and 26-35°C idle while it's OC'd to 4.6GHz.
> 
> So I think I'm good overall.


Dude, how? I'm literally only using ONE 360 Monsta radiator in push/pull for an overclocked 4960X and overclocked 2080 Ti and manage better temps than you do and my loop is dirty right now too. Something still doesn't make sense there for you.


----------



## kx11

xer0h0ur said:


> Dude, how? I'm literally only using ONE 360 Monsta radiator in push/pull for an overclocked 4960X and overclocked 2080 Ti and manage better temps than you do and my loop is dirty right now too. Something still doesn't make sense there for you.



I dunno man, my temps are good AFAIK and performance is good.


----------



## xer0h0ur

I'm going to slap a corsair H80i V2 on my CPU and leave the Monsta for only the GPU to see how much I can drop the temps when I get around to tearing down the loop to clean it. Hopefully I can drop my GPU temps back into the 40's at full GPU load versus the low 50's where it reaches now.


----------



## GAN77

xer0h0ur said:


> I'm going to slap a corsair H80i V2 on my CPU and leave the Monsta for only the GPU to see how much I can drop the temps when I get around to tearing down the loop to clean it. Hopefully I can drop my GPU temps back into the 40's at full GPU load versus the low 50's where it reaches now.


You will not get 10 degrees from this replacement.


----------



## OleMortenF

Just installed an EK waterblock on my 2080 Ti myself. Temp under load is about 50c, overclocked to 2100/8000, when benchmarking.


----------



## xer0h0ur

GAN77 said:


> You will not get 10 degrees from this replacement.


I'm not in search of a 10c drop. A 10c drop would mean going from 53c to 43c which isn't going to happen. However if I can drop it back into the 40's then that means a step or two higher in GPU boost frequencies so I would be maintaining higher clocks when the GPU is pegged in the upper 90's of usage. Right now I only maintain the highest boost clocks on games that don't have high usage and downclock as low as 2040MHz on high usage. I'm after all looking for the best temps under sustained loads which is what gaming is. 

It's no skin off my back in the end; I already had the H80i and plenty of tubing to redo the GPU loop anyway. It's more a case of curiosity, as I know for a fact that the OC'ed 4960X is dumping a lot of heat into that radiator.


----------



## coolbho3k

Hmm, I'm getting flashing artifacts in PUBG with a memory OC of +500. Dropped it to +300 and still got flashing artifacts. It's stable everywhere else.


----------



## Baasha

Ferreal said:


> I found out that I can enable 144hz in game only, desktop has to be either 120hz or 98hz.
> 
> 7980xe
> Acer x27
> 4k HDR 144hz
> MS October update 1809 + 416.16 drivers
> 
> SOTTR - perfect scaling
> Witcher 3 - good scaling - I didn't play it for too long but it was around 90% usage on both cards.
> BF1 - this game did the worst. On DX11 I was getting between 50%-72% most of the time, it never reached 80%. Definitely playable though and the PQ is awesome. Turned on DX12 and it was unplayble. FPS was jumping everywhere.
> Battlefront 2 - I played this game the most. NVLink did not work at all on DX12. On DX11 - perfect scaling - playing this game in 4k 144hz and getting over 100fps is just amazing, everything is sharp, smooth and FAST! Black crush is still there but it is a lot better than what it was on the Titan V (waiting for that Acer firmware).
> 
> 9980xe is coming soon if you're looking to upgrade.


Thanks for testing it out. Perfect - I am planning to upgrade - I skipped the X299 generation so 9980xe should be a nice "jump."

Regarding games, here's my findings so far:

1.) SOTTR - same, perfect scaling
2.) Witcher 3 - I get 90% on one GPU but only around 83% on the other - it doesn't seem to scale evenly.
3.) BF1 - DX12 doesn't allow multi-GPU; DX11 shows average scaling - around 70% for each GPU (whereas 2x Titan Xp are at 99% & 98% each and I get around 150fps in 4K maxed out w/ TAA).
4.) Battlefront II - same - scales really well and was amazed because the game would insta-crash for me with 2 (or 4) Titan Xp for some reason. Getting around 130 - 150fps in 4K maxed out.

The real shocker for me is AC Odyssey - still cannot get 60fps in 4K maxed out!  Some scenes were at 43fps (!).

Testing a bunch of other games so will update thread soon.


----------



## Diverge

xer0h0ur said:


> Dude, how? I'm literally only using ONE 360 Monsta radiator in push/pull for an overclocked 4960X and overclocked 2080 Ti and manage better temps than you do and my loop is dirty right now too. Something still doesn't make sense there for you.


I agree. 60C on water cooling doesn't seem right for a GPU. It should be in the low to mid 50's under load.

With my Titan X Pascal and a 6700K on one small 2x120 rad I was in the 50's with a +200 core overclock, and that was with a leak and lots of air in the system. I think I can keep my 2080 Ti in the high 60's / low 70's on air if I max the fan speed to 100% and add additional fans blowing at it to cool the back (which gets super hot on the FE cooler).


----------



## Diverge

On a side note, anyone with a FE able to use GPU-Z 2.11 to dump their bios? 

It says my device doesn't support it when I try, which I found odd.


----------



## Pilz

arrow0309 said:


> Pilz said:
> 
> 
> 
> 
> 
> arrow0309 said:
> 
> 
> 
> Here I was thinking a 480mm*60mm with push pull per loop was the minimum. I have each loop running like that and my coolant hovers around 28-30°C on the hot end depending on my pump speed, fans at 1200rpm. I can't imagine cooling the 2080Ti with a single 360mm or even cooling both that and a X299 Monoblock with a 360+240 set up because we all know how Skylake-X stays super cool /S
> 
> 
> 
> I agree, I'm using a 360 (Xspc Rx, v3) top pull internal and a 420 (Alphacool Monsta) in push, external and I don't even have a monoblock, only Heatkiller vi on the cpu and another Heatkiller Titan X pascal.
> I'll swap the Gpu full cover with the EK Vector.
> 
> Click to expand...
> 
> 
> My rads are the biggest that will currently fit unless I modify my 1000D to accommodate wider 480mm units. My temps are fine with low fan speeds so I'm glad I went with a more reasonable set up this time.
> 
> 
> 
> xer0h0ur said:
> 
> 
> 
> 
> 
> Pilz said:
> 
> 
> 
> Have you tried using MSI afterburner or EVGAs X1? I think the voltage slider is rather a placebo of sort since moving it shows no real changes that I've seen so far running up to 2205mhz but obviously hitting the power limit causing stability issues.
> 
> Click to expand...
> 
> From what I observe so far, the voltage slider does nothing in terms of adding voltage but it does seem to stabilize the voltage at the high end overclocks where you would be on the edge of crashing without bumping the slider all the way. Maybe its just me but that is my general observation.
> 
> Click to expand...
> 
> 
> 
> 
> GraphicsWhore said:
> 
> 
> 
> 
> 
> Pilz said:
> 
> 
> 
> Have you tried using MSI afterburner or EVGAs X1? I think the voltage slider is rather a placebo of sort since moving it shows no real changes that I've seen so far running up to 2205mhz but obviously hitting the power limit causing stability issues.
> 
> Click to expand...
> 
> I was referring to AfterBurner ("AB"). Since I can't unlock the voltage slider I went to X1 and like all EVGA software it's kind of a mess. Doesn't save profiles after rebooting, voltage curve does whatever it wants if you set it manually and the "Scan" function doesn't do anything for me except start showing ridiculous clock rates after a few mins and pushing my card until it crashes.
> 
> But I've also got the weird AB issue with the voltage curve where it's telling me I'm trying to add +1000 to the clock because I moved it up to what is actually a fairly average, level curve topping out at 1050mV @ 2070. Like ***.
> 
> 
I was a little tired when I quoted you earlier, so my apologies there. X1 does suck because, like you mentioned, settings aren't saved on reboot and behave strangely except while it's running. Scan does work for me, with inconsistent results, and even offsets that "pass" aren't fully stable in games, as I found out. My GPU easily gets to the cap of 2205MHz, which I can't seem to pass, although I'd like to see if we can ever flash a different BIOS on the FE cards and pull more power.
> 
> 
> 
> 
> xer0h0ur said:
> 
> 
> 
> I'm going to slap a corsair H80i V2 on my CPU and leave the Monsta for only the GPU to see how much I can drop the temps when I get around to tearing down the loop to clean it. Hopefully I can drop my GPU temps back into the 40's at full GPU load versus the low 50's where it reaches now.
> 
> 
> Mine is running off a 480mm loop and sits at 47°C peak with a 23°C ambient temperature in my room. Fans are running at 1200rpm, and the temperature sensors at both ends of my loop read the coolant at 30.5°C/29.8°C hot/cold sides with the D5 running around 3000rpm.
> 
> Offset for these numbers is +120/500mem


----------



## xer0h0ur

I love that 1000D case. That is what I planned on building my next rig into.


----------



## mr2cam

Diverge said:


> I agree. 60C on water cooling doesn't seem right for a gpu. Should be the low to mid 50's under load
> 
> With my Titan X Pascal and a 6700K on one small 2x120 rad I was in the 50's with a +200 core overclock. And it had a leak, with lots of air in the system. I think I can keep my 2080 Ti in the high 60's / low 70's on air if I max the fan speed to 100%, with additional fans blowing at it to cool the back (which gets super hot on the FE cooler).


Yeah, I agree something is up. I have a 360 + 240 cooling my 8700K at 5GHz (1.36V) and a 2080 Ti; max temp with an ambient of 73°F and my A/C off was 53°C. I usually have the A/C on in the room and temps never go above 42°C.


----------



## Ford8484

Baasha said:


> Thanks for testing it out. Perfect - I am planning to upgrade - I skipped the X299 generation so 9980xe should be a nice "jump."
> 
> Regarding games, here's my findings so far:
> 
> 1.) SOTTR - same, perfect scaling
> 2.) Witcher 3 - I get 90% on one GPU but only around 83% on the other - it doesn't seem to scale evenly.
> 3.) BF1 - DX12 doesn't allow multi-GPU; DX11 shows average scaling - around 70% for each GPU (whereas 2x Titan Xp are at 99% & 98% each and I get around 150fps in 4K maxed out w/ TAA).
> 4.) Battlefront II - same - scales really well and was amazed because the game would insta-crash for me with 2 (or 4) Titan Xp for some reason. Getting around 130 - 150fps in 4K maxed out.
> 
> The real shocker for me is AC Odyssey - still cannot get 60fps in 4K maxed out!  Some scenes were at 43fps (!).
> 
> Testing a bunch of other games so will update thread soon.


Drop the volumetric clouds to High and AA to Low - you should easily be able to get above 60. Pretty lame we have to do that with this card, but it is what it is.


----------



## arrow0309

kx11 said:


> thnx
> 
> 
> 
> It's a 360 rad + 240 rad. The bad temps from before were caused by placing the thermal pads on the wrong parts. I followed the EKWB manual and it worked; now it reaches 60°C max only under load @ 4K ultra settings.
> 
> 
> The CPU temps never pass 60°C, averaging @ 50°C under load and 26°C to 35°C idle while it's OC'd to 4.6GHz.
> 
> 
> 
> 
> 
> So I think I'm good overall.





kx11 said:


> I dunno man, my temps are good AFAIK and performance is good.


Anything above 50°C GPU-wise (and I'm being generous with "50") is not a custom-loop temp; it's AIO temp (or high-end air cooler temp).
Just curious, what is your room temp?


----------



## kx11

arrow0309 said:


> Everything above 50C gpuwise (and I'm being generous with "50") is not liquid cooled temp, it's AIO temp (or high end air coolers temp).
> Just curious, what is your room temp?



Room temp is barely 23°C with the AC on. In fact, I stand corrected: the loop isn't perfect. GPU temps went to almost 65°C under a Time Spy Extreme stress test, which is a red flag. The CPU is still cool and good to go, so either this GPU is a failure or I need to redo the loop.



On a side note, my case is a Corsair 570X and my build inside is X299 EATX. I got the top rad mounted, but the fans stick out a little toward the glass panel (that faces me), which blocks some of the airflow from the fans; that is why I think the GPU temps are not normal.


I guess I'll upgrade to a Thermaltake View 71, because this case crams everything in so tight.


----------



## Diverge

kx11 said:


> Room temp is barely 23°C with the AC on. In fact, I stand corrected: the loop isn't perfect. GPU temps went to almost 65°C under a Time Spy Extreme stress test, which is a red flag. The CPU is still cool and good to go, so either this GPU is a failure or I need to redo the loop.
> 
> 
> 
> On a side note, my case is a Corsair 570X and my build inside is X299 EATX. I got the top rad mounted, but the fans stick out a little toward the glass panel (that faces me), which blocks some of the airflow from the fans; that is why I think the GPU temps are not normal.
> 
> 
> I guess I'll upgrade to a Thermaltake View 71, because this case crams everything in so tight.


I doubt it's because your case is tight. It's likely the block isn't seated correctly. I've read that you already re-seated it once to fix really bad temps, but it still doesn't seem right.

But my previous build was a Caselabs BH-2, with a watercooled 6700K and Titan X Pascal (both overclocked), all on one EK PE 240 rad and a wimpy EK SPC-60 pump, and my GPU temps were low 50's. I even had a leak (not sure for how long)... Poor EK quality control - a long time ago I started off with an EK Predator that sat in the box for a long time, and when I went to use it, it started to leak from the plastic rad/pump housing (crack at the fitting threads). So I built my own loop, salvaging the CPU block, which was basically unused. I then recently noticed air pockets in my GPU block, which caused the GPU block to tarnish... which all led back to an apparent leak at the CPU o-ring :[


----------



## mr2cam

kx11 said:


> Room temp is barely 23°C with the AC on. In fact, I stand corrected: the loop isn't perfect. GPU temps went to almost 65°C under a Time Spy Extreme stress test, which is a red flag. The CPU is still cool and good to go, so either this GPU is a failure or I need to redo the loop.
> 
> 
> 
> On a side note, my case is a Corsair 570X and my build inside is X299 EATX. I got the top rad mounted, but the fans stick out a little toward the glass panel (that faces me), which blocks some of the airflow from the fans; that is why I think the GPU temps are not normal.
> 
> 
> I guess I'll upgrade to a Thermaltake View 71, because this case crams everything in so tight.


When doing a stress test, take your hand, press the block against the card and hold it, and see if you get a temp drop. If you do, then you have a mounting issue with the block.


----------



## Esenel

*FE Bios +149 Core + 1125 Mem - TimeSpy Graphics 16459*

So the best I could manage with my Founders BIOS, as of today, is Rank 17 for Time Spy Graphics Score, with 16459 points, on a custom loop without a shunt mod.

I would say that is a pretty fair result.

https://www.3dmark.com/spy/4667323


For Time Spy Extreme Graphics Score I was only able to sustain +145 core and +1100 mem, with artifacts, and therefore only Rank 29 with 7792 points.

https://www.3dmark.com/spy/4667869


My findings?
Compared to the 380 Watt BIOS, the FE one is also very capable.
The key, in my opinion, is to put it under a block and to tune the memory.

I haven't done a full test range yet, but as we are very limited in core clock, I would say tuning the memory is more important than the core.

Maybe someone has done this already? 
Will try it out soon myself.

Kind regards


----------



## carlhil2

Some of you guys are getting some high temps for water. My GPU was maxing out in benches at 44°C with a 280mm Monsta rad in its own loop until the pump died today. Integrated it with my CPU loop and it idles at 28°C; it never hit 40°C during 2 hrs. of playing IL-2 (WW2 flight sim)... It's this block, using the GPU backplate: https://modmymods.com/alphacool-eisblock-gpx-n-acetal-nvidia-geforce-rtx-2080ti-m01-11667.html


----------



## kx11

Yeah, my problem is most likely the GPU itself. I read reports about Palit 2080 Ti cards overheating for no reason, not to mention my card won't run more than an hour in games with even a tiny OC. It did pass the Time Spy Extreme stress test though, with 97%.


----------



## badjz

kx11 said:


> Yeah, my problem is most likely the GPU itself. I read reports about Palit 2080 Ti cards overheating for no reason, not to mention my card won't run more than an hour in games with even a tiny OC. It did pass the Time Spy Extreme stress test though, with 97%.


It’s likely your block. I’m having a similar issue and have isolated it to poor contact via the block (EK Vector). I have 2 running in SLI; one card hits 45°C on load, the other 65°C. I reseated the block/pads more times than I can count, no change. I swapped the blocks around and now the high temps have swapped around too. Must be a design flaw in the block; it’s not making good contact on both my cards.


----------



## Zammin

I'm starting to think I might cancel the order on my EK block and go with this one: http://www.au.aquatuning.com/water-...xi-light-nvidia-geforce-rtx-2080ti-m01?c=7187










The backplate has been listed now as well: http://www.au.aquatuning.com/water-...ck-gpx-n-rtx-2080ti-acetal-plexi-light?c=7187










They reckon stock should be ready for delivery between the 15th and the 19th, so that's not too far away. Works out a fair bit cheaper than the Vector + Backplate too.


----------



## skingun

That block looks nice. I like the Aquacomputer and Watercool ones too.


----------



## ENTERPRISE

arrow0309 said:


> Yeah, we're messed up pretty bad in the UK with these MSI, especially from Scan, no ETA info whatsoever even for the Duke I've preordered since 20th September.



Yeah, UK stock is awful! I guess the silver lining is that once we get our cards there will be newer Nvidia drivers for them, and perhaps more BIOSes to flash, haha.


----------



## cletus-cassidy

Apologies if I missed it in all the posts, but has anyone been able to flash either the Galax or EVGA BIOSes to the ASUS Dual 2080 Ti?


----------



## Zammin

skingun said:


> That block looks nice. I like the Aquacomputer and Watercool ones too.


Yeah, I saw a picture of the upcoming Aquacomputer one; it looks fantastic. I don't think there have been any pics of the Heatkiller blocks for the RTX cards yet, but if they look like last gen then that's awesome. Unfortunately both of those brands are very expensive to buy and ship to Australia, especially with GST added. There is one shop in Aus that sells a small range of Heatkiller, and they don't seem interested in bringing in the RTX blocks anytime soon, sadly.


----------



## GAN77

Zammin said:


> Yeah I saw a picture of the upcoming Aquacomputer one, it looks fantastic.


Where can I see the photo?


----------



## Pilz

carlhil2 said:


> Some of you guys are getting some high temps for water. My GPU was maxing out in benches at 44°C with a 280mm Monsta rad in its own loop until the pump died today. Integrated it with my CPU loop and it idles at 28°C; it never hit 40°C during 2 hrs. of playing IL-2 (WW2 flight sim)... It's this block, using the GPU backplate: https://modmymods.com/alphacool-eisblock-gpx-n-acetal-nvidia-geforce-rtx-2080ti-m01-11667.html


Mine stays very cool but let's talk about the important factors to consider when you say someone's loop is too hot.

Ambient temperature
Fan speeds

My ambient temperature is around 23°C and with my fans at 1200rpm I hover between 42-47°C as my room heats up. 

With fans at 2000rpm my temperatures don't crest 40°C even heavily overclocked. I messed around with Time Spy Extreme last night, but my CPU overclock was causing it to fail that part of the test. For the test below I didn't close any background programs and merely wanted to see if it could run without crashing the GPU side.

Memory +1000, i.e. 16 Gbps
Core +110 (peak was 2115mhz)

Graphics score of 7276 isn't great but I know it can do much better if I actually try. I'll run a +150 core test tonight if I can get it stable. 

Pic #2 is what I run while playing games (playing PUBG in the pic) currently, until I have more time to tweak the core offset. Temps are under 40°C gaming at 1200rpm fan speed and the same ambient temperature as listed above.
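For anyone double-checking the memory numbers above, here is a quick sketch of how an Afterburner-style offset maps to effective GDDR6 data rate and bandwidth. The 7000 MHz base clock and 352-bit bus are the 2080 Ti reference specs; the function names are made up for illustration:

```python
# Sketch: Afterburner-style memory offset -> effective GDDR6 data rate.
BASE_MHZ = 7000   # reference memory clock as Afterburner reports it
BUS_BITS = 352    # 2080 Ti memory bus width

def effective_gbps(offset_mhz):
    # GDDR6 is double data rate, so the effective rate is 2x the reported clock
    return (BASE_MHZ + offset_mhz) * 2 / 1000

def bandwidth_gbs(offset_mhz):
    # total bandwidth = data rate x bus width in bytes
    return effective_gbps(offset_mhz) * BUS_BITS / 8

print(effective_gbps(0))      # 14.0 Gbps stock
print(effective_gbps(1000))   # 16.0 Gbps with +1000
print(bandwidth_gbs(1000))    # 704.0 GB/s
```

So a +1000 offset takes the card from its stock 14 Gbps (616 GB/s) to an effective 16 Gbps (704 GB/s).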





Zammin said:


> I'm starting to think I might cancel the order on my EK block and go with this one: http://www.au.aquatuning.com/water-...xi-light-nvidia-geforce-rtx-2080ti-m01?c=7187
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> The backplate has been listed now as well: http://www.au.aquatuning.com/water-...ck-gpx-n-rtx-2080ti-acetal-plexi-light?c=7187
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> They reckon stock should be ready for delivery between the 15th and the 19th, so that's not too far away. Works out a fair bit cheaper than the Vector + Backplate too.


The EK one performs great but Alphacool also makes excellent stuff. I have their XT60 480mm radiators and D5 reservoir/pump stand for my EK pumps. Obviously between EK, Bitspower, Alphacool and Aquacomputer there's no wrong choice.


----------



## Zammin

GAN77 said:


> Where can I see the photo?


I hope Shoggy (the AC rep) doesn't mind me sharing this. He sent it to me in a PM recently when I asked about it. I don't think there are pictures online, but it looks wicked. I've attached them to this post.



Pilz said:


> Mine stays very cool but let's talk about the important factors to consider when you say someone's loop is too hot.
> 
> Ambient temperature
> Fan speeds
> 
> My ambient temperature is around 23°C and with my fans at 1200rpm I hover between 42-47°C as my room heats up.
> 
> With fans at 2000rpm my temperatures don't crest 40°C even heavily overclocked. I messed around with Time Spy Extreme last night, but my CPU overclock was causing it to fail that part of the test. For the test below I didn't close any background programs and merely wanted to see if it could run without crashing the GPU side.
> 
> Memory +1000, i.e. 16 Gbps
> Core +110 (peak was 2115mhz)
> 
> Graphics score of 7276 isn't great but I know it can do much better if I actually try. I'll run a +150 core test tonight if I can get it stable.
> 
> Pic #2 is what I run while playing games (playing PUBG in the pic) currently, until I have more time to tweak the core offset. Temps are under 40°C gaming at 1200rpm fan speed and the same ambient temperature as listed above.
> 
> 
> The EK one performs great but Alphacool also makes excellent stuff. I have their XT60 480mm radiators and D5 reservoir/pump stand for my EK pumps. Obviously between EK, Bitspower, Alphacool and Aquacomputer there's no wrong choice.


Wow! Those temperatures are really good. It seems like most of the people in this thread are seeing mid 40s on water, so under 40°C @ 1200RPM sounds pretty damn good. I expected this generation to run hotter than Pascal.

Yeah, I'm sure they're all good quality stuff. I'm only leaning toward Alphacool because it has more microchannels than the EK block (35 vs 29) and evenly distributed lighting, plus the price is really good and I can grab 5L of Aquacomputer coolant at the same time without paying extra for postage. I would still be happy with the EK block if I already had it, but since I have the chance to change my mind now I might go ahead with it.


----------



## tamas970

The MSI Trio seems to be the beefiest air-cooled 2080 Ti; is there any higher-power (380-400W) BIOS for it? Especially since its unique 8+8+6 power connectors could actually deliver that power...

I guess a BIOS written for the reference PCB doesn't work.


----------



## GAN77

Zammin said:


> I hope Shoggy (the AC rep) doesn't mind me sharing this. He sent it to me in a PM recently when I asked about it. I don't think there are pictures online, but it looks wicked. I've attached them to this post.


Looks great! Thank you!


----------



## iPDrop

I would like to be added to the list! I have a Gigabyte Windforce OC 11GB


----------



## Spaceace74

Anyone NVlink a Gigabyte 2080ti gaming OC and EVGA 2080ti XC Ultra? I know the EVGA is thicker and would likely go in slot 2 but I’m not sure about the height and if it’s possible.


----------



## tamas970

Spaceace74 said:


> Anyone NVlink a Gigabyte 2080ti gaming OC and EVGA 2080ti XC Ultra? I know the EVGA is thicker and would likely go in slot 2 but I’m not sure about the height and if it’s possible.


Does SLI make any sense nowadays?


----------



## nycgtr

Zammin said:


> I hope Shoggy (the AC rep) doesn't mind me sharing this. He sent it to me in a PM recently when I asked about it. I don't think there are pictures online, but it looks wicked. I've attached them to this post.
> 
> 
> 
> Wow! Those temperatures are really good, seems like most of the people in this thread are seeing mid 40s on water so under 40c @ 1200RPM sounds pretty damn good. I expected this generation to run hotter than pascal.
> 
> Yeah I'm sure they're all good quality stuff, I'm only leaning toward alphacool as it has more microchannels than the EK block (35 vs 29) and evenly distributed lighting, plus the price is really good and I can grab 5L of aquacomputer coolant at the same time without paying extra for postage. I would still be happy with the EK block if I already had it, but since I have the chance to change my mind now I might go ahead with it.


OMG TAKE MY MONEY!!!!!


----------



## mouacyk

Yeah, I'll buy a 2080 TI just for that block. How would one go about installing the backplate anyway, with that heatpipe attachment?


----------



## GraphicsWhore

tamas970 said:


> Does SLI make any sense nowadays?


Plenty of people are still doing it, but SLI'ing two different air-cooled cards sounds miserable. I know two of the same card can have different performance, but then you also have different aesthetics and noise levels. F that.


----------



## nycgtr

Unless you have a 4K 144Hz display and are ready to pray for devs to care, there's no point.


----------



## wuannai

nerfon said:


> Has anyone tried flashing the old Nvidia 2080 Ti FE BIOS https://www.techpowerup.com/vgabios/203753/nvidia-rtx2080ti-11264-180829 over the locked retail one?


Yes, I tried, and it flashes!
I tried it hoping to be able to flash the Galax BIOS afterwards, but that didn't work.

Perhaps if we get a review sample BIOS and it flashes, we can later flash the Galax one?... idk


----------



## Nizzen

Got my delivery in Norway


----------



## OBR

Hulk1988 said:


> Wow! My friend tested a 2080 TI FE which he got 2 weeks ago from a company for Review. BIOS flash works. Not a nice move NVidia. I think that is the reason for the delay.


It's strange, because I have a sample from Nvidia for reviews and flashing isn't possible.


----------



## CallsignVega

tamas970 said:


> The MSI Trio seems to be the beefiest air-cooled 2080Ti, is there any higher power (380-400W) bios for it? Especially that the unique 8+8+6 power connector could even deliver that power...
> 
> I guess the bios written for the reference PCB doesn't work.


> No, the Trio has actually done pretty poorly in testing. Temps weren't that good, and the PCB review found it is just a reference PCB with an extra 6-pin soldered on.
> 
> IMO the FTW3 is easily the best air-cooled card.


----------



## tamas970

Nizzen said:


> Got my delivery in Norway






 

Looking forward to the Trio vs AMP hardcore results! Should be interesting:
The MSI is bottlenecked by the 330W power limit (no 380-400W BIOS AFAIK); the Zotac by its 2x 8-pin power. As of now, there is no "perfect" card on the market (strong PCB, 2x8+6-pin power, 380-400W BIOS, good cooling), but the Trio X is only a BIOS version away...



CallsignVega said:


> No the Trio has actually done pretty poorly in testing. Temps weren't that good and the PCB review found out it is just a reference PCB with an extra 6-pin soldered on.
> 
> IMO FTW3 is easily the best air cooled card.


True, GPU temps are less than stellar compared to e.g. a Strix. BUT! Take the actual frequencies/power limits into account: the Trio runs at 300W by default, while most cards set the default power target to 260W. The Trio is AFAIK the heaviest card though, so the non-stellar performance may just come down to the thermal paste. The "fake" 6-pin is a bit of a disappointment; however, it could still work fine. I don't know if more than 300W could be delivered through a standard 2x 8-pin (with no limitations on the PCB or in the reference BIOS).

What I like on the Trio are the realistic VRM temps. The thermal imaging test here shows the VRMs at around 80°C:

http://livedoor.blogimg.jp/wisteriear/imgs/9/0/90849266.jpg


----------



## larrydavid

CallsignVega said:


> No the Trio has actually done pretty poorly in testing. Temps weren't that good and the PCB review found out it is just a reference PCB with an extra 6-pin soldered on.
> 
> IMO FTW3 is easily the best air cooled card.


Outside of the FTW3, I believe the Trio has the highest out-of-the-box power limit (300W), and it has the highest factory overclock, so unless temperature testing accounts for that, it's not an apples-to-apples comparison. The extra 6-pin isn't useful, but it does have one extra power phase compared to the reference design, IIRC.

Are there any FTW3 reviews out there that cover the cooling system?


----------



## larrydavid

tamas970 said:


> https://www.youtube.com/watch?v=TfWzqWpr92c
> 
> Looking forward to the Trio vs AMP hardcore results! Should be interesting:
> The MSI is bottlenecked by the 330W power limit (no 380-400W BIOS AFAIK), the Zotac is by the 2x8pin power.


You can flash the Galax 380W bios on the MSI. It's already been done.

I haven't been able to find another MSI Trio. Newegg even pulled the listing, so I have a Zotac on the way tomorrow.


----------



## Belmire

2080 Ti SLI FE here. Temp of the top card is 44°C at stock, because the card(s) seem to idle at about 1200MHz. High pixel clock for 1440p 144Hz + 900p 60Hz? These things need water... darn.


----------



## tamas970

larrydavid said:


> You can flash the Galax 380W bios on the MSI. It's already been done.
> 
> I haven't been able to find another MSI Trio. Newegg even pulled the listing, so I have a Zotac on the way tomorrow.


Found the post, thanks! Doesn't seem perfect though. Unfortunately there aren't many Trio users to provide evidence of whether the additional 6-pin/superb cooling can bring the card any further with an unlocked BIOS.

BTW, is there a way to deliver more than 300W on 2x 8-pins? E.g. boosting a Sea Hawk or an EVGA FTW3?


----------



## CallsignVega

tamas970 said:


> https://www.youtube.com/watch?v=TfWzqWpr92c
> 
> Looking forward to the Trio vs AMP hardcore results! Should be interesting:
> The MSI is bottlenecked by the 330W power limit (no 380-400W BIOS AFAIK), the Zotac is by the 2x8pin power. As of now, there is no "perfect" card on the market (strong PCB, 2x8+6pin power, 380-400W BIOS, good cooling) but the Trio X is only a BIOS version away...
> 
> 
> True, GPU-temps are less than stellar compared to e.g. a Strix. BUT! Put the actual frequencies/power limits into account! The Trio runs at 300W by default, while most cards set the default power target to 260W. The trio is AFAIK the heaviest card though, non-stellar performance may just come from the grease? The "fake" 6pin is a bit of disappointment, however, it could still work fine. I don't know if more than 300W power could be delivered trough a standard 2x8pin (no limitations on the PCB or in the reference BIOS).
> 
> What I like in the Trio are the realistic VRM-temps. Thermal imaging test here shows the VRM-s around 80°C :
> 
> http://livedoor.blogimg.jp/wisteriear/imgs/9/0/90849266.jpg





larrydavid said:


> Outside of the FTW3, I believe the Trio has the highest out of the box power limit(300W), and it has the highest factory overclock, so unless temperature testing is accounting for that, it's not an apples to apples comparison. The extra 6-pin isn't useful, but it does have one extra power phase compared to the reference design IIRC.
> 
> Are there any FTW3 reviews out there that cover the cooling system?


Maybe the grease or contact patch on the TechPowerUp MSI Trio was bad. But the temps/performance were fairly terrible considering it is a custom PCB and has a monster cooler on it.

Stock, 2x 8-pin (150W + 150W) plus PCI-E slot power (75W) is 375W. That is why the FTW3 comes with a 373W factory BIOS: it is maxing out the power delivery within design specs.

Plus 3 extra power phases on the FTW3; the ICX2 cooler looks like the superior design this series:

https://www.evga.com/technology/icx2/

But yes of course we need some real world testing!
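The in-spec connector budget math above can be sanity-checked with a few lines. This is a rough sketch using the PCIe spec limits (75W from the slot, 75W per 6-pin, 150W per 8-pin); `board_budget` is a made-up helper for illustration:

```python
# Rough sketch of in-spec board power budgets.
# PCIe spec limits: 75W slot, 75W per 6-pin, 150W per 8-pin.
CONNECTOR_W = {"slot": 75, "6pin": 75, "8pin": 150}

def board_budget(*aux_connectors):
    """Slot power plus whatever auxiliary connectors the card has."""
    return CONNECTOR_W["slot"] + sum(CONNECTOR_W[c] for c in aux_connectors)

print(board_budget("8pin", "8pin"))          # 375 -> why a 2x 8-pin card tops out near 373W
print(board_budget("8pin", "8pin", "6pin"))  # 450 -> headroom for a 380-400W BIOS on 8+8+6
```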


----------



## ocvn

larrydavid said:


> Outside of the FTW3, I believe the Trio has the highest out of the box power limit(300W), and it has the highest factory overclock, so unless temperature testing is accounting for that, it's not an apples to apples comparison. The extra 6-pin isn't useful, but it does have one extra power phase compared to the reference design IIRC.
> 
> Are there any FTW3 reviews out there that cover the cooling system?


The Giga OC F3 BIOS is 300W, with a 122% power limit, so up to 366W. I tried both of my Trio X cards with the Giga 366W and the Galax 380W, and I feel the Gigabyte BIOS is better than the Galax in terms of stability. Both my cards under water hit 47-49°C with a Furmark load, 28°C ambient.
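For reference, the slider-percentage-to-watts math behind numbers like these is just (a trivial sketch; `max_power_w` is a made-up helper):

```python
# Sketch: power-limit slider percent -> watts.
def max_power_w(default_w, limit_pct):
    return default_w * limit_pct / 100

print(max_power_w(300, 122))  # 366.0 -> a 300W default BIOS at its 122% cap
print(max_power_w(250, 150))  # 375.0 -> e.g. a 250W default card at 150%
```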


----------



## nycgtr

CallsignVega said:


> No the Trio has actually done pretty poorly in testing. Temps weren't that good and the PCB review found out it is just a reference PCB with an extra 6-pin soldered on.
> 
> IMO FTW3 is easily the best air cooled card.


As someone who's gone through 8 Tis now, of 3-4 different models, I can tell you the Trio stays the coolest.

It only has one extra phase versus reference, but considering preorder pricing was similar to out-of-the-box cards, it's probably the best bet for now. The FTW3 will be overbuilt, with a 150 markup for a PCB you won't be able to gain any benefit from on Turing at ambient. I don't see much value in the FTW3, TBH. Unless they allow for a much higher power limit than 380W and you can keep it under 50°C, it doesn't seem worth the premium.


----------



## ocvn

CallsignVega said:


> Maybe the grease or contact patch on the Techpowerup MSI Trio were bad. But the temps/performance were fairly terrible considering it is a custom PCB and has a monster cooler on it.
> 
> Stock, 2x 8-pin (150+150) and PCI-E power (75w) are 375w. That is why the FTW3 comes with a 373w factory BIOS. It is maxing out the power delivery within design specs.
> 
> Plus 3 extra power phases on the FTW3, the ICX2 cooler looks the superior design this series:
> 
> https://www.evga.com/technology/icx2/
> 
> But yes of course we need some real world testing!


Do you have the link for the EVGA FTW3 373W BIOS? I can't find it here or in the TechPowerUp database.


----------



## skingun

The Aquacomputer blocks are posted publicly on the German forum.

Watercool pics went up today on their website. Copper version only; I don't think the Ni version has left production yet.


----------



## GAN77

skingun said:


> The Aquacomputer blocks are posted publicly on the German forum.


Watercool)


----------



## Nizzen

Looks like 2x Zotac 2080ti works on air:
Timespy


----------



## skingun

GAN77 said:


> skingun said:
> 
> 
> 
> The Aquacomputer blocks are posted publicly on the German forum.
> 
> 
> 
> Watercool)

Did I miss something here?


----------



## GAN77

skingun said:


> Did I miss something here?


http://shop.watercool.de/epages/WatercooleK.sf/en_GB/?ObjectPath=/Shops/WatercooleK/Products/15620

https://shop.aquacomputer.de/index.php?cPath=7_11_149_151

Watercool and Aquacomputer are different companies.


----------



## skingun

GAN77 said:


> skingun said:
> 
> 
> 
> Did I miss something here?
> 
> 
> 
> http://shop.watercool.de/epages/WatercooleK.sf/en_GB/?ObjectPath=/Shops/WatercooleK/Products/15620
> 
> https://shop.aquacomputer.de/index.php?cPath=7_11_149_151

I don't understand you.


----------



## CallsignVega

ocvn said:


> do you have the link for the evga FTW3 373w bios? i can't find it here or techpowerup database


No I don't think it will be out until the cards ship.


----------



## nycgtr

skingun said:


> The Aquacomputer blocks are posted publicly on the German forum.
> 
> Watercool pics went up today on their website. Copper version only. I don't think Ni version has left production yet.


Big fan of Watercool, but I can't unsee it. I know they used a similar design before, but yeah, I can't unsee it.


----------



## HeadlessKnight

I have one DP that can't go over 30 Hz even at 1080p. Is it a borked port? 
For those interested it is the one on the far left. Driver 416.16.


----------



## Spacewide

I'm just posting here to know what it feels like.


----------



## Talon2016

Returned my Asus Dual OC and ordered a Gigabyte Gaming OC RTX 2080 Ti last week; got it yesterday. Far, far better thermals. This card is great. +1000MHz memory, and it boosts around 2055-2070MHz with +100MHz on the OC vBIOS. Tops out at 61°C in a Time Spy or Superposition run.


----------



## Okt00

Nizzen said:


> Looks like 2x Zotac 2080ti works on air:
> Timespy



Nice!


----------



## dVeLoPe

Is there anyone doing testing on https://www.evga.com/products/product.aspx?pn=11G-P4-2282-KR ?

How does it compare to the top cards? Can it be BIOS flashed as well? What are the cons of this exact card?


----------



## tamas970

ocvn said:


> do you have the link for the evga FTW3 373w bios? i can't find it here or techpowerup database


That will be an interesting one, for sure. The card has a 250W default power target; 150% would be way above anything we have seen so far.


----------



## Zammin

nycgtr said:


> OMG TAKE MY MONEY!!!!!


It looks pretty damn nice huh? They are hoping to have it finalised some time next month. No idea on price but he said it would be pretty expensive. 



mouacyk said:


> Yeah, I'll buy a 2080 TI just for that block. How would one go about installing the backplate anyway, with that heatpipe attachment?


I'm not sure to be honest, but it looks like they had that active backplate option on their last gen blocks as well, so you might be able to check how it works on them.


----------



## R1amddude

Well guys, looks like I got a dud. After reinstalling the drivers 5 times and Fallout 4 constantly crashing 5 min into gameplay, I am returning my $1200 paperweight back to Nvidia. They can keep it. What a horrible, buggy mess of a launch for a graphics card. I guess this is the way we the consumers are meant to be played...


----------



## GraphicsWhore

R1amddude said:


> Well guys looks like I got a dud. After reinstalling the drivers 5 times and Fallout 4 crashing 5 min into gameplay constantly, I am returning my $1200 paper weight back to Nvidia. They can keep it. What a horrible buggy mess launch for a graphics card, I guess this is the way we the consumer are ment to be played...


After installing my EVGA XC and updating to Windows 1809, all my games crashed immediately. No driver version or any other fix worked, but after installing Windows on a spare SSD to confirm the problem, it was clearly the upgrade to 1809 on my main drive, as everything worked fine on the new install. You sure it's the card?


----------



## R1amddude

GraphicsWhore said:


> After installing my EVGA XC and updating to Windows 1809 all my games crashed immediately. No driver version or any other fix worked, but after installing Windows on a spare SSD to confirm the problem it was clearly the upgrade to 1809 on my main drive, as everything worked fine on the new install. You sure it’s the card?



I put my geforce 960 back in my rig and all the problems went away, so yeah.


----------



## GraphicsWhore

R1amddude said:


> GraphicsWhore said:
> 
> 
> 
> After installing my EVGA XC and updating to Windows 1809 all my games crashed immediately. No driver version or any other fixes worked but after installed Windows on a spare SSD to confirm the problem it was clearly the upgrade to 1809 on my main drive as everything worked fine on the new install. You sure it’s the card?
> 
> 
> 
> I put my geforce 960 back in my rig and all the problems went away, so yeah.

Honestly I wouldn’t count on it. 1809 installs support for DXR and two of my errors mentioned something with DirectX. If I had put my 1080Ti back in it’s quite possible things would have worked. There’s the combination of drivers and the DirectX runtime and all sorts of software-based stuff. I assume you’re on build 1809? If so I would strongly recommend you try a clean install to be sure. Microsoft is incompetent and Windows sucks and I will suspect that stupid OS before anything else anytime there’s a problem. Unless your card is smoking or is like 100 degrees to the touch, I would err on the side of it being software rather than hardware.

People have gotten some disappointing overclockers but I’ve read I think every post in this thread since we started getting our cards and not a single other person received just a straight brick.


----------



## nycgtr

Zammin said:


> It looks pretty damn nice huh? They are hoping to have it finalised some time next month. No idea on price but he said it would be pretty expensive.
> 
> 
> 
> I'm not sure to be honest, but it looks like they had that active backplate option on their last gen blocks as well, so you might be able to check how it works on them.


Well the terminal is an option, that's for sure. I don't really care for either of those. It will probably fall in line with their regular block and backplate pricing, which tbh is similar to EK and yet 1000x better in build quality. 



GraphicsWhore said:


> Honestly I wouldn’t count on it. 1809 installs support for DXR and two of my errors mentioned something with directx. If i had put my 1080Ti back in it’s quite possible things would have worked. There’s the combination of drivers and directx runtime and all sorts of software-based stuff. I assume you’re on build 1809? If so I would strongly recommend you try a clean install to be sure. Microsoft is incompetent and Windows sucks and I will suspect that stupid OS before anything else anytime there’s a problem. Unless your card is smoking or is like 100 degrees to the touch, I would err on the side of it being software than hardware.
> 
> People have gotten some disappointing overclockers but I’ve read I think every post in this thread since we started getting our cards and not a single other person received just a straight brick.


Think reviewers got cherry picked tbh. Most of these cards are in the high 1900s or low 2000s without a BIOS flash.


----------



## Pilz

Zammin said:


> Pilz said:
> 
> 
> 
> Mine stays very cool but let's talk about the important factors to consider when you say someone's loop is too hot.
> 
> Ambient temperature
> Fan speeds
> 
> My ambient temperature is around 23°C and with my fans at 1200rpm I hover between 42-47°C as my room heats up.
> 
> With fans at 2000rpm my temperatures don't crest 40°C even heavily overclocked. I messed around with Time Spy Extreme last night but my CPU overclock was causing it to fail that part of the test. For the below test I didn't close any background programs and merely wanted to see if it could run without crashing the GPU side.
> 
> Memory +1000 i.e. 16Gbps effective
> Core +110 (peak was 2115mhz)
> 
> Graphics score of 7276 isn't great but I know it can do much better if I actually try. I'll run a +150 core test tonight if I can get it stable.
> 
> Pic #2 is what I run while playing games (playing PUBG in the pic) currently until I have more time to tweak the core offset. Temps are under 40°C gaming at 1200rpm fan speed and the same ambient temperature as listed above.
> 
> 
> The EK one performs great but Alphacool also makes excellent stuff. I have their XT60 480mm radiators and D5 reservoir/pump stand for my EK pumps. Obviously between EK, Bitspower, Alphacool and Aquacomputer there's no wrong choice.
> 
> 
> 
> Wow! Those temperatures are really good. Seems like most of the people in this thread are seeing mid 40s on water, so under 40°C @ 1200RPM sounds pretty damn good. I expected this generation to run hotter than Pascal.
> 
> Yeah I'm sure they're all good quality stuff, I'm only leaning toward alphacool as it has more microchannels than the EK block (35 vs 29) and evenly distributed lighting, plus the price is really good and I can grab 5L of aquacomputer coolant at the same time without paying extra for postage. I would still be happy with the EK block if I already had it, but since I have the chance to change my mind now I might go ahead with it.

I'll gladly buy an Alphacool one if there's an option for a non-black backplate when they launch. Alphacool stuff is excellent quality and that block is very good looking! Assuming I get the Alphacool block my EK will be up for grabs at a reasonable price if anyone's interested 🙂

Those temps were at max fan speeds with my ML120's and Alphacool 480mm radiators for fun. Normal ambient temperature is 23°C and the GPU sits around 44-47°C at 1200rpm. When the fans run at 2100rpm my GPU drops into the high 30's as you can see in that benchmark. Cooling isn't an issue with a proper set up even overclocked like my card. 



nycgtr said:


> Well the terminal is an option, that's for sure. I don't really care for either of those. It will probably fall in line with their regular block and backplate pricing, which tbh is similar to EK and yet 1000x better in build quality.
> 
> Think reviewers got cherry picked tbh. Most of these cards are in the high 1900s or low 2000s without a BIOS flash.

I doubt it, mine runs stable at 2100MHz as the minimum and can reach some sort of stability at 2130MHz, but I haven't played with it enough to really see. I'm sure I could push the stability envelope further given enough time. Once I get things running how I want I'll post new results. I might add in a few runs in the meantime just to see how hard I can run things.


----------



## Zammin

nycgtr said:


> Well the terminal is an option that's for sure. I don't really care for that or that. If will probably fall in the line with their regular block and back pricing which tbh is similar to EK and yet 1000x better in build quality.


Shoggy said it would be more expensive than their last gen stuff, but no doubt the quality will be exceptional.



Pilz said:


> I'll gladly buy an Alphacool one if there's an option for a non-black backplate when they launch. Alphacool stuff is excellent quality and that block is very good looking! Assuming I get the Alphacool block my EK will be up for grabs at a reasonable price if anyone's interested 🙂
> 
> Those temps were at max fan speeds with my ML120's and Alphacool 480mm radiators for fun. Normal ambient temperature is 23°C and the GPU sits around 44-47°C at 1200rpm. When the fans run at 2100rpm my GPU drops into the high 30's as you can see in that benchmark. Cooling isn't an issue with a proper set up even overclocked like my card.


They are available for order now but I haven't seen any back plates other than the black one. The backplates have only just been listed. I placed my order this morning for the block, back plate and 5L of aquacomputer coolant. I'm looking forward to seeing how my temps turn out, I'm also using ML120 Pro fans but two 360 radiators (one slim and one 60mm thick) in a Lian Li PC-O11 Dynamic.


----------



## dante`afk

how's the zotac amp doing there? anyone running around with it with the 380w bios? results?

buy yes/no? wait?


----------



## gavros777

I was using this displayport adapter with my titan x maxwell
https://www.amazon.com/gp/product/B01AYQ5APE/ref=oh_aui_search_detailpage?ie=UTF8&psc=1
and after upgrading to a 2080 Ti the screen looks horrible, like it's full of artifacts.

When I plug my TV/monitor directly into the HDMI port it works fine.

Is something wrong with my card or do I need to find a DP 1.4 adapter?


----------



## nycgtr

Anyone else having issues with the display coming back from sleep with the 380watt bios?


----------



## GraphicsWhore

dante`afk said:


> how's the zotac amp doing there? anyone running around with it with the 380w bios? results?
> 
> buy yes/no? wait?


Wait for what? Even with that BIOS the real world performance increase is negligible. Unless you’re on a custom loop holding out for a factory-blocked crazy card you might as well just go for it.


----------



## Pilz

Zammin said:


> nycgtr said:
> 
> 
> 
> Well the terminal is an option that's for sure. I don't really care for that or that. If will probably fall in the line with their regular block and back pricing which tbh is similar to EK and yet 1000x better in build quality.
> 
> 
> 
> Shoggy said it would be more expensive than their last gen stuff, but no doubt the quality will be exceptional.
> 
> 
> 
> Pilz said:
> 
> 
> 
> I'll gladly buy an Alphacool one if there's an option for a non-black backplate when they launch. Alphacool stuff is excellent quality and that block is very good looking! Assuming I get the Alphacool block my EK will be up for grabs at a reasonable price if anyone's interested 🙂
> 
> Those temps were at max fan speeds with my ML120's and Alphacool 480mm radiators for fun. Normal ambient temperature is 23°C and the GPU sits around 44-47°C at 1200rpm. When the fans run at 2100rpm my GPU drops into the high 30's as you can see in that benchmark. Cooling isn't an issue with a proper set up even overclocked like my card.
> 
> 
> They are available for order now but I haven't seen any back plates other than the black one. The backplates have only just been listed. I placed my order this morning for the block, back plate and 5L of aquacomputer coolant. I'm looking forward to seeing how my temps turn out, I'm also using ML120 Pro fans but two 360 radiators (one slim and one 60mm thick) in a Lian Li PC-O11 Dynamic.

Where did you buy it from?


----------



## Zammin

Pilz said:


> Where did you buy it from?


Aquatuning AU (the Australian site for Alphacool's UK outlet). You can also buy directly from Alphacool's website or ModMyMods in the US (they apparently have stock of the plexi and acetal versions but not the backplates).

Some factory backplates can be used; there's a guy in this thread who is using the acetal version on his Zotac card with the Zotac back plate. The Alphacool block apparently uses M3 screws though, so you may have to buy extra screws if using the factory backplate from an AIB card, which usually use M2.5.


----------



## Pilz

Zammin said:


> Pilz said:
> 
> 
> 
> Where did you buy it from?
> 
> 
> 
> Aquatuning AU (the Australian site for Alphacools UK outlet). You can also buy directly from Alphacools website or ModmyMods in the US (they apparently have stock of the plexi and acetal versions but not the backplates).
> 
> Some factory backplates can be used, theres a guy in this thread who is using the acetal version on his Zotac card with the Zotac back plate. The Alphacool block apparently uses M3 screws though so you may have to buy extra screws if using the factory backplates from AIB cards which usually use M2.5.

ModMyMods only carries the ICE block, not the awesome looking Kryographics one. Which version did you get?


----------



## Glerox

R1amddude said:


> Well guys looks like I got a dud. After reinstalling the drivers 5 times and Fallout 4 crashing 5 min into gameplay constantly, I am returning my $1200 paper weight back to Nvidia. They can keep it. What a horrible buggy mess launch for a graphics card, I guess this is the way we the consumer are ment to be played...


Damn sorry for you :S You convinced me to try the gpu before putting a block on it!

Is anyone satisfied with his FE yet? I just read about bad OC, coil whine and now crashes... 

mine is still in a sealed box, should I open it lol.


----------



## R1amddude

Glerox said:


> Damn sorry for you :S You convinced me to try the gpu before putting a block on it!
> 
> Is anyone satisfied with his FE yet? I just read about bad OC, coil whine and now crashes...
> 
> mine is still in a sealed box, should I open it lol.



Alright guys, update. Right after I flipped my **** and requested a return to Nvidia I googled "fallout 4 crash 2080ti". Apparently a bunch of users on a forum reported the same issue on GeForce 2080s as well, not just the 2080 Ti, and lo and behold, you have to turn off weapon debris. Apparently having it on at all will cause the game to crash any time you play. Turned it off and now it's working fine. I guess I jumped the gun and might be keeping it. I'm sure Nvidia will hopefully update the drivers for DirectX issues (like Mass Effect Andromeda for me).


----------



## Zammin

Pilz said:


> ModMyMods only carries the ICE block, not the awesome looking Kryographics one. Which version did you get?


Yeah, I said I bought the Alphacool one, not the Aquacomputer Kryographics. Different brand. I only got sent the images of the Kryographics block from the Aquacomputer rep here on OCN. It's not available for purchase yet; might be available in November.

I bought the Alphacool GPX-N Plexi/Nickel w/back plate.


----------



## Pilz

Zammin said:


> Pilz said:
> 
> 
> 
> ModMyMods only carries the ICE block, not the awesome looking Kryographics one. Which version did you get?
> 
> 
> 
> Yeah I said I bought the Alphacool one, not the Aquacomputer Kryographics. Different brand. I only got sent the images of the Kryographics block from the AquaComputer rep here on OCN. It's not available for purchase yet, might be available in November.
> 
> I bought the Alphacool GPX-N Plexi/Nickel w/back plate.

Sorry, I had them confused for some reason. The Aquacomputer one is what I wanted. The heatpipe setup is certainly interesting, as is the display; not that I need either, it's more of wanting one for the looks. My EKWB Vector runs perfectly fine and cool under load. Hopefully Aquacomputer launches theirs soon. I've never had a block from them, only an Aquaero and related items.


----------



## Zammin

Pilz said:


> Sorry I had them confused for some reason. The Aqua computer one is what I wanted. The heatpipe set up is certainly interesting as is the display not that I need either it's more of wanting one for the looks. My EKWB Vector runs perfectly fine and cool under load. Hopefully Aquacomputer launches theirs soon, I've never had a block from them only an Aquaero and related items


All good man! I have no doubt the EK block performs great.

That back plate for the Kryographics block looks really cool. I had a look at some testing of older Kryographics blocks with the active back plate and it seemed to reduce the core temperature by 1-2°C, which is pretty impressive from just a back plate.


----------



## Esenel

Glerox said:


> Is anyone satisfied with his FE yet? I just read about bad OC, coil whine and now crashes...
> 
> mine is still in a sealed box, should I open it lol.


Founders is great, as long as you are gonna block it.
Bad OC? Not at all.
The 380W BIOS is more of a placebo.
You can get up to 3 or maybe 4% more performance for 20% higher power consumption and damaging your PCIe slot. Yeah, great deal 😄

TimeSpy graphics 16459.
On a lousy Founders. Not modded at all.

https://www.3dmark.com/spy/4667323
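Just for perspective, the perf-per-watt arithmetic on the numbers quoted above works out pretty badly. A rough sketch (the +4% performance and +20% power figures are the ones from this post; performance is normalized to 100):

```python
# Rough perf-per-watt check on the quoted 380W BIOS numbers.
baseline_perf = 100.0   # normalized stock performance
baseline_power = 250.0  # W, stock power target

oc_perf = baseline_perf * 1.04    # +4% performance, best case
oc_power = baseline_power * 1.20  # +20% power consumption

efficiency_change = (oc_perf / oc_power) / (baseline_perf / baseline_power) - 1
print(f"{efficiency_change * 100:.1f}%")  # -13.3% perf-per-watt
```

So roughly 13% worse efficiency for a few percent more frames.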


----------



## mbhtzor

Can confirm the Asus 2080 Ti Dual Advanced works fine with the Gigabyte BIOS. The cooling, however, SUCKS! Will probably block the card this weekend.


Had some crazy FPS drops in Forza Horizon 4. Rebooted the computer and got a BIOS error due to CPU temperature. The Ryzen 2700X was at 94 degrees :thumb: Some gunk in my watercooling loop was restricting the flow, so I need to take care of that as well. Luckily not the card's fault.


----------



## cx-ray

R1amddude said:


> Im sure Nvidia will hopefully update the drivers for direct x issues ( like Mass Effect Andromedia for me )


I have more than 760 hours of multiplayer gameplay in that game and have logged over 10 hours with 2080 Ti SLI now. I haven't noticed any new bugs compared to Titan SLI and Titan X Pascal SLI, or any of those cards in single GPU mode for that matter.

However, I've found that the game is quite sensitive to small system instabilities due to overclocking errors or hardware defects.


----------



## Nephalem89

Good morning! 

I currently own a Gigabyte 2080 Ti Gaming OC. I'm looking for a water block, any recommendations? Thanks a lot!


----------



## Zammin

Nephalem89 said:


> Good morning!
> 
> I currently own a Gigabyte 2080 Ti Gaming OC. I'm looking for a water block, any recommendations? Thanks a lot!


You can technically use any water block made for the 2080 Ti reference PCB, but some, like the EKWB Vector, will have a clearance issue with the fan connector at the end of the PCB on that specific card. The workaround is to pull the plastic off the connector and bend the pins. If you don't want to do that, the Phanteks block says it's compatible with your card: http://www.phanteks.com/PH-GB2080TiFE.html

I have a Phanteks block on my current 1080Ti and it doesn't go all the way to the end of the PCB, this is probably how it avoids contact with that connector. Not sure what other brands of blocks will fit without the above workaround though.


----------



## AngryLobster

Is this the official 2080 Ti water cooling club?

Anyway, I've found these cards undervolt like crazy compared to Pascal. I'm also really happy with performance at 4K. Reviews only test Ultra settings, but knocking a few down to High lets the card achieve 80+ FPS in a lot of games.


----------



## tamas970

Any results with the Zotac AMP! with the 380W BIOS? Temperatures, clocks, any bugs after the BIOS flash? I'd be interested in the VRM/VRAM temps too, in case someone has an IR thermometer...


----------



## Foxrun

I'm rocking 160 on core and 8100 on memory with the FE (to give more hope to those who have one). 160 on core translates to 2115-2130MHz; I can't push it further without AC Odyssey crashing, though I might be able to push the mem further. I am under water though.


----------



## profundido

Just a quick, completely off-topic question. Now that we know that the only thing that can drive existing games (not taking advantage of RTX) really higher is just more of the same old stuff, why doesn't Nvidia just make a new "Titan Extreme *** edition" card with a PCB that is like double the size (let's say half higher and half longer)?

I mean, I'm not familiar enough with the limitations and production process, but what exactly stops them from just cramming more onto 1 PCIe slot? Where's the limitation exactly?


----------



## Pilz

Zammin said:


> Pilz said:
> 
> 
> 
> Sorry I had them confused for some reason. The Aqua computer one is what I wanted. The heatpipe set up is certainly interesting as is the display not that I need either it's more of wanting one for the looks. My EKWB Vector runs perfectly fine and cool under load. Hopefully Aquacomputer launches theirs soon, I've never had a block from them only an Aquaero and related items
> 
> 
> 
> All good man! I have no doubt the EK block performs great.
> 
> That back plate for the Kryographics block looks really cool. I had a look at some testing of older Kryographics blocks with the active back plate and it seemed to reduce the core temperature by 1-2C which is pretty impressive from just a back plate.

The look of EK's block could be better, which is mainly why I want another style now that I've seen the Kryographics one. 1-2°C is a big improvement, not that I really need lower temps. My CPU on the other hand has probably seen better days. X99 has never been known for good overclocking and the high Vcore needed for 4.4GHz+ doesn't help. I'm replacing that dinosaur with an i9 9900K and an Asus Maximus XI Extreme. 

Let us know how the Alphacool performs when you get it. I'm always curious to see if there's a big difference between high end blocks (there really shouldn't be). I think my temps will drop more once I remove the unnecessary 3.5in bays in my 1000D. They're inhibiting air flow quite a lot thanks to Corsair's poor design.


----------



## Pilz

Foxrun said:


> Im rocking 160 on core and 8100 on memory with the FE (to give more hope to those who have one). 160 on core translates to 2115-2130; I cant push it further without AC Odyssey crashing, though I might be able to push the mem further. I am under water though.


That's odd; with 100 on the core mine hits 2100MHz, 110 gives me 2130MHz and so on. I run into the power/voltage limit fairly fast once I go with higher offsets on the core. I've run a +150 offset which gave me 2205MHz but that crashed pretty fast. I wonder if mine is perhaps binned better?


----------



## Edge0fsanity

Esenel said:


> Founders is great. As long as you are gonna block it.
> Bad OC? Not at all.
> The 380 W bios is more a Placebo.
> You can get up to 3 or maybe 4% of performance for 20% higher power consumption and damaging your PCI-E slot. Yeah great deal 😄
> 
> TimeSpy graphics 16459.
> On a lousy Founders. Not modded at all.
> 
> https://www.3dmark.com/spy/4667323


I don't think there is anything wrong with the FE either, mine was just all around garbage with terrible coil whine so it had to get returned. Would have kept it and blocked it if it didn't have the coil whine.

OC results are entirely silicon lottery and you'll get the full range of garbage to golden chips with the FE.

380W bios is mostly placebo for games, matters to benchmarkers.

The 380W BIOS isn't going to damage your PCIe slot if you have a good mobo and PSU, which you should if you're buying a $1200-1300 card. 8-pin PCIe connectors are rated at 150W each and the PCIe slot is 75W, for a total of 375W. Keep in mind that BIOS comes from a manufacturer for their cards to run on air. I doubt they would release something that is going to damage your equipment. You'll see more high wattage BIOSes when the custom PCB cards are out, such as the FTW3 with its 373W BIOS.
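The budget math here can be sanity-checked in a few lines (spec values as quoted: 75W from the slot, 150W per 8-pin; the 380W figure is the flashed BIOS limit):

```python
# PCIe power budget for a reference dual 8-pin 2080 Ti.
slot_w = 75        # PCIe x16 slot, per spec
eight_pin_w = 150  # per 8-pin PCIe connector, per spec
budget = slot_w + 2 * eight_pin_w
print(budget)        # 375
print(380 - budget)  # 5 -> the 380W BIOS nominally exceeds the spec budget by 5W
```

Note this is the nominal budget; connectors and slots usually have some headroom above these ratings, which is the point being argued here.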

The Time Spy score on my FE was ~15800 with the best OC I could get on air. That's what a below average chip gets you: about a 4% difference, and I'm betting if I had put it on water I could bring that difference down to 1-2%.

I think the only benefit to running the 380W BIOS is that it gets you a slightly more stable framerate in games, since the clock isn't jumping +/- 50MHz all the time.


----------



## Foxrun

Pilz said:


> That's odd, with 100 on the core mine hits 2100mhz, 110 gives me 2130mhz and so on. I run into the power/voltage limit fairly fast once I go with higher offsets to the core. I've run a +150 offset which gave me 2205mhz but that crashed pretty fast. I wonder if mine is perhaps binned better?


I'm not sure. The other one that I am returning only hit 2045-55 with the same core settings. The only limiter that I hit is voltage. How high of a core clock can you sustain (i.e. 2100)?


----------



## Nephalem89

Zammin said:


> You can technically use any water block for a 2080Ti reference PCB, but some like the EKWB Vector will have a clearance issue with the fan connector at the end of the PCB on that specific card. The work around is to pull the plastic off the connector and bend the pins. If you don't want to do that, the Phanteks block says it's compatible with your card: http://www.phanteks.com/PH-GB2080TiFE.html
> 
> I have a Phanteks block on my current 1080Ti and it doesn't go all the way to the end of the PCB, this is probably how it avoids contact with that connector. Not sure what other brands of blocks will fit without the above workaround though.



I can only buy from Overclockers UK, and they don't ship to Spain.


----------



## Esenel

Edge0fsanity said:


> I don't think there is anything wrong with the FE either, mine was just all around garbage with terrible coil whine so it had to get returned. Would have kept it and blocked it if it didn't have the coil whine.
> 
> OC results are entirely silicon lottery and you'll get the full range of garbage to golden chips with the FE.
> 
> 380W bios is mostly placebo for games, matters to benchmarkers.
> 
> 380w bios isn't going to damage your pcie slot if you have a good mobo and psu, which you should if you're buying a $1200-1300 card. 8 pin pcie connectors are rated at 150w each and the pcie slot is 75w for a total of 375w. Keep in mind that bios comes from a manufacturer for their cards to run on air. I doubt that they would release something that is going to damage your equipment. You'll see more high wattage bios cards when the custom PCB cards are out such as the FTW3 with its 373w bios.
> 
> The timespy on my FE was ~15800 with the best OC i could get on air. Thats what a below average chip gets you. About a 4% difference and i'm betting if i had put it on water i could bring that difference down to 1-2%.
> 
> I think the only benefit to running the 380w bios is that it gets you a slightly more stable framerate in games since the clock isn't jumping +/- 50mhz all the time.



Silicon Lottery -> Totally agree on that one.
As the 380W BIOS is running on a PCB with two 8-pin connectors, yes, the average and spike currents are over what is safe. Igor from TomsHW.de measured it.
That is the reason (also on his page) why the manufacturer is replacing the BIOS with a more Founders-like BIOS.

Doing 380 Watt on a custom PCB? No problem at all then 

But doing some memory OC on this card is more than enough to get fine FPS in games in my opinion.

Cheers


----------



## Zammin

Pilz said:


> The look of EK's block could be better which is mainly why I want another style now that I've seen the Kryographics one. 1-2°C is a big improvement not that I really need lower temps. My CPU on the other hand probably has seen better days. X99 has never been known for good overclocking and my high V core needed for 4.4ghz+ doesn't help. I'm replacing that dinosaur with a i9 9900K and a Asus Maximus XI Extreme.
> 
> Let us know how the Alphacool performs when you get it. I'm always curious to see if there's a big different between high end blocks (there really shouldn't be). I think my temps will drop more once I remove the unnecessary 3.5in bays in my 1000D. They're inhibiting air flow quite a lot thanks to Corsairs poor design.


Will do. My temps might not be as good as they would be in your loop though, as I have less radiator space (PC-O11 Dynamic), but if it's in the mid 40s I'll be very happy. I've seen watercooled temps mostly reported in the mid 40s with a few hitting 50, so I guess it just comes down to how good the rest of the loop is.

I want to get a 9900K too actually. I'm just waiting for testing on high end Z370 because I'm already on a Maximus X Code and the prices of the Z390 Maximus XI boards are very high here. So if it still overclocks well on the Z370 Hero/Code/Formula I might stick with my current board and just drop the new CPU in. So keen for the soldered IHS though, no more delidding!


----------



## nycgtr

I swapped the BIOS on the Gigabyte Windforce to the official Gigabyte one with the raised power limit. I notice I boost to pretty much the same clock, but I'm not getting some of the 380W funnies.


----------



## looniam

i'll just leave this here:
TechPowerUp GPU-Z v2.12.0 Released




> Added detection for fake graphics cards using old relabeled NVIDIA GPUs (G84, G86, G92, G94, G96, GT215, GT216, GT218, GF108, GF106, GF114, GF116, GF119, GK106)
> Added BIOS saving capability for NVIDIA Turing
> Added monitoring for multiple fans on Turing
> Added fan speed % monitoring on Turing
> Added HDMI and DisplayPort info to Advanced -> NVIDIA
> Power draw on NVIDIA cards is now reported in both TDP % and Watt
> Fixed system hang caused by Valve anti-cheat
> Fixed memory bandwidth on Turing with GDDR6
> Fixed tooltip for system memory usage sensor
> Fixed broken Radeon RX 400 GPU usage monitoring on newer drivers


----------



## GraphicsWhore

Had to redo my run from GPU to CPU since ports were shifted compared to my 1080Ti block.

Would have liked to make it all tube instead of fittings, but when I pulled out my tube kit I remembered that when I was building this thing last year I accidentally cut a tube with the silicone insert still inside (first water build woes). So now the insert isn't long enough for me to do multiple bends. I'll get a new insert at some point and make it all tube.

Should be finished putting it back together tonight, then I'll put it through extended benchmarks and test the Galax BIOS. Pumped.


----------



## tamas970

looniam said:


> i'll just leave this here:
> TechPowerUp GPU-Z v2.12.0 Released


No VRM temp info unfortunately (However, on some cards there might be no sensor...)


----------



## Edge0fsanity

Esenel said:


> Silicon Lottery -> Totally agree on that one.
> As the 380 Watt Bios is on a 2 Pin PCB yes the average and spike current are over what is save. Igor from TomsHW.de measured it.
> That is the reason (also on his page) why the manufacturer is replacing the bios with a more Founders like bios.
> 
> Doing 380 Watt on a custom PCB? No problem at all then
> 
> But doing some memory OC on this card is more than enough to get fine FPS in games in my opinion.
> 
> Cheers


Could you provide a link to the info from TomsHW.de regarding the currents with the 380w bios? I have that bios on my new card but i was thinking about bringing it down to the 366w bios while i do testing on water in a few days.


----------



## carlhil2

dante`afk said:


> how's the zotac amp doing there? anyone running around with it with the 380w bios? results?
> 
> buy yes/no? wait?


This is the Zotac flashed with the Gigabyte BIOS, with the Alphacool block. The GPU does 2040MHz at stock; with the block on it will do a steady 2040. Included a stock Time Spy run.


----------



## tamas970

carlhil2 said:


> This is the Zotac flashed with the Gigabyte bios with Alphacool block..


Have you tested the BIOS on air as well, or the block was already mounted?


----------



## carlhil2

tamas970 said:


> Have you tested the BIOS on air as well, or the block was already mounted?


Ran it on air with that BIOS; it would do ~2115 in games, clocking down depending on temps.


----------



## GAN77

carlhil2 said:


> This is the Zotac flashed with the Gigabyte bios with Alphacool block..


Temperature under water?


----------



## cletus-cassidy

mbhtzor said:


> Can confirm the Asus 2080 Ti Dual Advanced works fine with the Gigabyte BIOS. The cooling however, SUCKS! Will probably block the card this weekend.
> 
> 
> Had some crazy FPS-drops in Forza Horizon 4. Rebooted the computer, and got a BIOS error due to CPU Temperature. Ryzen 2700x was at 94 degrees :thumb: Some gunk in my watercooling loop was restricting the flow, so need to take care of that as well. Luckily not the cards fault


I have the OC (011G) version of this card on order from Newegg and also putting mine under water. Keep me posted about your results with the Gigabyte BIOS on the card under water. Very interested. Thanks in advance.


----------



## tamas970

carlhil2 said:


> Ran it on air with bios, would do @2115 in games, clock down depending on temps..


Thanks, impressive! Do you remember the temp values and how much it clocked down during torture testing?


----------



## Lownage

Will this bios work on my Zotac AMP! ?

https://www.techpowerup.com/vgabios/204090/evga-rtx2080ti-11264-180916


----------



## Esenel

Edge0fsanity said:


> Could you provide a link to the info from TomsHW.de regarding the currents with the 380w bios? I have that bios on my new card but i was thinking about bringing it down to the 366w bios while i do testing on water in a few days.

Sure:
https://www.tomshw.de/2018/10/11/kf...es-380-watt-power-limit-mit-rgb-beplankung/4/

Talks about spikes up to 8.5 Ampere with the 380W bios. Average of 6 Ampere.
Specified is 5.5 Ampere.

Risky for long-term use in my opinion.
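For reference, the overshoot those numbers imply works out like this. A quick sketch; only the 5.5 A spec and the two measured figures come from the post above:

```python
# How far the measured currents exceed the 5.5 A spec quoted above.
SPEC_A = 5.5    # specified continuous current, amperes
AVG_A = 6.0     # measured average with the 380W BIOS
SPIKE_A = 8.5   # measured spike

def overage(measured: float, spec: float = SPEC_A) -> float:
    """Percent above spec."""
    return (measured / spec - 1.0) * 100.0

print(f"average: +{overage(AVG_A):.0f}% over spec")    # ~9% over
print(f"spikes:  +{overage(SPIKE_A):.0f}% over spec")  # ~55% over
```

So the average draw is only modestly out of spec, but the spikes are well beyond it.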


----------



## nycgtr

I would vote for sticking with the 330W. I literally have a 20MHz clock difference. For me it was the not-waking-from-sleep and the "GPU device removed" errors in certain games; I never had these with the Trio, nor with the current Gigabyte card I blocked, until I flashed the 380W BIOS. After reverting to the Gigabyte higher-watt BIOS, all is good.


----------



## Khalil

Managed to snag a Gigabyte 2080ti Gaming OC and was interested in flashing to the latest bios which gives another 11% max power.

Do I need to remove drivers and pop into safe mode before running the BIOS exe, since it's an official Gigabyte BIOS update, or should I play it safe?


----------



## Fiercy

Khalil said:


> Managed to snag a Gigabyte 2080ti Gaming OC and was interested in flashing to the latest bios which gives another 11% max power.
> 
> Do I need to remove drivers and pop into safe mode before running the bios exe since its an official Giga Bios update or should I play it safe?


I did nothing of the sort and it updated just fine for me. Didn't have to do anything special.


----------



## Pilz

Foxrun said:


> I'm not sure. The other one that I am returning only hit 2045-55 with the same core settings. The only limiter that I hit is voltage. How high of a core clock can you sustain (ie 2100)?

It really depends on the load. In TS Extreme with a +110 offset I'll hover around 2070 but peak at 2115-2130, so it's hitting the voltage/power limits based on the load. My memory is also running at a 1100 offset while this is happening. If I do a 200 core offset my card will crash sometimes, so it's really hard to say.

Edit: I can sustain 2130mhz in gaming no problem, I haven't tried a higher core clock gaming yet.


----------



## Khalil

Fiercy said:


> I did nothing of the sort and it updated just fine for me. Didn't have to do anything special.


Thanks, you were right.. quick update.


----------



## Dayaks

Esenel said:


> Sure:
> https://www.tomshw.de/2018/10/11/kf...es-380-watt-power-limit-mit-rgb-beplankung/4/
> 
> Talks about spikes up to 8.5 Ampere with the 380W bios. Average of 6 Ampere.
> Specified is 5.5 Ampere.
> 
> Risky on a longterm use in my opinion.

I wouldn’t worry about spikes (heat takes some time) or being 9% over spec personally. Maybe I’d care more if I had four cards, but one or two it’s not even a concern for me.

Still good info to know.

In other news I waterblocked my Asus Dual and flashed to the 380W bios. I was originally power limited at a Timespy score of 12700. Now I hit 14200 combined. A 12% jump isn’t too shabby in my book. Running at 2080Mhz and 700 VRAM for a 24/7 stable OC. The EK block keeps my card at about 42C with 24C fluid IIRC.

I originally had the shunt on the back of the card modded. I took it off because it caused my card to stick at 300Mhz. We'll see; if RTX features require more power I might just bite the bullet and get a proper resistor. Currently, with a 380W BIOS, I don't think it's really needed.


----------



## PCNerd

Just received and installed the PNY XLR8. Just a note that the card does not have any RGB - I talked to some folks at PNY and there is an XLR8 version with RGB/LED in the works. 

I don't believe this card is the reference PCB (so I'm hesitant to try the GALAX BIOS). I uploaded the stock BIOS if anyone wants it:

https://www.techpowerup.com/vgabios/204371/204371

EDIT: I was able to flash to the Gigabyte 366W BIOS. Testing it out now.


----------



## Edge0fsanity

anyone else having issues with Time Spy scores reading low? Last week my FE got a 15800 graphics score on air. I returned that card. The new Asus Dual OC card with a waterblock, 366W BIOS, +1000 mem, and a peak clock of 2130MHz does 14000. Other benchmarks are fine; Superposition 4K Optimized put me at 13373, which is a top-20 score.


----------



## kx11

no wonder my gpu temps are high as hell

it's that infamous phanteks glacier res.


----------



## Foxrun

Pilz said:


> It really depends on the load. In TS Extreme with a +110 offset I'll hover around 2070 but peak at 2115-2130 so it's hitting the voltage/power limits based on the loading. My memory is also running at a 1100 offset while this is happening. If I do a 200 core offset my card will crash sometimes so it's really hard to say.
> 
> Edit: I can sustain 2130mhz in gaming no problem, I haven't tried a higher core clock gaming yet.


Same except for something like monster hunter. That game will drop my clocks to 2070-2100


----------



## Zammin

kx11 said:


> no wonder my gpu temps are high as hell
> 
> 
> it's that infamous phanteks glacier res.


Oh dear.. That's a lot of air..


----------



## Baasha

Guys - I have this nagging feeling that my 6950X might be bottlenecking these GPUs. I have 2x 2080 Ti in SLI, and many games that formerly ran at 99% across each GPU now only scale around 80-90%, and some are even worse (GTA V for instance).

Also, since the latest Windows update, my OSD shows my CPU at 4.0GHz when the BIOS settings are all at 4.30GHz. Not sure how/when this changed, so I need help getting my OC back to 4.30GHz. How do I do that? Please help!


----------



## sugi0lover

bump...


----------



## Gustavoh

Hello again! So I went ahead and ordered the EK Vector waterblock and backplate for my Gainward Phoenix GS. It is indeed a reference PCB, as you can see in the picture.

I had a little bit of trouble removing the original backplate, had to apply a little bit of force (the PCB did flex slightly), but other than that I thought I had a smooth install of the wb. It turns out I get no signal when I boot now! :-(

I've managed to boot using the onboard graphics (HDMI out from the motherboard), so the PC itself seems to be ok, even though I did get the BSOD at Win 10 startup because of nvsomething something.sys failure -- which would suggest the GPU is not starting up. I've already checked the power cables, even switched them, to no avail. Any ideas or suggestions? Did I just kill my fancy new GPU? 

(I'm dreading having to drain the loop to take out the gpu+block and reopen it/reassemble it.)


----------



## Rob w

Baasha said:


> Guys - I have this nagging feeling like my 6950X might be bottlenecking these GPUs - I have 2x 2080 Ti in SLI and in many games that were formerly at 99% across each GPU now only scales around 80 - 90% and some are even worse (like GTA V for instance).
> 
> Also, since the latest Windows update, my OSD shows my CPU at 4.0Ghz when the BIOS settings are all at 4.30Ghz - not sure how/when this changed and so I need help in getting my OC back to 4.30Ghz. How do I do that? Please help!


The October Windows update has caused a lot of problems with losing OCs.
It seems to be linked with the microcode; you need to update the 'ME driver' AFAIK.
The ASUS forum has a lengthy thread about it.


----------



## arrow0309

kx11 said:


> no wonder my gpu temps are high as hell
> 
> it's that infamous phanteks glacier res.


You have to do a lot of bleeding 



Rob w said:


> The October windows update has caused a lot of problems with loosing Oc,
> It seems to be linked with the microcode you need to update ‘me driver’ afaik.
> ASUs forum has a leangthy thread about it.


Confirmed. I've had to increase both my vcore and system agent by up to +10/15 mV, and also lowered to 4.7 (AVX offset -1).


----------



## OleMortenF

Did some quick tests again with my 2080 Ti and EK Waterblock

Stock boost: 1635/7000MHz

About 290Watt (GPU-Z)

Temp under Load 42c


Overclocked: 2100/8000MHz (GALAX RTX 2080 Ti 300W x 126% power target BIOS = 380W)

Power Limit 126% and Core Voltage +100

+70 Core Clock + 1000 Memory

375Watt (GPU-Z)

Temp under Load 54c

I dont think these temps are that bad.


----------



## Pilz

So after the botched W10 Broadwell-E patch messed up my CPU overclock, its temps have been much higher as a whole. I need to look into the issue some more, but for now this is my Time Spy Extreme score. Obviously my CPU is the weak point here and is getting replaced with a 9900K. I have my 6800K running at 4.4ghz on cores 0-1 and 4.3ghz on the other four.

Graphics score is up there with some of the better runs I've seen. Despite my really weird CPU temp spikes (it normally peaks at 55C with the overclock), my GPU stayed at 35C max.

Two of my Corsair Dominator RAM sticks failed the other day and are getting replaced; in the meantime I bought some G.Skill Trident Z RGB since I'm not waiting 10 days with no working PC. RAM will be here today!


----------



## Nephalem89

Good morning. Any advice on a water block to cool a Gigabyte 2080 Ti Gaming OC? Thanks.


----------



## Krzych04650

nycgtr said:


> I would vote to sticking with the 330watt. I literally have a 20mhz clock difference. For me the not waking up from sleep and gpu device removed error in certain games. I never had these with the trio, nor the current gigabyte card I blocked until i flashed the 380watt bios. After reverting to the gigabyte higher watt bios, all is good.


For me the clock difference was much bigger, more like 50 MHz: 2130-2160 typical instead of 2070-2115. Throttling was also much smaller in Time Spy graphics test 2, 2100 minimum instead of 2055 or even 2040 minimum, and in games the 380 BIOS removed throttling completely. But the performance difference wasn't really there (1.5-2.5%) and I got some serious stuttering. Don't know if this Galax BIOS is some kind of beta version or what, but it is certainly not working properly for me.

This stuttering was really strange, especially since there was no real pattern to where it would happen. For example, in games like Witcher 3 or AC: Syndicate it was all fine at 1080p and 1440p, but going to 3440x1440, let alone 4K, introduced massive stuttering. That would explain why Time Spy wouldn't stutter (it is 1440p), but then the Superposition benchmark didn't stutter at 3440x1440 or 4K either, even despite hitting the power limit often. There were also some games like Tomb Raider (2013) with no stuttering regardless of anything. So I wasn't able to find any pattern in all of this. Reverting to the original BIOS fixed it.



Baasha said:


> Guys - I have this nagging feeling like my 6950X might be bottlenecking these GPUs - I have 2x 2080 Ti in SLI and in many games that were formerly at 99% across each GPU now only scales around 80 - 90% and some are even worse (like GTA V for instance).
> 
> Also, since the latest Windows update, my OSD shows my CPU at 4.0Ghz when the BIOS settings are all at 4.30Ghz - not sure how/when this changed and so I need help in getting my OC back to 4.30Ghz. How do I do that? Please help!


You have to uninstall KB4100347 update, this is the one that breaks things.


----------



## stefxyz

It's definitely a 4K card. I took some screenshots from Witcher 3 on my watercooled OC FE card. First run is Ultra with post processing on High. The way I would actually play it: HairWorks off and shadows dropped from Ultra to High. This way I get around 100 fps in 4K glory on the PG27UQ. The photo quality is a bit ****; it's a 90mm macro lens over the shoulder with a low depth-of-field setting... (4K should be ready in an hour or so).







Long story short no hairworks and super slight reduction in shadows gives you roughly 100 super buttery gsynced fps now.


----------



## Diverge

sugi0lover said:


> Hi,
> I tried a shunt mod using two 0.006 ohm resistors over the original ones on the PCB.
> But instead of soldering or any conductive tape, I used a glue gun just to fix the new ones to the original ones.
> I made sure that the metal parts of the resistors physically contacted each other, but the power limit is the same as original.
> Do I actually have to solder or something like that to make it work?
> To my eyes, the new and original resistors contact each other.
> Any help will be greatly appreciated.


You need to solder it when dealing with such low-resistance precision devices. The contact resistance between the two unsoldered parts is likely greater than the value of the resistor itself.


----------



## icehotshot

2 x Asus Turbo 2080 Ti in stock at MC St. Davids, PA. Get em while they're hot.

http://www.microcenter.com/product/...2080-ti-single-fan-11gb-gddr6-pcie-video-card


----------



## Nephalem89

Any advice on water cooling a Gigabyte 2080 Ti Gaming OC? I was told to desolder the fan connector, but if you do that you lose the warranty completely... I thought about using a universal block, but then how do you cool the VRAM and the VRM? Thanks!


----------



## Jpmboy

Edge0fsanity said:


> I don't think there is anything wrong with the FE either, mine was just all around garbage with terrible coil whine so it had to get returned. Would have kept it and blocked it if it didn't have the coil whine.
> 
> OC results are entirely silicon lottery and you'll get the full range of garbage to golden chips with the FE.
> 
> 380W bios is mostly placebo for games, matters to benchmarkers.
> 
> 380w bios isn't going to damage your pcie slot if you have a good mobo and psu, which you should if you're buying a $1200-1300 card. *8 pin pcie connectors are rated at 150w each *and the pcie slot is 75w for a total of 375w. Keep in mind that bios comes from a manufacturer for their cards to run on air. I doubt that they would release something that is going to damage your equipment. You'll see more high wattage bios cards when the custom PCB cards are out such as the FTW3 with its 373w bios.
> 
> The timespy on my FE was ~15800 with the best OC i could get on air. Thats what a below average chip gets you. About a 4% difference and i'm betting if i had put it on water i could bring that difference down to 1-2%.
> 
> I think the only benefit to running the 380w bios is that it gets you a slightly more stable framerate in games since the clock isn't jumping +/- 50mhz all the time.



The 8 pin PCIE connectors are rated at 150W each in terms of the physical specification (wire gauge, pin contact surface area). A modern PSU is capable of delivering much higher wattage thru each PCIE connector (single rail mode) and most use a gauge and pin well exceeding this min spec. The slot power comes from the ATX rail and is drawn from that power plane in the PCB. This is more limited and unless you defeat the board's OCP on a modern board, there will not be any ATX connector melting happening anymore (ah... the good ole days  ). For example, an AX1500i has a per-rail OCP of 40Amp on each of the PCIE (and CPU) lines in multirail mode, there is no limit in single rail mode. 40A x 12V is... well you know. E.g. with the P2 lock disabled, my Titan V will pull (measured with a clamp meter) over 400W while folding thru the PCIE connectors, only 40W thru the slot.

Unless the PSU is old or bad, no one is going to burn a PCIE slot with any of these cards regardless of the bios or modifications.
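As a sanity check on those numbers, here's a small sketch comparing the nominal spec budget to what one PSU rail can deliver. The 150 W / 75 W figures are the spec values from the quoted post, and the 40 A OCP at 12 V is the AX1500i multi-rail figure cited above:

```python
# Nominal PCIe power budget vs. what the PSU side can deliver.
# 150 W per 8-pin connector, 75 W from the slot (spec minimums);
# 40 A per-rail OCP at 12 V as on an AX1500i in multi-rail mode.
EIGHT_PIN_W = 150
SLOT_W = 75

def nominal_budget(n_eight_pin: int) -> int:
    """Spec power budget for a card with n 8-pin connectors."""
    return n_eight_pin * EIGHT_PIN_W + SLOT_W

print(nominal_budget(2))  # 375 W for a 2x 8-pin 2080 Ti
print(40 * 12)            # 480 W one rail can push before OCP trips
```

Which is why a 380 W BIOS sits right at the connector spec but well inside what a decent modern PSU will actually supply.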



e: almost drove over to MC this morning to buy those two ASUS Turbo 2080Ti's... but then came to my senses trying to hold off for a full die Turing Titan (.. must resist  )


----------



## Jpmboy

Nephalem89 said:


> Any advice to water cooling a gygabite 2080 ti gaming oc until I was raised to desolder the connector but if yes you lose the warranty completely .... I thought about taking a universal block but how you refrigeraris the vram and the vrm thanks!


just asking... EK does not have that card listed as compatible for their blocks??


----------



## Edge0fsanity

Jpmboy said:


> just asking... EK does not have that card listed as compatible for their blocks??


i believe that card has a fan connector that causes a contact issue with the block. The fan connector has to be modified to get the block to fit.


----------



## Nephalem89

Edge0fsanity said:


> i believe that card has a fan connector that causes contact issue with the block. The fan connector has to be modified to get the block to fit.


This is the problem:

The problem is that I do not want to lose the warranty... do you know if more models will come out? Or is it better to put on a universal block and cool the VRM and the RAM separately? But how do I cool them? Or do I desolder the connector?


----------



## tamas970

Baasha said:


> Guys - I have this nagging feeling like my 6950X might be bottlenecking these GPUs - I have 2x 2080 Ti in SLI and in many games that were formerly at 99% across each GPU now only scales around 80 - 90% and some are even worse (like GTA V for instance).
> 
> Also, since the latest Windows update, my OSD shows my CPU at 4.0Ghz when the BIOS settings are all at 4.30Ghz - not sure how/when this changed and so I need help in getting my OC back to 4.30Ghz. How do I do that? Please help!


I am no expert on CPU OC, but I know that new Windows OSes (currently shipping under the name of "updates", with a new WDDM every year) can very well kill a working OC. Try increasing the voltage or reverting to a previous Windows build.


----------



## Jpmboy

Nephalem89 said:


> Thi is problem
> 
> The problem is that I do not want to lose the guarantee ... do you know if more models will come out? or better put a universal and refrigerate the vrm and the ram? But how do I refrigerate it? Or desoldering the connector?


A uni-block will require some means of cooling the power section (less so the ram). I would not go that route for a "permanent" installation. Before removing the connector (and unless you REALLY know how to flow/solder, any warranty inspection will catch this), I'd grind out the space on the waterblock first. It's a low-profile fan connector and won't require more than a Dremel-level shaving of the block.
That said... your card is not listed here:
The std EK block will require "fitment".


----------



## sugi0lover

bump...


----------



## Nephalem89

Is it possible to cut the methacrylate without creating leaks?


----------



## Nephalem89

Jpmboy said:


> A uni-block will require some means of cooling the power section (less so the ram). I would not go that route for a "permanent" installation. Before removing the connector (and unless you REALLY know how to flow/solder, any warranty inspection will catch this) I'd grind out the space on the waterblock first. It's a low profile fan connector and will not require more than a dremel tool-level shaving of the block.
> That said... your card is not listed here:
> The std EK block will require "fitment".

Is it possible to cut the methacrylate without creating leaks??


----------



## Jpmboy

Nephalem89 said:


> Is it possible to cut the methacrylate without creating leaks??


Sure, as long as you stay outside the O-ring. But if the fan connector is NOT under the metal cooling block itself, it is not likely going to be a problem. If it is under the metal AND well outside the O-ring, you can still mod the block to fit. I'd rather mod the $130 block than the $1200 card. 
Also, you should be aware that (by the book) most manufacturers will void the warranty if you remove the stock cooler anyway. Many use a tamper telltale, either a sticker on the GPU core mount screws or a small piece of paper tape on the backplate or the like. Check thoroughly; you can do things to preserve these telltales. :thumb:


----------



## sblantipodi

A bit of an off-topic question: is G-Sync able to eliminate tearing even when the framerate goes over the refresh rate of the monitor?
If yes, how? Does it limit the framerate like V-Sync?

Thanks


----------



## Zammin

sblantipodi said:


> A little bit out of topic question. Is gsync able to eliminate tearing even when the framerate goes over the refresh rate of the monitor.?
> If yes, how? Does it limit the framerate like gsync?
> 
> Thanks


No it does not eliminate tearing above the refresh rate, see this article about the ideal settings for G sync for zero tearing and minimal input lag (essentially none): https://www.blurbusters.com/gsync/gsync101-input-lag-tests-and-settings/14/

Basically in NVCP you turn on G Sync, turn on Vsync as well under global settings and limit your frame rate to a number that is 3 below your maximum refresh rate ideally using a frame limiter in the game or alternatively with RTSS. (E.g. for 144hz, set to 141FPS)


----------



## Baasha

Krzych04650 said:


> You have to uninstall KB4100347 update, this is the one that breaks things.





tamas970 said:


> I am no expert on CPU OC, but know that new windows OS-es/currently running under the name of "updates" (new wddm every year) can very well kill a working OC. Try increasing the voltage or revert back to a previous windows build.


That was the problem exactly! Uninstalled and all is well now.

Thanks guys!


----------



## nycgtr

Anyone who wants a card bad: I've got quite a few new ones. I originally planned to bin all of them and keep the best 3-4. However, I have 3 Strix arriving on the 30th thanks to a contact, so I can return or sell the ones I have. They are new in box, but I paid over $1300 each with tax (nyc tax is brutal), so you can get them at my cost + fees + shipping, or wait for a better tax-free deal to be in stock. Just throwing it out there.


----------



## AdamK47

The voltage sliders in EVGA Precision and MSI Afterburner don't appear to do anything. On my EVGA 2080Ti XC Ultra with +100 core it will hover between 2070 and 2040 with 1.050V with the voltage slider at 0. It's the same exact thing with the voltage slider at 100. What's up with that? Are reviewers and owners simply enjoying a placebo effect by putting that voltage slider at 100?


----------



## Nico67

AdamK47 said:


> The voltage sliders in EVGA Precision and MSI Afterburner don't appear to do anything. On my EVGA 2080Ti XC Ultra with +100 core it will hover between 2070 and 2040 with 1.050V with the voltage slider at 0. It's the same exact thing with the voltage slider at 100. What's up with that? Are reviewers and owners simply enjoying a placebo effect by putting that voltage slider at 100?



The Voltage slider doesn't actively increase the voltage, it allows the curve to access more voltage. You need to tune the curve so that it will use a higher voltage at your highest clk and then a bit more at the next highest frequency etc. This is how Pascal worked at least.


----------



## StullenAndi

sugi0lover said:


> Hi,
> I tried a shunt mod using two 0.006 ohm resistors over the original ones on the PCB.


What are you trying to achieve with this mod? Adding two 0.006 ohm resistors on top of the existing shunt will result in 0.001875 ohm. Even with just a second 0.005 on top of the existing one, resulting in 0.0025 ohm, the failsafe is triggered.
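The arithmetic above is just parallel resistance. A small sketch, assuming the stock shunt is 0.005 ohm (which the "second 0.005 on top of the existing one" remark implies):

```python
# Parallel resistance for shunt-mod stacking. Assumed values from
# the post: stock shunt 0.005 ohm, added parts 0.006 ohm each.
def parallel(*ohms: float) -> float:
    """Combined resistance of resistors stacked in parallel."""
    return 1.0 / sum(1.0 / r for r in ohms)

stacked = parallel(0.005, 0.006, 0.006)
print(round(stacked, 6))                 # 0.001875 ohm, as stated above
print(round(parallel(0.005, 0.005), 4))  # 0.0025 ohm with a second 0.005

# The board infers current from the voltage drop assuming the stock
# value, so the sensed power is scaled down by stacked/0.005, i.e. the
# effective power limit multiplier would be roughly:
print(round(0.005 / stacked, 2))         # ~2.67x
```

Which is far past what the failsafe tolerates, hence the 300 MHz lockout people report.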


----------



## Jpmboy

nycgtr said:


> Anyone who wants a card bad I got quite a few number of new ones. I originally planned to bin all of them and keep the best 3-4 of them. However, I have 3 strix arriving on the 30th thanks to a contact. So I can return or sell the ones I have. They are new in box but I paid over 1300 each with tax (*nyc tax is brutal*) so you can get them at my cost + fees + shippping or wait for a better tax free deal to be in stock. just throwing it out there.


lol - yeah that's (only) one reason I don't live there anymore. But owning 5+ 2080Tis and complaining about NYC tax... priceless.


----------



## nycgtr

Jpmboy said:


> lol - yeah that's (only) one reason I don't live there anymore. But owning 5+ 2080Tis and complaining about NYC tax... priceless.


Lol, hey, gotta save where you can, right? Between the real estate prices out here, a man can use all the help he can get lol.


----------



## Zammin

Man $1300 USD for a 2080Ti would be a steal here in Australia.. The absolute cheapest ones here in Aus (if you can find them) are $2000 AUD ($1422.98 USD) and that's for a bottom of the line blower style card.

The 2080 non-Ti is probably the most inflated here. At best they start at $1450 ($1031.66 USD) for the bottom of the line models and go all the way up to $1700 ($1209.53 USD) if you want a Strix card. You could buy a 2080Ti in the US for the price of a Strix 2080 here haha.

We can't bring them in from the US either because we are being shut out of online stores like Amazon etc, and the few places that still ship to Aus like Newegg only sell about a tenth of their range to us, and the RTX cards aren't included.

I hope Newegg eventually at least open up their full range of Z390 motherboards to Australia because they are way cheaper on Newegg. They just aren't listing the ones I want for Aus yet. We're getting shafted by local retailers..


----------



## HuckleberryFinn

How much risk is involved with flashing the BIOS? Is it safe to use the Galax 380w BIOS with my Zotac Amp 2080Ti? 

If there's even a 1% chance I'll brick my card, I won't try.


----------



## kx11

HuckleberryFinn said:


> How much risk is involved with flashing the BIOS? Is it safe to use the Galax 380w BIOS with my Zotac Amp 2080Ti?
> 
> If there's even a 1% chance I'll brick my card, I won't try.



it's not worth it

enjoy your card


----------



## kx11

if anyone is interested


----------



## tamas970

kx11 said:


> it's not worth it
> 
> enjoy your card


The Zotac AMP is limited to 260/300W by default, which sounds quite restrictive for situations where more juice is needed (minimum fps?). Just wondering if it's worth buying a 330W+ card, like the Giga Gaming OC. (The FTW3 is not even listed, let alone available...)


----------



## Jpmboy

HuckleberryFinn said:


> How much risk is involved with flashing the BIOS? Is it safe to use the Galax 380w BIOS with my Zotac Amp 2080Ti?
> 
> If there's even a 1% chance I'll brick my card, I won't try.


don't try. As KX said, it is not worth it. tho you just need another NV card (any gen) to recovery-flash the "bricked" card in 90% of the bad flash instances. AFAIK, the bios PL will not override the hard-coded power/thermal ceilings. (just like pascal and volta). Hardmods or stand pat.


----------



## SpacemanSpliff

tamas970 said:


> The Zotac AMP is limited to 260/300W by default, which sounds quite restrictive for situations where more juice is needed (minimum fps???) Just wonder if it's worth to buy a 330W+ card, like the giga Gaming OC. (FTW3 is not even listed, not to mention available...)


That's simply because EVGA have not released them to market yet. However the info and specs for them are available at the EVGA website...

https://www.evga.com/products/product.aspx?pn=11G-P4-2487-KR


----------



## Pilz

Jpmboy said:


> don't try. As KX said, it is not worth it. tho you just need another NV card (any gen) to recovery-flash the "bricked" card in 90% of the bad flash instances. AFAIK, the bios PL will not override the hard-coded power/thermal ceilings. (just like pascal and volta). Hardmods or stand pat.


My card is the FE and pulls 137.7% of the TDP stock as I found out. I was running some stability tests and had GPU-Z log the max core clock/power consumption. Here's a screenshot showing how much power it's really pulling assuming the program is accurate.
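Converting that GPU-Z percentage into watts is straightforward. A sketch, assuming GPU-Z reports against the FE's 260 W default power limit (the reference spec is 250 W):

```python
# Convert a GPU-Z "TDP %" reading into watts. The 260 W FE default
# limit is an assumption; 250 W is the reference-spec TDP.
def draw_watts(tdp_percent: float, tdp_w: float) -> float:
    """Actual draw implied by a TDP-percentage reading."""
    return tdp_percent / 100.0 * tdp_w

print(round(draw_watts(137.7, 260)))  # ~358 W against a 260 W FE limit
print(round(draw_watts(137.7, 250)))  # ~344 W against the 250 W reference
```

Either way, well above what the stock power target nominally suggests.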


----------



## Jpmboy

Pilz said:


> My card is the FE and pulls 137.7% of the TDP stock as I found out. I was running some stability tests and had GPU-Z log the max core clock/power consumption. Here's a screenshot showing how much power it's really pulling assuming the program is accurate.


Nice. Flashed? Anyway, try looking at the clock bin drops when it hits the power limit. Afterburner can show this if you have the PL binary flag enabled in the sensor panel.


----------



## Bloodmosher

BF1 crashes every time I start it with one of my 2080TIs. I bought two for NVlink. With one of the cards in the picture it always crashes. It worked fine for a week or so, but no longer. Tried using it in single card mode. Tried backwards and forwards on drivers. Tried re-seating, different slots, etc. All the same result. Time to RMA?


----------



## gavros777

where can i register the gigabyte 2080 ti for the 4 year warranty?


----------



## GraphicsWhore

Bloodmosher said:


> BF1 crashes every time I start it with one of my 2080TIs. I bought two for NVlink. With one of the cards in the picture it always crashes. It worked fine for a week or so, but no longer. Tried using it in single card mode. Tried backwards and forwards on drivers. Tried re-seating, different slots, etc. All the same result. Time to RMA?


If just BF1 then definitely not.


----------



## Pilz

Jpmboy said:


> NIce. flashed? Anyway, try looking at the clock bin drops when it hits the power limit. Afterburner can show this if you have the PL binary flag enabled in the sensor panel.


No, it's 100% stock, unflashed, not modded. The only thing I have is an EKWB on there but obviously, these cards are pulling more power than people think assuming the program is accurate. 

Has anyone else checked the power draw with GPU-Z?
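For anyone who wants to sanity-check their own GPU-Z logs, the % TDP readings convert to watts with a one-liner. This sketch assumes the 250W reference board power from the spec sheet; the FE's actual limit may differ slightly, so treat the result as approximate:

```python
# Convert GPU-Z's "% TDP" log values into watts.
# Assumes the 250 W reference TDP; the FE's rated limit may differ,
# so these numbers are approximate.

def percent_tdp_to_watts(percent_tdp: float, board_power_w: float = 250.0) -> float:
    """Convert a GPU-Z power reading in % of TDP to watts."""
    return board_power_w * percent_tdp / 100.0

peak = percent_tdp_to_watts(137.7)          # the 137.7% peak reported above
print(f"137.7% of 250 W = {peak:.1f} W")    # ~344 W
```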


----------



## Zammin

My pre-order with one of the local retailers keeps getting pushed back. They put the pre-orders up early last month with an ETA of early this month, but mine's been pushed back twice, to the end of this month now or maybe even the next. After talking with them on the phone it sounds like they didn't even have confirmed inbound stock when they put the pre-orders up, and they have no idea when the cards will be arriving. They just put an ETA date up, and when that date passes they put another one up for a few weeks later.

They said "all the other Australian retailers are struggling to get stock as well", but the difference is the other retailers didn't put up pre-orders for stock they weren't sure they could even get in the first place.

Could be next year for all I know now...


----------



## tamas970

Zammin said:


> My pre order with one of the local retailers keeps getting pushed back. They put the pre-orders up early last month with an ETA of early this month, but mines been pushed back twice to the end of this month now or maybe even the next. After talking with them on the phone it sounds like they didn't even have confirmed inbound stock when they put the pre-orders up, and they have no idea when the cards will be arriving. They just put an ETA date up and when that date passes they put another one up for a few weeks later.
> 
> They said "all the other Australian retailers are struggling to get stock as well" but the difference is the other retailers didn't put up pre-orders for stock they weren't sure they could even get in the first place
> 
> Could be next year for all I know now...


No cards with decent cooling/330+W BIOS are available in Europe either https://geizhals.eu/?cat=gra16_512&...=uk&hloc=eu&plz=&dist=&mail=&sort=t&bl1_id=30


----------



## Zammin

tamas970 said:


> No cards with decent cooling/330+W BIOS are available in Europe either https://geizhals.eu/?cat=gra16_512&...=uk&hloc=eu&plz=&dist=&mail=&sort=t&bl1_id=30


I feel ya man... We don't even have any bottom-of-the-line models available; completely sold out everywhere. I hope the 9900K doesn't see the same kind of global shortage the 2080 Ti has. Intel decided to follow Nvidia's pre-order business model, so it's possible... people might rush in to pre-order out of fear they won't get one otherwise, like with the 2080 Ti's launch. I prefer to wait for independent third-party reviews before purchasing, but that really bit me in the arse this time since they were all gone by launch.


----------



## ESRCJ

kx11 said:


> if anyone is interested


I'm interested. I just need to find two 2080 Ti Strix in stock. So far, they haven't popped up for sale anywhere online in the US.


----------



## Nico67

Zammin said:


> My pre order with one of the local retailers keeps getting pushed back. They put the pre-orders up early last month with an ETA of early this month, but mines been pushed back twice to the end of this month now or maybe even the next. After talking with them on the phone it sounds like they didn't even have confirmed inbound stock when they put the pre-orders up, and they have no idea when the cards will be arriving. They just put an ETA date up and when that date passes they put another one up for a few weeks later.
> 
> They said "all the other Australian retailers are struggling to get stock as well" but the difference is the other retailers didn't put up pre-orders for stock they weren't sure they could even get in the first place
> 
> Could be next year for all I know now...



Yeah, I couldn't get a straight answer regarding my pre-order; dates keep coming and going and getting pushed out to the end of this month at least.

Was lucky enough to pick up an Asus Turbo card today from a different store. Got it installed and started working with it. At the default power limit (which happens to be 250W) you're not even holding 1600MHz (rated boost is only 1545MHz though) and it hits 81C, but it does get better. A custom fan curve will get the temp down to 60C, and pushing the limit to 120% gives 300W, which puts you in the 1700s. Overclocking another 100MHz, which brings you up to the base boost of most cards, gets you into the 1800s.

Clearly flashing the BIOS is the only way to go, especially with the more basic cards. I have the Galax BIOS on it now, but have only gone to 110% for a 330W limit at the moment. The fan doesn't seem to spin as fast, but that won't be an issue as I have a waterblock for it down the track. It sits in the high 1900s/low 2000s with a +155 clock offset (for 1790 boost), at 74C (due to the fan seeming a bit slower), rarely using over 1.000V and bouncing off the power limit.

Fun to play around with; I've gone from 10957 in Superposition 4K (stock) to 12493. I think it will do OK on chilled water, but that may be a ways off, depending how keen I get.


----------



## Pilz

I've been having tons of crashes in games, namely Shadow of the Tomb Raider, PUBG and COD BO4. I tried running all of them at stock clocks to no avail. I ended up doing a clean install of the drivers (which didn't fix the crashing), but now I'm having a weird problem: I used to get a 2100MHz core clock with a +100 offset, and now it only goes to 2070MHz with the same offset. Does anyone have an explanation for what could be causing this?


----------



## Pilz

I thought I'd share my new RAM, since it now matches my coolant colors 🙂

3000MHz C14 G.Skill Trident Z RGB, OC'd to 3200 C14.

I'm reworking all of my tubing once I get my i9 9900K + Asus Z390 Maximus XI Extreme. I haven't decided on the block yet.

My 2080Ti will transition to the Aquacomputer Kryographics block once they come out.


----------



## tamas970

Pilz said:


> I've been having tons of crashes in games, namely Shadow of the Tomb Raider, PUBG and COD BO4. I tried running all of them at stock clocks to no avail. I ended up doing a clean install of the drivers (didn't fix the crashing) but now I'm having a weird problem. I used to get a 2100mhz core clock with a 100 offset, now it only goes to 2070mhz with the same offset. Does anyone have an explanation of what could be causing this?


Windows 1809 update?


----------



## Newbie2009

Pilz said:


> I've been having tons of crashes in games, namely Shadow of the Tomb Raider, PUBG and COD BO4. I tried running all of them at stock clocks to no avail. I ended up doing a clean install of the drivers (didn't fix the crashing) but now I'm having a weird problem. I used to get a 2100mhz core clock with a 100 offset, now it only goes to 2070mhz with the same offset. Does anyone have an explanation of what could be causing this?


Faulty card or PSU I would guess.


----------



## ENTERPRISE

Pilz said:


> I've been having tons of crashes in games, namely Shadow of the Tomb Raider, PUBG and COD BO4. I tried running all of them at stock clocks to no avail. I ended up doing a clean install of the drivers (didn't fix the crashing) but now I'm having a weird problem. I used to get a 2100mhz core clock with a 100 offset, now it only goes to 2070mhz with the same offset. Does anyone have an explanation of what could be causing this?


I have been seeing this a lot from other users. I would do a fresh install of Windows (pre-1809) and use it as a test arena. You can keep your existing Windows; just install a test Windows on another drive, install the latest drivers and a game that has problems, and test to see whether it has the same issues.

If it does, you could be looking at a driver issue or a possible hardware issue.


----------



## Pilz

ENTERPRISE said:


> Pilz said:
> 
> 
> 
> I've been having tons of crashes in games, namely Shadow of the Tomb Raider, PUBG and COD BO4. I tried running all of them at stock clocks to no avail. I ended up doing a clean install of the drivers (didn't fix the crashing) but now I'm having a weird problem. I used to get a 2100mhz core clock with a 100 offset, now it only goes to 2070mhz with the same offset. Does anyone have an explanation of what could be causing this?
> 
> 
> 
> I have been seeing this a lot from other users. I would do a fresh install of Windows (Pre 1809) and use it as test arena. You can keep your other windows, just install a test windows on another drive, install the latest drivers and a game that has problems and test to see if it has the same issues.
> 
> If it has the same issues you could be looking at a driver issue or a possible hardware issue.

I know BO4 has inherent issues, but I believe it's the newest Nvidia driver, because that's when things started to crash. I'll likely do a fresh install if needed, but I'd rather avoid messing with Windows if possible. I haven't updated to the latest version yet because each one has had issues; Microsoft seems to be releasing alpha builds considering how buggy they have been.

Is it possible the BIOS update I did for my motherboard is affecting the GPU like that? I don't see it being likely, but I want to eliminate all other variables. I also returned my RAM to 3000MHz for this reason; that helped with some crashes despite it passing a lengthy stress test.


----------



## Pilz

tamas970 said:


> Pilz said:
> 
> 
> 
> I've been having tons of crashes in games, namely Shadow of the Tomb Raider, PUBG and COD BO4. I tried running all of them at stock clocks to no avail. I ended up doing a clean install of the drivers (didn't fix the crashing) but now I'm having a weird problem. I used to get a 2100mhz core clock with a 100 offset, now it only goes to 2070mhz with the same offset. Does anyone have an explanation of what could be causing this?
> 
> 
> 
> windows 1809 update?




Newbie2009 said:


> Pilz said:
> 
> 
> 
> I've been having tons of crashes in games, namely Shadow of the Tomb Raider, PUBG and COD BO4. I tried running all of them at stock clocks to no avail. I ended up doing a clean install of the drivers (didn't fix the crashing) but now I'm having a weird problem. I used to get a 2100mhz core clock with a 100 offset, now it only goes to 2070mhz with the same offset. Does anyone have an explanation of what could be causing this?
> 
> 
> 
> Faulty card or PSU I would guess.


For some reason the mobile site wasn't letting me multi-quote; now it is.

No, I'm on whatever build was before the newest one, because I always postpone Windows updates for at least a few weeks due to the issues MS has releasing a stable build.

My card was perfectly fine beforehand and only started crashing a few days ago. Worst case I'll RMA. My PSU is fine, although I'm reaching its upper limits now by pulling 700W (Corsair AX760), so it'll be replaced with a new AX1600i or something similar, depending on what I feel like.


----------



## xermalk

Was it confirmed that the new cards will not allow BIOS flashing?

I was looking at the Palit cards, but their power limit is quite low compared to the others.


----------



## GraphicsWhore

Pilz said:


> For some reason the mobile site wasn't letting me multi quite, now it is.
> 
> No, I'm on whatever build was before the newest one because I always postpone windows updates for at least a few weeks due to the issues MS has releasing a stable build.
> 
> My card was perfectly fine beforehand and only started having crashes a few days ago. Worst case I'll RMA. My PSU is fine, although I'm reaching it's upper limits now by pulling 700W (Corsair AX760) so it'll be replaced with a new AXi 1600 or something similar depending on what I feel like.


Try a fresh Windows install to make sure. Although you're not on 1809, Windows is still a POS and should always be suspected. I updated to the latest build last week and all my games crashed to desktop immediately. It seemed DirectX-related, but no fix worked, so I had to do a fresh install, and now everything is normal.


----------



## Okt00

Monitor was supposed to show up Friday... I had this GPU just staring at me all weekend. Sure is pretty. 

It was a race between an Active DP to DVI-D adapter and a Predator X34... both are slated for delivery today. Gonna have some fun when I get home from the office, for sure! 

All I want to do is push it and see what the silicon can do. Maybe I'll ask the wife for a waterblock for Christmas.


----------



## raider89

Would like to be added to the owners list.

EVGA 2080Ti XC Ultra


----------



## nfsking2

OK, I managed to get the EVGA +30% power BIOS working on an Inno3D reference-board RTX 2080 Ti, and everything is working great so far.

As this topic mentioned, some cards might have the EEPROM locked down, so you should unlock it first.

I used NVFlash 5.513.0. First run 'nvflash64 -r' to turn the EEPROM write protection off (it's all in nvflash's help output).

Then run 'nvflash64 -6 evga130.rom'. After a few seconds of blank screen the flashing is done; reboot.

You may need to reinstall the display driver.

Here is the screenshot.

Update:

I just tuned the frequency a little (lower, in fact, since the boost frequency was even higher due to the raised power limit). As a result the power limit is indeed at 130% and the GPU core frequency is more stable than before, but the fan noise is also louder.

I got a slightly higher graphics score in 3DMark Time Spy Extreme (7070 → 7115), but the voltage control didn't seem to work in EVGA XOC, so maybe I should try MSI Afterburner later.


----------



## Pilz

nfsking2 said:


> OK, I managed to get the EVGA +30% power BIOS work on a Inno3D reference board RTX2080Ti, and everything is working great by far.
> 
> As this topic mentioned, some card might have the EPPROM locked down, so you guys should unlock it at first.
> 
> I used the NVFLASH 5.513.0, first input 'nvflash64 -r' to turn the EPPROM write protection off. (It's all written in the help info of the nvflash)
> 
> Then input 'nvflash64 -6 evga130.rom'. After a few seconds' blank screen, the flashing is done, reboot.
> 
> You may need to reinstall the display driver.
> 
> Here is the screenshot.
> 
> update:
> 
> I just tuned the freq a little bit (lower infact, cuz the boost freq was even higher due to the higher power limit) , as the result, the power limit is indeed at 130%, and the GPU core freq is more stable than before, but the fan noice is also louder than before.
> 
> I got a little higher graphic score in 3DMark Time Spy Extreme (7070 --> 7115), but the voltage control seemed not working in EVGA XOC, so maybe I should try MSI Afterburner later.


What was your max boost speed? The voltage slider seems to provide more stability at higher clocks and does not actually change the voltage, as has been mentioned here before.

Edit: Running a stock FE card overclocked, I eked out 7443 in Time Spy Extreme graphics. I could do better if I actually had the time to work on memory stability and refine my core offset some more. I posted the results a few days ago; you should be able to find the screenshot.

I'm surprised your EVGA card isn't doing better; even on a sh*t benchmark run mine was ~7200.


----------



## arrow0309

What?
Is nobody cross-flashing BIOSes anymore?
How about flashing a (slightly more powerful) MSI Sea Hawk X BIOS onto my (incoming) MSI Duke (sub-300W BIOS)?


----------



## sblantipodi

Is anyone experiencing monitor flickering while G-Sync is on?


----------



## larrydavid

sblantipodi said:


> is there someone experiencing monitor flickering while GSYNC is on?


I've read the 416 branch may have this issue.


----------



## sblantipodi

larrydavid said:


> I've read the 416 branch may have this issue.


So the issue is still there?


----------



## Madness11

Hello guys, I bought a Zotac RTX 2080 Ti AMP. Is it a bad card? Or do I need to change to another brand?


----------



## arrow0309

Madness11 said:


> Hello. Guys I buy zotac rtx2080ti amp , it's bad card?? Or I need change to another brand


They're all reference-design AIBs; the AMP here is a factory-OC model, pretty much like all the other cards with the binned "A" class TU102.
Why should this one be a "bad card"?
Are you planning to keep it air-cooled? Then it's better than average (cooling-wise).
Or do you want to go water cooling?

But have you actually bought it, or are you just thinking about it?


----------



## Frozburn

Found a 2080 Ti Duke OC for 1250 euro here and I'm thinking about getting it. I've been trying to find the 2080 Ti Gaming X Trio for ages now and it's just nowhere to be found :/ From what I can see, the Duke is just as good as the Trio. Is there something I'm missing here? The Trio even costs over 1300.


----------



## Madness11

I've bought it, and I'm waiting for the new 9900K. I'm just asking: is it a good card, or is it better to take the MSI Trio X?


----------



## Pilz

arrow0309 said:


> Madness11 said:
> 
> 
> 
> Hello. Guys I buy zotac rtx2080ti amp , it's bad card?? Or I need change to another brand
> 
> 
> 
> They're all reference design AIB's, the amp here is a factory oc model pretty much like all the other cards of the "binned" A class TU102
> Why should this one be a "bad card"?
> Are you planning to keep it air cooled? Than it's better than average (cooling-wise).
> You wanna go water cooling?
> 
> But have you really bought it or you just think about it?

The FE is also a binned model, so in most cases it's a better deal to slap a block on it if you don't care about the factory cooler.


----------



## arrow0309

Frozburn said:


> Found a 2080 Ti Duke OC for 1250 euro here and thinking about getting it. I've been trying to find the 2080 Ti Gaming X Trio for ages now and it's just nowhere to be found :/ From what I see, the Duke is just as good as the Trio. Is there something I am missing here? The Trio even costs over 1300


It's the one I've ordered, hoping a new BIOS comes out with a somewhat higher power limit (since I'm gonna watercool it).
The question is where? Or (better) when can you find it at this price? The UK price would be £1,139, still on preorder, ****'s sake.



Madness11 said:


> I m buy . And wait new 9900k, but I m just ask , it's good card ?? Or better take msi trioX?


Again, where from / when? 31 November?


----------



## gavros777

Hello guys, can you help me with this one: where can I register the Gigabyte 2080 Ti for the 4-year warranty?
Is it the Aorus site? I registered an account on gigabyte.com, but there's no option to register my card there.


----------



## raider89

gavros777 said:


> Hello guys can you help me on this one, where can i register the gigabyte 2080 ti for the 4 year warranty?
> Is it the aorus site as i registered an account on gigabyte.com but no option to register my card exists there.


https://www.aorus.com/event-detail.php?i=764


----------



## truehighroller1

Hey, just wanted to drop in and report back on my success. I have the Asus dual-fan 2080 Ti OC edition, and flashed it to the Galax reference-PCB (2x8-pin) RTX 2080 Ti BIOS: 300W base with a 126% power target (380W).

Score before and after comparison

https://www.3dmark.com/compare/spy/4736275/spy/4732403

I had to manually reset Afterburner to get the new TDP to kick in correctly, but it did, and the card gets to about 71C now instead of 59C.

Loving this card now.

Comparison to my old card for the hell of it.

https://www.3dmark.com/compare/spy/3179168/spy/4736275

Those were the results I was looking for out of this card.
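For reference, the headline wattage of one of these BIOSes is just its base power target times its max power-limit slider. A quick sketch with the numbers quoted in this thread (the base TDPs are the figures mentioned here, not verified against the ROMs):

```python
# Effective power ceiling = base power target x max power-limit slider.
# Base TDPs below are the figures quoted in this thread (assumptions).

def max_power_w(base_tdp_w: float, max_limit_pct: float) -> float:
    """Highest board power a BIOS allows at its maximum power-limit slider."""
    return base_tdp_w * max_limit_pct / 100.0

print(max_power_w(300, 126))  # Galax BIOS: 300 W x 126% = 378 W (marketed as 380 W)
print(max_power_w(250, 120))  # reference BIOS: 250 W x 120% = 300 W
```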


----------



## tamas970

truehighroller1 said:


> Hey just wanted to drop in and report back about my success. I have the asus dual fan 2080ti oc edition, and flashed to the GALAX Reference PCB (2x8-Pin) RTX 2080 Ti 300W x 126% Power Target BIOS (380W).
> Score before and after comparison
> https://www.3dmark.com/compare/spy/4736275/spy/4732403
> I had to manually reset afterburner to get the new tdp to kick in correctly but it did and the card gets to about 71c now instead of 59c.
> Loving this card now.
> Comparison to my old card for the hell of it.
> https://www.3dmark.com/compare/spy/3179168/spy/4736275
> Those were the results I was looking for out of this card.


Thanks! Any idea how the minimum FPS was affected by the new BIOS? 3% is not quite the kind of "boost" for which I would risk the adverse effects of third-party firmware.


----------



## truehighroller1

tamas970 said:


> Thanks! Any idea how the minimum FPS was affected by the new BIOS? 3% is not quite the "boost" for I would be risking adverse effects of a 3rd party firmware.


I have no clue, but when I pay $1350 for a new video card, I expect full control of said video card. I just wanted to report back that it does indeed work, 100%, no shunt mod needed. It also means that if someone comes out with a BIOS that has no limit, based on the reference design, it should work.

Edit: I'd also like to say that I can see the difference in the stability of the clocks in Afterburner, so I'd think it helped a pretty decent amount in smoothing things out as far as FPS goes.


----------



## dante`afk

Does anyone know where to get an Inno3D 2080 Ti in the US? It seems they only sell in the EU.


----------



## Pilz

dante`afk said:


> Does anyone know where to get an Inno3d 2080Ti in the US? Seems this only sells in EU.


I haven't seen them for sale yet. Why would you want theirs vs. EVGA, MSI, etc.?


----------



## GraphicsWhore

This Afterburner issue is driving me nuts. I open the curve editor and everything is at the bottom (first pic).

Then if I raise it to a reasonable level, like 2080MHz @ 1050mV, it says I'm trying to add over 1000MHz to the core. What??


----------



## Emmett

GraphicsWhore said:


> This AfterBurner issue is driving me nuts. I open the curve editor and everything is at the bottom (first pic).
> 
> Then if I raise it to a reasonable level like 2080 @ 1050mV, it says I'm trying to add over 1000 to the core. What??


Try re-seating, or DDU?


----------



## SharpShoot3r07

Two questions.

I still game at 1080p with a 980 Ti. I feel like a 2080 Ti would be overkill, so I was going to go with the 2080. Would that still be overkill? What if I get a 2K (1440p) monitor? Do people game at 2K, or is it 4K or bust?

Do you think Nvidia will do a game + GPU bundle like they have in the past? I'm ready to buy a card NOW, but I was hoping for a COD or Battlefield bundle in a month or two.


----------



## Zammin

SharpShoot3r07 said:


> Two questions.
> 
> I still game in 1080p with a 980ti. I feel like a 2080ti would be overkill so I was going to go with the 2080. Would that still be overkill? What if I get a 2k monitor. Do people game in 2k or is it 4k or bust?
> 
> Do you think Nvidia will do a game + GPU bundle like they have in the past. I'm ready to buy a card NOW but was hoping for a COD or Battlefield bundle in a month or two.


For 1080p it might be overkill, unless maybe you have a 240Hz monitor, but even then you will be CPU-limited in most cases at that resolution. For 1440p at 144/165Hz it's not really overkill; I run a GTX 1080 Ti Strix on an Acer XB271HU 165Hz 1440p monitor, and with the RTX 2080 Ti I expect to run the same high frame rates but with the graphics settings cranked up higher.

A 4k monitor would be a good match as well.


----------



## jamesch

SharpShoot3r07 said:


> Two questions.
> 
> I still game in 1080p with a 980ti. I feel like a 2080ti would be overkill so I was going to go with the 2080. Would that still be overkill? What if I get a 2k monitor. Do people game in 2k or is it 4k or bust?
> 
> Do you think Nvidia will do a game + GPU bundle like they have in the past. I'm ready to buy a card NOW but was hoping for a COD or Battlefield bundle in a month or two.


For 1080p 60fps, a 980 Ti is already an overkill card. If you are gaming at higher fps (120 or 144) a faster card would be better, but you are actually much more CPU-limited than GPU-limited at 1080p. A card that's 40% faster at 4K might be only 15% faster at 1080p, and generally you'll want a 5GHz+ CPU much more than just a faster GPU.

Whereas at 4K you could literally downclock an 8700K to 3.7GHz (base) and not lose a single frame in gaming.
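The 1080p-vs-4K scaling is easy to see with a toy bottleneck model, where the frame rate is capped by whichever of the CPU or GPU takes longer per frame. All the frame times below are made-up illustrative numbers, not benchmarks:

```python
# Toy bottleneck model: fps is limited by the slower of CPU and GPU per frame.
# Frame times are hypothetical, chosen only to illustrate the scaling.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Frame rate when CPU and GPU work concurrently per frame."""
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = 7.0                              # hypothetical CPU cost per frame (~143 fps cap)
old_gpu_1080p, old_gpu_4k = 8.0, 25.0     # old card's per-frame render times
new_gpu_1080p, new_gpu_4k = 5.7, 17.9     # a ~40% faster card

print(fps(cpu_ms, old_gpu_4k), fps(cpu_ms, new_gpu_4k))        # 4K: 40 -> ~56 fps, the full ~40% gain
print(fps(cpu_ms, old_gpu_1080p), fps(cpu_ms, new_gpu_1080p))  # 1080p: 125 -> ~143 fps, only ~14%, capped by the CPU
```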


----------



## dVeLoPe

EVGA GeForce RTX 2080 Ti XC Black Edition (11G-P4-2282-KR)

Can this card be BIOS-flashed? How does it perform vs. other 2080 Tis?


----------



## GraphicsWhore

Emmett said:


> try reseating, DDU?


Custom loop, so no re-seating, but the card is fine. I can use AB to overclock and all monitors work. I had Windows installed on a test drive a few days ago and the behavior was the same. I wonder if it has to do with the EVGA BIOS; I'm using the official 130% PL, but I think the Galax BIOS did the same crap.



dVeLoPe said:


> EVGA GeForce RTX 2080 Ti XC BLACK EDITION (11G-P4-2282-KR)
> 
> can this card be bios flashed? how does it perform vs other 2080ti?


Guess you're about to find out.


----------



## dVeLoPe

I'm not, unless someone replies; my step-up queue probably won't be ready until next year lol.


----------



## jm600rr

So due to availability, and Amazon only allowing single-card purchases, I ended up with a Zotac 2080 Ti and the Ti AMP. Going to run them in SLI. Can I flash the non-AMP version with the AMP firmware? If so, where can I get the file?

Thanks!


----------



## Nizzen

jamesch said:


> SharpShoot3r07 said:
> 
> 
> 
> Two questions.
> 
> I still game in 1080p with a 980ti. I feel like a 2080ti would be overkill so I was going to go with the 2080. Would that still be overkill? What if I get a 2k monitor. Do people game in 2k or is it 4k or bust?
> 
> Do you think Nvidia will do a game + GPU bundle like they have in the past. I'm ready to buy a card NOW but was hoping for a COD or Battlefield bundle in a month or two.
> 
> 
> 
> For 1080p 60fps 980Ti is already an overkill card. If you are gaming at higher fps (120 or 144) a faster card would be better, but actually you are much more CPU limited at 1080P than GPU limited. A card that's 40% faster in 4K might be only 15% faster in 1080p, and generally you'll want a 5GHz+ CPU much more than just a faster GPU.
> 
> Whereas at 4K you could literally downclock a 8700K to 3.7GHz (base) and not lose a single frame in gaming.

With 2x 2080 Ti NVLink I'm CPU-bound in BF1 at 4K, with a 7980XE @ 4.6GHz and tweaked 4000 C17 memory.

Need an 8GHz CPU 😛


----------



## tamas970

Nizzen said:


> With 2x 2080ti nvlink I'm cpubound in BF1 4k , with 7980xe @ 4.6ghz and 4000 c17 tweaked memory.
> 
> Need 8ghz cpu 😛


Maybe too many unused cores and thus not enough GHz? A 9900K might give you 5.2GHz (+13%) and still enough threads.

Question, though, whether the 2x8 PCIe lanes of LGA1151 would become a bottleneck...


----------



## BigMack70

Nizzen said:


> With 2x 2080ti nvlink I'm cpubound in BF1 4k , with 7980xe @ 4.6ghz and 4000 c17 tweaked memory.
> 
> Need 8ghz cpu 😛


I think I'm slightly CPU-bound in BF1 at 4K on a single 2080 Ti with my 4.4GHz 5930K... it's kind of crazy.


----------



## Pilz

BigMack70 said:


> Nizzen said:
> 
> 
> 
> With 2x 2080ti nvlink I'm cpubound in BF1 4k , with 7980xe @ 4.6ghz and 4000 c17 tweaked memory.
> 
> Need 8ghz cpu 😛
> 
> 
> 
> I think I'm slightly CPU-bound in BF1 at 4k on a single 2080 Ti with my 4.4 GHz 5930k... it's kind of crazy

No game utilizes all of the VRAM even at 2K/Ultra. I'll have to try things at 4K/Ultra on my QLED TV today. My 6800K at 4.4GHz might be having the same issue.

I'm not sure it's worth going HEDT again this time, since I don't really render things as much, plus 8 cores (but fewer PCIe lanes if you exclude PCH lanes) could be an issue. I want to see if we can do 5.4GHz on all 8 cores with a good open loop. 5.3GHz will be the minimum I'll accept for the 9900K as the base/boost, since I'll keep it static at that speed. I usually run my 6800K at 4.35GHz and that's my only clock (there's no base or boost; it idles and runs at my set frequency by choice).

My system is becoming power-bound by my AX760 PSU now. According to my UPS I'm pulling at least 690-700W, and who knows how accurate that is. Considering I have 2 D5 pumps, 18 120mm fans, 3 140mm fans, an Aquaero, and other stuff, it's entirely possible. My GPU can pull 360W under heavy load with my OC, and the CPU likely pulls over TDP (200W with my OC if I had to guess). Add to that my fans, which should pull ~60W (0.225A each), and pumps at ~50W, so I guess my next step is an AX1600i.
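As a rough check, those component guesses can be totted up; they're the post's estimates, not measurements, and a UPS reads AC wall draw, which sits above the DC load by the PSU's conversion losses:

```python
# Rough system power budget behind the "~700 W on the UPS" figure.
# Wattages are the guesses from the post above, not measurements.

loads_w = {
    "GPU (2080 Ti, OC, under load)": 360,
    "CPU (6800K OC, guessed)": 200,
    "fans (21 x ~0.225 A @ 12 V)": 60,
    "pumps (2 x D5)": 50,
}

total = sum(loads_w.values())
print(f"estimated DC load: {total} W")  # 670 W, near the AX760's 760 W rating
# The UPS reads higher (~690-700 W) because it sees AC wall draw,
# which includes PSU conversion losses on top of this DC load.
```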


----------



## BigMack70

Pilz said:


> No game utilizes all of the VRAM even in 2K/Ultra. I'll have to try things on 4K/Ultra on my QLED tv today. My 6800K at 4.4ghz might be having the same issue. I'm not sure if it's worth going HEDT again this time since I don't really render things as much plus 8 cores (but less PCIE lanes if you exlcude PCH lanes) could be an issue. I want to see if we can do 5.4ghz for all 8 cores with a good open loop. 5.3ghz will be the minimum I'll accept for the 9900K as the base/boost since I'll keep it static at that speed. I run my 6800K at 4.35ghz usually and that's my only clock (there's not base or boost since it idles and runs at my set freq by choice). My system is becoming power bound by my AX760 PSU now. According to my UPS I'm pulling at least 690-700W and who knows how accurate that is. Considering I have 2 D5 pumps, 18 120mm fans, 3 140mm fans, Aquaero, and other stuff it's entirely possible. My GPU can pull 360W under heavy load with my OC, CPU likely pulls over TDP (200W with my OC if I had to guess) add that to my fans which should pull ~60W (0.225A each) pumps are ~50W so I guess my next step is a AX1600i


I am trying to wait on a CPU upgrade until PCI Express 4.0 hits the market. My 5930K can still do way more than 60fps (100-120fps seems like my average in BF1, depending on the map) and my 4K TV is only 60Hz anyway. So it feels bad to see GPU usage in BF1 dip to 95% at times because of the CPU, but it doesn't mean anything in actual gameplay.

I generally don't like to do CPU upgrades frequently... once every 4-5 years is usually enough to keep up with games, and I'm concerned that if I jump to a 9900K I'll regret it in a couple of years when new cards come out and I don't have a PCIe 4.0 motherboard.

I'm also uncertain whether HEDT will be worth it in future. Having quad-channel memory has been really nice with Haswell-E; I hear about all kinds of performance issues people have with slower dual-channel memory on modern CPUs, but I've not perceived any issues on quad-channel DDR4-2666. I'm also uncertain whether going above 8C/16T will benefit games within the next five years... games are only now moving beyond 4C/4T.


----------



## iNen1

Managed to flash the Galax BIOS on my Windforce 2080 Ti. It doesn't quite reach 380W.

https://i.imgur.com/f5tSfuR.gif

Also, I got a stinker of a card. It can only manage around 1980MHz stable on the core clock, even with the new BIOS. For my next card I'm gonna splash out some extra dough and get a Strix; at least then I'll be sure to get a top-of-the-line binned chip.


----------



## nycgtr

iNen1 said:


> Managed to flash Galax Bios on my Windforce 2080 ti. Doesn't quiet reach 380W.
> 
> https://i.imgur.com/f5tSfuR.gif.
> 
> Also, got a stinker of a card. Can only manage around 1980 MHz stable on core clock, even with new Bios. For my next card i'm gonna splash out some extra dough and get a Strix, at least i'll be sure to get a top of the line binned Chip.


It doesn't work that way. I can tell you my best card out of 10 is a Windforce. You just lost the lottery; I don't feel the Strix will be any different.


----------



## Zammin

iNen1 said:


> Managed to flash Galax Bios on my Windforce 2080 ti. Doesn't quiet reach 380W.
> 
> https://i.imgur.com/f5tSfuR.gif.
> 
> Also, got a stinker of a card. Can only manage around 1980 MHz stable on core clock, even with new Bios. For my next card i'm gonna splash out some extra dough and get a Strix, at least i'll be sure to get a top of the line binned Chip.





nycgtr said:


> Doesn't work that way. I can tell you my best card outta 10 is a windforce. You just lost the lottery. I don't feel the strix will be any different.


Yeah I agree, buying the more expensive cards doesn't always mean higher bin. I have a 1080Ti Strix OC which was the most expensive 1080Ti in Australia at the time that I bought it and it clocks like an average 1080Ti. Nothing special. I think with the 2080Ti's the best you can do is just make sure you are getting one with a TU102-300A chip (as opposed to TU102-300) and other than that it just comes down to silicon lottery.

I'm watercooling so I ordered the EVGA XC Gaming since it has the 300A chip and wasn't too expensive. I feel like these cards are so power limited that buying a top shelf model with beefy power delivery won't benefit the average gaming user like me much, plus reference gives you more options for water blocks.


----------



## Jpmboy

Nizzen said:


> With 2x 2080ti nvlink I'm cpubound in BF1 4k , with 7980xe @ 4.6ghz and 4000 c17 tweaked memory.
> 
> Need 8ghz cpu 😛


what's the thread utilization? … but you know a 5.2 GHz 8700K/8086K is better for graphics/gaming at the GPUs' limits. 



tamas970 said:


> Maybe too many unused cores thus not enough GHz? A 9900k might give you 5.2GHz (+13%) and yet enough threads.
> 
> Question though if the 2x8 PCIe lanes of the LGA1151 would become a bottleneck...


not saturating the PCIe at x8. Even 4 TXps are not lane restricted. Maybe at 8K or 4K120 and higher you'd see some x8 effect, but not at 4K60.
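The x8 claim is easy to sanity-check with back-of-envelope numbers. The sketch below is a rough upper-bound model, not how SLI traffic actually works; the full-frame-copy-per-frame assumption is mine, purely for illustration:

```python
# Back-of-envelope check on whether PCIe 3.0 x8 bottlenecks multi-GPU at 4K60.
# Model: every rendered frame is copied whole across the bus (a pessimistic,
# AFR-style assumption chosen for illustration; real traffic is lower).

PCIE3_LANE_GBPS = 0.985  # ~985 MB/s per PCIe 3.0 lane after 128b/130b encoding

def frame_traffic_gbps(width, height, fps, bytes_per_pixel=4):
    """GB/s needed to copy every rendered frame across the bus."""
    return width * height * bytes_per_pixel * fps / 1e9

x8_budget = 8 * PCIE3_LANE_GBPS  # ~7.9 GB/s
print(f"x8 budget: {x8_budget:.1f} GB/s")
for label, (w, h, fps) in {"4K60": (3840, 2160, 60),
                           "4K120": (3840, 2160, 120),
                           "8K60": (7680, 4320, 60)}.items():
    t = frame_traffic_gbps(w, h, fps)
    print(f"{label:6s} {t:.1f} GB/s ({t / x8_budget:.0%} of x8)")
```

Even by this crude model, 4K60 uses roughly a quarter of the x8 budget, while 8K60 sits right at the ceiling, which lines up with the "maybe at 8K or 4K120" caveat.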


----------



## Zammin

On a side note, the reviews for the RTX 2070 came out along with pricing. They cost over $1000 in Australia, which is the same as (or in some cases more than) a 1080Ti, and it appears to perform only a bit better than a regular 1080 depending on the game. I had a bit of a chuckle. I wonder what the point of the 2070 is. Will it even be able to do ray tracing well, given it has considerably fewer RT cores than the other two? And when you take ray tracing out of the picture it's a terrible deal: paying the same amount for notably less performance than the Pascal price equivalent.

At least with the 2080Ti the performance is notably better than a Titan Xp for a bit less money.

Just the way I see it anyway.


----------



## Jpmboy

Zammin said:


> On a side note, the reviews for the RTX 2070 came out along with pricing. They cost over $1000 in Australia, which is the same as (or in some cases more than) a 1080Ti, and it appears to perform only a bit better than a regular 1080 depending on the game. I had a bit of a chuckle. I wonder what the point of the 2070 is. Will it even be able to do ray tracing well, given it has considerably fewer RT cores than the other two? And when you take ray tracing out of the picture it's a terrible deal: paying the same amount for notably less performance than the Pascal price equivalent.
> 
> At least with the 2080Ti the performance is notably better than a Titan Xp *for a bit less money*.
> 
> Just the way I see it anyway.


not when you consider that the TXp has been out for what... 2 years now? … I'm still holding out for the Turing Titan (the RTX 6000 is pre order atm)


----------



## iNen1

nycgtr said:


> Doesn't work that way. I can tell you my best card outta 10 is a windforce. You just lost the lottery. I don't feel the strix will be any different.


I'm basing this off the LN2 video Der8auer made with the Strix RTX 2080 Ti. He wanted to break the 3DMark record, so he got himself 5 samples to choose the best 2 cards from. None clocked under 2160 MHz; the highest clocking was 2190 MHz.


----------



## Zammin

Jpmboy said:


> not when you consider that the TXp has been out for what... 2 years now? … I'm still holding out for the Turing Titan (the RTX 6000 is pre order atm)


Yeah I understand we would expect a bigger performance leap after 2 years, I was just comparing the merit of the 2080Ti to the 2070, for which I can't really think of any haha.

Edit: Sorry, I meant the Titan Xp which came out April last year and is still available on Nvidia's website, not the Titan X Pascal which came out the year before.


----------



## raider89

Are we able to use the FTW3 150% bios on the XC Ultra? Not sure if that is a thing or not.


----------



## iNen1

raider89 said:


> Are we able to use the FTW3 150% bios on the XC Ultra? Not sure if that is a thing or not.


I think the FTW3 has a custom PCB. Flashing that BIOS onto a reference card won't work, I think.


----------



## Jpmboy

Zammin said:


> Yeah I understand we would expect a bigger performance leap after 2 years, I was just comparing the merit of the 2080Ti to the 2070, for which I can't really think of any haha.
> 
> Edit: Sorry, I meant the Titan Xp which came out April last year and is still available on Nvidia's website, not the Titan X Pascal which came out the year before.


Was it only last April? Seems like an eternity.


----------



## GraphicsWhore

raider89 said:


> Are we able to use the FTW3 150% bios on the XC Ultra? Not sure if that is a thing or not.


Try it and let us know


----------



## Zammin

Jpmboy said:


> was it only last april? seems like an eternity.


It sure does haha. Been a long wait for the new line of GPUs.

I wonder what the RTX Titan card will cost; this launch is different from the last in that the XX80Ti card came out before the first Titan. It will be interesting to see the performance gap as well.


----------



## nycgtr

iNen1 said:


> I'm basing this off the LN2 video Der8auer made with the Strix RTX 2080 Ti. He wanted to break the 3DMark record, so he got himself 5 samples to choose the best 2 cards from. None clocked under 2160 MHz; the highest clocking was 2190 MHz.


Look at the other Strix samples across the board from other YouTubers. It looks a lot less impressive when you consider the range. Der8auer is an Asus-sponsored overclocker; they wouldn't send him a non-cherry-picked sample from the get-go. You got an average card. Lots of people are throwing around 2100 numbers, but that's a rare bunch tbh, which is why I gave up after 10. I can't even keep track of how many tbh; I had way too many that I popped open. I still have more new ones and I'm not even gonna bother. High 1900s is average, low-to-mid 2000s is good, 2100+ is very very good.


----------



## rush2049

So after doing a lot of experimenting this is what I found when using the OC Scanner tool:

- After finishing a scan it tells you what the dominant limiter was; since the scan runs a math workload and the reading is based off of FPS, it will always say "no load" was the dominant limitation.

- During a scan, if the GPU usage drops in the middle of a short test, that is what determines the limit, and it backs down on that voltage a step or two.

- If your system crashes or black-screens during scanning, be patient and it will automatically reboot for you. If the display driver somehow recovers instead, reboot before continuing. After the reboot let everything settle, make sure to dial in the EXACT settings in Afterburner/OC tool of choice from when you started the scan, and the scan should resume when you hit start.

- The fans use a non-trivial amount of power, and the three-phase motors affect the overclock, so set them at a constant 41% for the scan. Watch your temps however... near 80C you should stop, raise the fan speed slightly, and start over.

- Do not have any memory overclock applied during the scan. (I have not tested this, but an underclock may help the tool stress only the core; unconfirmed atm.)



I have found my card overclocks poorly at the lower voltage levels but can take about +270 MHz at the higher voltages, so I custom edit the curve to follow this +270 offset above 1 volt and a more modest +50 MHz offset below 1 volt. Works well for me.

I have not yet overclocked the memory as I don't have a ton of time to watch for artifacting at the moment.
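That split-offset curve can be sketched in a few lines. The curve points below are invented for illustration; a real Afterburner VF curve has many more (voltage, frequency) pairs:

```python
# Sketch of the split-offset approach described above: a modest +50 MHz
# below 1.000 V and +270 MHz at/above it. The stock points are made-up
# example values, not real Afterburner curve data.

def apply_split_offset(curve, threshold_mv=1000, low_offset=50, high_offset=270):
    """Return a new (mV, MHz) curve with a per-point offset based on voltage."""
    return [(mv, mhz + (high_offset if mv >= threshold_mv else low_offset))
            for mv, mhz in curve]

stock = [(800, 1545), (900, 1720), (1000, 1860), (1050, 1920)]
print(apply_split_offset(stock))
# the two points below 1.000 V gain +50, the two at/above it gain +270
```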


----------



## Okt00

Seemed to be hitting about 2070 MHz during boost, with a +1000 memory offset. 

Played an hour of DOOM last night at 3440x1440 Ultra at around 160fps, and BF1 all maxed at around 118fps, however I was seeing some stuttering (frames didn't drop at all). I did a bit of google-fu and have a few things to try. Not sure if it's the DX12 setting, or uncapped frames, etc. 

Hell of a card though. The GTX 970 was great, but this is so much better. 

Interested in taking a real swing at the OC. What's everyone using to dial in the clocks? Loop Heaven, or is there a new one out?


----------



## Bakerman

truehighroller1 said:


> I have no clue but when I pay $1350 for a new video card I get full control of said video card. I just wanted to report back that it does indeed work 100% no shunt mod needed. It also means that if someone comes out with one that has no limit based on reference design, it should work.
> 
> Edit: I also would like to say that I can see the difference in the stability of the clocks in afterburner as well so I would think it helped a pretty decent amount in smoothing things out as far as fps go.



So you are complaining about the reference design and flashed your card to get a higher power limit. Yet your score improved a measly 3%, which is zero difference in real life, and your score is still lower than my FE card. 

I don't see any point getting any non-reference card after reading this.


----------



## Pilz

rush2049 said:


> So after doing a lot of experimenting this is what I found when using the OC Scanner tool:
> 
> -After finishing a scan it says what the dominant limiter was; as its running math it is always on 'no load' as this is based off of FPS... thus it will always say no load was the dominant limitation.
> 
> - During a scan, if the GPU usage drops in the middle of a short test this is what determines the limit and it backs down on that voltage a step or two.
> 
> - If your system crashes or black screens during scanning, be patient and it will automatically reboot for you. If somehow the display driver recovers, reboot before continuing. After the reboot let everything settle, and make sure to dial in the EXACT settings in afterburner/oc tool of choice from when you started scan, and the scan should resume when you say start.
> 
> -The Fans use a non-insubstantial amount of power / the three-phase motors affect the overclock. Set them at 41% constant for the scan. Watch your temps however.... near 80C and you should stop and raise fan speed slightly. And start over.
> 
> -Do not have any memory overclock during the scan. (Also I have not tested this, but maybe have an underclock may aid the scan tool to test only the core, unconfirmed atm)
> 
> 
> 
> I have found my card to overclock poorly at the lower voltage levels, but can get about +270mhz at the higher voltages. I custom edit the curve to follow this +270 offset at greater than 1 volt. And adjust to a more modest +50 mhz offset at lower than 1 volt. Works well for me.
> 
> I have not yet overclocked the memory as I don't have a ton of time to watch for artifacting at the moment.


Which OC scanner are you using that says the failure reason? I've only seen AB and EVGA's X1


----------



## tamas970

Bakerman said:


> So you are complaining about the reference design and flashed your card to get a higher power limit. Yet your score improved a measly 3%, which is zero difference in real life, and your score is still lower than my FE card.
> 
> I don't see any point getting any non-reference card after reading this.



The Asus Dual fan 2080 Ti OC is a reference-design card with less-than-stellar cooling...


----------



## jamesch

Bakerman said:


> So you are complaining about the reference design and flashed your card to get a higher power limit. Yet your score improved a measly 3%, which is zero difference in real life, and your score is still lower than my FE card.
> 
> I don't see any point getting any non-reference card after reading this.


Your point is only valid for the 20 series because nvidia had to overclock its reference cards to near their clockspeed limits this generation.

Maxwell, Pascal, Kepler all had about 15-20% performance left under the hood. AIB cards absolutely made sense then.


----------



## Baasha

Yea, games are CPU-bound at 4K with these GPUs. Never thought that would be the case - 6950X @ 4.30GHz.


----------



## GraphicsWhore

rush2049 said:


> So after doing a lot of experimenting this is what I found when using the OC Scanner tool:
> 
> -After finishing a scan it says what the dominant limiter was; as its running math it is always on 'no load' as this is based off of FPS... thus it will always say no load was the dominant limitation.


Thank you. EVGA responds f*** all to questions about Precision/X1, and this was one of them for me, as I kept getting this pop-up.

I have been complaining that my scan seems to run indefinitely, but I read today that it can take 15-20 mins. Is that right? Last time I let it run, at some point my clock went up to 2160 and I crashed. Since then I haven't bothered running it because it didn't seem like it was doing anything.

Do you know if there's a way to adjust all points on the VF curve after one point, like in Afterburner? And why are there so many g-damn points? Plus the GUI is tiny and it's just a pain in the ass.



Pilz said:


> Which OC scanner are you using that says the failure reason? I've only seen AB and EVGA's X1


He's referencing X1. At least that's what I got that error/pop-up in. Might also apply to Precision.


----------



## Pilz

jamesch said:


> Bakerman said:
> 
> 
> 
> So, you are complaining about reference design and flashed your card to get higher power limit. Yet, your score improved measly 3% which is zero difference in real life, and your score is still lower than my FE card.
> 
> I don't see any point getting any non-reference card after reading this.
> 
> 
> 
> 
> Your point is only valid for the 20 series because nvidia had to overclock its reference cards to near their clockspeed limits this generation.
> 
> Maxwell, Pascal, Kepler all had about 15-20% performance left under the hood. AIB cards absolutely made sense then.




GraphicsWhore said:


> rush2049 said:
> 
> 
> 
> So after doing a lot of experimenting this is what I found when using the OC Scanner tool:
> 
> -After finishing a scan it says what the dominant limiter was; as its running math it is always on 'no load' as this is based off of FPS... thus it will always say no load was the dominant limitation.
> 
> 
> 
> Thank you. EVGA responds f*** all to questions about Precision/X1, and this was one of them for me, as I kept getting this pop-up.
> 
> I have been complaining that my scan seems to run indefinitely but I read today that it can take 15-20 mins. Is that right? Last time I let it run indefinitely at some point my clock went up to 2160 and I crashed. Since then I haven't bothered running it because it didn't seem like it was doing anything.
> 
> Do you know if there's a way to adjust all points on the VF curve after one point like in AfterBurner? And why are there so many g-damn points? Plus the GUI is tiny and it's just a pain in the ass.
> 
> 
> 
> Pilz said:
> 
> 
> 
> Which OC scanner are you using that says the failure reason? I've only seen AB and EVGA's X1
> 
> 
> He's referencing X1. At least that's what I got that error/pop-up in. Might also apply to Precision.

My reference card can hit 2205mhz but it's not stable. I think it's possible to see 2160mhz stable on my card because I've gotten very close to that before but stopped because it was a long process. I routinely run 2130mhz completely fine so I agree about the FE cards.


X1 doesn't give me an error reason ever so that's why I'm confused. It also gives me different offsets with no change in temperature (peak is 38°C when I benchmark) so I think it's simply buggy right now. AB beta's scanner doesn't work for me at all so I have no experience in how it gets offsets. Hopefully these will launch with more features and actually save settings on reboot for once.


----------



## rush2049

I am using afterburner for my OC Scanner testing.

I am watching the detailed graphs of GPU usage %, clock speed, voltage, and temp during the scan. The GPU usage dips between MHz changes, but if it goes to 0 and stays there for any amount of time, that's what the scanner counts as a failed test. The scanner tests four different voltages, increasing the MHz in 15 MHz increments each test. When it fails, it moves on to the next voltage to test.

I would say a good 15-25 minutes is what the scanner tool takes to run; it's like 20-30 seconds per test for each 15 MHz step, and on the first voltage it starts really low. (Of course, if your PC crashes and reboots at any failure point and you have to resume, it will take longer.)


I also get varying results from the scanner, and I only really treat it as a starting point. Though I like editing the freq/volt curve way more than a static offset.



Using the OC Scanner Test feature is also useful after manually adjusting the curve. A good thing to know is that the maximum confidence level you can get is 90%, and what the scanner thinks is stable may not pass a Fire Strike run... but if the overclock can't pass the 5-minute scanner test, then it certainly won't pass any benchmark or game. So it whittles out unstable clocks quickly.
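Those per-step timings roughly reproduce the quoted 15-25 minute total. The steps-per-voltage count below is a guess for illustration, since the scanner stops stepping at each voltage once a test fails:

```python
# Rough sanity check on the quoted 15-25 minute scan time: four test
# voltages, ~20-30 s per 15 MHz step. The 10 steps-per-voltage figure
# is an assumed value, not something the tool reports.

def scan_minutes(voltages=4, steps_per_voltage=10, secs_per_step=25):
    """Total scan time in minutes for the assumed step counts."""
    return voltages * steps_per_voltage * secs_per_step / 60

print(f"~{scan_minutes():.0f} min")                  # mid-range estimate
print(f"~{scan_minutes(secs_per_step=30):.0f} min")  # slow-step estimate
```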


----------



## Hulk1988

I saw the two posts from page 1 saying you are able to flash a Trio with the 380W BIOS. But is there any proof besides text? Thank you


----------



## nycgtr

Hulk1988 said:


> I saw the two posts from page 1 saying you are able to flash a Trio with the 380W BIOS. But is there any proof besides text? Thank you


Works. Just pointless to do so.


----------



## dante`afk

Pilz said:


> I haven't seen them for sale yet, why would you want theirs VS EVGA, MSI etc?


From what I've seen so far in some other forums they get very good results: A-chip with standard PCB, 2150-2190 MHz stable on the core, and very cheap to buy.


----------



## dante`afk

jm600rr said:


> So due to availability and Amazon only allowing single card purchases I ended up with Zotac 2080 TI and the TI AMP. Going to run SLI. Can I flash the non-AMP version with the AMP firmware? If so, where can I get the file?
> 
> Thanks!


In the US? You bought them for $1900 each?


----------



## Nizzen

Some 2080ti testing 

https://www.diskusjon.no/uploads/monthly_10_2018/post-42975-0-87647100-1539641006.jpg


----------



## Okt00

Nizzen said:


> Some 2080ti testing
> 
> https://www.diskusjon.no/uploads/monthly_10_2018/post-42975-0-87647100-1539641006.jpg


I think I see an NVLink box in the background... lucky you.


----------



## tamas970

Nizzen said:


> Some 2080ti testing
> 
> https://www.diskusjon.no/uploads/monthly_10_2018/post-42975-0-87647100-1539641006.jpg


I see the reason why the cards are in short supply elsewhere...


----------



## SharpShoot3r07

jamesch said:


> For 1080p 60fps 980Ti is already an overkill card. If you are gaming at higher fps (120 or 144) a faster card would be better, but actually you are much more CPU limited at 1080P than GPU limited. A card that's 40% faster in 4K might be only 15% faster in 1080p, and generally you'll want a 5GHz+ CPU much more than just a faster GPU.
> 
> Whereas at 4K you could literally downclock a 8700K to 3.7GHz (base) and not lose a single frame in gaming.


Well I am tempted to go with a 4k setup now but that is going to run me $3200 just for the card and monitor. Unless there are cheaper 4k 144hz monitors out there.


----------



## jm600rr

dante`afk said:


> In the US? You bought them for $1900 each?


MSRP. Hunted them down on nowinstock.net.


----------



## GraphicsWhore

Pilz said:


> My reference card can hit 2205mhz but it's not stable. I think it's possible to see 2160mhz stable on my card because I've gotten very close to that before but stopped because it was a long process. I routinely run 2130mhz completely fine so I agree about the FE cards.
> 
> 
> X1 doesn't give me an error reason ever so that's why I'm confused. It also gives me different offsets with no change in temperature (peak is 38°C when I benchmark) so I think it's simply buggy right now. AB beta's scanner doesn't work for me at all so I have no experience in how it gets offsets. Hopefully these will launch with more features and actually save settings on reboot for once.


Absolutely buggy, like all EVGA software. They're great at hardware, but god does their software SUCK. 

Newest build of X1 still doesn't save/load profiles correctly for me. The VF curve is absolutely ridiculous: 1000000 points, and the GUI is so damn small I use Magnifier in Windows to make it less tedious. And of course it doesn't f'ing save. And once I do set it and run, I'm not convinced it's actually doing anything based on what I see the card doing.


----------



## Outcasst

Is the XC Ultra with the updated BIOS currently the highest power-target card, at 338.0W?

Edit: Just found the Gigabyte Gaming OC at 366W


----------



## tamas970

Outcasst said:


> Is the XC ultra with updated BIOS currently the highest power target card at 338.0 W?


That's the max power, not the default target.

Highest power limits:
KFA2(?): 380W
FTW3: 373W
Giga Gaming OC: 366W
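For context, those ceilings can be expressed against the 250W reference TDP from the spec sheet (FE cards ship slightly higher at 260W; using the 250W baseline is my choice here):

```python
# Express each vendor BIOS power ceiling as a percentage of the 250 W
# reference TDP. The wattages are the ones quoted in the thread.

REFERENCE_TDP_W = 250
limits = {"KFA2": 380, "FTW3": 373, "Gigabyte Gaming OC": 366}

for card, watts in limits.items():
    pct = watts / REFERENCE_TDP_W
    print(f"{card:20s} {watts} W = {pct:.0%} of reference TDP")
```

So even the tamest of the three ceilings is roughly 46% above the reference power budget, which is why cooling, not the limit itself, tends to become the constraint.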


----------



## Outcasst

tamas970 said:


> That's the max power, not the default target.
> 
> Highest power limits:
> KFA2(?): 380W
> FTW3: 373W
> Giga Gaming OC: 366W


Could one assume that if you planned to liquid cool in the future, the highest max power target card would be the best to choose? (assuming it's reference PCB)


----------



## skingun

Doesn't seem to make a whole lot of difference. Just get the cheapest A die you can.


----------



## tamas970

Outcasst said:


> Could one assume that if you planned to liquid cool in the future, the highest max power target card would be the best to choose? (assuming it's reference PCB)


Beyond 330W you are strictly in silicon-lottery territory. I don't know if, under _any_ conditions, the GPU could break 2300MHz (custom PCB with more VRM power, FC block + Peltier, shunt mod).


----------



## truehighroller1

tamas970 said:


> Beyond 330W you are strictly in silicon-lottery territory. I don't know if, under _any_ conditions, the GPU could break 2300MHz (custom PCB with more VRM power, FC block + Peltier, shunt mod).


I'm running an Asus OC dual fan flashed with the Galax reference-PCB (2x8-pin) RTX 2080 Ti 300W x 126% power target BIOS (380W), and I'm pulling 380 watts. I gained about 4% more performance from opening mine up from 312 to 380; it just stopped the card from throttling itself all the time. Temps went up a little but not very much: I was running about 59C and now about 71C, while gaming I mean of course.

I didn't have to do a shunt mod. 

I'm more tempted than ever to get water cooling on my GPU this time around though. I've had this card clock itself to 2100 with air cooling; before the BIOS flash it wouldn't go that high.


----------



## Jpmboy

^^ you'll be amazed at the performance benefit from water cooling the entire rig. It's easy and moves with any upgrades down the road.


----------



## carlhil2

Lol, "Very High" it is then... and that's @ 2145MHz..


----------



## wheatpaste1999

Anyone have good results undervolting? I am testing FE cards on air right now and using MSI Afterburner.

So far I can go down to 925 mV and 1980 MHz on the core clock and +800 MHz on the memory, which seems decent to me. I can go slightly higher on the core clock (2010 MHz) and still pass Superposition 4k/8k, Heaven and Real Bench benchmarks, but I had to drop down the clock to 1980 MHz to pass Time Spy and Fire Strike.

I tried setting V-F curves at higher voltages (1000 mV, 1050 mV), but I was getting lots of bouncing up and down on the V-F curve. If I set the clock frequency at 925 mV I only get clock changes based off of temperature.

Going to test more tomorrow and try for higher clocks and higher voltages.
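The undervolt described above, locking the curve at one point so the card never requests more voltage, can be sketched like this (the curve values are invented for illustration, not real card data):

```python
# Sketch of the usual Afterburner undervolt approach: pick a target point
# (925 mV, 1980 MHz) and flatten every point at or above that voltage so
# the card never boosts past it.

def undervolt(curve, target_mv=925, target_mhz=1980):
    """Clamp an (mV, MHz) curve so no point at/above target_mv exceeds target_mhz."""
    return [(mv, mhz if mv < target_mv else min(mhz, target_mhz))
            for mv, mhz in curve]

stock = [(850, 1800), (900, 1900), (925, 1995), (1000, 2040), (1050, 2100)]
print(undervolt(stock))
# every point from 925 mV upward is capped at 1980 MHz
```

With the high-voltage portion flat, the only clock movement left comes from temperature-based offsets, which matches the behavior described above.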


----------



## stefxyz

Edit: double post


----------



## stefxyz

Outcasst said:


> Could one assume that if you planned to liquid cool in the future, the highest max power target card would be the best to choose? (assuming it's reference PCB)


The answer is yes. However you also need to be lucky with the chip, so that it runs +1000 MHz memory stable and a high core frequency too. But it will help you get the last bit out of it and sustain higher clocks under water.

My FE is perfect in the sense that it never crashes, even at 2145 MHz and +1000 MHz memory (temps never higher than 41 Celsius with a dedicated 560mm radiator and a loop just for the GPU), yet the frequency in gaming still drops down to 1970 in some games due to the power limit. So if you want the cherry on top of the ice cream, that will surely give you some extra MHz at a high TDP cost. But many, me included, don't care about some more power draw, as cards are still way too slow in general for high-end VR and only now getting fast enough for premium 4K 120 Hz.


----------



## zipeldiablo

skingun said:


> Doesn't seem to make a whole lot of difference. Just get the cheapest A die you can.


Custom PCBs don't matter that much then?


carlhil2 said:


> Lol, "Very High" it is then.... and that's @2145mhz..


Ghost Recon has always had poor optimization, unfortunately 



Baasha said:


> Yea games are CPU bound at 4K with these GPUs. Never thought that would be the case - 6950X @ 4.30Ghz.


So what you're saying is I should seriously consider upgrading my 5960X?



So guys, how easy is it to flash the BIOS on the Galax?
Wondering if there is a risk of bricking the card or something (would be flashing an EVGA XC, probably the Gaming depending on which one is available)


----------



## mbhtzor

I have 2100+ stable in games on my Asus RTX 2080 Ti Advanced under water cooling:
+143 core / +700 memory. Is that something I should be satisfied with?

Bought a Zotac AMP! as well to see if I'm able to get any higher; if not I'll just return it.



Tried going higher by placing my computer outside (don't ask) with the core at 30 degrees, but the only improvement I was able to get was +750 on memory, and the max graphics-only score I got was 7443 in Time Spy Extreme.


----------



## GraphicsWhore

I finally figured out how to run the OC Scan using X1. It does legitimately take 15-20 mins to complete, so I let it go. Applied it afterwards and added +1000 to memory, but my 3DMark graphics scores were actually lower than my best, which I had achieved just adding +120 to core manually (also with +1000 memory).

Average clocks were in the same range as manually OC'ing as well (2085-2115).

So ultimately it's just a waste of 15-20 mins for me. 

Also updated Afterburner to the latest version (beta 6) and now the voltage slider and curve work normally, but that didn't do anything more for benchmark scores than just setting +120/+1000 either.

So it seems +120/+1000 is my sweet spot for benching. The only thing I haven't tried yet is running the X1 scan with the Galax BIOS. I'm going to do that, then try to get ~2085 MHz with the lowest voltage possible, and that'll be my daily OC.


----------



## arrow0309

GraphicsWhore said:


> I finally figured out how to run the OC Scan using X1. It does legitimately take 15-20 mins to complete so I let it go. Applied it afterwards and added +1000 to memory but my 3d mark graphics scores were actually lower than my best which I had achieved just adding +120 to core manually (also with +1000 memory).
> 
> Average clocks were in the same range as manually OC'ing as well (2085-2115).
> 
> So ultimately it's just a waste of 15-20 mins for me.
> 
> Also updated AfterBurner to latest version (beta 6) and now voltage slider and curve work normally, but that didn't do anything more to benchmark scores than just setting +120/+1000 either.
> 
> So it seems +120/+1000 is my sweet-spot for benching. Only thing I haven't tried yet is running the X1 scan with the Galax BIOS. I'm going to do that then try to get ~2085Mhz with lowest voltage possible and that'll be my daily OC.


Is 2085 possible for gaming with lower vGPU? 
Keep us informed of the results.


----------



## Okt00

GraphicsWhore said:


> I finally figured out how to run the OC Scan using X1. It does legitimately take 15-20 mins to complete so I let it go. Applied it afterwards and added +1000 to memory but my 3d mark graphics scores were actually lower than my best which I had achieved just adding +120 to core manually (also with +1000 memory).
> 
> Average clocks were in the same range as manually OC'ing as well (2085-2115).
> 
> So ultimately it's just a waste of 15-20 mins for me.
> 
> Also updated AfterBurner to latest version (beta 6) and now voltage slider and curve work normally, but that didn't do anything more to benchmark scores than just setting +120/+1000 either.
> 
> So it seems +120/+1000 is my sweet-spot for benching. Only thing I haven't tried yet is running the X1 scan with the Galax BIOS. I'm going to do that then try to get ~2085Mhz with lowest voltage possible and that'll be my daily OC.


Did you ever try the OC Scan using Afterburner? It has a status window that keeps you up to date with how the scan is progressing. 

I find the curve editor in X1 'not so easy to use'; AB at least works with arrow keys and all that. I do wish there was a marquee select though.


----------



## GraphicsWhore

Okt00 said:


> Did you ever try the OC Scan using Afterburner? It has a status window that keeps you up to date with how the scan is progressing.
> 
> I find the curve editor in X1 to be 'not so easy to use', AB seems to work with arrow keys and all that. I do wish there was a marquee select though.


Yeah the curve editor in X1 is absurd - I've already ranted about it like 1000 times in this thread lol. Have you figured out how to move all the points to the right of one point to "snap" to the same frequency? Or reduce the number of points?

Yesterday when I tried OC Scanner in AB, nothing happened. I hit the button a few times and no response. Will try again tonight.

On that note: I forgot how to snap all the points to the right of one point in AB as well (i.e., make it flat starting at one point). Anyone know?

On an unrelated note: I've found a game that lets me make a direct comparison, one the 2080Ti handles that the 1080Ti could not: Project Cars 2 VR. All settings maxed on my Vive Pro would make my 1080Ti struggle, with FPS dips into the sub-30s (card also blocked and running at 2080/6000).

2080Ti doesn't even flinch with the same settings and holy CRAP does it look amazing to see this fidelity in VR at butter FPS.

Elite Dangerous VR was another one that 1080Ti didn't handle well at max. Going to try that tonight and expect the same result.


----------



## R1amddude

Just ran Fire Strike with my 2700 and 2080Ti, sweet result.


----------



## rush2049

What is the go-to BIOS for flashing onto Founders Edition cards?

I am seeing the Gigabyte Gaming OC as having the highest power limit at 300W-366W, and it looks like a reference PCB?

Or is there another BIOS I should look into?


----------



## Fiercy

Guys, anyone having any blue screens while exiting games? 

I've found people on the Nvidia forums having the exact same BSOD issue as me. Just wondering if anyone here has encountered this.


----------



## Kinchouka

rush2049 said:


> What is the go-to bios for flashing onto founder edition cards?
> 
> I am seeing the Gigabyte Gaming OC as having the highest power limit at 300W - 366W and it looks like a reference PCB?
> 
> Or is there another bios I should look into?


You can't flash another BIOS on FE cards; look at the first page.


----------



## rush2049

Kinchouka said:


> You can't flash another bios on FE cards, look at first page.


Well, I figure it's just a matter of time until nvflash is updated to bypass it, much like with Pascal cards.


----------



## Diverge

rush2049 said:


> Well, I figure it's just a matter of time till nvflash is updated to bypass it much like pascal cards.


The odd thing is that TechPowerUp had the BIOS dump from the pre-release FE cards long before the retail cards released, yet it's the same version as retail.

So what is different? Is there a write-protect pin that gets toggled via software somehow? Or do you think they somehow locked them via hardware?

Not asking you specifically, just in general. I'm more curious what changed, rather than really wanting to flash other bioses.


----------



## Okt00

GraphicsWhore said:


> Yeah the curve editor in X1 is absurd - I've already ranted about it like 1000 times in this thread lol. Have you figured out how to move all the points to the right of one point to "snap" to the same frequency? Or reduce the number of points?
> 
> Yesterday when I tried OC Scanner in AB, nothing happened. I hit the button a few times and no response. Will try again tonight.
> 
> On that note: I forgot how to snap all the points to the right of one point in AB as well (i.e., make it flat starting at one point). Anyone know?


So I think once you've tweaked a point and applied the curve, everything to the right of said point is set to the previous offset. Don't quote me on that! lol... I'll try when I get home from work.

I'm going to check out the actual profile 'file'; maybe it's just XML or some other non-binary format that can be edited. Might even be worth making a separate editor to modify the profile contents directly, then side-load it into AB. (Just thoughts)


I even have a little C program which uses some undocumented NVAPI pointers to set the core and memory offsets. I don't know how it really works with the new cards and this voltage curve though. I'd think it would write a table to the card, and not do 'on-the-fly' manipulation of the core offset based on the voltage readout... anyone have any thoughts? Maybe a different core offset for each P-state?
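
If the profile file does turn out to be plain text, editing it directly is easy to script. A minimal sketch, assuming an INI-style layout; the section and key names used here ([Profile1], CoreClk) are guesses for illustration, not Afterburner's documented schema:

```python
# Sketch only, not AB's documented schema: MSI Afterburner profiles live in
# its Profiles folder as plain-text .cfg files that look like INI. The
# section and key names below ([Profile1], CoreClk) are illustrative guesses.
import configparser

def bump_core_offset(path: str, delta: int) -> int:
    """Read an AB-style .cfg profile, add `delta` to the core clock offset,
    write the file back, and return the new value."""
    cfg = configparser.ConfigParser()
    cfg.optionxform = str  # preserve key casing when round-tripping
    cfg.read(path)
    new_val = cfg.getint("Profile1", "CoreClk") + delta
    cfg.set("Profile1", "CoreClk", str(new_val))
    with open(path, "w") as f:
        cfg.write(f)
    return new_val
```

You'd then restart AB (or re-apply the profile) so it picks up the side-loaded values.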


----------



## rush2049

2080 ti FE are available for sale again on nvidia's site....


----------



## truehighroller1

Jpmboy said:


> ^^ you'll be amazed at the performance benefit from water cooling the entire rig. It's easy and moves with any upgrades down the road.


Completely agree. I have enough radiators to handle it. I have just my CPU cooled at the moment.


----------



## xer0h0ur

Fiercy said:


> Guys anyone having any blue screen while exiting games?
> 
> I’ve found people on Nvidia forums having the exact same BSOD issues like me. Just wondering if anyone here encountered this.


I presume its a Windows 10 occurrence. I'm on Windows 7 and the only times I have managed to crash at all have been from pushing the overclock. Ever since I used the scanner tool in Afterburner to determine my curve for me I haven't seen a single issue.


----------



## Zammin

rush2049 said:


> 2080 ti FE are available for sale again on nvidia's site....


aaaand they're gone

Got the email when I woke up, sold out by the time I got to the page lmao


----------



## octiny

Zammin said:


> aaaand they're gone :upsidedwn
> 
> Got the email when I woke up, sold out by the time I got to the page lmao


Still available as of 2 minutes ago.


----------



## Zammin

octiny said:


> Still available as of 2 minutes ago.


On the Australian Nvidia store page it still says "Notify Me" where "Add to Cart" should be.

Maybe they are limiting availability to us..

Edit: Yep just checked. Available to the US and Canada but still out of stock for Australia. Nice.


----------



## dante`afk

how are the FE cards going in terms of OC? bios flash not possible on them?


----------



## Zammin

dante`afk said:


> how are the FE cards going in terms of OC? bios flash not possible on them?


BIOS flash not possible at this point in time, some people are getting good OC and others average, it most likely comes down to silicon lottery. Same deal with the AIB reference cards that have the same 300A chip as the FE, only difference is you can BIOS flash the AIB cards.


----------



## dante`afk

Zammin said:


> BIOS flash not possible at this point in time, some people are getting good OC and others average, it most likely comes down to silicon lottery. Same deal with the AIB reference cards that have the same 300A chip as the FE, only difference is you can BIOS flash the AIB cards.


does that mean that all FE cards should have the (good) A-chip?


----------



## Zammin

dante`afk said:


> does that mean that all FE cards should have the (good) A-chip?


Yeah the FE cards have the 300A chip. AIB reference cards that are factory overclocked have it as well. The 300 non-A chips are only found in the bottom tier non-factory overclocked cards.


----------



## NAIM101

Hello, anyone know if the ASUS DUAL RTX 2080TI is a reference PCB design? I want to add a water cooler to it, but if not I'll need to purchase the Nvidia FE 2080TI. Anyone??


----------



## NAIM101

Nevermind, this review answered it, thank you.


Terrible 2080 Ti. Cheaply made. Poor heatsink 9/27/2018 8:02:49 PM
Pros: It works but gets super hot. More below in cons. 

Relatively conservative looks.

Cons: Cheap cheap cheap product sold at huge price tag. The box and packaging alone look like a cheap $200 GPU not a product that costs nearly $1300 after tax and shipping. 

The heatsink is total crap. It doesn’t even have any heat pipes. Just a huge aluminum fin array that does a poor job at cooling. 

Limited to 120% power limit. The Strix cards, even the cheap EVGA cards have 130% limit. 

Not a single light or RGB. It’s a crappy plastic shroud with cheap “RTX” painted on. It doesn’t even say 2080 Ti anywhere! 

Does it look like the pictures! False advertising! It isn’t black like the pictures. It has this huge copper and silver plate on the the bottom and around the edges. Like they decided not to paint it to save even more money. Will be returning for false advertising.

Other Thoughts: AVOID AVOID this cheap cheap 2080 Ti. Literally buy any other brand. Asus is totally ripping you off with this one.


----------



## Pilz

NAIM101 said:


> Nevermind, thank you reivew.
> 
> 
> Terrible 2080 Ti. Cheaply made. Poor heatsink 9/27/2018 8:02:49 PM
> Pros: It works but gets super hot. More below in cons.
> 
> Relatively conservative looks.
> 
> Cons: Cheap cheap cheap product sold at huge price tag. The box and packaging alone look like a cheap $200 GPU not a product that costs nearly $1300 after tax and shipping.
> 
> The heatsink is total crap. It doesn’t even have any heat pipes. Just a huge aluminum fin array that does a poor job at cooling.
> 
> Limited to 120% power limit. The Strix cards, even the cheap EVGA cards have 130% limit.
> 
> Not a single light or RGB. It’s a crappy plastic shroud with cheap “RTX” painted on. It doesn’t even say 2080 Ti anywhere!
> 
> Does it look like the pictures! False advertising! It isn’t black like the pictures. It has this huge copper and silver plate on the the bottom and around the edges. Like they decided not to paint it to save even more money. Will be returning for false advertising.
> 
> Other Thoughts: AVOID AVOID this cheap cheap 2080 Ti. Literally buy any other brand. Asus is totally ripping you off with this one.


Which 2080Ti? You ranted and failed to specify


----------



## Zammin

NAIM101 said:


> Hello, anyone know if the ASUS DUAL RTX 2080TI is a reference PCB design? I want to add water cooler on it but if not I need to purchase the Nvidia FE 2080TI. Anyone??





NAIM101 said:


> Nevermind, thank you reivew.
> 
> 
> Terrible 2080 Ti. Cheaply made. Poor heatsink 9/27/2018 8:02:49 PM
> Pros: It works but gets super hot. More below in cons.
> 
> Relatively conservative looks.
> 
> Cons: Cheap cheap cheap product sold at huge price tag. The box and packaging alone look like a cheap $200 GPU not a product that costs nearly $1300 after tax and shipping.
> 
> The heatsink is total crap. It doesn’t even have any heat pipes. Just a huge aluminum fin array that does a poor job at cooling.
> 
> Limited to 120% power limit. The Strix cards, even the cheap EVGA cards have 130% limit.
> 
> Not a single light or RGB. It’s a crappy plastic shroud with cheap “RTX” painted on. It doesn’t even say 2080 Ti anywhere!
> 
> Does it look like the pictures! False advertising! It isn’t black like the pictures. It has this huge copper and silver plate on the the bottom and around the edges. Like they decided not to paint it to save even more money. Will be returning for false advertising.
> 
> Other Thoughts: AVOID AVOID this cheap cheap 2080 Ti. Literally buy any other brand. Asus is totally ripping you off with this one.


The ASUS DUAL is a reference design. From what I understand most of the 2-fan 2080Tis run kinda hot except for the 2.75 slot EVGA XC Ultra. If you're chucking a water block on it, there's no need to worry about the air cooler. As long as it's a reference board with a 300A chip you're good to go. As for power limits you can flash the BIOS of other reference cards to increase the power limit. They are listed on the first page of the thread.

I think to keep costs down (since the AIB partners are competing with Nvidia's FE this time) a lot of the reference designs had to have fairly low-cost coolers and minimal RGB. Even the EVGA ones look cheaper than last gen with the clear plastic shroud. I'm not 100% sure that's true though, just what I've heard.


----------



## nerfon

New nvflash_5.527.0 released https://www.techpowerup.com/download/nvidia-nvflash/ no clue on changes


----------



## SharpShoot3r07

Does anyone see a promo in the near future where you buy a 20 series card and get "xyz" game for free? I'm holding off for this reason.


----------



## tamas970

NAIM101 said:


> Nevermind, thank you reivew.
> Terrible 2080 Ti. Cheaply made. Poor heatsink 9/27/2018 8:02:49 PM
> Pros: It works but gets super hot. More below in cons.
> Relatively conservative looks.
> Cons: Cheap cheap cheap product sold at huge price tag. The box and packaging alone look like a cheap $200 GPU not a product that costs nearly $1300 after tax and shipping.
> The heatsink is total crap. It doesn’t even have any heat pipes. Just a huge aluminum fin array that does a poor job at cooling.
> Limited to 120% power limit. The Strix cards, even the cheap EVGA cards have 130% limit.
> Not a single light or RGB. It’s a crappy plastic shroud with cheap “RTX” painted on. It doesn’t even say 2080 Ti anywhere!
> Does it look like the pictures! False advertising! It isn’t black like the pictures. It has this huge copper and silver plate on the the bottom and around the edges. Like they decided not to paint it to save even more money. Will be returning for false advertising.
> Other Thoughts: AVOID AVOID this cheap cheap 2080 Ti. Literally buy any other brand. Asus is totally ripping you off with this one.


LoL, how about starting your rant by specifying *which card* you are talking about? Until the last sentence not even the manufacturer was clear. _Guessing_ an Asus Dual, which is usually the bottom of the pack...


----------



## Novak-Djokovic

So, what's the general feeling now... The best card is the EVGA 2080ti FTW?


----------



## Zammin

Pilz said:


> Which 2080Ti? You ranted and failed to specify





tamas970 said:


> LoL, what about starting your cry specifying *which card* you are talking about? Until the last sentence not even the manufacturer was clear. _Guessing_ an Asus dual, which is usually the bottom of the pack...


It was poorly put together but he was copying and pasting someone's review of the ASUS 2080Ti DUAL and he did specify, but it was in his first post right above that one. Here it is, hope it clears things up:



NAIM101 said:


> Hello, anyone know if the ASUS DUAL RTX 2080TI is a reference PCB design? I want to add water cooler on it but if not I need to purchase the Nvidia FE 2080TI. Anyone??


----------



## tamas970

Zammin said:


> It was poorly put together but he was copying and pasting someone's review of the ASUS 2080Ti DUAL and he did specify, but it was in his first post right above that one. Here it is, hope it clears things up:


Thanks, clear now.


----------



## domrockt

tamas970 said:


> No cards with decent cooling/330+W BIOS are available in Europe either :( https://geizhals.eu/?cat=gra16_512&...=uk&hloc=eu&plz=&dist=&mail=&sort=t&bl1_id=30



I ordered the KFA2 from Amazon; they claim to ship between 10.11 and 28.12.18.

Guess bad luck with my order, but 100€ cheaper.


----------



## nfsking2

Pilz said:


> What was your max boost speed? The voltage slider seems to provide more stability at higher clocks and does not actually change voltage like in the past as has been mentioned here before.
> 
> Edit: running a stock FE card overclocked I eeked out 7443 in Time Spy Extreme graphics. I could do better if I actually had the time to mess with memory stability and refine my core offset some more. I posted the results a few days ago, you should be able to find the screenshot.
> 
> I'm surprised your EVGA card isn't doing better, even when I ran a sh"t benchmark it was ~7200


I didn't push GDDR so hard. Now it's 7500~7600


----------



## Edge0fsanity

NAIM101 said:


> Hello, anyone know if the ASUS DUAL RTX 2080TI is a reference PCB design? I want to add water cooler on it but if not I need to purchase the Nvidia FE 2080TI. Anyone??


I have it, it is reference, and it will work with the EK Vector block. Everything about that review you posted is correct. Bottom-tier product if you're going to air cool; it runs about 10C hotter than a FE. Still, it has the better chip in it and works with waterblocks. I blocked mine, flashed the BIOS, and it has decent clocks.


----------



## tamas970

domrockt said:


> I ordered the KFA2 from Amazon they claim to ship between the 10.11 and 28.12..18
> 
> Gues Bad luck with my Order but 100€ cheaper


Please keep us posted on how the vapor cooling fares! Interesting concept, though it seems quite small for 300+W of dissipation.


----------



## profundido

SharpShoot3r07 said:


> Does anyone see a promo in the near future where you buy a 20 series card and get "xyz" game for free? I'm holding off for this reason.


I see a promo in the future where you buy the game and get a 20 series card for free...


----------



## wuannai

nerfon said:


> New nvflash_5.527.0 released https://www.techpowerup.com/download/nvidia-nvflash/ no clue on changes


Anyone tried to crossflash his FE with this version?


----------



## GraphicsWhore

A few TimeSpy and FireStrike runs. Highest graphics in FS was a different run than highest total.

Now that I have a baseline, I know what I'm aiming for: 16k graphics in TS and 40k graphics in FS. The former I think might be doable, as I still have a lot of tweaking and testing left, including with the Galax BIOS. I don't see making up almost 500 pts in FS, but we'll see.


----------



## Okt00

Well I am pretty gutted. Last night I had clocked up to 2100MHz @ 1.024V with a +1000 memory offset, ran BF1 for at least 2 hours, no problem.

I was just about to come over here and report in, so all I had running at the time was Firefox, and my whole screen filled up with artifacts. Okay, no sweat, dial back the OC. Then the whole display driver hard locked up and I rebooted.

Now, as soon as the Win10 loading dots appear during boot, the whole screen lights up again and Windows 10 disables the device. I disabled the Afterburner autostart and rebooted, same thing. Reinstalled the driver... same. Called it a night and went to bed.

Fired it up this AM and it booted into the OS at full resolution, interesting. Took a shower and came back to see it had reverted to 1024x768 or whatever low resolution it defaults to, and disabled the device again.

Now I'm still getting video out of the DP connector on that card, but Windows says it's disabled the devices as it's detected a problem. 

I'll get in touch with Nvidia for a RMA process, but anyone have any ideas? 

RTX 2080 Ti Founders Edition


----------



## kot0005

Okt00 said:


> Well I am pretty gutted. Last night I had clocked up to a 2100MHz @1.024V with a +1000 Memory offset, ran BF1 for at least 2 hours, no problem.
> 
> I was just about to come over here and report in, so all I had running at the time was Firefox, and my whole screen filled up with artifacts. Okay, no sweat, dial back the OC. Well the whole display driver hard locks up and I reboot.
> 
> Well now as soon as the Win10 load balls step to load the OS the whole screen lights up again, and windows 10 disables the device. I disabled the Afterburner autostart rebooted, same thing. Reinstalled the driver... same. Called it a night and went to bed.
> 
> Fired it up this AM and it booted into the OS at full resolution, interesting. Take a shower and see that it's reverted to. the 1024x768 or whatever low resolution it defaults to and disabled the device again.
> 
> Now I'm still getting video out of the DP connector on that card, but Windows says it's disabled the devices as it's detected a problem.
> 
> I'll get in touch with Nvidia for a RMA process, but anyone have any ideas?
> 
> RTX 2080 Ti Founders Edition



FE models seem to have lots of issues. This must be happening because something is overheating or going bad. Check your memory and VRM temps with a probe.


----------



## GraphicsWhore

Okt00 said:


> Well I am pretty gutted. Last night I had clocked up to a 2100MHz @1.024V with a +1000 Memory offset, ran BF1 for at least 2 hours, no problem.
> 
> I was just about to come over here and report in, so all I had running at the time was Firefox, and my whole screen filled up with artifacts. Okay, no sweat, dial back the OC. Well the whole display driver hard locks up and I reboot.
> 
> Well now as soon as the Win10 load balls step to load the OS the whole screen lights up again, and windows 10 disables the device. I disabled the Afterburner autostart rebooted, same thing. Reinstalled the driver... same. Called it a night and went to bed.
> 
> Fired it up this AM and it booted into the OS at full resolution, interesting. Take a shower and see that it's reverted to. the 1024x768 or whatever low resolution it defaults to and disabled the device again.
> 
> Now I'm still getting video out of the DP connector on that card, but Windows says it's disabled the devices as it's detected a problem.
> 
> I'll get in touch with Nvidia for a RMA process, but anyone have any ideas?
> 
> RTX 2080 Ti Founders Edition


I assume power limit and voltage set to max? Even on water I doubt the card stayed close to 2100 @ 1024 with +1000 mem for 2 hours. Do you write AB stats to a file by chance to see what it was doing right before it crashed?


----------



## Okt00

GraphicsWhore said:


> I assume power limit and voltage set to max? Even on water I doubt the card stayed close to 2100 @ 1024 with +1000 mem for 2 hours. Do you write AB stats to a file by chance to see what it was doing right before it crashed?


Yes, I'll clarify. It was boosting to 2100MHz; I saw it hit 2100 and sustain for a few seconds, then come back down, not just a sharp spike.

Good idea. AB wasn't, but I have HWiNFO64 logging everything to some rolling logs.

My google-fu says it's a known VRAM issue. I read someone over at either ChaosGroup or OTOY forums say they heard it was a GDDR6 issue with the Founders Edition cards, which ended up causing a massive delay while they sorted that out. Don't know how true that is however. 

Support basically said let's RMA the thing within 2 messages over chat. I have to send in the system information to proceed with the RMA, so I know what I'm doing tonight.



> Me: okay, I can complete later tonight. But does this sound like a known problem? Possible VRAM issue?
> Support: We have some reports of this issue with this card but we need to collect all the files from PC to confirm anything
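
Digging the last readings before a crash out of a rolling sensor log is easy to script. A minimal sketch for a HWiNFO-style CSV; the column names ("GPU Clock [MHz]" etc.) are assumptions, so match them against the header row of your own log:

```python
# Sketch: pull the last few rows out of a HWiNFO-style rolling CSV log to
# see what the card was doing right before a crash. Column names such as
# "GPU Clock [MHz]" are assumptions; check them against your log's header.
import csv
from collections import deque

def tail_readings(log_path, columns, n=5):
    """Return the last n rows of the log, reduced to the chosen columns."""
    with open(log_path, newline="", encoding="utf-8", errors="replace") as f:
        rows = deque(csv.DictReader(f), maxlen=n)  # keep only the tail
    return [{c: row.get(c, "") for c in columns} for row in rows]
```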


----------



## xermalk

kot0005 said:


> Fe models seem to have lots of issues. This must be happening because something is overheating or going bad. Check ur memory and vrm temps with a probe


Seems Nvidia is having QC issues with the FE cards; their forum is filled with crash/artifact reports.

https://forums.geforce.com/default/board/227/


----------



## xer0h0ur

I've seen several people complain about artifacting on their FE cards on the Nvidia forums. I've been crossing my fingers that my card doesn't eventually give me problems. I've been pushing a stable 2070MHz GPU clock that at worst downclocks to 2040MHz and +1000 on the GDDR6 without issues yet.


----------



## BehindTimes

xermalk said:


> Seems Nvidia is having issues with QC for for the fe cards, their forum is filled with crash/artifact reports.
> 
> https://forums.geforce.com/default/board/227/


Yeah. I'm going on my fifth card (3rd replacement), which is coming tomorrow. And I haven't even bothered to overclock yet, just running pure stock, doing a simple check of running Time Spy and The Witcher 3. The fourth card seems to be working fine, but even on the forums you see people who said everything worked well for a couple of weeks, then bam, artifacts started to appear. Engineering problem with how they designed the FE cards?


----------



## xer0h0ur

I can't say with certainty, but GPU artifacting has never looked the same to me as GDDR artifacting. GPU artifacting usually ends up being popping or flashing textures, while GDDR artifacting ends up looking like weird symbols or missing textures.

For example, this is a picture someone put up of their artifacting. I would guess GDDR6 artifacting over GPU artifacting in that instance.


----------



## tamas970

Is it just the FE or other vendors' cards are also artifacting?


----------



## sblantipodi

I have bought a 2080 Ti FE, and I am reading that a lot of people here have problems with the FE...
Is this an engineering flaw? Should I cancel my order and get a refund?


----------



## GraphicsWhore

tamas970 said:


> Is it just the FE or other vendors' cards are also artifacting?


I mean any card has the possibility of having VRAM problems but this thing with the FE sounds like it's a batch or batches of cards. I'm on EVGA's forum a lot and haven't heard of any such issues.



sblantipodi said:


> I have bouht a 2080 Ti FE, I am reading that a lot of people here have some problems with the FE...
> Is this an engineering flaw? Should I cancel my order and get a refund?


Well, if you get a refund, are you planning on buying another FE? If so, I would just put your card through the tests and see if anything happens. If it does you can get an RMA, but it may turn out to be perfectly fine.

Or you could cancel and go with a 3rd-party card, but they're hard to find.


----------



## tamas970

sblantipodi said:


> I have bouht a 2080 Ti FE, I am reading that a lot of people here have some problems with the FE...
> Is this an engineering flaw? Should I cancel my order and get a refund?


From what I've gathered on the GeForce forums, it is a VRAM failure occurring in some but not all cards.


----------



## Diverge

Seems concerning. I've been debating whether I want to watercool my card. I guess I'll hold off for a while and evaluate it, and everything else. Have a 9900K coming, so the whole system will be new/upgraded by next week.


----------



## Zurv

Which of the FE cards have the highest power boost? I know some do have higher volt/boost better than others. I'm getting the Nvidia FE, but all the base cards are the same. (all are made by nvidia. Any of the factory overclock card's bios should do better the default one on the NVidia.)
What are people using? bios? waterblock?

I'm going to use the Bitspower blocks and a shunt mod, and I'm looking for some input on what BIOS I should use. (SLI too, of course.)

I wonder how DLSS works. If SLI isn't supported... can the other tensor cores from the other card help? That said, the tensor cores aren't doing anything.. so offloading the work might not make a diff.)

(i wonder how much i can get for Titan Vs... hopefully i can get $2500 each.  )

I have a feeling a new titan will be out by the end of the year and i'm going to be forced to upgrade again.... (FORCED! It is out of my control. )


----------



## xer0h0ur

Diverge said:


> Seems concerning. I've been debating if I want to watercool my card. I guess I'll hold off for a while while evaluating it, and everything else. Have a 9900K coming, so the whole system will be new/upgraded by next week.


As long as you're not heavy handed with the original cooler and the card's screws you can easily get away with waterblocking it and Nvidia being none the wiser about it. They didn't put any sort of tamper proof stickers or anything.

I carefully removed the cooler making sure I didn't tear the pads anywhere and stored it and the backplate face down so that pads didn't fall off from their placement.


----------



## Hulk1988

Alright, as I said already: the reason for the 2080 Ti delays is the BIOS protection.

I cannot flash my MSI X Trio which I got yesterday.

WARNING: Firmware image PCI Subsystem ID (10DE.12FA)
does not match adapter PCI Subsystem ID (1462.3715).

NOTE: Exception caught.
Nothing changed!

Edit: my bad, I forgot to add -6.

Trio X successfully flashed to the Galax BIOS! Awesome
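
For anyone hitting the same error: the -6 switch is what overrides the "PCI Subsystem ID does not match" check, and you want a backup of your own BIOS before touching anything. A dry-run sketch of the sequence (assumes nvflash is on your PATH; flash at your own risk):

```python
# Sketch: build the nvflash command lines for a cross-flash, dry-run by
# default so nothing is written by accident. The -6 switch bypasses the
# "PCI Subsystem ID does not match" error quoted above; --save dumps the
# current BIOS first. Assumes nvflash is on PATH.
import subprocess

def crossflash_cmds(rom, backup="backup.rom"):
    return [
        ["nvflash", "--save", backup],  # back up the existing BIOS first
        ["nvflash", "-6", rom],         # flash, overriding the subsystem-ID check
    ]

def run(cmds, dry_run=True):
    for cmd in cmds:
        if dry_run:
            print(" ".join(cmd))        # just show what would be executed
        else:
            subprocess.run(cmd, check=True)
```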


----------



## GraphicsWhore

Zurv said:


> Which of the FE cards have the highest power boost? I know some do have higher volt/boost better than others. I'm getting the Nvidia FE, but all the base cards are the same. (all are made by nvidia. Any of the factory overclock card's bios should do better the default one on the NVidia.)
> What are people using? bios? waterblock?
> 
> I'm going to use the bitspower blocks, shut mod, and looking for some input on what bios i should use. (SLI too.. of course.)
> 
> I wonder how DLSS works. If SLI isn't supported... can the other tensor cores from the other card help? That said, the tensor cores aren't doing anything.. so offloading the work might not make a diff.)
> 
> (i wonder how much i can get for Titan Vs... hopefully i can get $2500 each.  )
> 
> I have a feeling a new titan will be out by the end of the year and i'm going to be forced to upgrade again.... (FORCED! It is out of my control. )


What do you mean which of the FE cards? There is only one FE - nVidia's. You can't currently flash another card's BIOS to them unless that changed with the most recent version of nvflash.

For best results you want a non-FE reference card that can be flashed and put them on water.


----------



## Zurv

GraphicsWhore said:


> What do you mean which of the FE cards? There is only one FE - nVidia's. You can't currently flash another card's BIOS to them unless that changed with the most recent version of nvflash.
> 
> For best results you want a non-FE reference card that can be flashed and put them on water.


All the basic reference cards are the same as the FE; just the BIOS is different (and the cooler). All the cards, including the reference ones, are way over-engineered. What is limiting perf is heat, but mainly voltage. Under water there will be little difference between reference and custom cards either. (The custom cards might be better when modded and under LN2. Maybe.)
Gamers Nexus has a bunch of videos if you want to nerd out. Here is the one for the FE



Short version: overkill.


RE: RMA to Nvidia. Don't worry about it. I've mauled many a card I bought directly from Nvidia and I've never had a problem getting a replacement. That said, they have all been Titan cards (X, X(P), Xp, etc.).
That is why I normally get Nvidia cards directly from them. The RMA dance with EVGA and Asus takes too long. Also, you can't resell most big brands of video cards (i.e., used cards) on Amazon anymore, but you still can for Nvidia. I'll soon have 3 Titan Vs up for sale there.
My suggestion is to always keep a backup card around so that when you blow up your card, you can still use your PC while waiting for a replacement.


----------



## GraphicsWhore

Zurv said:


> All the basic reference cards are the same as the FE, just the bios is different (and cooler.).


Yeah, I'm aware of that, which is why I mentioned getting a reference card. My point was that unlike previous gens there is currently only one "Founders Edition", the one made by nVidia, and also that it won't take another BIOS (except, I guess, the other FE pre-release BIOS). At least not yet.


----------



## Zurv

That would be a shame if the NV FE cards don't allow cross flashing. That said, hopefully a shunt mod will take care of the voltage issues.


----------



## xer0h0ur

For those who really have to get the most out of the card, they're going to have to shunt it, but it's already been shown that you're increasing the power draw big time for a small percentage gain in actual performance. I don't know if I'll shunt mod mine anymore.


----------



## exploiteddna

EVGA had some Gaming XC 2080Tis available a few mins ago. I was able to add to cart and check out, but I changed my mind. There may be some still, not sure.


----------



## bmg2

michaelrw said:


> evga had some Gaming XC 2080ti available a few mins ago. i was able to add to cart and check out, but i changed my mind, there may be some still, not sure


They sold out fast.


----------



## truehighroller1

Zurv said:


> All the basic reference cards are the same as the FE, just the bios is different (and cooler.) All the cards, including the ref are way over engineered. What is limiting perf is heat, but mainly voltage. Under water, there will be little difference between ref and custom cards too. (the custom cards might be better when moded and under LN. Maybe.)
> gamers nexus has a bunch videos if you want to nerd out. Here is the one for the FE https://www.youtube.com/watch?v=-zsWoef9MqQ short version == overkill.
> 
> 
> RE: RMA to Nvidia. Don't worry about it. I've mauled many a card I bought directly from nvidia and I've never had a problem getting a replacement. That said, they have all been Titan cards (X, X(P), Xp, etc.)
> That is why i normally get nvidia cards directly from them. The RMA dance from EVGA and Asus takes to long. Also, you can't resell most big brands of video cards (ie, used cards) on amazon anymore. But you still can for Nvidia. I'll soon have 3 Titan Vs up for sale there.
> My suggestion is always keep some backup card around so when you blowup your card, you can still use your PC while waiting for a replacement.



Yeah, no. To anyone seeing me quote this guy's post and wondering: get an A variant, which on paper from NVIDIA is the one meant to be overclocked, as the non-A variants have been stated by NVIDIA themselves to be the worse-binned chips. Companies paying NVIDIA the extra cash up front for the A variant can then overclock the cards, fit better custom cooling solutions, and push them as better overclockers etc. in order to get more money.

This launch is such a farce, it's unreal. They're acting like the MSM and lying about every possible piece of information to cause confusion, expecting to get away with bald-faced lying, while at the same time being honest with the retailers BUYING said A variants in order to sell at a profit. This whole mess is so over-the-top hyped and straight-up lied about, it's unreal.


----------



## methadon36

bmg2 said:


> They sold out fast.


I got that e-mail as I was checking my account and was able to snag an 11G-P4-2382-KR (11GB) for $1199. Last time I got a notification I was at the McDonald's drive-through and didn't have my CC in hand.


----------



## BigMack70

xer0h0ur said:


> For those that really have to have the most out of the card they're going to have to shunt it but its been shown already you're increasing the power draw big time for a small percentage gain in actual performance. I don't know if I will shunt mod it anymore.


Everything I'm seeing is that the percentage performance gains are really awful compared to the percentage power increase above 300W. I'm not even sure custom BIOS is worth the risk this time around.


----------



## Okt00

xer0h0ur said:


> I can't say with certainty but GPU artifacting has never been the same for me as GDDR artifacting. GPU artifacting usually ends up being popping or flashing textures while GDDR artifacting ends up looking like weird symbols or missing textures for instance.
> 
> For instance this is a picture someone put up of their artifacting. I would be guessing GDDR6 artifacting over GPU artifacting on that instance.


Mine started like that, totally "space invaders" for the symbols. 

I got an RMA request in, hoping this isn't some Nvidia shenanigans where I wait 5 weeks for an email. LOL

Now I get this. Reminds me of when my Radeon 9700 Pro bit the dust, although that one was very pink. This is pretty tame with the white. Mind you, fewer psychoactives these days.


----------



## wheatpaste1999

wuannai said:


> Anyone tried to crossflash his FE with this version?


Tried it today with no luck.


----------



## kx11

Very cool that Bitspower shipped my waterblock for the 2080 Ti Strix OC edition; now I need to find the card somewhere in the world.


----------



## Zammin

kx11 said:


> Very cool that Bitspower shipped my waterblock for the 2080 Ti Strix OC edition; now I need to find the card somewhere in the world.


Ditching the Palit card and the Phanteks block?


----------



## kx11

Zammin said:


> Ditching the Palit card and the Phanteks block?



yeah it was a dud card: too hot (65c peak if I'm running 4K with unlocked frame rate) and it doesn't OC at all, although it runs games just fine. Reference PCB cards are not on my radar at all now.


----------



## Zammin

kx11 said:


> yeah it was a dud card: too hot (65c peak if I'm running 4K with unlocked frame rate) and it doesn't OC at all, although it runs games just fine. Reference PCB cards are not on my radar at all now.


It does sound like there was something wrong, although it's probably not because it's a reference PCB; otherwise the same issue would be popping up everywhere, as 90% of the 2080 Tis currently available seem to be reference PCB haha. Do you happen to know if the Palit card has a 300A or non-A GPU?

Hope the Strix gives you a better experience than the Palit card. It looks like it's got one hell of a power delivery on it; it's a shame it's limited to 325W with the standard BIOS, though. That's lower than some reference designs, but there might be a custom BIOS out there.


----------



## xAD3r1ty

Does anyone know what's wrong with the Gaming X Trio idling at 20W instead of 6-10W? Mine does that, and I have no idea what the cause is.


----------



## nerfon

My 2080 FE with an EK Velocity block stays at 45 degrees at 2150 MHz, RAM +1000.

Ambient 25 degrees.


----------



## Outcasst

Received my 2080 Ti FE today. Just powered it up and to my horror, one of the fans is defective. What I can only describe as a constant brushing sound as if the bearing is catching on something.

You can really tell Nvidia are pushing these out as fast as possible. No QC.

Returning it for a refund. Simply unacceptable for a £1100 card.


----------



## GraphicsWhore

What are y'alls clocks at idle? Only noticed yesterday I always sit at 1350/7000 with everything at default settings.

Edit: duh, just remembered I had turned on "max performance" mode in the Nvidia control panel.


----------



## zhrooms

MSI GeForce RTX 2080 Ti Gaming X Trio - Mini Review

Had my Gaming X Trio for a while now, but got stuck playing Call of Duty so didn't test it until now.

The good news is, the card is *extremely quiet* (it also has a 0 RPM mode) thanks to its massive cooler, and it looks absolutely amazing; the only card that might come close would be the *AORUS*.

The bad news: it can barely overclock because of the *ridiculous power limit*. You are paying for a custom PCB with more power phases and an extra 6-pin power connector, but they aren't helping at all.

*Default* MSI Gaming X Trio BIOS is at a comfortable *300W*, with the possibility of *330W* by increasing the power limit to 110%. (The Founders Edition goes up to 320W, the EVGA XC up to 338W, the Gigabyte Gaming OC up to 366W and the Galax OC up to *380W*.)
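As a quick sanity check on those numbers, here is a throwaway Python snippet computing how far each stock BIOS cap sits above the 250W reference TDP (the wattages are the ones listed above):

```python
# Power-limit headroom of each stock BIOS relative to the 250W reference TDP.
REFERENCE_TDP_W = 250

max_limits = {
    "MSI Gaming X Trio": 330,
    "Founders Edition": 320,
    "EVGA XC": 338,
    "Gigabyte Gaming OC": 366,
    "Galax OC": 380,
}

for card, watts in sorted(max_limits.items(), key=lambda kv: kv[1]):
    headroom = (watts / REFERENCE_TDP_W - 1) * 100
    print(f"{card:20s} {watts}W  (+{headroom:.0f}% over reference)")
```

Gives a feel for why the Galax BIOS is popular: +52% over reference versus +28% on the FE.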

To properly stress the card for *my purposes* (taking Screenshots in 5K to 8K resolution), I raised every setting in *Call of Duty: Black Ops 4* to the max (except Texture Quality to avoid hitting the VRAM ceiling) and set the resolution to *5120 x 2880 (5K)*, joined a custom TDM game and wrote down the performance numbers after *3 minutes* (26c Ambient).

*#1 Stock, Out of Box (300W) *
*72c* with the fan speed at *41/50*% (1380/1350rpm).
0.987-*1.000*v / 1860-*1875* MHz

The card has a boost clock of 1755 MHz and it managed to reach 1875 MHz, but this is far from the 2000-2100 MHz many reviewers took their Founders Edition to during overclocking. I need to point out that 5K resolution puts considerably more stress on the card, so it can't maintain as high a core clock; if I put it back to 1440p it would go a lot higher, so the numbers aren't really comparable, but still.

*#2 Power Limit 110% (330W) *
*73c* with the fan speed at *42/52*% (1440/1410rpm).
1.018-*1.031*v / 1890-*1905* MHz

So raising the Power Limit by 30W increased the voltage by 0.031v (3.1%), which in turn increased the core clock by 30 MHz (1.6%).

How does temperature affect it? Since many consider 73c to be high.

*#3* Power Limit 110% (330W) & *Fan Speed 100%*
*56c* with the fan speed at *100/100*% (3380/2620rpm).
*1.025*v / *1935* MHz

Fans spinning at almost twice the RPM (85%+) lowered the temperature by *17c* (30%), both the core clock and voltage became a more solid straight line at 1.025v / 1935 MHz.

So the conclusion here is that out of the box, after installing MSI Afterburner and *increasing the power limit* and *fan speed*, the *fastest* the card will possibly go is *1935* MHz.

Anything above 1935 MHz is then considered an overclock. But before getting into that, I was curious how much voltage the GPU actually needed for this speed, because out of the box cards always (to a varying degree) use a much higher voltage than required.

*#4* Power Limit 110% (330W) & Fan Speed 100% & *Manual Voltage/Frequency Curve Editor set to 0.900v/1935MHz*
*51c* with the fan speed at *100/100*% (3380/2620rpm).
*0.900*v / *1935* MHz

Temperature went down by *5c* and it was only using *~85% of the power limit* (TDP). I was impressed, but not for long, as it was not a usable voltage curve. As expected, when you idle in Windows the temperature goes down, and because of that the GPU clock goes up. So when starting a game at 0.900v/1935MHz, the GPU is running close to 2000 MHz; at only 0.900v it is very unstable and crashes fast, before the GPU has time to *heat up* and *clock down* to 1935 MHz.

To solve this early (startup) crash I had to raise the voltage from 0.900v to 0.950v; it then managed to remain stable at the initial low temperature/higher clock.

But what if 0.900v had worked at the initial startup? Since 51c is almost water-cooling temperature, what happens if the fan speed is lowered to a quiet level?

*#5* Power Limit 110% (330W) & *Fan Speed 45%* & Manual Voltage/Frequency Curve Editor set to 0.900v/1935MHz
*66c* with the fan speed at *45/45*% (1530/1220rpm).
*0.900*v / *1935* MHz

I have yet to mention the noise level. *35% fan speed* is *literally inaudible* over (quieter than) my 4x 120mm 850 rpm radiator fans, though at only 35% the card reached closer to *75c*, which I thought was a bit much, as at around 67c the GPU clocked down from 1935 MHz to 1920 MHz. So I upped it to 45%, which is still almost inaudible; *it really is super quiet!* And that's while maxing the card in a 2018 game at 5K resolution: only 66c and almost dead silent. I think I'm safe in saying this is *one of the quietest cards* (but remember it is also the longest at *327mm*, and does not fit in many cases).

After finding out that the card runs 1935 MHz, that it requires 0.950v to remain stable at lower resolution/temperature, and that it can do it all at 45% fan speed, this is the result.

*#6* Power Limit 110% (330W) & Fan Speed 45% & Manual Voltage/Frequency Curve Editor set to *0.950v*/1935MHz
*69c* with the fan speed at *45/45*% (1530/1220rpm).
*0.950*v / *1935* MHz

It went up 3c, and from 85% power limit to 92-96%, by increasing the voltage from 0.900v to 0.950v.

*Out of Box with Increased Power Limit & Manual 100% Fan Speed:* 
56c, 100% Fan Speed, 1.025v / 1935 MHz

*Optimized with Increased Power Limit & Manual 45% Fan Speed & Voltage/Frequency Curve Editor:*
69c, 45% Fan Speed, 0.950v / 1935 MHz (Almost inaudible and lower power usage, same performance)

Finally, overclocking; this is where things get depressing. This is out-of-box overclocking: the max it could do was +90 without running into the same problem as earlier, where the temperature was low, the card clocked higher, and it crashed when starting a game.

*#7* Power Limit 100% (300W) & *Core Clock +90MHz* 
*76c* with the fan speed at *45/45*%, slightly higher ambient.
0.968-*0.981*v / 1920-*1935* MHz

Raising the power limit by 10%.

*#8* Power Limit 110% (330W) & *Core Clock +90MHz* 
*77c* with the fan speed at *45/45*%, slightly higher ambient.
0.987-*1.000*v / 1950-*1965* MHz








So *that was it*: it managed to go from *1935* MHz to *1965* MHz (+1.5%) with +90 on the core clock. Note that this was with a low fan speed to keep it quiet; it would have gone higher with fans at 100%, but I could never stand that noise.

Out of Box:
1875 MHz

Out of Box with Power Target 110%:
1905 MHz (+30 MHz)

Out of Box with Power Target 110% and Fans at 100%:
1935 MHz (+30 MHz)

Overclocking Core Clock +90 with Default Power Target and Fans on 45%:
1935 MHz

Overclocking Core Clock +90 with Power Target 110% and Fans on 45%: 
1965 MHz (+30 MHz)

So from first to last, a 90 MHz increase. *Realistically*, just by installing Afterburner you can get it to 1935 MHz in seconds. After that it's *only a 30 MHz increase*.

This means trying to overclock is almost pointless; 30 MHz (1.5%) is likely only a single FPS.
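Rough math behind that claim, assuming (optimistically) that FPS scales linearly with core clock; the 60 FPS baseline is just a hypothetical number for illustration:

```python
# Upper-bound estimate of the FPS gained from a small core-clock bump,
# assuming perfectly linear, GPU-bound scaling (real games scale worse).
def fps_gain(base_fps: float, base_clock_mhz: float, new_clock_mhz: float) -> float:
    """FPS added by raising the core clock, under linear scaling."""
    return base_fps * (new_clock_mhz / base_clock_mhz - 1)

# Hypothetical 60 FPS at 1935 MHz, bumped to 1965 MHz:
print(f"{fps_gain(60, 1935, 1965):.2f} FPS")  # under 1 FPS even in the best case
```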

It was interesting though seeing how it scales with temperature. Going to full fan speed raised the voltage and core clock by 30 MHz, so using a waterblock to cool the GPU definitely helps, and you could probably get at least 30 MHz more (1.5%). But this is with a 330W BIOS; many cards stop at just 290W.

(Rough temperatures, just to show the steps I noticed)
*Above 50c - 1.042v
Above 60c - 1.031v
Above 70c - 1.024v*

Higher voltage and lower temperature mean the core clock goes up; how much a waterblock helps I will find out, but not anytime soon. Also, I was not able to adjust the voltage up, only down below 1.000v. I was able to set it to 1.250v in Afterburner, but it never actually went above 1.024v.
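For anyone curious, the steps above can be sketched as a tiny lookup. The bins and values are just what I observed in my logs; the value below 50c is an assumption, and GPU Boost's real table is finer-grained and not publicly documented:

```python
# Observed temperature -> boost voltage steps (rough, from my own logs).
def boost_voltage(temp_c: float) -> float:
    if temp_c > 70:
        return 1.024
    if temp_c > 60:
        return 1.031
    if temp_c > 50:
        return 1.042
    return 1.050  # assumed next step below 50c, not verified

for t in (45, 55, 65, 75):
    print(f"{t}c -> {boost_voltage(t):.3f}v")
```

It makes the water-cooling argument obvious: every bin you drop buys back a voltage step, and with it a clock step.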

So why is the overclock this low? What prevented me from going above 1965 MHz: power limit, temperature or voltage?

Here is where both the expected and the unexpected revealed themselves. From the results above, and the fact that the power limit usually sat at 100-105%, you would assume *temperature is the problem*, along with the *lack of voltage control*.. but that is not the case at all.

After two other members here confirmed it was safe to flash BIOS on the Gaming X Trio, I decided to give it a go.

*#9* *GALAX BIOS* & *Power Limit 126% (380W)* & Core Clock +90MHz
*78c* with the fan speed at *45/45*%. (Disregard temperature as you can adjust the voltage down from 1.043v using Voltage/Frequency Curve Editor)
*1.043*v / *1965* MHz
 
 
*Click SHOW for Graphs!*



Spoiler















Okay, so it worked fine so far; with the increased power limit the voltage instantly went up to a rock solid 1.043v, never deviating, and the same with the core clock: a completely straight line at 1965 MHz.

So what happened when I increased the +90 core clock further? The GPU shot up to *2040* MHz! Another 75 MHz just from typing in a random number (which I have now forgotten), on a hot (75c), silent (1500rpm) card; now that is *impressive*! It did crash soon after, most likely because of the high temperature (unstable); water cooled, it would surely go even further than 2050 MHz, of that I am sure.

The final conclusion is that *the power limit is the main problem, even on a quiet, high-temperature, air-cooled card.* What I am saying is that changing the BIOS is not only for water-cooled cards; there are several percent of performance to gain.

I feel for anyone stuck with a 290W power limit BIOS, or a water-cooled Founders Edition that cannot flash the BIOS (yet); *it really restricts overclocking of any kind*.

Of course, realize there is a lot more testing to be done on the *380W BIOS*, but I feel I should wait until I have a water-cooled card, to compare it on air versus water, preferably on the same day, so that temperature, game patches and drivers are all the same.

Also, there is no way I am keeping the MSI Gaming X Trio (it is a visually stunning and quiet card, but other than that it has nothing going for it at all). The *best cards are, as usual, the cheapest reference PCB ones*, because *almost every card on air will hit the power limit* without replacing the BIOS, which has to be done anyway if you're pushing the card on water. So it does not really matter which card you buy, as long as it is reference PCB (so most waterblocks fit, unless you want a specific block). There are 3 notable cards that have exceptionally high power limits out of the box: the Gigabyte Gaming OC at 366W, which does not accept a reference waterblock even though it is a reference PCB card, because they added a FAN/RGB connector in a stupid location; the EVGA FTW3 Ultra, a huge card with a custom PCB, at 373W; and last, surprisingly, the Galax OC reference PCB card with the 380W BIOS (I cannot confirm it ships with that BIOS yet, but according to Tom's Hardware their sample had it).








TL;DR: Nvidia has the RTX 2080 Ti in a straitjacket; 320W on the Founders Edition is a travesty.


----------



## arrow0309

zhrooms said:


> MSI GeForce RTX 2080 Ti Gaming X Trio - Mini Review
> 
> *[snip - full review above]*
> 
> TL;DR Nvidia has the RTX 2080 Ti in a straight jacket, 320W on the Founders Edition is a travesty.


Excellent review mate, congrats! :specool:
Even if I was hoping for a somewhat higher sustained OC frequency (with the 380W BIOS at least).


----------



## BigMack70

So what sort of performance delta is there between a 380W and a 320W BIOS?


----------



## zhrooms

arrow0309 said:


> Even if I was looking for some more sustained OC freq (with the 380W bios at least).


 
I will test it a bit more soon; I want to see what it can do at low ambient (0-5c) and 100% fan speed before I send it back, and get a few nice benchmark scores.
 


BigMack70 said:


> So what sort of performance delta is there between a 380W and a 320W BIOS?


 
What do you mean?


----------



## BigMack70

I mean, how much more performance in games do you get on a 380W BIOS compared to 320W? It's 18% more power on a power-limited GPU; what does that translate to in real-world performance?


----------



## zhrooms

BigMack70 said:


> I mean, how much more performance in games do you get on a 380W BIOS compared to 320W. It's 18% more power on a power-limited GPU; what does that translate to in real world performance?


 
Yeah, that was what I assumed you meant, but that is impossible to say until someone actually tests it, and there are multiple ways to do it.

For example, on air or on water? If on air, what fan speed, loud or quiet? Low or high ambient temperature? What resolution, as it puts different stress on the GPU?

I am just as curious as you are; below is what I personally want to know, at 4-5K resolution.

Room Temperature ~25c
1. Air Cooled with Quiet Fan Speed = 1965 MHz (330W BIOS)
2. Air Cooled with Quiet Fan Speed = xxxx MHz (380W BIOS)

Room Temperature ~25c & ~0c (To get water cooled temperatures while still on air)
3. Air Cooled with Loud Fan Speed = xxxx MHz (330W BIOS)
4. Air Cooled with Loud Fan Speed = xxxx MHz (380W BIOS)

5. Water Cooled = xxxx MHz (330W BIOS)
6. Water Cooled = xxxx MHz (380W BIOS)

I'd also want to know what the stock speeds are on water, and how much the lower temperature helps.

*1* is already tested by me on the MSI Gaming X Trio, I do want to try *2*, *3*, *4* before I send it back.

I'm unable to test *5* & *6* because I simply do not have a reference card or waterblock (yet). The closest I can do now is try to simulate it in a low-ambient environment.
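For anyone wanting to reproduce or extend this, here is the same matrix as a small Python checklist; only combination 1 has a result so far:

```python
# The cooling x BIOS test matrix, as a fill-in-as-you-go checklist.
from itertools import product

cooling = ["air (quiet fans)", "air (loud fans)", "water"]
bios_w = [330, 380]

matrix = [
    {"cooling": c, "bios_w": b, "result_mhz": None}  # None = untested
    for c, b in product(cooling, bios_w)
]
# Combination 1 (quiet air, 330W BIOS) is the only one tested: 1965 MHz.
matrix[0]["result_mhz"] = 1965

for row in matrix:
    status = row["result_mhz"] or "untested"
    print(f'{row["cooling"]:17s} @ {row["bios_w"]}W -> {status}')
```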


----------



## NAIM101

Edge0fsanity said:


> I have it and it is reference and will work with the EK Vector block. Everything about that review you posted is correct. Bottom-tier product if you're going to air cool; it runs about 10C hotter than an FE. Still, it has the better chip in it and works with waterblocks. I blocked mine, flashed the BIOS, and it has decent clocks.



What do you mean it has the better chip? Isn't it all the same chip? FE and aftermarket 2080 Ti cards, all made from the same Nvidia 2080 Ti chip?


----------



## nyk20z3

Waiting for the MSI 2080 Ti Lightning myself....


----------



## exploiteddna

Finally got my card installed. Sorry for the ghetto junction where my old card and block were removed. But I’m not blocking this card until I have the new cpu and motherboard which haven’t arrived yet. Anyways, this is my ‘join the club’ photo proof, MSI 2080ti Duke OC


----------



## GAN77

Any news about samsung GDDR6 RAM on turing?

https://news.samsung.com/global/sam...-nvidia-quadro-professional-graphics-solution
https://www.samsung.com/semiconductor/dram/gddr6/


----------



## Okt00

NAIM101 said:


> What do you mean it has better chip? Isn't it all the same chip? FE and After market cards 2080TI. All made from Nvidia same 2080TI chip?



https://www.techpowerup.com/247660/...overclocking-forbidden-on-the-cheaper-variant


> We reached out to industry sources and confirmed that for Turing, NVIDIA is creating two device IDs per GPU to correspond to two different ASIC codes per GPU model (for example, TU102-300 and TU102-300-A for the RTX 2080 Ti). The Turing -300 variant is designated to be used on cards targeting the MSRP price point, while the 300-A variant is for use on custom-design, overclocked cards. Both are the same physical chip, just separated by binning, and pricing, which means NVIDIA pretests all GPUs and sorts them by properties such as overclocking potential, power efficiency, etc.
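If you want to check which variant you have programmatically, something like the sketch below works off the PCI device ID (readable via GPU-Z, or `lspci -nn` on Linux). The two IDs here are the ones commonly reported for the 2080 Ti and should be double-checked against your own card:

```python
# Sketch: map a TU102 PCI device ID to its 300 / 300-A designation.
# The IDs below are the commonly reported ones for the RTX 2080 Ti;
# verify against your own card (e.g. GPU-Z) before trusting them.
KNOWN_TU102 = {
    0x1E04: "TU102-300 (non-A, MSRP cards, no factory OC)",
    0x1E07: "TU102-300-A (A-bin, factory-overclocked cards)",
}

def describe(device_id: int) -> str:
    return KNOWN_TU102.get(device_id, f"unknown device id {device_id:#06x}")

print(describe(0x1E07))
```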


----------



## Okt00

michaelrw said:


> Finally got my card installed. Sorry for the ghetto junction where my old card and block were removed. But I’m not blocking this card until I have the new cpu and motherboard which haven’t arrived yet. Anyways, this is my ‘join the club’ photo proof, MSI 2080ti Duke OC


A Male to Male coupler.... genius! Glad to see a real compression fitting on there though. Not that there is a lot of force on a straight through. 

What CPU and Motherboard do you have on the way?


----------



## truehighroller1

nyk20z3 said:


> Waiting for the MSI 2080 Ti Lightning myself....


MSI support sucks so bad, I'm never getting another one of their cards in my life. They wouldn't release a better, less TDP-restrictive BIOS for the 1080 Ti, yet EVGA did for their Kingpin. That was the end of me buying their products, forever more. From now on I'm buying Kingpin instead, and besides the BIOS issue, EVGA has way better support options, like their Step-Up program.


----------



## truehighroller1

Double post, keep the other one. Sorry.


----------



## jabtn2

Got a baseline run in with my cards in Time Spy Extreme. I was short on time before heading out for a few days, but I did a quick and dirty overclock on the CPU first. 14501 isn't bad for the effort I've put in so far. I only had time for a simple overclock on the cards; I didn't try more than the one setting, +100/+1200 on both. I don't game too much, just a little Forza here and there. The computer is for work, and I like to have fun with my builds.


----------



## truehighroller1

I wanted to say this earlier when I saw someone state that they wanted an FE as opposed to the other reference cards, the A-variant 2080 Tis. You beat me to it.



Okt00 said:


> https://www.techpowerup.com/247660/...overclocking-forbidden-on-the-cheaper-variant
> We reached out to industry sources and confirmed that for Turing, NVIDIA is creating two device IDs per GPU to correspond to two different ASIC codes per GPU model (for example, TU102-300 and TU102-300-A for the RTX 2080 Ti). The Turing -300 variant is designated to be used on cards targeting the MSRP price point, while the 300-A variant is for use on custom-design, overclocked cards. Both are the same physical chip, just separated by binning, and pricing, which means NVIDIA pretests all GPUs and sorts them by properties such as overclocking potential, power efficiency, etc.


I'll quote my earlier post as it sums this whole launch up pretty well with all of the information you listed above taken into consideration as well. 




truehighroller1 said:


> Yeah, nO. To anyone seeing me quoting this guy's post and wondering: get an A variant, which on paper from NVIDIA is the one that can be overclocked at this point, as the non-A variants have been stated by NVIDIA themselves to be binned as crappier chips. Companies paying NVIDIA the extra cash up front for the A variant can then overclock the cards, fit better custom cooling, and push them as better overclockers in order to charge more money.
> 
> This launch is such a farce, it's unreal. They're acting like the MSM and lying about every possible piece of information to cause confusion, expecting to get away with bald-faced lying, while at the same time being honest with the retailers BUYING said A variants in order to sell them at a profit. This whole mess is so over-hyped and straight up bald-faced lied about, it's unreal.


The post edit system is messed up....


----------



## jabtn2

All else aside, aren't FE cards "A" cards as well? Besides currently being able to flash other BIOS versions, are there any downsides when comparing a Founders Edition card to a third-party "A"-binned reference PCB card?


----------



## Zammin

jabtn2 said:


> All else aside, aren't FE cards "A" cards as well? Besides currently being able to flash other bios versions, are there any downsides when comparing a Founders Edition card to a 3rd party "A" binned reference pcb card?


Yes, the FE cards are 300A chips as well, so they are in the same category as the AIB reference designs with the 300A chip. There shouldn't be any downsides when comparing the two unless the cooler on one is worse than the other. The upside to the 300A reference designs is that you get the same 300A chip as the Founders Edition while also being able to flash the BIOS.


----------



## seanbarkley

jabtn2 said:


> All else aside, aren't FE cards "A" cards as well? Besides currently being able to flash other bios versions, are there any downsides when comparing a Founders Edition card to a 3rd party "A" binned reference pcb card?


Of course FE are A cards.

https://videocardz.net/nvidia-geforce-rtx-2080ti/

I would say the only downside is temperatures, but 5 degrees less doesn't make up for the ugliness and cheap looks of the custom models.


----------



## jabtn2

I almost felt bad taking the coolers off of my FE cards. They are easily the best looking cards I have had. I am a little disappointed about flashing being locked for now but it is not a big deal. Some recent posts were making me second guess FE cards being higher binned models. Thanks for the clarification.


----------



## nycgtr

Nevermind, I had to disable write protect on this one card only, for some reason.


----------



## exploiteddna

Okt00 said:


> A Male to Male coupler.... genius! Glad to see a real compression fitting on there though. Not that there is a lot of force on a straight through.
> 
> What CPU and Motherboard do you have on the way?


Well, I preordered the 9900K from Amazon but have yet to receive an estimated ship date. I haven't even ordered the motherboard yet, but as soon as I get notification that the Maximus XI Formula is back in stock, I'm going to grab one. I wanted to get the Extreme for an extra $50, but they don't even have a release date for it yet; support told me they're hoping to release it sometime in the next few months, and that's not good enough for me. I like the Extreme because it doesn't have as much of that armor I'm not really a fan of, and it also has the added DIMM.2 slot that would be cool to use for my SSD. But whatever, the Formula is decent and I can try to hook up the VRM heatsink too.

The new case is here already, a Lian Li O11 Dynamic in black, with the add-on vertical GPU mount and riser. Then I'm swapping my frosted res for a clear res tube (both are Primochill D5-enabled res and top), and I have about 10 meters of the smaller-diameter Bitspower Crystal Link acrylic tubing.

The new EK Velocity CPU block is here, as well as the Vector block that's going on the 2080 Ti. I also have a new EK 360 PE rad that I'll either add to the loop or use to replace my Swiftech 360.

I also got a 32GB kit that I'm going to swap in for my 16GB kit (same speeds).

I think that's about it... pretty much a complete upgrade and water loop overhaul.


----------



## GraphicsWhore

Playing around with the Gigabyte BIOS on my EVGA XC Gaming.

OCs about the same as the EVGA and Galax BIOSes (+120/+1050); similar scores in Time Spy and Fire Strike.

It's the only one I've spent extended time undervolting. I wanted to keep it at or under 1 V, and after a few tests it's a steady 2040 MHz @ 1000 mV with surprisingly solid benchmark scores. As it stands, this is currently in first place to be my 24/7 OC.


----------



## Mikecacho

Anyone else have a Zotac AMP? Just want to compare. Here are the clock settings I currently run: +45 core / +1000 mem, with a boost up to 2100 on the core.


----------



## CallsignVega

jabtn2 said:


> I almost felt bad taking the coolers off of my FE cards. They are easily the best looking cards I have had. I am a little disappointed about flashing being locked for now but it is not a big deal. Some recent posts were making me second guess FE cards being higher binned models. Thanks for the clarification.


IMO when it comes to computer parts, form should always follow function. Yes, the FE air cooler looks good, but it has proven to be worse than the AIB air coolers.


----------



## carlhil2

Mikecacho said:


> Anyone else have a Zotac Amp? just want to compare. Here are my clock settings that i currently run: +45 core / +1000 mem with a boost up to 2100 on core.


Nice. Mine boosts to 2040 at stock, so +60 gives me 2100 MHz. Mine OCs well though...


----------



## Vipeax

jabtn2 said:


> I almost felt bad taking the coolers off of my FE cards. They are easily the best looking cards I have had. I am a little disappointed about flashing being locked for now but it is not a big deal. Some recent posts were making me second guess FE cards being higher binned models. Thanks for the clarification.


You could try it with the --overrideboard parameter. I've been digging into the application today and it appears that there should be an option to override the board ID mismatch error after all.










The command list only hints at the overridesub parameter (6 being the shortcut) to override a PCI subsystem ID mismatch, but the code for the other commands is still there.



















So, the actual command list for the parser no longer contains the overrideboard parameter, which is why it would never be recognized. I had to patch NVFlash to fix that anyway, so I went ahead and just overwrote the logic that requires the overrideboard parameter in the first place.

Link for the patched NVFlash: https://drive.google.com/open?id=1MxJN8_-wq2707DCRQqfXSA9J99slA6lG

Screenshot with my RTX 2080 FE card. This card is actually being sent back once Digital River finally gets back to me with shipping details (I requested a refund 4 weeks ago, when I got my hands on my Sea Hawk X for my mini-ITX case, and I've made three follow-up calls about it already), so I don't want to bother flashing a card that is flagged to go back for a refund very soon:










If anyone experienced in flashing, with another card, another PC, or an iGPU available to flash the card back in case of a bad flash, is willing to give this a try, that would be much appreciated. I'd prefer someone who has installed a waterblock or similar, since such a person would have voided their warranty already. I'd be more than happy to work with someone to make a modified version that works on FE cards.


----------



## Djreversal

Can anyone link me to the 3 milliohm resistor I should be ordering for the shunt mod? I don't want to order the wrong one. Going to pick up the 2080 Tis now.
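For anyone sizing that resistor: a quick sketch of the parallel-resistance math, assuming 5 mΩ stock shunts (a typical value on these boards, not something verified for every PCB) and the 3 mΩ part mentioned above:

```python
# Shunt mod math: a resistor soldered in parallel with the stock shunt lowers
# the resistance the card measures current across, so it under-reports power.
# Assumed values -- verify against your own board before ordering/soldering.
stock_shunt = 0.005   # ohms (5 mOhm, a common value on 2080 Ti boards)
added = 0.003         # ohms (the 3 mOhm resistor asked about)

combined = 1 / (1 / stock_shunt + 1 / added)   # parallel resistance
reported_fraction = combined / stock_shunt      # fraction of real draw the card "sees"
effective_limit_multiplier = 1 / reported_fraction

print(f"combined shunt: {combined * 1000:.3f} mOhm")
print(f"card reports {reported_fraction:.1%} of actual draw")
print(f"effective power limit multiplier: ~{effective_limit_multiplier:.2f}x")
```

With those assumed values the card would read only ~37.5% of its real draw, so a 250 W limit behaves like roughly 2.7x that. Check your actual shunt value first.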


----------



## jabtn2

Vipeax,

I would be willing to try this. I have sli water blocked cards. I'm out of town right now but I'll be back at my house on Tuesday.


----------



## Zurv

Vipeax said:


> If anyone who is experienced in flashing and has another card, another pc or an iGPU available to flash back the card in case of a bad flashing is willing to give this a try that would be much appreciated. I prefer someone who installed a waterblock or something as such a person would have voided their warranty already. I'd be more than happy to work with someone to make a modified version that works on FE cards.


I'll try it when I get my cards on Monday. That said, putting a waterblock on an NVIDIA card doesn't void the warranty. (It doesn't for EVGA either; Gamers Nexus confirmed that this week, even if you take off/break the EVGA sticker on the screw.)


----------



## torqueroll

Joining the owners club. Here is the mandatory picture of the card in the case.

I've ordered a new card and will return this one due to excessive coil whine. Hopefully the new one is better.


----------



## Vipeax

Zurv said:


> I'll try it when i get my cards on Monday. That said, putting a waterblock on an Nvidia card doesn't void the warranty. (it doesn't for EVGA either. gamers nexus confirmed that this week. Even if you take off/break the evga sticker on the screw.)


Ah, I knew about EVGA not caring (and it seems MSI is sort of alright with the sticker being broken as well), but NVIDIA themselves not caring is new to me.


----------



## jabtn2

Vipeax said:


> Ah, I knew about EVGA not caring (and it seems MSI is sort of alright with the sticker being broken as well), but NVIDIA themselves is new to me.


My Nvidia cards have never had any sort of breakable warranty sticker when taking them apart, for whatever that's worth.

In my experience, MSI does not care about warranty stickers either. I asked a customer service rep about them once and they told me they're just there to deter large-scale warranty fraud.


----------



## Zurv

The sticker thing in general is illegal anyway. (In the US)
https://www.digitaltrends.com/computing/ftc-warranty-stickers-illegal/


----------



## Vipeax

Alright, good stuff (I'm European). Let's ignore the warranty part then and replace it with the fact that a full-size waterblock will cool the PCB components much better than the stock cooler. While the vapor chamber does a good job covering all relevant parts of the PCB, the entire vapor chamber block can also become quite hot. A watercooled GPU will keep those components much cooler, and while the reference PCB is overengineered, I do feel more comfortable with people raising their power limit to high levels (350W+) with adequate PCB cooling.


----------



## tistou77

Hello

I have an EK RGB waterblock (nickel + plexi) and I'm hesitating between the black and the nickel EK backplate.

Is the black really black?
I worry the nickel will look weird (it's not really chrome).

Thanks


----------



## Mikecacho

carlhil2 said:


> Nice. mine boost to 2040 at stock so +60 gives me 2100mhz. mine OC well though...


Anything past +45 on the core can result in crashing, but with it hitting 2100 at the current OC I am more than happy. What is the max OC you can achieve?


----------



## nyk20z3

truehighroller1 said:


> MSI support sucks so badly I'm never getting another one in my life. They wouldn't release a less TDP-restrictive BIOS for the 1080 Ti, yet EVGA did for their Kingpin. That was the end of me buying their products, forever. From now on I'm buying Kingpin instead; besides the BIOS issues, EVGA has way better support options, like their Step-Up program.


Well, I had a 780 Lightning and it was a beast; I never had any issues with it. I then bought a 980 Matrix, then a Strix 1080 Ti OC. The problem is Asus killed the Matrix line, so the Lightning is the only other super high-end card besides the Kingpin that I have interest in.


----------



## Outcasst

Anybody know of any benchmarks/games that are more forgiving on power usage? Trying to see what this FE chip can really do, but the 123% power limit is killing me. Currently using Heaven, +170 core and +800 memory so far.


----------



## tconroy135

Outcasst said:


> Anybody know of any benchmarks / games that are more forgiving on the power usage? Trying to see what this FE chip can really do but the 123% power limit is killing me. Currently using Heaven. +170 core and +800 memory so far.


I think the answer is 'all' games. The 2080 Ti might be a case where benchmarks are not good for overclocking. Even the OC Scanner in MSI Afterburner pushes the power limit and might undersell your overclocking performance.

What is the actual clockspeed your card is settling on at +170 and at what fan speed?


----------



## Vipeax

Outcasst said:


> Anybody know of any benchmarks / games that are more forgiving on the power usage? Trying to see what this FE chip can really do but the 123% power limit is killing me. Currently using Heaven. +170 core and +800 memory so far.


Furmark stress tests (not the benchmark) are light on the TDP if you just want to see how far you can go. Alternatively, you could download a DXR tech demo from Microsoft: high load, low TDP usage.

Furmark:









DXR demo:









The fun thing about the DXR demo is that it contains no error handling, so the tiniest instability will crash the application as soon as a single rendering error occurs. That actually makes it an easy stress test for your core overclock while you're sleeping or browsing.


----------



## Outcasst

tconroy135 said:


> What is the actual clockspeed your card is settling on at +170 and at what fan speed?


In Heaven it's all over the place. I'd say on average it's at 2050 MHz; it goes up to 2100 and as low as 2010.

100% fan speed, max of 71C.

I think it's a fairly decent chip, but I was going to return it; with this gimped power limit I don't think it's worth keeping just to eventually put under water. Plus, one of the fans on this FE cooler is very loud at low RPM.

Does anybody know if the Gigabyte Aorus will be a custom or reference PCB? And is there likely to be a waterblock available? I believe the 1080 Ti Aorus had one.


----------



## cgcross

I'm able to stay stable in benchmarks at 2160-2190 with my Ultra XC: light liquid metal shunt mod, on a custom loop, Galax BIOS with forced 1093 mV.

https://www.3dmark.com/spy/4786898


----------



## carlhil2

Mikecacho said:


> Anything past +45 on the core can result in crashing, but with it hitting 2100 at the current OC i am more than happy. What is the max OC you can achieve?


2145 (+105), though in games I push ~2115/2130... Your stock max boost is 2055, right? I thought you would be able to OC higher with that chip. Oh, and I flashed to the Gigabyte 366W BIOS, water cooled..


----------



## Sparc145

Vipeax said:


> So, the actual commandlist for the parser doesn't contain the overrideboard parameter anymore so that is why it would never catch on. To fix that I had to patch NVFlash anyway, so I went ahead and just overwrote the logic that requires the overrideboard parameter in the first place.
> 
> Link for the patched NVFlash: https://drive.google.com/open?id=1MxJN8_-wq2707DCRQqfXSA9J99slA6lG
> 
> -------
> 
> If anyone who is experienced in flashing and has another card, another pc or an iGPU available to flash back the card in case of a bad flashing is willing to give this a try that would be much appreciated. I prefer someone who installed a waterblock or something as such a person would have voided their warranty already. I'd be more than happy to work with someone to make a modified version that works on FE cards.


Flashed my FE 2080 Ti this way just fine. Thanks for doing this. It's interesting they just removed the flag from the binary; I figured this was going to be an on-card lock...

Saw a 100-point bump in Time Spy Extreme, and GPU-Z reports higher power draw with the Galax BIOS.


----------



## Vipeax

Sparc145 said:


> Flashed my FE 2080ti this way just fine. Thanks for doing this. It's interesting they just removed the flag from the binary, I figured this was going to be an on-card lock...
> 
> Saw a 100pt bump in timespy extreme and GPUz reporting higher power draw with the galax bios.


Excellent news, thanks!


----------



## toncij

torqueroll said:


> Joining the owners club. Here is the mandatory picture of the card in the case.
> 
> I've ordered a new card and will return this one due to excessive coil whine. Hopefully the new one is better.


What case is that beauty?


----------



## jabtn2

Sparc145 said:


> Flashed my FE 2080ti this way just fine. Thanks for doing this. It's interesting they just removed the flag from the binary, I figured this was going to be an on-card lock...
> 
> Saw a 100pt bump in timespy extreme and GPUz reporting higher power draw with the galax bios.


This is awesome news. Can't wait to get home and try it out.


----------



## Vipeax

Sparc145 said:


> ...


Thanks again Sparc145, could you perhaps also post a screenshot of the NVIDIA BIOS section on the Advanced tab in GPU-Z?

For example:


----------



## Pepillo

Sparc145 said:


> Flashed my FE 2080ti this way just fine. Thanks for doing this. It's interesting they just removed the flag from the binary, I figured this was going to be an on-card lock...
> 
> Saw a 100pt bump in timespy extreme and GPUz reporting higher power draw with the galax bios.


How does it work? Do you first run the patched NVFlash and then change the BIOS in the usual way, or is it a single process? Thank you


----------



## Vipeax

Pepillo said:


> How does it work? First you run the patched Nvflash and then you can change the BIOS in the usual way or do it in a single process? Thank you


It's like any other flash using NVFlash; it just doesn't stop you from actually performing the flash.


----------



## Pepillo

Vipeax said:


> It's like any other flash using NVFlash. It just doesn't stop you from actually performing the flash.


Ok, thanks


----------



## arrow0309

torqueroll said:


> Joining the owners club. Here is the mandatory picture of the card in the case.
> 
> I've ordered a new card and will return this one due to excessive coil whine. Hopefully the new one is better.


Hi, nice setup! :specool:

I already have the same block (Vector nickel acetal RGB).
So you're saying that even with liquid cooling you notice the coil whine?
May I ask what model of 2080 Ti you have?

I hope I won't regret not choosing the Bitspower block this time (or a German Watercool or Aqua Computer one). I know the EK block won't cover and make contact with the chokes as well, but I honestly didn't think coil whine would be an issue.


----------



## Vipeax

I noticed the new version just now. They didn't include any release notes by the looks of it, but I assume the update was useful in some way.

https://drive.google.com/open?id=1q7mF8J9QF2QRXQfpQPbeEEOlFnKU01df

This is the same patch applied to version 5.527.0.


----------



## bp7178

tistou77 said:


> Hello
> 
> I have a waterblock ekwb RGB (nickel + plexi) and I hesitate to take the backplate ekwb black or nickel
> 
> Black is really black ?
> I fear that the nickel makes "weird" (it's not really chrome)
> 
> Thanks


The nickel that EK uses has a slight yellow tint to it. It is not chrome, which typically has a more blue tint.


----------



## Mikecacho

carlhil2 said:


> 2145(+105) even though in games I push @2115/30.... your stock max boost is 2055 right? I thought that you would be able to OC higher with that chip..oh, and I flashed to the Gigabyte 366w bios, water cooled..


I see. I am on air; water cooling will come next, and I assume then I should get around +80-100. It's between 2040 and 2050 at stock, I believe. As for the BIOS, I tried EVGA's, but I did not like having to set up a custom curve and did not see any gains, as the base boost is 15 lower and the extra power limit did nothing noticeable.

With that being said, is it just me or is the Firestorm app a POS? It's poorly optimized and stutters like hell for about 10 seconds after clicking apply, so I stick with the MSI app.


----------



## tistou77

bp7178 said:


> The nickel that EK uses has a slight yellow tint to it. It is not chrome, which typically has a more blue tint.


Ok thanks :thumb:
I think I'll take the black; the nickel would look weird then


----------



## stefxyz

78 min FPS in the Far Cry 5 benchmark at Ultra with SMAA and HDR10 at 4K. I am impressed (watercooled 2080 Ti at 32 °C, 10 above ambient, running >2000 MHz core and +1000 MHz on memory):


----------



## Zammin

So jelly of you guys who have your cards already. I ordered mid last month with an ETA of the start of this month, but every time the ETA passes, the store I ordered from just pushes it back another week. Now they aren't updating the ETA at all. Last week they told me "Friday at the latest," and when I called them on Saturday they said they have no idea when they'll be receiving the cards. They advertised it as a pre-order, whereas in reality it was a back-order, since they clearly didn't have any confirmed inbound stock..

My guess is I'll probably get it mid to late November at this bloody rate..


----------



## DrunknFoo

Received my Asus RTX 2080 Ti Dual OC a week ago; I was barely able to throw it into a super cramped i3 HTPC for functionality testing and some half-a$$ed OCing. Played around with it for maybe 30-45 minutes before removing it.

The card itself I think is pretty decent... With little to no airflow in the case and little to no tweaking, the core is stable at/peaks 2150-2160 (half-assed as I said, didn't bother pushing memory much), with a max temp of 71C @ 45% fan RPM.
Can't say much else about the heatsink and backplate: poorly designed (as you all know).

Whenever the build is complete and running, I'll start the real tweaking once I get the CPU and RAM stable.
The real build is waiting on an Asus XI Formula and a 9900K (whenever/if I can get my hands on them).

Any recommendation on which BIOS I should slap on the Dual OC? The shunt mod will depend on my temps and overall power draw... I think I might end up pushing the Corsair RM850x.

Some pics of the incomplete rig and the naked card


----------



## Zammin

DrunknFoo said:


> Received my Asus RTX 2080 TI Dual OC a week ago, barely was able to throw it into a super cramped i3 htpc for functionality testing and some half a$$ed OCing. Played around with it for maybe 30-45mins before removing it.
> 
> The card in itself I think is pretty decent... With little to no air flow in the case and little to no tweaking the core stable/peaks 2150-2160, (half assed as I said, didn't bother pushing memory much) max temp hitting 71C @45% rpm
> Can't say much else about the heatsink and backplate, poorly designed (as you all know).
> 
> Whenever the build is is complete and running, I'll start the real tweaking once I get the cpu and ram stable.
> The real build is waiting on a asus xi formula and 9900k (whenever if I can get my hands on them)
> 
> Any recommendation on which bios I should slap on the dual oc? shunt mod will depend on my temps and overall power draw... I think I might end up pushing the corsair rm850x,
> 
> Some pics of the incomplete rig and the naked card


Looking good man! I will be very interested to see how you go with the 2080Ti and the 9900k in the loop. I was originally planning on getting the 9900k to install at the same time as the 2080Ti but I'm a bit concerned now that I've seen how hot the i9 is even under water with a 5Ghz OC. I was expecting much better temps with the soldered IHS.

I'm starting to lean toward keeping my 5Ghz 8700k w/liquid metal and just chucking the 2080Ti into my existing system/loop. I have two 360 rads (EK XE360 and XSPC EX360 slim) and I know the 2080Ti will be putting out more heat than my 1080Ti, I'm just not sure how my system would handle both the 2080Ti and an overclocked 9900k without running the fans really hard to keep the water temps from rising too much. I game at 1440p 165Hz so hopefully the 8700k will do well with the 2080Ti.


----------



## DrunknFoo

Zammin said:


> Looking good man! I will be very interested to see how you go with the 2080Ti and the 9900k in the loop. I was originally planning on getting the 9900k to install at the same time as the 2080Ti but I'm a bit concerned now that I've seen how hot the i9 is even under water with a 5Ghz OC. I was expecting much better temps with the soldered IHS.
> 
> I'm starting to lean toward keeping my 5Ghz 8700k w/liquid metal and just chucking the 2080Ti into my existing system/loop. I have two 360 rads (EK XE360 and XSPC EX360 slim) and I know the 2080Ti will be putting out more heat than my 1080Ti, I'm just not sure how my system would handle both the 2080Ti and an overclocked 9900k without running the fans really hard to keep the water temps from rising too much. I game at 1440p 165Hz so hopefully the 8700k will do well with the 2080Ti.


The water temperature of some 8700K and 1080 Ti setups can remain relatively the same when comparing a stock-TIM 8700K against a delidded chip at the same volts/clocks. The total amount of heat generated by the die is roughly the same (depending on LLC behavior); the difference is the efficiency of heat transfer from the silicon through the IHS into the loop. With that said, it is safe to say 2x 360 rads are pretty much overkill for the above setup, as similar setups can easily run 2x 1080 Ti in a single loop.

I would think the 2x 360 rad setup would now be considered adequate rather than overkill with the newer-gen parts? Worst case scenario, change the existing fan curve, either by increasing RPM or adjusting the temp threshold.

I would expect water temps to increase by maybe about 5C going from a 1080 Ti to a 2080 Ti, and maybe another 5C for the 9900K over the 8700K, but we'll have to wait and see...
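As a rough sanity check on those 5C guesses, here's a back-of-the-envelope steady-state estimate. The ~20 W dissipated per °C of water-to-air delta per 360 mm rad, and the per-component heat loads, are ballpark assumptions rather than measurements:

```python
# Rough steady-state water temperature estimate for a custom loop.
# All figures below are ballpark assumptions, not measurements.
W_PER_C_PER_360_RAD = 20.0   # watts per degC of water-air delta, moderate fans
num_rads = 2

loads = {
    "8700K + 1080 Ti": 150 + 250,   # assumed heat dumped into the loop, watts
    "9900K + 2080 Ti": 200 + 330,
}

for setup, watts in loads.items():
    # at equilibrium the rads shed exactly what the blocks put in
    delta = watts / (W_PER_C_PER_360_RAD * num_rads)  # water temp above ambient
    print(f"{setup}: ~{watts} W -> water ~{delta:.1f} C above ambient")
```

Under those assumptions the full upgrade adds ~3 °C to the water, in the same neighborhood as the guesses above; raising fan RPM increases the W/°C figure and shrinks the delta, which matches the "worst case, change the fan curve" suggestion.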


----------



## Djreversal

Just got my two new 2080 Tis today... water blocks and new resistors for the shunts will be here Tuesday. Hopefully I'll get them in by Wednesday/Thursday at the latest and see how things go.


----------



## VETDRMS

cgcross said:


> I'm able to stay stable in benchmarks at 2160-2190 with my Ultra XC. Light liquid metal shunt mod, on a custom loop. Galax BIOS forced 1093mv.
> 
> https://www.3dmark.com/spy/4786898


How are you forcing constant voltage?


----------



## Esenel

Vipeax said:


> You could try it with the --overrideboard parameter. I've been digging into the application today and it appears that there should be an option to override the board ID mismatch error after all.
> 
> 
> 
> If anyone who is experienced in flashing and has another card, another pc or an iGPU available to flash back the card in case of a bad flashing is willing to give this a try that would be much appreciated. I prefer someone who installed a waterblock or something as such a person would have voided their warranty already. I'd be more than happy to work with someone to make a modified version that works on FE cards.


Thanks a lot for the updated NVFlash.

I was able to flash my 2080Ti founders now.
Although the result is very underwhelming :-D

TimeSpy on Founders Bios 320W: 16459 Points
TimeSpy on GX Bios 380W: 16610 Points
https://www.3dmark.com/spy/4800946

So I gained 1% in performance by adding ~18% more power?
At least I surpassed JayzTwoCents :-D
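The scaling really is that flat. Plugging the two runs in:

```python
# Performance vs power-limit scaling from the two Time Spy runs above.
founders = {"score": 16459, "power_limit_w": 320}
galax    = {"score": 16610, "power_limit_w": 380}

perf_gain = galax["score"] / founders["score"] - 1                    # ~0.9%
limit_gain = galax["power_limit_w"] / founders["power_limit_w"] - 1   # 18.75%

print(f"score gain: {perf_gain:.2%} for a {limit_gain:.2%} higher power limit")
```

Keep in mind 380 W is the limit, not measured draw, so the card may simply not have been power-throttled much at 320 W in the first place.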


----------



## jacknhut

Was anyone able to flash their Gigabyte Windforce 2080 Ti stock bios to Galax Bios or Gigabyte Gaming OC bios linked in the 1st page?


----------



## Vipeax

Esenel said:


> Thanks a lot for the updated NVFlash.
> 
> I was able to flash my 2080Ti founders now.
> Although the result is very underwhelming :-D
> 
> TimeSpy on Founders Bios 320W: 16459 Points
> TimeSpy on GX Bios 380W: 16610 Points
> https://www.3dmark.com/spy/4800946
> 
> So I gained 1% in performance by adding ~18% more power?
> At least I surpassed JayzTwoCents :-D


NVIDIA did a really good job finding the sweet spot for this series, so I'm not surprised to see such a result, to be honest. However, your card isn't magically consuming 18% more power; it just won't throttle as quickly, so it seems you weren't being throttled much to begin with. You seem to be doing pretty well in the silicon lottery by the looks of it (both CPU and GPU).


----------



## Zammin

DrunknFoo said:


> The water temperature of some 8700k and 1080ti setups can remain relatively the same when looking at a 8700k stock tim vs delided chip under the same volt/clock....The total amount of heat generated from the die is relatively the same depending on LLC behavior, the difference would be the efficiency of heat transfer from the silicon to IHS into the loop. With that said, it is safe to say 2x360 rads are pretty much overkill for the above setup, as similar setups can easily run 2x1080ti in a single loop....
> 
> I would think the 2x360 rad setups would now be considered efficient rather than overkill with the newer gen setup? worst case scenario change the existing fan curve, either by increasing rpm or setting the temp threshold...
> 
> I would expect water temps to increase of maybe about 5C going from a 1080ti to a 2080ti and maybe another 5C for the 9900k over the 8700k. but will have to wait and see...


Right now when I run new games like Black Ops 4 I'm getting water temps between 30 and 33C after the radiators but before the blocks with the fans running at 1600RPM, so if it does end up being 5C for the 2080Ti and 5C for the 9900k that will raise my water temps to 40-43C which seems a little high to me? I'm trying to get my hands on a HW Labs 360GTS to replace the XSPC rad which may improve temps a bit but it's really difficult and expensive to get one to Australia.. :/

Let us know how you go once all your gear is in


----------



## Jpmboy

Zurv said:


> The sticker thing in general is illegal anyway. (In the US)
> https://www.digitaltrends.com/computing/ftc-warranty-stickers-illegal/


^^ +1 if we could. 


Vipeax said:


> You could try it with the --overrideboard parameter. I've been digging into the application today and it appears that there should be an option to override the board ID mismatch error after all.
> 
> 
> 
> The command list only hints at the overridesub (6 being the 'shortcut') parameter to overwrite a PCI subsystem ID mismatch, but the code for the other commands is still there.
> 
> 
> 
> 
> 
> So, the actual commandlist for the parser doesn't contain the overrideboard parameter anymore so that is why it would never catch on. To fix that I had to patch NVFlash anyway, so I went ahead and just overwrote the logic that requires the overrideboard parameter in the first place.
> 
> Link for the patched NVFlash: https://drive.google.com/open?id=1MxJN8_-wq2707DCRQqfXSA9J99slA6lG
> 
> Screenshot with my RTX2080 FE card. This card is actually being sent back once Digital River finally comes back to me with shipping details (I requested a refund 4 weeks ago as I got my hands on my Sea Hawk X for my miniITX case at that point and I've given follow up calls regarding this 3 times already), so I don't want to bother flashing a card that is flagged to go back for a refund very soon:
> 
> 
> 
> If anyone who is experienced in flashing and has another card, another pc or an iGPU available to flash back the card in case of a bad flashing is willing to give this a try that would be much appreciated. I prefer someone who installed a waterblock or something as such a person would have voided their warranty already. I'd be more than happy to work with someone to make a modified version that works on FE cards.


 Nice work man. 



WHERE IS THE REP SYSTEM !!!


----------



## BigMack70

Esenel said:


> Thanks a lot for the updated NVFlash.
> 
> I was able to flash my 2080Ti founders now.
> Although the result is very underwhelming :-D
> 
> TimeSpy on Founders Bios 320W: 16459 Points
> TimeSpy on GX Bios 380W: 16610 Points
> https://www.3dmark.com/spy/4800946
> 
> So I gained 1% in performance by adding ~18% more power?
> At least I surpassed JayzTwoCents :-D


Thanks. Wish the rep system was still around for you.

Anyway, this is what I figured... these cards seem to scale like turds beyond what's already available stock. Custom BIOS just doesn't seem worth it.


----------



## toncij

BigMack70 said:


> Thanks. Wish the rep system was still around for you.
> 
> Anyway, this is what I figured... these cards seem to scale like turds beyond what's already available stock. Custom BIOS just doesn't seem worth it.


Nor do custom cards, it seems. They're all much the same, as long as one doesn't buy a non-A chip card, which don't even exist yet as far as I know.


----------



## Esenel

Vipeax said:


> Esenel said:
> 
> 
> 
> Thanks a lot for the updated NVFlash.
> 
> I was able to flash my 2080Ti founders now.
> Although the result is very underwhelming 😄
> 
> TimeSpy on Founders Bios 320W: 16459 Points
> TimeSpy on GX Bios 380W: 16610 Points
> https://www.3dmark.com/spy/4800946
> 
> So I gained 1% in performance by adding ~18% more power?
> At least I surpassed JayzTwoCents :-D
> 
> 
> 
> NVIDIA did a really good job at finding the sweet spot for this series, so I'm not surprised to see such a result, to be honest. However, your card isn't magically consuming 18% more power; it just won't throttle as quickly, so it seems you weren't being throttled much to begin with. You seem to do pretty well in the silicon lottery by the looks of it (both CPU and GPU).
Click to expand...

Yes correct.
In the games Witcher 3, Rise of the Tomb Raider and Assassin's Creed Origins it did consume more than 300-330 W.
The ACO benchmark, though, was more like 250 W. I thought something was wrong with the card when running it 😄

For daily use I will back down to +145 core and +1000 memory with the PL at max.

But maybe my resolution (WQHD) is also not ideal for testing max power consumption.
Only saw 377W with MSI Kombustor.

And yes I feel quite lucky with silicon lottery 🙂


----------



## IonParty

*Modded NvFlash WORKS ON FOUNDER EDITION 2080 Ti*



Vipeax said:


> If anyone who is experienced in flashing and has another card, another pc or an iGPU available to flash back the card in case of a bad flashing is willing to give this a try that would be much appreciated. I prefer someone who installed a waterblock or something as such a person would have voided their warranty already. I'd be more than happy to work with someone to make a modified version that works on FE cards.


Flashed an FE 2080 Ti with the GALAX Reference PCB (2x 8-pin) RTX 2080 Ti 300W x 126% max power target (375W) BIOS, and it is 100% working as far as I can tell. Surprisingly, I haven't been able to clock it any higher than before; it would seem the clocks are voltage limited more than power limited. But it is nice to know that it will not be throttling due to power nearly as much as before. Using a custom loop, btw. I am definitely pulling the full 380W max when benching it, so yeah, it's definitely worked. Also, this BIOS got me to 60th place in the 3DMark Time Spy Extreme 1x GPU scores, just to put that out there.


----------



## kot0005

Zammin said:


> So jelly of you guys that have your cards already. I ordered mid last month with an ETA of the start of this month, but every time the ETA passes, the store I ordered with just pushes it another week later. Now they just aren't updating the ETA at all. Last week they told me "Friday at the latest" and when I called them on Saturday they said they have no idea when they'll be receiving the cards.. They advertised it as a "pre-order" whereas in reality it was a back-order, since they clearly didn't have any confirmed inbound stock..
> 
> My guess is I'll probably get it mid to late November at this bloody rate..



Lol sounds like you ordered from PLE computers ?? Same deal with mine..


----------



## Zammin

kot0005 said:


> Lol sounds like you ordered from PLE computers ?? Same deal with mine..


Aaaaaay you nailed it man. haha. I have spoken to a few other people on the Aus PC Enthusiast groups on Facebook who share our frustration as well. Such a bummer.


----------



## cgcross

VETDRMS said:


> How are you forcing constant voltage?


In the Afterburner custom curve (Ctrl+F), select the voltage point you want to run constantly at and drag it up to the clock speed you want on the graph, then hit Ctrl+L. A yellow vertical line will appear at that voltage level. If a white vertical line also appears at a lower voltage, keep lowering the clocks for the points to the left of the yellow line until the yellow line is the only vertical line you see.


----------



## sblantipodi

I am very satisfied with the Founders 2080 Ti.
With +123% power limit and +100 MHz on the core, touching 2 GHz, and 7500 MHz VRAM, the card is pretty silent and I don't go over 70°C in a small case.

Are these results good, or is my 5930K heavily limiting?



What are your results on a better CPU, like the 8700K at an average 5 GHz, with a similar GPU overclock?


----------



## carlhil2

sblantipodi said:


> I am very satisfied with the Founders 2080 Ti.
> With +123% power limit and +100 MHz on the core, touching 2 GHz, and 7500 MHz VRAM, the card is pretty silent and I don't go over 70°C in a small case.
> 
> Are these results good, or is my 5930K heavily limiting?
> 
> 
> 
> What are your results on a better CPU, like the 8700K at an average 5 GHz, with a similar GPU overclock?


7960X with HT disabled @ 4.9.. my GPU boosts to 2040 stock, so +105 gives me 2145...


----------



## torqueroll

arrow0309 said:


> Hi, nice setup! :specool:
> 
> I already have the same block in possession (Vector nickel acetal rgb).
> So you're saying that even with liquid cooling you notice the coil whine?
> May I ask you what model of 2080ti do you have?
> 
> I hope I won't regret for not choosing the Bitspower block this time (or a German Watercool or Aquacomputer), I know the EK block won't cover and make contact with the chokes as well but I honestly didn't think you may have coil whine.


Sorry for the late answer. I bought the Founders Edition. The amount of coil whine is the same on air and water. Although, generally with watercooling you have less noise overall, so the coil whine will be more noticeable.


----------



## torqueroll

toncij said:


> What case is that beauty?


Hex Gear R80
http://www.hex-gear.com/


----------



## cgcross

sblantipodi said:


> I am very satisfied with the Founders 2080 Ti.
> With +123% power limit and +100 MHz on the core, touching 2 GHz, and 7500 MHz VRAM, the card is pretty silent and I don't go over 70°C in a small case.
> 
> Are these results good, or is my 5930K heavily limiting?
> 
> What are your results on a better CPU, like the 8700K at an average 5 GHz, with a similar GPU overclock?


You should be fine. I'm 14056 at full OC; I turned my 8700K OC down to 5 GHz (from 5.3) and my 2080 Ti down to 1995 (from 2175) and got this:


----------



## exploiteddna

XC Ultra in stock now go get em

https://www.evga.com/products/product.aspx?pn=11G-P4-2383-KR


----------



## Jpmboy

first run on air. I got the card today...


----------



## carlhil2

Jpmboy said:


> first run on air. I got the card today...


Nice, I see 14000+ in your future. What's your stock max boost clock with PT maxed out?


----------



## Jpmboy

carlhil2 said:


> Nice, I see 14000+ in your future. what's your stock max boost clock with PT maxed out?


What do you use to push the max stock boost? Did you put yours under water?


Edit: well, using Superposition and 123%, the boost goes to 2100? (That can't be right.)


----------



## carlhil2

Jpmboy said:


> what do you use to push the max stock boost?


That bench, Heaven, 3DMark.. I just run a bench and check the graph for my max boost.. on air, because of temps, it was 2025 for the most part, water cooled, 2040...


----------



## Jpmboy

carlhil2 said:


> That bench, Heaven, 3DMark.. I just run a bench and check the graph for my max boost.. on air, because of temps, it was 2025 for the most part, water cooled, 2040...


Did a restart to clear out any clocks. Max clock with Superposition is 1978. EK block is on the way; once water cooled, I'll bench the thing. Still hoping for a Turing Titan tho. 
Edit: is it me, or does GPU-Z report a different P0 freq than Afterburner?


----------



## Zurv

Jpmboy said:


> did a restart to clear out any clocks. max clock with superposition is 1978. EK block is on the way. Once water cooled, I'll bench the thing. Still hoping for a turing Titan tho.
> edit- is it me or does gpuZ report a different P0 freq than Afterburner?


what hell manz. You "tested" the card before taking it apart? That some crazy talk 


Here is my untested (like a man.. (like a dumb man!)) waterblocked cards. That is a 3 slot bridge (but a 4 slot SLI bridge.... oh nvidia.. you always have to do stuff differently.)

… now to take the loop apart and TEST for no leaks (which is something i don't do all the time... cause.. dummy) then run some tests. 
I'm using bitspower blocks because EK blocks scare me with cracking.


----------



## Jpmboy

Zurv said:


> what hell manz. You "tested" the card before taking it apart? That some crazy talk
> 
> 
> Here is my untested (like a man.. (like a dumb man!)) waterblocked cards. That is a 3 slot bridge (but a 4 slot SLI bridge.... oh nvidia.. you always have to do stuff differently.)
> 
> … now to take the loop apart and TEST for no leaks (which is something i don't do all the time... cause.. dummy) then run some tests.
> I'm using bitspower blocks because EK blocks scare me with cracking.


 lookin good buddy! I shouda got the bitspower blocks... and use the stock BP. Are those the bits blocks from Amazon?
Lol- I ALWAYS test the cards on air first... push them a bit for a day or so before wasting Fuji poly on a dog. 
You should get a leak tester from aquacomputer. Comes with a connection kit, pressure gauge and small bicycle pump. Works like a charm.


----------



## Zurv

Jpmboy said:


> lookin good buddy! I shouda got the bitspower blocks... and use the stock BP. Are those the bits blocks from Amazon?
> Lol- I ALWAYS test the cards on air first... push them a bit for a day or so before wasting Fuji poly on a dog.


Feh, you know both of us will be replacing the TIs with Titans when they come out 
yeah, i got it from amazon. Came in a day. I had the water bridge already. (wishful thinking that someone would get the Titan V and sli working...)


----------



## Jbravo33

Just got mine. Not even bothering testing on air. From a quick Google and YouTube search, am I correct that shunt mods don't work on this card? Watched der8auer and GN; it seems you have to do more than just a normal shunt mod? Has anyone else tried?


----------



## Vipeax

Jbravo33 said:


> Just got mine. Not even bothering testing on air. From a quick Google and YouTube search, am I correct that shunt mods don't work on this card? Watched der8auer and GN; it seems you have to do more than just a normal shunt mod? Has anyone else tried?


Planning to go past 380W per card?


----------



## kx11

hopefully my Strix OC gets shipped tomorrow


----------



## jabtn2

Moved up one spot to 6th! I know it doesn't mean anything but it's exciting to me haha


----------



## zhrooms

*MSI GeForce RTX 2080 Ti Gaming X Trio 11GB* with flashed BIOS (380W) in _UNIGINE Superposition 1080p Extreme_. (10c ambient)

+*150* on Core and +*1150* on Memory

Almost every scene held *2085-2100* MHz; the first two (because of the initial low temperature) held *2100-2115* MHz. Could not do +*165* (*2130* MHz peak), it crashed within seconds. Voltage never exceeded *1.067*v because the 380W power limit is not enough; a shunt mod allows it to go to *1.093*v (*max*).

*Shunt* modded cards should beat this score easily, but against any other card I think it was a great score.










Compared against my previous card below, *EVGA GeForce GTX 1080 Ti Founders Edition 11GB* with flashed BIOS (Power Limit Removed).

*2126* MHz at *1.100*v

(2080 Ti score/fps is *58.0*% higher.)


----------



## amano74

I have an RTX 2080 Ti Gainward GS, reference PCB, 2x 8-pin, 115% power limit; I put an EK waterblock on it.

Today I flashed the BIOS with the Galax 380W one. All worked fine, more stable clocks & the power limit at 126%,

but I can't push the overclock any further than on the previous BIOS, because it draws so much power the PC just shuts down in the middle of Time Spy…

Does a Seasonic Focus Plus Gold 750W not have enough amps on the 12V rail? Or did I reach the max wattage for 2x 8-pin + PCIe slot, 2x 150W + 75W = 375 watts?

edit : the GPU never went above 40°C
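The connector arithmetic in the post above matches the PCIe spec figures (150 W per 8-pin, 75 W from the slot); a trivial check:

```python
# Spec-level power budget for a reference-PCB card with 2x 8-pin.
eight_pin_w = 150  # PCIe spec limit per 8-pin connector
slot_w = 75        # PCIe slot limit
budget_w = 2 * eight_pin_w + slot_w
print(budget_w)  # 375
```

In practice the connectors and cabling can deliver well beyond these spec minimums, so a hard shutdown around that figure points at the PSU rather than the connectors.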


----------



## jabtn2

amano74 said:


> I have an RTX 2080 Ti Gainward GS, reference PCB, 2x 8-pin, 115% power limit; I put an EK waterblock on it.
> 
> Today I flashed the BIOS with the Galax 380W one. All worked fine, more stable clocks & the power limit at 126%,
> 
> but I can't push the overclock any further than on the previous BIOS, because it draws so much power the PC just shuts down in the middle of Time Spy…
> 
> Does a Seasonic Focus Plus Gold 750W not have enough amps on the 12V rail? Or did I reach the max wattage for 2x 8-pin + PCIe slot, 2x 150W + 75W = 375 watts?


Quite possibly the PSU not supplying enough power. Basically 400W just from the GPU leaves 350W for the rest of the system. My double-GPU system shut down due to the same problem on an AX1500i.


----------



## Jpmboy

amano74 said:


> I have an RTX 2080 Ti Gainward GS, reference PCB, 2x 8-pin, 115% power limit; I put an EK waterblock on it.
> 
> Today I flashed the BIOS with the Galax 380W one. All worked fine, more stable clocks & the power limit at 126%,
> 
> but I can't push the overclock any further than on the previous BIOS, because it draws so much power the PC just shuts down in the middle of Time Spy…
> 
> Does a Seasonic Focus Plus Gold 750W not have enough amps on the 12V rail? Or did I reach the max wattage for 2x 8-pin + PCIe slot, 2x 150W + 75W = 375 watts?
> 
> edit : the GPU never went above 40°C



The PCIe rails on the PSU are capable of delivering much more than 150W each; the 150W spec pertains to the minimum specs for the connectors and wire gauge. Check that the PSU is not in multiple-rail mode (vs single rail).
And regarding the AX1500i: some of the second-run SKUs shipped with multi-rail mode enabled (still, that has a 40A cutoff per PCIE/CPU rail) - you have to load Corsair Link to enable single rail mode. Doubtful an AX1500i PSU OCP'ed with only 2 GPUs. I've measured 700W to the CPU (7980XE) and another 400W to a Titan V, or 600W on 2 Titan Xps. No OCP in single rail mode. :thumb:
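As a rough sanity check on that 40 A per-rail cutoff (assuming the nominal 12 V rail; a sketch, not a measurement):

```python
# Even a 380 W BIOS keeps a single card's 12 V draw under a 40 A OCP limit.
card_watts = 380
rail_volts = 12.0
card_amps = card_watts / rail_volts
print(round(card_amps, 1))  # 31.7
```

So one card per dedicated cable should sit comfortably below a 40 A per-rail cutoff; trouble usually starts when two connectors share one split cable, stacking two cards' draw on one rail.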


----------



## jabtn2

Jpmboy said:


> And regarding the AX1500i. Some of the second-run SKUs shipped with multirail mode enabled (still, that has a 40A cutoff per PCIE/CPU rail) - you have to load corsair link to enable single rail mode. Doubtful an AX1500i PSU OCP'ed by only 2 GPUs. I've measured 700W to the CPU (7980XE) and another 400W to a titan V or 600W on 2 Titan Xps. No OCP in single rail mode. :thumb:


I have a 7980xe @1.5v @5.2GHZ and two cards with the 380w bios. 2 d5 pumps, 8 EK vardar 140mm and 4 vardar 120mm fans. I just assumed it was over the ax1500i's limit so I sold it off and switched to the newer ax1600i. Thanks for the info!


----------



## Jpmboy

yeah, right. So, since then they should ship in single rail mode. :thumb:


----------



## amano74

Superposition works fine, but it draws 50 watts less than Time Spy on the wattmeter:

https://www.overclock.net/forum/attachment.php?attachmentid=226242&thumb=1


----------



## Zurv

Odd.. I flashed the BIOS and the settings are still the default (boost and power).
When I flashed it again (just to make sure) I didn't get all the warnings about it being a different board.

nvflash64_patched.exe -6 --index=0 2080TIGX126.rom

*shrug*


----------



## amano74

Put all the files needed (nvflash + ROM) on the Desktop.

Press Windows+R to open Run, then type cmd to open Command Prompt.

Code:

cd Desktop
nvflash64_patched --protectoff     (only if write protected)
nvflash64_patched -6 2080TIGX126.rom
(press 'y' to confirm)
(update successful)
nvflash64_patched --protecton      (re-enabling write protect is optional)

Close cmd and reboot. Done.. install the graphics driver again.

Uninstall the graphics driver before flashing the VBIOS or your screen will prolly go all wonky.


----------



## dante`afk

jabtn2 said:


> I have a 7980xe @1.5v @5.2GHZ and two cards with the 380w bios. 2 d5 pumps, 8 EK vardar 140mm and 4 vardar 120mm fans. I just assumed it was over the ax1500i's limit so I sold it off and switched to the newer ax1600i. Thanks for the info!


what are the 2 d5's doing?


----------



## jabtn2

dante`afk said:


> what are the 2 d5's doing?


One runs the CPU loop and one runs the GPU loop.


----------



## Zurv

what's up with the SLI limit? NVLink has a billion times more bandwidth and still the SLI sync limit is a thing.. ugh.. 
it makes me sad i'm not top 5 on any of these 3DMark Hall of Fame tests... *sigh*….


----------



## Jbravo33

Alright, let's see what these cards got! Kinda bummed I have one crap core in my 7980 so I had to disable it. While the 17 others max at 60-70°C, I've got one that hits 104°C seconds after starting Cinebench. I reapplied liquid metal a dozen times, then gave up. It will hold me back from the top, but so far 4.8 on 17 cores @ 1.265V, no problem.


----------



## jabtn2

Zurv said:


> what's up with the SLI limit? NVLink has a billion times more bandwidth and still the SLI sync limit is a thing.. ugh..
> it makes me sad i'm not top 5 on any of these 3DMark Hall of Fame tests... *sigh*….


I don't think I've had problems hitting sli sync limit but I can double check tomorrow. My goal has been to get to #5 on time spy extreme 2x GPU. I've got a couple ideas I can try tomorrow but that'll be about it for me. Maybe I'll try some other tests for fun.


----------



## Zurv

jabtn2 said:


> I don't think I've had problems hitting sli sync limit but I can double check tomorrow. My goal has been to get to #5 on time spy extreme 2x GPU. I've got a couple ideas I can try tomorrow but that'll be about it for me. Maybe I'll try some other tests for fun.


Chiller, sir  or work in some ice someplace 
Something is wrong with my loop. I'm only at 1.2 LPM (normally I'm closer to 5).
I think it may have to do with the 6-7 Koolance quick disconnects in the loop.


----------



## Vipeax

Zurv said:


> Odd.. I flashed the BIOS and the settings are still the default (boost and power).
> When I flashed it again (just to make sure) I didn't get all the warnings about it being a different board.
> 
> nvflash64_patched.exe -6 --index=0 2080TIGX126.rom
> 
> *shrug*


You did reboot I hope?


----------



## Zemach

3DMark Night Raid

CPU 8086K 5.7Ghz 6/6 Water cooling VGA 2080Ti core +140 mem +1000


----------



## amano74

Zemach said:


> CPU 8086K 5.7Ghz 6/6 Water cooling VGA 2080Ti core +140 mem +1000


Holy s..., how did u do that? I can't even boot at 1.55v 5.3ghz with my 8700k on my Maximus X...

my setup :

https://www.overclock.net/forum/attachment.php?attachmentid=226338&thumb=1


----------



## Zemach

amano74 said:


> Holy s..., how did u do that? I can't even boot at 1.55v 5.3ghz with my 8700k on my Maximus X...
> 
> my setup :
> 
> https://www.overclock.net/forum/attachment.php?attachmentid=226338&thumb=1


Silicon lottery.


----------



## NewType88

kx11 said:


> hopefully my Strix OC gets shipped tomorrow


Where did you order? I've never seen an availability date for these.


----------



## Jpmboy

@Zemach - please just use the drag and drop for pictures. Your posts are dragging this page to a crawl.


----------



## Okt00

Jpmboy said:


> @Zemach - please just use the drag and drop for pictures. Your posts are dragging this page to a crawl.


I thought it was work's opinionated firewall and traffic shaping... but yeah, those were the worst to load. 


On a more positive note: 
Looks like I just got a tracking number for my _new_ 2080 Ti; they processed the RMA pretty quickly. Got the whole "can't provide you a timeline", but I just pressed each person I dealt with until it was done. Began the process on Thursday at 8PM via Nvidia.com chat support and had a tracking number before end of day Friday. Not too shabby.


----------



## amano74

The problem I had, with my PC shutting down in the middle of Time Spy, is PSU related.

I tried with a 10-year-old Silverstone 1000W PSU (80 amps on 12V), & I am able to finish all benchmarks now.

https://www.3dmark.com/spy/4816496

I can tell u now that a 750W PSU with 62 amps on the 12V rail is not enough for an 8700K & RTX 2080 Ti OC.


& the Galax 380W BIOS works great on the RTX 2080 Ti FE.
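For reference, the 12 V capacities of the two units mentioned work out as follows (continuous ratings only; transient spikes are not modeled here):

```python
# Continuous 12 V capacity = rated amps x 12 V.
focus_750_amps = 62         # Seasonic Focus Plus Gold 750W
silverstone_1000_amps = 80  # the older Silverstone 1000W unit

print(focus_750_amps * 12)        # 744
print(silverstone_1000_amps * 12) # 960
```

On paper, 744 W should cover an overclocked 8700K plus a ~380 W card, so the shutdowns were presumably caused by brief transient spikes tripping the 750 W unit's protection rather than by sustained draw; that is an inference from the symptoms, not a measurement.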


----------



## ENTERPRISE

Zemach said:


> amano74 said:
> 
> 
> Holy s..., how did u do that? I can't even boot at 1.55v 5.3ghz with my 8700k on my Maximus X...
> 
> my setup :
> 
> https://www.overclock.net/forum/attachment.php?attachmentid=226338&thumb=1
> 
> 
> Silicon lottery.


Sorry bud, had to kill off those images you had linked from offsite hosting, as those images were huge and were dragging the thread performance down. Please use our drag and drop/upload feature.


----------



## dante`afk

jabtn2 said:


> One runs the CPU loop and one runs the GPU loop.


Oh cool. Two separate loops, each with its own rads?

Sent from my SM-G935T using Tapatalk


----------



## jabtn2

dante`afk said:


> Oh cool. Two separate loops, each with its own rads?
> 
> Sent from my SM-G935T using Tapatalk


Exactly. One EK CoolStream 560 for each loop. Took out the soft line and am trying hard line for the first time. First fill-up about to happen!


----------



## Zemach

ENTERPRISE said:


> Sorry bud, had to kill off those images you had linked from offsite hosting, as those images were huge and were dragging the thread performance down. Please use our drag and drop/upload feature.


Okay, next time I'll use the Overclock.net upload feature.


----------



## dante`afk

Nice jabtn2.


What waterblocks is everyone using here? I'm hesitant to buy from EK since I've heard from multiple sources so far that their 2080 Ti block has bad temps.

Sent from my SM-G935T using Tapatalk


----------



## Zurv

dante`afk said:


> Nice jabtn2.
> 
> 
> What waterblocks is everyone using here? I'm hesitant buying from EK since I've heard from multiple sources so far that their 2080ti cooler has bad temps.
> 
> Sent from my SM-G935T using Tapatalk



I'm using the Bitspower block. It uses the same block for the 2080 and 2080 Ti. Also, a bonus is you can still use the backplate that came with the card.

yes, there are LED lights on it, but are super simple to remove  
I'm not sure how the thermals compare to other cards.
It is on prime  https://www.amazon.com/Bitspower-Waterblock-NVIDIA-GeForce-Reference/dp/B07HCS7L1F

do note, they give you JUST enough pads and you are going to need to cut them up. (kinda a pain in the butt..)
Or just get some fuji pads (you'll need 1mm and 2mm)


----------



## dante`afk

Zurv said:


> I'm using the Bitspower block. It uses the same block for the 2080 and 2080 Ti. Also, a bonus is you can still use the backplate that came with the card.
> 
> yes, there are LED lights on it, but are super simple to remove
> I'm not sure how the thermals compare to other cards.
> It is on prime  https://www.amazon.com/Bitspower-Waterblock-NVIDIA-GeForce-Reference/dp/B07HCS7L1F
> 
> do note, they give you JUST enough pads and you are going to need to cut them up. (kinda a pain in the butt..)
> Or just get some fuji pads (you'll need 1mm and 2mm)


Thanks
What are your temps and radiator sizes? Can you link the fuji pads and did you take any pictures of yours when applying?

Sent from my SM-G935T using Tapatalk


----------



## Esenel

amano74 said:


> The problem I had, with my PC shutting down in the middle of Time Spy, is PSU related.
> 
> I tried with a 10-year-old Silverstone 1000W PSU (80 amps on 12V), & I am able to finish all benchmarks now.
> 
> https://www.3dmark.com/spy/4816496
> 
> I can tell u now that a 750W PSU with 62 amps on the 12V rail is not enough for an 8700K & RTX 2080 Ti OC.
> 
> 
> & the Galax 380W BIOS works great on the RTX 2080 Ti FE.


I can confirm your findings.
At first I had a be quiet! Dark Power Pro 11 650W Platinum powering the 8086K @ 5.2 GHz and the FE [email protected]
While playing Rise of the Tomb Raider it constantly shut down the PC.

Now on the Seasonic Prime Platinum 1000W. No issues so far.

The 2080 Ti is a thirsty beast 😄


----------



## Zurv

dante`afk said:


> Thanks
> What are your temps and radiator sizes? Can you link the fuji pads and did you take any pictures of yours when applying?
> 
> Sent from my SM-G935T using Tapatalk


I have a massive rad, case sized (https://koolance.com/erm-3k3uc-liquid-cooling-system-copper), and I'm also running a 7980XE and SLI... so my temps are not a good benchmark  (I'm getting about 50c on both cards and 34c on the CPU, but that is full-on OC'd).
Jpmboy has a Bitspower on order and I don't think he is going to SLI it.
(I'm also redoing my loop now. The flow is super slow for some reason... that, or the flow meter is broken. Either way, it is time to replace the tubes and water; it has been about a year.)
That is over 1/2 gallon of liquid. So silly


----------



## xer0h0ur

295x2 taught me the lesson of power supply overkill and sticking to single rail designs.


----------



## Pilz

xer0h0ur said:


> 295x2 taught me the lesson of power supply overkill and sticking to single rail designs.


I have a Corsair AX1600i and use multi-rail without a problem. Why would you need single rail with a properly sized PSU?


----------



## Jbravo33

Not bad, first night of benching on air got me a spot in the top 3. That one super hot core is holding the CPU score back a bit. So far 155-175 on core and 1000 on memory is perfectly fine. A little more testing before the blocks go on. This is the longest I've ever had cards on air xD 

https://www.3dmark.com/fs/16807889


----------



## xer0h0ur

Pilz said:


> I have a Corsair AX1600i and use multi-rail without a problem. Why would you need single rail with a properly sized PSU?


Clearly you haven't heard of the 295X2. At bone stock it draws 500 watts and 50 amps. There is a reason we had a power supply compatibility list. My power supply handled a 295X2, 290X and 4960X all of which were overclocked. That isn't just happening on any given power supply.


----------



## amano74

dante`afk said:


> Nice jabtn2.
> 
> 
> What waterblocks is everyone using here? I'm hesitant buying from EK since I've heard from multiple sources so far that their 2080ti cooler has bad temps.
> 
> Sent from my SM-G935T using Tapatalk



I had problems with the EK waterblock not making good contact with the GPU,

but I am sure now the PCB of the card is to blame, cause it's not perfectly flat.

There was too much pressure on some memory pads, flexing the PCB, & the die had poor contact with the block..

It's all good now, temps are finally fine.

U can see by eye in the picture below the difference in pressure between the memory pads;

some have bulged out at the sides due to high pressure. Those pads (or the memory on them) were to blame for the high temps...

https://www.overclock.net/forum/attachment.php?attachmentid=226518&thumb=1


----------



## Jpmboy

Pilz said:


> I have a Corsair AX1600i and use multi-rail without a problem. Why would you need single rail with a properly sized PSU?


You don't. That PSU can push >40A over any of the PCIe/CPU connectors in multi-rail mode (avoid the split x2 PCIe cables). The only time I've ever needed (well, used) single rail on the AX1500i is subzero benching with hard- and/or soft-modded cards. Honestly, compared to a trio of 780 Ti Kingpins (which required 2x 1200 watt single-rail PSUs to run 3DMK11) or the 295X2 (which is a fire-breather), the Nvidia cards since Pascal use relatively low power.


----------



## Jpmboy

xer0h0ur said:


> Clearly you haven't heard of the 295X2. At bone stock it draws 500 watts and 50 amps. There is a reason we had a power supply compatibility list. My power supply handled a 295X2, 290X and 4960X all of which were overclocked. That isn't just happening on any given power supply.


lol - I'm still using a watercooled 295X2 in an office/gaming rig here. It can still keep pace, even at 4K (tho nothing like the recent NV cards). AMD needs to up its game - like in the 7970 days.


----------



## Esenel

zhrooms said:


> *MSI GeForce RTX 2080 Ti Gaming X Trio 11GB* with flashed BIOS (380W) in _UNIGINE Superposition 1080p Extreme_. (10c ambient)
> 
> +*150* on Core and +*1150* on Memory
> 
> Almost every scene held *2085-2100* MHz; the first two (because of the initial low temperature) held *2100-2115* MHz. Could not do +*165* (*2130* MHz peak), it crashed within seconds. Voltage never exceeded *1.067*v because the 380W power limit is not enough; a shunt mod allows it to go to *1.093*v (*max*).
> 
> *Shunt* modded cards should beat this score easily, but against any other card I think it was a great score.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Compared against my previous card below, *EVGA GeForce GTX 1080 Ti Founders Edition 11GB* with flashed BIOS (Power Limit Removed).
> 
> *2126* MHz at *1.100*v
> 
> (2080 Ti score/fps is *58.0*% higher.)


I can get 10223 at +145 core and +1000 memory, with the 380W BIOS and no shunt mod.
Quite close.


----------



## Jpmboy

dante`afk said:


> Thanks
> What are your temps and radiator sizes? Can you link the fuji pads and did you take any pictures of yours when applying?
> 
> Sent from my SM-G935T using Tapatalk


 https://www.fujipoly.com/usa/


https://www.amazon.com/gp/product/B00MQ5Y1XY/ref=ox_sc_act_title_3?smid=A15YNZR7YB053N&psc=1


----------



## Pilz

xer0h0ur said:


> Clearly you haven't heard of the 295X2. At bone stock it draws 500 watts and 50 amps. There is a reason we had a power supply compatibility list. My power supply handled a 295X2, 290X and 4960X all of which were overclocked. That isn't just happening on any given power supply.


You're correct (I forgot about that odd AMD card), I have not, although I do see the dilemma; single rail mode obviously works better in that case. I keep mine in multi-rail mode because I don't pull over 40A on any given rail, though running in single rail mode wouldn't hurt or benefit my scenario.


----------



## axiumone

Damn. These bad boys get pretty hot even under water. I have a pair in sli with ek blocks, a [email protected] with 3 x 360 rads and a dual ddc. With an ambient temp of around 25c, my cards are around 55c with a very mild overclock to around 2000mhz. 

They also draw some crazy power it seems. I have these on a Corsair ax1200i, each pcie plug has an individual cable and I’m seeing spikes of upwards of 50 amps per cable. 

Quick pic before I filled the loop.


----------



## mr2cam

axiumone said:


> Damn. These bad boys get pretty hot even under water. I have a pair in sli with ek blocks, a [email protected] with 3 x 360 rads and a dual ddc. With an ambient temp of around 25c, my cards are around 55c with a very mild overclock to around 2000mhz.
> 
> They also draw some crazy power it seems. I have these on a Corsair ax1200i, each pcie plug has an individual cable and I’m seeing spikes of upwards of 50 amps per cable.
> 
> Quick pic before I filled the loop.


Ya I saw a pretty significant increase in temp compared to my 1080ti


----------



## Zurv

axiumone said:


> Damn. These bad boys get pretty hot even under water. I have a pair in sli with ek blocks, a [email protected] with 3 x 360 rads and a dual ddc. With an ambient temp of around 25c, my cards are around 55c with a very mild overclock to around 2000mhz.
> 
> They also draw some crazy power it seems. I have these on a Corsair ax1200i, each pcie plug has an individual cable and I’m seeing spikes of upwards of 50 amps per cable.
> 
> Quick pic before I filled the loop.


nice looking system neighbor. 🙂 thanks for reporting your temps. I'm seeing close to that (51c-53c) and I feel a little better seeing others with those temps too.
which cards are you using? FE?


----------



## kot0005

amano74 said:


> I had problems with the ek waterblock not making good contact on gpu,
> 
> but iam sure now the pcb of the card is guilty, cause it's not perfectly flat,
> 
> there was too much pressure on some memory pads, flexing the pcb, & the die had poor contact on the block..
> 
> it's all good now, temps are finally fine.
> 
> u can see with eyes on the picture below, the difference in pressure between memory pads,
> 
> some have blown at sides due to high pressure, those pads (or the memory on them) were guilty for high temps...
> 
> https://www.overclock.net/forum/attachment.php?attachmentid=226518&thumb=1


how did you fix the issue?


----------



## axiumone

Zurv said:


> nice looking system neighbor. 🙂 thanks for reporting your temps. I'm seeing close to that (51c-53c) and i feel a little better seeing others with those temps too.
> which cards are you using? FE?


Thank you!

Yeah, FE's here. I was bummed about not being able to bios mod them, but seeing the temps now, I don't think I would mod it. Bumping the power target will probably shoot these cards up to 60c and on custom loop that's just crazy.


----------



## Jpmboy

axiumone said:


> Thank you!
> 
> Yeah, FE's here. I was bummed about not being able to bios mod them, but seeing the temps now, I don't think I would mod it. Bumping the power target will probably shoot these cards up to 60c and on custom loop that's just crazy.


no way to check the VRM temps either. :(


----------



## Vipeax

axiumone said:


> Yeah, FE's here. I was bummed about not being able to bios mod them, but seeing the temps now, I don't think I would mod it. Bumping the power target will probably shoot these cards up to 60c and on custom loop that's just crazy.


The power limit should have minimal impact on heat, since the cards won't be at their peak wattage the majority of the time (and you can BIOS mod them).



Jpmboy said:


> no way to check the VRM temps either. :(


The PCB designs are excellent for this series. There is zero reason to be worried about their temperatures without going subzero.


----------



## xer0h0ur

Jpmboy said:


> lol - I'm still using a watercooled 295x2 on an office/gaming rig here. It still can keep pace even at 4K (tho nothing like the recent NV cards). AMD needs to up its game - like the 7970 days.


That thing was a bloody space heater is what it was. I've never had a hotter rig than when it was tri-fired 295X2 and 290X. I sold my waterblocked 295X2 months ago and made about $700 off of it. Was surprised it even sold for that much. Perhaps it was for compute workloads or mining. Who knows.


----------



## axiumone

Want to have a hearty laugh? Here’s an old rig of mine. 295x2 in quadfire paired with a 4960x. That thing was a furnace.

I have a picture of a Kill A Watt showing close to 1700 watts being pulled when overclocked.


----------



## xer0h0ur

Oh no. That is no laughing matter. It in fact was a furnace. Literally could have heated a home just by keeping that PC powered on.


----------



## Zurv

can i join the bake off? 

I think these are a Titan and a Titan X build.

that first one was so messy. I had to run dual power supplies too.

yeah, Jpmboy… you love the red liquid from the chiller


----------



## Jpmboy

Vipeax said:


> The power limit should have minimal impact on heat, since the cards won't be at their peak wattage the majority of the time (and you can BIOS mod them).
> 
> 
> The PCB designs are excellent for this series. There is zero reason to be worried about their temperatures without going subzero.


actually.. that's when you really do not have to worry about the power section temperature... they get frosty.


----------



## axiumone

Great article from HotHardware looking into the performance of NVLink.

https://hothardware.com/reviews/nvidia-nvlink-review-taking-sli-to-the-next-level

Interesting to see a good in-depth look at NVLink. I'm very surprised to see scaling in titles where there was negative scaling previously.


----------



## Jpmboy

axiumone said:


> Want to have a hearty laugh? Here’s an old rig of mine. 295x2 in quadfire paired with a 4960x. That thing was a furnace.
> 
> I have a picture of a Kill A Watt showing close to 1700 watts being pulled when overclocked.


 lol - 2 firebreathers. I recall someone (else?) subbing runs in one of the bench threads I manage... except for the power (and heat) that musta been a killer rig!
still running my 4960X and 295x2 as a work box... very solid. :thumb:






xer0h0ur said:


> That thing was a bloody space heater is what it was. I've never had a hotter rig than when it was tri-fired 295X2 and 290X. I sold my waterblocked 295X2 months ago and made about $700 off of it. Was surprised it even sold for that much. Perhaps it was for compute workloads or mining. Who knows.


it does really well with compute (DP performance) and was the only DP game in consumer town when NV killed DP after the OG Titan (I shoulda kept those 2 for this purpose).




Zurv said:


> can i join the bake off?
> yeah, jmpboy… you love the red liquid from the chiller


draining that thing looked like a murder scene from CSI!


----------



## Zurv

any of you SLI peeps have some Ubi games? I was going to try Wildlands and The Division (which support SLI), but they won't start. They pop up for a sec, then exit.
AC:O works, but doesn't support SLI.

anyone mind testing? I could go for a clean install (and Windows 1809 is out), but I'm lazy. I don't want to reinstall if the problem is with the game and SLI on the RTX.


----------



## Zammin

jabtn2 said:


> Exactly. One EK coolstream 560 for each loop. Took out the soft line and trying hard line for the first time. First fill up about to happen!


Are those the Alphacool GPX-N plexi blocks? Where did you buy them? I ordered one from Aquatuning but they are pulling the same crap that PLE computers are doing with my 2080Ti where they keep changing the delivery date every time the previous one is missed. They've changed it 3 times now. Was originally meant to be in stock on the 19th now they reckon 2nd November. I have no idea what the hold up is, they were able to get the 2080 blocks in stock but not the 2080Ti blocks, and this is Alphacool's direct outlet...


----------



## Zurv

Zammin said:


> Are those the Alphacool GPX-N plexi blocks? Where did you buy them? I ordered one from Aquatuning but they are pulling the same crap that PLE computers are doing with my 2080Ti where they keep changing the delivery date every time the previous one is missed. They've changed it 3 times now. Was originally meant to be in stock on the 19th now they reckon 2nd November. I have no idea what the hold up is, they were able to get the 2080 blocks in stock but not the 2080Ti blocks, and this is Alphacool's direct outlet...


isn't the layout of the 2080 and 2080ti PCB the same? I know with the bitspower block it can be used on either card.


----------



## axiumone

Zurv said:


> isn't the layout of the 2080 and 2080ti PCB the same? I know with the bitspower block it can be used on either card.


Different layouts. Specifically, the 2080 doesn't have that front section of the VRM next to the IO like the Ti does. The only reason I know that is because I blindly bought two EK 2080 blocks instead of Ti blocks at first. That didn't work out too well. If anyone needs some brand new 2080 blocks, feel free to message me, as I'm not going to return them.


----------



## Jpmboy

axiumone said:


> Different layouts. Specifically, the 2080 doesn't have that front section of the VRM next to the IO like the Ti does. The only reason I know that is because I blindly bought two EK 2080 blocks instead of Ti blocks at first. That didn't work out too well. If anyone needs some brand new 2080 blocks, feel free to message me, as I'm not going to return them.


aren't the EK Vector blocks for the 2080 and 2080Ti the same? Only pad placement is different - same as the Bitspower. Main difference is that the EK apparently has tighter tolerances: 0.5mm on the RAM vs 1mm for the BP block. Really need to use Fujipoly Extreme on the BP in my opinion.


btw - GALAX is using the Bitspower block on their HOF 2080Ti card


----------



## axiumone

Jpmboy said:


> aren't the EK Vector blocks for the 2080 and 2080Ti the same? Only pad placement is different - same as the Bitspower. Main difference is that the EK apparently has tighter tolerances: 0.5mm on the RAM vs 1mm for the BP block. Really need to use Fujipoly Extreme on the BP in my opinion.
> 
> 
> btw - GALAX is using the Bitspower block on their HOF 2080Ti card


If that were the case, it would have made my life a lot easier, instead of ordering Ti specific blocks. Alas, there's no cutout for the front VRM on the 2080 blocks. The instructions are universal though, the QR code on the box leads to the same page.


----------



## Jpmboy

oh geeze, they sure are different (and clearly labeled). - that is a handsome looking block tho. EK machining is nice and clean.


----------



## Zammin

Zurv said:


> isn't the layout of the 2080 and 2080ti PCB the same? I know with the bitspower block it can be used on either card.


Yeah, as the other guys said, some companies are doing separate blocks for the 2080 and 2080Ti, Alphacool and EK being two examples. I'm just annoyed that I'm getting the same run-around, with ETAs that mean nothing, from PLE for the GPU and Aquatuning for the water block.


----------



## kot0005

Zammin said:


> Yeah, as the other guys said, some companies are doing separate blocks for the 2080 and 2080Ti, Alphacool and EK being two examples. I'm just annoyed that I'm getting the same run-around, with ETAs that mean nothing, from PLE for the GPU and Aquatuning for the water block.


Why buy Alphacool blocks? They are garbage. Only EK and Aquacomputer make decent GPU blocks. Just buy an EK one locally, so you don't have to send it back to Germany if the plating comes off.


----------



## Zammin

kot0005 said:


> Why buy Alphacool blocks? They are garbage. Only EK and Aquacomputer make decent GPU blocks. Just buy an EK one locally, so you don't have to send it back to Germany if the plating comes off.


Whoah hold up a second, this is news to me. Out of curiosity what makes you say they are garbage? Are you sure that EK and Aquacomputer are the ONLY good GPU blocks? I mean that's a pretty broad statement, I haven't heard that until now. There is another guy in this thread with the Acetal version of the block I ordered and he is getting good results. There are also a few using the Bitspower block as well. That would also exclude Heatkiller which people seem to have a lot of faith in.

Last gen the Alphacool blocks were pretty good performers, beating out many other manufacturers according to TechPowerUp. As seen in this graph, taken from one of their Pascal GPU block reviews, the Alphacool GPX is at the top of the chart:










This might be due to the higher count of microchannels over the GPU area. With the RTX blocks there isn't a whole lot of info on which blocks are the best performers, which makes it difficult to choose, but I figured counting the microchannels on the various blocks available should give me a good idea, as more channels means more cooling surface area over the GPU, right? Anyway, this is what I got:

Alphacool GPX-N M01: 35 microchannels
Bitspower Lotan: 30 microchannels
EKWB Vector: 29 microchannels
Phanteks Glacier: 24 microchannels

The ones I don't know about are Heatkiller and Aquacomputer, both of which are much more expensive to buy and ship here, and the Aquacomputer one isn't even available yet (the rep told me it would be very expensive though).

I was able to get the Alphacool block and back plate for less than an EK block and back plate locally, as well as 5L of Aquacomputer coolant, with shipping for the lot only costing $9, which is fantastic. So in theory (keeping in mind I don't have a lot to go off here without comprehensive reviews), the Alphacool block should perform at least slightly better than the EK one, while costing a bit less and allowing me to get 5L of coolant for $38 along with it.

That was my thinking anyway, I'm just annoyed that I'm getting the run around with the ETA just like with PLE. It won't matter what GPU block I get if I never get my GPU lol.
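Out of curiosity I put those counts into a quick script to compare them relative to the EK Vector (just a rough sketch - channel count is a crude proxy that ignores channel width/depth, jet plates and flow rate):

```python
# Compare the water blocks listed above by microchannel count alone,
# relative to the EK Vector. This is only a crude proxy for cooling
# surface area over the die; channel geometry and flow matter too.
blocks = {
    "Alphacool GPX-N M01": 35,
    "Bitspower Lotan": 30,
    "EKWB Vector": 29,
    "Phanteks Glacier": 24,
}

baseline = blocks["EKWB Vector"]
for name, channels in sorted(blocks.items(), key=lambda kv: -kv[1]):
    diff = (channels - baseline) / baseline
    print(f"{name}: {channels} channels ({diff:+.0%} vs Vector)")
```

By that (very rough) measure the Alphacool block has about 21% more channels than the EK Vector, which is all I had to go on.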

As for the nickel plating coming off, can you please link me to a case where this occurred? This is news to me so I'd like to look into it if it truly is a common issue and not the fault of the cooling fluid or additives being used (some Biocides like Mayhems Extreme can damage nickel plating).

Cheers!

Edit: I tried googling "nickel plating coming off alphacool block" and nothing came up specific to Alphacool; strangely, the only brand-specific stuff that came up was a few EK instances? :S


----------



## kot0005

Zammin said:


> Edit: I tried googling "nickel plating coming off alphacool block" and nothing came up specific to Alphacool; strangely, the only brand-specific stuff that came up was a few EK instances? :S


I was referring to EK blocks about the nickel plating. I have had mine come off quite a few times, but they replace it easily. I don't know about Alphacool's warranty or RMA, so I stay away.

I had bad experiences with their DDC pumps in the past. The heatsinks were poorly designed, so the pumps would short out because of contact with the metal on the sides. Switched to Bitspower and the issue was gone.

When they first entered the GPU block market they did not do full cover blocks; the water wouldn't flow through the VRM sections. They also kinda look ugly..

Bitspower blocks are expensive and they charge a lot to ship to Australia.

About those temps, they don't look like load temps; they are probably at ambient. EK might have lower fins but they use a jet plate, so it might change performance..

If you see this block

https://www.techpowerup.com/233854/...-coverage-vga-block-for-gp102-boards-pictured

it doesn't have enough flow over the memory or VRMs.

But post your results here once you set up your stuff. Any word from PLE about your EVGA card? I might cancel mine if Scorptec or PCCG lists the 2080Ti Gaming OC as in stock.

It doesn't look like PLE will get any stock till mid November.


----------



## Zammin

kot0005 said:


> I was referring to EK blocks about the nickel plating. I have had mine come off quite a few times, but they replace it easily. I don't know about Alphacool's warranty or RMA, so I stay away.
> 
> I had bad experiences with their DDC pumps in the past. The heatsinks were poorly designed, so the pumps would short out because of contact with the metal on the sides. Switched to Bitspower and the issue was gone.
> 
> When they first entered the GPU block market they did not do full cover blocks; the water wouldn't flow through the VRM sections. They also kinda look ugly..
> 
> Bitspower blocks are expensive and they charge a lot to ship to Australia.
> 
> About those temps, they don't look like load temps; they are probably at ambient. EK might have lower fins but they use a jet plate, so it might change performance..
> 
> If you see this block
> 
> https://www.techpowerup.com/233854/...-coverage-vga-block-for-gp102-boards-pictured
> 
> it doesn't have enough flow over the memory or VRMs.
> 
> But post your results here once you set up your stuff. Any word from PLE about your EVGA card? I might cancel mine if Scorptec or PCCG lists the 2080Ti Gaming OC as in stock.
> 
> It doesn't look like PLE will get any stock till mid November.


Yeah, I know how bad some of Alphacool's pumps are; I read the thread about the VPP pump failures and such. I wouldn't use their pumps by any means - my EK D5 pump/combo is doing a good job.

The block looks good to me though; it is a full cover block and does cool the VRMs and memory, so there shouldn't be any issues. The layout is actually different from last gen. This is the one I ordered for reference: http://www.au.aquatuning.com/water-...xi-light-nvidia-geforce-rtx-2080ti-m01?c=7187

As for the graph from TPU I posted, those are load temps, but they are delta T over a room ambient of 25C, which is why they look like low numbers. In the test the Alphacool block had the best GPU temps but less impressive VRM temps; in saying that, the VRMs were only 6-7C hotter than with the EK block, which I don't see as being an issue. Although I have to add that I do not know if the one in TPU's graphs is the full cover one or one of the ones that rely on airflow to cool the memory and the VRMs; they don't have the actual review on their website so I can't confirm. Here is the actual link to the page where I got the graph from: https://www.techpowerup.com/reviews/EVGA/Hydro_Copper_GTX_1080/6.html
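In other words, to turn the chart's delta-T numbers back into absolute temperatures you just add the ambient back on (trivial sketch, assuming TPU's stated 25C room ambient):

```python
# Convert a delta-T reading from the TPU chart into an absolute
# temperature, assuming the 25C test-room ambient mentioned above.
AMBIENT_C = 25.0

def absolute_temp_c(delta_t_c: float) -> float:
    """Absolute component temperature for a given delta over ambient."""
    return AMBIENT_C + delta_t_c

# e.g. a block charted at 11C delta-T under load is really at 36C
print(absolute_temp_c(11.0))  # → 36.0
```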

The other guy in this thread who has the same block (but the acetal version) said he is happy with his. However, if for some reason I don't end up liking it, I'm getting a pretty good deal on it so I could always sell it on and buy a different block if need be. $150-$200 for a block isn't much in comparison to a $2000+ GPU haha. I will let you guys know how my temperatures turn out whenever I get all the stuff I ordered...

Nothing from PLE regarding the EVGA XC. They won't tell me anything other than "it'll be here when it gets here." They aren't even updating their ETA anymore. I suspect the other stores won't be getting stock soon either, but if they do, I may also grab one elsewhere and cancel my order with PLE. I thought it was funny that on their website some of the 2080Tis say "Back Order" and some say "Pre Order" when in reality they are all back orders.. :/


----------



## kot0005

Zammin said:


> I may also grab one elsewhere and cancel my order with PLE.


3 people already cancelled their orders.. one guy bought a 2080 instead..


----------



## Zammin

kot0005 said:


> 3 people already cancelled their orders.. one guy bought a 2080 instead..


Oh man.. I can understand their frustration. Don't think I'd settle for a 2080 though myself, not much of an upgrade from the 1080Ti.

I hope they come through for us next month. I wonder who their supplier is that keeps giving them incorrect ETAs. I asked them if it was EVGA and they said no, but when I asked who it was they said "uuuh... someone else".


----------



## kot0005

a couple more seem to have cancelled their EVGA 2080Ti orders.. one bought a Gigabyte Waterforce 1080Ti


----------



## UdoG

axiumone said:


> Damn. These bad boys get pretty hot even under water. I have a pair in sli with ek blocks, a [email protected] with 3 x 360 rads and a dual ddc. With an ambient temp of around 25c, my cards are around 55c with a very mild overclock to around 2000mhz.
> 
> They also draw some crazy power it seems. I have these on a Corsair ax1200i, each pcie plug has an individual cable and I’m seeing spikes of upwards of 50 amps per cable.
> 
> Quick pic before I filled the loop.


Could you please tell me if the Nvidia NVLink bridge has built-in LED lighting?

Thanks.


----------



## axiumone

UdoG said:


> Could you please tell me if the Nvidia NVLink bridge has built-in LED lighting?
> 
> Thanks.


Yes and there is no way to control it in any way for now, so it's bright green all the time.


----------



## UdoG

axiumone said:


> Yes and there is no way to control it in any way for now, so it's bright green all the time.


:-( Is it possible to deactivate the LED? If not, I have to check the ASUS NVLink.
The EVGA one looks too "futuristic".


----------



## Zammin

kot0005 said:


> a couple more seem to have cancelled their evga 2080ti orders..one bought a gigabyte waterforce 1080Ti


I saw someone posted today that apparently these guys have the Gigabyte Windforce in stock: https://www.shoppingexpress.com.au/..._BBDVdVjO4c-q4rQfQqwv2w4AaXnbTjnTNn_ojQxD2fv4

Someone in the comments said that TECS also apparently gets MSI Trios in stock regularly, although I don't know if that's true. I also question whether the Gigabyte one is really in stock or not, but the guy who posted the link on FB is asking them to confirm. I'm not too keen on the Windforce card, and the MSI Trio has no waterblocks yet, so I don't think I'll cancel my order just yet. Let's see if any of the others come in stock elsewhere before PLE's mythical stock arrives.


----------



## illidan2000

PCNerd said:


> Just received and installed the PNY XLR8. Just a note that the card does not have any RGB - I talked to some folks at PNY and there is an XLR8 version with RGB/LED in the works.
> 
> I don't believe that this card is the reference PCB (so i'm hesitant to try the GALAX BIOS). I uploaded the stock BIOS if anyone wants it:
> 
> https://www.techpowerup.com/vgabios/204371/204371
> 
> EDIT: I was able to flash to the Gigabyte 366W BIOS. Testing it out now.


Hi! What's up?
How is the card working with the new BIOS?
On the stock BIOS, by what percentage could you raise the PL?


----------



## boi801

hello,

I have a cheap MSI 2070 that is very, very power limited, and I am trying with no success to flash it with an FE BIOS. I know it might fail because of the PCB differences, but I want to do it anyway; even with the new patched nvflash I can't get past the ID mismatch.
I know this is the 2080Ti elite thread, but the 2070 thread is already dead.
I am also looking for an MSI 2070 BIOS...

Thanks!


----------



## Jpmboy

boi801 said:


> hello,
> 
> I have a cheap MSI 2070 that is very, very power limited, and I am trying with no success to flash it with an FE BIOS. I know it might fail because of the PCB differences, but I want to do it anyway; even with the new patched nvflash I can't get past the ID mismatch.
> I know this is the 2080Ti elite thread, but the 2070 thread is already dead.
> I am also looking for an MSI 2070 BIOS...
> 
> Thanks!


First check that the ROM sizes are the same for the two boards, else the flash will fail.
Did you issue the "protectoff" command?

These instructions should work - replace "nvflash" with the name of the .exe you are using:
Open Device Manager and disable the drivers.
Open the NVFlash folder:

Win7: shift-right-click in the folder > "open command window here"
Win8/8.1/10: File > Open > command prompt as admin


Type (for a single-card config):

nvflash --list (note the GPU PCIe lanes - probably 0, 1, 2, etc. unless you have a PLX chip)
nvflash -i0 --protectoff
nvflash -i0 -6 newromname.rom
Hit Y every time asked.

Exit the command window after the flash finishes.

Enable the drivers in Device Manager.
Reboot.
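If it helps, the sequence above can be sketched as a small dry-run script (a sketch only - "nvflash" and "newromname.rom" are placeholders for whatever executable and ROM you're actually using):

```shell
#!/bin/sh
# Dry-run sketch of the single-card nvflash sequence above.
# NVFLASH and ROM are placeholders - point them at your actual files.
NVFLASH="nvflash"
ROM="newromname.rom"
IDX=0  # GPU index reported by "nvflash --list"

# Echo the commands instead of executing them so the sequence can be
# reviewed first; drop the echo prefixes to actually flash.
echo "$NVFLASH --list"
echo "$NVFLASH -i$IDX --protectoff"
echo "$NVFLASH -i$IDX -6 $ROM"
```

Remember to disable the display driver in Device Manager before flashing, and re-enable it and reboot afterwards.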


----------



## dante`afk

did anyone order a 2080Ti lately form nvidia.com (US) with the estimated delivery date of 11/02 and got his card already shipped?


----------



## warbucks

dante`afk said:


> did anyone order a 2080Ti lately form nvidia.com (US) with the estimated delivery date of 11/02 and got his card already shipped?


I ordered mine on the 18th with an estimated delivery of Oct 29th. It arrived yesterday.


----------



## Zurv

I ordered 2 from Nvidia on the 17th (Wednesday) and had them in my hands on the 22nd (Monday).


----------



## managerman

dante`afk said:


> did anyone order a 2080Ti lately form nvidia.com (US) with the estimated delivery date of 11/02 and got his card already shipped?




Yes! Ordered on 10/22. They are arriving tomorrow! Originally said Nov 2nd! Woohoo!



-M


----------



## rush2049

UdoG said:


> :-( Is it possible to deactivate the LED? If not, I have to check the ASUS NVLink.
> The EVGA one looks too "futuristic".


The Nvidia bridge is really simple to take apart, and you could very easily put tape over the LED panel or unplug the wires to the LED (though that might cause some kind of issue).


----------



## Foxrun

Has anyone flashed a bios that actually allowed them to increase their core clock?


----------



## Vipeax

Jpmboy said:


> First check that the ROM sizes are the same for the two boards, else the flash will fail.
> Did you issue the "protectoff" command?
> 
> These instructions should work - replace "nvflash" with the name of the .exe you are using:
> Open Device Manager and disable the drivers.
> Open the NVFlash folder:
> 
> Win7: shift-right-click in the folder > "open command window here"
> Win8/8.1/10: File > Open > command prompt as admin
> 
> 
> Type (for a single-card config):
> 
> nvflash --list (note the GPU PCIe lanes - probably 0, 1, 2, etc. unless you have a PLX chip)
> nvflash -i0 --protectoff
> nvflash -i0 -6 newromname.rom
> Hit Y every time asked.
> 
> Exit the command window after the flash finishes.
> 
> Enable the drivers in Device Manager.
> Reboot.


He's out of luck. I bet he has a RTX 2070 chip that is a non-"A" chip. It will report a mismatch on the Device ID. The EEPROM actually returns an error status code (that can be translated to DEVICE MISMATCH with the correct documentation) when attempting a flash even when I patched NVFlash to bypass that check as well.


----------



## boi801

Vipeax said:


> He's out of luck. I bet he has a RTX 2070 chip that is a non-"A" chip. It will report a mismatch on the Device ID. The EEPROM actually returns an error status code (that can be translated to DEVICE MISMATCH with the correct documentation) when attempting a flash even when I patched NVFlash to bypass that check as well.


I am going to try that right now... and yes I have a non A chip....


----------



## boi801

Vipeax said:


> He's out of luck. I bet he has a RTX 2070 chip that is a non-"A" chip. It will report a mismatch on the Device ID. The EEPROM actually returns an error status code (that can be translated to DEVICE MISMATCH with the correct documentation) when attempting a flash even when I patched NVFlash to bypass that check as well.


You are correct, no luck:

WARNING: None of the firmware image compatible PCI Device ID's
match the PCI Device ID of the adapter.
Adapter PCI Device ID: 1F02
Firmware image PCI Device ID: 1F07
Alternate: 1F2E
WARNING: Firmware image PCI Subsystem ID (10DE.12ADx
does not match adapter PCI Subsystem ID (1462.3734).
WARNING: None of the firmware image compatible Board ID's
match the Board ID of the adapter.
Adapter Board ID: 00D8
Firmware image Board ID: 00D7

NOTE: Exception caught.
Nothing changed!




ERROR: GPU mismatch


----------



## BigMack70

Any chance someone here with a 2080 Ti also upgraded from a CPU like Haswell-E to something new and can comment on the experience? There's no news about PCI-e 4.0 anywhere yet so I'm debating upgrading my CPU from a 5930k since I am pretty sure I'm losing a couple % performance even at 4k.


----------



## jabtn2

I just updated to the newest Nvidia drivers that were released a while ago. Now I can't hold steady at the same clocks I had before. Anyone else notice this?


----------



## Okt00

jabtn2 said:


> I just updated to the newest Nvidia drivers that were released a while ago. Now I can't hold steady at the same clocks I had before. Anyone else notice this?


Are you talking 416.34 vs 416.16? Or older?


Version: 416.34 - Release Date: Thu Oct 11, 2018

Version: 416.16 - Release Date: Thu Oct 04, 2018

Version: 411.70 - Release Date: Thu Sep 27, 2018

Version: 411.63 - Release Date: Wed Sep 19, 2018


----------



## Jpmboy

Zurv said:


> isn't the layout of the 2080 and 2080ti PCB the same? I know with the bitspower block it can be used on either card.


hey J. Got the block and mounted it this afternoon. Max core temp I'm seeing is ~10 or 12C higher than the loop cold side (eg, 32C running Superposition or Timespy). Folding gives ~2.5M PPD! I've been using a different TIM for a while now (samples) and am becoming sold on Tim-Mate TM2. Seems to be the equal of TGK but much less expensive. Been using it on x299 rigs (3) and 1151 rigs (2). Good stuff.


Vipeax said:


> He's out of luck. I bet he has a RTX 2070 chip that is a non-"A" chip. It will report a mismatch on the Device ID. The EEPROM actually returns an error status code (that can be translated to DEVICE MISMATCH with the correct documentation) when attempting a flash even when I patched NVFlash to bypass that check as well.


these BIOS blocks NV has been putting in lately really suck.


Say - what tool do you use to control the card's frequencies? I have the GALAX tool (beta), the EVGA X1 tool, etc.


----------



## jabtn2

Okt00 said:


> Are you talking 416.34 vs 416.16? Or older?
> 
> 
> Version: 416.34 - Release Date: Thu Oct 11, 2018
> 
> Version: 416.16 - Release Date: Thu Oct 04, 2018
> 
> Version: 411.70 - Release Date: Thu Sep 27, 2018
> 
> Version: 411.63 - Release Date: Wed Sep 19, 2018


I went from 416.16 to 416.34. I will revert to 416.16 soon to see if I am stable at the same clocks again. Same temperature on both gpus along with same voltage, power, etc. Running at lower clocks gives same score in timespy extreme as those same lower clocks did before. In other news, I finally got this thing actually put together! As mentioned by others, I wish I could turn the nvlink bridge light off easily.


----------



## stuff79

Nice setup you have there jabtn2. What case is that? On another note, has anyone had any issues with pairing an i7 7700 (non-K) with a 2080ti? There seems to be so much mixed feedback regarding CPU bottlenecking online that I figured this would be the best place to ask, seeing how experienced the majority of the members are in this forum. Many thanks in advance and any feedback is much appreciated. Cheers=)


----------



## BigMack70

stuff79 said:


> Nice setup you have there jabtn2. What case is that? On another note, has anyone had any issues with pairing an i7 7700(non K) with a 2080ti? There seems to be so many mixed feedback regarding cpu bottlenecking online, that I figured this would be the best place to ask seeing how experienced the majority of the members are in this forum. Many thanks in advance and any feedback is much appreciated. Cheers=)


Can't comment specifically about the 7700k but I can tell you that at 1440p, my 4.5 GHz 5930k significantly bottlenecks the 2080 Ti and even at 4k I think I am seeing some mild bottlenecking (for example, BF1 at 4k only uses 95-98% of the GPU). At 4k it's less of an issue because the display is 60 Hz but at 1440p 144 Hz I'd be recommending the 9900k to folks.


----------



## jabtn2

stuff79 said:


> Nice setup you have there jabtn2. What case is that?


It's a Thermaltake Tower 900 Snow. I was tired of so many mini itx builds haha. Picture for scale.


----------



## GosuPl

One of my RTX 2080Tis just died today...

GDDR6 problem - artifacts and crashes at stock.

*** with these cards...

Well, at least while the card is away I will play AC Origins, because I have to catch up, and SLI does not work there anyway ;-)


----------



## TahoeDust

My 2080 ti FTW3 just shipped from EVGA...finally. Now I need a waterblock for it.


----------



## Okt00

jabtn2 said:


> I went from 416.16 to 416.34. I will revert to 416.16 soon to see if I am stable at the same clocks again. Same temperature on both gpus along with same voltage, power, etc. Running at lower clocks gives same score in timespy extreme as those same lower clocks did before. In other news, I finally got this thing actually put together! As mentioned by others, I wish I could turn the nvlink bridge light off easily.


Oh wow! That's pretty. I like how one side is parallel city, and you crisscrossed the other side.


----------



## Jpmboy

stuff79 said:


> On another note, has anyone had any issues with pairing an i7 7700(non K) with a 2080ti? There seems to be so many mixed feedback regarding cpu bottlenecking online, that I figured this would be the best place to ask seeing how experienced the majority of the members are in this forum. Many thanks in advance and any feedback is much appreciated. Cheers=)


Really depends on the resolution and which games you are running. Unfortunately we'd really need to spin up a 4- or 6-core CPU to minimize the CPU effect as best as can be done, and a locked SKU like the 7700 makes that a challenge. Heck, if you were nearby I'd toss ya a Coffee Lake K CPU so you could test it yourself.


----------



## stuff79

Hey there Jpmboy. Thanks for your feedback. As for my location to you, I'd reckon it would take 19 hrs and 25 minutes by plane to get there=) Now, assuming that I plan to game at 4k resolution, would the load not theoretically be skewed more towards the gpu, in this case the 2080ti? I actually have seen a youtuber post a video describing his setup comprising a 4770k+2080ti (https://youtu.be/jhudkF3GWgQ) which was quite interesting. Granted his was a K series CPU, and he tuned the timings of his ram, I felt this seemed promising. By the way, is the performance of the 7700 that far off from a 4770k?


----------



## kot0005

GosuPl said:


> One of my RTX 2080Ti, just died , today...
> 
> GDDR6 problem, artifacts, crashes on stock
> 
> *** with that cards...
> 
> Well, at least at the time of the card I will play AC Origns because I have to catch up, and there SLI does not work ;-)


Lots of FEs dying..


----------



## stuff79

You would think with the premium prices asked by Nvidia, their reference cards would fare much better. Or is this a trend across the board for the 2080ti cards from other vendors as well?


----------



## TahoeDust

I have seen a TON of problems with FE cards in the last week or so. I can't tell if it is because they are the biggest sample size, or if there is a real issue with the cards.


----------



## kot0005

stuff79 said:


> You would think with the premium prices asked by Nvidia, their reference cards would fare much better. Or is this a trend across the board for the 2080ti cards from other vendors as well?


All of the 2080Ti complaints are from FE users, none from AIBs so far. Probably poor QC in Nvidia's assembly lines.


----------



## BigMack70

stuff79 said:


> I actually have seen a youtuber post a video describing his setup comprising a 4770k+2080ti


Interesting. On 1440p maxed settings and TAA, the SotTR benchmark tells me that I'm only 88% GPU bound (compared to 100% at 4k) with a 4.5 GHz 5930k and 32GB of DDR4-2666. I don't think his methodology (assuming CPU under 90-100% = not CPU bottlenecked) is valid. For example, when I upgraded to a 5930k from a 2600k, I did so without ever seeing my 2600k at 90%+ usage in games, but I still saw a decent performance boost in many titles. Notice how, for example, his 1440p DOOM GPU usage drops down to around 90% even though his CPU usage is around 80%.


----------



## bp7178

dante`afk said:


> did anyone order a 2080Ti lately form nvidia.com (US) with the estimated delivery date of 11/02 and got his card already shipped?


Ordered on Oct. 22 with an est. 11/02 ship date. They shipped today, Oct. 24.


----------



## bp7178

UdoG said:


> :-( Is it possible to deactivate the LED? If not, I have to check the ASUS NVLink.
> The EVGA looks to much "futuristic".


Yes, you can turn it off with X1.


----------



## Okt00

stuff79 said:


> Hey there Jpmboy. Thanks for your feedback. As for my location to you, I'd reckon it would take 19 hrs and 25 minutes by plane to get there=) Now, assuming that I plan to game at 4k resolution would the load not theoretically be skewed more towards the gpu, in this case the 2080ti? I actually have seen a youtuber post a video describing his setup comprising a 4770k+2080ti (https://youtu.be/jhudkF3GWgQ) which was quite interesting. Granted his was a K series CPU, and he tuned the timings of his ram, I felt this seemed promising. By the way, is the performance of the 7700 that far off from a 4770k?


Make sure to watch the part 2 he's put out. When you get a more CPU intensive load, it definitely hits the overall performance. Think BF1, GTA V... 

https://www.youtube.com/watch?v=qKztwJTxujg

Honestly though while my 2080 Ti was working, my [email protected] kept up fine.


----------



## Jpmboy

stuff79 said:


> Hey there Jpmboy. Thanks for your feedback. As for my location to you, I'd reckon it would take 19 hrs and 25 minutes by plane to get there=) Now, assuming that I plan to game at 4k resolution would the load not theoretically be skewed more towards the gpu, in this case the 2080ti? I actually have seen a youtuber post a video describing his setup comprising a 4770k+2080ti (https://youtu.be/jhudkF3GWgQ) which was quite interesting. Granted his was a K series CPU, and he tuned the timings of his ram, I felt this seemed promising. By the way, is the performance of the 7700 that far off from a 4770k?


The 4770K is actually a very good gaming CPU (very good IPC). The issue with multiplier-locked CPUs is that you are limited to BCLK OC, and on most platforms that also alters the DMI (e.g., PCIe, USB bus clocks), requiring PLL tuning - it can be done, but it's tricky. For the majority of games (assuming limited streaming) 4 to 6 cores is plenty... but they really need to be running 5+ GHz to really push the load to the GPU at any resolution. Best to just run your rig and see if the gameplay is choking, then decide. Honestly? The 2080Ti deserves a better partner than a locked "mid-stream" CPU. Even an unlocked 4-core CL like the 8350K would really impress you. You got a 4K panel and a 2080Ti... give that kit a proper CPU. 
This is OCN after all... locked-multiplier CPUs are, well... LOCKED.




BigMack70 said:


> Interesting. On 1440p maxed settings and TAA, the SotTR benchmark tells me that I'm only 88% GPU bound (compared to 100% at 4k) with a 4.5 GHz 5930k and 32GB of DDR4-2666. I don't think his methodology (assuming CPU under 90-100% = not CPU bottlenecked) is valid. For example, when I upgraded to a 5930k from a 2600k, I did so without ever seeing my 2600k at 90%+ usage in games, but I still saw a decent performance boost in many titles. Notice how, for example, his 1440p DOOM GPU usage drops down to around 90% even though his CPU usage is around 80%.


IDK man, a 5.2 GHz 2600K can keep pace in _most_ games with any CPU. A landmark chip.




Hey guys, post your scores in OCN's benchthreads:


https://www.overclock.net/t/1518806/firestrike-ultra-top-30/0_20
https://www.overclock.net/t/1443196/firestrike-extreme-top-30
https://www.overclock.net/t/1464813/3d-mark-11-extreme-top-30
https://www.overclock.net/t/872945/top-30-3d-mark-13-fire-strike-scores-in-crossfire-sli
https://www.overclock.net/t/1235557/official-top-30-heaven-benchmark-4-0-scores
https://www.overclock.net/t/1360884/official-top-30-unigine-valley-benchmark-1-0
https://www.overclock.net/t/1361939/top-30-3dmark11-scores-for-single-dual-tri-quad
https://www.overclock.net/t/1406832/single-gpu-firestrike-top-30
https://www.overclock.net/t/1606006/3dmark-time-spy-benchmark-top-30/0_20
https://www.overclock.net/t/1627767/top-30-unigine-superposition-benchmark/0_20


----------



## dVeLoPe

BigMack70 said:


> Any chance someone here with a 2080 Ti also upgraded from a CPU like Haswell-E to something new and can comment on the experience? There's no news about PCI-e 4.0 anywhere yet so I'm debating upgrading my CPU from a 5930k since I am pretty sure I'm losing a couple % performance even at 4k.


I'd like to know the same thing. Just got a 2080Ti and I'm on a 4 GHz 5820K (6c/12t).


----------



## dante`afk

jabtn2 said:


> It's a Thermaltake Tower 900 Snow. I was tired of so many mini itx builds haha. Picture for scale.


bruh, are you currently moving apartments?

such a sick computer and then you have this ghetto setup table, chair, room


----------



## jabtn2

dante`afk said:


> bruh, are you currently moving apartments?
> 
> such a sick computer and then you have this ghetto setup table, chair, room /forum/images/smilies/biggrin.gif


LMAO. I spend money where it counts I guess? The positioning in the room is so I'm facing my TV, and the desk and chair are Goodwill specials. It works for what I do.


----------



## dante`afk

lmao 


anyone else (apart from zurv) who uses the bitspower block and can provide some Deltas for their gpu/ambient?

jpmboy do you got your block already?


----------



## kot0005

Another FE bites the dust...

https://www.reddit.com/r/nvidia/comments/9r7oon/if_you_ever_remember_my_first_pc_the_gpu_died/


----------



## ENTERPRISE

Seems to only be FE Models biting the dust. Not that I ever buy FE Models but certainly glad I didn't this time.


----------



## GosuPl

Something unimaginable. I wonder what is wrong with the FEs...


----------



## Zammin

Damn, it's pretty unexpected that the most problematic version of the product would be the one coming directly from the GPU manufacturer themselves. They must be rushing production to meet demand, right? I've never bought an FE card, but were there issues with the 10-series FEs? And was the 10-series launch like this one, where the top-end GeForce GPU is near impossible to get a couple of months after launch?


----------



## Nicklas0912

Is there any BIOS out there that gives FE cards a higher power target? What happens if you flash with the EVGA GeForce RTX 2080 Ti XC BLACK BIOS? It's a reference PCB but has a 5% higher power target.


----------



## PhotonFanatic

Question: Did they come out with the Ti version of the 2080 at the same time (or close to it) as the regular 2080? If so, why not just drop the Ti, and make that the regular version? Or call the Ti, the 2090?


----------



## Zammin

PhotonFanatic said:


> Question: Did they come out with the Ti version of the 2080 at the same time (or close to it) as the regular 2080? If so, why not just drop the Ti, and make that the regular version? Or call the Ti, the 2090?


They released at the same time. I don't see an issue with it being called the 2080Ti, I mean the 1080Ti wasn't called the 1090 so it would be odd if they suddenly changed it. It is different that they were released at the same time though.


----------



## GosuPl

Plus, I should have a new 2080Ti at home in a few days, before the courier picks up the corpse ;-)

The benefits of being a reviewer and psycho hardware tester :D


----------



## toncij

Anyone know if those Palit Duals and GPs are reference PCB? Looking to get a pair of them since blocks are going on them anyway. Cheaper than some EVGA XCs and Asus.


----------



## Jpmboy

dante`afk said:


> lmao
> 
> 
> anyone else (apart from zurv) who uses the bitspower block and can provide some Deltas for their gpu/ambient?
> 
> jpmboy do you got your block already?


yeah - posted back a page or two. The BP block is working fine - looks good too. Card runs 10-12C above the loop cold-side temp. Folding overnight... max temp is 33C





toncij said:


> Anyone knows if those Palit Duals and GPs are reference PCB? Looking to get a pair of those since block is going on them anyway. Cheaper to get than some EVGA XCs and Asus.


 check the OP?


----------



## Vipeax

Nicklas0912 said:


> Is there any bios out there that can do so FE cards get higher power target ? what happen if you flash it with EVGA GeForce RTX 2080 Ti XC BLACK, is Ref PCB, but have 5% more power target.


Just use the Galax one...


----------



## Zammin

Aquatuning are starting to sound defensive in their emails to me about this GPU block.. They've emailed me a couple times saying my GPU block has been delayed and I said that it's fine and just asked if they knew what was causing the delay (considering they have the 2080 version in stock now but not the 2080Ti one that was due weeks ago) and they avoided the question entirely, so I asked again politely and they came back with:

"This information is an expected delivery date, which is determined or estimated on the basis of supplier information and own experience values. Therefore we would like to ask for your understanding that Aquatuning has no influence on delivery dates of suppliers and therefore there is no claim for compliance on the part of Aquatuning."

I wasn't even complaining about the delay I was just asking if they knew what the cause of delay was, not accusing them of non-compliance lol. Didn't even answer my question for the second time. I guess they don't know but don't want to say it. They are Alphacool's direct outlet so I thought they would know what was going on :/


----------



## toncij

Jpmboy said:


> yeah - posted in back a page or two. The BP block is working fine - looks good too. card runs 10-12C above the loop cold side temp. Folding overnight... max temp is 33C
> check the OP?


Silly me. Thanks mate.


----------



## kot0005

stuff79 said:


> You would think with the premium prices asked by Nvidia, their reference cards would fare much better. Or is this a trend across the board for the 2080ti cards from other vendors as well?





Zammin said:


> Aquatuning are starting to sound defensive in their emails to me about this GPU block.. They've emailed me a couple times saying my GPU block has been delayed and I said that it's fine and just asked if they knew what was causing the delay (considering they have the 2080 version in stock now but not the 2080Ti one that was due weeks ago) and they avoided the question entirely, so I asked again politely and they came back with:
> 
> "This information is an expected delivery date, which is determined or estimated on the basis of supplier information and own experience values. Therefore we would like to ask for your understanding that Aquatuning has no influence on delivery dates of suppliers and therefore there is no claim for compliance on the part of Aquatuning."
> 
> I wasn't even complaining about the delay I was just asking if they knew what the cause of delay was, not accusing them of non-compliance lol. Didn't even answer my question for the second time. I guess they don't know but don't want to say it. They are Alphacool's direct outlet so I thought they would know what was going on :/


Try messaging alphacool directly and tell them about your order.


----------



## Zammin

kot0005 said:


> Try messaging alphacool directly and tell them about your order.


That's a good idea. Thank you sir


----------



## illidan2000

Anyone have a PNY 2080ti? Does it have a reference PCB? What is its maximum power limit? Can it take the Gigabyte BIOS?


----------



## GAN77

toncij said:


> Anyone knows if those Palit Duals and GPs are reference PCB?


Reference PCB with the TU102-300-KX-A1 GPU, not the TU102-300*A*-KX-A1 GPU

https://forums.overclockers.ru/viewtopic.php?p=15844665#p15844665


----------



## boi801

Hello,

non-A chips already flashable?


----------



## Jpmboy

ugh - spent the last 30 min reloading drivers and control tools 'cause my 2080Ti was idling at 1200MHz... then saw that 144 Hz is forcing that; 120 Hz does not. Don't have this problem with my Titan V at 144 Hz. Think I'll switch the TV back to this rig...


----------



## Zurv

Jpmboy said:


> ugh - spent the last 30 min reloading drivers and control tools "cause my 2080Ti was idling at 1200MHz... then see that 144HZ is forcing that, 120Hz does not. Don't have this problem with my Titan V at 144Hz. Think I'll switch the TV back to this rig...


I was having odd problems (mainly with SLI) and playing around with the drivers didn't help. I went with a full reinstall and that cleared up the issues. That said, 1809 isn't available anymore (ugh.. note, 1809 is required for the MS raytracing API.)
So was the fix a clean install or going back to 1803?


----------



## nycgtr

Peeps, I am in contact with Bitspower regarding the upcoming MSI Trio X block for the 80 and 80ti. So far I have suggested that it work with the factory backplate (which has now been confirmed) and that a removable piece be added to the block to keep an even look when not using SLI, specifically for the vertical GPU mount look. Below is a preview; the block is still in the works with an ETA of 2 weeks. I have suggested that the additional removable piece have a design or logo element if that suits them better. Any input, I will relay over.

https://imgur.com/a/T93IzLW


----------



## Vipeax

Jpmboy said:


> ugh - spent the last 30 min reloading drivers and control tools "cause my 2080Ti was idling at 1200MHz... then see that 144HZ is forcing that, 120Hz does not. Don't have this problem with my Titan V at 144Hz. Think I'll switch the TV back to this rig...


With three screens it is stupid too, but the cost of power is such a minor factor, to be honest. Even at 100W idle I'd be looking at 0.019 €/hour. At an insane 4 hours a day, 365 days a year, that would mean about €25/year. Who gives a **** on a card that is going to cost you about €500/year in 7nm replacements.
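Vipeax's estimate checks out; a minimal sketch of the arithmetic (the 0.19 €/kWh electricity price is my assumption, chosen because it is what makes 100 W come out to the quoted €0.019/hour):

```python
# Rough idle-power cost check for the estimate above.
IDLE_WATTS = 100          # hypothetical worst-case idle draw
PRICE_EUR_PER_KWH = 0.19  # assumed electricity price, not from the post
HOURS_PER_DAY = 4
DAYS_PER_YEAR = 365

cost_per_hour = IDLE_WATTS / 1000 * PRICE_EUR_PER_KWH
cost_per_year = cost_per_hour * HOURS_PER_DAY * DAYS_PER_YEAR

print(f"{cost_per_hour:.3f} EUR/hour")  # 0.019 EUR/hour
print(f"{cost_per_year:.0f} EUR/year")  # 28 EUR/year, close to the quoted ~25
```

Even doubling the assumed price only moves the yearly figure by tens of euros, which is the point being made.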


----------



## Pepillo

Jpmboy said:


> ugh - spent the last 30 min reloading drivers and control tools "cause my 2080Ti was idling at 1200MHz... then see that 144HZ is forcing that, 120Hz does not. Don't have this problem with my Titan V at 144Hz. Think I'll switch the TV back to this rig...


Same problem here, with a 165 Hz monitor. Also the VRAM was idling at maximum clocks...


----------



## amano74

Pepillo said:


> Same problem here, with a 165 Hz monitor. Also the vRam was idling at maximum clocks …….


I switched back to the original BIOS on my RTX 2080ti,

and I don't have the problem at 144 Hz. Watching a YouTube video atm: core 300 MHz, memory 100 MHz.

https://www.overclock.net/forum/attachment.php?attachmentid=226926&thumb=1


----------



## toncij

amano74 said:


> i switched back to the original bios on my rtx 2080ti,
> 
> & i Don't have the problem at 144htz, watching a video on YouTube atm, core 300mhz, memory 100mhz
> 
> https://www.overclock.net/forum/attachment.php?attachmentid=226926&thumb=1


But with original BIOS you're stuck at low power limit?


----------



## amano74

toncij said:


> But with original BIOS you're stuck at low power limit?


I did some tests in Time Spy:

original BIOS, power limit 115% (300W), max OC core + VRAM: 15800 graphics score

Galax BIOS, power limit 126% (380W), max OC core + VRAM: 16380 graphics score

With the Galax BIOS, my wattmeter shows it drawing more than 380W under heavy OC load (around 400W).

Due to the RMA problems on some RTX 2080ti FEs, I want the original BIOS back in it before it maybe R.I.P.s...

The gain is not worth it: +25% power draw for a ~3% improvement...
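Plugging the two Time Spy results above into a quick calculation backs up that conclusion (the exact percentages come out slightly higher than the rounded figures in the post):

```python
# Sanity check of the BIOS-flash trade-off, using the figures from the post.
stock = {"watts": 300, "score": 15800}   # original BIOS, 115% power limit
galax = {"watts": 380, "score": 16380}   # Galax BIOS, 126% power limit

power_gain = galax["watts"] / stock["watts"] - 1   # relative power increase
score_gain = galax["score"] / stock["score"] - 1   # relative score increase

print(f"power: +{power_gain:.1%}, score: +{score_gain:.1%}")
# power: +26.7%, score: +3.7%
```

Roughly seven watts of extra draw for every point of extra score: efficiency clearly drops once the card is pushed past its stock power limit.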


----------



## Spiriva

It's impossible to get the EVGA 2080ti in Sweden; they keep pushing the date ahead all the time. Now it's the 2nd of December. I'm going to go ahead and buy the 2080ti FE from Nvidia next week instead.

My question though (since I haven't got any card yet, I haven't been very active in this thread): is there any BIOS that is considered 'the best'?

Or is the Nvidia original BIOS just as fine as any other out there? I'm going to watercool the card with an EK block.

thnx


----------



## GosuPl

Not only 2080Ti FEs have problems.

https://forum.purepc.pl/application...7f6667584372a470c21eb569393333a5c1f0337a03e3e

https://forum.purepc.pl/application...ccb653e79ad0d31a7fbdb641df5a52425e2f1c7732012

https://forum.purepc.pl/application...42f67086cad5937459d85a8df29ee8936ef5dec3e8974

My faulty 2080Ti FE:

https://forum.purepc.pl/application...e65b4578d83e7832c626099f29d2ce512eb83d1308b25

I feel that soon there will be a big scandal and Nvidia will get ...

Btw. the RGB controller on FE cards (we still have no software for it) sometimes goes mad ;-)

https://forums.geforce.com/default/topic/1073171/geforce-rtx-20-series/rtx-2080-logo-is-red-/1/

https://forum.purepc.pl/application...2hJB6lykxLnnsWggZX2iqPpG3ycbdI1zd1Oj8S7umPUUM


----------



## mouacyk

Starting to sound like RT and Tensor cores are really premature and were injected into the pipeline too soon.


----------



## Jpmboy

Zurv said:


> i was having odd problems (mainly with SLI) and playing around with the drivers didn't help. I went with a full reinstall and that cleared up the issues. That said, 1809 is around anymore (ugh.. note, 1809 is required for MS API for Ray tracing.)
> So was the fix a clean install or going back to 1803?


nothing except going back to 120Hz worked. Changed to a different monitor (165Hz Agon) - same issue. I'm on 1803 with this rig. Originally put the Ti on an 8086/Apex X bench - ran fine, and that has version 1709! (disabled all updates via group policy)


Vipeax said:


> With three screens it is stupid also, but the cost of power plays such a minor factor to be honest. Even at 100W idle I'd be looking at 0.019 €/hour. At an insane 4 hours a day / 365 days a year that would mean €25/year. Who gives a **** on a card that is going to cost you about €500/year due to 7nm replacements.


not primarily a cost consideration here... more of a bug hunt!


----------



## xer0h0ur

mouacyk said:


> Starting to sound like RT and Tensor cores are really premature and were injected into the pipeline too soon.


Tensor cores have been on dies since Volta. Sounds to me like you're just speculating.


----------



## amano74

xer0h0ur said:


> Tensor cores have been on dies since Volta. Sounds to me like you're just speculating.


I agree; most of the artifacting problems seem to be VRAM-related, and the GDDR6 is the newest thing on the card.


----------



## kx11

Scan.co.uk got the Strix OC in stock; I almost ordered one before looking at that price


----------



## Nico67

Jpmboy said:


> ugh - spent the last 30 min reloading drivers and control tools "cause my 2080Ti was idling at 1200MHz... then see that 144HZ is forcing that, 120Hz does not. Don't have this problem with my Titan V at 144Hz. Think I'll switch the TV back to this rig...


Had a similar problem with a Swift years ago, "Prefer Max Performance" Globally was the issue. Setting "Adaptive" globally and "Prefer Max" on a per app basis was a good workaround.


----------



## Zammin

kot0005 said:


> Try messaging alphacool directly and tell them about your order.


Umart have the Gigabyte Gaming OC in stock if you're keen: https://www.umart.com.au/Gigabyte-GeForce-RTX-2080-Ti-Gaming-11G-OC-Graphics-Card_47312G.html

Edit: never mind they changed it from "In Stock" to "Pre-Order" shortly afterward.


----------



## boi801

Hello again, and sorry to talk about the lower-tier RTX 2070 here... its thread is empty...
I have been trying to flash my MSI 2070 non-A chip with other BIOS versions with more power headroom, but those are A chips and it's a no-go...
Today on Tom's Hardware, though, I got a non-A chip BIOS from an EVGA card with a reference PCB and was able to flash it. The result: one of the three DisplayPorts didn't work, because the EVGA has a DVI where the MSI has a third DP, and the power readings were messed up because the EVGA has only one 8-pin while the MSI uses 8+6-pin power.
Anyway, the OC was the same, although power never went above 70% (MSI AB and GPU-Z)...
Every time the OC fails or crashes while testing it never shows artifacts and never crashes Windows; only the benchmark app freezes and the fans spin down... still no artifacts ever...


----------



## xer0h0ur

amano74 said:


> i agree, most of the problems of artifacting seems to be vram related, & the GDDR6 is the newest thing on the card.


I mean, I won't rule out the possibility of this massive monolithic die having a problem, since it's the first time they have tried selling such a huge GPU to the consumer market. I still think it has more to do with GDDR6 issues, as you have guessed. That was the rumor going around over the delays with the Ti's stock around the globe. Of course, that is like me saying I am going to take one speculation over another; it's still just speculation in the end. I'm keeping my fingers crossed that nothing happens with my FE card, but so far so good.


----------



## kot0005

Zammin said:


> Umart have the Gigabyte Gaming OC in stock if you're keen: https://www.umart.com.au/Gigabyte-GeForce-RTX-2080-Ti-Gaming-11G-OC-Graphics-Card_47312G.html


Says pre-order. I have one ready to be picked up from PCCG already, but I got an email from PLE today saying my XC Ultra will arrive today lol... I will check if PCCG can hold it till Monday for me. If not, it should go back into stock.


----------



## Jpmboy

Nico67 said:


> Had a similar problem with a Swift years ago, "Prefer Max Performance" Globally was the issue. Setting "Adaptive" globally and "Prefer Max" on a per app basis was a good workaround.


yeah - strange. I've had it set to adaptive in NVCP. I switched the Titan V back to X299 and the 2080Ti back to 1151, and voilà. Both now again idle in the P8 state at base clocks at both 144 and 165 Hz. IDK - ghost in the machine, I think.


----------



## Djreversal

So I am trying to BIOS flash my 2080ti's and it looks like it's trying to do both. One looks like it was successful; the other says "software write protection enabled"... can someone maybe walk me through whether I'm doing this right, or if there is a command I need to enter to make this work?


----------



## Jpmboy

Djreversal said:


> SO i am trying to bios flash my 2080ti's and it looks like its trying to do both.. 1 looks like its been successful the other one says software write protection enabled... can someone maybe walk me threw if im doing this right or if there is a code i need to put in to make this work?


check my post earlier today for the commands... and what exact commands are you using?


----------



## svntwoo

*GIGABYTE RTX 2080 Ti WindForce OC*

I managed to pick up a GIGABYTE RTX 2080 Ti WindForce OC (GV-N208TWF3OC-11GC) from Micro Center a couple of days after launch; several people did not pick up their pre-orders.


I was unaware of the 111% max power limit... Thankfully Matt from Gigabyte was able to provide links and info on updating the BIOS to 90.02.0B.40.6B, which upped the power limit to 140%. After some playing around and benching I am happy with the card. I was able to get clocks to 2115/7850 with the BIOS update, but wow does it draw a load of power: 138.8% and 360.8 watts is about where I am maxing out.

Dialed it back down for day-to-day use/gaming
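As an aside, those two readings let you back-calculate the card's 100% power target; a quick sketch, assuming the wattage and percentage readings refer to the same power-limit scale:

```python
# Back-calculating the WindForce OC's base power target from the
# readings in the post above (138.8% of the limit at 360.8 W drawn).
reported_watts = 360.8
reported_pct = 138.8

base_watts = reported_watts / (reported_pct / 100)
print(f"base power target ~ {base_watts:.0f} W")  # ~ 260 W
```

A ~260 W base with a 140% ceiling would put the absolute cap around 364 W, which matches where the card tops out.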


----------



## Zammin

kot0005 said:


> Says pre order. I have one ready to be picked up from pccg already. But i got an e mail from ple today saying my xc ultra will arrive today lol.. I will check if pccg can hold it till monday for me. If not It should go back into stock.


yeah they changed it to pre order some time after I posted it 

Whaaaat I've been watching PCCG and haven't seen anything come in stock? And PLE haven't contacted me about my card :/ maybe I should call them again..


----------



## Zammin

kot0005 said:


> Says pre order. I have one ready to be picked up from pccg already. But i got an e mail from ple today saying my xc ultra will arrive today lol.. I will check if pccg can hold it till monday for me. If not It should go back into stock.





Zammin said:


> yeah they changed it to pre order some time after I posted it
> 
> Whaaaat I've been watching PCCG and haven't seen anything come in stock? And PLE haven't contacted me about my card :/ maybe I should call them again..


Ok I just called them and they said they have no idea if they are getting stock today and they haven't heard anything from EVGA or their supplier... They told me I was up the front of the queue for the first lot to come in as well so idk what the heck is going on with them.


----------



## Djreversal

Jpmboy said:


> check my post earlier today for the commands... and what exact commands are you using?


I did:

nvflash64 --index=0 -6 (rom file), which worked on the Asus card.

Did the same with --index=1 and it says "software write protection enabled, unable to program EEPROM".


----------



## Jpmboy

Djreversal said:


> i did
> 
> nvflash64 --index=0 -6 (rom file) that worked on the asus card
> 
> did the index=1 same stuff and it says software write protection enabled, unable to program eeprom



what brand is the other card? And is it one of the non-A chips discussed in this thread?


nvflash64 -i1 --protectoff


then flash. Should work.


----------



## Djreversal

Jpmboy said:


> what brand is the other card? And is it one of the non-A chips discussed in this thread?
> 
> 
> nvflash64 -i1 --protectoff
> 
> 
> then flash. should work :worriedsm




Boom, you are the man... it just took. It's a Zotac AMP card, and the other was an Asus OC card.


----------



## Djreversal

My benchmarks seem quite slow; not sure why. I don't see any throttling going on: both GPUs are holding 2050 MHz, and temps are at 31C or less the entire time. Not sure if there are settings I have to tweak, but SLI is enabled (and shows as enabled as well), yet my scores are slower than a single 2080ti in the benchmarks.


----------



## Jbravo33

Blocks on and changed the loop. Snagged the top spot in Firestrike ultra (2cards). Will see if a bios flash can get me higher this weekend xD. Also tried shunting with LM. Just threw it in safe mode 300Mhz. Wahhh

https://www.3dmark.com/hall-of-fame-2/fire+strike+3dmark+score+ultra+preset/version+1.1/2+gpu


----------



## Zammin

Jbravo33 said:


> Blocks on and changed the loop. Snagged the top spot in Firestrike ultra (2cards). Will see if a bios flash can get me higher this weekend xD. Also tried shunting with LM. Just threw it in safe mode 300Mhz. Wahhh
> 
> https://www.3dmark.com/hall-of-fame-2/fire+strike+3dmark+score+ultra+preset/version+1.1/2+gpu


killer setup man. first time I've seen the factory FE back plates used with a waterblock. Looks like it would fit completely flush without the end piece, but then of course you would lose the nice lighting.


----------



## Madness11

Any users of the Zotac AMP 2080ti? Any issues with VRM or memory? The AMP is the FE reference board, right? I'm scared; a lot of people have had bad experiences with FE cards (((


----------



## jepz

Madness11 said:


> Any users of zotac amp 2080ti ??? Have any issue with vrm or memory ??? Amp it's the Fe version right?? I'm scare , a lot of ppl have bad experience with Fe card (((


Yep, I have a 2080 Ti AMP and it's awesome.

No issues at all, cooler is very good, at 100% fans (about 3700 rpm) gives me 54ºC in full load, at normal speed (about 2300rpm) gives me 63ºC, also a good overclock in GPU (@2175-2190MHz) and Mem [email protected], using MSI afterburner 4.6.0 Beta 9, mem slider is at the max, incredible card.

2190MHz at Fire Strike Ultra: https://www.3dmark.com/fs/16775412

Some pics of the card:


----------



## toncij

So far only the PNY card is non-A chip, right? (and Asus blower cooler model?)


----------



## webmi

Since all cards are on the NV reference board anyway, even the Asus blower (Asus Turbo) is A. Haven't seen a single non-A so far.


----------



## octiny

toncij said:


> So far only the PNY card is non-A chip, right? (and Asus blower cooler model?)


Asus Turbo is A binned. Confirmed from another user in this thread, and can confirm myself as I have one.


----------



## Zurv

Madness11 said:


> Any users of zotac amp 2080ti ??? Have any issue with vrm or memory ??? Amp it's the Fe version right?? I'm scare , a lot of ppl have bad experience with Fe card (((


Don't stress about the reference models (including FE) - almost all cards out there are reference, with few custom cards. Of course, if people are going to have problems, they are going to have them with the bulk of the cards out there.
If you are worried, then get it from a vendor that RMAs stuff fast, or a store like Amazon that lets you return it without any issues (unlike Newegg, which is a return nightmare).


----------



## toncij

webmi said:


> since all cards are nv ref board anyqay, even the asus blower (asus turbo) is A, havent seen a single non-A so far.


Yes, Palit DUAL is a non-A as it seems:

https://forums.overclockers.ru/viewtopic.php?p=15844665#p15844665
https://www.overclock.net/forum/69-...tx-2080-ti-owner-s-club-269.html#post27684548

Asus Turbo is A, but is the Asus Dual too? Presumably so, also A.

Judging by what I see here: PNY XLR8 Gaming Overclock should be an A too.


----------



## Okt00

Installed my replacement 2080 Ti last night and ran through about 20 Fire Strike runs; 3DMark doesn't seem to hit the same frequencies I get in real-world gaming...

+123% Power, +100mv Voltage, +0 Core, 60% Fans, 1950MHz
+120% Power, +100mv Voltage, Auto Curve, 60% Fans, 2085MHz
+123% Power, +100mv Voltage, Adjusted Curve, 60% Fans, 2130MHz
+123% Power, +100mv Voltage, Adjusted Curve, +850 Memory, 60% Fans, 2100MHz (2170MHz in BF1)

Last night running 3dMark, I hit a boost of 2100MHz, consistent. Then I fire up BF1 and play for 20 minutes, I get a max boost of 2170MHz. 

Tried +1000MHz on the memory; instead of causing artifacts the normal way, I was greeted with lens flare sprites drawn everywhere... it was a wild ride. Wow. Down to +850 for now.

Also, as I was tweaking the V/F curve around 1085mv, every time I cranked things up I seemed to get lower and lower max voltages each run, instead of the extra top-end frequency I thought the adjustment would provide.

At least I have a working card for the weekend so I can put it through its paces. Just installed 666GB of games onto a new 1TB Samsung EVO. Man, those are cheap these days! I paid more than double that for my OCZ Vertex 128GB SSD! (Witcher 3, BF1, Battlefront II, GTA V, DOOM (2016), Kingdom Come, Diablo 3, ARMA 3, DayZ... some others too, too much to play until BFV releases.)


----------



## GosuPl

My second 2080Ti FE has just died... These are time bombs. I did not expect such a ******* launch from Nvidia...


----------



## dante`afk

damn

bios changed? oc? how long did it run?


----------



## Zfast4y0u

GosuPl said:


> My second 2080Ti FE has just died... These are time bombs. I did not expect such a ******* launch from Nvidia...


What happened? Seems the FEs are still a thing to avoid :/


----------



## GosuPl

No bios change; OC at 2085/15400 for benches with 100% rpm on the fans, and stock for gaming (until LC).

The GDDR6 died: lots of artifacts and crashes.


----------



## mr2cam

Makes me scared. I have my FE on water, but it's still a bit scary.


----------



## Jbravo33

Zammin said:


> killer setup man. first time I've seen the factory FE back plates used with a waterblock. Looks like it would fit completely flush without the end piece, but then of course you would lose the nice lighting.


Thanks. It’s not the greatest fit but I like the look of stock plates over nickel. Don’t want it to look too custom. I preferred the founders on air to be the cleanest, but having a silent pc is a must. My cable box is louder than this monstrosity. : )


----------



## GosuPl

Jbravo33 said:


> Thanks. It’s not the greatest fit but I like the look of stock plates over nickel. Don’t want it to look too custom. I preferred the founders on air to be the cleanest, but having a silent pc is a must. My cable box is louder than this monstrosity. : )


No problems installing the EK Vectors? My friend has an ASUS RTX 2080Ti Dual OC and had problems with this block: 3x reinstalls and a lot of "fun" with screws of different lengths ;-)


----------



## jabtn2

GosuPl said:


> Jbravo33 said:
> 
> 
> 
> Thanks. It’s not the greatest fit but I like the look of stock plates over nickel. Don’t want it to look too custom. I preferred the founders on air to be the cleanest, but having a silent pc is a must. My cable box is louder than this monstrosity. : )
> 
> 
> 
> No problems installing the EK Vectors? My friend has an ASUS RTX 2080Ti Dual OC and had problems with this block: 3x reinstalls and a lot of "fun" with screws of different lengths ;-)
Click to expand...

I have EK Vector blocks on my Tis. I always just use the shortest screws with EK GPU kits; the longer screws will bottom out and get stuck in the threaded standoffs on the block. Also, to me the EK nickel backplates are nice.


----------



## ENTERPRISE

While Nvidia uses a reference PCB, how come it seems to be only the FE cards dying rather than other AIB cards? If it were a temperature thing, I would have thought it would be an obvious issue, but nobody seems to be reporting high temps on the FE. It looks to be a GDDR6 issue, but then one would think it would affect a much wider range of cards than just the FEs.


My cards do not become available until the 30th of November (MSI Sea Hawk X 2080Ti). Hoping I do not have any issues with them.


----------



## Esenel

*@Owners of Dying FE Models*

Hi owners of a dying FE.

Can all of you give more feedback on how, after what time, and under which conditions this happened?

Died after how many days after installing:
Died after how many days after first indication:
Air / Water(Block):
Stock/OC:
Description of first indication:
Description/Screenshot of damage:
Batchnumber:
Feedback of Nvidia:

I hope this is ok for you.
As I do own one myself, I am very curious.

Thanks a lot!
Cheers


----------



## Jpmboy

Jbravo33 said:


> Thanks. It’s not the greatest fit but I like the look of stock plates over nickel. Don’t want it to look too custom. I preferred the founders on air to be the cleanest, but having a silent pc is a must. My cable box is louder than this monstrosity. : )


Two-sided tape to lay the OEM backplates on the EK mount? I don't see any screws in there.


----------



## Okt00

ENTERPRISE said:


> While the Nvidia uses a reference PCB, how come it seems to only be the FE cards dying over other AIB cards ? If it was a temperature thing I would have thought it would be an obvious issue but nobody seems to be indicating high temps being an issue on the FE. Looks to be a GDDR6 issue, but again if it is a GDDR6 issue one would think it would be affecting a much wider range of cards than just the FE's
> 
> 
> My cards do not become available until the 30th November (MSI Sea Hawk X 2080Ti). hoping I do not have any issues with them.


Is Micron the only supplier of the GDDR6 modules? For both AIB and Nvidia cards?


----------



## anticommon

Hi everyone I am feeling pretty dumb so I thought I would sell my 1080 to hydrocopper and snag a 2080 ti to put a block on. I know I'm after a reference design pcb as that seems to be the easiest to get a block for, and I was leaning towards the XC Ultra as I have heard that has an increased power limit.... *but* now I am thinking I could get a founders for $1200 (I think without tax) or an xc ultra for $1370 after tax. Another $210 for the block and backplate on the FE but only $160 for a hydrocopper block. So my totals are looking like either $1410 or $1530. The big question is whether the FE cards play nice with custom power limit bios or if I should be looking at any number of other cards that can take a custom bios and/or have that sweet 140% PL. 

I just kinda wish the XC black for $999 would actually be for sale at EVGA ( 

Thoughts?


----------



## Jpmboy

anticommon said:


> Hi everyone I am feeling pretty dumb so I thought I would sell my 1080 to hydrocopper and snag a 2080 ti to put a block on. I know I'm after a reference design pcb as that seems to be the easiest to get a block for, and I was leaning towards the XC Ultra as I have heard that has an increased power limit.... *but* now I am thinking I could get a founders for $1200 (I think without tax) or an xc ultra for $1370 after tax. Another $210 for the block and backplate on the FE but only $160 for a hydrocopper block. So my totals are looking like either $1410 or $1530. The big question is whether the FE cards play nice with custom power limit bios or if I should be looking at any number of other cards that can take a custom bios and/or have that sweet 140% PL.
> 
> I just kinda wish the XC black for $999 would actually be for sale at EVGA (
> 
> Thoughts?


You can use the stock backplate with the Bitspower block; looks good. Running my 2080Ti with this block, the core temperature is a constant 10-11C above the loop temperature, which is as good as any. :thumb:


----------



## anticommon

Jpmboy said:


> You can use the stock backplate with the Bitspower block; looks good. Running my 2080Ti with this block, the core temperature is a constant 10-11C above the loop temperature, which is as good as any. :thumb:


I like the look of the nickel backplate though. Have you seen any discernible performance differences going from air to water, apart from just temperature?

Still curious about custom bios and power limits on the FE cards.


----------



## amano74

anticommon said:


> I like the look of the nickel backplate though. Have you seen any discernable performance differences going from air to water apart from just temperature?
> 
> Still curious about custom bios and power limit if the Fe cards


I did some tests in Time Spy:

Original bios, power limit 115% (260-300W), max OC core + vram: 15800 graphics score

Galax bios, power limit 126% (300-380W), max OC core + vram: 16380 graphics score

Less than 4% improvement for 25% more power draw…


Some have seen smaller improvements because they had a better GPU than mine out of the box.


----------



## boi801

*This is my MOD. There are many like it, but this one is mine.*

This is my MOD. There are many like it, but this one is mine.

Mod power limit on a MSI RTX 2070;

Find the shunt resistor
Do that
Apply it there

And it works!!!! LOL, the power limit never reads 90%.
Now this mod needs a name...
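For the curious, the reason a shunt mod like this fools the power limiter can be sketched in a few lines of Python. The resistor values below are illustrative assumptions, not measurements from this card: the controller infers current (and thus board power) from the voltage drop across a tiny sense shunt, so soldering a second resistor in parallel lowers the drop it sees.

```python
# Why a shunt mod fools the power limiter: the controller measures the voltage
# drop across a small sense resistor to infer current and board power.
# Paralleling an equal resistor on top halves the resistance, so the sensed
# drop, and therefore the reported power, shrink accordingly.
# NOTE: the 5 mOhm value is an illustrative assumption, not from this card.

def parallel(r1_mohm: float, r2_mohm: float) -> float:
    """Equivalent resistance of two resistors in parallel, in milliohms."""
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

original_shunt = 5.0                          # assumed sense-shunt value (mOhm)
modded_shunt = parallel(original_shunt, 5.0)  # equal resistor soldered on top

# Reported power scales linearly with the sensed resistance:
reported_fraction = modded_shunt / original_shunt

print(f"modded shunt: {modded_shunt} mOhm")
print(f"card now reports {reported_fraction:.0%} of its true power draw")
```

In other words, with an equal-value resistor on top, a card really pulling 260W would report only about 130W, which is why the power-limit reading stops climbing.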


----------



## amano74

boi801 said:


> This is my MOD. There are many like it, but this one is mine.
> 
> Mod power limit on a MSI RTX 2070;
> 
> Find the shunt resistor
> Do that
> Apply it there
> 
> And it Works!!!! LOL power limit never reads 90%
> Now this mod need a name...



lol, nice mod man!


----------



## anticommon

amano74 said:


> i did some test on timespy :
> 
> original bios, power limit 115% (260-300w) max oc core + vram : 15800 graphics score
> 
> galax bios, power limit 126% (300-380w) max oc core + vram : 16380 graphics score
> 
> less than 4% improvements for 25% more power draw…
> 
> 
> some have seen less improvements because they had better gpu than mine out of box..


Something seems off when a 9% increase in PL results in 80 more watts pulled from the card. A 4% performance increase doesn't seem like much, but then again, if you're chasing the pinnacle of performance, 4% might be par.

I've also been looking into these FE cards, and there are posts from just hours ago about how many of them have died after 2-3 weeks. Makes me think I should stick to aftermarket.





boi801 said:


> This is my MOD. There are many like it, but this one is mine.
> 
> Mod power limit on a MSI RTX 2070;
> 
> Find the shunt resistor
> Do that
> Apply it there
> 
> And it Works!!!! LOL power limit never reads 90%
> Now this mod need a name...



lmao, this is great. I should have tried something like this with my 1080 Ti; the bios makes up for that, I guess. I would be worried that it might slip off and short something that should not be shorted...


----------



## amano74

No, it's the 260W bios with PL +15% = 300W, versus the 300W bios with PL +26% = 380W.


That means you let the card draw 80W more power to gain 3-4% performance.


GPU Boost already sits near the best balance between performance and power, and near the max the card can do.
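The arithmetic above can be checked in a few lines (a throwaway sketch using only the wattages and Time Spy scores already quoted in this exchange):

```python
# Power-limit arithmetic from this exchange: a bios carries a base board-power
# target, and the PL slider scales it by a percentage.

def board_power(base_watts: float, pl_percent: float) -> float:
    """Board power target (W) at a given power-limit slider setting."""
    return base_watts * pl_percent / 100.0

stock = board_power(260, 115)   # stock 260 W bios, slider at +15%
galax = board_power(300, 126)   # 300 W Galax bios, slider at +26%

extra_watts = galax - stock
# Time Spy graphics scores quoted earlier in the thread:
perf_gain = (16380 - 15800) / 15800 * 100

print(f"stock: {stock:.0f} W, galax: {galax:.0f} W (+{extra_watts:.0f} W)")
print(f"performance gain: {perf_gain:.1f} %")
```

Roughly 80W of extra draw for a sub-4% score gain, which matches the point about GPU Boost already sitting near the perf/power sweet spot.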


----------



## Rob w

Anyone know what is going on here?
On my 2080 Ti FE I ran the scanner and then tweaked the settings:
Set power to max
Voltage +80
Core +171
Memory +683
Max temps 63deg
Time spy graphics score 12261 ( clock @ 2115, mem @1921)
Did a bit more tweaking and the score dropped to 7809?
Re-ran the scanner, and even at stock the score is still low; FPS seems to be half of previous runs (sections of the test that ran at 106fps now run at 53fps).
Any ideas?


----------



## amano74

Rob w said:


> Anyone know what is going on here?
> On my 2080 ti fe I ran the scanner and then tweeked the settings,
> Set power to max
> Voltage +80
> Core +171
> Memory +683
> Max temps 63deg
> Time spy graphics score 12261 ( clock @ 2115, mem @1921)
> Did a bit more tweeking and score dropped to 7809 ?
> Re ran scanner and even at stock score is still low, FPS seems to be half of previous runs ( where sections of test ran at 106fps it is now 53fps)
> Any ideas?


At stock settings your GPU should score around 15000 graphics points in Time Spy.

And what is "the scanner"? Are you using MSI Afterburner or EVGA X1 to OC and monitor your clocks while benching?


----------



## Djreversal

https://www.3dmark.com/fs/16848888 got this score testing.. not sure how good it is yet... need to look around and see what others are getting.


Picture of the rig.

2990wx with dual 2080ti's in SLI

https://i.imgur.com/ebqyuXc.jpg


----------



## boi801

anticommon said:


> lmao this is great I should have tried something like this with my 1080 Ti. Bios makes up for that I guess. I would be worried that it might slip off and short something that should not be shorted...


mine is non-A....


----------



## Jpmboy

Djreversal said:


> https://www.3dmark.com/fs/16848888 got this score testing.. not sure how good it is yet... need to look around and see what others are getting.
> 
> 
> Picture of the rig.
> 
> 2990wx with dual 2080ti's in SLI
> 
> https://i.imgur.com/ebqyuXc.jpg


You can just look *here* to compare.


----------



## amano74

Djreversal said:


> https://www.3dmark.com/fs/16848888 got this score testing.. not sure how good it is yet... need to look around and see what others are getting.
> 
> 
> Picture of the rig.
> 
> 2990wx with dual 2080ti's in SLI
> 
> https://i.imgur.com/ebqyuXc.jpg



Seems your SLI is not scaling well,

because I get almost 17000 graphics points with one card in Fire Strike Extreme at stock: https://www.3dmark.com/3dm/29786975


----------



## GosuPl

Djreversal said:


> https://www.3dmark.com/fs/16848888 got this score testing.. not sure how good it is yet... need to look around and see what others are getting.
> 
> 
> Picture of the rig.
> 
> 2990wx with dual 2080ti's in SLI
> 
> https://i.imgur.com/ebqyuXc.jpg


https://www.3dmark.com/compare/fs/16848888/fs/16619989

Massive bottleneck with this AMD CPU.


----------



## Zurv

Djreversal said:


> https://www.3dmark.com/fs/16848888 got this score testing.. not sure how good it is yet... need to look around and see what others are getting.
> 
> 
> Picture of the rig.
> 
> 2990wx with dual 2080ti's in SLI
> 
> https://i.imgur.com/ebqyuXc.jpg


Fire Strike is partly a CPU test. But looking at just your graphics score, it is oddly low.
I got a graphics score of 33,569;
you got a graphics score of 20,749.

Are you just running stock speeds?
My OC (when in SLI and in 3DMark tests) is like 2070 (*cry*)
(in Afterburner: +130 core, +600 mem).

Your RAM is pretty slow. Is that a limit of the Threadripper?


----------



## anticommon

GosuPl said:


> https://www.3dmark.com/compare/fs/16848888/fs/16619989
> 
> Massive bottleneck with this AMD CPU.


The 2990WX is slower than the 2950X on the TR side of things when it comes to straight gaming. By how much, I don't know though.


Edit: Also, I bit the bullet and have a 2080 Ti XC Ultra on the way. I was thinking of snagging a Vector waterblock and the nickel backplate; the EVGA plate looks nice, but I'd want it switched out if there's no RGB on it (yeah, I'm that guy who likes the RGB, okay).


----------



## xer0h0ur

boi801 said:


> This is my MOD. There are many like it, but this one is mine.
> 
> Mod power limit on a MSI RTX 2070;
> 
> Find the shunt resistor
> Do that
> Apply it there
> 
> And it Works!!!! LOL power limit never reads 90%
> Now this mod need a name...


I dub thee

Señor Clamp


----------



## GosuPl

anticommon said:


> 2990x < 2950x on the TR side of things when it comes to straight gaming. By how much I don't know though.
> 
> 
> Edit: Also, I bit the bullet and have a 2080 ti xc ultra on the way. Was thinking of snagging a vector WB and the nickel backplate, although the EVGA plate looks nice, I think I would want it switched out if there is no RGB on it though (yeah I'm that guy who likes the RGB okay)


All AMD CPUs are very bad for SLI setups, even HEDT AMD.


----------



## Jpmboy

Djreversal said:


> boom you are the man.. it just took... its a Zotac AMP card.. and the other was an ASUS OC card


Cool, at least that's a soft "protection" against cross-flashing. :thumb:


anticommon said:


> I like the look of the nickel backplate though. Have you seen any discernable performance differences going from air to water apart from just temperature?
> 
> Still curious about custom bios and power limit if the Fe cards


Lower temps mean higher clocks... as always. There are temperature-induced clock bin drops (of 13 MHz each; the bios is coded in these steps) at temperatures beginning at 30C.


boi801 said:


> This is my MOD. There are many like it, but this one is mine.
> 
> Mod power limit on a MSI RTX 2070;
> 
> Find the shunt resistor
> Do that
> Apply it there
> 
> And it Works!!!! LOL power limit never reads 90%
> Now this mod need a name...


nice! very... Darwinian.


(I'll stay out of the bathroom  )


----------



## Zammin

anticommon said:


> Hi everyone I am feeling pretty dumb so I thought I would sell my 1080 to hydrocopper and snag a 2080 ti to put a block on. I know I'm after a reference design pcb as that seems to be the easiest to get a block for, and I was leaning towards the XC Ultra as I have heard that has an increased power limit.... *but* now I am thinking I could get a founders for $1200 (I think without tax) or an xc ultra for $1370 after tax. Another $210 for the block and backplate on the FE but only $160 for a hydrocopper block. So my totals are looking like either $1410 or $1530. The big question is whether the FE cards play nice with custom power limit bios or if I should be looking at any number of other cards that can take a custom bios and/or have that sweet 140% PL.
> 
> I just kinda wish the XC black for $999 would actually be for sale at EVGA (
> 
> Thoughts?





anticommon said:


> Also, I bit the bullet and have a 2080 ti xc ultra on the way. Was thinking of snagging a vector WB and the nickel backplate, although the EVGA plate looks nice, I think I would want it switched out if there is no RGB on it though (yeah I'm that guy who likes the RGB okay)


If you're blocking the card the regular XC is a better deal as they are the same card except the XC Ultra has the huge air cooler and a 3-slot bracket for more $$$. Either way you'll be fine though, just one costs more. I'm surprised you were able to get one so quickly.. I ordered mine last month and it's still nowhere to be seen....


----------



## Zammin

Update on my order.. PLE Computers did receive some EVGA 2080Ti XCs in yesterday but gave them all to other people even though they told me a couple weeks ago that I was up the front of the queue... I'm so bloody frustrated. It took them that long just to get a few in and I didn't even get one despite having "pre-ordered" with them last month..

They reckon I'm now "second" in the queue but they still have no idea when stock would be arriving.. Their supplier didn't even tell them about the batch that came in yesterday and they aren't getting any ETAs at all. They said most of the manufacturers are prioritising the US, and so hardly any are coming to Australia.. I could be waiting until the end of the bloody year now for all I know now..


----------



## Ownedj00

Zammin said:


> Update on my order.. PLE Computers did receive some EVGA 2080Ti XCs in yesterday but gave them all to other people even though they told me a couple weeks ago that I was up the front of the queue... I'm so bloody frustrated. It took them that long just to get a few in and I didn't even get one despite having "pre-ordered" with them last month..
> 
> They reckon I'm now "second" in the queue but they still have no idea when stock would be arriving.. Their supplier didn't even tell them about the batch that came in yesterday and they aren't getting any ETAs at all. They said most of the manufacturers are prioritising the US, and so hardly any are coming to Australia.. I could be waiting until the end of the bloody year now for all I know now..



Go elsewhere mate, PLE did this to me with the 8700K so i just watched PCCG like a hawk and the next day i got one without pre ordering.


----------



## kot0005

Zammin said:


> Update on my order.. PLE Computers did receive some EVGA 2080Ti XCs in yesterday but gave them all to other people even though they told me a couple weeks ago that I was up the front of the queue... I'm so bloody frustrated. It took them that long just to get a few in and I didn't even get one despite having "pre-ordered" with them last month..
> 
> They reckon I'm now "second" in the queue but they still have no idea when stock would be arriving.. Their supplier didn't even tell them about the batch that came in yesterday and they aren't getting any ETAs at all. They said most of the manufacturers are prioritising the US, and so hardly any are coming to Australia.. I could be waiting until the end of the bloody year now for all I know now..


Picked up my XC Ultra card!!!! Got 2140 MHz with OC Scanner: 230W, +176 MHz.


----------



## Zammin

Ownedj00 said:


> Go elsewhere mate, PLE did this to me with the 8700K so i just watched PCCG like a hawk and the next day i got one without pre ordering.


I would definitely go elsewhere but hardly anywhere sells the EVGA card locally, and generally nowhere has any stock of 2080Tis at all. I have been checking all the other sellers daily but all that has come up is the Gigabyte OC from PC Case Gear which won't work with my waterblock unless I damage the fan connector, and a very overpriced ASUS Turbo from Mwave, both of which are sold out again.

If something suitable pops up in the meantime I will definitely snag it and cancel my order with PLE, but I need to hold onto it for now in case they actually come through.. I'm just so frustrated that their staff all tell me different things. Two weeks ago one of them told me I was at the front of the queue and my card would be there "Friday at the latest". That didn't happen, and when they finally did get stock yesterday they gave them to other people and said I actually wasn't at the front of the queue. They don't all appear to be on the same page and I feel like I'm getting the run-around.



kot0005 said:


> Picked up my XC Ultra card!!!! Got 2140 MHz with OC Scanner: 230W, +176 MHz.


God damn, I am so jelly. I wonder if I'd have had one by now had I bought the Ultra. I didn't want to pay the extra money for an air cooler I wasn't going to use though. Setup looks killer man, great work. How are the temps under extended full load atm? And what's that cool little display you've got attached to your GPU block?


----------



## kot0005

Zammin said:


> How are the temps under extended full load atm? And what's that cool little display you've got attached to your GPU block?



I haven't really checked; I'm waiting for the big air bubble in the GPU block to go away. But when I used OC Scanner the max it hit was 37C, with around a 26C room temp. It's https://shop.aquacomputer.de/produc...7MVM97JoI72SXsZQpVGY9P-bPxhT4Meq5jItqmEUTL-Sc


I made a casing for it and 3D printed/painted it. https://www.thingiverse.com/thing:3...3eNEpQLDui4J2PvP3GQmJQXzNW0zmBa5cUiXCYF6UzWWs Aquacomputer uses these displays in their blocks and sells the touch versions.


----------



## Rob w

amano74 said:


> At stock settings your GPU should score around 15000 graphics points in Time Spy.
> 
> And what is "the scanner"? Are you using MSI Afterburner or EVGA X1 to OC and monitor your clocks while benching?


Using msi afterburner to monitor,
Will try again today. I'm wondering if it's locking to 300 MHz during the test; I'll use the OSD and check, but it's weird suddenly getting only half the score and FPS.


----------



## Zammin

kot0005 said:


> I haven't really checked; I'm waiting for the big air bubble in the GPU block to go away. But when I used OC Scanner the max it hit was 37C, with around a 26C room temp. It's https://shop.aquacomputer.de/produc...7MVM97JoI72SXsZQpVGY9P-bPxhT4Meq5jItqmEUTL-Sc
> 
> 
> I made a casing for it and 3D printed/painted it. https://www.thingiverse.com/thing:3...3eNEpQLDui4J2PvP3GQmJQXzNW0zmBa5cUiXCYF6UzWWs Aquacomputer uses these displays in their blocks and sells the touch versions.


It looks really neat. Does it require an Aquaero to run?


----------



## toncij

Would be cool if the OP list could include a chip-type column, so we can distinguish between A and non-A cards.


----------



## Vipeax

toncij said:


> Would be cool if the OP list could include a chip-type column, so we can distinguish between A and non-A cards.


Why? This topic is about the RTX2080Ti. They're all overclocking (A) chips.


----------



## Zammin

Vipeax said:


> Why? This topic is about the RTX2080Ti. They're all overclocking (A) chips.


Not all 2080Ti's will have the TU102-300A chip though, only the factory overclocked ones. There are/will be some cheaper ones that have the 300 non-A chip, I believe someone mentioned one of the Palit cards has it. I wouldn't be surprised if one of the Colorful iGame ones does as well.


----------



## jepz

Please add my card to the first page: Zotac 2080Ti AMP.

OC: 2145-2205MHz range GPU / GDDR6 [email protected]

PC: https://valid.x86.fr/2m40a4

Card inside the case:


Benchs:

Time Spy Extreme: https://www.3dmark.com/spy/4756428

Time Spy: https://www.3dmark.com/spy/4782375

Fire Strike Ultra: https://www.3dmark.com/fs/16775412


Cheers


----------



## Vipeax

Zammin said:


> Not all 2080Ti's will have the TU102-300A chip though, only the factory overclocked ones. There are/will be some cheaper ones that have the 300 non-A chip, I believe someone mentioned one of the Palit cards has it. I wouldn't be surprised if one of the Colorful iGame ones does as well.


Doh, seems you're right! 

https://www.techpowerup.com/vgabios/204720/msi-rtx2080ti-11264-180920

This is a "bad" chip device ID. I wish I could get my hands on one for free (an RTX 2070 would be enough; they all have the same EEPROM chip) so that I could work on making the non-A models flashable.
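For anyone checking their own card, the A/non-A split shows up in the PCI device ID reported by GPU-Z or `lspci -nn`. The ID-to-bin pairing below is what public bios databases list for the two 2080 Ti variants, stated here as an assumption rather than something confirmed in this thread:

```python
# Hypothetical helper mapping a TU102 PCI device ID to its bin. The ID/bin
# pairing is an assumption based on public bios-database listings, not on
# anything verified in this thread.

TU102_BINS = {
    "1E04": "TU102-300 (non-A, flashing locked)",
    "1E07": "TU102-300A (A chip, cross-flashable)",
}

def classify(device_id: str) -> str:
    """Return the chip bin for a PCI device ID such as '1E07'."""
    return TU102_BINS.get(device_id.strip().upper(), "unknown TU102 variant")

print(classify("1e07"))
```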


----------



## kot0005

Zammin said:


> It looks really neat. Does it require an Aquaero to run?


I don't think so; it uses HWiNFO sensors and the Aquasuite software. Connect it to the mobo via internal USB. I have an Aquaero as well, but it's not connected to it.


----------



## Zammin

Vipeax said:


> Doh, seems you're right!
> 
> https://www.techpowerup.com/vgabios/204720/msi-rtx2080ti-11264-180920
> 
> This is a "bad" chip device ID. I wish I could get my hands on one for free so that I could work on making the non-A models flashable .


I think it's pretty safe to assume that most of the cards that ship at the default 1545 MHz (the cheapest models of many brands) are likely these 300 non-A chips, with Nvidia separating the good chips from the bad (MSRP?) ones.



kot0005 said:


> I don't think so; it uses HWiNFO sensors and the Aquasuite software. Connect it to the mobo via internal USB. I have an Aquaero as well, but it's not connected to it.


Awesome. Thanks for the links. We have a 3D printer now so I might just copy your style haha.


----------



## xermalk

amano74 said:


> Seems your SLI is not scaling well,
> 
> because I get almost 17000 graphics points with one card in Fire Strike Extreme at stock: https://www.3dmark.com/3dm/29786975


That seems on the low side. I'm getting a much higher GPU score of 18614 with my Gigabyte 2080Ti Gaming OC air-cooled, and it struggles to stay stable at a +120 MHz overclock.

https://www.3dmark.com/fs/16855383


----------



## amano74

xermalk said:


> That seems on the low side. I'm getting a much higher GPU score of 18614 with my Gigabyte 2080Ti Gaming OC air-cooled, and it struggles to stay stable at a +120 MHz overclock.
> 
> https://www.3dmark.com/fs/16855383


Because that was the out-of-the-box graphics score.

I can do better:

max oc firestrike extreme with original 300w bios : https://www.3dmark.com/fs/16856865

max oc timespy with original 300w bios : https://www.3dmark.com/spy/4857965

max oc timespy with galax 380w bios : https://www.3dmark.com/spy/4816496


----------



## Jpmboy

Zammin said:


> I think it's pretty safe to assume that most of the cards that ship at the default 1545mhz (the cheapest models of many brands) are likely to be these 300 non-A chips, with Nvidia separating the good chips from the bad (MSRP?) ones.


All of these chips have the same native boost frequency; the bios changes the max turbo frequency, hence different SKUs use different bios boost tables. There's nothing new here except that the non-A chips have a hardware flag which disables flashing. They have not (yet) been shown to OC any worse. Folks that want to flash to a higher-power bios are SOL. A higher PL does not mean a higher boost frequency.


----------



## xermalk

amano74 said:


> Because that was the out-of-the-box graphics score.
> 
> I can do better:
> 
> max oc firestrike extreme with original 300w bios : https://www.3dmark.com/fs/16856865
> 
> max oc timespy with galax 380w bios : https://www.3dmark.com/spy/4816496


Ah, that explains it  I'm on the 366W bios. I don't see the point of flashing another brand's bios when the Gigabyte one is good enough.
And with an unpowered riser cable, I have no clue how well it would handle the increased PCIe draw.
It's possible even 366W is a bit of overkill for it.


----------



## amano74

xermalk said:


> Ah, that explains it  I'm on the 366W bios. I don't see the point of flashing another brand's bios when the Gigabyte one is good enough.
> And with an unpowered riser cable, I have no clue how well it would handle the increased PCIe draw.
> It's possible even 366W is a bit of overkill for it.



That was the problem I had with the 380W Galax bios:
at max OC, the 750W "62 amps" PSU couldn't handle it, and the PC shut down in Time Spy.


----------



## Jpmboy

Water-cooled FE, stock bios, no mods. Like Pascal and Volta, the temperature/Hz/power channel dominates control of the card's performance. In other words, unless you chill the card, flashing to a higher-PL bios is only incremental at best, with 16 pwms. There are clock bin drops beginning at 30C. I haven't taken the card below the dew point yet (10C on the loop).
You'll do better by locking the card in P0 in all cases.
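The temperature-binning behavior described above can be turned into a toy model. The 13 MHz bin size matches the earlier post in this thread; the breakpoint temperatures below are purely illustrative assumptions, not values read from any bios:

```python
# Toy model of GPU Boost temperature binning as described above: starting
# around 30 C the card sheds one ~13 MHz clock bin per temperature step.
# The breakpoint temperatures and spacing are illustrative assumptions only.

BIN_MHZ = 13
TEMP_BREAKPOINTS_C = [30, 37, 44, 51, 58]  # assumed ~7 C spacing

def boost_clock(max_clock_mhz: int, gpu_temp_c: float) -> int:
    """Effective boost clock after temperature-induced bin drops."""
    bins_dropped = sum(1 for t in TEMP_BREAKPOINTS_C if gpu_temp_c >= t)
    return max_clock_mhz - bins_dropped * BIN_MHZ

print(boost_clock(2100, 25))   # chilled loop: no bins dropped
print(boost_clock(2100, 45))   # air-cooled temps: three bins dropped
```

Which is why a water loop that keeps the core in the low 30s holds on to clock bins an air cooler gives up, independent of any power-limit flashing.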


----------



## methadon36

Finally got my 2080 Ti XC Gaming, 11G-P4-2382-KR, two days ago. I was waiting for the Step-Up process, but I got a notification that model #2382 was in stock, so I jumped on it before it sold out... And two days later I get an e-mail saying the Step-Up card is ready, lol... The Step-Up card is model #2282, so I'm not mad about getting this one. Plus I found a local buyer for my 1080 Ti, so it worked out great! So can I be added to the owners club, please?


----------



## Zfast4y0u

methadon36 said:


> Finally got my 2080 Ti XC GAMING, 11G-P4-2382-KR 2 days ago.. Was waiting for the step up process but I got notification Of Model #2382 in stock so I jumped on it before it got sold out... And 2 days later I get E-mail saying the step up card is ready lol... The step-up card is model #2282 so I'm not mad for getting this one, Plus I got a local buyer of my 1080ti so it worked out great!.. So can I be added to the owners club please!






Your second pic, with that sticky note, reminds me of those *****es who take nude photos on adult websites and write the site name on sticky notes like that, haha.


----------



## methadon36

Zfast4y0u said:


> Your second pic, with that sticky note, reminds me of those *****es who take nude photos on adult websites and write the site name on sticky notes like that, haha.


LOL. I've been doing too much selling on Reddit, so proof pics with notes attached are just second nature now.


----------



## dante`afk

What was the code for A-chips again?









Sent from my iPhone XS Max using Tapatalk


----------



## Fitzcaraldo

The last part, "-A1", means it's revision 1, version A, and therefore a flashable, factory-OC'd chip.


----------



## xer0h0ur

300A means it's a binned die. 300 without the A means it's the lesser die, meant for the MSRP cards that aren't factory overclocked.


----------



## dante`afk

Zurv/Jpmboy, how did you mount the Nvidia backplate with the Bitspower block? Neither the screws from Nvidia (too short) nor the ones from Bitspower (too thick) fit through the backplate.

Sent from my iPhone XS Max using Tapatalk


----------



## Jpmboy

dante`afk said:


> Zurv/jpmboy how did you mount the nvidia backplate with the bitspower block? Neither the screws from nvidia (too short) nor the ones from bitspower (too thick) fit through the backplate?
> 
> Sent from my iPhone XS Max using Tapatalk


Yeah, the Bitspower block uses the NV mount screws and the NV backplate (tiny) screws. The holes that held the OEM GPU (spring) screws get the Torx-head screws included with the block. You do not use the OEM spring screws; the block is held on with the OEM double-duty lugs, and the OEM backplate screws go into those. It's in the instructions (which are really poor, IMO).


----------



## Pepillo

RTX 2080 Ti FE, very happy with it. I was surprised by the temperatures, better than I expected, and the performance is also excellent. I just hope I have no problems with the GDDR6; it's not just the FE models, other brands are having the same flaw. For now I'm on the stock BIOS; I tried the 380W one, but my AX860 power supply can't handle that plus the 7900X, so it's time to replace it.


----------



## looniam

dante`afk said:


> What was the code for A-chips again?
> 
> 
> Sent from my iPhone XS Max using Tapatalk


As mentioned, it's 300A (can be AIB OC'd) or 300 ("reference" clocks only).

No need to pull the cooler off; GPU-Z will report the device ID as 1E07 (A chip) or 1E02 (non-A).
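That check can also be scripted. A minimal sketch: the 1E07/1E02 device IDs are from this thread, while the `nvidia-smi` query (which prints a combined device/vendor ID such as `0x1E0710DE`) is an assumption about your driver install.

```python
import subprocess

# TU102-300A (factory-OC capable) vs TU102-300 device IDs, per GPU-Z
A_CHIP, NON_A_CHIP = "1E07", "1E02"

def chip_bin(device_id: str) -> str:
    """Classify a 2080 Ti by the first 4 hex digits of its PCI device ID."""
    device_id = device_id.upper().removeprefix("0X")[:4]
    if device_id == A_CHIP:
        return "TU102-300A (A chip, AIB OC allowed)"
    if device_id == NON_A_CHIP:
        return "TU102-300 (non-A, reference clocks)"
    return "not a 2080 Ti"

def query_device_id() -> str:
    # nvidia-smi prints e.g. "0x1E0710DE": device ID plus vendor ID 10DE
    return subprocess.run(
        ["nvidia-smi", "--query-gpu=pci.device_id", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
```

On a system with the driver installed, `chip_bin(query_device_id())` should match what GPU-Z reports.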


----------



## Zurv

I'm confused by people just bumping their RAM up to +1000 or more.

Yeah, it isn't going to crash, but it will start having errors, which results in lower memory performance. Might I suggest people find the sweet spot instead of just maxing it out? For my cards it is around +640; past that, my Time Spy score drops as a result of error correction.
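That sweet-spot advice is essentially a sweep: benchmark at each memory offset and keep the offset with the best score, not the highest stable clock. A minimal sketch, where `run_benchmark` and the synthetic `fake_timespy` curve (peaking at +640, like the cards described above) are hypothetical stand-ins for real Time Spy runs:

```python
def find_sweet_spot(run_benchmark, offsets):
    """Return the memory offset that yields the best benchmark score.

    run_benchmark(offset) -> score; past some point GDDR6 error-correction
    retries eat the extra bandwidth and scores fall again.
    """
    best_offset, best_score = None, float("-inf")
    for offset in offsets:
        score = run_benchmark(offset)
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset, best_score

# Synthetic illustration only: score rises with clock but falls past +640
# once error-correction overhead dominates (numbers are made up).
def fake_timespy(offset):
    return 15000 + 2 * offset - 0.1 * max(0, offset - 640) ** 2

offset, score = find_sweet_spot(fake_timespy, range(0, 1001, 80))
```

With the made-up curve, the sweep lands on +640 rather than the +960 maximum, which is the whole point.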


----------



## NewType88

Who has their FTW3? Mine's coming Monday!


----------



## amano74

Pepillo said:


> RTX 2080 TI FE, very happy with it. I was surprised at the temperatures, better than I expected, and their performance is also excellent. I just hope I have no problems with the GDDR6, it's not just the FE models, other brands are also having the same flaw. At the moment with the serial bios, I tried the 380w but my AX860 power supply does not hold that and the 7900X, it touches change.


So my 750W PSU (62A) can't handle the 380W BIOS, and your AX860 PSU (71A) can't handle it either; something weird is going on here...


----------



## amano74

Zurv said:


> i'm confused by people just bump their ram up to 1000+
> 
> Yeah, it isn't going to crash, but it will start having errors which will result is lower ram performance. Might i suggest people find the sweet spot vs just maxing it out. For my cards it is around +640.. after that my timespy score will drop as a result of error correction.



With EVGA Precision X1 I can go to +1300 MHz on the VRAM, but it begins to artifact; at +1375 MHz it will crash.

Between +1000 MHz and +1300 MHz there is no improvement in score for me.


----------



## Pepillo

Zurv said:


> i'm confused by people just bump their ram up to 1000+
> 
> Yeah, it isn't going to crash, but it will start having errors which will result is lower ram performance. Might i suggest people find the sweet spot vs just maxing it out. For my cards it is around +640.. after that my timespy score will drop as a result of error correction.


I know; my previous 1080 Ti couldn't go past +500, but this 2080 Ti, at least in my case, goes up to +1000 without problems while raising the scores. Not all cards are equal; this time I've been lucky.



amano74 said:


> Pepillo said:
> 
> 
> 
> RTX 2080 TI FE, very happy with it. I was surprised at the temperatures, better than I expected, and their performance is also excellent. I just hope I have no problems with the GDDR6, it's not just the FE models, other brands are also having the same flaw. At the moment with the serial bios, I tried the 380w but my AX860 power supply does not hold that and the 7900X, it touches change.
> 
> so, my 750psu watt 62 amps can't handle the 380w bios, ur ax860 psu 71amps can't handle too, there is something weird in it
> 
> 
> 


I have a 7900X at 4.8; for me it's not enough, because the CPU alone swallows more than 300W....


----------



## methadon36

Just slapped a G12 bracket and an H100i v2 cooler on the 2080 Ti, and it still has the backplate attached. Running the Superposition bench I get 56°C, and in Heaven I get 52°C. I had to use the AMD brackets from the G12 kit, and the AMD kit screws for the AIO, to hold the bracket down to the GPU. It's making flush contact with the die, so I will keep it on for now.


----------



## xer0h0ur

Zurv said:


> i'm confused by people just bump their ram up to 1000+
> 
> Yeah, it isn't going to crash, but it will start having errors which will result is lower ram performance. Might i suggest people find the sweet spot vs just maxing it out. For my cards it is around +640.. after that my timespy score will drop as a result of error correction.


+1000 isn't even the maximum you can overclock the GDDR6 to on some cards; it's just where Afterburner caps the slider. My card does +1000 without breaking a sweat, but it's waterblocked, so the RAM is actively cooled. I have never gotten any crashing or artifacting aside from pushing the GPU past the point of no return. In fact I just run the curve that Afterburner came up with from the OC scanner.

You have at least piqued my curiosity to check what dropping the mem clock comes up with versus +1000.


----------



## dante`afk

Jpmboy said:


> yeah - the bp block uses the NV mount screws and the NV backplate (tiny) screws. THe holes that held the OEM GPU (spring) screws get the torx head screws included with the block. You do not use the OEM "spring" screws. the block is held on with the OEM double-duty lugs. the oem backplate screws go into these. it in the instructions (which are really poor, IMO).


Ah, I see; just googled the instructions... lmao, indeed confusing.

So the screws that came with the block are not used at all?


----------



## callonryan

*2080ti MSI Trio X - Overclock and Stability*

For those with 2080 Tis who want to compare: so far I'm very impressed with the Trio's clocks. Thank you for having this thread.


https://imgur.com/8tQU0ig


----------



## kossiewossie

Djreversal said:


> https://www.3dmark.com/fs/16848888 got this score testing.. not sure how good it is yet... need to look around and see what others are getting.
> 
> 
> Picture of the rig.
> 
> 2990wx with dual 2080ti's in SLI
> 
> https://i.imgur.com/ebqyuXc.jpg


The 2990WX doesn't play very well with 3DMark because of how the dies interact with memory access. AMD has added a few workarounds for gaming on the 2990WX platform: if you install AMD Ryzen Master and enable Game Mode, it will disable the cores that do not have direct memory access, which should massively improve your 3DMark score and overall gaming performance. To revert to 32-core/64-thread mode, just enable Creator Mode in Ryzen Master. AMD is also releasing an update to Ryzen Master at the end of the month for the 2990WX that adds something called Dynamic Mode; more info at https://www.tomshardware.com/news/amd-cpu-threadripper-2970wx-2920x,37895.html


I want to join the party too! Currently working on a new build, I should hopefully be done by Sunday/Monday!


----------



## Garrett1974NL

So I got my 2080 Ti in and slapped a waterblock on... It's the Inno3D one with a 260W x 111% max BIOS. Should I try the 380W Galax one and see what I get?
Depending on the game, the GPU boost varies from 2040 to 2160; that's with +165 in MSI Afterburner, fully stable.
Would I see less fluctuation with the Galax 380W BIOS?


----------



## Esenel

Zurv said:


> i'm confused by people just bump their ram up to 1000+
> 
> Yeah, it isn't going to crash, but it will start having errors which will result is lower ram performance. Might i suggest people find the sweet spot vs just maxing it out. For my cards it is around +640.. after that my timespy score will drop as a result of error correction.


I cannot reproduce your findings.
For me it means more performance.

Scaling from +0 to +1100:

MHz / Graphics Score:
+0 / 15865
+500 / 16170
+750 / 16276
+900 / 16329
+1000 / 16398
+1100 / 16427

Moving from +0 to my daily +1000 results in 3.36% more performance.
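A quick sanity check of the quoted percentages, using only the scores from the table above (throwaway arithmetic, nothing assumed beyond the posted numbers):

```python
# Esenel's Time Spy graphics scores at each memory offset
scores = {0: 15865, 500: 16170, 750: 16276, 900: 16329, 1000: 16398, 1100: 16427}

baseline = scores[0]
# Percent gain over stock at each offset
gains = {off: round((s - baseline) / baseline * 100, 2) for off, s in scores.items()}
# gains[1000] matches the quoted "3.36% more performance"; note that more
# than half of that gain is already in by +500.
```

Running it also shows +1100 only adds about 0.18% over +1000, which is why the last few bins are mostly bragging rights.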


----------



## xer0h0ur

I vaguely remember the problem with AMD CPUs in 3dmark being that its AVX instruction heavy.


----------



## Jbravo33

GosuPl said:


> No problem with installing EK Vectors? My friend have ASUS RTX 2080Ti Dual OC, and have problems with this block. 3x reinstall, and lot of "fun" with screws of different length ;-)


No issues; it was pretty straightforward and quick. I actually made a video and will upload it sometime this weekend. 



Jpmboy said:


> 2-sided tape to lay the OEM backplates on the EK mount? I don;t see any screws in there.,


I used about six factory backplate screws; you can see a couple in the attached pic. I used the four screws supplied by EK only around the die, then used factory nut screws where they aligned with the backplate, as long as they were spread out along the card. It seemed to make good contact, and if it didn't, I can't tell: no matter what I do, I can't get these cards above 44°C, and that's with a 7980 at 4.8 all in one loop, even on 5 GHz 3DMark runs. The big case is paying off. Now if only the G.Skill RGB RAM would sync. xD


----------



## Emmett

Anyone with a Founders card on air able to touch the backplate under a good load?
I have two 120s blowing right on it, and I can barely keep my fingers on it. The core WAS showing 64°C tops, and I say WAS because the driver will no longer load for it.


----------



## Zurv

Esenel said:


> I cannot share your findings.
> For me it means more performance.
> 
> Scaling from +0 to +1100:
> 
> Mhz / Graphics Score:
> +0 / 15865
> +500 / 16170
> +750 / 16276
> +900 / 16329
> +1000 / 16398
> +1100 / 16427
> 
> Moving from +0 to my daily +1000 results in 3.36% more performance.


Interesting, but maybe try it with more useful testing? 1080p isn't pushing anything. Can you please try it at 4K, i.e. Extreme? Some midpoint between +500 and +1000 would have been useful too; clearly +500 isn't going to have any issues. The issue isn't between 0 and +1000: even with error correction, the extra speed should make up for it. +700 or +800 could be faster... or not. But just moving it to the max and assuming that's the best performance might not be correct. It isn't for me, though SLI gets funky with this stuff (i.e., my OCs in both memory and core are lower for SLI than for a single card).


----------



## xer0h0ur

I fail to see how testing at 1080p for memory errors would be any different than testing at 4K. It's not like we are talking about video cards with a small amount of VRAM, which would cause a ton of texture swapping in and out. The only difference would be how much VRAM is in use at 1080p versus 4K.


----------



## Jpmboy

Jbravo33 said:


> No issues was pretty straight forward and quick. Actually made a video will upload sometime this weekend.
> 
> I used about 6 factory backplate screws. You can see a couple in attached pic. I used 4 supplied screws from ek only around the die then used factory nut screws where it aligned with backplate. As long as it was pretty spread out along the card. Seemed to make good contact and if it didn’t I can’t tell as now matter what I do I can’t get these cards to go above 44c and that’s with a 7980 @4.8 all in one loop. Even 5GHz for 3dmark runs. Big case is paying off. Now if only G.Skill RGB ram would sync. xD


Nice! And crazy build... but I think you need more LED fans. (J/king... very nice build. :thumb
Yeah, that RGB RAM is crap. Not sure why, but the same ICs in the non-RGB version behave very differently. 



Zurv said:


> Interesting, but maybe try it on useful testing? 1080p isn't pushing anything. Can you please try it @ 4k? ie, extreme? some mid point between 500 and 1000 would have been useful too. Clearly 500 isn't going to have any issues. The issue isn't be 0 and 1000. Even with error correction the extra speed should make up for it. 700 or 800 could be faster... or not  but just moving it to the max assuming that is the better perf might not be correct. It isn't for me.. but SLI gets funky with stuff (ie, my OCs both in mem and core are lower for SLI than single.)


Seems like superposition 8K gives the ram a good load. But for sure, ram OC on two cards will be different than a single card config - if only for signal alignment issues. That said, the FE here tolerates +1150 on the ram thru everything I've tried thus far (with perf improvements). You are right - there is a bunch of embedded EC for GDDR6. 


xer0h0ur said:


> I fail to see how testing 1080p for memory errors would be any different than testing at 4K. Its not like we are talking about video cards with a small amount of vRAM that would cause a ton of texture swapping in and out of it. The only difference would be how much vRAM is being used at 1080p versus 4K.


it's just the load and block sizes moving thru the ram that changes... IME, vram clocks at 1080P will be different than 4K or 8K at the limit.


----------



## kot0005

First non-A 2080 Ti chip card??

https://wccftech.com/evga-lists-elusive-999-evga-geforce-rtx-2080ti-black-edition/


----------



## xer0h0ur

I thought someone else had already posted that they had a non A Ti. Maybe I am imagining things.


----------



## Jpmboy

Has anyone with a Win10 "ray-ready" release (1809) tried this *ray trace bench*?


----------



## Zammin

Jpmboy said:


> All of these chips have the same native boost frequency, the bios changes the native max turbo freq - hence different SKUs use a different bios boost table. There's nothing new here except that the non-A chips have a hardware flag which disables flashing. They have not (yet) been shown to OC any less. Folks that want to flash to a higher power bios are SOL. A higher PL does not mean a higher boost frequency.


Thanks man, but I'm not sure we are talking about the same thing. I was talking about what is mentioned in the below comment regarding Nvidia's binning process where the TU102-300A chips are supposedly better overclockers than the TU102-300 chips, which is why the 300A's go in the factory OC'd cards and the 300s go in the cheaper non-factory OC'd cards. Were you talking about the A1 (revision number) on the end of the SKU? Sorry if I've misunderstood something.



xer0h0ur said:


> 300A means its a binned die. 300 without the A means its the lesser die meant for the MSRP cards that aren't factory overclocked.


----------



## amano74

Jpmboy said:


> anyone with a win10 "ray-ready" release (1809) tried this *ray trace bench*?


https://www.overclock.net/forum/attachment.php?attachmentid=227552&thumb=1


----------



## kot0005

Zammin said:


> I think it's pretty safe to assume that most of the cards that ship at the default 1545mhz (the cheapest models of many brands) are likely to be these 300 non-A chips, with Nvidia separating the good chips from the bad (MSRP?) ones.
> 
> 
> 
> Awesome. Thanks for the links. We have a 3D printer now so I might just copy your style haha.


you could also try this : https://hackaday.io/project/19018-gnat-stats-tiny-oled-pc-performance-monitor

It costs about the same as buying the Aquacomputer model:

$10 for an Arduino and $10 for the OLED screen from eBay.


----------



## kot0005

xer0h0ur said:


> I thought someone else had already posted that they had a non A Ti. Maybe I am imagining things.


Yes, but it was a 2070 he had, an MSI something.


----------



## Ragnarøk

Do you see a performance gain going from the Nvidia stock BIOS on the 2080 Ti FE to, for example, the "GALAX Reference PCB (2x8-Pin) RTX 2080 Ti 300W x 126% Power Target BIOS (380W)"?


----------



## Nico67

Got keen and built up the WC PC. It idles at 22°C, but that's with it stuck at 1350 MHz; it hits about 32-33°C under load. At a 330W PL and +202 core it maxed at 2175 but bounced around 2115-2145; with a 360W PL it was 2160-2175, but I was seeing some artifacts. Knocked the core back a bit and it's running 2130-2145 with 7499 mem (not keen to burn it out just yet). Up to 13383 in 4K Superposition.

Max VGPU is 1.050, but I haven't maxed that yet; it may help with the core artifacts.


----------



## Esenel

Zurv said:


> Interesting, but maybe try it on useful testing? 1080p isn't pushing anything. Can you please try it @ 4k? ie, extreme? some mid point between 500 and 1000 would have been useful too. Clearly 500 isn't going to have any issues. The issue isn't be 0 and 1000. Even with error correction the extra speed should make up for it. 700 or 800 could be faster... or not  but just moving it to the max assuming that is the better perf might not be correct. It isn't for me.. but SLI gets funky with stuff (ie, my OCs both in mem and core are lower for SLI than single.)


First: Time Spy is 1440p, not 1080p.

"Some mid point between 500 and 1000 would have been useful too"?

=> I tested +750 and +900 as well. Maybe you did not pay attention.

TimeSpy Extreme +0 to +1000:

MHz / Graphics Score
+0 / 7475
+500 / 7665
+750 / 7699
+900 / 7760
+1000 / 7784
+1100 / Failed

Superposition is too time consuming to test :-D
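Zurv asked for midpoints, and the Extreme numbers above show why: the gain per 100 MHz of offset is far from uniform across the sweep. A quick check over the scores posted in the table (no assumptions beyond those numbers):

```python
# Esenel's Time Spy Extreme graphics scores (offset -> score); +1100 failed
points = [(0, 7475), (500, 7665), (750, 7699), (900, 7760), (1000, 7784)]

# Points gained per +100 MHz of memory offset for each step of the sweep
per_100 = [
    round((s2 - s1) / (o2 - o1) * 100, 1)
    for (o1, s1), (o2, s2) in zip(points, points[1:])
]
# The per-step efficiency bounces around rather than falling smoothly,
# which is exactly why sampling midpoints matters before picking a daily clock.
```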


----------



## Vipeax

Zurv said:


> i'm confused by people just bump their ram up to 1000+
> 
> Yeah, it isn't going to crash, but it will start having errors which will result is lower ram performance. Might i suggest people find the sweet spot vs just maxing it out. For my cards it is around +640.. after that my timespy score will drop as a result of error correction.


Just because your card can't run these frequencies doesn't mean nobody else has a card that can. I'm running +1260 and that has been tested in TimeSpy, all Unigine benches, Total War WH2, Tomb Raider (Rise and Shadow), Furmark and Doom. Beyond those numbers I get error correction and at +1314 it starts artifacting.


----------



## toncij

Nico67 said:


> Got keen and built up the WC PC. It idles at 22c, but thats with it stuck at 1350mhz. hits about 32-33c under load. At 330w PL and +202 core it was max 2175 but bouncing around 2115-2145, with 360w PL it was 2160-2175 but I was seeing some artifacts. Knocked the core back a bit and its running 2130-2145 and 7499mem (not keen to burn it out just yet). Up to 13383 4k Supo.
> 
> max vgpu is 1.050, but I have maxed that yet, may help with core artifacts.


What kind of WC is this? :OP


----------



## Spiriva

nvflash --index 0 2080TIGX126.rom --overridesub

Is this the proper way to flash the 2080 Ti FE with the GALAX BIOS, using the modified NVIDIA NVFlash 5.527.0 from post #1 that allows flashing the FE? Or am I missing something?


----------



## Garrett1974NL

Spiriva said:


> nvflash --index 0 2080TIGX126.rom --overridesub
> 
> Is this the proper way to flash the 2080ti FE, with the NVIDIA NVFlash 5.527.0 Modified to allow flashing of FE in post #1 with the GALAX bios ? or am i missing something ?


I believe it's like this:
nvflash64 -6 2080TIGX126.rom
I don't think you need the index parameter unless you have two cards... which is not very likely considering the price 

Did I type anything incorrectly? Correct me if I'm wrong.


----------



## Jpmboy

Zammin said:


> Thanks man, but I'm not sure we are talking about the same thing. I was talking about what is mentioned in the below comment regarding Nvidia's binning process where the TU102-300A chips are supposedly better overclockers than the TU102-300 chips, which is why the 300A's go in the factory OC'd cards and the 300s go in the cheaper non-factory OC'd cards. Were you talking about the A1 (revision number) on the end of the SKU? Sorry if I've misunderstood something.


Yeah, I was posting about the revision. I've not seen any information, except in this thread, regarding the die having a "bin stamp". Etching _after_ testing? :blinksmil


amano74 said:


> https://www.overclock.net/forum/attachment.php?attachmentid=227552&thumb=1


Thanks. As soon as my rig finishes the WU it's on, I'll give it a spin. My TV runs in the low 30s. It really depends on whether your OS has the Windows ray tracing module (you can google it; 1809 October update).


----------



## amano74

I'll post it again, because I made a mistake in the link for the V-Ray bench: *ray trace bench*

That's the max stable OC I was able to get with my rig, on the original (260/300W) VGA BIOS.


https://www.overclock.net/forum/attachment.php?attachmentid=227628&thumb=1


----------



## GosuPl

Vipeax said:


> Just because your card can't run these frequencies doesn't mean nobody else has a card that can. I'm running +1260 and that has been tested in TimeSpy, all Unigine benches, Total War WH2, Tomb Raider (Rise and Shadow), Furmark and Doom. Beyond those numbers I get error correction and at +1314 it starts artifacting.


Soon the VRAM on your GPU will be dead. On my two 2080 Tis at +700 mem, the VRAM died on both. On the GeForce forum you can see plenty of posts about 2080 Ti GDDR6 artifacting, even at stock.


----------



## Spiriva

Garrett1974NL said:


> I believe it's like this:
> nvflash64 -6 2080TIGX126.rom
> I don't think you need the index parameter unless you have 2 cards... which is not very likely considering the price
> 
> Did I type anything incorrect? Then correct me if I'm wrong


It worked fine, Thank you 

The 126% power target, up from 123%, gave me ~35 MHz more on the GPU. On the memory I could max out the slider in MSI Afterburner (+1000), but I read on a few forums that a high clock on the memory can kill the cards?


----------



## xer0h0ur

GosuPl said:


> Soon VRAM on your GPU, will be death. My 2x 2080Ti with +700 mem, both VRAM died. On GF Forum, you can see plenty of posts with 2080Ti GDDR6 artifactings even on stock


Let me guess. You were overclocking on air coolers. Almost everyone here doing any serious overclocking has their card(s) waterblocked.


----------



## Garrett1974NL

xer0h0ur said:


> Let me guess. You were overclocking on air coolers. Almost everyone here doing any serious overclocking has their card(s) waterblocked.


As for myself, I have a core-only GPU block (EK-VGA Supremacy) with max temps of 45 to 47°C, depending on the heating in my room 
I have this card: https://i.imgur.com/QWwZ0v2.jpg
The memory and VRM are cooled by the thick metal plate that Inno3D puts on it, very handy if you want to watercool the core only 
There's also a 120mm 1000rpm fan continuously blowing air on the back of the card. Should I be worried about memory overheating?
I would guess not, but... anyone care to chime in on this?


----------



## Jpmboy

I'm not sure any of the memory failures have anything to do with operating frequencies, all of which are within the AOR of the DRAM ICs. (Though how many failures, really? One user with two cards? IDK; root-cause analysis in that scenario would not first point to the video cards, since many others are not having the problem.)


----------



## Vipeax

GosuPl said:


> Soon VRAM on your GPU, will be death. My 2x 2080Ti with +700 mem, both VRAM died. On GF Forum, you can see plenty of posts with 2080Ti GDDR6 artifactings even on stock


I don't mind the RMA process. I chose a good store for a reason.


----------



## pewpewlazer

xer0h0ur said:


> Let me guess. You were overclocking on air coolers. Almost everyone here doing any serious overclocking has their card(s) waterblocked.


How does your assumption of _"almost everyone here doing any serious overclocking has their card(s) waterblocked"_ have ANYTHING to do with cards dying?


----------



## GTANY

Garrett1974NL said:


> As for myself, I have a core-only GPU block (EK-VGA Supremacy) with max temps of 45 to 47C depending on the heating in my room
> I have this card: https://i.imgur.com/QWwZ0v2.jpg
> So the memory and VRM is being cooled by the thick metal plate that Inno3D puts on it, very handy if you want to watercool the core only
> There's also a 120mm 1000rpm fan continuously blowing air on the back of the card... should I be worried about memory overheating?
> I would guess not but... anyone care to chime in on this?


What Inno3D model did you buy ? I intend to watercool the GPU only. 

And did the GPU block holes line up with the card's? The hole spacing is 51 or 69 mm on the 2080 Ti card, whereas the Supremacy covers 53-58 mm.


----------



## Jpmboy

Spiriva said:


> It worked fine, Thank you
> 
> 126% powertarget up from 123% powertarget gave me ~35mhz more on the gpu. The memory i could max out the slide in Msi Afterburner (+1000) but *i read on a few forums that high clock on the memory can kill the cards* ?


Well, if that is the case, I'm in the experiment... my FE has had the memory at +1000, folding with the P2 lock disabled, for several days. Also benching at +1150 or more, and a little gaming at +1000. It's only been a week, though.


What we have are "rumors" or a few isolated posts from a few users (sometimes the same user in different forums) - right?


----------



## Spiriva

Jpmboy said:


> well if that is the case, i'm in the experiment... my FE has had the memory at +1000 folding with the P2 lock disabled for several days... Also, benching at +1150 or more, and a little gaming at +1000. It's only been a week tho.
> 
> 
> What we have are "rumors" or a few isolated posts from a few users (sometimes the same user in different forums) - right?


Absolutely, that is true. We also don't know how many FE cards Nvidia sold in total, or how many have had (in this example) faulty memory.
Most people whose cards die will write about it somewhere online, while users with normally working cards probably won't write anything at all.

I wonder how Nvidia handles RMAs for cards that have had the cooler removed, for those of us using waterblocks. Will they care? Will they even know, as long as you screw the cooler back on? There were no stickers on the card that would break if you took it apart.


----------



## Rob w

Mine has been running at 2115/1987. I had issues at first with it clocking back, but sorted that. Only running on stock fans and the stock BIOS, as it came out of the box.
I ran the MSI scanner, then tweaked from there. I'll run it for a week or two, then fit the EK block and maybe look at doing a shunt mod if needed.

NV FE model.


----------



## GosuPl

xer0h0ur said:


> Let me guess. You were overclocking on air coolers. Almost everyone here doing any serious overclocking has their card(s) waterblocked.


https://forums.geforce.com/default/topic/1078162/geforce-rtx-20-series/rtx-2080ti-massively-die-/1/

https://forums.geforce.com/default/...20-series/2080-ti-fe-artifacts-crash-bsod-/1/

And lots of similar topics.

2080 Ti (not only FE) VRAM dying is a very serious problem. People have had two or three RMAs and even DOA GPUs. So...


----------



## Jpmboy

amano74 said:


> i will post it again, cause i made a mistake in the link, for the vraybench : *ray trace bench*?
> 
> that's the max stable oc i was able to get with my rig, original (260/300w) vga bios..
> 
> 
> https://www.overclock.net/forum/attachment.php?attachmentid=227628&thumb=1


Watercooled FE. No mods, stock BIOS, Win10 x64 release *1709*... but the question is whether ANYONE with Win10 1809 has an FE to run it. I suspect you are running a Windows version lower than 1809, which supposedly has the ray trace module added, since our GPU scores are very similar.
(The second pic below is with the card locked in P0.)


----------



## Garrett1974NL

GTANY said:


> What Inno3D model did you buy ? I intend to watercool the GPU only.
> 
> And did the GPU block holes fit the card ones ? Indeed, the holes distance is 51 or 69 mm on the 2080 TI card whereas the distance is 53 - 58 mm on the Supremacy.


http://www.inno3d.com/products_detail.php?refid=382
That is the card... The "inner" holes are what I used, with some M2.5 screws and nuts.
Those are the AMD-spacing mounting holes.


----------



## amano74

Jpmboy said:


> watercooled FE. No mods, stock bios, Win10x64 release *1709*... but the issue/thing is whether ANYONE with Win10 1809 has an FE to run. I suspect you are running a windows version lower than 1809 which supposedly has the ray trace module added since our GPU score are very similar.


Yeah, you're right, I have the 1803 version, sorry. I thought I had 1809 because I leave the system on auto update.


----------



## xer0h0ur

pewpewlazer said:


> How does your assumption of _"almost everyone here doing any serious overclocking has their card(s) waterblocked"_ have ANYTHING to do with cards dying?


Triggered much? The assumption we have been making so far about cards dying is that the GDDR6 is having problems. Running a video card permanently overclocked, with no clue how well or how badly the air cooler is cooling the VRAM, doesn't help. Not all of these cards cool the VRAM actively, or even well at all.


----------



## GTANY

Garrett1974NL said:


> http://www.inno3d.com/products_detail.php?refid=382
> That is the card... The "inner" holes is what I used with some m2.,5 screws and nuts
> Those are AMD distance mounting holes


OK, thank you.


----------



## Zemach

V-Ray bench test: WinVer 1803, i7-8086K 5.2/5.0, Vcore 1.296, RAM 4500 CL17-19-19-28 1.5V, VGA RTX 2080 Ti 2145/1950


----------



## Murlocke

After two years of not upgrading anything, I'm finally looking to upgrade to a 9700k and 2080Ti but NEITHER are in stock anywhere. I've had a plugin checking a dozen or so web pages for changes during the past week. Sad times.


----------



## Jpmboy

Zemach said:


> vraybench Test Winver 1803 I7 8086K 5.2/5.0 Vcore 1.296 Ram 4500 CL 17 19 19 28 1.5v Vga RTX 2080Ti 2145/1950


thanks.. :thumb: we need an 1809 run... to see if the ray trace patch does anything in this benchmark.


----------



## jepz

Jpmboy said:


> thanks.. :thumb: we need an 1809 run... to see if the ray trace patch does anything in this benchmark.


I have 1809; I'll test it for you right now.

EDIT: Done. I can go further with the TR4 (@4.1 GHz and DDR4-3600), but let's test the RTX with the 1809 version; apparently it doesn't improve anything over 1803.

V-Ray bench test: WinVer 1809, TR4 1920X 4.0, Vcore 1.4V, RAM 3466 CL18-19-19-39 1.4V, VGA RTX 2080 Ti 2175/1950-2000


----------



## Nico67

toncij said:


> What kind of WC is this? :OP


Triple loop from a 10-gallon tank: the first around the water chiller, the second around the CPU monoblock, and the third around the GPU block. The CPU/GPU in and return lines are what you can see, with the chiller temp set at 20°C.
Works pretty well and is all "tidily" hidden away.


----------



## boi801

for science...
RTX 2070 +150/+800
[email protected]


----------



## mr2cam

I don't think I have seen a bit of artifacting with my FE, wondering if the stock cooler isn't doing a good enough job of cooling the memory modules


----------



## Mike211

Got my EVGA GeForce RTX 2080 Ti FTW3 ULTRA on Monday need one more.


----------



## Jpmboy

jepz said:


> I have 1809, I'll test it for you right now.
> 
> EDIT: Done. I can go further with the TR4 (@4.1GHz and DDR4 3600), but let's test the RTX on the 1809 version; apparently it doesn't improve anything over 1803.
> 
> vraybench Test Winver 1809 TR4 1920X 4.0 Vcore 1.4v Ram 3466 CL 18 19 19 39 1.4v Vga RTX 2080Ti 2175/1950-2000



perfect - thank you (+1 if I could). So it seems that vray does not utilize the windows DX12 ray trace API... that is assuming it is in the distro you have. AFAIK, it should be. 

I have 1809 on two "insider" rigs here, but neither has an RTX card (1080 and 1070Ti)


----------



## anticommon

Zammin said:


> If you're blocking the card the regular XC is a better deal as they are the same card except the XC Ultra has the huge air cooler and a 3-slot bracket for more $$$. Either way you'll be fine though, just one costs more. I'm surprised you were able to get one so quickly.. I ordered mine last month and it's still nowhere to be seen....


They shipped it on Friday and it's due in this week. I got the Ultra for $50 more because it was the only one in stock. Kinda feel bad for people who preordered and still don't see their cards... but then again, it took going to the EVGA page probably 50 times throughout the day before one was in stock.


----------



## GosuPl

In the case of the defective GDDR6, I'm currently waiting for the new cards that should reach me before I send the defective ones back. Since yesterday I've been testing with the memory clocked 300 MHz below stock to see whether artifacts still appear, because above that value it artifacts like hell. And what turns out? After about 40 minutes of playing Shadow of the Tomb Raider, the artifacts appear again. It seems these memory chips are very sensitive to temperature, and the degradation continues over time. After the cards cool down the artifacts stop, but of course still at the -300 offset.

I check the cards with fans set to 100%.


----------



## exploiteddna

Rob w said:


> mine has been running at 2115/1987 had issues first with it clocking back but sorted that, only running on stock fans/ stock bios as is out of the box.
> ran msi scanner then tweaked from there, will run it for a week or two then fit the ek block and maybe look at doing a shunt mod if needed.
> 
> nv fe model.


What's "MSI scanner"? I've seen/used the scanner in Precision X1... didn't know MSI had one too. Is it part of Afterburner?




Mike211 said:


> Got my EVGA GeForce RTX 2080 Ti FTW3 ULTRA on Monday need one more.


10 year warranty?



anticommon said:


> but then again it took going to the EVGA page probably 50 times throughout the day before one was in stock.


only 50?

.


----------



## NewType88

Mike211 said:


> Got my EVGA GeForce RTX 2080 Ti FTW3 ULTRA on Monday need one more.


How are your memory temps and overall temps? Users are reporting 2 memory modules being like 30°C hotter than the rest.


----------



## exploiteddna

Murlocke said:


> After two years of not upgrading anything, I'm finally looking to upgrade to a 9700k and 2080Ti but NEITHER are in stock anywhere. I've had a plugin checking a dozen or so web pages for changes during the past week. Sad times.


I've got it set up so that I receive push notifications on my phone as soon as a card comes in stock. The way I configure it differs by vendor. For Amazon and Newegg I use nowinstock.net and configure mobile notifications. For other sites, like EVGA, when the page asks for an email address to notify when it's in stock, I enter my mobile phone number with the email-to-SMS gateway for my carrier (Verizon). So I enter <phone-number>@vtext.com; the site treats it like an email address, and when the card comes in stock the notification arrives on my phone as an SMS, alerting me immediately. These are the common mobile gateways for US carriers:
Verizon: <number>@vtext.com
AT&T: <number>@txt.att.net
Sprint: <number>@messaging.sprintpcs.com
T-Mobile USA: <number>@tmomail.net

So this is how I do it. This is how I got my 2080 Ti, and I've had the opportunity to buy several of them. Just the other day I got an EVGA XC Ultra from Newegg for $1249. I ended up cancelling the order before it shipped, but I had it (then they raised the price to $1299 the next day). But I digress.

This is an easy method for making sure you get what you want.
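For anyone who'd rather script it end-to-end, here is a minimal sketch of the same email-to-SMS trick using Python's standard library. The SMTP host and sender address are placeholders to fill in, not anything from this post:

```python
import smtplib
from email.message import EmailMessage

# Common US carrier email-to-SMS gateways (same list as above).
GATEWAYS = {
    "verizon": "vtext.com",
    "att": "txt.att.net",
    "sprint": "messaging.sprintpcs.com",
    "tmobile": "tmomail.net",
}

def sms_address(number: str, carrier: str) -> str:
    """Build the email address the carrier relays to the phone as SMS."""
    digits = "".join(ch for ch in number if ch.isdigit())
    return f"{digits}@{GATEWAYS[carrier]}"

def send_stock_alert(number: str, carrier: str, item: str,
                     smtp_host: str = "smtp.example.com") -> None:
    """Send a short 'in stock' message; smtp_host is a placeholder."""
    msg = EmailMessage()
    msg["From"] = "stock-alert@example.com"
    msg["To"] = sms_address(number, carrier)
    msg["Subject"] = "Stock alert"
    msg.set_content(f"{item} is in stock!")
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)
```

Pair it with whatever page-watcher you already run; when the watcher fires, call `send_stock_alert` and the SMS lands within seconds.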


----------



## Jpmboy

^^ thanks for the Verizon vtext.com thing. could be useful in a number of settings. Again, +1 if I could!


----------



## Zammin

Jpmboy said:


> yeah, I was posting about the revision. I've not see any information - except in this thread - regarding the die having a "bin stamp". Etching _after _testing? :blinksmil


Yeah it was all over online tech news and youtube a while ago when it was discovered: https://www.techpowerup.com/247660/...overclocking-forbidden-on-the-cheaper-variant

I'm sure if you google something like "Nvidia binning RTX 2080 Ti" you'll find plenty of info. Hope that helps.



anticommon said:


> They shipped it on friday and its due in this week. I got the ultra for $50 more because it was the only one in stock. Kinda feel bad for people who preordered and still don't see their cards... but then again it took going to the EVGA page probably 50 times throughout the day before one was in stock.


Sadly in Australia we are getting only the stock that trickles down after the US, we are not a priority unfortunately. I can't even order from EVGA or anywhere in the US. EVGA and Nvidia always say out of stock even when the US pages say "Add to Cart". Newegg does not allow us to buy 2080Ti's and Amazon US have shut us out entirely as of earlier this year. So the only way we can buy them is through local retailers who have literally no idea when they will receive stock, or buy off scalpers on Ebay who want stupid amounts of money.

I've been keeping an eye out every day for stock of any other RTX 2080 Ti's locally but I've only seen the Gigabyte Gaming OC (not compatible with my water block) and the ASUS blower card which was being advertised for a higher price than the EVGA one I have on "Pre-Order"...


----------



## Jpmboy

Zammin said:


> Yeah it was all over online tech news and youtube a while ago when it was discovered: https://www.techpowerup.com/247660/...overclocking-forbidden-on-the-cheaper-variant
> 
> I'm sure if you google something like "Nvidia binning RTX 2080 Ti" you'll find plenty of info. Hope that helps.


got it, did a bit of asking around. No one could give me a cutoff criterion for the "A" chips. :tiredsmil


----------



## dexth77

hi,
Has anybody flashed a Zotac RTX 2080 Ti AMP with a higher power limit BIOS? Would like to be sure before I risk bricking it... thanks!


----------



## Vipeax

dexth77 said:


> hi,
> Has anybody flashed Zotac RTX2080ti AMP with higher power limit bios? Would like to be sure before I risk bricking it...thanks!


You're fine.


----------



## Spiriva

Does anyone have an "EVGA GeForce RTX 2080 Ti XC ULTRA GAMING" card? I'm looking for the updated BIOS EVGA released, the 130% power target BIOS.

I get 123% on the Nvidia FE card and 126% with the Galax BIOS from post #1, so it would be cool to try the 130% BIOS too.


https://forums.evga.com/EVGA-GeForce-RTX-2080-Ti-2080-XCXC-Ultra-BIOS-Update-m2858793.aspx

Does anyone have this one as a .rom file?


----------



## zipeldiablo

Has anybody done some OC on this?
I'm having issues testing my OC. I have an EVGA 2080 Ti XC Gaming and wanted to OC the memory as well, but I can't seem to get anything stable.

The card is watercooled, temps 48°C maximum, though I rebuilt my system before EK updated their manual, so I have no thermal pads on the coils, only on the VRMs. Don't know if that impacts stability?

What are you guys using to test your OC? Clearly the method I'm using isn't working.
Also, I'd forgotten what a pain it is to have to reboot the system every time the OC crashes.


----------



## bmgjet

Zammin said:


> Yeah it was all over online tech news and youtube a while ago when it was discovered: https://www.techpowerup.com/247660/...overclocking-forbidden-on-the-cheaper-variant
> 
> I'm sure if you google something like "Nvidia binning RTX 2080 Ti" you'll find plenty of info. Hope that helps.
> 
> 
> 
> Sadly in Australia we are getting only the stock that trickles down after the US, we are not a priority unfortunately. I can't even order from EVGA or anywhere in the US. EVGA and Nvidia always say out of stock even when the US pages say "Add to Cart". Newegg does not allow us to buy 2080Ti's and Amazon US have shut us out entirely as of earlier this year. So the only way we can buy them is through local retailers who have literally no idea when they will receive stock, or buy off scalpers on Ebay who want stupid amounts of money.
> 
> I've been keeping an eye out every day for stock of any other RTX 2080 Ti's locally but I've only seen the Gigabyte Gaming OC (not compatible with my water block) and the ASUS blower card which was being advertised for a higher price than the EVGA one I have on "Pre-Order"...


I know the feeling; here in NZ it's not much different.
I had pre-orders from 2 places here and was going to keep the best clocker and sell the other one on.
Both of them did the same dirty to me.
First week, got an email that they were under-supplied on stock and had to wait 3 more weeks.
Third week, same email again, just copy and pasted.
Fourth week, a different email saying they are refunding everyone's pre-orders since they have no ETA.
Fifth week, they show cards in stock, but $299 more than pre-order prices.


----------



## xermalk

Is there a list/compilation somewhere of all the currently available DXR examples?

Like https://github.com/TheRealMJP/DXRPathTracer


----------



## -jamez-

Hi guys,

Long time lurker, first time poster. I've had my 2080 Ti FE for about 3 weeks now. Absolutely love it; however, I think I may have gotten one of the bad batch.

When I first got the card, I went and ran some benches at stock on air: https://www.3dmark.com/3dm/29333638? I also ran a few benchmark tests in Rise of the Tomb Raider (as that's the game I was playing at the time). I was getting roughly 110-120fps on max settings. I'm running a 1440p monitor.

About a week and a half ago I got all my gear together to build a custom loop. Put it on water, overclocked it a bunch along with my CPU, and this is about as high a score as I got: https://www.3dmark.com/spy/4806720. Rise of the Tomb Raider was benching between 140-150fps flawlessly.

This is where it started to break down. I clocked it back a tad after benching, for gaming stability. It was running great until about 5 days ago, when I started to get game crashes and artifacts (coloured flashes, white blocks, and other misc.). So I began to clock it riiiiiiight back down (stopping at every step to see if the problems would stop) to where it was pretty much stock. This didn't solve the issue. Reverting completely to stock reduced the severity; however, it was still happening intermittently.

Got onto Nvidia support, who had me clean-install my drivers and install an unreleased driver (416.64) to see if that fixed the issues. No dice. In the end they decided to RMA my card (which I don't mind at all, considering they do advance RMAs so you're not without a card - awesome service, that).

I decided to keep everything stock for now. While I wait, I went back to playing Rise of the Tomb Raider. I noticed my FPS counter was sitting at around 80ish fps, which was kind of odd, and it would never go over 100. So I decided to run some of the RotTR benches again, and the benches never went over 100 either; anywhere between 75-90fps each run.

So something is seriously wrong with my card...

I ran another TSE benchmark completely stock to see how the scores matched up: https://www.3dmark.com/spy/4876660

Has anyone RMA'd through Nvidia before? How long does the turnaround usually take?

Edit: Here's a picture of my rig that got some praise and criticism. I guess it's a blessing in disguise, giving me an excuse to tear down the loop, order some more PETG, and fix up some flaws - https://www.reddit.com/r/nvidia/comments/9qy2sq/first_ever_custom_loop_feat_2080_ti/


----------



## Spiriva

From EVGA forum:

"My original bios was 90.02.0B.00.67 which was only +112% for power.
Flashing with the new bios was 90.02.0B.00.E4 which is +130% power.
Video card is the 2080Ti XC ultra."

https://www.techpowerup.com/vgabios/204090/evga-rtx2080ti-11264-180916

This should be the 130% PT BIOS, then.


*Flashed it; it worked fine and is indeed the 130% PT version.


----------



## Rob w

It certainly is a worrying time for anyone with a 2080 Ti FE, especially if it's working OK (at the minute).
Had mine running for the past week, and the only problem I had was tweaking it too much on air (yep, loads of artifacting), but I sorted that and it's been running fine since.
I'm going to hold off putting the EK block on it for a few weeks until I know it's OK and I won't have to RMA it.
Sad that a card that destroys the Titan V at a third of the cost is so troublesome, for some (a lot?).


----------



## Vipeax

Interesting to see how some people still don't understand the basic story behind power limit percentages, like the recent posts expecting the EVGA BIOS to result in a higher power limit just because the slider has a higher percentage attached to it...

Since when is 338W more than 380W? Overclocking for dummies.
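To make the arithmetic explicit: the slider is a percentage of each BIOS's own base power, so a bigger percentage can still mean fewer watts. A quick sketch (the base wattages here are illustrative assumptions, not confirmed values for any particular BIOS):

```python
def effective_limit_w(base_power_w: float, slider_pct: float) -> float:
    """Actual board power cap: the power slider scales the BIOS's own
    base power, so the percentage is meaningless on its own."""
    return base_power_w * slider_pct / 100.0

# Illustrative: a 260 W base BIOS maxed at 130% still caps lower than
# a 300 W base BIOS maxed at 127%, despite the smaller slider number.
cap_a = effective_limit_w(260, 130)  # 338.0 W
cap_b = effective_limit_w(300, 127)  # 381.0 W
```

Comparing wattage, not percentage, is the only meaningful way to rank two BIOSes.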


----------



## Zammin

bmgjet said:


> I know the feeling; here in NZ it's not much different.
> I had pre-orders from 2 places here and was going to keep the best clocker and sell the other one on.
> Both of them did the same dirty to me.
> First week, got an email that they were under-supplied on stock and had to wait 3 more weeks.
> Third week, same email again, just copy and pasted.
> Fourth week, a different email saying they are refunding everyone's pre-orders since they have no ETA.
> Fifth week, they show cards in stock, but $299 more than pre-order prices.


Man, that's dirty.. I hope PLE doesn't pull something like that on me. Sorry to hear you guys in NZ are getting the runaround as well.


----------



## webmi

boi801 said:


> This is my MOD. There are many like it, but this one is mine.
> 
> Mod power limit on a MSI RTX 2070;
> 
> Find the shunt resistor
> Do that
> Apply it there
> 
> And it Works!!!! LOL power limit never reads 90%
> Now this mod need a name...


Here in Germany the device in red is called a "Wäscheklammer" (clothespin). Maybe something like that should be in the name for your mod. ^^


----------



## -jamez-

Yep, it’s definitely dying 😞

Stock clock on my CPU and GPU - here’s some of the less severe artefacts I’ve seen:
















Then later I get a complete stuttering/freezing mess leading into a crash. It was bouncing between 1 and 32fps every minute or so, with freezing in between:











----------



## alex1990

Bios update for 2080 Ti Trio
MSINV371MH.103


----------



## Jpmboy

Vipeax said:


> Interesting to see how some people still don't understand the basic story behind powerlimit percentages. Like the recent posts expecting the EVGA BIOS to result in a high power limit, because the slider has a higher percentage attached to it...
> 
> Since when is 338W more than 380W? Overclocking for dummies.


it's been that way ever since we stopped having a simple BIOS editor where folks can see the wattage vs. power % "thing".


----------



## Hulk1988

alex1990 said:


> Bios update for 2080 Ti Trio
> MSINV371MH.103


For the Trio X? Source?

Thank you


----------



## alex1990

Hulk1988 said:


> For the Trio X? Source?
> 
> Thank you


Yes, for trio x
https://forum-en.msi.com/index.php?topic=310364.msg1783790#msg1783790


----------



## Vipeax

Jpmboy said:


> it's ever since we have not had a simple bios editor where folks can see the wattage vs power % "thing".


GPU-Z sounds like an easier tool than a bios editor to me.



















No wonder some people still manage to brick cards once in a while. Some guy even tried to flash a corrupted download that had a size of 0 bytes.


----------



## Jpmboy

Vipeax said:


> GPU-Z sounds like an easier tool than a bios editor to me.
> 
> No wonder some people still manage to brick cards once in a while. Some guy even tried to flash a corrupted download that had a size of 0 bytes.


unfortunately that is look-but-don't-touch; an editor would be wonderful. Many a modded BIOS was made for the OG Titans, with greatly improved performance. Then NV instituted a lockout.


----------



## alanthecelt

got my Palit GeForce RTX 2080 Ti Gaming Pro OC over the weekend
was a cheap card to block
https://www.overclock.net/forum/attachment.php?attachmentid=227814&thumb=1
https://www.overclock.net/forum/attachment.php?attachmentid=227812&thumb=1
https://www.overclock.net/forum/attachment.php?attachmentid=227816&thumb=1

I will need to catch up on the bios flashing business ...


----------



## Vipeax

Jpmboy said:


> unfortunately that is look but don't touch. an editor would be wonderful. Many a mod bios for OG titans were made - greatly improved performance. Then NV instituted a lock out.


Oh I know, but it is very plausible that this was partially also done to counter all the fake GPUs being sold with modified BIOS files on them.



alanthecelt said:


> got my Palit GeForce RTX 2080 Ti Gaming Pro OC over the weekend
> was a cheap card to block
> https://www.overclock.net/forum/attachment.php?attachmentid=227814&thumb=1
> https://www.overclock.net/forum/attachment.php?attachmentid=227812&thumb=1
> https://www.overclock.net/forum/attachment.php?attachmentid=227816&thumb=1
> 
> I will need to catch up on the bios flashing business ...


No high-resolution picture of the PCB? :-(


----------



## Jpmboy

Vipeax said:


> Oh I know, but it is very plausible that this was partially also done to counter all the fake GPUs being sold with modified BIOS files on them.


lol - it's the wild-wild west you know.


----------



## alanthecelt

Vipeax said:


> No high-resolution picture of the PCB? :-(


in all honesty.. this is the first time I've looked at the pic I took and realised it was terrible :S
I did notice it was different from the picture in the Alphacool manual.
To the top right of the chip (if it's the right way up), in the corner formed by the upper and right 4 memory modules, there are 4 large resistor-type parts; there are also pads for 2 more. In the Alphacool manual all 6 are populated.


----------



## webmi

Vipeax said:


> GPU-Z sounds like an easier tool than a bios editor to me.





Jpmboy said:


> an editor would be wonderful.


If you know what you are doing, you can use a hex editor to read the power limits from the BIOS directly (technically you could even change them, but without the right tools (a bypass for the cert check) we are not able to flash a changed BIOS).
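A minimal sketch of that kind of exploratory read, assuming the limit is stored as a little-endian 32-bit value in milliwatts; that encoding was commonly reported for earlier NVIDIA power tables, but it is an assumption here, and the real offsets and layout are card-specific:

```python
import struct

def find_candidate_power_offsets(rom: bytes, watts: int) -> list:
    """Scan a BIOS image for little-endian uint32 values that would
    equal `watts` if stored in milliwatts. Exploratory only: any hit
    must be cross-checked against the surrounding power table before
    being trusted."""
    target = struct.pack("<I", watts * 1000)
    offsets, start = [], 0
    while (hit := rom.find(target, start)) != -1:
        offsets.append(hit)
        start = hit + 1
    return offsets

# Usage sketch: rom = open("backup.rom", "rb").read()
#               find_candidate_power_offsets(rom, 260)
```

This only reads a saved image; it changes nothing on the card.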


----------



## GosuPl

Interesting info from Sora

"LATEST NEWS

I called Asus support the night before last and the agent told me they were receiving many escalations regarding this 2080 issue and their engineering team is currently working on resolution. The only information they could provide me was it is indeed related to nvlink interface incompatibility with current BIOS/board, and users of the 20xx series GPU's, whether multipgpu or single gpu may face issues and instability until BIOS update is released.

They also stated there is other issues related to the 20xx series GPU's and screens going black prior to login screen in windows/drive instability with boards at the moment, and this is affecting other boards as well (not just the ZE) and they are working on BIOS updates for these issues too.

They said to expect BIOS updates "soon" and their L2 teams will keep in touch.

Seems the NVlink interface, regardless of multigpu or single, is causing issues with ASUS mobos in general for many. I suspect they never really added support for it since it was technically outside the scope of the market these boards were intended for (well, until now)."

Source.

https://forums.geforce.com/default/.../2080-ti-recall-cryptomining-and-bad-drivers/

Btw, another faulty 2080 Ti user...

https://forums.geforce.com/default/...es/rtx-2080-ti-causing-horrible-artifacts-/1/

https://www.dropbox.com/s/u5u35057hgbubas/2018-10-18 20.04.20.jpg?dl=0

More and more faulty Turing cards. If my new cards turn out to be faulty too, I will go back to the TITAN Xp and wait for the next gen, or eventually for new revisions of the 2080 Ti with Samsung GDDR6 instead of Micron. What a shame on Nvidia's side...


----------



## Jpmboy

webmi said:


> If you know what you are doing, you can use a HEX EDITOR to read power from the BIOS directly (technically even change it, but without the right tools (bypass for cert) we are not able to flash a changed BIOS).


yeah - I spent too much time staring at hex for pascal and volta hoping to flash a modded bios.


----------



## Zammin

alanthecelt said:


> got my Palit GeForce RTX 2080 Ti Gaming Pro OC over the weekend
> was a cheap card to block
> https://www.overclock.net/forum/attachment.php?attachmentid=227814&thumb=1
> https://www.overclock.net/forum/attachment.php?attachmentid=227812&thumb=1
> https://www.overclock.net/forum/attachment.php?attachmentid=227816&thumb=1
> 
> I will need to catch up on the bios flashing business ...


Nice. I see you bought the Alphacool block as well. Much like the 2080Ti I ordered, my Alphacool block that I bought through Aquatuning is nowhere to be seen atm.. Aquatuning keep giving random ETAs and changing them every time they are missed, and even Alphacool themselves couldn't tell me what the hold up was. :/

Let me know how your load temps go once it's all installed. Hopefully I can get mine soon to compare.


----------



## tamas970

MSI cards seem to be on huge back order; does anyone have any insider knowledge of what's causing the problem? Shipping dates seem much more grim for e.g. an MSI Trio than for other cards.


----------



## toncij

So, judging by what is being written everywhere, many manufacturers are having dying cards? VRAM?

Doesn't look good.

Btw, what is with Asus killing a DP port to put in a second HDMI? Why would someone want 2 HDMI ports?


----------



## alanthecelt

Zammin said:


> Nice. I see you bought the Alphacool block as well. Much like the 2080Ti I ordered, my Alphacool block that I bought through Aquatuning is nowhere to be seen atm.. Aquatuning keep giving random ETAs and changing them every time they are missed, and even Alphacool themselves couldn't tell me what the hold up was. :/
> 
> Let me know how your load temps go once it's all installed. Hopefully I can get mine soon to compare.


Yeah, I had the block from literally their first shipment.. my card was in the ether for 2 months, so I changed my order to the Palit from a generic model (no details as such).
I should get it in the loop in the next couple of evenings.


----------



## Okt00

What blocks am I missing for the 2080 Ti? I'm new to the loop scene and want to make sure I choose a good one...

*EKWB EK-Vector RTX 2080 Ti*
Copper + Acetal (RGB or Not)
Copper + Plexi (RGB or Not)
Nickel + Plexi (RGB or Not)
Nickel + Acetal (RGB or Not)

*Watercool HEATKILLER IV for RTX 2080 Ti*
Copper + Acryl
Copper + Acetal
Nickel + Acetal
Nickel + Acryl (RGB)
Nickel + Black Aluminum (RGB)

*Alphacool Eisblock GPX-N*
Acetal
Plexi (RGB)
Plexi Light (RGB)

*Phanteks GLACIER G2080Ti*
Nickel + Acrylic + Anodized Black Aluminum (RGB)
Nickel + Acrylic + Chrome Plated Aluminum (RGB)

*Bitspower Lotan VGA*
Nickel + Acrylic (RGB)

*Barrow*
85-NVG2080T-PA (RGB)


XSPC has a 2080 block, the Razor Neo, but nothing I can see for a 2080 Ti...


----------



## Nicklas0912

So, what BIOS are you guys running on your RTX 2080 Ti? I use a custom 380W BIOS on my RTX 2080 Ti Founders card. I will post overclocks later, but so far it is SO MUCH better than the stock BIOS. Just waiting for my waterblock to arrive, since temps are skyrocketing now.


----------



## mr2cam

Nicklas0912 said:


> So, what bios are you guys running with your RTX 2080 TI? I use a Custom 380Watt bios, for my RTX 2080 TI Founders card, I will post overclocks later, but so far, is SO MUCH better than the stock bios, just wating for my waterblock to come home, since the temps is skyrocking now


Yikes, I wouldn't be flashing a different BIOS on your FE with all the FE cards dying recently; they might refuse warranty service if they find out you flashed a different BIOS onto it.


----------



## Vipeax

mr2cam said:


> Yikes, I wouldn't be flashing any different bios on your FE with all the FE cards dying recently, they might refuse a warranty if they find out you flashed a different bios on it.


Memory errors won't make the GPU unrecognizable for a flash, so he can always restore it.


----------



## xermalk

Okt00 said:


> What blocks am I missing for the 2080 Ti? I'm new the loop scene and want to make sure I choose a good one...
> 
> *EKWB EK-Vector RTX 2080 Ti*
> Copper + Acetal (RGB or Not)
> Copper + Plexi (RGB or Not)
> Nickel + Plexi (RGB or Not)
> Nickel + Acetal (RGB or Not)
> 
> *Watercool HEATKILLER IV for RTX 2080 Ti*
> Copper + Acryl
> Copper + Acetal
> Nickel + Acetal
> Nickel + Acryl (RGB)
> Nickel + Black Aluminum (RGB)
> 
> *Alphacool Eisblock GPX-N*
> Acetal
> Plexi (RGB)
> Plexi Light (RGB)
> 
> *Phanteks GLACIER G2080Ti*
> Nickel + Acrylic + Anodized Black Aluminum (RGB)
> Nickel + Acrylic + Chrome Plated Aluminum (RGB)
> 
> *Bitspower Lotan VGA*
> Nickel + Acrylic (RGB)
> 
> *Barrow*
> 85-NVG2080T-PA (RGB)
> 
> 
> XSPC has a 2080 block, the Razer Neo, but nothing I can see for a 2080 Ti...


The sexiest of them.

















It has been delayed by 2-3 weeks because of issues getting all the materials to make them.

It has a built-in flow sensor and temp probe.

https://forum.aquacomputer.de/wasserk-hlung/108610-preview-kryographics-next-2080ti/


----------



## dVeLoPe

EVGA GeForce RTX 2080 Ti XC BLACK EDITION GAMING, 11G-P4-2282-KR

Does anyone own this card? Can the BiOS be flashed? What waterblock will fit?


----------



## Garrett1974NL

Nicklas0912 said:


> So, what bios are you guys running with your RTX 2080 TI? I use a Custom 380Watt bios, for my RTX 2080 TI Founders card, I will post overclocks later, but so far, is SO MUCH better than the stock bios, just wating for my waterblock to come home, since the temps is skyrocking now


You mean like much less fluctuating GPU core speeds?


----------



## Nicklas0912

If you've got the 380W BIOS, then yes, it works on all reference-PCB boards.


----------



## Nicklas0912

Both that and higher clocks, as your limit is no longer 260W but 380W.


----------



## Spiriva

Nicklas0912 said:


> Both that and higher clocks, as your limit is not 260 watt anymore, but 380 watt.


Isn't it 320W on the FE cards?


----------



## Jpmboy

Spiriva said:


> Isnt it 320W on the FE cards ?


Yes.
... and a power limit has nothing to do with the max frequency that can be obtained. It only allows a higher load to be processed through the component at any given frequency. Again, just like Pascal and Volta, temperature has the more important effect on frequency.
Some folks just lose it in all the excitement.


----------



## Nicklas0912

Jpmboy said:


> yes
> ... and a power limit has nothing to do with a max frequency that can be obtained. It can only allow for a higher load to be processed thru the component and any given frequency. Again, just like pascal and volta, temperature has the more important effect on frequency.
> some folks just loose it in all the excitement.


Temps matter most on Pascal/Turing, yes, but after the new BIOS I can do +170 instead of +140 on the core clock. I get higher fps, better scores in 3DMark and higher clocks, so it IS doing something, even with temps at 70°C at the max overclock.
At least +600 points in Time Spy.

With the stock BIOS I see max 260W in GPU-Z; with the new BIOS, 360W+, and the system power draw from the wall is higher too. Not just "excitement".

After the new BIOS, the cooling is the limit; temps are skyrocketing and the cooler can't keep up anymore.


----------



## Okt00

xermalk said:


> The sexiest of them.
> 
> https://forum.aquacomputer.de/images-ac/kryographics_NEXT_2080Ti_TOP.jpg
> https://forum.aquacomputer.de/images-ac/kryographics_NEXT_2080Ti_BOTTOM.jpg
> Been delayed by 2-3 weeks becuse of issues getting all the materials to make them.
> 
> Has a built in flowsensor and temp probe.
> 
> https://forum.aquacomputer.de/wasserk-hlung/108610-preview-kryographics-next-2080ti/



That _is_ pretty! I don't know how functional that sensor would be for me personally, with the GPU mounted vertically. But I'll keep it in mind.


----------



## Jbravo33

Nicklas0912 said:


> Temps are doing the most on pascal/tur yes, but after the new bios, I can do +170 instead of +140 on the core clock, I get higher fps and better scores in 3dmark, higher clocks, so it are DOING something, even when the temps are 70c with max overclock.
> aleast +600 points in Timespy.
> 
> With stock bios, I see Max 260 Watt in GPU-Z, with the new bios 360w+, the system power from the wall is higher too, not just a "excitement"
> 
> After the new bios, the cooling is the limit, temps is skyrocking, the cooler cant follow it anymore.


The one day I was on air the max I could put on cards were 175/1000. After putting a water block and flashing bios the max I could put on cards were 175/1000. There was no increase in frequency due to bios flash. Now will it stay at a higher clock longer? Probably. I’d rather hard mod like shunt but I tried that and doesn’t work on this card unless you do some extra things I really don’t feel like doing. I removed new bios almost immediately as it was less stable even tho I wasn’t seeing the power limit as much.


----------



## Vipeax

I need someone who is running the GALAX BIOS to share a GPU-Z screenshot of the first tab please.


----------



## Garrett1974NL

Jbravo33 said:


> The one day I was on air the max I could put on cards were 175/1000. After putting a water block and flashing bios the max I could put on cards were 175/1000. There was no increase in frequency due to bios flash. Now will it stay at a higher clock longer? Probably. I’d rather hard mod like shunt but I tried that and doesn’t work on this card unless you do some extra things I really don’t feel like doing. I removed new bios almost immediately as it was less stable even tho I wasn’t seeing the power limit as much.


Less stable... you mean crashing or something?


----------



## MikeJeffries

*Flashing Bios on the RTX 2080 Ti FE?*

Is anyone here flashing the bios on the NVidia RTX 2080 Ti FE Cards?

I'm reading that people are increasing the maximum power on their reference PCB cards but I'm not sure if it's being done on the FE cards, which is what I currently have.

Can anyone let me know if it's even worth it to do on them?


----------



## Jpmboy

Nicklas0912 said:


> Temps are doing the most on pascal/tur yes, but after the new bios, I can do +170 instead of +140 on the core clock, I get higher fps and better scores in 3dmark, higher clocks, so it are DOING something, even when the temps are 70c with max overclock.
> aleast +600 points in Timespy.
> 
> With stock bios, *I see Max 260 Watt in GPU-Z*, with the new bios 360w+, the system power from the wall is higher too, not just a "excitement"
> 
> After the new bios, the cooling is the limit, temps is skyrocking, the cooler cant follow it anymore.


Something must have been wrong with the card initially. The stock FE will pull 320W - and slam into the power limit - when folding, BOINC, Time Spy Extreme, Heaven @ 4K, Superposition 8K, etc., but before that, the temperature "fail-safe" will kick in with the stock cooler and drop clock bins from 30°C up. Hopefully the Galax bios (which I believe is made for their watercooled card and/or LN2) does not also raise the bios max temp... best put a block on that pretty soon. The peak MHz recorded in GPU-Z means little; it's the frequency at the sustained temp during a run that pushes your TS score :thumb:


----------



## Nico67

I was getting restarts during games, even after knocking the OC down a bit, which didn't seem to help. Tried dropping the power limit back from 360W to 330W and have had no problems since. Could be due to the PCIe extender cable in the ITX case not being able to handle 75W+; maybe tacking a 6-pin PCIe cable onto the video card end would have been a good idea.
Could mean that certain motherboards, especially those with extra VGA 6-pin headers, might handle higher-PL bioses better.

For reference,

PSU - Silverstone SST-SX800-LTI
Extender - 500mm 3M Twinax cable

Update: turns out it is the PSU and lowering power was only circumstantial. 800W should be fine, but this model isn't quite up to the task.


----------



## exploiteddna

-jamez- said:


> Hi guys,
> 
> Long time lurker, first time poster. Have had my 2080 Ti FE for about 3 weeks now. Absolutely love it, however i think i may have gotten one of the bad batch.
> 
> When I first got the card, i went and ran some stock-as-a-rock benches on air: https://www.3dmark.com/3dm/29333638? I also ran a few benchmark tests on Rise of the Tomb Raider (as that's the game i was currently playing). Was getting roughly 110-120fps on max settings. I'm running a 1440p monitor.
> 
> About a week and a half ago i got all my gear to put together a custom loop. Put it on water, overclocked it a bunch as well as my CPU and this is about as high a score as i got: https://www.3dmark.com/spy/4806720. Rise of the Tomb raider was benching between 140-150fps flawlessly.
> 
> This is where it started to break down. I clocked it back a tad after benching for gaming stability. Was running great until about 5 days ago i start to get game crashes, artifacts (coloured flashes, white blocks, and other misc.). So i began to clock it riiiiiiight back down (stopping at every step down to see if the problems would stop) to where it pretty much was stock. This didn't solve the issue. Reverting completely to stock fixed the severity however it was still happening intermittently.
> 
> Got onto Nvidia support who got me to clean install my drivers and install an unreleased driver (416.64) to see if that fixed the issues. No dice. In the end they decided to RMA my card (which i don't mind at all considering they do advanced RMA's so you're not without a card - awesome service that is).
> 
> I decided to keep everything stock for now. While i wait, went back to play Rise of the Tomb Raider. I noticed my FPS counter was sitting at around 80ish fps which was kind of odd and it would never go over 100. So i decided to run some of the RotTR benches again and the benches never went over 100. Was anywhere between 75-90fps each run.
> 
> So something is seriously wrong with my card...
> 
> I ran another TSE benchmark completely stock to see how the scores were matching up: https://www.3dmark.com/spy/4876660
> 
> Has anyone RMA'd through Nvidia before? How long does the turn around usually take?
> 
> Edit: Heres a picture of my rig that got some praise and criticism. I guess its a blessing in disguise which gives me an excuse to tear-down the loop, order some more PETG, and fix up some flaws - https://www.reddit.com/r/nvidia/comments/9qy2sq/first_ever_custom_loop_feat_2080_ti/


welcome to ocn. 
nice build for a first time. definitely I would re-do that bottom tube, it's really wonky and off-center. But the color scheme I like; I go for similar colors on my build.


----------



## Nicklas0912

Jpmboy said:


> Something must have been wrong with the card initially. The stock FE will pull 320W - and slam into the power limit - when folding, BOINC, Time Spy Extreme, Heaven @ 4K, Superposition 8K, etc., but before that, the temperature "fail-safe" will kick in with the stock cooler and drop clock bins from 30°C up. Hopefully the Galax bios (which I believe is made for their watercooled card and/or LN2) does not also raise the bios max temp... best put a block on that pretty soon. The peak MHz recorded in GPU-Z means little; it's the frequency at the sustained temp during a run that pushes your TS score :thumb:


I know how it works, and I never talked about the peak MHz, that means **** ! I tried with both bioses, and I'm getting a lot higher sustained frequency with the modded one. I have both a Titan X Pascal and a Titan Xp; I know how this works.

I'm waiting for my waterblock to come; it's in the mail atm. 

With the modded bios, the card is hitting 72°C under load (Time Spy 4K) with 100% fan speed; it just can't handle the wattage, rofl.

After I went back to the stock bios to try, I saw a max peak of 317W in GPU-Z, so the modded bios is doing its work at 360W+, but I need the waterblock to get higher clocks first.


----------



## Jpmboy

cool. enjoy the card.


----------



## -jamez-

exploiteddna said:


> welcome to ocn.
> nice build for a first time. definitely I would re-do that bottom tube, it's really wonky and off-center. But the color scheme I like; I go for similar colors on my build.


Thanks mate! Yeah, I do agree with you. As mentioned, I've heard all the criticism from the people on Reddit. So although my GPU is dying, it gives me a chance to right those wrongs.

That double 90-degree was a tough one to do on my first try. Was pretty hilarious though. Couldn't get the silicone insert out because of the extra friction with the second 90-degree bend. So I got my pint-sized gf to hold one end while I pulled on the other. It came loose and we both went flying. Good times.

Have learnt now to lube the silicone up before attempting that one again.


----------



## Jpmboy

nvm. figured it out.


----------



## Glerox

I'm probably gonna open my FE 2080 Ti next weekend. I was still waiting for parts for my build.
I'm surprised to hear more and more users having problems with the FE GPU, like BSODs and artefacts.
It's a shame, because my Titan XPs have been rock solid for 2 years, even after months of mining.

Should I return my unopened FE and go for a board partner card instead?

The GPU will be under an EKWB block.


----------



## kx11

EVGA FTW3 vs Strix OC: which one gets better OC numbers all around?


----------



## kot0005

GosuPl said:


> https://forums.geforce.com/default/topic/1078162/geforce-rtx-20-series/rtx-2080ti-massively-die-/1/
> 
> https://forums.geforce.com/default/...20-series/2080-ti-fe-artifacts-crash-bsod-/1/
> 
> And lots of similiar topics.
> 
> VRAM dying on 2080 Tis (even non-FE) is a very serious problem. People have had 2-3x RMAs and even DOA GPUs. So...


Pls link Non FE cards dying.. 2 or 3 AIB cards dying is normal, meanwhile almost every user has had their FE die..


----------



## stryfetew

I haven't even taken mine out of the box yet. This is making me seriously nervous about mine.


----------



## kot0005

stryfetew said:


> I haven't even taken mine out of the box yet. This is making me seriously nervous about mine.


FE ? 

You might as well return it if it's an FE, or order an AIB card, wait till it's shipped, and return the FE.


----------



## stryfetew

kot0005 said:


> FE?
> 
> You might as well return it if it's an FE, or order an AIB card, wait till it's shipped, and return the FE.


It's my understanding that even the AIB cards are having issues. I'm hoping NVIDIA steps up and acknowledges there is an issue and they are working on a fix.


----------



## Jpmboy

kot0005 said:


> Pls link Non FE cards dying.. 2 or 3 AIB cards dying is normal, meanwhile *almost every user has had their FE die*..


C'mon... just a bit of an overstatement.


----------



## iamjanco

Both articles published today:

*DigitalTrends - Nvidia RTX 2080 Ti Graphics Cards Are Dying in Alarming Numbers*
*TechRadar - Nvidia’s flagship RTX 2080 Ti graphics cards are failing more than they should*

I'd take them with a grain of salt until more is learned about what's actually causing cards to crash and burn.


----------



## NewType88

My FTW3 at default settings, with GPU and system fans at 100%, is getting 70°C on the GPU and mid-80s on some of the memory modules. Those temps seem way too high for stock and 100% fans.


----------



## GosuPl

kot0005 said:


> Pls link Non FE cards dying.. 2 or 3 AIB cards dying is normal, meanwhile almost every user has had their FE die..



https://www.overclock3d.net/news/gp..._2080_ti_died_-_reports_of_gpu_deaths_mount/1

https://tyrone.tech/nvidia-shipping...dRn-LWvTBDhE5eJi6FevZc256S9jKG20q_p1yGrV15a60


----------



## Jpmboy

NewType88 said:


> My FTW3 at default settings, with GPU and system fans at 100%, is getting 70°C on the GPU *and mid-80s on some of the memory modules*. Those temps seem way too high for stock and 100% fans.


This is what is great about the FTW series: my 1080 Ti FTW3 had, and my 1070 Ti FTW2 has, these I2C channels open so the on-board DTS can be read. The reported RAM failures may indeed be related to poor cooling of the RAM ICs... we'll certainly find out in time. Nearly all the failure reports are for NV FE cards, but that is likely a "sample bias"; AIB/3rd-party cards are just showing up in the past week or two. They all use the Micron ICs AFAIK.
What RAM brand is shown in GPU-Z for your FTW3?


----------



## jcde7ago

Just as a reference point for other folks in this thread - my experience with the 2080 Ti FE:

- Pre-ordered a 2080 Ti FE before the link was officially live at the earliest possible opportunity
- Received my 2080 Ti on 10/5
- 2080 Ti worked amazing for about 10 days; I started getting hard lockups requiring a full reboot and artifacting on 10/15
- Opened up a case with Nvidia live chat on 10/18 (and RMA was approved) after I couldn't take it anymore as the lock ups were getting more frequent (started out being an hour or two apart, then within 30 minutes, then lock ups would occur with any game within 5 minutes).
- On 10/24, I received an email saying my advanced RMA was shipped and will be delivered 10/25.
- Nvidia messed up with the wrong zip code and my replacement was actually held by FedEx until I picked it up today, 10/29.

Replacement 2080 Ti installed a couple hours ago...so far, no issues playing Destiny 2 for an hour on highest settings, but I gotta be honest...Mr Nvidia, I don't feel so good (but seriously - i'm basically waiting for this second card to start failing in the next 7-10 days, as multiple people i've talked to have already had their replacement FEs fail on them as well).

My advice: stay way, way away from 2080 Ti FEs for the next month or two at minimum...AIB cards seem to be affected as well, but not nearly to the same degree as the failure rate for these FEs. 

As an early adopter of most of the previous Titan/Ti high-end cards...i've never seen anything like this and, quite frankly, it's downright shameful that Nvidia continues to stay silent about an obvious manufacturing flaw with these cards.

EDIT:

2080 Ti FE Card #1 Serial/batch: 0323818****** (RMA'd after 10 days) 
2080 Ti FE Card #2 Serial/batch: 0324018****** (New replacement installed on 10/29)

I will update if my replacement card has any issues....(knocks on wood)


----------



## Esenel

My FE card has also been installed since October 5th with an EK block, and I haven't had any issues yet.

Batchnumber not at hand yet.


----------



## Emmett

jcde7ago said:


> Just as a reference point for other folks in this thread - my experience with the 2080 Ti FE:
> 
> - Pre-ordered a 2080 Ti FE before the link was officially live at the earliest possible opportunity
> - Received my 2080 Ti on 10/5
> - 2080 Ti worked amazing for about 10 days; I started getting hard lockups requiring a full reboot and artifacting on 10/15
> - Opened up a case with Nvidia live chat on 10/18 (and RMA was approved) after I couldn't take it anymore as the lock ups were getting more frequent (started out being an hour or two apart, then within 30 minutes, then lock ups would occur with any game within 5 minutes).
> - On 10/24, I received an email saying my advanced RMA was shipped and will be delivered 10/25.
> - Nvidia messed up with the wrong zip code and my replacement was actually held by FedEx until I picked it up today, 10/29.
> 
> Replacement 2080 Ti installed a couple hours ago...so far, no issues playing Destiny 2 for an hour on highest settings, but I gotta be honest...Mr Nvidia, I don't feel so good (but seriously - i'm basically waiting for this second card to start failing in the next 7-10 days, as multiple people i've talked to have already had their replacement FEs fail on them as well).
> 
> My advice: stay way, way away from 2080 Ti FEs for the next month or two at minimum...AIB cards seem to be affected as well, but not nearly to the same degree as the failure rate for these FEs.
> 
> As an early adopter of most of the previous Titan/Ti high-end cards...i've never seen anything like this and, quite frankly, it's downright shameful that Nvidia continues to stay silent about an obvious manufacturing flaw with these cards.
> 
> EDIT:
> 
> 2080 Ti FE Card #1 Serial/batch: 0323818****** (RMA'd after 10 days)
> 2080 Ti FE Card #2 Serial/batch: 0324018****** (New replacement installed on 10/29)
> 
> I will update if my replacement card has any issues....(knocks on wood)


I am in the process of an RMA with 0323818 also.

Same as you. Worked fine for a while. The backplate on my card was on 🔥 even with fans blowing right on it.


----------



## Spiriva

I put an EK block on mine the day after I got it (Oct 26th). It's been running at 2175/8000MHz since I flashed it with the EVGA bios at 338W, over the Nvidia original FE's 320W.

No problems so far, but it's only been a couple of days. I've got an EVGA XC card on order as well, although they probably won't reach Sweden till December or so, so I've got a few weeks to try this FE card out.
But so far I'm very pleased with it.

Serial: 0323918******


----------



## kot0005

GosuPl said:


> https://www.overclock3d.net/news/gp..._2080_ti_died_-_reports_of_gpu_deaths_mount/1
> 
> https://tyrone.tech/nvidia-shipping...dRn-LWvTBDhE5eJi6FevZc256S9jKG20q_p1yGrV15a60


dude lol... The two links are FE cards. I asked for non-FE, aka AIB card death links.. and OC3D literally copied Digital Trends' post from earlier... I am pretty sure everyone is gonna publish this as news now. Took them long enough, but it's poor to see that the media only publishes these once someone else picks it up. I asked GamersNexus on Twitter about this a week ago. I bet they're gonna make a vid about it now because it's the new thing and they need a piece of it. Not just GN. Everyone is gonna publish this now.


----------



## ENTERPRISE

kot0005 said:


> dude lol... The two links are FE cards. I asked for non-FE, aka AIB card death links.. and OC3D literally copied Digital Trends' post from earlier... I am pretty sure everyone is gonna publish this as news now. Took them long enough, but it's poor to see that the media only publishes these once someone else picks it up. I asked GamersNexus on Twitter about this a week ago. I bet they're gonna make a vid about it now because it's the new thing and they need a piece of it. Not just GN. Everyone is gonna publish this now.


I was wondering how long it would take before the "media" started to talk about the clear issue Nvidia is having.


----------



## th3orist

*Any possibility to adjust FE 2080 Ti fan speed in idle mode?*

Hey guys, so I have an FE 2080 Ti and all in all I am very pleased (except with the price), but the fans in idle mode are always sitting at 41% and I can't adjust it with any OC software.
So do you specialists know a way to manually adjust them to go lower in idle mode? Does flashing another bios help with that?


----------



## Madness11

Ohhhh, I'm so scared now to run my Zotac AMP 2080 Ti... sad )))


----------



## Silent Scone

-jamez- said:


> Hi guys,
> 
> Long time lurker, first time poster. Have had my 2080 Ti FE for about 3 weeks now. Absolutely love it, however i think i may have gotten one of the bad batch.
> 
> When I first got the card, i went and ran some stock-as-a-rock benches on air: https://www.3dmark.com/3dm/29333638? I also ran a few benchmark tests on Rise of the Tomb Raider (as that's the game i was currently playing). Was getting roughly 110-120fps on max settings. I'm running a 1440p monitor.
> 
> About a week and a half ago i got all my gear to put together a custom loop. Put it on water, overclocked it a bunch as well as my CPU and this is about as high a score as i got: https://www.3dmark.com/spy/4806720. Rise of the Tomb raider was benching between 140-150fps flawlessly.
> 
> This is where it started to break down. I clocked it back a tad after benching for gaming stability. Was running great until about 5 days ago i start to get game crashes, artifacts (coloured flashes, white blocks, and other misc.). So i began to clock it riiiiiiight back down (stopping at every step down to see if the problems would stop) to where it pretty much was stock. This didn't solve the issue. Reverting completely to stock fixed the severity however it was still happening intermittently.
> 
> Got onto Nvidia support who got me to clean install my drivers and install an unreleased driver (416.64) to see if that fixed the issues. No dice. In the end they decided to RMA my card (which i don't mind at all considering they do advanced RMA's so you're not without a card - awesome service that is).
> 
> I decided to keep everything stock for now. While i wait, went back to play Rise of the Tomb Raider. I noticed my FPS counter was sitting at around 80ish fps which was kind of odd and it would never go over 100. So i decided to run some of the RotTR benches again and the benches never went over 100. Was anywhere between 75-90fps each run.
> 
> So something is seriously wrong with my card...
> 
> I ran another TSE benchmark completely stock to see how the scores were matching up: https://www.3dmark.com/spy/4876660
> 
> Has anyone RMA'd through Nvidia before? How long does the turn around usually take?
> 
> Edit: Heres a picture of my rig that got some praise and criticism. I guess its a blessing in disguise which gives me an excuse to tear-down the loop, order some more PETG, and fix up some flaws - https://www.reddit.com/r/nvidia/comments/9qy2sq/first_ever_custom_loop_feat_2080_ti/


Strange issue. If it's a hardware problem, it's an odd one. I would consider trying on a dummy OS or reinstalling your current OS before considering removing the card.


----------



## Esenel

th3orist said:


> Hey guys, so I have an FE 2080 Ti and all in all I am very pleased (except with the price), but the fans in idle mode are always sitting at 41% and I can't adjust it with any OC software.
> So do you specialists know a way to manually adjust them to go lower in idle mode? Does flashing another bios help with that?


Yes, flashing another bios lets you adjust the fan speed, I guess.

At least I see a different percentage in MSI Afterburner.


----------



## Spiriva

Currently on the Gigabyte OC 2080ti bios 360W reference pcb


----------



## th3orist

Esenel said:


> Yes, flashing another bios lets you adjust the fan speed, I guess.
> 
> At least I see a different percentage in MSI Afterburner.


Ok, so you flashed your FE 2080ti with a new bios (which one btw?) and now in afterburner in idle mode you can turn for example the fans completely off if you choose to do so? Can you please test? Pretty please?


----------



## kot0005

Watch it in 4k


----------



## Jpmboy

jcde7ago said:


> Just as a reference point for other folks in this thread - my experience with the 2080 Ti FE:
> 
> - Pre-ordered a 2080 Ti FE before the link was officially live at the earliest possible opportunity
> - Received my 2080 Ti on 10/5
> - 2080 Ti worked amazing for about 10 days; I started getting hard lockups requiring a full reboot and artifacting on 10/15
> - Opened up a case with Nvidia live chat on 10/18 (and RMA was approved) after I couldn't take it anymore as the lock ups were getting more frequent (started out being an hour or two apart, then within 30 minutes, then lock ups would occur with any game within 5 minutes).
> - On 10/24, I received an email saying my advanced RMA was shipped and will be delivered 10/25.
> - Nvidia messed up with the wrong zip code and my replacement was actually held by FedEx until I picked it up today, 10/29.
> 
> Replacement 2080 Ti installed a couple hours ago...so far, no issues playing Destiny 2 for an hour on highest settings, but I gotta be honest...Mr Nvidia, I don't feel so good (but seriously - i'm basically waiting for this second card to start failing in the next 7-10 days, as multiple people i've talked to have already had their replacement FEs fail on them as well).
> 
> My advice: stay way, way away from 2080 Ti FEs for the next month or two at minimum...AIB cards seem to be affected as well, but not nearly to the same degree as the failure rate for these FEs.
> 
> As an early adopter of most of the previous Titan/Ti high-end cards...i've never seen anything like this and, quite frankly, it's downright shameful that Nvidia continues to stay silent about an obvious manufacturing flaw with these cards.
> 
> EDIT:
> 
> 2080 Ti FE Card #1 Serial/batch: 0323818****** (RMA'd after 10 days)
> 2080 Ti FE Card #2 Serial/batch: 0324018****** (New replacement installed on 10/29)
> 
> I will update if my replacement card has any issues....(knocks on wood)


Hey bud, post back with the follow-up news... good or bad.


----------



## Rob w

th3orist said:


> Ok, so you flashed your FE 2080ti with a new bios (which one btw?) and now in afterburner in idle mode you can turn for example the fans completely off if you choose to do so? Can you please test? Pretty please?


Using Afterburner I can fully adjust my fans to the desired level, i.e. max at 50°C.
If you want to turn them off completely (don't know why you'd do that), then disable fan control in Afterburner options.


----------



## toncij

I've lost my XC Ultra in the postal system (they're still trying to find the card that was sent), but this or next week I'm getting a Palit and a PNY, and a week after that a GB Gaming OC and a Zotac. We'll see which one survives. No FEs, though. And I've cancelled my FTW3, since this year EVGA doesn't care enough about EU customers to post any kind of info even after nearly 2 full months of waiting, or actually a full month past the latest delivery date.


----------



## NewType88

Jpmboy said:


> This is what is great about the FTW series: my 1080 Ti FTW3 had, and my 1070 Ti FTW2 has, these I2C channels open so the on-board DTS can be read. The reported RAM failures may indeed be related to poor cooling of the RAM ICs... we'll certainly find out in time. Nearly all the failure reports are for NV FE cards, but that is likely a "sample bias"; AIB/3rd-party cards are just showing up in the past week or two. They all use the Micron ICs AFAIK.
> What RAM brand is shown in GPU-Z for your FTW3?




It's Micron. Isn't that the only supplier for these GPUs right now? Here is a pic of FurMark with all system fans and the GPU at 100% fan speed on a stock card. Super high core temps, and look at the difference in RAM temps. I'm probably going to RMA; the temps and noise are unacceptable.


On techyescity's channel review of the Strix 2080 Ti he got 57°C at 60% fan speed... those are the thermals I was expecting. A bit higher for an enclosed case, but WAY better than what I'm getting.


----------



## R1amddude

I also pre-ordered the 2080 Ti; when the website went live I was hitting F5. I've only OC'd the RAM once, at +600 for a Fire Strike score, and have kept it at stock the whole time. Been gaming on it consistently since I got it at a +60 core OC and no problems so far. Mostly play MechWarrior Online and Fallout 4. I had driver problems with it when I first got it, but those have been mostly fixed. I guess I'm one of the lucky ones?


----------



## Okt00

jcde7ago said:


> Just as a reference point for other folks in this thread - my experience with the 2080 Ti FE:
> 
> - Pre-ordered a 2080 Ti FE before the link was officially live at the earliest possible opportunity
> - Received my 2080 Ti on 10/5
> - 2080 Ti worked amazing for about 10 days; I started getting hard lockups requiring a full reboot and artifacting on 10/15
> - Opened up a case with Nvidia live chat on 10/18 (and RMA was approved) after I couldn't take it anymore as the lock ups were getting more frequent (started out being an hour or two apart, then within 30 minutes, then lock ups would occur with any game within 5 minutes).
> - On 10/24, I received an email saying my advanced RMA was shipped and will be delivered 10/25.
> - Nvidia messed up with the wrong zip code and my replacement was actually held by FedEx until I picked it up today, 10/29.
> 
> Replacement 2080 Ti installed a couple hours ago...so far, no issues playing Destiny 2 for an hour on highest settings, but I gotta be honest...Mr Nvidia, I don't feel so good (but seriously - i'm basically waiting for this second card to start failing in the next 7-10 days, as multiple people i've talked to have already had their replacement FEs fail on them as well).
> 
> My advice: stay way, way away from 2080 Ti FEs for the next month or two at minimum...AIB cards seem to be affected as well, but not nearly to the same degree as the failure rate for these FEs.
> 
> As an early adopter of most of the previous Titan/Ti high-end cards...i've never seen anything like this and, quite frankly, it's downright shameful that Nvidia continues to stay silent about an obvious manufacturing flaw with these cards.
> 
> EDIT:
> 
> 2080 Ti FE Card #1 Serial/batch: 0323818****** (RMA'd after 10 days)
> 2080 Ti FE Card #2 Serial/batch: 0324018****** (New replacement installed on 10/29)
> 
> I will update if my replacement card has any issues....(knocks on wood)





Emmett said:


> I am in the process of an RMA with 0323818 also.
> 
> Same as you. Worked fine for a while. The backplate on my card was on 🔥 even with fans blowing right on it.



My serial for the FE going back is the same as you two: 0323818.
0324018 is the new one. 

I do recall that when I removed the 'dead' card I had to take off the panels and get a screwdriver... take off a few plates; point is, I took my time. It's not like I had to put the card down _quick_ because the backplate was *hot!*... still.


----------



## GosuPl

Emmett said:


> I am in the process of an RMA with 0323818 also.
> 
> Same as you. Worked fine for a while. The backplate on my card was on 🔥 even with fans blowing right on it.


I have info which seems to confirm, for now, that defective batches start at 0332xxx; those that have been working long and healthy so far are 0333xxx.

I'll also toss this in: check what you have on the boxes. My cards are also from 0332xxx, and a severe fate met them ;-)


----------



## Pepillo

GosuPl said:


> I have info which seems to confirm, for now, that defective batches start at 0332xxx; those that have been working long and healthy so far are 0333xxx.
> 
> I'll also toss this in: check what you have on the boxes. My cards are also from 0332xxx, and a severe fate met them ;-)


332 or 323?


----------



## Nicklas0912

Anyone know how to see your batch number?

So far, my card is running fine, but it's only 6 days old.


----------



## Esenel

th3orist said:


> Esenel said:
> 
> 
> 
> Yes, flashing another bios lets you adjust the fan speed, I guess.
> 
> At least I see a different percentage in MSI Afterburner.
> 
> 
> 
> Ok, so you flashed your FE 2080ti with a new bios (which one btw?) and now in afterburner in idle mode you can turn for example the fans completely off if you choose to do so? Can you please test? Pretty please?

The 380W bios.
But I am already on waterblock. Sorry.


----------



## Jpmboy

NewType88 said:


> It's Micron. Isn't that the only supplier for these GPUs right now? Here is a pic of FurMark with all system fans and the GPU at 100% fan speed on a stock card. Super high core temps, and look at the difference in RAM temps. I'm probably going to RMA; the temps and noise are unacceptable.
> 
> 
> On techyescity's channel review of the Strix 2080 Ti he got 57°C at 60% fan speed... those are the thermals I was expecting. A bit higher for an enclosed case, but WAY better than what I'm getting.
> https://www.youtube.com/watch?v=yPh3-_GQiAM


Thanks. Yeah, Micron it is. IDK, could be that it just needs a better TIM job? You know, sometimes these things arrive looking either like someone put the TIM down with a cake frosting bag... or like it was done with a microliter syringe. Could simply be poor contact with the pads too. Or... send it back. :thumb:


Esenel said:


> The 380W bios.
> But I am already on waterblock. Sorry.


^^ Same here, though I've not seen more than 350W used so far.




323918 serial here.


----------



## Esenel

Jpmboy said:


> Esenel said:
> 
> 
> 
> The 380W bios.
> But I am already on waterblock. Sorry.
> 
> 
> 
> ^^ Same here, though I've not seen more than 350W used so far.

Yes, the same here.
As I only have WQHD, it is not stressed so much in my opinion.
Most of the time it's around 100%, which would be 300W.
Assassin's Creed Origins even less.
Just some benches push it.
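To make the percentages and wattages in this exchange concrete: OC tools express the power limit as a percentage of the bios's default board power, so if 100% corresponds to 300W (as reported above), a 380W cap sits at roughly a 127% slider. A quick sketch of the conversion; the 300W/380W figures come from this thread, not from an official spec sheet:

```python
def slider_to_watts(default_w: float, percent: float) -> float:
    """Power-limit slider percentage -> watts, relative to default board power."""
    return default_w * percent / 100.0

def watts_to_slider(default_w: float, watts: float) -> float:
    """Observed wattage -> equivalent slider percentage."""
    return watts / default_w * 100.0

print(slider_to_watts(300, 100))            # 300.0 (the default limit)
print(round(watts_to_slider(300, 380), 1))  # 126.7 (the 380W bios cap)
print(round(watts_to_slider(300, 360), 1))  # 120.0 (a 360W bench reading)
```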


----------



## Spiriva

Nicklas0912 said:


> Anyone know how to see your batch number?
> 
> So far, my card is running fine, but it's only 6 days old.


It's on the box; check the serial number.


----------



## Vipeax

NewType88 said:


> On techyescity's channel review of the Strix 2080 Ti he got 57°C at 60% fan speed... those are the thermals I was expecting. A bit higher for an enclosed case, but WAY better than what I'm getting.


No clue who that is, but:








https://www.techpowerup.com/reviews/ASUS/GeForce_RTX_2080_Ti_Strix_OC/37.html


----------



## Pepillo

Jpmboy said:


> 323918 serial here.



That's my FE serial too, I hope it brings us good luck.


----------



## Nicklas0912

Esenel said:


> Yes, the same here.
> As I only have WQHD, it is not stressed so much in my opinion.
> Most of the time it's around 100%, which would be 300W.
> Assassin's Creed Origins even less.
> Just some benches push it.


I get 360W+ in Time Spy / Time Spy Extreme with the 380W bios.


----------



## xermalk

My Gigabyte 2080 Ti isn't stable in Fire Strike Extreme, but it can loop Time Spy and Valley all day long.
It gets a score of 97.9% if I run the Time Spy Extreme stress test.

Do I need to worry?

It's holding a rock-steady 2040MHz in Heaven at 1.075V / 71°C
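For context on that 97.9% figure: the 3DMark stress tests loop the same scene and report Frame Rate Stability as the worst loop's average FPS divided by the best loop's, with 97% as the pass threshold. A small sketch of the metric; the loop numbers below are invented, not from anyone's actual run:

```python
def frame_rate_stability(loop_fps: list) -> float:
    """3DMark-style stability: worst loop as a percentage of the best loop."""
    return min(loop_fps) / max(loop_fps) * 100.0

# Invented per-loop average FPS values for illustration:
loops = [59.1, 58.8, 58.9, 57.9, 58.5]
stability = frame_rate_stability(loops)
print(round(stability, 1))  # 98.0 -> would pass the 97% threshold
print(stability >= 97.0)    # True
```

So a 97.9% result is a marginal pass: the clocks are sagging a little run-to-run, which fits the "stable in one test, not another" symptom.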


----------



## Vipeax

xermalk said:


> My Gigabyte 2080 Ti isn't stable in Fire Strike Extreme, but it can loop Time Spy and Valley all day long.
> It gets a score of 97.9% if I run the Time Spy Extreme stress test.
> 
> Do I need to worry?
> 
> It's holding a rock-steady 2040MHz in Heaven at 1.075V / 71°C


Sounds like it isn't stable, right...?


----------



## HuckleberryFinn

Add me to the list for having to RMA.

ZOTAC AMP 2080 Ti, purchased at launch. Worked great for over a month, then yesterday, out of nowhere, Windows went black screen and rebooted. 

Now I get error code 43 in Device Manager and I can't boot stably with drivers installed. The screen goes black, or dark brown (it even went PINK once), before rebooting. The PSU is a Seasonic PRIME Titanium 850W, so that shouldn't be the issue. 

The desktop works just fine on Intel integrated graphics, so I know the Zotac is kaput. Tried cleaning the drivers, reinstalling, a fresh Windows install, older drivers... nothing works. So... it's RMA time with Zotac. 

Anyone with my same card have the same issue? How is their RMA turnaround? This is so frustrating. Now even if my replacement card seems OK, I won't really know until I have used it for a few months. Thank God I didn't put a block on it!

S/N: N183700******


----------



## Diverge

My FE has been running fine since I got it on October 5th. I even mined with it for a few days straight to stress it... granted, it was a core-heavy algorithm, so not much VRAM usage. Not sure of the batch #; I'll have to look when I get home from work.


----------



## Zurv

My FE cards in SLI are still working fine. But I'm waterblocked and (pointlessly) shunt modded. 
032411
032391
(shipped from Nvidia 10/18)

On the plus side, adding another pump to my loop dropped the temps a bunch.
I was getting to mid-high 50s when looping Time Spy 4K and now I'm in the mid 40s.

Going from 2 LPM to 6 LPM FTW. (Serial vs parallel didn't make a difference to the temps. Both cards are the same.) 
(I did screw myself a bit when redoing the CPU block. I replaced the LM with TGK and that added 10C... blah... (this is a 7980XE running @ 5.8GHz))


----------



## HuckleberryFinn

I must reiterate: I was using this card every day for over a month and it was fantastic, ran like a champ. No artifacting, above-average Time Spy score, fairly quiet under load. I was so happy after two weeks that I even sold my 1080 Ti. 

This crap just came out of nowhere. Very unsettling, and I would say anyone planning on mods or aftermarket cooling: BEWARE. Your card may be defective and just hasn't gone bad yet. I did overclock modestly, but didn't touch the BIOS or voltage. It just died. 

First video card I have ever owned that just up and stopped working. Of course it happens to be the $1,200 flagship Nvidia brags about. Well, so much for "it just works", Jensen.


----------



## Garrett1974NL

HuckleberryFinn said:


> I must re-iterate I was using this card every day for over a month and it was fantastic, ran like a champ. No artifacting, above-average TimeSpy score, fairly quiet under load, I was very happy after two weeks and even sold my 1080 Ti.
> 
> This crap just came out of nowhere. Very unsettling and I would say anyone planning on mods or aftermarket cooling BEWARE. I did overclock modestly, but didn't touch the BIOS or voltage. It just died.
> 
> First video card I have ever owned that just up and stopped working. OFC it happens to be the $1200 flagship nVidia brags so much about. Well so much for "it just works", Jensen.


Aircooled?


----------



## HuckleberryFinn

Garrett1974NL said:


> Aircooled?


Yep, ZOTAC AMP, pre-order. Thought about putting it on water and I'm very happy I didn't, as it should make the RMA process easier.


----------



## rush2049

I have serial number starting: 032361

I have an open RMA request with Nvidia but haven't heard anything back yet.

I get hard crashes in games after 1-2 hours, sometimes during loading of levels/maps. I find that the more a game streams objects/textures, the more likely it is to crash.
It is completely at stock now. The overclocks I was getting earlier with the OC Scanner tool are no longer stable, and the tool itself now fails and gives very inconsistent results. Before, it was very reliable.

Also, in Chrome I get freezes and pauses in anything that uses hardware video decoding... really annoying.

And then I randomly get a BSOD during anything: idle, games, Chrome, anything.


Really want Nvidia to get back to me with one of those advanced RMAs.


----------



## Esenel

Living FE
S/N: 0323818

On Water => Better temps => Living FE


----------



## Diverge

HuckleberryFinn said:


> I must re-iterate I was using this card every day for over a month and it was fantastic, ran like a champ. No artifacting, above-average TimeSpy score, fairly quiet under load, I was very happy after two weeks and even sold my 1080 Ti.
> 
> This crap just came out of nowhere. Very unsettling and I would say anyone planning on mods or aftermarket cooling BEWARE. Your card may be defective and it hasn't gone bad yet. I did overclock modestly, but didn't touch the BIOS or voltage. It just died.
> 
> First video card I have ever owned that just up and stopped working. OFC it happens to be the $1200 flagship nVidia brags about. Well so much for "it just works", Jensen.


This is concerning, and I wouldn't be surprised if they all could just die at any moment. Makes me want to return mine for a refund, put back in the Titan X Pascal that I didn't get around to selling yet, and skip this gen.


----------



## Jbravo33

Zurv said:


> My cards FE in SLI are still working fine. But i'm waterlocked and (pointlessly) shunt mod'd.
> 032411
> 032391
> (shipped from NVidia 10/18)
> 
> on the plus side, adding another pump to my loop dropped the temps a bunch.
> I was getting to mid-high 50s when looping timespy 4k and now i'm in the mid 40s.
> 
> going from 2LPM to 6LPM FTW. (serial vs parallel didn't make a diff to the temps. Both cards are the same.)
> (I did screw myself a bit when redoing the CPU block. I replaced the LM to TGK and that added 10C.. blah... (this is a 7980xe running @ 5.8ghz))



Shunts worked for you? I applied liquid metal and all it did was throw the card in safe mode. Stays at 300 MHz


----------



## Garrett1974NL

So apparently the memory ICs are dying... is it because of overheating?
Hot memory causes artifacting, if I'm not mistaken.
I have the GPU under water but not the memory; there's a big plate of metal on the memory and VRMs, and because my GPU doesn't get above 45 or so at load, less heat from that area is 'distributed' around it to the memory ICs... I think.
If I'm wrong, feel free to correct me.


----------



## Zurv

Jbravo33 said:


> Shunts worked for you? I applied liquid metal and all it did was throw the card in safe mode. Stays at 300 MHz


They "work": like when I did this on the Titan V, more power draw from the wall with zero impact on performance. Like what Jpmboy has been saying: if you want to run these cards faster, it isn't power/volts but temp.
Sadly I didn't test the cards before shunting them. They came in, I popped the coolers off, shunted with LM and put a waterblock on.
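For anyone wondering what the shunt mod actually does to the power readout, here's a minimal sketch of the arithmetic. The 5 mOhm shunt value and 12 V rail are illustrative assumptions, not measured from a 2080 Ti:

```python
# The power controller infers current from the voltage drop across a shunt
# resistor of known value (I = V_drop / R). Paralleling an identical
# resistance (stacked shunt or a liquid-metal blob) halves the real
# resistance, so for the same actual current the measured drop halves and
# the firmware under-reports power by 2x, effectively raising the limit.

R_ASSUMED_MOHM = 5.0                 # value the firmware believes is fitted (assumed)
R_ACTUAL_MOHM = R_ASSUMED_MOHM / 2   # after paralleling an identical shunt

def reported_watts(actual_current_a: float, rail_volts: float = 12.0) -> float:
    v_drop_mv = actual_current_a * R_ACTUAL_MOHM   # real drop across the pair
    sensed_current_a = v_drop_mv / R_ASSUMED_MOHM  # firmware still divides by 5 mOhm
    return sensed_current_a * rail_volts

# Card really pulling 25 A on a 12 V rail (300 W); the controller sees half:
print(reported_watts(25.0))   # 150.0
```

Which is consistent with Zurv's observation: more draw at the wall, but no extra performance if the card was never power-limited in the first place.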


----------



## Jpmboy

Garrett1974NL said:


> So apparently the memory IC's are dying... is it because of overheating?
> Hot memory causes artifacting if I'm not mistaking.
> I have the GPU under water, not the memory, there's a big plate of metal on the memory and VRM's, and because my GPU doesn't get above 45 or so at load, less heat from that area is 'distributed' around it, to the memory IC's... I think.
> If I'm wrong feel free to correct me


 The heat spreader helps when using a uniblock. If you happen to have an IR temp gun you might want to check the temp of the heat spreader. You can also use one of those flat temp sensors slipped under the plate and hooked to a motherboard sensor header, if the board has them.
I use unis all the time and scan the card for its hotspots, tho not this time. 
(Can get a bit "loopy" with multiple cards.)


----------



## Garrett1974NL

That's what I use to measure hotspots on the card  @Jpmboy, what card is that in the top photo?
No heatsinks anywhere? Only a GPU block? (got the same block...performs excellent with liquid metal)


----------



## Jpmboy

Garrett1974NL said:


> That's what I use to measure hotspots on the card
> @*Jpmboy* , what card is that in the top photo?
> No heatsinks anywhere? Only a GPU block? (got the same block...performs excellent with liquid metal)


 I think that is a Titan Xp. That was just initial testing with a delta fan _screaming_ at the PCB... paired with another Xp, it is in this CaseLabs build below, on 24/7 folding and archiving recordings from 3 security cams. (2 very good cards btw  )
I mean, I use the unis only temporarily, but many folks use them in a final installation as you have (I think).


so what temps are you seeing with the IR gun?


----------



## Garrett1974NL

Nothing above 55, more like 50-ish... (the cooling plate)
This is how it looks... like crap now LOL... but it works, I should really get rid of the dust but oh well...
That blue LED fan is like ~1000rpm, that's it. Almost inaudible


----------



## GAN77

Gigabyte AORUS GeForce RTX 2080 Xtreme, with Samsung memory.


----------



## skingun

UK owner.

2 cards here. FE. Both 0323818. No issues so far.

Cards are watercooled with Bitspower blocks. Using stock backplates. Backplate gets hot, but not so hot I cannot hold my hand on it.

Occasionally get rendering device failed when playing Overwatch. No crashes in any other games so pointing my finger at Blizzard on this one.


----------



## Jpmboy

Garrett1974NL said:


> Nothing above 55, more like 50-ish... (the cooling plate)
> This is how it looks... like crap now LOL... but it works, I should really get rid of the dust but oh well...
> That blue LED fan is like ~1000rpm, that's it. Almost inaudible


ah... another ratrod fan! :thumb:


----------



## Murlocke

2080Ti was just in stock at EVGA. Went to order, and bank declined my card because "EVGA.com is detected as a fraud website". Had to spend 10 minutes getting it authorized on the phone and it sold out.

I'm quite pissed.


----------



## toncij

Jpmboy said:


> I think that is a titan Xp. That was just initial testing with a delta fan _screaming _at the PCB... paired with another Xp it is in this caselabs build below - on 24/7 folding and archiving 3 security cams vids/recordings. (2 very good cards btw  )
> I mean, I use the unis only temporarily, but many folks use them in a final instalation as you have (i think).
> 
> so what temps are you seeing with the IR gun?


Nice set of machines. Do you employ people to do maintenance?





GAN77 said:


> AORUS GeForce RTX 2080 Gigabyte Xtreme with memory samsung/


Who else, aside from GB Aorus is using Samsung memory? I presume all ref. boards are Hynix (their GDDR6 is 14Gbps limited, Micron goes up to 16 and Samsung is up to 18)?
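The speed grades mentioned map straight onto bandwidth: GDDR6 bandwidth is just the per-pin data rate times the bus width. A quick sketch against the 2080 Ti's 352-bit bus (the 16 and 18 Gbps rows are hypothetical fits, not shipping 2080 Ti configurations):

```python
# GDDR6 bandwidth arithmetic:
#   bandwidth (GB/s) = per-pin data rate (Gbps) * bus width (bits) / 8

def bandwidth_gbps(data_rate_gbps: float, bus_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_bits / 8

# RTX 2080 Ti: 14 Gbps on a 352-bit bus -> the spec-sheet 616 GB/s
print(bandwidth_gbps(14, 352))   # 616.0

# Same bus with the faster grades (hypothetical):
print(bandwidth_gbps(16, 352))   # 704.0
print(bandwidth_gbps(18, 352))   # 792.0
```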


----------



## Diverge

rush2049 said:


> I have serial number starting: 032361
> 
> I have an open RMA request with Nvidia but haven't heard anything back yet.
> 
> I get hard crashes in games after 1-2 hours. Sometimes during loading of levels/maps. I find that the more the game does streaming of object/textures the more likely to crash.
> It is completely at stock now. The overclocks I was getting earlier using the OC Scanner tool are no longer stable, and that OC Scanner tool fails and gives very inconsistent results now. Before it was very reliable.
> 
> Also using chrome I get freezes and pauses of anything that uses hardware decoding of video...... really annoying.
> 
> And then randomly I get a BSOD during anything, idle, games, chrome, anything.
> 
> 
> Really want nvidia to get back to me with one of those advanced RMAs


I have an 032361 as well, with no issues so far (since October 5th).


----------



## Jpmboy

toncij said:


> Who else, aside from GB Aorus is using Samsung memory? I presume all ref. boards are Hynix (their GDDR6 is 14Gbps limited, Micron goes up to 16 and Samsung is up to 18)?


that's a 2080, not a 2080Ti. :thumb:


----------



## toncij

Jpmboy said:


> that's a 2080, not a 2080Ti. :thumb:


That could explain why we see no issues with 2080s...


----------



## Jpmboy

Could be. Note: the RTX 6000 launched today and is sold out already!


----------



## toncij

Jpmboy said:


> could be. NOte, the RTX6000 launched today and is sold out already!


A bit pricey for me, even for a workstation. The Titan V was the highest I'd go (3k). The RTX 5000, which seems affordable, is a bit on the weak side.

BUT - an RTX Titan could be close now!


----------



## Zurv

toncij said:


> BUT - an RTX Titan could be close now!


oh you better shut your mouth!


… ok nvidia.. where is the link to pre-order...


----------



## Barefooter

Murlocke said:


> 2080Ti was just in stock at EVGA. Went to order, and bank declined my card because "EVGA.com is detected as a fraud website". Had to spend 10 minutes getting it authorized on the phone and it sold out.
> 
> I'm quite pissed.


Yeah, I got an alert too. I went on the EVGA site immediately and placed the order, then got an error message, so I called my credit card company and they said that EVGA is just reversing the charges immediately every time it goes through (I tried like four times).

Then I called EVGA to find out what the problem was, and the guy I talked to said their payment system automatically declines orders that have a different shipping address than billing address. He says you have to pay with PayPal if you want to ship to a different address.

I'm like, seriously? That isn't stated anyplace, and why even have the option to ship to another address then? Of course they were all gone by then.

So I'm quite pissed too.


----------



## Vipeax

GAN77 said:


> AORUS GeForce RTX 2080 Gigabyte Xtreme with memory samsung/


I demand a memory overclock. If these are indeed their 16Gbit GDDR6 chips you should be able to go way past 8000 MHz.

And no, RTX 2080 cards also use Micron. This is the first time I'm seeing Samsung chips on any of them.


----------



## TahoeDust

Got my EVGA FTW3 today. Really nice card...it's huge. Pretty fast too....

Just a quick & dirty +70/+1000 OC

https://www.3dmark.com/spy/4892223


----------



## Jpmboy

toncij said:


> A bit pricey for me, even for a workstation.  TV was the highest I'd go (3k). RTX5000, which seems affordable, is a bit on the weak side.
> 
> BUT - an RTX Titan could be close now!



sometimes Nvidia be like:


----------



## Zammin

Murlocke said:


> 2080Ti was just in stock at EVGA. Went to order, and bank declined my card because "EVGA.com is detected as a fraud website". Had to spend 10 minutes getting it authorized on the phone and it sold out.
> 
> I'm quite pissed.





Barefooter said:


> Yeah I got an alert too. I went on EVGA site immediately, placed the order then got an error message, so I called my credit card company and they said that EVGA is just reversing the charges immediately ever time it goes through (I tried like four times).
> 
> Then I call EVGA to find out what the problem is and the guy I talked to said their payment system automatically declines orders that have a different shipping address than billing address. He says you have to pay with PayPal if you want to ship it to a different address.
> 
> I'm like seriously, that isn't stated anyplace and why even have the option to ship to another address then. Of course they were all gone by then.
> 
> So I'm quite pissed too


I got the notification this morning as well and checked the site within 15-20 minutes of receiving the email, and it said out of stock. Not sure if they sold out in 15 minutes again or if it shows "out of stock" because I'm viewing from Australia. I would have thought that if EVGA had stock they would send it to the places that have had pre-orders for months, rather than hand it out to anyone. I've been sitting on my pre-order since mid-September, and I think this is the second time EVGA has had stock in that time, yet almost none of it seems to make it here.


----------



## Glerox

Ok, a little interesting update: my 2080 Ti FE is serial number 0323818xxxxxx, and some users have been experiencing problems with this batch.
Mine is still unopened and within the 30-day refund window.
I wanted to know whether the artefacts are related to serial numbers, and whether I would have to pay shipping for an RMA if I try the GPU and find it's broken.

Here is a little chat with Nvidia. On a scale of 1 to 10, how much can I trust Rahul? lol


----------



## Zammin

Well, Nvidia AU finally got stock of the FE cards, so I just bought one of those instead. I have no idea when PLE will ever get my EVGA card, so this might be a better option; $155 cheaper as well. I guess I just have to hope this one isn't a bad FE, and that they've fixed the issues for this batch.

Should I run it air-cooled for a while before blocking it, to make sure it doesn't die? Or is heat the thing that kills them?


----------



## jcde7ago

Glerox said:


> Ok, little interesting update. My 2080TI FE is serial number 0323818xxxxxx and some users have been experiencing problems with this batch.
> Mine is still unopened and within 30 days for refund.
> I wanted to know if the artefacts are related to serial numbers and if I would have to pay shipping for RMA in case I try the GPU and find it's broken.
> 
> Here is a little chat with Nvidia. On a scale of 1 to 10, how much can I trust Rahul? lol


I've been following this 2080 Ti FE fiasco rather closely since my first card was RMA'd a couple weeks ago, and I'll say this: if, and I mean IF, your card is defective, it will exhibit issues (any combination of black screen on boot, BSOD, driver errors, hard lockups with blaring sound, artifacting) within the first 10-14 days, pretty much no matter what you do or how well you cool the card (OC/no-OC/underclock, overvolt/undervolt, etc.; it doesn't matter). If it is defective, it is simply a matter of time, and it doesn't take long at that. This has been a common theme in the vast majority of posts I've seen across multiple forums and Reddit.

As seemingly common as the issues are with these FEs and that particular batch, however, we still have such a small sample size, and not enough time has passed, to really come to a good conclusion... but relative to other launches in recent memory, none have been as problematic as these RTXs.

IMHO, I'd open your card up and use it, because if it fails, it will fail within your refund window with plenty of time to spare; that much is near-certain at this point. If it somehow lasts a full month with absolutely zero issues... well, I'd say you won the lottery, as statistically not every card will have an issue, even in the worst case where most of them happen to be problematic.
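That "if it survives the window, it's probably fine" reasoning can be sketched as a simple Bayesian update. All the numbers here are illustrative assumptions, not measured failure rates:

```python
# Hedged sketch: if defective cards almost always show symptoms within the
# refund window, then a clean run through that window is strong evidence the
# card is good. Bayes: P(good | survived) = P(good) / P(survived).

def p_good_given_survival(p_defective: float,
                          p_defect_shows_in_window: float = 0.95) -> float:
    """Probability the card is good, given no failure during the window."""
    p_good = 1 - p_defective
    # Survivors are all good cards, plus defective ones whose defect hid:
    p_survive = p_good + p_defective * (1 - p_defect_shows_in_window)
    return p_good / p_survive

# Even assuming a pessimistic 20% defect rate, a clean window is reassuring:
print(p_good_given_survival(0.20))   # ~0.988
```

The exact numbers don't matter much; the point is that the posterior climbs quickly once you assume defects reveal themselves early, which is what the field reports so far suggest.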


----------



## Emmett

Glerox said:


> Ok, little interesting update. My 2080TI FE is serial number 0323818xxxxxx and some users have been experiencing problems with this batch.
> Mine is still unopened and within 30 days for refund.
> I wanted to know if the artefacts are related to serial numbers and if I would have to pay shipping for RMA in case I try the GPU and find it's broken.
> 
> Here is a little chat with Nvidia. On a scale of 1 to 10, how much can I trust Rahul? lol


If it's opened and ends up bad, they will pay shipping. But for an unopened return, you pay. In the US 
at least...

This whole thing reminds me of years back when Elpida-chipped RAM sticks were just
quickly dying at stock speeds and volts.


----------



## GraphicsWhore

Damn, sorry to hear about you guys with the FE issues - that's annoying waiting for the card then having to RMA.

On somewhat related good news: I sold my EVGA 1080Ti FE with block for $625. Makes the $1200 I spent on the 2080Ti easier to swallow.


----------



## Jpmboy

Vipeax said:


> I need someone who is running the GALAX BIOS to share a GPU-Z screenshot of the first tab please.


Did you get a reply? Running for a few days now. (Attached; the card was folding when I grabbed the snip.)


----------



## Glerox

jcde7ago said:


> IMHO, i'd open your card up and use it, because if it fails, it will fail within your refund window with plenty of time to spare, that much is near-certain at this point. If it somehow lasts a full month with absolutely zero issues...well, i'd say you won the lottery, as statistically, not every card will have an issue even if it's the worst case and most of them happen to be problematic.


My 30-day window is almost over... I have to decide now! 
I think I'll give it a try...
I won't be able to get a refund, but at least I won't pay the shipping for repair/RMA, and it's under warranty for 3 years.
Sadly I won't flash the BIOS and risk voiding the warranty... no 380W for me.


----------



## xer0h0ur

Well, FWIW my FE card is from batch 0323618******.

It only lived as an air-cooled card for a couple of hours at most; I slapped an EK block and backplate on it immediately afterwards. The card has performed marvelously thus far.


----------



## dVeLoPe

dVeLoPe said:


> EVGA GeForce RTX 2080 Ti XC BLACK EDITION GAMING, 11G-P4-2282-KR
> 
> Does anyone own this card? Can the BiOS be flashed? What waterblock will fit?


anyone?


----------



## Zammin

dVeLoPe said:


> anyone?


EVGA themselves said the 2080Ti XC Black can be flashed to the 338W BIOS they posted on their forum. Any waterblock for a reference card will fit.


----------



## Glerox

xer0h0ur said:


> Well FWIW my FE card is from batch 0323618******
> 
> It only lived as an air cooled card for a matter of a couple of hours at most. Slapped an EKWB and backplate on it immediately afterwards. Card has performed marvelously thus far.


Thanks for sharing.
As the guy from Nvidia said, it might not be related to serial numbers for now. 
(Of course the upcoming ones will have the issue resolved).


----------



## TahoeDust

FTW3 pics in case anyone is interested in the card...


----------



## dadunn1700

Nvidia 2080ti FE w/ EK Velocity Waterblock (Acetal/Copper)
i7 5960X Processor 4.4ghz @ 1.23v
32GB DDR4 RAM 
500GB Samsung 970 EVO NVME M.2
500GB Samsung 970 SSD
230GB SSD
1TB Seagate Hybrid Drive
2TB WD Drive 7200RPM
2x Koolance 360 Rads
Koolance CPU Waterblock
Koolance Bay Reservoir w/pump

PC was originally purchased from Origin PC years ago, when the 5960X was the top-of-the-line Intel CPU. It was my first watercooled system and got me into watercooling. I've since added a 2nd 360 rad and a drain line for easy maintenance, reapplied thermal paste, and re-plumbed for aesthetics. The 2080 Ti was my first time buying and installing a GPU waterblock; prior to this, the CPU was the only thing I had watercooled. 

Temps are amazing with the waterblock. It has yet to go above 55C during stress testing (20C/69F room temp). Average temps are in the 40s and idle temps only 5C above room temperature. Probably won't flash the BIOS for more voltage; I'm happy with the performance and low noise. I can pretty much leave the fans on low for my CPU and GPU, given my cooling solution and voltage levels. Still working on the overclock for the GPU. Right now, in EVGA Precision X1, it's at +120 MHz on the core and +600 on the VRAM (May 5, 2019: settled on a stable +150/+800 overclock). No issues as of yet. (Card still going strong as of May 5, 2019.)

PS. For anyone who bought a EK waterblock for the 2080 or 2080ti, EK has amended their instructions to include the use of a 3rd thermal pad covering the coils. Just letting everyone know. If you contact EK they will mail you the thermal pad free of charge since it is not included with blocks they produced before the instruction manual was amended. I know from experience.


----------



## CallsignVega

Damn my backup plan 2080 Ti Zotac AMP shipped a week early from Amazon. I guess I won't be getting a FTW3. Doesn't matter much though since I'm water cooling it and the FTW3 water blocks are completely MIA anyway.


----------



## Nico67

jcde7ago said:


> Just as a reference point for other folks in this thread - my experience with the 2080 Ti FE:
> 
> - Pre-ordered a 2080 Ti FE before the link was officially live at the earliest possible opportunity
> - Received my 2080 Ti on 10/5
> - 2080 Ti worked amazing for about 10 days; I started getting hard lockups requiring a full reboot and artifacting on 10/15
> - Opened up a case with Nvidia live chat on 10/18 (and RMA was approved) after I couldn't take it anymore as the lock ups were getting more frequent (started out being an hour or two apart, then within 30 minutes, then lock ups would occur with any game within 5 minutes).
> - On 10/24, I received an email saying my advanced RMA was shipped and will be delivered 10/25.
> - Nvidia messed up with the wrong zip code and my replacement was actually held by FedEx until I picked it up today, 10/29.
> 
> Replacement 2080 Ti installed a couple hours ago...so far, no issues playing Destiny 2 for an hour on highest settings, but I gotta be honest...Mr Nvidia, I don't feel so good (but seriously - i'm basically waiting for this second card to start failing in the next 7-10 days, as multiple people i've talked to have already had their replacement FEs fail on them as well).
> 
> My advice: stay way, way away from 2080 Ti FEs for the next month or two at minimum...AIB cards seem to be affected as well, but not nearly to the same degree as the failure rate for these FEs.
> 
> As an early adopter of most of the previous Titan/Ti high-end cards...i've never seen anything like this and, quite frankly, it's downright shameful that Nvidia continues to stay silent about an obvious manufacturing flaw with these cards.
> 
> EDIT:
> 
> 2080 Ti FE Card #1 Serial/batch: 0323818****** (RMA'd after 10 days)
> 2080 Ti FE Card #2 Serial/batch: 0324018****** (New replacement installed on 10/29)
> 
> I will update if my replacement card has any issues....(knocks on wood)


Sorry to here your card dying, any OC or flashed bios?


----------



## stefxyz

dadunn1700 said:


> PS. For anyone who bought a EK waterblock for the 2080 or 2080ti, EK has amended their instructions to include the use of a 3rd thermal pad covering the coils. Just letting everyone know. If you contact EK they will mail you the thermal pad free of charge since it is not included with blocks they produced before the instruction manual was amended. I know from experience.


***. Luckily I have some more pads around; sending them to CH meant customs opened the letter: 30 bucks... 

Now this evening I will drain the loop, rinse, dismantle and reapply again... Thx EK...

To all these OC or BIOS questions: this has absolutely nothing to do with overclocking. That 100-year-old myth is still around; only a shunt mod could kill a GPU. This is a manufacturing/component error, and the only thing OC might do is bring the faulty parts to the surface earlier...


----------



## Esenel

dadunn1700 said:


> PS. For anyone who bought a EK waterblock for the 2080 or 2080ti, EK has amended their instructions to include the use of a 3rd thermal pad covering the coils. Just letting everyone know. If you contact EK they will mail you the thermal pad free of charge since it is not included with blocks they produced before the instruction manual was amended. I know from experience.


True. Updated manual.
Oh man. I do not want to flush the loop again -.-

I will test with a probe this evening what the temps are and then decide.

Thanks for the info.


----------



## Fitzcaraldo

Please keep in mind that while it seems like certain batches (which we infer from serial numbers) are affected, it is actually the other way around: the sample size is so small that, so far, most reports come in on the early batches, and we do not know whether other batches are also affected. Therefore there is no use in getting nervous about your card being in a certain serial number range. It is NOT an indicator that your card is faulty/in the clear.


----------



## Zammin

Esenel said:


> True. Updated manual.
> Oh man. I do not want to flush the loop again -.-
> 
> I will test with a probe this evening what the temps are and then decide.
> 
> Thanks for the info.


TBH it's probably not worth the effort; even on the 10 series, IIRC, many blocks didn't cool the chokes, or made it optional. It was optional on my Phanteks block for my Strix 1080 Ti. I never had an EK 10-series block, but I seem to recall watching installation videos where there were no thermal pads on the chokes.

We are talking about the chokes right?


----------



## Zammin

Fitzcaraldo said:


> Please keep in mind, that while it seems like certain batches (which we base on S/N numbers) seem to be affected, it is actually the other way around: Sample size is so small that so far, the most reports come in on the early batches and we do not know if other batches are not also affected. Therefore there is no use in getting nervous about being in a certain range of S/N number with your specific card. It is NOT an indicator that your card is faulty/is in the clear.


I was watching Awesome Hardware today (livestream with Paul's HW and Bitwit) and Paul mentioned he had been talking to his Nvidia rep about the card failures, he basically said that Nvidia are just replacing/repairing the affected cards and at this point do not believe this to be a large scale issue. If that's true then I would assume the number of affected cards is not that big in comparison to the number of cards sold, but I guess we'll see what happens over the next few months. I just hope the FE I bought today turns out good. :/


----------



## jcde7ago

Nico67 said:


> Sorry to here your card dying, any OC or flashed bios?


Nope, stock. Even underclocking memory and core on the card that was RMA'd made no difference. 

There is a common defect with all of the cards that are failing, we just don't have enough data to know what (most people are just assuming it's VRAM related because of the artifacting/lockup symptoms)...even if it ends up being a bad 'batch' of a certain component making it into the manufacturing line, statistics dictate that not all cards will have issues, even if a 'good' and 'defective' card are 1 serial number apart (unless it is an inherent design flaw that affects 100% of all 2080 Tis manufactured, which is extremely unlikely).



Fitzcaraldo said:


> Please keep in mind, that while it seems like certain batches (which we base on S/N numbers) seem to be affected, it is actually the other way around: Sample size is so small that so far, the most reports come in on the early batches and we do not know if other batches are not also affected. Therefore there is no use in getting nervous about being in a certain range of S/N number with your specific card. It is NOT an indicator that your card is faulty/is in the clear.


Eh, people are just providing serial numbers as data points...correlation doesn't always mean causation (see my comment above), but the more info we have, the better.



Zammin said:


> I was watching Awesome Hardware today (livestream with Paul's HW and Bitwit) and Paul mentioned he had been talking to his Nvidia rep about the card failures, he basically said that Nvidia are just replacing/repairing the affected cards and at this point do not believe this to be a large scale issue. If that's true then I would assume the number of affected cards is not that big in comparison to the number of cards sold, but I guess we'll see what happens over the next few months. I just hope the FE I bought today turns out good. :/


The issue is, we just don't know, because we have no definitive sales vs. RMA numbers...the RMAs could be well within a reasonable margin for new tech/manufacturing process, sure...but as an owner of pretty much all previous high-end Titan cards, etc., there is SIGNIFICANTLY more 'noise' around failures and RMAs with this RTX 2080 Ti launch than past launches, that's for dang sure.


----------



## alex1990

Within 25 days my *2080 Ti Gaming X TRIO broke*. At first there were crashes in games without any error messages, but I blamed drivers and Windows 1809. Then yesterday the small fan randomly spun up to 100% every 5-10 seconds after reaching 70C, as if the video card were in "emergency mode" trying not to overheat. I have a large, well-ventilated case; before this, the average temperature was 73C at 45% fan speed (auto mode).
I tried three different drivers and two BIOSes (101 and 103), turned off the overclock and removed Afterburner. Even testing in another PC did not help.
I have handed the card in under warranty and am waiting for a decision.


----------



## ENTERPRISE

alex1990 said:


> Within 25 days my *2080 Ti Gaming X TRIO broke*. At first there were crashes in games without any error messages, but I blamed drivers and Windows 1809. Then yesterday the small fan randomly spun up to 100% every 5-10 seconds after reaching 70C, as if the video card were in "emergency mode" trying not to overheat. I have a large, well-ventilated case; before this, the average temperature was 73C at 45% fan speed (auto mode).
> I tried three different drivers and two BIOSes (101 and 103), turned off the overclock and removed Afterburner. Even testing in another PC did not help.
> I have handed the card in under warranty and am waiting for a decision.


Sorry to hear. With all the issues, I am going to fully test my 2080 Tis with benchmarks and games before installing them permanently, so I can be more confident I won't hit issues due to hardware faults. Nothing worse than getting something fully installed and settled into your rig, then having to pull it all out due to a fault. I've never tested a GPU before fully installing it, as I've never had issues. This time round, however, I'm not fancying the chance.


----------



## alanthecelt

Got My Palit OC in the loop last night, on its Alphacool block
ran nvidia scanner and it actually benched lower than stock
flashed the 380w bios, timespy shows it at 2115 core, didn't mess with the memory yet
my loop is set to stay within 5 deg c of ambient, however my aquaero is currently disconnected so my info is limited...
basically in timespy the gpu didn't pass 47 deg c, however id have expected my 1080ti to have not really passed 42 on its ek block


----------



## stefxyz

Quite bad news so far, especially considering the cards are only running at half capacity since the Tensor and RT cores are not even utilized yet. I wonder if this is one of the reasons we do not see any RTX functionality yet....


----------



## Fitzcaraldo

stefxyz said:


> Quite bad news so far, especially considering the cards are only running at half capacity since the Tensor and RT cores are not even utilized yet. I wonder if this is one of the reasons we do not see any RTX functionality yet....


The reason is not related to any possible faulty hardware. It's simply due to a lack of implementation on all levels (OS & software). Blame it on Microsoft for ******* up 1809.


----------



## Zammin

jcde7ago said:


> The issue is, we just don't know, because we have no definitive sales vs. RMA numbers...the RMAs could be well within a reasonable margin for new tech/manufacturing process, sure...but as an owner of pretty much all previous high-end Titan cards, etc., there is SIGNIFICANTLY more 'noise' around failures and RMAs with this RTX 2080 Ti launch than past launches, that's for dang sure.


I believe you man. It's very disconcerting for those of us buying FE cards right now. I would've kept my pre-order for the EVGA XC Gaming card if I knew when I was getting it, but I've been fed rubbish time and time again over the last month and a half by the store I "pre-ordered" with, and I could be waiting another month or two for all I know. The ETA still says 12 October, and when I call them they just say they don't have even the slightest idea when stock will come in lol.

It's hard here in Aus because there are so many people who want these GPUs but we aren't a high priority for most of the manufacturers, so we only seem to get what trickles down after shipments to the US. As a result there is a massive backlog of orders with every local store, and a lot of missed or missing ETAs.

If the FE I snagged today turns out to be good it will be great as it's $155 cheaper. I think I'm gonna run it air cooled for a while first to be sure that it's okay. That way if it dies on me it won't be as much of a pain to RMA it. If it's still good after 14-20 days of good use I'll block it and chuck it in the loop.


----------



## toncij

alanthecelt said:


> Got My Palit OC in the loop last night, on its Alphacool block
> ran nvidia scanner and it actually benched lower than stock
> flashed the 380w bios, timespy shows it at 2115 core, didn't mess with the memory yet
> my loop is set to stay within 5 deg c of ambient, however my aquaero is currently disconnected so my info is limited...
> basically in timespy the gpu didn't pass 47 deg c, however id have expected my 1080ti to have not really passed 42 on its ek block


Lower than stock??

Their dual fan is pretty good from what I've heard. Going to get one today. Together with PNY XLR8 or whatever the name is. Really interested in how those perform.


----------



## Zammin

alanthecelt said:


> Got My Palit OC in the loop last night, on its Alphacool block
> ran nvidia scanner and it actually benched lower than stock
> flashed the 380w bios, timespy shows it at 2115 core, didn't mess with the memory yet
> my loop is set to stay within 5 deg c of ambient, however my aquaero is currently disconnected so my info is limited...
> basically in timespy the gpu didn't pass 47 deg c, however id have expected my 1080ti to have not really passed 42 on its ek block


Nice temp! That on a 360+280 setup? I'm still super envious of that V3000 case haha.


----------



## alanthecelt

toncij said:


> Lower than stock??
> 
> Their dual fan is pretty good from what I've heard. Going to get one today. Together with PNY XLR8 or whatever the name is. Really interested in how those perform.


yer, once the scanner had overclocked the card it seemed to be permanently locked at ~1500MHz... not sure why... perhaps it was power limited, but it didn't add up. Flashing the higher-wattage BIOS fixed it after running the scanner again.


----------



## alanthecelt

Zammin said:


> Nice temp! That on a 360+280 setup? I'm still super envious of that V3000 case haha.


looks like my rig had old parts listed....
its the thickest 480 and a thick 420 that i could get in the case
i dunno... 47 degrees was a little warmer than i was used to seeing.....


----------



## Zammin

alanthecelt said:


> looks like my rig had old parts listed....
> its the thickest 480 and a thick 420 that i could get in the case
> i dunno... 47 degrees was a little warmer than i was used to seeing.....


Ah I see, I was gonna say I thought you'd have a 480 in there looking at the basement of the case. I suppose with that radiator capacity it's fair that you'd expect a bit lower, but it's about right in the middle of what I've seen other users on here report. The best I've seen mentioned (without a chiller) was very low 40s, and the highest (without any problems causing abnormal temps) was around 55 I think. Of course I don't know the fan speeds and ambient temps for all of these cases, and that plays a pretty big part. The other guy with the acetal version of that block said he was getting 45-46 I think. I'll let you know what I get when I block mine; I'm getting the same block without the shroud but my rad capacity will be lower (EK XE360 + HWL 360GTS). It's also pretty hot here right now, we're in Summer atm moving into 30C+ days and high humidity.


----------



## alanthecelt

yer i have my coolant set up to go 0-5 deg c above ambient.. my room is pretty cool atm, maybe 18 deg c..
I'll have the Aquaero connected back to the pc later in the week so i can actually get the coolant temp info
power limit raised/ overclocked seemed to have a significant change in temp..
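The control scheme described above (coolant held 0-5 deg C over ambient by the Aquaero) is essentially a proportional fan curve on the coolant-to-ambient delta. A minimal Python sketch of the idea; the duty-cycle numbers are illustrative assumptions, not Aquaero settings:

```python
# Hypothetical sketch of delta-over-ambient fan control: duty scales with
# how far coolant sits above ambient, targeting a 0-5 deg C band.

def fan_duty(coolant_c, ambient_c, min_duty=30.0, max_duty=100.0, target_delta=5.0):
    """Return a fan duty cycle (%) from the coolant-over-ambient delta."""
    delta = coolant_c - ambient_c
    if delta <= 0:
        return min_duty  # coolant at or below ambient: keep fans at idle
    # Ramp linearly from min to max duty across the target delta band.
    return round(min_duty + (max_duty - min_duty) * min(delta / target_delta, 1.0), 1)

print(fan_duty(20.0, 18.0))  # 2 deg C over ambient -> 58.0 (partial duty)
print(fan_duty(24.0, 18.0))  # 6 deg C over ambient -> 100.0 (full duty)
```

A real controller would add hysteresis and smoothing on top of this so the fans don't hunt around the setpoint.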


----------



## GosuPl

Zammin said:


> Nice temp! That on a 360+280 setup? I'm still super envious of that V3000 case haha.


Yep, very nice temp. In a few days I will build a dual loop in my rig (if my 2x 2080 Tis come back fine from RMA). Which rad config do you suggest?

1. 480/60 HWLabs GTX 480 for the 2x RTX 2080 Ti, and 480/60 Alphacool NexXxoS + 280/60 Phobya for the 7980XE and RVIE
2. 480/60 Alphacool NexXxoS + 280/60 Phobya for the 2x RTX 2080 Ti, and 480/60 HWLabs GTX 480 for the 7980XE and RVIE

I have all the rads now, just waiting for fittings (HDC) and the rest of the LC stuff.

Separate loops for CPU/VRM and for the 2x GPUs. Dual D5 pumps.


----------



## Zammin

GosuPl said:


> Yep, very nice temp. In a few days I will build a dual loop in my rig (if my 2x 2080 Tis come back fine from RMA). Which rad config do you suggest?
> 
> 1. 480/60 HWLabs GTX 480 for the 2x RTX 2080 Ti, and 480/60 Alphacool NexXxoS + 280/60 Phobya for the 7980XE and RVIE
> 2. 480/60 Alphacool NexXxoS + 280/60 Phobya for the 2x RTX 2080 Ti, and 480/60 HWLabs GTX 480 for the 7980XE and RVIE
> 
> I have all the rads now, just waiting for fittings (HDC) and the rest of the LC stuff.
> 
> Separate loops for CPU/VRM and for the 2x GPUs. Dual D5 pumps.


I mean both sound good, that's way more radiator than I have experience with, but if you're asking whether to have more radiator capacity on the CPU loop or GPU loop, I would think the GPUs are going to be putting out more heat. Either way that is a lot of rad capacity 2x480x60 and 1x 280x60.


----------



## Fitzcaraldo

GosuPl said:


> Yep, very nice temp. In a few days I will build a dual loop in my rig (if my 2x 2080 Tis come back fine from RMA). Which rad config do you suggest?
> 
> 1. 480/60 HWLabs GTX 480 for the 2x RTX 2080 Ti, and 480/60 Alphacool NexXxoS + 280/60 Phobya for the 7980XE and RVIE
> 2. 480/60 Alphacool NexXxoS + 280/60 Phobya for the 2x RTX 2080 Ti, and 480/60 HWLabs GTX 480 for the 7980XE and RVIE
> 
> I have all the rads now, just waiting for fittings (HDC) and the rest of the LC stuff.
> 
> Separate loops for CPU/VRM and for the 2x GPUs. Dual D5 pumps.


Chiming in on this: More rad surface for the GPUs, they will produce more heat to dissipate.


----------



## GosuPl

Zammin said:


> I mean both sound good, that's way more radiator than I have experience with, but if you're asking whether to have more radiator capacity on the CPU loop or GPU loop, I would think the GPUs are going to be putting out more heat. Either way that is a lot of rad capacity 2x480x60 and 1x 280x60.


Thanks for the reply. I think a 480/60 + 280/60 will be a very good setup for the 2x GPUs.
But the 7980XE and the VRM on the RVIE (monoblock on CPU and VRM) run very hot too.

In the Enthoo Elite I can fit many rads, but... I don't like "playing" with PETG hard tubing. I want a clean look, but bending that s**t is annoying ;-)


----------



## GosuPl

Fitzcaraldo said:


> Chiming in on this: More rad surface for the GPUs, they will produce more heat to dissipate.



Thx


----------



## alanthecelt

i had this debate on mine... i wanted dual loops, but in the end, since noise, cooling and pump redundancy were the concerns, i went single loop with multiple reservoirs and pumps
the only real advantage dual loops would give was being able to run 2 different coolant colours


----------



## kot0005

ENTERPRISE said:


> I was wondering how long it would take before the ''Media'' started to talk about the clear issue Nvidia is having.


here we go..


----------



## kot0005

TahoeDust said:


> FTW3 pics in case anyone is interested in the card...


Hey dude upload ftw3 375w bios please!


----------



## kot0005

dadunn1700 said:


> EK has amended their instructions to include the use of a 3rd thermal pad covering the coils. Just letting everyone know. If you contact EK they will mail you the thermal pad free of charge since it is not included with blocks they produced before the instruction manual was amended. I know from experience.


Yeah, thankfully I had some from previous installations. It's a 1mm pad on the chokes. Apparently it will reduce coil whine.


----------



## Edge0fsanity

kot0005 said:


> Yeah, thankfully I had some from previous installations. Its 1mm pad on the chokes. Apparently it will reduce coil whine..


Will this pad make any difference beyond reduction in coil whine? My card is near silent without it.


----------



## Edge0fsanity

Does anyone have information on waterblock performance with these cards? I've read a few times in this thread that the EK blocks aren't performing that well in comparison to others which is what i have. What i haven't seen is thermal performance from some of the other manufacturers.

I've been playing around with the curve in AB with my card lately, trying to get a maximum stable OC dialed in. It seems my card hits a wall at 48C in Time Spy (using this for stress testing) and will destabilize and crash at the desired clocks, otherwise it's perfectly stable below that temp. It's a 30MHz difference without taking thermal throttling into account.


----------



## alanthecelt

Edge0fsanity said:


> I've been playing around with the curve in AB with my card lately trying to get a maximum stable OC dialed in. It seems my card hits a wall at 48C in timespy(using this for stress testing) and will destabilize and crash with the desired clocks, otherwise its perfectly stable below that temp. Its a 30mhz difference without taking thermal throttling into account.


As above im getting about 46/47 degrees at a coolant temp somewhere around 20 deg c (hard to tell atm unil Aquaero plugged in) on alphacool block, 380w bios, using scanner to OC, peak is 2115 MHZ


----------



## Jpmboy

Fitzcaraldo said:


> The reason is not related to any possible faulty hardware. It's simply due to a lack of implementation on all levels (OS & software). Blame it on Microsoft for ******* up 1809.


^^ this. it's been a bad coming-out party for DX12RT for sure.


----------



## Esenel

ENTERPRISE said:


> Sorry to hear. With all the issues, I am going to fully test my 2080 Tis with benchmarks and games before installing them permanently, so I can be more confident I won't hit issues due to hardware faults. Nothing worse than getting something fully installed and settled into your rig, then having to pull it all out due to a fault. I've never tested a GPU before fully installing it, as I've never had issues. This time round, however, I'm not fancying the chance.


Haha, never trust hardware.
I learned this the very hard way.

I bought a Z370 Asus Hero with an i7-8086K.

I waited nearly 4 months to get one binned at 5.1GHz (I started the order with an 8700K)!

I flushed the whole CPU and GPU loop to dismantle everything and clean it.
Took me several hours, of course.

So I swapped the Ryzen for the 8086K and everything looked great.
I even took the time for a fresh Windows installation.

BUT after the first OC tests I could not figure out why I couldn't get any stable RAM OC.
After days of trial and error I saw that it was down to VCCIO and VCCSA.
I could enter values in the BIOS, but they were never applied.
Tested all BIOS versions, reseated the CPU... everything.

The only thing to do was send it back to Caseking.

Long story short:
it was a faulty mainboard whose only flaw was being unable to regulate two voltages :-D

So, flushing again. Intel out, Ryzen in.
Waiting for the RMA.
Ryzen out and Intel in again -.-

What I learned from this: test every piece of hardware outside of the case before doing this **** again :-D

I also tested the 2080 Ti before taking everything apart ;-)


----------



## toncij

Esenel said:


> Haha, never trust hardware.
> I learned this the very hard way.
> 
> I bought a Z370 Asus Hero with an i7-8086K.
> 
> I waited nearly 4 months to get one binned at 5.1GHz (I started the order with an 8700K)!
> 
> I flushed the whole CPU and GPU loop to dismantle everything and clean it.
> Took me several hours, of course.
> 
> So I swapped the Ryzen for the 8086K and everything looked great.
> I even took the time for a fresh Windows installation.
> 
> BUT after the first OC tests I could not figure out why I couldn't get any stable RAM OC.
> After days of trial and error I saw that it was down to VCCIO and VCCSA.
> I could enter values in the BIOS, but they were never applied.
> Tested all BIOS versions, reseated the CPU... everything.
> 
> The only thing to do was send it back to Caseking.
> 
> Long story short:
> it was a faulty mainboard whose only flaw was being unable to regulate two voltages :-D
> 
> So, flushing again. Intel out, Ryzen in.
> Waiting for the RMA.
> Ryzen out and Intel in again -.-
> 
> What I learned from this: test every piece of hardware outside of the case before doing this **** again :-D
> 
> I also tested the 2080 Ti before taking everything apart ;-)


Outside of the case would mean you'd need a spare system for that. Not everyone does. But yes, run a Turing for a month without the loop first, just to avoid annoying loop-drain work.

Faulty hardware is the primary reason I always keep one consumer-level machine and one HEDT for work, a much more conservative build that I upgrade only after I'm satisfied with the performance of a new part in a "cheap" ($5000, far from cheap though) build without water.
First time in years now that I'm stuck with only a mere 9900K and a MBP laptop. That makes me nervous, but I can't make up my mind: go with the 9000-series HEDT, Ryzen 2, or wait at least a few months for 7nm Ryzen. Intel is most probably a year+ away with their degraded "10nm"...


----------



## Esenel

toncij said:


> Outside of the case would mean you'd need a spare system for that. Not everyone does. But yes, run a Turing for a month without the loop first, just to avoid annoying loop-drain work.
> 
> Faulty hardware is the primary reason I always keep one consumer-level machine and one HEDT for work, a much more conservative build that I upgrade only after I'm satisfied with the performance of a new part in a "cheap" ($5000, far from cheap though) build without water.
> First time in years now that I'm stuck with only a mere 9900K and a MBP laptop. That makes me nervous, but I can't make up my mind: go with the 9000-series HEDT, Ryzen 2, or wait at least a few months for 7nm Ryzen. Intel is most probably a year+ away with their degraded "10nm"...


No. But long cables :-D


----------



## Murlocke

I see on the spec sheets the EVGA cards go to 338W and the Zotac AMP goes to 300W. Is this actually going to make a difference on air? I'm not about to buy a $1300 card and have to risk flashing it. My gut tells me to hold out for an EVGA, but the heatsink on the Zotac seems much better. Ideally I'd buy an EVGA FTW3, but who knows when those will be in stock.


----------



## stefxyz

Edge0fsanity said:


> Does anyone have information on waterblock performance with these cards? I've read a few times in this thread that the EK blocks aren't performing that well in comparison to others which is what i have. What i haven't seen is thermal performance from some of the other manufacturers.
> 
> I've been playing around with the curve in AB with my card lately trying to get a maximum stable OC dialed in. It seems my card hits a wall at 48C in timespy(using this for stress testing) and will destabilize and crash with the desired clocks, otherwise its perfectly stable below that temp. Its a 30mhz difference without taking thermal throttling into account.



My 2080 Ti FE at full power target, +1000 on memory and 2140MHz, runs 38 degrees Celsius max with 20 degrees ambient (one 560mm radiator loop just for the GPU).


----------



## ENTERPRISE

Esenel said:


> Haha, never trust hardware.
> I learned this the very hard way.
> 
> I bought a Z370 Asus Hero with an i7-8086K.
> 
> I waited nearly 4 months to get one binned at 5.1GHz (I started the order with an 8700K)!
> 
> I flushed the whole CPU and GPU loop to dismantle everything and clean it.
> Took me several hours, of course.
> 
> So I swapped the Ryzen for the 8086K and everything looked great.
> I even took the time for a fresh Windows installation.
> 
> BUT after the first OC tests I could not figure out why I couldn't get any stable RAM OC.
> After days of trial and error I saw that it was down to VCCIO and VCCSA.
> I could enter values in the BIOS, but they were never applied.
> Tested all BIOS versions, reseated the CPU... everything.
> 
> The only thing to do was send it back to Caseking.
> 
> Long story short:
> it was a faulty mainboard whose only flaw was being unable to regulate two voltages :-D
> 
> So, flushing again. Intel out, Ryzen in.
> Waiting for the RMA.
> Ryzen out and Intel in again -.-
> 
> What I learned from this: test every piece of hardware outside of the case before doing this **** again :-D
> 
> I also tested the 2080 Ti before taking everything apart ;-)


Ouch, Yeah that is no fun at all ! I am going to test each 2080Ti in a single config and stress each of them before putting them in SLI and settling down. 

Fingers crossed your next build session goes well :thumb:


----------



## illidan2000

Hi,
i noticed that the Gigabyte 2080 Ti Gaming G1 has two connectors, and the red one (upper right) interferes with waterblocks.
Do you know why there are two connectors? Is the red one only for the RGB LED lighting? Could I simply tape it off?
And is the blue one used for the fans?


----------



## GraphicsWhore

illidan2000 said:


> Hi,
> i noticed that the Gigabyte 2080 Ti Gaming G1 has two connectors, and the red one (upper right) interferes with waterblocks.
> Do you know why there are two connectors? Is the red one only for the RGB LED lighting? Could I simply tape it off?
> And is the blue one used for the fans?


I think someone mentioned in the thread red is the fan header and you need to just bend the pins for the block to go on.


----------



## TahoeDust

kot0005 said:


> Hey dude upload ftw3 375w bios please!


Here you go. Also uploaded to GPU-Z database.


----------



## illidan2000

GraphicsWhore said:


> I think someone mentioned in the thread red is the fan header and you need to just bend the pins for the block to go on.


for me it's a big problem... I can't simply bend them, because I want to put the EVGA Hybrid on it, so the fan header is what I need to feed the pump and the radiator fan
and the blue one, what is it for?


----------



## toncij

illidan2000 said:


> Hi,
> i noticed that Gigabyte 2080ti Gaming G1 has two connectors. The Red one (upper right) is "interfering" with waterblocks.
> Do you know why there are two connectors? The red one is only for RGB led lights? I could simply tape this?
> The blue one is used for the fans ?


So it is not a ref. PCB?



TahoeDust said:


> Here you go. Also uploaded to GPU-Z database.


Will that even work on non FTW3 cards?


----------



## Zurv

I wonder what the limits are on the gainward 2080 ti geek edition?
https://videocardz.net/gainward-geforce-rtx-2080-ti-11gb-gamesoul-geek-edition

if someone wants to pick it up and dump the bios.


----------



## Garrett1974NL

I'm not flashing yet... I'll give the card 1 more month... if it's still good THEN I'll flash... not before 
Too many of these things seem to die prematurely


----------



## gtbtk

Hi All,

The GTX 1070 cards with Micron memory, before the NVIDIA BIOS update that resolved the issue, would start checkerboard artifacting in a similar manner to what seems to be happening with your cards. It ended up being either a voltage or GDDR5 timings issue in the BIOS.

While it did not totally solve the problem, setting the NVIDIA Control Panel power mode to high performance would help the cards remain more stable than the default optimized setting.

We found that the card dropping to a lower P-state was the trigger that made the card start checkerboard artifacting when you placed a graphics load on it. The artifacts got worse the faster the RAM was clocked.

I don't know if this will help 2080 Ti owners work around the problem, but it is probably worth trying, to see if you are having a similar experience to the 1070 Micron owners. If it helps with stability, reminding NVIDIA about the 1070 issue may get you a solution a bit faster than the 5 months it took to get the 1070 fix.

Cheers


----------



## TahoeDust

toncij said:


> Will that even work on non FTW3 cards?


No clue, I was asked to upload it, so I did.


----------



## exploiteddna

gaming x trio in stock newegg. for now


----------



## kx11

got the strix OC block but still no GPU


----------



## Garrett1974NL

gtbtk said:


> Hi All,
> 
> The GTX 1070 cards with Micron memory, before the NVIDIA BIOS update that resolved the issue, would start checkerboard artifacting in a similar manner to what seems to be happening with your cards. It ended up being either a voltage or GDDR5 timings issue in the BIOS.
> 
> While it did not totally solve the problem, setting the NVIDIA Control Panel power mode to high performance would help the cards remain more stable than the default optimized setting.
> 
> We found that the card dropping to a lower P-state was the trigger that made the card start checkerboard artifacting when you placed a graphics load on it. The artifacts got worse the faster the RAM was clocked.
> 
> I don't know if this will help 2080 Ti owners work around the problem, but it is probably worth trying, to see if you are having a similar experience to the 1070 Micron owners. If it helps with stability, reminding NVIDIA about the 1070 issue may get you a solution a bit faster than the 5 months it took to get the 1070 fix.
> 
> Cheers


Do you mean the texture filtering quality?
Setting that to high performance?


----------



## Noufel

got my Trio X 2080ti today, i hope that i won't be space invaded


----------



## Jpmboy

Garrett1974NL said:


> Do you mean the texture filtering quality?
> Setting that to high performance?


i think he means power management:
setting it to high performance will set a higher minimum P-state. P8 is idle, P0 is the highest.
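For anyone wanting to verify which P-state the card is actually sitting in, `nvidia-smi -q` reports it as a "Performance State" field. A small Python sketch that parses that output; the sample text below is canned, not a live query:

```python
import re

def performance_state(nvsmi_q_output):
    """Pull the P-state (e.g. 'P0' under load, 'P8' at idle) out of `nvidia-smi -q` text."""
    m = re.search(r"Performance State\s*:\s*(P\d+)", nvsmi_q_output)
    return m.group(1) if m else None

# Canned sample standing in for a live `nvidia-smi -q` call:
sample = """
    Performance State                 : P0
    Clocks
        Graphics                      : 2115 MHz
"""
print(performance_state(sample))  # P0
```

On a live system you would feed it the text from `subprocess.run(["nvidia-smi", "-q"], capture_output=True, text=True).stdout` instead of the sample string.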


----------



## illidan2000

toncij said:


> So it is not a ref. PCB?


Not completely. It has the fan header on the top...


----------



## Chalupa

I'm in the process of building a rig for my boss and we were going to use SLI 2080 Tis in the build. I know there's a lot of information out there on the 2080 Tis dying, so we decided to put the build on hold until we know more about the issue. Are these graphics cards safe to buy right now? We don't want to put together this massive watercooling PC just to have the graphics cards die a month later. Are these issues going to be resolved soon?


----------



## kot0005

Edge0fsanity said:


> Does anyone have information on waterblock performance with these cards? I've read a few times in this thread that the EK blocks aren't performing that well in comparison to others which is what i have. What i haven't seen is thermal performance from some of the other manufacturers.
> 
> I've been playing around with the curve in AB with my card lately trying to get a maximum stable OC dialed in. It seems my card hits a wall at 48C in timespy(using this for stress testing) and will destabilize and crash with the desired clocks, otherwise its perfectly stable below that temp. Its a 30mhz difference without taking thermal throttling into account.


I max out at 45-46c in AC odyssey. coolant hit around 35c. My fans run at 600rpm all times. 2x 480XE rads.


----------



## kot0005

TahoeDust said:


> Here you go. Also uploaded to GPU-Z database.


thanks.


----------



## Edge0fsanity

stefxyz said:


> My 2080 Ti FE at full power target, +1000 on memory and 2140MHz, runs 38 degrees Celsius max with 20 degrees ambient (one 560mm radiator loop just for the GPU).


Thats a really good temp, what block are you using?



kot0005 said:


> I max out at 45-46c in AC odyssey. coolant hit around 35c. My fans run at 600rpm all times. 2x 480XE rads.


What block and bios are you running?


----------



## CaliLife17

Well decided to return my 2x FE editions and going to pick up a couple of EVGA cards instead. Just nabbed 1x XC ultra from B&H, need to get 1 more. If anything does go wrong with the cards, I would rather deal with EVGA than Nvidia. Plus EVGA bios has a higher power target. 

Sadly they didn't have any of the non Ultra cards in stock, since I don't need the beefy air cooler. Both cards will be going under water.


----------



## Zurv

Edge0fsanity said:


> Thats a really good temp, what block are you using?
> 
> 
> What block and bios are you running?


also, what are you using to "max" out the card? warm that loop up and then run Fire Strike Extreme for like 30 min. Is it still those low temps?


----------



## Zurv

Chalupa said:


> I'm in the process of building a rig for my boss and we were going to use SLI 2080 Tis in the build. I know there's a lot of information out there on the 2080 Tis dying, so we decided to put the build on hold until we know more about the issue. Are these graphics cards safe to buy right now? We don't want to put together this massive watercooling PC just to have the graphics cards die a month later. Are these issues going to be resolved soon?


It takes time to build and get cards into the retail channel. If there is a problem (again, if) it will take a long time before it makes it into consumer's hands. don't stress it. If there is a problem you'll just RMA it.

Do maybe open the cards up and make sure all the pads and TIM are in good shape and touching what they need to touch.


----------



## Zurv

CaliLife17 said:


> Well decided to return my 2x FE editions and going to pick up a couple of EVGA cards instead. Just nabbed 1x XC ultra from B&H, need to get 1 more. If anything does go wrong with the cards, I would rather deal with EVGA than Nvidia. Plus EVGA bios has a higher power target.
> 
> Sadly they didn't have any of the non Ultra cards in stock, since I don't need the beefy air cooler. Both cards will be going under water.


If you are worried.. both are ref cards and are pretty much made in the same place. I personally find RMA'd via nvidia much faster and less painful than RMA'n from evga.


----------



## HuckleberryFinn

I want to let everyone know ZOTAC is refusing to pay my shipping for RMA on this faulty $1200 graphics card.

I would advise anyone risking buying a 2080Ti to stay far away from ZOTAC.


----------



## CaliLife17

Zurv said:


> If you are worried.. both are ref cards and are pretty much made in the same place. I personally find RMA'd via nvidia much faster and less painful than RMA'n from evga.


I don't think dealing with Nvidia customer support would be bad, I have heard the process is pretty painless. I more so want a later MFG card and one with unlocked bios. Also since these are going to be getting blocks on them, if I can avoid draining my loop to replace cards for RMAs that would be nice.


----------



## Esenel

Founders + EK Block (without latest thermal pads) temps with probes.

For testing the temperatures I took the TimeSpy Extreme Graphics Test 2 Benchmark in a loop. 15 minutes.

Attached you can see the points where I put the probes, and a thermal picture taken after 15 minutes.

The attached HWiNFO screenshot shows the following:

Aquaero Temp3 is the temperature outside the case which is max 27°C.
Yes it was quite warm in my room ^^

Temp1 and Temp2 measure the water before and after CPU / GPU.

Temp6 is the left probe in the picture, near left VRM.
Temp7 is in the middle next to one Memory chip.
Temp8 is the right probe near right VRM.

So my findings in a summary:
- Delta T Ambient to Water is ~ 8°C => fine
- Delta T Water to GPU is ~ 14 °C => fine as well
- Temp next to outer memory is ~ 44°C
- Temp near VRMs 42-46°C => fine
- Temp of the GPU and backplate are ~ 50°C (Due to high room temperature) => also fine

I will NOT flush the loop just to add these additional pads.

And I cannot imagine this graphics card dying due to high temps :-D

So long
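For what it's worth, the deltas in the summary above can be sanity-checked with a couple of subtractions. A tiny Python sketch; the water and GPU values are back-calculated from the stated deltas (assumptions), not direct probe readings:

```python
# All values in deg C.
readings = {
    "ambient": 27.0,  # Aquaero Temp3, outside the case
    "water":   35.0,  # consistent with the ~8 deg C ambient-to-water delta
    "gpu":     49.0,  # consistent with the ~14 deg C water-to-GPU delta
}

delta_ambient_water = readings["water"] - readings["ambient"]
delta_water_gpu = readings["gpu"] - readings["water"]

print(f"ambient->water: {delta_ambient_water:.0f} C")  # 8 C, fine for a quiet loop
print(f"water->GPU:     {delta_water_gpu:.0f} C")      # 14 C, fine for a full-cover block
```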


----------



## Nico67

jcde7ago said:


> Nope, stock. Even underclocked memory and core on the card that was RMA'd, it made no difference.
> 
> There is a common defect with all of the cards that are failing, we just don't have enough data to know what (most people are just assuming it's VRAM related because of the artifacting/lockup symptoms)...even if it ends up being a bad 'batch' of a certain component making it into the manufacturing line, statistics dictate that not all cards will have issues, even if a 'good' and 'defective' card are 1 serial number apart (unless it is an inherent design flaw that affects 100% of all 2080 Tis manufactured, which is extremely unlikely).


Wow, that's not good. Either there is some serious mem quality issues or maybe the cores are more borderline on some of the shader grps than they though. Pretty sure there are driver and bios issues, along with Win10 compatibility issues, although none of with should really cause a stock card to degrade. Especially one as well as the FE seems to be build, although mem temp could still be a thing?

The worst my card does is reboot, although pushing it too hard has caused it to drop the driver and go black screen. Looking at various issues on the Nvidia forums, it seems alt-tabbing in game could be the cause of some crashes and reboots, and it may not happen immediately. I guess a lot of people like me would be doing that to check how the GPU is performing intermittently.
I played The Division for a few hrs last night without any issues @ 2100/7499 330W, but I'm taking it slow to see if I can spot any behavioural trends that could be causing my reboots.


----------



## Nico67

Esenel said:


> Founders + EK Block (without latest thermal pads) temps with probes.
> 
> For testing the temperatures I ran the TimeSpy Extreme Graphics Test 2 benchmark in a loop for 15 minutes.
> 
> Attached you can see where I placed the probes, along with a thermal image taken after 15 minutes.
> 
> The attached HWiNFO screenshot shows the following:
> 
> Aquaero Temp3 is the temperature outside the case which is max 27°C.
> Yes it was quite warm in my room ^^
> 
> Temp1 and Temp2 measure the water before and after CPU / GPU.
> 
> Temp6 is the left probe in the picture, near left VRM.
> Temp7 is in the middle next to one Memory chip.
> Temp8 is the right probe near right VRM.
> 
> So my findings in a summary:
> - Delta T Ambient to Water is ~ 8°C => fine
> - Delta T Water to GPU is ~ 14 °C => fine as well
> - Temp next to outer memory is ~ 44°C
> - Temp near VRMs 42-46°C => fine
> - Temp of the GPU and backplate are ~ 50°C (Due to high room temperature) => also fine
> 
> I will NOT flush the loop to add these additional pads.
> 
> And I cannot imagine this graphics card dying due to high temps :-D
> 
> So long


I'd recommend using two 8-pin cables direct from the PSU. That cable is probably fine for lower-powered cards, and you may not have any issues, but it's probably not the best to pull 300W on a single cable.


----------



## Jpmboy

It's only 25A, not a problem for the Seasonic 1000's single rail, assuming the wire gauge is sufficient... note the heat map he shows. No doubt, 2 cables are better.
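The 25A figure is a quick sanity check, assuming the full load sits on the 12V conductors of that one cable:

```python
watts = 300.0          # worst-case sustained draw over the single cable
volts = 12.0           # PCIe auxiliary power is delivered on the 12 V rail
amps  = watts / volts  # current the cable's 12 V wires must carry

print(amps)  # 25.0
```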


----------



## jcde7ago

Nico67 said:


> Wow, that's not good. Either there are some serious memory quality issues, or maybe the cores are more borderline on some of the shader groups than they thought. Pretty sure there are driver and BIOS issues, along with Win10 compatibility issues, although none of those should really cause a stock card to degrade. Especially one as well built as the FE seems to be, although memory temp could still be a thing?
> 
> The worst my card does is reboot, although pushing it too hard has caused it to drop the driver and go black screen. Looking at various issues on the Nvidia forums, it seems alt-tabbing in game could be the cause of some crashes and reboots, and it may not happen immediately. I guess a lot of people like me would be doing that to check how the GPU is performing intermittently.
> I played The Division for a few hrs last night without any issues @ 2100/7499 330W, but I'm taking it slow to see if I can spot any behavioural trends that could be causing my reboots.


Yeah, it is what it is. The card seems extremely well built, from a design/manufacturing standpoint...but obviously, there are some QC/QA issues here with some cards, and like I said previously, it's likely a specific bad/defective component that made its way into some of the FEs that is causing issues, and serial numbers alone won't be enough to determine root cause, simply because that isn't how statistics works.

My recommendation for any FE owners is to actually push the card as much as you can in the early going, if you can...this doesn't mean run Furmark on it or unnecessary stress tests, but actually game + put load on the card as much as you would during extended gaming sessions, crank the settings up, etc....if the card is defective, you should know pretty quick.

Day 2 on my replacement FE, and so far, so good...cranking up all settings in Destiny 2/Black Ops 4, zero issues thus far after a few hours...this card is an absolute beast when it's working like it's supposed to, that much I'll say (if this replacement card goes bad though, I'll be looking for an immediate refund and going back to a 1080 Ti).

Also just got a notification from Aquatuning that my Eiswolf AIO for the 2080 Ti should be delivered on Friday, but I'll be holding off on installing it for a few weeks just to make sure this second FE doesn't croak as quickly as the last one.


----------



## Zammin

Checked Aquatuning today and it says they now have stock of the GPU block I ordered, but they didn't contact me to let me know... That was the last item for my order and I've been asking about it for a while now, so I would've expected some kind of contact. Sent them an email, so hopefully they get back to me tonight and ship my order. Once that's done I've got everything I need on the way: RTX 2080 Ti FE, 9900K, EK Velocity CPU block and Alphacool GPU block. The only other thing is a new motherboard if I decide to upgrade to Z390. Hoping the Maximus XI Formulas come back in stock on eBay for the next 20%-off tech sale since I missed them last time.


----------



## Vipeax

I noticed from the moment I started overclocking that the regular RTX 2080 scales very well with memory overclocks (verified with 3DMark, Unigine benchmarks and a few games); here is some data from logged Unigine Valley benchmarks (3440x1440, max settings):
https://i.imgur.com/HhhlQhy.png

Anyone feel like running a similar test with their RTX 2080 Ti? With the higher shader unit count as well as the bigger memory bus, I wonder what it is like for the RTX 2080 Ti (+300MHz on the memory means +1200MHz in Precision X1/Afterburner). Feel free to use whatever benchmark you prefer and adjust the frequencies to whatever fits your card, obviously. I can imagine some people won't go as far on the memory, while being able to push much higher frequencies on their core than I can.
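On the numbers: Precision X1 and Afterburner appear to show four times the raw memory-clock change here. This is inferred only from the +300MHz/+1200MHz example above, so treat the factor as an assumption:

```python
def slider_offset(raw_mhz: int) -> int:
    """Convert a raw GDDR6 memory-clock bump to the figure shown on the
    Precision X1/Afterburner slider (4x factor assumed from the post above)."""
    return raw_mhz * 4

print(slider_offset(300))  # 1200
```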


----------



## TahoeDust

I am a little disappointed in my FTW3's overclock. The highest I can get benchmark-stable is a +70 offset in PX1, resulting in a mid-20xx MHz clock. It seems like a lot of cards are +120 or better. I'm thinking about rolling the dice again.


----------



## Zammin

TahoeDust said:


> I am a little disappointed in my FTW3's overclock. The highest I can get benchmark stable is a +70 offset in the PX1 resulting in mid 20XXMhz clock. It seems like a lot of cards are +120 or better. I'm thinking about rolling the dice again.


That sounds pretty normal to me, especially on air. I think Jpmboy was saying earlier that temperature is what limits clocks on these cards, so on water you would probably see better clocks. The thing to remember is that all of these factory-OC'd AIB cards share the same chip as the FE, so your chances in the silicon lottery are the same whether it's a FTW3 or an XC Gaming, Gaming X Trio, FE etc.

Also if your card has a high boost clock out of the box, your +70 could also be a +120 for a different card with a lower factory boost clock, so I wouldn't be too worried about comparing those offsets.

On my current Strix 1080 Ti OC I can only get around +60, but on water that lets it boost up to around 2050-2060 MHz. It just sounds like a low offset because it's got a high boost clock out of the box.
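A toy illustration of why raw offsets aren't comparable across cards (the factory boost clocks below are hypothetical, not real FTW3/FE specs):

```python
def rough_ceiling(factory_boost_mhz: int, offset_mhz: int) -> int:
    """Offsets shift the whole boost curve, so the rough ceiling is
    factory boost + offset (actual boost still varies with temp and power)."""
    return factory_boost_mhz + offset_mhz

# A high factory boost with a small offset can land in the same place
# as a lower factory boost with a bigger offset.
card_a = rough_ceiling(1755, 70)   # heavily factory-overclocked card
card_b = rough_ceiling(1705, 120)  # milder factory clock, bigger offset
print(card_a, card_b)  # 1825 1825
```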

What temperatures are you seeing on air with the FTW3 btw? It's a good looking card.


----------



## kot0005

Edge0fsanity said:


> stefxyz said:
> 
> 
> 
> My 2080ti fe full power target +1000 on memory and
> 
> 
> 
> kot0005 said:
> 
> 
> 
> I max out at 45-46C in AC Odyssey. Coolant hits around 35C. My fans run at 600rpm at all times. 2x 480XE rads.
> 
> 
> 
> What block and bios are you running?
> 
> 
> EK, 338W stock EVGA XC BIOS.
> 
> Runs at 2070-2085MHz stable in the game using 115% power. I think that's the max limit of my card. At 130% it doesn't increase at all.


----------



## Jpmboy

Vipeax said:


> I noticed from the moment that I started overclocking that the regular RTX2080 scales very well with memory overclocks (verified with 3DMark, Unigine benchmarks and a few games), here is some data for logged Unigine Valley benchmarks (3440x1440, max settings):
> https://i.imgur.com/HhhlQhy.png
> 
> Anyone feel like running a similar test with their RTX 2080 Ti? With the higher shader unit count as well as the bigger memory bus, I wonder what it is like for the RTX 2080 Ti (+300MHz on the memory means +1200MHz in Precision X1/Afterburner). Feel free to use whatever benchmark you prefer and adjust the frequencies to whatever fits your card, obviously. I can imagine some people won't go as far on the memory, while being able to push much higher frequencies on their core than I can.


I have my Ti FE running with +1000 on the slider (= 2000 frequency in GPU-Z). It benches "productively" until +1175; +1200 will crash in Heaven, Valley (4K), Superposition 4K and 8K, and TS Extreme. Frankly, for benchmarks, 3DMark 11 Extreme really pushes the card.
With my humble 8086K:


----------



## kot0005

Just checked with stability testing. Maxes out at 2085 MHz at 1.050 V and 120% power. 130% doesn't do anything... maybe I should flash the Gigabyte BIOS for 2100 lol.


----------



## Zammin

kot0005 said:


> Just checked with stability testing. Maxes out at 2085 MHz at 1.050 V and 120% power. 130% doesn't do anything... maybe I should flash the Gigabyte BIOS for 2100 lol.


Does the higher power target allow for higher clocks? Or does it just allow the GPU to work harder at the same clock speeds? I recall someone saying that recently in the thread. Either way, it wouldn't hurt to try and see if your benchmark scores go up.


----------



## TahoeDust

Zammin said:


> What temperatures are you seeing on air with the FTW3 btw? It's a good looking card.


Temps are solid. Running Heaven for 30 min using the default fan curve, the core maxed at 69C. I am very much looking forward to getting the card under water. I just need someone to make a block for it.


----------



## Zammin

TahoeDust said:


> Temps are solid. Running Heaven for 30 min using the default fan curve, the core maxed at 69C. I am very much looking forward to getting the card under water. I just need someone to make a block for it.


Nice. I wonder who will make the first FTW3 waterblock. Presumably EVGA's own Hydro Copper?


----------



## CallsignVega

TahoeDust said:


> Temps are solid. Running Heaven for 30 min using the default fan curve, the core maxed at 69C. I am very much looking forward to getting the card under water. I just need someone to make a block for it.


69C? Honestly that is pretty bad for such a large cooler. These cards are hot and really have to go under water.


----------



## Zammin

CallsignVega said:


> 69C? Honestly that is pretty bad for such a large cooler. These cards are hot and really have to go under water.


I think it might be because it has a 373W power limit in its BIOS. The default fan curve could be very quiet as well. I remember when my 1080 Ti Strix was on air, the default curve landed it about there as well, but was very conservative.


----------



## Talon2016

My Gigabyte Gaming OC RTX 2080 Ti failed after just a couple weeks of usage. It's still "working", but any gaming basically crashes either instantly or within a short time. In some games I'm seeing flickering artifacts (the rainbow dots) that typically indicate failing VRAM. In others I don't get that at all, and they just freeze or crash without an error. I've also seen the GPU-disconnected error. I purchased from Amazon (thank god) and the GPU is being returned for a full refund.

I managed to snag an EVGA XC Ultra from their website earlier and it will be here tomorrow. Luckily they don't charge tax either! I hope this EVGA card lasts longer than a couple weeks of light use. We can also now rest assured it's not simply FE cards failing.


----------



## Gottex

Guys, I need some advice concerning the MSI 2080 Ti Gaming X Trio.
I never thought to set the fans to auto until my friend asked me to try, because he discovered a strange "feature".
Whenever I play a game with the fans set to "auto" and the temperature hits 76°C, "Fan 1" ramps up to 3200rpm in the blink of an eye until it drops to 75°C; after that it goes back to the normal scheme: Fan 1 at 45-50% (~1400-1500rpm), Fan 2 at 50-55% (~1500rpm).
I wonder if this is usual behaviour or a bug?
My friend has the same video card, and it's been almost 25 days since he bought it; only now has it started to behave strangely. His "Fan 1" ramps up to 3200rpm after the temp reaches 70°C and goes back to ~1400-1500rpm below 69°C. The thing is, for the past 25 days he has been playing games and such with the fans set to auto, and "Fan 1" NEVER reached 3200rpm even when the temp was 78°C; it was always around 40-45%.

For monitoring and fan control I use MSI Afterburner latest beta
GeForce 416.64hf
stock bios 103 (same thing on stock bios 101)


----------



## Vipeax

Zammin said:


> kot0005 said:
> 
> 
> 
> Just checked with stability testing. Maxes out at 2085 MHz at 1.050 V and 120% power. 130% doesn't do anything... maybe I should flash the Gigabyte BIOS for 2100 lol.
> 
> 
> 
> Does the higher power target allow for higher clocks? Or does it just allow the GPU to work harder at the same clock speeds? I recall someone saying that recently in the thread. Either way wouldn't hurt to try and see if your benchmark scores go up

It only prevents power-limit throttling, so no clock increase.


----------



## toncij

My Palit GamingPro OC runs up to 2160 on air, but as the temp ramps up it crashes the game. We'll see how it runs under water in a month. Memory is stable from 7000 to 8200.

However, I'm running stock. One thing that intrigues me: if all these manufacturers are leaving the memory at stock, could there be a reason for that? Maybe GDDR6 doesn't survive overclocks?


----------



## alanthecelt

toncij said:


> My Palit GamingPro OC runs up to 2160 on air, but as temp ramps up it crashes the game. Will see how it runs under water in a month. Memory from 7000 to 8200 stable.
> 
> However, I'm running stock. One thing that intrigues me is - if all those manufacturers are leaving memory be, at stock, there could be a reason for that? Maybe GDDR6 doesn't survive overclocks?


Mine is happy at 2115/2130 set via the scanner, on water at 47 deg max.
I haven't touched the RAM at all yet; I'd rather prove the core clock is good first.


----------



## stefxyz

Edge0fsanity said:


> That's a really good temp, what block are you using?
> 
> 
> What block and bios are you running?


The EK Velocity RGB Nickel. I think it's more about radiator surface, though; the loop is exclusively for the GPU, with one 560mm radiator and eight 140mm Noctua fans in push/pull. I had the same temps on the Pascal Titan X btw.


----------



## ENTERPRISE

toncij said:


> My Palit GamingPro OC runs up to 2160 on air, but as temp ramps up it crashes the game. Will see how it runs under water in a month. Memory from 7000 to 8200 stable.
> 
> However, I'm running stock. One thing that intrigues me is - if all those manufacturers are leaving memory be, at stock, there could be a reason for that? Maybe GDDR6 doesn't survive overclocks?


Most AIBs leave the memory clock as it is because it's core clock increases that give you the most gains. Overclocked memory tends not to give you anywhere near as much gain.


----------



## Vipeax

ENTERPRISE said:


> toncij said:
> 
> 
> 
> My Palit GamingPro OC runs up to 2160 on air, but as temp ramps up it crashes the game. Will see how it runs under water in a month. Memory from 7000 to 8200 stable.
> 
> However, I'm running stock. One thing that intrigues me is - if all those manufacturers are leaving memory be, at stock, there could be a reason for that? Maybe GDDR6 doesn't survive overclocks?
> 
> 
> 
> Most AIB's leave memory clock as it is as it is core clock increases that give you the most gains. Overclocked memory tends not to give you anywhere near as much gain.

Not on the 2080, surely; see my post from earlier. In fact a core overclock does barely anything on those, while memory overclocks can easily add 5 to 10%.


----------



## Esenel

Vipeax said:


> I noticed from the moment that I started overclocking that the regular RTX2080 scales very well with memory overclocks (verified with 3DMark, Unigine benchmarks and a few games), here is some data for logged Unigine Valley benchmarks (3440x1440, max settings):
> https://i.imgur.com/HhhlQhy.png
> 
> Anyone feel like running a similar test with their RTX 2080 Ti? With the higher shader unit count as well as the bigger memory bus, I wonder what it is like for the RTX 2080 Ti (+300MHz on the memory means +1200MHz in Precision X1/Afterburner). Feel free to use whatever benchmark you prefer and adjust the frequencies to whatever fits your card, obviously. I can imagine some people won't go as far on the memory, while being able to push much higher frequencies on their core than I can.


I did exactly that kind of test:
TimeSpy / Extreme with 0, 500, 750, 900, 1000 and 1100 on the memory.

You just have to scroll back some pages 😉


----------



## Esenel

Nico67 said:


> Esenel said:
> 
> 
> 
> Founders + EK Block (without latest thermal pads) temps with probes.
> 
> For testing the temperatures I ran the TimeSpy Extreme Graphics Test 2 benchmark in a loop for 15 minutes.
> 
> Attached you can see where I placed the probes, along with a thermal image taken after 15 minutes.
> 
> The attached HWiNFO screenshot shows the following:
> 
> Aquaero Temp3 is the temperature outside the case which is max 27°C.
> Yes it was quite warm in my room ^^
> 
> Temp1 and Temp2 measure the water before and after CPU / GPU.
> 
> Temp6 is the left probe in the picture, near left VRM.
> Temp7 is in the middle next to one Memory chip.
> Temp8 is the right probe near right VRM.
> 
> So my findings in a summary:
> - Delta T Ambient to Water is ~ 8°C => fine
> - Delta T Water to GPU is ~ 14 °C => fine as well
> - Temp next to outer memory is ~ 44°C
> - Temp near VRMs 42-46°C => fine
> - Temp of the GPU and backplate are ~ 50°C (Due to high room temperature) => also fine
> 
> I will NOT flush the loop to add these additional pads.
> 
> And I cannot imagine this graphics card dying due to high temps 😄
> 
> So long
> 
> 
> 
> I'd recommend using two 8-pin cables direct from the PSU. That cable is probably fine for lower-powered cards, and you may not have any issues, but it's probably not the best to pull 300W on a single cable.




Jpmboy said:


> It's only 25A, not a problem for the Seasonic 1000's single rail, assuming the wire gauge is sufficient... note the heat map he shows. No doubt, 2 cables are better.


Yes, I am very confident that Seasonic did proper product planning for their €200 PSU and that this one cable can sustain the possible 300W long-term.

Yes, 2 cables would be better, but having two spare pigtail connectors hanging around is just plain ugly 😄


----------



## Jpmboy

alanthecelt said:


> mine is happy at 2115/2130 set via scanner on water at 47 deg max
> haven't touched the ram at all yet, would rather prove the core clock was good first


OC Scanner? Is that really good at predicting game-stable clocks?


Esenel said:


> Yes I am very confident that Seasonic did a proper product planning for their 200€ PSU and that this one cable can sustain the possible 300 Watt on a long term.
> 
> Yes 2 cables would be better, but 2x a second connector hanging around is just plain ugly 😄


I have that same PSU... it does have non-"Y" PCIe cables, right?
But yeah, it will handle the occasional 300W draw np.


----------



## alanthecelt

Jpmboy said:


> OC Scanner? Is that really good at predicting game-stable clocks?


I have no idea. Normally I would set the curve up in Afterburner manually and spend a few hours doing so with Heaven running.
This time I set the scanner tool running for 20 mins in Afterburner, knowing it would err on the side of stability.
I saw the GPU restart a few times as it crashed...

Then I jumped into Monster Hunter World for a few hours and saw that speed being logged regularly... and no crashes yet.
Early days yet; I know stability can vary from one game to another.


----------



## Chalupa

Zurv said:


> It takes time to build and get cards into the retail channel. If there is a problem (again, if) it will take a long time before it makes it into consumer's hands. don't stress it. If there is a problem you'll just RMA it.
> 
> Do maybe open the cards up and make sure all the pads and TIM are in good shape and touching what they need to touch.


We plan on putting them under water so that shouldn't be an issue.


----------



## Noufel

Possible design flaw in the reference PCB that causes VRAM degradation over time:
https://www.computerbase.de/forum/t...rce-rtx-2080-ti.1832694/page-20#post-21887316


----------



## Vipeax

Noufel said:


> Possible design flaw in the reference PCB that causes VRAM degradation over time:
> https://www.computerbase.de/forum/t...rce-rtx-2080-ti.1832694/page-20#post-21887316


That doesn't explain all the water-cooled cards dying, right?


----------



## Fitzcaraldo

The fact that custom designs are also failing doesn't necessarily mean it's a design flaw in the reference design; it could just as well mean one of the shared components has a higher-than-expected failure rate.


----------



## Fiercy

If anyone with a 2080 Ti and a G-Sync monitor is experiencing BSODs on closing games: it's not your card! It's a known bug at this point.

PM me if you want the special driver Nvidia sent me with a potential fix.


----------



## carlhil2

Zammin said:


> That sounds pretty normal to me, especially on air. I think JPMboy was saying earlier that temperature is what limits clocks on these cards, so on water you would probably see better clocks. The thing to remember is that all of these factory OC'd AIB cards share the same chip as the FE so your chances in the silicon lottery are the same whether it's a FTW3 or an XC gaming, Gaming X Trio, FE etc.
> 
> Also if your card has a high boost clock out of the box, your +70 could also be a +120 for a different card with a lower factory boost clock, so I wouldn't be too worried about comparing those offsets.
> 
> On my current Strix 1080Ti OC I can only get around +60, but on water that lets it boost up to around 2050-2060mhz. It just sounds like a low offset because it's got a high boost clock out of the box.
> 
> What temperatures are you seeing on air with the FTW3 btw? It's a good looking card.


+1; for my GPU, +60 gives me ~2100MHz because my stock boost is ~2040MHz...


----------



## iamjanco

Has anyone collected and sent NVIDIA logged error info while troubleshooting an issue with their support? Just wondering. 

I brought that up because there appear to be some tools--albeit perhaps of limited use to many end users--that can be used to collect such logged error data. It might be interesting to bump a number of these error reports against one another in the search for common denominator(s).

Further information can be had via the following links:

*NVIDIA Developer Zone: GPU Management and Deployment Documentation* (https://docs.nvidia.com/deploy/index.html)

Of special interest: *XID Messages* (https://docs.nvidia.com/deploy/xid-errors/index.html)

*On Linux*:

*How to Gather and Export NVIDIA Bug Report Logs* (one example, not necessarily valid for all flavors of Linux)
https://blog.exxactcorp.com/gather-export-nvidia-bug-report-logs/

NVIDIA support response sourced from *Best way for running nvidia-bug-report.sh when reporting suspend/hibernate bugs?*
(https://devtalk.nvidia.com/default/...suspend-hibernate-bugs-/post/5062186/#5062186)

As excerpted from the above thread: 



> Did you see Xid error in log? If yes, what? Please share nvidia bug report. As super/root user run script nvidia-bug-report.sh to generate logs. This script comes with nvidia driver itself.
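For Linux users, a minimal sketch of the collection steps from those links (`nvidia-bug-report.sh` ships with the driver itself; the grep just pulls XID lines out of the kernel log):

```shell
# Generate the full support bundle (writes nvidia-bug-report.log.gz into
# the current directory); needs root, and skips if the driver isn't present.
command -v nvidia-bug-report.sh >/dev/null && sudo nvidia-bug-report.sh

# XID errors show up in the kernel log; the number after "Xid" maps to a
# failure class in the XID Messages document linked above.
dmesg 2>/dev/null | grep -i 'xid' || true
```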


----------



## rush2049

My card has serial number 3236.....

On the GeForce forums almost all the failing cards have serial numbers starting with 323.
A lot of serials starting with 333 are fine and healthy.

Also, I did a little digging, and it seems that almost all of the hardware reviewers were sent serial numbers starting with 333, though I didn't check more than the first 2 pages of Google results for 2080 Ti reviews (the review also had to have photos good enough to read the serial sticker).

So I think maybe it's a bad batch?


----------



## Fitzcaraldo

rush2049 said:


> My card has serial number 3236.....
> 
> On the geforce forums almost all the failing cards have serial numbers starting 323
> A lot of serials starting with 333 are fine and healthy.
> 
> Also I did a little bit of digging and it seems that almost all of the hardware reviewers were sent serial numbers starting 333, but I didn't check more than the first 2 pages of google results for 2080 ti reviews (also had to have photos good enough to read serial sticker).
> 
> So I think maybe its a bad batch?


No. The 323 batch was simply the first and, thus far, most widespread batch in the wild. IF anything fails, it is most likely to be a card from this batch right now. Don't jump to conclusions.

We don't even know whether the S/N indicates a batch or is simply a continuous number. It's as likely that 323 spans several batches as that these are all still "the first batch".


----------



## jcde7ago

Quick blurb on Tom's Hardware re: Micron GDDR6 temps. 

https://www.tomshardware.com/news/rtx-2080-ti-gpu-defects-launch,37995.html

Not saying it's anything more than pure speculation, but it's interesting nonetheless (but no, overheating VRAM still doesn't explain the cards that were water-cooled since day 1 with adequate VRAM cooling and have still been reported as failing, which I've seen across various forums as well).


----------



## Bill D

Got a Strix OC incoming. Have any Strix cards failed yet?

My PSU is an RM1000i; should that be OK?


----------



## rush2049

Anyone want to add their info to this document? I am trying to get an idea of how widespread the issue is.

I added most of the info I could find, but I may have missed some stuff:


https://docs.google.com/spreadsheets/d/1BHwsdI5cCaUUQu0vRtHgGDGrG-3Vf7_b82KX_lZ4-9M/edit?usp=sharing


----------



## CaliLife17

Any issue with pairing an EVGA XC and XC Ultra in SLI? Both cards are going under water, so I don't care about the air cooler. I have one XC Ultra on order and need one more card for SLI, but I wasn't sure if I could just pick up an XC card (if it comes available before another XC Ultra), flash it to the XC Ultra BIOS (so boost clocks match) and run them together. The only differences I see are the slight uptick in boost and the thicker air cooler.


----------



## xer0h0ur

rush2049 said:


> Anyone want to add their info to this document. I am trying to get an idea of how widespread the issue is.
> 
> I added most of the info I could find, but I may have missed some stuff:
> 
> 
> https://docs.google.com/spreadsheets/d/1BHwsdI5cCaUUQu0vRtHgGDGrG-3Vf7_b82KX_lZ4-9M/edit?usp=sharing


Added mine


----------



## Jpmboy

rush2049 said:


> Anyone want to add their info to this document. I am trying to get an idea of how widespread the issue is.
> 
> I added most of the info I could find, but I may have missed some stuff:
> 
> 
> https://docs.google.com/spreadsheets/d/1BHwsdI5cCaUUQu0vRtHgGDGrG-3Vf7_b82KX_lZ4-9M/edit?usp=sharing


Added. The problem will be getting folks with good/working cards to chime in for this to tell us anything (as usual). Maybe add a stock vs. watercooled column?


----------



## Garrett1974NL

Is the spreadsheet for Founders Editions only?


----------



## Okt00

Garrett1974NL said:


> Is the spreadsheet for Founders Editions only?


The spreadsheet title is "2080 Ti Founders Edition RMA Tracking"


----------



## Noufel

Tom's Hardware Germany is leaning more towards the overheating-VRAM theory for the reference PCB.
https://www.tomshardware.com/news/rtx-2080-ti-gpu-defects-launch,37995.html


----------



## Nico67

Esenel said:


> Yes I am very confident that Seasonic did a proper product planning for their 200€ PSU and that this one cable can sustain the possible 300 Watt on a long term.
> 
> Yes 2 cables would be better, but 2x a second connector hanging around is just plain ugly 😄


Just cut the extra connectors off with a flush-cut pair of side cutters. Did that myself for the same reason.


----------



## Causality1978

My 2080 Ti has only been running for a few days; all right so far. It runs stable at 1935 MHz on the core (I've also seen 2000 for a while) and 6800 MHz memory, at 75°C under 90% load. MSI Afterburner: power 111% (the max possible), temp limit 88.
I tried overclocking the GPU core +100 but it failed to a black screen, so I removed the HDMI cable from the card and plugged it back in to get back to Windows. No hardware failure happened; I just think the card is already at max tune from the Inno3D factory.
I don't expect a high overclock, just smooth running 24/7/365.
€1,230, not bad ))) I only get 6050 points in Time Spy Extreme because my CPU is a 2990WX with 2133 MHz DDR4. I know a 7980XE would be better for high scores, but I don't care about "gaming" scores.
Temps never go over 80°C even at 100% load for days, so a three-fan card is a good choice. I was considering the NV FE before buying, but now... lol.
Oops, I forgot: it's an Inno3D GeForce RTX 2080 Ti Gaming OC X3 HDMI 3xDP 11GB.


----------



## DrunknFoo

So my build is pretty much complete; the CPU and RAM are thoroughly tested for stability, which took about 4 days total. I started tinkering with the GPU and tried the Galax BIOS for my Asus Dual OC Ti... I guess a shunt mod is required to draw more power and push further? The added draw seems to clean up the clock frequency, but only slightly...?

Is it worth looking into another BIOS?


----------



## sblantipodi

Can someone post their benchmark of AC Odyssey or Shadow of the Tomb Raider in 4K with a 2080 Ti?
I am wondering if I'm CPU-limited in those games with that card.


----------



## Murlocke

Any idea why Precision X1 is saying my card's core is a whopping 1935MHz when under load?

I clicked the "Scan" button under the VF Curve Tuner (I assume this is the automatic overclocking tool?), and that was the result.


----------



## Vipeax

Murlocke said:


> Any idea why Precision X1 is saying my card's core is a whopping 1935MHz when under load?
> 
> I clicked the "Scan" thing under VG Curve Tuner (I assume this is the automatically overclocking tool?), and that was the results.


https://www.geforce.com/hardware/technology/gpu-boost/technology


----------



## toncij

Fiercy said:


> If anyone with 2080TI GSync monitor is experiencing BSOD issue on closing games. It's not your card! It's a known bug at this point.
> 
> PM me if you want special driver Nvidia sent me with a potential fix.


Could use that. I had issues like that, but attributed it to my inexperience with mesh shaders.


----------



## Nico67

Fiercy said:


> If anyone with 2080TI GSync monitor is experiencing BSOD issue on closing games. It's not your card! It's a known bug at this point.
> 
> PM me if you want special driver Nvidia sent me with a potential fix.



Actually, I haven't had any issues yet in that regard, but it's only running at 90Hz.


----------



## CallsignVega

Fitzcaraldo said:


> The fact that there also are custom designs failing doesn't mean it is necessarily a design flaw in the ref design. Doesn't mean one of the involved components does not have a higher than expected failure rate.


There are only like two non-reference PCB 2080 Ti's being sold in the US. The EVGA FTW3 and the MSI Trio. And the MSI is just a slightly tweaked reference design. 

So basically 99+% of 2080 Ti's in the wild are reference PCB cards.


----------



## xer0h0ur

Murlocke said:


> Any idea why Precision X1 is saying my card's core is running at a whopping 1935 MHz under load?
> 
> I clicked the "Scan" option under the VF Curve Tuner (I assume this is the automatic overclocking tool?), and those were the results.


Air cooled? I hope that's the case; otherwise you may have gotten a dud in the silicon lottery. It may be worth re-testing using Afterburner's OC Scanner instead. FWIW, people recommend not overclocking the VRAM when you do an OC Scanner run.


----------



## Zurv

sblantipodi said:


> Is there someone who can post their benchmark of AC Odyssey or Shadow of the Tomb Raider in 4K with a 2080 Ti?
> I am wondering if I'm CPU-limited in those games with that card.


Here it is on the high and highest presets. It's on a Titan V, which is a wee bit slower than my 2080 Ti (but only because my Titan V OCs poorly).

This is the performance (give or take a few FPS) you should be getting on your system for Shadow of the Tomb Raider.
Do make sure you are using DX12 (which I'm sure you already know).


----------



## nycgtr

My strix and strix waterblock shipped today. Looking forward to it.


----------



## Zammin

rush2049 said:


> My card has serial number 3236.....
> 
> On the geforce forums almost all the failing cards have serial numbers starting 323
> A lot of serials starting with 333 are fine and healthy.
> 
> Also, I did a little digging and it seems that almost all of the hardware reviewers were sent cards with serial numbers starting 333, but I didn't check more than the first two pages of Google results for 2080 Ti reviews (and the reviews had to have photos good enough to read the serial sticker).
> 
> So I think maybe it's a bad batch?





Fitzcaraldo said:


> No. The 323 batch was simply the first and thus far most widespread batch in the wild. If anything fails, it is most likely to be a card from this batch right now. Don't jump to conclusions.
> 
> We don't even know whether the S/N indicates a batch or is simply a continuous number. It could be that 323 spans several batches, just as likely as all of these being "the first batch".


That's interesting; the FE I only just bought this week, as soon as they became available again on Nvidia's website, has a serial number starting with 0323. I thought that if 323 was the first lot to go out, I would be seeing a newer serial number. Man, I hope they aren't selling me a repaired/refurbished card as new... :/

Also, bloody Aquatuning are ignoring my emails. I saw yesterday morning that they had stock of the GPU block I pre-ordered almost a month ago, but my order still says "in processing", so I sent them an email to ask if one of the blocks had been allocated to me. They didn't respond, and now the page says out of stock again, and my order status hasn't been updated. I swear, if they sold the stock to other people and didn't reserve one for my order, I'll be so angry.

Trying to get new hardware this year has been nothing but frustration. The only smooth experience I've had so far was buying my 9900K on eBay...


----------



## sblantipodi

Zurv said:


> Here it is on the high and highest presets. It's on a Titan V, which is a wee bit slower than my 2080 Ti (but only because my Titan V OCs poorly).
> 
> This is the performance (give or take a few FPS) you should be getting on your system for Shadow of the Tomb Raider.
> Do make sure you are using DX12 (which I'm sure you already know).
> 
> https://www.youtube.com/watch?v=Ycqu4Mv9Lv0&t=6s


Thanks, I really appreciate it.
Why are the min and the average on your highest custom test 0?


----------



## NewType88

Bill D said:


> I've got a Strix OC incoming. Have any Strix cards failed yet?
> 
> My PSU is an RM1000i; should that be OK?


From where did you order ?


----------



## sblantipodi

sblantipodi said:


> Is there someone who can post their benchmark of AC Odyssey or Shadow of the Tomb Raider in 4K with a 2080 Ti?
> I am wondering if I'm CPU-limited in those games with that card.



Here are my results; I hope to see some others... thanks.


----------



## Bill D

NewType88 said:


> From where did you order ?


newegg


----------



## truehighroller1

Murlocke said:


> Any idea why Precision X1 is saying my card's core is running at a whopping 1935 MHz under load?
> 
> I clicked the "Scan" option under the VF Curve Tuner (I assume this is the automatic overclocking tool?), and those were the results.


You have to turn the voltage slider up to 100 so it can reach 1.093 V.

Like in the picture below.


----------



## zocker

Hi, I have a DUAL-RTX2080TI-O11G from ASUS.
Is there a BIOS mod or custom BIOS for this card?


----------



## rush2049

For anyone who wants to see what my card is doing: 




This type of stuttering is sometimes much worse and eventually it freezes completely and crashes. Sometimes BSOD and sometimes it black screens and then recovers to the desktop after a bit.

And all of that is with DEBUG mode enabled, which forces absolute stock settings (below the Founders Edition overclock) and no GPU Boost at all.


----------



## Murlocke

Not really seeing the results I'd like on my EVGA Gaming Ultra 2080 Ti. I'm using the stock cooler, but with the fan at 100% it stays at about 63°C.

I have the core at +150, and I'm still seeing it run at only 1930-1950 MHz in benchmarks. Everything I read says these cards should hit 2000-2100. OC Scanner sets my card at +163 on the core. Do I really want to set the core to +200 or +250? That seems like a big OC on an already factory-overclocked card.

Should I increase the voltage to +100? (I always run the fan at 100% while gaming since my computer is in another room.)


----------



## carlhil2

Don't you guys OC from your max boost clocks at stock?


----------



## SeraphX17

Has anyone tried GALAX's Lab OC BIOS? I saw it was recently uploaded - https://www.techpowerup.com/vgabios/204869/204869

450w???


----------



## Zammin

SeraphX17 said:


> Has anyone tried GALAX's Lab OC BIOS? I saw it was recently uploaded - https://www.techpowerup.com/vgabios/204869/204869
> 
> 450w???


Isn't that BIOS meant for the limited-run custom Galax card that has three 8-pin power connectors?

I don't think any of the other cards could pull 450W from two 8-pins plus the PCIe slot.

Bloody epic-looking card though; Buildzoid has a PCB analysis of it. Apparently it's built specifically for competitive overclocking.


----------



## CallsignVega

rush2049 said:


> For anyone who wants to see what my card is doing: https://www.youtube.com/watch?v=3wjHQucNdNU
> 
> This type of stuttering is sometimes much worse and eventually it freezes completely and crashes. Sometimes BSOD and sometimes it black screens and then recovers to the desktop after a bit.
> 
> And all of that is with DEBUG mode enabled which forces absolute stock settings (below FE edition overclock) and no GPU boost at all.


Sounds like the reported memory problems with these cards. Reduce the stock memory clock by 100 MHz and see if that fixes it. 




Murlocke said:


> Not really seeing results I like on my eVGA Gaming Ultra 2080Ti. Using stock cooling but have the fan at 100% and it stays at about 63C.
> 
> I have the core at +150, and I am still seeing it running only at 1930-1950mhz on benchmarks. Everything I read says they should get 2000-2100. OC Scanner sets my card at +163 on the core. Do I really want to set my core to like +200 or +250? Seems like a big OC on an already factory OCed card.
> 
> Should I increase the voltage to +100? (I always run it at 100% fan while gaming since my computer is in another room)


The problem is that if you put too high an overclock on the card (one that could actually work under load), then when a game is starting up and there is no load, the frequency will spike very high and crash the card. Really, the only way you are going to sustain 2050+ MHz on a 2080 Ti under the heaviest loads is water cooling.


----------



## Ordred

*Looking for a real MSI Trio BIOS*

Hey there, I'm new here, but I've been reading this thread for a while and learned quite a lot. I tried the Galax BIOS on my MSI Trio and it worked, but I had trouble with my fan profile and RGB control (I don't want to leave it in rainbow mode all the time since it's too annoying). Do you think someone might mod the Trio's BIOS so we could use the MSI card with 0 RPM mode and have the card recognized as a Trio? The power limit is highly annoying, but so is the lack of a zero-RPM mode (the Galax BIOS kept the fans at a 34% minimum). I'd be really happy if someone could help out, since I don't have the knowledge to mod BIOSes.

Thanks for all your help so far.


----------



## kx11

nycgtr said:


> My strix and strix waterblock shipped today. Looking forward to it.





from where ?


----------



## Zammin

Ordred said:


> Hey there, I'm new here, but I've been reading this thread for a while and learned quite a lot. I tried the Galax BIOS on my MSI Trio and it worked, but I had trouble with my fan profile and RGB control (I don't want to leave it in rainbow mode all the time since it's too annoying). Do you think someone might mod the Trio's BIOS so we could use the MSI card with 0 RPM mode and have the card recognized as a Trio? The power limit is highly annoying, but so is the lack of a zero-RPM mode (the Galax BIOS kept the fans at a 34% minimum). I'd be really happy if someone could help out, since I don't have the knowledge to mod BIOSes.
> 
> Thanks for all your help so far.


As far as I know you can't mod the BIOS on RTX cards, only flash BIOS versions from other cards.


----------



## Edge0fsanity

zocker said:


> Hi i have an DUAL-RTX2080TI-O11G from ASUS.
> Is there a Bios Mod or Custom Bios for this Card?!


There are no custom vBIOSes for any 2080 Ti. You can, however, flash a BIOS from another 2080 Ti model onto that card. I am running the Galax 380W BIOS on mine. I would highly suggest water-cooling the card if you're going to do this; the cooler on that card is absolutely terrible and reaches 80°C on the stock 312W BIOS under sustained loads.


----------



## dante`afk

volunteers please?  

https://www.techpowerup.com/vgabios/204869/204869


----------



## kx11

SeraphX17 said:


> Has anyone tried GALAX's Lab OC BIOS? I saw it was recently uploaded - https://www.techpowerup.com/vgabios/204869/204869
> 
> 450w???



An $1,800 card will surely be interesting.




here's where to buy one
http://galaxstore.net/GALAX-GeForce-RTX-2080Ti-HOF-OC-Lab-WC-Edition_p_180.html


----------



## ocvn

MSI TrioX flash to 450W Galax and 374W FTW3, DF everything.
GalaxyHOF: https://www.3dmark.com/fs/16920364
EVGA: https://www.3dmark.com/fs/16919991


----------



## IceAero

Murlocke said:


> Should I increase the voltage to +100? (I always run it at 100% fan while gaming since my computer is in another room)





truehighroller1 said:


> You have to turn the voltage slider up to 100 so it can get up to 1.093v.
> 
> Like in the picture below.


Yeah, PSA on this one, guys: make sure you set this to 100.

I just got my FTW3 and overlooked this step. As a result, my card was not OC'ing very well and felt a little unstable (e.g., weird shimmering textures in BF1 and constant flickering at 200% resolution scale [to push the card]).

Now, set to 100, I'm rocking a smooth 90 FPS at 200% resolution scale (5120x2880 for me), without stuttering, flickering, or any artifacts.

Anyway, while I don't think this slider does much, it definitely should be pegged at 100 for best results.
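For anyone curious how that 200% figure works out: resolution scale multiplies each axis, so the pixel count quadruples. A minimal sketch, assuming a 2560x1440 native panel (which is what IceAero's 5120x2880 figure implies):

```python
def scaled_resolution(width: int, height: int, scale_percent: int) -> tuple[int, int]:
    # Resolution scale multiplies each axis by the scale factor,
    # so the rendered pixel count grows with the square of the factor.
    factor = scale_percent / 100
    return int(width * factor), int(height * factor)

print(scaled_resolution(2560, 1440, 200))  # (5120, 2880)
```

That's 4x the pixels of native 1440p, which is why it's a good stress test for stability.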


----------



## nycgtr

kx11 said:


> from where ?


Newegg for the Strix. There weren't many; it sold out before my NowInStock alert even came, I just happened to be looking for it. The block came from Bitspower direct.


----------



## Zurv

oops.. double post.


----------



## Zurv

SeraphX17 said:


> Has anyone tried GALAX's Lab OC BIOS? I saw it was recently uploaded - https://www.techpowerup.com/vgabios/204869/204869
> 
> 450w???


I found this bios slower than the gigabyte one. The problem is the 112.5% power limit.


----------



## Vipeax

Zurv said:


> Yes, it can take 450W, but the power limit is only 112.5% :(


Basic math... 400 to 450 is an increase of 12.5%.
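Spelled out, assuming the slider reports the ceiling as a percentage of the base power target set in the BIOS:

```python
def limit_percent(base_w: float, max_w: float) -> float:
    # The power-limit slider expresses the maximum as a
    # percentage of the BIOS's base power target.
    return max_w / base_w * 100

print(limit_percent(400, 450))  # 112.5
```

So a 112.5% cap on a 400W-base BIOS is exactly the 450W ceiling, not a small limit.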


----------



## Murlocke

Initial testing with the benchmarks I already had installed. I was expecting somewhat larger gains given how much this setup cost, but what can you do. It seems very odd that I only gained 3 FPS in the second Fire Strike test. Time Spy seems to utilize newer hardware much better. No idea what's up with the Rise of the Tomb Raider minimums on the Titan X system.

Left side: 6700k @ 4.6GHz, Reference Titan X (Pascal) with +150 on core/+300 on memory
Right side: 9700k @ 5GHz, eVGA 2080Ti Gaming Ultra with +130 on core/+500 on memory/+100 voltage/130% Target


----------



## nycgtr

Murlocke said:


> Initial testing with the benchmarks I already had installed. Was expecting a bit larger gains given how much this setup cost, but what can ya do. No idea what was up with the Rise of the Tomb Raider minimums on the Titan X results.
> 
> Left side: 6700k @ 4.6GHz, Reference Titan X (Pascal) with +150 on core/+300 on memory
> Right side: 9700k @ 5GHz, eVGA 2080Ti Gaming Ultra with +130 on core/+500 on memory/+100 voltage/130% Target


Well, I swapped from two Titan Xps to a single 2080 Ti and opted not to SLI my cards this round. My main display is a Predator X27, and tbh a single card is good enough if you make some compromises, which is not exactly what I could say for the Titan Xp. It's really not a huge jump, but it's enough to go single-card with some compromises.


----------



## Murlocke

nycgtr said:


> Well, I swapped from two Titan Xps to a single 2080 Ti and opted not to SLI my cards this round. My main display is a Predator X27, and tbh a single card is good enough if you make some compromises, which is not exactly what I could say for the Titan Xp. It's really not a huge jump, but it's enough to go single-card with some compromises.


I've "downgraded" like that in the past as well. SLI becomes a chore after a while, and I think you made the right choice. I run a 65" 4K OLED for gaming, so I only care about a steady 60 FPS. Sadly, these TVs don't support G-Sync, so as soon as I drop below about 58 it becomes very noticeable.

This card more or less achieves 60 FPS in every game at max settings. The most demanding game I could find was Kingdom Come: Deliverance, where I must turn the shader setting down two notches to get around a ~58 FPS average. Not a very optimized game, I think; I didn't even play it before because my old Titan X Pascal couldn't handle it at 4K.


----------



## dante`afk

I also had worse results with the 450W BIOS compared to the 380W on my FE. It wouldn't even hold 2000. Went back to the 380W.


----------



## Vipeax

dante`afk said:


> I also had worse results with the 450w bios compared to the 380w on my FE. Would not even wanna run at 2000. Went back to 380w


The FE doesn't even have the right connectors. 450W really requires 75W (slot) + 75W (6-pin) + 150W (8-pin) + 150W (8-pin) at minimum. I can't say I'm surprised that the Trio shows good results while yours doesn't. Depending on how it is configured, it may very well start behaving very strangely.
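That sum follows the nominal PCIe budgets (75 W from the slot, 75 W per 6-pin, 150 W per 8-pin). A quick sketch of why the FE's two-8-pin layout falls short of 450 W:

```python
# Nominal PCIe power budgets in watts: slot, 6-pin, 8-pin.
PCIE_BUDGET_W = {"slot": 75, "6-pin": 75, "8-pin": 150}

def board_budget(connectors):
    # Total nominal power: the slot plus each auxiliary connector listed.
    return PCIE_BUDGET_W["slot"] + sum(PCIE_BUDGET_W[c] for c in connectors)

print(board_budget(["8-pin", "8-pin"]))           # FE layout: 375
print(board_budget(["6-pin", "8-pin", "8-pin"]))  # 450
```

In practice cards can pull somewhat more per connector than spec, but the nominal budget is what the argument above is based on.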


----------



## nycgtr

Murlocke said:


> I've "downgraded" like that in the past as well. SLI becomes a chore after awhile and I think you made the right choice. I run a 65" 4K OLED for gaming so I only care about steady 60FPS. Sadly, these TVs don't support GSYNC, so soon as I drop below about 58 it becomes very noticeable.
> 
> This card more or less achieves 60FPS in every game on the max settings. The most demanding I could find was Kingdom Come: Deliverance, which I must turn the shader setting down 2 notches to get around ~58FPS average. Not a very optimized game I don't think, I didn't even play it because my old Titan X Pascal wasn't handling it well at 4K.


Yeah, I got tired of playing with Nvidia Inspector. I was hoping NVLink SLI would bring something different, but after trying it out it proved to be the same poop, same smell.


----------



## Jpmboy

Zurv said:


> I found this bios slower than the gigabyte one. The problem is the 112.5% power limit.


The % value only reflects the increase over the base power target set in the BIOS...


dante`afk said:


> I also had worse results with the 450w bios compared to the 380w on my FE. Would not even wanna run at 2000. Went back to 380w



It is more likely that the problem is how the BIOS deals with the power sources (the PCIe connectors and the slot): it has to define the power draw from each source rail, and with an additional source on the card (a third PCIe connector) I'm all but certain the Galax LN2 BIOS uses a different rail order (e.g., the BIOS sets rails 1, 2, 3, 4 assuming 1-3 are PCIe connector sources and 4 is the slot; it was the same when we made modded BIOSes for the Kingpin series based off reference power-plane BIOSes). The BIOS may actually be asking the slot for 150W.

So... when cross-flashing a two-connector card with a three-connector BIOS, be happy the thing actually turned on and/or didn't trip the board's OCP (some motherboards have this failsafe).
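The rail-ordering theory above can be sketched as follows. Note that the rail layout, ordering, and wattages here are hypothetical illustrations of the positional-mismatch idea, not values read from any actual vBIOS:

```python
# Per-rail budgets a hypothetical 3-connector BIOS writes, in rail order
# (rails 1-3 assumed to be 8-pin connectors, rail 4 the slot).
THREE_CONN_BIOS_W = [150, 150, 150, 75]

# Physical sources on a hypothetical 2-connector reference card, in rail order.
TWO_CONN_CARD = ["8-pin", "8-pin", "slot"]

def apply_bios(bios_budgets, card_rails):
    # The BIOS pairs budgets with rails purely by position, so a
    # cross-flash can assign a connector-sized budget to the slot.
    return list(zip(card_rails, bios_budgets))

for rail, budget in apply_bios(THREE_CONN_BIOS_W, TWO_CONN_CARD):
    print(f"{rail}: budgeted {budget} W")
# Under this layout the slot rail ends up budgeted for 150 W,
# double its 75 W nominal limit.
```

Which is exactly the failure mode Jpmboy describes: the flashed BIOS asking the slot for connector-level power.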


----------



## Okt00

rush2049 said:


> For anyone who wants to see what my card is doing: https://www.youtube.com/watch?v=3wjHQucNdNU
> 
> This type of stuttering is sometimes much worse and eventually it freezes completely and crashes. Sometimes BSOD and sometimes it black screens and then recovers to the desktop after a bit.
> 
> And all of that is with DEBUG mode enabled which forces absolute stock settings (below FE edition overclock) and no GPU boost at all.


I can't comment on the freezing and crashing, but in BF1 in DX12 mode I get those stutters during the first few minutes of a new round. My FPS never seems to drop under 100; it might just be a couple of frames at a lower rate, or the CPU getting hung up. Anyway, from what I gather, in DX12 mode the shader caches need to be rebuilt, which causes those stutters until the rebuild is complete.

https://answers.ea.com/t5/Technical...uttering-at-the-beginning-of-the/td-p/6780729


----------



## IceAero

Okt00 said:


> I can't comment on the Freezing completely and crashing, but in BF1 on DX12 mode, the first few minutes of a new round I get those stutters. Now my FPS never seems to drop under 100, might just be a couple frames at a lower rate, or the CPU just get hung up. Anyway, from what I gather in DX12 mode there is something with the shader caches needing to be rebuilt which causes those stutters until they are complete.
> 
> https://answers.ea.com/t5/Technical...uttering-at-the-beginning-of-the/td-p/6780729


I think the only solution is to avoid DX12 in BF1. No matter what you do, it's not going to feel 'right', even if you reduce or eliminate the stutters. 

All you have to do is try DX11 mode, and you'll see that it feels substantially more responsive.


----------



## Jbravo33

sblantipodi said:


> Is there someone who can post their benchmark of AC Odyssey or Shadow of the Tomb Raider in 4K with a 2080 Ti?
> I am wondering if I'm CPU-limited in those games with that card.


Scaling is awesome in this game. This is the highest preset. If I max out all options, one card gets 58; with the highest preset, a single card is over 60.


----------



## Zurv

Jbravo33 said:


> Scaling is awesome in this game.


yes it is 

note (which i'm sure you already know) TAA can sometimes get funky with SLI.


----------



## Jbravo33

Zurv said:


> yes it is
> 
> note (which i'm sure you already know) TAA can sometimes get funky with SLI.


Running an X27, I usually shut off AA in all games; there's really no point with this monitor. No AA adds about 17% in SOTTR.

I see a few people here with the EVGA FTW; has anyone uploaded its BIOS to TPU? I'm curious to try it out. So far the Gigabyte 366W has worked best for me, better than the 380W.


----------



## GraphicsWhore

Jbravo33 said:


> running X27 i usually shut off AA in all games. really no point with this monitor.
> 
> I see a few people with the EVGA FTW has anyone uploaded their bios to TPU? curious to try it out. so far the gigabyte 366 has worked best for me. better then the 380.


Yes, someone uploaded it and linked it a few pages back.


----------



## Diverge

Murlocke said:


> Initial testing with the benchmarks I already had installed. Was expecting a bit larger gains given how much this setup cost, but what can ya do. Seems very odd I only gained 3FPS on the second Fire Strike test. Time Spy seems to utilize newer hardware much better. No idea what's up with the Rise of the Tomb Raider minimums on the Titan X system.
> 
> Left side: 6700k @ 4.6GHz, Reference Titan X (Pascal) with +150 on core/+300 on memory
> Right side: 9700k @ 5GHz, eVGA 2080Ti Gaming Ultra with +130 on core/+500 on memory/+100 voltage/130% Target


Nice info, since I just did almost exactly the same upgrade (FE 2080 Ti). My 9700K is out for delivery; then I get to put my new system together.


----------



## dmasteR

ocvn said:


> MSI TrioX flash to 450W Galax and 374W FTW3, DF everything.
> GalaxyHOF: https://www.3dmark.com/fs/16920364
> EVGA: https://www.3dmark.com/fs/16919991


Are you able to get better clocks with the FTW3/Galax bios over the stock Trio? 

Thinking of flashing mine, but I'm not going to if it doesn't actually help.


----------



## Vipeax

dmasteR said:


> Are you able to get better clocks with the FTW3/Galax bios over the stock Trio?
> 
> Thinking on flashing mine, but not going to if it doesn’t actually help.


How often does this have to be mentioned? That is NOT how it works. It only helps maintain those maximum clocks through high-load scenarios.


----------



## Garrett1974NL

Anyone here have a failing card that's *watercooled*?


----------



## GraphicsWhore

Okt00 said:


> rush2049 said:
> 
> 
> 
> For anyone who wants to see what my card is doing: https://www.youtube.com/watch?v=3wjHQucNdNU
> 
> This type of stuttering is sometimes much worse and eventually it freezes completely and crashes. Sometimes BSOD and sometimes it black screens and then recovers to the desktop after a bit.
> 
> And all of that is with DEBUG mode enabled which forces absolute stock settings (below FE edition overclock) and no GPU boost at all.
> 
> 
> 
> I can't comment on the Freezing completely and crashing, but in BF1 on DX12 mode, the first few minutes of a new round I get those stutters. Now my FPS never seems to drop under 100, might just be a couple frames at a lower rate, or the CPU just get hung up. Anyway, from what I gather in DX12 mode there is something with the shader caches needing to be rebuilt which causes those stutters until they are complete.
> 
> https://answers.ea.com/t5/Technical...uttering-at-the-beginning-of-the/td-p/6780729

My stuttering seems to go on for the whole game until I turn DX12 off. I hope BFV doesn't have the same problem.

On an unrelated note: I assume flashing the Galax HOF BIOS onto an EVGA XC wouldn't be a great idea because of the power delivery?


----------



## Fitzcaraldo

The "stuttering" is a known issue that is partially caused by Windows and partially seems to be some weird cascading effect that has appeared occasionally with Nvidia cards since Pascal, depending on the hardware configuration. Apparently there is no reliable fix.


----------



## owned1390

Fitzcaraldo said:


> The "stuttering" is a known issue that is partially caused by windows and partially seems to be some weird cascading effect that happens sometimes with nvidia cards since pascal depending on the hardware composition. Apparently there is no reliable fix.


That's not true! You should return your card if you're getting stuttering gameplay.
This guy has a 2080 Ti and it's buttery smooth: https://www.youtube.com/watch?v=ANgyZnenNN8


----------



## Fitzcaraldo

owned1390 said:


> That's not true! You should return your card if you're getting stuttering gameplay.
> This guy has a 2080 Ti and it's buttery smooth: https://www.youtube.com/watch?v=ANgyZnenNN8


https://forums.geforce.com/default/...h-fps-drops-since-windows-10-creators-update/
This forum thread of collected user feedback begs to differ. It IS a widespread issue that is not easy to track down. Obviously, this is not simply an RMA issue but some deeper-rooted problem.


----------



## GraphicsWhore

owned1390 said:


> That's not true! You should return your card if you're getting stuttering gameplay.
> This guy has a 2080 Ti and it's buttery smooth: https://www.youtube.com/watch?v=ANgyZnenNN8


This issue has been documented since BF1 came out. BFV beta had reports of the same problem. I had the issue on my 1080Ti and I still have it on my 2080Ti. Definitely a software thing.


----------



## owned1390

Fitzcaraldo said:


> https://forums.geforce.com/default/...h-fps-drops-since-windows-10-creators-update/
> This forum thread with collected user feedback begs to differ. It IS a widespread issue that is not easy to track down. Obviously, this is not simply a RMA issue but some deeper rooted problem.





GraphicsWhore said:


> This issue has been documented since BF1 came out. BFV beta had reports of the same problem. I had the issue on my 1080Ti and I still have it on my 2080Ti. Definitely a software thing.


The 2080 Ti in the video doesn't stutter, and my 980 Ti isn't stuttering at all, while ALL my games stutter with my 2080 Ti.
I talked to Zotac today and they told me to send it back and get a new card.


----------



## Jpmboy

lol - i think that was a drive-by trolling.


----------



## Fitzcaraldo

owned1390 said:


> The 2080 Ti in the video doesn't stutter, and my 980 Ti isn't stuttering at all, while ALL my games stutter with my 2080 Ti.
> I talked to Zotac today and they told me to send it back and get a new card.


Again: the stuttering is a widespread issue that _can_ appear depending on your hardware configuration. If YOU are affected in all games, you are completely right to RMA your GPU. But please understand that the general issue of "my new and shiny GPU is stuttering in games, ***?!" is possibly caused by something beyond the GPU in question, as the thread shows cases where the same GPU doesn't stutter when put into another machine. So there is more than one possible cause of this.


----------



## rush2049

Fitzcaraldo said:


> Again: the stuttering is a widespread issue that _can_ appear depending on your hardware configuration. If YOU are affected in all games, you are completely right to RMA your GPU. But please understand that the general issue of "my new and shiny GPU is stuttering in games, ***?!" is possibly caused by something beyond the GPU in question, as the thread shows cases where the same GPU doesn't stutter when put into another machine. So there is more than one possible cause of this.


The gameplay was using DX11 mode... the stuttering is in all games except a few that aren't very intensive, like Minecraft and MapleStory 2.
The stuttering also eventually crashes the game, or crashes the entire system, or blue-screens.

Another thing to be aware of: I had DEBUG mode enabled in the control panel. This turns off boost and lowers clocks to the chip's stock, below what the Founders Edition is supposed to run at.

This morning I tried to run 3DMark Fire Strike (again with debug mode on) and the entire system crashed and rebooted, with no blue screen at all, before it finished the first graphics test.

I have an RMA card coming to swap with this one. I hope the next one is better.


----------



## Jbravo33

GraphicsWhore said:


> Yes someone uploaded and linked a few pages back.


Thank you



Jpmboy said:


> lol - i think that was a drive-by trolling.


 Haha


----------



## toncij

Not sure about this stuttering comment, since on the same machine a Vega FE, a 1080 Ti FE, and a Titan V do NOT stutter, while a Palit 2080 Ti and a PNY 2080 Ti do. Both in Odyssey and Call of Duty BO4.


----------



## truehighroller1

Vipeax said:


> The FE doesn't even have the right connectors. 450W is 75W (socket) + 75W (6 pin)+ 150W (8 pin) + 150W (8 pin) minimum really. I can't say that I'm surprised that the Trio shows good results while yours doesn't. Depending on how it is configured it may very well start behaving very strangely.


This is the card everyone is flashing the BIOS from. It has three 8-pins on it.

http://www.galax.com/en/graphics-card/20-series/galax-geforce-rtx-2080ti-hof.html

Edit: I don't even see how the guy in this comment, https://www.overclock.net/forum/27697080-post3129.html, found it, because when I do a manual search for the BIOS without using the name I can't find it; it's almost like it's hidden.


----------



## owned1390

toncij said:


> Not sure about this stuttering comment, since on the same machine: Vega FE, 1080Ti FE, TITAN V - do NOT stutter, while 2080Ti Palit and 2080Ti PNY do stutter. Both in Odyssey and Call of Duty BO4.


You should return it, or at least contact Palit/PNY.


----------



## Fitzcaraldo

toncij said:


> Not sure about this stuttering comment, since on the same machine: Vega FE, 1080Ti FE, TITAN V - do NOT stutter, while 2080Ti Palit and 2080Ti PNY do stutter. Both in Odyssey and Call of Duty BO4.


This is possibly related to your OS. Both Windows 1803 and 1809 are prone to introducing stuttering, sadly.


----------



## toncij

owned1390 said:


> You should return it or atleast contact Palit/PNY.


Of course; if there is no driver patch within two weeks, both are going back for a full refund (along with the EVGA XC and GB Gaming OC that have yet to arrive). The Titan V will serve fine.



Fitzcaraldo said:


> This is possibly related to your OS. Both Windows 1803 and 1809 are prone to introducing stuttering, sadly.


If it were an OS problem, it would affect the other three cards as well. It doesn't work that way; there is no magic OS service that introduces stuttering. It has to be either the driver or the hardware.


----------



## kot0005

GraphicsWhore said:


> This issue has been documented since BF1 came out. BFV beta had reports of the same problem. I had the issue on my 1080Ti and I still have it on my 2080Ti. Definitely a software thing.


What components do you have in your build?


----------



## Jpmboy

toncij said:


> Of course, if within two weeks there is no driver patch, both are going back for a full refund (including the yet to arrive EVGA XC and GB Gaming OC). Titan V will serve good.
> 
> 
> 
> If it was the OS problem, it would be applicable to the other 3 cards. It doesn't work that way. There is no magic OS service that introduces stuttering. It can be either driver or the hardware.


Not necessarily. The Turing architecture/driver uses the OS execution stack differently than Pascal or earlier chips (except Volta). There have been stuttering issues like these at nearly every generation launch, and they usually originate in the hardware/OS/driver interface. Lol, just wait until MS DXR is thrown into the mix.

That said, of course, if someone is not happy with their card, just return it and move on.


----------



## toncij

Jpmboy said:


> Not necessarily. The Turing architecture/driver uses the OS execution stack differently than Pascal or earlier chips (except Volta). There have been stuttering issues like these at nearly every generation launch, and they usually originate in the hardware/OS/driver interface. Lol, just wait until MS DXR is thrown into the mix.
> 
> That said, of course, if someone is not happy with their card, just return it and move on.


Do you have any concrete technical source on that? What does "differently" mean? The WDDM is the same, AFAIK.


----------



## kot0005

So I flashed the 366W BIOS and my card holds 2100 MHz, but it hits the 360W power limit.


----------



## Jpmboy

toncij said:


> Do you have any concrete tech source on that? What differently means? The WDM is the same AFAIK.


Oh, I'll have to dig it up... but it's similar to how differently AMD manages stack execution vs. Nvidia. There's a YouTube video out there somewhere too.
Needless to say, at EVERY launch there are users who see stutter and others who do not... once drivers etc. mature, that should get better. It is certainly not a premonitory sign for the RAM, else every gen would have RAM failures too.


----------



## owned1390

toncij said:


> Of course, if within two weeks there is no driver patch, both are going back for a full refund (including the yet-to-arrive EVGA XC and GB Gaming OC). The Titan V will serve me well.


EVGA/GB 2080 Ti? Can you tell me if they also stutter?



Jpmboy said:


> Not necessarily. The Turing architecture/driver uses the OS execution stack differently than Pascal or earlier chips (except Volta). There have been these stuttering issues at nearly every gen launch, and it is usually an issue originating in the hardware/OS/driver interface. Lol - just wait until MS DX12 RT is thrown into the mix.
> 
> That said, of course if someone is not happy with their card, then just return it and move on.


But not everyone with a 2080 Ti gets stutters and hiccups?


----------



## truehighroller1

I flashed the Galax 450W BIOS to see what's going on with people reporting less performance with it. The only people who would benefit from this BIOS, if they're lucky, are 2080 Ti owners with 3x 8-pin power connectors. The card is only pulling ~340W with this BIOS, and it makes sense, because the other bit needed for this BIOS to work is supposed to be pulled from a 3rd 8-pin connector at 150W.

From my experience making custom BIOSes back in the 980 Ti days, they're still set up similarly, structure-wise. There's an entry area where you program how much wattage is to be pulled from each power connector and when; you can modify that in the BIOS file itself.
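The per-connector budgeting described above can be sketched roughly. This is only an illustration: the 75W/150W figures are the usual PCIe conventions, and the function names are made up for this sketch, not pulled from any actual BIOS tool.

```python
# Rough sketch of how a BIOS power target can exceed what a given board
# can actually draw. Figures are PCIe conventions (75W slot, 150W per
# 8-pin) plus the numbers discussed in this thread -- not real BIOS data.

PCIE_SLOT_W = 75      # PCIe x16 slot budget
EIGHT_PIN_W = 150     # per 8-pin PEG connector budget

def board_power_budget(num_8pins: int) -> int:
    """Total wattage the board can draw through slot + connectors."""
    return PCIE_SLOT_W + num_8pins * EIGHT_PIN_W

def effective_limit(bios_limit_w: int, num_8pins: int) -> int:
    """A BIOS limit only helps up to what the connectors can supply."""
    return min(bios_limit_w, board_power_budget(num_8pins))

# A 450W BIOS on a 2x 8-pin card is capped by the connectors...
print(effective_limit(450, 2))  # 375 -- close to the ~340W observed
# ...while a 3x 8-pin card (e.g. the HOF) can use the full 450W.
print(effective_limit(450, 3))  # 450
```

Which matches the observation: the 450W target only pays off on 3x 8-pin boards.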

SIGH, flashing back to 380 BIOS.

On a side note, that GALAX GPU, this one:

http://galaxstore.net/GALAX-GeForce-RTX-2080Ti-HOF-OC-Lab-WC-Edition_p_180.html

IS one nice card. The software they have for it lets you go to 1.2-something volts with the circuitry they have on the card.

That thing is essentially a KINGPIN. On top of that, they seem to be the big dog IMO, making quality BIOSes and cards, which I have to say is a nice change compared to the MSI approach. MSI's 980 Ti Lightning BIOS was such a mess that by the time I got done tweaking it, it ran like it should have out of the box. Same thing with the 1080 Ti Lightning: they had a horrible BIOS, and unfortunately you couldn't touch the BIOS to fix it. I refuse to ever buy MSI again after that last Lightning. Get the KINGPIN from EVGA or the HOF from GALAX, as that thing is super nice.


----------



## Frozburn

Could use some help with this. We have a bunch of 2080 Tis in stock here and I'm trying to decide which one to get. Upgrading from a GTX 970 to 2080 Ti.

I can get any of these for the same price

Gigabyte Gaming OC

MSI Gaming X Trio

MSI Duke OC

I don't care about looks. From what I've researched, the Gaming OC has the best power limit and the best cooling out of those. I've no idea why that is.... the Gaming X Trio is bigger yet it cools worse? 

This is for 1080p 240hz with 8700k @ 5.2 and RAM @ 4000


----------



## truehighroller1

Frozburn said:


> Could use some help with this. We have a bunch of 2080 Tis in stock here and I'm trying to decide which one to get. Upgrading from a GTX 970 to 2080 Ti.
> 
> I can get any of these for the same price
> 
> Gigabyte Gaming OC
> 
> MSI Gaming X Trio
> 
> MSI Duke OC
> 
> I don't care about looks. From what I've researched, the Gaming OC has the best power limit and the best cooling out of those. I've no idea why that is.... the Gaming X Trio is bigger yet it cools worse?
> 
> This is for 1080p 240hz with 8700k @ 5.2 and RAM @ 4000


Get the Gigabyte one and flash the BIOS to the Galax 380W BIOS = win. I have had so much bad experience with MSI at this point, seriously. My run-ins with Gigabyte have been way better over the years than the ones with MSI.


----------



## kx11

Frozburn said:


> Could use some help with this. We have a bunch of 2080 Tis in stock here and I'm trying to decide which one to get. Upgrading from a GTX 970 to 2080 Ti.
> 
> I can get any of these for the same price
> 
> Gigabyte Gaming OC
> 
> MSI Gaming X Trio
> 
> MSI Duke OC
> 
> I don't care about looks. From what I've researched, the Gaming OC has the best power limit and the best cooling out of those. I've no idea why that is.... the Gaming X Trio is bigger yet it cools worse?
> 
> This is for 1080p 240hz with 8700k @ 5.2 and RAM @ 4000





you might want to add this one too


https://www.zotac.com/us/product/graphics_card/zotac-gaming-geforce-rtx-2080-ti-amp-extreme


the top Zotac card


----------



## nycgtr

Frozburn said:


> Could use some help with this. We have a bunch of 2080 Tis in stock here and I'm trying to decide which one to get. Upgrading from a GTX 970 to 2080 Ti.
> 
> I can get any of these for the same price
> 
> Gigabyte Gaming OC
> 
> MSI Gaming X Trio
> 
> MSI Duke OC
> 
> I don't care about looks. From what I've researched, the Gaming OC has the best power limit and the best cooling out of those. I've no idea why that is.... the Gaming X Trio is bigger yet it cools worse?
> 
> This is for 1080p 240hz with 8700k @ 5.2 and RAM @ 4000


The trio cools the best outta all of them. Not sure where you got that idea from. I have had every model except the duke and zotac so far.


----------



## Jpmboy

Frozburn said:


> Could use some help with this. We have a bunch of 2080 Tis in stock here and I'm trying to decide which one to get. Upgrading from a GTX 970 to 2080 Ti.
> 
> I can get any of these for the same price
> 
> Gigabyte Gaming OC
> 
> MSI Gaming X Trio
> 
> MSI Duke OC
> 
> I don't care about looks. From what I've researched, the Gaming OC has the best power limit and the best cooling out of those. I've no idea why that is.... the Gaming X Trio is bigger yet it cools worse?
> 
> This is for 1080p 240hz with 8700k @ 5.2 and RAM @ 4000


Honestly... any of those is gonna have no problem with 1080/240 (with all eye candy on). Unfortunately ALL 2080Tis run hot and drop clock bins along the way, so if you can, water cool the thing.


----------



## Frozburn

Thanks for the answers. 



nycgtr said:


> The trio cools the best outta all of them. Not sure where you got that idea from. I have had every model except the duke and zotac so far.


I've looked at 5 reviews or so and they all show the Gaming OC at around 65c (Strix 62c or so) and the Gaming X Trio is always 70+. No idea what's up with that


----------



## jcde7ago

Just received my Alphacool Eiswolf AIO for my FE...it took about 3 weeks to arrive from Germany (ordered from Aquatuning).

I've actually downsized my rig and opted to use AIOs this time around instead of a custom loop, so I would not recommend the Eiswolf if money/value is a concern.

That said, since I'm on my replacement FE after the first one failed, I'm torn on whether I want to install the Eiswolf now, or wait a couple weeks to see if this 2nd FE suffers the same fate as its predecessor and fails within 10 days (going on day 4 now).


----------



## Zammin

IceAero said:


> Yeah, PSA on this one guys: Make sure you set this to 100.
> 
> I just got my FTW3 and overlooked this step--as a result, my card was not OC'ing very well, and felt a little unstable (e.g., weird shimmering textures in BF1 and constant flickering at 200% resolution scale [to push the card]).
> 
> Now, set to 100, I'm rocking a smooth 90fps at a 200% resolution scale (5120x2880 for me), without stuttering, flickering, or any artifacts.
> 
> Anyway, while I don't think this slider does much, it definitely should be pegged at 100 for best results.


Sorry if this is a dumb question, but does running max voltage cause degradation over time? I recall that some time around launch, some of the reviewers on YouTube said they were told that by Nvidia, and that it wasn't recommended. I don't know how true it is, but I thought I'd ask.


----------



## truehighroller1

Zammin said:


> Sorry if this is a dumb question, but does running max voltage cause degradation over time? I recall that some time around launch, some of the reviewers on YouTube said they were told that by Nvidia, and that it wasn't recommended. I don't know how true it is, but I thought I'd ask.



That goes for anything electronic, though. Higher voltage and frequency will degrade the electrical paths over time; capacitors get worn down, etc. That's normal, is what I am saying, but yeah, they're right that it will degrade the electronics faster. You can set your voltage manually when benching, however, and then let the card handle itself normally again when you're done and just gaming, and be perfectly fine.



IceAero said:


> Yeah, PSA on this one guys: Make sure you set this to 100.
> 
> I just got my FTW3 and overlooked this step--as a result, my card was not OC'ing very well, and felt a little unstable (e.g., weird shimmering textures in BF1 and constant flickering at 200% resolution scale [to push the card]).
> 
> Now, set to 100, I'm rocking a smooth 90fps at a 200% resolution scale (5120x2880 for me), without stuttering, flickering, or any artifacts.
> 
> Anyway, while I don't think this slider does much, it definitely should be pegged at 100 for best results.


The 100 voltage slider lets the card go up to the max voltage, 1.093v, on its voltage curve, which directly relates to the clock the card runs at. If you leave it at default, it throttles itself to 1.065v I believe, which in turn gives you lower clocks as well.
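The slider-to-clock relationship above can be illustrated with a toy voltage/frequency curve. The curve points below are made-up round numbers around the values quoted in this thread, not a real Turing V/F table:

```python
# Toy illustration of why the voltage slider affects clocks: the card
# follows a voltage/frequency curve, and the slider raises the highest
# point on that curve it is allowed to use. Curve values are invented
# round numbers near those mentioned in this thread.

VF_CURVE = [  # (voltage in V, core clock in MHz), ascending
    (1.000, 1950),
    (1.031, 2010),
    (1.065, 2070),  # roughly the default voltage ceiling
    (1.093, 2100),  # ceiling with the slider at 100
]

def max_clock(voltage_cap: float) -> int:
    """Highest curve clock whose voltage fits under the cap."""
    return max(mhz for v, mhz in VF_CURVE if v <= voltage_cap)

print(max_clock(1.065))  # 2070 -- slider at default
print(max_clock(1.093))  # 2100 -- slider at 100
```

So the slider doesn't add clock directly; it just unlocks the top of the curve (power and temperature limits still apply on a real card).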


----------



## gtbtk

Garrett1974NL said:


> Do you mean the texture filtering quality?
> Setting that to high performance?


Power management section


----------



## GraphicsWhore

kot0005 said:


> what components do u have in ur build ?


7700k on water
EVGA 2080Ti on water
16GBs @ 3200
1.7TBs of SSDs


----------



## kot0005

AC odyssey results: 


G and V sync off

2100-2115MHz at 1.093V and 310W according to GPU-Z


----------



## rodgadala

Fiercy said:


> Sorry for the confusion guys, I myself believed that my card was a Windforce, but it's not; it's the 2080 Ti OC.
> 
> So I guess buying what's available on Newegg didn't really work out for me.
> 
> Question to anyone here who knows: will I be able to repair or bend those pins back, in case I need to put the air cooler back on?
> 
> And what was that top connection, a fan or RGB light?
> 
> Thank you. This thing is beautiful.


Hey, I had the same issue. What I did was desolder the fan connector from the PCB and save it, just in case I want to sell the card. Also, you can desolder it and place it elsewhere on the PCB; there are 3 more fan connector slots open.


----------



## rodgadala

Hey! How are you guys? What's the best BIOS for a water-cooled GeForce® RTX 2080 Ti Gaming OC 11G?

Thank you for your help


----------



## anticommon

Here's a question: Got my 2080 ti xc ultra under water, and now MSI afterburner doesn't show voltage slider - aka it's greyed out. EVGA precision x doesn't seem to have an option for statistics monitoring like afterburner does in real time. What to do?

I need the added voltage to go over 2100mhz but I can't monitor it when it is applied, or apply it whilst monitored.


----------



## Vlada011

Can I ask you something, RTX 2080 Ti owners? You don't have to answer if you don't want to.
What is the highest price you are ready to pay for a high-end graphics card?

$2000? I think $2100-2150.
Because we saw that no one wanted the Titan Z for $3000.
Right?

Some of you will say this is the highest price and they will mean it, but that's not true; you are ready to pay $1500-1700, maybe even more.


----------



## Esenel

Vlada011 said:


> Can I ask you something, RTX 2080 Ti owners? You don't have to answer if you don't want to.
> What is the highest price you are ready to pay for a high-end graphics card?
> 
> $2000? I think $2100-2150.
> Because we saw that no one wanted the Titan Z for $3000.
> Right?
> 
> Some of you will say this is the highest price and they will mean it, but that's not true; you are ready to pay $1500-1700, maybe even more.


Completely depending on the situation.
If we take the 2080 Ti.
This card can do 1440p maxed out @120-200fps depending on the game.
So no faster card needed for this setup.
In 4k this card can do from 60-120 fps depending on the game.
So also fast (enough) for nearly everything and a lot of players.
So only VR people remain.
There nothing can be fast enough.
But the market is too small to ask for every price.
I would say next gen graphics cards have to compete with their price again.

I myself will wait for a 27" 4K 144 Hz screen without an annoying fan, and the 65" 144 Hz "TV".

Only then an upgrade will make sense again.

And to come back to the price and this constellation.

Next gen gpu: max 1500€ for 4k 144 fps+ 
27" 4k 144 Hz screen max 1500-2000€
65" 4k 144 Hz TV max 2000-2500€

But at the moment I am totally fine 😉


----------



## UdoG

anticommon said:


> Here's a question: Got my 2080 ti xc ultra under water, and now MSI afterburner doesn't show voltage slider - aka it's greyed out. EVGA precision x doesn't seem to have an option for statistics monitoring like afterburner does in real time. What to do?
> 
> I need the added voltage to go over 2100mhz but I can't monitor it when it is applied, or apply it whilst monitored.


You have to download the latest beta version of Afterburner!


----------



## Spiriva

anticommon said:


> Here's a question: Got my 2080 ti xc ultra under water, and now MSI afterburner doesn't show voltage slider - aka it's greyed out. EVGA precision x doesn't seem to have an option for statistics monitoring like afterburner does in real time. What to do?
> 
> I need the added voltage to go over 2100mhz but I can't monitor it when it is applied, or apply it whilst monitored.


Options > General > under "Compability properties" unlock voltage control and unlock voltage monitoring. 
Then restart afterburner and it should show.


----------



## Lefty23

anticommon said:


> Here's a question: Got my 2080 ti xc ultra under water, and now MSI afterburner doesn't show voltage slider - aka it's greyed out. EVGA precision x doesn't seem to have an option for statistics monitoring like afterburner does in real time. What to do?
> 
> I need the added voltage to go over 2100mhz but I can't monitor it when it is applied, or apply it whilst monitored.


"MSI AB 4.6.0 Beta 9" works with my XC Ultra. Have you unlocked voltage control in the application General Settings?


----------



## GosuPl

Turing funreal...


----------



## anticommon

Thanks everyone, it was the beta version of Afterburner that I needed to make the voltage option actually function while checked.

Now, apparently my card simply runs hot. It's sitting anywhere from the upper 50s to low 70s, and I have a dual 360mm radiator setup with plenty of airflow. The CPU isn't much different from its normal 44-55c @ 5.1 (8700k); just the GPU is way hotter than my 1080 Ti was, and it is clearly leading to thermal throttling. I have the core liquid-metaled to the block and made sure to apply a decent amount, spread evenly over both block and die.

The core tends to sit in the 2020-2050MHz range but will sometimes dip in benchmarks, hitting the low 1900s with temps in the 65-72c range.

I have the voltage maxed (although I didn't really see any difference between like +60 and +100 on voltage, because it automatically adjusts on the fly) and PL @ 130. Memory seems to be able to do +800 to 900 depending on the test.

Does this sound pretty... average? Or is this kind of a bad clocker? I'm getting like 6400 in Time Spy Extreme and 14283 in regular Time Spy... Doesn't really seem right, but maybe my card just won't do more. I could try reapplying my LM on the die to see if that changes much, but I would rather not disassemble my loop right now.

EDIT: EKWB also didn't send all the thermal pads. I had to use one and really get creative to make sure every component was fully covered. And the backplate really didn't have an appropriate amount of thermal pads either; I had to mess with that one as well.


----------



## Garrett1974NL

Those temps are way too high, I have the EK-VGA Supremacy on my 2080Ti and the highest I've seen is 47c... stock BIOS (290W max)
What waterblock do you have on it?


----------



## Zurv

I too had temps in the 50s - I added another pump to the mix and it dropped the temps to the mid 40s.
(1.5LPM to 6LPM... 4LPM was the sweet spot. After that the temps didn't drop much.)


----------



## Spiriva

anticommon said:


> Thanks everyone, it was the beta version of Afterburner that I needed to make the voltage option actually function while checked.
> 
> Now, apparently my card simply runs hot. It's sitting anywhere from the upper 50s to low 70s, and I have a dual 360mm radiator setup with plenty of airflow. The CPU isn't much different from its normal 44-55c @ 5.1 (8700k); just the GPU is way hotter than my 1080 Ti was, and it is clearly leading to thermal throttling. I have the core liquid-metaled to the block and made sure to apply a decent amount, spread evenly over both block and die.
> 
> The core tends to sit in the 2020-2050MHz range but will sometimes dip in benchmarks, hitting the low 1900s with temps in the 65-72c range.
> 
> I have the voltage maxed (although I didn't really see any difference between like +60 and +100 on voltage, because it automatically adjusts on the fly) and PL @ 130. Memory seems to be able to do +800 to 900 depending on the test.
> 
> Does this sound pretty... average? Or is this kind of a bad clocker? I'm getting like 6400 in Time Spy Extreme and 14283 in regular Time Spy... Doesn't really seem right, but maybe my card just won't do more. I could try reapplying my LM on the die to see if that changes much, but I would rather not disassemble my loop right now.
> 
> EDIT: EKWB also didn't send all the thermal pads. I had to use one and really get creative to make sure every component was fully covered. And the backplate really didn't have an appropriate amount of thermal pads either; I had to mess with that one as well.


That seems like a pretty high temp for a watercooled card. I also use the EK version; they didn't send me enough thermal pads either. But it worked out fine though, you could cut the one you did get so it covered everything that needed it.
You might need to pull the block off and reinstall it? My card runs at 2150-2175MHz currently using the Galax 380W BIOS (normally it pulls around 340W though). Only seen it pull 374W during 3DMark.


----------



## DooRules

They sent enough thermal pads. It's just that their instructions on where to put them were wrong. I did it the way the instructions said at first, and temps were out to lunch for a water-cooled GPU. Redid the GPU twice, same thing. Where it says to put the 8mm strips, I left them off, and all good. I covered the mem chips and VRMs, that's it. Have not seen 40° since with repeated benching.


----------



## Garrett1974NL

DooRules said:


> They sent enough thermal pads. It's just that their instructions on where to put them were wrong. I did it the way the instructions said at first, and temps were out to lunch for a water-cooled GPU. Redid the GPU twice, same thing. Where it says to put the 8mm strips, I left them off, and all good. I covered the mem chips and VRMs, that's it. Have not seen 40° since with repeated benching.


You mean on page 6 of this PDF?
https://www.ekwb.com/shop/EK-IM/EK-IM-3831109813140.pdf
The pads with the 3 on them?


----------



## anticommon

Garrett1974NL said:


> Those temps are way too high, I have the EK-VGA Supremacy on my 2080Ti and the highest I've seen is 47c... stock BIOS (290W max)
> What waterblock do you have on it?


EK-VECTOR

Looks like I probably need to reinstall, but tbh it's just really annoying. I'm in a state of constantly dropping my memory clocks due to the fact that after ~10 minutes of PUBG it starts artifacting before eventually crashing.

As far as thermal pads go, they only sent me E & F from the EK installation guide. I did manage to cover everything they showed, though. I have to imagine there was something wrong with my LM install; I put some of the EK thermal compound over the little jelly beans beside the chip and used what I thought was an appropriate amount of LM on the chip, but who knows? Maybe the clearances are off by just a little, preventing good contact.

I will say tho, I love this nickel backplate. Looks so nice... if only the temps were better haha.


----------



## TahoeDust

Has anyone heard about anyone making a 2080 Ti FTW3 block? EKWB said they were working on one and to check back in a couple weeks.


----------



## xer0h0ur

DooRules said:


> They sent enough thermal pads. It's just that their instructions on where to put them were wrong. I did it the way the instructions said at first, and temps were out to lunch for a water-cooled GPU. Redid the GPU twice, same thing. Where it says to put the 8mm strips, I left them off, and all good. I covered the mem chips and VRMs, that's it. Have not seen 40° since with repeated benching.


Actually, everyone that ordered from their initial run of Vector blocks was only sent two thermal pads, and the instructions also only called for using those two thermal pads. They have since included a 3rd pad and updated the installation instructions.

Me personally, my card is doing just fine without that other pad. I have no reason to fumble with my block again until I open it for a cleaning waaaaaaaaaay later down the road.


----------



## DooRules

Garrett1974NL said:


> You mean on page 6 of this PDF?
> https://www.ekwb.com/shop/EK-IM/EK-IM-3831109813140.pdf
> The pads with the 3 on them?


Yup, I left the number 3 pads off. If they were only 0.5mm then maybe good contact all around, but with 1mm pads they seem to raise the block too much.


----------



## anticommon

DooRules said:


> Yup, I left the number 3 pads off. If they were only 0.5mm then maybe good contact all around, but with 1mm pads they seem to raise the block too much.


Could this be my issue? The instructions say pads 3 (aka F) are 1.0mm... just like pads 2 (aka G), but if they are in fact supposed to be 0.5mm...


----------



## tekathan

Does anyone know where I can get the BIOS that raises the power limit of the 2080 Ti to 154%?
Thank you


----------



## tekathan

Also, has anyone tried this BIOS?
https://www.techpowerup.com/vgabios/204932/204932
It claims a 373W limit, which is in the 140s?
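The percentage shown in tools like Afterburner is just the target divided by the board's default power limit. Assuming a default of roughly 260W (an assumption for illustration, not a value read from the actual BIOS), 373W does land in the low 140s:

```python
# Convert a BIOS power target in watts to the slider percentage shown
# in overclocking tools. The 260W default below is an assumed stock
# limit for illustration, not read from any real BIOS.

def limit_percent(target_w: float, default_w: float) -> int:
    """Power-limit slider percentage for a given wattage target."""
    return round(target_w / default_w * 100)

print(limit_percent(373, 260))  # 143 -- "in the 140s", as claimed
```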


----------



## kot0005

anticommon said:


> Thanks everyone it was the beta version of after burner that I needed to make the voltage option actually function while checked.
> 
> Now, apparently i think my card simply runs hot. It's sitting at anywhere from upper 50's to low 70's when I have a dual 360mm radiator setup with plenty of airflow. Cpu isn't much different from the normal 44-55c @ 5.1 (8700k) just the GPU is way hotter than my 1080 ti was and it is clearly leading to thermal throttling. I have the core liquid metaled to the block and made sure to apply a decent enough amount and spread evenly over both block and die.
> 
> The core tends to sit in the 2020-2050 mhz range but will sometimes dip in benchmarks hitting low 1900's with temps in the 65-72c range.
> 
> I have the voltage maxed (althoough I didn't really see any difference between having like +60 and +100 on voltage because it automatically adjusts on the fly) and PL @ 130. Memory seems to be able to do +800 to 900 depending on the test.
> 
> Does this sound pretty.. average? Or is this kind of a bad clocker. I'm getting like 6400 in timespy extreme and 14283 in regular timespy... Doesn't really seem right but maybe my card just won't do more. I could try reapplying my LM on the die to see if that changes much but I would rather not dissassemble my loop right now.
> 
> EDIT: EKWB also didn't send all the thermal pads. I had to use one and really get creative to make sure every component was fully covered. And the backplate... Really didn't have an appropriate amount of thermal pads, had mess with that one as well.


Bruh, just use Thermal Grizzly. Pretty sure you have poor contact. Mine doesn't go over 43c with a 25c room temp. Same everything, except dual 480s.

I am using 1mm pads on the chokes. You need to ditch that LM; it's too thin for GPUs.


----------



## kx11

Could someone explain the difference between the ASUS Strix OC and Advanced models? I mean, if my waterblock says it's made for the OC 2080 Ti, can I use it on the Advanced model?


----------



## dante`afk

Does anyone use a DVI-to-DisplayPort adapter and has seen any issues on their DVI screen with this card?

Mine goes black sometimes for a second depending on which app is started, and I also see, from time to time, green frequency lines(?) over the screen when it's just idle.

Didn't have this with my 1080 Ti.


----------



## kx11

man that GALAX OC LAB is killing everyone 





https://www.3dmark.com/hall-of-fame-2/timespy+3dmark+score+extreme+preset/version+1.0/1+gpu


----------



## arcDaniel

tekathan said:


> Also, Has anyone tried this BIOS
> 
> https://www.techpowerup.com/vgabios/204932/204932
> 
> Claims to be 373 W limit which is in the 140s?




The question is, will the FTW3 BIOS work on FE cards?


Gesendet von iPhone mit Tapatalk Pro


----------



## anticommon

kot0005 said:


> Bruh, just use thermal grizzily. Pretty sure u have poor contact. Mine doesnt go over 43c with 25c room temp. Same everything except dual 480
> 
> I am using 1mm pads on chokes. You need to ditch tgat LM. Its too thin for gpus.


I just finished taking it apart, and somehow the GPU was not seated properly. Made sure to get it all back together and switched the LM for the EK Ectotherm that came in the box. Sitting at low 50s peak during Time Spy. Going to try the GALAX BIOS now and see if I can get the clocks higher, as I'm constantly power limited.


----------



## Nico67

Seems my PSU is the issue causing the reboots; the Silverstone SX800-LTI is not enough to run stable in games, and will reboot in some games as often as every 15 minutes or so.

Stacked an SX650-G on one of the 8-pin GPU ports and no more issues. Thought it might be a dud PSU, but the 650W by itself can't even get into a game. Not sure why these SFX PSUs can't handle it, particularly when the 800W one is supposed to be able to run two 1080 Tis etc.

May try to pick up a 1000W Seasonic Titanium to see if that runs it OK.

Sent a mail off to Silverstone as well, so that will be interesting; any other thoughts as to why the SFX PSUs are an issue are most welcome. It seemed like the instantaneous power demand was too high or increased too rapidly, as the highest I saw on the UPS output was 550W-ish; maybe the 800W unit's reaction time gets worse when it heats up a bit?
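The transient-spike theory above can be sanity-checked with a back-of-envelope calculation: a card whose average draw fits the PSU can still trip over-current/over-power protection if millisecond spikes overshoot. The 1.6x spike factor below is an assumed illustrative number, not a measured Turing figure:

```python
# Back-of-envelope check of the transient-spike theory. The spike
# factor is an assumption for illustration, not measured data.

def worst_case_draw(avg_system_w: float, gpu_avg_w: float,
                    spike_factor: float = 1.6) -> float:
    """System draw if the GPU momentarily spikes above its average."""
    return avg_system_w - gpu_avg_w + gpu_avg_w * spike_factor

# ~550W at the wall with a ~300W GPU share, as described above:
peak = worst_case_draw(550, 300)
print(peak)  # 730.0 -- under 800W on paper, but close enough that a
             # slow-reacting protection circuit could still trip
```

That would be consistent with the 800W unit shutting down while a second stacked PSU (isolating the GPU rails) fixes it.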


----------



## anticommon

Also anyone know why MSI afterburner only shows up to +1000 on memory? Might try to go a bit farther now that the block has been reseated and is likely making better contact with the memory chips.


----------



## zhrooms

*Uh oh!*

After returning my *Gaming X Trio* I decided to go for the cheapest one I could find, which was the very recently released *Palit Dual*, non-OC version of the GamingPro/GamingPro OC, identical card except no backplate and no factory-OC, perfect for water cooling. 

To my surprise, it turned out *NOT* to be the *A version* (1E07). The card itself looks to be overclocking just fine, like an A version; it managed to reach 1905MHz @ 60c no problem, it's just held back by the power limit (280W).

Attempting to flash the card results in a *GPU mismatch* error, because it's a *1E04* (Non-A), not *1E07* (A).

A similar error when trying to flash the *Founders Edition* (now solved with modified NVFlash) was *Board ID mismatch*, so this error is different: a complete *GPU mismatch* instead. The *good news* is that the same modified NVFlash Vipeax released to circumvent the Board ID mismatch seems to work for the GPU mismatch as well!

Definitely nervous to attempt it, I will wait a bit to hear your opinion, if you might have some information I don't. I'd also like to inspect the PCB since I have never seen a 1E04 before, the only 2 cards I know of so far with 1E04 are *MSI Ventus* and *Palit Dual*. Disassembly is very easy because the card does not have a backplate, so I will update you here with pictures once I have removed the cooler.

*GPU-Z* and *NVFlash* below (Palit Dual / 10DE 1E04),



Slightly deeper BIOS information, Board ID: BIOS-P/[email protected]

The differences between Palit Dual and MSI Gaming X Trio

*Palit* Dual
IFR Subsystem ID: 10DE-1E04
Subsystem Vendor ID: 0x10DE
Subsystem ID: 0x1E04
Version: 90.02.0B.40.0B

*MSI* Gaming X Trio
IFR Subsystem ID: 1462-3715
Subsystem Vendor ID: 0x1462
Subsystem ID: 0x3715
Version: 90.02.0B.00.74
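The A/non-A distinction above comes down to a device-ID check, which you can script against GPU-Z or NVFlash output. The ID table below uses only the two IDs mentioned in this post (1E07 / 1E04); treat it as incomplete, and the description strings as my own:

```python
# Classify a TU102 board as A or non-A from its device ID, using only
# the IDs mentioned in this thread -- the table is NOT exhaustive.

TU102_IDS = {
    "1E07": "TU102-300A (A chip, OC BIOS flashable)",
    "1E04": "TU102-300 (non-A chip, flashing gives GPU mismatch)",
}

def classify(device_id: str) -> str:
    """Look up a TU102 device ID, case-insensitively."""
    return TU102_IDS.get(device_id.upper(), "unknown TU102 variant")

print(classify("1e04"))  # the Palit Dual / MSI Ventus case
print(classify("1E07"))  # e.g. MSI Gaming X Trio
```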










Attempting to flash GALAX BIOS with official NVFlash 5.527, resulting in GPU mismatch.










Attempting to flash GALAX BIOS with unofficial NVFlash 5.527, bypasses the error.










Not attempting it *just yet*.


----------



## Zammin

zhrooms said:


> *Uh oh!*
> 
> SNIP


I think the PNY non-OC'd card/s are also non-A from what others are saying in this thread.

How much did the Palit card cost? In Australia the FE is the cheapest card, with the cheapest models (including non-A chips) being a minimum of $100 more, $115 when you factor in postage.


----------



## arcDaniel

arcDaniel said:


> The question is, will the FTW3 BIOS work on FE cards?
> 
> 
> Gesendet von iPhone mit Tapatalk Pro


Yes it works


----------



## toncij

Jpmboy said:


> Oh, I have to dig it up... but it is similar to how differently AMD manages stack execution vs NVIDIA. There's a tube vid out there too somewhere.
> Needless to say, at EVERY launch there are users who see stutter and others who do not... once drivers etc. mature, that should get better. It is certainly not a premonitory sign for the RAM, else every gen would have RAM failures too.


Yes, agree. I'm eager to read about it; very interesting. I can't find anything in the official docs, from Windows to the SDKs'.



owned1390 said:


> EVGA/GB 2080 Ti? Can you tell me if they also stutter?
> But not everyone with a 2080 Ti gets stutters and hiccups?


Sure, as soon as I get those.
Probably not.

By the way: PNY XLR8 is cooler and drastically quieter than Palit GamingPro OC, and it's a 2-slot card, not 3 (Palit is 3-slot).


----------



## zhrooms

Zammin said:


> I think the PNY non-OC'd card/s are also non-A from what others are saying in this thread.
> 
> How much did the Palit card cost? In Australia the FE is the cheapest card, with the cheapest models (including non-A chips) being a minimum of $100 more, $115 when you factor in postage.


 
The XLR8 is A. I've not seen anyone with the PNY blower card; that one might be non-A, but it's listed as 285W TDP on their website, which is probably a typo. Still, boost is the stock 1545 though, idk.

The Palit Dual cost me €1250. Everything about it is cheap: the box, no accessories, no backplate, not the prettiest PCB, non-A, lowest power limit. If it can't be flashed it's a terrible card; it can only do *1845-1860* MHz / *0.900-0.906*v @ *73*c at a lower fan speed, the 280W power limit restricts it.
As a comparison, the MSI Gaming X Trio with its stock 330W BIOS did *1980* MHz / *1.000*v @ *73*c while being slightly quieter, at only 35-45% fan speed. So that's a 120 MHz core clock difference, and the Palit, as said, is still louder. A 6.45% higher core clock is like going from 60 to 65 FPS; a big deal to me.

What can completely save the card is BIOS flash, Water Block & Backplate.

Simply put, at above 70c (low fan speed).
280W ~ 0.900v
330W ~ 1.000v

That's why the MSI can go 120 MHz higher on the core.


----------



## Causality1978

kx11 said:


> man that GALAX OC LAB is killing everyone
> 
> 
> 
> 
> 
> https://www.3dmark.com/hall-of-fame-2/timespy+3dmark+score+extreme+preset/version+1.0/1+gpu



Happy to see this; all the people with the GALAX OC Lab card will kill the "Kingpin" monopoly )))

The monopoly frog must be killed!


----------



## Zammin

Does anyone know if the GDDR6 temp can be measured through software on the FE/reference cards? My FE arrives tomorrow, and if I can, I'd like to make sure the GDDR6 isn't overheating during my initial testing, as that is suspected as a potential cause of the dead cards.


----------



## Pepillo

Another one dead, RTX 2080 Ti FE:

One week of use, stock BIOS, serial 323918. Waiting on an answer for the RMA.


----------



## kot0005

anticommon said:


> Also anyone know why MSI afterburner only shows up to +1000 on memory? Might try to go a bit farther now that the block has been reseated and is likely making better contact with the memory chips.


AB is limited to +1000; you have to use EVGA Precision X1.



Nico67 said:


> Seems my PSU is the issue causing reboots, Silverstone SX800-LTI is not enough to run stable in games, will reboot in some games as much as every 15mins or so.
> 
> Stacked a SX650-G on one of the 8pin GPU ports and no more issues, thought it might be a dud PSU, but the 650w by itself can't even get into the game. Not sure why these SFX PSU's can't handle it, particularly when the 800W is supposed to be able to run two 1080Ti's etc.
> 
> May try to pickup a 1000w Seasonic Titanium to see if that runs it ok.
> 
> Sent a mail off to Silverstone as well, so that will be interesting; any other thoughts as to why the SFX PSUs are an issue are most welcome. It seemed like the instantaneous power demand was too high or ramps too rapidly, as the highest I saw on the UPS output was ~550W. Maybe the 800W unit's reaction time gets worse when it heats up a bit?





arcDaniel said:


> Yes it works


I have a Corsair AX860i, and the max power my PC consumed is 600W. I have an 8700K at 1.3v and a 2080 Ti at 2070 MHz at 110/130% of the 338W BIOS for 24/7 usage. With lots of LEDs, 20 fans, an Aquaero, and two D5s, 800W should be enough..



Causality1978 said:


> Happy to see that all the people with GALAX OC Lab cards will kill the "Kingpin" monopoly )))
> 
> The monopoly frog must be killed!


lol


----------



## kot0005

Pepillo said:


> Another one dead, RTX 2080 TI FE:
> 
> One week of use, stock bios, serial 323918. Waiting answer for RMA.


Watercooling or air ?


----------



## Pepillo

kot0005 said:


> Watercooling or air ?




Stock, on air.


----------



## Iskaryotes

I have the weirdest issue with my 2080 Ti. It takes 30+ seconds for the GPU/monitor to wake up from sleep. When I cold boot the PC, by the time the monitors come on, it's already showing the Windows Login screen.

I did all the basic troubleshooting (e.g., reinstalled drivers, reseated the card, switched cables, monitors, ports, etc.), and I am 100% sure that the culprit is the new GPU. I have 980 Tis and Quadro cards that I also tested with, and it takes 10 seconds max for those cards to wake/boot up.

It's not the biggest problem, but it's not something that I can tolerate for a $1300 card. But before I RMA the card, I wanted to know if any other owners are experiencing similar issues. I've tried Googling with all my might, but I can't seem to find a single person with a similar issue as mine.


----------



## Zammin

Pepillo said:


> Another one dead, RTX 2080 TI FE:
> 
> One week of use, stock bios, serial 323918. Waiting answer for RMA.


Bloody hell.. I really hope my FE is gonna be alright.. It's the only non-bottom-tier card (except the Gigabyte OC, which doesn't work with waterblocks) that I could get in Aus. Got sick of the never-ending wait with no ETA for my back-ordered EVGA card.. When did you purchase that one?


----------



## Pepillo

Zammin said:


> Bloody hell.. I really hope my FE is gonna be alright.. Its the only non-bottom tier card (except the Gigabyte OC which doesnt work with waterblocks) that I could get in Aus. Got sick of the never ending wait with no ETA for my back-ordered EVGA card.. When did you purchase that one?


I purchased it on 20/10, installed it in my rig on the 23rd, dead on the 29th .............


----------



## Zammin

Pepillo said:


> I purchased it on 20/10, installed it in my rig on the 23rd, dead on the 29th .............


Damn.. I bought mine on the 30th.. Guess this batch may not be safe after all.. I really want to know the cause for these failures, if it turns out to be insufficient memory cooling I wonder what Nvidia will do for all the existing FE owners? I would think they would have to revise the cooler and replace it for all existing owners experiencing issues or high temps, or maybe some really bloody good thermal pads for the memory..

I mean, that's the major difference between the FE cooler and most AIB ones, the full cover vapor chamber right?

I will run mine on air for a while; if it dies, hopefully I don't have to mail it overseas. It was posted from New South Wales, which is the neighboring state to me, so hopefully I would only have to send it back there.


----------



## Spiriva

Pepillo said:


> I purchased it on 20/10, installed it in my rig on the 23rd, dead on the 29th .............


That really sucks, hopefully you get a new one fast from Nvidia.
I used mine now for a week, installed it in the loop last saturday (27/10). Abit scared it will die too, will jsut suck so much to drain the loop, and remount the original cooler to send it back to Nvidia. 
And on top of that i already sold my 1080ti 

When it died did it "just happen" like it was fine and then the next second it wasnt, or did you get weird artifacts that went away and then they came back before it completly got trashed like in ur pics ?


----------



## uggy

Anyone here who has the Founders 2080 Ti and put a water block on it? Have you flashed the BIOS, to the EVGA FTW3 for example? I'm buying two myself and wondering if it's worth flashing them when I put water blocks on them.

Edit: what is the "best" BIOS to flash to?


----------



## tekathan

Was anyone able to successfully flash the Galax BIOS
https://www.techpowerup.com/vgabios/204869/204869
onto the EVGA 2080 Ti XC?


----------



## Pepillo

Spiriva said:


> When it died, did it "just happen", like it was fine and then the next second it wasn't? Or did you get weird artifacts that went away and then came back before it completely got trashed like in your pics?


It just happened. No symptoms or warning; it occurred suddenly.


----------



## kot0005

Zammin said:


> Damn.. I bought mine on the 30th.. Guess this batch may not be safe after all.. I really want to know the cause for these failures, if it turns out to be insufficient memory cooling I wonder what Nvidia will do for all the existing FE owners? I would think they would have to revise the cooler and replace it for all existing owners experiencing issues or high temps, or maybe some really bloody good thermal pads for the memory..
> 
> I mean, that's the major difference between the FE cooler and most AIB ones, the full cover vapor chamber right?
> 
> I will run mine on air for a while, if it dies hopefully I don't have to mail it overseas. It was posted from New South Whales which is the neighboring state to me, so hopefully I would only have to send it back there.


Non-FE cards seem to be dying too. GamersNexus is getting three dead XC Ultras from users. He said it could have something to do with the VRM traces running under the memory modules.

If you think about it, Turing has VRMs on either side of the GPU. That's the only major PCB difference.


----------



## zhrooms

Iskaryotes said:


> I have the weirdest issue with my 2080 Ti. It takes 30+ seconds for the GPU/monitor to wake up from sleep. When I cold boot the PC, by the time the monitors come on, it's already showing the Windows Login screen.
> 
> I did all the basic troubleshooting (e.g., reinstall drivers, reseat card, switch cables, monitors, ports, etc.), and I am 100% sure that the culprit is the new GPU. I have 980 Tis and Quadro cards that I also tested with and it take 10 seconds max for those cards to wake/boot up.
> 
> It's not the biggest problem, but it's not something that I can tolerate for a $1300 card. But before I RMA the card, I wanted to know if any other owners are experiencing similar issues. I've tried Googling with all my might, but I can't seem to find a single person with a similar issue as mine.


 
Hey, do not RMA! It's a *software issue*. I ran into it myself a week ago, out of nowhere: after waking from sleep mode, all I get is a black screen for x seconds before the login appears. It didn't happen a few weeks back, and the only things that have changed are some random Windows updates and a driver re-install or two. It's definitely not something wrong with the card, just a software bug, as said. Hopefully it's resolved in a future driver release (if it is the driver, that is; it could be the Windows updates, perhaps even GFE, which I have running).

Currently using: *Windows 10 Pro, 64-bit, Version 1809* & *GeForce Game Ready Driver 416.34 WHQL* & *DisplayPort 1.2*


----------



## Emmett

Pepillo said:


> Another one dead, RTX 2080 TI FE:
> 
> One week of use, stock bios, serial 323918. Waiting answer for RMA.


My second Founders, also a 323918, is starting to fail too. Can't wait to ship it back tomorrow. Meanwhile my Zotac AMP Extreme runs great, and I just put a block on my Asus Turbo.


----------



## Addsome

Emmett said:


> Pepillo said:
> 
> 
> 
> Another one dead, RTX 2080 TI FE:
> 
> One week of use, stock bios, serial 323918. Waiting answer for RMA.
> 
> 
> 
> My second founders also a 323918 is starting to fail also. Can't wait to ship it back tomorrow. Meanwhile my zotac amp extreme runs great. And I just blocked my Asus turbo

What's the power limit on the AMP Extreme, and where'd you get it from?


----------



## Garrett1974NL

Still no watercooled card has died as far as I've read... only aircooled ones.
So I think the memory is overheating to the point of no return: getting so hot that it gets damaged and starts showing artifacts. From that point on it's RMA time.


----------



## Emmett

"What's the power limit on the AMP Extreme, and where'd you get it from?"


It's a bit lower: 115%.

I got it from newegg 3-4 weeks ago.


----------



## Addsome

Emmett said:


> "What's the powerlimit on the amp extreme and whered you get it from?"
> 
> 
> it's a bit lower 115%
> 
> I got it from newegg 3-4 weeks ago.


What's the base wattage for the card?


----------



## Emmett

Addsome said:


> What the base wattage for the card?




I'll pull the Founders later, put the Zotac back in, and check sometime today; I forget ATM.


----------



## Lownage

260W × 115% = max ~300W


----------



## Addsome

Lownage said:


> 260W × 115% = max ~300W


Same as the normal amp version then


----------



## Lownage

Oh, my bad. It's 300W × x%.
I thought you were talking about the AMP! without the Extreme.


----------



## sblantipodi

My 2080 Ti FE is completely stable at +150MHz/+500MHz, but it has loud coil whine. Is this something to worry about?


----------



## Spiriva

sblantipodi said:


> my 2080Ti FE is completely stable at +150MHz/+500MHz but it does a loud coil whine, is this something to worry about?



Coil whine is just annoying; it does not hurt the card in any way.


----------



## sblantipodi

Spiriva said:


> sblantipodi said:
> 
> 
> 
> my 2080Ti FE is completely stable at +150MHz/+500MHz but it does a loud coil whine, is this something to worry about?
> 
> 
> 
> 
> Coil whine is just annoying; it does not hurt the card in any way.

Ok thanks


----------



## Edge0fsanity

sblantipodi said:


> my 2080Ti FE is completely stable at +150MHz/+500MHz but it does a loud coil whine, is this something to worry about?


My FE had really bad coil whine. I returned it; no way I'm putting up with that from a $1200 card.


----------



## cstkl1

Got a Strix 2080 Ti.. found it by pure luck.

Now using this while waiting for a block for that damn MSI.

The question now.. should I flash that Galax BIOS? 🤔🤔🤔🤔🍈


----------



## Nico67

kot0005 said:


> I have a Corsair AX860i and the max power power my PC consumed is 600W. I have an 8700k at 1.3v & 2080Ti at 2070Mhz at 110/130% of 338W bios for 24/7 usage. lots of LED's, 20 Fans, Aquaero, and 2 D5's so 800W should be enough..


Decided to pull the Seasonic 1000W Platinum out of my old PC, as I had its original cables in the box (so I left the CableMod ones in that case). An easy test, and it runs fine, so I don't think it's the 800W limit that's the issue; maybe it's just a limitation of the SFX PSU in some way.

Runs games nicely at 2145/7499, pretty much locked at 360W with very few hits on the power limit.


----------



## dVeLoPe

I am receiving an EVGA XC Black Gaming variant of this card soon and want to know the best block to install on it.


----------



## Zammin

kot0005 said:


> Non-FE cards seem to be dying too. GamersNexus is getting three dead XC Ultras from users. He said it could have something to do with the VRM traces running under the memory modules.
> 
> If you think about it, Turing has VRMs on either side of the GPU. That's the only major PCB difference.


Hmmm, you have an interesting point there. I wonder if the reason the FEs have the most reported failures might be that they are the most purchased model? After all, they are cheaper than any AIB models as far as I am aware, and Nvidia were constantly selling out until just recently, when the failures started popping up.

I'm very much looking forward to GN's analysis of the dead cards. I really appreciate the money and time they are putting into investigating the issue.


----------



## jcde7ago

Garrett1974NL said:


> Still no watercooled card has died as far as I've read... only aircooled ones.


Way to jinx it, chief...

https://www.reddit.com/r/nvidia/comments/9u68m4/rip_asus_turbo_2080_ti_on_water/

But seriously, there have been a few reports of watercooled FEs and AIBs dying already before this Reddit post, check the Nvidia forums. This is why people have been hesitant to put water blocks on FEs immediately, because if they're gonna fail, watercooling doesn't really seem to help much at the moment and depending on your vendor, taking apart a card to put a block on it voids warranty (assuming they check for it before issuing an RMA or ask about it and someone happens to just 'admit' to it).

If there is an inherent design/component flaw with these cards, watercooling them isn't going to do crap in terms of hoping to save them from inevitably failing.



Zammin said:


> I wonder if the reason for the FE's being the most reported failures might be because they are the most purchased model?


Yes; because Nvidia hogged the majority of the yields for their own release, the FE is logically the most available 2080 Ti model, and thus the one with the most failure reports.


----------



## arrow0309

Garrett1974NL said:


> Still no watercooled card has died as far as I've read... only aircooled ones.
> So... the memory is overheating to the point of no return I think.. as in: getting so hot that it gets damaged and showing artifacts. From that point on it's RMA time.


Not exactly.
There is a guy on (our) Italian forum Hardware Upgrade, in the RTX Turing thread, whose 2080 FE, under water cooling from the beginning, died after a week or less.
The strange thing is he never overclocked the memory, only the GPU (and not even hard).

I've had mine (2080 Ti FE) running on air since Friday afternoon; so far so good (no big-deal clocks, but +120/+500 is holding). I'll keep it on air cooling for at least 10 days, testing it the best I can, before installing the Vector block and backplate (already in my possession).
Today I played on my PS4 Pro (RDR2 is way too addictive), but I ran the FE under Valley on an endless loop for ~9h (max PL / OC +100/+500).


----------



## jcde7ago

arrow0309 said:


> but I'll keep it on air cooling for at least 10 days testing it the best I can before installing the Vector block and bp.


Honestly, I would recommend a minimum of 3-4 weeks pushing your card hard on AIR before putting a water block on it. I'm staring at my Alphacool Eiswolf AIO, tempted to slap it on my 2nd 2080 Ti FE any day now, but I keep reminding myself to wait it out and make sure this card doesn't suffer the same fate as its predecessor, which only lasted two weeks...

Based on what I've read on the forums/Reddit, etc., probably greater than 90% of the reported failures occur within 7-14 days, so if you can at least double that time period and your card is still chugging along with zero issues, you can feel somewhat more comfortable, I suppose. (It goes without saying that if you have a warranty or refund policy that requires you to return the card within 30 days, don't wait until that window passes to complete your testing.)


----------



## arrow0309

jcde7ago said:


> Honestly, I would recommend a minimum of 3-4 weeks pushing your card hard on AIR before putting a water block on it...i'm staring at my Alphacool Eiswolf AIO, tempted to slap it on my 2nd 2080 Ti FE any day now, but continue to remind myself that I need to just wait it out to make sure this card doesn't suffer the same fate as its predecessor that only lasted two weeks...
> 
> Based on what i've read on the forums/Reddit, etc., probably greater than 90% of the reported failures are occurring between 7-14 days, so if you can at least double that time period and your card is still chugging along with zero issues...you can feel somewhat more comfortable I suppose (it goes without saying that if you have a specific warranty or refund policy that requires you to return the card within 30 days then of course, don't wait until that window passes to complete your testing).


Hi, you're right, but I'd still prefer not to stress it so hard (OC-wise) because of the high temps I'm getting on air cooling, up to 80-81C (maybe I'd better replace the thermal paste, the least I can do right now), and I wouldn't want these temps to affect the card later on, after the waterblock install.
On the other hand, if I still had to return it, wouldn't that be easier within 30 days in order to get a full refund (if I decide that)?
Hmmm, I don't know what to do exactly; for now I'll continue to (normally) "stress" test it, mainly with gaming and some looped benchmarks.


----------



## Jpmboy

tekathan said:


> Also, Has anyone tried this BIOS
> https://www.techpowerup.com/vgabios/204932/204932
> Claims to be 373 W limit which is in the 140s?


The % increase is over the base power value, so by itself it means little. Just look at the BIOS entry under the drop-down in GPU-Z's Advanced tab; the 380W BIOS is only 126% (base is 300W).
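As a sanity check on the percentage-vs-watts point (a small sketch using the figures in this exchange; the helper name is mine):

```python
def limit_watts(base_w: float, percent: float) -> float:
    """Convert a power-limit slider percentage into absolute watts."""
    return base_w * percent / 100

# Galax 380W BIOS: a 300W-base card at its 126% maximum slider
print(limit_watts(300, 126))  # 378.0 (reported/rounded as ~380W)

# A "373W limit in the 140s" only makes sense against a ~250W base:
print(limit_watts(250, 149))  # 372.5
```

In other words, the same wattage ceiling can show up as very different percentages depending on which BIOS's base power the slider is relative to.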


tekathan said:


> Does anyone know where I can get the Bios that raises my Power Limmit of the 2080 TI to 154% ?
> Thank you


see above. 


kx11 said:


> man that GALAX OC LAB is killing everyone
> 
> https://www.3dmark.com/hall-of-fame-2/timespy+3dmark+score+extreme+preset/version+1.0/1+gpu


All LN2. But yeah, the HOF cards are always good if you can get one. They also open up _real_ voltage control.





anticommon said:


> Also anyone know why MSI afterburner only shows up to +1000 on memory? Might try to go a bit farther now that the block has been reseated and is likely making better contact with the memory chips.


Just download the GALAX extreme tuning tool (RTX version). Much better behaved than X1, IMO.


Pepillo said:


> Another one dead, RTX 2080 TI FE:
> 
> One week of use, stock bios, serial 323918. Waiting answer for RMA.


Ruh-roh. I have the same lot. It's been 2 weeks under water (Bitspower block).






DID get to do some benching this evening...
Can't hold 2175 in 1440p Time Spy. Water cooled; loop temp was 12C.


----------



## Iskaryotes

zhrooms said:


> Hey, do not RMA! It's a *software issue*, I ran into it myself a week ago, out of nowhere, after waking up from sleep mode all I get is a black screen for x seconds before login appears, didn't happen a few weeks back, only thing that's changed is some random Windows updates and a driver re-install or two. But it's definitely not something wrong with the card, just a software bug as said. Hopefully resolved in a future driver release (if it is the driver that is, could be Windows updates, perhaps even GFE which I have running).
> 
> Currently using: *Windows 10 Pro, 64-bit, Version 1809* & *GeForce Game Ready Driver 416.34 WHQL* & *DisplayPort 1.2*


Oh thank you so much for letting me know! Other than this, I don't have any issues with my card (so far), so I am glad that it's just a software issue. I appreciate it!


----------



## Addsome

jcde7ago said:


> arrow0309 said:
> 
> 
> 
> but I'll keep it on air cooling for at least 10 days testing it the best I can before installing the Vector block and bp.
> 
> 
> 
> Honestly, I would recommend a minimum of 3-4 weeks pushing your card hard on AIR before putting a water block on it...i'm staring at my Alphacool Eiswolf AIO, tempted to slap it on my 2nd 2080 Ti FE any day now, but continue to remind myself that I need to just wait it out to make sure this card doesn't suffer the same fate as its predecessor that only lasted two weeks...
> 
> Based on what i've read on the forums/Reddit, etc., probably greater than 90% of the reported failures are occurring between 7-14 days, so if you can at least double that time period and your card is still chugging along with zero issues...you can feel somewhat more comfortable I suppose (it goes without saying that if you have a specific warranty or refund policy that requires you to return the card within 30 days then of course, don't wait until that window passes to complete your testing).

Someone's watercooled card failed after 2 months...


----------



## Jpmboy

Here are the pad layouts for the EK and Bitspower blocks. BP is going belt-and-suspenders; clearly some of the pads are there less for cooling than to prevent "accidental" electrical contact.
(GALAX uses the BP block on their 2080 Ti HOF)


----------



## jcde7ago

Addsome said:


> Someone's watercooled card failed after 2 months...


The card hasn't even been out for 2 months...and my recommendation was with warranties/return policies in mind.


----------



## Zammin

My FE turned up today. Can't believe it came in a box with absolutely ZERO packaging inside. The GPU box was just freely sliding around all over the place. The box looked a bit rough too.. This is the only time I've ever ordered PC parts that weren't packaged properly, and for a $1900 graphics card direct from Nvidia no less.. Bloody hell Nvidia.. Luckily the retail box the graphics card came in was snugly packed with foam.

This thing looks incredible, build quality wise. It's hard to believe this thing that's shrouded in really nice aluminium is the CHEAPEST version to buy. Obviously not the most effective cooler, but definitely one of the most nicely built ones I've seen.

I'll be able to test it once my motherboard comes in later in the week. I'll throw the 9900k and 2080Ti in my spare Phanteks case and run them air cooled for a while to make sure everything works before throwing them in the custom loop, that way if I have to RMA it won't be a huge pain in the ass.


----------



## Xiph

zhrooms said:


> ...
> 
> Attempting to flash the GALAX BIOS with unofficial NVFlash 5.527 bypasses the error.
> 
> Not attempting it *just yet*.


Has anyone tried this flash with the modified NVFlash (an A-model BIOS onto a non-A model)?
I need to find a way to increase the power limit on my non-A 2070; a BIOS mod is my preferred method, rather than shunt modding.


----------



## kot0005

jcde7ago said:


> Way to jinx it, chief...
> 
> https://www.reddit.com/r/nvidia/comments/9u68m4/rip_asus_turbo_2080_ti_on_water/
> 
> But seriously, there have been a few reports of watercooled FEs and AIBs dying already before this Reddit post, check the Nvidia forums. This is why people have been hesitant to put water blocks on FEs immediately, because if they're gonna fail, watercooling doesn't really seem to help much at the moment and depending on your vendor, taking apart a card to put a block on it voids warranty (assuming they check for it before issuing an RMA or ask about it and someone happens to just 'admit' to it).
> 
> If there is an inherent design/component flaw with these cards, watercooling them isn't going to do crap in terms of hoping to save them from inevitably failing.
> 
> 
> 
> Yes, because Nvidia hogged the majority of their yields for their own release, which logically makes the FEs the most available 2080 Ti model and thus = higher failure reports.


Not a real watercooled card. That block doesn't have water flow over the VRMs and memory, only the GPU core.


----------



## Scrimstar

When are there going to be restocks?


----------



## sblantipodi

Edge0fsanity said:


> My fe had really bad coil whine. I returned it, no way I'm putting up with that from a $1200 card.


I would like to put my card under liquid, and in a watercooled system that coil whine is pretty annoying.
Did the RMA solve the coil whine problem?


----------



## Zammin

Scrimstar said:


> When are there going to be restocks?


Depends where you live; Australia has had them in stock on Nvidia's website for a week now.


----------



## stefxyz

I am sure there are a lot of wrong RMAs out there: the artifact driver issue is still not fixed in the normal NVIDIA driver. You have to download a separate hotfix driver.

It actually happened to me: I installed the new card, started The Witcher, got graphical errors, and thought my card was dead.

Now if that happens to you, and you go to the forums and see news that cards are dying, the first thing you think is: my card is dead too.

Quite a few huge titles like The Witcher 3 and Far Cry 5 are affected by the driver bug.


----------



## Fitzcaraldo

kot0005 said:


> Not a real watercooled card. That block doesn't have water flow over the VRMs and memory, only the GPU core.


Can confirm. The "full cover waterblock" the OP in that post showed is indeed not full cover, but just a fancy shroud for a passive hybrid:
https://www.watercoolinguk.co.uk/im...orce-rtx-2080ti-black-m01-1015772-68329-3.jpg


----------



## Zammin

Watching this video today got me thinking: if you flash the EVGA BIOS to the FE, would you get RGB control in X1? Jay did say they use the exact same connector. Anyone in this thread willing to give it a shot? My card won't be installed in a system for probably a week. Obviously not a big deal, but it's interesting. It's kinda cool seeing the logos in colours other than green for once.


----------



## Brama

Jpmboy said:


> here's the pad layouts for the EK and for the bitspower blocks. BP is going belts and suspenders. clearly some of the pads are less for cooling than "accidental" electrical contact.
> (GALAX uses the BP block on their 2080Ti HOF)


I had to remove the #3 2x1 mm thermal pads on the chokes, as they are too thick; I had a temperature issue on the GPU due to incomplete contact between the waterblock and the GPU die.
Be careful.


----------



## arrow0309

Brama said:


> I had to remove the #3 2x1 mm thermal pads on the chokes, as they are too thick; I had a temperature issue on the GPU due to incomplete contact between the waterblock and the GPU die.
> Be careful.


Hi,
are you on hwupgrade also?
Anyway, so you're saying you left the chokes with no pads whatsoever, as they were initially designed (talking about the EK blocks)?
There is a guy (Beatl) whose FE just died water cooled (in a week or so).
I suspect those 1mm choke pads could cause poor contact with some memory ICs and "may be" the cause of his card's death.
Btw:
According to what he says, he was keeping the memory at stock clocks.


----------



## Zammin

Some good news for the Strix owners who are waiting for more waterblocks: Watercool_Jakob said they are developing a Heatkiller block for the Strix 2080 Ti.


----------



## DooRules

Brama said:


> I had to remove the #3 2x1 mm thermal pads on the chokes, as they are too thick; I had a temperature issue on the GPU due to incomplete contact between the waterblock and the GPU die.
> Be careful.


Had to do the exact same thing. All good after #3 pads were removed. I am not sure if a thinner pad would work for the chokes.


----------



## Edge0fsanity

sblantipodi said:


> I would like to put my card under liquid, and in a watercooled system that coil whine is pretty annoying.
> Did the RMA solve the coil whine problem?


I returned it for a refund and bought an AIB card. That solved the coil whine issue for me.


----------



## Brama

arrow0309 said:


> Hi
> Are you on hwupgrade also?
> Whatever, so you're sayin you left the chokes with no pads whatsoever, like they were initially designed (talking of the EK blocks)?
> There is a guy (Beatl) whose FE just died water cooled (in a week or so).
> I suspect those 1mm chokes pads could develop a poor contact to some memory ic's and "may be" the cause of its vga death.
> Btw:
> According to what he says he was keeping the mem at stock clocks.


Yes, I am on hwupgrade with another nick but the same avatar.
I can confirm I am now running without any #3 pads, and it seems a lot better than with the 1 mm pads.
Crossing my fingers.


----------



## Fitzcaraldo

Best part about it is that EK added the additional pads on the chokes later on to combat coil whine.


----------



## Brama

DooRules said:


> Had to do the exact same thing. All good after #3 pads were removed. I am not sure if a thinner pad would work for the chokes.


As chokes are passive components and their package is not intended to be contact cooled, I am pretty confident.


----------



## Vipeax

@zhrooms, it won't work. The third error will pop up after you confirm the overwrite.


----------



## arrow0309

Brama said:


> As chokes are passive components and component package is not intended to be contact cooled, I am pretty confident.


:specool:
I'm gonna do the same, exactly. 

Btw:
Have a look on hwupgrade.


----------



## Jpmboy

Brama said:


> I had to remove the #3 2x1 mm thermal pads on the chokes, as they are too thick; I had a temperature issue on the GPU due to incomplete contact between the waterblock and the GPU die.
> Be careful.


Thanks, but I've had the Bitspower block on and running since I got the card. The core temp is never more than +12C over the inlet coolant temp, and an IR gun scan of the backside (plate off) shows that everything is nice and cool. Let's see if the card can hold up :thumb:


----------



## cstkl1

Just saw this.. Samsung GDDR6 on an RTX 2080 (Giga).
Wondering if this is the birdy version (since a lot of them just came to MY).

https://forum.lowyat.net/index.php?showtopic=4547881&view=findpost&p=90824448


----------



## kot0005

arrow0309 said:


> Hi
> Are you on hwupgrade also?
> Whatever, so you're sayin you left the chokes with no pads whatsoever, like they were initially designed (talking of the EK blocks)?
> There is a guy (Beatl) whose FE just died water cooled (in a week or so).
> I suspect those 1mm chokes pads could develop a poor contact to some memory ic's and "may be" the cause of its vga death.
> Btw:
> According to what he says he was keeping the mem at stock clocks.



I am using pads on the chokes; I have no issues. It stays under 45C, and it's summer here in Australia.


----------



## Jpmboy

I think a lot of these issues have their origin in "some assembly required".


----------



## Jpmboy

cstkl1 said:


> Got a Strix 2080 Ti.. found it by pure luck.
> 
> Now using this while waiting for a block for that damn MSI.
> 
> The question now.. should I flash that Galax BIOS? 🤔🤔🤔🤔🍈


I've been running the Galax 380W BIOS for a while now; it works fine. But it would be pointless to flash an air-cooled card: you'll just hit the thermal limit instead of the power limit, and it seems that temperature has priority.


----------



## Vipeax

https://www.reddit.com/r/nvidia/comments/9u92p0/founders_rgb_controler_can_work/

Could someone who flashes their FE card try this out? All that I want to know is if flashing it with the EVGA BIOS unlocks the LED control with EVGA Precision.


----------



## zhrooms

Vipeax said:


> It won't work. The third error will pop up after you confirm the overwrite.


 
Do you think there's a way to get past the third error? Also what is the third error specifically?

Removed the cooler and inspected the components earlier; it looks like any reference 1E07. I see no reason at all why a 1E04 GPU couldn't take a 1E07 GPU BIOS. From what I've read it's just supposed to be a lower-binned GPU, still an *identical* GPU, simply not deemed as good for the partners' factory overclocks.


----------



## Vipeax

zhrooms said:


> Do you think there's a way to get past the third error? Also what is the third error specifically?
> 
> Removed the cooler and inspected the components earlier, looks like any reference 1E07, I see no reason at all why an 1E04 GPU couldn't take 1E07 GPU BIOS, from what I've read it's just supposed to be a lower binned GPU, still *identical* GPU, simply not deemed as good for the partners factory overclocks.


There is the subsystem ID (brand, overwritable), the device ID (bypassed) and now also a board ID. That one will pop up after; go give it a try, you'll see. I would also be surprised if the PCB is in any way different from a reference one, but if I remember correctly the EEPROM chip is returning an error status code. If you give it a try I can see if I can change NVFlash wherever needed based on screenshots that you provide, but I do think that they are blocking this on a hardware level too.


----------



## Scrimstar

I'm in the US - when do you guys think the Asus Strix cards will restock?

Not looking to use water, so I assume those are the best on air, besides maybe the Galax cards.


----------



## Vipeax

Scrimstar said:


> I'm in the US, when do you guys think the Asus Strix cards will restock.
> 
> not looking to use water, so I assume those are the best on air, besides maybe galax cards


FTW3 should cool better.


----------



## GosuPl

Can anyone confirm the RGB LED working on an FE card with the EVGA / Galax BIOS? My 2x 2080 Ti totally died; I can't even install drivers.


----------



## Emmett

Garrett1974NL said:


> Those temps are way too high, I have the EK-VGA Supremacy on my 2080Ti and the highest I've seen is 47c... stock BIOS (290W max)
> What waterblock do you have on it?



Which card did you use the Supremacy on?


----------



## Garrett1974NL

Emmett said:


> Which card did you use supremacy on.


The INNO3D GeForce RTX 2080 Ti X2 OC, so far so good.


----------



## Hulk1988

Here are my babies

2080 TI STRIX OC vs FTW3


----------



## Emmett

Garrett1974NL said:


> The INNO3D GeForce RTX 2080 Ti X2 OC, so far so good.


Thanks, I am tempted to pull mine off a 1080 ti and try it on one of my 2080 ti's.


----------



## Garrett1974NL

Emmett said:


> Thanks, I am tempted to pull mine off a 1080 ti and try it on one of my 2080 ti's.


Look here's how I did it... and yeah that's the Geforce font 
The rectangular cutout on the metal plate (on the memory and VRM) is JUST large enough to let the block 'in' if that makes sense.
I used some Swiftech nuts, springs and screws to mount it on the card.
The holes I used correspond with the AMD hole distances.


----------



## Jpmboy

imo, the heatsink/spreader on that card is important when using a uniblock.


----------



## UdoG

I have a question regarding SLI / NVLink.
I got my two EVGA 2080 Ti FTW3s today - both cards run at 2115 on air. However, now I have a little "problem": the first card has a max boost of 1835, the other 1755, according to GPU-Z. I can overclock one card by 130 MHz, the other by only 80/90, because of the higher base boost clock. What will happen if I connect both via NVLink - will the maximum be at 80/90 MHz? Which card would you use as the "Primary" card?
I will start with some tests as soon as I receive my NVLink.


----------



## Garrett1974NL

Jpmboy said:


> imo, the heatsink/spreader on that card in important when using a uniblock.


I think so too 
That's what convinced me to buy this card 
minimal effort to put on the block


----------



## Causality1978

My INNO3D GeForce RTX 2080 Ti X3 OC runs at 100% load 24/7/365, about 260-280W load according to GPU-Z. Two weeks OK so far on the default setup; temps 75-80C depending on room temperature...
The card does what I need and I don't want to "tune" it, so I wonder how long it will hold up at default settings under this nonstop hard load )) Bought from a "no-problem RMA" seller.


----------



## GAN77

Garrett1974NL said:


> I think so too
> That's what convinced me to buy this card
> minimal effort to put on the block


Can you tell me the exact dimensions? Sorry, I used your photo.

Thank you for the information and great benefits.


----------



## Garrett1974NL

@ GAN77 53.2mm x 53.2mm


----------



## GAN77

Garrett1974NL said:


> @ GAN77 53.2mm x 53.2mm


Thank you!


----------



## Jpmboy

GAN77 said:


> Can you tell me the exact dimensions? Sorry, I used your photo.
> 
> Thank you for the information and great benefits.


it's all right *here*


----------



## Garrett1974NL

GAN77 said:


> Thank you!


Mind you, in the metal plate there is an M2.5 thread in the 4 holes!
If you were to remove the metal plate you could use M3 screws, but you'd have to find another solution to cool the VRMs and memory!

And you cannot use the EK mounting material that comes with the EK-VGA Supremacy if you leave the plate on!
That's why I used the old Swiftech stuff I had lying around.


----------



## GAN77

Jpmboy said:


> it's all right *here*


Yes, I've had this block for a long time. Still new.
I would like to try it with my monster)


----------



## GAN77

Garrett1974NL said:


> Mind you, in the metal plate, there is a M2,5 thread in the 4 holes!
> If you were to remove the metal plate you could use M3 screws but you'd have to find another solution to cool the VRM's and memory!
> 
> And you can not use the EK mounting material that comes with the EK-VGA Supremacy if you leave the plate on!
> That's why I used the old Swiftech stuff I had laying around.


This is an important note.


----------



## joshpsp1

Hoping to get my Aorus Xtreme next week. Aside from the boost clock, has anyone seen any information about it? Seems to be nothing at all, no reviews anywhere.


----------



## Garrett1974NL

GAN77 said:


> This is an important note.


You could use the same thing as I did:
http://www.performance-pcs.com/swiftech-mcw60-a2900-adapter-kit-for-ati-2900xt.html
(I only used the screws, springs and nuts, NOT the metal that goes on the swiftech block)


----------



## Vipeax

The RTX failure rate at Caseking for AIB cards is rather low:





GTX1080: 7.1% (+2 year lifespan of that card now)
GTX1080Ti: 4.6% (1.5 year lifespan of that card now)
RTX2080: 0.17% (almost 1000 cards sold now)
RTX2080Ti: 1.4% (a bit less sales than the RTX2080)


----------



## kot0005

Vipeax said:


> RTX failure rate for Caseking for AIB cards is rather low
> 
> GTX1080: 7.1% (+2 year lifespan of that card now)
> GTX1080Ti: 4.6% (1.5 year lifespan of that card now)
> RTX2080: 0.17% (almost 1000 cards sold now)
> RTX2080Ti: 1.4% (a bit less sales than the RTX2080)


It's not, actually; the 2080 Ti has been out for what, 1 to 1.5 months...

The 1080 Ti is 16-18 months old.


----------



## kot0005

did anyone try the FTW3 BIOS on a reference PCB card yet?

https://www.techpowerup.com/vgabios/204932/evga-rtx2080ti-11264-181012


----------



## Vipeax

kot0005 said:


> its not actually, 2080Ti has been out for what ? 1 to 1.5 months...
> 
> 1080Ti is 16-18 months old.


That's what I wrote?... He says they don't see anything strange compared to other cards.


----------



## Causality1978

RTX2080Ti: 1.4% (a bit fewer sales than the RTX2080).. so about +/- 15%+ of a year's production.. is that much or not...? ...)))) -20% income


----------



## Vipeax

Causality1978 said:


> RTX2080Ti: 1.4% (a bit less sales than the RTX2080) soo about +/- 15%+ from year production.. its much or not...? ...)))) -20% income


How would 1.4% equal 15% of the annual production?


----------



## NewType88

Hulk1988 said:


> Here are my babies
> 
> 2080 TI STRIX OC vs FTW3


. 

What case are you putting those in ? I’m curious to see the thermals you get with OC. Please share. Tag me so I don’t miss it.


----------



## GraphicsWhore

kot0005 said:


> did anyone try FTW3 BIOS on a reference pcb card yet ?
> 
> https://www.techpowerup.com/vgabios/204932/evga-rtx2080ti-11264-181012


Waiting for someone else to go first, preferably someone with the XC Gaming.


----------



## arcDaniel

kot0005 said:


> did anyone try FTW3 BIOS on a reference pcb card yet ?
> 
> https://www.techpowerup.com/vgabios/204932/evga-rtx2080ti-11264-181012


Yes, I have done it, and it works fine.

You can see that Precision recognizes the card as an FTW3, but without the supplemental temp sensors.

The card is originally an EVGA 2080 Ti XC Ultra, so ref. PCB.

I don't do crazy OC, but the higher PT helps to handle some power spikes.


----------



## Fitzcaraldo

kot0005 said:


> its not actually, 2080Ti has been out for what ? 1 to 1.5 months...
> 
> 1080Ti is 16-18 months old.


Pardon my maths, but does sample size matter here beyond statistical inaccuracy? Caseking should've given RMA numbers for the same duration instead of overall, yes. But the relevant part is not actually the seemingly low RMA numbers on Turing but the comparability to Pascal. If these RMA numbers shoot up to Pascal levels in a shorter amount of time, THEN we have statistical evidence of a widespread issue. Nobody debates that Turing cards are failing. The questions are WHY and HOW MANY.


----------



## kot0005

arcDaniel said:


> Yes I have done it, at it works fine.
> 
> You can see, that Precision recognize the Card as FTW3, but without the supplement Temp Sensors.
> 
> The Card is original an EVGA 2080ti XC Ultra, so Ref. PCB.
> 
> I do not crazy OC, but the higher PT helps to handle some power Spikes.


sweet, going to flash my xc ultra tonight.



Fitzcaraldo said:


> Pardon my maths but does sample size matter here beyond statistic inaccuracy? Caseking should've given RMA numbers for the same duration instead overall, yes. But the relevant part is not actually the seemingly low RMA numbers on the Turing but the comparability to Pascal. If these RMA numbers shoot up to Pascal levels in a shorter amount of time, THEN we have a statistical evidence of a widespread issue. Nobody debates that Turing cards are failing. The questions are WHY and HOW MANY.


Well, we don't know if the RMA rate will double in another 2 months or stay stable. If it doubles and keeps growing exponentially, then we'd have a high RMA % in a very short time. These cards are also far rarer than the 1080 Ti was at launch, but RMA is already at 1.4%. We should really be looking at the first 3 months after the 1080 Ti's launch instead of since launch.


----------



## stefxyz

It's just not an issue. If it were above 5%, then it would be. If it were above 15%, it would be a real problem where NVIDIA might recall cards and stop production. It's evident the cards work fine for the majority, with normal RMA rates so far. Only the FE cards might still have an issue, if something is wrong with the thermal measurements and chips die due to bad cooling for some reason.


----------



## toncij

joshpsp1 said:


> Hoping to get my Aorus Xtreme next week. Aside from the boost clock has anyone seen any information about it? Seems to be nothing at alll, no reviews anywhere.


Where can it be ordered anyway? Can't find any source on it.

I'm getting a Gaming OC tomorrow. I presume it's still a ref PCB tho.


----------



## kot0005

ok my XC Ultra is running on the FTW3 BIOS perfectly. Can go up to 2115 MHz with max power limit on water.


----------



## arcDaniel

The FTW3 BIOS is great because with its 373W PT it does not exceed the specifications of the PCI-E slot (75W) and the power connectors (2x 8-pin = 2x 150W).

TomsHW.de tested the 380W BIOS from Galax, and there the card exceeded the specification of the PCI-E slot. Even if a modern mainboard shouldn't get damaged because of that, the specifications are not there for nothing.

(PS: Sorry for my poor English)


----------



## toncij

Which one of those BIOSes? The newer one on TechPowerUp looks to be a rev. sample.


----------



## Spiriva

I tried the Galax 380W BIOS, and now I'm running the FTW 373W BIOS. Running 3DMark on Extreme I've never seen the card pull more than 370W though, no matter which of the two is on the card.

For now I'll use the FTW BIOS; they both overclock the same, 2150-2175 MHz / +1000 on the mem (Afterburner's slider won't go over +1000)


* https://www.techpowerup.com/vgabios/204932/evga-rtx2080ti-11264-181012 < 90.02.0B.40.B6 I flashed this version to my 2080ti FE
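Side note on those memory offsets: on these 14 Gbps cards Afterburner's slider appears to work on the half-rate clock (7000 MHz shown at stock), which is how a +1000 offset lines up with the "16000" figures others report. A quick sketch under that assumption (not anything from MSI's documentation):

```python
# Sketch: mapping an Afterburner memory offset to effective GDDR6 speed,
# ASSUMING Afterburner displays/offsets the half-rate clock
# (7000 MHz shown = 14 Gbps effective at stock).

STOCK_HALF_RATE_MHZ = 7000  # what Afterburner shows for 14 Gbps GDDR6

def effective_speed_mhz(ab_offset_mhz):
    """Effective (double) data rate after applying an Afterburner offset."""
    return (STOCK_HALF_RATE_MHZ + ab_offset_mhz) * 2

print(effective_speed_mhz(0))     # 14000 -> stock 14 Gbps
print(effective_speed_mhz(1000))  # 16000 -> matches the 16000 reported at +1000
```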


----------



## arcDaniel

toncij said:


> Which one of those bioses? The newer one on techpow looks to be a rev. sample.


See the Build Date, the ...............B6 is the newer one.


----------



## vmanuelgm

arcDaniel said:


> The FTW3 Bios is great because of the 373W PT it do not hurt the Specifications of the PCI-E (75W) and Power-connectors (2x8-Pin = 2x150W).
> 
> TomsHW.de tested the 380W Bios from Galax and here, the Card hurt the Specifications of the PCI-E Slot. Even if an modern Mainboard schouldn't get damaged because of that, the Specifications are not there for nothing.
> 
> (PS: Sorry for my poor English)



A BIOS exceeding the PCIe specification, and 8-pins limited to 150W each???

None of these BIOSes will pull more current from the PCIe slot unless you mod the hardware...

In regards to 6 and 8 pin connectors, read this:

"There is common misunderstanding to refer 6-pin or 8-pin MiniFit.JR connectors as fixed 75W or 150W power capable inputs.

Nothing can be further from truth actually, as connector port itself does not define the power cap. These power levels are nothing but just the way for how NV determine capability of used board hardware to deliver high power to the voltage regulators. It’s purely imaginary specification and have nothing to do with actual power taken from connector nor power input capability. Active circuitry on PCBA after the connector is used to measure current flowing from the connector into the VRM. This enables software, driver and NV BIOS to handle GPU clocks and reduce voltages if measured power hitting programmed BIOS limit value (which can be lower or higher value than 75/150W!).

So if we play and change circuit to adjust the calibration point, this limitation will be lifted accordingly as well. Also to make sure we are not at any physical limit of power connector itself, check Molex 26-01-3116 specifications, which have specifications both 13A per contact (16AWG wire in small connector) to 8.5A/contact (18AWG wire). This means that using common 18AWG cable, 6-pin connector specified for 17A of current (3 contacts for +12V power, 2 contacts for GND return, one contact for detect). 8-pin have 25.5A current specification (3 contacts for +12V power, 3 contacts for GND return and 2 contacts for detection). This is 204W at +12.0V level or 306W for 8-pin accordingly.

Now when somebody tells you that 6-pin can’t provide more than 75W, you know they don’t understand the topic very well. It’s not the connector itself or cable limit the power, but active regulation of GPU/BIOS/Driver according to detection of used cables and preprogrammed limits. So how actual power measured? Will see later on detail shots of the RTX 2080 Ti PCBA."


https://xdevs.com/guide/evga_2080tixc/
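The 204W/306W figures in that quote follow from simple arithmetic; a quick sketch (assuming, as the quote implies, that whichever side has fewer contacts - +12V power or ground return - limits the current):

```python
# Back-of-the-envelope check of the Molex figures quoted above:
# 8.5 A per contact with 18AWG wire; current is limited by whichever
# path (+12V power or ground return) has fewer contacts.

RAIL_VOLTS = 12.0
AMPS_PER_CONTACT_18AWG = 8.5

def connector_amps(power_contacts, ground_contacts,
                   amps_per_contact=AMPS_PER_CONTACT_18AWG):
    """Current capacity: the smaller of the two paths sets the limit."""
    return min(power_contacts, ground_contacts) * amps_per_contact

# 6-pin: 3x +12V, 2x GND -> 17 A -> 204 W at 12 V
print(connector_amps(3, 2) * RAIL_VOLTS)  # 204.0
# 8-pin: 3x +12V, 3x GND -> 25.5 A -> 306 W at 12 V
print(connector_amps(3, 3) * RAIL_VOLTS)  # 306.0
```

So the connectors themselves are physically good for roughly double the 75W/150W budget NVIDIA's BIOS enforces, which is the quote's whole point.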


----------



## arcDaniel

vmanuelgm said:


> A bios hurting PCIe specification and 8pin limited to 150w each???
> 
> None of the bios's won't pull more current from Pciex unless you mod its hardware...
> 
> In regards to 6 and 8 pin connectors, read this:
> 
> "There is common misunderstanding to refer 6-pin or 8-pin MiniFit.JR connectors as fixed 75W or 150W power capable inputs.
> 
> Nothing can be further from truth actually, as connector port itself does not define the power cap. These power levels are nothing but just the way for how NV determine capability of used board hardware to deliver high power to the voltage regulators. It’s purely imaginary specification and have nothing to do with actual power taken from connector nor power input capability. Active circuitry on PCBA after the connector is used to measure current flowing from the connector into the VRM. This enables software, driver and NV BIOS to handle GPU clocks and reduce voltages if measured power hitting programmed BIOS limit value (which can be lower or higher value than 75/150W!).
> 
> So if we play and change circuit to adjust the calibration point, this limitation will be lifted accordingly as well. Also to make sure we are not at any physical limit of power connector itself, check Molex 26-01-3116 specifications, which have specifications both 13A per contact (16AWG wire in small connector) to 8.5A/contact (18AWG wire). This means that using common 18AWG cable, 6-pin connector specified for 17A of current (3 contacts for +12V power, 2 contacts for GND return, one contact for detect). 8-pin have 25.5A current specification (3 contacts for +12V power, 3 contacts for GND return and 2 contacts for detection). This is 204W at +12.0V level or 306W for 8-pin accordingly.
> 
> Now when somebody tells you that 6-pin can’t provide more than 75W, you know they don’t understand the topic very well. It’s not the connector itself or cable limit the power, but active regulation of GPU/BIOS/Driver according to detection of used cables and preprogrammed limits. So how actual power measured? Will see later on detail shots of the RTX 2080 Ti PCBA."
> 
> 
> https://xdevs.com/guide/evga_2080tixc/


I do not know if you understand German, but the graph in this article is in English and you can see what I mean:
https://www.tomshw.de/2018/10/11/kf...es-380-watt-power-limit-mit-rgb-beplankung/4/


----------



## GosuPl

Does the EVGA 2080 Ti XC Black have the binned TU102-300A chip?

https://eu.evga.com/products/product.aspx?pn=11G-P4-2282-KR

I want to buy 2x 2080 Ti EVGA cards once my refund for the 2x 2080 Ti FE R.I.P. edition is back on PayPal, but I can only buy one 2080 Ti XC Gaming / Ultra / FTW3; the only version allowed 2 per household is the Black :/


----------



## Spiriva

GosuPl said:


> EVGA 2080Ti XC Black have TU102-300A binned chip?
> 
> https://eu.evga.com/products/product.aspx?pn=11G-P4-2282-KR
> 
> I want buy 2x 2080Ti EVGA cards, when my refund for 2x 2080Ti FE R.I.P edition will be on PayPayl, but i can only buy 1 2080Ti XC Gaming / Ultra / FTW3, only allowed 2 per household is Black versions :/


Just order the second one to your girlfriend, boyfriend, mom, dad, sister, brother, etc... you get the picture


----------



## vmanuelgm

arcDaniel said:


> I do not know if you understand German for this article, but the Graph is in English and you can see what I mean:
> https://www.tomshw.de/2018/10/11/kf...es-380-watt-power-limit-mit-rgb-beplankung/4/



That graph, in German/English/Spanish or whatever, means nothing to me.

I understood you to be saying the 380W BIOS pulls more current from the PCIe slot on the motherboard, and that is not possible without modding the hardware.

I also understand from your words that you think the PCIe power connectors on the graphics card are limited to 150W if they are 8-pin. That is absolutely false!!! 150W is just a limit set by Nvidia which ignores the hardware's capabilities, and those are way beyond that number!!!

If you want, I can link several videos from der8auer or Gamers Nexus, along with the xdevs articles!!!


----------



## GosuPl

Spiriva said:


> Just order the second one to your: girlfriend, boyfriend, mom, dad, sister, brother etc you get the picture


Well, that solution is known, but it's an unnecessary complication. Still, I'll most likely do it ;-) My late granny (probably playing on 2x 2080 Ti of the dead) will get one EVGA ;-)


----------



## Gripen90

Arrived yesterday. One for me and one for the misses.









It's weird only having one card in the main rig, since I have been running SLI permanently since 2006 and then 3-way SLI from 2008 onwards. It looks empty. (Don't know why oc.net turns the image upside down - the site hasn't been itself in a long time)


----------



## toncij

Did someone here say that GB Gaming 2080Ti is not really a ref PCB? Some fan connector was added?


----------



## GAN77

toncij said:


> Did someone here say that GB Gaming 2080Ti is not really a ref PCB? Some fan connector was added?


Yes


----------



## vmanuelgm

toncij said:


> Did someone here say that GB Gaming 2080Ti is not really a ref PCB? Some fan connector was added?



In spite of that connector it is indeed a reference PCB with some extra circuitry, but if you are planning to watercool it with the EK reference block, you won't be able to unless you bend those pins or do some kind of mod.


----------



## Jpmboy

arcDaniel said:


> The FTW3 Bios is great because of the 373W PT it do not hurt the Specifications of the PCI-E (75W) and Power-connectors (2x8-Pin = 2x150W).
> 
> TomsHW.de tested the 380W Bios from Galax and here, the Card hurt the Specifications of the PCI-E Slot. Even if an modern Mainboard schouldn't get damaged because of that, the Specifications are not there for nothing.
> 
> (PS: Sorry for my poor English)


The "specifications" or limits you quoted for these power rails are based solely on the minimum connector specs (e.g. surface area of contact for pins, wire gauge, and for the slot, trace thickness). They have nothing to do with what any specific board or PSU is capable of carrying as a load. If that were the case, every CPU EPS connector would have fried the day after the 7980XE launched, since it pulls over 3x the min spec. The Rampage VI Extreme (and re5-10 etc) have slot temperature sensors if you're worried.


The Galax HOF 2080 Ti is modified, allows control of core and vram voltage, has 3 PCIe connectors, and you can bet it is capable of pulling more than the mainstream specification on every power rail it is hooked to (same for the Kingpin cards). Turing pulls much less current than some older-gen cards. The 780 Ti KP was waaay off the reservation of power-rail minimum specifications!


----------



## Hulk1988

Has anyone tried to flash the ASUS STRIX OC with the Galax BIOS?


----------



## CallsignVega

UdoG said:


> I have a question regarding SLI / NVLink.
> I got my two EVGA 2080 TI FTW3s today - both cards are running with 2115 on air. However, now I have a little "problem" - the first card has a max. Boost of 1835 - the other 1755 according to GPU-Z. I can overclock one card with 130 MHz - the other with only 80/90, because of the higher basic boost clock. What will happen if I connect both via NVLink - will the maximum be at 80/90 MHz? Which card would you use as a "Primary" card?
> I will start with some tests as soon as I received my NVLink.


The card the display is connected to is the master. As for AFR, your speed will be limited to the slowest of the two cards.


----------



## nycgtr

My 1st strix arrived today. Block incoming tmr. Much smaller than the trio.


----------



## GAN77

Maybe samsung memory on strix?)


----------



## kx11

nycgtr said:


> My 1st strix arrived today. Block incoming tmr. Much smaller than the trio.





mine should be here by the 9th , i got the block already


----------



## nycgtr

kx11 said:


> mine should be here by the 9th , i got the block already


So the default OC edition power limit is 325 max. I can get 2115 on air, but that seems to be the max. Still looking for one that can go past my golden Windforce lol.


----------



## nycgtr

GAN77 said:


> Maybe samsung memory on strix?)


Micron. I don't think any of them are Samsung. I guess you're paying for the over-engineered PCB as the premium. I can see I am power limited.


----------



## cstkl1

GAN77 said:


> Maybe samsung memory on strix?)


nope. micron. using one now..


good news bitspower released a waterblock for trio.. yippy!!!!
ordered. done. strix will be on sale.


----------



## GraphicsWhore

About a month now and my EVGA XC Gaming is performing really well with EK block/backplate. The card takes +120/+1050 on my OC profile for clocks of 2085-2115 but also fares well on my under-volt profile which is good for ~2040Mhz @ 1000mV. I'm still playing around to see what I can sustain @ 993mV. Galax BIOS gave me my highest 3dmark scores (not by much) but I'm back on the stock EVGA for now. Temps at load range from mid to upper 40's depending on game and I've hit 50/51 a couple of times after extended sessions in VR.

Haven't recorded the exact numbers but biggest noticeable improvements over my 1080Ti are in:

Project Cars 2 VR and Elite Dangerous VR (max settings on VIVE Pro)
Watch Dogs 2 and Ghost Recon Wildlands (max settings @ 3440x1440)

These now run butter-smooth which my 1080Ti could not sustain. On that note I was able to sell that card (EVGA FE with EK block) for $625 to recoup some costs. 

All in all really happy with the purchase and performance.

Now it'd be great to actually play something that utilizes RTX...


----------



## cstkl1

nycgtr said:


> GAN77 said:
> 
> 
> 
> Maybe samsung memory on strix?)
> 
> 
> 
> Micron. I dont think any of them are samsung. I guess your paying for the over engineered pcb as the premium. I can see I am power limited.

have u noticed da strix voltage reading seems fluky.. also it doesnt want to go to 1.09v

my card on default air does 1995. on ram oc 16000 max on gpu is 2100
stock ram it does 2140. 

thats the other weird part.. the msi core clocks were not affected by the vram clocks. had two msi, both were about the same..

odd.


----------



## nycgtr

cstkl1 said:


> have u noticed da strix voltage reading reading seems fluky.. also it doesnt want to go to 1.09v
> 
> my card on default air does 1995. on ram oc 16000 max on gpu is 2100
> stock ram it does 2140.
> 
> thats the other weird part.. the msi core core clocks were not affected by the vram clocks. had two msi both were about the same..
> 
> odd.


I had two Trios. I sold them both, shipped the second one yesterday (since I knew my Strixes were coming). My two Trios were stable in the mid-2000s. Adjusting vram didn't affect the clocks. So far with the Strix it seems I'm hitting a power limit; if I lower the mem then I can clock higher, and vice versa. Honestly, I'd keep the Strix over the Trio. The PCB is just much better. If a custom Strix BIOS comes along it will have great potential.


----------



## kx11

nycgtr said:


> So default oc edition power limit is 325 max. I can get 2115 on air but that seems to be the max. Still looking for one that can go past my golden windforce lol.





there's a switch for OC mode maybe try that too


----------



## cstkl1

nycgtr said:


> cstkl1 said:
> 
> 
> 
> have u noticed da strix voltage reading reading seems fluky.. also it doesnt want to go to 1.09v
> 
> my card on default air does 1995. on ram oc 16000 max on gpu is 2100
> stock ram it does 2140.
> 
> thats the other weird part.. the msi core core clocks were not affected by the vram clocks. had two msi both were about the same..
> 
> odd.
> 
> 
> 
> I had two trios. I sold them both, shipped the other one yesterday (Since I knew my strixs were coming). My two trios were stable in the mid 2000s. Adjusting vram didnt effect the clocks. So far with the strix it seems im hitting a power limit. If i lower the mem then I can clock higher vice versa. Honestly, Id keep the strix over the trio. The pcb is just much better. If a custom strix bios comes along it will be great potential.

strix hmm based on experience.. too fluky because of their custom pcb etc and dont like using asus tweaker. 
on 1080ti there was a big difference for strix vs poseidon. poseidon was more consistent on the voltage reading and clocks. both were on water. i am seeing the same thing on 2080ti strix on air vs the 1080ti strix. 

yeah its hitting the powerlimit and throttling down quick. tested with msi ab was no go.. unstable.. seems best with asus gpu tweaker. weird osd.. sticker and text.. really asus.. REALLY!! .. lol

my issue why i bought the strix was while waiting for the gpu block for msi.
using a tj11 the card length forces me to remove the 180mm fan. strix atm is 5c cooler than msi with that case fan. 

was actually happy with msi... next upgrade now is 9 series extreme cpu.


----------



## nycgtr

cstkl1 said:


> strix hmm based on experience.. too fluky because of their custom pcb etc and dont like using asus tweaker.
> on 1080ti there was a big difference for strix vs poseidon. poseidon was more consistent on the voltage reading and clocks. both were on water. i am seeing the same thing on 2080ti strix on air vs the 1080ti strix.
> 
> yeah its hitting the powerlimit and throttling down quick. tested with msi ab was no go.. unstable.. seems best with asus gpu tweaker. weird osd.. sticker and text.. really asus.. REALLY!! .. lol
> 
> my issue why i bought the strix was whild waiting for the gpu block for msi.
> using a tj11 the card length forces me to remove the 180mm fan. strix atm is 5c cooler than msi with that case fan.
> 
> was actually happy with msi... next upgrade now is 9 series extreme cpu.


Actually I got the strix cuz I felt the msi was a giant reference. Albeit a good looking giant reference. The trio was the first 2080ti I received on launch day. I was originally committed to it since I couldn't get a strix. I knew the msi block was coming from bits for about a month plus now, and they told me it was posted to the store yesterday. I think I posted pics of the render in this topic over a week ago. I've gone thru so many 2080tis of various kinds that I just wanted a bigger gpu. So it was strix or trio. Then it just came down to the better pcb, and I think the strix block from BP looks better lol. I had suggested the removable piece to Bitspower for the trio and they confirmed they would add it; it's nice to see it on the final product.


Afterburner doesn't play nice. Using Precision X1 I don't have any issues. +150 to core, +650 to mem so far stable.


----------



## warbucks

I've got a founders edition water cooled. Looking for suggestions on which bios may be best to increase the power limit that's compatible with the FE.


----------



## joshpsp1

toncij said:


> Where can it be ordered anyway? Can't find any source on it.
> 
> I'm getting a Gaming OC tomorrow. I presume it's still a ref PCB tho.


Overclockers.co.uk and Novatech.co.uk

Avoid Novatech as they're slow to get stock and OCUK is getting exclusivity on the first batch.



Also for anyone who is in Europe the EVGA 2080 Ti FTW3 Ultra is in stock at scan.co.uk

Albeit at a higher price than EVGA were selling them for.

https://www.scan.co.uk/products/evg...r-ready-graphics-card-4352-core-1755mhz-boost


----------



## ENTERPRISE

Getting painful waiting for my 2080Ti to come into stock lol : https://www.scan.co.uk/products/msi...graphics-card-4352-core-1350mhz-gpu-1755mhz-b 



It will be worth the wait.


----------



## Frozburn

Is anyone using a 2080 Ti Gaming X Trio with one of those 370W+ BIOSes? Thinking about getting that card, but MSI are saying the card is safe to 330 only? What is the point of this card's cooling then?


----------



## nycgtr

Frozburn said:


> Is anyone using a 2080 Ti Gaming X Trio with one of those 370+ BIOS? Thinking about getting that card but MSI are saying the card is safe to 330 only? What is the point of this card's cooling then


You don't need it. The cooler is for maintaining good temps. Being able to clock, say, 30 MHz higher with a higher-wattage BIOS doesn't really produce tangible results. With the 2 Trios I have had, flashing one didn't get me anything really tangible, and the clocks on both were similar to what other ref cards were doing with a higher-wattage BIOS.


----------



## Frozburn

nycgtr said:


> You don't need it. The cooler is for maintaining good temps. Being able to clock say 30mhz higher with a higher wattage bios doesn't really produce tangible results. With the 2 trios I have had. Flashing 1 didnt get me anything really tangible and clocks both were similar to what other ref cards were doing with a higher wattage bios.


Thanks. 

Any reason you don't use that card anymore? Should I look for another one?


----------



## joshpsp1

A little worried about the cooling of the Aorus Xtreme. It has the highest out of the box boost clock whilst being a 2 slot card unlike most of the other high end GPUs. Not to mention the 7 display outputs on it. Can imagine it being toasty.


----------



## Jpmboy

warbucks said:


> I've got a founders edition water cooled. Looking for suggestions on which bios may be best to increase the power limit that's compatible with the FE.



the Galax 380W BIOS works fine... if you are worried about the power limit, just set the power slider to where you want to limit it. Kinda like having a 650HP car; it's all about throttle control.


----------



## NewType88

GraphicsWhore said:


> About a month now and my EVGA XC Gaming is performing really well with EK block/backplate. The card takes +120/+1050 on my OC profile for clocks of 2085-2115 but also fares well on my under-volt profile which is good for ~2040Mhz @ 1000mV. I'm still playing around to see what I can sustain @ 993mV. Galax BIOS gave me my highest 3dmark scores (not by much) but I'm back on the stock EVGA for now. Temps at load range from mid to upper 40's depending on game and I've hit 50/51 a couple of times after extended sessions in VR.
> 
> Haven't recorded the exact numbers but biggest noticeable improvements over my 1080Ti are in:
> 
> Project Cars 2 VR and Elite Dangerous VR (max settings on VIVE Pro)
> Watch Dogs 2 and Ghost Recon Wildlands (max settings @ 3440x1440)
> 
> These now run butter-smooth which my 1080Ti could not sustain. On that note I was able to sell that card (EVGA FE with EK block) for $625 to recoup some costs.
> 
> All in all really happy with the purchase and performance.
> 
> Now it'd be great to actually play something that utilizes RTX...


What’s your fan rpm at under full load ?


----------



## GraphicsWhore

NewType88 said:


> What’s your fan rpm at under full load ?


1000


----------



## kx11

this seller claims he got Auros 2080ti Xtreme waterforce ready to ship , he's in Malaysia though


https://www.lelong.com.my/gigabyte-...rusx-wb-easyit2u-209403554-2019-11-Sale-P.htm


great prices too compared to the EU prices


----------



## Bill D

nycgtr said:


> Actually I got the Strix cuz I felt the MSI was a giant reference card. Albeit a good-looking giant reference card. The Trio was the first 2080 Ti I received on launch day; I was originally committed to it since I couldn't get a Strix. I knew the MSI block was coming from Bitspower for about a month now, and they told me it was posted to the store yesterday. I think I posted pics of the render in this topic over a week ago. I've gone through so many 2080 Tis of various kinds that I just wanted a bigger GPU. So it was Strix or Trio. Then it just came down to the better PCB, and I think the Strix block from BP looks better lol. I had suggested the removable piece to Bitspower for the Trio and they confirmed they would add it; it's nice to see it on the final product.
> 
> 
> Afterburner doesn't play nice. Using Precision X1 I don't have any issues. +150 on core, +650 on mem, stable so far.


my Strix is topping out at 2055 stock
https://www.3dmark.com/fs/16938761

can't see a lot of reason to go higher, it just adds too much heat
and the thing is no louder than my 1080 Ti Strix


----------



## Barefooter

Killer deal alert! In stock and I just bought one.

https://www.bhphotovideo.com/c/product/1442127-REG/evga_geforce_rtx_2080_ti.html

EVGA GeForce RTX 2080 Ti XC GAMING Graphics Card & SuperNOVA 1600 G2 1600W 80 Plus Gold $1,299.98


----------



## Scrimstar

I am not sure whether to get the EVGA FTW3 Ultra or Asus Strix OC.

I am pretty sure the Strix has a better VRM and PCB, but the EVGA might run cooler, since it is a full triple-slot vs 2.7-slot [I am not going to watercool the GPU]

I am thinking of possibly running SLI in the future; I don't know if two EVGAs would fit in an ASRock Fatal1ty XE


----------



## Frozburn

Scrimstar said:


> I am not sure whether to get EVGA FTW3 Ultra or Asus Strix OC.
> 
> I am pretty sure the Strix has better VRM and PCB, but the EVGA might run cooler, since it is a full TRIPLE slot vs 2.7 slot [I am not going to watercool gpu]
> 
> I am thinking of possibly running SLi in the future, don't know if two EVGA's would fit in a ASRock fatility XE


Saw this yesterday maybe it will help you decide

https://www.reddit.com/r/nvidia/comments/9uhtwe/i_hit_the_silicon_lottery_with_an_evga_2080_ti/


----------



## Schramm

Barefooter said:


> Killer deal alert! In stock and I just bought one.
> 
> https://www.bhphotovideo.com/c/product/1442127-REG/evga_geforce_rtx_2080_ti.html
> 
> EVGA GeForce RTX 2080 Ti XC GAMING Graphics Card & SuperNOVA 1600 G2 1600W 80 Plus Gold $1,299.98


Thanks for posting this I quickly ordered it shortly after you posted here. Did you notice the price has since been increased to $1,599.99 for the combo deal? I'm expecting the verification department to flag and cancel the order tomorrow morning.


----------



## Barefooter

Schramm said:


> Thanks for posting this I quickly ordered it shortly after you posted here. Did you notice the price has since been increased to $1,599.99 for the combo deal? I'm expecting the verification department to flag and cancel the order tomorrow morning.


Yeah I just noticed that price increase too. That is just like paying regular price for the power supply 

Hopefully they do not cancel on us.


----------



## ESRCJ

Frozburn said:


> Scrimstar said:
> 
> 
> 
> I am not sure whether to get EVGA FTW3 Ultra or Asus Strix OC.
> 
> I am pretty sure the Strix has better VRM and PCB, but the EVGA might run cooler, since it is a full TRIPLE slot vs 2.7 slot [I am not going to watercool gpu]
> 
> I am thinking of possibly running SLi in the future, don't know if two EVGA's would fit in a ASRock fatility XE
> 
> 
> 
> Saw this yesterday maybe it will help you decide
> 
> https://www.reddit.com/r/nvidia/comments/9uhtwe/i_hit_the_silicon_lottery_with_an_evga_2080_ti/

I wonder if EVGA is actually binning the FTW3s though or if that's just a very lucky draw for a FTW3.


----------



## GraphicsWhore

gridironcpj said:


> Frozburn said:
> 
> 
> 
> 
> 
> Scrimstar said:
> 
> 
> 
> I am not sure whether to get EVGA FTW3 Ultra or Asus Strix OC.
> 
> I am pretty sure the Strix has better VRM and PCB, but the EVGA might run cooler, since it is a full TRIPLE slot vs 2.7 slot [I am not going to watercool gpu]
> 
> I am thinking of possibly running SLi in the future, don't know if two EVGA's would fit in a ASRock fatility XE
> 
> 
> 
> Saw this yesterday maybe it will help you decide
> 
> https://www.reddit.com/r/nvidia/comments/9uhtwe/i_hit_the_silicon_lottery_with_an_evga_2080_ti/
> 
> 
> I wonder if EVGA is actually binning the FTW3s though or if that's just a very lucky draw for a FTW3.

Likely both.

I assume by “under 60” he just means for the length of the scan. That thing has a magic chip but not magic cooling; I gotta believe it’ll get toasty running those clocks for any extended amount of time.

Dude needs to shunt and put it on water.


----------



## xl_BlackHawk_lx

Frozburn said:


> Is anyone using a 2080 Ti Gaming X Trio with one of those 370+ BIOS? Thinking about getting that card but MSI are saying the card is safe to 330 only? What is the point of this card's cooling then


I have the same card (2080 Ti Gaming X Trio) for over a month now and was first thinking of flashing a 370W+ BIOS, but I don't think it would give enough gain to justify it. I have done hours of testing with my GPU and mine runs stable between 2055-2070 MHz on the core and 8000 MHz on the memory with the 110% power limit, and temps around 56-58°C (80% fan speed).

The moment it tries to go beyond 2070, it hits the power limit (327W) and comes back to 2070. Having said that, even if it only holds 2070 or 2055, I think that is great performance, and beyond that you won't get any significant jump in FPS.

My MSI OC Scanner result is +146 on the core with memory set to +1000 and the power limit at 110%. With these settings it works fine in all benchmarks (Time Spy, Superposition, etc.), except Fire Strike (Extreme/Ultra), where it crashes.

Coming to AC Odyssey and COD BO4, I am using +134 core / +1000 memory and it's running great for hours without crashing.
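For anyone puzzling over how the 110% slider lands at a ~327W cap: a quick back-of-the-envelope sketch, assuming (as these tools appear to do) that the power slider simply scales the BIOS's default board power limit. The ~300W default for the Trio's stock BIOS is my assumption, not a confirmed figure.

```python
# Rough sketch: effective board power cap from the BIOS default limit
# and the Afterburner/Precision X1 power slider percentage.
# Assumption: observed cap = default_limit * slider_percent / 100.

def power_cap(default_limit_w: float, slider_percent: float) -> float:
    """Effective power cap in watts for a given slider setting."""
    return default_limit_w * slider_percent / 100.0

# A ~300W default at 110% lands right around the ~327W ceiling
# reported above (BIOS defaults vary a little per card).
print(power_cap(300, 110))  # 330.0
# A 260W-default reference BIOS at the same slider position:
print(power_cap(260, 110))  # 286.0
```

Which is why the 380W flashed BIOSes discussed in this thread move the ceiling so much more than the slider alone can.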


----------



## bp7178

Scrimstar said:


> I am not sure whether to get EVGA FTW3 Ultra or Asus Strix OC.
> 
> I am pretty sure the Strix has better VRM and PCB, but the EVGA might run cooler, since it is a full TRIPLE slot vs 2.7 slot [I am not going to watercool gpu]
> 
> I am thinking of possibly running SLi in the future, don't know if two EVGA's would fit in a ASRock fatility XE


X299 boards typically use the 4-slot spacing for the dual x16 PCIe slots. I haven't checked the manual for that particular board, but my Asrock OC Formula is spaced that way.


----------



## toncij

OC Scanner is completely useless... tested several times; it can't push the fans to 100% to ensure some kind of cooling stability, and the results are different every time. (X1 scanner, beta 3.5)

Is there a way to reliably run it?


----------



## Jpmboy

even if there were, I'm not sure it _reliably_ tells you anything. As with any of this stuff, there is no "one button" test of clocks/performance-efficiency.


----------



## GraphicsWhore

toncij said:


> OC Scanners is completely useless... tested several times, can't push fans to 100% to ensure some kind of stability on cooling and results are different every time. (X1 scanner, beta 3.5)
> 
> Is there a way to reliably run it?


No. There's no way to reliably have X1 do anything. Also, I wouldn't get too hung up on the scanner. I ran it again yesterday, got a "pass" with +164, and applied it, but all my benchmarks were hundreds of points lower than just manually setting the OC or V/F curve. And it doesn't even matter, because I STILL can't save profiles, so even if it were doing something useful I'd have to re-run the scanner every time. I'm done using it altogether. It's a worthless POS and Afterburner works fine.


----------



## toncij

So, back to manual trial and error.


----------



## CallsignVega

toncij said:


> OC Scanners is completely useless... tested several times, can't push fans to 100% to ensure some kind of stability on cooling and results are different every time. (X1 scanner, beta 3.5)
> 
> Is there a way to reliably run it?


EVGA makes the most worthless software ever. No point ever using anything besides MSI Afterburner and RTSS.


----------



## nycgtr

Just came.

Will work on it tonight.


----------



## Jpmboy

CallsignVega said:


> EVGA makes the most worthless software ever. No point ever using anything besides MSI Afterburner and RTSS.


although it doesn't have a "save" function, the Galax Extreme tuner (not the "plus" version) is very good. The voltage slider applies the requested voltage AND locks the card in P0. Worth a try...


----------



## domrockt

Finally Amazon is shipping my KFA2 2080 Ti; it will be at my home on Friday. I waited a month and a week. Can't wait to put it in my system.


----------



## Mike211

My 2 EVGA GeForce RTX 2080 Ti FTW3 ULTRA


----------



## torqueroll

warbucks said:


> I've got a founders edition water cooled. Looking for suggestions on which bios may be best to increase the power limit that's compatible with the FE.


I got a FE on water as well with the Galax bios. Works fine but the gains are minimal at best but the power draw increase is huge. I max it out only for benchmarking.


----------



## Johnny_Utah

I'm seeing two FTW3 BIOS listed now:

https://www.techpowerup.com/vgabios/204932/evga-rtx2080ti-11264-181012

https://www.techpowerup.com/vgabios/205052/evga-rtx2080ti-11264-181008


Looks like one was posted about 5 days later. Is this an updated one?


----------



## toncij

Waiting for a 4th card soon for a quad setup in a 2990WX system. Unfortunately, I had to snatch one of each model. Trying to get reference PCBs for easier water cooling soon, since who knows when we'll have access to custom-board blocks.

In general, each of these cards clocks pretty much the same: at 18°C ambient, about 2100-2150, give or take a few tens of MHz. I'm afraid we won't see much overclock unless going sub-zero.

And whoever thought of the dual-slot design is not nice. We could've been running beautiful single-slot cards like the 10-series if not for the silly HDMI. They should've kept either HDMI or USB-C, not both.


----------



## sblantipodi

great card all in all, my 2080Ti FE is really bad ass


----------



## Nico67

toncij said:


> And whoever thought of the dual-slot is not nice. We could've been running beautiful single-slot cards like 10-series if not for the silly hdmi. They should've either kept hdmi or usb-c, not both.


Asus Turbo 2080Ti - single slot


----------



## toncij

Nico67 said:


> Asus Turbo 2080Ti - single slot


-A chips?


----------



## Nico67

toncij said:


> -A chips?


 Yep


----------



## dVeLoPe

Does the A chip matter that much? Can I see it in software, or only if I clean off the thermal paste?


----------



## Jpmboy

dVeLoPe said:


> does the A chip matter that much? can I see it in software or only if I clean thermal paste


Only if you intend to flash the card.


----------



## exploiteddna

ENTERPRISE said:


> Getting painful waiting for my 2080Ti to come into stock lol : https://www.scan.co.uk/products/msi...graphics-card-4352-core-1350mhz-gpu-1755mhz-b
> 
> 
> 
> It will be worth the wait.


hard to believe an OG like you has been unable to get their hands on a card. granted, you want a card that isn't produced in huge quantities (relative to other cards), so it at least makes sense. Do you have an ETA at least?


----------



## Nico67

dVeLoPe said:


> does the A chip matter that much? can I see it in software or only if I clean thermal paste



I think nvflash64 --version tells you if it's an A chip or not. Not sure if that's the exact command.


----------



## dVeLoPe

Jpmboy said:


> only if you intend to flash the card,


It's 11G-P4-2282-KR I am not sure which bios I can go with but was hopefully going to get it under water!


----------



## Jpmboy

dVeLoPe said:


> It's 11G-P4-2282-KR I am not sure which bios I can go with but was hopefully going to get it under water!


there's a few pics posted in this thread of the two dies' etchings.



to check the EEPROM version, the command is nvflash --check
the "nvflash" needs to be the exact name of the exe in the folder you open an admin command prompt in.
the nvflash in the zip folder below will flash the FE card. It also has the FE bios (TU102) and the 380W bios (galax) :thumb:
viperex(?) may know of a way not using nvflash to interrogate the eeprom.


----------



## toncij

Nico67 said:


> Yep


Where is the block from and the IO panel?


----------



## ENTERPRISE

michaelrw said:


> ENTERPRISE said:
> 
> 
> 
> Getting painful waiting for my 2080Ti to come into stock lol : https://www.scan.co.uk/products/msi...graphics-card-4352-core-1350mhz-gpu-1755mhz-b
> 
> 
> 
> It will be worth the wait.
> 
> 
> 
> hard to believe an OG like you has been unable to get their hands on a card. granted, you want a card that isn't produced in huge quantities (relative to other cards), so it at least makes sense. Do you have an ETA at least?
Click to expand...


Haha if only being an OG put you first in line right. ETA is 30th November. Hoping this does not change to later. Turing stock in general has been a huge issue in the UK. I guess it is what it is.


----------



## toncij

Careful with that. A single 120mm rad doesn't seem enough. Not sure how good that'll be.


----------



## Camacho1988

Hi new user here,

Finally my MSI Duke 2080 Ti arrived. Which VBIOS should I flash it with to get a more stable boost? The card runs on air.

Greetings


----------



## ENTERPRISE

toncij said:


> Careful with that. A single 120mm rad doesn't seem enough. Not sure how good that'll be.


We will certainly see how it works out. If it comes down to it, I will just return them. To be honest I preferred this design as it comes with a blower fan that will actually expel hot air out of the case while the AIO takes care of the GPU, as I say, we will have to see how it pans out. At this rate I will be happy to just be able to play games again lol.


----------



## Zammin

Alphacool block and backplate arrived yesterday. Took them a while just to get stock, but damn, for $9 shipping it got here in like 3-4 days from Germany.

It looks really good, but the inside doesn't look spotlessly clean. Kinda looks a bit dusty, and there are some fingerprints on the nickel underneath the plexi which show a bit more evidently with the lighting on (all pretty minor, so it doesn't show up in the photos). I might pop the top off and clean it by hand before I use it so my OCD doesn't drive me insane later lol.

Weirdly though, the RGB connector on this thing is male and not the kind where you can remove the pins, so I'm gonna need an extension to make it female so it can plug into the motherboard. Pretty sure they advertise it as Aura Sync compatible, but I don't know what motherboard vendor uses female sockets on the board for RGB.


----------



## kot0005

The block looks nice, test it out and post some load temps.


----------



## Zammin

kot0005 said:


> The block looks nice, test it out and post some load temps.


Yeah I'm pretty pleased with it aesthetically. Just looks like they could've cleaned it a bit better during assembly. But it's not the first time I've seen that, my Phanteks block for my 1080ti had a fat thumbprint right in the visible area of the nickel haha.

I won't have the block fitted for at least a good few weeks to a month as I still need to test my FE card for a while to make sure it's not an R.I.P. edition, but once I get around to it I will post my results here.

Still debating whether to stick with vertical mounting or to go horizontal and squeeze an extra 240mm radiator in my PC-O11 Dynamic (on the floor of the case). That is assuming that the GPU block doesn't foul on the glass.


----------



## kot0005

Zammin said:


> Yeah I'm pretty pleased with it aesthetically. Just looks like they could've cleaned it a bit better during assembly. But it's not the first time I've seen that, my Phanteks block for my 1080ti had a fat thumbprint right in the visible area of the nickel haha.
> 
> I won't have the block fitted for at least a good few weeks to a month as I still need to test my FE card for a while to make sure it's not an R.I.P. edition, but once I get around to it I will post my results here.
> 
> Still debating whether to stick with vertical mounting or to go horizontal and squeeze an extra 240mm radiator in my PC-O11 Dynamic (on the floor of the case). That is assuming that the GPU block doesn't foul on the glass.


PCCG listed 2080TI Auros and Inno3d gaming OC.


----------



## bastian

New game ready driver releasing today. Bug fixes, the highlight being fixes for crashing RTX cards.


----------



## Zammin

kot0005 said:


> PCCG listed 2080TI Auros and Inno3d gaming OC.


Yeah I've been seeing random 2080Ti's (mostly Galax OC and Gigabyte Gaming OC) popping up everywhere except PLE over the last few weeks. Even on Ebay for better prices than the big retailers. I think stock is starting to come in more often since the news about dying GPUs came out, likely less demand right now. That would explain why Nvidia AU still have stock now over a week after they came back in stock.


----------



## GosuPl

One of my new (from RMA) 2080 Ti FEs has Samsung GDDR6

https://scontent-frt3-2.xx.fbcdn.ne...=c8a4270cfe564275fc4cb024b108bad4&oe=5C403E66

Soon I will check the second one 

Great, this is very good info


----------



## Jpmboy

toncij said:


> Careful with that. A single 120mm rad doesn't seem enough. Not sure how good that'll be.


 better than the air cooler, and not as good as custom water. Same as the 295x2 (which had/has a single 120). 




GosuPl said:


> One of my new (from rma) 2080Ti FE have GDDR6 Samsung
> https://scontent-frt3-2.xx.fbcdn.ne...=c8a4270cfe564275fc4cb024b108bad4&oe=5C403E66
> Soon i will check the second one
> Great, this is very good info


 well no sheet... samsung on the new ones eh. what's the lot number?
(and better hope the second one is too, or you are in for a ride  )


----------



## Zammin

GosuPl said:


> One of my new (from rma) 2080Ti FE have GDDR6 Samsung
> 
> https://scontent-frt3-2.xx.fbcdn.ne...=c8a4270cfe564275fc4cb024b108bad4&oe=5C403E66
> 
> Soon i will check the second one
> 
> Great, this is very good info


Oooooh that's good news! I hope I get my motherboard tomorrow so I can check mine. I bought mine only a week or so ago.


----------



## GosuPl

Jpmboy said:


> better than the air cooler, and not as good as custom water. Same as the 295x2 (which had/has a single 120).
> 
> 
> 
> well no sheet... samsung on the new ones eh. what's the lot number?
> (and better hope the second one is too, or you are in for a ride  )


+30 MHz better boost out of the box than the earlier 2080 Ti with Micron.

Stock vs stock:

https://www.3dmark.com/compare/fs/1...gO_YH8a11uAZiiJmx-FnJ4t31Ee9FnrhHxfkrMCAR7ytw

Soon I will test the second card. Both have batch numbers 03244181xxx, so I think both have Samsung.


----------



## Jpmboy

GosuPl said:


> + 30 Mhz boost better from out of the BOX, than earlier 2080Ti with Micron.
> 
> Stock vs Stock
> 
> https://www.3dmark.com/compare/fs/1...gO_YH8a11uAZiiJmx-FnJ4t31Ee9FnrhHxfkrMCAR7ytw
> 
> Soon i will test the second card, both have batches 03244181xxx so i think, both have Samsung



lol this is overclock.net bro... stock clocks? That can be nothing more than a bios/software change. Take the whip to it and see if the ram clocks higher than 8100 or so.


----------



## GosuPl

Jpmboy said:


> lol this is overclock.net bro... stock clocks? :blinksmil That can be nothing more than a bios/software change. Take the whip to it and see if the ram clocks higher than 8100 or so.


Be patient, of course I will do OC  I never use a CPU or GPU without OC ;-) Right now I'm just testing a few things ;-)

Samsung at +500 has a slightly better score than Micron at +700. Of course this could be an anomaly (the real core clock is the same, 2085, on both) or a newer driver.

https://www.3dmark.com/compare/fs/16983074/fs/16695719#

I will test how far the Samsung can OC. On Micron, OC higher than +725 (on both of my former 2080 Tis) scored lower than +675/+700. GDDR6 error correction.

Water soon; "fun" with bending PETG and a dual loop will be ready 

2080 Ti @2085/15400 with Samsung vs [email protected]/15400 with Micron.

https://www.3dmark.com/compare/fs/16983183/fs/16734572#

Samsung is Samsung ;-)

E:

Both 2080 Ti FEs have Samsung memory
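For anyone converting these memory offsets into effective speeds: a quick sketch, assuming (as Afterburner-style tools behave) that the offset adds to the 7000 MHz double-data-rate clock the tool displays, with the effective GDDR6 rate being twice that again.

```python
# Sketch: Afterburner-style memory offset -> effective GDDR6 rate and
# theoretical bandwidth on a 2080 Ti (352-bit bus, 14 Gbps stock).
# Assumption: offset applies to the displayed 7000 MHz clock, and the
# effective (quad-pumped) rate is 2x the displayed clock.

BASE_MHZ = 7000   # displayed memory clock at stock
BUS_BITS = 352    # 2080 Ti memory bus width

def effective_mts(offset_mhz: int) -> int:
    """Effective transfer rate in MT/s for a given offset."""
    return (BASE_MHZ + offset_mhz) * 2

def bandwidth_gbs(offset_mhz: int) -> float:
    """Theoretical bandwidth in GB/s: MT/s * bus bytes / 1000."""
    return effective_mts(offset_mhz) * (BUS_BITS // 8) / 1000.0

print(effective_mts(0))    # 14000 -> the stock 14 Gbps
print(effective_mts(700))  # 15400 -> matches the +700 "15400" runs above
print(bandwidth_gbs(0))    # 616.0 -> the spec-sheet GB/s figure
```

Note this is only theoretical throughput; as the error-correction point above shows, a higher offset can still score lower once GDDR6 starts retrying transfers.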


----------



## Spiriva

bastian said:


> New game ready driver releasing today. Bug fixes, the highlight being fixes for crashing RTX cards.


Thanks for the heads up!

https://www.nvidia.co.uk/download/driverResults.aspx/139654/en-uk


----------



## Jpmboy

GosuPl said:


> Be patient, of course i will do OC  I never use CPU or GPU without OC ;-) Now i'm just testing few things ;-)
> 
> Samsung with + 500, have slighlty better score than Micron + 700. Of course this can be anomally (for real core clocks is same - 2085 both), or never driver.
> 
> https://www.3dmark.com/compare/fs/16983074/fs/16695719#
> 
> I will test how much Samsung can OC. On Micron OC higher than + 725 (on my both former 2080Ti's) have lower scores than OC for 675/700. GDDR6 error corretion.
> 
> Water soon, "funny" play with bending PETG and dual loop will be ready
> 
> 2080Ti @2085/15400 with Samsung vs [email protected]/15400 with Micron.
> 
> https://www.3dmark.com/compare/fs/16983183/fs/16734572#
> 
> Samsung is Samsung ;-)
> 
> E:
> 
> 2x 2080Ti FE have Samsung memory


lookin' good! keep us posted re: your two new cards.


----------



## DooRules

Samsung mem on the Titans then maybe, bring it on.


----------



## ThrashZone

Hi,
Two open-box deals already at Micro Center, Asus though.
Blower-style 2080 Ti lol, definitely needs a water block, but don't they all.
You're killing me, Samsung memory on an NV FE series.


----------



## zhrooms

*MSI RTX 2080 Ti Gaming X Trio

4719072596965 / V371-026R*

*Flashed BIOS (380W) in UNIGINE Superposition 1080p Extreme at 10°C ambient.

+150 on Core and +1150 on Memory (with above BIOS).

Every scene it held 2085-2100 MHz, except for the first two (because of initial lower temperature) at 2100-2115 MHz.*



----------



## GosuPl

@zhrooms

Great card and awesome photos, good job !


----------



## toncij

Wrong slot. nvm.


----------



## nycgtr

Strix has been blocked. 2160 on water is maintainable.


----------



## GAN77

nycgtr said:


> Strix was been blocked. 2160 on water maintainable.


Cool look!!!


----------



## GAN77

dell


----------



## GAN77

nycgtr said:


> Strix was been blocked. 2160 on water maintainable.


Cool build!


----------



## kx11

nycgtr said:


> Strix was been blocked. 2160 on water maintainable.





Temps under load? I'll have the same setup as you.


----------



## Garrett1974NL

@zhrooms you better put that thing under water 
But nice photos, I'll give you credit for that


----------



## nycgtr

kx11 said:


> temps ?! under load ? i'll have the same as you


I'm hitting 46°C max under load, but I have two 480s cooling the VRM, the 7960X, and the GPU. Water temp is about 27-29°C. Ah yeah, I ran out of Kryonaut so I used MX-4 or MX-2, forgot which lol. Not the best choice.


----------



## Esenel

GosuPl said:


> Jpmboy said:
> 
> 
> 
> lol this is overclock.net bro... stock clocks? That can be nothing more than a bios/software change. Take the whip to it and see if the ram clocks higher than 8100 or so.
> 
> 
> 
> Be patient, of course i will do OC  I never use CPU or GPU without OC 😉 Now i'm just testing few things 😉
> 
> Samsung with + 500, have slighlty better score than Micron + 700. Of course this can be anomally (for real core clocks is same - 2085 both), or never driver.
> 
> https://www.3dmark.com/compare/fs/16983074/fs/16695719#
> 
> I will test how much Samsung can OC. On Micron OC higher than + 725 (on my both former 2080Ti's) have lower scores than OC for 675/700. GDDR6 error corretion.
> 
> Water soon, "funny" play with bending PETG and dual loop will be ready
> 
> 2080Ti @2085/15400 with Samsung vs [email protected]/15400 with Micron.
> 
> https://www.3dmark.com/compare/fs/16983183/fs/16734572#
> 
> Samsung is Samsung 😉
> 
> E:
> 
> 2x 2080Ti FE have Samsung memory
Click to expand...

Could you please do a Time Spy run?
Thanks!


----------



## arrow0309

zhrooms said:


> *MSI RTX 2080 Ti Gaming X Trio
> 
> 4719072596965 / V371-026R*​
> [ ... cut ... ]
> 
> *Flashed BIOS (380W) in UNIGINE Superposition 1080p Extreme at 10°C ambient.
> 
> +150 on Core and +1150 on Memory (with above BIOS).
> 
> Every scene it held 2085-2100 MHz, except for the first two (because of initial lower temperature) at 2100-2115 MHz.*





nycgtr said:


> Strix was been blocked. 2160 on water maintainable.


Awesome pics and awesome cards both, the air Queen and the liquid King :specool:


----------



## dante`afk

Zammin said:


> Alphacool block and backplate arrived yesterday. Took them a while just to get stock but damn for $9 shipping it got here in like 3-4 days from Germany. It looks really good, but the inside doesn't look spotlessly clean. Kinda looks a bit dusty and there are some fingerprints on the nickel underneath the plexi which shows a bit more evidently with the lighting on (all pretty minor so it doesn't show up in the photos). I might pop the top off and clean it by hand before I use it so my OCD doesn't drive me insane later lol. Weirdly though, the RGB connector on this thing is male and not the kind where you can remove the pins, so I'm gonna need an extension to make it female so it can plug into the motherboard.. Pretty sure they advertise it as Aura sync compatible but I don't know what motherboard vendor uses female sockets on the board for RGB.


Well, don't be surprised. Alphacool is at the lower end of desirable water-cooling products.

Sent from my iPhone XS Max using Tapatalk


----------



## nycgtr

dante`afk said:


> Well dont be surprised. Alphacool is at the lower end of desireable watercool products.
> 
> Sent from my iPhone XS Max using Tapatalk


I always said they give out more crap to YouTubers, modders, etc. than they sell. It's a miracle they're still in business.


----------



## Nico67

toncij said:


> Where is the block from and the IO panel?


The block is just the EK Vector 2080 Ti model, just with the terminal cover off in that pic. The I/O plate is a single-slot EK one for the 1080 Ti that I got ages ago, hoping the next model would use the same connector layout. It works; the USB-C connector just sits in a DP port opening. Took a little bit of work to fit it, but not too difficult.


----------



## Zurv

Blah.. Battlefield V doesn't support SLI, and I'm not totally sure RT is in there..
I turned it on via the command line. Maybe the difference between fake RT and real RT is so small it doesn't matter.
The good news is one card can do [email protected] with maxed settings


----------



## arrow0309

dante`afk said:


> Well dont be surprised. Alphacool is at the lower end of desireable watercool products.
> 
> Sent from my iPhone XS Max using Tapatalk


And why would that be so?
They still make the best all-copper radiators, and I proudly own two, a 280mm UT60 and a 420mm Monsta.
Maybe they're somewhat new to the market with real full-cover blocks, but so are Phanteks and Thermaltake, for instance.
Performance-wise, we still have to see how they perform.

But somehow I have to agree; desirability-wise, I don't know. I ended up with this EK Vector, but IMHO the sexiest right now is the Aqua Computer block and backplate with active cooling:

https://forum.aquacomputer.de/wasserk-hlung/108610-preview-kryographics-next-2080ti/


----------



## GAN77

arrow0309 said:


> And why would that so?
> 
> 
> But somehow I have to agree, desirable-wise speaking I don't know, ended up with this EK Vector but IMHO the sexiest right now is the Aquacomputer block and backplate with active cooling:
> 
> https://forum.aquacomputer.de/wasserk-hlung/108610-preview-kryographics-next-2080ti/


XSPC Razor Neo - RTX 2080 Ti (Black Chrome + Tempered Glass) looks just as good.
http://www.xs-pc.com/waterblocks-gpu/razor-neo-rtx-2080-ti-black-chrome-tempered-glass


----------



## rush2049

Zurv said:


> blah .. battlefield V doesn't support SLI and i'm not totally sure RT is in there..
> i turned it on via the command line. Maybe fake RT vs real RT is such a small diff it doesn't matter.
> the good news is 1 card can do [email protected] with the maxed settings
> 
> https://youtu.be/lUJTAKhbUFM


RT is coming in the day-0 patch, apparently... also you have to have Windows 10 1809, which is kinda hard to acquire at the moment.


----------



## Zurv

rush2049 said:


> RT is coming in the day 0 patch apparently.... also you have to have windows 10 1809 which is kinda hard to acquire at the moment.


when is 0 day? 

yeah, i'm running 1809. I downloaded the ISO when it was RTM


----------



## Vlada011

One of the things making me happy these days, and giving me a little fun after EKWB robbed me and sold me junk, is the fact that I didn't spend the price of an i9-9820X on the performance difference between the GTX 1080 Ti and RTX 2080 Ti.
Because the difference between a GTX 1080 Ti with a 2-year warranty and an RTX 2080 Ti with a 3-year warranty is exactly one i9-9820X.
Or an i9-9900K if we compare against a new GTX 1080 Ti.


----------



## kot0005

GosuPl said:


> One of my new (from rma) 2080Ti FE have GDDR6 Samsung
> 
> https://scontent-frt3-2.xx.fbcdn.ne...=c8a4270cfe564275fc4cb024b108bad4&oe=5C403E66
> 
> Soon i will check the second one
> 
> Great, this is very good info


Well, there's no point in Samsung really, unless it OCs to 18 Gbps.


----------



## cstkl1

Already predicting that when AMD launches their HBM2 card, NVIDIA will refresh the RTX line with 16 Gbps memory.

Waterblock is taking the scenic route with FedEx at the moment.


----------



## Zammin

GosuPl said:


> Be patient, of course i will do OC  I never use CPU or GPU without OC ;-) Now i'm just testing few things ;-)
> 
> Samsung with + 500, have slighlty better score than Micron + 700. Of course this can be anomally (for real core clocks is same - 2085 both), or never driver.
> 
> https://www.3dmark.com/compare/fs/16983074/fs/16695719#
> 
> I will test how much Samsung can OC. On Micron OC higher than + 725 (on my both former 2080Ti's) have lower scores than OC for 675/700. GDDR6 error corretion.
> 
> Water soon, "funny" play with bending PETG and dual loop will be ready
> 
> 2080Ti @2085/15400 with Samsung vs [email protected]/15400 with Micron.
> 
> https://www.3dmark.com/compare/fs/16983183/fs/16734572#
> 
> Samsung is Samsung ;-)
> 
> E:
> 
> 2x 2080Ti FE have Samsung memory


Very interesting to see that difference in score at the same settings. My card has a serial number starting with 323, so yours must be newer; I'm guessing mine will have Micron memory. Damn.



dante`afk said:


> Well dont be surprised. Alphacool is at the lower end of desireable watercool products.
> 
> Sent from my iPhone XS Max using Tapatalk





nycgtr said:


> I always said they give out more crap to yters, modders, etc than they sell. It's a miracle they are still in business.


I completely understand there is a lot of hate for Alphacool over their crappy pumps and some other incidents including their hybrid waterblocks that don't cool the memory or VRMs well, which is totally justified, I am well aware of these issues and I would not buy those products either. But those are different products. I don't have an issue with this block, other than the extra bit of cleaning they could have done but that's kind of nit picking on my part and it's easily fixed. Honestly it looks really well made, at least as good as EK which was my other option. As long as it looks nice and cools well that's all I need. I don't necessarily need the really high end exotic stuff especially since buying from Heatkiller or Aquacomputer (when that block is actually released) including shipping to Australia and GST will cost waaaay more than this did.

If I change my mind about using this block (which at this point I don't expect) I can always sell it and buy something else since it was only $150 AUD directly from Aquatuning + $30 AUD for the back plate (as opposed to over $200 just for the block if I ordered from the US). It's also worth mentioning that whether something is desirable or not is different from person to person, I personally do not desire to spend over $300 AUD on a waterblock if I can avoid it haha, just buying the 2080Ti cost $1900 AUD alone which is a lot of money for me.


----------



## GraphicsWhore

GAN77 said:


> arrow0309 said:
> 
> 
> 
> And why would that so?
> 
> 
> But somehow I have to agree, desirable-wise speaking I don't know, ended up with this EK Vector but IMHO the sexiest right now is the Aquacomputer block and backplate with active cooling:
> 
> https://forum.aquacomputer.de/wasserk-hlung/108610-preview-kryographics-next-2080ti/
> 
> 
> 
> XSPC Razor Neo - RTX 2080 Ti (Black Chrome + Tempered Glass) looks just as good.
> http://www.xs-pc.com/waterblocks-gpu/razor-neo-rtx-2080-ti-black-chrome-tempered-glass

Damn. My current build has black chrome in it. When I switch to a 9-series intel in a few months I’m going to be very tempted to swap my EK block for this. Not sure about that port arrangement though.


----------



## Zammin

arrow0309 said:


> And why would that so?
> They still make the best "all copper" radiators and I proudly own two, a 280mm UT60 and a 420mm Monsta.
> Maybe they're somehow new on the market with the real full cover blocks but so they are Phanteks and Thermaltake for instance.
> Performance-wise we still have to see how do they perform.
> 
> But somehow I have to agree, desirable-wise speaking I don't know, ended up with this EK Vector but IMHO the sexiest right now is the Aquacomputer block and backplate with active cooling:
> 
> https://forum.aquacomputer.de/wasserk-hlung/108610-preview-kryographics-next-2080ti/


There are a few others in this thread that have the same block and for them it is performing around the same as most of the other commonly used blocks like EK and Bitspower.

The Aquacomputer block looks incredible, the AC rep sent me a pic of it a while back. Looks like it has a display in the in/out part of the block. At the time shoggy said he didn't have an exact price but it was going to be very expensive compared to previous AC GPU blocks.


----------



## kot0005

Zammin said:


> There are a few others in this thread that have the same block and for them it is performing around the same as most of the other commonly used blocks like EK and Bitspower.
> 
> The Aquacomputer block looks incredible, the AC rep sent me a pic of it a while back. Looks like it has a display in the in/out part of the block. At the time shoggy said he didn't have an exact price but it was going to be very expensive compared to previous AC GPU blocks.


AC has the best quality and higher pricing because they move a lot less volume than EK and others, so they don't mass-produce as much as EK does.


----------



## Zammin

kot0005 said:


> AC has the best Quality and higher pricing because they move a lot less volume than EK or others. So they dont mass produce as much as EK.


Yeah that's totally understandable. I wouldn't expect a block like that to be priced the same as EK etc. Especially with the active backplate and built-in OLED display.


----------



## cstkl1

That's one crazy big/heavy box..



Haven't bought any block from Bitspower since the GTX 480 (their backplate short-circuited my GPU).

But I can see the disdain for EK.. the quality, especially the plexi parts..


----------



## kot0005

Zammin said:


> Yeah that's totally understandable. I wouldn't expect a block like that to be priced the same as EK etc. Especially with the active backplate and bullt in OLED display.


You can just buy the Vision module and stick it to the block with mounting tape; it plugs into an internal USB header. Then install Aquasuite and get the sensor info from HWiNFO or AIDA64. You could have ordered one in with your blocks lol. Aquatuning sells them.

http://www.au.aquatuning.com/water-...computer-vision-touch-with-internal-usb-cable


----------



## kot0005

cstkl1 said:


> dats one crazy big/heavy box..
> 
> 
> 
> havent bought any block from bitspower since gtx 480 ( their backplate short circuited my gpu)
> 
> but i can see why the disdain of EK. da quality especially plexi part..


What's wrong with EK's plexi quality? Can you link some complaint posts? Acrylic is pretty much the same unless you use some really cheap stuff.


----------



## Zammin

kot0005 said:


> you can just buy the vision module and stick it to the block with mounting tape it plugs into internal usb headers, then install aquasuite and get the sensor info from hwinfo or aida64. You could have ordered one in with ur blocks lol. Aquatuning sells them.
> 
> http://www.au.aquatuning.com/water-...computer-vision-touch-with-internal-usb-cable


Yeah I saw what you did with yours, looks really neat. I thought about it but didn't go ahead with it, the OLED display on my new ROG Formula should be able to display some helpful data, I also ordered an OLED flow meter as well.


----------



## cstkl1

kot0005 said:


> whats wrong with EK's plexi quality ?? can you link some complaint posts ? Acrylic is pretty much the same unless you use some really cheap stuff.


Nah, it's just me I guess.. the last EK GPU block with plexi I had was, hmm, the GTX 580 I think... but the plexi on the Strix X299 monoblock.. don't even get me started on that. It has these weird marks coming up after a year of use.. will take a pic later after I replace it with the EK Velocity nickel plexi RGB.. (thinking now, should I just change this to Bitspower...)

anyway

Hmm, MSI and Bitspower seriously went overboard with the amount of thermal paste..

MSI had like 3-4 spots where the thermal paste hardly even made contact with the heatsink.. what's up with that..


----------



## cstkl1

Anybody know what these two holes are for?

https://shop.bitspower.com/installation_guide_file/20181106_9722.pdf


----------



## kot0005

Damn, the acrylic on the block is a lot thicker than on EK.


----------



## GTANY

Do you know if the EVGA RTX 2080 Ti Black Edition Gaming (https://www.evga.com/products/product.aspx?pn=11G-P4-2281-KR) will have a 300A chip?

TechPowerUp lists it that way, but I am not sure that's reliable: other cards like the MSI Ventus were listed with a 300A chip and then a 300 one afterwards on TechPowerUp.


----------



## cstkl1

Got a reply from Bitspower.. it's a new I/O plate.

Saw the same two holes on the Lotan FE.
Beginning to think I should have bought the waterblock for the Strix (it has a special part integrated into the block that screws to the I/O).


----------



## toncij

vmanuelgm said:


> In spite of that connector, it is indeed a reference pcb with some extra circuitery, but if u are planning to watercool it with the EK reference block, u won't be able unless you bend those pins or do some kind of mod.


Do you have info on some successful blocking? I can remove the pin connector tho... I guess...


----------



## Zammin

cstkl1 said:


> anybody know what this two holes for??




kot0005 said:


> damm the acrylic on the block is a lot thicker than on ek


Thicc plexi


----------



## Zammin

toncij said:


> Do you have info on some successful blocking? I can remove the pin connector tho... I guess...


Way earlier on in this thread (like a month+ ago) there was a guy who put an EK block on his, I believe he removed the plastic shroud around the pins and bent the pins over flat. Someone else mentioned the possibility of de-soldering the pins so they can be put back on, but I don't think it's been tried.

It is a shame about that connector, the Gigabyte Gaming cards have been some of the easiest to come by and the price is fairly reasonable here, plus a good OEM BIOS (366W). Just that pesky connector blocking your block (pun intended).


----------



## toncij

Zammin said:


> Way earlier on in this thread (like a month+ ago) there was a guy who put an EK block on his, I believe he removed the plastic shroud around the pins and bent the pins over flat. Someone else mentioned the possibility of de-soldering the pins so they can be put back on, but I don't think it's been tried.
> 
> It is a shame about that connector, the Gigabyte Gaming cards have been some of the easiest to come by and the price is fairly reasonable here, plus a good OEM BIOS (366W). Just that pesky connector blocking your block (pun intended).


Yes, and it's a good card, the one I have here. The cooling (air) is also cool (pun again), but still, it's an annoying part of it. What is it used for anyway? Also, the normal addressable led connector is gone on it, it seems. If that's what I think it is.


----------



## Zammin

toncij said:


> Yes, and it's a good card, the one I have here. The cooling (air) is also cool (pun again), but still, it's an annoying part of it. What is it used for anyway? Also, the normal addressable led connector is gone on it, it seems. If that's what I think it is.


I'm not 100% sure if it's a fan header or an RGB header; I had heard both, but I don't have that card so I'm not totally sure. I think the user that blocked his was named Fiery or something along those lines, although I don't think he's posted in this thread in a while.


----------



## cstkl1

Bye bye Strix.. (making a profit on the Strix.. demand is high)


----------



## Jpmboy

cstkl1 said:


> anybody know what this two holes for??
> 
> https://shop.bitspower.com/installation_guide_file/20181106_9722.pdf


3mm LEDs


----------



## rush2049

If anyone wants the VBIOS from the new cards coming directly from NVIDIA as RMA replacements: https://www.techpowerup.com/vgabios/205158/205158

I just got my replacement card, Samsung memory. Serial: 032441.....

I updated this spreadsheet with my card; also, if anyone hasn't added their working or not-working cards, please do so: https://docs.google.com/spreadsheets/d/1BHwsdI5cCaUUQu0vRtHgGDGrG-3Vf7_b82KX_lZ4-9M/edit?usp=sharing


----------



## toncij

Something weird I'm noticing: I keep monitoring clocks during 3DMark runs, and while the clocks look fine, the end result is significantly worse than expected. I took 2 cards and applied the same FTW3 BIOS (or the stock GB Gaming 366W one); one card scores about 7000-7100 points, the other drops to 6400-6800, with the clocks looking similar. Thermals look fine too, no visible downclock, 2055-1980 in the 2nd Time Spy Extreme test.. but one card simply can't touch a 7000 result.
Microstuttering that's invisible to the naked eye?
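A score gap at similar clocks usually hides in the frame-time distribution rather than the averages. A minimal sketch of how you could compare two runs from a frame-time log (e.g. a PresentMon/CapFrameX export); the numbers below are synthetic, not real captures:

```python
def stutter_stats(frametimes_ms):
    """Summarize a list of frame times (ms): average fps and 1% low fps."""
    n = len(frametimes_ms)
    avg_fps = 1000.0 / (sum(frametimes_ms) / n)
    # 1% low: average fps over the worst 1% of frames (by frame time)
    worst = sorted(frametimes_ms, reverse=True)[:max(1, n // 100)]
    low_1pct_fps = 1000.0 / (sum(worst) / len(worst))
    return avg_fps, low_1pct_fps

# Two synthetic runs: same average frame time, very different consistency
smooth = [16.7] * 100                 # steady ~60 fps
stutter = [15.0] * 99 + [185.0]       # same total time, one big spike

print(stutter_stats(smooth))
print(stutter_stats(stutter))
```

The two runs report the same average fps, but the stuttery run's 1% low collapses, which is exactly the kind of gap a headline score hides.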


----------



## toncij

rush2049 said:


> If anyone wants the V-bios off of the new cards coming directly from Nvidia as RMA replacements: https://www.techpowerup.com/vgabios/205158/205158
> 
> I just got my replacement card, Samsung memory. Serial: 032441.....
> 
> I updates this spreadsheet with my card, also if anyone hasn't added their working or not working cards, please do so: https://docs.google.com/spreadsheets/d/1BHwsdI5cCaUUQu0vRtHgGDGrG-3Vf7_b82KX_lZ4-9M/edit?usp=sharing


Very interested in how much you can clock the memory...


----------



## kot0005

Is anyone able to get an average 80 fps in BFV at 4K? My 2080 Ti boosts to 2100 MHz but dips below 50 fps ....*** and HDR doesn't work with DX12.. can anyone try?


----------



## cstkl1

Jpmboy said:


> 3mm LEDs


That's what I thought too.. Bitspower replied..

They're going to sell a custom I/O bracket that can be used for the FE block and the TRIO/Lotan series.


----------



## cstkl1

rainbow....


----------



## CallsignVega

kot0005 said:


> Is anyone able to get avg 80fps on BFV in 4k ?? My 2080Ti boosts to 2100mhz but dips below 50fps ....*** and hdr doesnt work with DX12.. can anyone try


With my Titan V and BFV on ultra settings, it is quite choppy. Usually in the 60-90 FPS range. Stuttering a lot. I have the latest NVIDIA driver too. My HDR works with DX12. Game is gorgeous.


----------



## GraphicsWhore

CallsignVega said:


> With my Titan V and BFV on ultra settings, it is quite choppy. Usually in the 60-90 FPS range. Stuttering a lot. I have the latest NVIDIA driver too. My HDR works with DX12. Game is gorgeous.


This is still the issue with stuttering when DX12 is enabled that was common in BF1 and was reported in the BFV beta? Do you have stuttering with DX12 off?

I'm thinking of joining Origin Access to play BFV this weekend but if the stuttering issue is still there that's really discouraging. This problem has been well-documented since BF1.


----------



## dante`afk

CallsignVega said:


> With my Titan V and BFV on ultra settings, it is quite choppy. Usually in the 60-90 FPS range. Stuttering a lot. I have the latest NVIDIA driver too. My HDR works with DX12. Game is gorgeous.


The game stutters in DX12. No stuttering with DX11. You also have to enable future frame rendering in the options for smooth fps.

DX11





DX12





Disregard my gameplay, I did not play a single game in 3 months.

Sent from my iPhone XS Max using Tapatalk


----------



## CallsignVega

Ok I'll switch to DX11 and try again.


----------



## GraphicsWhore

dante`afk said:


> The game stutters in dx12. No stuttering with dx11. You also have to enable future frame in options for smooth fps.
> 
> DX11
> https://youtu.be/ESz9cTZaVhU
> 
> DX12
> https://youtu.be/pTYEan-sFzo
> 
> disregard my gameplay, i did not play a single game in 3 months
> 
> Sent from my iPhone XS Max using Tapatalk


That sucks. Honestly I don't think I notice a difference between DX11 and 12, but it's crazy that this is still an issue. I remember reading about it when I was trying to figure out how to fix it in BF1, and the consensus seemed to be that ultimately it's a Frostbite problem, though I don't know if that was ever proven. Given that stuttering exists in BF1, the BFV beta and now BFV, it seems reasonable to believe it's true.


----------



## Zurv

kot0005 said:


> Is anyone able to get avg 80fps on BFV in 4k ?? My 2080Ti boosts to 2100mhz but dips below 50fps ....*** and hdr doesnt work with DX12.. can anyone try


i'm sticking to 2150 and haven't seen it drop below the mid-60s. HDR also is working fine with dx12. I'm using an LG c8 77"
(that said, my rtx is on my desktop and my titan V is on my TV)

also the stutter in dx12 only happened the first time i loaded into a map. After that it is fine. (also, lowered the OC a bit too.)


----------



## okioki

*Asus Turbo Faulty*

Hello, this is my faulty Asus Turbo 2080Ti


----------



## GraphicsWhore

okioki said:


> Hello, this is my faulty Asus Turbo 2080Ti


What’s wrong with it?


----------



## arrow0309

dante`afk said:


> The game stutters in dx12. No stuttering with dx11. You also have to enable future frame in options for smooth fps.
> 
> DX11
> https://youtu.be/ESz9cTZaVhU
> 
> DX12
> https://youtu.be/pTYEan-sFzo
> 
> disregard my gameplay, i did not play a single game in 3 months
> 
> Sent from my iPhone XS Max using Tapatalk


They both look dx11 to me.


----------



## GraphicsWhore

dante`afk said:


> The game stutters in dx12. No stuttering with dx11. You also have to enable future frame in options for smooth fps.
> 
> DX11
> https://youtu.be/ESz9cTZaVhU
> 
> DX12
> https://youtu.be/pTYEan-sFzo
> 
> disregard my gameplay, i did not play a single game in 3 months
> 
> Sent from my iPhone XS Max using Tapatalk


Damn your stuttering is hardcore. Mine wasn't that bad in the BFV beta but I'll find out tonight how it is in the final game. Praying I'm somehow lucky and don't have it at all.

Like I said though I can't tell the difference between DX11 and 12 and I'd be surprised if anyone could without nitpicking still images.


----------



## dVeLoPe

EVGA GeForce RTX 2080 Ti XC BLACK EDITION GAMING, 11G-P4-2282-KR

anyone want to buy this card from me for 1200? I get it on the 14th


----------



## kot0005

I just turned on MSI Afterburner monitoring, and I am getting 60 to 70 fps at 4K while my GPU usage hovers at 60 to 65%... probably why I am not hitting 90 fps. I don't know how to fix this lol.

For HDR on DX12 it's just corrupted; the colours are oversaturated. Works fine when I use DX11. This is on a PG27UQ.

Will try reinstalling drivers.
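Low GPU usage like this is easiest to confirm outside the in-game overlay: `nvidia-smi --query-gpu=utilization.gpu,temperature.gpu --format=csv,noheader -l 1` prints one sample per second. A minimal sketch of parsing that output (the sample values below are made up for illustration):

```python
# Sample lines in the shape produced by:
#   nvidia-smi --query-gpu=utilization.gpu,temperature.gpu --format=csv,noheader -l 1
sample_log = """\
63 %, 64
61 %, 64
97 %, 66
58 %, 65
"""

def parse_util(log_text):
    """Return the GPU utilization percentages from csv,noheader output."""
    utils = []
    for line in log_text.strip().splitlines():
        util_field, _temp = line.split(",")
        utils.append(int(util_field.replace("%", "").strip()))
    return utils

utils = parse_util(sample_log)
avg_util = sum(utils) / len(utils)
print(f"avg GPU utilization: {avg_util:.1f}% "
      f"({sum(u < 90 for u in utils)} of {len(utils)} samples under 90%)")
```

If most samples sit well under 90% while frame rate is low, the card is waiting on something else (CPU, engine cap, pre-rendered-frames setting) rather than running out of GPU.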


----------



## Jpmboy

dante`afk said:


> The game stutters in dx12. No stuttering with dx11. You also have to enable future frame in options for smooth fps.
> 
> DX11
> https://youtu.be/ESz9cTZaVhU
> 
> DX12
> https://youtu.be/pTYEan-sFzo
> 
> disregard my gameplay, i did not play a single game in 3 months
> 
> Sent from my iPhone XS Max using Tapatalk



I almost got nauseous watching the DX12 one.
I would try setting the NVIDIA and Windows power plans to High/Max performance, and/or disabling Speed Shift and SpeedStep in the BIOS, and disabling all C-states, and see if it gets better. Looks to me more like a machine-interface issue, if it is not a driver thing.


----------



## kot0005

Reinstalled the driver; still getting low GPU usage on both DX11 and 12.

Just tried BF1: DX12 HDR works, no stutters, and 98 fps at 98 Hz with 95+% GPU usage..


----------



## kot0005

Okay... everything is fixed. Had to repair the game lol. HDR works on DX12. But now my DX12 stutters and is unplayable lolll... it didn't stutter before ***


----------



## dante`afk

Jpmboy said:


> i almost got nauseous watching the dx12.
> I would try setting the NV and windows power plan to High/max perf, and/or disabling speedshift and speed step in bios, and disable all c-states and see if it gets better. looks to me more of a D machine-interface issue, if it is not a driver thing.


Dunno about that; everyone I know has the same issue with DX12, and some review sites mentioned the same.


----------



## kot0005

OK, after the repair DX11 is still around 60 to 70% GPU usage, but gameplay is smooth and frames are around 55 to 65. With DX12 I got HDR to work and 95%+ GPU usage after the repair, but lots of stuttering. Changing maximum pre-rendered frames in the NVCP with DX12 made it better, but not good..


----------



## dante`afk

What's everyone's delta (water-to-GPU diff) with the Bitspower block? I'm not sure if my thermal paste is applied incorrectly or if the block is just not that good, but I see people with EK/WC blocks with a delta of 8-10C on load, while mine does 15-18C (5-6C while idling).


----------



## cstkl1

I am getting a 15C delta from idle.. FurMark/Time Spy etc., 30 min loop.

Haven't fully bled it yet. The block is so thick that if I look closely inside I can still see pockets of air..

Overnight got rid of most of the pockets.. think I will wait and see until the end of the week.

Gaming load just now, the delta was only 8-9C.

Running at 2130/7000.. 330 watts, MSI BIOS.

Ambient temp 27C.


----------



## Jpmboy

dante`afk said:


> what's everyone's delta (water to gpu diff) with the bitspower block? I'm not sure if my thermal paste is not applied correctly or if the block is just not that good, but I see people with EK/WC blocks with a delta of 8c-10c on load while mine does 15c-18c (5c-6c while idling)


i get ~10C over loop temp (cold side) at steady state. Right now, folding at 2070 the entire day, card is at 32C, water is at 23C. the BP block works fine. You prolly just need to redo the tim. I have been using a new(ish) tim lately... Tim-Mate TIM2 and find it is as good as anything (well, except LM). Using it on Titan V, 2080Ti, 2 TXps, 7980XE, 6950X, 8700K, 8086K, 2700K, etc. very good stuff IMO. And does not dry out - at all.


----------



## GraphicsWhore

Not exactly sure what changed, but I was whining about not hitting 40k on Fire Strike graphics. Before I was at like 39.7k.

Then...

http://www.3dmark.com/fs/16999645


----------



## cstkl1

Age. Getting blind.

Sorry, delta over water temp on load?

Only about 8-10C.
Idle 2-3C.


----------



## dante`afk

Okay, thanks for your input boys.

I disassembled the block/GPU and reapplied new TIM, this time NT-H1 from Noctua (was using Kryonaut before). Temps are better now I believe: water out 29, water in 28, ambient 25, and the GPU is at 38C after 30 minutes of BF V. Sounds like about a 10C delta, right? Lots of air bubbles in the loop now though; the res and block are full of them. Let's hope it gets down a bit further over time once the bubbles get out.


----------



## kot0005

Apparently this is the issue: https://www.reddit.com/r/Battlefiel..._low_fps_and_low_gpu_usage_caused_by_ffr_off/

I haven't tried to turn it on yet because I am reinstalling the game lol. It stopped launching after I disabled Intel SpeedStep in the BIOS like someone here suggested xD.


----------



## Zammin

Well looks like my FE has Micron memory according to GPU-Z. Only just got the test system up and running today. Oh well, now we wait and see if it survives. Fingers crossed it's not an R.I.P edition.


----------



## kot0005

I gave up on BF V... The game just doesn't wanna launch. Back to playing Lost Ark.


----------



## CallsignVega

Jpmboy said:


> i get ~10C over loop temp (cold side) at steady state. Right now, folding at 2070 the entire day, card is at 32C, water is at 23C. the BP block works fine. You prolly just need to redo the tim. I have been using a new(ish) tim lately... Tim-Mate TIM2 and find it is as good as anything (well, except LM). Using it on Titan V, 2080Ti, 2 TXps, 7980XE, 6950X, 8700K, 8086K, 2700K, etc. very good stuff IMO. And does not dry out - at all.


You go with lathering on the TIM for GPUs? I heard these days, unlike CPUs where you use it sparingly, it's better to have too much TIM on the GPU than not enough...



kot0005 said:


> I gave up on BF V ...The game just doesnt wana launch. Back to playing Lost Ark.


It doesn't launch at all? Or do you get a black screen in the game?


----------



## Zammin

kot0005 said:


> I gave up on BF V ...The game just doesnt wana launch. Back to playing Lost Ark.


I'm not surprised BFV is running poorly, BF1 was horrible on DX12 and even on DX11 wasn't exactly a smooth experience (online MP anyway). When I played the BFV beta it was pretty rough, those two experiences put me off buying the new game.



CallsignVega said:


> You go with lathering on the TIM for GPU's? I heard these days, unlike CPU's in which you use sparingly, it's better to have too much TIM on the GPU than not enough...


I have to agree that going a bit heavy with the paste on the GPU is usually the way to go; there's a lot of testing on YouTube (Gamers Nexus, JayzTwoCents etc.) that shows having too much doesn't hurt your temps at all (in some cases it actually dropped them slightly lol). I made the mistake of not using enough paste when I first mounted my Phanteks block to my 1080Ti. I did the star method as shown in the instructions, but I did so sparingly. Had temps in the 50s while gaming. Took it apart, spread a nice thicc layer of Kryonaut over the whole die and bam, down into the high 30s.


----------



## kot0005

CallsignVega said:


> You go with lathering on the TIM for GPU's? I heard these days, unlike CPU's in which you use sparingly, it's better to have too much TIM on the GPU than not enough...
> 
> 
> 
> It doesn't launch at all? Or do you get a black screen in the game?


Just doesn't launch lol; it syncs up to 100% with the cloud, then the Origin app minimizes and comes back up. I tried enabling SpeedStep again later, tried uninstalling Origin and clearing the registry, then re-installing.

Tried running the GPU at stock clocks as well just to make sure; the game window doesn't launch at all.

I can still launch BF1 DX12 and HDR with no problems. Just have to try doing a full re-download of the game, but I already did that when trying to fix my low GPU usage. Probably gonna try it again when it launches.

Lost Ark is a pretty good Diablo replacement if anyone can manage to get a Korean account.


----------



## kot0005

Zammin said:


> I'm not surprised BFV is running poorly, BF1 was horrible on DX12 and even on DX11 wasn't exactly a smooth experience (online MP anyway). When I played the BFV beta it was pretty rough, those two experiences put me off buying the new game.
> 
> 
> 
> I have to agree that going a bit heavy with the paste on the GPU is usually the way to go, there's a lot of testing on Youtube (Gamers Nexus, JayzTwoCents etc) that shows having too much doesn't hurt your temps at all (in same cases it actually dropped it slightly lol. I made the mistake of not using enough paste when I first mounted my Phanteks block to my 1080Ti. I dis the star method as shown on the instructions but I did so sparingly. Had temps in the 50s while gaming. Took it apart and spread a nice thicc layer of Kryonaut over the whole die and bam, down into the high 30s.


I used a lot of paste on my GPU, using the pattern that EK shows in their manual. You also need to use a bit more because the die is super big. My temps have been lower than my stock-BIOS 1080Ti so far.

Only goes to 50C when it's a hot day, like 35C ambient. Fans always at 650 rpm max.


----------



## arrow0309

kot0005 said:


> I used a lot of paste of my gpu, used the pattern that EK shows in their manual. You also need to use a bit more because the die is super big. My temps have been lower than my stock bios 1080Ti so far.
> 
> Only goes to 50c when its a hot day like 35c ambient. Fans always at 650rpm max.


The blob or the line method?


----------



## cstkl1

dante`afk said:


> okay thanks for your input boys.
> 
> i disassembled the block/gpu and reapplied new TIM, this time NHT1 from Noctua (was using kyronaut before). Temps are better now I believe, water out 29, water in 28, ambient 25 and gpu is at 38c after 30 minute of BF V - sounds about 10c delta, right? lots of airbubbles now in the loop though, res and block are full of them, let's hope it gets down a bit further over time when the bubbles get out.


Think I had the same issue.. the block is all cleared of bubbles, but now, hmm, the delta is suddenly getting big... When installing I actually had just enough to paint the die instead of slabbing it on thick... was using Kryonaut as well.


----------



## Jpmboy

CallsignVega said:


> You go with lathering on the TIM for GPU's? I heard these days, unlike CPU's in which you use sparingly, it's better to have too much TIM on the GPU than not enough...


I just use a small pea-sized glob in the center of the die... the size of the glob depends on the size of the die, GPU or CPU. OEMs put it on with a cake-icing bag because most air coolers have crap thermal transfer plates. A well-made water block, whether on a CPU or GPU, should really be treated the same.




kot0005 said:


> apparently this is the issue https://www.reddit.com/r/Battlefiel..._low_fps_and_low_gpu_usage_caused_by_ffr_off/
> 
> I haven't tried to turn it on yet because I am reinstalling the game lol. It stopped launching after I disabled intel speed step in the bios like someone here suggested xD.


I suggested that... and there is no reason for ANY program to fail to launch with SpeedStep and Speed Shift disabled. You've got something else going on there regarding basic system operation or BIOS settings. Note: Speed Shift needs the C6-state comm enabled and "OS-native" enabled in the BIOS for Intel's Speed SHIFT to work properly. SpeedStep is a Windows 7 leftover; disable it when using Win10 v1709 or higher. Disabling both, and all C-states, simply runs the CPU at the max turbo frequency set in the BIOS; this is the highest P-state for the hardware. Best to use a manual override vcore when you disable dynamic frequency capabilities (like 'Step and/or 'Shift).
So if you disable SpeedStep, be sure you have Speed Shift properly enabled... or else just disable both.
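The Windows power-plan half of this advice can be applied from an elevated command prompt; `powercfg` is the built-in tool, and `SCHEME_MIN` is the standard alias for the stock High performance plan (the SpeedStep/Speed Shift toggles themselves still live in the BIOS). A minimal config sketch:

```shell
:: List the available power schemes and mark the active one
powercfg /list

:: Switch to the built-in High performance plan
powercfg /setactive SCHEME_MIN
```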


----------



## Zammin

kot0005 said:


> I used a lot of paste of my gpu, used the pattern that EK shows in their manual. You also need to use a bit more because the die is super big. My temps have been lower than my stock bios 1080Ti so far.
> 
> Only goes to 50c when its a hot day like 35c ambient. Fans always at 650rpm max.


Nice. Dat radiator capacity. I can't run my fans that slow in game while the GPU is going flat out (1080Ti/8700k system) or the coolant temp will probably go above 45C. With the fans at 1600RPM on a hot day the coolant temp usually stays around 32-34C during intensive gaming sessions. Might be a bit higher with the 9900k and 2080Ti when I get them under water. Considering putting in another 240 rad if my GPU block will fit in my case horizontally, not sure yet though.

Had my first play session with the 2080Ti and 9900k system (all air cooled for now) tonight. Black Ops 4 kept crashing and was playing pretty badly at first, turns out the XMP II setting on Z390 was causing issues and instability. As soon as I reset to defaults the issues went away. Set to XMP I and it seems to be okay now.

XMP II looks like it changes a whole bunch of memory settings beyond the usual 4 timings and voltage, while XMP I just does the usual as per last gen. Not sure why XMP II is there if it's so unstable.

At stock settings (other than a custom fan curve) the GPU is operating between 1800-1900mhz in game and running in the low-mid 70s temperature-wise with a fairly aggressive fan curve. If I raise the power limit the temp goes up but the clock speed stays about the same. I suppose this is where average air cooling is the limiting factor.

It's nice to be able to crank all the settings and stay pretty much pinned at the refresh rate at 1440p 144/165hz. On my 1080Ti Strix I had to dial a few settings down to keep the frame rate above 115 most of the time.


----------



## joshpsp1

Broke my arm yesterday so I won't be able to test my Aorus Xtreme on Tuesday


----------



## dante`afk

after getting the block off my gpu yesterday, I noticed that the lower part of the memory chips had some kind of glazing around them. I wonder if that is from the thermal pads? Tried to remove it with alcohol, no luck.

Inb4 my card dies.


----------



## Fitzcaraldo

dante`afk said:


> after getting the block off my gpu yesterday, I noticed that the lower part of the memory chips had some kind of glazing around them. I wonder if that is from the thermal pads? Tried to remove it with alcohol, no luck.
> 
> Inb4 my card dies.


Mine has the same. It's either silicone grease or some additional retention resin to secure the chips next to the pci-connector on the PCB due to the weight of the card (with cooler).


----------



## rush2049

So I just got my replacement card from Nvidia.

How risky do you think it is to put my EK block on this new card without first testing it for the two weeks it took my last one to fail?


----------



## Jpmboy

The only risk is having to pull it out of the loop. Use QDCs and that gets real simple.


----------



## xer0h0ur

bastian said:


> New game ready driver releasing today. Bug fixes, the highlight being fixes for crashing RTX cards.


More like making my perfectly working 2080 Ti black-screen both of my monitors. I have never had so many wonky drivers for a video card until getting this 2080 Ti. It's quickly becoming a goddamned joke. There are 4 official Windows 7 drivers released so far and only one of them works for me.


----------



## GraphicsWhore

I guess it's a combination of lower ambient (temps have plummeted around here) and drivers, plus using the Galax BIOS, but I crushed my previous TimeSpy and FireStrike scores in graphics and overall.

Previous TS overall: 12396 
New TS overall: 12762 (http://www.3dmark.com/spy/5003035)

Previous TS graphics max: 15976
New TS graphics max: 16606

Clocks: Max 2160/8000

Previous FS overall max: 25584
New FS overall max: 25923 (http://www.3dmark.com/fs/17009547)

Previous FS graphics max: 39830
New FS graphics max: 40535

Clocks: Max 2130/8000


----------



## domrockt

Finally my KFA2 2080 Ti arrived! :D
Now I can be added as an owner.

https://www.techpowerup.com/gpuz/details/uqf46


----------



## Addsome

How long should I test my Zotac 2080ti amp before calling it safe and flashing the Galax bios? I got my card installed October 3 and initially had the Galax bios on it for about 2.5 weeks until I started hearing about the failures. I then reverted to stock bios and have been running it overclocked on the stock bios since then. Would you guys say it's safe to flash the Galax bios now? Also I have an old GTX 680 just lying around. If my 2080ti did fail while having the galax bios, could I use my 680 to flash it back to stock?


----------



## xer0h0ur

Addsome said:


> How long should I test my Zotac 2080ti amp before calling it safe and flashing the Galax bios? I got my card installed October 3 and initially had the Galax bios on it for about 2.5 weeks until I started hearing about the failures. I then reverted to stock bios and have been running it overclocked on the stock bios since then. Would you guys say it's safe to flash the Galax bios now? Also I have an old GTX 680 just lying around. If my 2080ti did fail while having the galax bios, could I use my 680 to flash it back to stock?


IMO if you were going to have a problem it likely would have already surfaced, assuming you have been playing games and consistently putting a high load on the GPU. You should still be able to flash the card back to its original BIOS even if your GDDR6 suddenly died on you, since, as you say, you have another video card to boot into Windows with.


----------



## GraphicsWhore

Addsome said:


> How long should I test my Zotac 2080ti amp before calling it safe and flashing the Galax bios? I got my card installed October 3 and initially had the Galax bios on it for about 2.5 weeks until I started hearing about the failures. I then reverted to stock bios and have been running it overclocked on the stock bios since then. Would you guys say it's safe to flash the Galax bios now? Also I have an old GTX 680 just lying around. If my 2080ti did fail while having the galax bios, could I use my 680 to flash it back to stock?


If you have a bunk card BIOS isn't going to matter. If you encounter issues with the Galax just revert to stock. Running Galax is not going to take your card over the edge into failure. As above poster said if you had a bad card you'd probably know by now.

And yes you can use another card to re-flash the 2080TI.
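For anyone who hasn't done it before, the usual nvflash routine with a backup looks something like the sketch below. The file names are examples and exact flags can vary between nvflash versions, so treat it as an outline (and flash at your own risk):

```shell
# Sketch only: file names are examples; run from an elevated prompt
# and keep the backup somewhere safe before touching anything.
nvflash --save stock_backup.rom   # back up the card's current BIOS first
nvflash --protectoff              # disable the EEPROM write protect
nvflash -6 galax_380w.rom         # -6 overrides the subsystem ID mismatch
                                  # you get when cross-flashing another
                                  # vendor's BIOS

# If the flash goes bad, boot off the second card and re-flash the backup:
# nvflash --index=1 -6 stock_backup.rom   (index selects which GPU to flash)
```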


----------



## Addsome

GraphicsWhore said:


> Addsome said:
> 
> 
> 
> How long should I test my Zotac 2080ti amp before calling it safe and flashing the Galax bios? I got my card installed October 3 and initially had the Galax bios on it for about 2.5 weeks until I started hearing about the failures. I then reverted to stock bios and have been running it overclocked on the stock bios since then. Would you guys say it's safe to flash the Galax bios now? Also I have an old GTX 680 just lying around. If my 2080ti did fail while having the galax bios, could I use my 680 to flash it back to stock?
> 
> 
> 
> If you have a bunk card BIOS isn't going to matter. If you encounter issues with the Galax just revert to stock. Running Galax is not going to take your card over the edge into failure. As above poster said if you had a bad card you'd probably know by now.
> 
> And yes you can use another card to re-flash the 2080TI.

The reason I originally reverted back to stock was for warranty purposes. The Galax BIOS worked great for me.


----------



## Zammin

I saw a post on the Nvidia forums saying that the cards with Samsung memory also have a new VBIOS, run cooler, and are more stable. I don't know in what way they're more stable, but can anyone with a Samsung-equipped card confirm either of those two things? I wonder what's changed with the VBIOS; is it improved, or just different due to the change in memory?

I suppose the better stability could be the driver update rather than the newer cards, but it's hard to be sure without having a Samsung model on hand.

Here's the thread: https://forums.geforce.com/default/...es/rtx-2080ti-new-batch-gets-samsung-memory-/


----------



## Zammin

Reading through that thread, it looks like both the new and old VBIOS have support for Micron, Samsung and SK Hynix memory. I wonder what actually changed in the new one. Someone also posted in the thread that they recently bought some 2080 Tis that came with Micron memory but already have the new VBIOS found on the Samsung versions:

"I have received my two 2080Ti FE cards on this Monday(11/5) from JD,the authorized retailer in China. Both of the cards are still with Micron memory, SN:03242181xxxxx, while the vbios is already in latest version(90.02.17.00.04), same as the cards you guys received with Samsung memory."

Here's a link to the new VBIOS if anyone wants to try it: https://www.techpowerup.com/vgabios/205158/205158


----------



## Jpmboy

hopefully it's not "more stable" because they relaxed the vram timings in the new bios. There have been many examples where an early vbios had tighter timings and could achieve only lower ram clocks, but ran faster even at a lower frequency on the ram (or more efficient, which ever way you look at it).
Only way to know is to test...


----------



## Zammin

Jpmboy said:


> hopefully it's not "more stable" because they relaxed the vram timings in the new bios. There have been many examples where an early vbios had tighter timings and could achieve only lower ram clocks, but ran faster even at a lower frequency on the ram (or more efficient, which ever way you look at it).
> Only way to know is to test...


Other than back to back benchmarking in time spy or firestrike and looking for differences in scores, is there a way to know exactly what's changed? Like maybe an application that can scan the VBIOS for timings? This is all new to me, I've never even flashed a VBIOS to a card before.


----------



## xer0h0ur

Well after some sorcery I managed to get the 416.81 driver working. I will probably bench later to compare but for now games.


----------



## Jpmboy

Zammin said:


> Other than back to back benchmarking in time spy or firestrike and looking for differences in scores, is there a way to know exactly what's changed? Like maybe an application that can scan the VBIOS for timings? This is all new to me, I've never even flashed a VBIOS to a card before.


Unfortunately, the usual list of benchmarks is the best way to tell. CUDA-enabled memory tasks will put the card's RAM in the P2 state, so you have to disable the P2 state flag in the driver (NV Inspector) for it to run at full speed. All that said, it's gonna take some serious testing to see any real change in vram performance (4K, 8K etc). I've been running the Galax 380W BIOS and really can't see a reason to flash back to stock at this point. Runs >8000 on the Micron RAM, so unless there is a real WOW, I'll stand pat.
Gaming at 4K+ is probably the best way to _notice_ an improvement.


----------



## Zammin

Jpmboy said:


> unfortunately, the usual list of benchmarks is the best way to tell. Cuda-enabled memory tasks will put the card's ram in the P2 state, so ya have to disable the P2 state "On" flag in the driver (NV INspector) for it to run at full speed - all that said, it's gonna take some serious testing to see any real change in vram performance (4K, 8K etc). I'm have been running the galax 380W bios and really can't see a reason to flash back to stock at this point. Runs >8000 on the micron ram, so unless there is a real WOW, I'll stand pat.
> gaming at 4K+ is probably the best way to _notice _and improvement.


Fair call, thanks for the info. Sounds like it's not going to be worth the effort.

On another note, it appears my card fails the Time Spy Stress Test at default settings. The only thing I've changed is the fan curve, to keep the GPU temp in the 70s. Other than that, it's a fresh Win10 install, 9900k at ASUS defaults with XMP 1 enabled. I don't know why it would fail the test at default speeds. Has this happened to anyone else?


----------



## Zammin

Failed the Time Spy Extreme Stress Test as well. I'm not noticing any stuttering or artifacts, but my clock speeds are constantly bouncing around between the low and high 1700s for most of the test in both cases. It starts at around 1900MHz and eventually finds its way down to the mid 1700s, with some fluctuation throughout. Could this be the reason? I know I've read some of you mentioning how your clock speeds aren't steady on Turing and tend to vary a lot. My GTX 1080 Ti has completely different behaviour: it just follows the curve and gradually steps down a little at a time as the temps creep up. I'm guessing this is the difference between GPU Boost 3.0 and 4.0.


----------



## Zammin

Tried setting windows power plan to high performance and NVCP to high performance, still failed Time Spy Extreme..


----------



## Emmett

I just barely passed. Founders 2080 Ti at default except for a fan profile; fans were at 100%, max core temp 65C.
8700K at 5.2 with HT off, mem at 3200.


----------



## Zammin

Emmett said:


> I just passed barely. founders 2080 TI at default except a fan profile. fans were at 100% max core temp 65C
> 8700K at 5.2 with HT OFF. mem at 3200


Thanks for running a test on your end, I'm running another one now with the fans at 100% to see if I get the same result. I'm on a 9900k at stock speeds with 3200 memory.

I just had a brief live chat with Nvidia support and the guy wasn't very knowledgeable. I sent them the screenshot of my last result. He said he would get back to me by email within 48 hours.

When googling the issue only one thread came up from the geforce forums where a couple of people thought it was normal but most others said their cards all passed the test with ease every time. I'm hoping we can get some more input from the others in this thread.


----------



## Nizzen

xer0h0ur said:


> bastian said:
> 
> 
> 
> New game ready driver releasing today. Bug fixes, the highlight being fixes for crashing RTX cards.
> 
> 
> 
> More like making my perfectly working 2080 Ti black screen both of my monitors. I have never had so many wonky drivers for a video card till getting this 2080 Ti. Its quickly becoming a god damned joke. There are 4 official Windows 7 drivers that have been released so far and only one of them works for me.

Complaining about a non-working driver while using an ancient OS...

OK


----------



## xer0h0ur

You enjoying your seemingly monthly broken Windows 10?


----------



## Zammin

Emmett said:


> I just passed barely. founders 2080 TI at default except a fan profile. fans were at 100% max core temp 65C
> 8700K at 5.2 with HT OFF. mem at 3200





Zammin said:


> Thanks for running a test on your end, I'm running another one now with the fans at 100% to see if I get the same result. I'm on a 9900k at stock speeds with 3200 memory.
> 
> I just had a brief live chat with Nvidia support and the guy wasn't very knowledgeable. I sent them the screenshot of my last result. He said he would get back to me by email within 48 hours.
> 
> When googling the issue only one thread came up from the geforce forums where a couple of people thought it was normal but most others said their cards all passed the test with ease every time. I'm hoping we can get some more input from the others in this thread.


Nope, still failed with 100% fan speed. 70C. Slight improvement but still failed.


----------



## Spiriva

My friend got his 2080 Ti FE card last week. He ran it in his 2nd system (on air) to make sure the card worked properly. Then this weekend he put a waterblock on it and got it installed in the loop.
He told me that he used EK's updated manual when putting on the thermal pads, and placed them as EK showed in the PDF.

After the card was installed in the loop (in his main system) everything was working fine, but now for some odd reason he gets coil whine from the card while gaming, which was not there while the 2080 Ti FE was running with its stock air cooler (in his 2nd system).

I thought maybe it's because the card can work harder while being cooled by the waterblock, but he said even if he clocks it exactly the same as it was under the stock cooler (+100 core voltage, 123% power limit, 88C temp target), then sets the core clock -50MHz so it clocks to 1900MHz as with the air cooler at 100% fan speed, with memory at +0,
he still gets coil whine with the waterblock.

Anyone got an idea what could have happened? Apparently the coil whine isn't that bad, a low-pitched "bzzz" sound, but why did it suddenly appear with the waterblock installed?

*He's running the stock Nvidia 2080 Ti FE BIOS


----------



## dVeLoPe

https://www.swiftech.com/komodo-rtx2080ti-heirloom.aspx

Anyone ordering? What's the best block for my EVGA card?


----------



## Zammin

Just tried the stress test again with the power limit maxed out, the clocks were sustained slightly higher and it was hotter (79C) but the result was still a fail at 96.7%.. I wonder if this is normal and if not, what the problem is..


----------



## Jpmboy

Zammin said:


> Just tried the stress test again with the power limit maxed out, the clocks were sustained slightly higher and it was hotter (79C) but the result was still a fail at 96.7%.. I wonder if this is normal and if not, what the problem is..


The measure of stability in this test is frame rate consistency - have you established stability for the cpu/ram subsystem? Frankly, there is no guarantee that XMP is really stable.
galax bios, [email protected], 32GB @4000c16, R6A. water
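For reference, the pass/fail math behind that consistency measure is simple: the stress test compares each loop's score and passes the run when the worst loop stays within 97% of the best. A quick sketch (the FPS numbers below are made up for illustration):

```python
# 3DMark-style stability check: the run passes when the lowest loop
# score is at least 97% of the highest loop score.
def stability(loop_scores):
    """Frame rate stability as the worst/best loop ratio."""
    return min(loop_scores) / max(loop_scores)

def passes(loop_scores, threshold=0.97):
    return stability(loop_scores) >= threshold

# Illustrative: a card that boosts high on the cold first loop, then
# settles lower as the cooler heats up, fails the 97% bar even though
# no single loop looks "unstable" on its own.
loops = [41.2, 40.8, 40.1, 39.9, 39.8, 39.7]
print(f"{stability(loops):.1%}", passes(loops))  # → 96.4% False
```

Which would fit the ~96.7% results being posted here: hot-loop clock droop alone can sink the score, with no artifacts or crashes at all.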


----------



## dVeLoPe

My EVGA card arrives on the 14th. How can I test it to make sure it's not a dud?


----------



## CallsignVega

Spiriva said:


> My friend got his 2080ti FE card last week. He ran it in his 2nd system (on air) to make sure the card worked good. Then this weekend he put a waterblock on it, and got it installed in to the loop.
> He told me that he used EK´s updated manual when putting on the thermal pads, and putted them as EK showed in the PDF.
> 
> After the card was installed in the loop (in his main system) everything was working good, but now for some odd reason he got coil whine coming from the card while gaming. Which was not there while the 2080ti FE was running with its stock air cooler (in his 2nd system)
> 
> I tought maybe it is becace the card can work harder while beeing cooled by the waterblock, but he said even if he clocks it exactly the same as it was while beeing cooled by the stock cooler (+100 core voltage, 123% power limit, 88c temp target,) then putting the core clock -50mhz to make it clock to 1900mhz as with the air cooler at 100% fan speed, and then mem on +0
> he still gets coil whine with the waterblock.
> 
> Anyone got an idea what could have happen ? Apperently the coil whine isnt that bad, a low pitch "bzzz" sound, but why did it suddenly appear with the waterblock installed ?
> 
> *Hes running stock Nvidia 2080ti FE bios


Naw, the whine was always there. The fan noise just covered it up.


----------



## Zammin

Jpmboy said:


> The measure of stability in this test is frame rate consistency - have you established stability for the cpu/ram subsystem? Frankly, there is no guarantee that XMP is really stable.
> galax bios, [email protected], 32GB @4000c16, R6A. water


Thanks. Yeah I know it's a test based on frame rate consistency. I've only just got this system up and running, but it hasn't shown any other signs of instability on XMP 1. The specific model of RAM is in the motherboard's QVL as well. I could try it again with XMP disabled and see if it improves, but when I was having issues with the RAM before on XMP 2 mode it was pretty obvious. There was a lot of stuttering and crashes. Watching the test it looks very steady, but for some reason or another it isn't as consistent as one would expect. I can imagine the results being better on water since your clocks don't drop as much throughout the test due to temperature. When I start the test the GPU is below 50C, but after a few loops it's in the 70s.

Tomorrow morning I'll try disabling XMP and see what happens. If it still fails, I don't know what the reason is. Everything else is default, no overclocks. The XMP profile on this kit is nothing too crazy, just 3200MHz 16-18-18-38 2T 1.35V. Perfectly stable on my other system at least (8700k/1080Ti).


----------



## Jpmboy

Zammin said:


> Thanks. Yeah I know it's a test based on frame rate consistency. I've only just got this system up and running, but it hasn't shown any other signs of instability on XMP 1. The specific model of RAM is in the motherboard's QVL as well. I could try it again with XMP disabled and see if it improves, but when I was having issues with the RAM before on XMP 2 mode it was pretty obvious. There was a lot of stuttering and crashes. Watching the test it looks very steady, but for some reason or another it isn't as consistent as one would expect. I can imagine the results being better on water since your clocks don't drop as much throughout the test due to temperature. When I start the test the GPU is below 50C, but after a few loops it's in the 70s.
> 
> Tomorrow morning I'll try disable XMP and see what happens. If it still fails I don't know what the reason is. Everything else is default, no overclocks. The XMP profile on this kit is nothing too crazy, just *3200mhz 16-18-18-38 2T 1.35V.* Perfectly stable on my other system at least (8700k/1080Ti).


It takes 1 hour with GSAT to eliminate RAM issues as a source of the problem. If you already tested the XMP1 settings with GSAT, HCI or RamTest, it is very unlikely that the RAM is causing the fail.
The card is hitting the 70s with the fans at 100%?
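For anyone who hasn't used GSAT: it's Google's stressapptest (Linux; easiest from a live USB). A typical one-hour pass looks something like this; the memory size is just an example, set -M to most of your free RAM:

```shell
# One-hour stressapptest run: -s is duration in seconds, -M is how many
# MB of RAM to hammer, -W uses a more CPU-stressful copy pattern.
# It prints "Status: PASS" at the end if no errors were found.
stressapptest -s 3600 -M 24576 -W
```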




CallsignVega said:


> Naw, the whine was always there. The fan noise just covered it up.


^^ this


----------



## bmg2

Zammin said:


> Just tried the stress test again with the power limit maxed out, the clocks were sustained slightly higher and it was hotter (79C) but the result was still a fail at 96.7%.. I wonder if this is normal and if not, what the problem is..


Have you folks with gpu problems at stock settings tried getting rid of your cpu overclock? First thing to do if you haven't.


----------



## arrow0309

Zammin said:


> Just tried the stress test again with the power limit maxed out, the clocks were sustained slightly higher and it was hotter (79C) but the result was still a fail at 96.7%.. I wonder if this is normal and if not, what the problem is..


If it helps, mine didn't pass either.

I'll try again after I install the Vector (waiting because of a last-minute change, ordering two pairs of quick disconnects).
It might be the CPU/memory overclock, although I've recently reset to XMP values.


----------



## acmilangr

Hi all,
I bought a Gigabyte 2080 Ti Gaming OC, and after I flashed it with the F2 BIOS something strange happened.

The power limit can now go to 122%. That part is fine.

The strange thing is that I now get higher temperatures (about 6 degrees more) even with all default settings. Maybe it now runs a higher clock speed, so that would explain it. (I didn't save the default BIOS to compare it...)

The problem now: I can overclock +145/+700 successfully, but only if I set the power limit to 111%. If I change it to the maximum 122% it crashes (3DMark stress test). The clock at these settings is about 1995MHz. At 122% it holds the same clock speed (1995MHz) but with more temperature.

So my card works better at 111% than at 122%...

Is this normal?
Can I try another BIOS? Where can I find the default BIOS? Can I try the Windforce BIOS, or is that dangerous?
What do you think?


----------



## arcDaniel

https://forums.geforce.com/default/...ch-gets-samsung-memory-/post/5911479/#5911479

Read post 81; if it's not a bad joke, it is not a Micron problem.


----------



## Emmett

Zammin said:


> Just tried the stress test again with the power limit maxed out, the clocks were sustained slightly higher and it was hotter (79C) but the result was still a fail at 96.7%.. I wonder if this is normal and if not, what the problem is..



This is on my other system; it's a (poorly) waterblocked Asus 2080 Ti Turbo, no backplate, still hitting 60C.

I was just playing with undervolting it; at 0.975V it ran at 1900-2000MHz through the run. No mem OC on the GPU, still slamming the power limit (stock BIOS).

The system is an 8700K at 4.9, HT on, mem at 3200.

I never use XMP.


----------



## GosuPl

arcDaniel said:


> https://forums.geforce.com/default/...ch-gets-samsung-memory-/post/5911479/#5911479
> 
> Read Post 81, so if not a bad joke, it is not a Micron Problem.


Yeah, I saw it, even replied. But maybe he just has bad luck.


----------



## Jpmboy

oh baby... https://www.techpowerup.com/gpu-specs/titan-rtx.c3311
where's my buddy @Zurv
... yeah, blame me.


----------



## Addsome

How long does the time spy extreme stress test run for?


----------



## Zurv

Jpmboy said:


> oh baby... https://www.techpowerup.com/gpu-specs/titan-rtx.c3311
> where's my buddy @Zurv
> ... yeah, blame me.


OH YOU SOB!

(I was already planning to get 2 to 4 of them....) But it is still your fault!

Hrmm... no RT cores listed... but that might just be missing info. That said, I don't really care about RT; the really new tech that should be the most useful is DLSS, which uses the tensor cores. (Also, more CUDA cores > RT cores, IMO.)

I wonder if it is Nvidia-only (which I bet) or whether OEMs will make cards too. The cost is going to be nutty (maybe...); it doesn't have HBM and no RT cores. Hopefully it is close to the Ti pricing (and not the V's pricing).


----------



## Jpmboy

Addsome said:


> How long does the time spy extreme stress test run for?


20 loops I believe. 5-10min it seems.


Zurv said:


> OH YOU SOB!
> 
> (I was already planning to get 2 to 4 of them....) But still it is your fault!
> 
> hrmm.. no RT cores... but that might be missing because of lack of info. That said, I don't really care about RT and the really new tech that should be the most useful is the DLSS which uses the tensor cores. (also more cuda cores > RT cores. IMO.)
> 
> I wonder if it is Nvidia only (which i bet) or will OEMs make cards too. The cost is going to be nutty (maybe...) - It doesn't have HBM and no RT cores. Hopefully it is close to the TI pricing (and not the Vs pricing.)


The RTX 6000 has RT cores, so I think it is as you say, just lack of info in that TPU post. The RTX 6000 is 2/3 the cost of the Volta card... so I'm hoping NV brings out the RTX Titan at (much) less than that ratio... hoping.
But the Titan RTX still comes in a bit shy on CUDA and tensor cores compared to the Titan V, if the Quadro line predicts anything. Let's face it, the Titan V is a rough act to follow; if only they enabled SLI on it, damnit.


----------



## Addsome

So it seems if I run the Time Spy Extreme stress test with both my core and memory overclocked I get a driver crash. If I run the same settings but overclock them separately, with the other at 0, I pass both times. Should I reduce my memory or core overclock? Currently running +130/+900 with a max clock of around 2070.

Edit: I pass the stress test with both overclocked if I run the fans at 100%. Usually my fan curve has the fans around 80-90%. What do you guys suggest I do?


----------



## acmilangr

With +140/+700 I get a max of 1995MHz on my Gigabyte Gaming 2080 Ti. Is that normal?


----------



## GraphicsWhore

acmilangr said:


> With +140/+700 I got max 1995mhz with my gigabyte Gaming 2080ti. Is it normal?


If that core offset was actually doing something you'd be topping out way higher. Set it to 0. What is your max then?


----------



## Zurv

acmilangr said:


> With +140/+700 I got max 1995mhz with my gigabyte Gaming 2080ti. Is it normal?


Try a much lower OC, like +100/+500.
Also, what are your temps? It's heat that is going to limit your speed.

I personally only use +130/+600 (any more than that and my Time Spy scores go down).

I'm at about 45C under load and ~2070 to ~2150 (I'm also in SLI).

Also, do you have enough power?

This is a useful wattage calculator:
https://seasonic.com/wattage-calculator
or
https://outervision.com/power-supply-calculator

My system uses almost 1.2KW of power!
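The arithmetic behind those calculators is simple enough to sanity-check by hand. A rough sketch (all wattages below are illustrative guesses, not measured draw):

```python
# Rough PSU sizing: sum per-component estimates, then size the PSU so
# sustained load sits near 80% of its rating. Numbers are illustrative.
parts = {
    "2x RTX 2080 Ti (stock 250W TDP each)": 500,
    "9900K/8700K class CPU, overclocked": 180,
    "Board, RAM, drives, fans, pump": 120,
}
load_w = sum(parts.values())
psu_w = load_w / 0.8  # target ~80% PSU utilization at full load
print(load_w, round(psu_w))  # → 800 1000
```

With raised power limits and an aggressive CPU overclock the real draw climbs well past the TDP figures, which is how an SLI rig ends up near 1.2kW.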


----------



## Jpmboy

Addsome said:


> So it seems if I run the time spy extreme stress test with both my core and memory overclocked I get a driver crash. If I run them at the same setting but overclocked separately with the other option at 0, I passed both times. Should I reduce my memory or core overclock. Currently running 130/900 with a max clock of around 2070.
> 
> Edit: *I pass the stress test with both overclocked if I run fans at 100%*. Usually my fan curve has fans around 80-90%. What do you guys suggest I do?


better cooling, under water the cards would be much happier.


----------



## iamjanco

Jpmboy said:


> oh baby... https://www.techpowerup.com/gpu-specs/titan-rtx.c3311
> where's my buddy @Zurv
> ... yeah, blame me.


Made me chuckle (can't wait to see more).


----------



## cstkl1

latest drivers are garbage

example vram throttle on nvenc

https://youtu.be/teXsGtc-h_0


----------



## Zammin

Jpmboy said:


> takes 1 hour with GSAT to eliminate ram issues as a source of the problem. If you already tested the XMP1 settings with GSAT, HCi or ramtest, it is very unlikely that the ram is causing the fail.
> The card is hitting the 70s with the fans at 100%?


I haven't heard of GSAT but I do have HCI memtest pro, I'll run that overnight tonight and see if I get any errors. Yeah I get 69-70C with the fans at 100%. Case fans aren't going very hard though and it's summer here in Australia right now. FE cooler as well. Hits 75C in time spy extreme with my custom curve. 



bmg2 said:


> Have you folks with gpu problems at stock settings tried getting rid of your cpu overclock? First thing to do if you haven't.


I'm not overclocked on anything except RAM XMP. Gonna try and run the test again this morning with XMP off. 



arrow0309 said:


> If that helps mine didn't pass as well.
> 
> I'll try again after I'll install it the Vector (waiting for last minute change, ordering 2 pairs of quick disconnect).
> It might be the cpu / memory overclock although I've recently reset to XMP values.


Interesting to see a few of us are failing the test. I think since we aren't far off 97%, watercooling will definitely get you there, since your clocks won't drop as much after the first few loops. But we would expect the stock air-cooled card to still pass, right?



Emmett said:


> This is on my other system it's a waterblocked (poorly) asus 2080 ti Turbo. no backplate. still hitting 60C.
> 
> I was just playing with undervolting it at 0.975 it ran at 1900-2000 through run. no mem OC on GPU. slamming power limit still (stock bios)
> 
> 
> The system is 8700k at 4.9 HT ON mem 3200
> 
> 
> I never use XMP


Do you just enter your main timings, speed and voltage manually? I've done that on my other system but enabling XMP changes the same things. 


Thank you to everyone chiming in to help!


----------



## Jpmboy

figured I'd run the TSE stress test at gaming clocks: https://www.3dmark.com/3dm/30225813
I really think these cards are VERY temperature sensitive.
UL white paper:


----------



## kot0005

CallsignVega said:


> Spiriva said:
> 
> 
> 
> My friend got his 2080ti FE card last week. He ran it in his 2nd system (on air) to make sure the card worked good. Then this weekend he put a waterblock on it, and got it installed in to the loop.
> He told me that he used EK´s updated manual when putting on the thermal pads, and putted them as EK showed in the PDF.
> 
> After the card was installed in the loop (in his main system) everything was working good, but now for some odd reason he got coil whine coming from the card while gaming. Which was not there while the 2080ti FE was running with its stock air cooler (in his 2nd system)
> 
> I tought maybe it is becace the card can work harder while beeing cooled by the waterblock, but he said even if he clocks it exactly the same as it was while beeing cooled by the stock cooler (+100 core voltage, 123% power limit, 88c temp target,) then putting the core clock -50mhz to make it clock to 1900mhz as with the air cooler at 100% fan speed, and then mem on +0
> he still gets coil whine with the waterblock.
> 
> Anyone got an idea what could have happen ? Apperently the coil whine isnt that bad, a low pitch "bzzz" sound, but why did it suddenly appear with the waterblock installed ?
> 
> *Hes running stock Nvidia 2080ti FE bios
> 
> 
> 
> Naw, the whine was always there. The fan noise just covered it up.

Yep, top-end cards will always have some whine. It's only bad when the whine is really loud.


----------



## Zammin

Okay, so I disabled XMP and ran Time Spy Extreme again; it still failed the test. Interestingly though, if I restart the test immediately afterward, before the GPU cools down too much, it passes, but only just. Will run HCI MemTest Pro tonight, but I don't think XMP is the issue at this stage, given the results.

Looking at the first result (failed) there is a tiny little drop in GPU load, that's the only time I've seen that in all my time spy extreme tests. May just be a little hiccup, but the load patterns on all the other tests look the same as the second result (passed).


----------



## Zurv

Maybe see if you are stable without video.
http://www.ocbase.com/index.php/download

Occt is a good stress test


----------



## Zammin

Zurv said:


> Maybe see if you are stable without video.
> http://www.ocbase.com/index.php/download
> 
> Occt is a good stress test


I've got AIDA64 and Realbench, never used OCCT. Is there a particular test in OCCT you recommend?

I would be surprised if it's unstable with no overclock on the CPU, RAM or GPU. Everything is currently default.


----------



## Jpmboy

It must be a temperature-induced EC loop that's causing the average frame rate to vary, and it may have little to do with core temperature. The card's VRMs and RAM are not cooled very well by the stock cooler, especially the VRMs; they hit 100C in a Time Spy loop with the air cooler. Maybe a good idea to pop the top and use Fujipoly instead of the stock pads on the VRM section.


----------



## Zammin

Jpmboy said:


> It must be a temperature-induced EC loop that's causing the average frame rate to vary, and it may have little to do with core temperature. The card's VRMs and RAM are not cooled very well by the stock cooler, especially the VRMs; they hit 100C in a Time Spy loop with the air cooler. Maybe a good idea to pop the top and use Fujipoly instead of the stock pads on the VRM section.


Unfortunately I don't have any Fujipoly thermal pads on hand. I do have a water block here, but my intention was to test everything on air for a few weeks to a month before putting it in the loop, in case anything dies and I have to RMA. My custom loop uses hard tubing, so it would be a pain to have to tear the GPU out of the loop for RMA if it died.

100C on the VRMs sounds like a lot; I would have expected an unmodified card to be able to handle itself properly provided there is sufficient airflow. If components are reaching 100C, that sounds like a major oversight by Nvidia. That said, I don't know what safe temps for the VRMs are, but 100C sounds excessive.

Can the temperature of the VRMs and other components be measured by any software? If so I could check and see what's going on.


----------



## Zammin

Also, I'm running a RealBench stress test right now; my 9900K at stock speeds (4.7GHz) is hitting 95C on a be quiet! Dark Rock Pro 4 with the cooler and case fans at 100%, lmao. Bloody hell, this thing is hot with AVX...


----------



## animeowns

iamjanco said:


> Made me chuckle (can't wait to see more).


If the Titan RTX only has 12GB of VRAM I might just skip it altogether, since it won't be that much of an increase over the 2080 Ti. I want something with 32GB, SLI support, and 8K readiness, but at current prices... bring it on, 2019!


----------



## Zammin

30 minutes of Realbench stress test passed no problems. Running OCCT Large Data Set now, but I doubt it'll find any errors. I'll re-enable XMP and run HCI memtest overnight tonight and see what happens. At this point I'm still thinking the time spy stress test failures are likely not related to the RAM or CPU. Hard to know unless something throws an error though.


----------



## Emmett

Zammin said:


> 30 minutes of Realbench stress test passed no problems. Running OCCT Large Data Set now, but I doubt it'll find any errors. I'll re-enable XMP and run HCI memtest overnight tonight and see what happens. At this point I'm still thinking the time spy stress test failures are likely not related to the RAM or CPU. Hard to know unless something throws an error though.



Interesting, I JUST swapped out my 8700k for a 9900k and in doing so updated the BIOS on my Maximus X Formula. I was 4 updates behind, but wanted to make sure the 9900k would be recognized by the board. I re-ran the test and hit my lowest score of 97.0.




I think you are gonna figure this out soon, don't give up.


----------



## Zammin

Emmett said:


> Interesting, I JUST swapped out my 8700k for a 9900k and in doing so updated the bios on my Maximus X formula. I was 4 updates behind.
> but wanted to make sure the 9900k would be recognized by the board. I re-ran the test, and hit my lowest score of 97.0
> 
> 
> 
> 
> I think you are gonna figure this out soon, dont give up.


Interesting, but was that only down 0.7%? That might just be margin of error, but I'm not sure.

Thanks mate! I'm still waiting to hear back from Nvidia, but if I can't find anything wrong it might just be the air cooler and its limitations. What Jpmboy said about the VRMs hitting 100C on the FE has me very concerned, but I have no way of measuring it. HWiNFO64 doesn't display the GPU VRM temp on this card like it did for my Strix 1080 Ti. Part of me thinks it will all straighten out on water, but another part of me is thinking that logically it should all just work correctly and sufficiently out of the box, and if it doesn't then something is wrong.

It's a complicated situation..

BTW, have you tried stress testing your 9900k with AVX yet? I'm interested to see if others are getting the same very high temps at stock. For me, OCCT large data set has been running for over an hour and a half and it's in the 60s and 70s, but it doesn't generate a lot of heat in this test. RealBench hits 95C and AIDA64 hits 100C for a moment before going back down to the 80s and 90s. Really hot for stock on a huge air cooler.


----------



## Nephalem89

Hi, this EVGA XC2 Ultra Gaming 2080 Ti has a semi-custom PCB: https://forums.evga.com/2080ti-xc-2-ultra-gaming-m2880162.aspx — will water blocks for the XC Ultra Gaming 2080 Ti work for it?


----------



## Jpmboy

animeowns said:


> If the Titan RTX only has 12GB of VRAM I might just skip it altogether, since it won't be that much of an increase over the 2080 Ti. I want something with 32GB, SLI support, and 8K readiness, but at current prices... bring it on, 2019!


12GB is plenty for 8K afaik. 32GB... lol, consumer software doesn't even know how to use that much VRAM.


Zammin said:


> 30 minutes of Realbench stress test passed no problems. Running OCCT Large Data Set now, but I doubt it'll find any errors. I'll re-enable XMP and run HCI memtest overnight tonight and see what happens. At this point* I'm still thinking the time spy stress test failures are likely not related to the RAM or CPU*. Hard to know unless something throws an error though.


Agree. Once you've got the base platform nailed down as stable... I'm still betting it's the thermals. Gamers Nexus did a thermal image of the card's components. I used a Fluke IR thermometer on the backside and the VRM section is the hottest thing on the card.


Emmett said:


> Interesting, I JUST swapped out my 8700k for a 9900k and in doing so updated the BIOS on my Maximus X Formula. I was 4 updates behind, but wanted to make sure the 9900k would be recognized by the board. I re-ran the test and hit my lowest score of 97.0.
> I think you are gonna figure this out soon, don't give up.


The frame rate "score" is not a score; I think you are reading too much into that number. After dropping a new CPU into an old(er) platform with a BIOS update you also need to refresh the OS's microcode and chipset drivers. Then get a copy of Latency Monitor and test the rig for issues.


Zammin said:


> Interesting, but was that only down *0.7%? that might be just margin of error*, but I'm not sure.
> 
> Thanks mate! I'm still waiting to hear back from Nvidia, but if I can't find anything wrong it might just be the air cooler and it's limitations. What JPMboy said about the VRMs hitting 100C on the FE has me very concerned, but I have no way of measuring it. HWinfo64 doesn't display VRM temp for the GPU on this card like it did for my Strix 1080Ti. Part of me thinks it will all straighten out on water, but another part of me is thinking that logically it should all just work correctly and sufficiently out of the box, and if it doesn't then something is wrong.
> 
> It's a complicated situation..
> 
> BTW, have you tried stress testing your 9900k with AVX yet? I'm interested to see if others are getting the same very high temps at stock. For me OCCT large data sets has been running for over an hour and a half and it's in the 60s and 70s but it doesn't generate a lot of heat in this test. *Realbench hits 95C and AIDA64 hits 100C for a moment before going back down to the 80s and 90s.* Really hot for stock on a huge air cooler..


This is happening at stock clocks and voltage?


----------



## Zammin

Jpmboy said:


> This is happening at stock clocks and voltage?


Yeah, stock clocks and voltage with a Dark Rock Pro 4 dual-tower air cooler (comparable to the NH-D15). Everything in the BIOS is left on auto. When running AVX stress tests it pulls about 1.250V. When running non-AVX loads like Cinebench it only pulls 1.144V and the temps are in the 70s.

I watched Hardware Unboxed's review of the 9900k and they got 84C when running the Blender test on a NH-D15 at stock clocks and voltage. I'm not sure how Blender compares to RealBench and AIDA64 in terms of heat generation, but I've never had a CPU run so hot with AVX workloads despite having a huge cooler. Makes me wonder how much better it will be on water and if I can OC it at all without temps getting out of hand.

It's totally fine when gaming, the temps are about the same as my watercooled 8700k. But when all 8 cores are being heavily stressed in synthetic stress tests it starts to get out of hand.


----------



## toncij

Well, you can't get around the fact that 9900K has 2 more cores. That's 33% more cores and 33+% more heat in the same spot. I'd suggest you go with an AIO if not custom loop, if you're going to run 9900K.


----------



## Zammin

toncij said:


> Well, you can't get around the fact that 9900K has 2 more cores. That's 33% more cores and 33+% more heat in the same spot. I'd suggest you go with an AIO if not custom loop, if you're going to run 9900K.


My intention is to install it in my custom loop, it's only being air cooled temporarily while I test stuff. But I would have expected it to be okay at stock clocks on one of the best air coolers around. I guess I expected better from the soldered IHS.

I wonder if I didn't do the mounting screws up tight enough on the cooler..


----------



## Jpmboy

Zammin said:


> Yeah, stock clocks and voltage with a Dark Rock Pro 4 dual-tower air cooler (comparable to the NH-D15). Everything in the BIOS is left on auto. When running AVX stress tests it pulls about 1.250V. When running non-AVX loads like Cinebench it only pulls 1.144V and the temps are in the 70s.
> 
> I watched Hardware Unboxed's review of the 9900k and they got 84C when running the Blender test on a NH-D15 at stock clocks and voltage. I'm not sure how Blender compares to RealBench and AIDA64 in terms of heat generation, but I've never had a CPU run so hot with AVX workloads despite having a huge cooler. Makes me wonder how much better it will be on water and if I can OC it at all without temps getting out of hand.
> 
> It's totally fine when gaming, the temps are about the same as my watercooled 8700k. But when all 8 cores are being heavily stressed in synthetic stress tests it starts to get out of hand.


Yeah, so with everything on auto, nearly every board manufacturer selects an LLC level that will do just that: raise the vcore under load (I assume you have no AVX offsets in the BIOS). It is always best to allow for some level of vdroop on the voltage rail that LLC is acting on; it's better for your chip in the long run.
What MB?


----------



## Zammin

Jpmboy said:


> yeah, so with everything on auto, nearly every board manufacturer selects an LLC level that will do just that, raise the vcore under load (I assume you have no avx offsets in bios). it is always best to allow for some level of vdroop of the voltage rail that LLC is acting on. It's better for your chip in the long run.
> what MB?


Maximus XI Formula. And yeah, everything on auto. The only thing it does differently to Intel's spec, as far as I am aware, is that it maintains 4.7GHz instead of downclocking to try and adhere to the "95W" TDP. If I disable MCE it will immediately clock all the way down to 4GHz right after starting AIDA64's stress test.

At stock the LLC is only level 2. Interestingly when I was playing around with some manual settings to see how the CPU would behave, I found that even on LLC 6 and 7 there is still vdroop. On my 8700k and Maximus X Code there is no vdroop at LLC 6. Something is different about LLC behavior this gen. Or maybe it's just that the power draw is so much higher we need higher levels of LLC when overclocking. I don't really know.


----------



## Noufel

This thread has transformed into "how to OC your CPU and get stable frequencies"


----------



## Krzych04650

Finally managed to tune my 2080 Ti Sea Hawk X curve to sustain 2100 MHz even for prolonged load with the 380W BIOS.

I have the curve set to 2145 MHz 1.037V, after starting the load it immediately drops to 2130 and then also pretty quickly to 2115 where it is stable at this voltage, and then to 2100 later on. Temps stabilizing around 50-52 degrees with 100% fan speed.

1.037V seems to be the sweetspot for 380W, in Time Spy Extreme Stress Test I am often using 120-124% PL while 126% is max, sustaining 2100 MHz almost perfectly, very rarely dropping to 2085 MHz when I hit max PL.

Time Spy Extreme Stress Test - https://www.3dmark.com/tsst/252417

Time Spy score - https://www.3dmark.com/3dm/30243565?

16233 graphics score. No idea how people are getting 16500-16600, I cannot get it even when I run unstable 2145-2160 curve.


----------



## GraphicsWhore

So I just realized since DX12 will have to be enabled in BFV for RTX support coming in a couple of days, those of us with the stuttering issue when DX12 is on will have to choose between ray-tracing or smooth gameplay. Really hope that's not the case.



Krzych04650 said:


> Finally managed to tune my 2080 Ti Sea Hawk X curve to sustain 2100 MHz even for prolonged load with the 380W BIOS.
> 
> I have the curve set to 2145 MHz 1.037V, after starting the load it immediately drops to 2130 and then also pretty quickly to 2115 where it is stable at this voltage, and then to 2100 later on. Temps stabilizing around 50-52 degrees with 100% fan speed.
> 
> 1.037V seems to be the sweetspot for 380W, in Time Spy Extreme Stress Test I am often using 120-124% PL while 126% is max, sustaining 2100 MHz almost perfectly, very rarely dropping to 2085 MHz when I hit max PL.
> 
> Time Spy Extreme Stress Test - https://www.3dmark.com/tsst/252417
> 
> Time Spy score - https://www.3dmark.com/3dm/30243565?
> 
> 16233 graphics score. No idea how people are getting 16500-16600, I cannot get it even when I run unstable 2145-2160 curve.


16606 is my max graphics in TS with Galax BIOS. That's an EVGA XC Gaming in EK block/backplate. Max power, max voltage and I think that run was manual OC of +170/+1000 in AfterBurner which topped out at 2160/8000.

I hope I can still squeeze a bit more out of it. At some point though doing small incremental offsets and having to re-run the benchmark each time becomes tedious.

Related question: it's not possible to sort rankings by just graphics score in any of the benchmarks right? It always goes by total score? I'm interested to see where I stand in just graphics.


----------



## Krzych04650

GraphicsWhore said:


> So I just realized since DX12 will have to be enabled in BFV for RTX support coming in a couple of days, those of us with the stuttering issue when DX12 is on will have to choose between ray-tracing or smooth gameplay. Really hope that's not the case.
> 
> 
> 
> 16606 is my max graphics in TS with Galax BIOS. That's an EVGA XC Gaming in EK block/backplate. Max power, max voltage and I think that run was manual OC of +170/+1000 in AfterBurner which topped out at 2160/8000.
> 
> I hope I can still squeeze a bit more out of it. At some point though doing small incremental offsets and having to re-run the benchmark each time becomes tedious.
> 
> Related question: it's not possible to sort rankings by just graphics score in any of the benchmarks right? It always goes by total score? I'm interested to see where I stand in just graphics.


Here you can sort scores for Time Spy and Time Spy Extreme either by CPU or GPU score (and number of GPUs): https://www.3dmark.com/hall-of-fame-2/timespy+graphics+score+performance+preset/version+1.0/1+gpu


----------



## Jpmboy

Noufel said:


> this thread has transformed to "how to OC your cpu and get stable frequencies "


yep - gone way too OT


----------



## Nico67

Zammin said:


> My intention is to install it in my custom loop, it's only being air cooled temporarily while I test stuff. But I would have expected it to be okay at stock clocks on one of the best air coolers around. I guess I expected better from the soldered IHS.
> 
> I wonder if I didn't do the mounting screws up tight enough on the cooler..


I'd be inclined to get it under water ASAP. This stuff is just too hot to push too hard on air.


I normally run a GPU for a day or two on air to make sure it's not dead out of the box or doesn't fail very quickly (had a 7970 that blew sparks out of the edge of the PCB in a day or so, just stock gaming  )
CPU I put a block on straight away, but just test stuff with a simple rad setup.

I would suspect that it improves the chances of survival if you don't bake stuff out of the box.


----------



## Esenel

Founders: +145 Core & + 1000 Memory=> Passed


----------



## Zammin

Noufel said:


> this thread has transformed to "how to OC your cpu and get stable frequencies "





Jpmboy said:


> yep - gone way too OT


Haha, sorry guys, didn't mean to talk about it for that long. I asked about it in the 9900k thread in the Intel CPUs section but no one answered me. Just a bit concerned about it, and I appreciate your answers. 



Nico67 said:


> I'd be inclined to get it under water ASAP. This stuff is just too hot to push too hard on air.
> 
> 
> I normally run a GPU for a day or two on air to make sure it's not dead out of the box or doesn't fail very quickly (had a 7970 that blew sparks out of the edge of the PCB in a day or so, just stock gaming  )
> CPU I put a block on straight away, but just test stuff with a simple rad setup.
> 
> I would suspect that it improves the chances of survival if you don't bake stuff out of the box.


Yeah, in terms of the GPU I'm in two minds about this and have been since I first ordered. Part of me thinks it's better to just put it under water for the reason you mentioned, and the other part of me thinks it's better to let it run on air for a while so I don't have to tear the loop apart and re-fit the air cooler to RMA the thing if it dies. If Nvidia made the thing it should run properly and not kill itself when running on its own air cooler, especially at this price. If the air cooler is killing the cards, Nvidia needs to fix that. (Not making claims, just a case of what if.)

In the case of the CPU it will survive for now, just need it on air while the GPU is on trial 

A $1900 AUD card should work safely by itself without requiring modification from the user, IMO anyway.



Esenel said:


> Founders: +145 Core & + 1000 Memory=> Passed


Nice. You on air or water? That's what I would've expected out of mine.. :/

Thanks everyone for your help so far (on topic and off topic stuff )


----------



## Zammin

Also I've ruled out XMP as an issue for not passing Time Spy Extreme. Here are the results from running HCI MemTest Pro overnight:


----------



## Jpmboy

Zammin said:


> Haha sorry guys, didn't mean to talk about it for that long. I asked about it in the 9900k thread in the Intel CPUs section but no-one answered me.. Just a bit concerned about it and I appreciate your answers.


bring it *here* for now... I have a Max XI E and 9700K in boxes here, will make an Asus Z390 thread as soon as I set it up...


----------



## Esenel

Zammin said:


> Esenel said:
> 
> 
> 
> Founders: +145 Core & + 1000 Memory=> Passed
> 
> 
> 
> Nice. You on air or water? That's what I would've expected out of mine.. 😕

Water. Temp max 46°C.
Founders Bios again.


----------



## Zammin

Jpmboy said:


> bring it *here* for now... I have a Max XI E and 9700K in boxes here, will make an Asus Z390 thread as soon as I set it up...


Sounds good. Looking forward to the new support thread. Thanks.



Esenel said:


> Water. Temp max 46°C.
> Founders Bios again.


Ah, nice. It would seem that watercooled cards are pretty well always going to pass, especially if the reason for the air cooled FE's failing the test is temperature related (VRM or GPU temp variance, I'm not sure).

I'm just very surprised the air cooled card fails the test. My 1080Ti (Strix) would always pass by a large margin (usually around 99% consistency) but 96% seems to be what I get most of the time with the 2080 Ti unless I restart the test before the card can fully cool down.


----------



## Noufel

Zammin said:


> Haha sorry guys, didn't mean to talk about it for that long. I asked about it in the 9900k thread in the Intel CPUs section but no-one answered me.. Just a bit concerned about it and I appreciate your answers.
> 
> 
> 
> Yeah, in terms of the GPU I'm in two minds about this and have been since I first ordered. Part of me thinks it's better to just put it under water for the reason you mentioned, and the other part of me thinks it's better to let it run on air for a while so I don't have to tear the loop apart and re-fit the air cooler to RMA the thing if it dies. If Nvidia made the thing it should run properly and not kill itself when running on its own air cooler, especially at this price. If the air cooler is killing the cards, Nvidia needs to fix that. (Not making claims, just a case of what if.)
> 
> In the case of the CPU it will survive for now, just need it on air while the GPU is on trial
> 
> A $1900 AUD card should work safely by itself without requiring modification from the user, IMO anyway.
> 
> 
> 
> Nice. You on air or water? That's what I would've expected out of mine.. :/
> 
> Thanks everyone for your help so far (on topic and off topic stuff )


No problem bro, we are a family and we are all concerned with getting the best out of our hardware, especially when it costs that much.


----------



## ESRCJ

Has anyone tried flashing any other BIOS onto the Strix 2080 Ti? Since it features a custom PCB, could a BIOS flash for a reference PCB be problematic? I was hoping for the 380W BIOS.


----------



## allexffs

*EEPROM Program failed*

Can someone please explain why I get this error? I have tried both versions of NVFlash in the original post. My card is a Palit 2080 Ti Gaming Pro (not the Gaming Pro OC).

I get a different error with each version of NVFlash:

EEPROM Program failed.
PCI subsystem ID mismatch




Tried doing --protectoff first as well.



Added 2 text files with the whole CMD output, 1 for each version of NVFlash.


----------



## GraphicsWhore

allexffs said:


> Can someone please explain why i get this error, i have tried both version of NVflash in the original Post. My Card is a Palit 2080ti Gaming Pro ( Not the Gaming Pro OC)
> 
> Get different errors in each version of NVflash.
> 
> Tried doing --protectoff first aswell
> 
> Added 2 textfiles, 1 for each version of NVflash


Two commands I've always used in sequence, including on current card, and never had an issue with are:

nvflash64 --protectoff
nvflash64 -6 ROMNAME.rom
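
For anyone new to flashing, a fuller sequence sketch (backing up first, then unlocking and flashing) looks roughly like the below. Treat it as an outline, not gospel: flag spellings can differ between the patched NVFlash builds floating around, so check your copy's `--help` output, and `ROMNAME.rom` / `backup.rom` are placeholder filenames.

```shell
# Run from an elevated command prompt in the folder containing nvflash64.

# 1) Save the card's current BIOS so you can recover via the steps above.
nvflash64 --save backup.rom

# 2) Disable the EEPROM write protection before writing anything.
nvflash64 --protectoff

# 3) Flash the new ROM. The -6 flag overrides the "PCI subsystem ID
#    mismatch" check, which is expected when cross-flashing another
#    vendor's BIOS (e.g. the Galax 380W ROM onto a different card).
nvflash64 -6 ROMNAME.rom
```

Reboot after the flash completes; if the screen stays black on a dual-BIOS card, the switch-and-reflash recovery described later in the thread applies.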


----------



## allexffs

GraphicsWhore said:


> Two commands I've always used in sequence, including on current card, and never had an issue with are:
> 
> nvflash64 --protectoff
> nvflash64 -6 ROMNAME.rom


Tried that too, but same errors. I did read that non-binned chips can't be flashed? Pretty sure the Palit Gaming Pro is not binned.


----------



## lowrider_05

I have the Palit Gaming Pro "OC" and can flash, but I had to send back the Palit Gaming "Dual" because it was locked and unflashable. Maybe the Gaming Pro non-OC is locked too.


----------



## chrisk2305

Hi Guys,

Can you tell me what the best BIOS I should flash on a 2080 TI FE to increase the power limit? Is the new Gigabyte BIOS the way to go? Thanks, Chris


----------



## acmilangr

So the maximum I can get on my 2080 Ti is 1995MHz.
It seems I lost the lottery.

Gigabyte Gaming 2080ti


----------



## CallsignVega

So I have to run my 2080 Ti ~200 MHz higher clock to equal my Titan V. Kinda disappointed really. But the frame-times seem smoother to me on the 2080 Ti. Maybe placebo. I do know the Zotac AMP fans on 100% are LOUD. Good thing my EK block arrives soon.


----------



## Garrett1974NL

CallsignVega said:


> So I have to run my 2080 Ti ~200 MHz higher clock to equal my Titan V. Kinda disappointed really. But the frame-times seem smoother to me on the 2080 Ti. Maybe placebo. I do know the Zotac AMP fans on 100% are LOUD. Good thing my EK block arrives soon.


Well it's half the price of the Titan V... so I'd be happy with that


----------



## toncij

Had to check twice to see if I'm on a 2080Ti or 9900K thread... 

Palit GamingPro OC can be flashed; no problems so far - tried GB-366W, FTW3-735W and GALAX-380W. All fine.


----------



## GAN77

toncij said:


> FTW3-735W All fine.



Really? ))


----------



## Spiriva

416.94 WHQL

Provides the optimal gaming experience for Battlefield V, Fallout 76, and Hitman 2

https://www.nvidia.com/Download/driverResults.aspx/139905/en-us


----------



## rush2049

Whats going on here?


----------



## Nexures

Anyone know if Palit GPUs are "A" variants? PCB pics have paste on the die ^^


----------



## Okt00

rush2049 said:


> Whats going on here?


Beta?


----------



## kx11

you people should try MSI AB scanline sync , it might beat Gsync / Freesync / Vsync , not to mention how open it is to editing/modding


----------



## Swaggerfeld

CallsignVega said:


> So I have to run my 2080 Ti ~200 MHz higher clock to equal my Titan V. Kinda disappointed really. But the frame-times seem smoother to me on the 2080 Ti. Maybe placebo. I do know the Zotac AMP fans on 100% are LOUD. Good thing my EK block arrives soon.


Haha, I thought you were trying to avoid water cooling? Did you end up going with SLI'd Zotacs?


----------



## CallsignVega

Swaggerfeld said:


> Haha, I thought you were trying to avoid water cooling? Did you end up going with SLI'd Zotacs?


These cards run so hot you basically have to. No, SLI is not for me anymore. The juice isn't worth the squeeze IMO.


----------



## chrisk2305

chrisk2305 said:


> Hi Guys,
> 
> Can you tell me what the best BIOS I should flash on a 2080 TI FE to increase the power limit? Is the new Gigabyte BIOS the way to go? Thanks, Chris


Anybody? Thx


----------



## The L33t

Folks, "we" can now use RTX. https://twitter.com/EA_DICE/status/1062635899786276864?s=20

Microsoft re-released the "October" update with the pipework (DirectX) to support RTX, Dice has updated the game and we now have drivers with specific BF:V tweaks from NVIDIA. So.. All is in place.

Go play.


----------



## ESRCJ

I can confirm the Galax BIOS does not work for the Strix. The screen went black and now that BIOS is no longer functional (this card has a dual BIOS switch). Is there any way I can recover the BIOS?


----------



## Spiriva

Jaqub (DICE)
‏
In Battlefield V on PC RTX ray tracing can now be enabled with graphics cards that support this function!


----------



## The L33t

gridironcpj said:


> I can confirm the Galax BIOS does not work for the Strix. The screen went black and now that BIOS is no longer functional (this card has a dual BIOS switch). Is there any way I can recover the BIOS?


Did you back up? 

If you did: 

- Boot with the working BIOS. 
- Switch to the non-working BIOS while still running. 
- Flash the backup. 

You're done.


----------



## ESRCJ

The L33t said:


> Did you back up?
> 
> If you did:
> 
> - Boot with the working BIOS.
> - Switch to the non-working BIOS while still running.
> - Flash the backup.
> 
> You're done.


Yeah I saved a copy of the original BIOS prior to flashing. I can just flip the switch while the system is running?

Edit: This method was successful. Thanks! I guess I'll have to shunt mod this card then when my BP waterblock comes in since BIOS flashing seems to be a no-go for the Strix. I wish Asus had a BIOS with a higher power limit. This one is capped at 325W.


----------



## Spiriva

gridironcpj said:


> I can confirm the Galax BIOS does not work for the Strix. The screen went black and now that BIOS is no longer functional (this card has a dual BIOS switch). Is there any way I can recover the BIOS?


Does the Strix have the same number of DP ports? I remember flashing my old EVGA 1080 with another BIOS, and then I had to change DP ports to get it to work, since the BIOS I used didn't have as many DPs.


----------



## ESRCJ

Spiriva said:


> Does the Strix have the same number of DP ports? I remember flashing my old EVGA 1080 with another BIOS, and then I had to change DP ports to get it to work, since the BIOS I used didn't have as many DPs.


It has 2 DPs, 2 HDMIs, and 1 USB-C. Not sure if this is in-line with the Galax card. I figured the issue was that the Strix has a custom PCB.


----------



## Nexures

Anyone have any info on "A" revisions of Palit and EVGA GPUs? For the 2080 and 2080 Ti?


----------



## The L33t

Nexures said:


> anyone has any info on "A" revisions of palit and evga gpus ? for 2080 and 2080ti ?


What's your doubt in particular? 

The rules are: 

Overclocked from factory = A

Stock from factory = non-A

Notice: Nvidia's own Founders model is overclocked from the factory and is, as such, A.

EVGA Black Edition ($999 USD model) = non-A


----------



## ESRCJ

I managed to flash the Galax 380W BIOS and the FTW3 BIOS onto my Strix. Thanks Spiriva for the suggestion on the Displayports. I switched the port and everything worked flawlessly. I'm liking the FTW3 BIOS the most so far.


----------



## Nexures

The L33t said:


> Whats your doubt in particular?
> 
> The rules are..
> 
> Overclocked from factory = A
> 
> Stock from factory = non-A
> 
> Notice: Nvidia own Founders model is overclocked from factory and as such, A.
> 
> Evga black edition (999 usd model) = non-A


Is that an actual rule though? Is it confirmed that every OC version of the GPU is A? Because EVGA told me: "While we would expect most of our products with factory clocks that are overclocked to be the "A" GPU we do not guarantee any specific GPU in our specs sheet so we would not guarantee any specific GPU on any specific part"

And you said "EVGA Black Edition = non-A", but the picture from here has the A on the die: https://imgur.com/TppHGDP


----------



## Spiriva

gridironcpj said:


> I managed to flash the Galax 380W BIOS and the FTW3 BIOS onto my Strix. Thanks Spiriva for the suggestion on the Displayports. I switched the port and everything worked flawlessly. I'm liking the FTW3 BIOS the most so far.


Cool that it worked  I'm using the FTW3 373W BIOS too. Both the 373W and the 380W are the same while gaming, but in 3DMark for some reason I got 10MHz higher with the 373W BIOS: 2160MHz instead of 2150MHz, stable through the whole runs. Although I've never seen the card pull more than ~330W while gaming, and around 350W while running 3DMark.


----------



## Jpmboy

chrisk2305 said:


> Hi Guys,
> 
> Can you tell me what the best BIOS I should flash on a 2080 TI FE to increase the power limit? Is the new Gigabyte BIOS the way to go? Thanks, Chris


The Galax 380W BIOS. I've had my card on this BIOS since putting it under water. If you are using the stock cooler, don't bother flashing.
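For anyone following along, the cross-flashing procedure discussed in this thread is roughly the sequence below, a sketch of NVIDIA's nvflash command-line tool. The file names are placeholders, and flag behavior can differ between nvflash versions, so double-check against your version's help output before flashing:

```shell
# Save a backup of the card's current BIOS first
nvflash --save original.rom

# Disable the EEPROM write protection
nvflash --protectoff

# Flash the new ROM; -6 overrides the PCI subsystem ID mismatch warning
# you hit when cross-flashing another vendor's BIOS (e.g. the Galax 380W)
nvflash -6 galax_380w.rom
```

After a reboot, verify the new power limit in GPU-Z before pushing clocks.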


CallsignVega said:


> So I have to run my 2080 Ti ~200 MHz higher clock to equal my Titan V. Kinda disappointed really. But the frame-times seem smoother to me on the 2080 Ti. Maybe placebo. I do know the Zotac AMP fans on 100% are LOUD. Good thing my EK block arrives soon.


Yeah, I've measured the same effect in Folding@Home also. The 2080 Ti is using 1.5x the wattage to keep pace in anything using CUDA.


Garrett1974NL said:


> Well it's half the price of the Titan V... so I'd be happy with that


And twice the power consumption. But yeah, the Titan V is expensive... and an RTX Titan will be out soon.


Spiriva said:


> 416.94 WHQL
> 
> Provides the optimal gaming experience for Battlefield V, Fallout 76, and Hitman 2
> 
> https://www.nvidia.com/Download/driverResults.aspx/139905/en-us


dead link?


gridironcpj said:


> Yeah I saved a copy of the original BIOS prior to flashing. I can just flip the switch while the system is running?
> 
> Edit: This method was successful. Thanks! I guess I'll have to shunt mod this card then when my BP waterblock comes in since BIOS flashing seems to be a no-go for the Strix. I wish Asus had a BIOS with a higher power limit. This one is capped at 325W.


When you open up the card, take a picture of the die. Let's see if it is a non-A variant.


Spiriva said:


> Does the strix have the same amount of DP ports ? I remember flashing my old evga-1080 with another bios, and then i had to change DP to get it to work, since the bios i used didnt have as many DP.


:thumb:
The ole... "count the display port trick".


----------



## The L33t

Nexures said:


> The L33t said:
> 
> 
> 
> Whats your doubt in particular?
> 
> The rules are..
> 
> Overclocked from factory = A
> 
> Stock from factory = non-A
> 
> Notice: Nvidia own Founders model is overclocked from factory and as such, A.
> 
> Evga black edition (999 usd model) = non-A
> 
> 
> 
> is that an actual rule tho? is it confirmed that every oc version of the gpu is A ? coz evga has told me: "While we would expect most of our products with factory clocks that are overclocked to be the "A" GPU we do not guarantee any specific GPU in our specs sheet so we would not guarantee any specific GPU on any specific part"
> 
> and you said"Evga black edition=non a" but the picture from here has the A on the die https://imgur.com/TppHGDP

Rules are made to be broken! Now, seriously... 

What was said was that in the beginning of supply, Nvidia was only supplying A-grade GPUs, so every single model seen was A, even the stock-clocked ones.

So, no rule against using an A grade in a stock clock situation. 

What is locked is factory OC on non-A chips. Only software OC is possible.

That picture you posted has many EVGA models quoted underneath, meaning it was posted to show the PCB layout; it's not necessarily a Black Edition. But as I said, nothing stops EVGA from putting an A-grade chip on any card. It's the other way around.

If you want A guaranteed, buy an overclocked-from-factory card, period.


----------



## Nexures

The L33t said:


> Rules are made to be broken! Now, seriously...
> 
> What was said was that in the beginning of supply, Nvidia was only supplying A-grade GPUs, so every single model seen was A, even the stock-clocked ones.
> 
> So, no rule against using an A grade in a stock clock situation.
> 
> What is locked is factory OC on non-A chips. Only software OC is possible.
> 
> That picture you posted has many EVGA models quoted underneath, meaning it was posted to show the PCB layout; it's not necessarily a Black Edition. But as I said, nothing stops EVGA from putting an A-grade chip on any card. It's the other way around.


That's even worse, though. So at the end of the day, it's impossible to buy a GPU knowing it is the A revision? Completely no way to find out.


----------



## Spiriva

Jpmboy said:


> dead link?



It didn't work?

It can be downloaded here too; https://www.guru3d.com/files-details/geforce-416-94-whql-driver-download.html


----------



## The L33t

Nexures said:


> The L33t said:
> 
> 
> 
> Rules are made to be broken! Now, seriously...
> 
> What was said was that in the beginning of supply, Nvidia was only supplying A-grade GPUs, so every single model seen was A, even the stock-clocked ones.
> 
> So, no rule against using an A grade in a stock clock situation.
> 
> What is locked is factory OC on non-A chips. Only software OC is possible.
> 
> That picture you posted has many EVGA models quoted underneath, meaning it was posted to show the PCB layout; it's not necessarily a Black Edition. But as I said, nothing stops EVGA from putting an A-grade chip on any card. It's the other way around.
> 
> 
> 
> thats even worse tho. so at the end of the day, its impossible to buy a gpu knowing it is A revision? completely no way to find out

It's not, it's very easy. 

If you buy an overclocked-from-factory model, you have A grade guaranteed! The only chance of getting a non-A is on stock-clocked models!

How can it be simpler than that?


----------



## Nexures

The L33t said:


> It's not, it's very easy.
> 
> If you buy a overclocked from factory model, you have A grade guaranteed! The only chance of getting a non-A is on stock clocked models!
> 
> How can it be simpler than that?


Guaranteed by who, though? That's what I'm trying to find out, haha. How am I guaranteed if even EVGA themselves have told me that it's not guaranteed?

Also, do you know any other sources/links on the PCBs of Palit, or other GPUs for that matter? I want to make sure that the GPU I'm buying is a reference PCB.


----------



## Jpmboy

Spiriva said:


> It didnt work ?
> 
> It can be downloaded here too; https://www.guru3d.com/files-details/geforce-416-94-whql-driver-download.html


No, sorry, yours was fine (I had DL'd the driver already)... it was the Twitter link with the "page not found". lol - too many "multiquotes" for this early in the morning.


----------



## ESRCJ

Wait, the Strix should have an A-variant, correct? Have people reported a non-A variant in the Strix?

Also, I've run a few benchmarks and despite having consistently higher clocks and hitting around 370W with the FTW3 BIOS, my average FPS is lower. This is rather odd.


----------



## The L33t

Nexures said:


> The L33t said:
> 
> 
> 
> It's not, it's very easy.
> 
> If you buy a overclocked from factory model, you have A grade guaranteed! The only chance of getting a non-A is on stock clocked models!
> 
> How can it be simpler than that?
> 
> 
> 
> guaranteed by who tho thats what im trying to find out haha. how am i guaranteed if even evga themselves have told me that its not guaranteed??
> 
> also, do you know any other sources/links on pcb of palit, or other gpus for that matter? i wanna make sure that the gpu im buying is a reference pcb

For a reference pcb list check for the watercool.de waterblock compatibility list. 

As for what EVGA says, well, these kinds of things are never fully disclosed or advertised... Never.

What a manufacturer guarantees you is the clocks mentioned in the spec sheet and nothing else! A or non-A is of no consequence.

It's like wanting them to list the resistor brand, capacitors, LED manufacturer and so on! It's between them and Nvidia. Period.

We can take advantage of such knowledge or not. It's up to you.

And no one has seen a single overclocked-from-factory card with a non-A chip. Do with that info what you want..


----------



## kot0005

Is no one playing BF V with RTX? I want to see the performance. Don't have the game downloaded, unfortunately..


----------



## BigMack70

kot0005 said:


> Is no one playing BF V with RTX? I want to see the performance. Don't have the game downloaded, unfortunately..


Techpowerup has initial impressions










It's basically as we thought... 1080p 60fps is the target for ray tracing. I find 1080p unacceptable on a GPU like this; the hit to image quality is too great compared to 4k, and I won't be using it.

Guru3d is getting sub-30fps at 4k with DXR ultra. The idea that ray tracing is playable on these GPUs is (and always has been) a complete joke. Ray Tracing is 1-2 GPU architectures away from being usable.


----------



## Nexures

The L33t said:


> For a reference pcb list check for the watercool.de waterblock compatibility list.
> 
> As for what EVGA says, well, these kinds of things are never fully disclosed or advertised... Never.
> 
> What a manufacturer guarantees you is the clocks mentioned in the spec sheet and nothing else! A or non-A is of no consequence.
> 
> It's like wanting them to list the resistor brand, capacitors, LED manufacturer and so on! It's between them and Nvidia. Period.
> 
> We can take advantage of such knowledge or not. It's up to you.
> 
> And no one has seen a single overclocked-from-factory card with a non-A chip. Do with that info what you want..


thank you bby


----------



## kot0005

BigMack70 said:


> Techpowerup has initial impressions
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> It's basically as we thought... 1080p 60fps is the target for ray tracing. I find 1080p unacceptable on a GPU like this; the hit to image quality is too great compared to 4k, and I won't be using it.
> 
> Guru3d is getting sub-30fps at 4k with DXR ultra. The idea that ray tracing is playable on these GPUs is (and always has been) a complete joke. Ray Tracing is 1-2 GPU architectures away from being usable.


There is something wrong... the settings don't seem to do anything. I was watching a streamer and he was getting 80-120 fps on a 2080 with ultra settings, so a 2080 Ti doing the same fps seems like a bug or poor optimization.


----------



## alanthecelt

Do we have DLSS in BF V yet?


----------



## GraphicsWhore

kot0005 said:


> There is something wrong... the settings don't seem to do anything. I was watching a streamer and he was getting 80-120 fps on a 2080 with ultra settings, so a 2080 Ti doing the same fps seems like a bug or poor optimization.


What "settings"? DXR definitely does something. This is the full comparison:


----------



## ESRCJ

Is 1.05V the voltage limit for these cards? I can't seem to manually crank it up any higher than that. Although I swear I saw someone here a while back with their voltage at 1.095V.


----------



## GraphicsWhore

gridironcpj said:


> Is 1.05V the voltage limit for these cards? I can't seem to manually crank it up any higher than that. Although I swear I saw someone here a while back with their voltage at 1.095V.


No. 

I hit 1.093. Are you setting voltage slider to max?


----------



## ESRCJ

GraphicsWhore said:


> gridironcpj said:
> 
> 
> 
> Is 1.05V the voltage limit for these cards? I can't seem to manually crank it up any higher than that. Although I swear I saw someone here a while back with their voltage at 1.095V.
> 
> 
> 
> No.
> 
> I hit 1.093. Are you setting voltage slider to max?

Yeah and 1.05V is as high as it goes, even when I force a higher voltage on the VF curve in Afterburner.


----------



## Pepillo

https://forums.geforce.com/default/...ries/rtx-2080-ti-founders-edition-contact-us/

Does anyone understand what this means?

"Limited test escapes" ??????????


----------



## vmanuelgm

Video of my 2080 Ti in BF V, RT enabled @ 2560x1080








With RT on low, the game reaches nice FPS...


----------



## GraphicsWhore

vmanuelgm said:


> Video of my 2080Ti in BTV RT enabled @2560x1080p
> 
> 
> https://youtu.be/xs5-is87wfw
> 
> 
> With RT in low the game reaches nice FPS...


Thanks for posting. Yeah that looks good and I think for a game where you're moving around frantically it'd be hard to notice the difference between DXR low and ultra.

Looks like you're using a 21:9 display? That's the resolution I'll probably have to go down to as well (my native is 3440x1440). I'm interested to see if "High" DXR will provide a good enough experience to stay at that native res. Seems like video settings don't scale at all once DXR is enabled, so even if I turned game graphics down to high it won't make a difference, but I'll test tonight.


----------



## vmanuelgm

GraphicsWhore said:


> Thanks for posting. Yeah that looks good and I think for a game where you're moving around frantically it'd be hard to notice the difference between DXR low and ultra.
> 
> Looks like you're using a 21:9 display? That's the resolution I'll probably have to go down to as well (my native is 3440x1440). I'm interested to see "High" DXR will provide a good enough experience to stay at that native res. Seems like video settings don't scale at all once DXR is enabled so even if I turned game graphics down to high it won't make a difference but I'll test tonight.



My native is also 3440x1440, Acer X34A, but I tested 2560x1080 thinking it is the closest resolution to 1080p at 21:9...


----------



## GraphicsWhore

vmanuelgm said:


> My native is also 3440x1440p, Acer x34a, but I tested 2560x1080 thinking it is the closest resolution to 1080p at 21:9...


Yes I think that's our only option.

Are you in front of your PC now? If so, do you mind doing a couple of FPS tests at 3440x1440? 

Game settings: Ultra
DXR: Low

Game Settings: High
DXR: Ultra

Thanks.


----------



## kx11

RAY TRACING @ 4k is fun


----------



## PharmingInStyle

So how do textures look in 1080p resolution in BF5 with ray tracing? Can't wait to find out because if you all are ok with it then I'll pop for a 2080 Ti AIB. Right now I only play games in 4k. But maybe willing to lower it for RT. Btw BF5 is quite the stunner visually in 4k imo.


----------



## joshpsp1

Overclockers UK have a singular Aorus Xtreme left in stock for £1289.99. My preorder will arrive tomorrow, will report back with some benchmarks.


----------



## GraphicsWhore

Anyone on SLI with BFV want to post some numbers for us?


----------



## Jbravo33

GraphicsWhore said:


> Anyone on SLI with BFV want to post some numbers for us?


No support for SLI yet : (


----------



## iamjanco

Pepillo said:


> https://forums.geforce.com/default/...ries/rtx-2080-ti-founders-edition-contact-us/
> 
> Does anyone understand what does this mean?
> 
> "Limited test escapes" ??????????


It means *they weren't testing enough samples* in the production runs they were releasing for distribution/sale. IOW, poor quality control management.


----------



## BigMack70

GraphicsWhore said:


> Anyone on SLI with BFV want to post some numbers for us?


Microstutter is so bad on a single card there's no way multiple cards will be playable even if the average framerate scales well


----------



## Zurv

BigMack70 said:


> Techpowerup has initial impressions
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> It's basically as we thought... 1080p 60fps is the target for ray tracing. I find 1080p unacceptable on a GPU like this; the hit to image quality is too great compared to 4k, and I won't be using it.
> 
> Guru3d is getting sub-30fps at 4k with DXR ultra. The idea that ray tracing is playable on these GPUs is (and always has been) a complete joke. Ray Tracing is 1-2 GPU architectures away from being usable.


I can't tell the diff with ray tracing on or off... other than a massive drop in FPS





Oh man... look at the frame time. (I'm going to bet the render time is so long that it is limiting how many frames can be drawn.)


----------



## Esenel

WQHD - Ultra with DXR Ultra.

Conquest - Arras - 64 Players=> 80 fps avg






Better performance than expected after seeing the media doing their tests.
But I have to say, for such a fast shooter where you do not have time to watch out for the details, it is not necessary.
And as you only get 40% of the fps you get with RTX off at WQHD Ultra, it is not the first choice for a shooter.
Maybe with the next generation it can be used for shooters as well.
But BF V with RTX on looks great
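To put that 40% figure in perspective, here is a trivial back-of-the-envelope check; the numbers are the ones quoted in this thread, not new benchmarks:

```python
# Quick sanity check of the DXR performance hit quoted above.
def fps_retained(fps_on: float, fps_off: float) -> float:
    """Fraction of the DXR-off framerate kept with DXR on."""
    return fps_on / fps_off

# ~80 fps avg with DXR Ultra at roughly 40% of the DXR-off rate
# implies about 200 fps with DXR off at the same settings.
dxr_off_estimate = 80 / 0.40
print(dxr_off_estimate)
```

The same arithmetic also matches the media results: a card pushing ~150 fps without DXR would land around 60 fps with it on.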


----------



## dante`afk

gridironcpj said:


> I managed to flash the Galax 380W BIOS and the FTW3 BIOS onto my Strix. Thanks Spiriva for the suggestion on the Displayports. I switched the port and everything worked flawlessly. I'm liking the FTW3 BIOS the most so far.


what was the thing with the displayports?


----------



## Robostyle

del


----------



## ESRCJ

dante`afk said:


> what was the thing with the displayports?


If the screen goes black after the BIOS flash, but the card is still running (fans, LEDs), then it's possible the Displayport in use isn't detected by the flashed BIOS. Switching to the other Displayport fixed the black screen issue I had. 

However, I will note that if you're using the Strix, the FTW3 BIOS might not improve performance. My Superposition and TimeSpy average FPS decreased, despite the higher power limit and resulting clock speeds.


----------



## CallsignVega

Where is the option to turn on ray tracing in BFV? I don't see it. I have latest NVIDIA drivers, DX12 on and Windows 10 says it's up to date.


----------



## kx11

here's my test in Campaign


----------



## Foxrun

kx11 said:


> here's my test in Campaign
> 
> 
> https://www.youtube.com/watch?v=dPRdAYFv0iE


how is 4k on low dxr?


----------



## ESRCJ

I verified that the voltage cap on my other Strix 2080 Ti is also 1.05V. If 1.093V is possible on other cards, then it's unfortunate that a card with a great custom PCB is so limited.


----------



## GraphicsWhore

gridironcpj said:


> I verified that the voltage cap on my other Strix 2080 Ti is also 1.05V. If 1.093V is possible on other cards, then it's unfortunate that a card with a great custom PCB is so limited.


That's weird. I mean if you're still getting same or better clocks than a card hitting 1.093V it doesn't really matter but still...

Where are you getting the readout?


----------



## ESRCJ

GraphicsWhore said:


> That's weird. I mean if you're still getting same or better clocks than a card hitting 1.093V it doesn't really matter but still...
> 
> Where are you getting the readout?


GPU-Z, EVGA Precision X1, and MSI Afterburner are all showing 1.05V.

I should also note that my Titan X (Pascal) was capped at 1.075V, although that card also had a 1.093V limit from what I recall.


----------



## vmanuelgm

Multiplayer at 3440x1440p with RT in Ultra and Low... I forgot to deactivate the vertical sync in Hamada map, so gpu load will be affected.


----------



## kot0005

GraphicsWhore said:


> What "settings"? DXR definitely does something. This is the full comparison:


DXR Ultra, Medium and High don't change anything... in the graphs, Ultra gives you higher perf than Medium and High..




iamjanco said:


> It means *they weren't testing enough samples* in the production runs they were releasing for distribution/sale. IOW, poor quality control management.


I wonder if this is FE only or all boards based on the FE/reference...


----------



## Zurv

CallsignVega said:


> Where is the option to turn on ray tracing in BFV? I don't see it. I have latest NVIDIA drivers, DX12 on and Windows 10 says it's up to date.


You need Windows 10 1809 (the new version from October that was just released yesterday).
You can get it here: https://www.microsoft.com/en-us/software-download/windows10


----------



## kot0005

CallsignVega said:


> Where is the option to turn on ray tracing in BFV? I don't see it. I have latest NVIDIA drivers, DX12 on and Windows 10 says it's up to date.


In the advanced graphics section, under the DX12 setting. Did you patch the game? There was a patch too.


----------



## Spiriva

CallsignVega said:


> Where is the option to turn on ray tracing in BFV? I don't see it. I have latest NVIDIA drivers, DX12 on and Windows 10 says it's up to date.


----------



## Zammin

vmanuelgm said:


> Video of my 2080Ti in BTV RT enabled @2560x1080p
> 
> 
> https://youtu.be/xs5-is87wfw
> 
> 
> With RT in low the game reaches nice FPS...


The visuals certainly look impressive. 38C on the GPU is very impressive! Do you notice whether the GPU temp is lower or higher with ray tracing enabled?


----------



## Ferreal

Anyone got SLI to work for BF V? Should be playable in 4K with ray tracing. Someone on the Nvidia forums suggested enabling AFR2. Going to give that a try when I get home.


----------



## Zurv

Ferreal said:


> Anyone got sli to work for BFV? Should be playable in 4K with ray tracing. Someone on Nvidia forums suggested to enable AFR2. Going to give that a try when I get home.


I got it working with DX11, but not DX12 (which RT requires). I have a bad feeling that SLI and RT can't work at the same time. A "feeling"; I have nothing to base that on. (Other than an engine that has had great SLI support in the past, with a lot of investment in SLI from both Dice and Nvidia...)


----------



## Ferreal

Zurv said:


> Ferreal said:
> 
> 
> 
> Anyone got sli to work for BFV? Should be playable in 4K with ray tracing. Someone on Nvidia forums suggested to enable AFR2. Going to give that a try when I get home.
> 
> 
> 
> I got it working with DX11, but not DX12 (which RT requires). I have a bad feeling that SLI and RT can't work at the same time. A "feeling"; I have nothing to base that on. (Other than an engine that has had great SLI support in the past, with a lot of investment in SLI from both Dice and Nvidia...)

Sigh, I had high hopes for SLI when Nvidia announced NVLink for GeForce. There's no reason for 4K 144Hz monitors when one card just can't do it.


----------



## LesPaulLover

vmanuelgm said:


> Video of my 2080Ti in BTV RT enabled @2560x1080p
> 
> 
> https://youtu.be/xs5-is87wfw
> 
> 
> With RT in low the game reaches nice FPS...


50-60fps @ 2560x1080 is "nice FPS" on a $1200 GPU? Wow ok!


----------



## LesPaulLover

PharmingInStyle said:


> So how do textures look in 1080p resolution in BF5 with ray tracing? Can't wait to find out because if you all are ok with it then I'll pop for a 2080 Ti AIB. Right now I only play games in 4k. But maybe willing to lower it for RT. Btw BF5 is quite the stunner visually in 4k imo.


I promise you the difference between 4k and 1080p is SIGNIFICANTLY greater than the difference between RTX on and off. You'd have to be insane.

It's literally no contest.


----------



## Ferreal

It’s literally unplayable in 4K with ray tracing on and in 1080p it looks like poop. I’ll play with RT off for now.


----------



## PharmingInStyle

LesPaulLover said:


> I promise you the difference between 4k and 1080p is SIGNIFICANTLY greater than the difference between RTX on and off. You'd have to be insane.
> 
> It's literally no contest.


I'm not surprised, and I won't go for the 1080p. But some will consider the following to be a bit insane or unstable. I read today the fps is around 18 in 4k in BF5 with RT on, and that's good enough for me to try it.

I can deal with 20-25 fps in games if the graphics are spectacular (30 fps would be nice). So I'm thinking 18 may be close enough, so I'm ordering the Ti on Amazon (an AIB) and will play it in 4k with RT. I'm more into graphics than story or action in games. Version 1809 installed fine in my Win 10 today and BF5 patched OK this morning, so I'll just wait for the card.


----------



## kx11

FFXV benchmark got updated with DLSS


http://benchmark.finalfantasyxv.com/na/


----------



## kx11

interesting !!!


----------



## alex1990

*MSI 2080 Ti Gaming X TRIO

BIOS 400W*

I got approval from RU support to upload this test ROM.

I tested it for 3 weeks, all OK.


----------



## Lownage

alex1990 said:


> *MSI 2080ti Gaming X TRIO
> 
> BIOS 400W*
> 
> i got accept from ru support for upload this test rom
> 
> I tested this 3 weeks, all ok


Is it an official MSI BIOS?


----------



## alex1990

Lownage said:


> Is it an official MSI Bios?


Of course! I have a private contact with RU support.


----------



## Lownage

The German support told me that there won't be a new BIOS with a higher power target. That's why I'm asking.


----------



## Gottex

Lownage said:


> The German support told me that there won't be a new BIOS with a higher power target. That's why I'm asking.





alex1990 said:


> *MSI 2080ti Gaming X TRIO
> 
> BIOS 400W*
> 
> i got accept from ru support for upload this test rom
> 
> I tested this 3 weeks, all ok


Confirmed!!! The BIOS is a beast, been on it for 2.5 weeks.
+150 Chip
+800 Mem

2040-2055 MHz permanent in AC Odyssey, fans 70%, temp 63-67°C


----------



## ESRCJ

So I figured out why my voltage limit was capped at 1.05V. In Afterburner, "Unlock voltage control" was set to "standard MSI" when it really needed to be "third party." Now I can manually set it to 1.093V. However, I need more power limit to work with now. Any word on whether Asus plans to release a better BIOS for the Strix? It really needs one, since the stock BIOS has a power limit of 325W.
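For anyone hitting the same 1.05V wall: the same switches can also be checked directly in MSIAfterburner.cfg. The two keys below are real Afterburner settings; the "standard MSI"/"third party" reference-design dropdown itself is easiest to change through the UI under General properties, so treat this fragment as a pointer rather than a full recipe:

```ini
; MSIAfterburner.cfg, [Settings] section
UnlockVoltageControl    = 1  ; enables the core voltage slider
UnlockVoltageMonitoring = 1  ; enables voltage readout in monitoring
```

Restart Afterburner after changing the file so the settings are re-read.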


----------



## Foxrun

Spiriva said:


>


Same, it's not appearing for me either.


----------



## acmilangr

alex1990 said:


> *MSI 2080ti Gaming X TRIO
> 
> BIOS 400W*
> 
> i got accept from ru support for upload this test rom
> 
> I tested this 3 weeks, all ok


Can I use it on my Gigabyte 2080 Ti Gaming?


----------



## alanthecelt

do we have any info on how DXR scales with overclocking?


----------



## dante`afk

DXR is fun and all, but impossible to use in online gaming; the input lag is just insane.

Until this is fixed, single player with a gamepad is the way to go with DXR.


----------



## Spiriva

alex1990 said:


> *MSI 2080ti Gaming X TRIO
> 
> BIOS 400W*
> 
> i got accept from ru support for upload this test rom
> 
> I tested this 3 weeks, all ok


Has anyone tried this one with the 2080 Ti FE card? I'm currently away for the weekend, but interested to see if anyone has tried it yet.


----------



## BigMack70

PharmingInStyle said:


> I'm not surprised, and I won't go for the 1080p. But some will consider the following to be a bit insane or unstable. I read today the fps is around 18 in 4k in BF5 with RT on, and that's good enough for me to try it.
> 
> I can deal with 20-25 fps in games if the graphics are spectacular (30 fps would be nice.) So I'm thinking 18 may be close enough so I'm ordering the Ti on Amazon (a AIB) and play it in 4k RT. I'm more into graphics than story or action in games. The version 1809 in my Win 10 installed fine today and BF5 patched ok this morning, so I'll just wait for the card.


18-25 fps in a multiplayer shooter is not playable.


----------



## GraphicsWhore

18fps isn't playable period.

In any case I think some of the DXR performance stuff is overblown, at least for us lucky 2080-Ti-owning-motherf'ers.

I played for an hour yesterday. At 3440x1440/Ultra/DXR low, in single player, the framerates were great. I was averaging in mid 50's to 60. Low of upper 40's and max of 70+ in open areas. In multiplayer they dipped across the board but not by much.

Yes, that's DXR at Low, but the reflections are still pretty amazing, and in motion I don't know how much you'll notice the difference between that and a higher DXR setting. Need to do some more testing.

I did try 2560x1080/Ultra/DXR ultra. It looked both better and worse because of higher DXR quality but lower resolution. 2560x1440 was doing some weird **** for me so I didn't have a chance to test.

I have a feeling I'll either go 2560x1440/Ultra/DXR High or stay at 3440x1440/Ultra/DXR Low because, as I said, I doubt that in motion I'll be able to tell between DXR Low and High. Meanwhile, I DO notice the difference of almost 1000 extra horizontal pixels.

FYI, card was at 2085/8000 (Galax BIOS) but interestingly AfterBurner showed I only hit 1.075v (normally hit 1.093). Max temp was 48.

Couple of pics of 3440x1440/Ultra/DXR Low attached


----------



## illidan2000

a quick question:
the Zotac 2080ti Triple Fan (not the AMP edition) has the "A" chip? (TU102-300A instead the TU102-300)


----------



## alanthecelt

There are two versions of the triple fan, an OC and a non-OC model... I have the OC model. I forgot to check when putting the waterblock on....
If I were betting I'd say A; mine sits happily at 2115/2130 on the 380W BIOS, overclocked using auto overclock only...


----------



## GraphicsWhore

dante`afk said:


> DXR is fun and things, but impossible to use in online gaming. the input lag is just insane.
> 
> Until this is fixed, single player with a gamepad is the way to go with DXR.


I honestly have no idea what you're talking about. Are you sharing a personal experience?



Spiriva said:


> Anyone tried this one with the 2080ti FE card ? Im currently away for the weekend, but interested to see if anyone tried it yet ?


Nope, guess you're going to take the lead on this one.


----------



## nycgtr

Foxrun said:


> Same it's not appearing for me either.


Download the latest NV driver, force update to the latest Windows version (may not work through Windows Update), and it should appear. I had to do that last night.


----------



## DooRules

illidan2000 said:


> a quick question:
> the Zotac 2080ti Triple Fan (not the AMP edition) has the "A" chip? (TU102-300A instead the TU102-300)



It does. I have the Galax bios on mine. 380W


----------



## GraphicsWhore

Foxrun said:


> Same it's not appearing for me either.


I couldn't see it either. Had latest driver and Windows told me I was up-to-date but when I checked my Windows version I was on 1803 and 1809 wasn't pulling down when I used Windows Updates. I had to use the Windows update tool (https://www.microsoft.com/en-us/software-download/windows10)


----------



## bmg2

I'm guessing Ray Tracing with pretty high settings will be fine at 2560x1440 for single player gaming. For multiplayer, lower quality and/or resolution would be needed. I can live with that personally. I'm still using a 2560x1440 monitor.


----------



## GraphicsWhore

bmg2 said:


> I'm guessing Ray Tracing with pretty high settings will be fine at 2560x1440 for single player gaming. For multiplayer, lower quality and/or resolution would be needed. I can live with that personally.


The disparity between single and multiplayer is not that much based on my experience last night. So, no, it may not necessarily require you to lower. The real difference comes with resolution/video settings/DXR level.

I'm going to be doing a bunch of testing on 2560x1440 and some more on my native 3440x1440. As I mentioned earlier 3440x1440/Ultra/DXR Low for me ran great in single and multi player and I may not even bother going down to 2560x1440 to raise DXR because of the inability to perceive the difference with a game like this where you're mostly in constant motion.


----------



## Okt00

Enabled DXR at Ultra last night to see how bad it would be after reading about the performance. 

Was pleasantly surprised. 

I'm running at [email protected], without DXR I get between 93-140fps depending on the map and area. (MP)

Now turning on DXR with the ultra preset still set, I'm getting between 60-94 fps. Which seems okay to me. 60 was only in the super dense city maps, outside in the desert I was hitting 85-94 with no crazy dips. GSYNC is pretty awesome. 

I've been running the KE-7 with chrome plating... I can see the characters reflection in the bolt pull. And the Oasis in the desert is a sight to see... the low lying water with the bowl around it. Ray tracing is incredible setup up in fidelity. There are graphical artifacts with the denoising. I notice this coming from a offline rendering background. They have a way to go before it's perfect obviously... but damn it looks good. 

I will say though, at first I thought it was my mouse sensitivity, then my head cold... but I'm thinking there _is an added delay_, and I don't think it was a lack of frames. I think it's the frametime itself. I don't have a frametime or latency readout on my RTSS, so I'm not sure how to test for that.
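On testing for that: Afterburner/RTSS can be set up to log per-frame data, but even without a log the arithmetic is simple. Frametime is just the reciprocal of FPS, and perceived delay or stutter usually shows up in the worst-case percentiles rather than the average. A minimal sketch (the helper names here are mine, not anything from RTSS):

```python
# Frametime (ms) is the reciprocal of instantaneous FPS.
def frametime_ms(fps: float) -> float:
    return 1000.0 / fps

# Average FPS hides spikes; the 99th-percentile frametime is a better
# proxy for perceived delay/stutter. (Simple nearest-rank percentile.)
def p99_frametime(frametimes):
    ordered = sorted(frametimes)
    return ordered[int(0.99 * (len(ordered) - 1))]
```

At 94 fps a frame takes ~10.6 ms and at 60 fps ~16.7 ms, so DXR pulling the range down from 93-140 to 60-94 adds several milliseconds per frame before any extra pipeline delay is even considered.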


----------



## Okt00

kx11 said:


> interesting !!!


Isn't that ~30% memory bandwidth? Are there details on that card somewhere?
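For anyone wanting to sanity-check bandwidth figures like that: GDDR bandwidth follows directly from effective data rate and bus width. A quick sketch (the 2080 Ti numbers come from the spec table at the top of the thread; any other card's figures would be assumptions):

```python
# Bandwidth (GB/s) = effective data rate (MT/s) * bus width (bits)
#                    / 8 bits-per-byte / 1000
def bandwidth_gbs(effective_mtps: float, bus_width_bits: int) -> float:
    return effective_mtps * bus_width_bits / 8 / 1000

# RTX 2080 Ti: 14000 MT/s effective on a 352-bit bus
print(bandwidth_gbs(14000, 352))  # -> 616.0 GB/s
```

A ~30% uplift over the 2080 Ti's 616 GB/s would land around 800 GB/s.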


----------



## GraphicsWhore

Okt00 said:


> Enabled DXR at Ultra last night to see how bad it would be after reading about the performance.
> 
> Was pleasantly surprised.
> 
> I'm running at [email protected], without DXR I get between 93-140fps depending on the map and area. (MP)
> 
> Now turning on DXR with the ultra preset still set, I'm getting between 60-94 fps. Which seems okay to me. 60 was only in the super dense city maps, outside in the desert I was hitting 85-94 with no crazy dips. GSYNC is pretty awesome.
> 
> I've been running the KE-7 with chrome plating... I can see the character's reflection in the bolt pull. And the Oasis in the desert is a sight to see... the low-lying water with the bowl around it. Ray tracing is an incredible step up in fidelity. There are graphical artifacts with the denoising; I notice this coming from an offline rendering background. They have a way to go before it's perfect, obviously... but damn it looks good.
> 
> I will say though, at first I thought it was my mouse sensitivity, then my head cold... but I'm thinking there _is an added delay_, and I don't think it was a lack of frames. I think it's the frametime itself. I don't have a frametime or latency readout on my RTSS, so I'm not sure how to test for that.


Yeah I noticed with DXR ultra there was some choppiness introduced and it's not related to FPS, so likely frame time though I didn't have OSD on to check.


----------



## dante`afk

GraphicsWhore said:


> I honestly have no idea what you're talking about. Are you sharing a personal experience?


Do you have BF V?

Try it with RTX. Input lag is huge.

Sent from my iPhone XS Max using Tapatalk


----------



## dentnu

alex1990 said:


> *MSI 2080ti Gaming X TRIO
> 
> BIOS 400W*
> 
> i got accept from ru support for upload this test rom
> 
> I tested this 3 weeks, all ok


Thanks! This is awesome news. Just flashed it on my MSI 2080 Ti Gaming X Trio and it's working great so far, no issues. Let's see how far I can push it now with this BIOS...


----------



## CENS

What kind of GPU-only waterblock solutions work? Can you use an EK CPU block for the wider spacing?


----------



## vmanuelgm

Zammin said:


> The visuals certainly look impressive. 38C on the GPU is very impressive! Do you notice the GPU temp is lower with ray tracing enabled or higher?



I am seeing the same or lower temperatures...


----------



## GosuPl

Tore down one of my faulty 2080 Tis with Micron memory.

https://scontent-frt3-1.xx.fbcdn.ne...=e5e4f72f2ccaf6d034b7015bb08ef7a6&oe=5C6E174F

https://scontent-frt3-1.xx.fbcdn.ne...=a16f597a444a561d6ce0402ad6983a4e&oe=5C6E9A24

A healthy card is clean next to the memory chips  This healthy GPU belongs to my friend, and it has Micron too.

https://scontent-frt3-1.xx.fbcdn.ne...=d69373ccebec733a1ba91ba903fe6b03&oe=5C752BBF

Tomorrow I will tear down my healthy 2080 Tis with Samsung memory; I assume they're clean too

GPUs with modified memory were sent to customers who pre-ordered. As you can see, it did not protect the cards from problems


----------



## Jpmboy

GosuPl said:


> Tore down one of my faulty 2080 Tis with Micron memory.
> 
> https://scontent-frt3-1.xx.fbcdn.ne...=e5e4f72f2ccaf6d034b7015bb08ef7a6&oe=5C6E174F
> 
> https://scontent-frt3-1.xx.fbcdn.ne...=a16f597a444a561d6ce0402ad6983a4e&oe=5C6E9A24
> 
> A healthy card is clean next to the memory chips  This healthy GPU belongs to my friend, and it has Micron too.
> 
> https://scontent-frt3-1.xx.fbcdn.ne...=d69373ccebec733a1ba91ba903fe6b03&oe=5C752BBF
> 
> Tomorrow I will tear down my healthy 2080 Tis with Samsung memory; I assume they're clean too
> 
> GPUs with modified memory were sent to customers who pre-ordered. As you can see, it did not protect the cards from problems


If you have a UV lamp/light, you might check whether that coating on the SMDs surrounding the RAM chips is UV-active... could it be conformal coating?


----------



## GosuPl

GosuPl said:


> Tore down one of my faulty 2080 Tis with Micron memory.
> 
> https://scontent-frt3-1.xx.fbcdn.ne...=e5e4f72f2ccaf6d034b7015bb08ef7a6&oe=5C6E174F
> 
> https://scontent-frt3-1.xx.fbcdn.ne...=a16f597a444a561d6ce0402ad6983a4e&oe=5C6E9A24
> 
> A healthy card is clean next to the memory chips  This healthy GPU belongs to my friend, and it has Micron too.
> 
> https://scontent-frt3-1.xx.fbcdn.ne...=d69373ccebec733a1ba91ba903fe6b03&oe=5C752BBF
> 
> Tomorrow I will tear down my healthy 2080 Tis with Samsung memory; I assume they're clean too
> 
> GPUs with modified memory were sent to customers who pre-ordered. As you can see, it did not protect the cards from problems


No problem, I will check tomorrow. What do you think about this?


----------



## ESRCJ

Apparently there are no plans to release a BIOS with a higher power limit for the Strix.


----------



## Jpmboy

GosuPl said:


> No problem, i will check tomorrow  What do you think about this?


Looks like something fouled after reflow? Assembly "bork".


----------



## GosuPl

gridironcpj said:


> Apparently there are no plans to release a BIOS with a higher power limit for the Strix.


The situation is quite strange, considering that the memory problem was definitely present on this card. Before it failed completely, lowering the memory clock to 400 MHz below stock helped ;-)


----------



## cstkl1

dat msi bios.. it's actually 406 watts

seems legit and working


----------



## cstkl1

latest nvidia drivers seem to give lower scores in 3DMark etc.


----------



## cstkl1

hmm the low score could be this 406-watt bios.. need to investigate
normally at 2115-2130MHz on the default 330-watt bios I get 16400
with this bios, 15800.... hmmm



not OCing mem atm.. first card died. that one did 8000 easy.. heck, it could bench up to 8300MHz..
this card is not that good, around 7500-7600..


----------



## dante`afk

On what card did you flash this? I tried the HOF BIOS on my FE and got lower OC potential with it, so I had to go back to the 380W BIOS.


----------



## cstkl1

dante`afk said:


> on what card did you flash this? I had tried the HOF bios on my FE and got lower OC potential with it, so I'd to go back to the 380w bios.


trio.. err, thought da pic with da logo would have told u dat..

hmm i think that caps thingy on da msi power rail is doing something

atm need further testing.. is it the bios or the driver? (haven't benched on da last two drivers)


----------



## acmilangr

Can I use this MSI BIOS on my Gigabyte Gaming 2080 Ti?


----------



## alanthecelt

GraphicsWhore said:


> Yeah I noticed with DXR ultra there was some choppiness introduced and it's not related to FPS, so likely frame time though I didn't have OSD on to check.


pretty sure I saw it mentioned elsewhere that DX12 is at fault; people are saying all the DICE (Frostbite?) games, or at least the Battlefield games, aren't smooth in DX12


----------






## kx11

So Palit released a BIOS-updating app, which I used to update my Gaming Pro OC GPU; the power target now goes up to 126%.


----------



## Rob w

A bit of good news for me:
Just got off a call with NVIDIA regarding the warranty on a waterblocked RTX 2080 Ti, and yes, they assure me the warranty will remain valid (so long as I don't damage the card) and have emailed me confirmation.
So I will now install the waterblock knowing the warranty will be valid.


----------



## GAN77

Rob w said:


> A bit of good news for me:
> Just got off a call with NVIDIA regarding the warranty on a waterblocked RTX 2080 Ti, and yes, they assure me the warranty will remain valid (so long as I don't damage the card) and have emailed me confirmation.
> So I will now install the waterblock knowing the warranty will be valid.




Can you show their official response?


----------



## Jpmboy

^ it's been their policy ever since I bought directly from NV


----------



## cstkl1

yup. 
The only ones that deny warranty are the distributors/resellers, so always get the RMA direct; for example MSI Taiwan, who will forward it to a nearby distro where you can drop it off..

even parallel-import cards can be RMAed (distros will charge)

nowadays AFAIK most big-brand AIBs have principal offices in most countries, some even with an FAE to do RMA/BIOS fixing etc.


----------



## Jpmboy

for ANYONE with win10Pro that has been slow (or fails) to update to 1809, check here: https://www.overclock.net/forum/27718274-post608.html


----------



## Ferreal

Anyone tried this? It was posted on Nvidia forums. It’s now playable for me in 4K with DXR on. Runs smooth between 50-60 FPS consistently. Looks great too. 

“We recommend in this first release of DXR that “DXR Raytraced Reflections Quality” be set to “Low” due to Battlefield V’s Known Issues when using “Medium”, “High”, and “Ultra” settings. EA, DICE, and NVIDIA will also continue to optimize this implementation and deliver regular updates.

Although preferences will vary, we recommend disabling the following settings from Video-->Basic when DXR is enabled for the best overall experience:

Chromatic Aberration
Film Grain
Vignette
Lens Distortion”

https://forums.geforce.com/default/topic/1080010/geforce-rtx-20-series/battlefield-v-dxr-faq/


----------



## GraphicsWhore

alanthecelt said:


> pretty sure i saw it mentioned elsewhere that dx12 is at fault, people mentioning all the dice(frostbite?) or at least battlefield games with dx12 aren't smooth in dx12


So there was a stutter with DX12 but that was fixed with the patch that added DXR support, at least for me.

I think it's unrelated.


----------



## Okt00

Ferreal said:


> Anyone tried this? It was posted on Nvidia forums. It’s now playable for me in 4K with DXR on. Runs smooth between 50-60 FPS consistently. Looks great too.
> 
> “We recommend in this first release of DXR that “DXR Raytraced Reflections Quality” be set to “Low” due to Battlefield V’s Known Issues when using “Medium”, “High”, and “Ultra” settings. EA, DICE, and NVIDIA will also continue to optimize this implementation and deliver regular updates.
> 
> Although preferences will vary, we recommend disabling the following settings from Video-->Basic when DXR is enabled for the best overall experience:
> 
> Chromatic Aberration
> Film Grain
> Vignette
> Lens Distortion”
> 
> https://forums.geforce.com/default/topic/1080010/geforce-rtx-20-series/battlefield-v-dxr-faq/


That made a rather significant improvement in the feel of the game. I don't know that I missed any of those postprocessing effects either.


----------



## Ferreal

Okt00 said:


> Ferreal said:
> 
> 
> 
> Anyone tried this? It was posted on Nvidia forums. It’s now playable for me in 4K with DXR on. Runs smooth between 50-60 FPS consistently. Looks great too.
> 
> “We recommend in this first release of DXR that “DXR Raytraced Reflections Quality” be set to “Low” due to Battlefield V’s Known Issues when using “Medium”, “High”, and “Ultra” settings. EA, DICE, and NVIDIA will also continue to optimize this implementation and deliver regular updates.
> 
> Although preferences will vary, we recommend disabling the following settings from Video-->Basic when DXR is enabled for the best overall experience:
> 
> Chromatic Aberration
> Film Grain
> Vignette
> Lens Distortion”
> 
> https://forums.geforce.com/default/topic/1080010/geforce-rtx-20-series/battlefield-v-dxr-faq/
> 
> 
> 
> That made a rather significant improvement in the feel of the game. I don't know that I missed any of those postprocessing effects either.
Click to expand...

Definitely. Ray tracing makes the biggest difference in graphics.


----------



## rakesh27

Ray tracing does make a nice difference.. however the drivers are young, and the associated hardware, PCIe and memory bandwidth, needs to be at least 50% greater.. then we'll see good FPS in 4K...

I think it's great to come out with this, NV has to start somewhere, but I suspect in 2 years RTX will be the new standard and it will be much better than the first generation...

VR is going the same way....


Now the trick will be to combine VR and RTX, and I think that's about 4-5 years away at good resolution and frame rate.. the technology is moving forward all the time...

However, as with anything new, the first adopters pay through the roof..... then over time it becomes cheaper...


----------



## GraphicsWhore

No surprise, but this card hates undervolting. Anyone else tried?

With various combinations, the most I can muster @ 993mV is 2010MHz, which eventually drops to 1995 but doesn't seem to go below that. I picked 993 because, to me, if you're going to undervolt, it should be below 1000mV. Of course, real-world performance doesn't see a noticeable difference and you spare yourself some heat, so it's not bad, but we all know this card wants power.


----------






## callonryan

cstkl1 said:


> trio.. err, thought da pic with da logo would have told u dat..
> 
> hmm i think that caps thingy on da msi power rail is doing something
> 
> atm need further testing.. is it the bios or the driver? (haven't benched on da last two drivers)


Pass on it. It pushes the thermal limit and the card scales down significantly to avoid that thermal barrier. The Fire Strike Ultra stress test took me into uncharted territory (80s; it normally sits in the 60s). On top of that, the graphics score dropped about 4%. Safer to stay with the GALAX.


----------



## Zammin

So regarding my FE not passing 3DMark stress tests: NVIDIA support say that some pass and some don't, and that it's usually not an issue as long as there isn't any artifacting or crashing going on. They have given me the option to send the card back for a replacement anyway, but I'm not sure what I want to do at this point. On one hand the card hasn't died or caused any major issues, but on the other, many other people with FEs seem to pass the test.

Maybe I should try overclocking the card and see how good it is first, and if it's no good I could take them up on their replacement offer based on the failed test.


----------



## Zammin

I'm doing some overclocking atm, just with the offsets in Afterburner. I've left the power and temp targets alone for now, because it gets too hot when I raise them. The FE air cooler struggles to keep it under 80C when they are turned all the way up, and I'm aware that under those conditions the VRM can reach 100C (according to GN testing). The voltage slider has also been left alone.

For now I seem to be able to achieve +170 on the core and +700 on the memory with default power and temp targets. Tested in Heaven, Superposition, Valley, and currently running Time Spy Extreme (Stress Test).

In Heaven at 75C the core clock was fluctuating between 1935-1950Mhz most of the time. If I max the power limit it will go up to around 2025Mhz but as I mentioned it gets pretty hot.

In Superposition at 75C the core clock was generally around the 1905Mhz mark.

In Valley at 75C the core clock was fluctuating between 1950-2000Mhz most of the time.

In the Time Spy Extreme stress test at 75C at around half way into the test I was seeing 1860-1950Mhz, but later in the test at 76C I was seeing 1845-1935Mhz. I actually passed the test at 97.4% this time but I started the test from 50C rather than 44C, so that might have something to do with it.

I still have Fire Strike Extreme and MSI Kombustor to go, but assuming all is well, my question to those of you who have overclocked FE cards on air is: do you think this card is good, average, or bad? Since I have a chance to replace it, it would be good to know where this sample stands amongst the rest.
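For context on that 97.4% figure: as I understand it, 3DMark's stress test loops the benchmark (20 loops by default) and reports frame-rate stability as the worst loop's score relative to the best, with 97% as the pass threshold. Roughly:

```python
# 3DMark-style stability: worst loop score as a percentage of the best.
def stability_pct(loop_scores) -> float:
    return 100.0 * min(loop_scores) / max(loop_scores)

def passes(loop_scores, threshold: float = 97.0) -> bool:
    return stability_pct(loop_scores) >= threshold
```

So 97.4% means the slowest loop ran within ~2.6% of the fastest; a high-clocked cold first loop can drag the number down, which fits the observation about starting from 50C instead of 44C.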


----------



## callonryan

Zammin said:


> I'm doing some overclocking atm, just with the offsets in Afterburner. I've left the power and temp targets alone for now, because it gets too hot when I raise them. The FE air cooler struggles to keep it under 80C when they are turned all the way up, and I'm aware that under those conditions the VRM can reach 100C (according to GN testing). The voltage slider has also been left alone.
> 
> For now I seem to be able to achieve +170 on the core and +700 on the memory with default power and temp targets. Tested in Heaven, Superposition, Valley, and currently running Time Spy Extreme (Stress Test).
> 
> In Heaven at 75C the core clock was fluctuating between 1935-1950Mhz most of the time. If I max the power limit it will go up to around 2025Mhz but as I mentioned it gets pretty hot.
> 
> In Superposition at 75C the core clock was generally around the 1905Mhz mark.
> 
> In Valley at 75C the core clock was fluctuating between 1950-2000Mhz most of the time.
> 
> In the Time Spy Extreme stress test at 75C at around half way into the test I was seeing 1860-1950Mhz, but later in the test at 76C I was seeing 1845-1935Mhz. I actually passed the test at 97.4% this time but I started the test from 50C rather than 44C, so that might have something to do with it.
> 
> I still have Fire Strike Extreme and MSI Kombustor to go, but assuming all is well, my question to those of you who have overclocked FE cards on air is: do you think this card is good, average, or bad? Since I have a chance to replace it, it would be good to know where this sample stands amongst the rest.


75C seems a bit too high for an FE sitting on the 2nd boost bin (should be 10C lower or so). Seems like a case problem to me. Position your fans so you have an ample amount of exhaust. For me, I bought a riser and mounted my card vertically to coincide with the airflow. It helps by about 8C. 30 bucks went a long way.


----------



## callonryan

grrr # 41 THAT'S ME! https://www.3dmark.com/hall-of-fame-2/fire+strike+3dmark+score+ultra+preset/version+1.1/1+gpu

I see ocvn is up there too. Maybe I need to get a new $2k processor?!


----------



## Zammin

callonryan said:


> 75C seems a bit too high for an FE sitting on the 2nd boost bin (should be 10C lower or so). Seems like a case problem to me. Position your fans so you have an ample amount of exhaust. For me, I bought a riser and mounted my card vertically to coincide with the airflow. It helps by about 8C. 30 bucks went a long way.


Thanks for your input man. I know the temps are a little high but I'm not worried about it too much. It's only set up as a test system in my old Enthoo Luxe case for now as I will be watercooling the GPU later (got the block and everything here, ready to go). It's currently summer in Australia, and there isn't a lot of airflow in the case because the fan speeds are tied to CPU temp in the BIOS, so unless the CPU heats up enough the fans don't spin very fast. Previously in the same case my Strix 1080Ti (when it was air cooled) was hitting 70C on hotter days so 75C on an FE cooler under the same conditions seems about right.

What I'm more interested in is whether the card is any good or not. I'd like to hear what offsets others have gotten with their FE cards on air and whether mine is good, bad, or average. I have a chance to send it back, so if it's not very good I can still get a replacement.


----------



## Majek

Zammin said:


> Thanks for your input man. I know the temps are a little high but I'm not worried about it too much. It's only set up as a test system in my old Enthoo Luxe case for now as I will be watercooling the GPU later (got the block and everything here, ready to go). It's currently summer in Australia, and there isn't a lot of airflow in the case because the fan speeds are tied to CPU temp in the BIOS, so unless the CPU heats up enough the fans don't spin very fast. Previously in the same case my Strix 1080Ti (when it was air cooled) was hitting 70C on hotter days so 75C on an FE cooler under the same conditions seems about right.
> 
> What I'm more interested in is whether the card is any good or not, I'm interested to hear what offsets others have gotten with their FE cards on air and if mine is good/bad/average. I have a chance to send it back so if it's not very good I can still get a replacement.


I think you are good. My 2080 Ti crashes above a +160 core offset. At +160 it boosts to a maximum of 2145 but quickly drops to 2130, then 2115 and lower as the temperature goes up. I haven't tested the offset with the standard power limit, so this is with power and temperature maxed out. Have you only tried at stock limits? My memory starts artifacting above +1000 unless I keep it cool. Colour flashes - waiting for my FE to fail, it seems. But at +900, no artifacting.

Btw, guys, can anyone comment on the EVGA Precision X1 scanner's behaviour? I know a manual OC is better, but I still find it interesting. At the stock power limit, the scanner gives me a +179 core offset. With the limit set to 123%, the maximum I get is only +164.

Has anyone else noticed similar behaviour on their card, i.e. the scanner giving a lower offset at a higher power limit?

Thanks
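Those 2145 → 2130 → 2115 drops line up with GPU Boost's clock bins: the card sheds one bin at a time as temperature climbs, which is why everyone's "max clock" is so temperature-dependent. A small sketch of the bin arithmetic (15 MHz is the commonly reported step size for Turing, not a figure I can cite from NVIDIA documentation):

```python
BIN_MHZ = 15  # commonly reported GPU Boost step size on Turing

def snap_to_bin(clock_mhz: float) -> int:
    # Boost clocks only land on multiples of the bin size.
    return int(round(clock_mhz / BIN_MHZ)) * BIN_MHZ

def bins_dropped(start_mhz: int, now_mhz: int) -> int:
    # e.g. 2145 -> 2115 is two bins of thermal throttle.
    return (start_mhz - now_mhz) // BIN_MHZ
```

So comparing two cards' peak clocks only really makes sense at the same temperature, since a few degrees is worth a bin or two either way.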


----------



## Zammin

Majek said:


> I think you are good. My 2080 ti crashes at over +160 core offset. At 160 it boosts to maximum of 2145 but quickly goes down to 2130 and 2115 and lower as the temperature goes up. I haven't tested the offset with standard power limit, so this is with power and temperature maxed out. Have you only tried at stock limits? My memory starts artifacting at over +1000 unless I keep it cool. Colour flashes - waiting for my FE to fail it seems. But at +900, no artifacting.
> 
> Btw, guys, can anyone comment on Evga Precision X1 scanner behaviour? I know manual oc is better but I still find it interesting. When I am at stock power limit, the scanner gives me +179 core offset. With the limit set to 123%, the maximum I get is +164 only.
> 
> Has anyone else noticed similar behaviour on their card i.e. scanner giving them lower offset at higher power limit?
> 
> Thanks


Are you on air? If so, I'm impressed that you got over 2100MHz on air with the FE cooler. I have tried maxing out the power slider and it goes up to around 2025-2050 in Heaven, but it starts to heat up a lot. I left it on defaults after that, knowing that the VRM also gets very hot with the FE cooler once the power limit is maxed (according to GN's testing). Once it's all on water I can comfortably turn it up to the max.

I can't go above +750 on the memory in AB or I start to see artifacts and crashes as well. +1000 is entirely out of the question, it crashes badly and locks the system up for a minute or two when I try it during a heaven loop.


----------



## pfinch

AORUS GeForce® RTX 2080 Ti XTREME 11G arrived


----------



## Majek

Zammin said:


> Are you on air? If so, I'm impressed that you got over 2100mhz on air with the FE cooler. I have tried maxing out the power slider and it goes up to around 2025-2050 in heaven, but it starts to heat up a lot. I left it on defaults after that knowing that the VRM also gets very hot with the FE cooler once the power limit is maxed (according the GN's testing). Once it's all on water I can comfortably turn it up to the max.
> 
> I can't go above +750 on the memory in AB or I start to see artifacts and crashes as well. +1000 is entirely out of the question, it crashes badly and locks the system up for a minute or two when I try it during a heaven loop.


Yes, I am on air, but there are a few buts.

Very often, the boost will drop to 2040/2085. In Timespy, there are scenes where the clock goes below 2000. The 2145/2130 boost is only for a second or two.

I am using poor man's cooling atm i.e. an open window for my tests to keep ambient around 13 C and side panel is off (winter is coming here).

Fans are at 75%.

The low ambient and high rpm means the card is around 60/65 C. Memory must be cool too - it can go to + 1050 then. With 24 C ambient I get artifacts at +1000.

Also, with low ambient, I can run Heaven at +180 core and the card boosts to 2180 for a moment. But in Time Spy +170 is the limit, and Fire Strike crashes even at +165. Heaven is not pushing the card enough.

I am sure if you had such low temperature in the room, you'd get similar results. 

I could write 'my card does +180/1000' without mentioning the open window/side panel and thus make everyone think all FEs should do +180 and more. I am sure many people distort the statistics in such a way. If I were at your place (ambient), my card would probably fare worse than yours. I can't get +170 stable.

Can you run the Precision X1 scanner and see what offset it suggests?


----------



## Zammin

Majek said:


> Yes, I am on air but there are a few but's.
> 
> Very often, the boost will drop to 2040/2085. In Timespy, there are scenes where the clock goes below 2000. The 2145/2130 boost is only for a second or two.
> 
> I am using poor man's cooling atm i.e. an open window for my tests to keep ambient around 13 C and side panel is off (winter is coming here).
> 
> Fans are at 75%.
> 
> The low ambient and high rpm means the card is around 60/65 C. Memory must be cool too - it can go to + 1050 then. With 24 C ambient I get artifacts at +1000.
> 
> Also, with low ambient, I can run Heaven at +180 core and the card boosts to 2180 for a moment. But in Timespy +170 is the limit and firestrike crashes at +165 even. Heaven is not pushing the card enough.
> 
> I am sure if you had such low temperature in the room, you'd get similar results.
> 
> I could write 'my card does +180/1000' without mentioning the open window/side panel and thus make everyone think all FEs should do +180 and more. I am sure many people distort the statistics in such a way. If I were at your place (ambient), my card would probably fare worse than yours. I can't get +170 stable.
> 
> Can you run the Precision X1 scanner and see what offset it suggests?


Ah I see, it's more than 10C cooler where you are. Yeah sure man, I haven't tried the scanner yet but I'm happy to try it. It's late here right now but I'll try and give it a shot tomorrow. Does it matter if I run it on Afterburner? That's what I have installed, but I can install X1 if you need a direct comparison.


----------



## Majek

Zammin said:


> Ah I see, it's more than 10C cooler where you are. Yeah sure man, I haven't tried the scanner yet but I'm happy to try it. It's late here right now but I'll try and give it a shot tomorrow. Does it matter if I run it on Afterburner? That's what I have installed, but I can install X1 if you need a direct comparison.


Thanks a lot. I think Afterburner is fine. Though my general OC knowledge is limited, as is the time to do tests. Hence the open window before thinking of doing water etc. - checking what my FE and 9900K are capable of.

Also, I am just wondering whether the card will fail, considering all that's happening with Tis.

If you could run the scanner at stock and then with the 123% power limit, that'd be awesome. I'm wondering if you'll also get a lower result with the latter. If you don't have 30 minutes, then stock is fine too. 

Appreciate it.


----------



## CENS

My first ASUS ROG Strix 2080 Ti OC edition died with memory artifacts after 1 day, so not only FE models are affected. The second card seems fine so far.

Since the Strix doesn't have a full-cover block yet: does anyone know a GPU-only waterblock solution? Due to the wider spacing of the screw holes, the standard EK Supremacy VGA block doesn't seem to fit. Maybe the CPU block is a good choice?


----------



## GAN77

CENS said:


> Since the Strix doesn't have a full-cover block yet: does anyone know a GPU-only waterblock solution? Due to the wider spacing of the screw holes, the standard EK Supremacy VGA block doesn't seem to fit. Maybe the CPU block is a good choice?


https://shop.bitspower.com/index.php?route=product/product&path=67_102_349&product_id=7082
Watercool is also planning one:
https://watercool.de/


----------



## marcolisi

Hi guys, could you please suggest a tutorial for overclocking my new MSI 2080 Ti Trio?

I would like to maximize the card's performance.

For now the card is on an ASUS Z170 Deluxe motherboard, but soon it will be mounted on an ASUS Maximus XI Extreme.

Does the motherboard the card is mounted on make any difference to overclocking performance?


----------



## mafia97160

RTX 2080 Ti FTW3 ULTRA is here


----------



## callonryan

marcolisi said:


> Hi guys, would you please suggest me a tutorial for overclocking my new msi 2080 ti trio ?
> 
> I would like to maximize the card performance .
> 
> For now the card is on a asus z170 deluxe motherboard but soon it would be mounted on a asus maximums xi extreme .
> 
> Does it make a difference on what motherboard this videocard is mounted in terms of performance boost with the overclocking?


It varies slightly from mobo to mobo. Ensure you're in a PCIe x16 slot and force Gen 3.0 in the BIOS for the slot you're using. From there, I'd use an industry-standard benchmarking utility like 3DMark, Heaven, etc.

I really enjoy using OC Scanner, and it will save you tons of time going back and forth. In my case, with my Trio, it gave me +167 on the core. Before initializing the program, ensure that everything is set to default and at whatever fan speed you prefer (I personally like 70%). The OC Scanner result should be within 10% of your actual stable core overclock (it doesn't touch memory, just the core clock). As for the memory, I've tested 3 different Trios and the lowest was +650 (start off with +600 and add +50 as you go, using the stability test in 3DMark or equivalent).
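That "start at +600 and add +50" procedure is just a linear search for the highest passing offset. Sketched below, with `run_stability_test` as a placeholder name for whatever manual pass/fail check you use (a 3DMark stress-test run, a Heaven loop, etc.):

```python
# Step the memory offset upward until the stability test fails,
# then keep the last offset that passed.
def find_max_offset(run_stability_test, start=600, step=50, ceiling=1200):
    best = None
    offset = start
    while offset <= ceiling:
        if not run_stability_test(offset):
            break
        best = offset
        offset += step
    return best

# Example with a simulated card that is stable up to +750:
print(find_max_offset(lambda off: off <= 750))  # -> 750
```

In practice you'd back off one more step from whatever this finds, since a single benchmark pass isn't proof of game stability.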


----------



## marcolisi

callonryan said:


> marcolisi said:
> 
> 
> 
> Hi guys, would you please suggest me a tutorial for overclocking my new msi 2080 ti trio ?
> 
> I would like to maximize the card performance .
> 
> For now the card is on a asus z170 deluxe motherboard but soon it would be mounted on a asus maximums xi extreme .
> 
> Does it make a difference on what motherboard this videocard is mounted in terms of performance boost with the overclocking?
> 
> 
> 
> It slightly varies from mobo to mobo. Ensure you're on a PCI-E 16x slot and force 3.0 in the bios for the slot you're using. From there, I'd use an industry standard benchmarking utlity like 3dmark, heaven, etc..
> 
> I really enjoy using the OCScanner, and this will save you tons of time of going back and forth. In my case with my trio, it gave me a +167 on the core. Before initializing the program, I'd ensure that everything was set to default and whatever fan speed you prefer (I personally like 70%. The OCScanner results should be within 10% of the required OC stability for your clock (it doesn't provide memory, just core clock). As for the memory, I've tested 3 different trios and the lowest memory was +650 (start off with 600 and add +50 as you go using the stability test in 3dmark or equivalent).
Click to expand...

Thank you for the reply. I am totally ignorant on the matter. 😞
Is there a video tutorial, or a written step-by-step guide?


----------



## GraphicsWhore

marcolisi said:


> callonryan said:
> 
> 
> 
> 
> 
> marcolisi said:
> 
> 
> 
> Hi guys, would you please suggest me a tutorial for overclocking my new msi 2080 ti trio ?
> 
> I would like to maximize the card performance .
> 
> For now the card is on a asus z170 deluxe motherboard but soon it would be mounted on a asus maximums xi extreme .
> 
> Does it make a difference on what motherboard this videocard is mounted in terms of performance boost with the overclocking?
> 
> 
> 
> It slightly varies from mobo to mobo. Ensure you're on a PCI-E x16 slot and force 3.0 in the BIOS for the slot you're using. From there, I'd use an industry-standard benchmarking utility like 3DMark, Heaven, etc.
> 
> I really enjoy using the OC Scanner, and it will save you tons of time going back and forth. In my case with my Trio, it gave me a +167 on the core. Before starting the scan, I'd ensure that everything is set to default with whatever fan speed you prefer (I personally like 70%). The OC Scanner result should be within 10% of your card's actual stable overclock (it doesn't touch memory, just core clock). As for the memory, I've tested 3 different Trios and the lowest was +650 (start off with +600 and add +50 as you go, using the stability test in 3DMark or equivalent).
> 
> 
> Thank you for the reply . I am a total ignorant on the matter. 😞
> Is there any video tutorial or tutorial on paper with a step by step procedure ?
Click to expand...

Install MSI Afterburner. 

Set +100 on core and +500 on memory and set power and voltage sliders to max.

Install and run the “Heaven” benchmark. If it’s stable, keep adding to clock and memory until you figure out your max stable clocks. If not stable, lower the clocks until stable.

Frankly I find the scanners useless. I’ve used both the EVGA X1 and Afterburner scanners and my max benchmark scores were lower using those numbers than setting a manual overclock.
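The manual trial-and-error loop described above can be sketched in a few lines. This is purely illustrative: `is_stable` stands in for "set the sliders in Afterburner, run Heaven, and watch for crashes or artifacts" done by hand, it is not a real API.

```python
# Hypothetical sketch of the manual OC search: raise the offsets while the
# benchmark passes, then keep the last combination that was stable.
# is_stable() is a stand-in for a manual Afterburner + Heaven test run.

def find_stable_offsets(is_stable, core=100, mem=500, core_step=15, mem_step=50):
    while is_stable(core, mem):
        core += core_step   # Turing core clocks move in ~15 MHz bins
        mem += mem_step
    return core - core_step, mem - mem_step  # last combo that passed

# Fake predicate for illustration: pretend the card fails past +145 core.
print(find_stable_offsets(lambda c, m: c <= 145))  # (145, 650)
```

In practice each `is_stable` check is a full benchmark run, which is exactly why it takes a whole evening rather than a second.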


----------



## marcolisi

GraphicsWhore said:


> Install MSI Afterburner.
> 
> Set +100 on core and +500 on memory and set power and voltage sliders to max.
> 
> Install and run the “Heaven” benchmark. If it’s stable, keep adding to clock and memory until you figure out your max stable clocks. If not stable, lower the clocks until stable.
> 
> Frankly I find the scanners useless. I’ve used both the EVGA X1 and Afterburner scanners and my max benchmark scores were lower using those numbers than setting a manual overclock.


Thanks!

Do you think that it can be done better in OC?


How can I benchmark the videocard to compare the results versus an online database ?

Also, have you heard if there is any way to unlock the power limit of the msi 2080 ti trio ?

Thanks again!


----------



## GraphicsWhore

marcolisi said:


> Thanks!
> 
> Do you think that it can be done better in OC?
> 
> 
> How can I benchmark the videocard to compare the results versus an online database ?
> 
> Also, have you heard if there is any way to unlock the power limit of the msi 2080 ti trio ?
> 
> Thanks again!


To compare benchmarks, download the basic version of 3DMark (https://benchmarks.ul.com/3dmark). Run Time Spy and Fire Strike, and you can refine your results comparison to people with the same CPU and GPU.

Not sure what you mean by "unlock the power limit." You have the power limit slider set to max so I assume that's what the max for your card is. You may be able to flash another BIOS to that card to get a higher PL though. You'll have to read the thread to see your options.

That screenshot is interesting though. How long did you run Heaven before taking that? Because it has good clocks but is not even at 1V. You probably need to run that longer. Anyway the FireStrike and TimeSpy benchmarks will push your card to its max. Run those with various offsets on core and memory and see what you get.


----------



## Emmett

Has driver 416.94 been pulled? I can download 416.81, but .94 just goes to a blank screen.


----------



## Zammin

Zammin said:


> I'm doing some overclocking atm, just with the offsets in afterburner. I've left the power and temp targets alone for now, because it gets too hot when I raise them. The FE air cooler struggles to keep it under 80C when they are turned all the way up, and I'm aware that under those conditions the VRM can reach 100C (according to GN testing).Voltage slider has also been left alone.
> 
> For now I seem to be able to achieve +170 on the core and +700 on the memory with default power and temp targets. Tested in Heaven, Superposition, Valley and currently running Time Spy Extreme (Stress Test).
> 
> In Heaven at 75C the core clock was fluctuating between 1935-1950Mhz most of the time. If I max the power limit it will go up to around 2025Mhz but as I mentioned it gets pretty hot.
> 
> In Superposition at 75C the core clock was generally around the 1905Mhz mark.
> 
> In Valley at 75C the core clock was fluctuating between 1950-2000Mhz most of the time.
> 
> In the Time Spy Extreme stress test at 75C at around half way into the test I was seeing 1860-1950Mhz, but later in the test at 76C I was seeing 1845-1935Mhz. I actually passed the test at 97.4% this time but I started the test from 50C rather than 44C, so that might have something to do with it.
> 
> I still have Fire Strike Extreme and MSI Kombustor to go, but assuming all is well my question to those of you who have overclocked the FE cards on air is do you think this card is good? average? bad?. Since I have a chance to replace it, it would be good to know where this sample stands amongst the rest.





Majek said:


> Can you run the Precision X1 scanner and see what offset it suggests?





Zammin said:


> Ah I see, it's more than 10C cooler where you are. Yeah sure man, I haven't tried the scanner yet but I'm happy to try it. It's late here right now but I'll try and give it a shot tomorrow. Does it matter if I run it on Afterburner? That's what I have installed, but I can install X1 if you need a direct comparison.





Majek said:


> Thanks a lot. I think Afterburner is fine. Though my general OC knowledge is limited, as is the time to do tests. Hence the open window before thinking of doing water etc. - checking what my FE and 9900K are capable of.
> 
> Also, I am just wondering if the card won't fail considering all that's happening with TIs.
> 
> If you could run the scanner with stock and then with 123% power limit, that'd be awesome. I'm wondering if you'll also get lower score with the latter. If you don't have 30 minutes, then stock is fine too.
> 
> Appreciate it.


Well I had to make some tweaks to my basic overclock today because Black Ops 4 was crashing and I saw some strange artifacts at the beginning of one of the matches, even though all the stress tests were fine. Goes to show how important real game testing is. Looks like for now in Afterburner +150 is my best stable core clock at default power and temp targets, and +600 on the memory is safe and stable. So not as good as I initially thought, but I would guess this is about average. If anyone has any further input it would be greatly appreciated. I'm still not sure whether I want a replacement or not. It hasn't shown any signs of dying, which is great.

Majek, I'm running OC Scanner in Afterburner now at default power and temp targets. I'll let you know in a few minutes how it goes. I've left my custom fan curve on; not sure if that affects anything though. It's pretty hot here today, so I might not run it at max power and temp targets as it may get too hot. Perhaps on a cooler day I can give that a shot.


----------



## Zammin

Okay Majek here's what I got. I've attached my default curve, +150 offset and OC scanner curve for reference.

Honestly the OC scanner curve doesn't look that different to my +150 offset.

Regarding the power limit and OC scanner, this article that was posted earlier (https://www.techspot.com/article/1704-geforce-rtx-2080-overclocking/) says the following:

_One thing to note here is that we’ve set the power and temperature limit after running Nvidia Scanner. Nvidia says the Scanner only modifies the core clock, so if you change the power and temperature limits before hand, the Scanner might find different and potentially higher core clocks. However in our experience, we actually achieved lower clocks in the Scanner setting the power limit beforehand, so we’d recommend cranking it up after the Scanner is complete._


----------



## TK421

Which card is best binned, the EVGA FTW3 Ultra or the ASUS Strix Gaming OC, or is it some other card? I'm in the US so I can't buy Galax or the Chinese brands.



Some professional overclockers use the Strix Gaming OC, like der8auer. Not sure if that indicates better quality over the FTW3; the PCB has some extreme-OC components that may favor that use case.


----------



## Zammin

TK421 said:


> Which card is best binned? EVGA FTW3 ultra or ASUS strix gaming OC, or is it some other card? I'm in the US so can't buy Galax or the chinese brands.
> 
> 
> 
> Some professional overclockers use the strix gaming oc, like der8auer. Not sure if that indicates better quality over the FTW3, pcb shows some extreme oc components that maybe favors that usage case.


I don't think there is a "best binned" card model; the only binning that goes on, to my knowledge, is the division into TU102-300 and TU102-300A chips at NVIDIA. If you buy a card that is factory overclocked (higher than 1545 MHz boost out of the box) then you likely have an "A" variant. Other than that there is no real way to predict or control how good your silicon will be. Having an "A" variant doesn't mean it's a super good chip either, just that it's likely to be better than its non-A counterparts, of which I don't believe there are many out there at the moment.

What you are getting on the big custom cards is a nice big custom VRM and features like dual BIOS etc. In the case of the FTW3 I'm guessing you get ICX sensors as well.

AFAIK Der8auer is sponsored by ASUS, which is likely why you see him using their hardware the most. The Strix OC is a nice card, it has great power delivery but the BIOS it comes with doesn't let you raise the power target much higher than the FE models. The EVGA FTW3 has a higher max power of 373W IIRC, check the first page of this thread for details.


----------



## Robostyle

Zammin said:


> What you are getting on the big custom cards is a nice big custom VRM and features like dual BIOS.


Which gives you nothing and zero guarantee against coil whine, blowouts (hey, MSI 10 series!), etc.
Plus, the VRM has made zero difference since Pascal - the FE overclocks the same as the best third-party boards. To get any real benefit you need hardware tinkering.


----------



## Majek

Zammin said:


> Okay Majek here's what I got. I've attached my default curve, +150 offset and OC scanner curve for reference.
> 
> Honestly the OC scanner curve doesn't look that different to my +150 offset.
> 
> Regarding the power limit and OC scanner, this article that was posted earlier (https://www.techspot.com/article/1704-geforce-rtx-2080-overclocking/) says the following:
> 
> _One thing to note here is that we’ve set the power and temperature limit after running Nvidia Scanner. Nvidia says the Scanner only modifies the core clock, so if you change the power and temperature limits before hand, the Scanner might find different and potentially higher core clocks. However in our experience, we actually achieved lower clocks in the Scanner setting the power limit beforehand, so we’d recommend cranking it up after the Scanner is complete._


Hi Zammin,

Perfect! Thanks for taking the time to check. It looks like majority of the cards end up in almost the same place when it comes to oc.


----------



## Novak-Djokovic

I had the Asus Strix 2080ti in my hands. I returned it to get the EVGA FTW3 Ultra, which of course, I cannot get as it's nearly impossible to find anywhere.

By the time I get the notification that some vendor has the FTW3 and I go to the site, the cards are gone. And, they are NEVER available when I'm sitting at my desktop w/ my credit card in my hand....


----------



## Robostyle

Majek said:


> Zammin said:
> 
> 
> 
> Okay Majek here's what I got. I've attached my default curve, +150 offset and OC scanner curve for reference.
> 
> Honestly the OC scanner curve doesn't look that different to my +150 offset.
> 
> Regarding the power limit and OC scanner, this article that was posted earlier (https://www.techspot.com/article/1704-geforce-rtx-2080-overclocking/) says the following:
> 
> _One thing to note here is that we’ve set the power and temperature limit after running Nvidia Scanner. Nvidia says the Scanner only modifies the core clock, so if you change the power and temperature limits before hand, the Scanner might find different and potentially higher core clocks. However in our experience, we actually achieved lower clocks in the Scanner setting the power limit beforehand, so we’d recommend cranking it up after the Scanner is complete._
> 
> 
> 
> Hi Zammin,
> 
> Perfect! Thanks for taking the time to check. It looks like majority of the cards end up in almost the same place when it comes to oc.

Well, yep, but you still have the cooling solution to keep in mind - that's where all the difference is. Hence, some good AIB coolers might keep your card cool and quiet, although it also depends on chassis cooling.

As for the enlarged power limit - idk guys, why are you so obsessed with it? It doesn't give any OC improvement, only increased power draw and temps.

I mean, I achieved the same 2100 MHz stable at ~1.1V both on the Zotac Extreme (PL increased) and the Strix OC (conservative PL) - the only difference was an extra 80W and 5C on the chip within the loop. The Zotac didn't have any reasonable advantage worth paying for or flashing the BIOS over, etc.


----------



## Zammin

Majek said:


> Hi Zammin,
> 
> Perfect! Thanks for taking the time to check. It looks like majority of the cards end up in almost the same place when it comes to oc.


No problem man. 



Robostyle said:


> Well, yep, but you still have the cooling solution to keep in mind - that's where all the difference is. Hence, some good AIB coolers might keep your card cool and quiet, although it also depends on chassis cooling.
> 
> As for the enlarged power limit - idk guys, why are you so obsessed with it? It doesn't give any OC improvement, only increased power draw and temps.
> 
> I mean, I achieved the same 2100 MHz stable at ~1.1V both on the Zotac Extreme (PL increased) and the Strix OC (conservative PL) - the only difference was an extra 80W and 5C on the chip within the loop. The Zotac didn't have any reasonable advantage worth paying for or flashing the BIOS over, etc.


"Obsessed" might be too strong a word in this instance, Majek was just wondering if other people were seeing lower OC scanner results with the power target set to maximum before the scan. Techspot's article answered that question and that's that I suppose.

I only experimented with the power target briefly since it makes my FE get pretty hot, but it did in fact raise the clock speeds by a decent bump while running Heaven on a loop. That probably doesn't apply to every situation, but in that instance it did increase clock speeds, though as you say temps went up a fair bit. I will experiment with it some more when I have my water block fitted, but I don't intend to raise it while I'm on the FE cooler.


----------



## Robostyle

Zammin said:


> No problem man.
> 
> 
> 
> "Obsessed" might be too strong a word in this instance, Majek was just wondering if other people were seeing lower OC scanner results with the power target set to maximum before the scan. Techspot's article answered that question and that's that I suppose.
> 
> I only experimented with the power target briefly since it makes my FE get pretty hot, but it did in fact raise the clock speeds by a decent bump when I raised it while running heaven on loop. That probably doesn't apply to every situation but in that instance it did increase clock speeds, but as you say temps went up a fair bit. I will experiment with it some more when I have my water block fitted but I don't intend to raise it while I'm on the FE cooler.


I didn't mean everyone, sorry for any misunderstanding - I just know a couple of individuals who are obsessed with hunting for the highest-PL BIOSes from Zotac, Galax and others. Increasing the PL does have something to do with max boost clock when OCing (you probably won't reach those magical 2200 MHz on 1080/2080 Tis without turning everything to the max), but still, I was able to achieve the same overclocking results with various GPUs from a wide range of vendors with broad PL limits, with only the voltage % increased.

Only thing I have in mind is that worse-overclocking chips might require more watts for a stable post-2100 OC - but we need a Pascal OC guru here; unfortunately I'm not one of them, still tinkering with voltage curves.


----------



## Zammin

Robostyle said:


> I didn't mean everyone, sorry for any misunderstanding - I just know a couple of individuals who are obsessed with hunting for the highest-PL BIOSes from Zotac, Galax and others. Increasing the PL does have something to do with max boost clock when OCing (you probably won't reach those magical 2200 MHz on 1080/2080 Tis without turning everything to the max), but still, I was able to achieve the same overclocking results with various GPUs from a wide range of vendors with broad PL limits, with only the voltage % increased.
> 
> Only thing I have in mind is that worse-overclocking chips might require more watts for a stable post-2100 OC - but we need a Pascal OC guru here; unfortunately I'm not one of them, still tinkering with voltage curves.


Oh I getcha, no worries man. Sorry for misunderstanding. 

Yeah I see where you are coming from. Perhaps it comes down to whether power or temperature is the limiting factor in each instance; I get the impression that with many air-cooled cards temperature is generally the major limiting factor. I seem to recall Jpmboy mentioning that unless you are on water, there generally isn't a lot to gain with those high-PL VBIOSes. Personally, I'm still very new to working with Turing as I only just got my card earlier this month. I hope to figure out what works and what doesn't once I have my waterblock fitted and temperature is no longer a major limitation.


----------



## Kronos8

CSN7 said:


> Since the Strix doesn't have a full-cover block yet: Does anyone know a GPU-only waterblock solution? Due to the wider spacing of the screwholes the standard ek supremacy-vga block doesn't seem to fit. Maybe the CPU block is a good choice?


https://www.overclock.net/forum/69-...tx-2080-ti-owner-s-club-331.html#post27702602


----------



## Frozburn

Ended up buying the MSI 2080 Ti Duke OC. Does anyone know if you can change the thermal paste without voiding the warranty on these cards? Read somewhere that Palit don't allow it


----------



## Jpmboy

Robostyle said:


> I didn't mean everyone, sorry for any misunderstanding - I just know a couple of individuals who are obsessed with hunting for the highest-PL BIOSes from Zotac, Galax and others. Increasing the PL does have something to do with max boost clock when OCing (you probably won't reach those magical 2200 MHz on 1080/2080 Tis without turning everything to the max), but still, I was able to achieve the same overclocking results with various GPUs from a wide range of vendors with broad PL limits, with only the voltage % increased.
> 
> Only thing I have in mind is that worse-overclocking chips might require more watts for a stable post-2100 OC - but we need a Pascal OC guru here; unfortunately I'm not one of them, still tinkering with voltage curves.


Unfortunately, none of the control lines in the stack operate in isolation. Power, temperature, voltage and frequency are all part of "Boost". Simply raising the power limit without overcoming (or overriding) one or more of the other sensor inputs is pretty pointless. The top dog in the control stack is temperature (ever since Pascal launched).
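A crude way to picture that interaction is a min-of-limiters model: the effective clock is whatever the most restrictive limiter allows, so raising only the power limit changes nothing while temperature is the binding constraint. This is a made-up sketch for illustration, not NVIDIA's actual Boost algorithm, and the numbers are invented.

```python
# Toy model of a Boost-style control stack. Each limiter maps to the
# highest clock it would permit; the card honors the most restrictive one.

def effective_clock(requested_mhz, caps):
    allowed = min(caps.values())          # tightest cap wins
    binding = min(caps, key=caps.get)     # which limiter imposed it
    return min(requested_mhz, allowed), binding

caps = {"power": 2010, "temperature": 1950, "voltage": 2040}
print(effective_clock(2100, caps))   # (1950, 'temperature')

caps["power"] = 2200                 # raising only the power limit...
print(effective_clock(2100, caps))   # ...still (1950, 'temperature')
```

The second call is the whole point: with temperature as the top dog, a bigger power budget moves nothing.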


----------



## marcolisi

GraphicsWhore said:


> To compare benchmarks, download the basic version of 3dMark (https://benchmarks.ul.com/3dmark) Run TimeSpy and FireStrike and you can refine your results comparison to people with same CPU and GPU.
> 
> Not sure what you mean by "unlock the power limit." You have the power limit slider set to max so I assume that's what the max for your card is. You may be able to flash another BIOS to that card to get a higher PL though. You'll have to read the thread to see your options.
> 
> That screenshot is interesting though. How long did you run Heaven before taking that? Because it has good clocks but is not even at 1V. You probably need to run that longer. Anyway the FireStrike and TimeSpy benchmarks will push your card to its max. Run those with various offsets on core and memory and see what you get.





I ran Heaven for maybe 10 minutes before taking that.

This is my benchmark result. How do I see the list with all the other results, to see where this one stands compared to everyone who has run the test?

What is an acceptable temperature for long-term running of Heaven, and how long should I run it before accepting the values as good?

Where can I select the Fire Strike or Time Spy tests?


----------



## Jpmboy

marcolisi said:


> I ran Heaven for maybe 10 minutes before taking that.
> This is my benchmark result. How do I see the list with all the other results, to see where this one stands compared to everyone who has run the test?
> What is an acceptable temperature for long-term running of Heaven, and how long should I run it before accepting the values as good?
> Where can I select the Fire Strike or Time Spy tests?


Have you looked in the Hall of Fame?


----------



## NewType88

Emmett said:


> has driver 416.94 been pulled? I can download 416.81 but .94 just goes to a blank screen.


Same.


----------



## 4hwgenxx

Hi guys!

Is there any BIOS for the RTX 2080 Ti with higher voltage? Moving the voltage slider when overclocking doesn't seem to change anything...

I have now flashed the Galax BIOS with the 126% power limit; would it be better to flash the EVGA 130% one?


----------



## acmilangr

Hi all.
I own the Gigabyte 2080 Ti Gaming OC edition.
The maximum overclock I can get is +130/+700, which means 1995 MHz at about 75C.

Does this just mean that I lost the silicon lottery, or is something wrong with my card? I have also updated the BIOS to one with the power limit at 122%, but there is no change.


----------



## stdout

I think a lot of people are misunderstanding what the power figures mean. The percentage of extra power you can dial into the card is a percentage of the card's original power, not a generic number. So 12% extra of 400 watts is more than 25% extra of 250 watts, even though 25 is a higher number than 12.

My 2 Strix OC cards, which have the same power limits, also behave differently. To achieve 2150, one card is dialed to +100 MHz, the other +130. (Still on air, with a 140mm fan blowing at the backside.)

What I am trying to say is, no two cards are the same, and not all cards can go 2150 even if they do have 400 watts to chew through.

I would not even start to consider a different BIOS as long as these cards perform like this. (My 1080 Ti FE cards both had the Poseidon BIOS on them; that worked flawlessly.)
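The percentage arithmetic above works out like this (a trivial sketch; the wattages are the examples from the post):

```python
# A power-limit slider is relative to the card's own base TDP, so a smaller
# percentage on a higher-TDP BIOS can still mean more absolute watts.

def max_board_power(base_tdp_w, limit_percent):
    return base_tdp_w * limit_percent / 100

print(max_board_power(400, 112))  # 448.0 -> +12% on a 400 W BIOS
print(max_board_power(250, 125))  # 312.5 -> +25% on a 250 W BIOS
```

So "+12%" on a big-BIOS card buys roughly 135 W more headroom than "+25%" on a reference 250 W card.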


----------



## gta1989

I have tried 2 cards now. An EVGA Ultra OC would only do 2015 MHz. Then I had a chance to grab an Asus Dual OC, and that card would do 2100 MHz on air at close to the same temps. I put the Asus under water and now it will run a steady 2225 MHz core and 16k mem at a nice 44C in stress tests; that's a Barrow full-cover block with a 6950X at 4.4 in the loop. Temp is a huge factor, but I also put the Galax BIOS on the Asus card and it did help hold the card at 2225. It didn't help it go any higher, it just doesn't bounce around anymore since it has extra room to pull power without bouncing off the max.


----------



## acmilangr

stdout said:


> I think a lot of people are misunderstanding what the power figures mean. The percentage of extra power you can dial into the card is a percentage of the card's original power, not a generic number. So 12% extra of 400 watts is more than 25% extra of 250 watts, even though 25 is a higher number than 12.
> 
> My 2 Strix OC cards, which have the same power limits, also behave differently. To achieve 2150, one card is dialed to +100 MHz, the other +130. (Still on air, with a 140mm fan blowing at the backside.)
> 
> What I am trying to say is, no two cards are the same, and not all cards can go 2150 even if they do have 400 watts to chew through.
> 
> I would not even start to consider a different BIOS as long as these cards perform like this. (My 1080 Ti FE cards both had the Poseidon BIOS on them; that worked flawlessly.)


What BIOS could I try on my Gigabyte 2080 Ti Gaming OC?


----------



## gta1989

The Gigabyte OC already has a 338W power limit, which was too much for my Asus Dual cooler; I would lose clocks from temps getting too high. I had to find the max clock at about 75C, or I would start getting instability and it starts to downclock itself anyway. The best BIOS I have found right now is the Galax one with a 380W power limit, but I had to wait until I put the water block on to really use any more than the stock BIOS would give me.


----------



## jonsky77

It also depends on what you're using to test it, I think. With FurMark my 2080 Ti FE drops to 1780/8000 MHz, but in Heaven it's 2310/8000 MHz, and in Superposition it's 2145/8000. It's always hitting the power limit; I've found even reducing the memory overclock can increase the core overclock. My BIOS max wattage is 320W.


----------



## jonsky77

jonsky77 said:


> It also depends on what you're using to test it, I think. With FurMark my 2080 Ti FE drops to 1780/8000 MHz, but in Heaven it's 2310/8000 MHz, and in Superposition it's 2145/8000. It's always hitting the power limit; I've found even reducing the memory overclock can increase the core overclock. My BIOS max wattage is 320W.


My mistake, Heaven shows 2310 MHz but GPU-Z says it's 2115.


----------



## GraphicsWhore

gta1989 said:


> i put the asus under water and now it will run steady 2225mhz core and 16k mem at a nice 44c in stress test


Yeah gonna need to see some proof of that.


----------



## acmilangr

Could someone suggest another BIOS to use on my Gigabyte 2080 Ti Gaming OC?


----------



## lolhaxz

Picked up a Galax 2080Ti KFA2 - which appears to have come with the 380W bios.

Stable at 1.043v @ 2115MHz and 8000MHz memory (+1000) under water, so not a magical sample really.

However I am having some odd problems: it seems that if the GPU load is high enough, the whole computer shuts down. If I leave the power limit at stock or lower it, then it runs fine.

Forza Horizon 4 @ 4K with 8x AA, completely stock clocks but just power limit raised to 120% causes the machine to shutdown almost immediately upon load but something like SuperPosition 8K or 4K runs fine continuously with max power limit and the overclock.

Power supply is a Cooler Master V1000, single rail, 12V up to 80A... so it should not be the PSU. Perhaps it's drawing too much current from the PCIe slot?

Definitely seems power related; the problems only began when switching from the 1080 Ti to the 2080 Ti.


----------



## GraphicsWhore

lolhaxz said:


> Picked up a Galax 2080Ti KFA2 - which appears to have come with the 380W bios.
> 
> Stable at 1.043v @ 2115MHz and 8000MHz memory (+1000) under water, so not a magical sample really.
> 
> However I am having some odd problems where what seems like if the GPU load is high enough the whole computer shuts down, if I leave the power limit at stock or lower it then it runs fine.
> 
> Forza Horizon 4 @ 4K with 8x AA, completely stock clocks but just power limit raised to 120% causes the machine to shutdown almost immediately upon load but something like SuperPosition 8K or 4K runs fine continuously with max power limit and the overclock.
> 
> Power supply is a Coolermaster V1000 PSU, single rail 12v upto 80A... so it should not be the PSU, perhaps it's drawing too much current from the PCIE slot?
> 
> Definitely seems power related, problems only began when switching from 1080Ti to 2080Ti


Which software are you using? Maybe try a different one as a quick way to rule that out.

I'd suspect the card if it's immediate crash at stock clocks but would still want to test a different PSU or the card in a different system. Would also look at what it's pulling wattage wise when the shut down occurs.


----------



## Jpmboy

lolhaxz said:


> Picked up a Galax 2080Ti KFA2 - which appears to have come with the 380W bios.
> 
> Stable at 1.043v @ 2115MHz and 8000MHz memory (+1000) under water, so not a magical sample really.
> 
> However I am having some odd problems where what seems like if the GPU load is high enough the whole computer shuts down, if I leave the power limit at stock or lower it then it runs fine.
> 
> Forza Horizon 4 @ 4K with 8x AA, completely stock clocks but just power limit raised to 120% causes the machine to shutdown almost immediately upon load but something like SuperPosition 8K or 4K runs fine continuously with max power limit and the overclock.
> 
> Power supply is a Coolermaster V1000 PSU, single rail 12v upto 80A... so it should not be the PSU, perhaps it's drawing too much current from the PCIE slot?
> 
> Definitely seems power related, problems only began when switching from 1080Ti to 2080Ti


Nice... I tried to get an HOF. The BIOS settings will limit the wattage drawn from the slot (ATX power rail and any PCIe slot aux power connector) and meet the current draw (amps) via the PCIe connectors, which are not limited to 150W each. If the board has an aux power Molex connector "for SLI/CFX", use it with these high-power cards.
What MB?


----------



## Zammin

acmilangr said:


> Hi all.
> I own the Gigabyte 2080 Ti Gaming OC edition.
> The maximum overclock I can get is +130/+700, which means 1995 MHz at about 75C.
> 
> Does this just mean that I lost the silicon lottery, or is something wrong with my card? I have also updated the BIOS to one with the power limit at 122%, but there is no change.


Sounds about average man. Me and some of the other guys with FEs on air are getting +150ish, but your card has a 30 MHz higher factory boost clock than the FE. I see pretty similar clock speeds in games at 75C, 1950-2035 MHz, although much lower in extreme stress tests like Time Spy Extreme.



jonsky77 said:


> My mistake, Heaven shows 2310mhz but gpu-z says it 2115


Yeah, Unigine software always shows a much higher number, I think because it only checks clock speed at the very start of the test, after which it immediately comes right down. Heaven shows 2350 MHz for me, whereas in reality it's about 1950 MHz on average once fully warmed up (1440p Ultra 8xMSAA, FE card).


----------



## cstkl1

Their default thermal paste is pretty good: evenly spread, and they are using something that looks like Hydronaut/MX-5, viscous and not as thick as Kryonaut. I don't see the point of changing it.


----------



## cstkl1

Hmmm... shutdown, eh?

I could run 2100 MHz at 1.05V... looks stable... but after 30-40 min of Heaven/Time Spy (set to 100 loops), around run 30 it blanks and reboots.

At 1.075V it chugs all the way, to 3 hrs no issue, so to be safe I'm running now at [email protected], which will flux between 2100-2130 MHz.


----------



## Jpmboy

Hey guys. *OCN Foldathon* this month (starts tonight, 12 ET). If there are times during the day when you are not using your 2080 Ti, set it to fold for Overclock.net. The 2080 Ti is very close to breaking 3M PPD, which hasn't been done yet.
jpm


----------



## kot0005

stdout said:


> I think a lot of people are misunderstanding what the power figures mean. The percentage of extra power you can dial into the card is a percentage of the card's original power, not a generic number. So 12% extra of 400 watts is more than 25% extra of 250 watts, even though 25 is a higher number than 12.
> 
> My 2 Strix OC cards, which have the same power limits, also behave differently. To achieve 2150, one card is dialed to +100 MHz, the other +130. (Still on air, with a 140mm fan blowing at the backside.)
> 
> What I am trying to say is, no two cards are the same, and not all cards can go 2150 even if they do have 400 watts to chew through.
> 
> I would not even start to consider a different BIOS as long as these cards perform like this. (My 1080 Ti FE cards both had the Poseidon BIOS on them; that worked flawlessly.)


There is no 2150 MHz clock at all.... it's in steps of 15, so you can either get 2145 or 2160...

2225?? are you sure it's not 2025? got any screenshots or a vid while playing a game like BF V or AC: Odyssey?



jonsky77 said:


> Also depends on what you're using to test it I think. With furmark my 2080ti FE drops to 1780/8000mhz, but it Heaven it's 2310/8000mhz, and Superposition it's 2145/8000. It's always hitting the power limit, even reducing the memory overclock can increase the core overclock I've found. My BIOS max wattage is 320W.




here we go, another user with LN2-level clocks. You must be super lucky, you could sell the card to pro OCers for heaps.
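The two points above (a power-limit slider being a percentage of each card's own base limit, and Turing boost clocks moving in 15 MHz bins) can be sketched in a few lines of Python. This is just back-of-envelope math with function names of my own choosing, not NVIDIA's actual boost algorithm:

```python
def snap_to_bin(target_mhz, bin_mhz=15):
    """Snap a requested core clock to the nearest 15 MHz boost bin."""
    return round(target_mhz / bin_mhz) * bin_mhz

def power_limit_watts(base_watts, slider_percent):
    """Convert a power-limit slider percentage into an absolute wattage."""
    return base_watts * slider_percent / 100

# 2150 MHz is not a real bin; the card can only do 2145 or 2160
print(snap_to_bin(2150))             # -> 2145

# +12% on a 400 W BIOS is a higher absolute limit than +25% on a 250 W BIOS
print(power_limit_watts(400, 112))   # -> 448.0
print(power_limit_watts(250, 125))   # -> 312.5
```

Which is why comparing slider percentages across cards with different BIOS power limits is misleading: only the resulting wattage matters.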


----------



## TK421

Then which card has the best cooler and also the -A GPU?


The FTW3 is $500 over MSRP though...


----------



## Zammin

Zammin said:


> Okay Majek here's what I got. I've attached my default curve, +150 offset and OC scanner curve for reference.
> 
> Honestly the OC scanner curve doesn't look that different to my +150 offset.
> 
> Regarding the power limit and OC scanner, this article that was posted earlier (https://www.techspot.com/article/1704-geforce-rtx-2080-overclocking/) says the following:
> 
> _One thing to note here is that we’ve set the power and temperature limit after running Nvidia Scanner. Nvidia says the Scanner only modifies the core clock, so if you change the power and temperature limits before hand, the Scanner might find different and potentially higher core clocks. However in our experience, we actually achieved lower clocks in the Scanner setting the power limit beforehand, so we’d recommend cranking it up after the Scanner is complete._





Majek said:


> Hi Zammin,
> 
> Perfect! Thanks for taking the time to check. It looks like majority of the cards end up in almost the same place when it comes to oc.


Just to update this, the OC scanner overclock was unstable. It kept crashing in games. Core clock only, no memory OC. Probably not a very useful feature if it doesn't work :/


----------



## dVeLoPe

I have a 11G-P4-2282-KR
Part Desc: EVGA GeForce RTX 2080 Ti XC BLACK EDITION GAMING,

if anyone wants it I'll sell it, PM me


----------



## vmanuelgm

Zammin said:


> Just to update this, the OC scanner overclock was unstable. It kept crashing in games. Core clock only, no memory OC. Probably not a very useful feature if it doesn't work :/



Not very trustworthy indeed!


----------



## ESRCJ

If you're looking for a waterblock for the 2080 Ti Strix, STAY AWAY FROM BITSPOWER. 

(1) I didn't think much of the fact that the waterblock is compatible with both the 2080 and 2080 Ti Strix, but after installing it, I noticed that my PCB was slightly bent with it on (see picture). 
(2) Also, the part of the block that screws into the IO bracket is a separate piece and does not align properly, so it will be slightly tilted (see picture).
(3) And finally, the WORST part about this terrible block: the cold plate is designed for a 2080, not a 2080 Ti. With two D5 pumps and two Black Ice Nemesis 480 GTXs and the fans at full speed, I was hitting 39C (ambient is 20C). Also, the card would idle at 25C. My Titan X always idled at ambient. My CPU was also idling at ambient in the same loop.

With all that said, EK is planning to release a block, so it may be worth waiting for that. Bitspower's big mistake was designing a block for both a 2080 and 2080 Ti. TU102 is a massive GPU and the cold plate should be larger.


----------



## alanthecelt

that temp is good for an RTX IMHO. my 1080ti used to run 36-42 deg C
now my zotac and alphacool block runs low 40's up to 49, running a 5 degree coolant delta in a cool room


----------



## cstkl1

gridironcpj said:


> If you're looking for a waterblock for the 2080 Ti Strix, STAY AWAY FROM BITSPOWER.
> 
> (1) I didn't think much of the fact that the waterblock is compatible with both the 2080 and 2080 Ti Strix, but after installing it, I noticed that my PCB was slightly bent with it on (see picture).
> (2) Also, the part of the block that screws into the IO bracket is a separate piece and does not align properly, so it will be slightly tilted (see picture).
> (3) And finally, the WORST part about this terrible block: the cold plate is designed for a 2080, not a 2080 Ti. With two D5 pumps and two Black Ice Nemesis 480 GTXs and the fans at full speed, I was hitting 39C (ambient is 20C). Also, the card would idle at 25C. My Titan X always idled at ambient. My CPU was also idling at ambient in the same loop.
> 
> With all that said, EK is planning to release a block, so it may be worth waiting for that. Bitspower's big mistake was designing a block for both a 2080 and 2080 Ti. TU102 is a massive GPU and the cold plate should be larger.


thats what i used to get as well. thought it was cause da card runs idle at 300mhz at a higher p-state, compared to 1080ti poseidon/titan X sli etc which all used to be at ambient temp but at 135mhz.. 
the trio bits runs idle arnd 5c-7c above ambient. 

man it was a basket case trying to bleed all the air from da block (even though bits says its an optimized design for it). cant blame them cause i saw it happen to a friend's ek block also. it is cause of the design of double vrm at both sides of da gpu.

afaik the ek strix block will be for 2080/2080 ti as well. correct me if am wrong.

da trio block was only for the ti.

i was planning to redo the tim with hydronaut instead of kryo, which i think i didnt put enough of.


----------



## ESRCJ

alanthecelt said:


> that temp is good for an RTX IMHO m y 1080ti used to run 36-42 deg c
> now my zotac and alphacool block runs low 40's up to 49 running a 5 degree coolant delta in a cool room


The idle temps tell the story for me. The cold plate on this block is simply inadequate. I might shave off another degree or two when I install my Nemesis 420 GTX back in, but still won't change the fact that the cold plate was designed for a smaller GPU. 

How bad does that PCB bend look? That really stood out to me and worried me a bit.


----------



## alanthecelt

yer i was talking load.. i will see sub-30's on desktop
but yer a bent pcb is a no no.. that's worth proper investigation

my mate installed the BP block on a different card.. didn't see a bend.. i didn't like all the random thicknesses of thermal pads though, to make up for the multiple-card fitting


----------



## cstkl1

gridironcpj said:


> alanthecelt said:
> 
> 
> 
> that temp is good for an RTX IMHO m y 1080ti used to run 36-42 deg c
> now my zotac and alphacool block runs low 40's up to 49 running a 5 degree coolant delta in a cool room
> 
> 
> 
> The idle temps tell the story for me. The cold plate on this block is simply inadequate. I might shave off another degree or two when I install my Nemesis 420 GTX back in, but still won't change the fact that the cold plate was designed for a smaller GPU.
> 
> How bad does that PCB bend look? That really stood out to me and worried me a bit.

is that a bend, or does the block look like the top part is sitting on top of the io port shielding.. 🤔


----------



## cstkl1

also to tackle dat io bend part. as mentioned before, for the FE/trio, bits is releasing a custom pci i/o bracket to be fitted and screwed to those two holes on da block

afaik the strix doesnt need it cause it has a black acetal thingy thats screwed to the i/o bracket.


----------



## GAN77

gridironcpj said:


> How bad does that PCB bend look? That really stood out to me and worried me a bit.


Did you follow the instructions?


----------



## ESRCJ

GAN77 said:


> Did you follow the instructions?


Indeed I did. I took it apart and tried again, but the PCB bend is still there.


----------



## Jpmboy

gridironcpj said:


> The idle temps tell the story for me. The cold plate on this block is simply inadequate. I might shave off another degree or two when I install my Nemesis 420 GTX back in, but still won't change the fact that the cold plate was designed for a smaller GPU.
> 
> How bad does that PCB bend look? That really stood out to me and worried me a bit.


 I have the BP block on my 2080Ti FE. No bend. Are you over-tightening the screws? Be sure you follow the pad installation instructions for the 2080Ti carefully - very different from EK, and different from the 2080 which is on the same instruction sheet.

Anywho, it keeps my 2080Ti at no more than 10C over water temp. Block works (and looks) great.


----------



## lolhaxz

lolhaxz said:


> Picked up a Galax 2080Ti KFA2 - which appears to have come with the 380W bios.
> 
> Stable at 1.043v @ 2115MHz and 8000MHz memory (+1000) under water, so not a magical sample really.
> 
> However I am having some odd problems where what seems like if the GPU load is high enough the whole computer shuts down, if I leave the power limit at stock or lower it then it runs fine.
> 
> Forza Horizon 4 @ 4K with 8x AA, completely stock clocks but just power limit raised to 120% causes the machine to shutdown almost immediately upon load but something like SuperPosition 8K or 4K runs fine continuously with max power limit and the overclock.
> 
> Power supply is a Coolermaster V1000 PSU, single rail 12v upto 80A... so it should not be the PSU, perhaps it's drawing too much current from the PCIE slot?
> 
> Definitely seems power related, problems only began when switching from 1080Ti to 2080Ti





GraphicsWhore said:


> Which software are you using? Maybe try a different one as a quick way to rule that out.
> 
> I'd suspect the card if it's immediate crash at stock clocks but would still want to test a different PSU or the card in a different system. Would also look at what it's pulling wattage wise when the shut down occurs.


Software does not make a difference - tried EVGA X1, updated drivers... the only pattern is it's far less likely to crash if the overall power requirements are artificially lowered by running a lower resolution or artificially limiting the power limit.



Jpmboy said:


> Nice... I tried to get an HOF. The bios settings will limit the wattage from the slot (ATX power rail and any PCIE slot Aux power connector), and meet the current draw (amps) via the PCIE connectors whic are not limited to 150W each. If the board has a aux power molex connector "for SLI/CFX" use it with these high power cards.
> What MB?


It's a Maximus Hero X - no AUX power connector.

------

What's interesting is if I lower my CPU overclock from 5.2GHz to 5.0GHz it takes about 2-3 minutes to restart; only if I lower my CPU to ~3.5GHz does it continue to run for any measurable period of time.... 5.2GHz is Prime95 stable for hours SmallFFT, <=60C full load @ 1.4v, but under heavy load together, CPU and GPU, it's extremely unstable.

If I run Furmark + Prime95, it instantly reboots... but I don't believe these reboots are OCP, as that should shut down the PSU and keep it shut down for some time... if you've ever tripped it and panicked because it won't turn back on again till the caps discharge you'll know what I'm talking about.... no error codes, no bios post messages.

Don't really want to roll the dice on a pricey 1200W power supply if it's not the cause.

I would be very interested to see if anyone else is capable of running Prime95 + Furmark at the same time on an overclocked system, especially with the 380W Galax bios. I'm suspicious that it is trying to drag too much current through the motherboard and it's the motherboard that is tripping.

It's worth noting that previously I had a 1080Ti @ 2037MHz @ 1050mv I think, and that never once did anything like this, and I spent many 4-5 hour long Forza Horizon 4 sessions in the last few weeks prior to swapping over to the 2080Ti.
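The suspicion above (too much current dragged through the board) can be sanity-checked against the nominal spec numbers: 75 W from the PCIe slot and 150 W per 8-pin connector. Real cards can and do pull past these, so this is only back-of-envelope math in Python, with hypothetical names:

```python
# Nominal spec power limits (PCIe CEM slot budget, 8-pin aux connector)
PCIE_SLOT_W = 75
EIGHT_PIN_W = 150

def spec_budget(n_eight_pin):
    """Nominal spec power budget for a card with n 8-pin connectors."""
    return PCIE_SLOT_W + n_eight_pin * EIGHT_PIN_W

budget = spec_budget(2)    # a 2x 8-pin card like the 2080 Ti
print(budget)              # -> 375
print(380 - budget)        # a 380 W BIOS is already 5 W past nominal spec
```

So a 380 W BIOS at full tilt has essentially no spec headroom left; any excess has to come through the slot or by over-drawing the connectors, which is consistent with a board-side trip rather than PSU OCP.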


----------



## Jpmboy

lolhaxz said:


> Software does not make a difference - tried EVGA X1, updated drivers... the only pattern is it's far less likely to crash if the overall power requirements are artificially lowered by running a lower resolution or artificially limiting the power limit.
> 
> 
> 
> It a Maxmius Hero X - no AUX power connector.
> 
> ------
> 
> What's interesting is if I lower my CPU overclock from 5.2GHz to 5.0GHz it takes about 2-3 minutes to restart, only if I lower my CPU to ~3.5GHz does it continue to run for any measurable period of time.... 5.2GHz is Prime95 stable for hours SmallFFT, <=60C full load @ 1.4v, but heavy load together, CPU and GPU its extremely unstable.
> 
> If I run Furmark + Prime95, it instantly reboots... but these reboots are not OCP I don't believe as that should shutdown the PSU and stay shutdown for some time... if you've ever tripped it and panicked because it wont turn back on again till the caps discharge you'll know what I'm talking about.... no errors codes, no bios post messages.
> 
> Don't really want to have the roll the dice on a pricey 1200W power supply if it's not the cause.
> 
> I would be very interested to see if anyone else is capable of_* running Prime95 + Furmark at the same time *_on a overclocked system, especially with the 380W Galax bios, I'm am suspicious that it is trying to drag too much current through the motherboard and its the motherboard that is tripping.
> 
> It's worth noting that previously I had a 1080Ti @ 2037MHz @ 1050mv i think and that never one did anything like this, and I spent many 4-5 hour long Forza Horizon 4 sessions in the last few weeks prior to swapping over to the 2080Ti.


 Frankly... that's just stupid. Both belong back in the jurassic "stability" tar pit. OCP will/should do an immediate restart, not stay off.

If you can OCP the power supply with the card or cpu or both, send it back under warranty.
A good PSU costs 25% of the price of the card, and a bad one can fry all connected gear.
That said, in your bios, set the core power limits to maximum and disable the board's OCP switch in bios (I think that board has one).


----------



## ESRCJ

Wow I can't believe I missed this. I apparently forgot to remove one of the booster screws, which was causing the bend and probably uneven contact with the die. I'm going to reinstall it in my system and see if the temps are improved.


----------



## GAN77

gridironcpj said:


> Wow I can't believe I missed this. I apparently forgot to remove one of the booster screws, which was causing the bend and probably uneven contact with the die. I'm going to reinstall in in my system and see if the temps are improved.


I hinted you towards the instructions ;)


----------



## ESRCJ

GAN77 said:


> I hinted to you the instructions)


Yeah I figured. I can't believe I missed one. 

So temps are still the same. As I mentioned before, I think it's the cold plate. I think it was designed with the 2080 in mind, not the Ti. CPU is idling at ambient, whereas GPU is idling 5C above ambient.


----------



## cstkl1

gridironcpj said:


> GAN77 said:
> 
> 
> 
> I hinted to you the instructions)
> 
> 
> 
> Yeah I figured. I can't believe I missed one.
> 
> So temps are still the same. As I mentioned before, I think it's the cold plate. I think it was designed with the 2080 in mind, not the Ti. CPU is idling at ambient, whereas GPU is idling 5C above ambient.

and i was about to DEMAND vincent to come here and explain himself

phew..

nice. but idle wont improve dude. 

changing rig to a thermaltake 900 with 16 noctua 14 3k rpm fans.. so da gpu will have its own 560 rad.. hoping to have time to settle in before da holidays.


----------



## GAN77

gridironcpj said:


> Yeah I figured. I can't believe I missed one.
> 
> So temps are still the same. As I mentioned before, I think it's the cold plate. I think it was designed with the 2080 in mind, not the Ti. CPU is idling at ambient, whereas GPU is idling 5C above ambient.


Which connection is in, and which is out?


----------



## ESRCJ

GAN77 said:


> Connection In? Out?


Yeah I have the bottom left as the inlet and the top right as the outlet.



cstkl1 said:


> and i was about to DEMAND vincent to come here and explain himself
> 
> phew..
> 
> nice. but idle wont improve dude.
> 
> changing rig to thermaltake 900 with 16 noctua 14 3k rpm.. so da gpu will have its on 560 rad.. hoping to have time to settle before da holidays.


It just seems a bit off. The two rads I have in my PC right now are already overkill for the GPU. I really do think it's the cold plate.


----------



## Jpmboy

gridironcpj said:


> Yeah I have the bottom left as the inlet and the top right as the outlet.
> 
> 
> 
> It just seems a bit off. The two rads I have in my PC right now are already overkill for the GPU. I really do this it's the cold plate.


What TIM? If you have it disassembled, check that the die contact milling is flat... and that the die is too. These larger dies are not always flat (and some loopy folks actually sand them down believe it or not - do not do that!). 
Idle temp is meaningless, what you need to know is the loop temperature, otherwise any temperature reading is, well... baseless.


----------



## ThrashZone

Hi,
Yeah, sanding/lapping the nickel plating


----------



## ESRCJ

Jpmboy said:


> What TIM? If you have it disassembled, check that the die contact milling is flat... and that the die is too. These larger dies are not always flat (and some loopy folks actually sand them down believe it or not - do not do that!).
> Idle temp is meaningless, what you need to know is the loop temperature, otherwise any temperature reading is, well... baseless.


I'm using Kryonaut. Are you using as much paste as you did with the EK blocks? I just used the applicator that came with the paste and covered the entire die with a thin layer of the TIM. I don't have it disassembled currently, but I will be taking it apart again later today. I'm actually still seeing a little bend in the PCB, although it's very, very small and closer to the memory facing the front of the card. I have all of the thermal pads in the correct places, although there is one that goes over a component that doesn't exist on the Strix...

Oh and I don't know my current loop temps. I don't have my temp sensor in the loop. I only brought up idle because my components typically idle at my ambient temp. Seeing the card 5C above definitely threw me off.


----------



## Jpmboy

gridironcpj said:


> I'm using Kryonaut. Are you using as much paste as you did with the EK blocks? I just used the applicator that came with the paste and covered the entire die with a thin layer of the TIM. I don't have a disassembled currently, but I will be taking it apart again later today. I'm actually still seeing a little bend in the PCB still, although it's very, very small and closer to the memory facing the front of the card.. I have all of the thermal pads in the correct places, although there is one that goes over a component that doesn't exist on the Strix...
> 
> Oh and I don't know my current loop temps. I don't have my temp sensor in the loop. I only brought up idle because my components typically idle at my ambient temp. Seeing the card 5C above definitely threw me off.


I just use a pea-size glob in the center of the die and allow the coldplate to spread it - this avoids any air being trapped in the application.
The strix is a non-reference PCB - right? so it will necessarily have a different pad set vs reference. Some pads are there more for electrical insulation than cooling, so use all shown in your instructions. The FE has a couple of pads that are certainly there for insulation.


----------



## alanthecelt

that might not be enough TIM...I would be interested to see the coverage when you pop the block
on GPU's i spread it all over, thinly.


----------



## ESRCJ

Jpmboy said:


> I just use a pea-size glob in the center of the die and allow the coldplate to spread it - this avoids any air being trapped in the application.
> The strix is a non-reference PCB - right? so it will necessarily have a different pad set vs reference. Some pads are there more for electrical insulation than cooling, so use all shown in your instructions. The FE has a couple of pads that are certainly there for insulation.


Perhaps I'm using too much thermal paste. I would say the amount I'm spreading with the applicator is maybe 2 pea sizes? Not sure though. I'm pretty much following the same application method I used for my Titan XP, which had amazing temps. And yeah, the Strix is a non-reference PCB.


----------



## GraphicsWhore

gridironcpj said:


> Perhaps I'm using too much thermal paste. I would say the amount I'm spreading with the applicator is maybe 2 pea sizes? Not sure though. I'm pretty much following the same application method I used for my Titan XP, which had amazing temps. And yeah, the Strix is a non-reference PCB.


Not sure it would make a giant difference but I'm still trying to figure out what the issue is. You said you hit 39C on load? How is that a problem?


----------



## Jpmboy

alanthecelt said:


> that might not be enough TIM...I would be interested to see the coverage when you pop the block
> on GPU's i spread it all over, thinly.


temps do the talking. 


gridironcpj said:


> Perhaps I'm using too much thermal paste. I would say the amount I'm spreading with the applicator is maybe 2 pea sizes? Not sure though. I'm pretty much following the same application method I used for my Titan XP, which had amazing temps. And yeah, the Strix is a non-reference PCB.


 it's pretty hard to use too much on a GPU since any excess will squeeze out. The main thing with spreading TIM is that when the block lays on the die, it can trap air and create "exclusion" zones. Think of making a TIM circle on the die and then laying the block on top. The pea method cannot suffer this problem. A pea is actually a lot; with a well-fitted block, a "grain of rice" size application is enough.
Too bad he's closed up shop, but SkinnyLabs did some very good studies of mount quality and tim application some years ago. May still be available in page archives....


----------



## Toxsick

Has anyone noticed small freezing across all games?

The MSI Afterburner OSD overlay clearly shows something is wrong, and it's not my system.

The frame time graph shows a clear spike of 0.2ms (a mini freeze that is visible to my eyes) that jumps up occasionally in all games I've played so far.

Reinstalled everything from the OS to drivers. Everything is at stock.

https://forums.geforce.com/default/...-2080-occasional-stuttering-across-all-games/


----------



## ESRCJ

GraphicsWhore said:


> Not sure it would make a giant difference but I'm still trying to figure out what the issue is. You said you hit 39C on load? How is that a problem?


That's with all rad fans at 100 percent. The two rads currently in the rig are Black Ince Nemesis 480 GTXs, which aren't slouches. With the fans set to 30 percent, the GPU peaks at 50C in the Timespy stress test.



Jpmboy said:


> temps do the talking.
> 
> it's pretty hard to use too much on a GPU since any excess will squeeze out. The main thing with spreading tim is that when the block lays on the die, it can trap air and create "exclusion" zones. Think of making a TIM circle on the die and then laying the block on top. The pea method cannot suffer this problem. a pea is actually a lot. With a well fitted block, a "grain of rice" size application is enough.
> Too bad he's closed up shop, but SkinnyLabs did some very good studies of mount quality and tim application some years ago. May still be available in page archives....


I'll take note of what I see once I remove the block again. Thanks for the suggestions and help.


----------



## ESRCJ

Alright, so here is a pic of the TIM after the block was taken off. Note that the little bit that's on the PCB was not there until the block came off (it spilled as I was removing the block). Any comments on the TIM? From my inspection, it looks like I may be using more than I need, although I'm not sure if that explains the "high" temps (they're high for me, dammit).


----------



## Fitzcaraldo

gridironcpj said:


> Alright so here is a pic of the TIM after the block was taken off. Note that the little bit that''s on the PCB was not there until the block came off (it spilled as I was removing the block). Any comments on the TIM? From my inspection, it looks like I may be using more than I need, although I'm not sure if that explains the "high" temps (they're high for me dammit).


You can never use too much TIM. 
You can always run into issues with contact.

Your temps are perfectly fine for a 2080 Ti. That card runs hot like a mf.


----------



## Jpmboy

gridironcpj said:


> Alright so here is a pic of the TIM after the block was taken off. Note that the little bit that''s on the PCB was not there until the block came off (it spilled as I was removing the block). Any comments on the TIM? From my inspection, it looks like I may be using more than I need, although I'm not sure if that explains the "high" temps (they're high for me dammit).


 looks fine... a bit too much for my taste, but that's not gonna cause it to run hot as you say. erm... what exactly is the max temp recorded during a Heaven or some other standard benchmark run?


you got those pads right? Looks like it's missing some small ones... fattest pad on the bottom?


----------



## ESRCJ

Jpmboy said:


> looks fine... I bit too much for my tastes, but that's not gonna cause it to run hot as y9ou say. erm... what exactly is the max temp recorded during a heaven or some other std benchmark run?
> 
> 
> you got those pads right? Looks like it's missing some small ones... fatest pad on the bottom?


I ran the Timespy stress test with my fans at 30 percent and the max temp was 50C. Ambient is currently 20C. While I know it isn't an apples-to-apples comparison, my Titan XP in the same loop never passed 40C in the summer, same stress test.

There are two small ones on the card, whereas the rest are on the block. It's hard to tell since the pads are the same color as some of the components lol. I still need to put that pad I mentioned earlier on the imaginary component Bitspower thinks is on the board.

Oh and all pads are 1mm. The kit only came with that thickness. The instructions only mention 1mm as well.


----------



## Jpmboy

gridironcpj said:


> I ran the Timespy stress test with my fans at 30 percent and the max temp was 50C. Ambient is currently 20C. While I know it isn't an apples-to-apples comparison, my Titan XP in the same loop never passed 40C in the summer, same stress test.
> 
> There are two small ones on the card, whereas the rest are on the block. It's hard to tell since the pads are the same color as some of the components lol. I still need to put that pad I mentioned earlier on the imaginary component Bitspower thinks is on the board.
> 
> Oh and all pads are 1mm. The kit only came with that thickness. The instructions only mention 1mm as well.


ah, so I think you got a different block than mine (or @Zurv). Maybe a newer milling pattern.


----------



## ESRCJ

Jpmboy said:


> ah, so I think you got a different block than mine (or @Zurv). Maybe a newer milling pattern.


Yeah it's a block designed specifically for the Strix PCB (2080 and 2080 Ti).

Here is Steve from GN using a hybrid mod. Keep in mind I have SIGNIFICANTLY more rad space than that little AIO. The limiter on my end is clearly the block. I honestly can't recommend this block to anyone based off of my experience. I'm almost tempted to return the card and get an FE so I can throw a block on it and call it a day. I'm sick of putting my build on hold for one reason or another at this point.


----------



## cstkl1

ThrashZone said:


> Hi,
> Yeah, sanding/lapping the nickel plating


bitspower dude. they have been known to screw up contact on monoblocks, so.. no surprise


----------



## GosuPl

I'm working on a new build of my PC, and I have a question about thermal pads ;-)
I applied a few more than the EK manual says; the green ones are from the original cooler. I put them on both the backplate and the PCB, and the block and backplate make even contact. For years I have been adding more thermal pads than the instructions recommend, and I wonder if that rule also applies to Turing.
What do you think? I covered several components that would otherwise be bare, and the block makes contact with them.

https://scontent-frx5-1.xx.fbcdn.ne...=6d1940a0a64d57c72b961b1893616564&oe=5C74CE5C


https://scontent-frx5-1.xx.fbcdn.ne...=0aab1d67eb1d0802d300c073ff648e55&oe=5C6FCF16

https://scontent-frx5-1.xx.fbcdn.ne...=ae2eb02447584bad63e21787a2290c7a&oe=5C7ED064


----------



## cstkl1

i get that with nvenc streaming even to facebook when i was on air.


----------



## Jpmboy

cstkl1 said:


> bitspower dude. they have been known to screw up contact issues on monoblocks so.. no suprise


they all can have issues. I have 3 BP blocks running right now (1080, titan V, 2080Ti), one Swift Komodo (290X), and a koolance (295x2) and 2 EK (TXps.. also had EK on my OG Titans, Maxwell Titans, and Titan Pascals). If he got a bad design from BP for the strix... SEND IT BACK.


----------



## marcolisi

Do you guys know how to turn off the colored lights on the msi 2080 ti trio?


----------



## dante`afk

gridironcpj said:


> I ran the Timespy stress test with my fans at 30 percent and the max temp was 50C. Ambient is currently 20C. While I know it isn't an apples-to-apples comparison, my Titan XP in the same loop never passed 40C in the summer, same stress test.
> 
> There are two small ones on the card, whereas the rest are on the block. It's hard to tell since the pads are the same color as some of the components lol. I still need to put that pad I mentioned earlier on the imaginary component Bitspower thinks is on the board.
> 
> Oh and all pads are 1mm. The kit only came with that thickness. The instructions only mention 1mm as well.


yea something seems off with your block. i'm at 23c ambient, gpu at idle is at 29 and under load between 38-39c with the same block. (idle water temp 25, load 27-28c). so all in all about 5c delta in idle and 10-11 on load.

or you used the wrong pads at the wrong location? check the manual again, the thicker pads are supposed to go on certain areas.
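The numbers above illustrate the comparison that actually matters for judging a block, echoing the earlier point in the thread: die temperature minus loop water temperature, not die minus room ambient. A trivial sketch with a hypothetical helper, plugging in the reported values:

```python
def die_to_water_delta(die_c, water_c):
    """Block performance metric: die temp minus loop water temp."""
    return die_c - water_c

# reported readings: idle 29C die / 25C water, load 39C die / 28C water
print(die_to_water_delta(29, 25))    # idle -> 4
print(die_to_water_delta(39, 28))    # load -> 11
```

A die-to-water delta of roughly 10C under load is what a healthy full-cover block tends to look like; a much larger delta at the same water temp points at the block mount or cold plate, not the radiators.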


----------



## kot0005

Jpmboy said:


> I just use a pea-size glob in the center of the die and allow the coldplate to spread it - this avoids any air being trapped in the application.
> The strix is a non-reference PCB - right? so it will necessarily have a different pad set vs reference. Some pads are there more for electrical insulation than cooling, so use all shown in your instructions. The FE has a couple of pads that are certainly there for insulation.


a pea size on an 800mm² die is not going to cover it all lol.


----------



## ESRCJ

Jpmboy said:


> cstkl1 said:
> 
> 
> 
> bitspower dude. they have been known to screw up contact issues on monoblocks so.. no suprise
> 
> 
> 
> they all can have issues. I have 3 BP blocks running right now (1080, titan V, 2080Ti), one Swift Komodo (290X), and a koolance (295x2) and 2 EK (TXps.. also had EK on my OG Titans, Maxwell Titans, and Titan Pascals). If he got a bad design from BP for the strix... SEND IT BACK.

I'm definitely sending it back. And yeah all brands can definitely have issues. The Rampage VI monoblock is a perfect example of a lackluster EK product (such a shame since it looks so nice).



dante`afk said:


> gridironcpj said:
> 
> 
> 
> I ran the Timespy stress test with my fans at 30 percent and the max temp was 50C. Ambient is currently 20C. While I know it isn't an apples-to-apples comparison, my Titan XP in the same loop never passed 40C in the summer, same stress test.
> 
> There are two small ones on the card, whereas the rest are on the block. It's hard to tell since the pads are the same color as some of the components lol. I still need to put that pad I mentioned earlier on the imaginary component Bitspower thinks is on the board.
> 
> Oh and all pads are 1mm. The kit only came with that thickness. The instructions only mention 1mm as well.
> 
> 
> 
> Yeah, something seems off with your block. I'm at 23C ambient; the GPU idles at 29C and sits between 38-39C under load with the same block (idle water temp 25C, load 27-28C). So all in all, about a 5C delta at idle and 10-11C under load.
> 
> Or did you use the wrong pads in the wrong locations? Check the manual again; the thicker pads are supposed to go on certain areas.
Click to expand...

Do you have a Strix and the Bitspower Strix block? The Strix block only comes with 1mm pads and the instructions only show those included as well, so they didn't mess up on the pads according to the product details.


----------



## bsch3r

gridironcpj said:


> I'm definitely sending it back. And yeah all brands can definitely have issues. The Rampage VI monoblock is a perfect example of a lackluster EK product (such a shame since it looks so nice).
> 
> 
> 
> Do you have a Strix and the Bitspower Strix block? The Strix block only comes with 1mm pads and the instructions only show those included as well, so they didn't mess up on the pads according to the product details.


Thermaltake now has a block for the Strix 2080 Ti OC, maybe try that one.


----------



## ESRCJ

bsch3r said:


> gridironcpj said:
> 
> 
> 
> I'm definitely sending it back. And yeah all brands can definitely have issues. The Rampage VI monoblock is a perfect example of a lackluster EK product (such a shame since it looks so nice).
> 
> 
> 
> Do you have a Strix and the Bitspower Strix block? The Strix block only comes with 1mm pads and the instructions only show those included as well, so they didn't mess up on the pads according to the product details.
> 
> 
> 
> Thermaltake now has a block for the Strix 2080 Ti OC, maybe try that one.
Click to expand...

Nice find. Thanks for the suggestion. Has anyone here ever used a TT block?


----------



## Lownage

marcolisi said:


> Do you guys know how to turn off the colored lights on the msi 2080 ti trio?


Try this: https://www.msi.com/Landing/mystic-light-Graphics-cards#sync


----------



## pfinch

Hey guys,

How do you overclock the RTX 2080 Ti, exactly?
Is manually adjusting the voltage curve dead (like on the 1080 Ti)?

My best 3D scores so far came from:
- Precision X1
- maxed-out voltage/temp targets
- running OC Scanner
- +875 MHz on the memory

After that (OC Scanner):
Does changing the core clock afterwards make any difference? Or do I have to reset all settings first?

btw, setting the GPU voltage slider to 100% makes my OC worse (no extra core clock, hitting the power target early).


----------



## vmanuelgm

GosuPl said:


> I'm working on a new build of my PC and have a question about thermal pads ;-)
> I used a few more than the EK manual says; the green ones are from the original cooler. I put pads on both the backplate and the PCB, and the block and backplate make contact equally. For years I've been adding more thermal pads than the instructions recommend, and I wonder whether that rule also applies to Turing.
> What do you think? I covered several components that would otherwise be bare, and the block makes contact with them.
> 
> https://scontent-frx5-1.xx.fbcdn.ne...=6d1940a0a64d57c72b961b1893616564&oe=5C74CE5C
> 
> 
> https://scontent-frx5-1.xx.fbcdn.ne...=0aab1d67eb1d0802d300c073ff648e55&oe=5C6FCF16
> 
> https://scontent-frx5-1.xx.fbcdn.ne...=ae2eb02447584bad63e21787a2290c7a&oe=5C7ED064



You shouldn't do that, because you can alter the height of the block and induce bad contact. It probably won't happen since it's a minimal change, but it could.


----------



## kot0005

Nvidia is bundling BFV with RTX cards now. RIP early adopters, including me.


----------



## mouacyk

xer0h0ur said:


> Tensor cores have been on dies since Volta. Sounds to me like you're just speculating.


https://www.overclock.net/forum/27722180-post75.html


----------



## Jpmboy

kot0005 said:


> pea size on a 800mm2 die is not going to cover it all lol.


Yeah.. lol. It's not a question; it's already been done.
You've not seen a large die until you pop the top on a Titan V... pea-size has been working fine on that since it launched.
You guys just goober TIM on there in waaay too much excess.


----------



## vmanuelgm

kot0005 said:


> Nvidia is bundling BFV with RTX cards now. RIP early adopters, including me.



You can buy a new one to put in SLI, xDDDD


----------



## Scrimstar

Hulk1988 said:


> Here are my babies
> 
> 2080 TI STRIX OC vs FTW3


Any updates my guy

------------------------------------

Or if anyone else has an input, I am deciding between these two GPU

I assume EVGA FTW3 is a bit cooler, but might not leave much room for air flow, if I choose to NVLink two cards down the line


My mobo:


----------



## callonryan

pfinch said:


> Hey guys,
> 
> how do you overclock the RTX 2080TI exactly?!
> Adjusting manual Voltagecurve is dead?! (like 1080 ti)
> 
> My best Results 3d Scores are so far:
> - Precision X1
> - maxed out Voltage/Temp Target
> - running OC Scanner
> - +875 MHz Ram
> 
> After that (OC Scanner):
> Is there any difference after that to change the Core Clock? Or do I have to reset all settings first?
> 
> btw. adjusting GPU-Voltage Slider to 100% makes OC worse for me. (No more Core Clock, hitting PowerTarget early)


Before running OC Scanner, put everything to default except for your fan (personal preference). I've had my hands on 5 different 2080 Tis and the OC Scanner is very accurate. After you receive your OC, you can input that, max out the power limit and voltage (placebo), and add your +875. Make sure you stress test these values before trying to game, or GPU-intensive games will make you look foolish.


----------



## kot0005

This game doesn't even use my 2080 Ti.. still looks good.


----------



## cstkl1

Hohoho.. a crazy number of BFV codes now..

Got one from Asus.

Waiting to get two from MSI.


----------



## cstkl1

kot0005 said:


> Nvidia is bundling BFV with RTX cards now. RIP early adopters, including me.


Nah, you can still get it..

I just got one code from Asus.. even though I got the card two weeks ago...

Now just waiting for the MSI page to be up...


----------



## cstkl1

Scrimstar said:


> Any updates my guy
> 
> ------------------------------------
> 
> Or if anyone else has an input, I am deciding between these two GPU
> 
> I assume EVGA FTW3 is a bit cooler, but might not leave much room for air flow, if I choose to NVLink two cards down the line
> 
> 
> My mobo:


I would go with Asus for air..

EVGA FTW3 on water:
https://www.evga.com/products/product.aspx?pn=400-HC-1489-B1


----------



## arrow0309

Guys, I'm finally installing the EK Vector block and bp on my 2080ti FE this evening.
I decided not to put any pads on the chokes and use premium Gelid 12w/mk pads on the memory and on the both mosfet stripes.
Also they'll be the stock pads on the rear, under the backplate.
Am I doing right, or not? 

Sent from my ONEPLUS A5010 using Tapatalk


----------



## cstkl1

MSI seems to be *****ing about the date of purchase..

Asus had no issue; got it in like a minute.


----------



## Zammin

cstkl1 said:


> nah u can still get it..
> 
> i just got one code from asus.. eventhough got da card two week ago...
> 
> now just waiting for msi page to be up...


How does one get a code if they purchased prior to the promotion? I purchased from Nvidia a few weeks ago.

Otherwise, if you have more codes than you know what to do with and want to share, let me know lol


----------



## ESRCJ

cstkl1 said:


> nah u can still get it..
> 
> i just got one code from asus.. eventhough got da card two week ago...
> 
> now just waiting for msi page to be up...


Who did you contact from Asus to get the code? I received my Strix a few weeks ago.


----------



## cstkl1

Zammin said:


> cstkl1 said:
> 
> 
> 
> nah u can still get it..
> 
> i just got one code from asus.. eventhough got da card two week ago...
> 
> now just waiting for msi page to be up...
> 
> 
> 
> How does one get a code if they purchased prior to the promotion? I purchased from Nvidia a few weeks ago.
> 
> Otherwise, if you have more codes than you know what to do with and want to share, let me know lol
Click to expand...

well msi seems to be not responding. asus was no problem.. 




gridironcpj said:


> cstkl1 said:
> 
> 
> 
> nah u can still get it..
> 
> i just got one code from asus.. eventhough got da card two week ago...
> 
> now just waiting for msi page to be up...
> 
> 
> 
> Who did you contact from Asus to get the code? I received my Strix a few weeks ago.
Click to expand...

Just follow the Asus thingy.
Same here, got my Strix 2-3 weeks ago.

Asus
https://www.asus.com/event/vga/BattlefieldV/

MSI
https://www.msi.com/Promotion/game-on-game-rtx

Gigabyte
https://www.aorus.com/event-detail.php?i=838

Zotac
https://www.zotac.com/my/page/redeem-Battlefield-V

But seriously, this isn't the first time. In the past Asus was always fair, just as they are now.


----------



## ESRCJ

cstkl1 said:


> well msi seems to be not responding. asus was no problem..
> 
> 
> just follow da asus thingy
> same here got my strix 2-3 weeks ago
> 
> asus
> https://www.asus.com/event/vga/BattlefieldV/
> 
> msi
> https://www.msi.com/Promotion/game-on-game-rtx
> 
> gigabyte
> https://www.aorus.com/event-detail.php?i=838
> 
> zotac
> https://www.zotac.com/my/page/redeem-Battlefield-V
> 
> but seriously not da first time. in the past asus was always fair just as they are now.


So you filled out Asus's redemption form and it worked? I got an error. Or did you contact them and get it that way?

Also, if you have the Strix, are you also using the Bitspower waterblock for it?


----------



## dexth77

Unfortunately, Zotac would not entertain a purchase outside the promotional window. Would any kind soul with an extra code share one? Not going to buy it, as it's not my kind of game, but I'm dying to test out RTX 😓


----------



## cstkl1

dexth77 said:


> Unfortunately zotac would not entertain purchase out of promotional window. Any kind soul with extra code would share? Not going to buy as not my kind of game but dying to test out RTX 😓


Don't give up...

I'm now trying to get an MSI FAE to help me out as a last resort..
MSI Taiwan has a principal office nearby.. might just go banging on it. I already overpaid for the card by USD 250 (they overprice here so the retailers get a 20% margin and the distributors keep 10-20% on top of MSRP).. which is pretty dumb, since Asus/MSI control the incoming cards and hand them to a distro (whose role seems to be paying Asus/MSI on time and handling all the retailers).. a weird hierarchy.

The second option is to get your retailer/distro to pressure them. That has worked before..


----------



## cstkl1

gridironcpj said:


> So you filled out Asus's redemption form and it worked? I got an error. Or did you contact them and get it that way?
> 
> Also, if you have the Strix, are you also using the Bitspower waterblock for it?


Strix: selling it...

Crap, the links I posted were all for APAC.

Sorry folks...

ASUS: I just used their site... didn't even have to log in to the ROG forum.. then got an email telling me to claim it on GFE with the redeem code, which led to the browser asking for my Origin account login.. and DONE.

MSI/Gigabyte seem to require registering in their whatever club and redeeming it under the promotion for the card.


----------



## Zammin

cstkl1 said:


> strix selling it...
> 
> crap the link i posted all was for APAC.
> 
> sorry folks...
> 
> ASUS i just used their site... didnt even have to login to ROG forum.. then got a email telling me to claim it on GFE with da redeem code which led me to browser asking me for origin account login.. and DONE.
> 
> MSI/Giga seems to be on registration into their whatever club and redeem it under promotion on da card.


If only claiming the ASUS Black Ops 4 codes was as easy as BFV.. I'm supposed to be eligible for one after buying and registering my Maximus XI Formula. I registered through the promotion page on their website and never got an email, and I'm having to get multiple support staff to try and resolve it.. They told me I had to actually install a program called "ASUS COD Promotion" and run it to get my email; the online registration form for the promotion mentions nothing about this, and when I run the application it just says I don't have any eligible ROG products lol *facepalm*

There's an ongoing thread on the ROG forums of angry people who aren't receiving their codes and getting unhelpful support..


----------



## CallsignVega

The Zotac site not only says Nov 20th until Jan, but cards sold in the US aren't even eligible. What the heck.


----------



## kot0005

cstkl1 said:


> well msi seems to be not responding. asus was no problem..
> 
> 
> just follow da asus thingy
> same here got my strix 2-3 weeks ago
> 
> asus
> https://www.asus.com/event/vga/BattlefieldV/
> 
> msi
> https://www.msi.com/Promotion/game-on-game-rtx
> 
> gigabyte
> https://www.aorus.com/event-detail.php?i=838
> 
> zotac
> https://www.zotac.com/my/page/redeem-Battlefield-V
> 
> but seriously not da first time. in the past asus was always fair just as they are now.


Can't get it with EVGA, because the claim button only appears if you register a card with 20/11 as the date.


----------



## dexth77

cstkl1 said:


> dont give up...
> 
> i am now trying to get a MSI fae to help me out as last resort..
> MSI taiwan has a principle office nearby.. might just go banging on it. already overpaid the card by USD250 ( they overpriced here to allow the retailers to get 20% profit.. so the distributors get to keep 10-20% profit on da the mrsp).. which is pretty dumb since asus/msi controls are the incoming cards.. distributes it to a distro ( whose role seems to be paying Asus/MSI on time and handle all the retailers) .. a weird hierarchy.
> 
> second option is get your retailer/distro to pressure them. worked before..


OK! Trying my luck; submitted anyway. It stated it will take 5-6 days to process. Will update if successful.


----------



## ESRCJ

I just got an email from EK stating that their 2080 Ti Strix block will be released before the end of the year.

I can't seem to get the Asus BFV code redemption to work. Every time I submit the information, I get an error saying the purchase date is not within the window. I uploaded a screenshot of my invoice. The only way I can see this working is photoshopping a fake purchase date, but I don't want to be that cheap...


----------



## Spiriva

Apparently, if you bought a card from Nvidia before the 20th, you will absolutely not get a BF V code. Nice, Nvidia.... **** over the people who bought your cards early.


----------



## mafia97160

Hi, a quick question about overclocking: given that the card has several kinds of cores, is it possible to overclock the RT cores on their own?


----------



## Jpmboy

Spiriva said:


> Apparently if you bought a card from Nvidia before 20th, you will absolutly not get a BF V code. Nice Nvidia....**** over the people who bought your cards early.


pretty ridiculous. about normal for nvidia tho.


----------



## Pepillo

Hello.

Does anyone know the TDP of the Gigabyte 2080 Ti Aorus Extreme?

There is a BIOS to download, the November F3, but it doesn't give the values.

Thanks


----------



## Jpmboy

Pepillo said:


> Hello.
> 
> Does anyone know the TDP of the Gigabyte 2080 TI Aourus Extreme?
> 
> There are to download a BIOS, the November F3, but it does not give the values.
> 
> Thanks


GPUZ> ADV Tab> Nvidia Bios in the dropdown list. The card's/bios' base and max power limits are listed right there.


----------



## mr2cam

Is the Galax BIOS still the best one to flash to? My FE on water hasn't shown any signs of artifacting, and I'm upgrading to an ultrawide monitor, so I figure more is better, even if it's by a very small %.


----------



## Pepillo

Jpmboy said:


> GPUZ> ADV Tab> Nvidia Bios in the dropdown list. The card's/bios' base and max power limits are listed right there.


I don't have the card yet; it's the one I'll ask for in exchange for the RMA of the Ventus OC that died, the second card to die on me (the first was an FE). The FE lasted a week and the MSI five days. No comment on my bad luck............


----------



## Spiriva

mr2cam said:


> Is the Galax bios still the best one to flash to? My FE on water hasn't showed any signs of artifacting, and I am upgrading to an ultrawide monitor, so figure more is better, even if it is a very small %.


For me, the Galax 380W and the EVGA FTW3 373W BIOS are the same. Atm I'm running the FTW3 BIOS, but that's because I tried it last of the two and didn't see any reason to switch back.
Either of the two will work very well.


----------



## mr2cam

Spiriva said:


> For me the Galax 380W or the Evga FTW3 373W bios is the same. Atm im ruinning the FTW3 bios, but thats cause i tried that the last of them two and didnt see any reason to switch back.
> Either of them two will work very well.


It's weird, I'm only seeing one Galax BIOS and it's 400W; also, there are 2 different FTW3 BIOSes on TechPowerUp.


----------



## Spiriva

mr2cam said:


> It's weird im only seeing one Galax bios and its 400W, also there are 2 different FTW3 bios's on techpowerup


https://www.techpowerup.com/vgabios/204932/evga-rtx2080ti-11264-181012

I use this one; I just got the one with the most recent build date.
The Galax BIOS is in the #1 post of this thread; it's not on the TechPowerUp page at all.


Galax 380W: https://onedrive.live.com/?authkey=...54D70F!117&parId=D7731B0FB754D70F!114&o=OneUp


----------



## mr2cam

Spiriva said:


> https://www.techpowerup.com/vgabios/204932/evga-rtx2080ti-11264-181012
> 
> I use this one, just got the one with the most recent build date.
> The Galax bios is in #1 post in this thread, and its not on techpowerup page at all.
> 
> 
> Galax 380W: https://onedrive.live.com/?authkey=...54D70F!117&parId=D7731B0FB754D70F!114&o=OneUp


Awesome thank you!


----------



## kx11

With the 2080 Ti, watercooling seemed too much for my case and my E-ATX mobo, so I went another way.

I'm surprised how quiet this GPU is, even under load.


----------



## Garrett1974NL

mr2cam said:


> Awesome thank you!


Flash it and let us know if you have a stable GPU core clock please 
I'm on the fence... so I need someone to convince me


----------



## lowrider_05

The Watercool Heatkiller IV Block looks just Amazing!


----------



## cerealkeller

So the Galax BIOS supports more power than the stock FE BIOS? The FE goes up to 126%, so I'm guessing the Galax BIOS at 126% allows a higher power draw?
Also, is there a tutorial for how to do the BIOS flashing? It's been several years since I've had to do it.


----------



## GraphicsWhore

cerealkeller said:


> So the Galax BIOS supports more power than the stock FE BIOS? The FE goes up to 126%, so I'm guess the Galax BIOS 126% is a higher power draw?
> Also, is there a tutorial for how to do the BIOS flashing? It's been several years since I've had to do it.


Yes, quite a bit more: 380W for the Galax.

To flash, just put the BIOS in the same folder as nvflash, open a command prompt, change directory to the nvflash directory, and run the following commands:

Code:

nvflash64 --protectoff
nvflash64 -6 BIOSFILENAME.rom
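For anyone who wants a guard rail, here's the same sequence as a small script sketch with a backup step added first. This is a POSIX-shell illustration, not an official procedure: the BIOS filename is a placeholder, it assumes nvflash64 sits in the current directory, and flag spellings vary between nvflash builds (check `nvflash64 --help` on your copy; on Windows, run the equivalent commands from an elevated command prompt). With DRY_RUN=1, the default, it only prints the commands so you can review them before flashing for real.

```shell
#!/bin/sh
# Sketch of the cross-flash sequence described above, with a backup step.
# DRY_RUN=1 (the default) prints each command instead of running it.
DRY_RUN=${DRY_RUN:-1}
BIOS=${1:-new_bios.rom}   # placeholder filename

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "WOULD RUN: $*"
    else
        "$@"
    fi
}

run ./nvflash64 --save original_backup.rom  # back up the stock BIOS first
run ./nvflash64 --protectoff                # disable the EEPROM write protect
run ./nvflash64 -6 "$BIOS"                  # -6 overrides the subsystem ID mismatch check
```

Run it once as-is to see the commands, then rerun with DRY_RUN=0 and your BIOS filename once you're sure.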


----------



## Jpmboy

save your original bios before flashing


----------



## cerealkeller

Thanks for the instructions, much appreciated


----------



## Zammin

kx11 said:


> with 2080ti watercooling seems too much for my case and my EATX mobo so i went another way
> 
> i'm surprised how quiet this GPU is even under load


That Thor PSU looks killer. I'd buy one if it was visible in my case haha. I reckon a lot of Phanteks Evolve X owners will be getting them since they have the cutout in the PSU basement that will let the display be shown through the case window.



lowrider_05 said:


> The Watercool Heatkiller IV Block looks just Amazing!


That block looks beautiful. Hows the performance? If you have a water temp sensor would you happen to know the max temperature delta between the GPU and water temp during extended full load?


----------



## cerealkeller

GraphicsWhore said:


> Yes quite a bit more. 380w for Galax.
> 
> To flash just put BIOS in same folder as nvflash, open command prompt, change directory to nvflash directory and run following commands
> 
> nvflash64 --protectoff
> nvflash64 -6 BIOSFILENAME.rom


I also have two 1080 Ti's in my system. Will I have to do something different to prevent them from all being flashed?
EDIT:
Never mind, I figured it out. I remembered that the last time I saw instructions for flashing a BIOS was when I had a Kepler Titan, so I went to the Titan owners' thread and found the instructions. For anyone else looking, you can find them at this link on the first page; you have to click to drop down the instructions. For me, following the "fix a bad flash" instructions applied to my situation with multiple cards in the system. The flash was successful, and I'll try to remember to post my results. I'm pretty certain my max clock speed is 2070, but what I really want is to be able to maintain that clock without power throttling.

https://www.overclock.net/forum/69-nvidia/1363440-official-nvidia-geforce-gtx-titan-owners-club.html
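On the multi-card question specifically: nvflash can also target a single adapter by its index, which avoids touching the other cards at all. A hedged sketch of that approach follows; the index value and filename are placeholders (take the real index from the --list output on your own system, and confirm the flag spellings with --help on your nvflash build). The commands are only built and printed here, not executed, so they can be reviewed first.

```shell
#!/bin/sh
# With several NVIDIA cards installed, flash by adapter index so only the
# intended card is touched. Commands are assembled and printed for review,
# not executed; run them by hand once the index is confirmed via --list.
INDEX=0             # placeholder: read the 2080 Ti's index from --list output
BIOS=new_bios.rom   # placeholder filename

LIST_CMD="./nvflash64 --list"
SAVE_CMD="./nvflash64 --index=$INDEX --save backup_2080ti.rom"
UNLOCK_CMD="./nvflash64 --index=$INDEX --protectoff"
FLASH_CMD="./nvflash64 --index=$INDEX -6 $BIOS"

printf '%s\n' "$LIST_CMD" "$SAVE_CMD" "$UNLOCK_CMD" "$FLASH_CMD"
```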


EDIT:
So I had a chance to play some Fallout VR. The game ran much more smoothly without the clock speed bouncing around all over the damn place. I was able to maintain my clock speed of 2070 on the GPU and 16.6 (+1300) on the VRAM without any throttling whatsoever. Power typically hung around 120% with a max of 124%. My core temp was 1C higher than normal at 39C. I have an EK Vector block and backplate on my 2080 Ti. That's the only game I've tried so far; I have settings maxed with 2.5x SS on the Far Harbor DLC. So I'm extremely pleased with these results. I mean, I would still prefer even more power overhead. I've pushed older reference cards much, much further than 380 watts, so I'm not the least bit concerned. I would suggest anyone wanting to do this either run their fans very high (or maxed) or get a waterblock with a decent loop. I wouldn't be fazed to try a BIOS with a 450W power limit. For anyone wanting to do this, just remember to use GPU-Z to back up your factory BIOS before flashing.
I had a game earlier that was actually power throttling the clock speed to under 1900 MHz, which I was disgusted to see. I plan to check that out tomorrow to see how high clocks will hold.
I did some benchmarking with Fire Strike. I can't get Time Spy to work for whatever reason. I think it may be because of my unusual hardware configuration. My scores didn't see huge improvements, but I did get something measurable.
9064 in Fire Strike Ultra; my previous high was 8811, an increase of 2.9%.
16413 in Fire Strike Extreme; my previous high was 16063, an increase of 2.2%.
I did still see clocks power-throttling to 2055 in a few spots, down from 2070, with a max power draw of 124%. So I could still use some more overhead. But that is an improvement; I was seeing clocks throttle to 1975 during the first graphics test with the factory BIOS.
I ended up having some stability issues with the voltage not rising to a stable level, so I went in and set a custom voltage curve, which improved stability and had the added effect of improving my performance even further.
My thanks to everyone who has worked on this to make this possible for all of us.


----------



## lowrider_05

Zammin said:


> That Thor PSU looks killer. I'd buy one if it was visible in my case haha. I reckon a lot of Phanteks Evolve X owners will be getting them since they have the cutout in the PSU basement that will let the display be shown through the case window.
> 
> 
> 
> That block looks beautiful. Hows the performance? If you have a water temp sensor would you happen to know the max temperature delta between the GPU and water temp during extended full load?


Sorry, no temp sensors, and I have a combined CPU+GPU loop, but I can say that the GPU (with the 380W BIOS) reaches a max temp of 58C (22C ambient) when running Superposition 4K Optimized for an hour.
My fans run at 350-650 RPM (completely silent), and with the Alphacool block I had before, temps got to about 67C, so the Heatkiller block is much better in this regard.


----------



## Zammin

lowrider_05 said:


> Sorry no Tempsensors and i have a combined CPU+GPU Loop but i can say that the GPU (with the 380W BIOS) gets to a max temp of 58C (22C Ambient) when running 1 hour superposition 4k Optimized.
> My Fans run at 350-650 RPM (completely silent) and with the Alphacool Block i had before the temps got to about 67C, so the Heatkiller block is much better in this regard.


Thanks for the info man. Knowing your water temp can be really useful for determining how well different components are performing. For example the temp difference between the water and the GPU will tell you how well the block is performing. Of course I expect the Heatkiller block to be a top performer, I was just wondering since it's so new and I couldn't find any performance reviews. 

Did you have an alphacool block on your 2080Ti before? Which one?

I've got an Alphacool full cover plexi block and back plate which I haven't fit to my GPU yet.
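To make block comparisons like this concrete: the delta is just GPU core temp minus coolant temp at the same moment, and dividing by board power gives a rough per-watt figure that lets you compare runs at different power draws. A tiny sketch, with made-up placeholder numbers (swap in your own readings):

```shell
#!/bin/sh
# Block comparison sketch: delta = GPU core temp minus coolant temp at the
# same moment; dividing by sustained board power gives an approximate
# block thermal resistance in C/W. All numbers are made-up placeholders.
GPU_TEMP=49      # C, core temp under sustained load
WATER_TEMP=38    # C, coolant temp at the same moment
BOARD_POWER=380  # W, sustained draw (e.g. a 380W BIOS at 100% power limit)

DELTA=$((GPU_TEMP - WATER_TEMP))
echo "GPU-to-water delta: ${DELTA} C"
awk -v d="$DELTA" -v p="$BOARD_POWER" \
    'BEGIN { printf "approx. block thermal resistance: %.3f C/W\n", d / p }'
```

The lower the C/W figure at a given power, the better the block (plus TIM and mount) is performing; loop differences mostly show up in the water temp itself, not this delta.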


----------



## richiec77

Messing with the new FTW3 that just landed. OK card so far. The BIOS switch does nothing but boost fan speed. Lame.

Spent a bit of time figuring out the EKWB Thermosphere mounting. Was able to just remove the fan/cooler assembly and leave the front and backplate in place for now. The G92 mounting plate fits; hole spacing is good. The frame is just barely small enough to fit inside the cutout in the front plate.

The stock standoffs are a tad short, though. Stock is 2.5mm, whereas the height from the GPU frame to the PCB is 3mm, and die height to PCB is near 3.2mm. It's tricky, but fitting the nylon washers from the kit under the standoffs brings it level. They're approximately 0.6-0.7mm thick; you just need slightly longer screws to make it work.

Also, for the top of the Thermosphere to fit without hitting the backplate: reverse it. If installed as-is from the factory, the metal plugs hit the backplate. If you reverse it, there's plenty of clearance.

Will leave it alone for a couple of weeks, but will most likely try shunt mods, soldering shunts on top. The power limit is the bane of Nvidia.

I think I won a full tube of paste with this card.... Jesus.


----------



## lowrider_05

Zammin said:


> Thanks for the info man. Knowing your water temp can be really useful for determining how well different components are performing. For example the temp difference between the water and the GPU will tell you how well the block is performing. Of course I expect the Heatkiller block to be a top performer, I was just wondering since it's so new and I couldn't find any performance reviews.
> 
> Did you have an alphacool block on your 2080Ti before? Which one?
> 
> I've got an Alphacool full cover plexi block and back plate which I haven't fit to my GPU yet.


This one: https://www.alphacool.com/shop/neue-produkte/23838/alphacool-eisblock-gpx-n-acetal-nvidia-geforce-rtx-2080ti-m01

It was the first block available, but it has far fewer thermal contacts for the VRMs and surrounding components, and 1mm VRAM pads compared to the 0.5mm pads on the Heatkiller. It got the job done as well, though.


----------



## Zammin

lowrider_05 said:


> This one: https://www.alphacool.com/shop/neue-produkte/23838/alphacool-eisblock-gpx-n-acetal-nvidia-geforce-rtx-2080ti-m01
> 
> it was the first block that was available but it has much less thermal contacts for the VRM,s and surrounding components and 1mm Vram Pads compared to the 0,5mm Pads on the Heatkiller but it got the job done as well.


Oh wow.. that's the same one I have, except mine is plexi. So you reckon roughly 10C lower temps under the same conditions with the Heatkiller block? That's a pretty massive difference.


----------



## lowrider_05

Zammin said:


> Oh wow.. that's the same as the one I have, except mine is plexi. So you reckon roughly 10C lower temps under the same conditions with the heatkiller block? that's a pretty massive difference :bigeyedsm


Well, for me it was like this, but on the Alphacool I had no backplate; with the Heatkiller I do, so that could also make a little difference.


----------



## Zammin

lowrider_05 said:


> well, for me it was like this but on the alphacool i had no backplate, with the heatkiller i have so that could also make a little difference.


Normally backplates don't do much for the GPU core, maybe a little for the VRM area. You have certainly got my interest now. I always thought the difference in performance between the well-established brands' waterblocks would be only a few C at best, but in your case it sounds like a much larger difference. The Alphacool block has a very large micro-channel array over the GPU area, so I would have expected a lot better. If the difference really is that big, I might sell mine and order a Heatkiller block instead. I posted in the Heatkiller thread to see if anyone can provide water-to-GPU temp deltas. Fingers crossed; info is scarce on these new products right now.


----------



## alanthecelt

In the case above, the water temp must be very high.
I run the full Alphacool with the 380W BIOS, and so far I haven't seen a GPU temp above 49C after extended use (real-world gaming), with a coolant delta of 5 degrees max in a cool room.
I expect that in a low-delta scenario the difference in block effectiveness will be much tighter.


----------



## Zammin

alanthecelt said:


> the case above the water temp must be very high
> i run the full Alphacool, 380w bios, and so far i haven't seen a gpu temp above 49 after extended use (real world gaming), coolant delta max 5 degrees in a cool room
> I expect in a low delta scenario the difference between block effectiveness will be much tighter


Cheers for the info. When your GPU is at 49C, what is your water temp?


----------



## alanthecelt

Yeah, erm... I've been meaning to set up a sensor for that on my LCD screen.
I'll do that next time I'm on the machine... useful info to be able to give out.


----------



## Zammin

alanthecelt said:


> yer erm... i've been meaning to set a sensor up for that on my lcd screen
> I'll do that next time im on the machine... useful info to be bale to give out


Ah no worries man. Yeah it's handy info to have, I have two temp sensors in my loop plugged into my commander pro haha.


----------



## alanthecelt

Zammin said:


> Ah no worries man. Yeah it's handy info to have, I have two temp sensors in my loop plugged into my commander pro haha.


Yeah, I've got 2 set up on the Aquaero and a handful of air temp sensors, with a readout on a little screen, but coolant delta/temp isn't displayed, only GPU/CPU core atm.
Passing the raw sensor info to the screen might be useful.


----------



## cstkl1

One Lowyat member managed to get a code for his Zotac, bought way before this promotion date.. so keep trying..

MSI is still a no-go. It's an automated process on their website based on registration. Should have just faked it.


----------



## Kronos8

richiec77 said:


> Spent a bit of time figuring out the EKWB thermosphere mounting. Was able to just remove the Fan cooler assembly and leave the front and backplate in place for now. The G92 Mounting plate fits. Hole space is good. The frame is just barely small enough to fit inside the cutout in the front plate.


Nice info. I thought thermosphere could not be used with 2080Ti's, but you proved me wrong. So, both Thermosphere and VGA-Supremacy can be used.

https://www.overclock.net/forum/69-...tx-2080-ti-owner-s-club-331.html#post27702602

Always good to know that universal blocks are still usable, despite the fact they are not promoted. Mind sharing your temps? Any VRM temps?


----------



## Garrett1974NL

Zammin said:


> Cheers for the info. When your GPU is at 49C, what is your water temp?


I know you did not ask me but I'd like to chime in. When my coolant is at 30C the GPU is at 41C, an 11C delta all the time. Koolance GPU-230 GPU block.
With the EK-VGA Supremacy I had a 13C delta. I use Conductonaut liquid metal. Hope this info is useful somehow.


----------



## richiec77

Kronos8 said:


> Nice info. I thought thermosphere could not be used with 2080Ti's, but you proved me wrong. So, both Thermosphere and VGA-Supremacy can be used.
> 
> https://www.overclock.net/forum/69-...tx-2080-ti-owner-s-club-331.html#post27702602
> 
> Always good to know that universal blocks are still usable, despite the fact they are not promoted. Mind sharing your temps? Any VRM temps?


Didn't know if the Thermosphere would fit until I got my hands on one. I was trying the R600 bracket for a while...then tried the G92 bracket. Super close to an exact fit with the bracket. 

Uh... temps are hard to share right now as I'm running it on a chiller, so 1C coolant temps and 8C idle temps aren't realistic for most.

Trying to figure out why the card isn't seeing 100% utilization under TimeSpy. GT1 shows 78-84%, GT2 100%. Shouldn't be that way.


----------



## Zammin

Garrett1974NL said:


> I know you did not ask me but I do like to chime in. When my coolant is at 30 the GPU is 41. 11c delta all the time. Koolance GPU-230 GPU block
> With the EK-VGA Supremacy I had a 13c delta. I use Conductonaut liquid metal. Hope this info is useful somehow


That's a pretty good delta, I hadn't heard of that block until I looked it up just now and realised it's a universal type block. Are you just cooling the rest of the components with heat sinks and air?

The reason I was keen to know the deltas for the Heatkiller and Alphacool blocks was to compare their performance since lowrider's heatkiller block seems to perform a lot better, but GPU temp alone doesn't always tell the whole story. Thanks for chiming in though.


----------



## Garrett1974NL

Yeah it is pretty good for full load.
There is a metal plate on the memory and VRM, and I tie-wrapped a 120mm fan to the card. 1000rpm so not too noisy. It looks ghetto but works awesome.


----------



## cstkl1

curious to see what constant 2100mhz is like



https://www.3dmark.com/spy/5119762
Didn't OC the CPU.


----------



## Lefty23

cstkl1 said:


> curious to see what constant 2100mhz is like


ok this is as close as I can get. Even at 1.0V, it still dipped once to 2070 on the second scene...
https://www.3dmark.com/3dm/30514308?

24/7 OC of my system (with the fans and pump at 100% for the run)
CPU: 53/[email protected]
Mem: 4400c18
GPU: 2100/[email protected]


Some users were asking about V/F curve OC and undervolting earlier in the thread. It actually seems to work just as well, at least with my card. Coming from a 1080 Ti, the first thing I tried after installing the waterblock was to use the curve to OC the card while undervolting @1.0V. Got to the same core clock at a much lower voltage than using the slider, which is great considering how hot these RTX cards run.
When I didn't see any reports from other users on this, I thought it was considered "common knowledge" from the Pascal generation and not worth mentioning. Now I'm wondering if I was wrong and my case is just a fluke?


----------



## Jpmboy

richiec77 said:


> Messing with the new FTW3 to land. OK card so far. BIOS switch does nothing but boost fan speed. Lame.
> 
> Spent a bit of time figuring out the EKWB thermosphere mounting. Was able to just remove the Fan cooler assembly and leave the front and backplate in place for now. The G92 Mounting plate fits. Hole space is good. The frame is just barely small enough to fit inside the cutout in the front plate.
> 
> The stock standoffs are a tad short though. Stock is 2.5mm where the height from the GPU frame to PCB is 3mm. Die height to PCB is near 3.2mm. It's tricky, but fitting the nylon washer in the kit under the standoffs brings it level. They're approximately 0.6-0.7mm in thickness. Just need slightly longer screws to work.
> 
> Also, for the top of the Thermosphere to fit without hitting the backplate: reverse it. If installed as is from the factory, the metal plugs hit the backplate. If you reverse it, there's plenty of clearance.
> 
> Will leave it alone for a couple weeks, but will most likely try shunt mods soldering shunts ontop. Power Limit is the bane of Nvidia.
> 
> I think I won a full tube of paste with this card....Jesus.


nice job! and yeah, the OEMs just goober the TIM on... compensates for poor coldplates coming off the line.


----------



## Jpmboy

Lefty23 said:


> ok this is as close as I can get. Even at 1.0V, it still dipped once to 2070 on the second scene...
> https://www.3dmark.com/3dm/30514308?
> 
> 24/7 OC of my system (with the fans and pump at 100% for the run)
> CPU: 53/[email protected]
> Mem: 4400c18
> GPU: 2100/[email protected]
> 
> 
> Some users were asking about V/F curve OC and undervolting earlier in the thread. Actually seems to work just as good, at least with my card. Coming from 1080ti, first thing I tried after installing the waterblock was to use the curve to OC the card while under-volting @1.0V. Got to a core clock at a much lower voltage than using the slider which is great considering how hot these RTX cards run.
> When I didn’t see any reports from other users on this, I thought it was considered “common knowledge” from Pascal generation and not worth mentioning. Now I'm wondering if I was wrong and my case is just a fluke?


That's a very good score. Post it HERE.


(btw, you gotta remember, undervolting while still pulling 350W just changes the amperage at that voltage. It's pretty simple: if you ask the card to do 350W worth of work, it has to pull that much power at whatever voltage you set.)
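Jpmboy's point is just power-law bookkeeping: P = V × I, so at a fixed power draw a lower voltage means proportionally more current. A quick sketch of the arithmetic (the 350W figure comes from the post above; the function name and voltage set points are illustrative):

```python
# Sketch of the point above: at a fixed power draw, undervolting
# just shifts the load from voltage to current (P = V * I).
def core_current(power_w: float, voltage_v: float) -> float:
    """Approximate core current in amps for a given power and voltage."""
    return power_w / voltage_v

# Hypothetical 350 W workload at two common Turing voltage set points:
print(round(core_current(350, 1.093), 1))  # ~320.2 A at 1.093 V
print(round(core_current(350, 1.000), 1))  # ~350.0 A at 1.000 V
```

Same work, same watts; only the V/I split changes, which is why the power limit rather than the voltage slider ends up being the real ceiling.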


----------



## Citruschrome

Just flashed the FE to the Galax BIOS. I assume it's normal to see a VBIOS "splash" screen during POST? I haven't seen that in a very long time. I've also noticed that the voltage will no longer go to 1.09x for me and seems to top out at 1.06x; tried several different unlock voltage options in Afterburner but no luck.


----------



## Jpmboy

yes. it's annoying.


----------



## Lefty23

Jpmboy said:


> that's a v good score. post it HERE
> 
> 
> (btw, you gotta remember, undervolting while still pulling 350W, just changes the amperage at that voltage. it's pretty simple, if you ask the card to do 350W worth of work, is has to pull that much power at whatever voltage you set.)


Thanks, posted in the TS thread.

I remember reading this before. Since you seem to "have" this knowledge, a quick question for you:
Does using higher amperage cause the hardware to perform worse? I remember my 1080 Ti would get better results at the same clocks using one bin higher voltage in a lot of cases.
That's back when I was testing each scene of each bench to see what clock at what volt bin runs better, in order to set up a V/F curve before running the benchmark for highest score.
Not doing that anymore.

@Citruschrome Splash screen is normal. Not sure why you are not getting voltages up to 1.093V. Seems to work with my XC Ultra using "Standard MSI" option. Maybe a driver or FE thing? Have you clean installed the drivers after the flash?
Edit: I usually follow the procedure mentioned in OP here:
https://www.overclock.net/forum/69-nvidia/1627212-how-flash-different-bios-your-1080-ti.html


----------



## cstkl1

Lefty23 said:


> cstkl1 said:
> 
> 
> 
> curious to see what constant 2100mhz is like
> 
> 
> 
> ok this is as close as I can get. Even at 1.0V, it still dipped once to 2070 on the second scene...
> https://www.3dmark.com/3dm/30514308?
> 
> 24/7 OC of my system (with the fans and pump at 100% for the run)
> CPU: 53/[email protected]
> Mem: 4400c18
> GPU: 2100/[email protected]
> 
> 
> Some users were asking about V/F curve OC and undervolting earlier in the thread. Actually seems to work just as good, at least with my card. Coming from 1080ti, first thing I tried after installing the waterblock was to use the curve to OC the card while under-volting @1.0V. Got to a core clock at a much lower voltage than using the slider which is great considering how hot these RTX cards run.
> When I didn’t see any reports from other users on this, I thought it was considered “common knowledge” from Pascal generation and not worth mentioning. Now I'm wondering if I was wrong and my case is just a fluke?

Pretty low GPU score for 2100 at CPU 5.3GHz.
At 164xx.. it should be way higher.. are you sure you're not being throttled more constantly??

About the temps: on a 2080 Ti on air I seriously don't see a significant temp diff whether it's at 1V or 1.093V.

What makes the difference is the power limit, which equates to the power draw.

So @Jpmboy is correct as always.


----------



## Citruschrome

Lefty23 said:


> thanks, posted in TS thread.
> 
> I remember reading this before. Since you seem to "have" this knowledge a quick question for you.
> Does using higher amperege cause the h/w to perform worse? I remember my 1080ti would get better results at the same clocks using one bin higher voltage in a lot of cases.
> That's back when I was testing each scene of each bench to see what clock at what volt bin runs better, in order to setup a V/F curve before I run the benchmark for highest score
> Not doing that anymore
> 
> @Citruschrome Splash screen is normal. Not sure why you are not getting voltages up to 1.093V. Seems to work with my XC Ultra using "Standard MSI" option. Maybe a driver or FE thing? Have you clean installed the drivers after the flash?
> Edit: I usually follow the procedure mentioned in OP here:
> https://www.overclock.net/forum/69-nvidia/1627212-how-flash-different-bios-your-1080-ti.html


I haven't no, I did clean install drivers but just through Nvidia. I'll run through it and double test.


----------



## cstkl1

Citruschrome said:


> Lefty23 said:
> 
> 
> 
> thanks, posted in TS thread.
> 
> I remember reading this before. Since you seem to "have" this knowledge a quick question for you.
> Does using higher amperege cause the h/w to perform worse? I remember my 1080ti would get better results at the same clocks using one bin higher voltage in a lot of cases.
> That's back when I was testing each scene of each bench to see what clock at what volt bin runs better, in order to setup a V/F curve before I run the benchmark for highest score
> Not doing that anymore
> 
> @Citruschrome Splash screen is normal. Not sure why you are not getting voltages up to 1.093V. Seems to work with my XC Ultra using "Standard MSI" option. Maybe a driver or FE thing? Have you clean installed the drivers after the flash?
> Edit: I usually follow the procedure mentioned in OP here:
> https://www.overclock.net/forum/69-nvidia/1627212-how-flash-different-bios-your-1080-ti.html
> 
> 
> 
> I haven't no, I did clean install drivers but just through Nvidia. I'll run through it and double test.

Err, why do you want 1.093??
I run mine at 1.093 cause of the V/F curve; it can sometimes go up to 30MHz higher than what I set.. technically 2100MHz was stable on every fan setting on the rads up to 55C @1.05V with a 2hr Heaven run / 100-loop Time Spy stress test. Since temps are not affected whether it's 1.0V or 1.093V.. I left it at the highest.

At 1V I can only get 2055MHz stable..
VRAM, seriously, after what happened to the first card.. I don't OC it at all and leave it at stock. These Micron VRAMs seem like a ticking time bomb.


----------



## Lefty23

cstkl1 said:


> pretty low gpu score for 2100 at cpu 5.3ghz
> at 164xx..should be way higher.. are u sure ure not being throttled like more constantly??


Yeah sure, but that's on a [email protected] screen with a second monitor connected, no NVCP tweaks, AV and a bunch of other apps running in the background, Meltdown & Spectre mitigations enabled, etc.
As I said, it dips once to 2070-2085 on the second scene.

I guess you were looking for something like this (same profile but with nvcp tweaks, Meltdown & Spectre disabled & single 1440p monitor):
https://www.3dmark.com/spy/5124741


----------



## cstkl1

Lefty23 said:


> cstkl1 said:
> 
> 
> 
> pretty low gpu score for 2100 at cpu 5.3ghz
> at 164xx..should be way higher.. are u sure ure not being throttled like more constantly??
> 
> 
> 
> yeah sure, but that's on a [email protected] screen with a second monitor connected, no nvcp tweaks, AV and bunch of other apps running in the background, Meltdown & Spectre enabled, etc
> As I said it dips once to 2070-2085 on the second scene
> 
> I guess you were looking for something like this (same profile but with nvcp tweaks, Meltdown & Spectre disabled & single 1440p monitor):
> https://www.3dmark.com/spy/5124741

Err no. I get 16500 if I just bump my CPU to 4.9..

No tweaks. This is a 24/7 comp running Plex/torrents and a few VMs.

Btw I didn't even know about those tweaks lol. NVCP?? The control panel can be tweaked for 3DMark?? Meltdown/Spectre can be disabled?? .. Seriously didn't know about that. Will try another time.


----------



## Murlocke

I am getting black artifacting/lines in low resolution/low demanding games but only in fullscreen. For example, Binding of Isaac. Everything else is fine. It does not show up in screenshots, which makes me believe it might be some type of HDMI handshaking issue. Does not happen with my Titan XP though... Tried reseating cables, cold booting everything, two different cables, etc.

Anyone else experiencing this? Can anyone with Binding of Isaac: Rebirth test?


----------



## Okt00

cstkl1 said:


> err no. i get 16500 if i just bump my cpu to 4.9..
> 
> no tweaks. this is a 24/7 comp running plex/torrents and a few VMs.
> 
> btw didnt even knew about those tweaks lol. nvcp?? control panel can be tweaked for 3dmark?? meltdown/spectre can be disabled?? .. seriously dont know about that. will try another time.



I believe it's typically setting Adjust Image Settings with Preview in NVCP from High Quality to Performance overall, then setting things like:

Texture Filtering to High Performance
VSync to Off
Power Management to Prefer Maximum Performance
Multi Display to Single Display Performance Mode


----------



## kot0005

Zammin said:


> Oh wow.. that's the same as the one I have, except mine is plexi. So you reckon roughly 10C lower temps under the same conditions with the heatkiller block? That's a pretty massive difference!


If you wanna buy stuff again from au.aquatuning.com, they have a huge 15% discount code. Use BLACKOUT15. It works on .de and .uk, not sure about .us though.


----------



## cstkl1

Lefty23 said:


> yeah sure, but that's on a [email protected] screen with a second monitor connected, no nvcp tweaks, AV and bunch of other apps running in the background, Meltdown & Spectre enabled, etc
> As I said it dips once to 2070-2085 on the second scene
> 
> I guess you were looking for something like this (same profile but with nvcp tweaks, Meltdown & Spectre disabled & single 1440p monitor):
> https://www.3dmark.com/spy/5124741


Tested the Meltdown thing.. got a slight boost in the CPU score.. but the GPU score is still the same at 16500... Nvidia Control Panel this time set to Performance instead of auto (before this I had just disabled G-Sync).
CPU OCed to 4.9.


https://www.3dmark.com/spy/5128216

2100MHz / 8000
I know 3DMark says 2115.. but that's just for the first few secs.. it stays at 2100MHz right to the end.


----------



## cstkl1

Okt00 said:


> cstkl1 said:
> 
> 
> 
> err no. i get 16500 if i just bump my cpu to 4.9..
> 
> no tweaks. this is a 24/7 comp running plex/torrents and a few VMs.
> 
> btw didnt even knew about those tweaks lol. nvcp?? control panel can be tweaked for 3dmark?? meltdown/spectre can be disabled?? .. seriously dont know about that. will try another time.
> 
> 
> 
> 
> I believe typically setting things in NVCP Adjust Image Settings with Preview from High Quality to Performance overall, then setting things like:
> 
> Texture Filtering to High Performance,
> VSync to Off
> Power Management to Perfer Maximum Performance
> Multi Display to Single Display Performance Mode

Tested that, saw a bump of 12 on the GPU score.
VSync/G-Sync was off as per the 3DMark requirement.
So far that Spectre thingy gave a bump in the CPU score of around 200 points.


----------



## Zammin

kot0005 said:


> If you wana buy stuff again from au.aquatuning.com, they have a huge 15% discount code. Use BLACKOUT15. It works on .de and .uk not sure of .us though.


Damn it this stuff always comes up right after I buy something lol. Thanks for the heads up, if I think of something I need I'll use the code. I ordered from the AU site.



alanthecelt said:


> yer ive got 2 set up the aquaero and a handful of air temp sensors, readout setup on a little screen but coolant delta or temp isnt displayed, only gpu/cpu core atm
> passing he pure sensor info to the screen might be useful


I got some info from a guy in the heatkiller thread about his 2080Ti block, maximum delta of 15C water temp to GPU temp (31C water, 46C GPU). Let me know once you have your alphacool delta checked. Maybe I should compile this info for as many 2080Ti blocks as I can since there aren't any comprehensive reviews yet haha.


----------



## Citruschrome

So anyone else running into issues not being able to boost the vcore from 1.062 up to 1.093 after flashing to the Galax 380w?


----------



## cerealkeller

Citruschrome said:


> So anyone else running into issues not being able to boost the vcore from 1.062 up to 1.093 after flashing to the Galax 380w?


I recently did the flash; I have an FE if that matters. I haven't seen 1.093 yet, but I have been hitting 1.081 and one step above that (I can't recall exactly what it was, 1.087 I think). I'm running 2070MHz and it's stable so I'm not worried about it.


----------



## kot0005

Zammin said:


> Damn it this stuff always comes up right after I buy something lol. Thanks for the heads up, if I think of something I need I'll use the code. I ordered from the AU site.
> 
> 
> 
> I got some info from a guy in the heatkiller thread about his 2080Ti block, maximum delta of 15C water temp to GPU temp (31C water, 46C GPU). Let me know once you have your alphacool delta checked. Maybe I should compile this info for as many 2080Ti blocks as I can since there aren't any comprehensive reviews yet haha.


I bought a Vision module and some 7W/mK thermal pads just in case I need to RMA the GPU..


----------



## kot0005

Citruschrome said:


> So anyone else running into issues not being able to boost the vcore from 1.062 up to 1.093 after flashing to the Galax 380w?


I use the FTW3 BIOS on my XC Ultra and I don't have a problem. I can hit 1.093V, but I have to adjust my curve at the top end for it to work; I also have to turn the voltage and power sliders up to max.

I don't run my card at this voltage tho. It's running at 1.068V and 2100MHz.


----------



## Kronos8

richiec77 said:


> Didn't know if the Thermosphere would fit until I got my hands on one. I was trying the R600 bracket for a while...then tried the G92 bracket. Super close to an exact fit with the bracket.
> 
> Uh...right now Temps are hard to share as I'm running it on a Chiller right now. So 1C coolant temps and 8C idle temps isn't realistic for most.
> 
> Trying to figure out why the card isn't seeing 100% utilization under TimeSpy. GT1 shows 78-84%, GT2 100%. Shouldn't be that way.


Thanks for sharing.
I hope you'll figure it out.


----------



## cstkl1

****ty MSI

asus gave me TWO codes


----------



## alanthecelt

Zammin said:


> Damn it this stuff always comes up right after I buy something lol. Thanks for the heads up, if I think of something I need I'll use the code. I ordered from the AU site.
> 
> 
> 
> I got some info from a guy in the heatkiller thread about his 2080Ti block, maximum delta of 15C water temp to GPU temp (31C water, 46C GPU). Let me know once you have your alphacool delta checked. Maybe I should compile this info for as many 2080Ti blocks as I can since there aren't any comprehensive reviews yet haha.


It'd be useful to have a consistent load to compare, mind.. like a particular benchmark and core clock or something


----------



## Zammin

alanthecelt said:


> be useful to have a consistent load to compare mind.. like a particular benchmark and core clock or something


Any 3D application that will sustain near 100% load would be fine; you could use a 3DMark stress test or just loop Heaven or Superposition for a while. You just need something to get your GPU and water close to their peak temps and take a delta from there.

The part I'm not sure about is whether overclocking the GPU or raising the power limit will change the delta. I would assume it will make the GPU and water a bit hotter but the temp delta wouldn't change much. That's my guess anyway.
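One way to make deltas from different rigs roughly comparable, and to sidestep the power-limit question, is to normalize the water-to-GPU delta by board power, giving an approximate thermal resistance in C/W. A minimal sketch: the temperature pairs echo numbers mentioned in the thread, but the block labels and power figures are made up for illustration:

```python
# Rough block comparison: the raw delta depends on power draw, so
# dividing by watts gives an approximate block+TIM thermal resistance
# that is easier to compare across rigs. Power figures are hypothetical.
def thermal_resistance(gpu_c: float, water_c: float, power_w: float) -> float:
    """Approximate thermal resistance of the block/TIM stack in C/W."""
    return (gpu_c - water_c) / power_w

blocks = {
    "Block A": thermal_resistance(gpu_c=46, water_c=31, power_w=330),
    "Block B": thermal_resistance(gpu_c=41, water_c=30, power_w=260),
}
# Lower C/W means the block moves heat into the coolant more effectively.
for name, r in sorted(blocks.items(), key=lambda kv: kv[1]):
    print(f"{name}: {r:.3f} C/W")
```

On this view, raising the power limit should widen the raw delta even with an identical block, which is why logging the power draw alongside the two temperatures makes any compiled comparison much more useful.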


----------



## Okt00

cstkl1 said:


> tested that saw a bump of 12 on da gpu score.
> vsync/gsync was off as per 3dmark requirement
> so far dat spectre thingy gave a bump in da cpu score arnd 200 points


12 points!

lol... okay so ymmv. Thanks for testing it out.


----------



## cerealkeller

Citruschrome said:


> So anyone else running into issues not being able to boost the vcore from 1.062 up to 1.093 after flashing to the Galax 380w?


Actually, with further use, I have run into that in a couple of games. Don't know why, and they both crashed. So what I did was install the latest beta of Afterburner and set up a custom curve. Precision may allow that too, but I knew how to do it with Afterburner so I went with that. Hit Ctrl+F with the mouse cursor over the monitoring side and then set a custom curve. It was time consuming for me. Fortunately I knew the clock speed I could run without any additional voltage, so I set my max of 2070 at 1.09 (roughly) and set the rest in between 1.06 and 1.09. Then I set everything under 1.06 until my custom curve started to line up with the factory curve, hit apply and saved it. Anything above roughly 1900MHz is going to get a little more voltage than normal, but that's not really an issue that concerns me.
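The curve edit described above can be sketched as a transform on (voltage, clock) points: everything at or above a chosen voltage gets capped at the target clock, which is roughly what dragging the curve flat in Afterburner achieves. All point values here are illustrative, not anyone's actual curve:

```python
# Loose sketch of a flat-topped V/F curve: clamp every point at or
# above the target voltage to the target clock, leave the rest on the
# stock curve. The stock points below are made-up example values.
def flatten_curve(points, target_v, target_mhz):
    """points: list of (voltage_v, clock_mhz); returns the capped curve."""
    return [(v, min(mhz, target_mhz)) if v >= target_v else (v, mhz)
            for v, mhz in points]

stock = [(0.90, 1900), (1.00, 2000), (1.06, 2040), (1.09, 2100)]
flat = flatten_curve(stock, target_v=1.06, target_mhz=2070)
print(flat)
```

The point of the cap is that the card can never request a clock above what you validated as stable at your maximum voltage, so the crashes from opportunistic boosting go away.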


----------



## ESRCJ

Hey guys, what is your preferred program for GPU stress testing? I was planning on manually setting most of my VF curve and I want to use a stress test that is highly reliable and diverse. As a first pass, I've been using Heaven. 

I also think it may be useful if we post our custom VF curves for the sake of comparison.


----------



## tekathan

New build is done, boys. Score for Time Spy in the screenshot.
9900K OC'd to 5.140, 2080 Ti +150MHz core / +1000 mem. BIOS = Gigabyte.


----------



## Citruschrome

So like others have said, you really need to adjust your curve to get your full voltage once you're dealing with the 380W PL, at least with the FE. I'm using an adjusted Nvidia scanner V/F curve now as it's given me the best results. For BFV it sits at 2145 and the voltage stays at 1.093 unless the core drops to 2130 from the PL, then the voltage drops to 1.081. I've only noticed maybe a 2C difference as well under water, so I'm pretty happy with the results.


----------



## hdtvnut

Thanks to nowinstock.net/computers/videocards/nvidia/rtx2080ti, I finally nailed a ROG Strix O11G at Newegg.


----------



## ESRCJ

I'm having an issue with Afterburner's V/F curve. I'm manually setting and locking a particular voltage-frequency point, but randomly the core clock will boost another 15MHz and crash Heaven. For example, I'll have 2205MHz at 1.093V, but randomly while running the benchmark loop it'll boost up to 2220MHz while maintaining 1.093V. So it's effectively jumping above my predefined curve. I'm trying to develop my maximum stable curve, but this seems impossible when this kind of behavior is exhibited. With 2290MHz at 1.093V, everything is fine and the clock stays put.
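A plausible explanation for the 2205 → 2220 jump is that GPU Boost moves the core clock in discrete 15MHz bins, so any opportunistic bump above a locked point is always a whole bin. A tiny illustration (the snapping function is a hypothetical model, not how the driver actually computes clocks):

```python
# NVIDIA Boost adjusts the core clock in discrete 15 MHz bins, which
# matches the 2205 -> 2220 jump described above: exactly one bin of
# opportunistic boost on top of the locked curve point.
BIN_MHZ = 15

def snap_to_bin(clock_mhz: float) -> int:
    """Snap a requested clock to the nearest 15 MHz boost bin (model only)."""
    return int(round(clock_mhz / BIN_MHZ) * BIN_MHZ)

print(snap_to_bin(2205))  # 2205: already on a bin boundary
print(snap_to_bin(2213))  # 2220: rounds up to the next bin
```

Practically, this means a curve point is only safe if the clock one bin above it is also stable, so validating at target + 15MHz is a reasonable guard.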


----------



## cstkl1

gridironcpj said:


> I'n having an issue with Afterburner's VF curve. I'm manually setting and locking a particular voltage-frequency point, but randomly the core clock will boost another 15MHz and crash Heaven. For example, I'll have 2205MHz at 1.093V, but randomly while running the benchmark loop it'll boost up to 2220MHz while maintaining 1.093V. So it's effectively jumping above my predefined curve. I'm trying to develop my maximum stable curve, but this seems impossible when this kind of behavior is exhibited. With 2290MHz at 1.093V, everything is fine and the clock stays put.


Time Spy that.. 2280/2295 stable clocks.


----------



## Rob w

Any ideas why, when I run the MSI Afterburner scanner, it fails to scan with "fail code 3"?
2080 Ti FE.


----------



## ESRCJ

cstkl1 said:


> timespy that 2280/2295 stable clocks.


My score in TS graphics stays under 16K regardless of my clock speeds. I don't know what's wrong. My clocks are consistently over 2100MHz and my memory is at 8000MHz, yet my FPS is the same as when I had air cooling and the stock BIOS. I've tried everything and I can't figure it out.


----------



## Rob w

Rob w said:


> Any ideas why when I run msi afterburner scanner it fails to scan “with fail code 3 “
> 2080ti fe.


Just an update on the situation: I was mucking about with scanner curves and voltages but gave up and reset / re-ran the scanner; now every time I run the scanner it fails with code 3.
Now the card just randomly shuts off the display or locks up the rig; tests, i.e. Time Spy, lock up even with no OC.
My Titan is working fine, so I am starting to think the GPU is joining the list of card fails. Hello NV enquiries!


----------



## shadow85

Guys, I am after a 2080 Ti, but I am in Australia and stock is very minimal. I have been holding out for the ASUS Strix, but it seems like I won't get my hands on one for a very long time.

Can all the 2080 Ti gurus here enlighten me on whether I should wait for the Strix, or should I just get one of the more readily available ones like the Gigabyte Gaming OC?

We also have some stock of the Zotac AMP here. Recently the MSI Sea Hawk X was released but of course it instantly sold out. I would like to hold out for that one too.

What is everyone's opinion on which one I should get?


----------



## Zammin

shadow85 said:


> Guys, I am after a 2080 Ti. But I am in Australia, stock is very minimal. I have been holding out for the ASUS STRIX, but it seems like I won't get my hands on one for a very long time.
> 
> Can all the 2080 Ti guru's here enlighten me on weather I should wait out for the STRIX, or should I just get one of the more readily available ones like the Gigabyte Gaming OC.
> 
> We also have some stock of the Zotac AMP here. Recently the MSI Sea Hawk X was released but ofcourse instantly sold out. I would like to hold out for that one too.
> 
> What is everyones opinion in which one I should get.


The Gigabyte gaming OC is a pretty damn good card for the price, as long as you aren't planning on watercooling. It has a 366W BIOS which is higher than the Strix. I've heard the cooler is pretty good too. Might not be quite as beefy as the Strix cooler but it may not make a whole lot of difference. I'm in Australia too, I had an EVGA XC on "pre-order" with PLE for months and nothing happened. As soon as Nvidia AU had stock again I cancelled my order with PLE and bought one of those. Saved me $155 too.

With the Strix model, my understanding is that the only other things you are getting aside from the larger cooler are an extra-beefy VRM (which probably won't make much difference given the power limits, considering the reference VRM is plenty for air and watercooling) and a couple of fan and RGB headers. If those features aren't really of any use to you, maybe just save some money and grab a Gigabyte Gaming OC.

EDIT: There is also this one available right now, it has a big flashy cooler as well: https://www.pccasegear.com/products/44580/gigabyte-aorus-geforce-rtx-2080-ti-11gb


----------



## Madness11

Read some forums on Nvidia... a lot of cards just died. I'm scared ))


----------



## GraphicsWhore

gridironcpj said:


> cstkl1 said:
> 
> 
> 
> timespy that 2280/2295 stable clocks.
> 
> 
> 
> My score in TS graphics stays under 16K regardless of my clock speeds. I don't know what's wrong. My clocks are consistently over 2100MHz and my memory is at 8000MHz, yet my FPS is the same as when I had air cooling and the stock BIOS. I've tried everything and I can't figure it out.

Only thing that sounds wrong is your core clock reading. I get 16606 graphics on TS hitting 2160 so at 22xx you’d be way over that.


----------



## shadow85

Zammin said:


> shadow85 said:
> 
> 
> 
> Guys, I am after a 2080 Ti. But I am in Australia, stock is very minimal. I have been holding out for the ASUS STRIX, but it seems like I won't get my hands on one for a very long time.
> 
> Can all the 2080 Ti guru's here enlighten me on weather I should wait out for the STRIX, or should I just get one of the more readily available ones like the Gigabyte Gaming OC.
> 
> We also have some stock of the Zotac AMP here. Recently the MSI Sea Hawk X was released but ofcourse instantly sold out. I would like to hold out for that one too.
> 
> What is everyones opinion in which one I should get.
> 
> 
> 
> The Gigabyte gaming OC is a pretty damn good card for the price, as long as you aren't planning on watercooling. It has a 366W BIOS which is higher than the Strix. I've heard the cooler is pretty good too. Might not be quite as beefy as the Strix cooler but it may not make a whole lot of difference. I'm in Australia too, I had an EVGA XC on "pre-order" with PLE for months and nothing happened. As soon as Nvidia AU had stock again I cancelled my order with PLE and bought one of those. Saved me $155 too.
> 
> With the Strix model, my understanding is that the only other thing you are getting aside from the larger cooler is a extra beefy VRM (which probably won't make much difference given the power limits and considering the reference VRM is plenty for air and watercooling) and a couple of fan and RGB headers. If those features aren't really of any use to you maybe just save some money and grab a Gigabyte Gaming OC.
> 
> EDIT: There is also this one available right now, it has a big flashy cooler as well: https://www.pccasegear.com/products/44580/gigabyte-aorus-geforce-rtx-2080-ti-11gb

Actually yes, I forgot to mention that the Aorus one just came into stock. It seems like it has to be better than the Gaming OC version, yeah?

But I bet even by the time I decide to get that one it will be out of stock lol


----------



## Miao

*Strix PL*

Hi guys,
does anyone know if it is possible to flash a different BIOS (the Galax 380W or anything else with a higher power limit) onto a 2080 Ti Strix OC? Or is there any other way to raise the board's power limit? The stock 325W is far too low, even on air!
I've had the card for a few days now and have run some tests; even though I can bench at +160 on air, I am constantly hitting the power limit.
The real constant clock/voltage curve that is stable without hitting the power limit is 0.962V/2025MHz for light, quick workloads, but the maximum stable clock for the Time Spy Extreme stress test is 0.956V/1980MHz (just one drop to 1965, then stable until the end). With the usual OC method I see peaks of 2115MHz (maybe 2145 sometimes, I don't even remember; it's not that important, since they are just peaks), and in some scenarios it can hold 2070MHz for a while.
Overvolting works, but it is absolutely counterproductive: the only result is hitting the power limit even faster and ending up at lower clocks, so the best results I got were at 0 overvolt (like +160).
But clocks are not the point.

On the Asus ROG forum an official source says it is not possible to flash any other BIOS because the card has a fully customized PCB and it will not work. The same source says they are not planning an increased-power-limit BIOS update.
I'm actually still not sure the card has a real dual-BIOS switch, because the only difference I see is the fan profile.
Does anyone have more info? Any news of someone running a third-party BIOS on it? Is a shunt mod (or some other hard mod) the only way to raise or disable the power limit?

I'm happy with the thermals of this card: in a mid-to-low airflow case at 26°C ambient it stays at 63-65°C under full load with the automatic fan curve (P BIOS/switch, 50% fan PWM, about 1800rpm), and with better airflow and the GPU fans at 100% (3500rpm) I can finish the Time Spy Extreme stress test (98.7% FPS stability) at 53-54°C.
So there is enough thermal headroom for a few more watts of power limit.
It is a pity to have such a low power limit on this board; the PCB looks great, and it could probably be pushed a lot further on air or water.

So at the moment I can only recommend this card for stock use, or perhaps for hard mods, but I don't want to void the warranty so soon. If there is no other way I will try to return it, even though it was so hard to get: I ordered (and paid for) an MSI Trio X and waited over two months, then changed the order when the Strix (the first high-end Ti) became available on the Italian market, even if in very low quantities. Now it's unavailable again.

Thanks in advance for any replies. Bye
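An aside on the locked-curve approach described above: the usual trick is to raise one point on the voltage/frequency curve to the target clock and flatten everything to its right, so the card never boosts into higher-voltage (higher-power) bins. A toy model of that transform, with made-up curve points (this is only an illustration, not a real Afterburner API):

```python
# Illustrative model of the "flattened V/F curve" overclocking trick:
# pin the point at the chosen voltage to the desired clock, then cap
# every higher-voltage point to the same clock so GPU Boost never
# requests more voltage (and therefore more power) under load.
def flatten_curve(curve, target_voltage, target_clock):
    """curve: list of (voltage, clock_mhz) pairs sorted by voltage."""
    flattened = []
    for voltage, clock in curve:
        if voltage >= target_voltage:
            flattened.append((voltage, target_clock))
        else:
            flattened.append((voltage, clock))
    return flattened

# Hypothetical stock curve points, flattened at the 0.962V/2025MHz
# operating point mentioned above.
stock = [(0.90, 1900), (0.95, 1960), (0.962, 1980), (1.00, 2010), (1.05, 2040)]
locked = flatten_curve(stock, 0.962, 2025)
# every point at or above 0.962V now sits at 2025MHz
```

The point of the flatten is exactly what the post describes: extra voltage only burns power-limit headroom, so you hold the clock at the lowest voltage that is stable for it.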


----------



## hdtvnut

Miao, mine will be here about Friday; interested in what you found, and will run similar tests.

I was also waiting to pick up a Trio with no success; missed a few-minute window at Newegg Wednesday.

The Strix temps are about the lowest of the ti's, maybe because of both cooling design and pl. Failure rate seems less than some others also; I suspect there is a connection.


----------



## acmilangr

Miao said:


> Hi guys,
> does anyone know if it is possible to flash a different BIOS (the Galax 380W or anything else with a higher power limit) onto a 2080 Ti Strix OC? Or is there any other way to raise the board's power limit? The stock 325W is far too low, even on air!
> I've had the card for a few days now and have run some tests; even though I can bench at +160 on air, I am constantly hitting the power limit.
> The real constant clock/voltage curve that is stable without hitting the power limit is 0.962V/2025MHz for light, quick workloads, but the maximum stable clock for the Time Spy Extreme stress test is 0.956V/1980MHz (just one drop to 1965, then stable until the end). With the usual OC method I see peaks of 2115MHz (maybe 2145 sometimes; it's not that important, since they are just peaks), and in some scenarios it can hold 2070MHz for a while.
> Overvolting works, but it is absolutely counterproductive: the only result is hitting the power limit even faster and ending up at lower clocks, so the best results I got were at 0 overvolt (like +160).
> But clocks are not the point.
> 
> On the Asus ROG forum an official source says it is not possible to flash any other BIOS because the card has a fully customized PCB and it will not work. The same source says they are not planning an increased-power-limit BIOS update.
> I'm actually still not sure the card has a real dual-BIOS switch, because the only difference I see is the fan profile.
> Does anyone have more info? Any news of someone running a third-party BIOS on it? Is a shunt mod (or some other hard mod) the only way to raise or disable the power limit?
> 
> I'm happy with the thermals of this card: in a mid-to-low airflow case at 26°C ambient it stays at 63-65°C under full load with the automatic fan curve (P BIOS/switch, 50% fan PWM, about 1800rpm), and with better airflow and the GPU fans at 100% (3500rpm) I can finish the Time Spy Extreme stress test (98.7% FPS stability) at 53-54°C.
> So there is enough thermal headroom for a few more watts of power limit.
> It is a pity to have such a low power limit on this board; the PCB looks great, and it could probably be pushed a lot further on air or water.
> 
> So at the moment I can only recommend this card for stock use, or perhaps for hard mods, but I don't want to void the warranty so soon. If there is no other way I will try to return it, even though it was so hard to get: I ordered (and paid for) an MSI Trio X and waited over two months, then changed the order when the Strix (the first high-end Ti) became available on the Italian market, even if in very low quantities. Now it's unavailable again.
> 
> Thanks in advance for any replies. Bye


My opinion is to keep the Strix.
Dude, this is the best 2080 Ti. It has the best temps and the lowest noise. If it reaches 2125 it is excellent. Just respect it and keep it.


----------



## ESRCJ

Alright so I fixed my poor graphics performance by messing with the Nvidia Control Panel.

https://www.3dmark.com/spy/5150813

This is with the Galax 380W BIOS. With the Strix BIOS (325W) my graphics score was almost 16,300. I need to do further tweaking, as I'm still heavily hitting the power limit.


----------



## xHoLy

Hey guys,
I just built a new system with an RTX 2080 Ti and I have been experiencing a weird "bug". It happens in games and even when watching YouTube or Twitch. It is basically like what happens when you get a BSOD, except the system recovers from it: the screen and sound freeze for a couple of seconds.
Has anyone experienced this?


----------



## cstkl1

gridironcpj said:


> Alright so I fixed my poor graphics performance by messing with the Nvidia Control Panel.
> 
> https://www.3dmark.com/spy/5150813
> 
> This is with the Galax 380W BIOS. My graphics score with the Strix BIOS (325W) was almost 16,300 graphics score. I need to do further tweaking, as I'm still heavily hitting the power limit.


So you got your waterblock?
Also... I thought the Strix couldn't be flashed because of the different DP layout etc.




https://www.3dmark.com/spy/5152128​
16733
Spent some time with this... HT off, 5GHz stable. Uncore bumped to 34. It took some time because I started from scratch, especially the RAM. Same 2100/8000MHz. Stable CPU clocks do pay off compared to a 4.9GHz HT setup that's not stable (for stress tests, even if stable for benchmarks).


----------



## cstkl1

xHoLy said:


> Hey guys,
> I just built a new system with an RTX 2080Ti and I have been experiencing this weird "bug". It happens in games and even when watching youtube or twitch. It is basically like what happens when you get BSOD except it recovers from it. Screen and sound freeze for a couple of seconds.
> Has anyone experienced this?


That's not normal, and it doesn't sound like a GPU thing.

What are your CPU/RAM/mobo specs?

Are you running everything at stock, including the RAM?


----------



## Miao

acmilangr said:


> My opinion is to keep the strix.
> Dude this is the best 2080ti. It has the best temps and the lowest noise. If it goes to 2125 it is excellent. Just respect and keep it.


2215 for half a second is not that relevant.

The card is great but has the worst power limit of all the high-end Tis.

The real sustainable clocks over a long period are under 2GHz (1980/1965, etc.).

That's because of the power limit; thermals are great (at least for a 2080 Ti).

I don't know; I'll keep it for the next few days. It's a great card, but too limited. I'll see if something changes, but I'm not that confident.

@hdtvnut

Happy to see another Strix coming to the community. I've run tons of tests; if you need something specific I can send it to you. My previous post was just asking about raising/removing the power limit, not posting test results.
But if someone is interested I can share.

bye


----------



## lolhaxz

Miao said:


> 2215 for half a second is not that relevant.
> 
> The card is great but has the worst power limit of all the high-end Tis.
> 
> The real sustainable clocks over a long period are under 2GHz (1980/1965, etc.).
> 
> That's because of the power limit; thermals are great (at least for a 2080 Ti).
> 
> I don't know; I'll keep it for the next few days. It's a great card, but too limited. I'll see if something changes, but I'm not that confident.
> 
> 
> @hdtvnut
> 
> Happy to see another Strix coming to the community. I've run tons of tests; if you need something specific I can send it to you. My previous post was just asking about raising/removing the power limit, not posting test results.
> But if someone is interested I can share.
> 
> bye


I don't see why people are hitting the power limit, especially if they have flashed the Galax 380W BIOS... 1.093V is a complete waste of your time unless you're just experimenting: it will pass a loop of TS 5-10 times at ~30MHz higher than it would at 1.05V, but it will crash shortly thereafter; when you actually go to play a game the instability will bubble up and you will likely end up going back to clocks that were stable at the lower voltage.

Under water at 1.093V on the Galax 380W BIOS, the only thing I can find that makes it hit the power limit is FurMark.

It's striking how consistent the silicon seems; most people I've spoken with say it's on the edge at 2115MHz but really needs to come down to 2100MHz or even 2085MHz.


----------



## cstkl1

lolhaxz said:


> I don't see why people are hitting the power limit, especially if they have flashed the Galax 380W BIOS... 1.093V is a complete waste of your time unless you're just experimenting: it will pass a loop of TS 5-10 times at ~30MHz higher than it would at 1.05V, but it will crash shortly thereafter; when you actually go to play a game the instability will bubble up and you will likely end up going back to clocks that were stable at the lower voltage.
> 
> Under water at 1.093V on the Galax 380W BIOS, the only thing I can find that makes it hit the power limit is FurMark.
> 
> It's striking how consistent the silicon seems; most people I've spoken with say it's on the edge at 2115MHz but really needs to come down to 2100MHz or even 2085MHz.


Guessing those are non-MSI Trio cards... has nobody tried that 406W BIOS?


----------



## acmilangr

Miao said:


> acmilangr said:
> 
> 
> 
> My opinion is to keep the Strix.
> Dude, this is the best 2080 Ti. It has the best temps and the lowest noise. If it reaches 2125 it is excellent. Just respect it and keep it.
> 
> 
> 
> 2215 for half a second is not that relevant.
> 
> The card is great but has the worst power limit of all the high-end Tis.
> 
> The real sustainable clocks over a long period are under 2GHz (1980/1965, etc.).
> 
> That's because of the power limit; thermals are great (at least for a 2080 Ti).
> 
> I don't know; I'll keep it for the next few days. It's a great card, but too limited. I'll see if something changes, but I'm not that confident.
> 
> 
> @hdtvnut
> 
> Happy to see another Strix coming to the community. I've run tons of tests; if you need something specific I can send it to you. My previous post was just asking about raising/removing the power limit, not posting test results.
> But if someone is interested I can share.
> 
> bye
Click to expand...

So you mean the maximum you get on the core is only 1980 after some time? What temperature do you have? How much +core can you add and stay stable?


----------



## ESRCJ

cstkl1 said:


> so u got your waterblock??
> also.. eh.. i thought strix cannot be flashed cause of the different DP etc...
> 
> 
> 
> 
> https://www.3dmark.com/spy/5152128​
> 16733
> spent some time with this.. HT off 5Ghz stable. Uncore bumped to 34. Took some time cause started from scratch especially da ram. Same 2100/8000mhz. Stable cpu clocks do pay off compare to a 4.9ghz HT thats not stable. ( for stresstest but stable for benchmarks)


Nice score. I'd really like to break 17K. Bitspower won't accept returns on opened products. The cooling performance is meh, but I'm hanging onto it until a better alternative comes along. EK has a block coming in next month for the Strix. The BIOS flash works if you switch to the DP further right on the IO. The BIOS flashes just don't seem that effective. I'm also using Precision X1 since Afterburner doesn't let me push the memory slider past +1000MHz, although I really hate X1's VF curve. It's a cluttered mess. I'd prefer just a spreadsheet I could modify.


----------



## lolhaxz

So the problem I was having with the system just rebooting under load turned out to be the power supply [Cooler Master V1000]. I replaced it with an AX1200i: no more reboots and also a lot less coil whine. The V1000 is based on a Seasonic design. I don't think the PSU is faulty (it was running a 1080 Ti fine); I suspect the larger transient current swings of the 2080 Ti are the problem: it spikes ±200W in Time Spy regularly.

Tested a 2070MHz GPU and 5.2GHz CPU overclock with Prime95 non-AVX (in-place large FFTs) plus Time Spy Graphics Test 1 (4K, looped); it appears stable...

What is interesting is that 2115MHz on the GPU is fine without Prime95 running, but with Prime95 running it craps itself after about 15 minutes. So something to keep in mind: a workload that is both CPU-heavy and GPU-heavy will affect the overclock of both if they happen to be close to the edge. You can argue that the system will never see a load like this when gaming, but I prefer the old-school approach to stability testing. Y'all will probably find you insta-BSOD when trying it.
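The transient spikes mentioned above can be sanity-checked with simple arithmetic. A rough sketch (the wattage figures here are illustrative assumptions, not measurements from this system):

```python
# Rough PSU headroom check: average system draw plus a transient
# allowance versus PSU capacity. The +-200W Time Spy spikes reported
# above mean short peaks can sit far above the average board power.
def worst_case_peak(cpu_watts, gpu_avg_watts, gpu_transient_watts, other_watts=50):
    """Sum a pessimistic instantaneous draw in watts."""
    return cpu_watts + gpu_avg_watts + gpu_transient_watts + other_watts

# Example with assumed numbers: overclocked CPU ~200W, GPU averaging
# 380W on a raised power limit, plus a 200W transient spike.
peak = worst_case_peak(cpu_watts=200, gpu_avg_watts=380, gpu_transient_watts=200)
# 200 + 380 + 200 + 50 = 830W: fine on paper for a 1000W unit, but a
# fast enough transient can still trip a strict OCP well below the
# PSU's rated continuous output.
```

This is consistent with the experience above: a PSU that is "big enough" on average can still shut down on per-rail overcurrent protection when the GPU's spikes are steep.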


----------



## JackCY

I think Seasonic has stricter specs on some of their PSU designs, and when your power-hungry GPU's load spikes, a protection kicks in and shuts the unit down. Your GPU is at fault for generating such high peak/spike loads; it is probably tripping OCP.


----------



## ESRCJ

I'd really love to see a BIOS with a higher power limit designed specifically for the Strix.


----------



## cx-ray

JackCY said:


> I think Seasonic has more strict specs on some of the PSU designs and when your power hungry GPU load spikes it will kick in a protection and shutdown etc. Your GPU is at fault by generating too high peak/spike loads. Probably tripping OCP with the spikes.


I've had 3 Seasonic units fail: one 350W TFX and two X-1250 single-rail units. The TFX just died one morning; one X-1250 kept overheating and shutting down (tested by Seasonic and certified as not defective; according to them it can sustain 1450W continuously); and my second X-1250 now has some kind of instability and troublesome power-ready behavior.

The fans are also noisy at high load. I used to have one of the 1250s in a water-cooling setup; it was the noisiest thing in the system, basically ruining my silent build.

They are supposed to be one of the best manufacturers. I'm not sure what's going on with the units I've had.


----------



## Zammin

cx-ray said:


> I've had 3 Seasonic units fail. One 350 TFX and two X-1250 single rail units. The TFX just died one morning, one X-1250 kept overheating and shut down (Tested by Seasonic and was certified as not defective. According to them it can sustain 1450W continuously), my second X-1250 has some kind of instability and troublesome power ready now.
> 
> Fans also noisy at high load. I used to have one of the 1250's in a water cooling setup. It was the noisiest thing in the system. Basically, ruining my silent build.
> 
> They are supposed to be one of the best manufacturers. Not sure what's going with the units I've had.


My old 2013 Seasonic X-1050W has a pretty loud fan as well. Even on hybrid mode it kicks in constantly during normal use. Other than that it's okay. The RM1000i I'm using now is nice and quiet, the fan almost never kicks in since it basically never gets hot lol.


----------



## Garrett1974NL

Everyone who has their 2080 Ti watercooled, what are your *delta* values?
When my water is at 30°C and the card is under 98/99% load, the card sits at 40-41°C, so I have a delta of 10-11°C.
I'm quite curious what delta values you could get with a full-cover block from EK and the like... I have a GPU-only block from Koolance, the GPU-230.
Keep them coming, guys


----------



## Krzych04650

lolhaxz said:


> So the problem I was having with the system just rebooting under load turned out to be the power supply [Cooler Master V1000]. I replaced it with an AX1200i: no more reboots and also a lot less coil whine. The V1000 is based on a Seasonic design. I don't think the PSU is faulty (it was running a 1080 Ti fine); I suspect the larger transient current swings of the 2080 Ti are the problem: it spikes ±200W in Time Spy regularly.
> 
> Tested a 2070MHz GPU and 5.2GHz CPU overclock with Prime95 non-AVX (in-place large FFTs) plus Time Spy Graphics Test 1 (4K, looped); it appears stable...
> 
> What is interesting is that 2115MHz on the GPU is fine without Prime95 running, but with Prime95 running it craps itself after about 15 minutes. So something to keep in mind: a workload that is both CPU-heavy and GPU-heavy will affect the overclock of both if they happen to be close to the edge. You can argue that the system will never see a load like this when gaming, but I prefer the old-school approach to stability testing. Y'all will probably find you insta-BSOD when trying it.


Did you try using two separate PCI-E cables to each 8-pin GPU connector instead of just one cable with two connectors? My Seasonic Prime PSU manual says to do so for GPUs with more than 300W power draw.


----------



## lolhaxz

Garrett1974NL said:


> Everyone who has their 2080Ti watercooled, what are your *delta* values?
> When my water is 30C, and the card is under 98/99% load, then the card is 40 or 41C, so I have a delta of 10 or 11C.
> I'm quite curious what delta values you could get with a fullcover from EK and stuff... I have a GPU only block from Koolance, the GPU-230
> Keep them coming guys


Also interested.

After approx. 15 minutes standing still in Witcher 3 at 4K with _everything_ maxed, vsync off:
18C Ambient
31C Water (thermal probe attached to radiator)
42C GPU (100% usage, ~105% power (300W in GPU-Z))

2 x EK PE360 rads
6 x 120mm be quiet! Silent Wings 3 @ 900rpm
1 x D5 pump (about 70% PWM at that load/temp)

With the fans cranked to 1150rpm (max) and the pump at 100% it'd be 2-3C cooler.

Shared CPU and GPU loop, EK Supremacy and EK Vector (Galax 2080 Ti OC @ 2085MHz, stock 380W BIOS)

There's so many variables...
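For easier comparison with other setups in the thread, the deltas from the readings above are just subtractions; a trivial sketch:

```python
# Loop temperature deltas computed from the readings quoted above
# (18C ambient, 31C water, 42C GPU core).
ambient, water, gpu = 18, 31, 42  # degrees C

water_to_ambient = water - ambient  # radiator/fan performance
gpu_to_water = gpu - water          # block + mount + TIM performance

print(f"water-to-ambient delta: {water_to_ambient}C")  # 13C
print(f"GPU-to-water delta: {gpu_to_water}C")          # 11C
```

The GPU-to-water number is the one that isolates the block, so it is the fairest figure to compare between full-cover and GPU-only blocks.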


----------



## lolhaxz

Krzych04650 said:


> Did you try using two separate PCI-E cables to each 8-pin GPU connector instead of just one cable with two connectors? My Seasonic Prime PSU manual says to do so for GPUs with more than 300W power draw.


It was using dedicated leads from Day 1, even on the 1080Ti - one plug from each lead.

So ... yes.


----------



## Fitzcaraldo

Garrett1974NL said:


> Everyone who has their 2080Ti watercooled, what are your *delta* values?
> When my water is 30C, and the card is under 98/99% load, then the card is 40 or 41C, so I have a delta of 10 or 11C.
> I'm quite curious what delta values you could get with a fullcover from EK and stuff... I have a GPU only block from Koolance, the GPU-230
> Keep them coming guys


Delta 8K with Heatkiller IV block


----------



## Jpmboy

Lefty23 said:


> thanks, posted in the TS thread.
> 
> I remember reading this before. Since you seem to "have" this knowledge, a quick question for you.
> *Does using higher amperage cause the h/w to perform worse?* I remember my 1080 Ti would get better results at the same clocks using one voltage bin higher in a lot of cases.
> That's back when I was testing each scene of each bench to see which clock at which voltage bin ran better, in order to set up a V/F curve before running the benchmark for the highest score.
> Not doing that anymore
> 
> @*Citruschrome* The splash screen is normal. Not sure why you are not getting voltages up to 1.093V. It seems to work with my XC Ultra using the "Standard MSI" option. Maybe a driver or FE thing? Have you clean-installed the drivers after the flash?
> Edit: I usually follow the procedure mentioned in the OP here:
> https://www.overclock.net/forum/69-nvidia/1627212-how-flash-different-bios-your-1080-ti.html


Good question... with no simple generalizable answer, as it depends on the circuit type and trace. At this scale there is a voltage ceiling after which gating fails, so the "sweet spot" (the V and A that make up the power) is what we tune for in any OC (or underclock).


----------



## Miao

gridironcpj said:


> Nice score. I'd really like to break 17K. Bitspower won't accept returns on opened products. The cooling performance is meh, but I'm hanging onto it until a better alternative comes along. EK has a block coming in next month for the Strix. The BIOS flash works if you switch to the DP further right on the IO. The BIOS flashes just don't seem that effective. I'm also using Precision X1 since Afterburner doesn't let me push the memory slider past +1000MHz, although I really hate X1's VF curve. It's a cluttered mess. I'd prefer just a spreadsheet I could modify.


I'm not sure what you mean: did you flash the Galax BIOS on the Strix?
Did you flash it only over the P BIOS of the Strix, so that the Q BIOS remains the original Asus quiet 325W one after that?
Are they actually two different BIOSes?

If you are now able to bench Superposition (for example) stable above 1.018V, then it's working.
Which clock can you hold stable now?

On the original BIOS, with +160MHz on the core at 0 overvolt (and 0 on the RAM), the voltage throttles continuously between 1.018 and 0.981 in that scenario.
Two screenshots at +160 on the original BIOS attached.


----------



## Krzych04650

Citruschrome said:


> So anyone else running into issues not being able to boost the vcore from 1.062 up to 1.093 after flashing to the Galax 380w?


On my Sea Hawk X with the 380W Galax BIOS the max voltage is 1.068V. Not really important, because that is far too high to stay inside the power limit.

The max voltage I can use is 1.037V; then in the Time Spy Extreme stress test I am not throttling and usually stay around 120-124% PL, while 126% is the max. I can sustain a stable 2100MHz this way. I have to set the curve to 2145@1.037 and then it drops to 2130 almost immediately, then quickly to 2115 (which is stable at this voltage), and then to 2100 after the temperature stabilizes around 50-52°C at 20°C ambient.

It is pretty annoying that the card makes 3 clock drops before stabilizing (because the starting temp is about 25°C and the end temp is 50-52°C). I could use a much lower voltage for 2100MHz, but I have to account for the clock drops from GPU Boost, so I have to use a voltage that keeps 2145 and 2130MHz stable for a minute or so, before the card heats up and drops to 2115 and then 2100. Some games can even crash during loading because of the 2145. In this respect air-cooled cards are better: you can tell them to stay passive until 60°C and ramp the fans to 100% from 61°C, so the difference between the starting and end temperatures can be almost nothing, which means almost no temperature change and only one or at most two clock drops, while on my Sea Hawk I get almost a 25°C delta between idle and load. That's for normal ambient temps; I keep my PC in an unused, unheated room where the ambient temp right now is about 12°C, so the idle temp on the GPU is about 16°C and it will probably make another clock drop while heating up, for a total of 4.
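A note on the power-limit percentages above: monitoring tools report PL relative to the card's default TDP, so the watt figures depend on that baseline. Assuming a ~300W default for this card (an assumption; check your own card's default in GPU-Z), the conversion is straightforward:

```python
# Convert a reported power-limit percentage to watts, given the
# card's default TDP. The 300W default used below is an assumption
# for illustration; it varies per model and BIOS.
def pl_percent_to_watts(percent, default_tdp_watts=300):
    return percent * default_tdp_watts / 100

print(pl_percent_to_watts(126))  # 378.0 -> roughly the Galax 380W ceiling
print(pl_percent_to_watts(120))  # 360.0
```

With a ~300W baseline, the 126% maximum reported above lands almost exactly on the Galax BIOS's 380W limit, which is why 120-124% readings mean the card is sitting just under the cap.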


----------



## Miao

Krzych04650 said:


> On my Sea Hawk X with the 380W Galax BIOS the max voltage is 1.068V. Not really important, because that is far too high to stay inside the power limit.
> 
> The max voltage I can use is 1.037V; then in the Time Spy Extreme stress test I am not throttling and usually stay around 120-124% PL, while 126% is the max. I can sustain a stable 2100MHz this way. I have to set the curve to 2145@1.037 and then it drops to 2130 almost immediately, then quickly to 2115 (which is stable at this voltage), and then to 2100 after the temperature stabilizes around 50-52°C at 20°C ambient.
> 
> It is pretty annoying that the card makes 3 clock drops before stabilizing (because the starting temp is about 25°C and the end temp is 50-52°C). I could use a much lower voltage for 2100MHz, but I have to account for the clock drops from GPU Boost, so I have to use a voltage that keeps 2145 and 2130MHz stable for a minute or so, before the card heats up and drops to 2115 and then 2100. Some games can even crash during loading because of the 2145. In this respect air-cooled cards are better: you can tell them to stay passive until 60°C and ramp the fans to 100% from 61°C, so the difference between the starting and end temperatures can be almost nothing, which means almost no temperature change and only one or at most two clock drops, while on my Sea Hawk I get almost a 25°C delta between idle and load. That's for normal ambient temps; I keep my PC in an unused, unheated room where the ambient temp right now is about 12°C, so the idle temp on the GPU is about 16°C and it will probably make another clock drop while heating up, for a total of 4.


OK, but these are still unreachable results with a 320/325W BIOS.
The best result I was able to achieve in terms of throttling was one single drop (at 46°C or thereabouts), but from 1980 to 1965, so far off.

BTW... I'm happy to say that yes, the Strix really does have two BIOSes!!!
I didn't check before; sometimes I'm happy to be an idiot





Code:


nvflash64.exe --save StriX-P.rom
NVIDIA Firmware Update Utility (Version 5.527.0)
Copyright (C) 1993-2018, NVIDIA Corporation. All rights reserved.


Adapter: Graphics Device      (10DE,1E07,1043,866A) H:--:NRM  S:00,B:01,D:00,F:00



EEPROM ID (EF,6014) : WBond W25Q80EW 1.65-1.95V 8192Kx1S, page


Build GUID            : 8433269CD246402FBCFA8C3C93FDFEE1
IFR Subsystem ID      : 1043-866A
Subsystem Vendor ID   : 0x1043
Subsystem ID          : 0x866A
Version               : 90.02.0B.40.38
Image Hash            : 5780368517905C4D4F070C4506B2235C
Hierarchy ID          : Normal Board
Build Date            : 08/20/18
Modification Date     : 09/27/18
UEFI Version          : 0x50005
UEFI Variant ID       : 0x0000000000000009 ( Unknown )
UEFI Signer(s)        : Microsoft Corporation UEFI CA 2011
XUSB-FW Version ID    : 0x70060003
XUSB-FW Build Time    : 2018-08-29 01:55:05
InfoROM Version       : G001.0000.02.04
InfoROM Backup        : Present
License Placeholder   : Present
GPU Mode              : N/A

Saving of image completed.

---------------------------------------

nvflash64.exe --save Strix-Q.rom
NVIDIA Firmware Update Utility (Version 5.527.0)
Copyright (C) 1993-2018, NVIDIA Corporation. All rights reserved.


Adapter: Graphics Device      (10DE,1E07,1043,866A) H:--:NRM  S:00,B:01,D:00,F:00



EEPROM ID (EF,6014) : WBond W25Q80EW 1.65-1.95V 8192Kx1S, page


Build GUID            : 24137C1A46B74AE289ED866C2AD9261A
IFR Subsystem ID      : 1043-866A
Subsystem Vendor ID   : 0x1043
Subsystem ID          : 0x866A
Version               : 90.02.0B.00.8E
Image Hash            : A2D703FCF919FF5B5F26096D090FEA4B
Hierarchy ID          : Normal Board
Build Date            : 08/20/18
Modification Date     : 09/07/18
UEFI Version          : 0x50005
UEFI Variant ID       : 0x0000000000000009 ( Unknown )
UEFI Signer(s)        : Microsoft Corporation UEFI CA 2011
XUSB-FW Version ID    : 0x70060003
XUSB-FW Build Time    : 2018-08-29 01:55:05
InfoROM Version       : G001.0000.02.04
InfoROM Backup        : Present
License Placeholder   : Present
GPU Mode              : N/A

Saving of image completed.

so, I have something to screw up



EDIT: Q BIOS flashed with the "classic" Galax 380W; results later (I'll test all the outputs too, 2x DP + 2x HDMI). Bye!
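For anyone repeating the dual-BIOS backup above, the sequence is just two `--save` runs with the BIOS switch toggled in between, then the flash. The sketch below only builds the command lines rather than executing anything; the `-6` and `--protectoff` flags are the commonly used ones for cross-flashing, but they vary by nvflash version, so treat them as assumptions and check `nvflash64 --help` first:

```python
# Build the nvflash64 command lines for backing up both BIOSes of a
# dual-BIOS card and cross-flashing one of them. Commands are only
# constructed here, not run; execute them manually from an admin shell.
def backup_cmd(name):
    """Save the currently selected BIOS to <name>.rom."""
    return ["nvflash64", "--save", f"{name}.rom"]

def flash_cmd(rom, override_mismatch=True):
    """Flash a ROM; -6 is the usual override for a subsystem ID mismatch."""
    cmd = ["nvflash64"]
    if override_mismatch:
        cmd.append("-6")  # assumption: ignore PCI subsystem ID mismatch
    cmd.append(rom)
    return cmd

steps = [
    backup_cmd("StriX-P"),          # switch on P, save the performance BIOS
    backup_cmd("Strix-Q"),          # toggle the switch to Q, save the quiet BIOS
    ["nvflash64", "--protectoff"],  # disable EEPROM write protection
    flash_cmd("galax380.rom"),      # flash the Galax ROM onto the Q position
]
```

Keeping both saved ROMs is what makes the experiment above recoverable: if the cross-flash misbehaves, flashing the saved Q image back restores the stock configuration.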


----------



## Garrett1974NL

Fitzcaraldo said:


> Delta 8K with Heatkiller IV block


Impressive... and I take it you don't use liquid metal on the GPU die? (I do... it's awesome)


----------



## Miao

*Galax 380W Bios on Asus ROG StriX OC*

so, I tried the Galax 380W BIOS on the ROG Strix OC; I decided to sacrifice the Quiet BIOS on the performance altar...

now... the short version is: DON'T DO IT, unless you only need a single DP output.

The display outputs get totally screwed up: the card recognizes 1 DP, 1 HDMI (but not the right one; it takes the place of the 2nd DP, which no longer works), 1 Type-C and 2 DVI (!!! in place of the 2 HDMI, so nothing above 1080p is possible on those outputs), instead of the original 2 DP, 2 HDMI and 1 Type-C.

(Screenshots attached: the original outputs, and the faulty situation after the flash.)

The order is correct, so you completely lose one DP and two HDMI ports are limited to 1080p; in my case 4K on my OLED TV was no longer possible, the first reason to go back to the original P BIOS.

But I ran a few tests first:
-There are at least 30mV more to play with on the Galax BIOS; I was able to run some benches around 2100 @ 1.050V (nothing above 1.018 or so is possible with the original BIOSes). There is a lot to play with on the curve to get a good result, but if I can't use my favourite screen it's pointless.
-So, for example, with my daily config I only jumped from 9776 to 9928 in SP 1080 Extreme, not enough to justify the I/O cost. I usually play with a 9600-level config.

-BTW it's still hitting the power limit constantly, but there is some gain.
-If you do this, note that the middle fan will not run as before; it stops at low temps, so I suggest a manual fan curve.
-Just for the record, Asus Aura Sync still works.

I'll leave it on the Q switch, maybe for a future bench session, but for daily use I went back to the original P.

I still hope Asus will give us something better for this card, but from this quick test the FE-like BIOS doesn't seem to let this card do its best.

bye.


----------



## Lashmush

Hey, I am about to flash the Galax BIOS on my Palit 2080 Ti Dual. I was wondering, though, whether my motherboard, the Gigabyte Z270P-D3, can handle the power. My PSU is an EVGA 750W G2 SuperNOVA, so I think that should do it, but if my mobo can't deal with it, maybe I should go for a different BIOS? Does anyone know if my mobo makes the cut?
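As a rough sanity check on questions like this: per the PCIe spec the motherboard slot supplies at most 75W, and each 8-pin connector is rated for 150W, so nearly all of a 380W BIOS's draw comes through the PSU cables rather than the board. A sketch of the arithmetic (spec numbers only; real cards often pull well under 75W through the slot):

```python
# Spec-level power budget for a GPU with two 8-pin connectors:
# PCIe slot contributes up to 75W, each 8-pin up to 150W.
def board_power_budget(eight_pin_connectors=2, slot_watts=75, eight_pin_watts=150):
    return slot_watts + eight_pin_connectors * eight_pin_watts

budget = board_power_budget()
# 75 + 2*150 = 375W by spec. A 380W BIOS nudges past that on paper,
# which in practice means slightly exceeding connector ratings, not
# loading the motherboard slot any harder.
print(budget)  # 375
```

In other words, the slot's share is capped regardless of the BIOS, so the motherboard question mostly reduces to whether the board can reliably deliver its standard 75W, which any Z270 board should.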


----------



## Jpmboy

Miao said:


> so, I tried the Galax 380W BIOS on the ROG Strix OC; I decided to sacrifice the Quiet BIOS on the performance altar...
> 
> now... the short version is: DON'T DO IT, unless you only need a single DP output.
> 
> The display outputs get totally screwed up: the card recognizes 1 DP, 1 HDMI (but not the right one; it takes the place of the 2nd DP, which no longer works), 1 Type-C and 2 DVI (!!! in place of the 2 HDMI, so nothing above 1080p is possible on those outputs), instead of the original 2 DP, 2 HDMI and 1 Type-C.
> 
> (Screenshots attached: the original outputs, and the faulty situation after the flash.)
> 
> The order is correct, so you completely lose one DP and two HDMI ports are limited to 1080p; in my case 4K on my OLED TV was no longer possible, the first reason to go back to the original P BIOS.
> 
> But I ran a few tests first:
> -There are at least 30mV more to play with on the Galax BIOS; I was able to run some benches around 2100 @ 1.050V (nothing above 1.018 or so is possible with the original BIOSes). There is a lot to play with on the curve to get a good result, but if I can't use my favourite screen it's pointless.
> -So, for example, with my daily config I only jumped from 9776 to 9928 in SP 1080 Extreme, not enough to justify the I/O cost. I usually play with a 9600-level config.
> 
> -BTW it's still hitting the power limit constantly, but there is some gain.
> -If you do this, note that the middle fan will not run as before; it stops at low temps, so I suggest a manual fan curve.
> -Just for the record, Asus Aura Sync still works.
> 
> I'll leave it on the Q switch, maybe for a future bench session, but for daily use I went back to the original P.
> 
> I still hope Asus will give us something better for this card, but from this quick test the FE-like BIOS doesn't seem to let this card do its best.
> 
> bye.


Yeah, well, if the cards have a different IO panel layout you will have to hunt down the port translation... this applies no matter what the IO panel difference between the cards is. The same goes for the PCIe power rails. Not all cross-flashes are equal.


----------



## Dreamliner

Has anyone been able to get Battlefield V out of Nvidia? It sucks preorder owners had to deal with ship delays...twice...and Nvidia starts bundling a free game a month later.


----------



## Miao

Jpmboy said:


> Yeah, well, if the cards have a different IO panel layout you will have to hunt down the port translation... this applies whenever the IO panel layouts differ, no matter the cards. Also, same for the PCIe power rails. Not all cross-flashes are equal.


And so? I didn't get your point, or I misunderstood your suggestion.
In my daily use I have all four outputs attached, even if they don't all work at the same time. I have three main configurations: 1. a 4K TV on HDMI; 2. a DP G-Sync UWQHD monitor (sometimes with a second 1080p monitor for work in an extended layout); and 3. a 4K monitor over DP (to be replaced with a 32:9 49" soon). In that situation the second DP did not work at all and the HDMIs were limited to 1080p. Maybe I could swap whichever monitor I want to use at the moment onto the primary DP, but that's not what I paid for and it's pretty uncomfortable.

Maybe next time I boot with the secondary BIOS I can run some extra tests; any suggestions?


ps: btw, has anyone tried the 400 W MSI BIOS on other hardware?


----------



## Jpmboy

Simply compare the connectors on the IO panel of each card (the Galax HOF 2080 Ti vs your card). If they are not identical, then what you observed can happen, and you have to (literally) poke around to find the connectors that work in your config (the order in the BIOS will differ for each card). Once you go multi-monitor, a cross-flash of a BIOS meant to drive a different IO configuration will always cause this (this was posted in this thread some time ago). And that's so.


----------



## iRSs

Lashmush said:


> Hey, I am about to flash the Galax BIOS on my Palit 2080 Ti Dual. I was wondering if my motherboard, the Gigabyte Z270P-D3 can handle the power though? My PSU is an EVGA 750W G2 Supernova so I think that should do it but if my mobo can't deal with it, maybe I should go for a different BIOS? Anyone know if my mobo can make the cut?


You cannot flash any BIOS onto the Palit 2080 Ti Dual but its own, not even Palit's 2080 Ti Gaming Pro OC BIOS, which has a 330 W power limit. The only thing you can do is put liquid metal on the shunt resistors.
I did that and I never see the power limit in any game, the OCCT PSU test or 3DMark. The only thing that bothers me is that when I set the power target to max in Afterburner and run FurMark or Kombustor (ONLY those two), my computer powers off after a few seconds and then starts up again on its own.
I have a stock 8700K and a Seasonic Prime 750W Titanium.
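For context on why the shunt mod works: the card measures power from the voltage drop across small shunt resistors, and liquid metal bridging a shunt adds a parallel path, so the card sees a smaller drop and under-reports power. A minimal sketch of the arithmetic (the 5 mOhm shunt value is typical for these cards, and the LM path resistance is an assumed illustration, not a measurement):

```python
# Illustrative shunt-mod arithmetic; both resistance values are assumptions.
R_SHUNT = 0.005   # 5 mOhm stock shunt (typical value)
R_LM = 0.005      # assumed resistance of the liquid-metal bridge on top

def effective_resistance(r1: float, r2: float) -> float:
    """Parallel combination of the shunt and the LM bridge."""
    return (r1 * r2) / (r1 + r2)

def reported_power(actual_watts: float) -> float:
    """Power the card reports after the mod.

    The controller still assumes R_SHUNT, but the real voltage drop is
    set by the smaller parallel resistance, so the reading scales by
    R_eff / R_SHUNT.
    """
    r_eff = effective_resistance(R_SHUNT, R_LM)
    return actual_watts * (r_eff / R_SHUNT)

# With two equal resistances in parallel the card reads half the power:
# a real 320 W draw is reported as ~160 W, well under a 250 W limit,
# which is why the power limit "disappears" after the mod.
print(reported_power(320.0))
```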


----------



## Miao

Jpmboy said:


> Simply compare the connectors on the IO panel of each card (the Galax HOF 2080 Ti vs your card). If they are not identical, then what you observed can happen, and you have to (literally) poke around to find the connectors that work in your config (the order in the BIOS will differ for each card). Once you go multi-monitor, a cross-flash of a BIOS meant to drive a different IO configuration will always cause this (this was posted in this thread some time ago). And that's so.


I never thought otherwise; in fact it was the first thing I tested, and in my research I did not find anything with an increased PL and similar outputs (the only one that might differ a little is the Aorus with its combo ports, but that would probably just make the whole thing more complicated). Btw, the "classic" 380 W BIOS is not from a HOF but from a "simple FE-like KFA2/Galax Dual", or am I wrong?
The emphasis in my post came from the fact that the first post of this thread says a user flashed the Asus Turbo BIOS without problems, but he probably did not test the HDMI outputs.

Simply, if someone agrees, put a note in the flashing section of the first post: "possible, but not suggested because of the different output layouts".
_*edit: I was wrong here, the Turbo is an FE layout; it just drops one DP.*_
Then I misunderstood a lot of other stuff too, I suppose; sorry for that.

And if anyone knows of a card with a larger PL and outputs similar to the Strix (2 DP, 2 HDMI), I would be grateful if you would report it.

thanks.



iRSs said:


> you cannot flash any other bios on palit 2080 ti DUAL but it's own, not even the palit's 2080 ti gaming pro oc bios which has 330w power limit. the only thing that you can do is put liquid metal on shunt resistors.
> i did that and i'm never seeing power limit on any games, occt psu test or 3dmark. the only thing that bothers me is that when i start furmark with power target set to max in afterburner and run furmark or kombustor (ONLY these two) my computer powers off after few seconds then it starts up again on its own.
> i have stock 8700k and seasonic prime 750w titanium.


Why can't he flash? Isn't the Palit Dual an FE card?
edit: now I get it, it's a non-A chip.


----------



## ESRCJ

I'm considering a liquid metal shunt mod at this point. The Strix is a great card, but none of these other BIOSes mesh that well with it since it has its own custom PCB.


----------



## Miao

I considered it too, but atm I don't want to void my warranty.
Maybe in the future if I switch to a decent custom loop.

btw "Buildzoid" from AHOC has a lot of ideas and proposals about this in his videos. Take a look.


----------



## ESRCJ

Miao said:


> I considered it too, but atm I don't want to void my warranty.
> Maybe in the future if I switch to a decent custom loop.
> 
> btw "Buildzoid" from AHOC has a lot of ideas and proposals about this in his videos. Take a look.


I watched his Strix PCB breakdown about a month ago where he proposed a good method. I'm not very experienced with soldering, so I would probably stick with a LM shunt mod. I'd definitely rather have a better BIOS for this card though. My Titan X (Pascal) shunt mod with LM eventually wasn't as effective. I think the LM moved a little at some point when I was moving everything into a new chassis. Luckily, I didn't have any issues with the resistors falling off due to LM eating away the solder.


----------



## CallsignVega

Don't use LM. I had a shunt fall off my Titan due to the LM dissolving the solder on a shunt. Luckily NVIDIA honored the warranty. Use the hot glue method.


----------



## Zammin

Well, after asking Nvidia's tech about the failed 3DMark stress test results and OC Scanner being unstable, they insisted on sending me another card in advance, which I was quite surprised by. The replacement they sent me is a newer model with Samsung memory, so that's cool. This one still fails the Time Spy Extreme stress test on default settings with the same custom fan curve, so I guess it must just be due to temp variation. I got basically the same result as my first pass with the other card, slightly lower but within margin of error (less than 1%). I've got 2 weeks to test this card out before I have to send one back. The only issue I have with this card so far is that the fans have a bit of a whine; they are definitely noisier than my other card. Maybe not a huge deal if the card still overclocks at least as well as my other one, because I'll be putting a waterblock on it anyway, but slightly annoying still.


----------



## Zammin

Yep confirmed. One of the fans is faulty. I ran them both at a locked 50% where I could clearly hear the whine and stopped each one individually with my finger on the center. The second one is making the noise, the first one is quiet. Bloody hell lol.

I've also just confirmed it's a worse overclocker than my other one. No point keeping it. Definitely going to send it back for the faulty fan.


----------



## Zammin

I've put my old card back in for now. Something I noticed is that the newer one runs 2-3 degrees cooler under load. I wonder if it's a slight change in thermal interface materials in the newer ones, maybe something different about the newer BIOS or more than likely just coincidence. I can confirm though that the newer Samsung equipped cards come in a darker box and have a different BIOS to the Micron equipped cards.


----------



## lolhaxz

Zammin said:


> I've put my old card back in for now. Something I noticed is that the newer one runs 2-3 degrees cooler under load. I wonder if it's a slight change in thermal interface materials in the newer ones, maybe something different about the newer BIOS or more than likely just coincidence. I can confirm though that the newer Samsung equipped cards come in a darker box and have a different BIOS to the Micron equipped cards.



Typically a CPU or GPU will draw different amounts of current depending on its quality, and will also have slightly different voltage curves: more leakage = more power and typically higher clocks; lower leakage = less power and typically lower clocks. But it's far from predictable or any kind of general rule.

All BIOS ROMs contain the config for all memory ICs in use, so that's not the reason for the different BIOS; perhaps it's just a revision. Is it on TechPowerUp?


----------



## Zammin

lolhaxz said:


> Typically a CPU or GPU will draw different amounts of current depending on its quality, and will also have slightly different voltage curves: more leakage = more power and typically higher clocks; lower leakage = less power and typically lower clocks. But it's far from predictable or any kind of general rule.
> 
> All BIOS ROMs contain the config for all memory ICs in use, so that's not the reason for the different BIOS; perhaps it's just a revision. Is it on TechPowerUp?


Ah yes you are right in that all the BIOS roms are compatible with the different memory types. I had actually read that previously but forgot haha.

The newer BIOS is on TPU from memory but the last time I saw it there it was only just recently uploaded by a user and not verified by TPU.


----------



## dante`afk

Garrett1974NL said:


> Everyone who has their 2080Ti watercooled, what are your *delta* values?
> When my water is 30C, and the card is under 98/99% load, then the card is 40 or 41C, so I have a delta of 10 or 11C.
> I'm quite curious what delta values you could get with a fullcover from EK and stuff... I have a GPU only block from Koolance, the GPU-230
> Keep them coming guys


10C with a Bitspower block. Still waiting on my Heatkiller block; it's been almost a month.


----------



## iRSs

CallsignVega said:


> Don't use LM. I had a shunt fall off my Titan due to the LM dissolving the solder on a shunt. Luckily NVIDIA honored the warranty. Use the hot glue method.


well, I just soldered one of my shunts back on just now 
it's the one on the back of the card... I never had problems with LM, but that one is on the back and sometimes LM drips from it onto the solder; the ones on the front of the card are safe 

what hot glue method?


----------



## Jpmboy

iRSs said:


> you cannot flash any other bios on palit 2080 ti DUAL but it's own, not even the palit's 2080 ti gaming pro oc bios which has 330w power limit. the only thing that you can do is put liquid metal on shunt resistors.
> i did that and i'm never seeing power limit on any games, occt psu test or 3dmark. the only thing that bothers me is that when i start furmark with power target set to max in afterburner and run furmark or kombustor (ONLY these two) my computer powers off after few seconds then it starts up again on its own.
> i have stock 8700k and seasonic prime 750w titanium.


That's the PSU's OCP tripping. You either need to run it in single-rail mode (if possible) or use a different PSU. 750 W is enough, but a per-rail OCP will trip with this card (and some other cards). 
Vega is right. LM is a eutectic mixture of indium and gallium. The gallium WILL form a new amalgam with the metals in the solder... and lower its melting point (i.e., make it liquid at normal operating temperatures). There have been a number of users with 5 mOhm shunt resistors falling off. Just sayin'. It can happen. 




gridironcpj said:


> I'm considering a liquid metal shunt mod at this point. The Strix is a great card, but none of these other BIOS mesh that well with it since it has it's own custom PCB.


^^ see above. I can say that ASUS will not accept a card damaged that way back under warranty. What is it that you are trying to do that MUST have an increased PL?


iRSs said:


> well, I just soldered one of my shunts back on just now
> it's the one on the back of the card... I never had problems with LM, but that one is on the back and sometimes LM drips from it onto the solder; the ones on the front of the card are safe
> what hot glue method?


^^ solder is the way to go. :thumb:


----------



## iRSs

Jpmboy said:


> That's the PSU's OCP tripping. You either need to run it in single-rail mode (if possible) or use a different PSU. 750 W is enough, but a per-rail OCP will trip with this card (and some other cards).
> Vega is right. LM is a eutectic mixture of indium and gallium. The gallium WILL form a new amalgam with the metals in the solder... and lower its melting point (i.e., make it liquid at normal operating temperatures). There have been a number of users with 5 mOhm shunt resistors falling off. Just sayin'. It can happen.


My PSU has a single 12V rail, and the same happens on a friend's Corsair RM750i.


----------



## Shawnb99

I just ordered an EVGA 2080 Ti XC Ultra. It shipped out today.


----------



## Jpmboy

iRSs said:


> My PSU has a single 12V rail, and the same happens on a friend's Corsair RM750i.



Geeze, it must be close then. I have my 2080 Ti with the Galax BIOS on a Seasonic Prime Titanium 850W. No OCP (ASUS Z390 Maximus XI Extreme, 32GB, fans, pump, lol, and LEDs  )


----------



## Lashmush

iRSs said:


> You cannot flash any BIOS onto the Palit 2080 Ti Dual but its own, not even Palit's 2080 Ti Gaming Pro OC BIOS, which has a 330 W power limit. The only thing you can do is put liquid metal on the shunt resistors.
> I did that and I never see the power limit in any game, the OCCT PSU test or 3DMark. The only thing that bothers me is that when I set the power target to max in Afterburner and run FurMark or Kombustor (ONLY those two), my computer powers off after a few seconds and then starts up again on its own.
> I have a stock 8700K and a Seasonic Prime 750W Titanium.


Dammit, really? I've peaked at 2115 / 7850 MHz in Afterburner and I wanted to push more power into it. Isn't there a way to force the flash with nvflash, or will the card just brick because it's a non-A chip?

Damn, this is a bummer. No real point in getting watercooling for this card then.


----------



## NewType88

So I can run the Superposition benchmark just fine, but when I do the stress test I get this... some of the scenes render normally though. What could this mean?


----------



## boxrick

Hello! I came here trying to make my Zotac Gaming 2080 Ti go below 35% fan speed when idle. Which BIOS will let me reduce it down to 0, and is it a known-good BIOS?


----------



## Jpmboy

boxrick said:


> Hello! I came here trying to make my Zotac Gaming 2080 Ti go below 35% fan speed when idle. Which BIOS will let me reduce it down to 0, and is it a known-good BIOS?


none that I know of. Fact is, even at idle the card would get too hot with the fans off.


----------



## boxrick

Jpmboy said:


> none that I know of. Fact is, even at idle the card would get too hot with the fans off.


I have good ventilation, and 35% speed is far too loud; reducing it to 20% would be a great help when it's doing basically nothing. In the BIOS the fans spin noticeably slower, so something at driver level is enforcing the minimum speed.

Are there any NVFlash guides? I tried to flash a Gigabyte BIOS but I am getting the following:

WARNING: Firmware image PCI Subsystem ID (1458.37A9)
does not match adapter PCI Subsystem ID (19DA.1503).

NOTE: Exception caught.
Nothing changed!


----------



## boxrick

I flashed the Gigabyte BIOS using the -6 flag, which ignored the mismatch. That seems to stop two of the fans and leave one spinning slowly; much better! The card idles at about 45C rather than the old 35C, but it's well worth it for the noise difference.


----------



## Jpmboy

boxrick said:


> I flashed the Gigabyte BIOS using the -6 flag, which ignored the mismatch. That seems to stop two of the fans and leave one spinning slowly; much better! The card idles at about 45C rather than the old 35C, but it's well worth it for the noise difference.


Yep, -6 and "--protectoff" may be needed in some cases. Do all the fans still respond to manual control? What happens when you run the card hard?
You should recognize that when you flash a card and it results in loss of power to one fan and not another, there may be (and likely are) other control systems which the new (mismatched) BIOS addresses wrongly or not at all; that's just how that level of control code works. In general this is fine for bench sessions but not something I'd recommend for 24/7. An easy alternative for some 2- and 3-fan AIB boards (esp. EVGA FTW cards) is to use AB, which normally will address only one fan and not the other (while Precision X will address both) :thumb:


----------



## Zammin

shadow85 said:


> Actually yes, I forgot to mention that the Aorus one just came into stock, seems like it has to better than the gaming OC version yeah?
> 
> But I bet even by the time I do decide to get that one it will go out of stock lol


Hey man, dunno if you're still looking for a Strix in Aus but Mwave have stock right now: https://www.mwave.com.au/product/asus-geforce-rtx-2080-ti-rog-strix-11gb-video-card-ac18158


----------



## shadow85

Yay, my Sea Hawk X has shipped out to me today!

What are the max clocks people are seeing on the Sea Hawk X on the stock BIOS?

Is there any other BIOS I should use for 24/7 gaming, not benching numbers?


----------



## cstkl1

****ty MSI. Going to seriously take action against MSI Malaysia. There are laws on profiteering which are enforced in other industries, but for some reason the computer industry seems to be unaffected. They already earn 20% on the MSRP, yet on top of that they set another 15%, and we get taxed around 5% on the total cost. 

For Asus you can try two ways to claim: one from their claim site (no ROG login needed),
second, write to them via the ROG forum contact.

Asus is fair. All of us received the cards around October. Really a ****ty practice. The codes are given by Nvidia; everybody easily handed USD 200-300 free of charge to the retailers.


----------



## Zammin

cstkl1 said:


> ****ty MSI. Going to seriously take action against MSI Malaysia. There are laws on profiteering which are enforced in other industries, but for some reason the computer industry seems to be unaffected. They already earn 20% on the MSRP, yet on top of that they set another 15%, and we get taxed around 5% on the total cost.
> 
> For Asus you can try two ways to claim: one from their claim site (no ROG login needed),
> second, write to them via the ROG forum contact.
> 
> Asus is fair. All of us received the cards around October. Really a ****ty practice. The codes are given by Nvidia; everybody easily handed USD 200-300 free of charge to the retailers.


ASUS certainly haven't been fair about the Black Ops 4 codes. Many of us are being left in the dark as to why we can't get our codes. ASUS promotion support are ignoring me and a few others, and the guy at ASUS who I have been talking to (Adrian) has acknowledged that my products are eligible and I should be getting a code, but for some reason he hasn't given me one and he hasn't responded in over 5 days despite 3 emails being sent to him. Some people are taking some sort of legal action now that ASUS have stopped responding to them and others actually returned their products because they weren't getting the game they were supposed to get with the product.


----------



## Nico67

iRSs said:


> my psu has a single 12V rail and the same happens on corsair rm750i of a friend.



My 800W Titanium SilverStone SFX-L PSU did the same thing; it seemed to trip once it warmed up a bit. These cards don't draw 800W continuously, but the instantaneous demand can change so fast that the card suddenly wants 200W more, and some PSUs seem to see that transient as OCP or maybe even a short circuit. Seasonic recently had to own up to similar issues with their "Focus" range of PSUs, but I think whatever ATX spec we're on now may not account for the 2080 Ti's demands.

I am running a 1000W Seasonic Prime Titanium now and have had no issues; my older 1050W Platinum from my old PC was fine too, as was the 800W SFX + 650W SFX taking some of the load. So much for the 650W PSU recommendation. I think 850W+ is a safe bet, although I'm sure some brands may have slower trips that aren't tripping.

Still waiting to hear back on the RMA for the 800W SFX, but my guess is they will just replace it and the new one will be no better.
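The rail arithmetic shows why a brief spike can trip a unit that looks big enough on paper: a transient well above the rated board power can momentarily exceed the 12 V rail's OCP threshold even when the steady draw is comfortable. A rough illustration (the spike magnitude and OCP trip point are assumed round numbers, not measured values):

```python
# Rough OCP headroom check on the 12 V rail; numbers are illustrative
# assumptions, not measurements from any specific PSU or card.
RAIL_V = 12.0
OCP_AMPS = 45.0          # assumed OCP trip point for the rail

def rail_current(watts: float) -> float:
    """Current drawn from the 12 V rail for a given power."""
    return watts / RAIL_V

steady = rail_current(280.0)   # card sitting near its power limit
spike = rail_current(550.0)    # brief transient (assumed magnitude)

# The steady draw sits well under the trip point, but the transient
# exceeds it, which is enough for a fast-acting OCP to shut down.
print(f"steady {steady:.1f} A, spike {spike:.1f} A, OCP {OCP_AMPS} A")
```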


----------



## ENTERPRISE

shadow85 said:


> Yay, my Sea Hawk X has shipped out to me today!
> 
> What are the max clocks people are seeing on the Sea Hawk X on stock Bios.
> 
> Is there any other BIOS I should use for 24/7 gaming, not benching numbers?


Woop, not long for me either! I'll see if it is worth flashing another BIOS on these cards.


----------



## shadow85

ENTERPRISE said:


> Woop, not long for me either! I'll see if it is worth flashing another BIOS on these cards.


I have never flashed a BIOS on a GPU. Is it hard to do, and is it easy to brick?


----------



## cstkl1

Zammin said:


> ASUS certainly haven't been fair about the Black Ops 4 codes. Many of us are being left in the dark as to why we can't get our codes. ASUS promotion support are ignoring me and a few others, and the guy at ASUS who I have been talking to (Adrian) has acknowledged that my products are eligible and I should be getting a code, but for some reason he hasn't given me one and he hasn't responded in over 5 days despite 3 emails being sent to him. Some people are taking some sort of legal action now that ASUS have stopped responding to them and others actually returned their products because they weren't getting the game they were supposed to get with the product.


Really getting pissed off at all these money-grubbing companies... we all pay a fair amount and gave them a good profit. When the codes are bought in bulk I don't think it even fazes them.


----------



## cstkl1

shadow85 said:


> Yay, my Sea Hawk X has shipped out to me today!
> 
> What are the max clocks people are seeing on the Sea Hawk X on stock Bios.
> 
> Is there any other BIOS I should use for 24/7 gaming, not benching numbers?





ENTERPRISE said:


> Woop not long for me either ! Will see if it is worth flashing another BIOS on these cards.


guessing this is the PCIe 8-pin version with a 120mm cooler/pump, right..

then go for da Galax bios

if it's da EK X version, go for da 406 watt POWAH bios ( seriously, like another person said.. you only see it hitting TDP limits in Time Spy Graphics Test 2 and FurMark if you keep the temps below 60 on the stock bios.. )


----------



## Jpmboy

shadow85 said:


> I have never flashed a BIOS on GPU. Is it hard to do, and easy to brick?


No, it is not hard to do. A bad flash resulting in a "bricked" card is usually recoverable if you have any other NV card, or even using the iGPU. Detailed instructions are in this thread (somewhere). First, save your OEM BIOS with GPU-Z (easy). Then basically:
First disable the NV driver in Device Manager (pic below)

1) DL the nvflash version in the OP (x64.exe) and unzip it
2) place the new BIOS *in* the folder containing the exe.
3) open a command prompt (as admin) _in_ the folder. If you have not changed Win10 to add this to the file menu, google "add open command prompt here" and follow Brink's instructions to add this to the right-click context menu. *These instructions work fine too*.

4) once you have the command window open, type: _ nvflash64 --protectoff _(note: "nvflash64" needs to be the exact name of the .exe in the folder)

5) type nvflash64 -6 {newromname.rom}
6) answer yes each time asked (a capital Y may be required in some instances)
7) when finished, exit the command window and restart your PC.
Voila. BIOS flashed.
Put the OEM BIOS in the flash folder for safekeeping. 

I attached a txt file with instructions for 2 cards in SLI


----------



## VPII

Interestingly, I found that my card actually clocks better with the stock BIOS for the Galax 2080 Ti OC than with the HOF OC one. What I found is that it runs a lot hotter with the other BIOS, which seems partly because the max fan speed is lower with the HOF BIOS. Anyway, I'll stick to my stock BIOS. No TDP issues, only heat.


----------



## GraphicsWhore

cstkl1 said:


> if its da EK X version go for da 406 watt POWAH bios


What is this "POWAH" BIOS you speak of?


----------



## starmin

*Which RTX 2080 Ti to purchase?*

Hi Guys,

I'm looking at upgrading the ROG Strix 1080 Ti in my current system to a 2080 Ti. I know there is a lot of talk about the cooling and failure rates atm, so I'm trying to avoid that and get a card that isn't having any problems at the moment.

I would also like it to be suitable for water cooling, as I will probably go down that path in the near future.

Is the ROG Strix OC still the best card to get? Or does it not matter too much since I will be water cooling it soon?

Thanks for any help!


----------



## Nephalem89

Hi! Right now that BIOS is the best for my MSI 2080 Ti Sea Hawk EK. Thanks a lot!


----------



## shadow85

Jpmboy said:


> shadow85 said:
> 
> 
> 
> I have never flashed a BIOS on a GPU. Is it hard to do, and is it easy to brick?
> 
> 
> 
> No, it is not hard to do. A bad flash resulting in a "bricked" card is usually recoverable if you have any other NV card, or even using the iGPU. Detailed instructions are in this thread (somewhere). First, save your OEM BIOS with GPU-Z (easy). Then basically:
> First disable the NV driver in Device Manager (pic below)
> 
> 1) DL the nvflash version in the OP (x64.exe) and unzip it
> 2) place the new BIOS *in* the folder containing the exe.
> 3) open a command prompt (as admin) _in_ the folder. If you have not changed Win10 to add this to the file menu, google "add open command prompt here" and follow Brink's instructions to add this to the right-click context menu. *These instructions work fine too*.
> 
> 4) once you have the command window open, type: _ nvflash64 --protectoff _(note: "nvflash64" needs to be the exact name of the .exe in the folder)
> 
> 5) type nvflash64 -6 {newromname.rom}
> 6) answer yes each time asked (a capital Y may be required in some instances)
> 7) when finished, exit the command window and restart your PC.
> Voila. BIOS flashed.
> Put the OEM BIOS in the flash folder for safekeeping.
> 
> I attached a txt file with instructions for 2 cards in SLI

Thanks Jpmboy!!! Much appreciated.

So I may go for the Galax BIOS then.

Btw, why does the Galax BIOS seem to offer the highest PL? Aren't those only 2-fan cards?


----------



## Jpmboy

shadow85 said:


> Thanks Jpmboy!!! Much appreciated.
> 
> So I may go for the Galax BIOS then.
> 
> Btw, *why does the Galax BIOS seem to offer the highest PL*? Aren't those only 2-fan cards?


Simply because Galax wrote it that way. (We used to be able to do this ourselves.)


----------



## Zammin

starmin said:


> Hi Guys,
> 
> 
> 
> I'm looking at upgrading the ROG Strix 1080 Ti in my current system to a 2080 Ti. I know there is a lot of talk about the cooling and failure rates atm, so I'm trying to avoid that and get a card that isn't having any problems at the moment.
> 
> 
> 
> I would also like it to be suitable for water cooling as it will probably go down that path in the near future.
> 
> 
> Is the ROG strix OC still the best card to get? Or does it not matter to much since I will be water cooling it soon?
> 
> 
> 
> Thanks for any help!


Unfortunately there is no card that is 100% safe from failures. This includes all previous gens as well.

The Strix 2080Ti has probably the best air cooler but if you're looking to watercool it then that doesn't mean much. It has a beefed up VRM but a power limit that is about the same as an FE, and the reference VRM is already really good according to Buildzoid's analysis of the FE PCB, so again the overkill VRM probably won't help much unless you are doing LN2 or something with an XOC BIOS.

For watercooling it's probably best to get a reference design card as that will give you the largest range of waterblock options. Right now there is only one waterblock for the Strix and another coming soon, but there are plenty available for the reference designs already. I'm going to be watercooling mine as well which is why I just bought the FE.


----------



## Jbravo33

I’ll just leave this here. RMAing both. Not taking any chances. This is bull****. Cards have been on water since 24 hours after taking delivery. Weren't overclocked either. Shadow of the Tomb Raider kept crashing. On the 4th crash I saw this.


----------



## ESRCJ

Jbravo33 said:


> I’ll just leave this here. RMA ing both. Not taking any chances. This is bull****. Cards have been on water since 24 hours after taking delivery. Wasn’t overclocked either. Shadow of tomb raider kept crashing. On the 4th crash I see this.


Damn, that sucks. I'm hoping my Strix remains healthy. The WCCF bench-a-thon should be in a few weeks according to "AMD is for Intellectuals". Everyone here is welcome to participate.


----------



## Zammin

Jbravo33 said:


> I’ll just leave this here. RMA ing both. Not taking any chances. This is bull****. Cards have been on water since 24 hours after taking delivery. Wasn’t overclocked either. Shadow of tomb raider kept crashing. On the 4th crash I see this.


Are both cards dying? God damn dude, sorry to hear. I partially retract my previous statement about the number of cards dying lol.


----------



## Zammin

Man, the Nvidia staff are super helpful. Way better than the other computer parts companies I've dealt with in the past (ASUS, Acer and Bitspower to name a few). I told them the card they sent me had a faulty fan and the support guy was like "No worries, let me arrange another one for you."

Not to mention this one got here in 3 days from Hong Kong. Blazing fast delivery. Even though I have some minor concerns with the QC on some of the FEs and their cooler (which obviously won't matter once watercooled), I'm glad Nvidia are super easy to deal with.


----------



## Jbravo33

gridironcpj said:


> Damn that sucks. I'm hoping my Strix remains healthy. The WCCF bench-a-thon should be in a few weeks according to AMD is for Intellectuals. Everyone here is welcome to participate.


Yup. Yippie sooooo happppy 



Zammin said:


> Are both cards dying? God damn dude, sorry to hear. I partially retract my previous statement about the number of cards dying lol.


I know right. I’m not sure but I’m sending both back. No matter what they say they are getting both in the box. Same box they came in. Their chat representative sucked. Said we need to troubleshoot and test further I said absolutely not. I’m not wasting my time testing. It’s defective. Bottom line. I still own a titan V and 3 Xp’s one of them a collectors. I wouldn’t be wasting my time talking to you if I thought it was something I could troubleshoot. No disrespect. Goes to ask me what motherboard and power supply. Rampage vi extreme and evga 1600t2. What else? Then tells me “you’ll get an email soon thanks bye” like what? lol. 
This tells me the issue is probably bigger than what I originally thought it was. Ticking time bombs. xD


----------



## Jbravo33

Zammin said:


> Man, the Nvidia staff are super helpful. Way better than the other computer parts companies I've dealt with in the past (ASUS, Acer and Bitspower to name a few). I told them the card they sent me had a faulty fan and the support guy was like "No worries, let me arrange another one for you."
> 
> Not to mention this one got here in 3 days from Hong Kong. Blazing fast delivery. Even though I have some minor concerns with the QC on some of the FEs and their cooler (which obviously won't matter once watercooled), I'm glad Nvidia are super easy to deal with.


Lol I wish. My guy/girl was terrible.


----------



## ReFFrs

Jbravo33 said:


> I'll just leave this here. RMA'ing both. Not taking any chances. This is bull****. Cards have been on water since 24 hours after taking delivery. Wasn't overclocked either. Shadow of the Tomb Raider kept crashing. On the 4th crash I see this.


What is the model and manufacturer of your cards? By looking at your post history it's not clear either. Was it Samsung or Micron memory?


----------



## skingun

I had to RMA my cards. Dealt directly with Nvidia. Service was excellent.


----------



## Zammin

Jbravo33 said:


> Yup. Yippie sooooo happppy
> 
> 
> 
> I know right. I'm not sure, but I'm sending both back. No matter what they say, they are getting both in the box, same box they came in. Their chat representative sucked. Said we need to troubleshoot and test further; I said absolutely not. I'm not wasting my time testing. It's defective, bottom line. I still own a Titan V and 3 Xp's, one of them a collector's. I wouldn't be wasting my time talking to you if I thought it was something I could troubleshoot. No disrespect. Then he asks me what motherboard and power supply. Rampage VI Extreme and EVGA 1600 T2. What else? Then tells me “you'll get an email soon, thanks, bye”. Like, what? lol.
> This tells me the issue is probably bigger than what I originally thought it was. Ticking time bombs. xD





Jbravo33 said:


> Lol I wish. My guy/girl was terrible.


Ah yes, I know what you're talking about: when you open a live chat you have to talk to a monkey before they pass you on to a "Level 2 Tech". I had the same experience with the live chat person, but the tech support person who emails you will likely be way more helpful. I started with just some minor concerns and he was like "No problem, give me your details and I'll send you another card". Unfortunately the replacement has a dud fan, but as soon as I told him that he said he'd get another one to me. Let us know how you go once you get that email, and don't send both cards away beforehand, because they will likely send you the new ones in advance. If one of your cards is still good you can probably keep using it until your replacements arrive. 



skingun said:


> I had to RMA my cards. Dealt directly with Nvidia. Service was excellent.


Good to hear. (the service I mean, not that you had to RMA haha )


----------



## Jbravo33

ReFFrs said:


> What is the model and manufacturer of your cards? By looking at your post history it's not clear either. Was it Samsung or Micron memory?


Founders editions. Purchased second week of October. Micron memory. 



Zammin said:


> Ah yes, I know what you're talking about: when you open a live chat you have to talk to a monkey before they pass you on to a "Level 2 Tech". I had the same experience with the live chat person, but the tech support person who emails you will likely be way more helpful. I started with just some minor concerns and he was like "No problem, give me your details and I'll send you another card". Unfortunately the replacement has a dud fan, but as soon as I told him that he said he'd get another one to me. Let us know how you go once you get that email, and don't send both cards away beforehand, because they will likely send you the new ones in advance. If one of your cards is still good you can probably keep using it until your replacements arrive.
> 
> 
> 
> Good to hear. (the service I mean, not that you had to RMA haha )


I can live with that. Crossing fingers for a normal “level 2 tech” xD

Best part was there'd be a 3-4 minute delay between replies. I'm like “hello?” and his reply is “sorry for the delay in response. I'm assisting other users at the same time, please bear with me”. WAIT WHAT?? Lol, I screenshotted the whole thing.


----------



## Zammin

Jbravo33 said:


> Founders editions. Purchased second week of October. Micron memory.
> 
> 
> I can live with that. Crossing fingers for a normal “level 2 tech” xD
> 
> Best part was there'd be a 3-4 minute delay between replies. I'm like “hello?” and his reply is “sorry for the delay in response. I'm assisting other users at the same time, please bear with me”. WAIT WHAT?? Lol, I screenshotted the whole thing.


Lol yeah, I'm pretty sure the live chat people are just generic call center staff who are there to filter out the most basic of basic issues and direct anything more complicated than the "unplug it and plug it back in" type scenarios to the appropriate channels. I think it's just a hoop we need to jump through to get to the more case-specific, knowledgeable staff/techs.

Hope it goes well for you man. The slowest part for me was communication, me being in Australia and talking with American staff, so I'd send them an email during the day and they'd reply sometime early the next morning or the day after, and so on. Once the replacement ships it arrives in like 2-3 days.


----------



## Jpmboy

Jbravo33 said:


> I'll just leave this here. RMA'ing both. Not taking any chances. This is bull****. Cards have been on water since 24 hours after taking delivery. Wasn't overclocked either. Shadow of the Tomb Raider kept crashing. On the 4th crash I see this.


damn bro. what a PITA. That is definitely the RAM or the memory controller. sucks. Good thing you kept the V. 


gridironcpj said:


> Damn that sucks. I'm hoping my Strix remains healthy.* The WCCF bench-a-thon* should be in a few weeks according to AMD is for Intellectuals. Everyone here is welcome to participate.


 what is that?
edit - nvm.


----------



## Ryan Kopp

Not sure if this is the right place or if this has been discussed, but other non-FE card thermals are welcome. Water cooled excluded. I don't care about your temps, I'm jealous, and I'm never going air cooling again after the next series launches. Air cooled cards look better though, so 😛.

Anyways, this is in regards to the MSI Gaming X Trio... I redid the thermal paste with Kryonaut for better performance, but I don't remember my stock temps, and I didn't OC the card much or ever really run 100% fans during that short time period. I just remember it was a bit too hot for my liking...

What temps are my fellow owners running under load at 100% fans? Please specify open air or in a closed case, and your OC specs.

With my side panel off I'm at 60°C; with it on I'm at 65°C, full load, 100% fan speed.

720/140 OC also. I hate Micron. T_T..


----------



## ESRCJ

Jpmboy said:


> damn bro. what a PITA. That is definitely the RAM or the memory controller. sucks. Good thing you kept the V.
> 
> what is that?
> edit - nvm.


You should join us. The host named "AMD is for Intellectuals" could use some competition.


----------



## Jpmboy

gridironcpj said:


> You should join us. The host named "AMD is for Intellectuals" could use some competition.


 link to the _intellectual_? I only found the webpage via google.


https://wccftech.com/wccftech-community-bench-thon-world-tanks/


----------



## ESRCJ

Jpmboy said:


> link to the _intellectual_? I only found the webpage via google.
> 
> 
> https://wccftech.com/wccftech-community-bench-thon-world-tanks/


It's this guy: https://twitter.com/AMDIntellectual

He's a frequent user on WCCF. He is hosting the next bench-a-thon on WCCF in a few weeks. It'll be a suite of 3dmark benchmarks. There should be a corresponding article with the bench-a-thon in a few weeks.


----------



## Jpmboy

gridironcpj said:


> It's this guy: https://twitter.com/AMDIntellectual
> 
> He's a frequent user on WCCF. He is hosting the next bench-a-thon on WCCF in a few weeks. It'll be a suite of 3dmark benchmarks. There should be a corresponding article with the bench-a-thon in a few weeks.


 okay. I'll take a look. (I don't twitter around tho.  )


tried the tank bench with my Titan V @ 1440P (daily settings)...


----------



## ESRCJ

Jpmboy said:


> okay. I'll take a look. (I don't twitter around tho.  )
> 
> 
> tried the tank bench with my Titan V @ 1440P (daily settings)...


Nice score. You would have won if you competed. This was the 1440p leaderboard from when we had that bench-a-thon (I'm Bowser on the leaderboard). I ended up beating my old score a bit later though.


----------



## Jpmboy

that's a damn good score with the TXP!


----------



## Gary2015

skingun said:


> I had to RMA my cards. Dealt directly with Nvidia. Service was excellent.


Same here... artifacts... had to RMA... 2080 Ti FE


----------



## taem

Well, the FTW3 was back in stock so I ordered one. I was strongly tempted by the Strix -- it looks to be the quietest model -- and I'll probably be disappointed by the FTW3's fan noise. Some reviews say the FTW3 is dead silent but I'm dubious; EVGA doesn't seem to prioritize noise in its cards imho. But the FTW3 had some things going for it:

1. EVGA. If you've ever dealt with EVGA customer support this will be a major pro.
2. Step up. Assuming FTW3 Hybrid releases next year I can upgrade.
3. 373W power limit. No need to flash the bios.
4. Has a fan header just like Strix, though only one.
5. Has that L-shaped adapter that lets you plug in power cables from the back instead of the side. Depending on your case and card this is very nice. Some cases like the Lian Li PC-O11 won't fit this card comfortably without that adapter; the power cables will be squished against the side panel.
6. Huge fin stack. Cooling ought to be equal to Strix.
7. EVGA has their own block. Dunno how EVGA blocks compare to EK etc, and I certainly would have preferred a Heatkiller, but I think I can take it on faith the Hydro Copper is a decent block. Be nice also to have card and block come from same manufacturer for fit issues and questions.


I wonder if there are any external gpu enclosures that can fit this though. Razer Core X says it fits triple slot cards so hopefully that will work, in case I demote this card in the near future.



Anyone already have their FTW3? How is it?


----------



## acmilangr

taem said:


> Well, the FTW3 was back in stock so I ordered one. I was strongly tempted by the Strix -- it looks to be the quietest model -- and I'll probably be disappointed by the FTW3's fan noise. Some reviews say the FTW3 is dead silent but I'm dubious; EVGA doesn't seem to prioritize noise in its cards imho. But the FTW3 had some things going for it:
> 
> 1. EVGA. If you've ever dealt with EVGA customer support this will be a major pro.
> 2. Step up. Assuming FTW3 Hybrid releases next year I can upgrade.
> 3. 373W power limit. No need to flash the bios.
> 4. Has a fan header just like Strix, though only one.
> 5. Has that L-shaped adapter that lets you plug in power cables from the back instead of the side. Depending on your case and card this is very nice. Some cases like the Lian Li PC-O11 won't fit this card comfortably without that adapter; the power cables will be squished against the side panel.
> 6. Huge fin stack. Cooling ought to be equal to Strix.
> 7. EVGA has their own block. Dunno how EVGA blocks compare to EK etc, and I certainly would have preferred a Heatkiller, but I think I can take it on faith the Hydro Copper is a decent block. Be nice also to have card and block come from same manufacturer for fit issues and questions.
> 
> 
> I wonder if there are any external gpu enclosures that can fit this though. Razer Core X says it fits triple slot cards so hopefully that will work, in case I demote this card in the near future.
> 
> 
> 
> Anyone already have their FTW3? How is it?


Link? Where did you order from?


----------



## taem

acmilangr said:


> Link? Where did you order from?



EVGA. https://www.evga.com/products/product.aspx?pn=11G-P4-2487-KR

Sold out again though. Was selling out in seconds for the longest time, then it was in stock for several hours yesterday.


----------



## acmilangr

It seems that due to the larger power limit, the EVGA FTW3 is the fastest card for overclocking; the reports I have seen are about 2100MHz, while the Strix is about 2040-2080MHz.

Of course it all comes down to the silicon lottery, but it seems the higher power limit helps the EVGA be a little better.


----------



## Jbravo33

So I finally heard back from an Nvidia rep and not Digital River. They are replacing both cards, no questions asked. So glad to hear the good stories about their service are factual.


----------



## Jpmboy

Jbravo33 said:


> So I finally heard back from an Nvidia rep and not Digital River. They are replacing both cards, no questions asked. So glad to hear the good stories about their service are factual.


good to know. I've had nothing but good experiences with nvidia service. fact is, these consumer gaming cards are a very small portion of their top (and bottom) line these days. But they are still a positive...


----------



## Redwoodz

You guys should be returning all REFERENCE PCB RTX GPUs. Ticking time bomb, just waiting to burn your house down. https://hardforum.com/threads/evga-2080-ti-xc-burst-into-flame.1971627/


----------



## Murlocke

I don't know if this is an HDMI handshaking issue, a driver issue, or a faulty card. My 2080 Ti sometimes produces black artifacts like this in multiple games; it usually only happens in low resolution games and menu systems. This is a No Man's Sky loading screen. I experience absolutely no issues in the games themselves... The artifacts do not show up in screenshots, so I have to be quick with a camera to capture them.

I've tried stock clocks and the latest drivers. Has no one else experienced this on the new 2080 series?



Redwoodz said:


> You guys should be returning all REFERENCE PCB RTX GPUs. Ticking time bomb, just waiting to burn your house down. https://hardforum.com/threads/evga-2080-ti-xc-burst-into-flame.1971627/


Return them for what? Until it happens to someone else there's no evidence of that. They would order a recall ASAP if there was a serious issue like that; people could die.


----------



## kot0005

Redwoodz said:


> You guys should be returning all REFERENCE PCB RTX GPUs. Ticking time bomb, just waiting to burn your house down. https://hardforum.com/threads/evga-2080-ti-xc-burst-into-flame.1971627/


It's one card, and I bet it was user error. He probably shorted it.


----------



## GraphicsWhore

Redwoodz said:


> You guys should be returning all REFERENCE PCB RTX GPUs. Ticking time bomb, just waiting to burn your house down. https://hardforum.com/threads/evga-2080-ti-xc-burst-into-flame.1971627/


Returning them under what pretense? That they might fail? Lol ok.


----------



## Jpmboy

Redwoodz said:


> You guys should be returning all REFERENCE PCB RTX GPUs. Ticking time bomb, just waiting to burn your house down. https://hardforum.com/threads/evga-2080-ti-xc-burst-into-flame.1971627/


 you should return yours. 




kot0005 said:


> It's one card, and I bet it was user error. He probably shorted it.


i doubt bravo shorted the card.


----------



## krel

Ordered an FTW3 to replace my pair of Titan Xm cards, getting tired of fighting with the combination of SLI and surround - I think I'm probably done with multiGPU. Looking forward to getting the card; anything I should know/quick summary rather than having to read 4000+ posts?  I did read the OP, of course.


----------



## Emmett

I think my second Founders card took a dump this morning. Was running RealBench. Froze on the OpenCL part, 53°C at the time. Rebooted to several black screens.
Could only get to Windows in safe mode.
Error code 43. I also got this blue screen error.

"Video memory error (internal)". What?

This is the second card with NVLink.
Pulled it and the system is back to normal with a single card. Gonna toss it in my second machine tomorrow. So annoying. Should have skipped this gen.


----------



## GraphicsWhore

krel said:


> Ordered an FTW3 to replace my pair of Titan Xm cards, getting tired of fighting with the combination of SLI and surround - I think I'm probably done with multiGPU. Looking forward to getting the card; anything I should know/quick summary rather than having to read 4000+ posts?  I did read the OP, of course.


Don't use the Precision X1 software - it sucks ass.


----------



## kot0005

Emmett said:


> I think my second Founders card took a dump this morning. Was running RealBench. Froze on the OpenCL part, 53°C at the time. Rebooted to several black screens.
> Could only get to Windows in safe mode.
> Error code 43. I also got this blue screen error.
> 
> "Video memory error (internal)". What?
> 
> This is the second card with NVLink.
> Pulled it and the system is back to normal with a single card. Gonna toss it in my second machine tomorrow. So annoying. Should have skipped this gen.



Or just dump SLI. It's dead..


----------



## Emmett

kot0005 said:


> Emmett said:
> 
> 
> > I think my second Founders card took a dump this morning. Was running RealBench. Froze on the OpenCL part, 53°C at the time. Rebooted to several black screens.
> > Could only get to Windows in safe mode.
> > Error code 43. I also got this blue screen error.
> > 
> > "Video memory error (internal)". What?
> > 
> > This is the second card with NVLink.
> > Pulled it and the system is back to normal with a single card. Gonna toss it in my second machine tomorrow. So annoying. Should have skipped this gen.
> 
> Or just dump SLI. It's dead..

Yep. I thought nvlink might be different.

I thought it might "Just Work"


----------



## Zammin

Jbravo33 said:


> So I finally heard back from an Nvidia rep and not Digital River. They are replacing both cards, no questions asked. So glad to hear the good stories about their service are factual.


That's great news. Glad it went smoothly for you as well. 



Redwoodz said:


> You guys should be returning all REFERENCE PCB RTX GPUs. Ticking time bomb, just waiting to burn your house down. https://hardforum.com/threads/evga-2080-ti-xc-burst-into-flame.1971627/


Not sure if this was meant to be a joke but if not, yeah like the others said it's only one case and if I had to guess I'd say maybe one of the components on that end of the card was faulty and blew out. It's unfortunate but with it being the only case currently recorded I don't think we are going to see many more cards bursting into flames hehe


----------



## xkm1948

EVGA hybrid kit for 2080/Ti is now available. Any brave souls?

https://www.evga.com/products/productlist.aspx?type=18&family=Cooling&chipset=GPU+HYBRID+Cooler


----------



## Esenel

My Founders (03238) has been running, 1-2 hours a day on average, since October 5th with +145 core and +1000 memory, without any issues.

Drivers don't count 😄

So please don't generalize from a few incidents while a lot of the cards, most even, are still working fine.

Maybe the first batch had an issue.
But then I will RMA. So what.

So long.


----------



## Spiriva

Vulkan Developer driver 417.17

https://developer.nvidia.com/vulkan-beta-41717-windows-10


----------



## Redwoodz

It's not one, there have been several, and the PCBs are likely at fault. They don't all erupt in flames; most just artifact. I posted it because it's being hushed up. Your choice.... carry on.
https://www.eteknix.com/geforce-rtx-2080-ti-de-listed-on-nvidias-online-store/


----------



## Jpmboy

nothing is being hushed up. if you read through this thread, you'd realize it. C'mon man. You show up months after this was news. BTW - did you send your 2080 Tis back?


----------



## wheatpaste1999

Redwoodz said:


> It's not one, there have been several, and the PCBs are likely at fault. They don't all erupt in flames; most just artifact. I posted it because it's being hushed up. Your choice.... carry on.
> https://www.eteknix.com/geforce-rtx-2080-ti-de-listed-on-nvidias-online-store/


Try again. Straight from the page you linked:

Update 18/11/2018 20:29 GMT – A spokesperson from Nvidia has reached out to us to dismiss this story. Stating that the item is simply out of stock. Instead of showing as “out of stock”, the NVIDIA website simply removes the item from the store, which led us to believe that the item had been de-listed. This is false and we apologise for supplying false information. We’ll update again as any information arrises to that fact, or otherwise.


----------



## ThrashZone

Hi,
Yeah, weird that Nvidia doesn't do what most do and just say "out of stock" or notify.. how tough is that anyway?


----------



## ThrashZone

xkm1948 said:


> EVGA hybrid kit for 2080/Ti is now available. Any brave souls?
> 
> https://www.evga.com/products/productlist.aspx?type=18&family=Cooling&chipset=GPU+HYBRID+Cooler


Hi,
Boy, I hope it turns out better than the 1080 Ti version; that one was defective and fried quite a few cards.


----------



## TobbbeSWE

*Non-A?*

What does NON-A mean? In the graphics-card list.


----------



## dante`afk

I created this sheet to keep track about 2080Ti defect rate through various forums.

If you wanna sign up: https://docs.google.com/spreadsheets/d/11gQ2rkQP8-857W_VEAHs_saSvZoTr4gpvDi_I24FrgE/edit#gid=0

Maybe OP can put it into the start post.


----------



## Jpmboy

TobbbeSWE said:


> What does NON-A mean? In the graphics-card list.


factory non-OC chip. AFAIK, you can't flash this version with a different bios. But it means nothing regarding performance and gaming overclock-ability (beyond the usual stock bios power limit thing)


----------



## TobbbeSWE

Well that sucks! I had no idea that there were cards like that. Now it feels like I maybe need to change cards. 
Really? They can't be flashed?


----------



## Jpmboy

It can only be flashed with the OEM bios. I've not seen a successful cross-flash (vendor or SKU). So far we've been unable to override the "mismatch"


----------



## TobbbeSWE

Jpmboy said:


> It can only be flashed with the OEM bios. I've not seen a successful cross-flash (vendor or SKU). So far we've been unable to override the "mismatch"


So another Palit bios won't work either, I guess.

What about editing the existing bios with an editor tool? I don't think there is a Turing editor out yet, but they usually pop up some time after launch.


----------



## Jpmboy

TobbbeSWE said:


> So another Palit bios won't work either, I guess.
> 
> What about editing the existing bios with an editor tool? I don't think there is a Turing editor out yet, but they usually pop up some time after launch.


lol - we still don't have a Pascal bios editor...


----------



## TobbbeSWE

Jpmboy said:


> lol - we still don't have a Pascal bios editor...


Oh I see.. I've been away from the game for a long time. Last time I visited OCN was when I hardmodded reference PCBs on two 780 Tis and pushed them to 1500MHz on water.. good times!

I've found this. Guess you read it already.

https://www.techpowerup.com/247660/...overclocking-forbidden-on-the-cheaper-variant


----------



## TobbbeSWE

Jpmboy said:


> It can only be flashed with the OEM bios. I've not seen a successful cross-flash (vendor or SKU). So far we've been unable to override the "mismatch"


Could it hurt to try flashing with for example the Palit OC bios? 

This is my card. 

https://www.techpowerup.com/gpuz/w5mf5

Card Name:	NVIDIA GeForce RTX 2080 Ti
Owner:	TobbbeSWE	Submitted:	Nov 29th, 2018 17:33
GPU:	TU102	Revision:	A1
BIOS Version:	90.02.0B.40.0B	
Device ID:	10DE - 1E04
Subvendor:	NVIDIA (10DE - 1E04)


----------



## Jpmboy

TobbbeSWE said:


> Could it hurt to try flashing with for example the Palit OC bios?
> 
> This is my card.
> 
> https://www.techpowerup.com/gpuz/w5mf5
> 
> Card Name: NVIDIA GeForce RTX 2080 Ti
> Owner: TobbbeSWE Submitted: Nov 29th, 2018 17:33
> GPU: TU102 Revision: A1
> BIOS Version: 90.02.0B.40.0B
> Device ID: 10DE - 1E04
> Subvendor: NVIDIA (10DE - 1E04)


Why not try? The worst thing that happens is NVFlash reports that it cannot complete the operation.


----------



## TobbbeSWE

Jpmboy said:


> Why not try? The worst thing that happens is NVFlash reports that it cannot complete the operation.


Ok! It's been a few years since I used NVFlash, and that was with EZFlash. Now I can't get my head around how it works. Do you know of a foolproof guide?


----------



## Barefooter

@zhrooms can you add me to the owners list please? EVGA RTX 2080 Ti XC Gaming.

They still need waterblocks.










If you guys haven't seen my build log yet you can click on the link in my sig.

Getting close to the end now :thumb:


----------



## BudgieSmuggler

*Power limit and Galax bios flashing on Zotac amp 2080ti*

Hi, I've successfully flashed the bios on my Zotac 2080 Ti AMP with the Galax bios but I'm not seeing any difference. Clocks still settle around 2090MHz. Games do feel slightly sharper, but that could be the placebo effect. Any suggestions on what else to do besides going to the max power of 112% on this bios? Thanks. Lovin' the 2080 Ti btw, just not the price. Ha
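For anyone wondering what that percentage means in watts: the power-limit slider is just a multiplier on the bios's default power target. A quick sketch below, with made-up numbers — the 300 W default is a placeholder, check your own card's actual power target in GPU-Z before reading anything into it:

```python
# Toy example: convert a bios power-limit percentage to watts.
# The 300 W default target is a placeholder, not the real Galax value;
# read your card's actual default power target from GPU-Z.
def power_limit_watts(default_target_w, percent):
    # The slider is a straight percentage of the default target.
    return default_target_w * percent / 100.0

print(power_limit_watts(300, 112))  # 112% of a 300 W target -> 336.0 W
```

So a bigger max percentage only helps if the bios's base target is high to begin with, which is why people chase the high-power-limit bioses in the first place.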


----------



## dante`afk

I had to lock the sheet since some idiot went ahead and set the defect rate to 100% across the board. If anyone wants in there, just PM me with the format from the list.

https://docs.google.com/spreadsheets/d/15ITRevhey3EvH2uxzcJGmqCycWGkK40W8x3UojY_22g/edit#gid=0


----------



## GraphicsWhore

TobbbeSWE said:


> Ok! It's been a few years since I used NVFlash, and that was with EZFlash. Now I can't get my head around how it works. Do you know of a foolproof guide?


Download nvflash and the BIOS; put the BIOS in the same folder as nvflash.

Open cmd prompt, change directory to the nvflash folder.

1st command: nvflash64 --protectoff
2nd command: nvflash64 -6 BIOSNAME.rom

Hit "y" when prompted.

Restart.

Driver might reinstall itself via Windows.
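If you'd rather script those steps, here's a rough Python sketch of the same procedure. It's a dry run by default (it only prints the commands), `BIOSNAME.rom` is a placeholder for your actual BIOS file, and nvflash64 has to sit in the same folder, exactly as described above:

```python
# Rough sketch of the flashing steps above, not an official tool.
# Dry run by default: it only prints the commands it would execute.
import subprocess

def flash_commands(bios_rom):
    # The same two commands from the post: disable write protection,
    # then flash with -6 to override the board/SKU mismatch check.
    return [
        ["nvflash64", "--protectoff"],
        ["nvflash64", "-6", bios_rom],
    ]

def flash(bios_rom, dry_run=True):
    for cmd in flash_commands(bios_rom):
        print(" ".join(cmd))
        if not dry_run:
            # Real flash: nvflash still asks for confirmation, hit "y".
            subprocess.run(cmd, check=True)

flash("BIOSNAME.rom")  # dry run: just prints the two nvflash commands
```

Restart afterwards as the post says; Windows may reinstall the driver on its own.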


----------



## rush2049

dante`afk said:


> I had to lock the sheet since some idiot went ahead and set the defect rate to 100% across the board. If anyone wants in there, just PM me with the format from the list.
> 
> https://docs.google.com/spreadsheets/d/15ITRevhey3EvH2uxzcJGmqCycWGkK40W8x3UojY_22g/edit#gid=0


We already have one going: https://docs.google.com/spreadsheets/d/1BHwsdI5cCaUUQu0vRtHgGDGrG-3Vf7_b82KX_lZ4-9M/edit?usp=sharing
This is sourced from users on OCN, Geforce Forums, and Reddit (and anyone who found the link).

I keep it fairly clean and make data backups every so often.


----------



## ThrashZone

BudgieSmuggler said:


> Hi, I've successfully flashed the bios on my Zotac 2080 Ti AMP with the Galax bios but I'm not seeing any difference. Clocks still settle around 2090MHz. Games do feel slightly sharper, but that could be the placebo effect. Any suggestions on what else to do besides going to the max power of 112% on this bios? Thanks. Lovin' the 2080 Ti btw, just not the price. Ha


Hi,
You probably won't see any difference unless you put a water block on it and cool the beast.


----------



## dante`afk

rush2049 said:


> We already have one going: https://docs.google.com/spreadsheets/d/1BHwsdI5cCaUUQu0vRtHgGDGrG-3Vf7_b82KX_lZ4-9M/edit?usp=sharing
> This is sourced from users on OCN, Geforce Forums, and Reddit (and anyone who found the link).
> 
> I keep it fairly clean and make data backups every so often.


oh thanks!


----------



## Jpmboy

TobbbeSWE said:


> Ok! It's been a few years since I used NVFlash, and that was with EZFlash. Now I can't get my head around how it works. Do you know of a foolproof guide?


you got the instructions. Also posted in this thread a day ago. read back a page or two.


----------



## BudgieSmuggler

Okay, thanks. So really no point on air. I tried the EVGA XC bios and that felt really good to play with, but it's for a 2-fan card so one of my fans wasn't running. Temps seemed the same but I didn't want to risk it long term. I think my default bios at 2090MHz and +1000 memory is a decent overclock. Was hoping to try and break into the 2100-2150MHz range, but like you said, the downclocking is probably due to air cooling. Thanks for the quick reply.


----------



## GraphicsWhore

BudgieSmuggler said:


> Okay, thanks. So really no point on air. I tried the EVGA XC bios and that felt really good to play with, but it's for a 2-fan card so one of my fans wasn't running. Temps seemed the same but I didn't want to risk it long term. I think my default bios at 2090MHz and +1000 memory is a decent overclock. Was hoping to try and break into the 2100-2150MHz range, but like you said, the downclocking is probably due to air cooling. Thanks for the quick reply.


Just so you're aware, breaking 2100 doesn't mean **** unless you're obsessed with benchmarks. Those are peak clocks, and even on water you'll bounce around. I'm on the Galax BIOS on water, staying around ~50 degrees under load, and over extended periods my average is below 2100. I don't think anyone here has yet been able to prove a minimum of 2100+.

So, especially on air, you're probably in like the top 10%. Your clock is definitely way past "decent."


----------



## iRSs

acmilangr said:


> It seems that due to the larger power limit, the EVGA FTW3 is the fastest card for overclocking; the reports I have seen are about 2100MHz, while the Strix is about 2040-2080MHz.
> 
> Of course it all comes down to the silicon lottery, but it seems the higher power limit helps the EVGA be a little better.


dafuq mate?
My Palit Dual 2080 Ti, which is a NON-A chip (the worst), stays at 2080-2100MHz on the GPU with 7800 memory, and you praise the Strix and FTW3?
What is wrong with these cards?!

I also have a Morpheus II cooler which keeps my GPU at 55 degrees in any game except Path of Exile, and I removed my LM shunt mods (gonna redo them on a boring day).
And, GOD damn, my card was ~1120 euro plus 65 for the cooler 

also


Jpmboy said:


> factory non-OC chip. AFAIK, you can't flash this version with a different bios. But it means nothing regarding performance and gaming overclock-ability (beyond the usual stock bios power limit thing)


----------



## GraphicsWhore

BudgieSmuggler said:


> Hi, I've successfully flashed the bios on my Zotac 2080 Ti AMP with the Galax bios but I'm not seeing any difference. Clocks still settle around 2090MHz. Games do feel slightly sharper, but that could be the placebo effect. Any suggestions on what else to do besides going to the max power of 112% on this bios? Thanks. Lovin' the 2080 Ti btw, just not the price. Ha


You should love some compressed air and computer wipes big guy, get that thing cleaned up a bit. Good grief. $1200 video card in a dirty dust box.


----------



## Spiriva

417.21 hotfix driver

https://nvidia.custhelp.com/app/answers/detail/a_id/4745

Fixes:

Memory Data Rate reporting an incorrect value in the NVIDIA Control Panel
Microsoft Edge does not respond after playing back video
Can't apply color format after updating to driver 417.01


----------



## BudgieSmuggler

Thanks for that info. Yeah, I know 2100MHz doesn't mean **** and won't give any big performance gains. I kind of like the challenge, like a lot of overclockers. I remember spending days trying to get my old FX-8320 stable at 5GHz just for the F*** of it, lol. I've gone back to the EVGA bios as the Galax one might have been throttling and causing microstutter. It's nice and smooth, and I'm getting around 80fps in Battlefield V with ray tracing on Ultra at 1440p, so very happy with that. Thanks for your positive response mate! Now 2085MHz stable, just over 50 (fans aren't at full whack), and +1000. Gonna just enjoy it now. Have a good night


----------



## Jpmboy

iRSs said:


> dafuq mate?
> My Palit Dual 2080 Ti, which is a NON-A chip (the worst), stays at 2080-2100MHz on the GPU with 7800 memory, and you praise the Strix and FTW3?
> What is wrong with these cards?!
> 
> I also have a Morpheus II cooler which keeps my GPU at 55 degrees in any game except Path of Exile, and I removed my LM shunt mods (gonna redo them on a boring day).
> And, GOD damn, my card was ~1120 euro plus 65 for the cooler
> 
> also



lol - that's the poster child picture for why ya need a 12-layer motherboard! A 1kg CPU cooler and a "custom" air cooler on the card (at least a kg).... hanging, good grief.


----------



## OutlawNeedsHelp

Hey guys, need some help here from anyone who has watercooled their card.

I have a Zotac 2080 AMP (ZT-T20800D-10P), and EKWB Vector 2080 block and backplate. 

I'm at the part where I'm installing the backplate and I've hit a snag; I seem to be missing some screws. The backplate manual says to use 6 M2.5x7 screws to connect the backplate to the block, but, uh, I wasn't given 6. I seem to only have two, and zero screws came in the backplate box; those two are from the waterblock box. I have a hundred M2.5x6 screws but they're really sketchy on reaching and I'm not going to risk overflexing the card or anything like that. 

It also doesn't say whether to use any washers when installing the backplate. Is this right? Obviously I'm worried about shorting the card. 

So I'd really appreciate hearing from someone else that's watercooled their card already; did you receive all the M2.5x7 screws (or any screws with the backplate), and did you put washers in between the backplate and the card?

Thanks, these are the parts

https://www.newegg.com/Product/Product.aspx?Item=N82E16814500435
https://www.ekwb.com/shop/ek-vector-rtx-2080-copper-acetal
https://www.ekwb.com/shop/ek-vector-rtx-backplate-black






EDIT: I'm an idiot. And so is EK, tbh. The screws were in a separate compartment of the backplate box. With only one reference to where they are, in tiny print on the edge of the box. They should definitely not be in such a good hiding place lol


----------



## iRSs

Jpmboy said:


> lol - that's the poster child picture for why ya need a 12 layer motherboard! 1Kg cpu cooler and a "custom" air cooler on the card (at least a Kg).... hanging, good grief.


what is wrong with air cooling your computer?
-it performs as well as the most expensive aio closed loops
-i can keep it fanless when i am not gaming, which means zero sound and a lot less dust
-costs almost half of the best performing closed loop aios
-my motherboard is fine, it always was
-also i keep my vrm cool 

my whole card weighs 1.6kg and it is not hanging. i am using the accelero 4 'no hanging' plate along with its backplate radiator

open loops are way more expensive, need yearly time-consuming maintenance, you cannot keep them fanless (unless you have behemoth-size rads) and they collect a lot of dust in push configuration (pull config is easier to clean at least)

also i move a few times a year with my pc for lan parties 

ps: what is a poster child picture?


----------



## Mike211

My 2 EVGA GeForce RTX 2080 Ti FTW3 ULTRA


----------



## Jpmboy

iRSs said:


> what is wrong with air cooling you computer?
> -it performs as good as the most expensive aio closed loops
> -i can keep it fanless when i am not gaming which means zero sound and alot less dust
> -costs almost half of the best performing closed loop aios
> -my motherboard is fine, it always was
> -also i keep my vrm cool
> 
> my whole card wights 1.6kg and it is not hanging. i am using accelero 4 'no hanging' plate along with its backplate radiator
> 
> open loops are waay more expensive, need yearly time consuming maintenance, you cannot keep them fanless (unless you have behemoth size rads) and collects alot of dust in push configuration (pull config is easier to clean at least)
> 
> also i move few times a year with my pc for lan parties
> 
> ps: what is a poster child picture?



lol- there's nothing wrong with air coolers (I have a couple of NH-Ds here). But they do weigh a lot. Not sure what you're talking about with yearly maintenance on a custom loop - there's no more than a dusting, like air coolers.
the rig below has been on 24/7/365 for well over a year now. x299 w/ monoblock, 1080/block and 1 360 rad. Quiet at full bore. Even with a breather on the rez, koolance liquid shows no evap - liquid is 30C on the cold side of the rad.


----------



## kot0005

Redwoodz said:


> It's not one, there have been several, and the PCBs are likely at fault. They don't all erupt in flame, most just artifact. I posted it because it's being hushed. Your choice....carry on.
> https://www.eteknix.com/geforce-rtx-2080-ti-de-listed-on-nvidias-online-store/



please ban this troll.. Since when are display artifacting and shorting the PCB the same??


----------



## iRSs

Jpmboy said:


> lol- there's nothing wrong with air coolers (I have a couple of NH-Ds here). But the do weigh a lot. Not sure what you are talking about with yearly maintenance on a custom loop - there's no more than a dusting like air coolers.
> the rig below has been on 24/7/365 for well over a year now. x299 w/ monoblock, 1080/block and 1 360 rad. Quiet at full bore. Even with a breather on the rez, koolance liquid shows no evap - liquid is 30C on the cold side of the rad.


they actually weigh almost the same. for example, the nh-d15 weighs 980g and its equivalent in performance would be something like the H150i PRO RGB 360mm (tested them myself side by side; the aio actually loses on longer gaming sessions because of water's thermal resistance), which weighs 960g (i know the stress is not on the motherboard but on the case).
with yearly maintenance i was referring to bacteria, algae, something stuck in the block's fins, corrosion, etc. it might perform the same without maintenance but that does not mean that it does not need it 
also quiet at full bore, for whatever that might mean, will never be 0 dB quiet 

ps: i always have had something for radiators and conventional air cooling, therefore my dream case is the NSG-S0, but it seems that the project is dead; MonsterLabo would be my next choice )


----------



## NewType88

iRSs said:


> dafuq mate?
> my palit dual 2080 ti whch is a NON-A chip (the worst) stays at 2080-2100gpu with 7800 mem and you praise the strix and ftw3?
> what is wrong with these cards?!
> 
> i also have morpheus 2 cooler which keeps my gpu at 55 degrees in any game except path of exile and i removed my LM shunt mods (gonna do them in a boring day)
> and, GOD damn, my card was ~1120 euro plus 65 for cooler
> 
> also


55C? I put the Morpheus 2 on my FTW3 and it cooled the same as the stock cooler, only the RAM got much hotter. I kept the mid plate on though. What are your max wattage, fans and fan speed to see 55C? I was using 2 iPPC 2000rpm fans and I was still getting over 70C under full-wattage OC.


----------



## iRSs

kot0005 said:


> please ban this troll..Since when is display artifacting and shorting the pcb the same ??


i still think that the problem with the dying 2080 ti's is the heat, as the original palit dual cooler does not do much of a job in... cooling, because on the back side of the card, where the vrm is located, i cannot hold my finger without burning myself (and it would be a lot hotter in the mosfet itself); same for the memory chip locations.
i read that the nvidia FE cooler performs worse, at least for the vrm location...


----------



## BudgieSmuggler

GraphicsWhore said:


> You should love some compressed air and computer wipes big guy, get that thing cleaned up a bit. Good grief. $1200 video card in a dirty dust box.


Thanks for the advice. I ran out of air after cleaning mobo, cpu, ram area. Have to get more. It doesn't look great close up but doesn't bother me having a new gpu in there. A bit of dust won't kill my card


----------



## Jpmboy

Spiriva said:


> Vulkan Developer driver 417.17
> 
> https://developer.nvidia.com/vulkan-beta-41717-windows-10


Running 417.21 on my Titan V rig - haven't gamed but I'll see how it does folding overnight. 


GraphicsWhore said:


> Just so you're aware breaking 2100 doesn't mean **** unless you're obsessed with benchmarks. Those are peak clocks and even on water you'll bounce around. I'm on the Galax BIOS on water staying around ~50 degrees on load and over extended periods my average is below 2100. I don't think anyone here has yet been able to prove a min of 2100+.
> 
> So, especially on air, you're probably like in the top 90th percentile. Your clock is definitely way past "decent."


galax bios and water... use the Galax Xtreme Tuner, click the radio button for voltage and set 1070mV (or whatever voltage you prefer). The card will lock in P0 and hold clocks under load much better. Like Pascal, Turing is very temperature sensitive, with clock bin drops beginning at ~40C


iRSs said:


> they actually weight almost the same. for example, nh-d15 weights 980g and its equivalent in performance would be something like H150i PRO RGB 360mm (tested them myself side by side, aio actually loses on longer gaming sessions because of water's thermal resistance) weights 960g (i know that the stress is not on the motherboard but on the case.
> with yearly maintenance i* was reffering to bacteria, algae, something stuck in block's fins, corrosion, etc*. it might perform the same without maintenance but that does not mean taht it does not need it
> also quiet at ful bore for w/e that might mean will never be 0db quiet
> 
> ps: i always have had something for radiators and conventional air cooling therefore my dream case is NSG-S0, but it seems that the project is dead, MonsterLabo would be my next choice )


Nothing green will grow in the presence of copper (rad and blocks)... and especially in a good coolant (the glycols are the same as those found in pool algaecide). Copper is toxic to anything chlorophyll-based. But yeah, dust will collect on rad fins without a screen/filter on the inlet side. Watercooling is very easy, but does cost a bit more. Corrosion? Nah.:thumb:


----------



## iRSs

NewType88 said:


> 55c ? I put the Morpheus 2 on my FTW3 and it cooled the same as the stock cooler, only the ram got much hotter. I kept the mid plate on though. What is your max wattage and fans and fan speed to see 55c? I was using 2 ippc 2000rpm fans and I was still getting over 70c under full wattage oc.


first of all, if you kept the mid plate your vrm is toasted 

i use two scythe slip stream fans, 1900rpm version, but as you can see they stay around 1500. i thought about buying 2x ippc 3000rpm but these that i have are more than enough; i will buy 2x nf-a12x25 in the future tho.

53 degrees is a thermal sensor i put on the back side of the card, between the 'backplate' radiator and the pcb where the vrm is, so my fans are controlled by the vrm temp, not the gpu temp. also i had to hook them up to the motherboard because the stock header is weird. the pcb still has standard fan header mount points but the header is not there 
that is quake champions with temps after 20 minutes of gameplay. it draws more power while i am in game; that 72.7 was while i switched to snipping tool.

like this it draws ~450w from the wall (which is the whole system: pc, monitor, 2x big studio monitors, one router); with liquid metal on the power shunts i can reach almost 700w on the wall in path of exile (because quake rarely goes above 112%)
i could do some more tests if you need.

also i uploaded my card without the cooler on it. to those tiny radiators add a huge one on the back which cools the entire pcb, and the cpu fans pull some air through it.


----------



## iRSs

Jpmboy said:


> Nothing green will grow in the presence of copper (rad and blocks)... and especially in a good coolant (the glycols are the same found in pool algecide. Copper is toxic to anything chlorophyll based. But yeah, dust will collect on rad fins without a screen/filter on the inlet side. Watercooling is very easy, but does cost a bit more. Corrosion? Nah.:thumb:


meh, maybe not on your side but it does happen, also naked copper does corrode after a time or at least stain.
a bit more is not just a bit, but something like 3x more for a custom loop 

ps: thanks for the tip with galax xtreme tuner for locking the voltage


----------



## Zammin

iRSs said:


> i still think that the problem with the dying 2080 ti's is the heat as the original palit dual cooler does not do too much of a job in... cooling because on the back side of the card, where the vrm is located i cannot hold my finger without burning myself (which would be alot hotter in the mosfet itself), same for memory chips location.
> i read that the nvidia FE cooler performs worse, at least for the vrm location...


The FE cooler isn't fantastic that's for sure, but as long as you don't raise the power limit the VRM shouldn't overheat. In Gamers Nexus' testing it showed everything was in check with the power limit at default but when maxed out the VRM hit 100C. For that reason I'm leaving my PL at default until I get my waterblock on my FE.

I like your setup btw, that's a wicked air cooled rig


----------



## NewType88

iRSs said:


> first of all, if you kept the mid plate your vrm is toasted
> 
> i use two scythe slip stream 1900 rpm version but as you can see they stay around 1500. i thought about buying 2x ippc 3000rpm but these that i have are more than enough, i will buy 2x nf-a12x25 in the future tho.
> 
> 53 degrees is a thermal sensor i put on the back side of the card between the 'backplate' radiator and pcb where the vrm is, so my fans are controlled by the vrm temp, not the gpu temp. also i had to hook them up on the motherboard because the stock header is weird. the pcb still has standard fan header mount points but the header is not there
> that is quake champions with temps after 20 minutes of gameplay. it draws more power while i am in game, that 72.7 was while i switched to snipping tool.
> 
> like this it draws from the wall ~450w (which is the whole system: pc, monitor, 2x big studio monitors, one router) with liquid metal on power shunts i can reach almost 700won the wall on path of exile (because quake rarely goes above 112%))
> i could do some more tests if you need.
> 
> also i uploaded my card without the cooler on it. to those tiny radiators addd a huge one on the back which cools the entire pcb and the cpu fans which pull some air through it.


I don’t recall my vrm being much hotter. The top bank of mem runs like 30C hotter than the rest with everything stock. Look at the peak wattage under load in HWiNFO or something. I know in Furmark I see only like 290W, but Witcher 3 will push it to 370W plus. So that will definitely see a temp difference. That photo shows only 180W. 

Explain that backplate.


----------



## iRSs

NewType88 said:


> I don’t recall my vrm being much hotter. The top bank of mem run like 30c hotter than the rest everything stock. Look at the peak wTtage under load on hwinfo or something. I know on furmark I see only like 290w, but on Witcher 3 it will push it to 370W plus. So that will definitely see a temp difference. That photo shows only 180w.
> 
> Explain that backplate.


vrm should be hotter because the midplate is not doing much cooling.
how much is 30c hotter?
i played path of exile, which heats my card almost like furmark, and the average wattage was 250w (depends on the ingame area)
i cannot go past 255w because my card is limited at 112%; the only way for me to get past this is to put liquid metal on the shunts again (i will do that sunday when i get back home)
the curious thing is that with 250w in gpu-z, the wattmeter on the wall says 370w, which is 100w LOWER than quake ) which was something like 80% power.
if i recall correctly, when i had the shunt mods the wattmeter said ~700w in path of exile and temperatures were ~70, but EVERYTHING on the card was quite warm. every single capacitor, coil and mosfet. but, hell, that's 250w more than stock!

that backplate is the "back-side cooler" from the photo i attached and it has some thick blue thermal pads between itself and the pcb. i will take more photos sunday when i do the shunt mods again.


----------



## kot0005

Get ready boys, this game is running at 15fps on ultra https://www.youtube.com/watch?time_continue=107&v=fJjHW0gTeHY


----------



## Jspinks020

Well, with the 417 driver it's benching like 980 Ti scores. It's fine, don't need one yet. Meatgrinder maps where whoever sees who first dies, and some weapons are underpowered.


----------



## Citruschrome

Jpmboy said:


> galax bios and water... use the GALAX extreme tuner, click the radio button for voltage and set 1070mV (or what ever voltage you prefer). Card will lock in P0 and hold clocks under load much better. Like Pascal, Turing is very temperature sensitive with clock bin drops beginning at ~ 40C


Messed around with this a bit and I'm not sure if I like sitting in P0 with my clocks jacked up at all times, maybe I'll come back to it at some point. It would be nice if it would lock down the voltage too but I doubt that's possible. 

Did a bit of meming while I was at it lol


----------



## Jpmboy

Citruschrome said:


> Messed around with this a bit and I'm not sure if I like sitting in P0 with my clocks jacked up at all times, maybe I'll come back to it at some point. It would be nice if it would lock down the voltage too but I doubt that's possible.
> 
> Did a bit of meming while I was at it lol


I would not leave it at P0 all the time either. Only when I plan to use the card. Hit reset, and it all returns to normal idle.




iRSs said:


> meh, maybe not on your side but it does happen, also naked copper does corrode after a time or at least stain.
> a bit more is not just a bit, but something like 3x more for a custom loop
> 
> ps: thanks for the tip with galax xtreme tuner for locking the voltage


The galax tool is not the best GUI, but it works well. regarding copper... surface oxidation is required in any copper system (even in the clean water source in homes - copper in, plastic out... or chlorinate). Staining is a pH thing. This is waaay OT. EOD


----------



## MrTOOSHORT

2080 TIs came back in stock at the Nvidia store after a month. I'm thinking they are Samsung chip cards from now on instead of micron. Just ordered one, have the ek block coming too. Very excited!


----------



## GraphicsWhore

Jpmboy said:


> Running 417.21 on my Titan V rig - haven't gamed but I'll see how it does folding overnight.
> galax bios and water... use the GALAX extreme tuner, click the radio button for voltage and set 1070mV (or what ever voltage you prefer). Card will lock in P0 and hold clocks under load much better. Like Pascal, Turing is very temperature sensitive with clock bin drops beginning at ~ 40C


Nice, I'll give it a try tonight. Thanks.



Citruschrome said:


> Messed around with this a bit and I'm not sure if I like sitting in P0 with my clocks jacked up at all times, maybe I'll come back to it at some point. It would be nice if it would lock down the voltage too but I doubt that's possible.
> 
> Did a bit of meming while I was at it lol


2265? Is that accurate??


----------



## arrow0309

Hi, a quick guide or link to the right commands to flash the Galax 380W bios to my 2080 ti FE?
Thanks!


----------



## GraphicsWhore

arrow0309 said:


> Hi, a quick guide or link to the right commands to flash the Galax 380W bios to my 2080 ti FE?
> Thanks!


1st command:
nvflash64 --protectoff

2nd command:
nvflash64 -6 BIOSNAME.ROM


----------



## Jpmboy

MrTOOSHORT said:


> 2080 TIs came back in stock at the Nvidia store after a month. I'm thinking they are Samsung chip cards from now on instead of micron. Just ordered one, have the ek block coming too. Very excited!


 was waiting for you to buy in. 
Curious to know if you get a sammy or micron. 




arrow0309 said:


> Hi, a quick guide or link to the right commands to flash the Galax 380W bios to my 2080 ti FE?
> Thanks!


scroll back a few pages


----------



## arrow0309

GraphicsWhore said:


> 1st command:
> nvflash64 --protectoff
> 
> 2nd command:
> nvflash64 -6 BIOSNAME.ROM


Thanks, all good
It seems I haven't hit the lottery, stable at only 2070 (+90) under stock bios (didn't mess with the voltage though) now I've set a 1.070v with the Galax Xtreme Tuner and tried with +135 (2115) and it's still not holding (desktop driver crash after ~15' in BF5 dx12, RTX).
Going to try lowering the oc to +120 for now.

Any way to make the voltage also work in afterburner (without messing with its freq/volt curve)?
I'm OSD addicted


----------



## djchup

I received my EVGA 2080ti FTW3 this week, am somewhat disappointed. No OC headroom whatsoever. Thermals are fine, core never goes over 70c, memory tops out at 79. Nvidia's scanner recommended oc result was +103 MHz, instantly crashes in Fire strike with Max power limit and +103Mhz on core w/ no memory OC. I've tried dialing it back multiple times and I got bluescreens in Battlefield V even at +40Mhz core w/no memory OC. I haven't seen anyone have results this poor with a FTW3. I'm running it at stock speeds and have gotten one BSOD in battlefield V so far, planning on RMA if it happens again at stock speeds. I tried everything I could think of including running my 8700k at stock speed/voltage, clean windows install etc...


----------



## Jpmboy

arrow0309 said:


> Thanks, all good
> It seems I haven't hit the lottery, stable at only 2070 (+90) under stock bios (didn't mess with the voltage though) now I've set a 1.070v with the Galax Xtreme Tuner and tried with +135 (2115) and it's still not holding (desktop driver crash after ~15' in BF5 dx12, RTX).
> Going to try lowering the oc to +120 for now.
> 
> Any way to make the voltage also work in afterburner (without messing with its freq/volt curve)?
> I'm OSD addicted


what temperature is it hitting in BF5? Also, the bios boost table is set (they all are) in 13 MHz increments... so +130, +117, +104 (divisible by 13) are the numbers to use. You can see this by increasing from +91 to +104 (or any increment) and watching the actual frequency change in a 13 MHz jump once you get close to +104 from +91. This is not new. An exception is the Titan V.


----------



## GraphicsWhore

djchup said:


> I received my EVGA 2080ti FTW3 this week, am somewhat disappointed. No OC headroom whatsoever. Thermals are fine, core never goes over 70c, memory tops out at 79. Nvidia's scanner recommended oc result was +103 MHz, instantly crashes in Fire strike with Max power limit and +103Mhz on core w/ no memory OC. I've tried dialing it back multiple times and I got bluescreens in Battlefield V even at +40Mhz core w/no memory OC. I haven't seen anyone have results this poor with a FTW3. I'm running it at stock speeds and have gotten one BSOD in battlefield V so far, planning on RMA if it happens again at stock speeds. I tried everything I could think of including running my 8700k at stock speed/voltage, clean windows install etc...


What's your max core with no OC?


----------



## Shawnb99

Damn UPS has to show up the 30 minutes I step out for, now I have to wait till Monday or Tuesday. So wanted to test it out this weekend


----------



## djchup

GraphicsWhore said:


> What’a your max core with no OC?


~1915Mhz iirc (@ work right now)


----------



## arrow0309

Jpmboy said:


> what temperature is it hitting in BF5? Also, the bios boost table is set (they all are) in 13hz increments... so +130, +117, +104 (divisible by 13) are the numbers to use. You can see this by increasing from 91 to 104 (or any increment) and watch the actuall frequency change in a 13Hz jump once you get close to +104 from +91. This is not new. An exception is the Titan V.


40C
Are you sure about the 13 MHz increments?
I know they were like that with my previous Titan X Pascal, but I see them in 15 MHz increments now (2055, 2070, 2085, 2100, 2115).
Where's the trick?


----------



## taem

Sweet. Ordered Tuesday, EVGA shipped Wednesday, here today.

Unfortunately in an hour I've got out of town guests coming in. Can't even install this til next week. Well I suppose I could ignore my guests.

Bit concerned with fan control issues I'm hearing about with this card. Afterburner can't control these fans, and Precision X1 by most accounts is buggy as heck on fan control.


----------



## Jpmboy

arrow0309 said:


> 40C
> Are you sure about the 13hz increments?
> I know they were like this with my previous Titan X Pascal but I do see them with 15hz increments now (2055, 2070, 2085, 2100, 2115).
> Where's the trick?


eh - yes. It's 15 MHz, not 13.


----------



## GraphicsWhore

So I tried using Galax Xtreme Tuner (RTX beta) thanks to @Jpmboy's advice and did reach both my highest overall and graphics scores in TimeSpy. It was two different runs and I only gained a few points, but still cool.

Highest overall was 12772 (vs previous 12762) with [email protected] and GPU max 2145 and memory at 8148. Highest graphics was 16611 (vs previous 16606) with [email protected] and GPU at max 2160 and memory at 8000.

As usual it's all about that core. So next I'm going to try and see what I can do to pass a run with max of 2175+.

I am impressed that I was able to stick +1148 on memory. This card has been solid all around.

Only thing is the voltage control in Xtreme Tuner didn't seem to do anything. Setting it at 1100mV or 1200mV still only gave me a max of 1093 according to AfterBurner.


----------



## Citruschrome

GraphicsWhore said:


> 2265? Is that accurate??



Yeah, but that's at idle running Chrome; because the Galax tuner locks you in P0, the clocks go to their max without stress. It was just for the fun of it. 2160 is the max stable clock for most benches and a few games for me, but I can still get crashes in a few games, so I top it out at 2145 with max GPU boost. Now that I'm running the Galax BIOS, that means my clocks usually hover between 2115 and 2130 since the card stays at 40-45C.


----------



## axiumone

So, first shots of the new rtx titan.


----------



## kx11

here's a video of RTX TITAN box




https://neatclip.com/clip/z87g51587






another photo


----------



## Jpmboy

launch the damn thing already!


----------



## Zammin

Hahaha that slip Linus made was totally intentional lmao, like when they did a livestream the night before the RTX embargo lifted with the 2080Ti boxes in the background the whole time.

Can't wait to find out more about this card. You guys reckon it'll cost Titan V money? More? Less?

The box looks like the Quadro RTX boxes, white except the green accents are gold on the Titan box.


----------



## nycgtr

I'd say probably around the 2k mark to 3k mark.


----------



## Fatalution

Hello. I own 2080ti FE and that's my 2nd card (first RMA'd). Both have very distinct coil whine. Is it the case with other FE's here?


----------



## Zammin

Fatalution said:


> Hello. I own 2080ti FE and that's my 2nd card (first RMA'd). Both have very distinct coil whine. Is it the case with other FE's here?


Out of the two FE's I've had, neither had noticeable coil whine (at least that I could hear over the fans) but one of them had a faulty fan that would whine. When I stopped the fan the whining went away.


----------



## ESRCJ

I may get the Titan if it's $2000 or less.


----------



## arrow0309

GraphicsWhore said:


> So tried using Galax Xtreme Tuner (RTX beta) thanks to @Jpmboy advice and did reach both my highest overall and graphics scores in TimeSpy. It was two different runs and I only increased a few points but still cool.
> 
> Highest overall was 12772 (vs previous 12762) with [email protected] and GPU max 2145 and memory at 8148. Highest graphics was 16611 (vs previous 16606) with [email protected] and GPU at max 2160 and memory at 8000.
> 
> As usual it's all about that core. So next I'm going to try and see what I can do to pass a run with max of 2175+.
> 
> I am impressed that I was able to stick +1148 on memory. This card has been solid all around.
> 
> Only thing is the voltage control in Xtreme Tuner didn't seem to do anything. Setting it at 1100mV or 1200mV still only gave me a max of 1093 according to AfterBurner.


Nice score and achievement though :specool:
What driver? 
Btw, is keeping both Xtreme Tuner and MSI AB OK, no issues?


----------



## Fatalution

Zammin said:


> Out of the two FE's I've had, neither had noticeable coil whine (at least that I could hear over the fans) but one of them had a faulty fan that would whine. When I stopped the fan the whining went away.


Hmmm... Which PSU do you have?


----------



## cstkl1

arrow0309 said:


> Jpmboy said:
> 
> 
> 
> what temperature is it hitting in BF5? Also, the bios boost table is set (they all are) in 13hz increments... so +130, +117, +104 (divisible by 13) are the numbers to use. You can see this by increasing from 91 to 104 (or any increment) and watch the actuall frequency change in a 13Hz jump once you get close to +104 from +91. This is not new. An exception is the Titan V.
> 
> 
> 
> 40C
> Are you sure about the 13hz increments?
> I know they were like this with my previous Titan X Pascal but I do see them with 15hz increments now (2055, 2070, 2085, 2100, 2115).
> Where's the trick?
Click to expand...

eh, pascal and before it's actually
12/25/51/102/153 etc... cause i think the actual clockgen has a decimal 12.x increment.. 

yeah, turing is 15mhz...
what's amazing is gddr6.. there are no steps.. what we set is what we get.

da msi 1755 bios vs say a galax FE bios doesn't even change the vid pairing. 
so far the only card i have seen running at 1800 is that gigabyte aorus.
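The stepped offset behavior discussed above can be sketched in a few lines of Python. This is a rough model based only on the numbers reported in this thread (15 MHz steps on Turing, ~13 MHz reported for Pascal); the snap-down rounding is an assumption, not something read from the driver:

```python
# Core-clock offsets are applied in discrete boost-table steps
# (15 MHz on Turing per this thread; ~13 MHz reported for Pascal).
STEP_MHZ = 15  # assumed Turing step size

def effective_offset(requested_offset_mhz: int, step: int = STEP_MHZ) -> int:
    """Snap a requested offset down to the nearest boost-table step."""
    return (requested_offset_mhz // step) * step

# e.g. asking for +139 actually lands on the +135 bin,
# while +120 is already a multiple of 15 and is kept as-is
print(effective_offset(139))  # -> 135
print(effective_offset(120))  # -> 120
```

So when picking offsets on Turing, multiples of 15 (+105, +120, +135, ...) are the values that actually change the clock; anything in between is wasted slider travel.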


----------



## Aurosonic

Just got my "aorus geforce® rtx 2080 ti xtreme waterforce wb 11g" with pre-installed waterblock, to avoid warranty issues in case of any card problems. Just wondering, what's its power limit on the stock bios? And i saw somewhere here in the thread some statistics about dead cards from forum owners. Could anyone point me to it please? Can't find it


----------



## CallsignVega

Looks like my time with 2080 Ti will be short lived. Going SLI RTX Titan.


----------



## cstkl1

CallsignVega said:


> Looks like my time with 2080 Ti will be short lived. Going SLI RTX Titan.


lets see whether this time its nvidia store exclusive again..

y cant they just let aib have a piece of da pie..


----------



## Zammin

Fatalution said:


> Hmmm... Which PSU do you have?


The system I have it in atm has an old 2013 Seasonic X-1050W


----------



## iRSs

how do you get 1.093v?
because my palit dual only goes as high as 1.050, even with the galax xtreme tuner.

Sent from my LG-H930 using Tapatalk


----------



## bsch3r

CallsignVega said:


> Looks like my time with 2080 Ti will be short lived. Going SLI RTX Titan.


Me too


----------



## ESRCJ

iRSs said:


> how do you get 1.093v ?
> because my palit dual only goes as high as 1.050 even with galax extreme tuner.
> 
> Sent from my LG-H930 using Tapatalk


Check your VF "curve." The default "curve" is usually flat beyond 1.05V. GPU boost will choose the lowest voltage for a given clock according to your VF "curve." That's why you're not passing 1.05V. You'll need to manually set anything past 1.05V at least 15MHz above whatever your clock at 1.05V is.
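The selection logic described above can be sketched as a simplified Python model. The curve values below are fabricated for illustration; real cards expose their V/F points through tools like Afterburner, not this function:

```python
# GPU Boost (simplified): for a target clock, pick the LOWEST voltage
# whose V/F curve point reaches that clock. If the curve is flat past
# 1.050 V, the higher-voltage points are never selected.
def pick_voltage(vf_curve, target_mhz):
    """vf_curve: list of (voltage_mV, clock_MHz) pairs, sorted by voltage."""
    for voltage, clock in vf_curve:
        if clock >= target_mhz:
            return voltage
    return None  # target clock not reachable anywhere on this curve

# Illustrative (made-up) default curve: flat beyond 1050 mV
flat = [(1000, 1980), (1025, 2010), (1050, 2040), (1075, 2040), (1093, 2040)]
print(pick_voltage(flat, 2040))    # -> 1050; the 1075/1093 mV points are dead weight

# Raising the points past 1050 mV by at least one 15 MHz step makes them selectable
raised = [(1000, 1980), (1025, 2010), (1050, 2040), (1075, 2055), (1093, 2070)]
print(pick_voltage(raised, 2055))  # -> 1075
```

This is why manually lifting the curve beyond 1.05 V is required before the card will ever request those voltages.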


----------



## BudgieSmuggler

Hi mate. Quick update: i have pretty much got a stable 2100 MHz clock. I'm using the Galax bios again with +139 on the core and +1000 memory, and the stabilising was helped by a tip i read on here about locking the voltage using Galax Xtreme Tuner. Locking the voltage at 1060mV has stopped any downclocking lower than 2085 MHz. Roughly 70% of the time it stays at 2100 MHz with 51 degrees on air. Very satisfied with that. Cheers


----------



## arrow0309

gridironcpj said:


> Check your VF "curve." The default "curve" is usually flat beyond 1.05V. GPU boost will choose the lowest voltage for a given clock according to your VF "curve." That's why you're not passing 1.05V. You'll need to manually set anything past 1.05V at least 15MHz above whatever your clock at 1.05V is.


Sooner or later I'll have to mess around with it. 
If you don't mind, what are the steps to play with this curve (in Afterburner)?

In the end, will it save this new curve permanently, or as a profile to save? 
Or do you have to manually set it every time?


----------



## iRSs

arrow0309 said:


> Sooner or later I'll have to mess around with it.
> If you don't mind, what are the steps to play with this curve (in Afterburner)?
> 
> In the end, will it save this new curve permanently, or do I save it as a profile?
> Or do you have to set it manually every time?


can you do a voltage-frequency curve in afterburner? 
or do i need to use evga precision?

Sent from my LG-H930 using Tapatalk


----------



## Jpmboy

arrow0309 said:


> Nice score and achievement however :specool:
> What driver?
> Btw, keeping both Xtreme Tuner and msi ab is OK, no issues?


no problem. I wouldn't use both at the same time tho. Control conflict can happen. In AB, put your mouse in the graph window, hit Ctrl+F, make your curve, pick a point on the curve, hit Ctrl+L, then apply. It applies the voltage curve AND locks the card in P0. :thumb:


----------



## TobbbeSWE

Jpmboy said:


> lol- there's nothing wrong with air coolers (I have a couple of NH-Ds here). But the do weigh a lot. Not sure what you are talking about with yearly maintenance on a custom loop - there's no more than a dusting like air coolers.
> the rig below has been on 24/7/365 for well over a year now. x299 w/ monoblock, 1080/block and 1 360 rad. Quiet at full bore. Even with a breather on the rez, koolance liquid shows no evap - liquid is 30C on the cold side of the rad.




Got this after a flash attempt with the Galax bios on my Non-A palit card. Thanks for the quick guide btw! 
What a shame considering my cooling potential.


Here is the rig and my cooling btw.

After about 30 min in the Heaven benchmark I get these temps. The water temperature never really gets above ambient. The radiator contains about 20L of liquid and it's a 3-panel radiator, 1800x600mm. The fans are just for looks, really. It's already insane, so why not go full balls-to-the-wall, you know.

Clocks stays between 2010-2070mhz.


----------



## ESRCJ

arrow0309 said:


> Sooner or later I'll have to mess around with it.
> If you don't mind, what are the steps to play with this curve (in Afterburner)?
> 
> In the end, will it save this new curve permanently, or do I save it as a profile?
> Or do you have to set it manually every time?


If you really want to get the "curve" right, I would recommend testing each voltage increment and find the highest stable clock for each. I used 15-minute runs of Heaven to get an idea of what the curve should look like and it hasn't failed me for any other applications. I tested each point from 1.000V to 1.093V.

For testing a particular voltage, open the VF curve and select the desired voltage, then move the point up to a desired clock (be realistic). Press "L" once it's in the position you want, then apply (the checkmark in AB). The curve beyond that point will be flat and you'll be locked at that voltage-frequency point. As a point of reference, I was able to do 2100MHz at 1.000V. Keep in mind it won't stay locked if you hit the power limit or are above a particular temperature (the clock will drop 15MHz if the card surpasses about 40C, for example). Run Heaven for 15 minutes. If it survives, keep increasing the clock for that particular voltage. Once you find the highest stable clock, move on to the next point. I made a spreadsheet tracking each stable voltage-frequency point. I didn't make my final curve until I finished my testing. 

When making your final curve, start from 1.093V and work your way down. Otherwise, hitting "apply" may shift your curve around. It's not my favorite software to work with honestly, as the curve has a mind of its own when I hit apply. 
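The spreadsheet workflow described above amounts to something like this. A rough sketch only; all voltages and clocks below are placeholders, since every chip's stable points differ.

```python
# Sketch of the per-voltage stability log described above: lock one
# voltage point at a time (Ctrl+L in Afterburner), run Heaven for
# 15 minutes, and record the highest clock that survived.
# All voltages/clocks here are placeholders -- every chip differs.

STEP_MHZ = 15  # GPU Boost moves in 15 MHz bins

# highest Heaven-stable clock (MHz) found so far at each test voltage (mV)
stable = {
    1000: 2100,   # e.g. 2100 MHz @ 1.000 V, as in the post above
    1025: 2115,
    1050: 2130,
    1075: 2145,
    1093: 2160,
}

def next_attempt(mv):
    """Next clock to try at this voltage: one bin above the last pass."""
    return stable[mv] + STEP_MHZ

for mv in sorted(stable):
    print(f"{mv} mV: stable {stable[mv]} MHz, try {next_attempt(mv)} MHz next")
```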



iRSs said:


> can you do voltage frequency curve in afterburner?
> or i need to use evga precision?
> 
> Sent from my LG-H930 using Tapatalk


You can use AB. Precision's VF curve is very small and harder to work with in my opinion.


----------



## BudgieSmuggler

Fyi i don't have any problems with having afterburner and extreme tuner open. I use afterburner for osd. i set extreme tuner for voltage (it's saved with previous settings already when i open it) and then just use saved profile on Afterburner that i have that matches identically the settings in extreme tuner. Except for the voltage lock which is controlled by Extreme tuner obviously. No issues using both


----------



## arrow0309

Jpmboy said:


> no problem. I wouldn't use both at the same time tho. Control conflict can happen. In AB, put your mouse in the graph window, hit Ctrl+F, make your curve, pick a point on the curve, hit Ctrl+L, then apply. It applies the voltage curve AND locks the card in P0. :thumb:





gridironcpj said:


> If you really want to get the "curve" right, I would recommend testing each voltage increment and find the highest stable clock for each. I used 15-minute runs of Heaven to get an idea of what the curve should look like and it hasn't failed me for any other applications. I tested each point from 1.000V to 1.093V.
> 
> For testing a particular voltage, open the VF curve and select the desired voltage, then move the point up to a desired clock (be realistic). Press "L" once it's in the position you want, then apply (the checkmark in AB). The curve beyond that point will be flat and you'll be locked at that voltage-frequency point. As a point of reference, I was able to do 2100MHz at 1.000V. Keep in mind it won't stay locked if you hit the power limit or are above a particular temperature (the clock will drop 15MHz if the card surpasses about 40C, for example). Run Heaven for 15 minutes. If it survives, keep increasing the clock for that particular voltage. Once you find the highest stable clock, move on to the next point. I made a spreadsheet tracking each stable voltage-frequency point. I didn't make my final curve until I finished my testing.
> 
> When making your final curve, start from 1.093V and work your way down. Otherwise, hitting "apply" may shift your curve around. It's not my favorite software to work with honestly, as the curve has a mind of its own when I hit apply.
> 
> 
> 
> You can use AB. Precision's VF curve is very small and harder to work with in my opinion.


The hard way 



BudgieSmuggler said:


> Fyi i don't have any problems with having afterburner and extreme tuner open. I use afterburner for osd. i set extreme tuner for voltage (it's saved with previous settings already when i open it) and then just use saved profile on Afterburner that i have that matches identically the settings in extreme tuner. Except for the voltage lock which is controlled by Extreme tuner obviously. No issues using both


And the easy way :specool: 

Thanks for the input guys.


----------



## arrow0309

TobbbeSWE said:


> Got this after a flash attempt with the Galax bios on my Non-A palit card. Thanks for the quick guide btw!
> What a shame considering my cooling potential.
> 
> Here is the rig and my cooling btw.
> 
> After about 30 min in the Heaven benchmark I get these temps. The water temperature never really gets above ambient. The radiator contains about 20L of liquid and it's a 3-panel radiator, 1800x600mm. The fans are just for looks, really. It's already insane, so why not go full balls-to-the-wall, you know.
> 
> Clocks stays between 2010-2070mhz.


My block exactly, less radiant mass but pretty happy so far. 
Nice setup bud!


----------



## Jpmboy

BudgieSmuggler said:


> Fyi i don't have any problems with having afterburner and extreme tuner open. I use afterburner for osd. i set extreme tuner for voltage (it's saved with previous settings already when i open it) and then just use saved profile on Afterburner that i have that matches identically the settings in extreme tuner. Except for the voltage lock which is controlled by Extreme tuner obviously. No issues using both


like I said, control conflict/clash CAN happen.


----------



## Redwoodz

kot0005 said:


> please ban this troll..Since when is display artifacting and shorting the pcb the same ??


Design defect in the PCB. Extreme hot spots. Since when have you EVER heard of 3 out of 4 cards failing within days? Including the "new" Samsung chip cards. Keep burying your head if you like. https://www.hardocp.com/article/2018/11/21/rtx_2080_ti_fe_escapes_testing_by_dying_after_8_hours/


----------



## nyk20z3

Any update on the 2080 Ti Lightning release date?


----------



## csaris

I'm about to place an order for an EVGA 2080 Ti Black Edition (non-binned), part no. 11G-P4-2281-KR.

Does anyone know the max power limit allowed?


----------



## dante`afk

axiumone said:


> So, first shots of the new rtx titan.


5% more performance, if at all (no 380W BIOS), with a $1500 higher price tag incoming.


----------



## GAN77

TobbbeSWE said:


> After about 30 min in the Heaven benchmark I get these temps. The water temperature never really gets above ambient. The radiator contains about 20L of liquid and it's a 3-panel radiator, 1800x600mm. The fans are just for looks, really. It's already insane, so why not go full balls-to-the-wall, you know.


Great! Do you have a flow sensor? What is the flow rate?


----------



## Jpmboy

dante`afk said:


> 5% more performance, if at all (no 380W BIOS), with a $1500 higher price tag incoming.


did you see the cuda core count difference? 5%... :wth:
was that the first RTX Titan-hater post? I think it is!


----------



## nycgtr

Jpmboy said:


> did you see the cuda core count difference? 5%... :wth:
> was that the first RTX Titan-hater post? I think it is!


Well, I think it's 5.6 haha. The math would be based off the full chip. I think 10% is possible. I will be skipping this one though. Kinda feeling these oversized AIB Tis at the moment.
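For anyone checking the math, the 5%-ish figures in this exchange both come from the same two core counts, just with different denominators:

```python
# Where the 5%-ish numbers come from: the Titan RTX uses the full
# TU102 die (4608 CUDA cores), the 2080 Ti the cut-down one (4352).
ti_cores, titan_cores = 4352, 4608

uplift_vs_ti  = (titan_cores - ti_cores) / ti_cores * 100     # ~5.9% more cores than the Ti
share_of_full = (titan_cores - ti_cores) / titan_cores * 100  # ~5.6% of the full die disabled

print(f"{uplift_vs_ti:.1f}% / {share_of_full:.1f}%")
```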


----------



## Jpmboy

nycgtr said:


> Well, I think it's 5.6 haha. The math would be based off the full chip. I think 10% is possible. I will be skipping this one though. Kinda feeling these oversized AIB Tis at the moment.


lets see the rt and tensor core count. it's not quite linear, but close. lol - I've had every Titan since the OG. Each time it has been better than the Ti SKU in that family. How much? we'll see.


----------



## nycgtr

Jpmboy said:


> lets see the rt and tensor core count. it's not quite linear, but close. lol - I've had every Titan since the OG. Each time it has been better than the Ti SKU in that family. How much? we'll see.


Yeah, I am tapped out GPU-wise this year. I bought 4 TXPs and 20 2080 Tis, of which I still have 4 2080 Tis left, so yeah, I'm done messing with GPUs for the year. However, I can't say my self-control will be amazing when it goes live lol.


----------



## ThrashZone

Jpmboy said:


> lets see the rt and tensor core count. it's not quite linear, but close. lol - I've had every Titan since the OG. Each time it has been better than the Ti SKU in that family. How much? we'll see.


Hi,
Are you going by the 2080 Ti to Titan V stats increase, or the piddly 1080 Ti to Titan Xp stats, which were only about 10% apart depending on which benchmark one was doing?

I believe we can expect Titan V pricing regardless


----------



## arrow0309

Jpmboy said:


> like I said, control conflict/clash CAN happen.


Indeed, it conflicted like hell. 
Managed in the end (however) to get them working together.
No luck with the overvoltage.
Then tried with the curve in Afterburner; same thing.
It won't increase the damn voltage. Untouched it should be 1.062V, but it locks to 1.050V whenever I set it higher with either of the two methods above.


----------



## Rakanoth

I keep reading about people's RTX cards dying. Therefore, I am a bit reluctant to get a new RTX 2080 Ti. Should I wait a little bit more until all the issues are sorted out?


----------



## nycgtr

Rakanoth said:


> I keep reading about people's RTX cards dying. Therefore, I am a bit reluctant to get a new RTX 2080 Ti. Should I wait a little bit more until all the issues are sorted out?


I think they are mostly fe cards and the situation is overblown.


----------



## BudgieSmuggler

Hi. i wasn't dismissing what you said. I was just letting the other guy know that there's no issue using both.


----------



## TobbbeSWE

GAN77 said:


> Great! Do you have a flow sensor? What is the flow rate?


I had one with a propeller before, but it started leaking on me. I can just say that the flow is great! It doesn't really matter how big a tank or radiator you use. It's the difference in height between components that causes high pressure and bad flow, because the water is being forced upwards against gravity. The D5 pump handles height differences pretty well though. Besides small fittings with tight curves and so on, height is the big problem with transporting all that liquid.

When I mounted that big radiator I made sure to put it at the same height as my PC, on the other side of the drywall. This D5 is able to pump it around pretty well even on the lowest setting.

Here is a pic from when I picked it up; the double pallet puts in perspective how big it really is haha


----------



## TobbbeSWE

I've spoken with Nvidia btw. Basically asked them outright to send me a non-A BIOS with a TDP higher than 280W, or BIOS editing software. 

Called them out (in a very polite way) for making an OEM-like GPU/card and selling it like any other 2080 Ti. It should be called... 2080 Ti CV ("Commoners' Version"), or 2080 Ti OEM, or 2080 Ti Locked/Gimped.

Can't wait to see what they say.


----------



## Jpmboy

TobbbeSWE said:


> Got this after a flash attempt with the Galax bios on my Non-A palit card. Thanks for the quick guide btw!
> What a shame considering my cooling potential.
> 
> Here is rig and my cooling btw..
> 
> After about 30 min in the Heaven benchmark I get these temps. The water temperature never really gets above ambient. The radiator contains about 20L of liquid and it's a 3-panel radiator, 1800x600mm. The fans are just for looks, really. It's already insane, so why not go full balls-to-the-wall, you know.
> 
> Clocks stays between 2010-2070mhz.


good god.

there's nothing you're gonna fit in that case (or 5 additional cases) that can come near needing that much rad space. That's the thing with ambient cooling... you can't do better than the ambient air temp.


----------



## GraphicsWhore

arrow0309 said:


> Nice score and achievement however :specool:
> What driver?
> Btw, keeping both Xtreme Tuner and msi ab is OK, no issues?


Newest driver + hotfix.

No issues with xtreme tuner and AB. Technically there's no reason to run them both at the same time but at one point I did to use the AB OSD and no issues.


----------



## GosuPl

Jpmboy said:


> lets see the rt and tensor core count. it's not quite linear, but close. lol - I've had every Titan since the OG. Each time it has been better than the Ti SKU in that family. How much? we'll see.



I have hope for 10%, but... 

2080 Ti - 352-bit / 4352 CUDA / 88 ROP / 272 TMU / 68 RT / 544 Tensor
RTX TITAN - 384-bit / 4608 CUDA / 96 ROP / 288 TMU / 72 RT / 576 Tensor

Maybe I'll switch my 2x 2080 Ti for 1x RTX TITAN (AC Origins / Odyssey didn't work with SLI) and then, maybe, if games support SLI + RT, add the second one.
But we shall see; if the perf boost is lower than 10%, mehhh, waste of time ;-)


----------



## TobbbeSWE

Jpmboy said:


> good god.
> 
> there's nothing you're gonna fit in that case (or 5 additional cases) that can come near needing that much rad space. That's the thing with ambient cooling... you can't do better than the ambient air temp.


Let's just say... once upon a time I was a great overclocker haha! And I overclocked for very long periods, with different hardware than now. My room was loud and warm, especially during summer, and the water temps usually went a few C above ambient. THERE WERE STILL POINTS TO GAIN! So I went insane and bought this radiator, and I have perfected it since.

Thing is, if you can keep your water close to ambient for hours straight, that must be a good thing? 

This is just with one reference 780 Ti with soldered wires and resistors. With 2 in SLI my 1200W supply shut down sometimes, so I needed 2 PSUs in the end.


----------



## cx-ray

Watch 48h Heaven benchmark with RTX on your RTX so you can enjoy your RTX while watching RTX on your RTX:


----------



## GraphicsWhore

cx-ray said:


> Watch 48h Heaven benchmark with RTX on your RTX so you can enjoy your RTX while watching RTX on your RTX:
> 
> https://www.youtube.com/watch?v=rHZE605Fxks


Technically there are no RTX features enabled so we're just watching RTX with no RTX on our RTX.


----------



## dante`afk

Jpmboy said:


> did you see the cuda core count difference? 5%... :wth:
> was that the first RTX Titan-hater post? I think it is!



just being realistic, there is not too much headroom. It will still be power limited; more CUDA cores, yes, but how much more performance does that equal? 5-10% more fps tops - I'm happy if I'm wrong tho.

And the price? Imagine it costing $3500 like the Titan V for just 5-10% more performance, LUL.


----------



## Asmodian

I finally got a working shunt mod on my FE 2080 Ti. 

8 mOhms on top of the 5 mOhms shunts works perfectly, for 1.625x the power limit, 422W at 100% or 520W at 123%.
https://www.overclock.net/forum/69-nvidia/1714864-2080-ti-working-shunt-mod.html
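The arithmetic behind that 1.625x multiplier, in case anyone wants to size their own shunts. Note the 260W/320W FE base limits below are inferred from the 422W/520W figures in the post, not taken from a spec sheet.

```python
# Shunt-mod arithmetic: a resistor soldered on top of the stock
# 5 mOhm shunt sits in parallel with it, so the card senses a lower
# resistance and under-reads current (and thus power) by that factor.
# The 260 W / 320 W FE base limits are assumptions inferred from the
# 422 W / 520 W figures quoted above.

def power_multiplier(stock_mohm, added_mohm):
    """Factor by which the card under-reads power after the mod."""
    parallel = stock_mohm * added_mohm / (stock_mohm + added_mohm)
    return stock_mohm / parallel

m = power_multiplier(5.0, 8.0)
print(f"multiplier: {m:.3f}")       # 1.625
print(f"100%: {260 * m:.0f} W")     # ~422 W
print(f"123%: {320 * m:.0f} W")     # 520 W
```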


----------



## iRSs

Asmodian said:


> I finally got a working shunt mod on my FE 2080 Ti.
> 
> 8 mOhms on top of the 5 mOhms shunts works perfectly, for 1.625x the power limit, 422W at 100% or 520W at 123%.
> https://www.overclock.net/forum/69-nvidia/1714864-2080-ti-working-shunt-mod.html


Did u try 5 mOhms on top of the 5 mohms already on the card?

Sent from my LG-H930 using Tapatalk


----------



## Asmodian

Yes, adding 5 mOhm shunts on top causes the card to go into safety mode where it is locked to 300 MHz.


----------



## lolhaxz

dante`afk said:


> just being realistic, there is not too much headroom. It will still be power limited; more CUDA cores, yes, but how much more performance does that equal? 5-10% more fps tops - I'm happy if I'm wrong tho.
> 
> And the price? Imagine it costing $3500 like the Titan V for just 5-10% more performance, LUL.



Ummmm - it's NVIDIA you are talking about here; they are the absolute MASTERS of making early adopters feel like poop... I wouldn't mind betting it comes out at either the 2080 Ti price or just above, and the 2080 Ti price gets the chop by $100 or so.

They will want to have something ready to say "hey look, we're still the best" when AMD does its press release for its rubbish rebadges in January.


----------



## lolhaxz

Asmodian said:


> I finally got a working shunt mod on my FE 2080 Ti.
> 
> 8 mOhms on top of the 5 mOhms shunts works perfectly, for 1.625x the power limit, 422W at 100% or 520W at 123%.
> https://www.overclock.net/forum/69-nvidia/1714864-2080-ti-working-shunt-mod.html


This is probably quite ignorant on my part... but what do you need 422W-520W for? Or just because you can?

With the Galax BIOS at 370W I've yet to find anything that will cause it to hit the power limit, even at up to 1.093V, except Furmark (and from my extensive testing, more voltage is worth at most perhaps another 15MHz).

Admittedly that's 4K/2xAA (Furmark), but it still holds 2055-2070MHz.


----------



## cstkl1

dante`afk said:


> Jpmboy said:
> 
> 
> 
> did you see the cuda core count difference? 5%... :wth:
> was that the first RTX Titan-hater post? I think it is!
> 
> 
> 
> 
> just being realistic, there is not too much headroom. it will still be power limited, more cuda cores yes, but how much more performance does that equal? 5-10% more fps tops - i'm happy if I'm wrong tho.
> 
> and the price? imagine it costing it like titan v 3500$ for just 5-10% more performance, LUL.

eh, based on Quadro single precision at 16 TFLOPS vs the Ti's 13.4... that's 19%...

The rest we can gauge from 
https://wccftech.com/nvidia-turing-gpu-geforce-rtx-2080-ti-ipc-block-diagram-detailed/amp/

Not much improvement on Tensor cores: 576 vs 544.
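For what it's worth, the 19% above is just the FP32 throughput ratio; the TFLOPS numbers are as quoted in the post, not official spec-sheet values.

```python
# Sanity check on the 19% figure: FP32 throughput of the full-die
# TU102 Quadro (~16 TFLOPS, as quoted) vs the 2080 Ti (~13.4 TFLOPS).
quadro_tflops, ti_tflops = 16.0, 13.4
uplift = (quadro_tflops / ti_tflops - 1) * 100
print(f"FP32 uplift: {uplift:.1f}%")  # ~19.4%
```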


----------



## kx11

if it's got dual fans only , i wonder how bad the temps are with OC


----------



## Asmodian

lolhaxz said:


> This is probably quite ignorant on my part... but what do you need 422W-520W for? Or just because you can?
> 
> With the Galax BIOS at 370W I've yet to find anything that will cause it to hit the power limit, even at up to 1.093V, except Furmark (and from my extensive testing, more voltage is worth at most perhaps another 15MHz).


Only because I can, of course. I actually run 1.025 V, it is still stable at my max OC of 2070 MHz. 

When I first got the card I couldn't flash the bios to the Galax one, so I tried the shunt mod with some 5 mOhm shunts I had left over from my Titan X mod (even though der8auer said it caused his card to go into safe mode). Of course it also went into safe mode, so I desoldered the resistor off the back and ordered 8 to 20 mOhm shunts. I was finally able to get back to it today and I was pleased that I only needed the 8 mOhm resistors. I didn't want to leave it with only one shunt modded because of the power balancing circuitry.

Are we able to flash the BIOS on the FE 2080 Ti now? I probably wouldn't have taken a soldering iron to it originally if I had been able to flash to the Galax bios. The performance difference between a 370W limit and my 520W is probably non-existent in anything I run, the performance gain from the stock max of 320W is already very small. Still, it is great to always see Max: 0 for Power limit in Afterburner. 

Conveniently the power limit would also equal 1 when it was in emergency mode so it is easy to be sure it isn't freaking out, even briefly, now.

Edit: I see we are able to flash the FE now, that is great and makes a shunt mod much less reasonable.


----------



## iRSs

Asmodian said:


> Yes, adding 5 mOhm shunts on top causes the card to go into safety mode where it is locked to 300 MHz.


how about 6 and 7 mohms?
i want to try the maximum limit.

also, did u check pcb temperature in vrm location while you were in furmark with shunt mods?

Sent from my LG-H930 using Tapatalk


----------



## hdtvnut

My Asus O11G arrived from NewE. It's doing about what's expected: max boost 2070-2125, max mem 16000. Firestrike Extreme is up about 23% from my 1080 Ti FTW3 at a score of 17,657, while Time Spy Extreme is up 44% at 7,638. The fans are easily holding it at 60 degrees. Nice card, but the biggest I've ever seen. I'll leave it as-is for now, to make sure it's going to hang in there.

7980xe relid
Asrock OC Formula
16 x 4 C16 GSkill
HK IV Copper
EK res 140 D5
Nemesis 360gtx
Seasonic 1000w Prime Platinum
Corsair 750D Airflow


----------



## boli

If you feel generous I'd appreciate your thoughts on the following.

*TL;DR:*
Anything wrong with getting a Palit Gaming Pro (OC) for use with an EK block?

I'd be testing it on air for a few days, hoping to catch any issues, then block it and probably put the 380W firmware on it to see what my (limited cooling) loop can handle. 

*Lots of details: history, setup etc:*
I've been using a Titan X (Pascal) on water (EK) for about 2 years, at +200/+600 OCs, to play modern games at UHD/4K.

The CPU is a 6700K at stock speed (OC had no effect on fps at 4K, so I'd rather save the extra power budget for the GPU), since cooling capacity is limited with a single 280 (45mm thick) radiator (2 Noctua fans in push config).

Compared to what most others are rocking this seems underpowered, but it worked surprisingly well for me. At very quiet settings the GPU ends up at 50 to 55 C during longer sessions.

Anyway, this summer I replaced the previous 60 Hz 4K display with a 144 Hz one. HDR is quite nice indeed, and available in surprisingly many games that I play(ed), such as AC Origins & Odyssey, BF1 & 5, SotTR, FC5, Hitman 2, Destiny 2. 

Obviously the Titan X struggles to get more than 60 fps in most games (e.g. BF5 runs at ~60 fps on ultra and ~90 fps on low), hence my wish to upgrade to a 2080 Ti.

I'm still waiting for an ASUS Dual OC I ordered almost 6 weeks ago (pretty bad 2080 Ti availability in Switzerland, and FEs not available at all). The EK block for it I've had since September, it's one of the early ones with fewer thermal pads (I've read in this thread that it might be better not to add the 3rd pad anyway, to assure good contact).

After having read the first post (awesome work BTW!) and pretty much the whole second half of this thread, I came to the conclusion that I should probably get any reference card with an A-chip, test it thoroughly, and possibly put the 380W firmware on it once it's blocked.

Currently, the only available A-chip 2080 Ti I can find are the Palit "Gaming" ones (regular and OC), hence the initial question.

*Reliability*
It doesn't inspire confidence that some 2080 Ti fail (at least mostly without flames ). My thinking is that I can at least limit the risk somewhat by thoroughly testing before putting a block on it.

On the other hand, assuming it could fail any time, I might be better off getting a pre-blocked card, such as the Gigabyte Aorus Xtreme, as that would reduce the hassle of warranty-return considerably.

*RTX/Turing Titan?*
The possibility of a Turing Titan release "soon" tends to make me want to continue waiting; however, given Nvidia doesn't ship anything to Switzerland (only Austria and Germany are served from nvidia.de), I'd have to get it on eBay or something, like my current Titan X.
Also, if it has Titan V pricing I doubt it would have the performance to make it worth it to me.


----------



## jcleary47

I just got my Asus ROG Strix O11G last week. Loving it!

It sounds like there's not a lot of waterblocks out for this card yet, correct? I saw the Bitspower Lotan on Performance-PCS just recently came into stock. Are there any others in the works? Release dates? Hoping to get this added into my loop soon!


----------



## Jpmboy

TobbbeSWE said:


> Let's just say... once upon a time I was a great overclocker haha! And I overclocked for very long periods, with different hardware than now. My room was loud and warm, especially during summer, and the water temps usually went a few C above ambient. THERE WERE STILL POINTS TO GAIN! So I went insane and bought this radiator, and I have perfected it since.
> 
> Thing is, if you can keep your water close to ambient for hours straight, that must be a good thing?
> 
> This is just with one reference 780 Ti with soldered wires and resistors. With 2 in SLI my 1200W supply shut down sometimes, so I needed 2 PSUs in the end.


yeah, I had to use 2x 1200W psus with my 3 780Ti kingpins + evbot. Add2PSU is a great little unit. You should just get a chiller


----------



## ThrashZone

Hi,
Yeah, the Performance-PCs holiday discounts weren't as good as I would have hoped for $300-400 purchases, so I passed on the chiller.


----------



## nycgtr

jcleary47 said:


> I just got my Asus ROG Strix O11G last week. Loving it!
> 
> It sounds like there's not a lot of waterblocks out for this card yet, correct? I saw the Bitspower Lotan on Performance-PCS just recently came into stock. Are there any others in the works? Release dates? Hoping to get this added into my loop soon!


I have the strix block from BP its a nice block. Can recommend it.


----------



## GAN77

jcleary47 said:


> I just got my Asus ROG Strix O11G last week. Loving it!
> 
> It sounds like there's not a lot of waterblocks out for this card yet, correct? I saw the Bitspower Lotan on Performance-PCS just recently came into stock. Are there any others in the works? Release dates? Hoping to get this added into my loop soon!


Watercool.de planned.


----------



## d5aqoep

Just got my Zotac 2080 Ti AMP. How do I check whether my chip is A or non-A?


----------



## skingun

The Watercool block is much better than the Bitspower block. But that's not surprising, really. The Aquacomputer block looks like it will be very good too, but it's not available yet.


----------



## Asmodian

iRSs said:


> how about 6 and 7 mohms?
> i want to try the maximum limit.
> 
> also, did u check pcb temperature in vrm location while you were in furmark with shunt mods?


I was unable to find 6 mOhm shunts; I did order 7 mOhm ones, but because I expected even 8 to be too low I started with that. Now I don't really feel like taking it apart and testing the 7s. 

7 vs. 8 mOhm is not that big of a change in power either: 1.714x instead of 1.625x, for a limit of 548W instead of 520W.

Furmark doesn't seem like a good idea, I obviously cannot RMA the card and a power virus could cause damage when no real workloads would. That said I was testing madVR with settings turned way up. When skipping around a video a lot I was able to hit the power limit and even triggered the thermal throttling flag even though the reported GPU temp never went above 39°C. I think 520W is more than enough to run through my poor card so I will be sticking with the 8 mohm shunts. I didn't check PCB temps, I have an EK WB and back plate so that wouldn't be super easy.


----------



## NewType88

iRSs said:


> vrm should be hotter because the midplate is not doing much cooling.
> how much is 30c hotter?
> i played path of exile, which heats my card almost like furmark, and the average wattage was 250w (depends on the ingame area)
> i cannot go past 255w because my card is limited at 112%; the only way for me to get past this is to put liquid metal on the shunt again (i will do that Sunday when i get back home)
> the curious thing is that gpu-z shows 250w, but the wattmeter on the wall says 370w, which is 100w LOWER than quake, which was something like 80% power.
> if i recall correctly, when i had shunt mods the wattmeter said ~700w in path of exile and temperatures were ~70, but EVERYTHING on the card was quite warm. every single capacitor, coil and mosfet. but, hell, that's 250w more than stock!
> 
> that backplate is the "back-side cooler" from the photo i attached and it has some thick blue thermal pads between itself and the pcb. i will take more photos sunday when i do the shunt mods again.



Under load with the stock cooler I would get the left mem bank around high 50s/low 60s C, and the top and right mem banks would get to mid 80s to 90C. The VRM got to like mid 80s C with the Morpheus, but the RAM got much hotter. I've seen a video of the Morpheus being used with just the midplate and the VRMs/mem were fine. Could be something up with the thermal pads... I dunno. I would have kept it on even if it cooled the same, because the iPPC fans even at 2000rpm sound much better than the stock fans.

Why am I sideways?


----------



## NewType88

Jpmboy said:


> yeah, I had to use 2x 1200W psus with my 3 780Ti kingpins + evbot. Add2PSU is a great little unit. You should just get a chiller


How loud does that thing get ? Compare it to a system fan speed too for comparison.


----------



## GraphicsWhore

Asmodian said:


> iRSs said:
> 
> 
> 
> how about 6 and 7 mohms?
> i want to try the maximum limit.
> 
> also, did u check pcb temperature in vrm location while you were in furmark with shunt mods?
> 
> 
> 
> I was unable to find 6 mOhm shunts; I did order 7 mOhm ones, but because I expected even 8 to be too low I started with that. Now I don't really feel like taking it apart and testing the 7s.
> 
> 7 vs. 8 mOhm is not that big of a change in power either: 1.714x instead of 1.625x, for a limit of 548W instead of 520W.
> 
> Furmark doesn't seem like a good idea; I obviously cannot RMA the card, and a power virus could cause damage when no real workloads would. That said, I was testing madVR with settings turned way up. When skipping around a video a lot I was able to hit the power limit and even triggered the thermal throttling flag, even though the reported GPU temp never went above 39°C. I think 520W is more than enough to run through my poor card, so I will be sticking with the 8 mOhm shunts. I didn't check PCB temps; I have an EK WB and backplate, so that wouldn't be super easy.

Are you actually drawing 520w?


----------



## TraktorXD

Hi guys,
is there anyone who has already tried to flash the BIOS on the STRIX OC? I mean the Galax or HOF OC one? Thanks for any advice


----------



## iRSs

NewType88 said:


> Under load the stock cooler I would get the left mem bank around high 50 low 60c and the top and right mem banks would get to mid 80s to 90c. The vrm got to like mid 80c with the morpheous, but the ram got much hotter. Ive seen a video of the morpheous being used with just the midplate and vrms/mem were fine. Could be something up with the thermal pads...I dunno. I would of kept it on even if it cooled the same because the ippc fans even at 2000rpm sound much better than the stock fans.
> 
> Why am I sideways ?


It does not look like it stays too hot, but I do not know what 80 degrees really means here; when my PC was drawing 700W from the wall, the PCB where the VRM is located was quite hot, and now the backplate radiator heats up a lot. The thermal sensor I put there never goes above 60 degrees, but that is the temperature outside the MOSFET itself, which could be 20 degrees higher.
Either way, 80 degrees for the VRM does not sound right. It IS below 125, which is the maximum "recommended" operating temperature, but running cooler means better efficiency, lower degradation and lower overall PCB temperature.

I did the trick with the voltage curve and the max I can reach is 2175MHz @1.075v; I cannot do 2190MHz even with 1.093v, so I guess that is my silicon limit.
With this I often hit the power limit in Quake, but Path of Exile stays around 1800MHz because of the power limitation.

The curve overclock is [email protected] because I can do 2175MHz @1.075v, but once my card heats up the whole frequency curve shifts down to around 2145MHz at 1.075v. If I raise it to 2175MHz, then when the card cools down it crashes, because the whole curve shifts up and I end up at 2190MHz at whatever voltage...
Is there a way to lock voltage and frequency?
Galax Extreme Tuner only locks 1.050v.

Tomorrow I will test it with shunt mods.

PS: you are sideways because of the EXIF information inside the photo.


----------



## Caffinator

What's the 3DMark Vantage score of these GPUs? lol


----------



## mistershan

What's new with the 2080 Ti? It seems to be sold out everywhere, and I heard they're having lots of problems. Even if I do find one, should I just stay away?


----------



## Vow3ll

mistershan said:


> What's new with 2080ti? It seems to be sold out everywhere and I heard they having lots of problems. Even if I do find one, should I just stay away?


https://www.nowinstock.net/computers/videocards/nvidia/rtx2080ti/


----------



## GraphicsWhore

Caffinator said:


> whats 3dmark vantage score of these gpus lol


A lot.



mistershan said:


> What's new with 2080ti? It seems to be sold out everywhere and I heard they having lots of problems. Even if I do find one, should I just stay away?


Seems like the initial batch of cards had some issues but most of them are fine.


----------



## gamingarena

WOW, a Titan V CEO Edition 32GB just showed up on eBay:

https://www.ebay.com/itm/NVIDIA-TIT...h=item286fac0f32:g:mrkAAOSwkvtcApsR:rk:2:pf:0

Going by the specs, this looks like a fully unlocked GV100 Volta (AKA Quadro GV100) with 128 ROPs and 32GB HBM2 on a 4096-bit bus with almost 900 GB/s bandwidth. This beast should blow the 2080 Ti out of the water, not to mention the full 32GB VRAM...

Thinking of bidding on it. What do you guys think, is it good value for this beast?


----------



## Zurv

gamingarena said:


> WOW Titan V CEO 32Gb Just showed up on Ebay,
> 
> https://www.ebay.com/itm/NVIDIA-TIT...h=item286fac0f32:g:mrkAAOSwkvtcApsR:rk:2:pf:0
> 
> Going by the specs looks like this is fully Unlocked GV100 Volta ( AKA Quadro GV100 ) with 128 Rops 32Gb HBM2 4096Bit almost 900Gbt/s Bandwith, this beast should blow 2080ti out of the water, not to mention full 32Gb Vram...
> 
> Thinking of bidding on it, what you guys think its a good value for this beast?


wow. As the owner of 2 Titan Vs (for gaming!) that is a sexy deal. If you do get it do put it under water. It gets super hot and the cooling is BS.

(no wonder i couldn't sell any of my Titan Vs when the sexier version is going for this cheap. poooo.)


----------



## gamingarena

Zurv said:


> wow. As the owner of 2 Titan Vs (for gaming!) that is a sexy deal. If you do get it do put it under water. It gets super hot and the cooling is BS.
> 
> (no wonder i couldn't sell any of my Titan Vs when the sexier version is going for this cheap. poooo.)


I'm not so sure it will end that cheap; the listing is only 5 hours old with 9 days to go and already has 14 bids, so I'm sure it will go way up. But I wouldn't mind it staying at this price at all.


----------



## ESRCJ

TraktorXD said:


> Hi guys,
> is there already some one how is try to flash bios on STRIX OC? I mean the Galax or HOF OC? Thanks for all advices


I tried the Galax 380W BIOS, but did not see the results others got flashing the same BIOS onto the reference PCB. The issue is the Strix has a custom PCB (I just said a custom PCB is an issue... lol), so we need a BIOS designed specifically for it. You might gain a percent or two with the Galax BIOS, but that's about it. A shunt mod would be ideal for this card at the moment. I'm waiting until I at least get a new waterblock, as I'm not terribly impressed with the Bitspower block: you need to install it without the backplate if you want the best contact. With the stock backplate installed, you're limited to 7 screws for the block, excluding those closest to the die.


----------



## dante`afk

Titan RTX will have 24GB.










2699Euro = 3059.14 USD
No HBM2

1400 more on the price tag for what, 10% more performance? please be more 


280W RTX Titan vs 380W RTX 2080 Ti, who wins? The Ti does. Why doesn't NVIDIA just give us an unlocked 450W BIOS?


----------



## TobbbeSWE

dante`afk said:


> Titan RTX will have 24GB.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 2699Euro = 3059.14 USD
> No HBM2
> 
> 1400 more on the price tag for what, 10% more performance? please be more


This explains the difference in price. The Titan V and Titan RTX are more suited to scientists than the first Titan series (Kepler, Maxwell, and Pascal), which was more "GeForce" oriented.

https://www.anandtech.com/show/13668/nvidia-unveils-rtx-titan-2500-top-turing


----------



## lolhaxz

dante`afk said:


> Titan RTX will have 24GB.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 2699Euro = 3059.14 USD
> No HBM2
> 
> 1400 more on the price tag for what, 10% more performance? please be more
> 
> 
> 280w RTX Titan vs 380w RTX 2080Ti, who wins? the Ti does. Why does nvidia just not give us an unlocked 450w bios.



Now official - https://www.nvidia.com/en-us/titan/titan-rtx/

Guess I was wrong, wow - $2,499 (.99!)


----------



## dante`afk

that's just marketing BS. the rtx titan is a gaming card, with a little bit of "pro"


----------



## TobbbeSWE

Asmodian said:


> I was unable to find 6 mohm shunts, I did order 7 mohm ones but because I expected even 8 to be too low I started with it. Now I don't really feel like taking it apart and testing the 7s.
> 
> 7 v.s. 8 mohm is not that big of a change in power either, 1.714x instead of 1.625x for a limit of 548W instead of 520W.
> 
> Furmark doesn't seem like a good idea, I obviously cannot RMA the card and a power virus could cause damage when no real workloads would. That said I was testing madVR with settings turned way up. When skipping around a video a lot I was able to hit the power limit and even triggered the thermal throttling flag even though the reported GPU temp never went above 39°C. I think 520W is more than enough to run through my poor card so I will be sticking with the 8 mohm shunts. I didn't check PCB temps, I have an EK WB and back plate so that wouldn't be super easy.


Skip the back plate and hit it with a fan. That will cool off the rear PCB much more, and in so doing lower overall temps where the water block isn't touching. Lowering temps on the back will lower temps through the whole circuit board. Back plates normally only help with looks, and usually contain the heat, making the PCB and GPU hotter.


----------



## TobbbeSWE

Jpmboy said:


> yeah, I had to use 2x 1200W psus with my 3 780Ti kingpins + evbot. Add2PSU is a great little unit. You should just get a chiller


Chillers are nice! But they draw power and make a lot of noise, so they are only suited for "overclocking sessions" where the hoses and blocks are constantly watched for condensation. Even with good insulation, condensation starts somewhere eventually. Those extra 20-25C bring a lot of headroom in overclocking though!

..........

I have a rather crazy plan though... I'm heating my house by collecting heat energy via a geothermal heat pump. The collector hose is 400m long and goes 200m down the mountain. The ethanol/water mix, or "brine", stays at a constant 2-5C down there. I'm thinking about putting a heat exchanger on the collector hose. That way I will have a constant supply of cold liquid all year round, and I will also dump a lot of the energy my PC pulls from the wall into the brine, where it will then be picked up by my geothermal heat pump. Win-win scenario. Hahaha!

But then again, when working with below-ambient cooling, condensation is your enemy, and a 24/7 below-ambient loop will be risky if I don't put the PC in an insulated box.
It's fun thinking about what one can do!


----------



## cx-ray

dante`afk said:


> that's just marketing BS. the rtx titan is a gaming card, with a little bit of "pro"


It's for Pro gamers


----------



## Spiriva

Nvidia Geforce 417.22

Provides the optimal gaming experience for Battlefield V Tides of War Chapter 1: Overture Update

http://it.download.nvidia.com/Windows/417.22/417.22-desktop-win10-64bit-international-whql.exe










"Battlefield V: Official DXR Dev Update – Up To 50% Performance Increase"


----------



## cstkl1

i feel da powah... (sick as a dog for the last week or so with what I suspect could be influenza B..)

titan rtx.. if aib sells it.. will buy it. no aib. go die nvidia store.


----------



## djchup

My FTW3 2080ti just got its RMA approved, I have a new one on the way. It couldn't pass 3dmark at 2000 MHz core, and I was getting frequent bluescreens in games at stock speeds. Hopefully my replacement does better.


----------



## Shreve

MrTOOSHORT said:


> 2080 TIs came back in stock at the Nvidia store after a month. I'm thinking they are Samsung chip cards from now on instead of micron. Just ordered one, have the ek block coming too. Very excited!


Did yours come in? were you able to confirm?


----------



## taem

Installed the FTW3 2080 Ti. At first blush, incredibly disappointing. Maybe my settings are all jacked up, but running tests out of the box it barely edges out my 1080 Ti.

Example: Firestrike 1.1

1080 Ti gets a 29,937 graphics score

2080 Ti gets a 32,533 graphics score


Comparably puny gains in Firestrike Ultra and other benches. It barely pips the 1080 Ti.

To top it all off, it's way, way louder than the 1080 Ti Lightning Z I was using.

So, this was a great $1400 purchase lol.


----------



## gamingarena

taem said:


> Installed the FTW3 2080 ti. First blush, incredibly disappointing. Maybe my settings are all jacked up. But running tests ootb, it barely edges my 1080 ti.
> 
> Example: Firestrike 1.1
> 
> 1080 ti gets 29,937 graphics score
> 
> 2080 ti gets 32,533 graphics score
> 
> 
> Comparably puny gains in Firestrike Ultra and other benches. Barely pips the 1080 ti.
> 
> To top it all off, it's way way louder than the 1080 ti Lightning Z I was using.
> 
> So, this was a great $1400 purchase lol.


You can do Better than that get Titan RTX for another 5% increase why stop at $1400


----------



## GraphicsWhore

taem said:


> Installed the FTW3 2080 ti. First blush, incredibly disappointing. Maybe my settings are all jacked up. But running tests ootb, it barely edges my 1080 ti.
> 
> Example: Firestrike 1.1
> 
> 1080 ti gets 29,937 graphics score
> 
> 2080 ti gets 32,533 graphics score
> 
> 
> Comparably puny gains in Firestrike Ultra and other benches. Barely pips the 1080 ti.
> 
> To top it all off, it's way way louder than the 1080 ti Lightning Z I was using.
> 
> So, this was a great $1400 purchase lol.


Something doesn't seem right there, even for stock clocks. As a comparison I hit 32483 graphics on my 1080Ti. Granted that's under water with XOC BIOS and a healthy overclock but still, out of the box I figure you should be around 37k?

What are your clocks during the run? And does it OC at least?


----------



## mistershan

Should I go for the Titan RTX or the 2080 ti? It is twice the price but you are getting twice the ram. Is the Titan more for work? I am a pro video editor and do some graphics/AFX as well but I don't think any of the apps really would use the Titan that much. GPU seems to be more for Color and CGI. 

However, if spending more will get much better results I may go for it since it's all tax deductible for me.


----------



## GraphicsWhore

mistershan said:


> Should I go for the Titan RTX or the 2080 ti? It is twice the price but you are getting twice the ram. Is the Titan more for work? I am a pro video editor and do some graphics/AFX as well but I don't think any of the apps really would use the Titan that much. GPU seems to be more for Color and CGI.
> 
> However, if spending more will get much better results I may go for it since it's all tax deductible for me.


Where's the extra VRAM coming into play though? Given that editing depends so much on the other main components - CPU, RAM, storage - I have a hard time believing the Titan is worth twice the price if you're a video editor. The official page mentions 8K editing. Not to say that if you don't do 8K then it's pointless; just that the 2080Ti is almost certainly going to be more than enough.

I think this is more suitable for machine learning and AI.

You could wait for some benchmarks I guess.


----------



## kot0005

Nice, 1440p RTX at 60fps is a lot better than 1080p. BUT will that be the average?!?! One day to find out.


----------



## NBrock

Are the Nvidia Founder Edition cards the better binned chips? Or are those strictly for the 3rd party higher end cards?


----------



## Esenel

kot0005 said:


> Nice, 1440p RTX at 60fps is a lot better than 1080p. BUT will it be average ?!?! One day to findout.


That was already possible without the patch.
Should be over 80 fps on high then.


----------



## BudgieSmuggler

Pretty much staying over 60fps in BFV, except for Tirailleur or whatever that mission is. It reminds me of 4K on a 1080 Ti. If you drop the odd setting like terrain and vegetation, and set to performance rather than quality or high quality in the NV control panel, then fps really improve. On the first mission I've seen an average of 85 fps indoors, but that drops outdoors with the snow falling. It's encouraging that there is an improvement through new drivers, so hopefully we'll see more improvements in the coming months. Apparently they are working on it. If we get DLSS support we may see further gains. And we weren't getting a steady 60fps with the highest graphics settings beforehand; in parts, but not in the majority of scenes.


----------



## Jpmboy

TobbbeSWE said:


> This explains the difference in price. The Titan V and RTX is more suited for scientist than the first Titan series kepler, maxwell, and pascal that was more "geforce" oriented.
> 
> https://www.anandtech.com/show/13668/nvidia-unveils-rtx-titan-2500-top-turing


actually... the OG Titan was the only "prosumer" SKU that did not have crippled double precision. The RTX Titan is a prosumer card: games very well, can do some scientific compute, good at CGI work, weak at DP science compared to the Quadro.


----------



## iRSs

taem said:


> Installed the FTW3 2080 ti. First blush, incredibly disappointing. Maybe my settings are all jacked up. But running tests ootb, it barely edges my 1080 ti.
> 
> Example: Firestrike 1.1
> 
> 1080 ti gets 29,937 graphics score
> 
> 2080 ti gets 32,533 graphics score
> 
> 
> Comparably puny gains in Firestrike Ultra and other benches. Barely pips the 1080 ti.
> 
> To top it all off, it's way way louder than the 1080 ti Lightning Z I was using.
> 
> So, this was a great $1400 purchase lol.


Something is wrong with your scores.
Max out your fans, install the latest driver (417.22), then run the test again.

I got 38,900 in graphics score, but I have shunt mods and voltage up to 1.093v.
Temps were between 60 and 70 (depending on the test).

I got 610 watts at the wall for the PC alone in *Path of Exile*, and after a few minutes the PSU's *OCP* shuts my PC down...
I guess I need a more powerful PSU than the *Seasonic Prime Titanium 750W* to drive a 2080 Ti with shunt mods )
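As a rough sanity check on those numbers (a sketch; the ~94% figure is an assumed 80 PLUS Titanium efficiency at this load, and OCP can also trip on millisecond transients a wall meter never shows):

```python
# Estimate the DC load a PSU is delivering from a wall-meter reading.
# Assumption: ~94% efficiency, typical of an 80 PLUS Titanium unit
# at roughly 75% load.
def dc_load_from_wall(wall_watts, efficiency=0.94):
    """AC wall draw times efficiency = DC power delivered to the system."""
    return wall_watts * efficiency

dc = dc_load_from_wall(610)
print(f"Estimated sustained DC load: {dc:.0f} W on a 750 W unit")
# That leaves under 180 W of headroom; a shunt-modded GPU's transient
# spikes sit far above the sustained average, which is what trips OCP/OPP.
```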


----------



## taem

GraphicsWhore said:


> Something doesn't seem right there, even for stock clocks. As a comparison I hit 32483 graphics on my 1080Ti. Granted that's under water with XOC BIOS and a healthy overclock but still, out of the box I figure you should be around 37k?
> 
> What are your clocks during the run? And does it OC at least?




It can OC to 2085 and hold it if I crank the fans; otherwise, at the default 60% fan, it holds 2055. Not stellar, but decent.

But here's the thing. On stock clocks I get, for example, 33,000-ish on the Firestrike graphics score, which is too low to begin with. But it gets worse: applying any overclock and running the test, the score drops to 20,000. This happens in every test, and an OC of even +5 will cut all benchmark scores by 33-40%.

What...


----------



## GosuPl

taem said:


> It can oc to 2085 and hold it if I crank fans, otherwise at default 60% fan it holds 2055. Not stellar but decent.
> 
> But here's the thing. On stock clocks, I get for example 33,000 ish on Firestrike graphics score. Which is too low to begin with. But it gets worse. Applying any overclock and running the test, score drops to 20,000. This happens on every test. And oc of +5 will cut all benchmark scores by 33-40%.
> 
> What...


Is this a 2080 Ti FE? If yes, do you have Micron or Samsung memory?


----------



## J7SC

I just joined OCN after visiting this thread for a while for useful upgrade info while sitting on the fence re. an RTX purchase... so I bit the bullet and added an RTX 2080 Ti (Gigabyte Aorus Xtreme WB). First installation runs on an older Z170, 6700K @4.7 with DDR-3866, yesterday, as that has the only 4K (tele), but I will migrate it into a new system I'm building for work and play soon after Christmas.

Given the RTX temp issues and their reported impact on performance in this thread and elsewhere, I'm glad I went for the factory water block version. So far, temps have not exceeded mid-40s C with 21C ambient, though I am not adding voltage yet per EVGA Precision X1. I am surprised at the amount of watts it seems to pull though, per GPU-Z below. Is that kind of wattage 'normal' at stock volts?


----------



## GraphicsWhore

taem said:


> GraphicsWhore said:
> 
> 
> 
> Something doesn't seem right there, even for stock clocks. As a comparison I hit 32483 graphics on my 1080Ti. Granted that's under water with XOC BIOS and a healthy overclock but still, out of the box I figure you should be around 37k?
> 
> What are your clocks during the run? And does it OC at least?
> 
> 
> 
> 
> 
> It can oc to 2085 and hold it if I crank fans, otherwise at default 60% fan it holds 2055. Not stellar but decent.
> 
> But here's the thing. On stock clocks, I get for example 33,000 ish on Firestrike graphics score. Which is too low to begin with. But it gets worse. Applying any overclock and running the test, score drops to 20,000. This happens on every test. And oc of +5 will cut all benchmark scores by 33-40%.
> 
> What...

But if you look at your minimum numbers for memory and core after the run, nothing looks off? They don’t suddenly plummet at some point? Not even sure how low they’d have to be to end with a graphics score of 20k but it’d be obvious. 

Also, what are you using to adjust clocks and look at frequencies? If X1 I’d ditch that **** immediately and try Afterburner. Also you should post about this on the EVGA forums.


----------



## TK421

Jpmboy said:


> actually... the OG Titan was the only SKU "Prosumer" card that did not have crippled double precision. The RTX Titan is a Prosumer card. Games v.good and can do some sci-calc, good at CGI work, weak at DP science compared to the quadro



Kepler OG Titan, vs the Kepler Titan Black?


----------



## hdtvnut

taem said:


> Installed the FTW3 2080 ti. First blush, incredibly disappointing. Maybe my settings are all jacked up. But running tests ootb, it barely edges my 1080 ti.
> 
> Example: Firestrike 1.1
> 
> 1080 ti gets 29,937 graphics score
> 
> 2080 ti gets 32,533 graphics score
> 
> 
> Comparably puny gains in Firestrike Ultra and other benches. Barely pips the 1080 ti.
> 
> To top it all off, it's way way louder than the 1080 ti Lightning Z I was using.
> 
> So, this was a great $1400 purchase lol.



Just testing a new Asus 2080 Ti ROG Strix O11G to compare to my 1080 Ti FTW3. Strix set at +100 core (1750), +2000 mem (16000) in GPU Tweak II; FTW3 set at +50 and +1000 (12000) in Afterburner. Both on stock fans/BIOS, OC near max for stability. Driver 416.34.

Graphics test, FTW3 / Strix:

Firestrike: 29500 / 36000 / +22%

Firestrike Extreme: 14600 / 18010 / +23%

Time Spy: 10600 / 15060 / +42%

Time Spy Extreme: 5291 / 7270 / +37%

So, here at least, the DirectX 12 improvement is much bigger than the DirectX 11 one.
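The uplift percentages in the table above are just score ratios; a quick sketch recomputing them from the quoted figures:

```python
# Recompute the percentage uplifts quoted above from the raw graphics
# scores (1080 Ti FTW3 vs 2080 Ti Strix).
scores = {
    "Firestrike":         (29500, 36000),
    "Firestrike Extreme": (14600, 18010),
    "Time Spy":           (10600, 15060),
    "Time Spy Extreme":   (5291, 7270),
}

for test, (ftw3, strix) in scores.items():
    uplift = (strix / ftw3 - 1) * 100
    print(f"{test}: +{uplift:.0f}%")
# The two DX12 tests (Time Spy) show roughly double the uplift of the
# two DX11 tests (Firestrike).
```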


----------



## arrow0309

J7SC said:


> I just joined OCN after visiting this thread for a while for useful upgrade info while sitting on the fence re. RTX purchase... so I bit the bullet and added a RTX 2080Ti (Gigabyte Aorus Etreme WB). First installation runs on an older Z170, 6700K-4.7-DDR3866 yesterday as that has the only 4K (tele), but will migrate it into a new system I'm building for work and play soon after Christmas.
> 
> Given the RTX temp issues and their reported impact on performance in this thread and elsewhere, I'm glad I went for the factory water block version. So far, temps have not exceeded mid-40s C with 21 C ambient though I am not adding voltage yet per EVGA Prec1. I am surprised at the amount of watts it seems to pull though, per GPUz below. Is that kind of wattage 'normal' at stock volts ?


Interesting, Aorus stock bios?
Yeah, I'd think the wattage is normal for that high oc you've managed to get.
Mine (FE) won't go over 2100 not even at 35C. So I went with the Evga 338W bios.
But that's another story. 

Sent from my ONEPLUS A5010 using Tapatalk


----------



## lolhaxz

J7SC said:


> I just joined OCN after visiting this thread for a while for useful upgrade info while sitting on the fence re. RTX purchase... so I bit the bullet and added a RTX 2080Ti (Gigabyte Aorus Etreme WB). First installation runs on an older Z170, 6700K-4.7-DDR3866 yesterday as that has the only 4K (tele), but will migrate it into a new system I'm building for work and play soon after Christmas.
> 
> Given the RTX temp issues and their reported impact on performance in this thread and elsewhere, I'm glad I went for the factory water block version. So far, temps have not exceeded mid-40s C with 21 C ambient though I am not adding voltage yet per EVGA Prec1. *I am surprised at the amount of watts it seems to pull though, per GPUz below. Is that kind of wattage 'normal' at stock volts ?*


It is interesting to still see so many have problems with power limit, here are some of my results - this is a stock standard Galax OC Twin 2080Ti - with a EK block, ambient is about 22C, Dual PE360's @ about 900rpm.

2100MHz is my daily, but it will do 2130-2160 depending on temperature at 1.093v - not worth the difference however.

There's not many legitimate workloads that I can find that causes it to power limit for any extended period of time, in GPU-Z, Blue is VRel and Green is Pwr ... I note Timespy Extreme second test does hit the Pwr limit... non-extreme does not.

The jump in clocks from 2100 to 2145-2160 is worth a massive 60 points... take 15MHz off all those max clocks, because that's the absolute peak when it starts at ~25C.

Intrigued that my water starts at ~22C and slowly climbs to 31-32C which results in ~45C GPU... yet you started at 33C and only saw 46C.


----------



## d5aqoep

I am waiting for a card which will be for Pro Pro gamers just like I am waiting for a Pro Pro iPad.

Just one pro is not enough to compel me to purchase anything these days.


----------



## ENTERPRISE

That moment when you find out your pre-order is delayed until 30th December from the 30th November. The pain grows lol.


----------



## Aurosonic

J7SC said:


> I am surprised at the amount of watts it seems to pull though, per GPUz below. Is that kind of wattage 'normal' at stock volts ?


I've got the same card and see the same 375W (the in-spec limit for two 8-pin PCIe cables at 150W each, plus 75W from the PCIe slot) @2100/16140 on the stock BIOS. My temps are 42 max after all those Firestrike and Timespy tests. Haven't tried Superposition though.


----------



## arrow0309

What GPU overclock (offset) did you apply? Also, for that GPU voltage (1.081v), did you have to mess with the voltage/clock curve, or does it come straight from your BIOS (or from raising the GPU voltage with the Afterburner slider)?


----------



## iRSs

Aurosonic said:


> I've same card and got same 375W (which is absolute limit for two 8 pin PCI cables (150w each) + 75W from PCIe slot) @2100/16140 stock bios. My temps are 42 max after all those Firestrike and Timespy tests. Haven't tried Superposition though.


who told you that the absolute limit for 2x8 pins is 375w?

Sent from my LG-H930 using Tapatalk


----------



## lolhaxz

iRSs said:


> who told you that the absolute limit for 2x8 pins is 375w?
> 
> Sent from my LG-H930 using Tapatalk


If you follow spec it's 150W per 8pin and 75W from PCIE.

We all know you can probably get away with drawing another ~50-60% from each connector very comfortably assuming the cable gauge is acceptable.

But generally no vendor will go far [if at all] outside of spec, because if your house burns down they could in some ways be culpable; a cheap, crappy power supply may use only the minimum wire gauge required to meet spec, etc.

Same thing with the traces on the motherboard, can it deliver more than 75W? sure can, but again, some cheap edge case motherboard...


----------



## Aurosonic

iRSs said:


> who told you that the absolute limit for 2x8 pins is 375w?
> 
> Sent from my LG-H930 using Tapatalk


The ATX specification told me: https://en.wikipedia.org/wiki/ATX
And I guess there's a reason most BIOSes are limited to 380W, whereas only the Galax cards with three 8-pin connectors get 450-480W.
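For what it's worth, the 375W figure is simply the sum of the nominal per-connector budgets from the PCIe specs (connector ratings, not hard electrical limits). A minimal sketch:

```python
# Nominal PCIe power budgets (per-connector spec ratings, not hard
# electrical limits of the wiring).
PCIE_SLOT = 75  # W, PCIe x16 slot
PIN_6 = 75      # W, 6-pin auxiliary connector
PIN_8 = 150     # W, 8-pin auxiliary connector

def nominal_budget(n_8pin=0, n_6pin=0, slot=True):
    """Total in-spec board power for a given connector loadout."""
    return (PCIE_SLOT if slot else 0) + n_8pin * PIN_8 + n_6pin * PIN_6

print(nominal_budget(n_8pin=2))  # 375 W: the 2x 8-pin reference 2080 Ti
print(nominal_budget(n_8pin=3))  # 525 W: a 3x 8-pin board like the Galax HOF
```

The 450-480W Galax BIOS limits mentioned above sit comfortably under the 525W nominal budget of a three-connector board.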


----------



## Jpmboy

Aurosonic said:


> ATX specification told me: https://en.wikipedia.org/wiki/ATX
> And i guess there's some reason for 380W limitation for most bios, where's 450W-480W for Only Galax with 3 8-pin connectors.


guys - the ATX specification (as posted earlier) is the minimum requirement for the physical connectors and wire gauge. That's all. (period) NOT a limit on what the PSU can deliver over any rail. For example, the AX1500i has 40A OCP trip points on the 12V PCIe (and CPU) connectors in multi-rail mode... that's 480W per PCIe connector... but who uses multi-rail mode anyway. This is Overclock.net, not safepower.net.


----------



## Jpmboy

J7SC said:


> I just joined OCN after visiting this thread for a while for useful upgrade info while sitting on the fence re. RTX purchase... so I bit the bullet and added a RTX 2080Ti (Gigabyte Aorus Etreme WB). First installation runs on an older Z170, 6700K-4.7-DDR3866 yesterday as that has the only 4K (tele), but will migrate it into a new system I'm building for work and play soon after Christmas.
> 
> Given the RTX temp issues and their reported impact on performance in this thread and elsewhere, I'm glad I went for the factory water block version. So far, temps have not exceeded mid-40s C with 21 C ambient though I am not adding voltage yet per EVGA Prec1. I am surprised at the amount of watts it seems to pull though, per GPUz below. Is that kind of wattage 'normal' at stock volts ?


Open GPU-Z and navigate to the Advanced > NVIDIA BIOS tab; the BIOS power limit is shown there.


----------



## keikei

What would you guys recommend as a good 'stock' card? Plug & play basically.


----------



## Edge0fsanity

keikei said:


> What would you guys recommend as a good 'stock' card? Plug & play basically.


If I was going to buy a card right now and leave it stock, it would be the FTW3: decent cooler, 373W BIOS, binned chip.


----------



## Aurosonic

Jpmboy said:


> guys - the ATX specification (as posted earlier) is the min reqs for the physical connectors and wire gauge. That's all. (period) NOT a limit on what the PSU can deliver over any rail. For example, the AX1500i has 40amp breaks on the 12V PCIE (and CPU) connectors in multirail mode... that's 480W per pcie connector... but who uses multirail mode anyway. This is Overclock.net, not safepower.net.


Thanks. Are there any custom BIOSes that raise the PL and voltage for the Aorus Xtreme?


----------



## Jpmboy

unless you can chill the card below 20C, any higher PL is meaningless since the clocks temp throttle.


----------



## kot0005

Okay, I gave BFV another go. I figured out why it was crashing: it's a setting in RivaTuner.

PG27UQ, 2100MHz/7730MHz 2080 Ti, 2800MHz RAM and a 5GHz 8700K is my setup.

DX12 works perfectly now, and at 4K I get a stable 98fps at 98Hz refresh with dips to 70 during some explosions; it's probably pushing over 100. The game looks stunning at 4K with proper HDR. The animations suck though, and the ragdoll physics are so broken.

Then I turned on DXR at 1440p and set my refresh to 60Hz, but it goes down to like 37fps, and 50-55fps a lot of the time. The weird thing is the GPU usage was only 50%... at 4K with no DXR I was getting 98-99% GPU usage.

At 1440p with DXR my GPU usage was low, and the clock would even drop to 1995MHz, all the way down from 2085-2100MHz at 1062mV. So the RT cores must not be keeping the raster cores busy... which is like super bad.

Even my temps go down to 36C at 1440p, in contrast to 40-45C at 4K.


----------



## Aurosonic

del


----------



## arrow0309

kot0005 said:


> Okay, I gave BFV another go. I figured out why it was crashing. Its a setting in rivatuner.
> 
> PG27UQ, 2100Mhz/7730Mhz 2080Ti, 2800Mhz Ram and 5Ghz 8700k is my setup.
> 
> DX12 works perfectly now and at 4k, I get stable 98fps at 98Hz refresh with dips to 70 during some explosions. Its probably pushing over a 100. The game looks stunning at 4k and proper hdr. The animations suck tho and the ragdoll physics are so broken.
> 
> Then I turned on DXR at 1440p and I set my refresh to 60hz but it will go down to like 37fps and 50-55fps a lot of the time. The weird thing is the GPU usage was only 50%....at 4k no dxr I was getting 98 -99% gpu usage.
> 
> at 1440p with DXR my gpu usage was low and the clock would even go down to 1995mhz all the way from 2085-2100Mhz at 1062mV. So the RT cores must not be populating the raster cores...which is like super bad.
> 
> Even my temps go down to 36c at 1440p in contrast to 40-45c at 4k.


Hi, I'm playing at 3440x1440, DXR low, chromatic aberration, film grain, lens flare and vignette disabled, Vsync off /120hz (game, always), Vsync On (nvidia driver), 2080ti FE in oc at 2085 /7500 @40-41C and it's steady, never lower than the 2085 clock (it was 2070 before with the stock bios). 
And getting ~60 - 70 fps (50 - 90 min / max).


----------



## VPII

arrow0309 said:


> Hi, I'm playing at 3440x1440, DXR low, chromatic aberration, film grain, lens flare and vignette disabled, Vsync off /120hz (game, always), Vsync On (nvidia driver), 2080ti FE in oc at 2085 /7500 @40-41C and it's steady, never lower than the 2085 clock (it was 2070 before with the stock bios).
> And getting ~60 - 70 fps (50 - 90 min / max).


I can concur... I get basically the same, but my res is 2560x1440 and my clocks are a little lower. Still, with the new Nvidia driver it really works fairly well, I'd say, but I'll be honest when I say I struggle to see the difference between DXR ON and OFF. But hey, maybe it's old age and I'm just blind to some things.


----------



## djchup

kot0005 said:


> Okay, I gave BFV another go. I figured out why it was crashing. Its a setting in rivatuner.
> 
> PG27UQ, 2100Mhz/7730Mhz 2080Ti, 2800Mhz Ram and 5Ghz 8700k is my setup.
> 
> DX12 works perfectly now and at 4k, I get stable 98fps at 98Hz refresh with dips to 70 during some explosions. Its probably pushing over a 100. The game looks stunning at 4k and proper hdr. The animations suck tho and the ragdoll physics are so broken.
> 
> Then I turned on DXR at 1440p and I set my refresh to 60hz but it will go down to like 37fps and 50-55fps a lot of the time. The weird thing is the GPU usage was only 50%....at 4k no dxr I was getting 98 -99% gpu usage.
> 
> at 1440p with DXR my gpu usage was low and the clock would even go down to 1995mhz all the way from 2085-2100Mhz at 1062mV. So the RT cores must not be populating the raster cores...which is like super bad.
> 
> Even my temps go down to 36c at 1440p in contrast to 40-45c at 4k.


Which setting in rivatuner?


----------



## mistershan

GraphicsWhore said:


> Where's the extra VRAM coming into play though? Given that editing depends so much on the other main components - CPU, RAM, storage - I have a hard time believing the Titan is worth twice the price if you're a video editor. The official page mentions 8K editing. Not to say that if you don't do 8K then it's pointless; just that the 2080Ti is almost certainly going to be more than enough.
> 
> I think this is more suitable for machine learning and AI.
> 
> You could wait for some benchmarks I guess.


Yea but I wonder if that 8k applies to real time bv

Do you think I should wait to do a full rebuild, or sell off my 2x 1080s and get a 2080 Ti? Or would my older parts negate any gains I would get from it? 

i7 5820K 
2x Asus Strix 1080 SLI 
64GB Crucial DDR4 2400 DIMM 
Asus Rampage V Extreme ATX 2011-E 
Crucial M500 960GB SSD 
Corsair AX1200i Digital ATX PSU


----------



## Jpmboy

kot0005 said:


> Okay, I gave BFV another go. I figured out why it was crashing. Its a setting in rivatuner.
> 
> PG27UQ, 2100Mhz/7730Mhz 2080Ti, 2800Mhz Ram and 5Ghz 8700k is my setup.
> 
> DX12 works perfectly now and at 4k, I get stable 98fps at 98Hz refresh with dips to 70 during some explosions. Its probably pushing over a 100. The game looks stunning at 4k and proper hdr. The animations suck tho and the ragdoll physics are so broken.
> 
> Then I turned on DXR at 1440p and I set my refresh to 60hz but it will go down to like 37fps and 50-55fps a lot of the time. The weird thing is the GPU usage was only 50%....at 4k no dxr I was getting 98 -99% gpu usage.
> 
> at 1440p with DXR my gpu usage was low and the clock would even go down to 1995mhz all the way from 2085-2100Mhz at 1062mV. So the RT cores must not be populating the raster cores...which is like super bad.
> 
> Even my temps go down to 36c at 1440p in contrast to 40-45c at 4k.


great post.... if you tell us the riva tuner thing. 


VPII said:


> I can concur... I get basically the same but my res is 2560 x 1440 and my clocks little lower. Still with the new Nvidia driver it really works fairly well I'd say but I'll be honest when I say I struggle to see the difference between DXR ON and OFF. But hey, maybe it is old age and I'm just blind for some things.


Same here. I think DXR is a visual placebo at this point in its implementation.


djchup said:


> Which setting in rivatuner?


^^ this!


----------



## GraphicsWhore

mistershan said:


> Yea but I wonder if that 8k applies to real time bv
> 
> Do you think I should wait to do a full rebuild or sell off my 2 x 1080s and get a 2080ti? Or would my older parts negate any gains I would get from it?
> 
> i7 5820k
> 2 x Asus Strix 1080s SLI
> 64gb Crucial DDR 4 2400 DIMM
> Asus Rampage V Extreme ATX2011E
> Crucial SSD 960 gb M500
> Corsair AX1200I Digital ATX PSU


5820k is still solid. A 2080Ti with those parts is going to run great.



VPII said:


> I can concur... I get basically the same but my res is 2560 x 1440 and my clocks little lower. Still with the new Nvidia driver it really works fairly well I'd say but I'll be honest when I say I struggle to see the difference between DXR ON and OFF. But hey, maybe it is old age and I'm just blind for some things.


I played for a couple of hours last weekend and I was thinking, "Damn, new drivers have made the performance with DXR incredible. And it looks so good!" Then I checked settings and realized at some point I had turned DXR off.

Like most FPSs, BFV is a relatively frantic game where you're running around dodging bullets, so naturally you're unlikely to notice the difference between DXR Ultra/Low or even Off in motion, and you're not going to stand around staring at reflections unless you want to die every two seconds. In many ways it's not a great showcase for the effects, because the gameplay style isn't conducive to stopping to look at and appreciate them.

Let's get a next-gen MYST made from scratch using RTX - there's a game that would be a great fit.


----------



## arrow0309

Jpmboy said:


> great post.... if you tell us the riva tuner thing.
> 
> ... ...
> 
> ^^ this!


Lol 
Good old years! :specool:

PS:
He was probably (still) thinking of Afterburner as Alexey Nicolaychuk's RivaTuner legacy.



Jpmboy said:


> ... ...
> same here. I think DXR is a visual placebo effect thing at this point in implementation.
> ... ...


I wouldn't be so sure about that; I see a lot of difference. Actually, I won't ever play Battlefield without RTX anymore.


----------



## arrow0309

GraphicsWhore said:


> 5820k is still solid. A 2080Ti with those parts is going to run great.
> 
> 
> 
> I played for a couple of hours last weekend and I was thinking, "Damn, new drivers have made the performance with DXR incredible. And it looks so good!" Then I checked settings and realized at some point I had turned DXR off.
> 
> Like most FPSs BFV is a relatively frantic game where you're running around dodging bullets, so naturally you're unlikely to notice the difference between DXR Ultra/Low or even Off in motion, and you're not going to stand around staring at reflections unless you want to die every 2 seconds. In many ways it's not a great game to showcase the effects because the game-play style isn't conducive to being able to look at and appreciate them.
> 
> Let's get a next-gen MYST made from scratch using RTX - there's a game that would be a great fit.


Sometimes I just wanna die, but enjoy the graphics. 
Sometimes I even have plenty of time to enjoy it and stay alive.
The rest of the time (90%) I'll probably notice it less, but it's still good enough to make me love it.


----------



## dVeLoPe

https://www.evga.com/products/product.aspx?pn=11G-P4-2282-KR

I have this card for sale; it's still brand new (I got it from EVGA Step-Up). If anyone wants it, let me know. I'd like to sell it to buy a Titan RTX.


----------



## kot0005

djchup said:


> Which setting in rivatuner?


Custom Direct3D support; I had to turn that off.

https://imgur.com/a/9VhAw8l these are my settings



arrow0309 said:


> Hi, I'm playing at 3440x1440, DXR low, chromatic aberration, film grain, lens flare and vignette disabled, Vsync off /120hz (game, always), Vsync On (nvidia driver), 2080ti FE in oc at 2085 /7500 @40-41C and it's steady, never lower than the 2085 clock (it was 2070 before with the stock bios).
> And getting ~60 - 70 fps (50 - 90 min / max).


I have DXR on Ultra. The difference is massive, but most of the time you're looking for players and at your gun, not the other stuff. Some of the non-DXR cube maps in water puddles are questionable lol. They look way worse than in BF1, with lots of noise and pixelated cube maps.


----------



## Aurosonic

Checked the stock BIOS power limit for the Gigabyte Aorus 2080 Ti Xtreme Waterforce 11G: 366W (for the thread header info)


----------



## kot0005

GraphicsWhore said:


> 5820k is still solid. A 2080Ti with those parts is going to run great.
> 
> 
> 
> I played for a couple of hours last weekend and I was thinking, "Damn, new drivers have made the performance with DXR incredible. And it looks so good!" Then I checked settings and realized at some point I had turned DXR off.
> 
> Like most FPSs BFV is a relatively frantic game where you're running around dodging bullets, so naturally you're unlikely to notice the difference between DXR Ultra/Low or even Off in motion, and you're not going to stand around staring at reflections unless you want to die every 2 seconds. In many ways it's not a great game to showcase the effects because the game-play style isn't conducive to being able to look at and appreciate them.
> 
> Let's get a next-gen MYST made from scratch using RTX - there's a game that would be a great fit.


RTX will be good in Metro Exodus, and maybe Anthem if they implement it right. Anthem looks like a craphole atm tho… the last vid was showing 10-15 fps, and even the campfire smoke in the last part of the vid looked like it was in slow motion...


----------



## mackanz

A bunch of Gigabytes in stock here in Norway all of a sudden, both Gaming OC and Windforce. The review of the Gaming OC was so-so, and I took a chance on the Windforce due to it having a custom PCB (it's longer than the Gaming OC and has a 750W PSU recommendation).


----------



## ESRCJ

For those with Samsung GDDR6 on your cards, how far were you able to overclock the memory?


----------



## J7SC

Thanks for your comments/replies* arrow0309, lolhaxz, Aurosonic and Jpmboy*...


...yes, stock BIOS (thus my wattage question). I've had other cards years back with custom BIOS that pulled far more than that via dual 8-pin + PCIe slot, but this Aorus 2080 Ti Xtreme WB is just a few days old, stock everything on the GPU.

:Snorkle:
...cooling setup is effective but ultra quiet (this system is in the bedroom for now, connected to a 4K TV). New XSPC RX360 rad just for the GPU, with a Swiftech 655 pump on 4/5 speed and 3x 120mm Noctua *CPU* fans connected to the 'CPU Opt' PWM header. The 6700K CPU itself has a 240mm AIO. 

...wondering whether to upgrade this system to a 9900K and keep it separate to give this GPU even more oomph, or go for an X299 or Zen+/2 build in the dedicated home office space (where the 55in TV won't really make sense, but all my other systems are).


----------



## mistershan

GraphicsWhore said:


> 5820k is still solid. A 2080Ti with those parts is going to run great.


An i9 9900K wouldn't blow it out of the water? How much do you think I could get for my current build used? I was thinking of doing a full upgrade so I can get a mobo with USB-C ports as well as a better CPU for work. Not sure if it's smarter to just sell the 1080s and get a 2080 Ti, then wait a year or two to do a full upgrade, or to just sell the whole thing as one piece and upgrade every couple of years.


----------



## PhantomTaco

Apologies if this has been answered in the past but it's a bit hard to search this thread for the question I have:

I was hoping to see if anyone has confirmed whether or not flashing an FE card with a custom BIOS from a card with an RGB LED actually lets you change the LED color on the FE cooler.


----------



## Asmodian

GraphicsWhore said:


> Are you actually drawing 520w?


Not sustained, but I am pulling >400W. I have seen it read 271W in hwinfo, which should be ~440W actual. I think it was hitting 520W only very briefly, because madVR is a very spiky load. On a Kill-A-Watt I see, with madVR paused, my system pulling 205W (GPU idle and clocked down, probably only pulling 30W); with madVR unpaused, the system pulls 620W. madVR uses very little CPU at these settings, so I think all of the change in power is the GPU: +415W, for ~450W total GPU power draw. With madVR at these settings I do sometimes see the power limit trigger with the limit at 100%, but not with the limit at 123%.


Edit: The Kill-A-Watt tracks well with what hwinfo reports for power use x1.625, so I do believe my mod worked as expected and I can use hwinfo to get a fairly accurate value for power draw. Thanks for poking me to actually do the sanity check.
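The shunt-mod arithmetic above can be sanity-checked in a few lines. The numbers are taken from this post; the 1.625x multiplier is specific to this particular mod, not a general constant:

```python
# Sketch of the power sanity check described above. Assumptions: the shunt mod
# makes software under-report GPU power by a fixed factor of 1.625 (this post's
# value), and the wall-meter delta between idle and load is almost all GPU.

SHUNT_FACTOR = 1.625  # reported-to-actual multiplier after the mod

def actual_gpu_power(reported_watts: float) -> float:
    """Scale the software-reported wattage back to the estimated real draw."""
    return reported_watts * SHUNT_FACTOR

print(round(actual_gpu_power(271)))  # hwinfo's 271W is really about 440W

# Cross-check against the Kill-A-Watt: load minus idle at the wall.
print(620 - 205)  # 415W swing, attributed almost entirely to the GPU
```

The two numbers agreeing to within a few percent is what makes the hwinfo x1.625 reading trustworthy.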


----------



## MrTOOSHORT

gridironcpj said:


> For those with Samsung GDDR6 on your cards, how far were you able to overclock the memory?


I received my card today, purchased from the Nvidia store Nov 30th. My card has Samsung chips like I hoped. I can max out AB at +1000 no problem.


----------



## ESRCJ

My Strix with Micron can do +1140MHz using Precision. How far can you push yours with Precision? I'm curious because I've heard of people pushing it to +1500MHz.


----------



## Jpmboy

MrTOOSHORT said:


> I received my card today, purchased from the Nvidia store Nov 30th. My card has Samsung chips like I hoped. I can max out AB +1000 no problem.


I think AB tops out at +1000. Try the Galax Xtreme Tuner RTX beta:
http://www.galax.com/en/xtreme-tuner-plus-rtx


----------



## Jpmboy

kot0005 said:


> *custom direct 3d support, had to turn that off*
> 
> https://imgur.com/a/9VhAw8l these are my settings
> 
> 
> 
> I have DRX on Ultra. The difference is massive but most of the time you are looking for players and your gun not the other stuff. Some of the non DXR cube maps in water puddles are questionable lol. They look 1000% worse than in BF 1 , like with lots of noise, pixelated cube maps.



thanks! :thumb:


----------



## ESRCJ

What's your max stable core clock speed at 1.093V? I'm curious to know if my Strix is par for the course or above average. I can do 2175MHz at 1.093V stable.


----------



## iRSs

gridironcpj said:


> What's your max stable core clock speed at 1.093V? I'm curious to know if my Strix is par for the course or above average. I can do 2175MHz at 1.093V stable.


2175MHz at 1.087V (I cannot get to 2190 even with 1.093V).
Palit 2080 Ti Dual *Non-A*


----------



## ESRCJ

iRSs said:


> 2175mhz 1.087v (i cannot go to 2190 even with 1.093)
> palit 2080 ti dual *Non-A* )


Nice!


----------



## MrTOOSHORT

gridironcpj said:


> My Strix with Micron can do +1140MHz using Precision. How far can you push it with Precision? I'll curious because I've heard of people pushing it to +1500MHz.



Tops out so far at +1400 for Time Spy:

https://www.3dmark.com/3dm/30956318



Jpmboy said:



> I think AB tops out at 1000. try the galax Extreme tuner rtx beta
> http://www.galax.com/en/xtreme-tuner-plus-rtx.


Thanks.


----------



## ESRCJ

MrTOOSHORT said:


> Tops out so far @+1400 for TimeSpy:
> 
> https://www.3dmark.com/3dm/30956318
> 
> Thanks.


+1400 is really good! 

I saw the FTW3 pop up on EVGA's site along with their Hydro Copper block. I couldn't resist lmao. Ordered. Returning the Strix. I figured if my card starts running Space Invaders, I'd rather not deal with Asus's terrible customer service. EVGA is top notch in this regard from my prior experiences. Maybe if I'm lucky, my FTW3 will have Samsung GDDR6.


----------



## Zammin

MrTOOSHORT said:


> I received my card today, purchased from the Nvidia store Nov 30th. My card has Samsung chips like I hoped. I can max out AB +1000 no problem.





gridironcpj said:


> My Strix with Micron can do +1140MHz using Precision. How far can you push it with Precision? I'll curious because I've heard of people pushing it to +1500MHz.


I see a lot of people getting +1000 or so on the memory with the 2080 Ti, but my Micron-equipped FE couldn't do more than +600-700 with the stock power limit. Did you need to raise the power limit to get +1000?

I didn't even test the memory on my Samsung equipped model because it came with a faulty fan.


----------



## ESRCJ

Zammin said:


> I see a lot of people getting +1000 or so on the memory with the 2080Ti, but my Micron equipped FE couldn't do more than +600-700 with the stock power limit. Did you need to raise the power limit to get +1000?
> 
> I didn't even test the memory on my Samsung equipped model because it came with a faulty fan.


When I was running the Strix on air, I couldn't do more than +1000. When I put the water block on, I could hit +1140. That seems odd, though, considering I never thought the memory was thermally limited under these beefy air coolers that weigh more than a PSU.


----------



## MrTOOSHORT

Zammin said:


> I see a lot of people getting +1000 or so on the memory with the 2080Ti, but my Micron equipped FE couldn't do more than +600-700 with the stock power limit. Did you need to raise the power limit to get +1000?
> 
> I didn't even test the memory on my Samsung equipped model because it came with a faulty fan.


Just got the card today so not much testing, still on air too. I put the pl to 123% max, cranked the fan to 100% and let TS run. I'll stick the block on this weekend.


----------



## kx11

Just in case someone wants the EVGA FTW3 Ultra Hydro Copper:


https://www.evga.com/products/product.aspx?pn=400-HC-1489-B1


----------



## J7SC

MrTOOSHORT said:


> Tops out so far @+1400 for TimeSpy:
> 
> *https://www.3dmark.com/3dm/30956318*
> 
> 
> 
> 
> Thanks.



Very nice result ! :thumb:


BTW, how can I determine the memory maker on my card (without taking it apart; it's already in a loop with a factory water block etc.)? Is there a software app? I checked the OP. :thinking:


----------



## MrTOOSHORT

Thx.

Use GPU-Z:


----------



## J7SC

MrTOOSHORT said:


> Thx.
> 
> Use gpuz:



Tx...doh :doh: I forgot about that :doh:


----------



## khunpunTH

Which BIOS has the highest power limit? Right now I use the Galax 380W BIOS. 
I did 2205MHz in Time Spy with some cold water:
https://www.3dmark.com/spy/5223219

I've already done some DICE runs and got around 2300MHz, but FPS didn't improve at all,
so I think the power limit is the problem.
Is there any way to improve my score?

Thanks.


----------



## GraphicsWhore

khunpunTH said:


> which bios has most power limit? now i use galax bios 380w.
> I did 2205 mhz on time spy with some cold water.
> https://www.3dmark.com/spy/5223219
> 
> I've already did some DICE and got around 2300 mhz but FPS not improve at all.
> So i think power limit is the problem.
> Are there anyway to improve my score?
> 
> thanks.


Someone here was talking about a 400-some-watt BIOS, but I never saw a link. At 2230 I'd be amazed if your ceiling was much higher, but I'd love to see you try all the same. As you mentioned, though, it doesn't necessarily mean your score will be higher.


----------



## CallsignVega

It will be interesting to see how a 380W-BIOS 2080 Ti compares against the 280W, OEM-only-BIOS RTX Titan.

With both water-cooled, I think the RTX Titan may only be ~5% faster. What say you guys? This may be the first Titan I've ever sat out on.


----------



## Zemach

Core +190, mem +1350, max boost 2205, Galax 380W BIOS.


----------



## arrow0309

gridironcpj said:


> What's your max stable core clock speed at 1.093V? I'm curious to know if my Strix is par for the course or above average. I can do 2175MHz at 1.093V stable.


How are you setting this voltage? 
I've tried with the MSI AB curve once, but maybe I wasn't doing it right. 

Sent from my ONEPLUS A5010 using Tapatalk


----------



## J7SC

I switched to my Windows 7 drive and MSI AB... "stable" in the GPU-Z render test at 2235MHz with the all-stock Gigabyte BIOS and 21C ambient on regular water. Still, I doubt it could handle Time Spy at this speed, but it's good to know that with a custom BIOS or a chiller etc. the GPU would have some headroom. 

Also, the card seems to run 4-5C cooler in Win7 in the GPU-Z render test compared to Win10. Best stable memory (Micron) is 2060 (freezes at 2075) with Precision X1 on Win10. That means 2025-2030 memory for benching for efficiency, though I have to actually test that out.


----------



## Aurosonic

My Micron memory overclocks to +1350 max for benchmarks, with some artifacts though. Hard crash at +1370, and stable with no artifacts at +1200 for daily use. Ambient temp is 26C, water 28C; the GPU never goes higher than 42C after 1 hour of Superposition 8K Optimized. Unlucky with the core though; I can only get 2100 at ambient temps. Gonna try some cold water for better results.


----------



## shadow85

Just got my Sea Hawk X. Guys, what fan configuration do you use?

1. Sea Hawk fan on the front of the case as intake
Or
2. Sea Hawk fan on the rear of the case as exhaust

And do you use dual fans in a push/pull configuration?


----------



## ESRCJ

arrow0309 said:


> How are you doing this voltage?
> I've tried with the msi ab's curve once but maybe I wasn't doing right.
> 
> Sent from my ONEPLUS A5010 using Tapatalk


In Afterburner, open the VF curve and move the 1.093V point to the desired clock speed, then hit L and apply. This will lock your core clock to that speed and voltage unless you hit the power limit or temps pass a certain threshold. The core clock always takes the minimum voltage for a given clock according to the VF curve. At stock, the VF curve is flat after 1.05V, so you will never pass 1.05V unless you raise the clock at the 1.093V point above every voltage point below it.
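The "minimum voltage for a given clock" behaviour can be illustrated with a toy model. The curve points below are made up for illustration; this is not NVIDIA's actual boost algorithm:

```python
# Toy V/F curve: voltage (V) -> max clock (MHz) allowed at that voltage.
# Hypothetical values, chosen only to mirror the flat-after-1.05V shape
# described above.
vf_curve = {1.043: 2070, 1.050: 2085, 1.087: 2085, 1.093: 2085}

def voltage_for_clock(curve: dict, target_mhz: int):
    """Lowest voltage whose curve point reaches target_mhz, or None."""
    candidates = [v for v, mhz in curve.items() if mhz >= target_mhz]
    return min(candidates) if candidates else None

# With the flat stock curve, 2085 MHz is already reachable at 1.050 V,
# so the card never asks for 1.093 V:
print(voltage_for_clock(vf_curve, 2085))  # 1.05

# Raise only the 1.093 V point above everything below it, as described:
vf_curve[1.093] = 2175
print(voltage_for_clock(vf_curve, 2175))  # 1.093
```

This is why just dragging the 1.093V point up does nothing until it sits above every lower-voltage point: until then, a cheaper voltage already satisfies the requested clock.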


----------



## reflex75

kot0005 said:


> Okay, I gave BFV another go. I figured out why it was crashing. Its a setting in rivatuner.
> 
> PG27UQ, 2100Mhz/7730Mhz 2080Ti, 2800Mhz Ram and 5Ghz 8700k is my setup.
> 
> DX12 works perfectly now and at 4k, I get stable 98fps at 98Hz refresh with dips to 70 during some explosions. Its probably pushing over a 100. The game looks stunning at 4k and proper hdr. The animations suck tho and the ragdoll physics are so broken.
> 
> Then I turned on DXR at 1440p and I set my refresh to 60hz but it will go down to like 37fps and 50-55fps a lot of the time. The weird thing is the GPU usage was only 50%....at 4k no dxr I was getting 98 -99% gpu usage.
> 
> at 1440p with DXR my gpu usage was low and the clock would even go down to 1995mhz all the way from 2085-2100Mhz at 1062mV. So the RT cores must not be populating the raster cores...which is like super bad.
> 
> Even my temps go down to 36c at 1440p in contrast to 40-45c at 4k.


This is bad news!
It means RTX cards are unbalanced hardware: the RT cores are too weak and struggle to keep up with the rest...


----------



## profundido

gridironcpj said:


> What's your max stable core clock speed at 1.093V? I'm curious to know if my Strix is par for the course or above average. I can do 2175MHz at 1.093V stable.


what timespy score does that result in if I may ask ?


----------



## ENTERPRISE

shadow85 said:


> Just got my sea hawk x, guys what fan configuration do you use?
> 
> 1. Sea Hawk fan on front of case as intake
> Or
> 2. Sea hawk fan on rear of case for exhaust
> 
> And do u use dual fans as push/pull configuration?


Where possible, I would recommend not having them as intakes. If you can mount it to the rear or top of the case as an exhaust, that would be best. A push/pull configuration MAY yield better results, but that is not a given. Your best bet is to use the stock configuration as exhaust and check what your temps are like.


----------



## shadow85

Thank you enterprise. Will check it out


----------



## ESRCJ

profundido said:


> what timespy score does that result in if I may ask ?


It never could achieve 1.093V in Timespy, as it would hit the power limit further down the VF curve. Here was my highest score:

https://www.3dmark.com/spy/5161184

This was with the 380W Galax BIOS, although it did not do much, as the Strix has a custom PCB and does not mesh well with other cards' BIOSes. A shunt mod would be ideal on the Strix.


----------



## shadow85

Is there supposed to be plastic stuck onto the GPU/dragon logo part on the Sea Hawk X model?

I've been trying to remove this plastic, but it goes all the way into the bloody card; I can't reach it unless I open her up.

Don't tell me this is poor quality control from MSI, because if it is, it will be the last card I buy from them.


----------



## arrow0309

gridironcpj said:


> In Afterburner, open the VF curve and move the 1.093V point to the desired clock speed, then hit L and apply. This will lock your core clock to that speed and voltage unless you hit the power limit or temps pass a certain threshold. The core clock always takes the minimum voltage for a given clock according the the VF curve. At stock, the VF curve is flat after 1.05V, so you will never pass 1.05V unless you increase the clock at 1.093V above every voltage point below it.


OK, thanks. 
But can I then save the (stable) OC / profile or would I have to do it every time? 
Also during the test can you close the curve window?


----------



## TraktorXD

Maybe try flashing a HOF OC Lab BIOS; it's 450W. But those have a custom PCB as well, so I don't know if it 100% works with a Strix PCB.


----------



## shadow85

Hmm, my Sea Hawk X is reaching 1950MHz and 69°C in AC Odyssey 4K benchmarks. Does this temp seem right for a water-cooled card?

Maybe an incorrect installation?
Also, the manual states to connect the radiator fan wire to SYSFAN1 on the motherboard. Do I have to do this, or can I leave the fan wires connected to the GPU as they come from the packaging?


----------



## vmanuelgm

BFV Chapter One Overture Raytracing Enabled Ultra/High/Medium @3440x1440p (Ultra settings) @2080Ti @Rotterdam (the city where I was born, xDDDD)


----------



## profundido

shadow85 said:


> Just got my sea hawk x, guys what fan configuration do you use?
> 
> 1. Sea Hawk fan on front of case as intake
> Or
> 2. Sea hawk fan on rear of case for exhaust
> 
> And do u use dual fans as push/pull configuration?


1. For achieving the coldest temps on your graphics card (in case you want max overclock and power draw and thermals are your bottleneck), since it puts the coldest possible (ambient) air directly onto your Sea Hawk radiator. It does, however, warm the air inside the case for the other components and radiators.

2. In case graphics card temps are no problem at all and no thermal throttling is happening. You'll be putting already-heated air from inside the case onto the Sea Hawk radiator, leaving more cold air for the other components inside the case.

Push in both cases, so fan before the radiator, even in the exhaust scenario. Do the test yourself to see why push is better than pull if you have doubts. Also consider push/pull on the graphics card fan for max results.


----------



## profundido

shadow85 said:


> Hmm, my sea hawk x is reaching 1950MHz and 69°C on AC Odyssey 4K benchmarks. Does this temp seem rite for a water cooled card?
> 
> Maybe incorrect installation?
> Also the manual states to connect radiator fan wire to SYSFAN1 on motherboard, do I have to do this or can I leave the fan wires connected normally to the GPU as it is from the packaging.


Move the Sea Hawk radiator to front intake with the fan in push (I would add a second one for push/pull). I can't find a manual on its support page, so you'll have to check yourself, but if the card comes with an extra integrated fan header to control the radiator fan in addition to the blower fan on the card itself, use it: it will let both fans' speeds be controlled from within Afterburner together with the power settings of your choice.

Alternatively, if the fan header on the board only supports the integrated blower fan on the card itself, connect the radiator fan to any fan connector on your motherboard, e.g. SYSFAN 1/2/3. Since the card comes with a single 120mm AIO radiator which you cannot extend (unlike a custom water loop), you'll have to set the speed high, or the fan curve very aggressive, since it will be hard to keep temps down compared to a custom loop. Be prepared for noise, as is always the case with AIO-cooled graphics cards.


----------



## ESRCJ

arrow0309 said:


> gridironcpj said:
> 
> 
> 
> In Afterburner, open the VF curve and move the 1.093V point to the desired clock speed, then hit L and apply. This will lock your core clock to that speed and voltage unless you hit the power limit or temps pass a certain threshold. The core clock always takes the minimum voltage for a given clock according the the VF curve. At stock, the VF curve is flat after 1.05V, so you will never pass 1.05V unless you increase the clock at 1.093V above every voltage point below it.
> 
> 
> 
> OK, thanks.
> But can I then save the (stable) OC / profile or would I have to do it every time?
> Also during the test can you close the curve window?

You can close the VF curve window once it's applied. Sometimes I keep it open to watch the current point do the power limit dance; the choreography is top notch.

As for saving a profile, the curve can be a bit finicky. You'll want to start by choosing your 1.093V frequency, then 1.087V (I think that's the next one down), and so on until you've worked your way down as far as you want. Sometimes after hitting apply, the curve will shift some points up and down against your will, which is annoying. 

A few others mentioned Galax or Gigabyte software that seems to lock things in a bit better. Browse back up the thread to find them; Jpm mentioned it at least once.


----------



## vmanuelgm

shadow85 said:


> Hmm, my sea hawk x is reaching 1950MHz and 69°C on AC Odyssey 4K benchmarks. Does this temp seem rite for a water cooled card?
> 
> Maybe incorrect installation?
> Also the manual states to connect radiator fan wire to SYSFAN1 on motherboard, do I have to do this or can I leave the fan wires connected normally to the GPU as it is from the packaging.


That's a very high temperature for an AIO watercooled card. Something is going wrong there...


----------



## cstkl1

cstkl1 said:


> hmm the low score could be this 406watt bios.. need to investigate
> norm 2115-2130mhz default 330watt bios i get 16400
> this bios 15800.... hmmm
> 
> 
> 
> not ocing mem atm .. first card died. that one did 8000 easy.. heck it can bench up to 8300mhz..
> this card not that good around 7500-7600..


https://www.overclock.net/forum/69-...i-owner-s-club.html#/topics/0?postid=27716280

da trio 406watt bios


----------



## kot0005

how is the new BF V patch ?


----------



## kot0005

vmanuelgm said:


> BFV Chapter One Overture Raytracing Enabled Ultra/High/Medium @3440x1440p (Ultra settings) @2080Ti @Rotterdam (the city where I was born, xDDDD)
> 
> https://youtu.be/sSaTvt6TPw4


lol, the fps is still low on Ultra... I saw dips to 27fps!! 50% perf sounds like BS. Do you have NGX 1.2.7 or 1.3.0? You should use 1.2.7; it's in your Program Files, in the NVIDIA folder. Delete it if you're using NGX 1.3.0, reinstall the driver that comes with 1.2.7, and test again.


----------



## Jpmboy

Focus on RAM frequency ("+1000")...?? Remember the BIOS has timings for each RAM brand and die. You should check whether the higher frequency is actually faster (I know MrT knows this).
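For the frequency side alone, peak theoretical bandwidth is easy to compute. The sketch below assumes an Afterburner offset maps 1:1 onto the DDR clock (so "+1000" is roughly +2000 MT/s effective on GDDR6), and, as noted above, it says nothing about the timings that determine real-world speed:

```python
# Peak-bandwidth arithmetic for the 2080 Ti's 352-bit GDDR6 bus.
# Assumption: a "+1000" Afterburner offset adds ~2000 MT/s to the
# effective transfer rate; timings are ignored entirely.
BUS_WIDTH_BITS = 352

def bandwidth_gbs(effective_mts: float) -> float:
    """Theoretical peak bandwidth in GB/s for a given transfer rate in MT/s."""
    return BUS_WIDTH_BITS / 8 * effective_mts / 1000

print(bandwidth_gbs(14000))  # stock 14 Gbps -> 616.0 GB/s
print(bandwidth_gbs(16000))  # with "+1000"  -> 704.0 GB/s
```

So the offset buys ~14% more peak bandwidth on paper, which is exactly why a benchmark run is needed: looser per-vendor timings at the higher strap can eat that gain.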


----------



## VPII

kot0005 said:


> lol the fps is still low on ultra...I saw dips to 27 fps!! 50% perf sounds like BS. Do you have NGX 1.2.7 or 1.30 ? you should use 1.2.7 its in ur program files , nvidia folder. Delete it if ur using NGX 1.3.0 and reinstall ur driver that comes with 1.2.7 and test again.


Which driver do you use? I found that with 417.22 I get between 58 (lowest) and 78 (highest) FPS at 1440p using DXR on Ultra.


----------



## GosuPl

kot0005 said:


> how is the new BF V patch ?


Still no SLI + DXR support; time to sell 2x 2080 Ti and buy 1x Titan RTX.

But the patch's performance is great for me.

3440x1440 Ultra + DXR Ultra on a single GPU...

https://scontent-waw1-1.xx.fbcdn.ne...=b0ac2a45acfdb1f6a649d1313618a4df&oe=5CA53422

https://scontent-waw1-1.xx.fbcdn.ne...=6b03e5803c580d2bb2322b7de4bcc7a9&oe=5CA416F5

https://scontent-waw1-1.xx.fbcdn.ne...=1ea29276dfb33bd6c1f5eebe30cb50d9&oe=5C999EF1

https://scontent-waw1-1.xx.fbcdn.ne...=3e0d70d078932d4992146ee664eee184&oe=5CA3E194

https://scontent-waw1-1.xx.fbcdn.ne...=0905a0aacc067637a172688437a38468&oe=5C6A393A

https://scontent-waw1-1.xx.fbcdn.ne...=c1f63e907ee0241e8fbea488fef742b0&oe=5CAE1B47


----------



## Shawnb99

So how do I overclock this card?
I'm using EVGA Precision. I can't set either clock past 500 without my screen going black and forcing me to restart.
Do I have a bad card, or am I doing something wrong?
I've never overclocked a GPU before.
I'm on air now but will be putting it under water soon. But if I can't OC it I'm going to return it.


----------



## GraphicsWhore

Shawnb99 said:


> So how do I overclock this card?
> Using EVGA precision. I can’t set either clock past 500 without my screen going black and forcing me to restart.
> Do I have a bad card or am I doing something wrong.
> Never overclocked a GPU before.
> On air now but will be putting it under water soon. But if I can’t OC it I’m going to return it.


Don't use EVGA's software; use Afterburner. And what do you mean "either clock past 500"? You're trying to add 500 to the core clock?

Raise the power and voltage to maximum, then start with like +500 on memory (+0 on core) and go up from there. When you find your max memory, start raising the core.

If you crash on +500 memory, you lost the silicon lottery.
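For what it's worth, the order described here (max out memory first with the core at +0, then raise the core) can be sketched as a small search helper. This is purely illustrative: the `is_stable` callback stands in for an actual stress-test pass, and the step sizes and limits are made-up numbers, not anything Afterburner exposes.

```python
# Hypothetical sketch of the tuning order described above: find the highest
# stable memory offset first (core at +0), then raise the core offset.
# is_stable() stands in for a real stress-test run; steps/limits are made up.

def find_max_offset(is_stable, start, step, limit):
    """Walk an offset upward from `start` until is_stable fails.
    Returns None if even the starting offset is unstable."""
    if not is_stable(start):
        return None
    offset = start
    while offset + step <= limit and is_stable(offset + step):
        offset += step
    return offset

def tune(mem_stable, core_stable):
    mem = find_max_offset(mem_stable, 500, 50, 1500)   # memory first, from +500
    core = find_max_offset(core_stable, 0, 15, 300)    # then core, from +0
    return core, mem
```

On a card that holds up to +850 on memory and +180 on core, this walks up to exactly those offsets; a card that can't even hold +500 memory (the "lost the silicon lottery" case) returns None for the memory step.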

On an unrelated note: how many here have flashed the Galax HOF 450w BIOS? And if you did, which card do you have? I'm curious to try it on my EVGA XC.


----------



## iRSs

Shawnb99 said:


> So how do I overclock this card?
> Using EVGA precision. I can’t set either clock past 500 without my screen going black and forcing me to restart.
> Do I have a bad card or am I doing something wrong.
> Never overclocked a GPU before.
> On air now but will be putting it under water soon. But if I can’t OC it I’m going to return it.


On the GPU core you will get 100-something MHz more, and memory will go 600+.
Also, you can try OCing it under water, because you will not be thermally limited (at least not as hard as you are on air).

What card are you talking about?

PS: I am still searching for a way to flash non-A GPUs.


----------



## Shawnb99

iRSs said:


> on gpu core you will get 100 something mhz more and memory will go 600+
> 
> also you can try ocing it under water because you will not be thermal limited (at least not as hard as you are on air).
> 
> 
> 
> what card are you talking about?
> 
> 
> 
> ps: i am still searchig for a way to flash non-a gpus




I have an EVGA XC Ultra.

Using Afterburner was much better.
So far I'm able to go as high as 850 memory and just 100 core, but I'm going to keep testing how far I can go.

Glad I didn’t lose the silicon lottery at least. 
Thanks for the help


Sent from my iPhone using Tapatalk


----------



## BudgieSmuggler

Who asked about the Galax HOF BIOS update? I've flashed that BIOS on my 2080 Ti AMP and it's working well. No issues with it. I tried the XC Ultra BIOS also; went back to the HOF one.


----------



## yianni

I have an EVGA XC under water and can run memory +1200 and core +220 using the FurMark stress test, but BF V will only run with a +190 core OC.


----------



## TraktorXD

BudgieSmuggler said:


> Who asked about Galax Hof bios update? I've flashed to that bios on my 2080ti amp and it's working well. No issues with it. I tried to XC Ultra bios also. Went back to the HOF one


If you really flashed this BIOS, version 90.02.0B.00.D7 (HOF OC LAB), that's cool. The question is whether it's worth it.


----------



## kot0005

VPII said:


> Which driver do you use.... I found with the 417.22 I get in 1440P between 58 lowest and 78 highest FPS using DXR on Ultra


417.22, but their latest NGX is 1.2.7. One of the earlier patches sneaked in 1.3.0, apparently by accident. If you have 1.3.0 it won't be overwritten during a driver update.


----------



## yianni

GraphicsWhore said:


> Don't use EVGA's software. Use AfterBurner. And what do you mean "either clock past 500." You're trying to add 500 to the core clock?
> 
> Raise the power and voltage to maximum, then start with like +500 on memory (+0 on core) and go up from there. When you find your max memory, start raising the core.
> 
> If you crash on +500 memory, you lost the silicon lottery.
> 
> On an unrelated note: how many here have flashed the Galax HOF 450w BIOS? And if you did, which card do you have? I'm curious to try it on my EVGA XC.


you on stock BIOS?


----------



## GraphicsWhore

yianni said:


> you on stock BIOS?


No, currently on the Galax 380w.



BudgieSmuggler said:


> Who asked about Galax Hof bios update? I've flashed to that bios on my 2080ti amp and it's working well. No issues with it. I tried to XC Ultra bios also. Went back to the HOF one


I did. Thanks for the info. Gonna give it a go on my XC Gaming.



yianni said:


> i have a an EVGA XC under water and can run memory +1200 and core +220 using furmark stress test, but BF V will only run with +190 core OC


Those are big numbers, congrats. Out of curiosity, have you run any 3DMark? I ask because I'm curious whether the +220 will actually give max results. I can pass Time Spy at +190, but the score is actually lower than at +170.

What's your max core with +220?


----------



## enigmatic23

yianni said:


> i have a an EVGA XC under water and can run memory +1200 and core +220 using furmark stress test, but BF V will only run with +190 core OC


Good deal. Mine under water won't push 2100 in any stress test or game; it can do 2085 core and +1000 mem all day, and it makes me so jealous to see the numbers everyone else is getting. I'm on the Galax BIOS as well, and it hits 370W in Superposition.

Going to try the HOF BIOS later and see what it can do.


----------



## yianni

enigmatic23 said:


> good deal, mine under water will not push 2100 in any stress test or games, can do 2085 core and +1000 mem all day and makes me so jealous to see the numbers everyone else is getting. On the Galax bios as well and hits 370w in superposition.
> 
> Going to try the HOF bios later and see what it can do


I'm still on the stock BIOS, haven't changed it.

3DMark only finished with +190, nothing higher; video score of 16057, total score of 15053.

Core is at 2160 with +220 in FurMark, 8199 memory, 1043 mV.


----------



## Zammin

Jpmboy said:


> focus on ram frequency ("+ 1000")... ?? remember the bios has timings for each ram brand and "die". you should check whether the higher frequency is actually faster (I know MrT knows this)


Is there a way to find out what the timings are for the Micron and Samsung memory chips on these cards?



yianni said:


> i have a an EVGA XC under water and can run memory +1200 and core +220 using furmark stress test, but BF V will only run with +190 core OC





yianni said:


> im still on stock bios, havent changed it.
> 
> 3d mark only finished with 190, nothing higher, video score of 16057, total score of 15053
> 
> core is at 2160 with 220 in furmark, 8199 memory, 1043mv


Damn, that's a nice OC. +150 core and +700 mem is about as good as I can get on my current Micron-equipped FE. I hear a lot of people getting +1000 and above on the memory, so I'm not sure what's up with mine; maybe it's just not as good. With your +1200 on the memory, what's the total frequency? Is it "effective" memory clock or normal?


----------



## J7SC

Aurosonic said:


> My Micron memory overclocks to +1350 max for benchmarks, with some artifacts though. Hard crash at +1370, and stable with no artifacts at +1200 for daily use. Ambient temp is 26, water 28, GPU never goes higher than 42C after 1 hour of Superposition 8K Optimized tests. Unlucky with the core though; I can only get 2100 at ambient temps. Gonna try some cold water for better results.



Hi - I think you have the same Gigabyte Aorus Xtreme WB card. What BIOS date do you have? Mine reports as August 20, 2018 in GPU-Z. I read that Gigabyte updated various other 2080 Ti BIOSes (such as for the Gaming and Windforce OC) back in October, so I am wondering if there's a newer one for our edition. BTW, I checked the Aorus PCB layout vs the Galax HOF and I doubt the Galax HOF BIOS would work without serious hiccups, but I might try anyway.

I fondly remember the days when we could edit and customize our GPU BIOS in 'notepad', such as for the GTX 680. Power target too low? No problem. Never mind the EVBot setup I had. These days, it's all more 'corporate'.


----------



## yianni

Zammin said:


> Is there a way to find out what the timings are for the micron and samsung memory chips on these cards?
> 
> 
> 
> 
> 
> Damn that's a nice OC. +150 core and +700 mem is about as good as I can get on my current micron equipped FE. I hear a lot of people getting +1000 and above on the memory so I'm not sure what's up with mine. Maybe just not as good. With your +1200 on the memory what's the total frequency? is it "effective" memory clock or normal?


What do you mean by total frequency?

Sent from my SM-G930V using Tapatalk


----------



## Zammin

yianni said:


> What do you mean my total frequency?
> 
> Sent from my SM-G930V using Tapatalk


Sorry, I meant: what is your total memory clock speed/frequency with the overclock applied? For example, in Afterburner the FE model shows a stock speed of 7000 MHz on the memory, but some applications show the effective memory clock, which would read 14000 MHz. I was just wondering if your +1200 MHz offset was effective memory clock or normal memory clock.

I generally run +600 on the memory in Afterburner, but if I were to use an application that shows effective memory clock (I think Gigabyte's app does this) it would read +1200.
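Since the normal-vs-effective distinction trips people up, the arithmetic above can be written out explicitly. The 7000 MHz stock figure is the FE number from the post; the doubling is simply how "effective" GDDR6 clocks are reported relative to the figure Afterburner shows.

```python
# The clock bookkeeping described above: Afterburner-style tools show the
# "normal" memory clock (7000 MHz stock on the FE) and apply offsets to it;
# tools that report "effective" clock double both the clock and the offset.

FE_STOCK_MHZ = 7000  # stock memory clock as Afterburner reports it

def normal_clock(offset_mhz):
    """Memory clock with an Afterburner offset applied, in 'normal' terms."""
    return FE_STOCK_MHZ + offset_mhz

def effective_clock(offset_mhz):
    """The same clock expressed as the doubled 'effective' figure."""
    return 2 * normal_clock(offset_mhz)

# +600 in Afterburner -> 7600 MHz normal, 15200 MHz effective, and an
# effective-clock app would display the offset itself as +1200.
```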


----------



## lolhaxz

enigmatic23 said:


> good deal, mine under water will not push 2100 in any stress test or games, can do 2085 core and +1000 mem all day and makes me so jealous to see the numbers everyone else is getting. On the Galax bios as well and hits 370w in superposition.
> 
> Going to try the HOF bios later and see what it can do


IMO it's much more likely that those posting 2160s and such have a vastly different idea of stability testing than you, or are posting suicide runs (obviously there are exceptions).

You've really got to try the full gamut to confirm stability. I can do 2160s in Superposition 4K/8K quite reliably, but Time Spy gets a bit dicey 15-30 MHz earlier, and some games (with even lower utilization) can root out an unstable overclock.

By FAR the norm is about 2085-2100 MHz, so I'd say you have a pretty reasonable card.

Sometimes it can be kind of like Reddit, where everyone has a 5.4 GHz 8700K at 1.2 V.


----------



## kot0005

Also, please post the resolution you use when posting overclock numbers.

Okay, I played BF V for a bit with the new patch and the perf is definitely there; the lowest dip at 1440p Ultra DXR was 57 FPS and it holds 60 FPS steady. I am going to try 4K with 75% resolution scaling next.


----------



## PhantomTaco

Is anyone able to tell me which BIOS is suggested for FE cards and whether or not flashing allows for control of the RGB leds on the FE cooler?


----------



## Zammin

PhantomTaco said:


> Is anyone able to tell me which BIOS is suggested for FE cards and whether or not flashing allows for control of the RGB leds on the FE cooler?


I have my suspicions that the EVGA XC BIOS may allow RGB control on the FE, after JayzTwoCents confirmed that they use the exact same RGB connector in the exact same place on the board when he fit the FE cooler to an XC Ultra PCB and controlled the colour of the logo that way. I made a comment about it way back in this thread but I don't think anyone has tested it yet.

The XC Gaming has the same clock speeds as the FE to my knowledge but allows a slightly higher power target. You could try flashing the XC BIOS to yours to see if you can control the LEDs, and let the rest of us know if it worked, hehe.


----------



## Aurosonic

J7SC said:


> Hi - I think you have the same Gigabyte Aorus extr wb card > what Bios date do you have ? Mine reports as August 20, 2018 in GPUz. I read that Gigabyte updated various other 2080 Ti Bios sets (such as for Gaming, OC Windforce) back in October, so I am wondering if there's a newer one for our edition. BTW, I checked the Aorus PCB layout vs the Galax Hof and I doubt the Galax hof bios would work without serious hiccups, but I might try anyway
> 
> I fondly remember the days when we could edit and customize our GPU Bios in 'notepad' such as for GTX 680s -- Power Target too low ? No problem. Never mind the EVbot setup I had. These days, it's all more 'corporate'


Yeah, same card, same BIOS. Wanna try the HOF BIOS but I'm a bit afraid.


----------



## Aurosonic

Time Spy: 16064, that's my maximum on the stock BIOS with a Gigabyte Aorus Extreme @ 2100/8200 and a 9900K @ 5350.

https://www.3dmark.com/spy/5296879


----------



## kot0005

Aurosonic said:


> Timespy: 16064 scores , thats my maximum for stock bios Gigabyte Aorus Extreme @2100/8200 and 9900k @5350
> 
> https://www.3dmark.com/spy/5296879


How are you running a 9900K at 5.3 GHz without melting your PC?


----------



## Aurosonic

kot0005 said:


> How are you running 9900k at 5.3Ghz without melting ur PC ?


Just a good custom watercooling loop. The CPU is delidded and the TIM replaced with liquid metal under the IHS and under the waterblock. It allows me to cool up to 310W CPU package power.
And I got lucky with the silicon lottery; my default CPU VID is just 0.968 V, so I'm running daily at 5150 [email protected] and 5350 [email protected] for benchmarking.


----------



## enigmatic23

Aurosonic said:


> YEah, same card, same bios. Wann try HOF bios but afraid a bit


Tried the HOF BIOS last night; it flashed successfully on an EVGA XC under water. Once I attempted to overclock, the card never pulled more than 330 watts or so. I was only able to manage 2010-15 on the core, and GPU-Z showed power limited. Afterburner had power and voltage at the max, and temps maxed around 43C during benches.

Ran Superposition 4K, which would usually get me just south of 380 watts: same deal. OC Scanner results showed +23 or so on the core, with power as the main limiter. Quickly reflashed back to the Galax 380W and I'm back in business.


----------



## profundido

Question:

What are the best-performing cards of the bunch right now for me to hunt? Whose card scores, let's say, 16K+ in Time Spy?


----------



## GraphicsWhore

profundido said:


> question:
> 
> what are the best performing cards out of the bunch right now for me to hunt ? Who's card scores let's say 16K+ in Timespy ?


The binned models from the various makers, e.g. the EVGA FTW3, are the ones to shoot for, but that in no way means you're guaranteed 16k in Time Spy.


----------



## profundido

GraphicsWhore said:


> The binned models of the various makers, e.g. EVGA FTW3 are the ones to shoot for but that in no way means you're guaranteed 16k in Timespy.


16k seems to be the "exceptionally high" club right now. The problem is almost everything is out of stock everywhere, so I'll need to know which ones are binned.


----------



## DooRules

profundido said:


> question:
> 
> what are the best performing cards out of the bunch right now for me to hunt ? Who's card scores let's say 16K+ in Timespy ?


Zotac triple fan with the 380W BIOS... over 17k graphics in TS, with cold air assist, lol

http://www.3dmark.com/spy/5294334


----------



## GraphicsWhore

profundido said:


> 16k seems to be the "exceptionally high" club right now. Problem is Almost everyting is out of stock everywhere so I'll need to know which one's are binned


Price is somewhat an indicator if you go by MSRP. The most expensive models of the lineup from each maker are almost certainly going to be binned. I provided you one (FTW3) and I think for EVGA the XC Ultra is as well. I don't know if the XC Gaming (my card) is but it doesn't matter because I got an amazing performer; my graphics score (16611) is higher than the vast majority of benchmarks ranked above me, including the ones which are 16k total.

Keep in mind a total score of 16k also requires a strong CPU score. I frankly somewhat ignore that and pay more attention to the graphics score. Anyway, the above still stands in terms of maximizing your potential for a good performer but as I already mentioned and I'm pretty sure you already knew, even one of the binned models doesn't guarantee anything. You still have to also win the silicon lottery, or at least do really well in it and that's down to luck.

Speaking of which: best of luck on your search and purchase.


----------



## profundido

DooRules said:


> Zotac triple fan with 380w bios... over 17k graphics in TS, with cold air assist, lol
> 
> http://www.3dmark.com/spy/5294334


Impressive!


----------



## profundido

GraphicsWhore said:


> Price is somewhat an indicator if you go by MSRP. The most expensive models of the lineup from each maker are almost certainly going to be binned. I provided you one (FTW3) and I think for EVGA the XC Ultra is as well. I don't know if the XC Gaming (my card) is but it doesn't matter because I got an amazing performer; my graphics score (16611) is higher than the vast majority of benchmarks ranked above me, including the ones which are 16k total.
> 
> Keep in mind a total score of 16k also requires a strong CPU score. I frankly somewhat ignore that and pay more attention to the graphics score. Anyway, the above still stands in terms of maximizing your potential for a good performer but as I already mentioned and I'm pretty sure you already knew, even one of the binned models doesn't guarantee anything. You still have to also win the silicon lottery, or at least do really well in it and that's down to luck.
> 
> Speaking of which: best of luck on your search and purchase.


Thanks. Luck is, alas, the one thing I never seem to have. Hence my trying to factor it out as much as possible, lol.


----------



## yianni

GraphicsWhore said:


> Price is somewhat an indicator if you go by MSRP. The most expensive models of the lineup from each maker are almost certainly going to be binned. I provided you one (FTW3) and I think for EVGA the XC Ultra is as well. I don't know if the XC Gaming (my card) is but it doesn't matter because I got an amazing performer; my graphics score (16611) is higher than the vast majority of benchmarks ranked above me, including the ones which are 16k total.
> 
> Keep in mind a total score of 16k also requires a strong CPU score. I frankly somewhat ignore that and pay more attention to the graphics score. Anyway, the above still stands in terms of maximizing your potential for a good performer but as I already mentioned and I'm pretty sure you already knew, even one of the binned models doesn't guarantee anything. You still have to also win the silicon lottery, or at least do really well in it and that's down to luck.
> 
> Speaking of which: best of luck on your search and purchase.


You have a 16611 score with your XC? I was only able to get 16064 or something like that. I guess my higher clock/mem doesn't matter.


----------



## GraphicsWhore

yianni said:


> you have a 16611 score with your XC? i was able to only get 16064 or something like that. i guess my higher clock/mem doesnt matter


Yes - https://www.3dmark.com/spy/5234532

But to repeat: that's graphics, not total.


----------



## yianni

Nice, what OC do you have? I wonder if the Galax firmware would get my OC higher.

Sent from my SM-G930V using Tapatalk


----------



## GraphicsWhore

yianni said:


> Nice what OC do you have? Wonder if the galax firmware would get my OC higher
> 
> Sent from my SM-G930V using Tapatalk


I'm on the Galax 380w. I'm going to give the HOF 450w a shot tonight.

The 16611 graphics run was +181/+1000.

I then passed at +181/+1148, but my graphics score actually went down 3 points lol, though total score went up a bit.


----------



## Jpmboy

GraphicsWhore said:


> I'm on the Galax 380w. I'm going to give the HOF 450w a shot tonight.
> 
> The 16611 graphics run was +181/+1000.
> 
> I then passed at +181/+1148, but my graphics score actually went down 3 points lol, though total score went up a bit.


If you are not locking the card in the P0 state, 17000 will be a reach.


----------



## DooRules

Jpmboy said:


> if you are not locking the card in the P0 state, 17000 will be a reach.


Yipper. I had to use the AB curve at 1.093, but it allowed me to get up to 2205 on the core with mem at 1000. It fell over at 2220 core, but hell, no shame there.


----------



## yianni

GraphicsWhore said:


> I'm on the Galax 380w. I'm going to give the HOF 450w a shot tonight.
> 
> The 16611 graphics run was +181/+1000.
> 
> I then passed at +181/+1148, but my graphics score actually went down 3 points lol, though total score went up a bit.


I had +190/+1200 and my score is lower; that's why I'm confused.


----------



## Asmodian

I got 16395 at +135/+1100 (2115/8100) with a stock FE Bios and a shunt mod. 

Keeping it below 38°C helped me gain a few points.


----------



## yianni

Asmodian said:


> I got 16395 at +135/+1100 (2115/8100) with a stock FE Bios and a shunt mod.
> 
> Keeping it below 38°C helped me gain a few points.


Not sure what the shunt does, but my GPU stays cool; it maxes out at 42 when running FurMark.


----------



## GraphicsWhore

Jpmboy said:


> GraphicsWhore said:
> 
> 
> 
> I'm on the Galax 380w. I'm going to give the HOF 450w a shot tonight.
> 
> The 16611 graphics run was +181/+1000.
> 
> I then passed at +181/+1148, but my graphics score actually went down 3 points lol, though total score went up a bit.
> 
> 
> 
> if you are not locking the card in the P0 state, 17000 will be a reach.

I tried what you suggested using Galax Tuner, but setting 1.093 V didn't seem to work. According to AB I never reached 1.093, even though I usually do after some amount of stress. Remind me of the process again?


----------



## yianni

GraphicsWhore said:


> I tried what you suggested using galax tuner but setting 1.093v didn’t seem to work. According to AB I never reached 1.093, even though I usually do after some amount of stress. Remind me of the process again?


What are you guys using to OC? +130 on X1 and Galax Tuner is a lot different than +130 on AB, haha. +90 on AB = 2140 core clock.


----------



## J7SC

Aurosonic said:


> YEah, same card, same bios. Wann try HOF bios but afraid a bit



Thanks for the info! So far I'm really not unhappy with the current BIOS, other than the power target limit, pretty much like any other 2080 Ti user. As for the Galax HOF BIOS, here are comparable PCB screenshots (from Buildzoid/YouTube). Sufficiently different, IMO, to make a BIOS swap a frustrating thing, but one can always flash back.


----------



## Aurosonic

J7SC said:


> Tx for the info ! So far, I'm really not unhappy with this current Bios, other than the Power Target limit / pretty much like any other 2080 Ti user. As to the Galax HoF Bios, here are comparable PCB screenshots (from Buildzoid/YouTube). Sufficiently different IMO to make a Bios swap a frustrating thing, but one can always flash back


I've just flashed the Galax 380W BIOS; testing it now, running OC Scanner.


----------



## jelome1989

Is there really any difference between the Strix Advanced and OC editions? Has anybody done an actual comparison between the 2080 Ti Strix Advanced and OC edition, or have a list of maximum OCs for the cards? I've been hearing people say different things; some say they're literally the same and ASUS doesn't bin Strix cards, and some say they do.


----------



## J7SC

Aurosonic said:


> Ive just flashed to Galax 380w bios, testing it now, running oc scanner



Great - looking forward to hearing whether it works on our card version. If it does, are you going to try the 450W version? :gunner2:


----------



## yianni

Aurosonic said:


> Ive just flashed to Galax 380w bios, testing it now, running oc scanner


What card do you have? Also, where can I get it? I'll try it on my XC.


----------



## GraphicsWhore

Aurosonic said:


> Ive just flashed to Galax 380w bios, testing it now, running oc scanner


So, I'm not sure of your experience thus far with OC Scanner or which one you're using, but I've tried both the X1 and the AB one, and a manual overclock was able to beat both in performance. Just FYI, depending on how they've worked out for you, they may not necessarily be the best bet.



yianni said:


> what card do you have? also where can i get it, ill try it on my XC


It's in the original post - "Galax reference BIOS."


----------



## yianni

GraphicsWhore said:


> It's in the original post - "Galax reference BIOS."


How exactly did you flash your XC, nvflash? The last video card I flashed was an ATI 5870, haha.


----------



## Aurosonic

Well, no luck anyway. Same results on the Galax 380W BIOS. Can't get higher than 2100 even at 1.093 V; always power limited at 380W, with frequency drops. Looks like just bad luck with the chip: it can only handle 2100 at 1.093 V, and it's always power limited.


----------



## yianni

Aurosonic said:


> Well, no luck anyway. Same results on Galax 380w bios. Cant get higher 2100 even at 1.093v, always powerlimited with 380w and frquency drops. Looks like just unluck with chip, it can handle 2100 only at 1.093v and its always powerlimited


What card do you have? I think my XC is able to get 380W... my slider goes up to 130%.

NM, HWiNFO shows 1.069, haha.


----------



## GraphicsWhore

yianni said:


> what card do you have? i think my XC is able to get 380w... my slider goes up to 130%
> 
> nm hwinfo shows 1.0.69 haha


130% is the EVGA stock BIOS, and that's, I believe, 338W. See the rest of my post below.



yianni said:


> how did you OC your XC exactly nvflash?, last video card i flashed was an ATI 5870 haha


Download the BIOS, then download nvflash from the original post (1st link). Unzip nvflash and put the BIOS in the nvflash folder.
Open a command prompt and change directory to the nvflash folder.

1st command: nvflash64 --protectoff
2nd command: nvflash64 -6 2080TIGX126.rom

Hit "y" when prompted.

Restart. You may boot up as if you have no video drivers, but Windows should reinstall them automatically after a few seconds. You can confirm the new BIOS via GPU-Z, or check that the new power limit is 126% in AB or whichever program you use.

I got my highest scores using Galax Tuner, though that doesn't necessarily mean AB will be lower; you can play around with both. If downloading Galax Tuner, I think you need to make sure it's the RTX version. I never figured out if Galax Tuner has a monitor somewhere, so I just opened AB at the same time and then checked the numbers after my bench runs (max core, mem, voltage, etc.)

Good luck.



Aurosonic said:


> Well, no luck anyway. Same results on Galax 380w bios. Cant get higher 2100 even at 1.093v, always powerlimited with 380w and frquency drops. Looks like just unluck with chip, it can handle 2100 only at 1.093v and its always powerlimited


Sorry haven't seen your previous posts but are you on water?


----------



## Aurosonic

yianni said:


> what card do you have? i think my XC is able to get 380w... my slider goes up to 130%
> 
> nm hwinfo shows 1.0.69 haha


It's a Gigabyte Aorus 2080 Ti Xtreme Waterforce WB 11G. Somehow its power consumption is way higher than other cards'. That's why it hits the power limit so early.


----------



## GraphicsWhore

Aurosonic said:


> It's Gigabyte Aorus 2080ti Xtream Waterforce WB 11G. Somehow it's power consumption is way more, that other cards has. That's why it hits powerlimit so early


There's always the HOF 450W BIOS to test.


----------



## Aurosonic

GraphicsWhore said:


> Sorry haven't seen your previous posts but are you on water?


Yeah, custom water loop; my temps never go higher than 43 even at 380W power draw from the card. But I can't get more than 2100 in Time Spy.


----------



## yianni

GraphicsWhore said:


> 130% is the EVGA stock BIOS and that's I believe 338w. See rest of my post below.
> 
> 
> 
> Download the BIOS, then download nvflash from original post (1st link). Unzip nvflash. Put the BIOS in the nvflash folder.
> Open command prompt and change directory to nvflash folder
> 
> 1st command: nvflash64 --protectoff
> 2nd command: nvflash64 -6 2080TIGX126.rom
> 
> Hit "y" when prompted.
> 
> Restart. You may boot up as if you have no video drivers but Windows should reinstall automatically after a few seconds. You can confirm new BIOS via GPUz or check that new power limit is 126% in AB or whichever program you use.
> 
> I got my highest scores using Galax Tuner though that doesn't necessarily mean AB will be lower but you can play around with them. If downloading Galax Tuner I think you need to make sure it's the RTX version. I never figured out if Galax Tuner has a monitor somewhere so I just opened AB at the same time and then checked the numbers after my bench runs (max core, mem, voltage, etc.)
> 
> Good luck.
> 
> 
> 
> Sorry haven't seen your previous posts but are you on water?


Thank you. Yeah, I am on water; I have a GTX 480 for my CPU/GPU loop, and my GPU doesn't go over 40 degrees in Time Spy. I'd like to save my stock BIOS though, just in case, although I'm sure I could find it somewhere if needed.

I've been using both as well, AB and Galax.


----------



## GraphicsWhore

Aurosonic said:


> Yeah, custom water loop, my temps never goes higher than 43 even at 380W power draw from card. But I cant get more than 2100 in Timespy


Interesting. And what offset are you using for core? Also, which program are you using to overclock?



yianni said:


> thank you. yeah i am on water i have a gtx 480 for my cpu/gpu loop, my gpu doesnt go over 40 degrees in time spy. id like to save my stock BIOS though just in case, although im sure i could find that somewhere if needed
> 
> ive been using both as well, ab and galax


I think you can save it using GPU-Z, but you can also get all the BIOSes from the TechPowerUp database: https://www.techpowerup.com/vgabios...&version=&interface=&memType=&memSize=&since=
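As a belt-and-braces step before flashing, nvflash itself can also dump the card's running vBIOS. A minimal sketch, assuming you're in the nvflash folder in an elevated command prompt; `stock_backup.rom` is just an example filename:

```shell
:: Back up the card's current vBIOS to a file before flashing anything
nvflash64 --save stock_backup.rom

:: If a flash goes wrong, restore the backup the same way you flash:
nvflash64 --protectoff
nvflash64 -6 stock_backup.rom
```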


----------



## J7SC

Aurosonic said:


> Well, no luck anyway. Same results on Galax 380w bios. Cant get higher 2100 even at 1.093v, always powerlimited with 380w and frquency drops. Looks like just unluck with chip, it can handle 2100 only at 1.093v and its always powerlimited



Thanks for trying and sharing the result. It still looks like a more than 'decent' chip in any case. Also, so far I get the best results on mine without even touching the voltage slider, though I have not had time yet to try out the MSI AB custom curve approach!

Besides, there are other options :h34r-smi ...you'll note the shunts in the earlier PCB pics I posted of our card. That combined with a GPU pot and at least DICE would work. For me though, I am not even going to take the waterblock off. I'd rather buy a second identical card after Christmas for a new Threadripper build I am planning.


----------



## yianni

GraphicsWhore said:


> Interesting. And what offset are you using for core? Also, which program are you using to overclock?
> 
> 
> 
> I think you can save it using GPUz but you can also get all BIOSes from the Techpowerup database: https://www.techpowerup.com/vgabios...&version=&interface=&memType=&memSize=&since=


Yup, I did. I've flashed it now, I think, but I'm not sure it actually went through, haha; I never checked the stock BIOS version beforehand.


----------



## GraphicsWhore

yianni said:


> yup i did, now i flashed i think, but not sure if it actually went through haha, i never checked the stock bios version prior


If your new power limit in Afterburner is 126%, it's flashed. You can also verify it in the info in GPU-Z.


----------



## yianni

GraphicsWhore said:


> If your new power limit in AfterBurner is 126%, it's flashed. Also can verify in info using GPUz.


I redid it and now it's flashed.


----------



## Asmodian

yianni said:


> Asmodian said:
> 
> 
> 
> I got 16395 at +135/+1100 (2115/8100) with a stock FE Bios and a shunt mod.
> 
> Keeping it below 38°C helped me gain a few points.
> 
> 
> 
> not sure what the shunt does, but my gpu stays cool, maxes out at 42 when running furmark

There's a one-bin drop (from 2115 to 2100 in my case) when the temperature goes above 38°C. The shunt mod removes the possibility of hitting the power limit (~520W at 123%). Both of these help gain a few more points. 😛
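For context, the 123% figure is just slider math; here's a rough sketch of where a ~520W ceiling comes from (the 260W Founders Edition base limit is an assumption on my part, and the shunt factor is only inferred from the numbers above):

```python
# Rough power-limit arithmetic for a shunt-modded FE 2080 Ti.
# Assumptions: 260 W FE base power limit, 123% max power slider.
# A shunt mod makes the card under-read its own draw, so the real
# ceiling sits above what the slider implies.

BASE_W = 260          # assumed FE base power limit
SLIDER = 1.23         # 123% max power slider

sensed_cap = BASE_W * SLIDER      # what the card thinks it can draw
print(f"sensed cap: {sensed_cap:.0f} W")        # ~320 W

# If the effective ceiling is ~520 W as reported above, the mod
# scales the under-read by roughly:
factor = 520 / sensed_cap
print(f"implied shunt factor: {factor:.2f}x")   # ~1.63x
```

So the "520W at 123%" only works out if the shunt mod skews the sensing by roughly 1.6x; the slider alone would cap a stock FE around 320W.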


----------



## boli

OK, so I bought a KFA2 OC, since I'll be putting an EK block on it. For now I'm putting it through its paces on air, to check for the problems some 2080 Tis have…

Contrary to what the info in the first post says, my KFA2 OC came with a 260/320W firmware, so maybe they put a lower PL in more recent cards? I double-checked the EAN and P/N.

Anyway, I've been using it stock to run a few benchmarks and games, tried out OCScanner and manual memory overclocks, all on air, for the last 3 nights.

OC Scanner gave average OCs of +120 to +160 over the ~4 times I tried (I set max PL before/after scanning, without repeatable results).

The best manual memory OC was +840 (the card has Micron memory, BTW); beyond that there were artefacts in Time Spy graphics test 2, right at the end. In actual games it was sometimes worse: while the PC5 and AC: Odyssey built-in benchmarks ran fine, the SotTR built-in benchmark crashed repeatedly during its 3rd (I think) scene, at 800, 700, and 600 memory OC. I didn't try going lower yet.

Tonight I flashed the 380W BIOS onto it and am running benchmarks again. At stock speeds without an OC, but max PL, it almost reaches the previous OC scores.

I'll probably block it this weekend and try some more OC…

*Edit:* Hmm, for some reason this post shows up, but my very first one (from last Sunday) is still awaiting moderation. So here's some more info from that post: I'm replacing a Titan X (Pascal) in the first custom-loop PC I built 2 years ago. It's got a smallish Phanteks case with a single 280x140x45 rad, a stock 6700K also in the loop, 16 GB of memory and a 2 TB 960 Pro SSD. The single rad worked well enough for the +200/+600 Titan X (Pascal) with its 300W PL; it ended up at 50 to 55°C during long sessions, and was very quiet at that. Looking forward to blocking the 2080 Ti. BTW, all benchmarks are fps at 4K/UHD with HDR on (for my 144 Hz G-Sync HDR display).


----------



## vmanuelgm

kot0005 said:


> lol, the fps is still low on ultra... I saw dips to 27 fps!! 50% perf sounds like BS. Do you have NGX 1.2.7 or 1.3.0? You should use 1.2.7; it's in your Program Files, NVIDIA folder. Delete it if you're using NGX 1.3.0, reinstall the driver that comes with 1.2.7, and test again.



Same fps with 1.2.5 and 1.2.7.

50% gain as maximum!!!


----------



## yianni

Way better 3DMark score:

https://www.3dmark.com/3dm/31015781

If I up the voltage, will it allow a better OC? I gave it +20 but I don't think it's doing anything.


----------



## GosuPl

My RIG with 2x 2080Ti, soon 2x RTX TITAN ( please, stop me  )


----------



## kot0005

So I flashed my card with the Galax 380W BIOS from the front page, and now every time I boot up I see a new splash screen that says something like "NVIDIA Corporation VGA... Y or N". Is that normal? Or is it because I set write-protect off before I flashed the card?


----------



## GraphicsWhore

yianni said:


> way better 3dmark score
> 
> https://www.3dmark.com/3dm/31015781
> 
> if i up the voltage will it allow better OC? i gave it +20 but i dont think its doing anything


Might, might not. The voltage slider should be maxed out in AB. If you're using Galax Tuner, I think you actually set the voltage number; you can try 1100mV or above, but keep in mind that doesn't mean your card will actually hit that.



kot0005 said:


> So I flashed my card with the Galax 380w BIOS from the front page and now everytime I boot up, I can see this new splash screen that says Nvidia corporation VGA y or n some crap like that. Is it normal ? Or is this because I set write protect off before I flashed the card ?


Yes, seeing the vBIOS is normal.


----------



## ElGreco

Hi all!

I am interested to buy this card ...
Gigabyte Aorus GeForce RTX 2080 Ti Xtreme Waterforce WB 11GB GDDR

and I would like your opinion on it, as well as whether you know where I could find it... it's simply impossible to find 

Thank you!


----------



## AvengedRobix

My best... for now

https://www.3dmark.com/spy/5154782


----------



## yianni

AvengedRobix said:


> My best... for now
> 
> https://www.3dmark.com/spy/5154782


Very nice. What's your card OC'd to?

Sent from my SM-G930V using Tapatalk


----------



## Asmodian

AvengedRobix said:


> My best... for now
> 
> https://www.3dmark.com/spy/5154782


2280MHz! Very impressive, you won the silicon lottery. 17669 is a crazy graphics score.


----------



## cstkl1

Jpmboy said:


> GraphicsWhore said:
> 
> 
> 
> I'm on the Galax 380w. I'm going to give the HOF 450w a shot tonight.
> 
> The 16611 graphics run was +181/+1000.
> 
> I then passed at +181/+1148, but my graphics score actually went down 3 points lol, though total score went up a bit.
> 
> 
> 
> if you are not locking the card in the P0 state, 17000 will be a reach.

how to lock??? 😱🤔🍈


----------



## J7SC

ElGreco said:


> Hi all!
> 
> I am interested to buy this card ...
> Gigabyte Aorus GeForce RTX 2080 Ti Xtreme Waterforce WB 11GB GDDR
> 
> and I would like to have your opinion on it as well as if you know where I could find this... it’s simply really impossible to find
> 
> Thank you!



I just got this card a few days ago after researching 2080 Tis a bit. Obviously too early for grand pronouncements, but so far I really love it: very fast and quiet with everything stock. Also, over the years I've been quite impressed with Gigabyte's GPU quality. I have GPUs from various generations, and some from other vendors had minor or even recurring quality-control problems.

As to where to buy the Aorus GeForce RTX 2080 Ti Xtreme Waterforce WB: the one I got this past weekend was the only one they had received, though I understand that some European online retailers are expecting new shipments of this card tomorrow (the 7th), so maybe elsewhere too


----------



## GraphicsWhore

AvengedRobix said:


> My best... for now
> 
> https://www.3dmark.com/spy/5154782


God damn.


----------



## MiniZaid

GosuPl said:


> Still no SLI + DXR support, time to sell 2x 2080Ti and buy 1x TITAN RTX.
> 
> But patch perf is great for me.
> 
> 3440x1440p ultra + DXR ultra on single GPU...
> 
> https://scontent-waw1-1.xx.fbcdn.ne...=b0ac2a45acfdb1f6a649d1313618a4df&oe=5CA53422
> 
> https://scontent-waw1-1.xx.fbcdn.ne...=6b03e5803c580d2bb2322b7de4bcc7a9&oe=5CA416F5
> 
> https://scontent-waw1-1.xx.fbcdn.ne...=1ea29276dfb33bd6c1f5eebe30cb50d9&oe=5C999EF1
> 
> https://scontent-waw1-1.xx.fbcdn.ne...=3e0d70d078932d4992146ee664eee184&oe=5CA3E194
> 
> https://scontent-waw1-1.xx.fbcdn.ne...=0905a0aacc067637a172688437a38468&oe=5C6A393A
> 
> https://scontent-waw1-1.xx.fbcdn.ne...=c1f63e907ee0241e8fbea488fef742b0&oe=5CAE1B47


Use NVIDIA Inspector and change the Battlefield V SLI bits to the Battlefield 1/3/4 ones.


----------



## iRSs

yianni said:


> Not sure what the shunt does, but my GPU stays cool; it maxes out at 42°C when running FurMark.


This is a shunt mod, and according to my calculations this much liquid metal on all three of them almost doubles the power target.

If you're power-limited in FurMark, you might very well be at only ~1700 MHz on the GPU core.
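The "almost doubles" estimate is just parallel-resistance math; a sketch (the 5 mΩ shunt value here is illustrative, not measured from this card):

```python
# Shunt mod arithmetic: painting liquid metal (or soldering a second
# resistor) across a current-sense shunt puts a resistance in parallel.
# The controller computes current as I = V_drop / R_expected, so if the
# real resistance is halved, the card reads half the true current and
# the effective power target roughly doubles.

def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

R_SHUNT = 0.005   # illustrative 5 mΩ sense resistor
R_MOD = 0.005     # parallel path of comparable resistance

r_eff = parallel(R_SHUNT, R_MOD)
scale = R_SHUNT / r_eff   # how much the card under-reads current
print(f"effective shunt: {r_eff*1000:.2f} mΩ, power target x{scale:.1f}")
# → effective shunt: 2.50 mΩ, power target x2.0
```

In practice the liquid-metal path's resistance is uncontrolled, which is why results are "almost" rather than exactly double.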









Sent from my LG-H930 using Tapatalk


----------



## iRSs

Aurosonic said:


> No luck anyway. Same results on the Galax 380W BIOS. Can't get above 2100 even at 1.093V; always power-limited at 380W and the frequency drops. Looks like I'm just unlucky with the chip: it can only handle 2100 at 1.093V and it's always power-limited.


Being power-limited is not a chip defect; it's a design "flaw".
But... how do you manage to draw 380W?

Sent from my LG-H930 using Tapatalk


----------



## BudgieSmuggler

TraktorXD said:


> If you really flash this bios Version: 90.02.0B.00.D7 (HOF OC LAB) its cool. Question is if its worth it


In truth it hasn't made a big difference. I can get a stable 2100 on air and +1100 on memory; I run it at +1000 for most games. Temps are around 50°C. Not much changes from the default Zotac BIOS as far as clock speeds go. I reckon I'm at the card's limit in every way. Happy with the performance as it is.


----------



## AvengedRobix

yianni said:


> AvengedRobix said:
> 
> 
> 
> My best... for now
> 
> https://www.3dmark.com/spy/5154782
> 
> 
> 
> Very nice. What's your card OCed to
> 
> Sent from my SM-G930V using Tapatalk

Easy.. hof oclab edition 😂😁


----------



## Spiriva

cstkl1 said:


> how to lock??? 😱🤔🍈


Xtreme Tuner Plus for RTX (BETA) 

http://www.galax.com/en/xtreme-tuner-plus


----------



## Citruschrome

iRSs said:


> this is a shunt mod and according to my calculations this much liquid metal on all three of them almost doubles the powertarget.
> 
> if you are power limited in furmak you might very well have 1700 mhz on gpu core
> 
> Sent from my LG-H930 using Tapatalk


All 3? That's pretty ballsy, unlocking the PCIe power limit, but I guess it's working well.


----------



## kot0005

I got into Anthem closed Alpha, will let you guys know how it plays and looks.


----------



## iRSs

Citruschrome said:


> All 3? That's pretty ballsy unlocking the PCIE power limit but I guess it's working well.


Maybe it is ballsy, but what could go wrong?
In past years I have always unlocked all three resistors.

Sent from my LG-H930 using Tapatalk


----------



## profundido

AvengedRobix said:


> Easy.. hof oclab edition 😂😁


wow, that is some godlike card


----------



## Rob w

Well! Got my advance 2080 Ti FE RMA; it only took a week, but this time round I think I'll leave the waterblock off. The first card failed about a week after fitting the block, and it was a pain to remove and reinstate the card.
Just need to sort out which BIOS to flash it with. Any suggestions, now that a few have been tried?
Haven't opened the box yet, but I'm hoping it has the Samsung memory; it should, as it's a much later batch number.


----------



## profundido

Asmodian said:


> I got 16395 at +135/+1100 (2115/8100) with a stock FE Bios and a shunt mod.
> 
> Keeping it below 38°C helped me gain a few points.


how much difference did the shunt mod make in the end ?


----------



## yianni

I was able to get a nice GPU score this time:

https://www.3dmark.com/spy/5310040


----------



## yianni

AvengedRobix said:


> Easy.. hof oclab edition 😂😁


How much voltage do you need for 5.2GHz on your 9900K? I tried 1.35 and couldn't pass Cinebench.


----------



## DarthBaggins

kot0005 said:


> I got into Anthem closed Alpha, will let you guys know how it plays and looks.


I did as well, pre-loading it while I'm at work :cheers: but I'm only running a 1080Ti until the 2080Ti market levels out.


----------



## AvengedRobix

yianni said:


> AvengedRobix said:
> 
> 
> 
> Easy.. hof oclab edition 😂😁
> 
> 
> 
> how much voltage you have for 5.2g on your 9900k? i tried 1.35 and couldnt pass cinebench

1.36


----------



## DarthBaggins

lol, just a tad more than the failed 1.35 :laugher:


----------



## ThrashZone

DarthBaggins said:


> I did as well, pre-loading it while I'm at work :cheers: but I'm only running a 1080Ti until the 2080Ti market levels out.


Hi,
Hopefully by the end of this year or the beginning of 2019 it will 
NVIDIA's stock isn't looking too good, so at worst the end of the first quarter.


----------



## DarthBaggins

Their stock being low can be good; prices might adjust back to the realm of sanity to compensate.


----------



## yianni

AvengedRobix said:


> 1.36


I'll give that a try. 5.0 is stable at 1.27 with LLC at 7; crazy how much more voltage it needs for 5.2.

Edit: 1.36 did the job for 5.2.

5.3 won't budge, though; I think I went as high as 1.47V.


----------



## nyk20z3

Oh yeah:

https://www.guru3d.com/news-story/evga-releases-teaser-rtx-2080-ti-kingpin.html


----------



## Asmodian

profundido said:


> how much difference did the shunt mod make in the end ?


Very little; compared to a BIOS with a 380W limit, I don't think it would help much, if at all. Unfortunately I did not test a higher-limit BIOS before doing the mod, so I cannot say for sure. It does keep my card from throttling under any load I can sustain (at 123% power).


----------



## ElGreco

J7SC said:


> .
> 
> I just got this card a few days ago after researching 2080 TIs a bit. Obviously too early to make great pronouncements, but so far, I really love it...very fast and quiet with 'stock everything'. Also, over the years I have been quite impressed with Gigabyte's GPU quality. I have various GPU generations, and some of those by other vendors had minor or sometimes even recurring problems based on quality control.
> 
> As to where to by the Aorus GeForce RTX 2080 Ti Xtreme Waterforce WB, the one I got past weekend was the only one they had received though I understand that some European online retailers are expecting new shipments of this card tomorrow (7th), so maybe elsewhere, too


This is good news! Still, I've been trying to find a retailer in all of Europe with this card, but no luck...


----------



## TK421

How do I find out the difference between Samsung GDDR6 and non-Samsung? Does it differ per card manufacturer?


----------



## yianni

Asmodian said:


> Very little, compared to a BIOS with a 380W limit I don't think it would help much, if at all. Unfortunately I did not test a higher limit BIOS before doing the mod so I cannot say for sure. It does keep my card from throttling under any load I can sustain (when at 123% power).


I can tell you that I get a better 3DMark score with the 380W BIOS.


----------



## Jpmboy

^^ Because it does not power-throttle. Neither will a shunt mod. 


TK421 said:


> How to find out the differences between samsung gddr6 and non samsung? Is it different per manufacturer?


It's unclear what you're asking for.


----------



## Asmodian

yianni said:


> i can tell you that i get a better 3dmark score with the 380w bios


Yes, unlocking the power limit is helpful, but not as much as I expected, and I don't think going above 380W would gain you much more. It also depends on your max clocks: I haven't been able to complete Time Spy above 2115 MHz core, so I need less power than someone who can run 2260 MHz.

I get a good score for someone running 2115/8100 with my CPU, but it is not that impressive.
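The reason the max clock matters so much for the power budget: dynamic power scales roughly with frequency times voltage squared. A back-of-the-envelope sketch (the 1.05V pairing for the 2115 MHz point is an assumed example, not a measured value):

```python
# Rough CMOS dynamic-power scaling: P ∝ f · V².
# Comparing a hypothetical 2115 MHz / 1.05 V operating point to a
# golden-sample card running 2260 MHz / 1.093 V.

def relative_power(f1: float, v1: float, f2: float, v2: float) -> float:
    """Power of point 2 relative to point 1, assuming P ∝ f·V²."""
    return (f2 / f1) * (v2 / v1) ** 2

ratio = relative_power(2115, 1.05, 2260, 1.093)
print(f"~{(ratio - 1) * 100:.0f}% more power at the faster point")
# → ~16% more power at the faster point
```

So a card that tops out at 2115 MHz genuinely needs meaningfully less headroom than one pushing 2260 MHz, which is why a 380W cap can be plenty for one chip and a wall for another.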


----------



## J7SC

ElGreco said:


> This is good news! Still, I try to find one retailer in entire Europe with this card but no luck...



I had seen the ad for this card with an expected arrival date of Dec 7th at caseking.de (incidentally where famed OCer der8auer does his thing, such as dual 2080 Tis on LN2)... I just checked the site (their warehouse is in Berlin), but that Gigabyte card is still listed as 'ordered', without an updated expected arrival date. That said, they had seven other 2080 Ti models in stock, including the Zotac AMP! 

I went by the place where I got my Gigabyte today to pick up a Threadripper, mobo and RAM, and they did not have a single 2080 Ti (by any manufacturer) in store. There must be a manufacturing bottleneck, perhaps related to the troubles they had with some of the FE versions. Ditto for certain kinds of G.Skill memory, NVLink bridges and some Intel CPUs that are out of stock and back-ordered. The holiday season doesn't make it any better for the retailers, not least as they are not out of stock due to excessive demand but due to trickle-only supply :wth:


----------



## GraphicsWhore

Tried my hand at the Galax HOF 450W BIOS on my EVGA XC. Bust. I haven't analyzed the HWiNFO log yet, but the bottom line is it scored lower in Time Spy using the same offsets as the Galax 380W BIOS (+181/+1000). This was using the Galax Tuner, but AB didn't fare any better.

Went back to the 380W BIOS, and with that offset in Galax Tuner the graphics score went from 16611 to 16664. The only thing that changed is that I installed the newest NVIDIA drivers.

http://www.3dmark.com/spy/5322226

However, that run also dropped my total score by 400+ points because, for whatever reason, my CPU score took a nosedive despite running at 5GHz vs 4.8.

I'm going to flash back to the 450W BIOS and play around with offsets some more, plus go back to 4.8 on the CPU, but at this point my card is making it clear: it doesn't really want to dance above 2160 core. That's fine with me, as it takes +1148 on memory (380W BIOS). I did not test memory on the 450W BIOS.

I keep laughing at the fact that, despite like 1000 runs at seemingly every combination of core and memory, once you're dealing with multiple BIOSes you realize there are still so many combinations to test. I've set myself a limit of no more than 1 hour of that, so I can actually play some f'ing games instead of spending my entire night inching up offsets and running benchmarks, lol.
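One way to keep that kind of sweep inside a fixed budget is to enumerate the offset grid up front and time-box the whole thing. A sketch with made-up offsets and a placeholder `run_benchmark`; nothing here talks to a real GPU:

```python
import itertools
import time

# Hypothetical sweep over (core, mem) offset pairs, stopped after a
# time budget so there's still time left to actually play games.
CORE_OFFSETS = [150, 165, 181]    # MHz, illustrative values
MEM_OFFSETS = [800, 1000, 1148]   # MHz, illustrative values
BUDGET_S = 3600                   # one hour, as above

def run_benchmark(core: int, mem: int) -> float:
    """Placeholder: would apply offsets, run Time Spy, return the score."""
    return 16000 + core * 2 + mem / 10   # fake, monotonic score

def sweep() -> tuple[int, int, float]:
    """Try every pair until the budget runs out; keep the best score."""
    deadline = time.monotonic() + BUDGET_S
    best = (0, 0, 0.0)
    for core, mem in itertools.product(CORE_OFFSETS, MEM_OFFSETS):
        if time.monotonic() > deadline:
            break
        score = run_benchmark(core, mem)
        if score > best[2]:
            best = (core, mem, score)
    return best

core, mem, score = sweep()
print(f"best: +{core}/+{mem} -> {score:.0f}")
```

With 3 core and 3 memory candidates that's already 9 runs per BIOS; each extra BIOS multiplies the grid, which is exactly why a hard time cap helps.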


----------



## TahoeDust

FINALLY got the EVGA Hydro Copper waterblock for my FTW3.


----------



## iRSs

Asmodian said:


> Yes, unlocking the power limit is helpful, but not as much as I expected, and I don't think going above 380W would help you gain much more. This is also dependant on your max clocks, I haven't been able to complete Timespy above 2115 MHz core, so I need less power than someone who can run 2260 MHz.
> 
> I get a good score for someone running 2115/8100, and with my CPU, but it is not that impressive.


You're also forgetting that a power limit makes the core frequency jump around a lot more than a voltage limit does, which means your frametimes are all over the place; that's a bad thing in very fast-paced shooters like Counter-Strike or Quake Champions. Also, Path of Exile heats the card up and draws as many watts as FurMark or Kombustor, which means that without shunt mods my frequency in Path of Exile would have been somewhere around 1800 MHz on the core because of the heavy power limit. For me, the shunt mods took fps from 80 to 110 and added a lot of smoothness when using skills like Molten Strike or the Bestiary portal effect.

You should not BLINDLY do shunt mods or flash another BIOS (because everyone does it) unless you are POWER LIMITED. Check that in GPU-Z, or configure RivaTuner Statistics Server's overlay to display it in-game.
Do your research, people; know your mods, and don't argue over 300 points in Time Spy...

My Palit 2080 Ti Dual, which is the WORST 2080 Ti out there (non-A too), can hold 7900 memory and 2175 core @ 1.093V,
and it stays cool too, with a behemoth of an air cooler on it.

Sent from my LG-H930 using Tapatalk


----------



## J7SC

GraphicsWhore said:


> Tried my hand at the Galax HOF 450W BIOS on my EVGA XC. Bust. Haven't analyzed the HWInfo log yet but bottom line is it was a lower score in TimeSpy using same offsets as the Galax 380W BIOS (+181/+1000).
> (..cut..)
> 
> I continue to laugh at the fact that, despite like 1000 runs at seemingly every combination of core and memory, when you're dealing with multiple BIOSes you realize there are still so many combinations to test.* I've just set a limit for myself to do that no longer than 1 hour, so I can actually play some f'ing games instead of spending my entire night inching up offsets and running benchmarks, lol*.



I know what you mean, though my 'affliction' is a bit different... still, I (or rather family and friends) named my build log for the new home of the Aorus 2080 Ti '*INTERVENTION*' :wth:


The 6700k @ 4.7GHz was a bit limiting...


----------



## kot0005

How is the Aorus Xtreme Waterforce card? Is the block any good? Buildzoid was concerned about the chokes on this card's PCB, as they were smaller than the reference card's.


----------



## J7SC

kot0005 said:


> How is the Aorus Xtreme waterforce card ? Is the block any good ? Buildzoid was concerned with the chokes on this card's PCB as they were smaller than the reference cards.


So far, I absolutely love it. Good sustained clocks (per my earlier post, 2190 in Unigine Superposition), and the highest temp ever was 47°C after an hour of benching at 21°C ambient. 

I watched Buildzoid's PCB analysis before I bought it; his concern was for LN2, and even then he wasn't sure. His real issue seemed to be that he did not have the datasheet for those 'different' chokes and would do LN2 only if he had the info. I think that's what he said, but he does have a unique communication style that loses me sometimes. In any case, I've never had a Gigabyte product fail, and the card comes with a four-year warranty. 



Besides, it's very shiny which makes it faster, just like racing stripes...


----------



## Aurosonic

J7SC said:


> So far, I absolutely love it. Good sustained clocks (per earlier post, 2190 @ Unigine Superposition) and highest temp ever was 47c after an hour of benching with 21c ambient.


How did you manage to get 2190 on this card?  I can't get past 2115; I get power-limited in any benchmark, even at 2100. GPU-Z shows 370-380W in the first Time Spy test and the frequency drops to 2040-2070.
When I try a +140 offset in Afterburner and hard-lock the voltage at 1.093V, Time Spy almost instantly crashes.

I got a bad chip, right?


----------



## Aurosonic

iRSs said:


> how do you manage to draw 380w?


Any Fire Strike or Time Spy run at 2100: GPU-Z shows 370-380W power consumption, and Power % jumps between 115-127% with the limit at 122%.


----------



## ESRCJ

A friend of mine got a pair of FTW3s, and neither could break 2100MHz, even at 1.093V. I've also seen some very poor results over at the EVGA forum with those cards. I'm a bit nervous about mine, which arrives early next week. Also, looking at the Time Spy leaderboards I don't see many EVGA cards aside from Jayznonsense and Gamersnexus, and I'm sure they both got binned cards. Are any FTW3 owners here hitting over 2100MHz? The Strix I sold was hitting 2100MHz at 1.000V with zero dips in Heaven for hours. It's a shame EVGA doesn't bother binning their current flagship; that 373W power limit seems pointless if the silicon on some of them is utter garbage.


----------



## kot0005

J7SC said:


> So far, I absolutely love it. Good sustained clocks (per earlier post, 2190 @ Unigine Superposition) and highest temp ever was 47c after an hour of benching with 21c ambient.
> 
> I watched Buildzoid's PCB analysis before I bought it - his concern was for Ln2 and even then he wasn't sure. His real issue seemed to be that he did not have the data sheet for those 'different' chokes and would do LN2 only if he had the info. I think that's what he said but he does have a unique communication style which gets me lost sometimes. In any case, I never had any Gigabyte product fail, and the card comes with a four year warranty.
> 
> 
> 
> Besides, it's very shiny which makes it faster, just like racing stripes...


What's your rad setup like? 47°C seems a bit high at 21°C ambient; my card with the EK block only hits 47°C at 36°C ambient. It's hot as hell here, because... summer.


----------



## kot0005

gridironcpj said:


> A friend of mine got a pair of FTW3s and neither could break 2100MHz, even at 1.093V. I've also seen some very poor results over at the EVGA forum with those cards. I'm a bit nervous for mine, which arrives early next week. Also, looking at the Timespy leaderboards I don't see very many EVGA cards aside from Jayznonsense and Gamersnexus. I'm sure they both got binned cards as well. Are any of the FTW3 owners here hitting over 2100MHz? The Strix I sold was hitting 2100MHz at 1.000V with zero dips in Heaven for hours. It's a shame EVGA doesn't bother binning their current flagship. That 373W power limit seems pointless if the silicon is utter garbage on some of them.


WHY sell it?!? 2100MHz at 1V is really good even if it won't OC past that, because your card will have a much longer lifespan.


----------



## ESRCJ

kot0005 said:


> WHY sell it ?!?!? 2100mhz at 1v is really good even if it wont oc past that because ur card will have a much longer lifespan..


I sold it for well over what I paid for it. The Strix's big weakness is its low 325W power limit. It could do 2190MHz at 1.093V, but I could never hold those voltages in most applications due to the power limit. I didn't want to shunt-mod it, and custom BIOSes didn't work very well with it, so I figured I'd take a chance on a FTW3. I honestly thought it was an average chip until I heard about some of those FTW3 results...


----------



## GAN77

Aquacompute released kryographics NEXT 2080 Ti


----------



## TraktorXD

gridironcpj said:


> I sold it for well-over what I paid for it. The Strix's big weakness is that low 325W power limit. It could do 2190MHz at 1.093V, but I could never hit those voltages in most applications due to the power limit. I didn't want to shunt mod it and custom BIOS didn't work very well with it, so I figured I'd take a chance with a FTW3. I honestly thought it was an average chip until I heard about some of those FTW3 results...


Did you try flashing a Galax HOF BIOS with 400/450W, or just a normal one with 380?


----------



## ThrashZone

TahoeDust said:


> FINALLY got the EVGA Hydro Copper waterblock for my FTW3.


Hi,
Nice :thumb:


----------



## GraphicsWhore

kot0005 said:


> gridironcpj said:
> 
> 
> 
> A friend of mine got a pair of FTW3s and neither could break 2100MHz, even at 1.093V. I've also seen some very poor results over at the EVGA forum with those cards. I'm a bit nervous for mine, which arrives early next week. Also, looking at the Timespy leaderboards I don't see very many EVGA cards aside from Jayznonsense and Gamersnexus. I'm sure they both got binned cards as well. Are any of the FTW3 owners here hitting over 2100MHz? The Strix I sold was hitting 2100MHz at 1.000V with zero dips in Heaven for hours. It's a shame EVGA doesn't bother binning their current flagship. That 373W power limit seems pointless if the silicon is utter garbage on some of them.
> 
> 
> 
> WHY sell it ?!?!? 2100mhz at 1v is really good even if it wont oc past that because ur card will have a much longer lifespan..

You believe that ****? 2100 "with no dips" at 1V? Without proof I'm going to go ahead and call BS on that; it would literally defy everything we know about this chip. But presumably he has the screenshots, because if I hit 2100 "with no dips" at 1V I'd be advertising it all over the board. And I sure as **** would not sell the card while acting like that wasn't good enough.

In fact, based on the post it doesn't seem like he even knows what he's talking about. "I hit 2100 with no dips at 1V, but I'm selling because of the power limit." Lol, ***?


----------



## J7SC

kot0005 said:


> what is your rad setups like ? 47c seems a bit high at 21c ambient. My card with the ek block only hits 47c with 36c ambient. Its hot as hell here. Because..summer.



Good observation! Rads aren't the only way to cool down, though at 36°C ambient :cheers:

The GPU rad setup is an XSPC 360x60 with 3x Noctua 120mm fans plugged into the 'CPU Opt' PWM header on the Z170 mobo. As the actual CPU has a good AIO cooler, the fans never spin up unless I run Cinebench or other CPU-intensive apps. In other words, the current (temporary) setup is in total silent mode, as it's in our master bedroom for now where the 4K TV is, and I don't fancy getting 'evicted'. 

The real fun starts next weekend when I get the Watercool Heatkiller TR4 for the build with the new Threadripper etc., the proper new home for this Gigabyte 2080 Ti card


----------



## ESRCJ

GraphicsWhore said:


> kot0005 said:
> 
> 
> 
> 
> 
> gridironcpj said:
> 
> 
> 
> A friend of mine got a pair of FTW3s and neither could break 2100MHz, even at 1.093V. I've also seen some very poor results over at the EVGA forum with those cards. I'm a bit nervous for mine, which arrives early next week. Also, looking at the Timespy leaderboards I don't see very many EVGA cards aside from Jayznonsense and Gamersnexus. I'm sure they both got binned cards as well. Are any of the FTW3 owners here hitting over 2100MHz? The Strix I sold was hitting 2100MHz at 1.000V with zero dips in Heaven for hours. It's a shame EVGA doesn't bother binning their current flagship. That 373W power limit seems pointless if the silicon is utter garbage on some of them.
> 
> 
> 
> WHY sell it ?!?!? 2100mhz at 1v is really good even if it wont oc past that because ur card will have a much longer lifespan..
> 
> 
> You believe that ****? 2100 “with no dips” at 1V? Without proof I’m going to go ahead and call BS on that. That would literally defy everything we know about this chip. But presumably he has the screenshots because if I hit 2100 “with no dips” at 1V I’d be advertising that all over the board. And I sure as **** would not sell the card acting like that wasn’t good enough.
> 
> In fact, based on the post it doesn’t seem like he even knows what he’s talking about. “I hit 2100 with no dips at 1V but I’m selling because of the power limit.” Lol ***?

Take it easy, bud. There are users here whose cards can hit 2190MHz or greater at 1.093V; their cards could likely do what mine did. I tested each point of my V/F curve from 1V to 1.093V to find the highest stable clock per voltage.
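For anyone curious what "testing each point" entails: the curve editor exposes voltage points in small fixed steps, so the scan is just a loop over those points. A sketch; `is_stable` is a placeholder for an actual stress run, and the 6.25 mV step is an assumption about the editor's granularity, not a spec:

```python
# Hypothetical V/F-curve scan from 1.000 V to 1.093 V.
# is_stable() stands in for a real stress test (e.g. a Heaven loop).

STEP_MV = 6.25   # assumed voltage-point spacing in the curve editor

def is_stable(clock_mhz: int, volt_mv: float) -> bool:
    """Placeholder stability oracle: pretend ~15 MHz headroom per step."""
    return clock_mhz <= 1950 + (volt_mv - 1000) / STEP_MV * 15

def max_stable_clock(volt_mv: float, start: int = 1900) -> int:
    """Walk the clock up in 15 MHz bins until the next bin fails."""
    clock = start
    while is_stable(clock + 15, volt_mv):   # Turing clocks move in 15 MHz bins
        clock += 15
    return clock

volt = 1000.0
curve = {}
while volt <= 1093.0:
    curve[volt] = max_stable_clock(volt)
    volt += STEP_MV

for v, c in list(curve.items())[:3]:
    print(f"{v/1000:.4f} V -> {c} MHz")
```

With ~15 voltage points and a real stress run per clock step, a full scan like the one described above easily turns into hours of testing.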


----------



## Asmodian

gridironcpj said:


> Take it easy bud. There are users here whose cards can hit 2190MHz or greater at 1.093V. Their cards could likely do what mine did. I tested each point of my VF curve from 1V to 1.093V to find the highest stable clock per voltage.


Yes, it doesn't seem unreasonable to me. My fairly average card runs 2070 MHz at 1.025V; a very nice piece of silicon could likely do 2100 at 1V. I definitely would have kept it, though; that would be very good silicon.


----------



## callonryan

https://www.3dmark.com/spy/5333274

On a cold boot I got 16613 for the graphics score, but for some reason it went borderless and crashed on the CPU test.


----------



## kot0005

J7SC said:


> Good observation ! Rads aren't the only way to cool down, though at 36 C ambient :cheers:
> 
> GPU rad setup is XSPC 360x60, with 3x Noctua 120mm CPU fans plugged into 'CPU Opt' PWM on the Z170 mobo. As the actual CPU has a good AIO cooler, the fans never signal spin up unless I run Cinebench or other CPU intensive apps. In other words, the current (temporary) setup is set to total silent mode as it is in our master bedroom for now where the 4K tv is - and I don't fancy getting 'evicted'.
> 
> The real fun will start next weekend when I'm getting the Watercool Heatkiller TR4 for the build with the new Threadripper etc., the new proper home for this Gigabyte 2080 ti card


Ahh OK, that's not bad then. I have 2x 480mm rads for my GPU and CPU 

I don't know how much I can share about the Anthem closed alpha, but some tiny bits of info:

The game has full HDR support in the current build. Proper HDR, like in BF V. Should be expected, though.

Performance is around AC: Origins / Shadow of the Tomb Raider levels at 4K. Hopefully driver support improves it.

I left feedback on their forums, so hopefully they use it, lol.


Another important note!! My GPU was pulling 390W during this game. I had a couple of crashes and had to dial my regular 2100MHz clocks down to 2055-2070MHz, and my GPU finally hit the 50°C mark. It's still hot today though, with high humidity; around 30°C ambient.

Yes, the game made my 2080 Ti pull 390W at 2055MHz, lol, and it was being power-limited according to GPU-Z.


----------



## TahoeDust

I'm impressed with EVGA's Hydro Copper block for my FTW3. These are the temps after 30+ minutes of Heaven loop.


----------



## J7SC

I ave been looking at the divvying up of the available power 'budget' of about380 watts on the 2080 Tis. VRAM GDDR6 consumption in oc really is minor...with everything else equal, going from stock speed to 2070 MHz for the memory in GPUz render test only cost me about 7 watts (increase in max watt power consumption). This might be less than all that RGB dazzle on my card 


Next, I still get best overall results by either not adding any extra vcore to the GPU at all, or very little...only moving to the 'max' power target after setting oc speeds. Really no surprise there. Also, my GPU is factory water-blocked, re. the relationship between temps and power target

Then a bit of a mystery, now potentially solved, which possibly **might** affect your system as well. In Win 7, my card was idling at about 20 watts and 4 C cooler than in Win 10, where it was idling at 80 watts (both at stock voltage)... Win 7 idles at 300 MHz, Win 10 at 1350 MHz... Yet in my initial runs in Win 10, I recall seeing 300 MHz idle speed as well, including in Precision X1. Obviously, that seems to be a locked minimum speed bin now, and I think I may have seen a post by someone about that but cannot find it right now. The culprit could be the Galax Xtreme utility I had tried out but then fully uninstalled later. In Win 10, I had MSI Afterburner, Precision X1 and (for a time) the Galax utility. In Win 7, I only have MSI AB, with identical settings.

Speaking of power limits and budgets, have you ever seen a power-stage modded MSI GTX 580 Lightning on LN2 :wubsmiley. A friend of mine had one, and literally needed two PSUs for that card and a giant LN2 dewar to bench :lmaosmile - no power limits there


----------



## iamjanco

But can it run Crysis?

EVGA teases GeForce RTX 2080 Ti Kingpin


----------



## nycgtr

47C isn't that great lol. I have 46C max in a 28C ambient lol. 2100 isn't that common and peeps really shouldn't be complaining. I have quite a few of the more desirable 2080 Tis, tbh. That's probably my 4th Trio by now.


----------



## J7SC

iamjanco said:


> But can it run Crysis?


...

*Crysis* has become the "*Keyser Soze*" of the GPU world...perhaps created/ funded by NVidia ?  :h34r-smi


----------



## cstkl1

https://www.3dmark.com/spy/5334841

2100MHz / 7990 (Futuremark says 7998.. 🤔).. Afterburner.. it spikes to 2130 for the first few seconds but will stay at 2100 all the way with the 406-watt BIOS. 

So close to 16k. (No idea why Futuremark says 5.2GHz was the max boost, which I assure you is impossible at 1.3V for a 7820X.. it was 5GHz.) 

Managed to get 5GHz with HT on running. (Rainy day.. so ambient dropped a bit.)


----------



## MrTOOSHORT

Got the Ek block on, bios flashed to Galax 380w and did ok in TS. The memory is killer, core not so killer.

*https://www.3dmark.com/3dm/31093093*


----------



## Skaarj

Hello! 
Has anyone flashed the Galax RTX 2080 Ti 11GB (HOF OC Labs Edition) BIOS with its 450W power limit, which comes from a non-reference PCB, onto a reference card? No bricks??


----------



## boli

Today I blocked my *KFA2 OC* with an EKWB block. Because of the possible issue with the #3 thermal pads I initially used the 1mm thick pad on the coils, put a blob of thermal paste on the GPU, put the block on it, tightened the innermost 4 screws, and then removed the block again to see what happened.

The strips I used were 7mm wide, so strictly less than the max 8mm EKWB recommends in the manual.

Still, it did look like the contact wasn't excellent (less pressure on the left side of the GPU); unfortunately I don't have a picture of that. In the second picture I had already put a little more thermal paste onto the left side of the GPU (where it wasn't spread properly) and replaced the thermal pads on the coils with 0.5mm ones instead.

Anyway, it seems to work fine now, temps are similar to the Titan X (Pascal) it replaced, however I did increase the fan speed a little bit, due to the higher max power (380W vs 300W). I'm currently evaluating OCs again. Still using the 380W BIOS, not the 320W one the card came with, so more info later…

Pictures inside:


Spoiler


----------



## GraphicsWhore

Skaarj said:


> Hello!
> Has anyone flashed the Galax RTX 2080 Ti 11GB (HOF OC Labs Edition) BIOS with its 450W power limit, which comes from a non-reference PCB, onto a reference card? No bricks??


No. I flashed to my EVGA XC Gaming and there were no issues.



MrTOOSHORT said:


> Got the Ek block on, bios flashed to Galax 380w and did ok in TS. The memory is killer, core not so killer.
> 
> *https://www.3dmark.com/3dm/31093093*


Nice. Almost 17K graphics is way more than "ok", especially given the fact that you reached it at only 2085.


----------



## Aurosonic

MrTOOSHORT said:


> Got the Ek block on, bios flashed to Galax 380w and did ok in TS. The memory is killer, core not so killer.
> 
> *https://www.3dmark.com/3dm/31093093*


How did you manage to unlock +1000 memory limit with Afterburner ?


----------



## J7SC

I asked myself that as well... MrTOOSHORT might have a different method, but per below, I can do it by using PrecX1 first just for the VRAM, then using MSI AB after for all else. Maybe that's how I got locked into 1350 idle

BTW, that's not my benching speed, just trying to figure out the power budget envelope, first w/o any extra voltage applied per slider. Also, I'm starting to see ghostly shapes in the render window /: too much eggnog :wth:


----------



## Aurosonic

J7SC said:


> I asked myself that as well... MrTOOSHORT might have a different method, but per below, I can do it by using PrecX1 first just for the VRAM, then using MSI AB after for all else. Maybe that's how I got locked into 1350 idle
> 
> BTW, that's not my benching speed, just trying to figure out the power budget envelope, first w/o any extra voltage applied per slider. Also, I'm starting to see ghostly shapes in the render window /: too much eggnog :wth:


When I apply the desired memory frequency with Precision X1 and then go to Afterburner and apply the curve - it drops memory to the +1000 limit


----------



## J7SC

Aurosonic said:


> When I apply the desired memory frequency with Precision X1 and then go to Afterburner and apply the curve - it drops memory to the +1000 limit



Once VRAM speed is set with PrecX1 or such and you have closed it, then go to MSI AB and do all other adjustments BUT DO NOT touch VRAM slider in MSI AB


----------



## Aurosonic

J7SC said:


> Once VRAM speed is set with PrecX1 or such and you have closed it, then go to MSI AB and do all other adjustments BUT DO NOT touch VRAM slider in MSI AB


That's exactly what I do. I don't touch VRAM at all, and still: if the slider is at zero it drops to 7000, and if it's at max it shows the frequency I set in Precision X1 but drops to the +1000 limit


----------



## Foxrun

Aurosonic said:


> That's exactly what I do. I don't touch VRAM at all, and still: if the slider is at zero it drops to 7000, and if it's at max it shows the frequency I set in Precision X1 but drops to the +1000 limit


I have the same issue.


----------



## MrTOOSHORT

Open AB and reset clocks.

First deselect "apply overclocking at startup" in AB, close AB, set a higher memory offset with Xtreme Tuner, NVIDIA Inspector or whatnot, then open AB - the offset should be there (it's over 9000!). Now don't touch the memory at all in AB while adjusting everything else.
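
For anyone doing this on Linux instead: the proprietary driver exposes the memory offset as a settable attribute, with no +1000 slider cap to dodge. A dry-run sketch (assumes Coolbits is enabled in xorg.conf; the attribute name and the performance-level index `[3]` are typical for recent drivers but may vary, so treat this as a starting point):

```python
import subprocess

DRY_RUN = True  # flip to False to actually apply the offset

def memory_offset_cmd(offset_mhz, gpu=0, perf_level=3):
    """Build the nvidia-settings invocation for a memory transfer-rate offset."""
    attr = f"[gpu:{gpu}]/GPUMemoryTransferRateOffset[{perf_level}]={offset_mhz}"
    return ["nvidia-settings", "-a", attr]

cmd = memory_offset_cmd(1200)
if DRY_RUN:
    print("+", " ".join(cmd))  # show what would run
else:
    subprocess.run(cmd, check=True)
```

Note the units differ between tools (this attribute is in transfer-rate MHz), so verify the resulting clock with nvidia-smi rather than assuming it matches the AB slider one-to-one.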


----------



## RaGran

Hi,

Anyone know if this --> https://www.techpowerup.com/download/nvidia-nvflash-with-board-id-mismatch-disabled/
can be used to flash A chip bios to non-A chip board to get a higher power limit?


----------



## Jpmboy

RaGran said:


> Hi,
> 
> Anyone know if this --> https://www.techpowerup.com/download/nvidia-nvflash-with-board-id-mismatch-disabled/
> can be used to flash A chip bios to non-A chip board to get a higher power limit?


No, that nvflash build was taken from this thread. It only disables the board ID mismatch check, not the chip identifier. It lets the FE get flashed with AIB BIOSes.
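
For reference, the usual sequence with that patched nvflash looks roughly like the sketch below (dry-run by default; the BIOS filename is a placeholder, and `-6` is the switch that overrides the subsystem-ID mismatch prompt - always dump a backup of your own BIOS first):

```python
import subprocess

DRY_RUN = True  # only flip this after confirming the BIOS suits your PCB

def flash_plan(new_bios):
    """Ordered nvflash invocations for a backup-then-flash cycle."""
    return [
        ["nvflash", "--save", "backup_original.rom"],  # keep this file somewhere safe
        ["nvflash", "--protectoff"],                   # disable the EEPROM write protect
        ["nvflash", "-6", new_bios],                   # flash, overriding the ID-mismatch prompt
    ]

for cmd in flash_plan("galax_380w.rom"):
    if DRY_RUN:
        print("+", " ".join(cmd))
    else:
        subprocess.run(cmd, check=True)
```

And per the post above, none of this gets around the A/non-A chip identifier.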


----------



## Jpmboy

I mean - I really like extreme tuner... just wish it did not TAKE UP HALF THE 1440P SCREEN!
can bench this card up to 2175:


----------



## MrTOOSHORT

Yeah, Stewie's head takes it all!

Broke 17,000 gpu in TS:

*https://www.3dmark.com/spy/5348377*


----------



## Jpmboy

MrTOOSHORT said:


> Yeah, Stewie's head takes it all!
> 
> Broke 17,000 gpu in TS:
> 
> *https://www.3dmark.com/spy/5348377*


never had a doubt.


----------



## MrTOOSHORT

Jpmboy said:


> never had a doubt.




Thanks

Card isn't that good on core, but the memory is insane. Samsung ftw.:thumb:

Very happy with the set up, block and card.


----------



## Jpmboy

MrTOOSHORT said:


> Thanks
> 
> Card isn't that good on core, but the memory is insane. Samsung ftw.:thumb:
> 
> Very happy with the set up, block and card.


I see. 8400+!! Gonna see how the RTX Titan does soon.


----------



## CallsignVega

I think I am going to wait on the RTX Titan to see how it does compared to the 2080 Ti, seeing as how small the Titan difference is this time. There are also no confirmed water blocks for the RTX Titan that I can find, which is another huge issue.


----------



## J7SC

Got the 2950X build started tonight for basic testing (waiting for watercooling parts for completion). With all that RGB on the mobo, RAM etc., swapping in the RGB-heavy Aorus is going to be quite a light show. Wondering which Threadripper setting will be best for the GPU


----------



## cstkl1

CallsignVega said:


> I think I am going to wait on the RTX Titan to see how it does compared to the 2080 Ti. Seeing as how small the Titan difference is this time. There are also no confirmed water blocks for the RTX Titan that I can find, so another huge issue.


Pretty sure it's gonna be the same PCB as the Ti FE.


----------



## thebski

Finally got my Founders 2080 Ti under water and I'm pretty pleased with the results.

https://www.overclock.net/forum/attachment.php?attachmentid=237408&thumb=1

That's after about 30 minutes of Valley at 3440x1440. I'm going to test it in some games now. I know some games will not sustain those clocks due to power limits, but I'm pretty sure it will sustain over 2100. For reference I was able to sustain around 1950 when the card was on air. It was limited by power and of course the bad throttling at higher temps. I didn't expect a ~150 MHz gain so I am pretty happy.

Edit: Looks like I will be able to sustain 2130 MHz in games. Giddy.


----------



## lolhaxz

Got sick of the whole Afterburner +1000MHz limit.

Modified the EXE (challenging actually, I also had to bypass the anti-tamper code) to support up to +2000MHz.

I don't know why, but I hate Precision X1.


----------



## yianni

MrTOOSHORT said:


> Yeah stewies head takes it all!
> 
> Broke 17,000 gpu in TS:
> 
> *https://www.3dmark.com/spy/5348377*


What voltage did you have for 5.4GHz? I tried 1.47 for 5.3 and couldn't pass a Cinebench test. But my CPU runs P95 all day with 1.27 @ 5GHz

Sent from my SM-G930V using Tapatalk


----------



## Jpmboy

CallsignVega said:


> I think I am going to wait on the RTX Titan to see how it does compared to the 2080 Ti. Seeing as how small the Titan difference is this time. There are also no confirmed water blocks for the RTX Titan that I can find, so another huge issue.


Just by visuals, the Ti blocks should fit fine. Besides, there are 30-day no-questions-asked returns if not satisfied. They are expensive tho


----------



## Spiriva

lolhaxz said:


> Got sick of the whole Afterburner +1000MHz limit.
> 
> Modified the EXE (challenging actually, I also had to bypass the anti-tamper code) to support up to +2000MHz.
> 
> I don't know why, but I hate Precision X1.


Wanna share it ?


----------



## GraphicsWhore

CallsignVega said:


> I think I am going to wait on the RTX Titan to see how it does compared to the 2080 Ti. Seeing as how small the Titan difference is this time. There are also no confirmed water blocks for the RTX Titan that I can find, so another huge issue.


Well you've seen the specs. You're paying twice the price for, what, a ton more VRAM? Would you ever use close to that? Doesn't seem like the GPU itself is going to be worth twice the cost.


----------



## Nephalem89

Good afternoon! I have a doubt.. I currently have the MSI 2080 Ti Sea Hawk EK ready to mount, but I saw that the Gigabyte 2080 Ti WaterForce Xtreme has just come out.. Do you think it's worth returning the MSI and waiting for the Gigabyte, for the 2 extra power phases and the extra power limit, on top of the warranty?.... Can the TDP be raised on the MSI? I am a sea of doubts


----------



## GraphicsWhore

Nephalem89 said:


> Good afternoon! I have a doubt.. I currently have the MSI 2080 Ti Sea Hawk EK ready to mount, but I saw that the Gigabyte 2080 Ti WaterForce Xtreme has just come out.. Do you think it's worth returning the MSI and waiting for the Gigabyte, for the 2 extra power phases and the extra power limit, on top of the warranty?.... Can the TDP be raised on the MSI? I am a sea of doubts


Seems like a lot of work when you could end up with a card that is either only marginally better or possibly even a worse overclocker if you struck out on the silicon lottery. Just try another BIOS on your Seahawk to see if you can get a bit more out of it. What kinds of clocks are you getting on it now?


----------



## Nephalem89

GraphicsWhore said:


> Nephalem89 said:
> 
> 
> 
> Good afternoon! I have a doubt.. I currently have the MSI 2080 Ti Sea Hawk EK ready to mount, but I saw that the Gigabyte 2080 Ti WaterForce Xtreme has just come out.. Do you think it's worth returning the MSI and waiting for the Gigabyte, for the 2 extra power phases and the extra power limit, on top of the warranty?.... Can the TDP be raised on the MSI? I am a sea of doubts
> 
> 
> 
> Seems like a lot of work when you could end up with a card that is either only marginally better or possibly even a worse overclocker if you struck out on the silicon lottery. Just try another BIOS on your Seahawk to see if you can get a bit more out of it. What kinds of clocks are you getting on it now?

The core shows 1755 MHz boost / 1350 MHz base - that's what the manufacturer's website lists. I don't have the EK mounted yet because I'm still missing the motherboard. Which BIOS can be put on the MSI?


----------



## J7SC

Hi
A couple of quick questions

1.) For those who flashed the 2080 Ti Galax BIOS (380W, 450W as I recall) onto their non-Galax custom-PCB 2080 Ti: what was the actual / measured increase in the max power consumption in GPU-Z? I am already at a sustained 378 watt max with my stock Gigabyte Aorus. What about the increase with the 450W one, for those who loaded it? This assumes I could even get it to work on the Aorus in the first place.

2.) Probably not the right thread, but I only loaded Win 10 just over a week ago, purely for the 2080 Ti / DX12 (after years of resistance following the early developer copies I played with). There were some rumors of DX12 hacks for Win 7 64, but nothing confirmed that I have seen. My system is dual-booted, and memory speed is higher in Win 7 64 vs Win 10 in MaxxMem2 etc. by quite a margin.

Thanks


----------



## tekathan

Wait, there's a 450W version of the BIOS? 
Can the two 8-pin cables and the 75W PCIe slot even handle 450W??? 
Does this even work with the reference cards?


----------



## MrTOOSHORT

The 450W BIOS is useless unless you have more than two 8-pin connectors. This was covered a while back in this thread.
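
The arithmetic behind that: the PCIe spec rates each 8-pin connector at 150 W and the slot at 75 W. Real cables and shunt-modded cards routinely exceed those ratings, but the BIOS power targets are sized around them:

```python
# Spec-rated power per source (PCIe CEM ratings)
PCIE_SLOT_W = 75     # power through the x16 slot
EIGHT_PIN_W = 150    # per 8-pin PCIe connector

def connector_budget(eight_pins):
    """Total spec-rated input power for a card with the given number of 8-pins."""
    return PCIE_SLOT_W + eight_pins * EIGHT_PIN_W

print(connector_budget(2))  # 375 W - short of a 450 W limit
print(connector_budget(3))  # 525 W - headroom for 450 W
```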


----------



## J7SC

Thanks guys :thumb: 

2x 8-pin and PCIe rail official specs aside, I once had a set of 780 Ti KPs with custom BIOS and an EVBot which could go well beyond 400W each on those power connectors. For now though, I'll see if I can get the 380W Galax to work once I finish the new Thread'nipper build


----------



## Timmaigh!

Hello there,

Seeking help regarding cooling improvements for 2 2080 Tis - specifically a Gaming OC and a Windforce OC. As usual and expected, the upper card is always pretty hot: if it's the Windforce, it's around 87C under heavy load (OctaneBench); if it's the Gaming, about 83-84C. I would like to get those temps lower, and would be satisfied with the 75-80C range. 

Aside from getting hybrid cooling kits for both cards, which is something I am currently not keen to do, I have a few other, cheaper choices, I guess:

- Adding a 14cm fan to the side panel, blowing air from outside the case straight onto the cards. What fan would you suggest for this particular task? My case is an FD Define R5 without a window, and I currently have the opening in the side panel closed.

- Another alternative I have been advised of is to just install 2 or 3 fans straight on the cards - not on the side panel, but deeper into the case, blowing the air inside the case into the gap between the cards. Like this: https://pasteboard.co/HQb663X.jpg It would require some DIY way of mounting the fans in place, but it's clearly doable. I have 3 unused 12cm Eiswind fans from my CPU AIO I could use for this purpose.

So which option do you think would be better? If you have any other alternative, please fill me in. 

Thanks


----------



## tekathan

So I have the Gigabyte Aorus Xtreme, which has that additional 8-pin GPU boost port on the side of it. 
Do you think I can utilize that as the third 8-pin?


----------



## snow cakes

Are the prices going to drop when the Titan comes out? Debating whether to wait or not


----------



## ESRCJ

snow cakes said:


> are the prices going to drop when the Titan comes out? debating to wait or not


That would be unlikely. The Titan is in another tier of pricing entirely. Also, Nvidia isn't having any issues selling the 2080 Ti at the current prices.


----------



## TahoeDust

MrTOOSHORT said:


> The 450w bios is useless unless you have more than two 8pin connectors. This was covered a while back in this thread.


What about two 8pin connectors plus the extra 6pin pcie connector on the EVGA x299 dark board? Could that make use of the 450w bios?


----------



## ocvn

TahoeDust said:


> What about two 8pin connectors plus the extra 6pin pcie connector on the EVGA x299 dark board? Could that make use of the 450w bios?


No, it won't pass 400W. I checked the BIOS with 2 Trio Xs before. My mobo is the X299 Dark also.


----------



## TahoeDust

ocvn said:


> No, it won't pass 400W. I checked the BIOS with 2 Trio Xs before. My mobo is the X299 Dark also.


Thanks.


----------



## iRSs

MrTOOSHORT said:


> The 450w bios is useless unless you have more than two 8pin connectors. This was covered a while back in this thread.


I think you are wrong, because my PC idles at ~110W, but when I start FurMark it ramps up to 700W (this is with shunt mods and the power limit at 50% in AB). It would go higher, but my PSU won't allow it, tripping its OCP as soon as I start FurMark. I can get ~700W on the OCCT PSU test (but that also ramps up the CPU). If I had a 1000W PSU, I am sure I could pass the 700W barrier.

Every measurement is at the wall for the PC itself.

As someone else said here, 150W per PCIe connector is the minimum specification for the wires and connectors themselves.

I remember having a GTX 780 Ti which I couldn't cool on air at 1.3V (with an aftermarket cooler), but I can definitely cool the RTX 2080 Ti. Unfortunately I did not have a wattmeter back then, and now I do not have the GTX 780 Ti anymore

Sent from my LG-H930 using Tapatalk


----------



## kot0005

snow cakes said:


> are the prices going to drop when the Titan comes out? debating to wait or not


2080 Ti stock is still extremely low; prices aren't coming down anytime soon..


----------



## Frozburn

My 2080 Ti Duke died after 26 days. It was semi-working for the last 3 days, but now it won't load Windows at all. Time for some long RMA process :/

Anyone got experience with MSI RMA? 

https://i.imgur.com/bvMoavT.jpg


----------



## ENTERPRISE

Frozburn said:


> My 2080 Ti Duke died after 26 days. Was semi working last 3 days but now it won't load Windows at all. Time for some long RMA process :/
> 
> Anyone got experience with MSI RMA?
> 
> https://i.imgur.com/bvMoavT.jpg


This certainly never fills me with confidence for when I get my cards lol. Fingers crossed I guess. Hope your RMA does not take too long.


----------



## DarthBaggins

MSI has a great RMA experience from the few times I've had the pleasure of using it (on a board, and a bad fan on a GTX 960 I was using for Folding@Home). The only one who has a better experience is Intel (IMO).


----------



## TraktorXD

Hi guys, ROG Strix Gaming A / OC - is there some difference between these two editions?


----------



## nycgtr

TraktorXD said:


> Hi guys, ROG Strix Gaming A / OC - is there some difference between these two editions?


Different power limit in the BIOS. Otherwise identical. Just flash the OC BIOS onto one of the BIOS slots. I have both versions; my A version actually OCs better. So you're really just playing the lottery.


----------



## ValSidalv21

Frozburn said:


> My 2080 Ti Duke died after 26 days. Was semi working last 3 days but now it won't load Windows at all. Time for some long RMA process :/
> 
> Anyone got experience with MSI RMA?
> 
> https://i.imgur.com/bvMoavT.jpg


Mine too, DUKE OC. Started artifacting about two weeks after I got it. Horizontal green lines across the screen on cold boot. They disappeared after a restart or two as the PCB warmed up and the card would work fine for days until the next cold boot. Hours and hours of gaming without any issues. I decided not to RMA it immediately and wait for it to get worse. Two days ago (about a month of use in total) as I watched a youtube video it froze with the same green artifacts, crashed and never recovered. I started the RMA process, but I'm not dealing with MSI directly, I'm going through the local distributor. I hope everything will go smooth and I'll have better luck with the replacement.


----------



## TraktorXD

Oh OK, do you just flash the Galax BIOS, or just the standard one from the OC edition?


----------



## nycgtr

TraktorXD said:


> Oh OK, do you just flash the Galax BIOS, or just the standard one from the OC edition?


The OC edition BIOS. Do not flash a reference-based BIOS (Galax etc.) onto the Strix.


----------



## Frozburn

ValSidalv21 said:


> Mine too, DUKE OC. Started artifacting about two weeks after I got it. Horizontal green lines across the screen on cold boot. They disappeared after a restart or two as the PCB warmed up and the card would work fine for days until the next cold boot. Hours and hours of gaming without any issues. I decided not to RMA it immediately and wait for it to get worse. Two days ago (about a month of use in total) as I watched a youtube video it froze with the same green artifacts, crashed and never recovered. I started the RMA process, but I'm not dealing with MSI directly, I'm going through the local distributor. I hope everything will go smooth and I'll have better luck with the replacement.


I had the same thing with gaming. It lasted for days while gaming but then it would randomly crash on desktop with nothing going on or when using Chrome.

I also sent the card to that store I bought it from. Gonna let them deal with it and if I get another awful card like this one I'll just ask for a refund. Lemme know how your RMA goes if you can


----------






## TraktorXD

nycgtr said:


> The OC edition BIOS. Do not flash a reference-based BIOS (Galax etc.) onto the Strix.


I mean the 450W one from the Galax OC Lab card. It's 19 phases as well


----------



## nycgtr

TraktorXD said:


> I mean the 450W one from the Galax OC Lab card. It's 19 phases as well


I never tried it, but I wouldn't do it either. You're not going to gain any clocks by doing so. Tbh, with the factory BIOS, power limit doesn't even seem to be an issue with the 3 Strix cards I've had, of which I still have 2.


----------



## Zammin

Has anyone been able to contact Nvidia L2 customer support in the last 10 days? My customer service agent was doing so well until he just went dark 11 days ago. He got the first replacement card to me in 3 days, but it has a faulty fan so I told him I needed to send it back. He apologised and said he'd get the E-RMA process started again and arrange another replacement for me. I told him that I had the shipping label but no shipping instructions and he said he asked the RMA team to send the instructions to me.

After that Troy (my L2 agent) and the RMA team became entirely unresponsive. I've emailed 4 times and no response. I spoke to one of the L1 people on live chat yesterday who said that Troy would still be around and they would ask him to respond to me. Still no response. I called (international call from Australia to the US) the L2 support phone number and I was on hold for 10 minutes before being redirected to L1 support again who told me that they need to receive the faulty card back before sending a replacement even though I was under the impression that it was an E-RMA.. They gave me the shipping instructions at least but now I've discovered that the return shipping is Fedex international ECONOMY which means I could be waiting bloody weeks before anything happens, that is assuming Troy even responds to me at all.

This went from a really good customer service experience to an incredibly frustrating one. Now my build is probably going to be delayed until next year...


----------



## Zammin

double post..


----------



## snow cakes

thanks


----------



## TahoeDust

What are other guys with their cards underwater seeing for OCs? My card seems to run any kind of benchmark at 2160-2145MHz except TimeSpy Graphics Test 2. It is 100% everyday game stable at 2085-2100MHz. Both at +1000MHz on memory. I assume this is probably averagish performance.


----------



## J7SC

TahoeDust said:


> What are other guys with their cards underwater seeing for OCs? My card seems to run any kind of benchmark at 2160-2145MHz except TimeSpy Graphics Test 2. It is 100% everyday game stable at 2085-2100MHz. Both at +1000MHz on memory. I assume this is probably averagish performance.


I just happened to watch Steve Burke from Gamers Nexus on YouTube yesterday, where he mentioned that in TS, graphics test 2 is particularly sensitive to memory OC. Maybe do a trial run with your usual GPU OC but the VRAM backed off a bit to check the TS2 test?


----------



## kot0005

Zammin said:


> Has anyone been able to contact Nvidia L2 customer support in the last 10 days? My customer service agent was doing so well until he just went dark 11 days ago. He got the first replacement card to me in 3 days, but it has a faulty fan so I told him I needed to send it back. He apologised and said he'd get the E-RMA process started again and arrange another replacement for me. I told him that I had the shipping label but no shipping instructions and he said he asked the RMA team to send the instructions to me.
> 
> After that Troy (my L2 agent) and the RMA team became entirely unresponsive. I've emailed 4 times and no response. I spoke to one of the L1 people on live chat yesterday who said that Troy would still be around and they would ask him to respond to me. Still no response. I called (international call from Australia to the US) the L2 support phone number and I was on hold for 10 minutes before being redirected to L1 support again who told me that they need to receive the faulty card back before sending a replacement even though I was under the impression that it was an E-RMA.. They gave me the shipping instructions at least but now I've discovered that the return shipping is Fedex international ECONOMY which means I could be waiting bloody weeks before anything happens, that is assuming Troy even responds to me at all.
> 
> This went from a really good customer service experience to an incredibly frustrating one. Now my build is probably going to be delayed until next year...


Lol dude, you are putting in way too much effort for the $2000 u spent.. should have bought a card from PCCG when they had several come in stock. you have been waiting for over 3.5 months now.


----------



## TahoeDust

J7SC said:


> I just happened to watch Steve Burke from Gamers Nexus on YouTube yesterday, where he mentioned that in TS, graphics test 2 is particularly sensitive to memory OC. Maybe do a trial run with your usual GPU OC but the VRAM backed off a bit to check the TS2 test?


Good idea. Thanks for the tip. Steve is awesome. I will try it out next session.


----------



## Zammin

kot0005 said:


> Lol dude, you are putting in way too much effort for the $2000 u spent.. should have bought a card from PCCG when they had several come in stock. you have been waiting for over 3.5 months now.


You're not wrong in that I'm having to go through way too much effort for this, and in hindsight, yes that may have helped if PCCG had a model I wanted in stock, but I can't go back in time and buy one from PCCG. Nvidia are the ones that dropped the ball here, even if I look 3 months back I couldn't have known at the time. 3 months ago you and I were waiting for our orders from PLE to arrive.


----------



## Zammin

Man, BFV has a lot of issues.. I only just got it (luckily it was fairly inexpensive) and in the first story mission I've already had my plane bug out and get stuck sideways in the ground, trees stretching randomly sideways through other objects, square areas of water without reflections, and the game got stuck in an infinite loading loop at the end of the mission....


----------



## pfinch

Hey guys,

my 15-day-old AORUS Xtreme died yesterday. 
Ordered a Gainward 2080 Ti GS, which should work best with the KFA2 / GALAX 380W BIOS, right?!

EDIT: Noticed an updated BIOS for the 2080 Ti GS (VGA_BIOS_Upgrade_1102) which should raise the power target to 126% (like GALAX). 
Do you have any experience with this one compared to GALAX?


----------



## reflex75

pfinch said:


> Hey guys,
> 
> my 15-day-old AORUS Xtreme died yesterday.
> Ordered a Gainward 2080 Ti GS, which should work best with the KFA2 / GALAX 380W BIOS, right?!
> 
> EDIT: Noticed an updated BIOS for the 2080 Ti GS (VGA_BIOS_Upgrade_1102) which should raise the power target to 126% (like GALAX).
> Do you have any experience with this one compared to GALAX?

This RTX generation is a total failure!
Why are people still buying them?


----------



## profundido

reflex75 said:


> This RTX generation is a total failure!
> Why are people still buying them?


Because.... (drumroll)... they have no other option if they need the performance! I too, out of sheer principle, am 100% against this whole ridiculously priced RTX rip-off of a generation, but there is simply no alternative, no competition.

I have been wanting for a long time to get rid of my existing SLI of 2x Titan X (Pascal), which scores about 18700 graphics in Time Spy, hoping to replace it with a solid single card that performs at least the same and hopefully a lot better. The 'better' part I can forget about until next year, since even the best and most overclocked cards in this thread barely go over a 17000 graphics score, but it is still a lot more than what I have now from a single card, and since SLI is really dead now, I'm forced into upgrading. 

Since I exclusively use VR and a 4K 144Hz monitor, I need a single card capable of driving them. So yes, as much as I wanted to sit out this entire generation and 'teach Nvidia a lesson' by not sponsoring them, I can't


----------



## Nizzen

reflex75 said:


> pfinch said:
> 
> 
> 
> Hey guys,
> 
> my 15-day-old AORUS Xtreme died yesterday.
> Ordered a Gainward 2080 Ti GS, which should work best with the KFA2 / GALAX 380W BIOS, right?!
> 
> EDIT: Noticed an updated BIOS for the 2080 Ti GS (VGA_BIOS_Upgrade_1102) which should raise the power target to 126% (like GALAX).
> Do you have any experience with this one compared to GALAX?
> 
> This RTX generation is a total failure!
> Why are people still buying them?

I have 5x 2080 Ti, and they are working flawlessly 🙂

2x MSI Trio X
2x Zotac AMP!
1x Founders with EK water

RTX is only a failure to the people not owning them.. Strange 😛


----------



## vmanuelgm

pfinch said:


> Hey guys,
> 
> my 15-day-old AORUS Xtreme died yesterday.
> Ordered a Gainward 2080 Ti GS, which should work best with the KFA2 / GALAX 380W BIOS, right?!
> 
> EDIT: Noticed an updated BIOS for the 2080 Ti GS (VGA_BIOS_Upgrade_1102) which should raise the power target to 126% (like GALAX).
> Do you have any experience with this one compared to GALAX?


The new Gainward BIOS is working properly. A bit more TDP (if you shunt it, it doesn't matter) and good performance.


----------



## Zammin

reflex75 said:


> This RTX generation is a total failure!
> Why are people still buying them?


Do you have one?


----------



## profundido

Nizzen said:


> I have 5x 2080 Ti, and they are working flawlessly 🙂
> 
> 2x msi trio x
> 2x zotac Amp!
> 1x founders with ek water
> 
> RTX is only a failure to the people not owning them... Strange 😛



by the way, what were your winning lottery numbers ?


----------



## arrow0309

vmanuelgm said:


> The new Gainward BIOS is working properly. A bit more TDP (if you shunt it, it doesn't matter) and good performance.


How many (total) Watts?


----------



## Jpmboy

Zammin said:


> Do you have one?


I'll take that action... no, he/she does not.
Look at it this way... soon there will be an RTX Titan thread and the low-brow posts will troll that thread.


----------



## Nephalem89

A question about the MSI Sea Hawk EK 2080 Ti: which BIOS is advisable to install? Thanks a lot. Another question: is the 2080 Ti Sea Hawk EK the same PCB as the MSI 2080 Ti Trio?


----------



## reflex75

Zammin said:


> reflex75 said:
> 
> 
> 
> This RTX generation is a total failure!
> Why are people still buying them?
> 
> 
> 
> Do you have one?

I was planning to buy one, before reading about all these hardware failures from around the world, so I'm keeping my 1080 Ti for the moment.
I have been buying Nvidia cards since my GeForce 3 Ti 200, and I have never seen so many flaws affecting a new generation: cards dying without explanation, not a single game using DLSS upscaling, just one game using RT, no improvement in price/performance ratio, a small performance jump but a crazy high price...


----------



## VPII

Let me put it in perspective. When you live in an area where the cheapest RTX 2080 Ti is about 1400 USD and the most expensive is 1950 USD, you work out what you can and cannot do. Basically I bought an RTX 2080 Ti for 600 USD after taking into account what I got for my old card / hardware.

Now don't tell me to take into consideration what I paid for something I bought more than a year ago... that is written off... redundant... so that does not count.

RTX cards dying is something we still need to experience in my country, so until then I'll enjoy what I got for 600 USD.

Sent from my SM-G950F using Tapatalk


----------



## GAN77

Nephalem89 said:


> A question about the MSI Sea Hawk EK 2080 Ti: which BIOS is advisable to install? Thanks a lot. Another question: is the 2080 Ti Sea Hawk EK the same PCB as the MSI 2080 Ti Trio?


Yes, the boards are similar.

Do you own the MSI Sea Hawk EK 2080 Ti? Do you have any feedback?


----------



## Nephalem89

GAN77 said:


> Nephalem89 said:
> 
> 
> 
> A question about the MSI Sea Hawk EK 2080 Ti: which BIOS is advisable to install? Thanks a lot. Another question: is the 2080 Ti Sea Hawk EK the same PCB as the MSI 2080 Ti Trio?
> 
> 
> 
> Yes, the boards are similar.
> 
> Do you own the MSI Sea Hawk EK 2080 Ti? Do you have any feedback?

Not yet... I still have no motherboard, but I'm debating swapping it for a Gigabyte 2080 Ti Xtreme Waterforce, because I think it has a higher TDP and more phases.


----------



## Skaarj

GraphicsWhore said:


> No. I flashed to my EVGA XC Gaming and there were no issues.


Fine! Thanks! We will flash it.


----------



## Jbravo33

reflex75 said:


> I was planning to buy one, before reading about all these hardware failures from around the world, so I'm keeping my 1080 Ti for the moment.
> I have been buying Nvidia cards since my GeForce 3 Ti 200, and I have never seen so many flaws affecting a new generation: cards dying without explanation, not a single game using DLSS upscaling, just one game using RT, no improvement in price/performance ratio, a small performance jump but a crazy high price...


Progress! : ) 
https://wccftech.com/final-fantasy-xv-dlss-beta-available-now/


----------



## nycgtr

profundido said:


> by the way, what were your winning lottery numbers ?


No failures with mine either. Currently with 5 on hand, and I've gone through maybe close to 30. I really think it's overblown. However, I do notice sleep issues with multi-monitor on just about all of them.


----------



## kot0005

Zammin said:


> You're not wrong in that I'm having to go through way too much effort for this, and in hindsight, yes that may have helped if PCCG had a model I wanted in stock, but I can't go back in time and buy one from PCCG. Nvidia are the ones that dropped the ball here, even if I look 3 months back I couldn't have known at the time. 3 months ago you and I were waiting for our orders from PLE to arrive.


Just get the Gigabyte, it's the same as the FE.

https://www.pccasegear.com/search?query=2080ti If you have any problems, RMA will be super fast unless the card is 1 year old. They will just exchange it for you.


----------



## Zammin

kot0005 said:


> Just get the Gigabyte, it's the same as the FE.
> 
> https://www.pccasegear.com/search?query=2080ti If you have any problems, RMA will be super fast unless the card is 1 year old. They will just exchange it for you.


Thanks man, but I don't want to buy a second card, nor do I have the spare cash. It's worth mentioning that PC Case Gear might've been good to some people about returns but to me they have been really difficult every time I've wanted to return something. They either outright refuse to take it or they will hold onto it for a week or so before pushing it onto their supplier or in the case of more expensive products they would push it to the manufacturer for RMA. So in those cases it was no different than if I was to RMA with the manufacturer, just slower.

AIB cards from Gigabyte aren't fault-proof either, GN had a couple dead ones along with some EVGA cards sent in for their testing. The card I have works so I have been playing games on it since I got it, I just contacted NVIDIA and asked about why it can't pass stability tests when other FEs can, and it also has really loud coil whine in some games. They immediately said "No worries, if you aren't happy with it we'll send you an advance replacement." which was really good service, and I got the replacement in 3 days, it just so happened that the one they sent has a noisy faulty fan and at some point during communication the guy stopped responding for a while which was very frustrating.

I mailed the faulty card back and he did finally respond yesterday but I still don't know how long it will take for the card to reach them and whether he will send me another one before then or not.

It's very annoying that there was a delay of 11 days with no response, but it's still been better overall than my dealings with other brands like ASUS and Acer directly... I've heard Gigabyte aren't fun to deal with directly either; the only graphics card AIB whose RMA/return experience I have heard good things about is EVGA, which is why, as you know, they were my first choice. So I guess for now we'll just have to wait and see how this turns out.


----------



## kx11

Well, speaking of FFXV: DLSS is cool and all, the IQ is much sharper than TAA, and the performance is actually amazing. However, the game still has those hitches, because the engine renders the world slower than my progress through it, which can hit the fps hard, dropping as low as 35fps even with DLSS. Otherwise solid performance, even with GameWorks on. I didn't try it without 4K though!


----------



## carlhil2

Added a MO-RA3 360 PRO stainless steel rad to my loop. Got it from FrozenCPU for under $100. I now have 2,480mm of rad, lol. I know, still, it helps keep water temps low longer...


----------



## enkay

Quick two-part question.

1. I ordered from the Nvidia store, and it says in stock, BUT when I go to check out it says estimated delivery mid-December in red. Kind of confused about that, as I did not see that a week ago. Are they selling them without them being in stock?

2. I am really scared about the failure rate of these cards. Now I know most posts I read are about problems, and people who have great cards don't post. It's like that with many other pieces of tech. I do believe it's true that you mostly read the bad stuff. BUT I am getting a little worried, because I have never seen this much of the same problem on every website under the reviews; I mean practically EVERY one, all the same problem. Does anyone have a clue what's going on? Are there newer models that are not affected?

Thanks again for any help provided.


----------



## BudgieSmuggler

With regards to the previously failing cards, there were bad memory modules, or so I read. I believe (others will correct me if I'm wrong) it was either Hynix or Micron that was the issue. They have changed to Samsung memory now for the more recent cards, and I think it was mostly the first batches of Founders Edition cards. They will RMA any bad card anyway, and I haven't heard of one going bad for a while. AIB manufacturers like Asus, EVGA, Gigabyte and Zotac don't seem to have seen major issues. My Zotac 2080 Ti AMP runs beautifully: 2100MHz and +1150 on the memory, on air. It's faultless.

If you're going with a Founders Edition, maybe send them an email and ask if you can feel confident buying a Founders now after the problems. Pretty sure they knew which batches had the failing memory modules and have pulled them from sale now. Someone else will chip in if that's wrong info.


----------



## reflex75

BudgieSmuggler said:


> With regards to the previously failing cards there were bad memory modules so i read. I believe (others will correct me if i'm wrong) it was either Hynix or Micron was the issue...


We don't know if it's caused by the memory.
It could be the big GPU die itself, which is very complex to produce.
Only Nvidia knows what's wrong, but they keep it secret.

By the way, Gamers Nexus even tried underclocking the memory, but it still crashed:
"Downclock mem + core to -1000/-1000 - freeze by 60 seconds"
https://www.gamersnexus.net/guides/...cting-failure-analysis-crashing-black-screens


----------



## EQBoss

My first 2 FEs had to be RMA'd due to artifacting, which sucks. The replacements are newer revisions (black box as opposed to grey) and pretty solid so far though. Both cards reach 2145-2160 in triple-A games; the flashed GALAX BIOS increased performance and made clocks stable. One GPU I could hit 2230 in-game stable at max voltage, though I'm having trouble setting custom curves for 2 cards. 8000 on memory; could probably go higher, but that's all Afterburner allows without mods. Interesting to note it's Micron memory, though the logo on the chips looks different compared to my first card, suggesting newer revision memory. Can't really complain, performance is solid on both cards; GPUs are on water with temps in the 46-47 region under load.

I'm having some trouble activating DLSS, both in the benchmark and the actual FF15 game. Was wondering if the GALAX BIOS could cause it; can anyone with a flashed BIOS report? Have latest drivers, latest Windows 1809, also tried 1803, tried OC and non-OC, not sure what else to try.


----------



## BudgieSmuggler

GraphicsWhore said:


> Don't use EVGA's software. Use AfterBurner. And what do you mean "either clock past 500." You're trying to add 500 to the core clock?
> 
> Raise the power and voltage to maximum, then start with like +500 on memory (+0 on core) and go up from there. When you find your max memory, start raising the core.
> 
> If you crash on +500 memory, you lost the silicon lottery.
> 
> On an unrelated note: how many here have flashed the Galax HOF 450w BIOS? And if you did, which card do you have? I'm curious to try it on my EVGA XC.





reflex75 said:


> We don't know if it's caused by the memory.
> It could be the big GPU die itself, which is very complex to produce.
> Only Nvidia knows what's wrong, but they keep it secret.
> 
> By the way, Gamers Nexus even tried underclocking the memory, but it still crashed:
> "Downclock mem + core to -1000/-1000 - freeze by 60 seconds"
> https://www.gamersnexus.net/guides/...cting-failure-analysis-crashing-black-screens





Okay, but I did read in more than one place that it was a memory issue and that they changed to Samsung on newly manufactured cards. It was pretty specific.


----------



## BudgieSmuggler

https://www.techpowerup.com/forums/...i-cards-caused-by-micron-gddr6-memory.249975/


----------



## kossiewossie

GraphicsWhore said:


> Don't use EVGA's software. Use AfterBurner. And what do you mean "either clock past 500." You're trying to add 500 to the core clock?
> 
> Raise the power and voltage to maximum, then start with like +500 on memory (+0 on core) and go up from there. When you find your max memory, start raising the core.
> 
> If you crash on +500 memory, you lost the silicon lottery.
> 
> On an unrelated note: how many here have flashed the Galax HOF 450w BIOS? And if you did, which card do you have? I'm curious to try it on my EVGA XC.


I flashed it on my 2080 Ti FE and it works, but my max overclock was lower on that BIOS compared to the GALAX reference-PCB 380W BIOS. I can do +150 on the core with the GALAX 380W BIOS (boosting to ~2150MHz), but the GALAX HOF 450W capped out on me at +125 (boosting to ~2100MHz), and anything more than that was unstable.


----------



## Deathscythes

Hello,

Just got my hands on a Strix one, and I am losing my mind.
Going from a 1080 Ti Strix, in firestrike my Graphics score raised but the combined score went from nearly 15K to less than 12.5K.
I don't get it, is anyone experiencing anything similar? It doesn't make any sense... I am running Win 7.

Thanks for the help!


----------



## Emmett

EQBoss said:


> My first 2 FEs had to be RMA'd due to artifacting, which sucks. The replacements are newer revisions (black box as opposed to grey) and pretty solid so far though. Both cards reach 2145-2160 in triple-A games; the flashed GALAX BIOS increased performance and made clocks stable. One GPU I could hit 2230 in-game stable at max voltage, though I'm having trouble setting custom curves for 2 cards. 8000 on memory; could probably go higher, but that's all Afterburner allows without mods. Interesting to note it's Micron memory, though the logo on the chips looks different compared to my first card, suggesting newer revision memory. Can't really complain, performance is solid on both cards; GPUs are on water with temps in the 46-47 region under load.
> 
> I'm having some trouble activating DLSS, both in the benchmark and the actual FF15 game. Was wondering if the GALAX BIOS could cause it; can anyone with a flashed BIOS report? Have latest drivers, latest Windows 1809, also tried 1803, tried OC and non-OC, not sure what else to try.


I have the Microsoft store version of Final Fantasy and dlss does not show for me.

And I'm on stock bios. Latest driver.


----------



## kot0005

BudgieSmuggler said:


> https://www.techpowerup.com/forums/...i-cards-caused-by-micron-gddr6-memory.249975/



Samsung VRAM 2080 Tis died as well.
Please google it.


----------



## GraphicsWhore

kossiewossie said:


> I flashed it on my 2080 Ti FE and it works, but my max overclock was lower on that BIOS compared to the GALAX reference-PCB 380W BIOS. I can do +150 on the core with the GALAX 380W BIOS (boosting to ~2150MHz), but the GALAX HOF 450W capped out on me at +125 (boosting to ~2100MHz), and anything more than that was unstable.


I think I mentioned it, but I also crashed at the same offsets that I successfully ran on the 380W BIOS. Turning down the core offset slightly, I was able to pass Fire Strike, but scores were way down. So yeah, it did nothing for me either. Someone here mentioned that it's pointless to flash on cards that aren't using 3x 8-pin power, but not sure if that was verified or what.

Anyway, 380W BIOS is still awesome and my 24/7 BIOS.


----------



## xsidex

Hey! Coming here for some help, as I think I can't figure this out by myself.

I was trying to flash the GALAX BIOS from the main post on my Asus Turbo 2080 Ti, but I'm getting this error popup every time I try to flash it.
https://www.overclock.net/forum/attachment.php?attachmentid=238358&thumb=1

Now what is so strange to me is that I went to TechPowerUp to download a BIOS for my exact card, just to see if the GALAX BIOS was incompatible with mine for some reason. The screenshot above is the result of me flashing that BIOS to my card and failing. The same thing happens with the GALAX BIOS. It starts "storing updated firmware" and then stops for no reason.

The commands I've used were "nvflash64 --protectoff" followed by "nvflash64 -6 BIOS.ROM".

Any tips?

Also, I've tried both the original and the modified nvflash from the main post.

I would just like to find a way to increase my max power from 112%.

Thanks for the help, and hopefully it's just me making a dumb, simple mistake.
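For reference, the usual cross-flash sequence discussed in this thread looks something like the sketch below. This is only a sketch: the file names are examples, nvflash must be run from an elevated prompt in the folder containing the tool and the ROM, and flashing the wrong ROM can brick the card.

```shell
# 1. Save a backup of the current BIOS first, so you can flash back
nvflash64 --save backup.rom

# 2. Disable the EEPROM write protection
nvflash64 --protectoff

# 3. Flash the new ROM; -6 overrides the PCI subsystem ID mismatch
#    that cross-flashing (e.g. a GALAX ROM on an Asus card) triggers
nvflash64 -6 BIOS.ROM
```

That matches the two commands quoted above, with a backup step added. If the write still stalls at "storing updated firmware", a non-elevated prompt or background software interfering with the EEPROM write is one commonly reported culprit, though that is only a guess here.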


----------



## Foxrun

Anyone with a gray box 2080 Ti? I've had mine since launch but no issues as of yet. I think I have Samsung memory, not sure how to tell.


----------



## Deathscythes

Foxrun said:


> Anyone with a gray box 2080 Ti? I've had mine since launch but no issues as of yet. I think I have Samsung memory, not sure how to tell.


GPU-Z will tell you =)


----------



## Foxrun

Welp, I have Micron... mine hasn't died yet though.


----------



## EQBoss

Emmett said:


> I have the Microsoft store version of Final Fantasy and dlss does not show for me.
> 
> And I'm on stock bios. Latest driver.


I can't use DLSS in the benchmark either. I did everything I could think of: DDU the driver, change Windows builds, turn off the OC. I have an X299 MSI Pro Carbon AC and a 7900X @ 4.8. Not sure if it's a bug or what. I am using 4K monitors, and 2 GPUs, if that makes a difference.



Foxrun said:


> Welp, I have Micron... mine hasn't died yet though.


It's honestly luck of the draw, though the black box revisions that I got have been working well and OC pretty high, Micron memory.


----------



## Zurv

It is most likely not the ram. That was just the internet guessing. It seems like it was some QC issues with a launch batch or two. Don't stress it.


----------



## EQBoss

Can anyone test DLSS in the FF15 benchmark or the game? Post your motherboard, CPU and which BIOS your GPU is using; I'd appreciate it, I still can't get it to work.


----------



## Emmett

EQBoss said:


> Can anyone test DLSS in the FF15 benchmark or the game? Post your motherboard, CPU and which BIOS your GPU is using; I'd appreciate it, I still can't get it to work.


It's showing up in the benchmark for me now..

Downloaded from

http://benchmark.finalfantasyxv.com/na/

So the Windows Store version is just not patched yet (or ever?).
The Steam demo of the game did NOT have it.

2080 Ti stock BIOS for the card. Maximus X Formula with a 9900K.


----------



## kot0005

Zammin said:


> Thanks man, but I don't want to buy a second card, nor do I have the spare cash. It's worth mentioning that PC Case Gear might've been good to some people about returns but to me they have been really difficult every time I've wanted to return something. They either outright refuse to take it or they will hold onto it for a week or so before pushing it onto their supplier or in the case of more expensive products they would push it to the manufacturer for RMA. So in those cases it was no different than if I was to RMA with the manufacturer, just slower.
> 
> AIB cards from Gigabyte aren't fault-proof either, GN had a couple dead ones along with some EVGA cards sent in for their testing. The card I have works so I have been playing games on it since I got it, I just contacted NVIDIA and asked about why it can't pass stability tests when other FEs can, and it also has really loud coil whine in some games. They immediately said "No worries, if you aren't happy with it we'll send you an advance replacement." which was really good service, and I got the replacement in 3 days, it just so happened that the one they sent has a noisy faulty fan and at some point during communication the guy stopped responding for a while which was very frustrating.
> 
> I mailed the faulty card back and he did finally respond yesterday but I still don't know how long it will take for the card to reach them and whether he will send me another one before then or not.
> 
> It's very annoying that there was a delay of 11 days with no response, but it's still been better over all than my dealings with other brands like ASUS and Acer directly... I've heard Gigabyte aren't fun to deal with directly either, the only graphics card AIB I have heard good things about the RMA/return experience is EVGA, which is why as you know they were my first choice. So I guess for now we'll just have to wait and see how this turns out..


Found you a deal if you are keen... it has ICX2, so $2k is pretty good.

https://www.ozbargain.com.au/node/425051


----------



## ValSidalv21

Frozburn said:


> I had the same thing with gaming. It lasted for days while gaming but then it would randomly crash on desktop with nothing going on or when using Chrome.
> 
> I also sent the card to that store I bought it from. Gonna let them deal with it and if I get another awful card like this one I'll just ask for a refund. Lemme know how your RMA goes if you can


Here's an update. Got my card back, the same one. Apparently they couldn't reproduce the artifacts.

Guess I was wrong to think it finally died for good. Damn it.

Now I'm talking to customer care via email to see if I can resolve this, and in the meanwhile the card is back to its semi-functional state.

Edit: Well, I just heard back from customer care, they agreed to replace it.

I have the option to wait for a DUKE, because they don't have one immediately available, or I can take a VENTUS and a cashback for the price difference.

Not sure what to do.


----------



## Frozburn

ValSidalv21 said:


> Here's an update. Got my card back, the same one. Apparently they couldn't reproduce the artifacts.
> 
> Guess I was wrong to think it finally died for good. Damn it.
> 
> Now I'm talking to customer care via email to see if I can resolve this, and in the meanwhile the card is back to its semi-functional state.


That's unacceptable, dude. If they returned this card to me, I'd immediately send it back or get a refund. The card crashes with a 0 memory clock offset in OC Scanner's memory test. I did this test a few times and then it fully died and would give me a black screen before logging into Windows. You should try the OC Scanner test with Furry E GPU Memory Burner, 3072MB, 8x AA and 1920x1080 fullscreen (or whatever res you use; I use 1920x1080 because of 240Hz) + the artifact checker. This was the only program that gave me those artifacts almost instantly, OC or not (even with a downclock). Micron memory, if that matters at all.

Waiting a month for some RMA for them to send it back is just dumb. I don't care what they find or don't find; if a card breaks on me like that, they'd better change it. I'll let you know what happens with mine.

Edit: I have no idea how good the Ventus is. Can't they send you a Gaming X Trio? It's pretty much the same card. They cool the same, overclock the same.


----------



## CallsignVega

I think I am done with EK blocks. After all these years they STILL cannot securely fasten their stand-offs to their blocks. Had to take apart my loop and remove the block on one of my 2080 Ti's because when I was testing the contact patch after first application of the block, simply removing those screws made two of the stand-offs near the core back out of the block. Good thing it didn't crack the chip. 

Temps are great now though, maxing out at ~36C under load on both cards. Time to see how well my cards can overclock.


----------



## kot0005

CallsignVega said:


> I think I am done with EK blocks. After all these years they STILL cannot securely fasten their stand-offs to their blocks. Had to take apart my loop and remove the block on one of my 2080 Ti's because when I was testing the contact patch after first application of the block, simply removing those screws made two of the stand-offs near the core back out of the block. Good thing it didn't crack the chip.
> 
> Temps are great now though, maxing out at ~36C under load on both cards. Time to see how well my cards can overclock.


Wow, I might have to check mine when I disassemble it.


----------



## Zammin

kot0005 said:


> Found you a deal if you are keen... it has ICX2, so $2k is pretty good.
> 
> https://www.ozbargain.com.au/node/425051


Thanks man. That's a really good deal. If I hadn't already bought a card I would jump on it. Same issue I already mentioned though: I've already got a card and don't have a spare $2,000 for a second one. Appreciate you searching for one for me though.


----------



## Gripen90

Deathscythes said:


> Hello,
> 
> Just got my hands on a Strix one, and I am losing my mind.
> Going from a 1080 Ti Strix, in firestrike my Graphics score raised but the combined score went from nearly 15K to less than 12.5K.
> I don't get it, is anyone experiencing anything similar? It doesn't make any sense... I am running Win 7.
> 
> Thanks for the help!


Yep, I have seen the same behavior going from GTX 1080 to RTX 2080 and GTX 1080Ti to RTX 2080Ti in FireStrike combined score. What is causing it I do not know. Probably some anomaly in FireStrike utilizing the new architecture.


----------



## ValSidalv21

Frozburn said:


> That's unacceptable, dude. If they returned this card to me, I'd immediately send it back or get a refund. The card crashes with a 0 memory clock offset in OC Scanner's memory test. I did this test a few times and then it fully died and would give me a black screen before logging into Windows. You should try the OC Scanner test with Furry E GPU Memory Burner, 3072MB, 8x AA and 1920x1080 fullscreen (or whatever res you use; I use 1920x1080 because of 240Hz) + the artifact checker. This was the only program that gave me those artifacts almost instantly, OC or not (even with a downclock). Micron memory, if that matters at all.
> 
> Waiting a month for some RMA for them to send it back is just dumb. I don't care what they find or don't find; if a card breaks on me like that, they'd better change it. I'll let you know what happens with mine.
> 
> Edit: I have no idea how good the Ventus is. Can't they send you a Gaming X Trio? It's pretty much the same card. They cool the same, overclock the same.


Nope, they're all out of stock. They only have that one VENTUS OC, nothing else. If I don't take it, they say I'll most likely have to wait until next year for a replacement.

Looks like the VENTUS is also a reference PCB like the DUKE, but with a cheaper cooler. Guess I'll take that and the cashback, and if it's working well I'll probably replace the stock cooler with some AIO solution for the price difference.


----------



## Gripen90

pfinch said:


> Hey guys,
> 
> my 15-day-old AORUS Xtreme died yesterday.
> Ordered a Gainward 2080 Ti GS, which should work best with the KFA2 GALAX 380 W BIOS, right?!
> 
> EDIT: Noticed an updated BIOS for the 2080 Ti GS (VGA_BIOS_Upgrade_1102) which should maximize the Power Target to 126% (like GALAX).
> Do you have any experience with this one compared to GALAX?


I had a Gainward RTX 2080 Ti Phoenix Golden Sample. It lasted me about 4 days. I tried flashing to the latest BIOS too, but it was already too late. The card was artifacting and crashing. I made an RMA, got a refund, and bought myself a couple of 1080 Tis... again.


----------



## Frozburn

ValSidalv21 said:


> Nope, they're all out of stock. They only have that one VENTUS OC, nothing else. If I don't take it, they say I'll most likely have to wait until next year for a replacement.
> 
> Looks like the VENTUS is also a reference PCB like the DUKE, but with a cheaper cooler. Guess I'll take that and the cashback, and if it's working well I'll probably replace the stock cooler with some AIO solution for the price difference.


I'm sure they can find a GPU for you if you start some fire on Twitter like most people do (seems to be the way to get results these days, pathetic). There's just no way that I'd wait months to get a card. I mean at that point you might as well just wait for the next series of GPUs.

Are you from US or Europe?


----------



## ValSidalv21

Frozburn said:


> I'm sure they can find a GPU for you if you start some fire on Twitter like most people do (seems to be the way to get results these days, pathetic). There's just no way that I'd wait months to get a card. I mean at that point you might as well just wait for the next series of GPUs.
> 
> Are you from US or Europe?


Europe, but not EU. We only got a few 2080 Ti's here so far. The Duke and Ventus from MSI, the Gigabyte Gaming and the Asus Dual. None of them, except one last Ventus, are currently in stock.


----------



## Frozburn

ValSidalv21 said:


> Europe, but not EU. We only got a few 2080 Ti's here so far. The Duke and Ventus from MSI, the Gigabyte Gaming and the Asus Dual. None of them, except one last Ventus, are currently in stock.


Isn't it possible for you to get a full refund for your Duke and then buy another Ti elsewhere?


----------



## ValSidalv21

Frozburn said:


> Isn't it possible for you to get a full refund for your Duke and then buy another Ti elsewhere?


I can get a full refund, but there are no 2080 Tis anywhere in stock locally. There are plenty of 2080s and 2070s, but no Tis. Importing one from the EU would take just as long as waiting for a replacement Duke, and on top of that I'd have to pay shipping and import tax. So my best option right now is to take the Ventus.


----------



## Zurv

ValSidalv21 said:


> I can get a full refund, but there are no 2080 Tis anywhere in stock locally. There are plenty of 2080s and 2070s, but no Tis. Importing one from the EU would take just as long as waiting for a replacement Duke, and on top of that I'd have to pay shipping and import tax. So my best option right now is to take the Ventus.


Once the RTX Titans hit, a bunch of people here will be selling their 2080 Tis... that said, I don't know if any of the crazy people are on your side of the pond.


----------



## Frozburn

ValSidalv21 said:


> I can get a full refund, but there are no 2080 Tis anywhere in stock locally. There are plenty of 2080s and 2070s, but no Tis. Importing one from the EU would take just as long as waiting for a replacement Duke, and on top of that I'd have to pay shipping and import tax. So my best option right now is to take the Ventus.


If I were you I'd get a refund and get the gaming OC from Gigabyte when it comes in stock (they usually come in stock very often here)


----------



## Deathscythes

Gripen90 said:


> Yep, I have seen the same behavior going from GTX 1080 to RTX 2080 and GTX 1080Ti to RTX 2080Ti in FireStrike combined score. What is causing it I do not know. Probably some anomaly in FireStrike utilizing the new architecture.


Thanks for the reply. The thing is that I have looked for builds with similar specs (7980XE/2080 Ti) and their combined scores are where they should be, despite the graphics and physics scores being lower than mine. One thing though, I am on Win 7; do you know if that could be the issue? Also my Fire Strike hasn't been updated in a year (a loooooong build log lol).


----------



## Deathscythes

While I am at it, what is the best BIOS for the 2080 Ti? GALAX?
What are the things to know about the card? Any tips? =)


----------



## keikei

Can the new Ti hit 144 frames in BFV at low settings? I'm looking to get a higher hz monitor as well as the card.


----------



## Deathscythes

keikei said:


> Can the new Ti hit 144 frames in BFV at low settings? I'm looking to get a higher hz monitor as well as the card.


At 1080p I average 180-230 in The Witcher 3; not sure how intense BFV is, but that should give you an idea =)


----------



## nycgtr

Anyone looking for a Trio, or a Bitspower block for the Trio? I've got both up for grabs. Mainly looking to get rid of the block, as I will just return the Trio.


----------



## NewType88

keikei said:


> Can the new Ti hit 144 frames in BFV at low settings? I'm looking to get a higher hz monitor as well as the card.


I can hit 144 in BFV with everything on ultra at 1440p.


----------



## Esenel

keikei said:


> Can the new Ti hit 144 frames in BFV at low settings? I'm looking to get a higher hz monitor as well as the card.


WQHD. Yes.


----------



## Gunslinger.

MSI RTX 2080 Ti Lightnings

They didn't come with an aircooler, so I'm not sure what the final retail version will look like.


----------



## NewType88

@Esenel What program is that ?


----------



## Esenel

NewType88 said:


> @Esenel What program is that ?


Tableau.


----------



## J7SC

Gunslinger. said:


> MSI RTX 2080 Ti Lightnings
> 
> They didn't come with an aircooler, so I'm not sure what the final retail version will look like.



...looks yummy: 19 phases (GPU + VRAM) and 3x 8-pin, with the shunts easily ID'ed. Buildzoid is going to have a field day analyzing this PCB, along with the Matrix and Kingpin...getting the popcorn


----------



## J7SC

Quick question on PCIe riser cables. I am continuing with my Threadripper build with the Aorus 2080 Ti WB, and just got the Thermaltake Core P5 case in, which comes with a latest-gen PCIe riser / extension cable - it is supposed to be very high quality. I'm wondering about any performance hits. That might be a more general question than for just this thread, but 2080 Tis are obviously high-throughput and, ahem, sensitive cards.


A quick web / YouTube search was mostly inconclusive...what's your take on using the riser / extension, i.e. with the 2080 Ti ?


Thanks


----------



## Jpmboy

Gunslinger. said:


> MSI RTX 2080 Ti Lightnings
> 
> They didn't come with an aircooler, so I'm not sure what the final retail version will look like.


 is it the photo angle or is the PCB really that much taller? :bigeyedsm
Nice get.


----------



## ikhyhk

Is it possible to crossflash any reference-PCB card's BIOS to another?

At the moment I'm running a Gainward 2080 Ti GS with the 380W Galax BIOS, but I find it unusable with its fans set to 34% at minimum. I was wondering if I could flash the Gigabyte OC card's BIOS to this card; maybe it has a lower minimum fan RPM. It had the second highest maximum power target according to the OP's list.

Or is there any other way to meddle with the fan curves that would let the fans spin down completely, or at least run at a lower RPM?


----------



## Gunslinger.

Jpmboy said:


> is it the photo angle or is the PCB really that much taller? :bigeyedsm
> Nice get.


I would say it's as tall as most non-reference 1080 Tis were, and the same height as my 980 Ti KPE.


----------



## SpartanM07

J7SC said:


> Quick question on PCIe riser cables. I am continuing with my Threadripper build with the Aorus 2080 Ti WB, and just got the Thermaltake Core P5 case in, which comes with a latest-gen PCIe riser / extension cable - it is supposed top be very high quality. I'm wondering about any performance hits. That might be a more general question than for just this thread, but 2080 Ti are obviously high-throughput and, ahem, sensitive cards.
> 
> 
> A quick web / YouTube search was mostly inconclusive...what's your take on using the riser / extension, i.e. with the 2080 Ti ?
> 
> 
> Thanks


I've had my 2080 ti FE on a riser cable (from the Cablemods vertical mounting kit) ever since I received my pre-order. Overclocks well (stable 2040MHz sustained load).


----------



## GraphicsWhore

J7SC said:


> Quick question on PCIe riser cables. I am continuing with my Threadripper build with the Aorus 2080 Ti WB, and just got the Thermaltake Core P5 case in, which comes with a latest-gen PCIe riser / extension cable - it is supposed top be very high quality. I'm wondering about any performance hits. That might be a more general question than for just this thread, but 2080 Ti are obviously high-throughput and, ahem, sensitive cards.
> 
> 
> A quick web / YouTube search was mostly inconclusive...what's your take on using the riser / extension, i.e. with the 2080 Ti ?
> 
> 
> Thanks


You won't know if there are issues until you install it. I have a P5 and have read about all the various riser cable issues and stayed far away from it but it sounds like the majority of people haven't had any problems.


----------



## J7SC

SpartanM07 said:


> I've had my 2080 ti FE on a riser cable (from the Cablemods vertical mounting kit) ever since I received my pre-order. Overclocks well (stable 2040MHz sustained load).





GraphicsWhore said:


> You won't know if there are issues until you install it. I have a P5 and have read about all the various riser cable issues and stayed far away from it but it sounds like the majority of people haven't had any problems.




Thanks gents :thumb: - I'll try the riser first to see if it works, because the Core P5 build will carry two systems if at all possible (X399 w/ 2080 Ti and a smaller X99 w/ SLI 980s), so space will be an issue and the riser affords options. But if I get another 2080 Ti for SLI/NVLink on the X399 later, I would switch back to a mobo mount.


----------



## cstkl1

Esenel said:


> NewType88 said:
> 
> 
> 
> @Esenel What program is that ?
> 
> 
> 
> Tableau.
Click to expand...

Try Grafana. We use that at work with Mongo. Tableau is too limited.


----------



## CallsignVega

You guys find upping the voltage leading to better overclocks? Or leave the slider stock? I'm just wondering, even on water if the extra heat is worth it or would lead to earlier throttling negating the benefit of the increased voltage.


----------



## kossiewossie

I'm getting a pretty solid overclock on my Micron-memory 2080 Ti FEs. I've done about 4 hours of looping Fire Strike / Time Spy stress tests and a few hours of Overwatch, and all seems good at +900.
One of my 2080 Tis can do about +180 on the core with the Galax 380W BIOS, but sadly the other can only do +130.
I have had the cards since release and so far so good, no issues apart from coil whine.


----------



## Edge0fsanity

CallsignVega said:


> You guys find upping the voltage leading to better overclocks? Or leave the slider stock? I'm just wondering, even on water if the extra heat is worth it or would lead to earlier throttling negating the benefit of the increased voltage.


Set a custom curve in AB to hold 1.043v. My peak OC is nearly the same at that voltage as at 1.093v. It results in no power-limit throttling in games and very little throttling in benchmarks. I actually get higher scores with it vs. trying to let it hit another 15MHz with higher voltage and having it throttle all over the place.

Temps are better too. I get one less bin of thermal throttling with the lowered voltage during extended stress testing or gaming sessions.

I'm on the 380W Galax BIOS btw.


----------



## Esenel

cstkl1 said:


> try graphana. we use that at work with mongo. tableau too limited.


Offtopic:

Grafana is a very solid product, and we use it for its intended purpose of streaming time-series data.
It is definitely not the ONE tool for doing everything, though.

At work we use Power BI, Tableau and Grafana to leverage the strengths of all three tools.
Power BI for solid and cheap reporting.
Tableau as a visual analytics platform.
And Grafana for streaming time-series data.

To call Tableau a limited solution just shows me that you do not know much about the product.
As you can integrate visual extensions written in JavaScript (in Grafana as well), the possibilities for creating different chart types are equal.

The huge benefit of Tableau is its ease of use and built-in functionality.

In Tableau you do not need to write SQL queries to create a chart.
Tableau does it for you while you drag and drop dimensions ;-)
Doing k-means clustering via drag and drop is a very nice thing, too.
Or cross-data-source filtering of columns without joining or blending them. Or writing code for it ;-)

If you are happy with your tool of choice, that's what matters most.
It makes you solve your tasks faster ;-)

Cheers


----------



## Jpmboy

kossiewossie said:


> I'm getting pretty solid overclock on my Micron memory 2080Ti's FE, Iv done about 4 hours of looping Firestrike/Time Spy Stress test and few hours of Overwatch and seems all good at 900+
> One of my 2080TIs can do about 180+ on the core with the GAX 380W bios, but sadly the other 2080TI can only do 130+
> I have had the cards since release and so far so good no issues, apart from coil whine.


Nice rig! And yeah, of the thousands sold, a few dozen(?) 2080 Tis have RIP'd. The hysteria is amplified by social media.


----------



## dallasmarlow

Esenel said:


> Offtopic:
> 
> To call Tableau a limited solution just shows me that you do not know much about the product.


I registered an account to respond to that post, but you beat me to it (well said though). :thumb:


----------



## mistershan

Would it be worth it for me to sell my 2x 1080s and get a 2080 Ti Asus Strix card? What kind of performance increase would I get with my other components? Or should I just wait and upgrade to a more modern mobo with USB-C, faster RAM and an i9 9-series chip?

i7 5820k 
2 x Asus Strix 1080s SLI 
64gb Crucial DDR 4 2400 DIMM 
Asus Rampage V Extreme ATX2011E 
Crucial SSD 960 gb M500 
Corsair AX1200I Digital ATX PSU


----------



## arrow0309

Are you guys aware of any Samsung-IC Ti cards that have also died?


----------



## dVeLoPe

Is there a good video on YouTube comparing 1080 Ti SLI to the RTX 2080 Ti in games side by side that someone can link me?


----------



## mistershan

mistershan said:


> Would it be worth it for me to sell 2 x 1080s and get a 2080ti Asus Strix card? What type of performance increase would I get with my other components? Or should I just wait and upgrade to a more modern mobo with usb c, faster ram and a an i9 9 series chip.
> 
> i7 5820k
> 2 x Asus Strix 1080s SLI
> 64gb Crucial DDR 4 2400 DIMM
> Asus Rampage V Extreme ATX2011E
> Crucial SSD 960 gb M500
> Corsair AX1200I Digital ATX PSU


Bump before my question gets lost...


----------



## white owl

mistershan said:


> mistershan said:
> 
> 
> 
> Would it be worth it for me to sell 2 x 1080s and get a 2080ti Asus Strix card? What type of performance increase would I get with my other components? Or should I just wait and upgrade to a more modern mobo with usb c, faster ram and a an i9 9 series chip.
> 
> i7 5820k
> 2 x Asus Strix 1080s SLI
> 64gb Crucial DDR 4 2400 DIMM
> Asus Rampage V Extreme ATX2011E
> Crucial SSD 960 gb M500
> Corsair AX1200I Digital ATX PSU
> 
> 
> 
> Bump before my question gets lost...
Click to expand...

I'd gladly trade 2x 1080s for one 1080TI or a 2080TI. If you want to know the exact FPS difference you should check out benchmarks. Same for different CPUs. Any time new hardware comes out people will see how it compares to older hardware by benchmarking them using games, applications and synthetic benchmarks. You can use their results as a rough idea of what you'll get if you used the same hardware in a similar configuration.


----------



## mistershan

white owl said:


> I'd gladly trade 2x 1080s for one 1080TI or a 2080TI. If you want to know the exact FPS difference you should check out benchmarks. Same for different CPUs. Any time new hardware comes out people will see how it compares to older hardware by benchmarking them using games, applications and synthetic benchmarks. You can use their results as a rough idea of what you'll get if you used the same hardware in a similar configuration.


I have seen some videos where 2x 1080s in SLI were getting around the same FPS in BF V...

How much do you think I could get for two 1080s? Is there a good place to sell them?






Seems like my CPU is getting kinda old...Maybe I should just sell my entire rig for a grand or something and buy something new?


----------



## BudgieSmuggler

Where are you located for trying to sell your 1080s? If in the UK, there is a Facebook page "PC parts for sale UK" - if you search for it, it has an Overclockers UK logo, as they are sponsors. I've just sold my 1080 Ti on there for a good price. There's always someone looking for GPUs of all tiers. You'd be paid by PayPal Goods and Services on there, or collection and cash. Better than eBay for me, as there are too many scammers on there trying funny business. Good luck.


----------



## Deathscythes

Gripen90 said:


> Yep, I have seen the same behavior going from GTX 1080 to RTX 2080 and GTX 1080Ti to RTX 2080Ti in FireStrike combined score. What is causing it I do not know. Probably some anomaly in FireStrike utilizing the new architecture.


It would seem the difference was due to my Fire Strike version being one year old. I have been building for that long lol.
So I reinstalled the latest version and tried. Same results...
But then I rebooted, and things seem to be in order now.

Check the difference, it's nuts...
Before:
12K
https://www.3dmark.com/fs/17417034

After:
15.5K
https://www.3dmark.com/fs/17442122

So yeah, 2 to 2.5K...

Anyway, I am glad it's fixed now =)


----------



## mistershan

BudgieSmuggler said:


> Where are you located for trying to sell your 1080's? If in the UK there is a facebook page "PC parts for sale UK" If you search it has an Overclockers UK logo as they are sponsors. I've just sold my 1080ti on there for a good price. There's always someone looking for gpu's of all tiers. Would be paid by Paypal goods and services on there or collection and cash. Better than Ebay for me as there's too many scammers on there trying funny business. Good luck


No, I am located in the States, in New York City. Is there something comparable for here as well?


----------



## J7SC

Well, the Thermaltake Core P5 PCIe riser question is no longer relevant - not for GPUs anyway (I might use the riser for non-boot M.2s in the MSI MEG X399 Creation, though, for games and video storage).

After some earlier checks, there were only two Gigabyte Aorus Xtreme 2080 Tis with the factory water block delivered to my greater metro area a month or so back, and Newegg still didn't have any (at least earlier this week). I had picked one up, which in turn had shown some great OC potential (per a few pages back). I found the other one today! It was still at the original pre-'price update' price, and I picked it up. As I had skipped a few generations of GPU and CPU, it's all a big and 'not cheap' catch-up build now... I even finally had to begrudgingly get Win 10 (I had an early developer copy - and quickly went back to Win 7 64...).

Re. the many comments on the 'silicon lottery': the two GPUs are only two digits apart in their serial numbers. The new GPU clocks 15-20 MHz lower in Superposition etc., but can run (at least) 50 MHz higher on VRAM. I am very pleased with that. The overall build is daunting though, even with a big open case like the Core P5, because I'm planning to use 2x 360/60 + a 160/60 rad for the 2080 Tis, and at least another separate-loop 360x60 for the 2950X, perhaps more. I was planning to make this a two-mobo build (my trusty older Z170 or perhaps even the Z97 at the back), but space is fast becoming the final frontier (see what I did there?).

Now a quick question: with the NVLink I also got for SLI, which of the two cards would you put in slot 1? The first one, which is 15-20 MHz faster on the GPU, or the second one, which is 50+ MHz faster on VRAM? I am asking because of the extra role the primary GPU's VRAM takes :thinking:


----------



## Asmodian

mistershan said:


> Seems like my CPU is getting kinda old...Maybe I should just sell my entire rig for a grand or something and buy something new?


If you don't need the performance right now I don't feel like it is a great time to buy, both the Ryzen 3000 CPUs and Intel's next line seem to be real upgrades and the RTX series is not a great value atm.

Perhaps sell the 1080s and get a used 1080 Ti or Titan X Pascal or something, but I am not a fan of current SLI. You could probably get $300 for each 1080 without much trouble so you wouldn't need to add much money for the switch, but in titles that support SLI well you might even see a few less frames. Still a better option IMO but not exactly a major upgrade unless you are playing something that doesn't support SLI well.


----------



## krizby

Finally finished my build after waiting a month for the waterblock :/



















Need help:
I have the Asus 2080 Ti Turbo edition but I can't flash the Galax BIOS; nvflash gives a "GPU Mismatch" error. The GPU is an A1 chip.


----------



## MrTOOSHORT

Try this command:

*nvflash --protectoff*


----------



## krizby

I tried --protectoff but that didn't work. Any chance the board vendor put restrictions on flashing the BIOS?


----------



## MrTOOSHORT

Did you use the modified nvflash from OP?:



> Download NVIDIA NVFlash 5.527.0 Modified to allow flashing of FE


----------



## BudgieSmuggler

Yeah, there is usually a local PC parts/sales site on Facebook. You can post it there, on Facebook Marketplace, or on eBay (personally not for me, though lots still do sell there). Before you sell, take a video of the card working and showing in GPU-Z, and take photos of all the serial numbers and close-ups of the card (showing no damage etc). I added a part to my sales post telling potential buyers that I was taking photos etc., so scammers shouldn't bother. There are unfortunately people out there who will claim the card was damaged, send back a different card of the same type, and then get a refund through PayPal or eBay. So it's always best to cover your ass. I've found that on local sites where people have been members for a while, they are less likely to try funny business. Just my 2 cents.


----------



## BudgieSmuggler

Did you use nvflash64 -6 BIOSNAME.rom?
The -6 overrides the GPU board mismatch. Just do some research on whether your card can be flashed, and whether others have successfully flashed the same BIOS to the card you have.
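Putting the commands mentioned in this thread together, a typical crossflash attempt looks like the sketch below. At your own risk: `BIOSNAME.rom` and `backup.rom` are placeholder file names, and this assumes the modified nvflash64 build linked in the OP.

```shell
# Back up the card's current BIOS before touching anything
nvflash64 --save backup.rom

# Disable the EEPROM write protection (needed on many cards)
nvflash64 --protectoff

# Flash the new BIOS; -6 overrides the board/ID mismatch check
nvflash64 -6 BIOSNAME.rom
```

If the flash fails or the card misbehaves afterwards, the saved backup.rom can be flashed back the same way.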


----------



## krizby

Yeah, I tried the 5.527 modified version, but it says "EEPROM program fail"? I probably forgot --protectoff with 5.527; I will try again later tonight.



BudgieSmuggler said:


> Did you use the nvflash64 -6 BIOSNAME.rom
> the -6 overrides the gpu board mismatch. Just do some research on whether your can be flashed and that others have succesffuly falshed the same bios to the card you have


Yeah, I always use nvflash64 -6 bios.rom. There is one user who said he successfully flashed the 2080 Ti Turbo with the Galax BIOS.


----------



## tamas970

ValSidalv21 said:


> Mine too, DUKE OC. Started artifacting about two weeks after I got it. Horizontal green lines across the screen on cold boot. They disappeared after a restart or two as the PCB warmed up and the card would work fine for days until the next cold boot. Hours and hours of gaming without any issues. I decided not to RMA it immediately and wait for it to get worse. Two days ago (about a month of use in total) as I watched a youtube video it froze with the same green artifacts, crashed and never recovered. I started the RMA process, but I'm not dealing with MSI directly, I'm going through the local distributor. I hope everything will go smooth and I'll have better luck with the replacement.


Wow, it's about time to add a statistics column to post #1: how many cards from each manufacturer have died so far in this topic...


----------



## Vesimas

In your opinion, for 2560x1440 at 144/165Hz, would a 2080 Ti be better, or would I be fine with a 2080? Asking because there is a €600 difference in price.


----------



## boli

I don't think this has been posted yet, so:


> Buildzoid talks about the best RTX 2080 Ti PCBs for overclocking and, ultimately, those which are just sort of boring.


----------



## cstkl1

Esenel said:


> cstkl1 said:
> 
> 
> 
> try graphana. we use that at work with mongo. tableau too limited.
> 
> 
> 
> Offtopic:
> 
> Graphana is a very solid product and we use it for it's created purpose of streaming timeseries data.
> Although it is definitely not the ONE tool created for doing everything.
> 
> We at work use Power BI, Tableau and Graphana to leverage the strength of all three tools.
> Power BI for solid and cheap reporting.
> Tableau as Visual Analytics platform.
> And Graphana for streaming timeseries data.
> 
> To call Tableau a limited solution just shows me that you do not know much about the product.
> As you can as well integrate Visual Extensions written in Javascript (as well as in Graphana) the possibilties of creating different chart types are equal.
> 
> The huge benefit of Tableau is its ease of use and pre built-in functionality.
> 
> In Tableau you do not need to write SQL queries to create a chart.
> Tableau does it for you while drag and drop dimensions 😉
> Although doing a k-mean clustering via drag and drop is a very nice thing to do.
> Or cross data source filtering of columns without joining or blending them. Or writing code for it 😉
> 
> If you are happy with your tool of choice that's the most important.
> It makes you solve your tasks faster 😉
> 
> Cheers
Click to expand...

We didn't buy Tableau, we only tested it. We actually liked it, especially for creating automated reports (power management for buildings for superstores, so they can reduce cost).
But then MS came in and gave a huge discount plus help with free machine learning etc., so the powers that be decided no to Tableau (GLC company).

Power BI is our main reporting tool.

Grafana, yeah, we use it for multiple clients with different needs, so coding is better for on-the-fly analytics to predict imminent failures.
We just found it easier.


----------



## cstkl1

Vesimas said:


> In your opinion for a 2560x1440 144/165Hz would be better a 2080Ti or with a 2080 i could be fine? Asking because there is 600€ difference in price


To put some numbers on it: technically the 1080 Ti couldn't run a few games at a 60fps minimum at 1440p, so the 2080 was no different, unless it's Vulkan, where it blows away the 10 series.

With the 2080 Ti I could finally have a smooth 60fps in Quantum Break at 1440p. Before you go calling that game a console port / unoptimized etc.: AFAIK it's been proven that the game engine actually fills the draw calls efficiently, there just wasn't a strong enough GPU at the time to handle it. Then came the Titan V and the 2080 Ti.


----------



## mistershan

Asmodian said:


> If you don't need the performance right now I don't feel like it is a great time to buy, both the Ryzen 3000 CPUs and Intel's next line seem to be real upgrades and the RTX series is not a great value atm.
> 
> Perhaps sell the 1080s and get a used 1080 Ti or Titan X Pascal or something, but I am not a fan of current SLI. You could probably get $300 for each 1080 without much trouble so you wouldn't need to add much money for the switch, but in titles that support SLI well you might even see a few less frames. Still a better option IMO but not exactly a major upgrade unless you are playing something that doesn't support SLI well.


Yeah, I think I am going to hold off on a full system build for now... I really need Thunderbolt 3 (USB-C) ports for editing off drives, and I think if I go with ONE card I will also have room for a PCIe card. So the plan will be to overclock my processor and get a 2080 Ti. If I can get 600 for my old cards, the extra is no problem at all since I can write it off on my taxes.

You make a good point, though, that FPS may be lower in games that have good SLI support. Did you mean just for the 1080 Ti, or for the 2080 Ti as well?

Also, which 2080 Ti should I get? I really like the Asus Strix, mainly because I've never had a single problem with any Asus product, but it's impossible to find. The cheapest option would be the Founders Edition. I hear the Zotac one is good too. However, I've never heard of Zotac. Are they a good company?


----------



## mistershan

Is this a good 2080 Ti, or should I wait for the Asus Strix or Zotac? 1,200 is pretty good. Are two fans good enough?

https://www.microcenter.com/product...x-2080-ti-dual-fan-11gb-gddr6-pcie-video-card


----------



## J7SC

mistershan said:


> Is this a good 2080ti or should I wait for the Asus Strix or Zotac? 1,200 is pretty good. Is 2 fans good enough?
> 
> https://www.microcenter.com/product...x-2080-ti-dual-fan-11gb-gddr6-pcie-video-card



...quite a decent card, based on the reference-design PCB. I would consider getting a water block with the money you save, because the RTX 2080 Ti delivers even better performance and headroom past the power and temp 'Scrooge' limiters when cooled beyond stock.


----------



## mistershan

J7SC said:


> ...quite decent card based on reference design PCB. I would consider getting a water-block with the money you save just because RTX 2080 Ti deliver even better performance and headroom from the power and temp 'Scrooge' limiters when cooled beyond stock.


If I got the Zotac or Asus Strix, would I need a water block?


----------



## J7SC

mistershan said:


> If I got the Zotac or Asus Strix would i need a water block?



...not a question of 'need' in the sense of it otherwise cooking itself, but all 2080 Ti respond very well to extra cooling. Also depends on what your planned use and oc targets are. The Asus Strix, if you can find it, would be a great choice.


----------



## GAN77

J7SC said:


> The Asus Strix, if you can find it, would be a great choice.



Is there a way to increase the limits of this card? 325W sounds like a joke.


----------



## ESRCJ

GAN77 said:


> Is there a way to increase the limits of this card? 325W sounds like a joke.


You could always try a shunt mod.
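For background, a shunt mod works because the card estimates current from the voltage drop across low-value shunt resistors; stacking a second resistor in parallel lowers the effective resistance, so the controller under-reads power and allows more real draw. The sketch below is just the arithmetic behind the idea; the 5 mOhm value is a typical assumption, not a measured spec for this card.

```python
def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

STOCK_SHUNT = 0.005   # 5 mOhm, a typical current-sense shunt (assumed value)
ADDED_SHUNT = 0.005   # identical resistor stacked on top of it

effective = parallel(STOCK_SHUNT, ADDED_SHUNT)   # halves the resistance

# The card's power reading scales with the sensed voltage drop, so at a
# reported 325 W limit the real draw becomes:
reported_limit_w = 325.0
real_power_w = reported_limit_w * (STOCK_SHUNT / effective)

print(f"{effective * 1000:.2f} mOhm effective, ~{real_power_w:.0f} W real at the reported limit")
```

Stacking an equal-value shunt doubles the real power for a given reported limit, which is why shunt-modded cards need serious cooling (and why it voids the warranty).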


----------



## mistershan

J7SC said:


> ...not a question of 'need' in the sense of it otherwise cooking itself, but all 2080 Ti respond very well to extra cooling. Also depends on what your planned use and oc targets are. The Asus Strix, if you can find it, would be a great choice.


Cooking itself? Is there an overheating issue? Why are these cards so out of stock right now? I see 2080s everywhere but not 2080 Tis. Is there an issue with them or something?

Also, I have Asus Strix cards, but any time I tried to overclock them my computer crashed, so I never really messed around with that. If I am not really an OC guy, is it pretty much a waste to get one?


----------



## mistershan

gridironcpj said:


> You could always try a shunt mod.


You have a 2080 Ti Strix? The triple-fan one? How did you find one? Is it worth waiting for one?


----------



## nycgtr

GAN77 said:


> Is there a way to increase the limits of this card? 325W sounds like a joke.


You will hit a wall before the voltage limit. I have had four Strix cards now.



mistershan said:


> You have a 208ti Strix? The triple fan one? How did you find one? Is it worth it to wait for one?


If you want one that badly, I have one extra. PM me if you're in the US 48.


----------



## kot0005

J7SC said:


> Well, the Thermaltake Core P5 PCI riser question is no longer relevant  - not for GPUs anyway (might use the riser for non-boot M.2s in the MSI Meg X399 Crt though for games and vid storage).
> 
> After some earlier checks, there were only two Gigabyte Aorus Xtreme 2080 TIs factory Water-Block delivered to my greater metro area a month or so back and Newegg still didn't have any (at least earlier this week). I had picked one up which turn had shown some great OC potential (per a few pages back). I found the other one today ! It was still at the original pre-'price 'update' and I picked it up. As I had skipped a few generations of GPU and CPU, it's all a big and 'not cheap' catch-up build now....even finally had to begrudgingly get Win 10 (I had an early developer copy - and quickly went back to Win 7 64...)
> 
> Re. the many comments of the 'silicon lottery', the two GPU cards are only two digits apart in their serial number. The new GPU clocks 15-20 MHz lower at Superposition etc, but can run (at least) 50 MHz higher on VRAM. I am very pleased with that.The overall build is daunting though, even with a big open case like Core P5 because I'm planning to use 2x 360/60 + 160/60 rads for the 2080 TIs, and at least another separate-loop 360x60 for the 2950X, perhaps more. I was planning to make this a two-mobo build (my trusty older Z170 or perhaps even the Z97 at the back), but space is fast becoming the final frontier (see what I did there?).
> 
> Now a quick question: With the NVlink I also got for SLI, which of the two cards would you put in slot 1 ? The firs one which is 15-20 MHz faster on GPU, or the second one which is 50+ MHz faster on VRAM. I am asking because of the extra role the primary GPU's VRAM takes :thinking:


Would you open one of the blocks and post some pics? There are no reviews of these cards yet, nor any images showing the internal layout and how many microfins the core has. Do these cards come with a default 375W BIOS?

Is there any coil whine?


----------



## J7SC

kot0005 said:


> would you open one of the blocks and post some pics ? There are no reviews of these cards yet or any images showing internal layout of how many microfins the core has. Do these cards come with a default 375w bios ?
> 
> Is there any coil whine ?


Hello
- The GPUs are getting test-mounted for tube length right now w/ NVLink etc., so I can't pull them apart...but later on, I'll try. In the meantime, please have a look at the pic below, which shows the fins in their CGI render.

- Also below: GPU-Z after a Superposition run with card 2. As with card one, it consistently pulls 375W...the highest I've seen was 378.9W / 126% TDP...all on the default BIOS, without any extra voltage. The MSI AB voltage slider works but seems connected only to a placebo switch :upsidedwn *EDIT: update* in my next post

- No coil whine, at least so far.


----------



## J7SC

- Further to my post above, I just downloaded the 'Aorus VGA Engine' utility (from the Aorus site)....nifty > it deals with this custom PCB, allows voltage control :applaud: and has memory setting options waaayyy beyond MSI AB's 1000 offset limit. Can't try it now as the cards are getting 'plumbed', but very soon... (the final item for the build, the CPU waterblock, is supposed to arrive from Germany tomorrow, per FedEx)


----------



## kot0005

J7SC said:


> - further to my post above, I just downloaded the 'Aorus VGA engine' utility (from Aorus site)....nifty > deals with this custom PCB, allows voltage control :applaud: and has memory setting options waaayyy beyond MSI AB's 1000 offset limit . Can't try it now as cards are getting 'plumbed', but very soon... (final item for build, the CPU waterblock, is supposed to arrive from Germany tomorrow, per FedEx)


Ah well, I can't really see how many fins it has from the render. You should also check the GPU paste contact and the thermal pads they use for the block.


----------



## VPII

I'm sitting with a Galax 2080 Ti OC, so basically the 380W TDP card if set to +126% TDP. The card runs pretty well. The maximum clocks I can get are 2145MHz core and +800MHz memory... +850MHz memory also works, but you start seeing artifacts during bench runs. I did try the HOF BIOS, which flashed fine, but it ended up with the card running hotter, so clocks were not that great. I'm back on the stock BIOS, but unfortunately the 2145MHz GPU clock will only last 4 to 5 seconds during a bench, and it will drop to around 2040 when GPU temps hit 65 to 75C. Unfortunately, in my country it is somewhat difficult to get water-cooling solutions for this card, but I'll keep looking to see whether I can get some, even if from abroad.

The card itself runs great, so I cannot complain. I'd just like it to run a little cooler so it keeps the clocks I set.


----------



## J7SC

kot0005 said:


> ahh well I cant really see how many fins it has fgrom the render, you should also check for the gpu paste contact and the thermal pads they use for the block.


This second card runs very cool as well. I can see thermal pads sandwiched in there on top...I need to re-stock on Fuji thermal pads, as I converted eight water-cooled GPUs on four other "re"-builds back to air cooling last month for home office and family (ab)use...also still short on those little spring-loaded factory GPU screws, they keep disappearing, even from safe storage...probably gremlins sneaking in :sozo:




VPII said:


> I'm sitting with a Galax 2080 Ti OC, so basically the 380W TDP card if set to +126% TDP. The card runs pretty well. The maximum clocks I can get are 2145 MHz core and +800 MHz memory; +850 MHz memory also works, but you start seeing artifacts during bench runs. I did try the HOF BIOS, which flashed fine, but it ended up making the card run hotter, so clocks were not that great. I'm back on the stock BIOS, but unfortunately the 2145 MHz GPU clock will only last 4 to 5 seconds during a bench and drops to around 2040 MHz when GPU temps hit 65 to 75°C. Unfortunately, in my country it is somewhat difficult to get water cooling solutions for this card, but I'll keep looking to see whether I can get some, even if from abroad.
> 
> The card itself runs great temperature-wise, so I cannot complain. Just that I'd like to get it to run a little cooler to keep the clocks I set.


I really like the Galax, but it's hard to get here. As to water-cooling them, I have used universal GPU blocks a lot before with great results (i.e. from Swiftech). If you use those, just also mount a 120mm fan over the VRM, which usually also cools the VRAM. Even better if the Galax has a cold plate.


----------



## kot0005

J7SC said:


> This second card runs very cool as well. I can see thermal pads sandwiched in there on top...need to re-stock on Fuji thermal pads as I converted eight w-c GPUs on four other "re"-builds back to air cooling last month for home office and family (ab)use...also still short on those little spring-loaded factory GPU screws, they keep on disappearing, even from the safe storage...probably gremlins sneaking in :sozo:
> 
> 
> 
> 
> Ah okay, but if you ever decide to swap out the thermal pads with Fujipoly ones, take some pics of the block's insides, the PCB thermal paste contact, and the thermal pads they use. I am curious, because most factory installations suck.


----------



## profundido

mistershan said:


> Would it be worth it for me to sell 2 x 1080s and get a 2080 Ti Asus Strix card? What type of performance increase would I get with my other components? Or should I just wait and upgrade to a more modern mobo with USB-C, faster RAM and an i9 9-series chip?
> 
> i7 5820k
> 2 x Asus Strix 1080s SLI
> 64gb Crucial DDR 4 2400 DIMM
> Asus Rampage V Extreme ATX2011E
> Crucial SSD 960 gb M500
> Corsair AX1200I Digital ATX PSU


You're looking at slightly better performance than your current SLI setup in games that scaled 100% with SLI (Rise/Shadow of the Tomb Raider, etc.) and over a 70% performance increase in games that didn't support SLI properly, which is basically all the rest.

Not to mention no more SLI hassles and better thermals, so a solid YES: more than worth it!


----------



## RobMachado

Hello guys, 

So I've had my EVGA RTX 2080 Ti OC for 3 weeks now, and it seems that my card is failing. At the beginning I was getting crash dump errors while playing Wolfenstein II: The New Colossus; after that the game would freeze with artifacts, and I had to launch Task Manager to shut the game down.

I decided to go ahead and launch the Unigine Valley benchmark and let the GPU run for a while; suffice to say that my PC would either restart or crash back to Windows after a while. Now I am not even getting a display signal on my monitor, so yeah, it's apparently dead now :X.

Before I start the RMA process (which is going to be a huge hassle as I live outside of the US), should I try updating the card's BIOS? If so, how is it done? And will this void my warranty?

Currently running the latest Nvidia drivers; the card was never OC'ed as I was waiting for my EK Vector block to arrive. I upgraded to this card from 2 EVGA Hydro Copper GTX 1080s and never had a problem with them.

Specs: 

i9 7960X @ 4.00GHz
Corsair Dominator 32GB DDR4
Asus Rampage VI Extreme
Samsung Evo Pro 256GB M.2 drive
Windows 10 Pro 64-bit.


----------



## xsidex

Hey guys! Another forum user, Krizby, and I can't flash any BIOS on our 2080 Ti Asus Turbos. Anyone have some tips? Our BIOS is the same (90.02.0B.40.09), which is different from the older 90.02.0B.00.BB that you can find on TechPowerUp. Not only that, but we also can't flash the GALAX BIOS that is shared in the main post along with proof that a user got it onto his card.
Does anyone have the same card and managed to successfully flash the BIOS? Any tips for us to attempt to fix this situation?

This is a link to my previous post in this thread a couple of days ago that got no replies: https://www.overclock.net/forum/69-n...l#post27759142

Any help would be great!


----------



## krizby

Yup, I couldn't even flash to the older Asus 2080 Ti Turbo BIOS found in the TechPowerUp database. When flashing with the newest nvflash it says GPU mismatch, and when flashing with the modified 5.527 it says EEPROM Programs Fail. Any tips?

Chip is definitely A1


----------



## arcDaniel

krizby said:


> Yup, I couldn't even flash to the older Asus 2080 Ti Turbo BIOS found in the TechPowerUp database. When flashing with the newest nvflash it says GPU mismatch, and when flashing with the modified 5.527 it says EEPROM Programs Fail. Any tips?
> 
> Chip is definitely A1


That is not an A-chip!

Here is my A-chip, do you see the difference?


----------



## krizby

Oh, then my mistake; it was supposedly 300A vs. 300 non-A. Sucks that the later Asus 2080 Ti Turbo batch uses the non-A chip. Sneaky, sneaky, Asus.

OP, please update the 1st page: the Asus 2080 Ti Turbo now uses the non-A chip with a 250/280W power limit.


----------



## RobMachado

RobMachado said:


> Hello guys,
> 
> So I've had my EVGA RTX 2080 Ti OC for 3 weeks now, and it seems that my card is failing. At the beginning I was getting crash dump errors while playing Wolfenstein II: The New Colossus; after that the game would freeze with artifacts, and I had to launch Task Manager to shut the game down.
> 
> I decided to go ahead and launch the Unigine Valley benchmark and let the GPU run for a while; suffice to say that my PC would either restart or crash back to Windows after a while. Now I am not even getting a display signal on my monitor, so yeah, it's apparently dead now :X.
> 
> Before I start the RMA process (which is going to be a huge hassle as I live outside of the US), should I try updating the card's BIOS? If so, how is it done? And will this void my warranty?
> 
> Currently running the latest Nvidia drivers; the card was never OC'ed as I was waiting for my EK Vector block to arrive. I upgraded to this card from 2 EVGA Hydro Copper GTX 1080s and never had a problem with them.
> 
> Specs:
> 
> i9 7960X @ 4.00GHz
> Corsair Dominator 32GB DDR4
> Asus Rampage VI Extreme
> Samsung Evo Pro 256GB M.2 drive
> Windows 10 Pro 64-bit.


Bump


----------



## J7SC

kot0005 said:


> Ah okay, but if you ever decide to swap out the thermal pads with Fujipoly ones, take some pics of the block's insides, the PCB thermal paste contact, and the thermal pads they use. I am curious, because most factory installations suck.





Will do, I'm kind of curious myself re. the block, fins and inside view etc. The Aorus factory block kind of looks like it could be a Bitspower one. But today, I am looking where to hide the quick-disconnects for the GPUs and CPU on the back - or instead make them a shiny part of the front. Sooner or later, I'm going to try hard tubes, but not yet / got to get that 'EisKoffer' first :biggrinsm

*EDIT @kot0005 *...final piece for the build w/ the 2x 2080 Tis was just dropped off by Santa and Rudolph (who's taking selfies  )


----------



## OGdrifter

My SECOND 2080 Ti died on me yesterday... The first one was a Gigabyte; it's been 1 month already and it's still in the RMA process. The second was a ROG Strix, incredible cooler, but the card died, which I assume is not Asus' fault. Luckily my local shop instantly replaced the Strix with a new one!! Mad props to them.

The first card died slowly over the first 2 weeks: space invader artifacts.

The second card lasted 2 weeks with no problem, then suddenly got Nvidia driver errors in Battlefield with crashes to desktop, and over the course of the day deteriorated to major artifacts in all games and benchmarks. Everything started crashing eventually. No space invader artifacts though, just color or black flashes and crashes.

Both cards died differently. I am just sharing in case others still have concerns about the 2080 Ti.

From my experience so far, especially if you are putting the card under water on your own, test the card for at least 24 hours.

My advice would be to push the card from the get-go. I would not recommend FurMark; for whatever reason it was not the best test at revealing artifacts. I would go with Kombustor for a prolonged session of a day, or OCCT for absolute max load, and next I would go with Witcher 3 or Shadow of the Tomb Raider gameplay (demo free on Steam) on absolute max settings. I would also advise setting the power target to the max with a good OC on. Both cards passed the Time Spy Extreme stress test multiple times with 97%+, but once the crashes started, they would crash Time Spy as well.

So it would appear there is a component that is slowly decaying over time.

Why test hard? Because from my experience, once the card started to deteriorate, reverting back to stock made no difference, so it is better to know ASAP whether your card is faulty. Under normal circumstances the above tests cannot damage anything, so taking it slow would just delay the sentencing.

My $0.02
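If you want a paper trail while burning a new card in, something like the sketch below can log temperature, SM clock and power draw once a second alongside whatever stress test you run. It is only a sketch: it assumes `nvidia-smi` is on your PATH, and the query fields are standard `--query-gpu` options, but check `nvidia-smi --help-query-gpu` on your driver version.

```python
import subprocess
import time

# Fields requested from nvidia-smi; all three are standard --query-gpu options.
QUERY = "temperature.gpu,clocks.sm,power.draw"

def parse_smi_line(line):
    """Parse one CSV line from `nvidia-smi --query-gpu=... --format=csv,noheader,nounits`."""
    temp, clock, power = (field.strip() for field in line.split(","))
    return {"temp_c": int(temp), "sm_mhz": int(clock), "power_w": float(power)}

def burn_in_log(duration_s=3600, temp_limit_c=85):
    """Poll the GPU once a second for duration_s, flagging readings near the thermal limit."""
    end = time.time() + duration_s
    while time.time() < end:
        out = subprocess.run(
            ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        reading = parse_smi_line(out)
        flag = "  <-- near thermal limit!" if reading["temp_c"] >= temp_limit_c else ""
        print(f'{reading["temp_c"]}C  {reading["sm_mhz"]}MHz  {reading["power_w"]}W{flag}')
        time.sleep(1)
```

A sudden drop in sustained clocks at the same temperature, or artifacts appearing only above a certain power draw, shows up clearly in a log like this.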


----------



## Emmett

krizby said:


> Oh, then my mistake; it was supposedly 300A vs. 300 non-A. Sucks that the later Asus 2080 Ti Turbo batch uses the non-A chip. Sneaky, sneaky, Asus.
> 
> OP, please update the 1st page: the Asus 2080 Ti Turbo now uses the non-A chip with a 250/280W power limit.


Interesting, so they were likely supposed to be non A's from the beginning, but used A's to fill orders it seems.


----------



## xsidex

Do you guys think it's realistic for those of us with non-A cards to expect a BIOS in the future with a slightly raised power limit? We are dealing with a 112% max, so anything would be an improvement.


----------



## Frozburn

OGdrifter said:


> My SECOND 2080 Ti died on me yesterday... The first one was a Gigabyte; it's been 1 month already and it's still in the RMA process. The second was a ROG Strix, incredible cooler, but the card died, which I assume is not Asus' fault. Luckily my local shop instantly replaced the Strix with a new one!! Mad props to them.
> 
> The first card died slowly over the first 2 weeks: space invader artifacts.
> 
> The second card lasted 2 weeks with no problem, then suddenly got Nvidia driver errors in Battlefield with crashes to desktop, and over the course of the day deteriorated to major artifacts in all games and benchmarks. Everything started crashing eventually. No space invader artifacts though, just color or black flashes and crashes.
> 
> Both cards died differently. I am just sharing in case others still have concerns about the 2080 Ti.
> 
> From my experience so far, especially if you are putting the card under water on your own, test the card for at least 24 hours.
> 
> My advice would be to push the card from the get-go. I would not recommend FurMark; for whatever reason it was not the best test at revealing artifacts. I would go with Kombustor for a prolonged session of a day, or OCCT for absolute max load, and next I would go with Witcher 3 or Shadow of the Tomb Raider gameplay (demo free on Steam) on absolute max settings. I would also advise setting the power target to the max with a good OC on. Both cards passed the Time Spy Extreme stress test multiple times with 97%+, but once the crashes started, they would crash Time Spy as well.
> 
> So it would appear there is a component that is slowly decaying over time.
> 
> Why test hard? Because from my experience, once the card started to deteriorate, reverting back to stock made no difference, so it is better to know ASAP whether your card is faulty. Under normal circumstances the above tests cannot damage anything, so taking it slow would just delay the sentencing.
> 
> My $0.02


That's what happened to my card. It was dying slowly, crashing the PC with no error; then I passed Time Spy like 80 times (looped), all was good, and then it started with the artifacts (OC or not). Of all the programs I used, EVGA OC Scanner X was the best at finding errors.

Set it to fullscreen, Furry E GPU memory burner 3072, 8x AA and artifact scanner on. This thing found my artifacts almost instantly, whereas with the other programs the card would look normal for way longer (and even play BF5 for 30 minutes without any issues, wth?). Waiting for a new card now... hopefully it doesn't suck so badly (the first one died after 25 days).


----------



## Coldmud

J7SC said:


> Hello
> - GPUs are getting test mounted for tube length right now w/NVLink etc, so can't pull them apart...but later on, I'll try. In the meantime, pls have a look at the pic below which shows the fins in their CGI pic
> 
> - also below, GPUz after Superposition run with card 2. As with card one, consistently pulls 375w...highest I've seen was 378.9w / 126% TDP...all on default Bios, without any extra voltage...MSI AB slider works but seems connected only to placebo switch :upsidedwn *EDIT: update* in my next post
> 
> - no coil whine, at least so far


Why is your PL max 122%? My regular Aorus Ti goes up to 133% default bios.


----------



## J7SC

Coldmud said:


> Why is your PL max 122%? My regular Aorus Ti goes up to 133% default bios.



No idea; it's the same with both my cards, though for both, that equates to 375-380 watts anyway. Can you post the GPU-Z main screen w/ BIOS date and ID number (I presume you're running the stock BIOS)? Tx


----------



## Coldmud

J7SC said:


> No idea; it's the same with both my cards, though for both, that equates to 375-380 watts anyway. Can you post the GPU-Z main screen w/ BIOS date and ID number (I presume you're running the stock BIOS)? Tx


If it equates to 380W then it's probably no big deal; however, I do notice some stutters myself when I limit it to, say, 120%.

I downloaded the updated F3 BIOS from Gigabyte's website (https://www.gigabyte.com/nl/Graphics-Card/GV-N208TAORUS-11GC#support-dl-bios), but before that I also had 133%.

I'm returning this card; today I also received the Waterforce Xtreme, still building my loop. But if that's stuck at 122% I'm probably gonna put that F3 BIOS on it with the 133% max.
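For anyone comparing these numbers: the power-limit sliders are percentages of each BIOS's own base power limit, so the same % means different watts on different cards. A quick sanity check (the 300 W and 275 W base values below are inferred from the wattages quoted in this thread, not official figures):

```python
def slider_watts(base_tdp_w, slider_pct):
    """Absolute board power allowed for a given power-limit slider setting."""
    return base_tdp_w * slider_pct / 100.0

# ~300W base BIOS at 126% -> 378W (matches the 375-380W readings reported above)
print(slider_watts(300, 126))  # 378.0
# A 275W base BIOS at 133% -> ~366W (matches the GPU-Z max TDP quoted for F3)
print(slider_watts(275, 133))  # 365.75
```

So a 122% card with a higher base limit can still pull more total watts than a 133% card with a lower one; the absolute figure is what matters.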


----------



## Coldmud

RobMachado said:


> Hello guys,
> 
> So I've had my EVGA RTX 2080 Ti OC for 3 weeks now, and it seems that my card is failing. At the beginning I was getting crash dump errors while playing Wolfenstein II: The New Colossus; after that the game would freeze with artifacts, and I had to launch Task Manager to shut the game down.
> 
> I decided to go ahead and launch the Unigine Valley benchmark and let the GPU run for a while; suffice to say that my PC would either restart or crash back to Windows after a while. Now I am not even getting a display signal on my monitor, so yeah, it's apparently dead now :X.
> 
> Before I start the RMA process (which is going to be a huge hassle as I live outside of the US), should I try updating the card's BIOS? If so, how is it done? And will this void my warranty?
> 
> Currently running the latest Nvidia drivers; the card was never OC'ed as I was waiting for my EK Vector block to arrive. I upgraded to this card from 2 EVGA Hydro Copper GTX 1080s and never had a problem with them.
> 
> Specs:
> 
> i9 7960X @ 4.00GHz
> Corsair Dominator 32GB DDR4
> Asus Rampage VI Extreme
> Samsung Evo Pro 256GB M.2 drive
> Windows 10 Pro 64-bit.


That sucks, dude! Updating the BIOS won't void any warranty; just flash it back before you RMA it. But how are you gonna flash it with no signal? Did you try both the HDMI and DP outputs?
Definitely RMA it. Can't you just deal with the store you bought it from? I've also heard a lot of success stories from just contacting Nvidia themselves for the RMA.

Good luck!


----------



## J7SC

Coldmud said:


> If it equates to 380W then it's probably no big deal; however, I do notice some stutters myself when I limit it to, say, 120%.
> 
> I downloaded the updated F3 BIOS from Gigabyte's website (https://www.gigabyte.com/nl/Graphics-Card/GV-N208TAORUS-11GC#support-dl-bios), but before that I also had 133%.
> 
> I'm returning this card; today I also received the Waterforce Xtreme, still building my loop. But if that's stuck at 122% I'm probably gonna put that F3 BIOS on it with the 133% max.



Thanks much! The BIOS date is the same, but I have to check the ID number (the system is currently apart for 'loop-fitting'). The Xtreme editions (Waterforce and other) **might** have a different PCB, not sure. But just in case, I downloaded F3 per your link / thanks :thumb:


----------



## krizby

Emmett said:


> Interesting, so they were likely supposed to be non A's from the beginning, but used A's to fill orders it seems.


The shadiest business practice ever. The non-A chip is supposed to sell for a $1000 MSRP; my 2080 Ti Turbo cost more than the Gigabyte 2080 Ti Windforce with an A chip that I bought for my friend.


----------



## Coldmud

J7SC said:


> Thanks much! The BIOS date is the same, but I have to check the ID number (the system is currently apart for 'loop-fitting'). The Xtreme editions (Waterforce and other) **might** have a different PCB, not sure. But just in case, I downloaded F3 per your link / thanks :thumb:


I thought I heard Buildzoid's/Gamers Nexus' latest vid claiming all Aorus/Waterforce cards are identical, but he may just have been talking about the VRM situation.

Gonna have to recheck. Let me know if that works out to any better gains for your cards; I am planning on doing the exact same thing if limited to 122%.


----------



## J7SC

Coldmud said:


> I thought I heard Buildzoid's/Gamers Nexus' latest vid claiming all Aorus/Waterforce cards are identical, but he may just have been talking about the VRM situation.
> 
> Gonna have to recheck. Let me know if that works out to any better gains for your cards; I am planning on doing the exact same thing if limited to 122%.



It looks like they have different PCBs, judging from the few pics I could find of your PCB (the Aorus site has an excerpt, and the VRM didn't have those round things Buildzoid was going on about, but square ones instead). Also, different BIOS IDs, and no BIOS update for the Xtreme (water or air) at all yet.

I might try your F3 anyway (I can always flash back, especially w/ a 2nd card in the system) once I finish the build, but at the end of the day, the %TDP doesn't matter to me as much as the total (378W+) TDP. Of course, if the 133% pushes the total higher...


----------



## kot0005

J7SC said:


> kot0005 said:
> 
> 
> 
> Ah okay, but if you ever decide to swap out the thermal pads with Fujipoly ones, take some pics of the block's insides, the PCB thermal paste contact, and the thermal pads they use. I am curious, because most factory installations suck.
> 
> 
> 
> 
> 
> 
> Will do, I'm kind of curious myself re. the block, fins and inside view etc. The Aorus factory block kind of looks like it could be a Bitspower one. But today, I am looking where to hide the quick-disconnects for the GPUs and CPU on the back - or instead make them a shiny part of the front. Sooner or later, I'm going to try hard tubes, but not yet / got to get that 'EisKoffer' first
> 
> *EDIT @kot0005 *...final piece for the build w/ the 2x 2080 Tis was just dropped off by Santa and Rudolph (who's taking selfies)

Nice dude post some results when you are done.

My 2080ti might be failing too.. idk i get some wierd artifacts on black ui elements. Even corsair icue.

Hopefully its a glitch.


----------



## Coldmud

double post


----------



## Coldmud

J7SC said:


> It looks like they have different PCBs, judging from the few pics I could find of your PCB (the Aorus site has an excerpt, and the VRM didn't have those round things Buildzoid was going on about, but square ones instead). Also, different BIOS IDs, and no BIOS update for the Xtreme (water or air) at all yet.
> 
> I might try your F3 anyway (I can always flash back, especially w/ a 2nd card in the system) once I finish the build, but at the end of the day, the %TDP doesn't matter to me as much as the total (378W+) TDP. Of course, if the 133% pushes the total higher...


I'm finishing the loop tomorrow if all goes well. I will test total TDP with the new Waterforce on its original WF BIOS and also with the regular Aorus F3 BIOS, and post TDP results. The max TDP the F3 will do according to GPU-Z is 366W, though.


----------



## VPII

Frozburn said:


> That's what happened to my card. It was dying slowly, crashing the PC with no error; then I passed Time Spy like 80 times (looped), all was good, and then it started with the artifacts (OC or not). Of all the programs I used, EVGA OC Scanner X was the best at finding errors.
> 
> Set it to fullscreen, Furry E GPU memory burner 3072, 8x AA and artifact scanner on. This thing found my artifacts almost instantly, whereas with the other programs the card would look normal for way longer (and even play BF5 for 30 minutes without any issues, wth?). Waiting for a new card now... hopefully it doesn't suck so badly (the first one died after 25 days).


I've set up EVGA OC Scanner the way you described... I stopped it a little shy of 10 minutes, as the GPU temp was hovering around 86°C with the fan speed ramping up like crazy, seemingly even beyond 100%. But no issues were found. While testing, the Galax RTX 2080 Ti OC core was set to +105 MHz and memory to +500 MHz. I have not had any issues while playing games or any crashes, except when system memory is set to 3600 MHz; running a 2700X, that is a little difficult to get stable.


----------



## J7SC

kot0005 said:


> Nice dude post some results when you are done.
> 
> My 2080ti might be failing too.. idk i get some wierd artifacts on black ui elements. Even corsair icue.
> 
> Hopefully its a glitch.


Oh no, I hope your card is ok ...I guess cold boot / max 100% TDP testing coming your way..



Coldmud said:


> I'm finishing the loop tomorrow if all goes well. I will test total TDP with the new Waterforce on its original WF BIOS and also with the regular Aorus F3 BIOS, and post TDP results. The max TDP the F3 will do according to GPU-Z is 366W, though.


...only half kidding if I say the difference to 378W might be the RGB (wait till you see that card boot up in all its RGB glory). When my build is done, I really should test it w/ RGB off to see if it increases Superposition etc.


----------



## krizby

Okay, so from my overclocking experience with the 2080 Ti non-A chip: the chip doesn't like going beyond 1V. At 1V or above the card will crash at the same clock offset. When I had the original cooler a +225 MHz clock offset was stable; after putting on the waterblock the card wasn't even stable with a +200 MHz offset. The reason was that with the waterblock the card was less power restricted and was boosting beyond 1V.

My recommendation for anyone who is experiencing crashes (not space-invader artifacts; your card is probably gone by that point) is to just undervolt the card to 1V. Maybe high voltage is what is killing the 2080 Ti; from what I remember, an Nvidia engineer said running the 1080 Ti at 1.093V could reduce the lifespan of the chip to a month. It didn't happen to my 1080 Ti, though; the 2080 Ti is probably more susceptible to high voltage. My 2080 Ti non-A is stable at 2010 MHz / 1V and +600 MHz VRAM.
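For anyone trying this in MSI Afterburner, the usual recipe is to flatten the voltage/frequency curve above the target voltage, so the card never requests more than, say, 1.000 V. A minimal sketch of that transform; the curve points in the comments and test are made-up illustrative numbers, not from any real card:

```python
def flatten_vf_curve(curve_mv_mhz, cap_mv=1000):
    """Return a copy of an Afterburner-style V/F curve where every point above
    cap_mv is clamped to the frequency of the highest point at or below cap_mv.
    This effectively locks the card at (cap_mv, that frequency)."""
    capped_freq = max(mhz for mv, mhz in curve_mv_mhz if mv <= cap_mv)
    return [(mv, mhz if mv <= cap_mv else capped_freq) for mv, mhz in curve_mv_mhz]

# Example with illustrative points: everything above 1000 mV gets pinned
# to the 1000 mV frequency, so the boost algorithm never raises voltage further.
curve = [(900, 1900), (950, 1950), (1000, 2010), (1043, 2070), (1093, 2100)]
print(flatten_vf_curve(curve))
```

In Afterburner itself you'd do the equivalent by hand: drag the 1000 mV point up to your stable frequency, flatten everything to its right, and apply.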


----------



## pewpewlazer

Caved and bought an EVGA 2080 Ti XC ULTRA SUPER DUPER GAMING or something the other day with the 10%/$75 off eBay deal. Spent more on a video card than I did rent this month. What a world we live in.

Anyway, is the stock VRM/ram plate sufficient with a GPU only waterblock? Saw a few people mention in this thread that they were using "universal" blocks, but they seemed to all be on non-reference PCBs, and one guy had a 120mm fan ziptied over his card. Do I really need one of these obscenely priced "full cover" blocks or can I get away with a normal block and some zip ties?


----------



## Asmodian

pewpewlazer said:


> Anyway, is the stock VRM/ram plate sufficient with a GPU only waterblock? Saw a few people mention in this thread that they were using "universal" blocks, but they seemed to all be on non-reference PCBs, and one guy had a 120mm fan ziptied over his card. Do I really need one of these obscenely priced "full cover" blocks or can I get away with a normal block and some zip ties?


If you are flashing a high-watt BIOS, I would suggest cooling the VRMs with something, either a good bit of direct air or a full-cover block. It would be great to have some comparisons of memory OCs vs. temperature, but in general memory likes to be cold too.


----------



## kx11

this dude Mad Tse is still no.1 in 3dmark HOF single GPU race using GALAX HOF under LN2




https://www.3dmark.com/hall-of-fame-2/timespy+3dmark+score+extreme+preset/version+1.0/1+gpu




KiNgPiN got schooled this time


----------



## Jpmboy

kx11 said:


> this dude Mad Tse is still no.1 in 3dmark HOF single GPU race using GALAX HOF under LN2
> https://www.3dmark.com/hall-of-fame-2/timespy+3dmark+score+extreme+preset/version+1.0/1+gpu
> KiNgPiN got schooled this time


neither of which is relevant to anything. Well, except if you plan on gaming while in orbit... outside. cryogenic benchmark results have zero real-world implications.


----------



## profundido

So, a quick analysis of the market options shows that if you want a card for custom watercooling that already comes with a factory-tested waterblock, complete with warranty support, and in addition you would like 19 power phases to squeeze the lemon, this Gigabyte Aorus is the clear winner. In fact, its 4-year warranty period after online registration is exceptional.

After scanning the European retailers for 10 days now and seeing nothing but "delivery date unknown" and "preorder", plus a few phone calls and emails confirming that cards are expected no sooner than the end of January, with maybe a very few at the end of December... I've come to realize just how rare this product really is.

You can imagine my disbelief when I refreshed my browser again (daily check) and saw that a well-known hardware shop in my country listed it as "in stock". I immediately called them to ask if it was an error, but they confirmed they had received 1 (and only 1!!!) card and it was physically present in their warehouse. Meanwhile, all the rest of the shops across Europe remain "no stock"...

With the paw of a hungry wolf I snatch-ordered it in the blink of an eye. The parcel just arrived at work, and in a few hours I'll put it into my system and do my first tests.

What final results and benchmarks are you guys who also own this card currently getting?

I just tried to download that Aorus engine (version 1.50) but it just gives me XML code stating that the file is not present


@Coldmud
@kot0005
@J7SC
@ElGreco
@Aurosonic


Also, I'm starting to deduce from all the past 466 pages here that running other BIOSes and pushing the power/voltage limit is exactly what causes cards to break permanently, with artifacts as the result. So my current intention is to just proceed with its factory BIOS and test its limits, not flash anything else. If the factory successfully stress-tested this card under these conditions, it shouldn't break if I stress-test it under the same conditions...


----------



## Jpmboy

RTX Titan is live this morning...


----------



## DooRules

Jpmboy said:


> RTX Titan is live this morning...



Thx for the heads up


----------



## profundido

This notification just came in for EVGA online shop Europe:


Dear Valued Customer,

This email is being sent in response to your request to be notified with availability updates for product 11G-P4-2487-KR and this product was marked as being in stock on 18-12-2018 at 06:23 PST.

Please keep in mind that this product will only be available while supplies last and it is recommended to place your order as soon as possible to ensure availability.

We appreciate your patience and if you should need to request another availability notification, please click the link below and then click the Auto-Notify button to submit a new request.

Direct Product Link:11G-P4-2487-KR

https://eu.evga.com/Products/Product.aspx?pn=11G-P4-2487-KR


----------



## DooRules

I am a bit hesitant to order a 2080ti block for the Titan. Have to look around and see if anyone has any plans for one.


----------



## Jpmboy

DooRules said:


> I am a bit hesitant to order a 2080ti block for the Titan. Have to look around and see if anyone has any plans for one.


bitspower


----------



## dante`afk

Curious to see first results. Not ordering yet, since there is no waterblock available and a stock Titan will be slower than a 2080 Ti anyway.

Heck, who even knows if it can beat a 380W-BIOS 2080 Ti?


----------



## Shawnb99

Mine may have just crapped out on me.
After playing some Madden last night, the screen went pure black and froze on me; upon reboot Windows loads into a black screen.
I just formatted and did a clean install, and upon installing the GPU drivers got the black screen of death.

Trying a clean install again hopefully this time it works.




Sent from my iPhone using Tapatalk


----------



## J7SC

Jpmboy said:


> neither of which is relevant to anything. Well, except if you plan on gaming while in orbit... outside. cryogenic benchmark results have zero real-world implications.



Well said - it's like trying to go to the mall in a top-fuel dragster / the first quarter mile of your trip will go by real fast, but then... there will be 'issues'.

Still, in some of the OCN tables, folks post LN2 results (including just a few days ago), which can confuse the unsuspecting. There used to be an unwritten rule that if you subbed LN2, for example for HWBot, you would not sub it in the broader forums. I had real fun with LN2, DICE, phase change and chillers and such before, but I discovered that I like building unusual / modded systems - my first love - much more, and just in case I relapse, the latest system I'm building has quick-disconnects



profundido said:


> So, a quick analysis of the market options shows that if you want a card for custom watercooling that already comes with a factory-tested waterblock, complete with warranty support, and in addition you would like 19 power phases to squeeze the lemon, this Gigabyte Aorus is the clear winner. In fact, its 4-year warranty period after online registration is exceptional.
> 
> (edit)
> 
> I just tried to download that Aorus engine (version 1.50) but it just gives me XML code stating that the file is not present
> 
> @*Coldmud*
> @*kot0005*
> @*J7SC*
> @*ElGreco*
> @*Aurosonic*
> 
> Also, I'm starting to deduce from all the past 466 pages here that running other BIOSes and pushing the power/voltage limit is exactly what causes cards to break permanently, with artifacts as the result. So my current intention is to just proceed with its factory BIOS and test its limits, not flash anything else. If the factory successfully stress-tested this card under these conditions, it shouldn't break if I stress-test it under the same conditions...



Congrats on the Aorus! The four-year warranty is very appealing! And your comment on loading other custom-PCB GPU BIOSes on different cards also has merit, though to some degree it comes down to case-by-case, such as how different the PCBs are, IMO.

My Threadripper system with the two Aorus Xtreme 2080 Ti WBs is currently being built (leak testing tonight, I hope), but I recall posting a Superposition run a week to 10 days ago with the first card when it was in an older Z170/6700K/dual-channel test setup, which admittedly hampers it a bit.

The 'Aorus engine' link is indeed screwy right now. I'll try to zip up a copy as an attachment if that doesn't get fixed, but I can't do so now given the disassembled state (M.2s already transferred).


Good luck with your install :thumb:


----------



## Shawnb99

Black screen of death upon installing NVIDIA drivers again.
Works up until that point.

Is this a RMA or is it fixable?


Sent from my iPhone using Tapatalk


----------



## animeowns

dante`afk said:


> curious to see first results. not ordering yet since there is no waterblock available, and a stock titan will be slower anyway than a 2080ti.
> 
> heck, who even knows if it can beat a 380w bios 2080ti?



titan rtx will be faster


----------



## J7SC

Shawnb99 said:


> Black screen of death upon installing NVIDIA drivers again.
> Works up until that point.
> 
> Is this a RMA or is it fixable?
> 
> 
> Sent from my iPhone using Tapatalk




Have you tried uninstalling not only the NVIDIA drivers but the card itself in Control Panel / Device Manager completely? That may not fix it, but if it boots w/o artifacts etc. up until Windows, it **may**. To get even more serious, eliminate all GPU entries in your Windows registry... or better still, if you have another known-functional NVIDIA GPU of recent vintage laying around (or can borrow one), plug that in to see if Windows boots.


----------



## Shawnb99

J7SC said:


> Have you tried uninstalling not only the NVidia drivers but the card itself in 'control panel / device manager' completely ? That may not fix it but if it boots w/o artifacts etc up until windows, it **may**. To get even more serious, eliminate all GPU entries in your Windows registry...or better still, if you have another known functional NVidia GPU card of recent vintage laying around (or can borrow), plug that in to see if windows boots.




Since it first happened I did a clean install twice now. Was able to boot into safe mode and install drivers fine but upon rebooting it happened again.
Can boot into safe mode fine, just craps out in normal windows.
Going to try a clean install with my old card next 


Old card installs drivers with no issue. 


Sent from my iPhone using Tapatalk


----------



## J7SC

Shawnb99 said:


> Since it first happened I did a clean install twice now. Was able to boot into safe mode and install drivers fine but upon rebooting it happened again.
> Can boot into safe mode fine, just craps out in normal windows.
> Going to try a clean install with my old card next
> 
> 
> Old card installs drivers with no issue.
> 
> Sent from my iPhone using Tapatalk



Looks like RMA time. If you are on a custom BIOS, you should revert to stock, just in case.


----------



## Shawnb99

J7SC said:


> Looks like RMA time. If you are on a custom Bios, you should revert back to stock, just in case




Yeah, I thought as much. Glad I didn't put it under water yet. Was running the stock BIOS, no overclock as well. Still dead in under 2 weeks.
Now to see how long it takes to get a new one, plus how much it'll cost me to ship it.




Sent from my iPhone using Tapatalk


----------



## csaris

I have in my hands a 2080ti strix advance.

Very nice card, but the Advanced version has only a 300 W power limit.

I tried to flash the Strix OC BIOS, but I get a message that something is protected, and nothing changed.

Any suggestions?
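For anyone else hitting that "protected" message, the usual sequence with NVIDIA's nvflash tool looks roughly like this - a sketch only, assuming a recent nvflash build; the BIOS filename is a placeholder, cross-flashing is at your own risk, and non-A chips reject A-chip BIOSes regardless:

```shell
nvflash --save backup.rom   # always back up the current BIOS first
nvflash --protectoff        # lift the EEPROM write protection
                            # (the "something is protected" message)
nvflash -6 strix-oc.rom     # flash; -6 overrides the PCI subsystem
                            # ID mismatch between card models
```

A reboot afterwards, plus a clean driver reinstall, is commonly recommended before judging the results.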


----------



## mistershan

Shawnb99 said:


> Yeah I thought as much.  Glad I didn’t put it under water yet. Was running stock bios, no overclock as well. Still dead in under 2 weeks
> Now to see how long to get a new one, plus see how much it’ll cost me to ship it.
> 
> 
> 
> 
> Sent from my iPhone using Tapatalk


That sucks man. I got my card yesterday at Micro center and paid the extra 150 for protection. It's just a short subway ride away for me and I can just swap it out if that happens.

What model do you have?


----------



## mistershan

I got my 2080ti yesterday. It doesn't look like my CPU is bottlenecking. It's nearly doubled my frames in a lot of games compared to my 2x 1080 SLI. Also, I noticed it's smoother. Was that the frame skipping I was hearing about in SLI? 

However, in some games I am getting really bad performance. In Far Cry 5 I am getting a decent frame rate, but it's showing 1 percent GPU usage. Far Cry 4, same thing, but I am getting like 20-30 frames, which is much worse than before. 

Are there some games that need to be updated for the 2080ti? 


i7 5820k 
EVGA 2080ti XC 
64gb Crucial DDR 4 2400 DIMM 
Asus Rampage V Extreme ATX2011E 
Crucial SSD 960 gb M500 
Corsair AX1200I Digital ATX PSU


----------



## mackanz

Has anyone flashed a 2080ti with a reference PCB (Gigabyte Windforce in my case) to a higher power-limit BIOS? My card comes with a whopping 112% and of course hits the power limit pretty fast. I think there is more headroom in this card.


----------



## Shawnb99

mistershan said:


> That sucks man. I got my card yesterday at Micro center and paid the extra 150 for protection. It's just a short subway ride away for me and I can just swap it out if that happens.
> 
> 
> 
> What model do you have?




Got an EVGA 2080 Ti XC Ultra. Had it for a whole 14 days.




Sent from my iPhone using Tapatalk


----------



## mistershan

Shawnb99 said:


> Got a EVGA 2080 TI XC Ultra. Had it for a whole 14 days.
> 
> 
> 
> 
> Sent from my iPhone using Tapatalk


Damn... I have the XC, not the Ultra... but damn, I thought EVGA was supposed to be really good.


----------



## csaris

csaris said:


> I have in my hands a 2080ti strix advance.
> 
> Very nice card but the advance version has only 300w power limit.
> 
> I tried to flash Strix OC bios but I get a message that something is protected and nothing changed.
> 
> Any suggestions?



Just flashed it, but realized that my card was in Q mode, so now my fans always spin at idle!! No big deal.

Does anyone have the stock Q-mode BIOS from the Advanced or OC Strix version?


----------



## Shawnb99

mistershan said:


> Damn... I have the XC, not the Ultra... but damn, I thought EVGA was supposed to be really good.




Yeah, so did I. Or at least their support is said to be good; guess I'll find out for myself now.


Sent from my iPhone using Tapatalk


----------



## J7SC

@*profundido* > The Aorus utility link is fixed


----------



## krizby

mackanz said:


> Has anyone flashed a 2080ti with a reference PCB (Gigabyte Windforce in my case) to a higher power-limit BIOS? My card comes with a whopping 112% and of course hits the power limit pretty fast. I think there is more headroom in this card.


Welcome to the 2080 Ti non-A chip owners club, dude. I'm sorry to inform you that your chip can't be flashed with an A-chip BIOS :/. It seems like the lowest model from every brand is shipping with non-A chips; I have the Asus 2080 Ti Turbo and it is a non-A chip also.


----------



## sprayingmango

krizby said:


> Welcome to the 2080 Ti non-A chip owners club, dude. I'm sorry to inform you that your chip can't be flashed with an A-chip BIOS :/. It seems like the lowest model from every brand is shipping with non-A chips; I have the Asus 2080 Ti Turbo and it is a non-A chip also.


Do you specifically mean a chip that ends in A, like 300A? Both of my EVGA 2080Ti's are A chips and I just assumed they were all that way.


----------



## VPII

krizby said:


> Welcome to the 2080 Ti non-A chip owners club, dude. I'm sorry to inform you that your chip can't be flashed with an A-chip BIOS :/. It seems like the lowest model from every brand is shipping with non-A chips; I have the Asus 2080 Ti Turbo and it is a non-A chip also.


So if I get your statement correct: if you have a 2080ti with a non-A chip, it cannot be flashed with an A-chip BIOS? I was thinking of opening up my card to see what it has, but having already flashed it with the Galax GeForce RTX 2080Ti HOF BIOS, I take it my Galax RTX 2080 Ti OC is an A chip. The HOF BIOS is not that great... it added a lot of heat with not much extra performance, so my 380-watt-TDP stock BIOS will need to do for now.


----------



## Coldmud

J7SC said:


> ...only half kidding if I say the difference to 378w might be the RGB (wait till you see that card boot up in all its RGB glory). When my build is done, I really should test it w/ RGB off to see if it increases Superpostion etc


Oh my, it's a beaut all right! It's really captivating on wave color alongside my Aorus Master. Temporary setup with the tubes like this, but my god this card is a beast; it's not breaking 50 degrees, but I haven't really stressed it yet


----------



## J7SC

Coldmud said:


> Oh my, its a beaut all right! It's really captivating on wave color along side my Aorus Master. Temporary setup with the tubes like this, but my god this card is a beast, not breaking 50degrees but I haven't really stressed it yet



:thumb:


----------



## Coldmud

J7SC said:


> :thumb:


You might be on to something with the rgb adding to total TDP though, this bios reports 366w max as well. Gonna do more testing tomorrow.


----------



## J7SC

Coldmud said:


> You might be on to something with the rgb adding to total TDP though, this bios reports 366w max as well. Gonna do more testing tomorrow.



The aforementioned 'Aorus engine' overclocking software might not be as good as MSI AB overall, but apart from allowing VRAM clock offsets beyond 1000 (may or may not make sense, but nice to have the option), it automatically loads RGB control for the Aorus card, with which you can also turn the whole thing off (but it's very mesmerizing when 'on' :thumbsups ) or customize it to your heart's content.

What is your max watt reading in Unigine's Superposition 4K optimized? That's what I used for my earlier posts on watt limits, when I reached 378.9 W. Also, these Aorus respond very well to 'excess cooling': neither of my two cards has ever exceeded 47 C (and even that, rarely) with an XSPC 360/60 rad and Noctua fans ON IDLE. The current Threadripper build will have 2x 360 XSPC, and a 160x60, plus 6x 120mm 3k-rpm Gentle Typhoons on their GPU-only loop - and I found a way to quiet them down for the bedroom setup (lest I get evicted :laugher: )


----------



## krizby

sprayingmango said:


> Do you specifically mean a chip that ends in A, like 300A? Both of my EVGA 2080Ti's are A chips and I just assumed they were all that way.


Yes, 300A chips are supposedly better-binned, and manufacturers are allowed to increase the power limit, while TU102-300 non-A chips are locked to 250/280 W (112% power limit).
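The percentages map to watts straightforwardly; here's a quick sketch of the arithmetic behind the limits krizby quotes (the 250 W / 112% figures come from the posts above, the function name is mine):

```python
def power_limit_watts(tdp_w: float, slider_pct: float) -> float:
    """Absolute board power allowed at a given power-limit slider setting."""
    return tdp_w * slider_pct / 100

# A non-A reference card: 250 W TDP with the slider maxed out at 112%
print(power_limit_watts(250, 112))  # 280.0 W, the hard ceiling
```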



VPII said:


> So if I get your statement correct: if you have a 2080ti with a non-A chip, it cannot be flashed with an A-chip BIOS? I was thinking of opening up my card to see what it has, but having already flashed it with the Galax GeForce RTX 2080Ti HOF BIOS, I take it my Galax RTX 2080 Ti OC is an A chip. The HOF BIOS is not that great... it added a lot of heat with not much extra performance, so my 380-watt-TDP stock BIOS will need to do for now.


If you look closely at the freq/voltage curve, after 1 V the clock pretty much doesn't increase linearly with voltage. So I capped my ****ty non-A chip at 2010 MHz / 1.000 V and called it a day, lol.
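Krizby's cap is the usual "flatten the curve" trick in MSI Afterburner: drag the 1.000 V point to the target clock and flatten everything to the right of it. A sketch of what the flattened curve looks like (the helper and the sample points are illustrative, not Afterburner's actual data):

```python
def cap_vf_curve(points, v_cap, clk_cap):
    """Clamp every point at or above v_cap to clk_cap, leaving the
    lower-voltage part of the voltage/frequency curve untouched."""
    return [(v, clk if v < v_cap else min(clk, clk_cap)) for v, clk in points]

# illustrative (voltage, MHz) points around a 2080 Ti's boost range
curve = [(0.950, 1950), (1.000, 2010), (1.043, 2055), (1.093, 2100)]
print(cap_vf_curve(curve, 1.000, 2010))
# every point from 1.000 V upward now sits at 2010 MHz
```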


----------



## VPII

krizby said:


> Yes, 300A chips are supposedly better-binned, and manufacturers are allowed to increase the power limit, while TU102-300 non-A chips are locked to 250/280 W (112% power limit).
> 
> 
> 
> If you look closely at the freq/voltage curve, after 1 V the clock pretty much doesn't increase linearly with voltage. So I capped my ****ty non-A chip at 2010 MHz / 1.000 V and called it a day, lol.


Well, I can take my GPU all the way up to 2145 MHz, but unfortunately the heat drops it to 2040-2055 while running a benchmark.


----------



## kx11

VPII said:


> So if I get your statement correct: if you have a 2080ti with a non-A chip, it cannot be flashed with an A-chip BIOS? I was thinking of opening up my card to see what it has, but having already flashed it with the Galax GeForce RTX 2080Ti HOF BIOS, I take it my Galax RTX 2080 Ti OC is an A chip. The HOF BIOS is not that great... it added a lot of heat with not much extra performance, so my 380-watt-TDP stock BIOS will need to do for now.





You should not play around with the HOF BIOS; it's running on a fully custom voltage controller (the only GPU with one, according to Gamers Nexus), so it might harm your GPU.


----------



## profundido

My findings after initial tests last night of the Gigabyte Aorus 2080ti Xtreme Waterforce:


The card looks awesome and was installed in my computer in 5 min thanks to QDCs. Loving the RGB.



Performance-wise, so far I'm very disappointed. The stock BIOS it comes with is locked to 122% PL. First Time Spy run I got a really low score, so I flashed the Galax BIOS immediately. That raised the PL to 126%. Under max voltage (+100) and power limit I seem to have no problem running the memory at 8000 MHz (+1000), but the core won't go any higher than 2070 MHz, or anything I try crashes (Unigine Heaven, Time Spy, ...). Time Spy Extreme even crashes at 2060 MHz and forces me to go even lower to be stable. Temp under full load never exceeded 47°C. 

It's the only card on the motherboard currently, and I reset the Asus Formula X motherboard BIOS to defaults last night. It doesn't get any more basic than that, so I think I ruled out everything that could cause a bottleneck. Latest GeForce drivers, [email protected] and G.Skill [email protected]

Am I missing something, or did I really get that unlucky? I see reports here of people being completely thermally bottlenecked but able to run their GPU chip at 2145 MHz.

https://www.3dmark.com/spy/5451380
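For context on what that +1000 memory offset buys: Afterburner's readout is half the effective GDDR6 data rate, so 8000 MHz shown means 16 Gbps per pin. A quick sketch of the resulting bandwidth on the 2080 Ti's 352-bit bus (assuming that readout convention; the function is mine):

```python
def bandwidth_gb_s(ab_mem_mhz: float, bus_bits: int) -> float:
    """Memory bandwidth from an Afterburner-style memory clock readout.

    GDDR6 data rate (Gbps per pin) is twice the displayed MHz / 1000;
    multiply by the bus width in bytes to get GB/s."""
    gbps_per_pin = ab_mem_mhz * 2 / 1000
    return gbps_per_pin * bus_bits / 8

print(bandwidth_gb_s(7000, 352))  # stock 14 Gbps -> 616.0 GB/s
print(bandwidth_gb_s(8000, 352))  # +1000 offset  -> 704.0 GB/s
```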


----------



## VPII

kx11 said:


> You should not play around with the HOF BIOS; it's running on a fully custom voltage controller (the only GPU with one, according to Gamers Nexus), so it might harm your GPU.


Thank you... I am back on the normal Galax RTX 2080 Ti OC BIOS, and the 380-watt max TDP is enough for what the card can do.


----------



## uggy

Just bought a couple of Founders 2080tis from the Nvidia store and put some water blocks on them. 
A couple of questions for you experts in here. 
- is there any point of flashing a new bios in to them? And in that case, what bios should I pick?
- what is a good overclock to start with? Heaven benchmark ok to use to check if it’s a stable OC?
- msi afterburner still ok to use on the rtx cards? Or evga precision x a better choice?


----------



## VPII

uggy said:


> Just bought a couple of founders 2080ti from the Nvidia store and put on some water blocks on them.
> A couple of questions for you experts in here.
> - is there any point of flashing a new bios in to them? And in that case, what bios should I pick?
> - what is a good overclock to start with? Heaven benchmark ok to use to check if it’s a stable OC?
> - msi afterburner still ok to use on the rtx cards? Or evga precision x a better choice?


I cannot comment on which BIOS to choose, but you may be limited with FE models. As for MSI Afterburner, yes, it does the job, but you'll be limited to +1000 MHz max on memory; not that many cards can do more in any case. I prefer to run 3DMark Time Spy as it's pretty computational and stresses the card a lot, and will show if your overclock is not working. But Heaven or Superposition from Unigine will also do the trick, I'd say. Anyway, good luck and let us know what you get out of those cards.


----------



## uggy

VPII said:


> uggy said:
> 
> 
> 
> Just bought a couple of founders 2080ti from the Nvidia store and put on some water blocks on them.
> A couple of questions for you experts in here.
> - is there any point of flashing a new bios in to them? And in that case, what bios should I pick?
> - what is a good overclock to start with? Heaven benchmark ok to use to check if it’s a stable OC?
> - msi afterburner still ok to use on the rtx cards? Or evga precision x a better choice?
> 
> 
> 
> I cannot comment on which BIOS to choose, but you may be limited with FE models. As for MSI Afterburner, yes, it does the job, but you'll be limited to +1000 MHz max on memory; not that many cards can do more in any case. I prefer to run 3DMark Time Spy as it's pretty computational and stresses the card a lot, and will show if your overclock is not working. But Heaven or Superposition from Unigine will also do the trick, I'd say. Anyway, good luck and let us know what you get out of those cards.

Thanks for the reply. 
Is max voltage and power, +200 on core and +600 on memory, a good and easy starting point?


----------



## VPII

uggy said:


> Thanks for the reply.
> Is a max voltage and power, 200+ on core and 600+ on memory a good and easy starting point?


Hi there

I'll start with +105... I figured the speed bumps up in 15 MHz steps with every increase, which is why I suggest +105 on core. +600 on mem should be fine.


----------



## dante`afk

animeowns said:


> titan rtx will be faster


Please elaborate how a limited 280 W power draw RTX Titan with a boost of up to 1770 MHz will be able to beat a 380 W or even 450 W BIOS 2080ti with a boost of up to 2000, or 2100+ with OC?


----------



## uggy

Should I bother flashing the bios to another one?


----------



## Shawnb99

EVGA has accepted I need an RMA. Now I get to play the return game. I'm sure shipping won't be cheap, and who knows if the new one won't crap out as well. 

Regret buying the card now



Sent from my iPhone using Tapatalk


----------



## Coldmud

J7SC said:


> The aforementioned 'Aorus engine' overclocking software might not be as good as MSI AB overall, but apart from VRAM clock offsets beyond 1000 (may or may not make sense, but nice to have the option), it automatically loads RGB control for the Aorus card with which you can also turn the whole thing off (but it's very mesmerizing when 'on' :thumbsups ) or customize it to your heart's content.
> 
> What is your max watt reading in Unigine's Superposition 4k / optimized ? That's what I used for my earlier posts on watt limits when I reached 378.9w. Also, these Aorus respond very well to 'excess cooling'. Neither of my two cards has ever exceeded 47 C (even that, rarely) with an XSPC 360/60 rad and Noctua fans ON IDLE. The current Threadripper build will have 2x 360 XSPC, and a 160x60, plus 6x 120mm 3k rpm Gentle Typhoons on their GPU-only loop - and I found a way to quiet them down for the bedroom setup (lest I get evicted :laugher: )


Sounds like a sweet loop you're building! I don't have experience with Unigine, but I've encountered some strange behaviour atm with the stock BIOS. I'm only able to put +60 on core; it boosts from 1995 to 2040 then, and if I put more on core, Superposition crashes on startup. 

With +60 on core I only get a max TDP of 300 W! It never tops 105% PL. Now I'm contemplating whether I should clean-install all drivers, because I just slapped this card onto an existing install. Or find a BIOS which holds clocks better, because right now it won't top 0.981 V!


----------



## profundido

Coldmud said:


> Sounds like a sweet loop ur building! I don't have exp with unigine but I've encountered some strange behaviour atm with stock bios. I'm only able to put +60 on core it boost from 1995 to 2040 then, if I put more on core, superposition crashes on startup.
> 
> With +60 on core I only get max TDP of 300W! It never tops 105% PL. Now i'm contemplating if I should cleaninstall all drivers, cause I just slapped this on over existing install. Or find a bios which holds clocks better, cause right now i won't top 0.981v!


I flashed to the Galax BIOS and got higher limits and better results. Now I want to go back and forth to the stock BIOS and compare results. See my private message if you have a sec.

Looking to do more testing in the next hours and make some screenshots so I can compare to you guys


----------



## J7SC

Coldmud said:


> Sounds like a sweet loop ur building! I don't have exp with unigine but I've encountered some strange behaviour atm with stock bios. I'm only able to put +60 on core it boost from 1995 to 2040 then, if I put more on core, superposition crashes on startup.
> 
> With +60 on core I only get max TDP of 300W! It never tops 105% PL. Now i'm contemplating if I should cleaninstall all drivers, cause I just slapped this on over existing install. Or find a bios which holds clocks better, cause right now i won't top 0.981v!


Sounds like there's something strange going on :headscrat . I'm on the stock BIOS for both cards, and the slower of the two boosts past 2040++ bone stock with no MSI AB / other adjustments, then 150-160 or so on top. 

Two steps you might try... First, try the GPUz render test with MSI AB open on the stock BIOS with an initial power target of 100% and watch the voltage readout of MSI AB - is it swinging around (i.e. dropping temporarily) or stable? At what value? If it is stable, then increase the power target in 5% increments - still stable? Second, I would pick up a brand-new M.2 or SSD and do a fresh Win 10 install (thus also not screwing up your daily M.2/SSD) and see what the card boosts to.


As far as I understand you and profundido, the Galax 380 W BIOS works on the Aorus and improves things? I'm not quite there yet / leak-testing the 2nd of 3 loops now. Please note, though, that every time you change BIOS (or switch it on cards that have multiple BIOSes), the Windows registry will create a separate GPU entry. A complete 'virgin' Win install eliminates that potential source of interference. Even MSI AB can affect boost settings and idle clocks when NOT loaded, so a fresh install (what, 15 min w/ Win 10?) will give you some certainty.


----------



## Jbravo33

uggy said:


> Just bought a couple of founders 2080ti from the Nvidia store and put on some water blocks on them.
> A couple of questions for you experts in here.
> - is there any point of flashing a new bios in to them? And in that case, what bios should I pick?
> - what is a good overclock to start with? Heaven benchmark ok to use to check if it’s a stable OC?
> - msi afterburner still ok to use on the rtx cards? Or evga precision x a better choice?


I had my 2080Tis on blocks the day after I got them and flashed a few different BIOSes on them. In my opinion the BIOS mod is not worth it. You will see better clocks from holding lower temps. I ended up getting space invaders, and it was a hassle flashing back before I sent in the cards. I'm done with BIOS flashing. My 2 cents. 



dante`afk said:


> Please elaborate how a limited 280w power draw rtx titan with a boost of up to 1770mhz will be able to beat a 380w or even 450w bios 2080ti with a boost of up to 2000, with oc 2100+ card?


Easy. BIOS flashing is overrated. If you're looking to squeeze out 100 points on 3DMark, sure, BIOS flash away. But holding lower temps will be a better benefit than flashing a BIOS. In reality, what is it holding, 10-30 MHz higher clocks while consuming way more power and getting hotter? What does that equate to in gaming? Nothing, really. I've flashed 1080Tis and 2080Tis, and for me I'm good and not wasting any more time doing so. I only overclock when benching. 

Also, I'd assume half the people buying a Titan RTX (on this forum) will be power modding. So just like the Xp, which when modded was easily 15% faster than a 1080Ti that was flashed or shunted, I'm guessing this will be about the same.


----------



## Coldmud

J7SC said:


> Sounds like there's something strange going on :headscrat . I'm on stock Bios for both cards, and the slower of the two boosts past 2040++ with bone stock no MSI AB / other adjustments, then 150-160 or so on top.
> 
> Two steps you might try...First, try GPUz render test with MSI AB open on stock Bios with initial power target of '100%' and watch the voltage readout of MSI AB - is it swinging around (ie dropping temporarily) or stable ? At what value ? If it is stable, then increase the power target in 5 MHz increments - still stable ? Second, I would pick up a brand new M2 or SSD and do a fresh Win 10 install (and thus also not screwing up your daily M.2/SSD) and see what the card boosts to.
> 
> 
> As far as I understand you and Profundido, the Galax 380w Bios works on the Aourus and improves things ? I'm not quite there yet / leak-testing the 2nd of 3 loops now. Please note though that every time you change Bios (or switch it on cards that have multiple Bios), Windows registry will create a separate GPU entry. A complete 'virgin' Win install eliminates that potential source of interference. Even MSI AB can affect boost settings and idle clocks when NOT loaded, so fresh install (what 15 min w/ win 10 ?) will give you some certainty.





profundido said:


> I flashed to galax and got higher limits and better results. Now I wanna go back and forth to stock bios and compare results. See my private message if you have a sec.
> 
> Looking to do more testing in the next hours and make some screenshots so I can compare to you guys


My fault; I reinstalled the NV drivers and now I'm getting slightly better results. This install is pretty fresh on the M.2, not even a month old, but I just don't get this BIOS. I'm not going above 45 C, however with +125 on the clock I'm all over the place, from 1980-2085 MHz. 
It just doesn't hold it. Any more on the core is not stable in games, and I'm not seeing anything over 1.05 V. In Superposition it's not stable even at +100 core, which yields 2040-2055 MHz clocks and max 1.043 V, but I'm not topping 300 W TDP.

Okay, the GPU-Z render test with stock settings and 100% PL yields a stable 1.050 V. If I up the clocks and voltage slider I can indeed get a max of 1.093 V, also stable, not dipping; however I'm not seeing anything close to this in games or benchmarks.
And if I put the core too high it crashes before even seeing 1.05 V+, so what to do? I might just slap on the 366 W max / 133% PL F3 BIOS from the regular Aorus at this point.

edit: hmm, I guess a 2nd install of drivers fixed more things on my end. I'm stable on clocks now, and stock settings give me about 2020 MHz; doing more testing now


----------



## acmilangr

Could someone recommend me a BIOS that I can safely install on my Asus 2080ti Dual?


----------



## Shawnb99

Since I have to return my EVGA XC Ultra for being defective, I'm debating whether I should replace it, or return it and get an FTW model; that would mean needing a new water block, but I might have better luck with that model.


Sent from my iPhone using Tapatalk


----------



## J7SC

Coldmud said:


> My fault, reinstalled nv drivers and now im getting slightly better results. This install is pretty fresh on m2 not even a month old, but I just don't get this bios. Im not going above 45c however with +125 on the clock i'm all over the place from 1980-2085mhz.
> (cut)
> 
> edit: hmm I guess a 2nd install of drivers fixed more things on my end, I'm stable on clocks now and stock settings give me about 2020mhz, doing more testing now



Well, some progress. But as you came from a regular (non-Xtreme) Aorus 2080 Ti, Win 10 and/or the Nvidia drivers might still confuse some aspects of the cards. What is worse, Win 10's automatic 'hardware found' steps after boot-up, when you had uninstalled the card before, can beat you to your manual re-install. I would still do a separate, barebones, virgin Win 10 install with that new card.


----------



## ElGreco

Hello everybody,
A few posts ago I mentioned I was interested in buying the GIGABYTE AORUS Xtreme waterblock version.
https://www.gigabyte.com/Graphics-Card/GV-N208TAORUSX-WB-11GC#kf

Since it's still impossible to find it anywhere in Europe, I was thinking of buying the ASUS STRIX OC version,
https://www.asus.com/Graphics-Cards/ROG-STRIX-RTX2080TI-O11G-GAMING/

...because:
A. Buildzoid seems to prefer this card due to the dual BIOS and good VRMs 
B. It's the only non-FE card with a custom waterblock available (so I will have both the fan cover and the waterblock)

My only concern would be the lower default clocks compared to the Aorus Xtreme WF WB, plus the Asus is a 2.7-slot card, which I may have issues installing in a horizontal setup with an extender.

What would be your opinion on my concerns, plus any other comments you might have about this...

Thank you all!


----------



## Section31

Despite being an RTX 2080 Ti owner myself, I feel the RTX lineup's GPUs are all just ticking time bombs (random death after an unknown period of use/days). I am hoping my 2080 Ti can survive till early 2020 (it should, as I just use my PC for web surfing nowadays). I know I'm a way overkill PC user, but I enjoy the experience of building water-cooled PCs (especially in CaseLabs cases) 

For myself, I regret waiting for the RTX announcement. I was itching to order a CaseLabs SMA8-A in Feb this year and decided to wait out all the product launches of the year. In retrospect, none of us would have guessed CaseLabs would go under, but I should have ordered the SMA8-A, as I would have made the cutoff date before CaseLabs' closure.

I have decided I'm waiting for Intel 7nm/10nm CPUs and Nvidia 7nm gen-2 GPUs. The tech will be at least a big jump down in heat and power requirements, with some performance gains, but not a lot. If you must buy, it's better to buy something that doesn't cost a lot and can provide satisfactory performance (but not the top), yet does not affect your future upgrade funds for 7nm/10nm. The other consideration is the size of the cards: I have found the Founders cards to be the best size-wise, with the MSI cards being the worst (super big PCBs that are too tall and too long, which makes watercooling loops harder to do and less pretty). In the case of horizontal-mobo cases, putting a radiator on the top case mounts may cause clearance issues.


----------



## Falkentyne

Section31 said:


> Despite being an RTX 2080 Ti owner myself, I feel the RTX lineup's GPUs are all just ticking time bombs (random death after an unknown period of use/days). I am hoping my 2080 Ti can survive till early 2020 (it should, as I just use my PC for web surfing nowadays). I know I'm a way overkill PC user, but I enjoy the experience of building water-cooled PCs (especially in CaseLabs cases)
> 
> For myself, I regret waiting for the RTX announcement. I was itching to order a CaseLabs SMA8-A in Feb this year and decided to wait out all the product launches of the year. In retrospect, none of us would have guessed CaseLabs would go under, but I should have ordered the SMA8-A, as I would have made the cutoff date before CaseLabs' closure.
> 
> I have decided I'm waiting for Intel 7nm/10nm CPUs and Nvidia 7nm gen-2 GPUs. The tech will be at least a big jump down in heat and power requirements, with some performance gains, but not a lot. If you must buy, it's better to buy something that doesn't cost a lot and can provide satisfactory performance (but not the top), yet does not affect your future upgrade funds for 7nm/10nm. The other consideration is the size of the cards: I have found the Founders cards to be the best size-wise, with the MSI cards being the worst (super big PCBs that are too tall and too long, which makes watercooling loops harder to do and less pretty). In the case of horizontal-mobo cases, putting a radiator on the top case mounts may cause clearance issues.


Has anyone with a Galax 2080 Ti HOF had their card die?
I haven't seen anyone report that on that exact card. I'm guessing it's because of the custom VRMs, which stop the space-invaders crash from happening?


----------



## J7SC

The real problem seems to be the lack of useful info - is it the VRAM, the VRM or the GPU itself that is mostly responsible for the failures? Is it more likely on FE-type PCBs (which is what I read elsewhere), or custom PCBs? Does it happen more often with cards that have been flashed with another BIOS, or not? 

It seems that NVidia has instructed its vendors to be 'generous' with RMA policies, but it is still a hassle to return a card for another one, not to mention the high price users such as you and I paid for this 'flagship' equipment.

Adding to the confusion, large retailers with some decent volume numbers (as far as that goes with the still hard-to-get 2080 Tis) have suggested that the failure rates for the 2080 Ti are within the norm (typically 3% - 5%). Yet reading this thread and elsewhere, there clearly are some above-average issues reported. IMO, it is the lack of real information from NVidia and vendors that doesn't help, as rumors and sometimes half-baked facts are substituted. I also found the RTX launch to be initially delayed (were there early signs of manufacturing or quality problems?), then botched and rushed, perhaps to deal with the collapse of other GPU segments as mining deflated.

All I got out of the information I gathered in various threads and YouTube analyses (e.g. Gamers Nexus) before I bought was that I prefer the non-FE PCB, that VRAM is more often than not an issue in failures, and that good cooling (preferably water-cooling, including the VRAM) is a good step to take, though no guarantee. When I finally made my purchase decision, I went for the vendor with the best build quality (> just in my own experience / I own almost 30 GPUs of various gens), and said vendor also offered a 4-year warranty. But the uncertainty remains, more so than with other equipment in a similar price bracket.


----------



## boli

profundido said:


> [snip]
> 
> Under max voltage (+100) and power limit I seem to have no problem running the memory at 8000 MHz (+1000), but the core won't go any higher than 2070 MHz, or anything I try crashes (Unigine Heaven, Time Spy, ...). Time Spy Extreme even crashes at 2060 MHz and forces me to go even lower to be stable. Temp under full load never exceeded 47°C.
> 
> [snip]
> 
> 
> 
> Am I missing something, or did I really get that unlucky? I see reports here of people being completely thermally bottlenecked but able to run their GPU chip at 2145 MHz.



If that is unlucky, let's share it. My KFA2 OC runs stable at +120/+800; at similar temps that's also 2070 MHz. Some benches/games can go higher on core or memory, but this was what worked for everything.


----------



## J7SC

> ...
> Under max voltage (+100) and power limit I seem to have no problem running the memory at 8000 MHz (+1000), but the core won't go any higher than 2070 MHz, or anything I try crashes (Unigine Heaven, Time Spy, ...). Time Spy Extreme even crashes at 2060 MHz and forces me to go even lower to be stable. Temp under full load never exceeded 47°C.
> ...
> Am I missing something, or did I really get that unlucky? I see reports here of people being completely thermally bottlenecked but able to run their GPU chip at 2145 MHz.


*@ Profundido*

What kind of PSU are you running, with what type of CPU and OC? I ask because I once chased down a crash problem with a 7-series KPE on EVBot which would crash at certain peak loads in 3D FSE... it turned out to be a weak rail in the PSU instead. Just on the off-chance that it might be an issue, maybe borrow / try out a PSU with at least 200 W of headroom? Also, I tested and posted earlier on the VRAM-only overclock (+1000) and its watt-use increase in the GPU-Z render test... it was only about 6-8 extra watts of difference.


*EDIT -* checked HWBot for average GPU overclocks of 2080 TIs:

Air
*2014*

Water
*2070*

LN2
*2472*

*First comparison of Titan RTX vs 2080 Ti *(mostly gaming benchmarks, and keeping in mind 'early days' for drivers etc)

*https://www.youtube.com/watch?v=0L595AHrhWE*


----------



## Coldmud

I'm having some disappointing results as well. With the 380W BIOS, in most cases going above 2070 MHz is unstable territory, including vertical black artifact lines just like you would expect from too high a memory OC in Witcher 3. Memory was kept stock. With the original BIOS it craps out 15-20 MHz sooner.

Memory can go to +1000, but going over +700 yields little gain.

However, I can't seem to get 1.093v in-game whatever I do.
It's stuck at 1.05v for all BIOSes I've tried so far. If I run the GPU-Z render test it does go to 1.093v and holds it, just not in-game, and I'm sure it's not temp-limited.

Sometimes when I add to the core above 2070 MHz it blips to 1.063v for a moment and then crashes. Am I also missing something? I need 1.093v to be stable above 2070 MHz, yet it won't give me anything over 1.05v before crashing.
This GPU Boost 3.0 is so disappointing.

Are there any special BIOSes out there that hold clocks and volts like they did in the old Titan days?

I did DDU with a clean install after the flash; that should be enough, I'm sure.


----------



## VPII

Coldmud said:


> I'm having some disappointing results as well. With the 380W BIOS, in most cases going above 2070 MHz is unstable territory, including vertical black artifact lines just like you would expect from too high a memory OC in Witcher 3. Memory was kept stock. With the original BIOS it craps out 15-20 MHz sooner.
> 
> Memory can go to +1000, but going over +700 yields little gain.
> 
> However, I can't seem to get 1.093v in-game whatever I do.
> It's stuck at 1.05v for all BIOSes I've tried so far. If I run the GPU-Z render test it does go to 1.093v and holds it, just not in-game, and I'm sure it's not temp-limited.
> 
> Sometimes when I add to the core above 2070 MHz it blips to 1.063v for a moment and then crashes. Am I also missing something? I need 1.093v to be stable above 2070 MHz, yet it won't give me anything over 1.05v before crashing.
> This GPU Boost 3.0 is so disappointing.
> 
> Are there any special BIOSes out there that hold clocks and volts like they did in the old Titan days?
> 
> I did DDU with a clean install after the flash; that should be enough, I'm sure.


What are you checking the core voltage with? My reason for asking is that I'm checking mine in the log file generated by GPU-Z. You may also want to check the card's power draw, as I've seen that with increased core voltage it goes a little over 380 W. Interestingly, I found that with my max overclock of +180 core I get 2130 to 2145 MHz when the benchmark starts, but it drops very quickly and progressively down to 2025, 2040 and, if I'm lucky, 2055. I did however find that it works better if I do not increase the core voltage slider in MSI Afterburner, which means I'll sit with 1.05v max on the core (and TDP goes 372 W max), but it is sufficient for the clocks I'm running.

At present I need better cooling for the card. My system is an open build on a Lian Li PC-T60A test bench. I absolutely cannot stand an enclosed case.
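Checking peak voltage and power from a GPU-Z sensor log, as described above, is easy to script. A minimal sketch below; the column names (`"VDDC [V]"`, `"Board Power Draw [W]"`) are assumptions that vary by GPU-Z version and card, so check the header row of your own log first.

```python
# Hedged sketch: scan a GPU-Z sensor log (comma-separated with a header row)
# for peak readings. Column names are assumptions -- verify against your log.
import csv

def peak_readings(path, columns=("VDDC [V]", "Board Power Draw [W]")):
    """Return the maximum value seen for each requested sensor column."""
    peaks = {c: 0.0 for c in columns}
    with open(path, newline="") as f:
        for row in csv.DictReader(f, skipinitialspace=True):
            for c in columns:
                try:
                    peaks[c] = max(peaks[c], float(row.get(c, "").strip()))
                except ValueError:
                    pass  # skip blank or non-numeric cells
    return peaks
```

Running it over a logged benchmark session shows at a glance whether the card ever actually reached 1.093v or stayed pinned at 1.05v.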


----------



## Coldmud

VPII said:


> What are you checking the core voltage with? My reason for asking is that I'm checking mine in the log file generated by GPU-Z. You may also want to check the card's power draw, as I've seen that with increased core voltage it goes a little over 380 W. Interestingly, I found that with my max overclock of +180 core I get 2130 to 2145 MHz when the benchmark starts, but it drops very quickly and progressively down to 2025, 2040 and, if I'm lucky, 2055. I did however find that it works better if I do not increase the core voltage slider in MSI Afterburner, which means I'll sit with 1.05v max on the core (and TDP goes 372 W max), but it is sufficient for the clocks I'm running.
> 
> At present I need better cooling for the card. My system is an open build on a Lian Li PC-T60A test bench. I absolutely cannot stand an enclosed case.


Mostly just with MSI Afterburner; I checked with GPU-Z and it's also stuck at 1.05v. I'm under water on an open bench too, so I'm not sure about this magical 1.05v lock. I'm only dropping at most one power stage after I apply the core offset, and drawing up to 380 W according to GPU-Z as well.

EDIT: Well, with a custom voltage curve it seems I am able to apply the desired voltage at the clocks I want. Triple facepalm... still discovering this latest GPU Boost XD


----------



## pfinch

Hey guys,

has anyone flashed the 400W BIOS on the MSI 2080 Ti TRIO?


----------



## profundido

Coldmud said:


> My fault, I reinstalled the NV drivers and now I'm getting slightly better results. This install is pretty fresh on an M.2, not even a month old, but I just don't get this BIOS. I'm not going above 45°C, however with +125 on the clock I'm all over the place from 1980-2085 MHz.
> It just doesn't hold it. Any more on the core is not stable in games, and I'm not seeing anything over 1.05v. In Superposition it's not stable even at +100 core, which yields 2040-2055 MHz clocks and a max of 1.043v, but I'm not topping 300W TDP.
> 
> Okay, the GPU-Z render test with stock and 100% PL yields a stable 1.050v. If I up the clocks and voltage slider I can indeed get a max of 1.093v, also stable and not dipping; however, I'm not seeing anything close to this in games or benchmarks.
> And if I put the core too high it crashes before even seeing 1.05v+, so what to do? I might just slap on the 366W max 133% PL F3 BIOS from the regular Aorus at this point.
> 
> edit: hmm, I guess a 2nd install of the drivers fixed more things on my end. I'm stable on clocks now and stock settings give me about 2020 MHz; doing more testing now.


Sometimes the problem is in the basics. Are you sure your NVIDIA Control Panel setting for "Power management mode" is still set to "Prefer maximum performance"? Also, in Windows Control Panel - System - power plans, are you sure it's set to the "High performance" profile? Just double-checking.


----------



## profundido

Shawnb99 said:


> Since I have to return my EVGA OC Ultra for being defective, I'm debating whether I should replace it, or return it and get an FTW model; that would mean needing a new water block, but I might have better luck with that model.
> 
> 
> Sent from my iPhone using Tapatalk


Only you can know what you really want deep inside, mate  It's a risk either way, but something tells me you won't sleep tight and happy unless you go for the latter option...


----------



## profundido

ElGreco said:


> Hello everybody,
> Few posts ago I mentioned I was interested to buy the GIGABYTE AORUS extreme waterblock.
> https://www.gigabyte.com/Graphics-Card/GV-N208TAORUSX-WB-11GC#kf
> 
> Since it’s still impossible to find it anywhere in Europe, I was thinking to buy the ASUS STRIX OC version,
> https://www.asus.com/Graphics-Cards/ROG-STRIX-RTX2080TI-O11G-GAMING/
> 
> ...because:
> A. Buildzoid seems to prefer this card due to the dual bios and good vrms
> B. It’s the only non-FE card with custom waterblock available (so I will have both the fans cover and the waterblock)
> 
> My only consideration would be the lower default clocks compared to the aorus extreme wf wb, plus the asus is a 2.7 slots size card which I may have issues installing in horizontal setup with an extender.
> 
> What would be your opinion on my concerns plus any other comments you might have about this...
> 
> Thank you all!



You'll be just as well off, since with a waterblock the only bottleneck remaining is things like the max core and memory overclock, which will in all cases exceed the default and boost clocks altogether. In other words, default clocks don't matter. You'll only be missing a really nicely done RGB lighting in the end. And this comes from a non-RGB fan...

About the size and fit in your case I cannot help you, of course 

I inquired, and stock for the Gigabyte card is only expected back for sure at the end of January (guaranteed), with a few here and there maybe at the end of next week. If you have the patience, you could do daily checks of all the shops like I did, and be fast like a ninja when it's suddenly shown as "in stock"...

Good luck!


----------



## profundido

J7SC said:


> *@ Profundido*
> 
> What kind of PSU are you running, with what type of CPU and OC? I ask because I once chased down a crash problem with a 7-series KPE on EVBot which would crash at certain peak loads in 3D FSE... it turned out to be a weak rail in the PSU instead. Just on the off-chance that it might be an issue, maybe borrow / try out a PSU with at least 200 W of headroom? Also, I tested and posted earlier on the VRAM-only overclock (+1000) and its watt-use increase in the GPU-Z render test... it was only about 6-8 extra watts of difference.
> 
> 
> *EDIT -* checked HWBot for average GPU overclocks of 2080 TIs:
> 
> Air
> *2014*
> 
> Water
> *2070*
> 
> LN2
> *2472*
> 
> *First comparison of Titan RTX vs 2080 Ti *(mostly gaming benchmarks, and keeping in mind 'early days' for drivers etc)
> 
> *https://www.youtube.com/watch?v=0L595AHrhWE*



I come from running two Titans in SLI for the past 2.5 years on a top-notch PSU, the Corsair AX1200i (which lets me see real-time usage and history graphs in Windows), with peaks of up to 900 watts during intense gaming. Since I replaced the cards I barely hit 600 watts anymore. So the PSU is almost certainly not the problem.

The CPU is an [email protected] in combination with G.Skill TridentZ 2x8GB [email protected], successfully (re)tested with 24h of Prime95 about 2 weeks ago. I must admit, however, that I upgraded the motherboard BIOS right after installing the new graphics card, which also reset the BIOS parameters, so it could be a factor. For this reason I restarted a Prime95 stress test this morning that will run for 8 hours, so I can check this off for sure. Also check my latest testing findings in a separate post below.


----------



## profundido

Last night I spent flashing back and forth using different BIOSes from the first page of this thread, trying out all the highest-wattage BIOSes, including the Galax HOF, the latest Gigabyte Windforce, the EVGA FTW3, and back to the stock BIOS the card came with. They all flash successfully, in case anyone is wondering. If you flash to a non-Gigabyte BIOS, the Gigabyte proprietary BIOS update tool will fail after that, because it has a check in it and doesn't recognize your current BIOS anymore (an ID mismatch, basically), but if you use nvflash to manually put any Gigabyte BIOS back on the card (e.g. Windforce), it will recognize the BIOS once again and work correctly. So as long as your system is stable, you can't brick the card by flashing.

The different BIOSes use different starting points and offsets for the core clock increase. Therefore I'll refrain from using wording such as +X, since what is +60 on one BIOS is called +145 on another while resulting in the exact same core clock; for the sake of comparison, only the exact resulting core clock is useful.
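The point above can be sketched in a few lines: each BIOS ships a different default boost clock, so only the absolute resulting clock is comparable. The base clocks below are made-up placeholders (read your own from GPU-Z), not the real defaults of any particular BIOS.

```python
# Hedged sketch: convert per-BIOS core-clock offsets to absolute clocks.
# The default boost values are hypothetical illustrations, NOT vendor specs.
BASE_BOOST_MHZ = {
    "stock_bios":     1635,  # hypothetical default boost of BIOS A
    "windforce_bios": 1550,  # hypothetical default boost of BIOS B
}

def absolute_clock(bios, offset_mhz):
    """Absolute target clock = BIOS default boost + Afterburner core offset."""
    return BASE_BOOST_MHZ[bios] + offset_mhz

# With these placeholders, "+60" on one BIOS and "+145" on the other
# land on the same 1695 MHz target:
assert absolute_clock("stock_bios", 60) == absolute_clock("windforce_bios", 145) == 1695
```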

Performance-wise, I come to the same conclusion as some other people in this thread: all the high-wattage BIOSes have a power limit ranging from 122-140% and achieve the same results within roughly 300 points of each other in Time Spy. The highest result I've seen was with the Galax BIOS from the first page, but it was not the most stable. Eventually I settled on the Gigabyte Windforce BIOS, since it allows a power limit of 140% whereas my stock BIOS only allows 122%, and during my testing I do see peaks of 136%. The highest voltage I've seen is 1.093v, but it's nowhere near constant. I'll set a manual curve tonight for testing.
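Worth remembering when comparing those percentages: the power-limit slider is relative to each BIOS's own default board power, so "140%" means different watts on different BIOSes. A tiny sketch of the arithmetic; the 300 W default used in the example is an illustrative placeholder, not the spec of any card mentioned here.

```python
# Hedged sketch: the power-limit slider is a percentage of the BIOS's
# *default* board power, so the same percentage maps to different watts
# on different BIOSes. The default wattage below is a placeholder.
def power_limit_watts(default_board_power_w, slider_percent):
    """Actual power ceiling in watts for a given power-limit slider setting."""
    return default_board_power_w * slider_percent / 100.0

# e.g. a hypothetical 300 W default at a 122% stock cap vs a 140% cap:
assert power_limit_watts(300, 122) == 366.0
assert power_limit_watts(300, 140) == 420.0
```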


Benchmark-wise, my results are only further confirmed. At a 2085 MHz core clock everything starts crashing, including Unigine Heaven and any benchmark or game. 2070 MHz is stable in Time Spy but not Time Spy Extreme, which seems more restrictive. 2060 MHz is stable there, but not in the SOTTR built-in benchmark nor in The Witcher 3. Those two game tests seem the most restrictive so far in my testing, since they load over 7GB of memory on the card and stress all overclocked cores of your CPU at the same time while hammering your GPU. Nothing comes close to that sort of load, not even Time Spy Extreme. To get those games stable over a longer period of time I need to bring the clock down to 2050 MHz. That seems to be stable so far for everything. Finally, I tested Skyrim VR with 200% supersampling as well. Everything smooth as silk. During all my testing the memory was running fixed at 8000 MHz. No problems there.


In conclusion:

The overclocker in me is disappointed, because the core chip I received underneath the block seems to be a dud. But deducing from all the latest information, I'm starting to believe that the current (latest) batches of chips being used for all 2080 Tis are mediocre bins, and the few good bins there were in the first place were all sold in the first batch to generate good results, leading to good sales (of mediocre chips). In turn, however, the memory has been replaced by awesomely performant memory that most likely won't die so fast.

However, to put things in perspective and look at the bigger picture: the end user and gamer in me is happy, all in all, since this GPU upgrade gives me a 50-57% single-card performance increase, which is almost the same as my SLI before. Except now my 4K is playable (80-110fps) in ALL games, instead of only the few that still supported SLI really well. Especially since I do a lot of VR now, which is all single-card, I feel this upgrade big time. Also much faster micro-response (no more microstuttering) and fluidity in games; with my NVIDIA Control Panel fixed to 8-bit RGB 120Hz, RivaTuner fps capped at 117, and G-Sync and V-Sync forced on in the control panel overriding game settings, it's a level of smoothness I have never been able to achieve using SLI in all the past years. It's quite amazing to behold such detailed and butter-smooth visuals @163PPI.


----------



## dante`afk

Do we need Titan waterblocks, or do the 2080 Ti reference blocks fit on the Titan?


----------



## profundido

dante`afk said:


> Do we need Titan waterblocks, or do the 2080 Ti reference blocks fit on the Titan?


Titan blocks. Bitspower has already announced they will release one very soon. That being said, it's not yet known whether 2080 Ti blocks really are incompatible. You might want to ask this question in the (new) dedicated Titan RTX owners thread.


----------



## animeowns

Shawnb99 said:


> EVGA has accepted that I need an RMA. Now I get to play the return game. I'm sure shipping won't be cheap, and who knows if the new one won't crap out as well.
> 
> Regret buying the card now
> 
> 
> 
> Sent from my iPhone using Tapatalk


I had a 2070 XC Ultra for a day and returned it to Best Buy; even with my headphones on I could still hear the coil whine. It was the worst card for coil whine I've ever owned. Is it possible to fix coil whine in cards, or is that just a defect?


----------



## Coldmud

profundido said:


> Last night I spent flashing back and forth using different BIOSes from the first page of this thread, trying out all the highest-wattage BIOSes, including the Galax HOF, the latest Gigabyte Windforce, the EVGA FTW3, and back to the stock BIOS the card came with. They all flash successfully, in case anyone is wondering. If you flash to a non-Gigabyte BIOS, the Gigabyte proprietary BIOS update tool will fail after that, because it has a check in it and doesn't recognize your current BIOS anymore (an ID mismatch, basically), but if you use nvflash to manually put any Gigabyte BIOS back on the card (e.g. Windforce), it will recognize the BIOS once again and work correctly. So as long as your system is stable, you can't brick the card by flashing.
> 
> The different BIOSes use different starting points and offsets for the core clock increase. Therefore I'll refrain from using wording such as +X, since what is +60 on one BIOS is called +145 on another while resulting in the exact same core clock; for the sake of comparison, only the exact resulting core clock is useful.
> 
> Performance-wise, I come to the same conclusion as some other people in this thread: all the high-wattage BIOSes have a power limit ranging from 122-140% and achieve the same results within roughly 300 points of each other in Time Spy. The highest result I've seen was with the Galax BIOS from the first page, but it was not the most stable. Eventually I settled on the Gigabyte Windforce BIOS, since it allows a power limit of 140% whereas my stock BIOS only allows 122%, and during my testing I do see peaks of 136%. The highest voltage I've seen is 1.093v, but it's nowhere near constant. I'll set a manual curve tonight for testing.
> 
> 
> Benchmark-wise, my results are only further confirmed. At a 2085 MHz core clock everything starts crashing, including Unigine Heaven and any benchmark or game. 2070 MHz is stable in Time Spy but not Time Spy Extreme, which seems more restrictive. 2060 MHz is stable there, but not in the SOTTR built-in benchmark nor in The Witcher 3. Those two game tests seem the most restrictive so far in my testing, since they load over 7GB of memory on the card and stress all overclocked cores of your CPU at the same time while hammering your GPU. Nothing comes close to that sort of load, not even Time Spy Extreme. To get those games stable over a longer period of time I need to bring the clock down to 2050 MHz. That seems to be stable so far for everything. Finally, I tested Skyrim VR with 200% supersampling as well. Everything smooth as silk. During all my testing the memory was running fixed at 8000 MHz. No problems there.
> 
> 
> In conclusion:
> 
> The overclocker in me is disappointed, because the core chip I received underneath the block seems to be a dud. But deducing from all the latest information, I'm starting to believe that the current (latest) batches of chips being used for all 2080 Tis are mediocre bins, and the few good bins there were in the first place were all sold in the first batch to generate good results, leading to good sales (of mediocre chips). In turn, however, the memory has been replaced by awesomely performant memory that most likely won't die so fast.
> 
> However, to put things in perspective and look at the bigger picture: the end user and gamer in me is happy, all in all, since this GPU upgrade gives me a 50-57% single-card performance increase, which is almost the same as my SLI before. Except now my 4K is playable (80-110fps) in ALL games, instead of only the few that still supported SLI really well. Especially since I do a lot of VR now, which is all single-card, I feel this upgrade big time. Also much faster micro-response (no more microstuttering) and fluidity in games; with my NVIDIA Control Panel fixed to 8-bit RGB 120Hz, RivaTuner fps capped at 117, and G-Sync and V-Sync forced on in the control panel overriding game settings, it's a level of smoothness I have never been able to achieve using SLI in all the past years. It's quite amazing to behold such detailed and butter-smooth visuals @163PPI.


Excellent post! I have come to the same conclusions: the memory allows a good OC, the core is a dud. If you apply a manual offset on the V/F curve, you can hold 1.093v if you're not temp-limited.
All in all I'm still pleased with the card, as my regular Aorus 2080 Ti was so loud in Witcher 3 and got up to 81°C; now I'm seeing a max of 50°C, and gaming in silence is well worth it.
I hadn't heard about that 140% Gigabyte Windforce BIOS; gonna have to try it and see if I can hit a higher TDP wall.

BTW, I've seen the Blur Busters article about capping fps; however, 3 fps below is too much for me  and I get amazing frametimes just capping to 119fps.


----------



## Jpmboy

animeowns said:


> I had a 2070 XC Ultra for a day and returned it to Best Buy; even with my headphones on I could still hear the coil whine. It was the worst card for coil whine I've ever owned. Is it possible to fix coil whine in cards, or is that just a defect?


Not fixable AFAIK, and it's really not a defect. There are no coils in these cards; what you hear are singing caps. My V sings under certain workloads, but not during gaming.


----------



## Emmett

I decided to put one of my 2080 Tis on an AIO using the NZXT G12 bracket. I took the card right from the box and removed the stock cooler (an ASUS 2080 Ti Dual OC, an EARLY one).
I removed the worthless backplate, as it was for looks only, with NO thermal pads at all, and spaced far enough from the PCB that very thick pads would have been needed if I wanted to add any.
No midplate.
I have a 120mm fan near the bracket of the card blowing air on it, and the NZXT 92mm is cooling the rear, with a Thermaltake Floe Riing 360 on the GPU. After a week of running with no issues I loaded the Galax 380 W BIOS on it, and have been running that for a week with no issues yet. But the strange part is...

The card will run 2025-2040 MHz and +1000 on memory at 0.975v, and it's maxing out the power limit at nearly the full 380 W. I have seen it at 378.0. Does that seem right?
The max temp I see on the GPU is 50-51°C. If I try to leave the memory at stock and add volts, even 2055 MHz starts to artifact.

The maxing out of the limit only happens when running Time Spy or Superposition or the like.

Is it the Galax 380 BIOS?
Or just the chip?

Also, I was thinking of putting another 2080 Ti on a G12 bracket; would a 280 AIO cool just as well as the 360?


----------



## Iniura

I just received a normal 2080 Ti Strix, not the OC version. I haven't overclocked a video card in a couple of years, so I'd like some help.

Does anyone know how to figure out if it's the A version or non-A? In my GPU-Z screenshot, under Device ID it says 1E04; does that mean it's the non-A, as this should be 1E07 for the A version? Or am I incorrect, and can you only tell by taking the cooler off and looking at the die?

I'm using MSI Afterburner atm. I slid the temp limit to +88 and the power limit % to 112 max; it can't do more because of the Strix BIOS version, right? But if it's the non-A version I probably can't flash another BIOS. I want to watercool the card eventually, but I'm trying to figure out if it's good enough to keep, or if it's kind of a dud and I should just get a reference card (A version) and block that.

I tested with +100 on the core clock running Fire Strike Extreme, but it won't complete; it stops eventually and I get the message "benchmark cancelled by user". So I tried +60 on the core and this succeeded.
My card probably can't be OC'd that high, right? Or can I only be sure about this when I flash another BIOS, for instance the EVGA or the Galax one, if that's even possible, and block the card?

Thanks, I appreciate the help


----------



## acmilangr

Where can I download the Galax BIOS for my ASUS Dual 2080 Ti OC card, please?


----------



## Coldmud

Iniura said:


> I just received a normal 2080 Ti Strix, not the OC version. I haven't overclocked a video card in a couple of years, so I'd like some help.
> 
> Does anyone know how to figure out if it's the A version or non-A? In my GPU-Z screenshot, under Device ID it says 1E04; does that mean it's the non-A, as this should be 1E07 for the A version? Or am I incorrect, and can you only tell by taking the cooler off and looking at the die?
> 
> I'm using MSI Afterburner atm. I slid the temp limit to +88 and the power limit % to 112 max; it can't do more because of the Strix BIOS version, right? But if it's the non-A version I probably can't flash another BIOS. I want to watercool the card eventually, but I'm trying to figure out if it's good enough to keep, or if it's kind of a dud and I should just get a reference card (A version) and block that.
> 
> I tested with +100 on the core clock running Fire Strike Extreme, but it won't complete; it stops eventually and I get the message "benchmark cancelled by user". So I tried +60 on the core and this succeeded.
> My card probably can't be OC'd that high, right? Or can I only be sure about this when I flash another BIOS, for instance the EVGA or the Galax one, if that's even possible, and block the card?
> 
> Thanks, I appreciate the help


Came across this list; bad news, I'm afraid.

10DE 1F02 ... GeForce RTX 2070 "non-A"
10DE 1F07 ... GeForce RTX 2070 "A"
10DE 1E82 ... GeForce RTX 2080 "non-A"
10DE 1E87 ... GeForce RTX 2080 "A"
10DE 1E04 ... GeForce RTX 2080 Ti "non-A"
10DE 1E07 ... GeForce RTX 2080 Ti "A"
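For anyone scripting the check rather than eyeballing GPU-Z, the table above drops straight into a lookup. The IDs come from the list posted here; matching them against GPU-Z's "Device ID" field (vendor 10DE = NVIDIA) is the assumption.

```python
# The PCI device-ID table above as a lookup table. IDs are taken verbatim
# from the list in this post; compare against GPU-Z's "Device ID" field.
TURING_IDS = {
    "1F02": 'GeForce RTX 2070 "non-A"',
    "1F07": 'GeForce RTX 2070 "A"',
    "1E82": 'GeForce RTX 2080 "non-A"',
    "1E87": 'GeForce RTX 2080 "A"',
    "1E04": 'GeForce RTX 2080 Ti "non-A"',
    "1E07": 'GeForce RTX 2080 Ti "A"',
}

def identify(device_id):
    """Map a 4-hex-digit device ID to the chip variant, if known."""
    return TURING_IDS.get(device_id.upper(), "unknown Turing device ID")

assert identify("1e04") == 'GeForce RTX 2080 Ti "non-A"'  # Iniura's card above
```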




acmilangr said:


> Where can I download Galax bios for my Asus dual 2080ti oc card please?


The 380W bios is on the first page under BIOS.


----------



## acmilangr

Coldmud said:


> Iniura said:
> 
> 
> > I just received a normal 2080 Ti Strix, not the OC version. I haven't overclocked a video card in a couple of years, so I'd like some help.
> > 
> > Does anyone know how to figure out if it's the A version or non-A? In my GPU-Z screenshot, under Device ID it says 1E04; does that mean it's the non-A, as this should be 1E07 for the A version? Or am I incorrect, and can you only tell by taking the cooler off and looking at the die?
> > 
> > I'm using MSI Afterburner atm. I slid the temp limit to +88 and the power limit % to 112 max; it can't do more because of the Strix BIOS version, right? But if it's the non-A version I probably can't flash another BIOS. I want to watercool the card eventually, but I'm trying to figure out if it's good enough to keep, or if it's kind of a dud and I should just get a reference card (A version) and block that.
> > 
> > I tested with +100 on the core clock running Fire Strike Extreme, but it won't complete; it stops eventually and I get the message "benchmark cancelled by user". So I tried +60 on the core and this succeeded.
> > My card probably can't be OC'd that high, right? Or can I only be sure about this when I flash another BIOS, for instance the EVGA or the Galax one, if that's even possible, and block the card?
> > 
> > Thanks, I appreciate the help
> 
> 
> Came across this list; bad news, I'm afraid.
> 
> 10DE 1F02 ... GeForce RTX 2070 "non-A"
> 10DE 1F07 ... GeForce RTX 2070 "A"
> 10DE 1E82 ... GeForce RTX 2080 "non-A"
> 10DE 1E87 ... GeForce RTX 2080 "A"
> 10DE 1E04 ... GeForce RTX 2080 Ti "non-A"
> 10DE 1E07 ... GeForce RTX 2080 Ti "A"
> 
> 
> acmilangr said:
> 
> 
> > Where can I download the Galax BIOS for my ASUS Dual 2080 Ti OC card, please?
> 
> 
> The 380W BIOS is on the first page under BIOS.

Thanks a lot!
But I can't download it from there. Where are the links?
Edit: found it.


----------



## J7SC

profundido said:


> Last night I spent flashing back and forth using different BIOSes from the first page of this thread, trying out all the highest-wattage BIOSes, including the Galax HOF, the latest Gigabyte Windforce, the EVGA FTW3, and back to the stock BIOS the card came with. They all flash successfully, in case anyone is wondering.
> 
> (cut)
> 
> 
> Performance-wise, I come to the same conclusion as some other people in this thread: all the high-wattage BIOSes have a power limit ranging from 122-140% and achieve the same results within roughly 300 points of each other in Time Spy.
> 
> However, to put things in perspective and look at the bigger picture: the end user and gamer in me is happy, all in all, since this GPU upgrade gives me a 50-57% single-card performance increase, which is almost the same as my SLI before. Except now my 4K is playable (80-110fps) in ALL games, instead of only the few that still supported SLI really well.


As others stated, excellent post, and very useful :thumb:
Quick question: when you tried the other BIOS sets (i.e. Windforce with the power limit up to 140%), did the actual max watt reading go beyond 380 W (where my stock Aorus Xtreme cards stop)? As to early cards binning higher, that may or may not be the case, but when I bought my second one, I first made sure it was very close in serial number, since I knew what the first one could do. Even that is no guarantee, though.

In any case, you saved me a lot of time re. trial and error and flashing two cards with different BIOSes... I am finally ready for assembly and putting the loops together for the 2080 Tis after testing everything out on test benches, just watching some paint dry (literally) as I'm going for a black-and-white theme > the best I could think of w/ all that RGB everywhere else (mobo, 2x 2080 Tis, G.Skill RGB, fans...)


----------



## ilmazzo

J7SC said:


> As others stated, excellent post, and very useful :thumb:
> Quick question: when you tried the other BIOS sets (i.e. Windforce with the power limit up to 140%), did the actual max watt reading go beyond 380 W (where my stock Aorus Xtreme cards stop)? As to early cards binning higher, that may or may not be the case, but when I bought my second one, I first made sure it was very close in serial number, since I knew what the first one could do. Even that is no guarantee, though.
> 
> In any case, you saved me a lot of time re. trial and error and flashing two cards with different BIOSes... I am finally ready for assembly and putting the loops together for the 2080 Tis after testing everything out on test benches, just watching some paint dry (literally) as I'm going for a black-and-white theme > the best I could think of w/ all that RGB everywhere else (mobo, 2x 2080 Tis, G.Skill RGB, fans...)


In the post he even claims that SLI is a waste of time and money.


----------



## EarlZ

I am looking at getting the Gigabyte 2080 Ti Gaming OC 11G; how does it stack up in terms of cooling compared to the other custom-cooler cards? My selection is quite limited since my case only accepts 315mm of GPU length. The ASUS Strix would have been my first choice, but it costs $277 more in my region. Other options include the Zotac 2080 Ti AMP and the Aorus Xtreme.


----------



## Coldmud

ilmazzo said:


> in the post he even claims that sli is a waste of time and money


I assume he means coming from 1080 Ti SLI. Going to a single-card solution usually means better frametimes; however, you need the second card to get 120+ fps on these new 4K screens.
Nowhere in that post did he claim SLI 2080 Tis are a waste of time.


----------



## Coldmud

J7SC said:


> As others stated, excellent post, and very useful:thumb:
> Quick question: When you tried the other Bios sets (ie Windforce Power Limit up to 140%), did the actual max watt reading go beyond 380w (where my stock Aorus Xtreme cards stop), or was it higher ? As to early cards binning higher, that may or may not be the case, but when I bought my second one, I first made sure that it was very close re. serial numbers since I knew what the first one could do. Even that is no guarantee, though.
> 
> In any case, you saved me a lot of time re. trial and error and flashing two cards with different Bios... I am finally ready for assembly and putting the loops together for the 2080 TIs after testing everything out on test benches, just watching some paint drying (literally ) as I'm going for a black-and-white theme > the best I could think of w/ all that RGB everywhere else (mobo, 2x 2080 TIs, GSkill RGB, fans... )


Coming along nicely! That's some serious rad power! Are you going to do push>pull on those? Get the noctua fans, if you haven't. They are unbelievably silent and powerful.


----------



## Nephalem89

The best of Gigabyte 🙂


----------



## J7SC

Coldmud said:


> Coming along nicely! That's some serious rad power! Are you going to do push>pull on those? Get the noctua fans, if you haven't. They are unbelievably silent and powerful.



...the SLI item above is just trolling. Every personal system I have built since SLI / Crossfire was introduced has at least 2x GPUs, some more...and I never owned 1080 TIs either.

As to fans, I'm working with Scythe 3000rpm [not so] Silent Typhoons. I'm trying out a shroud approach re. noise, plus the Core P5 has the front glass. But it may just be too loud, in which case the Noctuas (currently that two-tone brown) will have to be painted. Also, I built a subframe for the back that will hold a separate 4790K system / 2x 780 TIs that is connected to the 4K TV for just YouTube and such so that I do not have to boot 'the big not so silent one' just for some browsing etc.


----------



## Coldmud

J7SC said:


> ...the SLI item above is just trolling. Every personal system I have built since SLI / Crossfire was introduced has at least 2x GPUs, some more...and I never owned 1080 TIs either.
> 
> As to fans, I'm working with Scythe 3000rpm [not so] Silent Typhoons. I'm trying out a shroud approach re. noise, plus the Core P5 has the front glass. But it may just be too loud, in which case the Noctuas (currently that two-tone brown) will have to be painted. Also, I built a subframe for the back that will hold a separate 4790K system / 2x 780 TIs that is connected to the 4K TV for just YouTube and such so that I do not have to boot 'the big not so silent one' just for some browsing etc.


Noctua sells the chromax all black fans but yeah, the brown/beige two tone is.... 

They are currently inside my case so I don't see them, you can always cover them with some sort of shroud that fits over them just like I did. Also helps with cleaning the dust buildup.



Nephalem89 said:


> The best of Gigabyte 🙂


Hey man, congratulations! This card is a beast, you are going to enjoy it! Slap the 380 W BIOS on there, adjust the voltage curve for 1.093 V at 2100 MHz+, and report your results here!
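Setting "1.093 V at 2100 MHz+" is typically done by flattening Afterburner's voltage/frequency curve at that point (the Ctrl+L lock). Here is a conceptual Python sketch of what that lock does; the curve values are made up for illustration and `lock_curve` is a hypothetical helper, not an Afterburner API:

```python
# Conceptual sketch of Afterburner's voltage/frequency curve lock (Ctrl+L).
# Illustrative values only; a real 2080 Ti curve has many more points.

def lock_curve(curve: dict, lock_v: float, target_mhz: int) -> dict:
    """Clamp every point at or above lock_v to the target clock, and make
    sure no lower-voltage point exceeds it (the 'white line' fix)."""
    return {v: (target_mhz if v >= lock_v else min(mhz, target_mhz))
            for v, mhz in sorted(curve.items())}

stock = {0.900: 1830, 1.000: 1950, 1.050: 2025, 1.093: 2070}
locked = lock_curve(stock, 1.093, 2100)
print(locked)  # the 1.093 V point now sits at 2100 MHz
```

Under load the card then holds the single locked voltage/frequency pair instead of wandering up and down the stock curve.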


----------



## wirefox

Nephalem89 said:


> The best of Gigabyte 🙂


drool....


----------



## J7SC

Coldmud said:


> Noctua sells the chromax all black fans but yeah, the brown/beige two tone is....
> 
> They are currently inside my case so I don't see them, you can always cover them with some sort of shroud that fits over them just like I did. Also helps with cleaning the dust buildup.
> 
> (cut)


Colleagues at work told me about an auction sale from a server farm recently... so I picked up a big pile of Silent Typhoons and also some Sunons (lower left, still need to be cleaned) which are thicker and faster still and can practically lift off the surface of a table, but are far toooo looouuud even with shrouds. And if all else fails, I've got a pile of unused Noctuas and plenty of white plastic paint :laugher: . I'll get those 2080 Tis frosty one way or another!


----------



## boli

*Tests on air/water with various power levels with/without OC*

First some history/context:
Previously I used a Titan X (Pascal) with 300W power budget and +200/+600 OC for about 2 years, for gaming in UHD. Since I had a small case (Phanteks Evolv ITX) with only a single 280x140x45 EK rad for CPU + GPU, my GPU temps reached ~55C during gaming sessions, with Noctua fans at pretty quiet ~800 rpm. Water temps were about 10-15C below that (estimated by looking at GPU temp when load was stopped). Ambient is usually in low twenties, currently 23C.

Since I got my KFA2 OC 2080 Ti two weeks ago and upgraded its BIOS from the 320 W to the 380 W one, I had to increase fan speeds to a higher level than I'm comfortable with (~1000-1200 rpm) to stay in the same temperature range. BTW, I'm aware that this is a higher temp than many are comfortable with, but I figured it should still be safe for tubes and pump – I simply value quietness more than the absolute highest clocks.

Due to the fans being too loud now I moved the innards of my PC (stock i7-6700K, 16 GiB, 960 Pro 2 TB for games + 850 EVO 2 TB for backup) into a similar but bigger case (Phanteks Evolv X), this time with two slim rads 280x140x28 + 420x140x28. All water cooling equipment by EK.

Test results inside:


Spoiler



*Games*







*3DMark*










Anyway, after testing power limits from 90% to 126% with the 380W BIOS (270W to 380W), I eventually flashed back to the stock 320W max BIOS, because the power/heat increase of the 380W PL is not worth the few extra fps to me. Also, if my card were to die, at least I wouldn't have to bother with reflashing on top of all the other hassle of putting the stock cooler back on.
I should probably note that the most recent tests (320W OC) were with the Windows October update applied, whereas all the earlier tests were with the April version. Maybe this explains the much lower FC5 score, which I did test 3 times to be sure.

Also, I should mention that the 2080 Ti OC is +120 core and +800 memory, which results in mostly 2070 MHz clocks during gaming.
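As a sanity check on those percentages: the power-limit slider maps onto the BIOS's default board power, so assuming the 380 W KFA2 BIOS uses a 300 W default (an inference from 90% landing at 270 W, with the ~127% ceiling rounding to 380 W; not a datasheet value), the watt targets work out as below. `power_limit_watts` is just an illustrative helper:

```python
# Map a power-limit slider percentage to a board-power target in watts.
# The 300 W default board power is an assumption inferred from the post
# (90% = 270 W), not a published spec.

def power_limit_watts(percent: float, base_watts: float = 300.0) -> float:
    return base_watts * percent / 100.0

for pct in (90, 100, 110, 126):
    print(f"{pct:>3}% -> {power_limit_watts(pct):.0f} W")
```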


----------



## Frozburn

Any reason not to flash to a better BIOS? Getting a replacement soon (card died) and I'm thinking about trying one. Does it mess with your warranty?


----------



## dVeLoPe

Is this video accurate? If I already own 1080 Ti SLI and don't care about RT/DLSS...

Why would anyone who has 1080 Ti SLI spend so much more if the performance difference is minimal?


----------



## sprayingmango

dVeLoPe said:


> https://www.youtube.com/watch?v=ZAIwtAUpHyA
> 
> is this video accurate? if I already own SLi 1080Ti and dont care about RT/DLSS
> 
> Why would anyone that has SLi 1080Ti spend so much more if the perf diff is minimal


I went from 1080 Ti SLI to 2080 Ti NVLink and it's incredible. The 2080 Tis are faster to begin with, so you get a performance bump across the board, plus you get support for the newest tech. I game at 4K HDR 120Hz and it holds 120 the entire time in games like BFV (Nvidia Inspector profile) and everything else I play.


----------



## dVeLoPe

Yes, but I already own 2x 1080 Ti and spent $1,550 on them back in May of this year.

I still would have to come out of pocket with roughly another 50% of what I paid for 1080Ti SLi to switch over to 2080Ti SLi

Why would I do this if I dont care about the shiny new DLSS/RT tech? another 10% performance boost for 50% increase in price? no thx lol


----------



## Esenel

dVeLoPe said:


> yes but I already own 2x 1080Ti and spent 1550$ for them back in May of this year.
> 
> I still would have to come out of pocket with roughly another 50% of what I paid for 1080Ti SLi to switch over to 2080Ti SLi
> 
> Why would I do this if I dont care about the shiny new DLSS/RT tech? another 10% performance boost for 50% increase in price? no thx lol


Or some could say: why did you buy already-outdated hardware so late in the year, with the new generation ahead?


----------



## dante`afk

dVeLoPe said:


> https://www.youtube.com/watch?v=ZAIwtAUpHyA
> 
> is this video accurate? if I already own SLi 1080Ti and dont care about RT/DLSS
> 
> Why would anyone that has SLi 1080Ti spend so much more if the perf diff is minimal


framestuttering, uneven frames, framepacing not smooth.

anyone saying there is nothing like that in SLI straight up lies.


----------



## ilmazzo

Coldmud said:


> I assume he means coming from sli 1080ti. Going to a single card solution usually means better frametimes. however you need the second card to get 120fps+ on these new 4k screens.
> Nowhere in that post did he claim sli 2080ti's is a waste of time.


These are the steps I usually don't understand from some people:

- I get the top Nvidia GPU
- It's not enough for my 4K screen
- I get a second top Nvidia GPU
- SLI is meh (65% more frames when it works well, worse frametimes, dependent on game-developer support and so on, a ton of heat, etc. etc.)
- Nvidia brings out the new uber powa top GPU
- I get the new uber powa top GPU
- It is still not enough for my 4K, but I see improvements in frametimes and so on, and I get roughly the same fps as the previous setup but more consistent. Great card!!!!!
- Willing to get a second new uber powa GPU (because a single one is still not enough)

mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm


----------



## ilmazzo

dante`afk said:


> framestuttering, uneven frames, framepacing not smooth.
> 
> anyone saying there is nothing like that in SLI straight up lies.


SLI is like politics: everyone has their own opinion about it and no one has the full truth in their hands


----------



## Jyssi

How big should the delta between water temperature and GPU core be after a stress test? Idling 5 minutes on the desktop I have a 5C delta: 33C water and 38C GPU.
What peak GPU temperatures are others with waterblocks getting in the Time Spy test?
I've got an EK Vector waterblock on a reference 2080 Ti.


----------



## dante`afk

It should be about 10-11C. 38C on 33C water is pretty good.
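One way to compare such deltas across different cards and power limits is to normalize by board power: the delta divided by watts gives an approximate core-to-water thermal resistance. A rough Python sketch; the numbers below are illustrative, not measurements:

```python
# Approximate core-to-water thermal resistance from a steady-state reading.
# Illustrative values: roughly a 10 C delta at a 320 W power limit.

def thermal_resistance_c_per_w(gpu_c: float, water_c: float, power_w: float) -> float:
    """Delta-T divided by dissipated power, in C per watt."""
    return (gpu_c - water_c) / power_w

r = thermal_resistance_c_per_w(43.0, 33.0, 320.0)
print(f"~{r:.3f} C/W core-to-water")
```

A well-seated block tends to stay in the same ballpark regardless of power limit, which is why a growing delta at the same wattage usually points at mounting or pad problems.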


----------



## Jyssi

Ok thanks!

80C in Time Spy, on the other hand...

Can I put liquid metal between the block and the core and keep the EVGA warranty alive?


----------



## VPII

Okay, I've done a little testing just to see where I am with regard to temps and performance. For everyday use I generally ran my Galax RTX 2080 Ti OC (up to 380 W TDP) at full TDP with +105 core and +500 memory. To be safe I checked with the MSI Kombustor Furmark donut and found that temps reached 86C within 2 to 3 minutes from the start. Not what I wanted, as the fans were set at 100%. Even when it reached 86C you could hear the fans ramp up even more, so seemingly beyond 100%. So I dropped everything to stock with only the fan set to 100%, and what do you know... max temp was 73C. Yes, I know, a higher TDP means more heat, but I just wanted to put it out there for those scared of their card failing.


----------



## sprayingmango

Esenel said:


> Or some could say why did you buy so late this year already outdated hardware with the new ahead?


Exactly


----------



## Jpmboy

dVeLoPe said:


> yes but I already own 2x 1080Ti and spent 1550$ for them back in May of this year.
> 
> I still would have to come out of pocket with roughly another 50% of what I paid for 1080Ti SLi to switch over to 2080Ti SLi
> 
> *Why would I do this if I dont care about the shiny new DLSS/RT tech*? another 10% performance boost for 50% increase in price? no thx lol


you wouldn't... I hope.
(were these cathartic posts?)


----------



## GraphicsWhore

Where the **** is the RTX patch for Shadow of the Tomb Raider? Is this pissing anyone else off? I've held off playing the game waiting for it, and I have a ton of other games to play right now so it's not a big deal, but it's annoying nonetheless.


----------



## cx-ray

dante`afk said:


> framestuttering, uneven frames, framepacing not smooth.
> 
> anyone saying there is nothing like that in SLI straight up lies.


You can blame marketing for that misconception. Then again, the reality of SLI probably isn't worth the marketing investment. If you think selling an identical second card to gamers is difficult, try selling a second one to gamers who are already getting 80+ FPS with a single card.

For a good experience, get the best card a manufacturer makes for high frame rates. If you can't quite get 80+ FPS, SLI will perform perfectly fine in properly supported titles. Don't turn to SLI when your hardware is already struggling to reach 60 FPS at a given resolution with a single card, hoping to solve that by adding another card. Turning down your settings, where possible, would be a far better option.

Same thing with G-Sync and FreeSync: once accustomed to higher frame rates, anything below ~70 FPS starts looking bad no matter what. Tear-free output at higher FPS is where the strength of VRR plays out.


----------



## sprayingmango

dante`afk said:


> framestuttering, uneven frames, framepacing not smooth.
> 
> anyone saying there is nothing like that in SLI straight up lies.


You clearly don’t understand what you’re talking about... many years ago it used to be an issue but in the past 3-4 years it’s been fantastic. G-Sync has solved any and all of the issues you listed. NVLink is even better. I’ve been an SLI user since day one and it’s fantastic.


----------



## boli

Jyssi said:


> How much should be the delta between water temperature and GPU core after stress test? Idling 5 minutes on desktop and I have 5C delta, 33C water and 38C gpu.
> 
> What peak GPU temperatures others with waterblocks are getting on time spy test?
> 
> I've got EK vector waterblock on reference 2080ti.





Jyssi said:


> [snip]80C on time spy on the other hand...[snip]



Um, 80C during a short Time Spy bench, and 5C above water temp at idle? I think your EK block might not be seated properly! I get to 51C GPU temp during a 20-minute Time Spy stress test, and it drops by 14C immediately after the test ends (see pic). Very quiet fans (~800 rpm). Maybe check earlier posts about leaving off the F thermal pads, using the shortest screws, not tightening the screws so far that the PCB bends, and using more thermal paste than usual.

Sorry I won't search right now as I'm using a mobile device to type this.

*Update*:
I'm back home now, so here's some more info from my notes I took for myself (mostly from this thread, but also from elsewhere):
People apparently get around 42C to 54C GPU temps on water under load.

Here's some quotes regarding the thermal pad thing:


Spoiler






> They sent enough thermal pads. It's just their instructions on where to put them was wrong. I did it the way the instructions said at first and temps were out to lunch for a water cooled gpu. Redid the gpu twice same thing. Where it says to put the 8mm strips I left them off and all good. I covered the mem chips and vrm's, that's it. Have not seen 40' since with repeated benching
> ——
> Yup, I left the number 3 pads off. If they were only .5 mil then maybe good contact all around but with 1 mil pads they seem to raise the block too much.
> ——
> could this be my issue? the instructions say pads 3 (aka f) are 1.0 mm... just like pads 2 (aka g) but if they are in fact supposed to be .5mm...
> ——
> I had to remove the #3 2x1 mm thermal pads on chokes as they are too thick and I had temperature issue on GPU for a not complete contact between WB and GPU SOC.
> Be careful.
> ——
> Had to do the exact same thing. All good after #3 pads were removed. I am not sure if a thinner pad would work for the chokes.
> ——
> I am using pads on chokes..i have no issues. stays under 45c and its summer here in Australia






Here's an external thread about using lots of thermal paste, short screws, and bent PCBs:
https://www.reddit.com/r/watercooling/comments/9ntbzv/ek_vector_2080_ti_temp_issue/

Personally I also tightened the screws too much at first, but the temp and contact was fine, maybe because I used only 0.5mm thin thermal pad for the coils on my first attempt (pics in an earlier post of mine).

I have since updated to 1mm thick pad for the coils (see pic), when I moved to a bigger case with 2 rads total (more info in earlier posts).

So far I have never tried without any thermal pads on coils because I think that my temps are fine. But someone please let me know if they're not. 

Pictures:


Spoiler


----------



## Ford8484

GraphicsWhore said:


> Where the **** is the RTX patch for Shadow of the Tomb Raider? Is this pissing anyone else off? I've held off playing the game waiting for it, and I have a ton of other games to play right now so it's not a big deal, but it's annoying nonetheless.


lol yep, same here. The game looks good and it's been discounted; tempted to pull the trigger, but yeah, *** I want to try DLSS with ray tracing.


----------



## Zfast4y0u

sprayingmango said:


> You clearly don’t understand what you’re talking about... many years ago it used to be an issue but in the past 3-4 years it’s been fantastic. G-Sync has solved any and all of the issues you listed. NVLink is even better. I’ve been an SLI user since day one and it’s fantastic.


He's just copy/pasting what others said (also clueless people); he has no clue himself. Don't even bother commenting on such nonsense, just skip it ^^


----------



## Jyssi

boli said:


> Um, 80C during time spy short bench, and 5C above water temp on idle? I think your EK block might not be seated properly! I get to 51C GPU temp during a 20 minute time spy stress test, and it drops by 14C immediately after test ends (see pic). Very quiet fans (~800 rpm). Maybe check earlier posts about not using thermal pads F, using shortest screws, not tightening screws until PCB is bent and using more thermal paste than usual.
> 
> Sorry I won't search right now as I'm using a mobile device to type this.
> 
> *Update*:
> I'm back home now, so here's some more info from my notes I took for myself (mostly from this thread, but also from elsewhere):
> People apparently get around 42C to 54C GPU temps on water under load.
> 
> Here's some quotes regarding the thermal pad thing:
> 
> Here's some external thing about using lots of thermal paste, short screws and bent PCBs:
> https://www.reddit.com/r/watercooling/comments/9ntbzv/ek_vector_2080_ti_temp_issue/
> 
> Personally I also tightened the screws too much at first, but the temp and contact was fine, maybe because I used only 0.5mm thin thermal pad for the coils on my first attempt (pics in an earlier post of mine).
> 
> I have since updated to 1mm thick pad for the coils (see pic), when I moved to a bigger case with 2 rads total (more info in earlier posts).
> 
> So far I have never tried without any thermal pads on coils because I think that my temps are fine. But someone please let me know if they're not.
> 
> Pictures:
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 240210
> 
> View attachment 240208


This is what I was afraid of. My block was installed at a local PC store and I was kind of expecting something like this. I can't fix it myself this year since I'm leaving for a Christmas trip tomorrow.
And I can't be bothered to fight with the store; they were nice and quick, and I can most likely do better myself. It didn't cost me anything anyway 

E: It's now going "only" to 76C during Time Spy benchmarking. After 15 minutes of Witcher 3 (335 W, 99% usage) with no frame limit it hits 80C. 84C in Furmark.
E2: I'll try loosening the screws now, thx for the link


----------



## dante`afk

sprayingmango said:


> You clearly don’t understand what you’re talking about... many years ago it used to be an issue but in the past 3-4 years it’s been fantastic. G-Sync has solved any and all of the issues you listed. NVLink is even better. I’ve been an SLI user since day one and it’s fantastic.





Zfast4y0u said:


> hes just copy/pasting what others said ( clueless ppl also ), he himself has no clue, dont even bother commenting such nonsense. just skip it ^^



I've been using SLI from 480 up until 1080Ti.

Sure I'm just copy pasting and have no clue.


----------



## Zfast4y0u

anger management buddy... next time u go generalize anything, dont get butt hurt when u get called out as clueless.


----------



## dante`afk

sweet defensive tactic, "anger management buddy", "you are just mad"

haha. 

go back into your eggshell.


----------



## Nizzen

GraphicsWhore said:


> Where the **** is the RTX patch for Shadow of the Tomb Raider? Is this pissing anyone else off? I've held off playing the game waiting for it, and I have a ton of other games to play right now so it's not a big deal, but it's annoying nonetheless.


We don't care!

We are playing Battlefield V online with friends


----------



## domrockt

So after:
1x KFA2 defective (artifacting after 3 days of use)
2x Palit OC defective (random crashes in all games after 5 days of use)
my third Palit seems to work. I put it under water with my beloved Hailea 250 water chiller and managed a humble overclock: http://www.3dmark.com/spy/5481922

Yes, it's messy, but I built it all this evening and need to clean up on Saturday.

My GPU temp never exceeds 35°C.

I will try to return the third Palit for an Amazon refund and change it for something else.

Any tips on how I can squeeze some more clocks out of the Palit? And yes, the KFA2 BIOS is already flashed.


----------



## Porphyro

cgcross said:


> VETDRMS said:
> 
> 
> 
> How are you forcing constant voltage?
> 
> 
> 
> In the Afterburner custom curve(CTRL+F) on your curve select the voltage you want to be constant at and move it to the overclock speed you want on the graph then hit CTRL+L. A yellow line will appear at that voltage level. If a white line is also vertical at a lower voltage keep moving the speed down for voltages to the left of the yellow line until it is the only vertical line you see.

Hi guys,

New to all this but very keen to learn. I'm having what seems to be a fairly unique issue which I'm hopeful you can help me with. I recently built a watercooled computer with the following specs: 9900K, two Asus Dual 2080 Ti OC (both on EK Velocity blocks and connected to 2x360 rads), Maximus XI Formula, 970 Pro NVMe 1TB, and a ridiculous 64GB of Corsair Platinum RAM. What's happening is that the GPU wildly fluctuates its core clock speed in any render test (like GPU-Z's) or benchmark such as Firestrike/Time Spy, to the point where in the latter benches I hit measly scores of 17k/7k respectively. From what I can tell, 7k is very low for this card when most hit 13k. As for Firestrike, only once did it hit 23k; it usually sits between 17k and 19k. I have tried uninstalling/reinstalling drivers and reinstalling Windows from scratch, but nothing seems to let me keep clocks stable. Precision X1 and Afterburner both show the wild jumps in clocks, from the 1400s MHz to 1600 to 1500 to 1700 to 1900 and back to 1400. It's wildly chaotic, and the drops and peaks seem too extreme. I have a 1080 Ti also and its clocks stay pretty solid in one place (it also beats my 2080 Ti in most Firestrike runs at default settings). I checked the OC scanner in Afterburner and it states that the dominant limiters include "no load", which from what I've read is normal since nothing is loaded to stress it, but it also gives me "power" as a limiter. The voltage curve it spits out afterwards has zero increase over the main curve.

Does anybody have any ideas as to what's going on? Do I need to flash the vBIOS? I've never done this before, so I could do with some guidance or being pointed in the right direction. Does my GPU actually have a better BIOS to flash to it?

Could it be software? Could it be the PSU (an Asus Thor 1200W with CableMod 8-pin cables)? Software clashing with it?

Windows 10 currently sits at build 1809, Nvidia drivers at 417.35. Not really sure what else to do.

This is my first ever post, so any help would be greatly appreciated. Thank you kindly in advance.


----------



## Hulk1988

Got my Waterforce Xtreme now. Beautiful RGB 

I am happy with the results: 2130/8040


----------



## J7SC

Hulk1988 said:


> Got my Waterforce Xtreme now. Beautiful RGB
> 
> I am happy with the results: 2130/8040
> (cut)



Congrats! And yes, that RGB show is kind of mesmerizing 



...kind of miss it now as I'm finishing my build (trying some copper hard-tubing :sad-smile but SLI means really tight bends)


----------



## J7SC

Porphyro said:


> Hi guys,
> 
> New to all this but very keen to learn. I'm having what seems to be a fairly unique issue which I'm hopeful you can help me with. I recently built a watercooled computer with the following specs: 9900k, Asus Dual 2080 ti OC both on EK Velocity blocks and connected to 2x360 rads, Maximus XI Formula, 970 NVME Pro 1TB and ridiculous 64GB Corsair Platinum Ram). What's happening is the GPU wildly fluctuates the core clock speeds in any render test (like gpu-z) or benchmark such as Firestrike/Timespy to the point where in the latter benches I hit a measly 17k/7k score respectively. From what I can tell 7 k is very low for this card when most hit 13k. As for Firestrike, only once did it hit 23k but usually sits at between 17k-19k. I have tried uninstalling/reinstalling drivers, restarting Windows from scratch but nothing seems to allow me to keep clocks stable. Precision X1 or Afterburner both show the wild jumps in clocks from 14k MHz to 16 to 15 to 17 to 19 to 14 etc. *It's wildly chaotic and the drops and peaks seem too extreme*. I have a 1080 ti also and its clocks stay pretty solid in one place (it also beats my 2080 ti in most Firestrike runs at default settings).
> (cut)



Dual 2080 Tis could be a PSU issue; have you tried just switching to a different pair of 8-pins on the PSU? I don't know if you have the 320 W or 380 W BIOS on both cards, but if it is 380 W each and you have a heavily overclocked 9900K, you're getting close. Personally, I prefer multi-rail PSUs (I don't know what your Asus is), but the last thing I am looking for is a single- vs. multi-rail PSU debate (sometimes worse than AMD vs. Nvidia...). I have both, but use multi-rails for SLI builds after a long conversation with a senior Asus tech.

Also, your cooling system sounds more than adequate (max GPU temps during benching btw ?) unless you have some badly seated thermal pads etc.

The wild swinging / peaks and valleys suggest you should probably try the MSI AB voltage curve as your next step (as you already indicated above), since it introduces a fixed voltage. What was the outcome with the voltage curve? Still wild swings? Also, I would lower the fixed voltage and OC a bit with the curve, i.e. 1.075 V max, and then work your way up


----------



## Jpmboy

Porphyro said:


> Hi guys,
> 
> New to all this but very keen to learn. I'm having what seems to be a fairly unique issue which I'm hopeful you can help me with. I recently built a watercooled computer with the following specs: 9900k, Asus Dual 2080 ti OC both on EK Velocity blocks and connected to 2x360 rads, Maximus XI Formula, 970 NVME Pro 1TB and ridiculous 64GB Corsair Platinum Ram). What's happening is the GPU wildly fluctuates the core clock speeds in any render test (like gpu-z) or benchmark such as Firestrike/Timespy to the point where in the latter benches I hit a measly 17k/7k score respectively. From what I can tell 7 k is very low for this card when most hit 13k. As for Firestrike, only once did it hit 23k but usually sits at between 17k-19k. I have tried uninstalling/reinstalling drivers, restarting Windows from scratch but nothing seems to allow me to keep clocks stable. Precision X1 or Afterburner both show the wild jumps in clocks from 14k MHz to 16 to 15 to 17 to 19 to 14 etc. It's wildly chaotic and the drops and peaks seem too extreme. I have a 1080 ti also and its clocks stay pretty solid in one place (it also beats my 2080 ti in most Firestrike runs at default settings). I checked the OC scanner in Afterburner and it states that the dominant limiters include the "no load" which from what I've read is normal as nothing is loaded to stress it but it also gives me "power" as a limiter. The voltage curve it spits out after has a zero increase from the main curve.
> 
> Does anybody have any ideas as to what's going on? Do I need to flash the vbios? Never done this before so could do with some guidance or being pointed in right direction. Does my GPU actually have better bios to flash to it?
> 
> Could it be software? Could it be the PSU (An Asus Thor 1200w with cablemod 8pin cables)? Software clashing with it?
> 
> Windows 10 currently sits at 1809 build, Nvidia drivers at 417.35. Not really sure what else to do.
> 
> This is my first ever post so any help would be greatly appreciated. Thank you kindly in advance.


What peak temp do the cards hit in Time Spy Extreme (even with no OC)? Use GPU-Z and toggle the temp window to "max"; you need two instances, each pointing at an individual card.
1) In the NV control panel, set the 3D options as shown below.
2) Download the Galax Xtreme Tuner (RTX beta). Use the voltage slider (set it to the peak shown in GPU-Z) and click the radio button; the card is then locked in P0. Now test the clock swings. The Galax tuner or the Afterburner beta work fine; the Galax tool works better for ensuring the P0 state, IMO.
3) Of course, move the power slider to max for both cards (regardless of which control tool, or BIOS).
4) Flashing to a high-current BIOS will help with power-limit clock-bin drops, but what you describe does not sound like a limit-induced bin drop.
5) The Asus 1200W PSU is plenty; if it were not, it would just OCP. Be sure to connect the SLI power connector on your motherboard (Molex); it saves the ATX power some extra load. :thumb:
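If running two GPU-Z instances feels clumsy, per-card clocks and temps can also be logged from the command line: nvidia-smi has a CSV query mode for exactly this. A small Python sketch that parses one snapshot of that output; the sample string stands in for real output, and `parse_smi_snapshot` is an illustrative helper, not part of any tool:

```python
# Parse one snapshot of nvidia-smi's CSV query output into per-GPU dicts.
# The query itself would be something like:
#   nvidia-smi --query-gpu=index,temperature.gpu,power.draw,clocks.gr \
#              --format=csv,noheader,nounits -l 1
import csv
from io import StringIO

FIELDS = ("index", "temperature.gpu", "power.draw", "clocks.gr")

def parse_smi_snapshot(text: str) -> list:
    """Turn one CSV snapshot (noheader, nounits) into a dict per GPU."""
    return [dict(zip(FIELDS, (v.strip() for v in row)))
            for row in csv.reader(StringIO(text)) if row]

sample = "0, 51, 380.2, 2070\n1, 54, 372.8, 2055\n"
for gpu in parse_smi_snapshot(sample):
    print(gpu["index"], gpu["clocks.gr"], "MHz")
```

Logging a minute of this during a bench run makes a limit-induced bin drop (regular, small steps) easy to tell apart from the chaotic swings described above.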


----------



## J7SC

I do have an older Corsair AX1200 which I cannot use with SLI anymore, as one card (no matter which) will choke and swing heavily, even after swapping PCIe cables and feeding the PCIe lanes extra power via Molex or an additional PCIe connector, all before OCP. I only use that one now for single-card builds. Then again, I am not sure where his 9900K is overclocked to and how many watts max it would use, in addition to up to 2x 380 W via the GPUs

That raises another potential tip for the poster: While you should bench Firestrike etc with your power slider all the way to the right (max) under normal circumstances, you might try 110% or even a bit less to see if the wild swings / peaks you reported are still there. That will tell you if there is some sort of power delivery issue.


----------



## Jpmboy

J7SC said:


> I do have an older AX1200 w Corsair which I cannot use w/ SLI anymore as one card (no matter which) will choke and swing heavily, even after swapping PCIe cables, and the PCIe lanes getting extra power via Molex or additional PCIe, all before OCP. I only use that one now for single card builds. Then again, I am not sure where is 9900k is overclocked to and how many watts max it would use, in addition to up to 2x 380w via the GPUs
> 
> That raises another potential tip for the poster: While you should bench Firestrike etc with your power slider all the way to the right (max) under normal circumstances, *you might try 110% or even a bit less* to see if the wild swings / peaks you reported are still there. That will tell you if there is some sort of power delivery issue.


 good idea.
FYI, one of my AX1200s went bad and would OCP on 2 TXPs. Sent it back to Corsair and they sent a new HX1200i, which is filling the AX1200's shoes nicely. IMO, the AX1200 is one of the best ever made. I still have one running a 4960X / 32GB of 1.65V RAM and a water-cooled 295X2 (really a power sink, very inefficient). Strong PSU. :thumb:


----------



## J7SC

Jpmboy said:


> good idea.
> FYI - one of my AX1200's went bad and would OCP on 2 TXPs. Sent it back to corsair and they sent a new HX1200i, which is filling the Ax1200 shoes rightly. IMO, the AX1200 is one of the best ever made. Still have one running a 4960X/32GB 1.65V ram, a water cooled 295x2 (really a power sink - very inefficient). Strong PSU. :thumb:



yeah. I should have RMAed the AX1200, but I have two anyway. The other one has been flawless, and on a single card, i.e. for a testbench to try out a new card, it's just fine. I also still have a Lepa G1200 which I used exactly once some years ago... even though it was cheap and there's nothing wrong with it, somehow it scares me to hook expensive equipment up to it :headscratch:


----------



## pewpewlazer

Got this massive monstrosity in the mail tonight. EVGA 2080 Ti XC Ultra. I'm still trying to come to grips with the fact that I spent more on a single video card than I did rent this month, but hey, at least it has the RGBz!

The humongous triple-slot design meant I had to move my 10G NIC down to the bottom PCIe slot... only to find out that slot is disabled when you run an M.2 drive. Had to tear apart my apartment trying to figure out where I hid the Cat6 cable long enough to reach my computer. Good times. 

So all said, I traded 90% of my network gigabits and over 100% of a month's rent for some RTX. Great deal, eh?

I hadn't done much of any research on what overclocks people are seeing on average, but I had high hopes when I saw people complaining that Afterburner maxed out the mem clock slider at "only" 1000 MHz. I installed the beast, fired up Superposition, and began ramping up the mem speed with a grin on my face. 500... 700... 850... 900 crash. WHOMP WHOMP. Oh well, lost the silicon lottery on video card RAM yet again. Tried 130% PL and +850 mem, crash. Sitting at +750 now. Maybe I could get 800 or 825 or so. Meh.

Glad to have gotten away from SLI, and those god awful annoying sounding EVGA Hybrid cooler AIOs I was using. Guess it's time to start shopping for water blocks...


----------



## Porphyro

J7SC said:


> Porphyro said:
> 
> 
> 
> Hi guys,
> 
> New to all this but very keen to learn. I'm having what seems to be a fairly unique issue which I'm hopeful you can help me with. I recently built a watercooled computer with the following specs: 9900k, Asus Dual 2080 ti OC both on EK Velocity blocks and connected to 2x360 rads, Maximus XI Formula, 970 NVME Pro 1TB and ridiculous 64GB Corsair Platinum Ram). What's happening is the GPU wildly fluctuates the core clock speeds in any render test (like gpu-z) or benchmark such as Firestrike/Timespy to the point where in the latter benches I hit a measly 17k/7k score respectively. From what I can tell 7 k is very low for this card when most hit 13k. As for Firestrike, only once did it hit 23k but usually sits at between 17k-19k. I have tried uninstalling/reinstalling drivers, restarting Windows from scratch but nothing seems to allow me to keep clocks stable. Precision X1 or Afterburner both show the wild jumps in clocks from 14k MHz to 16 to 15 to 17 to 19 to 14 etc. *It's wildly chaotic and the drops and peaks seem too extreme*. I have a 1080 ti also and its clocks stay pretty solid in one place (it also beats my 2080 ti in most Firestrike runs at default settings).
> (cut)
> 
> 
> 
> 
> Dual 2080 TIs could be a PSU issue, have you tried just switching to a different pair of 8 pins on the PSU ? Don't know if you have the 320w or 380w Bios for both cards, but if it is 380w per and you have a heavily overclocked 9900K, you're getting close. Personally, I prefer multi-rail PSUs (don't know what your Asus is) but the last thing I am looking for is a single-vs-multi rail PSU convert discussion (sometimes worse than AMD vs NVidia...). I have both, but use multi-rails for SLI builds after a long conversation with a senior Asus tech.
> 
> Also, your cooling system sounds more than adequate (max GPU temps during benching btw ?) unless you have some badly seated thermal pads etc.
> 
> The wild swinging / peaks and valleys suggest that you probably should try the MSI AB voltage curve (as you already indicated above) as it introduces a fixed voltage as your next step. What was the outcome w/ voltage curve ? Still wild swings ? Also, I would lower the fixed voltage and OC a bit w/ the curve, i.e. 1.075v max and then work your way up
Click to expand...

I'll try this. I should perhaps correct something though. I don't have two Tis, only one. What I meant by saying "both under EK waterblock" was the CPU and GPU, not two GPUs. I have a single Asus Dual 2080 Ti OC edition and it shows these wild clock jumps at default. 

I'll start trying what people have suggested here but am really a massive beginner. This is only the second computer I've ever built but decided to go all out and try my hand at custom cooling.


----------



## J7SC

Porphyro said:


> I'll try this. I should perhaps correct something though. I don't have two tis, only one. What I meant by saying "both under EK waterblock was the CPU and GPU, not two GPUs. I have a single Asus Dual 2080ti OC edition and it shows these wild clock jumps on default.
> 
> I'll start trying what people have suggested here but am really a massive beginner. This is only the second computer I've ever built but decided to go all out and try my hand at custom cooling.



...well, a single GPU is a somewhat different matter re. PSU. Still, use the MSI voltage curve (not to the full 1.093v initially, but as suggested before to 1.075v max to start with) and also initially reduce the Power Target to 110% instead of max (122-130%, depending on the specific BIOS etc). Are the swings still there? Even before gaming or 3DMark, I would just run the GPU-Z render as a start, also watching the GPU voltage while the render test runs. It should be perfectly stable after taking the above steps. If it is all stable, try 115% and so forth.


----------



## Porphyro

Jpmboy said:


> Porphyro said:
> 
> 
> 
> Hi guys,
> 
> New to all this but very keen to learn. I'm having what seems to be a fairly unique issue which I'm hopeful you can help me with. I recently built a watercooled computer with the following specs: 9900k, Asus Dual 2080 ti OC both on EK Velocity blocks and connected to 2x360 rads, Maximus XI Formula, 970 NVME Pro 1TB and ridiculous 64GB Corsair Platinum Ram). What's happening is the GPU wildly fluctuates the core clock speeds in any render test (like gpu-z) or benchmark such as Firestrike/Timespy to the point where in the latter benches I hit a measly 17k/7k score respectively. From what I can tell 7 k is very low for this card when most hit 13k. As for Firestrike, only once did it hit 23k but usually sits at between 17k-19k. I have tried uninstalling/reinstalling drivers, restarting Windows from scratch but nothing seems to allow me to keep clocks stable. Precision X1 or Afterburner both show the wild jumps in clocks from 14k MHz to 16 to 15 to 17 to 19 to 14 etc. It's wildly chaotic and the drops and peaks seem too extreme. I have a 1080 ti also and its clocks stay pretty solid in one place (it also beats my 2080 ti in most Firestrike runs at default settings). I checked the OC scanner in Afterburner and it states that the dominant limiters include the "no load" which from what I've read is normal as nothing is loaded to stress it but it also gives me "power" as a limiter. The voltage curve it spits out after has a zero increase from the main curve.
> 
> Does anybody have any ideas as to what's going on? Do I need to flash the vbios? Never done this before so could do with some guidance or being pointed in right direction. Does my GPU actually have better bios to flash to it?
> 
> Could it be software? Could it be the PSU (An Asus Thor 1200w with cablemod 8pin cables)? Software clashing with it?
> 
> Windows 10 currently sits at 1809 build, Nvidia drivers at 417.35. Not really sure what else to do.
> 
> This is my first ever post so any help would be greatly appreciated. Thank you kindly in advance.
> 
> 
> 
> what peak temp do the cards hit in Timespy extreme (even with no OC)?. Use GPUZ and toggle the temp window to "max". need two instances, each pointing to the individual cards.
> 1) in NV CP, set the 3D options like shown below.
> 2) down load the Galax extreme tuner (rtx beta). use the voltage slider (set it to the peak shown in gpuz) click the radio button. Card is locked in P0. Now test the clock swings. Galax tuner or a-beta work fine. The galax tool works better to ensure P0 state IMO.
> 3) of course, move the power slider to max for both cards. (regardless of which control tool, or bios)
> 4) Flashing to a high current bios will help with powerlimit clock bin drops - but what you describe does not sound like a limit-induced bin drop.
> 5) the ASUS 1200W PSU is plenty. If it were not, it would just OCP. It has plenty of power. Be sure to connect the SLI power to your MB (molex). Saves the ATX power some extra load. :thumb:
Click to expand...

Thanks for the info. I have more questions of course as I am really quite new to a lot of things being suggested here and could do with some further insight/help.

Just to be clear, I have one 2080ti on waterblock not two. My message meant to say a waterblock on the CPU and GPU a piece not two GPUs. 

I can't get "normal" scores on Time Spy, much less the Extreme version (which averages really low scores). In graphics test two the FPS goes down to 11-20 fps, for example. My best Time Spy (normal version) score was 8k. I don't know how you toggle temp settings in GPU-Z, as there's nothing to toggle as far as I can see. The temps seem to peak at 60 degrees and never go higher; in fact they climb there pretty much immediately and stay there. The system is one giant loop passing through two 360 rads. The CPU never goes above 50-ish degrees, and usually sits in the 40s. 
1. I can't see any options you are referring to for the NV control panel. What settings are you suggesting I try?
2. I'll try the galax tuner and let you know. I've been using MSI AB (beta version) and latest precision X1. Both show some wild jumps. 

I removed the CableMod PCIe cables and replaced them with the Asus Thor ones. There was an immediate jump in score in Firestrike to 23k. But then I ran Time Spy and it peaked at 8k. No OC, nothing tweaked, just default. This is low imo. 

Could you describe what exactly you mean by a limit-induced bin drop? What exactly do you think is happening? 

So much for Jensen's "it just works". 

Thanks again. It's much appreciated.


----------



## Porphyro

J7SC said:


> Porphyro said:
> 
> 
> 
> I'll try this. I should perhaps correct something though. I don't have two tis, only one. What I meant by saying "both under EK waterblock was the CPU and GPU, not two GPUs. I have a single Asus Dual 2080ti OC edition and it shows these wild clock jumps on default.
> 
> I'll start trying what people have suggested here but am really a massive beginner. This is only the second computer I've ever built but decided to go all out and try my hand at custom cooling.
> 
> 
> 
> 
> ...well, single GPU is a somewhat different matter re. PSU. Still, use the MSI voltage curve (and not to the full 1.093v initially, but as suggested before to 1.075v max to start with) and also initially reduce the Power Target to 110% instead of max (122-130%, depending on the specific Bios etc). Are the swings still there ? Even before gaming or 3DMark, I would just run the GPUz render as a start - watching also the GPU voltage while running the render test. It should be perfectly stable after taking the above steps. If it is all stable, try 115% and so forth
Click to expand...

Hi again,

I have used the voltage curve (the word curve appears in the MHz boost section) but have no idea how to set the volts to 1.075 as you suggest. I've never tinkered with voltages, you see, so this is all new to me. How exactly do I do this? Any videos I could watch or a step-by-step guide you could point me to? I know you can set a yellow line on a point in the curve (Ctrl+L) but I'm not sure what exactly that's doing. Could you help me better understand this please? I'll watch the GPU voltage in the render as you suggest. I've also limited power to 110%. The max the card's BIOS seems to allow is 120% in any case.

I checked the VDDC stats while GPU-Z was running its render, and then during MSI AB's OC test, and the voltage hovers between 0.72-0.78V with occasional spikes above or below that. In short, it's never perfectly straight. Is it supposed to be? 

I also noticed the core clock doesn't jump around while the OC scanner is scanning. Only when I run a render or a 3DMark bench does it fluctuate wildly.

I also spoke to the store I bought it from. The guy suggested it might be the Phanteks riser cable (which I also bought from them). Any ideas on riser cables causing this sort of thing? Changing the cables to the ones that came with the PSU also introduced coil whine from the GPU when tests/renders start up; it dies down after a bit. Thoughts?


----------



## Jyssi

boli said:


> Um, 80C during time spy short bench, and 5C above water temp on idle? I think your EK block might not be seated properly! I get to 51C GPU temp during a 20 minute time spy stress test, and it drops by 14C immediately after test ends (see pic). Very quiet fans (~800 rpm). Maybe check earlier posts about not using thermal pads F, using shortest screws, not tightening screws until PCB is bent and using more thermal paste than usual.


What is your water temperature after heavy testing? 
Loosened screws a bit and it's a bit better.
I did witcher 3 afk (gpu 99% ~322W) overnight and it averaged 68C with 70C peak, 31C delta.


----------



## boli

Jyssi said:


> What is your water temperature after heavy testing?
> Loosened screws a bit and it's a bit better.
> I did witcher 3 afk (gpu 99% ~322W) overnight and it averaged 68C with 70C peak, 31C delta.


Your high 31C diff between GPU and water indicates that the block is still not thermally coupled properly.

As for my water temps: I have neither flow meter nor a water temp sensor installed, so I _estimate_ that my *idle* GPU temp is probably pretty close to the water temp. To a lesser degree (no pun intended) this holds for the CPU as well, but from my experience the GPU seems to be a little bit more responsive temperature indicator. In your case the GPU temp wouldn't work for that, because of the bad thermal coupling of the water (block) to the GPU.

Anyway: as I mentioned, my GPU went up to 51C in a 20 minute stress test, and then immediately dropped by 14C when the test was done, so I assume my water is around 37C (51-14). I figure this is somewhat plausible, as my room temperature is 23C. This would make my deltaT 14C (37-23), and thus my cooling setup about average/mediocre (from what I gathered, good cooling setups have a deltaT of below 10C).
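For what it's worth, that estimate is just two subtractions; here it is spelled out as a small Python sketch (all numbers are the ones from this post, nothing measured - I have no water temp sensor):

```python
# Inferring water temp from the GPU temp drop at end-of-test, then deltaT.
gpu_load_temp = 51       # deg C, GPU peak during the 20 min stress test
drop_after_test = 14     # deg C, immediate drop once the test ends
room_temp = 23           # deg C ambient

water_temp_estimate = gpu_load_temp - drop_after_test  # ~37 C
delta_t = water_temp_estimate - room_temp              # water-to-air, ~14 C

print(f"estimated water temp: {water_temp_estimate} C")  # 37 C
print(f"deltaT (water - room): {delta_t} C")             # 14 C
```

The whole thing hinges on the assumption that an idle (or just-unloaded) GPU die sits very close to water temp, which the posts below from owners with actual reservoir sensors roughly confirm (2-3C above water).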

I'd say you don't need to fret too much. Yes, the temps are high, so your performance will be lower (like under air), but the GPU should be fine for now. Whenever you have time to drain your loop and reinstall the block yourself, with precision and care, I'd recommend doing so (buy new 0.5mm and 1mm thermal pads beforehand if you need to). Possibly consider experimenting with not using the 8mm wide pads on the coils (and if you do use them, make sure they're < 8mm wide). Personally I installed all thermal pads, put a blob of paste on the GPU, then laid the block onto the GPU without tightening any screws, then removed it again and checked the contact area. Seemed fine to me (pic in earlier post), and should get better with screws (not tightened so much as to bend the PCB). So no rush, enjoy the holidays. 

This is only my first-and-a-half water cooling setup, and I started ~2 years ago, so consider I'm not that experienced. Maybe more experienced peeps feel like chiming in.

P.S. previously I could get the water a few degrees higher by running Furmark + Prime95 for an hour, but will have to retest that because of recently adjusted setup.


----------



## Gustavoh

boli said:


> Your high 31C diff between GPU and water indicates that the block is still not thermally coupled properly.
> 
> As for my water temps: I have neither flow meter nor a water temp sensor installed, so I _estimate_ that my *idle* GPU temp is probably pretty close to the water temp. To a lesser degree (no pun intended) this holds for the CPU as well, but from my experience the GPU seems to be a little bit more responsive temperature indicator. In your case the GPU temp wouldn't work for that, because of the bad thermal coupling of the water (block) to the GPU.
> 
> Anyway: as I mentioned my GPU went up to 51C in a 20 minute stress test, and then immediately dropped by 14C when the test is done, so I assume my water is around these 37C degrees (51-14). I figure this is somewhat plausible, as my room temperature is 23C. This would make my deltaT 14C (37-23), and thus my cooling setup be about average/mediocre (from what I gathered, good cooling setups have a deltaT of below 10C).
> 
> I'd say you don't need to fret too much. Yes the temps are high, so your performance will be lower (like under air), but the GPU should be fine for now. Whenever you have time to drain your loop and reinstall the block yourself, with precision and care, I'd recommend doing so (buy new 0.5mm and 1mm thermal pads beforehand if you need to). Possibly consider experimenting with not using the 8mm wide pads on the coils (and if you do use, make sure they're < 8mm wide). Personally I installed all thermal pads, put a blob of paste on the GPU, then laid the block onto the GPU, without tightening any screws, then removed it again, and checked the contact area. Seemed fine to me (pic in earlier post), and should get better with screws (not tightened as much to bend PCB). So no rush, enjoy the holidays.
> 
> This is only my first-and-a-half water cooling setup, and I started ~2 years ago, so consider I'm not that experienced. Maybe more experienced peeps feel like chiming in.
> 
> P.S. previously I could get the water a few degrees higher by running Furmark + Prime95 for an hour, but will have to retest that because of recently adjusted setup.


I just got a temp sensor installed in my reservoir, and my temps are similar to yours (water tops out at 37C, GPU around 48-53C in gaming, room temp around 22-24C). I agree that 31C between water and gpu block suggests it is not installed properly. Next time I flush I'll reinstall the block and see if I can lower that delta a little more (I think I skimped on paste the first time around).


----------



## boli

Gustavoh said:


> I just got a temp sensor installed in my reservoir, and my temps are similar to yours (water tops out at 37C, GPU around 48-53C in gaming, room temp around 22-24C).


Thanks for sharing, that's already quite reassuring. Do you also have the EK _Vector_ block? And is the idle GPU temp only a little above (or even at) the water temperature in your experience?
Just wondering how well my estimate works – I am considering getting a D5 NEXT pump, which includes a temp sensor, and attaching a flow sensor (the built-in one only estimates flow from rpm, which apparently doesn't work well with fluids other than their own).


----------



## boli

Jyssi said:


> What is your water temperature after heavy testing?





boli said:


> P.S. previously I could get the water a few degrees higher by running Furmark + Prime95 for an hour, but will have to retest that because of recently adjusted setup.


OK, so I did a quick retest with 50 minutes of Furmark + Prime95, despite my Phanteks Evolv X fan hub not working properly. As a result, my system is louder and cooler than usual, but should serve to give you ballpark figures:

GPU: max 53C
Water (estimate): 40C

After the test both GPU and CPU agree on a temp of 39C, so I assume this to be only a little above the real water temp. 

The fan hub issue is that the first fan (which returns rpm to the CPU_FAN socket) runs much slower than the 4 other fans (even though it could do a max of 1500 rpm). Have not found a solution yet, despite having tried lots of stuff people recommended, upgrading the BIOS etc. But never mind, that's a bit off topic.


----------



## profundido

J7SC said:


> As others stated, excellent post, and very useful:thumb:
> Quick question: When you tried the other Bios sets (ie Windforce Power Limit up to 140%), did the actual max watt reading go beyond 380w (where my stock Aorus Xtreme cards stop), or was it higher ? As to early cards binning higher, that may or may not be the case, but when I bought my second one, I first made sure that it was very close re. serial numbers since I knew what the first one could do. Even that is no guarantee, though.
> 
> In any case, you saved me a lot of time re. trial and error and flashing two cards with different Bios... I am finally ready for assembly and putting the loops together for the 2080 TIs after testing everything out on test benches, just watching some paint drying (literally ) as I'm going for a black-and-white theme > the best I could think of w/ all that RGB everywhere else (mobo, 2x 2080 TIs, GSkill RGB, fans... )


I didn't pay attention to it at first, but I repeated the test, playing around with how the different BIOSes react to throttling and voltage, and this time I paid attention: no, none of them exceeded around 380W. So it looks like the max power limit also depends on the offset the BIOS uses, just like the core clock. In other words, 122% max PL on the Aorus Extreme Waterforce BIOS equals the same max wattage as 140% max PL on the Aorus Windforce. In the attachment below I put a GPU-Z screenshot after a test with the Waterforce stock BIOS.


Also retested the HOF BIOS again. To be honest, the more I test, the more confused I become, as there are so many factors at play that influence each other. This reminds me of the Rubik's Clock, where changing one dial affects all the others and the results are never what you expect:






The big point I learned in this round of testing is that simply fixing the voltage high and constant causes a lot of throttling: you immediately hit that internal hard 380W limit, which leads to less performance than, for instance, the standard curve the BIOS comes with. Your voltage, and consequently wattage, goes up, but because of internal throttling the card won't hold it steady and keeps falling back, resulting in lower performance overall. I guess there's a reason every BIOS comes with a 'standard' curve that the engineers tweaked to match that exact piece of hardware, increasing voltage sparingly to squeeze out maximum performance while staying under that 380W limit, especially when custom overclocking. As soon as you get too greedy and give a little too much voltage, even on a stable clock, you hit the limit and the card drops down, no matter how fixed you set your curve. Less is more and more is less, apparently. The trick is finding that mathematical 'extremum' where the card reaches the most performance for the least energy consumed against that 380W limit. 

The content also matters a lot and affects all the dials here. A specific scene in a game that steadily draws the same graphics load over a long time may hold your max fixed-voltage overclock fine, while the next scene kills it. The benchmarks are completely unequal in this regard too. So if you tune your overclock for Time Spy you might perform worse in game X, and if you tune it for game X you might suck in game Y...


----------



## boli

profundido said:


> So it looks like the max Power limit also depends on the offset the bios uses, just like the core clock. In other words 122% max PL on Aorus Extreme Waterforce bios equals the same max wattage as 140% max PL on Aorus Windforce.


Yes, and it's even simpler than that. Have you seen the two power figures for various BIOS in the very first post of this thread? You can find these in the _Advanced_ tab of GPU-Z too BTW. For example:

The FE comes with 260W power limit (=100% for this particular BIOS), and can increase that to 123%, so 260 * 1.23 = 320W.
Earlier Galax cards had a base 300W power limit (=100% for this particular BIOS), and can increase that to 126%, so 300W * 1.26 = 378W (commonly referred to as 380W).

Because % are relative to the base power limit, it's best to just state the absolute power limit in W.
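Since the percentages are relative to each BIOS's own base limit, the conversion to absolute watts is a one-liner; a quick sketch with the figures from this post (the helper name is mine):

```python
# Converting a BIOS's base power limit + max slider % into absolute watts.
def absolute_limit(base_watts, max_percent):
    """Absolute power limit in W at the slider's maximum position."""
    return base_watts * max_percent / 100

fe = absolute_limit(260, 123)      # Founders Edition BIOS: ~320 W
galax = absolute_limit(300, 126)   # early Galax BIOS: 378 W ("380W BIOS")

print(f"FE max draw:    {fe:.0f} W")     # 320 W
print(f"Galax max draw: {galax:.0f} W")  # 378 W
```

Which is exactly why two BIOSes with wildly different slider maxima (122% vs 140%) can end up at the same absolute wattage.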



profundido said:


> The trick is finding that mathematical 'extremum' point where the card reaches the most performance for the least energy consumed towards that 380W limit.


Indeed, and according to my testing my (more recent) KFA2 OC BIOS had "only" a max of 320W, which is pretty close to this sweet spot. For me it was close enough, so I went back to this max 320W BIOS, as the extra 60W of the 380W BIOS only gave ~2% more fps for 18% more power.

I assume that with hours of testing a better custom power/frequency curve could be found, but I'm quite happy as is.

At this point this performance/power draw thing should be becoming old news as it has been mentioned several times, in particular regarding der8auers dry ice OC of an RTX 2080, but personally it took a while to sink in.
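A rough sketch of the perf-per-watt trade-off described above, using the ballpark figures from this post (~2% more fps for the extra 60W; not a controlled benchmark):

```python
# Relative perf/watt of the "380W" BIOS vs the "320W" BIOS.
watts_low, watts_high = 320, 378       # absolute limits of the two BIOSes
fps_gain = 0.02                        # ~2% more fps on the higher limit

power_gain = (watts_high - watts_low) / watts_low    # ~18% more power
perf_per_watt = (1 + fps_gain) / (1 + power_gain)    # relative efficiency

print(f"extra power: {power_gain:.0%}, extra fps: {fps_gain:.0%}")
print(f"380W BIOS perf/watt vs 320W BIOS: {perf_per_watt:.2f}x")
```

In other words, the last 60W buys almost nothing, which is the same diminishing-returns picture der8auer's dry ice run showed.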


----------



## profundido

boli said:


> Yes, and it's even simpler than that. Have you seen the two power figures for various BIOS in the very first post of this thread? You can find these in the _Advanced_ tab of GPU-Z too BTW. For example:
> 
> The FE comes with 260W power limit (=100% for this particular BIOS), and can increase that to 123%, so 260 * 1.23 = 320W.
> Earlier Galax cards had base 300W power limit (=100% for this particular BIOS), and can increase that to 126%, so 300W * 1.26 = 378W (commonly referred to as 380W).
> 
> Because % are relative to the base power limit, it's best to just state the absolute power limit in W.
> 
> 
> 
> Indeed, and according to my testing my (more recent) KFA2 OC BIOS had "only" a max of 320W, which is pretty close to this sweet spot. For me it was close enough, so I went back to this max 320W BIOS, as the extra 60W of the 380W BIOS only gave ~2% more fps for 18% more power.
> 
> I assume that with hours of testing a better custom power/frequency curve could be found, but I'm quite happy as is.
> 
> At this point this performance/power draw thing should be becoming old news as it has been mentioned several times, in particular regarding der8auers dry ice OC of an RTX 2080, but personally it took a while to sink in.
> 
> Conclusion of said video starts at about 15 minutes.


I see what you mean. I guess it's only after fighting against these limits that you truly realize just how futile and pointless it is, lol. You're either lucky on the silicon or not, and clearly all the good bin samples are long gone by now. I missed the boat and have to accept it. Time to find the sweet spot for my card and call it a day so I can actually get to gaming.


----------



## EarlZ

I recently got a Gigabyte 2080 Ti Gaming OC and I am wondering where we're at with the 2000-series failures. Has NVIDIA resolved this already?


----------



## boli

profundido said:


> I see what you mean. I guess it's only after fighting against these limits you truly realize just how futile and pointless it is lol  You're either lucky on the silicon or not and clearly all the good bin samples are long gone by now. I missed the boat and have to accept it. Time to find the sweet spot for my card and call it a day so I can actually get to gaming


Well put. Someone wrote that the average frequency for water cooled cards was around 2070 MHz. This is where I'm at, so pretty much as expected, and at least not below that. Though irrationally anything above 2000 would have sounded fine to me.


----------



## VPII

boli said:


> Well put. Someone wrote that the average frequency for water cooled cards was around 2070 MHz. This is where I'm at, so pretty much as expected, and at least not below that. Though irrationally anything above 2000 would have sounded fine to me.


Unfortunately, or maybe fortunately, I have a Galax RTX 2080 Ti with the 380W max TDP. Understand that I do not have water cooling, as things are a bit slow in my country; maybe next year, with luck, we'll have some available. I found the max clock I can run is 2145, but that means nothing on air cooling, as the speed drops to about 2025 or so while running anything extreme. I also found that when I run the normal clocks and set the TDP to +127%, it increases heat to the point where I do not feel comfortable. I would like my card to stay below 80C on air. So I tested the OC scanner with MSI Afterburner and EVGA Precision X1. Both showed +199 core as a pass, but that +199 does not actually work; +180 is my max, tested with every benchmark. Currently I am back to stock 100% TDP with core up by +150 MHz and memory +500 MHz. This works pretty well, as the max power draw I saw while running Time Spy was between 295 and 300.8W, which basically means I do not need to touch the TDP slider.


----------



## Jpmboy

VPII said:


> Unfortunately, or maybe fortunately I have a Galax RTX 2080TI with the 380 watt max TDP. Now understand I do not have water cooling as in my country things are a bit slow so maybe next year with luck we'll have some available. I found that max clocks I can run was 2145 but that means nothing with air cooling as the speeds will drop to about 2025 or so while running anything extreme. I also found that when I run the normal clocks and set the TDP to +127% it increases heat to the point where I do not feel comfortable. I would like my card to stay below 80C on air. So I tested the OC scanner with MSI After burner and Evga Precision X1. Both showed +199core is a pass. Now that +199 does not actually work. +180 is my max clocks... well those I tested with every benchmark. Currently I am back with TDP stock 100% clocks up by +150mhz and memory +500mhz. This works pretty well as the max TDP I saw while running Time Spy was between 295 and 300.8 watt which basically means I do not need to touch the TDP slider.


hard to find those Galax cards for sale in any country.


----------



## Gustavoh

boli said:


> Thanks for sharing, that's already quite reassuring. Do you also have the EK _Vector_ block? And is the idle GPU temp only little above (or even at) the water temperature in your experience?
> Just wondering how well my estimate works – I am considering getting a D5 NEXT pump, which includes a temp sensor, and attach a flow sensor (the built-in one only estimates flow based on rpm, which apparently doesn't work well for other-than-their fluids).


I've got the EK Vector too. Just checked and the idle temp is 2C above water temp: 24C water, 26C GPU (and CPU) idle temps. I've also noticed that right after a gaming session, temps drop immediately to within 3C of water temp (idle GPU temps right after closing a game are 39-40C for a water temp of 37C). 

Just for reference, my clocks also max out at 2055-2070 (anything above that crashes). With the Galax 380W BIOS on my Asus Dual OC, at least I get very stable clocks at 2040-2055, with drops to 2025 or 2010 at the lowest (no custom curve, just +130 core and +500 mem).


----------



## Jyssi

boli said:


> OK, so I did a quick retest with 50 minutes of Furmark + Prime95, despite my Phanteks Evolv X fan hub not working properly. As a result, my system is louder and cooler than usual, but should serve to give you ballpark figures:
> 
> GPU: max 53C
> Water (estimate): 40C
> 
> After the test both GPU and CPU agree on a temp of 39C, so I assume this to be only little above the real water temp.
> 
> The fan hub issue is that the first fan (which returns rpm to CPU_FAN socket) runs much slower than the 4 other fans (could be max of 1500 rpm). Have not found a solution yet, despite having tried lots of stuff people recommended, upgraded BIOS etc. But nevermind, that's a bit off topic.


Thanks for replies.

I did furmark once more and can now be sure that I really need to fix it myself.


----------



## boli

VPII said:


> Now that +199 does not actually work. +180 is my max clocks...


Just to nitpick: +199 is actually +195, as it increases in steps of 15. Also, max clocks decrease with rising temperatures, so the +X number isn't all that important. What counts is the clock you get during benchmarking or gaming, which in my case is up to 2070 MHz (peaks of 2085 almost not worth mentioning), and on air it's typically more like the 2025 MHz you noted.
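The stepping can be sketched in a couple of lines (an illustration of my observation above, not documented NVIDIA behaviour; the function name is made up):

```python
def effective_offset(requested_mhz, step=15):
    """Round a requested core-clock offset down to the bin the driver applies."""
    return (requested_mhz // step) * step

print(effective_offset(199))  # 195 - "+199 is actually +195"
print(effective_offset(130))  # 120 - same bin as +120
print(effective_offset(135))  # 135 - first offset of the next bin
```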

I wouldn't trust that 2145 MHz max clock of yours unless you've been able to run it for half an hour or more, which I assume is quite impossible with air cooling.

If one cares mostly about 3DMark results, it's obviously easier to achieve high clocks for only short amounts of time, but personally I'd rather play games for hours on end. 

*Update:* Someone in this thread mentioned that the Microsoft DXR demos are a nice lightweight bench to find max clocks, without hitting the power or temp limit. Also they crash pretty fast if something is off so very handy to find max overclocks. Personally I also like Time Spy Extreme graphics test 2 and Shadow of the Tomb Raider bench for memory overclock tests, and Fire Strike Ultra graphics test 2 for general stability.



Jpmboy said:


> hard to find those Galax cards for sale in any country.


Hmm, I was under the impression that my KFA2 is actually a Galax, and KFA2 was just Galax's name in Europe? That may be wrong though. *Update*: the very first post of this thread also suggests they are the same.
(Furthermore I thought that Palit and Gainward belong to the same conglomerate, but on this I'm even more "just read it somewhere")



Gustavoh said:


> I've got the EK Vector too. Just checked and the idle temp is 2C above water temp: 24C water, 26C GPU (and CPU) idle temps. I've also noticed that right after a gaming session, temps drop immediately to within 3C of water temp (idle GPU temps right after closing a game are 39-40C for a water temp of 37C).
> 
> Just for reference, my clocks also max out at 2055-2070 (anything above that crashes). With the Galax 380W bios on my Asus Dual OC, at least I get very stable clocks at 2040-2055 (with drops to 2025 or 2010 at the lowest) (no custom curve, just +130 core and +500 mem).


Awesome, thank you for checking. Same nitpick as above BTW, your +130 is actually +120 (steps of 15). I also thought my card did 130, but later found that 120 up to 134 is always the same, and only 135 is faster.


----------



## VPII

boli said:


> Just to nitpick: +199 is actually +195, as it increases in steps of 15. Also, max clocks decrease with rising temperatures, so the +X number isn't all that important. What counts is the clock you get during benchmarking or gaming, which in my case is up to 2070 MHz (peaks of 2085 almost not worth mentioning), and on air it's typically more like the 2025 MHz you noted.
> 
> I wouldn't trust that 2145 MHz max clock of yours unless you've been able to run it for half an hour or more, which I assume is quite impossible with air cooling.
> 
> If one cares mostly about 3DMark results, it's obviously easier to achieve high clocks for only short amounts of time, but personally I'd rather play games for hours on end.
> 
> *Update:* Someone in this thread mentioned that the Microsoft DXR demos are a nice lightweight bench to find max clocks, without hitting the power or temp limit. Also they crash pretty fast if something is off so very handy to find max overclocks. Personally I also like Time Spy Extreme graphics test 2 and Shadow of the Tomb Raider bench for memory overclock tests, and Fire Strike Ultra graphics test 2 for general stability.
> 
> 
> Hmm, I was under the impression that my KFA2 is actually a Galax, and KFA2 was just Galax's name in Europe? That may be wrong though. (Furthermore I thought that Palit and Gainward belong to the same conglomerate, but on this I'm even more "just read it somewhere")
> 
> 
> Awesome, thank you for checking. Same nitpick as above BTW, your +130 is actually +120 (steps of 15). I also thought my card did 130, but later found that 120 up to 134 is always the same, and only 135 is faster.


You see, the problem is that even if I set my clocks so 2025 MHz is the max, which it holds while playing BFV for an hour or so (with the overclock set to +150 and without touching TDP), I still need to set it higher to drop to that point. If I set it to exactly 2025 I'll end up with 1980, 1965 and sometimes even 1950 as the stable clock throughout. You're right regarding KFA2, but in South Africa you only get Galax, so I assume KFA2 is the European brand.


----------



## boli

VPII said:


> You see, the problem is that even if I set my clocks so 2025 MHz is the max, which it holds while playing BFV for an hour or so (with the overclock set to +150 and without touching TDP), I still need to set it higher to drop to that point. If I set it to exactly 2025 I'll end up with 1980, 1965 and sometimes even 1950 as the stable clock throughout. You're right regarding KFA2, but in South Africa you only get Galax, so I assume KFA2 is the European brand.


Hmm, I'm not sure I understand 100%, but let's find out: when you say you "set your clocks to be 2025 MHz" I assume this: during gaming you saw that your card can hold say 1050 mV stable. So when idle you set +150 and see in the curve that this results in 2100 MHz (value from my idle 26C card right now). However during gaming the temps are a lot higher, and thus the curve a lot lower, so that 1050 mV only yields say 2025 MHz at 70C on air, or 2070 MHz at 50C on water. Might this be it? I mostly made up the numbers, but hope they get the point across.

Or is the "max" you mention the maximum of the curve at 1200 mV? From what I understand that is unreachable with realistic power limits and normal water/air cooling.
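To put rough numbers on this mechanism (all figures assumed for illustration; the real curve shift isn't this linear):

```python
def clock_at_temp(idle_clock_mhz, idle_temp_c, load_temp_c,
                  deg_per_bin=5, bin_mhz=15):
    """Estimate the boost clock at a load temperature, assuming one 15 MHz
    bin is lost per ~5 C of temperature rise (assumed rate)."""
    bins_lost = max(0, (load_temp_c - idle_temp_c) // deg_per_bin)
    return idle_clock_mhz - bins_lost * bin_mhz

print(clock_at_temp(2100, 26, 51))  # 2025 with these made-up numbers
```

Same idea as above: the offset you set at idle only tells you the top of the curve, not what you'll actually hold under load.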

Thanks about KFA2 == Galax confirmation.


----------



## profundido

Ok, here's the sweet spot I will settle for. This is the Galax bios btw. +140 locks up for me; +135 gives me the best result I've seen so far without locking up or throttling too much


----------



## boli

profundido said:


> Ok, here's the sweet spot I will settle for. This is the Galax bios btw. +140 locks up for me; +135 gives me the best result I've seen so far without locking up or throttling too much


Nice! BTW I thought that +135 is the same as +140 or +149 (granularity of 15 MHz), so the next step down would be +120, same as me. I could be wrong, but all my experiments pointed to this. Just keep this in mind if a game crashes on you at some point in the future. You can get more confidence in your OC by testing Fire Strike Ultra graphics test 2, the Microsoft DXR demos (linked above) and longer tests in general (read: gaming). I had to reduce my memory OC from 1000 to 800 due to BF V crashes, for example.


----------



## VPII

boli said:


> Hmm, I'm not sure I understand 100%, but let's find out: when you say you "set your clocks to be 2025 MHz" I assume this: during gaming you saw that your card can hold say 1050 mV stable. So when idle you set +150 and see in the curve that this results in 2100 MHz (value from my idle 26C card right now). However during gaming the temps are a lot higher, and thus the curve a lot lower, so that 1050 mV only yields say 2025 MHz at 70C on air, or 2070 MHz at 50C on water. Might this be it? I mostly made up the numbers, but hope they get the point across.
> 
> 
> 
> Or is the "max" you mention the maximum of the curve at 1200 mV? From what I understand that is unreachable with realistic power limits and normal water/air cooling.
> 
> 
> 
> Thanks about KFA2 == Galax confirmation.


You see, I've never seen more than 1.065V on the core; never needed it. If you set the core to max out at 2025 MHz, the curve combined with temperature dictates what you sit with. So 2025 may be good up to 45-50C, but after that (and there will be an after) the speed drops. But if you instead set +150 core, which equates to say 2100, the curve would show 2025 at around 65-70C, which is why 2025 is held. This is mostly relevant on air, as in my case.

Sent from my SM-G950F using Tapatalk


----------



## profundido

boli said:


> Nice! BTW I thought that +135 is the same as +140 or +149 (granularity of 15 MHz), so the next step down would be +120, same as me. I could be wrong, but all my experiments pointed to this. Just keep this in mind if a game crashes on you at some point in the future. You can get more confidence in your OC by testing Fire Strike Ultra graphics test 2, the Microsoft DXR demos (linked above) and longer tests in general (read: gaming). I had to reduce my memory OC from 1000 to 800 due to BF V crashes, for example.


I tested +100, +120, +130, +135 and +140 with different, incremental results. I most definitely had a bump going from +130 to +135 and again to +140. I also had a consistent "dx device hung" crash in the SOTTR benchmark every time I bench at +135 but not at +130, so the difference is real.


By the way, the SOTTR benchmark is awesome for testing the memory (and its overclock), since it loads over 7GB of memory on the card!


----------



## boli

VPII said:


> You see, I've never seen more than 1.065V on the core; never needed it. If you set the core to max out at 2025 MHz, the curve combined with temperature dictates what you sit with. So 2025 may be good up to 45-50C, but after that (and there will be an after) the speed drops. But if you instead set +150 core, which equates to say 2100, the curve would show 2025 at around 65-70C, which is why 2025 is held. This is mostly relevant on air, as in my case.


Alright, I _think_ we are in agreement. 



profundido said:


> I tested +100, +120, +130, +135 and +140 with different, incremental results. I most definitely had a bump going from +130 to +135 and again to +140. I also had a consistent "dx device hung" crash in the SOTTR benchmark every time I bench at +135 but not at +130, so the difference is real.


How interesting. The _displayed_ GPU clocks definitely change only in 15 MHz increments, and that's what AfterBurner's curves snap to as well.
Anyone know if behind the scenes the increments are finer, or that 5 MHz increments change something invisible that makes a difference? While the performance differential is not substantial in practice, I'd like to know so as not to spread misinformation.


----------



## J7SC

profundido said:


> I see what you mean. I guess it's only after fighting against these limits you truly realize just how futile and pointless it is lol You're either lucky on the silicon or not and clearly all the good bin samples are long gone by now. I missed the boat and have to accept it. Time to find the sweet spot for my card and call it a day so I can actually get to gaming





boli said:


> Well put. Someone wrote that the average frequency for water cooled cards was around 2070 MHz. This is where I'm at, so pretty much as expected, and at least not below that. Though irrationally anything above 2000 would have sounded fine to me.


Yeah, I had posted the average clocks from HWBot for air, water and LN2, and most folks here are hitting the air and water ones. In the real world, an extra 30 MHz means very little - it just makes a marginal difference in benches...but if folks 'compete' with someone else in a specific bench, a score of 18102 is > 18101, so out come the custom BIOSes. Some custom BIOSes, like the 380W Galax, won't change the overall wattage if you already had a factory BIOS going to 375-380W, but can change when a given speed bin kicks in, or how long it holds. Depending on your specific chip, that can be useful, or not - but it is not a NOS injector, so to speak. For that you have to try shunt mods, with the chance that your card gets 'less useful' if it goes wrong, never mind warranty issues.

Per the pic below, while I have tested each card on its own for about a week in a test bench, putting them together is a different matter, especially as I am working with materials, an OS and a CPU family that are all new to me - but that's the whole point of this build: going past my comfort zone, in a different direction than I used to with chillers, DICE or LN2 and all that jazz. THAT SAID, no matter what your build and main use & purpose, *the BEST thing you can do with your 2080 Ti, IMO, is to give it maximum cooling, given the NVIDIA GPU Boost algorithms that tie temps to power limits*.

Today I should make good progress on the build - I finished the December 22 shopping-mall-parking-lot derby :wth: and the pick-folks-up-at-the-airport-for-Christmas rodeo  ...I just need a bucket of espresso now


----------



## Snoopvelo

*Just got a EVGA RTX 2080 Ti FTW3 ULTRA*

Power limit: 124%

https://imgur.com/a/UyiVM2t


----------



## cerealkeller

Is the Galax 380W BIOS still the best thing for the FE? Nothing new with a higher power limit?


----------



## ESRCJ

cerealkeller said:


> Is the Galax 380W BIOS still the best thing for the FE? Nothing new with a higher power limit?


I believe so. That's what I'm using for my FE under water and there are only a few cases where I'm hitting the power limit. Graphics test 2 of Time Spy comes to mind. I'm also seeing it ever-so-slightly in Superposition (4K optimized).


----------



## VPII

boli said:


> Alright, I _think_ we are in agreement.
> 
> 
> 
> How interesting. The _displayed_ GPU clocks definitely change only in 15 MHz increments, and that's what AfterBurner's curves snap to as well.
> Anyone know if behind the scenes the increments are finer, or that 5 MHz increments change something invisible that makes a difference? While the performance differential is not substantial in practice, I'd like to know so as not to spread misinformation.


Hi boli, well it seems that with +150 core and +500 memory, and no adjustment to TDP, temp or voltage, I actually get 2040 MHz core while playing BFV. Not really sure why I play the game; I just got it with the RTX 2080 Ti I purchased, so I basically just use it for testing. Anyway, it seems my limit is a little better than I thought. It is also early morning on this side, so a little cooler; I'm sure mid-day the standard core speed will probably be 2025, maybe even less. TDP hovers predominantly between 275 and 290W... just once it hit 300W flat, but all the rest was below.


----------



## Madness11

Hey guys, does anyone have stuttering or freezing in games? I have it now and have no idea how to fix it... please help


----------



## UdoG

Does anyone have experience with the Seahawk EK X / Gigabyte NVIDIA GeForce RTX 2080 Ti 11GB AORUS XTREME WATERFORCE WB?


----------



## boli

profundido said:


> I tested +100, +120, +130, +135 and +140 with different, incremental results. I most definitely had a bump going from +130 to +135 and again to +140. I also had a consistent "dx device hung" crash in the SOTTR benchmark every time I bench at +135 but not at +130, so the difference is real.


The differences between 120 and 130 (which I believe are the same clock bin) _could_ be measuring inaccuracies. Small differences (and they are small) happen from test to test even at the same OC offset. I tried getting more precise averages with more tests per offset, see below.

However 130 and 135 are in different clock bins, so 135 crashing consistently strengthens the theory. So I would assume that 120-134 are all the same (and in your case: stable). If you feel like it, maybe try 134 and see what happens.



boli said:


> The _displayed_ GPU clocks definitely change only in 15 MHz increments, and that's what AfterBurner's curves snap to as well.
> Anyone know if behind the scenes the increments are finer, or that 5 MHz increments change something invisible that makes a difference? While the performance differential is not substantial in practice, I'd like to know so as not to spread misinformation.


So I tested some more, with 4 Fire Strike Ultra rounds per OC offset the results are still somewhat inconclusive. Note the Y-scale's min and max values, the results are _really_ close regardless (and would be near invisible if the scale started at 0).








I figured that doing longer tests may result in less variance, which is why I ran Time Spy Extreme stress tests. Each test is 20 loops; I picked the last two averages assuming the temperature (as one of many variables) would be most stable there.

119 MHz offset: 45.48 and 45.57 fps (I assume this would be in the 105 MHz bin)
120 MHz offset: 46.40 and 46.80 fps (I assume this would be in the 120 MHz bin)
134 MHz offset: 45.65 and 45.79 fps (I assume this would _also_ be in the 120 MHz bin)
135 MHz offset: 45.80 and 45.92 fps (I assume this would be in the 135 MHz bin)
*Retest below:*
120 MHz offset: …, 45.71, 45.79, 45.71 and 45.77 fps (could it have been 4*5*.xx in the first 120 MHz test, and me entering it wrong into my notes? Or did my room heat up enough to make a difference? The first seems more likely to me)

Overall I'm still not sure. I'll stick to 120 MHz because it doesn't really matter. ¯\_(ツ)_/¯
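For anyone repeating the experiment, the run-to-run spread is easy to summarise in Python (fps figures copied from my runs above, using the retest values for +120):

```python
from statistics import mean

# Time Spy Extreme stress-test averages per core-clock offset (MHz)
runs = {
    119: [45.48, 45.57],
    120: [45.71, 45.79, 45.71, 45.77],  # retest values
    134: [45.65, 45.79],
    135: [45.80, 45.92],
}
for offset, fps in sorted(runs.items()):
    print(f"+{offset}: mean {mean(fps):.2f} fps, spread {max(fps) - min(fps):.2f}")
```

The between-bin differences come out no larger than the within-bin spread, which is why I call the result inconclusive.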



VPII said:


> Hi boli, well it seems that with +150 core and +500 memory, and no adjustment to TDP, temp or voltage, I actually get 2040 MHz core while playing BFV. Not really sure why I play the game; I just got it with the RTX 2080 Ti I purchased, so I basically just use it for testing. Anyway, it seems my limit is a little better than I thought. It is also early morning on this side, so a little cooler; I'm sure mid-day the standard core speed will probably be 2025, maybe even less. TDP hovers predominantly between 275 and 290W... just once it hit 300W flat, but all the rest was below.





J7SC said:


> [snip]*the BEST thing you can do with your 2080 Ti, IMO, is to give it maximum cooling, given the NVidia GPU boost algorithms that tie temps to power limits*[snip]


This sums it up well. Good luck with your interesting-looking build.



VPII said:


> with +150 core and +500 memory, and no adjustment to TDP, temp or voltage, I actually get 2040 MHz core while playing BFV. Not really sure why I play the game; I just got it with the RTX 2080 Ti I purchased, so I basically just use it for testing. Anyway, it seems my limit is a little better than I thought.


Nice! BF V is a gorgeous game, particularly with HDR, and even if you don't like multiplayer, I found at least its four single player "war stories" well made, engaging and worth playing.
Maybe try RTX on for extra eye candy; however, getting 35 fps instead of 90 fps at UHD ultra made me disable RTX again rather quickly. It does look nice though. Not that I'd have time to notice it during the rather hectic multiplayer. 



UdoG said:


> Does anyone have experience with the Seahawk EK X / Gigabyte NVIDIA GeForce RTX 2080 Ti 11GB AORUS XTREME WATERFORCE WB?


You may want to search this thread or scroll up some, there are several Aorus WB owners here.


----------



## GanMenglin

UdoG said:


> Does anyone have experience with the Seahawk EK X / Gigabyte NVIDIA GeForce RTX 2080 Ti 11GB AORUS XTREME WATERFORCE WB?


I'm using the Seahawk EK X now.

Actually, I've purchased nearly the whole series of MSI 2080 Tis, from the Ventus OC to the Duke OC and now the Seahawk EK; the only one I didn't use is the Trio X.


----------



## Madness11

Guys, any help? Is the stutter because my graphics card is broken? Or what do I need to do? (I've tried changing drivers.)


----------



## GanMenglin

@zhrooms 

I've flashed my Seahawk EK X with the Galax 380W bios, but the maximum power only goes to around 340-350W; it never reaches 380W.

From what I've read, the Trio X shares the same PCB as the Seahawk EK, and you are using the Trio X with the 380W bios. What's your power limit?


----------



## UdoG

GanMenglin said:


> I'm using the Seahawk EK X now.
> 
> Actually, I've purchased nearly the whole series of MSI 2080 Tis, from the Ventus OC to the Duke OC and now the Seahawk EK; the only one I didn't use is the Trio X.


Thanks.
How does the card perform?


----------



## J7SC

Madness11 said:


> Guys, any help? Is the stutter because my graphics card is broken? Or what do I need to do? (I've tried changing drivers.)



...there can be several reasons, and there's simply not enough info in your post. You need to narrow it down a bit. First, open GPUz and MSI AB, and set the power target to less than max (i.e. 110% if it goes up to 122%, or 115% if the max is 126%) in MSI AB. Also ensure that in MSI AB's options you have checked GPU voltage recording. Leave MSI AB open, set GPUz to 'sensors', open another GPUz window and run the render test for a few minutes (watching temps in the process). 

Does the GPU voltage 'swing' a lot and often, or is it mostly stable? In MSI AB, the bottom screen can be expanded to show various performance measures over time, including GPU voltage (which you enabled earlier), loads, etc. A screenshot of MSI AB monitoring over the period of the GPUz render test would help (keeping in mind that it is the Christmas holidays and responses may be sporadic).

You can also try leaving MSI AB open while you game and record those freezes as well. Either way, if there are dramatic swings in GPU voltage, trying the fixed voltage curve in MSI AB (see previous posts by someone here a few days ago) can help - that is assuming it is GPU related, noting also that, for now, temps are not the issue (per your post).

Other reasons for stuttering can relate to the rest of your system: what CPU (i.e. is the game becoming CPU bound), how much system RAM and how aggressive its timings, whether your PSU is near its limit, and so forth. The best thing is to isolate one variable at a time to deal with the stutter.
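If you log the sensors to a file (MSI AB can write its monitoring history to a log), a few lines of Python will flag the dramatic swings mentioned above (the helper and the 50 mV threshold are mine, purely illustrative):

```python
def flag_swings(samples, threshold):
    """Return indices where consecutive sensor samples jump by more than threshold."""
    return [i for i in range(1, len(samples))
            if abs(samples[i] - samples[i - 1]) > threshold]

# synthetic GPU-voltage trace in mV, with one dramatic dip
volts = [1043, 1043, 1031, 850, 1043, 1043]
print(flag_swings(volts, threshold=50))  # the dip in and out: [3, 4]
```

The same check works on clock or power columns, which helps separate a voltage problem from plain power-limit throttling.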


----------



## GanMenglin

UdoG said:


> Thanks.
> How does the card perform?


Compared to the other two cards, I can say this is MSI's best 2080 Ti this season, but all the 2080 Ti cards have disappointed me... 

Actually, I ruined three MSI 2080 Tis (two Ventus OC, one Duke OC) by overclocking with high-PL bioses.


----------



## GanMenglin

Madness11 said:


> Guys, any help? Is the stutter because my graphics card is broken? Or what do I need to do? (I've tried changing drivers.)


I think I have the same problem. Not always, but sometimes, right?


----------



## J7SC

boli said:


> (cut)
> 
> This sums it up well. Good luck with your interesting-looking build.


 Thanks  I'm taking my own advice about cooling very seriously, especially as the Aorus 2080 Ti Waterforce cards have inherently low temps - as long as the rest of the system build doesn't become a weak link. I had most of the water-cooling items anyway, but even an unused 360 rad underwent a thorough cleaning...amazing how much crud (tiny and not-so-tiny solids) comes out, even of a new rad. After the bathtub treatment, all rads (and the test-benched 2080 Tis) were flushed with distilled water...never mind taking all the pumps apart and cleaning them as well. As 2080 Tis and a 2950X Threadripper can generate a lot of heat, system prep is important. Plus, armed for bear with Dremel and Co for modding - the actual build is the fun part that follows, hopefully finished between Christmas and New Year


----------



## cerealkeller

UdoG said:


> Does anyone have experience with the Seahawk EK X / Gigabyte NVIDIA GeForce RTX 2080 Ti 11GB AORUS XTREME WATERFORCE WB?


I had a 1080 Seahawk X EK; I don't know if that helps. I can tell you it's pretty much identical to the normal EK block except for the graphics on the front of the water block and a custom backplate. On a side note, I ordered two of them and one came with the water block assembled incorrectly; I had to tear it down and put it back together.


----------



## VPII

boli said:


> The differences between 120 and 130 (which I believe are the same clock bin) _could_ be measuring inaccuracies. Small differences (and they are small) happen from test to test even at the same OC offset. I tried getting more precise averages with more tests per offset, see below.
> 
> However 130 and 135 are in different clock bins, so 135 crashing consistently strengthens the theory. So I would assume that 120-134 are all the same (and in your case: stable). If you feel like it, maybe try 134 and see what happens.
> 
> 
> 
> So I tested some more, with 4 Fire Strike Ultra rounds per OC offset the results are still somewhat inconclusive. Note the Y-scale's min and max values, the results are _really_ close regardless (and would be near invisible if the scale started at 0).
> 
> 
> 
> I figured that doing longer tests may result in less variance, which is why I ran Time Spy Extreme stress tests. Each test is 20 loops; I picked the last two averages assuming the temperature (as one of many variables) would be most stable there.
> 
> 119 MHz offset: 45.48 and 45.57 fps (I assume this would be in the 105 MHz bin)
> 120 MHz offset: 46.40 and 46.80 fps (I assume this would be in the 120 MHz bin)
> 134 MHz offset: 45.65 and 45.79 fps (I assume this would _also_ be in the 120 MHz bin)
> 135 MHz offset: 45.80 and 45.92 fps (I assume this would be in the 135 MHz bin)
> *Retest below:*
> 120 MHz offset: …, 45.71, 45.79, 45.71 and 45.77 fps (could it have been 4*5*.xx in the first 120 MHz test, and me entering it wrong into my notes? Or did my room heat up enough to make a difference? The first seems more likely to me)
> 
> Overall I'm still not sure. I'll stick to 120 MHz because it doesn't really matter. ¯\_(ツ)_/¯
> 
> 
> 
> 
> 
> This sums it up well. Good luck with your interesting-looking build.
> 
> 
> 
> Nice! BF V is a gorgeous game, particularly with HDR, and even if you don't like multiplayer, I found at least its four single player "war stories" well made, engaging and worth playing.
> Maybe try RTX on for extra eye candy; however, getting 35 fps instead of 90 fps at UHD ultra made me disable RTX again rather quickly. It does look nice though. Not that I'd have time to notice it during the rather hectic multiplayer.
> 
> 
> 
> You may want to search this thread or scroll up some, there are several Aorus WB owners here.


Hi boli. I have RTX on Ultra at 1440p and I'm sitting at about 75 to 80 FPS average.


----------



## pewpewlazer

lolhaxz said:


> So the problem I was having with the system just rebooting under load turned out to be the power supply [Cooler Master V1000] - replaced it with an AX1200i... no more reboots and also a lot less coil whine; the V1000 is based on a Seasonic design. I think the PSU is not faulty (as it was running a 1080 Ti fine); I suspect that the larger transient current swings with the 2080 Ti are the problem... it spikes +-200W in Time Spy regularly.
> 
> Tested 2070MHz and 5.2GHz overclock - Prime95 non-AVX (in place large FFT) + Timespy Graphics 1 (4K, looped); appears stable...
> 
> What is interesting is that 2115MHz on the GPU is fine without Prime95 running; with Prime95 running it craps itself after about 15 minutes... so something to keep in mind: something that is CPU heavy + GPU heavy will affect the overclock of both if they happen to be close to the edge. You can argue that the system will never see a load like this when gaming, but I prefer the old-school approach to stability testing; y'all will probably find you insta-BSOD when trying it





JackCY said:


> I think Seasonic has more strict specs on some of the PSU designs and when your power hungry GPU load spikes it will kick in a protection and shutdown etc. Your GPU is at fault by generating too high peak/spike loads. Probably tripping OCP with the spikes.


Ugh 

My 2080 Ti arrived Friday, OC'd the mem real quick (+750), +130 PL, played some BF5, maybe ran +100 core, no issues. I think I ran the OC scanner that night but SOTR would crash immediately so I just ran the card stock for the rest of the night.

Saturday morning I fired up some BF5 with the Afterburner OC scanner curve for the core, 130% PL, +750 mem. It was glorious. After a few rounds, all of a sudden the screen goes black, a second of silence, then I hear the fans kick back on and see my computer booting back up. I thought it was a power outage at first, then I realized none of my lights had flashed and I didn't hear my file server's UPS transfer over. Hmm. Played some more BF5 a bit later (same OC settings) and after maybe 20-30 minutes, the same thing happened. My immediate thought was PSU, but I figured no way, my PSU should be way overkill for this.

Anyway, I went to plug in my Kill A Watt to see what was going on and noticed the power cord was starting to come out of the power strip a little. It's one of those Belkin ones with the swiveling outlet ports and tons of terrible Amazon reviews. I plugged my computer straight into the wall, hoping that somehow the power strip was the problem. I didn't get a chance to play again until tonight, but I barely made it to the third map of a BF5 game before the same shutdown/reboot happened.

My PSU is an EVGA Supernova PS 1000. It's a Seasonic unit with a single 12V rail rated at 83A. I have two separate 8-pin cables run to the 2080 Ti - the exact same cable configuration that powered this exact machine with a pair of overclocked GTX 1070s for 2.5 years without a single hiccup. AC power draw playing BF5 hovers around 500-550W, about the same as the old 1070 setup. I was thinking about large transient current swings myself, but it seemed totally insane that they could spike high enough to trigger OCP when my steady-state load is only around 50% of rated capacity.
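A quick sanity check on those numbers (assuming roughly 90% conversion efficiency at this load, which is in Gold/Platinum territory) shows why steady-state draw shouldn't be the trigger:

```python
ac_draw_w = 550      # wall reading while playing BF5
efficiency = 0.90    # assumed PSU efficiency at ~50% load
dc_load_w = ac_draw_w * efficiency
amps_12v = dc_load_w / 12.0
print(f"{dc_load_w:.0f} W DC, {amps_12v:.1f} A on the 12V rail")
# roughly 495 W and 41 A - about half the rail's 83 A rating,
# so only short transient spikes could plausibly trip OCP
```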

It's a relief to see someone else has had basically this exact problem and found the solution. Now I just wish I could go back in time and kick myself in the face. When EVGA had their Black Friday sales this year I nearly bought a 1600W P2 or T2 because the price was so unbelievably good. I talked myself out of it because "I have a phenomenal 1000W unit, I'll never need anything beefier than this". Well, apparently now I need one, and the prices are unbelievably high.

EDIT:

Apparently there were issues with Vega cards tripping OCP on Seasonic PSUs as well. Awesome. Guess I need a 1600W PSU for a 600W system now, or a PSU made by a company that doesn't put over-protective OCP settings on its units. Cross Seasonic off the list of companies I'll consider purchasing from again.

https://www.reddit.com/r/Amd/comments/9zd1os/seasonic_updated_statement_after_the/


----------



## ESRCJ

I decided to use XtremeTuner instead of MSI AB, since the latter only allows the memory clock to be increased by another 1000MHz. However, when I set GPU VOLT (Absolute V/mV) to the max and GPU Volt (Offset %) to 100, the voltage only peaks at 1.063V. Any ideas on how I can lock it at 1.093V in this application? I tried doing it in MSI AB, then going back to XtremeTuner for the memory OC, but I'd get crashes when using both for overclocking (with both closed during the benchmark). I'll also note that I don't want to use Precision, as I really don't like messing with its V/F curve due to how small it is.


----------



## krizby

To run at 1.093V you have to modify the freq/voltage curve after 1.063V. The default curve flatlines after 1.063V; you need to increase the freq +15MHz at 1.075V, flatline it at 1.08V, then increase it +15MHz again at 1.093V. If it's not stable, then flatline at 1.075V and 1.08V, then go +15MHz at 1.093V. Seriously though, I don't know why people are so obsessed with getting maximum voltage and power on the 2080 Ti. Short benchmarks show some gains, but in games you will likely degrade performance by maxing the voltages and PL due to the useless extra heat.

The best thing to do with a 2080 Ti is to undervolt it to around 1.000V or 0.900V.
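The curve edit described above, as code (a hypothetical helper; the point values and step sizes follow the description, while real edits are done by dragging points in Afterburner's curve editor):

```python
def edited_curve(base_curve):
    """base_curve: list of (voltage_mV, freq_MHz) points in ascending voltage.
    Break the flatline after 1.063 V with one +15 MHz bin at 1.075 V and a
    second at 1.093 V, so the card has a reason to request 1.093 V."""
    flat_freq = max(f for v, f in base_curve if v <= 1063)
    out = []
    for v, f in base_curve:
        if v <= 1063:
            out.append((v, f))               # untouched up to 1.063 V
        elif v < 1075:
            out.append((v, flat_freq))       # still flat
        elif v < 1093:
            out.append((v, flat_freq + 15))  # first +15 MHz bin at 1.075 V
        else:
            out.append((v, flat_freq + 30))  # second +15 MHz bin at 1.093 V
    return out

print(edited_curve([(1050, 2010), (1063, 2025), (1075, 2025), (1093, 2025)]))
# [(1050, 2010), (1063, 2025), (1075, 2040), (1093, 2055)]
```

If that isn't stable, the fallback described above is the same idea with only the final +15 MHz step at 1.093 V.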


----------



## Spiriva

pewpewlazer said:


> Ugh
> 
> My 2080 Ti arrived Friday, OC'd the mem real quick (+750), +130 PL, played some BF5, maybe ran +100 core, no issues. I think I ran the OC scanner that night but SOTR would crash immediately so I just ran the card stock for the rest of the night.
> 
> Saturday morning I fired up some BF5 with the afterburner OC scanner curve for the core, 130% PL, +750 mem. It was glorious. After a few rounds all of a sudden the screen goes black, a second a silence, then I hear the fans kick back on I see my computer booting back up. Thought it was a power outage at first, then I realized none of my lights flashed on and I didn't hear my file server's UPS transfer over. Hmm. Play some more BF5 a bit later (same OC settings) and after maybe 20-30 minutes, same thing happens. My immediate thought was PSU, but figured no way, my PSU should be way overkill for this.
> 
> Anyway I went to plug my kill-a-watt to see what was going on and noticed the power cord was starting to come out of the power strip a little bit. It's one of those Belkin ones with the swiveling outlet ports with tons of terrible Amazon reviews. Plugged my computer straight into the wall, hoping that somehow the power strip was my problem. Didn't get a chance to play again until just now tonight, but I barely made it to the third map of a BF5 game before the same shutdown/reboot thing happened.
> 
> My PSU is an EVGA Supernova PS 1000. It's a Seasonic unit with a single 12V rail rated at 83A. I have two separate 8 pin cables ran to the 2080 Ti. The same exact cable configuration that powered this same exact machine with a pair of overclocked GTX 1070s for 2.5 years without a single hiccup. AC power draw playing BF5 hovers around 500-550W, about the same as the old 1070 setup. I was thinking about large transient current swings myself, but it seemed totally insane that it could spike high enough to trigger OCP when my steady state load is only around 50% of rated capacity.
> 
> It's a relief to see someone else has basically had this same exact problem and found the solution. Now I just wish I could go back in time and kick myself in the face. When EVGA had their black friday sales this year I nearly bought a 1600W P2 or T2 because the price was so unbelievably good. I talked myself out of it because "I have a phenomenal 1000W unit, I'll never need anything beefier than this". Well apparently now I need one, and the prices are unbelievably expensive.
> 
> EDIT:
> 
> Apparently there were issues with the VEGA cards tripping OCP on Seasonic PSUs as well. Awesome. Guess I need a 1600W PSU for a 600W system now. Or a PSU made by a company that doesn't put over-protective OCP settings on their units. Cross Seasonic off my list of companies I'll consider purchasing something from again.
> 
> https://www.reddit.com/r/Amd/comments/9zd1os/seasonic_updated_statement_after_the/



I had the exact same problem (with the rig in my sig) and a Corsair 1000W PSU. During Black Friday I picked up a Corsair AX1600i, set it to "single OCP" (never tried multi-rail OCP) in their iCUE software, and no more problems with the PSU just shutting off.
I had the Corsair 1000W for many years; maybe it just happened to go bad. Anyhow, with the new 1600W PSU everything works fine.


----------



## ESRCJ

krizby said:


> To run at 1.093v you have to modify the freq/voltage curve after 1.063v. Default curve will flatline after 1.063v, you need to increase the freq +15mhz at 1.075v, flatline at 1.08v then increase +15mhz again at 1.093v. If it's not stable then you flatline at 1.075, 1.08v then +15mhz at 1.093v. Seriously though I don't know why people are so obsessed with getting maximum voltages and power for 2080TI. Short benchmark show some gains but gaming you will likely degrade performance when maxing the the Voltages and PL due to the useless extra heat.
> 
> Best thing to do with 2080ti is undervolting them to like 1.000v or .900v


Is there a VF curve associated with XtremeTuner though? I gave people the same advice you just gave me regarding the curve for Afterburner, but I'm asking about XtremeTuner specifically. 

As for why I even care, this is for benchmark runs. I lock my voltages at 1V and the core clock at 2100MHz for gaming.


----------



## J7SC

^ Easiest is to use Xtreme Tuner (or Precision X1, the Aorus tool, or another utility) to set the VRAM overclock if you want to go beyond 1000MHz, close that app, then open MSI AB afterwards and do the voltage curve. If you do not touch the VRAM slider in MSI AB, the profile you save will include the higher VRAM clock, as far as I recall. During test-bench runs, I had set MSI AB not to load automatically with Windows, meaning the card is bone stock for gaming and everyday tasks. For bench runs, I just loaded the saved profile.

Merry Christmas and Happy Holiday Festivities, everyone!


----------



## OutlawXGP

Well, my EVGA 2080 Ti Black Edition started artifacting and crashing in every single game today, two days after I received the card. I purchased it on the 19th of December and it arrived on the 22nd. It seemed to work fine for the first two days, but all the issues started this evening when I tried to play some Just Cause 4. I also tested Assassin's Creed Odyssey and Battlefield 5, and the exact same thing happens. Even No Man's Sky is artifacting.

I have no overclock applied whatsoever, all settings are at default, and I reinstalled the NVIDIA drivers using DDU twice. As you can see from the videos below, there is heavy artifacting which leads to a crash as soon as the game starts.

Just Cause 4 (Imgur): https://imgur.com/a/zAfc7qf

Just Cause 4 (YouTube): [embedded video]

No Man's Sky: [embedded video]

Unigine Valley: [embedded video]

Time Spy: [embedded video]

I knew the 2080 Tis were having problems, but I did not expect them to be this bad. Now I'm left with Intel HD graphics. This really sucks!


----------



## AlbertoM

pewpewlazer said:


> Ugh
> 
> My 2080 Ti arrived Friday, OC'd the mem real quick (+750), +130 PL, played some BF5, maybe ran +100 core, no issues. I think I ran the OC scanner that night but SOTR would crash immediately so I just ran the card stock for the rest of the night.
> 
> Saturday morning I fired up some BF5 with the afterburner OC scanner curve for the core, 130% PL, +750 mem. It was glorious. After a few rounds all of a sudden the screen goes black, a second a silence, then I hear the fans kick back on I see my computer booting back up. Thought it was a power outage at first, then I realized none of my lights flashed on and I didn't hear my file server's UPS transfer over. Hmm. Play some more BF5 a bit later (same OC settings) and after maybe 20-30 minutes, same thing happens. My immediate thought was PSU, but figured no way, my PSU should be way overkill for this.
> 
> Anyway I went to plug my kill-a-watt to see what was going on and noticed the power cord was starting to come out of the power strip a little bit. It's one of those Belkin ones with the swiveling outlet ports with tons of terrible Amazon reviews. Plugged my computer straight into the wall, hoping that somehow the power strip was my problem. Didn't get a chance to play again until just now tonight, but I barely made it to the third map of a BF5 game before the same shutdown/reboot thing happened.
> 
> My PSU is an EVGA Supernova PS 1000. It's a Seasonic unit with a single 12V rail rated at 83A. I have two separate 8 pin cables ran to the 2080 Ti. The same exact cable configuration that powered this same exact machine with a pair of overclocked GTX 1070s for 2.5 years without a single hiccup. AC power draw playing BF5 hovers around 500-550W, about the same as the old 1070 setup. I was thinking about large transient current swings myself, but it seemed totally insane that it could spike high enough to trigger OCP when my steady state load is only around 50% of rated capacity.
> 
> It's a relief to see someone else has basically had this same exact problem and found the solution. Now I just wish I could go back in time and kick myself in the face. When EVGA had their black friday sales this year I nearly bought a 1600W P2 or T2 because the price was so unbelievably good. I talked myself out of it because "I have a phenomenal 1000W unit, I'll never need anything beefier than this". Well apparently now I need one, and the prices are unbelievably expensive.
> 
> EDIT:
> 
> Apparently there were issues with the VEGA cards tripping OCP on Seasonic PSUs as well. Awesome. Guess I need a 1600W PSU for a 600W system now. Or a PSU made by a company that doesn't put over-protective OCP settings on their units. Cross Seasonic off my list of companies I'll consider purchasing something from again.
> 
> https://www.reddit.com/r/Amd/comments/9zd1os/seasonic_updated_statement_after_the/


I had this problem with my 1080: black screen and fans going crazy. I changed the PCI-E cable to a modular one and the problem was solved. I think it's not a protection mechanism of the PSU; it's from the GPU. It's the power delivery not working as it should. It could be a problem with the PSU or the cables.


----------



## RaGran

Spiriva said:


> I had the exact same problem (with comp in sign) and a Corsair 1000w PSU. During black friday i picked up a Corsair ax1600i, put it to "single ocp" (never tried multi rail ocp) in thier software "iCUE" and no more problems with that the PSU just shutting off.
> I had the Corsair 1000w for many years, maybe it just happen to go bad, anyhow with the new 1600w PSU everything workes fine.


I also had such issues with a Corsair HX1200 PSU. It has a physical switch for single/multi-rail mode, and the problems went away after switching to single-rail mode, which disables the 40A per-rail OCP limit. I would blame NVIDIA/AMD for such issues rather than the PSU makers, as the whole problem is the cards drawing more power through a single cable/source than the source is specified to provide. They should add more PSU cable connectors to the boards if they need them. On the other hand, the OCP could probably still work even if the limit were a little less strict.
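
The single-rail vs multi-rail distinction boils down to simple arithmetic; the numbers below are illustrative assumptions:

```python
# Single vs multi-rail OCP, with illustrative numbers: each virtual
# rail watches only its own current, so a spike that a whole-PSU
# single-rail limit shrugs off can still trip a 40A per-rail limit.

RAIL_VOLTS = 12.0
per_rail_limit_w = RAIL_VOLTS * 40.0   # 480W per virtual rail (HX1200-style)
single_rail_limit_w = 1200.0           # whole-PSU limit in single-rail mode

gpu_spike_w = 550.0   # hypothetical microsecond-scale GPU transient, with
                      # both 8-pin cables landing on the same virtual rail

print("multi-rail trips: ", gpu_spike_w > per_rail_limit_w)
print("single-rail trips:", gpu_spike_w > single_rail_limit_w)
```

So the switch doesn't make the card draw less; it just raises the ceiling the spike is measured against.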


----------



## RaGran

Has anyone tried using some kind of EEPROM hardware programmer with a test clip to flash an A-chip BIOS onto a non-A chip board? Would that work, or would there be issues of some sort?

I saw someone suggest that for flashing the FE cards with other bioses before the patched nvflash was available.

Has that kind of thing been tried/done with other gpus before?

You could of course also try it without a test clip, but I know I'm not going to desolder anything from the board, as I don't have the skills or equipment for that. Locating the BIOS chip on the board would be enough of a challenge.


----------



## pewpewlazer

AlbertoM said:


> I had this problem with my 1080, black screen and fans going crazy. Changed the PCI-E cable to a modular one and problem solved. I think it's not a protection of PSU, it's from the GPU. Its the power delivery not working as it should. Could be a problem on the PSU or cables.


Not sure what you mean by "changed the PCI-E cable to a modular one"?

Anyway, I trekked up to Microcenter today and picked up an EVGA Supernova 1600W T2. Common sense says a 1600W PSU for a single-CPU, single-GPU gaming rig is gross overkill and a waste of money, but I thought the same thing about the top-of-the-line 1000W PSU I bought 3 years ago, and look where that got me.

First thing I noticed (other than the sheer weight of the box, which almost broke the plastic Microcenter bag they put it in) was that the PCIE cables appear to have inline capacitors. I'm not sure if it was a discussion here or on Reddit, but I had read a comment from the Seasonic rep that they were sending out PCIE cables with inline caps as a fix for the whole VEGA 56/64 shutdown issue. Or maybe it was some STRIX GTX 970 issue. Either way, the Seasonic rep said they fixed some GPU-specific shutdown issue with inline caps. Hmmf. Must be something to it, eh?

I just got done playing BF5 for almost 2 hours without any shutdown/reboot nonsense, card overclocked and all, so hopefully this was the fix.

Now I'm kind of wishing I'd picked up that EK Vector 2080 Ti block I was eyeing at Microcenter, but I need a new pump if I'm going to add another block & rad or build a second loop, and I wasn't about to pay ~$150 for an EK D5.


----------



## kot0005

Alright, I bought a 2080 Ti Aorus Waterforce.

Please do not buy this card. It looks good, and that's about it. It clocks a few MHz higher than my XC Ultra, maybe, but the waterblock is horrible. Hopefully EK makes a block for the Aorus lineup. The backplate on this card is flimsy and garbage compared to EK's reference backplate.

Gigabyte doesn't use screws on the far right corner of the block, so the plexi piece is just hanging off and flexes every time you hold the right side of the card.

The waterblock sucks; I am getting up to 8°C higher than with my EK Vector block. It really shows that EK is the best in the business at making these waterblocks.

The final downside is that there is no way to service the block on the user end if it gunks up; the screws are hidden under the adhesive covers on the front of the card.

So yeah, save yourself some headaches and buy a reference PCB and the EK block, or wait for EK to release custom PCB blocks.


----------



## UdoG

Thanks for this info - doesn't sound good.

Maybe the MSI RTX 2080 Ti Sea Hawk EK X is a better solution, since an EK waterblock is installed?!


----------



## J7SC

kot0005 said:


> Alright, I bought a 2080Ti Aorus waterforce.
> 
> Please do not buy this card. It looks goo and that's about it. Clocks a few Mhz higher than my XC ultra may be but the waterblock is horrible. Hopefully EK makes a block for the Aorus lineup. The backplate on this card is flimsy and garbage compared to Ek's reference backplate.
> 
> Gigabyte doesnt use screws on the far right corner of the block so the plexi piece is just hanging off and it flexes every time u hold the right side of the card.
> 
> The waterblock sucks, I am getting upto 8c higher than my EK vector block. Really shows that EK is the best in business for making these waterblocks.
> 
> Final downside is that there is no way to service the block on the user end if it gunks up, the screws are hidden under the adhesive covers on the front of the card.
> 
> so yeah, save yourself some headaces and buy a reference pcb and the ekl block or wait for ek to release custom pcb blocks.



I respectfully disagree - I was so impressed with my first 2080 Ti Waterforce that I bought a second one a few weeks later, and so far they're working out great, including on temps, in thorough test-bench testing. That said, I have several other (non-2080 Ti) GPUs with EK blocks that have also performed (mostly) very well over the years. In my setup, the fans on the rad for the 2080 Ti Waterforce are set to never go above idle, given the sound-sensitive location of the machine (per my earlier post). Even after multiple Superposition runs, temps never went above 47°C (with 22°C ambient); usually it's more like mid-30s when not stressed 100% continuously.

The internal screws are indeed hidden behind a dust and water shield (^ 'gunk'?), which I very much appreciate, having had a leak develop at the O-ring inside an EK copper block on another GPU. About the plexiglass 'hanging off' on the right-hand side (below where the 8-pins are) - fair enough, they could have put an extra screw or two there, but it is not an issue for me, as I would never handle such a heavy card by the lower right anyway.

All that said, apart from the full-waterblock version (and its AIO sister card), they also make an air-cooled one with the same PCB. It might be worth checking with EK whether they will make a block for that card, which should fit the 2080 Ti Waterforce as well.

So there you have it - one card and two different opinions... "vive la différence," as the French say


----------



## choikugi

I flashed the Galax BIOS to my ZOTAC 2080 Ti AMP and got interesting overclocking results. I used OC Scanner for overclocking and adjusted the power limit to max for both.

ZOTAC ROM: frequency is not very stable, 1800-2100MHz, but temps are 55-61°C at full load; max clock is 2100MHz.

GALAX ROM: frequency is stable, usually staying at 2100MHz, but temps are 66-73°C at full load, probably because of the increased power limit. In the Time Spy test I scored 400+ points more than with the ZOTAC ROM; max clock is 2175MHz.

It's a trade-off: low temps vs stable frequency. Which one would you choose in this case?


----------



## AlbertoM

pewpewlazer said:


> Not sure what you mean by "changed the PCI-E cable to a modular one"?
> 
> Anyway, I trekked up Microcenter today and picked up an EVGA Supernova 1600W T2. Common sense says a 1600W PSU for a single CPU single GPU gaming rig is gross overkill and a waste of money, but I thought the same thing about the top-of-the-line 1000W PSU I bought 3 years ago, and look where that got me.
> 
> First thing I noticed (other than the sheer weight of the box, which almost broke the plastic Microcenter bag they put in in) was that the PCIE cables appear to have inline capacitors. I'm not sure if it was a discussion here or on Reddit, but I had read a comment from the Seasonic rep that they were sending out PCIE cables with inline caps as a fix to the whole VEGA 56/64 shutdown issue. Or maybe it was some STRIX GTX 970 issue. But either way, the Seasonic rep said they fixed some sort of GPU specific shut down issue with inline caps. Hmmf. Must be something to it, eh?
> 
> I just got done playing BF5 for almost 2 hours without any shutdown/reboot nonsense, card overclocked and all, so hopefully this was the fix.
> 
> Now I'm kind of wishing I picked up that EK Vector 2080 Ti block I was eying at Microcenter, but I need a new pump if I'm going to add another block & rad or build a second loop, and I wasn't about to pay ~150 for an EK D5.


My PSU is a Corsair TX850M, semi-modular, single rail, with 2 PCI-E cables attached in the main harness and two other modular PCI-E ones.

I only used the two attached cables for a very long time, with my GTX 295 and then since I bought my 1080.

The problem started a couple of months after I began using the XOC T4 BIOS with unlimited TDP, so I was passing way more than 150W through the PCI-E cable feeding the 1080; FurMark showed almost 350W TDP.

So I think what happens is that over time the cable loses its capacity (I'm no electrical engineer, but I remember seeing this somewhere) because it's being used at or way beyond its limit, and it starts to develop a small short circuit, or its resistance rises much more than it should.

The GPU detects that immediately when you run a heavy 3D application, when it pulls all the power it needs, and then it triggers safe mode: the famous black screen with fans at max.

So I changed the cable to an unused modular one and the problem was solved. I also started using another BIOS that draws 276W max, so the single PCI-E cable on my 1080 is only 50W beyond spec at most, lol, and it's holding up strongly.

About the inline capacitors: sure, they could be adding those to avoid any current dip to the GPU, but inside the PSU, just before the PCI-E cables, there are HUGE capacitors to do exactly this. It must be some really sensitive GPU that tolerates almost no current dip, or it's just to be on the safe side. The same thing (or myth) exists with car audio amplifiers that can draw from 500 to almost 2000W under heavy load: we put giant capacitors next to them just to be on the safe side of the electrical supply. Some people say it's useless, but go figure.

And you are right, a 1600W PSU is waaaay overkill for a single-GPU setup. But for the peace of mind it makes sense, lol.

I had a 620W Corsair HX with my GTX 295; it didn't last more than 2 years because the GPU was a beast of an SLI single-PCB card, also heavily overclocked with a 1.2V custom BIOS. For sure the PSU was at 100% load all the time, also with a heavily overclocked CPU and a bunch of fans, lol.

Then I moved to my TX850M, and it's lasted since 2012!!! lol

The more headroom you have with your PSU, the longer it will last for sure.


----------



## MrTOOSHORT

choikugi said:


> I flashed galax bios to ZOTAC 2080 ti amp version and got interesting results for overclocking. I used OC scanner for overclocking. adjusted power limit to max for both.
> 
> ZOTAC rom : frequency is not very stable. 1800 - 2100 but temp is 55-61 in full load max clock is 2100.
> 
> GALAX rom : frequency is stable usually stay as 2100 but temp is 66-73 in full load probably because of increated power limit. In time spy test, got 400+ score more than ZOTAC rom. max clock is 2175
> 
> it's like low temp vs stable frequency. Which one you would like to choose in this case ?



Keep the Galax BIOS. Throttling on the stock BIOS is terrible. 73°C is fine for load on air. See if you can lower the volts and stick with 2100MHz.


----------



## pewpewlazer

AlbertoM said:


> My PSU is a Corsair TX850M semi modular, single rail, with 2 PCI-E cables attached in the main harness and two other PCI-E modular ones.
> 
> 
> So think what happens with the cable is that over time it loses its capacity, im no electrical engineer but remember seeing this somewhere, because its being used to the limit or way beyond that, and start to make a little short circuit or raising the resistance way more than it should.
> 
> The GPU detects that immediately when you run 3D heavy application, when it will pull all the energy it needs, then it triggers safe mode, the famous black screen and fans all the way to max.


Ah, didn't think about the whole "semi-modular" PSU thing. Gotcha. I'm no EE either, but insulation degradation in 12V DC computer power wiring seems unlikely to me; then again, what do I know?

Also, the "safe mode" black-screen-fans-to-max situation you describe is not what I experienced. I would be playing a game, then all of a sudden my computer would SHUT OFF, then instantly power back up and REBOOT.

Played another ~2hr of BF5 this morning, no issues.


----------



## AlbertoM

pewpewlazer said:


> Ah, didn't think about the whole "semi-modular" PSU thing. Gotcha. I'm no EE either, but the thought of insulation degradation in 12V DC computer power wiring seems unlikely, but what do I know?
> 
> Also, the "safe mode" black screen fans to the max situation you describe is not what I experienced. I would be playing a game, then all of a sudden my computer would SHUT OFF, then instantly power back up and REBOOT.
> 
> Played another ~2hr of BF5 this morning, no issues.


Yeah man, it's less about the insulation and more about the copper thickness and its oxidation over time, causing lower conductivity and raising resistance. That's really EE stuff, but I know it happens. The process is accelerated if you run the cable at max capacity, or beyond spec, for long periods. And with more sensitive hardware like our fancy PCs and GPUs, it's really something to consider. The insulation would only go wrong with absurd temperature abuse, which of course would come from a cable thickness not appropriate for the power, and abuse of current.
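
For perspective, a back-of-envelope I²R calculation shows how sensitive cable losses are to both load and contact resistance; the resistance figures below are assumptions, not measurements:

```python
# Back-of-envelope I^2*R for an 8-pin PCI-E cable, to put the "rising
# resistance" theory in perspective. Resistance values are assumptions:
# ~20 mOhm round trip for healthy 18AWG wire plus connectors, 60 mOhm
# for a degraded/oxidized connection.

def cable_loss(load_watts, volts=12.0, milliohms=20.0):
    """Return (watts dissipated in the cable, volts of sag at the GPU)."""
    amps = load_watts / volts
    ohms = milliohms / 1000.0
    return amps ** 2 * ohms, amps * ohms

for load, mohm in [(150, 20.0), (350, 20.0), (350, 60.0)]:
    heat, sag = cable_loss(load, milliohms=mohm)
    print(f"{load}W @ {mohm:.0f} mOhm: {heat:.1f}W of heat, {sag:.2f}V sag")
```

Because the loss grows with the square of the current, running a cable at more than double its rated load more than quadruples the heat in the wire, and a degraded connection multiplies it again.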


----------



## iamjanco

As long as the points of contact between the source and the load--in this instance the PSU and the connections to the GPU (e.g., soldered connections, connector pins, etc.)--are free of surface oxidation, the resistance contributed by oxidation in copper wire is typically negligible (copper wire doesn't corrode internally). See *Does oxidized copper conduct electricity?* for relevant info.

As for insulation breakdown points, yes, heat (the result of e.g. excessive current and/or very high temps in an enclosed space) can cause wire insulation to degrade to the point where it needs to be replaced (e.g., to prevent electrical shorts), depending on the type of insulation; but as long as the correct grade and gauge of insulated wire is used in a circuit, that degradation is typically mitigated. See *“A Stitch In Time” -- The Complete Guide to Electrical Insulation Testing *(pdf) for a more detailed explanation.

Lastly, regardless of the size or quality of the capacitors inside a PSU, capacitors in modular cabling do have their purpose: both to help reduce noise that makes it past the PSU's internal capacitors, and more so to help minimize the impact of noise induced in the modular cable's wires between the PSU and the powered circuit. While geared more toward the impact of capacitance on data-carrying conductors, see *Why is Cable Capacitance Important for Electronic Applications?* if you'd like to know more.
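
As a rough sketch of what an inline capacitor can contribute during a load step (the component values here are pure assumptions for illustration, not anything Seasonic has published):

```python
# Rough feel for what an inline capacitor can do during a current step,
# using I = C * dV/dt. Component values are illustrative assumptions.

def hold_up_us(cap_uf, step_amps, sag_budget_v):
    """Microseconds the cap can source step_amps before the rail sags
    by sag_budget_v: t = C * dV / I."""
    return (cap_uf * 1e-6) * sag_budget_v / step_amps * 1e6

# A few hundred uF at the cable end, bridging a hypothetical 20A load
# step within a 0.3V sag budget:
t = hold_up_us(330, 20.0, 0.3)
print(f"{t:.2f} us")
```

A few microseconds is enough to ride out the fastest edges, and the cap sits past the cable's wire inductance, which is exactly where the PSU's bulk capacitors cannot help.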

As for the issues (black screens, space invaders, etc.) a number of RTX 20xx owners have experienced: while NVIDIA and the AIBs are tight-lipped about the reasons behind them (drivers, bad components, etc.), there is talk in mostly private circles about the role temperature plays in a GPU with far more pin-outs, relative to pin density, than any previously offered consumer-grade processor, and about the impact heat and cold might have on the connections associated with that processor given that density, which raises the following question:

_Can cycling an RTX 20xx card's temps from its high limit (let's say 100 degrees C or more, give or take) to its lower limit(s) at system idle/ambient (e.g., stressing the card using overclocking utilities; turning the PC on and off repeatedly; regardless of the durations of powered and non-powered states, or the lengths of the intervals between the two) cause the issue or not?_

Time may or may not tell, but TSMC certainly didn't.

*HAPPY HOLIDAYS ALL!*


----------



## Hanks552

choikugi said:


> I flashed galax bios to ZOTAC 2080 ti amp version and got interesting results for overclocking. I used OC scanner for overclocking. adjusted power limit to max for both.
> 
> ZOTAC rom : frequency is not very stable. 1800 - 2100 but temp is 55-61 in full load max clock is 2100.
> 
> GALAX rom : frequency is stable usually stay as 2100 but temp is 66-73 in full load probably because of increated power limit. In time spy test, got 400+ score more than ZOTAC rom. max clock is 2175
> 
> it's like low temp vs stable frequency. Which one you would like to choose in this case ?


i have the same graphics card, i want to try it out, thanks for the report


----------



## Coldmud

kot0005 said:


> Alright, I bought a 2080Ti Aorus waterforce.
> 
> Please do not buy this card. It looks goo and that's about it. Clocks a few Mhz higher than my XC ultra may be but the waterblock is horrible. Hopefully EK makes a block for the Aorus lineup. The backplate on this card is flimsy and garbage compared to Ek's reference backplate.
> 
> Gigabyte doesnt use screws on the far right corner of the block so the plexi piece is just hanging off and it flexes every time u hold the right side of the card.
> 
> The waterblock sucks, I am getting upto 8c higher than my EK vector block. Really shows that EK is the best in business for making these waterblocks.
> 
> Final downside is that there is no way to service the block on the user end if it gunks up, the screws are hidden under the adhesive covers on the front of the card.
> 
> so yeah, save yourself some headaces and buy a reference pcb and the ekl block or wait for ek to release custom pcb blocks.


You heard it, everyone! Please do not buy this card, which several users in this thread are so far quite content with, for reasons the OP himself doesn't seem to fully comprehend. I'm running this card on a single 360 rad just pushing air with 3 fans, and it never exceeds 48°C.

Flimsy backplate? It's the same quality as all of them since the Classified 680s. A 0.5cm overlap is hanging off? Service the block? (The block is removable, btw.) It's just a chunk of closed plexiglass; blow some air through it for cleaning.
And if the time comes and this card fails, I will be using the free extended 4-year warranty instead of tinkering with a 1600-euro card myself.

Stable at 1340MHz and 1180MHz mem all day, but the power limit is real, so the real-world scenario is to just clock it to ~1260MHz and call it a day, just like 90% of these cards..

Also: you might wanna buy some solid, proven coolants if your **** gunks up. Don't use opaques or the TT C1000; so far I've had excellent results with the EK CryoFuel solids.


----------



## ESRCJ

There's a bench-a-thon currently going on at wccftech for those interested. It's just for fun; scroll to the featured comments for the rules. Merry Christmas!

https://wccftech.com/nvidia-geforce-gtx-2050-gtx-1150-specs-performance-leak/


----------



## pewpewlazer

So much for the power supply being the fix... I put quite a few hours into BF5 today with no problems up until just now. I'd been happily chugging along with the card running 2055 core @ 1.05V and 7750 mem. No visible artifacting or anything. Then all of a sudden my screen goes black, the monitors say no signal, and my computer reboots. The only difference now is it just went to a black screen then rebooted, whereas with the old PSU it would power off, I'd hear the fans spin down, then it would boot back up.

I've never seen anything like this. As far back as I can remember, unstable GPU overclocks have always manifested themselves as driver crashes, or as total system lockups/freezes that required a manual reboot. CPU instability on my Haswell-E setup has always reared its head in the form of a BSOD. This automatic-reboot behavior is a new one to me. Anyone else experienced this?


----------



## J7SC

pewpewlazer said:


> So much for the power supply being the fix... Put quite a few hours into BF5 today no problems up until just now. Been happily chugging along with the card running 2055 core @ 1.05v and 7750 mem. No visible artifacting or anything. Now all of a sudden my screen goes black, monitors say no signal, and my computer reboots. Only difference now is it just sort of went to a black screen then rebooted, where as it would power off with the old PSU, I'd hear the fans spin down, then it would boot back up.
> 
> I've never seen anything like this. As far back as I can remember, unstable GPU overclocks have always manifested themselves driver crashes, or as a total system lockups/freezing that required a manual reboot. CPU instability on my haswell-E setup has always reared its head in the form as a BSOD. This automatic reboot behavior is a new one to me. Anyone else experience this?



Have a look at iamjanco's post on the previous page, which is quite telling. I can't say for sure, but I wonder whether what you describe is temperature-related, as it seems to kick in after longer gaming sessions. If you can still boot up from cold, I really do think you want to water-cool that 2080 Ti as soon as you can. I know, the extra expense and all that, but 2080 Tis really do seem to hate higher temps and, just as importantly, the temp swings they go through, per the above post(s).

I'm finishing my build with the 2x 2080 Ti Waterforce cards, with TWO 360 rads (+ a 160mm one) and TWO Swiftech pumps I cannibalized from an earlier build, in a separate loop just for the RTX cards... this whole temp-delta thing is making me cautious; it seems to catch a lot of RTX Ti owners by surprise. When the cards display this (or other) weird behavior you're describing, it might be time to think about capping max temps. For now, until you can water-cool, can you take some 120mm fans and point them at the bottom of the card / PCIe slot? The bottom VRAM on the 2080 Ti seems to get the hottest. Anyway, I really hope you get this problem figured out and corrected.


----------



## krizby

pewpewlazer said:


> So much for the power supply being the fix... Put quite a few hours into BF5 today with no problems up until just now. Been happily chugging along with the card running 2055 core @ 1.05v and 7750 mem. No visible artifacting or anything. Now all of a sudden my screen goes black, the monitors say no signal, and my computer reboots. The only difference now is that it just sort of went to a black screen and then rebooted, whereas with the old PSU it would power off, I'd hear the fans spin down, then it would boot back up.
> 
> I've never seen anything like this. As far back as I can remember, unstable GPU overclocks have always manifested themselves as driver crashes, or as total system lockups/freezes that required a manual reboot. CPU instability on my Haswell-E setup has always reared its head in the form of a BSOD. This automatic reboot behavior is a new one to me. Anyone else experience this?



Yup, the same thing happened to me. I did notice the card was boosting above 1.000v when the PC hardlocked and restarted, so I undervolted the card to 1.000v (same overclock offset, just max 1.000v) and it has been stable for a week now. The funny thing was, I created another overclock profile that undervolts to 0.900v and tested Hitman 2: the 0.900v profile (1890mhz) gave higher fps than the 1.000v profile (2010mhz core) in a static scene. The 0.900v undervolt also gives very tight frametime consistency in PUBG, which I really need. So yeah, undervolt FTW.
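A rough way to see why the lower-voltage profile can come out ahead on efficiency: dynamic power scales roughly with f·V², so the relative cost of each profile can be estimated from its clock and voltage alone. The two profiles below are the ones from the post; the scaling model is a textbook approximation, not a measurement.

```python
# Rough perf-per-watt comparison of the two profiles described above.
# Dynamic power ~ C * f * V^2; the capacitance C cancels out in ratios,
# so only clock (MHz) and voltage (V) matter for the comparison.

def relative_dynamic_power(freq_mhz: float, volts: float) -> float:
    """Relative dynamic power under the f * V^2 approximation."""
    return freq_mhz * volts ** 2

p_low = relative_dynamic_power(1890, 0.900)   # 0.900v profile
p_high = relative_dynamic_power(2010, 1.000)  # 1.000v profile

clock_gain = 2010 / 1890 - 1   # ~6% more clock at 1.000v...
power_cost = p_high / p_low - 1  # ...for ~31% more dynamic power
print(f"clock gain: {clock_gain:.1%}, power cost: {power_cost:.1%}")
```

With the extra power budget headroom also feeding GPU Boost, the small static-scene fps win for the undervolted profile is plausible.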



J7SC said:


> Have a look at Iamjanco's post on the previous page, which is quite telling. Can't say for sure, but I wonder whether what you describe is temperature-related, as it seems to kick in after longer gaming sessions. If you can still boot up from cold, I really do think you want to water-cool that 2080 TI as soon as you can. I know, the extra expense and all that, but 2080 TIs really do seem to hate higher temps, and just as importantly the temp swings they go through per the above post(s).
> 
> I'm finishing my build with the 2x 2080 TI Waterforce cards, with TWO 360 rads (+ a 160mm one) and TWO Swiftech pumps I cannibalized from an earlier build, in a separate loop just for the RTX cards... this whole temp delta thing is making me cautious. It seems to catch a lot of RTX TI owners by surprise, and when the cards display this (or other) weird behavior you're describing, it might be time to think about capping max temps. For now, until you can water-cool them, can you take some 120mm fans and point them at the bottom of the card / PCIe slot? The bottom VRAM on the 2080 TI seems to get the hottest. Anyway, I really hope you get this problem figured out and corrected.


Lol, watercooling the card was what actually caused the hard restart. My Asus 2080ti Turbo was stable with a +225mhz offset and +650mhz Vram on stock cooling; I put on the Heatkiller IV waterblock + backplate and the overclock was unstable. The reason is that the card can boost above 1.000v now with a waterblock. My take here is that above 1.000v you have to use the VF curve to lower the clock offset (which is obvious when you use the OC scanner, which gives you a VF curve that lowers the offset at higher voltages).


----------



## iamjanco

pewpewlazer said:


> So much for the power supply being the fix... Put quite a few hours into BF5 today with no problems up until just now. Been happily chugging along with the card running 2055 core @ 1.05v and 7750 mem. No visible artifacting or anything. Now all of a sudden my screen goes black, the monitors say no signal, and my computer reboots. The only difference now is that it just sort of went to a black screen and then rebooted, whereas with the old PSU it would power off, I'd hear the fans spin down, then it would boot back up.
> 
> I've never seen anything like this. As far back as I can remember, unstable GPU overclocks have always manifested themselves as driver crashes, or as total system lockups/freezes that required a manual reboot. CPU instability on my Haswell-E setup has always reared its head in the form of a BSOD. This automatic reboot behavior is a new one to me. Anyone else experience this?


Have you been monitoring your card temps during these failures?


----------



## dVeLoPe

a bit random but did anyone here upgrade from a 1080Ti SLi setup to a 2080Ti SLi setup and have some results?

Or a side by side youtube video with proven stats that isn't some garbage thrown up just for view count?


----------



## pewpewlazer

iamjanco said:


> Have you been monitoring your card temps during these failures?


GPU temps sit at ~76°C (+/- a degree or two) consistently. No idea how hot the rest of it runs; I should probably stick some thermocouples on the backplate around the RAM or something and see what's going on there.


----------



## J7SC

krizby said:


> Yup, the same thing happened to me. I did notice the card was boosting above 1.000v when the PC hardlocked and restarted, so I undervolted the card to 1.000v (same overclock offset, just max 1.000v) and it has been stable for a week now. The funny thing was, I created another overclock profile that undervolts to 0.900v and tested Hitman 2: the 0.900v profile (1890mhz) gave higher fps than the 1.000v profile (2010mhz core) in a static scene. The 0.900v undervolt also gives very tight frametime consistency in PUBG, which I really need. So yeah, undervolt FTW.



That's a good idea, not least because undervolting helps w/ temps as well, and the second profile you describe (0.900v) also benefits in terms of the overall power 'budget'. When I ran some tests earlier, oc'ing VRAM to 1000 only used up an extra 6-8W as far as I remember, but a single-step GPU voltage increase used up multiples of that.

Going back to the OP and his card's strange behavior vs. temps, I've read multiple times now that VRAM temps, especially below the physical GPU location, are an issue on 2080 TIs and also the Titan RTX, which is why they now include some extra thermal material on the bottom row. VRAM temps in those locations might be an issue even when the general temps (such as those listed in GPU-Z or MSI AB) seem within spec.

*BTW to all: New MSI AB 4.6.0 Beta 10 out*...additional voltage control (now, now, caution) and VRAM slider to 1500MHz


----------



## J7SC

> Lol, watercooling the card was what actually caused the hard restart. My Asus 2080ti Turbo was stable with a +225mhz offset and +650mhz Vram on stock cooling; I put on the Heatkiller IV waterblock + backplate and the overclock was unstable. The reason is that the card can boost above 1.000v now with a waterblock. My take here is that above 1.000v you have to use the VF curve to lower the clock offset (which is obvious when you use the OC scanner, which gives you a VF curve that lowers the offset at higher voltages).


LOL - when I get this all mounted in the new build and my test-bench derived OC goes unstable, I'll know why and hold you personally responsible.  The copper tubes btw are for the 2080 TIs... copper really is a very good heat transfer medium - learned that when I cut the pipes and burned my fingers (from 7 inches away) just from the friction.


----------



## kot0005

Some pics


----------



## kot0005

UdoG said:


> Thanks for this info - sounds not good.
> 
> Maybe the MSI RTX 2080 Ti Sea Hawk EK X is a better solution due to a EK waterblock is installed?!


I asked EK about it; it's using the previous-gen block, so the microfin surface area is not as big as on the Vector blocks. Just wait till EK releases custom PCB blocks.




J7SC said:


> I respectfully disagree - I was so impressed with my first 2080 TI waterforce, I bought a 2nd one a few weeks later and so far, they're working out great, including on temps, in thorough testing in the test-benches. That said, I have several other (non-2080 TI) GPUs with EK blocks that also performed (mostly) very well over the years. In my setup the fans for the rad for the 2080 TI waterforce are set to never go above idle given the sound-sensitive location of the machine (per earlier post). Even after multiple Superposition runs, temps never went above 47 C (w/ 22 c ambient)- usually, it's more the mid-30s when not stressed 100% continuously.
> 
> The internal screws are indeed hidden behind a dust and water shield (^ 'gunk' ?) which I very much appreciate, having had a leak develop inside an (EK copper) block on another GPU at the O-ring. About the Plexiglas 'hanging off' on the right hand side (below where the 8-pins are) - fair enough, they could have put an extra screw or two there, but it is not an issue for me as I would never handle such a heavy card by the lower right anyway.
> 
> All that said, apart from the full waterblock version (and its AIO sister card), they also make an air-cooled one with the same PCB. It might be worth it for you to check with EK if they will make a block for that card which should fit the 2080 TI waterforce as well.
> 
> So there you have it - one card and two different opinions... "vive la différence" as the French say



Yes, but I had tested a reference card with an EK block before the Waterforce, and the Waterforce temps are really high: I'd get only 40c at max load with 20c ambient on the EK block, while the Waterforce hits 50c easily. I have never had problems with EK blocks leaking, other than the nickel plating wearing off. You will always want blocks to be removable so you can clean out the stuff that gets stuck in the microfins every now and then; otherwise it will restrict your flow and cause heat issues.


So yeah, not being able to service the block on the user end is a 100% negative. I am just putting it out there so people know. The block is also not mounted to the PCB on the right side, which is pretty bad because it reduces the rigidity and structural integrity of the card.


----------



## Maxxamillion

Who makes the Gigabyte water blocks?


----------



## krizby

J7SC said:


> LOL - when I get this all mounted in the new build and my test-bench derived OC goes unstable, I'll know why and hold you personally responsible.  The copper tubes btw are for the 2080 TIs... copper really is a very good heat transfer medium - learned that when I cut the pipes and burned my fingers (from 7 inches away) just from the friction.


wow that is some serious cooling power, noice .
here is my rig










I would highly recommend the Heatkiller IV wb


----------



## J7SC

kot0005 said:


> Yes, but I had tested a reference card with an EK block before the Waterforce, and the Waterforce temps are really high: I'd get only 40c at max load with 20c ambient on the EK block, while the Waterforce hits 50c easily. I have never had problems with EK blocks leaking, other than the nickel plating wearing off. You will always want blocks to be removable so you can clean out the stuff that gets stuck in the microfins every now and then; otherwise it will restrict your flow and cause heat issues.
> 
> So yeah, not being able to service the block on the user end is a 100% negative. I am just putting it out there so people know. The block is also not mounted to the PCB on the right side, which is pretty bad because it reduces the rigidity and structural integrity of the card.


I just can't reproduce your temps on either of my cards. Running multiple 4k Superposition tests to set VRAM speed, w/ fans on idle... the highest delta to ambient I ever got was 25C or so. Sometimes though, factory installs can be a bit 'off'. I can only tell you what my 2 Aorus cards are doing.

As mentioned, I have multiple EK blocks and like them a lot, but one leaked right on top where the o-ring secures the 'serviceable' top plate. So it's a double-edged sword, pro/con thing, IMO. Plus I service the w-c system regularly re. liquids - a mix of basic Thermaltake liquid w/ some distilled water.



Maxxamillion said:


> Who makes the Gigabyte water blocks?


I could be wrong, but they look like they are Bitspower - it looks a lot like the one in the pic below, though that is the version for the MSI Gaming X Trio.












krizby said:


> wow that is some serious cooling power, noice .
> here is my rig
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I would highly recommend the Heatkiller IV wb


...very nice! One can never have enough cooling power. Got a Heatkiller IV, but for the CPU (TR 2950X), and applying liquid metal tomorrow. Threadripper has a big heat-spreader surface area :sad-smile


----------



## Hanks552

*My setup*

2x RTX 2080 TI Zotac AMP ( stock BIOS)


----------



## pewpewlazer

krizby said:


> Yup, the same thing happened to me. I did notice the card was boosting above 1.000v when the PC hardlocked and restarted, so I undervolted the card to 1.000v (same overclock offset, just max 1.000v) and it has been stable for a week now. The funny thing was, I created another overclock profile that undervolts to 0.900v and tested Hitman 2: the 0.900v profile (1890mhz) gave higher fps than the 1.000v profile (2010mhz core) in a static scene. The 0.900v undervolt also gives very tight frametime consistency in PUBG, which I really need. So yeah, undervolt FTW.
> 
> 
> 
> Lol, watercooling the card was what actually caused the hard restart. My Asus 2080ti Turbo was stable with a +225mhz offset and +650mhz Vram on stock cooling; I put on the Heatkiller IV waterblock + backplate and the overclock was unstable. The reason is that the card can boost above 1.000v now with a waterblock. My take here is that above 1.000v you have to use the VF curve to lower the clock offset (which is obvious when you use the OC scanner, which gives you a VF curve that lowers the offset at higher voltages).


Interesting. I hadn't had much motivation to mess around with the v-f curve since it was an absolute nightmare to use with an SLI setup. But the advantages were (and still are) pretty obvious. GPU Boost is downright dysfunctional. The "curve" that the "OC scanner" came up with calls for 2100mhz @ 1.05v. Playing BF5 with that OC profile, my card sits at 2055mhz @ 1.05v. Never mind the fact that the "curve" says it should run 2055mhz @ 0.993v - why suck up 1.05v and not even run the specified clock speed? My 1070s did the same unbelievable nonsense.

My "OC scanner" curve stepped up to 2055mhz @ 1.00v, so I set the curve flat at 2055mhz from 1.00v onwards (which took an eternity to do; why are there so many voltage points the card will never even see without a 200%+ PL?). I clicked apply, and Afterburner set it to 2025mhz at 1.00v, then ramped to 2055mhz at 1.02v-ish, then a weird jump at 1.2v. After gaming a bit and changing profiles a few times, that same saved profile now loads 2055 @ 0.993v, and bumps it to 2070mhz @ 1.118v. What the hell!
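The "flatten the curve at a voltage cap" trick described above can be modeled as plain arithmetic on a list of (voltage, clock) points. This is not Afterburner's or NVAPI's actual interface, just a sketch of the logic with made-up sample points; the driver's requirement that the final curve be monotonically non-decreasing is also enforced.

```python
# Sketch: clamp every curve point at or above a voltage cap to the clock
# reached at that cap, keeping the curve monotonically non-decreasing.
# Points are hypothetical (volts, MHz) pairs, not a real card's curve.

def flatten_curve(points, cap_volts):
    """Return a copy of the V/F curve flattened from cap_volts upward."""
    # clock at the cap = highest clock among points up to the cap voltage
    cap_clock = max(clk for v, clk in points if v <= cap_volts)
    flattened, prev = [], 0
    for v, clk in sorted(points):
        new = min(clk, cap_clock) if v >= cap_volts else clk
        new = max(new, prev)  # enforce monotonicity, as the driver does
        flattened.append((v, new))
        prev = new
    return flattened

curve = [(0.90, 1890), (0.95, 1965), (1.00, 2055), (1.05, 2100), (1.09, 2130)]
flat = flatten_curve(curve, 1.00)
# every point from 1.00v upward now sits at 2055 MHz
```

In practice the driver may still re-smooth whatever you submit, which is one reason the applied curve can differ from the one you drew.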

On a positive note, SotR no longer crashes with a video card driver error immediately upon loading a level. I guess it was boosting to a bazillion ghz @ 20v or whatever unrealistic numbers get put at the end of the curve without forcing them lower yourself, causing an insta-crash. Gotta love GPU poopst!

Things were so much easier 15+ years ago when "boost" didn't exist, "power limits" weren't a thing (software power limits, anyway), and your GPU ran ONE voltage in 3D mode all the time. Want to change it? Get out your soldering iron. EASY.


----------



## bmgjet

pewpewlazer said:


> Things were so much easier 15+ years ago when "boost" didn't exist, "power limits" weren't a thing (software power limits, anyway), and your GPU ran ONE voltage in 3D mode all the time. Want to change it? Get out your soldering iron. EASY.


You can still get out a soldering iron and change the power limit and voltage.


----------



## boli

dVeLoPe said:


> a bit random but did anyone here upgrade from a 1080Ti SLi setup to a 2080Ti SLi setup and have some results?
> 
> 
> 
> Or a side by side youtube video with proven stats that isn't some garbage thrown up just for view count?



The GamersNexus RTX Titan video from a few days ago has 2080 Ti and 1080 Ti SLI scores too.


----------



## UdoG

J7SC said:


> *BTW to all: New MSI AB 4.6.0 Beta 10 out*...additional voltage control (now, now, caution) and VRAM slider to 1500MHz


https://forums.guru3d.com/threads/rtss-6-7-0-beta-1.412822/page-80#post-5620815

Download:
http://msi-afterburner.guru3d.com/MSIAfterburnerSetup460Beta10Build14218.rar


----------



## VPII

I've been one of the people who wanted the highest TDP I could get, to try and squeeze the most out of the GPU. Well, in the past couple of weeks I've had the card, I actually found that a higher TDP does absolutely nothing unless you take it under LN2 or DICE. Water cooling may help a bit, but not really that much. What I found was to run OC scanner with the TDP and temp limit left at default. In doing so I found +199 and +190 or so between Precision X1 and MSI Afterburner respectively. So I went and set the core to +150 instead and raised the memory to where I know it works, but left the TDP and temp as is, or default so to speak. I also changed the fan profile so the lowest is 60%, and it goes up to 100% the moment the core temp hits 70c. This resulted in my card basically running between 2025, 2040 and 2055, but mostly 2040 core.
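The fan profile described above (a 60% floor, jumping to 100% once the core hits 70c) can be sketched as a simple function. The linear ramp between 40c and 70c is an assumption for illustration; the post only states the floor and the 70c trigger point.

```python
# Sketch of the fan profile described above: 60% minimum, 100% from 70C.
# The 40C ramp start and the linear interpolation are assumptions.

def fan_percent(core_temp_c: float) -> float:
    """Map core temperature (Celsius) to a fan duty cycle in percent."""
    if core_temp_c >= 70:
        return 100.0          # full speed once the core hits 70C
    if core_temp_c <= 40:
        return 60.0           # never drop below the 60% floor
    # linear ramp from 60% at 40C up toward 100% at 70C
    return 60.0 + (core_temp_c - 40) * (100.0 - 60.0) / (70 - 40)
```

A steep floor like this trades noise for keeping GPU Boost bins; on this card it held the core around 2040 MHz.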

If I run benchmarks I run it with the core set to +180 and memory set to +800 but the TDP and Temp set to max. This still results in my core speed dropping to the same speeds or even lower than what I stated above.


----------



## boli

That MSI AB 4.6b10 thread above is quite interesting, for example regarding how the curve changes on save:


> NVAPI is correcting specified offsets to make final curve monotonically increasing and apply smoothing if necessary.
> Also, you're misunderstanding "base curve" (dark grey line). It is not a real offsetless base curve, which is not readable via NVAPI, it is final curve with offsets subtracted from each point. It is not intended for anything but seeing the points where the offsets are applied.


----------



## Esenel

pewpewlazer said:


> So much for the power supply being the fix... Put quite a few hours into BF5 today with no problems up until just now. Been happily chugging along with the card running 2055 core @ 1.05v and 7750 mem. No visible artifacting or anything. Now all of a sudden my screen goes black, the monitors say no signal, and my computer reboots. The only difference now is that it just sort of went to a black screen and then rebooted, whereas with the old PSU it would power off, I'd hear the fans spin down, then it would boot back up.
> 
> I've never seen anything like this. As far back as I can remember, unstable GPU overclocks have always manifested themselves as driver crashes, or as total system lockups/freezes that required a manual reboot. CPU instability on my Haswell-E setup has always reared its head in the form of a BSOD. This automatic reboot behavior is a new one to me. Anyone else experience this?


My guess is that your CPU OC is the issue.
BF V uses AVX, which hits your CPU quite hard.

I had to go back from 5.2 to 5.1 GHz.
Now the issues are solved.
I also thought it was the GPU, although the CPU was the issue.


----------



## boli

pewpewlazer said:


> So much for the power supply being the fix... Put quite a few hours into BF5 today with no problems up until just now. Been happily chugging along with the card running 2055 core @ 1.05v and 7750 mem. No visible artifacting or anything. Now all of a sudden my screen goes black, the monitors say no signal, and my computer reboots. The only difference now is that it just sort of went to a black screen and then rebooted, whereas with the old PSU it would power off, I'd hear the fans spin down, then it would boot back up.
> 
> I've never seen anything like this. As far back as I can remember, unstable GPU overclocks have always manifested themselves as driver crashes, or as total system lockups/freezes that required a manual reboot. CPU instability on my Haswell-E setup has always reared its head in the form of a BSOD. This automatic reboot behavior is a new one to me. Anyone else experience this?





Esenel said:


> My guess is that your CPU OC is the issue.
> BF V uses AVX, which hits your CPU quite hard.
> 
> I had to go back from 5.2 to 5.1 GHz.
> Now the issues are solved.
> I also thought it was the GPU, although the CPU was the issue.


*I also just had a spontaneous reboot, and I can reproduce it easily!* (I'm not really concerned, because it doesn't happen in practice, but maybe it's helpful or interesting to anyone. As a software engineer I love reproducible issues )

My HW: stock 6700K (not OC at all), 16 GiB RAM, KFA2 (AKA Galax) OC 2080 Ti with +120/+800 OCs at 320W, Seasonic Prime Titanium 750W, an M2 and a SATA SSD, EK Velocity + Vector, D5, 420 + 280 slim rads.

Because I had just swapped the in/out port order on my EK GPU block, I was running Furmark (stress) + Prime95 (max heat) to see if it makes any difference whatsoever. For fun I launched a Microsoft DXR demo (the "Procedural Geometry" one), which immediately caused a reboot.

I've run this DXR demo countless times by itself, without any issues. Also alternating it with Furmark IIRC (whichever window is in the foreground gets more GPU cycles). I'm not sure I've had Prime95 in the background though, and it does not occur when Prime95 is not running in the background. It also doesn't happen when I change the launch order to Furmark, DXR, Prime95.

Anyway, the exact same thing happened again after the reboot, when I did the same sequence of Furmark + Prime95 stressing, then launching DXR demo.

It doesn't happen when I close AB 4.6b10 beforehand, I suppose because the GPU's and VRAM's OC is not applied. I've never had this during gaming (mostly BF V currently).


Also, I did have issues like these "spontaneous" reboots during game sessions with my living room PC (4770K + GTX 1080) with some Seasonic fanless 480W power supply 2 years ago. Replacing it with a Seasonic Prime Titanium 650 solved the issue there. That previous power supply was borderline, so no surprise there; OCP not too unexpected.

With my current main PC, though, I am somewhat surprised. It never happened in the 2 years before with the Titan X (Pascal), but that card was only 300W and obviously didn't run the DXR demo. I did game a lot on that though.

Anyway, Prime95 hits the CPU hard with AVX as well, while the DXR demo is light on power load (unlike Furmark), so it possibly jacks the GPU frequency up to levels that then require high voltages, which in turn might trip the OCP. Or insert your theory here. As long as it doesn't happen while gaming normally, I'm not too concerned.

*Update:* After having it happen 3 times, I tried to reproduce with +90 and +105 GPU OC, with no auto-reboot, and then again with +130 and +120 (which I believe are probably the same anyway) also without auto-reboot for five tries. So I suppose it may depend on what Prime95 is doing exactly (its load varies over time), or my system's state (pretty close to steady state I suppose, at 51C with very silent fans). Anyway, off to play BF V with usual +120/+800 GPU/VRAM OC. ¯\_(ツ)_/¯


----------



## nycgtr

Bad habits die hard. Tried to avoid sli this time around but meh.


----------



## arrow0309

nycgtr said:


> Bad habits die hard. Tried to avoid sli this time around but meh.


Nicely done, congrats! :specool:


----------



## J7SC

boli said:


> I also just had a spontaneous reboot, and I can reproduce it easily!
> (cut)
> 
> Update: After having it happen 3 times, I tried to reproduce with +90 and +105 GPU OC, with no auto-reboot, and then again with +130 and +120 (which I believe are probably the same anyway) also without auto-reboot for five tries. So I suppose it may depend on what Prime95 is doing exactly (its load varies over time), or my system's state (pretty close to steady state I suppose, at 51C with very silent fans). Anyway, off to play BF V with usual +120/+800 GPU/VRAM OC. ¯\_(ツ)_/¯


It may be that similar symptoms have (somewhat) different causes in chasing down these issues - but temps and 'clean power' are not only at the top of the list, they are also related (via GPU Boost, among other things). My go-to heavy-lift PSU is the Antec Platinum HPC 1300w, which I have had for some time and am using in my current 2080 TI SLI build. It's multi-rail; in fact the four 12v rails can handle 50 amps each as long as the overall 1300w is not exceeded (though I have at times, i.e. on sub-zero).

This is an issue that potentially relates, given heavy simultaneous loads on CPU and GPU. Usually at this stage, people start throwing their eggnog mugs at folks who suggest multi-rail in this day and age, when single-rail is more 'modern' etc, and a *single rail with a lot of* headroom over and above an oc'ed CPU and 2080 TI should be fine. I have several other PSUs at 1200w or higher that are single-rail as well, but for heavy CPU and GPU loads, it's the Antec HPC.

PSU 'spike' handling is likely one of the culprits behind the issues described on these pages, and multi-rail can help there. The other is temps - and they are related not only via GPU Boost, but also because the hotter a transistor gets, the more juice you have to give it; rinse and repeat. And average watt usage vs. spikes can be very different animals, the latter being fond of tripping OCP.
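The multi-rail arithmetic above is easy to check: four 12v rails rated at 50 amps each, capped at 1300w combined. The load figures below are hypothetical round numbers for a CPU and two 2080 TIs on separate rails, not measurements.

```python
# Back-of-the-envelope headroom check for the multi-rail PSU described
# above: each rail must stay under its own 12V * 50A budget, and the sum
# of all loads must stay under the combined 1300W cap.

RAIL_VOLTS = 12.0
RAIL_AMPS = 50.0
COMBINED_W = 1300.0

per_rail_w = RAIL_VOLTS * RAIL_AMPS  # 600 W available per rail

# hypothetical worst-case loads, one device per rail
loads_w = {"gpu1": 380, "gpu2": 380, "cpu": 250}

rail_ok = all(w <= per_rail_w for w in loads_w.values())
total_ok = sum(loads_w.values()) <= COMBINED_W
print(per_rail_w, rail_ok, total_ok)  # 600.0 True True
```

Note this only covers sustained draw; the transient spikes discussed above can exceed these averages considerably, which is exactly when per-rail OCP matters.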




nycgtr said:


> Bad habits die hard. Tried to avoid sli this time around but meh.



:applaud:Looks awesome. SLI / +/++ is addictive


----------



## kot0005

J7SC said:


> I just can't reproduce your temps on either of my cards. Running multiple 4k Superposition tests to set VRAM speed, w/ fans on idle... the highest delta to ambient I ever got was 25C or so. Sometimes though, factory installs can be a bit 'off'. I can only tell you what my 2 Aorus cards are doing.
> 
> As mentioned, I have multiple EK blocks and like them a lot, but one leaked right on top where the o-ring secures the 'serviceable' top plate. So it's a double-edged sword, pro/con thing, IMO. Plus I service the w-c system regularly re. liquids - a mix of basic Thermaltake liquid w/ some distilled water.
> 
> 
> 
> I could be wrong, but they look like they are Bitspower - it looks a lot like the one in the pic below, though that is the version for the MSI Gaming X Trio.
> 
> 
> View attachment 241358
> 
> 
> 
> 
> ...very nice! One can never have enough cooling power. Got a Heatkiller IV, but for the CPU (TR 2950X), and applying liquid metal tomorrow. Threadripper has a big heat-spreader surface area :sad-smile



Bitspower blocks will def perform better because they have a jetplate. The Waterforce doesn't have one.


----------



## Glerox

Merry Christmas!

So I've finally completed my 2080 TI build. Please OP add me to the owner page.

Gallery here :
http://imgur.com/a/7hP1Ed2

You can see the whole build in a 10 min video timelapse here : 




I've been so unlucky in the silicon lottery this time 

My 2080TI FE goes to a maximum of 2040 MHz on the core and 7500 MHz on the VRAM... (+80/+500). The 380W BIOS doesn't change anything.

My 9900K needs 1.36V for 5GHz on all cores... it gets toasty as hell...

really unlucky this time... Oh well it sucks but the build looks awesome


----------



## EarlZ

I just got a Gigabyte 2080TI Gaming OC a few days ago. I tried the OC scanner and got an avg overclock of +151, then +185 on the second run (didn't apply the OC); not sure why the results vary by a clock bin or two between runs.

Raising the voltage slider even the slightest bit pushes the voltage to 1.081v - is that normal?

What's the best method & app to determine a stable memory OC on the 2080TI?


----------



## J7SC

Glerox said:


> Merry Christmas!
> 
> So I've finally completed my 2080 TI build. Please OP add me to the owner page.
> 
> Gallery here :
> http://imgur.com/a/7hP1Ed2
> 
> You can see the whole build in a 10 min video timelapse here : https://youtu.be/Kqj1zu37bwM
> 
> I've been so unlucky in the silicon lottery this time
> 
> My 2080TI FE goes to a maximum of 2040 MHz on the core and 7500 MHz on the VRAM... (+80/+500). The 380W BIOS doesn't change anything.
> 
> My 9900K needs 1.36V for 5GHz on all cores... it gets toasty as hell... really unlucky this time... Oh well, it sucks, but the build looks awesome



That's a gorgeous build :wubsmiley - absolutely love the color combo throughout, and the cable management for the GPU is impressive. 

I'm still working on the case mod for my Core P5, though I've almost finished that part - went through three Dremel metal cutting wheels in 2 hrs today.


----------



## Zammin

kot0005 said:


> Yes, but I had tested a reference card with an EK block before the Waterforce, and the Waterforce temps are really high: I'd get only 40c at max load with 20c ambient on the EK block, while the Waterforce hits 50c easily. I have never had problems with EK blocks leaking, other than the nickel plating wearing off. You will always want blocks to be removable so you can clean out the stuff that gets stuck in the microfins every now and then; otherwise it will restrict your flow and cause heat issues.
> 
> So yeah, not being able to service the block on the user end is a 100% negative. I am just putting it out there so people know. The block is also not mounted to the PCB on the right side, which is pretty bad because it reduces the rigidity and structural integrity of the card.


I'm a bit late here but yeah not being able to take apart the block would suck. After a while I find you need to disassemble the block to properly clean things. Sometimes you can be lucky enough to have everything stay perfectly clean and free of all debris over time but it's not guaranteed. Sucks that the bolts are all covered by the front plate. I guess they don't want you taking it apart :/

I haven't been following this thread for a while now but out of curiosity, how come you went for the Waterforce? I seem to recall you had an EVGA XC ultra ref card that was pretty good and it had an EK block fitted to it. Did you want a custom card instead?


----------



## Nizzen

EarlZ said:


> I just got a Gigabyte 2080TI Gaming OC a few days ago. I tried the OC scanner and got an avg overclock of +151, then +185 on the second run (didn't apply the OC); not sure why the results vary by a clock bin or two between runs.
> 
> Raising the voltage slider even the slightest bit pushes the voltage to 1.081v - is that normal?
> 
> What's the best method & app to determine a stable memory OC on the 2080TI?


Play battlefield V multiplayer


----------



## boli

EarlZ said:


> [snip] What's the best method & app to determine a stable memory OC on the 2080TI?



Time Spy graphics test 2, Shadow of the Tomb Raider benchmark and Battlefield V worked well for me.


----------



## krkseg1ops

*Gigabyte 2080 Ti Windforce (non-OC)*

Does anybody know whether Gigabyte has plans to release an updated BIOS for the Windforce non-OC? It uses the non-binned (TU102-400) chip and as such cannot be flashed. Or can it?


----------



## nycgtr

krkseg1ops said:


> Does anybody know whether Gigabyte has plans to release an updated BIOS for the Windforce non-OC? It uses the non-binned (TU102-400) chip and as such cannot be flashed. Or can it?


Wow, they started putting non-A chips in the Windforce non-Gaming? I opened like 6 of them around launch; all of them were As.


----------



## krkseg1ops

I was under the impression TU102-400 chips were being installed in cheaper models. Anyway, do you think the performance gain is worth me returning the card (I can do it until tomorrow), getting the Gaming OC (I have to have a card which is no more than 125mm tall) and flashing the 141% power limit BIOS? I heard it's a matter of a few frames in 4K, but at the cost of thermals. I have a Corsair Air 240 case, which is an mATX case. Adding 30% more power limit might mess up my thermals, which are fine right now.


----------



## VPII

Lots of water cooling 2080 TI cards on here.... Are there any people here running it on AIR, stock AIR and if so what are your temps and overclocks?


----------



## Sheyster

VPII said:


> Lots of water cooling 2080 TI cards on here.... Are there any people here running it on AIR, stock AIR and if so what are your temps and overclocks?


MSI Duke OC air-cooled with the 380W BIOS installed. +150 on core using MSI AB, no memory OC. Temps typically max out at 72 C in BF V.


----------



## boli

nycgtr said:


> Wow, they started putting non-A chips in the Windforce non-Gaming? I opened like 6 of them around launch; all of them were As.


Dunno about this model, but a few pages back someone noticed that Asus (IIRC) now uses non-A chips in a SKU where previously A chips were used. The theory was that the A chips used to be more plentiful than the non-A chips for some reason, so they were used instead. So in essence, some early buyers got lucky. After all, any non-factory-OC model should be fine with a non-A chip in theory.



krkseg1ops said:


> I was under the impression TU102-400 chips were being installed in the cheaper models. Anyway, do you think the performance gain is worth returning the card (I can do it until tomorrow), getting the Gaming OC (I need a card no more than 125mm tall) and flashing the 141% power limit BIOS? I've heard it's a matter of a few frames in 4K, at the cost of thermals. I have a Corsair Air 240, which is an mATX case; adding 30% more power limit might mess up my thermals, which are fine right now.


There will probably be different opinions about that. For absolute max frame rates, you probably do want the 380W power limit, *assuming* you can cool it appropriately (that pretty much means water). Personally I flashed the max 320W BIOS back onto my card, because the few extra FPS for ~19% higher power use (and worse thermals) of the 380W BIOS was not worth it, to me.

Also, quite a few recent A chips have compared rather poorly to earlier samples, OC-wise. With a card from early December I'm at +120 GPU (= max 2070 MHz in game, typically less) and +800 memory, for example, where earlier ones could do 2100+.

You might also get a lucky sample of a non-A chip, though due to binning I assume the chance is lower. Also, depending on your particular model it may have "too low" a power limit even for air (this is rather subjective; for my card the 320W limit turned out to be quite a nice balance of power and performance).



VPII said:


> Lots of water cooling 2080 TI cards on here.... Are there any people here running it on AIR, stock AIR and if so what are your temps and overclocks?


I posted comparisons for my card on air and water, with and without OC, with different power limits. Don't remember temps on air exactly, but they were pretty high (70+C) even with annoying fans.


----------



## krizby

krkseg1ops said:


> I was under the impression TU102-400 chips were being installed in the cheaper models. Anyway, do you think the performance gain is worth returning the card (I can do it until tomorrow), getting the Gaming OC (I need a card no more than 125mm tall) and flashing the 141% power limit BIOS? I've heard it's a matter of a few frames in 4K, at the cost of thermals. I have a Corsair Air 240, which is an mATX case; adding 30% more power limit might mess up my thermals, which are fine right now.


Well, I bought an Asus 2080 Ti Turbo that has a non-A chip; the new batch of Gigabyte Windforce also uses non-A chips, and even the MSI Duke non-OC version uses non-A chips, lol. Ngreedya said the non-A chip was supposed to sell at the $1,000 MSRP, yet now all the non-OC versions use non-A chips and sell at ~$1,200. Talk about bait-and-switch tactics. The only difference I see is the power limit though: non-A chips max out at 280W, while A chips go to 380W with the Galax BIOS. A higher TDP could net you higher benchmark scores, but in gaming you would hardly notice it (the useless extra heat is discernible though).


----------



## ComansoRowlett

https://www.techpowerup.com/gpu-specs/galax-rtx-2080-ti-hof-oc-lab-wc-edition.b6412 Are you able to flash this BIOS on a reference-PCB card? Would love the 400/450W power limit.


----------



## Sheyster

ComansoRowlett said:


> https://www.techpowerup.com/gpu-specs/galax-rtx-2080-ti-hof-oc-lab-wc-edition.b6412 Are you able to flash this bios on a reference PCB card? Would love the 400/450w power limit.


Looks like the BIOS is up in their database:

https://www.techpowerup.com/vgabios/204869/galax-rtx2080ti-11264-180927

Wanna try to flash it and let us know the results?


----------



## ComansoRowlett

Sheyster said:


> Looks like the BIOS is up in their database:
> 
> https://www.techpowerup.com/vgabios/204869/galax-rtx2080ti-11264-180927
> 
> Wanna try to flash it?


Yeah I saw; I was just wondering if anyone had any success before attempting it myself. I wouldn't want to have to faff around sorting out a bricked card while I'm not at home to get a backup card out.
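Before flashing anything, a basic sanity check on the downloaded file costs nothing: a valid PCI expansion ROM image starts with the signature bytes 0x55 0xAA, so a truncated download or a saved HTML error page is easy to reject before it ever touches the card. This is only an illustrative sketch (the function name and size threshold are made up), not a substitute for nvflash's own checks:

```python
# Hypothetical sanity check on a downloaded VBIOS file before flashing.
# A valid PCI expansion ROM image begins with the signature bytes
# 0x55 0xAA; a truncated download or saved error page will not.

def looks_like_vbios(data: bytes, min_size: int = 128 * 1024) -> bool:
    """True if the blob carries the PCI option-ROM signature and has a
    plausible size for a Turing VBIOS (typically several hundred KB)."""
    return len(data) >= min_size and data[:2] == b"\x55\xaa"

# Synthetic stand-ins for a real download:
good_rom = b"\x55\xaa" + bytes(512 * 1024)   # signature + padding
bad_rom = b"<html>404 Not Found</html>"      # an error page, not a ROM

print(looks_like_vbios(good_rom))   # True
print(looks_like_vbios(bad_rom))    # False
```

Keeping a verified backup of the card's original BIOS (nvflash can save one) is still the real safety net here.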


----------



## Coldmud

The 450W HOF BIOS is really useless without a 3rd 8-pin connector to the motherboard, as others in this thread have stated; it was covered a few pages back. The HOF card has a unique VRM. The BIOS ranges up to +12% PL: at 0% PL, in my case, that was about 340W, and at +12% it never exceeded 380W total TDP. It also introduced more fluctuation in clock speeds. The 380W Galax is still the best one to try on water or air, IMHO.


----------



## iRSs

Coldmud said:


> The 450W HOF bios is useless really, without a 3rd 8pin connector to the motherboard as others in this thread have stated. It was covered a few pages back. The HOF card has a unique VRM. The bios ranges to +12% PL, at 0%PL, in my case, that was about 340W And at 12% never exceeded 380W total TDP. It also introduced more fluctuations in clock speeds, the 380W GALAX is still the best one to try water or air IMHO.


3rd connector for what?

Sent from my LG-H930 using Tapatalk


----------



## Coldmud

iRSs said:


> 3rd connector for what?
> 
> Sent from my LG-H930 using Tapatalk


excuse me, still half hungover. I meant 3rd 8 pin power connector going from PSU to GPU.


----------



## ComansoRowlett

Problem is I'm hitting the 380W power limit; it seems my GPU is particularly leaky but clocks well, which is why the 400W/450W limit would aid my OC. I hit 73C max at 100% fan speed on my Founders card.


----------



## iRSs

Coldmud said:


> excuse me, still half hungover. I meant 3rd 8 pin power connector going from PSU to GPU.


I know what you asked, but my question still stands:
a 3rd 8-pin connector from PSU to GPU for what purpose?

Sent from my LG-H930 using Tapatalk


----------



## Coldmud

They are all power limited. The new Lightning is reported to have 3x 8-pin PCIe connectors. I also hit 380W with my stock BIOS, however the Galax 380W BIOS holds clocks better. You are better off finding the lowest possible voltage you can get away with while staying stable at around 2050 MHz.
That way you won't hit the power limit as often and inevitably get downclocked.


----------



## iRSs

ComansoRowlett said:


> Problem is I'm hitting the 380w power limit, it seems my GPU is particularly leaky but clocks well hence why the 400w/450w limit would aid my OC. I hit 73C max at 100% fan speed on my Founders card.


Try liquid metal shunt mods.

This much LM almost doubles the power limit; a little more makes the card go into safe mode.









Sent from my LG-H930 using Tapatalk


----------



## Coldmud

iRSs said:


> i know what you asked but my question still stands.
> 3rd 8pin connector from psu to gpu for what purpose?
> 
> Sent from my LG-H930 using Tapatalk


To go beyond the PCIe 2x 8-pin power spec and get more than 375W total TDP (75W from the PCIe slot + 2x 150W from the 8-pin PSU connectors = 375W).
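That arithmetic can be written down explicitly; the constants are the official PCIe budget figures (75 W from the x16 slot, 150 W per 8-pin, 75 W per 6-pin), which cards demonstrably exceed in practice:

```python
# Official PCIe board power budget per the spec figures quoted above:
# 75 W from the x16 slot, 150 W per 8-pin connector, 75 W per 6-pin.

def board_power_budget(n_8pin: int, n_6pin: int = 0) -> int:
    SLOT_W, PIN8_W, PIN6_W = 75, 150, 75
    return SLOT_W + n_8pin * PIN8_W + n_6pin * PIN6_W

print(board_power_budget(2))   # reference 2080 Ti, 2x 8-pin -> 375
print(board_power_budget(3))   # 3x 8-pin card (e.g. the Lightning) -> 525
```

As the later posts note, these are connector integrity ratings, not hard electrical limits, so a shunt-modded 2x 8-pin card can and does pull well past 375W.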


----------



## Coldmud

EarlZ said:


> Whats the best method & app to determine a stable memory OC on the 2080TI's


Witcher 3 with the max-rendering Hunter config and the HD Reworked pack. It made me back the memory off another 50 MHz when everything else mentioned was stable.


----------



## kot0005

Glerox said:


> Merry Christmas!
> 
> So I've finally completed my 2080 TI build. Please OP add me to the owner page.
> 
> 
> really unlucky this time... Oh well it sucks but the build looks awesome


Nice! That looks beautiful.



Zammin said:


> I'm a bit late here but yeah not being able to take apart the block would suck. After a while I find you need to disassemble the block to properly clean things. Sometimes you can be lucky enough to have everything stay perfectly clean and free of all debris over time but it's not guaranteed. Sucks that the bolts are all covered by the front plate. I guess they don't want you taking it apart :/
> 
> I haven't been following this thread for a while now but out of curiosity, how come you went for the Waterforce? I seem to recall you had an EVGA XC ultra ref card that was pretty good and it had an EK block fitted to it. Did you want a custom card instead?


Yeah, I was mesmerised by the looks of the Waterforce card's block. It looks really nice with the brushed aluminium cover over the block, but it's not so great performance-wise. Ended up selling my XC Ultra and the block for $2k; I copped around a $300 loss but it is what it is.


----------



## iRSs

Coldmud said:


> To go beyond the PCIe 2x 8-pin power spec and get more than 375W total TDP (75W from the PCIe slot + 2x 150W from the 8-pin PSU connectors = 375W).


Those specifications are for the PHYSICAL integrity of the connectors and wires themselves.

Did you actually TEST this?
My PC can easily pull above 700W from the wall in Kombustor or Path of Exile
(8700K stock, 2080 Ti)


----------



## Nizzen

iRSs said:


> those specifications are for PHYSICAL integrity of the connectors and wires themselves.
> 
> did you actually TEST this?
> my pc can easly pass above 700w from the wall in kombustor or path of exile
> (8700k stock, 2080ti)


Picture?


----------



## Coldmud

iRSs said:


> those specifications are for PHYSICAL integrity of the connectors and wires themselves.
> 
> did you actually TEST this?
> my pc can easly pass above 700w from the wall in kombustor or path of exile
> (8700k stock, 2080ti)


With the HOF BIOS, as I said, I saw 340W at 0% and 380W at +12% in the GPU-Z sensor log.
If you mean running an actual watt meter between the PSU and the socket, then no, I didn't.
Why should I? Those are the limitations of 2x 8-pin cards; of course you can shunt-mod the card and fool it into thinking it's running at different stages. What is the discussion here?
And passing 700W in Path of Exile?


----------



## Zammin

kot0005 said:


> Yeah, I was mesmerised by the looks of the Waterforce card's block. It looks really nice with the brushed aluminium cover over the block, but it's not so great performance-wise. Ended up selling my XC Ultra and the block for $2k; I copped around a $300 loss but it is what it is.


Ahh yeah, I getcha now. When I was looking at it (when it first popped up on PLE's website) I was wondering how the block would compare to the commonly used aftermarket solutions. It was hard to tell because you can't really see inside the block.


----------



## J7SC

The dual 8-pin / max watt question seems to pop up every month or so. By official spec, 2x 8-pin = 300W, and adding another 75W via the PCIe slot, there's your 375W limit. The keyword is 'official'. It's sort of like modding your Honda Civic engine with a really big turbo and 30 pounds of boost: it very likely will rev past the factory redline to 13k rpm or so, but whether you should, and how often, is a different matter...

Here's a pic from [email protected] looking at the MSI MEG X399 Creation and its high-po VRM. At the top right, you see two 12V 8-pin connectors on the mobo circled with '960w'. That's where a 32-core on LN2 may or may not get to. I realize that 8-pin mobo connectors and PCIe connectors are somewhat different in terms of mapping, but they carry similar specs. That's about an absolute outside LN2 potential, though, not 'spec'. Clearly, a GPU with 2x 8-pin can pull far more than 375W, but that is the official spec, which is also why some manufacturers add a 6-pin or a 3rd 8-pin on their top-end cards.


----------



## Ostrava

Hey guys, hoping someone can help me with this.

I have two 2080 ti ftw3's from EVGA. I was using one of them alone on its air cooler for the past month or so, and I recently got the second one. 
I installed the EVGA FTW3 Hydro Copper waterblock on the second card and swapped the first card out for it (so that I can keep using the PC while I install the waterblock on the first card).

For some reason, this card is running at about 75% of the performance rate of the previous card. This is regardless of the scenario: I have compared performance on 3DMark Fire Strike Ultra, where I am seeing 37fps on the first scene as opposed to 47 fps average from my benchmarks with the original card.
Additionally, I have a CUDA program I have written that I use frequently, and it is running in 4.2 seconds as opposed to the 3.0 seconds the original card ran.
Both cards were/are running in the OC BIOS mode.

I have checked the temperatures, and it stays below 40C (due to watercooling). The frequency readout from afterburner is 2070 mhz during the Fire Strike Ultra benchmark. The benchmark also reports 99% GPU utilization.

I am really confused as to what could be causing the exact same make and model of GPU to be running such a performance delta between two cards.

Any help would be appreciated.
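For what it's worth, the two reported data points can be put on the same scale; if the deficit is consistent across unrelated workloads (a graphics benchmark and a CUDA program), that points at the card or driver state rather than one misbehaving benchmark. The numbers below are just the ones from the post:

```python
# Express both reported results as a fraction of the first card's speed.
fire_strike = 37 / 47     # fps vs fps: higher is faster
cuda_app = 3.0 / 4.2      # old runtime / new runtime: lower time is faster

print(f"Fire Strike Ultra: {fire_strike:.0%} of card #1's speed")
print(f"CUDA program:      {cuda_app:.0%} of card #1's speed")
# Both land in the 70-80% range, consistent with the ~75% estimate.
```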


----------



## J7SC

Ostrava said:


> Hey guys, hoping someone can help me with this.
> 
> I have two 2080 ti ftw3's from EVGA. I was using one of them alone on its air cooler for the past month or so, and I recently got the second one.
> I installed the evga ftw3 hydro copper waterblock on the second card and swapped it out the first card for it (so that I can use it while I am installing the waterblock on the first card).
> 
> For some reason, this card is running at about 75% of the performance rate of the previous card. This is regardless of the scenario: I have compared performance on 3DMark Fire Strike Ultra, where I am seeing 37fps on the first scene as opposed to 47 fps average from my benchmarks with the original card.
> Additionally, I have a CUDA program I have written that I use frequently, and it is running in 4.2 seconds as opposed to the 3.0 seconds the original card ran.
> Both cards were/are running in the OC BIOS mode.
> 
> I have checked the temperatures, and it stays below 40C (due to watercooling). The frequency readout from afterburner is 2070 mhz during the Fire Strike Ultra benchmark. The benchmark also reports 99% GPU utilization.
> 
> I am really confused as to what could be causing the exact same make and model of GPU to be running such a performance delta between two cards.
> 
> Any help would be appreciated.



No two cards are exactly alike in performance, but this seems a bit much of a delta. I presume you are running both at the same OC AND max Power Target (slider to the right in apps like MSI AB), and they both read the same percentage? Even then, they may not be the same-spec card, as discussed elsewhere in this thread: same model name etc. but slightly different specs as part of a running change made by the manufacturer, including A-spec and non-A-spec.

You can try flashing the second card with the BIOS of the first and see if that leads to more balanced performance. Even before that, with both cards at the same OC and Power Target, run Unigine Superposition 4K (which will max watt usage) and have GPU-Z open on the sensor tab while running Superposition. If one reads 375±W and the other 320±W at the end of the test with all else equal, then you have slightly different-spec models of the same card. Flashing the original BIOS from the first card to the second **may** help.


----------



## Ostrava

J7SC said:


> No two cards are exactly alike in performance, but this seems a bit much in terms of delta. I presume you are running both at the same oc AND max Power Target (slider to the right in apps like MSI AB) and they both read the same percentage ? Even then, they may not be the same-spec card is as discussed elsewhere in this thread > same model name etc but slightly different specs as part of a running change made by the manufacturer, including on A spec and non A spec.
> 
> You can try flashing the second card with the Bios of the first and see if that leads to a more balanced performance. Even before then, with both cards at the same oc and Power Target, run Unigine Superposition 4k (which will max watt usage) and have GPUz open on the sensor tab while running Superposition. If one reads 375+- watts and the other 320 +- watts at the end of the test with all else equal, then you have slightly different spec models of the same card. Flashing the original bios from the first to the second **may** help.


I should clarify I only have the second card installed currently, and can only compare to previously run results. With that said, I ran Unigine Superposition 4k and GPU-Z reported that the power usage maxed at 170W (which is 57% of total TDP also according to GPU-Z). Could this be the issue?


----------



## J7SC

Ostrava said:


> I should clarify I only have the second card installed currently, and can only compare to previously run results. With that said, I ran Unigine Superposition 4k and GPU-Z reported that the power usage maxed at 170W (which is 57% of total TDP also according to GPU-Z). Could this be the issue?



That definitely sounds *way off*. It's not a problem of only having one card in at a time (unless you want to BIOS-flash the second one via the first), but for now you need to find out what else is going on. Superposition 4K pegs my two Aorus 2080 Ti XTR WBs at 375W for the first card and 378W for the second (in single-card runs).

To begin with, your 2nd card should reach at least 95+% of TDP in that run. Further, 170W being 57% implies a full TDP of about 300W, which also sounds low. While you may very well have slightly different-spec cards under the same name, there is clearly another issue somewhere. BTW, what were the max temps in that Unigine 4K run (wondering if it is temp-throttling)? If temps are OK, you might want to consider a separate, temporary Win 10 install just for a quick test, with card #1 NOT installed at all, just card #2, and then the NVIDIA driver. I say that because having two or more of the same card and flipping them between PCIe slots can sometimes lead to very odd results.
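The GPU-Z comparison can also be done on a saved sensor log instead of eyeballing the sensor tab, since GPU-Z can log to CSV. A minimal sketch; the column header ("Board Power Draw [W]") is an assumption, as header names vary between GPU-Z versions, so check your own log first:

```python
import csv
import io

# Pull the maximum of one column out of a GPU-Z-style CSV sensor log.
# The header "Board Power Draw [W]" is assumed; verify against your log.

def max_column(log_text: str, column: str) -> float:
    rows = csv.DictReader(io.StringIO(log_text))
    return max(float(row[column].strip()) for row in rows)

sample = """Date,GPU Clock [MHz],Board Power Draw [W]
2019-01-05 12:00:01,2070,305.2
2019-01-05 12:00:02,2055,372.8
2019-01-05 12:00:03,2070,341.0
"""
print(max_column(sample, "Board Power Draw [W]"))   # 372.8
```

Comparing the logged maximum from each card's run gives the same 375W-vs-320W verdict without watching the screen during the benchmark.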


----------



## J7SC

*Addendum: *

Can you do another Superposition 4K optimized run with the GPU-Z sensor tab open and GPU, VRAM, temp, TDP %, TDP W and VDC (GPU voltage) all set to 'highest reading' in GPU-Z, then take a screenshot right after and post it?


----------



## krizby

Ostrava said:


> I should clarify I only have the second card installed currently, and can only compare to previously run results. With that said, I ran Unigine Superposition 4k and GPU-Z reported that the power usage maxed at 170W (which is 57% of total TDP also according to GPU-Z). Could this be the issue?


You should probably take the WB off and see if any thermal compound (which can be electrically capacitive) has dripped onto the SMDs around the chip, and make sure you peeled all the plastic film off the thermal pads.


----------



## J7SC

krizby said:


> You should probably take the wb off and see if there are any thermal compound that is electrically capacitance dripped onto the smd around the chip, and make sure you peeled off all the plastic tape over the thermal pads.



btw, nice cat in your Avatar


----------



## jura11

Hi guys

What temperatures are you seeing with EK Vector RTX 2080Ti waterblock?

I'm running an EK Vector RTX 2080 Ti waterblock in my build (top slot RTX 2080 Ti, second slot GTX 1080 Ti, and two GTX 1080s in the other two slots), in a Caselabs M8 with pedestal, 4x 360mm radiators plus a MO-RA3 360, and an Aquacomputer Kryos CPU block. Flow with all that is around 132-135 LPH right now, down from 160-164 LPH.

The GTX 1080 Ti and the two GTX 1080s are running the same or similar temperatures as before, no change: 31-38°C under load in gaming or rendering etc.

The RTX 2080 Ti, on the other hand, seems to run a bit on the hot side, hotter than I would expect: idle temperatures in the low 20s (22-25°C) and 40-45°C load temperatures in gaming or rendering. I expected temperatures similar to my GTX 1080 Ti at least.

On the RTX 2080 Ti I used Thermal Grizzly Kryonaut (pea method) and the included EK thermal pads.

Not sure if it's worth trying the X method for the TIM; on the GTX cards I have always used the X method, which has proven itself on my build.

Ambient temperature is usually 21-23°C.

Trying to get to the bottom of it; not sure if the waterblock, the TIM, or even the thermal pads are the problem.


Hope this helps 

Thanks, Jura


----------



## kot0005

I had to RMA my Waterforce. Found some cracks near one of the screws; they probably overtightened it during assembly. I can't even check whether the screw is over-tight because they're all hidden behind the aluminium cover.

Will try to get the MSI Sea Hawk EK and report back; at least on that one you can see the internals to check for clogging in the future. I am gonna miss the Waterforce because it just looks so good, but the build quality isn't that great.


So does the 380W BIOS work on the Gaming X Trio? Any good overclocks?


----------



## kot0005

Here is the photo.


----------



## MrTOOSHORT

Jura, 2080 Tis run hotter than last gen. Looks about right for your EK'd card.


----------



## Snoopvelo

*EVGA RTX 2080 Ti FTW3 ULTRA*

Received last week from B&H


----------



## Ostrava

krizby said:


> You should probably take the WB off and see if any thermal compound (which can be electrically capacitive) has dripped onto the SMDs around the chip, and make sure you peeled all the plastic film off the thermal pads.





J7SC said:


> That definitely sounds* way off* - It is not a problem of only having one card in at a time (unless you want to bios flash the second one via the first one), but for now, you need to find what else is going on. Superposition 4K pegs my two Aorus 2080 TI XTR WB at 375w for the first and 378w for the second card (in single-card runs).
> 
> To begin with, your 2nd card should at least reach 95+ % of TDP in that run...further, the 57% of 170w equate to about 300w, which also sounds low. While you may very well have slightly different spec cards by the same name, there clearly is another issue somewhere. BTW, what were the max temps in that Unigine 4k run (wondering if it is temp-throttling). If temps are ok, you might want to consider a separate and temporary Win 10 install just for a quick test with card #1 NOT installed at all, just card #2, and then the NVidia driver. I say that because having two or more of the same cards and flipping them in a given PCIe slot or between two PCIe slots can sometimes lead to very odd results.





J7SC said:


> *Addendum: *
> 
> Can you do another Superposition 4k opt. run with GPUz Sensor tab open and GPU, VRAM, Temp, TDP %, TDP w and VDC (gpu voltage) all set to 'highest reading' in GPUz, then take a screenshot right after and post it ?


Thanks for the help guys. I reinstalled Ubuntu on another drive and booted into that, ran some benchmarks there, and everything looked normal. I booted back into Windows, and now everything works fine! I thought I had already tried restarting my PC, but I guess I didn't restart it hard enough. Anyway, the problem has resolved itself. I appreciate the help.


----------



## J7SC

kot0005 said:


> I had to RMA my Waterforce. Found some cracks near one of the screws; they probably overtightened it during assembly. I can't even check whether the screw is over-tight because they're all hidden behind the aluminium cover.
> 
> Will try to get the MSI Sea Hawk EK and report back; at least on that one you can see the internals to check for clogging in the future. I am gonna miss the Waterforce because it just looks so good, but the build quality isn't that great.
> (cut)





kot0005 said:


> Here is the photo.



I have a tough time understanding this given your post #4862. What happened? Anyway, good luck with the RMA and whatever card you get next.


----------



## J7SC

Ostrava said:


> Thanks for the help guys. I reinstalled ubuntu on another drive and booted into that, ran some benchmarks there and everything looked normal. I boot back into windows, and now everything works fine! I thought I had already tried restarting my PC but I guess I didn't restart it hard enough. Anyways, the problem has resolved itself. I appreciate the help.



Cheers  Glad it worked out !


----------



## Sheyster

kot0005 said:


> So does the 380W BIOS works on gaming X trio ? any good Overclocks ?


It does work on that MSI card. Can't speak about specific OC's for the card though.


----------



## boli

Merged into next


----------



## boli

jura11 said:


> What temperatures are you seeing with EK Vector RTX 2080Ti waterblock?
> 
> I'm using or running EK Vector RTX 2080Ti waterblock on my build(top slot RTX 2080Ti, second slot GTX1080Ti and in two other slots have two GTX1080) that's in Caselabs M8 with pedestal and 4*360mm radiators plus MO-ra3 360mm and with Aquacomputer Kryos CPU block, flow with all that is around 132-135LPH right now […]
> 
> RTX 2080Ti on other hand seems is running bit on hot side or hotter temperatures what I would expect, idle temperatures in 20's or 22-25°C and 40-45°C load temperatures like in gaming or rendering […]
> 
> On RTX 2080Ti I have used Thermal Grizzly Kryonaut, pea method and used included EK thermal pads […]
> 
> Ambient temperature usually is in 21-23°C


When I fully stress my system (see below) my GPU temp ends up at about 52C, with quiet fans (~800 rpm). During gaming it's usually a few degrees lower. I have a lot less rad space than you and (reportedly) weak EK slim rads. Could get temps below 50C with 1400 rpm fans. 

Like someone else said, the 2080 Ti uses a lot of power (up to 380W depending on firmware), which turns to heat, so I'd also say that your temps are fine/normal.



boli said:


> *I also just had a spontaneous reboot, and I can reproduce it easily!*
> 
> More details of original post inside:
> 
> 
> Spoiler
> 
> 
> 
> (I'm not really concerned, because it doesn't happen in practice, but maybe it's helpful or interesting to anyone. As a software engineer I love reproducible issues )
> 
> My HW: stock 6700K (not OC at all), 16 GiB RAM, KFA2 (AKA Galax) OC 2080 Ti with +120/+800 OCs at 320W, Seasonic Prime Titanium 750W, an M2 and a SATA SSD, EK Velocity + Vector, D5, 420 + 280 slim rads.
> 
> Because I just swapped the in/out port order on my EK GPU block, I was running Furmark (stress) + Prime95 (max heat) to see if it makes any difference whatsoever. For fun I launched a Microsoft DXR Demo (the "Procedural Geometry" one), which immediately caused a reboot.
> 
> I've run this DXR demo countless times by itself, without any issues. Also alternating it with Furmark IIRC (whichever window is in the foreground gets more GPU cycles). I'm not sure I've had Prime95 in the background though, and it does not occur when Prime95 is not running in the background. It also doesn't happen when I change the launch order to Furmark, DXR, Prime95.
> 
> Anyway, the exact same thing happened again after the reboot, when I did the same sequence of Furmark + Prime95 stressing, then launching DXR demo.
> 
> It doesn't happen when I close AB 4.6b10 beforehand, I suppose because the GPU's and VRAM's OC is not applied. I've never had this during gaming (mostly BF V currently).
> 
> Also I did have issues like these "spontaneous" reboots during game sessions with my living room PC (4770K + GTX 1080) with some Seasonic fanless 480W power supply 2 years ago. Replacing it with a Seasonic Prime Titanium 650 solved that issue there. That previous power supply was borderline, so no surprise there, OCP not too unexpected.
> 
> With my current main PC though, I am somewhat surprised. Never happened the 2 years before with Titan X (Pascal), but that card was only 300W and didn't have DXR demo obviously.  I did game a lot on that though.
> 
> Anyway, Prime95 also hits the CPU hard with AVX, while the DXR demo is light on power load (unlike Furmark), so it possibly jacks up GPU frequency to levels that then require high voltages, which in turn might trip the OCP. Or insert your theory here. As long as it doesn't happen while gaming normally I'm not too concerned.
> 
> 
> 
> *Update:* After having it happen 3 times, I tried to reproduce with +90 and +105 GPU OC, with no auto-reboot, and then again with +130 and +120 (which I believe are probably the same anyway) also without auto-reboot for five tries. So I suppose it may depend on what Prime95 is doing exactly (its load varies over time), or my system's state (pretty close to steady state I suppose, at 51C with very silent fans). Anyway, off to play BF V with usual +120/+800 GPU/VRAM OC. ¯\_(ツ)_/¯


Two days later and this still hasn't happened again since; not as reproducible as I thought. Just wanted to add that while stress testing as described above, my system draws about ~460W from the wall. At the same time, GPU-Z shows 316W average consumption for the GPU (its firmware's power limit is 320W), and AB shows about 95W power for the CPU.
When I put the DXR demo in the foreground (and thus Furmark gets less GPU time), power draw at the wall lowers to ~392W.
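As a rough cross-check of those numbers: GPU package power plus CPU package power, set against wall draw, has a gap that must absorb PSU losses plus everything unmetered (motherboard, drives, fans, pump). A back-of-the-envelope using only the figures in the post:

```python
# Back-of-the-envelope on the reported figures: metered component power
# vs wall draw. The gap covers PSU losses plus unmetered parts
# (motherboard, drives, fans, D5 pump).

gpu_w, cpu_w, wall_w = 316, 95, 460
implied = (gpu_w + cpu_w) / wall_w
print(f"metered/wall = {implied:.0%}")   # ~89%
# A Titanium-rated PSU is ~94% efficient around this load, so roughly
# 20-25 W of the remainder is the rest of the system: plausible.
```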



krizby said:


> […] Seriously though I don't know why people are so obsessed with getting maximum voltages and power for 2080TI. Short benchmark show some gains but gaming you will likely degrade performance when maxing the the Voltages and PL due to the useless extra heat.
> 
> Best thing to do with 2080ti is undervolting them to like 1.000v or .900v





krizby said:


> […] I did notice the card was boosting above 1.000v when the PC hardlocked and restarted so i undervolt the card to 1.000v (same overclock offset, just max 1.000v) and it has been stable for a week now. The funny thing was I created another overclock profiles that undervolt to .900v and test Hitman 2, the .900v profile (1890mhz) gave higher fps than the 1.000v profile (2010mhz core) in a static scene. The .900v undervolt also gives very tight frametimes consistency in PUBG, which I really need. So yeah undervolt FTW.


Inspired by these posts I tried out some lower voltage flat curves as well, and here are the results. I'm using the firmware my card came with, at 320W power limit.
Memory is overclocked +800 to 7800 MHz, except on the lowest voltage SotTR test, which crashed. That single test thus ran at +500 memory. With more voltage SotTR was fine at +800 memory.

*Games*:
The 320 and 380W OC results are older, I already posted those earlier, mostly – I only retested FC5 @ 320W, because in that test it had some weird regression (GamersNexus noticed that as well, during their Titan RTX tests), which seems to be fixed now.








*3DMark*:








*Curves*:


Spoiler



I started with this curve that was determined by the OC Scanner. It starts at +150, somewhat higher than the flat +120 OC does. It also ends at +120 though (that's what my particular card can run while stable).








The least undervolted curve was achieved by setting the max frequency to *2040 MHz @ 1037 mV* on all of the right side of the curve (time-consumingly, point by point):








Same again for *2010 MHz @ 1012 mV*:








And lastly for max *1920 MHz @ 925 mV*. The picture says 1905 MHz, but during testing it snapped to 1920 MHz, and that was the clock speed shown by the OSD during benchmarks. I currently can't get a screenshot of it, because the curve gets all weird on save with AB 4.6b10 (similar to the light gray spikes in the background):










*Conclusion*:
I rather like the frame time consistency of the voltage- and clock-limited curves (played BF V for hours at 925 mV yesterday), maybe I should try one in between 925mV and 1012mV.
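The curve-editing procedure above boils down to one operation: pick a voltage point and clamp every point at or above it to a single frequency, so the card never requests more voltage than that point. A tiny sketch, with illustrative (mV, MHz) pairs rather than anyone's actual curve:

```python
# Flat-curve undervolt as described above: clamp every V/F point at or
# above a chosen voltage to one target frequency. Values illustrative.

def flatten_curve(curve, cap_mv, cap_mhz):
    return [(mv, min(mhz, cap_mhz)) if mv >= cap_mv else (mv, mhz)
            for mv, mhz in curve]

stock = [(900, 1905), (925, 1920), (1012, 2010), (1037, 2040), (1093, 2100)]
print(flatten_curve(stock, 925, 1920))
# -> [(900, 1905), (925, 1920), (1012, 1920), (1037, 1920), (1093, 1920)]
```

In Afterburner this is the same edit done graphically: drag one point to the target frequency, then flatten everything to its right.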


----------



## Krzych04650

boli said:


> When I fully stress my system (see below) my GPU temp ends up at about 52C, with quiet fans. During gaming it's usually a few degrees lower. I have a lot less rad space than you and (supposedly) weak EK slim rads.
> 
> Like someone else said, the 2080 Ti uses a lot of power (up to 380W depending on firmware), which turns to heat, so I'd also say that your temps are fine/normal.
> 
> 
> 
> Two days later and this still hasn't happened again since; not as reproducible as I thought. Just wanted to add that while stress testing as described above, my system draws about ~460W from the wall. At the same time, GPU-Z shows 316W average consumption for the GPU (its firmware's power limit is 320W), and AB shows about 95W power for the CPU.
> When I put the DXR demo in the foreground (and thus Furmark gets less GPU time), power draw at the wall lowers to ~392W.
> 
> 
> 
> 
> 
> Inspired by these posts I tried out some lower voltage flat curves as well, and here are the results. I'm using the firmware my card came with, at 320W power limit.
> Memory is overclocked +800 to 7800 MHz, except on the lowest voltage SotTR test, which crashed. That single test thus ran at +500 memory. With more voltage SotTR was fine at +800 memory.
> 
> *Games*:
> The 320W and 380W OC results are older; I already posted most of those earlier. I only retested FC5 @ 320W, because that test showed a weird regression (GamersNexus noticed it as well during their Titan RTX tests), which seems to be fixed now.
> View attachment 241904


Undervolting can be really useful; I was running my 1080 SLI undervolted 24/7 at 2025 MHz / 0.975V. My max OC was only 2076 anyway and required max voltage, which increased the power draw by 60W per card compared to the undervolt and increased temperatures by almost 20C for only a 3% performance gain, which didn't make any sense, especially since the cards were air cooled and temperatures were a concern. With the undervolt both cards stayed within their specified 180W TDP despite the overclock, with performance 15% higher than stock, compared to 240W per card with the max OC.

I didn't find that as useful on the 2080 Ti though. It didn't undervolt very well, and the stock 330W power limit was not enough even for something like 0.987V anyway, so going with high power draw (the 380W bios) was the way to go this time, especially since this is an AIO watercooled card and a single one, so temps are not a concern. And so a single 2080 Ti is drawing about the same amount of power as two 1080s in SLI, for about the same performance as 1080 SLI (about 5% slower than games with perfect SLI scaling). So not much progress, except significantly increased resilience against lazy developers; it is almost proof against them.


----------



## raider89

Is the Galax 380W bios still the go-to bios to download for an EVGA 2080 Ti XC Ultra? I just got my GPU waterblock in and will be doing some overclocking benchmarks.


----------



## arrow0309

MrTOOSHORT said:


> Jura, 2080ti run hotter than last gen. Looks about right on your ek card.


Agreed.
Even though I have the same temps (or even better) now with the 2080 Ti + Vector Nickel Acetal as before with my Titan X Pascal + Heatkiller Nickel Plexi, both with backplates and both at roughly the same clocks (2070-2085).
But I'll admit that I've also added a little extra cooling power by installing a third rad (a 280 UT60 in front) inside my case (I still have an XSPC RX360 V3 on top and a 420 Monsta external).
I've also added a second D5 in serial, both with EK Revo tops (not the Revo X2, but the one with a res), which was also needed to compensate for the restriction from the four quick disconnects (two new Barrow ones for the VGA):

https://drive.google.com/file/d/1KIAwR7JMhDaovfeUGeGTPNTVYQFqoduK/view?usp=sharing
https://drive.google.com/file/d/1mnN_QZCkaTdSx7cvcFCTlzWhm7hqnGkK/view?usp=sharing


----------



## krizby

boli said:


> *Conclusion*:
> I rather like the frame time consistency of the voltage- and clock-limited curves (played BF V for hours at 925 mV yesterday), maybe I should try one in between 925mV and 1012mV.


Very nice benchmarks dude. FYI, I found the OC Scanner curve is a little conservative; I was able to raise the curve by +30 MHz (1890 MHz @ 900 mV). Playing games at a 0.900V undervolt already reaches 270W power consumption, and my max PL is only 280W, so I leave it at that.

To test for tight frametimes I limit the AB frametime graph overlay to 0-30 ms and just move the mouse around really quickly in PUBG; with unhinged frequencies/voltages the frametimes are a mess, while undervolting gives a much better-looking frametime graph.


----------



## iRSs

Coldmud said:


> With the HOF bios I told you I saw 340W at 0% and 380W at 12% in the GPU-Z sensor log.
> If you mean running an actual watt meter between the PSU and the socket, then no, I didn't.
> Why should I? These are the limitations of 2x 8-pin cards; of course you can shunt-mod the card and fool it into thinking it's running at different stages, but what is the discussion here?
> And passing 700W in Path of Exile?





Nizzen said:


> Picture?


I have made a video





I didn't bother to tune the frequency curve for the high power limit (hence just 2085 MHz) because I usually play at a 45% limit, which is the highest that does not trigger the PSU's OCP.
Unfortunately I cannot go over 600W at the wall in Path of Exile alone.
I want to mention that the PC itself eats 100-126W at idle and 170W in Prime95 large FFT (205W small FFT), which leaves 430W for the GPU when no AVX is involved.


Furmark would eat a little more, but I cannot run it; here is the video:
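The arithmetic above can be written out explicitly. A rough sketch using the numbers from this post; note the 90% PSU efficiency figure is an assumption for illustration, not something measured here:

```python
# Rough GPU power budget from wall measurements, per the numbers above.
# PSU efficiency is an assumed value, not measured in this post.
PSU_EFFICIENCY = 0.90

wall_limit_w = 600   # max wall draw before the PSU's OCP would trip
rest_of_pc_w = 170   # CPU + rest of system (Prime95 large FFT, no AVX)

gpu_at_wall_w = wall_limit_w - rest_of_pc_w        # 430 W at the wall
gpu_delivered_w = gpu_at_wall_w * PSU_EFFICIENCY   # W actually reaching the card
print(gpu_at_wall_w, round(gpu_delivered_w))
```

The point of the efficiency term is that wall watts overstate what the card itself draws; a meter between PSU and socket always reads higher than the sum of the DC rails.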


----------



## J7SC

undervolting makes a lot of sense - unless you're Rauf from Sweden who pushed his 2080 TI Galax OC Labs to 2700 MHz  / TimeSpy (LN2) @ HWBot...don't think he's running the stock bios  ...still, good to know that w/ proper cooling and the right kind of bios and equipment, 2080 TI has a lot of headroom


----------



## EarlZ

boli said:


> Time Spy graphics test 2, Shadow of the Tomb Raider benchmark and Battlefield V worked well for me.


I would just have to take note of the stock scores and keep pushing until it no longer moves up or when the scores start to drop, correct?


----------



## Jpmboy

J7SC said:


> undervolting makes a lot of sense - unless you're Rauf from Sweden who pushed his 2080 TI Galax OC Labs to 2700 MHz  / TimeSpy (LN2) @ HWBot...don't think *he's running the stock bios*  ...still, good to know that w/ proper cooling and the right kind of bios and equipment, 2080 TI has a lot of headroom


 stock bios? :lachen:



(the card is likely hard modded too)


----------



## LRRP

Two on the way.

https://www.evga.com/products/product.aspx?pn=11G-P4-2489-KR


----------



## J7SC

Jpmboy said:


> stock bios? :lachen:
> 
> (the card is likely hard modded too)


..nah, looks bone-stock to me :arrowhead (actually, the VRM section looks surprisingly untouched <> from this angle... )


----------



## iRSs

J7SC said:


> ..nah, looks bone-stock to me :arrowhead (actually, the VRM section looks surprisingly untouched <> from this angle... )


Doesn't the HOF Lab edition come with something that completely disables power limits and allows you to manually set the voltage?
I have read somewhere that it connects there.


----------



## J7SC

iRSs said:


> Doesn't the HOF Lab edition come with something that completely disables power limits and allows you to manually set the voltage?
> I have read somewhere that it connects there.



...probably, and I love how easily you can make out the shunts. It comes with all kinds of things, unfortunately not in a box with my name and address on it (what an oversight, Galax!)... very low production numbers.


----------



## cg4200

So I had said I would not pay over $999 for a 2080 Ti, when I got an email from EVGA that the EVGA GeForce RTX 2080 Ti BLACK EDITION GAMING, 11G-P4-2281-KR, 11GB GDDR6, was in stock. Merry X-mas to myself; it will go with the new 43" Wasabi 120Hz I ordered.
I had read it is a 300 (non-A) chip, so it likely won't OC well.
Anyone here rocking one?
I am going to order an EK block for it.
Does this card have the 130% power limit update?
Can you flash an A-chip EVGA XC bios onto a non-A chip?
If you can't flash, how about the old shunt mod; does it work on RTX cards the same as in the 1080 Ti write-up?
Not much info out there due to it never being in stock; thanks for any info.


----------



## VPII

Hi there people... I'd like to understand how to go about setting my vGPU to undervolt to 0.981V, which is the max needed for a 2025 MHz core. I tried various things, including setting the curve in MSI Afterburner to run flat from 2025 onwards, but when I run 3D it will sit around 1965 on the core instead of 2025. My reason for asking is that I'm running on air, and if I can keep the vGPU from going higher than 0.981V I can reduce my TDP limit by 15 to 20%, which basically means my GPU will be around 63 to 64C max instead of 78 to 80C. Any help will be much appreciated.

Besides the curve in MSI Afterburner, I also used Xtreme Tuner (as my card is a Galax card) to set the vGPU at the desired level, but it won't hold it there while I game or run a benchmark.


----------



## pewpewlazer

VPII said:


> Hi there people... I'd like to understand how to go about setting my vGPU to undervolt to 0.981V, which is the max needed for a 2025 MHz core. I tried various things, including setting the curve in MSI Afterburner to run flat from 2025 onwards, but when I run 3D it will sit around 1965 on the core instead of 2025. My reason for asking is that I'm running on air, and if I can keep the vGPU from going higher than 0.981V I can reduce my TDP limit by 15 to 20%, which basically means my GPU will be around 63 to 64C max instead of 78 to 80C. Any help will be much appreciated.
> 
> Besides the curve in MSI Afterburner, I also used Xtreme Tuner (as my card is a Galax card) to set the vGPU at the desired level, but it won't hold it there while I game or run a benchmark.


If your V-F curve is flat from 2025 @ 0.981v onward and it's still clocking down to 1965, then you're still hitting the power limit.


----------



## VPII

pewpewlazer said:


> If your V-F curve is flat from 2025 @ 0.981v onward and it's still clocking down to 1965, then you're still hitting the power limit.


Nope, not the power limit but the temp limit on the clocks. The RTX 2080 Ti is more sensitive to temps than to power. In all honesty that is what I've realised, which is why I want to undervolt it: to reduce power draw and, with it, heat. Regardless of where you set your power limit, if the card has a 380W max available it will end up using it. I've seen this: when I reduce the power limit, usage goes from the 85% I set to 95% or higher with no impact on the clocks; the clocks drop with temperature.

I found a way to set the clocks as I want. After running the OC scan in MSI Afterburner, I select the clock point I want on the curve with the mouse, press "L", and then apply. It will then stay at that clock regardless, and limit the voltage to what is set on the curve for that clock.


----------



## TurricanM3

My Zotac Twin has a non-A chip. You should probably correct the OP list.

Which non-A BIOS has the highest PT?


----------



## pfinch

Hello,

Do you have the same issue with OC tools (Afterburner, Precision, GPU Tweak)? Specifically, the curve voltage points at 1.05V and 1.043V jump between GPU Boost's 15 MHz steps.
Sometimes after a reboot, +155 on the core means 1.05V = 2145 MHz; after another reboot, 1.043V is 2145 MHz.
Critical: sometimes 1.043V is at 2145 and 1.05V is at 2160, which isn't stable anymore and crashes.

I tried a fixed core frequency, the OC Scanner, and a custom curve.
I'm locking the Afterburner profile, but I still get this jumping.

My card: ASUS Strix OC BO4 Edition with the P-BIOS enabled.


----------



## arrow0309

pfinch said:


> Hello,
> 
> Do you have the same issue with OC tools (Afterburner, Precision, GPU Tweak)? Specifically, the curve voltage points at 1.05V and 1.043V jump between GPU Boost's 15 MHz steps.
> Sometimes after a reboot, +155 on the core means 1.05V = 2145 MHz; after another reboot, 1.043V is 2145 MHz.
> Critical: sometimes 1.043V is at 2145 and 1.05V is at 2160, which isn't stable anymore and crashes.
> 
> I tried a fixed core frequency, the OC Scanner, and a custom curve.
> I'm locking the Afterburner profile, but I still get this jumping.
> 
> My card: ASUS Strix OC BO4 Edition with the P-BIOS enabled.


Maybe it's too much OC; try keeping it a step or two lower.
You'll still be way above average.
And there's less risk of damaging the card.


----------



## BigJon901

For the Waterforce WB owners, did anyone get RGB to work? I've tried everything, RGBFusion 2.0 detects the card and gives me all the options but nothing happens... I checked the connector on the card itself and it appears to be seated correctly. If all else fails guess I'll have to make an adapter and put it on my MB header to be controlled by Aura Sync.


----------



## J7SC

BigJon901 said:


> For the Waterforce WB owners, did anyone get RGB to work? I've tried everything, RGBFusion 2.0 detects the card and gives me all the options but nothing happens... I checked the connector on the card itself and it appears to be seated correctly. If all else fails guess I'll have to make an adapter and put it on my MB header to be controlled by Aura Sync.



Hello - Both my Waterforce TIs lit up RGB on their own in both my Win 7 and Win 10 test benches. However, after installing RGBFusion 2.0 in Win 7, the card is detected, but only for RGB on/off or a static color, and reversing or saving the setting didn't work. On the next boot, in spite of clicking 'no auto-load in Windows', the card's RGB was initially off completely, then reverted back to default... I haven't even tried to install RGBFusion on the Win 10 test bench... RGBFusion seems buggy for now. Without it, both cards do their default "NY Times Square meets Tokyo by night" light show...


----------



## krizby

J7SC said:


> Hello - Both my Waterforce TIs lit up RGB on their own in both my Win 7 and Win 10 test benches. However, after installing RGBFusion 2.0 in Win 7, the card is detected, but only for RGB on/off or a static color, and reversing or saving the setting didn't work. On the next boot, in spite of clicking 'no auto-load in Windows', the card's RGB was initially off completely, then reverted back to default... I haven't even tried to install RGBFusion on the Win 10 test bench... RGBFusion seems buggy for now. Without it, both cards do their default "NY Times Square meets Tokyo by night" light show...


RGBFusion is buggy AF; it won't allow your screen to sleep, so unless you enjoy screen burn-in, uninstall this POS asap.


----------



## J7SC

krizby said:


> RGBFusion is buggy AF; it won't allow your screen to sleep, so unless you enjoy screen burn-in, uninstall this POS asap.



Yeah, like a buggy driving through a stable full of POS' :wth: . Also, at least an earlier version was part of the recent warning about it being a security risk, along with Asus' RGB software; the usual when low-level hardware access is granted willy-nilly. The latest release may or may not have fixed that, but it still doesn't work right on the RGB in any case.

Speaking of RGB, I'm trying to go for a certain black-and-white understated look, yet the mobo is full of MSI RGB, the RAM is Trident Z RGB (mind you B-Die ) and the 2x Aorus 2080 TI WB are RGB on steroids...I know I can turn the mobo RGB off, but not sure about the Trident Z. And as to the 2080 TIs, I can see the plug on the cards and if all else fails, I just disconnect the RGB altogether.

Cooling for the CPU is done, located on the rear panel (2x 360 rads, 2x Swiftech pumps). Still need to squeeze 2x 360s + 160 + 2 more Swiftech pumps onto the front for the GPUs. Tomorrow (I hope)...


----------



## jonathan13

Hello all! I'm sure this has been asked of you all hundreds of times at this point, but I have searched for the last half hour for a step-by-step guide to using nvflash with an FE 2080 Ti and I cannot find it. I'm liquid cooled and would love to take advantage of the 380W vbios, but I also want to make sure I do it correctly and don't end up in damage control. Does anyone happen to have a link to a step-by-step guide for the 2080 Ti FE? I've already gotten the patched version of NVFlash as well as the 380W ROM. Thanks in advance!


----------



## Zammin

krizby said:


> Very nice benchmarks dude. FYI, I found the OC Scanner curve is a little conservative; I was able to raise the curve by +30 MHz (1890 MHz @ 900 mV). Playing games at a 0.900V undervolt already reaches 270W power consumption, and my max PL is only 280W, so I leave it at that.
> 
> To test for tight frametimes I limit the AB frametime graph overlay to 0-30 ms and just move the mouse around really quickly in PUBG; with unhinged frequencies/voltages the frametimes are a mess, while undervolting gives a much better-looking frametime graph.


This definitely caught my attention. By any chance would you happen to know why undervolting improved your frametime consistency? I have no idea how to determine whether frametime spikes are related to the game or hardware settings.


----------



## krizby

Zammin said:


> This definitely caught my attention. By any chance would you happen to know why undervolting improved your frametime consistency? I have no idea how to determine whether frametime spikes are related to the game or hardware settings.


So think of frametime as the time between one frame and the next. Ideally, 100 fps equates to a steady 10 ms between frames. In reality it goes more like 9-11-11-9-9-11-11 ms, and this is made worse by boost clocks going up and down, which gives even bigger variation in frametimes. That can throw off your aim in a competitive FPS environment (especially when your opponent's head is only a couple of pixels wide).
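To put numbers on that: the average frametime gives you the fps figure, but the frame-to-frame spread is what you actually feel. A quick sketch using the example pacing above:

```python
# Average vs. jitter for the 9-11-11-9-9-11-11 ms example above.
from statistics import mean, stdev

frametimes_ms = [9, 11, 11, 9, 9, 11, 11]

avg_ms = mean(frametimes_ms)       # ~10.1 ms, i.e. ~99 fps on average
jitter_ms = stdev(frametimes_ms)   # ~1.07 ms frame-to-frame spread
print(f"{1000 / avg_ms:.0f} fps avg, jitter ±{jitter_ms:.2f} ms")
```

Two setups can show the same average fps counter while one has several times the jitter; that spread, not the average, is what undervolting for steady clocks improves.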


----------



## Zammin

krizby said:


> So think of frametime as the time between one frame and the next. Ideally, 100 fps equates to a steady 10 ms between frames. In reality it goes more like 9-11-11-9-9-11-11 ms, and this is made worse by boost clocks going up and down, which gives even bigger variation in frametimes. That can throw off your aim in a competitive FPS environment (especially when your opponent's head is only a couple of pixels wide).


Oh, I know what frametimes are; I was just wondering why undervolting helps keep them consistent. If it's due to fluctuating clock speeds, then would a watercooled card that holds a steady low temp (and maybe the 380W BIOS to help prevent power throttling) help in the same way, by preventing the constant boost-clock fluctuation we see from power and temp throttling?

I get little spikes in Black Ops 4 here and there on two separate 2080 Ti FEs, although I have always assumed that was just the game.


----------



## krizby

Zammin said:


> Oh, I know what frametimes are; I was just wondering why undervolting helps keep them consistent. If it's due to fluctuating clock speeds, then would a watercooled card that holds a steady low temp (and maybe the 380W BIOS to help prevent power throttling) help in the same way, by preventing the constant boost-clock fluctuation we see from power and temp throttling?
> 
> I get little spikes in Black Ops 4 here and there on two separate 2080 Ti FEs, although I have always assumed that was just the game.


Well, you only lose like 45-60 MHz when your card reaches 80C, but the clock can fluctuate aggressively (in the 100-200 MHz range) when hitting your PL (and hitting the PL is extremely easy on the 2080 Ti, even with the 380W bios). I had my 2080 Ti under water with high fps and all, but somehow games didn't feel smooth until I undervolted. Maybe you could try undervolting and report your experience; probably easy to tell when your aim gets better 🙂.


----------



## VPII

Are any of you having random blue screens or crashes to desktop with Shadow of the Tomb Raider? I've played probably 35 to 40% of the way through the game with no issues, and now all of a sudden I get these random issues. I even closed MSI AB, as some people said its monitoring messes with the game; this helped a little, but not much. I also tried using Xtreme Tuner and it worked, but after a while I still got these random blue screens or crashes to desktop. Any advice will be greatly appreciated.


----------



## mistershan

Hey guys. I recently upgraded from 2x 1080s in SLI (Asus Strix) to an EVGA XC 2080 Ti... I am getting a pretty good performance bump, almost double in some games, but the card is SOO loud. The fans sound like a jet or something. It's also not steady... it keeps speeding up super loud, then slowing down, then speeding right back up, over and over again... The temp gets to about 80 degrees... I don't understand, because even with 2x 1080s I barely heard them. My girlfriend is complaining about how loud my computer is and acts like the thing will blow. This is the first time I've ever gotten a complaint about my computer... Are the rest of my parts too slow, putting too much pressure on the card? Or is it a bad card? I don't have a water block, but I am not doing any overclocking at all and don't plan to.

i7 5820k 
Evga XC 2080ti 
64gb Crucial DDR 4 2400 DIMM 
Asus Rampage V Extreme ATX2011E 
Crucial SSD 960 gb M500 
Corsair AX1200I Digital ATX PSU


----------



## krizby

mistershan said:


> Hey guys. I recently upgraded from 2x 1080s in SLI (Asus Strix) to an EVGA XC 2080 Ti... I am getting a pretty good performance bump, almost double in some games, but the card is SOO loud. The fans sound like a jet or something. It's also not steady... it keeps speeding up super loud, then slowing down, then speeding right back up, over and over again... The temp gets to about 80 degrees... I don't understand, because even with 2x 1080s I barely heard them. My girlfriend is complaining about how loud my computer is and acts like the thing will blow. This is the first time I've ever gotten a complaint about my computer... Are the rest of my parts too slow, putting too much pressure on the card? Or is it a bad card? I don't have a water block, but I am not doing any overclocking at all and don't plan to.
> 
> i7 5820k
> Evga XC 2080ti
> 64gb Crucial DDR 4 2400 DIMM
> Asus Rampage V Extreme ATX2011E
> Crucial SSD 960 gb M500
> Corsair AX1200I Digital ATX PSU


Uhhh... yeahh... a 1080's TDP is like 180W while the 2080 Ti XC is 260W, so 360W vs 520W of heat, almost 1.5x as much. Also, the Strix is known for its silent operation, while the EVGA XC is a dual-fan card that is pretty loud by itself, let alone two of them in SLI.


----------



## BigJon901

J7SC said:


> Hello - Both my Waterforce TIs lit up RGB on their own in both my Win 7 and Win 10 test benches. However, after installing RGBFusion 2.0 in Win 7, the card is detected, but only for RGB on/off or a static color, and reversing or saving the setting didn't work. On the next boot, in spite of clicking 'no auto-load in Windows', the card's RGB was initially off completely, then reverted back to default... I haven't even tried to install RGBFusion on the Win 10 test bench... RGBFusion seems buggy for now. Without it, both cards do their default "NY Times Square meets Tokyo by night" light show...


In that case I guess the RGB on my card is defective... this sucks. I didn't get anything to light up, but the card works fine. Don't think I'll drain my loop for an RMA.


----------



## BigJon901

BigJon901 said:


> In that case I guess the RGB on my card is defective... this sucks. I didn't get anything to light up, but the card works fine. Don't think I'll drain my loop for an RMA.


Uninstalled RGB Fusion; the card did a light show when booting into Windows, then I looked over and the lights were out again... At this point I'm done with this card... should have just gotten a Founders and an RGB block.


----------



## J7SC

BigJon901 said:


> Uninstalled RGB Fusion; the card did a light show when booting into Windows, then I looked over and the lights were out again... At this point I'm done with this card... should have just gotten a Founders and an RGB block.



I don't think it is the card, but remnants of the buggy RGB Fusion, even after 'uninstall'. I had similar strange things happen... lights were fine when booting, but a few seconds into Windows 7 it tried to revert to a single color, then off, then on again to normal a minute or so later. This did not happen before I installed RGB Fusion. I finally got it working by editing the Win 7 registry. In Win 10, which never had RGB Fusion installed, everything works fine on auto (the regular default light show).


----------



## mistershan

krizby said:


> Uhhh... yeahh... a 1080's TDP is like 180W while the 2080 Ti XC is 260W, so 360W vs 520W of heat, almost 1.5x as much. Also, the Strix is known for its silent operation, while the EVGA XC is a dual-fan card that is pretty loud by itself, let alone two of them in SLI.


Huh? No, I don't have 2x 2080 Tis. I went from 2x 1080s in SLI to just a single 2080 Ti; I am done with SLI. So that's 360W vs 260W... why is 260W sounding like a jet engine? And if I had gotten the 2080 Ti Strix instead, would it be less loud?

Why is this EVGA XC card so loud? I thought EVGA was a good company... Could there be something else wrong? Are my parts too old? I could still return this card. Maybe I should just wait until after the new chips come out and do a full rebuild, so I have more modern parts to match the 2080 Ti. Everything else in my machine is from 2014.


i7 5820k 
Evga XC 2080ti 
64gb Crucial DDR 4 2400 DIMM 
Asus Rampage V Extreme ATX2011E 
Crucial SSD 960 gb M500 
Corsair AX1200I Digital ATX PSU


----------



## pewpewlazer

mistershan said:


> Huh? No, I don't have 2x 2080 Tis. I went from 2x 1080s in SLI to just a single 2080 Ti; I am done with SLI. So that's 360W vs 260W... why is 260W sounding like a jet engine? And if I had gotten the 2080 Ti Strix instead, would it be less loud?
> 
> Why is this EVGA XC card so loud? I thought EVGA was a good company... Could there be something else wrong? Are my parts too old? I could still return this card. Maybe I should just wait until after the new chips come out and do a full rebuild, so I have more modern parts to match the 2080 Ti. Everything else in my machine is from 2014.
> 
> 
> i7 5820k
> Evga XC 2080ti
> 64gb Crucial DDR 4 2400 DIMM
> Asus Rampage V Extreme ATX2011E
> Crucial SSD 960 gb M500
> Corsair AX1200I Digital ATX PSU


Yes the 2080 Ti STRIX would probably be a lot quieter, seeing they went as far as to put a "quiet mode" BIOS switch on the card. It's the only air cooled 2080 Ti that shows much of a difference in noise in reviews from what I've seen.

The difference between "360W" and 260W is that your old setup was actually 180W x2 (assuming the strix cards didn't come with a higher power limit than stock). Each cooler was not only bigger than the one on the 2080 Ti XC, but it had less heat load to deal with as well.

What case are you using?
How is your case airflow?
What % speed are the GPU fans running at when gaming?
Which EVGA 2080 Ti XC model is it? They make four with XC in the model name.

I have the 2080 Ti XC ULTRA which has the "2.75 slot" cooler instead of the normal 2 slot cooler on the XC black and XC gaming or whatever. The larger cooler isn't the LOUDEST thing in the world at stock power limits (60-65% fan speed), but definitely annoyingly noticeable over the Corsair ML120/140 case & rad fans I have spinning at 1000-1200 RPM while gaming. I've left the power limit at 100% until I get a water block for it because at 130% PL the fans will spin up into the 70%s, where they not only seem absurdly louder, but they also do an annoying ramp up/down/up/down behavior of some sorts where the tone is constantly changing. I can only imagine what the 2 slot version of this junk sounds like.

EVGA is a good company. There's probably nothing else wrong, and the fact that your other hardware is "old" has nothing to do with it either. 260W is just a ton of heat to dissipate in a 2 or "2.75" PCIe-slot area with an air cooler. Even the triple-fan cards with humongous coolers that extend past the end of the PCB don't run much quieter than a Founders Edition card.


----------



## iamjanco

mistershan said:


> Hey guys. I recently upgrades from 2 x 1080s SLI Asus Strix to a an EVGA XC 2080ti...I am getting a pretty good performance bump, almost double in some games, but the card is SOO loud. The fans sound like a jet or something. It's also not a steady stream...it keeps speeding up super loud and then getting lower and then speeds right back up. Over and over again...The temp gets to about 80 degrees... I don't understand because even with 2 x 1080s I barely heard them. My girlfriend is complaining at how loud my computer is and acts like the thing will blow. This is the first time I ever got a complaint about my computer...Are the rest of my parts too slow and it's putting too much pressure on the card? Or is it a bad card?.. .I don't have water block but I am not doing any over clocking at all and don't plan to.


It's not a bad card (I've got the same one), it's just the low end cooling that comes with it. Water cooling the card would fix that but comes with the added expense of new blocks, etc., and the work of maintaining a wc loop. That's more manageable of course if you're already using water cooling.

I just ordered a *kit from Aquacomputer* for my card, but it was pricey. I figure given the track record of their card blocks coupled with active backplates that the setup will cool better than anything other than LN2 (or wc through a chiller). I also plan on doing flow and pressure drop testing on the card, as well as testing the thermals, something similar to what VSG at ThermalBench *did to an older set of blocks*. @Shoggy


----------



## mistershan

pewpewlazer said:


> Yes the 2080 Ti STRIX would probably be a lot quieter, seeing they went as far as to put a "quiet mode" BIOS switch on the card. It's the only air cooled 2080 Ti that shows much of a difference in noise in reviews from what I've seen.
> 
> The difference between "360W" and 260W is that your old setup was actually 180W x2 (assuming the strix cards didn't come with a higher power limit than stock). Each cooler was not only bigger than the one on the 2080 Ti XC, but it had less heat load to deal with as well.
> 
> What case are you using?
> How is your case airflow?
> What % speed are the GPU fans running at when gaming?
> Which EVGA 2080 Ti XC model is it? They make four with XC in the model name.
> 
> I have the 2080 Ti XC ULTRA which has the "2.75 slot" cooler instead of the normal 2 slot cooler on the XC black and XC gaming or whatever. The larger cooler isn't the LOUDEST thing in the world at stock power limits (60-65% fan speed), but definitely annoyingly noticeable over the Corsair ML120/140 case & rad fans I have spinning at 1000-1200 RPM while gaming. I've left the power limit at 100% until I get a water block for it because at 130% PL the fans will spin up into the 70%s, where they not only seem absurdly louder, but they also do an annoying ramp up/down/up/down behavior of some sorts where the tone is constantly changing. I can only imagine what the 2 slot version of this junk sounds like.
> 
> EVGA is a good company. There's probably nothing else wrong, and the fact that your other hardware is "old" has nothing to do with it either. 260W is just a ton of heat to dissipate in a 2 or "2.75" PCIe-slot area with an air cooler. Even the triple-fan cards with humongous coolers that extend past the end of the PCB don't run much quieter than a Founders Edition card.


I am not sure which XC it is, but it's definitely not the Ultra. I guess I should have held out longer for a better card instead of settling for this one; 50 bucks more for better cooling is a good deal. The Strix would be even better... Is the 2080 Ti drawing more power than the 2x 1080s? Will that spike my power bill?

My case is the Cooler Master Storm Trooper, the big white Star Wars-looking case. I am not sure what percentage the fans on the GPU are running at. How do I check all that? What app is best for monitoring it?

What is a dangerous temp for a video card? I only worry because I heard of a card that caught fire, but I am sure that was rare... Do you guys all have fire extinguishers in your house, since you deal with these hot-rod-esque machines?


----------



## mistershan

iamjanco said:


> It's not a bad card (I've got the same one), it's just the low end cooling that comes with it. Water cooling the card would fix that but comes with the added expense of new blocks, etc., and the work of maintaining a wc loop. That's more manageable of course if you're already using water cooling.
> 
> I just ordered a *kit from Aquacomputer* for my card, but it was pricey. I figure given the track record of their card blocks coupled with active backplates that the setup will cool better than anything other than LN2 (or wc through a chiller). I also plan on doing flow and pressure drop testing on the card, as well as testing the thermals, something similar to what VSG at ThermalBench *did to an older set of blocks*. @Shoggy


So the cooling in the Strix would be better? I'd rather get something native to the card than mess with water blocks. I'm bad at taking things apart and wouldn't want to cause more problems.


----------



## iamjanco

mistershan said:


> So the cooling in the Strix would be better? I'd rather get something native to the card than mess with water blocks. I'm bad at taking things apart and wouldn't want to cause more problems.


From what I know of the Strix, I'd say it's got a higher quality cooling solution, but that would be true of most cards compared to the XC. The Strix also has higher stock clocks. I don't own a Strix though, and the reason I went with EVGA has more to do with their aftermarket support than anything else; meaning I don't have to worry about being refused service (should I need it) because I water-cooled the card.


----------



## cg4200

I was wondering what program I would use to open an EVGA BIOS ROM?
I have an EVGA non-A chip and am reading you cannot cross-flash with an A-chip BIOS.
I saw a post about a device ID mismatch: 1E04 is the non-A chip and 1E07 is the A chip, so I was thinking of changing the 1E07 in the EVGA BIOS ROM to match my 1E04 chip so the flash would work. My card will be here Wednesday; I'm just trying to get a power limit of 130% or so, and was going to throw a water block on it. Any thoughts or ideas? Thanks
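For what it's worth, if you just want to check which device ID a ROM dump actually carries before attempting anything, the ID lives in the standard PCI Data Structure ("PCIR") of the expansion ROM. A minimal Python sketch (the filename is hypothetical; also note that NVIDIA vBIOSes have been signed for several generations, so nvflash will likely reject a hex-edited ROM regardless):

```python
# Sketch: read the PCI device ID out of a vBIOS ROM dump.
# Layout per the PCI Firmware Specification: the expansion ROM starts with the
# 0x55 0xAA signature, and the 16-bit offset at 0x18 points to the "PCIR" data
# structure, which holds the vendor ID at +4 and device ID at +6 (little-endian).
import struct

def read_device_id(rom: bytes) -> int:
    assert rom[0:2] == b"\x55\xaa", "not a PCI expansion ROM"
    pcir_off = struct.unpack_from("<H", rom, 0x18)[0]
    assert rom[pcir_off:pcir_off + 4] == b"PCIR", "PCIR structure not found"
    vendor_id, device_id = struct.unpack_from("<HH", rom, pcir_off + 4)
    # vendor_id should be 0x10DE for NVIDIA
    return device_id  # e.g. 0x1E04 (non-A chip) vs 0x1E07 (300A chip)

# with open("evga_2080ti.rom", "rb") as f:   # hypothetical filename
#     print(hex(read_device_id(f.read())))
```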


----------



## J7SC

iamjanco said:


> It's not a bad card (I've got the same one), it's just the low end cooling that comes with it. Water cooling the card would fix that but comes with the added expense of new blocks, etc., and the work of maintaining a wc loop. That's more manageable of course if you're already using water cooling.
> 
> I just ordered a *kit from Aquacomputer* for my card, but it was pricey. I figure given the track record of their card blocks coupled with active backplates that the setup will cool better than anything other than LN2 (or wc through a chiller)
> (cut)



That kit from Aquacomputer looks really nice :thumb: (but: RGB ?  ). The block looks like it makes good contact with the lower VRAM, which is important for RTX / GDDR6, I gather. Re. the flow and pressure drop tests, please publish the results when you have them, as I'd find them very useful for complex custom loops.


----------



## iamjanco

J7SC said:


> That kit from Aquacomputer looks really nice :thumb: (but: RGB ?  ). The block looks like it makes good contact with the lower VRAM which is important for RTX / GDDR6 I gather. Re. flow test and pressure drops, please publish the results when you have them as I find it very useful for complex custom loops.


Yeah, I'm aware of the RGB option, but that's easily resolved by leaving it disconnected. As for the tests, will do.


----------



## J7SC

iamjanco said:


> Yeah, I'm aware of the RGB option, but that's easily resolved by leaving it disconnected. As for the tests, will do.



:thumb: Thanks. I'm looking for some sort of baseline I can compare against my builds, which run the two GPUs (almost 800W of heat combined) in a hybrid of serial and parallel.


----------



## Deathscythes

Hello, 

So for the sake of it I decided to put liquid metal on the shunts of my 2080 Ti Strix. Surprisingly the card didn't go into protection mode, but it's now completely unstable even at stock... any idea what could cause that?
Also, trying the OC Scanner before the mod I was getting a message saying "dominant limiters: power". Now I get "dominant limiters: no load".
I have no clue what's going on; I would really appreciate it if someone could enlighten me =)


----------



## Jspinks020

We're still not where we should be with GPUs and pricing either....


----------



## kot0005

Can the strix be flashed with the 380W bios ?


----------



## ESRCJ

kot0005 said:


> Can the strix be flashed with the 380W bios ?


It can, but you lose two DisplayPorts, and the BIOS doesn't seem to mesh all that well with the card in general. The Strix needs a BIOS designed for its custom PCB with a higher power limit. Shunt modding is the best bet, although I don't know of anyone who has successfully done that with the Strix. 

On a different note, has anyone tried any other BIOS on a reference card that provides 400W or more? I wouldn't mind working my way up a few more spots on the 3DMark Time Spy HOF. I'm definitely hitting the power limit in Graphics Test 2.


----------



## pfinch

arrow0309 said:


> Maybe it's too much oc, try to keep it a step or two lower.
> You'll still be way higher than average.
> And less risking of damaging the card.


Sadly not. I get this voltage-point jumping even at default clocks.
[email protected]
sometimes 1980 @ 1.043 and 1.05

I don't get it...


----------



## pewpewlazer

pfinch said:


> sadly not. I got this voltage-point-jumping even at default clocks.
> [email protected]
> sometimes 1980 @ 1.043 and 1.05
> 
> I don't get it...


It's temperature based. Every 5 or 10°C temperature increase, it will shift the "base" curve down a bit (the curve your +xxx MHz offsets are applied to). Right now, if I load my custom curve profile at idle, it doesn't shift around too much, and the clock speed at the 1.125V+ range (yes, I know it will never get there, but that's not the point) is 2100MHz. If I load up Heaven or some other 3D application to get the core temp up to its normal gaming temp of ~70°C, then reload Afterburner and open the curve back up, the 1.125V+ range max clock will now be 2055MHz. It will shift the entire curve downwards at higher temps, but it seems to shift the higher voltage ranges more aggressively.

Also, the "flat spots" on the curve, where it has the same clock speed at multiple voltages, vary with temperature as well. For example, right now my curve has 1815MHz @ 0.812V, 0.818V, and 0.825V. At a different temperature, it might only have 1815MHz at two of those points, and one might get bumped up to 1830MHz or kicked down to 1800MHz.

GPU Boost is just a real joy /sarcasm
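The shifting described above can be pictured with a toy model. All the constants here are my own guesses for illustration; NVIDIA doesn't document the actual GPU Boost algorithm:

```python
# Rough mental model of GPU Boost temperature compensation -- numbers are
# illustrative assumptions only. Each ~10 degC above a threshold drops the
# effective clock at a given voltage by one 15 MHz bin, with high-voltage
# points shifted a bit more aggressively.
TEMP_STEP_C = 10       # assumed temperature bin size
BIN_MHZ = 15           # Turing clock granularity (multiples of 15 MHz)

def effective_clock(base_mhz: int, voltage: float, temp_c: float,
                    threshold_c: float = 40.0) -> float:
    steps = max(0, int((temp_c - threshold_c) // TEMP_STEP_C))
    # Assumption: points at >= 1.0V lose one extra bin every second step.
    extra = steps // 2 if voltage >= 1.0 else 0
    return float(base_mhz - (steps + extra) * BIN_MHZ)

print(effective_clock(2100, 1.125, 70))   # 2040.0 with these assumed constants
print(effective_clock(2100, 1.125, 35))   # below threshold: unshifted, 2100.0
```

With these made-up constants a curve point that reads 2100MHz at idle lands in the mid-2000s once the core warms to ~70°C, which is roughly the behavior reported above.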


----------



## jura11

pfinch said:


> sadly not. I got this voltage-point-jumping even at default clocks.
> [email protected]
> sometimes 1980 @ 1.043 and 1.05
> 
> I don't get it...


Hi there 

Yes, on stock clocks with the stock BIOS my clocks and voltage points would jump around like crazy. The best approach is to apply a +100MHz OC on the core and a modest +500MHz on VRAM to start, check the voltage curve with Ctrl+F, adjust the voltage to your liking (like 1.07 or 1.08V), bump in a few more MHz there, and that should keep the clocks/voltage stable. 

But I would still have a look at the Galax 380W BIOS; I use it on mine, and the clocks and voltage finally stopped jumping under load.

Hope this helps 

Thanks, Jura


----------



## jura11

MrTOOSHORT said:


> Jura, 2080ti run hotter than last gen. Looks about right on your ek card.


Hi there 

Thanks, I have been a bit worried or paranoid about the temperatures, because I personally expected temperatures similar to last gen (the GTX 10xx series). 

Temperatures are higher by 5-7°C over the last gen.

I will test the Aquacomputer RTX block and the Phanteks one later in January and see if I can get slightly better temperatures. 

Hope this helps 

Thanks, Jura


----------



## jura11

Hi guys

Can you please test your OC settings in the Octane benchmark? V3 will not work with RTX, but V4 does. 

It's a great benchmark for testing clock stability; I find clock instability much sooner there than in Time Spy, Fire Strike, or gaming. My clocks that are stable in games and 3DMark benchmarks are not stable in Octane or DAZ Studio Iray rendering.

My best OC for gaming is 2100-2113MHz with +850MHz or +1000MHz on VRAM, but in Octane or DAZ Studio Iray I need to run 2075-2088MHz and +650MHz on VRAM to be 100% stable.

The strangest thing is the temperatures: in rendering, the other 3 GPUs are in the low 30s and the RTX runs at 38-42°C. 

Hope this helps 

Thanks, Jura


----------



## UdoG

...


----------



## EarlZ

I am getting a random 1-2 second pause in Monster Hunter World and Battlefield V; the audio continues during the pause, and it happens very randomly. I've used DDU and updated to the latest drivers and this still happens. I have both games on an SSD and I am running a 9700K with a Gigabyte 2080 Ti Gaming OC. Is this related to the GPU, or is something else causing this?


----------



## CptSpig

EarlZ said:


> I am getting a random 1-2 second pause in Monster Hunter World and Battlefield V; the audio continues during the pause, and it happens very randomly. I've used DDU and updated to the latest drivers and this still happens. I have both games on an SSD and I am running a 9700K with a Gigabyte 2080 Ti Gaming OC. Is this related to the GPU, or is something else causing this?


It's not the GPU. I have a Titan Xp doing the same thing in Battlefield V. It's probably a driver or settings issue. Try setting it to borderless windowed in the video settings.


----------



## gijs007

Is there anything that can be done about power throttling? GPU-Z always reports PerfCap Pwr.
I've already flashed the "GALAX RTX 2080 Ti OC 380 watt" BIOS on my EVGA 2080 Ti, yet my card still throttles on power.

Regarding overclocking tooling:
I've noticed that I can't change the GPU voltage in MSI afterburner, is this normal?
I can't use the EVGA Precision X1 software, it makes my primary monitor freeze. I'm wondering if it's a bug, is anyone else experiencing this issue?


----------



## CptSpig

gijs007 said:


> Is there anything that can be done about power throttling? GPU-Z always reports perfcap Pwr.
> I've already got the "GALAX RTX 2080 Ti OC 380 watt" BIOS flashed on my EVGA 2080 Ti. Yet my card still throttles on power..
> 
> Regarding overclocking tooling:
> I've noticed that I can't change the GPU voltage in MSI afterburner, is this normal?
> I can't use the EVGA Precision X1 software, it makes my primary monitor freeze. I'm wondering if it's a bug, is anyone else experiencing this issue?


Are you sure it's not temperature throttling? The colder the core, the higher the clocks. 
Go here to unlock the voltage slider in Afterburner: https://rog.asus.com/forum/showthre...-quot-voltage-control-quot-in-MSI-Afterburner
Go here to download the Galax Xtreme Tuner for memory offsets past +1000: http://www.galax.com/en/xtreme-tuner-plus-rtx


----------



## MrTOOSHORT

MSI AB 4.6.0 beta 10 unlocks memory up to +1500 now.


----------



## CptSpig

MrTOOSHORT said:


> MSI AB 4.6.0 beta 10 unlocks memory up to +1500 now.


That's awesome! Thanks


----------



## pewpewlazer

gijs007 said:


> Is there anything that can be done about power throttling? GPU-Z always reports perfcap Pwr.
> I've already got the "GALAX RTX 2080 Ti OC 380 watt" BIOS flashed on my EVGA 2080 Ti. Yet my card still throttles on power..
> 
> Regarding overclocking tooling:
> I've noticed that I can't change the GPU voltage in MSI afterburner, is this normal?
> I can't use the EVGA Precision X1 software, it makes my primary monitor freeze. I'm wondering if it's a bug, is anyone else experiencing this issue?


GPU boost is pretty much going to ramp the clocks/voltage as high as it possibly can until it hits the power limit. Unless you're running an ancient game that puts minimal load on the GPU, or do something like run the 380 watt BIOS then cap your max voltage at 0.85V or something super low, you can expect the card to almost always be bouncing off the power limit in games.


----------



## EarlZ

CptSpig said:


> It's not the gpu. I have a Titan Xp doing the same thing in Battlefield 5. Probably driver or setting issues. Try setting to windowed no boarder in the video settings.


I've already tried that and it's the same issue. The GPU is running at about 67-68°C and the CPU around 58°C.
I am not sure what else to try.


----------



## gijs007

pewpewlazer said:


> GPU boost is pretty much going to ramp the clocks/voltage as high as it possibly can until it hits the power limit. Unless you're running an ancient game that puts minimal load on the GPU, or do something like run the 380 watt BIOS then cap your max voltage at 0.85V or something super low, you can expect the card to almost always be bouncing off the power limit in games.


Already using the 380 watt BIOS.
How do I cap the max voltage/undervolt the GPU?



CptSpig said:


> You sure it's not temperature Throttling? The colder the core the higher the clocks.
> Go here to unlock the voltage slider in afterburner: https://rog.asus.com/forum/showthre...-quot-voltage-control-quot-in-MSI-Afterburner
> Go here to download the Galax Xtreme Tuner for memory past 1000: http://www.galax.com/en/xtreme-tuner-plus-rtx


It's running near 80°C; the temp limit is set to 88. It's not temp throttling as far as I can tell.
I'm considering getting an EVGA Hybrid kit once they're available, since I've noticed that the GPU becomes less stable somewhere between 75-82°C.

The voltage slider is unlocked, but it doesn't appear to have any effect, not even after making the manual changes in the profile file.

As for the memory overclock, I don't think +1500MHz or anything near it is doable with my card.
I'd been using +860 for a while in games, but had occasional crashes.
I've now set it to +600; if I set it to +650 or above, the OC Scanner test reports a significantly lower confidence level.


----------



## CptSpig

gijs007 said:


> Already using the 380 watt BIOS.
> How do I cap the max voltage/undervolt the GPU?
> 
> 
> 
> It's running near 80c, temp limit is set to 88. It's not temp throttling as far as I can tell.
> I'm considering getting an EVGA Hybrid kit once they're available. Since I've noticed that the GPU becomes less stable somewhere between 75-82c
> 
> As for the memory overclock, I don't think +1500MHz or anything near it is doable with my card.
> Been using +860 for a while in games, but had occasional crashes.
> I've now set it +600, if I set tot to +650 or above the OC scanner test reports a significant lower confidence level.


1. Open Afterburner.
2. Set the core slider to any value that's a multiple of 15 for Turing (13 for Pascal), e.g. 150, 165, etc., and apply.
3. Put your mouse in the graph window and hit Ctrl+F.
4. Select the mV point and hit Ctrl+L.
5. Apply.
6. Save to a slot for use when needed.


----------



## MrTOOSHORT

Hi CptSpig, it's multiples of 15 now for Turing. Everything else is pretty much the same as Pascal.


----------



## CptSpig

MrTOOSHORT said:


> Hi CptSpig, it's multiples of 15 now for Turing. Everything else is pretty much the same as Pascal.


Thanks! Card's in the mail, can't wait to put it through its paces.


----------



## gijs007

CptSpig said:


> 1. open Afterburner
> 2. set the core slider to any value (multiple of 15 for turing and 13 for Pascal) so Turing 144, 150.. etc. apply
> 3. put your mouse in the graph window, hit cntrl-F
> 4. select the mV point and hit cntrl-L
> 5. Apply
> 6. Save to slot for use when needed.


Thanks. 

It seems that at 0.95V it doesn't power throttle; anything above that does (at least during 3DMark). The only downside is that the clock speed is a bit lower compared to not locking it in, but at least the frames are not as jittery.
I did notice the clocks go down by about 30MHz shortly after generating load, even though they are "locked" and no throttling is reported. By the looks of it, this happens when the temperature exceeds a threshold somewhere between 60-65°C.

Now I just need a way to reduce the multi-monitor idle power consumption, as it's currently at 75 watts with two 4K monitors (one at 60Hz, the other at 97Hz).


----------



## CptSpig

gijs007 said:


> Thanks.
> 
> It seems that at 0.95v it doesn't power throttle, anything above that does (at least during 3D mark). The only downside is that the clock speed is a bit lower compared to not locking it in, but at least the frames are not as jittery.
> I did notice the clocks go down by about 30MHz shortly after generating load, even though they are "locked" and no throttling is reported. By the looks of it this happens when the temperature exceeds a threshold, somewhere between 60-65c.
> 
> Now I just need a way to reduce the multi monitor idle power consumption, as it's currently at 75 watt with two 4K monitors (one at 60 Hz the other at 97Hz)..


These cards like to run cold, like all the recent cards. My Titan Xp never goes above 35°C, even when gaming. I have a really good loop and an open bench.


----------



## UdoG

@owners of SLI setups 
Could you please tell me your setup (motherboard/CPU/RAM/SSD)?

Because my CPU seems to be too slow for my SLI setup (RTX 2080 Ti, X299 Dark, 7900X).

Thanks!


----------



## EarlZ

@CptSpig

Turned off Windows Defender and Ansel, still getting the same issue.


----------



## pewpewlazer

gijs007 said:


> Thanks.
> 
> It seems that at 0.95v it doesn't power throttle, anything above that does (at least during 3D mark). The only downside is that the clock speed is a bit lower compared to not locking it in, but at least the frames are not as jittery.
> I did notice the clocks go down by about 30MHz shortly after generating load, even though they are "locked" and no throttling is reported. By the looks of it this happens when the temperature exceeds a threshold, somewhere between 60-65c.
> 
> Now I just need a way to reduce the multi monitor idle power consumption, as it's currently at 75 watt with two 4K monitors (one at 60 Hz the other at 97Hz)..


75 watts before or after locking the voltage @ 0.95V? One thing to keep in mind, when you lock the voltage on the V-F curve in afterburner and apply the clocks, the card will sit at that locked voltage and associated clock. So on my 1950mhz @ 0.9v profile, it sits at 1950mhz 0.9v on the desktop. When I'm not gaming, I just reset to default in afterburner, so that my core drops down to 1350 @ 0.712v.

EDIT: Just opened up GPU-Z since I remembered that reads power in watts. Idle with a dual monitor setup (1440p 165hz and a 4k 60hz) is 75 watts. With my locked voltage profile it shoots up to 100 watts. Yikes.


----------



## Zammin

gijs007 said:


> Thanks.
> 
> It seems that at 0.95v it doesn't power throttle, anything above that does (at least during 3D mark). The only downside is that the clock speed is a bit lower compared to not locking it in, but at least the frames are not as jittery.
> I did notice the clocks go down by about 30MHz shortly after generating load, even though they are "locked" and no throttling is reported. By the looks of it this happens when the temperature exceeds a threshold, somewhere between 60-65c.
> 
> Now I just need a way to reduce the multi monitor idle power consumption, as it's currently at 75 watt with two 4K monitors (one at 60 Hz the other at 97Hz)..





pewpewlazer said:


> 75 watts before or after locking the voltage @ 0.95V? One thing to keep in mind, when you lock the voltage on the V-F curve in afterburner and apply the clocks, the card will sit at that locked voltage and associated clock. So on my 1950mhz @ 0.9v profile, it sits at 1950mhz 0.9v on the desktop. When I'm not gaming, I just reset to default in afterburner, so that my core drops down to 1350 @ 0.712v.
> 
> EDIT: Just opened up GPU-Z since I remembered that reads power in watts. Idle with a dual monitor setup (1440p 165hz and a 4k 60hz) is 75 watts. With my locked voltage profile it shoots up to 100 watts. Yikes.


This info got me interested so I gave it a try on my FE (stock BIOS, still air cooled) today. With an offset of +150 on the core and +600 on the memory I tried locking the curve at .950V which should be 1950mhz. However as gijs007 noted it would still downclock a bit. Generally around 1920mhz but sometimes 1905mhz. I noticed it was still hitting the power limit on and off during gameplay. I increased the power target to maximum and it stopped hitting the power limit, but it was still dropping 30-45mhz off the target when under load anyway for some reason.

The last thing I tried was locking it to 0.925V, which is right on 1905MHz with the above settings, and it stays there without moving and without hitting the power limit. I'm not 100% certain (as I'm only using my eyes as a test) but I'm pretty sure I'm actually getting a smoother experience with these settings compared to my usual offset overclock. Until someone mentioned it earlier in this thread, I would never have thought the normal, constant fluctuation in clock speeds that these cards experience by default would cause frametime spikes (jitter). If this truly is the case, it seems like an oversight by Nvidia; from memory, Pascal didn't behave like this (at least the cards I had), as it would just gradually step down clocks/voltage with changes in temperature until the peak temp was reached and then stay there. I don't remember my 1070, 1080 or 1080 Ti constantly changing clock speed every few seconds like my 2080 Ti does.

It's good to know that there is a way to improve upon this issue by locking a lower voltage but it's annoying that you have to switch between locked and unlocked whenever you start or finish your games. Also not being able to sustain the higher clock speeds is a real bummer as the GPU is certainly capable of 2000mhz+. I wonder if I will be able to lock a higher voltage/frequency once I've got the card on water and still see the same benefits in frametime consistency. :thinking:


----------



## krizby

Zammin said:


> This info got me interested so I gave it a try on my FE (stock BIOS, still air cooled) today. With an offset of +150 on the core and +600 on the memory I tried locking the curve at .950V which should be 1950mhz. However as gijs007 noted it would still downclock a bit. Generally around 1920mhz but sometimes 1905mhz. I noticed it was still hitting the power limit on and off during gameplay. I increased the power target to maximum and it stopped hitting the power limit, but it was still dropping 30-45mhz off the target when under load anyway for some reason.
> 
> The last thing I tried was locking it to .925V which is right on 1905mhz with the above settings and it stays there without moving and without hitting the power limit. I'm not 100% certain (as I'm only using my eyes as a test) but I'm pretty sure I'm actually getting a smoother experience with these settings compared to my usual offset overclock. Until someone mentioned it earlier in this thread I would have never thought the normal and constant fluctuation in clock speeds that these cards experience by default would cause frametime spikes (jitter). If this truly is the case, to me it seems like an oversight by Nvidia, from memory Pascal didn't behave like this (at least the cards I had) as it would just gradually step down clocks/voltage with the changes in temperature until the peak temp was reached and it would stay there. I don't remember my 1070, 1080 or 1080Ti constantly changing the clock speed every few of seconds like my 2080Ti does.
> 
> It's good to know that there is a way to improve upon this issue by locking a lower voltage but it's annoying that you have to switch between locked and unlocked whenever you start or finish your games. Also not being able to sustain the higher clock speeds is a real bummer as the GPU is certainly capable of 2000mhz+. I wonder if I will be able to lock a higher voltage/frequency once I've got the card on water and still see the same benefits in frametime consistency. :thinking:


Just make your curve look like this, then save the profile. It takes some time, but there's no need to press L every time you play games. The best settings for smooth frametimes are maximum power limit and an undervolt to 0.925V (925mV being the most efficient point on the V/F curve).

The temperature-induced throttles are likely not noticeable during gameplay, but the PL throttles are really aggressive with Turing (100-200MHz). My guess is that the load-balancing nature of the VRM design, and the 12nm process being leaky at higher voltages, cause this behavior. My 1080 Ti stays at 2088MHz/1.093V all the time, even when bouncing off the PL, and just drops to 2075MHz when temperature throttling kicks in.
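The flatten-at-925mV arithmetic can be sketched like so (the curve points are made up; as far as I know Afterburner has no scripting API for the curve, so in practice you still drag each point by hand or with the arrow keys):

```python
# Sketch of the undervolt-and-flatten arithmetic described above.
BIN_MHZ = 15  # Turing boost clock granularity

def snap(mhz: float) -> int:
    """Snap a clock to the nearest 15 MHz bin."""
    return round(mhz / BIN_MHZ) * BIN_MHZ

def flatten(curve: dict[int, int], cap_mv: int = 925) -> dict[int, int]:
    """Clamp every point at/above cap_mv to the clock at cap_mv.
    Assumes the curve has a point at exactly cap_mv."""
    target = snap(curve[cap_mv])
    return {mv: (min(snap(mhz), target) if mv >= cap_mv else snap(mhz))
            for mv, mhz in curve.items()}

# Hypothetical fragment of a V/F curve (mV -> MHz):
curve = {800: 1800, 850: 1860, 900: 1905, 925: 1935, 1000: 1995, 1050: 2040}
print(flatten(curve))
# Every point from 925 mV up now reads 1935 MHz; GPU Boost has no higher
# voltage point to climb to, so clocks (and power draw) stay pinned there.
```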


----------



## Zammin

krizby said:


> Just make your curve looks like this then save the profile, takes some time but no need to press L every time you play games. Best settings to get smooth frametimes is setting maximum power limits and undervolt to .925v (925mv being the most efficient point on the VF curve).
> 
> The temperature induced throttles are likely not noticeable during gameplay but the PL throttle are really aggressive with Turing (100-200mhz), my guess is that due the load balancing nature of the VRM design and the 12nm fab is leaky at higher voltage that cause this behavior.


Hey mate, thanks for the advice. So if I understand you correctly, I should completely flatten the curve from 0.925V onward? I did try doing that, but as soon as I click apply it moves it in a few places. Can't understand why it won't allow me to flatten it at 1905MHz/0.925V; it bumps the first half of the flat line up a tad and the second half up by about the same amount again.


----------



## Zammin

I just tried it again and it was even worse this time lol. It completely changed the entire curve. Here are some images of the original +150 curve with the lock in place, the curve after I edited it, and finally the curve after clicking apply...


----------



## Hanks552

Running 3DMark Time Spy Extreme, 15,238 graphics score: https://www.3dmark.com/spy/5621199
Idk what else I could do; I'm already running the 380W custom BIOS (no hardware mod yet).


----------



## kot0005

Okay, here is my MSI Sea Hawk EK. The build quality of the block is waaaay higher than the Gigabyte one. Not sure about temps and OC yet.

My card has Samsung memory. Will report back once I try some OC after my loop settles down.


----------



## ESRCJ

kot0005 said:


> Okay here is my msi seahawk ek. The build quality of the block is waaaay higher than the gigabyte one. Not sure about temps and oc yet.
> 
> 
> My card has samsung memory. Will report back once I try some oc after my loop settles down.


That is a gorgeous-looking card. Glad to hear it has Samsung memory.


----------



## krizby

Zammin said:


> I just tried it again and it was even worse this time lol. It completely changed the entire curve. Here are some images of the original +150 curve with the lock in place, the curve after I edited it, and finally the curve after clicking apply...


If you are on air, let your card idle and cool down below 40°C before applying the curve. Once you set the curve, don't close the V/F window; hit apply first, then save the profile. One more tip: use the OC Scanner tool to generate a V/F curve, save it as profile #1, increase the frequency at each voltage point by +15MHz or +30MHz, and cap the voltage at 925mV. You might be surprised how high the overclock offsets are at lower voltages; your card might even be stable at 1935MHz or 1950MHz at 925mV.


----------



## Zammin

kot0005 said:


> Okay here is my msi seahawk ek. The build quality of the block is waaaay higher than the gigabyte one. Not sure about temps and oc yet.
> 
> 
> My card has samsung memory. Will report back once I try some oc after my loop settles down.


Looks good man. Glad you got one with a good block (that is presumably serviceable).

I've had the opportunity to test a Samsung-equipped FE and compare it to my Micron-equipped FE back to back in my system, and the Samsung memory was able to clock a lot higher (around +900 to +1000 as opposed to +600 to +700 on my Micron memory), but it didn't translate to better performance, at least in the Heaven benchmark. I got only about 1FPS better average, which is within margin of error, so I wonder if JPMboy was right in his earlier comment about the timings being different. Or maybe Heaven just isn't very VRAM intensive, I'm not sure. Playing games I didn't see any noticeable gains. Not the most scientific testing, but that's what I found.


----------



## Zammin

krizby said:


> If you are on air then let your card idle and cool down below 40C before applying the curve. Once you set the curve don't close the VF windows, hit apply first then save profile.


I did let it cool down all the way (into the 30s) that time, and I did keep the V/F window open when I hit apply and it still changes the whole curve. I have no idea why it's doing that :/



krizby said:


> One more tip: use the OC scanner tool to give you a VF curve, save it as #1 profile, increase freq at each voltage point +15mhz or +30mhz and cap the voltage at 925mv, you might be surprised that at lower voltages the overclock offset are really high, your card might even be stable at 1935mhz or 1950mhz at 925mv.


Interesting, so in this case I would run the OC scanner, flatten the curve at 0.925V (once I can actually get it to do that without editing the whole damn curve on me lol) and then manually offset the curve by moving the points up a little at a time to get my max stable OC at 0.925V. I seem to recall there being a way to move the whole curve up and down without having to manually move each point individually. Was it holding shift or something? Can't recall exactly what it was.


----------



## kot0005

Zammin said:


> Looks good man. Glad you got one with a good block (that is presumably serviceable).
> 
> I've had the opportunity to test a Samsung equipped FE and compare to my Micron equipped FE back to back in my system and the Samsung memory was able to clock a lot higher (around +900 to 1000 as opposed to +600 to 700 on my Micron memory) but it didn't translate to better performance at least in Heaven benchmark. I got only like 1FPS better average, which is within margin of error, so I wonder if JPMboy was right in his previous comment about the timings being different. Or maybe Heaven just isn't very VRAM intensive, I'm not sure. Playing games I didn't see any noticeable gains. Not the most scientific testing but that's what I found anyway.


Probably can't; only some of the screws are removable, the rest are hidden under the steel plate. But at least you can see the microfin channels, so I'll know for sure if it's clogged or not just by looking at it. So it's a step up from the Gigabyte Waterforce.


----------



## Zammin

kot0005 said:


> Probably can't; only some of the screws are removable, the rest are hidden under the steel plate. But at least you can see the microfin channels, so I'll know for sure if it's clogged or not just by looking at it. So it's a step up from the Gigabyte Waterforce.


Yeah that is a lot better. At least you'll know if it ever needs to be flushed. The lighting is implemented nicer on the Seahawk than the regular Vector blocks IMO. Seems a bit more evenly distributed.


----------



## krizby

Zammin said:


> I did let it cool down all the way (into the 30s) that time, and I did keep the V/F window open when I hit apply and it still changes the whole curve. I have no idea why it's doing that :/
> 
> 
> 
> Interesting, so in this case I would run the OC scanner, flatten the curve at 0.925V (once I can actually get it to do that without editing the whole damn curve on me lol) and then manually offset the curve by moving the points up a little at a time to get my max stable OC at 0.925V. I seem to recall there being a way to move the whole curve up and down without having to manually move each point individually. Was it holding shift or something? Can't recall exactly what it was.


Nah, holding Shift is the same as using the offset; sadly, you have to increase each voltage point manually.


----------



## Zammin

krizby said:


> Nah, holding Shift is the same as using the offset; sadly, you have to increase each voltage point manually.


Oh, so if you make the curve and flatten it at 0.925V first, it won't retain its shape if you hold Shift and move it up or down? Or do you mean that you have to change and test each point individually?

Sorry if this is a noob question, this is actually the first time I've done any V/F curve stuff. All my previous GPU overclocking was done with the basic offset function.


----------



## kot0005

Finished adding the final touches to the block


----------



## krizby

Zammin said:


> Oh, so if you make the curve and flatten it at 0.925V first, it won't retain its shape if you hold Shift and move it up or down? Or do you mean that you have to change and test each point individually?
> 
> Sorry if this is a noob question, this is actually the first time I've done any V/F curve stuff. All my previous GPU overclocking was done with the basic offset function.


Well, there's no faster way to do the undervolt; you have to set each point after 925mV to the exact frequency using the up and down arrow keys, as being even 1MHz off can screw up the curve.
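If it helps to see the flattening as data, here's a quick Python sketch (illustrative only; Afterburner has no scripting API for the curve, so on the real card you still set each point by hand, and the example point values are made up):

```python
# Model the V/F curve as (millivolts, MHz) points. The "flatten" undervolt
# clamps every point at or above the chosen voltage to one target frequency,
# so GPU Boost can never raise the voltage to chase a higher clock bin.

def flatten_curve(curve, target_mv=925, target_mhz=1935):
    """Return the curve with all points >= target_mv clamped to target_mhz."""
    return [(mv, target_mhz if mv >= target_mv else mhz) for mv, mhz in curve]

# Example (hypothetical) points off a stock-ish curve:
points = [(900, 1905), (912, 1920), (925, 1935), (950, 1965), (1093, 2025)]
flat = flatten_curve(points, target_mv=925, target_mhz=1935)
# Every point from 925mV upward now reads 1935MHz; points below are untouched.
```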


----------



## Zammin

krizby said:


> Well, there's no faster way to do the undervolt; you have to set each point after 925mV to the exact frequency using the up and down arrow keys, as being even 1MHz off can screw up the curve.


Oh, I think I misunderstood you; I'm guessing you mean to only bump up the frequency from 0.925V onward as one straight line. I getcha. I thought you meant the whole curve haha.

I ran the OC scanner and managed to flatten the curve from 0.925V onward at 1935MHz to start with. Just starting game testing now. Looks like it's sitting at 1920MHz atm in the menus of Black Ops 4; will see if it drops further once in game. Don't know why it does that when set above 1905MHz, you'd think if it's at 0.925V it would stay at the frequency at that point of the curve (provided it's not power throttling, of course).


----------



## Zammin

krizby said:


> Well, there's no faster way to do the undervolt; you have to set each point after 925mV to the exact frequency using the up and down arrow keys, as being even 1MHz off can screw up the curve.





Zammin said:


> Oh, I think I misunderstood you; I'm guessing you mean to only bump up the frequency from 0.925V onward as one straight line. I getcha. I thought you meant the whole curve haha.
> 
> I ran the OC scanner and managed to flatten the curve from 0.925V onward at 1935MHz to start with. Just starting game testing now. Looks like it's sitting at 1920MHz atm in the menus of Black Ops 4; will see if it drops further once in game. Don't know why it does that when set above 1905MHz, you'd think if it's at 0.925V it would stay at the frequency at that point of the curve (provided it's not power throttling, of course).


Okay so with it set up this way it drops to 1890mhz. No idea why. Might have to turn on voltage monitoring and see if it's holding 0.925V.


----------



## Angrycrab

Anyone know when the XOC BIOS will be released?


----------



## krizby

Zammin said:


> Okay so with it set up this way it drops to 1890mhz. No idea why. Might have to turn on voltage monitoring and see if it's holding 0.925V.


Yeah, when the chip reaches above 40C it will start going left on the V/F curve due to the thermal limiting algorithm; you lose 15MHz for each 15C increase in temp, I reckon (not sure about the specific temperature at which the chip will downclock).
So here are 3 reasons your card will go left on the V/F curve:
_PL is reached, or predicted to be reached
_Temperature rises
_Framerate is capped

Your V/F curve looks good, just add +15MHz to every point before 925mV too, so that when your card starts to throttle the clock reductions are smoother (I did mean adjust the entire V/F curve manually; after 1.100V you can leave it as it is because GPU Boost never goes beyond there)
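As a rough model of what I mean (quick Python sketch; the 40C start point and 15MHz per 15C step are my guesses, NVIDIA doesn't publish the real breakpoints):

```python
def thermal_offset_clock(curve_mhz, temp_c, start_c=40, step_c=15, drop_mhz=15):
    """Rough model of GPU Boost's thermal offset: above ~start_c the card
    sheds drop_mhz for each further step_c of temperature. The breakpoints
    are assumptions, not published values; treat the output as a guide only."""
    if temp_c <= start_c:
        return curve_mhz
    bins = (temp_c - start_c) // step_c + 1  # first bin starts just above 40C
    return curve_mhz - bins * drop_mhz

# A point set to 1950MHz would, under this model, run 1950 at 35C,
# one 15MHz bin lower at 50C, and keep stepping down from there.
```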


----------



## UdoG

kot0005 said:


> Finished adding the final touches to the block


Looks pretty good!
I'm waiting for your temp and OC results 
Could you please also check the maximum TDP (Watt)?

Thanks!


----------



## Zammin

krizby said:


> Yeah, when the chip reaches above 40C it will start downclocking due to the thermal limiting algorithm; you lose 15MHz for each 15C increase in temp, I reckon (not sure about the specific temperature at which the chip will downclock).
> So here are 3 reasons your card will go down on the VF curve:
> _PL is reached, or predicted to be reached
> _Temperature rises
> _Monitor refresh rate is reached and Vsync on


I was able to confirm that it is indeed holding 0.925V as I wanted it to, so that's good. You may be right about it stepping down with temperature, as it does step down gradually from the point of opening the application. The part I find confusing is that I was under the impression that every X degrees of temperature increase would cause the GPU to drop to the next point down on the V/F curve. So I was thinking that if the curve is flattened out at a lower point than where it normally sits, it shouldn't step the frequency down, since the voltage point isn't changing.

Although right off the bat, as soon as I open the application (game, Black Ops 4), the clock speed is already 15MHz lower than where I set it. Even with the frequency at 0.925V set to 1965MHz, it starts off at 1950MHz when I start the app, then 1935MHz after a few minutes, then 1920MHz after a few more minutes, and a while later 1905MHz. I even tried maxing out the temp target and it didn't change anything. When I had it locked at 0.925V/1905MHz earlier today with CTRL+L it was holding 1905MHz constantly, but anything above that would eventually drop to 1905MHz. So I guess there must be some other (possibly temp-related) factor going on here.

With the initial testing out of the way I really think the GPU can do 1950MHz+ at 0.925V, I just need to work out how to get it to stay there. If it is a temperature thing, maybe it will start doing what I want once it's watercooled. I also need to work out how to get the curve to stop making self-edits in other areas when I apply my changes lol.


----------



## kot0005

UdoG said:


> Looks pretty good!
> I'm waiting for your temp and OC results
> Could you please also check the maximum TDP (Watt)?
> 
> Thanks!


330W, it's in the original post btw.

Its OC is about the same as my other cards; this one can do 2100MHz and 1k on mem with the 330W BIOS. Temps are around 4-5C lower than the Gigabyte card, getting 48-50C instead of 50-54C on the Waterforce.

So to conclude, the MSI card has much higher build quality for the block. It has a warranty sticker so I didn't disassemble it. The backplate is a lot thicker and more rigid compared to the Waterforce. One end of the waterblock isn't just hanging loose like on the Waterforce; it's actually bolted to the card and backplate.
Stainless steel front block cover instead of aluminium, the acrylic is a lot thicker, and the quality feels higher.

I can see the inside of the block, so if it clogs up I will know. The Gigabyte is fully enclosed, so you won't know that it has clogged up until your card starts hitting 80C..

The MSI card has a tiny bit of coil whine compared to the Gigabyte one, but a lot less than my XC Ultra with the reference PCB.

Gigabyte uses smaller chokes, which is probably why its whine is lower. I don't know if I can recommend this card yet because of the limited power-limit headroom, but we'll see..


----------



## GanMenglin

kot0005 said:


> 330W, it's in the original post btw.
> 
> Its OC is about the same as my other cards; this one can do 2100MHz and 1k on mem with the 330W BIOS. Temps are around 4-5C lower than the Gigabyte card, getting 48-50C instead of 50-54C on the Waterforce.
> 
> So to conclude, the MSI card has much higher build quality for the block. It has a warranty sticker so I didn't disassemble it. The backplate is a lot thicker and more rigid compared to the Waterforce. One end of the waterblock isn't just hanging loose like on the Waterforce; it's actually bolted to the card and backplate.
> Stainless steel front block cover instead of aluminium, the acrylic is a lot thicker, and the quality feels higher.
> 
> I can see the inside of the block, so if it clogs up I will know. The Gigabyte is fully enclosed, so you won't know that it has clogged up until your card starts hitting 80C..
> 
> The MSI card has a tiny bit of coil whine compared to the Gigabyte one, but a lot less than my XC Ultra with the reference PCB.
> 
> Gigabyte uses smaller chokes, which is probably why its whine is lower. I don't know if I can recommend this card yet because of the limited power-limit headroom, but we'll see..


I've used all of MSI's 20-series cards, the Ventus OC, Duke, and Seahawk EK X; all of them have coil whine. The Seahawk EK X has the least, but it's still there.

And I tried flashing it with the 380W BIOS; it works, but the power only hits around 350-360W. The problem is the core voltage.


----------



## GanMenglin

kot0005 said:


> Okay here is my msi seahawk ek. The build quality of the block is waaaay higher than the gigabyte one. Not sure about temps and oc yet.
> 
> 
> My card has samsung memory. Will report back once I try some oc after my loop settles down.


How do you check the memory chip vendor? Just with GPU-Z?

And what's your BIOS version? Mine is MSINV371MH.191


----------



## GanMenglin

GanMenglin said:


> How do you check the memory chip vendor? Just with GPU-Z?
> 
> And what's your BIOS version? Mine is MSINV371MH.191


Well, I've got the latest BIOS, version .192, from the MSI official forum.


----------



## gijs007

pewpewlazer said:


> 75 watts before or after locking the voltage @ 0.95V? One thing to keep in mind, when you lock the voltage on the V-F curve in afterburner and apply the clocks, the card will sit at that locked voltage and associated clock. So on my 1950mhz @ 0.9v profile, it sits at 1950mhz 0.9v on the desktop. When I'm not gaming, I just reset to default in afterburner, so that my core drops down to 1350 @ 0.712v.
> 
> EDIT: Just opened up GPU-Z since I remembered that reads power in watts. Idle with a dual monitor setup (1440p 165hz and a 4k 60hz) is 75 watts. With my locked voltage profile it shoots up to 100 watts. Yikes.


Exactly. It uses 75 watts when idle by default. The locked voltage profile uses 105 watts.
Hope NVIDIA will fix this multi-monitor idle power usage, as it's insane. (I thought my 1080 was bad with 40 watts..)


Regarding the curve, shouldn't it be possible to edit it manually?
It has to be saved to disk somewhere..


----------



## kx11

finally got my hands on a GALAX HOF gpu




So far this GPU is all about the looks. The HOF AI overclocking software doesn't help much, but it can handle the RGB stuff and custom text on the LCD panel, which shows info about the GPU; it won't update the LCD info unless I use the HOF AI software though. That thing is still in beta, and it's weird that it won't control the fans correctly when auto mode is enabled. The power target is only 112% on both HOF AI and MSI AB, but GALAX said that a BIOS update is coming for this GPU soon!!!


----------



## Pepillo

I found this for my 2080 TI Aorus Xtreme:



https://es.aliexpress.com/item/Byks...108.1000016.1.491d247aadbDPl&isOrigTitle=true

Opinions? I'm wary of an unknown Chinese brand, but there's nothing else for my Aorus.


----------



## kot0005

GanMenglin said:


> How do you check the memory chip vendor? Just with GPU-Z?
> 
> And what's your BIOS version? Mine is MSINV371MH.191


90.02.17.00.2C 

Any reason to update BIOS ?


----------



## Jpmboy

kx11 said:


> finally got my hands on a GALAX HOF gpu
> 
> So far this GPU is all about the looks. The HOF AI overclocking software doesn't help much, but it can handle the RGB stuff and custom text on the LCD panel, which shows info about the GPU; it won't update the LCD info unless I use the HOF AI software though. That thing is still in beta, and it's weird that it won't control the fans correctly when auto mode is enabled. The power target is only 112% on both HOF AI and MSI AB, but GALAX said that a BIOS update is coming for this GPU soon!!!


Very nice! I tried to buy (several times) a pair of their blocked HOF 2080Tis... never available here.


----------



## kot0005

@CallsignVega Still got your Titan V to confirm this in gaming? https://www.reddit.com/r/nvidia/comments/abzs2t/titan_v_core_clocks_were_capped_by_the_drivers/


----------



## nyk20z3

kx11 said:


> finally got my hands on a GALAX HOF gpu
> 
> So far this GPU is all about the looks. The HOF AI overclocking software doesn't help much, but it can handle the RGB stuff and custom text on the LCD panel, which shows info about the GPU; it won't update the LCD info unless I use the HOF AI software though. That thing is still in beta, and it's weird that it won't control the fans correctly when auto mode is enabled. The power target is only 112% on both HOF AI and MSI AB, but GALAX said that a BIOS update is coming for this GPU soon!!!


Regardless of the bugs you're dealing with, what a sick GPU.


----------



## kx11

Jpmboy said:


> Very nice! I tried to buy (several times) a pair of their blocked HOF 2080Tis... never available here.





eBay, man; a couple of people are selling them, even the OC LAB edition.


----------



## EarlZ

What is the common range for memory overclocks on the 2080 Ti?


----------



## Hanks552

Pepillo said:


> I found this for my 2080 TI Aorus Xtreme:
> 
> 
> https://es.aliexpress.com/item/Byks...108.1000016.1.491d247aadbDPl&isOrigTitle=true
> 
> Opinions? I'm wary of an unknown Chinese brand, but there's nothing else for my Aorus.


i have it, totally fine


----------



## Hanks552

Can somebody help me with overclocking my RTX 2080 Ti? I'm using the custom 380W BIOS, my temps are low, idk if I can push it more =P
I've got a stable +165 core / +950 memory.
Zotac AMP, reference PCB.

my discord Hanks552#6509


----------



## Zammin

EarlZ said:


> What is the common range for memory overclocks on the 2080 Ti?


It varies quite a bit. I can get about +700 on my Micron-equipped FE (although I generally just run it at +600) and about +900 on the Samsung-equipped one I tested. I could actually run the Samsung one up to +1200, but between +1000 and +1200 I would very occasionally see an artifact, so +900 was the totally stable point. Both of these cards were on air with the stock power limit and stock voltage. In Heaven benchmark there was basically no difference in performance between the two despite the difference in memory speed. Some of us suspect that the Micron and Samsung memory may have different timings.
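Quick napkin math on what those offsets are actually worth in bandwidth (Python sketch; the one assumption is that the Afterburner offset adds 1:1 to the effective data rate, so treat the exact numbers as approximate):

```python
# Peak theoretical GDDR6 bandwidth for the 2080 Ti's 352-bit bus.
BUS_BITS = 352

def bandwidth_gbs(effective_mts):
    """Bandwidth in GB/s from the effective transfer rate in MT/s:
    transfers/sec * (bus width in bytes)."""
    return effective_mts * BUS_BITS / 8 / 1000

stock = bandwidth_gbs(14000)     # 616.0 GB/s, matching the spec sheet
plus_900 = bandwidth_gbs(14900)  # ~655.6 GB/s, roughly a 6% gain
# A ~6% bandwidth bump only shows up in FPS if the workload is actually
# bandwidth-bound, which could explain the within-margin-of-error Heaven runs.
```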


----------



## pewpewlazer

Zammin said:


> I was able to confirm that it is indeed holding 0.925V as I wanted it to, so that's good. You may be right about it stepping down with temperature, as it does step down gradually from the point of opening the application. The part I find confusing is that I was under the impression that every X degrees of temperature increase would cause the GPU to drop to the next point down on the V/F curve. So I was thinking that if the curve is flattened out at a lower point than where it normally sits, it shouldn't step the frequency down, since the voltage point isn't changing.
> 
> Although right off the bat, as soon as I open the application (game, Black Ops 4), the clock speed is already 15MHz lower than where I set it. Even with the frequency at 0.925V set to 1965MHz, it starts off at 1950MHz when I start the app, then 1935MHz after a few minutes, then 1920MHz after a few more minutes, and a while later 1905MHz. I even tried maxing out the temp target and it didn't change anything. When I had it locked at 0.925V/1905MHz earlier today with CTRL+L it was holding 1905MHz constantly, but anything above that would eventually drop to 1905MHz. So I guess there must be some other (possibly temp-related) factor going on here.
> 
> With the initial testing out of the way I really think the GPU can do 1950MHz+ at 0.925V, I just need to work out how to get it to stay there. If it is a temperature thing, maybe it will start doing what I want once it's watercooled. I also need to work out how to get the curve to stop making self-edits in other areas when I apply my changes lol.


As temperature goes up, it doesn't shift to the next point down on the V-F curve; it shifts the entire V-F curve (or at least certain key points of it) downwards. You'll never get it to stop making the "self-edits" when you click apply. If you edit your curve at the exact same temperature you created/saved it at, you might get close to having it stop moving, but don't be surprised if it does.

Here's an example of how this witchcraft known as "GPU boost" acts:

My card normally idles around 55*C. If I load my saved profile and click apply when the card is at 55*C, there are almost no "self-edits" going on. My 1950mhz @ 0.900V stays where I set it. If you look to the left of my selected 0.900V point in the screenshot, you'll see that the next point down (0.893V) is also 1950mhz.

Next I manually forced the fan to 100% and let the GPU temps pull down to 40*C. Once it got to 40*C, I re-applied that exact same profile. As you can see, it's calling for 1965mhz @ 0.900V. In fact, the entire "base" curve line below the one you can edit has shifted upwards. Also note that the entire "base" curve hasn't just shifted up by a set amount. The overall shape has changed. Around the 1.00V to 1.025V range, you can see that some points have shifted up +30mhz, while others have only moved up +15mhz.

Finally, I loaded up Heaven benchmark and let the card get up to "gaming temp". Almost immediately, the card dropped down to 1935mhz @ 0.900V, and within 10-20 seconds hit 65*C. At 65*C, I re-loaded my profile and took another screenshot. Everything has shifted again. My 0.900V point is now 1935mhz, and the frequency curve is now completely flat from 1.000V onward.

I didn't bother snagging a screenshot, but notice how the 0.893V point, one step down from 0.900V, was running the same speed in all 3 of my screenshots? Once the card hit 70*C, the 0.893V point shifted down from 1935mhz to 1920mhz, but 0.900V remained at 1935mhz.

Welcome to GPU Boost 4.0 (or whatever version of this over-engineered nonsense they're on now). It's an absolute nightmare.
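For what it's worth, reducing my three snapshots at the 0.900V point to numbers (quick Python sketch, illustrative only; the per-point offsets are whatever GPU Boost decides, not a published formula):

```python
# My observed clocks at the 0.900V point, straight from the screenshots above:
observed = {40: 1965, 55: 1950, 65: 1935}  # temp in C -> clock in MHz

steps = sorted(observed.items())
slopes = [(t2, (c2 - c1) / (t2 - t1))
          for (t1, c1), (t2, c2) in zip(steps, steps[1:])]
# Roughly -1.0 to -1.5 MHz per C, applied in 15MHz quanta; and since the
# offsets are per-point rather than uniform, the curve's overall shape
# changes with temperature instead of just sliding down.
```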


----------



## skupples

Hey folks, I'm trying to get a definitive answer on which blocks fit this card (FTW3).

Is EVGA still the only one?

Mine should arrive Monday.

Went from a 1070 (slow year at work) to a 2080, to a 1080 Ti, all much disappoint; hopefully this ends my 4K disappointment.


----------



## kot0005

Hanks552 said:


> Can somebody help me with overclocking my RTX 2080 Ti? I'm using the custom 380W BIOS, my temps are low, idk if I can push it more =P
> I've got a stable +165 core / +950 memory.
> Zotac AMP, reference PCB.
> 
> my discord Hanks552#6509


Tune the custom curve. Increase your clocks for the voltage steps up to 1093mV in the graph.


----------



## J7SC

kx11 said:


> finally got my hands on a GALAX HOF gpu


*Oh la la, very nice* :thumb: - Can't get any Galax products here, much less the OC HOF... maybe I'll try eBay (per your later post), though with 2080 Tis I would want a warranty. Does the Galax warranty transfer? Even to a country where they don't officially sell them? Tempting, very tempting..



Jpmboy said:


> Very nice! I tried to buy (several times) a pair of their blocked HOF 2080Tis... never available here.


Per above... I've tried for years, but never thought of eBay. I think they only made 150 (or less?) of the blocked HOF OC Labs edition.



kot0005 said:


> 330W, it's in the original post btw.
> 
> Its OC is about the same as my other cards; this one can do 2100MHz and 1k on mem with the 330W BIOS. Temps are around 4-5C lower than the Gigabyte card, getting 48-50C instead of 50-54C on the Waterforce.
> 
> So to conclude, the MSI card has much higher build quality for the block. It has a warranty sticker so I didn't disassemble it. The backplate is a lot thicker and more rigid compared to the Waterforce. One end of the waterblock isn't just hanging loose like on the Waterforce; it's actually bolted to the card and backplate.
> (cut)



I find your Gigabyte obsession slightly amusing. Needless to add, I have had a very different experience with my 2 Gigabyte Aorus Xtreme 2080 Ti Waterforce cards; they have the best build quality among the 30 or so high-end GPUs I have around here, IMO. And it is worth repeating that they come with a four-year warranty, which as far as I know is among the longest for this type of card. Certainly longer than an earlier MSI Lightning, which died a month after the warranty ran out.

As to temps, I had already posted (repeated) Superposition 4K runs for each card with a 360 rad and fans on idle; the highest temp either card has ever seen was 47C with fans on idle, so maybe you and I just build differently. As to speed, I can bench at 2190 with the stock BIOS and no extra voltage (per earlier post), perhaps more if I try hard enough with a custom BIOS, per the 2235MHz GPU-Z and MSI AB tab below. But really, that may just be a difference of a few frames. In any case, the top-end Gigabyte 2080 Tis are highly binned and have been sold out in most places for a month+.

Contrary to assertions, the Aorus 2080 Ti Waterforce cards are extremely sturdy, and the backplate has an extra plastic coating on top of the metal to cover some additional lighting elements. The only problem with the cards is that they are very, very heavy; heavier than an EK copper-block dual-GPU card I have, which was the previous record holder in my collection. They do not sag, due to their construction and the fact that they are anchored on one side, but still, it is a lot of extra weight on the mobo, especially when it's mounted vertically.

Another point of correction is that you can very well inspect the card for any kind of dirt inside; you can certainly see the micro-fins etc. My cards also do not have any coil whine, and the chokes are round rather than the usual square ones everyone else is using, but I have other Gigabyte top-end GPUs that used a rather unusual power section and they are still running 5+ years later.

I wish I had more time with my setup, but I'm on my second MSI X399 mobo in less than a month; very annoying to exchange when you have a very extensive water-cooling setup for the CPU and 2x GPUs.

On another, more general note, I heard from an industry insider that the foundry for the NVIDIA RTX 2080 Tis was putting out a total of 1,800 dies/day (as of a couple of weeks ago). Sounds like a lot, but that is before dysfunctional duds and such. Then they get distributed to the vendors, who do their own binning. Apparently, RTX 2080 Ti yields are just not high enough for the top bins of most manufacturers' highest-end cards, and therefore we have the shortages we are well aware of. In addition, vendors are now introducing additional models in the mid to upper mid-range to make up for it.


----------



## EarlZ

Zammin said:


> It varies quite a bit, I can get about +700 on my Micron equipped FE (although I generally just run it at +600) and about +900 on the Samsung equipped one I tested. I could actually run the samsung one up to +1200 but between +1000 and +1200 I would very occasionally see an artifact so +900 was the totally stable point. Both of these cards were on air with the stock power limit and stock voltage. In heaven benchmark there was basically no difference in performance between the two despite the difference in memory speed. Some of us suspect that the Micron and Samsung memory may have different timings.


What do you normally use to test for performance gain and stability? Does the memory still throttle if it is unstable at the selected frequency, like on the 1080 Ti, which proved hard to properly overclock and measure the performance gain on?


----------



## Bighouse

I just ordered two RTX 2080 Ti FEs from NVIDIA, the NVLink bridge, and waterblocks/backplates and a terminal connection from EKWB. Can I be a part of this thread even though I don't actually have the cards yet?


----------



## Hanks552

kot0005 said:


> Tune the custom curve. Increase your clocks for the voltage steps up to 1093mV in the graph.


Could you help me out? I've never done this curve stuff before =P


----------



## VPII

Hanks552 said:


> Could you help me out? I've never done this curve stuff before =P


If you open Afterburner, you'll see that next to Core Clock (MHz) there is a little graph icon, as per the picture below. After clicking on the graph, select OC Scanner in the top right of the graph window, and when it opens, select Scan. It will take about 20 minutes or so. Once the scan has completed, select the clock speed on the graph you'd like to run and hit "L"; it will draw a yellow line from the top to the bottom of the graph showing the max voltage and core speed it will run, then in Afterburner you hit Apply. As you'll see in my Afterburner, it shows 1980 at 0.956V. I specifically selected this, and set my TDP to 80% to reduce heat, as my card is on the stock air cooler.


----------



## skupples

Bighouse said:


> I just ordered two RTX2080ti FEs from NVidia, the NVLink device, and water blocks/backplates and terminal connection from EKWB. Can I be a part of this thread, even though I don't actually have the cards yet?


Based on how it used to work, you become a "member" upon proof via GPU-Z verification.


----------



## Thoth420

Hi all. I don't tend to OC my GPU until it's a bit later in its life, if at all, and just tend to focus on my CPU. My current card is the 1080 Ti FTW3, but the factory OC is unstable, and even in debug mode it sometimes has issues with games crashing lately. I've ruled out software and the OS; it is definitely the card.

TL;DR: I'd like to avoid a factory-OC'd 2080 Ti as a replacement card, but the FE seemed to have a really bumpy launch and is technically also factory OC'd, even though it comes from NVIDIA directly. I was looking at the EVGA Black, which appears to be reference, but I am not sure about that.
Can anyone point me to reference 2080 Ti models please?


----------



## zhrooms

kx11 said:


> Finally got my hands on a GALAX HOF. So far this GPU is all about the looks. The HOF overclocking software doesn't help much, but it can handle the RGB stuff and custom text on the LCD panel, which shows info about the GPU; it won't update the LCD info unless I use the HOF AI software though. That thing is still in beta, and it's weird that it won't control the fans correctly when auto mode is enabled.
> 
> *The power target is only 112%* on both HOF AI and MSI AB, but GALAX said that a BIOS update is coming for this GPU soon!!!


 
No, the HOF is supposed to be *112%*! It's the fastest card out there, at 400W stock and 450W when set to 112%; the second fastest card is the MSI Gaming X Trio at 300W stock and 406W when set to 135%.

450W is more than you actually need for the hard limit of 1.093V; core clock should reach over 2150MHz in Time Spy Extreme and UNIGINE Superposition 8K Optimized. The card basically has no power limit; it is so high it becomes irrelevant, similar to a shunt mod.
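Converting those power targets to watts makes the comparison clearer (trivial Python; note the real BIOSes split the limit across rails, so shipping totals can differ from this arithmetic by a watt or two):

```python
# The power-target percentage is just a multiplier on the BIOS default limit.
def max_board_power(default_w, target_percent):
    """Maximum board power in watts at a given power-target percentage."""
    return default_w * target_percent / 100

hof  = max_board_power(400, 112)  # 448W, advertised as ~450W
trio = max_board_power(300, 135)  # 405W, advertised as ~406W
```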


----------



## kx11

zhrooms said:


> No, the HOF is supposed to be *112%*! It's the fastest card out there, at 400W stock and 450W when set to 112%; the second fastest card is the MSI Gaming X Trio at 300W stock and 406W when set to 135%.
> 
> 450W is more than you actually need for the hard limit of 1.093V; core clock should reach over 2150MHz in Time Spy Extreme and UNIGINE Superposition 8K Optimized. The card basically has no power limit; it is so high it becomes irrelevant, similar to a shunt mod.





It does pull quite a lot of power from my PSU. While playing a game @ 4K, this is how much the whole system pulls (mostly), compared to just 540W using a reference-board 2080 Ti.


----------



## zhrooms

kx11 said:


> It does pull quite a lot of power from my PSU. While playing a game @ 4K, this is how much the whole system pulls (mostly), compared to just 540W using a reference-board 2080 Ti.


 
Run *UNIGINE Superposition* 8K Optimized (Launcher Preset) with the *Voltage/Frequency curve editor* locked to 2100MHz @ 1093mV in MSI Afterburner (_press 'L' after selecting to lock, it turns yellow, click Apply to save_), and report back with what the PSU displays!


----------



## EarlZ

CptSpig said:


> It's not the gpu. I have a Titan Xp doing the same thing in Battlefield 5. Probably driver or setting issues. Try setting to windowed no boarder in the video settings.


Seems like it is caused by the 417.xx drivers; rolling back to 416.94 has fixed the issue (for now).


----------



## Jpmboy

Hanks552 said:


> i have it, totally fine


post back with some temperatures...


----------



## CptSpig

EarlZ said:


> Seems like it is caused by the 417.xx drivers; rolling back to 416.94 has fixed the issue (for now).


Thanks for the feedback!


----------



## Pepillo

Hanks552 said:


> i have it, totally fine


Thank you, I've ordered one. I have read that they are the same ones Gigabyte uses in their Waterforce.


----------



## J7SC

Jpmboy said:


> post back with some temperatures...



Iamjanco did a post (60 or so posts back) which had some links in it for thermal performance (below), in tests performed by thermalbench.com. Mind you, that was for 1080s, but it's still probably not a bad indication of Bykski (try to say that 5 times really fast...) blocks. Bykski also does OEM blocks for GPU vendors.


----------



## Sheyster

Thoth420 said:


> Hi all. I don't tend to OC my GPU until it's a bit later in its life, if at all, and just tend to focus on my CPU. My current card is the 1080 Ti FTW3, but the factory OC is unstable, and even in debug mode it sometimes has issues with games crashing lately. I've ruled out software and the OS; it is definitely the card.
> 
> TL;DR: I'd like to avoid a factory-OC'd 2080 Ti as a replacement card, but the FE seemed to have a really bumpy launch and is technically also factory OC'd, even though it comes from NVIDIA directly. I was looking at the EVGA Black, which appears to be reference, but I am not sure about that.
> Can anyone point me to reference 2080 Ti models please?


I've been very pleased with my MSI Duke OC so far. It is a reference board with a very mild OC, and is very easily flashed to the 380W PL BIOS. Suggest you pick one up if you can find one. I've heard rumors that the EVGA Black is a non-A chip, but not sure if this is true or not.


----------



## cg4200

Sheyster said:


> I've been very pleased with my MSI Duke OC so far. It is a reference board with a very mild OC, and is very easily flashed to the 380W PL BIOS. Suggest you pick one up if you can find one. I've heard rumors that the EVGA Black is a non-A chip, but not sure if this is true or not.


Yeah, I got mine yesterday; it's a non-A chip, so I can't flash it, and the PL is 112%... The card is nice though for $999.00. I put a waterblock on it and it could do 2000 on core, 1000 on memory; I'll have to test more to see if there are artifacts, though it passed Fire Strike.
Not many people have them. I am trying to find a non-A BIOS with higher than 112% to flash to.. or does anyone know how to bypass the NVFlash checks to get an FTW BIOS on this? Thanks


----------



## kot0005

https://www.techpowerup.com/251151/...cstorm-watercooled-graphics-card-for-ces-2019

looks really nice.


----------



## sippo

Is it safe to use the Sea Hawk X Hybrid BIOS on the Zotac 2080 Ti AMP, as they are both reference PCBs? (The Zotac is on EKWB water cooling because of the fans.)
Is anyone using it?


----------



## kx11

zhrooms said:


> Run *UNIGINE Superposition* 8K Optimized (Launcher Preset) with the *Voltage/Frequency curve editor* locked to 2100MHz @ 1093mV in MSI Afterburner (_press 'L' after selecting to lock, it turns yellow, click Apply to save_), and report back with what the PSU displays!



Give me some time, this GPU needs extra attention to detail when it comes to OC.


----------



## cg4200

*shunt question*

Hey, I saw a few videos on the shunt mod, but I have the EVGA Black, not the FE card... there are two shunts on the front and a different one on the back.
Does anyone know which shunt is which? Thanks.
I would think the one on the bottom is for the PCIe slot, but I'm not sure.


----------



## Hanks552

sippo said:


> Is it safe to use the Sea Hawk X Hybrid BIOS on a Zotac 2080 Ti AMP!, as they are both the reference PCB? (My Zotac is on EKWB water cooling because of the fans.)
> Is anyone using it?


I have the Galax 380W BIOS, works perfectly. I recommend it.


----------



## GanMenglin

My MSI Sea Hawk EK X's core voltage can only go to 1.075V, and I've already used the 406W BIOS on it. Anyone know how to get it to the 1.093V limit? The V/F curve in Afterburner is useless... it shows 1.25V max, but it actually only goes up to 1.075V.


----------



## Hanks552

VPII said:


> If you open Afterburner you'll see next to the Core Clock (MHz) there is a little graph to the left, as per the picture below. After clicking on the graph you select OC Scanner in the top right of the graph. When it opens you select Scan. It will take about 20 minutes or so. After you've completed the scan you select the clock speed on the graph you'd like to run and then hit "L", and it will draw a yellow line from top to bottom of the graph showing the max voltage and core speed it will run; then in Afterburner you hit Apply. As you'll see in my Afterburner it shows 1980 at 0.956v. I specifically selected this, and set my TDP to 80% to reduce heat as my card is stock air cooled.



I have different curves for two of the same graphics card, is that normal? This is after the OC Scanner.


----------



## VPII

Hanks552 said:


> I have different curves for two of the same graphics card, is that normal?


Hi Hanks552, it appears as though your second card in the screenshot is less of an overclocker than the first, but maybe not by too much. The graph shows that the vcore goes to 1.2, but what I found with my card is that 1.063 is basically the max vcore I'll get. I cannot comment for another card so it is difficult to say, but if you are running SLI then I'd say try 1980 MHz for both cards, or even one notch lower like 1965, depending on the vcore you get. But to be sure, run GPU-Z with monitoring enabled and a log file written to capture the monitoring over time, and see what is the highest vcore you get while running a bench. That will give you an idea of where you can settle when trying to find clocks that work with both cards.


----------



## Hanks552

VPII said:


> Hi Hanks552, it appears as though your second card on the screen shot is less of an overclocker than the first, but maybe not by too much. In the graph it shows that the vcore goes to 1.2 but what I found with my card is that 1.063 is basically max vcore I'll get. I cannot comment for another card so it is difficult to say, but if you are running SLI then I'd say try 1980mhz for both cards or even one notch lower like 1965 depending on the vcore you get. But to be sure, run GPUz with monitoring enabled and a file written to show the monitoring over time and see what is the highest vcore you get while running a bench. That will give you an idea on what you can settle when trying to find clocks working with both cards.


this is the clock and voltage, overclocking manually


----------



## J7SC

Hanks552 said:


> I have different curves for two of the same graphics card, is that normal?



Yes, same with my two GPUs, though they're fairly close together. Only if you had absolutely identically performing and rated cards (think VID, or even the ASIC value GPU-Z *used to* display) would the curves overlay perfectly.


----------



## VPII

J7SC said:


> Yes, same with my two GPUs though they're a bit close together. Only if you have absolutely identically performing and rated cards (think VID, or even the ASIC value GPUz *used to* display  ) would the curves overlay perfectly.


It still may not overlay perfectly even if they're identical. It is all a challenge trying to find what works best for both cards. I really hope you find the sweet spot for both.


----------



## VPII

Hanks552 said:


> this is the clock and voltage, overclocking manually


Hi Hanks, clearly the second one in the screenshot has a limit of 1.044 volts, which is probably why it is limited. It could be related to the power limit for the card, which is why it cannot go higher, and you're stuck with that for now. Now, when you do the curve for the card with MSI Afterburner, check what the max clocks are for that particular card when the vcore is at 1.044V, and then try setting both to the same clocks and see if it works.


----------



## J7SC

VPII said:


> It still may not overlay perfectly even if they're identical. It is all a challenge trying to find what works best for both cards. I really hope you find the sweet spot for both.



Point well taken, it is just one of several preconditions. What amazes me though is the sheer amount of info MSI AB must be able to pull to display the curves (before any mods by the user), such as the equivalent of VID.

But beyond GPU voltage, speed, and power consumption, there's also VRAM. I am still trying to figure out which way to go on my setup. Card 1 has a slightly higher GPU speed / slightly lower voltage reading at a given speed than card 2, but card 2 can clock its VRAM a bit higher, by about 40-50 MHz. For NVLink/SLI, what's more important on average? I guess it will also come down to the specific app... as an example, in Time Spy, graphics test 1 is harder on the GPU while graphics test 2 is harder on the VRAM, according to GN and others. Obviously, I don't want to continuously flip water-blocked cards... :thinking:


----------



## Hanks552

Sharing my experience: it doesn't go beyond 49°C.
Ambient temp 28°C.


----------



## kx11

I'm making progress with this GPU, it's quite crazy TBH.


First of all, my problems were caused by the registry edits that MSI AB, Xtreme Tuner Plus, and the HOF AI app were storing in my Windows registry, and I couldn't get rid of them. With that idea in my head I thought a format was needed, but before I did it I looked around the GALAX Korean forums with Google Translate's help and found the VBIOS update for this GPU, which is not available on the other pages like the global/EU page. So I flashed the BIOS, formatted Windows, installed HOF AI first, then removed it because it's weird and just not ready to use, then got MSI AB running. Click on the thumbnails to see the power target and GFX score in TSE on an air-cooled GPU. Hopefully I'm good now?!


----------



## Medusa666

kx11 said:


> I'm making progress with this GPU, it's quite crazy TBH.
> 
> 
> First of all, my problems were caused by the registry edits that MSI AB, Xtreme Tuner Plus, and the HOF AI app were storing in my Windows registry, and I couldn't get rid of them. With that idea in my head I thought a format was needed, but before I did it I looked around the GALAX Korean forums with Google Translate's help and found the VBIOS update for this GPU, which is not available on the other pages like the global/EU page. So I flashed the BIOS, formatted Windows, installed HOF AI first, then removed it because it's weird and just not ready to use, then got MSI AB running. Click on the thumbnails to see the power target and GFX score in TSE on an air-cooled GPU. Hopefully I'm good now?!


I'm really impressed with this card. Would you mind telling us your opinion overall: the noise levels of the card, what temperatures you are seeing, build quality, etc.? Was it worth the extra $$$ for you?

And some more pictures wouldn't hurt ; )


----------



## zhrooms

Sheyster said:


> I've heard rumors that the EVGA Black is a non-A chip, but not sure if this is true or not.


Check the *Original Post* for the list of all Non-A cards.



cg4200 said:


> Yeah, I got mine yesterday. Non-A chip, can't flash, and the power limit is 112%... Card is nice though for $999.00. Put a waterblock on it and it could do 2000 on core..
> 
> Not many people have them. I am trying to find a non-A BIOS with a higher power limit than 112% to flash to... or does anyone know how to bypass the NVFlash check to get an FTW BIOS on this? Thanks.


The card is actually not that nice; putting a waterblock on a Non-A card is close to pointless, as you get higher performance and the same noise level from an air-cooled *A* card (for less money). You can't really run 2000MHz on the core in the really heavy benchmarks such as Fire Strike Extreme and UNIGINE Superposition 1080p Extreme/8K Optimized, and that's what matters. The low power limit is a huge problem. A 380W card is 12% faster than a 280W card when both are on water. (More details in the *Original Post*, down in the FAQ)

It is very unlikely you will ever be able to flash it; factory overclocking the TU102-300-K1-A1 is not allowed by any brand, they all have identical boost and power limits. No brand will suddenly release a card with overclocking, that is what the *A* is for. So that leaves the shunt mod as the only option, which is risky.



sippo said:


> Is it safe to use the Sea Hawk X Hybrid BIOS on Zotac 2080 TI AMP? - as they both are reference PCB?


Yes, but why would you want to? The recommended one is the Galax OC at 380W power limit, works on basically all cards.



GanMenglin said:


> My MSI Sea Hawk X EK voltage can only go up to 1.075v, I’ve already used the 406W BIOS on it. Anyone know how to make it to the limit of 1.093V? VF curv in afterburner is useless...shows 1.25v max, but actually only up to 1.075


The power limit of 406W is not enough for 1.093V; it's only 26W higher than the recommended Galax OC at 380W, which can get close but not all the way. The 406W BIOS does get even closer, but again not all the way (by the looks of it, I have not personally tested it). Only the Galax HOF card is guaranteed to do that with its 450W BIOS, or shunt modding your card.



VPII said:


> In the graph it shows that the vcore goes to 1.2 but what I found with my card is that 1.063 is basically max vcore I'll get.


NVIDIA has set a hard limit of 1.093V, Afterburner just happens to show higher as older cards could go higher. Read the FAQ at the bottom of the *Original Post* for more information as to why you can only go to 1.063V.



kx11 said:


> I'm making progress , it's quite crazy TBH
> 
> I looked around GALAX korean forums and found the BIOS update for the card, hopefully I'm good now?


It's not "crazy", it just has the highest power limit.

Do you mean this? http://www.galax.kr/bbs/board.php?bo_table=data&wr_id=42

You also showed the wrong window in GPU-Z; you are supposed to show the NVIDIA BIOS, not General.

Did you make a backup of the previous BIOS you had?


----------



## kx11

zhrooms said:


> It's not "crazy", it just has the highest power limit.
> 
> Do you mean this? http://www.galax.kr/bbs/board.php?bo_table=data&wr_id=42
> 
> You also showed the wrong window in GPU-Z; you are supposed to show the NVIDIA BIOS, not General.
> 
> Did you make a backup of the previous BIOS you had?





You mean this page?


----------



## kx11

Medusa666 said:


> I'm really impressed with this card, would you mind telling us your opinion overall, the noise levels of the card, what temperatures you are seeing, build quality, etc, was it worth the extra $$$ for you?
> 
> And some more pictures wouldn't hurt ; )



Really good card overall. Fans are not noisy when running below 65%, and the temps are not bad either; the worst I saw was 71°C under 100% load. The fans are sometimes tricky, like they won't spin at all, so what I needed to do is shut down the PC, unplug the power cable from the PSU, wait 10 seconds, and plug it in again. That way MSI AB gets to spin the fans in auto mode, while manual fan control is still weird and doesn't work with any program.


----------



## zhrooms

kx11 said:


> You mean this page?


Yes, thanks.

Apparently the new BIOS is 300/450W (150%), the previously uploaded BIOS at TechPowerUp was 400/450W (112%).

Likely due to temperature: if you do not touch overclocking and run it stock with the power limit at 400W, it will stay at a higher voltage than is actually needed, thus heating the GPU for no reason.

Keeping it at 300W will restrict voltage, but the card will still be fed enough to run the boost clock.


----------



## syneic

Finally got my 2x "ZOTAC GeForce RTX 2080 Ti Triple" during Christmas, but I'm having issues with my NVLink bridge.
Since no shops in my country import anything other than ASUS and MSI bridges, I bought the MSI one; it says on the box that it should be compatible with RTX 2000 series cards, and it doesn't say it needs to be an MSI card.

I've tried googling it as well, but I can't find anywhere that says the bridges are vendor-specific. Does anyone know if this is the case?

Everything works with the bridge connected, I just don't get the SLI/NVLink option in the NVIDIA Control Panel. I also tested the cards separately and all looks well.


----------



## Hanks552

syneic said:


> Finally got my 2x "ZOTAC GeForce RTX 2080 Ti Triple" during Christmas, but I'm having issues with my NVLink bridge.
> Since no shops in my country import anything other than ASUS and MSI bridges, I bought the MSI one; it says on the box that it should be compatible with RTX 2000 series cards, and it doesn't say it needs to be an MSI card.
> 
> I've tried googling it as well, but I can't find anywhere that says the bridges are vendor-specific. Does anyone know if this is the case?
> 
> Everything works with the bridge connected, I just don't get the SLI/NVLink option in the NVIDIA Control Panel. I also tested the cards separately and all looks well.


I have an EVGA NVLink bridge on a Zotac card. Check if it is really mounted correctly, and check your drivers. I don't think it should only work with MSI.


----------



## Shawnb99

I have now spent more weeks waiting for a replacement GPU than I did having it, and have no idea when it'll be in stock.

So disappointed in Nvidia. 


Sent from my iPhone using Tapatalk


----------



## kx11

zhrooms said:


> Yes, thanks.
> 
> Apparently the new BIOS is 300/450W (150%), the previously uploaded BIOS at TechPowerUp was 400/450W (112%).
> 
> Likely due to temperature, if you do not touch overclocking and run it stock, with the power limit at 400W, it will stay at a higher voltage than is actually needed, thus heating the GPU for no reason.
> 
> Keeping it at 300W will restrict voltage, but will still be fed enough to run the boost clock.





Interesting. This card doesn't like memory OC very much, or at least my unit needs no more than +400 to allow some core OC to be applied.


----------



## Strokin3s

I just purchased an RTX 2080 Ti Founders Edition. Are people still having issues with these cards? I'm going to liquid cool it, so I don't want to put a block on it and have it die on me. Any tests I can run or anything?


----------



## raider89

zhrooms said:


> No, the HOF is supposed to be *112%*! It's the fastest card out there at 400W stock and 450W when set to 112%; the second fastest card is the MSI Gaming X Trio at 300W stock and 406W when set to 135%.
> 
> 450W is more than you actually need for the hard limit of 1.093V, core clock should reach over 2150MHz in Time Spy Extreme and UNIGINE Superposition 8K Optimized. The card basically has no power limit, so high it becomes irrelevant, similar to a shunt mod.


Is the 380W Galax BIOS still the best BIOS to use? I have the XC Ultra on water, didn't know if a better BIOS was available.


----------



## J7SC

kx11 said:


> Really good card overall. Fans are not noisy when running below 65%, and the temps are not bad either; the worst I saw was 71°C under 100% load. The fans are sometimes tricky, like they won't spin at all, so what I needed to do is shut down the PC, unplug the power cable from the PSU, wait 10 seconds, and plug it in again. That way MSI AB gets to spin the fans in auto mode, while manual fan control is still weird and doesn't work with any program.






kx11 said:


> interesting , this card doesn't like memory OC very much or at least my unit needs no more than 400+ to allow some core OC to be applied



Given your earlier posts about speed on air, it seems like a _great_ card. Down the line, you might want to consider water-cooling it, as all 2080 Tis seem to like the lower temps; the power limit and temps are tied, and even the GDDR6 VRAM reacts well to cooler temps. 

Or...do some more eBaying :wubsmiley


----------



## Asmodian

zhrooms said:


> Likely due to temperature, *if you do not touch overclocking and run it stock, with the power limit at 400W, it will stay at a higher voltage than is actually needed, thus heating the GPU for no reason.*
> 
> Keeping it at 300W will restrict voltage, but will still be fed enough to run the boost clock.


No, the power limit does not work that way. If you are not hitting the limit, changing the limit does nothing at all; voltages don't change and power draw doesn't change. The only way the power limit does anything is by limiting the boost clock; there is no way to use the power limit to run at a lower voltage without also running at lower clocks.

In a way what you are saying is true, the difference is likely in temps when not overclocking, but only because many heavy loads will hit 300W, causing the card to downclock/downvolt to reduce power. Waiting until the load hits 400W before downclocking results in much higher temperatures.

It is amazing how much power can be saved by the power limit; the short-term small downclocks from bouncing off the lower power limit usually barely hurt performance while dropping a lot of power. But _it is downclocking_, not only dropping the voltage.
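
The distinction here (the limiter moves the card down the V/F curve rather than shaving voltage at a fixed clock) can be illustrated with a toy model. The curve points, the constant, and the P ∝ f·V² approximation below are all invented for illustration; this is not NVIDIA's actual firmware logic:

```python
# Toy model of how a Boost-style power limit forces downclocking rather
# than a pure voltage drop. All numbers are made up for illustration.

# Hypothetical V/F curve points: (clock in MHz, voltage in V), ascending
VF_CURVE = [(1350, 0.725), (1600, 0.800), (1800, 0.875),
            (1950, 0.950), (2040, 1.025), (2100, 1.093)]

K = 0.095  # fitted toy constant so the top point lands near 240 W

def modeled_power(clock_mhz: float, voltage: float) -> float:
    """Rough dynamic-power estimate: P proportional to f * V^2."""
    return K * clock_mhz * voltage ** 2

def sustained_point(power_limit_w: float):
    """Highest V/F point whose modeled draw fits under the power limit.

    Lowering the limit moves the card to a lower point on the curve,
    dropping clock AND voltage together; voltage alone cannot be
    reduced while holding the clock."""
    fitting = [p for p in VF_CURVE if modeled_power(*p) <= power_limit_w]
    return fitting[-1] if fitting else VF_CURVE[0]

if __name__ == "__main__":
    for limit in (180, 220, 260):
        f, v = sustained_point(limit)
        print(f"{limit} W limit -> sustains {f} MHz @ {v:.3f} V "
              f"(~{modeled_power(f, v):.0f} W modeled)")
```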


----------



## Medusa666

So I have a Zotac RTX 2080 Ti AMP! edition that I received last week.

I think the card overall feels robust and of good quality. I did not know what fan bearing they use, so I emailed their support; they did not disclose it, but told me that the lifespan is 18,000 hours. Another good selling point that made me choose this card was the extended 5-year warranty that Zotac offers if you register the card within 28 days of purchase.

Overclocking-wise it does 2110-2115 MHz consistently during gaming after using the OC Scanner, is this considered good? It is with the stock air cooler, and I'm using MSI Afterburner. I have only done a +400 MHz OC on the memory and not attempted anything higher yet.

Noise levels are OK-ish, but not great. I have built a silent PC, so for me the auto fan curve is a bit of a letdown, but I will manually adjust it when I get more spare time on my hands.

All in all I'm happy with my purchase. It's the first time I've owned a Zotac card, so I'm looking forward to seeing how it holds up over time and whether I ever need to use the warranty.

I have the option of getting a KFA2/GALAX RTX 2080 Ti Hall Of Fame card for $2,112; I got the Zotac card for $1,380, and I don't know what to do. I like the looks of the HOF card, but the looks alone are not worth $732 more to me. I'm thinking that the HOF card has better quality components and PCB, but Zotac still has the longer 5-year warranty, which should negate the need for higher quality components? Noise-wise I do not know how the HOF card is; it should be quieter?


----------



## Shawnb99

Strokin3s said:


> I just purchased a RTX 2080TI Founders Edition. Are people still having issues with these cards? I'm going to liquid cool it so I don't want to put a block on it and have it die on me. Any tests I can run or anything?




I had an EVGA XC Ultra die on me 18 days in, with no OC or hard gaming.
Just stress test it as much as you can before you put it under water. I was days away from putting mine under; better now than having to take it out of the loop. 


Sent from my iPhone using Tapatalk


----------



## boli

Medusa666 said:


> Overclocking wise it does 2110-2115 MHz consistently during gaming after using the OC scanner, is this considered good? It is with the stock air cooler, I'm using MSI Afterburner.



That'd be above average on water (2070), and way above average on air (2040?). Grats!


----------



## J7SC

Medusa666 said:


> So I have a Zotac RTX 2080 Ti AMP! edition that I received last week.
> 
> (cut)
> 
> All in all I'm happy with my purchase, first time I ever own a Zotac card so looking forward to seeing how it holds up over time and if I ever need to use the warranty.
> 
> I have the option of getting a KFA2/GALAX RTX 2080 Ti Hall Of Fame card for 2112$, I got the Zotac card for 1380$, I don't know what to do. I like the looks of the HOF card, but the looks in itself is not worth 732$ more to me, I'm thinking that the HOF card has better quality of the components and PCB, but Zotac still has the longer 5 year warranty which should negate the need for higher quality components? Noise level wise I do not know how the HOF card is, should be more quiet?



...not sure which Galax HOF 2080 Ti you're referring to... and the quoted price of $2,112, is that US$? The top-of-the-line water-blocked OC Labs one is listed at US $1,799.99, but sold out (in fact, the only thing not sold out at their US online store seems to be the GTX 1060...). All that aside, if you already have the Zotac and got the additional 3-year warranty, it may make more sense to either buy a nice water block for the Zotac, or even consider a second Zotac card for NVLink, given the price difference. Personally, I love the Galax top-of-the-line HOF products and have been eyeing them for years, but they are not officially available here and warranty could be an issue. But really, it's still your personal decision... :thinking:


----------



## kx11

J7SC said:


> Given your earlier posts about speed on air, it seems like a _great_ card - down the line, you might want to consider water-cooling on it as all 2080 TIs seem to like the lower temps, the power limit and temps are tied, and even the GDDR6 VRAM reacts well to cooler temps.
> 
> Or...do some more eBaying :wubsmiley



That GPU is too much for me and too big for my case (Corsair 570X), and it'll be so hard to sell a year later when NVIDIA releases the 30xx lineup :h34r-smi:h34r-smi


----------



## bogdi1988

Interesting that the new beta 10 of MSI Afterburner detects cards with the Galax 380W BIOS as an MSI RTX 2080 Ti Lightning.


----------



## kx11

Some photos of the GPU (high res):



https://farm8.staticflickr.com/7893/45881289384_9c6d8a2689_o.jpg
https://farm5.staticflickr.com/4883/45881289394_a4b1ae7419_o.jpg
https://farm8.staticflickr.com/7849/46553432852_85d9e7351f_o.jpg


----------



## Medusa666

J7SC said:


> ...not sure which Galax HOF 2080 TI you're referring to...and the quoted price of $2,112, is that US $ ? The top-of-the-line water-blocked OC Labs one is listed at US $ 1,799.99, but sold out (in fact, the only thing not sold out at their US online store seems to be the GTX 1060...). All that aside, if you already have the Zotac and got the additional 3 yr warranty, it may make more sense to either buy a nice water block for the Zotac, or even consider a second Zotac card for NVLink, given the price difference. Personally, I love the Galax top-of-the-line HOF products and have been desiring them for years, but they are not officially available here and warranty could be an issue. But really, still just your personal decision...:thinking:


Yeah, sorry, I did not mention that the price is so high because I'm in Sweden and the card costs 18,990 Swedish krona here. 

I think the water block is a great idea. Is there any good, easy-to-set-up AIO for the RTX 2080 Ti reference boards? Or anything else you would recommend? 

I also drool over the 2080 Ti HoF, but it is very expensive! : ) 

This is the card I'm having the option on.


----------



## Medusa666

kx11 said:


> some photos of the GPU (big res. )
> 
> 
> 
> https://farm8.staticflickr.com/7893/45881289384_9c6d8a2689_o.jpg
> https://farm5.staticflickr.com/4883/45881289394_a4b1ae7419_o.jpg
> https://farm8.staticflickr.com/7849/46553432852_85d9e7351f_o.jpg


Looks amazing : ) 

What was your reasoning behind buying that specific card, seeing as it is more expensive than others for almost the same performance?


----------



## kx11

Medusa666 said:


> Looks amazing : )
> 
> What was your reasoning behind buying that specific card, seeing as it is more expensive than others for almost the same performance?





The Asus Strix OC; I pre-ordered it 3 times, and every time the seller made me wait for at least a month before I cancelled the order. Then this GPU showed up and I bought it. Took me 2 weeks to get it after I purchased it; it should've been a week but shipping issues got in the way.




Mine got the Chinese box with some Chinese e-sports gaming teams' logos on it. The manual is in Chinese too, but who cares.


----------



## Medusa666

kx11 said:


> The Asus Strix OC; I pre-ordered it 3 times, and every time the seller made me wait for at least a month before I cancelled the order. Then this GPU showed up and I bought it. Took me 2 weeks to get it after I purchased it; it should've been a week but shipping issues got in the way.
> 
> Mine got the Chinese box with some Chinese e-sports gaming teams' logos on it. The manual is in Chinese too, but who cares.


I see, so maybe luck of the draw then : ) It is a beautiful card, and the components are crazy high quality. Any chance you could do a YT video recording the noise levels of the fans at various intervals? 

Have you done any overclocking as of yet?


----------



## kx11

Medusa666 said:


> I see, so maybe luck of the draw then : ) It is a beautiful card, and the components are crazy high quality. Any chance that you can do a YT video when you record the noise levels of the fans at various intervals?
> 
> Have you done any overclocking as of yet?





I don't have good audio recording equipment to capture the actual noise levels, but a good fan curve in MSI AB can do the job for silent gaming 



I was just benchmarking SOTR going +100 core, +400 mem while the fans were spinning at 100% 













The fans were running at 100% because of the button on the back; MSI AB couldn't detect that the fans were running at 100%.


----------



## J7SC

kx11 said:


> That GPU is too much for me and too big for my case (Corsair 570X), and it'll be so hard to sell a year later when NVIDIA releases the 30xx lineup



...570x Corsair + Dremel tool = mucho space :thumbsups



kx11 said:


> some photos of the GPU (big res. )
> https://farm5.staticflickr.com/4883/45881289394_a4b1ae7419_o.jpg


I'm spying some Galax RAM as well, nice... looks like you found a whole Galax nest (don't tell me, eBay?!)



Medusa666 said:


> Yeah, sorry I did not mention that the price is so high because I'm in Sweden and the card costs 18 990 swedish krona here.
> 
> I think you have a great idea with the water block, is there any good easily set up AIO for the RTX 2080 Ti reference boards? Or anything that you would recommend?
> (cut)



I'm not sure about AIOs, though there must be some solutions others can recommend. But if you choose a custom loop and block for the reference design, I would go with the Watercool Heatkiller IV or Aquacomputer kryographics with active backplate.


----------



## zhrooms

Shawnb99 said:


> I have now spent more weeks waiting for a replacement GPU than I did having it, and have no idea when it'll be in stock.
> 
> So disappointed in Nvidia.


 
You're not alone, this card is a *disaster* so far. RTX is basically non-existent despite being used as a big selling point, prices are all over the place in Europe and recently increased again, and more than half the models are almost impossible to find in stock. Over the past few weeks the Non-A cards have flooded the market, and there is NO indication of which card has which GPU; sketchy is putting it mildly. There is absolutely no reason (that we know of) for the Non-A cards to exist, they overclock just the same as any A card when shunt modded (to bypass the low power limit of just 280W; the Founders Edition runs 320W as a comparison). The only difference is a white letter missing on the GPU itself (300 instead of 300A); the board is identical in every other way. And this is brand new by the way, the separation of a flagship card into two GPUs, people should really be more upset.

The Galax OC card at 380W, a normal reference PCB with a dual fan 2-slot cooler, is *12% faster* than a Non-A card on water cooling, that's *69 to 77 FPS*. Galax/KFA2 is the only brand that basically ignored what the other partners did and put the highest power limit on their cards, both reference (380W) and custom PCB (450W). What surprised me the most is that ASUS is selling one of the three ROG Strix variants with the Non-A GPU, no room for overclocking whatsoever, rendering the expensive PCB and cooler basically useless, and retailers sell it for up to €220 more than the cheapest A card, which has higher performance even with far less efficient coolers.

Feels like the separation of GPUs is being done solely so that partners can sell the overclocked (unlocked, similar to Intel selling K models) cards at a premium price; previously (1000 series) we could just buy the cheapest card and flash a BIOS in 30 seconds, and all was well. This time around, about 1/3 of the cards being sold (excluding waterblocked and duplicate-name cards) cannot be altered in any way except through shunt modding (29 A cards and 14 Non-A cards).

EVGA sells (pre-tax) their cheapest Black Edition (Non-A GPU) at $999 MSRP and the XC Black Edition (A GPU) at $1,149 MSRP, just what NVIDIA advertised back during the reveal: a recommended price of $999 and $1,199 for the Founders Edition. But prices today, even several months after release, are *all over the place*. Here in Europe you can purchase the Palit GamingPro featuring an A GPU for $1,100 USD (pre-tax), which is great, but the Gigabyte Windforce with a Non-A GPU will set you back $1,200 USD (pre-tax), so prices are reversed in many cases; the A version of the Windforce (OC variant) has the exact same price of $1,200 USD pre-tax. I have never seen more messed up prices months after release.

You save at most 4-5% buying a Non-A card over an A card, but choosing A instead gets you 6-7% more performance, so you actually pay more for less when choosing ANY Non-A card (air cooled; dual or triple fan is irrelevant, it just needs a 330W BIOS or above to get the 6-7%).
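
For what it's worth, that value argument can be sanity-checked in a couple of lines; the normalized numbers below are illustrative, taken from the 4-5% / 6-7% figures mentioned:

```python
# Back-of-envelope cost-per-performance check for Non-A vs A cards.
# Values are normalized to the A card and purely illustrative.
a_card = {"price": 1.000, "perf": 1.000}
non_a  = {"price": 0.955, "perf": 0.935}  # ~4.5% cheaper, ~6.5% slower

def cost_per_perf(card: dict) -> float:
    """Price paid per unit of performance; higher means worse value."""
    return card["price"] / card["perf"]

print(f"A card: {cost_per_perf(a_card):.3f}  Non-A: {cost_per_perf(non_a):.3f}")
# The Non-A ends up ~2% more expensive per unit of performance
# despite the lower sticker price.
```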

The GPU itself is absolutely amazing performance-wise; such a shame NVIDIA had to sully it with this awful GPU separation and reveal/release/post-release. I can't think of a single thing they did right for us consumers.

Reveal, wrong time.
Reveal, overpriced.
Release, delayed.
Release, delayed again.
Post-release, weeks and even months for cards to show up.
Post-release, no RTX titles as far as the eye can reach.
Post-release, flooded market with locked slower cards.

And this is probably a success for NVIDIA and its partners, so expect the next flagship to be separated too.
 


Strokin3s said:


> I just purchased an RTX 2080 Ti Founders Edition. Are people still having issues with these cards? I'm going to liquid cool it, so I don't want to put a block on it and have it die on me. Any tests I can run or anything?


 
Cards are dying all over the place, but at least the cards that were most likely to die, the ones directly from NVIDIA (Founders Edition), are out of circulation at this time, so you should be fine. I doubt stress testing the card would kill it faster, but who knows; if you want, you could always set it to loop benchmarks while you're out for the day and asleep (if you can sleep with your computer on), preferably also overclocked as much as you can, and you can run the card at low fan speed and high temp.
 


raider89 said:


> Is the 380W Galax BIOS still the best BIOS to use? I have the XC Ultra on water, didn't know if a better BIOS was available.


 
Yes, the only ones higher are the MSI Gaming X Trio at 406W and the Galax HOF at 450W, but both of those have an extra power connector. The Galax 380W is the highest 2x 8-pin BIOS and therefore recommended.
 


J7SC said:


> Seems like a _great_ card - down the line, you might want to consider water-cooling on it as all 2080 TIs seem to like the lower temps, the power limit and temps are tied


 
The power limit does far more for performance than the temperature limit; water cooling your card might realistically gain you about 30-45 MHz, while increasing the power limit by just 50W gets you 120 MHz.
 


Asmodian said:


> No, the power limit does not work that way. If you are not hitting the limit changing the limit does nothing at all, voltages don't change and power draw doesn't change.
> 
> The only way the power limit does anything is by limiting the boost clock, there is no way to use the power limit to run at a lower voltage without also running at lower clocks.
> 
> In a way what you are saying is true, the difference is likely for temps when not overclocking, but only because many heavy loads will hit 300W, causing the card to declock/devolt to reduce power. Waiting until the load hits 400W before downclocking results in much higher temperatures.
> 
> It is amazing how much power can be saved by the power limit, the short term small declocks from bouncing off the lower power limit usually barely hurts performance while dropping a lot of power. But _it is downclocking_, not only dropping the voltage.


 
That's exactly how it works... Both my cards increased voltage at stock boost when the power limit was raised; instead of the core clock fluctuating, it became a straight line (at the previous peak) once the voltage increased, and past that threshold it kept raising voltage, and thus temperature. I tested this extensively with both a Non-A and an A GPU.

_"There is no way to use the power limit to run at a lower voltage without also running at lower clocks"_ Of course not? I'm saying the opposite. When the core clock is reached, the voltage does not need to be increased but is anyway as it basically scales with power limit, blame the "boost" curve, when you run the card at lower usage it will frequently reach 1.093V, which is completely unecessary, that is one of the reasons I use a manual locked voltage curve, that way it doesn't go all over the place, having a life of its own.

_"Waiting until the load hits 400W before downclocking results in much higher temperatures."_ That is literally what I said. When the card is unrestricted (400W) the voltage is higher than it needs by default, this only applies to stock, not overclocking.

_"It is amazing how much power can be saved by the power limit"_. Yes, that's the purpose of it, but the worst thing ever for manual overclocking as they are so hell bent on restraining it.

_"But it is downclocking, not only dropping the voltage."_ I never said that, I was talking about the opposite as earlier mentioned.
 


Medusa666 said:


> So I have a Zotac RTX 2080 Ti AMP! edition that I recieved last week.
> 
> Overclocking wise it does 2110-2115 MHz consistently during gaming after using the OC scanner, is this considered good? It is with the stock air cooler, I'm using MSI Afterburner. I have only done a 400 MHz OC on the memory and not attempted anything higher currently.
> 
> Noise levels are OKish, but not great, I have built a silent PC so for me the auto fan curve is a bit of a letdown, but I will manually adjust it when I got more spare time on my hands.
> 
> All in all I'm happy with my purchase, first time I ever own a Zotac card so looking forward to seeing how it holds up over time and if I ever need to use the warranty.
> 
> I have the option of getting a KFA2/GALAX RTX 2080 Ti Hall Of Fame card for 2112$, I got the Zotac card for 1380$, I don't know what to do. I like the looks of the HOF card, but the looks in itself is not worth 732$ more to me, I'm thinking that the HOF card has better quality of the components and PCB, but Zotac still has the longer 5 year warranty which should negate the need for higher quality components? Noise level wise I do not know how the HOF card is, should be more quiet?


 
Zotac AMP is one of the best cards you can buy, huge cooler and can flash the 380W BIOS, takes the ~3rd "best card" slot (shared with a few other cards).

OC Scanner is a gimmick at best; manual overclocking is the only way to go. Also, stating "2110-2115MHz" means *absolutely nothing* on its own; you have to be very clear about which application you ran at those speeds. Is it Minecraft at 5% GPU usage, or 3DMark Time Spy Extreme at 4K? I recommend UNIGINE Superposition to everyone; its 8K Optimized preset, in which you only get about 30 FPS, really pushes the card to its limit and reveals the actual maximum core clock.

The card is a reference PCB design by NVIDIA, so there's not really much to see except for the specific Zotac cooler, but as said it's a big cooler and thus, you made a great purchase.

The Hall of Fame is specifically made for LN2 overclocking, so using the card on air is a huge waste of money, but if money is no issue then why not, should be about 1% faster, for about 53% higher cost. Also the "components" are overkill on the Zotac (reference PCB), just as good at overclocking as any other card on Air & Water (few exceptions). Custom PCB cards such as Strix, FTW3 and HOF only shine at sub-zero temperatures, bypassing NVIDIA safety limits.
 


boli said:


> That'd be above average on water (2070), and way above average on air (2040?). Grats!


 
See the above reply as to why it needs to be known 'in what'; the "average on water 2070" is falsely fabricated and has no significance.


----------



## Hanks552

I think I already hit the top overclock on mine =P


----------



## J7SC

zhrooms said:


> You're not alone, this card is a *disaster* so far. RTX is basically non-existent despite being used as a big sales point, prices are all over the place in Europe and recently increased further, and more than half the models are almost impossible to find in stock. Over the past few weeks the Non-A cards have flooded the market, and there is NO indication of which card has which GPU; sketchy is putting it mildly. There is absolutely no reason (that we know of) for the Non-A cards to exist; they overclock just the same as any A card when shunt modded (to bypass the low power limit of just 280W; the Founders Edition runs 320W as a comparison). The only difference is a white letter missing on the GPU itself (300 instead of 300A); the board is identical in every other way. And this is brand new by the way, separating a flagship card into two GPUs; people should really be more upset. The Galax OC card at 380W, a normal reference PCB with a dual-fan 2-slot cooler, is *12% faster* than a Non-A card on water cooling, that's *69 to 77 FPS*. Galax/KFA2 is the only brand that basically ignored what the other partners did and put the highest power limits on their cards, both reference (380W) and custom PCB (450W). What surprised me the most is that ASUS is selling one of the three ROG Strix variants with the Non-A GPU, with no room for overclocking whatsoever, rendering the expensive PCB and cooler basically useless, and retailers sell it for up to €220 more than the cheapest A card, which has higher performance even with far less efficient coolers.
> 
> (cut)
> 
> See the above reply as to why it needs to be known 'in what', the "average on water 2070" is falsely fabricated, has no significance.



I very much agree with the first part of your post... this RTX launch has been nothing short of embarrassing. Between the GTX 600 and GTX 900 series I bought 13 GPUs, usually custom PCB, for four machines, and when those cards came out the launches were well-orchestrated affairs. IMO, RTX was a bit of a "Hail Mary" pass, as NVidia's stock price had already dropped by 45% before the more recent stock market gyrations. NVidia took a huge hit from the crypto mining collapse, yet seemed to have promised shareholders that it knew how to 'command the GPU market'. Some shareholder groups are trying to sue NVidia over this (so far unsuccessfully), and its biggest shareholder is apparently trying to unload a multi-billion-dollar NVidia share package.

That is the context for the RTX launch: rushed, then postponed for a bit, then botched. Add in rumored yield problems at the fab, and there is indeed a big shortage of higher-binned cards (per my recent post), and if you can even find a top-binned model, prices have gone up significantly. Your comments on the new ~285W 'version' are spot on... I have actually seen vendors this week trying to make that a positive sales point, as in 'it is an improvement', and several retailers are releasing additional mid-range models to make up for the shortage at the top end.

---
On your 'average on water', I do take issue with the term 'falsely fabricated', though it is obviously out of context. I believe the poster had picked up some numbers I had reported earlier from HWBot, but without the context which as you rightly point out is very important (what App / apples to apples etc). 

For the 2080 Ti, that means it is based on 2350 separate HWBot submissions, each in one (or more likely several) of the GPU applications listed below. The current HWBot summary for the 2080 Ti as of today reads:

*Air* = 1985 MHz
*Water* = 2090 MHz
*LN2* = 2473 MHz

Interestingly, the 'air' speed has dropped from when I first posted this some weeks back, perhaps reflecting the fab issues raised in the first part, so 'water' is now 100 MHz or so ahead of air. All that said, and returning to my experience with the NVidia 600 - 900 series high-end card launches, the RTX launch is somewhat of a disaster (IMO) by comparison.

Here are the apps used by the 2350 2080 TI submissions to generate the air, water and LN2 MHz numbers:


----------



## AlbertoM

This is not just embarrassing, it's a complete disaster.

They sell a much higher-cost product that offers only a minor gain in performance compared to previous releases.

Not to mention the cards dying all over the place.

You guys are crazy for taking that jump into the unknown, or have a lot of money to burn.

I've run my FE GTX 1080 since 2016 at 2164 MHz / 1.075V, with the 276W-TDP Palit BIOS and an EVGA Hybrid cooler, and it does 200 FPS at 1440p in Doom, the only game I play. So until things settle and this bull**** is over: no, no, no thank you very much.


----------



## Thoth420

If I am not interested in Overclocking this card as it is more of a replacement as my 1080Ti is dying way sooner than expected does it matter if I get a non A chip? I just want stability in games don't really mind losing a bit of frames for it. Should I just go with a 2080 non Ti? I need to drive 3440 x 1440 G Sync so I tend to only look at the flagship gaming cards from Nvidia but I didn't pay attention to this gen because I was going to skip it.
I was looking at the EVGA Black. Ref speeds and dual fan so I don't have to listen to vacuum cleaner.


----------



## Sheyster

Thoth420 said:


> If I am not interested in Overclocking this card as it is more of a replacement as my 1080Ti is dying way sooner than expected does it matter if I get a non A chip? I just want stability in games don't really mind losing a bit of frames for it. Should I just go with a 2080 non Ti? I need to drive 3440 x 1440 G Sync so I tend to only look at the flagship gaming cards from Nvidia but I didn't pay attention to this gen because I was going to skip it.
> I was looking at the EVGA Black. Ref speeds and dual fan so I don't have to listen to vacuum cleaner.


Seems like EVGA Black is going for $1149 if you can find one. I would just get this instead:

https://www.newegg.com/Product/Prod...0 ti&cm_re=rtx_2080_ti-_-14-500-433-_-Product

It's an A chip, reference PCB. You'll be able to flash the 380w BIOS and you'll also get a free copy of BF5.


----------



## Medusa666

zhrooms said:


> Zotac AMP is one of the best cards you can buy, huge cooler and can flash the 380W BIOS, takes the ~3rd "best card" slot (shared with a few other cards).
> 
> OC Scanner is a gimmick at best, manual overclocking is the only way to go. Also stating "2110-2115MHz" means *absolutely nothing*, you have to be very clear in what application you run with those speeds, is it Minecraft at 5% GPU Usage or is it 3DMark Time Spy Extreme at 4K? I recommend UNIGINE Superposition to everyone, it has 8K Optimized preset which you only get about 30 FPS in, really pushes the card to its limit, reveals the actual max core clock.
> 
> The card is a reference PCB design by NVIDIA, so there's not really much to see except for the specific Zotac cooler, but as said it's a big cooler and thus, you made a great purchase.
> 
> The Hall of Fame is specifically made for LN2 overclocking, so using the card on air is a huge waste of money, but if money is no issue then why not, should be about 1% faster, for about 53% higher cost. Also the "components" are overkill on the Zotac (reference PCB), just as good at overclocking as any other card on Air & Water (few exceptions). Custom PCB cards such as Strix, FTW3 and HOF only shine at sub-zero temperatures, bypassing NVIDIA safety limits.


Thank you for a very detailed and structured reply, it made a lot of sense even for a layman such as myself.

I'm seeing these speeds in games and benchmarks when I play; they are peak speeds of 2110-2115 MHz, so most of the time they aren't sustained, except in lighter games like Overwatch. When I run the Unigine benchmark 8K Optimized I get around 1950MHz with the same OC scanner curve applied.

I now attempted to manually overclock the card and entered +140 MHz in MSI Afterburner, I cancelled the test at scene 5 (just wanted to see if it started at all lol) but it stayed around 1995-2010 MHz with a peak of 2160 MHz. 

Is it hard to use Nvflash? Does it void the warranty of the card?


----------



## raider89

I don't think I got lucky with my 2080 Ti; seems 2080 MHz is the max I can hit in Time Spy, can't even get into the top 100.


----------



## VPII

zhrooms said:


> NVIDIA has set a hard limit of 1.093V, Afterburner just happens to show higher as older cards could go higher. Read the FAQ at the bottom of the *Original Post* for more information as to why you can only go to 1.063V.


Hi zhrooms, thanks for directing me to the original post. I really enjoyed reading your Q&A section, very interesting, and it gave some good advice. I'm running an original 380W-TDP Galax RTX 2080 Ti OC, but unfortunately the cooling on this card is the limiting factor. At present I've set my max OC to 1965-1980, using the MSI Afterburner OC curve to keep the GPU vcore at 0.962V and dropping the TDP to 80% (240W), which gains me roughly a 10-15c temperature drop. The card can do 2145MHz on the core; memory only +750MHz, so the memory is maybe not the best. Unfortunately that core drops all the way to 2025, since with that power limit the card hits 80c or just below while running benchmarks. I prefer running the card with the lowered TDP, as when I'm gaming in SOTR or BFV (I don't really play the latter, just use it for testing since I got it with the card) the card maxes out at 63-65c.


----------



## EarlZ

I am a bit surprised at how well the 2080 Ti can run with a little undervolt. I've set 0.975V @ 2010MHz; it runs Superposition 4K at 1965-1980MHz.


----------



## Coldmud

This is rich, Gigabyte finally seems to have updated the windforce extreme bios to F2, yet the only change they log is "improve fan RPM" on a fanless card. 

https://www.aorus.com/GV-N208TAORUSX-WB-11GC#pd_download


----------



## J7SC

Coldmud said:


> This is rich, Gigabyte finally seems to have updated the windforce extreme bios to F2, yet the only change they log is "improve fan RPM" on a fanless card.
> 
> https://www.aorus.com/GV-N208TAORUSX-WB-11GC#pd_download



..you mean 'waterforce', not 'windforce'...though now that they come with improved fan RPM  
BTW, have you loaded that new Bios ? Any performance differentials ?


----------



## zhrooms

J7SC said:


> On your 'average on water', I do take issue with the term 'falsely fabricated', though it is obviously out of context. I believe the poster had picked up some numbers I had reported earlier from HWBot, but without the context which as you rightly point out is very important (what App / apples to apples etc).
> 
> For the 2080 TI, that means it is based on 2350 separate HWBot submissions in any one of more likely multiple GPU applications listed below. The current HWBot summary for the 2080 Ti as of today reads:
> 
> *Air* = 1985 MHz
> *Water* = 2090 MHz
> *LN2* = 2473 MHz
> 
> Interestingly, the 'air' speed has dropped from when I first posted this some weeks back, reflecting perhaps the issues raised in the first part about fab problems, so now 'water' is 100 MHz or so ahead of air.
> 
> Here are the apps used by the 2350 2080 TI submissions to generate the air, water and LN2 MHz numbers:


 
Yes, I know they're from HWBot, and that's the issue; you can't base *anything* on those numbers today. Years ago you could, when the cards had unlocked voltage (an even playing field).

HWBot is trying to estimate the average core clock over all those benchmarks, but that does not work once you introduce the 'boost' we have on today's cards. Most of the benchmarks in that list will not fully utilize this GPU, and when the card is not hitting the power limit because of that, the core clock will rise to numbers higher than it would reach under actual full usage. So right there the numbers are already skewed; for example, in one of the lower-resolution benchmarks the card might go up to 2100MHz at 1.093V, but it's down to 1900MHz at 0.900V in Time Spy Extreme.

Then we have temperature differences: someone running a card on the auto fan curve at 75c, versus someone else with the fan speed at 100% and a temperature of 55c, which means at least two core clock steps higher (30MHz).

Power limit is the biggest deciding factor in what core clock you actually reach, we have a wild amount of different cards at 250, 260, 280, 290, 300, 320, 330, 338, 360, 366, 380, 406 and 450W power limits. 

Each of these will produce a different overclock, the lowest 250W will only manage to keep 0.900V and the 450W can go all the way up to 1.093V. So we're now talking about a 300MHz (or more) difference (both air cooled cards).

In a perfect world every single card would have identical BIOS (Power Limit) and unlocked voltage (which we did have years ago), at that point it was all up to luck, if you got a slightly better GPU it would reach a few MHz more.

So let's take the Air value of 1985 MHz; how does that mean anything to me if I have an MSI Gaming X Trio, the second-largest air-cooled card with the second-highest power limit? That card has no problem reaching 2100MHz on air, which is 115MHz more than HWBot says the average user gets. So how does that help anyone?

Every card will simply reach a different core clock, one cooler is better than the other, or one user runs his/her fan speed slightly faster which results in one core clock step higher (15MHz).

To break it down simply,

Cooler / Temperature?
Auto Fan Speed / User Fan Speed?
Stock / Overclocked?
Power Limit / Voltage?
Benchmark / Resolution?

There are just far too many factors to accurately put a number on the average overclock. In my previous post I mentioned that about 1/3 of all cards sold right now (excluding waterblocked and hybrid cards) cannot even reach 1900MHz at full usage, because they are Non-A cards with a max power limit of 280W. For anyone with such a card the 1985MHz value is completely meaningless; they can never get close to it, unless they run a benchmark at 720p of course, but that is irrelevant as the card won't be used as intended.
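The argument above can be sketched as a toy model: the sustained clock is a function of power limit, temperature, and workload, quantized into 15 MHz boost bins. The 15 MHz step matches the boost behavior described in this thread; every threshold number below is an illustrative assumption, not a measurement:

```python
# Toy model of why one "average overclock" number is meaningless:
# the sustained clock depends on power limit and temperature.
# All caps/thresholds below are illustrative assumptions.

STEP = 15  # MHz per boost bin

def quantize(mhz: float) -> int:
    """Snap a clock down to the nearest 15 MHz boost bin."""
    return int(mhz // STEP) * STEP

def sustained_clock(request_mhz: float, power_limit_w: int, temp_c: int) -> int:
    clock = request_mhz
    # Illustrative power caps: lower limits force lower clocks under full load.
    if power_limit_w <= 250:
        clock = min(clock, 1650)
    elif power_limit_w <= 280:
        clock = min(clock, 1860)
    elif power_limit_w <= 330:
        clock = min(clock, 1980)
    # Assume roughly one bin lost per ~10c above 45c.
    bins_lost = max(0, (temp_c - 45) // 10)
    return quantize(clock) - bins_lost * STEP

# Same chip, same requested clock, very different outcomes:
print(sustained_clock(2100, 280, 75))  # Non-A BIOS, hot, on air
print(sustained_clock(2100, 380, 45))  # A GPU, 380W BIOS, on water
```

Two users with identical silicon would report clocks hundreds of MHz apart, which is exactly why a single HWBot average tells you nothing.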
 


AlbertoM said:


> Sell a much higher cost product and offer a minor gain in performance compared to previous releases.
> 
> Not mentioning the cards dying all over the place.
> 
> You guys are crazy for taking that jump into unknown, or have a lot of money to burn.


 
"_Minor gains_"? See the *original post* for the charts of the actual performance difference. You do get what you pay for, the increased cost scales almost perfectly with the performance gained.

And some of us actually play games other than just Doom, or are simply enthusiasts; we not only want but need the performance, for high-resolution gaming for example. And calling it "unknown" is very ignorant; the failure rates are extremely low, even if they are/were far higher than in previous generations.



AlbertoM said:


> Not to mention this RTX and DLSS... Totally useless marketing crap.
> 
> Reminds me of the time they tried to sell SLI as a viable tech. LOL
> 
> And I'm not a hater. But I'm not an idiot.


 
RTX (Raytracing) is a very important step in realistic visuals for future games, and DLSS has amazing potential to both speed up games and improve image quality, they just need time to flourish.

They _tried_ to sell SLI? They were extraordinarily successful in doing so; an amazingly simple technology that changed gaming for millions upon millions of people.

Right.
 


Thoth420 said:


> If I am not interested in Overclocking this card as it is more of a replacement as my 1080Ti is dying way sooner than expected does it matter if I get a non A chip? I just want stability in games don't really mind losing a bit of frames for it. Should I just go with a 2080 non Ti? I need to drive 3440 x 1440 G Sync so I tend to only look at the flagship gaming cards from Nvidia but I didn't pay attention to this gen because I was going to skip it.
> I was looking at the EVGA Black. Ref speeds and dual fan so I don't have to listen to vacuum cleaner.


 
Not overclocking today is shortsighted, because even the factory overclocks are barely overclocks; lots of wasted performance for no reason other than NVIDIA being... NVIDIA (and partners).

As an example, going from a 280W to a 330W card takes you from 69 to 73 FPS (or 6%) with the card running 75c at 45% fan speed, barely audible, and comfortably within the safety limits, not even close to them. It takes just seconds to set up; there is no excuse for not doing it.

The Non-A cards at 250W stock only run 1650MHz, while if you overclock, let's say, a Gigabyte Windforce OC 366W (one of the cheapest A cards), you will get close to 2000MHz; that's 20% faster. That'd be like going from 50 FPS to 60 FPS in 4K, or 120 FPS to 144 FPS in 1440p, for free, just higher temperature at the same noise level.

I don't know what else to say; running the Non-A cards stock at 250W is a joke. 1650MHz is so slow when the GPU is capable of 2150MHz on water (~30%, 46 to 60 FPS), and that is safely within the hard voltage limits and warranty.
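The FPS figures above follow directly from the clocks, assuming FPS scales roughly linearly with core clock (a simplification, since memory and CPU limits also matter):

```python
# Quick check of the clock-vs-FPS scaling claimed above, under the
# simplifying assumption that FPS scales linearly with core clock.

stock_non_a = 1650   # MHz, Non-A card at 250W stock
oc_a        = 2000   # MHz, overclocked A card (e.g. a 366W BIOS)

gain = oc_a / stock_non_a - 1
print(f"Clock gain: {gain:.0%}")

for fps in (50, 120):
    print(f"{fps} FPS -> ~{fps * (1 + gain):.0f} FPS")
```

The ~21% clock gain lines up with the 50-to-60 and 120-to-144 FPS examples in the post.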
 


Sheyster said:


> Seems like EVGA Black is going for $1149 if you can find one. I would just get this instead:
> 
> https://www.newegg.com/Product/Prod...0 ti&cm_re=rtx_2080_ti-_-14-500-433-_-Product
> 
> It's an A chip, reference PCB. You'll be able to flash the 380w BIOS and you'll also get a free copy of BF5.


 
Yeah, just check the *original post* for the list and pick what suits you (e.g. card dimensions, cooler size, overclocking needs).
 


Medusa666 said:


> Thank you for a very detailed and structured reply, it made a lot of sense even for a layman such as myself.
> 
> I'm seeing these speeds in games and benchmarks when I play, it is peak speeds of 2110-2115 MHz so it is most often not consistent except for only in lighter games like Overwatch, when I run the Unigine benchmark 8K optimized I get around 1950MHz with the same OC scanner curve applied.
> 
> I now attempted to manually overclock the card and entered +140 MHz in MSI Afterburner, I cancelled the test at scene 5 (just wanted to see if it started at all lol) but it stayed around 1995-2010 MHz with a peak of 2160 MHz.
> 
> Is it hard to use Nvflash? Does it void the warranty of the card?


 
Read the FAQ at the bottom of the *original post* if you haven't already by the way.

1950MHz is great in UNIGINE Superposition; that's higher than what cards with a 1755MHz boost clock reach (about 1905MHz). With a 330W power limit I reached a stable 1980MHz at 75c in 5K gaming & Superposition.

Just be aware that at 380W and with water cooling the card can sustain 2100MHz, almost 8% faster than your result, so there's a lot of untapped potential in these GPUs; the power limit is the main problem.

I'm about to put up a flash guide in the original post, so check back later; detailed instructions with pictures make it very fast and safe.
 


raider89 said:


> I don't think I got lucky with my 2080ti, Seems 2080 is the max I can hit in Timespy, can't even get top 100.


 
Power limit? Fan speed? Temperature? All these need to be known for the "2080" number to mean something.
 


VPII said:


> Thanks for directing me to the original post. I really enjoyed reading your Q&A section, very interesting and gave some good advise. I'm running a original 380 watt TDP Galax RTX 2080 Ti OC but unfortunately the cooling on this card is the limiting factor. At present I've set my max oc 1965 - 1980 using the MSI Afterburner OC curve to let the gpu vcore stay at 0.962v dropping the TDP to 80% dropping it to 240watt but I gain roughly a 10 to 15c temp drop. The card can do 2145mhz core, memory only +750mhz so memory maybe not the best, but unfortunately that core drops all the way to 2025 as with that power limit the card hits 80c or just below while running benchmarks. I prefer running the card with the lowered TDP as when I'm gaming SOTR or BFV (don't really play this, just use for testing as I got it with the card) the card will max out at 63 to 65c temp.


 
As mentioned above, if you reach 2025MHz in the heaviest benchmarks, that's great; my Gaming X Trio reached 1980MHz at 75c (330W) as a reference point. I'd say any air-cooled card running above 1995MHz (2010MHz and higher) in 5K, Time Spy Extreme or Superposition 8K Optimized is absolutely great. Cards with a 1755MHz boost clock reach about 1905MHz, so at 2010MHz you'd already be running 5.5% faster than the fastest factory overclock.

Summary; in [email protected] from some early testing I did back in November
250W Stock = 1650MHz (Palit 1545) Stock BIOS @ 75c Quiet
280W Stock = 1695MHz (Palit 1545) Stock BIOS @ 75c Quiet
300W Stock = 1875MHz (MSI 1755) Stock BIOS @ 75c Quiet
330W Stock = 1905MHz (MSI 1755) Stock BIOS @ 75c Quiet

280W OC = 1860MHz (Palit 1545) Stock BIOS @ 75c Quiet
330W OC = 1980MHz (MSI 1755) Stock BIOS @ 75c Quiet
380W OC = 2025MHz (MSI 1755) Flashed BIOS @ 75c Quiet

380W OC = 2100MHz (MSI 1755) Flashed BIOS @ 45c Loud
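The summary above can also be read as data; this small sketch just re-expresses the post's own measurements and computes each configuration's gain over the 250W stock baseline (only the tabulation is new):

```python
# The clock summary above as data: gain over the 250W stock baseline.
# All MHz values are the measurements quoted in the post itself.

results = {            # (power limit in W, state): sustained MHz at 75c, quiet
    (250, "stock"): 1650,
    (280, "stock"): 1695,
    (300, "stock"): 1875,
    (330, "stock"): 1905,
    (280, "oc"):    1860,
    (330, "oc"):    1980,
    (380, "oc"):    2025,
}

baseline = results[(250, "stock")]
for (watts, state), mhz in sorted(results.items()):
    print(f"{watts}W {state:5s}: {mhz} MHz (+{mhz / baseline - 1:.1%})")
```

It makes the overall point obvious: the flashed 380W BIOS is worth over 20% in sustained clock versus a stock 250W Non-A card at the same temperature and noise level.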
 


Coldmud said:


> This is rich, Gigabyte finally seems to have updated the windforce extreme bios to F2, yet the only change they log is "improve fan RPM" on a fanless card.





J7SC said:


> ..you mean 'waterforce', not 'windforce'...though now that they come with improved fan RPM
> BTW, have you loaded that new Bios ? Any performance differentials ?


 
If someone could test it that would be dandy, see if the power limit has changed in GPU-Z/NVIDIA BIOS.


----------



## VPII

zhrooms said:


> As mentioned above, if you reach 2025MHz in the most heavy benchmarks that's great, my Gaming X Trio reached 1980MHz at 75c (330W) as a reference point. I'd say any air card running above 1995MHz (2010MHz and higher) in 5K, Time Spy Extreme or Superposition 8K Optimized is absolutely great, 1755MHz boost cards reach about 1905MHz, so you'd already be running 5.5% faster than the fastest factory overclock if you run 2010MHz.
> 
> Summary; in [email protected] from some early testing I did back in November
> 250W Stock = 1650MHz (Palit 1545) Stock BIOS @ 75c Quiet
> 280W Stock = 1695MHz (Palit 1545) Stock BIOS @ 75c Quiet
> 300W Stock = 1875MHz (MSI 1755) Stock BIOS @ 75c Quiet
> 330W Stock = 1905MHz (MSI 1755) Stock BIOS @ 75c Quiet
> 
> 280W OC = 1860MHz (Palit 1545) Stock BIOS @ 75c Quiet
> 330W OC = 1980MHz (MSI 1755) Stock BIOS @ 75c Quiet
> 380W OC = 2025MHz (MSI 1755) Flashed BIOS @ 75c Quiet
> 
> 380W OC = 2100MHz (MSI 1755) Flashed BIOS @ 45c Loud


Well, as per your suggestion I ran Unigine Superposition 8K... The lowest clock with my max OC was 1995, but it only hit that 7 times throughout the bench, whereas 2010 and 2025 were mostly the lowest, with 5806 as the result. It can be seen in the bench screenshot, which also shows the GPU-Z clocks during the run.


----------



## Pepillo

Quick question: what temperatures can be expected as a normal average with a full-cover waterblock on a 2080 Ti while gaming?

Thanks


----------



## zhrooms

VPII said:


> Well as per your suggestion I ran Unigine Superposition 8K.
> 
> Lowest clocks with my max oc was 1995, but only 7 times throughout the bench where as 2010 and 2025 was mostly the lowest with a 5806 as the result.


 
Yes, as seen in the chart you shared: when the temperature increased it clocked down a step towards the end, from 2010MHz to 1995MHz.

1995-2010MHz is what everyone should expect with an A card on Air, and the right BIOS. At water cooling temperatures it's closer to 2085-2115MHz.

You can compare against my *Gaming X Trio* result, Superposition 1080p Extreme preset. To get the score below I used the Galax 380W BIOS and let some outside air help cool it, so the score is air-cooled at 10c ambient; the GPU still reached 51c at the end of the test.

Core Clock was set to +150, which made it run 2085-2100MHz throughout the test, with a short 2115MHz peak at the start when the GPU was under 35c. Memory at +1150.












Pepillo said:


> Quick question. What temperatures can be expected as normal average with a fullcover waterblock on 2080 TI gaming?


 
From reading this thread, the consensus seems to be between 35 and 45c under load; I can't say anything more specific, as people use different radiators, pumps, and waterblocks. It's safe to say it should not be above 50c, unless you run it for a while and heat up the loop.

To get the same noise level on air-cooled cards, the temperature might be around 75c, so water cooling runs 30-40c lower, which makes the card a few % faster, as it clocks up in steps of 15MHz based on temperature.
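A rough estimate of what those temperature-dependent boost bins translate to: the 15 MHz step is the behavior described above, while the assumption that one bin spans roughly 10c is illustrative, not measured:

```python
# Rough estimate of the boost-bin gain from water cooling.
# STEP_MHZ matches the 15 MHz steps described above;
# DEG_PER_BIN (~10c per bin) is an illustrative assumption.

STEP_MHZ = 15
DEG_PER_BIN = 10

def clock_gain_from_cooling(air_temp_c: int, water_temp_c: int) -> int:
    """MHz gained purely from running cooler, in whole boost bins."""
    bins = (air_temp_c - water_temp_c) // DEG_PER_BIN
    return bins * STEP_MHZ

print(clock_gain_from_cooling(75, 40))  # dropping 75c (air) to 40c (water)
```

Under these assumptions a 30-40c temperature drop is worth a handful of bins, on the order of the 30-45 MHz quoted earlier in the thread for moving to water.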


----------



## cg4200

I am sure this card is now worth the $999.00 plus the $120.00 Bitspower block, which I would never buy again; most of their contact depends on thermal pads at 1.5 and 1 mm.
I dropped out of school cuz recess was too short... and even I could design a better block; terrible machining, the only nice thing is that it looks good.
I will stick to EK in the future, or go Heatkiller and try them again. I have 360 GTX and 240 GTX radiators with dual DDCs at full RPM, gaming at 45c. My Titan Xp, shunted with EK, maxed at 40c.
At least I can hit a 2160 peak with +1000 on the memory, which is Micron. Will keep testing the memory higher; no artifacts yet.
Shame they separate chips into 300A and so forth; if there was anything close to this performance from anyone else, I would buy from them instead.
I feel like a scumbag giving them my money, but I need to push my 4K screen.
Hit 86th place on Fire Strike: https://www.3dmark.com/3dm/32069823?
Hopefully someone can figure out how to change the device ID from 1E04 to the other one in the nvflash tool, or override it; that would be much easier.


----------



## Pepillo

zhrooms said:


> From reading this thread the consensus seems to be between 35 and 45 under load, can't say anything specific as people use different radiators, pumps and then the waterblocks themselves. It's safe to say it should not be above 50, unless you maybe run it for a while heating up the loop.
> 
> To get the same noise level on air cooled cards, the temperature might be around 75c, so that's 30-40c lower water cooling, makes the card a few % faster as it clocks the card up in steps of 15MHz based on temperature.


Thanks. So if I am now on air above 60c, I can expect about 30 MHz more? Or perhaps, with the lower temperature and power draw, an additional 15-30 MHz on top of that?


----------



## raider89

zhrooms said:


> Power limit? Fan speed? Temperature? All these need to be known for the "2080" number to mean something.


I am using the 380w galax bios, I have my pump and fans all 100%. Power limit was 126% with afterburner. My temperature was 35-40c


----------



## VPII

zhrooms said:


> Yes, as seen in the chart you shared, when the temperature increased it clocked down a step towards the end, from 2010Mhz to 1995MHz.
> 
> 1995-2010MHz is what everyone should expect with an A card on Air, and the right BIOS. At water cooling temperatures it's closer to 2085-2115MHz.
> 
> You can compare against my *Gaming X Trio* result, Superposition 1080p Extreme preset, to get the score below I used the Galax 380W BIOS and let some outside air help cool it, so the score is air cooled, 10c ambient, GPU still reached 51c at the end of the test.
> 
> Core Clock was set to +150, which made it run 2085-2100MHz throughout the test, with a short 2115MHz peak at the start when the GPU was under 35c. Memory at +1150.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> From reading this thread the consensus seems to be between 35 and 45 under load, can't say anything specific as people use different radiators, pumps and then the waterblocks themselves. It's safe to say it should not be above 50, unless you maybe run it for a while heating up the loop.
> 
> To get the same noise level on an air-cooled card, the temperature might be around 75°C, so water cooling runs 30-40°C lower, which makes the card a few % faster, since the boost clock steps up in 15 MHz increments based on temperature.
> 
> Thanks


Thanks zhrooms, my result is actually not that bad seeing that I'm in the Southern Hemisphere, so I'm sitting with about 30°C+ ambient at present. I try to do my runs early in the morning when we have about 25°C ambient, but I still believe my run is good nonetheless.


----------



## Coldmud

J7SC said:


> ..you mean 'waterforce', not 'windforce'...though now that they come with improved fan RPM
> BTW, have you loaded that new Bios ? Any performance differentials ?


Oops, you're right, posted that after pulling a 16-hour shift.  Haven't had the chance to bench with it yet; only chugging along on 8 GB of RAM atm. Sold the old system but kept half of the RAM, waiting on a new kit arriving Tuesday.

I don't expect much from this updated BIOS really; they did the same fan-control updates for the regular 2080 Ti and the Extreme, which is at F3 atm, and I didn't note any significant changes on the former.

Just wondering what else they changed, if anything, because this BIOS took about a month longer than the ones for other versions of the card.


----------



## Sheyster

zhrooms said:


> Yeah just check the original post for the list, pick what suits you. (e.g. card dimensions, cooler size, overclocking needs)



The problem right now is availability. I didn't really want an MSI Duke OC; I actually wanted the Gaming X, but it literally sold out before I could pay for it; I already had one in the cart on Newegg. I got a decent card, so I don't regret it. That AMP card I linked him to was available to buy immediately and is a decent A-chip card.


----------



## J7SC

zhrooms said:


> Yes I know they're from HWBot and that's the issue, you can't base *anything* on those numbers today, years ago you could when the cards had unlocked voltage (even playing field).
> 
> HWBot is trying to simulate the average core clock over all those benchmarks, but it does not work when you introduce the 'boost' we have on the cards today. Most of those benchmarks in that list will not fully utilize this GPU, and when the card is not hitting the power limit because of it, the core clock will increase to numbers higher than it would during actual full usage. So right there the numbers are already skewed; for example, in one of the lower resolution benchmarks the card might go up to 2100 MHz at 1.093 V, but it's down to 1900 MHz at 0.900 V in Time Spy Extreme.
> (cut)
> 
> If someone could test it that would be dandy, see if the power limit has changed in GPU-Z/NVIDIA BIOS.



I certainly don't think that HWBot's results are foolproof, and clearly it's not always 'apples to apples' given various differences in BIOS, limits etc. Still, what interested me in a fairly large sample size of over 2,300 submissions were *two specific* items:

First, the delta between air and water, since folks who do LN2 (like myself, for many years) will likely run rare custom BIOSes and/or EVBot, VRM daughter-board kind of stuff, whereas air and water are typically closer to stock (even that isn't a perfect comparison, given 'chilled' water etc.). Besides, the test suite does include Unigine, and going to any of the HWBot benches will let you pick out the individual 2080 Ti results, usually with vendor and equipment descriptions.

But more important is the second item: the drop of almost 50 MHz in three weeks in the average air clock for the 2080 Ti... IMO, this could reflect the influx of lower-limit BIOSes and/or A vs. non-A chips. All the limitations you mentioned with HWBot would apply equally, so something has changed... while the average water clock went up a bit, air results are dropping significantly.





Coldmud said:


> Oops, you're right, posted that after pulling a 16-hour shift.  Haven't had the chance to bench with it yet; only chugging along on 8 GB of RAM atm. Sold the old system but kept half of the RAM, waiting on a new kit arriving Tuesday.
> 
> I don't expect much from this updated BIOS really; they did the same fan-control updates for the regular 2080 Ti and the Extreme, which is at F3 atm, and I didn't note any significant changes on the former.
> 
> Just wondering what else they changed, if anything, because this BIOS took about a month longer than the ones for other versions of the card.



...the length of time for the Gigabyte BIOS update probably just reflects sales volume per 2080 Ti model... they would be doing the highest-volume cards first, sort of like the Spectre and co. BIOS updates by mobo vendors. I have to check the size of the BIOS file, i.e. whether it is significantly different and whether the original power limit of 380 W is affected.

I hope to get to it in a couple of days. As you know from PMs, this is a very complex build (2 loops, 4 pumps, 5 rads, copper tubing...). I had everything tested out separately, but then when I mounted the mobo, something went weird... I ended up having to RMA the mobo (though MSI was quite good about it). For now, I am plugging in an MSI Pro Carbon... but unlike the MSI MEG Creation, it is not eATX, and the spacing between PCIe 1 and 3 (for the two 2080 Tis) is also different... still, all said and done, a complex but super-fun project


----------



## kx11

J7SC said:


> ...570x Corsair + Dremel tool = mucho space :thumbsups
> 
> 
> 
> I'm spying some Galax RAM as well, nice...looks like you found a whole Galax nest (don't tell me, eBay ?!)





For GALAX stuff, lelong.com.my is my go-to online store.


----------



## devilhead

Hi, so I got 2x ASUS 2080 Ti Turbo, cherry picked one, sold the other. I have watercooled the card (had some problems during waterblock installation, but now it works). Now the problem is the power limit: always 300 W, TDP 120%. So which BIOS should I flash, the 380 W Galax? My chip is TU102-300A-K1-A1


----------



## Coldmud

J7SC said:


> ...the length of time for the Bios update by Gigabyte to conclude probably just reflects sales volume per 2080 Ti model...they would be doing the highest-volume cards first, sort of like the Spectre and Co Bios updates by mobo vendors. I have to check the size of the Bios file, i.e. is it significantly different and is the original power limit of 380w affected.
> 
> I hope to get to it in a couple of days. As you know from PMs, this is a very complex build (2 loops, 4 pumps, 5 rads, copper tubing...). I had everything tested out separately but then when I mounted the mobo, s.th. went weird...I ended up having to RMA the mobo (though MSI was quite good about it). For now, I am plugging in a MSI Pro Carbon...but unlike the MSI Meg Creation, it is not an eATX, and also the spacing between PCIe 1 and 3 (for the two 2080 TIs) is different...still, all said and done, a complex but super-fun project



Damn.. somehow I glanced over the fact you were doing a full copper-tube build, that's crazy tbh! I ran into a similar issue a while back when I had quad SLI on a Rampage board with hardtubing and copper connectors; I just ended up buying the exact same board the day I returned the defective one, and sold the replacement I got back on eBay afterwards. Didn't wanna waste time rerouting anything.

Never had RMA issues with MSI; they accepted 4 boards I wrecked within a month trying to get 6 GPUs working on them back in the mining craze, and probably about 3 Titans back when voltage could be set as you pleased.

False alarm on the F2 BIOS btw; for me it was the very same BIOS that came installed. There is no way to tell what F1 or F2 is though, it's just Gigabyte naming. Got a "BIOS version does not match" from the main exe (they have their ROMs embedded into it); in the x64 folder there was another exe with nvflash.sys, ran that, it installed without issues, same dates though.


----------



## Shawnb99

EVGA has the FTW3 Hydro Copper in stock on their site. Tempted to buy it instead of waiting for a replacement.
I'd assume this would be an A chip, though how does one even tell?



Sent from my iPhone using Tapatalk


----------



## Sheyster

Shawnb99 said:


> EVGA has the FTW3 hydro copper in stock on their site. Tempted to buying it instead of waiting for a replacement.
> I’d assume this would be an A chip though how does one even tell?
> Sent from my iPhone using Tapatalk


The FTW3 (standard and Hydro) is an A chip. For other cards it just depends on the model; there is some guidance in the OP about this. It seems like most of the AIBs are offering both.


----------



## pewpewlazer

Shawnb99 said:


> EVGA has the FTW3 hydro copper in stock on their site. Tempted to buying it instead of waiting for a replacement.
> I’d assume this would be an A chip though how does one even tell?
> 
> 
> 
> Sent from my iPhone using Tapatalk


Strange, the FTW3 Hydro Copper is $250 more than the air-cooled FTW3, but the FTW3 Hydro Copper block is only $200 as a standalone item. A $50 premium to pre-install it AND not include an air cooler? Seems very strange to me, unless the FTW3 Hydro Copper cards are the best of the best bins?

It's definitely an "A chip", since it's factory overclocked, which is apparently not allowed on the "non-A chips" by NVIDIA. Plus, the FTW3 is EVGA's highest model card, on a custom PCB and all; it would be totally ridiculous for them to put the "non-A chips" on those.

Speaking of Hydro Copper, does anyone have the EVGA Hydro Copper block for the reference PCB cards? Model # 400-HC-1389-B1. It's cheaper than the EK block once you buy their silly backplate to go with it. I need to stop procrastinating and get a block ordered for this thing this weekend so I can crank the power limit up without going deaf!


----------



## kx11

started benchmarking using the unstable HOF Ai














According to 3DMark the GPU temps never passed 60°C during the benchmark. I couldn't get MSI AB to log those numbers, nor run the OSD, because it clashes with HOF AI really badly and messes with the GPU.


----------



## Shawnb99

Grabbed the FTW Hydro Copper.
Too impatient to wait for my replacement XC Ultra, plus this saves me installing the EK waterblock myself.
How does the Hydro Copper block stack up to other blocks?



Sent from my iPhone using Tapatalk


----------



## pewpewlazer

Shawnb99 said:


> Grabbed the FTW Hydro copper.
> Too impatient to wait for my replacement XC Ultra, plus this saves me installing the EK waterblock myself.
> How does the Hydro Copper block stack up to other blocks?
> 
> 
> 
> Sent from my iPhone using Tapatalk


I just ordered one (well, the reference PCB version of the block). $193 shipped for a water block *cringe*. That's almost as outrageous as these $1200 graphics cards!

Hopefully it arrives by next weekend. Now I just need to pick up another radiator and a bigger pump...


----------



## Shawnb99

pewpewlazer said:


> I just ordered one (well, the reference PCB version of the block). $193 shipped for a water block *cringe*. That's almost as outrageous as these $1200 graphics cards!
> 
> 
> 
> Hopefully it arrives by next weekend. Now I just need to pick up another radiator and a bigger pump...




$193 ouch. I have elite membership, whatever that is, so I got free shipping. Exchange rate is bad enough, I couldn’t afford more shipping.
Still need to pay taxes as well when it gets here.
Everything is so expensive nowadays 


Sent from my iPhone using Tapatalk


----------



## H4rd5tyl3

Does anyone have an ETA on the FTW3 or Strix cards? It's getting annoying waiting this long...


----------



## CptSpig

cg4200 said:


> I'm sure this card is now worth $999, plus the $120 Bitspower block, which I would never buy again; most of its contact depends on thermal pads, 1.5 and 1 mm.
> I dropped out of school cuz recess was too short.. and even I could design a better block. Terrible machining; the only nice thing is it looks good..
> I will stick to EK in the future, or go Heatkiller and try them again.. I have dual GTX 360 and dual GTX 240 rads with DDCs at full RPM, 45°C gaming.. My Titan Xp maxed at 40°C shunted with EK.
> At least I can hit a 2160 MHz peak with +1000 on the memory, which is Micron.. will keep testing the memory higher, no artifacts yet.
> Shame they separate chips by 300A and so forth; if there were anything close to this performance from anyone else I would buy it from them..
> I feel like a scumbag buying and giving them my money, but I need to push my 4K screen.
> Hit 86th place on Fire Strike: https://www.3dmark.com/3dm/32069823?
> Hopefully someone can figure out how to change the device ID from 1E04 to the other one in the nvflash tool, or can override it; that would be much easier.


I have an EK block on my Titan Xp and a Bitspower on my 2080 Ti. I have to say the Bitspower block is much better quality than the EK.


----------



## J7SC

Shawnb99 said:


> Grabbed the FTW Hydro copper.
> Too impatient to wait for my replacement XC Ultra, plus this saves me installing the EK waterblock myself.
> How does the Hydro Copper block stack up to other blocks?






pewpewlazer said:


> I just ordered one (well, the reference PCB version of the block). $193 shipped for a water block *cringe*. That's almost as outrageous as these $1200 graphics cards! Hopefully it arrives by next weekend. Now I just need to pick up another radiator and a bigger pump...



'grats to both of you - water-cooling the 2080 TIs is a good move in more ways than one. As to the cost of the graphics cards, you should check over w/ the guys @ Titan RTX...basically, you got yours at half price, more or less  
@pewpewlazer: ...noticed a lot of used w-c items for sale here https://www.overclock.net/forum/147...gear-blocks-radiators-fittings-fans-more.html


----------



## arrow0309

raider89 said:


> I am using the 380w galax bios, I have my pump and fans all 100%. Power limit was 126% with afterburner. My temperature was 35-40c


Yeah, mine clocks more or less the same, "unfortunately": 2070-2085 (it's never 2080), same temps, 34-39°C.
So I went back to the EVGA 338W BIOS, since in BF5 the 380W was useless.
However, I'd still want to try it again (next week when I'm back from winter holiday); I would like to see if GPU test 2 in Time Spy will power-limit this 338W BIOS and cut the clocks even lower than 2070 (or the score).


----------



## Thoth420

So I ended up opting for the MSI Gaming X Trio, as it was available at my local brick and mortar, and it has one hell of a cooler, so I imagine it can handle the slight OC it has onboard. I guess it was just my bad experience with a 7970 Lightning back in the day, and then the 1080 Ti FTW3, that had me thinking to just stay at reference clocks.

Anyway, I didn't realize that it takes 2x 8-pin as well as another 6-pin, unlike the standard 2x 8-pin the rest seem to have. The problem is I only have 2x 8-pin cables from CableMod, and I let a friend have my original PSU cables, so I don't have another 6-pin cable until I can get one delivered (I want it to match the other cables anyway). Can I use the card with just the 2x 8-pin for now, or will I have issues?


----------



## arrow0309

Thoth420 said:


> So I ended up opting for the MSI Gaming X Trio as it was available at my local brick and mortar and it has one hell of a cooler so I imagine it can handle the slight OC it has onboard. I guess it was just my bad experience with a 7970 Lightning back in the day and then the 1080 Ti FTW3 that had me thinking just stay reference clocks.
> 
> Anyways I didn't realize that it is 2x8pin as well as another 6pin unlike the standard 2x8pin the rest seem to have. The problem is I only have 2x8pin cables from cablemod and I let a friend have my original PSU cables so I don't have another 6pin cable until I can get one delivered. I want it to match the other cables anyways. Can I use this with just the 2x8pin for now or will I have issues?


Yep, you can use the 2x8pin.
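For reference, the spec power budgets behind that answer can be tallied. The figures below are the standard PCIe values (75 W slot, 75 W 6-pin, 150 W 8-pin); note some cards may still refuse to boot or throttle with a connector unplugged, so this is only the arithmetic, not a guarantee:

```python
# Standard PCIe power-delivery budgets (spec values, in watts).
SLOT, SIX_PIN, EIGHT_PIN = 75, 75, 150

# Gaming X Trio as designed: slot + 6-pin + 2x 8-pin.
full = SLOT + SIX_PIN + 2 * EIGHT_PIN   # spec headroom with all plugs
# With only the two 8-pins connected, the missing 6-pin costs 75 W:
without_six = SLOT + 2 * EIGHT_PIN      # still well above typical stock limits

print(full, without_six)  # → 450 375
```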


----------



## pegnose

What is the Fire Strike graphics score of an ASUS Strix OC on the stock cooler when maxed-out overclocked? Is it able to achieve >40,000 as the EVGA FTW3 does?


----------



## cg4200

CptSpig said:


> I have a EK block on my Titan Xp and a Bitspower on my 2080ti. I have to say the Bitspower block has much better quality than the EK.


Nice looking, I'll give you that, and I agree; I also liked how thick the plexi is. But that's about it... after looking at the machining (look at the pic), there is no reason for that not to be one cut with the proper tool, unless it cost too much for them. So I use a 1.5 mm thermal pad for 3 inductors and a 1.0 mm pad for the other 3, when they are flush to begin with. A little kid could literally do better. If I had looked at the pic and instructions online, I would never have bought the block. My 2 cents: 50°C gaming after 1 hour of Battlefield 5. I took the card out and double checked; everything is good except the temps.
My last 10 blocks have been EK; I will go back, and sell this one for 80 bucks on fleabay after I get my EK block and glue the LED strip back on (it came off taking the block out to double check the pads).


----------



## krizby

Hi Sheyster, took you a long time to join the 2080 Ti owners club, eh? I enjoyed your contributions to the 1080 Ti owners club; hopefully you will do the same here.

Anyway, to the OP: could you add a section on how to do an undervolt with the V/F curve to the first post? Undervolting seems to benefit the gameplay experience greatly on Turing.


----------



## J7SC

Thoth420 said:


> So I ended up opting for the MSI Gaming X Trio as it was available at my local brick and mortar and it has one hell of a cooler so I imagine it can handle the slight OC it has onboard. I guess it was just my bad experience with a 7970 Lightning back in the day and then the 1080 Ti FTW3 that had me thinking just stay reference clocks.
> 
> Anyways I didn't realize that it is 2x8pin as well as another 6pin unlike the standard 2x8pin the rest seem to have. The problem is I only have 2x8pin cables from cablemod and I let a friend have my original PSU cables so I don't have another 6pin cable until I can get one delivered. I want it to match the other cables anyways. Can I use this with just the 2x8pin for now or will I have issues?



Have you checked your GPU box for a Molex-to-PCIe connector as part of the accessory package? Many graphics cards come with one. Also, plenty of mid-range and high-end mobos have either a Molex or a PCIe 6/8-pin plug for additional juice, i.e. when running tri- or quad-SLI, so using a Molex-to-PCIe adapter is not a problem. If one is not already in your current or previous GPU accessory boxes, you can buy them in a lot of places, likely also your brick-and-mortar store. Alternatively, the pic below is from eBay... cost is US$2.89


----------



## Angrycrab

I have an EVGA 2080 Ti XC Ultra Hybrid with two Noctua 3k RPM 4-pin PWM fans in push/pull config. I'm trying to control both radiator fans using Afterburner, but the problem is one fan ramps up to max speed when connected to the GPU fan header using a 4-pin splitter. Not sure why both radiator fans can't be controlled using Afterburner. Anyone tried push/pull with the new EVGA Hybrids?


----------



## Hanks552

After playing BF5 with RTX on Medium, all other settings Ultra, at 3440x1440: 60-75 FPS.


----------



## Edge0fsanity

Anyone running the FTW3 BIOS on their XC or XC Ultra? Just flashed the Galax 380W BIOS onto my XC Ultra and it works fine, but I lost fan control on the second fan. Does the FTW3 BIOS fix this problem? The card is going on water in a few days, but I would like to do testing with a higher power limit.

Also, anyone know which NVLink bridges work with EK blocks? I just learned the hard way that the EVGA one does not. I don't like the NVIDIA one; the only other bridge that I like is the Aorus one.


----------



## sippo

I've changed the BIOS on my Zotac 2080 Ti AMP to the Galax 380W.

Results that I have:
Now:
Zotac + Galax 380W + EK block: https://www.3dmark.com/spy/5684194
Zotac + stock: https://www.3dmark.com/spy/5668364 (I didn't test the Zotac BIOS on water)

Old:
Previous Zotac card (memory corruption after some time): https://www.3dmark.com/spy/4861756

Temp on GPU ~60°C.

Is this good, or should I expect something more? (I'm new to GPU overclocking; the last time I OC'd a GPU it was a Voodoo 3000.)


----------



## gecko991

Nice. Just ordered a PNY XLR8 2080 ti for a new build. Should be interesting.


----------



## nycgtr

cg4200 said:


> Nice looking, I'll give you that, and I agree; I also liked how thick the plexi is. But that's about it... after looking at the machining (look at the pic), there is no reason for that not to be one cut with the proper tool, unless it cost too much for them. So I use a 1.5 mm thermal pad for 3 inductors and a 1.0 mm pad for the other 3, when they are flush to begin with. A little kid could literally do better. If I had looked at the pic and instructions online, I would never have bought the block. My 2 cents: 50°C gaming after 1 hour of Battlefield 5. I took the card out and double checked; everything is good except the temps.
> My last 10 blocks have been EK; I will go back, and sell this one for 80 bucks on fleabay after I get my EK block and glue the LED strip back on (it came off taking the block out to double check the pads).


I believe this was to make it work for both the 2080 and the Ti variant. I have the ref Bitspower block and I remember this part; tbh it works fine. So far no one has had contact issues with the Bitspower block, which isn't something that can be said for EK. I have 3 BP Ti blocks: the 2080 Ti Trio, 2 of the Strix, and 1 ref. No problems with any. If you want to go rebuy an inferior product with blue cream pads, go for it, lol.

Also, given that the Heatkiller, Aquacomputer, and XSPC Neo 2080 Ti ref blocks have all been released, I find it hard to believe that anyone who's well into the watercooling game would spend their own money on the Vector block, given its cost vs. the much better made competition.

My ref BP block when I removed it. I had no contact issues in the area you are referring to.


----------



## Asmodian

zhrooms said:


> That's exactly how it works... Both my cards increased voltage at stock boost when increasing the power limit, instead of the core clock fluctuating, it became a straight line (at the previous peak) when the voltage increased after increasing power limit, and when it passed this threshold, it kept raising voltage, thus temperature. Tested it extensively using Non-A and A GPU.


Are you sure you are not confusing running at a lower temperature, so needing a lower voltage to hit the same clock, with using extra voltage for no apparent reason? In all my testing on my FE 2080 Ti the voltage is always the same for the same clock at the same temperature. Any extra voltage is unnecessary only if the higher clock speed is unnecessary, or it is the result of higher temperatures changing the frequency v.s. voltage curve.

I also edit the curve so it never runs above 1.025 V, but these changes are independent of the power limit. The power limit simply causes the GPU to move down the clock vs. voltage curve, dropping to a lower clock and voltage pair.

Perhaps you are attempting to point out that the clock v.s. voltage curve, at least at 28°C on my GPU, is flat from 1.050V to 1.100V? That is true, but on my card it always uses the lowest voltage within a flat spot in the curve. At default I sometimes see voltages over 1.050 V because the flat spot starts at a higher voltage as temperature rises, but it never uses an unnecessarily higher voltage according to the BIOS defined surface for clock v.s. voltage v.s. temperature.

Using the power limit to lower the max voltage is a confusing way to think about it and it will only actually do so with particular loads. Simply edit the frequency v.s. voltage curve if you want to change the max voltage and use the power limit to lower the max temperature.
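That selection behavior can be sketched as a small model: the card walks the clock-vs-voltage curve and, within a flat spot, always uses the lowest voltage that reaches the clock. The curve points below are made-up illustrative values, not read from any BIOS:

```python
# Hedged sketch of a V/F curve with a flat spot from 1.050 V to 1.100 V.
# (voltage V, max clock MHz) points, sorted by ascending voltage.
CURVE = [(0.900, 1900), (0.950, 1950), (1.000, 2010),
         (1.025, 2040), (1.050, 2085), (1.075, 2085), (1.100, 2085)]

def operating_point(max_clock_budget: int):
    """Highest clock within the budget, at the lowest voltage reaching it."""
    best = None
    for v, clk in CURVE:
        # Strict '>' means the first (lowest-voltage) entry wins inside a
        # flat spot, matching the observed behavior.
        if clk <= max_clock_budget and (best is None or clk > best[1]):
            best = (v, clk)
    return best

print(operating_point(2085))  # → (1.050, 2085): flat spot collapses to its left edge
print(operating_point(1950))  # → (0.950, 1950): power-limited, lower clock AND voltage
```

A lower power limit effectively shrinks the affordable clock budget, so the card lands on a lower clock/voltage pair; editing the curve itself is the direct way to cap voltage.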


----------



## Sheyster

krizby said:


> Hi Sheyster, took you a long time to join 2080TI owners club eh ? I enjoyed your contributions to the 1080TI owners club, hopefully you will do the same here.


Thanks but I think you mean the Titan X? I never owned a 1080 Ti card. My last 3 cards have been Titan X Maxwell (SLi), Titan X Pascal, and Titan Xp which I just sold recently.


----------



## CptSpig

EarlZ said:


> @CptSpig
> 
> Turned off Windows defender and ansel, still getting the same issue


What is your FPS in Battlefield V?


----------



## CptSpig

EarlZ said:


> Seems like it is caused by 417.XX drivers rolling back to 416.94 has fixed the issue ( for now )


Hey Earl, I have 417.35 installed with no issues.


----------



## Jpmboy

Sheyster said:


> Now that my kid is done with college and out of the house, this **** is cheap in comparison to that!


^^ this. :thumb:


cg4200 said:


> Nice looking, I'll give you that, and I agree; I also liked how thick the plexi is. But that's about it... after looking at the machining (look at the pic), there is no reason for that not to be one cut with the proper tool, unless it cost too much for them. So I use a 1.5 mm thermal pad for 3 inductors and a 1.0 mm pad for the other 3, when they are flush to begin with. A little kid could literally do better. If I had looked at the pic and instructions online, I would never have bought the block. My 2 cents: 50°C gaming after 1 hour of Battlefield 5. I took the card out and double checked; everything is good except the temps.
> My last 10 blocks have been EK; I will go back, and sell this one for 80 bucks on fleabay after I get my EK block and glue the LED strip back on (it came off taking the block out to double check the pads).


the block is cut to fit the 2080 and 2080Ti. It cools the card very well.


nycgtr said:


> I believe this was to make it work for both the 2080 and the Ti variant. I have the ref Bitspower block and I remember this part; tbh it works fine. So far no one has had contact issues with the Bitspower block, which isn't something that can be said for EK. I have 3 BP Ti blocks: the 2080 Ti Trio, 2 of the Strix, and 1 ref. No problems with any. If you want to go rebuy an inferior product with blue cream pads, go for it, lol.
> 
> Also, given that the Heatkiller, Aquacomputer, and XSPC Neo 2080 Ti ref blocks have all been released, I find it hard to believe that anyone who's well into the watercooling game would spend their own money on the Vector block, given its cost vs. the much better made competition.
> 
> My ref BP block when I removed it. I had no contact issues in the area you are referring to.


Yeah - EK tends to make some close "cuts" on their blocks; so far I've been pretty pleased with the Bitspower GPU blocks. Rather than wait for RTX Titan blocks, I put 2 EK Vector 2080 Ti blocks on the cards.. so far okay. Bitspower was talking Jan/Feb distribution. The nickel plating on the BP blocks is very good, a mirror finish. EK is still good, but they have more competition now, which is a good thing.


----------



## kot0005

nycgtr said:


> I believe this was to make it work for both the 2080 and the Ti variant. I have the ref Bitspower block and I remember this part; tbh it works fine. So far no one has had contact issues with the Bitspower block, which isn't something that can be said for EK. I have 3 BP Ti blocks: the 2080 Ti Trio, 2 of the Strix, and 1 ref. No problems with any. If you want to go rebuy an inferior product with blue cream pads, go for it, lol.
> 
> Also, given that the Heatkiller, Aquacomputer, and XSPC Neo 2080 Ti ref blocks have all been released, I find it hard to believe that anyone who's well into the watercooling game would spend their own money on the Vector block, given its cost vs. the much better made competition.
> 
> My ref BP block when I removed it. I had no contact issues in the area you are referring to.


For me the Bitspower block is $300 shipped with no backplate; EK is $200 for the block and $50 for the backplate. Aquacomputer is around $350 for block and backplate. Almost everyone has increased their prices, not just GPUs: mobos, RAM, CPUs, new 4K HDR monitors..

The only thing going down is SSD prices.

I'd say the BP and AC stuff is premium, EK and Heatkiller are balanced for price and quality, while XSPC and Alphacool are low end.


----------



## nyk20z3

Hopefully they announce it at CES, but the price tag will be ridiculous I am sure, especially after throwing CF in the mix.


----------



## J7SC

Edge0fsanity said:


> anyone running the ftw3 bios on their xc or xc ultra? Just flashed the galax 380w bios onto my xc ultra and it works fine but i lost fan control on the second fan. Does the ftw3 bios fix this problem? Card is going on water in a few days but i would like to do testing with a higher power limit. *Also, anyone know which nvlink bridges work with EK blocks? I just learned the hard way that the evga one does not. I don't like the nvidia one, only other bridge that i like is the aorus one.*



I have both the 3-slot and 4-slot ROG ASUS NVLink bridges, and they fit the Giga Aorus Xtreme 2080 Ti cards I have with factory waterblock perfectly... not sure about them fitting EK blocks, though, but I recall reading that they are among the better fitting bridges.

Speaking of water-blocks, here's a quick look at 2080 TI reference water-block prices, mostly at source (i.e. in-house shop) unless otherwise noted, and using today's exchange rates....also, does not include back-plates, taxes, duties, shipping, handling etc

Watercool (DE): US$124.40 to US$181.60 (Strix?)
Alphacool (DE): US$134.80
Aquacomputer (DE): US$188.31
EK Vector: US$159.90
Bykski: US$99.00 to US$107.50 (AliExpress) / US$129.90 (Primochill)
Bitspower: US$150.00 to US$170.00

As to performance, I'm still waiting for a good comparison test for 2080 TIs, but as posted before, here is a comp from thermalbench for 1080 TI blocks which can provide some guidance, for now. Keep in mind that in the graph, differences look a bit more dramatic due to the scale....actual delta between lowest and highest temp on core, for example, is about 3 C...


----------



## VPII

Given that I live somewhere waterblocks for the RTX 2080 Ti will take a while to become available, I was thinking, with some advice from a friend working in the industry, of getting an NZXT Kraken G12 with a Corsair H110 to fit to my RTX 2080 Ti. Not too sure how it would work, but from what I understand and have read it will fit; I just need to see how I'll go about cooling the VRM where needed. Any ideas / advice from some of the esteemed forum members?


----------



## J7SC

VPII said:


> Taken that I live where waterblocks for the RTX 2080 ti will take a while before being available, I was thinking, with some advice from a friend working in the industry, to get a NZXT Kraken G12 with a Corsair H110 to fit to my RTX 2080 Ti. Not too sure how it would work, but from what I understand and read it will fit, just need to see *how I'll go about cooling the VRM where needed. Any ideas / advice from some of the esteemed forum members?*



...not sure about the 'esteemed forum member' bit , but the standard method is to slap on a good 120 mm fan (i.e. Noctua etc.) and secure it with rubber bands... no, really. Just make sure there are some rubber standoffs, also for vibration control. For a slightly better look that is less, ahem, street-mod-rod, you can check whether you can reuse some of the pre-existing mounting holes in the GPU PCB directly over the VRM, or use brackets, like this arrangement here (@ guru3d forum). Needless to add, it is a big no-no to drill new holes into your GPU PCB...


----------



## kot0005

RIP AMD, nvidia just announced adaptive sync support. Where are all the haters now ?


----------



## Zammin

kot0005 said:


> RIP AMD, nvidia just announced adaptive sync support. Where are all the haters now ?


You mean we can run freesync monitors with Nvidia cards in the future? That would be fantastic. There are some really good monitors out there with freesync at good prices. Got a link?

EDIT: nevermind, found the thread here on OCN. I see it's a list of 12 monitors at this point in time. Too bad some of the Samsung quantum dot models aren't on the list. Still though, it's a step in the right direction.


----------



## VPII

J7SC said:


> Needless to add that it is a big no-no to drill new holes into your GPU PCB...


Thanks my brother.... needless to say I laughed reading this, but I understand that sometimes you'll get those who think they can. I'm just waiting on the shop to respond to my request, then I'll place the order; hopefully I'll have it here by Wednesday, Thursday at the latest. I really love this card; it's actually doing pretty well seeing that I'm running it with an overclock but an 80% TDP limit, so in effect 240W total, just to keep it cool.


----------



## J7SC

VPII said:


> Thanks my brother.... needless to say I laughed reading this, but I understand that sometimes you'll get those who think they can. I'm just waiting on the shop to respond to my request then I'll place the order and will hopefully have it here by Wednesday, at the latest Thursday. I really love this card it is actually doing pretty well seen that I'm running it with an overclock but 80% TDP limit so in effect 240Watt total just to keep it cool.



 :laugher:


----------



## Hanks552

somebody doing hardwaremod?
i need more POWERRRRRRRRR
380w is not enough lol


----------



## kot0005

Just checked my Samsung memory. It can run at 1200MHz; didn't really want to try higher.


----------



## ENTERPRISE

Just received my Gigabyte Aorus 2080 Ti Xtreme cards. Have people had success or any issues using the 380W Galax BIOS on these cards, given that the Aorus Xtreme has a custom PCB?

*edit* 

As the Galax 380W cards have 16 phases and my cards have 19 phases, it may not work. I may need one of the Galax HOF BIOSes, as those cards have 19 phases. Maybe this one?: https://www.techpowerup.com/vgabios...X+2080+Ti&interface=&memType=&memSize=&since= 

Cheers !


----------



## ENTERPRISE

On a separate note, I have just cleaned this thread. Please do not post if you have nothing of use to contribute. If you are posting just to judge others on their purchasing decision, then please do not bother. Any such future behavior will be met with a thread ban and an official warning/infraction.


----------



## dantoddd

Hi,

guys, I'm thinking of buying one of these. How is the 4K performance? Can you max all games in 4K at a playable frame rate, greater than 50 fps?


----------



## Zammin

ENTERPRISE said:


> On a separate note, I have just cleaned this thread. Please do not post if you have nothing of use to contribute. If you are posting just to judge others on their purchasing decision, then please do not bother. Any such future behavior will be me with a thread ban and an official warning/infraction.


I was hoping this would happen, thank you :thumb:


----------



## Zammin

dantoddd said:


> Hi,
> 
> guys I'm thinking of buy one of these. How is the 4K performance. Can you max all games in 4K at a playable frame rate, greater than 50 fps?


"Max all games in 4K" covers an incredibly wide range of games. There will still be some titles that punish graphics cards and may dip below 60 at times, but these are very powerful GPUs, the most powerful gaming model at this point in time. If you want the best right now, there aren't really any alternatives. If you play at 4K it's a good choice, and if you do come across a game that dips below 60 at max settings in 4K, just turn down a few settings you won't notice and you should be good to go.

I play on a 165hz 1440p monitor and at least in the games I play it can sustain very high frame rates at pretty high settings most of the time.


----------



## zhrooms

Sheyster said:


> The problem right now is availability. I didn't really want an MSI Duke OC, I actually wanted the Gaming X. It literally sold out before I could pay for it. Already had one in the cart on Newegg. I got a decent card so don't regret it.  That AMP card I linked him to was available to buy immediately and is a decent A chip card.


 
_*Decent*_

Started trying to figure out the best cards, so far it looks to be,

*EVGA XC Ultra (2 Fan, 2.75 Slot)
Gainward Phoenix GS (3 Fan, 2.5 Slot)
MSI Duke OC (3 Fan, 2.75 Slot)
Zotac AMP (3 Fan, 2.5 Slot)
Zotac Triple Fan (3 Fan, 2.5 Slot)*

All are similarly priced 300A cards with massive coolers, which allow for high overclocks or very low noise, and reference PCBs (2x8-Pin) so that most waterblocks fit and you can flash the Galax 380W BIOS.

The Palit Dual (2 Fan, 2 Slot, Reference PCB) runs at 60°C while the Gaming X Trio (3 Fan, 2.75 Slot, Custom PCB) runs at 51°C; that almost 10°C difference is huge in terms of overclocking/noise level.

The MSI Gaming X Trio costs €200 more in Germany than the MSI Duke OC, definitely not worth it in terms of price/performance; the Duke cooler is close in size and only 26W lower in power limit than the Gaming X Trio after flashing the BIOS. Realistically we're talking about maybe a 30MHz difference at best, so 1.5% faster (less than 1 FPS out of 60) for a 16% higher price on the Gaming X Trio.
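The price/performance math above can be sketched in a couple of lines (the absolute prices and clocks here are illustrative assumptions chosen to match the stated percentages, not quoted figures):

```python
# Hypothetical figures illustrating the Duke OC vs Gaming X Trio comparison
duke_price, trio_price = 1250.0, 1450.0   # EUR, assumed street prices (€200 gap)
duke_clock, trio_clock = 2000, 2030       # MHz, assumed sustained boost (~30 MHz gap)

price_delta = (trio_price - duke_price) / duke_price   # ~0.16  -> 16% more expensive
perf_delta = (trio_clock - duke_clock) / duke_clock    # 0.015  -> 1.5% faster
```

At 60 FPS, a 1.5% uplift is under 1 FPS, which is the point being made.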

I'm far from a final conclusion but it's a start.
 


J7SC said:


> The drop of almost 50 MHz in three weeks with the average air MHz for 2080 Ti...IMO, this could reflect the influx of lower-limit Bios and/or A/non-A.


 
That is a possibility, yes. There is no indication of which card is an A, so lots of people buy them blindly. Partners trick you by offering software overclocking up to, say, 1560MHz; the boost is technically 1545MHz (non-A) but they advertise it as an overclocked 1560MHz. Scummy.
 


devilhead said:


> So which BIOS should I flash? 380W Galax? My chip is TU102-300A-K1-A1


 
Yes, that is the highest power limit BIOS that works on essentially all cards.
 


Shawnb99 said:


> I’d assume this would be an A chip though how does one even tell?


 
As mentioned above, partners trick you by advertising an overclock that is really applied through their software, not at the factory, so the only real way to tell is to dig around on each card's website, or simply check the *Original Post*.
 


Sheyster said:


> FTW3 (standard and hydro) is an A chip. It just depends on the model. There is some guidance in the OP about this. Seems like most of the AIB's are offering both.


 
Some? It's complete (some odd brands excluded) and accurate.
 


pewpewlazer said:


> Seems very strange to me, unless they FTW3 hydro copper cards are the best of the best bins?
> 
> Plus the FTW3 is EVGAs highest model card on a custom PCB and all, would be totally ridiculous for them to slap the "non-A chips" on those.


 
There is nothing pointing to any partner binning their cards, aka they don't hand-pick. 

And you think it's ridiculous? Well, so do I, because it's actually real: one of the three ASUS Strix variants uses the non-A GPU, costs €130 more than the cheapest A card in Germany, but is far slower. That's €240 ($275) more than the cheapest non-A card, so yeah, that's one expensive cooler.
 


H4rd5tyl3 said:


> Does anyone have an ETA on the FTW3 or Strix cards? It's getting annoying waiting this long...


 
Why do you want one? They don't overclock better than other cards, on air or water. The MSI Gaming X Trio destroys both of them by the way, if you're hell-bent on having a custom-PCB card.
 


Thoth420 said:


> So I ended up opting for the MSI Gaming X Trio as it was available at my local brick and mortar and it has one hell of a cooler so I imagine it can handle the slight OC it has onboard.
> 
> Anyways I didn't realize that it is 2x8pin as well as another 6pin unlike the standard 2x8pin the rest seem to have. The problem is I only have 2x8pin cables from cablemod and I let a friend have my original PSU cables so I don't have another 6pin cable until I can get one delivered. I want it to match the other cables anyways. Can I use this with just the 2x8pin for now or will I have issues?





arrow0309 said:


> Yep, you can use the 2x8pin.





J7SC said:


> Have you checked your GPU box for a Molex-to-PCIe connector as part of the accessory package?


 
MSI Gaming X Trio is the second best card you can buy, number one is the Galax Hall of Fame.

And *no*, you can *not* run the card with just 2x8-Pin, I tested it when I had the card.

You do get a *6-pin to 8-pin PCIe power adapter* with the card, but that won't help you.
 


pegnose said:


> What is the Firestrike graphics score of an ASUS Strix OC on stock cooler if max. overclocked? Is it able to achieve >40.000 as the EVGA FTW3 does?


 
The Strix OC has a much lower power limit; the FTW3 will always beat it because of that, unless you flash a BIOS on the Strix.
 


krizby said:


> Can you add a section on how to do the undervolt with the VF curve? As undervolting seems to benefits gameplay experience greatly on Turing.


 
_Greatly_ is an exaggeration; undervolting _can_ help, but realistically only by 15-30MHz in the best case. Though some people do prefer lower temperatures/noise in general.

I'll look into it once I get hold of a card again (haven't had one since November).
 


Hanks552 said:


> SLI is good, maybe not for gaming that much


 
SLI is not good, it's amazing. See these two posts; *#1* & *#2*
 


kot0005 said:


> RIP AMD, nvidia just announced adaptive sync support. Where are all the haters now ?





Zammin said:


> You mean we can run freesync monitors with Nvidia cards in the future? That would be fantastic. There are some really good monitors out there with freesync at good prices. Got a link?
> 
> *EDIT:* Nevermind, found the thread here on OCN. I see it's a list of 12 monitors at this point in time. Too bad some of the Samsung quantum dot models aren't on the list. Still though, it's step in the right direction.


 
Yeah, only 12 out of 400 monitors passed their _tests_, but it was also mentioned that you can try to force it on your FreeSync monitor, so even though it's not officially supported (only the current 12 are) you might still be able to use it! Which is great news; from here on out, far more (new) monitors on the market might support G-Sync.
 


kot0005 said:


> Just checked my Samsung memory, It can run at 1200Mhz, didnt really wana try higher.


 
You mean +1200? Why would you stop trying for higher? See where the limit is.
 


Asmodian said:


> Are you sure you are not confusing running at a lower temperature, so needing a lower voltage to hit the same clock, with using extra voltage for no apparent reason?
> 
> I also edit the curve to so it never runs above 1.025 V but these changes are independent of the power limit.
> 
> Perhaps you are attempting to point out that the clock v.s. voltage curve, at least at 28°C on my GPU, is flat from 1.050V to 1.100V? That is true, but on my card it always uses the lowest voltage within a flat spot in the curve. At default I sometimes see voltages over 1.050 V because the flat spot starts at a higher voltage as temperature rises, but it never uses an unnecessarily higher voltage according to the BIOS defined surface for clock v.s. voltage v.s. temperature.
> 
> Using the power limit to lower the max voltage is a confusing way to think about it and it will only actually do so with particular loads. Simply edit the frequency v.s. voltage curve if you want to change the max voltage and use the power limit to lower the max temperature.


 
Yes, identical temperature. 

Yes, editing curve changes everything. I'm talking stock / overclocking without the frequency editor, which most people use.

Basically the stock curve is all over the place; it doesn't scale clock speed with voltage linearly, and that is the main issue. Even keeping the same core clock, the voltage increased when upping the power limit, because for some reason the card wanted it.

One example I can give: at 300W the card ran 1875MHz @ 0.987V, and at 330W it ran 1890MHz @ 1.031V, only a single core clock step higher but at far higher voltage. This is disastrous for temperature, and will eventually result in the card downclocking back to the same core clock.

So raising your power limit a lot, from 280W to 380W, will not only help the card reach its boost clock but far exceed it, yet the core clock and voltage won't scale appropriately; as you say yourself, you've noticed fluctuating voltage when it wasn't necessary. I do wonder if there's a clear difference between BIOSes.

I really wish I could test this in more detail, but I don't have a card anymore; it's much easier to explain with screenshots of the actual voltage/frequency/temperature graphs.
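To see why that voltage jump is so costly, here is a rough sketch using the common CMOS rule of thumb that dynamic power scales with frequency times voltage squared (an approximation for illustration, not a measured figure for this card):

```python
# Rule of thumb: dynamic power ~ frequency * voltage^2 (constant factors cancel in ratios)
def rel_power(f_mhz, volts):
    return f_mhz * volts ** 2

p_300w = rel_power(1875, 0.987)   # behavior observed at the 300 W limit
p_330w = rel_power(1890, 1.031)   # behavior observed at the 330 W limit

power_ratio = p_330w / p_300w     # ~1.10: roughly 10% more dynamic power
clock_ratio = 1890 / 1875         # ~1.008: for only 0.8% more clock
```

So one 15MHz step bought with an extra 44mV costs on the order of ten times its proportional share of power and heat, which is why the card eventually clocks back down anyway.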


----------



## dantoddd

Zammin said:


> "Max all games in 4K" covers an incredibly wide range of games. There will still be some title's that punish graphics cards and may still dip below 60 at times, but these are very powerful GPUs. The most powerful gaming model at this point in time. If you want the best right now, there aren't really any alternatives. If you play at 4K it's a good choice. If you do come across a game that is dipping below 60 at max settings in 4K just turn down a few settings that you won't notice and you should be good to go.
> 
> I play on a 165hz 1440p monitor and at least in the games I play it can sustain very high frame rates at pretty high settings most of the time.


Thanks. Basically I was going to buy a 1080 Ti for 699, but the supplier said they don't have 1080 Tis anymore, which meant I had to buy a 2080 or a 2080 Ti. I decided to go for the 2080 Ti since it seemed the better choice. Now I need to buy a new monitor as well. I have a 1080p monitor, but with a 2080 Ti I figured I need something with more pixels. I know the 2080 Ti will handle anything at 1440p with ease, but I'm not sure whether there is any point in going for a 4K display.

I only play single player games like AC, Tomb Raider, and Total War. I do play FPS games if there are interesting ones out there. So I'm more than happy when games hit 50-60 FPS, since that's more than smooth for those games.

So this is my dilemma: buy a high refresh rate 1440p monitor, or get a 4K 60Hz IPS.


----------



## Shawnb99

dantoddd said:


> Thanks. basically i was going to buy a 1080 Ti for 699 but the supplier said they don't have 1080 Tis anymore. Which meant i had to buy 2080 or a 2080 Ti. decided to go for 2080 Ti cause it seemed a better choice. Now i need to buy a new monitor as well. I have a 1080p monitor but with a 2080 Ti i figured i need something with more pixels. I know 2080 Ti will handle anything at 1440p with ease, but i'm sure whether there is any point in going for 4K display.
> 
> 
> 
> I only play single player games like AC, Tomb Raider, and total war. I do play FPS games if there are interesting ones out there. So I'm more than happy when games hit 50-60 FPS cause thats more than smooth for those games.
> 
> 
> 
> So this is my dilemma, buy a high refresh rate 1440p monitor or get 4K 60Hz IPS.




I’d say go for 1440p. Or even a widescreen monitor before making the jump to 4K. 


Sent from my iPhone using Tapatalk


----------






## dantoddd

Shawnb99 said:


> I’d say go for 1440p. Or even a widescreen monitor before making the jump to 4K.
> 
> 
> Sent from my iPhone using Tapatalk


Can you please explain why?


----------



## dante`afk

could anyone explain the following please?

Installing the waterblock for the first time 2 months ago, fresh TIM etc.: 37-38°C.
A month later (under the same conditions: ambient, water, etc.) temps start at 39-40°C.


TIM issue?


----------



## VPII

dante`afk said:


> could anyone explain the following please?
> 
> installing waterblock first time 2 months ago, fresh TIM etc . 37-38c.
> a month later (under same conditions, ambient, water etc) temps start to be 39-40.
> 
> 
> TIM issue?



Hi there..... you have a 2°C variance, not really that much. In all honesty there are a number of things that could contribute to it, so it's difficult to say. Not sure where you live, for starters, but I'll be honest with you when I say that comparing temps from October-February with temps from April-August gives a massive swing on my side of the world; that is not an exaggeration. South Africa, that is.


----------



## profundido

dantoddd said:


> Can you please explain why


High refresh rate 1440p for sure. The reasons are really simple:


1. It's a better experience than 4K @ 60Hz. You might not see this at first, but trust me, after 1 week of playing around the 100fps mark there is no way back to 60fps.

2. The 2080 Ti is perfect for 1440p because it can maintain 100+ fps in AAA titles at the highest possible detail. All that together is a serious improvement in play comfort. At 4K, however, this card will only do around 80fps average in the newest titles at max details (referring to the SOTTR demo), so it doesn't quite shine there yet, although it's more than playable. The next-gen top card should be better suited for that if we get at least a 30-40% performance bump.

3. Going 4K @ 60Hz would be a stupid move technology-wise at this point in time, while 4K @ 144Hz is still way too expensive unless you simply have the cash, and only 2 such monitors exist as of now. Compared to this, 1440p @ 144Hz is standard now, with cheaper prices and loads of choices available.


----------



## dante`afk

VPII said:


> Hi there..... you have 2c variance , not really that much. In all honesty there are a number of things that would contribute to it so difficult to say. Not sure where you live, for starters, but I'll be honest with you when I say comparing temps October to February to temps April to August you'll get a massive swing on my side of the world it is not exaggerating. South Africa that is.


well it's definitely colder now than in Oct/Nov.


----------



## Thoth420

Thanks zhrooms 

I have a cable coming from CableMod. There was a splitter, but I'll just wait as I ordered a few other toys and a fresh PSU, same model; this one has been beaten on for almost two years, so time to set it aside for a frankenbuild.

This card is a massive lol!!!


----------



## hdtvnut

Heads up - 2080ti ROG Strix (3-fan deep) available from Newegg right now.


----------



## ThrashZone

dante`afk said:


> could anyone explain the following please?
> 
> installing waterblock first time 2 months ago, fresh TIM etc . 37-38c.
> a month later (under same conditions, ambient, water etc) temps start to be 39-40.
> 
> 
> TIM issue?


Hi,
Driver/ bios ?
Seems mobo bios are getting a lot warmer lately


----------



## keikei

ASUS ROG MATRIX GeForce RTX 2080 Ti 



> Rocking the ROG MATRIX branding, I was genuinely expecting a standard water cooling design but ASUS proved me wrong by implementing a very interesting cooling methodology known as Infinity Loop. Now, this has been done before for both CPU and GPUs but ASUS’s solution, an *AIO liquid cooling solution built within the shroud* claims to offer performance similar to 240mm radiators which makes it the highlight of this part.


----------



## dantoddd

profundido said:


> high refresh rate 1440p for sure. Reasons are really simple:
> 
> 
> 1. it's a better experience than 4K @ 60Hz. You might not see this at first but trust me after 1 week of playing around the 100fps mark there is no way back to the 60fps
> 
> 2. The 2080ti is perfect for 1440p because it can maintain 100+ fps in AAA titles on highest possible detail. All that together is a serious improvement in play comfort level. at 4K however this card will only do around 80fps average in newest titles max details (referring to SOTTR demo) so it doesn't quite shine yet there although more than playable. Next gen top card should be more suited for that if we get at least 30-40% performance bump.
> 
> 3. going 4K @ 60Hz would be a stupid move technology wise at this point in time while 4K @ 144Hz is still way too expensive unless you simply have the cash and there exist only 2 as of now. Compared to this scenario a 1440p @ 144Hz is standard now with cheaper prices and loads of choices available.


Thanks


----------



## zhrooms

dantoddd said:


> Now i need to buy a new monitor as well. I have a 1080p monitor but with a 2080 Ti i figured i need something with more pixels. I know 2080 Ti will handle anything at 1440p with ease, but i'm sure whether there is any point in going for 4K display.
> 
> I only play single player games like AC, Tomb Raider, and total war. I do play FPS games if there are interesting ones out there. So I'm more than happy when games hit 50-60 FPS cause thats more than smooth for those games.
> 
> So this is my dilemma, buy a high refresh rate 1440p monitor or get 4K 60Hz IPS.


 
How is nobody mentioning *DSR*? It's like the best thing ever; you can play in 8K on a 1080p monitor. As you say yourself, the RTX 2080 Ti destroys any game at 1440p, which means you can play most games in 4K at a far higher refresh rate than 60, and with DSR you can do that without owning a 4K monitor. Just set up DSR in the NVIDIA Control Panel, it takes about 5 seconds, and you can then play any game at a 4K to 5K resolution at a 144Hz refresh rate (image quality will be immensely improved regardless of whether you have a true 4K monitor or not).
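For reference, DSR renders internally at the native resolution scaled per axis by the square root of the chosen factor, then downsamples to the panel. A quick sketch (the helper name is mine; 4.00x is one of the standard factors exposed in the NVIDIA Control Panel):

```python
import math

def dsr_resolution(width, height, factor):
    # DSR scales each axis by sqrt(factor), so pixel count scales by the factor itself
    scale = math.sqrt(factor)
    return round(width * scale), round(height * scale)

dsr_resolution(1920, 1080, 4.00)   # (3840, 2160): 4K rendered on a 1080p panel
dsr_resolution(2560, 1440, 4.00)   # (5120, 2880): 5K on a 1440p panel
```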

*Last year I spent days researching monitors, conclusion was that the best experience you can buy today is (by far);*

*High Refresh Rate*, once you've properly experienced smooth gameplay at 120Hz or above you basically can't go back, playing fast paced FPS games in 60Hz is almost impossible. So let's just rule out 60Hz straight away, even for single player games.

*16:9 or 21:9 aspect ratio aka Ultra Wide?* 21:9 still to this day has limited support and won't work for consoles, many movies (Netflix, IMAX and more) and tv shows as they are regularly 16:9, that rules out Ultra Wide too.

*24" or 27" size?* None of course, 32" is the new meta, having personally used 24, 27 and 32", it's hard to go back to 24" after trying 27", and the same goes for 27", going down to that from 32" is like an absolute joke, claustrophobic really.

*TN or IPS?* Again, neither of them. VA is easily the most enjoyable panel type: far better contrast/blacks, and while it's not as fast as TN/AHVA, it's fast enough that you won't notice a real difference, whereas the contrast is night and day and you always notice that. IPS panels suffer from insane glow which makes dark games and movies almost unplayable/unwatchable, and the better viewing angles are completely irrelevant outside an office space; that's the only place an IPS shines. I would only recommend TN to professional gamers, as they now go up to 240Hz and the lack of colors and contrast doesn't matter there. So for the average user, VA is the way to go.

On top of that we have *Curved* which further increases the immersion than the size alone provides, then *HDR* because why not? Not even a price bump and useful if one ever wants to play an exclusive Playstation title as an example.

*Conclusion is then:* _32" VA 144Hz 16:9 Curved HDR_

Here are 6 models I'd recommend looking at, all in the price range of just ~ €500, super cheap for the experience you're getting, more or less unbeatable by anything else, only real step up is to OLED but that's not viable today. The only thing these monitors lack is 4K and HDMI 2.1, which we can hope comes out later this year or next, or they release 40" OLED 120Hz HDMI 2.1.

*Acer XZ1 XZ321QU*
*AOC Agon AG322QC4*
*ASUS ROG Strix XG32VQ* (No HDR)
*BenQ EX3203R*
*LG 32GK850F-B*
*ViewSonic XG3240C*

I personally own the Acer XZ1, tried the Samsung C32HG70 as well (not listed as it's worse than Acer). As a reference I've also tried 27" 4K HDR which does have an incredible image sharpness thanks to the insanely high PPI, but the experience is still far worse than the above monitors, 32" at 1440p is essentially the same PPI as 24" at 1080p, which is completely fine.
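The pixel-density claim at the end is easy to verify with the standard pixels-per-inch formula (diagonal resolution divided by diagonal size):

```python
import math

def ppi(width_px, height_px, diagonal_in):
    # pixels per inch = length of the pixel diagonal / physical diagonal
    return math.hypot(width_px, height_px) / diagonal_in

ppi_24_1080p = ppi(1920, 1080, 24)   # ~91.8 PPI
ppi_32_1440p = ppi(2560, 1440, 32)   # ~91.8 PPI, effectively identical
```

Both come out to about 91.8 PPI, since 1440p and 32" are each exactly 4/3 of 1080p and 24", so the density claim holds.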


----------



## zhrooms

keikei said:


> ASUS ROG MATRIX GeForce RTX 2080 Ti


 
*ASUS Website*

310mm, 1800MHz Boost Clock, Custom PCB, 2x8-Pin

Power Limit will be higher than the 325W on the Strix as the boost clock is far higher.

Same board as ROG Strix, different cooler. Meaning 16+3 Power Phases.










*Infinity Loop Cooling*
_Infinity Loop, a unique all-in-one closed-loop cooling system. Each card has been binned and tuned to deliver the best out-of-the-box Turing™ experience, raising the bar for graphics horsepower in small chassis and air-cooled builds._









 
 
 
*> Animation*


----------



## Sheyster

zhrooms said:


> Some? It's complete (some odd brands excluded) and accurate.


LOL, don't take "some" to mean "a little" here. I did not intend to offend you whatsoever. However, you just contradicted yourself BTW: It's complete (some odd brands excluded) = not complete.


----------



## dantoddd

zhrooms said:


> How are nobody mentioning *DSR*? It's like the best thing ever. You can play in 8K on a 1080p monitor. As you say yourself RTX 2080 Ti destroys any game in 1440p, that means you can play most games in 4K with a far higher refresh rate than 60, and with DSR you can do that without owning a 4K monitor, just set up DSR in NVIDIA Control Panel in about 5 seconds and you can now play any game in 4 to 5K resolution at a 144Hz refresh rate (Image quality will be immensely improved regardless if you have a true 4K monitor or not).
> 
> *Last year I spent days researching monitors, conclusion was that the best experience you can buy today is (by far);*
> 
> *High Refresh Rate*, once you've properly experienced smooth gameplay at 120Hz or above you basically can't go back, playing fast paced FPS games in 60Hz is almost impossible. So let's just rule out 60Hz straight away, even for single player games.
> 
> *16:9 or 21:9 aspect ratio aka Ultra Wide?* 21:9 still to this day has limited support and won't work for consoles, many movies (Netflix, IMAX and more) and tv shows as they are regularly 16:9, that rules out Ultra Wide too.
> 
> *24" or 27" size?* None of course, 32" is the new meta, having personally used 24, 27 and 32", it's hard to go back to 24" after trying 27", and the same goes for 27", going down to that from 32" is like an absolute joke, claustrophobic really.
> 
> *TN or IPS?* Again, neither of them, VA is easily the most enjoyable panel type, far better contrast/blacks but not as fast as TN/AHVA, but it doesn't matter because it's fast enough to not notice a real difference, but the contrast is night and day you always notice. IPS suffer from insane glow which makes dark games and movies almost unplayable/unwatchable, and the better angles are completely irrelevant outside an office space, that's the only place an IPS shines. I would only recommend TN to professional gamers, as they are now up to 240Hz and the lack of colors or contrast doesn't matter there. So for the average user VA is the way to go.
> 
> On top of that we have *Curved* which further increases the immersion than the size alone provides, then *HDR* because why not? Not even a price bump and useful if one ever wants to play an exclusive Playstation title as an example.
> 
> *Conclusion is then:* _32" VA 144Hz 16:9 Curved HDR_
> 
> Here are 6 models I'd recommend looking at, all in the price range of just ~ €500, super cheap for the experience you're getting, more or less unbeatable by anything else, only real step up is to OLED but that's not viable today. The only thing these monitors lack is 4K and HDMI 2.1, which we can hope comes out later this year or next, or they release 40" OLED 120Hz HDMI 2.1.
> 
> *Acer XZ1 XZ321QU*
> *AOC Agon AG322QC4*
> *ASUS ROG Strix XG32VQ* (No HDR)
> *BenQ EX3203R*
> *LG 32GK850F-B*
> *ViewSonic XG3240C*
> 
> I personally own the Acer XZ1, tried the Samsung C32HG70 as well (not listed as it's worse than Acer). As a reference I've also tried 27" 4K HDR which does have an incredible image sharpness thanks to the insanely high PPI, but the experience is still far worse than the above monitors, 32" at 1440p is essentially the same PPI as 24" at 1080p, which is completely fine.


Thanks I'll look at those


----------



## zhrooms

Sheyster said:


> LOL, don't take "some" to mean "a little" here. I did not intend to offend you whatsoever. However, you just contradicted yourself BTW: It's complete (some odd brands excluded) = not complete.


No worries







, not gonna deny I got a tiny bit triggered by the word _some_. Actually can't believe I spent so many hours doing that damn list, at least 30 over the past few months. Guess I'm not the sharpest tool in the shed as I'm not getting paid a dime for it. Hopefully it helps _some_ people though.


----------



## CptSpig

dantoddd said:


> Thanks I'll look at those


Read some reviews on VA panels; they tend to have issues with ghosting. If you game a lot you will probably not like the ghosting. I just purchased an Alienware AW3418DW 34" 21:9 panel and it's awesome! I know this is not the thread for monitors, but check out the screenshots from this monitor in the spoiler, taken with a Titan Xp; I have not saved any with my 2080 Ti.



Spoiler


----------



## Deathscythes

zhrooms said:


> *ASUS Website*
> 
> 310mm, 1800MHz Boost Clock, Custom PCB, 2x8-Pin
> 
> Power Limit will be higher than the 325W on the Strix as the boost clock is far higher.
> 
> Same board as ROG Strix, different cooler. Meaning 16+3 Power Phases.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> *Infinity Loop Cooling*
> _Infinity Loop, a unique all-in-one closed-loop cooling system. Each card has been binned and tuned to deliver the best out-of-the-box Turing™ experience, raising the bar for graphics horsepower in small chassis and air-cooled builds._
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> *> Animation*



Thanks! You think we can expect binned dies on those? Like 2150MHz minimum? Also, given the specs, doesn't it use Samsung GDDR6?


----------



## arrow0309

zhrooms said:


> How are nobody mentioning *DSR*? It's like the best thing ever. You can play in 8K on a 1080p monitor. As you say yourself RTX 2080 Ti destroys any game in 1440p, that means you can play most games in 4K with a far higher refresh rate than 60, and with DSR you can do that without owning a 4K monitor, just set up DSR in NVIDIA Control Panel in about 5 seconds and you can now play any game in 4 to 5K resolution at a 144Hz refresh rate (Image quality will be immensely improved regardless if you have a true 4K monitor or not).
> 
> *Last year I spent days researching monitors, conclusion was that the best experience you can buy today is (by far);*
> 
> *High Refresh Rate*, once you've properly experienced smooth gameplay at 120Hz or above you basically can't go back, playing fast paced FPS games in 60Hz is almost impossible. So let's just rule out 60Hz straight away, even for single player games.
> 
> *16:9 or 21:9 aspect ratio aka Ultra Wide?* 21:9 still to this day has limited support and won't work for consoles, many movies (Netflix, IMAX and more) and tv shows as they are regularly 16:9, that rules out Ultra Wide too.
> 
> *24" or 27" size?* None of course, 32" is the new meta, having personally used 24, 27 and 32", it's hard to go back to 24" after trying 27", and the same goes for 27", going down to that from 32" is like an absolute joke, claustrophobic really.
> 
> *TN or IPS?* Again, neither of them, VA is easily the most enjoyable panel type, far better contrast/blacks but not as fast as TN/AHVA, but it doesn't matter because it's fast enough to not notice a real difference, but the contrast is night and day you always notice. IPS suffer from insane glow which makes dark games and movies almost unplayable/unwatchable, and the better angles are completely irrelevant outside an office space, that's the only place an IPS shines. I would only recommend TN to professional gamers, as they are now up to 240Hz and the lack of colors or contrast doesn't matter there. So for the average user VA is the way to go.
> 
> On top of that we have *Curved* which further increases the immersion than the size alone provides, then *HDR* because why not? Not even a price bump and useful if one ever wants to play an exclusive Playstation title as an example.
> 
> *Conclusion is then:* _32" VA 144Hz 16:9 Curved HDR_
> 
> Here are 6 models I'd recommend looking at, all in the price range of just ~ €500, super cheap for the experience you're getting, more or less unbeatable by anything else, only real step up is to OLED but that's not viable today. The only thing these monitors lack is 4K and HDMI 2.1, which we can hope comes out later this year or next, or they release 40" OLED 120Hz HDMI 2.1.
> 
> *Acer XZ1 XZ321QU*
> *AOC Agon AG322QC4*
> *ASUS ROG Strix XG32VQ* (No HDR)
> *BenQ EX3203R*
> *LG 32GK850F-B*
> *ViewSonic XG3240C*
> 
> I personally own the Acer XZ1, tried the Samsung C32HG70 as well (not listed as it's worse than Acer). As a reference I've also tried 27" 4K HDR which does have an incredible image sharpness thanks to the insanely high PPI, but the experience is still far worse than the above monitors, 32" at 1440p is essentially the same PPI as 24" at 1080p, which is completely fine.


What do you mean by
_"tried the Samsung C32HG70 as well (not listed as it's worse than Acer)"_?
I just wanted to give it a try as soon as this FreeSync-compatible story came true.
It shares the very same panel with the Asus XG32VQ.


----------



## J7SC

ENTERPRISE said:


> Just received my Gigabyte Auros 2080Ti Xtreme cards. Have people had success or any issues using the 380Watt Galax Bios on these Cards as the Auros Xtreme has a custom PCB ?
> 
> *edit*
> 
> As the Galax 380 Watt Cards have 16 Phases and my cards have 19 phases, may not work. May need one of the Galax HOF BIOS's as those cards have 19 Phases. Maybe this one ? : https://www.techpowerup.com/vgabios...X+2080+Ti&interface=&memType=&memSize=&since=
> 
> Cheers !


...there are three Gigabyte Aorus 2080 Ti Xtreme card versions (triple air, AIO and full waterblock) but AFAIK they have the same PCB. I got a couple of the factory waterblock ones, and in Unigine Superposition 4K and 8K they already pull 375w to 380w anyway w/ the power slider at '122%'...thus, I haven't flashed the 380w Galax bios yet (thinking about the 450w HoF OC Lab one though, not sure it would work).

That said, some of the Gigabyte Aorus 2080 Ti Xtreme card owners here apparently have successfully flashed the 380w Galax bios...and as far as I remember, one managed to flash the 450w HoF one (post in mid to late December) but found that it did not increase the overall wattage and introduced more 'jumping' etc.

BTW, what are you going to do about all that RGB on the cards (w/o using their buggy RGB software) if you find it to be too much over the long run...inquiring minds want to know.


----------



## Sheyster

zhrooms said:


> No worries, not gonna deny I got a tiny bit triggered by the word _some_. Actually can't believe I spent so many hours doing that damn list, at least 30 over the past few months. Guess I'm not the sharpest tool in the shed as I'm not getting paid a dime for it. Hopefully it helps _some_ people though.


We do appreciate your effort! These "owner" threads are tedious to maintain and often thankless. FWIW, it's very nice having all the BIOS links at the end of the OP. Thank you!


----------



## MrTOOSHORT

Yes, I didn't say it earlier, but the OP is amazing. Great info, good job! :thumb:


----------



## zhrooms

CptSpig said:


> Read some reviews on VA panels they tend to have issue with ghosting. If you game a lot you will probably not like the ghosting.


 
Please don't spread misinformation, *it's not an issue*.

Some quotes on the C32HG70 that arrow below expressed interest in,

On input lag

"_With Low Input lag mode turned off, the screen showed a total lag of *25.10ms* as measured by SMTT. If we take out an element related to pixel response times (6.50ms) then we are left with an *estimated signal processing lag of 18.6ms*.

The screen also features a 'low input lag' mode in the OSD menu. With low input lag mode turned on we saw an excellent improvement in the lag. The total display lag from SMTT measurements was now only *7.0ms*, giving an estimated signal processing lag of only *0.5ms*. This is basically nothing and makes the screen *suitable for a wide range of gaming needs. A very good result there.*_" - TFTCentral on the C32HG70

On response time

"_These comparisons are based on the underlying pixel response times, ignoring any motion clarity benefits of strobing backlights etc. So on this screen the measurements were taken in the 'Standard' response time mode, while the active refresh rate had no impact on the measurements. With an average of *13ms* G2G the screen was quite slow, lagging behind the other models listed here - most of which are gaming screens. Other competing VA technology screens like the AOC AGON AG352UCG for instance were a little faster at *10.2ms* G2G average_" - TFTCentral on the C32HG70

To compare, the ROG Swift PG279Q, which features an IPS (AHVA) panel at 165Hz, got *5.0ms* G2G. That's half of the AOC in the quote, and 10.2 is still a good number. It's easy to look at the figures and say, _well that's twice as much, that must be bad_, when in reality it barely matters. Say, as an example, the threshold for actually noticing the slower response starts at 20ms; then we would be well within the limit. I recently tested the ROG Swift PG278QR TN 1440p 165Hz, which has a response time of about *3.0ms* G2G, at least 3 times faster than my Acer VA, but the only difference I noticed (in Overwatch at 300 FPS) was the overshoot; the blur reduction (strobing backlight) is incredibly fast on this Acer, and that's what causes the overshoot, an overly aggressive overdrive impulse.
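To put those numbers in perspective, here's a quick back-of-the-envelope comparison. The G2G figures are the ones quoted above from TFTCentral; the "frame intervals" column and the 144Hz refresh rate are purely illustrative:

```python
# Compare quoted G2G pixel response times to the frame interval at 144 Hz.
# G2G figures taken from the TFTCentral numbers quoted above.

def frame_time_ms(refresh_hz: float) -> float:
    """Duration of one refresh cycle in milliseconds."""
    return 1000.0 / refresh_hz

g2g_ms = {
    "VA  (Samsung C32HG70)": 13.0,
    "VA  (AOC AG352UCG)":    10.2,
    "IPS (ASUS PG279Q)":      5.0,
    "TN  (ASUS PG278QR)":     3.0,
}

interval = frame_time_ms(144)  # ~6.94 ms per frame at 144 Hz
for panel, ms in g2g_ms.items():
    print(f"{panel}: {ms:5.1f} ms G2G = {ms / interval:.2f} frame intervals")
```

Even the slowest figure here sits under two frame intervals at 144Hz, and well under the example ~20ms noticeability threshold above.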

Saying that ghosting (high input lag, response time) is an *issue or a problem* on VA is simply false. It's slower, sure; especially with blur reduction off you can tell the difference very easily, but for gaming you always use blur reduction regardless of which monitor you buy. VA does use a more aggressive blur reduction to compensate (the PG278QR also has overshoot on its fastest Overdrive mode, so you can't escape it).

Don't think I can say much more other than that I have no problem _at all_ playing very fast paced FPS games on VA (Overwatch, Black Ops 4, Global Offensive); it's *super* sharp in motion because of the aggressive blur reduction. And if you don't need to play the fastest games at 144Hz, you can turn down overdrive to eliminate the overshoot, which introduces a bit of blur, but that's irrelevant at lower framerates or in slower games. The pros of having a VA far outweigh the cons.
 


Deathscythes said:


> Thanks! You think we can expect binned dies on those? Like 2150MHz minimum?
> 
> Also given the specs doesn't it use samsung GDDR6?


 
No, it's just a Strix with a different cooler; on their website they use the word _binned_, but I believe that is what they call the factory-overclocked GPU (300A).

I don't get what you mean; whether it is Samsung or not is irrelevant?
 


arrow0309 said:


> What do you mean?
> _"tried the Samsung C32HG70 as well (not listed as it's worse than Acer)"_
> As I just wanted to give it a try as soon as this Freesync capable story come out true.
> It shares the very same panel with the Asus XG32VQ.


 
The blur reduction locks the brightness to about 220cd/m2, which is 100 above what you want for sRGB. So that made the monitor a *big no* for me. But it did have impressively low input lag and good blur reduction, with not as much overshoot as my Acer, though that was both a plus and a negative. I'd rather have the more effective Overdrive, even if it makes things look worse with more overshoot; I really don't care about that when I play to win, pleasant visuals don't matter at that point.

The SVA (Quantum Dot) panel made *no difference*, which surprised me, literally no difference; I compared them side by side for several days (and took about 50 pictures, close up, of clouding, brightness, blur reduction). The local dimming is also a complete joke; it does not work in games or movies. The Samsung cost about €150 more than my Acer at the time I made my purchase, so hell no to that. Its capability of higher peak nits in HDR is irrelevant unless you use a console with the monitor. The difference is only €50 in Germany right now though, but I would still pick the Acer over the Samsung, mainly because it doesn't lock the brightness and the blur reduction is better (subjectively).
 


J7SC said:


> ...there are three Gigabyte Auros 2080Ti Xtreme card versions (triple air, AIO and full waterblock) but AFAIK they have the same PCB. I got a couple of the factory waterblock ones and in Unigine Superposition 4k and 8k, they already pull 375w to 380w anyways w/ power slider to '122%'...thus, I haven't flashed the 380w Galax bios yet (thinking about the 450w OC labs one though, but not sure it would work).
> 
> That said, some of the Gigabyte Auros 2080Ti Xtreme card owners here have successfully flashed the 380w Galax bios...both Galax and Gigabyte are 19 phases, just marketing dep't differentials on 16 +3 (the latter for VRAM) or 19.


 
They do have the same PCB, yes. I've read comments from multiple people who have encountered problems with the 450W one because it was made for 3 power connectors; flashing a 3-connector card with a 2-connector BIOS works, but not the other way around it seems (I can't confirm this, correct me if I'm wrong).

Seems pretty pointless to flash 366W to 380W; those 14W aren't going to make miracles happen and will put your warranty at risk, even if flashing is almost completely risk-free. I would personally not do it; for benchmarks sure, for daily usage no.
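For what it's worth, the relative headroom gains can be sketched quickly (wattages as discussed in this thread: 366W stock Aorus Xtreme limit, 380W Galax BIOS, 450W HOF BIOS; the percentages are just arithmetic):

```python
# Relative power-limit headroom of the BIOS options discussed above.
# 366 W = stock Aorus Xtreme limit, 380 W = Galax BIOS, 450 W = HOF BIOS.

def headroom_pct(base_w: float, new_w: float) -> float:
    """Percentage increase in power limit over the base BIOS."""
    return (new_w - base_w) / base_w * 100.0

print(f"366 W -> 380 W: +{headroom_pct(366, 380):.1f}%")  # under 4% extra
print(f"366 W -> 450 W: +{headroom_pct(366, 450):.1f}%")
```

So the Galax flash buys under 4% extra power budget, which lines up with it only mattering in workloads that sit right at the power limit.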
 
 
 


Sheyster said:


> We do appreciate your effort!





MrTOOSHORT said:


> OP is amazing!


https://youtu.be/p8J-YmVs1j0?t=93


----------



## CptSpig

zhrooms said:


> Please don't spread misinformation, *it's not an issue*.


Personal preference is not misinformation. Numbers are one thing but real-world application is another. I have the Asus ROG PG279Q and Alienware AW3418DW, both IPS panels with no IPS glow and very little backlight bleed. I had an Acer Predator Z35P bmiphz gaming monitor and had to send it back because the ghosting was intolerable for gaming. The only things good about the VA panel were the blacks and high refresh rate. Hopefully the new panels with HDR will be a big improvement. The IPS panels, once calibrated, were nearly as good. Crunching numbers is one thing, but owning the monitors is the only real way to tell what's best. IMO the best monitor will be a personal choice. Best to look at monitors with good return policies so you can send them back. Now that's good advice. :thumb:


----------



## jepz

Thanks for the update, OP!

Cheers


----------



## J7SC

zhrooms said:


> They do have the same PCB yes. I've read comments from multiple people that has encountered problems with the 450W one because it was made for 3 power connectors, flashing a 3 connector card with a 2 connector BIOS works, but not the other way around it seems (I can't confirm this, correct me if I'm wrong).
> 
> Seems pretty pointless to flash 366W to 380W, those 14W aren't going to make miracles happen and will put a risk on your warranty, even if it's almost completely risk free. I would personally not do it, for benchmark sure, for daily usage no.


I think you're right re: flashing a 2x8-pin card with a 3x8-pin BIOS...it may (or may not) 'kind of' work but probably ends up gimping performance...there was one chap who I think did try it and got it to run, but reverted because his results with the 3x8-pin 450w bios were worse than stock, which also makes intuitive sense.

Since the Aorus Xtreme 2080 Ti already pulls about 380w with the stock BIOS, a better step would be shunt mods w/ stock BIOS, but really, I didn't build this system to be a bencher (though I did use quick-disconnects, just in case...  )

And I'd like to add to the other kudos above: thanks for running a highly informative owners thread / knowledge base! :thumbsup:


----------



## Thoth420

CptSpig said:


> Personal preference is not miss information. Numbers are one thing but real world application is another. I have the Asus ROG PG279Q and Alienware AW3418DW both IPS panels with no IPS glow and very little Back light bleed. I had a Acer Predator Z35 Gaming Monitor - Z35P bmiphz and had to send it back because the ghosting was intolerable for gaming. The only thing good about the VA panel was the blacks and high refresh rate. Now hopefully the new panels with HDR will be a big improvement. The IPS panels once calibrated were nearly as good. Crunching numbers is one thing but owning the monitor's is the only real way tell what's best. IMO the best monitor will be a personal choice. Best to look at monitors with good return policy's so you can send it back. Now that's good advice. :thumb:


Seconded... after going through almost every "gaming" panel, both G-Sync and FreeSync, above 1080 reso, the only monitor worth keeping was the AW3418DW. Flawless once calibrated, and the only downside I guess would be having to fuss around to make some games work properly on 21:9.


----------



## Coldmud

ENTERPRISE said:


> Just received my Gigabyte Auros 2080Ti Xtreme cards. Have people had success or any issues using the 380Watt Galax Bios on these Cards as the Auros Xtreme has a custom PCB ?
> 
> *edit*
> 
> As the Galax 380 Watt Cards have 16 Phases and my cards have 19 phases, may not work. May need one of the Galax HOF BIOS's as those cards have 19 Phases. Maybe this one ? : https://www.techpowerup.com/vgabios...X+2080+Ti&interface=&memType=&memSize=&since=
> 
> Cheers !


I've been running the 380W BIOS for weeks now, just rolled back to stock for some testing. No issues with this BIOS; it allows for a slightly higher OC on core. With the stock 366W BIOS ~2080MHz is stable; on 380W I can do ~2130MHz stable. With some apps the difference of 14W means you don't hit the PL, so it really depends what software you use. If you want the extra 50MHz, just try it. Also, didn't Buildzoid debunk the 19 actual phases on this PCB?




J7SC said:


> ...there are three Gigabyte Auros 2080Ti Xtreme card versions (triple air, AIO and full waterblock) but AFAIK they have the same PCB. I got a couple of the factory waterblock ones and in Unigine Superposition 4k and 8k, they already pull 375w to 380w anyways w/ power slider to '122%'...thus, I haven't flashed the 380w Galax bios yet (thinking about the 450w HoF OC labs one though, but not sure it would work).
> 
> That said, some of the Gigabyte Auros 2080Ti Xtreme card owners here apparently have successfully flashed the 380w Galax bios...and as far as I remember, one managed to flash the 450w HoF one (post in mid to late December) but found that it was not increasing the overall wattage, but introduced more 'jumping' etc.





zhrooms said:


> They do have the same PCB yes. I've read comments from multiple people that has encountered problems with the 450W one because it was made for 3 power connectors, flashing a 3 connector card with a 2 connector BIOS works, but not the other way around it seems (I can't confirm this, correct me if I'm wrong).
> 
> Seems pretty pointless to flash 366W to 380W, those 14W aren't going to make miracles happen and will put a risk on your warranty, even if it's almost completely risk free. I would personally not do it, for benchmark sure, for daily usage no.


Yeah several have tried, I think also Profundido explained some on the 450W bios, I also tried it on air and water. No real issues with it, just never reached anything over 380W TDP. It did introduce noticeably more heat on stock settings for the fanned card. A pointless bios for 2x8pin cards.


----------



## keikei

https://videocardz.com/newz/msi-geforce-rtx-2080-ti-lightning-pictured


----------



## J7SC

Coldmud said:


> I've been running the 380w bios for weeks now, just rolled back to stock for some testing. No issues with this bios, allows for a slightly higher OC on core, with stock 366W bios ~2080mhz is stable, on 380W i can do ~2130mhz stable. With some apps the difference of 14W means you don't hit the PL, so it really depends what software you use. If you want the extra 50mhz just try it. Also didn't buildzoid debunk 19 actual phases on this pcb?


I think Buildzoid's comments - as far as I can actually understand him - are two-fold. First, the caps on this Aorus Xtreme PCB are round, and he had no data sheet on them, unlike the square ones on virtually all other custom PCBs. Second, and I may have misunderstood what he said, almost all custom 16-phase (for GPU, plus 3 separate for VRAM) PCBs by various vendors are actually true 8-phase, splitting into two phases each, though WITHOUT true doublers. On the Galax HoF OC Lab and the Asus Strix (and perhaps upcoming ones such as the Lightning, Kingpin & Co), the controller is a 10-phase instead of an 8-phase. At the end of the day, almost all custom PCBs, even with these limitations, should deliver more than enough oomph to go close to or even beyond 400w (and thus a bit beyond the 2x 8-pin 'official' spec, as discussed before), were it not for the built-in limiters (which you can deal with to some extent via shunt mods)...




Coldmud said:


> Yeah several have tried, I think also Profundido explained some on the 450W bios, I also tried it on air and water. No real issues with it, just never reached anything over 380W TDP. It did introduce noticeably more heat on stock settings for the fanned card. A pointless bios for 2x8pin cards.


Using a 3x 8-pin BIOS on a 2x 8-pin PCB is sort of like using an intake manifold for a 3-cylinder engine on a 2-cylinder motor (maybe this metaphor needs work :lmaosmile ). At the end of the day, you want an A-spec GPU chip that can run at the lowest possible voltage for a given speed with a decent (~380w) BIOS, and give it as much cooling as you can.


*EDIT:* Gamers Nexus has a tear-down vid of the new Asus Matrix, interesting stuff


----------



## Thoth420

Any of you all using the Windows October update with these GPUs without issue? Just curious, as I am on it at the moment and plan to do a clean OS install on a fresh NVMe drive I got. I just want to know which version I should install. I don't need RTX, but I do need to use the latest driver because it has a fix for a Hitman 2 issue.


----------



## CptSpig

Thoth420 said:


> Any of you all using the windows October update with these GPUs without issue? Just curious as I am on it at the moment and plan to do a clean OS install on a fresh drive NVME drive I got. I just want to know which version I should install. I don't need RTX but the latest driver I do need to use because it has a fix for a Hitman 2 issue.


I am using the Windows 10 Pro x64 1809 October update with the Nvidia 417.35 driver, no issues. :thumb:


----------



## Asmodian

I am using 417.58 with Windows 10 Pro for Workstations 1809 17763.195 without issues.


----------



## kot0005

zhrooms said:


> Please don't spread misinformation, *it's not an issue*.
> 
> Some quotes on the C32HG70 that arrow below expressed interest in,
> 
> On input lag


Hi, you might want to change my card in the OP btw; I RMA'd my Aorus Waterforce. I am on the 2080 Ti Sea Hawk EK X card now.


----------



## ENTERPRISE

J7SC said:


> ...there are three Gigabyte Auros 2080Ti Xtreme card versions (triple air, AIO and full waterblock) but AFAIK they have the same PCB. I got a couple of the factory waterblock ones and in Unigine Superposition 4k and 8k, they already pull 375w to 380w anyways w/ power slider to '122%'...thus, I haven't flashed the 380w Galax bios yet (thinking about the 450w HoF OC labs one though, but not sure it would work).
> 
> That said, some of the Gigabyte Auros 2080Ti Xtreme card owners here apparently have successfully flashed the 380w Galax bios...and as far as I remember, one managed to flash the 450w HoF one (post in mid to late December) but found that it was not increasing the overall wattage, but introduced more 'jumping' etc.
> 
> BTW, what are you going to do about all that RGB on the cards (w/o using their buggy RGB software) if you find it to be too much over the long run...inquiring minds want to know





Coldmud said:


> I've been running the 380w bios for weeks now, just rolled back to stock for some testing. No issues with this bios, allows for a slightly higher OC on core, with stock 366W bios ~2080mhz is stable, on 380W i can do ~2130mhz stable. With some apps the difference of 14W means you don't hit the PL, so it really depends what software you use. If you want the extra 50mhz just try it. Also didn't buildzoid debunk 19 actual phases on this pcb?
> 
> 
> 
> 
> 
> 
> Yeah several have tried, I think also Profundido explained some on the 450W bios, I also tried it on air and water. No real issues with it, just never reached anything over 380W TDP. It did introduce noticeably more heat on stock settings for the fanned card. A pointless bios for 2x8pin cards.


Thanks guys; with that in mind, it seems fairly pointless to flash the BIOS. Flashing it will likely give me some odd side effects somewhere along the line, possibly with the RGB. As for the RGB, honestly I couldn't care less about it. However, in today's age everything is plastered in RGB, so there is not much choice lol. I know with the Aorus Engine software you can turn it off if required; I've not noticed any bugs in the Aorus software just yet.


----------



## arrow0309

zhrooms said:


> ... cut ...
> 
> The blur reduction locks the brightness to about 220cd/m2, which is 100 above what you want for sRGB. So that made the monitor a *big no* for me. But it did have impressive low input lag and blur reduction, not as much overshoot as my Acer but that was both a plus and a negative. I'd rather have the more effective Overdrive, even if it makes it look worse, more overshoot, really don't care about it when I play to win, pleasant visuals don't matter at that point.
> 
> The SVA (Quantum Dots) made *no difference*, which surprised me, literally no difference, compared them side by side for several days (and took about 50 pictures, close up, on clouding, brightness, blur reduction). The local dimming is also a complete joke, does not work in games or movies. Samsung cost about €150 more than my Acer at the time I made my purchase, so hell no to that. It being capable of higher peak nits in HDR is irrelevant unless you use a console for the monitor. The difference is only €50 in germany right now though, but I would still pick the Acer over Samsung mainly because it doesn't lock the brightness, and the blur reduction is better (subjectively).
> 
> ... cut ...


OK, so I'm gonna forget about this Samsung, since it also uses PWM brightness and I'm not gonna like the flickering (in case it's noticeable).
Tell me, what do you think about this newer AOC Agon AG322QC4?
It can be preordered for only £375 (from amazon.co.uk).
Does this one use PWM or not, and how about the ghosting?



CptSpig said:


> Personal preference is not miss information. Numbers are one thing but real world application is another. I have the Asus ROG PG279Q and Alienware AW3418DW both IPS panels with no IPS glow and very little Back light bleed. I had a Acer Predator Z35 Gaming Monitor - Z35P bmiphz and had to send it back because the ghosting was intolerable for gaming. The only thing good about the VA panel was the blacks and high refresh rate. Now hopefully the new panels with HDR will be a big improvement. The IPS panels once calibrated were nearly as good. Crunching numbers is one thing but owning the monitor's is the only real way tell what's best. IMO the best monitor will be a personal choice. Best to look at monitors with good return policy's so you can send it back. Now that's good advice. :thumb:


Hi, I've also owned those monitors (except the Alienware) and always liked the IPS tech (even on the glowy PG348Q), and yeah, I also had the newer VA Predator Z35P @120Hz and only kept it a couple of weeks; the ghosting (and the colors as well) were far from my taste in BF1, for instance.
So I went to this X34P, which I've owned for almost a year, and it seemed "almost" perfect.
Now I'm also considering a 32" 1440p 144Hz for a better experience with the (few) RT games, however I'm not sure if I'm doing the right thing.
Also saving some money (since my second-hand X34P is still not cheap if I'm gonna sell it).


----------



## Zammin

Asmodian said:


> I am using 417.58 with Windows 10 Pro for Workstations 1809 17763.195 without issues.


Is 417.58 a hotfix driver that's not available on the Nvidia website? 417.35 is the latest that shows up for me :/


----------



## profundido

Thoth420 said:


> Any of you all using the windows October update with these GPUs without issue? Just curious as I am on it at the moment and plan to do a clean OS install on a fresh drive NVME drive I got. I just want to know which version I should install. I don't need RTX but the latest driver I do need to use because it has a fix for a Hitman 2 issue.


latest build, no problem


----------



## Thoth420

Asmodian said:


> I am using 417.58 with Windows 10 Pro for Workstations 1809 17763.195 without issues.





kot0005 said:


> Hi, You might want to change my card in OP btw, I RMA'd my Aorus waterforce. I am on the 2080Ti Seahawk EX X card now.





profundido said:


> latest build, no problem


Cheers fellas! Makes my life a whole lot easier.



Zammin said:


> Is 417.58 a hotfix driver that's not available on the Nvidia website? 417.35 is the latest that shows up for me :/


I don't see the hotfix one... what does it fix? Also link please if you don't mind.


----------



## Asmodian

https://www.overclock.net/forum/226-software-news/1715798-nvidia-geforce-417-35-417-58-hotfix.html


----------



## CptSpig

arrow0309 said:


> OK, so I'm gonna forget about this Samsung since it also has pwm brightness so I'm not gonna like its flickering (in case that was noticeable).
> Tell me, what do you think about this newer AOC Agon ag322qc4?
> It may be preorder for only £375 (from amazon.co.uk).
> This one has pwm or not, how about the ghosting?
> 
> 
> 
> Hi, I've also owned those monitors (except the Alienware) and always liked the ips tech (even on the glowish PG348Q) and yeah, I've also had the newer VA Predator Z35p @120hz and only kept it a couple of weeks, the ghosting (and colors as well) were far from my taste in BF1 for instance.
> So I went to this X34P that I possess from almost a year and it seemed "almost" perfect.
> Now I'm also considering a 32" 1440p 144hz for a better experience with RT (few) games however I'm not sure if I'm doing the right thing.
> Also saving some money (since my second hand X34P is still no cheap if I'm gonna sell it).


IMO I would keep the X34P; it is the same panel as the AW3418DW, and I think it's the best available at this time. If you don't have a Titan V, Titan Xp or 2080 Ti you will be hard pressed to achieve 144 FPS (ultra) in AAA games. Wait for the HDR panels to arrive, which should be soon. My 2080 Ti (on water) in Battlefield V, with all settings on ultra including ray tracing, can maintain over 100 FPS. IMO you can't get a better experience than an ultra-wide monitor at this time. Good luck with your final decision.


----------



## J7SC

ENTERPRISE said:


> Thanks guys, with that in mind, seems fairly pointless in flashing the BIOS. Flashing the BIOS will likely give me some odd side affects somewhere along the line when it comes to RGB possibly. As for the RGB, honestly I could care less about it. However in today's age everything is plastered in RGB so there is not much choice lol. I know with the Auros Engine software you can turn it off if required, not noticed any bugs in the Auros software just yet.


Yeah, so far the Aorus Xtreme 2080 Ti native BIOS has given me no reason to complain, be it on power target reached or otherwise. While the Galax 380w remains an option to try out on a rainy day, the latest MSI AB addresses everything else I could possibly want, such as voltage curve / under-volting / voltage lock etc. The only thing missing is a power limit increase to 480w...:upsidedwn

I like the RGB on the cards, but combined with RAM RGB and mobo RGB by different vendors all running on their own cycles, it's a bit much for my taste. As to the Aorus RGB and related software, I have currently only tried out the latest version with Win 7, as I had to RMA my mobo (MSI MEG Creation on the Threadripper build) and replace it with the MSI Pro Carbon (meant for a 2nd Threadripper build) for now. So I don't know about the latest Aorus software update in Win 10 yet. *Importantly*, about three weeks ago there was a series of articles about significant security holes in both Asus Aura RGB and the Aorus RGB software. Gigabyte has since updated the related software to version 1.50, but in Win 7 that seems buggy...don't know about Win 10 yet, though by the sounds of it you might be running that without issues.

Anyway, this excerpt from a bit-tech.net article from December 20 relates "..to the vulnerabilities highlighted by SecureAuth; accordingly, users are advised to uninstall the affected packages - Asus Aura Sync version 1.07.22 and earlier, Gigabyte App Center version 1.33 and earlier, Aorus Graphics Engine version 1.33 and earlier, Xtreme Engine version 1.25 and earlier, and OC Guru II version 2.08 and earlier". As stated, Gigabyte has now updated the package but if you're still running version 1.33 / earlier...


----------



## zhrooms

CptSpig said:


> I have the Asus ROG PG279Q and Alienware AW3418DW both IPS panels with no IPS glow and very little Back light bleed. I had a Acer Predator Z35 Gaming Monitor - Z35P bmiphz and had to send it back because the ghosting was intolerable for gaming. The only thing good about the VA panel was the blacks and high refresh rate. Now hopefully the new panels with HDR will be a big improvement. The IPS panels once calibrated were nearly as good. Crunching numbers is one thing but owning the monitor's is the only real way tell what's best. IMO the best monitor will be a personal choice. Best to look at monitors with good return policy's so you can send it back. Now that's good advice.


 
I dare you to take a picture of them in the dark with maximum brightness, "_no IPS glow_". There is no escaping IPS glow.

Also, you're comparing an almost 2.5-year-old VA panel to your AH-IPS. And the ghosting is not intolerable; a review of the monitor shows very good numbers with ULMB/Overdrive turned on. Or maybe you forgot to enable them?









Which new panels with HDR? *These?*

And yes, I absolutely agree with you about return policy; you should buy a monitor with more care than any other computer part or accessory. There are plenty of monitors with good panels that are really poorly configured (OSD, software). For example, in one of my recent posts I listed 6 different VA panels; they are very similar in specification but vary wildly in terms of software calibration/features. The Samsung C32HG70, for instance, was unusable for me in gaming because they lock the brightness to 220 nits, not allowing you to adjust it, and this information is not documented anywhere. It can take up to a year for a new monitor to be reviewed properly and for this kind of information to surface, so always be ready to return a monitor when you buy one.
 


Thoth420 said:


> Seconded... after going through almost every "gaming" panel both G and Freesync above 1080 reso the only monitor worth keeping was the AW3418DW. *Flawless* once calibrated and *the only downside* is fuss around to games at 21:9.


 
     _*"Flawless"*_

     










Makes me sad knowing there are people out there owning IPS monitors who don't know any better; they're just awful panels for the average user. They only excel in bright office environments where you need the superior viewing angles and high color accuracy, and they're a horrible experience in any dark environment, such as... your own home. There is no getting rid of the orange IPS glow; it is the worst thing I have experienced on a monitor in the past 20 years. Imagine a quarter of your 16:9 screen showing the color orange when watching a dark movie or playing a game. What it looks like in the picture above is what it looks like with your own eyes. You can minimize it by turning down brightness, but it will still be visible. If you want to experience any HDR content, such as a console game or Blu-ray/Netflix, the brightness will be maxed, and then it's laughably bad; so much glow you'd think there's something wrong with the monitor, but there isn't, it's just the panel being IPS. I would much rather use an 8-year-old 24" TN 120Hz panel than a 27" IPS 144Hz panel; the experience/immersion is far better on the TN, with a slight white-bluish glow instead of an insane orange glow. If you're color blind, I guess it's even better.








Maybe you've noticed I hate IPS with a passion at this point, but it's really not without reason. Choosing IPS (AHVA) over VA today is about the biggest mistake you can make; that's just a fact, widely agreed upon by monitor enthusiasts.

If you do *professional work* and need high color accuracy/consistency, then of course you should probably choose at the very least a 10-bit 4K AH-IPS panel, but for any kind of *entertainment* VA offers a much more enjoyable experience. The black level of VA is around 0.05 nits versus roughly 0.12 for AHVA; you can easily tell the difference with the monitors side by side, while what you won't notice is the very slightly better color accuracy of IPS. And I've already gone through VA in gaming in my previous post: it is slower, but not so slow that it becomes a problem.
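To put those black levels in perspective: static contrast is just white luminance divided by black luminance. A quick sketch (the 350-nit white point is an assumed example for illustration, not a measured value):

```python
# Contrast ratio = white luminance / black luminance.
# Black levels from the post: VA ~0.05 nits, AHVA (IPS) ~0.12 nits.
# The 350-nit white point is only an illustrative assumption.

def contrast_ratio(white_nits: float, black_nits: float) -> float:
    """Static contrast ratio, e.g. 3000.0 means 3000:1."""
    return white_nits / black_nits

va_contrast = contrast_ratio(350, 0.05)    # 7000:1
ips_contrast = contrast_ratio(350, 0.12)   # ~2917:1
print(f"VA  {va_contrast:.0f}:1")
print(f"IPS {ips_contrast:.0f}:1")
```

Same white point, but the VA's deeper blacks more than double the static contrast, which is exactly what you notice in a dark room.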

I am not saying VA is the greatest thing ever, by any stretch of the imagination, they *all* belong in the trash, VA just happens to be the *lesser evil* in this case.








OLED is where it's at, out of this world, but we're still at least a whole year away from having it in PC monitors. I'd be perfectly fine with wall mounting a 40" 120Hz OLED w/ HDMI 2.1; it's really not that much bigger than 32/34".
 


Coldmud said:


> I've been running the 380w bios for weeks now, just rolled back to stock for some testing. No issues with this bios, allows for a slightly higher OC on core, with stock 366W bios ~2080mhz is stable, on 380W i can do ~2130mhz stable. With some apps the difference of 14W means you don't hit the PL, so it really depends what software you use. If you want the extra 50mhz just try it. Also didn't buildzoid debunk 19 actual phases on this pcb?


 
The voltage is what matters here though, the higher power limit should result in a higher voltage, thus reach a higher stable core clock. It would be interesting if you could compare the voltage between the two. Preferably run a single player game at 5K resolution and highest settings, use a save game to load the same scene, look at voltage *and* core clock, re-do with the other BIOS.
 


J7SC said:


> At the end of the day, almost all custom PCBs, even with these limitations, should deliver more than enough ooomph to go close to or even beyond 400w (and thus a bit beyond 2x 8 pin 'official' spec, as discussed before), were it not for the built-in limiters (you can deal with to some extent via shunt mods)...
> 
> You want a GPU A-spec chip that can run at the lowest possible voltage for a given speed with a decent (~ 380w) Bios, and give it as much cooling as you can.


 
I'd suggest you read the FAQ at the bottom of the original post if you haven't already; it briefly goes into how even the reference card can technically go above 600W, but would hit the hard limit long before that. When paying for a custom PCB you are really paying for just the cooler, unless you also plan to overclock it on LN2, which basically no one does.
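For reference, the "official spec" power figures being discussed are just a sum of PCI-SIG budgets (75W from the slot, 150W per 8-pin, 75W per 6-pin); a quick sketch:

```python
# Official PCI Express power budgets: 75 W from the slot itself,
# plus 150 W per 8-pin and 75 W per 6-pin auxiliary connector.
PCIE_SPEC_W = {"slot": 75, "6pin": 75, "8pin": 150}

def official_budget(*connectors: str) -> int:
    # Slot power is always available on top of the external connectors.
    return PCIE_SPEC_W["slot"] + sum(PCIE_SPEC_W[c] for c in connectors)

print(official_budget("8pin", "8pin"))          # 375 W: reference 2x 8-pin 2080 Ti
print(official_budget("8pin", "8pin", "8pin"))  # 525 W: a 3x 8-pin card
```

So a 380W BIOS on a 2x 8-pin card is already slightly past the paper spec, which is what the quoted post means by "a bit beyond 2x 8 pin 'official' spec"; in practice the connectors and slot have headroom above those numbers.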

Personally I don't care about power usage; I paid top dollar for the best, so I'd better squeeze every single MHz out of it. I doubt you'd notice it on the electricity bill.
 


kot0005 said:


> Can you change my card in the owner's list? I RMA'd my Aorus waterforce. I am on the 2080Ti Seahawk EX X card now.


 
Show me a picture of it, in your rig, and I'll add it!
 


ENTERPRISE said:


> As for the RGB, honestly I could care less about it. However in today's age everything is plastered in RGB so there is not much choice lol.


 
From what I've observed, people seem to take "RGB" literally, Red.. Green.. and Blue, when you have the option to use anything in between; just because you have fans capable of RGB doesn't mean you're forced to use the default color cycle/rainbow mode. I bought RGB for the ability to switch between red and, for example, blue (UV), depending on my current parts or what I like at the time. I could never stand a constant rainbow next to me 24/7. On my APEX motherboard I only use the color red, at a puny brightness of just 5% for daily use; barely bright enough to be seen, and it won't light up more than the actual motherboard.










Color Cycle example, this is what I see the vast majority of RGB capable builds use.










Why not mix them and make pink?










I don't even know what to call this color.








   









For anyone interested, check my signature for the link to the build log.
 


arrow0309 said:


> OK, so I'm gonna forget about this Samsung since it also has pwm brightness so I'm not gonna like its flickering (in case that was noticeable).
> Tell me, what do you think about this newer AOC Agon ag322qc4?
> It may be preorder for only £375 (from amazon.co.uk).
> This one has pwm or not, how about the ghosting?


 
I did not notice the PWM at all; the Samsung and Acer were more or less identical in every way, except the Samsung was capable of 50 nits more. I also just remembered something tiny that bugged me on the Samsung: its HDR mode could only auto-detect, while on the Acer you can toggle it manually.

PWM is not used according to the *TFTCentral Review*. The overdrive modes are 4 levels "Off, Weak, Medium, Strong", the more control the better. 9.3ms G2G at the Strong level.

"_If you switch up to the maximum '*strong*' setting there is however a nice improvement in motion clarity. The image becomes sharper and clearer, and a lot of the blurring has been removed. Overall if we ignore that one slow transition in this measurement sample, we have a far better *7.3ms G2G response time average now which is just about fast enough to keep up with the 144Hz frame rate demands.* We didn't observe any obvious smearing or blurring in practice at 144Hz with the overdrive setting on 'strong'. There was some minor overshoot creeping in on a couple of transitions, but nothing you could notice in practice._" - *TFTCentral Review*

That sounds really good; 144Hz requires roughly 6.94ms G2G to keep up. The "Strong" overdrive is aggressive just like on my Acer XZ1: there's ZERO blur at 144Hz but quite *a lot* of overshoot, and the complete elimination of blur makes it worth it.
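That 6.94ms figure is simply the frame time at 144Hz; a small sketch of the math (the 7.3ms value is TFTCentral's "strong" average quoted above):

```python
# A panel "keeps up" with a refresh rate when its G2G response time
# is at or below the frame time: 1000 ms / refresh rate in Hz.

def frame_time_ms(refresh_hz: float) -> float:
    return 1000.0 / refresh_hz

def keeps_up(g2g_ms: float, refresh_hz: float) -> bool:
    return g2g_ms <= frame_time_ms(refresh_hz)

print(f"144 Hz frame time: {frame_time_ms(144):.2f} ms")  # ~6.94 ms
print(keeps_up(7.3, 144))  # False: 7.3 ms is just over, though close enough in practice
```

Which matches TFTCentral's wording of "just about fast enough": 7.3ms is a hair over one 144Hz frame, so the slowest transitions still bleed slightly into the next frame.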

It also has HDMI2.0 and DP1.2, so you can hook up a console no problem, DisplayHDR 400 is a nice touch. RGB illumination on the back surprised me.

This monitor is extremely good for the price. I strongly dislike *how it looks*, especially the foot, but if you would mount it on a stand then I guess it's fine.

My Acer in all its glory as a comparison, got it mounted on a stand, but I like the foot (more than the other models).












arrow0309 said:


> I've also had the newer VA Predator Z35p @120hz and only kept it a couple of weeks, the ghosting (and colors as well) were far from my taste in BF1 for instance.


 
I hope you're not judging an entire panel type based on experience with a single monitor. The Z35P, for instance, has lower contrast and a lower refresh rate, and being a G-Sync monitor it does not feature the traditional blur reduction methods as far as I know, just Nvidia ULMB if I'm not mistaken, which might be the reason for the ghosting. Correct me if I'm wrong.

Colors / Calibration / Brightness / Blur Reduction vary a lot between brands and monitors/panels. I'd recommend anyone looking for a new monitor to read reviews on as many as possible, make a list of your top choices, and order the one you believe is best; simply return it and try the next one if it wasn't up to your expectations.
 


CptSpig said:


> IMO you can't have a better experience with ultra wide screen monitors at this time.


 
Definitely tempted to try one, mostly just to see how it handles content that doesn't support ultrawide, since that is its main flaw.


----------



## acmilangr

Hello
Soon I will get Asus strix 2080ti OC

Is there any safety bios for this card with higher power limit?


----------



## J7SC

Ah, now it gets interesting:

*Kingpin 2080 Ti* shown at CES... AIO cooling, 3x 8-pin, 520W max (!), EVBot connector (still have one here) and more


----------



## Duskfall

Does anyone know if I can flash the ROG-STRIX-RTX2080TI-O11G-GAMING BIOS to the ROG-STRIX-RTX2080TI-11G-GAMING card?


----------



## ValSidalv21

3DMark Port Royal is out, I scored 8433 on my air cooled VENTUS OC http://www.3dmark.com/pr/1513


----------



## Pepillo

ValSidalv21 said:


> 3DMark Port Royal is out, I scored 8433 on my air cooled VENTUS OC http://www.3dmark.com/pr/1513


8.927, Gigabyte Aorus Xtreme http://www.3dmark.com/pr/1748


----------



## nycgtr

Duskfall said:


> Anyone knows if I can flash ROG-STRIX-RTX2080TI-O11G-GAMING Bios to ROG-STRIX-RTX2080TI-11G-GAMING card?


All the 2080 Ti Strixes are identical regardless of model, so you can flash the BIOS freely among them. Whenever the Matrix BIOS is available we can flash that as well, since it's also just the Strix PCB.


----------



## nycgtr

ValSidalv21 said:


> 3DMark Port Royal is out, I scored 8433 on my air cooled VENTUS OC http://www.3dmark.com/pr/1513


Not home so can't run it. Does it work with nvlink?


----------



## nyk20z3

O Yea


----------



## sblantipodi

Hi guys, I don't know if my 5930K is limiting my RTX 2080 Ti.

I have just downloaded the Port Royal RTX benchmark, and with the card at stock I get only 7350 points.
If I OC the card at +150MHz/+500MHz I go up to 8160 points.

Am I CPU limited?
I have seen people with the card at stock doing 7900 points on an 8700K CPU.

Can you share your results? Am I heavily CPU limited?

Thanks


----------



## ValSidalv21

nycgtr said:


> Not home so can't run it. Does it work with nvlink?


No idea, only got the one.



sblantipodi said:


> hi guys. I don't know if my 5930K is limiting my RTX2080 Ti.
> 
> I have just downloaded Port Royal RTX Benchmark and with the card at stock I do only 7350 points.
> If I OC the card @+150MHz/+500MHz I go up to 8160 points.
> 
> Am I CPU limited?
> I have seen people with card at stock doing 7900 points on a 8700K CPU.
> 
> can you share your results? am I heavily CPU limited?
> 
> thanks


You shouldn't be; mine is a 5820K and I got about 8100 on a stock-clocked card. That 8400 I posted is with a small core OC and stock memory clock.


----------



## sblantipodi

ValSidalv21 said:


> nycgtr said:
> 
> 
> 
> Not home so can't run it. Does it work with nvlink?
> 
> 
> 
> No idea, only got the one.
> 
> 
> 
> sblantipodi said:
> 
> 
> 
> hi guys. I don't know if my 5930K is limiting my RTX2080 Ti.
> 
> I have just downloaded Port Royal RTX Benchmark and with the card at stock I do only 7350 points.
> If I OC the card @+150MHz/+500MHz I go up to 8160 points.
> 
> Am I CPU limited?
> I have seen people with card at stock doing 7900 points on a 8700K CPU.
> 
> can you share your results? am I heavily CPU limited?
> 
> thanks
> 
> Click to expand...
> 
> You shouldn't be, mine is 5820K and I got about 8100 on a stock clocked card, that 8400 I posted is with small core OC and stock memory clock.
Click to expand...

I got 7300 on the stock card, don't know why.
Is your 8100 at stock but with the power limit and temp limit cranked up?


----------



## MrTOOSHORT

You should get at least 8000.

I got 9300 points with a 5.3GHz 9900K and a 2085MHz 2080 Ti. Going from 5 to 5.3GHz did very little for the score.

*https://www.3dmark.com/pr/1946*


----------



## sblantipodi

False alarm, Steam was updating gigs of games in the background.
Now my score is 7900 at stock and 8350 with the OC, so that's fine.


----------



## CptSpig

zhrooms said:


> I dare you to take a picture of them in the dark with maximum brightness, "_no IPS glow_". There is no escaping IPS glow.
> 
> Also you're comparing an almost 2.5 year old VA panel to your AH-IPS. And the ghosting is not intolerable, reading a review on the monitor it shows very good numbers when ULMB/Overdrive is turned on, or maybe you forgot to enable them?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Which new panels with HDR? *These?*
> 
> And yes I absolutely agree with you about return policy, you should always buy a monitor with more carefulness than any other computer part or accessory, there are plenty of monitors with good panels that are really poorly configured (OSD, Software). For example, in one of my recent posts I listed 6 different VA panels, they are very similar in specification but they vary wildly in terms of software calibration/features, as an example the Samsung C32HG70 was unusable for me in gaming as they lock the brightness to 220 nits, not allowing you to adjust it, and this information is not documented anywhere, can take up to a year for a new monitor to be reviewed properly where this information surfaces, so always be ready to return it when you buy a monitor.


I would not buy a FreeSync panel. *This!* It's a VA panel; I hope the HDR will make the difference!

You don't need to feel sorry for people with IPS panels; we are very happy with our choices. Time to stop measuring whose is longer and get back to the awesome 2080 Tis.

Here is a picture of a dark scene on my AW3418DW, DXR not enabled, using my Titan Xp. See spoiler:



Spoiler


----------



## sblantipodi

8692 3d marks on port royal
https://www.3dmark.com/pr/2074

5930K @ 4.2GHz
2080 Ti @ +150MHz/+500MHz daily stable

Pretty satisfied with my old rig; it can push this beast pretty well.


----------



## ValSidalv21

http://www.3dmark.com/pr/2089
8575 here with +500 on the memory; +1000 crashes midway through the run. Not sure about the core OC as I exported the curve from the OC Scanner, but the max registered boost in MSI AB is 2105MHz on a cold card, sustained about 1950-2050MHz.


----------



## sblantipodi

ValSidalv21 said:


> http://www.3dmark.com/pr/2089
> 8575 here with +500 on the memory, +1000 crashes midway through the run. Not sure about the core OC as I exported the curve from the OC Scanner, but max registered boost with MSI AB is 2105 on a cold card, sustained about 1950-2050Mhz.


Why would you ever put the memory at +1000?
Don't do it; it's not a reasonable or usable frequency.

I prefer manual OC over the OC Scanner, using +150MHz on the core and +500MHz on the memory.


----------



## boli

Just a quick message to agree that the first post is easily the most comprehensive 2080 Ti resource in all the web, and _very_ much appreciated.

Regarding displays: people have different preferences. I value sharpness very highly, as I've been working on 5K displays for years and gaming at UHD for years – though until last summer at 60 Hz only. 120 Hz is very nice indeed, but the more noticeable change (to me anyway) is HDR. Anyway, I concur: if you can, try before you settle on a display.


----------



## Duskfall

nycgtr said:


> All the 2080ti strixs are identical regardless of model. You can flash the bios freely among them. Whenever the matrix bios is available we can flash as well since it's also just the strix pcb.


Thanks for the info!


----------



## kx11

this demo is pretty


https://www.3dmark.com/3dm/32186042?


----------



## arrow0309

CptSpig said:


> IMO, I would keep the X34P it is the same panel as the AW3418DW and I think it's the best available at this time. If you don't have a Titan V, Xp or 2080ti you will be hard pressed to achieve 144 FPS (ultra) in AAA games. Wait for HDR panels to arrive should be soon. My 2080ti (on water) in Battlefield V with all setting including Ray tracing On ultra I can maintain over 100 FPS. IMO you can't have a better experience with ultra wide screen monitors at this time. Good luck with you final decision.


Decided to keep my X34P since I almost love it 




zhrooms said:


> I did not notice the PWM at all, the Samsung and Acer were more or less identical in every way except the Samsung were capable of 50 higher nits. I also just remembered something tiny that bugged me on the Samsung, the HDR mode were only able to auto detect, on Acer you can manually toggle it.
> 
> PWM is not used according to the *TFTCentral Review*. The overdrive modes are 4 levels "Off, Weak, Medium, Strong", the more control the better. 9.3ms G2G at the Strong level.
> 
> "_If you switch up to the maximum '*strong*' setting there is however a nice improvement in motion clarity. The image becomes sharper and clearer, and a lot of the blurring has been removed. Overall if we ignore that one slow transition in this measurement sample, we have a far better *7.3ms G2G response time average now which is just about fast enough to keep up with the 144Hz frame rate demands.* We didn't observe any obvious smearing or blurring in practice at 144Hz with the overdrive setting on 'strong'. There was some minor overshoot creeping in on a couple of transitions, but nothing you could notice in practice._" - *TFTCentral Review*
> 
> That sounds really good, it requires like 6.94ms G2G for 144Hz, the "Strong" overdrive is aggressive just like on my Acer XZ1, there's ZERO blur at 144Hz, but quite *a lot* of overshoot, but the complete elimination of blur makes it worth it.
> 
> It also has HDMI2.0 and DP1.2, so you can hook up a console no problem, DisplayHDR 400 is a nice touch. RGB illumination on the back surprised me.
> 
> This monitor is extremely good for the price. I strongly dislike *how it looks*, especially the foot, but if you would mount it on a stand then I guess it's fine.
> 
> My Acer in all its glory as a comparison, got it mounted on a stand, but I like the foot (more than the other models).
> 
> [ ... cut ... ]
> 
> 
> 
> I hope you're not judging an entire panel type based on the experience with a single monitor, for instance the Z35P has lower contrast, lower refresh rate and G-Sync so it does not feature the traditional blur reduction methods as far as I know, just Nvidia ULMB if I'm not mistaken, which might be the reason for the ghosting. Correct me if I'm wrong.
> 
> Colors / Calibration / Brightness / Blur Reduction vary a lot between brands and monitors/panels, I'd recommend anyone looking for a new monitor to read reviews on as many as possible, make a list with your top choices and proceed with ordering the one you believe is best, simply return and try the next one if it wasn't up to your expectation.
> 
> 
> 
> Definitely tempted to try one, mostly just to see how it works with content not supported by Ultrawide, since that is its main flaw.


Thank you, but I'm still gonna keep my UW Predator @ 120Hz.
With daily brightness (37 value):










If I do want to test some 4K HDR I can still use my 49" Samsung Q7F 



CptSpig said:


> I would not buy a freeSync panel *This!* It's a VA panel I hope the HDR will make the difference!
> 
> You don't need to feel sorry for people with IPS panels we are very happy with our choices. Time to stop seeing who's is longer and get back to the awesome 2080ti's.
> 
> Here is a picture of a dark seen on my AW3418DW DXR not enabled using my Titan Xp. See spoiler:
> 
> 
> 
> Spoiler


Nice monitors (ours, same panels); once you've tasted the UW gaming experience you can hardly turn back to a square panel.
Mine is also calibrated


----------



## Deathscythes

zhrooms said:


> No, it's just a Strix with a different cooler, on their website they use the word _binned_ but that is what they call the factory overclocked GPU (300A) I believe.
> 
> I don't get what you mean, if it is samsung or not is irrelevant?


Thanks, I actually got my answer: apparently the top 5% of TU102 cut-down dies will be used for the Matrix cards, so they are actually binned. 
Also, I wouldn't be surprised if Samsung GDDR6 overclocked better than Micron's.


----------



## Zammin

Deathscythes said:


> Thanks, I actually got my answer, apparently the top 5% TU102 cut down dies will be used for Matrix cards. So they are actually binned
> Also I wouldn't be surprised if samsung GDDR6 overclocked better than Micron's.


In my experience testing two FEs (one with Micron and the other with Samsung), the Samsung memory could do around +1000 (8000MHz) and the Micron memory about +700 (7700MHz), but when benchmarking Unigine Heaven it didn't make much difference, maybe 2FPS more on average. I wonder if this might be because of different memory timings, but as far as I know that information isn't out there, so I can't be sure.
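For anyone curious what those offsets mean in bandwidth terms, here's a sketch using the 2080 Ti's spec-sheet numbers (7000MHz Afterburner-displayed stock clock, 352-bit bus; the doubling from displayed clock to data rate is the usual Afterburner convention for GDDR6):

```python
# GDDR6 on the 2080 Ti: Afterburner displays 7000 MHz at stock; the effective
# (data) rate is double that, 14000 MT/s. Bandwidth = data rate * bus width / 8.
BUS_BITS = 352     # 2080 Ti memory bus width
STOCK_MHZ = 7000   # Afterburner-displayed stock memory clock

def bandwidth_gbs(offset_mhz: int) -> float:
    effective_mts = (STOCK_MHZ + offset_mhz) * 2
    return effective_mts * BUS_BITS / 8 / 1000  # GB/s

print(bandwidth_gbs(0))      # 616.0 GB/s, matches the spec sheet
print(bandwidth_gbs(1000))   # 704.0 GB/s with the +1000 Samsung OC
print(bandwidth_gbs(700))    # 677.6 GB/s with the +700 Micron OC
```

So +1000 is roughly 14% more raw bandwidth; whether a given benchmark shows that depends on how memory-bound it is, which is consistent with Heaven barely moving.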


----------



## ESRCJ

Zammin said:


> Deathscythes said:
> 
> 
> 
> Thanks, I actually got my answer, apparently the top 5% TU102 cut down dies will be used for Matrix cards. So they are actually binned
> Also I wouldn't be surprised if samsung GDDR6 overclocked better than Micron's.
> 
> 
> 
> In my experience testing two FE's (one with Micron and the other with Samsung) the Samsung memory could do around +1000 (8000Mhz) and the Micron memory could do about +700 (7700Mhz) but when benchmarking Unigine Heaven it didn't make much difference, maybe 2FPS more on average. I wonder if this might be because of different memory timings, but as far as I know that information isn't out there so I can't be sure.
Click to expand...

I don't think Heaven is very memory-intensive. Time Spy Graphics Test 2 is pretty memory-intensive. My Samsung GDDR6 can do +1400MHz for Timespy and more for other benchmarks.


----------



## Gunslinger.

Zammin said:


> In my experience testing two FE's (one with Micron and the other with Samsung) the Samsung memory could do around +1000 (8000Mhz) and the Micron memory could do about +700 (7700Mhz) but when benchmarking Unigine Heaven it didn't make much difference, maybe 2FPS more on average. I wonder if this might be because of different memory timings, but as far as I know that information isn't out there so I can't be sure.


Unigine Heaven is a terrible benchmark for testing modern GPUs because it's sooooooo CPU bound.


----------



## kot0005

EK's 2080 Ti Strix WB: https://www.techpowerup.com/#g251348-3


----------



## nycgtr

http://www.3dmark.com/pr/2969

It does work with NVLink; I guess my hotfix makes the driver not WHQL.


----------



## Zammin

gridironcpj said:


> I don't think Heaven is very memory-intensive. Time Spy Graphics Test 2 is pretty memory-intensive. My Samsung GDDR6 can do +1400MHz for Timespy and more for other benchmarks.


Ah okay, I wasn't aware; I probably should have tried Time Spy then. In games I didn't notice higher frame rates on the Samsung card though, if that's anything to go by. I know it's not a scientific test.



Gunslinger. said:


> Unigine Heaven is a terrible benchmark for testing modern GPU's because it's sooooooo CPU bound.


I don't think I'm CPU bound in Heaven at 1440p ultra settings with a 9900K; in fact, the CPU usage is quite low.


----------



## ESRCJ

Zammin said:


> gridironcpj said:
> 
> 
> 
> I don't think Heaven is very memory-intensive. Time Spy Graphics Test 2 is pretty memory-intensive. My Samsung GDDR6 can do +1400MHz for Timespy and more for other benchmarks.
> 
> 
> 
> Ah okay, I wasn't aware. Probably should have tried Time Spy then. In games I didn't notice higher frame rates on the Samsung card though, if that's anything to go by. I know it's not a scientific test.
> 
> 
> 
> Gunslinger. said:
> 
> 
> 
> Unigine Heaven is a terrible benchmark for testing modern GPU's because it's sooooooo CPU bound.
> 
> Click to expand...
> 
> I don't think I'm CPU bound in Heaven at 1440p ultra settings with a 9900k, in fact the CPU usage is quite low.
Click to expand...

You're not going to notice a few FPS while gaming. It's mainly beneficial for benchmark scores 😉


----------



## jelome1989

Is the Strix 2080 Ti Call of Duty edition included in Nvidia's Anthem + Battlefield V promotion?


----------



## kx11

jelome1989 said:


> Is the Strix 2080 Ti Call of Duty edition included in Nvidia's Anthem + Battlefield V promotion?



I think they give you COD BO only.


----------



## jelome1989

kx11 said:


> jelome1989 said:
> 
> 
> 
> Is the Strix 2080 Ti Call of Duty edition included in Nvidia's Anthem + Battlefield V promotion?
> 
> 
> 
> 
> i think they give you COD BO only
Click to expand...

Yeah, that's what I first thought as well. But looking at its product page, the CoD key is included with the accessories, so technically you could argue the CoD key is not a separate promotion bundle, and thus you'd be eligible for other promotions, if that makes sense. Also, the Newegg listing states that it's indeed included in the promotion. Saw it just now.


----------



## J7SC

...and here I thought my 2x Aorus 2080 Ti WB with all those extra monitor connectors were future-proof (as far as that goes), if not just a bit overkill... wrong. This CES display has 268 LG OLED panels (with bending) in it...

Now *this* is what I call *high-fidelity surround vision* for gaming!


----------



## kx11

nycgtr said:


> All the 2080ti strixs are identical regardless of model. You can flash the bios freely among them. Whenever the matrix bios is available we can flash as well since it's also just the strix pcb.





Are you sure? I mean, the voltage controller on the Matrix must be different from the one on the other Strix models.


----------



## Baasha

A buddy of mine bought a 2080 Ti and says his system immediately shuts down upon turning it on. He said he got the system to turn on, but the fans were blasting at 100% and it shut down immediately, in like 1 or 2 seconds.

I'm not sure what he's talking about since I've never run into any issue like this with either of my 2080 Ti's.

Do the fans on the 2080 Ti spin at idle (without MSI or PrecisionX installed)? Or do you have to set a separate fan curve for it to spin at idle?

Any thoughts/advice on how to fix this?


----------



## Zammin

gridironcpj said:


> You're not going to notice a few FPS while gaming. It's mainly beneficial for benchmark scores 😉


Yeah, that's pretty much what I was getting at. I mostly game, so it's all the same to me; I can see it being beneficial for someone wanting to bench though. Sadly, both Samsung cards I tested didn't overclock very well on the core, although I'm sure that's just silicon lottery and has nothing to do with the memory. If I could have had the core from my Micron card and the Samsung memory, that would have been nice, haha.


----------



## kx11

Baasha said:


> A buddy of mine bought a 2080 Ti and says that his system immediately shuts down upon turning it on. He said he got the system to turn on but the fans were blasting at 100% but it shuts down immediately in like 1 or 2 seconds.
> 
> I'm not sure what he's talking about since I've never run into any issue like this with either of my 2080 Ti's.
> 
> Do the fans on the 2080 Ti spin at idle (without MSI or PrecisionX installed)? Or do you have to set a separate fan curve for it to spin at idle?
> 
> Any thoughts/advice on how to fix this?





Unplug the fan headers and plug them in again while the GPU is completely disconnected from power/mobo; that should help. If not, he should return it.


----------



## kot0005

Baasha said:


> A buddy of mine bought a 2080 Ti and says that his system immediately shuts down upon turning it on. He said he got the system to turn on but the fans were blasting at 100% but it shuts down immediately in like 1 or 2 seconds.
> 
> I'm not sure what he's talking about since I've never run into any issue like this with either of my 2080 Ti's.
> 
> Do the fans on the 2080 Ti spin at idle (without MSI or PrecisionX installed)? Or do you have to set a separate fan curve for it to spin at idle?
> 
> Any thoughts/advice on how to fix this?



Seems like a PSU issue; your buddy must be trying to use a single cable with a split 8-pin, or the PSU just isn't good?


----------



## VPII

So I took off the cooler from my Galax RTX 2080 Ti in anticipation of the NZXT Kraken G12 and Corsair H110 CW arriving, only to find that the shipping was economy, so I may get it sometime on Friday, or tomorrow if I'm lucky. So I decided to refit the cooler; I only have Thermaltake TG8 thermal compound. After fitting it and checking temps, I found they actually dropped a bit. I have to say the thermal paste on the GPU looked a little dodgy and all over the place when I removed the cooler; some acetone helped clean it off nicely. At least now, under load, I'm between 2 and 8°C below what I was before.


----------



## krizby

Lol, so I just figured out that you can now modify the VF curve once the card is heat-soaked by pressing F5 in MSI Afterburner 4.6.0 beta 10. It goes like this:

1. Set up your overclock, apply it, then save the profile.
2. Go into a game so that your card heats up to its maximum gaming GPU temp.
3. Alt-tab to the Afterburner VF curve and press F5; Afterburner will re-detect your heat-soaked VF curve.
4. Modify the curve to your liking by holding SHIFT and dragging it. You can minimize the frequency loss or even bring the curve back to the cool-state VF curve, but it will definitely crash if your cool-state overclock is at the ragged edge already.
5. Apply, save the OC profile again, then restart the PC.

Here is the good bit for people who undervolt: instead of losing frequency when the card heats up, you can make it use a higher voltage. For example, you set up the undervolt at 2010MHz/1.000V; once your card heats up, you can set it to 2010MHz/1.006V (water-cooled card). For an air-cooled card, remember that for every 10°C increase you have to use one voltage step higher, starting at 44°C.
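That air-cooled rule of thumb can be sketched in a few lines; note the 6.25mV step size is my assumption based on typical Turing voltage granularity, not something measured:

```python
# Rule of thumb from the post: one voltage step per full 10 °C above 44 °C.
# STEP_MV = 6.25 is an assumed Turing voltage step size, not a measured value.
STEP_MV = 6.25

def heat_soak_offset_mv(gpu_temp_c: float) -> float:
    """Extra voltage (mV) to add to a cool-state undervolt at this temperature."""
    if gpu_temp_c < 44:
        return 0.0
    steps = int((gpu_temp_c - 44) // 10) + 1  # first step kicks in at 44 °C
    return steps * STEP_MV

for temp in (40, 44, 54, 64):
    print(f"{temp} °C -> +{heat_soak_offset_mv(temp)} mV")
```

So a card gaming at around 54°C would want its 2010MHz point set two steps above the cool-state voltage, which lines up with the 1.000V → 1.006V example above for a cooler water-cooled card.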


----------



## nycgtr

kx11 said:


> are you sure ? i mean the voltage controller on the matrix must be different than the other Strix models


Knowing Asus, it's just a Strix PCB with a new cooler and a higher boost. I believe a few articles and the GamersNexus teardown said it was a Strix. We will know soon enough, but in the video teardown it looks identical.





Watch the video. When I was waterblocking my two Strixes I wondered why certain things on the PCB were blank or rotated. Now I know. You can see the only really noticeable difference is that they added a header for the pump, in an area which was mostly blank on the Strix.


----------



## boli

krizby said:


> Lol so I just figured out that now you can modify the VF curve once the card is heat soaked by pressing F5 in MSI afterburner 4.6.0 beta 10 , so it goes like this: [snip]



Sometimes I edit the curve with windowed FurMark running to keep the card hot; that seems to mostly work as well. Applying a saved preset has the same effect as the reload with F5, I think.

FurMark is very intense though, so the lower-frequency part of the curve is active, unlike in most games.


----------



## kot0005

Quick Port Royal run on the 330W BIOS using my Seahawk EK: +100 core, +1000 mem

https://www.3dmark.com/pr/5859


----------



## Bighouse

So, I've got two RTX2080ti cards, and an NVLink arriving today. Also arriving are all the parts from EKWB to hook the two into my cooling system. I'll likely be installing tomorrow or on the weekend. But, I'm not sure what method I should use to install the cards, so I'll ask you guys what you'd recommend:

1. Unbox the RTX's and install each one, fan-cooled into the mobo and test each one individually to see if they work. Then install them as a pair with NVLink to test them out. Then disassemble them and install the water cooling components.
2. Unbox the RTX's and just disassemble the fans and install the water cooling components, and coolant, test for leaks, and then fire it up.

2 is my preferred option, but 1 gives me the peace of mind that if one card is defective, I might catch it before I get too far into the build.

Opinions?


----------



## Krzych04650

zhrooms said:


> I dare you to take a picture of them in the dark with maximum brightness, "_no IPS glow_". There is no escaping IPS glow.
> 
> Also you're comparing an almost 2.5 year old VA panel to your AH-IPS. And the ghosting is not intolerable, reading a review on the monitor it shows very good numbers when ULMB/Overdrive is turned on, or maybe you forgot to enable them?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Which new panels with HDR? *These?*
> 
> And yes, I absolutely agree with you about return policy; you should always buy a monitor with more care than any other computer part or accessory. There are plenty of monitors with good panels that are really poorly configured (OSD, software). For example, in one of my recent posts I listed 6 different VA panels; they are very similar in specification but vary wildly in terms of software calibration/features. As an example, the Samsung C32HG70 was unusable for me in gaming, as they lock the brightness to 220 nits without letting you adjust it, and this information is not documented anywhere; it can take up to a year for a new monitor to be reviewed properly and for this information to surface, so always be ready to return a monitor when you buy one.
> 
> 
> 
> _*"Flawless"*_
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Makes me sad knowing there are people out there who own IPS monitors and don't know any better; they're just awful panels for the average user, and only excel in bright office environments where you need the superior viewing angles and high color accuracy. Horrible experience in any dark environment, such as... your own home. There is no getting rid of the orange IPS glow; it is the worst thing I have experienced on a monitor in the past 20 years. Imagine a quarter of your 16:9 screen showing the color orange when watching a dark movie or playing a game. What it looks like in the picture above is what it looks like with your own eyes. You can minimize it by turning down the brightness, but it will still be visible, and if you want to experience any HDR content, such as a console game or Blu-ray/Netflix, the brightness will be maxed; then it's laughably bad, so much glow you'd think there was something wrong with the monitor, but no, it's just the panel being IPS. I would much rather use an 8 year old 24" TN 120Hz panel than a 27" IPS 144Hz panel; the experience/immersion is far better on the TN, a slight white-bluish glow instead of an insane orange glow. If you're color blind I guess the IPS is better.
> 
> 
> 
> 
> 
> 
> 
> 
> Maybe you've noticed I hate IPS with a passion at this point, but it's really not without reason. Choosing IPS (AHVA) over VA today is about the biggest mistake you can make, and that's also widely agreed upon by monitor enthusiasts.
> 
> If you do *professional work* and need high color accuracy/consistency, then of course you probably choose at the very least a 10-bit 4K AH-IPS panel, but for any kind of *entertainment* VA offers a much more enjoyable experience. The black level of VA is around 0.05 vs 0.12 for AHVA; you can easily tell the difference with the monitors side by side, while what you won't notice is the very slightly better color accuracy of IPS. And I've already gone through VA in gaming in my previous post; it is slower, but not so slow that it becomes a problem.
> 
> I am not saying VA is the greatest thing ever, by any stretch of the imagination, they *all* belong in the trash, VA just happens to be the *lesser evil* in this case.
> 
> 
> 
> 
> 
> 
> 
> 
> OLED is where it's at, out of this world, but we're still at least a whole year away from having it as PC monitors, I'd be perfectly fine with wall mounting a 40" 120Hz OLED /w HDMI 2.1, really not that much bigger than 32/34".


I can agree on the Alienware. I have tested it along with 9 other ultrawides over the course of the last 3 years and it has the lowest grade panel I have seen out of all of them. Even a little worse than the Samsung s34e790c, which is the worst display I have seen (not counting some cheap office displays), literally just a bigger version of my 10 year old 1360x768 19" $50 Samsung. The Alienware had lots of glow, some bleeding, terrible color and gamma accuracy with no OSD settings to fix it, and poor brightness. I don't really get why it is having so much success. The X34P may look a bit embarrassing too, but they both do, and the X34P is at least calibrated and has proper brightness. The Alienware looked like a joke next to my old 34UC98, which has perfect calibration, and next to the 38UC99, which has the lowest glow of all the ultrawides.

Though I definitely don't agree on VA. These panels are very poor, including Samsung SVA QD, and AUO AMVA is not even worth calling a panel; it is the same nonsense as AHVA. I have tested two SVA models, the Samsung s34e790c and the new Samsung CF791. The s34e790c is not even worth mentioning, but the CF791 was very disappointing. I was confident it would beat my UC98 easily and I genuinely got it as an upgrade, but it wasn't even close. Poor viewing angles, poor text sharpness and a general poor look to it despite factory calibration, and most disappointing of all, poor black levels and poor black uniformity. The same as you can see in the Rtings.com review. It didn't look meaningfully better than the UC98, and that 2500:1 was largely theoretical; it failed completely at what is supposed to be THE ONLY advantage of VA, which is black level and dark-scene performance. (100% vs 100% brightness; both the UC98 and CF791 are 300 nits max.)

I also tested the newest ultrawide, LG 34GK950G 120 Hz G-sync Nano IPS, which was ridiculously expensive, but it looked like this 










And all this Nano IPS did was enforce an extended gamut on all content, because the FreeSync version has an sRGB mode but the G-Sync one doesn't. And it didn't support HDR (well, even if it did, it would be fake). The question is what that Nano IPS is even for, if games don't use the extended gamut for SDR. What would I do even if this were the display's only issue, run ReShade every time to desaturate?

So upgrading from my broken UC98 (stuck pixels and two vertical lines across the screen) to a G-Sync ultrawide was a terrible experience. Just so as not to buy another UC98 a frustrating 2.5 years later, I bought the 38UC99 to at least make it some kind of upgrade. The lack of G-Sync is disappointing, but since it has VRR and Nvidia now supports that from Jan 15th, I may still get it in the end, certainly not at G-Sync-module quality but at least partially.

My 38UC99 is even better than the UC98, which was by very, very far the best at the time (compare it to some screens like the PG348Q or Dell U3415W, which are so bad they are barely even monitors), and much more uniform on black than the Samsung CF791 was. The glow is slight and greyish, not orange; it does not cover the picture with an orange cloud like the orange glow you mentioned. It doesn't have that, as one of very few. I am using it at 100% brightness for gaming with Ambibox (DIY Ambilight), and I can comfortably set it against any existing VA ultrawide and watch a one-sided fight, no comparison. Theoretical contrast destroyed by a lack of uniformity is nowhere near enough to give up everything else, because there is nothing else that SVA does better than a proper LG AH-IPS panel from screens like the 34UC98 and 38UC99; it is significantly worse at everything else.

I didn't really like VA on TVs either. Even 5000:1 was not enough to get rid of poor dark-scene performance in a dark room. Over a year ago I chose a FullHD LG OLED over a Sony X900E after a direct comparison at home in a dark room, hands down, even SDR on the OLED vs HDR on the X900E. Back then 4K OLEDs weren't that cheap; now I have a 4K OLED, and an OLED phone too. I also think OLED or MicroLED is the way to go, and whether you choose IPS or VA, both are still light years behind in picture quality. But for what we have now, and based on all the displays and TVs I have tested, I definitely prefer a quality LG IPS (relative to other displays) to what Samsung does with SVA QD; OLED is out of this world in comparison, and AUO is AUO: garbage. TN is what it is, but at least you know you are compromising picture quality for speed and a lower price, so it has its good uses.

And yes, you most definitely have to have a good return policy when buying a display or a TV. These are by far the hardest things to buy. Suffice it to say I have bought a total of 13 monitors and TVs in the last 3 years and kept only 4. Even if you have experience and do a lot of research, your chances of getting a good display the first time are still miserable. Even unit-to-unit variance within the same model is enough to cause a few returns, not to mention the MASSIVE disparity between what reviews say and the actual state of things. There are only two types of reviewers: pseudo ones, who get something, don't know anything about it, cannot see any flaws, talk about nothing for 5 minutes in a video and call it a review (tons of channels like that on YouTube); and theoreticians, who measure everything with their fancy equipment and never actually look at the display in practice. That would be TFT Central: for example, they don't even distinguish the glow between screens like the PG348Q, Alienware, LG 34GK950G and LG 34UC99, while in fact they are all drastically different, and this biggest flaw of the technology is not even covered, except for one token shot from the side that tells you nothing. And when you ask the guy about it, he will say some generic nonsense like "I don't see a problem, it is normal, it is not visible in a bright room." Bright room... when I see someone talking about a bright room I am just done. There is simply nobody I am aware of who does all of these pro measurements (which are very important, don't get me wrong) but also gives a real opinion from practical use. So usually what you end up with is completely different from what you read.


----------



## J7SC

Bighouse said:


> So, I've got two RTX2080ti cards, and an NVLink arriving today. Also arriving are all the parts from EKWB to hook the two into my cooling system. I'll likely be installing tomorrow or on the weekend. But, I'm not sure what method I should use to install the cards, so I'll ask you guys what you'd recommend:
> 
> 1. Unbox the RTX's and install each one, fan-cooled into the mobo and test each one individually to see if they work. Then install them as a pair with NVLink to test them out. Then disassemble them and install the water cooling components.
> 2. Unbox the RTX's and just disassemble the fans and install the water cooling components, and coolant, test for leaks, and then fire it up.
> 
> 2 is my preferred option, but 1 gives me the peace of mind that if one card is defective, I might catch it before I get too far into the build.
> 
> Opinions?



*...combination* of 1. and 2. ! You definitely want to test the cards - first individually, then in NVLink - after unboxing 'in their natural state' for any issues (boot-up ok, fans spinning, and of course any physical damage or signs that they were used and re-packaged). Once you're satisfied, *then* put the water-blocks on the cards, connect the rest of your GPU loop and do extensive leak-testing....

Good luck :thumb:


----------



## Esenel

Quick Port Royal bench with my daily +145 core and +1000 mem ends up with a nice 9333 points.

https://www.3dmark.com/pr/6414


----------



## AlbertoM

That Asus Matrix does not make sense to me. It's watercooled, but where will the hot air go? Radiators do not spread hot air to the sides like the fins of a heatsink; it will all go down toward the PCB.

There is probably space for the hot air to exit the sides, but IMHO it's not a good design, not to mention the air stays in your case and the hot air from the GPU ends up "cooling" the VRM.

Also, radiators work best with a fan exactly the width of the radiator right in front, or in push/pull.

I would buy this instead:

BTW, has anyone tried one of those? IDK if these are on the market yet though.


----------



## nyk20z3

AlbertoM said:


> That Asus Matrix does not make sense to me. It's watercooled, but where will the hot air go? Radiators do not spread hot air to the sides like the fins of a heatsink; it will all go down toward the PCB.
> 
> There is probably space for the hot air to exit the sides, but IMHO it's not a good design, not to mention the air stays in your case and the hot air from the GPU ends up "cooling" the VRM.
> 
> Also, radiators work best with a fan exactly the width of the radiator right in front, or in push/pull.
> 
> I would buy this instead:
> 
> BTW, has anyone tried one of those? IDK if these are on the market yet though.


I had the 2080 version and it's a great performer!


----------



## kot0005

krizby said:


> Lol, so I just figured out that you can modify the VF curve once the card is heat soaked by pressing F5 in MSI Afterburner 4.6.0 beta 10. It goes like this:
> 
> 1. Set up your overclock, apply, then save the profile.
> 2. Go into a game so that your card heats up to its maximum gaming GPU temp.
> 3. Alt+Tab to the Afterburner VF curve and press F5; Afterburner will re-detect your heat-soaked VF curve.
> 4. Modify the curve to your liking with SHIFT + drag. You can minimize the frequency loss or even bring the curve back to the cool-state VF curve, but it will definitely crash if your cool-state overclock is already at the ragged edge.
> 5. Apply, save the OC profile again, then restart the PC.
> 
> Here is the good bit for people who undervolt: instead of losing frequency when the card heats up, you can make it use a higher voltage. For example, if you set up the undervolt at 2010 MHz / 1.000 V, once your card heats up you can set the undervolt to 2010 MHz / 1.006 V (water-cooled card). For an air-cooled card, remember that for every 10C increase you have to use one voltage step higher, starting at 44C.


I have no idea what you are trying to achieve here. Is there any benefit to doing this if I am not undervolting? What do you mean by the OC crashing if the card isn't heat soaked? I mean, I can heat my card up running Heaven and F5 doesn't reset the curve. Looking for some more explanation.



Esenel said:


> Quick Port Royal bench with my daily +145 core and +1000 mem ends up with a nice 9333 points.
> 
> https://www.3dmark.com/pr/6414



which bios are you using ?


----------



## kot0005

This isn't a monitor thread... please use the relevant thread.


----------



## Medusa666

Found a really nice video on how the Zotac RTX cards are made, looks like a solid QC process : )


----------



## Murlocke

My EVGA 2080 Ti XC Ultra just died today after being in use since launch week. I was browsing the web and the computer turned off suddenly. It wouldn't even POST. After much troubleshooting I had to both remove the GPU AND clear the CMOS + cold boot before the computer would even start. As soon as I put the GPU back in, it wouldn't start again, and I had to repeat the process. It's completely shorting the system out.

Not a happy camper. Apparently my ASUS Z370 mobo doesn't have HDMI 2.0 onboard, so I am stuck with 4K30 or 1080p60 until this gets replaced...


----------



## Sheyster

Murlocke said:


> My eVGA 2080Ti XC Ultra just died today after being in use since launch week.


At least it's EVGA, so they'll probably hook you up fairly quickly. These reference-PCB cards do seem to be dying a lot.


----------



## LayZ_Pz

Got my RTX 2080 Ti Strix, first batch (so no double fan header). Has anyone got a BIOS mod done for that yet? The 125% power limit is quite... limiting.

Also, what about the warranty in Europe? I want to change the paste to something better, but one of the screws has one of those warranty stickers on it.


----------



## Deathscythes

kot0005 said:


> EK's 2080 Ti Strix WB https://www.techpowerup.com/#g251348-3


Nice! Release planned for mid February =)


----------



## krizby

kot0005 said:


> I have no idea what you are trying to achieve here. Is there any benefit to doing this if I am not undervolting? What do you mean by the OC crashing if the card isn't heat soaked? I mean, I can heat my card up running Heaven and F5 doesn't reset the curve. Looking for some more explanation.


What I achieve here is that the card will not throttle due to temperature. For example, you set a +120 MHz offset overclock; once the GPU reaches 44C the VF curve changes itself to a +105 MHz offset, and you can manually bring the VF curve back up to the +120 MHz offset (SHIFT + drag the curve). If your card doesn't crash, you just gained another +15 MHz. This doesn't benefit short benchmarks, since your temp won't reach 44C anyway, but it can bring total clock stability for people who require it (e.g. playing competitive games).
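To illustrate the arithmetic: the behavior described above is the usual GPU Boost pattern of losing one 15 MHz bin at the first temperature threshold and roughly every 10C after that. A rough model follows, with the 44C start point taken from observations in this thread rather than any documented spec:

```python
# Rough model of GPU Boost temperature step-down as described above:
# the effective clock drops one 15 MHz bin at ~44 C and roughly every
# 10 C after that. The thresholds are observations from this thread
# (assumptions), not NVIDIA-documented values.

BIN_MHZ = 15
FIRST_STEP_C = 44
STEP_INTERVAL_C = 10

def effective_clock(cool_clock_mhz: int, gpu_temp_c: float) -> int:
    """Estimate the heat-soaked boost clock from the cool-state clock."""
    if gpu_temp_c < FIRST_STEP_C:
        return cool_clock_mhz
    bins_lost = int((gpu_temp_c - FIRST_STEP_C) // STEP_INTERVAL_C) + 1
    return cool_clock_mhz - bins_lost * BIN_MHZ

# A card holding 2100 MHz while cool loses one bin once past ~45 C
print(effective_clock(2100, 45))  # -> 2085
```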


----------



## kot0005

krizby said:


> What I achieve here is that the card will not throttle due to temperature. For example, you set a +120 MHz offset overclock; once the GPU reaches 44C the VF curve changes itself to a +105 MHz offset, and you can manually bring the VF curve back up to the +120 MHz offset (SHIFT + drag the curve). If your card doesn't crash, you just gained another +15 MHz. This doesn't benefit short benchmarks, since your temp won't reach 44C anyway, but it can bring total clock stability for people who require it (e.g. playing competitive games).


Ahh, I see. I know what you mean now, because my clocks start at 2100 then drop to 2085 once it goes above 45C. I will try it out.


----------



## Hanks552

Nice! just trying it out


----------



## J7SC

Quick question* :thinking: *:

We know from various posts and of course the OP / first post that the GPU speed of the 2080 TI will throttle as temps go up, i.e. "a lower temperature allows the GPU to increase the core clock in steps of 15MHz, roughly at about every 10°C " 

But at what temp does this step-down in MHz *actually start*? 30C? 40C? :headscrat


----------



## Hanks552

J7SC said:


> Quick question* :thinking: *:
> 
> We know from various posts and of course the OP / first post that the GPU speed of the 2080 TI will throttle as temps go up, i.e. "a lower temperature allows the GPU to increase the core clock in steps of 15MHz, roughly at about every 10°C "
> 
> But at what temp does this step-down of MHz *actually start* ? 30 C ? 40 C ? :headscrat


In my experience, going from 30 to 40C I did lose MHz.


----------



## Shawnb99

Well, I regret buying from EVGA now. UPS is trying to charge me $120 in brokerage fees. ***? That just made an overpriced GPU even more expensive.


Sent from my iPhone using Tapatalk


----------



## J7SC

Hanks552 said:


> In my experience, going from 30 to 40C I did lose MHz.



Thanks  ...was asking as I'm putting the finishing touches on an updated loop for the 2x 2080 TIs.


----------



## J7SC

Shawnb99 said:


> Well, I regret buying from EVGA now. UPS is trying to charge me $120 in brokerage fees. ***? That just made an overpriced GPU even more expensive.
> 
> 
> Sent from my iPhone using Tapatalk



Silly question perhaps, but why didn't you buy from Newegg.ca ? Specific model availability ?


----------



## Shawnb99

J7SC said:


> Silly question perhaps, but why didn't you buy from Newegg.ca ? Specific model availability ?




They had no stock. EVGA was the only place I could buy the FTW3 Hydro Copper from.

They're just gouging me.


Sent from my iPhone using Tapatalk


----------



## Hanks552

J7SC said:


> Thanks  ...was asking as I'm putting the finishing touches on an updated loop for the 2x 2080 TIs.


Pics? I get 2115 MHz from 30 to 39C; at 40C it drops to 2100 MHz.
Memory +950 is stable; at +1000 some games show artifacts, but 3DMark is just fine.


----------



## Zammin

Hanks552 said:


> Pics? I get 2115 MHz from 30 to 39C; at 40C it drops to 2100 MHz.
> Memory +950 is stable; at +1000 some games show artifacts, but 3DMark is just fine.


Sorry if this was already asked and I missed it, but is there a certain BIOS you are using to prevent power throttling? Galax 380W? Without an undervolt my FE is constantly bouncing off the power limit and the clock speeds are all over the place as a result. I've had some success in setting the PL to the max while undervolting and overclocking, but once I get my waterblock on the card I'm hoping I can run it faster without bouncing off the limit all the time. Temp throttling doesn't concern me too much as it's gradual, but the PL throttling is erratic and seems to affect frametimes to a degree.
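On why undervolting keeps a card off the power limit: dynamic power scales roughly with voltage squared times frequency, so a small voltage cut buys a disproportionate power saving. A back-of-the-envelope sketch follows; the reference operating point is made up for illustration, not a measured 2080 Ti FE value:

```python
# Why undervolting helps stay under the power limit: dynamic power scales
# roughly with V^2 * f (the standard CMOS approximation). The reference
# operating point below is illustrative, not a measured 2080 Ti value.

def relative_power(voltage_v: float, clock_mhz: float,
                   ref_voltage_v: float = 1.05,
                   ref_clock_mhz: float = 1950.0) -> float:
    """Estimated power draw relative to a reference operating point."""
    return (voltage_v / ref_voltage_v) ** 2 * (clock_mhz / ref_clock_mhz)

# A hypothetical 1.000 V / 2010 MHz undervolt vs 1.05 V / 1950 MHz stock-ish
print(round(relative_power(1.000, 2010), 3))
```

The undervolted point comes out around 6-7% lower power despite the higher clock, which is exactly the headroom that stops the card bouncing off the PL.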


----------



## krizby

Hanks552 said:


> Pics? I get 2115 MHz from 30 to 39C; at 40C it drops to 2100 MHz.
> Memory +950 is stable; at +1000 some games show artifacts, but 3DMark is just fine.


The frequency drop you're experiencing could just be due to the power limit. To test for the first temp throttle, lock your GPU to a voltage that won't cause PL throttling: go to the VF curve, select the 1.000 V point and press L, then stress the GPU until the clock drops. My first temp throttle happens at 44C.


----------



## J7SC

Shawnb99 said:


> They had no stock. EVGA was the only place I could buy the FTW3 Hydro Copper from.
> 
> Am also being charged another $150 in government fees over the taxes.
> Just gouging me


...'Death and Taxes' takes on a whole new meaning - what with some 2080 TIs passing away prematurely, while others are taxed silly at purchase...anyway, I sincerely hope that once it is all squared away, you'll get extra pleasure out of the setup !




Hanks552 said:


> Pics? I get 2115 MHz from 30 to 39C; at 40C it drops to 2100 MHz.
> Memory +950 is stable; at +1000 some games show artifacts, but 3DMark is just fine.


I took a whole pile of pics for an upcoming build log. But as reported before, I ended up RMAing the MSI X399 Creation mobo you see in the pic below, and in the meantime I'm using an MSI X399 Pro Carbon I had set aside for a second build later... that mobo has a different footprint and different spacing between PCIe slots 1 and 3, requiring a different NVLink bridge... a bit of a setback, as I am working with a loop that has rigid copper tubing in the major sections for the GPUs.

But as I had to redo it anyway, I decided that the 2x Aorus 2080 Tis are getting 2x 360 rads and a special 160 Black Ice (in addition to the 2x 360s for the Threadripper). This is a change from the earlier total of 3x 360 and 2x 160. Combined with powerful fans, the temp delta to ambient should get the GPUs close to the low 40s C at peak, thus my earlier question.


----------



## VPII

So I finally got the NZXT Kraken G12 with the Corsair H110 CW. All fitted, and I am impressed. Usually when running Unigine 1080p Extreme, temps would be around 70 to 75C with the TDP set to 380 W (126%). I just did one run to see, and temps touched 40C towards the end but basically hovered around 39C. Unfortunately I did not have heatsinks to put on the memory and VRM, so it was only active fans blowing over them; not shown in the picture, I've added an extra fan over the top.

I'll go out today to try to get some sinks, as I could feel the memory and VRM heating up. Also, bear in mind I am in South Africa, so ambient temps in the house range from 24 to 30C. Don't laugh, I have an open bench rig, so the parts just stand out in the open and that is it.


----------



## DooRules

Shawnb99 said:


> Well, I regret buying from EVGA now. UPS is trying to charge me $120 in brokerage fees. ***? That just made an overpriced GPU even more expensive.
> 
> 
> Sent from my iPhone using Tapatalk


UPS are the absolute worst if you're ordering from the US; they gouge you at the border. My simple rule: if UPS is the only shipper someone will use, I go somewhere else. FedEx is about the best, IMHO, for cross-border shopping and shipping.


----------



## Shawnb99

DooRules said:


> UPS are the absolute worst if you're ordering from the US; they gouge you at the border. My simple rule: if UPS is the only shipper someone will use, I go somewhere else. FedEx is about the best, IMHO, for cross-border shopping and shipping.




Yeah, they truly are. I made the mistake of picking the free shipping from EVGA; I should have known better, it's never free.

I'll see if I can clear it myself and save the $120.


Sent from my iPhone using Tapatalk


----------



## ENTERPRISE

A small update. I received my Gigabyte Aorus Xtreme 2080 Ti cards, lovely quality and solid performers. Unfortunately one developed a fault with the fan arrangement: the central fan, while it works, has shifted off balance and actually collides with one of the neighboring fans (remember, the fans do not have separate shrouds), causing an awful noise. So that one goes back for RMA; the other seems fine so far, fingers crossed.

I can certainly see what GB were looking to achieve with the fan arrangement, and for all intents and purposes it works great, BUT I would have allowed 0.5 - 1 mm of additional tolerance in the fan clearances; that would have solved all the issues, even if the fans became slightly unbalanced.

Just thought I would share.


----------



## Esenel

kot0005 said:


> which bios are you using ?


The 380 W one.

I could even run it with the curve set to 1.093 V, 2060 MHz and +1100 mem, but the GPU temps were around 40°C so the score was lower.
I will cool the room down today and then do a new run ;-)

I did not see the card running into the PL; it barely goes over 100% (300 W).

Cheers


----------



## Hanks552




Zammin said:


> Sorry if this was already asked and I missed it, but is there a certain BIOS you are using to prevent power throttling? Galax 380W? Without an undervolt my FE is constantly bouncing off the power limit and the clock speeds are all over the place as a result. I've had some success in setting the PL to the max while undervolting and overclocking, but once I get my waterblock on the card I'm hoping I can run it faster without bouncing off the limit all the time. Temp throttling doesn't concern me too much as it's gradual, but the PL throttling is erratic and seems to affect frametimes to a degree.


The Galax 380W BIOS.


----------



## Hanks552

krizby said:


> The frequency drop you're experiencing could just be due to the power limit. To test for the first temp throttle, lock your GPU to a voltage that won't cause PL throttling: go to the VF curve, select the 1.000 V point and press L, then stress the GPU until the clock drops. My first temp throttle happens at 44C.


I wonder if the shunt mod would help me out. Is there a voltage limit, or what?
I don't know all the RTX limitations exactly...


----------



## J7SC

VPII said:


> So I finally got the NZXT Kraken G12 with the Corsair H110 CW. All fitted, and I am impressed. Usually when running Unigine 1080p Extreme, temps would be around 70 to 75C with the TDP set to 380 W (126%). I just did one run to see, and temps touched 40C towards the end but basically hovered around 39C. Unfortunately I did not have heatsinks to put on the memory and VRM, so it was only active fans blowing over them; not shown in the picture, I've added an extra fan over the top.
> 
> I'll go out today to try to get some sinks, as I could feel the memory and VRM heating up. Also, bear in mind I am in South Africa, so ambient temps in the house range from 24 to 30C. Don't laugh, I have an open bench rig, so the parts just stand out in the open and that is it.


Looks like it will be a solid thermal performer. What are you using for thermal interface for the heatsinks ? Fuji tape ?




ENTERPRISE said:


> A small update. I received my Gigabyte Aorus Xtreme 2080 Ti cards, lovely quality and solid performers. Unfortunately one developed a fault with the fan arrangement: the central fan, while it works, has shifted off balance and actually collides with one of the neighboring fans (remember, the fans do not have separate shrouds), causing an awful noise. So that one goes back for RMA; the other seems fine so far, fingers crossed.
> 
> I can certainly see what GB were looking to achieve with the fan arrangement, and for all intents and purposes it works great, BUT I would have allowed 0.5 - 1 mm of additional tolerance in the fan clearances; that would have solved all the issues, even if the fans became slightly unbalanced.
> 
> Just thought I would share.



Well, that's a fly in the ointment... but those ring fans are absolutely mesmerizing (see around 7 min 20 s in the video below for those not familiar with them)! I guess they just need tighter tolerances than fans with separate shrouds.


----------



## VPII

J7SC said:


> Looks like it will be a solid thermal performer. What are you using for thermal interface for the heatsinks ? Fuji tape ?
> 
> 
> 
> 
> 
> Well, that's a fly in the ointment...but those ring fans > absolutely mesmerizing (see around 7min 20s below for those not familiar with it) ! But they need tighter tolerances beyond those with shrouds I guess
> https://www.youtube.com/watch?v=SnMGvuv-FTg


Hi J7SC, I basically left the TP on the H110 as it came, just to start with. I'll see later if I can change it for better thermals, but at present I am pretty happy with the setup, except that I need heatsinks for the memory and VRM.


----------



## krizby

VPII said:


> Hi J7SC, I basically left the TP on the H110 as it came, just to start with. I'll see later if I can change it for better thermals, but at present I am pretty happy with the setup, except that I need heatsinks for the memory and VRM.


The VRAM may need some heatsinks, but for the VRM I would say there is no need; some airflow over it is plenty.


----------



## kx11

J7SC said:


> Looks like it will be a solid thermal performer. What are you using for thermal interface for the heatsinks ? Fuji tape ?
> 
> 
> 
> 
> 
> Well, that's a fly in the ointment...but those ring fans > absolutely mesmerizing (see around 7min 20s below for those not familiar with it) ! But they need tighter tolerances beyond those with shrouds I guess
> https://www.youtube.com/watch?v=SnMGvuv-FTg



I have a friend with the Aorus Xtreme Waterforce 2080 Ti; he complains about the OC headroom not passing 2050 MHz at all. I don't mess with Gigabyte products at all, btw; they seem unreliable to me since I owned two X99 mobos that were 1x DOA and 1x faulty in less than a week.


----------



## VPII

krizby said:


> VRAM may need some heatsinks but VRM I would say there is no need, just some airflow over is plenty.


That's what I picked up from feel... the memory is definitely the hottest; the VRM is hot, but not as much.


----------



## Rob w

Hey guys,
Hints needed. I've got the 2080 Ti FE from NVIDIA; it's an RMA unit and has Samsung memory, but I am just so disappointed in its performance. All I have done is put it under water (dedicated loop, 420 mm rad).
I can put +1000 on the memory and it's fine benching, but for gaming I have to drop it to +300 to be stable.
The core clock I can set to 2115 but it drops to 2070 while benching, and isn't truly stable there; if I set it to 2100 it drops to ~1980, yet for gaming I can set 2130 stable?
I've even put it on the chiller, temps 32 °C and below, and still can't get on the scoreboards (latest Port Royal run maxed out at 9001 points, not even on the scoreboard).
I am not an IT or electronics person, just an enthusiast!
My Titan V I shunt modded; cool card and stable. But with this one I just don't know what to do.
Should I flash it, shunt mod it, or just put it on eBay?


----------



## Medusa666

kx11 said:


> i have a friend with Auros Xtreme waterforce 2080ti , he complains about the OC headroom not passing 2050mhz at all , i don't mess with Gigabyte products at all btw , they seem unreliable since i owned 2 x99 mobos that were 1xDOA 1xfaulty in less than a week


This is not related, but could you please put up some more pictures of that glorious card of yours? Have you had any success with fan curves?

I have ordered it too; it is a very expensive purchase, but I hope that it will be cool and quiet during gaming : )

Appreciate what you have posted so far though : )

M


----------



## Kylek92

Noob question: what's the best card to get for the highest overclock?
Currently I have the ASUS Turbo, but I might return it.


----------



## kx11

Medusa666 said:


> This is not related, but could you please put up some more pictures of that glorious card of yours? Have you had any sucess with fan curves?
> 
> I have ordered it also and it is a very expensive purchase, but I hope that it will be cool and quiet during gaming : )
> 
> Appreciate what you posted so far though : )
> 
> M





Yeah, MSI Afterburner finally nailed the fan curves, but I needed a couple of restarts for that to work. Installing HOF AI would mess up the whole deal, so stay clear of that for now.





aaand a few pics


https://farm5.staticflickr.com/4915/45965018394_78c7e1031c_o.jpg


https://farm5.staticflickr.com/4868/45965018544_2bc9bee238_o.jpg


----------



## Scotty99

Hey asus strix owners, does your card have trouble displaying white like shown in this thread?:
https://rog.asus.com/forum/showthread.php?88417-ASUS-GTX1080-Strix-OC-LED-issues

That is a 1080 but curious if asus has fixed that with this generation.

Also any coil whine? I had a 1080ti strix and coil whine was louder than any fan in my system and i had to sell it.

Thanks.


----------



## krizby

Kylek92 said:


> Noob question. What's the best card to get for highest overclock
> Currently have the asus turbo but I might return it


The highest overclock is a lottery, but if you want a sustained high clock at the cost of power consumption, go with the Galax HOF 2080 Ti and its 450W power limit; that would prepare you for the winter to come.


----------



## DarthBaggins

kx11 said:


> yeah MSI AB nailed the fan curves finally but i needed a couple restarts for that to work , installing HOF Ai would mess up the whole deal so you stay stay clear from that currently
> 
> 
> 
> 
> 
> aaand few pics


The HOF is an amazing card, but damn it's huge lol


----------



## GosuPl

RTX 2080Ti DXR vs TITAN V DXR.

Several sites have written about the TITAN V equaling the performance of the RTX 2080 Ti with ray tracing enabled. We were interested in this topic, so we decided to check the claim ourselves, as part of our "we bust the myths" series. See for yourself how the case looks in the context of the performance of these cards. Some supporting information is below.


----------



## kx11

DarthBaggins said:


> The HOF is an amazing card, but damn it's huge lol





It's huge enough that it blocks the RAM slots on the right when I put it in the vertical mount.


----------



## kx11

Scotty99 said:


> Hey asus strix owners, does your card have trouble displaying white like shown in this thread?:
> https://rog.asus.com/forum/showthread.php?88417-ASUS-GTX1080-Strix-OC-LED-issues
> 
> That is a 1080 but curious if asus has fixed that with this generation.
> 
> Also any coil whine? I had a 1080ti strix and coil whine was louder than any fan in my system and i had to sell it.
> 
> Thanks.



From my experience with ASUS Aura, that purple-ish white issue can be fixed by using the green color palette and lowering the saturation, like in the attached pic.


----------



## TK421

For the EVGA cards, on which ones can you remove the heatsink and fan assembly _*WITHOUT*_ removing the VRM and VRAM heatsink plate?


I have a spare CLC from Titan X Maxwell and would like to use it with one of the EVGA cards. (I'm not sure if this would even work so CMIIW).





The only card I know that can do this is the FTW3 PCB, but I'm not sure about the reference XC series.


----------



## J7SC

VPII said:


> Hi J7SC, I basically left the TP on the H110 as I got it, just to start with. I'll later see if I can change it for better thermals but at present I am pretty happy with the setup except that I need heat sinks for Memory and VRM.



Cool...but I meant the thermal interface material for the VRAM heat-sinks 




kx11 said:


> i have a friend with Auros Xtreme waterforce 2080ti , he complains about the OC headroom not passing 2050mhz at all , i don't mess with Gigabyte products at all btw , they seem unreliable since i owned 2 x99 mobos that were 1xDOA 1xfaulty in less than a week


Not sure about your friend's setup and 'silicon lottery' draw, but the slower of the two Aorus Xtreme Waterforce cards I have boosts to 2175 MHz before starting to cycle down as temps go up (per earlier screenshot), i.e. in Superposition. The two cards are very close together re. serial numbers.

As to quality, everybody has their own set of experiences with different vendors. For example, I won't buy EVGA cards anymore, as I had a total of 8 top-end cards over two gens for 4 machines... 3 of the 8 developed the same PCB-related problem, which wasn't fatal but affected performance. When I RMAed one of them, I got a strange white unlabelled box back with a clearly used (scratched) card from someone else, which had an ASIC value in the low 60s... This is not a knock against EVGA, as the same obviously can and does happen with various other vendors.

However, my personal experience with Gigabyte mobos and GPUs is excellent, and combined with the extra-long warranty, that helped make up my mind who to turn to for 2080 Tis. As to mobos, this is the first build where I am using an MSI board... it is in RMA now, but it is not clear yet what exactly happened, so I'll wait until the RMA process is over.


----------



## Pepillo

Has anyone used LM (Conductonaut) to mount the water block on a 2080 Ti?

It went very well with the 7900X, and now that I have to mount the loop on the 2080 Ti I am tempted to try it.....

Thanks


----------



## Thoth420

That HOF card is so hella cool. Grats on getting one!


----------



## J7SC

DarthBaggins said:


> The HOF is an amazing card, but damn it's huge lol



IF size matters, there are always 2x HOF OC Lab waterblocked cards, which individually take up less vertical space... they certainly would look good in any case (including mine), if only one could actually buy them now...


----------



## Skaarj

Greetings to all!
Has anyone flashed the BIOS from the MSI 2080 Ti Gaming X Trio (406W), https://www.techpowerup.com/vgabios/205495/205495, onto a reference-design PCB?
What are your impressions? Thanks!


----------



## Sheyster

Skaarj said:


> Greetings to all!
> Has anyone flashed bios from MSI 2080 Ti Gaming X TRIO 406W https://www.techpowerup.com/vgabios/205495/205495 into a PCB reference design?
> What are your impressions? Thank!


I’m curious about this as well. It’s only a 26 W difference, but I’d like to know whether it works on the reference PCB or not. The Gaming X has 3 fans and 3 power connectors, so we can’t just assume it will work flawlessly.


----------



## Nizzen

I have 2x MSI 2080 Ti Gaming X Trio, so maybe I'll try it some day.


----------



## Sheyster

Nizzen said:


> I have 2x MSI 2080 Ti Gaming X TRIO, so maybe I trying it some day



The ask here is for someone who does NOT own the Gaming X to try it and confirm whether it works.


----------



## J7SC

Skaarj said:


> Greetings to all!
> Has anyone flashed bios from MSI 2080 Ti Gaming X TRIO 406W https://www.techpowerup.com/vgabios/205495/205495 into a PCB reference design?
> What are your impressions? Thank!





Sheyster said:


> I’m curious about this as well. It’s only a 26w difference but I’d like to know if it works on the ref PCB or not. The Gaming X has 3 fans and 3 power connectors so we can’t just assume it will work flawlessly.



This came up before in this thread, mostly re. the 450W Galax HOF OC Lab BIOS... flashing a BIOS from a card with 3 physical PCIe power connectors (either 2x 8-pin + 1x 6-pin, or 3x 8-pin) onto a card with 2 physical connectors will not yield the full power target but gimp it, as was also the experience of the folks who tried it.


----------



## callonryan

--


----------



## callonryan

Skaarj said:


> Greetings to all!
> Has anyone flashed bios from MSI 2080 Ti Gaming X TRIO 406W https://www.techpowerup.com/vgabios/205495/205495 into a PCB reference design?
> What are your impressions? Thank!


The 380W BIOS churns out better performance. The extra wattage adds 1-2 °C, which in turn makes my core clock step down by 15 MHz. No matter what I do, I can't beat the benchmarks I set with the Galax 380W BIOS.

#45, grrr: https://www.3dmark.com/hall-of-fame-...sion+1.0/1+gpu

(Still working on my fan curve. I should be able to get into the top 40.)


----------



## knightriot

Hi guys, I got a new 2080 Ti with an EK Vector block and I get 52 °C when gaming at 4K; my old 1080 Ti was only 43 °C. My ambient temp is ~30 °C. Should I worry about it?
Thanks, and sorry about my bad English.


----------



## TK421

TK421 said:


> For the EVGA cards, which ones can remove the heatsink assembly and fan assembly _*WITHOUT*_removing the VRM and VRAM heatsink plate?
> 
> 
> I have a spare CLC from Titan X Maxwell and would like to use it with one of the EVGA cards. (I'm not sure if this would even work so CMIIW).
> 
> 
> 
> 
> 
> The only card I know that can do this is the FTW3 PCB, but I'm not sure about the reference XC series.





any thoughts?


----------



## callonryan

knightriot said:


> Hi guy, i got new 2080ti with EK Vector block and i got 52*c when 4k gaming, my old 1080ti only 43*c , my ambient temp ~30*c, should i worry about it?
> Thanks and sorry about my bad English


air or water?


----------



## knightriot

callonryan said:


> air or water?


water bro


----------



## Hanks552

I can't wait for the weekend


----------



## Nizzen

knightriot said:


> Hi guy, i got new 2080ti with EK Vector block and i got 52*c when 4k gaming, my old 1080ti only 43*c , my ambient temp ~30*c, should i worry about it?
> Thanks and sorry about my bad English



The 2080 Ti runs hotter than the 1080 Ti. No worries. If you do worry, use chilled water.


This is OCN, after all.


----------



## Hanks552

knightriot said:


> water bro


Check your thermal paste application.
My RTX 2080 Ti OC at 380W only goes to 40 °C, and I have 2 cards running at full load.
Check all the corners, and apply paste to both the GPU and the block.
I'm using Thermal Grizzly Kryonaut.


----------



## knightriot

Hanks552 said:


> check thermalpaste application
> my RTX 2080 ti OC 380W only goes to 40C, and i have 2 cards running on full load
> check all the corner and apply paste on gpu and block
> im using grizzly bear kryonaut


Full load at 40 °C? Oh man, your ambient temp must be ~20 °C @_@


----------



## Goz3rr

I just tried my hand at 3dmark for the first time with my new watercooled 2080 Ti and managed to score a respectable 9285 in the Port Royal benchmark after some tweaking (I flashed the 380W Galax BIOS, power and voltage are maxed out, and I think I found the max stable offsets).

My GPU core clocks are maxing out around 2145MHz and my GPU is bouncing between power and voltage limits, but if I look at other people's scores they are getting a bunch more points with the same or sometimes lower clockspeeds. Is this just 3dmark reporting the speeds wrong?

I suppose at this point it's basically the silicon lottery, and short of hardware mods there isn't much I can do to improve my score? Would lowering the temperature of my card help anything at all? It's currently doing about 45 °C under load (ambient of around 20 °C, water temp around 30 °C because of a very silent fan curve).


----------



## krizby

Goz3rr said:


> I just tried my hand at 3dmark for the first time with my new watercooled 2080 Ti and managed to score a respectable 9285 in the Port Royal benchmark after some tweaking (I flashed the 380W Galax BIOS, power and voltage are maxed out, and I think I found the max stable offsets).
> 
> My GPU core clocks are maxing out around 2145MHz and my GPU is bouncing between power and voltage limits, but if I look at other people's scores they are getting a bunch more points with the same or sometimes lower clockspeeds. Is this just 3dmark reporting the speeds wrong?
> 
> I suppose at this point it's basically the silicon lottery, and short of hardware mods there isn't much I can do to improve my score? Would lowering the temperature of my card help anything at all? It's currently doing about 45 °C under load (ambient of around 20 °C, water temp around 30 °C because of a very silent fan curve).


Modifying the VF curve in Afterburner will gain you some additional points. First, make voltages beyond 1.05 V accessible by increasing the frequency of the points after 1.05 V (the original curve trails off at 1.05 V, so the GPU will never use more than 1.05 V). Second, get rid of the temperature throttle: stress your card to 45 °C, press F5 to reload the curve, then Shift+drag the curve back up to the cold-temperature curve (above 40 °C the curve lowers itself by 15 MHz). Doing all this gains you an additional 30-45 MHz, probably a couple hundred points.
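That temperature step-down can be sketched as a small function. This is a rough illustration only: the 15 MHz drop above ~40 °C is from the post above, while the width of any further temperature bins (`bin_width_c`) is an assumed value for illustration, not a measured Turing constant.

```python
# Rough sketch of GPU Boost's temperature step-down as described above.
# The 15 MHz step above ~40 C comes from the post; the further bin width
# is an assumed illustration value, not a measured one.
def effective_clock(curve_mhz: int, temp_c: float,
                    step_mhz: int = 15,
                    first_bin_c: float = 40.0,
                    bin_width_c: float = 12.0) -> int:
    """Clock the card actually runs after temperature step-downs."""
    if temp_c < first_bin_c:
        return curve_mhz                       # cold curve, no step-down
    bins = 1 + int((temp_c - first_bin_c) // bin_width_c)
    return curve_mhz - bins * step_mhz

print(effective_clock(2145, 38))   # 2145: below 40 C, full curve
print(effective_clock(2145, 45))   # 2130: one 15 MHz bin down
```

Shift+dragging the warm curve back up by one step, as described, effectively cancels that first bin.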


----------



## Goz3rr

krizby said:


> Modify the VF curve in afterburner will gain you some additional point, first thing to do is to make the voltage beyond 1.05v accessible by increasing the frequency after 1.05v (original curve trail off at 1.05v so the gpu will not use more than 1.05v), second is to make temperature throttle gone by stressing your card to 45C and press F5 to reload the curve then SHIFT + drag the curve back to the cold temperature curve (at above 40C the curve will lower itself by 15mhz). Doing all this gain you an additional 30-45mhz, probably couple hundred points.


Thanks for the solid info, will definitely give this a try tonight


----------



## Glottis

I was looking at the power-limit lists in the first post and I am a bit confused. As I understand it, some power limits are listed with the stock BIOS, while others are listed with an unofficial BIOS? For example, the MSI Gaming X Trio: I'm pretty sure it has 330W out of the box, so the listed 407W power limit has to be with an unofficial BIOS applied. I wish this were explained a bit better; some people might mistakenly buy the MSI Gaming X Trio thinking they'll get 407W out of the box.

Also, can you flash the GALAX 380W BIOS on all 300A reference-PCB cards? I was thinking about the ASUS Strix, but prices are insane compared to other brands, and you can't flash the GALAX 380W BIOS on it because it's a custom PCB; is this correct?


----------



## VPII

Well, after installing the NZXT Kraken G12 with the Corsair H110 cooler, and now finally adding some heat sinks to the memory, I tried a few runs of 3DMark Time Spy and Port Royal. Note that I'm running an AMD Ryzen 2700X, some benches with the CPU at 4267 MHz and others at 4367 MHz. Obviously, if I were running the Intel 9900K my scores would have been a fair bit higher, but still bloody good for my setup, I'd say. The memory still feels pretty toasty to the touch... well, at the back of the PCB.

2130 MHz was the max clock. It will drop here and there, but it is mostly constant.


----------



## LRRP

*and another one bites the dust!*

Purchased two EVGA RTX 2080 TI FTW3 Hydro Copper cards on release 12/28/2018.
Cards arrived 1/02/2019 when I installed them one at a time to test for best performing card to be the top card, (card 1) in my SLI rig.
Both cards appeared to clock roughly the same but card 2, (the card that died) was using a lot more voltage and running 4C hotter under load.
Both cards have Micron memory.

Wednesday morning 1/09/2019, exactly one week after I first put power to it, card 2 died.
I had just booted up for the day and was only browsing the web, no gaming or benching - cards were at idle.
Suddenly the bottom quarter of my screen was filled with tiny colored squares and the computer froze completely.
After disabling SLI I was able to get these GPUZ shots of both cards at idle.
There are a couple interesting things to note here.
The dead card, card 2, is showing a PerfCap Reason of Pwr while idling at 0.7250 v and 300 MHz.
Card 2 is idling at 25.7 W. Before it died it would idle around 17 W just like card 1.
When card 2 died, GPU-Z added the line you see called "Memory Usage (Dynamic)". Not only did I not add this, I don't even have a tick box to enable it in GPU-Z.
This all suggests to me that the problem with these dying 2080s is very probably memory related.

So, it's RMA time for me. oh joy.


----------



## broodro0ster

Is there anyone here who has the "cheap" EVGA RTX 2080 Ti Black Edition Gaming? What's your experience with it?
I don't care about the cooler, since I'm looking for a nice card to watercool. Preferably a reference design, since the Heatkiller blocks don't fit other designs.

Since EVGA is the only one in Europe who still gives warranty after the card has been watercooled, it needs to be an EVGA card.

I'm still torn between a 2080 and a 2080 Ti, but I've always regretted buying a GTX 1080 instead of a 1080 Ti, so I think I will enjoy a 2080 Ti for longer than a 2080.


----------



## ducky083

Hi Zhrooms,

You said in your first post that all reference-PCB 2080 Ti cards can be flashed with the GALAX 380W BIOS.

I have a ZOTAC 2080 Ti Triple Fan with BIOS version 90.02.17.00.58 and GPU device ID 10DE 1E07 - 19DA 1503.

Can I flash my card with the Galax BIOS without problems?

How do I do this safely (nvflash version, command lines, etc.)?

I hope you can help me with this.

Many, many thanks, and sorry for my bad English ;-)


----------



## Shawnb99

LRRP said:


> Purchased two EVGA RTX 2080 TI FTW3 Hydro Copper cards on release 12/28/2018.
> 
> Cards arrived 1/02/2019 when I installed them one at a time to test for best performing card to be the top card, (card 1) in my SLI rig.
> 
> Both cards appeared to clock roughly the same but card 2, (the card that died) was using a lot more voltage and running 4C hotter under load.
> 
> Both cards have Micron memory.
> 
> 
> 
> Wednesday morning 1/09/2019, exactly one week after I first put power to it, card 2 died.
> 
> I had just booted up for the day and was only browsing the web, no gaming or benching - cards were at idle.
> 
> Suddenly the bottom quarter of my screen was filled with tiny colored squares and the computer froze completely.
> 
> After disabling SLI I was able to get these GPUZ shots of both cards at idle.
> 
> There are a couple interesting things to note here.
> 
> The dead card, card 2, is showing a PerfCap Reason of Pwr while idling at 0.7250 v and 300 MHz.
> 
> Card 2 is idling at 25.7 W. Before it died it would idle around 17 W just like card 1.
> 
> When card 2 died GPUZ added the line you see called "Memory Usage (Dynamic)". Not only did I not add this I don't have a tick box to enable it in GPUZ.
> 
> This all suggests to me that the problem with these dying 2080's is very probably memory related.
> 
> 
> 
> So, it's RMA time for me. oh joy.




That’s not good to hear. I ordered one myself; it should get here today. I already had an XC Ultra die on me, and I'm not looking forward to going through it again.
*** is causing all these cards to die
The card that died also had Micron memory.

Sent from my iPhone using Tapatalk


----------



## DooRules

ducky083 said:


> Hi Zhrooms,
> 
> You said in your first post that all of reference pcb 2080ti cards can be flashed by the GALAX 380W bios.
> 
> I've a ZOTAC 2080ti TRIPLE FAN with te bios ver : 90.02.17.00.58 and a A GPU device ID: 10DE 1E07 - 19DA 1503
> 
> Can i flash my card with the galax bios without problem ?
> 
> How to do this securly ? (nvflash version, command lines, etc.)
> 
> I hope you can help me to do this.
> 
> Many many thanks and sorry for my bad english ;-)


I have the same 2080 Ti, and I have been running the Galax 380W BIOS since day 1 with no issues. There are instructions much farther back in this thread on how to do it.


----------



## LRRP

Shawnb99 said:


> That’s not good to hear. I ordered one myself, should get here today. Already had a XC Ultra die on me not looking forward to going through it again.
> *** is causing all these cards to die
> Card that died also had micron memory


From what I've read, my impression is that the problem is not vendor specific, for either the cards or the memory brand used.
Whether any given card dies appears to be luck of the draw.
Since GDDR6 is the new spice in the mix this generation, I suspect a general reliability or implementation problem with GDDR6.

Good luck with your new Hydro Copper. I'd appreciate hearing from you how it performs.


----------



## ducky083

DooRules said:


> I have the same 2080ti, I have been running the Galax 380W bios since day 1 no issues. There are instructions much farther back in this thread on the how to do it part.



Oh, great.


So, I know I'm ****ting lol, but can you explain the method that works for you?


Do you have the same ID and original BIOS version on your card? And which BIOS did you choose: emtek122, evga130, or galax126?


Thx !


----------



## Thoth420

Did you guys OC the VRAM by chance on these dead cards?


----------



## LRRP

Thoth420 said:


> Did you guys OC the VRAM by chance on these dead cards?


Yes. My surviving card is stable benching or gaming with a memory overclock of 8220 MHz.
The card that died maxed out at 8000 MHz.


----------



## Thoth420

LRRP said:


> Yes. My surviving card is stable benching or gaming with a memory overclock of 8220 MHz.
> The card that died maxed out at 8000 MHz.


Isn't the reference like 4000?! I mention it because no factory-OC cards seem to touch the VRAM clocks.


----------



## LRRP

Thoth420 said:


> Isn't the ref like 4000?! I mention it because no factory OC cards touch the VRAM clocks it seems.


Factory default on both my cards is 7000 under load.
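The confusion here is mostly clock-counting conventions: 1750, 7000, and 14000 all describe the same reference GDDR6 clock. A quick sketch using the spec-sheet numbers for a reference 2080 Ti (exactly which tool reports which figure varies; the ×4 number is what Afterburner-style OC tools typically show, while GPU-Z reports the base clock):

```python
# GDDR6 clock conventions on a reference RTX 2080 Ti (spec-sheet values).
base_clock_mhz = 1750                    # real memory clock (GPU-Z style)
tool_mhz = base_clock_mhz * 4            # 7000: the figure OC tools show
effective_mhz = base_clock_mhz * 8       # 14000: effective rate, 14 Gbps/pin

bus_width_bits = 352
bandwidth_gb_s = effective_mhz * 1e6 * bus_width_bits / 8 / 1e9
print(tool_mhz, effective_mhz, round(bandwidth_gb_s))  # 7000 14000 616
```

So an overclock reported as 8220 in those tools corresponds to roughly 16.4 Gbps effective per pin.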


----------



## Shawnb99

Thoth420 said:


> Did you guys OC the VRAM by chance on these dead cards?




No overclock at all on mine. No heavy gaming or anything, yet it still died after 18 days.


Sent from my iPhone using Tapatalk


----------



## Thoth420

LRRP said:


> Factory default on both my cards is 7000 under load.





Shawnb99 said:


> No overclock at all on mine. No heavy gaming or anything yet still died after 18 days
> 
> 
> Sent from my iPhone using Tapatalk


Thanks for the responses, fellas. I'm nervous... and my 1080 Ti is in for RMA at the moment, so I would be without a card. I might order a 1070 or something as a backup, because I really cannot be without a GPU. I have obligations.


----------



## Pepillo

Thoth420 said:


> Isn't the ref like 4000?! I mention it because no factory OC cards touch the VRAM clocks it seems.


The Gigabyte Aorus Xtreme does factory-overclock the VRAM. Slightly, but it's an overclock.


----------



## BudgieSmuggler

Asmodian said:


> I am using 417.58 with Windows 10 Pro for Workstations 1809 17763.195 without issues.


I don't see a 417.58 driver yet. Is that a specific driver for your OS? The latest public driver is still 417.35 from what I can see. And for the other person who asked: I am using it fine, with no problems, on the Win 10 Pro October update (1809).


----------



## BudgieSmuggler

Thoth420 said:


> Thanks for the responses fellas. I'm nervous... and my 1080Ti is in for RMA at the moment so I would be without a card. I might order a 1070 or something as a backup because I really cannot be without a GPU. I have obligations.


I didn't know they were still failing. Damn, I thought it was the early release cards that had issues; guess there are still bad ones in stock. Thankfully I've had no problems with my Zotac AMP. I have a solid 2100 MHz and +1100 memory on air, though sometimes I have to bring it down to +1000 memory in some games. I wouldn't be too nervous if you haven't had any signs so far. Despite the problems, I think the failed cards are still a very low percentage of sales. Hope all stays well with your card.


----------



## DarthBaggins

BudgieSmuggler said:


> Don't see a driver 417.58 yet. Is that a specific driver for your OS? Latest public driver is still 417.35 from what i can see. and for the other who asked am using it fine with no problems on the Win 10 Pro October update 1809


Had to do some digging but found it:
417.58* Hotfix Driver


----------



## Thoth420

BudgieSmuggler said:


> I didn't know they were still failing. Damn. I thought it was the early released cards that had issues. Guess there are still bad ones in stock. Thankfully had no problems with my Zotac amp. Have a solid 2100 mhz and +1100 memory on air. Sometimes have to bring it down to +1000 memory on some games. Wouldn't be too nervous if you haven't had any signs so far. I think despite the problems they are still very much in low % that have failed compared to sales. Hope all stays well with your card.


I got mine from my local Best Buy, so mine may be old stock. It's not exactly the kind of item that sells often around here, and I'm surprised they even had any Ti models; normally they cap out at xx70 or xx80 here. Is there a way to check my serial, or something else on the card/box, to determine the batch month? I have the MSI Gaming X Trio.

Thanks mate, I hope so too. I have never had an issue with an NVIDIA card from them, just a 7970 Lightning, but nobody is perfect.


----------



## Goz3rr

krizby said:


> Modify the VF curve in afterburner will gain you some additional point, first thing to do is to make the voltage beyond 1.05v accessible by increasing the frequency after 1.05v (original curve trail off at 1.05v so the gpu will not use more than 1.05v), second is to make temperature throttle gone by stressing your card to 45C and press F5 to reload the curve then SHIFT + drag the curve back to the cold temperature curve (at above 40C the curve will lower itself by 15mhz). Doing all this gain you an additional 30-45mhz, probably couple hundred points.


So my current (default) curve looks like this, but I'm not exactly sure what I'm supposed to be changing. 
The voltage peaks at 1093mV for a very short time right at the start of the benchmark, and then basically stays locked to 1050mV and 1062mV for the rest of the benchmark.
I suppose I should slightly increase the frequency curve for the 1062mV point and up in an attempt to make use of that little bit of extra voltage over 1050mV?

By lowering my temps by a decent amount (staying below 40 °C) and overclocking my CPU to 5.0GHz I've managed to achieve a pretty decent score of 9468, compared to my 9285 score yesterday


----------



## BudgieSmuggler

DarthBaggins said:


> Had to do some digging but found it:
> 417.58* Hotfix Driver



Thanks for that. I completely missed this hotfix, and it fixes the 144 Hz issue with my BenQ 2730xl lol. Will use it now. Cheers


----------



## bogdi1988

ducky083 said:


> How, great.
> 
> 
> So, i know i'm ****ting lol but can you explain me the method that works for you ?
> 
> 
> Have you got the same id and bios original version on your card ? and what bios have you choosen ? emtek122, evga130 or galaxy 126 ?
> 
> 
> Thx !


Galax126. Flash and you're good to go. The ID will change but there will be 0 impact.


----------



## ducky083

bogdi1988 said:


> Galax126. Flash and you're good to go. The ID will change but there will be 0 impact.





Which version of nvflash do I have to flash with (5.541.0 or 5.527.0)? And do I have to do the same thing as on a 980 Ti, like disabling the card in Device Manager before flashing?


What are the command lines to:


- Save original bios
- Flash new bios


Thx


----------



## kx11

And I thought 11GB of VRAM was enough for modern games...


----------



## BudgieSmuggler

Goz3rr said:


> So my current (default) curve looks like this, but I'm not exactly sure what I'm supposed to be changing.
> The voltage peaks at 1093mV for a very short time right at the start of the benchmark, and then basically stays locked to 1050mV and 1062mV for the rest of the benchmark.
> I suppose I should slightly increase the frequency curve for the 1062mV point and up in an attempt to make use of that little bit of extra voltage over 1050mV?
> 
> By lowering my temps by a decent amount (staying below 40 °C) and overclocking my CPU to 5.0GHz I've managed to achieve a pretty decent score of 9468, compared to my 9285 score yesterday



Just to share my settings, which are the highest stable I've found... gradually heading up from base: at 1043 mV I am at 2070 MHz, at 1050 mV I am at 2085 MHz, and from 1056 mV all the way to the end of the curve I am at 2130 MHz. Perfectly stable with the EVGA 360W (130%) BIOS. Being on air, my core clock downclocks to a solid 2100 MHz even at 52 degrees. So 2100 MHz and +1000 memory is perfect on air for my Zotac AMP (non-Extreme). Good luck!


----------



## BudgieSmuggler

ducky083 said:


> I have to flash with what version of nvflash (5.541.0 or 5.527.0) ? i've to do same thing as a 980ti like disable card in the peripheral manager before flash ?
> 
> 
> What is the command lines to :
> 
> 
> - Save original bios
> - Flash new bios
> 
> 
> Thx


You can save your current BIOS using GPU-Z if you like. Then:


1st command: nvflash64 --protectoff
2nd command: nvflash64 -6 BIOSNAME.rom (this skips the board mismatch error, so make sure your card is compatible with your BIOS of choice; if you have onboard graphics it's no biggie anyway, as you can always reflash back to default). Good luck!

Hit "y" when prompted.

Restart.

Driver might reinstall itself via Windows


----------



## ducky083

BudgieSmuggler said:


> Can save using bios in GPU-Z if you like. Then......
> 
> 
> 1st command: nvflash64 --protectoff
> 2nd command: nvflash64 -6 BIOSNAME.rom (this line will skip board mismatch error so make sure your card is compatible with your bios of choice) Or if you have onboard graphics no biggie as you can always reflash it back to default) Good Luck
> 
> Hit "y" when prompted.
> 
> Restart.
> 
> Driver might reinstall itself via Windows



So, I will try it when I'm ready ;-)


Does the 360W BIOS have the fan-stop function?


----------



## Renegade5399

Hopefully my step-ups will some day move beyond step 1, in queue.

I started them on 11/19/18 FFS...


----------



## BudgieSmuggler

ducky083 said:


> I have to flash with what version of nvflash (5.541.0 or 5.527.0) ? i've to do same thing as a 980ti like disable card in the peripheral manager before flash ?
> 
> 
> What is the command lines to :
> 
> 
> - Save original bios
> - Flash new bios
> 
> 
> Thx


Same steps as in my post above (save your BIOS with GPU-Z, then nvflash64 --protectoff followed by nvflash64 -6 BIOSNAME.rom).


Also, disable your 2080 Ti in Device Manager before flashing the BIOS. Then reinstall the driver via Windows Update or the package of your choice.


----------



## BudgieSmuggler

ducky083 said:


> So, I will try when I'm ready ;-)
> 
> 
> Does the 360 W BIOS have a fan-stop function?



Not the one I used (EVGA XC Ultra). It still has a low fan profile, which doesn't bother me. I'm done tinkering now and just enjoying my card with solid settings. I tried the Galax 400 W BIOS, but it didn't help me at all, and I wasn't comfortable using it for no benefit on a 13+3 phase card.


----------



## pewpewlazer

BudgieSmuggler said:


> Not the one I used (EVGA XC Ultra). It still has a low fan profile, which doesn't bother me. I'm done tinkering now and just enjoying my card with solid settings. I tried the Galax 400 W BIOS, but it didn't help me at all, and I wasn't comfortable using it for no benefit on a 13+3 phase card.


Am I missing something? My XC Ultra caps at 338W @ 130% PL, and I thought the GALAX BIOS was 380w, not 400?
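For reference, the power-limit slider is just a multiple of the BIOS's default power target, so the cap is easy to sanity-check. A minimal sketch, assuming the XC Ultra BIOS defaults to a 260 W target (inferred from the 338 W figure, not confirmed):

```python
# The PL slider scales the BIOS's default power target. Assumed here:
# a 260 W default target for the EVGA XC Ultra BIOS.
DEFAULT_TARGET_W = 260

def power_cap_w(slider_percent: int) -> int:
    """Board power cap in watts for a given power-limit slider setting."""
    return DEFAULT_TARGET_W * slider_percent // 100

print(power_cap_w(100))  # 260
print(power_cap_w(130))  # 338, matching the 338 W cap reported above
```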


----------



## Esenel

knightriot said:


> Hi guys, I got a new 2080 Ti with an EK Vector block and I'm seeing 52°C in 4K gaming; my old 1080 Ti was only 43°C. My ambient temp is ~30°C. Should I worry about it?
> Thanks, and sorry about my bad English


Hi,
since the delta T between GPU and water is ~11°C for most of us, your water would be at ~40°C.

That would mean the delta T between your water and ambient is also ~10°C.
As the best loops manage a delta of around 5°C, this seems a valid temperature for your ambient and a normal custom loop.
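The reasoning above is just a stack of additions; a tiny sketch with the delta-T figures assumed in this thread (typical values, not measurements from that specific loop):

```python
# Temperature stack for a custom loop, using the assumed delta-T figures above.
ambient_c = 30.0           # reported room temperature
dt_water_ambient_c = 10.0  # water runs ~10°C over ambient (5°C is excellent)
dt_gpu_water_c = 11.0      # 2080 Ti die runs ~11°C over water for most blocks

water_c = ambient_c + dt_water_ambient_c
gpu_c = water_c + dt_gpu_water_c

print(f"water ~ {water_c:.0f}C, GPU ~ {gpu_c:.0f}C")
# ~51°C on the die, so the reported 52°C is in the expected range.
```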


----------



## sprayingmango

EVGA raised the prices on their 2080Ti GPUs by $100 to $150.


----------



## Thoth420

DarthBaggins said:


> Had to do some digging but found it:
> 417.58* Hotfix Driver


Edit: never mind, found my answer.

DCH or Standard driver (for a clean install of Win 10 Pro 64-bit, version 1809)?

What even is the difference? I had never encountered it before.


----------



## BudgieSmuggler

pewpewlazer said:


> Am I missing something? My XC Ultra caps at 338W @ 130% PL, and I thought the GALAX BIOS was 380w, not 400?


Yes, you're right, my mistake: 338 W on the EVGA. I'm pretty sure the Galax HOF BIOS on TechPowerUp is 400 W. Will double-check.


----------



## BudgieSmuggler

pewpewlazer said:


> Am I missing something? My XC Ultra caps at 338W @ 130% PL, and I thought the GALAX BIOS was 380w, not 400?


This one is 400 W default and 450 W max:



https://www.techpowerup.com/vgabios/204869/galax-rtx2080ti-11264-180927


----------



## kx11

BudgieSmuggler said:


> Yes you're right, my mistake 338w Evga. Pretty sure the Galax Hof on techpowerup is 400w. Will double check



mine is 450w


----------



## Shawnb99

sprayingmango said:


> EVGA raised the prices on their 2080Ti GPUs by $100 to $150.




They need to, to make up for the ones dying


Sent from my iPhone using Tapatalk


----------



## ducky083

BudgieSmuggler said:


> Not the one I used (EVGA XC Ultra). It still has a low fan profile, which doesn't bother me. I'm done tinkering now and just enjoying my card with solid settings. I tried the Galax 400 W BIOS, but it didn't help me at all, and I wasn't comfortable using it for no benefit on a 13+3 phase card.



OK, and what version of nvflash do I have to use? 5.541.0 or 5.527.0?


----------



## LRRP

Shawnb99 said:


> They need to, to make up for the ones dying
> 
> 
> Sent from my iPhone using Tapatalk


lol - ouch, that's gonna leave a mark.

Just took a look at EVGA's product page. The lowest-end Black card is up $100.00, but the pricing on the top-end cards hasn't changed.


----------



## Sheyster

LRRP said:


> lol - ouch, that's gonna leave a mark.
> 
> Just took a look at EVGA's product page. The lowest-end Black card is up $100.00, but the pricing on the top-end cards hasn't changed.



LOL wow! GG EVGA charging more for a non-A chip card...


----------



## BudgieSmuggler

kx11 said:


> mine is 450w


Yeh 400 watt default and 450 watt max


----------



## BudgieSmuggler

ducky083 said:


> OK, and what version of nvflash do I have to use? 5.541.0 or 5.527.0?


I used 5.531.0, which worked fine. Otherwise, go with the latest one and you can't go wrong.


----------



## dentnu

Well, it looks like my MSI 2080 Ti Gaming X Trio is degrading or dying. I can't overclock the card as high as I could before: I used to manage a max of +104 on the core clock in all games, but now I can't do more than +60. I don't know if the new drivers are preventing me from overclocking as before or what is going on; anything over +60 on the core causes artifacts in benchmarks and games. I don't know if my card is dying or if the chip has just degraded over the past 2 months. Any ideas?


----------



## Hanks552

AC on. I don’t have a box like JayzTwoCents, but I will do it later lol


----------



## BudgieSmuggler

dentnu said:


> Well, it looks like my MSI 2080 Ti Gaming X Trio is degrading or dying. I can't overclock the card as high as I could before: I used to manage a max of +104 on the core clock in all games, but now I can't do more than +60. I don't know if the new drivers are preventing me from overclocking as before or what is going on; anything over +60 on the core causes artifacts in benchmarks and games. I don't know if my card is dying or if the chip has just degraded over the past 2 months. Any ideas?



Have you checked that the Nvidia Control Panel hasn't reset the power option from "Prefer maximum performance"? If you did a clean install, that may need changing back.


----------



## dentnu

BudgieSmuggler said:


> Have you checked that Nvidia control panel hasn't reset the power option from "prefer max performance" If you did a clean install that may need changing back


Yeah, I did a clean install of the drivers with Prefer maximum performance. I re-flashed my original BIOS just in case the card does die. I have tried every voltage from 1000 mV to 1093 mV with the same result; nothing I do gets it back to the overclock I had before.


----------



## Hanks552

My overclock can’t go any higher ****!


----------



## BudgieSmuggler

Hanks552 said:


> My overclock can’t go any higher ****!


 Have you changed any other settings on your pc or is everything else exactly as it was?


----------



## LRRP

Hanks552 said:


> My overclock can’t go any higher ****!


ain't life grand?


----------



## Hanks552

BudgieSmuggler said:


> Have you changed any other settings on your pc or is everything else exactly as it was?

CPU overclock: i7 8700K at 5.2 GHz, with XMP on


----------



## BudgieSmuggler

Have you done a complete wipe with DDU and a reinstall of the driver since you flashed back to the stock BIOS? If not, I would do that and try again with the settings you had before. Were there any signs of anything weird before you noticed you couldn't reach the same overclock? High temps or artifacting?


----------



## BudgieSmuggler

dentnu said:


> Yeah, I did a clean install of the drivers with Prefer maximum performance. I re-flashed my original BIOS just in case the card does die. I have tried every voltage from 1000 mV to 1093 mV with the same result; nothing I do gets it back to the overclock I had before.


And did you reinstall exactly the same drivers?


----------



## BudgieSmuggler

LRRP said:


> ain't life grand?



That's a high-temp crash, isn't it? Were you on the 400 W BIOS with your MSI card before this problem?


----------



## BudgieSmuggler

Hanks552 said:


> Cpu overclock i7 8700k 5.2ghz and xmp on



I was wondering if you'd changed anything else settings-wise. So 5.2 GHz was the same clock you were running the CPU at before, with no problems? You haven't reflashed your mobo BIOS since then?


----------



## Hanks552

BudgieSmuggler said:


> Was wondering if you'd changed anything else settings wise. So 5.2 ghz was the same clock you were running cpu before with no problems? You haven't reflashed your mobo bios since then?

I did reflash my motherboard, but I set up the overclock again.
I turned it down to stock speeds and my 3DMark score dropped to 16500 from 17140, so the CPU still plays a big role in this benchmark, even though the benchmark doesn't check the CPU score.


----------



## BudgieSmuggler

Hanks552 said:


> I did reflash my motherboard, but I set up the overclock again.
> I turned it down to stock speeds and my 3DMark score dropped to 16500 from 17140, so the CPU still plays a big role in this benchmark, even though the benchmark doesn't check the CPU score.



Did you change the PCIe slot back to 3.0 (Gen 3)? Why did you go back to stock speeds; were you having problems with CPU temps?


----------



## Hanks552

BudgieSmuggler said:


> Did you change pci-e lane back to 3.0 (gen 3) ? Why did you go back to stock speeds, were you having some problems with cpu temps?


PCIe back to 3.0? What? I didn't touch that.
I changed back to stock speeds just to check whether something else was crashing.
Just to test, but even at stock it crashed...
Temps are fine on everything; it doesn't go over 40°C.


----------



## J7SC

dentnu said:


> Well, it looks like my MSI 2080 Ti Gaming X Trio is degrading or dying. I can't overclock the card as high as I could before: I used to manage a max of +104 on the core clock in all games, but now I can't do more than +60. I don't know if the new drivers are preventing me from overclocking as before or what is going on; anything over +60 on the core causes artifacts in benchmarks and games. It looks like my card is dying, or the chip has just degraded over the past 2 months. Any ideas?



You might want to try two things:

First, revert to the previous driver you used before this 'slowdown in OC' (not just a clean install of the new driver).

Second, re. the artifacts, it sounds almost as if a contact problem is developing between the card body (GPU and VRAM) and the cooler. If you have spare TIM for the GPU and maybe some thermal pads for the VRAM, it is worth checking and remounting the cooler. In particular, the VRAM below the GPU, just above the PCIe connector, is often a trouble spot for thermal contact seating.


----------



## BudgieSmuggler

Hanks552 said:


> pci-e back to 3.0? what? i didnt touch that
> i changed back to stock speeds just to check if was something else crashing
> just to test, but even with stock it did crash...
> temps are fine in everything, it doesnt go over 40c



I was only asking because you said you'd flashed the mobo BIOS. You could have forgotten to set the PCIe lane speed back to Gen 3 in your BIOS (if you have that setting); my Asus board has the option. I was trying to eliminate things step by step.


----------



## Hanks552

BudgieSmuggler said:


> Was only asking as you said you'd flashed mobo bios. You could have forgotten to set the pci-e lane speed back to gen 3 on your bios (if you have that setting in bios) My Asus board has the option. Was trying to eliminate things step by step


I see.
That's an old option, I think.
I think my overclock is being limited by voltage; I've already hit 1.094 V.


----------



## BudgieSmuggler

Hanks552 said:


> i see
> this is a old option i think
> i think my overclock is being limited by the voltage, i already hitted 1.094v


No, it's a Z370 board, so not really old. I see. But you weren't voltage-limited previously? It's a strange change in overclocking capability; if it stays the same, maybe it's time to start the RMA process. Have you gone back to the flashed BIOS from your stock BIOS, or are you still on stock? So at 1094 mV you can only get +60 on the core clock? What were you able to reach originally on the stock default BIOS, or did you flash to the new BIOS straight away?


----------



## BudgieSmuggler

Maybe it's worth trying a whole new BIOS and starting from scratch. Were you on the 400 W BIOS when the problems started? You really have nothing to lose, because you may end up RMAing the card anyway.


----------



## dantoddd

Bought one of these. 

https://www.msi.com/Graphics-card/GeForce-RTX-2080-Ti-GAMING-X-TRIO

should get it in one week. Is there an OC guide for these cards here?


----------



## VPII

I'm sitting with a slight challenge. As I've stated before, I got myself an NZXT Kraken G12 and a Corsair H110, which I fitted to my Galax RTX 1080 Ti OC. Temperatures are brilliant, to say the least: right now it's idling around 23°C without any overclock. The card's max clock at stock is 1965 MHz, and it stays there the entire time I game, with the TDP not going above 285 W, basically 95%. I started up BFV; I only have a 1440p monitor, so I understand the load on the GPU and memory will not be that high. The GPU temp maxed out at 36°C, but the memory felt pretty hot to the touch at the back of the card. I did manage to find some small heatsinks to put on the RAM, as the VRAM gets pretty hot even at stock speed, but the memory still gets pretty hot. I have two additional fans: one blowing on the left side of the card for the VRM there and the memory below the GPU, and another blowing over the top for the memory above the GPU. I don't have a thermometer to check the temps, but feeling the back of the GPU where the memory sits while playing, it does get pretty hot.

What is the max the memory and VRM can run at? When I took off the stock cooler, I have to say that even the stock cooling of the VRAM and VRM didn't look that great to begin with.


----------



## J7SC

VPII said:


> I'm sitting a a slight challenge. As I've stated before, I got myself a NZXT Kraken G12 and Corsair H110 cw which I fitted to my Galax RTX 1080 Ti OC. Temperatures are brilliant to say the least. Right now idling around 23c without any overclock. The card's max clocks at stock is 1965mhz and it will stay there the entire time I'll game with the tdp not going above 285watt, basically 95%. Started up BFV, I only have a 1440P monitor so I understand that the load on the GPU and Memory will not be that much, gpu temp maxed out at 36c but the meory felt pretty hot to the touch at the back of the card. I did manage to find some small heatsinks to put on the ram as the vram get's pretty hot even at stock speed. But the memory still gets pretty hot. I have two additional fans, one blowing on the left side of the card for the vrm on the left as well as the memory below the gpu and another fan blowing over the top for he memory above the gpu. I do not have a thermometer to check the temps but while playing and feeling the back of the gpu where the memory sit, it does get pretty hot.
> 
> What is the max the memory and vrm can run? When I took off the stock cooler, I have to say that even with the stock cooler I cannot see that the cooling on the vram and vrm was that great to begin with.


..Unlike GPU temps which are readily reported in MSI AB, GPUz etc, VRM and VRAM temps are hard to lock down, unless you have one of those no-contact infrared directional thermometers (typically cost between US$15 and US$150, depending on model). HWInfo64 **may** also report some of it for some models, but I'm not sure about the 2080 TI with that.

Generally speaking, VRM temps should be max 85c (= worry time), far better to keep it below that. With VRAM, the closer to ambient, the better, but I don't have a panic temp reading.

One thing I used to do with tri- and quad-SLI in the bad old days was to tie together 3 or so 120 mm fans with zip ties, mount them on the right-hand side of the GPU PCB (the opposite end to the HDMI outlets), and have them blow along the length of the card. With a single card, tying together 2x 120 mm (so effectively 120 x 240) and angling each just a bit towards the lower center, front and rear, worked best. This does depend on how you mounted the 'main' 120 mm directly over the VRM; obviously you don't want the fans blocking each other, as they would be more or less perpendicular to each other. Really, I would invest in a non-contact infrared directional thermometer and then just fine-tune the fan arrangement, perhaps even with some flexible shrouds! That, or wait until your birthday and get a nice waterblock with active backplate cooling :biggrinsm


----------



## VPII

J7SC said:


> ..Unlike GPU temps which are readily reported in MSI AB, GPUz etc, VRM and VRAM temps are hard to lock down, unless you have one of those no-contact infrared directional thermometers (typically cost between US$15 and US$150, depending on model). HWInfo64 **may** also report some of it for some models, but I'm not sure about the 2080 TI with that.
> 
> Generally speaking, VRM temps should be max 85c (= worry time), far better to keep it below that. With VRAM, the closer to ambient, the better, but I don't have a panic temp reading.
> 
> One thing I used to do with tri- and quad SLI in the bad old days was to tie together 3 or so 120mm fans with zip ties and mount them on the right and side of the GPU PCB (opposite end to HDMI outlets) and have them blow 'along' the length of the card. With a single card, tying together 2x 120mm (so effectively a 120 x240) and angling either just a bit towards the lower center front and rear worked best. This does depend how you mounted the 'main' 120mm directly over the VRM - obviously, you don't just want to have them block each other as they would be more or less perpendicular to each other. Really, I would invest in a no-contact infrared directional thermometer and then just fine-tune the fan arrangement, perhaps even with some flexible shrouds ! That - or wait until your birthday and get a nice waterblock with active backplate cooling :biggrinsm


Nope, I won't spend a cent more on this GPU... it cost me an arm and a leg. You have to understand: the pricing in the USA or Europe is bad, but the pricing here in South Africa is a hell of a lot worse.


----------



## BudgieSmuggler

dantoddd said:


> Bought one of these.
> 
> https://www.msi.com/Graphics-card/GeForce-RTX-2080-Ti-GAMING-X-TRIO
> 
> should get it in one week. Is there an OC guide for these cards here?



Have you OC'ed a card using the V/F curve before? You can do it that way, or simply start with +60 on the core clock and go up in 15 MHz increments. Set the power and temp limits to the max available. A lot of people are reporting they can do +1000 on memory (same here), but I'd start at +400 on memory and go up by 100 MHz until you see artifacting or get instability (crash/freeze). That's if you want to push the card and are happy with an aggressive fan curve to stop downclocking once the card goes over 50 degrees. Unless you're on water, of course.

These cards also have OC Scanner, where the card scans for a capable overclock on its own, though it won't necessarily find the very highest OC; most people are doing it manually. To reach the voltage/frequency curve editor in MSI Afterburner, click the escalating-bars icon next to the core clock in the main Afterburner view (it looks like the signal bars on a mobile phone); OC Scanner is in the top-right corner of that window, and it's also where you set the curve manually.

You're basically aiming for the highest stable clock at the lowest possible voltage, which limits downclocking due to excess voltage and heat. Once these cards go over 50 degrees they downclock in increments of 15 or 30 MHz. My max from 1056 mV onwards is 2130 MHz, so my card downclocks to a stable 2100 MHz even if it hits 54-55 degrees.
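The downclocking behaviour described above can be sketched as a simple model. This is an assumed approximation (the 15 MHz bin width comes from the posts here; the rate of roughly one bin per 3°C over the ~50°C threshold is a guess tuned to the 2130 → 2100 MHz example), not NVIDIA's actual GPU Boost algorithm:

```python
# Rough sketch of how GPU Boost steps the core clock down in 15 MHz bins
# as the die heats past ~50°C. The drop rate is an assumption.
BIN_MHZ = 15  # Turing boost steps are 15 MHz wide

def effective_clock(set_clock_mhz: int, temp_c: float) -> int:
    """Drop one 15 MHz bin roughly every 3°C above the ~50°C threshold."""
    if temp_c <= 50:
        return set_clock_mhz
    bins_dropped = int((temp_c - 50) // 3) + 1
    return set_clock_mhz - bins_dropped * BIN_MHZ

print(effective_clock(2130, 45))  # below threshold: full 2130 MHz
print(effective_clock(2130, 55))  # ~2100 MHz, matching the post above
```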


----------



## LayZ_Pz

Hello,

Got my RTX 2080 Ti Strix, first batch (so no double fan header). Has anyone got a BIOS mod for it yet? The 125% power limit is quite... limiting.

Also, what about warranty in Europe? I want to change the paste to something better, but one of the screws has one of those warranty stickers on it. Has anyone got any information on this?


----------



## Shawnb99

Damn EKWB drain ports.
I went to drain my system to install the new GPU, only for the drain port not to work; it just turns and turns without letting any water out.
So I can't drain my system at the moment unless I use an upper port on the res, and even then I'll still need a new drain port for when I rebuild.

So no installing this week


Sent from my iPhone using Tapatalk


----------



## krizby

Goz3rr said:


> So my current (default) curve looks like this, but I'm not exactly sure what I'm supposed to be changing.
> The voltage peaks at 1093mV for a very short time right at the start of the benchmark, and then basically stays locked to 1050mV and 1062mV for the rest of the benchmark.
> I suppose I should slightly increase the frequency curve for the 1062mV point and up in an attempt to make use of that little bit of extra voltage over 1050mV?
> 
> By lowering my temps by a decent amount (staying below 40 °C) and overclocking my CPU to 5.0GHz I've managed to achieve a pretty decent score of 9468, compared to my 9285 score yesterday :D


At 1.05 V you have 2175 MHz, and the curve flatlines after that, so the GPU will not use more than 1.05 V. At 1.062 V increase the frequency to 2190 MHz, at 1.075 V set 2205 MHz, and at 1.093 V set 2220 MHz. BTW, if your card is stable at 2175 MHz / 1.05 V, you have a golden chip on your hands 🙂.
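The curve edit suggested above amounts to raising each point past the flat spot by one 15 MHz boost bin per voltage step. A minimal sketch with the values from this exchange (the helper name and dict representation are illustrative, not Afterburner's actual API):

```python
# Extend a V/F curve that flatlines after 1.05 V so each higher voltage
# point gains one 15 MHz boost bin (values taken from the posts above).
curve = {1.050: 2175, 1.062: 2175, 1.075: 2175, 1.093: 2175}

def extend_flat_tail(curve_mhz: dict, step_mhz: int = 15) -> dict:
    """Raise each point past the flat spot by one boost bin per voltage step."""
    volts = sorted(curve_mhz)
    base = curve_mhz[volts[0]]
    return {v: base + i * step_mhz for i, v in enumerate(volts)}

print(extend_flat_tail(curve))
# {1.05: 2175, 1.062: 2190, 1.075: 2205, 1.093: 2220}
```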


----------



## J7SC

VPII said:


> Nope I won't spend a cent more for this GPU..... it costed me an arm and a leg. You have to understand, the pricing in the USA or Europe is bad, but the pricing here in South Africa is a hell of a lot worse.



...I wish I had your re*$*olve, even having gotten my two cards at very decent prices (in very relative terms). Still, if I were you, I would invest $20 or so in a non-contact directional infrared thermometer. BTW, and I am not joking, also keep old pop cans (aluminum or plastic)... they make great directional shrouds when cut apart to size.


----------



## J7SC

Shawnb99 said:


> Dank EKWB drain ports.
> Go to drain my system to install the new GPU only for the drain port not to work, just turns and turns without letting any water out.
> So can’t drain my system atm unless I use an upper port on the res and even if I do still need a new drain port for when I rebuild.
> 
> So no installing this week



You seem to be on quite an odyssey with your 2080 Ti upgrade, sadly. As for the drain plug, both Home Depot and Canadian Tire have a variety of little valves (useful as a drain plug) in brass, ABS, and stainless steel, in sizes of 1/4 inch and up (way up). If you have a bit of extra hose to route the rad intake down to such a valve, it **may** solve your problem. A lot of the fittings in my new system are from those two places... :wheee:


----------



## devilhead

So I want to share some experience with you: I bought an Asus 2080 Ti Turbo. I checked with the EKWB configurator, which said it was compatible, but I ran into two mounting problems; I'll attach pictures here. After cutting away some plastic and removing 2 screws, it worked.
Maybe only the temperatures are not OK: during gameplay, loop temperature 31°C, GPU 41°C (the GPU boosts to 2130 and stays there the whole time, 400 W BIOS)


----------



## ducky083

Thx ;-)


----------



## mistershan

Hey guys, I have the EVGA 2080 Ti XC, and ever since I've been using it my fans have been going crazy. I thought it was the fans on the video card, but now I'm seeing it's the top fans on my cooler. The fans sort of spin up and then down, like an engine revving. How do you know if a cooler is working properly? It's from 2014. This is the cooler I have.

CORSAIR HYDRO H105I LIQUID COOLER


----------



## BudgieSmuggler

mistershan said:


> Hey guys I have the EVGA 2080ti XC and ever since I have been using it my fans are going crazy. I thought it was the fans on the video card but now I am seeing it is the top fans on my cooler. The fans sorta go up and then down. Sort of like an engine revving up and then down. How do you know if a cooler is working properly? It's from 2014. This is the cooler I have.
>
> CORSAIR HYDRO H105I LIQUID COOLER

I'd check the fan profile on your Corsair, obviously. Do you have an Asus mobo? If so, check the fan profile (in AI Suite, Fan Expert 4) for your CPU fans and AIO pump. Is the pump set to max, as it should be, and does the curve for the CPU fans need raising slightly? If my fan curve is set too low, the fans have to boost themselves too much, too often.

Also, these AIOs don't have a long lifespan. If your profiles are normal and your temps are high, the AIO is probably coming to the end of its life. I had one fail recently, which Corsair replaced as it was only a year old, but it caused damage to either my CPU or my mobo (I still don't know which): I can't hit 5 GHz anymore, only 4.9 GHz; even if I set 5 GHz, my PC clocks at 4.9 GHz, as if 5 GHz is broken. Point being, if your temps are high and the cooler is acting erratically, I would replace it ASAP. It's not worth the damage a failing cooler can cause.


----------



## Hanks552

krizby said:


> at 1.05v you have 2175mhz and flatline after that so the gpu will not use above 1.05v. At 1.062v increase freq to 2190mhz, at 1.075v set to 2205mhz, at 1.093v set to 2220mhz. Btw if your card is stable at 2175mhz/1.05v then you have a golden chip in your hand 🙂.


What is the purpose of the shunt mod if we can go higher than 1.094 V?


----------



## VPII

J7SC said:


> ...I wish I would have your re*$*olve, even with having gotten my two cards at a very decent (in very relative terms) prices. Still, if I were you, I would invest $20 or so for the no-contact directional infrared thermometer. BTW, and I am not joking, also keep old pop cans (aluminum or plastic)...they can make great directional shrouds when cut apart to size.


Hi... yup, I will get one of those thermometers, as it would make me feel a hell of a lot better to know and see the actual temps.


----------



## kot0005

OK, so I tried the 406 W Trio X BIOS on my Sea Hawk EK and the power usage wouldn't go past 355 W for some reason 😮

I tried the 380 W Galax BIOS but had to revert, because it disables the LEDs.


----------



## Hanks552

I'm getting the "power" performance limit flag, but it seems to be drawing only 350 W and my BIOS is 380 W. What?


----------



## J7SC

VPII said:


> Hi.... yup I will get one of those thermometers as it would make me feel a hell of a a lot better when I know and see the actual temps.



Hi - You may have already seen this GN vid re. a hybrid 2080 TI setup, fans on modules and such. 1m 31s + is quite interesting, as are the VRAM and VRM temp charts  (per your earlier query).


----------



## mistershan

BudgieSmuggler said:


> I'd check the fan profile on your Corsair obviously. Do you have an Asus mobo? If so then check the fan profile (in AI Suite fan expert 4) for your cpu fans and aio pump. [...]

Thanks. What is AI Suite Fan Expert? I uninstalled all the Asus software because it gave me problems. Yes, I have the Asus Rampage V Extreme. Should I download it? How do I monitor my CPU temp? I don't overclock, so it shouldn't be that high.


----------



## VPII

J7SC said:


> Hi - You may have already seen this GN vid re. a hybrid 2080 TI setup, fans on modules and such. 1m 31s + is quite interesting, as are the VRAM and VRM temp charts  (per your earlier query).
> 
> https://www.youtube.com/watch?v=ORJfhlTAgfU


Very interesting what he said about the need for heatsinks on the VRAM. Well, I have them, in an open bench, so airflow is not an issue. My fan on top also blows downward, so basically all the air blown in from the left, right, and top of the GPU gets pushed out at the bottom.


----------



## kx11

The average is 530 W to 585 W; this was during an OC Scanner test.


----------



## Krzych04650

So the new Afterburner 4.6.0 Beta 10 thinks I have an MSI 2080 Ti Lightning. Interesting. Also, does it actually add independent fan control? I have 2 fans on my Sea Hawk X (obviously), and they are both detected by the OSD; I can even control them independently in EVGA Precision, but I still don't see the option in MSI AB. Not that I need it, but the update notes say it should be a feature.


----------



## jelome1989

Max safe voltage: https://youtu.be/GjpQwqtvCZc?t=5m17s

Not sure if this has been discussed before, but what does he mean by going from 5-year reliability to 1-year reliability? If 1.06 V is the normal voltage, does that mean 1.06 V has 5-year reliability while 1.093 V (3 ticks above 1.06 V) has 1-year reliability?

Also, I pulled the trigger and got myself an Asus Strix RTX 2080 Ti Call of Duty edition. I was asking before whether it was included in the new promo, and indeed it was: I got the BFV and Anthem game codes. However, I ran into another problem: redeeming the Call of Duty game code itself! The code card that came with the purchase was empty. I've tried emailing and calling Asus, but to no avail. How ironic.


----------



## BudgieSmuggler

mistershan said:


> Thanks. What is the AI Suite fan? I uninstalled all the Asus software because it gave me problems. Yes, I have the Asus Rampage V Extreme. Should I download that? How do I monitor my CPU temp? I don't overclock, so it shouldn't be that high.

You can monitor through Corsair Link, AI Suite, or RivaTuner with MSI Afterburner; take your pick. I use AI Suite now, and yes, it had problems with one of the Win 10 updates: they had to release an updated version to work with it, but since then I've had no issues. It does let you control fans from Windows, though I'm aware some people aren't fans of software fan control (no pun intended). You can also set your CPU fan curves in the BIOS.

If the fans are revving up and down, the temperature is rising above where your current fan curve sits, so the higher CPU temp is making the fans spin up to bring it back down; a constant, very rapid revving would suggest something else, though. You can use HWMonitor to see your CPU temps, and ideally an on-screen display that shows temps under load (in game, for example) so you know how high they're hitting. I understand you don't overclock, but if there's a problem with your cooler, your temps will still be too high. It would be good to check that ASAP.


----------



## Asmola

I tested the MSI Trio 406 W BIOS on my watercooled reference PCB and noticed that even if I raise the power limit, and GPU-Z recognizes it, the card still enforces the stock 300 W power limit. Check the power usage in HWiNFO! The first image is with the 406 W BIOS (power usage 311 W), the second with the 380 W BIOS (power usage 386 W).

406W BIOS










380W BIOS


----------



## VPII

I'm sitting with a bit of a balls-up. I flashed my GPU for a second time to the 450W TDP Galax HOF BIOS.

Disabled the GPU before I did it. Upon restart it went into Windows but stuttered in the OS and then went black screen. Was able to restart in safe mode and flash back. Upon restart I got the same stutter in the OS, hanging for many seconds before being able to choose something, and then it will still hang. Tried safe mode but it's so difficult to select anything; even **** restart gives me vague options.

First time I flashed the bios no issues.

Sent from my SM-G950F using Tapatalk


----------



## knightriot

Esenel said:


> Hi,
> As the Delta T between GPU and water is ~11°C for most of us, this would mean your water is ~40°C.
> 
> This would mean the Delta T between water and your ambient is also ~10°C.
> As the best Delta is around 5°C, this seems to be a valid temperature for your ambient and a normal custom loop.


Oh thanks man, i feel better


----------



## Diablix6

I have an MSI RTX 2080 Ti Sea Hawk X (Hybrid) and I am thinking about reflashing the ROM, because I need more power for my Pimax 8K. Is there any guide for reflashing?



Next question, which ROM would be best for my model? Could I safely use the *MSI RTX 2080 Ti Gaming X Trio Custom PCB (2x8-Pin, 1x6-Pin) 300W x 135% Power Target BIOS (406W)*? My model has just the reference PCB. Should I set the fan to 100% when playing to keep it safe?


----------



## boli

krizby said:


> What I can achieve here is that the card will not throttle due to temperature, for example you set a +120mhz offset overclock, once the GPU reach 44C the VF curve changes itself to +105mhz offset, you can manually bring the VF curve back up to +120mhz offset (SHIFT + Drag the curve). If your card doesn't crash then you just gain another +15mhz overclock. This doesn't benefit short benchmark since your temp won't reach 44C anyways. However this can bring total clock stability for people who requires it (playing competitive games).


Not a direct reply, but may be of interest to you, as you were my inspiration for all of this undervolting for consistent frametimes thingy. 

*Fixing weird undervolting curve*
I'm currently running my 2080 Ti at 320W power limit and 1920 MHz at 931 mV. This runs for hours on end while playing games (currently _Ring of Elysium_).

The curve is based off an OC Scanner curve that I edited manually, details in an earlier post of mine:








Anyway, due to the way I edited it, you will notice that the very light grey line in the background is very jaggy on the right side of the selected point. This sucks when I try to shift the whole curve up or down when pressing Shift, because the light grey "base" line will be used as new base, which would require lots of editing to fix.

This is why I wanted to fix my light grey base line. I did so as follows:
When the light grey line is above the solid white line, move the white point upwards a little bit (say +8) and save. The white point should snap back to where it was, and the light grey line should move down a little bit. Repeat until it displays 0 offset (going directly to 0 offset _might_ work, try it and reload if it doesn't work).

As an example, let's look at my current 1935 MHz at 943 mV curve that passed all my tests (slightly higher/different from first pic, but with the same problem in the light grey line).
In the pic above it says -31 offset at the selected point, and the grey line is above the white line. So I moved the white square point upwards a little, saved the curve, and repeated the same at this very point until the offset read 0, like so:








Rinse and repeat for all the points towards the right.

Starting from 1137 mV the target offset is -1 (otherwise the solid white line would increase by 15):








And starting from 1187 mV the target offset is -16 (same reason as above):








This is using Afterburner 4.6 beta 10, while GPU is at 49C using Furmark. During gaming the GPU is a few degrees cooler.

With the fixed, flatter curve I can more easily find a higher flat curve, that does not hit temperature or power limits and thus stays completely flat for hours.
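For anyone wondering why the target offsets above land at exactly -1 and -16: Afterburner snaps Turing clocks to a 15 MHz grid, so a whole range of offsets maps to the same final clock. A minimal Python sketch of that idea, assuming the editor floors (base + offset) to the 15 MHz grid; `offset_for_target` is a hypothetical helper, not an Afterburner API:

```python
GRID = 15  # Turing boost clocks move in 15 MHz steps

def offset_for_target(base_mhz: int, target_mhz: int) -> int:
    """Pick the offset closest to zero whose snapped result still lands
    on target_mhz, assuming (base + offset) is floored to the 15 MHz
    grid. A toy model of the curve editor, not measured behavior."""
    raw = target_mhz - base_mhz
    lo, hi = raw, raw + GRID - 1  # every offset in [lo, hi] snaps to the same bin
    if lo <= 0 <= hi:
        return 0
    return lo if abs(lo) < abs(hi) else hi

# Matches the offsets in the post: a point whose stock clock sits one
# 15 MHz bin above the flat target needs -1, two bins above needs -16.
print(offset_for_target(1965, 1950))  # -1
print(offset_for_target(1980, 1950))  # -16
```

Under that model, -1 and -16 are just the smallest nudges that drop a point down by one and two bins respectively.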


----------



## Thoth420

Is that 4.6 beta 10 stable? I have it and the last official release on my flash drive. Doing a fresh OS install on a new drive today and putting my 2080 Ti in. Didn't know which to install so I grabbed both. I tend to prefer not using beta software, but if it's not crashing or having any major issues that the release doesn't have, I'm cool with it. I just want to make a custom fan profile for now anyway.


----------



## Sheyster

Asmola said:


> Tested MSI Trio 406W bios on my watercooled reference pcb, noticed that even if i raise power limit and GPU-Z recognize it, it's still using stock power limit 300W. Check power usage's from HWiNFO! First image is with 406W bios (power usage 311W), second one with 380W (power usage 386W).
> 
> 406W BIOS
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 380W BIOS



Excellent information! Thank you for doing this.  +rep if I could!


----------



## Hanks552

Asmola said:


> Tested MSI Trio 406W bios on my watercooled reference pcb, noticed that even if i raise power limit and GPU-Z recognize it, it's still using stock power limit 300W. Check power usage's from HWiNFO! First image is with 406W bios (power usage 311W), second one with 380W (power usage 386W).
> 
> 406W BIOS
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 380W BIOS





VPII said:


> Im sitting with bit of a balls up. I flashed my gpu for a second time to the 450watt tdp galax hof bios.
> 
> Disabled gpu before I did it. Upon restart it went into windows but stuttered in the os and then went black screen. Was able to restart safe mode and flashed back. Upon restart got tje same stutter in os hanging for many seconds before being able to choose something and then will still hang. Tried safe mode but so difficult to select even **** restart give me vague options.
> 
> First time I flashed the bios no issues.
> 
> Sent from my SM-G950F using Tapatalk





knightriot said:


> Esenel said:
> 
> 
> 
> Hi,
> As Delta T between GPU and water is ~11°C for the most of us this would mean your water is ~40°C.
> 
> This would mean Delta t between water and your ambient is as well ~10°C.
> As the best Delta is around 5°C this seems to be a valid temperature for your ambient and a normal custom loop.
> 
> 
> 
> Oh thanks man, i feel better

Thanks! I didn't have the balls to use another BIOS on a reference PCB. I was afraid of F-ing everything up.

My experience is that with the Galax BIOS, in DX12 and DX11 I did hit the 386W power limit, but running DXR I hit the power limit at only 350W (my original was 300W).


----------



## Krzych04650

Diablix6 said:


> I have MSI RTX 2080 Ti Seahawk X (Hybrid) and I am thinking about reflashing ROM, because I need more power for my Pimax 8K. Is there any guide for reflashing?
> 
> 
> 
> Next question, what ROM would be best for my model? Could I safely use *MSI RTX 2080 Ti Gaming X Trio Custom PCB (2x8-Pin, 1x6-Pin) 300W x 135% Power Target BIOS (406W)?*my model has just reference PCB. Should I set fan on 100% when playing to keep it safe?


You can use whichever BIOS is for 2x8-pin cards, so not the one from the Trio. Use the Galax 380W; I am using it with my Sea Hawk X and getting very good results. Set your fan according to temperatures.



Thoth420 said:


> Is that 4.6 beta 10 stable? I have it and the last official release on my flash drive. Doing a fresh OS install on a new drive today and putting my 2080Ti in. Didn't know which to install so I grabbed both. I tend to prefer not using beta software but if its not crashing or has any major issues that the release doesn't I'm cool with it. I just want to make a custom fan profile for now anyways.


Afterburner is almost always in beta. They keep adding new features, and instead of calling it a new version they just call it a new build and label it beta. But it doesn't really matter what it's called; it never causes issues.


----------



## J7SC

VPII said:


> Im sitting with bit of a balls up. I flashed my gpu for a second time to the 450watt tdp galax hof bios.
> 
> Disabled gpu before I did it. Upon restart it went into windows but stuttered in the os and then went black screen. Was able to restart safe mode and flashed back. Upon restart got tje same stutter in os hanging for many seconds before being able to choose something and then will still hang. Tried safe mode but so difficult to select even **** restart give me vague options.
> 
> First time I flashed the bios no issues.
> 
> Sent from my SM-G950F using Tapatalk



...probably too late to reiterate that flashing a 3 EPS card bios onto a 2 EPS card is - at best - not going to work re. full wattage :sad-smile

...hard to do long-distance diagnosis, but likely, the Win registry is locking onto the wrong or possibly mangled GPU id (= per bios). First thing I would try is grab an old HD or SSD, wipe / reformat and do a quick fresh Win 10 install (without any of the current machines drives connected for the time being). If it boots up and subsequently loads the NVidia drivers for the 2080 TI, you know it is not the card itself. Just to be on the safe side, before the fresh Win 10 test install, I would re-flash the 2080 TI yet again, preferably with a second older NVidia card in the system in the primary spot.


----------



## LayZ_Pz

jelome1989 said:


> Max safe voltage: https://youtu.be/GjpQwqtvCZc?t=5m17s
> 
> Not sure if this has been discussed before but what does he mean by going from 5 year reliability to 1 year reliability? If 1.06 is the normal voltage does it mean that 1.06v has 5 year reliability and 1.093v (3 ticks from 1.06v) has 1 year reliability?
> 
> Also pulled the trigger and got myself an Asus Strix RTX 2080 Ti Call of Duty edition. I was asking before if it was included in the new promo and indeed it was. I got the BFV and Anthem game codes. However, I ran into some other problem - redeeming the Call of Duty game code itself! The card that came with the purchase was empty. Tried emailing / calling asus but to no avail. How ironic


Yey, another fellow Strix Owner !

Got a non-COD edition. Did you fiddle with the BIOS yet? I hate the power limit on this thing..


----------



## xkm1948

Has anyone tried the TimeSpy Extreme Stress Test and gotten over 96% stability? Mine always sits around 94-95%


----------



## Krzych04650

xkm1948 said:


> Has anyone tried to use TimeSpy Extreme Stress Test and gets over 96% stable? Mine always sits around 94%~95%


I am getting anywhere from 98 to 98.5%, and I have also seen many people posting similar scores. You probably have a lot of clock fluctuations. Here is how it looks for me:





https://www.3dmark.com/tsst/252417


----------



## kx11

xkm1948 said:


> Has anyone tried to use TimeSpy Extreme Stress Test and gets over 96% stable? Mine always sits around 94%~95%



Using the OC Scanner curve and +350 MEM, I got 97.8%


----------



## Hanks552

nvm


----------



## jelome1989

LayZ_Pz said:


> Yey, another fellow Strix Owner !
> 
> Got a non COD edition, did you fiddle with Bios yet ? I hate the power limit on this thing..


Not yet, still finding a stable overclock and playing around with the curve. So far I'm getting 2100-2115 max OC at 1.05v which is a bit underwhelming. Not stable at 1.025v. 

I have Micron memory so that might be a factor. How about you?

EDIT: I will probably not mess around with the bios until after a month of heavy gaming and testing. Just to have confidence that my card is not one of the faulty cards. Also by that time maybe the ROG Matrix Platinum bios will be available, hope we can get it to work with the Strixes


----------



## BudgieSmuggler

J7SC said:


> ...probably too late to reiterate that flashing a 3 EPS card bios onto a 2 EPS card is - at best - not going to work re. full wattage :sad-smile
> 
> ...hard to do long-distance diagnosis, but likely, the Win registry is locking onto the wrong or possibly mangled GPU id (= per bios). First thing I would try is grab an old HD or SSD, wipe / reformat and do a quick fresh Win 10 install (without any of the current machines drives connected for the time being). If it boots up and subsequently loads the NVidia drivers for the 2080 TI, you know it is not the card itself. Just to be on the safe side, before the fresh Win 10 test install, I would re-flash the 2080 TI yet again, preferably with a second older NVidia card in the system in the primary spot.



Have you got dual monitors, or do you by any chance have a BenQ 144Hz monitor?


----------



## LayZ_Pz

jelome1989 said:


> Not yet, still finding a stable overclock and playing around with the curve. So far I'm getting 2100-2115 max OC at 1.05v which is a bit underwhelming. Not stable at 1.025v.
> 
> I have Micron memory so that might be a factor. How about you?
> 
> EDIT: I will probably not mess around with the bios until after a month of heavy gaming and testing. Just to have confidence that my card is not one of the faulty cards. Also by that time maybe the ROG Matrix Platinum bios will be available, hope we can get it to work with the Strixes


Yeah, I have Micron too. I believe all the Strix 2080 Tis are using Micron chips. There's my curve attached, but basically +750 on the mem. 

I was hoping the same for the Matrix Platinum; hopefully we can go higher on the power target.. 

Are you in the EU? Did you remove the warranty sticker, and do you know how the warranty works if you do?


----------



## kot0005

Krzych04650 said:


> You can use whichever BIOS for 2x8pin cards, so not the from Trio. Use Galax 380W, I am using it with Sea Hawk X and getting very good results. Set your fan according to temperatures.
> 
> 
> 
> Afterburner is almost always beta. They are adding new features and instead of calling it a new version they just call it a new build and call it beta. But it doesn't really matter how it is called, it is never causing issues.


I can't get my Sea Hawk to go over 350W with the 406W BIOS..


----------



## Hanks552

jelome1989 said:


> Not yet, still finding a stable overclock and playing around with the curve. So far I'm getting 2100-2115 max OC at 1.05v which is a bit underwhelming. Not stable at 1.025v.
> 
> I have Micron memory so that might be a factor. How about you?
> 
> EDIT: I will probably not mess around with the bios until after a month of heavy gaming and testing. Just to have confidence that my card is not one of the faulty cards. Also by that time maybe the ROG Matrix Platinum bios will be available, hope we can get it to work with the Strixes


I have Micron too, and I can run +950 fine


----------



## hdtvnut

Hanks552 said:


> i have Micron too, and i can run 950 fine


Air or water? My Strix O11G on air doesn't like +1000, but +800 seems OK. I notice temps increasing in this range; somewhere around here it can't pass the TSE stress test.


----------



## jelome1989

LayZ_Pz said:


> Yeah I have Micron too. I believe all the Strix 2080ti are using Micron chips, there's my curve attached, but basically +750 on the mem.
> 
> I was hoping the same for the Matrix Platinum, hopefully we can go higher on the power target..
> 
> Are you in EU ? Did you remove the warranty sticker & know about how's the warranty if you do ?


Ah, interesting, I didn't know that all Strix have Micron. Yeah, my curve looks about the same. Now testing the same curve with +1000 memory and 1.05-1.056V core, with clocks at 2085-2115 depending on temps; so far no crashes yet. 

No, I also don't know about the warranty, and I'm from Asia, btw. I also plan to watercool this card when the EK waterblock becomes available, so about a month from now. Good timing; if the card lives by then I would assume it is good. 
@hdtvnut @Hanks552
How about your GPU clocks and voltages? I can't get mine to stay constant in TSE stress tests as temps bring down the clocks too much. Even at 80% fan speed, temps reach 60-65°C. But +1000 memory is stable at 1.05V in games so far



boli said:


> Spoiler
> 
> 
> 
> Not a direct reply, but may be of interest to you, as you were my inspiration for all of this undervolting for consistent frametimes thingy.
> 
> *Fixing weird undervolting curve*
> I'm currently running my 2080 Ti at 320W power limit and 1920 MHz at 931 mV. This runs for hours on end while playing games (currently _Ring of Elysium_).
> 
> The curve is based off an OC Scanner curve that I edited manually, details in an earlier post of mine:
> View attachment 246206
> 
> 
> Anyway, due to the way I edited it, you will notice that the very light grey line in the background is very jaggy on the right side of the selected point. This sucks when I try to shift the whole curve up or down when pressing Shift, because the light grey "base" line will be used as new base, which would require lots of editing to fix.
> 
> This is why I wanted to fix my light grey base line. I did so as follows:
> When the light gray line is above the solid white line, move the white point upwards a little bit (say +8) and save. The white point should snap back to where it was, and the light grey line should move down a little bit. Repeat until it displays 0 offset (going directly to 0 offset _might_ work, try it and reload if it doesn't work).
> 
> As an example, let's look at my current 1935 MHz at 943 mV curve that passed all my tests (slightly higher/different from first pic, but with the same problem in the light grey line).
> In the pic above it says -31 offset at the selected point, and the grey line is above the white line. So I moved the white square point upwards a little, saved the curve, and repeated the same at this very point until the offset read 0, like so:
> View attachment 246208
> 
> a
> Rinse and repeat for all the points towards the right.
> 
> Starting from 1137 mV the target offset is -1 (otherwise the solid white line would increase by 15):
> View attachment 246210
> 
> 
> And starting from 1187 mV the target offset is -16 (same reason as above):
> View attachment 246212
> 
> 
> This is using Afterburner 4.6 beta 10, while GPU is at 49C using Furmark. During gaming the GPU is a few degrees cooler.
> 
> With the fixed, flatter curve I can more easily find a higher flat curve, that does not hit temperature or power limits and thus stays completely flat for hours.


How were you able to raise the thin grey line above the curve? Mine always stays below the curve.


----------



## BudgieSmuggler

VPII said:


> Im sitting with bit of a balls up. I flashed my gpu for a second time to the 450watt tdp galax hof bios.
> 
> Disabled gpu before I did it. Upon restart it went into windows but stuttered in the os and then went black screen. Was able to restart safe mode and flashed back. Upon restart got tje same stutter in os hanging for many seconds before being able to choose something and then will still hang. Tried safe mode but so difficult to select even **** restart give me vague options.
> 
> First time I flashed the bios no issues.
> 
> Sent from my SM-G950F using Tapatalk


Do you have a BenQ 144Hz monitor? There is a reason I'm asking: there is an issue with two Nvidia driver packages that caused my PC to hang also. It was the drivers and not my flashed BIOS.


----------



## J7SC

BudgieSmuggler said:


> Have you got dual monitors or do you by any chance have a Benq 144hz monitor?



Ahem ? Nope to both...btw, my comment was a response to VPII's reported problem



Hanks552 said:


> i have Micron too, and i can run 950 fine


Both my cards have Micron and both do 1040+ on VRAM w/o crashing, though I still have to test for the most *efficient* setting rather than just the outright *max* before lock-up etc.


----------



## BudgieSmuggler

J7SC said:


> Ahem ? Nope to both...btw, my comment was a response to VPII's reported problem


Yes, I saw that after I had posted. Have asked him the same question now. Thanks


----------



## J7SC

BudgieSmuggler said:


> J7SC said:
> 
> 
> 
> Ahem ? Nope to both...btw, my comment was a response to VPII's reported problem
> 
> 
> Yes i saw that after i had posted. Have asked him the same question now. thanks


:thumb:


----------



## Hanks552

hdtvnut said:


> Air or water? My Strix 011G on air doesn't like 1000, but 800 seems OK. I notice temp increasing in this range. Somewhere here, can't pass TSE stress.


 @hdtvnut
water, both my gpus on water



jelome1989 said:


> Ah, interesting, didn't know that all Strix have Micron. Yeah, my curve looks about the same. Now testing the same curve with +1000 memory and 1.05-1.056v core with clocks @2085-2115 depending on temps, so far no crashes yet.
> 
> No, I also don't know about the warranty and I'm from Asia, btw. I also plan to watercool this card when the EK waterblock becomes available. So about a month from now, good timing, if the card lives by then I would assume my card is good.
> 
> @hdtvnut @Hanks552
> How about your GPU clocks and voltages? I can't get mine to stay constant on TSE stress tests as temps bring down the clocks too much. Even at 80% fan speed temps reach 60-65c. But +1000 memory is stable at 1.05v on games so far
> 
> 
> How were you able to raise the thin grey line above the curve? Mine always stay below the curve.


 @jelome1989
well, it's hard to say, because I'm already hitting the voltage limit at 1.094V. In some games the voltage is lower and I'm hitting the power limit
even with 380W... I'm going to do the shunt mod next month, let's see



J7SC said:


> Ahem ? Nope to both...btw, my comment was a response to VPII's reported problem
> 
> 
> 
> Both my cards have Micron and both do 1040+ on VRAM w/o crashing though I still have to test for the most *efficient* setting rather than just outright *max* before lock-up etc.



@J7SC
how do you go over +1000? I can't with MSI Afterburner


----------



## CptSpig

Hanks552 said:


> @hdtvnut
> water, both my gpus on water
> 
> 
> 
> @jelome1989
> well, is hard to say, because im already hiting voltage limit 1.094, some games the voltage is lower and im getting power limit
> even with 380w... im going to do shunt mode next month, lets see
> 
> 
> 
> 
> @J7SC
> how do you go over 1000? i cant with MSI afterburner


Update to MSI Afterburner 4.6.0 Beta; it goes to +1500 on memory. :thumb:


----------



## J7SC

Hanks552 said:


> @*J7SC*
> how do you go over 1000? i cant with MSI afterburner



...latest MSI AB (4.6.0 beta 10) goes up to +1500 MHz on VRAM. Another method is to use PrecX1 (or another such app, i.e. Galax, Gigabyte), set the VRAM, close it, then open MSI AB and adjust everything but the VRAM speed. For that to work, do NOT have MSI AB load automatically at start-up.


----------



## Hanks552

hdtvnut said:


> Air or water? My Strix 011G on air doesn't like 1000, but 800 seems OK. I notice temp increasing in this range. Somewhere here, can't pass TSE stress.





jelome1989 said:


> Ah, interesting, didn't know that all Strix have Micron. Yeah, my curve looks about the same. Now testing the same curve with +1000 memory and 1.05-1.056v core with clocks @2085-2115 depending on temps, so far no crashes yet.
> 
> No, I also don't know about the warranty and I'm from Asia, btw. I also plan to watercool this card when the EK waterblock becomes available. So about a month from now, good timing, if the card lives by then I would assume my card is good.
> 
> @hdtvnut @Hanks552
> How about your GPU clocks and voltages? I can't get mine to stay constant on TSE stress tests as temps bring down the clocks too much. Even at 80% fan speed temps reach 60-65c. But +1000 memory is stable at 1.05v on games so far
> 
> 
> How were you able to raise the thin grey line above the curve? Mine always stay below the curve.





J7SC said:


> Ahem ? Nope to both...btw, my comment was a response to VPII's reported problem
> 
> 
> 
> Both my cards have Micron and both do 1040+ on VRAM w/o crashing though I still have to test for the most *efficient* setting rather than just outright *max* before lock-up etc.





CptSpig said:


> Update to MSI Afterburner 4.6.0 Beta it goes to 1500 memory. :thumb:



not this one?


----------



## J7SC

Hanks552 said:


> not this one?



Nope, this one https://www.guru3d.com/files-details/msi-afterburner-beta-download.html


----------



## CptSpig

Hanks552 said:


> not this one?


No, 4.6.0 Beta 10: https://www.guru3d.com/files-details/msi-afterburner-beta-download.html


----------



## krizby

A lot of users want to know how to get to the maximum voltages or how to undervolt, so here is a guide to modifying your VF curve:

First max out your core voltage slider, find a stable core clock offset, then enter the VF curve editor.










Your VF curve will look like this. The default boost table is read from the BIOS, so different BIOSes will have different boost tables. 










Increase your clock at 1.056V, 1.063V, 1.069V, 1.075V, 1.081V, 1.087V and 1.093V so it looks like this; each step up in frequency is 15MHz.










To test your card for maximum clock speeds, try any game but disable V-Sync and turn all graphics settings to low; this will prevent the card from hitting power limits.
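The staircase edit above can be sketched in a few lines of Python, treating the curve as a simple voltage-to-clock table (a toy model for illustration, not the Afterburner API; the 15 MHz step is from the post, and the example voltages/clocks are made up):

```python
STEP = 15  # MHz per step on the Turing VF curve

def staircase(curve: dict[int, int], start_mv: int) -> dict[int, int]:
    """Return a copy of the curve where each point from start_mv upward
    sits one 15 MHz step above the previous point, turning a flat top
    end into the rising shape described above."""
    out = dict(curve)
    bump = 0
    for mv in sorted(out):
        if mv >= start_mv:
            bump += STEP
            out[mv] += bump
    return out

# Toy example: a flat 2010 MHz top becomes a 2025/2040/2055 staircase.
top = {1050: 2010, 1056: 2010, 1063: 2010, 1069: 2010}
print(staircase(top, 1056))  # {1050: 2010, 1056: 2025, 1063: 2040, 1069: 2055}
```

In Afterburner itself this is done by dragging each of those top points up by one step, as in the screenshots.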


----------



## kx11

Checking in GPU-Z, I don't like how it reports my GPU subvendor as NVIDIA instead of GALAX. 

















Is that alright?


----------



## Hanks552

krizby said:


> So a lot of users want to know how to get to the maximum voltages or to undervolt here is the guide to modifying your VF curve:
> 
> First max your core voltage slider, find a stable core clock offset then enter the VF curve
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Your VF curve will look like this, the default boost table is read from bios so different bios will have different boost table.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Increase your clock at 1.056v, 1.063v, 1.069v, 1.075, 1.081v, 1.087v, 1.093v so it looks like this
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> To test your card for maximum clock speeds, try any game but disable V-sync and turn all graphic settings to low, this will prevent the card from hitting power limits



Thanks, I will try that later.
I'm going to do the shunt mod first;
I'm getting power limited at 2130 / 8080


----------



## krizby

Now for the undervolt: first set the core voltage slider to 0, then go to the VF curve, select the point at .950v and press L; this will lock your frequency/clock at .950v. Now find the stable clock offset. Note that it will be a higher offset than when you allow the card to use the entire voltage range (for example, if your stable offset was +90mhz, with undervolting your card can be stable at +120 or even +150mhz).

After finding the stable clock offset, go to your VF curve and press L to unlock the frequency/clock, then adjust your clock for every point after .950v up to 1.100v so that the curve is flat (no need to flatten the voltages after 1.100v because the card never boosts past 1.05v anyway)

example:









Don't worry about the curve auto-adjusting at the last few voltage points.

Note: I think .950v is the perfect balance between performance and efficiency for users with 330W power limits. For people who are stuck with the non-A chips' 280W power limit, undervolting to .900v helps.
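The flattening step can be expressed in the same toy voltage-to-clock table style: clamp everything above the locked point to that point's clock, so the card never requests more voltage for more clock (a sketch, assuming the chosen voltage exists as a point in the table; the example values are made up):

```python
def flatten_above(curve: dict[int, int], lock_mv: int) -> dict[int, int]:
    """Clamp every point above lock_mv to the clock at lock_mv,
    mimicking the flat curve after .950v described above."""
    out = dict(curve)
    target = out[lock_mv]
    for mv in out:
        if mv > lock_mv:
            out[mv] = target
    return out

curve = {900: 1860, 950: 1950, 1000: 2010, 1050: 2070}
print(flatten_above(curve, 950))  # {900: 1860, 950: 1950, 1000: 1950, 1050: 1950}
```

In Afterburner this is the manual "drag every point to the right down to the same clock" step from the screenshots.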


----------



## BudgieSmuggler

Hanks552 said:


> @*hdtvnut*
> water, both my gpus on water
> 
> 
> 
> @*jelome1989*
> well, is hard to say, because im already hiting voltage limit 1.094, some games the voltage is lower and im getting power limit
> even with 380w... im going to do shunt mode next month, lets see
> 
> 
> 
> 
> @*J7SC*
> how do you go over 1000? i cant with MSI afterburner



You can use Galax Xtreme Tuner. If you want to use it with Afterburner, that works fine for me: I set my VF curve in Afterburner, then go to Xtreme Tuner and set memory to 1100 without touching anything else, and the VF curve settings from Afterburner stay applied alongside the memory overclock from Xtreme Tuner. If you're happy to use Xtreme Tuner on its own then all good, but I couldn't get the OSD to work in that software.


----------



## J7SC

krizby said:


> So a lot of users want to know how to get to the maximum voltages or to undervolt here is the guide to modifying your VF curve:
> 
> First max your core voltage slider, find a stable core clock offset then enter the VF curve
> 
> Your VF curve will look like this, the default boost table is read from bios so different bios will have different boost table.
> 
> Increase your clock at 1.056v, 1.063v, 1.069v, 1.075, 1.081v, 1.087v, 1.093v so it looks like this
> 
> To test your card for maximum clock speeds, try any game but disable V-sync and turn all graphic settings to low, this will prevent the card from hitting power limits



Great and compact refresher ! :thumb: 

...had the curve on one GPU before, but going to try this later in the week in NVLink/SLI, may be after some energy repatriation :drink:


----------



## arrow0309

Krzych04650 said:


> So the new Afterburner 4.6.0 Beta 10 thinks that I have MSI 2080 Ti Lightning. Interesting  Also does it actually add independent fan control? I have 2 fans on my Sea Hawk X (obviously) and they are both detected by OSD and I can even control them independently in EVGA Precision, but I still don't see an option in MSI AB. Not like I need it, but update notes say it should be a feature.


Just change the settings from MSI Standard to Third Party; no more MSI Lightning.


----------



## VPII

J7SC said:


> ...probably too late to reiterate that flashing a 3 EPS card bios onto a 2 EPS card is - at best - not going to work re. full wattage :sad-smile
> 
> ...hard to do long-distance diagnosis, but likely, the Win registry is locking onto the wrong or possibly mangled GPU id (= per bios). First thing I would try is grab an old HD or SSD, wipe / reformat and do a quick fresh Win 10 install (without any of the current machines drives connected for the time being). If it boots up and subsequently loads the NVidia drivers for the 2080 TI, you know it is not the card itself. Just to be on the safe side, before the fresh Win 10 test install, I would re-flash the 2080 TI yet again, preferably with a second older NVidia card in the system in the primary spot.


Hi J7SC, I actually do have a second drive with Win10 and one with Win7 installed, which should do the trick. I'll work it from there to see if it works.

Sent from my SM-G950F using Tapatalk


----------



## menko2

I have the Asus Strix OC and the max overclock I can get is +150MHz core and +800MHz memory.

Has someone managed to flash another BIOS with more power? My temps are good, so I assume it's the power that's limiting the overclock of the card.


----------



## kot0005

How are people getting the LEDs on the Gaming X Trio / Sea Hawk to work with the Galax 380W BIOS?


----------



## kx11

I might sell that HOF I got; I want to go back to using the vertical GPU mount.


----------



## kot0005

Played a lot with my GPU; the core seems to be a dud. Memory is insane though, I can hit 1400MHz no problem. So my GPU can do 2055 at 1.031V, but it needs 1.081V for 2085 and 1.093V for 2100MHz...

I just ended up undervolting it to 2055 at 1.031V. This is for games btw.


----------



## boli

krizby said:


> Note: I think .950v is the perfect balance between performance/efficiency for users with 330W power limits. For people who stuck with non-A chips 280W power limit undervolting to .900v helps.



Nice guide mate! This sounds about right in my experience as well. I'm at 320W (by choice) and previously ran 1920 MHz at 925mV, 1935 at 931 and now 1950 at 950 (would be 1965 if card stayed below 45C [emoji12]). All of these stable for hours. Still testing.


----------



## CptKuolio

Tried the 450W HOF BIOS on a reference-PCB Palit OC; it's not working properly. 

The BIOS is stable and you can run games and such, but the card's power draw maxes out at the default 300W instead of 450W. Used Kombustor/PUBG as the testing method; with them I can get the 380W BIOS to function as expected, with 380W power draw.

Screenshot of HWiNFO64 included as an attachment.


----------



## EarlZ

@krizby at what undervolt value and clocks do you run your 2080 Ti?


----------



## krizby

boli said:


> Nice guide mate! This sounds about right in my experience as well. I'm at 320W (by choice) and previously ran 1920 MHz at 925mV, 1935 at 931 and now 1950 at 950 (would be 1965 if card stayed below 45C [emoji12]). All of these stable for hours. Still testing.


Nice to find someone who shares my experience. My finding with undervolting is that for each 50mV increase the GPU uses about 30W more at 45C (temperature affects efficiency). So for reference, here are the values:
.850v ~ 220-240W
.900v ~ 250-270W
.950v ~ 280-300W
1.000v ~ 310-330W
1.050v ~ 340-360W
This is for testing at 3440x1440 resolution; if you are playing games at 4K the load will be higher, and at 1440p the load will be less.
For each 50mV increase after .900v the core clock can be raised by about 60MHz, for example 1890MHz/.900v --> 1950MHz/.950v.
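The rule of thumb above (roughly +30W and +60MHz per +50mV, from a 1890MHz/.900v baseline) can be sketched in Python. This is just a linear extrapolation of the numbers quoted in the post, not a measurement tool, and `estimate_point` is a made-up helper name:

```python
# Rough linear sketch of the quoted scaling: from a 1890 MHz / 0.900 V
# baseline drawing ~260 W (midpoint of the 250-270 W band), each +50 mV
# adds about 60 MHz of core clock and 30 W of power draw at 45C.
def estimate_point(voltage_mv):
    """Return (core MHz, midpoint watts) extrapolated from 900 mV."""
    steps = (voltage_mv - 900) / 50.0
    clock_mhz = 1890 + 60 * steps
    power_w = 260 + 30 * steps
    return clock_mhz, power_w

print(estimate_point(950))   # matches the 1950 MHz / ~290 W example
print(estimate_point(1000))  # ~2010 MHz, ~320 W (mid of the 310-330 W band)
```

Actual cards will deviate from this line with temperature and workload, as the post notes.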



EarlZ said:


> @krizby at what undervolt value and clocks do you run your 2080TI at ?


As I have a non-A chip that is stuck with a 280W power limit, I undervolt to 1890MHz/.900v. Average load during gaming is 250-270W. Maximum GPU temp is 50C without undervolting and 47C with it.



CptKuolio said:


> Tried the 450W HOF BIOS on my reference-OC Palit, not working properly.
> 
> The BIOS is stable and you can run games and such, but the card's power draw maxes out at the default 300W instead of 450W. Used Kombustor/PUBG as the testing method; with it I can get the 380W BIOS to function as expected, with 380W power draw.
> 
> Screenshot of HWINFO64 included as attachment.


Lol, don't trust the wattage value that HWinfo gives you, dude. I see your BIOS is 400W at 100% and 450W at 112%, and looking at the AB graph your card sits at 100% power usage already. Just calculate the wattage from the power % that AB gives you.
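The calculation suggested here is just the wattage the BIOS assigns to 100% scaled by Afterburner's power-% readout; a minimal sketch (the function name is made up):

```python
# Derive real wattage from Afterburner's power-% readout, given the
# wattage the BIOS assigns to 100% on its power limit slider.
def watts_from_percent(power_percent, limit_at_100_w):
    return limit_at_100_w * power_percent / 100.0

print(watts_from_percent(100, 400))  # 400.0 W at the 100% default
print(watts_from_percent(112, 400))  # 448.0 W at the 112% slider max
```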


----------



## boli

First test (I think) of Aqua Computer's kryographics next water block.

It's late to the party, but according to Igor it beats the previous best (Heatkiller IV). It's in German, but pictures and videos are mostly self-explanatory.  Ambient (air) temp is 22C, (chilled) water temp is 20C.

The installation was a bit fiddly apparently (but so were most other 2080 Ti water block installs), and the optional normal or active backplates were not tested (would be in the way for the temperature readings).


----------



## BudgieSmuggler

kx11 said:


> checking on GPUz i don't like how it reports my GPU subvendor as NVIDIA instead of GALAX
> 
> is that alright ?



When I flashed that BIOS my subvendor was listed as Galax. Now it says EVGA as I'm on the 338W EVGA XC, so I don't think it should say NVIDIA. Which card do you have?


----------



## VPII

J7SC said:


> Ahem ? Nope to both...btw, my comment was a response to VPII's reported problem
> 
> 
> 
> Both my cards have Micron and both do 1040+ on VRAM w/o crashing though I still have to test for the most *efficient* setting rather than just outright *max* before lock-up etc.


Hello my friend..... The card works like a charm in my other Windows setup. I think I may have buggered it up when I disabled it before flashing. I'll try again with my other setup to see if I can get into safe mode and make the required changes. I see it was flashed back to the stock BIOS, so it is fine now. I don't really want to flash it again. I'll test it in this setup with the OS and see that all is still well.


----------



## SoldierRBT

Hi,

I got an RTX 2080 Ti FE with the max power limit (123%) and +175 on the core; it does 2040MHz at 1.006V. Is this considered good? Would it be able to hit 2100MHz stable with the Galax BIOS (380W)? According to MSI Afterburner the max core registered was 2175MHz, but I've never seen that number while gaming.


----------



## VPII

J7SC said:


> Ahem ? Nope to both...btw, my comment was a response to VPII's reported problem
> 
> 
> 
> Both my cards have Micron and both do 1040+ on VRAM w/o crashing though I still have to test for the most *efficient* setting rather than just outright *max* before lock-up etc.


Oops, the 417.35 driver install resulted in the same.... now I'm left without the start-up options in the advanced menu to Shift-boot into safe mode. CMD won't open as admin, so maybe a reinstall. Windows 7 is running fine with the card. I reflashed again and all is good.

Update: Windows 10 reinstalled on the original drive. Will test now with the 417.58 driver to see if it's still an issue; if so I'll reinstall again.... just hope Windows 10 doesn't give me activation issues. It worked fine now; I know the 416 drivers work without an issue. No G-Sync monitor, but you never know.


----------



## ducky083

Hi, I have Samsung (instead of Micron) memory on my ZOTAC 2080 Ti Triple Fan. Are you sure I can flash the Galax BIOS on it without problems?


----------



## TK421

ducky083 said:


> Hi, I have Samsung (instead of Micron) memory on my ZOTAC 2080 Ti Triple Fan. Are you sure I can flash the Galax BIOS on it without problems?



Is the Samsung card better than the Micron one in terms of durability? I heard the memory chips that broke on early batches of the 2080 Ti were the Micron ones, and that Samsung was used on the newer batches.




Wondering if there's a way I could tell the difference between the Samsung and Micron cards? Going to buy a Founders from Best Buy and dunno if they use a new or old batch of cards.


----------



## jura11

kot0005 said:


> Played a lot with my GPU; the core seems to be a dud. Memory is insane though, I can hit +1400MHz no problem. So my GPU can do 2055 at 1.031v, but it needs 1.081v for 2085 and 1.093v at 2100MHz...
> 
> I just ended up undervolting it to 2055 at 1.031v. This is for games btw.


Hi there 

You are not alone, my GPU is the same, the core is a dud like yours: 2055MHz at 1.05v, 2085-2088MHz at 1.08v and 2100MHz at 1.093v. Memory can do +1100MHz easily; beyond that I didn't try.

Will do a few tests: remount the waterblock and replace the EK thermal pads with Thermal Grizzly Minus Pads, and see if that makes any difference. If not, I also have an EVGA RTX 2080 Ti XC at home and will test whether that card OCs better than mine.

I would be happy with the OC if these clocks were stable in rendering, mainly in Iray or Octane, but they are unstable there and I usually get a TDR or hard lock.

Hope this helps 

Thanks, Jura


----------



## J7SC

VPII said:


> Hello my friend..... Card work like a charm in my other windows setup. I think I may have buggered it up when I disabled it before flashing. I'll try again with my other setup to see if I can get in safe mode to see if I can make the required changes. I see it was flashed back to the stock bios so it is fine now. Don't really want to flash it again. I'll test it in this setup with the OS and see that all is still well.






VPII said:


> Oops 417.35 driver install resulted in the same.... now I sit without the start-up options in advance menu for shift safe mode try to boot. Opening CMD as admin does not open, so maybe reinstall. WIndows 7 running fine with the card. I reflashed again and all good.
> 
> Update Windows 10 reinstalled on original drive. Will test now with 417.58 driver to see if still an issue, if so I'll reinstall again.... just hope windows 10 don't give me activation issues. Worked fine now, I know that 416 drivers work without an issue. No G-sync monitor, but you never know.



Hi VPII - ...sounds like progress, and good to hear that your card is OK :thumbsup: . Windows (both 7 and 10) has an entry for each GPU *per BIOS*, including a newly ID-flashed BIOS on the same card... so a card with a triple BIOS will have three registry entries, and if you run Tri-SLI, it's nine entries. Sometimes it no longer distinguishes correctly between them, and you have a choice of either trying to hack the registry to death (searching for anything related to 'NVidia') or doing a fresh Windows install. Subsequent activation issues can be a hassle, but probably the lesser of two evils... anyway, at least you know your card is OK, and the Galax 380W is probably your best choice (w/ the extra cooling, as it has been reported to run a bit toastier..). Also, you might want to uninstall MSI AB, including saved profiles!
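The registry arithmetic above is simple multiplication; a tiny sketch of the point (`registry_entries` is a made-up helper, not a registry tool):

```python
# Windows keeps one display-adapter registry entry per distinct GPU/BIOS
# ID, so extra BIOSes and extra cards multiply the entry count quickly.
def registry_entries(cards, bioses_per_card):
    return cards * bioses_per_card

print(registry_entries(1, 3))  # one triple-BIOS card -> 3 entries
print(registry_entries(3, 3))  # Tri-SLI of triple-BIOS cards -> 9 entries
```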


----------



## J7SC

TK421 said:


> Is a the samsung card better than the micron in terms of durability? I heard that the memory chips on early batches of 2080ti that broke is the micron one then samsung was used to replace the newer batches.
> 
> Wondering if there's a way I could tell the difference between the samsung and micron cards? Going to buy a founders from best buy and dunno if they use new or old batch of card.



According to earlier posts in this thread, apparently some of *both* the Micron- and Samsung-equipped earlier cards got bricked  

As to buying a 'new' FE card from Best Buy, given the shortage in supply and related price increases, it may not have been on the shelf that long. Besides, a bit of sleuthing should tell you what serial numbers they're at now, and you can compare that to the one(s) you're thinking of picking up.


----------



## Thoth420

Mine is Micron. Gaming X Trio from MSI


----------



## VPII

J7SC said:


> Hi VPII - ..sounds like progress, and good to hear that your card is ok :thumbsups . Windows (both 7 and 10) have an entry for each GPU PER* Bios*, including a new ID flashed Bios on the same card...so a card with triple Bios will have three registry entries and if you run Tri-SLI, it's nine entries. Sometimes, it does not distinguish correctly between them anymore and you have a choice of either trying to hack the registry to death (searching for anything related to 'NVidia') or do a fresh WIN install. Subsequent activation issues can be a hassle, but probably the lesser of two evils... anyway, at least you know your card is ok and the Galax 380w is probably your best choice (w/ the extra cooling as it has been reported to run a bit toastier..). Also, you might want to uninstall MSI AB, including saved profiles !


The 417.58 driver is a go.... no problems thus far, all good. So thanks.... Windows 10 reactivated and all good.... now just for a ton of downloads, and with the speeds here in SA.... a few days at least.


----------



## J7SC

VPII said:


> 417.58 Driver is a go.... no problems thus far all good. So thanks.... Windows 10 re activated and all good.... now just for a ton of downloads and with the speed here is SA.... a few days at least.



... :thumb: 

PS... I remember downloading on a 56.6kbps modem  hopefully your connection is faster than that. Then again, video drivers were about 20MB tops in those days, not 500MB.


----------



## Thoth420

Ugh that's brutal... and then some of us have more bandwidth than we could ever use.


----------



## VPII

J7SC said:


> ... :thumb:
> 
> PS...I remember downloading on a 56.6 kbs modem  hopefully, your connection is faster than that. Then again, vid drivers were about 20mb tops in those days, not 500


At least I have a 10Mb line, so a little better than 56.6kbps, but still a mission downloading stuff that is 30 to 50GB. But for now I won't download games; I just want to first see that all runs well.


----------



## Thoth420

Does anyone know if the Mystic Light software from MSI actually supports the 2080Ti? It is not on the list nor is any 20xx series.


----------



## kot0005

Thoth420 said:


> Does anyone know if the Mystic Light software from MSI actually supports the 2080Ti? It is not on the list nor is any 20xx series.


Yes, it works with the Gaming X Trio and Seahawk EK. Didn't work when I used the Galax (reference card) BIOS though.


----------



## Scotty99

Does anyone in here know if newegg's ebay store charges tax?


----------



## CptSpig

Scotty99 said:


> Does anyone in here know if newegg's ebay store charges tax?


Yes, see spoiler



Spoiler



Alabama (AL)*
4.000 %

California (CA)*
7.250 %

Connecticut (CT)*
6.350 %

District of Columbia (DC)*
6.000 %

Georgia (GA)*
8.900 %

Hawaii (HI)*
4.000 %

Illinois (IL)*
6.250 %

Indiana (IN)*
7.000 %

Iowa (IA)*
6.000 %

Kentucky (KY)*
6.000 %

Louisiana (LA)*
9.950 %

Maine (ME)*
5.500 %

Maryland (MD)*
6.000 %

Massachusetts (MA)*
6.250 %

Michigan (MI)*
6.000 %

Minnesota (MN)*
6.875 %

Mississippi (MS)*
7.000 %

Nebraska (NE)*
7.000 %

Nevada (NV)*
4.600 %

New Jersey (NJ)*
6.625 %

North Carolina (NC)*
4.750 %

North Dakota (ND)*
5.000 %

Pennsylvania (PA)*
6.000 %

South Carolina (SC)*
6.000 %

South Dakota (SD)*
6.500 %

Tennessee (TN)*
7.000 %

Utah (UT)*
7.600 %

Vermont (VT)*
6.000 %

Washington (WA)*
6.500 %

West Virginia (WV)*
7.000 % 

Wisconsin (WI)*
5.000 %


----------



## Scotty99

CptSpig said:


> Yes, see spoiler


Thanks for the link.

I guess no online sales tax is officially dead now that B&H started charging tax too, drats lol


----------



## J7SC

Thoth420 said:


> Ugh that's brutal... and then some of us have more bandwidth than we could ever use.





VPII said:


> At least I have a 10mb line so a little better that 56.6 kbs, but still a mission downloading stuff that is 30 to 50gb. But for now I won't download games just want to first see that all runs well.



Yeah, 56.6kbps was the 'bad old days' (though back then it was twice as fast as the 28.8 standard we had in school, and we thought it was really cool). I've been on a fibre backbone 'throttled' to 100Mbps for a long time now, but the other day, when looking for water-cooling bits in storage for the 2080 TIs, I came across the box with the modems w/ fax and, wait for it :drum: , Creative Soundblaster sound cards... :lmaosmile


----------



## Thoth420

kot0005 said:


> Yes? it works with Gaming X trio and seahawk EK. Didnt work when I used Galax(reference card) bios tho.


Thanks for the response! I don't plan on BIOS swapping as I am not as extreme as most of you guys so that's good news. The LEDs on this card are very high quality.


----------



## BudgieSmuggler

ducky083 said:


> Hi, I have Samsung (instead of Micron) memory on my ZOTAC 2080 Ti Triple Fan. Are you sure I can flash the Galax BIOS on it without problems?



I have the Zotac AMP 2080 Ti (with Micron memory) and I flashed it with the Galax BIOS. Saw no benefit from doing it. Went back to the EVGA 338W BIOS (slightly higher than the Zotac default power limit), where I've found my best stable overclock. On air, 2100MHz and +1000 memory will do fine for me.


----------



## NBrock

This may be a silly question, but I haven't seen any specific answer. I have a Founders card with the Samsung memory. Are there different 380 watt bios for the cards with Samsung memory?


----------



## MrTOOSHORT

NBrock said:


> This may be a silly question, but I haven't seen any specific answer. I have a Founders card with the Samsung memory. Are there different 380 watt bios for the cards with Samsung memory?


Galax bios works for micron and samsung. My FE is samsung, using Galax 380w bios.


----------



## SoldierRBT

MrTOOSHORT said:


> NBrock said:
> 
> 
> 
> This may be a silly question, but I haven't seen any specific answer. I have a Founders card with the Samsung memory. Are there different 380 watt bios for the cards with Samsung memory?
> 
> 
> 
> Galax bios works for micron and samsung. My FE is samsung, using Galax 380w bios.

How many MHz did your FE gain from the Galax 380W BIOS? I got an FE that does 2040MHz at 1.006V with the stock BIOS. The max core boost registered in MSI Afterburner is 2175MHz.


----------



## MrTOOSHORT

Nothing, just keeps the clocks steady at high clocks.


----------



## NBrock

My FE is under water and seems to run really well since my temps are so low (it's in the basement so ambient especially this time of year is low). I figured since it seems to be keeping really cool that it would benefit from the higher power limit.


----------



## J7SC

NBrock said:


> This may be a silly question, but I haven't seen any specific answer. I have a Founders card with the Samsung memory. Are there different 380 watt bios for the cards with Samsung memory?



...not a silly question at all




MrTOOSHORT said:


> Galax bios works for micron and samsung. My FE is samsung, using Galax 380w bios.



...and apparently even for Hynix, though I have yet to see / read about a 2080 TI w/ Hynix GDDR6


----------



## TK421

J7SC said:


> According to earlier posts in this thread, apparently *both* some of the Micron and Samsung-equipped earlier cards got bricked
> 
> As to buying a ''new'' FE card from Best Buy, given the shortage in supply and related price increases, it may not have been on the shelf that long. Besides, a bit of sleuthing should tell you what serial numbers they're at now, and you can compare that to the one(s) you think of picking up.





Well, I'm having them delivered, so I can't know the serial numbers until I receive them.






Is there a particular serial range I should look out for?


----------



## Krzych04650

gridironcpj said:


> Apparently there are no plans to release a BIOS with a higher power limit for the Strix.


Isn't it exactly the same PCB as Asus Matrix? This one should come with much higher power limit and should be perfectly flashable to the Strix.

Though I have to say what Asus did with 2080 and 2080 Ti is a bit crazy. 2080 Strix has 307W power limit and 2080 Ti Strix has 325W. How does that even make sense.


----------



## Hanks552

Has anybody done the shunt mod? I want to know how many ohms it needs to be. I'm using the 380W Galax BIOS on a reference PCB; if somebody could tell me exactly the ohms and where I can buy the shunts. And which ones should I do? I know there are 3, but 1 is for the PCIe slot, so I'm wondering: should I do all 3, or skip the PCIe one?


----------



## kx11

BudgieSmuggler said:


> When I flashed that BIOS my subvendor was listed as Galax. Now it says EVGA as I'm on the 338W EVGA XC, so I don't think it should say NVIDIA. Which card do you have?





Very interesting. My GPU is a GALAX HOF and I flashed it with the new official BIOS; afterwards I took that screenshot. I didn't use GPU-Z prior to that screenshot, so hopefully it's not busted or anything.


----------



## J7SC

TK421 said:


> Well I'm having them delivered, so I can't know of the serial number until I receive them.
> 
> Is there a particular serial range I should look out for?


I wouldn't worry about it, as you already have your silicon lottery tickets now, and you might get very lucky :thumb: My point was only to maybe avoid very early serial numbers... on my 2x Aorus Xtreme WB, the serial numbers end in the 700 range for the last three digits.




Hanks552 said:


> Has anybody done the shunt mod? I want to know how many ohms it needs to be. I'm using the 380W Galax BIOS on a reference PCB; if somebody could tell me exactly the ohms and where I can buy the shunts. And which ones should I do? I know there are 3, but 1 is for the PCIe slot, so I'm wondering: should I do all 3, or skip the PCIe one?


Check out the Titan RTX thread... they have tons of shunt mod info, including hard mods with resistor purchase links, and also metal pastes (rather than liquid metal), again with purchase links... CallsignVega and others have posted a lot of info on this. Also, most of the posts I have seen suggest avoiding the PCIe shunt mod.


----------



## jelome1989

Max safe voltage: https://youtu.be/GjpQwqtvCZc?t=5m17s

Not sure if this has been discussed before, but what does he mean by going from 5-year reliability to 1-year reliability? If 1.06v is the normal voltage, does that mean 1.06v has 5-year reliability and 1.093v (3 ticks up from 1.06v) has 1-year reliability?


----------



## TK421

J7SC said:


> I wouldn't worry about it as you already have your silicon lottery tickets now, and you might get very lucky :thumb: My point was only to may be avoid very early serial numbers..on my 2x Aorus Xtreme WB, the serial numbers end the 700 range for the last three digits.
> 
> 
> 
> 
> Check out the Titan RTX thread...they have tons of shunt mod info, including hard mods with resistor purchase links, and also metal pastes (rather than liquid metal) again with purchase links...CallsignVega and others have posted a lot of info on this. Also, most of the posts I have seen suggest to avoid the PCIe shunt mod




So I should look out for serial numbers that end in 7xx for the last 3 digits, right? If it's above that then I should be fine?






What kind of clocks/temps should I expect from 26c ambient on an FE cooler? Will the modded bios provide any benefits from the higher TDP limits?


----------



## boli

*2080 Ti water block comparison*
I compiled the temperatures from Igor's awesome tests at tomshw.de, and put them into a chart for your convenience.

For additional info, in particular to see the location of each temperature on the PCB, please go read the reviews:

EK Vector
Phanteks Glacier
Watercool Heatkiller IV
Aqua Computer kryographics NEXT


----------



## Duskfall

Hey guys, just got my Asus ROG Strix and I wanted to flash the OC BIOS, but mine seems to be newer/different than the one listed on TechPowerUp:
https://www.techpowerup.com/vgabios/?manufacturer=Asus&model=RTX+2080+Ti
Mine is like the screenshot. Does anyone know if I'll be fine flashing the "old" OC one?


----------



## shiokarai

boli said:


> *2080 Ti water block comparison*
> I compiled the temperatures from Igor's awesome tests at tomshw.de, and put them into a chart for your convenience.
> 
> For additional info, in particular to see the location of each temperature on the PCB, please go read the reviews:
> 
> EK Vector
> Phanteks Glacier
> Watercool Heatkiller IV
> Aqua Computer kryographics NEXT
> 
> View attachment 246654


GREAT INFO!!!!! Thanks!! I've read his reviews (machine translated, though...) and watched the YouTube thermal videos, and I must say... wow, that's how water block testing should be done. Really valuable information, much appreciated. Now waiting for my RTX 2080 Ti Aquacomputer block... the shop says 30 days, let's see...


----------



## TK421

Does anyone know if prebuilts from boutique builders (i.e. ibuy, Maingear, Origin) come with an -A or non-A (280W-locked) GPU? Some of their 2080 Ti listings don't mention the brand or OEM, even though you can upgrade to other AIB cards.



Or maybe they just use FE cards?


----------



## krizby

Duskfall said:


> Hey guys, just got my Asus ROG Strix and I wanted to flash the OC BIOS, but mine seems to be newer/different than the one listed on TechPowerUp:
> https://www.techpowerup.com/vgabios/?manufacturer=Asus&model=RTX+2080+Ti
> Mine is like the screenshot. Does anyone know if I'll be fine flashing the "old" OC one?


Sadly your 2080 Ti is a non-A chip and cannot be flashed with an A-chip BIOS atm (board ID 1E04, vs. the A chips which use board ID 1E07). If you want more performance (7% at the cost of 35% more power consumption) you should exchange it for an OC version of the 2080 Ti.
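Worth noting what that trade-off does to efficiency; a quick sketch using the figures quoted above (`perf_per_watt_change` is a made-up helper):

```python
# Relative change in performance-per-watt when performance rises by
# perf_gain while power rises by power_gain (both as fractions).
def perf_per_watt_change(perf_gain, power_gain):
    return (1 + perf_gain) / (1 + power_gain) - 1

# +7% performance for +35% power is roughly a 21% efficiency loss.
print(round(perf_per_watt_change(0.07, 0.35), 3))  # -0.207
```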


----------



## VPII

Okay, I am one of the victims.... my card is now stuck at 1350MHz throughout and no overclock is possible; it would freeze up Windows, stutter and just crash. I even took off the NZXT Kraken G12 cooling and refit the card's stock cooler to take it back to the shop tomorrow, but I figured let's try my Windows 7 hard drive... Well, what do you know, everything works perfectly. Now this means the card is working, but I'll yet again need to reinstall Windows 10 for the second time in 2 days and hopefully not have an issue with activation. But I figured I might as well get some advice from here?


----------



## J7SC

TK421 said:


> So I should look out for serial numbers that end on 7xx on the last 3 digits right? If it's above then I should be fine?
> What kind of clocks/temps should I expect from 26c ambient on an FE cooler? Will the modded bios provide any benefits from the higher TDP limits?


...really depends on the vendor (re: their own serial number sequence). My example was specific to Gigabyte Aorus XTR WB serial numbers. Generally speaking, the higher the number in the last 3 or 4 digits per vendor, the better. 



boli said:


> *2080 Ti water block comparison*
> I compiled the temperatures from Igor's awesome tests at tomshw.de, and put them into a chart for your convenience.
> 
> For additional info, in particular to see the location of each temperature on the PCB, please go read the reviews:
> 
> EK Vector
> Phanteks Glacier
> Watercool Heatkiller IV
> Aqua Computer kryographics NEXT
> 
> View attachment 246654



Really good stuff :thumb: now we can finally retire the 1080 Ti WB pseudo-data...

...EK and Phanteks don't look quite as good with their 2080 Ti blocks. Hopefully Igor @ TH will add Bitspower, Bykski and other suppliers' tests to this as well.


----------



## VPII

J7SC said:


> ..really depends on the vendors (re. their own serial number sequence). My example was specific to Gigabyte Aorus XTR WB serial numbers. Generally speaking, the higher the number in the last 3 or 4 digits per vendor, the better.
> 
> 
> 
> 
> Really good stuff :thumb: now we can finally retire the 1080 TI WB pseudo data...
> 
> ...EKB and Phantek on their 2080 TI blocks don't look quite as good. Hopefully, Igor @ TH will add Bitspower, Bykski and other supplier tests to this as well.


J7SC, check my post above please... I'd like to know what you think.


----------



## boli

shiokarai said:


> I've read (machine translated though...) his reviews and watched youtube thermal videos and I must say.. wow, that's how water blocks testing should be done. Really valuable information, much appreciated.


In their forums he guessed that the active backplate is _probably_ rather pointless, for heat removal anyway, but does look nice. Personally I ordered the passive backplate with the water block, mostly for protection and looks. I'll be replacing my EK Vector, as I'm keen to try more Aqua Computer gear. Recently I added an Aquaero 6 LT plus flow and water temperature sensors to my build, and was rather impressed with their stuff.


----------



## J7SC

VPII said:


> Okay I am one of the victims.... I have my card now stuck at 1350mhz all through and no overclock possible, it would freeze up windows, stutter without any luck and just crash. I even took off the NZXT Kraken G12 with the cooling and refit the card stock cooler to take back to shop tomorrow but I figured lets try my windows 7 hard drive... Well what do you know, everything working perfectly. Now this means the card is working but I'll yet again need to reinstall windows 10 for a second time in 2 days and hopefully not have an issue with activation. But I figured, I might as well get some advice from here?



The stuck-at-1350MHz problem has cropped up several times, including in Gamers Nexus' review of bricked 2080 TIs sent in by their owners. It even happened with a new Titan RTX review sample. 

GN fixed the 2080 TI by flashing a *FRESH* BIOS file from the same vendor for the same card... try downloading a stock BIOS file for your card from the TechPowerUp VGA BIOS database.


----------



## shiokarai

boli said:


> In their forums he guessed that the active backplate is _probably_ rather pointless, for heat removal anyway, but does look nice. Personally I ordered the passive backplate with the water block, mostly for protection and looks. I'll be replacing my EK Vector, as I'm keen to try more Aqua Computer gear. Recently I added an Aquaero 6 LT plus flow and water temperature sensors to my build, and was rather impressed with their stuff.


I went with active backplate - found some old(ish) review of 1080 block by VSG with the same (I think) active backplate and VRM temps were seriously lower so... not going to hurt I guess  Here it is: https://thermalbench.com/2017/04/26/aqua-computer-kryographics-pascal-1080/5/


----------



## VPII

J7SC said:


> The stuck at '1350 MHz' problem has crept up several times, including at Gamers Nexus' review of bricked 2080 TIs sent in by their owners. It even happened with a new Titan RTX review sample.
> 
> GN fixed the 2080 TI by flashing a *FRESH* bios file from the same vendor for the same card...try downloading a stock bios file for your card from TechPowerup VGA Bios database.


They do not have it.... only the KFA one.


----------



## boli

shiokarai said:


> I went with active backplate - found some old(ish) review of 1080 block by VSG with the same (I think) active backplate and VRM temps were seriously lower so... not going to hurt I guess  Here it is: https://thermalbench.com/2017/04/26/aqua-computer-kryographics-pascal-1080/5/


Cheers mate. Damn, now I kinda regret having ordered only the passive backplate. 
Though its heat pipe might interfere with my new/current flow sensor arrangement. :thinking:


----------



## Duskfall

krizby said:


> Sadly your 2080Ti is a non-A chip and cannot be flashed with A chip bios atm (board ID 1E04 vs A chip use board ID 1E07). If you want more performance (7% at the cost of 35% more power consumption) you should exchange for an OC version of 2080TI.


I was told that I can, since it's the same PCB though...


----------



## VPII

J7SC said:


> The stuck at '1350 MHz' problem has crept up several times, including at Gamers Nexus' review of bricked 2080 TIs sent in by their owners. It even happened with a new Titan RTX review sample.
> 
> GN fixed the 2080 TI by flashing a *FRESH* bios file from the same vendor for the same card...try downloading a stock bios file for your card from TechPowerup VGA Bios database.


The KFA BIOS or my saved Galax BIOS is a no-go, but interestingly the Galax HOF 450W TDP BIOS works now in Windows 10.


----------



## shiokarai

boli said:


> Cheers mate. Damn, now I kinda regret having ordered only the passive backplate.
> Though its heat pipe might interfere with my new/current flow sensor arrangement. :thinking:


We'll see if there's any perceptible difference  I've kicked out the flow sensor (same as yours), as the constant ticking/clicking noise was killing me... so I killed the sensor


----------



## dante`afk

while the AC waterblock is good, availability is a pain in the ass.


----------



## Thoth420

Can I be added to the Owners list please? MSI Gaming X Trio with Micron VRAM

Not the best quality pics but there she is. Waiting on a sleeved 6pin from cablemod then gonna clean up the cable mess.


----------



## zhrooms

Duskfall said:


> I was told that I can since its the same PCB though...


 
The PCB here is irrelevant; the GPU itself is not the same as on the other cards, sadly.

There is nothing you can do; you basically got screwed: you paid a premium for a cooler that allows for a high overclock, only to find out the card can barely overclock. You would have been far better off buying a cheaper card with an *A* GPU; even if its cooler were cheap too, it would still perform 6-12% better. My advice is to return it.

Read the FAQ at the bottom of the *original post* for more details. And also check out the list of all the better cards you can get instead.


----------



## J7SC

VPII said:


> KFA or my saved Galax bios is a no go but *interestingly the Galax HOF 450 watt tdp bios works now in windows 10*.



Sorry for the delay - biz lunch and all that. 

...Have GPU-Z and/or HWiNFO open to confirm the wattage, i.e. when running Superposition 4K/8K. Is it a real 450W? On a 2x 8-pin card? I would be pleasantly surprised, but even if you can 'just' hit 380W, I would call that a win! 

In any event, there's a related question: how far are you pushing the DDR on your CPU OC? If you're too far beyond spec, you start to get write errors (i.e. when flashing a BIOS). Not saying that's the case here, just wondering about flash reliability issues.


----------



## jelome1989

After a few days of testing, my Asus Strix 2080 Ti CoD edition is stable at 2100mhz 1.05v +1000mhz memory. Also stable at 2055mhz at 1.037v +1000mhz memory. This is on air with temps at 60c+. This needs extra effort in setting the curve since it downclocks by increments of 15mhz as temps go higher or as it reaches power limit

I plan to watercool the card after a few more weeks of testing (or before the Z390 Apex becomes available) and hope it allows higher clocks at the same voltages. I haven't seen that many Strixes fail (only two, one on Facebook and one on Reddit), so I hope mine is a good one.
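As an aside, those 15 MHz steps are easy to reason about numerically. A minimal sketch (my own illustration of the behavior described above, not any official NVIDIA formula): the effective clock just drops whole 15 MHz bins as temperature thresholds are crossed or the power limit kicks in.

```python
# Sketch of the 15 MHz downclock bins described above (illustrative only).
# A curve point set at 2100 MHz lands on 2085 / 2070 / 2055 ... as the
# card sheds boost bins for temperature or power limit, never on
# arbitrary in-between values.

BIN_MHZ = 15  # size of one boost step

def effective_clock(curve_clock_mhz: int, bins_dropped: int) -> int:
    """Clock after the card sheds `bins_dropped` boost bins."""
    return curve_clock_mhz - BIN_MHZ * bins_dropped

if __name__ == "__main__":
    for dropped in range(4):
        print(effective_clock(2100, dropped))  # 2100, 2085, 2070, 2055
```

So a curve tuned for 2100 MHz at low temps is really a ladder of 15 MHz steps, which is why the stable clocks people quote tend to be multiples of 15.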


----------



## boli

shiokarai said:


> We'll see if there's any perceptible difference  I've kicked out the flow sensor (same as yours) as constant ticking clicking noise was killing me... so I killed the sensor


Hmm, they recommend a horizontal installation with the alu plate/connector at the top to avoid the clicking noise; that's why I installed mine the way I did. So far no clicking noise on mine.


----------



## iamjanco

boli said:


> In their forums he guessed that the active backplate is _probably_ rather pointless, for heat removal anyway, but does look nice. Personally I ordered the passive backplate with the water block, mostly for protection and looks. I'll be replacing my EK Vector, as I'm keen to try more Aqua Computer gear. Recently I added an Aquaero 6 LT plus flow and water temperature sensors to my build, and was rather impressed with their stuff.


Actually, if the active backplate is anything like the one *AC did for the 1080*, it might not be as pointless as Igor hinted at. Check out the temps on the VRM using it at *this link*.

The only drawback currently is availability. I've had a block and backplate (as well as a few other parts) on backorder from AC since December 30th, and they let me know today that it looks like at least another four weeks before my package will ship. @Shoggy. That's okay though because it gives me time to prep everything else. Once I'm set up, I plan on not only checking the thermals using the backplate, but will also be doing restriction testing using a *Dwyer 490-A1* and a few other tools I have at my disposal.

Not sure why Igor hasn't been including restriction testing in his reviews, but it might not be because of cost (a 490A Series manometer can be expensive, but cheaper manometers are available). He's got some other pretty elaborate test equipment on his bench, from what I could tell from the images he shared.

Oops (added edit), just noticed the header in the English version of the installation guide for the 2080ti block reads "NEXT for RTX 1080 Ti, 01/2019" (the German version appears to be correct). AC might want to fix that. 

Btw, nice work compiling Igor's figures.


----------



## NBrock

Got the 380 watt bios flashed to my FE without a hitch. No more throttling in Superposition... so close to cracking 14k in 4k optimized.


----------



## VPII

J7SC said:


> Sorry for the delay - biz lunch and all that.
> 
> ...Have GPUz and/or HWinfo open to confirm wattage, i.e. when running Superposition 4k / 8k. Is it a real 450w ? On a 2x EPS 8 pin card ? I would be pleasantly surprised, but even if you can 'just' hit 380w, I would call that a win !
> 
> In any event, there's a related question: How far are you pushing the DDR on your CPU oc ? If you're too far beyond spec, you start to get write-errors (i.e when flashing Bios). Not saying it is the case here, just wondering about flash reliability issues.


Hi there... it's morning this side, so I'm at work. Will refit the watercooling and test it.

You may have a point, seeing that my CPU is an AMD Ryzen and my G.Skill Flare X 3200 CL14 was running 3466 CL14 when I first flashed it. Will go all stock later today and test first with the 450W BIOS to see the wattage pull in SP 8K, then I'll flash the original BIOS again to see if it works.

Sent from my SM-G950F using Tapatalk


----------



## TK421

NBrock said:


> Got the 380 watt bios flashed to my FE without a hitch. No more throttling in Superposition... so close to cracking 14k in 4k optimized.


What's your temp with extended gaming loads and your ambient temp?


Also what overclock did you achieve with the 380w bios?


----------



## Zer0G

Hi, got my Palit 2080 Ti GamingPro OC yesterday. Runs not bad at all.
With the stock BIOS (260W + 127% slider = ~330W power limit) it does 1980/7820 MHz.
Temps on air <70°C.

A Heatkiller IV copper is on the way. 

Can I flash the Galax 380W Bios (also reference) to my palit card?
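For reference, the power-limit arithmetic in the post above is just the slider percentage applied to the BIOS default TDP. A quick sketch (numbers from the post; the function name is my own):

```python
# The power-limit slider scales the BIOS default TDP.
# A 260W default at the 127% maximum slider gives ~330W max board power.

def max_board_power(default_tdp_w: float, slider_percent: float) -> float:
    """Maximum board power in watts for a given slider setting."""
    return default_tdp_w * slider_percent / 100.0

print(round(max_board_power(260, 127)))  # -> 330
```

This is why a higher-limit BIOS (e.g. the Galax 380W) matters: the slider can only scale whatever default the BIOS ships with.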


----------



## LayZ_Pz

jelome1989 said:


> After a few days of testing, my Asus Strix 2080 Ti CoD edition is stable at 2100mhz 1.05v +1000mhz memory. Also stable at 2055mhz at 1.037v +1000mhz memory. This is on air with temps at 60c+. This needs extra effort in setting the curve since it downclocks by increments of 15mhz as temps go higher or as it reaches power limit
> 
> I plan to watercool the card after a few more weeks of testing (or before the Z390 Apex becomes available) and hope it allows higher clocks at the same voltages. I haven't seen that many Strixes (only 2 - one in FaceBook and one in Reddit) fail so I hope mine is a good one.


What did you use to check stability? I have a stable OC in many games and benchmarks, but The Witcher 3 really kills it every time.


----------



## boli

LayZ_Pz said:


> What did you use to check stability? I have a stable OC in many games and benchmarks, but The Witcher 3 really kills it every time.


I'm not who you asked, but here goes anyway: Everyone has their own "killer".  Mine are FireStrike Ultra (graphics test 2), Time Spy Extreme (graphics test 2), Shadow of the Tomb Raider (the very last benchmark) and just playing Battlefield V.



Zer0G said:


> Can I flash the Galax 380W Bios (also reference) to my palit card?


Yes, if it's still an A chip. Please read the excellent Q&A in the very first post.


----------



## Zer0G

boli said:


> Yes, if it's still an A chip. Please read the excellent Q&A in the very first post.


Thanks, yes, it works fine. But during my first test runs I noticed the Vcore doesn't go over 1.043V anymore.
Power consumption <340W. Temps <75°C.

More tests to come.


----------



## Diablix6

Krzych04650 said:


> You can use whichever BIOS for 2x8pin cards, so not the from Trio. Use Galax 380W, I am using it with Sea Hawk X and getting very good results. Set your fan according to temperatures.
> 
> 
> 
> Afterburner is almost always beta. They are adding new features and instead of calling it a new version they just call it a new build and call it beta. But it doesn't really matter how it is called, it is never causing issues.



Cool, thanks mate for the info. What settings are you using with your Sea Hawk X (power delivery, fan, GPU core offset)? Are you also changing voltage?
With the original ROM I get 8000 MHz memory and +130 GPU boost from OC Scanner, with a peak clock of 2100 MHz and a long-term stable clock of 2025-2040 MHz.


----------



## VPII

J7SC said:


> Sorry for the delay - biz lunch and all that.
> 
> ...Have GPUz and/or HWinfo open to confirm wattage, i.e. when running Superposition 4k / 8k. Is it a real 450w ? On a 2x EPS 8 pin card ? I would be pleasantly surprised, but even if you can 'just' hit 380w, I would call that a win !
> 
> In any event, there's a related question: How far are you pushing the DDR on your CPU oc ? If you're too far beyond spec, you start to get write-errors (i.e when flashing Bios). Not saying it is the case here, just wondering about flash reliability issues.


Hi there... I tested with the 450W BIOS and the wattage pulled was a lot more than usual in a normal Time Spy run, something like 40 to 60W more, but my clock speed at stock was usually 1965 and it was 1995 with this BIOS. I then went all stock on CPU and memory, flashed the normal stock OC BIOS again, and it worked; I'm back on the stock BIOS. To get back to the added wattage: on the stock BIOS I can do 2130 MHz and it will barely touch 380W, while at the stock 1995 the 450W BIOS was already pulling 340W, so I'm sorry, but the 380W stock BIOS is best.

On a side note, I have an infrared thermometer on the way so will test mem temps when I get it.


----------



## Krzych04650

Diablix6 said:


> Cool, thanks mate for the info. What settings are you using with your Sea Hawk X (Power Delivery, Fan, CPU core Offset)? Are you also changing voltage?
> With original ROM I have 8000 MHz Memory and +130 GPU Boost from OC Scanner, with Peaks clock of 2100 MHz and long-time stable clock 2040-2025 MHz.


I have posted a short video with stress test, you also have all the settings and VF at the beginning: 






I found 1.037V to be a sweet spot between performance/OC potential and avoiding the power limit. Long-term stable clock is 2100-2115.

Though voltage goes up to 1.093V if you set the voltage offset on the main AB screen to 100 before editing the curve. So if you have games that don't consume a lot of power, you can go higher; it's case dependent. (For example, AC: Origins utilizes the GPU in an unusual way that makes it draw much less power even in 100% GPU-bound scenarios, which makes it a very good low-power benchmark for determining max OC potential without hitting the power limit.)

Fans are 100% under load because I have my PC outside my room so I don't care about noise and want maximum cooling performance. 

I am also using 8000 memory; it is stable even above 8100 and crashes only at 8200, but there is no point in risking crashes for 100 MHz more memory clock, so I just keep it at 8000. A nice round 2100/8000.


----------



## EKJake

boli said:


> *2080 Ti water block comparison*
> I compiled the temperatures from Igor's awesome tests at tomshw.de, and put them into a chart for your convenience.
> 
> For additional info, in particular to see the location of each temperature on the PCB, please go read the reviews:
> 
> EK Vector
> Phanteks Glacier
> Watercool Heatkiller IV
> Aqua Computer kryographics NEXT
> 
> View attachment 246654


Right after Igor finished his review and gave us feedback, we updated our instruction manual and started including additional thermal pads to place on the coils. Internal tests have indicated solid improvements in PCB temps.


----------



## GAN77

EKJake said:


> Right after Igor finished his review and gave us feedback, we updated our instruction manual and have started including additional thermal pads to place on the coils. Internal tests have indicated proper improvements in the PCB temps


What's the thermal conductivity of those pads?


----------



## NBrock

TK421 said:


> What's your temp with extended gaming loads and your ambient temp?
> 
> 
> Also what overclock did you achieve with the 380w bios?


Extended loads for [email protected] are 36°C; for gaming where the memory is utilized more, the highest I have seen is 40°C. Not sure what my ambient temps are, but they are pretty low since it is wintertime and the rig is in my basement. I have one 240 rad and one 360 rad with a 9900K in the same loop.

The Superposition benchmark would throttle to 2065 core; with the new BIOS it maintained 2145 throughout the whole run. I haven't tried going higher yet.
In the Valley benchmark at 2560x1440 it runs at a higher clock, 2315 core, since the load isn't very high.


----------



## Jpmboy

NBrock said:


> Extended loads for [email protected] are 36*c for gaming where the memory is utilized more the highest I have seen is 40*c. Not sure what my ambient temps are, but they are pretty low since it is wintertime and the rig is in my basement. I have one 240 rad and one 360 rad with a 9900k in the same loop.
> 
> SuperPosition benchmark would throttle to 2065 core, with the new BIOS it *maintained 2145 throughout the whole benchmark*. I haven't tried going higher yet.
> In Valley benchmark 2560x1440, it runs at a higher clock since the load isn't very high. It runs at 2315 core clock.


that's a pretty decent card! :thumb:


----------



## Pauliesss

Hey guys,

anyone here who owns the GAINWARD GeForce RTX 2080 Ti Phoenix GS? I am struggling to find proper reviews of this GPU.

Can you tell me how the temperatures and fan noise are, and perhaps compare its performance to other Tis?

Thanks!


----------



## NBrock

Jpmboy said:


> that's a pretty decent card! :thumb:


Thanks!

Just tried going higher without success; 2145 and +800 on memory seems to be its happy spot in Superposition.
Those same settings get me a 16,696 graphics score in Time Spy.


----------



## J7SC

VPII said:


> Hi there... I tested with the 450W BIOS and the wattage pulled was a lot more than usual in a normal Time Spy run, something like 40 to 60W more, but my clock speed at stock was usually 1965 and it was 1995 with this BIOS. I then went all stock on CPU and memory, flashed the normal stock OC BIOS again, and it worked; I'm back on the stock BIOS. To get back to the added wattage: on the stock BIOS I can do 2130 MHz and it will barely touch 380W, while at the stock 1995 the 450W BIOS was already pulling 340W, so I'm sorry, but the 380W stock BIOS is best.
> 
> On a side note, I have an infrared thermometer on the way so will test mem temps when I get it.



Hello... glad to hear it, sounds like 'happy days are here again'! Your findings on the BIOS comparison are very interesting, and suggest once more that the Galax 380W is the go-to BIOS for those who want to flash a non-stock BIOS (i.e. those who don't already have a stock BIOS that ranges up to 360-380W). The infrared thermometer is a good investment, and not only for GPUs; any hot spots you find, you can deal with by adjusting the fans on your open test bench.


----------



## mistershan

Hey guys. Just wanted to say that a problem I have been posting about has been fixed. I will warn all of you to STAY AWAY from EVGA's ref cards, the XC or XC Ultra. I had the XC and it would spike to almost 80 degrees at its stock clock. It made my computer fans and cooler sound like an engine. I exchanged it for the Asus Strix 2080 Ti and I get none of the big fan noise now. It also runs at least 10 degrees cooler. Hopefully I didn't do any long-term damage. I don't understand why Nvidia can make a chip but can't design a card correctly. It's so strange.

Does anyone know how to change the Strix lighting without downloading their bloatware? I had 980 and 1080 Strix cards and I just left them alone LED wise but the 2080ti stock does the breathing effect which is really annoying and lame. Clients of mine will probably think my computer is failing or something. Idk why people think that looks cool. 

Also, when you guys get free games with your video cards that you already have, what do you do with them? Is there a good site to trade? I wish I could trade BFV and Anthem for the AMD free games that I don't have: DMC5, RE2, and The Division 2.


----------



## zhrooms

Pauliesss said:


> Anyone own a GAINWARD GeForce RTX 2080 Ti Phoenix GS?
> 
> Can you tell me how the temperatures are, fan noise and perhaps compare the performance to other Ti's?


 
Incredibly hard to say; there are currently over 30 unique cooler designs out right now (excluding waterblocked cards), and no reviewer can compare them all, so asking if one card is better than another doesn't mean much when you have a sea of cards to choose from.

And who uses stock fan curve? They're *always* worse than what you can manually set yourself, so you decide fan noise and temperature.

Lastly, performance is up to the card's BIOS; the higher a card's power limit, the higher it can overclock, so just check the list in the original post to find out which cards are good.

Generally the best coolers are triple-fan / 2.5-slot (and above) designs; they are the most effective at cooling, thus lower noise and a higher overclock. And if you feel adventurous, just swap the BIOS to the Galax 380W and go nuts with overclocking.

So what I can tell you about Gainward Phoenix GS is that it's a triple fan, 2.5 slot, 292mm long, 330W power limit and factory overclocked card. These specs make it one of the best cards you can buy out of over 60 unique cards. No need for a review or user input, it's simply a great card.

Read the FAQ at the bottom of the original post by the way, for more info.
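On the fan-curve point above: a manual curve is just a handful of (temperature, fan %) points with interpolation between them. A minimal sketch of that idea (the points below are hypothetical; pick your own noise/temperature trade-off):

```python
# Minimal manual fan curve: (temp °C, fan %) points with linear
# interpolation between them. The points are hypothetical examples.

POINTS = [(30, 30), (50, 45), (65, 70), (75, 100)]

def fan_speed(temp_c: float) -> float:
    """Fan % for a given GPU temperature, clamped at the curve's ends."""
    if temp_c <= POINTS[0][0]:
        return float(POINTS[0][1])
    if temp_c >= POINTS[-1][0]:
        return float(POINTS[-1][1])
    for (t0, f0), (t1, f1) in zip(POINTS, POINTS[1:]):
        if t0 <= temp_c <= t1:
            return f0 + (f1 - f0) * (temp_c - t0) / (t1 - t0)

print(fan_speed(40.0))  # 37.5
```

Tools like MSI Afterburner let you drag exactly these kinds of points on their fan-curve graph; the stock BIOS curve is just a conservative default set of them.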


----------



## Tricky

Hey guys. I'm in a weird spot right now trying to get my hands on a 2080ti. I've basically placed an order for the Gigabyte Aorus Xtreme 2080ti which came out to 1822CAD. I have a second option to go for an Asus Strix OC version but it would come out to 2100CAD at a local store. My third option would be a Zotac Amp 2080ti for 1760CAD. All these prices are including tax/etc.

Should I wait for the Aorus Xtreme to come in (which will take 3 weeks), get the Strix OC (which I don't see being worth almost $300 more), or just settle for the Zotac AMP?

I understand the Zotac AMP would be the weakest of the bunch, but would it be worth the savings? I do plan on overclocking it to the max. Also, these are literally my only options for now; availability is really low!


----------



## zhrooms

Tricky said:


> I've basically placed an order for the Gigabyte Aorus Xtreme 2080ti which came out to 1822CAD. I have a second option to go for an Asus Strix OC version but it would come out to 2100CAD at a local store. My third option would be a Zotac Amp 2080ti for 1760CAD. All these prices are including tax/etc.
> 
> Should I wait for the Aorus Xtreme to come in (which will take 3 weeks) or get the Strix oc (which I don't see it being worth almost 300$more) OR just settle for the Zotac AMP ??
> 
> I understand the Zotac Amp would be the weaker of the bunch, but would it be worth the savings?? I do plan on overclocking it to the max. Also these are literally my only options for now, availability is really low!


 
Read the FAQ at the bottom of the *original post*, and you'll know what card to buy.

If you have any questions after reading it, ask here and I will answer as best as I can.


----------



## shiokarai

boli said:


> Hmm, they recommend a horizontal installation with the alu plate/connector at the top against the clicking noise, that's why I installed it the way I did. So far no clicking noise on mine.


Mine was silent, but one day it decided it was time to do some clicking. Nothing helped, so I just threw it out of the loop; my system was otherwise silent, so the noise was really annoying. I've also found it's not so helpful after all (and inaccurate).


----------



## Tricky

zhrooms said:


> Read the FAQ at the bottom of the *original post*, and you'll know what card to buy.
> 
> If you have any questions after reading it, ask here and I will answer as best as I can.


I did read it. The Strix OC clearly has a better PCB and power design (at least it has 10 real power phases as opposed to 8 on the Aorus Xtreme). 

What I do NOT like about the Aorus X is the fact that it has a lot of similarities to a vanilla 2080 Ti card, yet it has a very high boost clock out of the box...

I'll try to ask a more direct question: which one, given those three choices and considering the price differences, would be worth keeping?

edit: I do want to OC it to the max potential.
edit2: Also, am I wrong about something here?


----------



## psychrage

Looking to upgrade from my 1080ti FTW3 to a 2080 Ti in a few weeks.

Are 2080 Ti's still dying? Is there any brand in particular safer than others?

I'll be putting a waterblock on. How are both Nvidia and Gigabyte (considering the Aorus for the 19 power phases and to brand-match my motherboard) in terms of warranty for a card that has had the heatsink swapped (factory one put back on when sent in, obviously)?


----------



## kot0005

mistershan said:


> Hey guys. Just wanted to says a problem that i have been posting about has been fixed. I will warn all of you to STAY AWAY from EVGA's ref cards the XC or XC Ultra. I had the XC and it would spike to almost 80 degrees on it's stock clock. It made my computer fans and cooler sound like an engine. I exchanged it for the Asus Strix 2080ti and I get non of the big fan noise now. It also runs at least 10 degrees cooler. Hopefully I didn't do any long term damage. I don't understand why Nvidia can make a chip but can't design a card correctly. It's so strange.
> 
> Does anyone know how to change the Strix lighting without downloading their bloatware? I had 980 and 1080 Strix cards and I just left them alone LED wise but the 2080ti stock does the breathing effect which is really annoying and lame. Clients of mine will probably think my computer is failing or something. Idk why people think that looks cool.
> 
> Also, when you guys get free games with your video cards that you already have what do you do with them? Is there a good site to trade? I wish I could trade BFV and Anthem for the AMD free games that I don't have which is DM5, RE2 and The Division 2.


Try the reddit steam game swap sub to trade your game. Don't get scammed though; read the pinned post.


----------



## kot0005

Tricky said:


> Hey guys. I'm in a weird spot right now trying to get my hands on a 2080ti. I've basically placed an order for the Gigabyte Aorus Xtreme 2080ti which came out to 1822CAD. I have a second option to go for an Asus Strix OC version but it would come out to 2100CAD at a local store. My third option would be a Zotac Amp 2080ti for 1760CAD. All these prices are including tax/etc.
> 
> Should I wait for the Aorus Xtreme to come in (which will take 3 weeks) or get the Strix oc (which I don't see it being worth almost 300$more) OR just settle for the Zotac AMP ??
> 
> I understand the Zotac Amp would be the weaker of the bunch, but would it be worth the savings?? I do plan on overclocking it to the max. Also these are literally my only options for now, availability is really low!


Strix has the best cooling; Zotac seems to have better clocks. I'd recommend those two, unless you just want the Aorus for its looks.

Factory boost clocks have been a gimmick since the 10 series; the card will always boost higher with better cooling.


----------



## Thoth420

mistershan said:


> Does anyone know how to change the Strix lighting without downloading their bloatware? I had 980 and 1080 Strix cards and I just left them alone LED wise but the 2080ti stock does the breathing effect which is really annoying and lame. Clients of mine will probably think my computer is failing or something. Idk why people think that looks cool.


Hi! Install ASUS Aura, change the setting to whatever you want, then restart the system; you can then uninstall Aura and the setting should be saved to the BIOS. I am very against extraneous software, but both it and Mystic Light can be set and then chucked if you want, and neither runs anything in the background that I can see once the app has been closed.


----------



## taem

Dunno if someone posted this already, but GPU-Z 2.16.0 now displays ICX temp readouts for EVGA cards. At least it does on the FTW3 Ultra. Very nice.

Only 2 of the fans display though, as before.


----------



## zhrooms

Tricky said:


> I did read it. The Strix OC clearly has a better PCB and power design (at least it has 10 real power phases as opposed to 8 on the Aorus Xtreme).
> 
> What I do NOT like about the Aorus X is the fact that it has alot of similarities to a vanilla 2080ti card. Yet it has a very high boost clock out of the box...
> 
> I'll try to ask a more direct question..Which one, given those three choices and considering the price differences, would be worth keeping?
> 
> edit: I do want to OC it to the max potential.
> edit2: Also, am I wrong about something here?


 
Then why bring up that the Strix has a better PCB? It's completely irrelevant what PCB it has; even the reference cards overclock *exactly* the same, sadly.

Boost clocks don't mean anything either, as you'll manually overclock it anyway. 1755 MHz is one of the higher boost clocks, and that translates to just 1890-1905 MHz actual clock, while with a manual overclock you reach 2000 MHz with ease.

If you want max OC, then the Gigabyte AORUS Xtreme is the obvious answer because it has the highest power limit. That is the only real thing that matters between these cards, as they all have massive triple-fan coolers, meaning similar fan noise and temperature.

On the other hand, if you are willing to flash the BIOS to the Galax 380W, then they all *overclock the same* and what matters is the cooler; realistically the temp difference will be at most 5°C between the best and worst of the three.

The advantage AORUS and Strix have here is that they use custom PCBs which allow the cooler to be bigger, so the AMP is the worst of the three (but still a fantastic card).

The remaining two cards are almost identical when flashing the 380W BIOS, a few degrees difference at most in temperature, which at best results in a 15 MHz higher overclock. Which card cools better at your preferred fan speed is impossible to say from here, so it simply comes down to looks and price.

Personally I think the AORUS looks far better than the Strix. It's also got a 366W BIOS while the Strix only has 325W, so you wouldn't necessarily have to flash the BIOS on the Gigabyte for it to perform really well on air, which is nice: zero risk of voiding the warranty.

But in your case, the Strix also costs far more, which makes the answer very, very easy: the card you should get is the *AORUS*.

Though if I were in your position I would get neither, and instead wait and look for other cards. The absolute best cards are the hybrid ones, and the card I universally recommend to anyone is the second-best air card (which also happens to be the cheapest custom-PCB card): the MSI Gaming X Trio. An absolute unit of a cooler, an insane power limit of 406W, incredible looking, and fairly priced.


----------



## mistershan

kot0005 said:


> Try reddit steam game swap to trade ur game. Dont get scammed tho. Read the pinned post.


Thanks. I just read this, though. Idk if it's still true, but has NVIDIA found a way to make sure only you can redeem the code you get? Or can you give it away or sell it?

https://www.vg247.com/2017/02/02/fr...g-nvidia-gpus-will-only-be-redeemable-by-you/


----------



## NBrock

So close to 16k overall score in TimeSpy

https://www.3dmark.com/3dm/32483767


----------



## J7SC

Tricky said:


> I did read it. The Strix OC clearly has a better PCB and power design (at least it has 10 real power phases as opposed to 8 on the Aorus Xtreme).
> 
> What I do NOT like about the Aorus X is the fact that it has alot of similarities to a vanilla 2080ti card. Yet it has a very high boost clock out of the box...
> 
> I'll try to ask a more direct question..Which one, given those three choices and considering the price differences, would be worth keeping?
> 
> edit: I do want to OC it to the max potential.
> edit2: Also, am I wrong about something here?



Much of it is just a mix of personal preference and actual availability... I have a couple of the Aorus XTR 2080 Ti Waterforce and really like them, including the 4-year warranty and of course the fact that they were going into a custom loop anyway and already came with the block from the factory. On the other hand, now that they have been back-ordered for over a month in most places, the Zotac AMP/Ex, MSI Gaming X Trio and Asus Strix are also cards I would have *very much considered*... then there are the hyper ones just announced at CES, such as the Kingpin and MSI Lightning with special BIOSes up to 520W, and the Strix Matrix, which is like the Kingpin equipped with an AIO... but who knows when they'll be available and how much they'll cost in C$.

Finally, when considering a purchase, you can always look at the Newegg.ca summary re. availability, pricing and the number of stars in customer reviews to help you make up your mind!


----------



## kx11

taem said:


> Dunno if someone posted this already but GPU Z 2.16.0 now displays ICX temp readouts for EVGA cards. At least the FTW3 Ultra it does. Very nice.
> 
> Only 2 of the fans display though, as before.



Because it's 1x blower + 1x cooling; the blower fan is automatically synced with the other blower fan, so there's no need to control the 3rd fan, AFAIK.


----------



## kot0005

Thoth420 said:


> mistershan said:
> 
> 
> 
> Does anyone know how to change the Strix lighting without downloading their bloatware? I had 980 and 1080 Strix cards and I just left them alone LED wise but the 2080ti stock does the breathing effect which is really annoying and lame. Clients of mine will probably think my computer is failing or something. Idk why people think that looks cool.
> 
> 
> 
> Hi! Install ASUS Aura and change the setting to whatever you want it to be and then restart the system and you can uninstall Aura and the setting should be saved to the BIOS. I am very against extraneous software but it and Mystic Light both can be set and then chucked if you want however neither run anything in the background I can see once the app has been closed.

That doesn't work with Mystic Light, at least for my GPU. I find Aura Sync to be the best RGB app so far. What I do is set the Aura lighting service to manual start, so it won't start unless you start the app. Tried this with Mystic Light but it resets my static white to rainbow, so annoying.


----------



## kot0005

mistershan said:


> kot0005 said:
> 
> 
> 
> Try reddit steam game swap to trade ur game. Dont get scammed tho. Read the pinned post.
> 
> 
> 
> Thanks. I just read this though. Idk if this is still true but NVIDIA found a way to make sure you can only redeem the code you get? Or can you give away or sell it?
> 
> https://www.vg247.com/2017/02/02/fr...g-nvidia-gpus-will-only-be-redeemable-by-you/

That is outdated. I sold my keys.


----------



## mistershan

kot0005 said:


> That is outdated. I sold my keys.


Really? So what's to stop people from buying a cheap Radeon 590, getting the three free games, and then returning the card?


----------



## jelome1989

LayZ_Pz said:


> What did you use to check stability? I have a stable OC in many games and benchmarks, but The Witcher 3 really kills it every time.





boli said:


> I'm not who you asked, but here goes anyway: Everyone has their own "killer".  Mine are FireStrike Ultra (graphics test 2), Time Spy Extreme (graphics test 2), Shadow of the Tomb Raider (the very last benchmark) and just playing Battlefield V..


I agree, it's stable in the games I play, like AotS, Far Cry, AC: Odyssey, Tomb Raider, PUBG, Batman, etc.

In the 3DMark tests the card rarely reaches 2100 MHz because of the power limit or high temps, so I still can't say what this card's "killer" is. But I'm satisfied with the card's performance, and that's what's important.


----------



## zack_orner

I have the MSI 2080 Ti Gaming X Trio and don't mind reading, but at almost 5,700 posts, can somebody point me toward anyone's results with the custom BIOS for this card in the OP, please?

Sent from my SM-N950U using Tapatalk


----------



## Esenel

psychrage said:


> Are 2080 Ti's still dying? Is there any brand in particular safer than others?


In my opinion it is as safe as usual to buy a card.

My Founders Edition has been running since October with +145 core and +1000 mem, with torture runs in between at +160 / +1100.

If you want one, go buy it.


----------



## arrow0309

Anyone having trouble with crashes in Hunt: Showdown?
I'm under water at ~40°C with a moderate OC of +90/+500 on the 380W BIOS (my 2080 Ti FE is not a good overclocker; this is my max stable OC, 2070-2085).
Three days ago I had a couple of freezes during the game. I thought it was my RAM overclocked to 4000, so I reverted it to XMP values.
Yesterday evening I also had 3 crashes: the first a short freeze and then a crash report, so I put the card's memory at default, and then within half an hour or so I had another crash to desktop, and then another one (small window with "DX device removed" or something).
Yesterday afternoon I had also updated the Nvidia drivers after a couple of DDU cleanings.
Could this FE be so lousy at OC that it can't even hold these clocks?
If I have to lower it further to 2055, I think I'll also switch back to the EVGA 338W BIOS.


----------



## TK421

NBrock said:


> Extended loads for [email protected] are 36°C; for gaming where the memory is utilized more, the highest I have seen is 40°C. Not sure what my ambient temps are, but they are pretty low since it is wintertime and the rig is in my basement. I have one 240 rad and one 360 rad with a 9900K in the same loop.
> 
> The Superposition benchmark would throttle to 2065 core; with the new BIOS it maintained 2145 throughout the whole benchmark. I haven't tried going higher yet.
> In the Valley benchmark at 2560x1440, it runs at a higher clock since the load isn't very high: 2315 core clock.



I thought you were still on the air cooler, but thanks for the insight!










In terms of what we'd call a good/average/bad/dud card, what kind of GPU core and VRAM frequencies should I look for?


----------



## BudgieSmuggler

Tricky said:


> Hey guys. I'm in a weird spot right now trying to get my hands on a 2080 Ti. I've basically placed an order for the Gigabyte AORUS Xtreme 2080 Ti, which came out to 1822 CAD. I have a second option to go for an ASUS Strix OC version, but it would come out to 2100 CAD at a local store. My third option would be a Zotac AMP 2080 Ti for 1760 CAD. All these prices include tax/etc.
> 
> Should I wait for the AORUS Xtreme to come in (which will take 3 weeks), get the Strix OC (which I don't see being worth almost $300 more), or just settle for the Zotac AMP?
> 
> I understand the Zotac AMP would be the weakest of the bunch, but would it be worth the savings? I do plan on overclocking it to the max. Also, these are literally my only options for now; availability is really low!



I have the Zotac AMP 2080 Ti and have no complaints about it. I'm at 2100 MHz and can do +1100 on memory, and this is on air. Even if I hit 54 degrees, my 2100 MHz stays stable. This is on the slightly higher 338W BIOS from the ASUS Ultra XC. This fits with some of the highest stable clocks I've seen, on air at least.


----------



## laxu

Does anyone know if the *Palit 2080 Ti Gaming Pro* (not OC) can accept a BIOS with a higher power limit? Otherwise the difference between the two seems to be just the power limit, which gives the OC model higher boost clocks.


----------



## zhrooms

zack_orner said:


> I have the MSI 2080 Ti Gaming X Trio and don't mind reading, but at almost 5600 posts, can somebody point me in the right direction of anyone's results with the custom BIOS for this card in the OP, please?


 
There are none here. I also looked around elsewhere and couldn't find anything, but that's not a surprise; very few have this card, and even fewer bother posting benchmark results. Why do you want to know?

We already know it's slightly faster than a card with the 380W BIOS. To even get something from the additional 26W you need to water cool it or use it in a room with a low ambient temperature; running the card at max power limit (1.050V and above) will generate a lot of heat.
 


arrow0309 said:


> Anyone having trouble with crashes in Hunt Showdown?
> I'm under wc @40C and moderate oc +90 / 500 on 380w bios (my 2080ti FE is not a good overclocker, this is my max stable oc, 2070 - 2085).
> Three days ago had a couple of freezes during the game, I thought it was from my ram overclocked to 4000 so I reverted it back to Xmp values.
> Yesterday evening I've also had 3 crashes, the first one with a short freeze and then crash report, put the vga memory at default and then in half an hour or so I had another crash to desktop and then another one (small window with dx device removed or something).
> I've previously updated yesterday afternoon the nvidia drivers after a couple of DDU cleanings.
> Could this FE be so lousy in oc not even holding well these clocks?


 
Sounds like it might be the boost. What are your resolution and framerate? If there are spots in the game where GPU usage is all over the place, the core clock could spike suddenly without enough voltage, resulting in a crash (or freeze).

What I would like to know is the GPU _Core Clock_, _Temperature_, _Usage_ and _Voltage_ when playing.
 


BudgieSmuggler said:


> I have the Zotac Amp 2080ti and have no complaints on it. I am at 2100mhz and can +1100 on memory and this is on air. Even if i hit 54 degrees my 2100mhz stays stable. This is on the slightly higher 338W bios from the Asus Ultra XC. This fits with some of the highest stable clocks i've seen on air at least.


 
That's not how it works; it's impossible for a 338W BIOS to stay at 2100MHz when the card is going all out, regardless of temperature. You're talking about speeds when you don't max out your card. Download *Unigine Superposition*, run either the 8K Optimized or 1080p Extreme preset, and get back to us with your highest core clock towards the end of the run; that's your max overclock.
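If you want to automate reading off that sustained clock instead of eyeballing the OSD, here's a minimal sketch. It assumes you export your monitoring data to a simple CSV with one core-clock sample per line; the `core_clock` column name and the log layout are hypothetical, so adjust for whatever your monitoring tool actually writes out:

```python
import csv
import io

def sustained_clock(samples, tail_fraction=0.25):
    """Lowest core clock over the final part of a run, i.e. what the
    card actually holds once it is fully heat-soaked and power-limited."""
    tail = samples[-max(1, int(len(samples) * tail_fraction)):]
    return min(tail)

# Hypothetical monitoring export: one core-clock sample (MHz) per line.
log = io.StringIO("core_clock\n2145\n2130\n2115\n2100\n2100\n2085\n2085\n")
clocks = [int(row["core_clock"]) for row in csv.DictReader(log)]
print(sustained_clock(clocks))  # prints 2085, the clock held at the end
```

The point is the same as above: the peak clock early in a run means little, and the number the card settles at near the end of Superposition is your real max overclock.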


----------



## Tricky

zhrooms said:


> Then why do you bring up that the Strix has a better PCB? It's completely irrelevant what PCB it has, the reference cards overclock *exactly* the same.


Well, I wouldn't go as far as saying it's irrelevant. Just pointing out the quality difference between the two.




zhrooms said:


> Boost clocks don't mean anything either, as you manually overclock it anyway, 1755 MHz is one of the higher boost clocks and that translates to just 1890-1905 MHz actual clock, while with a manual overclock you reach 2000MHz with ease.
> 
> If you want max OC, then the Gigabyte AORUS Xtreme is the obvious answer because it has the highest power limit, that is the only real thing that matters between these cards as they all got massive triple fan coolers, meaning similar fan noise and temperature.
> 
> On the other hand, if you are willing to flash BIOS to the Galax 380W, then they all *overclock the same*, now what matters are the coolers, realistically the temp difference will be at most 5c between the best and worst of the three.
> 
> The advantage AORUS and Strix has here is that they use custom PCBs which allow for the cooler to be bigger. So AMP is therefore the worst of the three (but still a fantastic card).
> 
> Remaining two cards are almost identical when flashing 380W BIOS, few degrees difference at most in temperature, which at best results in a 15 MHz higher overclock, but to say which card cools better at your preferred fan speed is impossible to say right here, so then it simply comes down to how they look and price.
> 
> Personally I think AORUS looks far better than Strix, it's also got a 366W BIOS while Strix only has 325W, so you wouldn't necessarily have to flash the BIOS on the Gigabyte for it to perform really well on air, which is nice, zero risk of voiding warranty.
> 
> But in your case, the Strix also cost far more, which means the answer becomes very, very easy, the card you should get is the *AORUS*.
> 
> Though if I were in your position I would get neither; rather, wait and look for other cards. The absolute best cards are the hybrid ones, and the card I universally recommend to anyone is the second-best air card (which also happens to be the cheapest custom-PCB card): the Gaming X Trio. An absolute unit of a cooler with an insane power limit of 406W, also incredible looking and fairly priced.


Looks like I'll just wait for the Aorus Xtreme to come in. I can't wait longer than 3 weeks. Thanks for your input!



J7SC said:


> Much of it is just a mix of personal preference and actual availability...I have a couple of the Aorus XTR 2080 Ti Waterforce and really like them, including the 4 year warranty and of course the fact that they were going into a custom loop anyway and already came with the block from the factory. On the other hand, now that they have been back-ordered for over a month in most places, the Zotac AMP / Ex, MSI Trio Gaming and Asus Strix are also cards I would have *very much considered*...then there are the hyper ones just announced at CES such as the Kingpin and MSI Lightning with special Bios up to 520w, and Strix Matrix which is like the Kingpin equipped w/ AIO...but who knows when they are available and how much in C$ they will cost.
> 
> Finally, when considering a purchase, you can always look at the NewEgg.Ca summary re. availability, pricing and the number of stars by customer review to help you make up your mind !


Agreed, the 4-year warranty is nice indeed. Is it transferable when I sell the card later down the road (couldn't find info on this on the Gigabyte website)?


----------



## zhrooms

laxu said:


> Does anyone know if the *Palit 2080 Ti Gaming Pro* (not OC) can accept a BIOS with higher power limit?


 
Yes, I suggest you check out the list of cards and read the FAQ in the *original post*.
 


Tricky said:


> Well I wouldn't go as far as saying it's irrelevant. Just pointing out the different quality between the two.
> 
> Looks like I'll just wait for the Aorus Xtreme to come in.


 
It's irrelevant unless you plan on overclocking with LN2 or keeping it for more than 5 years, and by that point it's going to be worth like $100 anyway, so it doesn't matter if it dies.

What was the pre-tax price? It's $1340 USD in Germany, so $140 above NVIDIA Founders Edition MSRP. Meanwhile the Gaming X Trio, which *destroys* the AORUS Xtreme, is only $1250 pre-tax.









Also, speaking of prices: as of today the *A* chip is sold for a *record low* €1099 including tax in Germany, for the Palit GamingPro with reference PCB and factory overclock. That is just $1050 pre-tax, compared to the $999 MSRP set by NVIDIA for the Non-A. Best price/performance seen yet; it can take the 380W BIOS and reach 2100MHz in 5K gaming at water temperatures. The cooler is also huge for being just dual fan; mine managed to hold 60C at 1905MHz/0.900V with the fans at 2200RPM (100%) in 5K gaming.
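For anyone wanting to redo that comparison against US MSRP themselves, the arithmetic is just stripping German VAT (19%) and converting currencies. A sketch; the EUR→USD rate here is an assumed figure from around early 2019, so plug in the current rate:

```python
GERMAN_VAT = 0.19
EUR_TO_USD = 1.14  # assumed exchange rate, early 2019

def pre_tax_usd(price_eur_inc_vat, vat=GERMAN_VAT, rate=EUR_TO_USD):
    """Pre-tax USD equivalent of a German price that includes VAT."""
    return price_eur_inc_vat / (1 + vat) * rate

print(round(pre_tax_usd(1099)))  # ~1053, in line with the ~$1050 above
```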


----------



## Tricky

zhrooms said:


> What was the pre-tax price? It's $1340 USD in Germany, so $140 above NVIDIA Founders Edition MSRP. Meanwhile the Gaming X Trio, which *destroys* the AORUS Xtreme, is only $1250 pre-tax.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Also, speaking of prices: as of today the *A* chip is sold for a *record low* €1099 including tax in Germany, for the Palit GamingPro with reference PCB and factory overclock. That is just $1050 pre-tax, compared to the $999 MSRP set by NVIDIA for the Non-A. Best price/performance seen yet; it can take the 380W BIOS and reach 2100MHz in 5K gaming at water temperatures. The cooler is also huge for being just dual fan; mine managed to hold 60C at 1905MHz/0.900V with the fans at 2200RPM (100%) in 5K gaming.


The pre-tax price was 1730 CAD, or 1300 USD. I've searched everywhere and couldn't get it for less. Same for my other two choices.

Unfortunately for us Canadians, we can't even get the Palit GamingPro (for now at least). Seems like you guys have more to choose from :/

EDIT: The MSI Trio is more expensive at 1760 CAD; I've seen some at 1800 as well. I'm telling you, the choices here are limited. I feel like the AORUS Xtreme is still a very good choice.


----------



## J7SC

I somehow doubt that the MSI Trio 'destroys' the Aorus Waterforce, whatever is meant by 'destroys' (never mind the reports of some of the 2080 Tis destroying themselves).

That said, below are two sets of graphs: one is from NewEgg.ca re. customer satisfaction ratings for the Aorus Xtreme WB, MSI Gaming Trio and Asus Strix; the other set is about OC performance per OC3D, where the best card seems to be the Asus Strix (A chip) when push comes to shove (no Aorus card was available in that review).

...*but what is really interesting* is how well the FE cards are doing... unless it is a pure bench comparison, I doubt anyone will notice the fps difference between these options...


----------



## BudgieSmuggler

zhrooms said:


> There are none here, also looked around elsewhere and can't find anything, but that's not a surprise, very few have this card and even fewer bother going for benchmark results. What is the reason you want to know?
> 
> We already know it's slightly faster than a card with the 380W BIOS, to even get something from the additional 26W you need to water cool it or use it in a room with a low ambient temperature. Running the card at max power limit (1.050V and above) will generate a lot of heat.
> 
> 
> 
> Sounds like it might be the boost, what is your resolution and framerate? If there are spots in the game where GPU Usage is all over the place, that could lead to the core clock spiking suddenly and not having enough voltage thus resulting in a crash (or freeze).
> 
> What I would like to know is the GPU _Core Clock_, _Temperature_, _Usage_ and _Voltage_ when playing.
> 
> 
> 
> That's not how it works, it's impossible for a 338W BIOS to stay at 2100MHz when the card is going all out, regardless of temperature, you're talking about speeds when you don't max out your card. Download *Unigine Superposition* and run either 8K Optimized or 1080p Extreme preset, and get back to us with what your highest core clock was towards the end of the run, that's your max overclock.


Thanks for the info. I'm aware the card is capable of more, as I think it's a pretty good overclocker. I was saying that the card is a good card, and talking about what my card can do with my current setup/BIOS. I can't give someone advice on whether to buy a certain card based on conjecture. How is it impossible? I am at 2100 MHz 99% of the time, stable, on air, on the 338W (default) / 360W (maxed out) BIOS, with the odd drop to 2085 MHz. For me on air, with the BIOS I'm on, my card is maxed out; it won't hit anything higher as it will downclock due to temp and/or power limit. I wasn't looking to get into an argument with you when all I was doing was offering advice that the Zotac AMP (even with its lower-power default BIOS) is not a bad card at all, and now with it being less power limited, even more so. Kindly take your salty arse and talk to someone else. Thanks


----------



## zack_orner

zhrooms said:


> There are none here, also looked around elsewhere and can't find anything, but that's not a surprise, very few have this card and even fewer bother going for benchmark results. What is the reason you want to know?
> 
> 
> 
> We already know it's slightly faster than a card with the 380W BIOS, to even get something from the additional 26W you need to water cool it or use it in a room with a low ambient temperature. Running the card at max power limit (1.050V and above) will generate a lot of heat.
> 
> 
> 
> 
> 
> 
> 
> Sounds like it might be the boost, what is your resolution and framerate? If there are spots in the game where GPU Usage is all over the place, that could lead to the core clock spiking suddenly and not having enough voltage thus resulting in a crash (or freeze).
> 
> 
> 
> What I would like to know is the GPU _Core Clock_, _Temperature_, _Usage_ and _Voltage_ when playing.
> 
> 
> 
> 
> 
> 
> 
> That's not how it works, it's impossible for a 338W BIOS to stay at 2100MHz when the card is going all out, regardless of temperature, you're talking about speeds when you don't max out your card. Download *Unigine Superposition* and run either 8K Optimized or 1080p Extreme preset, and get back to us with what your highest core clock was towards the end of the run, that's your max overclock.


I flashed it and it works fine. I'm able to get a slightly more stable memory OC now, but the core is worse because of thermals, like you said. Thank you for your time, greatly appreciated.

Sent from my SM-N950U using Tapatalk


----------



## Tricky

J7SC said:


> I somehow doubt that the MSI Trio 'destroys' the Aorus Waterforce, whatever is meant by 'destroys'  (never mind the reports of some of the 2080 TIs destroying themselves  ).


I think he was comparing it to the Xtreme, not the Waterforce (which have the exact same PCB). But still, from what I've seen, the MSI Trio and the AORUS Xtreme are very close when it comes to OC capabilities.


----------



## J7SC

Tricky said:


> I think he was comparing it to the Xtreme not the waterforce (which have the exact same PCB). But still, from what I've seen the MSI Trio and the Aorus Xtreme are very close when it comes to OC capabilities.


...at the end of the day, I am far more concerned about the quality of the PCB and components (with warranty being related) than anything else, not least as I will also be using them for work when the time comes - and the 2x Aorus meet that criterion as well as being quite speedy, per earlier Superposition posts (both Aorus pull between 375W and 380W max on the stock BIOS right out of the box).

But in this RTX 2080 Ti generation, NVIDIA has really tied down their vendors in terms of design freedoms anyway - until they have churned through a pile of 'regular' 2080 Tis in their chip order book (Galax seems to be a bit of an exception...). Then there are what look like 'lower than expected' yields at the foundry, which limits the actual availability of top-end cards, which are often binned. So quality and actual availability (at a non-gouge price; some vendors charge over $3800 for certain 2080 Tis that are in short supply) are key, with high OC potential next in line.

...with CES just over, my fav 'halo dream cards' are (in no particular order) the Galax 2080 Ti HOF OC Labs water-blocked > but there were only 100-200, mostly reserved for extreme OC'ers... likely the 3x 8-pin Kingpin w/ AIO (but not released yet)... likely the 3x 8-pin MSI Lightning (though it needs to be water-cooled, which may be an issue re. aftermarket blocks for that non-standard MSI PCB)... and likely the Asus Strix Matrix w/ AIO... it has 'only' 2x 8-pin, but its PCB is on par with or a close second (IMO) to the elusive Galax, and short of extreme LN2 or liquid helium, the 2x 8-pin should not be much of an issue.


----------



## NBrock

TK421 said:


> I thought you were still on the air cooler, but thanks for the insight!
> 
> 
> 
> In terms of what we'd call a good/average/bad/dud card, what kind of GPU core and VRAM frequencies should I look for?


I put my block on the other day when my fittings came in. I have the EK block for it. Looks like mine is the "updated" one with the additional thermal pads.

I can't seem to go as high on the memory clocks as others can; the most I have gotten away with is +850 memory. My core clocks, depending on the application, sit around 2130 to 2145. In Time Spy, for example, 2130 is where my core sits, and the same settings get me 2145 in Superposition 4K Optimized.


----------



## Diablix6

Krzych04650 said:


> I have posted a short video with stress test, you also have all the settings and VF at the beginning:
> 
> https://www.youtube.com/watch?v=ZFEHM9KPo1U
> 
> I found 1.037 to be a sweetspot between performance/OC potential and avoiding power limit. Long time stable clock is 2100-2115.
> 
> Though voltage goes up to 1.093v if you set voltage offset on main AB screen to 100 before editing the curve, so if you have games that are not consuming a lot of power (like for example AC: Origins has some unusual way of utilizing the GPU that makes it draw much less power even in 100% GPU bound scenarios, it is very good as a lower power benchmark to determine max OC potential without hitting power limit) then you can go higher. Case dependent.
> 
> Fans are 100% under load because I have my PC outside my room so I don't care about noise and want maximum cooling performance.
> 
> I am also using 8000 memory; it is stable even above 8100 and crashes only at 8200, but there is no point in risking crashes for 100 MHz more memory clock, so I just keep it at 8000. So it is a nice 2100/8000, full nice-looking numbers



Great, thanks for answering :thumb:. I will let you know the result as soon as I have time to try it; I hope for some performance improvements.


----------



## laxu

zhrooms said:


> Yes, I suggest you check out the list of cards and read the FAQ in the *original post*.
> 
> Also, speaking of prices: as of today the *A* chip is sold for a *record low* €1099 including tax in Germany, for the Palit GamingPro with reference PCB and factory overclock. That is just $1050 pre-tax, compared to the $999 MSRP set by NVIDIA for the Non-A. Best price/performance seen yet; it can take the 380W BIOS and reach 2100MHz in 5K gaming at water temperatures. The cooler is also huge for being just dual fan; mine managed to hold 60C at 1905MHz/0.900V with the fans at 2200RPM (100%) in 5K gaming.


Thanks for the reply! The Palit Gaming Pro OC was actually on sale for 1000 euros (incl. taxes) briefly; I just missed it myself. Currently the non-OC is 1098€, so I'm considering buying one and slapping a higher-power-limit BIOS on it.

There's still plenty of room for 2080 Ti prices to go lower, and to be honest they should. It used to be that the higher-end models overclocked better while running cool and quiet, but now even the reference-based cards are starting to perform similarly, so the price premiums seem unwarranted. I guess at this point they are trying to figure out how far they can push pricing before enthusiasts balk and wait for sales.


----------



## arrow0309

zhrooms said:


> ... cut ...
> 
> Sounds like it might be the boost, what is your resolution and framerate? If there are spots in the game where GPU Usage is all over the place, that could lead to the core clock spiking suddenly and not having enough voltage thus resulting in a crash (or freeze).
> 
> What I would like to know is the GPU _Core Clock_, _Temperature_, _Usage_ and _Voltage_ when playing.
> 
> ... cut ...


Hi, I'm running the game at 3440x1440 on my G-Sync X34P at 120Hz, ~95-100 fps, high settings.
It just crashed after 20 minutes:

And below the detailed msi ab monitoring:


----------



## arrow0309

@zhrooms
Also (like I said): 380W BIOS, MSI AB voltage slider at 0, PL max, GPU +90 and memory +500.
Gonna try raising the voltage slider to its max (I don't like the curve) and test again.


----------



## zhrooms

Tricky said:


> Pre-tax price was 1730CAD or 1300USD.
> 
> Unfortunately for us canadians, can't even get the palit gamingpro (for now at least). Seems like you guys have more to choose from :/
> 
> The MSI Trio is more expensive at 1760 CAD; I've seen some at 1800 as well. I'm telling you, the choices here are limited. I feel like the AORUS Xtreme is still a very good choice.


 
27 out of 50 cards *ready to ship* in Germany (and surrounding area), so yeah there's plenty to choose from here.

31 out of 56 cards *ready to ship* in Sweden.

The AORUS Xtreme is indeed one of the best cards, it's in the *Top 10*, but I'm just saying there are better cards that cost less.
 


J7SC said:


> I somehow doubt that the MSI Trio 'destroys' the Aorus Xtreme, whatever is meant by 'destroys'
> 
> That said, below or two sets of graphs...one is from NewEgg.ca re. customer satisfaction ratings for the Aorus Xtr WB, MSI Gaming Trio and Asus Strix, the other set is about OC performance per OC3D, and the best card seems to be the Asus Strix (A chip) when push comes to shove (no Aorus card available in that review)
> 
> ...*but what is really interesting* is how well the FE cards are doing...unless it is a pure bench comparison, I doubt anyone will notice the fps difference between these options...


 
*Destroy* is an exaggeration, sure, but it's better than the AORUS Xtreme at everything.

Customer Satisfaction Rating & Reviews









Reviews only provide a glimpse of what the cards can offer us, the consumers. When I purchase an MSI Gaming X Trio I get completely different results and values compared to reviews, rendering them meaningless. That's why you come here to Overclock.net and hear directly from the consumers who experience these cards daily, with far more aggressive overclocking and cooling methods.

Customer satisfaction ratings are also less than useless; what 3 people share about their Zotac AMP does not mean the other 3000 card owners have the same experience. The majority of people who purchase these Ti cards don't overclock and barely know anything about computers, so what they have to say is of little use. Again, that's one of the reasons you come to this forum. Maintaining this thread, I've memorized every single card out there and its specs; there are over 50 unique ones. I've also spent about 20 hours benchmarking two Ti cards, one 300 and one 300A, both at air and water temperatures, and tested many different BIOS variants and more.

Do you honestly believe a couple of strangers (likely with very little knowledge about computers) who give the Gaming X Trio *3 stars* tell you more about the card than I can?

Also, just to prove it: the reason it has 3 stars out of 5 is that 3 people had their GPU die on them and gave it 1 star, which is the fault of NVIDIA, not MSI or that specific card. The rest of the ratings were all 5 stars.

Out of these cards, 

(Price in Germany, Excluding Tax, as of January 17th, 2019)

ASUS ROG Strix ($1370)
EVGA FTW3 Ultra ($1515)
Gigabyte AORUS Xtreme ($1340)
MSI Gaming X Trio (*$1250*)
Zotac AMP Extreme ($1390)

The best one is, at Air Cooling;

Best PCB? *MSI Gaming X Trio* (Irrelevant, Shared first place)
Best Cooler? *MSI Gaming X Trio* (Triple Fan, 327mm)
Best Looking? *MSI Gaming X Trio* (From the side)
Best Overclocking? *MSI Gaming X Trio* (406W Power Limit)
Best Price? *MSI Gaming X Trio* ($1250 Pre-Tax)

This is the *best* card you can purchase today, the only one beating it at everything (except price) is the Galax Hall of Fame, but its price is off the charts.

ASUS ROG Strix is the worst one as it has the lowest power limit at just 325W, absolute joke of a card (Founders Edition comes with a 320W power limit).
 


Tricky said:


> From what I've seen the MSI Trio and the Aorus Xtreme are very close when it comes to OC capabilities.


 
Yes, they are if you keep a normal fan curve around 50%, both cards will then reach the same overclock, but as soon as you push the fan speed, or water cool, or attempt any benchmark score the MSI will be ahead.
 


J7SC said:


> I am concerned about the quality of the PCB and components (and warranty) more than anything else, as I will also be using them for work when the time comes.
> 
> In this RTX generation, NVIDIA has really tied down the vendors in terms of design freedoms.


 
You really shouldn't be. The reference PCB is overkill already; the custom-PCB cards were specifically made for sub-zero overclocking, so running your cards ~10% under the NVIDIA safety limit (voltage) is, like, absurdly safe. *The odds of you killing the cards within 5 years are astronomically low.* You're making it far more complicated than it has to be.

NVIDIA has not tied down the vendors in any way; it's just taken time for these new cards such as the Matrix, Kingpin and Lightning to appear. They (not all) bypass the NVIDIA safety limit (voltage), and this is where you get some use out of an improved PCB (such as an increased number of power phases).



laxu said:


> The Palit Gaming Pro OC was actually on sale for 1000 euros (incl. taxes) briefly, just missed it myself. Currently the non-OC is 1098€ so I'm considering buying one and slapping a higher power limit BIOS on it.
> 
> There's still plenty of room for 2080 Ti prices to go lower and to be honest they should. It used to be that the higher end models overclocked better while running cool and quiet but now even the reference based cards are starting to perform similarly so the price premiums seem unwarranted. I guess at this point they are trying to figure out how far they can push pricing before enthusiasts balk at them and wait for sales.


 
Where was that sale? Sounds unbelievable.

This is the difference between Dual, GamingPro and GamingPro OC;

GamingPro OC: Factory OC to 1650 MHz
GamingPro: Factory OC to 1575 MHz
Dual: TU102-300 GPU, No Backplate

You could simply flash the GamingPro with the GamingPro OC BIOS to turn the card into the OC variant.

A TU102-300A for €1100 is super good; the second-lowest card is €80 more. It will very likely go back up towards €1200 again soon and it might be weeks until it drops back down; no one can know. If you've been waiting for a card to drop in price, this is it!

Prices in general are very unlikely to drop; they've stayed more or less the same for 3 months now. The Non-A cards are almost correctly priced at close to the $1000 MSRP, while the A cards are at an average of $1200, just like the Founders Edition. It's just luck that an A card drops below $1100; the second cheapest is $1130 right now, and the Palit is at just $1050, which, as previously said in another comment, is a record low (excluding flash sales or similar). After replacing the BIOS it becomes faster than 9 out of 10 cards on the market that cost far more (if you do not flash them, that is).


----------



## laxu

zhrooms said:


> Where was that sale? Sounds unbelievable.
> 
> This is the difference between Dual, GamingPro and GamingPro OC;
> 
> GamingPro OC: Factory OC to 1650 MHz
> GamingPro: Factory OC to 1575 MHz
> Dual: TU102-300 GPU, No Backplate
> 
> You could simply flash the GamingPro with the GamingPro OC BIOS to turn the card into the OC variant.
> 
> A TU102-300A for €1100 is super good; the second-lowest card is €80 more. It will very likely go back up towards €1200 again soon and it might be weeks until it drops back down; no one can know. If you've been waiting for a card to drop in price, this is it!
> 
> Prices in general are very unlikely to drop; they've stayed more or less the same for 3 months now. The Non-A cards are almost correctly priced at close to the $1000 MSRP, while the A cards are at an average of $1200, just like the Founders Edition. It's just luck that an A card drops below $1100; the second cheapest is $1130 right now, and the Palit is at just $1050, which, as previously said in another comment, is a record low (excluding flash sales or similar). After replacing the BIOS it becomes faster than 9 out of 10 cards on the market that cost far more (if you do not flash them, that is).


Sale was at Mindfactory which seems to have the lowest European prices for these GPUs constantly.

I'm still on the fence about the Gaming Pro because at least one person on a Finnish message board says he received a non-A chip Gaming Pro. Could be luck of the draw or some earlier batch is non-A. The Palit Dual is definitely non-A and at Mindfactory it is priced the same as the Gaming Pro right now which seems a bit suspicious.


----------



## zhrooms

laxu said:


> Sale was at Mindfactory which seems to have the lowest European prices for these GPUs constantly.
> 
> I'm still on the fence about the Gaming Pro because at least one person on a Finnish message board says he received a non-A chip Gaming Pro. Could be luck of the draw or some earlier batch is non-A. The Palit Dual is definitely non-A and at Mindfactory it is priced the same as the Gaming Pro right now which seems a bit suspicious.


 
Got it. Mindfactory is one of the best stores; I'm still surprised they had a flash sale on a Ti card. Or might it have been a returned card that was re-sold?

Do you have a link to the source of that claim? Palit screwed up if anything; the GamingPro has always been factory overclocked, hence it requires an *A* chip. If you happened to get a card with an advertised factory overclock and it turned out not to have one, all you'd have to do is ask for a replacement and you'd get it, since it's the *wrong product*. I think it might be a one-time thing, possibly a mixup in RMA/returns, unless the claimant has picture proof and such, like whether the card had a backplate (and serial numbers, product number).

The fact that they're priced the same is just *chance*. Non-A cards are already overpriced for the 2080 Ti: the RTX 2080 is $80 under MSRP and the RTX 2070 is $40 under MSRP, while the 2080 Ti is $50 above.


----------



## arrow0309

arrow0309 said:


> Hi, I'm running the game at 3440x1440 on my gsync X34P 120hz ~95-100 fps, high settings.
> Just crashed after 20':
> 
> And below the detailed msi ab monitoring:





arrow0309 said:


> @zhrooms
> Also (like I said), 380W bios, msi ab voltage slider 0, PL max, gpu +90 and memory +500.
> Gonna try to raise the voltage slider to its max (I don't like the curve) and test again.


So, it seems stable now after 1.5 h (voltage max, same clocks).
Good.


----------



## J7SC

zhrooms said:


> 27 out of 50 cards ready to ship in Germany (and surrounding area), so yeah there's plenty to choose from here.
> 
> 31 out of 56 cards ready to ship in Sweden.
> 
> AORUS Xtreme is one of the best cards indeed, it's in the Top 10, but just saying there are better cards that cost less.
> 
> (cut)
> ...
> 
> You really shouldn't be, the reference PCB is overkill already, the custom PCB cards were specifically made for sub zero overclocking, so you running your cards ~10% under the NVIDIA safety limit (voltage) is like, absurdly safe. Odds of you killing the cards within 5 years is astronomically low. You're making it far more complicated than it has to be.
> 
> NVIDIA has not tied down the vendors in any way, it's just taken time for these new cards such as the Matrix, Kingpin and Lightning to appear, they (not all) are bypassing the NVIDIA safety limit (voltage), this is where you get some use out of your improved PCB (such as increased amount of Power Phases).


Here, very few Aorus Xtremes (and zero Waterforce) are available at the time of writing at any of the major retailers...

...also, I don't mean to drag this on as it is becoming pointless, sadly, but out of the 30-odd then-top-end GPUs I have here now (680s up to 2080 Tis), 8 are EVGA, 3 of which developed a PCB-related problem that other users also had (they otherwise still function). 6 are Gigabyte, with no issues. Ditto for the rest, including 5 Asus.

I also bought an MSI Lightning a few years back. The first one didn't work right out of the box and was returned to the shop the next day, which replaced it; that 2nd one black-screened within 18 months, so _"Odds of you killing the cards within 5 years is astronomically low"_ is relative, certainly to me. I inherited my first computer as a young lad, a 486/66 from the early 90s; in the intervening 26 years, I came to work in the computer field apart from just playing around with them (including sub-zero). IMO, there are select quality differences in PCBs and what's on them beyond the GPU chip from AMD or NVIDIA, ones I learned to look for, especially because we also use a lot of later-gen equipment for critical work apps, including rendering and such. Any downtime by my staff is more than just an annoyance; it seriously affects productivity and the bottom line. But that's just my take...


----------



## cg4200

Hey, I am just checking in: has anyone figured out how to mod NVFlash to get a BIOS onto a non-A chip yet? Or mod and trick the device ID?
I am better with a hammer ..lol Thanks


----------



## zhrooms

cg4200 said:


> hey I am just checking in has anyone figured out how to mod nv flash to get a bios on non a-chip yet?? Or mod and trick the device id ??
> I am better with a hammer ..lol Thanks


 
It will *never* work, sadly; the chip itself is different, so you will not be able to bypass it. It's like trying to flash an RTX 2080 Ti BIOS onto an RTX 2070 card, it just won't work.

Return your card if you recently purchased it (if it is within the return period); replacing the card is the only way if you ever want to really overclock.


----------



## Jpmboy

zhrooms said:


> It will *never* work sadly, the chip itself is different so you will not be able to bypass it. It's like trying to flash a RTX 2080 Ti BIOS onto a RTX 2070 card, it just won't work.
> 
> Return your card if you recently purchased it (if it is within the return period), only way is to replace the card if you ever want to really overclock.


Hey bud. I just gotta say, this is the best GPU owners club thread I've seen in a long time. And the best owner's OP ever!
Where is the damn rep button!!


----------



## Tricky

zhrooms said:


> Yes, they are if you keep a normal fan curve around 50%, both cards will then reach the same overclock, but as soon as you push the fan speed, or water cool, or attempt any benchmark score the MSI will be ahead.


How many here on OCN actually have the Aorus Xtreme for you to say this? At any rate, based on what is available here now, I didn't have much of a choice. If I can get 2000 MHz+ on the core I'll be happy tbh.


----------



## zhrooms

Jpmboy said:


> Best owners club I've seen in a long time.


 
So you're saying you've seen better ones?


----------



## Jpmboy

wouldn't want to dis every other thread author.


----------



## dantoddd

Hi guys,

so I put together a new system, an 8700K + 2080 Ti Gaming X Trio. I'm thinking of buying this monitor:

https://www.asus.com/Monitors/ROG-SWIFT-PG27UQ/

It is mad expensive, but the reviews are very generous. My question is: do you guys think that with my specs I'll get the best out of this monitor, especially HDR and G-Sync at 4K? Or should I go for a different, cheaper 4K monitor? I'm set on 4K; I play mostly cinematic action RPG-type games and RTS games.


----------



## Glerox

dantoddd said:


> Hi guys,
> 
> so i put together a new system, 8700K + 2080 Ti gaming Trio X. I'm thinking of buying this monitor
> 
> https://www.asus.com/Monitors/ROG-SWIFT-PG27UQ/
> 
> It is mad expensive but the reviews are very generous. My question is do you guys think with my specs i get the best out of this monitor especially HDR and GSYNC at 4k. Or should i go for a different cheaper 4k monitor. I'm set on 4k, i play mostly cinematic action RPG type games and RTS games


I own this monitor and it's definitely the best gaming monitor available, unless you do competitive gaming.

If you want HDR, it's your only choice with a proper DisplayHDR 1000 standard...

If you don't need HDR, then I would go with the cheaper XV273K. It's FreeSync, but NVIDIA GPUs may now support it.

I think we're at least 2 years away from something better than the PG27UQ: miniLED gaming monitors (which is quite a long gap in the tech world).

Hope it helps you.


----------



## cx-ray

laxu said:


> Sale was at Mindfactory which seems to have the lowest European prices for these GPUs constantly.


The problem is they don't ship to consumers outside of Germany. Only a business account with a registered tax ID will qualify for international shipping.

BTW, Caseking traditionally holds a huge flash sale on New Year's Eve (Dec. 31). Got lucky this time, -23% on a product I wanted. Might want to keep that in mind.


----------



## kot0005

Tricky said:


> How many here on OCN actually have the Aorus Xtreme for you to say this? At any rate, based on what is available here now I didn't have much of a choice. If I can get 2000MHz+ on core i'll be happy tbh.


I have had 3 GPUs, including the Aorus. They all OC the same... except that Samsung GDDR6 can easily do over +1200.


----------



## Tricky

kot0005 said:


> I have Had 3 GPU's including the Aorus. They all OC the same... except for Samsung GDDR6 can easily do over 1200.


Hol-up. You've had 3 different 2080 Tis? What were you able to achieve with the Aorus Xtreme?


----------



## J7SC

cx-ray said:


> The problem is they don't ship to consumers outside of Germany. Only a business account with registered tax-ID will qualify for international shipping.
> 
> BTW, Caseking on Dec. 31 traditionally holds huge flash sale on New Years eve. Got lucky this time -23% on a product I wanted  Might want to keep that in mind.



You're right about the flash deals at Caseking. We are HQ'ed in the Pacific Northwest but have a branch in the EU near Berlin, and Caseking.de is something we check weekly for hardware deals for our EU offices, along with Micro Center, Newegg.com/.ca and Memory Express (the latter since NCIX went the way of the Dodo bird...) for over here. Between all of them, you usually get a pretty good picture of what is available for actual delivery, prices, and customer ratings/complaints etc. about a product.

MindFactory is interesting as well, but for 2080 Tis just today, for example, Caseking.de has 14 brands/models available for immediate delivery ("lagernd") while Mindfactory has just 6. Caseking incidentally also owns Overclockers UK and quite a few other outlet brands - and then there's der8auer doing his thing @ Caseking.


----------



## laxu

zhrooms said:


> Do you have a link to the source of that claim? Palit screwed up if anything, the GamingPro has always been factory overclocked, hence it requires an *A* chip. If you happened to get a card with an advertised factory overclock and it turned out not to be, all you'd have to do is ask for a replacement and you'd get it, since it's the *wrong product*. I think it might be just a one time thing, could have been a mixup in RMA/Returns, unless the accuser has picture proof and such, like the card had a backplate (and serial numbers, product number).
> 
> The fact that they're priced the same is just *chance*, Non-A cards are already overpriced for the 2080 Ti, the RTX 2080 is $80 under MSRP and the RTX 2070 is $40 under MSRP, meanwhile the 2080 Ti is $50 above.


Here; it's in Finnish, but Google should be able to translate it. He mentions it in an earlier post too, and he also bought it under the assumption that a higher boost than reference = A chip. It's hard to verify, too, as any 2080 Ti will boost to the stated slight overclock and beyond anyway without overclocking.


----------



## Pauliesss

So I bought the GAINWARD 2080 Ti Phoenix GS and I am really satisfied with it. The only downside is that the fans keep spinning even when it is not really needed, but I can live with that.

I only tried some "minor" OC using the OC Scanner, but I will try a manual OC later today or tomorrow.

Thanks to zhrooms for answering my pre-sales question. 

Edit: btw, the card already comes with the latest BIOS for this GPU, released back in November.

GAINWARD VBIOS Release Note (2018/11/5)
(for GAINWARD GeForce RTX 2080 Ti Phoenix GS)
1. Update version to .17
2. Change “MAXIMUM TARGET POWER”


----------



## LayZ_Pz

Anyone got the ROG Matrix RTX 2080 Ti bios dumped somewhere ? Would be interesting to see if they raised the PL and so I could get it on my Strix.


----------



## BudgieSmuggler

dantoddd said:


> Hi guys,
> 
> so i put together a new system, 8700K + 2080 Ti gaming Trio X. I'm thinking of buying this monitor
> 
> https://www.asus.com/Monitors/ROG-SWIFT-PG27UQ/
> 
> It is mad expensive but the reviews are very generous. My question is do you guys think with my specs i get the best out of this monitor especially HDR and GSYNC at 4k. Or should i go for a different cheaper 4k monitor. I'm set on 4k, i play mostly cinematic action RPG type games and RTS games


Truth is you won't hit 144 Hz at 4K even with a 2080 Ti. In some older games, sure, but not in latest-gen AAA titles. You will have to dial down a few things or be happy with (this is a guess) between 70 and 120 fps, depending on the game. It has G-Sync, so gaming will still be smooth and a big improvement on current 4K 60 Hz. The question is how much that is really worth to you; they are expensive. I was browsing the same thing last night, but personally I'm going to enjoy 1440p at 144 Hz (getting that in nearly every game with the highest possible graphics) and my 60 Hz 4K Predator for a couple of years (maybe 12 months, who knows when prices will fall). In that time there will be the next iteration of the 2080 Ti, or whatever is best, that will fully run 4K 144 Hz. Hope this helps. P.S. If you have money to burn, go for it. Sure, it will be beautiful HDR 4K lol


----------



## boli

dantoddd said:


> so i put together a new system, 8700K + 2080 Ti gaming Trio X. I'm thinking of buying this monitor
> 
> https://www.asus.com/Monitors/ROG-SWIFT-PG27UQ/
> 
> It is mad expensive but the reviews are very generous. My question is do you guys think with my specs i get the best out of this monitor especially HDR and GSYNC at 4k. Or should i go for a different cheaper 4k monitor. I'm set on 4k, i play mostly cinematic action RPG type games and RTS games


This display offers a superb visual experience. In my opinion its main strength is HDR, so you may want to check if your favorite games support it. Like others said, forget about playing triple-A titles at 144 Hz with a single 2080 Ti; you can find my fps numbers in previous posts in this thread if interested.

You only get full chroma at 98 Hz with HDR and 120 Hz without HDR, so that's what I'm using for now, because the games I play don't run faster than that anyway. From what I read, reduced chroma is no big deal for games and mostly noticeable for text, so just keep the desktop without HDR at 120 Hz and it's all good.

This display does have one significant downside though, which is the built-in fan. It's very audible and I dislike it, but I usually play with a headset on, which makes it bearable – I don't do anything other than playing games on the PC with that display; anything else I get done on another (5K) machine. The visual quality does make up for the fan (that's what I keep telling myself anyway), but if a similar display without a fan comes along, it probably won't be long until I ditch this one. 

I would however suggest you take a look at the Acer Predator X27 as well; it uses the same panel, has less (IMO unnecessary) RGB fluff, and its built-in fan is bigger, which could/should make it quieter.


----------



## cg4200

zhrooms said:


> It will *never* work sadly, the chip itself is different so you will not be able to bypass it. It's like trying to flash a RTX 2080 Ti BIOS onto a RTX 2070 card, it just won't work.
> 
> Return your card if you recently purchased it (if it is within the return period), only way is to replace the card if you ever want to really overclock.


Thanks bud, that really stinks like sewage... much past the return date.
I paid $999.00 and have a way around the BIOS to hold 2160 steady on core through everything, Fire Strike, Port Royal...
But I would rather flash the Galax 380W.
I could eBay it; I see Newegg now lists it for $1,189, crazy, the EVGA CEO is looking for leather jackets as well..lol
But I am afraid I will not get lucky with a new card and OC over 2100 again.


----------



## Pepillo

I did not win the lottery; I got a bad chip with my Aorus Xtreme. But with the Bykski full-cover block I am very happy, and I could raise the clocks somewhat with good temperatures and stability:










http://www.3dmark.com/pr/22294


----------



## Tricky

Pepillo said:


> I did not win the lottery, I touched a bad chip with my Aorus Xtreme. But with the fullcover Byskin I am very happy and I could raise something the clocks with good temperatures and stability:


Almost 2100 MHz is not bad at all! I'm jealous, still waiting for mine to come in.


----------



## J7SC

Tricky said:


> almost 2100MHz is not bad at all! I'm jelous, still waiting for mine to come in.



Nice...Are you going to water-cool them and/or are they factory-blocked ?


----------



## Martin778

My AORUS Xtreme 2080 Ti with Samsung memory didn't even last 3 weeks. Ridiculous...
Random crash in Forza 4; after a reboot it came back with 'starry night' artifacts, and it's losing output all the time.






(someone advised to try underclocking it - no difference whatsoever)


----------



## Scotty99

Quick question about the EVGA Precision X1 OC Scanner: does this tool also set the fan curve more aggressively, or would I have to do that manually? 

Thanks.


----------



## BudgieSmuggler

Scotty99 said:


> Quick question about EVGA precision x1 OC scanner. Does this tool also set the fan curve more aggressively or would i have to do that manually?
> 
> Thanks.


What version of the 2080 Ti do you have? 

Personally I'd use MSI Afterburner, as I find it all-around more stable than anything else I've used. Whichever program you use, I found the OC Scanner was a bit modest and I was able to get higher manually. You can also really tweak it to your preference with a locked voltage: in Afterburner, click on the point (small cube) where voltage and clock speed cross, hold Ctrl and hit L. I sometimes undervolt mine in games where a lower clock is better (usually down to instability of that game with an overclock). So, say, click on 0.931 V at 2000 MHz and hit Ctrl+L; it will lock that clock and voltage in. 

You'd have to play around to make sure it's stable at that clock speed and voltage, and only testing will tell you, as every card is different. One will do 2000 MHz at 0.931 V, another may need 0.950 or 0.975. Silicon lottery. On air you should be able to do 2050-2100 MHz, at a guess, depending on your card. Mine sits happily at 2100 MHz, a Zotac AMP version on air cooling. At least the OC Scanner will give you a stable base to start with if you choose to aim higher. Lots of people on here will help, so ask away or browse the thread; lots of good info within these pages.
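One way to picture the Ctrl+L lock described above is as clamping the voltage/frequency curve at the chosen point. A toy sketch (the curve points here are hypothetical, not real card data; Afterburner itself does this internally):

```python
def lock_curve(curve, v_lock):
    """Clamp a voltage->frequency curve at v_lock: points at or above
    the locked voltage are flattened to the locked frequency, which is
    roughly what locking a point does."""
    f_lock = curve[v_lock]
    return {v: (f if v < v_lock else f_lock) for v, f in curve.items()}

# Hypothetical V/F points (volts -> MHz)
curve = {0.900: 1950, 0.931: 2000, 0.975: 2050, 1.043: 2100}
locked = lock_curve(curve, 0.931)
print(locked[1.043])  # 2000 - the card never clocks past the locked point
```

The payoff of locking at 0.931 V instead of letting the card boost to 1.043 V is lower power and heat for a small clock sacrifice, which is exactly the undervolting trade-off described above.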


----------



## Scotty99

BudgieSmuggler said:


> What version of the 2080ti do you have?
> 
> 
> 
> Personally i'd use Msi Afterburner as find it all around more stable than anything i've used. Whichever program you use... I found the OC scanner was a bit modest and i was able to get higher manually. Also you can really tweak it to your preference with a locked voltage. In afterburner click on the point(small cube)where voltage and clock speed cross hold ctrl and hit L. I undervolt mine sometimes on games where a lower clock is better (usually down to instability of that game with an overlock) So say click on .931 volts at 2000mhz and hit ctrl L. It will lock that clock and voltage in.
> 
> 
> 
> You'd gave to play arooound to make sure it's stable with that clock speed and voltage and only testing will tell you as every card is different. One will do 2000mhz at .931 another may need .950 or .975. Silicon lottery. On air you should be able to do 2050mhz -2100mhz. at a guess dependant on your card. Minsits happily at 2100mhz, Zotac Amp version on air cooling. At least the OC scanner will give you a stable base for you to start with if you choose to aim higher. Lot's on here will help so ask away o browse the thread. Lots of good info within the pages here.


I actually have a Founders 2060 lol, I just don't know where else to post; there are no active threads on the 2060 yet.

I prefer a set-it-and-forget-it approach, that's why I was interested in the OC Scanner function, but I don't have much thermal headroom at stock, so I was curious whether the OC Scanner bumps the fans up or if that needs to be done manually.


----------



## nyk20z3

LayZ_Pz said:


> Anyone got the ROG Matrix RTX 2080 Ti bios dumped somewhere ? Would be interesting to see if they raised the PL and so I could get it on my Strix.



The card hasn't even been released yet.


----------



## cstkl1

Hmmm,

this is my advice for OCing with MSI AB:

1. Cold boot into Windows when your GPU is at its lowest temp.
2. In AB, set the max power limit, then just test at the default 1.05 V for the max clock with Unigine Heaven (Extreme preset, custom, whatever resolution) for an hour. Work in increments of 15 MHz; start with +105/+120. Ctrl+F to see the voltage curve before you run the test.
3. Once you find the best stable clock, increase the voltage to max and, in the voltage curve (1.075-1.093 V), increase the clock by 15 MHz for each step. Test again with Heaven for 1 hour.

Fastest, easiest way to clock with AB instead of fine-tuning. 

Save the clock, shut down, 
power on again once everything has cooled down, and test Heaven again. 

Memory clock is pretty straightforward: test stable in Heaven. I normally run 24/7 at 25-50 MHz less than what I deem stable in Heaven or Time Spy GPU test 2.
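The stepping method above can be sketched in a few lines. This is only an illustration of the search loop: `is_stable` is a hypothetical stand-in for an hour of Heaven, while the 15 MHz step, the +105/+120 starting point, and the 25-50 MHz 24/7 margin are from the post:

```python
STEP_MHZ = 15

def find_max_offset(is_stable, start=105, limit=300):
    """Raise the core-clock offset in 15 MHz steps until a step fails
    its stress test, then return the last stable offset."""
    offset = start
    while offset + STEP_MHZ <= limit and is_stable(offset + STEP_MHZ):
        offset += STEP_MHZ
    return offset

# Stand-in stress test: pretend this card is Heaven-stable up to +165.
best = find_max_offset(lambda off: off <= 165)
daily = best - 25  # back off 25-50 MHz for the 24/7 clock
print(best, daily)  # 165 140
```

The same loop is then repeated per voltage point (1.075-1.093 V) once the slider is maxed, exactly as step 3 describes.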


----------



## J7SC

Martin778 said:


> My AORUS Xtreme 2080Ti with Samsung memory didn't even last 3 weeks. Ridiculous...
> Random crash in Forza 4, reboot and came back to 'starry night' artifacts and it's losing output all the time.
> 
> (someone advised to try underclocking it - no difference whatsoever)



That looks like an RMA case alright  ...Might be a long shot, but *some 2080 Tis* like a fixed (locked) voltage via MSI AB, though RMA at this stage is probably the better option. 

Is this an air-cooled or factory water-cooled version? Also, as this is the first Aorus Xtreme card I've run across with Samsung memory, I wonder about the 3rd and 4th digits from the right of the serial number on the box (not asking for the last 2 digits, which are private). Anyway, good luck with the replacement.


----------



## zhrooms

laxu said:


> Here, it's in Finnish though but Google should be able to translate. He mentions it in an earlier post too and he also bought it under the assumption that higher boost than reference = A chip. It's hard to verify too as any 2080 Ti will boost to the stated slight overclock and beyond anyway without overclocking.


 
Yeah, that guy is a lunatic based on the Google translation.








_"Now that the card's price is close to one and a half elf, I can shave as standard and stay at least a week or so before I hit the waters."_

But in all seriousness, this is his story;

     _Orders the Palit GamingPro OC with no delivery date; a month passes and he sees that the store got a delivery of GamingPro cards, so he asks the store to change his order, and they accept (or he simply places a new order), and he gets his GamingPro the same day, November 14.

     On December 18, after owning the card for a month, he attempts to flash the BIOS and it fails; he says that GPU-Z shows 1E04, which indicates it is the non-A chip. He searched for whether this had happened to anyone else, and claims it happened to a user here on Overclock.net. (This is false, at least not in this thread, as I have read every comment and the only ones who were surprised by their Palit not being *A* were *Dual* owners.)_

We have no proof that what he's saying is true; he hasn't shared any pictures, screenshots, or an uploaded BIOS.

There's nothing further to discuss here, the GamingPro uses an A chip, any card that carries a factory overclock is automatically binned, as it is *forbidden* on the Non-A cards. There's even one verified and one unverified BIOS uploaded on TechPowerUp for the GamingPro, both show the correct 1575 MHz factory overclock as well as the 1E07 identification, also 250/280W power limit (250W is specified on their website).

He could be completely clueless and actually has the Dual card, or the store mixed the cards up during RMA/Returns, or Palit made a big oopsie at the card assembly (screwed up batch).

If you ordered this card and got the wrong one you'd simply get it replaced, as it is the wrong product, no risk of getting screwed in the end, just an inconvenience.
@laxu I'm assuming you're from Finland? Could you perhaps shoot him a PM and tell him to join the (OCN RTX 2080/Ti Owner's Club) Discord?
 
 
// *Click here to join the discussion on Discord* or join directly through the *Discord app* with the code *kkuFR3d*


----------



## dantoddd

Glerox said:


> I own this monitor and it's definitely the best gaming monitor available, unless you do competitive gaming.
> 
> If you want HDR, it's your only choice with proper display HDR 1000 standard...
> 
> If you don't need HDR than I would go with the cheaper XV273K. It's freesync but now it's maybe supported by nvidia gpus.
> 
> I think we're at least 2 years away from something better than the PG27UQ : miniLED gaming monitors (which is quite a long gap in the tech world).
> 
> Hope it helps you.


Just bought it. I'll tell you how I feel about the 1700 USD.


----------



## RaGran

Hello,

I have successfully flashed A-chip BIOSes (KFA/Galax 380W, EVGA FTW and Asus Dual) to a non-A board (MSI Ventus Ti) with an external programmer (RevelProg-IS) and a test clip (Pomona SOIC8). However, with an A-chip BIOS the card does not boot or output any picture, and instead runs the rear fan at 100%. 

Looks like the A and non-A chips do have some sort of difference in the silicon itself, not just the BIOS, and you need a modded non-A BIOS (which is not available) or a shunt mod to bypass the power limit. Is there any other conclusion to be made?


----------



## Krzych04650

Well, now I've got a proper TimeSpy score:

https://www.3dmark.com/3dm/32562246?

16740 graphics score. I couldn't get past 16300 before, even on more aggressive settings than these. Seems like an issue with Windows. I just did a clean install of 1809 and I can see a performance increase across the board. I installed it without a web connection, updated it manually, then disabled updates everywhere possible, removed all the Meltdown/Spectre mitigations etc., but I did exactly the same with 1709 before as well (I had massive stuttering issues on 1803, so I stayed with 1709). Just a few days ago, 1709 with the newest drivers was still around 16250 graphics score max. I wonder what the issue was.


----------



## pewpewlazer

Got my EVGA Hydro Flopper block installed today. Worst mounting system I've ever seen. Seriously, *** EVGA? The 4 GPU cooler mounting holes are used to bolt on some useless "GPU bracket" that literally does nothing. Nothing bolts into it. It's just like a picture frame around the GPU. The entire block mounts by screwing into the backplate. I'm amazed it even works.

I had bought a GTX 280 rad I planned on running in push/pull, but that didn't exactly fit. I ended up digging up a 240 GTS from my old setup and using that. Probably could have fit with 280 in push only, but it was too late to care at that point. Could probably fit everything the way I want if I had a normal 420 rad up top instead of an x-flow.

Ran OC scanner and a few runs of superposition 4k. Floated around the 2000-2050mhz range. ~52*C max. Outlet temp from the 240 GTS rad to my card measured ~40*C with an IR thermometer. I bought one of those fancy pantsy aquaero 6 LT fan controllers so I could control fan speed based off water temp instead of CPU temp, but I don't even know where I'm going to fit it in my case, and I'm too drunk and frustrated to care right now. Maybe tomorrow...


----------



## toncij

kot0005 said:


> I have Had 3 GPU's including the Aorus. They all OC the same... except for Samsung GDDR6 can easily do over 1200.


I wasn't aware any AIB 2080 Ti shipped with Samsung memory. You had an FE with Samsung?


----------



## kot0005

toncij said:


> I wasn't aware any AIB 2080Ti shipped with Samsung. You had FE with Sam?


It's on my MSI SeaHawk EK, based on the Gaming X Trio PCB.


----------



## kot0005

FTW3 here with international shipping. It's from the official EVGA store:

https://www.ebay.com/itm/264062771130


----------



## Cirrus550

zhrooms said:


> Yeah that guy is a lunatic based on the google translation.
> 
> 
> 
> 
> 
> 
> 
> 
> _"Now that the card's price is close to one and a half elf, I can shave as standard and stay at least a week or so before I hit the waters."_
> 
> But in all seriousness, this is his story;
> 
> _Orders the Palit GamingPro OC, with no delivery date, a month passes and he sees that the store got a delivery of the GamingPro card, and he asks the store to change his order, and they accept (or simply places a new order) and he gets his GamingPro the same day, November 14.
> 
> On December 18, after owning the card for a month he attempts to flash the BIOS and it fails, he says that GPU-Z shows 1E04 which indicates it is the Non-A chip, he searched if this happened to anyone else, and claim it happened to a user here on Overclock.net. (This is false, at least not in this thread, as I have read every comment and the only ones who were surprised by their Palit not being *A* were *Dual* owners)._
> 
> We have no proof of what he's saying is true, not shared any pictures, screenshots, or uploaded BIOS.
> 
> There's nothing further to discuss here, the GamingPro uses an A chip, any card that carries a factory overclock is automatically binned, as it is *forbidden* on the Non-A cards. There's even one verified and one unverified BIOS uploaded on TechPowerUp for the GamingPro, both show the correct 1575 MHz factory overclock as well as the 1E07 identification, also 250/280W power limit (250W is specified on their website).
> 
> He could be completely clueless and actually has the Dual card, or the store mixed the cards up during RMA/Returns, or Palit made a big oopsie at the card assembly (screwed up batch).
> 
> If you ordered this card and got the wrong one you'd simply get it replaced, as it is the wrong product, no risk of getting screwed in the end, just an inconvenience.
> 
> @laxu I'm assuming you're from finland? Could you perhaps shoot him a PM and tell him to join the (OCN RTX 2080/Ti Owner's Club) Discord?
> 
> 
> // *Click here to join the discussion on Discord* or join directly through the *Discord app* with the code *kkuFR3d*


Not the card owner, but some screenshots were shared more privately.


----------



## TurricanM3

Looks like I got a pretty good chip. Superposition @ 2250:






(limited because of 330w bios)

Got it for 999,- euros. ^^


----------



## Tricky

J7SC said:


> Nice...Are you going to water-cool them and/or are they factory-blocked ?


I'm going to leave the stock cooler on it; if I can get 2050-2100 MHz on the core, that would make me happy. Also, I don't keep GPUs longer than 2-3 years, so there's no point in spending more $ to put 'em under water; I always sell to get the next flagship card.



Martin778 said:


> My AORUS Xtreme 2080Ti with Samsung memory didn't even last 3 weeks. Ridiculous...
> Random crash in Forza 4, reboot and came back to 'starry night' artifacts and it's losing output all the time.
> 
> (someone advised to try underclocking it - no difference whatsoever)


That really sucks, sorry to hear. Gigabyte should be very good with the RMA process. Hope the batch I'm getting mine from is good..


----------



## kx11

TurricanM3 said:


> Looks like a got a pretty good chip. Superposition @2250:
> 
> https://www.youtube.com/watch?v=295QayS3GE4&feature=youtu.be
> 
> (limited because of 330w bios)
> 
> Got it for 999,- euros. ^^





How did you capture this video? An external device?


----------



## zhrooms

RaGran said:


> I have successfully flashed A-chip bioses (KFA/Galax 380W, EVGA FTW and Asus Dual) to a non-a board (MSI Ventus ti) with an external programmer (Revelprog-IS) and a test clip (Pomona SOIC8). However, the card does not boot or output any picture with an A chip bios and instead runs the rear fan at 100%.
> 
> Looks like the A and non-A chips do have some sort of difference in the chip itself and not just the bios and you need a modded non-a bios (which are not available) or a shunt mod to bypass the power limit.
> 
> Is there any other conclusion to be made?


 
Not really no, it's obvious NVIDIA split the GPUs into two (300/300A) for monetary reasons alone, there are dozens of people benchmarking Non-A chips at over 2150MHz (1.093V) on water after simply shunt modding them, there is no evidence that the TU102-300 chip is any less "binned".

I'm assuming they did it so they could sell the cards at a higher price, sticking the 300 chip at $999 MSRP and the 300A (Founders Edition) with unlocked overclocking (power limit) at $1199.

Similar to how Intel offers non-K and K models; it's been proven time after time that when overclocking a non-K model using BCLK, they can reach 5 GHz no problem.

Allowing the consumers to freely unlock (flash) the cards to unlock a higher power limit (overclock) would discourage anyone from purchasing the higher priced cards, therefore they blocked them at a hardware level, renaming the chip at the same time.

NVIDIA (and its partners/vendors) definitely got screwed out of a lot of money over the lifespan of the GTX 1080 Ti because the power limit was removed very early on by ASUS, who (intentionally or unintentionally) released a BIOS that worked on basically every card and completely disabled the power limit (for LN2 purposes).

For example, I purchased my Founders Edition (Reference PCB) GTX 1080 Ti a few weeks after release for a very good price, then put a water block on it and disabled the power limit with a 30-second BIOS flash. I could then overclock it to ~2150 MHz (far above 400W), the same speed as any premium Custom PCB card that cost $200-300 more. It was all up to the silicon lottery at that point; some custom PCB cards overclocked higher than mine, some the same, some didn't.

This time around you're forced to pay a higher price for a higher overclock (power limit). Water cooling a Non-A card is the biggest blunder you can make; it costs you about 10% performance, the difference between 60 and 66 FPS in 4K.
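As an aside, the measurement math behind the shunt mod is worth spelling out. A minimal sketch, assuming a hypothetical 5 mΩ sense resistor and example wattages (not taken from any specific card):

```python
# Why a shunt mod raises the effective power limit -- a minimal sketch.
# Assumed values: a 5 mOhm current-sense resistor (actual board values differ).
# The controller measures the voltage across the shunt and computes current as
# I = V / R_expected. Stacking an equal resistor in parallel halves the real
# resistance, so the sensed voltage -- and the reported power -- halves too.

def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

R_STOCK = 0.005                        # 5 mOhm (assumption)
r_modded = parallel(R_STOCK, R_STOCK)  # 2.5 mOhm after the mod

# The controller still divides by the stock value, so reported power scales
# with the ratio of real to expected resistance:
scale = r_modded / R_STOCK             # 0.5

true_draw = 520.0                      # watts actually drawn (example figure)
reported = true_draw * scale           # ~260 W -- sails under a 330 W limit
print(r_modded, scale, reported)
```

Which is also why shunt-modded cards report nonsense power draw in monitoring tools.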
 


Cirrus550 said:


> Not the card owner, but some screenshots were shared more privately.


 
That doesn't tell me anything, just shows 1E04 and 1545 MHz boost which indicates it is a Non-A chip, that exact BIOS is also uploaded on TechPowerUp, and listed under the name *Dual*, as it should.

Where is the actual proof that it is the GamingPro (NE6208TT20LC-150A)? We need pictures of the *box* and the *card* (with visible *product number* and *serial*). The cards are visually identical with one exception, the Dual does not feature a backplate (and the text on the box says GamingPro instead of Dual). _It's not hard to mix the cards up._



TurricanM3 said:


> Looks like a got a pretty good chip. Superposition 2250 MHz.
> 
> (limited because of 330w bios)
> 
> Got it for 999,- euros. ^^


 
Misleading, you're barely pushing the card, show us a video of it running the 8K Optimized preset instead. Should reach at most 2050 MHz with a 330W Power Limit.


----------



## Pepillo

Phanteks Glacier for the Aorus Xtreme and Asus Strix:

http://www.phanteks.com/Glacier-GPU.html


----------



## kx11

I have uploaded my (updated) VBIOS for the GALAX 2080 Ti HOF in case anyone wants to give it a run




https://www.techpowerup.com/vgabios/207462/207462


----------



## pewpewlazer

zhrooms said:


> Misleading, you're barely pushing the card, show us a video of it running the 8K Optimized preset instead. Should reach at most 2050 MHz with a 330W Power Limit.


You're pretty spot on there... Just tried Superposition 8K and my card sat around 2040 MHz @ 0.962V (+/-15 MHz), bouncing off the power limit (338W BIOS).

In actual 4K gaming my card seems to stay around the 1.000V range, which gives me ~2070 MHz with my current V-F curve.
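For what it's worth, ~2040 MHz at 0.962 V under a 338W cap lines up with the usual back-of-envelope model where dynamic power goes as f·V². A quick sketch (the model ignores static power, so the numbers are ballpark only):

```python
# Rough check that a power-limited clock behaves as expected, assuming dynamic
# power scales as P ~ k * f * V^2 (a common CMOS approximation; static power
# is ignored, so treat the output as ballpark only).

def max_clock(f_ref, v_ref, p_ref, v_new, p_budget):
    """Clock (MHz) a power budget allows at a new voltage, from one data point."""
    k = p_ref / (f_ref * v_ref ** 2)
    return p_budget / (k * v_new ** 2)

# Data point from the post: ~2040 MHz at 0.962 V while pinned at a 338 W limit.
# The same budget at 1.000 V forces a lower clock:
print(round(max_clock(2040, 0.962, 338, 1.000, 338)))  # -> 1888
```

So a heavier load forcing the card up the voltage curve at a fixed power cap costs clocks, which matches the 8K-vs-4K behaviour described.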


----------



## Tricky

Anyone know how easy/friendly Zotac's RMA process is? Starting to get impatient on the Aorus X (2 more weeks)..


----------



## J7SC

Tricky said:


> I'm going to leave the stock cooler on it, if I can get 2050-2100mhz on core that would make me happy. Also I don't keep GPUs longer
> than 2-3 years so no point in spending more $ to put em under water, always sell to get the next flagship card.



I usually use a pile of universal GPU blocks but the latest GPU chip gen doesn't really fit those, or just barely. Eventually, the cards go back into the company supply room - after I spent hours trying to find the little spring-loaded NVidia GPU screws to remount the air coolers...I always put them in a zip-lock bag in a safe place, but somehow, they tend to wander off on their own  ...this set of 2080 TIs is water-blocked from the factory, though, so no search-and-rescue for the screws a few years from now


----------



## J7SC

...just saw this on OCN https://www.overclock.net/forum/69-nvidia/1718706-did-my-2080ti-just-die.html 

I wish there were much more 'sorted' info, or a table, on card deaths by make, exact model, VRAM type, stock or custom BIOS, and type of error (i.e. VRAM or other, if discernible)


----------



## MrTOOSHORT

Wonder if my ticking time bomb 2080ti FE Samsung is going to go off one day.

Bought from the nvidia store Nov 30th. Oct 30th was the last time a 2080ti was in stock there until again on Nov 30th. So my thoughts were nvidia fixed the issue.


----------



## renejr902

Hi! Can someone please tell me their temps while gaming or running the Heaven benchmark with a GeForce 2080 Ti Aorus Xtreme Waterforce or MSI Seahawk 2080 Ti? I just bought a Gigabyte Aorus Xtreme 2080 Ti; my temps are 71-72°C, and 76°C when fully overclocked. I'm hesitating over whether to exchange it for the water-cooled version. (Thanks a lot. I don't have a water-cooled system and don't want to water-cool it; I want to see temps of cards that come with an included water block.)


----------



## J7SC

MrTOOSHORT said:


> Wonder if my ticking time bomb 2080ti FE Samsung is going to go off one day.
> 
> Bought from the nvidia store Nov 30th. Oct 30th was the last time a 2080ti was in stock there until again on Nov 30th. So my thoughts were nvidia fixed the issue.


...you're touching on an important item here, and that is the uncertainty in the back of the mind which affects the ownership enjoyment. I am not a friend of the litigious society, but if this were a car-industry product, there would be recalls, if not legal follow-up. I just wish there were better info in addition to what is anecdotal so that one can start to make some sense out of it. The other thing which makes me nervous is the 'sudden and final' death described by many - little or no warning, unlike what we used to get with many of the other gens.




renejr902 said:


> Hi! Please can someone tell me their temps while gaming or using heaven benchmark with a Geforce 2080ti aorus xtreme waterforce or Msi seahawk 2080 ti .. i just bought a "Gigabyte aeorus xtreme 2080ti" My temps are 71c-72c and 76c when fully overclocked. i hesitate to exchange it for the watercooled version. (Thanks a lot i dont have a watercooled system. i want see temp of cards with a included waterblock system. i dont want to watercool my system.)



...there are two factory water-cooled versions, one is the AIO, the other is the regular blocked version for custom loops, which are the ones I have. ...Haven't run Heaven in a while, but I know my factory water-blocked Aorus' never exceeded 47C on anything, with 360 rad and 3x 120 silent CPU fans on idle, even after multiple Superposition runs.


----------



## BudgieSmuggler

Scotty99 said:


> I actually have a founders 2060 lol, i just dont know where else to post there are no active threads on the 2060 yet.
> 
> I prefer a set it and forget it approach thats why i was intrerested in the OC scanner function, but i dont have much thermal headroom at stock so was curious if the OC scanner bumped fans up or if that was needed to be done manually.



Gotcha. You can use OC Scanner, see what it comes out with, and try to bump it up manually if you like for a bit extra. If it gives you +60 core, go +80 manually, and so on. You have to overclock memory manually, and I believe these cards get better results from a memory OC than from core clock. Start at +200 memory and see how high you can go without crashes or artifacting. Once you've done this you can just save and enjoy your card. Or stick with what OC Scanner gives you and then set your fan curve manually. Depending on your tolerance for fan noise you can go all out and get less downclocking due to the thermal limit, or go moderate and be happy with whatever your chip levels out at. I always set a manual fan curve. I think in EVGA Precision you set a target temp, don't you, and it sets the fans accordingly?
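The bump-until-crash procedure above is just a linear search; a sketch with a hypothetical `is_stable()` standing in for a real stress-test pass:

```python
# The bump-until-crash memory OC search described above, as a sketch.
# `is_stable` is a hypothetical stand-in for an actual stress-test run
# (e.g. a benchmark loop plus an artifact check) -- nothing here talks to a GPU.

def find_max_offset(is_stable, start=200, step=25, limit=1500):
    """Step the offset up from `start` until instability; return the last good one."""
    best = None
    offset = start
    while offset <= limit and is_stable(offset):
        best = offset
        offset += step
    return best  # None if even the starting offset fails

# Example with a fake stability function (pretend the card artifacts past +850):
print(find_max_offset(lambda off: off <= 850))  # -> 850
```

In practice you would run each step for a while, since memory errors often show up as score drops or artifacts rather than instant crashes.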


----------



## Scotty99

BudgieSmuggler said:


> Gotcha. you can use oc scanner see what it comes out with and try and bump it up manually if you like for a bit extra. If it gives you +60 core go +80 manually and so on. You have to overclock memory manually and i believe these cards get better results with memory oc vs core clock. Start at +200 memory and see how high you can go without crash or artifacting. Once you've done this you can just save and enjoy your card. Or stick with what oc scanner gives you and then set your fan curve manually. Dependant on your tolerance for fan noise you can go all out and get less downclocking due to thermal limit or go moderate and be happy with whatever your chip levels out at with your temps. I always et a manual fan curve. I think in Evga precision you set a target temp don't you and it sets fans accordingl?


I did exactly that today lol. Ran OC Scanner and it gave me +159 on core; in games it's between 1980-2010 MHz. I also ramped fans up a bit, so at 70°C they hit 70%, which is 2600 RPM. Seems to do the trick.

I haven't touched memory yet, I'll start with your +200 suggestion


----------



## renejr902

J7SC said:


> ...you're touching on an important item here, and that is the uncertainty / back of the mind which affects the ownership enjoyment. I am not a friend of the litigious society, but if this would be a car industry product, there would be recalls, if not legal follow-up. I just wish there would be better info in addition to what is anecdotal so that one can start to make some sense out of it. The other thing which makes me nervous is the 'sudden and final' death described by many - little or no warning, unlike what we used to get with many of the other gens.
> 
> 
> 
> 
> 
> ...there are two factory water-cooled versions, one is the AIO, the other is the regular blocked version for custom loops, which are the ones I have. ...Haven't run Heaven in a while, but I know my factory water-blocked Aorus' never exceeded 47C on anything, with 360 rad and 3x 120 silent CPU fans on idle, even after multiple Superposition runs.


Thanks for the answer, 47°C is impressive. I don't think an AIO would have great temps like that.


----------



## J7SC

renejr902 said:


> Thanks for answer, its impressive 47c. I dont think aio would have great temp like that.



...big 360 / 60 rad helps ! 

Re. Aorus AIO version, just noticed this (looong stress test w/ Heaven, 3DMark, Superposition etc..)


----------



## TK421

NBrock said:


> I put my block on the other day when my fittings came in. I have the EK block for it. Looks like mine is the "updated" one with the additional thermal pads.
> 
> I can't seem to go as high on the memory clocks as others can, the most I have gotten away with is +850 memory. My core clocks depending on application seem to sit around 2130 to 2145. In TimeSpy for example 2130 is where my core sits and the same settings get me 2145 in Superposition 4k optimized.





So I should look for around +1000 memory stable to consider it a good card?

Other than Time Spy (I have the 3DMark paid version), what other applications should I use? I don't have the Superposition license.


----------



## RaGran

zhrooms said:


> Not really no, it's obvious NVIDIA split the GPUs into two (300/300A) for monetary reasons alone, there are dozens of people benchmarking Non-A chips at over 2150MHz (1.093V) on water after simply shunt modding them, there is no evidence that the TU102-300 chip is any less "binned".



So it has been confirmed that a shunt mod works on the non-a cards as well? 
In that case I will probably do that too.




> I'm assuming they did it so they could sell the cards at a higher price, sticking the 300 chip at $999 MSRP and the 300A (Founders Edition) with unlocked overclocking (power limit) at $1199.
> 
> Similarly how Intel offer the Non-K and K models, it's been proven time after time that when overclocking a Non-K model using BCLK, they can reach 5 GHz no problem.
> 
> Allowing the consumers to freely unlock (flash) the cards to unlock a higher power limit (overclock) would discourage anyone from purchasing the higher priced cards, therefore they blocked them at a hardware level, renaming the chip at the same time.



Looks like Nvidia thinks they don't need brand loyalty. I hope AMD beats them soon. I heard from a dealer that the reason why some brands have better availability (Palit) and others worse (MSI) is that Nvidia gives more chips to vendors who make cards exclusively with their chips. If I had waited for the MSI Duke OC that I preordered, I probably still wouldn't have it.




> This time around you're forced to pay a higher price for a higher overclock (power limit). Water cooling a Non-A card is the biggest blunder you can make, results in losing out on about 10% performance, that's 60 > 66 FPS in 4K



The results posted on page 476 of this thread indicate that water cooling will lead to a performance increase regardless of whether the card is overclocked or what the power limit is (sorry, no link). The combination of water cooling and a higher power limit will give the best results, though.

Anyone know what the number after the letter K stands for in the chip's code name? Mine is TU102-300-K5-A1. This one seems to overclock particularly well, with +250 MHz on the core, and one benchmark even says it has a 2340 MHz peak clock. I've seen the EVGA Black edition cards are K3.


----------



## callonryan

pewpewlazer said:


> You're pretty spot on there... Just tried superposition 8k and my card sat around 2040 @ 0.962v +/-15mhz bouncing off power limit (338W BIOS).
> 
> In actual 4k gaming my card seems to stays around the 1.000v range, which gives me ~2070mhz with my current V-F curve.


That entire test is misleading. I don't even think the 8K test will show any difference. I used the same OC on 8K as I do on 4K.

https://benchmark.unigine.com/results/rid_89bbb05348fc474ba4b54c92d2708652

grrr


----------



## callonryan

TurricanM3 said:


> Looks like a got a pretty good chip. Superposition @2250:
> 
> https://www.youtube.com/watch?v=295QayS3GE4&feature=youtu.be
> 
> (limited because of 330w bios)
> 
> Got it for 999,- euros. ^^


Turrican, for ****s and giggles give Timespy Extreme a go! I'm thinking you'll be sitting around 2190mhz (which is a very good OC).


----------



## BudgieSmuggler

Scotty99 said:


> I did exactly that today lol. Ran oc scanner and it gave me a +159 on core, in games its between 1980-2010mhz. I also ramped fans up a bit, so at 70c they hit 70% which is 2600 rpm, seems to do the trick.
> 
> I havent touched memory yet, ill start with your 200 suggestion


Sounds good. +159 is pretty good start. I think my 2080ti on oc scanner gave me +139 and then i went manual and experimented. At least it gives an idea of where to start. Curious to know what memory overclock you can get on your 2060. Let me know what you top out at before crashing etc. Have fun


----------



## Nephalem89

Hi, I have a problem with my Gigabyte Aorus 2080 Ti Xtreme Waterforce. I'm using MSI Afterburner 4.6.0 beta 10 for OC, but I can't modify the voltage. Any idea? Thanks a lot!


----------



## cg4200

Did you go into settings and unlock voltage monitoring and stuff?


----------



## kot0005

TurricanM3 said:


> Looks like a got a pretty good chip. Superposition @2250:
> 
> 
> 
> 
> 
> 
> (limited because of 330w bios)
> 
> Got it for 999,- euros. ^^


How do you keep your GPU at 37°C under full load??


----------



## Nephalem89

cg4200 said:


> did you go in settings and unlock voltage monitoring and stuff???


Yes... everything, and I set it to third-party mode. I can move the core clock curve, but the core voltage percentage slider, even when I move it, doesn't budge from 725 mV. I'm using the latest BIOS for the card and the latest beta of AB.


----------



## NBrock

kot0005 said:


> How do you keep your gpu at 37c with full load ??


Probably water-cooling and good ambient temps. Mine peaks at 36°c for most stuff.


----------



## iRSs

Out of curiosity: are any of you power limited in MSI Kombustor (full screen)?



----------



## TurricanM3

Got 30190 points (old Windows with a lot of tools) in SP 1080p Medium @2250/8200. PT is max 358W and the chip runs 2250 MHz all the time. In 4K I am power limited; makes no sense to bench.

I can play BF V at 987mV @2115/8200, and 893mV for 2010/8200 with the fans in silent mode (500 RPM, GPU up to 42°). =)

Got conductonaut on the chip:


----------



## ducky083

TurricanM3 said:


> Got 30190 points (old windows with a lot of tools) in SP 1080p medium @2250/8200. PT is max. 358w and the chip runs 2250MHz all the time. In 4k i am limited. Makes no sense to bench.
> 
> I can play BF V @987mV @2115/8200 and 893mV for 2010/8200 with the fans in silent mode (500rpm, GPU up to 42°). =)
> 
> Got conductonaut on the chip:



Hi, can all of the A-chip GPUs make 2250 MHz and up on water cooling? I've ordered a Watercool Heatkiller IV...


----------



## nyk20z3

NBrock said:


> Probably water-cooling and good ambient temps. Mine peaks at 36°c for most stuff.


Must have the AC blasting, or it's winter time....


----------



## Hanks552

TurricanM3 said:


> Got 30190 points (old windows with a lot of tools) in SP 1080p medium @2250/8200. PT is max. 358w and the chip runs 2250MHz all the time. In 4k i am limited. Makes no sense to bench.
> 
> I can play BF V @987mV @2115/8200 and 893mV for 2010/8200 with the fans in silent mode (500rpm, GPU up to 42°). =)
> 
> Got conductonaut on the chip:


I'm not using thermal pads everywhere like you; I'm going to do a shunt mod tomorrow.
Do you think I should add more thermal pads?


----------



## cg4200

I have an EVGA Black (non-A chip) and I can only get to 1.080V; I can't get the last two steps (1.087 or 1.093) using Afterburner with voltage and power limit maxed.
I know I have a 280W limit and I am not hitting TDP... no perf caps.
What I don't understand: are the higher voltages just not available on 280W cards? I run 2160 MHz on 1.080V but cannot get the voltage any higher. Thanks


----------



## CptSpig

Nvidia 2080 Ti Founders Edition (380W BIOS) running 2175 / 2080 in Port Royal, 2160 / 2073 in Fire Strike Extreme, 2175 / 2050 in Time Spy Extreme, and 2160 / 2050 in Time Spy. Really happy with the card.

http://www.3dmark.com/pr/23499 http://www.3dmark.com/fs/17951194 http://www.3dmark.com/spy/5870453 http://www.3dmark.com/spy/5870019


----------



## Martin778

At what voltage? Mine locked up above 2040Mhz.


----------



## CptSpig

Martin778 said:


> At what voltage? Mine locked up above 2040Mhz.


Using MSI afterburner 4.6.0 beta 10 voltage locked at 1.093.


----------



## Rylen

Can’t decide what to do:

1. Asus Strix OC and keep the stock cooling; could get this 2nd hand for $1100.

2. EVGA XC Gaming and add the 380W BIOS + Kraken G12 + Kraken X72 (the Kraken is currently attached to my 2700X). I’d then get an X42 for the CPU.

3. Same as above (G12) but use the MSI Trio with its 406W BIOS instead of the EVGA XC.

I’m kinda leaving EVGA because of their RMA policies. Or just getting a Strix and keeping it stock.

NOISE IS IMPORTANT TO ME

I have about 12 inches of space for the GPU, unless I switch my X72 from push/pull to push only; removing the pull fans would allow me to fit a 13” GPU.


----------



## Rylen

Also would this be a good way to cool VRM if using a G12?


----------



## Scotty99

Rylen said:


> Can’t decide what to do
> 
> Asus Strix OC And keeps the stock cooling, could get this 2nd hand for $1100
> 
> EVGa XC Gaming and add 380w BIOS + Kraken G12 + Kraken X72 (kraken is currently attached to 2700X). I’d than get an X42 for the CPU
> 
> Do same as above (G12) but use MSI Trio with its 406w BIOS instead of EVGA XC
> 
> 
> I’m kinda leaving EVGA because of their RMA policies. Or just getting a Strix and keeping it stock
> 
> NOISE IS IMPORTANT TO ME
> 
> I have about 12 inches of space for GPU, unless I switch my X72 from Push/Pull to Push only, removing the Pull fans would allow me to fit a 13” GPU



Buy the cheapest evga 2080ti ftw you can find and slap this on it:
https://www.bhphotovideo.com/c/prod...I1m88OsAcNij_xoC4iIQAvD_BwE&lsft=BI:514&smp=Y

Oh, does the Kraken G12 fit 2080 Tis? I thought the holes were different.

Carry on lol.

Edit: Also, yes, that would most definitely help. Where did you find that bracket with fans? If it came in black I might add that just for my air-cooled GPU.


----------



## Rylen

Scotty99 said:


> Buy the cheapest evga 2080ti ftw you can find and slap this on it:
> https://www.bhphotovideo.com/c/prod...I1m88OsAcNij_xoC4iIQAvD_BwE&lsft=BI:514&smp=Y
> 
> Oh does the kraken g12 fit 2080ti's? I thought the holes were different.
> 
> Carry on lol.
> 
> Edit: Also yes that would most definitely help, where did you find that bracket with fans? If that came in black i might add that just for my air cooled gpu.


I'm reading the G12 fits with the AMD bracket.

But some people are hesitant because the VRMs nearest to the left of the GPU don't get a fan on them










Gelid makes the PCI Slot Fan

https://www.outletpc.com/pf3353-gelid-pci-slot-fans-sl-pci-02.html


----------



## Scotty99

Rylen said:


> Im reading G12 fits with AMD bracket
> 
> But some people are hesitant because the VRM’s nearest left of GPU don’t get a fan on them
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Gelid makes the PCI Slot Fan
> 
> https://www.outletpc.com/pf3353-gelid-pci-slot-fans-sl-pci-02.html


Oh yeah, I see where that could be an issue.

For a video card this expensive it's probably best to err on the side of caution, but given how cool the VRMs stayed when Jay tested his 1080 Ti for the G12 video, I doubt it would be much of a problem (especially if you got that fan bracket).


----------



## Scotty99

Just wanted to link this :





That's the reason I was confused about the G12 fitting these cards; if you look down into the comments as well, this is apparently a crapshoot, and the AMD bracket will not work with some cards.


----------



## kot0005

nyk20z3 said:


> Must have the ac blasting or its winter time....


For sure... Mine hits around 48-50°C, but it's summer here with 30-32°C ambient.


----------



## ring0r

Hello, I have received my "new" 2080 Ti FE with Samsung memory and I'm unable to flash the BIOS, even with the patched NVFLASH... has anyone been able to do so? I've tried these steps:

Code:

NVFLASH64 --protectoff
(choose GPU 0 when prompted)
NVFLASH64 -6 rom.rom

Mismatch... :|


----------



## willverduzco

Finally joining the club and mating a 2080ti to my 9900K and Aorus Ultra, after waiting for my local Microcenter to get one in stock for what felt like forever. The aforementioned parts are replacing a 1080TI on XOC bios and a delidded 7700k that was stable at 5.2 on a Maximus IX Hero. Pics of components and destination rig are below 

Fortunately/unfortunately the only card I could get my hands on is an Asus Strix OC (300A/1E07). From what I gather, it has supposedly better VRMs. However, due to the nonstandard PCB, additional cooling will be a bit more of a headache. I had originally planned on getting a reference card or an FTW3 and putting on one of those EVGA hybrid kits, as I had done for my Titan XP and 1080ti--but I guess I'm going to be going Kraken G12 + some 280mm Asetek AIO for the time being until I eventually decide on piecing together another custom loop.

Apologies in advance for the question below; I've skimmed much of this thread and used the search function, but 567 pages makes it hard to read everything thoroughly:

*Question:* I've seen that plenty of people in this thread have cross-flashed BIOSes in order to raise that power limit. Is cross-flashing limited to cards that have the reference/founders PCB and VRM layout, or would I be able to flash an alternative BIOS on my Strix OC card? The Strix has 2x8 pin power connectors, so I would assume I can. If it is possible, is the Galax bios linked in the OP the best one that I could successfully flash? I saw the MSI Gaming X has a higher power limit, but it has a different power connector layout (2x8 + 1x6), so I don't think a cross-flash would be safe.

Thanks again in advance for the help!


----------



## laxu

zhrooms said:


> That doesn't tell me anything, just shows 1E04 and 1545 MHz boost which indicates it is a Non-A chip, that exact BIOS is also uploaded on TechPowerUp, and listed under the name *Dual*, as it should.
> 
> Where is the actual proof that it is the GamingPro (NE6208TT20LC-150A)? We need pictures of the *box* and the *card* (with visible *product number* and *serial*). The cards are visually identical with one exception, the Dual does not feature a backplate (and the text on the box says GamingPro instead of Dual). _It's not hard to mix the cards up._


Turns out the person and a few others had received the Dual packaged as GamingPro, most likely a mistake at the Palit packaging line. That's what happens when you have this bull**** where there are multiple models that are near identical visually.

I contacted Palit on Facebook and they confirmed that both GamingPro models should have the 300A chip. I ordered the non-OC model last night. Looking at European pricing history on Geizhals.eu for a few cards, it seems the 999€ GamingPro OC was a one-day sale, so the GamingPro non-OC at 1099€ is the best deal on an A-chip 2080 Ti in the EU at the moment.

Now I just need to hope it doesn’t have coil whine and isn’t too noisy.


----------



## ilmazzo

willverduzco said:


> Finally joining the club and mating a 2080ti to my 9900K and Aorus Ultra, after waiting for my local Microcenter to get one in stock for what felt like forever. The aforementioned parts are replacing a 1080TI on XOC bios and a delidded 7700k that was stable at 5.2 on a Maximus IX Hero. Pics of components and destination rig are below
> 
> Fortunately/unfortunately the only card I could get my hands on is an Asus Strix OC (300A/1E07). From what I gather, it has supposedly better VRMs. However, due to the nonstandard PCB, additional cooling will be a bit more of a headache. I had originally planned on getting a reference card or an FTW3 and putting on one of those EVGA hybrid kits, as I had done for my Titan XP and 1080ti--but I guess I'm going to be going Kraken G12 + some 280mm Asetek AIO for the time being until I eventually decide on piecing together another custom loop.


Are these tools in the second picture to speed up the RMA process? lulz

As a European dude it is always disturbing to me to see assault weapons around a house like they could be a vacuum, but hey, it is a thing of mine I assume....


----------



## Edge0fsanity

Anyone flashed the 406W MSI BIOS or the 450W HOF BIOS to a reference card? I'm using the 380W BIOS now, but I'd like to run these at 1.093V without throttling and I don't want to shunt mod.


----------



## cg4200

ilmazzo said:


> are these tools in the second picture to speed up rmas process? lulz
> 
> as a european dude it is always disturbing to me to see assualt weapons around an house like could be a vacuum, but hey, it is a thing of mine I assume....


I actually liked the picture made me smile..lol
No offence intended but you must be the type of person that believes the government.. (When they say we are here to protect you) haha 
I would not put my or my family's life in the hands of someone who may or may not protect us .. depending who gets the emergency call
Any hoot nice setup I also like the speakers!!


----------



## Zer0G

Yesterday I water-cooled my Palit GamingPro OC. I noticed the PCB differs from the FE; I marked the parts. Some parts are missing in the upper marked area, and some are added in the lower area.

My question is: does any added part have to be cooled? I'm not sure whether I should put a thermal pad on one of the parts.

By the way, water-cooled it runs at 2040 MHz and +900 MHz on the Samsung chips.

The Heatkiller IV does a good job and fits well.

I've added two pics, one naked and one of the system. (Don't wonder... it's a working system, not a gaming system.)


----------



## Yendi

Hello guys,

I have a PNY 2080ti XLR8, if I flash with a EVGA bios, will I be able to have the 0db feature (fan off when no load on the card)?

Thanks !


----------



## VPII

@J7SC My card is in, possibly to go for RMA. Unfortunately, after a driver update to 417.7 it was stuck at 1350 MHz core again. Tried Win 7 and it worked until I updated the driver. Tried a total driver clean but it's still a no go.

On another note... I have my infrared thermometer.

If the shop agrees to the RMA and I need to wait on stock, I'll get the Zotac AMP. If I install the NZXT G12 on it, the backplate and cover plate may stay, which would help for the VRM and RAM.



----------



## Sheyster

ilmazzo said:


> as a european dude it is always disturbing to me to see assualt weapons around an house like could be a vacuum, but hey, it is a thing of mine I assume....



https://en.wikipedia.org/wiki/Second_Amendment_to_the_United_States_Constitution

In most states now (like California where I live) there are state laws limiting many assault weapons. That's one of the biggest reasons why I won't stay here long-term. I will most likely retire in Arizona or Texas where gun laws are much more relaxed.


----------



## Garrett1974NL

TurricanM3 said:


> Got 30190 points (old windows with a lot of tools) in SP 1080p medium @2250/8200. PT is max. 358w and the chip runs 2250MHz all the time. In 4k i am limited. Makes no sense to bench.
> 
> I can play BF V @987mV @2115/8200 and 893mV for 2010/8200 with the fans in silent mode (500rpm, GPU up to 42°). =)
> 
> Got conductonaut on the chip:


Those are insane temps... what is in your watercooling loop?
I mean what radiators, what pump (and its speed) what fans etc etc... what waterblock is on the GPU?
MOAR info plz... I have a GPU-only waterblock now but I would like a fullcover...it just looks so badass


----------



## NewType88

ilmazzo said:


> as a european dude it is always disturbing to me to see assualt weapons around an house like could be a vacuum, but hey, it is a thing of mine I assume....



Lol, they’re just rifles.....don’t be scared.


----------



## Glottis

NewType88 said:


> Lol, they’re just rifles.....don’t be scared.


Yeah just extremely efficient killing machines. Nothing to be afraid of. It's completely normal and not at all bizarre to post pictures like that in PC enthusiast forum.

I'm also European and if you had guns just sitting like that in your house here in Europe, other people would not want anything to do with you and might even question your mental stability.


----------



## NewType88

Glottis said:


> Yeah just extremely efficient killing machines. Nothing to be afraid of. It's completely normal and not at all bizarre to post pictures like that in PC enthusiast forum.
> 
> I'm also European and if you had guns just sitting like that in your house here in Europe, other people would not want anything to do with you and might even question your mental stability.


Yeah its a rifle, been around for hundreds of years.....

I don't find it bizarre at all, look at the picture again. He is a pc enthusiast and audiophile and a gun lover.....dudes got hobbies and you see it all in this one picture.

Almost everyone I know owns multiple guns.


----------



## Nicklas0912

Does anyone know if the Gainward GeForce RTX 2080 Ti Phoenix GS has dual BIOS?

I'm looking for an RTX 2080 Ti which has dual BIOS. I know the EVGA FTW3 and Strix do; are there any other cards besides the OCLAB?


----------



## NBrock

Glottis said:


> Yeah just extremely efficient killing machines. Nothing to be afraid of. It's completely normal and not at all bizarre to post pictures like that in PC enthusiast forum.
> 
> I'm also European and if you had guns just sitting like that in your house here in Europe, other people would not want anything to do with you and might even question your mental stability.


Obviously the dude doesn't leave them just sitting in the way of his rig. Looks like he was just setting up a photo with some of his hobbies in one pic... some of which are a couple rifles... and those rifles don't even have magazines in them. I love how people see one pic the guy obviously staged and are now questioning his mental stability LOLOL.


----------



## NBrock

Now for some on topic talk. 

It seems Superposition didn't like anything higher than a +850 memory clock on my 2080 Ti, but Time Spy and games don't mind it.
With +1000 mem clock and some tighter timings on my RAM I was able to get just over a 16k total score.
https://www.3dmark.com/3dm/32558019


----------



## J7SC

VPII said:


> @*J7SC* My card is in to possibly go for RMA. Unfortunately after driver update to 417.7 it was back stuck at 1350mhz core. Tried win 7 and it worked untill I updated thr driver. Tried total driver clean but still s no go.
> 
> On another note... I have my infrared thermometer.
> 
> If the shop agree to rma and I need to wait on stock Ill get the Zotac Amp. If I install the nzxt g12 on it the back plate and cover plate may stay which would help for vrm znd ram.
> 
> Sent from my SM-G950F using Tapatalk



The more I read about all these 2080 Ti issues, the more I feel like this (E. Munch, per YouTube)...


----------



## H4rd5tyl3

Does anyone else have issues with coil whine? My RTX 2080 Ti Strix OC is experiencing it under load... I'm not sure if it's quite loud enough to warrant an RMA... I plan to put it under water once WK comes out with a block for it but don't want the whine to transfer over. Luckily my FT02's huge fans mask the sound a bit but I'm planning on getting an O11 Dynamic so only rad fans will be running.






Edit: video


----------



## Sheyster

H4rd5tyl3 said:


> Does anyone else have issues with coil whine? My RTX 2080 Ti Strix OC is experiencing it under load... I'm not sure if it's quite loud enough to warrant an RMA but I'm not sure. I plan to put it under water once WK comes out with a block for it but don't want the whine to transfer over. Luckily my FT02's huge fans mask the sound a bit but I'm planning on getting an O11 Dynamic so only rad fans will be running.


Seems like a mixed bag as far as coil whine with these cards. I have not heard any at all with my MSI card, and it's in an open bench just a few feet away from where I sit. Hopefully some Strix owners can give you some feedback.


----------



## LayZ_Pz

H4rd5tyl3 said:


> Does anyone else have issues with coil whine? My RTX 2080 Ti Strix OC is experiencing it under load... I'm not sure if it's quite loud enough to warrant an RMA... I plan to put it under water once WK comes out with a block for it but don't want the whine to transfer over. Luckily my FT02's huge fans mask the sound a bit but I'm planning on getting an O11 Dynamic so only rad fans will be running.
> 
> https://youtu.be/g8U5iE1g41c
> 
> Edit: video


I do not have any coil whine on my first-gen (without the fan headers) 2080 Ti Strix.

I haven't stuck my ear inside the PC while it's under load, but there is nothing audible from where I sit.

Also, your video is not available.


----------



## Thoth420

Sheyster said:


> Seems like a mixed bag as far as coil whine with these cards. I have not heard any at all with my MSI card, and it's in an open bench just a few feet away from where I sit. Hopefully some Strix owners can give you some feedback.


I solved coil whine on 100% of my GPUs once I stopped using splitter cables for them: one PSU bank/terminal for each individual cable. For reference, I'm using EVGA SuperNOVA G3s, and everything is plugged into a standard Tripp Lite Isobar. Prior to that, every last GPU would whine to some degree in the same physical location. Hope this helps. All my cables are usually from CableMod as well, but I'm testing the stock cables from my unit atm while waiting on new sleeved ones to ship.


----------



## NBrock

I have a bit of coil whine on my FE. My Titan Xps both coil whine and the majority of other high end cards both AMD and Nvidia have had some amount of coil whine. Never had an issue with a card that whined. If it is so loud you can't stand it, I'd RMA it, but the chances of getting a perfectly quiet card are probably pretty slim.


----------



## LayZ_Pz

willverduzco said:


> *Question:* I've seen that plenty of people in this thread have cross-flashed BIOSes in order to raise that power limit. Is cross-flashing limited to cards that have the reference/founders PCB and VRM layout, or would I be able to flash an alternative BIOS on my Strix OC card? The Strix has 2x8 pin power connectors, so I would assume I can. If it is possible, is the Galax bios linked in the OP the best one that I could successfully flash? I saw the MSI Gaming X has a higher power limit, but it has a different power connector layout (2x8 + 1x6), so I don't think a cross-flash would be safe.
> Thanks again in advance for the help!


To answer that part: I have seen a post from someone here who flashed another BIOS to our Strix card. However, IIRC he lost a few video outputs.

I don't remember what the end result was for the power limit.

For now, our best bet is to wait for the Asus Matrix, which may provide us a BIOS with a max TDP higher than 325W.


----------



## H4rd5tyl3

LayZ_Pz said:


> I do not have any coil whine on my first gen (I do not have the fan headers) 2080TI Strix.
> 
> I did not stick my ear inside the PC when it is under load, but there is nothing audible from where I sit.
> 
> Also your video is not available.


video should work now


----------



## LayZ_Pz

H4rd5tyl3 said:


> video should work now


Is that the right link? https://www.youtube.com/watch?v=g8U5iE1g41c

Still doesn't work for me.


----------



## amano74

Nicklas0912 said:


> Does any one know if the Gainward GeForce RTX 2080 Ti Phoenix GS has Dual Bios?
> 
> Im looking for a RTX 2080 TI wich have Dual bios, I know Evga FTW3 and Strix do, is there any other cards besides the OCLAB ?


Hello, I have the Gainward RTX 2080 Ti GS. It's a triple-fan cooler on a reference-design PCB with only one BIOS...


----------



## nyk20z3

https://www.guru3d.com/articles-pages/msi-geforce-rtx-2080-ti-lightning-z-review,1.html

The dragon on the LCD panel ruined it for me. You don't put the MSI dragon on Lightning cards; it looks so tacky...


----------



## StuttgartRob

Just got my INNO3D GeForce RTX 2080 Ti iChill Frostbite


----------



## H4rd5tyl3

LayZ_Pz said:


> Is that the right link ? https://*www.youtube.com*/*watch?v=g8U5iE1g41c
> 
> Still doesnt work for me.







try again


----------



## H4rd5tyl3

StuttgartRob said:


> Just got my INNO3D GeForce RTX 2080 Ti iChill Frostbite


Beautiful. Looks similar to what I'm planning except my O11 will be white


----------



## willverduzco

LayZ_Pz said:


> To answer that part, I have seen a post with a folk somewhere here that flashed another bios to our Strix card. However IIRC he lost few video outputs.
> 
> I don't remember what was the end result on Power Limit.
> 
> For now our best bet is to wait for the Asus Matrix which may provide us a bios with a max TDP higher than 325W.


Thank you! I ended up giving the Galax BIOS a try anyway; but just as you predicted, I lost partial functionality in my HDMI ports in a very strange way... The power limit indeed rose, and it worked great for increasing performance (Timespy GPU score rose to 16612 from 15970 with only a very mild OC), but I no longer have audio-out functionality through my HDMI ports... which is a bummer since I use HDMI to transmit 7-ch, uncompressed PCM audio at 24/192 to my Pioneer Elite SC-37 receiver.

Strangely, my Strix OC with the Galax BIOS still transmits audio to my Asus PG348Q ultrawide, but that's about as useful as a hole in the head. My guess is that it's something that can be fixed via troubleshooting since it can output video but not sound to the receiver, and it can output both to my ultrawide. If I can't fix it, perhaps I'll try the EVGA FTW3 BIOS next, as it also uses 2 power connectors and is only a tiny bit lower than the Galax in terms of power limit.

Also, thanks to my like-minded hobbyists (cg4200, Sheyster, NewType88, and NBrock), and my apologies to anyone who found their inclusion jarring (Glottis, ilmazzo) as that was not my intent. Rather, as NBrock alluded, it's just a few of my hobbies in one pic (couldn't add modding/racing cars, photography, and music sadly). For those interested in that realm (and sorry for the off-topic), they're a (R) BCM Recce 16 Precision and a (L) Taiwanese Type 91 (short-stroke piston instead of the Stoner design of pseudo-direct gas impingement). Both are upgraded with a Larue MBT-2S trigger and an Armaspec recoil-reducing captured buffer in H2 weight. In the pic, the T91 has a Holosun HS510C and 3x magnifier; but now both share the same dot+magnifier setup, as it has replaced the Vortex scope, Eotech 518, and DI EG1 on the BCM.


----------



## SoldierRBT

H4rd5tyl3 said:


> Does anyone else have issues with coil whine? My RTX 2080 Ti Strix OC is experiencing it under load... I'm not sure if it's quite loud enough to warrant an RMA... I plan to put it under water once WK comes out with a block for it but don't want the whine to transfer over. Luckily my FT02's huge fans mask the sound a bit but I'm planning on getting an O11 Dynamic so only rad fans will be running.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Edit: video


I returned a MSI RTX 2080 Ti Sea Hawk X with severe coil whine. 

https://m.youtube.com/watch?v=nX4a3zisWls&feature=youtu.be


----------



## Edge0fsanity

H4rd5tyl3 said:


> Does anyone else have issues with coil whine? My RTX 2080 Ti Strix OC is experiencing it under load... I'm not sure if it's quite loud enough to warrant an RMA... I plan to put it under water once WK comes out with a block for it but don't want the whine to transfer over. Luckily my FT02's huge fans mask the sound a bit but I'm planning on getting an O11 Dynamic so only rad fans will be running.
> 
> https://youtu.be/g8U5iE1g41c
> 
> Edit: video


That doesn't sound all that bad, but it's hard to tell from the video. All high-end cards have coil whine if you listen closely enough. Bad coil whine is when you can hear it despite the fans on the heatsink running at 100%. My first 2080 Ti, which was an FE, was like that; I had to return it. The two cards I have right now both make noise, but I have to stick my head right next to the case to hear it. Both cards are on water with just rad fans in the background.

I would say if you think it's going to be audible above the rad fans in your loop, return it.


----------



## H4rd5tyl3

Edge0fsanity said:


> that doesn't sound all that bad, but its hard to tell from the video. All high end cards have coil whine if you listen closely enough. Bad coil whine is when you can hear it despite the fans on the heatsink running at 100%. My first 2080ti which was an FE was like that. Had to return it. The two cards i have right now both make noise but i have to stick my head right next to the case to hear it. Both cards are on water with just rad fans in the background.
> 
> I would say if you think its going to be audible above the rad fans in your loop return it.


Do you think you can post a video of yours? Curious as to what I should expect under a similar setup.


----------



## Nunzi

New ASUS Strix 2080 Ti: the power limit only goes to 112 in AB.

Is this normal? I thought it was supposed to go to 125...


----------



## CptSpig

Nunzi said:


> New ASUS strix 2080ti power limet only goes to 112 in AB
> 
> is this normal, thought is was supposed to go to 125..


Try this one: https://www.guru3d.com/news-story/download-msi-afterburner-4-6-beta-10-(build14218).html


----------



## krizby

StuttgartRob said:


> Just got my INNO3D GeForce RTX 2080 Ti iChill Frostbite


Nice, here is my Lian Li O11 with the Touchaqua Sedna 011.


----------



## Bloodred217

I was looking to get a 2080 Ti for a while, finally decided on a Gainward Phoenix GS. The information here was very useful to me, so thanks to everyone for putting it together! Knowing about flashing different BIOS versions and about what GPU each 2080 Ti comes with made it much easier for me to choose something.

The Phoenix GS is a pretty solid card, I'm quite happy with it. Mine came with the 330W BIOS from the factory, but I flashed the Galax 380W version anyway. It also has Samsung memory. I've got a couple of questions though, perhaps somebody has a clue. I played around a bit with OC, I got the card to about 2010-2050/8200MHz (+125/+1200) under load. I'm definitely happy with the performance, but I've seen people posting about getting 2100 or even 2200MHz out of their cards. I can't really push the core beyond +125, since it crashes. Playing around with the voltage boost doesn't seem to do anything, plus the card never hits 1.093V even with the 380W BIOS. Is this just a silicon lottery thing, or is there some more in-depth tweaking to do to get that high (manually modifying the curve maybe)? Or is it just a temperature thing? The card hits about 70-71C, are such high clocks only really achievable with water cooling?

The second thing I'd like to ask about is the damn bright green RGB LEDs. I can turn them off or change the color with EXPERTool, but it does not persist if the system is turned off. I don't want to always run EXPERTool since I use MSI AB, is there any way for me to make the change stick, or alternatively is there any 3rd party software which can control the LED without possibly interfering with AB clock/fan speed settings?



Also, I ran into a bit of trouble when trying to flash the BIOS, due to my mobo I think. I have a mobo with a PLX chip and for some reason nvflash seems to detect the PLX chip as a graphics card. When I tried to flash it, it tried to flash the BIOS to the PLX chip which obviously failed. If anyone else runs into this sort of thing, you can tell nvflash exactly what you want it to flash.
Use "nvflash64 --list" to see the graphics cards it detects, then when flashing add the "--index=n" option, where "n" is the index of your graphics card, which you get from the list.
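For anyone who hits the same PLX issue, the sequence looks something like this (index 1 and the ROM filename here are placeholders; check your own `--list` output to find which index is actually your GPU):

```shell
nvflash64 --list
nvflash64 --index=1 --save backup.rom
nvflash64 --index=1 -6 newbios.rom
```

Backing up first means you can always flash back if the new BIOS misbehaves; targeting the index explicitly keeps nvflash away from the PLX bridge.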


----------



## Nunzi

CptSpig said:


> Try this one: https://www.guru3d.com/news-story/download-msi-afterburner-4-6-beta-10-(build14218).html


Thank you; it didn't work. Tried beta 10 & 11 with a clean install...


----------



## J7SC

Nunzi said:


> Thank you , didn't work, tried beat 10 & 11 with clean install..



The Power Limit percentage is not an absolute number to worry about... you have to ask 'percentage in relation to what?' Just as an example, if the stock BIOS is at 350W (100%), then *115%* = 402.5W. If, on the other hand, the stock BIOS is at 285W (100%), then *130%* = 370.5W.
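If you want to sanity-check the slider math for your own card, a tiny shell helper does it (the 350W/285W stock limits are just the example figures from this post, not any particular card's BIOS):

```shell
# Convert a Power Limit slider percentage into absolute watts.
# First argument: stock BIOS limit in watts (what 100% means).
# Second argument: the slider percentage in Afterburner.
pl_watts() {
    awk -v w="$1" -v p="$2" 'BEGIN { printf "%.1f\n", w * p / 100 }'
}

pl_watts 350 115   # prints 402.5
pl_watts 285 130   # prints 370.5
```

Same slider number, very different watts, which is why comparing slider percentages between cards with different stock BIOSes is meaningless.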


----------



## krizby

Nunzi said:


> CptSpig said:
> 
> 
> 
> Try this one: https://www.guru3d.com/news-story/download-msi-afterburner-4-6-beta-10-(build14218).html
> 
> 
> 
> Thank you , didn't work, tried beat 10 & 11 with clean install..

You have a non-A chip with a 250/280W PL; nothing you do can change that, sadly 😕



Bloodred217 said:


> The Phoenix GS is a pretty solid card, I'm quite happy with it. Mine came with the 330W BIOS from the factory, but I flashed the Galax 380W version anyway. It also has Samsung memory. I've got a couple of questions though, perhaps somebody has a clue. I played around a bit with OC, I got the card to about 2010-2050/8200MHz (+125/+1200) under load. I'm definitely happy with the performance, but I've seen people posting about getting 2100 or even 2200MHz out of their cards. I can't really push the core beyond +125, since it crashes. Playing around with the voltage boost doesn't seem to do anything, plus the card never hits 1.093V even with the 380W BIOS. Is this just a silicon lottery thing, or is there some more in-depth tweaking to do to get that high (manually modifying the curve maybe)? Or is it just a temperature thing? The card hits about 70-71C, are such high clocks only really achievable with water cooling?


Here is the guide on how to modify the VF curve so that the card will reach 1.093V
https://www.overclock.net/forum/69-...tx-2080-ti-owner-s-club-545.html#post27805282


----------



## CptSpig

Nunzi said:


> Thank you , didn't work, tried beat 10 & 11 with clean install..





J7SC said:


> The Power Limit percentage is not an absolute number to worry about...you have to ask 'percentage in relation to what' > just as an example, if stock bios is at 350w (100%), then *115% *= 402.5w. If on the other hand stock bios is at 285w (100%), then *130%* = 370.5w


Sorry, I forgot I have the 380W BIOS, so it's 126 on the slider.


----------



## jura11

I just tested, on a friend's loop, the Bykski FR-N-RTX2080TI-X, and I must say I'm impressed by the temperatures: a single 360mm radiator for an 8086K at 5.2GHz with a Bykski CPU WB, plus the Bykski FR-N-RTX2080TI-X on a Palit RTX 2080 Ti Gaming with the Galax 380W BIOS and a max GPU OC of around 2100MHz. Temperatures were 45-47°C during gaming, and that's at 24-25°C ambient in a Phanteks EVOLV ATX, which is not the best case for airflow. My friend is happy, as we expected mid-50s on the GPU.

Hope this helps 

Thanks, Jura


----------



## Yendi

Yendi said:


> Hello guys,
> 
> I have a PNY 2080ti XLR8, if I flash with a EVGA bios, will I be able to have the 0db feature (fan off when no load on the card)?
> 
> Thanks !


No one has an idea about that?


----------



## LayZ_Pz

H4rd5tyl3 said:


> https://youtu.be/g8U5iE1g41c
> 
> try again


This does not sound bad at all. Now I wonder if I have it too; however, that is not something I would return the card over since, if I have it, I can't hear it over the fans ramping up.



willverduzco said:


> Thank you! I ended up giving the Galax BIOS a try anyway; but just as you predicted, I lost partial functionality in my HDMI ports in a very strange way... The power limit indeed rose, and it worked great for increasing performance (Timespy GPU score rose to 16612 from 15970 with only a very mild OC), but I no longer have audio-out functionality through my HDMI ports... which is a bummer since I use HDMI to transmit 7-ch, uncompressed PCM audio at 24/192 to my Pioneer Elite SC-37 receiver.
> 
> Strangely, my Strix OC with the Galax BIOS still transmits audio to my Asus PG348Q ultrawide, but that's about as useful as a hole in the head. My guess is that it's something that can be fixed via troubleshooting since it can output video but not sound to the receiver, and it can output both to my ultrawide. If I can't fix it, perhaps I'll try the EVGA FTW3 BIOS next, as it also uses 2 power connectors and is only a tiny bit lower than the Galax in terms of power limit.


Sounds good! The power limit is really holding this card back, and I am curious to hear about your findings. What was the max wattage you could draw?

Also, in the event of a BIOS malfunction, what is the process exactly? Let's say you flash the wrong BIOS to BIOS A:

- Boot from BIOS B
- Once Windows has booted, switch to BIOS A
- Flash the proper BIOS to BIOS A
- Profit?


----------



## willverduzco

LayZ_Pz said:


> Sounds good ! Power Limit is really holding this card back, and I am curious to hear about your findings. What was the max wattage you could draw ?
> 
> Also in event of a BIOS malfunctioning, what is the process exactly ? Let's say you flash wrong bios to Bios A :
> 
> - Boot from Bios B
> - When Windows has booted switch to Bios A
> - Flash proper Bios to Bios A
> - Profit ?


So unfortunately, I was unsuccessful with the EVGA FTW3 BIOS, and I encountered the same minor issue where the HDMI ports worked for video but not audio.

Let me reiterate that all but one of the ports work fine for video out (1 of 2 DP work, 2 of 2 HDMI work) on either BIOS. The only thing that does not work is transmitting uncompressed PCM audio over HDMI. I ended up flashing back to the GALAX bios, as it ends up staying around 380 watts and greatly improves the overall performance of the card. In fact, using the GALAX BIOS, I was able to find new daily-driver clocks for my 2080TI (+150c/+1050m offset or ~2130c/8050m) and 9900K (5.3 all-core), which got me a Timespy score of 16284 (GPU 17015, CPU 13099) and a Cinebench of 2307 at reasonable temps on all components... or a Cinebench 2322 at 5.4 GHz all-core at volts I would not want to run 24/7.

[EDIT: I have since shunt modded and increased my score to 16747 (GPU 17410, CPU 13779) at slightly higher clocks on the 9900K (5.4 all-core) and 2080Ti (2160c/8300m).]

Regarding audio, I took a step back and realized that since I own a consumer-level chip, part of the price of my i9-9900K went toward its iGPU. So to get around the lack of uncompressed 8x 24-bit/192kHz audio output, I just re-enabled my iGPU and used the HDMI connection on my motherboard to transmit uncompressed PCM multi-channel audio to my Pioneer Elite SC-37 receiver. It works just as well as when I was using my GPU to do this, and in no way slows down the system, as I was able to achieve the same overall performance in all benchmarks that I tried. I assume those on enthusiast-grade platforms (e.g. X299) may have a tougher time, since their CPUs lack iGPU functionality. That said, the chances that someone has an X299 setup, has a BIOS-flashed Strix OC 2080 Ti, AND uses HDMI to transmit uncompressed audio to a home theater receiver are vanishingly low.

Regarding troubleshooting steps, you hit the nail on the head. I did this all the time with my old 2x Radeon R9 290x Crossfire setup. I would test a modded BIOS out, and if it didn't work, I'd shut down, switch BIOS for bootup, switch back to borked slot once in OS, and flash again. Super easy, and much more convenient than dealing with using a second video card.


----------



## Rylen

What’s the difference between the 2080 Ti FTW3 Ultra, and FTW3 Gaming?

Techpowerup says they’re both 300A chips
EVGA says they both have ICX2



Only difference I see is the Boost Clock for Ultra goes higher, but that should be irrelevant if they’re both 300A chips? 

Unless the FTW3 Gaming has a lower power limit ?

And if the FTW3 Gaming does have a lower power limit than FTW3 Ultra, it can probably be flashed with Ultra Bios?

11G-P4-2483-KR = FTW3 Gaming


----------



## Zer0G

willverduzco said:


> Let me reiterate that all but one of the ports work fine for video out (1 of 2 DP work, 2 of 2 HDMI work) on either BIOS. The only thing that does not work is transmitting uncompressed PCM audio over HDMI. I ended up flashing back to the GALAX bios, as it ends up staying around 380 watts and greatly improves the overall performance of the card. In fact, using the GALAX BIOS, I was able to find new daily-driver clocks for my 2080TI (+150c/+1050m offset or ~2130c/8050m) and 9900K (5.2 all-core), which got me a Timespy score of 16022 (GPU 16828, CPU 12604) and a Cinebench of 2307 at reasonable temps on all components.


Great setup all around!
Can you tell me how you keep your build cool?

I have in-case watercooling, with a 360 radiator in the front (intake), a 280 radiator on top (exhaust), and a 140 radiator in the back (exhaust). The fans all run at 7V because I need a silent system for work. But if I'm gaming for 2 hours, my system runs into the temp limit on both the GPU and CPU.

9900K @ 5.1GHz with 1.365V, direct-die cooling (needs only 100-120W while gaming)
2080 Ti @ 366W power limit


----------



## Shawnb99

Zer0G said:


> Great setup at all!
> Can you tell me how do keep your built cool?
> 
> I've an incase watercooling, with a 360 radiator in the front (blow in), a 280 radiator on top (blow out) and a 140 radiator in the back (blow out). The fans runs all at 7V, cause I need a silent system for work. But if I'm gaming for 2 hours my system runs into the temp limit. GPU and CPU also.
> 
> 9900k @ 5.1 Ghz with 1.365V. DirectDie Cooling (needs only 100-120W while gaming)
> 2080ti @ 366W Powerlimit




Change all the exhaust rads to intake and you'll see better temps.
All rads should be intakes, as the ambient air is cooler than what's in the case.


Sent from my iPhone using Tapatalk


----------



## willverduzco

Zer0G said:


> Great setup at all!
> Can you tell me how do keep your built cool?
> 
> I've an incase watercooling, with a 360 radiator in the front (blow in), a 280 radiator on top (blow out) and a 140 radiator in the back (blow out). The fans runs all at 7V, cause I need a silent system for work. But if I'm gaming for 2 hours my system runs into the temp limit. GPU and CPU also.
> 
> 9900k @ 5.1 Ghz with 1.365V. DirectDie Cooling (needs only 100-120W while gaming)
> 2080ti @ 366W Powerlimit


Thanks for the compliment on the rig. I am somewhat lucky with my CPU, as it is AVX stable at 5.0 at 1.26v and 5.2 at 1.32v. Not quite a supremely golden chip, but certainly better than average from what I've read.

Regarding keeping it cool, I unfortunately had to make the tradeoff you are unwilling to make with noise since I'm only using 280mm AIOs right now. I currently use a Corsair H115i with push/pull Corsair MagLev fans on CPU as front panel intake, and will install a 280mm AIO (top exhaust) + Kraken G12 for the GPU tomorrow once the G12 arrives.

Making matters slightly less bad, I set my fan profiles set to shut down half of the fans at idle, and gradually spin up to 100% once the CPU hits 65C. This allows me to browse the web and do light gaming without any CPU fan noise and minor GPU fan noise, and that GPU fan noise will largely be eliminated once I get the 280mm AIO installed on the 2080TI tomorrow. When running Aida64, RealBench, Cinebench, or 3DMark's CPU tests, though, it sounds like a jet engine taking off in my office. 

Two rigs ago, when I had a 5930k and a Lian Li PC-A70F full tower, I had a custom loop with a 480mm XSPC double-width rad, 8 gentle typhoons in push/pull, and a D5 pump. It was nearly silent, even at full load, thanks to all that radiator surface area and the phenomenal static pressure of the gentle typhoon fan. Unfortunately, I decided to downsize cases to the Phanteks Evolv ATX TG (for aesthetics) when I side-graded to a de-lidded 7700k running at 5.2. The 280mm H115i was more than enough to handle that at reasonable noise levels, but it certainly struggles with double the cores on the 9900k.


----------



## LayZ_Pz

willverduzco said:


> So unfortunately, I was unsuccessful with the EVGA FTW3 BIOS, and I encountered the same minor issue where the HDMI ports worked for video but not audio.
> 
> Let me reiterate that all but one of the ports work fine for video out (1 of 2 DP work, 2 of 2 HDMI work) on either BIOS. The only thing that does not work is transmitting uncompressed PCM audio over HDMI. I ended up flashing back to the GALAX bios, as it ends up staying around 380 watts and greatly improves the overall performance of the card. In fact, using the GALAX BIOS, I was able to find new daily-driver clocks for my 2080TI (+150c/+1050m offset or ~2130c/8050m) and 9900K (5.2 all-core), which got me a Timespy score of 16022 (GPU 16828, CPU 12604) and a Cinebench of 2307 at reasonable temps on all components.
> 
> Regarding audio, I took a step back and realized that since I own a consumer-level chip, part of the money of my i9-9900k went to pay for its iGPU. So to get around the lack of uncompressed 8x 24bit/192kHz audio output, I just re-enabled my iGPU and used that HDMI connection on my motherboard to transmit uncompressed PCM multi-channel audio to my Pioneer Elite SC-37 receiver. It works just as well as when I was using my GPU to do this, and in no way slows down the system, as I was able to achieve the same overall performance in all benchmarks that I tried. I assume for those who have enthusiast-grade chips (e.g. X299) may have a tougher time, since their CPUs lack iGPU functionality. That said, the chances that someone has an X299 setup, has a BIOS-flashed Strix OC 2080TI, AND uses HDMI to transmit uncompressed audio to a home theater receiver are so low.
> 
> Regarding troubleshooting steps, you hit the nail on the head. I did this all the time with my old 2x Radeon R9 290x Crossfire setup. I would test a modded BIOS out, and if it didn't work, I'd shut down, switch BIOS for bootup, switch back to borked slot once in OS, and flash again. Super easy, and much more convenient than dealing with using a second video card.


Sounds great, thanks for confirming the procedure in case of a **** up.

I do not use audio from the card itself, so that should not impact me as much. Mind linking me to the exact BIOS you used? I will then back up both BIOSes and give it a go tonight.


----------



## Bloodred217

krizby said:


> Here is the guide on how to modify the VF curve so that the card will reach 1.093V
> https://www.overclock.net/forum/69-...tx-2080-ti-owner-s-club-545.html#post27805282


Thanks for the link, that seems to have done the trick. If I keep increasing the core clock in +15MHz increments at every voltage point like you mentioned there it ends up being unstable, but I've toned it down a bit and have seen the card boost up to 2130MHz now and it also briefly hit 1.093V, though it mostly sits around 2040-2080MHz in the SOTTR benchmark now, looks like it's hitting the power limit. I'll keep messing with it, maybe I can squeeze a few extra MHz out of it still.


----------



## Zer0G

willverduzco said:


> Regarding keeping it cool, I unfortunately had to make the tradeoff you are unwilling to make with noise since I'm only using 280mm AIOs right now. I currently use a Corsair H115i with push/pull Corsair MagLev fans on CPU as front panel intake, and will install a 280mm AIO (top exhaust) + Kraken G12 for the GPU tomorrow once the G12 arrives.
> 
> Making matters slightly less bad, I set my fan profiles set to shut down half of the fans at idle, and gradually spin up to 100% once the CPU hits 65C. This allows me to browse the web and do light gaming without any CPU fan noise and minor GPU fan noise, and that GPU fan noise will largely be eliminated once I get the 280mm AIO installed on the 2080TI tomorrow. When running Aida64, RealBench, Cinebench, or 3DMark's CPU tests, though, it sounds like a jet engine taking off in my office.



Ok, so noise is key. I have to give my fans a chance at 100% to see how cool I can get the system.

My whole system needs about 570-580 watts. It's really not easy to cool it down in a silent environment. I'm really not sure if it helps to turn the fans around: if they all blow into the case, it's going to get quite hot inside for all the other parts, because there is no airflow out anymore.

Since I upgraded to 64GB of RAM I have stability issues... I get random bluescreens or resets after 30 min of gaming. I haven't found the reason why.


----------



## willverduzco

LayZ_Pz said:


> Sounds great, thanks for confirming the procedure in case of a **** up.
> 
> I do not use audio from the card itself so that should not impact me as much, mind linking me to the exact bios you used ? I will then backup both bios and give it a go tonight.


I only bothered backing up the BIOS that I was overwriting (P mode). I think it'd just be a waste of energy to back up both if you're only modifying one. That said, I'm sure it can't hurt to have both backed up.

As for the exact bios, I used this one, which I found on zhrooms's OP, which was originally sourced from hhbreaker's post in this thread. I have also attached a copy of the exact BIOS I flashed, as well as a backup of the Asus Strix 2080TI OC (in P mode) to this reply for ease of access. The names within the ZIP should be self explanatory.

Oh, and just in case you don't have the commands committed to memory yet, these go into an elevated (administrator) command prompt:


Code:


nvflash64 --protect off
nvflash64 --save backup.rom
nvflash64 -6 rom_name.rom
shutdown -r -t 0




Zer0G said:


> Ok, so noise is key. I have to give my fans a chance at 100% to see how could I can get the system.
> 
> My whole system needs about 570-580 watts. It's really not easy to cool it down in a silent environment. Im pretty not sure if it helps to turn the fans. If they all blow inside the case, it's gonna quite hot inside for all the other parts. Because there is now airflow inside anymore.
> 
> Since I upgraded to 64gb Ram I have stability issiue... Get random bluescreens or resets after 30min of gaming. I didn't found the reason why.


Hmm, perhaps temps aren't your issue, since you mention that it was fine prior to the RAM upgrade. Exactly how high are your temps now? Also, you want approximately balanced intake and exhaust airflow. It's okay if some rads are exhaust if you can't have them all set to intake while balancing airflow. I generally have one 280mm intake rad and one 280mm exhaust rad since my case is small now, and I can't just have one 480mm rad as intake like before.

Anyhow, I wonder if something is off with your memory timings. Did you use an XMP profile? Are they all matched sticks? I assume you have 4 sticks, right? Are you running 1T or 2T command rate? I am running 4 sticks (8GB each) of Corsair LPX 3200, and they run just fine on my Z390 Aorus Ultra using the XMP profile and a manually set 1T command rate.


----------



## Zer0G

willverduzco said:


> Hmm, perhaps temps aren't your issue, since you mention that it was fine prior to the RAM upgrade. Exactly how high are your temps now? Also, you want approximately balanced intake and exhaust airflow. It's okay if some rads are exhaust if you can't have them all set to intake while balancing airflow. I generally have one 280mm intake rad and one 280mm exhaust rad since my case is small now, and I can't just have one 480mm rad as intake like before.
> 
> Anyhow, I wonder if something is off with your memory timings. Did you use an XMP profile? Are they all matched sticks? I assume you have 4 sticks, right? Are you running 1T or 2T command rate? I am running 4 sticks (8GB each) of Corsair LPX 3200, and they run just fine on my Z390 Aorus Ultra using the XMP profile and a manually set 1T command rate.


After 2 hours of gaming the GPU goes up to 80-84°C and the CPU is between 90 and 99°C. The water temperature goes up to 60°C; that's the reason why...
I have a 360 intake and a 280 + 140 exhaust.

Yes, I have 4 sticks of 16GB Ripjaws V 3200 CL16 (Hynix M-die), running the XMP 2 profile: 3200 16-18-18-18. I didn't change the command rate; I think it's still 2T. What does the command rate do? Could it be a solution?


----------



## Silent Scone

60C water is warm, to say the least.


----------



## willverduzco

Zer0G said:


> After 2 hours of gaming the GPU goes up to 80-84°C and the CPU is between 90 and 99°C. The water temperature goes up to 60°C; that's the reason why...
> I have a 360 intake and a 280 + 140 exhaust.
> 
> Yes, I have 4 sticks of 16GB Ripjaws V 3200 CL16 (Hynix M-die), running the XMP 2 profile: 3200 16-18-18-18. I didn't change the command rate; I think it's still 2T. What does the command rate do? Could it be a solution?


Yeah, definitely seems like temps. I assume it was just a coincidence that you installed another set of RAM modules at the same time. All of the temps you listed seem crazy high. What are your voltages at? Are you able to check the flow rate on your loop? Did you mix metals, and could it be gunked up? What pump are you running, and what liquid did you use to fill? As stated by Silent Scone, 60C water temp is insane, and can lead to premature pump failure, if it hasn't already been damaged.

First, try raising the fan speeds to see if it's just a dissipation issue... but if that doesn't improve the temps, I think your loop needs troubleshooting.


----------



## Zer0G

willverduzco said:


> Yeah, definitely seems like temps. I assume it was just a coincidence that you installed another set of RAM modules at the same time. All of the temps you listed seem crazy high. What are your voltages at? Are you able to check the flow rate on your loop? Did you mix metals, and could it be gunked up? What pump are you running, and what liquid did you use to fill? As stated by Silent Scone, 60C water temp is insane, and can lead to premature pump failure, if it hasn't already been damaged.
> 
> First, try raising the fan speeds to see if it's just a dissipation issue... but if that doesn't improve the temps, I think your loop needs troubleshooting.


Yes, the temps are really high, but I have no more space for additional radiators. The only options are to run the fans faster or lower the power limits. I'll have a look.

The core voltages are:
CPU: 1.365V @ 5.1GHz
GPU: up to 1.043V @ 2035MHz

No, I can't check the flow rate. I've installed two Alphacool DC-LT 2600 pumps for redundancy (a working system). The liquid is also from Alphacool, standard clear.


----------



## Zfast4y0u

Not sure if this is the 2080 Ti version, but I thought I should drop this in so you guys can see how these things are made.


----------



## LayZ_Pz

willverduzco said:


> I only bothered backing up the BIOS that I was overwriting (P mode). I think it'd just be a waste of energy to backup both if you're only modifying one. That said, I'm sure it can't hurt to have both backed up.
> 
> As for the exact bios, I used this one, which I found on zhrooms's OP, which was originally sourced from hhbreaker's post in this thread. I have also attached a copy of the exact BIOS I flashed, as well as a backup of the Asus Strix 2080TI OC (in P mode) to this reply for ease of access. The names within the ZIP should be self explanatory.
> 
> Oh, and just in case you don't have the commands committed to memory yet, these go into an elevated (administrator) command prompt:
> 
> 
> Code:
> 
> 
> nvflash64 --protect off
> nvflash64 --save backup.rom
> nvflash64 -6 rom_name.rom
> shutdown -r -t 0


Thank you so much, it worked perfectly. I just had to switch my display cable to the other DisplayPort, as the output died the moment I flashed the BIOS.

I now need better thermals and more airflow inside my case and/or a more aggressive fan curve.

Thanks again!


----------



## shiokarai

Zer0G said:


> Yes, the temps are really high, but I have no more space for additional radiators. The only options are to run the fans faster or lower the power limits. I'll have a look.
> 
> The core voltages are:
> CPU: 1.365V @ 5.1GHz
> GPU: up to 1.043V @ 2035MHz
> 
> No, I can't check the flow rate. I've installed two Alphacool DC-LT 2600 pumps for redundancy (a working system). The liquid is also from Alphacool, standard clear.


This much rad space (360 + 280 + 140, right?) shouldn't give you such high temps, no way; there's something wrong. I've had a 6850K OC + 2x GTX 1080 Ti OC setup with 3x 240 slim rads, and temps were about 40-50°C at most (which was high).


----------



## boli

Zer0G said:


> Yes, the temps are really high, but I have no more space for additional radiators.



I will second the opinion to change at least the top 280 to intake. Fresh air is more important than in/out balance. See my recent experience here.

Experiment with the back 140; it probably won't cool much with all the other rad fans as intake. For quick testing, just flip the fans.


----------



## Martin778

Anyone here running 2080Ti SLI on X399 rig, mainly the 2950X? I wonder if AMD got the SLI issues sorted out, last time I tried 1950X with 1080Ti SLI it ran awful, constantly dropping load from both cards.
I have a 3440x1440 144Hz screen so every frame squeezed out of the GPU comes in handy.


----------



## LunaP

Curious whether the EVGA FTW3 Ultra would fit the XSPC 2080 Ti waterblocks, since they don't usually update their compatibility list. Is there a big difference in layout vs. the XC Ultra? I see EVGA's copper block fits both the XC and the FTW3 Ultra, so I'm assuming it should? Hoping to hear from anyone who has tried it.

Looking to pick up 2 cards but wanted to verify first. Upgrading from SLI 1080 Tis.


----------



## Shawnb99

Nothing but bad luck.
Just had one of the spacers on my EKWB radiators break on me. 

Now I have to find a replacement, more downtime 


Sent from my iPhone using Tapatalk


----------



## anticommon

Is anyone else using an EK Vector block on the 2080 Ti XC Ultra (or another reference board)? I am getting a LOT of coil whine, to the point where even my external DAC picks up on it quite a bit past 25% volume (I usually listen at 25-30%). I may have to go back to the stock BIOS, but I remember it even then, though my GPU sat at ~30-4mhz less.


----------



## kot0005

anticommon said:


> Is anyone else using an EK Vector block on the 2080 Ti XC Ultra (or another reference board)? I am getting a LOT of coil whine, to the point where even my external DAC picks up on it quite a bit past 25% volume (I usually listen at 25-30%). I may have to go back to the stock BIOS, but I remember it even then, though my GPU sat at ~30-4mhz less.


use thermal pads to dampen the noise.


----------



## Nocliptoni

Hi, is there a higher power limit BIOS that works for the Gigabyte 2080 Ti Aorus Xtreme Waterforce AIO? I seem to be constantly banging my head against the 366W limit when running benchmarks. I've been trying to research it myself, but there doesn't seem to be much info about it.


----------



## J7SC

Nocliptoni said:


> Hi, is there a higher power limit BIOS that works for the Gigabyte 2080 Ti Aorus Xtreme Waterforce AIO? I seem to be constantly banging my head against the 366W limit when running benchmarks. I've been trying to research it myself, but there doesn't seem to be much info about it.



...In GPU-Z, both of my Aorus XTR cards (full waterblock version, but the same PCB) manage 375-380W, though some of that is surely just RGB. The only other BIOS I know of that works would be the Galax 380W. Not much of a step up in overall wattage, though it may have other benefits; then again, RGB will likely not work, possibly ditto for some of the output ports, and apparently it runs a bit hotter...

Have you tried the MSI AB 4.6.0 Beta 10 voltage curve approach referenced a few posts back?


----------



## J7SC

Martin778 said:


> Anyone here running 2080Ti SLI on X399 rig, mainly the 2950X? I wonder if AMD got the SLI issues sorted out, last time I tried 1950X with 1080Ti SLI it ran awful, constantly dropping load from both cards.
> I have a 3440x1440 144Hz screen so every frame squeezed out of the GPU comes in handy.



...I'll let you know after the weekend. Working on transferring 2x Aorus XTR 2080 Ti WB from an Intel system to AMD X399 / 2950X.


----------



## Nocliptoni

J7SC said:


> ...In GPU-Z, both of my Aorus XTR cards (full waterblock version, but the same PCB) manage 375-380W, though some of that is surely just RGB. The only other BIOS I know of that works would be the Galax 380W. Not much of a step up in overall wattage, though it may have other benefits; then again, RGB will likely not work, possibly ditto for some of the output ports, and apparently it runs a bit hotter...
> 
> Have you tried the MSI AB 4.6.0 Beta 10 voltage curve approach referenced a few posts back?


I thought about that BIOS, but I wasn't sure if it would work. I seem to top out at 367.1W according to GPU-Z; even that little bit more could possibly help. I tried locking the VF curve to 1.093V, but it doesn't seem to be working. I'm not sure I understand correctly how Nvidia Boost works, but I think when the card hits the power limit it drops voltage and therefore drops MHz as well? I don't mind losing RGB, as I'm not a great fan of it in the first place, so that's not an issue for me personally, and the AIO seems to keep temps in check quite well. Thank you for the reply; I'll see if I feel brave enough to flash that BIOS lol


----------



## Nocliptoni

J7SC said:


> ...I'll let you know after the weekend. Working on transferring 2x Aorus XTR 2080 Ti WB from an Intel system to AMD X399 / 2950X.


Thank you , would greatly appreciate that .


----------



## J7SC

Nocliptoni said:


> I thought about that BIOS, but I wasn't sure if it would work. I seem to top out at 367.1W according to GPU-Z; even that little bit more could possibly help. I tried locking the VF curve to 1.093V, but it doesn't seem to be working. I'm not sure I understand correctly how Nvidia Boost works, but I think when the card hits the power limit it drops voltage and therefore drops MHz as well? I don't mind losing RGB, as I'm not a great fan of it in the first place, so that's not an issue for me personally, and the AIO seems to keep temps in check quite well. Thank you for the reply; I'll see if I feel brave enough to flash that BIOS lol



Have you run Unigine Superposition 4K or 8K? That always maxes my cards at 375-380W in GPU-Z :wheee:


----------



## shadow85

Does anyone here have the MSI RTX 2080 Ti Sea Hawk X?

Please let me know what your temps are. Mine skyrocket easily to 70°C, even with a very aggressive fan profile.

I'm running 2000-2100MHz core with stock memory speeds. Please let me know if this sounds normal or if I might have an issue?


----------



## SoldierRBT

shadow85 said:


> Does anyone here have the MSI RTX 2080 Ti Sea Hawk X?
> 
> Please let me know what your temps are. Mine skyrocket easily to 70°C, even with a very aggressive fan profile.
> 
> I'm running 2000-2100MHz core with stock memory speeds. Please let me know if this sounds normal or if I might have an issue?


I had one but returned it due to severe coil whine. At 80% fan speed it was running around 54-58°C. The high temperature could be bad airflow in your case; remove your side panel to see if temps drop.


----------



## shadow85

SoldierRBT said:


> I had one but returned it due to severe coil whine. At 80% fan speed it was running around 54-58°C. The high temperature could be bad airflow in your case; remove your side panel to see if temps drop.


Ok thank you. Will try this later.


----------



## Hanks552

Preparing myself mentally to do the shunt mod, god help me


----------



## Spiriva

He should share that bios hes talking about


----------



## iRSs

Hanks552 said:


> Preparing myself mentally to do the shunt mod, god help me


Remember: not too much, and stay away from solder.

This much on all three shunts should double your power limit.
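For anyone wondering why bridging the shunts raises the limit: the conductive layer sits electrically in parallel with each shunt resistor, so the voltage drop the card senses (and therefore the power it reports) shrinks. A rough back-of-the-envelope sketch; the 5mΩ shunt value and the bridge resistance are illustrative assumptions, not measured values:

```python
def parallel(r1_mohm: float, r2_mohm: float) -> float:
    """Equivalent resistance of two resistors in parallel, in milliohms."""
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

shunt = 5.0    # mOhm, assumed current-sense shunt value (illustrative)
bridge = 5.0   # mOhm, assumed effective resistance of the bridging layer

effective = parallel(shunt, bridge)   # 2.5 mOhm seen by the sense circuit
multiplier = shunt / effective        # card under-reads power by this factor
print(multiplier)                     # → 2.0, i.e. the power limit is effectively doubled
```

With an equal-resistance bridge the sensed drop halves, so the card reads half the real power and the effective limit doubles, which matches the "should double your power limit" claim above.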









Sent from my LG-H930 using Tapatalk


----------



## Hanks552

iRSs said:


> Remember: not too much, and stay away from solder.
> 
> This much on all three shunts should double your power limit.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Sent from my LG-H930 using Tapatalk


Did you use the circuit pen?
All 3? I heard it's not good to do it on the PCIe shunt...
I saw Vega's pic; he used a lot.


----------



## Duskfall

Has anyone tested this version of nvflash, guys? https://www.techpowerup.com/download/nvidia-nvflash-with-board-id-mismatch-disabled
I want to flash a 300A BIOS to a 300 chip. Or is this only for FE cards?


----------



## dureiken

Hi, expert guys!

I'm the proud new owner of an EVGA 2080 Ti Black Gaming edition! Is there still no software way to change the power limit?

I plan to put it under water cooling, as with my 1080 Ti, but in gaming I always hit the power limit at 1930MHz.

Thanks a lot


----------



## Hanks552

dureiken said:


> Hi, expert guys!
> 
> I'm the proud new owner of an EVGA 2080 Ti Black Gaming edition! Is there still no software way to change the power limit?
> 
> I plan to put it under water cooling, as with my 1080 Ti, but in gaming I always hit the power limit at 1930MHz.
> 
> Thanks a lot


Non-A?
If it's a reference PCB and an A chip, you can use the Galax 380W BIOS.


----------



## dureiken

Hanks552 said:


> Non-A?
> If it's a reference PCB and an A chip, you can use the Galax 380W BIOS.


It's Non-A on 11G-P4-2281-KR


----------



## Hanks552

dureiken said:


> It's Non-A on 11G-P4-2281-KR


Well, sorry my dude. If you can, return it (only if you really care about overclocking it).


----------



## shadow85

SoldierRBT said:


> I had one but returned it due to severe coil whine. At 80% fan speed it was running around 54-58°C. The high temperature could be bad airflow in your case; remove your side panel to see if temps drop.


So I removed the side panel; it resulted in no change in GPU temps.

***.


----------



## iRSs

Hanks552 said:


> Did you use the circuit pen?
> All 3? I heard it's not good to do it on the PCIe shunt...
> I saw Vega's pic; he used a lot.


What circuit pen?

If you put more LM than this, the card will enter "limp" mode and lock to a 300MHz core clock.

I used the PCIe one too, because without it I was still hitting the power limit; with all three I am no longer hitting the power limit even in MSI Kombustor, where I was at 1.093V the whole time. All that is limiting me now is temperature, hence the decrease in frequency.

You can use a lot on any other card, but not on Turing.

Sent from my LG-H930 using Tapatalk


----------



## Hanks552

iRSs said:


> What circuit pen?
> 
> If you put more LM than this, the card will enter "limp" mode and lock to a 300MHz core clock.
> 
> I used the PCIe one too, because without it I was still hitting the power limit; with all three I am no longer hitting the power limit even in MSI Kombustor, where I was at 1.093V the whole time. All that is limiting me now is temperature, hence the decrease in frequency.
> 
> You can use a lot on any other card, but not on Turing.
> 
> Sent from my LG-H930 using Tapatalk


Look at the pic I posted; I'm using the same pen that Vega used.


----------



## iRSs

Hanks552 said:


> Look at the pic I posted; I'm using the same pen that Vega used.


I am not using any pen; I use Thermal Grizzly Conductonaut.

Sent from my LG-H930 using Tapatalk


----------



## Hanks552

I'm afraid of using LM, so I'm going to try this pen. It seems like it doesn't react with the tin, and it works (easier for me in case of an RMA).


----------



## Hanks552

I think I screwed up big time. What do you guys think? Should I RMA it?


----------



## NBrock

RMA ain't gonna do anything for ya if ya manhandled it.


----------



## Hanks552

NBrock said:


> RMA ain't gonna do anything for ya if ya manhandled it.


Nvm, card is dead. I'm going to order another one.


----------



## J7SC

Hanks552 said:


> Nvm, card is dead. I'm going to order another one.



...Looks and sounds serious, but there are specialty shops which can **sometimes / possibly** repair cut traces etc.; it really depends on the damage to the lower PCB layers. I would ask any sizable local hardware dealer for recommendations for such specialty electronics shops. What have you got to lose at this stage?

This looks too serious for the bake-in-the-oven method of re-flowing the solder, though it is a bit hard to tell from the pics whether this is just surface damage or 'deep'.


----------



## kot0005

Hanks552 said:


> I think I screwed up big time. What do you guys think? Should I RMA it?


How did you manage that? They will reject the warranty if they find that damage...


----------



## Hanks552

kot0005 said:


> Hanks552 said:
> 
> 
> > I think I screwed up big time. What do you guys think? Should I RMA it?
> 
> How did you manage that? They will reject the warranty if they find that damage...

It's fine, it was a mistake. The card was practically superglued to the waterblock. I tried to remove it without bending the PCB, tried shaking it and everything; then, thinking that area was safe, I used a screwdriver to separate the block from the PCB, and that happened. Yeah, I know, stupid idea, but I didn't know about the traces there. Well... live and learn; next time I'll be more careful.


----------



## acmilangr

Here is how my card goes on maximum OC

https://youtu.be/uZEMc_5HH_M


----------



## Zer0G

shiokarai said:


> This much rad space (360 + 280 + 140, right?) shouldn't give you such high temps, no way; there's something wrong. I've had a 6850K OC + 2x GTX 1080 Ti OC setup with 3x 240 slim rads, and temps were about 40-50°C at most (which was high).





boli said:


> I will second the opinion to change at least the top 280 to intake. Fresh air is more important than in/out balance. See my recent experience here.
> 
> Experiment with the back 140; it probably won't cool much with all the other rad fans as intake. For quick testing, just flip the fans.


Thanks a lot for all your kind help!

Just for your info, I found the reason why my system crashed to a bluescreen or restarted after about 30 minutes.

It is the combination of my motherboard, CPU and RAM. It seems my motherboard struggles with 4x16GB Hynix M-dies (Ripjaws V 3200 CL16).

Reducing the RAM frequency didn't help; neither did timings... The only solution was to go down from 5.1GHz to 4.9GHz, and now it's completely stable. Last night I rendered a video for 8 hours. All good.

At 4.9GHz and 1.32V Vcore the system is also much cooler. After two hours of BFV I didn't hit any temp limit.

Thanks again for your help, even if it wasn't the 2080 Ti. Great community here!


----------



## PrettyDancer

Hey guys! I'm in the process of reading the whole thread, almost through now...

I didn't quite get the hang of overclocking my 2080 Ti, so I'll ask here:

I have a PNY XLR8, which is power-limited to 300W with the slider maxed. Automatic OC gave me +130MHz, and I'm hovering around 1875/1900MHz in Superposition 8K at around 0.9V, but at 75°C with fans at 80% (which is already crazy loud)...

Would it be of any use to flash my BIOS to get higher clocks, or am I already limited by temperature? I have 3x Noctua intakes pulling air in.

Another question: I'm starting to get artifacts with memory around +400MHz, but +350MHz works fine. Would BIOS flashing help, or did I just severely lose the lottery?

Thanks guys!


----------



## Zer0G

PrettyDancer said:


> I have a PNY XLR8, which is power-limited to 300W with the slider maxed. Automatic OC gave me +130MHz, and I'm hovering around 1875/1900MHz in Superposition 8K at around 0.9V, but at 75°C with fans at 80% (which is already crazy loud)...
> 
> Would it be of any use to flash my BIOS to get higher clocks, or am I already limited by temperature? I have 3x Noctua intakes pulling air in.


300W is not that much... With a 366W or 380W BIOS the clock can go much higher; it's hard to say exactly how much. Give it a try, and just leave the case open while testing.

The temperature is already high (for 300W). You lose about one clock step (15MHz) for every 10°C, which is why many guys here try to stay under 40°C to reach the highest clocks. That is really hard to manage at 380 watts...
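To put that rule of thumb in numbers, here is a rough sketch of the thermal part of GPU Boost: one 15MHz bin dropped per ~10°C. The 40°C starting point and the exact bin size vary per card, so treat these defaults as assumptions:

```python
def estimated_clock(max_clock_mhz: int, temp_c: float,
                    bin_mhz: int = 15, step_c: float = 10.0,
                    cool_temp_c: float = 40.0) -> int:
    """Roughly estimate the sustained boost clock at a given GPU temperature.

    Models the community rule of thumb that GPU Boost drops one clock
    bin (~15 MHz) for roughly every 10 degC above the 'cool' threshold.
    """
    if temp_c <= cool_temp_c:
        return max_clock_mhz
    bins_lost = int((temp_c - cool_temp_c) // step_c)
    return max_clock_mhz - bins_lost * bin_mhz

print(estimated_clock(2100, 40))  # → 2100
print(estimated_clock(2100, 75))  # 3 bins lost → 2055
```

So going from a 40°C water-cooled card to 75°C on air costs roughly three bins (~45MHz) by this estimate, ignoring power and voltage limits, which usually bite first.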



PrettyDancer said:


> Another question: I'm starting to get artifacts with memory around +400MHz, but +350MHz works fine. Would BIOS flashing help, or did I just severely lose the lottery?


Please check first which RAM is on your card: Micron or Samsung? It's easy to find out in GPU-Z.
Normally a higher power target doesn't affect the memory clock, but if the RAM needs less power, there is more left for the core. ;-)


----------



## PrettyDancer

Zer0G said:


> Please check first which RAM is on your card: Micron or Samsung? It's easy to find out in GPU-Z.
> Normally a higher power target doesn't affect the memory clock, but if the RAM needs less power, there is more left for the core. ;-)


Micron RAM! Seems really slow at +350MHz ^^


----------



## Rob w

Hanks552 said:


> I think I screw up big time, what do you guys think? Should I RMA it?


Hi Hanks, there are specialist companies that will repair those traces. I blew a hole in my Titan V, which needed a multi-layer repair! I found a company up north in the UK.
They X-rayed the card and did (micro-surgery on it, lol) the repairs; it cost me about £150 in total, so don't despair, these cards can be fixed.
NV will just answer you with 'customer damage, goodbye'.


----------



## Monstieur

Garrett1974NL said:


> Look here's how I did it... and yeah that's the Geforce font
> The rectangular cutout on the metal plate (on the memory and VRM) is JUST large enough to let the block 'in' if that makes sense.
> I used some Swiftech nuts, springs and screws to mount it on the card.
> The holes I used correspond with the AMD hole distances.


Can I mount an AIO directly on the RTX 2080 Ti mid-plate using the AMD bracket, without the Kraken G12?


----------



## iRSs

Rob w said:


> Hi Hanks, there are specialist companies that will repair those traces. I blew a hole in my Titan V, which needed a multi-layer repair! I found a company up north in the UK.
> They X-rayed the card and did (micro-surgery on it, lol) the repairs; it cost me about £150 in total, so don't despair, these cards can be fixed.
> NV will just answer you with 'customer damage, goodbye'.


I can repair it at home, because only the surface is damaged.

Sent from my LG-H930 using Tapatalk


----------



## Hanks552

Rob w said:


> Hi Hanks, there are specialist companies that will repair those traces. I blew a hole in my Titan V, which needed a multi-layer repair! I found a company up north in the UK.
> They X-rayed the card and did (micro-surgery on it, lol) the repairs; it cost me about £150 in total, so don't despair, these cards can be fixed.
> NV will just answer you with 'customer damage, goodbye'.


I bought it through Amazon; they're usually very easy about this. Let's see what happens.


----------



## Doni Weiss

Hi,
I had a Gainward 2080 Ti Golden Sample, and it just kept crashing with artifacts and black screens. This is my second card, since I returned the first one because of coil whine.

I checked what the company had in stock, and they will replace it with an MSI Gaming Trio (NOT Trio X). Apart from clock speed, what's the difference from the Trio X?

Is it possible to install the Trio X 406W BIOS instead?


----------



## Zer0G

Doni Weiss said:


> Hi,
> I had a Gainward 2080 Ti Golden Sample, and it just kept crashing with artifacts and black screens. This is my second card, since I returned the first one because of coil whine.
> 
> I checked what the company had in stock, and they will replace it with an MSI Gaming Trio (NOT Trio X). Apart from clock speed, what's the difference from the Trio X?
> 
> Is it possible to install the Trio X 406W BIOS instead?


Yes, that should work; the PCB is the same.

Seems like a good deal, a GS for a Trio X.


----------



## Garrett1974NL

Monstieur said:


> Can I mount an AIO directly on the RTX 2080 Ti mid-plate using the AMD bracket, without the Kraken G12?


That might work, but I don't have an AIO myself, so don't take my word for it.
With a couple of screws and nuts I think you can get very far.

Pic of the card would help I guess...


----------



## BudgieSmuggler

PrettyDancer said:


> Hey guys! I'm in the process of reading the whole thread, almost through now...
> 
> I didn't quite get the hang of overclocking my 2080 Ti, so I'll ask here:
> 
> I have a PNY XLR8, which is power-limited to 300W with the slider maxed. Automatic OC gave me +130MHz, and I'm hovering around 1875/1900MHz in Superposition 8K at around 0.9V, but at 75°C with fans at 80% (which is already crazy loud)...
> 
> Would it be of any use to flash my BIOS to get higher clocks, or am I already limited by temperature? I have 3x Noctua intakes pulling air in.
> 
> Another question: I'm starting to get artifacts with memory around +400MHz, but +350MHz works fine. Would BIOS flashing help, or did I just severely lose the lottery?
> 
> Thanks guys!



Hi there. You might be getting artifacting from high temps. Try running your fans at 100% and see if it goes away. Are you in a game when you get the artifacts, or benchmarking? Some games don't like too much of an overclock: I can get artifacting in The Division but not in BFV with the exact same clocks. Try a different game with fans on full to keep temps in check.

It may help to try the Galax 380W BIOS, but that won't really help the artifacting. Someone else can chip in, but I don't think you'll get artifacting from hitting the power limit. It could be that you haven't won the silicon lottery; you can always try to return it and gamble on another card if that's important to you. Truth is, the cards are great with a mild overclock, and outside of benchmarking a lot of people will be undervolting for better temps and consistent frame rates rather than running a max overclock all the time.

If you need help with the VF curve, just ask. I have my card at 2130MHz from 1056mV onwards, which, with a slight drop when I hit the temp limit, gives me 2100MHz stable and +1000 memory on air cooling. Benchmarking, I drop to between 1995MHz and 2010MHz under load in 4K and 8K Superposition. In games is where I get 2100MHz, sometimes 2085MHz. I'm on a Zotac AMP 2080 Ti.

PS: you can try taking your side panel off to help with temps, depending on your room's ambient temperature.


----------



## managerman

Martin778 said:


> Anyone here running 2080Ti SLI on X399 rig, mainly the 2950X? I wonder if AMD got the SLI issues sorted out, last time I tried 1950X with 1080Ti SLI it ran awful, constantly dropping load from both cards.
> I have a 3440x1440 144Hz screen so every frame squeezed out of the GPU comes in handy.


I am in the middle of my build out. Build log here ----> https://www.overclock.net/forum/180...ild-carbon-fiber-tubes-fittings-overload.html

However, before beginning the build I tested on an open-air bench, and the only issue I had was running a 2990WX in full creator mode (32/64 cores): the graphics cards would only hit 80-85% utilization. Once I cut the cores in half (16/32) I got 100% utilization...

This was on air....hoping for a little more performance on water.... https://www.3dmark.com/spy/5449851 

-M


----------



## Bloodred217

Is anyone here using the USB C port on their 2080 Ti? I have a reference PCB card (Gainward Phoenix GS). I ran into a very strange issue, almost thought my card had completely died overnight.

Long story short, I have a USB C 3.1 Gen2 (10Gbps) hub which I plugged into the graphics card. This works perfectly in Windows, but the system won't boot if the hub is connected. It doesn't matter if any actual USB devices are plugged into the hub or not, the hub alone causes it. I can have other things plugged in, including USB 3.0 (5Gbps) hub and it boots up fine. If I unplug the problem hub while the system is stuck, the boot process resumes and works normally. As I've said, the hub works perfectly once Windows is up and running.

For a bit more detail, I'm trying to use the USB C port and hub for my Oculus Rift (3 sensors on USB3.0, plus the headset). The Oculus sensors are notoriously sensitive and picky, but the controller on the 2080 Ti and the hub work perfectly (my motherboard controller does not). I'd very much like to use it without having to constantly reach behind my system and fumble around to plug and unplug the hub. I could've sworn I've rebooted my system with hub + 3 sensors plugged in and it worked, but now it doesn't even with the hub alone, as I've said.

Any ideas or experience with this? To me it looks like it's trying to initialize the hub early on and hanging or something like that, but I've no idea how to even approach troubleshooting this.


----------



## Esenel

Ahhhh, curse hotfix drivers :-D

Better score with a lower OC...
...but not valid...


----------



## Shawnb99

Bloodred217 said:


> Is anyone here using the USB C port on their 2080 Ti? I have a reference PCB card (Gainward Phoenix GS). I ran into a very strange issue, almost thought my card had completely died overnight.
> 
> 
> 
> Long story short, I have a USB C 3.1 Gen2 (10Gbps) hub which I plugged into the graphics card. This works perfectly in Windows, but the system won't boot if the hub is connected. It doesn't matter if any actual USB devices are plugged into the hub or not, the hub alone causes it. I can have other things plugged in, including USB 3.0 (5Gbps) hub and it boots up fine. If I unplug the problem hub while the system is stuck, the boot process resumes and works normally. As I've said, the hub works perfectly once Windows is up and running.
> 
> 
> 
> For a bit more detail, I'm trying to use the USB C port and hub for my Oculus Rift (3 sensors on USB3.0, plus the headset). The Oculus sensors are notoriously sensitive and picky, but the controller on the 2080 Ti and the hub work perfectly (my motherboard controller does not). I'd very much like to use it without having to constantly reach behind my system and fumble around to plug and unplug the hub. I could've sworn I've rebooted my system with hub + 3 sensors plugged in and it worked, but now it doesn't even with the hub alone, as I've said.
> 
> 
> 
> Any ideas or experience with this? To me it looks like it's trying to initialize the hub early on and hanging or something like that, but I've no idea how to even approach troubleshooting this.



In your motherboard bios settings you can change the boot options. Disable everything but your windows drive. 
Windows boot manager is likely seeing it as a bootable device. 




----------



## Hanks552

Rob w said:


> Hi Hanks, there are specialist companies that will repair those tracks, I blew a hole in my Titan v which needed multi level repair! I found a co; up north uk.
> They x-ray card and did ( micro surgery on it lol) repairs cost me about £150 in total, so don’t despair they can be fixed.
> Nv will just answer you with ‘ customer damage, goodby’ .


Thank you
Let's see what Amazon says; if they take the return, I'll look for somebody to fix it.


----------



## BudgieSmuggler

Bloodred217 said:


> Is anyone here using the USB C port on their 2080 Ti? I have a reference PCB card (Gainward Phoenix GS). I ran into a very strange issue, almost thought my card had completely died overnight.
> 
> Long story short, I have a USB C 3.1 Gen2 (10Gbps) hub which I plugged into the graphics card. This works perfectly in Windows, but the system won't boot if the hub is connected. It doesn't matter if any actual USB devices are plugged into the hub or not, the hub alone causes it. I can have other things plugged in, including USB 3.0 (5Gbps) hub and it boots up fine. If I unplug the problem hub while the system is stuck, the boot process resumes and works normally. As I've said, the hub works perfectly once Windows is up and running.
> 
> For a bit more detail, I'm trying to use the USB C port and hub for my Oculus Rift (3 sensors on USB3.0, plus the headset). The Oculus sensors are notoriously sensitive and picky, but the controller on the 2080 Ti and the hub work perfectly (my motherboard controller does not). I'd very much like to use it without having to constantly reach behind my system and fumble around to plug and unplug the hub. I could've sworn I've rebooted my system with hub + 3 sensors plugged in and it worked, but now it doesn't even with the hub alone, as I've said.
> 
> Any ideas or experience with this? To me it looks like it's trying to initialize the hub early on and hanging or something like that, but I've no idea how to even approach troubleshooting this.



I don't have the issue with the USB-C port, but sometimes I have to unplug and replug my BenQ monitor to get a picture on boot-up. I've only had this since the Jan 15th Nvidia driver update. Are you on the latest driver, with the latest USB-C driver? I know my issue is very different; sorry I can't help, someone else may be able to. PS: have you tried disabling Fast Boot to see if that changes anything?


----------



## dureiken

Hanks552 said:


> well, sorry my dude, if you can, return it ( only if you really care about overclocking it)


I finally bought an EVGA 2080 Ti XC Ultra edition, will I be fine with that GPU?

I'm so weak ...


----------



## BudgieSmuggler

Bloodred217 said:


> Is anyone here using the USB C port on their 2080 Ti? I have a reference PCB card (Gainward Phoenix GS). I ran into a very strange issue, almost thought my card had completely died overnight.
> 
> Long story short, I have a USB C 3.1 Gen2 (10Gbps) hub which I plugged into the graphics card. This works perfectly in Windows, but the system won't boot if the hub is connected. It doesn't matter if any actual USB devices are plugged into the hub or not, the hub alone causes it. I can have other things plugged in, including USB 3.0 (5Gbps) hub and it boots up fine. If I unplug the problem hub while the system is stuck, the boot process resumes and works normally. As I've said, the hub works perfectly once Windows is up and running.
> 
> For a bit more detail, I'm trying to use the USB C port and hub for my Oculus Rift (3 sensors on USB3.0, plus the headset). The Oculus sensors are notoriously sensitive and picky, but the controller on the 2080 Ti and the hub work perfectly (my motherboard controller does not). I'd very much like to use it without having to constantly reach behind my system and fumble around to plug and unplug the hub. I could've sworn I've rebooted my system with hub + 3 sensors plugged in and it worked, but now it doesn't even with the hub alone, as I've said.
> 
> Any ideas or experience with this? To me it looks like it's trying to initialize the hub early on and hanging or something like that, but I've no idea how to even approach troubleshooting this.



Don't have the issue with the usb-c port but sometimes i have to unplug and replug my BenQ monitor to get a screen on boot up. Only had this since the Jan 15th Nvidia driver update. Are you on the latest driver with the latest usbc driver? I know my issue is very different. Sorry i can't help someone else may be able to. ps have you tried disabling fast boot to see if that changes anything?


----------



## Bloodred217

BudgieSmuggler said:


> Don't have the issue with the usb-c port but sometimes i have to unplug and replug my BenQ monitor to get a screen on boot up. Only had this since the Jan 15th Nvidia driver update. Are you on the latest driver with the latest usbc driver? I know my issue is very different. Sorry i can't help someone else may be able to. ps have you tried disabling fast boot to see if that changes anything?


I'm using 417.71 and I've only had the card for a few days, so it should be the latest driver. So you have a USB C monitor? Sounds like a different issue, to be clear even though I want to connect an Oculus Rift, it doesn't use video over USB C, just pure USB. The headset has a regular HDMI connector for video, so there's no video output from the USB C port. I already have fast boot disabled since I use wake on LAN occasionally and it doesn't work with fast boot on.



Shawnb99 said:


> In your motherboard bios settings you can change the boot options. Disable everything but your windows drive.
> Windows boot manager is likely seeing it as a bootable device.
> 


I didn't think this would work but I tried it anyway, and sadly it did nothing. Maybe I didn't make it clear enough, but the system hangs very early in the boot process. It does not POST at all, I don't think the Windows boot manager is even invoked this early. I can't even open UEFI with the hub plugged in and nothing comes up on screen.


I've now had a look at my POST code display and checked the motherboard manual. After a bunch of codes going by quickly, I saw 3 which progressed very slowly (over minutes) and then it seemed to hang entirely, until I unplugged the hub and the system booted up immediately. The codes I saw:
B4 - "USB device hot plug-in"
9C - "Detect and install all currently connected USB devices"
B2 - "Legacy Option ROM initialization"

So it looks like my motherboard UEFI is having some difficulty with USB initialization. My motherboard is quite old (Gigabyte Z97X Gaming GT), so maybe it's not compatible with the 3.1 Gen 2 hub. I'm not sure if something like that could actually stall the POST process or not. I don't have any other 3.1 Gen 2 ports, so I can't plug the hub into another one to see whether the graphics card itself is involved or not.

Later edit: I decided to wait some more and it looks like I was wrong, the POST process does not hang indefinitely. The system booted up eventually, but it took like 6-7 minutes from pressing the power button to Windows starting to load. Once Windows was loading, it did so quickly, at normal speed.
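For what it's worth, a multi-minute stall like that is consistent with firmware retrying USB enumeration with long per-attempt timeouts rather than hanging outright. A tiny illustrative sketch of how the delay compounds (every number below is made up, chosen only to land in the reported 6-7 minute range; none of them come from a real UEFI):

```python
# Illustrative only: a firmware that retries USB enumeration with long
# per-attempt timeouts can turn one misbehaving hub into a multi-minute
# POST stall without ever hanging outright. All values are hypothetical.

def post_delay_s(devices, retries_per_device, timeout_s):
    """Worst-case enumeration time if every attempt on a bad device times out."""
    return devices * retries_per_device * timeout_s

# Say the hub presents 4 downstream ports, the firmware retries each
# one 20 times, and each failed attempt times out after 5 seconds:
stall = post_delay_s(devices=4, retries_per_device=20, timeout_s=5)
print(f"worst-case stall: {stall} s (~{stall / 60:.1f} min)")
```

With those invented numbers the worst case is about 400 seconds, i.e. the same order of magnitude as the observed boot delay.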


----------



## wirefox

Newegg has waterforce 2080ti's in stock. just picked one up... Get them while they are hot!

GIGABYTE AORUS GeForce RTX 2080 Ti DirectX 12 GV-N208TAORUSX WB-11GC 11GB 352-Bit GDDR6 PCI Express 3.0 x16 SLI Support ATX Video Card

https://www.newegg.com/Product/Product.aspx?item=N82E16814932074


----------



## BudgieSmuggler

Bloodred217 said:


> I'm using 417.71 and I've only had the card for a few days, so it should be the latest driver. So you have a USB C monitor? Sounds like a different issue, to be clear even though I want to connect an Oculus Rift, it doesn't use video over USB C, just pure USB. The headset has a regular HDMI connector for video, so there's no video output from the USB C port. I already have fast boot disabled since I use wake on LAN occasionally and it doesn't work with fast boot on.
> 
> 
> I didn't think this would work but I tried it anyway, and sadly it did nothing. Maybe I didn't make it clear enough, but the system hangs very early in the boot process. It does not POST at all, I don't think the Windows boot manager is even invoked this early. I can't even open UEFI with the hub plugged in and nothing comes up on screen.
> 
> 
> I now had a look at my POST display and checked the motherboard manual. After a bunch of codes going by quickly, I've seen 3 which progressed very slowly (across minutes) and then it seemed to hang entirely, until I unplugged the hub and the system booted up immediately. I've seen:
> B4 - "USB device hot plug-in"
> 9C - "Detect and install all currently connected USB devices"
> B2 - "Legacy Option ROM initialization"
> 
> So it looks like my motherboard UEFI is having some difficulty with USB initialization. My motherboard is quite old (Gigabyte Z97X Gaming GT), so maybe it's not compatible with the 3.1 Gen 2 hub. I'm not sure if something like that could actually stall the POST process or not. I don't have any other 3.1 Gen 2 ports, so I can't plug the hub in another to see if there's anything going on with the graphics card or not.
> 
> LE: I decided to wait some more and it looks like I was wrong, the POST process does not hang indefinitely. The system booted up eventually, but it took like 6-7 minutes from pressing the power button to Windows starting to load. Once Windows was loading it did so quickly, at normal speed.



Yeah, like I said, a different issue; I just thought it could be a driver issue, as there are obviously some bugs, as is typical. Since you're using a hub, have you checked the hub's properties in Device Manager (Power Management) and unchecked "allow the computer to turn off this device to save power"?


----------



## Bloodred217

BudgieSmuggler said:


> Yeh like i said a different issue just thought it could be a driver issue as there are some bugs obviously as is typical. As you're using a hub have you checked in device manager in the hub properties (power management) and checked for "allow the computer to turn off this device to save power"​ and uncheck that


Yeah, I've tried that. Doesn't change anything.


----------



## kx11

got myself a Lightning Z 2080ti all the way from Singapore , hopefully by saturday it'll be in my hands


----------



## p1r473

*FTW3*

Can someone confirm I can safely flash the Galax 380W onto my 373W FTW3?


----------



## Doni Weiss

Zer0G said:


> Yes should work. The PCB is the same.
> 
> Seems like a good deal a GS for a Trio X.



Well, the price is basically the same in Sweden. It's all about supply of 2080 Ti cards; everyone is short on stock.

So why is there a non-X Trio card? Same PCB and both A-GPUs. And the price is the same here (I couldn't get the Trio X since it was out of stock).


----------



## J7SC

wirefox said:


> Newegg has waterforce 2080ti's in stock. just picked one up...* Get them while they are hot!*
> 
> GIGABYTE AORUS GeForce RTX 2080 Ti DirectX 12 GV-N208TAORUSX WB-11GC 11GB 352-Bit GDDR6 PCI Express 3.0 x16 SLI Support ATX Video Card
> 
> https://www.newegg.com/Product/Product.aspx?item=N82E16814932074



*Get them while they are hot! ? * Actually, the two I have run quite cool


----------



## kot0005

https://www.techpowerup.com/vgabios/207628/msi-rtx2080ti-11264-181118

lightning z has 380w bios. flashed it onto my seahawk ek, trio x based pcb. working fine.


----------



## Nocliptoni

Pretty decent with AMD CPU 

https://benchmark.unigine.com/results/rid_4ed80ef9d0fb4296b536974dcf1d80d1


----------



## J7SC

Nocliptoni said:


> Pretty decent with AMD CPU
> 
> https://benchmark.unigine.com/results/rid_4ed80ef9d0fb4296b536974dcf1d80d1




...hope that the upcoming switch to AMD (2950X) works well. Below is a 4K run of the Aorus from the first day I got it, on my Z170 SOC Force 6700K test bench (since then I've found out that the VRAM actually goes higher, about +1040). Threadripper system memory performance should also help


----------



## Yvese

So I'm thinking of taking the plunge and buying one of these. Are 2080 Tis still having that 'failure' issue from a few months back, or was it fixed? Was it FE cards only?


----------



## Medusa666

kx11 said:


> got myself a Lightning Z 2080ti all the way from Singapore , hopefully by saturday it'll be in my hands


Nice, what made you do it? Going to sell the HOF card?


----------



## J7SC

kx11 said:


> got myself a Lightning Z 2080ti all the way from Singapore , hopefully by saturday it'll be in my hands





Medusa666 said:


> Nice,* what made you do it?* Going to sell the HOF card?



...was wondering about that, too ! Still, Lightning is a reeeaaal nice card...are you going to w-cool it ? Here is just one of Gunslinger's 2080 TI Lightning exploits from HWbot, mind you on serious sub-zero. Also note the Micron memory.


----------



## kx11

Medusa666 said:


> Nice, what made you do it? Going to sell the HOF card?



Yeah, I'll do that ASAP. I want to do a vertical GPU mount so bad, and the HOF card is such a giant that I can't do it without touching the RAM sticks. Also, MSI AB has a problem with the HOF fans; for some reason they sometimes don't spin?! Or maybe my card is faulty regarding the fans. I bet it's a great card under water


----------



## BudgieSmuggler

kx11 said:


> yeah i 'll do that ASAP , i want to do a vertical gpu mount so bad and HOF card is such a giant i can't do it without touching the ram sticks , also MSI AB got a problem with HOF fans for some reason they don't spin sometimes ?!! or maybe my card is faulty regarding the fans , i bet it's a great card under water



I know someone who is looking to buy a 2nd hand 2080ti. Where are you located?


----------



## Bloodred217

I managed to find a workaround for my USB hub problem. I enabled Fast Boot in UEFI and I *disabled* USB support. No USB support = no USB initialization = no USB hub issue. Naturally this means that I now cannot actually access UEFI, even tried it with PS/2 keyboard and it's no go, Fast Boot is too fast, by the time the PS/2 keyboard comes up Windows is already loading. Thankfully the mobo has an option to revert to normal boot after AC power is removed, so if I ever need to get into UEFI I'll have to hit the power switch on the PSU first. That's no big deal, so this solution is good enough for me.

I can now use the USB C port on my 2080 Ti without extra hassle.


----------



## Jpmboy

Bloodred217 said:


> I managed to find a workaround for my USB hub problem. I enabled Fast Boot in UEFI and I *disabled* USB support. No USB support = no USB initialization = no USB hub issue. Naturally this means that I now cannot actually access UEFI, even tried it with PS/2 keyboard and it's no go, Fast Boot is too fast, by the time the PS/2 keyboard comes up Windows is already loading. Thankfully the mobo has an option to revert to normal boot after AC power is removed, so if I ever need to get into UEFI I'll have to hit the power switch on the PSU first. That's no big deal, so this solution is good enough for me.
> 
> I can now use the USB C port on my 2080 Ti without extra hassle.


You may be able to boot into Safe Mode by holding down the power switch for 3-5 sec with the USBs disabled.


----------



## BudgieSmuggler

Bloodred217 said:


> I managed to find a workaround for my USB hub problem. I enabled Fast Boot in UEFI and I *disabled* USB support. No USB support = no USB initialization = no USB hub issue. Naturally this means that I now cannot actually access UEFI, even tried it with PS/2 keyboard and it's no go, Fast Boot is too fast, by the time the PS/2 keyboard comes up Windows is already loading. Thankfully the mobo has an option to revert to normal boot after AC power is removed, so if I ever need to get into UEFI I'll have to hit the power switch on the PSU first. That's no big deal, so this solution is good enough for me.
> 
> I can now use the USB C port on my 2080 Ti without extra hassle.



Glad you're all sorted


----------



## J7SC

Bloodred217 said:


> I managed to find a workaround for my USB hub problem. I enabled Fast Boot in UEFI and I *disabled* USB support. No USB support = no USB initialization = no USB hub issue. Naturally this means that I now cannot actually access UEFI, even tried it with PS/2 keyboard and it's no go, Fast Boot is too fast, by the time the PS/2 keyboard comes up Windows is already loading. Thankfully the mobo has an option to revert to normal boot after AC power is removed, so if I ever need to get into UEFI I'll have to hit the power switch on the PSU first. That's no big deal, so this solution is good enough for me.
> 
> I can now use the USB C port on my 2080 Ti without extra hassle.



Glad to hear that you have a workable solution. I have never seen a comparison between HDMI out and USB-C out of the same card. Given ample bandwidth, there shouldn't be appreciable differences, but if you've got some results to post...:thumb:


----------



## Bloodred217

Jpmboy said:


> you may be able to boot in to safe mode by holding down the power switch for 3-5 sec with the USBs disabled.


I should be able to get into Safe Mode using the Windows advanced startup feature (hold Shift while clicking Restart). As for UEFI, I can either remove AC power to get the mobo into normal boot mode or I can reset the system during POST a couple of times, that also puts the mobo in normal boot mode.


BudgieSmuggler said:


> Glad you're all sorted


Yeah, good thing I found this option I had never used before, no idea how I could've actually made the hub work otherwise.


J7SC said:


> Glad to hear that you have a workable solution. I have never seen a comp between HDMI out and USB -C out of the same card. Given ample bandwidth, there shouldn't be appreciable differences, but if you've got some results to post...:thumb:


Well, I'm actually just using the port as regular USB, there's no video output going through. Ironically enough I have Oculus sensors plugged in via the hub, which are actually IR cameras, so my graphics card is now serving both for video output to my monitors and video input from 3 cameras via USB.

On a more serious note, I think video output over USB C actually uses DisplayPort signals, so there shouldn't be any quality difference between that and HDMI as long as the same color space/bit depth can be used.
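That point can be sanity-checked with a quick bandwidth calculation, which is also what matters for the refresh-rate question: DP 1.4 over USB-C has the same link budget as a regular DP 1.4 port. The sketch below uses the commonly cited ~25.92 Gbit/s effective HBR3 figure (32.4 Gbit/s raw minus 8b/10b coding) and an assumed 1.1x blanking overhead, so results near the limit are only "marginal", not definitive:

```python
# Rough check: does a given video mode fit in DP 1.4 bandwidth?
# 25.92 Gbit/s = 32.4 Gbit/s HBR3 raw minus 8b/10b coding overhead.
# The 1.1 blanking factor is an assumed CVT-RB approximation.

DP14_EFFECTIVE_GBPS = 25.92

def mode_gbps(width, height, refresh_hz, bits_per_pixel=24, blanking=1.1):
    """Approximate link bandwidth a video mode needs, in Gbit/s."""
    return width * height * refresh_hz * bits_per_pixel * blanking / 1e9

for w, h, hz in [(2560, 1440, 165), (3840, 2160, 98), (3840, 2160, 144)]:
    need = mode_gbps(w, h, hz)
    fits = "fits" if need <= DP14_EFFECTIVE_GBPS else "needs DSC or chroma subsampling"
    print(f"{w}x{h}@{hz}: ~{need:.1f} Gbit/s -> {fits}")
```

By this estimate 1440p at 165 Hz and 4K up to roughly 100 Hz fit comfortably at 8-bit RGB, while 4K 144 Hz exceeds the uncompressed budget, which matches how such modes are usually handled.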


----------



## J7SC

Bloodred217 said:


> ...
> 
> Well, I'm actually just using the port as regular USB, there's no video output going through. Ironically enough I have Oculus sensors plugged in via the hub, which are actually IR cameras, so my graphics card is now serving both for video output to my monitors and video input from 3 cameras via USB.
> 
> On a more serious note, I think video output over USB C actually uses DisplayPort signals, so there shouldn't be any quality difference between that and HDMI as long as the same color space/bit depth can be used.



Tx, good to know...was also wondering about refresh rates, just in case I want to build a tiled monitor wall - for flying around in 'Anthem'


----------



## Krzych04650

shadow85 said:


> Does anyone here have the MSI RTX 2080 Ti Sea Hawk X?
> 
> Please let me know what your temps are. My temps are sky rocketing easily to 70°C even with a very aggressive fan profile.
> 
> I have 2000-2100 MHz core, stock mem. speeds running through it. Please let me know if this sounds normal or I could I have issues?


For me, temperature stabilizes around 50-52 C under full sustained load at 100% fan speed; in actual gaming it is around 45-47 because the load is not 100% all the time, since I am applying a framerate limit. That's with the 380W BIOS and [email protected]; with the default 330W one it was impossible to even hit 50 C at 100% fan speed, because such a strict power limit basically enforces below-1V voltages if you want to get anywhere with clocks without massive throttling.
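A back-of-the-envelope model shows why a power cap forces voltage down at a given clock: dynamic power scales roughly with V² × f. The constant below is fitted so that ~1.05 V at 2100 MHz lands near the 380 W limit; it is a hypothetical fit for illustration, not a measured TU102 figure:

```python
import math

# Back-of-envelope only: assume board power P = K * V^2 * f (f in MHz).
# K is fitted so ~1.05 V @ 2100 MHz gives ~380 W; it is a hypothetical
# constant, not a measured value for this GPU.
K = 380 / (1.05**2 * 2100)

def power_w(voltage_v, clock_mhz):
    """Estimated board power in watts for a given voltage and clock."""
    return K * voltage_v**2 * clock_mhz

def max_voltage(power_limit_w, clock_mhz):
    """Highest voltage the power limit allows at a given clock."""
    return math.sqrt(power_limit_w / (K * clock_mhz))

# Holding 2100 MHz, a 330 W cap forces sub-1 V operation:
v330 = max_voltage(330, 2100)   # roughly 0.98 V
v380 = max_voltage(380, 2100)   # ~1.05 V, by construction
print(f"330 W cap @ 2100 MHz allows ~{v330:.2f} V")
print(f"380 W cap @ 2100 MHz allows ~{v380:.2f} V")
```

Under these assumptions the 330 W limit caps the card just below 1 V at 2100 MHz, consistent with the behaviour described above.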


----------



## wirefox

wirefox said:


> Newegg has waterforce 2080ti's in stock. just picked one up... Get them while they are hot!
> 
> GIGABYTE AORUS GeForce RTX 2080 Ti DirectX 12 GV-N208TAORUSX WB-11GC 11GB 352-Bit GDDR6 PCI Express 3.0 x16 SLI Support ATX Video Card
> 
> https://www.newegg.com/Product/Product.aspx?item=N82E16814932074





J7SC said:


> *Get them while they are hot! ? * Actually, the two I have run quite cool


two! wow that's a beefy rig..

Mine will arrive tomorrow!!! if all goes well. 

I am hoping to get some more frames from my Acer Predator X34 Pbmiphzx


----------



## knightriot

Bye EK and welcome Heatkiller IV. Room temp ~30°C, full load at 49°C
Clock: 2055~2100
Mem: +1000


----------



## Zer0G

knightriot said:


> bye EK and welcome heatkiller iv , room temp~30*c, fullload at 49*c
> Clock: 2055~2100
> Mem:+1000


Does the Heatkiller IV fit the MSI Lightning? 

I use it on my Palit GamingPro OC. Totally happy with it. But the surface over the GPU core was not milled perfectly; it has some grooves, which I can feel with my fingernail. Doesn't matter for me, I'm using a normal thermal grease. 
How is yours?


----------



## Garrett1974NL

knightriot said:


> bye EK and welcome heatkiller iv , room temp~30*c, fullload at 49*c
> Clock: 2055~2100
> Mem:+1000


What's the temp difference between the EK and the HK IV?
I'm seriously considering the HK IV block


----------



## PrettyDancer

Hi guys! Can I have some insight into the results of my overclock on my PNY 2080 Ti XLR8? I'm only at +30 MHz, effective PL at 300W... and I'm getting 1875 MHz at around 1000 mV, stable at 79°C with fans at 80% in Battlefield V ultra 1440p (see attachments). Anything higher seems to crash  

Is this normal, or is my chip poorly binned? Should I even attempt to flash another BIOS? I'm a bit disappointed... Would an AIO kit be worth it?


----------



## Zer0G

PrettyDancer said:


> Hi guys ! Can I have some insight about the results of my overclock on my PNY 2080ti XLR8 ? I'm only at +30mhz, effective PL at 300W... And I'm getting 1875mhz at around 1000mv stable at 79°C with fans at 80% on Battlefield 5 ultra 1440p (see attachments). Anything higher seems to crash
> 
> Is this normal or is my chip binned ? Should I even attempt to flash another bios ? I'm a bit disappointed... Would an AIO kit be worth it ?


You didn't push the core voltage, but you should! Give it 100% and try to keep the card cool. If you're not hitting the power limit, you'll soon crash into the temperature limit, already at 79°C at 300W. 

Also overclock the RAM.


----------



## LayZ_Pz

I have been trying all the high power target BIOSes on my Asus RTX 2080 Ti Strix, and so far the best was the KFA2 OC at ~380W max wattage.

I tried the Gaming X Trio one but had issues overclocking with it. 

Is there another BIOS with a 400W+ TDP?


----------



## VPII

J7SC said:


> Glad to hear :


Sorry, I quoted another post of yours. I did not have a PC for a little over a week. Well, I'm back, this time with a Palit RTX 2080 Ti GamingPro OC; busy testing the card. I got an RMA sorted for my old card; there were some questions regarding some markings on the shroud. Galax said they wanted to assist in getting the card fixed via a firmware update, but they felt bad that I had to wait this long for a replacement and offered a card they had in stock. This card is a lot cooler than the Galax I had: with the same custom fan curve I hit max 59°C running a single Time Spy run, whereas with the Galax it would be between 69 and 75. And the 59°C was with ambient around 28°C in the man cave.


----------



## knightriot

Zer0G said:


> Does the heatkiller iv fits to the MSI Lightning?
> 
> Use it on my Palit GamingPro OC. Totaly happy with it. But the surface of the gpu core was not milled exactly fine. It has some grooves, which I can feel with the fingernail. Doesn't matter for me, I'm using a normal thermal grease.
> How is yours?


Not the Lightning; I use a Zotac AMP with the 380W BIOS, it was just an MSI Afterburner issue. And about the HK IV: I use MX-4 and it works perfectly, better than my old EK Vector


----------



## knightriot

Garrett1974NL said:


> What's the temp difference between the EK and the HK IV?
> I'm seriously considering the HK IV block


Hi, about temperatures (my room temp is ~29-30°C at night):
- At full load the HK IV is 3°C better than the EK, both with MX-4: with the EK I got 53°C, with the HK IV 49°C.
- The HK IV block fully covers the PCB; the EK misses a little. Per my heat gun, VRM temps on the HK IV are ~10°C better than on the EK (64°C vs 54°C).
- With the HK IV, my water temp dropped from 43 to 40°C at full load.
- The HK IV is really heavy in the hand.
Sorry about my bad English


----------



## Edge0fsanity

knightriot said:


> bye EK and welcome heatkiller iv , room temp~30*c, fullload at 49*c
> Clock: 2055~2100
> Mem:+1000


edit: nvm, i missed the other reply


----------



## willverduzco

So I finally put a 280mm AIO and a Kraken G12 on my Asus Strix OC, as my ears couldn't take the blow dryer noise from the stock cooler any longer.

At around 1100 RPM on the 2x 140mm fans in a Phanteks Evolv ATX TG case with a room ambient of about 80F / 27C, it gets to mid-high 40s in games and scrapes 50-51C in Superposition at a nominal clock of 2130 MHz (obviously dropping as the benchmark goes on).

I am on the Galax 380W BIOS, but still found myself quite limited by TDP prior to going hybrid cooler. Getting rid of the stock fans and running the AIO, VRM fan, and radiator fans off of the motherboard probably saved me some small amount of power. Ever since the mod, I find that I no longer run into the power limit in games like Shadow of the Tomb Raider, whereas I did occasionally before.

At 380W, however, I still hit the limit hard in benchmarks. That said, I was able to get a pretty decent Time Spy of 16128 (GPU 16861, CPU 12941) on my new 24/7 clocks that pass RealBench and the like. The best part was beating JayzTwoCents's single-GPU score of 16119 (GPU 16602, CPU 13842) by a few points on the single-GPU hall of fame and earning 69th place, giggity giggity.

Oh and for those on the fence about using a hybrid kit vs a G12 and a larger radiator AIO, my VRMs feel cool to the touch after heavy load on both sides of the card despite not having direct contact with heatsinks or heatspreaders. I'm not too concerned about their long-term health anymore.


----------



## Monstieur

Garrett1974NL said:


> That might work, but I don't have an AIO myself so don't take my word for it.
> With a couple of screws and nuts I think you can get very far
> 
> Pic of the card would help I guess...


I have the same Inno3D X2. Where did you get the screws to fit the inner mounting holes? Are both the back plate and mid plate secured when using the inner mounting holes?


----------



## Garrett1974NL

Monstieur said:


> I have the same Inno3D X2. Where did you get the screws to fit the inner mounting holes? Are both the back plate and mid plate secured when using the inner mounting holes?


I used this:
http://www.performance-pcs.com/swiftech-mcw60-a2900-adapter-kit-for-ati-2900xt.html

(I didn't buy it there, but highflow.nl doesn't stock them anymore.)
The stock plate threads are M2.5 and the Swiftech screws are just a tad smaller, although they do 'grip', if that makes sense.
So basically you're loosening the plate a little bit (it's held in by more than those 4 screws), and by screwing in the Swiftech screws you're fastening it again... hope this makes sense hahah... Anyway, I then attached the Koolance GPU-230 block; I threw away the Koolance's original screws, way too thick and fragile.
But I'm considering a full-cover block now because it just looks so good... Heatkiller IV from Watercool.de


----------



## boli

Garrett1974NL said:


> What's the temp difference between the EK and the HK IV?
> 
> I'm seriously considering the HK IV block



See Igor's comparison numbers here: https://www.overclock.net/forum/69-...ti-owners-club-post27807346.html#post27807346


----------



## vmanuelgm

New Resident Evil 2 short gameplay using the 2080Ti...


----------



## J7SC

VPII said:


> Sorry quoted another post of yours. Did not have a PC for a little over a week. Well I'm back, this time with a Palit RTX 2080 Ti Gamingpro OC. Busy testing the card. Got an RMA sorted for my old card. Was some questions regarding some markings on the shroud. Galax said they want to assist to get the card fixed via firmware update, but they felt bad that I had to wait this long for a replacement and offered a card they had in stock. This card is a lot cooler than the Galax I had. With same custom fan curve I hit max 59c running single run Time Spy where as with the Galax it would be between 69 and 75. Now the 59c was with ambient around 28c in the man cave.



...'Cool' in more ways than one. I take it you transferred the AIO (re. temps you posted above) ? Also, what are you going to do if Galax fixes your old one ? May be run SLI  :thumb:

...28 c ambient in the man cave ? I'm looking outside and see snow...


----------



## jelome1989

LayZ_Pz said:


> I have been trying all the high Power Target bioses on my Asus RTX 2080Ti Strix and for now the best was the KFA2 OC @ 380-ish max Wattage.
> 
> I was trying the Gaming X Trio but had issues overclocking with that one.
> 
> Is there another bios with 400W+ of TDP ?


What clocks/voltage are you hitting with the new bios?



willverduzco said:


> Spoiler
> 
> 
> 
> So I finally put a 280mm AIO and a Kraken G12 on my Asus Strix OC, as my ears couldn't take the blow dryer noise from the stock cooler any longer.
> 
> At around 1100 RPM on the 2x 140mm fans in a Phanteks Evolv ATX TG case with a room ambient of about 80F / 27C, it gets to mid-high 40s in games and scrapes 50-51C in Superposition at a nominal clock of 2130 MHz (obviously dropping as the benchmark goes on).
> 
> I am on the Galax 380W BIOS, but still found myself quite limited by TDP prior to going hybrid cooler. Getting rid of the stock fans and running the AIO, VRM fan, and radiator fans off of the motherboard probably saved me some small amount of power. Ever since the mod, I find that I no longer run into the power limit in games like Shadow of the Tomb Raider, whereas I did occasionally before.
> 
> At 380W, however, I still hit the limit hard in benchmarks. That said, I was able to get a pretty decent Timespy of 16128 (GPU 16861, CPU 12941) on my new 24/7 clocks that pass RealBench and the like. The best part was beating JaysTwoCents's single-GPU score of 16119 (GPU 16602, CPU 13842) a few points on the single-gpu hall of fame and earning 69th place, giggity giggity.
> 
> Oh and for those on the fence about using a hybrid kit vs a G12 and a larger radiator AIO, my VRMs feel cool to the touch after heavy load on both sides of the card despite not having direct contact with heatsinks or heatspreaders. I'm not too concerned about their long-term health anymore.


What clocks / voltages are you hitting when gaming?


----------



## PrettyDancer

BudgieSmuggler said:


> Hi there. You might be getting artifacting from high temps. Try running your fans at 100% and see if it goes. Are you in game when you get the artifacts or benchamrking? Some games don't like too much of an overclock. I can get artifacting in The division but not in BFV with the exact same clocks. Try a different game and with fans on full to keep temps in check. It may help to try the Galax 380 watt bios but that won't really help the artifacting. Someone else can chip in but i don't think you'll get artifacting from hitting power limit. It could be that you haven't won silicon lottery. Can always try and return it and gamble on another card if that's important to you. Truth is the cards are great with a mild overclock and outside of benchmarkign a lot will be undervolting the cards to sit with better temps and consistent frame rates rather than going out max overclock all the time. If you need help with the VF curve just ask. I have my card at 2130mhz after 1056mv onwards which with a slight drop when i hit temp limit gives me 2100mhz stable and +1000 memory on air cooling. Benchmarking i drop to between 1995mhz - 2010mhz under load in 4k and 8k superposition. In game is where i get 2100mhz in most games soemtimes 2085mhz. I'am on zotac amp 2080ti. ps you can try and take your side panel off to help with temps dependant on your room ambient temp


Hi! Thanks for your answer. I didn't quite get how the voltage curve works, where to start and so on... The automatic OC isn't stable for me, sadly.
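For anyone else stuck on the same question, the setup BudgieSmuggler describes above (pick one clock, then flatten the V/F curve from a chosen voltage upward so the card never requests more) can be sketched roughly like this - the curve points are made-up illustrative values, not data from a real card:

```python
# Sketch of the undervolt/flatten approach: every V/F point at or above
# the chosen voltage is locked to the target clock, so the card stops
# requesting higher voltages. Curve values are hypothetical examples.
def flatten_curve(curve, volt_mv, clock_mhz):
    """Clamp all points >= volt_mv to clock_mhz; cap lower points too."""
    return {v: (clock_mhz if v >= volt_mv else min(f, clock_mhz))
            for v, f in curve.items()}

base = {900: 1800, 950: 1905, 1000: 1995, 1056: 2070, 1093: 2130}
flat = flatten_curve(base, 1056, 2130)  # "2130 MHz from 1056 mV onwards"

print(flat[1056], flat[1093])  # 2130 2130 -> flat ceiling above 1056 mV
print(flat[900])               # 1800 -> low-voltage points untouched
```

In Afterburner the same thing is done by dragging points in the Ctrl+F curve editor rather than in code; the sketch only shows the shape the edited curve ends up with.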


----------



## VPII

J7SC said:


> ...'Cool' in more ways than one. I take it you transferred the AIO (re. temps you posted above)? Also, what are you going to do if Galax fixes your old one? Maybe run SLI  :thumb:
> 
> ...28C ambient in the man cave? I'm looking outside and seeing snow...


Hi J7SC, nope, this card is a replacement so I'll stick with it. It seems fine. Finally got a 15.1K Time Spy, which is not too bad when running an AMD Ryzen CPU. I only changed to the AIO cooler after I posted, so the previous temps were with the stock cooler. At present with the AIO cooler I've seen a max of 40C while benching and 25 to 28C idle.

https://www.3dmark.com/spy/5944658

Oh and I forgot to add... the new card has Samsung memory, or so it seems per GPU-Z, and the memory runs a hell of a lot cooler. Temps taken with an infrared thermometer at the back and front were around 48C.


----------



## TK421

Stock of this card is still quite problematic, isn't it? :|


----------



## J7SC

VPII said:


> Hi J7SC, nope, this card is a replacement so I'll stick with it. It seems fine. Finally got a 15.1K Time Spy, which is not too bad when running an AMD Ryzen CPU. I only changed to the AIO cooler after I posted, so the previous temps were with the stock cooler. At present with the AIO cooler I've seen a max of 40C while benching and 25 to 28C idle.
> 
> https://www.3dmark.com/spy/5944658
> 
> Oh and I forgot to add... the new card has Samsung memory, or so it seems per GPU-Z, and the memory runs a hell of a lot cooler. Temps taken with an infrared thermometer at the back and front were around 48C.



...Seems like a great overall package, c'grats! Interesting also re. the VRAM temp diffs - those infrared thermometers come in handy all the time. And per your 3DMark link, it looks like you _might_ already have loaded the 380W Galax BIOS? Unfortunately, Palit isn't sold here, so no experience with that brand in my GPU pantry.


----------



## VPII

J7SC said:


> ...Seems like a great overall package, c'grats! Interesting also re. the VRAM temp diffs - those infrared thermometers come in handy all the time. And per your 3DMark link, it looks like you _might_ already have loaded the 380W Galax BIOS? Unfortunately, Palit isn't sold here, so no experience with that brand in my GPU pantry.


Hi there J7SC, nope, it is still the stock BIOS, max wattage 330W but usually around 328, which is standard with this GPU. It seems to draw less power than the Galax.


----------



## J7SC

VPII said:


> Hi there J7SC, nope, it is still the stock BIOS, max wattage 330W but usually around 328, which is standard with this GPU. It seems to draw less power than the Galax.



Hi VPII - ...ok, was just wondering why 3DMark didn't identify the vendor (i.e. Palit) in the link... with the results you have, though, you don't even need to load the 380W Galax... of course, in a few weeks, a little voice will appear and whisper in your ear 'flash the BIOS, flash the BIOS'


----------



## PrettyDancer

So, guys, I have a question: my PNY XLR8 2080 Ti gets artifacts as soon as I up the VRAM frequency to +100... My card is power-limited to 300W. Would flashing a BIOS with a higher power limit help?

I've seen countless people saying they were getting +1000 MHz on average; I feel out of luck.


----------



## VPII

PrettyDancer said:


> So, guys, I have a question: my PNY XLR8 2080 Ti gets artifacts as soon as I up the VRAM frequency to +100... My card is power-limited to 300W. Would flashing a BIOS with a higher power limit help?
> 
> I've seen countless people saying they were getting +1000 MHz on average; I feel out of luck.


Hi PrettyDancer, I do not think a BIOS flash will help you get more memory speed. It could very well be that the memory chips you have are right on the limit. I had an issue with my previous Galax RTX 2080 Ti OC card where, after about a month of use, it got stuck at the base clock. The memory on that card was Micron and I was limited to +800 MHz on the memory. After a long process of getting another card, a Palit RTX 2080 Ti GamingPro OC, I noticed the card has Samsung memory. Funny enough, with my infrared thermometer I noticed the memory temps being lower than the Micron, but I'm also able to run it at +1000 MHz. Did not try higher, but will do so going forward.


----------



## BigBeard86

My memory only does +700 MHz stable on my EVGA 2080 Ti. Is this abnormal? My core, however, runs 2050 MHz stable.


----------



## Edge0fsanity

BigBeard86 said:


> My memory only does +700 MHz stable on my EVGA 2080 Ti. Is this abnormal? My core, however, runs 2050 MHz stable.


That's about what I got on my XC Ultra when I tested it for a day on air. On water it does +1000 MHz, maybe more, but AB is limited and I don't care enough to use different OC software.


----------



## Sheyster

Edge0fsanity said:


> That's about what I got on my XC Ultra when I tested it for a day on air. On water it does +1000 MHz, maybe more, but AB is limited and I don't care enough to use different OC software.


The latest AB beta has a higher memory OC limit. It's listed on the official page now; it was only on G3D for a while.


----------



## acmilangr

So is there any BIOS compatible with the Asus Strix 2080 Ti OC?


----------



## Edge0fsanity

Sheyster said:


> The latest AB beta has a higher memory OC limit. It's listed on the official page now; it was only on G3D for a while.


didn't know that, will switch over today, thanks


----------



## knightriot

Zotac 2080 Ti AMP + Heatkiller IV + 4K res. My best is [email protected] and mem +1000. Very happy with it.


----------



## nmkr

kot0005 said:


> https://www.techpowerup.com/vgabios/207628/msi-rtx2080ti-11264-181118
> 
> lightning z has 380w bios. flashed it onto my seahawk ek, trio x based pcb. working fine.


Can you enable the manual voltage slider in Afterburner if you change voltage regulation to "MSI-advanced"? Restart Afterburner; it should give you a slider.

On my 1080 Ti Lightning Z there was a slider from 0.900 - 1.200V.

Did anyone try to flash the new 2080 Ti Z BIOS on a 2x 8-pin PCB?


----------



## eux

Anyone using these with SLI? Thinking of picking up two but not sure how support is nowadays. I've looked at some benchmarks and it seems OK.


----------



## hrm

Doni Weiss said:


> Well, the price is basically the same in Sweden. It's all about supply of 2080 Ti cards. Everyone is short on stock.
> 
> So why is there a non-X Trio card? Same PCB and both A-GPUs. And the price is the same here (I couldn't get the Trio X since it was out of stock).


Not sure why the non-X Trio exists, but I bought one after my 10/2018-build 2080 Ti Trio X crashed during gaming and finally failed to boot after 1 month's use. My Trio X had Micron memory and the highest mem OC was +700 MHz. This 01/2019-build non-X Trio has Samsung memory and with the stock BIOS gives a +130 MHz GPU OC (boosting around 2025-2050 MHz during gaming) and a +1200 MHz mem OC. Seems fine; hopefully it will last over a month.


----------



## J7SC

PrettyDancer said:


> So, guys, I have a question: my PNY XLR8 2080 Ti gets artifacts as soon as I up the VRAM frequency to +100... My card is power-limited to 300W. Would flashing a BIOS with a higher power limit help?
> 
> I've seen countless people saying they were getting +1000 MHz on average; I feel out of luck.



I posted some tests back in December showing that with +1000 on my (Micron) VRAM, overall power consumption went up by less than 10W (all else held equal, PL set to 110%, well below the 122% max), so a new BIOS with a higher limit seems unlikely to do much.

I'll add that both my cards are full-block water-cooled and that probably helps re. VRAM... both are Micron and both go well over +1000. One thing to check, whether Samsung or Micron, is good seating of the cooler's heat pads on the VRAM, in particular the ones below the GPU core, closest to the PCIe slot. In some later 2080 Ti and also Titan RTX models (similar / same PCB), they increased the thickness of the thermal pads on the lower VRAM to get better seating and heat dissipation.


----------



## LayZ_Pz

jelome1989 said:


> What clocks/voltage are you hitting with the new bios?


2100 MHz at ~1.03V on the GPU, and I went to +850 on the mem but did not try any further; it could probably take more, from what I have seen from others.

I am now back on the stock BIOS for various reasons.

Anyone know why, with a 450W BIOS, my card would hit a wall at 380 watts? Is it because of the two connectors instead of three? I have a be quiet! Dark Power Pro 11 1200W; should I go for two PCIe leads instead of one?


----------



## LayZ_Pz

acmilangr said:


> So is there any BIOS compatible with the Asus Strix 2080 Ti OC?


Yes & no.

If you are willing to give up audio from the GPU and a DisplayPort, you can go with pretty much any of the BIOSes you see here.

For me the best was this one: https://www.techpowerup.com/vgabios/204557/kfa2-rtx2080ti-11264-180910


----------



## sblantipodi

Is there any hope of using our cards for some RTX games?
Will we ever see RTX and DLSS in Tomb Raider?

Will Metro Exodus support RTX and DLSS at launch?


----------



## J7SC

LayZ_Pz said:


> 2100 MHz at ~1.03V on the GPU, and I went to +850 on the mem but did not try any further; it could probably take more, from what I have seen from others.
> 
> I am now back on the stock BIOS for various reasons.
> 
> Anyone know why, with a 450W BIOS, my card would hit a wall at 380 watts? *Is it because of the two connectors instead of three?* I have a be quiet! Dark Power Pro 11 1200W; should I go for two PCIe leads instead of one?



That's it, per earlier posts by folks who tried the same
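For a rough sense of why two connectors tend to wall right around there, here's a back-of-envelope budget using the nominal PCIe limits (8-pin ≈ 150W, slot ≈ 75W - cards can and do pull past spec, so treat this as an approximation, not a hard rule):

```python
# Back-of-envelope PCIe power budget. 150 W per 8-pin and 75 W from the
# slot are the nominal spec figures; real cards can pull past them, so
# this is only an approximation of where a BIOS "wall" tends to sit.
PIN8_W = 150   # nominal watts per 8-pin PCIe power connector
SLOT_W = 75    # nominal watts from the PCIe x16 slot

def power_budget(num_8pin: int) -> int:
    """Nominal board power available from the slot plus N 8-pin connectors."""
    return SLOT_W + num_8pin * PIN8_W

print(power_budget(2))  # 375 -> right around the 380 W wall on 2x 8-pin cards
print(power_budget(3))  # 525 -> headroom for 450-500 W BIOSes on 3x 8-pin
```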


----------



## LayZ_Pz

J7SC said:


> That's it, per earlier posts by folks who tried the same


That's annoying. JayzTwoCents actually got a 500W BIOS with 2x 8-pins on his EVGA FTW, so I guess this limit is coded in the BIOS as well..


----------



## Doni Weiss

*Hmm*



hrm said:


> Not sure why the non-X Trio exists, but I bought one after my 10/2018-build 2080 Ti Trio X crashed during gaming and finally failed to boot after 1 month's use. My Trio X had Micron memory and the highest mem OC was +700 MHz. This 01/2019-build non-X Trio has Samsung memory and with the stock BIOS gives a +130 MHz GPU OC (boosting around 2025-2050 MHz during gaming) and a +1200 MHz mem OC. Seems fine; hopefully it will last over a month.


Interesting... my Trio card (still not the Trio X) actually has Micron memory.

Let's see for how long this one works. But with the OC Scanner curve I reached 2040 MHz in Time Spy.


----------



## willverduzco

jelome1989 said:


> What clocks/voltage are you hitting with the new bios?
> 
> 
> What clocks / voltages are you hitting when gaming?


It generally stays at 2100 MHz and 1.05V while gaming using the Galax 380W BIOS, though it occasionally dips to 2085 MHz / 1.043V due to "Voltage" being the most common limit reason instead of "Power", as was the case on air. My temps while gaming stay under 45C, and I'm no longer drawing power for the fans from the GPU ever since I slapped on a 280mm AIO, which also helps.



VPII said:


> Hi J7SC, nope, this card is a replacement so I'll stick with it. It seems fine. Finally got a 15.1K Time Spy, which is not too bad when running an AMD Ryzen CPU. I only changed to the AIO cooler after I posted, so the previous temps were with the stock cooler. At present with the AIO cooler I've seen a max of 40C while benching and 25 to 28C idle.
> 
> https://www.3dmark.com/spy/5944658
> 
> Oh and I forgot to add... the new card has Samsung memory, or so it seems per GPU-Z, and the memory runs a hell of a lot cooler. Temps taken with an infrared thermometer at the back and front were around 48C.


40C max while benching is nuts for anything other than a huge radiator and custom loop. I'm surprised you are getting that with an AIO hybrid. Which AIO are you using, what fans and RPM, how cold is your ambient room temperature, and what case do you have?

For reference, I have a 280mm AIO, and in my 27C room and Phanteks Evolv ATX TG case (known for its mediocre airflow), I only manage mid-40s while gaming and 50C while benching with fans at around 1100 RPM. At around 900 RPM, temps all rise about 3-4 degrees, so that's a much better tradeoff for me given that it's practically silent at that point.


----------



## J7SC

LayZ_Pz said:


> That's annoying. JayzTwoCents actually got a 500W BIOS with 2x 8-pins on his EVGA FTW, so I guess this limit is coded in the BIOS as well..



...yeah, saw that. Probably doesn't hurt that Kingpin / Vince dropped by Jay2C earlier. Now Steve from GN is getting into the act as well, with an LN2 pro-OCer 'surprise guest' dropping by Sunday (Jan 27th live-stream)... where there's a big tank of LN2, a custom BIOS is rarely far away...


----------



## LayZ_Pz

J7SC said:


> ...yeah, saw that. Probably doesn't hurt that Kingpin / Vince dropped by Jay2C earlier. Now Steve from GN is getting into the act as well, with an LN2 pro-OCer 'surprise guest' dropping by Sunday (Jan 27th live-stream)... where there's a big tank of LN2, a custom BIOS is rarely far away...


If only they could hook us up with a bit of that magic too.....


----------



## J7SC

LayZ_Pz said:


> If only they could hook us up with a bit of that magic too.....



Sooner or later, those things 'leak' out. In the meantime, there's a shunt mod or three so that it 'will play Crysis' at 8K


----------



## Bloodred217

eux said:


> Anyone using these with SLI? Thinking of picking up two but not sure how support is nowadays. I've looked at some benchmarks and it seems OK.


I used SLI before I got my 2080 Ti (2 GTX 1080s). SLI nowadays is in a so-so state. The core problem is that support for SLI is far from universal, a lot of games do not have any SLI profiles from NVIDIA. Some fundamentally do not work, but some can be made to work with custom profiles set through NVIDIA Profile Inspector. You can try to make profiles yourself or you can find whatever other people post online. In terms of hardware requirements, SLI before NVLink used to require 2 full PCIe x16 slots, but on 2080 Ti the NVLink bridge takes care of that, so it's an improvement. SLI in general is better on RTX/Turing than on Pascal, but the core issue of profiles/support still exists.

I don't really think SLI 2080 Ti is worth it unless you've got a 4K 144Hz monitor, in that case you need all the power you can get.


----------



## pegnose

My ASUS Strix 2080 Ti OC does not run so well. It barely goes above 2000 MHz. I can only put +70 MHz onto the 1650 MHz default boost clock before it crashes, even if voltage and power target settings are maxed out in MSI Afterburner. I would like to ask two questions:

1. This seems highly irregular to me for such a pricey high-quality card. The cheaper Zotac Amp of a friend can take at least +100 onto its 1665 MHz default boost (with stock bios; +120 with the Galax 380 W bios). I was expecting more from this card as buildzoid was so fond of its VRMs. Also, the card does not seem to cool any better. What do you think?
2. Can I flash the Galax 380 W bios to this card, or is this not possible / ill-advised due to it not being a reference PCB card?


----------



## Christopher2178

Lurking since this thread opened, so here's my input finally...

I have an MSI Trio X w/ Micron memory on water: 27-29C at idle, about 37-40C gaming (I hate the 40C clock drop, Nvidia... why?), and 42-45C max when benchmarking.

Running 8K Superposition with VRAM +1200 and core +125, which gives me 2130 MHz avg/low (2145-2160 MHz peaks) on the MSI 406W BIOS.

On the 406W BIOS the highest power draw I have seen in GPU-Z is about 401 watts during an 8K Superposition run, but most games and even benchmarks max out around 350-375 watts it seems...

Just tried the Lightning Z BIOS and could hit basically the same speeds, but it seemed to drop from 2160/2145 down to 2130 a little sooner and more often, and the highest wattage in GPU-Z during all benchmarks was 381... So back to the MSI 406W BIOS for me. Thanks!


Sent from my iPhone using Tapatalk


----------



## eux

Bloodred217 said:


> I used SLI before I got my 2080 Ti (2 GTX 1080s). SLI nowadays is in a so-so state. The core problem is that support for SLI is far from universal, a lot of games do not have any SLI profiles from NVIDIA. Some fundamentally do not work, but some can be made to work with custom profiles set through NVIDIA Profile Inspector. You can try to make profiles yourself or you can find whatever other people post online. In terms of hardware requirements, SLI before NVLink used to require 2 full PCIe x16 slots, but on 2080 Ti the NVLink bridge takes care of that, so it's an improvement. SLI in general is better on RTX/Turing than on Pascal, but the core issue of profiles/support still exists.
> 
> I don't really think SLI 2080 Ti is worth it unless you've got a 4K 144Hz monitor, in that case you need all the power you can get.


Yea, it seems like support is better nowadays, but there's not much information out there, or at least I'm not looking hard enough. Currently running an X27 on a 1070 (lol). Thanks for the reply.


----------



## VPII

willverduzco said:


> It generally stays at 2100 MHz and 1.05V while gaming using the Galax 380W BIOS, though it occasionally dips to 2085 MHz / 1.043V due to "Voltage" being the most common limit reason instead of "Power", as was the case on air. My temps while gaming stay under 45C, and I'm no longer drawing power for the fans from the GPU ever since I slapped on a 280mm AIO, which also helps.
> 
> 
> 
> 40C max while benching is nuts for anything other than a huge radiator and custom loop. I'm surprised you are getting that with an AIO hybrid. Which AIO are you using, what fans and RPM, how cold is your ambient room temperature, and what case do you have?
> 
> For reference, I have a 280mm AIO, and in my 27C room and Phanteks Evolv ATX TG case (known for its mediocre airflow), I only manage mid-40s while gaming and 50C while benching with fans at around 1100 RPM. At around 900 RPM, temps all rise about 3-4 degrees, so that's a much better tradeoff for me given that it's practically silent at that point.


My case is an open bench table, so the entire system is open. I fitted a Corsair H110 CW with an NZXT Kraken G12 onto the GPU. I'm pleasantly surprised by the memory temps, seeing that I do not have any heatsinks on them, but they go a little over 50C while benching. My temps did rise to 42C yesterday, but ambient was sitting around 28 to 29C, with around 33C outside.


----------



## J7SC

Christopher2178 said:


> Lurking since this thread opened so here’s my input finally..
> 
> I have an MSI Trio X w/Micron Memory on water 27c-29c at idle, about 37c-40c gaming (I hate the 40c clock drop Nvidia..why?), and 42c-45c max when benchmarking.
> Running 8K Superposition with VRAM +1200 and core is +125 which is giving me 2130mhz avg/low (2145mhz -2160mhz peaks) on the MSI 406 watt bios.
> On the 406 Watt bios highest power draw I have seen in GpuZ is about 401 watts pulled during an 8k Superposition but most games and even benchmarks max about 350-375 watts it seems..
> Just tried the lightning z bios and could hit same speeds basically but seemed to drop from 2160/2145 down to 2130 a little sooner and more often and highest wattage use in GPuZ during all benchmarks was 381... So back to the MSI 406 watt bios for me.. Thanks!



That's interesting stuff! Re. watts, you may still be running into issues relating to the 3x 8-pin (Lightning Z) vs the 2x 8-pin + 1x 6-pin (Trio X) power inputs. And just out of interest, which Lightning Z BIOS did you use (as it is a dual-BIOS card) - the regular one or the LN2 one? I believe the LN2 one goes up to 520W or so, but of course caution when using it on water...


----------



## willverduzco

VPII said:


> My case is an open bench table, so the entire system is open. I fitted a Corsair H110 CW with an NZXT Kraken G12 onto the GPU. I'm pleasantly surprised by the memory temps, seeing that I do not have any heatsinks on them, but they go a little over 50C while benching. My temps did rise to 42C yesterday, but ambient was sitting around 28 to 29C, with around 33C outside.


Ah, that makes a lot more sense since you're on an open bench. Still, those are great temps on the GPU core, and I'm glad you did so well with a 240mm.


----------



## VPII

willverduzco said:


> Ah, that makes a lot more sense since you're on an open bench. Still, those are great temps on the GPU core, and I'm glad you did so well with a 240mm.


I think it is a 280mm rad on the H110 CW. Well, it does seem bigger than the CoolerMaster ML240 I have on the CPU.


----------



## willverduzco

VPII said:


> I think it is a 280mm rad on the H110 CW. Well, it does seem bigger than the CoolerMaster ML240 I have on the CPU.


Ah, my apologies. I was under the incorrect impression that the H115i was Corsair's only 280mm AIO, but your H110 CW is also a 280. Open bench + 280mm makes it a no-brainer as to why your temps are so good (and better than my 280mm GPU AIO inside a case).


----------



## VPII

J7SC said:


> Hi VPII - ...ok, was just wondering why 3DMark didn't identify the vendor (i.e. Palit) in the link... with the results you have, though, you don't even need to load the 380W Galax... of course, in a few weeks, a little voice will appear and whisper in your ear 'flash the BIOS, flash the BIOS'


Hi J7SC... so I did it... flashed the 380W BIOS and found something really interesting. Running Time Spy I got about 2 fps more in the first test, but the second test failed. Looking at the sensor file I noticed the GPU clock was sitting at 2175 when it failed, so I dropped the OC by 15 MHz, which gives me the 2160 that worked with the stock BIOS. Unfortunately, with the 380 BIOS, applying 126% does not increase the wattage to 380W, which is clearly visible in the GPU-Z monitor file as it is stuck at 300W.

The reason I wanted to try the Galax 300-380W BIOS is that with the stock BIOS my clocks would drop to 2040, 2055 and thereabouts, with power seemingly being the limiting factor, as it would be around 330.xx or even above. I'll try the 380W BIOS again; maybe I'll download the KFA2 one to see if the BIOS file I saved from my previous GPU is stuffed.
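As a quick sanity check on what the slider should translate to (using the 300W default and 126% figures from the post above; the arithmetic is generic, the numbers are just those mentioned here):

```python
# Power-limit slider sanity check: the percentage applies to the BIOS's
# default board power. 300 W default and the 126% slider setting are the
# figures from the thread; the arithmetic itself is generic.
def target_power(default_w: float, slider_pct: float) -> float:
    """Board power target implied by a slider percentage."""
    return default_w * slider_pct / 100.0

print(target_power(300, 100))  # 300.0 -> stock limit
print(target_power(300, 126))  # 378.0 -> so a GPU-Z log pinned at 300 W
                               # means the higher-limit BIOS didn't take
```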


----------



## VPII

J7SC said:


> That's interesting stuff! Re. watts, you may still be running into issues relating to the 3x 8-pin (Lightning Z) vs the 2x 8-pin + 1x 6-pin (Trio X) power inputs. And just out of interest, which Lightning Z BIOS did you use (as it is a dual-BIOS card) - the regular one or the LN2 one? I believe the LN2 one goes up to 520W or so, but of course caution when using it on water...


Sorry for quoting you on this, but I got it working with the KFA2 BIOS. It seems my saved BIOS file might be faulty, which might be why my previous card died. I got a nice 15.2K Time Spy...

https://www.3dmark.com/3dm/32841820?


----------



## J7SC

VPII said:


> Hi J7SC... so I did it... flashed the 380W BIOS and found something really interesting. Running Time Spy I got about 2 fps more in the first test, but the second test failed. Looking at the sensor file I noticed the GPU clock was sitting at 2175 when it failed, so I dropped the OC by 15 MHz, which gives me the 2160 that worked with the stock BIOS. Unfortunately, with the 380 BIOS, applying 126% does not increase the wattage to 380W, which is clearly visible in the GPU-Z monitor file as it is stuck at 300W.
> 
> The reason I wanted to try the Galax 300-380W BIOS is that with the stock BIOS my clocks would drop to 2040, 2055 and thereabouts, with power seemingly being the limiting factor, as it would be around 330.xx or even above. I'll try the 380W BIOS again; maybe I'll download the KFA2 one to see if the BIOS file I saved from my previous GPU is stuffed.





VPII said:


> Sorry for quoting you on this, but I got it working with the KFA2 BIOS. It seems my saved BIOS file might be faulty, which might be why my previous card died. I got a nice 15.2K Time Spy...
> 
> https://www.3dmark.com/3dm/32841820?



Hello VPII - ...the 380W Galax / KFA2 was just a matter of time... glad you did it, though, and your card lived past 1350 MHz! Re. your post about 3DMark Time Spy, I seem to recall that the GT2 test is harder / more intensive on VRAM (while GT1 is harder on GPU clock), though once you're near the overall board watt limit, either GPU or VRAM may bark anyway.

Now all you need is a used A/C unit for your man cave, which you can direct at the AIO rads


----------



## VPII

J7SC said:


> Hello VPII - ...the 380W Galax / KFA2 was just a matter of time... glad you did it, though, and your card lived past 1350 MHz! Re. your post about 3DMark Time Spy, I seem to recall that the GT2 test is harder / more intensive on VRAM (while GT1 is harder on GPU clock), though once you're near the overall board watt limit, either GPU or VRAM may bark anyway.
> 
> Now all you need is a used A/C unit for your man cave, which you can direct at the AIO rads


The AC sounds about right to me... what I would usually do with an AIO on a GPU is take off the fans and drop it in a bucket of water with ice cubes in it. It works great and helps a lot. Max temp usually around 7C under full load.


----------



## J7SC

VPII said:


> The AC sounds about right to me... what I would usually do with an AIO on a GPU is take off the fans and drop it in a bucket of water with ice cubes in it. It works great and helps a lot. Max temp usually around 7C under full load.



...I used to do that with two submerged rads for tri- or quad-SLI, though water ice didn't last long with modded multi-GPUs... for added kick, I threw quite a few pounds of dry ice (frozen CO2) into the bucket of ice water >>> keep your distance, as it is quite a show, and short-term temps drop like a rock. And protect the rad with a metal grid, as the ice rocks 'dance violently' :kookoo:


----------



## pewpewlazer

Finally flashed the Galax BIOS on my EVGA card. Ran nvflash in Windows, which was a first for me. Rebooted, walked away, came back to a black screen. Rebooted, nothing. Had to hard power off my computer and then boot it back up. All good now. Strange. Gave me a bit of a scare though.

I was less than amused to realize that my Afterburner profiles were GONE.

OC Scanner was slightly more generous than on the EVGA BIOS: got +154 MHz average vs +140/141 MHz on the EVGA BIOS. Woo! Still worse than the curve I had from the Afterburner 4.6.0 beta 9 OC Scanner (+ slight tweaks).

Too lazy to spend all night playing with the V-F curve, I ended up with +175 MHz core, which gives me something like 2015-2130 around the 1.025V range. Any higher ends up crashing at 2045 MHz in the SotTR benchmark, regardless of what voltage point it's at (even with 1.05V, I think it was, it crashed). ~52C load.

I guess I should be satisfied with 2115 MHz. Could be better, could be worse...

Fun fact: ~750 MHz was the best I could get on my RAM with the stock EVGA XC POS cooler. 850 MHz crashed pretty quickly. On water, 1000 MHz passed no problem. Haven't tried further.

EDIT: for some reason, after flashing, my card was running 300-435 MHz 2D clocks with my dual-display setup. Shaved ~40W off idle power consumption, which was nice. After shutting down last night, it has been back to the usual 1350 MHz multi-monitor 2D clocks. Oh well.
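The flat-offset tradeoff described above can be put in numbers: a flat core offset shifts every V/F point by the same amount, so whichever point sits closest to its silicon limit is the one that crashes first, while a hand-tuned curve can raise each point individually. Everything below is hypothetical, purely to illustrate the mechanism:

```python
# Why a flat core offset can crash where a hand-tuned curve survives:
# the offset raises EVERY point, including ones already near their limit.
# Both tables are hypothetical illustrative values, not measurements.
base       = {0.975: 1950, 1.000: 1995, 1.025: 2040, 1.050: 2085}
stable_max = {0.975: 2070, 1.000: 2100, 1.025: 2130, 1.050: 2145}

def first_unstable(offset_mhz):
    """Return the first voltage point a flat offset pushes past its limit."""
    for volt, freq in sorted(base.items()):
        if freq + offset_mhz > stable_max[volt]:
            return volt
    return None

print(first_unstable(75))  # 1.05 -> only the top point overshoots
print(first_unstable(45))  # None -> a smaller flat offset stays stable
```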


----------



## jelome1989

LayZ_Pz said:


> 2100 MHz at ~1.03V on the GPU, and I went to +850 on the mem but did not try any further; it could probably take more, from what I have seen from others.
> 
> I am now back on the stock BIOS for various reasons.
> 
> Anyone know why, with a 450W BIOS, my card would hit a wall at 380 watts? Is it because of the two connectors instead of three? I have a be quiet! Dark Power Pro 11 1200W; should I go for two PCIe leads instead of one?


Yeah, try to push +1000; it seems like the Strix can handle it



willverduzco said:


> It generally stays at 2100 MHz and 1.05V while gaming using the Galax 380W BIOS, though it occasionally dips to 2085 MHz / 1.043V due to "Voltage" being the most common limit reason instead of "Power", as was the case on air. My temps while gaming stay under 45C, and I'm no longer drawing power for the fans from the GPU ever since I slapped on a 280mm AIO, which also helps.


Nice, great temps. Can't wait to put mine under water as well. Hopefully EK or Watercool stay faithful to their targeted release dates.
Hmm, yeah, I wonder how much power the fans draw



hrm said:


> Not sure why the non-X Trio exists, but I bought one after my 10/2018-build 2080 Ti Trio X crashed during gaming and finally failed to boot after 1 month's use. My Trio X had Micron memory and the highest mem OC was +700 MHz. This 01/2019-build non-X Trio has Samsung memory and with the stock BIOS gives a +130 MHz GPU OC (boosting around 2025-2050 MHz during gaming) and a +1200 MHz mem OC. Seems fine; hopefully it will last over a month.


What kind of workload did your Trio X go through? Can you estimate how many hours of use before it failed? My GPU is a little over 2 weeks old with 50+ hours of stress testing and gameplay. I'm trying to find the timeframe after which I can be confident that my GPU is good.



pegnose said:


> My ASUS Strix 2080 Ti OC does not run so well. It barely goes above 2000 MHz. I can only put +70 MHz onto the 1650 MHz default boost clock before it crashes, even if voltage and power target settings are maxed out in MSI Afterburner. I would like to ask two questions:


Yeah, this is the worst I've seen with a Strix, even counting the regular and Advanced versions. On what benchmarks / games did it crash? What about the memory overclock?


----------



## pegnose

jelome1989 said:


> Yeah, this is the worst I've seen with a Strix, even counting the regular and Advanced versions. On what benchmarks / games did it crash? What about the memory overclock?


Thanks for the reply! It crashes in Fire Strike 1080p, very quickly. Didn't try anything else after that. I can OC the VRAM to +950 without any artifacting in longer gaming sessions, so that part seems fine to me.

I also tried GPU Tweak to see whether ASUS has hidden some magic there. But to no avail, same less-than-mediocre overclock.

EDIT: the Unigine Heaven Extreme preset (windowed) crashes after ~1 min if the card is set to +90 on the GPU. And even then the boost seems very low; I see 1995 MHz after the first 30 s or so. Can I flash the Galax 380W BIOS too, or better not?


----------



## jelome1989

pegnose said:


> Thanks for the reply! It crashes in Fire Strike 1080p, very quickly. Didn't try anything else after that. I can OC the VRAM to +950 without any artifacting in longer gaming sessions, so that part seems fine to me.
> 
> I also tried GPU Tweak to see whether ASUS has hidden some magic there. But to no avail, same less-than-mediocre overclock.


So even in games, it's not stable at 2000 MHz+? What about temps? I really think it should be able to do at least 2050 MHz @ 1.093V in games. Well, going from 2000 to 2100 isn't that big of a deal in my experience anyway, and at least your VRAM OC is good.


----------



## pegnose

jelome1989 said:


> So even in games, it's not stable at 2000 MHz+? What about temps? I really think it should be able to do at least 2050 MHz @ 1.093V in games. Well, going from 2000 to 2100 isn't that big of a deal in my experience anyway, and at least your VRAM OC is good.


I find that games usually show instabilities better than benchmarks, even if it takes more time, because they offer more uneven workloads. After a while in Unigine Heaven with the Extreme preset, the clock sits at 1950 MHz while my card is at 70°C (default fan curve).

The card rarely ever shows a 2040 MHz boost, and I rarely see voltage go above 1.050V. Usually it sits there and complains about its voltage limit.


----------



## VPII

J7SC said:


> ...I used to do that with two submerged rads for tri- or quad-SLI, though water ice didn't last long with modded multi-GPUs... for added kick, I threw quite a few pounds of dry ice (frozen CO2) into the bucket of ice water >>> keep your distance, as it is quite a show, and short-term temps drop like a rock. And protect the rad with a metal grid, as the ice rocks 'dance violently' :kookoo:


I'm really happy with this card. 2160 MHz core is a go, and +1150 memory also seems to work.


----------



## acmilangr

LayZ_Pz said:


> acmilangr said:
> 
> 
> 
> So is there any bios compatible with Asus strix 2080ti OC?
> 
> 
> 
> Yes & no.
> 
> If you are willing to give up audio from the GPU and a DisplayPort, you can go with pretty much any of the BIOSes you see here.
> 
> For me the best was this one : https://www.techpowerup.com/vgabios/204557/kfa2-rtx2080ti-11264-180910
Click to expand...

Thanks for the answer.
Did you try it? Can I flash the default BIOS back again after that? I am planning to do it on the second BIOS of the Strix.


----------



## hrm

jelome1989 said:


> What kind of workload did your non X trio go under? Can you estimate how many hours of use before it failed? My GPU is a little over 2 weeks old with about 50hrs+ of stress testing and gameplay. I'm finding the timeframe where I can be confident that my GPU is good.


The X Trio failed after 20-30 hours of gaming (BF V and Dirt 4, graphics settings maxed out) plus a bit of stress testing with FurMark. During that time I played 90% with stock settings. I tried underclocking the GPU, which helped for a while, but within a few days of the crashing starting, the card failed to boot.


----------



## Asmodian

Just say no to Furmark.


----------



## LayZ_Pz

acmilangr said:


> Thanks for the answer.
> Did you try it? Can I flash the default BIOS back again after that? I am planning to do it on the second BIOS of the Strix.


Yep, I have tried it, and it was the best BIOS I have tried when it comes to clocks and stability. 

Yes you can; you can make a backup of all the BIOSes if that feels safer to you. 

The dual-BIOS feature is basically foolproof: even if the one you flash fails, you can safely boot from the other one and flash back over the broken one. I have tried all the 380W+ BIOSes out there, and none of them was broken, even the "un-verified" ones.


----------



## jelome1989

hrm said:


> The X Trio failed after 20-30 hours of gaming (BF V and Dirt 4, graphics settings maxed out) plus a bit of stress testing with FurMark. During that time I played 90% with stock settings. I tried underclocking the GPU, which helped for a while, but within a few days of the crashing starting, the card failed to boot.


Thanks for the reply. It seems like the majority of failures happen within the first 2 weeks of use. I've seen some fail after more than a month, but those are rare.


----------



## GAN77

Good day, guys!

Are there any owners of the MSI RTX 2080 Ti Sea Hawk EK X on the forum?

What are your impressions, and what temperatures are you seeing?


----------



## acmilangr

LayZ_Pz said:


> acmilangr said:
> 
> 
> 
> Thanks for the answer.
> Did you try it? Can I flash the default BIOS back again after that? I am planning to do it on the second BIOS of the Strix.
> 
> 
> 
> Yep, I have tried it, and it was the best BIOS I have tried when it comes to clocks and stability.
> 
> Yes you can; you can make a backup of all the BIOSes if that feels safer to you.
> 
> The dual-BIOS feature is basically foolproof: even if the one you flash fails, you can safely boot from the other one and flash back over the broken one. I have tried all the 380W+ BIOSes out there, and none of them was broken, even the "un-verified" ones.
Click to expand...

Thanks a lot. 
A question, please: how do I flash a BIOS on a GPU with dual BIOS? Just switch the BIOS (with the Strix button) and flash it? Or is there another way?


----------



## pegnose

acmilangr said:


> Thanks a lot.
> A question, please: how do I flash a BIOS on a GPU with dual BIOS? Just switch the BIOS (with the Strix button) and flash it? Or is there another way?


http://www.tomshardware.co.uk/answers/id-2833878/nvflash-dual-bios.html
Don't know if that works, though.


----------



## axiumone

I was pretty shocked when I got this one. This info was already available in TechPowerUp's database, but I didn't even think to look, as it's such a high-end GPU. In any case, it's a regular non-A chip with a max power limit of 112%. 

Does anyone know if there's a way to flash non-A chips yet? I've tried a few different nvflash revisions; they all flash my A-chip card without problems, but this card refuses to flash, reporting a PCI board ID mismatch.


----------



## pegnose

axiumone said:


> I was pretty shocked when I got this one. This info was already available in TechPowerUp's database, but I didn't even think to look, as it's such a high-end GPU. In any case, it's a regular non-A chip with a max power limit of 112%.
> 
> Does anyone know if there's a way to flash non-A chips yet? I've tried a few different nvflash revisions; they all flash my A-chip card without problems, but this card refuses to flash, reporting a PCI board ID mismatch.



In case you mean a "PCI subsystem ID mismatch", you can use the "-6" parameter on recent versions: "nvflash64.exe -6 bios.rom".

I am not sure it is advisable, or pays off, though, to flash a "performance" BIOS onto a non-A card.
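For reference, the backup-then-flash sequence being discussed can be sketched as a small helper that only *builds* the command lines instead of running them. The executable name and the `--save` / `-6` flags follow the usage quoted in this thread; double-check them against your nvflash version's help output before actually flashing anything.

```python
# Sketch only: constructs the nvflash command lines for a safe flash sequence
# (dump the current vBIOS first, then flash while overriding the subsystem ID
# check with -6). Nothing here executes nvflash.

def nvflash_commands(new_bios="bios.rom", backup="backup.rom"):
    """Return the command lines to back up the running vBIOS and then
    flash a new one past a PCI subsystem ID mismatch."""
    return [
        ["nvflash64.exe", "--save", backup],  # dump the current vBIOS first
        ["nvflash64.exe", "-6", new_bios],    # -6 skips the subsystem ID check
    ]

for cmd in nvflash_commands():
    print(" ".join(cmd))
```

Printing the commands instead of running them makes it easy to review the sequence before committing; on a dual-BIOS card you would switch to the second BIOS before running the flash line.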


----------



## Krzych04650

pewpewlazer said:


> Finally flashed the GALAX BIOS on my EVGA card. Ran nvflash in Windows, which was a first for me. Rebooted, walked away, came back to a black screen. Reboot, nothing. Had to hard power off my computer then boot it back up. All good now. Strange. Gave me a bit of a scare though.
> 
> Was less than amused to realize that my afterburner profiles were GONE.
> 
> OC scanner was slightly more generous than on the EVGA BIOS. Got +154mhz average vs +140/141mhz on EVGA BIOS. Woo! Still worse than the curve I had from the afterburner 4.6.0 beta 9 OC scanner (+ slight tweaks).
> 
> Too lazy to spend all night playing with the V-F curve, I ended up with +175mhz core, which gives me something like 2015-2130 around the 1.025v range. Any higher ends up crashing at 2045mhz in SotTR benchmark, regardless of what voltage point its at (even with 1.05v I think it was, it crashed). ~52*C load.
> 
> I guess I should be satisfied with 2115mhz. Could be better, could be worse...
> 
> Fun fact, ~750mhz was the best I could get on my ram with the stock EVGA XC POS cooler. 850mhz crashed pretty quick. On water, 1000mhz passed no problem. Haven't tried further.
> 
> EDIT: for some reason, after flashing, my card was running 300-435mhz 2d clocks with my dual display setup. Shaved ~40w off idle power consumption, which was nice. After shutting down last night, it has been back to the usual 1350mhz multi-monitor 2d clocks. Oh well.


MSI AB profiles are not deleted when you flash a BIOS; they are still there and will show as available again once you flash back to the original BIOS.

You don't have to spend the whole night on the V-F curve; just find the voltage at which the card is not power throttling. For a 380W BIOS that should be around 1.037 V, so set for example 2100 MHz @ 1.037 V; if you don't power throttle, the voltage will stay fixed and only the clock will change with temperature. A full-range, multiple-point V-F curve is only needed if, for example, you play with an unlocked framerate and want the card pushing 100% at all times; then you also configure the higher voltage points so the card can go higher when it is not power limited. This is also good for increasing benchmark scores.

I also had very low idle clocks and power draw for some time, with a single monitor; the card was drawing about 24 W at idle. But now, for some reason, I am back to 1350 MHz and 60 W idle power. This doesn't seem unusual, though; when I had 1080s in SLI, they both drew 50 W at idle as well, also on a single display.
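The single-point approach described above (raise the target voltage point, flatten everything above it) can be sketched in a few lines. The curve format here, a list of (volts, MHz) pairs, is a stand-in for what Afterburner's curve editor shows graphically, and the numbers are illustrative only.

```python
# Toy illustration of pinning a V-F curve at one voltage point: the target
# point gets the desired clock, every higher-voltage point is capped to the
# same frequency (so the card parks at that voltage instead of chasing
# higher bins), and lower points are kept monotonic.

def flatten_vf_curve(curve, v_target, f_target):
    """Return a new V-F curve pinned to f_target at and above v_target."""
    flattened = []
    for volts, mhz in curve:
        if volts >= v_target:
            mhz = f_target            # cap everything at/above the target
        else:
            mhz = min(mhz, f_target)  # keep the curve monotonic below it
        flattened.append((volts, mhz))
    return flattened

stock = [(0.950, 1900), (1.000, 1980), (1.037, 2040), (1.050, 2070), (1.093, 2100)]
print(flatten_vf_curve(stock, 1.037, 2100))
```

With the 380W BIOS example from the post, the card then sits at 1.037 V under load and only the clock moves with temperature.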


----------



## pewpewlazer

Krzych04650 said:


> MSI AB profiles are not deleted when you flash a BIOS; they are still there and will show as available again once you flash back to the original BIOS.
> 
> You don't have to spend the whole night on the V-F curve; just find the voltage at which the card is not power throttling. For a 380W BIOS that should be around 1.037 V, so set for example 2100 MHz @ 1.037 V; if you don't power throttle, the voltage will stay fixed and only the clock will change with temperature. A full-range, multiple-point V-F curve is only needed if, for example, you play with an unlocked framerate and want the card pushing 100% at all times; then you also configure the higher voltage points so the card can go higher when it is not power limited. This is also good for increasing benchmark scores.
> 
> I also had very low idle clocks and power draw for some time, with a single monitor; the card was drawing about 24 W at idle. But now, for some reason, I am back to 1350 MHz and 60 W idle power. This doesn't seem unusual, though; when I had 1080s in SLI, they both drew 50 W at idle as well, also on a single display.


Are you running your monitor at 75 Hz? Running a single display above 60 Hz kicks the card into higher 2D clocks as well, IIRC.

I figured the AB profiles would still be there if I flashed back to the original BIOS, but I wasn't about to flash back just to write down all my offsets. Oh well. I wish I had taken better notes on what clocks/voltages my card ran on water with the EVGA BIOS. I know I was sitting around 2055-2070 MHz in games for the most part, but I'm not sure of the actual voltage. I think 1.006-1.012 V was about the highest it would go before throttling with the 338W PL, though.

I've been playing with the V-F curve this morning and I think I have something that works. The card, at least in Fire Strike Ultra, is rather finicky. Even a blanket +150 offset seemed to crash almost immediately in GT2, since the clock speed would shoot up to 2145 MHz. I tried lowering offsets to keep it under 2145 MHz, but it just ended up crashing GT2 at 2115 MHz @ 1.037 V. 

Eventually I got the card to pass GT2 at 2100 MHz @ 1.037 V using a +162 offset (with everything beyond lowered to match). Then I bumped all the offsets from 0.987 V through 1.025 V up from +147 to +162. All of a sudden GT2 was running 2115 MHz @ 1.037 V at the same temps as before, except it wasn't crashing. GPU Boost is a real PITA...

Superposition 4K Optimized = 13454 (the card almost flatlined at 2100 MHz @ 1.025 V, with a few dips to 2085 MHz @ 1.012 V).

The Shadow of the Tomb Raider benchmark (4K, max settings) sits at 2100 MHz with minimal dips down to 2085 MHz as well. After sitting at the results screen for a bit, my computer locked up when I went to exit back to the menu. I suppose +1000 MHz mem isn't completely stable. Dropped back to +750 for now and all is well.

Overall, the extra 42 watts of power-limit breathing room seems to have gained me 30 MHz. I have no idea if 2100 MHz @ 1.025 V is any good or not, but it seems that's about my card's limit.


----------



## axiumone

pegnose said:


> In case you mean a "PCI subsystem ID mismatch", you can use the "-6" parameter on recent versions: "nvflash64.exe -6 bios.rom".
> 
> I am not sure it is advised, or pays off, though, to flash a "performance" bios onto a non-A.


I've tried all the different parameters. The A chip flashes without any issues; the non-A is blocked and won't let you flash an A-chip card's BIOS. I think a modded version of nvflash is needed.


----------



## jelome1989

Anybody used Phanteks Glacier waterblock here? Wondering how it performs.


----------



## GAN77

jelome1989 said:


> Anybody used Phanteks Glacier waterblock here? Wondering how it performs.


https://www.tomshw.de/2018/10/10/ph...ehlung-und-ein-treffen-bei-380-watt-igorslab/
https://forum.aquacomputer.de/index...19&h=690325078e314a07fc03c4f20d95b911f27176f6


----------



## jelome1989

GAN77 said:


> https://www.tomshw.de/2018/10/10/ph...ehlung-und-ein-treffen-bei-380-watt-igorslab/
> https://forum.aquacomputer.de/index...19&h=690325078e314a07fc03c4f20d95b911f27176f6


Yeah, I already saw those when I did a little research. I posted here to ask for confirmation from other users. But thanks for the links anyway!


----------



## Glerox

wrong thread


----------



## Rune

Things are going well so far. Day two and I have the FTW3 Ultra at 2190 mostly stable on the stock cooler. Going to be ordering the hydrocopper block shortly I think. There's some room left here even if only by temperature. Now if only I had _that one_ bios...

Edit: Or...if there is a version of the ftw3 for this...


----------



## toncij

Has anyone found an advantage in the FTW3 so far? Or is it the same as any other?


----------



## Doni Weiss

Has anyone switched the BIOS on an MSI Gaming Trio (non-X version with 1635 MHz boost) to the 406W BIOS? And how did it work out?

The non-X version can only raise the power limit to 110% with the current BIOS.


----------



## eux

Anyone have a Lightning Z yet? Interested in hearing how this card is and whether the 450W BIOS works.


----------



## Jpmboy

axiumone said:


> I've tried all the different parameters. The A chip flashes without any issues; the non-A is blocked and won't let you flash an A-chip card's BIOS. I think a modded version of nvflash is needed.


it's actually not simply a matter of a modified nvflash.


----------



## Monstieur

Garrett1974NL said:


> Look here's how I did it... and yeah that's the Geforce font
> The rectangular cutout on the metal plate (on the memory and VRM) is JUST large enough to let the block 'in' if that makes sense.
> I used some Swiftech nuts, springs and screws to mount it on the card.
> The holes I used correspond with the AMD hole distances.


Which mounting holes (inner / outer) connect what to what (base plate / mid plate / heat sink) on the Inno3D X2? I want to see if I can fit an Accelero Xtreme III and keep the base plate and / or mid plate.


----------



## VPII

I discovered something interesting. I was usually able to run my Palit RTX 2080 Ti GamingPro OC at +180 core and +1150 mem without an issue. This gave me 2160 MHz core, though we all know that even with water cooling it will still drop to 2145 or 2130 due to heat. At present this manual overclock would not work, so I played around in the curve editor and, at 1.053 V, pushed the clock to 2160 MHz, as I saw it was stuck at 1.043 V when doing the manual OC. Well, it worked, but my result in Time Spy is about 500 points below the best I got with the manual OC. I have not touched the voltage slider in MSI AB, so maybe that would do the trick, but why would it work before and not now?


----------



## SoldierRBT

Has anyone with an MSI RTX 2080 Ti Gaming X Trio flashed it with the 406W BIOS? Did it work well?


----------



## Shawnb99

Damn EK.

I made the mistake of breaking one of those extender thingies on my EK rad, and I'm going on week two trying to find a replacement without buying a new rad. 
Such a headache. 


Sent from my iPhone using Tapatalk


----------



## J7SC

Shawnb99 said:


> Damn EK.
> 
> I made the mistake of breaking one of those extender thingies on my EK rad, and I'm going on week two trying to find a replacement without buying a new rad.
> Such a headache.
> 
> 
> Sent from my iPhone using Tapatalk



Not sure what you mean by 'extender thingies' - pictures? No way to repair it with special silicone adhesives?


----------



## Glerox

Shawnb99 said:


> Damn EK.
> 
> I made the mistake of breaking one of those extender thingies on my EK rad, and I'm going on week two trying to find a replacement without buying a new rad.
> Such a headache.
> 
> 
> Sent from my iPhone using Tapatalk


Damn, that's weird, I always get excellent service from EK.


----------



## zack_orner

SoldierRBT said:


> Has anyone with an MSI RTX 2080 Ti Gaming X Trio flashed it with the 406W BIOS? Did it work well?


Yes it works fine, this card likes the higher power limit.

Sent from my SM-N950U using Tapatalk


----------



## SoldierRBT

zack_orner said:


> Yes it works fine, this card likes the higher power limit.
> 
> Sent from my SM-N950U using Tapatalk


Did you see any improvement on the core clocks?


----------



## bbmaster123

Hey, wondering if someone can answer this for me. I've got a Duke OC that I've flashed to the Galax HOF 450W BIOS. I decided today I wanted to play around with the RGB settings, since the rainbow was getting just a little annoying. I installed Mystic Light 3, got an error ("platform not supported"), and the lighting on the card shut off. Since I can't get into the Mystic Light app, it's just stuck without lighting.

Of course I considered that it could be the vBIOS not matching, but it's not exactly ideal to reflash to stock to change the RGB, then flash back, and still not be able to control the lighting. Has anyone else had this issue?

Also, I seem to be pulling a max of 380W on this BIOS on air at around 2050 MHz. If I dropped down to a 400W or 380W BIOS, would that impact my power draw, given I'm not even hitting close to that high? And would flashing another vBIOS let me control the RGB with its associated utility?

Thanks to anyone who can help answer my questions.
Cheers


----------



## zack_orner

SoldierRBT said:


> Did you see any improvement on the core clocks?


The memory OC is better for me; the core not as much, as I'm fighting temps at the moment, about to water cool it. It does clock better if you set a custom V-F curve.

Sent from my SM-N950U using Tapatalk


----------



## Shawnb99

J7SC said:


> Not sure what you mean by 'extender thingies' - pictures? No way to repair it with special silicone adhesives?




The port on the rad that you screw the fittings into. On the website it's listed as a G1/4 extender, but when I contacted support and had them send that part, it was the wrong one.

I'll post a pic of it when I get off work if I can. I'll likely have to send a few in to EK to sort this out.
That, or just buy a new rad. 


Sent from my iPhone using Tapatalk


----------



## Thoth420

Does anyone here play Hitman 2 and have it randomly crash to desktop, with "breakpoint reached" or some such in the Event Viewer, and sometimes the driver crashing too? This happened on a totally different 1080 Ti system as well.


----------



## Carillo

bbmaster123 said:


> Hey, wondering if someone can answer this for me. I've got a Duke OC that I've flashed to the Galax HOF 450W BIOS. I decided today I wanted to play around with the RGB settings, since the rainbow was getting just a little annoying. I installed Mystic Light 3, got an error ("platform not supported"), and the lighting on the card shut off. Since I can't get into the Mystic Light app, it's just stuck without lighting.
> 
> Of course I considered that it could be the vBIOS not matching, but it's not exactly ideal to reflash to stock to change the RGB, then flash back, and still not be able to control the lighting. Has anyone else had this issue?
> 
> Also, I seem to be pulling a max of 380W on this BIOS on air at around 2050 MHz. If I dropped down to a 400W or 380W BIOS, would that impact my power draw, given I'm not even hitting close to that high? And would flashing another vBIOS let me control the RGB with its associated utility?
> 
> Thanks to anyone who can help answer my questions.
> Cheers


Hey. I unfortunately don't have an answer to your question, but I'm wondering about doing the same thing with my Zotac 2080 Ti AMP. I have the Galax 380W BIOS and a custom water loop with a 1/2 HP chiller. Even with 20°C load temps, I'm hitting the power limit hard in tests like Time Spy. I was hoping to find someone who has flashed a 3x8-pin BIOS to a 2x8-pin card like you have. Have you used this BIOS for a while? https://www.3dmark.com/spy/5985990


----------



## ReFFrs

Thoth420 said:


> Does anyone here play Hitman 2 and have it randomly crash to desktop, with "breakpoint reached" or some such in the Event Viewer, and sometimes the driver crashing too? This happened on a totally different 1080 Ti system as well.


Oh yes, this is what constantly happens to me on a 2080 Ti (Samsung memory). Hitman 2 either crashes within 5 minutes of launching any mission or works stably for at least an hour (haven't tested longer yet). It's 50/50 on each launch, and I have tried many.

I'm using 1440p + 150% resolution scaling, and the game version is a pretty early one (not the latest update). My driver is the 417.75 hotfix.

At first I thought it was an unstable OC, but after returning to stock values, Hitman 2 still crashed in exactly the same way. It always writes "Display driver nvlddmkm stopped responding and has successfully recovered" to the System log in Windows Event Viewer. In the Application log it also writes:

Faulting application name: HITMAN2.exe, version: 2.11.0.0
Faulting module name: HITMAN2.exe, version: 2.11.0.0
Exception code: 0x80000003

Other games work fine with the OC, as do heavy benchmarks run in a loop for hours. Only Hitman 2 has this problem.

There are many reports of this issue online, so it looks like purely a game issue (not a failing GPU). The game developers and NVIDIA know about it and are planning to fix it, but haven't had any success yet, as mentioned in the 23 Jan update: https://steamcommunity.com/app/863550/discussions/0/1741106440017161777/

https://steamcommunity.com/app/863550/discussions/0/1742231705660157322/

https://forums.geforce.com/default/...creens-then-ctd-nvlddmkm-stopped-responding-/


----------



## krizby

ReFFrs said:


> Oh yes, this is what constantly happens to me on a 2080 Ti (Samsung memory). Hitman 2 either crashes within 5 minutes of launching any mission or works stably for at least an hour (haven't tested longer yet). It's 50/50 on each launch, and I have tried many.
> 
> I'm using 1440p + 150% resolution scaling, and the game version is a pretty early one (not the latest update). My driver is the 417.75 hotfix.
> 
> At first I thought it was an unstable OC, but after returning to stock values, Hitman 2 still crashed in exactly the same way. It always writes "Display driver nvlddmkm stopped responding and has successfully recovered" to the System log in Windows Event Viewer. In the Application log it also writes:
> 
> Faulting application name: HITMAN2.exe, version: 2.11.0.0
> Faulting module name: HITMAN2.exe, version: 2.11.0.0
> Exception code: 0x80000003
> 
> Other games work fine with the OC, as do heavy benchmarks run in a loop for hours. Only Hitman 2 has this problem.
> 
> There are many reports of this issue online, so it looks like purely a game issue (not a failing GPU). The game developers and NVIDIA know about it and are planning to fix it, but haven't had any success yet, as mentioned in the 23 Jan update: https://steamcommunity.com/app/863550/discussions/0/1741106440017161777/
> 
> https://steamcommunity.com/app/863550/discussions/0/1742231705660157322/
> 
> https://forums.geforce.com/default/...creens-then-ctd-nvlddmkm-stopped-responding-/



I had this problem when my overclock was unstable. Hitman 2 is a pretty light load, so your 2080 Ti pretty much runs at maximum frequency/voltage all the time. If you get a CTD early on, reduce the overclock by 30 MHz; if you can play for a while and it crashes randomly, reduce it by 15 MHz. Another alternative is keeping your previous stable overclock and just undervolting to 1.000 V.
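The rule of thumb above can be written out as a tiny helper: a crash-to-desktop within the first few minutes means the offset is far off (drop 30 MHz), while a random crash after a longer session means you are close (drop 15 MHz). The 5-minute threshold below is my reading of "early on", not a number from the post.

```python
# Sketch of the step-down stability heuristic described above.

def next_offset(offset_mhz, crash_after_min):
    """Suggest a new core-clock offset given when the game crashed.
    crash_after_min=None means the session was stable."""
    if crash_after_min is None:
        return offset_mhz          # stable: keep the current offset
    if crash_after_min < 5:
        return offset_mhz - 30     # early CTD: back off a big step
    return offset_mhz - 15         # late random crash: back off a small step

print(next_offset(150, 2))     # early CTD  -> 120
print(next_offset(150, 45))    # late crash -> 135
print(next_offset(150, None))  # stable     -> 150
```

Repeating this after each crash converges on a stable offset in a couple of iterations, which is cheaper than re-running a full benchmark suite per step.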


----------



## ReFFrs

krizby said:


> I had this problem when my overclock was unstable. Hitman 2 is a pretty light load, so your 2080 Ti pretty much runs at maximum frequency/voltage all the time. If you get a CTD early on, reduce the overclock by 30 MHz; if you can play for a while and it crashes randomly, reduce it by 15 MHz. Another alternative is keeping your previous stable overclock and just undervolting to 1.000 V.


As I mentioned, it crashed even at stock settings without an OC, with only the power limit pushed to the max of 366W.

I agree that this game can crash due to an unstable OC as well, but the 0x80000003 crash I described above is engine-related and recognized as such by the developers and NVIDIA. 

Read this message from the game dev carefully: https://steamcommunity.com/app/863550/discussions/0/1741106440017161777/

It's a shame they haven't been able to fix it for so long.


----------



## krizby

ReFFrs said:


> As I mentioned, it crashed even at stock settings without an OC, with only the power limit pushed to the max of 366W.
> 
> I agree that this game can crash due to an unstable OC as well, but the 0x80000003 crash I described above is engine-related and recognized as such by the developers and NVIDIA.
> 
> Read this message from the game dev carefully: https://steamcommunity.com/app/863550/discussions/0/1741106440017161777/
> 
> It's a shame they haven't been able to fix it for so long.


Somehow the CTD in Hitman 2 is related not to the overclock but rather to voltage, so when you increase the power limit the card runs at higher voltages and crashes. Try undervolting to below 1.000 V and see if that improves stability. I haven't had any crashes in Hitman 2 since undervolting.


----------



## dante`afk

I've seen lots of reports that the 373W EVGA BIOS results in higher scores/OC than the 380W or even 400W+ BIOSes.




knightriot said:


> Bye EK and welcome Heatkiller IV; room temp ~30°C, full load at 49°C
> Clock: 2055-2100
> Mem: +1000


49°C under load? Check the contact between the block and GPU; that temp is terrible. It should not go over 38-40°C.


----------



## ComansoRowlett

dante`afk said:


> I've seen lots of reports that the 373W EVGA BIOS results in higher scores/OC than the 380W or even 400W+ BIOSes.
> 
> 
> 
> 
> 49°C under load? Check the contact between the block and GPU; that temp is terrible. It should not go over 38-40°C.


A 373W BIOS outperforming 380/400W+ BIOSes? That sounds really strange; any idea why?


----------



## knightriot

dante`afk said:


> I've seen lots of reports that the 373W EVGA BIOS results in higher scores/OC than the 380W or even 400W+ BIOSes.
> 
> 
> 
> 
> 49°C under load? Check the contact between the block and GPU; that temp is terrible. It should not go over 38-40°C.


My water temp reaches 40°C and my room temp is ~30°C, so I think this is good. And I play games at 4K, so the GPU is at 100% load all the time.


----------



## nrpeyton

Hey all -- long time no see...

Has anyone tried the new OC Scanner built into the newest beta release of MSI Afterburner?


It seems like a bit of a power virus to me. Even at only 1.0 to 1.05 V it was hitting 500+ watts. I had connected temperature probes to my card's VRM over a year ago and had literally forgotten about them until yesterday, when the overheating alarm on my VRM started to sound. I thought, "What the hell is that noise?", took a closer look, and, as I suspected, the temperature of my VRM (under water) was higher than I've ever seen it.

I felt compelled to stop the test.

Now, I know I have liquid metal on the shunt resistors AND am using the STRIX BIOS, so my card effectively has no power limit. But it has NEVER, until now, drawn this much power from the wall as long as I kept the voltage in check. I'd expect to need at least 1.2 V to consume 500 watts.


Whatever algorithm they are using in this new OC Scanner, it is more dangerous than FurMark as a power virus.

Any thoughts?


Nick
P.S. 
I know my card is the previous generation, but everything done here would apply there, or vice versa; the only difference is the generation. The power-virus problem affects the 20 series just as much. And I figured this forum is probably more active.
Thanks.
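A quick sanity check on the "at least 1.2 V for 500 W" intuition above, using the usual dynamic-power scaling P ≈ P0 · (f/f0) · (V/V0)². The 300 W / 1.05 V / 2100 MHz baseline is an illustrative operating point for a shunt-modded card under a normal gaming load, not a measurement from the post.

```python
# Back-of-the-envelope dynamic-power scaling from a known operating point.

def scaled_power(p0_w, v0, f0_mhz, v, f_mhz):
    """Estimate power at a new voltage/clock, assuming activity stays constant."""
    return p0_w * (f_mhz / f0_mhz) * (v / v0) ** 2

same = scaled_power(300, 1.05, 2100, 1.05, 2100)    # sanity check: same point
at_1v2 = scaled_power(300, 1.05, 2100, 1.20, 2100)  # 1.2 V at the same clock
print(round(same), round(at_1v2))  # prints: 300 392
```

On this model, even 1.2 V at the same clock only reaches ~390 W, so a scanner load hitting 500 W at 1.05 V implies a much higher switching-activity factor than a normal game, which is exactly what "power virus" describes.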


----------



## Martin778

I couldn't help myself; as soon as I saw the Lightning Z in store, I bought it... sorry. I will try to find a cheap NVLink connector to try it in NVLink SLI with my Aorus Xtreme before I sell the Aorus.


----------



## LayZ_Pz

dante`afk said:


> I've seen lots of reports that the 373W EVGA BIOS results in higher scores/OC than the 380W or even 400W+ BIOSes.
> 
> 
> 
> 
> 49°C under load? Check the contact between the block and GPU; that temp is terrible. It should not go over 38-40°C.


Any specific one in mind? I'm curious to give it a go. A link would be perfect.


----------



## Thoth420

krizby said:


> I had this problem when my overclock was unstable. Hitman 2 is a pretty light load, so your 2080 Ti pretty much runs at maximum frequency/voltage all the time. If you get a CTD early on, reduce the overclock by 30 MHz; if you can play for a while and it crashes randomly, reduce it by 15 MHz. Another alternative is keeping your previous stable overclock and just undervolting to 1.000 V.


Nothing in my system is OC'd at the moment aside from the RAM running XMP 3200, and the GPU is just running its factory OC. I suspect the game may be super RAM-sensitive, and perhaps fiddling in the BIOS may solve it, but I have never had to do a manual memory setup because I am an XMP baby, so I am a bit lost.


----------



## kot0005

ReFFrs said:


> Oh yes, this is what constantly happens to me on a 2080 Ti (Samsung memory). Hitman 2 either crashes within 5 minutes of launching any mission or works stably for at least an hour (haven't tested longer yet). It's 50/50 on each launch, and I have tried many.
> 
> I'm using 1440p + 150% resolution scaling, and the game version is a pretty early one (not the latest update). My driver is the 417.75 hotfix.
> 
> At first I thought it was an unstable OC, but after returning to stock values, Hitman 2 still crashed in exactly the same way. It always writes "Display driver nvlddmkm stopped responding and has successfully recovered" to the System log in Windows Event Viewer. In the Application log it also writes:
> 
> Faulting application name: HITMAN2.exe, version: 2.11.0.0
> Faulting module name: HITMAN2.exe, version: 2.11.0.0
> Exception code: 0x80000003
> 
> Other games work fine with the OC, as do heavy benchmarks run in a loop for hours. Only Hitman 2 has this problem.
> 
> There are many reports of this issue online, so it looks like purely a game issue (not a failing GPU). The game developers and NVIDIA know about it and are planning to fix it, but haven't had any success yet, as mentioned in the 23 Jan update: https://steamcommunity.com/app/863550/discussions/0/1741106440017161777/
> 
> https://steamcommunity.com/app/863550/discussions/0/1742231705660157322/
> 
> https://forums.geforce.com/default/...creens-then-ctd-nvlddmkm-stopped-responding-/



I had to reduce my OC by a lot to play Hitman 2. Try turning off ShadowPlay; it gave me more stability.


----------



## bbmaster123

Carillo said:


> Hey. I unfortunately dont have an answear to your question, but i wondering to do the same thing with my Zotac 2080 Ti AMP. I have the Galax 380w bios and on custum water loop with a 1/2 HP chiller. Even with 20c load temps, im hitting powerlimit hard in tests like Time Spy. I was hoping to find someone that have flashed a 3x8 pins bios to 2x8 pins card like you. Have you used this bios for a while ? https://www.3dmark.com/spy/5985990


I flashed the BIOS the day I got my card, haha. 
It's been working great so far, but I haven't tried any other BIOSes, as this was the best one available at the time. If there's a better one, let me know; my card can definitely handle faster GPU clocks (although it only has 7690 MHz Micron RAM).
As for your card, I don't see why it wouldn't work. If you've gotten as far as flashing your BIOS, use a chiller, and have such a sweet setup, you are probably more than capable of fixing it if something were to go wrong. 

Is that your 24/7 computer, or do you just use it for benchmarks? 

Also, my RGB came back on. I guess it needed to be shut off completely to reinitialize fully for some reason. Still no control over it, though; just rainbows.


----------



## Rune

Rune said:


> Things are going well so far. Day two and I have the FTW3 Ultra at 2190 mostly stable on the stock cooler. Going to be ordering the hydrocopper block shortly I think. There's some room left here even if only by temperature. Now if only I had _that one_ bios...
> 
> Edit: Or...if there is a version of the ftw3 for this...


Update: 2220 MHz seems to be the highest I can safely get this guy on air. Pretty nice one, EVGA! Now I just need the block and the BA BIOS...


----------



## ReFFrs

Rune said:


> Update: 2220 MHz seems to be the highest I can safely get this guy on air. Pretty nice one, EVGA! Now I just need the block and the BA BIOS...


Where's the proof? Validated benchmark results? Nobody believes you can reach even 2000 MHz on air.


----------



## Thoth420

kot0005 said:


> I had to reduce my oc by a lot to play hitman 2. Try turning off shadowplay, it gave me more stability.


I don't believe I have ShadowPlay installed. I don't install GFE, so if it's packaged with that, it's not on my system. Ansel is disabled, however. I really think it's the XMP profile on this board's release BIOS not being 100% stable. From some reading, I should apparently enter my RAM timings, voltage, etc. manually, as well as adjust VCCIO and VCCSA. I'm pretty lost; I've never really had to run without XMP. I want to try this before flashing the motherboard BIOS.


----------



## Carillo

bbmaster123 said:


> I flashed the bios the day I got my card haha
> its been working great so far, but I haven't tried any other bioses as this was the best one available at the time. If there's a better one, let me know, my card can definitely handle faster gpu clocks (although only 7690mhz micron ram)
> As for your card, I don't see why it wouldn't work. If you've gotten as far as flashing your bios, and use a chiller, and have such a sweet setup, you are probably more than capable of fixing it too if something were to go wrong.
> 
> Is that your 24/7 computer, or do you just use it for benchmarks?
> 
> Also my rgb came back on. I guess it needed to be shut off completely to reinitialize fullly for some reason. Still no control over it though, just rainbowssss


Haha, I tried flashing the HOF BIOS. It worked; well, I didn't brick the card. Performance-wise, it hit the power limit already at 340W. My guess is that the voltage drop used to calculate power draw is measured across all three 8-pin power connectors, which means that when you run a three-connector BIOS on a two-connector PCB, it looks like you're drawing a lot more power than you actually are. I flashed back to the 380W BIOS, which is a much better performer, and I think you'd benefit from doing the same. I haven't tried the EVGA 373W BIOS yet, but I will later today. Yes, my chiller is for 24/7 use. I also ordered a 7980XE yesterday so I can try to join the Time Spy total-score hall of fame. I did improve my Time Spy score yesterday; currently 66th place in the Time Spy graphics hall of fame: https://www.3dmark.com/spy/5995770
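If that guess is right, the overestimate would scale with the connector ratio. Here is a toy model of it (made-up numbers, not actual firmware behaviour):

```python
# Toy model of the guess above: the BIOS extrapolates total power from
# per-rail shunt readings and assumes the load is split across three 8-pin
# inputs. On a two-connector PCB each populated rail carries more of the
# load, so a per-rail extrapolation overstates the total draw.

def reported_power(actual_watts: float, bios_rails: int, pcb_rails: int) -> float:
    """Power the firmware would report if it extrapolated per-rail draw."""
    per_rail = actual_watts / pcb_rails   # real draw seen on each populated rail
    return per_rail * bios_rails          # firmware assumes every rail looks like this

# A real ~227 W draw on a 2x8-pin card running a 3x8-pin BIOS:
print(reported_power(227, bios_rails=3, pcb_rails=2))  # 340.5
```

On that model, the HOF BIOS reading 340W would correspond to only about 227W of real draw, which would explain hitting the limit so much earlier than expected.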


----------



## Krzych04650

ReFFrs said:


> Oh yes, this is what constantly happens to me on 2080 Ti (Samsung memory). Hitman 2 is either crashing withing 5 minutes after any mission launch or working stable at least for hour (haven't tested longer yet). 50/50 on each launch and I have tried many of them.
> 
> I'm using 1440p + 150% resolution scaling and the game version is pretty early one (not the latest update). My driver is 417.75 hotfix.
> 
> At first I thought it was unstable OC, but after returning to stock values Hitman 2 still crashed exactly in the same way. It always writes "Display driver nvlddmkm stopped responding and has successfully recovered" into System log in Windows Event Viewer. In Application log it also writes:
> 
> Faulting application name: HITMAN2.exe, version: 2.11.0.0
> Faulting module name: HITMAN2.exe, version: 2.11.0.0
> Exception code: 0x80000003
> 
> Other games working fine with OC and also heavy benchmarks for hours launched in a loop. Only Hitman 2 has this problem.
> 
> There are many reports of this issue online, so looks like solely a game issue (not failing GPU). Game developers with nVidia know about it and planning to fix, but haven't reached any success yet as mentioned in 23 Jan update: https://steamcommunity.com/app/863550/discussions/0/1741106440017161777/
> 
> https://steamcommunity.com/app/863550/discussions/0/1742231705660157322/
> 
> https://forums.geforce.com/default/...creens-then-ctd-nvlddmkm-stopped-responding-/


Such games will happen from time to time. I had to lower my OC for Shadow of War as well, because it was crashing and throwing a strange Windows error saying the GPU would not execute any more commands because the graphics settings were wrong. It turned out to be Dynamic Resolution + DSR making the game heavier. I used that combination because the game is undemanding for about 85% of the time, but has some spots that need up to 2x more GPU power than usual, where you need to drop back to native resolution. I ended up at 2070/7500 instead of 2100/8000. Supersampling and Dynamic Resolution typically cause situations like this.


----------



## ReFFrs

Krzych04650 said:


> Such games will happen from time to time. I had to lower my OC for Shadow of War as well, because it was crashing and throwing a strange Windows error saying the GPU would not execute any more commands because the graphics settings were wrong. It turned out to be Dynamic Resolution + DSR making the game heavier. I used that combination because the game is undemanding for about 85% of the time, but has some spots that need up to 2x more GPU power than usual, where you need to drop back to native resolution. I ended up at 2070/7500 instead of 2100/8000. Supersampling and Dynamic Resolution typically cause situations like this.


As part of my OC testing I played Shadow of War for an hour at 4K render resolution with no crashes. But I will play again for an extended period to confirm.


----------



## Rune

ReFFrs said:


> Where are the proofs? Validated benchmark results? Nobody believes you can reach even 2000 MHz on air.


Has anyone ever told you that you are an aggressive person?

Well, there's the firestrike ultra run at 2190 from a few days ago. And not that it matters, but the x1 screenshot I took shortly before the end of the combined run.

Let me know if your highness would require any additional "proofs". I'll run whatever if I've not put the block on by the time I get a response.


----------



## ReFFrs

Rune said:


> Has anyone ever told you that you are an aggressive person?
> 
> Well, there's the firestrike ultra run at 2190 from a few days ago. And not that it matters, but the x1 screenshot I took shortly before the end of the combined run.
> 
> Let me know if your highness would require any additional "proofs". I'll run whatever if I've not put the block on by the time I get a response.


You do know that your FS Ultra graphics score (9,151) is utter sht and doesn't correspond to the 2190-2220 MHz clocks you're bragging about?

I'm getting a higher score than yours with an Aorus 2080 Ti Waterforce (AIO) clocked at 2055 MHz (stable when not hitting the power limit) and +1000 mem (16140 MHz effective) at 60°C.

It's still hitting the power limit throughout the test, so the clocks drop at times. And it's still a higher score than yours.

I suppose now you'll stop telling fairy tales about your nonexistent, unstable OC, because it's getting annoying hearing "OMG, I'm safely reaching 200500 MHz on air!" from you every day.


----------



## nmkr

bbmaster123 said:


> I flashed the bios the day I got my card haha
> its been working great so far, but I haven't tried any other bioses as this was the best one available at the time. If there's a better one, let me know, my card can definitely handle faster gpu clocks (although only 7690mhz micron ram)
> As for your card, I don't see why it wouldn't work. If you've gotten as far as flashing your bios, and use a chiller, and have such a sweet setup, you are probably more than capable of fixing it too if something were to go wrong.
> 
> Is that your 24/7 computer, or do you just use it for benchmarks?
> 
> Also my rgb came back on. I guess it needed to be shut off completely to reinitialize fullly for some reason. Still no control over it though, just rainbowssss


So you flashed the MSI 3x8-pin BIOS to a 2x8-pin card?


----------



## LayZ_Pz

I've got some interesting results that I can't really explain.

Situation: I have an RTX 2080 Ti Strix OC, 325W limit on the stock BIOS. Here are the results in Unigine Valley:



Code:


1st run 2040/2055 : 185.4FPS/7757
2nd run 2040/2055 : 185FPS/7740
3rd run 2040/2055 : 185FPS/7741
Temp : 67c @ 3300RPM

I decided to go for the FTW EVGA Bios which has a 373W limit :



Code:


1st run 2040/2055 : 181.2FPS/7580
2nd run 2040/2055 : 179.9FPS/7529
3rd run +30mhz (2085/2070mhz): 171FPS/7173 (possibly a fluke and to be ignored)
4th run +30mhz (2085/2070mhz): 182.3FPS/7628
Temp : 68/69c @ 3000RPM (BIOS limits fans to 3000RPM instead of 3300RPM)

These results don't make sense to me. I have more power headroom, yet worse performance, even at higher clocks?

For the sake of it I decided to re-flash the stock Asus bios :



Code:


1st run : 185.9FPS/7779
2nd run 2070/2085 mhz: 186.9FPS/7821
3rd run 2070/2085 +900mem : 187.5FPS/7846

Now the results are back on par with the runs on the stock BIOS.

The question is: why does a stock BIOS with a lower power limit score higher than a BIOS with a higher power limit?


----------



## Rob w

My card is the NVIDIA Founders Edition, A chip with Samsung memory. I want to flash another BIOS for a higher power limit, but having done some research, there seems to be an element of uncertainty over which is the best one to flash to the FE, and I'm now thoroughly confused about which one to use!
Should/can I use the 380W Galax BIOS, or should I use a different one?
Rig is on water + chiller.


----------



## psychrage

I just ordered an EVGA FTW Gaming card. From what I can tell, techpowerup lists it as an A chip. Can anyone confirm it is indeed an A chip?

https://www.evga.com/products/product.aspx?pn=11G-P4-2483-KR
https://www.techpowerup.com/gpu-specs/evga-rtx-2080-ti-ftw3.b6144


----------



## VPII

I'm somewhat surprised by the scores I see. Granted, I'm a little limited running a Ryzen 2700X, since the combined and physics scores aren't up to scratch compared to Intel, but I still managed a good result. My actual highest clock is 2160 MHz, even though the FS Ultra link states 2145 MHz.

https://www.3dmark.com/fs/18035654
@J7SC I actually got 2175 MHz core to run through Time Spy. The score is not my best, as ambient is sitting in the 30°C range. The only way I was able to do this was to use the curve and raise the speed to 2175 at 1.068V, which seems to work. Unfortunately, when you apply clocks from the curve you're stuck at 1.068V and temps rise about 2-3°C, so I was idling at 32°C and load temps went up to 43°C during the bench run.

My best result, however, was with a 2160 MHz core.

https://www.3dmark.com/spy/5965452


----------



## dante`afk

Rune said:


> Has anyone ever told you that you are an aggressive person?
> 
> Well, there's the firestrike ultra run at 2190 from a few days ago. And not that it matters, but the x1 screenshot I took shortly before the end of the combined run.
> 
> Let me know if your highness would require any additional "proofs". I'll run whatever if I've not put the block on by the time I get a response.


Yeah, sorry buddy, your score is really bad. It suggests your card can't hold 2190 MHz at all; it shows it for maybe a couple of seconds and then drops down. Showing a screenshot of MSI AB / EVGA X1 is no proof.

Make a video of Firestrike Ultra holding a consistent 2190 and you'll be believed. But it won't do that, otherwise your score would have shown it.



LayZ_Pz said:


> Any specific one in mind ? Im curious to give it a go. A link would be perfect


https://www.techpowerup.com/vgabios/207291/207291


----------



## krasoft

Got a Zotac 2080 Ti AMP (Samsung RAM) and tried some overclocking on my old i5-4690K @ 4.2 GHz with DDR3 RAM. Brand new Seasonic FX850 PSU.

With the stock BIOS, the most I can get is +150 MHz on both core and memory. If I go any higher on either, FurMark crashes right at test start. That keeps the core clock around 1800 MHz and the temperature around 74°C.

After flashing the Galax 380W BIOS I can raise the core clock to 1900 MHz, keeping the RAM at +150, but the temperature rises to 80°C even with all the fans at 100% (loud as a vacuum!). Is this normal for air? I was hoping to at least break 2000 MHz.

Perhaps my case is poor? I bought it from NZXT five years ago; it resembles the H200. Two front fans, one rear fan, one top fan.

Also, while running my tests I would occasionally hear the fans drop speed for about half a second, and from time to time GPU usage would dip by 10% with the video freezing. Is that from hitting the power limit, or is it a manufacturing defect?

Another question: does raising the core voltage percentage in MSI Afterburner do anything useful? I heard the voltage is locked, so the slider doesn't do anything. Is that true?


----------



## ReFFrs

Krzych04650 said:


> Such games will happen from time to time. I had to lower my OC for Shadow of War as well, because it was crashing and throwing some strange Windows error saying that GPU is not going to execute any more commands because graphics settings are wrong.





ReFFrs said:


> As a part of OC testing I played Shadow of War for hour at 4K rendered resolution, no crashes. But will play again for extended period to confirm this.



UPDATE: 

played another 2 hours Shadow of War max settings 4K res, no crashes. The same OC as tested with Hitman 2


----------



## Krzych04650

krasoft said:


> Another question: does raising the core voltage percent do anything useful in MSI Afterburner? I heard that voltage is locked so this slider doesn't do anything. Is that true?


It unlocks more voltage. For me the max voltage is 1.068V, but if I turn the voltage slider up to 100 and then edit the VF curve, the max voltage applied is 1.093V. It would allow a higher OC if not for the power limits.
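For anyone unfamiliar with Afterburner's curve editor, the edit described above can be sketched as a simple list transform. All voltage/clock numbers below are made-up examples, not real curve data:

```python
# Sketch of the usual Afterburner VF-curve edit: raise the point at your
# chosen voltage, then flatten everything above it so the card never boosts
# past that point. The curve is a list of (voltage_mV, clock_MHz) pairs.

def flatten_curve(curve, target_mv, target_mhz):
    """Pin every point at or above target_mv to target_mhz (curve flattening)."""
    return [(mv, target_mhz if mv >= target_mv else mhz) for mv, mhz in curve]

stock = [(1000, 1950), (1043, 2010), (1068, 2040), (1093, 2070)]
print(flatten_curve(stock, 1068, 2100))
# [(1000, 1950), (1043, 2010), (1068, 2100), (1093, 2100)]
```

Raising the point at your target voltage and flattening everything above it is the common way to hold a fixed clock at a fixed voltage with the curve editor.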



ReFFrs said:


> UPDATE:
> 
> played another 2 hours Shadow of War max settings 4K res, no crashes. The same OC as tested with Hitman 2


Did you use DSR in combination with the in-game Dynamic Resolution? That's what caused the crashes for me, not just running the game with a high OC. Also, two hours is not a lot; I played around 10 hours with no problem at first, and it only started to crash in certain areas. The issue may not be the same for everyone either, since every card overclocks differently. It may just be some combination of things on my end, and you may not be able to reproduce the issue at all. That said, there have been many reports of this issue online ever since the game released, and not everyone fixed it by reducing their OC; some were getting it at stock.


----------



## Martin778

Anyone using MSI Dragon Center on MSI RTX cards? I can't find the LED controls anywhere!


Spoiler


----------



## CptSpig

Rune said:


> Has anyone ever told you that you are an aggressive person?
> 
> Well, there's the firestrike ultra run at 2190 from a few days ago. And not that it matters, but the x1 screenshot I took shortly before the end of the combined run.
> 
> Let me know if your highness would require any additional "proofs". I'll run whatever if I've not put the block on by the time I get a response.





ReFFrs said:


> You know that your FS Ultra graphics score (9 151) is an utter sht and doesn't correspond to 2190-2220 MHz clocks you are bragging about?
> 
> I'm getting bigger score than yours with Aorus 2080 Ti Waterforce (AIO) clocked at 2055 MHz (stable if not hitting the power limit) and +1000 mem (16140 MHz effective) at 60C temp.
> 
> But still it's hitting the power limit throughout the test, so clocks are going lower sometimes. And still it's a bigger score than yours.
> 
> I suppose now you will stop telling fairy tales about your nonexistent and unstable OC, cause it's starting to be annoying hearing everyday from you "OMG I'm safely reaching 200500 MHz on air!"


Man, you two need to chill! I can't tell whether either of you is on air; you need to post something like this screenshot.


----------



## ReFFrs

Today the devs silently removed Denuvo from Hitman 2.

Earlier, on Jan 23, they reported that they were trying to solve the 0x80000003 / nvlddmkm crashing issue:

https://steamcommunity.com/app/863550/discussions/0/1741106440017161777/

It would be funny if Denuvo turned out to be the actual cause of the crashes, in particular its meddling with memory.


----------



## bbmaster123

nmkr said:


> so you flashed the msi 3x pin bios to an 2x pin card?


No, I flashed the Galax HOF OC Labs BIOS: https://www.techpowerup.com/vgabios/204869/galax-rtx2080ti-11264-180927

Since I only spent less than an hour on the stock BIOS, I can't really say whether this one works better, just that it works. I may try an MSI BIOS, as long as it performs the same or better and gives me control over my RGB.



Carillo said:


> Haha, I tried flashing the HOF BIOS. It worked; well, I didn't brick the card. Performance-wise, it hit the power limit already at 340W. My guess is that the voltage drop used to calculate power draw is measured across all three 8-pin power connectors, which means that when you run a three-connector BIOS on a two-connector PCB, it looks like you're drawing a lot more power than you actually are. I flashed back to the 380W BIOS, which is a much better performer, and I think you'd benefit from doing the same. I haven't tried the EVGA 373W BIOS yet, but I will later today. Yes, my chiller is for 24/7 use. I also ordered a 7980XE yesterday so I can try to join the Time Spy total-score hall of fame. I did improve my Time Spy score yesterday; currently 66th place in the Time Spy graphics hall of fame: https://www.3dmark.com/spy/5995770


That's a good point about the current measurement; it didn't even cross my mind. GPU-Z said it was pushing 380W, which makes more sense. I will definitely try the 380W BIOS! And please do let me know if the 373W BIOS is better!

That's sick, best of luck overclocking it and moving up the high-score list. I'm personally going to hold off until Sunny Cove 8c/16t comes out.

One more thing: which block did you go for on your GPU? I'm definitely going to grab a block myself when Canada eventually starts to thaw in a few months. It will be in a loop with my 5820K @ 4.4 GHz on a 280mm rad + 120mm rad.


----------



## ReFFrs

Krzych04650 said:


> Did you use DSR in combination with the in-game Dynamic Resolution? That's what caused the crashes for me, not just running the game with a high OC. Also, two hours is not a lot; I played around 10 hours with no problem at first, and it only started to crash in certain areas. The issue may not be the same for everyone either, since every card overclocks differently. It may just be some combination of things on my end, and you may not be able to reproduce the issue at all. That said, there have been many reports of this issue online ever since the game released, and not everyone fixed it by reducing their OC; some were getting it at stock.


Look, there's no need to use NVIDIA DSR in Shadow of War, because you can freely select the render resolution in the video settings, where it's marked as 1.5x, 2.25x, etc. If you select a resolution higher than your display's, DSR is not active; the scaling is done by the game engine itself.

So I'm not using any dynamic resolution. I just set the in-game render resolution to a constant 4K, while my display is 1440p.

What are those certain areas you're talking about where the game started to crash? I tested it in the forest with a lot of trees (the second campaign location), which in my opinion should be one of the heaviest areas in the game for the GPU.


----------



## bbmaster123

ReFFrs said:


> Look, there is no need in using nVidia DSR in this Shadow Of War because you can freely select the rendering resolution in video settings and it's marked as 1.5x, 2.25x etc there. If you select higher resolution than your display, DSR is not active, but it's processed by game engine itself.
> 
> So I'm not using any Dynamic resolutions, I just set the rendering resolution in game settings to constant 4K, while my display is 1440p
> 
> What are those certain areas you are talking about where the game started to crash? I have tested it in the forest with a lot of trees (second campaign location). In my opinion it should be one of the heaviest areas in the game for the GPU.


I've never played Shadow of War, but you're right. When DSR came out, few if any games had built-in resolution scaling. DSR worked, and it improved image quality by rendering at a higher-than-native resolution and downsampling to your display.

When games started adding a resolution scale slider, it made DSR mostly redundant. They work differently, though: DSR changes the output resolution, while the in-game slider changes the render resolution.

Given the option, always go for the in-game adjustment. Stacking the two not only brings your computer to its knees, it can also make image quality worse than picking just one or the other.

BTW, I know it seems like I'm explaining this to you directly when I'm pretty sure you already know all of it; I just wanted to add to what you said.
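The cost of stacking the two can be shown with back-of-envelope pixel arithmetic. Both factors are area multipliers (like the 2.25x shown in the game menu); the resolutions and factors below are just examples:

```python
# Rough pixel-cost arithmetic: DSR and an in-game render scale multiply
# together, so combining them multiplies the number of shaded pixels.

def shaded_pixels(width, height, dsr_factor=1.0, render_scale=1.0):
    """Approximate pixels shaded per frame; both factors scale the area."""
    return round(width * height * dsr_factor * render_scale)

native  = shaded_pixels(2560, 1440)                                      # 3,686,400
stacked = shaded_pixels(2560, 1440, dsr_factor=1.78, render_scale=2.25)
print(stacked / native)  # ~4x the native pixel load
```

Four times the pixel load is why stacking the two buries even a 2080 Ti, while either one alone is usually manageable.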


----------



## Carillo

bbmaster123 said:


> no I flashed Galax HOF OC Labs, this bios https://www.techpowerup.com/vgabios/204869/galax-rtx2080ti-11264-180927
> 
> Being that I only really spent less than an hour on stock bios, can't really say if it works better or not, just that it works. I may try an msi bios as long as it can perform the same or better, and gives me control over my rgb.
> 
> 
> 
> thats a good point about the current measurement, didn't even cross my mind. In gpu-z it said it was pushing 380w which makes more sense. I will definitely try the 380w bios! And please do let me know if the 373w bios is better!
> 
> Thats sick, best of luck with overclocking it and moving up the high score list  I'm personally going to hold off until sunny cove 8c/16t comes out.
> 
> One mooooore ting
> Which block did you go for on your gpu? I am definitely going to grab a block myself when Canada eventually starts to thaw in a few months. It will be in a loop with my 5820k @4.4ghz with a 280mm rad + 120mm rad


Thanks mate. I think Sunny Cove will be a good performer! I bought the Barrow water block because it was the only one available, and I've used it on both an RTX 2080 and now my 2080 Ti. It performs better than it looks; not bad at all, actually.


----------



## Nocliptoni

J7SC said:


> ...hope that the upcoming switch to AMD (2950X) works well. Below is a 4K run of the Aorus from the first day I got it on my Z170 SOC Force 6700k test-bench (since then found out that VRAM actually goes higher, about +1040) . Threadripper system memory perf should also help


That's a pretty good score. For some reason I can't quite produce the same level of results at 4K as I can at 1080p; I guess I'm hitting that power limit harder there? I'm sure the 2950X is a very nice chip. I had a 1950X and it was great for the price. My best 4K result is linked below:

https://benchmark.unigine.com/results/rid_8d462033eb1c4812ae658ce4dc8b2243


----------



## Doni Weiss

Martin778 said:


> Anyone using MSI Dragon Center on MSI RTX cards? I can't find the LED controls anywhere!
> 
> 
> Spoiler


Download MSI Mystic Light; the RGB controller is in that program.


----------



## J7SC

Nocliptoni said:


> Thats a pretty good score .For some reason i cant quite produce a same level results at 4K as i can on 1080p .I guess im hitting that power limit harder there ? .Im sure 2950X is a very nice chip .I had a 1950X and it was great for the price .My best 4k result link below
> 
> https://benchmark.unigine.com/results/rid_8d462033eb1c4812ae658ce4dc8b2243



Tx. Things are slowly being assembled (as in 'now')...the only challenge is that my X399 MEG Creation is supposed to be back from RMA later this week; in the meantime, I've been playing around with an X399 Pro Carbon in its place. This particular 2950X seems like a good chip (4.1 all 16C/32T @ 1.275V) and the RAM clocks up nice and tight, though there's massive cooling, to be fair...

...debating whether to put the 2x Aorus 2080 Ti XTR WB in for just a few days (hard copper tubing, and the mobos are a different form factor with different SLI spacing) or just wait and leave them in the 6700K test bench...getting itchy to NVLink those babies in X399, though :ninja: ...and that RGB light show / ready to snip some LED wires :lachen:

*EDIT*: The X399 MEG Creation *just arrived*


----------



## Martin778

Is there a way to prevent the 2080 Ti from dropping voltage so fast? I've noticed my Lightning Z drops the voltage faster than it drops the clock as the temp rises, which causes lockups.
What's weird is that even going from 48 to 50°C it already drops from 1.06 to 1.04V, and it can lock up while still running 2100 MHz+.


----------



## Pauliesss

Hey guys,

I have the Gainward 2080 Ti GS, what would be the best option to get maximum potential from this card?

Flash it with Galax 380W BIOS? Or any other suggestions?


----------



## Krzych04650

ReFFrs said:


> Look, there is no need in using nVidia DSR in this Shadow Of War because you can freely select the rendering resolution in video settings and it's marked as 1.5x, 2.25x etc there. If you select higher resolution than your display, DSR is not active, but it's processed by game engine itself.
> 
> So I'm not using any Dynamic resolutions, I just set the rendering resolution in game settings to constant 4K, while my display is 1440p
> 
> What are those certain areas you are talking about where the game started to crash? I have tested it in the forest with a lot of trees (second campaign location). In my opinion it should be one of the heaviest areas in the game for the GPU.


I know that SoW and SoM have their own supersampling; I used it in SoM. But the setting I used in SoW was specifically 1.78x of native res, exactly the same multiplier as one of the DSR settings, so I assumed it was DSR. Maybe it was the in-game setting; I honestly can't remember now.

I used Dynamic Resolution because basically only Nurn is demanding, requiring around 90% of the GPU at my native 3840x1600, while the rest averages more like 65%, so I was wasting a lot of potential. With supersampling plus Dynamic Resolution, the game drops back to my native res in demanding places and applies a much higher res in undemanding areas, never going below native. It worked very well; all games should have supersampling and dynamic resolution so you can utilize 100% of the GPU and maximize picture quality at all times.

But Dynamic Resolution caused these crashes, so as part of troubleshooting I backed my OC down a little (from 2100/8000 to 2070/7500, like I said), and that helped. So I'm saying for the third time that Dynamic Resolution caused the issue, while you're trying to reproduce it without actually using the setting that caused it (not to mention different specs, a different Windows build, different resolution and in-game settings, different third-party software, etc.), so I don't really see the point of all this.
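For what it's worth, the dynamic-resolution behaviour described above boils down to a small feedback loop. A minimal sketch with illustrative numbers (step size, clamp limits, and frame-time target are all made up, not any game's real values):

```python
# One tick of a dynamic-resolution controller: nudge the render scale toward
# a frame-time target, clamped so it never drops below native (1.0x).

def adjust_scale(scale, frame_ms, target_ms=16.7, step=0.05, lo=1.0, hi=1.78):
    """Back off when over budget, spend headroom on supersampling otherwise."""
    scale += -step if frame_ms > target_ms else step
    return round(max(lo, min(hi, scale)), 2)

print(adjust_scale(1.78, frame_ms=22.0))  # 1.73  (heavy scene: back off)
print(adjust_scale(1.00, frame_ms=22.0))  # 1.0   (already at the native floor)
print(adjust_scale(1.20, frame_ms=10.0))  # 1.25  (headroom: supersample more)
```

The `lo=1.0` clamp is the "never drops below native" part; the `hi` clamp is the supersampling ceiling, 1.78x in this case.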


----------



## acmilangr

LayZ_Pz said:


> I've got some interesting results that I can't really explain :
> 
> Situation : I got a RTX 2080TI Strix OC, 325W limit on stock Bios, here are the results on Uniengine Valley :
> 
> 
> 
> Code:
> 
> 
> 1st run 2040/2055 : 185.4FPS/7757
> 2nd run 2040/2055 : 185FPS/7740
> 3rd run 2040/2055 : 185FPS/7741
> Temp : 67c @ 3300RPM
> 
> I decided to go for the FTW EVGA Bios which has a 373W limit :
> 
> 
> 
> Code:
> 
> 
> 1st run 2040/2055 : 181.2FPS/7580
> 2nd run 2040/2055 : 179.9FPS/7529
> 3rd run +30mhz (2085/2070mhz): 171FPS/7173 (possibly a fluke and to be ignored)
> 4th run +30mhz (2085/2070mhz): 182.3FPS/7628
> 68/69 : 3000 rpm instead of 3300rpm cause bios limit to 3k RPM
> 
> These results don't make sense to me. I have more power headroom, yet worse performance, even at higher clocks?
> 
> For the sake of it I decided to re-flash the stock Asus bios :
> 
> 
> 
> Code:
> 
> 
> 1st run : 185.9FPS/7779
> 2nd run 2070/2085 mhz: 186.9FPS/7821
> 3rd run 2070/2085 +900mem : 187.5FPS/7846
> 
> Now the results are on par with the 1st run with the stock bios.
> 
> The question is: why does a stock BIOS with a lower power limit score higher than a BIOS with a higher power limit?


This is really weird.
Have you tried any other benchmarks?


----------



## bbmaster123

Carillo said:


> Thanks mate. I think Sunny Cove will be a good performer!  I bought the Barrow water block because it was the only available , I have used the block on both RTX 2080 and now my 2080 Ti. It performs better than it look  Not bad at all actually.


Yeah, I don't love the look either; good to know it performs well, though.
I flashed the Galax 380W BIOS. I'm getting about the same OC, but power draw is now reporting closer to 320W, which is 50W less than before. I'll stick with it for now, but I'm curious about that 373W BIOS too.


----------



## Asmodian

Rune said:


> Well, there's the firestrike ultra run at 2190 from a few days ago. And not that it matters, but the x1 screenshot I took shortly before the end of the combined run.


Your graphics score does seem a bit low when compared to my OC of 2070 MHz, especially since you have a 9900K at 5.2 GHz. However, hitting the power limit and/or thermal throttling could explain the difference. Your GPU is probably declocking quite a bit during the run. My GPU stays below 42°C and I never trigger the power limit due to a shunt mod so my clocks usually drop by at most one bin (15 MHz) due to temperature. A memory OC also helps, so your lower memory clock could also help explain why your graphics score is only a tiny bit higher than mine while running 500 MHz faster on the CPU and a nominal 120 MHz faster on the GPU.

An 8K Superposition run is a great way to see and share GPU OC performance, including the clock speeds. I find it really annoying that 3DMark does not keep track of the GPU clock speed during the run or report any clock speeds or temperatures in the online results.
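The bin behaviour mentioned above can be sketched as a simple threshold model. The temperature steps below are hypothetical examples, not NVIDIA's actual boost table:

```python
# Illustrative model of the 15 MHz bins: GPU Boost steps the clock down one
# bin each time the core crosses another temperature threshold, which is why
# a cool water-cooled card holds nearly its top clock while a hot air-cooled
# card bleeds several bins. Threshold values are made up.

BIN_MHZ = 15
TEMP_STEPS_C = [38, 46, 54, 62, 70]   # hypothetical throttle points

def boosted_clock(top_boost_mhz, temp_c):
    """Lose one 15 MHz bin per temperature threshold crossed."""
    bins_lost = sum(1 for t in TEMP_STEPS_C if temp_c >= t)
    return top_boost_mhz - bins_lost * BIN_MHZ

print(boosted_clock(2085, 41))  # 2070: one bin down, like the post describes
print(boosted_clock(2085, 65))  # 2025: four bins down on a warm air cooler
```

On this toy model, staying below the first threshold keeps the full boost clock, which matches the "below 42°C, at most one bin" behaviour described above.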


----------



## Majentrix

EVGA has (or had) a promo going where they'd send you a free trim kit for your 2080 or 2080ti. I chose red, it looks fantastic.


----------



## kot0005

CptSpig said:


> Rune said:
> 
> 
> 
> Has anyone ever told you that you are an aggressive person?
> 
> Well, there's the firestrike ultra run at 2190 from a few days ago. And not that it matters, but the x1 screenshot I took shortly before the end of the combined run.
> 
> Let me know if your highness would require any additional "proofs". I'll run whatever if I've not put the block on by the time I get a response.
> 
> 
> 
> 
> 
> 
> ReFFrs said:
> 
> 
> 
> You know that your FS Ultra graphics score (9 151) is an utter sht and doesn't correspond to 2190-2220 MHz clocks you are bragging about?
> 
> I'm getting bigger score than yours with Aorus 2080 Ti Waterforce (AIO) clocked at 2055 MHz (stable if not hitting the power limit) and +1000 mem (16140 MHz effective) at 60C temp.
> 
> But still it's hitting the power limit throughout the test, so clocks are going lower sometimes. And still it's a bigger score than yours.
> 
> I suppose now you will stop telling fairy tales about your nonexistent and unstable OC, cause it's starting to be annoying hearing everyday from you "OMG I'm safely reaching 200500 MHz on air!"
> 
> 
> Man you two need to chill! I can't tell if either one of you is on air. Need to post something like this screen shot.

The 15°C GPU temp says it all, lol.


----------



## J7SC

*@VPII *- ..thought you would enjoy Buildzoid (and his unique style) on the 1350MHz bug


----------



## willverduzco

After I was knocked out of my fortuitous place in the Time Spy Single GPU Hall of Fame (#69, lolol), I decided to do a bit more benching at my 2080 Ti's 24/7 clocks (2130c/8080m), but with another 100 MHz on the CPU (9900K @ 5.3 GHz).

I certainly don't plan on running my 9900K at 5.3 for 24/7 use, though... my chip seemed to hit a voltage wall there, requiring 80 mV more than a Prime95-stable 5.2 (AVX) just to make it through the darn test! Back to 5.2 I go for 24/7.

I couldn't exactly get back to #69 on the single-GPU leader board, but I got down to #60 with a total score of 16284 (GPU: 17015, CPU: 13099). Much less fun than my old position. 
For kicks, I also ran Superposition 1080p Extreme at these same clocks and got 10528.

Not too bad for a hybrid mod with a 280mm Asetek! Now I just need that unlocked power limit BIOS that the YouTubers all have since I keep hitting power limit HARD!

[EDIT: I have since shunt modded and increased my score to 16747 (GPU 17410, CPU 13779) at slightly higher clocks on the 9900K (5.4 all-core) and 2080Ti (2175c/8300m) getting me to 50th place on the single-GPU leaderboard overall. Similarly, my Superposition 1080p Extreme score is now 10827 (2175c/8375m).]


----------



## Jody Hodgson

I have read a stack of pages on here and done some searching, but I am struggling to find good info on what my first liquid-cooled 2080 Ti solution should be.

I understand there is a power limit which somewhat holds back overclocks, but it seems a custom loop is favourable for getting a little more?

I was looking at the MSI and Gigabyte waterblock versions, and I may be able to fit the Gigabyte Aorus WB in the Lian Li O11D horizontally (preferred); the MSI is too wide!! However, I am not opposed to a blower card and adding a custom waterblock.

So, with so much of the aftermarket cards' extra cost going into the coolers, and there not being massive differences between the various cards' capabilities, I assume a cheap card that fits a good waterblock, with an A chip and Samsung memory, would be the best option? Or maybe the Aorus WB card?

Can anyone point me at some good 2080 Ti cards and blocks (that are compatible) that produce the goods on water?

All help appreciated. (PS: it will be teamed up with a 9900K.)


----------



## Garrett1974NL

Monstieur said:


> Which mounting holes (inner / outer) connect what to what (base plate / mid plate / heat sink) on the Inno3D X2? I want to see if I can fit an Accelero Xtreme III and keep the base plate and / or mid plate.


Won't work... The Accelero bracket will rest on the black plate and then the cooler will "hover" over the core.


----------



## kot0005

willverduzco said:


> After I was knocked out of my fortuitous place in the Timespy Single GPU Hall of Fame (#69 lolol), I decided to do a bit more benching at my 2080TI's 24/7 clocks (2130c/8080m), but add another 100 MHz to the CPU ([email protected]).
> 
> I certainly don't plan on running my 9900k at 5.3 24/7 though... My chip seemed to hit a voltage wall there, requiring 80 mV more than it did for a prime95-stable 5.2 (AVX) to just make it through the darn test! Back to 5.2 I go for 24/7.
> 
> I couldn't exactly get back to #69 on the single-GPU leader board, but I got down to #60 with a total score of 16284 (GPU: 17015, CPU: 13099). Much less fun than my old position.
> For kicks, I also ran Superposition 1080p Extreme at these same clocks and got 10528.
> 
> Not too bad for a hybrid mod with a 280mm Asetek! Now I just need that unlocked power limit BIOS that the YouTubers all have since I keep hitting power limit HARD!


How hot does your 9900K get at 5.2GHz, and at what voltage?


----------



## CptSpig

kot0005 said:


> 15c gpu temp says it all lol.


Yes it does....380w bios and a chiller is all you need!


----------



## Silent Scone

Bound to be a few fame seekers Stateside, what with the current climate.


----------



## Shawnb99

Silent Scone said:


> Bound to be a few fame seekers State side what with the current climate




Might even freeze the water in the loop.
-30 ambient would make for some nice water temperatures, at least before you froze to death.


----------



## VPII

J7SC said:


> *@VPII *- ..thought you would enjoy Buildzoid (and his unique style) on the 1350MHz bug https://www.youtube.com/watch?v=9zspgeP3BDo


Hi @J7SC, I had to laugh... great listening to him. If only I had the knowledge he has; the funny thing is I studied electronics right up to the end of school. But I always find it fascinating to listen to people in the know explain these things.


----------



## acmilangr

I flashed the KFA BIOS onto one BIOS position of my Asus Strix. But for some reason, when I change BIOS (via the Strix's switch button) to the default position, GPU-Z still doesn't detect it as a Strix.
Even if I reboot, same issue. Both BIOSes appear to be the same (KFA).

If I flash the default back, then both of them come back to the Strix default.


----------



## Carillo

CptSpig said:


> Yes it does....380w bios and a chiller is all you need!


Even a 7820x starts to perform with a chiller attached


----------



## willverduzco

kot0005 said:


> How hot does your 9900k get with 5.2Ghz and what voltage ?


Quite ironic you ask, as I just recently fixed a temp problem I was having by switching from adaptive voltage to fixed with "high" rather than "turbo" LLC on my Aorus Z390 Ultra... In order to get things stable in Prime with AVX using adaptive voltage and turbo LLC, I used to need what HWMon would report as 1.31 volts for 5.0. I never even attempted to get 5.2 prime-stable back then because I didn't want to push voltage higher since temps were already as high as I'd want them to be. Now that I moved over to fixed voltage and "high" LLC (as well as fixed frequency), I feel as if I may have a bit of a golden chip.

Thanks to the new settings, it now passes the latest Prime95 with AVX at just 1.32 at 5.2 GHz (no offset) and 1.23 at 5.0. I haven't tried lower volts yet at 5.2, as I haven't had the time to do the whole routine of rebooting, lowering 10 mV, re-testing Prime, etc. My 5.0 voltage is as low as I can go without it crashing in Prime after about an hour, though. Oh, and 5.33 GHz for the #60 score required 1.4 volts just to pass the Timespy CPU test... and it's certainly not dedicated CPU stress test-stable.

Temps at 5.2 at 1.32 are great in gaming and decent in stress tests, but not amazing... and at the expense of noise. I have a 280mm AIO (H115i v2 at max pump speed) on the CPU since I ditched my old custom loop when I downsized cases, and I am running 4x Corsair MagLev 140mm fans on it in push-pull. My fan curve is set to take the four Corsair MagLev fans to 2000 RPM (max speed) once temps hit 75C, and near-silent under 60C. Thanks to the above, it gets to about 70C in Prime without AVX, and very high 80s with perhaps a few cores touching 90-92 in Prime with AVX. Asus RealBench and AIDA64 with FPU enabled hit about the same temps, but required about 10 mV less voltage than the latest Prime with AVX to be fully stable. A similar Asetek 280mm radiator, but with only one set of fans at 1100 RPM, keeps the GPU temp in the low 40s while gaming and about 50 while running stress tests. This is all in a Phanteks Evolv ATX TG case (known for mediocre airflow, but at least it allows me to install my 2x 280mm radiators as intakes in the front and top) with ambient room temps of 27C.

(For the benches I posted earlier today, I opened the window and let the cold NYC 14F / -10C air fill my room, lol. I don't think ambient in-room was that low, but it was probably close.)
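A curve like the one described (near-silent under 60C, max 2000 RPM at 75C) amounts to a simple linear ramp between two breakpoints. A minimal sketch; the quiet-floor RPM of 600 is an assumption for illustration, not anything Corsair publishes:

```python
def fan_rpm(temp_c, quiet_temp=60.0, max_temp=75.0,
            quiet_rpm=600, max_rpm=2000):
    """Linear fan curve: near-silent below quiet_temp,
    full speed at or above max_temp (breakpoints are illustrative)."""
    if temp_c <= quiet_temp:
        return quiet_rpm
    if temp_c >= max_temp:
        return max_rpm
    # Linear interpolation between the two breakpoints
    frac = (temp_c - quiet_temp) / (max_temp - quiet_temp)
    return round(quiet_rpm + frac * (max_rpm - quiet_rpm))

print(fan_rpm(50))    # below the quiet threshold -> 600
print(fan_rpm(67.5))  # halfway up the ramp -> 1300
print(fan_rpm(80))    # past max_temp -> 2000
```

Most fan-control software (Corsair Link, SpeedFan, motherboard BIOS curves) interpolates between user breakpoints in essentially this way.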


----------



## Fraizer

Does someone know if I can find a 2080 Ti with a minimum 1755 MHz boost on the reference layout? I want to install my waterblock on it, and it only fits the reference layout.

On the first page I see only one, and it looks unconfirmed because it's in grey:

https://www.overclock.net/forum/69-...l-nvidia-rtx-2080-ti-owner-s-club-598.html
Sea Hawk X Hybrid | 1 Fan | 2 Slot | 268mm | LED | 16 Power Phases | 1755 MHz Boost | 300/330 W | Reference PCB | EAN 4719072596910 | PN V371-008R

Can someone confirm this for sure, or better, another 2080 Ti with a boost of 1755 MHz or more?


----------



## Jody Hodgson

Fraizer said:


> Does someone know if I can find a 2080 Ti with a minimum 1755 MHz boost on the reference layout? I want to install my waterblock on it, and it only fits the reference layout.
> 
> On the first page I see only one, and it looks unconfirmed because it's in grey:
> 
> https://www.overclock.net/forum/69-...l-nvidia-rtx-2080-ti-owner-s-club-598.html
> Sea Hawk X Hybrid | 1 Fan | 2 Slot | 268mm | LED | 16 Power Phases | 1755 MHz Boost | 300/330 W | Reference PCB | EAN 4719072596910 | PN V371-008R
> 
> Can someone confirm this for sure, or better, another 2080 Ti with a boost of 1755 MHz or more?


I think I am in the same situation as you, except I don't even have a waterblock yet, so I'm trying to work out which card and waterblock combo is best in the current market.


----------



## psychrage

psychrage said:


> I just ordered an EVGA FTW Gaming card. From what I can tell, techpowerup lists it as an A chip. Can anyone confirm it is indeed an A chip?
> 
> https://www.evga.com/products/product.aspx?pn=11G-P4-2483-KR
> https://www.techpowerup.com/gpu-specs/evga-rtx-2080-ti-ftw3.b6144


Got my FTW3 Gaming today and can confirm it is an A chip.


----------



## JustinThyme

Add me to the list.
2x Strix 2080Ti O11G with ROG NVLink. I'll be posting up refresh rebuild pics shortly.


----------



## Fraizer

I don't know, but I want to use the latest Aqua Computer waterblock, which is the best on the market, with the display screen etc.


Apart from dismounting the card's radiator/fan, how can I check whether I have an A chip? GPU-Z?


----------



## Martin778

I have the Aorus 2080Ti and MSI L-Z and tried to NVLink them..... don't bother, LOL. The MSI has a much wider PCB; it's absolutely massive compared to the GB card. The NVLink bridge is left hanging in the air.
http://i66.tinypic.com/rqz0n4.jpg

Well, if only they'd make a flexible bridge...
I see no chance to SLI two L-Z's on Z390. The card is a 3-slot and so is the PCI-E spacing. Not to mention you MUST use an anti-sag solution.


----------



## J7SC

Martin778 said:


> I have the Aorus 2080Ti and MSI L-Z and trying to Nvlink them.....don't bother LOL. MSI has a lot wider PCB, it's absolutely massive compared to GB. The Nvlink bridge is hanging in the air.
> http://i66.tinypic.com/rqz0n4.jpg
> 
> Well if they'd only make a flexible bridge...
> I see no chance to SLI two L-Z's on Z390. The card is a 3-slot and so is the PCI-E spacing. Not to mention you MUST use an anti sag solution.



There are only supposed to be 2 'standardized' sizes (spanning 3 or 4 slot)...per picture below, the bridges have the same external dimensions, but internal spacing is different. Using different X399 mobos created a similar problem for me (3 in old pic below > then moved to 4, now back to 3), mind you both of my cards are the same Aorus type. Still, the NVL connector is on the same vertical axis as the PCIe slot, so even with different cards, it **should** work, unless you have some other clearance problems, or one card is much taller.

On another note, I got the X399 Creation back in after RMA and fully tested out w/ two older cards, 2x M.2s etc everything works, max temps for the 2950X / 4.3 all core/threads at Cine15 = 57 C  ...next step, transfer the two 2080 TIs and their copper tubing :sad-smile (seemed like a good idea at the time...)


----------



## pewpewlazer

J7SC said:


> There are only supposed to be 2 'standardized' sizes (spanning 3 or 4 slot)...per picture below, the bridges have the same external dimensions, but internal spacing is different. Using different X399 mobos created a similar problem for me (3 in old pic below > then moved to 4, now back to 3), mind you both of my cards are the same Aorus type. Still, the NVL connector is on the same vertical axis as the PCIe slot, so even with different cards, it **should** work, unless you have some other clearance problems, or one card is much taller.
> 
> On another note, I got the X399 Creation back in after RMA and fully tested out w/ two older cards, 2x M.2s etc everything works, max temps for the 2950X / 4.3 all core/threads at Cine15 = 57 C  ...next step, transfer the two 2080 TIs and their copper tubing :sad-smile (seemed like a good idea at the time...)


The Lightning-Z PCB is huge. Like _HYOOGE_ huge. Of course SLI with any other card won't work. 













Martin778 said:


> I have the Aorus 2080Ti and MSI L-Z and trying to Nvlink them.....don't bother LOL. MSI has a lot wider PCB, it's absolutely massive compared to GB. The Nvlink bridge is hanging in the air.
> http://i66.tinypic.com/rqz0n4.jpg
> 
> Well if they'd only make a flexible bridge...
> I see no chance to SLI two L-Z's on Z390. The card is a 3-slot and so is the PCI-E spacing. Not to mention you MUST use an anti sag solution.


I see plenty of chance to SLI two L-Z's on Z390. Get rid of the stock air cooling and it's no longer a 3-slot card. Even at triple-slot size, these cards need water cooling to overclock without sounding like an F5 tornado blowing through your room. And that's a single card. SLI? Good luck air cooling that, period.


----------



## ReFFrs

Martin778 said:


> I have the Aorus 2080Ti and MSI L-Z and trying to Nvlink them.....don't bother LOL. MSI has a lot wider PCB, it's absolutely massive compared to GB. The Nvlink bridge is hanging in the air.
> http://i66.tinypic.com/rqz0n4.jpg


Can you please post single card benchmarks for Aorus 2080 Ti and MSI Lightning Z, both with a max stable OC and max power limit?

1. Firestrike Ultra
2. TimeSpy
3. TimeSpy Extreme
4. Superposition 4K Optimized
5. Superposition 8K Optimized
6. Unigine Heaven 1440p Extreme 8xMSAA

It would be interesting to compare these two cards.


----------



## Monstieur

Garrett1974NL said:


> Won't work... The Accelero bracket will rest on the black plate and then the cooler will "hover" over the core.


Is the GPU die surface recessed below the Inno3D base plate? Isn't the Accelero contact plate the lowest point on the cooler? Does it not fit through the cut out in the Inno3D base plate? Other people have left out the Accelero back plate bracket and just passed the screws through the 2080 Ti's original backplate.


----------



## Bloodred217

Pauliesss said:


> Hey guys,
> 
> I have the Gainward 2080 Ti GS, what would be the best option to get maximum potential from this card?
> 
> Flash it with Galax 380W BIOS? Or any other suggestions?


I have this card as well, Galax 380W BIOS works fine. The cooler at 100% seems pretty decent, mine holds 70-71C at the 380W power limit. Mine does 16400-16600MHz effective memory clock and around 2050MHz core, though I'm not entirely certain if this is rock-solid stable yet, I've been playing around with the voltage curve.
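The memory numbers in this thread mix two conventions: the spec sheet's 1750 MHz command clock corresponds to a 14000 MHz effective data rate (8x for GDDR6), while Afterburner reports half the effective rate (7000 MHz stock) and applies its memory offset to that readout. A minimal sketch of the bookkeeping; the 7000 MHz base readout assumes a card with no factory memory OC (7070 is shown for comparison with a factory-OC'd card):

```python
# GDDR6 clock bookkeeping for the 2080 Ti:
#   spec-sheet "memory clock"   = command clock (1750 MHz stock)
#   effective data rate         = 8 x command clock (14000 MHz stock)
#   Afterburner readout         = effective rate / 2 (7000 MHz stock)
# Afterburner's memory offset is added to the readout figure.
BASE_READOUT = 7000  # assumption: reference card, no factory memory OC

def effective_mem_clock(offset_mhz, base_readout=BASE_READOUT):
    """Effective GDDR6 data rate (MHz) for a given Afterburner offset."""
    return 2 * (base_readout + offset_mhz)

print(effective_mem_clock(0))     # stock -> 14000 MHz effective
print(effective_mem_clock(1000))  # +1000 offset -> 16000 MHz effective
print(effective_mem_clock(1000, base_readout=7070))  # factory OC -> 16140
```

This is why "+1000 mem" and figures like "8080m" or "16140 MHz effective" all describe roughly the same overclock, just quoted at different points in the chain.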


----------



## Martin778

ReFFrs said:


> Can you please post single card benchmarks for Aorus 2080 Ti and MSI Lightning Z, both with a max stable OC and max power limit?
> 
> 1. Firestrike Ultra
> 2. TimeSpy
> 3. TimeSpy Extreme
> 4. Superposition 4K Optimized
> 5. Superposition 8K Optimized
> 6. Unigine Heaven 1440p Extreme 8xMSAA
> 
> It would be interesting to compare these two cards.


I understand, however, the Aorus is already packed in and waiting for a new owner.
I wouldn't expect any significant difference to be honest - it's still the same card, one is just overclocked higher than the other.

By the way, the L-Z has such massive cooling that it tops out at 60*C in Superposition with the fans on max at 2080-2100MHz. AFAIK there aren't any water blocks for it (yet).
I've done some searching on water blocks for the previous 1080Ti L-Z and only Bykski / Barrow had one.


----------



## ReFFrs

Martin778 said:


> I understand, however, the Aorus is already packed in and waiting for a new owner.
> I wouldn't expect any significant difference to be honest - it's still the same card, one is just overclocked higher than the other.
> 
> By the way, the L-Z has such massive cooling that it tops out at 60*C in Superposition with the fans on max and and at 2080-2100MHz.  AFAIK there aren't any water blocks for it (yet).
> I've done some search on water blocks for the previous 1080Ti L-Z and only Bykski / Barrow had one.


How is the noise at 100% fan vs default fan curve and also the coil whine? 

My Aorus 2080 Ti Waterforce AIO 240mm is silent at heaviest load after I replaced stock 2200+ RPM fans with Silent Wings 3 1450 RPM 

Temp tops at 60-63C

Coil whine is also pretty low, however sometimes audible at high fps 100+

It was louder when I just got the card, but after few days become much better. 

Music/sounds ingame make coil whine gone. I have open case btw.


----------



## managerman

J7SC said:


> There are only supposed to be 2 'standardized' sizes (spanning 3 or 4 slot)...per picture below, the bridges have the same external dimensions, but internal spacing is different. Using different X399 mobos created a similar problem for me (3 in old pic below > then moved to 4, now back to 3), mind you both of my cards are the same Aorus type. Still, the NVL connector is on the same vertical axis as the PCIe slot, so even with different cards, it **should** work, unless you have some other clearance problems, or one card is much taller.
> 
> On another note, I got the X399 Creation back in after RMA and fully tested out w/ two older cards, 2x M.2s etc everything works, max temps for the 2950X / 4.3 all core/threads at Cine15 = 57 C  ...next step, transfer the two 2080 TIs and their copper tubing :sad-smile (seemed like a good idea at the time...)


J7SC,

Will be running a similar setup as you. Build is still in process but it is taking shape. Here is a link to my build log. 

https://www.overclock.net/forum/180...-carbon-fiber-tubes-bp-fittings-overload.html

When I tested all the components on air I was very pleased with the overall combination of gaming and productivity performance that the 2990WX and 2080ti SLI gave. It has been a learning curve using the 2990WX though... it is a quirky CPU... I am glad I tested on air, as both FE 2080ti cards died within two weeks. The replacements ran OK for a few weeks before I put the water blocks on...









-M


----------



## Martin778

ReFFrs said:


> How is the noise at 100% fan vs default fan curve and also the coil whine?
> 
> My Aorus 2080 Ti Waterforce AIO 240mm is silent at heaviest load after I replaced stock 2200+ RPM fans with Silent Wings 3 1450 RPM
> 
> Temp tops at 60-63C
> 
> Coil whine is also pretty low, however sometimes audible at high fps 100+
> 
> It was louder when I just got the card, but after few days become much better.
> 
> Music/sounds ingame make coil whine gone. I have open case btw.



Tops out at 57*C with a custom fan curve which is still relatively quiet; crazy good cooling for an air cooler. In Superposition it's ~3-4 deg more.
Coil whine is pretty much none, my case is a Fractal R6 with open panel and 3x Noctua NF A14 fans for airflow.

Here is 1h of Realbench encoding, +95 CORE/+1000 MEM, power limit sliders at max.


----------



## black06g85

Just got my EVGA RTX 2080 Ti XC Hybrid.
Running at 2100MHz; so far so good.


----------



## Renegade5399

2 eVGA 2080Ti Black Edition XC Gaming inbound!

Step-ups finally came through!


----------



## Martin778

Oh, do they still offer the 90-day step up? Trading up 2x1080Ti to 2x2080Ti sounds like the best deal ever. Pretty much free 800 bucks?


----------



## J7SC

Martin778 said:


> I understand, however, the Aorus is already packed in and waiting for a new owner.
> I wouldn't expect any significant difference to be honest - it's still the same card, one is just overclocked higher than the other.
> 
> By the way, the L-Z has such massive cooling that it tops out at 60*C in Superposition with the fans on max and and at 2080-2100MHz.  AFAIK there aren't any water blocks for it (yet). I've done some search on water blocks for the previous 1080Ti L-Z and only Bykski / Barrow had one.



Just for future reference, you can also check w/ the HWBot XOC crowd for 'custom' bridges... I needed an extra long one (quad SLI+++) and MSI actually had one for XOC, essentially to get around big GPU LN2 pots. Not sure what's available custom-wise for NVLink at this stage, though.

...Just out of interest, could you even get TWO MSI L Z to fit on that mobo w/stock coolers on ? 



managerman said:


> J7SC,
> 
> Will be running a similar setup as you. Build is still in process but it is taking shape. Here is a link to my build log.
> 
> https://www.overclock.net/forum/180...-carbon-fiber-tubes-bp-fittings-overload.html
> 
> When I tested all the components on air I was very pleased on the overall combination of gaming and productivity performance that the 2990WX and 2080ti SLI gave. It has been a learning curve on using the 2990wx though...it is a quirky CPU...I am glad I tested on air as both FE 2080ti cards died within two weeks. The replacements ran ok for a few weeks before I put the water blocks on...
> 
> -M


I have come to appreciate the Thermaltake Core line very much. On this P5, there was a lot of hard modding (5 rads !) but everything is accessible. For my next build, I'm even thinking Core P90 - that is once I recover from this build  ...I hope to put up a build-log in a week or less on the P5


----------



## Martin778

I could squeeze two Lightnings on it but they would be pretty much touching and would sag like crazy without any support in between. Whether I can get 6x 8-pin out of my PSU is another story (turns out I can't; the RM1000X only has 6 PCIE/EPS outputs, 2 used by the MB and 3 by the first card). Well, gotta grab the AX1600i next month.


----------



## Renegade5399

Martin778 said:


> Oh, do they still offer the 90-day step up? Trading up 2x1080Ti to 2x2080Ti sounds like the best deal ever. Pretty much free 800 bucks?


$1032 for 2 shipped since I locked the Step-Up in before they raised the price $100.

If I find I don't really need 2, I'll sell one and make some $$ since the 1080Ti's paid for themselves in the mining farm. BTC was at between $8000-$9000 per at that time though and the mine has long been shut down.


----------



## sblantipodi

Do you think we will be able to play some RTX games before the RTX 3000 series becomes available?
SHAME ON NVIDIA.


----------



## J7SC

Martin778 said:


> I could sqeeze two Lightnings on it but they will be pretty much touching and would sag like crazy without any support in between. If I can get 6x8pin out of my PSU is another story  (turns out i can't, RM1000X only has 6 PCIE/EPS outputs, 2 used by the MB and 3 by the first card). Well gotta grab the AX1600i next month.



...Watercool seem to suggest (over at the OCN Heatkiller Club thread) that they may make some MSI L-Z blocks, but no firm ETA.

On the PSU limitations, why not hook two PSUs together ? - there's the factory built-in OC connector (i.e. for models such as Antec 1300w APC Platinum, for a total of 2600w / I have two of them), or the cheaper ($15) route of the little gadget that combines any two PSUs...they're cheap now because a lot of the miners were using them, but not anymore . 

I hooked 4 PSUs together before for LN2 / multi SLI a few years back, no issues...but below is a pic from OGS/HWBot from just THIS WEEK...no more watt droughts  ...go for it, Martin ! And just think, it also comes in handy when you get that Xeon W3175X to go with your setup


----------



## Martin778

Nooo, I've already gone through a balls-to-the-wall build: 2x1080Ti, 7980XE and X299 R6A, later X299 DARK, with 2x 560 rads. Still gives me PTSD, blarghh. Every time I even get the thought of going custom, I pinch myself. Don't want the hassle.

Just did 2hr of Armored Warfare at 3440x1440 ultra with 2x MSAA and without V-Sync. The card topped out at 63*C at 1950RPM.
I mean, what else are you going to squeeze out of a card that does 2100 on air? 2150 on water? If it's for benching fun, alright, but for 24/7 usage... I have my doubts.


----------



## J7SC

Martin778 said:


> Nooo, I've already went through a balls to the wall build 2x1080Ti, 7980XE and X299 R6A later X299 DARK, with 2x 560 rads. Still gives me PTSD, blarghh. Every time I even get the thought of going custom, I pinch myself. Don't want the hassle.
> 
> Just did 2hr of Armored Warfare on 3440x1440 ultra w. x2 MSAA and without V-Sync. The card topped out at 63*C with 1950RPM.
> I mean, what else are you going to squeeze out of a card that does 2100 on air? 2150 on water? If it's for benching fun, alright, but for 24/7 usage....I have my doubts.



...I see what you're doing - you're holding out for that 7nm AMD 64c/128t Epyc 8-channel RAM build :ninja:


----------



## Martin778

Ryzen / Threadripper do not work well with SLI, probably due to their CCX design. I've tried it on a 2700X and a 1950X and got very inconsistent results, like 60-70% load between cards.
Whoever designed the cooler for that Lightning is a legend: 350W at under 63C on air, and that's still very far from the card's 100% fan speeds of 3400 and 2800 RPM.










I still don't understand what it's doing with voltage under load. It sometimes maintains 1.04V at 48-50*C, causing a crash, and sometimes it stays at 1.07V even at 60*C and runs 2115MHz without any issues.


----------



## kot0005

J7SC said:


> Christopher2178 said:
> 
> 
> 
> Lurking since this thread opened so here’s my input finally..
> 
> I have an MSI Trio X w/Micron Memory on water 27c-29c at idle, about 37c-40c gaming (I hate the 40c clock drop Nvidia..why?), and 42c-45c max when benchmarking.
> Running 8K Superposition with VRAM +1200 and core is +125 which is giving me 2130mhz avg/low (2145mhz -2160mhz peaks) on the MSI 406 watt bios.
> On the 406 Watt bios highest power draw I have seen in GpuZ is about 401 watts pulled during an 8k Superposition but most games and even benchmarks max about 350-375 watts it seems..
> Just tried the lightning z bios and could hit same speeds basically but seemed to drop from 2160/2145 down to 2130 a little sooner and more often and highest wattage use in GPuZ during all benchmarks was 381... So back to the MSI 406 watt bios for me.. Thanks!
> 
> 
> 
> 
> That's interesting stuff! Re. watts, you may still be running into issues relating to the 3x 8pin EPS (Lightning Z) vs the 2x8pin +1x6pin EPS (Trio X). And just out of interest, which Lightning Z Bios did you use (as it is dual Bios) - the regular one or the LN2 one ? I believe the LN2 one goes up to 520w or so, but of course caution when using it on water...

There is no 520w bios for lightning z. 380w max.


----------



## J7SC

kot0005 said:


> There is no 520w bios for lightning z. 380w max.



Really  ? 

From Bit-tech.net when testing the Lightning Z _"...that’s a *maximum wattage of 525W* – way above 350W, but the headroom makes sense given the target audience of extreme overclockers that often like to perform hardware-level modifications to bypass the power/voltage limits.*To that end, the card has a dual-BIOS switch*, with one side marked as ‘Original’ and the other as ‘LN2’ i.e. liquid nitrogen. In LN2 mode, the card is said to have *no power limitations"*_


----------



## Martin778

I've tried the LN2 switch, it doesn't do anything on air. The card still acts the same way (not sure about the semi passive mode since I don't use it anyway, always 800rpm idle).

I think I'd rather put it back to normal since it doesn't hit the power limit anyways when set to 108% in AB, 350W is the highest I've seen. 

Still it's such a shame you can't tell the card to just lock the voltage, like keep 1.05V at all times...I bet I could hit 2130-2150 if I could lock in 1.07V.


----------



## J7SC

Martin778 said:


> I've tried the LN2 switch, it doesn't do anything on air. The card still acts the same way (not sure about the semi passive mode since I don't use it anyway, always 800rpm idle).
> 
> I think I'd rather put it back to normal since it doesn't hit the power limit anyways when set to 108% in AB, 350W is the highest I've seen.
> 
> Still it's such a shame you can't tell the card to just lock the voltage, like keep 1.05V at all times...I bet I could hit 2130-2150 if I could lock in 1.07V.


For the 2nd / LN2 Bios to show any real change, you probably have to use sub-zero, and appropriate v-tools...the point was only that* there are two Bios *(unlike suggested before), one of which goes way high (520w+) on limits *IF* other steps are taken (Gunslinger is posting a lot of MSI Lightning LN2 stuff, up to 2500MHz, give or take). It may also make a difference if you have chilled water, but very unlikely on air. 

On the voltage side, I take it you tried to lock the voltage down via MSI AB curve ? Sometimes locking it to 1.043v may get you better overall results in a bench, even if not in absolute top-speed MHz..I posted 2235MHz 'top speed' before on one of my w-cooled cards, but that means very little in the greater scope of things, i.e. consistent benching or gaming. Another issue on voltage is to dial back the PL % until you see no sudden voltage drops in a fairly short bench (i.e. if max PL is 125%, try 120%, 115%...). That would be useful info as well.

You might want to PM Gunslinger for more inside info on the MSI Lightning Z


----------



## VPII

Martin778 said:


> I've tried the LN2 switch, it doesn't do anything on air. The card still acts the same way (not sure about the semi passive mode since I don't use it anyway, always 800rpm idle).
> 
> I think I'd rather put it back to normal since it doesn't hit the power limit anyways when set to 108% in AB, 350W is the highest I've seen.
> 
> Still it's such a shame you can't tell the card to just lock the voltage, like keep 1.05V at all times...I bet I could hit 2130-2150 if I could lock in 1.07V.


Hi Martin, you can actually lock the voltage. Look at the screen grab. You open the voltage curve in Afterburner and scroll along the curve to the point where the 1.068 or 1.05 voltage sits. Then, once you've clicked it, press "L" and it will show a yellow line on the graph. Then hit Apply and it should be locked at the stated speed and vcore.


----------



## Thoth420

Hi guys, I am trying to add my new rig to my signature, but I haven't been able to figure it out since this site got a new host a while back. Can someone PM me and tell me where I am missing this option in the user settings?


----------



## J7SC

Martin778 said:


> Ryzen / *Threadripper do not work well with SLI, probably due to CCX nature. i've tried it on 2700X and 1950X and got very inconsistent results like 60-70% load between cards.*
> Whoever desgined the cooler for that Lightning, is a legend. 350W <63C on air and it's still very far from 100% RPM of that card which is 3400 and 2800RPM.
> (...cut...)



...just ran some very quick Timespy tests w/ the Threadripper 2950X @ 4.3 and 2x 780 Ti CLs... per below, I don't seem to have a usage issue, both consistently at 100%. Mind you, with 2080 TIs that will probably drop, simply because unless I'm running the CPU at 10 GHz, the CPU will invariably bottleneck such powerful cards. But inherently, there's nothing wrong with memory on the Threadripper 2950X (results below are for 4.1 GHz and non-optimized mem... during the mobo RMA downtime, my trial period for AIDA ran out, better buy a license key)... going to finish all the fine-tuning of the Threadripper before dropping in the 2x Aorus.


----------



## VPII

It is actually really astounding what impact ambient temps have on the card's performance. My card is water cooled thanks to an NZXT Kraken G12 and a Corsair 110 CW. When idle temps are 26-27C, max load temps are around 39-40C. At present, in our beautiful summer, I'm sitting at 30+C ambient, so the card idles at 30C, or 29C if I'm lucky. This effectively means I'll lose 100 to 200 marks in Time Spy, as the core drops from 2160 or 2175 all the way to 2085 in Time Spy's game test 2. Look, I am happy with the temps I get using what I'm using, but the impact of these temperature fluctuations is actually a little ridiculous.
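The clock loss described here is GPU Boost stepping the core down in fixed bins as temperature rises, so a few degrees of extra water temperature costs several bins. A rough illustrative model; the 15 MHz step, 5 C bin width, and 35 C threshold are assumptions chosen for the sketch, not NVIDIA's published behaviour:

```python
# Rough model of GPU Boost temperature binning: the sustained clock
# drops by one fixed step for every few degrees above a threshold.
# STEP_MHZ, BIN_WIDTH_C, and THRESHOLD_C are illustrative assumptions.
STEP_MHZ = 15
BIN_WIDTH_C = 5
THRESHOLD_C = 35

def boosted_clock(max_clock_mhz, gpu_temp_c):
    """Estimated sustained clock after temperature-based boost bins."""
    if gpu_temp_c <= THRESHOLD_C:
        return max_clock_mhz
    bins = int((gpu_temp_c - THRESHOLD_C) // BIN_WIDTH_C) + 1
    return max_clock_mhz - bins * STEP_MHZ

print(boosted_clock(2175, 34))  # cool loop: full 2175 MHz
print(boosted_clock(2175, 52))  # warmer loop: several bins lower
```

Under this model a card holding 2175 MHz with a cool loop sheds clocks steadily as water temperature climbs, which matches the pattern of losing 75-90 MHz between winter and summer ambients.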


----------



## J7SC

VPII said:


> It is actually really astounding the impact ambient temps have on the card's performance. My card is water cooled thanks to a Nzxt Kraken G12 and Corsair 110 CW. Max load temps when idle is 26 to 27C is around 39 to 40C. At present, in our beautiful summer I'm sitting with 30+C ambient so card would idle at 30c if I am lucky 29c. This effectively means I'll lose 100 to 200 marks in Time Spy as the core would drop from 2160 or 2175 all they way to 2085 in game test 2 for Time Spy. Look I am happy with the temps I get using what I am using but the impact of these temperature fluctuations is actually a little ridiculous.



Hello Mr VPII ...Probably goes to show how high-strung these cards are :stun: what with the temp and PL envelope interplay. And I just read about the Thwaites glacier in Antarctica having developed a big cave and lost 14 billion tons of ice, mostly over just the last few years, but I digress...

...my cooling system dedicated to the two 2080 Ti GPUs is getting up-rated (again), now 3x 360x60 rads, 2x 755 pumps and tons of fans - and I live in a colder clime :wheee:


----------



## VPII

J7SC said:


> Hello Mr VPII ...Probably goes to show you how high-strung these cards are :stun: what with the temp and PL envelope interplay. And just read about Thwaites glacier in Antarctica having developed a big cave and lost 14 billion tons of ice, mostly over just the last few years, but I digress...
> 
> ...my cooling system dedicated for the two 2080 TI GPUs is getting up-rated (again), now 3x 360x60 rads, 2x 755 pumps and tons of fans - and I live in a colder clime :wheee:


Look, in about two to three months the ambient temps here will be between 18 and 12C, so I'll be happy. It just sucks that with the 2175 core I cannot match my best 2160-core result in Time Spy right now.


----------



## J7SC

...about 25 min from here, usually snow on the ground till late June - snowmobile is extra


----------



## GAN77

Martin778 said:


> I've tried the LN2 switch, it doesn't do anything on air. The card still acts the same way (not sure about the semi passive mode since I don't use it anyway, always 800rpm idle).
> 
> I think I'd rather put it back to normal since it doesn't hit the power limit anyways when set to 108% in AB, 350W is the highest I've seen.
> 
> Still it's such a shame you can't tell the card to just lock the voltage, like keep 1.05V at all times...I bet I could hit 2130-2150 if I could lock in 1.07V.


You need AB Extreme)
https://www.tomshw.de/2019/01/28/bl...si-geforce-rtx-2080-ti-lightning-z-im-test/4/


----------



## Martin778

Interesting, I was reluctant to use the LN2 BIOS on air, not being sure if MSI implemented stuff like PCB heaters in LN2 mode or not. Is the AB Extreme just a skin or a completely different piece of software? Can't find it, I've flipped the switch to LN2 again and it still says max 108% PL. 



> _Since not everyone is allowed to use the MSI Afterburner Extreme (you have to "qualify" for it)_


lolwut? So without the Extreme I can't really do much with the LN2 BIOS?

VPII, I've tried that but it still dropped after a few seconds, weird. Even though my results match the ones shown in Tom's review - 2085MHz on air is where my card fares in gaming.
Locking the freq/voltage curve does nothing; after a few seconds of Unigine Valley it drops from 2140 @ 1.07V to 2100 @ 1.043V, and after some time it hovers around 2070-2090 and stays that way.










I have to say, gaming on a 3440x1440 @ 144Hz (34GK950F) monitor still squeezes the last bit of juice out of the 2080 Ti if you want all the eye candy enabled.
Regarding the SLI + AMD thing, it won't show in synthetics, but try for example GTA V. I got very inconsistent results and the gameplay was stuttery, especially when driving a car on the highway at Paleto. You could see both cards changing load simultaneously from 60 to 80% (dual Strix 1080 Ti OC), and the CPU load also dropped at that moment.

+
I got it stable by tinkering with the curve a bit more:


Spoiler














This is the best this core will do on air. 2135 locks up at 1.07V. It doesn't hit the 108% power limit in Valley, but it sadly does in Superposition; apparently just flipping the switch to the LN2 BIOS doesn't do anything, you have to manually unlock it in AB Extreme, it looks like.
Memory runs at +1400 (16800 effective), no kidding. Must be some binned stuff.
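A side note on the memory numbers quoted in this thread: the Afterburner offset appears to count twice toward the effective transfer rate, which is how "+1400" becomes "16800 effective". A quick Python sanity check; the doubling rule is an assumption inferred from that post, while the 14000 MT/s stock rate and 352-bit bus are from the spec sheet on the first page.

```python
STOCK_MTPS = 14000   # RTX 2080 Ti stock GDDR6 effective rate (14 Gbps)
BUS_BITS = 352       # memory bus width from the spec sheet

def effective_rate(offset_mhz, stock=STOCK_MTPS):
    """Afterburner memory offset counts double toward the effective rate
    (assumption inferred from '+1400 = 16800 effective' above)."""
    return stock + 2 * offset_mhz

def bandwidth_gb_s(effective_mtps, bus_bits=BUS_BITS):
    """Peak memory bandwidth in GB/s for a given effective transfer rate."""
    return effective_mtps * bus_bits / 8 / 1000

print(effective_rate(1400))        # 16800, matching the post above
print(bandwidth_gb_s(STOCK_MTPS))  # 616.0 GB/s, matching the spec sheet
```

By the same arithmetic, +1400 on the memory is roughly 739 GB/s of peak bandwidth, a ~20% uplift over stock.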


----------



## VPII

Martin778 said:


> Interesting, I was reluctant to use the LN2 BIOS on air, not being sure if MSI implemented stuff like PCB heaters in LN2 mode or not. Is the AB Extreme just a skin or a completely different piece of software? Can't find it, I've flipped the switch to LN2 again and it still says max 108% PL.
> 
> 
> lolwut? So without the Extreme I can't really do much with the LN2 BIOS?
> 
> VPII, I've tried that but it still dropped after a few seconds, weird. Even though my results cover the ones shown in Tom's review - 2085MHz on air is where my card fares in gaming.
> Locking the freq/voltage curve does nothing, after a few seconds of Unigine Valley it drops from 2140 1.07V -> 2100 1.043V and after some time it hovers 2070-2090 and stays that way.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I have to say gaming on a 3440x1440 @ 144Hz (34GK950F) monitor still squeezes the last bit of juice out of the 2080ti if you want all eye candy enabled.
> Regarding the SLI + AMD thing, it won't show in synthetics but try for example GTA V. I've got very inconsistent results and the gameplay was stuttery, especially when driving a car on the highway @ Paleto. You could see both cards changing load simultaneously from 60 to 80% (dual strix 1080ti OC) and the CPU load also dropped at that moment.
> 
> +
> I got it stable by tinkering with the curve a bit more:
> 
> 
> Spoiler
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> This is the best this core will do on air. 2135 locks up at 1.07V. It doesn't hit the power 108% limit in Valley but it sadly does in Superposition, apparently just flipping the switch to LN2 BIOS doesn't to anything, you have to manually let it free in AB extreme, looks like.
> Memory runs at +1400 (16800 effective), no kidding. Must be some binned stuff.


Hi Martin, just out of curiosity, did you perhaps check with the GPU-Z monitor what wattage it pulled, to see if that might have been the deciding factor? Sorry, I am just asking as it is pretty funny. I sit with sort of the same issue, but mine is more related to the 30+C ambient temps at present. But I did see the TDP hitting max and a little over, with a slight drop in core frequency.


----------



## Martin778

360+ 

It looks like it needed a full power cycle, including shutting down the PSU, to switch to the LN2 BIOS; I now see a 120% max power limit in AB. I wonder if it will spin the fans...a custom fan curve doesn't seem to do anything.


----------



## orbitech

Happy owner of an Asus ROG OC myself for some days now.

I have achieved +100/+1100 on the clocks, but beyond that +100 on the core I'm hitting the power limit easily.

Which BIOS do you think would be better to try? Thanks for any feedback


----------



## Martin778

But that LN2 BIOS has an even lower power limit, only 300W @ 120%, what the...? Seems to be the case with this card; pretty weird that you have to apply for a piece of software that should be standard, after you've already sold a kidney for the GPU
https://forum-en.msi.com/index.php?topic=314643.0



Spoiler















In Superposition it cries for a higher power limit, hitting 380W and dropping to 2050MHz: https://i.imgur.com/Dbm7XUJ.png
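For anyone puzzling over the numbers here: the effective cap is simply the BIOS base power multiplied by the slider percentage, which is why a 300 W LN2 BIOS at 120% still throttles a card drawing around 380 W. A trivial sketch, with the values taken from this post:

```python
def power_cap_w(base_w, slider_pct):
    """Effective power limit = BIOS base power x power-limit slider %."""
    return base_w * slider_pct / 100

cap = power_cap_w(300, 120)  # LN2 BIOS: 300 W base at 120% -> 360 W cap
print(cap)
print(380 > cap)  # the ~380 W Superposition draw exceeds the cap,
                  # hence the drop to ~2050 MHz
```

So a higher slider percentage on a lower base power can still mean a lower absolute cap than a stock BIOS with a bigger base.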


----------



## J7SC

Martin778 said:


> But that LN2 BIOS has even lower power limit, only 300W @ 120%, what the...? Seems to be the case with this card, pretty weird that you have to apply for a piece of software that should be standard, after you've already *sold a kidney* for a GPU
> https://forum-en.msi.com/index.php?topic=314643.0
> 
> In superposition it cries for a higher power limit, hitting 380W and dropping to 2050MHz https://i.imgur.com/Dbm7XUJ.png


...they do that / have been doing it with the Extreme version for years, for 'warranty purposes', among other things. I used to have the Extreme version for the GTX 7/9 series when I did LN2, but you really don't want to try it on air...so easy to add 'just one more [v] for the road'.

Below are two Superposition 4K Optimized runs I did (on my 6700K testbench / AIO / 4.8GHz / 3866 DDR4)...chasing the last few MHz isn't always fruitful, as much as I understand it, since you have sold the kidney for the card. Note that the slightly lower MHz gets a slightly higher score, though within margin of error.


----------



## Martin778

I only knew about that "enable unofficial overclocking" switch in the .cfg file  I'm not looking for more voltage, just 10-15% higher power limit.


----------



## dureiken

Hi experts,

I just received my 2080 Ti EVGA XC Ultra Gaming, and I will put it under water  what is the best BIOS I can safely use to reach maximum clocks?

thanks


----------



## kx11

So last night, after playing the Anthem demo for 20 minutes, my Galax HOF started flickering and crashed with an error message (the GPU was removed...etc). It was giving me the same error with any game/benchmark I tried after that. I reseated it, then reinstalled the drivers...etc. I did everything imaginable, but I guess it died on me. Now back to the good old Palit card I still have around.


----------



## J7SC

kx11 said:


> so last night after playing Anthem demo for 20 minutes my Galax HOF started flickering and crashed with error message (the gpu was removed ...etc ) , it was giving me the same error with any game/benchmark i tried after that , i reseated it then re-installed the drivers...etc , i did everything imaginable but i guess it died on me , now back to the good old PALIT card i still have around



...well, that sucks! Weird message though; no artifacts, black screen or so? Have you tried installing the card in a completely different system (even a Win 7 one)?


----------



## kx11

J7SC said:


> ...well that sucks ! Weird message though, and no artifacts, black screen or so ? Have you tried to install the card in a completely different system (even a Win 7) ?



no , i couldn't do that


if reseating the GPU doesn't fix the issue then i guess it's done


----------



## VPII

kx11 said:


> no , i couldn't do that
> 
> 
> if reseating the GPU doesn't fix the issue then i guess it's done


Hi kx11, well from what I heard, and having been through it myself, Galax is pretty forgiving on the warranty side. My Galax OC started giving issues, being stuck at the base clock of 1350MHz. I managed to fix it at first by reinstalling Windows 10 and flashing the card with its own BIOS. It worked. The reason I went that route is that when it was stuck at the 1350MHz base, it actually worked normally in my Windows 7 setup without any issues. Problem was, after I reinstalled Windows 10 and it worked, I checked for the latest Nvidia driver, found 417.71 and installed it, and once again it was stuck at 1350MHz core. So I tested again with Windows 7 and all was good, but I saw the driver I had was pretty old, so I installed the latest. Again the card was locked at 1350MHz core...now on both Win 7 and Win 10. Tried many means to fix it, but to no avail.

Sent the card back, and a week later I settled for a Palit GamingPro OC... Well, I am happy, as this card can do 2175MHz in Time Spy without any issues, and the memory on this card, being Samsung, runs cooler (which helps with the NZXT G12 and H110 CW cooling). I can clock it +1150MHz, and possibly even more, but that's still to test. I'm just waiting on lower ambient temps, as this 30+C ambient is not helping.


----------



## Pepillo

kx11 said:


> so last night after playing Anthem demo for 20 minutes my Galax HOF started flickering and crashed with error message (the gpu was removed ...etc ) , it was giving me the same error with any game/benchmark i tried after that , i reseated it then re-installed the drivers...etc , i did everything imaginable but i guess it died on me , now back to the good old PALIT card i still have around


How many days have passed since you bought it?
I thought the sudden death of these cards was over. I'm sorry, good luck with the RMA.


----------



## songi

kx11 said:


> so last night after playing Anthem demo for 20 minutes my Galax HOF started flickering and crashed with error message (the gpu was removed ...etc ) , it was giving me the same error with any game/benchmark i tried after that , i reseated it then re-installed the drivers...etc , i did everything imaginable but i guess it died on me , now back to the good old PALIT card i still have around


:O how much was your HOF card? I had the same error pop up when I played the demo a few times as well. I don't think it has anything to do with your GPU being faulty, cause I'm even using an older-gen card in this PC (GTX 960) and it still happens


----------



## orbitech

songi said:


> :O how much was your HOF card? I had the same error pop up when I played the demo a few times as well. I don't think it has anything to do with your GPU being faulty cause I'm even using an older gen card in this PC (gtx 960) and it still happen



Yeah but he can't run anything after this error msg so it's probably a safe assumption that the card is a faulty unit.. Bummer..


----------



## kx11

Pepillo said:


> How many days have passed since you bought it?
> I thought the sudden death of these cards was over. I'm sorry, good luck with the RMA.





A month. I'll see if I can RMA this unit.


----------



## kx11

VPII said:


> Hi Kx11 , well from what I heard and had the experience to go through, Galax is pretty forgiving on the warranty side. My Galax OC started giving issues being stuck at the base clock 1350mhz. I managed to fix it by firstly reinstalling windows 10 and flashing the bios with your card's bios. It worked. The reason I went that route is that when it was stuck at 1350mhz base it actually worked normally in my windows 7 setup without any issues. Problem was when I reinstalled windows 10 and it worked I checked for latest Nvidia driver and found 417.71 which I installed and once again it was stuck at 1350mhz core. So tested again with Windows 7 and all good, but I saw the driver I have is pretty old so I installed the latest. Again the card was locked at 1350mhz core.... now Win 7 and Win 10. Tried many means to fix it but to no avail.
> 
> Send card back and a week later I settled for a Palit GamingPro OC..... Well I am happy as this card can do 2175mhz Time Spy without any issues and the memory on this card being samsung runs cooler (which helps with the Nzxt G12 and H110 CW cooling) and I can clock it +1150mhz but even more I think, but still to test. I'm just waiting on lower ambient temps as this 30+C ambient is not helping.



Me too. I still have my Palit GamingPro OC, which also has an issue with manual OC; it won't allow anything over 1950MHz to run.


----------



## kx11

songi said:


> :O how much was your HOF card? I had the same error pop up when I played the demo a few times as well. I don't think it has anything to do with your GPU being faulty cause I'm even using an older gen card in this PC (gtx 960) and it still happen



I thought it was the game, but I tried benchmarks and other games, where it was crashing within 5 seconds.


----------



## Rob w

Where am I going wrong with nvflash64?
Downloaded nvflash64 from the first page (the one that allows flashing of reference cards), extracted it to the C drive (C:\nvflash64) and enabled run as admin.
Downloaded the Galax 380W BIOS from the first page and placed it in C:\nvflash64.
Followed the guide and opened a command prompt:
cmd, then cd C:\nvflash64
Then nvflash64 -6 2080TIGX126.rom
nvflash64 runs but comes up with "board ID does not match image ID".
Press y to override.
Comes back with "exception caught" and fails.
I have tried over and over with no success. I've tried other BIOSes and still no go. I have also done --protectoff, and it does turn protection off, but still the mismatch??
The card is a replacement 2080 Ti FE from Nvidia with Samsung memory, but no matter what I do, it will not flash using this method.
I have also tried plain nvflash and still no good.
Is there a different version of nvflash64 that I should be using? Any ideas, guys?
I thought I'd flash it rather than shunt it, but I'm having second thoughts now; this flash is a pain in the butt!
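For reference, the sequence described in that post boils down to the commands below. This is only a sketch of what the post describes; the folder and BIOS filename are the ones from the post, and cross-flashing a BIOS whose board ID doesn't match your card is inherently risky.

```shell
# Run from an elevated Command Prompt, with nvflash64 extracted to C:\nvflash64.
# Paths and the ROM filename are taken from the post above; adjust as needed.
cd C:\nvflash64

# Disable the EEPROM write protection first
nvflash64 --protectoff

# -6 permits overriding the PCI subsystem ID mismatch; nvflash will still
# warn that the board ID does not match the image ID and ask to confirm (y)
nvflash64 -6 2080TIGX126.rom
```

If this exact sequence still ends in "exception caught" after the override, the tool build or the BIOS image itself is the more likely culprit than the command syntax.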


----------



## Medusa666

kx11 said:


> i thought it was the game but i tried benchmarks and other games where it was crashing within 5 seconds


I'm very sorry to hear that your card died; shouldn't be a problem getting a replacement though, if it is dead it is dead.

I have the same card; if it happens to me I'll post it here.


----------



## jelome1989

kx11 said:


> i thought it was the game but i tried benchmarks and other games where it was crashing within 5 seconds


Can you estimate how many hours of use before it died? What kind of workloads? Also is it heavily overclocked? At what voltages and clocks?


----------



## MrTOOSHORT

Sorry to hear, kx11. Doesn't matter which 2080 Ti it is, it can go off. I wonder if any RTX Titans went bad?


----------



## Doni Weiss

kx11 said:


> no , i couldn't do that
> 
> 
> if reseating the GPU doesn't fix the issue then i guess it's done


I had this warning with my Gainward 2080 Ti GS. It even crashed playing vids. Drivers stopped working and had to be reinstalled, among other issues. I sent it back to the reseller and got an MSI Trio instead (non-X). Currently the MSI seems stable. Still Micron though. Let's hope this card survives. This is my third 2080 Ti.


----------



## VPII

One thing I noticed, though it is based on feeling rather than an actual reading, is that the Micron memory runs hotter than the Samsung memory. I noticed that coming from my Galax RTX 2080 Ti OC with Micron memory to a Palit RTX 2080 Ti GamingPro OC with Samsung memory. I also believe the Samsung memory clocks a little higher than the Micron, as on my previous card I was able to clock the memory up by 850MHz, but the card I have now I can clock up to +1150, and possibly even more, I just have not tried yet. I have an infrared thermometer now, and testing my current card with the Samsung memory, I found the chips are max 55C under load; well, actually lower, but I'll play it safe.


----------



## Danzle

Hey guys, maybe you could give me a quick answer to a question I've got. My rig is equipped with an i7 6700K running bone stock, because I can't be bothered to OC and I like the silence I currently enjoy. If I picked up a 2080 Ti and played games at Max/Epic/Ultra on my 165Hz 1440p monitor, would I run into CPU limitations not allowing me to fully use the 2080 Ti?


----------



## BudgieSmuggler

Danzle said:


> Hey guys, maybe you could give me a quick answear to a question i've got. My Rig is equiped with a i7 6700k runing bone stock, because i can't be asked to OC and i like the silence i currently enjoy. If i'd pick up a 2080Ti and play games at Max/Epic/Ultra on my 165Hz 1440p Monitor, would i run in to CPU limitations not allowing me to full use the 2080Ti?


Hard to say for sure, but if you want to max all settings at 165Hz you'd want to look at overclocking your CPU. I have an 8700K OC'ed to 4.9GHz and get 99% GPU usage on my 2080 Ti, which is also overclocked. The 6700K is still a decent CPU, but you might want to get motivated, overclock it and deal with a bit of heat and noise. lol


----------



## BudgieSmuggler

Danzle said:


> Hey guys, maybe you could give me a quick answear to a question i've got. My Rig is equiped with a i7 6700k runing bone stock, because i can't be asked to OC and i like the silence i currently enjoy. If i'd pick up a 2080Ti and play games at Max/Epic/Ultra on my 165Hz 1440p Monitor, would i run in to CPU limitations not allowing me to full use the 2080Ti?


What GPU do you have currently, a 1080 Ti?


----------



## J7SC

MrTOOSHORT said:


> Sorry to hear kx11. *Doesn't matter what 2080ti it is*, it can go off. Wonder if any RTX Titans went bad?



...that's half the problem, little/no rhyme or reason to it...cards by different vendors, cards with Micron VRAM, with Samsung VRAM - the list is getting long, though I have yet to see industry data on overall failure rates (with prior series, it was typically 3-5%).

Apart from RTX having been rushed to market by Nvidia (perhaps due to the crypto-mining collapse), there may be other issues. The foundry is currently in the news re. 'contaminated' chemicals being used in production, apparently including Nvidia's 12nm parts.

There's also the issue that the cards are getting physically so big that sagging and twisting start to play their part; even if it doesn't affect performance immediately, the seating of thermal pads, e.g. on VRAM, may become an issue. Best advice is still to water-cool 2080 Tis, with a strong backplate, IMO.





VPII said:


> One thing I noticed, but it is based on feeling not actual reading is that the micron memory runs hotter than the samsung memory. I noticed that coming from my Galax RTX 2080 TI OC with micron memory to a Palit RTX 2080 TI Gamingpro OC with Samsung memory. I also believe that the Samsung memory clocks a little higher than the micron as my previous card I was able to clock the memory up by 850mhz but the card I have now I can clock up to 1150 and possibly even more, just have not tried yet. I have a Infrared thermometer now and tested my current card with the samsung memory and found the chips are max 55C under load, well actually lower but I'll play it safe.



My VRAM (Micron) seems to run quite cool and OCs to +1000 on both cards; then again, those cards have full water blocks. Somewhat oddly, I noticed that the MSI 2080 Ti Lightning seems to be using both Micron and Samsung / must be a supply issue. Gunslinger's MSI Lightning (Micron) seems to be doing great though (mind you, on LN2)



Spoiler



https://d1ebmxcfh8bf9c.cloudfront.net/u12793/image_id_2131953.jpeg


----------



## Danzle

BudgieSmuggler said:


> Hard to say for sure but if you want to max all settings at 165hz you'd want to look at overclocking your cpu. I have an 8700k oc'ed to 4.9ghz and get 99% gpu use on my 2080ti which is also overclocked. 6700k is still a decent cpu but you might want to get motivated, overclock it and deal with a bit of heat and noise. lol


I don't like noise. Also, my 6700K can't OC more than 200MHz without crashing, and that's with 1.29V.



BudgieSmuggler said:


> What gpu do you have currently, a 1080ti?


I've got an EVGA 980 Ti Classified, already squeezed for everything it can give. I wanted a used 1080 Ti, but they sell for the same price as an RTX 2080.


----------



## J7SC

Danzle said:


> Hey guys, maybe you could give me a quick answear to a question i've got. My Rig is equiped with a i7 6700k runing bone stock, because i can't be asked to OC and i like the silence i currently enjoy. If i'd pick up a 2080Ti and play games at Max/Epic/Ultra on my 165Hz 1440p Monitor, would i run in to CPU limitations not allowing me to full use the 2080Ti?



I ran both 2080 Tis in my 6700K rig (AIO, 4.7GHz @ 1.31V, DDR4-3866) w/o problems before switching them to my 2950X build. If you check back a few pages, you'll see some 4K Superposition runs w/ the 6700K I posted.


----------



## maxmix65

J7SC said:


> ...that's half the problem, little/no rhyme or reason to it...cards by different vendors, cards with Micron VRAM, with Samsung VRAM - the list is getting long though I still have to see some industry data on overall failure rates (w/prior series, it typically was 3-5%).
> 
> Apart from RTX having been rushed to market by NVidia (perhaps due to the crypto mining collapse), there may be other issues. The foundry is currently in the news re. 'contaminated' chemicals being used in production, including affecting 12nm NVidia, apparently.
> 
> There's also the issue that the cards are getting physically so big that some sagging and twisting starts to play its part, even if it doesn't affect performance immediately, sealing w/ thermal pads i.e. on VRAM may become an issue. Best advice is still to w-cool 2080 TIs, with a strong backplate,IMO.
> 
> 
> 
> 
> 
> 
> My VRAM (MIcron) seems to run quite cool and OCs to 1000+ on both cards, then again those cards have full water-blocks. Somewhat oddly, I noticed that MSI 2080 TI Lightning seems to be using both Micron and Samsung / must be a supply issue. Gunslinger's MSI Lightning (Micron) seems to be doing great though (mind you, on LN2)
> 
> 
> 
> Spoiler
> 
> 
> 
> https://d1ebmxcfh8bf9c.cloudfront.net/u12793/image_id_2131953.jpeg


My Lightning Z uses Samsung GDDR6.
2100MHz easy, +1000 on the RAM, vcore stock.
With MSI AB I can modify the GDDR6 memory controller voltage, Aux1, Aux2 and core; it has temperature sensors for VRM1/VRM2.
There are two types of Lightning:
one with a Z in the name, and one without the Z.


----------



## J7SC

maxmix65 said:


> My lightning Z use ddr6 Samsung
> 2100mhz easy, + 1000 ram ,vcore stock
> Whit msi AB i can modificar controller voltage ddr6+ Aux1+ Aux2+ core, has temperature sensor Vrm1-2
> there are two types of lightning
> one with a Z code , and without one Z



Nice card and nice light show (never mind the OC)  I know about the Z vs non-Z, but so far, the only 'certain' difference seems to be boost clock. According to TechPowerup, _"MSI is releasing two variants of the Lightning: The RTX 2080 Ti Lightning Z, which is clocked at 1770 MHz Boost Clock, and the RTX 2080 Ti Lightning (without Z), which runs at a more conservative 1575 MHz Boost.."_

Not exactly sure what the price difference is between the two models (in US$). Do you know ?


----------



## Fraizer

hi

can someone tell me if the MSI Sea Hawk X Hybrid, reported on the main topic as having a reference PCB, 16 power phases, etc. --> | 1 Fan | 2 Slot | 268mm | LED | 16 Power Phases | 1755 MHz Boost | 300/330 W | Reference PCB | EAN 4719072596910 | PN V371-008R | MSI link: https://www.msi.com/Graphics-card/GeForce-RTX-2080-Ti-SEA-HAWK-X

can be OC'd, with a strong custom watercooling loop, to the same OC as, for example, the EVGA FTW3 Ultra with 19 power phases, etc. --> | 3 Fan | 2.75 Slot | 302mm | RGB | 19 Power Phases | 1755 MHz Boost | 300/373 W | Custom PCB | EAN 4250812429711 | PN 11G-P4-2487-KR | EVGA link: https://www.evga.com/products/product.aspx?pn=11G-P4-2487-KR

if both of them are watercooled?

why this question? because i want to mount an Aquacomputer waterblock for the 2080 Ti (I have to use this one) on a good 2080 Ti to make a high OC. i understand from the 1st page of this topic that this MSI card has a reference PCB (maybe wrong?) and by default boosts to 1755MHz like the best EVGA.

can you please confirm that? if i can OC this MSI (16 power phases) to the same level as the EVGA (19 power phases)? i don't know if power phases matter for that...? and does this MSI really have a reference PCB like the FE, so that i can put a waterblock on it?

sorry for this bad english


----------



## Martin778

J7SC said:


> Nice card and nice light show (never mind the OC)  I know about the Z vs non-Z, but so far, the only 'certain' difference seems to be boost clock. According to TechPowerup, _"MSI is releasing two variants of the Lightning: The RTX 2080 Ti Lightning Z, which is clocked at 1770 MHz Boost Clock, and the RTX 2080 Ti Lightning (without Z), which runs at a more conservative 1575 MHz Boost.."_
> 
> Not exactly sure what the price difference is between the two models (in US$). Do you know ?


Yup, mine also runs 2100MHz 24/7. Only heavy benchmarks like Superposition make it hit the 360W power limit. I'm currently 24th on the Superposition chart in 4K Optimized, but the PL is holding me back. If I could squeeze out another 100-150 points, I'd have the fastest air-cooled 2080 Ti on the ranking.
The memory on mine is brutal, +1400 is doable; artifacting starts at +1450. Still, I run a very conservative +500 for gaming.


----------



## BudgieSmuggler

Danzle said:


> I don't like noise. Also my 6700k can't OC more than 200Mhz without crashing and that's with 1.29v.
> 
> 
> I've got an EVGA 980Ti Classfied, already squezed for everything it can give. I wanted a used 1080 Ti, but they sell for the same price as a RTX 2080.



Understood on the CPU overclock. Having bought a 2080 Ti, I'd go for it even though they are very pricey. It really is a beast of a card. Coming from a 980 Ti, you can't go wrong with your high-refresh-rate demands. If money isn't an issue, I'd say buy one even with your stock 6700K. Even if you get a slight CPU bottleneck, you'll still get a huge boost; it's not like you're on an FX-8320. lol. Or, if you're patient, wait for the next iteration of the RTX cards.


----------



## nycgtr

So I learned something new today about all 3 variants of the Strix: the OC and Advanced are A chips (I've had about 4 of each of these by now), and the plain "regular Strix" O11G is a non-A chip, with a 112% power limit on the performance BIOS.


----------



## axiumone

Yeah, I posted about the non-OC Strix in the thread as well. It's complete BS. I shunt-modded the non-OC card and it overclocked just as well as the OC card.


----------



## J7SC

nycgtr said:


> So learned something new today. All 3 variants of the Strix. OC and Advanced are A chips (I've had about 4 of each of these by now) and the just "regular strix" O11G is a non A chip. 112% power limit on perf bios.



You seem to have some great pastimes  , playing pool w/ interesting folk (per avatar), collecting 12 or so RTX 2080 Tis  ...Enjoy !


----------



## Krzych04650

J7SC said:


> ...just ran some very quick Timespy tests w/ Threadripper 2950x @ 4.3 and 2x 780 Ti CLs...per below, don't seem to have a usage issue, both consistently at 100%. Mind you, with 2080 TIs, that will probably drop, simply because unless I'm running the CPU at 10 GHz, the CPU will invariably bottleneck such powerful cards. But inherently, there's nothing wrong with memory on Threadripper 2950x (results below are for 4.1 GHz and non-optimized mem...during the mobo RMA downtime, my trial period for Aida ran out  better buy a license key)...going to finish all the fine-tuning of the Threadripper before dropping in the 2x Aorus


On the topic, here are some 1950X vs 7900X benchmarks with Titan Xp SLI, so much less GPU power than 2080 Ti SLI: 



Absolute nonsense to build SLI with any AMD CPU, no matter how high the Cinebench score may be



kx11 said:


> so last night after playing Anthem demo for 20 minutes my Galax HOF started flickering and crashed with error message (the gpu was removed ...etc ) , it was giving me the same error with any game/benchmark i tried after that , i reseated it then re-installed the drivers...etc , i did everything imaginable but i guess it died on me , now back to the good old PALIT card i still have around


I was getting the same error with my max overclock, which is not stable in all scenarios. Applying my "daily" overclock removed the issue. I have never seen anything like this before. "Video card was physically removed from the system"... Sure, it got tired of serious gaming, ran off and bought a console  I assumed it was an Anthem-exclusive issue, because the game was far from stable. Not only this error, but 5-year-long loading screens on an SSD, and losing connection during the loading screen 4 out of 5 times. After a while, nothing could really surprise me with this game


----------



## J7SC

Krzych04650 said:


> On the topic, here are some 1950X vs 7900X benchmarks with Titan Xp SLI, so much less GPU power than 2080 Ti SLI: https://www.youtube.com/watch?v=JsV6ryk35R8 Absolute nonsense to build SLI with any AMD CPU, no matter how high Cinebench score may be





1 - Building a Threadripper SLI system is anything but nonsense for us, given that it's going into specific work programs we need to test out and develop as a software firm. We already have multiple Intel-based solutions w/ SLI, and so far, I find the Threadripper the better overall performer in our apps, never mind the cost advantage.

2 - If you do want to look at gaming specifically, I don't know about the 1950X, but here are two similar vids for the 2950X and Intel, by channels with a combined subscriber base of over a million


----------



## Krzych04650

J7SC said:


> 1 -Building a Threadripper SLI system is anything but nonsense for us, given that it's going into specific work programs we need to test out and develop as a software firm. We already have multiple Intel-based solutions w/ SLI and so far, I find the Threadripper better overall performer in our apps, never mind the cost advantage.
> 
> 2 - If you do want to look at gaming specifically, I don't know about 1950x, but here are two similar vids for 2950x and Intel by channels with a combined subscriber base of over a million
> 
> 
> https://www.youtube.com/watch?v=YWYOqKhhX2A
> 
> 
> https://www.youtube.com/watch?v=rhA_DbxfBsg


Well I assume we are talking about gaming here, not some unspecified work programs.

Are you suggesting that subscriber base has anything to do with the credibility or significance of these benchmarks? Because it is quite the opposite: originally great, 100% pure tech channels turn into BS as they grow too big and need to adjust their content for a wider viewer base, drastically sacrificing quality, because otherwise they stop growing or start to decline. Small enthusiast channels like Extreme Hardware are the best source of information, because they are actual gaming and hardware enthusiasts running all of these benchmarks on hardware they bought with their own money, and they show the actual state 1:1 without any clowning, if that is even a word  

The credibility of Testing Games, or rather the lack of it, has been well known since the "Nvidia lowers performance with drivers" drama, so I wouldn't use these videos as any kind of argument. Also, that video is not using SLI. 

Paul's Hardware video is 99% synthetic CPU benchmarks and does not have any gaming benchmarks; he even apologizes for that in the video.


----------



## kx11

maxmix65 said:


> My Lightning Z uses Samsung GDDR6:
> 2100 MHz easy, +1000 on the memory, stock vcore.
> With MSI AB I can modify the memory controller voltage (GDDR6), Aux1, Aux2 and core, and it has VRM1/VRM2 temperature sensors.
> There are two types of Lightning:
> one with a Z in the code, and one without the Z







What is the power target on your Lightning Z?! Mine (just arrived) only goes to 102%, and the memory chips are Samsung.


Also, what app controls the mini LCD panel?


----------



## kx11

Alright, the Lightning Z just arrived (late by 7 days) and it looks good already


----------



## Martin778

Push harder  https://www.3dmark.com/3dm/32962361?

3DM is complaining about using the 417.75 Hotfix driver though.

The power limit on the stock BIOS is ~360W / 108% and that's what's holding my scores back; heavy benchmarks like Superposition or PR trigger the PWR limit. 
The LN2 BIOS has 120% but that's still only 300-320W. It needs the unlocked Afterburner to release it first.


----------



## kx11

These RAM sticks are just stupid; they touch the back of the card and pick up its temperature


----------



## Martin778




----------



## maxmix65

kx11 said:


> What is the power target on your Lightning Z?! Mine (just arrived) only goes to 102%, and the memory chips are Samsung.
> 
> 
> Also, what app controls the mini LCD panel?


The power target is 108%.
Press the small red triangle (beside core voltage) and you
can control the VRAM and Aux 1/2 voltages from that panel.
For app control of the LCD panel, you have to use the app on the CD-ROM;
the online MSI apps do not work in my case.


----------



## Martin778

The newest MSI Dragon Center works but has a bit of a "RTFM" issue to it. It caught me off guard too, I've installed it and saw no LED controls, turns out you have to open the "Live Update" within DC and it should show "Mystic Light" which you have to install.


----------



## Glerox

Imagine doing a shunt mod on a Lightning Z


----------



## Fraizer

Fraizer said:


> hi
> 
> can someone tell me if the Sea Hawk X Hybrid, listed on the first page of this topic as a reference PCB with 16 power phases etc. --> | 1 Fan | 2 Slot | 268mm | LED | 16 Power Phases | 1755 MHz Boost | 300/330 W | Reference PCB | EAN 4719072596910 | PN V371-008R MSI link https://www.msi.com/Graphics-card/GeForce-RTX-2080-Ti-SEA-HAWK-X
> 
> can be overclocked, with strong custom watercooling, to the same level as, for example, the EVGA FTW3 Ultra with its 19 power phases etc. --> | 3 Fan | 2.75 Slot | 302mm | RGB | 19 Power Phases | 1755 MHz Boost | 300/373 W | Custom PCB | EAN 4250812429711 | PN 11G-P4-2487-KR EVGA link: https://www.evga.com/products/product.aspx?pn=11G-P4-2487-KR
> 
> If both of them are watercooled.
> 
> Why this question? Because I want to mount an Aquacomputer waterblock for the 2080 Ti (I have to use this one) on a good 2080 Ti for a high OC. From the first page of this topic I understand this MSI card has a reference PCB (maybe wrong?) and by default boosts to 1755 MHz like the best EVGA.
> 
> Can you please confirm that? If I can OC this MSI (16 power phases) to the same level as the EVGA (19 power phases) - I don't know if power phases matter for that? And is this MSI really a reference PCB like the Founders Edition, so that I can put a waterblock on it?
> 
> sorry for this bad english



Please any body can answer to this ?


----------



## Renegade5399

Fraizer said:


> Please any body can answer to this ?


Sorry this got missed!

So, with the 2080 Ti there are a couple of facts people need to be clear on:

1. The FE/reference board has had its power delivery analyzed by some of the best engineers in the industry. The conclusion is that the reference VRM of the 2080 Ti is over-engineered, over-built, and not only adequate but borderline exceptional.

2. 2080 Ti GPUs are not cherry-picked. Every single one is subject to the same lottery as all the others.

3. You need to be familiar with the difference between A and non-A chips. Non-A chips require physical modding (shunt) to get past the power limits. There are folks on here who have done this, and the non-A chips overclock just fine - like, just as well as their A counterparts.

To answer your question: yes, it is possible to do so, as the reference PCB is excellent. For ease of use, be sure to get an A-chip card. That way you can flash the 380W Galax BIOS to it to get the highest power limit currently available, as there is no XOC-style BIOS (Pascal "no limits" for LN2) for these _*yet*_.

Keeping the GPU temperature as low as possible is the trick; that will gain you the most clocks.


----------



## Danzle

Thanks for the answers, lads. Now I only need to pick a card. I'd appreciate assistance with which one to choose. I'm mainly looking for a well-cooled and quiet card, with no intent to push it, and a good "just in case" RMA service.

The cards i can chose from:

Palit 2080 Ti GamingPro OC 
Palit 2080 Ti GamingPro 

MSI RTX 2080 Ti DUKE OC
MSI RTX 2080 Ti GAMING X TRIO

GAINWARD RTX 2080 Ti Phoenix

GIGABYTE RTX 2080 Ti WINDFORCE
AORUS RTX 2080 Ti XTREME
AORUS RTX 2080 Ti


----------



## Shawnb99

Danzle said:


> Thanks for the answers, lads. Now I only need to pick a card. I'd appreciate assistance with which one to choose. I'm mainly looking for a well-cooled and quiet card, with no intent to push it, and a good "just in case" RMA service.
> 
> 
> 
> The cards i can chose from:
> 
> 
> 
> Palit 2080 Ti GamingPro OC
> 
> Palit 2080 Ti GamingPro
> 
> 
> 
> MSI RTX 2080 Ti DUKE OC
> 
> MSI RTX 2080 Ti GAMING X TRIO
> 
> 
> 
> GAINWARD RTX 2080 Ti Phoenix
> 
> 
> 
> GIGABYTE RTX 2080 Ti WINDFORCE
> 
> AORUS RTX 2080 Ti XTREME
> 
> AORUS RTX 2080 Ti




Gaming X Trio




----------



## ReFFrs

Renegade5399 said:


> Non-A chips require physical modding (shunt) to get past the power limits. There are folks on here that have done this and the Non-A chips overclock just fine. Like, just as well as their A counterparts.


Maybe it's because shunt-modded non-A cards are compared to A cards without the mod? Those are still power-limited in many cases, even with the 380W BIOS.

To get a proper OC comparison you should have A and non-A chips both shunt-modded.
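For anyone wondering why the shunt mod raises the limit at all: the card estimates its own draw from the voltage drop across tiny sense (shunt) resistors, so soldering an equal-value resistor on top halves the sensed drop and the card reports roughly half its real power. A minimal sketch of that arithmetic (the 5 mOhm shunt value and 380 W figure are illustrative assumptions, not measurements from any specific card):

```python
# Hypothetical illustration of why a shunt mod defeats the power limit.
# All values are example numbers, not taken from any particular 2080 Ti.

def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

stock_shunt = 0.005   # ohms: assumed 5 mOhm current-sense resistor
added_shunt = 0.005   # ohms: equal-value resistor soldered on top

modded = parallel(stock_shunt, added_shunt)  # halves the resistance

# The monitoring circuit computes current as I = V_drop / R, but it still
# assumes the stock resistance, so reported power scales by modded / stock.
actual_watts = 380.0
reported_watts = actual_watts * (modded / stock_shunt)
print(f"actual {actual_watts:.0f} W -> reported {reported_watts:.0f} W")
```

With equal resistors the card under-reports by exactly 2x, which is why a shunt-modded card sails past its BIOS limit while the A-vs-non-A comparison gets murky unless both cards are modded.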


----------



## Ambush083

nycgtr said:


> So learned something new today. All 3 variants of the Strix. OC and Advanced are A chips (I've had about 4 of each of these by now) and the just "regular strix" O11G is a non A chip. 112% power limit on perf bios.


Reminds me of the past few weeks for me with regard to seeing all those cards. All I was trying to do was go SLI using NVLink with 2x RTX 2080 Ti OC cards. I had a bad card I had to return. Then I had to reorder a few times from different places after being sent non-OC cards. I want what I want, plus I paid for it. In the end I'll have had 5 RTX 2080 Tis pass through my hands. Definitely not something to boast about since I'm still waiting on 3 different refunds lol.


----------



## kx11

maxmix65 said:


> The power target is 108%.
> Press the small red triangle (beside core voltage) and you
> can control the VRAM and Aux 1/2 voltages from that panel.
> For app control of the LCD panel, you have to use the app on the CD-ROM;
> the online MSI apps do not work in my case.





I'm gonna have to pull out my external CD writer then


----------



## Pepillo

3DMark Port Royal DLSS test. With the configuration I use by default to play, a light overclock, almost a 50% gain:










Interesting and promising...


----------



## J7SC

Martin778 said:


>





Nah, too slow  This one :wheee:


----------



## J7SC

Danzle said:


> Thanks for the answers, lads. Now I only need to pick a card. I'd appreciate assistance with which one to choose. I'm mainly looking for a well-cooled and quiet card, with no intent to push it, and a good "just in case" RMA service.
> 
> The cards i can chose from:
> 
> Palit 2080 Ti GamingPro OC
> Palit 2080 Ti GamingPro
> 
> MSI RTX 2080 Ti DUKE OC
> MSI RTX 2080 Ti GAMING X TRIO
> 
> GAINWARD RTX 2080 Ti Phoenix
> 
> GIGABYTE RTX 2080 Ti WINDFORCE
> AORUS RTX 2080 Ti XTREME
> AORUS RTX 2080 Ti



I know you like 'nice'n quiet' but can't recall if you would also consider AIO...

For air, probably the MSI Gaming X Trio...for water, probably the Aorus RTX 2080 Ti AIO. The Gigab. Aorus comes w/ 4 yr warranty, and I also very recently had some really good experience with MSI RMA.

It also comes down to what is actually available (I presume you would order from Europe); example here


Spoiler



https://www.caseking.de/pc-komponenten/grafikkarten/nvidia/geforce-rtx-2080-ti


----------



## reflex75

Why so many dead cards? 
Broken RTX generation...


----------



## sblantipodi

Pepillo said:


> nVidia Port Royale DLSS test. With the configuration that I use default to play, light overclock, almost 50% gain:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Interesting and promising ……..


The fact that no game has implemented it yet is really not promising.


----------



## kx11

sblantipodi said:


> the fact that no game implemented it yet is really not promising





FFXV got it 





this is my result with the new DLSS benchmark @ 4k




https://www.3dmark.com/nd/28625


----------



## sblantipodi

kx11 said:


> FFXV got it
> 
> 
> 
> 
> 
> this is my result with the new DLSS benchmark @ 4k
> 
> 
> 
> 
> https://www.3dmark.com/nd/28625


FFXV does not have it. It uses DLSS in the demo only; the developers announced that no new patches will be released for the game, so no DLSS in the game.


----------



## Fraizer

Renegade5399 said:


> Sorry this got missed!
> 
> So, with the 2080 Ti there are a couple of facts people need to be clear on:
> 
> 1. The FE/reference board has had its power delivery analyzed by some of the best engineers in the industry. The conclusion is that the reference VRM of the 2080 Ti is over-engineered, over-built, and not only adequate but borderline exceptional.
> 
> 2. 2080 Ti GPUs are not cherry-picked. Every single one is subject to the same lottery as all the others.
> 
> 3. You need to be familiar with the difference between A and non-A chips. Non-A chips require physical modding (shunt) to get past the power limits. There are folks on here who have done this, and the non-A chips overclock just fine - like, just as well as their A counterparts.
> 
> To answer your question: yes, it is possible to do so, as the reference PCB is excellent. For ease of use, be sure to get an A-chip card. That way you can flash the 380W Galax BIOS to it to get the highest power limit currently available, as there is no XOC-style BIOS (Pascal "no limits" for LN2) for these _*yet*_.
> 
> Keeping the GPU temperature as low as possible is the trick; that will gain you the most clocks.


thank you @Renegade5399

I understood many things ^^

But can you please tell me, if you know:

<<<can someone tell me if the Sea Hawk X Hybrid, listed on the first page of this topic as a reference PCB with 16 power phases etc. --> | 1 Fan | 2 Slot | 268mm | LED | 16 Power Phases | 1755 MHz Boost | 300/330 W | Reference PCB | EAN 4719072596910 | PN V371-008R MSI link https://www.msi.com/Graphics-card/Ge...-Ti-SEA-HAWK-X>>>

because I can get this card, but I need to know if I can mount on it the Aqua Computer waterblock that works on the Founders Edition and on the Titan RTX.

@Renegade5399

If you had the choice between this MSI card I linked before (which has to work with my waterblock) and a Titan RTX for the same price, which would you choose, based only on which gives the best overclocking on strong watercooling for the best FPS in games? This is really an important question, based only on that, not on reselling it in a year etc.

And regarding your great answer: if a card reaches 1750 MHz as sold by MSI or EVGA, does that mean the card has better potential for a higher OC?

hope my english was not so bad


----------



## nycgtr

J7SC said:


> 1 -Building a Threadripper SLI system is anything but nonsense for us, given that it's going into specific work programs we need to test out and develop as a software firm. We already have multiple Intel-based solutions w/ SLI and so far, I find the Threadripper better overall performer in our apps, never mind the cost advantage.
> 
> 2 - If you do want to look at gaming specifically, I don't know about 1950x, but here are two similar vids for 2950x and Intel by channels with a combined subscriber base of over a million
> 
> 
> https://www.youtube.com/watch?v=YWYOqKhhX2A
> 
> 
> https://www.youtube.com/watch?v=rhA_DbxfBsg


I had a 1950X OCed to 4.1 that I replaced with a 7960X @ 4.8, with a pair of Titan Xps in SLI. SLI is much faster on the Intel setup; it's just how it is. For high-framerate resolutions it's still the better choice. That doesn't mean it's the best balance of cost/workload etc. otherwise.


----------



## Martin778

Danzle said:


> Thanks for the answers, lads. Now I only need to pick a card. I'd appreciate assistance with which one to choose. I'm mainly looking for a well-cooled and quiet card, with no intent to push it, and a good "just in case" RMA service.
> 
> The cards i can chose from:
> 
> Palit 2080 Ti GamingPro OC
> Palit 2080 Ti GamingPro
> 
> MSI RTX 2080 Ti DUKE OC
> MSI RTX 2080 Ti GAMING X TRIO
> 
> GAINWARD RTX 2080 Ti Phoenix
> 
> GIGABYTE RTX 2080 Ti WINDFORCE
> AORUS RTX 2080 Ti XTREME
> AORUS RTX 2080 Ti


The 2080 Ti Trio, without any doubt. If it's anything like the Lightning in terms of cooling, it should run well under 70°C with a custom fan curve.
The Aorus is nothing more than a Windforce with extra lights; mine 'ended' around 2 GHz but runs very hot, ~75°C with a custom fan curve...


----------



## sblantipodi

nycgtr said:


> I had a 1950x oced to 4.1 that I replaced with a 7960x @ 4.8 with a pair of titan Xps in sli. Sli is much faster in the intel setup. It just how it is. For HFR resolutions it's still a better choice. Doesn't mean its the best balance of cost/workload etc elsewise.


A 7960X @ 4.8 GHz is either very rare or very unstable


----------



## Burke888

nycgtr said:


> So learned something new today. All 3 variants of the Strix. OC and Advanced are A chips (I've had about 4 of each of these by now) and the just "regular strix" O11G is a non A chip. 112% power limit on perf bios.


Nice find!
Are there any differences on the regular Strix O11G? 
With the chip being a non-A chip, can you elaborate on that?

Thanks!


----------



## kx11

sblantipodi said:


> FFXV does not have it. it uses DLSS for demo only, developers announced that no new patch will be released for the game so no DLSS in the game.





watch the very beginning of the video


----------



## sblantipodi

kx11 said:


> watch the very beginning of the video
> 
> 
> 
> 
> https://www.youtube.com/watch?v=_9vwZsawSbs


OK, you are right in this case, sorry for the nonsense...
But in any case that game can't really be considered a game that uses DLSS. 
It's the only game where DLSS makes no sense: with such bad graphics, an RTX should rock it without DLSS


----------



## Duskfall

kx11 said:


> watch the very beginning of the video
> 
> 
> 
> 
> https://www.youtube.com/watch?v=_9vwZsawSbs



It's only for 4K, mate. I have an ultrawide 3440x1440 and it's disabled. The only way to enable it is to use DSR @ 2.25x, which is effectively > 4K because of my ultrawide, and the performance hit is ~15-20 fps with DLSS at that resolution compared to TAA @ 1440p, even though it looks great


----------



## J7SC

nycgtr said:


> I had a 1950x oced to 4.1 that I replaced with a 7960x @ 4.8 with a pair of titan Xps in sli. Sli is much faster in the intel setup. It just how it is. For HFR resolutions it's still a better choice. Doesn't mean its the best balance of cost/workload etc elsewise.


 :2cents: You know, I never posted about the 7960X or anything like it, as I loathe brand discussions...besides, I'm sitting in an office with all but one machine running Intel HEDTs or Xeons, many with SLI, and the one and only non-Intel is the 2950X (and I repeat that I know nothing about the 1950X, other than that the 2950X is an improvement). This AMD setup is primarily a test-bed for work - and it also has to look good as well as perform, given its prominent location. I'm not building a 'gamer'; if that were my goal, I'd order up a 7980XE or, better yet, a 9900K binned to 5.3 from Der8auer, complete with direct-die frame mount, and hook it up to a phase cooler. I haven't had time yet to use any of the three game coupons (such as the one for BFV) from last November and December when I got the parts for this build...different priorities.

This thing is instead supposed to be a simplified test-bed as our datacenter is preparing to switch from Intel to AMD Epyc 2 (Rome) in the summer / fall. Our proprietary software covers a range from large-scale biz data analysis to encoding, encryption and rendering. NVidia has also released software (for Titan RTX and RTX 2080 TI only) that is supposed to make some of the large-data analysis tasks much more productive, but we obviously need to test that out. So if you add it all together, you might see why I went for the 2950x (2990x still has some Windows scheduler issues) and the 2080 TIs as an all-around test-bed.

All that aside, I ran a few SLI benchmarks, and @ 1080 (> more emphasis on CPU), TimeSpy gave me 100% utilization of both GTX cards (pls see below), unlike what was vehemently argued before...that's all I wanted to get across...either you have 100% utilization on two GPUs or you don't - that simple. Now, two cards such as 2080 TIs (or even more so two Titan RTX) will eventually get CPU bound, with the Threadrippers very likely hitting that point earlier than Intel, IMO. Then again, there are multiple comp vids of 2950X (including on games) and they do very well. Overall, I think it is a great processor - especially for the money -and judging by both Intel's and AMD's recent financial results just published, a lot of other folks think so as well. :2cents:


----------



## markuaw1

kx11 said:


> alright , Lightning Z just arrived ( late by 7 days ) and it looks good already


Very nice! How much was your card? Looks like you got a good one : ) I just picked up an RTX 2080 Ti FTW3 ULTRA; paid too much, but it runs well.














http://www.3dmark.com/pr/33547


----------



## markuaw1

Pepillo said:


> nVidia Port Royale DLSS test. With the configuration that I use default to play, light overclock, almost 50% gain:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Interesting and promising ……..


----------



## Glerox

Mine just arrived also!

Is it normal that I have a boner?

Now I just need a PC... my rig Gargantua just left today to a buyer in California lol! (I live in Montreal).

I want it back to try the Lightning Z!!!

Who's gonna be the first to shunt mod it?


----------



## J7SC

Galax HOF OCL, Matrix, Lightning Z - and now KingPin >>> getting some popcorn; it's going to be quite a show when it all gets down and dirty...

...not sure if this has been posted here yet, found this via the French site 'vonguru.fr'


----------



## nycgtr

Burke888 said:


> Nice find!
> Are there any differences on the Regular Strix 011G?
> With the chip being a non-A chip, can you elaborate more on that?
> 
> Thanks!


A lower-bin chip, that's all. It just kinda sucks that Asus would put it on the Strix models, considering there are the Dual and Turbo variants below the Strix. Mind you, at launch I opened 4 Asus blowers and 4-5 Asus Duals and all of them were A chips. Non-A chips were supposedly for cards we would see at $999 or closer to that. 



J7SC said:


> :2cents: You know, I never posted about the 7960X or anything like it as I loath brand discussions...besides, I'm sitting in an office with all but one machine running Intel HEDTs or Xeons, many with SLI and the one and only non-Intel is the 2950X (and I repeat that I know nothing about 1950X, other than 2950X is an improvement). This AMD setup is primarily a test-bed for work - and it also has to look good as well as perform, given its prominent location. I'm not building a 'gamer'; if that would be my goal, I'd order up a 7980XE or better yet a 9900K binned to 5.3 from Der8auer, complete with direct-die frame mount and hook it up to a phase cooler. I haven't had time yet to use any of the three game coupons such as for BFV from last November and December when I got the parts for this build...different priorities.
> 
> This thing is instead supposed to be a simplified test-bed as our datacenter is preparing to switch from Intel to AMD Epyc 2 (Rome) in the summer / fall. Our proprietary software covers a range from large-scale biz data analysis to encoding, encryption and rendering. NVidia has also released software (for Titan RTX and RTX 2080 TI only) that is supposed to make some of the large-data analysis tasks much more productive, but we obviously need to test that out. So if you add it all together, you might see why I went for the 2950x (2990x still has some Windows scheduler issues) and the 2080 TIs as an all-around test-bed.
> 
> All that aside, I ran a few SLI benchmarks, and @ 1080 (> more emphasis on CPU), TimeSpy gave me 100% utilization of both GTX cards (pls see below), unlike what was vehemently argued before...that's all I wanted to get across...either you have 100% utilization on two GPUs or you don't - that simple. Now, two cards such as 2080 TIs (or even more so two Titan RTX) will eventually get CPU bound, with the Threadrippers very likely hitting that point earlier than Intel, IMO. Then again, there are multiple comp vids of 2950X (including on games) and they do very well. Overall, I think it is a great processor - especially for the money -and judging by both Intel's and AMD's recent financial results just published, a lot of other folks think so as well. :2cents:


I still think the AMD chip is a better buy for those whose main purposes require the higher core count. The reason I swapped out the 1950X was that it was my daily multi-purpose machine and I needed nested virtualization for MS products. The increased GPU performance is just icing.


----------



## kot0005

maxmix65 said:


> My Lightning Z uses Samsung GDDR6:
> 2100 MHz easy, +1000 on the memory, stock vcore.
> With MSI AB I can modify the memory controller voltage (GDDR6), Aux1, Aux2 and core, and it has VRM1/VRM2 temperature sensors.
> There are two types of Lightning:
> one with a Z in the code, and one without the Z


Wow, so you can control voltages? Can you show some screenshots of MSI AB? TPU as usual didn't mention this. It's turned into a ****ty site.


----------



## JustinThyme

I'm thinking, from what I'm seeing, that these cards aren't going much past 2100 MHz no matter the brand. I put the Galax BIOS on my Strix cards and, with 100 watts more power limit, I went from a solid 2100 OC with peaks of 2200 (hitting the power limit) to... drum roll... 2100 with peaks of 2200. Any higher and whatever load I put on it just crashed. Not even talking about a huge jump either: just an additional 10 MHz and it's going to crash; 20 MHz and it crashes faster. So I just went back to the OEM BIOS and clocked it to +150, which gives me 2050 with solid peaks of 2100 without hitting the power limit. It gets to 123-124% but not 125%. I'm just gonna lock that in and rip the knob off. 

As for memory, the O11G Strix cards I have both have Samsung memory. I went straight to +500 without sneezing, then went up 50 at a time, and am sitting at +700 with no issues thus far. Don't know if they all do or not. I'll push some more soon to see where that knob gets ripped off. I'm happy with the cards and results. Honestly, I don't know that anything short of serious extreme measures like LN2 will put a serious dent in the heat. Fastest cards to date, but also the hottest. My Pascal cards didn't come near the heat these puppies produce.
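Since the power-limit slider is just a percentage multiplier on whatever board power target the BIOS defines, the watt figures behind those 108%/124% numbers are easy to back out. A quick sketch, assuming the 250 W reference-spec TDP as the base (per-card BIOS targets differ, so treat the numbers as illustrative):

```python
# Convert a power-limit slider percentage into watts.
# base_tdp_w is whatever the installed BIOS defines as 100%; the 250 W
# used below is the reference RTX 2080 Ti spec, not any specific card.

def slider_to_watts(base_tdp_w: float, percent: float) -> float:
    """Watts corresponding to a given power-limit slider setting."""
    return base_tdp_w * percent / 100.0

base = 250.0  # W, assumed reference TDP
for pct in (100, 108, 112, 124):
    print(f"{pct:>3}% -> {slider_to_watts(base, pct):.0f} W")
```

On a BIOS with a higher base target (e.g. the 380 W Galax BIOS), the same slider percentages map to proportionally higher wattages, which is why flashing changes headroom even when the percentage cap looks similar.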


----------



## J7SC

JustinThyme said:


> Im thinking from what Im seeing these cards arent going much past 2100Mhz no matter the brand. I put the GALAX BIOS on my strix cards and with a 100 watts more power limit I went from 2100 solid OC with peaks of 2200 (hitting power limit) to.......drum roll. 2100 with peaks of 2200. Any higher and What ever load I put on it just crashed. Not even talking about a huge jump either. just an additional 10 Mhz and its going to crash. 20Mhz and it crashes faster. So i just went back to the OEM BIOS and clock it to +150 that gives me 2050 with solid peaks of 2100 without hitting the power limit. Gets to 123-124 but not 125. I just gonna lock that in and rip the knob off.
> 
> As for memory The O11G Strix cards I have both have Samsung memory that I went straight to +500 without sneezing then up 50 at a time and sitting at +700 and no issues thus far. Dont know if they all do or not. Ill push some more soon to see where that knob gets ripped off. Im happy with the cards and results. Honestly I dont know that anything other than some serious extreme measures like LN2 or such will put a serious dent in the heat. Fastest cards to date *but also the hottest. My pascal cards didnt come near the heat these puppies produce*.



I like the part about ripping the knobs off  Controlling heat is probably the only real thing one can do with these cards in regular setups. This review with a score of 9/10 of the Lightning Z (in French, but lots of pics and graphs, never mind Google translate) makes that point near the end re. water-cooling: 


Spoiler



https://vonguru.fr/2019/02/02/test-rtx-2080-ti-lightning-z-le-haut-de-gamme-de-msi/


I wonder how the Matrix and Kingpin w/ their hybrid liquid cooling will do in 'regular' (non-LN2) conditions :Snorkle:


----------



## kot0005

J7SC said:


> I like the part about ripping the knobs off  Controlling heat is probably the only real thing one can do with these cards in regular setups. This review with a score of 9/10 of the Lightning Z (in French, but lots of pics and graphs, never mind Google translate) makes that point near the end re. water-cooling:
> 
> 
> Spoiler
> 
> 
> 
> https://vonguru.fr/2019/02/02/test-rtx-2080-ti-lightning-z-le-haut-de-gamme-de-msi/
> 
> 
> I wonder how the Matrix and Kingpin w/ their hybrid liquid cooling will do in 'regular' (non-LN2) conditions :Snorkle:


wow nice find. Someone upload this Unlocked MSI afterburner with 1.5v lol


----------



## Martin778

I hope they release a 450W normal BIOS, because this is pretty pointless - you can't do a thing with this card on the LN2 BIOS unless you have the "veri speshul" Afterburner... and the normal BIOS hits its 360W limit in heavy benchmarks anyway, affecting the score.


----------



## Glottis

I have a question about the Asus Strix OC. What's the difference between the Silent and OC BIOS? I know the fan curve is obviously different, but do both of them have 0 rpm fan mode, and do both have the same power limit?


----------



## markuaw1

JustinThyme said:


> Im thinking from what Im seeing these cards arent going much past 2100Mhz no matter the brand. I put the GALAX BIOS on my strix cards and with a 100 watts more power limit I went from 2100 solid OC with peaks of 2200 (hitting power limit) to.......drum roll. 2100 with peaks of 2200. Any higher and What ever load I put on it just crashed. Not even talking about a huge jump either. just an additional 10 Mhz and its going to crash. 20Mhz and it crashes faster. So i just went back to the OEM BIOS and clock it to +150 that gives me 2050 with solid peaks of 2100 without hitting the power limit. Gets to 123-124 but not 125. I just gonna lock that in and rip the knob off.
> 
> As for memory The O11G Strix cards I have both have Samsung memory that I went straight to +500 without sneezing then up 50 at a time and sitting at +700 and no issues thus far. Dont know if they all do or not. Ill push some more soon to see where that knob gets ripped off. Im happy with the cards and results. Honestly I dont know that anything other than some serious extreme measures like LN2 or such will put a serious dent in the heat. Fastest cards to date but also the hottest. My pascal cards didnt come near the heat these puppies produce.


My 2080 Ti FTW3 ULTRA: setting a voltage/frequency point of 2130 MHz at 1.069 V and +1200 memory runs great : )


----------



## Hanks552

Can somebody help me with my overclock?
I did the shunt mod, so I don't have a power limit anymore.
BUT: I was getting 2115 and even 2130, though power-limited, with my old graphics card.
Idk if you guys remember, but I ****ed up one card, so I got a new one with Samsung memory;
now one is Samsung and the other Micron...

I did the shunt mod on both, but I'm not able to even reach 2115; it crashes...
I don't hit the power limit, yet the voltage seems low...
If somebody can help me out, I'd appreciate it.

I tried to overclock manually with the curve and it does work; I set it to 2130 with locked voltage,
but when I start any software, it clocks down...

380W BIOS + shunt mod

Thanks!


----------



## nycgtr

Glottis said:


> I have a question about Asus STRIX OC. What's the difference between Silent and OC BIOS. Well I know that obviously fan curve is different, but do both of them have 0rpm fan mode and both have the same power limit?


The power limit is different, I believe. I never used the regular mode.


----------



## nycgtr

J7SC said:


> I like the part about ripping the knobs off  Controlling heat is probably the only real thing one can do with these cards in regular setups. This review with a score of 9/10 of the Lightning Z (in French, but lots of pics and graphs, never mind Google translate) makes that point near the end re. water-cooling:
> 
> 
> Spoiler
> 
> 
> 
> https://vonguru.fr/2019/02/02/test-rtx-2080-ti-lightning-z-le-haut-de-gamme-de-msi/
> 
> 
> I wonder how the Matrix and Kingpin w/ their hybrid liquid cooling will do in 'regular' (non-LN2) conditions :Snorkle:


If only someone would have been a champ and copped the Matrix vBIOS at CES.


----------



## J7SC

kot0005 said:


> wow nice find. Someone upload this Unlocked MSI afterburner with 1.5v lol



...I dunno, death and destruction in the 2080 Ti domain would increase swiftly 




nycgtr said:


> If only someone would have been a champ and copped the Matrix vBIOS at CES.



...too bad Elmor left ROG Asus last fall, he always seemed to be 'at least near' some special bios and hw mods.


----------



## maxmix65

kot0005 said:


> wow so u can control voltages? can you shows some screenshots of msi AB ? TPU as usual didnt mention this. Turned into a ****ty site.


Look here for setting Msi AB
https://youtu.be/XGit9wmMQbY?t=30


This is my first test at WQHD


----------



## kx11

kot0005 said:


> Wow, so you can control voltages? Can you show some screenshots of MSI AB? TPU as usual didn't mention this. Turned into a ****ty site.



you mean this ?!


----------



## Krazee

but can it run Crysis?


----------



## Hanks552

I need some help: low voltage, no power limit, but if I try to go higher than 2085 MHz it crashes...
I did the shunt mod; could it be a different GPU power measurement causing this limit?
Could it be mounting pressure? Card bending? I'm using the original thicker pad on the bottom of the graphics card.


----------



## Glerox

kx11 said:


> you mean this ?!


Is this a special version of MSI AB?
Have you tried whether modifying the other voltages makes any difference?
Can't wait to try the lightning!


----------



## ComansoRowlett

Can someone point me to where I can measure vcore voltage with a multimeter on a 2080ti FE please? I have some new mods I am trying out, I'm still learning in that regard. Thanks!


----------



## SEALBoy

Can anyone share their general overclocking strategy for the 2080 Ti? Here's mine:

1. Set the power limit to max. Since OCing is power limited, this is a no-brainer.
2. Run the card under load at stock settings (except power limit at max) and see what voltage it hovers around once it reaches a stable temperature (you will never exceed this temperature, since the power limit caps how much heat the card can produce).
3. Since the card is power limited, and power draw increases with both voltage and frequency, pick a voltage one or two steps below what you saw in (2) and maximize your stable clock at that voltage using a custom voltage/frequency curve.
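As a rough illustration of why step 3 works, here's a toy sketch assuming power scales with V² × f. The coefficient and the voltage/clock pairs are made-up illustrative numbers, not measured 2080 Ti values:

```python
# Toy model: a power-limited card can only hold the clock where
# P = k * V^2 * f stays under the budget, whichever is lower.
def sustained_clock(volts, stable_mhz, budget=330.0, k=0.15):
    """Clock the card can hold: capped by stability and by the power budget."""
    budget_mhz = budget / (k * volts**2)  # clock at which power hits the budget
    return min(stable_mhz, budget_mhz)

# (voltage, highest clock found stable at that voltage) -- invented numbers
curve = [(1.093, 2100), (1.050, 2070), (1.025, 2055), (1.000, 2025)]

best = max(curve, key=lambda p: sustained_clock(*p))
print(best)  # the sweet spot sits a step or two below max voltage
```

Under this model the top-voltage points blow the power budget and get clocked down, so the best *sustained* clock comes from a slightly lower voltage point, which is exactly the trade-off step 3 exploits.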


----------



## iamjanco

ComansoRowlett said:


> Can someone point me to where I can measure vcore voltage with a multimeter on a 2080ti FE please? I have some new mods I am trying out, I'm still learning in that regard. Thanks!


Not the FE, but you might be able to pull the info out of the following link if no one else chimes in with something more granular (the XC Ultra is based on the reference card; xdevs.com is EVGA TiN's site):

*Teardown of the EVGA GeForce RTX 2080 Ti XC Ultra*

Also have a look at this thread; some good, relevant info there:

*General GPU Voltmod and External VRM Information*

Lastly, if you're not aware of it, you might want to have a look at the *EVGA EPOWER V*, which is designed to facilitate performing similar tasks (it's got a ProbeIt connector):

*EVGA Support Manual for EVGA EPOWER V*









Good luck


----------



## krkseg1ops

So I flashed two non-factory BIOS versions to my Gainward 2080 Ti GS and I'm not seeing any results. First I tried the EVGA XC3 338W and then the Galax 380W BIOS. I did tests with the Final Fantasy XV benchmark and the scores are basically identical (8500ish). Am I missing something? I can hear the fans working louder, and 100% fan speed is now much faster than 100% fan speed on the default Gainward 330W BIOS, but it is not translating to better performance.


----------



## maxmix65

kot0005 said:


> Wow, so you can control voltages? Can you show some screenshots of MSI AB? TPU as usual didn't mention this. Turned into a ****ty site.


https://www.guru3d.com/files-details/msi-afterburner-beta-download.html
click on the red triangle


----------



## stilllogicz

So I purchased a 2080 ti seahawk ek x and while the card is powerful and I love the looks, it was hitting 75c during benchmarks. After doing some research it seems all I could find were owners who had the same high temp issues. Does anyone have this card and not have this problem?

I'm going to return for a refund and am on the fence about trying to get another one. If I decide to go reference pcb, does it matter which card I get? I hear the gaps this generation are small. Obviously I would be overclocking it anyway for max performance.


----------



## Edge0fsanity

stilllogicz said:


> So I purchased a 2080 ti seahawk ek x and while the card is powerful and I love the looks, it was hitting 75c during benchmarks. After doing some research it seems all I could find were owners who had the same high temp issues. Does anyone have this card and not have this problem?
> 
> I'm going to return for a refund and am on the fence about trying to get another one. If I decide to go reference pcb, does it matter which card I get? I hear the gaps this generation are small. Obviously I would be overclocking it anyway for max performance.


There is no performance difference to be found with ambient cooling between reference and AIB PCBs. The reference ones are very high quality and over-engineered. The only difference is whether you get an A or non-A chip and the BIOS they come with. All you need to do is make sure you get an A chip; the info for that is in the OP. With whatever you get, just flash the 380W Galax BIOS and you'll be set.


----------



## kot0005

Glerox said:


> Is this a special version of MSI AB?
> Have you tried if it makes any difference to modify the other voltages?
> Can't wait to try the lightning!


No, it unlocks for the Lightning Z. I tried the Lightning Z BIOS on my Seahawk EK X and it didn't show up.

The special version of AB unlocks voltages up to 1.5V instead of 1.093V.


----------



## marsder

Does anyone have any more details on the Galax SG version? Even the power column is blank on the first page.


----------



## kot0005

stilllogicz said:


> So I purchased a 2080 ti seahawk ek x and while the card is powerful and I love the looks, it was hitting 75c during benchmarks. After doing some research it seems all I could find were owners who had the same high temp issues. Does anyone have this card and not have this problem?
> 
> I'm going to return for a refund and am on the fence about trying to get another one. If I decide to go reference pcb, does it matter which card I get? I hear the gaps this generation are small. Obviously I would be overclocking it anyway for max performance.


I do; mine maxes out at 51°C after 4-5 hours of gameplay with 350-375W GPU power. Check your contact or RMA it. It's also summer here; ambients are around 28-30°C.


----------



## stilllogicz

kot0005 said:


> I do, mine maxes out at 51c after 4-5hrs of gameplay with 350-375W GPU power. Check your contact or RMA it. Its also summer here, ambients are around 28-30c.


Oh wow! They do make good ones. I was going to open mine but didn't want to deal with a voided warranty. Did your Seahawk EK X work properly right out of the box, or did you have to reseat it? Mine idled at 35°C and would instantly shoot up to 75°C during gaming/benchmarking. Was averaging a little over 2000 MHz.


----------



## Rob w

So I’ve been having some fun with this card. It games well, but benching is another matter; I can’t even get on the scoreboard with it.
It’s on water and a chiller. I thought about shunting it like I did with the Titan V, but decided to flash it instead, as there are good results coming from the Galax 380W BIOS. Trouble is:
Flashed with nvflash64 = failed
Flashed with nvflash = failed
Tried other modified versions of nvflash, even the one for getting past the ID checks = failed.
I’ve tried different vBIOSes and none will flash to the card.
Commands tried:
nvflash64 --protectoff
nvflash64 --index=0 -6 "bios name.rom"
Every combination fails with the ID mismatch / board mismatch; it’s just not allowing a flash.
It is the FE 2080 Ti, A chip, with Samsung memory.
Doing a clean Win10 and driver install and will try again after, but I now have a pack of 8mΩ resistors for if it doesn’t, as a shunt mod WILL sort the power limit.
I do not see why I should have the only A chip that can’t be flashed. Lol
I have followed every guide I can find.


----------



## Martin778

stilllogicz said:


> Oh wow! They do make good ones. I was going to open mine but didn't want to deal with a voided warranty. Did your seahawk ek x work properly right out of the box or did you have to reseat it? Mine idled at 35c and would *instantly shoot up to 75c during gaming/benchmarking*. Was averaging a little over 2000 mhz.


RMA; probably a bad block/pump mount or a dead pump?


By the way, what do you guys think will happen when HDMI 2.1 (4K 120Hz) comes out? Is NVIDIA going to update the firmware on the cards, or are they going to release new ones with updated HDMI?


----------



## stilllogicz

Martin778 said:


> RMA, probably bad block/pump mount or dead pump?
> 
> 
> By the way, what do you guys think will happen when HDMI 2.1 (4k 120Hz) comes out, is NVidia going to update the firmware on the cards or are they going to release new ones with updated HDMI?


The pump is fine, I think it's a bad block seating. Have it set up for return, just gonna get a reference card and flash it.


----------



## Coldmud

Rob w said:


> So I’ve been having some fun with this card. It games well, but benching is another matter; I can’t even get on the scoreboard with it.
> It’s on water and a chiller. I thought about shunting it like I did with the Titan V, but decided to flash it instead, as there are good results coming from the Galax 380W BIOS. Trouble is:
> Flashed with nvflash64 = failed
> Flashed with nvflash = failed
> Tried other modified versions of nvflash, even the one for getting past the ID checks = failed.
> I’ve tried different vBIOSes and none will flash to the card.
> Commands tried:
> nvflash64 --protectoff
> nvflash64 --index=0 -6 "bios name.rom"
> Every combination fails with the ID mismatch / board mismatch; it’s just not allowing a flash.
> It is the FE 2080 Ti, A chip, with Samsung memory.
> Doing a clean Win10 and driver install and will try again after, but I now have a pack of 8mΩ resistors for if it doesn’t, as a shunt mod WILL sort the power limit.
> I do not see why I should have the only A chip that can’t be flashed. Lol
> I have followed every guide I can find.


I use nvflash64 5.531.0:
Run cmd as admin.
"nvflash64 -6 XXX.rom"
Press Y to bypass the mismatch. This should work; can you screenshot your problem?


----------



## Rob w

Coldmud said:


> I use nvflash64 5.531.0
> run cmd as admin
> "nvflash64 -6 XXX.rom"
> press Y to bypass the mismatch, this should work, can you screenshot your problem?


Will give it a go and post the result either way it goes (losing faith slowly).


----------



## kx11

Glerox said:


> Is this a special version of MSI AB?
> Have you tried if it makes any difference to modify the other voltages?
> Can't wait to try the lightning!





1. Nope, it works only with the Lightning Z, and MSI AB adds the 2x VRM temperature sensors in the monitoring tab.
2. Those voltage sliders didn't help me a lot in benchmarks.







superb card btw


----------



## kx11

putting this lightning Z to work


----------



## GAN77

kx11 said:


> putting this lightning Z to work


What is your message about?)


----------



## Rob w

Coldmud said:


> I use nvflash64 5.531.0
> run cmd as admin
> "nvflash64 -6 XXX.rom"
> press Y to bypass the mismatch, this should work, can you screenshot your problem?


OK, gave it a go and this is what I get, whatever BIOS I try:

https://www.overclock.net/forum/attachment.php?attachmentid=251888&stc=1&d=1549490237


----------



## Coldmud

Rob w said:


> ok, gave it a go and this is what I get, whatever bios I try.?
> 
> https://www.overclock.net/forum/attachment.php?attachmentid=251888&stc=1&d=1549490237


Strange! Have you tried the 5.531.0 version as well? This is the version I have been using for some time with success.


----------



## Fachasaurus

So I am in the market for one of these 2080 Ti cards, and the first post has a wealth of information, so kudos for getting all that assembled. It really helped me out a lot. My goal is to just get the cheapest "A" model EVGA or ASUS card, as I personally have had excellent experiences with these two brands, but I just wanted to see if anyone could provide me with some reassurance on the following.

When flashing the Galax 380W BIOS onto an EVGA or ASUS card, are there any fail-safes to prevent a bad flash? Is there any way to restore it on these cards if something does go poorly? I know some of the Strix cards have a hardware dual-BIOS switch, but those cards aren't using reference PCBs, so I couldn't put the EK waterblock on them.

Similarly, would flashing a different branded BIOS void my warranty in this case, or would I simply need to restore the original BIOS before the RMA process?


----------



## Kalm_Traveler

hey 2080 Ti friends - is there a preferred waterblock? or are they all probably about the same? I'm using the Bitspower 20-series blocks on my Titans and they seem fine, but I'm going to buy 1 more card to upgrade my smaller rig with and wouldn't mind trying a different brand if there's a clear winner.


----------



## kot0005

stilllogicz said:


> Oh wow! They do make good ones. I was going to open mine but didn't want to deal with a voided warranty. Did your seahawk ek x work properly right out of the box or did you have to reseat it? Mine idled at 35c and would instantly shoot up to 75c during gaming/benchmarking. Was averaging a little over 2000 mhz.


Worked right out of the box. Your idle seems a little too high; mine's around 32°C.


----------



## kot0005

Rob w said:


> ok, gave it a go and this is what I get, whatever bios I try.?
> 
> https://www.overclock.net/forum/attachment.php?attachmentid=251888&stc=1&d=1549490237


How do you know you have an A chip ? Did you open it or Just assuming ?


----------



## bbmaster123

Rob w said:


> ok, gave it a go and this is what I get, whatever bios I try.?
> 
> https://www.overclock.net/forum/attachment.php?attachmentid=251888&stc=1&d=1549490237


Try nvflash -j -4 -5 -6 bios.rom
I didn't even know about this one. I had the same issue; don't know why, but it worked for me.
Also check GPU-Z to confirm you have an A chip.


----------



## J7SC

Kalm_Traveler said:


> hey 2080 Ti friends - is there a preferred waterblock? or are they all probably about the same? I'm using the Bitspower 20-series blocks on my Titans and they seem fine, but I'm going to buy 1 more card to upgrade my smaller rig with and wouldn't mind trying a different brand if there's a clear winner.



Aqua Computer Cryo w/active BP, or Watercool Heatkiller IV are highly rated for this kind of card


----------



## Glerox

kx11 said:


> putting this lightning Z to work


You're removing the stock cooler?


----------



## kx11

Glerox said:


> You're removing the stock cooler?



I will, if a waterblock for it is released within this week.


----------



## Rob w

kot0005 said:


> How do you know you have an A chip ? Did you open it or Just assuming ?


Definitely A chip, checked physically as well.









bbmaster123 said:


> try nvflash -j -4 -5 -6 bios.rom
> I didn't even know about this one. I had the same issue. don't know why, but it worked for me
> also check gpu-z to confirm if you have an A chip


Will try those instructions.

Tried 5.531 as well as every nvflash I could find. I have reinstalled Windows and all drivers to the latest and it still won’t flash. Weird!
Going to give it a break and try again tomorrow.
Had to RMA my first card; this is the replacement they sent me.


----------



## krizby

Rob w said:


> Definitely A chip, checked physically as well.
> View attachment 251962
> 
> 
> Will try those instructions.
> 
> Tried 5.531 as well as every nvflash I could find, I have re installed windows and all drivers to the latest and it still won’t flash, wierd!
> Going to give it a break and try again tomorrow,
> Had to rma my first card, this is the replacement they sent me,


Your graphics score seems right; it's just that your CPU score is extremely low for the 7980XE. Even my 8700K scores 9400 in Time Spy, lol.


----------



## Rob w

krizby said:


> Your graphic score seems right, just that your CPU score is extremely low for the 7980XE, even my 8700K score 9400 in TimeSpy lol


CPU was at stock; no OC set since installing the new drivers.
Just been concentrating on the GPU at the moment.


----------



## CptSpig

Rob w said:


> Definitely A chip, checked physically as well.
> View attachment 251962
> 
> 
> Will try those instructions.
> 
> Tried 5.531 as well as every nvflash I could find, I have re installed windows and all drivers to the latest and it still won’t flash, wierd!
> Going to give it a break and try again tomorrow,
> Had to rma my first card, this is the replacement they sent me,





krizby said:


> Your graphic score seems right, just that your CPU score is extremely low for the 7980XE, even my 8700K score 9400 in TimeSpy lol


For a better Physics score in Time Spy, disable hyper-threading in the BIOS. This works in Time Spy only, not Extreme. 
http://www.3dmark.com/spy/5870019


----------



## Praegrandis

Hello everyone. I am in a need of advice/assistance from the soon-to-be fellow RTX 2080 Ti owners. Thank you very much in advance.

I am planning to buy an air cooled card and remove the shroud/fans to replace them with either two NF-A12x25s or two Silent Wings 3 140mm. Whichever ones fit, with the latter being preferable. 

My potential candidates are the Lightning Z and FTW3 Ultra. Maybe the Gaming X Trio, if it has the same-sized heatsink as the Lightning Z (with this one I would have to flash the BIOS, which I would like to avoid because of the risk of voiding the warranty). I would consider a HOF if they were available (I don't know whether that shroud/display can be removed easily). 

The goal here is to achieve the best noise/performance cooling ratio while having one of the potentially highest-OCing air-cooled cards. Price is not a consideration here. As I understand from reading this awesome thread, the MSI has a larger heatsink than the FTW3. Is that accurate? Also, its height might allow fitting two 140mm fans instead of two 120mm. Can anyone verify? These factors would make it the number one choice for me, unless there is no way to do the mod without voiding the warranty on an MSI card. That is important for me. Can it be done? Otherwise, the FTW3 is the only choice.


----------



## Pepillo

kx11 said:


> i will if a waterblock for it is to be released within this week


For the MSI Lightning Z? What manufacturer?


----------



## kx11

Pepillo said:


> For the MSI Lightning Z? What manufacturer?



I've contacted Bitspower and they told me they'll release a Lightning Z block soon; they're not working currently because of Chinese New Year, though.


EK should release a block too, but it'll come out last, like always.


----------



## Pepillo

kx11 said:


> i've contacted Bitspower and they told me they'll release a Lightning Z block soon , they're not working currently because of the Chinese new year though
> 
> 
> EK should release a block too but it'll come out last like always


Thank you for the information


----------



## RaGran

Hi,

Thought I'd share my results with a shunt-modded and water-cooled non-A board (MSI Ventus Ti) --> https://www.3dmark.com/3dm/33210282?

Graphics score is 16736 at +250/+1000, no added volts, and power limit set at +8%, which should be about 439W. Water cooling with a Bykski Ice Dragon RGB N-RTX2080TI-X and four Alphacool radiators (3x 420 + 1x 140).


----------



## Martin778

445W on a Ventus? Custom BIOS, I assume? Even the Lightning has a 360W max power cap.


----------



## RaGran

Martin778 said:


> 445W on a Ventus? Custom BIOS I assume? Even the Lightning has a 360W max power cap.


Not a custom BIOS, but a hardware modification. I did the same shunt mod as described in this post --> https://www.overclock.net/forum/69-nvidia/1714864-2080-ti-working-shunt-mod.html , which consists of soldering an 8mΩ resistor on top of each of the two shunt resistors for the 8-pin connector power inputs. It causes the card to draw 1.625 times the power it normally would. So, 250 * 1.625 * 1.08 = 438.75W. The last multiplier is the +8% power limit boost from Afterburner. I could set it higher at +12% for 455W, but that had no effect on the test score.
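For anyone checking the arithmetic, a small sketch of how the piggyback-shunt numbers above fall out, assuming the stock shunts are 5mΩ (the value commonly reported for these boards; I haven't measured it):

```python
# Shunt-mod arithmetic: an 8 mOhm resistor soldered in parallel with an
# assumed 5 mOhm stock shunt lowers the sensed resistance, so the card
# under-reads its own power draw by a fixed factor.
def parallel(r1, r2):
    """Resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

stock = 5.0      # mOhm, assumed stock shunt value
piggyback = 8.0  # mOhm, resistor soldered on top

effective = parallel(stock, piggyback)   # ~3.08 mOhm sensed
multiplier = stock / effective           # real power / reported power
print(round(multiplier, 3))              # -> 1.625

# A reported 250 W limit with the +8% slider becomes, in real watts:
real_watts = 250 * multiplier * 1.08
print(round(real_watts, 2))              # -> 438.75
```

The same formula gives 455W at the +12% slider setting, matching the post.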


----------



## RaGran

3DMark now has an NVIDIA DLSS feature test, which runs part of the Port Royal test with TAA and then with DLSS and compares the results. I got 43.43 fps with TAA and 62.73 with DLSS --> https://www.3dmark.com/3dm/33264886?
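Just simple arithmetic, but for anyone comparing their own runs, the uplift those two numbers imply works out like this:

```python
taa_fps = 43.43   # Port Royal DLSS feature test, TAA pass
dlss_fps = 62.73  # same scene rendered with DLSS

uplift_pct = (dlss_fps / taa_fps - 1) * 100
print(f"{uplift_pct:.1f}%")  # roughly a 44% gain from DLSS over TAA
```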


----------



## Hiikeri

@zhrooms
On your 1st post:

> Zotac Triple Fan and Zotac AMP.
> The GPU with the highest power limit out of these is the Gigabyte Windforce OC, at 366W, but all of them can safely be flashed with the Galax 380W BIOS.

I have been using that 380W BIOS for ~3 months now (Zotac 2080 Ti AMP); can my card handle 400+W BIOSes?


----------



## ComansoRowlett

iamjanco said:


> Not the FE, but you might be able to pull the info out of the following link if noone else chimes in with something more granular (the XC Ultra is based on the ref card; xdevs.com is eVGA TiN's site):
> 
> *Teardown of the EVGA GeForce RTX 2080 Ti XC Ultra*
> 
> Also have a look at this thread; some good, relevant info there:
> 
> *General GPU Voltmod and External VRM Information*
> 
> Lastly, if you're not aware of it, you might want to have a look at the *EVGA EPOWER V*, which is designed to facilitate performing similar tasks (it's got a ProbeIt connector):
> 
> *EVGA Support Manual for EVGA EPOWER V*
> 
> View attachment 251770
> 
> 
> Good luck


Thank you! Pretty sure they use an FE PCB as you say, so it should work out; really appreciate this info.  All the mods I have on the cards are in this video by Buildzoid: https://youtu.be/eVy0zmgaMQE?t=1  I plan to put the card on a full-cover waterblock, so the VRM should be fine, and the GPU core will love me for it considering the boost algorithm, haha.


----------



## RaGran

Hiikeri said:


> @zhrooms
> On you 1st post:
> Zotac Triple Fan and Zotac AMP.
> The GPU with the highest power limit out of these is the Gigabyte Windforce OC, at 366W, but all of them can safely be flashed with the Galax 380W BIOS.
> 
> I have been used that 380W bios now on ~3 months (Zotac 2080Ti AMP), can my card handle 400+W bioses?


I think GamersNexus estimated in their 2080 Ti reference PCB analysis video (on YouTube) that the power delivery components on the reference card could handle up to 600W, but that the power cables may catch fire if you draw 600W through a single cable with two 8-pin connectors on the GPU side.


----------



## Kalm_Traveler

J7SC said:


> Aqua Computer Cryo w/active BP, or Watercool Heatkiller IV are highly rated for this kind of card


Thanks! I'm hearing Heatkiller recommended all over so I'll give that a try for this next card


----------



## Hiikeri

RaGran said:


> I think Gamersnexus estimated on their 2080 Ti reference PCB analysis video (on Youtube) that the power delivery components on the reference card could handle up to 600W but that the power cables may catch fire if you draw 600W through a single 8 pin connector cable.


Thanks, no worries about 0.6kW draws. Just default air cooling (100% fan) when gaming (= over 50°C), and an always-open case + a 40cm fan.

2x 8-pin + PCIe to the card is enough then.


----------



## marsder

Got me a Palit Gaming OC last night; it comes with the 126% BIOS pre-flashed and Samsung VRAM. 

Can't seem to flash any other BIOS; I tried the Galax 380W and EVGA's before I stopped. After flashing and restarting, GPU-Z shows blank data in all the rows and my secondary monitor couldn't connect. Had to flash back to the original ROM to get stuff working again.

Using Afterburner, I was able to get it to +140 core, +850 RAM.


----------



## Ace99ro

Hello guys, I have a very good offer for an AORUS RTX 2080 Ti XTREME. Are the thermals as bad as reviewed? Should I try another custom-design card? (My case is a Corsair Air 540 with 5x Vardar EVO 140ER.) 

Thanks !


----------



## Martin778

Expect 75°C at 90% fan speed in Valley/Superposition. That's what I was seeing.


----------



## laxu

New BIOS seems to be out for Palit 2080 Ti Dual and Gaming Pro (non-OC). Anyone try them yet? I kinda can't be bothered as I have the Gaming Pro OC BIOS working fine on my non-OC model.


----------



## VPII

With the new NVIDIA driver I thought I'd give DLSS a go. Well, I only get a 42% increase with DLSS, but I think it is partly due to ambient temps at present. At idle the card sits at 33-34°C with ambient around 28-30°C. After the first run the GPU temp maxes out at 45°C but only drops to 38°C between loads, so on the DLSS run it goes up to 46°C, which I gather is why the DLSS result is lower.

https://www.3dmark.com/nd/34378


----------



## Coldmud

So, is anyone playing The Division 2 beta? Everything on ultra, with shadow quality: very high, volumetric fog: cinematic, and object detail at 100. 

I'm anywhere from 59 to 100 fps. Struggling to get 60 fps @ 3440x1440 in grassy open areas.

This new Snowdrop engine is amazing though; the world detail is unbelievable, but ouch...

Wondering if the SLI profile for this game is any good? This might be the game to get me back into SLI.


----------



## pewpewlazer

Coldmud said:


> So is anyone playing the division 2 beta? Everything on ultra with shadow quality: very high, volumetric fog: cinematic and object detail on 100.
> 
> I'm anywhere from 59 to 100 fps. Struggling to get 60fps @3440x1440p on grassy open areas.
> 
> This new snowdrop engine is amazing though, world detail is unbelievable, but ouch..
> 
> Wondering if the sli profile for this game is any good? This might be the game to get me back into sli.


I played it a bit yesterday. 3840x2160 everything ultra/maxed out (other than fog) pulls 60 fps pretty consistently vsync on. Changing fog from ultra to cinematic is absolutely crippling. Game does look great though. I tried DX12 mode but for some reason the game felt like it was running at 20 fps even though I was allegedly getting 80 fps. Maybe my eyes deceived me, but the game appeared to look much better visually in DX12. Maybe I'll play it some more today.


----------



## Coldmud

pewpewlazer said:


> I played it a bit yesterday. 3840x2160 everything ultra/maxed out (other than fog) pulls 60 fps pretty consistently vsync on. Changing fog from ultra to cinematic is absolutely crippling. Game does look great though. I tried DX12 mode but for some reason the game felt like it was running at 20 fps even though I was allegedly getting 80 fps. Maybe my eyes deceived me, but the game appeared to look much better visually in DX12. Maybe I'll play it some more today.


Strange... I've had the opposite experience with G-Sync: with DX11, G-Sync doesn't seem to be working; kind of a stuttery mess. 

DX12 felt smoother to me. I haven't done any visual comparisons between DX11 and 12 yet; both look stunning. 

These settings you can add above the extreme preset take about 10 fps each in my case.


----------



## TK421

19K Fire Strike score on stock cooling with the power limit removed


----------



## kot0005

pewpewlazer said:


> I played it a bit yesterday. 3840x2160 everything ultra/maxed out (other than fog) pulls 60 fps pretty consistently vsync on. Changing fog from ultra to cinematic is absolutely crippling. Game does look great though. I tried DX12 mode but for some reason the game felt like it was running at 20 fps even though I was allegedly getting 80 fps. Maybe my eyes deceived me, but the game appeared to look much better visually in DX12. Maybe I'll play it some more today.


I was getting a stable 4K 60fps as well. I turned off the reduce-latency thing, object detail is 75, and I cranked some stuff up to very high.
DX12 didn't work for me; it was stuttering every few seconds.


----------



## xrb936

Hey guys, I just got my MSI RTX 2080 Ti Sea Hawk EK X here, so happy!

Right now I am thinking of flashing the Galax BIOS on it, since AFAIK the card is actually the Gaming X Trio with a custom EKWB block, and it looks like someone who has the Gaming X Trio has already flashed it. Does anyone know about that? Should I go for it?

Thanks!


----------



## J7SC

Coldmud said:


> So is anyone playing the division 2 beta? Everything on ultra with shadow quality: very high, volumetric fog: cinematic and object detail on 100.
> 
> I'm anywhere from 59 to 100 fps. Struggling to get 60fps @3440x1440p on grassy open areas.
> 
> This new snowdrop engine is amazing though, world detail is unbelievable, but ouch..
> 
> Wondering if the sli profile for this game is any good? This might be the game to get me back into sli.



Hello Coldmud... per attached, still needs beautification, but SLI in the intended setup rather than the test bench is getting there (slowly :sozo:). Does The Division 2 beta have a demo link?


----------



## Robostyle

Anyone here with Palit 2080 Ti? Does it have VRM sensor?


----------



## Rob w

Doing a bit more testing yesterday after a clean install of everything: I still can't flash the GPU, tried all suggestions but no go. Shunts are ready for when I give up on it, lol.
Anyway, reloaded Afterburner and now find I can OC memory to 8500? But it freezes halfway through Superposition 1080p Extreme. It will complete at 8400 mem / 2130 core / voltage locked to 1.100V (starts at 1.093V), but with a low score.
https://www.overclock.net/forum/attachment.php?attachmentid=252638&stc=1&d=1549805087
Got a higher score with an OC of 8200 mem / 2130 core / locking 1.106V; it holds its clocks better.
Don't know if it will go any further without flashing or shunting!


----------



## zack_orner

Why not go for the 406W MSI Gaming X Trio BIOS with the 135% power limit from this thread? 

2700X | X470-F Gaming | 1TB 970 EVO | G.Skill 3200CL14 | MSI 2080 Ti Gaming X Trio


----------



## Coldmud

J7SC said:


> Hello Coldmud ...per attached, still needs beautification, but SLI in the intended setup rather than test-bench is getting there (slowly:sozo. Does Division 2 beta have a demo link ?


Getting there, my man, slowly but steadily!  Are you doing this build alone? My builds have always been like this: I take about a week of vacation and force the same on my engineering buddy. Can't do projects alone, 'cause I stop and smoke some and get distracted XD..

I can get you an invite; hit me a PM with your Uplay details.


----------



## Coldmud

Rob w said:


> doing a bit more testing yesterday after a clean install of everything, I still cant flash the gpu, tried all suggestions but no go, shunts are ready for when I give up on it lol.
> anyway re loaded afterburner and now find I can oc memory to 8500? but it freezes half way through superposition 1080p extreme, it will complete at Mem 8400/ core 2130 /voltage locked to 1.100v ( starts at 1.093v) but low score.
> https://www.overclock.net/forum/attachment.php?attachmentid=252638&stc=1&d=1549805087
> got higher score with oc of 8200 mem/ 2130 core / locking 1.106v, it holds its clocks better.
> don't know if it will go any further without flashing or shunting!


This is the strangest case. Did you remove the heatsink and verify you indeed have an "A" chip? Well, it has to be, since you stated it does see 1.093V; that is the max voltage, so I'm not sure why you want to force 1.1V, which doesn't work.
Anyone else want to chime in on why this guy can't flash any BIOSes? 

Afterburner since beta 10 does OC memory to a max of +1500MHz = 8500MHz.
An 8400MHz memory OC is indeed nice, but try some games to test stability; Superposition isn't that picky with memory OC.
Try The Witcher 3, the new Tomb Raider, or BFV.


----------



## J7SC

Coldmud said:


> Getting there my man, slowly but steady!  Are you doing this build alone? My builds have always have been like this: I take about a week vacation and I force the same off my engineering buddy. Can't do projects alone, cause I stop and smoke some, and get distracted XD..
> 
> I can get you an invite, hit me a PM with your Uplay details.



...nah, decided to finish two other build-projects first as RMA for the mobo with this project took some weeks, all told - in the meantime, I had substituted a X399 Pro-Carbon destined for a future build into this project, which "of course" :wth: had different dimensions and SLI GPU spacing :sozo: , so after reversing the mods for that temporary setup, I finally got back on track


----------



## Coldmud

J7SC said:


> ...nah, decided to finish two other build-projects first as RMA for the mobo with this project took some weeks, all told - in the meantime, I had substituted a X399 Pro-Carbon destined for a future build into this project, which "of course" :wth: had different dimensions and SLI GPU spacing :sozo: , so after reversing the mods for that temporary setup, I finally got back on track


Yeah, I recall now. Glad you got it sorted out; dimensions and spacing can be a nightmare with fixed tubes. Should've just bought an identical mobo like the 1st RMA.  Gotta say I love the color scheme, which is probably gonna be ruined by any RGB.
That's a P5 you're working on, right? Never got to see any pics from further away.


----------



## J7SC

Coldmud said:


> Yeah I recall now, glad u got it sorted out, dimensions and spacing can be a nightmare with fixed tubes. Should've just bought an identical mobo like the 1st RMA  Gotta say I love the color scheme, which is probably gonna be ruined by any rbg.
> Thats a P5 ur working on right? Never got to see any pics from further away.



...the first thing I tried was to get a 2nd X399 Creation, but it was sold out... and yes, working on the right, top, bottom and back. We had all kinds of extra water-cooling parts lying around, so this project got 2x 360/60 on the first loop for the CPU and 3x 360/60 for the two 2080 Tis (+ a total of 4 MPC 6550 pumps). Still have all kinds of other water-cooling stuff left over, but couldn't fit more than 5x 360 rads on the Core P5.

In any case, in OC'ed state, the CPU and GPUs will put out over 1060w of heat energy, but the rads are willing and waiting
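For anyone sizing a loop around numbers like that, the coolant temperature rise follows directly from heat load and flow rate. A minimal sketch, assuming plain water and a made-up 1.5 L/min flow rate:

```python
# Rough loop delta-T estimate: how much warmer the coolant gets between
# block inlet and outlet at a given heat load and flow rate.
# Assumes water: density ~1 kg/L, specific heat ~4186 J/(kg*K).

def loop_delta_t(heat_w: float, flow_l_per_min: float) -> float:
    """Coolant temperature rise in Kelvin for a given heat load."""
    c_p = 4186.0                      # J/(kg*K), water
    kg_per_s = flow_l_per_min / 60.0  # ~1 kg per litre
    return heat_w / (kg_per_s * c_p)

# 1060 W into a loop running ~1.5 L/min:
print(round(loop_delta_t(1060, 1.5), 1))  # ~10.1 K block-to-rad delta
```

The rads then only need to shed that heat at a modest water-to-air delta, which five 360s should do at low fan speeds.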


----------



## pewpewlazer

Rob w said:


> doing a bit more testing yesterday after a clean install of everything, I still cant flash the gpu, tried all suggestions but no go, shunts are ready for when I give up on it lol.
> anyway re loaded afterburner and now find I can oc memory to 8500? but it freezes half way through superposition 1080p extreme, it will complete at Mem 8400/ core 2130 /voltage locked to 1.100v ( starts at 1.093v) but low score.
> https://www.overclock.net/forum/attachment.php?attachmentid=252638&stc=1&d=1549805087
> got higher score with oc of 8200 mem/ 2130 core / locking 1.106v, it holds its clocks better.
> don't know if it will go any further without flashing or shunting!


What are your actual core clocks/voltages during Superposition? I'm using the 380 watt BIOS on my card and I couldn't dream of running anywhere near 1.093v. My load temps are 15°C higher than yours, but still. If I let the card go up to the 1.068v default max voltage, it will PL throttle as soon as it tries to go up to 1.043v, and will bounce between 1.025v and 1.037v due to power throttling. Running Superposition 1080p Extreme, my normal gaming OC profile will run 2115 MHz @ 1.031v for almost the entire benchmark, and the average power draw I'm seeing with GPU-Z during the run is around 360 watts.

Also I believe these cards have built in error correction on the memory, so higher memory clocks could score lower if it's not actually stable and causing errors.
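That error-correction point is why the practical test is to sweep memory offsets and keep whichever scores best, not whichever merely completes: with replay, unstable runs finish with quietly lower scores. A sketch of that selection, using hypothetical sweep data:

```python
# GDDR6 error detection/replay means an unstable memory OC often still
# completes a benchmark, just with a lower score. So pick the offset
# with the best score rather than the highest clock that "passes".
# `runs` below is hypothetical (mem_offset_mhz, benchmark_score) data.

def best_mem_offset(runs):
    """Return the (offset, score) pair with the highest score."""
    return max(runs, key=lambda r: r[1])

runs = [(0, 9800), (500, 10050), (800, 10200), (1000, 10150), (1200, 9900)]
print(best_mem_offset(runs))  # (800, 10200): scores roll over past +800
```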


----------



## Coldmud

J7SC said:


> ...first thing I tried was to get a 2nd X399 Creation, but was sold out... and yes, working on the right, top, bottom and back. We had all kinds of extra water-cooling parts laying around, so this project got 2x 360/60 on the first loop for the CPU and 3x 360/60 for the two 2080 TIs (+ total of 4 MPC 6550 pumps). Still have all kinds of other water-cooling stuff left over, but couldn't fit more than 5x 360 rads on the Core P5
> 
> In any case, in oc'ed state, CPU and GPUs will put out over 1060w of heat energy, but the rads are willing and waiting


Sounds like you have adequate rads for the setup. If not, you can always go the external route


----------



## Rob w

Coldmud said:


> This is the strangest case, did you remove the heatsink and verify you indeed have an "A" chip? Well it has to be, since you stated it does see 1.093v, this is the max +V so I'm not sure why you wanna force 1.1v which doesn't work
> Anyone else wanna chime in why this guy can't flash any bioses??
> 
> Afterburner since beta10 does oc memory to max +1500mhz =8500mhz
> 8400mhz memory oc is indeed nice, but must try some games to test stability, superposition isn't that picky with mem oc.
> Try witcher 3, new tombraider or bfV.


Hi Coldmud, definitely an A chip.
I found that by setting 1.1v in the scanner, when the test starts it drops to 1.093v and holds it longer, but if I set it lower then it starts the test even lower??
Yep! Looks like I downloaded the new Afterburner.
I'll give BFV and the D2 beta a whirl to check stability. I will go over this BIOS flash again; no one else has had problems with it that I've heard of, so it's a case of double-checking everything.


----------



## Coldmud

Rob w said:


> hi Coldmud, deffo A chip.
> I found that by setting to 1.1v on scanner when the test starts it drops to the 1.093v and holds it longer, but if I set it lower then it starts test even lower ??
> yep! looks like I downloaded the new afterburner
> ill give bfv and d2beta a whirl ! check stability, I will go over this bios flash again, no one else has had problems with it ¨that I've heard of, case of double check everything.


It's probably just GPU Boost downclocking volts and core; the first step is when you reach 40°C. In any case, if you lock at 1.1v or higher in the V/F editor it just jumps to 1.093v or 1.08v. If you look closely in the editor, the white line never exceeds 1.093v since it's locked down.
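For what it's worth, that stepping behaviour can be modelled roughly: GPU Boost drops the effective clock by about one 15 MHz bin at successive temperature thresholds. A toy sketch; the threshold values here are assumptions, not NVIDIA's actual (undocumented) table:

```python
# Toy model of GPU Boost thermal bins: the effective clock steps down
# ~15 MHz at successive temperature thresholds. The threshold list is
# illustrative only; the real breakpoints are undocumented.

BIN_MHZ = 15
THRESHOLDS_C = [40, 48, 56, 64]  # assumed breakpoints

def boost_clock(base_mhz: int, temp_c: float) -> int:
    """Effective clock after thermal binning at a given GPU temp."""
    bins = sum(1 for t in THRESHOLDS_C if temp_c >= t)
    return base_mhz - bins * BIN_MHZ

print(boost_clock(2160, 35))  # 2160: below the first threshold
print(boost_clock(2160, 43))  # 2145: one bin down past 40C
```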

Btw, what happens when you save your ROM in GPU-Z and then try to flash it again? Did you try this?


----------



## Chimera619

Hey guys,
I am in the market for a 2080 Ti.
I don't want to pay a price premium for OC cards.
The ones I am looking at atm are the Zotac AMP and the Strix OC.
I see both are ~1250 on Amazon.


----------



## J7SC

Coldmud said:


> Sounds like you have adequate rads for the setup, If not you can always go the external route


...didn't know you were into bong pc cooling :Snorkle: but those things work ! 

In any case, I'm not even in the same league as...


Spoiler



https://www.overclock.net/forum/18082-builds-logs-case-mods/1696937-build-log-muffler-bearings-3.html#post27640270


 

...or for really hot days, how about a 1500w chiller w/30w water pump built-in ?


----------



## Coldmud

Chimera619 said:


> Hey guys
> I am in the market for a 2080ti
> I dont want to pay a price premium for OC cards
> the ones I am looking at atm is Zotac AMP & strix OC
> I see both are 1250~ on amazon


According to Buildzoid the Asus has a true 10-phase VRM, but that won't matter at all on air. These cards all perform basically identically; it really comes down to preference for brand and looks, and of course cooler performance. I would try to research some reviews that talk about temps in the newer DX12 games.



J7SC said:


> ...didn't know you were into bong pc cooling :Snorkle: but those things work !
> 
> In any case, I'm not even in the same league as...
> 
> 
> Spoiler
> 
> 
> 
> https://www.overclock.net/forum/18082-builds-logs-case-mods/1696937-build-log-muffler-bearings-3.html#post27640270


lol, seen that. I suppose he has to wear a noise-cancelling headset all day and never has to vacuum.  Btw, a PM was sent about Div 2; if you want to check it out it has to be before tomorrow, as the beta ends.

edit: I have seriously contemplated going the chiller route, but after extensive research I decided it's just too much hassle for my light builds now. SLI only goes up to 2, so I see no need, and boy, even the 1/4 hp ones sound like airplanes taking off. Gotta tuck that bad boy away in another room or cellar; God, the tubing situation would be a nightmare! 

Yeah, I've seen that Alphacool ez2000. It does look nice, but it's basically a rebranded, overpriced Chinese welder / laser tube chiller which costs about $400 to $500, like the 5200dh. It has G1/4 fittings though, which is nice.


----------



## J7SC

Coldmud said:


> According to buildzoid the Asus has true 10phase VRM situation, but it wont matter anything on air. These cards all perform identical basically, it really comes down to preference for brand and looks. And offcourse cooler performance, I would try to research some reviews that talk about temps in the newer dx12 games.
> 
> 
> 
> lol, seen that, suppose he has to wear a* noisecancelling headset all day* and never has to vacuum.  Btw pm was sent about div2, if you want to check it out it has to be before tomorrow as the beta ends.
> 
> edit: I have seriously contemplated going the chiller route, but after extensive research I decided it's just too much hastle for my light builds now. sli only goes up to 2, so I see no need and boy, even the 1/4hp ones sound like airplanes taking off. Gotta tuck that badboy away in another room or cellar, God the tubing situation would be a nightmare!
> 
> Yeah I've seen that alphacool ez2000, it does looks nice, but it's basically a rebranded overpriced chinese Welder / Laser Tube chiller which cost about 4 to $500 like the 5200dh, has g1/4 tubing though which is nice



Well, he does use BeQuiet fans on what seems like PWM, so I guess he can get it down to a nice 'whoosh' sound  As to the chiller(s), I had posted that over at the Xeon W-3175X thread, as someone wanted to seriously OC all 28 cores / 56 threads, and if you have some GPUs to cool down, that or other chillers might just be the ticket at a 1500w heat energy rating. But who knows about the noise, and also, you probably have to be good friends with your local electrician... chillers like that PLUS a heavily OC'ed multi-core CPU PLUS 2x OC'ed GPUs will probably want a couple of dedicated 50-amp wall outlets 

PS ..saw your PM and answered


----------



## Coldmud

J7SC said:


> Well, he does use BeQuiet fans on what seems like PWM / I guess he can get it down to a nice 'whoosh' sound  As to the chiller(s), I had posted that over at the Xeon w3175x thread as someone wanted to seriously oc all 28 cores / 56 threads - and if you have some GPUs to cool down, that / other chillers might just be the ticket at 1500w heat energy rating. But who knows about the noise, and also, you probably have to be good friends w/ your local electrician...chillers like that PLUS heavily oc-ed muti-core CPU PLUS 2x oc'ed GPUs will probably want to have a couple of dedicated 50amp wall outlets
> 
> PS ..saw your PM and answered


I know about the noise  we have heaps of them at work. Granted, they are industrial, but still the mobile ones, not cabinets. They produce a lot of noise. If you search for water chillers on YT there are some vids of people operating them; even the 1/4 hp ones are so loud. I just can't 

50 amp? Really?  Over here we have just 18-amp fuses. I did a lot of mining, and 4 Titans all on 18 amps drew almost 3 kW.


----------



## Baasha

Is the Asus RoG Strix 2080 Ti OC edition available anywhere? It seems to be out of stock everywhere.


----------



## kx11

Baasha said:


> Is the Asus RoG Strix 2080 Ti OC edition available anywhere? It seems to be out of stock everywhere.



I've seen a lot of those on eBay, at a higher price than retail, but they are there.


----------



## J7SC

Coldmud said:


> I know about the noise  we have heaps of them at work, granted they are industrial, but still the mobile ones, not cabinets. They produce a lot of noise. If you search for waterchillers on YT there are some vids with ppl operating them, even the 1/4 hp ones are so loud. I just can't
> 
> 50amp? really  Over here we have just 18amp fuses, did a lot of mining and 4titans all on 18amp which drew almost 3KW.


Many folks have the chiller in an adjacent room for noise reasons  and run some hoses. That Alphacool one has a water-lift rating of 32+ feet for its built-in water pump, so you could put it in the basement 

As to amps, I've tripped 15-amp ones with just one heavily OC'ed CPU (LN2) and quad SLI on chilled water... besides, 50 amps also allows you to have a fridge for brewskies and a microwave for pizza pockets right next to your fav PC... :thumb:


----------



## iamjanco

I'll just leave this "*hear*"

Thought about using something like that to drown out the noise of the dual MO-RA3 setup 

Btw, who vacuums?


----------



## Coldmud

Oh my!  I didn't mean any disrespect before! It's just that... your project is outright insane! Tremendous respect for this build; can't wait to see updates on it!


----------



## J7SC

iamjanco said:


> I'll just leave this "*hear*"
> 
> Thought about using something like that to drown out the noise of the dual MO-RA3 setup
> 
> Btw, who vacuums?



...22,700 CFM / 48 inch :thinking: Thanks, that gives me an idea for my next 'pc-non-p[pol.]c[corr.]' build :thinking:


----------



## iamjanco

Lol, let me know when you start, J7SC; and no offense taken, Coldmud.


----------



## J7SC

iamjanco said:


> Lol, let me know when you start, J7SC; and no offense taken, Coldmud.


...this _would be the right weather_ for it in terms of sucking in outside air @ 22,700 cfm for the rads - but alas, got to finish the other project(s) first...


----------



## iamjanco

J7SC said:


> ...this _would be the right weather_ for it in terms of sucking in outside air @ 22,700 cfm for the rads - but alas, got to finish the other project(s) first...


I could relate to that. Still, it might not be a bad idea:


----------



## willverduzco

Rob w said:


> doing a bit more testing yesterday after a clean install of everything, I still cant flash the gpu, tried all suggestions but no go, shunts are ready for when I give up on it lol.
> anyway re loaded afterburner and now find I can oc memory to 8500? but it freezes half way through superposition 1080p extreme, it will complete at Mem 8400/ core 2130 /voltage locked to 1.100v ( starts at 1.093v) but low score.
> https://www.overclock.net/forum/attachment.php?attachmentid=252638&stc=1&d=1549805087
> got higher score with oc of 8200 mem/ 2130 core / locking 1.106v, it holds its clocks better.
> don't know if it will go any further without flashing or shunting!


Something is definitely very wrong with your setup. That score is substantially lower than your clocks would suggest, so you're either getting some kind of power/thermal/voltage throttling or your clocks are so borderline stable that it's actually decreasing performance. With those clocks (if you actually could sustain them), you should be near my score, but you're 400 points off.

Also, without hardware modifications or potentially an XOC bios and helper app if that is ever released, you cannot do above 1.093V, as that's Nvidia's hard limit for our cards. It doesn't matter what your curve is set to (or if you "lock" it to 1.1V in MSI AB), it simply cannot deliver over 1.093. And IMO, with a 380W limit, all you're doing at 1.093 is hitting that PL even sooner. Try backing down your clocks and lowering your voltages. I found 2130/8080 @ 1.05 was a sweetspot and good for 10.5k on 1080p Extreme (and 17k TimeSpy GPU Score -> #60 Total Score single-GPU, #56 GPU Score single-GPU) on my Asus Strix OC with Galax 380W Bios (no shunt) and a 280mm AIO.

FWIW, this also makes me wonder about 90% of people's scores in here with supposed overclocks of 2100 MHz or more. They're all so low, meaning that they're either throttling and they don't know it or they are losing points due to low stability.


----------



## JustinThyme

willverduzco said:


> Something is definitely very wrong with your setup. That score is substantially lower than your clocks would suggest, so you're either getting some kind of power/thermal/voltage throttling or your clocks are so borderline stable that it's actually decreasing performance. With those clocks (if you actually could sustain them), you should be near my score, but you're 400 points off.
> 
> Also, without hardware modifications or potentially an XOC bios and helper app if that is ever released, you cannot do above 1.093V, as that's Nvidia's hard limit for our cards. It doesn't matter what your curve is set to (or if you "lock" it to 1.1V in MSI AB), it simply cannot deliver over 1.093. And IMO, with a 380W limit, all you're doing at 1.093 is hitting that PL even sooner. Try backing down your clocks and lowering your voltages. I found 2130/8080 @ 1.05 was a sweetspot and good for 10.5k on 1080p Extreme (and 17k TimeSpy GPU Score -> #60 Total Score single-GPU, #56 GPU Score single-GPU) on my Asus Strix OC with Galax 380W Bios (no shunt) and a 280mm AIO.
> 
> FWIW, this also makes me wonder about 90% of people's scores in here with supposed overclocks of 2100 MHz or more. They're all so low, meaning that they're either throttling and they don't know it or they are losing points due to low stability.



Yeah, I'm not getting it either. Just ran it in the background, no tweaks, while multitasking other things and reading up on other posts here.


----------



## willverduzco

JustinThyme said:


> Yeah Im not getting it either. Just ran it in the background no tweaks while multitasking other things and reading up on other posts here.


Haha, well you're on SLI, and assuming the near-perfect scaling synthetics should give you, that's 9.1k per card. What speeds do you get single-GPU?


----------



## xrb936

Hi, I just flashed the Galax 370w BIOS on my MSI Sea Hawk EK; however, I was unable to use the OC Scanner in Afterburner. I can run it with the original BIOS with no issue. Does anyone know why this happens? Thanks!


----------



## zack_orner

xrb936 said:


> Hi, I just flashed the Galax 370w bios on my MSI Sea Hawk EK, however I was unable to use the OC scanner in Afterburner. I can run it with the original bios with no issue. Does anyone know why it happened? Thanks!


Try the MSI 2080 Ti Gaming X Trio 406w BIOS from the first post in this thread. It has a higher wattage and a +135% power limit. Also, it's made for the three-power-connector card. Link: https://onedrive.live.com/?authkey=...54D70F!123&parId=D7731B0FB754D70F!114&o=OneUp

2700x x470-f gaming 1 tb 970 evo gskiils 3200cl14 msi 2080 ti gaming x trio


----------



## Thoth420

So when I open HWiNFO64 (sensors only or the whole thing), the RGB lights on my Gaming X Trio shut off, and I have to start Mystic Light as admin to fix it, but that only works if HWiNFO64 is closed. My motherboard and RAM RGBs are unaffected, but they were set by ASUS Aura. Anyone know why this occurs?


----------



## xrb936

zack_orner said:


> Try the MSI 2080 ti gaming x trio 406w bios from the first post in this thread. Has a higher wattage and a +135% power limit. Also its made for the three power cord connection card. Link https://onedrive.live.com/?authkey=...54D70F!123&parId=D7731B0FB754D70F!114&o=OneUp
> 
> 2700x x470-f gaming 1 tb 970 evo gskiils 3200cl14 msi 2080 ti gaming x trio


It doesn't really work on my card. If I OC the core to about 2150, I can't even OC the memory to 7500. If I OC the memory to 8000, I can't OC the core to 2100. The 370w BIOS works perfectly so far.


----------



## CptKuolio

You can check whether the MSI BIOS actually does anything to your card simply by doing this:

Install the MSI BIOS (or whatever you want to test). Max out power target, voltage and whatnot, and push a moderate OC you know to be stable. Run Kombustor or something heavy. Check in HWiNFO or any other tool that shows your GPU power draw. If it is around the number your BIOS should allow, fine. If not, your BIOS is not working and is falling back to the default power limit.

This happens to me with the MSI Trio BIOS: it 'works', but I cannot get the card to draw more than the default amount of power. I have a Palit Gaming Pro OC or something, the reference board with an A chip. The best BIOS I've found that works properly is the Galax 380w one; with it everything works as expected, including getting my power draw to 380w.
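That check is easy to script on any system with nvidia-smi in the path: poll board power during the load and compare the peak against the flashed BIOS's limit. A rough sketch; the polling helper is illustrative, not any official tooling:

```python
# Quick check that a flashed BIOS actually raises the power limit:
# poll nvidia-smi during a heavy load and look at the peak board power.
# `--query-gpu=power.draw` prints values like "352.17 W".

import subprocess

def parse_power(field: str) -> float:
    """Parse nvidia-smi's power.draw field, e.g. '352.17 W' -> 352.17."""
    return float(field.strip().rstrip("W").strip())

def current_power_draw() -> float:
    """Sample the current board power draw in watts (needs an NVIDIA GPU)."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader"],
        text=True)
    return parse_power(out)

# While e.g. Kombustor runs: if the max sampled draw sits near the stock
# limit instead of the flashed BIOS's limit, the new power table is not
# being honored.
print(parse_power("352.17 W"))  # 352.17
```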


----------



## xrb936

CptKuolio said:


> You can check if the MSI bios actually does anything to your card simply by:
> 
> Install the MSI bios (or what ever you want to test). Max out powertarget, voltage and whatnots, push a moderate OC you know to be stable. Run kombustor or something heavy. Check from HWiNFO or any other tool that shows your GPU powerdraw. If it is around the number your bios should let you; fine. If not, your bios is not working and defaulting to default power limit.
> 
> Happens to me with MSI Trio bios, it 'works' but cannot get the card to draw more then default numbers of power. I have palit gaming pro OC or something, the reference board with an A-chip. Best bios i've found working properly is the Galax 380w one, with it everything works as expected including getting my powerdraw to 380w.


I used GPU-Z to check the TDP. Looks like I have exactly the same situation as you.


----------



## Robostyle

CptKuolio said:


> You can check if the MSI bios actually does anything to your card simply by:
> 
> Install the MSI bios (or what ever you want to test). Max out powertarget, voltage and whatnots, push a moderate OC you know to be stable. Run kombustor or something heavy. Check from HWiNFO or any other tool that shows your GPU powerdraw. If it is around the number your bios should let you; fine. If not, your bios is not working and defaulting to default power limit.
> 
> Happens to me with MSI Trio bios, it 'works' but cannot get the card to draw more then default numbers of power. I have palit gaming pro OC or something, the reference board with an A-chip. Best bios i've found working properly is the Galax 380w one, with it everything works as expected including getting my powerdraw to 380w.


Does it have a VRM sensor?


----------



## pewpewlazer

willverduzco said:


> Something is definitely very wrong with your setup. That score is substantially lower than your clocks would suggest, so you're either getting some kind of power/thermal/voltage throttling or your clocks are so borderline stable that it's actually decreasing performance. With those clocks (if you actually could sustain them), you should be near my score, but you're 400 points off.
> 
> Also, without hardware modifications or potentially an XOC bios and helper app if that is ever released, you cannot do above 1.093V, as that's Nvidia's hard limit for our cards. It doesn't matter what your curve is set to (or if you "lock" it to 1.1V in MSI AB), it simply cannot deliver over 1.093. And IMO, with a 380W limit, all you're doing at 1.093 is hitting that PL even sooner. Try backing down your clocks and lowering your voltages. I found 2130/8080 @ 1.05 was a sweetspot and good for 10.5k on 1080p Extreme (and 17k TimeSpy GPU Score -> #60 Total Score single-GPU, #56 GPU Score single-GPU) on my Asus Strix OC with Galax 380W Bios (no shunt) and a 280mm AIO.
> 
> FWIW, this also makes me wonder about 90% of people's scores in here with supposed overclocks of 2100 MHz or more. They're all so low, meaning that they're either throttling and they don't know it or they are losing points due to low stability.


Here's my Superposition 1080p extreme. 2115mhz with the occasional PL throttle dips to 2100. +950 mem. GALAX 380w BIOS. I kept getting incredibly erratic scores in the mid 9000s. Seems like having the Aquasuite app for my fancy new Aquaero 6 LT open on my second monitor was somehow crippling my performance. Strange.

I also hadn't paid attention to the fact that the 3DMark HoF has "graphics score" specific top 100 for TS and TS Extreme prior to your post. That sounded interesting, since I'm one of the sad few still living in the X99 stone ages, so I wasted some time playing 3DMark tonight. 

https://www.3dmark.com/pr/46523 - Port Royal 9480
https://www.3dmark.com/spy/6182789 - 16811 Graphics score in Time Spy
https://www.3dmark.com/spy/6182721 - 7959 Graphics score in Time Spy Extreme

Port Royal is a cakewalk to run, both in terms of max stable frequency and power throttling: 2145-2160 MHz the entire time at 1.037v-1.062v. I can't even come close to that in anything else, although Time Spy seems to be a bit more forgiving than Fire Strike when it comes to core clock.

Does anyone know if there's ever a temperature point where GPU Boost stops adjusting the damn 'base curve'? Even in the low 40s it's all over the place. Sometimes I'd get 2160mhz @ 1.062v, other times it would run 2160mhz @ 1.05v (which is about as stable as a drunk walking a tightrope). Incredibly annoying.


----------



## Praegrandis

Praegrandis said:


> Hello everyone. I am in a need of advice/assistance from the soon-to-be fellow RTX 2080 Ti owners. Thank you very much in advance.
> 
> I am planning to buy an air cooled card and remove the shroud/fans to replace them with either two NF-A12x25s or two Silent Wings 3 140mm. Whichever ones fit, with the latter being preferable.
> 
> My potential candidates are Lightning Z and FTW3 Ultra. Maybe Gaming X Trio if it has the same sized heatsink as Lightning Z (with this one I would have to flash the bios, which I would like to avoid because of the risk of voiding the warranty). I would consider a HOF if they were available (don't know whether that shroud/display can be removed easily).
> 
> The goal here is to achieve the best noise/performance cooling ratio while having one of the potentially highest OCing air cooled cards. The price is not a consideration here. As I have understood while reading this awesome thread MSI has a larger heatsink than FTW3. Is that accurate? Also its height might allow fitting two 140mm fans instead of two 120mm. Can anyone verify? These factors would make it a number one choice for me. That is unless there is no way to do the mod without voiding the warranty on a MSI card. That is important for me. Can it be done? Otherwise, an FTW3 is the only choice.


Can someone contribute please?


----------



## dante`afk

Anyone have a link to buy those 8 mOhm shunts to power-mod the 2080 Ti?

And is it better to get a 3 mOhm one and stack it, or remove the 5 mOhm one and put an 8 mOhm one on it? Or is there no difference?
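For anyone weighing stack-vs-replace: the card reports power in proportion to the voltage across the shunt, i.e. to the effective shunt resistance, so the arithmetic is just parallel resistors. A quick sketch (the 5 mOhm stock value matches what's been reported for reference boards; the other values are from the question above):

```python
# Shunt-mod arithmetic: the controller reports power proportional to the
# effective shunt resistance, so with stock value R_stock and effective
# value R_eff the card shows actual_power * R_eff / R_stock.
# Stock 2080 Ti shunts are reportedly 5 mOhm; other values illustrative.

def parallel(r1: float, r2: float) -> float:
    """Resistance of two resistors in parallel (same units in/out)."""
    return r1 * r2 / (r1 + r2)

def reported_power(actual_w: float, r_eff: float, r_stock: float = 5.0) -> float:
    """Power the card reports when the shunt is modded to r_eff mOhm."""
    return actual_w * r_eff / r_stock

# Stacking an 8 mOhm shunt on top of the stock 5 mOhm:
r_eff = parallel(5.0, 8.0)                # ~3.08 mOhm
print(round(r_eff, 2))                    # 3.08
print(round(reported_power(400, r_eff)))  # ~246 W shown for a real 400 W
```

So stacking and replacing differ only through the resulting effective resistance; a lower effective value means the card under-reports its draw by a larger factor.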


----------



## lowrider_05

Don't blame me if it does not work for you!


----------



## zack_orner

CptKuolio said:


> You can check if the MSI bios actually does anything to your card simply by:
> 
> 
> 
> Install the MSI bios (or what ever you want to test). Max out powertarget, voltage and whatnots, push a moderate OC you know to be stable. Run kombustor or something heavy. Check from HWiNFO or any other tool that shows your GPU powerdraw. If it is around the number your bios should let you; fine. If not, your bios is not working and defaulting to default power limit.
> 
> 
> 
> Happens to me with MSI Trio bios, it 'works' but cannot get the card to draw more then default numbers of power. I have palit gaming pro OC or something, the reference board with an A-chip. Best bios i've found working properly is the Galax 380w one, with it everything works as expected including getting my powerdraw to 380w.


I'm definitely pulling more than stock boost settings, but I also have not tried the 380 BIOS yet. I have tried the HOF 450w BIOS and it was worse for me.









2700x x470-f gaming 1 tb 970 evo gskiils 3200cl14 msi 2080 ti gaming x trio


----------



## illidan2000

Does anyone know if the EVGA Hybrid kit could fit on an Asus Strix OC? (obviously the 2080 Ti)
The PCB seems similar to reference.


----------



## VPII

Oh my word... Intel all over. We tend to forget that there are people using AMD, and you cannot blame them, taking Intel's price point for some of their CPUs into consideration. When you're running Ryzen, I can tell you that if you get over 15K in 3DMark Time Spy you are good. You can possibly get a little higher, but trying to figure out how MS uses your CPU in the benchmark is a battle of note, to say the least. My highest with a Ryzen 2700X was 15261, which I felt was pretty good, but I cannot reach that at present due to ambient temps... or so I think. So Intel is best at these benchmarks; if you compare, do so against similar hardware.


----------



## NewType88

Praegrandis said:


> Praegrandis said:
> 
> 
> 
> Hello everyone. I am in a need of advice/assistance from the soon-to-be fellow RTX 2080 Ti owners. Thank you very much in advance.
> 
> I am planning to buy an air cooled card and remove the shroud/fans to replace them with either two NF-A12x25s or two Silent Wings 3 140mm. Whichever ones fit, with the latter being preferable.
> 
> My potential candidates are Lightning Z and FTW3 Ultra. Maybe Gaming X Trio if it has the same sized heatsink as Lightning Z (with this one I would have to flash the bios, which I would like to avoid because of the risk of voiding the warranty). I would consider a HOF if they were available (don't know whether that shroud/display can be removed easily).
> 
> The goal here is to achieve the best noise/performance cooling ratio while having one of the potentially highest OCing air cooled cards. The price is not a consideration here. As I have understood while reading this awesome thread MSI has a larger heatsink than FTW3. Is that accurate? Also its height might allow fitting two 140mm fans instead of two 120mm. Can anyone verify? These factors would make it a number one choice for me. That is unless there is no way to do the mod without voiding the warranty on a MSI card. That is important for me. Can it be done? Otherwise, an FTW3 is the only choice.
> 
> 
> 
> Can someone contribute please?

Just get the FTW3. Most people in this thread are on water.


----------



## Rob w

pewpewlazer said:


> What are your actual core clocks/voltages during superposition? I'm using the 380 watt BIOS on my card and I couldn't dream of running anywhere near 1.093v. My load temps are 15*C higher than yours, but still. If I let the card go up to the 1.068v default max voltage, it will PL throttle as soon as it tries to go up to 1.043v, and will bounce between 1.025v and 1.037v due to power throttling. Running Superposition 1080p Extreme, my normal gaming OC profile will run 2115mhz @ 1.031v for almost the entire benchmark and the average power draw I'm seeing with GPU-Z during the run is around 360 watts.
> 
> Also I believe these cards have built in error correction on the memory, so higher memory clocks could score lower if it's not actually stable and causing errors.


It was still on the chiller! These cards, just like the Titan V, love the cold
That's why I'm so desperate to flash it.
But I'm still getting a subsystem mismatch??


----------



## TK421

Best Buy has part number 11G-P4-2383-KB for the XC Ultra; anyone know why the suffix is KB and not the normal KR? What difference, if any, would this make to the card I would be receiving?


----------



## Renegade5399

TK421 said:


> Bestbuy has a part number 11G-P4-2383-KB for the XC Ultra, anyone know why the last number is KB and not the normal KR? What difference, if any, would this make to the card that I would be receiving?


Best Buy has a deal with evga.

See here.

The "B" just means it is one of those cards.


----------



## Coldmud

Rob w said:


> It was still on the chiller! these cards just like the Titan v love the cold
> That’s why I’m so desperate to flash it.
> But still getting subsystem mismatch ??


What happens when you save your current bios to file in gpuz and then try to flash it again?


----------



## J7SC

Coldmud said:


> What happens when you save your current bios to file in gpuz and then try to flash it again?



perpetuum mobile :headscrat


----------



## NBrock

Rob w said:


> It was still on the chiller! these cards just like the Titan v love the cold
> That’s why I’m so desperate to flash it.
> But still getting subsystem mismatch ??


I am looking into a chiller for my loop. The humidity and dew point in my computer room are pretty low. What kind of gains were you able to make with the chiller? My card seems to clock pretty well under water as is. In Time Spy it runs 2130 with the 380 watt BIOS, and in other lighter-load benchmarks it runs a tad higher.


----------



## CptSpig

NBrock said:


> I am looking into a chiller for my loop. My humidity and dew point in my computer room are pretty low. What kind of gains were you able to make with the chiller? My card seems to clock pretty well under water as is. In Time Spy it runs 2130 with the 380 watt BIOS, and in other lighter-load benchmarks it runs a tad higher.


I know I am not Rob, but this is my Time Spy run with the FE card and 380 BIOS on my chiller. By the way, if you chill below the dew point you will get condensation, so protect the board. :thumb:
http://www.3dmark.com/spy/5870019
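The dew-point warning is worth quantifying before anyone sets a chiller temperature. As a rough sketch (not from this thread), the Magnus approximation estimates the dew point from room temperature and relative humidity; keep the coolant a few degrees above it:

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point (deg C) via the Magnus formula.

    a and b are the common Magnus coefficients; accuracy is roughly
    +/-0.35 deg C over the 0-60 deg C range.
    """
    a, b = 17.62, 243.12
    gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
    return (b * gamma) / (a - gamma)

# Example: a 20 deg C room at 50% RH has a dew point near 9.3 deg C,
# so a chiller at 14 deg C is safe there, while 8 deg C risks condensation.
print(round(dew_point_c(20.0, 50.0), 1))
```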


----------



## Esenel

CptSpig said:


> I know I am not Rob but this is my Time Spy FE card with 380 bios on my chiller. By the way if you chill below the dew point you will get condensation so protect the board. :thumb:
> http://www.3dmark.com/spy/5870019


Try to do a BCLK OC.
This gave me the possibility to get 16960 points with an FE card (380W) by running it at +149 and +1050.
Normal water temp.

Then you should get the 17k :-D


----------



## CptSpig

Esenel said:


> Try to do a BCLK OC.
> This gave me the possibility to get 16960 points with an FE card (380W) by running it at +149 and +1050.
> Normal water temp.
> 
> Then you should get the 17k :-D


I have tried BCLKs of 101 to 104. 100 and disabling hyper-threading gave me the best overall score. This only works in Time Spy, not Extreme.


----------



## TK421

Renegade5399 said:


> Best Buy has a deal with evga.
> 
> See here.
> 
> The "B" just means it is one of those cards.


 So the B is just a BB part number while the card is identical, got it!


Thanks!


----------



## MrTOOSHORT

Esenel said:


> Try to do a BCLK OC.
> This gave me the possibility do get 16960 points with a FE card (380W) by running it with +149 and +1050.
> Normal water temp.
> 
> Then you should get the 17k :-D



Assuming you have an 8086K: Coffee Lake will push a higher GPU score overall than Skylake-X or an older arch.


----------



## CptSpig

MrTOOSHORT said:


> Assuming you have an 8086K: Coffee Lake will push a higher GPU score overall than Skylake-X or an older arch.


^^^^:thumb:


----------



## Rob w

J7SC said:


> perpetuum mobile :headscrat





NBrock said:


> I am looking into a chiller for my loop. My humidity and dewpoint in my computer room are pretty low. What kind of gains were you able to make with the chiller. My card seems to clock pretty well under water as is. In time spy it runs 2130 with the 380 watt bios and in other lighter load benchmarks it runs a tad higher.


On air it runs Superposition at about 52 deg and clocks up and down like a yo-yo.
On water it's about 35 deg and holds clocks better, i.e. 2070 to 2085 for longer. On the chiller I've run it at 14 deg (window open, haha), borderline condensation, getting 2130 with drops to 2115-2100-1985-1970 (power throttling).
Dew point needs watching. The chiller helped me get the Titan V to third place a short while back; they are well worth it :thumb:
I have the Hailea 500C.


Coldmud said:


> What happens when you save your current bios to file in gpuz and then try to flash it again?


It saves the BIOS as TU102.rom fine and will flash it back fine, so why the problem with the KFA2, I don't know. Still the subsystem ID mismatch, even with the BIOS for that card. I can get it to protect off, but an exception is caught when trying to flash...


----------



## RaMsiTo

*2080 Ti lightning Z*

+150 core
+1500 memory




When starting the test it goes to 2205 MHz, then drops with temperature down to 2160.


----------



## illidan2000

Is anyone here an owner of an ASUS Strix 2080 Ti OC?
I want to know if it is noisy... (in games)


----------



## kx11

RaMsiTo said:


> +150 core
> +1500 memory
> 
> 
> 
> 
> When starting the test it goes to 2205 mhz, then it goes down by temperature until 2160.





Nice, was that the LN2 BIOS?


----------



## Bull56

Hi there!

I bought two reference-PCB non-A RTX 2080 Tis and tried to overclock them... Well, I tried, and then I read this thread, and... I will sell them. I bought a big nice Galax RTX 2080 Ti HOF 

My HOF will arrive tomorrow; I will use it for air, H2O and LN2 overclocking!

Also, I have some problems with NVLink; it has problems in some new games like BF5 or GTA5/Metro 2033 and so on. SLI worked better!


----------



## Renegade5399

My EVGA 2080Ti XC Blacks are en route! Should be here Monday! WOO HOO!


----------



## willverduzco

illidan2000 said:


> someone knows if evga hybrid kit could fit on an asus strix oc ? (obv 2080ti)
> the pcb seems similar to reference.


It most likely won't, as the VRM layout is significantly different. Get a Kraken G12, a 280mm AIO, and prepare yourself for low-mid 40s while OCed in gaming and high 40s while benchmarking (assuming ambient ~28C/82F and a near-silent fan curve on the radiator). This gave pretty great results for me, seen below at my 24/7 clocks of 2130/8080. As an added benefit of replacing the stock cooler, taking off those fans (assuming 100% speed) and the LED backplate buys you ~10-15W of power that is no longer wasted, so you'll hit the PL a bit less frequently.

You may need an extra ~60mm fan that you can attach on the other side of the card to cool those VRMs if your case runs hot, but this wasn't my experience since the Asus's VRM is so massively overbuilt that it simply doesn't get warm even under heavy usage without any cooling.


----------



## Martin778

So I wasn't crazy when I saw my card hitting PWR limits quicker with 3 fans at 100% compared to 50%?


----------



## RaMsiTo

kx11 said:


> nice , was that the LN2 bios ?


With the normal BIOS; I have not tried the LN2 one since I do not have the card under water. I imagine that under water it would hold 2200 MHz.

Excuse my English level.


----------



## Shadowdane

Have had my MSI 2080 Ti Duke OC for a while now.. finally decided to try a different BIOS on it. 

That 400W MSI BIOS did wonders for my card!! Previously I could barely get a +20MHz overclock without driver crashes. No issue now pushing a +120MHz offset, and I broke 14k in 3DMark Time Spy!
The card hits about ~355W now with the new BIOS; previously it would top out at ~290W.

https://www.3dmark.com/spy/6197114



No screenshots as I was doing all this super late last night around 2am.. no way to grab screenshots at work now. lol


----------



## Shadowdane

Martin778 said:


> So I wasn't crazy when I saw my card hitting PWR limits quicker with 3 fans at 100% compared to 50%?


Yup! Fans use up power too... I find it funny seeing people testing stuff at 100% fan speed; you'll lose some available power to your fans. I have mine set to a max of 60% from 60C to 84C, then it quickly ramps up to 100% at 88C. Granted, I don't think I've seen my GPU get over 68C!
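That curve can be sketched as a simple piecewise function. Everything follows the post except the idle floor below 60C, which is an assumed value:

```python
def fan_speed_pct(gpu_temp_c: float) -> float:
    """Fan speed (%) for the curve described above: flat 60% from
    60-84C, then a quick linear ramp to 100% by 88C.
    The 40% idle floor below 60C is an assumption, not from the post."""
    if gpu_temp_c < 60:
        return 40.0
    if gpu_temp_c <= 84:
        return 60.0
    if gpu_temp_c >= 88:
        return 100.0
    # linear ramp between 84C (60%) and 88C (100%)
    return 60.0 + (gpu_temp_c - 84) / (88 - 84) * 40.0
```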


----------



## Renegade5399

Shadowdane said:


> Have had my MSI 2080Ti Duke OC for a while now.. finally decided to try a different BIOS on it.
> 
> That 400W MSI Bios did wonders for my card!! Previously could barely get +20Mhz overclock without driver crashes. No issue now pushing +120Mhz offset and broke 14k in 3DMark Timespy!
> Card hits about ~355W now with the new BIOS, previously it would top out at ~290W.
> 
> https://www.3dmark.com/spy/6197114
> 
> 
> 
> No screenshots as I was doing all this super late last night around 2am.. no way to grab screenshots at work now. lol


So flashing the Trio (3 power connector) BIOS to a 2 power connector card didn't brick it?


----------



## Rob w

Try as I might, with 5 versions of nvflash, 4 BIOSes, and numerous combinations of instructions taken from suggestions and guides, the bloody card still flags a mismatch ID error.
So after wasting a couple of weeks of my life, here's the new tactic....
https://www.overclock.net/forum/attachment.php?attachmentid=253140&stc=1&d=1549985644
An 8 mOhm shunt, on top of the 5 mOhm.
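For anyone following along: stacking a shunt on top of the stock one puts the two resistors in parallel, so the card's sense circuit under-reads power by a fixed factor. A quick sketch of the arithmetic for the values mentioned (8 mOhm on top of the stock 5 mOhm):

```python
def parallel_mohm(r1: float, r2: float) -> float:
    """Equivalent resistance of two shunts in parallel (milliohms)."""
    return (r1 * r2) / (r1 + r2)

STOCK = 5.0    # mOhm: the shunt value the card's power sensing assumes
STACKED = 8.0  # mOhm: resistor soldered on top of it

r_eff = parallel_mohm(STOCK, STACKED)  # ~3.08 mOhm effective
scale = STOCK / r_eff                  # card under-reads power by this factor
print(f"effective shunt {r_eff:.2f} mOhm, power limit scaled ~{scale:.3f}x")
# e.g. a 250 W BIOS limit would then allow ~406 W of real draw
```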


----------



## Shadowdane

Renegade5399 said:


> So flashing the Trio (3 power connector) BIOS to a 2 power connector card didn't brick it?


Nope, worked just fine, granted I only tried two different MSI BIOSes on my card; I didn't want to attempt a different brand's BIOS... The card just pulled as much power as possible from the 2x 8-pin connectors, maxing out around ~350-355W.


----------



## black06g85

rtx2080ti xc hybrid here.
best it will do so far
+145 core
+1100 mem.

max temps I've hit was 48-49c after a few hours gaming. 
core any higher it locks. mem any higher it artifacts.

constantly bouncing off the power and voltage limit though which is annoying.


----------



## Roen

TraktorXD said:


> If you really flash this bios Version: 90.02.0B.00.D7 (HOF OC LAB) its cool. Question is if its worth it


Where does one get the HOF 450W BIOS?

Can I use this with NVFlash?

https://www.techpowerup.com/vgabios/204869/galax-rtx2080ti-11264-180927


----------



## Glerox

RaMsiTo said:


> +150 core
> +1500 memory
> 
> 
> 
> 
> When starting the test it goes to 2205 mhz, then it goes down by temperature until 2160.


Nice overclock! Everything stock? No shunt mod, and with the stock cooler?
Fans at 100%?

Will try mine this week!


----------



## RaMsiTo

Glerox said:


> nice overclock! everything stock? no shunt mod and with the stock cooler?
> 
> Fans at 100%?
> 
> 
> 
> Will try mine this week!




Yes, all stock and 100% fans for the test.

It runs very cool; playing Odyssey with 65% fans it does not go above 65 degrees, keeping 2130 on the core.







----------



## Martin778

I'm running sub 60*C at ~1800-1900RPM playing FH4 or SOTR. It's pretty amazing for air cooling.


----------



## Thoth420

Yeah, the Gaming X and Lightning cooler is amazing. I keep a fixed speed of 75%, which in my case is no more than a dull hum, and the card has never even seen 65C. It never really goes above 60C in any game either.


----------



## Sheyster

Shadowdane said:


> Nope worked just fine granted I only tried two different MSI BIOS' on my card, didn't want to attempt a different brand BIOS... card just pulled as much power as possible from the 2 x 8-pin connectors maxes out around ~350-355W.



I have the same card as you (MSI Duke OC). Are you aware it is a reference PCB card? You're better off with the GALAX 380w BIOS. You will have no trouble hitting the 380w limit with that one assuming you can attain a reasonable OC with your card. The MSI Trio BIOS is not suited for anything but the Trio. It's been tested several times by folks here in this thread and results are worse than GALAX on reference PCB cards.


----------



## NBrock

Rob w said:


> On air it runs superposition at about 52deg, and clocks up and down like a yo-yo.
> on water it’s about 35deg, holds clocks better ie 2070to 2085 for longer, and on the chiller I’ve run it at 14deg ( window open haha)borderline condensation getting 2130 with drop to 2115-2100-1985-1970(power throttling).
> Dew point needs watching, chiller helped me get the titanv to third place a short while back, they are well worth it:thumb:
> I have the hilea 500c.
> 
> 
> It saves bios as TU102.rom fine and will flash it back fine, so why the problem with the KFA2 I don’t know? Still the subsystem mismatch ID. Even with the bios for that? I can get it to protectoff but exception caught when trying to flash...


Awesome. Thanks for the reply. That's the chiller I am looking at. My dew point in winter in my office is pretty low. I'm able to get 2130 in time spy on water with temps peaking at about 37°c.


----------



## dante`afk

https://www.youtube.com/watch?v=MpCVrYYxOMw

It's in German, but the numbers speak for themselves at minute 25:11

waterblock reviews

AC kryographics 2080 Ti
EK Waterblocks EK Vector 2080Ti
Phanteks Glacier 2080Ti
Watercool Heatkiller IV 2080Ti


----------



## iamjanco

dante`afk said:


> https://www.youtube.com/watch?v=MpCVrYYxOMw
> 
> It's in german but numbers speak for themselves at minute 25:11
> 
> waterblock reviews
> 
> AC Kryopgraphix 2080Ti
> EK Waterblocks EK Vector 2080Ti
> Phanteks Glacier 2080Ti
> Watercool Heatkiller IV 2080Ti


While I'm glad that Igor went through the motions, I was a bit disappointed that he didn't have an opportunity to test VRM temps using AquaComputer's active backplate. That said, it's been six weeks and I'm still waiting on my parts to ship from AC, and they've had a few *fitment-related issues* to resolve since releasing it. 

I would have also liked to see flow/restriction testing for all of the blocks, but I'm not sure Igor is set up to do those tests. I'll be performing those tests on my AC setup at some point after receiving my parts.

Not that it necessarily matters, but please note that I do speak German and was able to follow along as Igor spoke. The numbers that Igor began sharing at 25:05 really are all that's important as far as this video is concerned.


----------



## J7SC

dante`afk said:


> https://www.youtube.com/watch?v=MpCVrYYxOMw
> 
> It's in german but numbers speak for themselves at minute 25:11
> 
> waterblock reviews
> 
> AC Kryopgraphix 2080Ti
> EK Waterblocks EK Vector 2080Ti
> Phanteks Glacier 2080Ti
> Watercool Heatkiller IV 2080Ti





iamjanco said:


> While I'm glad that Igor went through the motions, I was a bit disappointed that he didn't have an opportunity to test VRM temps using AquaComputer's active back plate. That said, it's been six weeks and I'm still waiting on my parts to ship from AC, and they've had a few *fitment-related issues* they've had to resolve since releasing it.
> 
> Would have also like to see flow/restriction testing for all of the blocks, but I'm not sure Igor is set up to do those tests. I'll be performing those tests on my AC setup at some point after receiving my parts.
> 
> Not that it necessarily matters, but please note that I do speak German and was able to follow along as Igor spoke. The numbers that Igor began sharing at 25:05 really are all that's important as far as this video is concerned.



Igor can go on a bit (I also speak German) though the vid is a good find and the graphs w/ prior infrared photos are great - especially if you want to place some extra 40mm helper fans for VRMs etc. Even the vid after this one about unboxing the Galax / KFA2 2080 Ti OC was fun. 

This water-block test also seems to reflect the 1080 TI water-block results we had seen here before to a large extent. The only problem seems to be the actual availability of the AquaComputer (and the Heatkiller IV?). Finally, I also hope that down the line, he can include some of the newer Asian offerings (Barrowch et al).


----------



## Kalm_Traveler

Not sure how many of you guys have picked up the Bitspower or Heatkiller IV blocks, but I have 2 Titans with Bitspower and now 1 with a Heatkiller IV. Although this is not an apples-to-apples comparison by any stretch, it's a bit interesting to me that, with the 2 rigs next to each other in the same room, the card with the Heatkiller block idles about 4-5C cooler than either of the two with Bitspower blocks (and those have a much beefier cooling setup - the big rig has 2 60mm-thick 480mm radiators, the smaller one only a single 30mm-thick 420mm radiator).


----------



## iamjanco

J7SC said:


> Igor can go on a bit (I also speak German) though the vid is a good find and the graphs w/ prior infrared photos are great - especially if you want to place some extra 40mm helper fans for VRMs etc. Even the vid after this one about unboxing the Galax / KFA2 2080 Ti OC was fun.
> 
> This water-block test also seems to reflect the 1080 TI water-block results we had seen here before to a large extent. The only problem seems to be the actual availability of the AquaComputer (and the Heatkiller IV?). Finally, I also hope that down the line, he can include some of the newer Asian offerings (Barrowch et al).


Agreed :thumb:


----------



## Renegade5399

Kalm_Traveler said:


> Not sure how many of you guys have picked up the Bitspower or Heatkiller IV blocks, but I have 2 Titans with Bitspower, and now 1 with Heatkiller IV and although this is not an apples to apples comparison by any stretch, it's a bit interesting to me that with 2 rigs next to each other in the same room, the card with the Heatkiller block idles about 4-5c cooler than either of the two with Bitspower blocks (and those have a much beefier cooling setup - *big rig has 2 60mm thick 480mm radiators*, the smaller one only has a single 30mm thick 420mm radiator).












Sorry, I could not resist that.

Also:










WOO HOOOOOO!


----------



## MunneY

Aight,

So I'm lagging behind here. I finally decided to flash the ref cards I have, and when I get to the point of flashing the new BIOS, it asks "do you want to continue", I hit Y, then it just runs through and jumps back to the CMD window.

I have 2 cards in my system, but I've never had issues flashing GPUs before when I had others.


----------



## dante`afk

Kalm_Traveler said:


> Not sure how many of you guys have picked up the Bitspower or Heatkiller IV blocks, but I have 2 Titans with Bitspower, and now 1 with Heatkiller IV and although this is not an apples to apples comparison by any stretch, it's a bit interesting to me that with 2 rigs next to each other in the same room, the card with the Heatkiller block idles about 4-5c cooler than either of the two with Bitspower blocks (and those have a much beefier cooling setup - big rig has 2 60mm thick 480mm radiators, the smaller one only has a single 30mm thick 420mm radiator).


Bitspower was one of the first to release a 2080 Ti block; I have it too. But I also ordered a Heatkiller IV last week (to match my Heatkiller CPU block, and for better temps).

Made in Germany stands for quality, and that shows, through and through, again.


----------



## Renegade5399

dante`afk said:


> bitspower was one of the firsts to release a 2080ti block, I have it too. But I also ordered a heatkiller IV last week (just to fit with my heatkiller cpu block and also better temps)
> 
> made in Germany stands for quality, and this is being shown, through and through again.












And that's enough beers for me tonight. Time to put the internet away until tomorrow.


----------



## stilllogicz

dante`afk said:


> https://www.youtube.com/watch?v=MpCVrYYxOMw
> 
> It's in german but numbers speak for themselves at minute 25:11
> 
> waterblock reviews
> 
> AC Kryopgraphix 2080Ti
> EK Waterblocks EK Vector 2080Ti
> Phanteks Glacier 2080Ti
> Watercool Heatkiller IV 2080Ti


Addressable RGB and top-notch performance? Looks like it's time to sell this Phanteks block. Too bad we've gotta wait almost a month, it seems?


----------



## Coldmud

Rob w said:


> Try as I have 5 versions of nvflash, 4 bios’s, numerous combinations of instructions taken from suggestions and guides and the bloody card still flags a mismatch ID error.
> So after wasting a couple of weeks of my life here’s the new tactic....
> https://www.overclock.net/forum/attachment.php?attachmentid=253140&stc=1&d=1549985644
> 8mohm shunt, on top of the 5mohm.


Good luck with that! Wish I had the balls for this.. probably gonna go the lightning Z route, when some waterblocks arrive.

Let us know how it goes, and some more pics would be nice


----------



## Coldmud

MunneY said:


> Aight,
> 
> So I'm lagging behind here. I finally decided to flash the ref cards I have, and when I get to the point of flashing the new BIOS, it asks "do you want to continue", I hit Y, then it just runs through and jumps back to the CMD window.
> 
> I have 2 cards in my system, but I've never had issues flashing GPUs before when I had others.


It should give an error, though it may only be visible for a split second in cmd before it closes; try to capture the screen and see what it says.


----------



## J7SC

Ah, the good old days (or the tomorrows for KingPin 2080 Ti users)


----------



## Coldmud

J7SC said:


> Ah, the good old days (or the tomorrows for KingPin 2080 Ti users)


Good old days indeed, had some fun with this puppy and 680 classies back in the day..
But finally, some exotic 2080ti's arriving, still.. this generation around it feels like nvidia and partners just can't supply the stores, to this day in my country (eu) most day 1 cards are still on backorder. 
And prices for lightning and the newer "A+" cards are rising to the €1700 mark. It's getting harder to justify the hobby, but still.. buying a new card just gets me all giddy. The Lightning Z, man I gotta play around with it  still waiting for blocks to hit and someone to dump this secret afterburner with unlocked voltage before I would even decide to sell the waterforce.
The kingpin is a hybrid only right? meh..


----------



## kot0005

BFV DLSS patch is out, quick someone with fast internet test it.


----------



## kx11

turns out my Lightning Z can rock 100+ core , 1500+ MEM


----------



## Rob w

Coldmud said:


> Good luck with that! Wish I had the balls for this.. probably gonna go the lightning Z route, when some waterblocks arrive.
> 
> Let us know how it goes, and some more pics would be nice


Shunt mod went fine, especially after finding a way to hold the resistors in place.


Not as messy as my Titan V.
Conclusion is: glad I went this way. Lock a voltage and core clock in and it holds solid. Here's a run of Superposition 1080p Extreme where I set the core at 2130/8200.
It didn't budge.
Hope the video loads? It also shows total watts for the rig, lower than I thought it would be.


----------



## Coldmud

kot0005 said:


> BFV DLSS patch is out, quick someone with fast internet test it.


So DLSS only works with DX12, which never ran well on Frostbite. Gonna check in a bit if the stuttering is gone; haven't tested DX12 since launch. But it looks like we'll only be able to use DLSS in conjunction with raytracing. This is so disappointing.. I have no real interest in running any raytracing in a fast-paced online shooter, what's the point?



Rob w said:


> Shunt mod went fine especially after finding a way to hold resistors in place.
> 
> 
> Not as messy as my titanv
> Conclusion is,. Glad I went this way, lock a voltage and core clock in and it holds solid here’s a run of superposition 1080p extreme that I set core at 2130/8200
> It didn’t budge.
> Hope video loads? Also shows tota watts for rig, lower than I thought it would be.


Very clean job! You can hardly tell it's there. The video did load, but it was something like 240p, hard to look at  but good results already :thumb:
I see you're pulling about 270W total; what did you draw before?


----------



## Rob w

Coldmud said:


> So dlss only works with dx12, which never ran well on frostbite. Gonna check in a bit if the stuttering is gone. Haven't tested dx12 since launch. But it looks like we'll only be able to use dlss in conjuction with raytracing. This is so dissapointing.. I have no real interest to run any raytracing in a fast paced online shooter, what's the point?
> 
> 
> Very clean job! You can hardly tell it's there. Great job anyway, the video did load, but was something like 240p, hard to look at  but good results already :thumb:
> See you're pulling about 270w total, what did you draw before?


Coldmud: that's the first vid I've done via YouTube; I'll have a look at how to upload in 1080p/4K. Something else to learn now! Hahaha.
Now to see what it can do, because I'm still nowhere near my Titan V.
It was pulling 270 watts at the end but went up to (I think) 580 watts max during the test.


----------



## Coldmud

dlss off rtx low https://ibb.co/PtMgwrP
dlss on rtx low https://ibb.co/wSm0qMT

What we've all been afraid of: it looks like a blurry mess, so DLSS, no thanks.



Rob w said:


> Now to see what it can do because I’m still nowhere near my titanv.


Yeah, but couldn't you give that 1.4 or 1.5v?


----------



## kot0005

well so apparently DLSS can only be enabled with Raytracing..


----------



## pewpewlazer

kot0005 said:


> well so apparently DLSS can only be enabled with Raytracing..


Yep. And I can only get it to enable at 4k. Well, it still gives me the option to turn it on at 2560x1600, which makes zero sense, but once I drop down to 2560x1440 or lower the option for DLSS becomes greyed out for me. People on Reddit claim they have it enabled at 1440p and 1080p though *shrug*. It looks absolutely atrocious though so who cares...

DLSS OFF - https://i.imgur.com/lbLnaJA.jpg

DLSS ON - https://i.imgur.com/Bl3aPTR.jpg

In terms of performance, I compared fps in a couple static scenes with DLSS on/off and it was a 60-70% improvement.
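To put that number in perspective, here is a hypothetical little helper (not from the post) converting an fps gain into the frame-time reduction it implies; a 60% fps gain means each frame renders in 62.5% of the original time, which hints at how aggressively DLSS is cutting work per frame:

```python
def frame_time_ratio(fps_gain_pct: float) -> float:
    """Fraction of the original frame time remaining after an fps gain.

    fps = 1 / frame_time, so a +60% fps gain means the frame time
    shrinks to 1 / 1.6 = 62.5% of the original.
    """
    return 1.0 / (1.0 + fps_gain_pct / 100.0)

for gain in (40, 60, 70):
    print(f"+{gain}% fps -> frame time {frame_time_ratio(gain):.1%} of original")
```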



Coldmud said:


> So dlss only works with dx12, which never ran well on frostbite. Gonna check in a bit if the stuttering is gone. Haven't tested dx12 since launch. But it looks like we'll only be able to use dlss in conjuction with raytracing. This is so dissapointing.. I have no real interest to run any raytracing in a fast paced online shooter, what's the point?


I think they allegedly improved DX12 performance in one of the earlier patches. The occasional stutter is still there, but not as bad. DX12 performance is much better than DX11 (even FFR ON) performance in my experience. I was running DX11 FFR ON with my 1070 SLI setup. When I switched to the 2080 Ti my performance was similar, but GPU usage was abysmally low. Switched to DX12 and my GPU usage and frame rates went up a nice bit. Guess it breathed some additional life into my Haswell-E dinosaur.


----------



## kx11

Tried DLSS now with BFV; with Ultra or Low DXR settings it makes no improvement @ 4K at all




and the image is blurry


----------



## Coldmud

pewpewlazer said:


> Yep. And I can only get it to enable at 4k. Well, it still gives me the option to turn it on at 2560x1600, which makes zero sense, but once I drop down to 2560x1440 or lower the option for DLSS becomes greyed out for me. People on Reddit claim they have it enabled at 1440p and 1080p though *shrug*. It looks absolutely atrocious though so who cares...
> 
> DLSS OFF - https://i.imgur.com/lbLnaJA.jpg
> 
> DLSS ON - https://i.imgur.com/Bl3aPTR.jpg


it works @ 3440x1440p here, but indeed it looks like a joke compared to TAA, what was all the hype about?



kx11 said:


> and the image is blurry


It's a feature..


----------



## Hulk1988

Hello,

quick question about: 

HOF | 3 Fan | 2.75 Slot | 330mm | RGB | 19 Power Phases | 1635 MHz Boost | 300/450 W | Custom PCB | EAN 4895147132785 | PN 28IULBUCV6DK

Is there a source confirming that the 3-fan edition also has a 450W BIOS?

Thank you


----------



## shiokarai

J7SC said:


> Igor can go on a bit (I also speak German) though the vid is a good find and the graphs w/ prior infrared photos are great - especially if you want to place some extra 40mm helper fans for VRMs etc. Even the vid after this one about unboxing the Galax / KFA2 2080 Ti OC was fun.
> 
> This water-block test also seems to reflect the 1080 TI water-block results we had seen here before to a large extent. The only problem seems to be the actual availability of the AquaComputer (and the Heatkiller IV?). Finally, I also hope that down the line, he can include some of the newer Asian offerings (Barrowch et al).


Performance-pcs.com had a few 2080 Ti blocks from Aqua Computer a few days ago (all sold out immediately, unfortunately), but not the backplates. I assume issues with the active backplate fitting the actual block and terminal are the cause of the backplate delays, plus some issues with block assembly, all detailed in the German thread: https://forum.aquacomputer.de/wasse...-und-kryographics-2080-ti-update/index11.html Unfortunately I don't speak German, so I can only depend on Google translating the thread.

Somehow this block and backplates are ill-fated - delayed, issues etc. Hope they'll sort this out, as by the results this is the best RTX 2080 ti block.


----------



## Krzych04650

pewpewlazer said:


> Yep. And I can only get it to enable at 4k. Well, it still gives me the option to turn it on at 2560x1600, which makes zero sense, but once I drop down to 2560x1440 or lower the option for DLSS becomes greyed out for me. People on Reddit claim they have it enabled at 1440p and 1080p though *shrug*. It looks absolutely atrocious though so who cares...
> 
> DLSS OFF - https://i.imgur.com/lbLnaJA.jpg
> 
> DLSS ON - https://i.imgur.com/Bl3aPTR.jpg
> 
> In terms of performance, I compared fps in a couple static scenes with DLSS on/off and it was a 60-70% improvement.


Well, if the game is indeed targeting a 60-70% performance improvement, then the implementation is fundamentally wrong from the beginning. There are no miracles; you cannot get such a massive performance boost without a significant loss of image quality. DLSS should only go as far as it can without affecting image quality in a significant way. Even if that were only a 20% performance improvement, it would still be a huge deal: 20% more performance is like having a one-grade-higher GPU, and if you already have the fastest one, it would be like having a next-generation GPU. A 20% performance improvement otherwise requires very significant compromises in graphics settings, likely resulting in bigger image-quality degradation than a proper DLSS implementation. 

I don't know, I find it hard to believe that someone actually released this kind of implementation; it looks like some broken DOF taking over the entire screen. With this kind of approach, what is the point of all of this AI and Tensor Cores if it looks like basic upscaling? Actually, if you remember Quantum Break, its upscaling option seems to have a smaller impact on image quality (though still far from acceptable) and it almost doubles performance without any need for AI. I mean, how bad do you have to be to have all of this technology at hand and still develop something worse than normal upscaling... how do you even do that, and most importantly, how do you release something like this to the public?


----------



## kot0005

Raytraced reflections on Ultra look a bit more pixelated... zzzz

I was able to maintain 4K 60 fps with RTX in the tank campaign with DLSS, but there's other stuff going on here.. they might have reduced reflection quality.


----------



## kot0005

Coldmud said:


> it works @ 3440x1440p here, but indeed it looks like a joke compared to TAA, what was all the hype about?
> 
> 
> 
> It's a feature..


It's supposed to be superior to TAA. If you look at Port Royal, it's the opposite: TAA is blurry and DLSS is crisp.


----------



## Zay86

I'd just like to report that the Gainward Phoenix Non-A | 3 Fan | 2.5 Slot | 292mm | RGB | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN 4260183364115 | PN 426018336-4115 has an A-binned chip. 

I just bought my Gainward RTX 2080 Ti Phoenix, and GPU-Z shows 1E07 for the device ID, meaning an A-binned chip. It would be nice if zhrooms could add this info to the starting post.


----------



## MunneY

Coldmud said:


> Should give an error, though it can only be for a split second in cmd before it closes, try to capture the screen and see what it says.


This is as close to the end as I can get. I've never had issues flashing any previous gen cards with 2 installed.


----------



## keikei

kot0005 said:


> Raytraced reflection on Ultra look a bit more pixelated...zzzz
> 
> I was able to maintain 4k 60fps with RTX in the Tank campaign with DLSS but there other stuff going on here..they might have reduced reflection quality.


Any gains in frames when DLSS enabled?


----------



## Krzych04650

NVIDIA's own comparison is also showing a lot of blurriness:






Though less than in your screenshots, and it claims up to a 40% performance increase, not 60-70.

Still, far from impressive. It was meant to be comparable to TAA, but it looks like 4K DLSS is worse than native 1440p.


----------



## kx11

Nvidia dropped a new driver; it should enhance DLSS in BFV




https://www.guru3d.com/files-details/geforce-418-91-whql-driver-download.html


----------



## kx11

Bitspower contacted me again; the Lightning Z water block should be released within 2 weeks.


----------



## GunnzAkimbo

Already met its match.

https://www.guru3d.com/articles-pages/metro-exodus-pc-graphics-performance-benchmarks,1.html

A good review of the game.

1440p is about the max res for ultra.


----------



## nycgtr

Just a heads up: selling 2 Strix 2080 Tis and a Turbo 2080 Ti if anyone is interested. Much cheaper than what you'd pay anywhere else.


----------



## Sheyster

Coldmud said:


> But it looks like we'll only be able to use dlss in conjuction with raytracing. This is so dissapointing.. I have no real interest to run any raytracing in a fast paced online shooter, what's the point?



Exactly... No point at all if you're a _competitive_ online FPS gamer. In fact, I've already turned down multiple video options even while running on DX11. :thumb: I have not used DX12 at all in BFV.


----------



## Sheyster

MunneY said:


> This is as close to the end as I can get. I've never had issues flashing any previous gen cards with 2 installed.


Did you disable the card(s) in device manager first? If not try that. Also make sure you're using the FE modded version of NVFlash found in the OP. Other than that, no idea why it's not working.


----------



## nycgtr

Sheyster said:


> Did you disable the card(s) in device manager first? If not try that. Also make sure you're using the FE modded version of NVFlash found in the OP. Other than that, no idea why it's not working.


Not sure of the background of this story, but that also pops up when you're trying to flash onto a non-A chip.


----------



## J7SC

Coldmud said:


> Good old days indeed, had some fun with this puppy and 680 classies back in the day..
> But finally, some exotic 2080ti's arriving, still.. this generation around it feels like nvidia and partners just can't supply the stores, to this day in my country (eu) most day 1 cards are still on backorder.
> And prices for lightning and the newer "A+" cards are rising to the €1700 mark. It's getting harder to justify the hobby, but still.. buying a new card just gets me all giddy. The Lightning Z, man I gotta play around with it  still waiting for blocks to hit and someone to dump this secret afterburner with unlocked voltage before I would even decide to sell the waterforce.
> The kingpin is a hybrid only right? meh..



...yeah, the amount of what seems like paper / PR launches of cards that then seem to be back-ordered forever at even larger outlets is quite surprising...presumably, the yield for high-binned chips is still relatively low, which would also explain the high and rising prices. 

That Kingpin card is hybrid - but a few minor mods later, and you can expand cooling capacity significantly w/ bigger rads, fans etc...that is after checking if everything works out of the box first (re. warranty and stuff).


----------



## Carillo

kx11 said:


> tried DLSS now with BFV , ultra or Low DXR settings makes no improvements @ 4k at all
> 
> 
> 
> 
> and the image is blurry


It just works!


----------



## Chimera619

Hey guys,
I am going to build myself a desktop since I am finally settling in one place (I had been on the move, so I was using a GT75VR laptop).
I am going for a 2080 Ti.
My options atm are the ZOTAC AMP at $1424 shipped to South Africa, and the Palit GamingPro OC at $1275 shipped to SA.
All the other cards are at least $100 more than the Zotac, even the Gigabyte Windforce OC.
Any ideas?
I am also not sure if I should stay on air or try to watercool.
Is the performance worth the hassle?


----------



## DeadSec

In your case I would definitely switch to watercooling. A few hours of hassle will go by quickly; the excellent performance will last a long time.


----------



## Chimera619

Okay, is there an AIO cooler for a 2080 Ti?
Or do I need to build a custom loop?
I live in SA so these parts aren't really easy to find.
It will easily break $200-300 for a custom loop, I think.


----------



## J7SC

Chimera619 said:


> Okay is there an AIO Cooler for a 2080TI ?
> Or do I need to build a custom loop
> I live in SA so these parts arent really easy to find
> It will easily break 200-300$ for a custom loop i think


 
...check w/ VPII - I think he lives in S. Africa and converted his air-cooled 2080 Ti(s) to an AIO setup. Apart from that, MSI and Gigabyte Aorus both have AIO 2080 Ti models out now, in addition to the (upcoming/here now?) even pricier Asus Matrix and KingPin 2080 Ti performance monsters, which also have AIOs.
:Snorkle:


----------



## Esenel

Another great experience with BF V DLSS here...
... it doesn't work at 1440p with a 2080 Ti .......

Only in 4K via DSR for me...
And I tried not to believe your blurry-image posts, but sadly they are true :-(


----------



## sblantipodi

http://screenshotcomparison.com/comparison/130067
http://screenshotcomparison.com/comparison/130069
http://screenshotcomparison.com/comparison/130070
http://screenshotcomparison.com/comparison/130071
http://screenshotcomparison.com/comparison/130072


----------



## sblantipodi

DLSS is a real ****


----------



## MunneY

Sheyster said:


> Did you disable the card(s) in device manager first? If not try that. Also make sure you're using the FE modded version of NVFlash found in the OP. Other than that, no idea why it's not working.


Whatup man, been a minute. Cards are disabled. I'm using the modded version; I've gotten it from 2 different sources. I've also tried 2 different BIOSes.



nycgtr said:


> Not sure of the background on this story but that also pops up when your trying to flash onto a NON A chip.


Pretty sure it's the right one... unless I had too much to drink last night.


----------



## pewpewlazer

Esenel said:


> Another great experience with BF V DLSS here...
> ... it doesn't work in 1440p with a 2080Ti .......
> 
> Only in 4k via DSR for me...
> And I tried not to believe your blurry image posts, but sadly they are true :-(


Yeah that one blew my mind. Everyone on Reddit was telling me I was full of crap when I said DLSS was 4k only. I figured it was DICE being DICE. Then I saw the notes released this afternoon saying DLSS was only supported at 4k with the 2080 Ti, but lesser cards support lower resolutions. What kind of nonsense is that?


----------



## willverduzco

Unlike FF15, where DLSS only works at 4K, DLSS in BF5 works just fine at 3440x1440 (21:9 ultrawide) as well. That said, it looks about as bad as setting resolution scale to 70%, but with the performance of resolution scale at 85%. In other words, it's not worth it: the huge image quality hit isn't commensurate with the middling performance gains.
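A quick back-of-the-envelope on that 70%-look / 85%-cost comparison (just arithmetic, not measured data):

```python
# Rough pixel-count arithmetic behind the comparison above: resolution
# scale applies per axis, so render cost falls with the square of it.
# 3440x1440 is the ultrawide resolution discussed; the rest is just math.

def render_pixels(width, height, scale):
    """Pixels actually rendered at a given resolution-scale setting."""
    return round(width * scale) * round(height * scale)

native     = render_pixels(3440, 1440, 1.00)
looks_like = render_pixels(3440, 1440, 0.70)   # the quality it resembles
costs_like = render_pixels(3440, 1440, 0.85)   # the performance it gives

print(looks_like / native)  # ~0.49 of the native pixel load
print(costs_like / native)  # ~0.72 of the native pixel load
```

So the complaint above amounts to: DLSS renders roughly half the pixels' worth of quality while only saving roughly a quarter of the work.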


----------



## Sheyster

MunneY said:


> Whatup man, been a minute. Cards are disabled. I'm using thee modded version. I've gotten it from 2 different sources. I've also tried 2 different bios.


You used the --protectoff NVflash option first, before flashing?

What actual NVflash syntax/options are you using to flash?


----------



## Sheyster

willverduzco said:


> Unlike FF15, where DLSS only works at 4k res; DLSS in BF5 works just fine at 3440x1440 (21:9 ultrawide) as well. That said, it looks about as bad as turning resolution scale to 70%, but with the performance of resolution scale set to 85%. In other words, it's not worth it because of the huge image quality hit not commensurate to the middling performance gains.


 It sounds like a hot mess!


----------



## VPII

Chimera619 said:


> Okay is there an AIO Cooler for a 2080TI ?
> 
> Or do I need to build a custom loop
> 
> I live in SA so these parts arent really easy to find
> 
> It will easily break 200-300$ for a custom loop i think


Hey there, fellow SA... I used an NZXT Kraken G12 with a Corsair H110 CW and it works great. Unfortunately there's no cover over the memory and VRM, but I use two active cooling fans over them with great success.

Sent from my SM-G960F using Tapatalk


----------



## kot0005

Yes, the whole point of DLSS is to use the physical Tensor cores to prevent the image quality loss from downscaling and upscaling. Because if I wanted performance and didn't care about image quality, I would just use the resolution scale and get the performance back.

Nvidia says it themselves..
1. https://youtu.be/-ZefAW-Anno?t=2034
2. https://youtu.be/-ZefAW-Anno?t=2153
3. https://youtu.be/-ZefAW-Anno?t=2192 (generate pixels it has never seen before, aka fill in the gaps in a 1440p image and make it 4K)
4. https://youtu.be/JyMdhW9qPE4?t=2069
5. https://youtu.be/JyMdhW9qPE4?t=2113 (but in BFV DLSS looks worse than 1440p on a 4K screen?)


STOP saying stuff like "what did you expect" or "DLSS is going to be blurry". The AI is like a human (except it's a lot faster and can correct stuff in real time while you play the game) that is supposed to fill in the incorrect stuff on the fly and make it not look like straight-up 1440p upscaling.

Imagine you were printing some photos and found mistakes in them: you'd go back, fix them, and reprint them. Right now DLSS isn't doing that; it just seems to be doing the printing without correcting the images.


----------



## kx11

is this real ?!


----------



## MunneY

Sheyster said:


> You used the --protectoff NVflash option first, before flashing?
> 
> What actual NVflash syntax/options are you using to flash?


I've turned protect off yeah.

I've used nvflash64 -6 x.rom
and nvflash64 x.rom


----------



## J7SC

MunneY said:


> I've turned protect off yeah.
> 
> I've used nvflash64 -6 x.rom
> and nvflash64 x.rom



I didn't follow the whole thing, but I trust you're using 'run as admin' for CMD, and also, since you have more than one card, that you're adding '--index=__' to each flash command (the number for the specific adapter once you type --list), and using 'protect off' for each card separately? Just my :2cents: , sorry if you already went through all that.
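For anyone following along, the per-card sequence described above would look something like this (a sketch only: 'bios.rom' is a placeholder name, and the index numbers are whatever your own --list output reports):

```text
nvflash64 --list                    <- note each card's index number
nvflash64 --index=0 --protectoff    <- remove flash protection on card 0
nvflash64 --index=0 -6 bios.rom     <- flash card 0 (-6 overrides the ID mismatch check)
nvflash64 --index=1 --protectoff    <- then repeat for the second card
nvflash64 --index=1 -6 bios.rom
```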


----------



## pewpewlazer

kx11 said:


> is this real ?!


Yes, that is real.


----------



## stilllogicz

So what appeared to be a good night turned into a bad night. As I mentioned a few pages back, I returned my Seahawk EK X 2080 Ti and I ended up purchasing an XC black EVGA. Received the card today, had Samsung vram (cool I thought). Went through various benchmarks, everything looked good. Few hours later while watching youtube - space invaders and black screen, no longer boots into windows (no video signal). What the heck is up with these new Nvidia cards??

DDU and reinstalled my 1080 ti - everything is back to normal again.


----------



## kot0005

stilllogicz said:


> So what appeared to be a good night turned into a bad night. As I mentioned a few pages back, I returned my Seahawk EK X 2080 Ti and I ended up purchasing an XC black EVGA. Received the card today, had Samsung vram (cool I thought). Went through various benchmarks, everything looked good. Few hours later while watching youtube - space invaders and black screen, no longer boots into windows (no video signal). What the heck is up with these new Nvidia cards??
> 
> DDU and reinstalled my 1080 ti - everything is back to normal again.


It's not the Samsung or Micron memory, it's just something else.. hopefully your next one won't die..


----------



## stilllogicz

I can't believe it's 2019, Nvidia is charging >$1,000 for a GPU, and you have to play a lottery just for the basic functions to work. I'm pretty salty right now tbh. Guess I'll sit tight for the time being.


----------



## Coldmud

kx11 said:


> Bitspower contacted me again , Lightning Z Water Block should be released within 2 weeks


Good to know!



J7SC said:


> ...yeah, the amount of what seems like paper / PR launches of cards that then seem to be back-ordered forever at even larger outlets is quite surprising...presumably, the yield for high-binned chips is still relatively low, which would also explain the high and rising prices.
> 
> That Kingpin card is hybrid - but a few minor mods later, and you can expand cooling capacity significantly w/ bigger rads, fans etc...that is after checking if everything works out of the box first (re. warranty and stuff).


Don't forget DDR prices skyrocketing; however, they seem to have stabilised as of late. Heck, we're still dealing with the bitcoin backlash even.

I don't like the look of the Kingpin cooler tbh.


----------



## kot0005

So this guy on reddit...


----------



## Rob w

kot0005 said:


> So this guy on reddit...


I used a 27” at 1440 and it was great, but afaik (from past reviews) for 4K you need at least a 32”.
I went for the 32” 4K and am not disappointed with it.


----------



## GunnzAkimbo

Rating of the EVGA 2080 Ti Black (lowest-priced Ti at $1660 AUD, free shipping).

That is a damn good price from Newegg for a Ti in AUS; here at the retail stores it's $1859 (best price), then you pay postage at $45 with transit cover ($1900+).

Heard it runs hot and loud (can't be any worse than a FE????), wearing headphones anyway so...

*Forget about it, duty $ adds $$$.


----------



## Chimera619

Zotac AMP              | Amazon   | $1512 | 1650 MHz | 3 Fan
Zotac AMP              | Wootware | $1400 | 1650 MHz | 3 Fan
Zotac AMP Extreme Core | Wootware | $1570 | 1755 MHz | 3 Fan, RGB
Palit GamingPro OC     | Wootware | $1300 | 1650 MHz | 2 Fan

These are my options; prices are in USD.
I was thinking of the Palit, running it stock at the start, then water-cooling and overclocking it later on.


----------



## maxmix65

This is my Lightning Z


----------



## Glerox

kx11 said:


> Bitspower contacted me again , Lightning Z Water Block should be released within 2 weeks


Nice! Where? On the bitspower international online store?


----------



## kx11

Glerox said:


> Nice! Where? On the bitspower international online store?



yes, usually they put their new stuff there first




but it doesn't mean they're the first to release the block. EKWB is always last in these situations; Bykski, Barrow and Bitspower are always ahead of everyone in this line of products


----------



## J7SC

*@iamjanco @Coldmud @VPII* and fellow 2080 TIers - finally woke the thing up in its final configuration, and_ '''It's Alive'''_ . Before, I was running the GPUs on a Z170 test-bench and the 2950X with some older GTX Classies. 

Initially had a scare this afternoon when I turned it on for the first time > black screen :sad-smile :sozo: :headscrat ...

I was using an older test monitor w/ ancient HDMI converter cable...worked on the Classies. Anyhow, once I switched monitors and cables > voila  . So now I can finally get to the final custom touches I planned out since this was a tricky build w/ 5x XSPC 360x60 rads, 4x MPC655 pumps etc etc, never mind mobo swap w/ RMA.

'Fully-lit' images when finished and the build-log goes up... ...and did I mention that I like a bit of RGB, but this, this will take some getting used to


----------



## Coldmud

J7SC said:


> *@iamjanco @Coldmud @VPII* and fellow 2080 TIers - finally woke the thing up in its final configuration, and_ '''It's Alive'''_ . Before, I was running the GPUs on a Z170 test-bench and the 2950X with some older GTX Classies.
> 
> Initially had a scare this afternoon when I turned it on for the first time > black screen :sad-smile :sozo: :headscrat ...
> 
> I was using an older test monitor w/ ancient HDMI converter cable...worked on the Classies. Anyhow, once I switched monitors and cables > voila  . So now I can finally get to the final custom touches I planned out since this was a tricky build w/ 5x XSPC 360x60 rads, 4x MPC655 pumps etc etc, never mind mobo swap w/ RMA.
> 
> 'Fully-lit' images when finished and the build-log goes up... ...and did I mention that I like a bit of RGB, but this, this will take some getting used to


Glad to see you've almost completed the build! Looking forward to the log. Hit us up with some benchies in the meantime 
RGB can be turned off  How are you addressing the Aorus btw? Using RGB Fusion? Just put it on digital wave and dim the rest..


----------



## J7SC

Coldmud said:


> Glad to see you almost complete the build! Looking forward to the log. Hit us up with some benchies in the meantime
> RGB can be turned off  How are you addressing the aorus btw? using rgbfusion? Just put it on digital wave and dim the rest..



The MSI software can control both the mobo and the TridentZ RGB, but not the GPUs...not using Aorus RGB engine (a bit suspect) but I'll either disconnect the wires for RGB, or load the Galax 380w Bios (though stock Bios is already at 379w max / Superposition).

Still doing setup stuff, so no hardcore benchies yet, though soon > but I already posted Superposition at 14163 (4K optimized, as usual single-GPU with that bench), and TimeSpy 2x GPU is close to a 24k graphics score after the first run this afternoon, which was with stock GPU voltage, no 'curves', not the full PL (about 5% from max), and a relatively mild OC... both GPUs held 2115 MHz (not the limit). Even the 2950X did a respectable 126xx CPU score at only 4.2GHz / 16c/32t.

The key though in this test for me was that the GPU temp only moved up by 1c after initial start during the full TimeSpy test and stayed under 40 c (w/ 22 c ambient). I say 'key' because that is why I built this GPU cooling (separate from CPU system) with a total of 1080mm x 60mm rad space, 2 pumps and 12x 120 fans > hard to heat-soak that baby 

Looking forward to do the fine-detail-build stuff I can now get to / already applied 7 coats of white paint to a certain part  Build-log by weekend


----------



## kot0005

Rob w said:


> I used a 27” at 1440 it was great, but afaik ( from past reviews)for 4K you need at least a 32”
> I went for the 32” 4K and not disappointed with it.


PG27UQ and X27 owners would disagree.


----------



## KingKnick

Hello, I hope you can help me! I have an Asus RTX 2080 Ti Strix OC... I have seen on TechPowerUp that there are two newer BIOS versions for my graphics card!

I have this BIOS version -

GPU Device Id: 0x10DE 0x1E07
Version: 90.02.0B.40.38
RTX2080TI VB Ver 90.02.0B.40.AS04

and these two are newer -

GPU Device Id: 0x10DE 0x1E07
Version: 90.02.0B.00.8A
RTX2080TI VB Ver 90.02.0B.00.AS05

and

GPU Device Id: 0x10DE 0x1E07
Version: 90.02.0B.00.8E
RTX2080TI VB Ver 90.02.0B.00.AS06

Does anyone here know why there are newer versions for this card? Do these BIOS versions contain bugfixes or improvements?

Has anyone here already flashed these versions on their Strix?

Thanks for the help


----------



## Chimera619

Zotac AMP              | Amazon   | $1512 | 1650 MHz | 3 Fan
Zotac AMP              | Wootware | $1400 | 1650 MHz | 3 Fan
Zotac AMP Extreme Core | Wootware | $1570 | 1755 MHz | 3 Fan, RGB
Palit GamingPro OC     | Wootware | $1300 | 1650 MHz | 2 Fan

These are my options; prices are in USD.
I was thinking of the Palit, running it stock at the start, then water-cooling and overclocking it later on.

Anybody got suggestions?
I'm torn atm.


----------



## kot0005

More DLSS issues… Apparently HDR doesn't work with DLSS.


----------



## kx11

Is this normal? Or did too high an OC cause it?


----------






## xrb936

So, as I heard from some technicians, Nvidia has a driver-side power limit for all 20-series cards.

Which means that BIOSes with a higher power limit, even the WOC BIOS with its 2000W power limit, won't really break the power limit set by Nvidia.

That's why most users reported that they got WORSE results when they used those BIOSes.


----------



## ReFFrs

kx11 said:


> is this normal ? or too high OC caused it ?
> https://www.youtube.com/watch?v=PHB3mBYFktA


Scary AF. They should consider adding this effect to the base game in some places for a true postapocalyptic horror experience.


----------



## vmanuelgm

Metro Exodus first minutes, highest settings with RT and DLSS on, 3440x1440.


----------



## kot0005

https://drive.google.com/drive/folders/1JNo9DSh7YUBShJyTibwwGd2OvKgu5G9P

DLSS is a bust; 1600p is better. If you want performance, just set a native 16:9 1800p custom resolution in NVCP instead of turning on the abomination that is DLSS.


----------



## octiny

Metro patch just released.

https://wccftech.com/day-1-metro-exodus-update/

"RTX Improvements, DLSS Tuning, HDR Saturation Tuning and Much More"


----------



## kot0005

octiny said:


> Metro patch just released.
> 
> https://wccftech.com/day-1-metro-exodus-update/
> 
> "RTX Improvements, DLSS Tuning, HDR Saturation Tuning and Much More"


The patch went live with the game. My post above includes screengrabs taken after the patch.

Also, HDR doesn't work with DLSS on, so RIP to all of us PG27UQ and X27 users. My GPU seems to hold 60fps with DLSS off and RT set to high; I am just in the starting area though and have yet to enter the open world, so.... Nice one Nvidia lol.


----------



## vmanuelgm

octiny said:


> Metro patch just released.
> 
> https://wccftech.com/day-1-metro-exodus-update/
> 
> "RTX Improvements, DLSS Tuning, HDR Saturation Tuning and Much More"



Now waiting for the Epic Launcher to download it... xDDD


EDIT: I just read the post from the mate who says the original download already comes with that patch... If so, DLSS still needs to be improved.


----------



## kot0005

vmanuelgm said:


> Now waiting Epic Launcher downloads it... xDDD
> 
> 
> EDIT: I just read the mate who says the original download comes with that patch... If so, DLSS must still be improved.


Yeah, I am pretty sure it was patched... the review code and Steam were 1.0.00 and the Epic Games one is 1.0.11.


----------



## vmanuelgm

kot0005 said:


> yeah I am pretty sure it was patched...review code and Steam was 1.0.00 and Epic games one is 1.0.11


You are probably right. DLSS doesn't work right yet. Maybe the next Nvidia driver can improve things.


----------



## BigMack70

How do you actually turn on Ray Tracing in Metro Exodus? I see the option to turn on DLSS, but there is no option anywhere I can see to turn on ray tracing. This is on game version 0.1.0.11 from the Epic store, the 418.91 driver, and of course the 2080 Ti.

--EDIT-- the problem was that Windows didn't automatically update itself to 1809. It was stuck on build 1803 and needed a manual update. Updating the Windows version now; I expect that will fix it.


----------



## Silent Scone

vmanuelgm said:


> U are probably right. DLSS doesn't work ok. Maybe the next Nvidia driver can improve things.


It's resolution specific. If not running 4K, I'm not sure what the outcome will be.


----------



## kx11

Anthem is open now for Premier members but the servers are offline


----------



## kx11

kot0005 said:


> yeah I am pretty sure it was patched...review code and Steam was 1.0.00 and Epic games one is 1.0.11



I'm on Steam, and yes, I didn't get the 1.0.11 patch.


----------



## vmanuelgm

Silent Scone said:


> It's resolution specific. If not running 4K, I'm not sure what the outcome will be.



People at 4K say the same, blurry images... We must wait for new patches/drivers and see if things get better.

Epic Launcher, in my case, did download the patched version.


----------



## Pepillo

Is it possible that 1440p doesn't have this problem? My eyesight isn't the best, but I can't see any difference between enabling DLSS or not on my 1440p monitor, apart from more FPS...


----------



## vmanuelgm

Both BFV and Metro Exodus are blurry using DLSS right now...



Anthem is now playable, servers are active. Here's a mini-gameplay video with the 2080 Ti...


----------



## Silent Scone

vmanuelgm said:


> Both BFV and Metro Exodus are blurry using DLSS right now...
> 
> 
> 
> Anthem is now playable, servers are active. Here a minigameplay with the 2080Ti...
> 
> 
> https://youtu.be/frpMPp8QqDs


Slow down, one game at a time 

I'll be trying Metro this evening. I tried BFV @ 3440 with mixed results on DLSS, it's certainly not a viable alternative right now.


----------



## ducky083

*Impossible to go to 1.093v*

Hi,

I have a Zotac triple-fan 2080 Ti flashed with the Galax BIOS (380W).

I put +100 on core voltage in MSI Afterburner, but in benchmarks (Superposition or 3DMark) the highest voltage is 1.043v.

How can I reach 1.093v?

Thanks!


----------



## Djreversal

Can someone help me out... My system is an X399 MEG board with an AMD 2990WX. I have dual 2080 Tis right now, and I was thinking of getting a 3rd or possibly a 4th... The problem I noticed is that the 2080 Ti has a port that drops down to the 2nd slot on the back of the computer, and if I were to put a card in the next slot, it wouldn't go in because it would hit that port... do people remove that?



Check the attached image -- on the left side.


----------



## vmanuelgm

Silent Scone said:


> Slow down, one game at a time
> 
> I'll be trying Metro this evening. I tried BFV @ 3440 with mixed results on DLSS, it's certainly not a viable alternative right now.



Would you please record a video and upload it to YouTube?


----------



## kx11

Alright, I got into ANTHEM. Performance is amazing, but fullscreen mode forces HDR for me without an option to disable it other than running borderless mode.

I think DOF/HBAO hit the fps the hardest.


----------



## Kalm_Traveler

Silent Scone said:


> Slow down, one game at a time
> 
> I'll be trying Metro this evening. I tried BFV @ 3440 with mixed results on DLSS, it's certainly not a viable alternative right now.


Does 3440 x 1440 allow you to enable DLSS? I haven't tried on my ultrawide setup yet, but it definitely was greyed out for me on the 2560 x 1440 rig.


----------



## vmanuelgm

Kalm_Traveler said:


> Does 3440 x 1440 allow you to enable DLSS? I haven't tried on my ultrawide setup yet, but it definitely was greyed out for me on the 2560 x 1440 rig.



Yep, it does!


Here's a video comparing DLSS on and off. With DLSS the image is blurry...


----------



## Coldmud

For you guys playing metro: did any of you find a workaround for the horrible input lag? Can't aim for ****, seems like a deadzone controller issue, or just insane lag.

Also locked @ 100fps for me..




J7SC said:


> The key though in this test for me was that the GPU temp only moved up by 1c after initial start during the full TimeSpy test and stayed under 40 c (w/ 22 c ambient). I say 'key' because that is why I built this GPU cooling (separate from CPU system) with a total of 1080mm x 60mm rad space, 2 pumps and 12x 120 fans > hard to heat-soak that baby


Amazing temps, you've inspired me to dust off the old Aquacomputer tower. Gonna do some measurements and finally put this 9900K on water and the Waterforce under some bigger rads; with the newer games I seem to hit 50C often, costing me two 15 MHz ticks..


----------



## vmanuelgm

Coldmud said:


> For you guys playing metro: did any of you find a workaround for the horrible input lag? Can't aim for ****, seems like a deadzone controller issue, or just insane lag.
> 
> Also locked @ 100fps for me..



Maybe u have vsync on???


----------



## Coldmud

vmanuelgm said:


> Maybe u have vsync on???


Yes I do, without sync I get more but can't stand the tearing.


----------



## J7SC

Coldmud said:


> For you guys playing metro: did any of you find a workaround for the horrible input lag? Can't aim for ****, seems like a deadzone controller issue, or just insane lag.
> 
> Also locked @ 100fps for me..
> 
> 
> Amazing temps, you've inspired me* to dustoff the old aquacomputer tower*, gonna do some measurements and finally put this 9900k on water and waterforce under some bigger rads, with the newer games i seem to hit 50c often, costing me 2 15mhz ticks..



...cool (pardon the pun  ) My 2080 Ti WB cards have never been out of the 40 C range ever, be that in the test-bench setup or the new X399 Creation / Threadripper project. The extra rad space, fast pumps and extra fans don't drop temps from idle, but they hold them low much, much longer due to the extra cooling medium volume. 

...as to dusting off old equipment, that is actually how this whole project started..build up several machines out of parts we had in the (software company) store room for years collecting dust, then some upcoming work planning meant getting chummy with Epyc/Threadripper, and possibly RTX productivity features...it is *actually a ton of fun to go back and re-assemble / re-purpose older equipment*


----------



## black06g85

Well, mine killed itself yesterday while idle when I was at work.
Looks like it might have cooked the PCIe slot in the process too....

5 fresh Windows install attempts, 3 different hard drives, 2 other cards tested (GTX 1080 Ti and 680); both get an instant black screen when Windows tries to load the driver, which then apparently corrupts the Windows install too....
Fun times...

A buddy had 3 FE cards do the same thing (I have both of his dead boards at my house to try to revive, with no success either).

Wonderful at this ridiculous price.


----------



## amirhosein

Coldmud said:


> Yes I do, without sync I get more but can't stand the tearing.


That's your problem right there. Your actual fps is definitely higher than 100 fps, and when your actual fps is higher than the vsync fps you get a lot of input lag. Use G-Sync or FreeSync if you can; you can also enable Fast Sync in the Nvidia control panel, which takes care of tearing when the framerate is higher than your monitor's refresh rate, with minimal input lag. Just don't use plain v-sync. Or if you really have to, set an fps limit, either in game if it has the option or in RivaTuner.
Small edit: vsync+freesync/gsync should also be fine as long as you have a framerate cap at or lower than your screen's refresh rate.
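A tiny sketch of the mechanism described above (the queue depth and fps numbers here are illustrative assumptions, not measured values):

```python
# Illustrative model of v-sync back-pressure: when the GPU can render
# faster than the display refreshes, the swap chain fills up, and every
# frame you see has been sitting in that queue for whole refresh cycles.

def vsync_input_lag_ms(render_fps, refresh_hz, queue_depth):
    if render_fps <= refresh_hz:
        # GPU can't outrun the display, so the queue stays empty:
        # lag is roughly one frame time
        return 1000.0 / render_fps
    # GPU outruns the display: the queue fills, and each queued frame
    # waits one full refresh interval before scan-out
    return queue_depth * 1000.0 / refresh_hz

# Uncapped on a 100 Hz panel with a 3-deep queue: 30 ms of extra lag
print(vsync_input_lag_ms(160, 100, 3))
# Capped just below refresh: the queue drains, roughly a 10.5 ms frame time
print(vsync_input_lag_ms(95, 100, 3))
```

This is why the fps-cap advice works: capping below the refresh rate keeps the queue empty, so the lag term collapses to a single frame time.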


----------



## J7SC

black06g85 said:


> well mine killed itself yesterday while at idle when I was at work.
> looks like it might have cooked the pcie slot in the process too....
> 
> 5 fresh windows install attempts, 3 different hard drives, 2 other cards tested (gtx1080ti and 680) both instant black screen when win tries to load the driver then apparently corrupts the windows install too....
> fun times...
> 
> buddy had 3 FE cards do the same thing (I have both his dead boards at my house to try and revive to no success either).
> 
> wonderful at this ridiculous price.


...sorry to hear that. Was it air- or water-cooled ? Wondering about your comment about the PCIe slot potentially being fried.


----------



## Coldmud

amirhosein said:


> That's your problem right there. Your actual fps is definitely higher than 100 fps. So when your actual fps is higher than the vsync fps you get a lot of input lag. Use g-sync, freesync if you can, also you can enable fast sync in the nvidia control panel, which takes care of tearing when the refresh rate is higher than your actual monitor refresh rate with minimal input lag. Just don't use g-sync. Or if you really have to, set an fps limit, either in game if it has the option or in rivatuner.
> small edit: vsync+freesync/gsync should also be fine as long as you have a framerate cap that is at or lower than your screen's refresh rate.


Thanks, but I know how vsync works; I was just asking if anybody with Exodus also got the 100fps cap with G-Sync, V-Sync or Fast Sync on. I tried them all 
Also, this game... it's not just input lag, the controls are bugged when aiming..


----------



## bogdi1988

Shadowdane said:


> Have had my MSI 2080Ti Duke OC for a while now.. finally decided to try a different BIOS on it.
> 
> That 400W MSI Bios did wonders for my card!! Previously could barely get +20Mhz overclock without driver crashes. No issue now pushing +120Mhz offset and broke 14k in 3DMark Timespy!
> Card hits about ~355W now with with the new BIOS, previously it would top out at ~290W.
> 
> https://www.3dmark.com/spy/6197114
> 
> 
> 
> No screenshots as I was doing all this super late last night around 2am.. no way to grab screenshots at work now. lol


Which file did you use for the 400W BIOS? How does that compare to the Galax 380W one?


----------



## kot0005

kx11 said:


> i'm on Steam and yes i didn't get the 1.0.11 patch


https://twitter.com/RobotBrush/status/1096446867624337410



kx11 said:


> alright i got in ANTHEM , performance is amazing but Fullscreeen mode forces HDR for me without an option to disable it other than running borderless mode
> 
> i think DOF/HBAO hit the fps the hardest


Console port, hence no option..


Pretty pissed that DLSS kills HDR in Metro Exodus.


----------



## bp7178

xrb936 said:


> So, as I heard from some technicians, Nvidia has a driver-side power limit for all 20-series cards.
> 
> Which means, for all BIOS which have higher power limit, even for WOC BIOS, which has 2000W power limit, they won't really break the power limit set by Nvidia.
> 
> That's why most users reported that they got WORSE result when they used those BIOS.


There may be some truth to that. In HWInfo, when using either the 450W or the 380W BIOS on my 2080 Ti FE card, the total was always capped right under 350W; the highest value HWInfo captured was always something like 347.295W. This was when running Superposition on the 1080p Extreme preset.

ETA: I tested with a few more programs, and it doesn't seem to be capped. When playing The Division and running Furmark, HWInfo captured a max of 388.963W.



bogdi1988 said:


> Which file did you use for the 400W BIOS? How does that compare to the Galax 380W one?


I actually got better results with the 380W BIOS. My card ran at +170 and bounced between 2130 and 2145 MHz. The highest recorded temp was 49C, but 45-47 was much more typical. When using the 400W BIOS with the limit maxed out at 450W, it crashed at +170, and the clock speed at +160 was lower than I typically see.

https://benchmark.unigine.com/results/rid_0d18245c16404bb7999082336375d66f


----------



## jelome1989

After changing PSUs, my 2080 Ti gives off a loud coil whine under load. I didn't think that was possible. Before it was silent. Has anybody experienced this? Can cables be the culprit for this?

EDIT: Btw, I switched from Seasonic 750w Prime Titanium to Corsair AX1200i.


----------



## bp7178

I don't think the PSU has anything to do with it. 

If my card is cold, it is louder than when warm.


----------



## jelome1989

bp7178 said:


> I don't think the PSU has anything to do with it.
> 
> If my card is cold, it is louder than when warm.


OK, so the cables then - I'll try replacing them. It's definitely not the temps, since my GPU is running a lot cooler now.


----------



## J7SC

bp7178 said:


> There may be some truth to that. In HWInfo, when using either the 450w or 380w bios on my 2080Ti FE card, the total was always capped right under 350W. The highest value HWInfo captured was always like 347.295w or something to that effect. This was when running Superposition on 1080 Extreme.
> 
> *ETA: **I tested with a few more programs, and it doesn't seem to be capped*. When playing The Division and running Furmark, HWInfo captured a max of 388.963w.
> 
> cut
> 
> https://benchmark.unigine.com/results/rid_0d18245c16404bb7999082336375d66f



I wouldn't be at all surprised if some driver trickery could be sneaked in, but yeah, I regularly see 375W to 379W on my stock Aorus 2080 Ti WB BIOS, per the blue circles below - and that is the allowable max for 2x 8-pin 12V (+ PCIe slot power). As discussed before, 2x 8-pin can physically handle (much) more, but that is the industry standard they have to adhere to when selling to the public (think of the lawsuits if a card started a fire and came from the factory non-compliant...). This is also the reason why the higher-wattage MSI Trio and Lightning BIOSes are written for 2x 8-pin + 1x 6-pin / 3x 8-pin power inputs.

*>>> Edit:* *On another topic*, I'm getting REALLY interested in *Metro Exodus*... anyone here with dual 2080 Tis who can report on the NVLink/SLI profile and their experience with this game? Thanks


----------



## ESRCJ

Any leads on an XOC BIOS? It'd be fun to push the voltage a little past 1.093V and not be as power-limited for some of these benchmarks.


----------



## VPII

J7SC said:


> *@iamjanco @Coldmud @VPII* and fellow 2080 TIers - finally woke the thing up in its final configuration, and_ '''It's Alive'''_ . Before, I was running the GPUs on a Z170 test-bench and the 2950X with some older GTX Classies.
> 
> Initially had a scare this afternoon when I turned it on for the first time > black screen :sad-smile :sozo: :headscrat ...
> 
> I was using an older test monitor w/ ancient HDMI converter cable...worked on the Classies. Anyhow, once I switched monitors and cables > voila  . So now I can finally get to the final custom touches I planned out since this was a tricky build w/ 5x XSPC 360x60 rads, 4x MPC655 pumps etc etc, never mind mobo swap w/ RMA.
> 
> 'Fully-lit' images when finished and the build-log goes up... ...and did I mention that I like a bit of RGB, but this, this will take some getting used to


Sounds cool... I steer clear of RGB as much as possible, except for the colour-changing lights on my Asus Crosshair VII mobo... mind you, I can switch them off... well, let me go do that.


----------



## JustinThyme

Djreversal said:


> can someone help me out... I was just looking at my system which is a x399 MEG board with a AMD 2990wx, i have dual 2080ti's right now and i was thinking of getting a 3rd or possibly a 4th... the problem i noticed is the 2080ti has a PORT that drops down to the 2nd slot on the back of the computer, and if i was to put a card in the next slot it wouldnt go in because it would hit that port... do people remove that???
> 
> 
> 
> check the attached image -- on the left side


Any specific reason you want more than 2 cards? Are you mining? Running a specific AI project? Nothing supports more than two, with the exception of benchmarks, and there are no NVLink bridges for more than two that I know of. I was actually tinkering with the idea of adding a 3rd 1080 Ti instead of buying 2080 Ti cards, and sat on the idea until I got an official response from Nvidia that they won't support more than two.


----------



## JustinThyme

jelome1989 said:


> After changing PSUs, my 2080 Ti gives off a loud coil whine under load. I didn't think that was possible. Before it was silent. Has anybody experienced this? Can cables be the culprit for this?
> 
> EDIT: Btw, I switched from Seasonic 750w Prime Titanium to Corsair AX1200i.


First, are you sure it's the GPU and not the PSU? Cables can also produce the same effect. Are you using smaller-gauge cables?

Whoever invented the term coil whine needs a boot in their backside; it's a bit misleading. There is no whine about it, only people whining about it. The scratching sound, like an old spinner HDD, is high-frequency vibration as current passes through the chokes.

Have you checked what your GPU voltage is doing under load, and more specifically the 12V rail getting to the card? For a constant-power load, as voltage goes down, current goes up. Current is the force behind the phenomenon, and the purpose of the chokes is to smooth it.
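The voltage-vs-current point works out with simple arithmetic (illustrative numbers, not measurements from any particular card):

```python
def current_draw(power_w, rail_v):
    """For a roughly constant-power load, I = P / V (power law, not fixed-resistance Ohm's law)."""
    return power_w / rail_v

# A 300 W card on a nominal 12 V rail vs. a rail sagging to 11.4 V:
i_nominal = current_draw(300, 12.0)   # 25.0 A
i_sagged = current_draw(300, 11.4)   # ~26.3 A
# The ~5% voltage sag forces ~5% more current through the chokes,
# which is exactly the vibration-driving current described above.
```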


----------



## ALSTER868

del


----------



## bigbangSG

Hi, anyone else having problems with their 2080 Ti? My computer shut down suddenly while I was playing games, and I was unable to turn it back on until I cycled the power. I really need some help here.


----------



## jelome1989

JustinThyme said:


> First are you sure its the GPU and not the PSU? Cables can also produce the same effect. Are you using smaller cables?
> 
> Whoever invented the term coil whine needs a boot in their backside. A bit misleading. There is no whine about it, only people whining about it. Scratching sound like an old spinner hdd is high frequency vibrations as current passes through the chokes.
> 
> Have you checked what your GPU voltage is doing under load and more specifically the 12V rails getting to the card. Simple ohms law, voltage goes down and current goes up. Current is the force behind the phenomena and the purpose of the chokes is to smooth current.


Yeah, I double checked to be sure. I'm using Cablemod Pro series cables. I still haven't replaced the cables since I'm too lazy. Also I was testing on an open case earlier - all panels off, but now that I put them back the sound was barely audible, surprisingly. So I guess I'd leave it at that.


----------



## iamjanco

VPII said:


> Sounds cool.... I steer clear of RGB as much as possible,except for these lights changing colours on my Asus Crasshair VII mobo...... mind you I can switch it off.... well let me do it.


Yeah, RGB doesn't do a heck of a lot for me either. A little accent lighting, at most, keyed to the colors of a build does the trick at my end. Not that some RGB builds don't look nice, mind you; just that some look a lot like a three-ring circus in a seedier part of town.

Just noted that it looks like @J7SC pinged me above, but for some reason the mention tags got dropped.


----------



## J7SC

VPII said:


> Sounds cool.... I steer clear of RGB as much as possible,except for these lights changing colours on my Asus Crasshair VII mobo...... mind you I can switch it off.... well let me do it.





iamjanco said:


> Yeah, RGB doesn't do a heck of a lot for me either. A little accent lighting at most keyed to the color of a build does the trick at my end. Not that some RGB build don't look nice, mind you; just that some do look a lot like a three-ring circus in a seedier part of town.
> 
> Just noted that it looks like @J7SC pinged me above, but *for some reason the mention tags got dropped*.



...maybe you got banned :stun:  

On the RGB, I can deal with the mobo and RAM 'blink-da-blinky' via the MSI BIOS. As for the GPUs and running the Aorus Engine software for their RGB, it (together with Asus Aura) was written up before as a potential security threat; recent updates presumably dealt with that, but I still don't like all these things loading and running in Windows. 

I can either physically disconnect the Aorus 2080 Ti WB RGB wires, or just load the 380W Galax BIOS. I don't really need the extra watts (per above), and while it would disable some output ports (I don't need all of them anyway), it would also disable the RGB on the GPUs... besides, an extra 15-20W freed up from the RGB wouldn't hurt either. 

Once it is all done, I can hardly wait to play Metro: Exodus - while this is not a pure gamer build and has some serious productivity functions ahead, dual 2080 TIs should make games like Metro: Ex very nice looking and performing w/ RTX, given what I have seen so far by 'Igor's lab' (visual comp at 5min+ below) and others, such as 'Hardware Unboxed'.


----------



## Edge0fsanity

J7SC said:


> Once it is all done, I can hardly wait to play Metro: Exodus - while this is not a pure gamer build and has some serious productivity functions ahead, dual 2080 TIs should make games like Metro: Ex very nice looking and performing w/ RTX, given what I have seen so far by 'Igor's lab' (visual comp at 5min+ below) and others, such as 'Hardware Unboxed'.
> 
> https://www.youtube.com/watch?v=acku0ArXvB4
> 
> https://www.youtube.com/watch?v=YH5QNQngL8A


No SLI support in Metro Exodus. Makes me not want to play the game, since I have two 2080 Tis in SLI. I could get playable framerates with RTX enabled and DLSS off if SLI were supported. Instead I'm stuck with no RTX, and I can't stand DLSS making everything blurry, so that's off too.


----------



## bogdi1988

kot0005 said:


> https://twitter.com/RobotBrush/status/1096446867624337410
> 
> 
> 
> Console port, hence why no option..
> 
> 
> Pretty Pissed that DLSS kills HDR in Metro Exodus.


Yup! Metro with DLSS + HDR looks like absolute garbage. Way too washed out!


----------



## BigMack70

Edge0fsanity said:


> No sli support in metro exodus. Makes me not want to play the game since i have 2 2080tis in sli. I could get playable framerates with rtx enabled and dlss off if sli was supported. Instead i'm stuck with no rtx and i can't stand dlss making everything blurry so thats off too.


It's sad, but SLI is pretty much in the dumpster for gaming; it's benchmark tech and Nvidia appears to have done that fairly intentionally. 

With a controller - not mouse - I find the game playable on a single card at 4k. Dunno if I like the variable framerate or just locking it to 30, which it holds 99% of the time; I wish my 4k TV had gsync. 

DLSS is incredibly ugly. It's basically everything bad about FXAA cranked up to eleven... so blurry. It's the biggest flop of the RTX series. 70% resolution scaling looks significantly better than DLSS and I think it might even perform a bit better too.


----------



## J7SC

Edge0fsanity said:


> No sli support in metro exodus. Makes me not want to play the game since i have 2 2080tis in sli. I could get playable framerates with rtx enabled and dlss off if sli was supported. Instead i'm stuck with no rtx and i can't stand dlss making everything blurry so thats off too.


That sucks  ...and it's a bit weird, as the 4K benchmarks with everything maxed seem to bring even a single 2080 Ti down to low overall frame rates and 1% frame-time readings -- are there _at least some plans by 4A Games_ to introduce an SLI profile for Metro: Exodus later? Thanks


----------



## JMCB

Anyone have any luck enabling the RGB on a Founder's Edition card yet? I don't like the green and want it to match my all red setup.


----------



## Coldmud

J7SC said:


> ...may be you got banned :stun:
> 
> I can either physically disconnect the Aorus 2080 Ti WB RGB wires - or: just load the 380w Galax Bios. I don't really need the extra watts (per above), but while it would disable some output ports (I don't need all of them anyway) that would also disable the RGB on the GPUs...besides, an extra 15-20w freed up from the RBG wouldn't actually hurt either.
> 
> 
> 
> Once it is all done, I can hardly wait to play Metro: Exodus - while this is not a pure gamer build and has some serious productivity functions ahead, dual 2080 TIs should make games like Metro: Ex very nice looking and performing w/ RTX, given what I have seen so far by 'Igor's lab' (visual comp at 5min+ below) and others, such as 'Hardware Unboxed'.
> 
> 
> 
> That sucks  ...and a bit weird as the 4K benchmarks w/ everything on max seem to bring even a single 2080 Ti down to lower overall frame rates and 1% FT readings -- are there _at least some plans by Epic games_ to introduce a SLI profile for Metro: Exodus later ?


The RGB wave effect still works with the 380W Galax BIOS. The only thing it disables is outputs; it also messes up fan behaviour and the LEDs on the fans, if you had those.
You really think the RGB takes 20W? Wish I had a wattmeter lying around.

Too bad Metro is a mess right now; mouse aim is floaty and horrible, and there is a grey shade over the screen so I can't see **** in the dark. Waiting for some patches, really.

The latest iteration of the 4A engine was used, same as Metro 2033 and Last Light, which both supported SLI. Who knows what game Nvidia is playing? Anthem also seemed to have SLI support in the first alphas, as the first demos ran on two 1080 Tis; now, poof, the game has gone gold and SLI support is gone..


----------



## Vidati

bigbangSG said:


> Hi any one 2080ti having problem? My Com shutdown suddenly when I was playing games . And I was unable to on it back until I off and on back the power. Really nid some help here.


I have the same thing. I have a Gigabyte Gaming OC, and I think their software is the problem: I have been losing power and then needing to unplug and plug it back in again.
Delete all Gigabyte software if you have any, including RGB Fusion.


----------



## pewpewlazer

Coldmud said:


> rgb wave effect still works with the 380 galax bios. Only thing it disables are outputs, it messes up fan behaviour and leds on them if you had those.
> You really think rgb takes 20w? wish I had a watometer laying around.
> 
> To bad metro is a mess right now, mouse aim is floaty and horrible, there is a grey shade over the screen where I can't see **** in the dark. Waiting for some patches really.
> 
> Latest iteration of the 4A engine was used, same as metro 2033 and last light, which all supported sli. Who knows what game nvidia is playing? Anthem also seemed to have sli support in the first alphas as the first demos ran on 2 1080tis, now poof game is golden and sli support is gone..


The RGB LEDs on my EVGA Hydro Copper block use <1 watt. Turning them off basically doesn't change idle power consumption at all.

I've been playing Metro the past 2 days and haven't had any issues at all. Sure the gameplay is buggy at times, but that's par for the course with the Metro games unfortunately. Mouse feels like crap, but what else can you expect from vsync and 40-60 fps?

Why Metro Exodus doesn't support SLI is a mystery. Also a mystery why BF5 doesn't have official SLI support yet either. Regardless, Nvidia's lack of SLI support is just a big "screw you" to the people on previous gen SLI setups. Ray tracing is DX12 only, and DX12 "SLI" support is entirely up to the developer.


----------



## J7SC

Coldmud said:


> rgb wave effect still works with the 380 galax bios. Only thing it disables are outputs, it messes up fan behaviour and leds on them if you had those.
> You really think rgb takes 20w? wish I had a watometer laying around.
> 
> To bad metro is a mess right now, mouse aim is floaty and horrible, there is a grey shade over the screen where I can't see **** in the dark. Waiting for some patches really.
> 
> Latest iteration of the 4A engine was used, same as metro 2033 and last light, which all supported sli. Who knows what game nvidia is playing? Anthem also seemed to have sli support in the first alphas as the first demos ran on 2 1080tis, now poof game is golden and sli support is gone..



...re. RGB, it's not_ that_ important. I can either disconnect the wires (which I can actually see), put a VRM helper fan in front of most of it even if they do not need it, or cover it with self-adhesive tape... typically, LEDs use about 3 watts per foot on a strip, so the wattage is not the issue - fitting in with the rest of the build, rather than looking like a carnival on steroids, is...

re. BIOS, what is strange is that my two Aorus 2080 TIs are within 2 (!) sequential serial numbers, yet one has a slightly different BIOS version # (though the same release day) than the other, according to GPU-Z :headscrat . Both cards clock within 20-30 MHz of each other on the GPU (#1 > #2), while on VRAM, GPU #2 is a bit faster than GPU #1... all the same, before I flash the Galax 380W BIOS, I'll flash the stock Aorus BIOS from GPU #1 onto GPU #2 and see what is what 

I haven't had much time to really delve into gaming yet with this RTX SLI or any other setup for that matter for some time, but 'back in the day', I used to use various NVidia Inspector 'master SLI profiles' on a trial and error basis even for quad-SLI (often, the 3D Mark SLI profiles worked reasonably well on games they were not designed for & sometimes I got lucky). With RTX being still very new for developers, it may not be that easy this time around (yet), but I'm encouraged that the 'Unreal' game engine just added ray tracing. Hope springs eternal...

BTW, what is a good gaming mouse for $100 or less  ? We just use the typical Microsoft and Logitech wired USB mice en masse in the office and the home setup.


----------



## Gustavoh

Hello everybody, I need some help with temps. I have an Asus Dual OC card with an EK Vector waterblock hooked up to a single loop (two 280mm rads and one 120mm rad, Noctua fans). The CPU is a 9700K OC'd to 5 GHz. I've been running the GPU at 2040 MHz (Galax BIOS, max power limit, +120 on core and +500 on mem). 

With AC Odyssey, I was getting GPU temps in the high 40s (46-49C), with water temps at about 36C (a 10-13C delta) and room temp at about 23C. With Anthem (which really maxes out the GPU, both in usage and power consumption), however, GPU temps went as high as 59C, with the water temp reaching 40C (a delta of up to 19C). 

I understand that by running a fairly silent fan profile my water temps will settle at a relatively high point (34-38C), but still, is a delta of 15 to 19C between the GPU (operating at its very limit of 380W) and the water normal? Or should I reapply thermal paste and reseat the waterblock?


----------



## ESRCJ

Gustavoh said:


> Hello everybody, need some help with temps. I have an Asus Dual OC card with an EK Vector waterblock hooked up to a single loop (two 280mm and one 120 mm rads, Noctua fans). CPU is a 9700k oc'd to 5 GHz. I've been running the gpu at 2.040 GHz (Galax bios, max power limit, +120 on core and +500 on mem).
> 
> With AC Odyssey, I was getting gpu temps in the high 40's (46-49 C), with water temps at about 36C (a 10 to 13 delta) (room temp at about 23C). With Anthem (which has really maxed out the GPU both in usage and power consumption), however, gpu temps went as high as 59 C, with water temp reaching 40 C (a delta of up to 19C).
> 
> I understand that by running a fairly silent fan profile, my water temps will settle at a relatively high point (34-38C), but still, is a delta of 15 to 19C between GPU (operating at its very limit of 380W) and water normal? Or should I reapply thermal paste and reset the waterblock?


Based on your water temps, your GPU temps are not out of the ordinary. Running the card at 380W is obviously going to result in higher temps as well. You can always try to reapply the thermal paste and reseat the waterblock, making sure everything is where it needs to be. However, I wouldn't expect a miracle. What brand and thickness are your rads? It's all about how much heat they can dissipate in aggregate. At 36C in water temps, your rad cooling capacity is your bottleneck.
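One rough way to sanity-check those deltas is the die-to-water thermal resistance, deltaT divided by power. A back-of-the-envelope sketch using the numbers from the post (the ~250 W figure for the lighter AC Odyssey load is my assumption, since it wasn't reported):

```python
def die_to_water_resistance(gpu_temp_c, water_temp_c, power_w):
    """Die-to-water thermal resistance R = deltaT / P, in C per watt."""
    return (gpu_temp_c - water_temp_c) / power_w

# Anthem case from the post: 59C die, 40C water at the 380 W limit:
r_heavy = die_to_water_resistance(59, 40, 380)   # 0.05 C/W
# AC Odyssey case: ~47C die, 36C water, at an assumed ~250 W:
r_light = die_to_water_resistance(47, 36, 250)   # ~0.044 C/W
# Similar resistance in both cases suggests the block mount is fine:
# the bigger delta at 380 W is simply more power, not a bad paste job.
```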


----------



## Gustavoh

gridironcpj said:


> Based on your water temps, your GPU temps are not out of the ordinary. Running the card at 380W is obviously going to result in higher temps as well. You can always try to reapply the thermal paste and reseat the waterblock, making sure everything is where it needs to be. However, I wouldn't expect a miracle. What brand and thickness are your rads? It's all about how much heat they can dissipate in aggregate. At 36C in water temps, your rad cooling capacity is your bottleneck.


Thanks for the feedback. All my rads are EK Coolstreams: one 280 and one 120mm are SE (slim) and the other 280mm is a CE (45mm thick, iirc). I have the 280 CE at the front, pulling air in, the 280 SE at the top pushing air out, and the 120 SE at the back, pushing air out (case is an NZXT h700i). In this config, both slim rads at the top and back are exhausting prewarmed air from the thicker rad at the front. Maybe I should switch the orientation of the fans on the top or back rad (pulling cooler air from the outside as well, with more positive pressure inside the case)? Or make all three rads push air out (that would give me a lot of negative pressure inside the case, which would probably result in more dust being pulled in)?


----------



## ESRCJ

Gustavoh said:


> Thanks for the feedback. All my rads are EK Coolstreams: one 280 and one 120mm are SE (slim) and the other 280mm is a CE (45mm thick, iirc). I have the 280 CE at the front, pulling air in, the 280 SE at the top pushing air out, and the 120 SE at the back, pushing air out (case is an NZXT h700i). In this config, both slim rads at the top and back are exhausting prewarmed air from the thicker rad at the front. Maybe I should switch the orientation of the fans on the top or back rad (pulling cooler air from the outside as well, with more positive pressure inside the case)? Or make all three rads push air out (that would give me a lot of negative pressure inside the case, which would probably result in more dust being pulled in)?


I think your fan configuration is fine. How are your CPU temps? As I said before though, the water temps indicate that radiator cooling capacity could be your bottleneck. In that case, you could increase the fan speed to improve this. Also, if your pump is running at a low RPM, you could increase that to help a little. The 2080 Ti is a harder card to cool than any of the GP102 cards from last gen, especially when pushing 380W.


----------



## mackanz

Did anyone succeed in flashing a non-A chip, reference-design 2080 Ti to a BIOS with a higher limit?
I got a poor Windforce (non-OC) 2080 Ti with a 109% power limit that clocks pretty well in itself, but it's clearly held back by its BIOS.


----------



## Nicklas0912

I got an RTX 2080 Ti XC Ultra.

Which BIOS is best for max overclocking?

I see so many 380W BIOSes.

Is it the GALAX RTX 2080 Ti OC Reference PCB (2x8-Pin) 300W x 126% Power Target BIOS (380W)?


----------



## Dr Mad

mackanz said:


> Did anyone succeed to flash a non-a chip, reference design 2080 ti, to a bios with a higher limit?
> I got a poor Windforce (non oc) 2080 Ti with 109% that clocks pretty good in itself, but clearly limited with its bios.


Unless I'm wrong, it seems it's still not possible to flash a non-A chip; the only solution is to do the shunt mod.

Otherwise, I see in the OP that nvflash was updated to support FE cards. Is a BIOS flash really possible on an FE 2080 Ti?


----------



## MunneY

Dr Mad said:


> Unless I'm wrong, it seems it's still not possible to flash non A chip. Only solution is to do shunt mod.
> 
> Otherwise, I see in the OP that nvflash was updated to support FE cards. Is the bios flash on FE 2080ti card really possible??


I flashed my 2 FE cards with the 380W 127% power limit BIOS. No trouble at all once I remembered how to use NVFlash LOL


----------



## J7SC

MunneY said:


> I flashed my 2 FE cards with the 380w 127% power limit cards. Not had any trouble *once i remembered how to use NVFlash LOL*


 
...yup, it used to be second nature, but the last time I flashed a GPU BIOS (with a multi-BIOS switch, to boot) was many years ago, and never with Win 10 as the OS :sad-smile


----------



## CptSpig

Dr Mad said:


> Unless I'm wrong, it seems it's still not possible to flash non A chip. Only solution is to do shunt mod.
> 
> Otherwise, I see in the OP that nvflash was updated to support FE cards. Is the bios flash on FE 2080ti card really possible??


Yes, I have an Nvidia 2080 Ti FE with the 380W BIOS. Use the FE-modified nvflash in the OP. See the benchmarks below for the results with the 380W BIOS.

http://www.3dmark.com/spy/5870453
http://www.3dmark.com/fs/17951194
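For those asking about commands: with the modified nvflash, the typical sequence is roughly the following, run from an elevated command prompt. The .rom filenames are placeholders, the executable may be named `nvflash` or `nvflash64` depending on the build, and flags can vary between nvflash versions - always keep the backup file somewhere safe.

```shell
# 1) Back up the existing BIOS first
nvflash64 --save backup.rom

# 2) Disable the EEPROM write protection
nvflash64 --protectoff

# 3) Flash the new BIOS; -6 overrides the PCI subsystem ID mismatch check,
#    which is needed when cross-flashing e.g. the Galax BIOS onto another board
nvflash64 -6 galax380.rom
```

Reboot afterwards and verify the power limit in GPU-Z before overclocking.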


----------



## mackanz

Thanks,

What were the exact commands to use?


----------



## bigbangSG

Vidati said:


> bigbangSG said:
> 
> 
> 
> Hi any one 2080ti having problem? My Com shutdown suddenly when I was playing games . And I was unable to on it back until I off and on back the power. Really nid some help here.
> 
> 
> 
> I have the same thing, i have Gigabyte Gaming OC and i think that their software is the problem. i have been experiencing loosing power and then i need to unplug and plug again.
> Delete all Gigabyte software if you have any including RGB Fusion.




So you deleted their software and it's OK now?


----------



## Bighouse

Well, I've been running my two NVidia FE RTX2080ti cards for over two weeks, so I decided on this holiday weekend it was time to tear my system apart and extend the watercooling to these cards.

Long story short, the screws that were removed from the EKWB GPU cooler terminals are too short to use for the Terminal X bridge between the two cards. SO, I have to demote my system back to single air-cooled GTX1080 FTW. I'll survive. The cards were working well and I expect them to run cooler once I reinstall them.

To clean up my installation, I'm trying to hide the cable that runs to the EK RTX terminal covers. My cards are installed horizontally, so I'll be staring at the backplates, not the plexiglass sides of the cards. I could just dangle the wires in the space below each card, but noticed there is plenty of space between the PC board and the plexiglass to run the thin LED cable. It would make for a very clean install, but I'm concerned that the wire may interfere, or melt, if it comes into contact with hot elements on the PC board.

Soooo, any electronics experts out there, please take a look at the image in the link below and let me know if this could be trouble, or if it will end up being fine to do this.

https://imgur.com/B1Jd1iE


----------



## Sheyster

Nicklas0912 said:


> I got a RTX 2080 TI XC Ultra.
> 
> Wich bioos is the best for max overclocking.
> 
> I see so many 380W bios,
> 
> is it GALAX RTX 2080 Ti OC Reference PCB (2x8-Pin) 300W x 126% Power Target BIOS (380W) ?



That's a reference PCB so yes you should use the 380w GALAX BIOS listed in the OP.



MunneY said:


> I flashed my 2 FE cards with the 380w 127% power limit cards. Not had any trouble once i remembered how to use NVFlash LOL



Glad you got it sorted out. :thumb:


----------



## Vidati

bigbangSG said:


> So u delete their software and it ok nw?


Correct, give it a try.


----------



## KCDC

Hello friends! 
Looking at the possibility of joining the 2080ti club. 
This would be for my 3d/mograph/vfx work, mostly redshift gpu rendering, and of course gaming here and there on my triple monitor setup and streaming to my 4k tv.
Currently running 2 1080ti FE cards on water. They would go into a render node.
Planning to run water on two 2080 tis as well.
Learning from my past mistake, I won't go with FE cards again in hopes of better clocks.
Leaning towards the EVGA FTW3 with waterblock preinstalled.
Wondering if it would be worth waiting for the kingpin variant, or if I'm better off getting a specific card and putting my own blocks on it. Probably the phanteks or ekwb vector block if that's the better route. 
What do y'all suggest? Still combing through this thread.


EDIT: formatting, OCN seems to put a ton of spaces between my line breaks...


----------



## JustinThyme

Bighouse said:


> Well, I've been running my two NVidia FE RTX2080ti cards for over two weeks, so I decided on this holiday weekend it was time to tear my system apart and extend the watercooling to these cards.
> 
> Long story short, the screws that were removed from the EKWB GPU cooler terminals are too short to use for the Terminal X bridge between the two cards. SO, I have to demote my system back to single air-cooled GTX1080 FTW. I'll survive. The cards were working well and I expect them to run cooler once I reinstall them.
> 
> To clean up my installation, I'm trying to hide the cable that runs to the EK RTX terminal covers. My cards are installed horizontally, so I'll be staring at the backplates, not the plexiglass sides of the cards. I could just dangle the wires in the space below each card, but noticed there is plenty of space between the PC board and the plexiglass to run the thin LED cable. It would make for a very clean install, but I'm concerned that the wire may interfere, or melt, if it comes into contact with hot elements on the PC board.
> 
> Soooo, any electronics experts out there, please take a look at the image in the link below and let me know if this could be trouble, or if it will end up being fine to do this.
> 
> https://imgur.com/B1Jd1iE


No worries at all, I ran my EK 1080 Ti cards like that for over a year. Even going vertical can be kept from being an aesthetics issue by simply applying window tint to the section of the acrylic where the wires are routed. I'm not a big fan of how EK executed the RGB. The Phanteks block's RGB wiring exits at the right rear corner (when viewed as installed), so it runs right into the wire-management chases of pretty much any case; I can't even see mine regardless of the angle.

Which, BTW, EK is now taking preorders for the Strix blocks, to start shipping 2/25/19. I've got Phanteks blocks which are performing great, so I won't be going that route. The only EK stuff I have left in my system is the rads and the M.2 heatsink. I'll reconsider whether to stay with Phanteks when Watercool releases their blocks, and whether they have a significant performance advantage. I'd like to go vertical with both cards but can't source a 2-slot NVLink bridge. From what I gather the Quadro RTX 6000 bridge should work, but it's sold out and staying that way. I blame miners!!!


----------



## bogdi1988

JustinThyme said:


> No worries at all, I ran my EK 1080 Ti cards like that for over a year. Even going vertical can be made a non-issue aesthetically by simply applying window tint to the section of the acrylic where the wires will be routed. Not a big fan of how EK executed the RGB. The Phanteks block's RGB wiring exits at the right rear corner when viewed as installed, so it goes right into the wire-management chases of pretty much any case. Can't even see mine regardless of the angle.
> 
> Which BTW, EK is now taking preorders for the Strix blocks, to start shipping 2/25/19. I've got Phanteks blocks which are performing great, so I won't be going that route. The only EK stuff I have left in my system is the rads and the M.2 heatsink. I'll reconsider whether to stay with Phanteks when Watercool releases their blocks, and whether those have a significant performance advantage. I'd like to go vertical with both cards but can't source a 2-slot NVLink bridge. From what I gather the Quadro RTX 6000 bridge should work, but it's sold out and staying that way. I blame miners!!!


Just ordered mine from here: https://www.sharbor.com/nvlink-hb-2-slot-for-quadro-rtx-6000-quadro-rtx-8000.html


----------



## Canson

Guys, please, I need help with my Asus Strix 2080 Ti Advanced Gaming. My Quiet BIOS mode doesn't work as it should: the fans run at 28% / 1000 RPM at idle and never go to 0 RPM.
I flashed the card before with the Galax 380W BIOS. That didn't give me any extra performance; it even somehow decreased it.

I tried to flash the card back with the original BIOS I saved beforehand. I flashed both the performance BIOS and the quiet BIOS. Same problem still. On the Q-BIOS switch the fans run exactly like on the P-BIOS switch (I also tried the original BIOS from the Strix OC version).



Please guys, do you know how I can fix this?

And do you know if the two BIOS switches ship with different BIOS versions? I never checked before flashing and messing with the card,
so I don't know if the Q-BIOS switch had a different BIOS version. If so, I need to get that version somehow.


----------



## Kimir

Here I am scrolling through the 2080/2080 Ti owner's clubs just because I saw they give Anthem with those.. shame on me.
And I thought 750€ was expensive back then for the 780 Ti/980 KPE and 980 Ti HOF I got; damn, those RTX are steep! 1100-1500€, eww, but the MSI Sea Hawk EK are sexy.


----------



## Duskfall

mackanz said:


> Did anyone succeed to flash a non-a chip, reference design 2080 ti, to a bios with a higher limit?
> I got a poor Windforce (non oc) 2080 Ti with 109% that clocks pretty good in itself, but clearly limited with its bios.


As far as I know you can't, since I wanted this as well; the FAQ clearly states that non-A chips don't accept a BIOS from A chips. If you find a way though, plz share.


----------



## toncij

Is there a confirmed card with Samsung memory aside from the MSI GeForce RTX 2080 Ti Lightning Z?


----------



## VPII

My Palit Rtx 2080 ti Gaming Pro OC has samsung memory.

Sent from my SM-G960F using Tapatalk


----------



## junglechocolate

Anybody here own the EVGA Black 2080 Ti, aka the cheapest 2080 Ti?
https://www.newegg.com/Product/Product.aspx?Item=N82E16814487418

What has your experience been like? Not looking to OC, tbh. Just need something that can run my games at 45 to 60 fps at 4K. I was actually looking at getting a 1080 Ti, but the prices have been climbing so much that spending $500 more for the "cheap" 2080 Ti is something I am considering.


----------



## Coldmud

J7SC said:


> ...re. RGB, it's not_ that_ important. I can either disconnect the wires (which I can actually see), or put a VRM helper fan in front of most of it even if they do not need it, or cover it with self-adhesive tape...typically, LED's use about 3 watt per foot on a strip, so the wattage is not the issue - fitting in with the rest of the build rather than looking like a carnival on steroids is...
> 
> re. Bios, what is strange is that both my Aorus 2080 TIs are within 2 (!) sequential numbers on the serial number - yet one has a slightly different Bios version # (though same release day) than the other according to GPUz :headscrat . Yet both cards clock within 20-30 MHz of each other on GPU (#1 > #2) while on VRAM, GPU#2 is a bit faster than GPU#1...all the same, before I flash the Galax 380w Bios, I'll flash the stock Aorus Bios from GPU #1 onto GPU #2 and see what is what
> 
> I haven't had much time to really delve into gaming yet with this RTX SLI or any other setup for that matter for some time, but 'back in the day', I used to use various NVidia Inspector 'master SLI profiles' on a trial and error basis even for quad-SLI (often, the 3D Mark SLI profiles worked reasonably well on games they were not designed for & sometimes I got lucky). With RTX being still very new for developers, it may not be that easy this time around (yet), but I'm encouraged that the 'Unreal' game engine just added ray tracing. Hope springs eternal...
> 
> BTW, what is a good gaming mouse for $100 or less  ? We just use the typical Microsoft and Logitech wired USB mice en masse in the office and the home setup.


You can always try the old metro profiles with AFR. Rocking the logitech G502, no regrets.



Canson said:


> Guys, please, I need help with my Asus Strix 2080 Ti Advanced Gaming. My Quiet BIOS mode doesn't work as it should: the fans run at 28% / 1000 RPM at idle and never go to 0 RPM.
> I flashed the card before with the Galax 380W BIOS. That didn't give me any extra performance; it even somehow decreased it.
> 
> I tried to flash the card back with the original BIOS I saved beforehand. I flashed both the performance BIOS and the quiet BIOS. Same problem still. On the Q-BIOS switch the fans run exactly like on the P-BIOS switch (I also tried the original BIOS from the Strix OC version).
> 
> Please guys, do you know how I can fix this?
> 
> And do you know if the two BIOS switches ship with different BIOS versions? I never checked before flashing and messing with the card,
> so I don't know if the Q-BIOS switch had a different BIOS version. If so, I need to get that version somehow.


So your problem is that the fans spin at idle? Can you alter the fan curve in your OC software? You can set it so that the fans only ramp up at a desired temperature.
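The zero-RPM behaviour Canson is after is just a fan curve with a dead zone at the bottom. A minimal sketch of the idea in Python; the thresholds and duty values here are illustrative assumptions, not Asus's actual Quiet-BIOS curve:

```python
def fan_speed(temp_c: float) -> int:
    """Return a fan duty cycle (percent) for a given GPU temperature."""
    SPIN_UP_TEMP = 55  # fans stay at 0 RPM below this (illustrative value)
    MAX_TEMP = 84      # 100% duty from here on (illustrative value)
    MIN_DUTY = 28      # duty at the moment the fans spin up (illustrative)
    if temp_c < SPIN_UP_TEMP:
        return 0                      # zero-RPM mode
    if temp_c >= MAX_TEMP:
        return 100
    # linear ramp from MIN_DUTY to 100% between the two thresholds
    frac = (temp_c - SPIN_UP_TEMP) / (MAX_TEMP - SPIN_UP_TEMP)
    return round(MIN_DUTY + (100 - MIN_DUTY) * frac)

for t in (40, 55, 70, 84):
    print(f"{t}C -> {fan_speed(t)}%")
```

If the card is stuck at 28% regardless of temperature, it behaves as if the dead zone below `SPIN_UP_TEMP` were missing, which is why this smells like a BIOS/software fan-curve setting rather than a hardware fault.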


----------



## Vidati

toncij said:


> Is there a confirmed card with Samsung memory aside from the MSI GeForce RTX 2080 Ti Lightning Z?


My Gigabyte 2080 Ti Gaming OC has Samsung memory.


----------



## mackanz

Even though the window for an exchange closed a long time ago, my dealer offered me an exchange to an Inno3D Black with AIO, which I believe is an "A" chip. Hexus gave it a splendid review. Man, Gigabyte have really messed up their 2080 lineup this time. The fan issue is on all cards and the new BIOS doesn't seem to help. They are also extremely conservative with their Windforce BIOS, at least on the Ti. Their support on the issue is non-existent as well.

I do not recommend Gigabyte this generation.


----------



## skingun

toncij said:


> Is there a confirmed card with Samsung memory aside from the MSI GeForce RTX 2080 Ti Lightning Z?


Both of my Founders Edition 2080 Ti cards (2x) have Samsung memory. Originally I had cards with Micron memory but had to RMA both cards and the replacements arrived with Samsung chips.


----------



## Martin778

Aorus cards also have Samsung VRAM but their overall cooling solution is poor.

Any news on a higher PL BIOS for the Lightning Z? MSI seems to hide their heads in sand.


----------



## J7SC

Coldmud said:


> You can always try the old metro profiles with AFR. Rocking the logitech G502, no regrets.
> (...cut...).


 
Thanks Coldmud  - I shall visit the 'gamer aisle' for a change and get me some nice mice / meow  The 'Orca with Rucksack' build is getting close to completion, and as posted before, it's not a dedicated gamer but has to do some productivity work, well, at work also. It is also going to be located in an area (eventually wall mounted) where there are biz visitors, so I still have to do some work re. RGB and other fine-tuning of the final 'look', but I like the theme, and it is locked in now.

I spent A LOT of time on the 2950X CPU...it's a good one, running 16c/32t at 1.32v all day long, never seeing 60 C or higher. Also spent EVEN MORE time on the 32GB quad-channel DDR4 setup...running very tight timings on this Samsung B-Die kit at 3400MHz (nominal for the kit is 3866)...might even go to 3466, but only if I can keep the tight timings and pass extended memtests.

Per your earlier post, I ran some TimeSpy/Ex for basic setup (now up to 2130 for both cards)...getting up to 99% GPU utilization w/NVLink/SLI..still have a few MHz to go but I am running into the PL now - still on stock Bios which is decent enough (other than that RGB :arrowhead )


----------



## Chimera619

Just paid for my new PC build:

9900K / 32 GB 3200 MHz / Z390 Aorus Master / Palit 2080 Ti GamingPro OC
I'm moving from a GT73VR GTX 1080 laptop with a 6820HK to this, as I'm currently living in a residence for a while instead of traveling a lot like before.

Was it wise to move to PC atm?
I am reading people cry a lot about RTX, how **** DLSS/ray tracing is, and how it's really inefficient to buy any atm.
Should I have waited?

Like, I know people are crying about the $1200 price tags on the 2080 Ti, but this is not an Nvidia thing.
This is a TECH thing; even ******* mobile phones are $1500+ now (iPhone XS Max & S10+),
and I don't think this is an RTX thing either; I think next year's cards are going to cost as much, and AMD cards as well.
Who would have thought a year ago that AMD would release a $600 card that competes with Nvidia's mid-high-end card?


----------



## sblantipodi

DLSS is a fraud.
Nvidia is so ridiculous.


----------



## sblantipodi

please post your feedback on DLSS here:
https://goo.gl/forms/Kkfou0TkhzD5Xbaz2


----------



## Rob w

skingun said:


> Both of my Founders Edition 2080 Ti cards (2x) have Samsung memory. Originally I had cards with Micron memory but had to RMA both cards and the replacements arrived with Samsung chips.


Same here.


----------



## Bighouse

How do you tell which manufacturer you have for your VRAM?


----------



## Rob w

Bighouse said:


> How do you tell which manufacturer you have for your VRAM?


GPU-Z, under memory type.


----------



## Coldmud

Martin778 said:


> Aorus cards also have Samsung VRAM but their overall cooling solution is poor.


Mine has Micron, and AFAIK most do.



J7SC said:


> Thanks Coldmud  - I shall visit the 'gamer aisle' for a change and get me some nice mice / meow  The 'Orca with Rucksack' build is getting close to completion, and as posted before, it's not a dedicated gamer but has to do some productivity work, well, at work also. It is also going to be located in an area (eventually wall mounted) where there are biz visitors, so I still have to do some work re. RGB and other fine-tuning of the final 'look', but I like the theme, and it is locked in now.
> 
> I spent A LOT of time on the 2950X CPU...it's a good one, running 16c/32t at 1.32v all day long, never seeing 60 C or higher. Also spent EVEN MORE time on the 32GB quad-channel DDR4 setup...running very tight timings on this Samsung B-Die kit at 3400MHz (nominal for the kit is 3866)...might even go to 3466, but only if I can keep the tight timings and pass extended memtests.
> 
> Per your earlier post, I ran some TimeSpy/Ex for basic setup (now up to 2130 for both cards)...getting up to 99% GPU utilization w/NVLink/SLI..still have a few MHz to go but I am running into the PL now - still on stock Bios which is decent enough (other than that RGB :arrowhead )


Oh man, that is a sweet build! Looking forward to more shots  What kind of workloads will it be running? 99% utilization is mostly a given with benchmarks; with TimeSpy graphics test 2 it's impossible to stay under the PL.


----------



## J7SC

Coldmud said:


> Mine has micron, and afaik most do.
> 
> 
> 
> Oh man that is a sweet build! Looking forward to more shots  What kind of workloads will it be running? 99% utilization is mostly a given with benchmarks, with timespy graphic test 2 it's impossible to stay under PL



Thanks  Work stuff for this setup is really a test for an 'Epyc' switch-over later in the year (w/ or w/o '''AI''' per GPUs, we'll see) > mostly 'large' database and metadata analysis, some rendering, compilation, en-/decryption etc... 

TimeSpy GPU2 also really pushes VRAM...I'm at the point where I can still incrementally increase VRAM speed on both cards, but scores for GPU2 are either flat or decline slightly / must be near the sweet spot (I hope...).


----------



## Coldmud

J7SC said:


> Thanks  Work stuff for this setup is really a test for an 'Epyc' switch-over later in the year (w/ or w/o '''AI''' per GPUs, we'll see) > mostly 'large' database and metadata analysis, some rendering, compilation, en-/decryption etc...
> 
> TimeSpy GPU2 also really pushes VRAM...I'm at the point where I can still incrementally increase VRAM speed on both cards, but scores for GPU2 are either flat or decline slightly / must be near the sweet spot (I hope...).


You have some testing ahead!  I found that 3DMark wasn't even that hard on VRAM OC.. I had to back down another 200MHz, from 8300MHz (stable in 3DMark with increasing scores at 8100, 8200, etc.) to 8100MHz, to be stable in games (mostly Rise of the Tomb Raider, which crashed, and The Witcher 3, which artifacted).


----------



## Deathscythes

Hello, 

I am desperate to find any data - does anyone here know something about the ROG Matrix 2080 Ti release ?
I absolutely want a pair of those but can't find any data :/ 
Please don't tell me to get Strix cards because they have the same PCB - I just want Matrix cards.

Also, any VBIOS for the Strix yet? =)

Thanks a lot!


----------



## J7SC

Coldmud said:


> You have some testing ahead!  I found that 3DMark wasn't even that hard on VRAM OC.. I had to back down another 200MHz, from 8300MHz (stable in 3DMark with increasing scores at 8100, 8200, etc.) to 8100MHz, to be stable in games (mostly Rise of the Tomb Raider, which crashed, and The Witcher 3, which artifacted).



I know...I did test each card extensively on its own back in December, but I can't find my notes  Worse things could happen, though, than testing w/ games  

Hopefully, I'm ready for some Metro: Exodus action (notwithstanding the NVLink issues w/RTX) later in the coming week...I guess I should go to the Epic store for that, from what I've read. I think I also have the FF XV bench somewhere.


----------



## Kalm_Traveler

J7SC said:


> I know...I did test each card extensively on its own back in December, but I can't find my notes  Worse things could happen, though, than testing w/ games
> 
> Hopefully, I'm ready for some Metro: Exodus action (notwithstanding the NVLink issues w/RTX) later in the coming week...I guess I should go to the Epic store for that, from what I've read. I think I also have the FF XV bench somewhere.


wait which NVLink issues?


----------



## J7SC

...check here https://www.overclock.net/forum/69-nvidia/1706276-official-nvidia-rtx-2080-ti-owner-s-club-637.html#post27856302


----------



## Knoxis

Just ordered the ASUS Advanced; just want to check that I can safely flash it with the GALAX RTX 2080 Ti OC reference-PCB (2x 8-pin) 300W BIOS?

Does it also mean I will have to use Galax's OC software instead of the Asus one?

It will be watercooled.

Cheers,


----------



## Kalm_Traveler

J7SC said:


> ...check here https://www.overclock.net/forum/69-nvidia/1706276-official-nvidia-rtx-2080-ti-owner-s-club-637.html#post27856302


ooh, the game doesn't have SLI support... I thought you meant there was an actual problem with NVLink


----------



## J7SC

New full water-blocked offering from Zotac, per Tom's Hardware news section here https://www.tomshardware.com/news/zotac-geforce-rtx-2080-ti-arcticstorm-specs,38639.html

It seems like a very nice card, but as an aside, why does it have to say 'Live to Game'? Almost as bad as my Aorus (2x = stereo) with 'Team up. Fight on'  ...fortunately, I now don't see it anymore as I mounted the cards differently. Sometimes I think these vendors' marketing folks are from another planet. Anyhow, I think a vendor should bring out a completely flat black card (bonus: matches most builds) with an EXTERNAL hw switch on the card (like a multi-BIOS switch) to turn RGB on or off - I'd pay extra for that.


----------



## Canson

Coldmud said:


> You can always try the old metro profiles with AFR. Rocking the logitech G502, no regrets.
> 
> 
> 
> So your problem is that fans spin in idle? Did you can alter the fancurve in your oc software? Can set it so that it ramps up with desired temp.



Yes, exactly: my fans on the quiet-mode BIOS do spin at idle; they never stop. Normally they should stay stopped and only ramp up once the GPU reaches 55C.


I really would like to fix this issue. Does anybody have a Strix 2080 Ti Advanced Gaming or OC?


----------



## ComansoRowlett

Just a random bit of info: a 2080 Ti scales with voltage up to around 1.06v or so (obviously cards differ). Even if you don't increase the core clock, performance seems to increase just from raising vcore (with a hard mod, of course). The memory doesn't, but it does allow a higher clock speed if you increase the mem voltage.


----------



## Martin778

Coldmud said:


> Mine has micron, and afaik most do.


When did you get it? I bought mine around Christmas and it had Samsungs.


----------



## Nocliptoni

Does anyone know a higher power limit (than 366W) BIOS that might work with Gigabyte Aorus 2080ti Extreme Waterforce ?


----------



## ReFFrs

Nocliptoni said:


> Does anyone know a higher power limit (than 366W) BIOS that might work with Gigabyte Aorus 2080ti Extreme Waterforce ?


Of course, GALAX 380W


----------



## krkseg1ops

^^ I tried the EVGA 336W 0-fan BIOS as well as the 380W Galax BIOS on my Gainward 2080 Ti GS and I saw no performance improvement, only higher thermals. Does it actually do anything? I understand the basic premise behind it, but I simply haven't seen any gains in games - maybe in synthetic benchmarks, but yeah, no real-world gain. I saw more from installing Gainward's Expert Tool and OCing the memory from +1000 to +1500 (to 8500MHz).


----------



## stilllogicz

J7SC said:


> New full water-blocked offering from Zotac, per Tom's Hardware news section here https://www.tomshardware.com/news/zotac-geforce-rtx-2080-ti-arcticstorm-specs,38639.html
> 
> It seems like a very nice card, but as an aside, why does it have to say 'Live to Game'? Almost as bad as my Aorus (2x = stereo) with 'Team up. Fight on'  ...fortunately, I now don't see it anymore as I mounted the cards differently. Sometimes I think these vendors' marketing folks are from another planet. Anyhow, I think a vendor should bring out a completely flat black card (bonus: matches most builds) with an EXTERNAL hw switch on the card (like a multi-BIOS switch) to turn RGB on or off - I'd pay extra for that.


Keeping an eye on this card and specifically comparing it to the Seahawk EK X spec wise. Interested for sure.


----------



## Kimir

stilllogicz said:


> Keeping an eye on this card and specifically comparing it to the Seahawk EK X spec wise. Interested for sure.


Specs are on Zotac website:
https://www.zotac.com/product/graphics_card/zotac-gaming-geforce-rtx-2080-ti-arcticstorm#spec


----------



## shiokarai

JustinThyme said:


> No worries at all, I ran my EK 1080 Ti cards like that for over a year. Even going vertical can be made a non-issue aesthetically by simply applying window tint to the section of the acrylic where the wires will be routed. Not a big fan of how EK executed the RGB. The Phanteks block's RGB wiring exits at the right rear corner when viewed as installed, so it goes right into the wire-management chases of pretty much any case. Can't even see mine regardless of the angle.
> 
> Which BTW, EK is now taking preorders for the Strix blocks, to start shipping 2/25/19. I've got Phanteks blocks which are performing great, so I won't be going that route. The only EK stuff I have left in my system is the rads and the M.2 heatsink. I'll reconsider whether to stay with Phanteks when Watercool releases their blocks, and whether those have a significant performance advantage. I'd like to go vertical with both cards but can't source a 2-slot NVLink bridge. From what I gather the Quadro RTX 6000 bridge should work, but it's sold out and staying that way. I blame miners!!!


In what case are you going to mount 2 x RTX 2080 Ti vertically? Do you have a vertical mount that fits 2 cards, not just 1? I'm searching for a vertical mount that fits 2 cards and haven't found any - the only one I'm aware of was the CaseLabs option with the new SMA8-A.


----------



## jelome1989

Canson said:


> Yes, exactly: my fans on the quiet-mode BIOS do spin at idle; they never stop. Normally they should stay stopped and only ramp up once the GPU reaches 55C.
> 
> 
> I really would like to fix this issue. Does anybody have a Strix 2080 Ti Advanced Gaming or OC?


Try reinstalling GPU Tweak. Worked for me


----------



## stilllogicz

The Zotac cards with 20 power phases - are there any higher-limit BIOSes available for those?


----------



## Coldmud

J7SC said:


> New full water-blocked offering from Zotac, per Tom's Hardware news section here https://www.tomshardware.com/news/zotac-geforce-rtx-2080-ti-arcticstorm-specs,38639.html
> 
> It seems like a very nice card, but as an aside, why does it have to say 'Live to Game'? Almost as bad as my Aorus (2x = stereo) with 'Team up. Fight on'  ...fortunately, I now don't see it anymore as I mounted the cards differently. Sometimes I think these vendors' marketing folks are from another planet. Anyhow, I think a vendor should bring out a completely flat black card (bonus: matches most builds) with an EXTERNAL hw switch on the card (like a multi-BIOS switch) to turn RGB on or off - I'd pay extra for that.


Looks great, but very conservative boost? I think you can change that "Live to Game" nonsense, unlike our TUFO 



Canson said:


> Yes, exactly: my fans on the quiet-mode BIOS do spin at idle; they never stop. Normally they should stay stopped and only ramp up once the GPU reaches 55C.
> 
> I really would like to fix this issue. Does anybody have a Strix 2080 Ti Advanced Gaming or OC?


What OC software do you use? I think it's a settings issue, not a BIOS issue?



Martin778 said:


> When did you get it? I bought mine around Christmas and it had Samsungs.


Bought on 15-12. Judging mostly from forums, I've seen most people get Micron.


----------



## J7SC

Coldmud said:


> Looks great, but very conservative boost? I think you can change that "Live to Game" nonsense, unlike our TUFO
> 
> (.cut..)



It's also embossed on the backplate  besides, you probably have to load hardware-access software to change the LED display. Not the end of the world, and it's a very nice card...but I would still like an unobtrusive (flat black or gunmetal gray) model w/ a physical on-card switch for RGB etc




----------



## Peaky

hey everyone, 

new around here, and I have one question...is the 380W BIOS worth it?

I have an EVGA XC Ultra (1E 07 BIOS ID) and I was wondering if anyone with a similar card has done the update and saw an improvement worth doing the flash for.

Cheers!


----------



## HeliXpc

Hey guys, I am getting 84C under full load with the fan at 100%; is this normal? This is with +150 on core and +800 on memory, in Tomb Raider (the 2013 version) - still a pretty demanding game.


----------



## kx11

HeliXpc said:


> Hey guys, I am getting 84C under full load with the fan at 100%; is this normal? This is with +150 on core and +800 on memory, in Tomb Raider (the 2013 version) - still a pretty demanding game.





how good is the airflow inside the case ?


----------



## J7SC

...I posted some 3DM subs by KingPin earlier, but does anyone have any idea when the 2080 Ti KingPin is actually expected to hit the marketplace? Price? Still think this is an amazing-looking card, never mind features and (likely) performance

pics from YouTube / Paul's HW


----------



## MorganBulb

Hi,
I recently got a Palit RTX 2080 Ti Gaming Pro OC card (apparently my card is non-A).

I am wondering what the best BIOS to flash on the card would be, as I'm hitting the power limit/target consistently.

Are there any guides on how to flash a BIOS? I'm quite new to changing the BIOS on a GPU etc.

Side note: is there any way to get MSI AB to control the fans, or do I have to use the ThunderMaster software for that?

Many thanks


----------



## KCDC

shiokarai said:


> In what case are you going to mount 2 x RTX 2080 Ti vertically? Do you have a vertical mount that fits 2 cards, not just 1? I'm searching for a vertical mount that fits 2 cards and haven't found any - the only one I'm aware of was the CaseLabs option with the new SMA8-A.



In my Enthoo Elite, the vertical mounting solution has enough room for two GPUs and a couple other cards as well. Not sure about Phanteks' other options.


----------



## Jpmboy

J7SC said:


> ...I posted some 3DM subs by KingPin earlier, but does anyone have any idea when the 2080 Ti KingPin is actually expected to hit the marketplace? Price? Still think this is an amazing-looking card, never mind features and (likely) performance
> 
> pics from YouTube / Paul's HW


oh man.. that evbot port is calling!!!


----------



## J7SC

Jpmboy said:


> oh man.. that evbot port is calling!!!


 

Right you are! Fortunately, I'm a pack-rat, so I even kept the (somewhat dusty) box...just noticed that the screen still has the foil on it after all these years  ...KingPin 2080 Ti-H with 3 BIOSes (one @ 520W) + KingPin + TiN EVB flash + 'special' BIOS = :sonic:


----------



## Wo1olo

I've got an MSI RTX 2080 Ti Lightning Z. Really slick looking card and really fast. I'm looking forward to overclocking it even further. It came with a hefty boost clock out of the box, but I've gotten it even higher. I haven't had the balls to switch to the LN2 bios to tweak the voltage though. I've had no trouble overclocking my CPU (which is water cooled), but modifying the voltage on the GPU seems to work differently. I don't want to fry my expensive card so I'm doing a bit more research first.



Check it out, though


----------



## Kimir

Rofl, who at EVGA thinks it's a good idea to put an AIO as the default cooler on a KPE card... Just sell them naked or with a true waterblock already. -_-


----------



## zack_orner

J7SC said:


> Right you are ! Fortunately, I'm a pack-rat so I even kept the (somewhat dusty) box...just noticed that the screen still has the foil on it after all these years  ...KingPin 2080 Ti-H with 3-Bios (one @ 520w) + KingPin + TiN EVB flash + 'special' Bios = :sonic:


Are you able to upload that 520W BIOS so I can try it?

2700X / X470-F Gaming / 1TB 970 Evo / G.Skill 3200CL14 / MSI 2080 Ti Gaming X Trio


----------



## zack_orner

OP, where did the 406W MSI Gaming X Trio BIOS come from? When I click on source it takes me to a post from 2004.

2700X / X470-F Gaming / 1TB 970 Evo / G.Skill 3200CL14 / MSI 2080 Ti Gaming X Trio


----------



## shiokarai

MorganBulb said:


> Hi,
> I recently got a Palit RTX 2080 Ti Gaming Pro OC card (apparently my card is non-A).
> 
> I am wondering what the best BIOS to flash on the card would be, as I'm hitting the power limit/target consistently.
> 
> Are there any guides on how to flash a BIOS? I'm quite new to changing the BIOS on a GPU etc.
> 
> Side note: is there any way to get MSI AB to control the fans, or do I have to use the ThunderMaster software for that?
> 
> Many thanks


No, your card is an A chip, because it's factory OCed. Do you have any proof that you got a non-A chip in the Palit Gaming Pro OC card?


----------



## dante`afk

J7SC said:


> ...I posted some 3DM subs by KingPin earlier, but does anyone have any idea when the 2080 Ti KingPin is actually expected to hit the marketplace? Price? Still think this is an amazing-looking card, never mind features and (likely) performance
> 
> pics from YouTube / Paul's HW


unless it's with unlocked bios, not interested.


----------



## GraphicsWhore

J7SC said:


> New full water-blocked offering from Zotac, per Tom's Hardware news section here https://www.tomshardware.com/news/zotac-geforce-rtx-2080-ti-arcticstorm-specs,38639.html
> 
> It seems like a very nice card, but as an aside, why does it has to say 'Live to Game' ? Almost as bad as my Aorus (2x = stereo) with 'Team up. Fight on'  ...fortunately, I now don't see it anymore as I mounted the cards differently. Sometimes I think these vendors' marketing folks are from another planet. Anyhow, I think a vendor should bring out a completely flat black card (bonus: matches most builds) and with an EXTERNAL hw switch on the card (like a multi-Bios switch) to turn RGB on or off - I'd pay extra for that.


Bro you don't LIVE TO GAME??? Are you kidding? Are you not a REAL GAMER who LIVES, BREATHES and SLEEPS GAMING? Very disappointing. I thought we were going to RISE UP in our solidarity and to defend what we truly love (HARDCORE GAMING) but I guess some of you are not truly committed to this lifestyle!

And yeah other than that, the card looks pretty sweet.


----------



## ilmazzo

dante`afk said:


> unless it's with unlocked bios, not interested.


unlocked in voltages or a 1000W version?


----------



## bogdi1988

so, got the 2-slot NVLink bridge. Noticed in GPU-Z that the cards are working at 2.0 x16 (or 2.0 x8) instead of 3.0 port speed.

This is in an MSI X399 MEG Creator.
PCIe1 16x is NVME 4 port add on card - yes, running NVME RAID with 4 NVME drives
PCIE2 1x empty
PCIE3 8x empty
PCIE4 16x 2080TI
PCIE5 8x 2080TI

PCIE speed is forced to Gen 3 in BIOS.

Any ideas?


----------



## J7SC

zack_orner said:


> Are you able to upload that 520W BIOS so I can try it?
> 
> 2700X / X470-F Gaming / 1TB 970 Evo / G.Skill 3200CL14 / MSI 2080 Ti Gaming X Trio


 
First, the card is not officially out yet (nor do I have one) - that's why I was asking earlier if anyone knew release dates and prices. Second, the KP BIOS "may potentially" work with the MSI Gaming X Trio, but differences in the EPS 12V setup might result in reduced wattage. Third, you will likely need an EVBot, or at least a yet-to-be-released software tool, to get to the full wattage - never mind appropriately heavy-duty cooling. Fourth, I doubt the card will come with the 520W BIOS unlocked..if I go by past experience with KingPin editions, it will be released at Kingpincooling.com in the relevant forum when the time comes, along with an EVBot firmware update for that card by Kingpin and TiN (Ilya)



dante`afk said:


> unless it's with unlocked bios, not interested.


...see above and my comment in earlier post re. ..KingPin 2080 Ti-H with 3-Bios (one @ 520w) + _KingPin + TiN EVB flash + 'special' Bios_ = :sonic:


----------



## J7SC

bogdi1988 said:


> so, got the 2-slot NVLink bridge. Noticed in GPU-Z that the cards are working at 2.0 x16 (or 2.0 x8) instead of 3.0 port speed.
> 
> This is in an MSI X399 MEG Creator.
> PCIe1 16x is NVME 4 port add on card - yes, running NVME RAID with 4 NVME drives
> PCIE2 1x empty
> PCIE3 8x empty
> PCIE4 16x 2080TI
> PCIE5 8x 2080TI
> 
> PCIE speed is forced to Gen 3 in BIOS.
> 
> Any ideas?



...also running an MSI X399 Creation with 2x 2080 Tis (both at x16)...the mobo manual is your final resource, but AFAIK PCIe slots 1 and 4 should be used for video cards, as those are the only x16 slots...you can also Google images of the back of that mobo and see the extra lanes on those two slots only...you probably have to move the NVMe 4-port add-on card to PCIe slot 5?
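One more thing worth checking before reshuffling cards: `nvidia-smi` can report the currently negotiated link with `nvidia-smi --query-gpu=index,pcie.link.gen.current,pcie.link.width.current --format=csv`, and the link usually drops to a lower generation at idle, only negotiating Gen 3 under load, so query it while the GPUs are busy. A small parsing sketch for that CSV output; the `SAMPLE` string is an illustrative stand-in since real values depend on the machine:

```python
import csv
import io

# Illustrative sample of the CSV that
#   nvidia-smi --query-gpu=index,pcie.link.gen.current,pcie.link.width.current --format=csv
# prints; the real values depend on the system being queried.
SAMPLE = """\
index, pcie.link.gen.current, pcie.link.width.current
0, 3, 16
1, 2, 8
"""

def parse_link_state(text: str):
    """Return (gpu_index, pcie_gen, link_width) tuples from nvidia-smi CSV output."""
    reader = csv.reader(io.StringIO(text), skipinitialspace=True)
    next(reader)  # skip the header row
    rows = []
    for row in reader:
        if len(row) != 3:
            continue  # tolerate blank/trailing lines
        idx, gen, width = (int(v) for v in row)
        rows.append((idx, gen, width))
    return rows

for idx, gen, width in parse_link_state(SAMPLE):
    note = "OK" if gen >= 3 else "below Gen 3 (may just be idling)"
    print(f"GPU {idx}: PCIe {gen}.0 x{width} -> {note}")
```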


----------



## Coldmud

Wo1olo said:


> I've got an MSI RTX 2080 Ti Lightning Z. Really slick looking card and really fast. I'm looking forward to overclocking it even further. It came with a hefty boost clock out of the box, but I've gotten it even higher. I haven't had the balls to switch to the LN2 bios to tweak the voltage though. I've had no trouble overclocking my CPU (which is water cooled), but modifying the voltage on the GPU seems to work differently. I don't want to fry my expensive card so I'm doing a bit more research first.
> 
> 
> 
> Check it out, though


Nice card! Tempting to buy - 1 in stock... However, as I understand it, the LN2 BIOS doesn't go above 300W. You also need a special unlocked Afterburner that goes up to 1.5V, and they only give those out to elite overclockers. Unwinder, who makes Afterburner, is on the guru3d forums - anyone want to bug him for it? I have a very inactive account over there.


----------



## Hulk1988

Got my new special card. Air-cooled only for now, and limited by the temp and power limits; this will change soon. But it's enough for place 17 in Superposition 4K  https://benchmark.unigine.com/leaderboards/superposition/1.0/4k-optimized/single-gpu/page-1 "SH"

Tested the Samsung memory with the EVGA tool, because Afterburner "only" goes to +1500:

8600 MHz is fine (+1600)
With 8700 MHz, some graphic glitches - still fine for benching
With 8750 MHz, crash
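As an aside for anyone following the numbers: Afterburner-style memory clocks are half the effective data rate (7000 MHz shown on a stock 14 Gbps card, with offsets added to that number), which is how +1600 lands at the 8600 MHz reading above. A minimal sketch of the arithmetic, assuming the reference 352-bit bus from the spec sheet:

```python
# GDDR6 clock arithmetic, following the Afterburner convention where the
# displayed clock is half the effective data rate (7000 MHz stock on a
# 14 Gbps 2080 Ti) and offsets add to that number. Bandwidth scales with
# the reference 352-bit bus.
BASE_MHZ = 7000   # Afterburner-style clock; stock effective rate is 14000 MT/s
BUS_BITS = 352

def mem_stats(offset_mhz: int):
    clk = BASE_MHZ + offset_mhz
    effective = clk * 2                          # MT/s
    bandwidth = effective * BUS_BITS / 8 / 1000  # GB/s
    return clk, effective, bandwidth

for off in (0, 1500, 1600):
    clk, eff, bw = mem_stats(off)
    print(f"+{off:>4}: {clk} MHz shown, {eff} MT/s, {bw:.0f} GB/s")
```

At +0 this reproduces the stock 616 GB/s figure from the spec table; +1600 works out to roughly 757 GB/s, a ~23% bandwidth bump.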


----------



## KCDC

GraphicsWhore said:


> Bro you don't LIVE TO GAME??? Are you kidding? Are you not a REAL GAMER who LIVES, BREATHES and SLEEPS GAMING? Very disappointing. I thought we were going to RISE UP in our solidarity and to defend what we truly love (HARDCORE GAMING) but I guess some of you are not truly committed to this lifestyle!
> 
> And yeah other than that, the card looks pretty sweet.



What I'd prefer is the ability to put whatever text we want - or an animation or design - into that little screen, instead of whatever "hello, fellow kids" corporate garbage their product dev team comes up with.


----------



## bogdi1988

J7SC said:


> ...also running MSI X399 Creation, with 2x 2080 TIs (both at 16x)...the mobo manual is your final resource, but AFAIK, PCIe 1 and 4 should be used for vid cards as those are the only 16x ones...you can also Google that mobo for images on the back and see the extra lanes on those two slots only...you probably have to move the NVME 4 port add-on card to PCIe slot 5 ?


I moved the video cards to slots 1 and 4 and they still ran at 2.0.
A full CMOS erase cleared the issue. But since PCIe 3 and 5 are only 8x max, I had to use just 2 NVMe drives in the add-in card, 1 in a separate smaller NVMe add-in card, and move the 4th NVMe onto the board itself (the other 2 NVMe slots on the mobo are already populated).


----------



## mackanz

Stumbling on the voltage limit way before the power limit. What do you guys do about that? Undervolt?


----------



## kx11

I'm kind of puzzled by a weird problem here.

The LCD on the Lightning Z was acting strange - both before and after a fresh format - in that it doesn't show the current clocks etc. when I enable it from Mystic Light 3. Little did I know the GALAX GAMER RGB app was installed on my PC!!!

Who put it there?! I tried deleting it from regedit, but that didn't work since I'd already uninstalled it from Windows - it keeps coming back.

I think the problem might be related to OneDrive, which stored my settings from when I had GALAX RGB RAM a few months ago. I'll keep digging into this.


----------



## J7SC

bogdi1988 said:


> I moved the video cards to 1 and 4 and still ran in 2.0
> Did a full CMOS erase and that cleared the issue. Just now that PCIe3 and 5 are only 8x max, i had to use only 2 NVME in the add in card, 1 in a separate smaller nvme add in card and then move the 4th nvme on the board itself. (other 2 nvme slots on mobo are already populated)



These things can be a wild goose chase I don't want to send anyone on. But if you have the time, try running the two 2080 Tis in PCIe slots 1 & 4 with only a single M.2 drive connected on the board itself (i.e., NOT with the add-in card at all). That should give you 16x/16x... if that works out, I would then test the remaining slots with the add-in M.2 card... at best, though, I think you can expect 2x 2080 Ti at 16x each, and the add-in M.2 card with NVMe at either 4x or 8x...
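As a side note on what the Gen 2 fallback actually costs: theoretical link bandwidth follows directly from the per-lane signalling rate and encoding overhead (these are the PCIe spec values; real-world throughput is a bit lower still due to protocol overhead):

```python
# Theoretical PCIe link bandwidth after encoding overhead.
# Per-lane rates are the spec values: Gen2 runs 5 GT/s with 8b/10b
# encoding, Gen3 runs 8 GT/s with 128b/130b encoding.
PER_LANE_GBIT = {
    2: 5.0 * 8 / 10,     # 4.0 Gbit/s usable per lane
    3: 8.0 * 128 / 130,  # ~7.88 Gbit/s usable per lane
}

def link_bandwidth_gb_s(gen: int, lanes: int) -> float:
    """Usable one-direction bandwidth in GB/s for a PCIe link."""
    return PER_LANE_GBIT[gen] * lanes / 8  # bits -> bytes

print(f"Gen2 x16: {link_bandwidth_gb_s(2, 16):.1f} GB/s")
print(f"Gen3 x16: {link_bandwidth_gb_s(3, 16):.1f} GB/s")
```

So an x16 slot stuck at Gen 2 moves ~8 GB/s per direction vs ~15.8 GB/s at Gen 3 - roughly half, which is why the GPU-Z reading is worth chasing down.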


----------



## managerman

shiokarai said:


> In what case you're going to do 2 x RTX 2080 ti mounted vertically? Do you have a vertical mount to fit 2 cards, not just 1? I'm searching for the vertical mount to fit 2 cards and haven't found any - only one I'm aware of was CaseLabs option with new SMA8-A.


My Thermaltake Core P7 has two 2080 Tis vertically mounted with 200mm PCIe extensions. I think the Core P5 has the same mounting system. Almost finished with the build.

-M


----------



## J7SC

managerman said:


> My Thermaltake Core P7 has two 2080ti's vertically mounted with 200mm PCI-E extensions. I think the Core P5 has the same mounting system. Almost finished with the build..
> 
> -M


Ohhh - that's nice :wubsmiley


----------



## JMCB

Anyone have any luck changing the lighting on the Founders Edition card to something other than green?


----------



## Hulk1988

Hello,

quick question:

Gaming OC | 3 Fan | 2.5 Slot | 287mm | RGB | 16 Power Phases | 1650 MHz Boost | 300/366 W | Reference PCB | EAN 4719331303563 | PN GV-N208TGAMING OC-11GC

I need a waterblock for it. It is listed as a reference PCB, but no website lists the card as supported by their cooler. Here you can see the small differences: https://i.imgur.com/dXlTpUs.gif

Which waterblock do I need to buy for it?

Thank you


----------



## Pepillo

Search for Bykski waterblocks. I have one for my Aorus and it works great.


----------



## stilllogicz

For people with the Zotac Maxx and Extreme cards: which BIOS are you using for more power?


----------



## bogdi1988

J7SC said:


> These things can be a wild goose chase I do not want to send anyone on. But if you have the time, try running the two 2080 TIs in PCIe slot 1 & 4, with only a single M.2 drive on the board itself connected (as in NOT w/ the add-in card at all) . That should give you 16x by 16x... if that works out, I would test out any of the other remaining slots with the add in M.2 card...at best though, I think you can expect 2x 2080 Ti at 16x each, and the add-in M.2 card w/ NVME at either 4x or 8x...


After the BIOS clear, the cards started working at 3.0 - very tricky issue. And yeah, taking NVMes off the board is very tricky, as I have a full custom hardline loop, so it's quite hard to take cards out haha.


----------



## kx11

Guru3D claims this card isn't out yet?!

It's the MSI 2080 Ti Sea Hawk EK X:

https://www.guru3d.com/news_story/msi_and_ek_partner_up_geforce_rtx_seahawk_ek_x.html

I've seen this GPU all over the web, ready to ship!!!


----------



## Vencenzo

I'm a bit late to the party... just got my Sea Hawk EK installed.

So far I'm solid at 2145/7500 (1875) with a 48C max temp on any bench.

I have an extremely overkill loop: a 360x120mm, 90mm-thick rad for my GPU and a rather fast pump.

This 110% power limit sucks... I understand the shunt mod, but I'd rather not. Sort of disappointed I have a 17 instead of 19 power-phase card, but I feel like I still have a ton of room.

Haven't read through the whole thread yet - any tips for my specific card?


----------



## shiokarai

managerman said:


> My Thermaltake Core P7 has two 2080ti's vertically mounted with 200mm PCI-E extensions. I think the Core P5 has the same mounting system. Almost finished with the build..
> 
> -M


Nice!!!! Damn, I'm with a CaseLabs STH10 so out of luck, only CableMod Vertical GPU mount or something similar will fit, but it's only for 1 card... jealous!


----------



## DarthBaggins

Kinda sucks that it's hard to source the CaseLabs vertical mount, since they did launch one w/ the SMA8-A revision, but sadly no one is making their parts yet.


----------



## J7SC

bogdi1988 said:


> After the BIOS clear, the cards started working on 3.0. very tricky issue. and yea, taking NVMEs out of the board is very tricky as i have full custom hard loop so it is quite hard to take cards out haha.



I hear you, what with running copper tubing to the 2x 2080 Tis. While I can get access to the M.2s on the mobo, since I left some adjustability in the tubes, it would nevertheless be a 20min+ job.

Glad to hear you got PCIe 3.0 working - on that mobo, the 'clear CMOS' button is a good friend! I just have to remember to save my various OC profiles regularly. Also, having up to 7 M.2 slots available on the MSI X399 Creation is nice, if a bit overkill.


----------



## Vencenzo

Here are the pics of my new Sea Hawk EK X for membership.

There's not a lot of feedback on this card as of yet; I'll keep you updated if I figure anything out. So far it looks to be a 300W TDP BIOS going by GPU-Z readings (though these have been wrong before...). The newest beta of MSI Afterburner gives me a voltage slider in %, which I'm not used to. The power limit cap is 110%, so it seems I'll need a higher-wattage BIOS.


----------



## TK421

Hey guys, I just bought the 11G-P4-2387-KR (2080 Ti XC Ultra), and I want to use this BIOS from the front page:

> Download GALAX RTX 2080 Ti OC Reference PCB (2x8-Pin) 300W x 126% Power Target BIOS (380W) Unofficial (Source)

Is there anything I should be concerned about, or should know, before flashing and using this BIOS?
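For anyone else in the same spot, the usual cross-flash routine (independent of any particular card) is: back up first, disable write protection, then flash with the ID check overridden. A dry-run sketch using the commonly documented nvflash flags - the ROM filename is just the one mentioned elsewhere in this thread, and nothing here is card-specific:

```shell
# Dry-run of a typical cross-flash sequence: echo each command instead of
# executing it, since the real thing needs the card installed (and carries risk).
NVFLASH=./nvflash64
ROM=KFA2.RTX2080Ti.11264.1800910.rom

echo "$NVFLASH --save backup.rom"   # 1. always back up the current vBIOS first
echo "$NVFLASH --protectoff"        # 2. disable the EEPROM write protect
echo "$NVFLASH -6 $ROM"             # 3. flash, acknowledging the PCI subsystem ID mismatch
```

Keep the backup.rom somewhere safe - it is the only clean way back to stock if the new BIOS misbehaves.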


----------



## Edge0fsanity

TK421 said:


> Hey guys, I just bought the 11G-P4-2387-KR (2080 Ti XC Ultra).
> 
> 
> 
> 
> 
> And I want to use this bios from the front page.
> 
> 
> 
> 
> 
> Is there something that I should be concerned or should know before flashing and using this bios?


That card has the dual fan control, and the GALAX BIOS does not play nice with it. Before I put a waterblock on mine, it would run the second fan along the curve until ~65C, then drop the speed down to 1k RPM. I had temps in excess of 85C because of that.

I would try the FTW3 BIOS and see if you can retain your fan control. That BIOS is 373W.


----------



## TK421

Edge0fsanity said:


> That card has the dual fan control, the galax bios does not play nice with it. Before i put a waterblock on mine it would run the second fan along the curve until ~65C then it would drop the speed down to 1k rpm. I had temps in excess of 85C due to that.
> 
> I would try the ftw3 bios and see if you can retain your fan control. That bios is 373w.



Hi,

Isn't the FTW3 BIOS based on a custom PCB, with a different VRM layout and display outputs?

Not to mention the card has one extra fan.

Is there a fix for the dual fan control situation with the GALAX BIOS?

Thanks.


----------



## Edge0fsanity

TK421 said:


> Hi,
> 
> 
> Isn't the FTW3 bios based on a custom PCB with different VRM layout and display outputs?
> 
> 
> Not to mention the card has 1 extra fan.
> 
> 
> 
> 
> 
> 
> Is there a fix for the dual fan control situation with Galax bios?
> 
> 
> Thanks.


It shouldn't matter; I've seen multiple people in this thread run custom-PCB BIOSes on reference boards. Maybe someone who is using the FTW3 BIOS on a reference PCB can chime in on whether it works well or not.

For the few days I left my card on air, I tried a few different OC tools without any luck - none of them fixed the buggy fan issue. EVGA's dual fan feature is annoying; I would avoid it in the future. I didn't care, since I knew I was going to put a block on the card quickly, and it was in stock at a time when 2080 Tis were selling out in minutes.


----------



## TK421

Edge0fsanity said:


> It shouldn't matter, i've seen multiple people in this thread run custom pcb bios's on reference boards. Maybe someone can chime in that is using the ftw3 bios on a reference pcb as to whether it works well or not.
> 
> For the few days i left my card on air i tried a few different OC software without any luck. None of them would fix the buggy fan issue. EVGAs dual fan feature is annoying, i would avoid it in the future. I didn't care since i knew i was going to block the card real quick and it was in stock at the time when 2080tis sold out in minutes.





Do you think the problem might be the way the EVGA fans are wired?

Or did EVGA make a small adjustment to the PCB to enable dual fan control? I thought this model (XC Ultra) was a reference PCB?


----------



## Edge0fsanity

TK421 said:


> Do you think the problem might be the way how the EVGA fans are wired?
> 
> 
> Or did EVGA make small adjustment to the PCB to enable dual fan control? I thought this model (XC Ultra) is a reference PCB?


It is a reference PCB, but there are some small differences from the Founders Edition and from my ASUS Dual OC 2080 Tis (I've owned those, plus the EVGA XC2 2080 Ti). Their dual fan control is handled in the BIOS; the fan connector is the same as on the other cards.


----------



## TK421

Edge0fsanity said:


> It is a reference pcb but there are some small differences from the founders edition and my asus dual OC 2080tis. I've owned those + evga xc2 2080ti. Their dual fan control thing is related to the bios. The fan connector is the same as the other cards.



I see what you're mentioning - I didn't notice this part.

An extra controller for the fan?


----------



## Krzych04650

Just tried some Metro Exodus at 3840x1600 21:9 with RTX enabled, and it is not so bad. The Extreme graphics setting seems to be the main thing to avoid here; it makes the game feel clunky even in places with higher framerates. Running the game on just Ultra with Hairworks on, Tessellation on, PhysX on, RTX High and DLSS off (it won't turn itself on at this resolution anyway) gives very reasonable performance: mostly slightly above 60 FPS, sometimes above 70, sometimes dropping to mid-to-low 50s. Perfectly playable with VRR and a 60 FPS lock.

I still have the two previous Metro games to play first, so I won't play it yet, but it looks promising and will probably get considerably easier to run after a few patches to RT. RTX High is not even that big of a performance hit - RTX Off is only 25-30% faster, and that's not an unusual hit for any demanding option, really. Hovering around 60 FPS at Ultra with RTX High at such a high resolution is very respectable; I'd expect that from 3440x1440, but certainly not from 3840x1600, which is 20% more demanding. I didn't expect a single GPU to be able to pull this kind of performance.

I like what I saw, and I'd rather have this than games being demanding for no reason. With RT there is at least a very good justification for high GPU requirements, instead of games getting more demanding while looking the same as, or worse than, 4 years ago.


----------



## Rob w

Continuing on from the problems I'm having flashing the GALAX/KFA2 BIOS to my 2080 Ti FE:
I tried numerous combinations/versions of nvflash, nvflash64, and nvflash 5.527.0 modified to allow flashing FE cards, and I still get an ID mismatch!
It seems this particular card will not accept it at all - and yes, it is 100% the A chip with Samsung memory.
I am able to re-flash its own BIOS, but as yet no other??
In frustration I did a shunt mod (8 mOhm on top). That works well at holding higher clocks, but it's still power throttling even on the chiller. So I'm wondering: did NV change the cards sent out as RMA replacements in a way that locked them, or is it a case of 'that's as far as I can go with this board'?
Seems the only thing I can do is try to find a vBIOS that will flash to it...
Any recommendations?
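For context on what an 8 mOhm stacked shunt does: the added resistor sits in parallel with the stock current-sense shunt, so the controller sees a smaller voltage drop and under-reads power by the resistance ratio. A quick sketch, assuming the commonly reported 5 mOhm stock shunt on reference 2080 Ti boards (an assumption - measure your own board):

```python
# Effect of stacking a resistor on a current-sense shunt: the pair acts in
# parallel, so the controller under-reads current and power.
# Assumption (hedged): 5 mOhm stock shunt, the commonly reported value for
# reference 2080 Ti boards; the 8 mOhm stacked value is from the post above.

def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

stock, stacked = 5.0, 8.0             # milliohms
effective = parallel(stock, stacked)  # ~3.08 mOhm seen by the controller
scale = stock / effective             # actual power = reported power * scale

print(f"effective shunt: {effective:.2f} mOhm")
print(f"power under-read factor: {scale:.3f}x")
print(f"a 320 W reported limit then allows ~{320 * scale:.0f} W actual")
```

Under those assumptions the mod raises the real ceiling by ~62%, which also means every software power reading after the mod is understated by the same factor.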


----------



## Edge0fsanity

Rob w said:


> Continuing on from the problems I’m having flashing the galax/ KAF2 bios to my 2080ti Fe,
> I tried numerous combinations / versions of nvflash, nvflash64 and nvflash 5.527.0 modified to allow flashing FE cards and still get ID mismatch!
> Seems this particular card will not accept it at all, and yes it 100% is the A chip and Samsung memory,
> I am able to re flash it’s own bios but as yet no other??
> In frustration I did a shunt mod ( 8mohm on top) that works well at holding higher clocks but still power throttling even on the chiller, so I am wondering? Did nv change the cards that were sent in the rma that has locked them or is it a case of ‘that’s as far as I can go with this board’
> Seems the only thing I can do is try and find a vbios that will flash to it..
> Any recommendations?


Makes me wonder if NVIDIA did something to the BIOS on newer cards to lock them down. Wouldn't surprise me at all. Your best hope is to use the voltage curve in AB and find a voltage where it doesn't throttle; 1.043 or 1.05 V is usually a good spot. I've found it doesn't affect peak clocks much vs higher voltages - you'll probably lose 15-30 MHz, if that.


----------



## bogdi1988

J7SC said:


> I hear you, what with running copper tubing to the 2x 2080 TIs. While I can get access to the M.2s on the mobo as I left some degree of adjustability of the tubes, it would nevertheless be a 20min+ job.
> 
> Glad to hear that you get your PCIe 3.0 working - on that mobo, the 'clear CMOS' button is a good friend ! I just had to remember to save my various OC profiles regularly. Also, having up to 7 M.2 slots available on the MSI X399 Creation is nice, if a bit overkill


I used a bunch of those rotating 90° fittings, which allowed me to move the video card out by a few mm so I could remove the M.2 access panels.
And 7 M.2 slots is not overkill - I have 6 populated through various means. I did "throw out" the MSI NVMe board and bought the ASUS one; it's only 1 slot wide, cools just as well, and doesn't need the additional power plug.


----------



## black06g85

Just to add: my buddy had his 4th RTX 2080 Ti die on him last night. This was an MSI non-reference card too... also Micron memory...

I'm not one to usually believe hype about failures, but this is the fifth card I've personally seen die in the span of 4 weeks. At least this one didn't take his board with it this time.

3x RTX 2080 Ti FE (took out a 2600K ROG Maximus board, and my ASUS X99 Deluxe board while testing the 2nd card)
1x MSI RTX 2080 Ti (not sure which model, but 3 fans and huge - not the Lightning)
1x EVGA RTX 2080 Ti XC Hybrid (my card) - took out my MSI Z390 Godlike board in the process too...

Anyone else running into these issues?
My card died at idle with the monitor off...

In 20+ years of building computers I've never seen anything like this before.


----------



## Vipeax

Rob w said:


> Continuing on from the problems I’m having flashing the galax/ KAF2 bios to my 2080ti Fe,
> I tried numerous combinations / versions of nvflash, nvflash64 and nvflash 5.527.0 modified to allow flashing FE cards and still get ID mismatch!
> Seems this particular card will not accept it at all, and yes it 100% is the A chip and Samsung memory,
> I am able to re flash it’s own bios but as yet no other??
> In frustration I did a shunt mod ( 8mohm on top) that works well at holding higher clocks but still power throttling even on the chiller, so I am wondering? Did nv change the cards that were sent in the rma that has locked them or is it a case of ‘that’s as far as I can go with this board’
> Seems the only thing I can do is try and find a vbios that will flash to it..
> Any recommendations?


Without any screenshots this post isn't very helpful.


----------



## Rob w

Vipeax said:


> Without any screenshots this post isn't very helpful.


Ok, screenshots when I’m back on pc later... sorry!


----------



## Canson

jelome1989 said:


> Try reinstalling GPU Tweak. Worked for me


I use MSI Afterburner. I tried reinstalling it, and even the drivers - still the same problem.







Coldmud said:


> What oc software you use? I think it's a setting issue, not a bios issue?


I use MSI Afterburner. Right now I just need to know which BIOS version Quiet mode has. I want to know if it's the same as P-mode or different.


----------



## Rob w

Vipeax said:


> Without any screenshots this post isn't very helpful.


core at 2145 = crash
core at 2130 = crash
core at 2115 = runs at 2100 max, 283.7 watts
It's nearly always clocking down during a run; the shunt mod helped keep clocks at 2100 and above. So as I see it, if I can get a higher-power BIOS flashed, that should help hold the higher clocks (I could be totally wrong about that, though?)


----------



## Vipeax

I'm interested in screenshots of you trying to flash the card.


----------



## ReFFrs

black06g85 said:


> 3 rtx2080ti FE cards (took out a 2600k board rog maximus , and my asus x99 deluxe board in testing on the 2nd card)
> 1 msi rtx2080ti (not sure which model, but 3 fans and huge, not the lightning)
> 1 evga rtx2080ti xc hybrid (my card) took out my msi z390 godlike board in the process too......


All these cards were Micron memory? You should definitely try a Samsung one.


----------



## Renegade5399

ReFFrs said:


> All these cards were Micron memory? You should definitely try a Samsung one.


Both the step-up cards I received have Samsung on them. In MSIAB I have gotten them to 8150 so far.


----------



## Rob w

Vipeax said:


> I'm interested in screenshots of you trying to flash the card.


Sorry about that, Vipeax. Just tried again - here are some shots. I re-downloaded nvflash 5.527.0 (modified) from TechPowerUp, plus KFA2.RTX2080Ti.11264.1800910.rom and 205611.rom.
I unblocked everything, and ran both cmd and the flash as admin.


----------



## Renegade5399

Rob w said:


> sorry about that Vipeax ,just tried again here's some shots, re downloaded (nvflash-5.527.0 modified)from techpowerup + KFA2.RTX2080Ti.11264.1800910.rom, and 205611.rom
> I unblocked all +run as admin. cmd and run as admin.


Do you have an FE card? Sorry if you already answered this.

On my EVGA cards I used NVIDIA NVFlash 5.541.0.


----------



## Rob w

Renegade5399 said:


> Do you have an FE card? Sorry if you already answered this.
> 
> On my EVGA cards I used NVIDIA NVFlash 5.541.0.


Yes, a Founders Edition direct from NV; physically confirmed 'A' chip and Samsung memory.
It's on water (quite often on the chiller), so temps are not a problem.

I tried that version as well as previous versions, and also tried all the GALAX BIOSes - it will flash its own OK!
When it wouldn't flash, I did the shunt mod, but now I'm thinking it could still be helped by a better BIOS as well. The damned thing won't flash a GALAX one, though - any suggestions on what else could be worth trying?
I'm really starting to think NV may have locked the BIOS on later cards like mine...


----------



## black06g85

ReFFrs said:


> All these cards were Micron memory? You should definitely try a Samsung one.


Yes, they were.

Probably going with an FTW3 - my other friend just got one, and it at least has Samsung memory on it, so hopefully I get lucky.

Funny, my 1080 Ti has Micron and has been going strong for 2 years now (and is back in use after the 2080 Ti's death).
Guess they had some issues with GDDR6.


----------



## Vipeax

Rob w said:


> sorry about that Vipeax ,just tried again here's some shots, re downloaded (nvflash-5.527.0 modified)from techpowerup + KFA2.RTX2080Ti.11264.1800910.rom, and 205611.rom
> I unblocked all +run as admin. cmd and run as admin.


By the looks of the screenshots you aren't using the version from https://www.techpowerup.com/download/nvidia-nvflash-with-board-id-mismatch-disabled/


----------



## Renegade5399

Vipeax said:


> By the looks of the screenshots you aren't using the version from https://www.techpowerup.com/download/nvidia-nvflash-with-board-id-mismatch-disabled/


Oh man, I hope that's it.


----------



## Rob w

Vipeax said:


> By the looks of the screenshots you aren't using the version from https://www.techpowerup.com/download/nvidia-nvflash-with-board-id-mismatch-disabled/


Vipeax, man, I can't thank you enough!!!
I tried every nvflash from TechPowerUp and none would work (strangely), so I downloaded from elsewhere - still no luck!
Downloaded from your link and it worked the first time - no hitches at all, thank you.
I don't know what difference the KFA2 BIOS should make, but something is happening, as the rig is now pulling 768 watts from the wall - an extra 154 watts!
SO, NV did not lock the card down - it was just a dumbass user hahaha.
Thanks again to all.
Now for a bit more testing :thumb:


----------



## pewpewlazer

Rob w said:


> core at 2145 =crash
> core at 2130 = crash
> core at 2115 = runs at 2100 max 283.7watts.
> nearly always clocking down during run, shunt mod helped to keep clocks at 2100 and above, so as I see it if I can get a higher bios flashed then that should help to get the higher clocks ( I could be totally wrong about that though?)


If you're only seeing 280w with a 320w power limit, then you're not still power throttling as you claim in an earlier post. If you are, it would list power as the reason for perf cap in GPU-Z or afterburner.

If you think you're setting 2115mhz and it runs 2100mhz, it's because the V-F curve shifts due to temperature under load and your 2115mhz becomes 2100mhz. Best to run heaven or something in the background to heat the GPU to normal load temp when trying to edit the V-F curve.

Could also just be that your card caps out at 2115mhz. Mine routinely crashes at 2130+, albeit at much lower voltages since I'm not shunt modded and limited to 380w. 1.081v for 2100 sounds absurdly high.


----------



## J7SC

ReFFrs said:


> All these cards were Micron memory? You should definitely try a Samsung one.



Given various earlier posts in this thread and elsewhere, some Samsung-equipped 2080 Tis also failed, though if I had the choice I'd go for Samsung. The issue also seems to be that in the initial RTX production runs there was simply more Micron GDDR6 available, so it ended up on more first-run cards - some of which seemed to fail without the VRAM being the issue... though truly 'scientific' explanations and/or data on failure RATES from either NVIDIA or the board vendors have been lacking.



bogdi1988 said:


> I used a bunch of those rotating 90' fittings and those allowed me to move the video card out by a few mm so I could remove the m.2 access panels.
> And 7 m.2 is not overkill  I have 6 populated through various means. I did "throw out" the MSI NVME board and bought the Asus one. It is only 1 slot wide, cools just as well and doesn't need the additional power plug.



Sounds like you're having fun hoarding all those M.2 drives!  :thumb: The ASUS add-in card does have the advantage of being a 1-slot card (like the ASRock one below, though that one needs a separate PCIe connector from the PSU, like the MSI one). I can fit the MSI one in my build in the bottom slot below the two 2080 Tis - while it takes up more than 1 slot, it is also shorter than the other two. (Apart from that, the Core P5 came with one of those long PCIe extenders.) But I don't really need more than the 3 M.2s I have on the mobo in this build, and I'll use the MSI 4x M.2 card in an upcoming build that's more focused on file serving.


----------



## J7SC

Rob w said:


> Vipeax man I cant thank you enough!!!
> I tried all nvflash from techpowerup and none would work (strangely), so downloaded from elsewhere and still not work!
> downloaded from your link and it worked first time? no hitches at all, thankyou.
> I don't know what difference the KFA2 bios would make but something is happening as rig is now pulling 768watts from the wall an extra 154watts!
> SO nv did not lock card down just a dumbass user hahaha.
> thanks again to all.
> now for a bit more testing??:thumb:



That was quite a BIOS-flashing odyssey you were on for weeks! Glad to hear it finally worked - now you have the BIOS you wanted AND the shunt mod to boot.


----------



## Rob w

J7SC said:


> That as quite a bios-flashing odyssey you were on for weeks ! Glad to hear that it finally worked / and now you have the bios you wanted AND the shunt mod to boot


Who knew it would be so simple, hahaha!
Seriously though, yes, I can now flash my card. I will try the KFA2 BIOS next, as with this one, every time the card goes under stress it locks out the HDMI port, grrrr. But shutting down and draining power resets it, so I still have access to reflash - phew! :thumb:


----------



## Vipeax

Rob w said:


> Vipeax man I cant thank you enough!!!
> I tried all nvflash from techpowerup and none would work (strangely), so downloaded from elsewhere and still not work!
> downloaded from your link and it worked first time? no hitches at all, thankyou.
> I don't know what difference the KFA2 bios would make but something is happening as rig is now pulling 768watts from the wall an extra 154watts!
> SO nv did not lock card down just a dumbass user hahaha.
> thanks again to all.
> now for a bit more testing??:thumb:


Good stuff.


----------



## Jpmboy

J7SC said:


> Right you are ! Fortunately, I'm a pack-rat so I even kept the (somewhat dusty) box...just noticed that the screen still has the foil on it after all these years  ...KingPin 2080 Ti-H with 3-Bios (one @ 520w) + KingPin + TiN EVB flash + 'special' Bios = :sonic:


I've had (at least) one ever since the 780 Ti KPEs (had 3). The question will be whether or not TiN posts a firmware update. I've heard rumors EVGA may issue a new controller, but that is really only meaningful IF the GPU scales with voltage (hence the dusty EVBots).


----------



## Jpmboy

Rob w said:


> Didn’t I know it would be so simple hahaha!
> Seriously though yes I can now flash my card, I will try the KFA2 bios now, as with this one every time card goes under stress it locks out the hdmi port grrrr, but closing down and draining power it resets so still have access to reflash,phew!:thumb:


rep the guy fer crissakes.


----------



## Vipeax

https://drive.google.com/open?id=1MZKpPv6twEvmcZp8bwXpHLC8-pISrgwm
nvflash_5.541.0 with the same patch.


----------



## MrTOOSHORT

Anyone know if the Strix 2080tis have Samsung chips?


----------



## Vencenzo

The Gaming X Trio BIOS on my EK Sea Hawk is solid.
Figured the PCB was similar enough.

Only got 15 MHz more on the core, yet my scores went way up... I ran a few benches and noticed that boost was staying much closer to max clocks.
The Sea Hawk EK X BIOS has a 110% PL, and I was seeing about 328 W peak; on the Gaming X Trio's 134% PL I'm seeing 363 W peak to sustain 2160/7500, vs my old 2145/7500.
Temps still not exceeding 48C.

----------



## holeyguy

New to the RTX family and loving it so far. Had some hiccups at first, but got them handled. Love the power.

EVGA 2080 Ti XC


----------



## axiumone

MrTOOSHORT said:


> Anyone know if the Strix 2080tis have Samsung chips?


Yes, they do. I have two. One has micron and the other has samsung. Both are OC versions.


----------



## zack_orner

Vencenzo said:


> The Gaming trio x bios on my EK seahawk is solid.
> 
> Figured pcb was similar enough.
> 
> 
> 
> Only got 15 mhz on core, yet my scores went way up.. I ran a few benches and noticed that boost was staying closer to max clocks a lot more.
> 
> Seahawk ek x bios is PL 110% and I was seeing about 328watt peak, on the gaming trio x %134 PL I'm seeing 363 watt peak to sustain 2160/7500 vs my old 2145/7500.
> 
> Temps still not exceeding 48c.


Nice! Do you have a curve set, or do you just set a % offset on the core?

2700x x470-f gaming 1 tb 970 evo gskiils 3200cl14 msi 2080 ti gaming x trio


----------



## JMCB

So do we want Micron or Samsung chips? I ordered my 2080 Ti FE, and honestly I'll probably return it if it doesn't have the desired memory. Coming from 1080 Ti SLI that worked flawlessly - I just had that upgrade itch...


----------



## ReFFrs

JMCB said:


> So do we want Micron or Samsung chips? I ordered my 2080 TI FE and honestly I'll probably return it if it's not the desired memory on there. Coming from 1080 TI SLI that worked flawlessly, but just had that upgrade itch...


Many people who already have Micron would say it doesn't matter (for self-calming purposes), but in fact only Samsung is the right way to go with 2080 Ti's

I have Aorus 2080 Ti Waterforce manufactured in Dec 2018 and it has Samsung memory onboard although it's based on the same PCB as in video below. Perfect OC btw to 16140 MHz on mem. 

Most newer cards should have Samsung, but the problem is there is a mix of different cards on the market manufactured at different dates, so you cannot be sure when purchasing a card what version you will get.


----------



## Vencenzo

zack_orner said:


> Nice, do you have a curve set, or a set % for core?
> 
> 2700X, X470-F Gaming, 1 TB 970 EVO, G.Skill 3200 CL14, MSI 2080 Ti Gaming X Trio


I tried out OC Scanner in Afterburner for the curve and it's functional (kind of surprised). My core voltage % is 7 for a potential 1.09 V, but I haven't seen it yet. My temps are pretty low from overkill cooling, which equates to lower thermal resistance and typically less voltage required. Going to try messing around with VRAM on 135% PL (406 W) tonight; it seems my core will not do over 2160 regardless.


----------



## Koplari

Hulk1988 said:


> Hello,
> 
> quick question:
> 
> Gaming OC | 3 Fan | 2.5 Slot | 287mm | RGB | 16 Power Phases | 1650 MHz Boost | 300/366 W | Reference PCB | EAN 4719331303563 | PN GV-N208TGAMING OC-11GC
> 
> I need a Waterblock for it. It is named as Reference PCB but no websites listed the card as a support for their cooler. Here you can see the small differences: https://i.imgur.com/dXlTpUs.gif
> 
> Which waterblock do I need to buy for it?
> 
> Thank you


Got my 2080 Ti Gaming OC yesterday, now waiting for the block to arrive. I ordered the EK Vector 2080 Ti block, as it is (almost) a reference PCB, but I have to machine a small channel for the fan header on the PCB. You can get a few blocks from Alphacool and EK:

https://www.alphacool.com/shop/new-...dia-geforce-rtx-2080/2080ti-m02-mit-backplate

https://www.ekwb.com/shop/ek-fc2080-rtx-ti-classic-nickel

The block for the Gaming OC has to have that small indent near the PCIe power connectors, behind that threaded hole.


----------



## Canson

axiumone said:


> Yes, they do. I have two. One has Micron and the other has Samsung. Both are OC versions.


Could you please switch over to Q-mode with the BIOS switch, dump the BIOS version with GPU-Z, and upload it for me?


----------



## JustinThyme

ReFFrs said:


> Many people who already have Micron would say it doesn't matter (for self-calming purposes), but in fact only Samsung is the right way to go with 2080 Ti's
> 
> I have Aorus 2080 Ti Waterforce manufactured in Dec 2018 and it has Samsung memory onboard although it's based on the same PCB as in video below. Perfect OC btw to 16140 MHz on mem.
> 
> Most newer cards should have Samsung, but the problem is there is a mix of different cards on the market manufactured at different dates, so you cannot be sure when purchasing a card what version you will get.


I didn't experience this, as my Strix cards came with Samsung. Didn't need to tear it down to see what the memory is; GPU-Z displays the info. Loved the part at the end that got bleeped out, about underclocking the memory at T-30 days to expiration of warranty...

With Samsung memory I've run it up to +700 and it doesn't even break a sweat. May even go further, but I've been concentrating on the core speeds. 
It's a no-brainer in the memory dept. Samsung comes out on top 100% of the time, whether it's VRAM, SDRAM or SSDs. Micron and Hynix haven't been able to match. Micron owns Crucial and Hynix is owned by Hyundai.


----------



## zack_orner

Vencenzo said:


> I tried out OC Scanner in Afterburner for the curve and it's functional (kind of surprised). My core voltage % is 7 for a potential 1.09 V, but I haven't seen it yet. My temps are pretty low from overkill cooling, which equates to lower thermal resistance and typically less voltage required. Going to try messing around with VRAM on 135% PL (406 W) tonight; it seems my core will not do over 2160 regardless.


Someone in here said to find your stable OC, then click on the curve and add like 10-15 MHz from 1.056 V to 1.096 V; you should see a little more improvement. That's how I have mine set up atm.

2700X, X470-F Gaming, 1 TB 970 EVO, G.Skill 3200 CL14, MSI 2080 Ti Gaming X Trio
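The trick described above amounts to shifting only the top of the voltage/frequency curve. A minimal sketch of that transform, with hypothetical curve points (the voltage range matches what was mentioned; the clocks are made up, not anyone's actual card):

```python
# Hypothetical V/F curve points: (voltage in V, clock in MHz)
curve = [(1.000, 1980), (1.031, 2040), (1.056, 2085), (1.075, 2100), (1.093, 2115)]

def bump_top(points, v_from, v_to, offset_mhz):
    """Add offset_mhz to every point whose voltage falls in [v_from, v_to]."""
    return [(v, mhz + offset_mhz if v_from <= v <= v_to else mhz)
            for v, mhz in points]

# "add like 10-15 from 1.056 to 1.096": bump only the upper points by 15 MHz
tweaked = bump_top(curve, 1.056, 1.096, 15)
print(tweaked)
```

The lower points are left alone, which is why this tends to be more stable than a flat core offset: only the region that gets real voltage headroom is pushed.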


----------



## pewpewlazer

ReFFrs said:


> Many people who already have Micron would say it doesn't matter (for self-calming purposes), but in fact only Samsung is the right way to go with 2080 Ti's
> 
> I have Aorus 2080 Ti Waterforce manufactured in Dec 2018 and it has Samsung memory onboard although it's based on the same PCB as in video below. Perfect OC btw to 16140 MHz on mem.
> 
> Most newer cards should have Samsung, but the problem is there is a mix of different cards on the market manufactured at different dates, so you cannot be sure when purchasing a card what version you will get.
> 
> https://www.youtube.com/watch?v=t5memuI5WD4


Please don't spread this misinformation in here. Cards with both Samsung and Micron have died. There has been no official answer on the cause of the deaths (other than Nvidia making a vague mention about some bad samples "escaping" QC or something like that), nor has there been any conclusive reason determined by anyone. The only conclusive statement we can make is that it is NOT specific to memory chip vendor.


----------



## ReFFrs

pewpewlazer said:


> The only conclusive statement we can make is that it is NOT specific to memory chip vendor.


Wrong, because there may be several reasons cards die, and one of them (probably the most common) can be faulty Micron memory. Some cards with Samsung memory died as well, but that may have happened for a different reason. In small numbers, 1080 Ti, 1080 and 980 cards were dying as well, unaffected by memory issues.


----------



## Rob w

Just been doing some benching and thought I would post results.
I flashed the original BIOS back to the card, so the only mods are the 8 mΩ shunt mod and the chiller (needed, weather warm today).
This seems to be as far as I can push it stable in this test (not just a one-off); I had to set voltage higher in order for the clock to hit 1.093 V.
Memory and core I have had higher, but found it unstable.
Next will be to flash the KFA2 BIOS again and re-bench to compare, so here's the result.
Held 2160 solid through the test.
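As an aside on what a shunt mod like that actually does: the added resistor sits in parallel with the stock current-sense shunt, so the controller under-reads power and the effective limit rises. A quick sketch of the arithmetic, assuming a hypothetical 5 mΩ stock shunt (verify the actual value on your own PCB before soldering anything):

```python
def parallel(r1, r2):
    """Effective resistance of two resistors stacked in parallel."""
    return (r1 * r2) / (r1 + r2)

stock = 0.005   # ohms; hypothetical stock shunt value, check your PCB
added = 0.008   # ohms; the 8 mOhm resistor soldered on top

effective = parallel(stock, added)
# The power controller still assumes the stock value, so it under-reads:
reported_fraction = effective / stock

print(f"effective shunt: {effective * 1000:.2f} mOhm")
print(f"card reports {reported_fraction:.0%} of the real draw")
```

With these assumed values the card would report roughly 62% of its true power, i.e. a "250 W" reading would really be around 400 W, which is why shunt-modded cards need serious cooling.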


----------



## Vipeax

JustinThyme said:


> With Samsung memory Ive run it up to +700 and it doesn't even break a sweat.


Even Micron memory runs at +1000 easily. Mine (also Micron, pretty much a day 1 card) runs at +1280 even, and has been doing so since September. I'd rather have Micron memory with above-average clock speeds than Samsung because of yet-to-be-confirmed rumours.


----------



## Qb9

Anyone have the stock BIOS for GV-N208TAORUSX W-11GC (the AIO one)? I forgot to back up mine before updating to the one listed on the Aorus support page, which ended up limiting power to 110%...

I actually need any BIOS that would allow me to go to a 366W power limit; I only found BIOSes for the air-cooled and the WB Xtreme versions of the card, and nothing specific to my model.

Anything would help, thanks!


----------



## ReFFrs

Qb9 said:


> Anyone have the stock BIOS for GV-N208TAORUSX W-11GC (the AIO one)? I forgot to back up mine before updating to the one listed on the Aorus support page, which ended up limiting power to 110%...
> 
> I actually need any BIOS that would allow me to go to a 366W power limit; I only found BIOSes for the air-cooled and the WB Xtreme versions of the card, and nothing specific to my model.


A 110% power limit equals 366W on the Aorus Waterforce BIOS. You should check your real power limit in watts via the nvidia-smi command.


----------



## iamjanco

With respect to the question of whether Micron or Samsung vram is better on an RTX card, I haven't seen 100% conclusive proof of what's really been causing the RTX ON/RTX OFF failures yet, and much of what's available via the 'net is more speculative than not. Not sure that we'll ever really know for sure...


----------



## Qb9

ReFFrs said:


> A 110% power limit equals 366W on the Aorus Waterforce BIOS. You should check your real power limit in watts via the nvidia-smi command.


I checked, it says 331W... Any ideas on what BIOS to use to unlock the power limit to at least 360W or so?


----------



## Kaltenbrunner

You 2080 Ti snobs think you're so cool with your silly RGB lights, LOL

Damn they are expensive. If I had cut out parties for the last 2 months, I could have gotten a stock 2080. But that would have been a lot louder than my top-end 2070. A 2080 would last a little longer FPS-wise, but I'm sure I'd upgrade in under 2 yrs anyway, so no worries there either for 1440p 144Hz.




iamjanco said:


> With respect to the question of whether Micron or Samsung vram is better on an RTX card, I haven't seen 100% conclusive proof of what's really been causing the RTX ON/RTX OFF failures yet, and much of what's available via the 'net is more speculative than not. Not sure that we'll ever really know for sure...


I wonder how much testing any of these companies, or annual-release type tech companies, really do? I wish I could have a tour of these places; it would be most fascinating.


----------



## ReFFrs

Qb9 said:


> I checked, it says 331W... Any ideas on what BIOS to use to unlock the power limit to at least 360W or so?


Strange. Run cmd as admin and show me the output of this command:

"C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe" -q -d power


----------



## Qb9

ReFFrs said:


> Strange. Run cmd as admin and show me the output of this command:
> 
> "C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe" -q -d power


Here it is; pay no mind to the first GPU, it's a 1080 Ti.

It shows that the max is 366W but 331W is enforced.

Edit: I think I may have screwed something up. In the BIOS package I downloaded from GIGABYTE, it says there are two BIOSes: one for when you're connected via DPx3 / HDMIx1 / USB-C, and another for the additional HDMIx2. I updated the first one and didn't update the second, and the only file available was F2 (I had no idea what that meant at the time, and now I barely have an idea). So now when I check the BIOS for the additional HDMIx2, it says F10 in the Aorus Engine. Did I screw something up? Because when I first connected to the other HDMIs and restarted, I got a BSOD once, and the 2nd boot was sluggish but it worked somehow. Am I not supposed to use F2 and F10 interchangeably? Or what's the idea anyway...


----------



## ReFFrs

Qb9 said:


> It shows that the max is 366W but 331W is enforced


So you should set the power limit to max in an OC utility (for example MSI Afterburner) and hit the Apply button, then check again.


----------



## Qb9

ReFFrs said:


> So you should set the power limit to max in an OC utility (for example MSI Afterburner) and hit the Apply button, then check again.


Oh I see, thank you for clarifying this to me... So 331W was actually 100%... and 110% of that is 366W.
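For anyone following along, the percentage math works out as below. Note it is a sketch of the relationship, not the exact BIOS table: the BIOS rounds its power steps, which is why nvidia-smi reports 366 W rather than the exact product:

```python
default_tdp_w = 331   # 100% on this particular BIOS (from nvidia-smi)
max_limit_pct = 110   # slider maximum exposed by the BIOS

# The OC-utility slider percentage is relative to the default TDP:
max_watts = default_tdp_w * max_limit_pct / 100
print(f"{max_limit_pct}% of {default_tdp_w} W = {max_watts:.1f} W")
# The BIOS's own power table rounds this to 366 W on this card.
```

So a "110%" slider on one card and a "134%" slider on another are not comparable on their own; the watt figure is what matters.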


----------



## Qb9

I think I may have screwed something up. In the BIOS package I downloaded from GIGABYTE, it says there are two BIOSes: one for when you're connected via DPx3 / HDMIx1 / USB-C, and another for the additional HDMIx2. I updated the first one and didn't update the second, and the only file available was F2 (I had no idea what that meant at the time, and now I barely have an idea). So now when I check the BIOS for the additional HDMIx2, it says F10 in the Aorus Engine. Did I screw something up? Because when I first connected to the other HDMIs and restarted, I got a BSOD once, and the 2nd boot was sluggish but it worked somehow. Am I not supposed to use F2 and F10 interchangeably? Or what's the idea anyway...


----------



## mercinator16

Does anyone here have an EVGA XC Black with the original bios backed up? If so, would said person mind uploading it?


----------



## axiumone

Canson said:


> Could you please switch over to Q-mode with the BIOS switch, dump the BIOS version with GPU-Z, and upload it for me?


From which card? The samsung one?


----------



## J7SC

Jpmboy said:


> I've had (at least) one ever since the 780 Ti KPEs (had 3). The thing will be whether or not Tin posts a firmware update. I've heard rumors EVGA may issue a new controller, but that really is only meaningful IF the GPU scales with voltage (hence the dusty EVBOTs  )



...know what you mean about the 780 TiK  ...per below, that's now a back-up system for the daily beast of burden at my home-office, but SLI and all still runs just fine, especially on 1080p...those cards were the 2080 Ti of their day...:wheee:


----------



## J7SC

iamjanco said:


> With respect to the question of whether Micron or Samsung vram is better on an RTX card, I haven't seen 100% conclusive proof of what's really been causing the RTX ON/RTX OFF failures yet, and much of what's available via the 'net is more speculative than not. Not sure that we'll ever really know for sure...



...even just in this thread, there certainly are posts about 2080 Ti failures with both Micron and Samsung / just seems that in the early days, more cards came with Micron - and no matter what failed on those early models (VRAM or other), this morphed into a weird set of 'facts according to the internet'...fueled along by lack of real information by foundry, manufacturer and vendors

Anyhow, current 2080 Ti world record score in XOC HWBot 3DM Firestrike and other subs seem to be held by Micron card(s), for now, until a Samsung-equipped card takes it, may be... :ninja: :


Spoiler



https://hwbot.org/submission/4071994_ogs_3dmark___fire_strike_geforce_rtx_2080_ti_41175_marks/


----------



## JMCB

Anyone here running 2x 2080 Tis on air? I'm still waiting for one 2080 Ti but I'm about to pull the trigger on a second. I eventually plan on watercooling them, but I want to run them without removing the coolers for a while to see if they are stable. Both are FE versions.


----------



## kx11

hey guys




do you think this guy might be cheating?! His score is higher than mine but his specs are not that powerful, I mean an AiO on the CPU and the GPU is air cooled!!!




https://www.3dmark.com/compare/fs/18443089/fs/18446681


----------



## MrTOOSHORT

kx11 said:


> hey guys
> 
> 
> 
> 
> do you think this guy might be cheating?! His score is higher than mine but his specs are not that powerful, I mean an AiO on the CPU and the GPU is air cooled!!!
> 
> 
> 
> 
> https://www.3dmark.com/compare/fs/18443089/fs/18446681


Looks fine to me. 5.1 GHz can be had on an air cooler with a good 9900K. On an air-cooled 2080 Ti, 2100 MHz can be had with 100% fan speed; open the window in the room when it's cold outside, the colder the better.

Also, in FS Ultra I couldn't top my 5500 MHz 9900K score with my new 9980XE @ 5.2 GHz. So a 9900K runs better here. In every other 3DMark, the 9980XE came out on top.


----------



## kx11

MrTOOSHORT said:


> Looks fine to me. 5.1 GHz can be had on an air cooler with a good 9900K. On an air-cooled 2080 Ti, 2100 MHz can be had with 100% fan speed; open the window in the room when it's cold outside, the colder the better.
> 
> Also, in FS Ultra I couldn't top my 5500 MHz 9900K score with my new 9980XE @ 5.2 GHz. So a 9900K runs better here. In every other 3DMark, the 9980XE came out on top.



Yeah, FS is a bit dated by now; it crashed on me a lot while TS Extreme didn't at the same OC values.


----------



## VPII

kx11 said:


> Yeah, FS is a bit dated by now; it crashed on me a lot while TS Extreme didn't at the same OC values.


Hi @kx11, I found that the clocks I run Time Spy at (up to 2160 MHz core; even 2175 worked, but only once thus far) will not work for FS Ultra... I had to drop the core to 2145 MHz, which gave me a nice result seeing that I am running a 2700X at 4.28 GHz. Don't go by the CPU speed stated by Futuremark; they always get the Ryzen speeds wrong.

https://www.3dmark.com/fs/18099376


----------



## ReFFrs

kx11 said:


> hey guys
> 
> do you think this guy might be cheating?! His score is higher than mine but his specs are not that powerful, I mean an AiO on the CPU and the GPU is air cooled!!!
> 
> https://www.3dmark.com/compare/fs/18443089/fs/18446681


Are you kidding? Your opponent has a higher mem OC, higher GPU and CPU OC, and also the superior Ring cache bus on Coffee Lake, while your Mesh bus is not suited for gaming and comes from a server processor lineage. It's you who could be cheating, as he should outperform you by a bigger margin. Anyway, I can beat you easily with an 8700K @ 5.2 GHz + AIO-cooled 2080 Ti @ 2055 core / 16500 mem.


----------



## Qb9

Anyone care to explain to me the difference between the F2 and F10 BIOSes, especially for the Aorus Xtreme Waterforce? I think it may have shipped with F10, and I flashed the BIOS on their download page, which turned out to be F2. Then I realized that the other BIOS, for the additional HDMI outputs, was F10. Everything seems to be normal unless I decide to use the other HDMI outputs after restarting, which gives me a BSOD on the first boot; then it boots correctly and the drivers kick in...

Should I flash the other BIOS with F2 as well? Did I make a mistake and shouldn't have flashed the F2 BIOS to begin with? I would really appreciate it if someone could help me with this...

I totally forgot to back up the original BIOS too, which is foolish to say the least; luckily the card seems to be working correctly though.


----------



## Vencenzo

Ran into the same problem as the people on page 620, on my EK X Seahawk running the Trio BIOS. Could either OC the core or the VRAM well, but not both. Seems to be limited to 363 W and 1.068 V even though it's supposed to be "406 W".
Going to try the Galax 380 W.


----------



## Rob w

Just as a follow-up from my BIOS flash, an interesting result:
first is the stock NV TU102 BIOS, max I can get;
second is the Galax KFA2 380-watt BIOS, max I can get.


----------



## MrTOOSHORT

Hi Rob W, interesting there. I have a feeling it could be the tight timings of the Samsung chips with the stock BIOS. The Galax BIOS has loose timings for Micron memory, making the Samsung chips run slower. Can you match the RAM speeds for both screens, to really compare? That is probably the reason for the difference, actually.

But I still wonder about the theory of a shunted stock Samsung BIOS being faster than just the Galax BIOS on a Samsung card.


----------



## J7SC

Rob w said:


> Just as a follow-up from my BIOS flash, an interesting result:
> first is the stock NV TU102 BIOS, max I can get;
> second is the Galax KFA2 380-watt BIOS, max I can get.





MrTOOSHORT said:


> Hi Rob W, interesting there. I have a feeling it could be the tight timings of the Samsung chips with the stock BIOS. The Galax BIOS has loose timings for Micron memory, making the Samsung chips run slower. Can you match the RAM speeds for both screens, to really compare? That is probably the reason for the difference, actually.
> 
> But I still wonder about the theory of a shunted stock Samsung BIOS being faster than just the Galax BIOS on a Samsung card.



...'grain of salt' > but from what I have read, all RTX card BIOSes have to cover Hynix, Micron and Samsung VRAM...presumably with slightly different timings and other parameters for each, depending on what the stock BIOS's 'specific version' reports and assumes is actually physically on the card when delivered. Flashing over a BIOS from another vendor which assumed a different set of VRAM chips would actually make a difference, much more so than with GPU parameters which cover the same chip (well, apart from 'A' vs non-A etc). In any case, it would be interesting to find out if a 'native' Micron-equipped card would pick up higher scores when flashed with a 'native' Samsung VRAM BIOS, or vice versa...it might even be lower, depending on what unique timings and such each VRAM maker specs. :2cents:


----------



## JustinThyme

I've already run the Galax BIOS and just went back to stock. I found that, within a margin of error, what I'm going to get is what it's going to be, right around the stock power limit at +125. I gained maybe 1% on Galax. So I either hit the power limit and throttle, or don't hit it and, not much past stock, it crashes. So I'm just running stock, a solid 2150 on that, and lower temps; don't even need to run up the Vcore. +150/+700 and power limit to +125 and I'm good. Peaks just below the power limit, so no throttling. Think I'm set. Gonna lock it in and rip the knob off.


----------



## kx11

Lightning Z block is now up for pre-order




https://shop.bitspower.com/index.php?route=product/product&path=67_102_349&product_id=7204


----------



## EdgeCrusher86

*[TOOL] Custom DSR Tool by Orbmu2k (NVIDIA Inspector) - up to 12.00x DSR works fine:*
https://www.forum-3dcenter.org/vbulletin/showthread.php?p=11935309#post11935309

A new nice toy for enthusiasts. 



I made some screenshots: 

*UWQHD (3440x1440 native) - 12.00x DSR (11919x4989) - PCars 2 - SMAA Ultra + max. Details:*

https://www24.zippyshare.com/v/DQk7Q2rm/file.html
https://www24.zippyshare.com/v/tmja2s7q/file.html
https://www24.zippyshare.com/v/LiRbV8re/file.html



*UWQHD (3440x1440 native) - 9.00x DSR (10320x4320) - max. Details each:*

*AC : Origins (AA low):*
https://www117.zippyshare.com/v/7jpY7Pbb/file.html

*BF V (that ugly lowres palm though):*
https://www118.zippyshare.com/v/rwa0EaWg/file.html

*TW3*:
https://www118.zippyshare.com/v/WcjXUhVi/file.html


X-times more GPU horsepower and 24GB of VRAM would be nice to have. 
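As a side note on where those DSR render resolutions come from: the DSR factor multiplies the total pixel count, so each axis scales by the square root of the factor. A quick sketch (the tool's reported 12.00x figures differ slightly from the raw math because of rounding/alignment in the driver):

```python
import math

def dsr_resolution(width, height, factor):
    """Render resolution for a DSR factor; the factor scales total pixels,
    so each axis scales by sqrt(factor)."""
    scale = math.sqrt(factor)
    return round(width * scale), round(height * scale)

print(dsr_resolution(3440, 1440, 9.0))    # 3x per axis from 3440x1440
print(dsr_resolution(3440, 1440, 12.0))   # ~3.464x per axis
```

So 9.00x on a 3440x1440 panel is exactly 3x each axis, which is why it downsamples so cleanly; non-square factors like 12.00x land on odd intermediate resolutions.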


*Additionally there is also a new RTSS based G-SYNC indicator by Orbmu2k:*
https://www.forum-3dcenter.org/vbulletin/showthread.php?t=593443


----------



## Martin778

kx11 said:


> Lightning Z block is now up for pre-order
> 
> 
> 
> 
> https://shop.bitspower.com/index.php?route=product/product&path=67_102_349&product_id=7204


Shame that it's still useless for OC as the card has a low power limit of 360W.


----------



## mackanz

I have no idea how you guys do it, but I can't get any higher than this without the power limit constantly hitting max during a Firestrike Extreme. Temps never over 43°C. 380 W BIOS.










https://www.3dmark.com/fs/18486279


----------



## Canson

axiumone said:


> From which card? The samsung one?


I have solved it. Thanks anyway.
@JustinThyme sent me the Q-mode BIOS. I actually don't know what card his was, but the BIOS version worked perfectly. The fans stop at idle.


----------



## VPII

I just tried something... I flashed my card back to the stock BIOS for the Palit GamingPro OC and ran Time Spy with the same overclock I usually run. Then I went about running it with the KFA 380-watt BIOS. I checked with NVInspector what the actual boost clock is, and both were at 2160 MHz. Checking the GPU-Z monitoring log, the highest was 2115 MHz with the TDP limit already hit hard; some parts even read 127% when it is set to 126%. With the KFA BIOS I get more than 200, almost 250, marks more and the TDP limit is pretty safe. So I'll stay with the KFA 380 BIOS.

First link is with the Stock Palit bios, second with the KFA bios.

https://www.3dmark.com/spy/6378336

https://www.3dmark.com/spy/6378453

I'll be honest, my best is far better, but it is a little difficult to recreate with 30 to 35°C ambient at present here in Cape Town, South Africa... Below is my best.

https://www.3dmark.com/spy/5965452

This run seems to be a once-off, but mostly due to temps I'd say.


----------



## maxmix65

Martin778 said:


> Shame that it's still useless for OC as the card has a low power limit of 360W.


Same problem here.
Where can I find an unlocked LN2 BIOS?


----------



## Nizzen

maxmix65 said:


> Same problem here.
> Where can I find an unlocked LN2 BIOS?


Maybe a tip?

https://xdevs.com/guide/evga_2080tixc/


----------



## Renegade5399

Currently shunt mod on stock BIOS is looking to be the route I'll be taking. Other BIOS with higher PL are hit and miss. Almost like there's some other protections going on behind the scenes.


----------



## maxmix65

Nizzen said:


> Maybe a tip?
> 
> https://xdevs.com/guide/evga_2080tixc/


Thank you.
I read that you need an Extreme version of MSI Afterburner.


----------



## Rob w

Renegade5399 said:


> Currently shunt mod on stock BIOS is looking to be the route I'll be taking. Other BIOS with higher PL are hit and miss. Almost like there's some other protections going on behind the scenes.


It's now the route I've gone; having tried a few BIOSes now, I'm finding the card is better with the original BIOS + shunt. 
Seems the memory timings are different on other BIOSes?


----------



## Frozburn

Has anyone flashed a different BIOS on an ASUS Strix 2080 Ti O11G-GAMING, or is that not possible at all? Been running 2100 / 8300 for weeks and I'm curious if it's possible to go higher, or is this card just doomed?


----------



## J7SC

Igor's Lab added the Alphacool Eisblock review for the 2080 Ti WITH the 380 W BIOS (!)...block performance seems 'decent / middle ground', depending on whether you're talking about GPU, VRM or VRAM > certainly a product for the 2080 Ti to keep on the price/performance radar, even if it does not match the top performers / outperforms some of the rest...

YouTube vid is in German...infrared pics and comparison tables start at about 12min 45sec


----------



## JustinThyme

J7SC said:


> Igor's Lab added the Alpha Cool Eisblock block review for the 2080 Ti WITH 380w Bios (!)...block performance seems 'decent / middle ground' , depending on whether you're talking about GPU, VRM or VRAM > certainly a product for the 2080 TI to keep on the price/performance radar, even if it does not match the top performers / outperforms some of the rest...
> 
> YouTube vid is in German...infrared pics and comparison tables start at about 12min 45sec


Don't really trust Igor much. He obviously uses a chiller, and the reference temp in his vids shifts to a different point.

Regardless, it seems that so long as you get decent temps these days with a non-reference block you are good. The so-called top performers may get 1°C better, but that's about it. Just don't buy a Barrow block!!! Utter trash!!!


----------



## JustinThyme

Canson said:


> I have solved it. Thanks anyway.
> 
> @JustinThyme sent me the Q-mode BIOS. I actually don't know what card his was, but the BIOS version worked perfectly. The fans stop at idle.


That was Q mode BIOS Strix 2080Ti  
Glad to hear you are all sorted out now.


----------



## JustinThyme

Frozburn said:


> Has anyone flashed to a different BIOS on an ASUS STRIX 2080 Ti O11G-GAMING or is that not possible at all? Been using 2100 / 8300 for weeks and I'm curious if it's possible to go higher or is this card just doomed


2100 isn't doomed. Better than Pascal and reference.
I tried a higher power limit BIOS and got a little more, but went back to stock. Silicon lottery and heat are what it's about with these chips. I can do +165 on the core, but it hits the power limit in some apps. I get a nice happy medium at 2100 MHz: it runs cooler, and my Time Spy bench is a whole 50 points less (hovering below the power limit) than 2250 with the power limit kicking in. 

With the GALAX BIOS, anything more and it just crashes.


----------



## Glerox

I ordered the Lotan block for the Lightning Z. It's available for pre-order since today 😉

Hope it will help me stay at 2100 MHz. For those complaining about being "stuck" at 2100 MHz, know that you are really lucky.
I had an FE with a waterblock and was stuck at 2040 MHz, and now the Lightning Z is stuck at 2070 MHz...

I did manage to get on the hall of fame because it's winter here; I just opened the windows haha!

I will update results once I get the new block!


----------



## pewpewlazer

mackanz said:


> I have no idea how you guys do it, but I can't get any higher than this without the power limit constantly hitting max during a Firestrike Extreme. Temps never over 43°C. 380 W BIOS.
> 
> 
> https://www.3dmark.com/fs/18486279


1.05 V for 2100 MHz? That voltage is way too high. I have no idea how some people are getting 2160-2200 MHz through Firestrike, or any benchmark, period. But even with a 380 W PL, I've found that 1.031-1.037 V is about the highest voltage I can run for most benchmarks without it PL throttling like crazy. I've seen others make similar comments about that voltage range being about the highest they can run before throttling as well.


----------



## BudgieSmuggler

HeliXpc said:


> Hey guys, I am getting 84C under full load with fan at 100%, is this normal? this is with +150 on core and +800 on memory, this was with tomb raider, still a pretty demanding game, the 2013 version.



Hi. The short answer is no, that's not normal. Even with my fans peaking at 80% at around 65 degrees, the card won't go much higher than that. And that's at 2100 MHz and +1000 memory on a Zotac 2080 Ti AMP. Airflow in your case might not be good, so heat is building up. What is your ambient room temp? Have you tried taking the side panel off your tower to let the built-up heat escape?


----------



## Renegade5399

BudgieSmuggler said:


> Hi. The short answer is no, that's not normal. Even with my fans peaking at 80% at around 65 degrees, the card won't go much higher than that. And that's at 2100 MHz and +1000 memory on a Zotac 2080 Ti AMP. Airflow in your case might not be good, so heat is building up. What is your ambient room temp? Have you tried taking the side panel off your tower to let the built-up heat escape?


I agree with this. Even these base-model XC Blacks hit about 60°C gaming load and 68°C bench load set to 2050/16000 @ 1.025 V GPU. That's the best speed-to-power-limit setting I could get for now. Fan curve set in MSI AB for 40% until 50°C, then 70% until 65°C, at which point I have them go to 100%.
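That kind of stepped curve is easy to write out as a piecewise function; a sketch of the breakpoints described above, not MSI Afterburner's actual interpolation (AB can also blend between points rather than stepping):

```python
def fan_percent(temp_c):
    """Stepped fan curve: 40% below 50C, 70% below 65C, then 100%."""
    if temp_c < 50:
        return 40
    if temp_c < 65:
        return 70
    return 100

# Sanity-check the three regions of the curve
for t in (45, 55, 70):
    print(f"{t}C -> {fan_percent(t)}%")
```

The big jumps at the breakpoints are the trade-off of a stepped curve: quiet at idle, but the fans audibly ramp the moment a threshold is crossed.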


----------



## krizby

Renegade5399 said:


> I agree with this. Even these base model XC Blacks hit about 60°C gaming load and 68°C bench load set to 2050/16000 @ 1.025vGPU. That's the best speed to power limit settings I could get for now. Fan curve set in MSIAB for 40% until 50°C, then 70% until 65°C at which point I have them go to 100%.


Try undervolting to 0.900 V (~1890 MHz ± 15 MHz with your sample); you won't lose that much performance in game, but fan noise will be considerably better.


----------



## JustinThyme

pewpewlazer said:


> 1.05v for 2100mhz? Voltage is way too high. I have no idea how some people are getting 2160-2200mhz through firestrike or any benchmark period. But even with a 380w PL I've found that 1.031-1.037v is about the highest voltage I can run for most benchmarks without it PL throttling like crazy. I've seen others make similar comments about that voltage range being about the highest they can run before throttling as well.


Don't give the frequency reported in 3DMark much thought; it's reporting the peak, at least that's what I'm seeing on mine. It's reporting mine anywhere between 2150 and 2250, yet my OC has never been past 2150, and the last one showed 2130 when my OC is sitting at 2100. 

https://www.3dmark.com/spy/6182114


----------



## ducky083

Hi all,

Does somebody have the MSI RTX 2080 Ti SEA HAWK X HYBRID?

I would like to try this bios on my card.

Thanks !


----------



## navjack27

Anybody have the original EVGA 2080 Ti XC Black VBIOS file backed up that they could throw my way? 11G-P4-2282-KR
Not the upgraded one but the original shipping VBIOS.

I figured out how to finagle the updated BIOS out of the update.exe file from EVGA... don't ask... it involved hex editing...

https://www.techpowerup.com/vgabios/208751/208751


----------



## J7SC

JustinThyme said:


> Don't really trust Igor much. He obviously uses a chiller, and the reference temp in his vids shifts to a different point.
> 
> Regardless, it seems that so long as you get decent temps these days with a non-reference block you are good. The so-called top performers may get 1°C better, but that's about it. Just don't buy a Barrow block!!! Utter trash!!!



Whether he uses a chiller or not doesn't really matter to me, as long as it is consistent when testing different block models. His lab is well equipped (some vids on that), but at the end of the day, what I'm looking for in his tests / infrared pics and tables is not so much the often not-so-dramatic differences between different GPU blocks, but whether within a test pic of a given block, there are some spots (i.e. VRM 1;2, various VRAM locations) where the temp delta is significant compared to other spots on that same GPU block.


----------



## shiokarai

JustinThyme said:


> Don't really trust Igor much. He obviously uses a chiller and the reference temp in his vids shifts to a different point.
> 
> Regardless, it seems that so long as you get decent temps these days with a non-reference block you are good. The so-called top performers may get 1C better, but that's about it. Just don't buy a Barrow block!!!!!!!!!!!!! Utter trash!!!


What's wrong with him using the chiller? The loop is the same for all the blocks tested so what's the problem? Is that a reason to not trust somebody? Please enlighten me. Also, please watch carefully. There is more than a mere 1 degree diff. between blocks...


----------



## JustinThyme

J7SC said:


> Whether he uses a chiller or not doesn't really matter to me, as long as it is consistent when testing different block models. His lab is well equipped (there are some vids on that). At the end of the day, what I'm looking for in his tests / infrared pics and tables is not so much the often not-so-dramatic differences between GPU blocks, but whether, within a test pic of a given block, there are spots (i.e. VRM 1/2, various VRAM locations) where the temp delta is significant compared to other spots on that same GPU block.


That's just the thing. He isn't consistent. There is usually a number at the top of his videos that should be referencing the GPU. I saw a comparison between 1080 Ti cards with Phanteks vs EK vs Watercool. The reference number at the top was the coolest part of the board on the Watercool, the GPU on the Phanteks, and the hottest part of the EK block, with the conclusion being the Heatkiller beat out an EK block by 20C. 
Add to all of that seeing his test setup shooting an IR camera through glass or Plexiglas........ IR does not travel through glass, plexi or acrylic; first thing I learned in thermography school. 

So I choose not to consider his findings in any of my decisions.

The testing done by TechPowerUp showed just that: the 3 blocks I was considering had 1C difference between the top and the bottom, with the middle one literally in the middle with a 0.5C split.


Watercool block referencing GPU







EK block referencing VRMs







The casual observer is watching the number at the top, in which case who would buy an EK block with a 67C temp?


----------



## ducky083

*bios ti amp extreme*

Hi, is it possible to flash a reference PCB card with the Zotac AMP Extreme BIOS, which is from a custom PCB?



Someone on Reddit says he flashed his 2080 Ti AMP to the AMP Extreme BIOS and it works...


What do you think about that?


The BIOS file is exactly the same size (1023 KB) as a reference PCB BIOS file...


----------



## NewType88

HeliXpc said:


> Hey guys, I am getting 84C under full load with fan at 100%, is this normal? this is with +150 on core and +800 on memory, this was with tomb raider, still a pretty demanding game, the 2013 version.


My FTW3 would hit 80c with 100% fan speed too. Define S2 with Noctua case fans. Stock TIM job looked fine. I have no idea why it was so hot. It's on water now though.


----------



## maxmix65

NewType88 said:


> My ftw3 would hit 80c with 100% fan speed too. Define s2 with noctua case fans. Stock Tim job looked fine. I have no idea why it was so hot. It’s on water now though.


This is my Lightning Z, 2560x1440 ultra settings. I use a custom fan curve in MSI Afterburner: 40 degrees Celsius = 58% speed, 50 degrees = 68%, 60 degrees = 78%, etc.
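Those curve points amount to simple linear interpolation between (temperature, fan %) pairs; a rough sketch of how a curve like that maps temp to duty cycle (function and defaults are mine, purely illustrative; Afterburner's real curve handling adds hysteresis and an update period):

```python
def fan_speed(temp_c, curve=((40, 58), (50, 68), (60, 78))):
    """Linearly interpolate a fan duty % from (temp C, fan %) curve points."""
    if temp_c <= curve[0][0]:
        return curve[0][1]          # below the first point: hold the floor speed
    for (t0, p0), (t1, p1) in zip(curve, curve[1:]):
        if temp_c <= t1:            # inside this segment: interpolate
            return p0 + (p1 - p0) * (temp_c - t0) / (t1 - t0)
    return curve[-1][1]             # above the last point: pin at the ceiling

print(fan_speed(45))  # halfway between the 40C and 50C points -> 63.0
```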


----------



## NewType88

maxmix65 said:


> This is my lightning Z 2560x1440 ultra settings ,use fan msi afterburner custom curve, 40 degrees Celsius 58% speed, 50 degrees 68% - 60 degrees 78%, etcc
> https://www.youtube.com/watch?v=Y_PuYSdFLyg&t=3s


How many watts is it pulling? Every game is different. 100% load in one game might pull 275W instead of the max wattage your card can draw. Last time I played BF5 it did not go over 300W. I want to say it was no more than 275, but I can't remember.

Play The Witcher 3 on max settings; that's what would let me hit 80c. Everything ultra, with HairWorks on max.


----------



## No13

Trying to pick a card.. I keep pinging back and forth between the 2080 and 2080 Ti. But what I'm certain I want is the cheapest board capable of competently handling the best BIOS (Gainward maybe?). I'll be water-cooling on a beefy custom loop, so I just want a competent and compliant 'A' chip and PCB. 

Any advice?


----------



## fleps

Hi

I know this is the 2080 Ti topic, but the 2080 one doesn't get much attention and has little information, and I've dug around a lot and am still not sure, so maybe someone here could help me out.

Basically I got an MSI RTX 2080 Ventus OC (A chip) and I'm trying to figure out the best BIOS to flash on it to increase the power limit / watts.

This card is a reference PCB, but I know the PCB isn't the main concern; the VRM controller is, and unfortunately I couldn't find any information on it for this model.

I imagine the Gigabyte +22% / 300W limit should work, as it's also a pretty basic card with a reference PCB and multiple people have reported it working on many models, but maybe another BIOS with more power would work too, like the custom EVGA FTW3 with 30% / 338W from TPU?

I'm not concerned about temperatures as I will be water-cooling it.

Any help is appreciated, 

Thanks!


----------



## J7SC

shiokarai said:


> What's wrong with him using the chiller? The loop is the same for all the blocks tested so what's the problem? Is that a reason to not trust somebody? Please enlighten me. Also, please watch carefully. There is more than a mere 1 degree diff. between blocks...





JustinThyme said:


> That's just the thing. He isn't consistent. There is usually a number at the top of his videos that should be referencing the GPU. I saw a comparison between 1080 Ti cards with Phanteks vs EK vs Watercool. The reference number at the top was the coolest part of the board on the Watercool, the GPU on the Phanteks, and the hottest part of the EK block, with the conclusion being the Heatkiller beat out an EK block by 20C.
> Add to all of that seeing his test setup shooting an IR camera through glass or Plexiglas........ IR does not travel through glass, plexi or acrylic; first thing I learned in thermography school.
> 
> So I choose not to consider his findings in any of my decisions.
> 
> The testing done by TechPowerUp showed just that: the 3 blocks I was considering had 1C difference between the top and the bottom, with the middle one literally in the middle with a 0.5C split.
> 
> 
> (...cut...)
> 
> 
> The *casual observer* is watching the number at the top, in which case who would buy an EK block with a 67C temp?



...Just for the record, I am not even a subscriber to Igor's channel, nor his lawyer. I just wanted to let you folks know, per my initial post, that there was an additional GPU block out and tested for the 2080 Ti...time to rip the knob off this discussion soon  ...In any case, I certainly didn't do the 'casual observer thing' you suggest above, and as posted before, I'm really just interested in finding out about temp deltas in various regions within a specific GPU / block so that I can perhaps mitigate them with air flow and fan mods if I have a similar setup. Also, I realize that IR travelling through glass, Plexiglas or acrylic can create issues, but places like Reddit are full of arguments about whether IR, including night-vision, does or does not work through it (depending on the glass coating, per Reddit: _'Yes, you can do it, unless your window glass is coated to block IR as low as 850nm at 100%'_). I'm certainly not an expert on this, though we do have clients in the public-sector security field with some interesting tools and countermeasures (whenever I casually talk with them, I want to get a new tinfoil hat!). Also, if Igor had left the side panel off, there undoubtedly would be folks attacking him for not having a realistic test due to the extra cooling that would mean.

---

On a different type of 'light', as in the RGB kind: have any of you with RGB on your card noticed that when 3DMark SystemInfo kicks in, the RGB completely stops momentarily (as in goes dark for a brief moment) before it comes back on? And in my case, on my dual 2080 Tis, more than 50% of the time on either or both cards the RGB cycling 'freezes' (rather than continuously cycling through various colours) during the bench, and even afterwards when I closed 3DM an hour before. Vid performance does not suffer, RGB is still on though not cycling, and after a reboot everything is back to normal. Still, 3DM SystemInfo must be digging deep into low-level hardware access :thinking:


----------



## Renegade5399

fleps said:


> Hi
> 
> I know this is the 2080 TI topic but the 2080 one doesn't have much attention and not so much information, and I digged a lot around and still not sure, so maybe someone here could help me out.
> 
> Basically I got an MSI RTX 2080 Ventus OC version (A chip) and I'm trying to figure out the best bios to flash on it to increase the power limit / watts.
> 
> This card is a reference PCB but I know that that's not the main concern, but actually the VRM controller, which unfortunately I couldn't find any information regarding it for this model.
> 
> I imagine the Gigabyte +22% / 300w limit should work as it's also a pretty basic card with reference PCB and multiple people reported it working on many models, but maybe other bios with more power / W would work too, like the custom EVGA FTW3 with 30% / 338W from TPU?
> 
> I'm not concerned about the temperatures as I will be WC it.
> 
> Any help is appreciated,
> 
> Thanks!


This one if you're on water.


----------



## JustinThyme

J7SC said:


> On a different type of 'light', as of the RGB kind, have any of you with RGB on your card noticed that when 3DMark SystemInfo kicks in, RGB completely stops momentarily (as in goes dark for a brief moment) before it comes back on ? And in my case, on my dual 2080 Tis, more than 50% of the time with either or both, RGB cycling 'freezes' (rather than continuously cycle through various colours) during the bench, and even afterwards when I already closed 3DM an hour before. Vid performance does not suffer, RGB is still on though not cycling, and after a re-boot, everything is back to normal. Still, 3DM SystemInfo must be digging deep into low-level hardware access :thinking:


We will just leave Igor as a disagreement. I have my reasons and I've stated them, as everyone has their reasons for what they reference and what they don't. I'm not the casual observer either. What I can assure you 100% is that IR does not travel through glass, plexi or acrylic, tinfoil hat or not. One of my many job aspects is IR scans on critical power systems, and the cameras I use make Igor's look like what's on special at Harbor Freight, with the FLIR model we use ringing in at just under $80K. The first thing on the agenda, before loading the systems up with load banks, is to open up all covers and remove all louvers, glass panels, acrylic etc, as IR doesn't see through them. You actually see either nothing with a 200C hot spot behind it, or your own reflection. IR scans of the critical power infrastructure in all the big financial data centers are done every year looking for hot spots in transformers, cable connections, breaker contacts etc, so I'm doing this somewhere or another every weekend. I do have an inside view on this as a licensed thermographer and am simply passing that along. Take what you want and leave the rest.

I've not noted any lighting stalls due to any benchmarks. I do however get a momentary blip when I use the GPU Tweak tool to optimize the system, which shuts down unnecessary services, lighting being among them. The only RGB being controlled by my motherboard is the motherboard itself and the RGB lights on my Enthoo Elite case. All else is controlled via Corsair lighting nodes. They will both continue to run whatever profile is loaded after the service stops, but there is a blip as the service stops.

I'll look the next time without stopping any services and see what I get. What lighting are you using? What's controlling it?


----------






## Renegade5399

JustinThyme said:


> We will just leave Igor as a disagreement. I have my reasons and I've stated them, as everyone has their reasons for what they reference and what they don't. I'm not the casual observer either. What I can assure you 100% is that IR does not travel through glass, plexi or acrylic, tinfoil hat or not. One of my many job aspects is IR scans on critical power systems, and the cameras I use make Igor's look like what's on special at Harbor Freight, with the FLIR model we use ringing in at just under $80K. The first thing on the agenda, before loading the systems up with load banks, is to open up all covers and remove all louvers, glass panels, acrylic etc, as IR doesn't see through them. You actually see either nothing with a 200C hot spot behind it, or your own reflection. IR scans of the critical power infrastructure in all the big financial data centers are done every year looking for hot spots in transformers, cable connections, breaker contacts etc, so I'm doing this somewhere or another every weekend. I do have an inside view on this as a licensed thermographer and am simply passing that along. Take what you want and leave the rest.
> 
> I've not noted any lighting stalls due to any benchmarks. I do however get a momentary blip when I use the GPU Tweak tool to optimize the system, which shuts down unnecessary services, lighting being among them. The only RGB being controlled by my motherboard is the motherboard itself and the RGB lights on my Enthoo Elite case. All else is controlled via Corsair lighting nodes. They will both continue to run whatever profile is loaded after the service stops, but there is a blip as the service stops.
> 
> I'll look the next time without stopping any services and see what I get. What lighting are you using? What's controlling it?


Sorry to intrude on your convo here.

Igor's reviews fall into the "grain of salt" category for me. I compare multiple reviews, including his.

I imagine your reasons are similar to mine in that he's no Tech Jesus when it comes to methodology.


----------



## JustinThyme

Renegade5399 said:


> Sorry to intrude on your convo here.
> 
> Igor reviews fall into the "grain of salt" category for me. I compare multiple reviews, including his.
> 
> I imagine your reasons are similar to mine in that he's no Tech Jesus when it comes to methodology.


No intrusion at all, it's a public forum and differing opinions are what make it work. If everyone thought the same things it'd be a boring a$$ world to live in.

I do look at some of his stuff, but being the anal lytical arrogant prick I am, I always see the holes. Being an Electronics Engineer, it's kinda my job to overthink things. The one that really killed me was the EK blocks. I'm no EK fanboi, but regardless of the comparison numbers, my first impression was to question whether the friggin block was even mounted correctly, with it showing the worst VRM performance out of all the blocks, including the extremely less-than-desirable ones. Must have run out of thermal pads that day.


----------



## JustinThyme

On another note, I've been wanting to go vertical with my 2080 Tis. The bracket that comes with the Enthoo Elite allows for two cards, but the hindrance has been the lack of 2-slot NVLink bridges. Puget Systems put out a compatibility chart, as none of the players make anything but 3- or 4-slot bridges for the consumer RTX cards, and they published that the NVLink bridge for the Quadro RTX 6000 was compatible; now good luck finding one. NVIDIA is always sold out, as are their partners. I managed to find one and was apprehensive at first, as I'd never heard of the company and the price was a bit inflated, but they shipped and it was delivered today as promised. I'll be going vertical over the next few days. There is no way to test a 2-slot bridge with air cooling when the cards take up 3 slots, so I have to mount everything and plumb it up to find out. It is noteworthy that the orientation of the pins is correct, unlike the other Quadro bridges that have reversed pinouts and are way too damn expensive at like $700 a pop. This one's MSRP is $79, on point with the consumer version, with the same 100GB/s bandwidth. So I'll know soon enough. Fingers crossed, as it would suck to go through all the trouble of re-plumbing the GPU blocks (I'm saving the tubing to go back horizontal if needed) and have it not work!

If it does work, this bridge is getting a paint job, as it matches nothing in my rig.


----------



## J7SC

JustinThyme said:


> We will just leave Igor as a disagreement.
> 
> (cut)
> 
> I've not noted any lighting stalls due to any benchmarks. I do however get a momentary blip when I use the GPU Tweak tool to optimize the system, which shuts down unnecessary services, lighting being among them. The only RGB being controlled by my motherboard is the motherboard itself and the RGB lights on my Enthoo Elite case. All else is controlled via Corsair lighting nodes. They will both continue to run whatever profile is loaded after the service stops, but there is a blip as the service stops.
> 
> I'll look the next time without stopping any services and see what I get. What lighting are you using? What's controlling it?



...probably not even a disagreement; like another post above says, best to take it w/ a grain of salt, which is advisable for most tests one watches / reads. There are just few comparison tests out there for 2080 Ti water blocks which cover more than 3 or so vendors.

----

On the RGB / 3DM SystemInfo issue, I'm referring to the Aorus XTR factory WB model, which has a lot of RGB.... I left it all in 'default' mode and am NOT using the Aorus Engine software (not even installed, for security reasons). In default mode, the cards continuously cycle through the full spectrum of colours and various combinations, whether gaming or doing other (non-3DM) benching...the ONLY app which has the above-described effect on the GPUs' RGB cycling is 3DM SystemInfo, and it does suggest low-level hw access


----------



## JustinThyme

J7SC said:


> ...probably not even a disagreement; like another post above says, best to take it w/ a grain of salt, which is advisable for most tests one watches / reads. There just are few comparison tests out there for 2080 Ti w-blocks which cover more than 3 or so vendors.
> 
> ----
> 
> On the RGB / 3DM SystemInfo issue, I'm referring to the Aorus XTR factory WB model, which has a lot of RGB.... I left it all in 'default' mode and am NOT using the Aorus Engine software (not even installed, for security reasons). In default mode, the cards continuously cycle through the full spectrum of colours and various combinations, whether gaming or doing other (non-3DM) benching...the ONLY app which has the above-described effect on the GPUs' RGB cycling is 3DM SystemInfo, and it does suggest low-level hw access


Isn't the GPU RGB influenced via a PCIe mobo RGB controller? My ASUS cards were when they still had the OEM sinks. You are right, that's a butt load of RGB on a GPU!! Hadn't looked at one of those before. At least they are doing it right, unlike the Poseidon crap that ASUS pumped out, which was essentially a single tube running through the air heatsink; what a crock, but people bought into it. Now they are pushing the Matrix, which is an AIO but worthless IMO as it's dumping the heat in the case. Your cards with factory blocks actually look like they will perform decently.


----------



## black06g85

Well, the new card came last night:
an EVGA RTX 2080 Ti FTW3, air-cooled version.

2115 MHz so far, +750 mem.

Temps are almost the same as the hybrid one I had, surprisingly.


----------



## toncij

black06g85 said:


> well new card came last night
> evga rtx2080ti ftw3 air cooled version.
> 
> 2115mhz so far +750 mem
> 
> temps are almost the same as my hybrid one I had suprisingly.


What VRAM manufacturer is it on?


----------



## domrockt

ducky083 said:


> Hi, is it possible to flash a reference PCB card with the Zotac AMP Extreme BIOS, which is from a custom PCB?
> 
> 
> 
> Someone on Reddit says he flashed his 2080 Ti AMP to the AMP Extreme BIOS and it works...
> 
> 
> What do you think about that?
> 
> 
> The BIOS file is exactly the same size (1023 KB) as a reference PCB BIOS file...


I flashed my Palit GamingPro OC with the MSI RTX 2080 Ti Gaming X Trio custom-PCB (2x8-pin, 1x6-pin) 300W x 135% power target BIOS (406W) this morning, and yes, it worked, kinda. 
GPU-Z shows a PT of 406 watts, but I can't make my Palit run at the max voltage, so the voltage jumps between 1095mV and 1000mV, resulting in core speeds jumping from 2070MHz to 2175MHz.
I cooled the Palit with my chiller so it never reaches 30°C.

In GPU-Z, as far as I remember, it showed no more than 330W of real wattage in 3DMark's Port Royal raytracing bench.

Yes, I set the custom curve to 2200MHz and 1095mV.
I will check it further this evening.
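For what it's worth, the 406W figure is just the BIOS's base board power times the power-limit slider maximum, give or take rounding of the firmware's internal values. A quick sanity check (numbers from the post above; function name is mine):

```python
def power_target_watts(base_w, slider_pct):
    # Reported power target = base board power x the power-limit slider percentage.
    return base_w * slider_pct / 100

# The MSI Gaming X Trio BIOS mentioned above: 300 W base, 135% max slider
print(power_target_watts(300, 135))  # 405.0, i.e. the ~406 W that GPU-Z reports
```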


----------



## black06g85

toncij said:


> What VRAM manufacturer is it on?



Samsung now.

If it were Micron I would have sent it back, after what I've been dealing with.


----------



## fleps

Renegade5399 said:


> This one if you're on water.


Yeah, I was looking at this BIOS but was worried whether an Aorus one would work on the MSI Ventus OC. Do you know anyone who is using it?

I might give it a try.

Thanks


----------



## Renegade5399

J7SC said:


> On the RGB / 3DM SystemInfo issue, I'm referring to the Aorus XTR factory WB model which has a lot of RGB.... I left it all in 'default' mode and am NOT using the Aorus engine software (not even installed, for security reasons). In default mode, the cards continuously cycle through through the full spectrum of colours and various combinations, whether gaming or doing other (non-3DM) benching...the ONLY app which has the above-described effect on the GPUs' RGB cycling is 3DM SystemInfo, and it does suggest low-level hw access


Doesn't 3DMark read hardware IDs and other info directly from the hardware? On ASUS boards that stuff rides on the SMBus (right?). I know RGB things also use low-level buses for control. Perhaps the bench is saturating it, causing the LED controller to lose contact with the board and thus lock up.


----------



## bp7178

jelome1989 said:


> Ok so the cables then. I'll try to replace the cables. Definitely not the temps since my GPU is running a lot cooler now.


I think you misunderstood. 

My 2080 Ti has louder coil whine when it is cold or at near idle temps. 

I noticed this when I first put on the water block. When on air, the card would be in the mid 60s or so in game. Coil whine was there, but it was very much in the background. Since putting it on water, gaming temps are half of what they were, and coil whine is MUCH louder. 

If I turn my pump down to 800 RPM and shut all of the fans off, when the water nears 50 C this will allow the card to hit the mid 60s. Coil whine is very noticeably reduced to the same level as it was when on air when warm. 

I want to try running it without the thermal pad on the inductors. I'd want to put a temp probe on them though and monitor the inductor temps under a few different loads. The temp range of an LR22 inductor is something like -20 C to +120 C, so I'm not sure how necessary monitoring the temp will be.


----------



## fleps

bp7178 said:


> I think you misunderstood.
> 
> My 2080 Ti has louder coil whine when it is cold or at near idle temps.
> 
> I noticed this when I first put on the water block. When on air, the card would be in the mid 60s or so in game. Coil whine was there, but it was very much in the background. Since putting it on water, gaming temps are half of what they were, and coil whine is MUCH louder.
> 
> If I turn my pump down to 800 RPM and shut all of the fans off, when the water nears 50 C this will allow the card to hit the mid 60s. Coil whine is very noticeably reduced to the same level as it was when on air when warm.
> 
> I want to try running it without the thermal pad on the inductors. I'd want to put a temp probe on it though and monitor the temps on the inductors under a few different loads. The temp range of an LR22 inductor is something like -20 C to +120 C, so i'm not sure how necessary monitoring its temp will be.


I would contact the seller and replace the card. I can't stand coil whine.
Once I had to return an OCZ Fatality PSU because of the constant coil whine.


----------



## NewType88

Do all GPUs produce coil whine to a certain degree ?


----------



## kx11

NewType88 said:


> Do all GPUs produce coil whine to a certain degree ?



I know my Lightning Z does that, but only when it's been turned off for a long time, like 20+ hours, then turned on. It doesn't bother me much tbh since I'll put it under water like next week or so.


----------



## TK421

What's the command in nvflash to find out exact TDP limits?


----------



## iamjanco

For those who are looking for them, PPCS has gotten a handful of Heatkiller blocks in for the 2080/2080 Ti, roughly 4-5 of each version.


----------



## J7SC

JustinThyme said:


> Is the GPU RGB not influenced VIA PCIE MOBO RGB controller? My ASUS cards were when they still have the OEM sinks. You are right, thats a butt load of RGB on a GPU!! Hadn't looked at one of those before. At least they are doing it right unlike the Poseidon crap that ASUS pump out that was essentially a single tube running though the air heat sink, what a crock but people bought into it. Then they are now pushing the Matrix which is an AIO but worthless IMO as its dumping the heat in the case. Your cards with factory blocks actually look like they will perform decent.






Renegade5399 said:


> Doesn't 3DMark read hardware IDs and other info directly from the hardware? On ASUS boards that stuff rides on the SMBUS (right?). I know RGB things also use low level buses for control. Perhaps the bench is saturating it causing the LED controller to lose contact with the board and thus locking up.


...on the RGB load, yes, it's quite colorful, though I'm getting used to it. Ironically, I went for a black/grey/white theme build, and then the RGB came knocking 

Mobo is the MSI X399 Creation, and it does have RGB control software ('Mystic Light'), but I haven't loaded that either, as I hate having all this stuff running in the background, especially if it has low-level HW control. That is why I was wondering about 3DM SystemInfo and the effects I observed and posted about....it may very well be that the LED controller 'drops' at that heavy a load, but that doesn't seem to happen with Superposition 4K (or even 8K) or ANY other heavy-load app...weird...

In any case, from what I have read, the MSI 'Mystic Light' app can control RGB on the mobo of course, and also the TridentZ RGB - but not the Aorus RGB.


Before...












After  (I turn it on...)


Spoiler
















...and more detail of the 'have you had your RGB today?' variety 


Spoiler
































Finally, some Superposition (posted before; note the power draw / blue circles) and some letting it all hang out on the stock BIOS with GPU-Z rendering...certainly NOT my normal bench or gaming speed, I just wanted to find the outside limit


Spoiler


----------



## NewType88

kx11 said:


> i know my Lightning Z does that only when it's been turned off for a long time like 20+ hours then turned on , it doesn't bother me much tbh since i'll put it under water like next week or so


I never noticed it until I put my FTW3 on water. Mine just makes noise during games; some games make it louder than others. Not sure what the norm is, noise-wise.


----------



## krizby

JustinThyme said:


> Renegade5399 said:
> 
> 
> 
> Sorry to intrude on your convo here.
> 
> Igor reviews fall into the "grain of salt" category for me. I compare multiple reviews, including his.
> 
> I imagine your reasons are similar to mine in that he's no Tech Jesus when it comes to methodology.
> 
> 
> 
> No intrusion at all, its a public fora and differing opinions is what makes it work. If everyone thought the same things it be a boring a$$ world to live in.
> 
> I do look at some of his stuff but being the anal lytical arrogant prick I am, I always see the holes. Being an Electronics Engineer its kinda my job to over think things. The one that really killed me was the EK Blocks. Im no EK fanboi but my first impression was regardless of the comparison numbers was questioning if the friggin block was even mounted correctly with it showing the worst VRM performance out of all the blocks, including the extremely less than desirable. Must have run out of thermal pads that day.

The EK block doesn't have thermal pads over the solid-capacitor area while the Heatkiller IV block does. I have the Heatkiller block and I put thermal pads on the VRM, the chokes and the capacitors. The 20C difference was probably over the capacitor area and not the VRM.


----------



## ducky083

domrockt said:


> I flashed my Palit GamingPro OC with an MSI RTX 2080 Ti Gaming X Trio custom-PCB (2x8-pin, 1x6-pin) 300W x 135% power target BIOS (406W) this morning, and yes, it worked, kinda.
> GPU-Z shows a PT of 406W, but I can't make my Palit run at max voltage, so the voltage jumps between 1095mV and 1000mV, resulting in core speeds jumping between 2070MHz and 2175MHz.
> I cooled the Palit with my chiller so it never reaches 30°C.
> 
> In GPU-Z, as far as I remember, it shows no more than 330W of real wattage in 3DMark's ray-tracing Port Royal bench.
> 
> Yes, I set the custom curve to 2200MHz and 1095mV.
> I will check it further this evening.



Seriously? You've tried that and it works, without any problems?


----------



## ReFFrs

ducky083 said:


> Seriously? You've tried that and it works, without any problems?


The problem is that his real power limit is capped at 330W instead of the full 406W, likely because he is trying to use a 2x8-pin + 1x6-pin BIOS on a 2x8-pin card.
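A quick way to check what power limit a flashed BIOS is actually enforcing is to poll `nvidia-smi` while a bench loops. A minimal sketch, assuming `nvidia-smi` is on the PATH (the `--query-gpu` fields and CSV format flags are standard nvidia-smi options; the helper names are mine):

```python
import subprocess

def parse_watts(field: str) -> float:
    """Parse a CSV field like '329.87 W' into a float."""
    return float(field.strip().rstrip("W").strip())

def power_status():
    """Return (draw, limit) in watts for each GPU, via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=power.draw,power.limit",
         "--format=csv,noheader"],
        text=True,
    )
    # One line per GPU, e.g. "329.87 W, 406.00 W"
    return [tuple(parse_watts(f) for f in line.split(","))
            for line in out.strip().splitlines()]

# Example, run while a benchmark is looping:
#   for i, (draw, limit) in enumerate(power_status()):
#       print(f"GPU{i}: {draw:.0f}W of {limit:.0f}W limit")
```

If the reported limit says 406W but the draw never passes roughly 330W with the power slider maxed, the card is throttling on something other than the BIOS power target, which matches the mismatched-connector theory above.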


----------



## domrockt

Dunno, double post.


----------



## domrockt

ducky083 said:


> domrockt said:
> 
> 
> 
> I flashed my Palit GamingPro OC with an MSI RTX 2080 Ti Gaming X Trio custom-PCB (2x8-pin, 1x6-pin) 300W x 135% power target BIOS (406W) this morning, and yes, it worked, kinda.
> GPU-Z shows a PT of 406W, but I can't make my Palit run at max voltage, so the voltage jumps between 1095mV and 1000mV, resulting in core speeds jumping between 2070MHz and 2175MHz.
> I cooled the Palit with my chiller so it never reaches 30°C.
> 
> In GPU-Z, as far as I remember, it shows no more than 330W of real wattage in 3DMark's ray-tracing Port Royal bench.
> 
> Yes, I set the custom curve to 2200MHz and 1095mV.
> I will check it further this evening.
> 
> 
> 
> 
> Seriously? You've tried that and it works, without any problems?
Click to expand...

Yup, I can flash it again later to prove it. But it does not use the full wattage, so I stick with the Galax 380W BIOS or the Gigabyte 366W BIOS.

And it won't stick at max voltage, so the core clock jumps all over the place from 2050 to 2200MHz.


----------



## ducky083

domrockt said:


> Yup, I can flash it again later to prove it. But it does not use the full wattage, so I stick with the Galax 380W BIOS or the Gigabyte 366W BIOS.



Which BIOS can I flash? Windforce OC? Gaming OC? Aorus X? Thanks!


----------



## domrockt

ducky083 said:


> domrockt said:
> 
> 
> 
> Yup, I can flash it again later to prove it. But it does not use the full wattage, so I stick with the Galax 380W BIOS or the Gigabyte 366W BIOS.
> 
> 
> 
> 
> Which BIOS can I flash? Windforce OC? Gaming OC? Aorus X? Thanks!
Click to expand...

You can use any FE BIOS you want if you have an A-variant card. The custom extended-power-delivery ones will act like I stated.


----------



## ducky083

domrockt said:


> You can use any FE BIOS you want if you have an A-variant card. The custom extended-power-delivery ones will act like I stated.



Yes, I have a Zotac triple-fan with an A-revision GPU.


I hope we can modify BIOS files soon...


----------



## sega4ever

Is there any software that will let me set fan curves for both fans on the Sea Hawk X? MSI Afterburner only lets me set one fan curve.


----------



## Asmodian

ducky083 said:


> I hope we can soon modify bios files...


Very unlikely; we still cannot flash modified BIOS files on Pascal, let alone Turing.


----------



## ThrashZone

Hi,
I noticed today that my local Micro Center dropped $100 off the $1,500 EVGA 2080 Ti FTW3 Ultra.
Stock isn't moving out the door; they've had 10+ in stock for two weeks now, lol.
Otherwise MC has sold at least 3 MSI 2080 Ti Sea Hawks at $1,400, which is what I was looking at more seriously than the EVGA.


----------



## ducky083

sega4ever said:


> Is there any software that will let me set fan curves for both fans on the Sea Hawk X? MSI Afterburner only lets me set one fan curve.



Hi,

Can you please share your BIOS file? (If you have the Hybrid version with the reference PCB.)

Thanks!


----------



## Rognin

First GPU purchase in a long time. Going from my trustworthy tri-SLI GTX 580s to a single NVIDIA 2080 Ti FE. Hoping I won't regret the switch.


----------



## Coldmud

J7SC said:


> ...on the RGB load, yes, it's quite colorful though I'm getting used to it. Ironically, I went for a black/grey/white theme build, and then the RGB came knocking
> 
> Mobo is MSI X399 Creation, and it does have RGB control software ('Mystic Light') but I haven't loaded that either as I hate to have all this stuff running in the background, especially if it has low-level HW control. That is why I was wondering about 3DM SystemInfo and the effects I observed and posted on....may very well be that the LED controller 'drops' at that heavy a load, but doesn't seem to be happening with Superposition 4k (or even 8K) or ANY other heavy-load app...weird...
> 
> In any case, from what I have read, the MSI 'Mystic Light' app can control RGB on the mobo of course, but also the TridentZ RGB - but not the Aorus RGB.
> 
> 
> Before...
> 
> 
> After  (I turn it on...)
> 
> 
> Spoiler
> ...and more detail of the 'have you had your RGB today
> 
> 
> Spoiler
> Finally, some Superposition (posted before. note the power draw / blue circles) and some letting it all hang out on stock Bios GPUz rendering...certainly NOT my normal bench or gaming speed, just wanted to find the outside limit
> 
> 
> Spoiler


Looks great! However, the black fan that sits on the side of the GPUs, outside of the case, kinda kills the look. Does it really help with temps? Is the one behind them not pushing enough airflow through?
On your earlier post: I have the same issue with 3DMark, the RGB stopping for a brief second and sometimes locking up. I never really noticed it before; now I'm wondering if older 3DMark versions had this issue.
Installing it on a workstation might be sketchy, with the previous security flaws and all. But you can always install RGB Fusion or Aorus Engine, adjust the lighting and then kill it. It's absolute trash on a Gigabyte board and interferes with some Corsair and G.Skill kits, but it does the job for the Waterforce.


----------



## J7SC

Coldmud said:


> Looks great! However, the black fan that sits on the side of the GPUs, outside of the case, kinda kills the look. Does it really help with temps? Is the one behind them not pushing enough airflow through?
> On your earlier post: I have the same issue with 3DMark, the RGB stopping for a brief second and sometimes locking up. I never really noticed it before; now I'm wondering if older 3DMark versions had this issue.
> Installing it on a workstation might be sketchy, with the previous security flaws and all. But you can always install RGB Fusion or Aorus Engine, adjust the lighting and then kill it. It's absolute trash on a Gigabyte board and interferes with some Corsair and G.Skill kits, but it does the job for the Waterforce.



...tx / that black fan isn't really needed, but it's a thin 120 (only a third of the normal thickness) and points straight into the VRM sections of both cards. Then again, there's MASSIVE airflow from the 9x 120mm fans on the on-their-side-mounted radiators just 1 1/2 inches away (flow for the whole system goes from left to right, front and back).

...I'd rather disconnect the RGB wires on the GPUs (I can conveniently see and get to them with pincers). I do have serious security concerns about any kind of low-level HW-access app that connects to the web; even though there presumably have been some updates, there's this relevant story:


Spoiler



https://www.guru3d.com/news-story/asus-aura-sync-and-gigabyte-xtreme-software-contain-vulnerabilities.html


Finishing up tiny details on this build; daytime photos are scheduled for tomorrow with a better camera for the build log. In addition to this beast, I put together 4 other (more 'regular') machines over the past few weeks, so no more excuses about the state of my home office or my section of the work storeroom... spring cleaning, here I come ( :sozo: )


----------



## xz71

Hi guys, I flashed my Zotac AMP 2080 Ti to the GALAX 380W vBIOS successfully, but now I get an NVIDIA BIOS version screen for a couple of seconds on every boot before it continues into Windows. Is there a way to not show this screen? It isn't causing any issues, but it slows down boot time unnecessarily. Thanks.
https://lh3.googleusercontent.com/X...TA52q75HvSARfvMvLpsFrE8TDhR3vZw=w1723-h970-no


----------



## TK421

Can overclocking memory too high cause a performance loss?


My score 30 minutes into the FF15 bench at 1440p, very high settings, with memory at +850 is lower than at +800. The difference isn't much, only double-digit points. Is this normal variance, or did I push the memory too far?




EVGA 2080 Ti XC Ultra with the updated 360W BIOS. I should also mention that I have Micron memory; does the brand matter in this case?


----------



## J7SC

TK421 said:


> Can overclocking memory too high cause a performance loss?
> 
> 
> My score 30 minutes into the FF15 bench at 1440p, very high settings, with memory at +850 is lower than at +800. The difference isn't much, only double-digit points. Is this normal variance, or did I push the memory too far?
> 
> 
> EVGA 2080 Ti XC Ultra with the updated 360W BIOS. I should also mention that I have Micron memory; does the brand matter in this case?



'Yes' in regard to your question. Any memory, no matter what brand, will start to show deteriorating performance once a certain speed is reached under a given load (other factors such as temps held equal), well before it outright locks up or freezes.


----------



## fleps

TK421 said:


> Can overclocking memory too high cause a performance loss?
> My score 30 minutes into the FF15 bench at 1440p, very high settings, with memory at +850 is lower than at +800. The difference isn't much, only double-digit points. Is this normal variance, or did I push the memory too far?
> EVGA 2080 Ti XC Ultra with the updated 360W BIOS. I should also mention that I have Micron memory; does the brand matter in this case?


Yes. The memory can be too high, or the overall OC (memory + core) is too high and may be impacting core clock stability.


----------



## TK421

J7SC said:


> 'Yes' in regard to your question. Any memory, no matter what brand, will start to show deteriorating performance once a certain speed is reached under a given load (other factors such as temps held equal), well before it outright locks up or freezes.





fleps said:


> Yes. The memory can be too high, or the overall OC (memory + core) is too high and may be impacting core clock stability.





OK.


Is there a preferred way to test the memory OC specifically?


I can try the 380W GALAX BIOS to see if the raised power limit improves the OC a bit.


----------



## JustinThyme

Overclocking the CPU, DRAM, GPU and VRAM is more of a balancing act. You can pass a benchmark with both set higher, but see diminishing returns and even lose points past your sweet spot. I say *your* sweet spot because every machine is different, even with the same CPU and GPU(s).


----------



## J7SC

TK421 said:


> OK.
> 
> 
> Is there a preferred way to test the memory OC specifically?
> 
> I can try the 380W GALAX BIOS to see if the raised power limit improves the OC a bit.



...VRAM is not really worth changing BIOS for (apart from the fact that a new non-stock BIOS may be geared toward another VRAM vendor). GDDR6 VRAM uses very little power: from stock to 1100MHz+ (with the GPU core held at stock), the extra power consumption was about 8-10W per card, as far as I recall (I posted on this about 3 months back). On my Aorus XTR WB cards, only 3 of the 19 power phases are for the VRAM...

...Preferred way to test? How much time have you got? A good start would be to pick 3-4 benches and 3-4 of your favorite games, and lock the GPU OC down maybe 30MHz or so from your known max. On the benches, try 3DMark FS Extreme and Ultra; if you're in a hurry, don't run the whole thing, just run graphics test 2, which according to Steve B. / GN is particularly hard on VRAM, and I can certainly second that. Also be sure to run Unigine Superposition 4K and 8K. If you want to be scientific about it, rinse and repeat 3 or so times, each time after a reboot and after waiting 2 minutes or so before loading the tests (needless to add, all other conditions should be consistent as well: ambient temps, GPU speed and voltage, etc). Write down the results...

Do the same with your top 3-4 favorite games. If they have a built-in bench, all the better; otherwise pick the same spots and duration for consistency. Don't be surprised if some games are a lot harder on what you thought was your max stable VRAM from the bench tests. Anyhow, rinse and repeat... Now, I certainly wouldn't bet the farm on it (if I even had a farm...), but between FS Ultra, 3DM TS/E and Superposition 4K/8K, you should be able to zero in on the sweet-spot range... if not, rinse and repeat > and good luck!
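The rinse-and-repeat bookkeeping above is easy to script. A rough sketch of the statistics only (launching the benchmarks is left out; the function names are mine, and the scores are whatever you copy off FS Ultra, Superposition, etc.):

```python
import statistics

def summarize(scores):
    """Mean and sample std-dev of repeated benchmark runs at one setting."""
    spread = statistics.stdev(scores) if len(scores) > 1 else 0.0
    return statistics.mean(scores), spread

def clearly_better(a, b):
    """Call setting A a real win over B only if the gap between the means
    exceeds the combined run-to-run noise; otherwise it's just variance."""
    mean_a, sd_a = summarize(a)
    mean_b, sd_b = summarize(b)
    return (mean_a - mean_b) > (sd_a + sd_b)
```

For example, three runs at +800 scoring (10000, 10010, 10020) against three runs at +850 scoring (9990, 10005, 10025): the gap in means is smaller than the combined spread, so neither offset is clearly better and the "faster" setting is inside the noise.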


----------



## JustinThyme

OK, verdict is in.
Went vertical with my GPUs in SLI using the 2-slot Quadro RTX 6000 NVLink bridge. Same 100GB/s bandwidth and pinout; just ugly, so it's getting a paint job. Performance is exactly as it was with the ROG 4-slot bridge. Strange that no one is offering a 20XX-branded bridge in anything but 3- or 4-slot, which is a bit lame. Some boards, like the ASUS WS X299 Sage, are capable of SLI in any slots you want; then there is my configuration with two cards vertical. The rack that came with the Enthoo Elite allows for two, but HDMI is covered on the front-facing card on the 2080 Ti, DVI on the 1080 Ti. I don't use either, so that doesn't affect me; not being able to get a 2-slot bridge does. Still banging out 25K Time Spy scores. It was a little difficult to bleed: I got a pocket on the top left of the GPU that just wasn't coming out. Not bubbles, but a nice-size pocket about 1/2 inch deep. I had to leave the pumps running and put the case face down to get it out.


----------



## J7SC

JustinThyme said:


> OK, verdict is in.
> Went vertical with my GPUs in SLI using the 2-slot Quadro RTX 6000 NVLink bridge. Same 100GB/s bandwidth and pinout; just ugly, so it's getting a paint job. Performance is exactly as it was with the ROG 4-slot bridge. Strange that no one is offering a 20XX-branded bridge in anything but 3- or 4-slot, which is a bit lame. Some boards, like the ASUS WS X299 Sage, are capable of SLI in any slots you want; then there is my configuration with two cards vertical. The rack that came with the Enthoo Elite allows for two, but HDMI is covered on the front-facing card on the 2080 Ti, DVI on the 1080 Ti. I don't use either, so that doesn't affect me; not being able to get a 2-slot bridge does. Still banging out 25K Time Spy scores. It was a little difficult to bleed: I got a pocket on the top left of the GPU that just wasn't coming out. Not bubbles, but a nice-size pocket about 1/2 inch deep. I had to leave the pumps running and put the case face down to get it out.



...great build (!) and great info about the Quadro NVLink bridge, and the fact that there's no performance loss with PCIe extenders. What PCIe extenders (make/model) are you using? On bleeding the loop, I learned that when doing a non-conventional build, prefill as much as you can (i.e. GPUs, rads) and be ready to 'tilt'... not OK for pinball, but with PCs, that's entirely OK.


----------



## JustinThyme

J7SC said:


> ...great build (!) and great info about the Quadro NVLink bridge, and the fact that there's no performance loss with PCIe extenders. What PCIe extenders (make/model) are you using? On bleeding the loop, I learned that when doing a non-conventional build, prefill as much as you can (i.e. GPUs, rads) and be ready to 'tilt'... not OK for pinball, but with PCs, that's entirely OK.


Thanks.

The PCIe extenders are Phanteks PH-CBRS_FL30, 300mm, EMI-shielded. The case came with a slim non-shielded one. I had tried Thermaltake shielded ones before and don't know what the deal was, but they just didn't want to work. The Phanteks were also not among the most expensive.

Jaybird the YouTube personality did a piece on non-shielded risers and showed no performance loss. With that much traffic I wanted shielded regardless, and I didn't want to use the generic branded versions either. I stumbled across these while looking; they are listed as compatible with my case, so I gave them a shot.
Any differences I'm seeing are within the margin of error, high and low. I haven't hit my personal best, but I did better than a lot of other runs before this. My results stay between 24.5K and 25.5K on Time Spy and around 16K on Port Royal.

The last time I did a prefill, my wife was ready to stomp a mudhole in my azz: a few dribbles of red coolant on a beige carpet. Fortunately I was able to get them out; it took some doing, though. My first assembly of this beast was in the basement, but it lives on the second floor in my home office. I didn't weigh it, but with everything in it I'm guessing about 100 lbs.
I'm used to tilting in all directions to get the bubbles out, just not face down. I held it there for 30 seconds or so, then stood it back up; I could hear air bubbles hitting the impellers. Lather, rinse, repeat, and of course backwards, then toward the front of the case, then toward the I/O. Funny thing though: I didn't change the rest of the loop other than a few pieces whose length changed a little, and when the cards were horizontal this was by far the easiest fill, bleed and drain I'd ever had.


----------



## J7SC

JustinThyme said:


> Thanks.
> 
> The PCIe extenders are Phanteks PH-CBRS_FL30, 300mm, EMI-shielded. The case came with a slim non-shielded one. I had tried Thermaltake shielded ones before and don't know what the deal was, but they just didn't want to work. The Phanteks were also not among the most expensive.
> 
> Jaybird the YouTube personality did a piece on non-shielded risers and showed no performance loss. With that much traffic I wanted shielded regardless, and I didn't want to use the generic branded versions either. I stumbled across these while looking; they are listed as compatible with my case, so I gave them a shot.
> Any differences I'm seeing are within the margin of error, high and low. I haven't hit my personal best, but I did better than a lot of other runs before this. My results stay between 24.5K and 25.5K on Time Spy and around 16K on Port Royal.
> 
> The last time I did a prefill, my wife was ready to stomp a mudhole in my azz: a few dribbles of red coolant on a beige carpet. Fortunately I was able to get them out; it took some doing, though. My first assembly of this beast was in the basement, but it lives on the second floor in my home office. I didn't weigh it, but with everything in it I'm guessing about 100 lbs.
> I'm used to tilting in all directions to get the bubbles out, just not face down. I held it there for 30 seconds or so, then stood it back up; I could hear air bubbles hitting the impellers. Lather, rinse, repeat, and of course backwards, then toward the front of the case, then toward the I/O. Funny thing though: I didn't change the rest of the loop other than a few pieces whose length changed a little, and when the cards were horizontal this was by far the easiest fill, bleed and drain I'd ever had.


 
Thanks for the info on the Phanteks PCIe extenders and the model names. When I did the current 2x 2080 Ti build, I took every precaution (i.e. rad and GPU pre-fill and all that), but it is a dual-loop system, with the 2x 2080 Ti loop alone covering 3x XSPC 360/60 rads and 2x MCP655 pumps. It all worked surprisingly well, but there was one nasty bubble in that loop. I could not only hear it but see it (taunting me...) in the lower of the two GPUs, on the NE corner of the actual finned area of the water block. After a day or so of trying every other trick, it was time for some serious tilting... 20 seconds later > all solved.

Blood-red liquid on beige carpet? Better get a nice bouquet of red roses for the wife (more females than males in my household...), and you will still be reminded of that transgression every once in a while. I haven't weighed this particular system, but without a doubt it is the heaviest I've ever put together...


----------



## TK421

J7SC said:


> ...VRAM is not really worth changing BIOS for (apart from the fact that a new non-stock BIOS may be geared toward another VRAM vendor). GDDR6 VRAM uses very little power: from stock to 1100MHz+ (with the GPU core held at stock), the extra power consumption was about 8-10W per card, as far as I recall (I posted on this about 3 months back). On my Aorus XTR WB cards, only 3 of the 19 power phases are for the VRAM...
> 
> ...Preferred way to test? How much time have you got? A good start would be to pick 3-4 benches and 3-4 of your favorite games, and lock the GPU OC down maybe 30MHz or so from your known max. On the benches, try 3DMark FS Extreme and Ultra; if you're in a hurry, don't run the whole thing, just run graphics test 2, which according to Steve B. / GN is particularly hard on VRAM, and I can certainly second that. Also be sure to run Unigine Superposition 4K and 8K. If you want to be scientific about it, rinse and repeat 3 or so times, each time after a reboot and after waiting 2 minutes or so before loading the tests (needless to add, all other conditions should be consistent as well: ambient temps, GPU speed and voltage, etc). Write down the results...
> 
> Do the same with your top 3-4 favorite games. If they have a built-in bench, all the better; otherwise pick the same spots and duration for consistency. Don't be surprised if some games are a lot harder on what you thought was your max stable VRAM from the bench tests. Anyhow, rinse and repeat... Now, I certainly wouldn't bet the farm on it (if I even had a farm...), but between FS Ultra, 3DM TS/E and Superposition 4K/8K, you should be able to zero in on the sweet-spot range... if not, rinse and repeat > and good luck!



You're able to run +1100 on memory?


Do you have Samsung or Micron?


----------



## JustinThyme

Cheese and rice, I thought I was bad with the rads! I'm sitting on the fence about adding one more to the twin 480x60 rads and the 360x60. It's not about capacity as much as knocking the sound back a little more. I'm keeping a good delta, but if I set my curves for silent, my temps of course go up and I lose a little OC on the GPUs; nothing worth worrying about, though. Instead of 1815 I have to dial back to 1800 or 1805 max, whatever keeps me just under the power limit, and I'm good. If I want a good bench run I can either do it on a cold boot or just let the 20 fans go to max. As for practicality, I've not been able to bog it down; all games run at ultra with the sliders pegged to the right, and it doesn't do much more than yawn.


----------



## JustinThyme

TK421 said:


> You're able to run +1100 on memory?
> 
> 
> Do you have Samsung or Micron?


No way you are doing that with Micron; you're lucky to hit stock speeds. I've run mine up to +700 but haven't pushed it yet. Samsung VRAM.


----------



## TK421

JustinThyme said:


> No way you are doing that with Micron; you're lucky to hit stock speeds. I've run mine up to +700 but haven't pushed it yet. Samsung VRAM.





damn micron


----------



## JustinThyme

TK421 said:


> damn micron


Micron was one of the big problems with 2080 cards failing early on; the workaround was to go -500 on the VRAM. Samsung has always been pretty brutal. Like I said, I haven't even really tried; I just saw a lot of +700s and ran the slider over. I'll get around to it eventually when I quit playing with the core. I'm waiting for the ASUS Matrix to hit the shelves so I can snag that vBIOS. Identical to the Strix in every aspect; supposedly just higher-binned chips and a tweaked-out BIOS.


----------



## TK421

JustinThyme said:


> Micron was one of the big problems with 2080 cards failing early on; the workaround was to go -500 on the VRAM. Samsung has always been pretty brutal. Like I said, I haven't even really tried; I just saw a lot of +700s and ran the slider over. I'll get around to it eventually when I quit playing with the core. I'm waiting for the ASUS Matrix to hit the shelves so I can snag that vBIOS. Identical to the Strix in every aspect; supposedly just higher-binned chips and a tweaked-out BIOS.





So the general consensus is that Samsung memory performs much better and can overclock higher than Micron?


----------



## JustinThyme

Well, for giggles, I just upped to +1000 without a sneeze.


----------



## J7SC

TK421 said:


> You're able to run +1100 on memory?
> 
> 
> Do you have Samsung or Micron?



...Both my cards have Micron. One tops out (before lock-up; the most efficient point is lower) at +1045 VRAM MHz, the other at about +1135. That said, Micron and Samsung run different timing regimes anyway; not that my preference isn't for Samsung (like the Samsung B-die I run as my main system memory).

...But for now, I wouldn't worry about what you see posted; get the best out of the hardware you've got. And in case of doubt: the top global scorer at HWBot runs Micron in benches like FS Extreme, while reverting to Samsung for other benches...


Spoiler


----------



## Aurosonic

TK421 said:


> So the general consensus is that Samsung memory performs much better and can overclock higher than Micron?


I've been running my Aorus Extreme with a water block on Micron memory at +1180 for 4 months already without any issues.


----------



## Krzych04650

TK421 said:


> OK.
> Is there a preferred way to test the memory OC specifically?
> I can try the 380W GALAX BIOS to see if the raised power limit improves the OC a bit.


That's pretty common; I remember dealing with a similar thing on my 1080s with G5X. Past some point all the benefit from memory overclocking would diminish: +500 was the sweet spot, while +600 had the same performance as stock, as if I hadn't overclocked the memory at all.
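That falloff is the pattern commonly attributed to the memory's error detection and replay: GDDR5X/GDDR6 can detect bad transfers and re-send them, so past a certain clock the extra speed is eaten by retries before anything visibly crashes. Picking the sweet spot out of a sweep is then just bookkeeping; a small sketch (the offsets and scores below are made up for illustration, not measurements):

```python
def sweet_spot(results):
    """Given {memory_offset_mhz: score}, return the offset with the best
    score; ties go to the lower offset (less stress for the same score)."""
    return max(sorted(results), key=lambda off: results[off])

def counterproductive(results, stock=0):
    """Offsets that score no better than stock -- the '+600 performs
    like stock' case described above."""
    base = results[stock]
    return [off for off in sorted(results)
            if off > stock and results[off] <= base]
```

For `{0: 9500, 300: 9650, 500: 9720, 600: 9500}` the sweet spot is +500, and +600 is counterproductive even though it still runs without crashing.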



JustinThyme said:


> No way you are doing that with Micron, lucky to get stock. Ive run mine up to +700 but havent pushed it yet, Samsung VRAM


The majority of Micron cards are doing +1000-1100 no problem. Almost all reviews reported 8000MHz+ memory overclocks on all Turing cards, and it was all Micron for a long time before cards with Samsung chips started to appear. So you really need to stop with all of this Samsung memory mythology.

Though it may change with time, as the target for GDDR6 is 16Gbps, not 14, so I suspect newer cards will overclock better, or most likely there will be some 16Gbps versions released for some Turing models, just like the GTX 1080 11Gbps or GTX 1060 9Gbps.

But for now there is no real benefit to running Samsung over Micron. Alleged Micron memory failings were never confirmed as a cause of RTX failures; actually, high failure rates for RTX cards overall were never confirmed in the first place, so all of this is more of a witch hunt than anything else, looking for an imaginary cause of an imaginary issue. That's all because of the hate caused by high pricing, so every issue gets blown completely out of proportion, as it is now a life's ambition for many people to hate on these cards. If they were priced normally, we most likely wouldn't even hear about any of this.



sega4ever said:


> Is there any software that will let me set fan curves for both fans on the sea hawk x? Msi afterburner only lets me set one fan curve.


EVGA Precision sees both fans separately and lets you control them independently, at least on the stock BIOS; I didn't try it on the 380W one.

The release notes of one of the latest MSI Afterburner versions mention controlling fans separately, but I didn't get such an option with my Sea Hawk X.



xz71 said:


> Hi guys, I flashed my Zotac AMP 2080 Ti to the GALAX 380W vbios successfully, but now I get this Nvidia bios version screen for a couple seconds before continuing booting to Windows. This happens on every boot. Is there a way to NOT show this screen? It is not causing any issue but slowing down boot time unnecessarily. Thanks.
> https://lh3.googleusercontent.com/X...TA52q75HvSARfvMvLpsFrE8TDhR3vZw=w1723-h970-no


Everybody gets that, and unfortunately every time someone has asked about it, the answer has been that there is no way to remove it, at least the last time I checked.


----------



## Kalm_Traveler

TK421 said:


> so the general consensus is that samsung memory perform much better and can overclock higher than micron?


I can't speak to the Micron memory on these RTX cards but all 3 of my Titans have Samsung, and I've been running their memory at +1207 for daily use for several months now. Zero issues.


----------



## JustinThyme

Krzych04650 said:


> Majority of Micron cards are doing +1000-1100 no problem. Almost all reviews reported 8000MHz+ memory overclock, on all Turing cards, and it was all Micron for a long time before cards with Samsung chips started to appear. So you really need to stop with all of this Samsung memory mythology.
> 
> Though it may change with time as target for GDDR5 is 16Gbps not 14, so I suspect newer cards will overclock better, or most likely there are going to be some 16Gbps versions released for some Turing models, just like GTX 1080 11 Gbps or GTX 1060 9 Gbps.
> 
> But for now there is no real benefit from running Samsung over Micron. Alleged Micron memory failings were never confirmed as a cause of RTX failings, actually high failure rates for RTX cards overall were never confirmed in the first place, so all of this is more of a witch hunt than anything else, and looking for imaginary cause of imaginary issue. Thats all because of the hate caused by high pricing, so every issue is going to be blown completely out of proportions as it is now a life's ambition for many people to hate on these cards. If they were priced normally we most likely wouldn't even hear about any of this.


What we are talking about is the Micron GDDR6 in the 2080 Ti cards, which have in fact been having issues with crapping out from overheating, among other things. No myth to it, all fact.

https://www.tomshardware.com/news/rtx-2080-ti-gpu-defects-launch,37995.html

https://www.reddit.com/r/nvidia/comments/akl3he/heres_why_my_rtx_graphics_card_failed_with_proof/

https://www.tomshardware.com/news/nvidia-geforce-rtx-2080-ti-failures,38085.html

https://www.pcbuildersclub.com/en/2...ches-from-micron-to-samsung-for-gddr6-memory/

https://forums.geforce.com/default/...es/rtx-2080ti-new-batch-gets-samsung-memory-/

https://www.digitaltrends.com/computing/nvidia-admits-2080-ti-problems/

https://www.gamersnexus.net/guides/...cting-failure-analysis-crashing-black-screens

https://www.extremetech.com/gaming/...tx-2080-ti-gpus-are-defective-promises-remedy

https://www.theinquirer.net/inquirer/news/3066514/nvidia-admits-some-rtx-2080-ti-fe-cards-are-borked

https://www.kitguru.net/components/...-an-unusually-high-number-of-failure-reports/

I can go on for hours.........page after page after page.....


----------



## Coldmud

So this thread keeps turning into Micron vs. Samsung memory a lot lately. I thought these speculation posts ended late last year. Was it ever proven that Micron memory was the culprit, and not the memory controller or another design flaw?
I've seen Samsung cards dying too. Was Micron memory even a problem with NV cards in the past?
Now we have people telling us how far Micron can OC too? Where are all the posts from people with (unlocked) Lightning Zs? Slapping water blocks on them? Speculation about the Kingpin? Anything important, tbh?
In Titan times I only lurked, but this forum was cutting edge, with insider people and lots of info, everyone doing crazy stuff; now it's 90% fluff and questions a good search would have taken care of.
/rant


----------



## Krzych04650

JustinThyme said:


> What we are talking about is Micron GDDR6 in the 2080Ti cards that have in fact been having issues with crapping out from overheating among other things. No Myth to it, all fact.
> 
> https://www.tomshardware.com/news/rtx-2080-ti-gpu-defects-launch,37995.html
> 
> https://www.reddit.com/r/nvidia/comments/akl3he/heres_why_my_rtx_graphics_card_failed_with_proof/
> 
> https://www.tomshardware.com/news/nvidia-geforce-rtx-2080-ti-failures,38085.html
> 
> https://www.pcbuildersclub.com/en/2...ches-from-micron-to-samsung-for-gddr6-memory/
> 
> https://forums.geforce.com/default/...es/rtx-2080ti-new-batch-gets-samsung-memory-/
> 
> https://www.digitaltrends.com/computing/nvidia-admits-2080-ti-problems/
> 
> https://www.gamersnexus.net/guides/...cting-failure-analysis-crashing-black-screens
> 
> https://www.extremetech.com/gaming/...tx-2080-ti-gpus-are-defective-promises-remedy
> 
> https://www.theinquirer.net/inquirer/news/3066514/nvidia-admits-some-rtx-2080-ti-fe-cards-are-borked
> 
> https://www.kitguru.net/components/...-an-unusually-high-number-of-failure-reports/
> 
> I can go on for hours.........page after page after page.....


Yea, I meant GDDR6, obviously; there is no GDDR5 that does 14 or 16 Gbps.

I am sure that you can go on for hours posting links to "news" from every single media outlet in the world, but it doesn't change the fact that it all originated from the same myth and is all just copy-pasted jibber-jabber with no substance, and especially no data. The number of times it is copied doesn't increase its credibility; obviously all media is going to pick up on cheap drama like that and maintain it as long as possible. There is no conclusive evidence that Turing cards have higher RMA rates than other generations, and even if there was some faulty batch, as NVIDIA confirmed, that is nothing uncommon. Before, people simply sent things in for RMA without posting hour-long tirades about how outrageous it is that $1200 cards can be faulty, as if the price had anything to do with it. And of course all of the broke internet malcontents are going to pick up on this immediately and spread it everywhere, especially this time because prices were increased heavily.

So it is not "all fact" as you say; these are all serious claims based on absolutely no data, the exact opposite of fact.


----------



## J7SC

Coldmud said:


> So this thread keeps turning into Micron vs Samsung memory a lot lately... I thought these speculation posts ended late last year. Was it ever proven that Micron memory was the culprit here, and not the memory controller or another design flaw?
> I've seen Samsung cards dying too... Was Micron memory even a problem on NV cards in the past?
> Now we have people telling us how far Micron can OC too? Where are all the posts of people with (unlocked) Lightning Zs? Slapping waterblocks on them? Speculation about the Kingpin? Anything important, tbh?
> In Titan times I only lurked, but this forum was cutting edge, insider people here with lots of info, everyone was doing crazy stuff; now it's 90% fluff and questions a good search would have taken care of.
> /rant



...yeah, I thought this thing was already discussed ad nauseam ...usually, someone posts that cards w/ Samsung also expired, and round-and-round we go, again and again. Most of the initial batch of cards had Micron, so no matter what the problem really was/is with any of the components, Micron will always show up at a high rate, at least in the first year or so. Also, just quoting a whole bunch of articles which seem to feed off each other and their internal 'research' doesn't really cut it... quantity does not equal quality, a well-known problem w/ social media...

The Inquirer article linked above is quite interesting in its own right. For example, it states this (excerpts): _"In a low-key blog post this week, Nvidia finally fessed up to the borkage: 'Limited test escapes from early boards caused the issues some customers have experienced with RTX 2080 Ti Founders Edition...' This is a roundabout way of saying that the as-yet-unknown underlying cause for the issues made it past quality control and into the hands of consumers... While Nvidia hasn't confirmed what's causing the problems, ExtremeTech reports that *forum users are speculating* the problem may be tied to Micron's GDDR6 memory, mainly because the replacements they've received from Nvidia show the Micron memory swapped out for Samsung. However, as the website notes, this could simply be due to the fact that Nvidia's repair shop / GPU supplier only had Samsung memory in stock instead of Micron, so it *remains speculation for now.*"_ This doesn't strike me as proof of anything... just my :2cents:


----------



## Esenel

JustinThyme said:


> Krzych04650 said:
> 
> 
> 
> Majority of Micron cards are doing +1000-1100 no problem. Almost all reviews reported 8000MHz+ memory overclock, on all Turing cards, and it was all Micron for a long time before cards with Samsung chips started to appear. So you really need to stop with all of this Samsung memory mythology.
> 
> Though it may change with time as target for GDDR5 is 16Gbps not 14, so I suspect newer cards will overclock better, or most likely there are going to be some 16Gbps versions released for some Turing models, just like GTX 1080 11 Gbps or GTX 1060 9 Gbps.
> 
> But for now there is no real benefit from running Samsung over Micron. Alleged Micron memory failings were never confirmed as a cause of RTX failings, actually high failure rates for RTX cards overall were never confirmed in the first place, so all of this is more of a witch hunt than anything else, and looking for imaginary cause of imaginary issue. Thats all because of the hate caused by high pricing, so every issue is going to be blown completely out of proportions as it is now a life's ambition for many people to hate on these cards. If they were priced normally we most likely wouldn't even hear about any of this.
> 
> 
> 
> What we are talking about is Micron GDDR6 in the 2080Ti cards that have in fact been having issues with crapping out from overheating among other things. No Myth to it, all fact.
> 
> https://www.tomshardware.com/news/rtx-2080-ti-gpu-defects-launch,37995.html
> 
> https://www.reddit.com/r/nvidia/comments/akl3he/heres_why_my_rtx_graphics_card_failed_with_proof/
> 
> https://www.tomshardware.com/news/nvidia-geforce-rtx-2080-ti-failures,38085.html
> 
> https://www.pcbuildersclub.com/en/2...ches-from-micron-to-samsung-for-gddr6-memory/
> 
> https://forums.geforce.com/default/...es/rtx-2080ti-new-batch-gets-samsung-memory-/
> 
> https://www.digitaltrends.com/computing/nvidia-admits-2080-ti-problems/
> 
> https://www.gamersnexus.net/guides/...cting-failure-analysis-crashing-black-screens
> 
> https://www.extremetech.com/gaming/...tx-2080-ti-gpus-are-defective-promises-remedy
> 
> https://www.theinquirer.net/inquirer/news/3066514/nvidia-admits-some-rtx-2080-ti-fe-cards-are-borked
> 
> https://www.kitguru.net/components/...-an-unusually-high-number-of-failure-reports/
> 
> I can go on for hours.........page after page after page.....

So?
You are still wrong.

No higher failure rate was confirmed.
And my 2080 Ti's Micron memory has run at +1000 since release in October.
Which is impossible according to what you're saying.

What's your Time Spy graphics score with a single card?

Build is looking very nice.


----------



## Nizzen

I have 5 2080ti with micron mem. No problems 

1x fe
2x msi tri x
2x zotac Amp!


----------



## Silent Scone

Reference 2080Ti with Micron here since Oct, too. 800+ and no issues.


----------



## CptSpig

Silent Scone said:


> Reference 2080Ti with Micron here since Oct, too. 800+ and no issues.


I have a 2080ti FE with Micron +1340 on the memory. :blinksmil


----------



## Kalm_Traveler

CptSpig said:


> I have a 2080ti FE with Micron +1340 on the memory. :blinksmil


I was able to get through some benchmarks on my Titans with the memory that high, but the scores were lower. I found somewhere around +1250 or so seemed to be the threshold where memory errors start to exceed the benefit of the higher speed.
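The shape described above (scores rising with memory offset, then falling past a threshold) is consistent with GDDR6's error detection and replay: past a point, retransmissions cost more bandwidth than the extra clock adds. A toy Python sketch of that trade-off; the error curve, the +1250 threshold, and all the constants are illustrative assumptions, not measured data:

```python
# Toy model (illustrative only): effective bandwidth vs. memory offset.
# GDDR6 error detection triggers replays at high clocks; the retry
# overhead eventually outgrows the raw-bandwidth gain. The retry curve
# below is an assumption, not measured behavior.

def effective_bandwidth_gbps(offset_mhz, base_mhz=7000.0, bus_bits=352):
    """GB/s after an assumed replay penalty kicks in past +1250."""
    data_rate_gbps = 2 * (base_mhz + offset_mhz) / 1000.0  # DDR, per pin
    raw_gb_s = data_rate_gbps * bus_bits / 8
    overage = max(0.0, offset_mhz - 1250.0)                # assumed knee
    retry_fraction = min(0.5, (overage / 50.0) ** 2 * 0.02)
    return raw_gb_s * (1 - retry_fraction)

if __name__ == "__main__":
    for off in (0, 800, 1000, 1250, 1340):
        print(off, round(effective_bandwidth_gbps(off), 1))
```

With these assumed constants the curve peaks around +1250 and declines beyond it, matching the anecdote; at offset 0 it reproduces the stock 616 GB/s from the spec sheet (14 Gbps x 352-bit / 8).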


----------



## NewType88

I have a ftw3 with micron that blew up and left a hole in my wall - jk. I've had my ftw3 since nov with micron.


----------



## mark12000

Does anyone know if the EK Vector waterblock fits the Gigabyte 2080 Ti Windforce OC? I know the Gaming OC doesn't without some mods, but I can't find anything on it... Also, does anyone know if updating the BIOS to the higher power limit they released voids the warranty? Last, but not least, would this be worth getting over the NV Founders Edition for $100 less? (AUD)

It's going to be water cooled either way, so yeah haha.


----------



## Jpmboy

CptSpig said:


> I have a 2080ti FE with Micron +1340 on the memory. :blinksmil


:thumb:
Have you tried that Aorus bios folks claim runs a bit better than the galax 380W bios?


----------



## xaxx

Hi, I'm new and I hope you can help me. I have ordered an MSI RTX 2080 Ti Sea Hawk X, and I have seen the BIOS update to the Galax 380W. I could exchange it for the MSI X Trio, since I have seen that there is a 406W BIOS for the X Trio. Which do you think will be better? Will the Trio give much more performance with the 406W BIOS? Thanks for the help.


----------



## J7SC

CptSpig said:


> I have a 2080ti FE with Micron +1340 on the memory. :blinksmil


Very nice ! 

The Aorus XTR cards seem to default-clock their VRAM just a touch higher out of the box, but that's all academic given the extra room for oc. What's interesting though is that while the serial numbers of my two cards are only a couple of digits apart, they have a slightly different Bios # version (same date though), and one clocks the GPU a bit higher but is a touch (30 MHz +-) slower on the VRAM in 3DM etc. So they may be 'twins' but not identical twins (...the attached pic had the older version of MSI AB which was initially limited to '+1000' MHz - in the top pic, I had used the EVGA PrecX to set VRAM)...

Hopefully, this round of the Micron allegations is over (until the next time :wth: ) ...I just sent myself an email with the link for this Techspot article for next time. I highly recommend it:


Spoiler



https://www.techspot.com/news/77445-nvidia-addresses-failing-geforce-rtx-2080-ti-cards.html


 It makes some excellent points, including the fact that Micron had exclusivity for the initial batch of cards, so no matter what really went wrong, the cards all had Micron. Even more important "_...the issue seems restricted to the RTX 2080 Ti, which makes it unique to some feature that is not shared with the 2080. Since the RTX 2080 Ti and non-Ti cards share the same memory controller and chips, it's highly unlikely this was caused by a hardware defect in those components..._"

*What I really want to know:* :headscrat Does anyone actually have a 2080 Ti with Hynix GDDR6, or at least a link to a test / review with that VRAM ?


----------



## zack_orner

xaxx said:


> Hi, I'm new and I hope you can help me. I have ordered an MSI RTX 2080 Ti Sea Hawk X, and I have seen the BIOS update to the Galax 380W. I could exchange it for the MSI X Trio, since I have seen that there is a 406W BIOS for the X Trio. Which do you think will be better? Will the Trio give much more performance with the 406W BIOS? Thanks for the help.


I would say try them both. I have seen my card pull 405W on the MSI BIOS and I get better results, but it's made for my card. Others have had worse results, but I believe they had two-8-pin cards, not two 8s and one 6-pin, or three-8-pin cards. I also tried the 450W HOF BIOS I mentioned earlier in the thread, with worse results. But I say do a little testing with all three if you can spare the time.

2700x x470-f gaming 1 tb 970 evo gskiils 3200cl14 msi 2080 ti gaming x trio


----------



## CptSpig

Jpmboy said:


> :thumb:
> Have you tried that Aorus bios folks claim runs a bit better than the galax 380W bios?


I have not but I am willing to give it a shot. Have you tried the Aorus bios?


----------



## Time2DevNull

Hey all. Upgrading from a 970 to a 2080 Ti and choosing between cards; need help.
Currently paying attention to power phases and base clock, so I'm choosing between 1. Gigabyte Aorus X, 2. EVGA FTW3 Ultra and 3. Asus ROG Strix. Would you suggest one of these as the way to go? Also, the Gigabyte just became available in local stores (the others not; I've been following for 2 weeks and requested them), so I'm wondering if I should just order it.


----------



## xaxx

Initially I wouldn't flash the 406W BIOS, since my card has a reference PCB and two 8-pin connectors. Can you tell me what MHz your 2080 Ti X Trio reaches with the 406W BIOS and at what temperature, and whether you can raise the core and memory clocks a lot? If the X Trio gives better results than the Sea Hawk X, I would return the Sea Hawk X, since the X Trio is much cheaper.


----------



## AngryLobster

Can anyone confirm a BIOS that allows 0RPM fan functionality on an AMP (not AMP Extreme)?


----------



## ReFFrs

J7SC said:


> The Aorus XTR cards seem to default-clock their VRAM just a touch higher out of the box, but that's all academic given the extra room for oc.


Now I want to make this very clear:

*Different BIOSes have different MEMORY TIMINGS, specifically for the Samsung VRAM, but maybe for Micron as well.*

That's why you can get better scores even if memory clock remains the same.

GALAX 380W bios has definitely outdated timings and won't give you a score as high as with AORUS 366W bios or EVGA 373W (choose latest compiled version).

Try it yourself and you will see a difference.


----------



## xaxx

I think that cards with a custom PCB work better with their own updated BIOS, and not with a BIOS from a reference-PCB card. That's why the Aorus will work better with its own BIOS than with the Galax 380W, which is for a reference PCB.


----------



## zack_orner

xaxx said:


> Initially I wouldn't flash the 406W BIOS, since my card has a reference PCB and two 8-pin connectors. Can you tell me what MHz your 2080 Ti X Trio reaches with the 406W BIOS and at what temperature, and whether you can raise the core and memory clocks a lot? If the X Trio gives better results than the Sea Hawk X, I would return the Sea Hawk X, since the X Trio is much cheaper.


I did not hit the silicon lottery, although I did see improvement. I'm getting 2130 MHz and about +1026-1046 on the Micron memory in benchmarks, and about 2085 game-stable. To be honest I did +100 MHz and set a curve from 1056 to 1093 mV. My biggest issue is heat: as soon as my card reaches 55°C it downclocks, and it tops out at about 63°C. I believe if I water cooled it, it would do better.

2700x x470-f gaming 1 tb 970 evo gskiils 3200cl14 msi 2080 ti gaming x trio


----------



## Esenel

ReFFrs said:


> J7SC said:
> 
> 
> 
> The Aorus XTR cards seem to default-clock their VRAM just a touch higher out of the box, but that's all academic given the extra room for oc.
> 
> 
> 
> Now I want to make this very clear:
> 
> *Different BIOSes have different MEMORY TIMINGS, specifically for the Samsung VRAM, but maybe for Micron as well.*
> 
> That's why you can get better scores even if memory clock remains the same.
> 
> GALAX 380W bios has definitely outdated timings and won't give you a score as high as with AORUS 366W bios or EVGA 373W (choose latest compiled version).
> 
> Try it yourself and you will see a difference.

Would you mind linking the Evga bios?

Thanks.


----------



## Jpmboy

ReFFrs said:


> Now I want to make this very clear:
> 
> *Different BIOSes have different MEMORY TIMINGS, specifically for the Samsung VRAM, but maybe for Micron as well.*
> 
> That's why you can get better scores even if memory clock remains the same.
> 
> *GALAX 380W bios has definitely outdated timings* and won't give you a score as high as with AORUS 366W bios or EVGA 373W (choose latest compiled version).
> 
> Try it yourself and you will see a difference.


have you examined the timing sets for both vram ICs? Or are you making this statement based on some "score"?


----------



## Jpmboy

CptSpig said:


> I have not but I am willing to give it a shot. Have you tried the Aorus bios?


 not yet... I have 2 2080Ti FEs sitting here that I can try it on. Anyone have it handy?

edit: nvm, I got it off TPU


----------



## pewpewlazer

Jpmboy said:


> not yet... I have 2 2080Ti FEs sitting here that I can try it on. Anyone have it handy?
> 
> edit: nvm, I got it off TPU


So is this the one? Allegedly better than the GALAX 380w? And it works fine on reference PCB cards? I may give it a go later then.

https://www.techpowerup.com/vgabios/205897/gigabyte-rtx2080ti-11264-181025-1


----------



## ReFFrs

Esenel said:


> Would you mind linking the Evga bios?
> 
> Thanks.


Try these two bios versions with tuned memory timings and see which one is better for you:

*1) AORUS 2080 Ti XTREME WATERFORCE*

366W: https://www.techpowerup.com/vgabios/208831/208831

*2) EVGA 2080 Ti FTW3 HYDRO COPPER*

373W: https://www.techpowerup.com/vgabios/207291/207291


----------



## xaxx

pewpewlazer, which card is yours? Is it a reference PCB or custom?


----------



## Sheyster

pewpewlazer said:


> So is this the one? Allegedly better than the GALAX 380w? And it works fine on reference PCB cards? I may give it a go later then.
> 
> https://www.techpowerup.com/vgabios/205897/gigabyte-rtx2080ti-11264-181025-1


In what way is it better? I've been using the GALAX BIOS since I got my card; it works well for me!

EDIT - Just re-read recent posts, guess it's memory timings/sub-timings related?


----------



## J7SC

ReFFrs said:


> Now I want to make this very clear:
> 
> *Different BIOSes have different MEMORY TIMINGS, specifically for the Samsung VRAM, but maybe for Micron as well.*
> 
> That's why you can get better scores even if memory clock remains the same.
> 
> GALAX 380W bios has definitely outdated timings and won't give you a score as high as with AORUS 366W bios or EVGA 373W (choose latest compiled version).
> 
> Try it yourself and you will see a difference.



Tx, but I actually have the Aorus Xtr WB cards with their stock Bios; running since late November (Card 1) and early December (card 2). They pull between 375w and 379w in Unigine Superposition 4K/8K, according to GPUz.


----------



## pewpewlazer

xaxx said:


> Pewpewlazer what card is yours? It is PCB reference or customized


EVGA XC Ultra. Reference PCB. Hence my question.


----------



## xaxx

I suppose then that nothing bad happens if you put a BIOS from a custom-PCB card on one with a reference PCB, no? If so, we could try the different custom-PCB BIOSes, such as the 406W X Trio or the 450W Galax. Anyway, when I get the Sea Hawk X I will try it to see how it does stock and how it goes with the Galax 380W BIOS. If someone has tried a custom-PCB card's BIOS on a reference PCB, please explain how their experience has been. Thank you.


----------



## xaxx

J7SC, I have seen that your cards do 2200 MHz and 2235 MHz; is that right? Is it stable at that speed, and what temperature does it run at? I was deciding between the MSI and the Aorus Xtreme, but I read that the latter can be overclocked very little and that its temperature was very high.


----------



## Esenel

ReFFrs said:


> Try these two bios versions with tuned memory timings and see which one is better for you:
> 
> *1) AORUS 2080 Ti XTREME WATERFORCE*
> 
> 366W: https://www.techpowerup.com/vgabios/208831/208831
> 
> *2) EVGA 2080 Ti FTW3 HYDRO COPPER*
> 
> 373W: https://www.techpowerup.com/vgabios/207291/207291


Thanks for sharing.
Had no luck though.

Neither BIOS could hold my 24/7 OC from the Galax 380W of 2115 MHz (+145) + 8000 MHz (+1000).

With the Gigabyte 366W, 2115 MHz (+135) + 7980 MHz (+800) failed at startup several times.
Same for the EVGA BIOS.

So my benching values couldn't even be tested.

Just reran it with Galax 380W:

BCLK 103.5 - +149 Core + 1050 Mem: 16977
https://www.3dmark.com/spy/6460995

This lousy 23 points -.-

I will get you at some point in time...
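The arithmetic behind numbers like these can be sketched quickly. A minimal Python example, assuming Afterburner's 7000 MHz baseline memory readout for 14 Gbps GDDR6 and the 2080 Ti's 352-bit bus (both per the spec sheet); the BCLK scaling is just the percentage multiplier:

```python
# Sketch of the overclocking arithmetic quoted in this thread.
# Assumes: 7000 MHz displayed baseline for 14 Gbps GDDR6, 352-bit bus.

def effective_core_mhz(clock_mhz, bclk_percent):
    """Core clock after a base-clock (BCLK) bump, e.g. 103.5%."""
    return clock_mhz * bclk_percent / 100.0

def memory_bandwidth_gb_s(mem_mhz_displayed, bus_bits=352):
    """GB/s from the displayed memory clock (DDR: x2 per pin)."""
    return 2 * mem_mhz_displayed * bus_bits / 8 / 1000.0

print(effective_core_mhz(2115, 103.5))     # ~2189 MHz real core clock
print(memory_bandwidth_gb_s(7000))         # 616 GB/s at stock
print(memory_bandwidth_gb_s(7000 + 1050))  # ~708 GB/s at +1050
```

So a 2115 MHz bin at BCLK 103.5 actually runs near 2189 MHz, and a +1050 memory offset is roughly a 15% bandwidth gain over stock.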


----------



## J7SC

xaxx said:


> J7SC, I have seen that your cards do 2200 MHz and 2235 MHz; is that right? Is it stable at that speed, and what temperature does it run at? I was deciding between the MSI and the Aorus Xtreme, but I read that the latter can be overclocked very little and that its temperature was very high.



Hopefully, 'Google Translate' didn't screw anything up  ...There are 3 different versions of the Aorus Xtreme 2080 Ti: an air-cooled one, an AIO one and the full factory-waterblocked one (I have the latter). AFAIK, the PCBs are the same. Temps were never an issue, even before I upgraded the (separate) GPU loop from a single rad to 3x 360/60 rads and twin pumps. The extra rads don't so much lower the idle temps as keep temps from rising by more than 1 or 2 degrees once a game or bench has started. The posted GPU-Z render speeds are, as mentioned before, the max I can achieve on light loads. Typical Unigine Superposition 4K is max approx. 2150 to 2190 (it moves around a bit in Superposition, even w/ stable temps).


----------



## TK421

pewpewlazer said:


> EVGA XC Ultra. Reference PCB. Hence my question.





J7SC said:


> Hopefully, 'Google Translate' didn't screw anything up  ...There are 3 different versions of the Aorus Xtreme 2080 Ti: an air-cooled one, an AIO one and the full factory-waterblocked one (I have the latter). AFAIK, the PCBs are the same. Temps were never an issue, even before I upgraded the (separate) GPU loop from a single rad to 3x 360/60 rads and twin pumps. The extra rads don't so much lower the idle temps as keep temps from rising by more than 1 or 2 degrees once a game or bench has started. The posted GPU-Z render speeds are, as mentioned before, the max I can achieve on light loads. Typical Unigine Superposition 4K is max approx. 2150 to 2190 (it moves around a bit in Superposition, even w/ stable temps).





The XC Ultra has an additional chip near the fan connector to enable separate fan control for the XC cooler, though.


Someone here with the Galax 380W BIOS reported that the fan control didn't work out very well when the card reached high temps.


Has anyone here repasted the XC Ultra with a different thermal paste, like Kryonaut (not liquid metal)?


Wondering if the temp difference is going to be significant enough to warrant it.


----------



## xaxx

Well, for the price that the MSI Sea Hawk X cost me, I could buy the Aorus Xtreme Waterforce. I don't know if the Hawk will give that performance.


----------



## xaxx

The MSI water-cools the GPU but cools the memory and board by air with the other fan, while on the Aorus everything is water-cooled. It's a big dilemma.


----------



## J7SC

*@Coldmud*



Spoiler



Per your earlier comment, find the wandering fan...hint: look up


----------



## JustinThyme

Esenel said:


> So?
> You are still wrong.
> 
> No higher failure rate was confirmed.
> And my 2080 Ti's Micron memory has run at +1000 since release in October.
> Which is impossible according to what you're saying.
> 
> What's your Time Spy graphics score with a single card?
> 
> Build is looking very nice.


How am I wrong? Sharing information isn't wrong where I come from. It's generally considered a courtesy.

Great that you get good results. Personally I've never had good luck with Micron in anything, with Hynix about the same. Last generation's 1080 Tis that I had, the Strix OC, had Micron chips. Never got past +200 on them and even that was sketchy: ran fine that way in some things, artifacts out the wazoo in others. In the end I just left it at stock where I knew there were no problems.

Of course no one is going to fess up to screwing the pooch. Simple deduction. They did fess up to there being an issue. The only way to confirm any failure rate data is to buy up a thousand cards and test them yourself; NVIDIA certainly isn't going to volunteer that info. Either way my decision is not to run Micron chips, and I have a solid foundation for that choice.

Honestly I don't have a clue what my single card scores are. I bought two intending to use two and have run two for a very long time, pretty much since SLI was born. I'd feel slighted using a single card.  The only time I've run a single card in the last 20 years is with a standby while working on the primaries. Right now my standby is a Strix 1080; I was running a pair of those before the 1080 Ti was released. What I can tell you is my 1080 Ti SLI scores were 18K, if I remember correctly.


----------



## Esenel

JustinThyme said:


> How am I wrong? Sharing information isn't wrong where I come from. It's generally considered a courtesy.
> 
> Great that you get good results. Personally I've never had good luck with Micron in anything, with Hynix about the same. Last generation's 1080 Tis that I had, the Strix OC, had Micron chips. Never got past +200 on them and even that was sketchy: ran fine that way in some things, artifacts out the wazoo in others. In the end I just left it at stock where I knew there were no problems.
> 
> Of course no one is going to fess up to screwing the pooch. Simple deduction. They did fess up to there being an issue. The only way to confirm any failure rate data is to buy up a thousand cards and test them yourself; NVIDIA certainly isn't going to volunteer that info. Either way my decision is not to run Micron chips, and I have a solid foundation for that choice.
> 
> Honestly I don't have a clue what my single card scores are. I bought two intending to use two and have run two for a very long time, pretty much since SLI was born. I'd feel slighted using a single card. The only time I've run a single card in the last 20 years is with a standby while working on the primaries. Right now my standby is a Strix 1080; I was running a pair of those before the 1080 Ti was released. What I can tell you is my 1080 Ti SLI scores were 18K, if I remember correctly.


It really seems you have bad luck with Micron. That's a shame.
My MSI 1080 back in the day did +550 without diminishing returns.

As others said as well, some so-called tech sites just jumped on the hype train and reported rumors as "news".

The only one who tried to do actual research on this topic was GamersNexus.
And if I remember correctly, he showed that over half of the cards sent in to him were not broken as the owners claimed.

For most it was just the graphics driver or some dual-monitor setup issue.
On one he had to reflash the BIOS.

So if we apply these numbers to all the people claiming they have a dead 2080 Ti, it isn't so bad anymore. 🙂

Saying this, I always wait for someone to prove the topic instead of believing "media" resharing so-called "news" 😉

https://youtu.be/A4g5CZCaWXo

https://youtu.be/A0dRm5DqHWQ

And the summary.
His first sentence:
"The RTX 2080Ti failures aren't as widespread as they might have seemed from initial postings..."

https://youtu.be/JIRfPlC15uc

And SLI.
I also thought about doing it, but nahh 😉
Have fun with it though.


----------



## ic3man1986

[email protected],

is there a BIOS for the AORUS GeForce RTX™ 2080 Ti XTREME WATERFORCE WB 11G with a higher power limit?

Regards,
Ic3mAn


----------



## xaxx

Ic3mAn, I guess that, as ReFFrs said before, it works with your stock 366W BIOS as well as with the 373W EVGA 2080 Ti FTW3 Hydro Copper one. I suppose the latter should give more performance.


----------



## bogdi1988

ReFFrs said:


> J7SC said:
> 
> 
> 
> The Aorus XTR cards seem to default-clock their VRAM just a touch higher out of the box, but that's all academic given the extra room for oc.
> 
> 
> 
> Now I want to make this very clear:
> 
> *Different BIOSes have different MEMORY TIMINGS, specifically for the Samsung VRAM, but maybe for Micron as well.*
> 
> That's why you can get better scores even if memory clock remains the same.
> 
> GALAX 380W bios has definitely outdated timings and won't give you a score as high as with AORUS 366W bios or EVGA 373W (choose latest compiled version).
> 
> Try it yourself and you will see a difference.




pewpewlazer said:


> Jpmboy said:
> 
> 
> 
> not yet... I have 2 2080Ti FEs sitting here that I can try it on. Anyone have it handy?
> 
> edit: nvm, I got it off TPU
> 
> 
> 
> So is this the one? Allegedly better than the GALAX 380w? And it works fine on reference PCB cards? I may give it a go later then.
> 
> https://www.techpowerup.com/vgabios/205897/gigabyte-rtx2080ti-11264-181025-1




ReFFrs said:


> Esenel said:
> 
> 
> 
> Would you mind linking the Evga bios?
> 
> Thanks.
> 
> 
> 
> Try these two bios versions with tuned memory timings and see which one is better for you:
> 
> *1) AORUS 2080 Ti XTREME WATERFORCE*
> 
> 366W: https://www.techpowerup.com/vgabios/208831/208831
> 
> *2) EVGA 2080 Ti FTW3 HYDRO COPPER*
> 
> 373W: https://www.techpowerup.com/vgabios/207291/207291

Which of these BIOSes would work on a reference card and be better than the Galax one?


----------



## pfinch

Any update regarding the Matrix BIOS?

Does the EVGA 2080 Ti FTW3 Hydro Copper BIOS work on the Strix OC?


----------



## ReFFrs

Pretty much every BIOS should work with every card, so don't worry. Only avoid a BIOS if its card has, let's say, 3 power connectors but yours only has 2. Even some of the 3-connector BIOSes will work, but won't bring much benefit.


----------



## xaxx

So any BIOS can be put on any card? Could you put the 450W Galax BIOS, or the 406W X Trio one, on a card with a reference PCB?


----------



## CptSpig

Jpmboy said:


> not yet... I have 2 2080Ti FEs sitting here that I can try it on. Anyone have it handy?
> 
> edit: nvm, I got it off TPU


Thanks, I found it on tech power up also.


----------



## VPII

xaxx said:


> Then any card can be put any bios. To one with reference pcb could you put the galaxy bios of 450w ?? Or the 406w of the x trio ???


Hi there..... I actually flashed my old RTX 2080 Ti, the Galax RTX 2080 Ti OC (380W TDP BIOS), with the 450W BIOS. The result was higher temps, with the card actually pulling close to 450W at the same clocks as on the 380W BIOS, so I went back to the 380W BIOS. Currently, due to my old card going bonkers and getting stuck at a 1350 MHz core clock, I got a Palit RTX 2080 Ti GamingPro OC which is currently running the 380W BIOS. Luckily this card has Samsung memory, which runs a lot cooler than my old card with Micron memory, so the NZXT Kraken G12 fitted to the card with a Corsair H110 does a great job. The highest temp I saw, in the middle of summer here in SA, was 46°C.


----------



## ReFFrs

VPII said:


> Hi there..... I actually flashed my old RTX 2080 Ti, the Galax RTX 2080 Ti OC (380W TDP BIOS), with the 450W BIOS. The result was higher temps, with the card actually pulling close to 450W at the same clocks as on the 380W BIOS, so I went back to the 380W BIOS.


Are you talking about this bios? https://www.techpowerup.com/vgabios/204869/galax-rtx2080ti-11264-180927

Your clocks were the same as you say, but what about bumping into power limit and dropping clocks at heavy loads? Was the card acting the same or better with 450W than with 380W?


----------



## VPII

ReFFrs said:


> Are you talking about this bios? https://www.techpowerup.com/vgabios/204869/galax-rtx2080ti-11264-180927
> 
> Your clocks were the same as you say, but what about bumping into power limit and dropping clocks at heavy loads? Was the card acting the same or better with 450W than with 380W?


Yes I did..... that BIOS actually made my card run at full speed after being stuck at 1350 MHz core. It worked even after flashing back, but then a new driver update resulted in the same stuck core clock. Well, it was RMA'd so I cannot complain. The card was acting worse as it was temp limited, at that stage on air cooling. But I found that even with the current water cooling it still resulted in clocks dropping due to TDP.


----------



## ReFFrs

VPII said:


> Yes I did..... that BIOS actually made my card run at full speed after being stuck at 1350 MHz core. It worked even after flashing back, but then a new driver update resulted in the same stuck core clock. Well, it was RMA'd so I cannot complain. The card was acting worse as it was temp limited, at that stage on air cooling. But I found that even with the current water cooling it still resulted in clocks dropping due to TDP.


Wait, wait... So the 450 W BIOS wasn't the cause of the clocks locking to 1350 MHz, because with the new card and water cooling you tried the 450 W BIOS again and the clocks remained OK?


----------



## J7SC

Jpmboy said:


> not yet... I have 2 2080Ti FEs sitting here that I can try it on. Anyone have it handy?
> 
> edit: nvm, I got it off TPU





CptSpig said:


> Thanks, I found it on tech power up also.



As posted before, there *may* be slightly different versions of the Aorus WB BIOS, though I am not sure what the differences are, if any. My two Aorus WBs behave basically the same (very minor differences in which is better on GPU and which on VRAM, and both pull about 375 W to 379 W @ 122% PL). Further, they are just a few single digits apart on their serial numbers. Their BIOSes also have the same release date but, oddly enough, slightly different version numbers. The machine that houses them is off right now after a bit of painting some panels (black and white, of course), but later in the afternoon I can post both cards' BIOSes as zip files if you want me to... just don't ask me what the differences between the two are or which to use


----------



## jura11

Dancop posted XOC BIOS for RTX 2080Ti






Hope this helps 

Thanks, Jura


----------



## GAN77

jura11 said:


> Dancop posted XOC BIOS for RTX 2080Ti
> 
> https://youtu.be/NvW_F7fb4bU
> 
> Hope this helps
> 
> Thanks, Jura




Can you link to the source?


----------



## xaxx

VPII, which 450 W BIOS did you test, and is it safe for reference-PCB cards? Mine will be a Sea Hawk X, water cooled. Will it work well, or will it go crazy like yours did so that I have to return it? Thank you


----------



## jura11

GAN77 said:


> Can I link to the source?


Dancop posted this over on Guru3D 


Dear Community,

I have a very special present for you guys!
You are sick of the powertarget of a 2080Ti?
Then watch my video, give me a thumbs up and maybe follow my channel...you can find the BIOS in the description!
With this BIOS, there's no powertarget at all!
Furthermore, some guys reported, it might also run on a reference card, just one DP port is disabled!

Please don't just share the BIOS, once you downloaded it, please share my video instead 

Hope this helps 

Thanks, Jura


----------



## BigMack70

How much extra performance over stock are you guys able to get with custom BIOS? Anyone approaching +20% performance over stock?


----------



## KIENAST

Dear all,
I have a question, and to be honest I was too lazy to go through the whole discussion before posting.
I have an SLI of 1080 Tis with no GPU waterblock available on the market anymore (thanks, Gigabyte), and for the sake of build simplicity I am looking to replace this SLI with a single GeForce RTX 2080 Ti Sea Hawk EK X.

Should I expect more or less the same performance, or only a very small difference, between the two setups?

I am using a 7960X and the whole build is already watercooled.

Thanks!


----------



## xaxx

Until recently I had an SLI of two 1080 Ti Aorus Extremes, and I have tried two RTX 2080 Tis. In the games where SLI scales, it could put out more performance than the RTX, but the problem is this: most games are not SLI compatible, and when SLI is forced with NVIDIA Inspector, there are effects that don't render or render wrong. For example, in Battlefield V: the rain, the snow, the post-processed reflections (such as reflections in the water), and a motion blur that smears objects. In Shadow of the Tomb Raider, on the other hand, SLI is perfect, offering more FPS than the RTX. But the games that are SLI compatible can be counted on your fingers, so better one RTX: you will see the game as it is meant to be, with all its wonders, and you will not miss a single one.


----------



## xaxx

I swear, what Dancop posted is music to my ears. This Friday I get the Sea Hawk X; I'll do a few tests, heat it up well, and then try the Galax 380 BIOS and Dancop's Asus one. I will see what performance each gives and report back here.


----------



## Esenel

jura11 said:


> GAN77 said:
> 
> 
> 
> Can I link to the source?
> 
> 
> 
> Dancop posted this over on Guru3D
> 
> 
> Dear Community,
> 
> I have a very special present for you guys!
> You are sick of the powertarget of a 2080Ti?
> Then watch my video, give me a thumbs up and maybe follow my channel...you can find the BIOS in the description!
> With this BIOS, there's no powertarget at all!
> Furthermore, some guys reported, it might also run on a reference card, just one DP port is disabled!
> 
> Please don't just share the BIOS, once you downloaded it, please share my video instead
> 
> Hope this helps
> 
> Thanks, Jura

Do you have a link to the community?


----------



## jura11

Esenel said:


> Do you have a link to the community?


He posted the YT video over on the Guru3D forum, and I posted the YT link over here as well

Hope this helps 

Thanks, Jura


----------



## DeadSec

jura11 said:


> Dancop posted this over on Guru3D
> 
> 
> Dear Community,
> 
> I have a very special present for you guys!
> You are sick of the powertarget of a 2080Ti?
> Then watch my video, give me a thumbs up and maybe follow my channel...you can find the BIOS in the description!
> With this BIOS, there's no powertarget at all!
> Furthermore, some guys reported, it might also run on a reference card, just one DP port is disabled!
> 
> Thanks, Jura


Oh wow. If you want to lose a DP port and fancy burning your mainboard's PCIe slot, just flash it and wait for a very special flame.


----------



## Esenel

jura11 said:


> Esenel said:
> 
> 
> 
> Do you have a link to the community?
> 
> 
> 
> He posted YT video over on Guru3D forum and I posted YT link as well over here
> 
> Hope this helps
> 
> Thanks, Jura

With the Founders it doesn't work.
The screen stays black on all DP ports and the HDMI.


----------



## ReFFrs

Esenel said:


> With the Founders it doesn't work.
> Screen stays black for all DP and the HDMI.


Works with AORUS 2080 Ti Xtreme Waterforce

Tested with core clock max 2130 MHz @ 1.093 V / mem 16340 MHz

Power Limit @420W (42% of XOC max limit 1000W)

Beware: peak power consumption reached 48% (480 W) in Time Spy Extreme, which is considered the most power-hungry test. Clocks then started to drop, of course, to follow the power limit, so it's only a short-lived peak.

You can see settings here and the result:


----------



## xaxx

If it happens that the screen stays black over both HDMI and DP, how do you flash back to the stock BIOS without being able to see anything?


----------



## J7SC

ReFFrs said:


> *Works with AORUS 2080 Ti Xtreme Waterforce*
> 
> Tested with core clock max 2130 MHz @1.093v / mem 16340 Mhz
> 
> Power Limit @420W (42% of XOC max limit 1000W)
> 
> Beware: peak power consumption reached 48% (480W) in TimeSpy Extreme, which is considered the most power-hungry test. Then clocks started to drop of course following power limit, so it's a short time peak only.
> 
> You can see settings here and the result:



Oh, thanks for that info / very nice that it works with that card, and nice results :thumb: . The only fly in the ointment, if I load this on both Aorus XTR Waterforce cards, is that with the 2x watt increase plus another 280 W to 300 W or so on my CPU at peak, I'm going to streeeetch the Antec Platinum/HPC 1300 W PSU. But judging by the MSI AB pic in your post, the Power Limit slider still works, just at an, ahem, logarithmic scale?!


----------



## ReFFrs

J7SC said:


> But judging by the MSI AB pic in your post, the Power Limit slider still works, just at an, ahem, logarithmic scale ?!


Slider works. 100% = 1000W, so set anything below this to get the desired limit. In my example 42% = 420W

However, there is no VF curve on this BIOS. The Curve and OC Scanner windows won't open, so you can only set your OC with an offset. I hope curve support for XOC will be added to MSI AB later, but maybe it's just not possible there.
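Since 100% on the slider maps to 1000 W on this BIOS, converting a wattage target to a slider setting is just a ratio. A trivial sketch (assuming the 1000 W reference described above; the helper names are mine):

```python
# On the XOC BIOS, 100% on the Power Limit slider corresponds to 1000 W
# (per the report above); these helpers convert between the two scales.
XOC_MAX_W = 1000

def slider_pct(target_watts, max_watts=XOC_MAX_W):
    """Slider percentage that caps the card at target_watts."""
    return 100.0 * target_watts / max_watts

def watts_at(pct, max_watts=XOC_MAX_W):
    """Power cap implied by a given slider percentage."""
    return pct / 100.0 * max_watts

print(slider_pct(420))  # 42.0 -> the 420 W cap used in the example above
print(watts_at(48))     # 480.0 -> the observed Time Spy Extreme peak
```

So a "380 W like the Galax BIOS" cap on this XOC BIOS is simply the slider at 38%.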


----------



## J7SC

ReFFrs said:


> Slider works. 100% = 1000W, so set anything below this to get the desired limit. In my example 42% = 420W
> 
> However there is no VF Curve on this bios. Curve and OC Scanner window won't open, so you can only set your OC with an offset. I hope curve for XOC will be added later in MSI AB, but maybe it's just not possible there.



Tx, and it is still a huge step forward, what with PL max = 1000 W


----------



## xaxx

Let's see if I understand it. Is it only this Asus XOC BIOS where the power limit slider is 100% = 1000 W? With the other BIOSes, I understand, the limit is set by the BIOS itself; for example, with the Galax 380 W BIOS, 100% power limit = 380 W. Is that correct? So this XOC BIOS, with no real power limit, is something dangerous.


----------



## J7SC

ReFFrs said:


> Slider works. 100% = 1000W, so set anything below this to get the desired limit. In my example 42% = 420W
> ...edit...)



...Sorry, I forgot to ask : what happens to the Aorus XTR WB "RGB light-show" with Dancop's BIOS? I would not be entirely heartbroken if it stopped working without any secondary low-level HW app.


----------



## ReFFrs

J7SC said:


> ...Sorry, I forgot to ask  : What happens to the Aorus XTR WB 'RGB light-show' with Dancop's bios ? I would not be entirely heartbroken if it would stop working w/o any secondary low-level HW app.


Don't know and don't want to know. I downloaded the separate RGB Fusion utility from Gigabyte and disabled RGB completely on the card. Since then it stays off, even after re-flashing to the XOC BIOS.

RGB also eats slightly into your power limit, so in terms of power it's better to keep it off.


----------



## Esenel

ReFFrs said:


> Works with AORUS 2080 Ti Xtreme Waterforce
> 
> Tested with core clock max 2130 MHz @1.093v / mem 16340 Mhz
> 
> Power Limit @420W (42% of XOC max limit 1000W)
> 
> Beware: peak power consumption reached 48% (480W) in TimeSpy Extreme, which is considered the most power-hungry test. Then clocks started to drop of course following power limit, so it's a short time peak only.
> 
> You can see settings here and the result:


Score wise I can do the same with the 380W bios.

https://www.3dmark.com/spy/6160360
https://www.overclock.net/forum/attachment.php?attachmentid=257084&thumb=1

Hm I will stay with the 380W.
But thanks for sharing


----------



## ReFFrs

Esenel said:


> Score wise I can do the same with the 380W bios.
> 
> https://www.3dmark.com/spy/6160360
> https://www.overclock.net/forum/attachment.php?attachmentid=257084&thumb=1
> 
> Hm I will stay with the 380W.
> But thanks for sharing


You may have a better chip in terms of ASIC quality and better cooling. The XOC BIOS would give you a greater advantage than it gave my card.


----------



## J7SC

ReFFrs said:


> Don't even know and don't want to know. I have downloaded a separate RGB Fusion utility from Gigabyte and disabled RGB completely on card. Since then it keeps turned off even after bios re-flash to XOC.
> 
> RGB is also slightly consuming your power limit, so in terms of power it's better to keep it off.


 
Tx - once RGB is turned off by the RGB Fusion app, do you have to load that app every time, or can I uninstall it afterwards (i.e. does it mod the registry)? I'm a bit paranoid re. security, and the machine also has some office / productivity roles and sits on our network.


----------



## dangerSK

Esenel said:


> Score wise I can do the same with the 380W bios.
> 
> https://www.3dmark.com/spy/6160360
> https://www.overclock.net/forum/attachment.php?attachmentid=257084&thumb=1
> 
> Hm I will stay with the 380W.
> But thanks for sharing


It's safer for sure  BTW, I wouldn't really put the XOC BIOS on custom-PCB cards, as it can brick them hard. Today I messed up my 2080 Ti Lightning: it's stuck at 1350 MHz (safe mode) after the XOC flash. I flashed back, but it didn't work; with the stock BIOS it's still in safe mode :/


----------



## Glerox

dangerSK said:


> its safer for sure  btw i wouldnt rly put XOC bios on custom PCB cards as it can brick them hard. Today i messed up my 2080Ti lightning, its stucked at 1350mhz (safe mode) after XOC flash, flashed back but didnt work, with stock bios still safe mode :/


Damn, that sucks! Have you tried switching to the LN2 BIOS?
I'm looking to do a shunt mod on the Lightning Z, waiting for the Bitspower block.

If there is a software way to remove the PL, then I won't mess with soldering a shunt resistor lol.


----------



## kx11

dangerSK said:


> its safer for sure  btw i wouldnt rly put XOC bios on custom PCB cards as it can brick them hard. Today i messed up my 2080Ti lightning, its stucked at 1350mhz (safe mode) after XOC flash, flashed back but didnt work, with stock bios still safe mode :/



I think the memory timings didn't match the XOC BIOS. People shouldn't play with their high-end GPUs, I guess.


----------



## J7SC

dangerSK said:


> its safer for sure  btw i wouldnt rly put XOC bios on custom PCB cards as it can brick them hard. Today i messed up my 2080Ti lightning, its stucked at 1350mhz (safe mode) after XOC flash, flashed back but didnt work, with stock bios still safe mode :/



Sorry to hear this. While the XOC BIOS is based on the Strix, a 2x 8-pin card, unlike the 3x 8-pin Lightning Z, I doubt that's the cause, as you flashed back and still got stuck at 1350 MHz. Gamers Nexus had similarly weird experiences with both an RTX 2080 Ti and an RTX Titan stuck at 1350 MHz... I wonder if there are just some bad BIOS flash modules out there, though I certainly don't know for sure.

BTW, GN eventually got the 2080 Ti going again / check their YouTube vids (around the RTX Titan release date, when he commented about getting the 2080 Ti back from 1350 MHz)... hope it gets sorted!


----------



## dangerSK

J7SC said:


> Sorry to hear this. While the XOC Bios is based on Strix - a 2x 8 pin card, unlike the 3x 8 Lightning Z, I'm not sure / doubt that's it, as you flashed back and still got stuck at 1350MHz. Gamers Nexus had some similar weird experiences with both a RTX 2080 TI and RTX Titan stuck at 1350 MHz...I wonder if there are just some bad Bios flash modules out there, though certainly don't know for sure.
> 
> BTW, GN eventually got the 2080 Ti going again / check their YouTube vids (around RTX Titan release dates when he made the comment about getting the 2080 Ti back from 1350MHz...)....hope it gets sorted !


Thanks for the info, I will check it out. Yeah, I knew the card's PCB is different, though they share the same MP2888 controller IC, and it was kind of idiotic of me to bet everything on the dual BIOS; I thought that even if I messed up, I could still boot with the second BIOS. Seems like the issue is somewhere deeper in the card.


----------



## dangerSK

Glerox said:


> Damn that sucks! Have you tried to switch to LN2 bios?
> I'm looking to do a shunt mod on the lightning Z, waiting for the bitspower block.
> 
> If there is a software way to remove PL than I won't mess with soldering a shunt resistor lol.


There is a way: you need the proper LN2 BIOS from MSI  Good luck with that; I've been waiting for a month even though I'm an "XOC OCer" (I have more than 25 LN2 submissions)


----------



## LCRava

Hi guys,

Has anyone done the Shunt Mod on the Asus 2080 Ti Strix Cards?

From what I understand it uses a different PCB, so I would greatly appreciate it if you guys could tell me the right shunts to mod.

Have a good one!


----------



## J7SC

dangerSK said:


> thanks for info i will check it out, yeah i knew the card pcb is different though they share same IC MP2888 and i was kinda idiotic to bet everything on dual bios, i thought that eventually even if i fck up i can boot with second bios, seems like the issue is somewhere deeper in the card.



With a dual BIOS, it clearly appeared to be worth the risk. From what I recall, the aforementioned GN vid showed how they had problems even after re-loading the original BIOS on the 2080 Ti and kept getting stuck at 1350 MHz, but once they got an *identical* BIOS from elsewhere (TechPowerUp maybe, or another card?), it worked again. Just to be on the safe side, if you've got your system RAM cranked to 11/10 on timings, I would slow it down just for the re-flash.


----------



## Glerox

dangerSK said:


> there is a way, u need proper LN2 bios from MSI  good luck with that i am waiting for month even though im "XOC OCer" (have more than 25 LN2 submissions)


Yeah, I read that on an MSI forum. This BIOS seems more like a myth to me than anything else lol!
I understand why they won't release an unlimited-power vBIOS, but at least the stock Lightning Z should let us go higher than 350 W... frustrating.


----------



## axiumone

LCRava said:


> Hi guys,
> 
> Has anyone done the Shunt Mod on the Asus 2080 Ti Strix Cards?
> 
> From what I understood it uses a different PCB so I would greatly appreciate if you guys could tell me the right shunts to mod.
> 
> Have a good one!


Yep, it works great. Asus has made it super easy. The two shunts are right next to their corresponding 8 pins. You can't miss them.
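For anyone weighing the shunt mod against a BIOS flash, the electrical idea is easy to sketch: the card estimates current from the voltage drop across those shunts, so lowering the effective shunt resistance makes it under-report power. A rough illustration in Python (the 5 mΩ value is an assumption for illustration, as are the helper names; verify the actual shunt value on your own board before modding):

```python
# Illustrative shunt-mod arithmetic. Assumes a 5 mOhm stock shunt;
# check the markings on your own card before doing anything.

def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

def reported_power(actual_watts, stock_shunt, modded_shunt):
    """The controller still divides the measured voltage drop by the
    stock shunt value, so it reports actual * (modded / stock)."""
    return actual_watts * (modded_shunt / stock_shunt)

stock = 0.005                    # 5 mOhm stock shunt (assumed)
modded = parallel(stock, 0.005)  # stacking an identical shunt on top
print(round(modded, 6))                              # 0.0025 -> halved
print(round(reported_power(380, stock, modded), 3))  # 190.0 -> a real 380 W reads as ~190 W
```

In other words, halving the shunt resistance roughly doubles the real power the card will draw before it thinks it has hit its limit, which is why people pair this mod with serious cooling.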


----------



## dangerSK

Glerox said:


> Yeah I read that on an MSI forum. This bios seems more like a myth to me than anything else lol!
> I understand why they won't release unlimited power vbios but at least the stock lightning Z should let us go higher than 350W... frustrating.


They can't; NVIDIA won't approve that high a power limit. BTW, Gunslinger is using that "TRUE LN2" BIOS. As you can see, XOC BIOSes in most cases end with two letters; the MSI one ends with FB. That Lightning BIOS also has a 1000 W power limit.
I heard from Wizerty that they got their cards preloaded with this BIOS, so it's something that exists but that you can't get your hands on. Maybe we'll get lucky and some OCer will sell his Lightning and the next owner will leak the BIOS


----------



## dangerSK

J7SC said:


> With a dual Bios, it clearly appeared to be / is worth the risk. From what I recall, the aforementioned GN vid showed how they had problems even after re-loading the original Bios on the 2080 Ti and kept on getting stuck at 1350MHz, but once they got an '''identical''' Bios from elsewhere (TechPW up may be, or another card ?), it worked again. Just to be on the safe side, if you got your system RAM cranked to 11/10 on timings, I would slow it just for the re-flash.


Yeah, it was a mistake :/ I watched the GN video, but I've already re-flashed the card with every available BIOS; nothing helps. Nah, I'm not running tight RAM timings daily, just CL15.


----------



## Glerox

dangerSK said:


> they cant, Nvidia wont approve that high power limit, btw Gunslinger is using that "TRUE LN2" bios, as u can see XOC bioses in most cases end with two letters, The MSI ends with FB. That Lightning bios also have 1000W power limit.
> I heard from Wizerty that they got the cards already preloaded with this bios so its something thats exist but u cant get hands on, maybe we will be lucky and some OCer will sell the Lightning and next owner will leak the bios


Man, that would be so sick! PLEASE PM me if you ever get your hands on it, and I will do the same (but chances are you'll find it first haha)

Meanwhile, I will pray. Please, God of LIGHTNING, bless us with your unlocked vBios so we can FLASH it.


----------



## dangerSK

Glerox said:


> Man that would be so sick! PLEASE PM me if you ever get yours hands on it, I will do the same (but chances are you'll find it first haha)
> 
> Meanwhile, I will pray. Please, God of LIGHTNING, bless us with your unlocked vBios so we can FLASH it.


I will probably RMA the Lightning; if I get my money back I will get the 2080 Ti HOF OC Lab, which costs less and offers more. I can get HOF NVVDD on it and I don't have the annoying power limit, so it's LN2 ready. Fck MSI with their ZERO support for LN2 OCers.
But sure, if I get my hands on an unlocked MSI vBIOS...


----------



## J7SC

dangerSK said:


> they cant, Nvidia wont approve that high power limit, btw Gunslinger is using that "TRUE LN2" bios, as u can see XOC bioses in most cases end with two letters, The MSI ends with FB. That Lightning bios also have 1000W power limit.
> I heard from Wizerty that they got the cards already preloaded with this bios so its something thats exist but u cant get hands on, maybe we will be lucky and some OCer will sell the Lightning and next owner will leak the bios



...Still asking myself how Galax gets a pass from the NVIDIA watt-thought police - and usually VERY EARLY in the product cycle, to boot. I did some XOC at HWBot some years back with decent results, but I could never even get Galax to respond, much less get a HOF OC Lab edition of any series.

Rauf's Time Spy sub from December '18 for his 2080 Ti (at 2700 MHz, per attached) is crazy enough, but a few weeks ago he subbed SLI at 2640 MHz  How _does_ Galax do it?... Anyhow, still hoping to see the 2080 Ti Kingpin soon. He and TiN usually update the BIOS and EVBot flash at their forum, and they do tend to share. Hope springs eternal...


----------



## LCRava

axiumone said:


> Yep, it works great. Asus has made it super easy. The two shunts are right next to their corresponding 8 pins. You can't miss them.


Thx! I watched the entire video so if someone else is interested:






Did you solder resistors on top of the shunts, replace the shunts, or simply bridge them with a conductive solution?

Do you mind sharing what you used?

Did you flash a new BIOS like the GALAX 380W as well?

Thank you for the help. I was about to get a Titan RTX today, but since nothing on the RTX line will have enough performance for what I wanted (BFGD HFR gaming), I’ll just buy a 2080 Ti Strix w/ a 1440p monitor to game until NVIDIA releases the 7nm cards that hopefully perform well enough to justify the ridiculous cost of the RTX line up.


----------



## dangerSK

J7SC said:


> ...still asking myself how Galax gets away from the NVidia watt-thought police - and usually VERY EARLY in the product cycle to boot. I did some XOC at HWBT some years back with decent results, but I could never even get Galax to respond, much less get a HOF OC Labs edition of any series.
> 
> Rauf's Time Spy sub from December '18 for his 2080 Ti (at 2700 MHz, per attached) is crazy enough, but a few weeks ago, he subbed SLI at 2640MHz  How _does_ Galax do it ?...Anyhow, still hoping to see the 2080 Ti Kingpin soon. He and TiN usually update the Bios and EVBot flash at their forum, and they do tend to share. Hope springs eternal...


I heard somewhere that they get a special exception because they don't really sell GPUs in big volumes; the OC Lab series is very limited, only 100 samples !!! I think they can get away with it at that small a volume. Yeah, Kingpin is really taking his time with the 2080 Ti :/ I think they will release a fully unlocked vBIOS from the factory.


----------



## axiumone

LCRava said:


> Thx! I watched the entire video so if someone else is interested:
> 
> https://youtu.be/N6A_UWwYgkA
> 
> Did you solder resistors on top of the shunts, replaced the shunts or simply connected them with a conductive solution?
> 
> Do you mind sharing what you used?
> 
> Did you flash a new BIOS like the GALAX 380W as well?
> 
> Thank you for the help. I was about to get a Titan RTX today, but since nothing on the RTX line will have enough performance for what I wanted (BFGD HFR gaming), I’ll just buy a 2080 Ti Strix w/ a 1440p monitor to game until NVIDIA releases the 7nm cards that hopefully perform well enough to justify the ridiculous cost of the RTX line up.


I actually did a shunt mod on the Strix non-OC model. There's no way to flash it. I used some Conductonaut for short-duration testing. It worked great, but I wouldn't keep it as a long-term solution.


----------



## dangerSK

axiumone said:


> I actually did a shunt mod on the strix NON OC model. There's no way to flash it. I used some conductonaut for short duration testing. It worked great, but I wouldn't keep it as a long term solution.


Yeah, shunt mods are not a very good solution; the shunt mod on my 1080 Ti lasted half a year, then the shunt fell off.


----------



## LCRava

axiumone said:


> I actually did a shunt mod on the strix NON OC model. There's no way to flash it. I used some conductonaut for short duration testing. It worked great, but I wouldn't keep it as a long term solution.


Thx for the quick help! I'm actually set on doing it the way QuicksignVega did in post #205 of the Titan RTX Owner's thread!

https://www.overclock.net/forum/69-nvidia/1716320-official-nvidia-titan-rtx-owner-s-club-21.html

I think it will work perfectly for a semi-permanent solution. It is easily removable if the need arises later on!


----------



## J7SC

dangerSK said:


> I heard somewhere they get special exception because they dont really sell big amounts of GPU, OC Lab series is very limited, only 100 samples !!! I think they can get away with this small volume. Yeah kingpin is really taking his time with 2080Ti :/ I think they will release full unlocked vBios from factory.



...Maybe not directly "from the factory", but a post or two at the kingpincooling forum for their 2080 Ti KPE, or maybe TiN sneaks it in with a post about how to get the most from your 2080 Ti KPE 

As to Galax's special exception, it must be special indeed. Normally, a vendor has to sell a pile of lower-binned GPUs first before it gets some higher-binned ones as a percentage from NVIDIA, but with the yield problems for the higher-binned 2080 Ti, Galax's exception looks even more valuable, even if just for 100 units; nice halo product !


----------



## bogdi1988

dangerSK said:


> I will probably RMA Lightning, if i get money back i will get 2080TI HOF OC Lab, costs less, offers more. I can get HOF NVVDD on it and i dont have annoying power limit, so LN2 ready, fck MSI with ZERO support for LN2 OCers.
> But sure if i get hands on unlocked msi vbios...


MSI support is absolute garbage.


----------



## iDelta

Just sold my 2080 in hopes of buying a 2080 Ti. I'm looking to put it under an EK Vector and custom-watercool my entire system when Zen 2 drops. I should be looking at the reference PCBs, correct? Price isn't a huge factor, but I'd like the best OC headroom and the ability to fiddle around. I'm planning to flash the Galax 380W BIOS or a similar one. Is there any evidence of some cards being higher binned (out of the 300A ones), besides maybe the HOF and KP?

Another question: IF, and only IF, the 2080 Ti dies like the first batch did, will other manufacturers honour the warranty? As far as I know, only EVGA covers the warranty if you remove the cooler, which I'm planning to do.

The price difference between the cheapest 2080 Ti 300A chip and the Lightning is $1000 AUD. I've been picking between the FE, the Zotac X2/X3 OCs and the EVGA XC Black, but I was wondering if anybody had better results putting a WB on the Strix, or buying a GB Waterforce / MSI Sea Hawk with a factory WB. I've gone through the last 10 pages of the thread but was wondering if anyone had any input on this. The general consensus is that if I'm putting a WB on it, I should go for the cheapest 300A reference PCB, which seems to be the Zotac X2 at $1699 followed by the FE at $1799 (all in AUD). I'm assuming there won't be a large temp delta between the Sea Hawk EK X / Waterforce WB and the EK Vector?

Cheers, Wales


----------



## bogdi1988

iDelta said:


> Just sold my 2080 in hopes of buying a 2080 Ti. I'm looking to put it under a EK Vector and custom watercool my entire system when Zen 2 drops. I should be looking at the ref PCB's correct? Price isn't a huge factor but I'd like to get the best OC headroom and ability to fiddle around. I'm planning to flash the galax 380w bios/ similar one's. Is there any evidence of cards being higher binned? (out of the 300a ones) maybe besides the hof and KP.
> 
> Another question is, IF and only IF the 2080 Ti does die like the first batch, will other manufacturers honour the warranty? As far as I know only EVGA covers warranty if you remove the cooler which I'm planning to do.
> 
> The price difference between the cheapest 2080 Ti 300A chip and the Lightning is $1000 AUD. I've been picking between the FE, Zotac X2/X3 OC's and the EVGA XC Black but I was wondering if anybody had better results putting the WB on the STRIX or buying a GB WF/ MSI Seahawk WB? I've gone through the last 10 pages of forums but was wondering if anyone had any input on this. The general consensus is that if I'm putting a WB on it go for the cheapest 300a ref PCB which seems to be the Zotac X2 at $1699 followed by the FE at $1799. All in AUD. I'm assuming there won't be a large delta in temp between the Seahawk EK X / Waterforce WB and the EK vector?
> 
> Cheers, Wales


Take a look at the reference cards list on the first page. All reference cards are the same, so I would get the cheapest (as long as it is a 300A), put the Vector block on it, and then flash the Galax 380W BIOS. You want the cheapest because the board will be the same; the only differentiation (in cost as well) is the cooler (and brand), which you will remove anyway. I got a Zotac 2080 Ti and an FE in SLI, both running the Galax BIOS and water cooled.


----------



## romanlegion13th

Hi Guys,
Looking to get rid of my Titan X (Maxwell).
I'm looking at an EVGA 2080 Ti. I've been reading about the 300A cards, so I want to avoid the lower-power variants.
I plan to put it under water and will flash the BIOS, but not right away.

These are my options.

Does this card have the lower-power chip?
https://www.scan.co.uk/products/evg...dy-graphics-card-4352-core-1350mhz-gpu-1545mh

https://www.scan.co.uk/products/evg...g-11gb-gddr6-vr-ready-graphics-card-4352-core


This seems to be the best option.
https://www.scan.co.uk/products/evg...g-11gb-gddr6-vr-ready-graphics-card-4352-core


----------



## xaxx

bogdi1988 said:


> iDelta said:
> 
> 
> 
> Just sold my 2080 in hopes of buying a 2080 Ti. I'm looking to put it under a EK Vector and custom watercool my entire system when Zen 2 drops. I should be looking at the ref PCB's correct? Price isn't a huge factor but I'd like to get the best OC headroom and ability to fiddle around. I'm planning to flash the galax 380w bios/ similar one's. Is there any evidence of cards being higher binned? (out of the 300a ones) maybe besides the hof and KP.
> 
> Another question is, IF and only IF the 2080 Ti does die like the first batch, will other manufacturers honour the warranty? As far as I know only EVGA covers warranty if you remove the cooler which I'm planning to do.
> 
> The price difference between the cheapest 2080 Ti 300A chip and the Lightning is $1000 AUD. I've been picking between the FE, Zotac X2/X3 OC's and the EVGA XC Black but I was wondering if anybody had better results putting the WB on the STRIX or buying a GB WF/ MSI Seahawk WB? I've gone through the last 10 pages of forums but was wondering if anyone had any input on this. The general consensus is that if I'm putting a WB on it go for the cheapest 300a ref PCB which seems to be the Zotac X2 at $1699 followed by the FE at $1799. All in AUD. I'm assuming there won't be a large delta in temp between the Seahawk EK X / Waterforce WB and the EK vector?
> 
> Cheers, Wales
> 
> 
> 
> Take a look at the reference cards list in the first page. All reference cards are the same, so I would get the cheapest (as long as it is 300A) and then put the Vector block on it and then flash Galax 380W BIOS. You want the cheapest because board will be the same - the only differentiation (in cost as well) is the cooler (and brand), which you will remove anyway. I got the Zotac 2080TI and an FE one in SLI. Both running Galaxy BIOS and water cooled.

So, is it a good idea to buy the Sea Hawk X, which uses the reference PCB, since I do not want to disassemble a card to fit a water block?


----------



## iDelta

I was considering the Zotac X3 OC but am wondering about warranty and teardown. I know it won't be much more difficult, if at all, but with the FE there are a million teardowns and so on. I save $100 getting the Zotac X2 over the FE. I'm more concerned about warranty, which neither would honour. Perhaps I'll get the FE.


----------



## THC Butterz

I am the proud owner of an EVGA RTX 2080 Ti XC Ultra 🙂


----------



## dante`afk

Guess a shunt mod becomes irrelevant with the XOC BIOS's unlocked power limit, eh?


----------



## xaxx

THC Butterz said:


> I am the proud owner of a EVGA RTX 2080 TI XC Ultra 🙂


I guess you are running the Galax 380W BIOS? What stable clock speed does the card reach, and at what temperatures?


----------



## THC Butterz

xaxx said:


> THC Butterz said:
> 
> 
> 
> I am the proud owner of a EVGA RTX 2080 TI XC Ultra 🙂
> 
> 
> 
> I guess you have the bios galax 380 w? What stable speed does the card offer and what temperatures?

No, I'm still on stock BIOS. It runs at 2144 or so (+175 core, +1100 VRAM, maxed voltage sliders) and gets to about 59c in BF5 after 3 hrs of gameplay. This is about the best I have been able to get in Time Spy with it, but I have only had it running for 2 days.


----------



## xaxx

THC Butterz said:


> xaxx said:
> 
> 
> 
> 
> 
> THC Butterz said:
> 
> 
> 
> I am the proud owner of a EVGA RTX 2080 TI XC Ultra 🙂
> 
> 
> 
> I guess you have the bios galax 380 w? What stable speed does the card offer and what temperatures?
> 
> Click to expand...
> 
> No, I'm still on stock bios, it runs at 2144 or so +175 core, +1100 vram, maxed voltage sliders, gets to about 59c in bf5 after 3 hrs gameplay, this is about the best I have been able to get in time spy with it, but I have only had it running for 2 days


I guess the 2144 MHz is not stable, just peak speed. I have seen cards with modified BIOS that run 2100-2150 MHz stable, and that is already a very high speed, without dropping down or spiking up.


----------



## THC Butterz

xaxx said:


> I guess the 2144 MHz is not stable, just peak speed. I have seen cards with modified BIOS that run 2100-2150 MHz stable, and that is already a very high speed, without dropping down or spiking up.


I really don't know, I have only been playing with it for a short time, so I definitely haven't put it through its paces yet. I know in Time Spy I get spikes as high as 2155, but the lows are generally not lower than 2100 that I have seen. I need to spend some time with the card and really test it in other apps. NV Boost (or whatever they call the automatic overclocking) is kind of a pain; the last time I had a card that wasn't a dud and would OC was Fermi, so I'm a little rusty.


----------



## V I P E R

jura11 said:


> Dancop posted XOC BIOS for RTX 2080Ti
> 
> https://youtu.be/NvW_F7fb4bU
> 
> Hope this helps
> 
> Thanks, Jura


I flashed the BIOS and my card now overclocks to 2190 MHz core, but power consumption peaks at 519W. Can the 2x 8-pin power connectors handle that?


----------



## DeadSec

I highly doubt it. You will ruin your card and your board sooner or later. 
Time to dislike that stupid video.


----------



## jura11

V I P E R said:


> I flashed the bios and my card overclocks now to 2190 Mhz core, but the power consumption is 519W peak. Can the 2x8pin power connectors hold that?


Hi there 

A few pages ago I think @ReFFrs posted that with the power target slider at 100% the GPU will pull 1000W; you should change the power target to your preferred or most comfortable value. He is running the power target at 42%, which means 420W.

A single 8-pin can pull 300W at peak; it's rated for 150W but can deliver 300W.

Several guys ran the XOC BIOS on their GTX 1080 or GTX 1080 Ti without issues or problems, but I wouldn't run that BIOS on an air cooled GPU.

Haven't tried it on my RTX 2080 Ti yet, will be doing a few tests later on.

2190MHz OC, that's nice, mine does 2100-2115MHz max at 1.093v. What voltage are you running for 2190MHz?

Hope this helps 

Thanks, Jura
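For anyone double-checking, the slider math described above is just a linear percentage of the XOC BIOS's 1000W base limit. A minimal sketch, assuming that linear mapping (the function name is only for illustration):

```python
# Power target slider math for the XOC BIOS, assuming the 1000W base
# limit and linear slider scaling described above (42% -> 420W).

XOC_BASE_LIMIT_W = 1000.0  # base power limit of the XOC BIOS

def power_target_watts(slider_percent: float,
                       base_limit_w: float = XOC_BASE_LIMIT_W) -> float:
    """Effective power limit in watts for a given power target slider setting."""
    return base_limit_w * slider_percent / 100.0

assert power_target_watts(42) == 420.0    # ReFFrs' 24/7 setting
assert power_target_watts(100) == 1000.0  # full slider, not recommended
```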


----------



## Renegade5399

V I P E R said:


> I flashed the bios and my card overclocks now to 2190 Mhz core, but the power consumption is 519W peak. Can the 2x8pin power connectors hold that?


Please see this from the man himself (kingpin):



> This RTX 2080 Ti has both 8-pins located on the top edge of the PCBA. These can provide plenty of power for any normal-condition benchmarking, stress-testing and overclocking.
> 
> There is a common misunderstanding in referring to 6-pin or 8-pin Mini-Fit Jr. connectors as fixed 75W or 150W power-capable inputs.
> 
> Nothing could be further from the truth, actually, as the connector port itself does not define the power cap. These power levels are nothing but the way NV determines the capability of the board hardware to deliver high power to the voltage regulators. It's a purely imaginary specification and has nothing to do with the actual power taken from the connector, nor with the power input capability. Active circuitry on the PCBA after the connector is used to measure current flowing from the connector into the VRM. This enables the software, driver and NV BIOS to manage GPU clocks and reduce voltages if the measured power hits the programmed BIOS limit value (which can be a lower or higher value than 75/150W!).
> 
> So if we play with and change the circuit to adjust the calibration point, this limitation will be lifted accordingly as well. Also, to make sure we are not at any physical limit of the power connector itself, check the Molex 26-01-3116 specifications, which list both 13A per contact (16AWG wire in the small connector) and 8.5A per contact (18AWG wire). This means that using common 18AWG cable, the 6-pin connector is specified for 17A of current (3 contacts for +12V power, 2 contacts for GND return, one contact for detect). The 8-pin has a 25.5A current specification (3 contacts for +12V power, 3 contacts for GND return and 2 contacts for detection). This is 204W at the +12.0V level, or 306W for the 8-pin accordingly.
> 
> Now when somebody tells you that a 6-pin can't provide more than 75W, you know they don't understand the topic very well. It's not the connector itself or the cable that limits the power, but active regulation by the GPU/BIOS/driver according to detection of the used cables and preprogrammed limits.


So on 18AWG wire (typical of most PSUs) you can expect a limit of around 612W for two 8-pin connectors; more if your PSU uses 16AWG wire.

Source
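To make the arithmetic behind those numbers explicit, here is a quick sketch using the per-contact ratings and current totals from the quote (the 12V rail value and helper names are assumptions for illustration):

```python
# Connector capacity math from the Kingpin quote: 18AWG wire at 8.5A per
# contact, with the quote's current totals of 17A for a 6-pin and 25.5A
# (3 x 8.5A +12V contacts) for an 8-pin.

RAIL_V = 12.0  # nominal +12V rail

def capacity_w(total_amps: float, rail_v: float = RAIL_V) -> float:
    """Deliverable power for a connector rated at the given total current."""
    return total_amps * rail_v

six_pin_w = capacity_w(17.0)         # 204.0 W
eight_pin_w = capacity_w(3 * 8.5)    # 306.0 W
dual_eight_pin_w = 2 * eight_pin_w   # 612.0 W, the figure above

assert (six_pin_w, eight_pin_w, dual_eight_pin_w) == (204.0, 306.0, 612.0)
```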

Nvidia cards do not pull more than 75W from the PCIe slot unless you have modded the wrong shunt on your card. The stories of AMD cards frying slots come from an error on AMD's part that allowed their cards to pull over the 75W limit on the slot.

Please DO NOT run the XOC BIOS on an air cooled card. While the core temp may be fine, the voltage regulation hardware will start dumping more heat than the sink can handle.



DeadSec said:


> I highly doubt it. You will ruin your card and your board sooner or later.
> Time to dislike that stupid video.


It's time to stop speaking to things you know nothing of. I have been running XOC on a 1080 Ti Aorus card ON AIR (I know and accept the risk) for a while now without issue. I will be running my 2080 Tis with this new XOC BIOS and am confident all will be well. You will ruin your card only by not understanding what you are dealing with, for example by setting the slider to 100% for a 1000W power limit, or by trying to use this BIOS on air. The voltage regulation hardware on these cards will get smoking hot with the slider even at 450-500W on air cooling; water is considered the minimum for 24/7 XOC BIOS use. As for ruining your board: with modern Nvidia cards, it's not going to happen. Unless you modded the wrong shunt, the card will never pull more than 75W from the slot. Ever. So barring user error with a hardware mod, this is a myth. The AMD cards that did this were, from the factory, allowed to pull over the 75W limit on the slot. So, no need to link/quote those articles.


----------



## bogdi1988

xaxx said:


> So, is it a good idea to buy the Sea Hawk X, which uses the reference PCB, since I do not want to disassemble a card to fit a water block?


If you don't want to do all that work, then yeah... but what's the cost difference?


----------



## xaxx

The air-cooled card is priced at 1250 euros (1400 dollars), and the Sea Hawk X has cost me 1400 euros (1580 dollars).


----------



## jura11

DeadSec said:


> I highly doubt it. You will ruin your card and your board sooner or later.
> Time to dislike that stupid video.


If you're scared of wolves, don't go into the woods.

People have run the XOC BIOS on previous generations of GPUs without issues or problems; I have run the XOC BIOS on my GTX 1080 Ti without any problems.

But as above, I wouldn't run this BIOS on air cooled GPUs.

Hope this helps 

Thanks, Jura


----------



## MrTOOSHORT

V I P E R said:


> I flashed the bios and my card overclocks now to 2190 Mhz core, but the power consumption is 519W peak. Can the 2x8pin power connectors hold that?


Check this video out, particularly from the 27min mark:


----------



## Kaltenbrunner

HA HA HA, I mock you RTX 2080 Ti owners. 1440p/144Hz is all I need, so an OC'ed RTX 2070 was the better choice; otherwise I would still be saving for a top-end 2080 Ti.


----------



## J7SC

jura11 said:


> Hi there
> 
> Few pages ago I think @ReFFrs posted with 100% power target slider GPU will pull 1000w and you should change power target to yours preferred or most comfortable value, he is running power target at 42% which means 420W
> 
> 8 pin alone can in peak pull 300W, its rated for 150W but can pull 300W
> 
> Several guys run on their GTX1080 or GTX1080Ti XOC BIOS without the issue or problems, I wouldn't run that BIOS on air cooled GPU
> 
> Didn't tried on my RTX 2080Ti, will be doing few tests later on there
> 
> 2190MHz OC that's nice, my do 2100-2115MHz as max with 1.093v and what voltage do you running 2190MHz?
> 
> Hope this helps
> 
> Thanks, Jura





jura11 said:


> If you're scared of wolves don't go into the woods
> 
> People run on their previous generations of GPUs XOC BIOS without the issue or problems, I have run XOC BIOS on my GTX1080Ti without the problems
> 
> But as above I wouldn't run this BIOS on air cooled GPUs
> 
> Hope this helps
> 
> Thanks, Jura



Yeah, helpful! Folks just shouldn't be careless with the XOC, as its PL at 100% = up to 1000W. I would start with maybe 10W (!) more than whatever previous BIOS folks were running, so the MSI AB slider set to, say, 36% (=360W), and slowly work your way up, all the while watching temps and power consumption. Also watch your MSI AB boot settings re. the PL % setting from your prior setup. In addition, I would be more nervous about loading a BIOS designed for 2x 8-pin on a card that does not match that pin count.

All this also assumes that you've got solid cooling (IMO 'big water' minimum), as this is an XOC BIOS designed to allow DICE, LN2 etc. And as some experienced yesterday, and many times before with other BIOSes, there's still a risk when flashing any (especially non-stock) BIOS. It's a personal risk/reward balance thing....

...it's worth repeating: IF your own risk/reward meter tells you it is worth loading the XOC Bios, start slowly (say at your previous max) and take small incremental steps, checking temps and power consumption every step of the way w/ your fav benches and games. It's no different than the 'shunt mod', though potentially far more effective. BTW, I know I sound like someone's granny telling you to wear a scarf when it's cold outside :teaching:


----------



## DeadSec

J7SC said:


> ...it's worth repeating: IF your own risk/reward meter tells you it is worth loading the XOC Bios, start slowly (say at your previous max) and take small incremental steps, checking temps and power consumption every step of the way w/ your fav benches and games.


Yeah, helpful. You forgot to mention just one thing: let us know how far you got with your steps before the card said goodbye.


----------



## ReFFrs

Renegade5399 said:


> Please DO NOT run the XOC BIOS on an air cooled card. While the core temp may be fine, the voltage regulation hardware will start dumping more heat than the sink can handle.


Can you please point out specifically where the voltage regulation hardware is located on the PCB? Is it on the right side of the card (close to the power connector area), or somewhere else?

We need to know exactly where additional cooling should be applied.


----------



## zhrooms

The Dancop XOC BIOS for Strix https://www.techpowerup.com/vgabios/208990/208990

The BIOS is confirmed by me to be working on a Palit RTX 2080 Ti GamingPro (reference PCB), but only partly; it has big issues.

For example, the MSI Afterburner frequency/voltage editor is disabled, it runs the memory at full clock speed at idle, the thermal limit is disabled, the fan curve is not optimal for my card so it runs high RPM at idle, and it disabled two of the three DisplayPort outputs (did not try HDMI).

So, the main issue here is that NVIDIA Boost is still very much in play, and it messes everything up. My first run, completely stock, started out at 1.043v 1950MHz in UNIGINE Superposition 1080p Extreme, which didn't look promising at all. I then applied +100 voltage in MSI Afterburner and ran it again; now the voltage jumped from 1.043v to 1.068v, great. Time to add some core clock: +150 made it start at 1.081v 2100MHz, but in an instant it dropped down to 1.050v 2085MHz, with only a 2c temperature change.

So I documented the downclocking that occurred right before my eyes:

1093mV - 2115MHz - 52c
1075mV - 2100MHz - 55c
1050mV - 2085MHz - 58c
1043mV - 2070MHz - 63c
1043mV - 2055MHz - 67c
Crash

That's some big steps down in voltage, so then it was time to open the window and make it freezing cold.

After running the benchmark for a few seconds it was running 1050mV 2130MHz at 40c. A few seconds later it clocked down to 1043mV 2115MHz 44c and crashed. That's expected as 2115MHz is too fast for only 1043mV.

I ran again, and again, and what I saw was very strange, it started at 1050mV 2130MHz then suddenly spiked to 1093mV 2130MHz, at 45c on the GPU.

The power limit does allow me to run the full 1.093v, but it can't sustain it, even at 40c. It eventually drops to 1.050v, a possible fix for this would be to use the MSI Afterburner Frequency/Voltage editor and lock the card to 3D Clocks at 1.093V 2100MHz+, but that is not possible since the BIOS disabled the function.

After running the benchmark 20 times it does not seem to be able to hold more than 1.050v over time, which is similar to the Galax 380W BIOS.

Starting the benchmark at 15c on the GPU, +170 made it run 1075-1087mV 2160MHz for about 10 seconds, stays under 35c and then crashes, it would work if it could only stay at 1093mV, but it's just not possible. If only the frequency/voltage editor worked.

Lowered the power limit just to check: at +55% (550W), voltage and clocks were the same; +30% resulted in 0.962v 2010MHz, so the power limit function seems to be working as intended.

The conclusion is that without the MSI Afterburner frequency/voltage editor the BIOS is pretty much useless; for anyone using a Strix it's useful, as you regain all outputs. I also cannot say what would happen at sub-zero temperatures; maybe it would hold the full 1.093v.

*Edit:* The curve doesn't seem to ever be patchable, so this clearly isn't the BIOS we have been waiting for.


----------



## domrockt

Dang, the XOC BIOS pushed my 3DMark score a bit, but not high enough, and I have the same problem as with every custom BIOS I tried:



Only 1.05V max, and still somehow capped below 400W!?


----------



## Belcebuu

Hi guys, I am thinking about buying a Gigabyte rtx 2080 ti gaming oc due to value/price temps/noise, is there anything better value/price temp/noise wise? Thanks


----------



## Renegade5399

ReFFrs said:


> Can you please point out specifically where the voltage regulation hardware is located on the PCB? Is it on the right side of the card (close to the power connector area), or somewhere else?
> 
> We need to know exactly where additional cooling should be applied.


This would depend on the card. Most models have a teardown posted online that is easy to find.



DeadSec said:


> Yeah, helpful. You forgot to mention just one thing: Let us know how far you came with your steps until the card said good bye


We get it. You feel this BIOS is a bad idea.

Most of your posts on the topic would have more merit if they weren't simply senseless ramblings. For example: anyone who claims PCIe power connections are "limited" to 75W and 150W respectively should automatically be disregarded. Stating that as fact immediately shows you have ZERO clue what you are speaking about, and there is no value in your input. Let's not even bring up the "slot frying" comments you have made...

I'm sure playing at 1080p/1440p is just fine with your card gimped to ~360W. There are those of us who want higher bench scores and the ability to play at 4K (either native or using DSR), or who simply do it "because we can".

Me, for example: I miss the Maxwell days, when this Boost nonsense wasn't as much of a factor and we could edit the BIOS to do our bidding. My GTX 970s were both golden and clocked like crazy; I actually sold them for what I paid for them. I want on Turing what I had on Pascal with XOC. Removing the downclocking from the card yields excellent real-world performance in games, and this BIOS allows for that. Following proper usage methods, the XOC BIOS can be used on watercooled cards without issue.

But alas, my posts are most likely in vain, as opinions are like the anus in that everyone seems to have one.


----------



## Renegade5399

zhrooms said:


> The Dancop XOC BIOS for Strix https://www.techpowerup.com/vgabios/208990/208990
> 
> BIOS is confirmed by me to be working on a Palit RTX 2080 Ti GamingPro (Reference PCB), but only partly, it has big issues.
> 
> For example the MSI Afterburner frequency/voltage editor is disabled, thermal limit is disabled, fan curve is bad for my card so it runs high RPM idle, it disabled at least one displayport I believe.
> 
> So, the main issue here is that NVIDIA Boost is still very much in play, and it messes everything up, my first run completely stock it started out at 1.043v 1950MHz in UNIGINE Superposition 1080p Extreme, that didn't look promising at all, I then applied +100 voltage in MSI Afterburner and ran it again, now the voltage jumped from 1.043v to 1.068v, great, now to add some core clock, +150 made it start at 1.081v 2100MHz, but in an instant it drops down to 1.050v 2085MHz, only 2c temperature change.
> 
> So I documented the downclocking that occured right before my eyes,
> 
> 1093mV - 2115MHz - 52c
> 1075mV - 2100MHz - 55c
> 1050mV - 2085MHz - 58c
> 1043mV - 2070MHz - 63c
> 1043mV - 2055MHz - 67c
> Crash
> 
> That's some big steps down in voltage, so then it was time to open the Window, make it freezing cold.
> 
> After running the benchmark for a few seconds it was running 1050mV 2130MHz at 40c. A few seconds later it clocked down to 1043mV 2115MHz 44c and crashed. That's expected as 2115MHz is way too fast for only 1043mV.
> 
> I ran again, and again, and what I saw was very strange, it started at 1050mV 2130MHz then suddenly spiked to 1093mV 2130MHz, at 45c on the GPU.
> 
> The power limit does allow me to run the full 1.093v, but it can't sustain it, even at 40c. It instantly drops to 1.050v, the fix for this would be to use the MSI Afterburner Frequency/Voltage editor and lock the card to 3D Clocks at 1.093V 2100MHz, but that is not possible since the BIOS disabled it.
> 
> After running the benchmark 20 times it does not seem to be able to hold more than 1.050v over time.
> 
> Starting the benchmark at 15c GPU temp, +170 leads it to run 1075-1087mV 2160MHz for about 10 seconds, stays under 35c and then crashes, it would work if it could only stay at 1093mV, but it's just not possible for whatever reason (I believe it's boost related). If only the frequency/voltage editor worked.
> 
> Lowered power limit just to check, +55% voltage and clocks the same, +30% resulted in 0.962v 2010MHz so power limit seems to be working as intended.
> 
> Conclusion is that without the MSI Afterburner Frequency/Voltage Editor the BIOS is pretty much useless, for anyone using a Strix it's very useful as you regain all outputs.


1. DO NOT USE THIS ON AIR COOLED CARDS.

2. There isn't supposed to be a freq/voltage curve editor in MSIAB at this time. You clock by offset only, no curve until (or if) it gets updated by Unwinder.

3. Thermal limit is supposed to be disabled. Kind of the point of an XOC BIOS.

4. Fan curve won't work. See #2 and THIS BIOS IS DESIGNED TO BE USED WITH LN2. FAN CURVES ARE IRRELEVANT SINCE YOU ARE NOT SUPPOSED TO USE AIR COOLING AT ALL.

5. One DP is supposed to be disabled. It states this in the video description. This BIOS is for ASUS cards. Flashing to other cards may have adverse effects.

Wait a couple days and I'll bet Unwinder updates MSIAB to work properly with this. He's typically pretty good with the updates.


----------



## zhrooms

Renegade5399 said:


> 1. DO NOT USE THIS ON AIR COOLED CARDS.
> 
> 2. There isn't supposed to be a freq/voltage curve editor in MSIAB at this time. You clock by offset only, no curve until (or if) it gets updated by Unwinder.
> 
> 3. Thermal limit is supposed to be disabled. Kind of the point of an XOC BIOS.
> 
> 4. Fan curve won't work. See #2 and THIS BIOS IS DESIGNED TO BE USED WITH LN2. FAN CURVES ARE IRRELEVANT SINCE YOU ARE NOT SUPPOSED TO USE AIR COOLING AT ALL.
> 
> 5. One DP is supposed to be disabled. It states this in the video description. This BIOS is for ASUS cards. Flashing to other cards may have adverse effects.
> 
> Wait a couple days and I'll bet Unwinder updates MSIAB to work properly with this. He's typically pretty good with the updates.


Chill out man, I know what I'm doing. This is completely fine on air cooled cards; it can't go over 1.093v, meaning it's not going over 450W, which is more than safe.

How do you know the editor is not supposed to work?

I was just mentioning how the thermal limit is disabled for anyone wondering what's changed with the BIOS.

Fan curve works perfectly, you clearly haven't tested it on an air card.

The DP is disabled because it's for the Strix, which has a different output layout. It's safe to use on any card, just not optimal for anyone wanting to use multiple outputs.

I doubt it'll be patched, but great if it does I guess. If it would solve the voltage problems it would make the BIOS not useless.

The main reason one should not consider using this daily on any card is that it runs 3D clocks on the memory at idle, the full 7000MHz stock; that's obviously not great over time. The Galax 380W is giving me the same overclock and works a lot better with any card except the Strix.


----------



## xaxx

So is it safe to use the Galax 380W BIOS? You know mine is the MSI Sea Hawk X with 2x 8-pin and a water block. With so many posts I am confused.


----------



## zhrooms

xaxx said:


> So is it safe to use the Galax 380W BIOS? You know mine is the MSI Sea Hawk X with 2x 8-pin and a water block. With so many posts I am confused.


 
Yes, if you have the Sea Hawk with the hybrid cooler (2x 8-pin) you can use the Galax 380W; others have done it successfully. Just be aware the fan curve changes and RGB might stop working.


----------



## xaxx

I always make a custom fan curve. And if RGB does not work, even better, since there will be more power for the card.


----------



## zhrooms

_"This RTX 2080 Ti GamingPro by Palit is suffering from an extremely odd and unique problem: it overclocks and runs fine above 30c, but under 25c, as shown in the video, red pixels appear on the Windows desktop (app focus), and attempting to run a game or benchmark results in a reboot of the computer, after which the device can no longer be found. Later in the video I tab back into Superposition after starting it at a higher temperature, and you can see the first frame is purple; the entire scene is messed up. The text in the Superposition launcher window also appears warped at one point, and there are streaks/lines of black pixels across the window. I flashed an LN2 BIOS on the card that forced 3D clocks on the memory at idle, so it ran 7000MHz in Windows; this completely solved the issue I was having, and I could keep the GPU at 15c with no artifacting. So the conclusion is that when the memory runs at its 400MHz idle clock combined with a sub-25c GPU temperature, it becomes "unstable" and eventually leads to a complete crash and reboot of the PC."_


----------



## Krzych04650

xaxx said:


> I always make a custom fan curve. And if rgb does not work, better since there will be more power for the card.


Does the Sea Hawk X even have RGB? I have one and I didn't even check, because I am not interested (my PC is located 10 meters away from me in a case without a window), but isn't the light just a fixed white?

The fan curve works normally with the 380W BIOS, only the reported RPM speeds are incorrect.


----------



## Renegade5399

zhrooms said:


> Chill out man, I know what I'm doing, this is completely fine on air cooled cards, it can't go over 1.093v meaning it's not going over 450W which is more than safe.
> 
> How do you know the editor is not suppose to work?
> 
> I was just mentioning how the thermal limit is disabled for anyone wondering what's changed with the BIOS.
> 
> Fan curve works perfectly, you clearly haven't tested it on an air card.
> 
> DP is disabled because it's a Strix card which has a different output layout, it's safe to use on any card, just not optimal for anyone wanting to use any multiple outputs.
> 
> I doubt it'll be patched, but great if it does I guess. If it would solve the voltage problems it would make the BIOS not useless.
> 
> The main reason one should not consider to use this daily on any card is that it's running 3D clocks on the memory at idle, full 7000MHz stock, that's obviously not great over time. The Galax 380W is giving me the same overclock and works a lot better with any card except Strix.


In the video comments and in the Guru3D thread it's mentioned there is no curve adjustment at this time. Unwinder will most likely fix that in a few days.

Good thing you did, actually, since there are some folks in here doing shunt mods and such that probably shouldn't.

You are correct, I have not tested this on an air cooled card. Reference models get very, very hot on the VRMs when using this BIOS set to 50% PL.

Nvidia FE cards go full black on all outputs. Not sure what they are doing differently, but it seems to make all the outputs not work. The card is not bricked, however, and can be flashed back.

Unwinder is really good (albeit rude) about patching MSI AB. The 1000W power limit probably throws something out of whack calculation-wise. Just a guess; I have no idea how the Nvidia-provided automatic OC tool is actually written.

That clock won't kill the RAM. Most have P2 power states disabled in Nvidia Inspector anyway. I have 1080 Tis that have run at full speed since launch with no issues.

Wasn't trying to jump down your throat; it was a PSA similar to your no-temp-limit comment. For every one person who knows what they are doing in this thread, there are 20 who are going to run the XOC BIOS with a shunt mod and wonder why their card smoked, because they neglected to do the power consumption calculation properly. Setting 42% with the shunt will get you a wee bit more than 420W, LOL.
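The danger being warned about here can be sketched numerically. This assumes a typical shunt mod that parallels an equal-value resistor across each sense shunt, halving the sensed current so the card really draws about twice what the BIOS reports; the exact factor depends on the resistors actually used:

```python
# Why XOC BIOS + shunt mod is risky: the BIOS enforces its limit against
# *sensed* power, so a mod that halves the sensed current doubles the real
# draw the limit permits. The 2x factor is an assumption for a parallel
# equal-value shunt; adjust for your actual resistors.

def real_draw_at_limit(slider_percent: float, base_limit_w: float = 1000.0,
                       shunt_factor: float = 2.0) -> float:
    """Actual board power when the (fooled) BIOS power limit is reached."""
    return base_limit_w * slider_percent / 100.0 * shunt_factor

# A 42% slider reads as 420W to the BIOS, but permits roughly 840W:
assert real_draw_at_limit(42) == 840.0
```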

However, I do apologize for the brash manner of my post; I did not intend to come across as insulting your skill level.


----------



## J7SC

It almost seems like a catch-22... for most of the time since the 2080 Ti's release, folks have been swearing at and 'wrestling' the power limit, with some going the shunt mod route. Now that there is an XOC BIOS with a 1000W limit (before any shunt mod), it's far easier to get into trouble. Btw, I'm not sure, re. the video linked above, why the MSI AB PL is set to 100% and higher... if that is the Strix XOC BIOS linked yesterday, that would be 1000W (again, without shunt mods). Other posters who had used it and posted their screenshots (i.e. on the Aorus Waterforce) had the limit set around 46%, but maybe these are different versions of the BIOS and/or MSI AB.

*In any case, these ARE specialized XOC BIOSes* with all the quirks for LN2 and such, and that includes the high VRAM speed.

As a note of caution, I *only ever lost one* out of the 30 or so GPUs (2013 era and up) I used to bench with chilled water, DICE or LN2, and that was an MSI Lightning. It was fully insulated and all that, but I got busy with other things in life and just put it aside after taking the LN2 pot and shop towels off. Some months went by and I had the brilliant idea to put that card in a light gamer build I was putting together. I cleaned it up, mounted the triple-fan air cooler back on and had some fun. I did wonder about the higher temps and fan noise, and even re-seated the IHS and fan body. Anyhow, after two or so weeks it started to black-screen and finally stayed 'black'. I had simply forgotten :doh: to throw the BIOS switch back from the LN2 settings when I set it aside after the XOC benching months earlier... over 1.45v or thereabouts, huge PL... air cooling :doh: > kaput


----------



## xaxx

Krzych04650 said:


> xaxx said:
> 
> 
> 
> I always make a custom fan curve. And if RGB does not work, even better, since there will be more power for the card.
> 
> 
> 
> Does the Sea Hawk X even have RGB? I have one and I didn't even check, because I am not interested (my PC is located 10 meters away from me in a case without a window), but isn't the light just a fixed white?
> 
> The fan curve works normally with the 380W BIOS, only the reported RPM speeds are incorrect.



@Krzych04650 If the reported RPM is not correct, can a custom fan speed still be configured anyway? I guess by percentage, ignoring the incorrect RPM reading and going by the %?


----------



## V I P E R

Renegade5399 said:


> Please see this from the man himself (kingpin):
> 
> 
> 
> So on 18AWG wire (typical of most PSUs) you can expect a limit of around 612W for 2 8 pin connectors. More if your PSU uses 16AWG wire.
> 
> Source
> 
> Nvidia cards do not pull more that 75W from the PCIe slot unless you have modded the wrong shunt on your card. The stories of AMD cards frying slots comes from an error on AMDs part and allowing their cards to pull over the 75W limit on the slot.
> 
> Please DO NOT run the XOC BIOS on an air cooled card. While the core temp may be fine, the voltage regulation hardware will start dumping more heat than the sink can handle.
> 
> 
> 
> It's time to stop speaking to things you know nothing of. I have been running XOC on a 1080Ti Aorus card ON AIR (I know and accept the risk) for a while now without issue. I will be running my 2080Tis with this new XOC BIOS and am confident all will be well. You will ruin your card only by not understanding what you are dealing with. Setting slider to 100% for example for a 1000W power limit or trying to use this BIOS on air. The voltage regulation hardware on these cards will get smoking hot with the slider even at 450-500W on air cooling. Water is considered to be the minimum for XOC BIOS use 24/7. As to ruining your board: With modern nvidia cards, it's not going to happen. Unless you modded the wrong shunt, the card will never pull more than 75W from the slot. Ever. So barring user error with a hardware mod, this is myth. The AMD cards that did this were, from the factory, allowed to pull over the 75W limit on the slot. So, no need to link/quote those articles.


I'm using EVGA 1600 P2 PSU with EVGA individually sleeved cables, but I can't find any information what AWG wires these cables have. 
I'm also running a custom watercooling loop with 1x 480 and 2x 240 radiators with EK Vardar fans @ 2000 RPM, and the water never exceeds 25 degrees, so I think the cooling is enough. For 24/7 I've set the power limit to 300 watts just to be safe, and I will use up to 500W when benching. My GPU waterblock is the Phanteks Glacier Strix G2080Ti, which has a much bigger channel surface compared to other 2080 Ti blocks, and for now it is keeping the card very cool.
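Kingpin's 612W figure quoted above is easy to sanity-check. A rough sketch of the math (the 3 current-carrying +12V conductors per 8-pin and the ~8.5A per-conductor rating for 18AWG are assumptions for illustration, not figures from the post):

```python
# Rough sanity check of the ~612W limit for two 8-pin PCIe plugs
# on 18AWG wire. Assumptions: each 8-pin carries current on three
# +12V conductors, and ~8.5A per 18AWG conductor is a safe rating.
VOLTS = 12.0
HOT_WIRES_PER_8PIN = 3
AMPS_PER_18AWG_WIRE = 8.5  # assumption; 16AWG is rated higher

watts_per_connector = VOLTS * HOT_WIRES_PER_8PIN * AMPS_PER_18AWG_WIRE
total_watts = 2 * watts_per_connector

print(watts_per_connector)  # 306.0
print(total_watts)          # 612.0
```

Plug in ~10A per conductor for 16AWG and the same math lands above 700W for two connectors, which matches the "more if your PSU uses 16AWG wire" remark.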


----------



## Garrett1974NL

Still waiting for the Heatkiller IV block from watercool.de... Ordered 15 days ago... No communication whatsoever from their side. I could claim my money back but I just want the damn block... Any opinions on this?


----------



## diedo

Of all the aftermarket coolers, which one has the lowest failure rate? I'm being offered two cards and can't decide which one to pick, since this is my first time going Nvidia from my precious old AMD R9 290X.

- MSI GeForce RTX 2080 TI GAMING X TRIO

- Gigabyte GeForce RTX 2080 TI GAMING OC

So which one of those has the lower failure rate? I know that both have the golden chip (300A), but I also want my card to live as long as the R9 290X did.


----------



## Renegade5399

diedo said:


> Of all the aftermarket coolers, which one has the lowest failure rate? I'm being offered two cards and can't decide which one to pick, since this is my first time going Nvidia from my precious old AMD R9 290X.
> 
> - MSI GeForce RTX 2080 TI GAMING X TRIO
> 
> - Gigabyte GeForce RTX 2080 TI GAMING OC
> 
> So which one of those has the lower failure rate? I know that both have the golden chip (300A), but I also want my card to live as long as the R9 290X did.


Are you asking about the cards in total, or the coolers on the cards?

I can't speak to the Gigabyte 2080Ti, but I had the triple fan Aorus 1080Ti and used it for mining. The fans ran at 100% 24/7 for months with no issues. They still work, as a matter of fact.


----------



## zhrooms

Renegade5399 said:


> In the video comments and in the Guru3D thread it's mentioned there is no curve adjustment at this time. Unwinder will most likely fix that in a few days.
> 
> Good thing you did actually since there are some folks in here that are doing shunt mods and such that probably shouldn't.
> 
> You are correct. I have not tested this on an air cooled card. Reference models get very very hot on the VRMs when using this BIOS set to 50% PL.
> 
> Nvidia FE cards go full black on all outputs. Not sure what they are doing different, but it seems to make all the outputs not work. Card is not bricked however and can be flashed back.
> 
> Unwinder is really good (albeit rude) about patching MSIAB. The 1000W power limit probably throws something out of whack calculation wise. Just a guess, I have no idea how the nvidia provided automatic OC tool actually is written.
> 
> That clock won't kill the RAM. Most have P2 power states disabled in nvidia inspector anyways. I have 1080Tis that have run since launch at full speed with no issues.
> 
> Wasn't trying to jump down your throat. Was doing a PSA similar to your no temp limit comment. For every 1 who knows what they are doing in this thread, there are 20 who are going to run XOC BIOS with a shunt mod and wonder why their card smoked because they neglected to do the power consumption calculation properly. Setting for 42% with the shunt will get you a wee bit more than 420W, LOL.


 
Yeah, he did mention that, but you made it sound like it was intentional, aka you might have had insider knowledge, so I had to ask. I'm assuming it was just a bug; for example, the thermal limit is disabled, and inside the Frequency/Voltage Curve Editor you can switch to the Thermal curve, and since that is disabled it might have just broken all of it.

Shunt mods + this BIOS don't matter; it still won't use more than 1.093V.

The % PL set doesn't matter either: 50% or 100% is the same, 500W and 1000W respectively. It's just a number that the card is told not to exceed, and it can't go over 1.093V ~ 440W anyway, so there's no point in using anything less than 100% PL.

Also, there is no way that is true, that the VRM would get any hotter than on the Galax 380W; in my testing I reach a max of 1.050V on both, so they literally run at the same speed, voltage and temperature. It should therefore result in the same VRM temperature.

Do you have any source (preferably more than one) for the claim that all FE cards go black regardless of which output is used? I just tried switching outputs on my reference PCB (same as FE): two DPs are disabled, I did not try the HDMI, and the last DP works.

Didn't say it would kill the memory, just that it might not be wise to run it at 3D clocks for a long period of time. Just a general caution. I do believe it's safe too, as I ran it myself on a 1080 Ti.

It doesn't matter if you shunt mod it; shunting is a way of tricking the card's power limit (software), and that alone, nothing else:

380W Power Limit = Card throttles at 380W (only reaches 1.050V)
No Power Limit = Card throttles at 440W (because it reaches 1.093V hard limit)
380W Power Limit + Shunt 100W = 440W (because it reaches 1.093V hard limit)

The shunt doesn't magically make it use 480W.

The reason the shunt makes it hotter / draw more wattage is that it uses a higher voltage (from 1.050V up to 1.093V).

1000W Power Limit + Shunt 100W = still 440W (because it reaches 1.093V hard limit)

The only thing that will happen is that the power reading is wrong; it won't exceed 1.093V however you shunt it, and thus won't draw more wattage than is needed for 1.093V, which is approx 440W.
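The throttle points listed above boil down to a tiny model (a sketch of the reasoning only; the 380W/440W/1.093V figures come straight from this post, not from any spec):

```python
# Sketch of why a shunt mod can't push the card past the voltage
# hard limit: the shunt only shifts the *software* power cap, while
# the 1.093V ceiling caps real draw at roughly 440W regardless.
VOLTAGE_HARD_LIMIT_WATTS = 440  # approx. draw at 1.093V (from the post)

def throttle_point(power_limit_watts, shunt_headroom_watts=0):
    """Wattage at which the card actually throttles."""
    software_cap = power_limit_watts + shunt_headroom_watts
    return min(software_cap, VOLTAGE_HARD_LIMIT_WATTS)

print(throttle_point(380))        # 380 -> software power limit
print(throttle_point(380, 100))   # 440 -> voltage limit, not 480
print(throttle_point(1000, 100))  # 440 -> still the voltage limit
```

Whichever cap is lower wins, which is exactly why "380W + shunt 100W" lands at 440W and not 480W.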
 


diedo said:


> Of all the aftermarket coolers, which one has the lowest failure rate? I'm being offered two cards and can't decide which one to pick, since this is my first time going Nvidia from my precious old AMD R9 290X.
> 
> - MSI GeForce RTX 2080 TI GAMING X TRIO
> 
> - Gigabyte GeForce RTX 2080 TI GAMING OC
> 
> So which one of those has the lower failure rate? I know that both have the golden chip (300A), but I also want my card to live as long as the R9 290X did.


 
That is impossible to answer; the cards have only been out a few months. The Gaming X Trio is a better card than the Gigabyte Gaming OC in every single way.

If price difference is small definitely go with the MSI. And read the FAQ in the original post of this thread if you haven't already.


----------



## Roen

Edit first post: Zotac 2080 Ti AMP Maxx, 260W base / 291W OC, 112% power limit


----------



## Barefooter

Garrett1974NL said:


> Still waiting for the Heatkiller IV block from watercool.de... Ordered 15 days ago... No communication whatsoever from their side. I could claim my money back but I just want the damn block... Any opinions on this?


It took four or five weeks for me to receive mine. Be patient as it will be worth it once you have the block in hand.

I had my backplates chrome plated too. Here's a couple of pics.

More pics on this post on my build log.




----------



## eux

Tried the XOC BIOS on my EVGA XC; it works OK, but only the top right DP is functional and the HDMI is semi-functional (will only run 1080p). Pretty hard to pull more than 500W at <40C. Was able to hit 2180-2210 in 4K benches without dropping clocks, power draw around 420-480W. No voltage control sucks; was maxing out around 1044mV unless I dropped ambient to 10C, then the card went up to 1068mV. Fun to play with, but I already flashed back. Considering doing a shunt mod now :/


----------



## J7SC

zhrooms said:


> Yeah he did mention that but you made it sound like it was intentional, aka you might have had insider knowledge so I had to ask. I'm assuming it was just a bug, since for example the thermal limit is disabled and inside the Frequency/Voltage Curve Editor you can switch to Thermal curve, and since that is disabled it might have just broken all of it.
> 
> Shunt mods + this bios doesn't matter, it still won't use more than 1.093v.
> 
> (...edit)
> 
> The only thing that will happen is that the power reading is wrong, won't exceed 1.093V however you shunt it. Thus not using more watt than is needed for 1.093 which is approx 440W.



Would you consider 1.093V / 440W safe for more or less daily use with good cooling, for two 2080 Tis that at stock BIOS already pull 375-380W each? Good cooling means something like 3x360/60 rads, 12x 120mm fans... Academic for me right now, as the machine in question also does productivity work, the 2080 Tis perform very well as is, and the rated PSU limit would get uncomfortably close. But I'm looking at another build in a few months where I may switch the cards over. Tx.


----------



## Krzych04650

xaxx said:


> Krzych04650 If the RPM reading is not correct, can the fan still be configured to a custom speed anyway? I'm guessing by percentage? Ignoring the RPM and going by the % instead?


Fan curves are always set by percentage speeds; RPM readings don't matter. It reacts to the fan curve exactly the same as with the stock BIOS, and percentage fan speeds are read and set correctly, only the RPM reading is way too high (for me it shows over 3200 RPM on a radiator fan whose max is 2300).
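To illustrate Krzych04650's point: a fan curve is just a temperature-to-percentage mapping, and the controller applies the percentage directly, so a mis-scaled RPM readout never enters the control path. A minimal sketch (the curve points are made up for the example):

```python
# A fan curve maps temperature to a duty-cycle percentage; the
# controller applies the percentage directly. RPM is only telemetry,
# so a wrong RPM reading doesn't change what the fan actually does.
CURVE = [(30, 30), (50, 50), (70, 80), (85, 100)]  # (temp C, fan %)

def fan_percent(temp_c):
    # clamp below/above the curve, linearly interpolate in between
    if temp_c <= CURVE[0][0]:
        return float(CURVE[0][1])
    for (t0, p0), (t1, p1) in zip(CURVE, CURVE[1:]):
        if temp_c <= t1:
            return p0 + (p1 - p0) * (temp_c - t0) / (t1 - t0)
    return float(CURVE[-1][1])

print(fan_percent(60))  # 65.0, halfway between the 50C and 70C points
```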


----------



## ReFFrs

Didn't expect any different answer from that lazy dude anyway: https://forums.guru3d.com/threads/xoc-bios-support-for-2080-ti.425618/

So we are not getting V/F curve for XOC bios lads.


----------



## zhrooms

ReFFrs said:


> Didn't expect any different answer from that lazy dude anyway: https://forums.guru3d.com/threads/xoc-bios-support-for-2080-ti.425618/
> 
> So we are not getting V/F curve for XOC bios lads.


 
Yeah, the BIOS is pointless for almost everyone; I get a higher score with the Galax 380W than with the 1000W one. Since the curve doesn't work, we have to rely on NVIDIA Boost to adjust the voltage, which doesn't seem to work half the time: it downvolts by itself down to 1.043 from 1.093 even at 30C on the GPU, unrelated to temperature changes. What is a bit interesting is that the voltage stayed higher, at about 1.070, on an actual Strix OC card (different VRM); can't say anything more than that, just know I did not get the same voltage on my reference PCB as someone on a Strix PCB.

It's essentially a half working XOC BIOS, multiple issues. If any of you want to try it on a reference PCB go for it.

To make it clear this BIOS can *not* be compared to the Strix XOC BIOS that Elmor uploaded for the GTX 1080 Ti, they are wildly different. This BIOS for 2080 Ti is not what most people have been waiting for.


----------



## ReFFrs

zhrooms said:


> It's essentially a half working XOC BIOS, multiple issues. If any of you want to try it on a reference PCB go for it.


This BIOS may help in very demanding games where the card is constantly bumping into the power limit (even 380W), like Metro Exodus at 4K, or in demanding benchmarks like Time Spy Extreme, because it actually allows your card to pull 400-450W or more, so you won't be power limited and clocks won't drop.


----------



## LanceBoyle

Gigabyte RTX 2080 Ti WF OC (Alphacool Eiswolf 240) flashed with the 406W MSI BIOS. Mem always at max. clock. Flashed back to special F2 BIOS.


----------



## ducky083

Garrett1974NL said:


> Still waiting for the Heatkiller IV block from watercool.de... Ordered 15 days ago... No communication whatsoever from their side. I could claim my money back but I just want the damn block... Any opinions on this?



Hi, yes !


I placed an order on watercool.de for the same block and got nothing... (not really in stock...)


I cancelled my order and placed a new one on Caseking.de (they have the same block in stock; stock on caseking.de is real!)


----------



## J7SC

ducky083 said:


> Hi, yes !
> 
> 
> I placed an order on watercool.de for the same block and got nothing... (not really in stock...)
> 
> 
> I cancelled my order and placed a new one on Caseking.de (they have the same block in stock; stock on caseking.de is real!)


 
Our EU office likes Caseking.de / absolutely no complaints. But you might have been lucky w/ the Heatkiller IV 2080 Ti block there... per below (March 7, 5:45 am Berlin time), the status has now changed from 'in stock' to 'unknown' for that block...


----------



## TK421

So the 1000W XOC BIOS from Asus only works on their ROG Strix card? :|


----------



## lkkane00

Tried the XOC on both an MSI DUKE OC [ref PCB] and a HOF WC LABS [custom] (for ****s and giggles)

Am running an omega VI with a 1600 Supernova P2, so I'm not worried whatsoever; plus I've always been down to take one for the community.

First thing I noticed is that the bios flashes far quicker than others, making me assume it is missing some data compared to a typical bios.

Second thing I've noticed is that NVLINK bandwidth decreased from 93.6 to 36.2, which is a pretty big issue for me, obviously.

Am able to hit 2250+ pretty easily on the HOF with the BIOS it shipped with, and for some reason power draw is never over 350W (without using the external OC panel). On the XOC it actually draws up to 520 watts (70 higher than the max limit on the stock BIOS), but I max out at 2190 or so.

For the Duke, I was able to hit 520 watts as well, and it performs up to 2220 compared to 2110 now. Temp is no issue here because of a Hailea 500 chiller with 7 D5s.

Unfortunately, it's not an option for me since NVLink is useless with it. If it were my only card, 100% would I use this XOC BIOS with my DUKE.

Curiously, my OC LABS stock BIOS isn't the one in the online BIOS database. Furthermore, Galax actually supplied the XOC championship members with "OC MASTER SOFTWARE" (allowing voltage control) and a BIOS with device id a17 that is apparently impossible to find online.

Just figured I'd give my experience; maybe we can draw better conclusions as to how this XOC BIOS is working.


----------



## Esenel

zhrooms said:


> Do you have any source (preferably more than one) for the claim that all FE cards go black regardless of which output is used? I just tried switching outputs on my reference PCB (same as FE): two DPs are disabled, I did not try the HDMI, and the last DP works.


Then I'll mark myself as source #1.
I was able to flash the XOC BIOS to my Founders, but the moment it reached 100%, my HDMI 1080p screen and my DP 1440p screen went black.
I pushed the restart button and tried all DPs with no luck. It stayed black.

So I pulled both 8pin and used DP of IGPU.
In Bios I activated IGPU + discrete Graphics Card.

Started into Windows with both 8-pin cables still pulled.
I then was able to flash the Galax 380W bios again and I am now back to normal.


----------



## Garrett1974NL

Hello @Barefooter, @ducky083 and @J7SC 
I called them today, so I spoke to them in my best German (which is pretty good I might add lmao) and it should be arriving within ±1 week from now...tops!
So I'll just wait it out 
And Barefooter that's a nice block, but I ordered the black one, no plexi allowed 
I even had them swap the stainless grey plate on the back for a totally black one 
Can't wait till I have it... what are your load temps?


----------



## Hemorz

Good evening everyone.

I tried scrolling through the comments, but there is too much info spread over this thread.

I think I'm going to purchase a Galax GeForce 2080 Ti SG for $1699 AUD. Does it matter what PCB it uses? Apparently it's binned, but does that matter if I'm manually overclocking anyway? Can I flash any BIOS to it so I can increase the power limit?

What's the difference between buying this and a 2080 Ti that costs $800 more?

Finally but not least...what waterblock should I use? 

Thanks so much for any help. [emoji16]

(It's my bday in 2 hours so best birthday present would be an answer to everything I've asked haha)

Sent from my SM-G965F using Tapatalk


----------



## wheatpaste1999

Garrett1974NL said:


> Hello @Barefooter, @ducky083 and @J7SC
> I called them today, so I spoke to them in my best German (which is pretty good I might add lmao) and it should be arriving within ±1 week from now...tops!
> So I'll just wait it out
> And Barefooter that's a nice block, but I ordered the black one, no plexi allowed
> I even had them swap the stainless grey plate on the back for a totally black one
> Can't wait till I have it... what are your load temps?


Just FYI if you ask in the Heatkiller thread under the Watercooling Subforum, or DM Jakob from Watercool directly, he is generally pretty responsive about orders placed via their store.

I ordered two Heatkiller 2080Ti blocks and they took a little under a month to get to me in the US. A little over two weeks from order to shipping, and an additional week for shipping. I agree that the website and visibility into order status isn't great. The products were well worth the wait though.


----------



## VPII

Maybe some of you can tell me why... Initially I ran a +180MHz core, which means 2160MHz, and it would run without a hiccup. Memory was set +1000 to +1150MHz and it runs without an issue. At present this does not work unless I use the V/F curve and set it to the same speed with a locked core voltage; then it runs all the way through without an issue. But right now, if I set it normally with +180MHz core, it fails in the first Time Spy test. Why is that?


----------



## VPII

Actually got it working now.... Don't ask how as I'm not sure. Will keep testing.


----------



## ducky083

J7SC said:


> Our EU office likes Caseking.de / absolutely no complaints. But you might have been lucky w/ the Heatkiller IV 2080 Ti block there...per below (March 7, 5:45 am Berlin time) status now changed from 'in stock' to 'unknown' for that block...



So, yes, maybe lucky ;-) I bought the non-RGB version and added an RGB strip afterwards ;-)


It cools very well (I have one UT60 360 and another UT60 280, with Noctua NF-A12x25 fans).


----------



## zhrooms

Esenel said:


> I was able to flash the XOC bios to my founders, but the same moment it reached 100% my HDMI 1080p Screen and my DP 1440p screen went black.
> I pushed the restart button and tried all DPs with no luck. It stayed black.
> 
> So I pulled both 8pin and used DP of IGPU.
> In Bios I activated IGPU + discrete Graphics Card.
> 
> Started into Windows with both 8-pin cables still pulled.
> I then was able to flash the Galax 380W bios again and I am now back to normal.


 
Yes, it went black on all ports for me too after flashing; none of the DPs worked. Restarted the computer manually and kept it on the same output as when I flashed it, didn't work, so I switched to the second DP, still black screen, and the last DP worked and showed my Windows login screen.

*Left port worked, not the other two, the one I originally used was the top one.*

Also, don't know how it works for you, but for me I don't have to do anything in BIOS, it's all auto-detected as long as you use the DP on the MB, and there should be no need to remove the power cables; I've never done that and see no reason why one should.


----------



## zhrooms

Also wanted to show something I discovered recently, which I have no answer to, maybe someone could shed some light on it!

This picture was uploaded by the owner of a Gigabyte RTX 2080 Ti Turbo (Non-A), who wanted to put a waterblock on it and asked if it would fit. I was sure it was a reference PCB, but then he pointed out that the 3D render on the Gigabyte website showed a fan connector at the top, just like on the Gigabyte RTX 2080 Ti Gaming OC (A), and it turned out to be true: they actually screwed up the Turbo card as well, so a waterblock won't fit without removing the fan connector. But that's not the surprising part, behold;

It's missing 5 power phases!


----------



## fleps

Hey guys, a little question (sorry is about a 2080, the official topic doesn't have much attention)

If, with the default BIOS, the max stable core OC never really gets the card close to max TDP/wattage, and the temps never go above 70C, flashing a BIOS with more TDP/W will do nothing, right?

Looks like I didn't get silicon lucky and the core is already at its limit despite everything else.

Oddly, OC Scanner says the limitation is power, but with a manual OC, TDP/wattage usage during the Heaven bench never gets close to the max 111%/250W.

Core is stable at 2040MHz with occasional 2055 (2070 peak until the card reaches 60C). If I add more on the OC to force 2070 during a bench, it starts showing artifacts.

So not sure if it is worth the work of installing a Kraken G12/X42 + BIOS flash; looks like I'll not get more out of it?

Thanks
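A simple way to frame fleps' question: if the observed draw never approaches the current power cap, a higher-limit BIOS can't be what's holding the card back. A sketch (the wattage numbers are illustrative, loosely based on the 111% of 250W mentioned above):

```python
# If the card never reaches its power limit, raising the limit
# changes nothing; the core is voltage/stability limited instead.
def power_limited(observed_watts, limit_watts, margin=0.98):
    # treat "within ~2% of the cap" as bumping into the limit
    return observed_watts >= margin * limit_watts

# Heaven sitting well under the ~278W (111% of 250W) cap:
print(power_limited(220, 278))  # False -> a higher-TDP BIOS gains nothing
print(power_limited(275, 278))  # True  -> then more TDP could help
```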


----------



## ENTERPRISE

zhrooms said:


> Yes it went black on all ports for me too after flashing, none of the DP worked. Restarted computer manually and kept it in the same output as when I flashed it, didn't work, so I switched to the second DP, still black screen and last DP worked and it showed my windows login screen.
> 
> *Left port worked, not the other two, the one I originally used was the top one.*
> 
> Also, don't know how it works for you, but for me, I don't have to do anything in BIOS, it's all auto detected as long as you use the DP on MB, and there should be no need to remove power cables, never done that and see no reason why one should do it.





zhrooms said:


> Also wanted to show something I discovered recently, which I have no answer to, maybe someone could shed some light on it!
> 
> This picture was uploaded by the owner of a Gigabyte RTX 2080 Ti Turbo (Non-A), who wanted to put a waterblock on it and asked if it would fit. I was sure it was a reference PCB, but then he pointed out that the 3D render on the Gigabyte website showed a fan connector at the top, just like on the Gigabyte RTX 2080 Ti Gaming OC (A), and it turned out to be true: they actually screwed up the Turbo card as well, so a waterblock won't fit without removing the fan connector. But that's not the surprising part, behold;
> 
> It's missing 5 power phases!





fleps said:


> Hey guys, a little question (sorry is about a 2080, the official topic doesn't have much attention)
> 
> If, with the default BIOS, the max stable core OC never really gets the card close to max TDP/wattage, and the temps never go above 70C, flashing a BIOS with more TDP/W will do nothing, right?
> 
> Looks like I didn't get silicon lucky and the core is already on it's limit despite everything else.
> 
> Oddly OC Scanner says the limitation is power, but on manual OC, TDP/Watt usage during Heaven bench never gets close to the max 111%/250W.
> 
> Core stable at 2044MHz with occasional 2055 and 2070 peak until the card reaches 60C. If I add more on the OC to force 2070 during a bench, it starts showing artifacts.
> 
> So not sure if is worth the work of installing a Kraken G12/X42 + bios flash, looks like i'll not get more out of it?
> 
> Thanks


THIS IS A TEST


----------



## J7SC

zhrooms said:


> Also wanted to show something I discovered recently, which I have no answer to, maybe someone could shed some light on it!
> 
> This picture was uploaded by the owner of a Gigabyte RTX 2080 Ti Turbo (Non-A), who wanted to put a waterblock on it and asked if it would fit. I was sure it was a reference PCB, but then he pointed out that the 3D render on the Gigabyte website showed a fan connector at the top, just like on the Gigabyte RTX 2080 Ti Gaming OC (A), and it turned out to be true: they actually screwed up the Turbo card as well, so a waterblock won't fit without removing the fan connector. But that's not the surprising part, behold;
> 
> It's missing 5 power phases!
> 
> (...edit..)


 
...not sure what you're asking ...the Giga Turbo is advertised as an 8+3 on their site, per below?!


----------



## Jpmboy

zhrooms said:


> Also wanted to show something I discovered recently, which I have no answer to, maybe someone could shed some light on it!
> 
> This picture was uploaded by the owner of a Gigabyte RTX 2080 Ti Turbo (Non-A), who wanted to put a waterblock on it and asked if it would fit. I was sure it was a reference PCB, but then he pointed out that the 3D render on the Gigabyte website showed a fan connector at the top, just like on the Gigabyte RTX 2080 Ti Gaming OC (A), and it turned out to be true: they actually screwed up the Turbo card as well, so a waterblock won't fit without removing the fan connector. But that's not the surprising part, behold;
> 
> It's missing 5 power phases!


and the empty memory pack is in a different position from the 2080Ti FE for sure.


----------



## Esenel

zhrooms said:


> Yes it went black on all ports for me too after flashing, none of the DP worked. Restarted computer manually and kept it in the same output as when I flashed it, didn't work, so I switched to the second DP, still black screen and last DP worked and it showed my windows login screen.
> 
> *Left port worked, not the other two, the one I originally used was the top one.*


For me, none of the DPs showed anything after rebooting.
And even then it would not be usable for me, as the HDMI doesn't work either.



zhrooms said:


> Also, don't know how it works for you, but for me, I don't have to do anything in BIOS, it's all auto detected as long as you use the DP on MB, and there should be no need to remove power cables, never done that and see no reason why one should do it.


Might be, yes.
It was my first approach to do it the way I described, and it worked.
Maybe your way would have worked as well.

But in my opinion there is not much to gain from this BIOS, except some better benches. At least for my card.
It tops out at around 2100-2115 core at the moment.
With the XOC BIOS "maybe" being able to do 2150, that would result in how many FPS more? 1-5?

For 1440p it is fast enough at the moment.
And for my future preferred resolution of 5120x2160 (21:9), they first have to offer such a screen + another graphics card generation

Cheers


----------



## GAN77

Jpmboy said:


> and the empty memory pack is in a different position from the 2080Ti FE for sure.


And the memory is Samsung. A Frankenstein PCB )


----------



## kot0005

zhrooms said:


> Also wanted to show something I discovered recently, which I have no answer to, maybe someone could shed some light on it!
> 
> This picture was uploaded by the owner of a Gigabyte RTX 2080 Ti Turbo (Non-A), who wanted to put a waterblock on it and asked if it would fit. I was sure it was a reference PCB, but then he pointed out that the 3D render on the Gigabyte website showed a fan connector at the top, just like on the Gigabyte RTX 2080 Ti Gaming OC (A), and it turned out to be true: they actually screwed up the Turbo card as well, so a waterblock won't fit without removing the fan connector. But that's not the surprising part, behold;
> 
> It's missing 5 power phases!


Damn lol!!!

Is this the case with other cards too? Asus Turbo, EVGA Black.


----------



## GnarlyCharlie

I grudgingly bought a 2080Ti FE off the Nvidia site, and a Heatkiller block/backplate. I was going to skip this gen, but I'm going to move the Titan X Pascal over to a build for my niece's family of Hellions (not really, I wouldn't build them another rig if they were bad kids). They have outgrown the 1070 I put in their 6700K rig, so they get the TX Pascal and I'll run the 2080Ti. Maybe they can get that one when the next gen comes out; their new rig will be water cooled, so it should be a fairly easy swap as long as they bring the rig to me. The 6700K/1070 is still running; they can stick another air cooled card in it if they want to after they try a Titan X of the same gen. They ain't getting my shunt modded Xp.


----------



## Krzych04650

What's up with all of the Ubisoft engines like AnvilNext 2.0 or Disrupt, from games like AC: Origins or Watch Dogs 2 for example: why is the GPU power draw so low even at 99% usage? There is almost no way I can go above 300W even at 1.093V, while in most games this would throttle like crazy with the 380W BIOS. Is it Turing only that is somehow underutilized (which would explain the lower than normal gains over the 1080 Ti in the new Assassin's Creed games), or does Pascal behave the same?


----------



## mackanz

1059mV and 2115 with the 380W BIOS seems to be the limit with this Inno3D Black version AIO. Is there any 400+W BIOS that works on reference boards? Samsung memory, btw. 2115/8400 seems gaming stable, but the limit is the BIOS, I think. As soon as I bump the core to 2140 it hard crashes, no matter what voltage I use.


----------



## Ace99ro

Quick question guys, I want to NVLink 2x 2080 Ti Gigabyte Windforce OC. I have this PSU: Corsair HX1200 80 Plus Platinum. Full specs:

9900k - 5ghz @ 1.33v
Gigabyte Aorus Master
16gb Gskill 3200mhz @ CL14
2 x 7200rpm drives
1 x Sata SSD
Corsair H115i Pro with 2 x Vardar EVO 140ER 
Corsair Air 540 
+ 3 more Vardar EVO 140ER in the case

I don't plan to OC the cards 24/7, maybe just for a couple of benchmarks.

So is this PSU enough?


----------



## J7SC

Ace99ro said:


> quick question guys , i want to nvlink 2 x 2080ti Gigabyte Windforce OC , i have this PSU - Corsair HX1200 80 Plus Platinum , full specs :
> 
> 9900k - 5ghz @ 1.33v
> Gigabyte Aorus Master
> 16gb Gskill 3200mhz @ CL14
> 2 x 7200rpm drives
> 1 x Sata SSD
> Corsair H115i Pro with 2 x Vardar EVO 140ER
> Corsair Air 540
> + 3 more Vardar EVO 140ER in the case
> 
> i don't plan to OC the cards 24/7 , maybe just for a couple of benchmarks
> 
> *So is this PSU enough* ?


 
...366W BIOS x2 for your Gigabyte Windforce OC 2080 Tis, plus a 9900K at 5G / 1.33V, should leave you some breathing room. FYI, I'm running a 2950X 16c/32t TR @ 4.225G (280W max) and 2x Aorus 2080 Ti WB, 32GB of TridentZ, 18x 120mm fans etc etc etc on a 1300W Antec HPC Platinum w/o any problems.
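For anyone else sizing a PSU for a similar NVLink build, the back-of-envelope math looks like this (the CPU and peripheral figures are rough assumptions for illustration, not measurements):

```python
# Back-of-envelope PSU budget for 2x 2080 Ti (366W BIOS) + 9900K.
load_watts = {
    "2x 2080 Ti @ 366W BIOS": 2 * 366,
    "9900K @ 5GHz / 1.33V":   200,  # heavy-load estimate (assumption)
    "board/RAM/drives/fans":  75,   # rough allowance (assumption)
}
total = sum(load_watts.values())
headroom = 1200 - total  # Corsair HX1200

print(total)     # 1007
print(headroom)  # 193
```

Roughly 1000W worst-case against a 1200W unit, which is why the answer above is "some breathing room" rather than "tons of it".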


----------



## ReFFrs

Krzych04650 said:


> What's up with all of the Ubisoft engines like AnvilNext 2.0 or Disrupt, from games like AC: Origins or Watch Dogs 2 for example, why is the GPU power draw so low even at 99% usage? There is almost no way I can go above 300W even at 1.093V, in most games this would throttle like crazy with 380W bios. Is it for Turing only and it is somehow underutilized (which would explain lower than normal gains over 1080 Ti in new Assassin's Creed games) or Pascal behaves the same?


The GPU utilization problem may be hiding behind the fact that none of these games can run in exclusive fullscreen. Even if you select fullscreen in the settings, the game is still launched in borderless windowed mode, so you just see it as fullscreen. If you alt-tab to the desktop several times, the game settings will even automatically revert to borderless windowed instead of fullscreen. It was the same on Pascal. Ghost Recon Wildlands is another example of such a crappy engine, btw.

I estimate we are losing 10-15% of performance due to this sht.

If someone knows how to force a game to run in exclusive fullscreen, please share and we will test.


----------



## fleps

ReFFrs said:


> The GPU utilization problem may be hiding behind the fact that none of these games can run in exclusive fullscreen. Even if you select fullscreen in the settings, the game is still launched in borderless windowed mode, so you just see it as fullscreen. If you alt-tab to the desktop several times, the game settings will even automatically revert to borderless windowed instead of fullscreen. It was the same on Pascal. Ghost Recon Wildlands is another example of such a crappy engine, btw.
> 
> I estimate we are losing 10-15% of performance due to this sht.
> 
> If someone knows how to force a game to run in exclusive fullscreen, please share and we will test.


You need to check "Disable Fullscreen Optimizations" in the game .exe's "Properties > Compatibility" tab on Windows 10 to get exclusive fullscreen.


----------



## ReFFrs

fleps said:


> You need to check "Disable Fullscreen Optimizations" in the game .exe's "Properties > Compatibility" tab on Windows 10 to get exclusive fullscreen.


No, this is useless. It changed nothing in game. For example, Metro Exodus (DX11) runs exclusive fullscreen perfectly without messing with the compatibility tab.


----------



## xaxx

Hello, I wanted to ask about the voltage setting: is it necessary to raise it for overclocking? Whether I leave it alone or set it to 100%, I don't notice a difference. I've always adjusted the GPU, memory and fans, but I don't know if I really need to touch the voltage or if it's better not to. Thank you.


----------



## dante`afk

Garrett1974NL said:


> Still waiting for the Heatkiller IV block from watercool.de... Ordered 15 days ago... No communication whatsoever from their side. I could claim my money back but I just want the damn block... Any opinions on this?


i ordered on 2/6 and it was shipped on 3/7.

they have huge orders.


----------



## dante`afk

Renegade5399 said:


> Please see this from the man himself (kingpin):
> 
> 
> 
> It's time to stop speaking to things you know nothing of. I have been running XOC on a 1080Ti Aorus card ON AIR (I know and accept the risk) for a while now without issue. I will be running my 2080Tis with this new XOC BIOS and am confident all will be well. You will ruin your card only by not understanding what you are dealing with. Setting slider to 100% for example for a 1000W power limit or trying to use this BIOS on air. The voltage regulation hardware on these cards will get smoking hot with the slider even at 450-500W on air cooling. Water is considered to be the minimum for XOC BIOS use 24/7. As to ruining your board: With modern nvidia cards, it's not going to happen. Unless you modded the wrong shunt, the card will never pull more than 75W from the slot. Ever. So barring user error with a hardware mod, this is myth. The AMD cards that did this were, from the factory, allowed to pull over the 75W limit on the slot. So, no need to link/quote those articles.



so the XOC bios makes a shunt mod irrelevant, basically?


----------



## fleps

ReFFrs said:


> No, that's useless. It changed nothing in-game. For example, Metro Exodus (DX11) runs exclusive fullscreen perfectly without touching the compatibility tab.


It's not useless. It depends on how the engine implements fullscreen / exclusive fullscreen and how it interacts with W10 Fullscreen Optimizations.

On some games, if you don't disable this, exclusive fullscreen will not work, as W10 will override it.

On other games, the engine itself doesn't have exclusive fullscreen at all, just fullscreen, which works as borderless fullscreen.

This is well documented on Nvidia forums.

It doesn't mean you will see a change in performance.


----------



## Emmett

kot0005 said:


> zhrooms said:
> 
> 
> 
> 
> Also wanted to show something I discovered recently, which I have no answer to, maybe someone could shed some light on it!
> 
> This picture was uploaded by the owner of a Gigabyte RTX 2080 Ti Turbo (Non-A), he wanted to put a waterblock on it, and asked if it would fit, I was sure it was a reference PCB, but then he pointed out that the 3D render on the gigabyte website showed a fan connector at the top just like on the Gigabyte RTX 2080 Ti Gaming OC (A), and it turned out to be true, they actually screwed up the Turbo card as well, so waterblock won't fit without removing the fan connector, but that's not the surprising part, behold;
> 
> It's missing 5 power phases!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> damm lol!!!
> 
> is this the case with other cards too? Asus Turbo, EVGA Black?
Click to expand...

This is my Asus Turbo, purchased day one; it has an A chip.


----------



## Renegade5399

mackanz said:


> 1059mV and 2115 with the 380w bios seems to be the limit with this Inno3D Black version AIO. Is there any 400+W bios that works on reference boards? Samsung memory btw. 2115/8400 seems gaming stable, but the limit is the bios i think. As soon as i bump the core to 2140 it hard crashes, no matter what voltage i use.


You can try another BIOS but man, 2115 is damned good!



Ace99ro said:


> quick question guys , i want to nvlink 2 x 2080ti Gigabyte Windforce OC , i have this PSU - Corsair HX1200 80 Plus Platinum , full specs :
> 
> 9900k - 5ghz @ 1.33v
> Gigabyte Aorus Master
> 16gb Gskill 3200mhz @ CL14
> 2 x 7200rpm drives
> 1 x Sata SSD
> Corsair H115i Pro with 2 x Vardar EVO 140ER
> Corsair Air 540
> + 3 more Vardar EVO 140ER in the case
> 
> i don't plan to OC the cards 24/7 , maybe just for a couple of benchmarks
> 
> So is this PSU enough ?


Yup, you'll have some headroom.
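For anyone doing the same sizing math: a quick back-of-the-envelope sketch of that build's worst-case draw versus the HX1200. All wattages below are rough ballpark assumptions on my part, not measurements of this specific system:

```python
# Rough power-budget sketch for the quoted build (2x 2080 Ti NVLink + 9900K).
# Every number here is an illustrative assumption, not measured data.
parts = {
    "2x RTX 2080 Ti (stock 250W limit each)": 2 * 250,
    "9900K @ 5 GHz 1.33v, heavy load":        200,
    "motherboard + RAM":                       60,
    "drives, SSD, AIO pump, case fans":        50,
}
total = sum(parts.values())
psu = 1200
print(f"estimated load {total}W of {psu}W -> about {psu - total}W headroom")
```

Even with both cards briefly spiking past their power limit during benchmarks, the estimate leaves comfortable margin on a 1200W unit, which matches the "you'll have some headroom" answer above.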



dante`afk said:


> so the XOC bios makes a shunt mod irrelevant, basically?


Depends on the card. This XOC bios is way pickier about which card it's put on. It's also missing a lot of things (you can tell by how quickly it flashes). Examples: you can only do an offset OC, with no curve adjustment in MSI AB; Unwinder says that's most likely due to the lack of NVAPI support. Source. Also, the NVLink bandwidth is reduced, making multi-card setups not so hot. Source. For a single card this is still decent, but it depends on the OEM. My EVGAs don't come close to clocking the RAM to 8200 on any BIOS other than the EVGA one; this is due to timing settings, from what I have read here and over at guru3d. For me, with dual cards, I'll be using the stock, raised-PL BIOS from EVGA with the shunt mod and watercooling.
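Since the shunt mod keeps coming up: the reason it raises the effective power limit is plain parallel-resistor math. Soldering a resistor on top of the current-sense shunt lowers the resistance the controller measures, so it under-reads current and power. A quick sketch; the 5 mOhm value is a generic illustrative assumption, not a claim about any specific 2080 Ti board:

```python
# Why a shunt mod raises the effective power limit: a resistor in parallel
# with the sense shunt makes the controller see less resistance, so it
# under-reads current by the ratio R_shunt / R_parallel.
def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

r_shunt = 0.005                 # original sense resistor (assumed 5 mOhm)
r_mod = 0.005                   # identical resistor stacked on top
r_eff = parallel(r_shunt, r_mod)
scale = r_shunt / r_eff         # true power = reported power * scale
print(f"controller sees {r_eff * 1000:.2f} mOhm -> "
      f"reported power is 1/{scale:.0f} of actual")
```

So stacking an identical-value resistor doubles the real power draw at any given reported limit, which is exactly why kingpin's warning above about VRM temperatures on air matters.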



fleps said:


> It's not useless. It depends on how the engine implements fullscreen / exclusive fullscreen and how it interacts with W10 Fullscreen Optimizations.
> 
> On some games, if you don't disable this, exclusive fullscreen will not work, as W10 will override it.
> 
> On other games, the engine itself doesn't have exclusive fullscreen at all, just fullscreen, which works as borderless fullscreen.
> 
> This is well documented on Nvidia forums.
> 
> It doesn't mean you will see a change in performance.


This. I just search over there to find what settings I need to tweak for my games.


----------



## Esenel

fleps said:


> ReFFrs said:
> 
> 
> 
> No, that's useless. It changed nothing in-game. For example, Metro Exodus (DX11) runs exclusive fullscreen perfectly without touching the compatibility tab.
> 
> 
> 
> It's not useless. It depends on how the engine implements fullscreen / exclusive fullscreen and how it interacts with W10 Fullscreen Optimizations.
> 
> On some games, if you don't disable this, exclusive fullscreen will not work, as W10 will override it.
> 
> On other games, the engine itself doesn't have exclusive fullscreen at all, just fullscreen, which works as borderless fullscreen.
> 
> This is well documented on Nvidia forums.
> 
> It doesn't mean you will see a change in performance.
Click to expand...

Could you share a link about this please?

Thanks!


----------



## J7SC

Quick *question* :headscrat for the folks who run 2x 2080 TIs w/NVLink and use the MSI AB curve method:

My two cards are very close in performance and voltage, but certainly not identical...in my initial test-bench setup, I ran MSI AB curve for each card individually, but never two curves for two cards w/SLI/NVLink. Question is whether I should de-select the 'synchronize GPU' option in MSI AB first when attempting to give each GPU its own custom curve. Thanks


----------



## Renegade5399

J7SC said:


> Quick *question* :headscrat for the folks who run 2x 2080 TIs w/NVLink and use the MSI AB curve method:
> 
> My two cards are very close in performance and voltage, but certainly not identical...in my initial test-bench setup, I ran MSI AB curve for each card individually, but never two curves for two cards w/SLI/NVLink. Question is whether I should de-select the 'synchronize GPU' option in MSI AB first when attempting to give each GPU its own custom curve. Thanks


You can. Keep in mind this is the SLi protocol over an NVlink connector. It's not actually NVlink. The cards will run at matched speeds based off of the lowest speed card. So, when you are setting your curves, be sure that both cards can hit the same clocks. Voltages don't matter, but the speeds do. If both cards can hit 2100 for example, then your SLi speed will be both cards at 2100.

Also, if air cooling, I highly suggest unlinking their settings. The top GPU will most likely need a more aggressive fan curve than the lower card.


----------



## Zurv

J7SC said:


> Quick *question* :headscrat for the folks who run 2x 2080 TIs w/NVLink and use the MSI AB curve method:
> 
> My two cards are very close in performance and voltage, but certainly not identical...in my initial test-bench setup, I ran MSI AB curve for each card individually, but never two curves for two cards w/SLI/NVLink. Question is whether I should de-select the 'synchronize GPU' option in MSI AB first when attempting to give each GPU its own custom curve. Thanks


The clocks between cards are locked in SLI, so any voltage gains from making custom curves (assuming SLI in action doesn't force those to sync up too) would have little impact. In the end it will run at the highest clock of the slowest card.


----------



## J7SC

Renegade5399 said:


> You can. Keep in mind this is the SLi protocol over an NVlink connector. It's not actually NVlink. The cards will run at matched speeds based off of the lowest speed card. So, when you are setting your curves, be sure that both cards can hit the same clocks. Voltages don't matter, but the speeds do. If both cards can hit 2100 for example, then your SLi speed will be both cards at 2100.
> 
> Also, if air cooling, I highly suggest unlinking their settings. The top GPU will most likely need a more aggressive fan curve than the lower card.


 
Thank you :thumb: 
The cards are custom water-cooled, btw, so I don't have to worry about those fans. On GPU top speed, Card 1 is about 30 MHz or so faster, while Card 2 has a 40-50 MHz edge on max VRAM... and at the same speed setting (stock BIOS, MSI AB sliders at '0' extra voltage, no custom curves), what one does at 1.043v, the other shows as 1.05v.

Edit: Thanks Zurv


----------



## Rob Koe

Garrett1974NL said:


> Still waiting for the Heatkiller IV block from watercool.de... Ordered 15 days ago... No communication whatsoever from their side. I could claim my money back but I just want the damn block... Any opinions on this?


Yeah, the waiting time for this block is around 4 weeks. The demand is huge and the guys are busy keeping up with it. The block is awesome and runs way cooler than the EKWB block.


----------



## Glerox

I need some help guys.

I've installed the LOTAN GPU block on my MSI 2080 Ti Lightning Z. It's not my first time; everything went smoothly.

And now the GPU is stuck at 1350 MHz... I haven't touched the vbios, haven't shunt modded, only installed the block. Temps are good.

I've tried:

-reinstalling drivers
-LN2 bios switch
-reflashing the same original vbios
-reinstalling Windows

I've also noticed the Power Limit slider in MSI Afterburner is now greyed out.

Does anyone have any idea what might have happened and/or a solution?

Thanks, feeling a bit desperate here


----------



## J7SC

Glerox said:


> I need some help guys.
> 
> I've installed the LOTAN GPU block on my MSI 2080 Ti Lightning Z. It's not my first time; everything went smoothly.
> 
> And now the GPU is stuck at 1350 MHz... I haven't touched the vbios, haven't shunt modded, only installed the block. Temps are good.
> 
> I've tried:
> 
> -reinstalling drivers
> -LN2 bios switch
> -reflashing the same original vbios
> -reinstalling Windows
> 
> I've also noticed the Power Limit slider in MSI Afterburner is now greyed out.
> 
> Does anyone have any idea what might have happened and/or a solution?
> 
> Thanks, feeling a bit desperate here


 
...check this thread + specific post in the spoiler, potentially a mounting pressure issue per description of the specific change you made


Spoiler



https://www.overclock.net/forum/69-nvidia/1722102-2080ti-lightning-dead-after-xoc-bios-flash-5.html#post27884026


----------



## VPII

Glerox said:


> I need some help guys.
> 
> I've installed the LOTAN GPU block on my MSI 2080 Ti Lightning Z. It's not my first time; everything went smoothly.
> 
> And now the GPU is stuck at 1350 MHz... I haven't touched the vbios, haven't shunt modded, only installed the block. Temps are good.
> 
> I've tried:
> 
> -reinstalling drivers
> -LN2 bios switch
> -reflashing the same original vbios
> -reinstalling Windows
> 
> I've also noticed the Power Limit slider in MSI Afterburner is now greyed out.
> 
> Does anyone have any idea what might have happened and/or a solution?
> 
> Thanks, feeling a bit desperate here


If all else fails, it's RMA time... I had a similar issue, got it fixed (don't ask how), and then it went all pear-shaped after a driver update, and now I have a new, different card.


----------



## J7SC

VPII said:


> If all else fails, it's RMA time... I had a similar issue, got it fixed (don't ask how), and then it went all pear-shaped *after a driver update, and now I have a new, different card*.


 
...what are you going to do when there's another driver update ?


----------



## Glerox

VPII said:


> Glerox said:
> 
> 
> 
> I need some help guys.
> 
> I've installed the LOTAN GPU block on my MSI 2080 Ti Lightning Z. It's not my first time; everything went smoothly.
> 
> And now the GPU is stuck at 1350 MHz... I haven't touched the vbios, haven't shunt modded, only installed the block. Temps are good.
> 
> I've tried:
> 
> -reinstalling drivers
> -LN2 bios switch
> -reflashing the same original vbios
> -reinstalling Windows
> 
> I've also noticed the Power Limit slider in MSI Afterburner is now greyed out.
> 
> Does anyone have any idea what might have happened and/or a solution?
> 
> Thanks, feeling a bit desperate here
> 
> 
> 
> If all else fails, it's RMA time... I had a similar issue, got it fixed (don't ask how), and then it went all pear-shaped after a driver update, and now I have a new, different card.
Click to expand...

I switched back to the air cooler and it works; I switched back to the waterblock, without overtightening it this time, and it goes back to 1350 MHz! 

So weird


----------



## Rob w

Glerox said:


> I switched back to the air cooler and it works; I switched back to the waterblock, without overtightening it this time, and it goes back to 1350 MHz!
> 
> So weird


I would check in the Afterburner settings that voltage is unlocked; other than that, I tend to think an ill-fitting waterblock is possibly shorting the card somewhere.


----------



## hrm

Glerox said:


> I need some help guys.
> 
> I've installed the LOTAN GPU block on my MSI 2080 Ti Lightning Z. It's not my first time; everything went smoothly.
> 
> And now the GPU is stuck at 1350 MHz... I haven't touched the vbios, haven't shunt modded, only installed the block. Temps are good.
> 
> I've tried:
> 
> -reinstalling drivers
> -LN2 bios switch
> -reflashing the same original vbios
> -reinstalling Windows
> 
> I've also noticed the Power Limit slider in MSI Afterburner is now greyed out.
> 
> Does anyone have any idea what might have happened and/or a solution?
> 
> Thanks, feeling a bit desperate here


Could it be that the Lightning requires the (original) air cooler's fan header to be connected to work with watercooling?


----------



## Garrett1974NL

dante`afk said:


> i ordered on 2/6 and it was shipped on 3/7.
> 
> they have huge orders.


Yeah I called them and it should arrive next week. @Rob Koe


----------



## kx11

Glerox said:


> I need some help guys.
> 
> I've installed the LOTAN GPU block on my MSI 2080 Ti Lightning Z. It's not my first time; everything went smoothly.
> 
> And now the GPU is stuck at 1350 MHz... I haven't touched the vbios, haven't shunt modded, only installed the block. Temps are good.
> 
> I've tried:
> 
> -reinstalling drivers
> -LN2 bios switch
> -reflashing the same original vbios
> -reinstalling Windows
> 
> I've also noticed the Power Limit slider in MSI Afterburner is now greyed out.
> 
> Does anyone have any idea what might have happened and/or a solution?
> 
> Thanks, feeling a bit desperate here



i should have my block in 2 days , hopefully i won't have that issue , how good are the temps ?!


----------



## JustinThyme

Rob w said:


> I would check in the Afterburner settings that voltage is unlocked; other than that, I tend to think an ill-fitting waterblock is possibly shorting the card somewhere.


I'm with stupid ^^^^
If it runs on air, then craps out on the water block, something is wrong with the block. It's putting too much pressure on something or shorting something out. It's a long process to find the exact point, but I'd start by making sure the block is flat. I tried a Lotan block on my Strix and the block was warped. Sent that puppy back; I never even ran water through it. I always do a test assembly and check the die for TIM spread and the thermal pads for even indentations from the VRMs and other components. My sample had only 4 of 16 MOSFETs actually making indentations. If you aren't making good contact, it could be thermal issues, or just a short from a metal shard you can't see. 

I know the feeling; I just went through this with my Strix card, and two blocks I'll not buy again are Bitspower and Barrow. I wasn't looking to go cheap, just looking for a block, as both EK and Watercool are late to the party. EK just started shipping theirs, and Watercool has yet to start. The card was released in October, and 5 months later... really?


----------



## kx11

JustinThyme said:


> I'm with stupid ^^^^
> If it runs on air, then craps out on the water block, something is wrong with the block. It's putting too much pressure on something or shorting something out. It's a long process to find the exact point, but I'd start by making sure the block is flat. I tried a Lotan block on my Strix and the block was warped. Sent that puppy back; I never even ran water through it. I always do a test assembly and check the die for TIM spread and the thermal pads for even indentations from the VRMs and other components. My sample had only 4 of 16 MOSFETs actually making indentations. If you aren't making good contact, it could be thermal issues, or just a short from a metal shard you can't see.
> 
> I know the feeling; I just went through this with my Strix card, and two blocks I'll not buy again are Bitspower and Barrow. I wasn't looking to go cheap, just looking for a block, as both EK and Watercool are late to the party. EK just started shipping theirs, and Watercool has yet to start. The card was released in October, and 5 months later... really?



Kinda funny how I see amazing watercooled rigs on Instagram using Bitspower blocks on GPUs/CPUs; EK blocks, on the other hand, are rare to find in those circles. 



I mean, look at this:


https://www.instagram.com/p/Bt7Cap5ltSF/


----------



## x-speed69

Glerox said:


> I need some help guys.
> 
> I've installed the LOTAN gpu block on my MSI 2080TI lightning Z. It's not my first time. Eveything went smoothly.
> 
> And now the gpu is stuck at 1350 MHZ... I haven't touch the vbios, haven't shunt modded, only installed the block. Temps are good.
> 
> Ive tried :
> 
> -reinstalling drivers
> -LN2 bios switch
> -reflashing the same original vbios
> -reinstalling Windows
> 
> I've noticed in MSI afterburner the Power limit slider is greyed out also now.
> 
> Anyone has any idea of what might have happened and/or solution?
> 
> thanks, feeling a bit desperate here





That card doesn't like it when you unplug the stock fans; it goes into safe mode.


I'm still using the stock heatsink, but I changed those fans and ran into the same problem.


You can get it running with the waterblock if you plug the stock fans back in and boot the system up. The core is probably still at 1350, but when you then change bios mode, everything goes back to normal.
Now you can unplug the fans and it keeps working. Remember that whenever you change bios mode, update the bios, or turn off power completely, you have to do this hassle again.


I don't know if this is a bug or just stupid design from MSI.


----------



## Hemorz

x-speed69 said:


> That card doesn't like it when you unplug the stock fans; it goes into safe mode.
> 
> 
> I'm still using the stock heatsink, but I changed those fans and ran into the same problem.
> 
> 
> You can get it running with the waterblock if you plug the stock fans back in and boot the system up. The core is probably still at 1350, but when you then change bios mode, everything goes back to normal.
> Now you can unplug the fans and it keeps working. Remember that whenever you change bios mode, update the bios, or turn off power completely, you have to do this hassle again.
> 
> 
> I don't know if this is a bug or just stupid design from MSI.


Sounds like you just need a dummy fan plug to plug into the GPU fan header

Sent from my SM-G965F using Tapatalk


----------



## maxmix65

New bios Msi Lightning Z 400w


----------



## roccale

Got my ROG Strix RTX 2080 Ti OC 3 days ago; I'm doing the first air tests:


----------



## Glerox

x-speed69 said:


> That card doesn't like it when you unplug the stock fans; it goes into safe mode.
> 
> 
> I'm still using the stock heatsink, but I changed those fans and ran into the same problem.
> 
> 
> You can get it running with the waterblock if you plug the stock fans back in and boot the system up. The core is probably still at 1350, but when you then change bios mode, everything goes back to normal.
> Now you can unplug the fans and it keeps working. Remember that whenever you change bios mode, update the bios, or turn off power completely, you have to do this hassle again.
> 
> 
> I don't know if this is a bug or just stupid design from MSI.


You are right, thank you for this. In fact, thanks everyone for all your answers!

I've installed the water block with the air cooler's GPU fans plugged in (which was quite a pain to do), which is a bit ridiculous.
Afterwards, I removed the fans while the PC was running, and now it seems to work.
I'm just really afraid to install a new driver now, as the issue might come back...

This is so ridiculous coming from a GPU that is made FOR overclocking...


----------



## Glerox

So I tried the new LN2 bios and the GPU is back in safe mode... reverting to the regular vbios doesn't fix the issue. I would have to reinstall the original cooler fans again...

I'll have to write to Bitspower to ask how they got the GPU working with the water block...


----------



## fleps

Glerox said:


> So I tried the new LN2 bios and the GPU is back in safe mode... reverting to the regular vbios doesn't fix the issue. I would have to reinstall the original cooler fans again...
> 
> I'll have to write to Bitspower to ask how they got the GPU working with the water block...


I also suggest you post on the MSI forums, they are very active there with BIOS and requests / issues report.


----------



## x-speed69

Glerox said:


> So I tried the new LN2 bios and the GPU is back in safe mode... reverting to the regular vbios doesn't fix the issue. I would have to reinstall the original cooler fans again...
> 
> I'll have to write to Bitspower to ask how they got the GPU working with the water block...



Currently it's a painful problem, and a very weird one, because you would think this card is meant to run on water, not air.


In my experience updating drivers is OK, but if you do anything on the bios side it goes back to safe mode.


About the "real" LN2 bios: I have been fighting with MSI over it since January and they have given me the crappiest customer service ever. This is for sure the last time I buy anything from MSI.


Basic support is **** and they don't know anything or straight up lie about things...


----------



## kx11

maxmix65 said:


> New bios Msi Lightning Z 400w



ok , how can i install it since i use an ROG board ?


----------



## xaxx

I wanted to tell you that I have done numerous tests on the MSI Sea Hawk X, with its own 330W bios and with the Galax 380W bios. Both bioses correspond to cards with the reference PCB. 53°C was not exceeded in any test. All tests were done starting from idle after 15 minutes, with an initial temperature of 30°C, and were repeated. Results (4K, ultra):

*MSI stock bios, no OC:* peaks of 1935 MHz
Superposition 4K = 12358 points; Shadow of the Tomb Raider benchmark = 65 fps
Shadow of War benchmark = 85 fps; Assassin's Creed Odyssey benchmark = 54 fps

*Galax stock bios, no OC:* peaks of 1905 MHz
Superposition 4K = 12282 points; Shadow of the Tomb Raider benchmark = 65 fps
Shadow of War benchmark = 84 fps; Assassin's Creed Odyssey benchmark = 53 fps

Overclock for both bioses: +800 MHz on memory, +100% voltage, temperature and power limit sliders at maximum, fans at 100%

*MSI 330W bios with stable OC of +130 MHz:* peaks of 2040 MHz
Superposition 4K = 13342 points; Shadow of the Tomb Raider benchmark = 71 fps
Shadow of War benchmark = 92 fps; Assassin's Creed Odyssey benchmark = 62 fps

*Galax 380W bios with stable OC of +150 MHz:* peaks of 2025 MHz
Superposition 4K = 13168 points; Shadow of the Tomb Raider benchmark = 70 fps
Shadow of War benchmark = 91 fps; Assassin's Creed Odyssey benchmark = 57 fps

My GDDR6 memory is Samsung. I don't know if that has something to do with it, but the stock 330W bios definitely gives better results than the Galax 380W one. Any suggestions? Some other bios that works better? I'm thinking of swapping to the MSI X Trio bios, since it goes up to 406W, and I hope it will perform better than the Sea Hawk X one even if the temperature is higher.


----------



## x-speed69

kx11 said:


> ok , how can i install it since i use an ROG board ?



You can use Dragon Center on all systems.


----------



## Glerox

I tried the new MSI LN2 bios for the Lightning Z. I can confirm it allows higher clocks (from 2100 MHz up to 2130 MHz) and higher power consumption (around 40W more) than the regular BIOS. It improved my ranking on the 3DMark HOF 😛

You asked about temps with the LOTAN block. It's quite good! My peak during Time Spy Extreme is 43 degrees. It's the best temp I've ever had on a GPU 🙂

Now we just need MSI to fix that stupid fan safe mode in their bios...


----------



## kx11

x-speed69 said:


> You can use Dragon Center on all systems.



tried it , almost messed up my system 



i'll give it another go


----------



## ReFFrs

xaxx said:


> My GDDR6 memory is Samsung. I don't know if that has something to do with it, but the stock 330W bios definitely gives better results than the Galax 380W one. Any suggestions? Some other bios that works better? I'm thinking of swapping to the MSI X Trio bios, since it goes up to 406W, and I hope it will perform better than the Sea Hawk X one even if the temperature is higher.


Try these two bios versions with tuned memory timings:

1) AORUS 2080 Ti XTREME WATERFORCE

366W: https://www.techpowerup.com/vgabios/208831/208831

2) EVGA 2080 Ti FTW3 HYDRO COPPER

373W: https://www.techpowerup.com/vgabios/207291/207291

After that try XOC 1000W bios with power target set around 420W and compare the results.


----------



## dangerSK

Glerox said:


> I tried the new MSI LN2 bios for the Lightning Z. I can confirm it allows higher clocks (from 2100 MHz up to 2130 MHz) and higher power consumption (around 40W more) than the regular BIOS. It improved my ranking on the 3DMark HOF 😛
> 
> You asked about temps with the LOTAN block. It's quite good! My peak during Time Spy Extreme is 43 degrees. It's the best temp I've ever had on a GPU 🙂
> 
> Now we just need MSI to fix that stupid fan safe mode in their bios...


hows the lotan block ? does it fit good on lightning Z ? which block are u using ?
thx


----------



## xaxx

So nothing bad will happen?? Both of those bioses are for custom PCBs and mine is reference???? The XOC 1000W gives me a little fear, after that other member's Lightning kept freezing with it.


----------



## dangerSK

x-speed69 said:


> Currently it's a painful problem, and a very weird one, because you would think this card is meant to run on water, not air.
> 
> 
> In my experience updating drivers is OK, but if you do anything on the bios side it goes back to safe mode.
> 
> 
> About the "real" LN2 bios: I have been fighting with MSI over it since January and they have given me the crappiest customer service ever. This is for sure the last time I buy anything from MSI.
> 
> 
> Basic support is **** and they don't know anything or straight up lie about things...


As I told you in the MSI forum thread, they won't give you that unlocked bios. I can't get my hands on it even though I'm an "LN2 overclocker"; it's an engineering bios meant only for MSI-sponsored guys. I've talked about this with two "MSI guys".
Worst thing is, it's under a very strict NDA, so they can't share it. Just use the ASUS 1000W XOC bios; it works well.


----------



## xaxx

If you could explain to me what a bios with optimized memory timings is, I would appreciate it; I'd like to know a little more about the subject. Thank you


----------



## dangerSK

xaxx said:


> Nothing will happen?? The two Bios are personal pcb and mine is reference ???? The XOC 1000 after the companion I was freezing with the lightning gives me a little fear.


If you have a reference PCB, use the Asus XOC bios; Dancop said it's working fine with reference PCBs.


----------



## kot0005

Did anyone try the 1000W bios on Gaming X trio PCB ?


----------



## ReFFrs

xaxx said:


> If you could explain to me what a bios with optimized memory timings is, I would appreciate it; I'd like to know a little more about the subject. Thank you


VRAM has memory timings just like standard DDR4 does, those numbers like 15-17-17-35.

So different vendors' bioses have different timings baked in. The GALAX 380W bios was released at the RTX launch and hasn't been updated since, so it contains outdated timings tuned for Micron memory, which are not the best for Samsung. Newer/updated bioses released after November 2018 contain up-to-date timings, specifically for Samsung VRAM but maybe for Micron as well. 

That's why you could be getting better scores with newer bioses. Even if you set the OC lower, you can still get better benchmark results due to the difference in timings. So newer bioses are definitely worth trying over that 380W one.
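To put a number on this: timings don't change peak bandwidth at all, only how efficiently it's used, which is why two bioses at the same memory clock can score differently. Peak bandwidth is just data rate times bus width. A quick sketch using the reference 2080 Ti figures; the "+800 MHz slider = +1.6 Gbps effective" conversion is my assumption about how the Afterburner memory offset maps to GDDR6 data rate, so treat it as illustrative:

```python
# Peak GDDR6 bandwidth = data rate (Gbps per pin) * bus width (bits) / 8.
# Timings affect latency/efficiency, not this ceiling.
def peak_bandwidth_gb_s(data_rate_gbps: float, bus_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_bits / 8

stock = peak_bandwidth_gb_s(14.0, 352)        # reference 2080 Ti: 616 GB/s
oc = peak_bandwidth_gb_s(14.0 + 1.6, 352)     # assumed +800 MHz AB offset = +1.6 Gbps
print(f"stock {stock:.0f} GB/s, overclocked roughly {oc:.0f} GB/s")
```

The stock result matches the 616 GB/s in the spec sheet at the top of the thread; the OC figure shows why memory overclocks on these cards are worth chasing at all.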


----------



## xaxx

Perfect, now it's clear to me. It's true that the people who tried the Galax 380W on my card model, the Sea Hawk X, bought theirs before Christmas, so that makes a lot of sense. I've also thought about trying the Gigabyte Windforce bios, since it's also 366W and a reference PCB like mine.


----------



## x-speed69

dangerSK said:


> As I told you in the MSI forum thread, they won't give you that unlocked bios. I can't get my hands on it even though I'm an "LN2 overclocker"; it's an engineering bios meant only for MSI-sponsored guys. I've talked about this with two "MSI guys".
> Worst thing is, it's under a very strict NDA, so they can't share it. Just use the ASUS 1000W XOC bios; it works well.



That seems clear but I will keep fighting with msi.
I didn't spend 1600€ for nothing :thumbsdow


BTW does XOC bios allow you to adjust voltage also?


----------



## dangerSK

x-speed69 said:


> That seems clear but I will keep fighting with msi.
> I didn't spend 1600€ for nothing :thumbsdow
> 
> 
> BTW does XOC bios allow you to adjust voltage also?


Well, it's your fight, though I can say for sure you will lose 
Nah, you can't change voltage, but they will release an XOC bios tool, same as the 1080 Ti had. But again, it doesn't matter, because on air/ambient water it's pointless to push voltage; it doesn't scale on Turing  If you want XOC, go chilled water/dice. For LN2, yeah, it sucks


----------



## worms14

Hi everyone,
I am very sorry for my English; I'm writing with the help of Google Translate.
I bought an NVIDIA RTX 2080 Ti Founders Edition from a colleague.
I'm happy with it, but I would like more OC, so I'm thinking about changing the bios.
Before I do that, though, I would like to buy water cooling.
Please advise which block is best to choose in terms of ease of assembly and performance.

I looked at the EKWB website, but the compatibility list doesn't work, so I don't know what to choose; maybe EKWB is not the best choice.
I will be very grateful for any advice.

Greetings.

A few pics of my GPU


----------



## GAN77

worms14 said:


> I looked at the EKWB website, but the compatibility list doesn't work, so I don't know what to choose; maybe EKWB is not the best choice.
> I will be very grateful for any advice.


All water blocks are approximately equal to each other.
I chose between Watercool and XSPC.

I decided to go with the tempered glass one from XSPC :)


----------



## Carillo

Hello 


After 2 days of tweaking with the XOC BIOS, my Zotac AMP 2080Ti managed a pretty good Time Spy score. It's just a pity that there is no BIOS tool yet 
https://www.3dmark.com/spy/6539232


----------



## dangerSK

Carillo said:


> Hello
> 
> 
> After 2 days of tweaking with the XOC BIOS, my Zotac AMP 2080Ti managed a pretty good Time Spy score. It's just a pity that there is no BIOS tool yet
> https://www.3dmark.com/spy/6539232


Sorry to disappoint you, there won't be any tool, unless it's fan-made.


----------



## Carillo

dangerSK said:


> Sorry to disappoint you, there won't be any tool, unless it's fan-made.



Besides, I have tested the BIOS for normal use (games) and it is quite useless. The core voltage throttles incredibly fast under normal water-cooled conditions and results in great instability. The Galax 380W BIOS is still by far the best for my card.


----------



## dangerSK

Carillo said:


> Besides, I have tested the BIOS for normal use (games) and it is quite useless. The core voltage throttles incredibly fast under normal water-cooled conditions and results in great instability. The Galax 380W BIOS is still by far the best for my card.


I've tested one BIOS (can't specify, sorry) which has a 1000+W power limit; I tested on air with a Lightning card.
Board power in-game (AC:O) was around 510W, and the GPU was doing fine at 2115MHz, 1.2V core.
Conclusion: that Galax 380W is nothing, you need a lot more; that's why your core voltage is dropping like crazy


----------



## Carillo

dangerSK said:


> I've tested one BIOS (can't specify, sorry) which has a 1000+W power limit; I tested on air with a Lightning card.
> Board power in-game (AC:O) was around 510W, and the GPU was doing fine at 2115MHz, 1.2V core.
> Conclusion: that Galax 380W is nothing, you need a lot more; that's why your core voltage is dropping like crazy


My voltage is not dropping with the Galax 380W BIOS, that's the point of the VF curve. My voltage is dropping with the 1000W XOC BIOS, since there is only an offset core clock and no VF curve.


----------



## xaxx

Am I the only one for whom the Galax 380W BIOS does not work well? Can you help me with the frequency/voltage curve? If I put my GPU's stable MHz in Afterburner, it is applied to the whole curve. I do not know how to edit it; I can move the MHz at each voltage point, but I do not know what to put at 1.093V or at 1.050V, etc. I assume that at lower voltage the MHz should be lower than the peak, no?? Help


----------



## ReFFrs

Carillo said:


> My voltage is not dropping with the Galax 380W BIOS, that's the point of the VF curve. My voltage is dropping with the 1000W XOC BIOS, since there is only an offset core clock and no VF curve.


Can you please post a screenshot of how you configured your curve for 380W? 1) cold state and 2) at full load with constant temps would be much appreciated.

Also, if you open C:\Program Files (x86)\MSI Afterburner\Profiles 

there will be a file called something like VEN_10DE&DEV_1E07&*******.cfg

The curve settings are stored there, so can you please post the string after "VFCurve=" from the profile number you are using?

It should look like:

[Profile2]
Format=2
CoreVoltageBoost=100
PowerLimit=120
ThermalLimit=
CoreClkBoost=30000
VFCurve=000001005******************
MemClkBoost=205000
FanMode=1
FanSpeed=25
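
If it helps anyone pull that string out without hunting through the file by hand: the .cfg is plain INI text, so a few lines of Python can grab it. Just a sketch; the profile name and sample value below are made up, point it at your own VEN_10DE&DEV_1E07&*******.cfg.

```python
# Sketch: read the VFCurve string from an MSI Afterburner profile .cfg.
# The file is plain INI text, so ConfigParser handles it; interpolation is
# disabled so stray % characters in values can't trip it up.
from configparser import ConfigParser

def read_vf_curve(cfg_path, profile="Profile2"):
    parser = ConfigParser(interpolation=None)
    parser.read(cfg_path)
    if profile not in parser:
        return None  # that profile slot was never saved in this file
    return parser[profile].get("VFCurve")
```

It returns whatever follows "VFCurve=" for that profile, or None if the slot is empty.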


----------



## Carillo

ReFFrs said:


> Can you please post a screenshot how you configured your curve for 380W? 1) at cold stated and 2) at full load with constant temps would be much appreciated.
> 
> Also if you will open C:\Program Files (x86)\MSI Afterburner\Profiles
> 
> there will be file called something like VEN_10DE&DEV_1E07&*******.cfg
> 
> Curve settings stores there, so can you please post the code after "VFCurve=" from the profile number you are using?
> 
> It should look like:
> 
> [Profile2]
> Format=2
> CoreVoltageBoost=100
> PowerLimit=120
> ThermalLimit=
> CoreClkBoost=30000
> VFCurve=000001005******************
> MemClkBoost=205000
> FanMode=1
> FanSpeed=25


Yes, I can do that, but right now I'm using the 1000W BIOS for benchmarking. I can post it later tonight. My card is not the best binned, far from it. With the 380W Galax BIOS I usually run 2115MHz/8100MHz @1075mV for normal use. I can do 2130MHz and even [email protected], but after hours of gaming my water reaches 40 degrees Celsius, my card peaks at 52 degrees, and the card gets unstable (when benchmarking I'm using a chiller). The reason I'm not using more than 1075mV is that we found out there is some strange input lag happening in Battlefield 5 with voltages over 1075mV.


----------



## ReFFrs

Carillo said:


> I can post it later tonight. My card is not the best binned, far from it. With the 380w galax bios i usually run 2115mhz/8100mhz @1075mV for normal use.


Thanks, most of the cards people have on hand are not the best binned, so your curve will be useful to try.


----------



## Glerox

dangerSK said:


> x-speed69 said:
> 
> 
> 
> Currently it's painful problem and a very weird one because you would think this card is meant to run on water not air.
> 
> 
> In my experience updating drivers is ok but if you do anything to bios side it goes back to safe mode.
> 
> 
> About the "real" LN2 bios. I have been fighting with MSI about it since January and they have given me crappiest customer service ever. This is for sure last time I buy anything from MSI.
> 
> 
> Basic support is **** and they don't know anything or straight up lie about things...
> 
> 
> 
> As i told you in MSI Forum thread, they wont give you that unlocked bios, i cant get hands on it even though im "LN2 overclocker", its engineering bios meant only for MSI sponsored guys, ive talked about this with 2 "msi guys".
> Worst thing is its under very strict NDA so they cant share, well just use the ASUS 1000W XOC bios, works good.

You're saying the ASUS 1000W XOC works well on the MSI Lightning Z? One guy in the thread was stuck at 1350MHz after flashing this BIOS on his Z.


----------



## roccale

Can someone explain to me why, at the same clock, Turing draws about the same watts whether I run stock vcore or an undervolt? The power only seems to change with the power limit:

1.062v pl 100% = 265w
1.062v pl 125% = 327w
0.950v pl 100% = 263w
0.950v pl 125% = 326w
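
For what it's worth, CMOS dynamic power scales roughly with voltage squared times frequency, so at a fixed clock an undervolt should cut watts noticeably; if the readings barely move, the card is likely sitting at the power cap either way, and the sensor just reports the cap. A rough Python illustration (the constant k is arbitrary, not a measured value for Turing):

```python
# Rough illustration: dynamic power ~ k * V^2 * f. If demand exceeds the
# board power limit, the card throttles and the reading sits near the cap,
# which can hide the difference between two voltages. k is made up.

def dynamic_power_watts(voltage_v, clock_mhz, k=0.15):
    return k * voltage_v ** 2 * clock_mhz

def reported_power(voltage_v, clock_mhz, power_limit_w, k=0.15):
    return min(dynamic_power_watts(voltage_v, clock_mhz, k), power_limit_w)
```

With this made-up k, both 1.062V and 0.95V at 2100MHz "want" more than a 265W cap, so both report roughly the cap; only with the limit raised well above demand would the V² difference show up in the reading.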


----------



## Carillo

xaxx said:


> I am the only one that the bios galax 380w does not work well? You can help me with the frequency / voltage curve. If I put the stable mhz for my gpu in Afterburner it is applied to the whole curve. I do not know how to edit it, I can move according to the voltage of the mhz but I do not know what to put in 1.093 or in 1.050 ... Etc. It is assumed that at less voltage less the mhz that is added will have a figure lower than the peak mhz no ?? Help


Try the following: reset to default in Afterburner if anything is applied. Press CTRL+F and your curve will show up. Try (depending on your cooling, of course) putting your mouse cursor where the X-axis shows 1075mV, then press and hold while lifting the curve until you match the point on the Y-axis where it says, for example, 2115MHz. Then press Apply. Pick your application of choice for stability testing. If it fails, repeat the first steps with either higher voltage or lower core clock. Don't touch your memory until your core is stable. Just my suggestion, hope it helps you
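
The repeat-until-stable procedure above is basically a small loop. A toy sketch in Python (apply_curve_point and run_stability_test just stand in for the manual Afterburner drag and your benchmark of choice, and the 2085MHz "stable limit" is an invented number):

```python
# Toy model of the tune-and-test loop: drop the target clock one 15 MHz
# bin at a time until the stability test passes. The two helpers below
# simulate the manual steps; they are not real APIs.
STABLE_LIMIT_MHZ = 2085  # pretend this card is stable at or below this

def apply_curve_point(voltage_mv, clock_mhz):
    # stands in for: CTRL+F, drag the point at voltage_mv up to clock_mhz, Apply
    print(f"curve point set: {clock_mhz} MHz @ {voltage_mv} mV")

def run_stability_test(clock_mhz):
    # stands in for: looping Time Spy / your game until it passes or crashes
    return clock_mhz <= STABLE_LIMIT_MHZ

def find_stable_clock(voltage_mv=1075, start_mhz=2115, step_mhz=15):
    clock = start_mhz
    while clock > 0:
        apply_curve_point(voltage_mv, clock)
        if run_stability_test(clock):
            return clock   # first clock that survives the test
        clock -= step_mhz  # back off one 15 MHz bin and retry
    return None
```

Only after this settles on a core clock would you start raising memory, as noted above.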


----------



## worms14

GAN77 said:


> All water blocks are approximately equal to each other.
> I chose between Watercool and XSPC.
> 
> I decided to look at the tempered glass from XSPC)



Thank you very much for your answer.
I liked this model more and already ordered it; I hope it will improve my OC and there will be no more noise.
http://shop.watercool.de/epages/WatercooleK.sf/en_GB/?ObjectPath=/Shops/WatercooleK/Products/15622

Now I am wondering which BIOS to choose for my GPU. I am reading through the topic in the hope of finding people who will recommend something that works.
I hope that as a beginner I will not ruin the card.


----------



## Esenel

Carillo said:


> Yes, i can do that, but right know i'm using the 1000w bios for benchmarking. I can post it later tonight. My card is not the best binned, far from it. With the 380w galax bios i usually run 2115mhz/8100mhz @1075mV for normal use. I can do 2130mhz and even [email protected] but after hours of gaming my water reaches 40 degree celsius and my card peaks at 52 degrees, and the card gets unstable .(when benchmarking i'm using a chiller) Reason i'm not using more than 1075mV is that we found out, there is some strange input lag happening in Battlefield 5 with voltages over 1075mV.


Hi,

interesting.
My card is doing roughly the same.
Although I think with offset values I am doing better somehow. Hm. But maybe it is just me setting up the curve wrong :-D

With the 380W I can do +145 on Core and +1000 MEM.

While GPU temp is <40°C I get 2130 MHz (1.093V)
40-45°C it is 2115 MHz (1.081V) 
>45°C it drops to 2100 MHz (1.062V)

I tried the curve several times with things like a fixed 1.093V at 2130 MHz, but either it got lower scores or it failed at 2145 MHz with 1.093V.

So with offset I am always at 2115 MHz until it is either too warm in the room or it is a heavy game, then it stays at 2100 MHz.

I do not think that there is so much to improve here, unless I have a wrong understanding of the matter :-x
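
Written out, the temperature steps above are just a lookup, which is roughly how GPU Boost's temperature bins behave. The numbers are only what this particular card shows with this offset, not a general rule:

```python
# The observed 380W-BIOS boost bins from the post above, as a lookup.
# Values are one card's behaviour at +145 core offset, not a general table.

def boost_bin(gpu_temp_c):
    """Return (core_mhz, core_volts) for a given GPU temperature."""
    if gpu_temp_c < 40:
        return 2130, 1.093
    if gpu_temp_c <= 45:
        return 2115, 1.081
    return 2100, 1.062
```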


----------



## Section31

worms14 said:


> Thank you very much for your answer.
> I liked this model more and already ordered it; I hope it will improve my OC and there will be no more noise.
> http://shop.watercool.de/epages/WatercooleK.sf/en_GB/?ObjectPath=/Shops/WatercooleK/Products/15622
> 
> Now I am wondering which BIOS to choose for my GPU. I am reading through the topic in the hope of finding people who will recommend something that works.
> I hope that as a beginner I will not ruin the card.


Actually, someone tested out the blocks. I think they all performed about the same (it comes down to silicon lottery on the GPU and an unlocked BIOS for OC); the coolest of them was the Aquacomputer block. The Heatkiller (I have one) was second best but also much cheaper (the Heatkiller costs the same as the EKWB). The difference was minor (1-2 degrees). The XSPC looks cool, but my biggest concern is quality control; so far Heatkiller and Bitspower have been 100% on quality control. The rest have had bad luck at least once.


----------



## pewpewlazer

Esenel said:


> Hi,
> 
> interesting.
> My card is doing roughly the same.
> Although I think with Offset values I am doing better somehow. Hm. But maybe It is just me setting up the curve wrong :-D
> 
> With the 380W I can do +145 on Core and +1000 MEM.
> 
> While GPU temp is <40°C I get 2130 MHz (1.093V)
> 40-45°C it is 2115 MHz (1.081V)
> >45°C it drops to 2100 MHz (1.062V)
> 
> I tried thee curve several times with stuff like fixed 1.093V at 2130 MHz but either it got lower scores or at 2145 with 1.093V it failes.
> 
> So with Offset I am always on 2115 MHz until it is either to warm in the room or it is a heavy game, than it stays at 2100 MHz.
> 
> I do not think that there is so much to improve here, unless I have a wrong understanding of the matter :-x


What games or benchmarks are you guys running where you can hold 1.062-1.093V? I just can't wrap my head around it. I've gotten ~1.05V to hold for most of Port Royal, but that's about it. Time Spy Extreme will throttle down as low as 1.00V at some points. 380W GALAX BIOS of course, load temps ~45°C on water.

In games 1.031-1.037V is about my max, and even then it will occasionally power throttle in some games (Metro Exodus being the most recent one). I suppose at 1440p I could push the voltage a bit higher, but an extra 15MHz is kind of pointless at that res.


----------



## J7SC

I am getting ready to do some 'MSI AB curve' overclocking for the TR 2950X w/ dual Aorus 2080 Ti XTR Waterforce GPUs. I spent a day or so setting the baseline via the 'regular MSI AB' hit-and-miss sliders, so after the curve setting I'll know which is which. Very pleasantly surprised at the baseline performance: TimeSpy single @ 2175 MHz (score > 16000), TimeSpy SLI @ 2145 MHz and SuperPosition 4K at 2205 MHz (peak / bounces between 2175 and 2205). The latter, for example, would be in the global top 15 or so (...if I had purchased an account / maybe later). BIOS is stock, no extra voltage applied.

The trick for the final performance steps was a minor mod (pic below/attached) which I thought of after the dual results consistently showed max '39 c and 37 c' respectively. This bugged me, as I think there's a GPU speed-step at 38 c. I made some simple 'baffles' between GPU rad 1 and rad 2 to control airflow better, and it worked  . The blue arrows show where I put the baffles. Airflow is left-to-right. Overall, the GPUs have ambient water cooling, w/ 3x RX 360/60 rads, 2 pumps and 12x 120mm fans. As discussed before, this is not a gamer or bencher but a mixed-use (Threadripper 2950X) 'show' system for the office; it also does productivity work (rendering, de/compression, encryption, compiling...) and had better not crash ! Top CPU speed was 4250 MHz / all cores (normal setup is 4225 at 1.325v, as AMD has quarter-step multipliers  ) 

I've done MSI AB curves before, but never on a dual / SLI system... :sad-smile I'll probably try next weekend or so.


----------



## xaxx

Thank you Carillo, I understand. My card, with the curve at 1075mV / 2115MHz, is stable at +10% voltage, with the temperature limit maxed, the power limit raised and fans at 100%, on the 380W BIOS. I have run the Time Spy Extreme stress test about 20 times and it gives 98.5% (above the 97% stability the program asks for), and the Fire Strike Ultra stress test 20 times gives 99%. I would still have to overclock the memory. I do not know if it's OK like that, or if I should go above 1075mV and try 2125MHz even if the voltage goes up a bit more; is it not advisable to raise the voltage in exchange for more MHz?? Thanks in advance.


----------



## Section31

When installing, there are a lot of thermal pads to place. Be careful. I suspect one of my thermal pads was not in the right position when I put the block on. It's whining at 100% usage, but I just turn up my fans to drown out the noise. Not going to RMA the GPU for that, considering that I may just buy NVIDIA 7nm in 2020 (plus I don't game much at all), and putting the original cooler back on is a massive pain.


----------



## Esenel

pewpewlazer said:


> What games or benchmarks are you guys running where you can use 1.062-1.093v?


That was TimeSpy Graphics Test 2 in a loop for 4 rounds.


----------



## dangerSK

Glerox said:


> You're saying the Asus 1000W XOC works good on the MSI Lightning Z? One guy in the thread was stuck at 1350MHZ after flashing this bios on his Z.


Yeah it was my thread, its already resolved, go check it out


----------



## Glerox

dangerSK said:


> Yeah it was my thread, its already resolved, go check it out


Ok thanks, it turns out I had the same problem!


----------



## c0nsistent

Has anyone found a workaround, other than shunt modding, for the 112% PL cards such as the EVGA Blacks and so on?

The non-A variants, or whatever...


----------



## Koplari

When using the AB OC scanner, should PL go above 100%? The BIOS allows 122% and I have set it there. But during scans it peaks at 104%, not even going near 122%. When gaming it uses that 122% PL, peaking at 131%.


----------



## jura11

worms14 said:


> Hi everyone,
> I am very sorry for my English, I write with the help of google translate.
> I bought from my colleague nVidia RTX 2080 Ti Funders Edition
> I'm happy with it, but I would like more OC, I think about changing bios.
> However, I will do it for him, I would like to buy water cooling.
> Please advise which block is best to choose when it comes to ease of assembly and performance?
> 
> I looked at the ECWB website, but the compatibility list does not work, I do not know what to choose, maybe ECWB is not the best choice.
> I will be very grateful for the advice.
> 
> Greetings.
> 
> A few pics from my GPU


Hi there 

Personally I would go with the Watercool HEATKILLER without question; the Aquacomputer RTX 2080 Ti WB is another option, with an active backplate 

I have the EK Vector RTX 2080 Ti and I wouldn't touch this block again, new or used. I thought it would be a good-performing block and wanted to have all my GPU blocks from EK

Recently I used a Bykski GPU waterblock for the RTX 2080 Ti and it is on par with my EK Vector RTX 2080 Ti in the same loop: with the Bykski I have seen temperatures of 42-45°C in a loop with a single 360mm radiator and an 8086K at 5.2GHz, and with the EK Vector RTX 2080 Ti I have seen the same temperatures in the same loop

Personally I would go with the Heatkiller there

Hope this helps 

Thanks, Jura


----------



## jura11

ReFFrs said:


> Can you please post a screenshot how you configured your curve for 380W? 1) at cold stated and 2) at full load with constant temps would be much appreciated.
> 
> Also if you will open C:\Program Files (x86)\MSI Afterburner\Profiles
> 
> there will be file called something like VEN_10DE&DEV_1E07&*******.cfg
> 
> Curve settings stores there, so can you please post the code after "VFCurve=" from the profile number you are using?
> 
> It should look like:
> 
> [Profile2]
> Format=2
> CoreVoltageBoost=100
> PowerLimit=120
> ThermalLimit=
> CoreClkBoost=30000
> VFCurve=000001005******************
> MemClkBoost=205000
> FanMode=1
> FanSpeed=25



Hi there

As above, in any game or benchmark I have not seen my voltage drop below what I've set in the MSI Afterburner VF curve editor, and not once have I hit the power limit in any game or any rendering software. I can post my VF curve if that helps you.

What's strange: with 1.05V, GPU power in AC:O is around 254-258W, and with 1.09V I have seen ~305-310W max in AC:O with the same OC of 2085MHz and +1000MHz on VRAM. Temps are at most in the 38-42C range, and I'm probably using the same Galax 380W BIOS as you.

Here is a link to my Dropbox:

https://www.dropbox.com/s/jfl4tzqxk3h0e6d/MSI 380W BIOS CFG.rar?dl=0 

Hope this helps

Thanks, Jura


----------



## iamjanco

Not sure if this has been brought up before, but just noticed something very interesting on the *NVIDIA site*:









That's a far cry (no pun intended) from the $10,000 it was listed for a few months back.


----------



## LCRava

For the OC Strix 2080 Ti, what BIOS would you guys recommend with it?

Also, should I use the stock BIOS w/ a Shunt Mod or a different BIOS?

Thx for the help!


----------



## ReFFrs

jura11 said:


> Here is link on my Dropbox
> 
> https://www.dropbox.com/s/jfl4tzqxk3h0e6d/MSI 380W BIOS CFG.rar?dl=0
> 
> Hope this helps
> 
> Thanks,Jura


Thanks, and what profile number are you using for the best curve?


----------



## jura11

ReFFrs said:


> Thanks, and what profile number you are using for the best curve?


Hi there 

You can try all my profiles; I'm currently running 1.05V with 2085MHz and +1000MHz on VRAM

There are a few profiles which are similar but with different voltages, like 1.093V at 2115MHz (dropping to 2100MHz when temperatures go over 38-40°C, I think), or 2100MHz at 1.093V which drops to 2085MHz when temperatures are beyond 38-40°C, I think

I can make custom VF curves for you if you wish; you just need to tell me your preferred voltage, what frequency you run at that voltage, what VRAM OC you are running, etc.

Hope this helps 

Thanks, Jura


----------



## jura11

ReFFrs said:


> Thanks, and what profile number you are using for the best curve?


I will have a bit more time later on and will post the frequency, VRAM OC and voltage per profile. Strangely, I didn't save my 2085MHz OC at 1.05V, which I thought I had; I will redo the profiles and post an update 

Hope this helps 

Thanks, Jura


----------



## jura11

ReFFrs said:


> Thanks, and what profile number you are using for the best curve?


Here are my updated MSI Afterburner profiles for the Galax 380W BIOS:

https://www.dropbox.com/s/qt4mjhshs5ct5y3/MSI Afterburner 380W BIOS Profile.rar?dl=0

Here are the frequencies, voltages and VRAM OCs 

Profile 1: 2100MHz at 1.081v and 650MHz OC on VRAM
Profile 2: 2100MHz at 1.081v and 1000MHz OC on VRAM
Profile 3: 2115MHz at 1.093v and 1000MHz OC on VRAM (this one has been my daily driver and daily profile)
Profile 4: 2130MHz at 1.093v and 850MHz OC on VRAM (this one I run only for benchmarks and still temperatures must be in low 30's to be 100% stable)
Profile 5: 2085MHz at 1.050v and 1000MHz OC on VRAM

Personally I use Profile 5 most of the time 

Hope this helps

Thanks,Jura


----------



## Rob w

LCRava said:


> For the OC Strix 2080 Ti, what BIOS would you guys recommend with it?
> 
> Also, should I use the stock BIOS w/ a Shunt Mod or a different BIOS?
> 
> Thx for the help!


Hi, I have the 2080 Ti Founders Edition, which I tried several vBIOSes on and found it performed worse. From what I understand, the memory timings in the stock BIOS are set to maximise OC, while other vBIOSes have looser timings, so I went for the shunt mod, which gave me the power stability needed while keeping the stock vBIOS.
You don't really need both; if you can get a vBIOS that works well with that card, then go for it and forget shunting it.


----------



## ReFFrs

jura11 said:


> Profile 5: 2085MHz at 1.050v and 1000MHz OC on VRAM
> 
> Personally I use Profile 5 most of the time


Thank you. This is what I usually get with the XOC BIOS (2085MHz at 1.050V) with temps below 50C.

I'm currently on the XOC 420W power limit; maybe I will try your profiles later. But as I see it, there won't be a big difference, although the increased power limit really helps in some demanding games to keep the clock stable, especially at high resolutions, 4K+.


----------



## JustinThyme

LCRava said:


> For the OC Strix 2080 Ti, what BIOS would you guys recommend with it?
> 
> Also, should I use the stock BIOS w/ a Shunt Mod or a different BIOS?
> 
> Thx for the help!


I haven't shunt modded, but I have tried other BIOSes with a higher power limit. In the end I've found that the stock BIOS is fine. I get a solid 2100MHz and max out at 2150MHz, staying just under the stock power limit with no increase in standard vcore. Increase the power limit and vcore and you get the same, maybe a little more peak, right before a black screen.


----------



## jura11

ReFFrs said:


> Thank you. This is what I usually get with XOC bios (2085MHz at 1.050v) with temps below 50C
> 
> I'm currently on XOC 420W power limit, maybe will try your profiles later. But as I see there won't be a big difference, although increased power limit really helps in some demanding games to keep clock stable, especially at high resolutions 4k+


Hi there 

With GPU temperatures below 38-40°C, your OC would be much better than mine, without question, if you are getting 2085MHz below 50°C; in my case, if temperatures go beyond 40-42°C or a bit more, clocks start to drop to 2070MHz

I play at 3440x1440, and at that resolution the RTX sometimes struggles if I play at ultra settings etc., but this could be down to my CPU, which is a 5960X at 4.7GHz with 2133MHz RAM 

Some games like Assassin's Creed just have very poor utilisation, 70-80% at 3440x1440, etc.

You will see whether you really need to try or check my profiles

Do you have your RTX 2080 Ti under water? 

Hope this helps 

Thanks, Jura


----------



## bigjdubb

Through a weird twist of fate I find myself the owner of an EVGA 2080 Ti FTW3 Ultra. Are the FTW3s any good as far as 2080 Tis go? Can anything be done to the BIOS to get the most out of them?


----------



## skline00

bigjdubb said:


> Through a weird twist of fate I find myself being the owner of a EVGA 2080ti FTW3 Ultra. Are the FTW3's any good as far as 2080ti's go? Can anything be done to the BIOS to get the most out of them?


Didn't you also just get a Radeon VII???


----------



## bigjdubb

Yeah, I have barely even broken in the RVII. It's a strange story that isn't really fit for public consumption, but it was an offer that could not be refused. I will probably be selling the RVII, keeping my 1080ti, and using the 2080ti. I also need to return a Freesync monitor I just bought for the RVII.


----------



## ReFFrs

jura11 said:


> Hi there
> 
> With lower GPU temperature bellow 38-40°C yours OC I think would be much better than my without the question if you are getting 2085MHz below 50°C, in my case if temperatures are beyond 40-42°C or maybe bit more clocks start to drop to 2070MHz
> 
> You will see if you are really need to try or check my profiles
> 
> Are you have RTX 2080Ti under water?
> 
> Hope this helps
> 
> Thanks, Jura


I'm on an AIO, so temps reach 49-51C at max 4K load. The card can maintain 2085 MHz at these temps, but for clock stability an extended power limit is needed, which is achieved with the XOC BIOS at 420W. This BIOS doesn't support the VF curve at all, only manual offset OC. 

Have you tried the XOC BIOS yet? Clock stability may be better with it.


----------



## VPII

ReFFrs said:


> I'm on AIO, so temps reach 49-51C at max load 4K. The card can maintain 2085 MHz at these temps, but for clock stability an extended power limit is needed which is achieved with XOC bios at 420W. This bios doesn't support Curve at all, only manual offset OC.
> 
> Have you tried XOC bios yet? With it clock stability may be better.


If I may ask, which AIO cooler are you using? My reason for asking is that I sit at about 30-odd Celsius ambient, and the highest I've seen my core go is 46C, which I felt was okay but a bit high. In cooler ambient it would barely reach 40C. The AIO cooler I have on it is the Corsair H110 CW.


----------



## ReFFrs

VPII said:


> If I may ask, which AIO cooler are you using. My reason for asking is that I sit with about 30 odd celcius ambient and the highest I've seen my core go up to was 46c which I felt was okay but a bit bad. In cooler ambient it would barely reach 40c. The AIO cooler I have on it is the Corsair H110 CW.


An AIO Kraken X62, but the power limit is 420W and I've replaced the stock fans with Noctua NF-A14s, 1500 rpm max.


----------



## jura11

ReFFrs said:


> I'm on AIO, so temps reach 49-51C at max load 4K. The card can maintain 2085 MHz at these temps, but for clock stability an extended power limit is needed which is achieved with XOC bios at 420W. This bios doesn't support Curve at all, only manual offset OC.
> 
> Have you tried XOC bios yet? With it clock stability may be better.


Hi there 

Those are quite good temperatures on an AIO; what utilisation are you seeing in games? 

Sadly I haven't tried the XOC BIOS yet. My back is injured, and I had planned to swap my EK Vector RTX 2080 Ti for a Heatkiller RTX 2080 Ti, but the Heatkiller is out of stock everywhere. If everything goes as planned and my back is in better condition, I will try XOC with the Heatkiller; with the EK Vector RTX 2080 Ti block I just don't want to try this BIOS

But I'm definitely looking forward to testing this BIOS. With the XOC BIOS, are all DPs working, or is one of them disabled?

Hope this helps 

Thanks, Jura


----------



## ReFFrs

jura11 said:


> Hi there
> 
> That's quite good temperatures on AIO there, what utilisation are you seeing in games?
> 
> But for sure looking forward to test this BIOS, with XOC BIOS all DP are working or one of them is disabled?


GPU utilization is 99% in most games, with PL floating around 380-420W. It's easy to achieve by increasing resolution scaling, which improves visual quality without introducing any blur. But there are some Ubisoft titles (AC: Odyssey, GR Wildlands etc.) where the GPU maxes out at 97-98% and you cannot do anything about it; even at 5K resolution it won't fully utilize the GPU. Maybe the devs just wanted to prevent overheating of entry-level GPUs with weak cooling solutions, but limiting performance that way is a very bad idea anyway.

One DP port is working with XOC, and that was my default port. Haven't tried the others; I have only one display connected.


----------



## worms14

jura11 said:


> Hi there
> 
> Personally I would have go with Watercool HEATKILLER without the question, Aquacomputer RTX 2080Ti WB is another option with active backplate
> 
> Have EK Vector RTX 2080Ti and I wouldn't touch this block again or used, thought so will be good performing block and wanted to have all GPU blocks from EK
> 
> Recently used Bykski GPU waterblock for RTX 2080Ti and this block is on par with my EK Vector RTX 2080Ti in same loop, with Bykski temperatures I have seen 42-45°C in same loop with single 360mm radiator and 8086k with 5.2GHz and with EK Vector RTX 2080Ti temperatures in same loop I have seen same temperatures
> 
> Personally I would go with Heatkiller there
> 
> Hope this helps
> 
> Thanks, Jura



Thank you so much for the help.

As soon as I receive my cooling, I will share my impressions.

I am also wondering which thermal paste to go for; which is currently the best and worth using without any risk?

Thanks, worms


----------



## J7SC

iamjanco said:


> Not sure if this has been brought up before, but just noticed something very interesting on the *NVIDIA site*:
> 
> View attachment 258520
> 
> 
> That's a far cry (no pun intended) from the $10,000 it was listed for a few months back.


 
...nice find !:thumb: ...I was going to ask whether 'it plays Crysis', but that's getting old. Maybe NVIDIA realized that the Titan RTX and even the RTX 2080 Ti are making inroads in the semi-pro and pro video environment, and perhaps cannibalizing Quadro RTX 8000 sales. That's a BIG price drop !

...on a related note, is there such a thing as a 'RAM drive for VRAM'...what with 48 GB on the RTX Quadro card ? We used to use RAM drives on some machines, but since M.2 SSDs/NVMes there really isn't much need for them; still, I have never come across a VRAM RAM drive - have you ?

In other news, I think I'm getting used to the RGB overload on my current build...really only an issue at dusk or dawn / twilight


Spoiler


----------



## toncij

J7SC said:


> ...nice find !:thumb: ...I was going to ask whether 'it plays Crysis', but that's getting old. May be NVidia realized that the Titan RTX and even 2080 TI RTX are making inroads in the semi-pro and pro vid environment, and perhaps cannibalizing the Quadro RTX 8000 sales. That's a BIG price drop !
> 
> ...on a related note, is there such a thing as 'RAM Drive for VRAM'...what with 48 GB on the RTX Quadro card ? We used to use RAM drives on some machines, but since M.2 SSD/NVMEs, there really isn't that much need for it, but I have never come across a VRAM RAM Drive - have you ?
> 
> In other news, I think I'm getting used to the RGB overload on my current build...really only an issue at dusk or dawn / twilight
> 
> 
> Spoiler


It's a production stock issue, not cannibalization. You can't eat into the 48GB Quadro market.


----------



## J7SC

toncij said:


> It's a production stock issue, not eating into. You can't eat into Quadro 48GB market.


 
...Not sure what you mean by 'production stock issue' - high inventories ? That can in part be a result of folks heading for cheaper 'alternatives', including NVIDIA's own high-end RTX. For example, back in December the OCN news section carried an article re. a productivity SDK release for the Titan RTX (and RTX 2080 Ti) for content creation / 8K...the article raised some questions then about cannibalizing some Quadro sales. Now, about 3 months later, the Quadro RTX 8000 price drops by almost half...

_"..Last week, the graphics card manufacturer demonstrated the solution during a live event. 24+ frames per second capabilities were shown running on an NVIDIA Quadro RTX 6000 GPU to play back, edit, and color-grade RAW 8K footage on a system with a single-CPU HP Z4 Workstation, thus “eliminating the need for either a $6,750 RED ROCKET-X or a $20,000 dual-processor workstation.” NVIDIA said “this 8K performance is also available with NVIDIA TITAN RTX or GeForce RTX 2080 Ti GPUs, so editors can choose the right tools for their budget or shooting location.” This raises the question of the need of and expensive Quadro card if high-end consumer RTX cards can also manage 8K footage.." _Source:


Spoiler



https://fstoppers.com/news/nvidia-and-red-unveil-gpu-solution-8k-real-time-editing-320356


That said, it comes down to applications - *clearly, there are some where 48GB is a must, ditto for FP64*. But in our company's case where we're looking to upgrade to AMD Epyc (Rome) servers & workstations and perhaps RTX Quadro later in the year, the 2080 Tis and Threadripper presented a much cheaper 'test bed' now, given our specific application needs; otherwise, we would have purchased RTX Quadros (@ $10,000 a pop  ).


----------



## Esenel

380W Bios with Offset.
Voltage Values.

This topic got my attention now, after you mentioned your fixed voltages.

For this test I applied +130 core and +1000 memory, which results in 2100 MHz. Only when the temperature reaches 45°C does it drop to 2085 MHz.
Apart from that, it keeps its frequency at all times.

The really interesting part about it was the voltage applied.
I played roughly 2-3 hours Shadow of the Tomb Raider 1440p maxed out.

At the beginning the voltage was 1.093V for 2100 MHz, but after about an hour the voltage dropped to 1.050V. Afterwards it kept the frequency at 2085 MHz until the end of my gaming session, where it stayed steady at 1.042V.

What I did not understand: does the graphics card need more voltage for heavier workloads at some point?
Somehow I do not understand this behaviour :-D


----------



## xaxx

Problems with the card. On Friday the 8th I got the Sea Hawk X, and I have run tests with Superposition 4K and game benchmarks. Until yesterday it gave high scores both at stock and with OC, both with its own BIOS and with other BIOSes I have tried. The problem is that today it doesn't come close to those scores or FPS with any BIOS. At stock, with no OC, it gave 12358 in Superposition and 85 FPS in-game, and today it gives 11700 in Superposition and 80 FPS. It happens with all the BIOSes; performance has dropped and it doesn't go back up. Has the card been damaged, or what could have happened?


----------



## xaxx

I keep testing and nothing; the five BIOSes I tried now give a much lower score. What do you think? Could the card have been damaged, or is it normal for a new card's performance to drop after a while?


----------



## J7SC

xaxx said:


> I keep testing and nothing; the five BIOSes I tried now give a much lower score. What do you think? Could the card have been damaged, or is it normal for a new card's performance to drop after a while?


 
...hard to say from afar, but if your card is not stuck at 1350MHz and still boosts up, I would not think so. I would go back to the stock Bios and try to hit the same conditions (ie. temps, driver versions, other apps running) and see what you get in the same tests. RTX 2080 Tis are very sensitive to temps re. performance gains / losses.


----------



## fleps

xaxx said:


> I keep testing and nothing; the five BIOSes I tried now give a much lower score. What do you think? Could the card have been damaged, or is it normal for a new card's performance to drop after a while?


Besides what J7SC mentioned above, did you by any chance limit FPS in RTSS or Nvidia Inspector and forget to remove it before benchmarking?

This happened to me once; I was so mad at myself when I realized xD


----------



## zack_orner

fleps said:


> Besides what J7SC mentioned above, did you by any chance limit FPS in RTSS or Nvidia Inspector and forget to remove it before benchmarking?
> 
> 
> 
> This happened to me once; I was so mad at myself when I realized xD


Or G-Sync turned on.

2700x x470-f gaming 1 tb 970 evo gskiils 3200cl14 msi 2080 ti gaming x trio


----------



## xaxx

Nothing has changed on the computer: Inspector is not present, nothing new has been installed, and G-Sync is not activated. Until Sunday I got the highest scores; yesterday they could not be reached. MSI support says the score I get now is normal, but how is it possible that the first days it was much higher and now it doesn't reach that level? The stock score in Superposition 4K is now 11700, but the first days it was 12360; either the score now is very low for stock, or it was very high those first days. The overclock score of 13300 in Superposition 4K came from raising the GPU +130 MHz and the memory +800 without modifying the curve. MSI says the 12700 with OC and the 11700 at stock are correct, but before it gave much more. I don't know whether to exchange the card; I can't find an explanation.


----------



## ReFFrs

xaxx said:


> The stock score in Superposition 4K is now 11700, but the first days it was 12360; either the score now is very low for stock, or it was very high those first days.


What is your GPU temperature at heavy load? Thermal grease may have degraded in a few days if it was improperly applied at the factory. But this can be fixed if you reapply it manually.

Second question, what is your power limit? Have you set it to max before testing? You should definitely do it for higher scores.


----------



## xaxx

The temperature is 55°C maximum on the card. As I said, I noted the results with the card at stock, without modifying speed or power limit or anything, and the score was 12360; now, at the same temperature and with the same settings, it's 11700. Before, with the power limit at maximum and +130 GPU / +800 memory, it gave 13300; now the same OC gives 12700. Likewise, games at 4K now give about 4-5 fps less. It reads 55°C, but I don't know if it is the memory that is getting hot, since the card only reports the GPU temperature. What do you think, is something wrong? Should I return it?


----------



## ReFFrs

xaxx said:


> What do you think, is something wrong? Should I return it?


Before returning it, check your PCIe link speed in GPU-Z by clicking the "?" button next to the Bus Interface field and then starting the render test to apply load to the card. It should read PCIe x16 3.0 @ x16 3.0.

If it's not, then there is a problem with the PCIe slot. Probably some other device, like an NVMe SSD, is occupying PCIe lanes.

You should also tell us your GPU and memory clocks as shown in the MSI Afterburner OSD during heavy load. Maybe the problem is in the clocks, e.g. the GPU staying at 1350 MHz and not boosting.
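The checklist above (link width under load, clocks not stuck at base) can be scripted instead of read off the OSD. A minimal sketch, assuming `nvidia-smi` is on the PATH and the driver exposes the standard `pcie.link.*` and `clocks.*` query fields; the parsing and the two checks are the point:

```python
import csv
import io
import subprocess

# Fields queried from nvidia-smi --query-gpu (one CSV row per GPU).
QUERY = "pcie.link.gen.current,pcie.link.width.current,clocks.gr,clocks.mem"

def parse_smi_csv(text):
    """Parse 'csv,noheader,nounits' output into one dict per GPU."""
    keys = ["pcie_gen", "pcie_width", "core_mhz", "mem_mhz"]
    rows = []
    for row in csv.reader(io.StringIO(text.strip())):
        rows.append(dict(zip(keys, (int(v.strip()) for v in row))))
    return rows

def check_gpu(rows):
    """Flag the two failure modes discussed above: a PCIe link running
    below Gen3 x16 under load, or a core stuck at the 1350 MHz base clock."""
    warnings = []
    for i, r in enumerate(rows):
        if r["pcie_gen"] < 3 or r["pcie_width"] < 16:
            warnings.append(
                f"GPU{i}: link Gen{r['pcie_gen']} x{r['pcie_width']}, expected Gen3 x16")
        if r["core_mhz"] <= 1350:
            warnings.append(f"GPU{i}: core at {r['core_mhz']} MHz, not boosting")
    return warnings

def read_live():
    """Query the real card; run this while a render test applies load."""
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True).stdout
    return check_gpu(parse_smi_csv(out))
```

Call `read_live()` while GPU-Z's render test (or any benchmark) is looping; an empty list means neither problem was seen at that moment.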


----------



## xaxx

Exactly, it reads PCIe x16 3.0 @ x16 3.0. Under load the clock speed goes up, at stock to roughly 1920 MHz and with OC to 2025 MHz, but it does not perform as well as before, either in benchmarks or in game FPS.


----------



## xaxx

I have tested it on a clean computer with a freshly installed operating system and with different versions of the NVIDIA drivers, and nothing; it stays the same. I don't know whether to return it and get another one or change models. The Gigabyte RTX 2080 Ti Waterforce, from what I have seen, does not support much OC, practically none, and is somewhat slower. So I don't know what to do, or whether to buy the one I originally had in mind, the MSI X Trio. What a bad card I ended up with.


----------



## ReFFrs

xaxx said:


> The Gigabyte RTX 2080 Ti Waterforce, from what I have seen, does not support much OC, practically none, and is somewhat slower. So I don't know what to do.


Hehe  Here is my Gigabyte RTX 2080 Ti Waterforce with the Asus XOC BIOS. Your OC will mostly depend on chip quality, known as ASIC quality, and it's always a lottery with any vendor. In the same batch of cards there may be 2200 MHz chips, in terms of max stable OC, as well as 1980 MHz chips. If you want a guarantee that a card overclocks well, look at something like the HOF 2080 Ti or EVGA KINGPIN. Highly overpriced versions, but with binned chips.


----------



## xaxx

What a strange thing. For the past hour I have been testing the card on a computer with a clean operating system. I tried the latest NVIDIA drivers and the previous two, and nothing. Then suddenly it again gave me 12360 at stock and 13300 with the OC. Nothing has changed; the only thing I did was disconnect the One Connect, the Samsung TV box the HDMI goes through. Do you think that device caused the performance drop?


----------



## xaxx

ReFFrs, I guess that score was done with a frequency curve. My score comes from just putting +130 on the GPU and +800 on the memory on the main Afterburner screen; I don't use a curve. I guess that with a curve my card would reach a higher score. Could you tell me what score your card gets with a fixed offset on the main screen that is stable in Time Spy, so I can compare, in case I have to exchange mine or buy the same card?


----------



## dangerSK

xaxx said:


> ReFFrs, I guess that score was done with a frequency curve. My score comes from just putting +130 on the GPU and +800 on the memory on the main Afterburner screen; I don't use a curve. I guess that with a curve my card would reach a higher score. Could you tell me what score your card gets with a fixed offset on the main screen that is stable in Time Spy, so I can compare, in case I have to exchange mine or buy the same card?


As ReFFrs told you, it heavily depends on the bin; for example, my Lightning does fine at around 2145 MHz (custom water). You can use my Time Spy score for comparison.


----------



## xaxx

I see your Lightning ran very high, but I am not going to customize the curve. For now I don't have a frequency curve; I simply raise the MHz on the Afterburner screen. That's why I'd be interested to know what score ReFFrs gets in Superposition 4K Optimized raising the MHz on the main Afterburner screen, stable in Time Spy. Those are the tests I do. If it's higher and I have to exchange my card, I would swap it for his.


----------



## ReFFrs

xaxx said:


> ReFFrs, I guess that score was done with a frequency curve; my score comes from the main screen of Afterburner


XOC bios doesn't have a frequency curve, so my OC is home screen only. 

Voltage +100
PL at 42% (420W)
Clock +130
Memory +1170


----------



## dangerSK

ReFFrs said:


> XOC bios doesn't have a frequency curve, so my OC is home screen only.
> 
> Voltage +100
> PL at 42% (420W)
> Clock +130
> Memory +1170


You should increase the PL to 500W; my card was pulling around 480W in Fire Strike.


----------



## ReFFrs

dangerSK said:


> my card was pulling around 480w in fire strike.


Mine peaked at 480W in TimeSpy Extreme even with power limit set to 420, then dropped clocks of course. I'm still testing and not moving PL to max.


----------



## dangerSK

ReFFrs said:


> Mine peaked at 480W in TimeSpy Extreme even with power limit set to 420, then dropped clocks of course. I'm still testing and not moving PL to max.


On my card I can't really change it, so I don't bother  what could happen... the card won't pull more than 500W with that low voltage


----------



## xaxx

ReFFrs, is this your card or the other WB model? Everywhere they say the card in the picture can't go more than +80 MHz on the GPU and didn't reach +900 on the memory.


----------



## dangerSK

xaxx said:


> ReFFrs, is this your card or the other WB model? Everywhere they say the card in the picture can't go more than +80 MHz on the GPU and didn't reach +900 on the memory.


ENGLISH PLEASE! This is not a private Spanish conversation; if you want help, at least make an effort to write in basic English.


----------



## xaxx

I think the slowdown of the card was caused by the Samsung TV device called One Connect. It was turned off and on, and since then the card runs at the same speed as before; I don't know if it was a coincidence. Could that device be responsible for the decrease in graphics speed? I have no explanation for it.


----------



## xaxx

Sorry man, I'm using a translator and I copy and paste the paragraphs; I accidentally pasted the one that was still in Spanish. ReFFrs, is this your card or the other WB model? Everywhere I've seen, they say the card in the picture can't go more than +80 MHz on the GPU and didn't reach +900 on the memory.


----------



## ReFFrs

xaxx said:


> Sorry man, I'm using a translator and I copy and paste the paragraphs; I accidentally pasted the one that was still in Spanish. ReFFrs, is this your card or the other WB model? Everywhere I've seen, they say the card in the picture can't go more than +80 MHz on the GPU and didn't reach +900 on the memory.


Yes, that's my card. I can't understand what you are trying to say in the next sentence.


----------



## xaxx

I'm sorry for my English. I wanted to know if you have done anything to your card other than flashing the XOC BIOS, such as modifying the cooling. It seems to give very good results. I will try that XOC BIOS with mine, but with the BIOS I have now, and with the Galax 380W, I can't raise the memory more than +800 MHz, and even if I increase the PL it doesn't give me a better score.


----------



## xaxx

And I still have the problem that yesterday the performance was lower than normal; I ran the same tests, and today the speed went back up on its own. That's why I keep coming back to the one thing that was different: restarting the One Connect box of the Samsung TV.


----------



## J7SC

xaxx said:


> And I still have the problem that yesterday the performance was lower than normal; I ran the same tests, and today the speed went back up on its own. That's why I keep coming back to the one thing that was different: restarting the One Connect box of the Samsung TV.


 
...glad that you have confirmed that the card is not 'broken' :thumb: Keep in mind that temps play a huge role with the RTX 2080 Ti - my 14412 Superposition run I posted earlier had *34 C as max* temps, because the custom GPU loop (the CPU loop is separate) has a total of 1080 / 60 of rad cooling and was built to keep the temps of two 2080 Tis under control. BTW, my cards are not the AIO model but the WB ones, per the spoiler, and the '+150' or so quoted in posts really depends on your Bios 'stock boost' setting to begin with.


Spoiler














Also, unless you run a custom Bios with a very high or no Power Limit, think of the PL as a budget which you spend on GPU core and VRAM (and maybe RGB and fans, but those are minor). The lower the voltage, the lower the temps and the more wattage headroom you have before you bump into limits; I have seen higher clocks result in lower scores on my setup. Higher voltage = higher temps and reduced headroom in the power budget; thus XOC is usually done with a chiller, DICE or LN2.
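The 'budget' framing above follows from first-order CMOS dynamic power, P ≈ C·V²·f: power scales with the square of voltage but only linearly with clock, so shedding voltage bins buys disproportionate wattage headroom. A rough sketch with illustrative numbers (the 380 W operating point is an assumption for the example, not a measurement):

```python
def dynamic_power(p_ref, v_ref, f_ref, v, f):
    """First-order CMOS dynamic power scaling: P ~ C * V^2 * f.
    Scales a reference point (p_ref watts at v_ref volts, f_ref MHz)
    to a new voltage/frequency point; leakage is ignored."""
    return p_ref * (v / v_ref) ** 2 * (f / f_ref)

# Illustrative reference point: ~380 W at 1.093 V / 2100 MHz.
p_hi = 380.0
p_lo = dynamic_power(p_hi, 1.093, 2100, 1.043, 2055)
saved_pct = 100.0 * (1 - p_lo / p_hi)
# Two voltage bins and three clock bins down cuts the estimate by
# roughly 11%, the kind of power-budget headroom described above.
```

The quadratic voltage term is why a small undervolt frees far more of the power budget than the matching clock drop costs in performance.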


----------



## xaxx

So is the XOC BIOS not suitable for my card, or is it? Sorry, but the translator is not very good. I tried my card with the Galax 380W BIOS and it did not work well at all, yet with the factory BIOS it worked better, and that one is 330W; that is something I do not understand. So is it better not to touch the maximum voltage, or not? It is something that is not clear to me. I only change the MHz on the main screen; I do not do the voltage curve, as I do not know how to configure it properly.


----------



## J7SC

xaxx said:


> So is the XOC BIOS not suitable for my card, or is it? Sorry, but the translator is not very good. I tried my card with the Galax 380W BIOS and it did not work well at all, yet with the factory BIOS it worked better, and that one is 330W; that is something I do not understand. So is it better not to touch the maximum voltage, or not? It is something that is not clear to me. I only change the MHz on the main screen; I do not do the voltage curve, as I do not know how to configure it properly.


 
I can't really comment on the XOC Bios as I have not installed it / I'm still running the stock Aorus Bios (consistently just under 380w, according to GPUz). I know other folks with the same card(s) as mine have successfully installed the XOC Bios and increased their PL, and what they posted seemed very good, but they were *'careful :bruce:'* with it. All I'm saying is to try to get your GPU temp as low as you can, because for a given increase in temps, clocks will drop speed-steps with most Bios.


----------



## Esenel

ReFFrs said:


> Hehe  Here is my Gigabyte RTX 2080 Ti Waterforce with the Asus XOC BIOS. Your OC will mostly depend on chip quality, known as ASIC quality, and it's always a lottery with any vendor. In the same batch of cards there may be 2200 MHz chips, in terms of max stable OC, as well as 1980 MHz chips. If you want a guarantee that a card overclocks well, look at something like the HOF 2080 Ti or EVGA KINGPIN. Highly overpriced versions, but with binned chips.


In my humble opinion, the Asus XOC BIOS and the Galax 380W are on par.
This result is with the 380W bios on a founders card with Micron.

+145 Core and +1000 MEM, resulting in:
below 40°C (first seconds of the test) = 2130 MHz
above 40°C (during the rest of the run) = 2115 MHz

You get roughly the same results.
Whether one BIOS delivers 1-2% more or less is hard to tell, as we are operating at the edge of the Turing chip's capabilities.


----------



## kot0005

did anyone try the 1000W bios on gaming x trio ?


----------



## dangerSK

kot0005 said:


> did anyone try the 1000W bios on gaming x trio ?


I tried it on the Lightning and it worked fine. You can flash it on the Trio without issues.


----------



## xaxx

Hi, could you tell me whether the Gigabyte 2080 Ti Waterforce cools the entire card with the water block, or whether it is like the Sea Hawk X, which only cools the GPU and part of the memory with water and the rest with a fan? Thank you


----------



## Renegade5399

Both the Waterforce Extreme and Waterforce Extreme AIO are full cover blocks. 

The Sea Hawk X is what is called a hybrid cooler, for future reference.

Translation to save you time:


Tanto Waterforce Extreme como Waterforce Extreme AIO son bloques de cobertura total.

El Sea Hawk X es lo que se llama un enfriador híbrido para futuras referencias.


----------



## MrFox

zhrooms said:


> The Dancop XOC BIOS for Strix https://www.techpowerup.com/vgabios/208990/208990
> 
> BIOS is confirmed by me to be working on a Palit RTX 2080 Ti GamingPro (Reference PCB), but only partly, it has big issues.
> 
> For example the MSI Afterburner frequency/voltage editor is disabled, it runs the memory at full clock speed in idle, thermal limit is disabled, fan curve is not optimal for my card so it runs high RPM idle, it disabled two of the three DisplayPort outputs (did not try HDMI).
> 
> So, the main issue here is that NVIDIA Boost is still very much in play, and it messes everything up, my first run completely stock it started out at 1.043v 1950MHz in UNIGINE Superposition 1080p Extreme, that didn't look promising at all, I then applied +100 voltage in MSI Afterburner and ran it again, now the voltage jumped from 1.043v to 1.068v, great, now to add some core clock, +150 made it start at 1.081v 2100MHz, but in an instant it drops down to 1.050v 2085MHz, only 2c temperature change.
> 
> So I documented the downclocking that occurred right before my eyes,
> 
> 1093mV - 2115MHz - 52c
> 1075mV - 2100MHz - 55c
> 1050mV - 2085MHz - 58c
> 1043mV - 2070MHz - 63c
> 1043mV - 2055MHz - 67c
> Crash
> 
> That's some big steps down in voltage, so then it was time to open the Window, make it freezing cold.
> 
> After running the benchmark for a few seconds it was running 1050mV 2130MHz at 40c. A few seconds later it clocked down to 1043mV 2115MHz 44c and crashed. That's expected as 2115MHz is too fast for only 1043mV.
> 
> I ran again, and again, and what I saw was very strange, it started at 1050mV 2130MHz then suddenly spiked to 1093mV 2130MHz, at 45c on the GPU.
> 
> The power limit does allow me to run the full 1.093v, but it can't sustain it, even at 40c. It instantly drops to 1.050v, a possible fix for this would be to use the MSI Afterburner Frequency/Voltage editor and lock the card to 3D Clocks at 1.093V 2100MHz+, but that is not possible since the BIOS disabled the function for whatever reason (possibly bug).
> 
> After running the benchmark 20 times it does not seem to be able to hold more than 1.050v over time, which is similar to the Galax 380W BIOS.
> 
> Starting the benchmark at 15c on the GPU, +170 made it run 1075-1087mV 2160MHz for about 10 seconds, stays under 35c and then crashes, it would work if it could only stay at 1093mV, but it's just not possible for whatever reason (I believe it's boost related). If only the frequency/voltage editor worked..
> 
> Lowered power limit just to check, +55% (550W) voltage and clocks the same, +30% resulted in 0.962v 2010MHz so power limit seems to be working as intended.
> 
> Conclusion is that without the MSI Afterburner Frequency/Voltage Editor the BIOS seems pretty much useless, for anyone using a Strix it's very useful as you regain all outputs. I also cannot say what would happen at sub zero temperature, maybe it would hold the full 1.093v.
> 
> *Edit:* The curve does not seem to be patchable, so this isn't the BIOS we have been waiting for.


Thank you for posting this.

This is extremely sad. I was hoping this vBIOS would give us up to (or more than) 1.200V like the 1080 Ti Strix XOC firmware did. This wuss-boy 1.093V limit on the 2080 Ti really sucks. I wish I had saved the money and bought a second 1080 Ti to mod, like the one my 2080 Ti replaced. RTX was a frivolous waste of money in 20/20 hindsight, thanks to the way the NVIDIOTS have castrated it so severely. 

I am still going to test it to see how it works with my shunt modded 2080 Ti, but if it doesn't hold the voltage under load it will definitely be as worthless as the Galax HOF firmware. My stock XC2, FTW3 and Aorus Extreme vBIOS all work much better than the Galax HOF firmware.
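The downclocking documented in the quoted post looks like GPU Boost shedding one 15 MHz bin at a time as temperature rises. A toy lookup over the quoted (temperature, clock) pairs; the behaviour past 67°C is an illustrative extrapolation, not anything NVIDIA documents:

```python
# (max temp in °C, sustained clock in MHz) pairs as quoted above;
# clocks step down in 15 MHz bins as the GPU warms.
OBSERVED_STEPS = [
    (52, 2115),
    (55, 2100),
    (58, 2085),
    (63, 2070),
    (67, 2055),
]

def boosted_clock(temp_c, steps=OBSERVED_STEPS):
    """Return the quoted clock for the first temperature threshold
    not yet exceeded; past the observed range, keep dropping one
    15 MHz bin per ~5 C (an assumption, not measured behaviour)."""
    for threshold, clock in steps:
        if temp_c <= threshold:
            return clock
    last_t, last_c = steps[-1]
    extra_bins = (temp_c - last_t + 4) // 5  # round up to whole bins
    return last_c - 15 * extra_bins
```

For example, `boosted_clock(56)` returns 2085, matching the 58°C row; the model makes it easy to see why a few degrees of warming costs a bin and why everyone in the thread chases low water temps.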


----------



## dangerSK

MrFox said:


> Thank you for posting this.
> 
> This is extremely sad. I was hoping this vBIOS would give us up to (or more) 1.200V like the 1080 Ti Strix XOC firmware did. This wuss-boy 1.093V limit on the 2080 Ti really sucks. I wish I would have saved the money on a second 1080 Ti to mod like the one my 2080 Ti replaced. RTX was a frivolous waste of money in 20/20 hindsight thanks to the way the NVIDIOTS have castrated it so severely.
> 
> I am still going to test it to see how it works with my shunt modded 2080 Ti, but if it doesn't hold the voltage under load it will definitely be as worthless as the Galax HOF firmware. My stock XC2, FTW3 and Aorus Extreme vBIOS all work much better than the Galax HOF firmware.


Because the 380W Galax bios isn't the "true" HOF bios; the true one (better said, the OC Lab one) has a 2500W power limit (effectively unlimited...) and higher core voltage.


----------



## bigjdubb

I installed my new (to me) EVGA FTW3 2080ti last night, it's huge. I maxed the power target and was able to play for a few hours with +100 gpu and +750 on the memory. I can't complain about the cooler, even with my open chassis the fans weren't noticeable up to about 80% (the room has a high noise floor so your results may vary) and the card stayed in the low 60's after hours of playing. I don't think it would have been a worthwhile upgrade at full price but I got one hell of a deal on mine and it feels like a proper generational 1080ti upgrade for the $700 I paid for it.


----------



## Renegade5399

bigjdubb said:


> I installed my new (to me) EVGA FTW3 2080ti last night, it's huge. I maxed the power target and was able to play for a few hours with +100 gpu and +750 on the memory. I can't complain about the cooler, even with my open chassis the fans weren't noticeable up to about 80% (the room has a high noise floor so your results may vary) and the card stayed in the low 60's after hours of playing. I don't think it would have been a worthwhile upgrade at full price but I got one hell of a deal on mine and it feels like a proper generational 1080ti upgrade for the $700 I paid for it.


That FTW3 has one beefy cooler on it. Dang.


----------



## bigjdubb

It's huge. I have an MK-26 with 2 140mm fans on my 1080ti so it is physically bigger than the FTW3 but the FTW3 feels quite a bit more substantial. The FTW3 is like a brick where the MK-26 is more like a loaf of bread. I had to cut a dowel for support between the PSU and the vertical bracket on my Core P1 because of the sag.


----------



## J7SC

bigjdubb said:


> It's huge. I have an MK-26 with 2 140mm fans on my 1080ti so it is physically bigger than the FTW3 but the FTW3 feels quite a bit more substantial. The FTW3 is like a brick where the MK-26 is more like a loaf of bread. I had to cut a dowel for support between the PSU and the vertical bracket on my Core P1 because of the sag.


 
Those 2080 Tis are the longest, heaviest GPUs I have ever had (including dual-GPU cards with copper blocks). My 2080 Tis are factory water-blocked, and I'm not sure whether they are heavier or lighter than the ones with huge air-cooler assemblies. Fortunately, between the metal-framed PCIe mobo slots, dual mounting screws, and the NVLink bridge there was some decent support...but just to be safe, I added that bracket (blue arrows) for additional support.


----------



## bigjdubb

My case is Thermaltake, so the steel is real flimsy. The bracket you add for vertical mounting isn't supported very well on one side, so I just cut a piece of wood dowel to prop it up. I didn't take a photo of it, but this should give you an idea.


----------



## ReFFrs

Has anyone heard of bios *90.02.17.00.B1* for Aorus 2080 Ti Waterforce AIO?

This dude has received the card with this bios and it overclocks like crazy.

2180 MHz at 54C

There is no such bios on TechPowerUp and my card came with 90.02.17.00.*B0*

He also has a different Device ID, 10DE-1E07-1458 37*B8*, while mine is 37*B9*, also with Samsung VRAM.

Has anyone tried that F2 bios from gigabyte, is it B0 or B1? https://www.gigabyte.com/Graphics-Card/GV-N208TAORUSX-W-11GC#support-dl-bios

https://youtu.be/Te1mVeoFLzE?t=40


----------



## J7SC

bigjdubb said:


> My case is Thermaltake so the steel is real flimsy. The bracket you add for vertical mounting isn't supported very well on one side so I just cut a piece of wood dowel to prop it up. I didn;t take a photo of it but this should give you an idea.


 
...I'm also on Thermaltake (Core P5) ...commandeered the 'vertical GPU' bracket and did some Dremel work to help in its support 'function'.


----------



## bigjdubb

How many 2080ti owners here are using Ryzen processors?



J7SC said:


> ...I'm also on Thermaltake (Core P5) ...commandeered the 'vertical GPU' bracket and did some Dremel work to help in its support 'function'.


My P5 was the reason I got the P1. Say what you want about Thermaltake but the Core P cases are great (in use), easy to work on like a test bench but not so "test bench" looking with cable management etc. I moved the PSU around to the back on my P5 and it really cleaned up the look on the front. Even with two 480mm rads mounted to it, it was by far the easiest case I've ever had for loop maintenance and cleaning.


----------



## J7SC

bigjdubb said:


> How many 2080ti owners here are using Ryzen processors?
> 
> My P5 was the reason I got the P1. Say what you want about Thermaltake but the Core P cases are great (in use), easy to work on like a test bench but not so "test bench" looking with cable management etc. I moved the PSU around to the back on my P5 and it really cleaned up the look on the front. Even with two 480mm rads mounted to it, it was by far the easiest case I've ever had for loop maintenance and cleaning.


 
...I'm on a Ryzen TR 2950X 

really like the P5 / unique looking and allowing for unusual builds, great airflow etc


----------



## bigjdubb

I noticed lower than expected GPU utilization in BFV (80-85%) and I was wondering if other Ryzen (specifically 8/16 Ryzen) users noticed the same thing. My 1080ti was usually between 95-100% so I'm guessing that my 2080ti is outrunning my CPU.


----------



## J7SC

bigjdubb said:


> I noticed lower than expected GPU utilization in BFV (80-85%) and I was wondering if other Ryzen (specifically 8/16 Ryzen) users noticed the same thing. My 1080ti was usually between 95-100% so I'm guessing that my 2080ti is outrunning my CPU.


 
...in what I've seen (ie mostly benches, productivity rendering), I'm at 97%-98% GPU utilization on the TR


----------



## knightriot

bigjdubb said:


> How many 2080ti owners here are using Ryzen processors?
> 
> 
> 
> My P5 was the reason I got the P1. Say what you want about Thermaltake but the Core P cases are great (in use), easy to work on like a test bench but not so "test bench" looking with cable management etc. I moved the PSU around to the back on my P5 and it really cleaned up the look on the front. Even with two 480mm rads mounted to it, it was by far the easiest case I've ever had for loop maintenance and cleaning.


2950x on 2080ti + heatkiller


----------



## zack_orner

bigjdubb said:


> How many 2080ti owners here are using Ryzen processors?
> 
> 
> 
> 
> 
> 
> 
> My P5 was the reason I got the P1. Say what you want about Thermaltake but the Core P cases are great (in use), easy to work on like a test bench but not so "test bench" looking with cable management etc. I moved the PSU around to the back on my P5 and it really cleaned up the look on the front. Even with two 480mm rads mounted to it, it was by far the easiest case I've ever had for loop maintenance and cleaning.


Ryzen 7 2700x

2700x x470-f gaming 1 tb 970 evo gskiils 3200cl14 msi 2080 ti gaming x trio


----------



## LanceBoyle

Just try it out. If the bios mismatches, it will at least work in a fixed power state. I tried 3+ BIOSes on my card; none of them worked 100%, so I decided to flash the original bios again.


----------



## Nizzen

J7SC said:


> ...in what I've seen (ie mostly benches, productivity rendering), I'm at 97%-98% GPU utilization on the TR


In BF V 1440p?


----------



## Xeq54

bigjdubb said:


> I noticed lower than expected GPU utilization in BFV (80-85%) and I was wondering if other Ryzen (specifically 8/16 Ryzen) users noticed the same thing. My 1080ti was usually between 95-100% so I'm guessing that my 2080ti is outrunning my CPU.


BFV is pretty CPU intensive. I am CPU bound at around 130-140 FPS with my system (5GHz 7820X) running at 1440p with everything on max, and had GPU utilization around 80-85%. Since I cannot push the CPU any higher, I increased the render resolution of the game to 120% for better visual quality. Utilization is around 95-99% now with the same framerate.
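The trick above works because a render-resolution slider scales both axes, so GPU pixel load grows with the square of the percentage while the CPU-side work per frame stays the same. A small sketch of the arithmetic (1440p and the 120% figure are taken from the post):

```python
def render_pixels(width, height, scale_pct):
    """Pixels shaded per frame at a given render-resolution setting.
    The slider multiplies each axis, so load grows quadratically."""
    s = scale_pct / 100.0
    return round(width * s) * round(height * s)

native = render_pixels(2560, 1440, 100)  # renders 2560x1440
scaled = render_pixels(2560, 1440, 120)  # renders 3072x1728 internally
ratio = scaled / native  # 1.44x the pixel work per frame
```

A 20% bump on the slider is 44% more pixel work, which is why it soaks up the idle GPU headroom without touching the CPU-bound framerate cap.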


----------



## xaxx

Today and tomorrow I will test to see which of the two cards is better in cooling, stock performance, and overclocking.


----------



## Jpmboy

for anyone using 2 vector blocks:


https://www.ekwb.com/blog/can-i-use...u_water_block_performance&utm_term=2019-03-14


----------



## ReFFrs

xaxx said:


> Today and tomorrow I will test to see which of the two cards is better in cooling, stock performance, and overclocking.


Please upload your bios for Aorus Waterforce if it's version 90.02.17.00.*B1*

You can see the bios version in GPU-Z. By clicking the small arrow button you can save the bios and also upload it to TechPowerUp.


----------



## dangerSK

ReFFrs said:


> Please upload your bios for Aorus Waterforce if it's version 90.02.17.00.*B1*
> 
> You can see the bios version in GPU-Z. By clicking on the small arrow button you can save the bios and also upload it TechPowerUp.


What does that bios have that's better compared to others?


----------



## ReFFrs

dangerSK said:


> What does that bios have that's better compared to others?


People get 2180 MHz at 54C with this bios, so just wanted to try. https://www.overclock.net/forum/69-...tx-2080-ti-owner-s-club-232.html#post27890366


----------



## dangerSK

ReFFrs said:


> People get 2180 MHz at 54C with this bios, so just wanted to try. https://www.overclock.net/forum/69-...tx-2080-ti-owner-s-club-232.html#post27890366


EDIT: My bad, I read it wrong  Yeah, that's okay I suppose?


----------



## Renegade5399

dangerSK said:


> EDIT: My bad, I read it wrong  Yeah, that's okay I suppose? I can try that with the OC Lab bios just to check.
> What's the max core voltage with that bios? I can go up to 1.19V


Care to share (or have you already)?


----------



## fleps

Guys, is there a tool to read the information / settings from a BIOS ROM file, for *Turing*?
Like we can do with Maxwell by just dragging the file into MaxwellBiosTweaker.

Or are we out of luck and need to check the BIOS page on TPU every time?

Thanks!


----------



## dangerSK

Renegade5399 said:


> Care to share (or have you already)?


I can't, sorry.
But I can tell you it's A LOT better than the Asus XOC; you have the voltage curve and all the features.


----------



## Renegade5399

fleps said:


> Guys, is there a tool to read the information / settings from a BIOS ROM file, for *Turing*?
> Like we can do with Maxwell by just dragging the file into MaxwellBiosTweaker.
> 
> Or are we out of luck and need to check the BIOS page on TPU every time?
> 
> Thanks!


The BIOS on Turing is encrypted and the encryption has not been broken. There is currently no way to edit it like we could on Maxwell.


----------



## dangerSK

fleps said:


> Guys, is there a tool to read the information / settings from a BIOS ROM file, for *Turing*?
> Like we can do with Maxwell by just dragging the file into MaxwellBiosTweaker.
> 
> Or are we out of luck and need to check the BIOS page on TPU every time?
> 
> Thanks!


Nah, you can't edit Turing BIOSes, or even Pascal ones. The best bet is to get a proper XOC BIOS, like the Asus XOC.


----------



## fleps

Renegade5399 said:


> The BIOS on Turing is encrypted and the encryption has not been broken. There is currently no way to edit it like we could on Maxwell.


Ah. I thought the encryption would only block modifications, not also block the ability to just read the default settings like TDP, boost clock, etc.

So the only way to read them is via GPU-Z after the BIOS is already flashed on a card.

Oh well.

Edit:


dangerSK said:


> Nah, you can't edit Turing BIOSes, or even Pascal ones. The best bet is to get a proper XOC BIOS, like the Asus XOC.


That I know; my question was more about being able to READ the settings, with a BIOS reader/inspector showing what GPU-Z shows after the BIOS is flashed =)


----------



## dangerSK

fleps said:


> Ah. I thought the encryption would only block modifications, not also block the ability to just read the default settings like TDP, boost clock, etc.
> 
> So the only way to read them is via GPU-Z after the BIOS is already flashed on a card.
> 
> Oh well.


Yeah, though it doesn't matter, because all the info about each BIOS is shared in the TechPowerUp BIOS database. (Only the "secret" XOC, LN2, and OC Lab BIOSes aren't  )


----------



## bigjdubb

Xeq54 said:


> BFV is pretty cpu intensive. I am cpu bound at around 130-140 FPS with my system (5ghz 7820x) running at 1440p with everything on max. I had GPU utilization around 80-85%. Since I cannot push the CPU any higher, I increased the render resolution of the game to 120% for increased visual quality. The utilization is around 95-99 now with the same framerate.


I did the exact same thing last night. It seems odd that my 2700x is stuck in the same 130 fps rut as your 7820x, especially with yours being clocked to 5.0 GHz. It appears the 9900k can get the 2080 Ti to 100% usage in BFV at 1440p without resolution scaling, but I'm not sure if it has to be OC'ed to 5.0 to make that happen.


----------



## xaxx

I don't have any problem uploading the BIOS here or to TechPowerUp, but if you look at the GPU-Z images I sent ReFFrs, the one on the left corresponds to BIOS 90.02.17.00.B1, while in the one on the right, with the OC, the number is removed. Why is that? Isn't the BIOS the same?


----------



## dangerSK

xaxx said:


> I don't have any problem uploading the BIOS here or to TechPowerUp, but if you look at the GPU-Z images I sent ReFFrs, the one on the left corresponds to BIOS 90.02.17.00.B1, while in the one on the right, with the OC, the number is removed. Why is that? Isn't the BIOS the same?


That's weird. I don't know about that B1 BIOS, but I assume the second picture is blurred because he used a different BIOS (maybe under NDA??) and didn't want to cause problems, so he blurred it on purpose. Just a guess though.


----------



## xaxx

What is NDA


----------



## bigjdubb

xaxx said:


> What is NDA


Non Disclosure Agreement?


----------



## J7SC

dangerSK said:


> Nah, you can't edit Turing BIOSes, or even Pascal ones. The best bet is to get a proper XOC BIOS, like the Asus XOC.


 
...yeah, they really locked that one down. Also, just going by BIOS version numbers may not be that fruitful. My two 2080 Tis are just 2 or 3 digits apart in serial number, yet they have different BIOS version numbers (though the same date), and they seem to behave pretty much the same. That said, GamersNexus just had a video on the Gigabyte factory visit, looking at the line that does mobos and GPUs: a fully automated machine doing 10+ boards simultaneously. Some of the differing numbers may simply refer to which section of that machine placed which GPU or BIOS chip, as they draw from different spools and trays. As long as it's all locked up, there's not much we can do; we're tapping in the dark as to which is what.

Just for giggles, I compared the two BIOSes in WinMerge and switched to hex view, with a small excerpt below > the colour highlights identify differences, and there are a 'fair amount' of them throughout.


----------



## bigjdubb

Users are doing some sort of registry editing to get access to higher power limits with the Radeon VII, it seems kinda like a software shunt mod. I don't exactly know how it works but I wonder if it is possible to do something like that with Nvidia.


----------



## dangerSK

bigjdubb said:


> Users are doing some sort of registry editing to get access to higher power limits with the Radeon VII, it seems kinda like a software shunt mod. I don't exactly know how it works but I wonder if it is possible to do something like that with Nvidia.


No, it's locked down by the BIOS; you can't bypass that.


----------



## ReFFrs

My guess now is that the Aorus Xtreme BIOSes B0 and B1 may indeed contain the same settings; a different BIOS is just activated depending on which DP/HDMI ports the user has chosen to connect displays to. The card has additional ports and two BIOSes by default, to switch some ports on/off when others are connected.


----------



## dangerSK

ReFFrs said:


> I guess now that Aorus Xtreme bioses B0 and B1 may indeed contain the same settings, just a different bios is activated depending on what DP/HDMI ports a user has chosen to attach displays. Aorus has additional ports and 2 bioses by default to switch some ports on/off if others are connected.


I had the GB 980Ti G1 Gaming; yeah, that's how dual BIOS works on Gigabyte cards. The BIOS version (or which BIOS is activated) depends on which ports you have plugged in.


----------



## bigjdubb

dangerSK said:


> No, it's locked down by the BIOS; you can't bypass that.


I guess the Radeon VII's "locked" BIOS is different from Nvidia's "locked" BIOS. The registry edit for the Radeon VII doesn't change anything in the BIOS; it just reports the power level as being lower than what it really is, so your card thinks you're pulling 250 watts when you're really using 600 watts.




ReFFrs said:


> I guess now that Aorus Xtreme bioses B0 and B1 may indeed contain the same settings, just a different bios is activated depending on what DP/HDMI ports a user has chosen to connect displays. The card has additional ports and 2 bioses by default to switch some ports on/off if others are connected.





dangerSK said:


> I had the GB 980Ti G1 Gaming; yeah, that's how dual BIOS works on Gigabyte cards. The BIOS version (or which BIOS is activated) depends on which ports you have plugged in.


I used to use that feature to switch the BIOS on my 970 G1s; just swap the monitor cable. I miss the days when you could modify a BIOS.


----------



## dangerSK

bigjdubb said:


> I guess the Radeon VII's "locked" BIOS is different from Nvidia's "locked" BIOS. The registry edit for the Radeon VII doesn't change anything in the BIOS; it just reports the power level as being lower than what it really is, so your card thinks you're pulling 250 watts when you're really using 600 watts.


Nvidia calculates power draw with shunt resistors, and that reading is reported to the monitoring IC. It can't be changed with Windows registry edits because it happens at the IC level. (That's how I believe it works; if I'm wrong then I'm sorry.)
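As a rough illustration of that shunt-based measurement, here's a minimal sketch. The 5 mOhm shunt value, 12 V rail, and voltage drop are illustrative assumptions, not values from any actual board:

```python
def reported_power(v_rail, v_drop_mv, r_shunt_mohm):
    """Power the monitoring IC infers from the voltage drop across a shunt."""
    current_a = v_drop_mv / r_shunt_mohm  # I = V/R; mV / mOhm gives amps
    return v_rail * current_a

# Stock: assumed 5 mOhm shunt on a 12 V rail, 104 mV drop -> roughly 250 W
stock_w = reported_power(12.0, 104.0, 5.0)

# A hardware shunt mod (paralleling an equal resistor) halves the effective
# resistance; the IC still assumes 5 mOhm, so the same true current now
# produces half the drop and the reported power is halved (~125 W).
modded_w = reported_power(12.0, 104.0 / 2, 5.0)
```

This is why the Radeon VII registry trick has no NVIDIA equivalent here: the limit is enforced against the shunt reading in hardware, not against a software-reported number.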


----------



## zack_orner

Delete


----------



## bigjdubb

I have no idea how it works so I'm just throwing stuff at the wall to see what sticks. Luckily I haven't hit the power limit on my card yet so it's mostly irrelevant for me.


----------



## dangerSK

bigjdubb said:


> I have no idea how it works so I'm just throwing stuff at the wall to see what sticks. Luckily I haven't hit the power limit on my card yet so it's mostly irrelevant for me.


if u havent hit power limit it means u arent pushing hard enough


----------



## bigjdubb

That is 100% correct. I've only spent a few hours playing with overclocks so far; I just got the card on Tuesday. Mostly I have been playing games with it to get a feel for the performance difference from my 1080 Ti and RVII. I will give it a bit of a thrashing this weekend to see if I can do better than 2100 MHz in game.


----------



## J7SC

dangerSK said:


> I had the GB 980Ti G1 Gaming; yeah, that's how dual BIOS works on Gigabyte cards. The BIOS version (or which BIOS is activated) depends on which ports you have plugged in.


 
Hmm :headscrat so with that being the case, in a two-card Gigabyte-equipped system, with card #1 having HDMI connected and card #2 having no monitor connection, just NVLink, how do I flash the BIOS from card #1 to card #2 correctly? First plug the HDMI into the 2nd card, save the BIOS of card #1 (now without HDMI for the moment) via GPU-Z, then switch the HDMI cable back, and then flash the #1 BIOS onto card #2?


----------



## dangerSK

J7SC said:


> Hmm :headscrat so with that being the case, in a two-card Gigabyte-equipped system, with card #1 having HDMI connected and card #2 having no monitor connection, just NVLink, how do I flash the BIOS from card #1 to card #2 correctly? First plug the HDMI into the 2nd card, save the BIOS of card #1 (now without HDMI for the moment) via GPU-Z, then switch the HDMI cable back, and then flash the #1 BIOS onto card #2?


I believe so. It's always a pain to flash SLI setups; it's best to flash one card at a time.


----------



## J7SC

dangerSK said:


> I believe so. It's always a pain to flash SLI setups; it's best to flash one card at a time.


 
Tx, yeah, I vaguely recall flashing tri-SLI on mobos w/ 2 PEX chips on board...that was _whaaat_ :sad-smile ? In this case, I just want to flash the stock BIOS from card 1 onto card 2 to see if there are any differences whatsoever...I just wasn't sure about saving a BIOS from a Gigabyte card with an active video port connection onto one without.


----------



## Hemorz

Can you flash non-binned cards?

I have the Galax 2080ti SG one click edition

Sent from my SM-G965F using Tapatalk


----------



## SuperMumrik

Xeq54 said:


> BFV is pretty cpu intensive. I am cpu bound at around 130-140 FPS with my system (5ghz 7820x) running at 1440p with everything on max. I had GPU utilization around 80-85%. Since I cannot push the CPU any higher, I increased the render resolution of the game to 120% for increased visual quality. The utilization is around 95-99 now with the same framerate.



Your CPU is underperforming quite a lot.

I have the same one (7820x) @ 5 GHz, and running 1440p ultra my GPU load stays pegged at max; your fps are way too low.
Even on custom settings I am GPU bound for the most part.


What memory and cache speeds are you running? Turning off HT could help too.


----------



## m4fox90

Got a nice payday and an EVGA Black Edition is on the way  I'm pretty excited to join the club!


----------



## Xeq54

SuperMumrik said:


> Your CPU is underperforming quite a lot.
> 
> I have the same one (7820x) @ 5 GHz, and running 1440p ultra my GPU load stays pegged at max; your fps are way too low.
> Even on custom settings I am GPU bound for the most part.
> 
> 
> What memory and cache speeds are you running? Turning off HT could help too.


Well, that is strange. Maybe it's because I am running BFV on DX12? I did not really try DX11.

I have HT off already, the CPU cache is at 3200 MHz, and the RAM is at 3200 MHz CL15.

Just out of curiosity, what is your Cinebench score with HT off at 5 GHz?


----------



## cyper.bg

Guys, which of all the 2080 Ti cards are most likely to have the best chips for overclocking?
As in, which models from which manufacturers are likely to have the best-binned chips?

Cheers


----------



## VPII

bigjdubb said:


> How many 2080ti owners here are using Ryzen processors?
> 
> 
> 
> My P5 was the reason I got the P1. Say what you want about Thermaltake but the Core P cases are great (in use), easy to work on like a test bench but not so "test bench" looking with cable management etc. I moved the PSU around to the back on my P5 and it really cleaned up the look on the front. Even with two 480mm rads mounted to it, it was by far the easiest case I've ever had for loop maintenance and cleaning.


Sorry for the late reply... I'm running my Palit RTX 2080 Ti GamingPro OC with a Ryzen 2700X at a 4.28 GHz manual OC. My memory is sitting at 3470 MHz CL 14-14-14-28 1T. I'm pretty happy, as the results I get are fairly good.


----------



## J7SC

@dangerSK


Want to cool a Lightning Z w/ hot Bios ? Then...


Spoiler













(c) NewScientist


----------



## xaxx

Hello again. The Gigabyte Waterforce has two BIOSes: one HDMI port activates B0 with one set of outputs, and the other two HDMI ports activate B1 with another set of outputs. The two BIOSes have identical speeds and OC. Now I wanted to ask you: I'm doing stress tests with 3DMark. In Time Spy 4K I can do 20 repetitions without problems, but Fire Strike 4K gets interrupted. Which one can I trust?


----------



## dangerSK

cyper.bg said:


> Guys, which of all the 2080 Ti cards are most likely to have the best chips for overclocking?
> As in, which models from which manufacturers are likely to have the best-binned chips?
> 
> Cheers


Kingpin and Hall of fame OC Lab cards


----------



## dangerSK

J7SC said:


> @dangerSK
> 
> 
> Want to cool a Lightning Z w/ hot Bios ? Then...
> 
> 
> Spoiler
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> (c) NewScientist


sorry i cant see it can u upload it to somewhere ?


----------



## SuperMumrik

Xeq54 said:


> Well, that is strange. Maybe it's because I am running BFV on DX12? I did not really try DX11.
> 
> I have HT off already, the CPU cache is at 3200 MHz, and the RAM is at 3200 MHz CL15.
> 
> Just out of curiosity, what is your Cinebench score with HT off at 5 GHz?



Ahh... Try DX11 with future frame rendering on.


At 5 GHz with HT off it scores 4095.


----------



## fleps

dangerSK said:


> sorry i cant see it can u upload it to somewhere ?


I also can never see @J7SC's spoilers/attachments =P

I even tried disabling my adblock, no change.


----------



## J7SC

dangerSK said:


> sorry i cant see it can u upload it to somewhere ?





fleps said:


> I also can never see @J7SC's spoilers/attachments =P
> 
> I even tried disabling my adblock, no change.


...weird ! I always check if it is visible (and it is...on this machine) Can you see it now, per attachment ?


----------



## fleps

J7SC said:


> ...weird ! I always check if it is visible (and it is...on this machine) Can you see it now, per attachment ?


Yeah it works with attachment =)


----------



## dangerSK

J7SC said:


> ...weird ! I always check if it is visible (and it is...on this machine) Can you see it now, per attachment ?


hahah  I have enough liquid nitrogen, just need to prepare for LN2 session with Lightning  Cpu is ready


----------



## Hemorz

Hemorz said:


> Can you flash non-binned cards?
> 
> I have the Galax 2080ti SG one click edition
> 
> Sent from my SM-G965F using Tapatalk


It would take 2 seconds to answer my question.......

Sent from my SM-G965F using Tapatalk


----------



## dangerSK

Hemorz said:


> It would take 2 seconds to answer my question.......
> 
> Sent from my SM-G965F using Tapatalk


It would take ONE SECOND to read first page of this thread: "There are two GPU variants: TU102-300-K1-A1 (1E04) and TU102-300A-K1-A1 (1E07). Factory overclocking is prohibited on the former, it has a boost of 1545 MHz and a maximum power limit of 280W. The latter chip has varying factory overclocks and power limits up to 450 W, flashing a 300A based BIOS onto a 300 GPU and vice versa is not possible. Manual overclocking is possible on both."
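The two variants quoted above differ by PCI device ID, which is a handy way to remember why cross-flashing fails. A small sketch, where the device IDs come from the quote and the helper itself is purely illustrative, not a real flashing tool:

```python
# Device IDs per the thread's first page; descriptions paraphrased from it.
TU102_VARIANTS = {
    0x1E04: ("TU102-300", "non-A: no factory OC, 280 W max power limit"),
    0x1E07: ("TU102-300A", "A-chip: factory OC allowed, power limits up to 450 W"),
}

def can_flash(card_dev_id, bios_dev_id):
    """A BIOS only flashes onto a card whose device ID matches the BIOS's."""
    return card_dev_id == bios_dev_id and card_dev_id in TU102_VARIANTS

print(can_flash(0x1E04, 0x1E07))  # False: 300A BIOS onto a non-A card is rejected
print(can_flash(0x1E04, 0x1E04))  # True: same variant, flashing is possible
```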


----------



## Hemorz

dangerSK said:


> It would take ONE SECOND to read first page of this thread: "There are two GPU variants: TU102-300-K1-A1 (1E04) and TU102-300A-K1-A1 (1E07). Factory overclocking is prohibited on the former, it has a boost of 1545 MHz and a maximum power limit of 280W. The latter chip has varying factory overclocks and power limits up to 450 W, flashing a 300A based BIOS onto a 300 GPU and vice versa is not possible. Manual overclocking is possible on both."


And I read that, but my question still stands.
Can I flash my non-binned card... or to clarify, can it be flashed with a decent BIOS to manually overclock with a waterblock?

Sent from my SM-G965F using Tapatalk


----------



## xaxx

Any suggestions for checking the stability of the OC? In Time Spy 4K I get 20 stable repeats, but Fire Strike 4K gets interrupted as soon as it starts.


----------



## J7SC

fleps said:


> Yeah it works with attachment =)





dangerSK said:


> hahah  I have enough liquid nitrogen, just need to prepare for LN2 session with Lightning  Cpu is ready


 
...ok, glad it worked re. attachment visibility / though the idea is kind of ruined w/o Spoiler surprise  Anyhow, looking forward to some pics / results on your LN2 XOC w/Lightning Z :thumb:

Also, I see you're using a nice EVGA Classified (for XOC setup ? or old pic ?)...anyhow, those are my fav go-to-cards for, well, everything. Of the five machines in my home/office, one is that Threadripper w/2x 2080 TI Aorus WB, pretty much everything else runs SLI Classies (700 and 900 series), with some KPEs 'for backup'  In the attachment, the top machine has a crazy 4960X 'ES' and is for old 1080P DX11 titles, the bottom one is a daily driver for productivity etc.


----------



## dangerSK

J7SC said:


> ...ok, glad it worked re. attachment visibility / though the idea is kind of ruined w/o Spoiler surprise  Anyhow, looking forward to some pics / results on your LN2 XOC w/Lightning Z :thumb:
> 
> Also, I see you're using a nice EVGA Classified (for XOC setup ? or old pic ?)...anyhow, those are my fav go-to-cards for, well, everything. Of the five machines in my home/office, one is that Threadripper w/2x 2080 TI Aorus WB, pretty much everything else runs SLI Classies (700 and 900 series), with some KPEs 'for backup'  In the attachment, the top machine has a crazy 4960X 'ES' and is for old 1080P DX11 titles, the bottom one is a daily driver for productivity etc.


Nice build there! There will be some pics after the session; I will probably post them here. That 980Ti Classified was my friend's GPU and he didn't allow me to run it on LN2  so only scores on air, at least something: https://hwbot.org/submission/4086179_ 
I'm more of a HOF/Lightning guy, but that Classified was a nice card; maybe I will get one later.


----------



## ReFFrs

xaxx said:


> Any suggestions for checking the stability of the OC? In Time Spy 4K I get 20 stable repeats, but Fire Strike 4K gets interrupted as soon as it starts.


What are your clock and voltage at that point? A Fire Strike Ultra 4K crash with no crash in Time Spy Extreme usually means you should lower your OC by 15 MHz.


----------



## J7SC

dangerSK said:


> Nice build there! There will be some pics after the session; I will probably post them here. That 980Ti Classified was my friend's GPU and he didn't allow me to run it on LN2  so only scores on air, at least something: https://hwbot.org/submission/4086179_
> I'm more of a HOF/Lightning guy, but that Classified was a nice card; maybe I will get one later.


 
...HOF's (even non-OCL) would be my first choice for chilled / XOC but are so hard/impossible to get here, but 2080 TI KPE could be nice (if they_ ever_ become available :ninja: )


----------



## roccale

Hi everyone. Is there a BIOS for the 2080 Ti Asus Strix OC with a PL greater than 325 W that works well with the card?
Thanks in advance.


----------



## pewpewlazer

Hemorz said:


> And I read that, but my question still stands.
> Can I flash my non-binned card... or to clarify, can it be flashed with a decent BIOS to manually overclock with a waterblock?
> 
> Sent from my SM-G965F using Tapatalk


How does your question still stand?

To RE-reiterate:



> *flashing a 300A based BIOS onto a 300 GPU and vice versa is not possible.*


If you have a "300 GPU" aka NON-A CHIP aka NON BINNED 2080 Ti, *YOU CANNOT FLASH THE BIOS* with any of the higher power limit ones being discussed in this thread.

You can manually overclock your card with a waterblock using the stock BIOS. There's nothing stopping you from doing that.


----------



## Hemorz

pewpewlazer said:


> How does your question still stand?
> 
> 
> 
> To RE-reiterate:
> 
> 
> 
> 
> 
> 
> 
> If you have a "300 GPU" aka NON-A CHIP aka NON BINNED 2080 Ti, *YOU CANNOT FLASH THE BIOS* with any of the higher power limit ones being discussed in this thread.
> 
> 
> 
> You can manually overclock your card with a waterblock using the stock BIOS. There's nothing stopping you from doing that.


Have previous generations of cards allowed newer BIOSes to be flashed? Or have they brought out a stock-flash power upgrade?
What's the limitation, and what can I do to bypass it?

Sent from my SM-G965F using Tapatalk


----------



## pewpewlazer

Hemorz said:


> Have previous generations of cards allowed newer BIOSes to be flashed? Or have they brought out a stock-flash power upgrade?


What?



Hemorz said:


> What's the limitation, and what can I do to bypass it?


It's a hardware limitation. You cannot bypass it. Period. Read the thread.


----------



## xaxx

ReFFrs, every time I run the test, Fire Strike 4K gets cut off and says the user cancelled the test, without me touching anything. I have nothing running in the background; I removed RivaTuner and disabled the antivirus... I don't know what it could be. My voltages are normal, and so is the average clock speed.


----------



## J7SC

xaxx said:


> ReFFrs, every time I run the test, Fire Strike 4K gets cut off and says the user cancelled the test, without me touching anything. I have nothing running in the background; I removed RivaTuner and disabled the antivirus... I don't know what it could be. My voltages are normal, and so is the average clock speed.


 
...don't mean to answer for ReFFrs, but as far as I remember, Fire Strike 4K is quite sensitive to VRAM speed (more so than Time Spy/Extreme). Why not lower the VRAM clock by a significant amount just for one run to see if the test completes?


----------



## ReFFrs

xaxx said:


> ReFFrs, every time I run the test, Fire Strike 4K gets cut off and says the user cancelled the test, without me touching anything. I have nothing running in the background; I removed RivaTuner and disabled the antivirus... I don't know what it could be. My voltages are normal, and so is the average clock speed.


When the test is interrupted, you should check Windows Event Viewer -> Windows Logs -> System for errors like "Display driver nvlddmkm stopped responding and has successfully recovered".

If you see such errors there, it means there was a crash due to OC instability.


----------



## xaxx

Hello ReFFrs and J7SC. Thanks for your help; thanks to it I have solved the problem. To explain: on this Gigabyte Waterforce card I could only add about 80 MHz to the GPU and 800 MHz to the memory, which made me wonder, because other lower-quality cards could take more. The strange thing was that in Superposition, Time Spy 4K and Unigine Heaven it was fine, without problems. I looked into what ReFFrs and J7SC told me, since I believed some program was stopping Fire Strike 4K (it happened 20 times), because it said the user had stopped the test instead of reporting an error. I looked at GPU-Z and noticed it was cut off because the GPU reached a peak of 2150 MHz with just that 80 MHz increase. Now it is stable in Fire Strike with +70 MHz and a peak of 2130 MHz, with memory at 16120 MHz. Thanks again.


----------



## ReFFrs

xaxx said:


> Now it is stable in Fire Strike with +70 MHz and a peak of 2130 MHz, with memory at 16120 MHz. Thanks again.


Very good OC indeed.


----------



## nyk20z3

Any expected release date on the 2080 Ti Matrix?


----------



## knightriot

nyk20z3 said:


> Any expected release date on the 2080 Ti Matrix?


Maybe nobody cares about this joke


----------



## dangerSK

nyk20z3 said:


> Any expected release date on the 2080 Ti Matrix?


You mean the Strix with the crap AIO cooling?


----------



## nyk20z3

dangerSK said:


> You mean the Strix with the crap AIO cooling?


Some of us are collectors here; I collect Lightning and Matrix cards :thumb:


----------



## roccale

roccale said:


> Hi everyone. Is there a BIOS for the 2080 Ti Asus Strix OC with a PL greater than 325 W that works well with the card?
> Thanks in advance.


Up


----------



## ReFFrs

roccale said:


> Hi everyone. Is there a BIOS for the 2080 Ti Asus Strix OC with a PL greater than 325 W that works well with the card?
> Thanks in advance.


That BIOS is called XOC 1000W, and it was originally created for the Asus Strix.


----------



## roccale

ReFFrs said:


> That BIOS is called XOC 1000W, and it was originally created for the Asus Strix.


Ah OK, thanks. I saw it on the first page, but from what I understood, the readme doesn't recommend using it; it doesn't add improvements. Did I misunderstand?


----------



## ReFFrs

roccale said:


> Ah OK, thanks. I saw it on the first page, but from what I understood, the readme doesn't recommend using it; it doesn't add improvements. Did I misunderstand?


In my opinion it's the best BIOS so far for watercooled systems (an AIO will be fine). With other BIOSes where the curve is available, you can reach maybe 15-30 MHz better clocks by tuning the curve, but all that advantage will be gone in any demanding game or benchmark due to constantly bumping into the power limit; you will lose 30-100 MHz from your OC as a result. But with the XOC BIOS you won't lose anything if the power limit is set around 420-480 W.


----------



## xaxx

I keep asking questions in the forum so I can keep learning. Will a water-cooled card, such as the Gigabyte Waterforce, give the same performance mounted vertically as horizontally, or less? I know air-cooled cards can run hotter and perform worse when mounted vertically, but I don't know whether the same happens to a card with a water block.


----------



## xaxx

ReFFrs, I tried flashing the XOC BIOS first on the Sea Hawk X and now on the Gigabyte Waterforce, and the same thing happens to me on both cards: it detects the HDMI output, which is the one I use, as 1080p, and I cannot change the resolution to anything higher. Does the same happen to you? How did you solve it?


----------



## ReFFrs

xaxx said:


> ReFFrs, I tried flashing the XOC BIOS first on the Sea Hawk X and now on the Gigabyte Waterforce, and the same thing happens to me on both cards: it detects the HDMI output, which is the one I use, as 1080p, and I cannot change the resolution to anything higher. Does the same happen to you? How did you solve it?


On Aorus I'm using DP near A1 mark and all is fine: https://www.overclock.net/forum/69-...tx-2080-ti-owner-s-club-234.html#post27891714


----------



## dante`afk

So I got my Heatkiller block today and somehow got worse temps than with my previous Bitspower block. Do you think it doesn't sit right? I also hard-modded the shunts, but the power draw does not go over 280 W.

Before, at 26 C ambient, I had about 38-40 C on GPU load; now, at 26 C ambient, I have 42-45 C on the GPU. But I also have a lot of air bubbles in the block?



Looks normal... more TIM maybe?


----------



## VETDRMS

The XOC BIOS works pretty well on my reference card, though voltage is a bit finicky; it has stabilized at 1.075 V. I'd guess this is pretty good for a 4.7 GHz 6700K; the graphics score looks decent. It will do 2220 MHz at 1.093 V, but it gave up on holding that voltage for some reason. 

https://www.3dmark.com/3dm/34605678?

2205 / 7625

Temps around 20C; pulling 675 watts from wall in graphics test 2.

Port Royal (#51 on leaderboard..neat)

https://www.3dmark.com/3dm/34606325?

2205 / 7925


----------



## xaxx

The XOC BIOSes did not work for me on the Sea Hawk X, and on the Aorus I don't see much improvement at the moment. This is my Time Spy and Time Spy Extreme score with the Gigabyte stock BIOS; I think it's very similar to yours with the XOC BIOS, ReFFrs, and a bit lower than VETDRMS's.


----------



## J7SC

xaxx said:


> The XOC BIOSes did not work for me on the Sea Hawk X, and on the Aorus I don't see much improvement at the moment. This is my Time Spy and Time Spy Extreme score with the Gigabyte stock BIOS; I think it's very similar to yours with the XOC BIOS, ReFFrs, and a bit lower than VETDRMS's.


 
Pretty good results you have, actually  ! As always, temps play a role, as well as the BIOS PL. In your screenshot you were at 39.8 C or so max... while that is very good, at around 38 C there's a speed-step on the RTX 2080 Ti that bumps the card's clock down by 15 MHz or so. In one of the posts above, the author mentioned 20 C, though it is not clear whether that was the ambient temp or the max GPU temp... anyway, the lower the temp, the better, not only in addition to a higher-PL BIOS, but especially with it.


----------



## J7SC

dante`afk said:


> So I got my Heatkiller block today and somehow got worse temps than with my previous Bitspower block. Do you think it doesn't sit right? I also hard-modded the shunts, but the power draw does not go over 280 W.
> 
> Before, at 26 C ambient, I had about 38-40 C on GPU load; now, at 26 C ambient, I have 42-45 C on the GPU. But I also have a lot of air bubbles in the block?
> 
> 
> 
> Looks normal... more TIM maybe?


 
...definitely get the bubbles out (tilting usually works). Also - and it is a bit hard to tell from the pic, so grain of salt - but GPU TIM trace looks a bit uneven


----------



## JustinThyme

I've come to find, in my case, that regardless of the power limit the highest clocks are 2150-2200 MHz, without coming close to the stock power limit. I tried a few BIOSes and just went back to stock.
Something else I've found is that the highest clock doesn't equal the best performance or the highest bench. Not in a single instance for me.

Here are my dialed-in results; the best bench frequency varies from one bench to another. This is all staying under the stock power limit of the Strix 2080 Ti O11G.

https://www.3dmark.com/spy/6182114

https://www.3dmark.com/spy/5859180

https://www.3dmark.com/pr/46520


----------



## JustinThyme

dante`afk said:


> So I got my Heatkiller block today and somehow got worse temps than with my previous Bitspower block. Do you think it doesn't sit right? I also hard-modded the shunts, but the power draw does not go over 280 W.
> 
> Before, at 26 C ambient, I had about 38-40 C on GPU load; now, at 26 C ambient, I have 42-45 C on the GPU. But I also have a lot of air bubbles in the block?
> 
> 
> 
> Looks normal... more TIM maybe?


That's got to be a change in ambient. 45 °C loaded is about what's to be expected. I would doubt the accuracy of a 38-40 °C loaded temp on one of these cards without either running your machine outside the window in winter or using a chiller. I'm getting a 15 °C delta over liquid temp loaded, so with liquid at 28 °C I'm about 29 °C idle and 43 °C loaded. As the liquid creeps up, the GPU temp of course follows. My liquid never passes 30 °C, so I top out at 45 °C on the GPUs.


TIM in the photo is about right. If it's spilling over the edge of the die, you have plenty.

Air bubbles don't help, and neither does reverse flow. EK stated that reverse flow doesn't matter, but in the same breath said that a short, inconclusive run on their latest blocks showed a 2 °C difference. I've got a pet peeve about plumbing blocks up as designed, inlet to inlet and outlet to outlet, regardless of aesthetics or ease of plumbing.

What I'm not seeing is a good impression on the thermal pads from the MOSFETs.
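The rule of thumb in that post (GPU core tracks coolant temperature plus a roughly fixed delta-T through the block) is easy to turn into a sanity check for a loop. A minimal sketch; the 15 °C loaded and ~1 °C idle deltas are the figures observed above, not universal constants:

```python
def expected_gpu_temp_c(liquid_c, loaded=True,
                        delta_load_c=15.0, delta_idle_c=1.0):
    """Predict GPU core temp from coolant temp using a fixed block delta-T."""
    return liquid_c + (delta_load_c if loaded else delta_idle_c)
```

With 28 °C coolant this predicts 43 °C loaded, matching the post; a chiller holding coolant at 13 °C would likewise predict loaded temps in the high 20s.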


----------



## VPII

I find it somewhat strange when I see some of the temps posted here with liquid cooling. I'm no expert, so forgive me for asking if this isn't really relevant. I fitted an NZXT Kraken G12 to my GPU with a Corsair H110 as the actual cooler. I know my VRM and VRAM aren't covered, but I'm pretty confident, as the max reading I got from the VRM and memory was around 55 °C using an infrared thermometer. I think the fact that this GPU has Samsung memory is partly the reason for the low temps. I'm in Cape Town, South Africa, gradually moving toward autumn now, but in the middle of summer with 30+ °C ambient the highest load temp I saw on the GPU was 46 °C; at present it's 42 °C running the FS Ultra stress test with a 2145 MHz core and +1150 MHz on memory.

My case is an open one, so airflow is not an issue. I prefer a fully open case, which the Lian Li PC-T16A makes practical. I'm not a fan of lights and glimmer all around; give me something that works, which I can take to its limits without an issue.

https://www.3dmark.com/fsst/1063309

My reason for running FS Ultra is that it is a hell of a lot more demanding on the GPU than Time Spy, even Extreme: I can pass both normal bench runs at 2160 and even 2175 MHz core, but FS Ultra is 2145 only.


----------



## roccale

So guys, can I confidently flash the XOC 1000 W BIOS on my Strix 2080 Ti OC without it suffering from port problems and the like?
Will I only get a higher power limit? My intention was to flash it over the quiet BIOS.
Thanks.


----------



## dante`afk

JustinThyme said:


> That's got to be a change in ambient. 45 °C loaded is about what's to be expected. I would doubt the accuracy of a 38-40 °C loaded temp on one of these cards without either running your machine outside the window in winter or using a chiller. I'm getting a 15 °C delta over liquid temp loaded, so with liquid at 28 °C I'm about 29 °C idle and 43 °C loaded. As the liquid creeps up, the GPU temp of course follows. My liquid never passes 30 °C, so I top out at 45 °C on the GPUs.
> 
> 
> TIM in the photo is about right. If it's spilling over the edge of the die, you have plenty.
> 
> Air bubbles don't help, and neither does reverse flow. EK stated that reverse flow doesn't matter, but in the same breath said that a short, inconclusive run on their latest blocks showed a 2 °C difference. I've got a pet peeve about plumbing blocks up as designed, inlet to inlet and outlet to outlet, regardless of aesthetics or ease of plumbing.
> 
> What I'm not seeing is a good impression on the thermal pads from the MOSFETs.


No, ambient is the same. I disassembled the block again and spread the TIM on the block and GPU evenly; temps are now equal to the Bitspower. A bit disappointed, or rather impressed by how good the Bitspower is. GPU is 30 °C idle and 38-39 °C under load; ambient is 26 °C. But yeah, the mounting pressure from the block is not as good as on my Bitspower one.


----------



## NewType88

VPII said:


> I find it somewhat strange when I see some of the temps posted here with liquid cooling. I'm no expert, so forgive me for asking if this isn't really relevant. I fitted an NZXT Kraken G12 to my GPU with a Corsair H110 as the actual cooler. I know my VRM and VRAM aren't covered, but I'm pretty confident, as the max reading I got from the VRM and memory was around 55 °C using an infrared thermometer. I think the fact that this GPU has Samsung memory is partly the reason for the low temps. I'm in Cape Town, South Africa, gradually moving toward autumn now, but in the middle of summer with 30+ °C ambient the highest load temp I saw on the GPU was 46 °C; at present it's 42 °C running the FS Ultra stress test with a 2145 MHz core and +1150 MHz on memory.
> 
> My case is an open one, so airflow is not an issue. I prefer a fully open case, which the Lian Li PC-T16A makes practical. I'm not a fan of lights and glimmer all around; give me something that works, which I can take to its limits without an issue.
> 
> https://www.3dmark.com/fsst/1063309
> 
> My reason for running FS Ultra is that it is a hell of a lot more demanding on the GPU than Time Spy, even Extreme: I can pass both normal bench runs at 2160 and even 2175 MHz core, but FS Ultra is 2145 only.


What temps are strange to you and why ?


----------



## VETDRMS

Loaded temps were 20 °C. Coolant temp is 8-9 °C using a chiller. I normally run the coolant at 13 °C, and loaded temps are usually 25-28 °C. I prefer this sound to a bunch of fans; it is actually very quiet and cycles on/off every 5 minutes or so, so it is "silent" at times.


----------



## J7SC

dante`afk said:


> No, ambient is the same. I disassembled the block again and spread the TIM on the block and GPU evenly; temps are now equal to the Bitspower. A bit disappointed, or rather impressed by how good the Bitspower is. GPU is 30 °C idle and 38-39 °C under load; ambient is 26 °C. But yeah, the mounting pressure from the block is not as good as on my Bitspower one.


 
That sounds a lot better  ...and no matter which block, GPU load temps of 38-39 °C with a 26 °C ambient are outstanding. I'm not sure if your Heatkiller came with a backplate, but directing some airflow over the back (especially near the GPU and VRM) might supplement it nicely, depending on your case config etc.




VETDRMS said:


> Loaded temps were 20C. Coolant temp is 8-9C using a chiller. I normally run the coolant at 13C and loaded temps are usually 25-28C. I prefer this sound to a bunch of fans and it is actually very quiet and cycles on/off every 5 minutes or so; so it is "silent" at times..


 
That is one heck of a clean setup :thumb: (never mind those steering wheels on the wall). I'm running a Core P5 with my TR and 2x Aorus 2080 Ti WBs, and your case looks like another P5 (or a P3) on its side?! Congrats on both form and function/performance!


----------



## NewType88

VETDRMS said:


> Loaded temps were 20C. Coolant temp is 8-9C using a chiller. I normally run the coolant at 13C and loaded temps are usually 25-28C. I prefer this sound to a bunch of fans and it is actually very quiet and cycles on/off every 5 minutes or so; so it is "silent" at times..


Cool setup! Can you give me a link to those mounts holding the tubes to the wall?


----------



## Damaging Excess

Seems my overclock on air with an EVGA FTW3 Ultra is about 2040 MHz during gaming and around 2050-2080 MHz when benching, with +1100 on memory. Anything more and it's not really stable in everything I do; Apex Legends specifically seems very sensitive to unstable overclocks. Debating flashing a better BIOS, but I don't know if it would be worth it beyond a slightly higher score in benchmarks.


----------



## GnarlyCharlie

dante`afk said:


> So I got my Heatkiller block today and somehow got worse temps than with my previous Bitspower block. Do you think it isn't seated right? I also hard-modded the shunts, but the power draw does not go over 280 W.
> 
> Before, I had about 38-40 °C on the GPU under load at 26 °C ambient; now I have 42-45 °C at 26 °C ambient. But I also have a lot of air bubbles in the block?
> 
> 
> 
> Looks normal... more TIM maybe?


The pic gets a little blurry when I enlarge it on this tablet, so I might be mistaken.

It looks to me like you have a stacked resistor on the PCIe shunt (vertical in the pic, lower right, 8 mOhm), which most advise not to mod, and no stacked resistor on the power shunt (horizontal in the pic, mid right, 5 mOhm). There should be a second power shunt resistor (near the notch, lower right) on the other side of the card.

Some have reported the card reverting to emergency downclock mode when the PCIe shunt resistance is decreased.

Look at the pic in the first post; the resistors to mod are circled in red:

https://www.overclock.net/forum/69-nvidia/1714864-2080-ti-working-shunt-mod.html
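For anyone weighing the mod itself: soldering a second resistor on top of a shunt puts the two in parallel, so the sensed resistance drops and the card under-reports its power draw by the same ratio. A quick sketch of the arithmetic (the 5 mOhm figure is the power shunt named above; stacking an equal-value resistor is just one common choice, not a recommendation for any specific part):

```python
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

stock_shunt = 0.005                          # 5 mOhm power shunt, per the post above
modded = parallel(stock_shunt, stock_shunt)  # equal resistor stacked on top -> 2.5 mOhm

# The controller still assumes 5 mOhm, so reported power scales down:
report_scale = modded / stock_shunt          # 0.5: the card reports half its true draw
```

That halved reporting is also why a modded card can appear pinned well below its BIOS power limit, as described a few posts up.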


----------



## J7SC

...interesting GN Vid about 2080 Ti Kingpin ...note a.) EVBot and b.) comment @ 4 min re. Bios


----------



## dante`afk

Yeah, I modded the card. There is a shunt on the front bottom right and one on the back; the top one near the PCIe slot should not be touched. It does not downclock.

So when I had the Bitspower on my card (no shunt mod), the card could run FurMark for 10 minutes at max 43 °C. Now with the Heatkiller it goes above 45 °C and downclocks. I guess because of the shunt mod? Not sure, because it drew just 380 W both before and now.


----------



## GnarlyCharlie

dante`afk said:


> Yeah, I modded the card. There is a shunt on the front bottom right and one on the back; the top one near the PCIe slot should not be touched. It does not downclock.
> 
> So when I had the Bitspower on my card (no shunt mod), the card could run FurMark for 10 minutes at max 43 °C. Now with the Heatkiller it goes above 45 °C and downclocks. I guess because of the shunt mod? Not sure, because it drew just 380 W both before and now.


OK, good to know. It looked to me like you did shunt the resistor I outlined in red (the one that shouldn't be done) and not the resistor in green that should be done.


----------



## Krzych04650

Is anyone using FE with stock BIOS? Can you make a screenshot of completely stock VF curve and post it here?


----------



## kot0005

I tried the 1000 W XOC BIOS on the Gaming X Trio PCB. DP1 works fine, but the memory runs at full speed... I had to revert. Don't waste your time.


----------



## ReFFrs

kot0005 said:


> I tried the 1000 W XOC BIOS on the Gaming X Trio PCB. DP1 works fine, but the memory runs at full speed... I had to revert. Don't waste your time.


Just use MultiDisplay Power Saver from the NVIDIA Inspector package. It allows locking the clocks to 2D mode in Windows and adding applications like games to run at full 3D clocks, or setting a GPU-usage threshold for switching to 3D clocks. I can confirm the memory is dropping clocks now.
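The decision logic described there (force 2D clocks unless a whitelisted app is running or GPU usage crosses a threshold) boils down to something like the sketch below; the 30% threshold and the function name are illustrative, not the tool's actual defaults or API:

```python
def target_pstate(gpu_usage_pct, whitelisted_app_running, threshold_pct=30.0):
    """Pick 2D (idle) or 3D (full) clocks, usage-threshold power-saver style."""
    if whitelisted_app_running or gpu_usage_pct >= threshold_pct:
        return "3D"  # full clocks for games / heavy load
    return "2D"      # idle clocks, which lets the memory downclock
```

A light desktop load below the threshold stays at 2D clocks, while a whitelisted game forces 3D regardless of instantaneous usage.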


----------



## zack_orner

But it is still only one display output that works, correct? I'd love to try it on my Trio, but I need at least two outputs.

2700x rog cross hair hero vii 1 tb 970 evo gskiils 3200cl14 msi 2080 ti gaming x trio


----------



## lowrider_05

The HDMI sort of works for me.


----------



## dante`afk

GnarlyCharlie said:


> OK, good to know. It looked to me like you did shunt the resistor I outlined in red (the one that shouldn't be done) and not the resistor in green that should be done.


Aw **** you are right.

Well, either it's not doing anything for me (not properly soldered) or it's fine where it is.


----------



## GnarlyCharlie

I think you will get an overall better result if you do the resistor circled in green, but then you would probably have to remove the one circled in red to avoid the emergency-mode downclocking.


----------



## Renegade5399

dante`afk said:


> Aw **** you are right.
> 
> Well, either it's not doing anything for me (not properly soldered) or it's fine where it is.


That is the shunt for the 75 W PCIe power limit, AFAIK. You run the risk of damaging your motherboard by leaving it there. Those are pretty big SMDs and fairly easy to work on; don't be scared, just move it to the proper location.


----------



## arrow0309

ReFFrs said:


> Just use MultiDisplay Power Saver from the NVIDIA Inspector package. It allows locking the clocks to 2D mode in Windows and adding applications like games to run at full 3D clocks, or setting a GPU-usage threshold for switching to 3D clocks. I can confirm the memory is dropping clocks now.


Hi, is this bios working properly on FE boards as well?


----------



## dangerSK

arrow0309 said:


> Hi, is this bios working properly on FE boards as well?


Well yeah, you may lose a DP port though. Be aware that it's still an XOC BIOS intended for very good cooling.


----------



## Hemorz

Can someone please confirm whether my Galax card is binned or not.

28IULBUCT4KW

It's not in the list at the start.

GALAX GeForce RTX 2080Ti White (1-Click OC)

Sent from my SM-G965F using Tapatalk


----------



## roccale

NVFlash Download v5.556.0

https://www.guru3d.com/files-get/nvflash-download-for-windows,1.html

OK, I flashed the XOC BIOS on my Strix 2080 Ti and it works very well. The fan control works too. Really a great BIOS.
Thanks to all those who supported me, in particular killwill and zhrooms, very kind people.


----------



## bigjdubb

Damaging Excess said:


> Seems my overclock on air with an EVGA FTW3 Ultra is about 2040 MHz during gaming and around 2050-2080 MHz when benching, with +1100 on memory. Anything more and it's not really stable in everything I do; Apex Legends specifically seems very sensitive to unstable overclocks. Debating flashing a better BIOS, but I don't know if it would be worth it beyond a slightly higher score in benchmarks.


That's pretty much where mine ends up. Mine has been running stable in games at +100 core and +750 memory with the power limit maxed and the voltage left alone. My fan curve ramps to 60% at 50 degrees and 80% at 60 degrees; this keeps the GPU between 50-60 degrees for hours on end, and it's inaudible to me with headphones and somewhat audible over speakers when it's at 80%.

I didn't have any luck getting higher clocks by increasing the voltage.
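For anyone reproducing a curve like that outside Afterburner, a ramp between two points is just linear interpolation over (temperature, duty) pairs. A minimal sketch; the 50 °C/60% and 60 °C/80% values are the ones from this post, not a recommendation:

```python
def fan_duty(temp_c, curve=((50, 60), (60, 80))):
    """Piecewise-linear fan duty (%) from (temp_C, duty_%) points."""
    pts = sorted(curve)
    if temp_c <= pts[0][0]:
        return pts[0][1]          # clamp below the first point
    if temp_c >= pts[-1][0]:
        return pts[-1][1]         # clamp above the last point
    for (t0, d0), (t1, d1) in zip(pts, pts[1:]):
        if t0 <= temp_c <= t1:    # interpolate within the segment
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
```

At 55 °C this gives 70% duty, halfway between the two points.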


----------



## klepp0906

So, before I go reading through hundreds of pages of this: I just scooped one of these up against my better judgement. Damned tax returns.

Then I couldn't help myself from getting the hybrid kit.

Then I realized that if I was going to take it apart anyway... so I cancelled and just got the full waterblock.

So the card is here, no waterblock till Friday, and I'm trying to pre-prep to limit my downtime so I can play.

What's the go-to (safe) BIOS for a watercooled XC Ultra? I saw that 1000 W obnoxious thing; I want nothing to do with that. This card is probably going to be sticking around for 3 years.

I play a lot of FFXIV, and at 4K with this thing at stock it's already tapping out, more or less. Of course, my 1080 Ti needed HBAO turned down for roughly the same framerate, and it was on water and overclocked.


----------



## bigjdubb

klepp0906 said:


> So, before I go reading through hundreds of pages of this: I just scooped one of these up against my better judgement. Damned tax returns.
> 
> Then I couldn't help myself from getting the hybrid kit.
> 
> Then I realized that if I was going to take it apart anyway... so I cancelled and just got the full waterblock.
> 
> So the card is here, no waterblock till Friday, and I'm trying to pre-prep to limit my downtime so I can play.
> 
> What's the go-to (safe) BIOS for a watercooled XC Ultra? I saw that 1000 W obnoxious thing; I want nothing to do with that. This card is probably going to be sticking around for 3 years.
> 
> I play a lot of FFXIV, and at 4K with this thing at stock it's already tapping out, more or less. Of course, my 1080 Ti needed HBAO turned down for roughly the same framerate, and it was on water and overclocked.


I'm pretty sure the XC Ultra is the reference PCB with a better cooler, so you should be able to follow what people have done with the FE and get similar results.


----------



## VETDRMS

J7SC said:


> That is one heck of a clean setup :thumb: (never mind those steering wheels on the wall) ...I'm running a Core P5 w/my TR and 2x Aorus 2080 Ti WBs, and your system case looks like either also a P5 (or P3) on its side ?! Congrats on both 'form and function / performance' !


The case is a P5, with the back facing down. I relocated the power supply to make room for the reservoir/pump. The little shelf holding the cars is the power-supply bracket from the bottom with a piece of plexi inset. The chiller replaced a 360x60 mm EK radiator with six 38 mm-thick Panaflo fans. It worked well, but the chiller is quieter and much cooler.



NewType88 said:


> Cool setup ! Can you give me a link to those mounts holding the tubes to the wall ?


https://www.amazon.com/gp/product/B07L4CVKPH/

I drilled out the tapped side for the bolt and anchored them to the wall with sheetrock anchors. I've had good luck with Walldog anchors. I need to order another pair to fill in the gap where the hose sags a bit... always something.


----------



## kx11

Glerox said:


> You are right. Thank you for this. In fact, thanks everyone for all your answers!
> 
> I've installed the water block with the GPU fans of the air cooler plugged in (which was quite a pain to do), which is a bit ridiculous.
> Afterwards, I removed the fans while the PC was running, and now it seems to work.
> I'm just really afraid now that reinstalling a new driver might bring the issue back...
> 
> This is so ridiculous coming from a GPU that is made FOR overclocking...





I'm having the exact same issue here. How did you reach those fan headers on the back? My cooling is hard tubing, so I'm kind of in trouble here.


----------



## Glerox

kx11 said:


> Glerox said:
> 
> 
> 
> You are right. Thank you for this. In fact, thanks everyone for all your answers!
> 
> I've installed the water block with the gpu fans of the air cooler plugged in (that was quite a pain to do), which is a bit ridiculous.
> After, I've removed the fans while the PC was running and now it seems to work.
> I'm just really afraid now to reinstall a new driver as the issue might come back...
> 
> This is so ridiculous coming from a GPU that is MADE of overclocking...
> 
> 
> 
> 
> 
> 
> I'm having the exact same issue here. How did you reach those fan headers on the back? My cooling is hard tubing, so I'm kind of in trouble here.

Like this... once the driver/BIOS is installed, you can remove the fans. I was lucky enough to have quick-disconnects in this build; I usually go with hard tubing too.


----------



## J7SC

kx11 said:


> I'm having the exact same issue here. How did you reach those fan headers on the back? My cooling is hard tubing, so I'm kind of in trouble here.


 
...Per the attached pic (from TechPowerUp) of the MSI 2080 Ti Lightning Z, the fan connectors look like typical 4-pin PWM ones. Couldn't you plug permanent 'extensions' into those on the PCB (or even fabricate them) and leave them plugged into the card for good, i.e. for when you need them again after a driver or BIOS update, without having to fiddle around the hard tubes? I've got a ton of 4-pin PWM extensions left over from other builds, and you can also buy them in different lengths at various spots.


----------



## J7SC

VETDRMS said:


> The case is a P5, with the back facing down. I relocated the power supply to make room for the reservoir/pump.
> 
> (edit)
> 
> https://www.amazon.com/gp/product/B07L4CVKPH/
> 
> ... I've had good luck with Walldog anchors. I need to order another pair to fill in that gap where the hose sags a bit...always something.


 
Those look neat and sturdy...I'll order some from Amazon for an upcoming build (another Core P5 or may be Core P7)


----------



## kx11

Glerox said:


> Like this... once the driver/BIOS is installed, you can remove the fans. I was lucky enough to have quick-disconnects in this build; I usually go with hard tubing too.



Do I have to plug in all the fans or just one? I can do it when the PC is off, so I can unseat the GPU a little and plug one fan header in. Or a new BIOS could solve this issue for good.


----------



## Glerox

You have to plug in all three fans... I tested it. It's really a bad design choice from MSI. Unfortunately I can't return it because you're not supposed to take it apart, lol. We must wait for a new vBIOS...


----------



## kx11

Glerox said:


> You have to plug in all three fans... I tested it. It's really a bad design choice from MSI. Unfortunately I can't return it because you're not supposed to take it apart, lol. We must wait for a new vBIOS...





Yeah, I'm stuck; I can't do it, the block is too tight.


Hopefully someone can pull that new BIOS out of MSI Live Update and post it here, since I can't because I have an ASUS mobo.


----------



## arrow0309

dangerSK said:


> Well yeah, you may lose a DP port though. Be aware that it's still an XOC BIOS intended for very good cooling.


You mean lose one port, two, or all of them?
Because I only need one DP port and one HDMI (for my 4K OLED).
Yeah, I'm running a serious WC loop with a third external rad (Monsta 420).


----------



## x-speed69

kx11 said:


> Yeah, I'm stuck; I can't do it, the block is too tight.
> 
> 
> Hopefully someone can pull that new BIOS out of MSI Live Update and post it here, since I can't because I have an ASUS mobo.



If you need the LN2 BIOS with the 400 W limit, you can get it from here:
https://forum-en.msi.com/index.php?topic=314643.0
Post #13.


I managed to plug my fans back in after installing the Bitspower block; you have just enough space to force them in there.
If MSI can't solve this, I'll probably have to modify the block so there is enough clearance around the fan connectors to plug/unplug them with the card installed in the system.




J7SC said:


> ...Per the attached pic (from TechPowerUp) of the MSI 2080 Ti Lightning Z, the fan connectors look like typical 4-pin PWM ones. Couldn't you plug permanent 'extensions' into those on the PCB (or even fabricate them) and leave them plugged into the card for good, i.e. for when you need them again after a driver or BIOS update, without having to fiddle around the hard tubes? I've got a ton of 4-pin PWM extensions left over from other builds, and you can also buy them in different lengths at various spots.




It's not the same, if you mean the 4-pin PWM connectors that are on a motherboard; the physical dimensions are different.


----------



## GAN77

I got XSPC Razor Neo RTX2080TI. 
The product looks great!


----------



## willverduzco

arrow0309 said:


> You mean lose one port, two, or all of them?
> Because I only need one DP port and one HDMI (for my 4K OLED).
> Yeah, I'm running a serious WC loop with a third external rad (Monsta 420).


If you're under water, you'll be fine with temps, since the BIOS only allows 1.093 V max and you'll never get above 45% PL (450 W).

As for DisplayPort loss: on a reference card, you lose one DP port and some HDMI functionality (namely HDMI audio output). One DP will still work, and the HDMI ports will still partially work. On the Strix OC PCB (what the BIOS was originally intended for), you do not lose DP or HDMI audio functionality.

Finally, you may or may not see a real-world benefit vs. the Galax 380 W BIOS, because many people on reference PCBs seem to run into the Vrel perfcap quite heavily after switching from the 380 W to the XOC BIOS. The net result for many seems to be about the same, or ever so slightly higher, sustained voltages and clocks. For me, on my watercooled Strix, it was quite a boost, but that's certainly not the norm for everyone. The real limiting factor of this BIOS, IMO, is its inability to let users modify the V/F curve (and the heavily reduced NVLink speed for SLI users).
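As a sanity check on those numbers: the power-limit slider is a percentage of the BIOS's base power target, so on the 1000 W XOC BIOS discussed in this thread, 45% works out to the 450 W mentioned above. Trivially:

```python
def board_power_w(base_w, pl_percent):
    """Board power target implied by a power-limit slider percentage."""
    return base_w * pl_percent / 100.0
```

board_power_w(1000, 45) gives 450.0 W, while the Galax BIOS's cap at 100% is board_power_w(380, 100).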


----------



## GnarlyCharlie

Strange Driver Issue - I've never seen this one.

I bought a 2080 Ti to swap into my 7980XE system, which is water cooled. When I first got the card, I installed it in a 7920X system that's still on air to make sure it wasn't DOA or artifacting. It ran fine; I used the newest driver, 419.35.

I put a water block on it and verified it still ran, just briefly, in the air system under no load. Ran fine.

Put it in the loop of the 7980XE system, uninstalled the old drivers, uninstalled Afterburner. Tried to install 419.35 and got an error, "Driver Not Compatible with this Windows Version" or something like that.

I have Windows 10 Pro 64-bit, 1607 (14393.2214), supposedly recent enough to run this driver generation.

So I tried letting NVIDIA scan the system to find a driver that would work; that needs Java, and I only have Chrome and Edge, neither of which supports Java.

Went through the whole GeForce Experience routine; it downloaded the same driver I tried before, which wouldn't work.

1607 is supposed to be recent enough to work with this driver generation. Did an sfc /scannow, no system issues. GPU-Z recognizes the TU102 chip, but with no driver it calls it a basic display adapter or something. Tried disabling the basic display adapter in Device Manager; the driver still won't install.

I guess I can just put it back in the 7920X machine, but I wanted the more powerful card in the 7980XE rig.


----------



## x-speed69

GnarlyCharlie said:


> Strange Driver Issue - I've never seen this one.
> 
> I bought a 2080Ti to swap into my 7980XE system, that system is water cooled. When I first got the card, I installed it in a 7920X system that's still under air to make sure it wasn't DOA or artifacting. Ran fine, I used the newest driver 419.35.
> 
> I put a water block on it and verified it still ran just briefly in the air system, no load. Ran fine.
> 
> Put it in the loop of the 7980XE system, uninstalled old drivers, uninstalled Afterburner. Tried to install 419.35, got an error "Driver Not Compatible with this Windows Version" or something like that.
> 
> I have Windows 10 Pro 64 bit, 1607 14393.2214, supposed to be recent enough to run this driver generation.
> 
> So I try to let Nvidia scan the system to find a driver that will work, needs Java. I only have Chrome and Edge, neither support Java.
> 
> Went through the whole GeForce Experience - it downloaded the same driver that I tried before that wouldn't work.
> 
> 1607 is supposed to be recent enough to work with this driver generation. Did an sfc /scannow, no system issues. GPU-Z recognizes the TU102 chip, but with no driver it calls it a basic display adapter or something. Tried disabling the basic display adapter in Device Manager; the driver still won't install.
> 
> I guess I can just put it back in the 7920X machine, but I was wanting the more powerful card in the 7980XE rig.



Windows 10 1607 reached end of service in April 2018, so it's not that new anymore. I'm pretty sure games like Metro Exodus or Shadow of the Tomb Raider with the RTX patch won't even run on that version.


----------



## CptSpig

GnarlyCharlie said:


> Strange Driver Issue - I've never seen this one.
> 
> I bought a 2080Ti to swap into my 7980XE system, that system is water cooled. When I first got the card, I installed it in a 7920X system that's still under air to make sure it wasn't DOA or artifacting. Ran fine, I used the newest driver 419.35.
> 
> I put a water block on it and verified it still ran just briefly in the air system, no load. Ran fine.
> 
> Put it in the loop of the 7980XE system, uninstalled old drivers, uninstalled Afterburner. Tried to install 419.35, got an error "Driver Not Compatible with this Windows Version" or something like that.
> 
> I have Windows 10 Pro 64 bit, 1607 14393.2214, supposed to be recent enough to run this driver generation.
> 
> So I try to let Nvidia scan the system to find a driver that will work, needs Java. I only have Chrome and Edge, neither support Java.
> 
> Went through the whole GeForce Experience - it downloaded the same driver that I tried before that wouldn't work.
> 
> 1607 is supposed to be recent enough to work with this driver generation. Did an sfc /scannow, no system issues. GPU-Z recognizes the TU102 chip, but with no driver it calls it a basic display adapter or something. Tried disabling the basic display adapter in Device Manager; the driver still won't install.
> 
> I guess I can just put it back in the 7920X machine, but I was wanting the more powerful card in the 7980XE rig.


Update to 1809 and reinstall the driver; it works great. That's what I run on my 7980XE machine with no issues.


----------



## GnarlyCharlie

CptSpig said:


> Update to 1809 and reinstall the driver; it works great. That's what I run on my 7980XE machine with no issues.


Thanks, Cpt! After doing some digging, that's where all roads are pointing. I thought W10 upgraded itself to the latest version; it seems like there are always gigabytes of downloads pending.

I miss the old days, when you knew that if you were running Windows 7, then Windows 10 stuff wouldn't work. Now we have Windows 10, but all Windows 10s are not created equal: you've got Windows 10 stuff that won't work on Windows 10 if it's not the right Windows 10.


----------



## bogdi1988

VETDRMS said:


> The case is a P5, with the back facing down. I relocated the power supply to make room for the reservoir/pump. The little shelf holding the cars is the power supply bracket from the bottom with a piece of plexi inset. The chiller replaced a 360x60mm EK radiator with 6 38mm thick Panaflo fans. It worked well, but the chiller is quieter and much cooler.
> 
> 
> 
> https://www.amazon.com/gp/product/B07L4CVKPH/
> 
> I drilled out the tapped side for the bolt and anchored them to the wall with sheetrock anchors. I've had good luck with Walldog anchors. I need to order another pair to fill in that gap where the hose sags a bit...always something.


How many HP is that chiller? 1/4, 1/2, or 1?


----------



## CptSpig

GnarlyCharlie said:


> Thanks Cpt! After doing some digging, that's what all roads are pointing to. I thought W10 upgraded itself to the latest version, seems like there's always gigabytes of downloads pending.
> 
> I miss the old days when you knew if you were running Windows 7 then Windows 10 stuff wouldn't work, now we have Windows 10, but all Windows 10s are not created equal. You got Windows 10 stuff that won't work on Windows 10 if it's not the right Windows 10.


If you have not upgraded yet, use the update tool; it will make life easier. :thumb:

https://www.microsoft.com/en-us/software-download/windows10


----------



## J7SC

x-speed69 said:


> (edit)
> 
> It's not same if you mean 4-pwm connectors that are on motherboard, physical dimensions are different.


 
I know; I meant GPU 4-pin PWM connectors. You can buy them online, though I have a pile left over from coolers from prior water-cooling builds.


----------



## kx11

This is MSI's official "workaround" for my problem!!!!


https://forum-en.msi.com/index.php?topic=316562.msg1809776#msg1809776


If a new Game Ready driver comes out, I'm screwed.


----------



## x-speed69

J7SC said:


> I know, I meant GPU 4-pwm connectors / you can buy them online, though I have a pile left over from coolers from prior water-cooling builds



Oh okay, I'll have to look them up.


edit:
Just looked at some pictures of the card, and the second fan connector is 5-pin??


----------



## J7SC

x-speed69 said:


> Oh okay, have to look them up.
> 
> 
> edit:
> Just looked at some pictures of the card and second fan connector is 5pin??


 
...Looks like it, but places like AliExpress (and others) offer this kind of thing, per the attachment: here a 5-pin to 4-pin connector (as mentioned, there are also various 4-pin extensions, plus 4-to-3, 3-to-4, etc.). Whether the fifth pin matters for the current safe-mode issue is a matter of trial and error.

...All this makes me wonder if there's a 'must-have' RGB connector in the future of GPUs that will put the card in safe mode when disconnected.


----------



## kx11

Bitspower support responded to my email and told me to check if i got thermal pads under the backplate which is weird 





I didn't do that.


----------



## Krzych04650

GnarlyCharlie said:


> Strange Driver Issue - I've never seen this one.
> 
> I bought a 2080Ti to swap into my 7980XE system, that system is water cooled. When I first got the card, I installed it in a 7920X system that's still under air to make sure it wasn't DOA or artifacting. Ran fine, I used the newest driver 419.35.
> 
> I put a water block on it and verified it still ran just briefly in the air system, no load. Ran fine.
> 
> Put it in the loop of the 7980XE system, uninstalled old drivers, uninstalled Afterburner. Tried to install 419.35, got an error "Driver Not Compatible with this Windows Version" or something like that.
> 
> I have Windows 10 Pro 64 bit, 1607 14393.2214, supposed to be recent enough to run this driver generation.
> 
> So I try to let Nvidia scan the system to find a driver that will work, needs Java. I only have Chrome and Edge, neither support Java.
> 
> Went through the whole GeForce Experience - it downloaded the same driver that I tried before that wouldn't work.
> 
> 1607 is supposed to be recent enough to work with this driver generation. Did a sfc /scannow, no system issues. GPU-Z recognizes the TU-102 chip, but with no driver it calls it a basic display adapter or something. Tried disabling the basic display adapter in Device Manger, driver still won't install.
> 
> I guess I can just put it back in the 7920X machine, but I was wanting the more powerful card in the 7980XE rig.


According to the GeForce forums, the minimum supported Windows version for Turing GPUs is 1803. By supported I mean you will get customer support if you have an issue (how useful that is, everyone knows). The minimum version the driver will install on for the 2080 Ti is 1709, and it works normally (I have been using this version since the beginning with the 2080 Ti), but you won't get customer support. Anything earlier is going to show this incompatibility message. There are probably custom ways to get it to work, but by official standards this is how it goes.

So you don't necessarily have to use 1809, which removes exclusive fullscreen functionality from Windows permanently and causes huge trouble for some use cases like mine: I use the Desktop Duplication API to screen capture for a DIY Ambilight, and the game needs to be in exclusive fullscreen or G-Sync will stop working while the Desktop Duplication API is active. So I am using 1709 to avoid issues, though DXR requires 1809, and DX12 inherently doesn't have an exclusive fullscreen feature, so I don't know what I am going to do about it yet. Two separate installations of Windows for sure, 1809 for DXR games and 1709 for everything else, but that still doesn't solve the issue of no exclusive fullscreen under DX12.

Also, newer versions of Windows are full of junk like Meltdown/Spectre patches and broken updates, and recently Microsoft manages to break something significant with every update, so it is all not as plug-and-play as it was with previous versions, when you installed Windows and that was the last time you thought about it. Now Microsoft is really devoted to throwing obstacles in the gamer's way on every single occasion. All the imaginary-security-issue hysteria is not helping either; these patches were so chaotic and left many issues that are apparently never getting fixed. So removing all of this nonsense is the second thing to do after disabling updates.
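
For reference, those version names map onto OS build numbers (winver shows yours). A minimal Python sketch of the cutoffs described above - the version/build pairs are the standard Microsoft ones, while the 1709 install cutoff and 1803 support cutoff are as reported in this thread, not official NVIDIA documentation:

```python
# Windows 10 version name -> OS build number (standard Microsoft pairs).
WIN10_BUILDS = {
    "1607": 14393,  # Anniversary Update
    "1703": 15063,  # Creators Update
    "1709": 16299,  # Fall Creators Update
    "1803": 17134,  # April 2018 Update
    "1809": 17763,  # October 2018 Update
}

# Cutoffs as reported in this thread (assumptions, not NVIDIA docs):
MIN_INSTALL = WIN10_BUILDS["1709"]    # driver will install
MIN_SUPPORTED = WIN10_BUILDS["1803"]  # NVIDIA customer support

def turing_driver_status(build: int) -> str:
    """Classify a Windows 10 build against the reported Turing cutoffs."""
    if build < MIN_INSTALL:
        return "incompatible"          # e.g. 1607 / 14393 -> install error
    if build < MIN_SUPPORTED:
        return "installs, unsupported"
    return "supported"

print(turing_driver_status(14393))  # the 1607 system from the earlier post
```

The 1607 machine from the earlier post (build 14393) lands in the "incompatible" bucket, matching the install error seen there.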


----------



## J7SC

Krzych04650 said:


> According to the GeForce forums, the minimum supported Windows version for Turing GPUs is 1803. By supported I mean you will get customer support if you have an issue (how useful that is, everyone knows).
> 
> (edit)
> 
> Also newer versions of Windows are full of junk like Meltdown/Spectre patches, broken updates and etc, also recently *Microsoft manages to break something significantly with every update*, so it is all not as easy and plug and play as it was with previous versions, when you just installed Windows and this was the last time you thought about it. Now Microsoft is really devoted to throw obstacles in gamer's way on every single occasion. All the imaginary security issues hysteria is not helping either, all of these patches were so chaotic and left many issues that are apparently never getting fixed. So removing all of this nonsense is a second thing to do after disabling updates.


 
Funny that you mention that. I started to run Win 10 just about 3 months ago (on just one machine; all others are either Win7 or Server editions, apart from other BSD-type setups). Most of the time, the auto-update is disabled successfully, but every once in a while weird stuff happens...like TODAY, when I was in *the middle of* TimeSpy's Graphics Test 1 w/ my 2x 2080 Ti on full blast when it all stops and a message is superimposed on the screen - from Win 10, to update... :sozo: It gives me the option to pick another time, but that's all grayed out - what a PoS.

After all was said and done (and updated), 3DMark Systeminfo was not working anymore, and there has been a marginal loss of score w/the same parameters (CPU, temps, MSI AB settings). And it reset all the data-sharing settings with Microsoft...and added a bunch of new items in the 'Device Manager' for Bluetooth (even though that is disabled on the mobo bios). For good measure, the four or five Bluetooth additions also called up some PCIe warning marks... Anyway, I uninstalled that crappy update, hacked the Win 10 registry some more, stripped out OneDrive and various other things - which made me feel very good.

But I'm not done yet / downloaded Ubuntu, and emailed my team to start looking for non-MS options for work ops.


----------



## GnarlyCharlie

All good now: did an upgrade to 1809 this afternoon, up and running with the latest driver. Just strange to me to have Windows 10 Pro 64-bit and yet not a _compatible version_ of Windows 10 Pro 64-bit. Call it something different.


----------



## VPII

CptSpig said:


> If you have not upgraded yet, use the update tool; it will make life easier. :thumb:
> 
> https://www.microsoft.com/en-us/software-download/windows10


Thanks for this... I did the update (not sure what version I was on), ran Time Spy again, and finally got a higher result than before.

https://www.3dmark.com/spy/6665495


----------



## zack_orner

VPII said:


> Thanks for this..... I did the update, not sure what I was on but I ran Time Spy again and finally got a higher result than before.
> 
> 
> 
> https://www.3dmark.com/spy/6665495


Nice, I got 14841, 500 points behind, and saw your name all over the high scores. My CPU score was 10250, not bad, but my GPU doesn't perform as well as yours. Here is my best so far.









2700X | ROG Crosshair VII Hero | 1TB 970 EVO | G.Skill 3200CL14 | MSI 2080 Ti Gaming X Trio | ASUS ROG Ryuo 240 AIO


----------



## VPII

zack_orner said:


> Nice I got at 14841 500 point behind and saw your name all over the high scores my cpu was 10250 not bad, but my gpu dosen't perform as well as yours , here is my best so far.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 2700x rog cross hair hero vii 1 tb 970 evo gskiils 3200cl14 msi 2080 ti gaming x trio asus rog ryou 240 aio


Your Cinebench results are good for a normal AIO cooler. I'm battling with that a bit unless I go LN2. My GPU is under water, which explains the results.

Sent from my SM-G960F using Tapatalk


----------



## Krzych04650

J7SC said:


> Funny that you mention that. I started to run Win 10 just about 3 month ago (on just one machine, all others are either Win7 or Server editions, apart from other BSD type setups). Most of the time, the auto update is disabled successfully, but every once in a while weird stuff happens...like TODAY, when I was in *the middle of *TimeSpy's Graphics Test 1 w/ my 2x 2080 Ti on full blast when it all stops and a message is superimposed on the screen - from Win 10, to update... :sozo: It gives me the option to pick another time, but that's all grayed out - what a PoS
> 
> After all was said and done (and updated), 3DMark Systeminfo was not working anymore, and there has been a marginal loss of score w/the same parameters (CPU, temps, MSI AB settings). And it reset all the data-sharing settings with Microsoft...and added a bunch of new items in the 'Device Manager' for Bluetooth (even though that is disabled on the mobo bios). For good measure, the four or five Bluetooth additions also called up some PCIe warning marks... Anyway, I uninstalled that crappy update, hacked the Win 10 registry some more, stripped out OneDrive and various other things - which made me feel very good.
> 
> But I'm not done yet / downloaded ubuntu, and emailed my team to start looking for non-MS options for work ops.


As for updates, I followed this guide on Win10 Pro:






And I haven't gotten any Windows Update-related issues/notifications for months now, so it seems to be disabled permanently. You can always update manually by downloading an installer for a particular update from the Microsoft website; there is absolutely no need to have Windows Update enabled and be a free beta tester for their chaotic updates, even if you want to stay up to date.


----------



## zack_orner

VPII said:


> Your Cinebench results are good for normal AIO cooler. Im battling with that a bit unless I go LN2. My gpu is under water which explains the results
> 
> Sent from my SM-G960F using Tapatalk


Yeah, between the heat and the power limits, I'm hoping to build a custom loop this summer when I can afford it. 

2700X | ROG Crosshair VII Hero | 1TB 970 EVO | G.Skill 3200CL14 | MSI 2080 Ti Gaming X Trio | ASUS ROG Ryuo 240 AIO


----------



## VPII

zack_orner said:


> Yeah between heat and power limits hoping to build a custom loop this summer when I can afford it.
> 
> 2700x rog cross hair hero vii 1 tb 970 evo gskiils 3200cl14 msi 2080 ti gaming x trio asus rog ryou 240 aio


What are you running your memory at? Not GPU, your system memory?

Sent from my SM-G960F using Tapatalk


----------



## kx11

I'm impressed: my unit is downclocked badly, and it still kicks ass @ 2K ultra graphics + DLSS ray tracing.


----------



## Shawnb99

Every game I try to play crashes on me; would that indicate a faulty GPU? Chrome also crashes a lot, and I've gotten more blue screens in the last week than I ever had on my old system.
Reformatted twice already; can't get anything to work. 

Likely just my luck I'd end up with another crappy card that needs to be sent back.


----------



## bigjdubb

It's possible. I've never had a faulty GPU so I'm really not sure what sort of problems it can cause. If the GPU is the only hardware change you have made then it's fairly safe to assume it has something to do with the problem.

Do you have another GPU you can use to test with?


----------



## Shawnb99

I changed the whole system, so it could be a few things. My first 2080 Ti died on me, hence my big fear with this one. It's likely something I have installed or some MB setting that I screwed up.
It's all under water, so I can't really swap in another GPU. I'll try stress testing it this weekend when I reformat again.


----------



## J7SC

Krzych04650 said:


> As for updates, I followed this guide on Win10 Pro:
> 
> https://www.youtube.com/watch?v=KHvQVZSUbes
> 
> And I didn't get any Windows Update related issues/notifications for months now, so it seems to be disabled permanently. You can always update manually by downloading an installer for particular update from Microsoft website, there is absolutely no need to have Windows Update enabled even if you want to stay up to date and be a free beta tester for their chaotic updates.


 
Thanks  I had done what's in the vid (most of it anyways) when I first installed W10 in November, and there were no unwanted updates. However, after moving that W10 M.2 over from an Intel 6700K SOC Z170 Gigabyte mobo (my standard new vid HW test-bench) to the MSI Meg Creation / Threadripper solely for the 2080 TIs, things got weird and W10 started to reset all kinds of things.

Anyhow, I'll go through W10 again re. the above, and I'm now enjoying a dual-boot (w W7-64)- and hopefully soon, some Ubuntu as well.


----------



## J7SC

Shawnb99 said:


> Every game I try to play crashes on me, would that indicate a faulty GPU? Chrome also crashes a lot and I've gotten more blue screens in the last week then I ever had on my old system.
> Reformated twice already, can't get anything to work.
> 
> Likely just my luck I'd end up with another crappy card that needs to be sent back.


 
...could very well be that the vid card is at fault, but the 'crashing in Chrome' gives me pause / it might also suggest either a system memory setting, or even CPU vCore...obviously impossible to diagnose this remotely, but you might want to try a run or two with a bit higher vCore and/or slightly relaxed system timings, if only to eliminate these as potential sources of your trouble.


----------



## Shawnb99

J7SC said:


> ...could very well be that the vid card is at fault, but the 'crashing in Chrome' gives me pause / it might also suggest either a system memory setting, or even CPU vCore...obviously impossible to diagnose this remotely, but you might want to try a run or two with a bit higher vCore and/or slightly relaxed system timings, if only to eliminate these as potential sources of your trouble.


Thanks. I'll try a few things this weekend when I get a chance. Yeah, the Chrome crashes threw me as well. Will try a fresh offline install and go from there. I assume it might be an ASUS AI OC issue; I just have optimized defaults in the BIOS, but I still don't trust it.


----------



## ReFFrs

Shawnb99 said:


> Yeah the Chrome crashes threw me as well.


Chrome utilizes GPU acceleration when available, so it can crash because of a faulty card too.


----------



## Shawnb99

I wouldn't be surprised if it was. It crashes mostly when watching video. I'll try some hardcore stress testing once I do the clean install and see if I can pinpoint the issue.


----------



## zack_orner

VPII said:


> What are you running your memory at.... not gpu your system memory?
> 
> Sent from my SM-G960F using Tapatalk


https://www.overclock.net/showthread.php?p=27901758

2700X | ROG Crosshair VII Hero | 1TB 970 EVO | G.Skill 3200CL14 | MSI 2080 Ti Gaming X Trio | ASUS ROG Ryuo 240 AIO


----------



## J7SC

Shawnb99 said:


> Thanks. I'll try a few things this weekend when I get a chance. Yeah the Chrome crashes threw me as well. Will try a fresh offline install and go from there. I assume it might be a ASUS AI OC issue, Just have optimized defaults in the bios but I still don't trust it.





ReFFrs said:


> Chrome utilizes GPU acceleration when available, so it can crash because of a faulty card too.


 
Yeah, just try things out, as it can be either. I mentioned the Chrome issue because on another build (my daily driver X99/Haswell-E/2x GTX 900-series), only Chrome crashes every once in a while, with the BSOD suggesting vCore - every other stress test works w/o a flaw at the same vCore, and the SLI GPUs are in a totally stock config...but again, it can very well also be the GPU (or, horror > both simultaneously).

---

On another note, I sometimes wonder whether all that time spent on squeezing every last drop out of the GPUs is really worth it ('heresy', I know  ). Per included pic, the top run (all with same GPU Bios and drivers, GPUs are Aorus 2080 TI factory full water-blocked) is a SLI TimeSpy @ 2145 MHz indicated for both cards, w/ VRAM @ 7984 or so. Right below it the same run, but this time *ONLY* w/ PL set to max (122%), so *NO* oc'ing on GPU or VRAM via MSI AB whatsoever...check the FPS difference  Granted, factory-set boost is already 1770 MHz for the GPU and 7070 for VRAM, and utilization is around 97-98%, but still...

Single card run at 2175 is included for comparison, because the other thing I've noticed is that the system (2950X, 32GB of TridZ @3400), which is normally set to 'tight' memory, can take even tighter memory settings (i.e. tRC and a few others) and pass all memory stability tests...BUT while TimeSpy CPU scores go up by about 500 points or so with 'supertight' settings, GPU scores actually go down by an amount that outweighs that in the overall score. Unigine's Superposition 4K Opt seems to be a bit of a different animal...running only a single GPU anyhow by default, it seems to get along a bit better with 'super tight' system settings.

But after all is said and done, it seems to me that at least the custom-PCB RTX 2080 Tis are already pretty highly strung right out of the box - overclocking Pascal and Maxwell was a somewhat different experience. Not that I'm actually complaining about the 'base' 2080 Ti performance...
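
To put some numbers on the above - a quick sketch of the clock uplift involved (figures taken from the post; the FPS difference is far smaller than these percentages because utilization was already around 97-98%):

```python
# Manual OC vs. factory boost, per the TimeSpy runs described above.
core_factory, core_oc = 1770, 2145   # MHz, GPU core
vram_factory, vram_oc = 7070, 7984   # MHz, VRAM as read in MSI AB
core_gain = (core_oc - core_factory) / core_factory * 100
vram_gain = (vram_oc - vram_factory) / vram_factory * 100
print(f"core: +{core_gain:.1f}%, VRAM: +{vram_gain:.1f}%")  # core: +21.2%, VRAM: +12.9%
```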


----------



## dangerSK

J7SC said:


> Yeah, just try things out as it can be either. I mentioned the Chrome issue as on another build (my daily driver X99/Haswell-e/2x GTX 900series), only Chrome crashes every once in a while, with BSOD suggesting vcore - every other stress test works w/o a flaw at the same vcore, and the SLI GPUs are in total stock config...but again, it can very well also be the GPU (or, horror > both simultaneously).
> 
> ---
> 
> On another note, I sometimes wonder whether all that time spent on squeezing every last drop out of the GPUs is really worth it ('heresy', I know  ). Per included pic, the top run (all with same GPU Bios and drivers, GPUs are Aorus 2080 TI factory full water-blocked) is a SLI TimeSpy @ 2145 MHz indicated for both cards, w/ VRAM @ 7984 or so. Right below it the same run, but this time *ONLY* w/ PL set to max (122%), so *NO* oc'ing on GPU or VRAM via MSI AB whatsoever...check the FPS difference  Granted, factory-set boost is already 1770 MHz for the GPU and 7070 for VRAM, and utilization is around 97-98%, but still...
> 
> Single card run at 2175 is included for comparison, because the other thing I've noticed is that the system (2950X, 32GB of TridZ @3400), which is normally set to 'tight' memory, can take even tighter memory settings (i.e. tRC and a few others) and pass all memory stability tests...BUT while TimeSpy CPU scores go up by about 500 points or so with 'supertight' settings, GPU scores actually go down by an amount that outweighs that in the overall score. Unigine's Superposition 4K Opt seems to be a bit of a different animal...running only a single GPU anyhow by default, it seems to get along a bit better with 'super tight' system settings.
> 
> But after all is said and done, it seems to me that at least the custom-PCB RTX 2080 TI are already pretty highly strung right out of the box - overclocking Pascal and Maxwell were a somewhat different experience, not that I'm actually complaining about the 'base' 2080 Ti performance...


Of course all that hassle around OCing isn't really worth it; you get maybe +5-10 FPS with an OC in games. But why not? OCing is about pushing HW to the edge and competing with others.


----------



## J7SC

dangerSK said:


> Of course all that hassle around OCing isn't really worth it; you get maybe +5-10 FPS with an OC in games. But why not? OCing is about pushing HW to the edge and competing with others.


 
...agree, but a bit of 'oc heresy' _can_ be useful if things get overly serious


----------



## Carillo

Still tweaking my Zotac Amp with the "1000w" bios  

https://www.3dmark.com/spy/6673168


----------



## Cyber Locc

Do we need to have our name by the card in our case pics or no?


----------



## bigjdubb

Anyone done any benchmarking with the "creator driver"?


----------



## Rob w

bigjdubb said:


> Anyone done any benchmarking with the "creator driver"?


Jpmboy has done a comparison (AIDA64 vs RTX) in the Titan V thread.


----------



## kx11

MSI made a beta BIOS to fix my Lightning Z bug with the 1350 MHz clock speed problem. 







hopefully it works


----------



## dangerSK

kx11 said:


> MSI made a beta BIOS to fix my Lightning Z bug with the 1350 MHz clock speed problem.
> 
> 
> 
> 
> 
> 
> 
> hopefully it works


Can you upload it? Don't wanna waste time PMing Flobelix; wanna compare it with my Lightning BIOS.


----------



## kx11

dangerSK said:


> can u upload it, dont wanna waste time PMing Flobelix, wanna compare it with my Lightning bios





Waiting for him to send it. 



It will turn off RGB, though I think he means the backplate RGB only.


----------



## dangerSK

kx11 said:


> waiting for him to send it
> 
> 
> 
> it will turn off RGB though i think he means the backplate RGB only


OK, I PMed him also; I don't care about RGB  Just want to see what they changed in terms of power limit etc. (if anything, maybe they only fixed the issue  )


----------



## kx11

dangerSK said:


> ok i PMed him also, i dont care about RGB  Just want to see what they changed in terms of power limit etc. (if something, maybe only they fixed the issue  )





I got the vBIOS, testing now.


----------



## J7SC

dangerSK said:


> ok i PMed him also, i dont care about RGB  Just want to see what they changed in terms of power limit etc. (if something, maybe only they fixed the issue  )


 
...I remember the 'good ol' days' when RGB only meant the cable inputs on the back of your pro monitor. 

My carefully themed black / gray / white build looks, ahem, _very different_ when I turn it on at night...help! :bigeyedsm


----------



## headman78

Does anybody here have the original BIOS for the Gigabyte TURBO OC (GV-N208TTURBO OC-11GC) available? I bought it used, and the previous owner did not keep a backup when he flashed it. Gigabyte's support page does not offer it.


----------



## J7SC

headman78 said:


> Does anybody here have Gigabyte TURBO OC (GV-N208TTURBO OC-11GC) original bios available? I bought one used and the previous owner did not have a backup when he flashed it. Gigabyte's support page does not offer it.


 
Check the TechPowerUp VGA BIOS database.


----------



## dangerSK

J7SC said:


> ...I remember the '''good ol'days''' when RGB only meant the cable inputs on the back of your pro-monitor.
> 
> My carefully themed black / gray / white build looks, ahem, _very different,_ when I turn it on at night...help ! :bigeyedsm


Yeah, uhmm, I don't care about that  My board (the main one) doesn't have RGB, so it's a non-issue for me.


----------



## kx11

OK, this BIOS solves the issue with watercooling; however, the memory voltage sliders are gone in MSI AB, even after re-installing the drivers / MSI AB. 



Also, the clocks are stuck @ 1950 MHz, while the memory runs 1000+ just fine.


----------



## headman78

J7SC said:


> Check the TechPowerUp VGA BIOS database.


No luck there.


----------



## x-speed69

kx11 said:


> ok this Bios solves the issue with watercooling however the memory voltage sliders are gone in MSI AB even after re-installing drivers/MSI AB
> 
> 
> 
> also the clocks are stuck @ 1950mhz while memory runs 1000+ just fine



Wow, good to hear the fan issue can be solved with a simple BIOS update.


Now we just have to wait for a proper update so those other issues are solved.


----------



## dangerSK

kx11 said:


> ok this Bios solves the issue with watercooling however the memory voltage sliders are gone in MSI AB even after re-installing drivers/MSI AB
> 
> 
> 
> also the clocks are stuck @ 1950mhz while memory runs 1000+ just fine


Issue on your side; my card is doing fine at 2115 MHz. But yeah, the voltage sliders are gone.


----------



## J7SC

dangerSK said:


> Yeah uhmm i dont care about that  my board (main one) doesnt have RGB, so i dont really care.


 
Easy for you to say  But since I'm using a TT Core P5 case that has a glass cover, I might try adding this to the back of the window, for night ops


----------



## dangerSK

J7SC said:


> Easy for you to say  But since I'm using a TT Core P5 case that has a glass cover, I might try adding this to the back of the window, for night ops


I'm also using a TT P5, more like a test bench though.


----------



## J7SC

headman78 said:


> No luck there.


 
That 2080 Ti card is relatively new and a bit unusual...while another BIOS (i.e. an Asus blower BIOS) may or may not work, you might be better off contacting Gigabyte directly and asking them to send you a new BIOS.


----------



## kx11

dangerSK said:


> Issue on your side; my card is doing fine at 2115 MHz. But yeah, the voltage sliders are gone.





It's most likely the BIOS; I'll wait for the final BIOS to verify it.


----------



## J7SC

dangerSK said:


> Im also using TT P5, more like testbench though.


 
TT Core P5 + Heatkiller blocks = OC happiness


----------



## kx11

Hopefully this benchmark's result is accurate compared to the OC value I used.


----------



## reflex75

Shawnb99 said:


> I changed the whole system so it could be a few things. My first 2080TI died on me hence my big fear of this one. It's likely something I have installed or some MB setting that I screwed up on.
> It's all under water so I can't really swap out a new GPU. I'll try stress testing it this weekend when I reformat again.


RTX = TOXIC


----------



## Hemorz

What is a decent idle temp in a loop for a 2080 Ti, and which benchmarks should I be running, with what temps under load?

Sent from my SM-G965F using Tapatalk


----------



## Hemorz

Hemorz said:


> What is a decent idle temp in a loop for a 2080ti and what benchmarks should I be running with what temp under load?
> 
> Sent from my SM-G965F using Tapatalk


Also, guys, I'm super stumped... why is my binned 2080 Ti getting trash scores in 3DMark Time Spy?









Sent from my SM-G965F using Tapatalk


----------



## kx11

Your GPU seems to be causing the issue; that FPS score is terrible. Either it's not OC'd correctly, or it was running hot and throttled during the test.


----------



## Hemorz

kx11 said:


> Your GPU seems to be causing the issue; that FPS score is terrible. Either it's not OC'd correctly, or it was running hot and throttled during the test.


Every single test has been trash, and it's not throttling. I've done a fresh install of drivers (even though it's a fresh build) and tried a new power cable to the GPU.
The throttle limit is set to 85, and the GPU doesn't go above 75 under 100% load.

What else can I try?









Sent from my SM-G965F using Tapatalk


----------



## J7SC

Hemorz said:


> Every single test has been trash and it's not throttling. I've done fresh install of drivers even though it's a fresh build and tried new power cable to the GPU.
> Throttle is set to 85 and GPU doesn't go above 75 under 100% load.
> 
> What else can I try?
> 
> 
> Spoiler
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Sent from my SM-G965F using Tapatalk


 
...next to impossible to tell online without much more info, as it could be many things. On your mobo, I take it your card is running the full 16x (check w/ CPUID's Mainboard tab)? CPU speed? System RAM amount and speed are decent (and not too far OC'ed, i.e. re. tRC etc.)? And if you do another TimeSpy run, leave GPUz open and on 'sensor', with 'max value' of the GPUz parameters showing, and take a screenshot right afterwards.


----------



## Hemorz

J7SC said:


> ...next to impossible to tell online without much more info as it could be many things. On your mobo, I take it your card is running the full 16x (check w/ CPUiD/ Mainboard tab) ? CPU speed ? System RAM amount and speed is decent (and not too far oc'ed, i.e. re. tRC etc) ? And if you do another Timespy run, leave GPUz open and on 'sensor', with 'max value' of GPUz parameters showing and a screenshot right afterwards.


I haven't overclocked anything, and the RAM is stock 3200 MHz; I've just got XMP enabled, because it was sitting at 2133 until I did.

CPU is at stock Ryzen 7 settings under water.

Yes, it's running at the full 16x, with CPUID showing 2x 8GB and the card in the 16x slot at 3.0.

Here is a pic of the results at max limits in GPUz straight after TimeSpy.

Hope it helps you work out what's going on. Thanks for the help.










Sent from my SM-G965F using Tapatalk


----------



## J7SC

Hemorz said:


> I haven't overclocked anything and the ram is stock 3200mhz so I've just got xmp enabled because it was sitting at 2133 until I did.
> 
> CPU is stock ryzen 7 settings under water.
> 
> Yes it's running at full 16mb with cpuid showing 2x 8mb in 16x slot at 3.0.
> 
> Here is a pic of results at max limits on gpuid straight after timespy.
> 
> Hope it helps you work out what's going on. Thanks for the help.
> 
> 
> 
> Spoiler
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Sent from my SM-G965F using Tapatalk


 
A couple of quick points...first, your PCIe should look s.th. like the attachment (16x and 16x w/ a single card). Perhaps much more importantly, your watt usage at 224W and TDP % at 82.x% are way low and strongly hint at the problem area. I take it you have the latest MSI Afterburner (AB) installed? Just use MSI AB and set the Power Limit to max (depending on the GPU BIOS, 110% to 130+%, unless it is a special XOC BIOS for sub-zero cooling), then run the test again w/ GPUz open on sensor max values.
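
For what it's worth, those two readings also let you back out the active board power limit - a back-of-envelope check (numbers as quoted from the screenshot; "82.x" rounded down to 82.0, so treat the result as approximate):

```python
# Implied board power limit from the GPU-Z readings discussed above.
power_draw_w = 224.0    # observed board power draw
tdp_percent = 82.0      # observed "TDP %" reading
implied_limit_w = power_draw_w / (tdp_percent / 100)
print(f"implied power limit: {implied_limit_w:.0f} W")  # ~273 W
```

i.e. the card is sitting well below its limit, which is why the low draw itself points at the problem rather than at the power limit.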


----------



## Hemorz

OK, I've done that, and the GPU is definitely the same as your screenshot, with 16x on both sides. 
I've increased core voltage +100 and maxed the power limit slider at 111%; the temp limit is 88 degrees.

I'm disappointed, as out of the box it should be beating a 2080 easily without adjusting anything.

I just ran BFV at 3440x1440 120 Hz with ray tracing enabled and everything on ultra, and it was jumping between 20-40 FPS, which means something is definitely wrong, right?

Here are the results and settings before the bench:









Sent from my SM-G965F using Tapatalk


----------



## J7SC

Hemorz said:


> Ok I've done that and GPU is definitely same as your screenshot with 16x in both sides.
> I've increased core voltage +100 and made power limit slider max at 111% temp limit is 88 degrees.
> 
> I'm disappointed as out of the box it should be beating a 2080 easily without adjusting anything.
> 
> I just ran bf5 on 3440x1440 120hz with ray tracing enabled and everything ultra and it was jumping between 20-40 fps which means something is definitely wrong right?
> 
> Here are results and settings before bench:
> 
> 
> Spoiler
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Sent from my SM-G965F using Tapatalk


 
OK, PCIe x16 is fine, and watt usage and TDP are much better. Now, try the same run (w/ GPUz sensors etc.), but this time MSI AB with NO extra voltage ('0'), all else stock other than the Power Limit slider at max.


----------



## Hemorz

J7SC said:


> Ok, PCIe X16 is fine, and watt usage and TDP are much better. Now, try the same run (w/ GPUz sensors etc), but this time MSI AB with NO (> '0') extra voltage, all else stock other than the Power Limit slider to max


I've never had such a big issue with a graphics card in the past. I have emailed Inno3D to see if it's a common issue.

Is 75 too hot for a card on water?

Could I have stuffed up putting the waterblock on, with heat pads touching more than they are meant to?

Here is the test again with just the power slider at max and everything else stock:










Sent from my SM-G965F using Tapatalk


----------



## ReFFrs

Hemorz said:


> Here is test again with just power slider max and everything else stock


Run cmd as admin and show us the output of this command:

"C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe" -q -d power
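
Since that command dumps plain "Key : Value" text, the wattage fields can also be pulled out programmatically. A hedged sketch (the sample below is illustrative only - exact field names and layout vary by GPU and driver version):

```python
import re

def parse_power(report: str) -> dict:
    """Extract 'Key : <value> W' fields from an nvidia-smi -q -d power dump."""
    fields = {}
    for line in report.splitlines():
        m = re.match(r"\s*([A-Za-z ]+?)\s*:\s*([\d.]+)\s*W\s*$", line)
        if m:
            fields[m.group(1)] = float(m.group(2))
    return fields

# Illustrative sample only; real output depends on GPU and driver version.
sample = """
    Power Readings
        Power Draw                  : 224.00 W
        Power Limit                 : 330.00 W
        Default Power Limit         : 300.00 W
"""
print(parse_power(sample))
```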


----------



## Hemorz

ReFFrs said:


> Run cmd as admin and show us the output of this command:
> 
> 
> 
> "C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe" -q -d power


Here we go









Sent from my SM-G965F using Tapatalk


----------



## Hemorz

Hemorz said:


> Here we go
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Sent from my SM-G965F using Tapatalk


Here it is while running heaven in windowed mode:









Sent from my SM-G965F using Tapatalk


----------



## J7SC

Hemorz said:


> I've never had such a big issue with a graphics card in the past. I have emailed inno3d to see if it's a common issue.
> 
> Is 75 too hot for on water?
> 
> Could I have stuffed up putting the waterblock on with heatpads touching more than they are meant to?
> 
> Here is test again with just power slider max and everything else stock:
> 
> 
> 
> Spoiler
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Sent from my SM-G965F using Tapatalk


 
Well, everything checks out, other than the 'hedgehog syndrome'. By that I mean the 'spiky' / bouncy top MHz bar reading in GPU-Z even after you dialed down the voltage slider. In the attachment, you see a quick comp of 2x 900-series cards in Firestrike (sorry, I'm at another location right now without the 2080 Ti machine, but the point / result is the same). Compare the two GPU-Z shots on the left with your posted one on the right. The MHz shouldn't bounce around that much... on the left, the only steps are due to pauses between 3D tests and/or CPU tests.

I guess the big question is WHY your card bounces this much and continuously, even without super-high temps... and that can be quite a wild goose chase. I take it your PSU overall, and its PCIe power rails for the GPU specifically, are fine / have enough headroom? It could also indeed be your card, i.e. something on the power delivery or sensors on the GPU PCB. In any case, even at stock settings, it spikes way too much / bounces into something which needs to be tracked down... if it is not elsewhere in the system (i.e. PSU et al.), then it has to be on the GPU.
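For what it's worth, the spikiness being described here can be quantified from logged clock samples instead of eyeballed in GPU-Z. A rough sketch; the sample values and the 100 MHz threshold are arbitrary, purely for illustration:

```python
from statistics import pstdev

def is_spiky(clock_samples_mhz, threshold_mhz=100):
    """Flag a run whose core clock swings more than threshold_mhz
    peak-to-peak; also return the swing and standard deviation."""
    swing = max(clock_samples_mhz) - min(clock_samples_mhz)
    return swing > threshold_mhz, swing, pstdev(clock_samples_mhz)

# A steady run vs. a bouncy one (made-up 1 Hz samples):
steady = [1905, 1905, 1890, 1905, 1905, 1890, 1905]
bouncy = [1905, 1650, 1905, 1500, 1905, 1710, 1905]

for name, run in [("steady", steady), ("bouncy", bouncy)]:
    flagged, swing, sd = is_spiky(run)
    print(f"{name}: swing={swing} MHz, stddev={sd:.1f}, spiky={flagged}")
```

On a healthy card under load, the swing should be small apart from the clean steps between benchmark scenes.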


----------



## dangerSK

Is this good for 2080Ti ? (joke, obviously  )


----------



## Hemorz

J7SC said:


> Well, everything checks out, other than the 'hedgehog syndrome'  By that I mean the 'spiky' / bouncy top MHz bar reading in GPUz even after you dialed down the voltage slider. In the attachment, you see a quick comp of 2x 900 series cards in Firestrike (sorry, I'm at another location right now w/o the 2080 Tis machine but the point / result is the same). Compare the two GPUz on the left w/ your posted one on the right. The MHz shouldn't bounce around that much...on the left, the only steps are due to pauses between 3D tests.
> 
> 
> 
> I guess the big question is WHY your card bounces this much and continuously, even without super-high temps... and that can be quite a wild goose chase. I take it your PSU overall, and its PCIe power rails for the GPU specifically, are fine / have enough headroom? It could also indeed be your card, i.e. something on the power delivery or sensors on the GPU PCB. In any case, even at stock settings, it spikes way too much / bounces into something which needs to be tracked down... if it is not elsewhere in the system (i.e. PSU et al.), then it has to be on the GPU.


I've got a 1200W PSU so it shouldn't be that..

Thanks for trying man, hopefully someone can find out what's going on, or at least Inno3D

I'll try my old 1000W PSU for comparison

Sent from my SM-G965F using Tapatalk


----------



## J7SC

dangerSK said:


> Is this good for 2080Ti ? (joke, obviously  )


 
...need more powa / PSUs! (can't find the OGS / HWBot screenshot w/ 4x 1300W) ...btw, I actually have Antec HPC 1300s > love that 'OC link'


----------



## dangerSK

J7SC said:


> ...need more powa / PSUs ! (can't find OGS / HWBot screenshot w/ 4x) 1300w ...btw, I actually have Antec HPC 1300s > love that 'OC link'


Haha nice, that's crazy. I'm using a 750W Supernova G2, enough for daily 2080 Ti + 9980XE, but for LN2? ... the CPU alone can pull 1000W; add GPUs and you end up with a big number.


----------



## lkkane00

Nah, you're fine.

Seems average


----------



## dangerSK

Please repost it with a proper picture  too long for me


----------



## lkkane00

done  second card too


----------



## dangerSK

lkkane00 said:


> Nah, youre fine.
> 
> Seems average


Ikki, I see you're a man of culture as well  The OC Lab BIOS is nice, you should hook me up with nvvdd


----------



## Hemorz

What temps should I realistically be getting at idle and load with a waterblock? Also, do they let you RMA for coil whine... it's so bad!

Sent from my SM-G965F using Tapatalk


----------



## lkkane00

Haha, culture no doubt. This chiller enables wildness


----------



## dangerSK

lkkane00 said:


> haha culture no doubt. this chiller enables wildness


lol nice results for a chiller, very close to LN2 clocks  Would do the same if I had proper voltage control. I'm not gonna drop 3k€ on an OC Lab card though, the Lightning is enough


----------



## lkkane00

thanks so much. 


that was with my fluid at 16°C. Just got AC and now it's down to 7°C (dew point 3°C). Will push further in a few days and see what clocks I can hit.

The 9980XE was a top bin from Silicon Lottery, so running at 5.2 and 2350-2450 at all times.

LN2 is dope, but my PC works like any other PC. I just turn it on and use it.

The cards are wild, but there are definitely perfectly rivaling alternatives for cheaper.


----------



## dangerSK

lkkane00 said:


> thanks so much.
> 
> 
> 
> that was with my fluid at 16c. just got AC and now its down to 7c (dew point 3c). Will push further in a few days and see what clocks i can hit.
> 
> The 9980xe was top bin from silicon lottery so running at 5.2 and 2350-2450 at all times.
> 
> LN2 is dope, but my pc works like any other pc. I just turn it on and use it.
> 
> The cards are wild but there are definitely perfectly rivaling alternatives for cheaper.


Good, nice to hear you will rebench the 2080 Tis. Yeah, that 9980XE looks perfect; mine does 5.6 GHz in Cinebench (yeah, LN2 of course) at 1.4 vcore, probably the IHS needs lapping. The Lightning is a good alternative if you can get the "true" LN2 NDA BIOS etc. It's a pain in the ass to get fully unlocked voltage control as a "normal" overclocker like me. Asus seems fine, they released a 1000W BIOS for the public.


----------



## Hemorz

lkkane00 said:


> thanks so much.
> 
> 
> that was with my fluid at 16c. just got AC and now its down to 7c (dew point 3c). Will push further in a few days and see what clocks i can hit.
> 
> The 9980xe was top bin from silicon lottery so running at 5.2 and 2350-2450 at all times.
> 
> LN2 is dope, but my pc works like any other pc. I just turn it on and use it.
> 
> The cards are wild but there are definitely perfectly rivaling alternatives for cheaper.


How are you getting it so cold? :O

Sent from my SM-G965F using Tapatalk


----------



## dangerSK

Hemorz said:


> How are you getting it so cold? :O
> 
> Sent from my SM-G965F using Tapatalk


a chiller, and a good one


----------



## Hemorz

dangerSK said:


> chiller, and good one


So like an aircon blowing right on your radiators?

Sent from my SM-G965F using Tapatalk


----------



## J7SC

After that Win 10 update mess superimposing on a TimeSpy run ( :kookoo: ), I finally got around to updating 3DMark with Port Royal. Below is my 3rd or so run with my 2x Aorus 2080 Tis (ended up at slot 30 @ the 3DM HoF, wonder for how many minutes  ), so I can't claim to be an expert, but I did try a few different VRAM settings... and raising VRAM in the MSI AB setup by about 150 MHz netted me a grand total of :drum: *3* extra points in the score... I guess Port Royal is not that VRAM sensitive, or the Tensor cores / DLSS are too busy to notice.


----------



## dangerSK

Hemorz said:


> So like an aircon blowing right on your radiators?
> 
> Sent from my SM-G965F using Tapatalk


something like this  But lkkane has it better, bigger and nicer for sure.


----------



## Hemorz

dangerSK said:


> something like this  But ikki has it better,bigger and nicer for sure.


Oh wow looks fun 

Sent from my SM-G965F using Tapatalk


----------



## dangerSK

J7SC said:


> After that Win 10 update mess superimposing on a TimeSpy run ( :kookoo: ), I finally got around to updating 3DM with Port Royale. Below is my 3rd or so run with my 2x Aours 2080 Tis (ended up at slot 30 @ 3DM HoF, wonder for how many minutes  ) so I can't claim to be an expert, but I did try a few different VRAM settings...and raising VRAM in the MSI AB setup by about 150 MHz netted me a grand total of :drum: * 3* extra points in the score...I guess Port Royale is not that VRAM sensitive, or the Tensor cores / DLSS are too busy to notice.


Sorry, can't really tell/help because I didn't have time to do Port Royal yet, but raising by just 150 MHz? That's low. On my Lightning I can easily do +1200 MHz on the VRAM in AB.


----------



## lkkane00

Damn, 5.6 is wild, especially stable. Unimaginable for me; I'd guess a stable 5.3 if ALL works out on a good day at 1.5 lol. On an Asus Omega and 4600 RAM (for the life of me it won't run higher than 4400 stable); feel the bottleneck has been the mobo for some reason.


Anyways, in regards to the OC BIOS etc. I'm glad you brought that up man, because I'm just getting into XOC heavy and my initial experiences have been filled with a lot of pretentious arrogance. All these dorks are proud of themselves for breaking records while everyone else has a damn 400W super **** BIOS (or SUPER CRIPPLED xoc).
With the HOF card on a regular BIOS I was getting worse than my MSI DUKE on the 380W BIOS. It's 100% unfair, because our passion is just as great, if not more sometimes. I had to beg GALAX and they didn't even bother sending it to me (apparently I missed their little competition), imagine. AFTER 3k.

I had to hustle and beg to get that ****, how is this an "OPEN" community, my ass.

I can confirm my DUKE (2 pin) works incredibly with this 2000W XOC BIOS; it performs as the fake XOC should have. Nothing has burnt, crashed, melted etc. I peak at 512W and hit 2290.


I miss the days where we could push the limits ourselves. Now they lock their BIOSes unless I'm Linus or Pewdiepie. I've always been into OC and even hit 4th on my peak global (on water vs LN2!), but this new generation of $2k GPUs and prioritization makes me believe my passion is misplaced. Well, most of us here, seeing as our hands are typically tied.


----------



## lkkane00

"How are you getting it so cold? :O

Sent from my SM-G965F using Tapatalk "





hailea 500a and 7 d5

ps will use quote next time


----------



## dangerSK

lkkane00 said:


> Damn, 5.6 is wild, especially stable. Unimaginable for me, id guess a stable 5.3 if ALL works out on a good day at 1.5 lol. On a asus omega and 4600 ram (for the life of me wont run higher than 4400 stable) feel the bottlneck has been the mobo for some reason.
> 
> 
> Anyways, in regards to the OC BIOS etc. Im glad you brought that up man because I'm just getting into XOC heavy and my initial experiences have been filled with a lot of pretentious arrogance. All these dorks are proud of themselves for breaking records while everyone else has a damn 400w super **** bios (or SUPER CRIPPLED xoc).
> With the HOF card on a regular bios i was getting worse than my MSI DUKE on the 380w bios. Its 100 unfair because our passion is just as great, if not more sometimes. I had to beg GALAX and they didnt even bother sending it to me (apparently i missed their little competition), imagine. AFTER 3k.
> 
> I had to hustle and beg to get that ****, how is this an "OPEN" community, my ass.
> 
> I can confirm my DUKE (2 pin) works incredibly with this 2000w XOC bios, it performs as the fake XOC should have. Nothing has burnt, crashed, melted etc. i peak at 512w and hit 2290.
> 
> 
> I miss the days where we could push the limits ourselves. Now they lock their bioses unless I'm Linus or Pewdiepie. I've always been into OC and even hit 4th on my peak global (on water vs ln2!) but this new generation of $2k gpus and prioritization makes me believe my passion is misplaced. Well most of us here, seeing as our hands are typically tied.


Yep, it's very annoying  I feel you man, had the same issue where I had to beg MSI for a BIOS. Btw, did they send you the nvvdd tool?


----------



## lkkane00

dangerSK said:


> Yep its very annoying  i feel u man, had the same issue where i had to beg MSI for Bios. btw did they sent u nvvdd tool ?


GALAX left me to DIE. They gave exactly 0 ***** about me.

I was lucky to have a friend. Yeah, it's a zip with their own hardcore version of their Xtreme Tuner that controls voltages, the BIOS and some directions.


Can confirm: when I tested the DUKE on it without the BIOS change (even on the Galax 380), I got an error on launch. When flashed to the HOF XOC I got no error, but no further voltage controls (as on the 2 HOFs) were usable.


----------



## J7SC

dangerSK said:


> Sorry cant really, tell/help because i didnt have time to do Port royale yet, but raising just by 150mhz ? thats low. on my lightning i can do easily +1200mhz on vrams in AB.


 
No, no - I meant that the 150 MHz extra in the third run was ON TOP of the initial 839 MHz+ OC (total VRAM OC was +989 MHz; it ran fine but the score point gain was minuscule). So I was thinking that spending the (stock BIOS) power / watt 'budget' on the GPU makes more sense than spending it on VRAM for the next few runs... wondering what folks think who have done more Port Royal w/ 2080 Tis (especially w/ SLI) than I have.


----------



## dangerSK

lkkane00 said:


> GALAX left me to DIE. They gave exactly 0 ***** about me.
> 
> I was lucky to have a friend. Yeah its a zip with their own hardcore version of their xtreme tuner that controls voltages, the bios and some directions.
> 
> 
> Can confirm when I tested DUKE on it without bios change (even on galax 380), got error on launch. When flashed to HOF XOC got no error, but no further voltage controls (as is on the the 2 HOFS) were usable.


Can you please send it to me? Wanna try it on the Lightning just for fun  Please PM me if you're okay with it. Thanks


----------



## lkkane00

dangerSK said:


> Can u please send it to me, wanna try on Lightning just for fun  please PM me if youre okay with it. Thanks


just wrote you


----------



## dangerSK

lkkane00 said:


> just wrote you


I don't have any new messages in my inbox; I wrote you a PM now, can you send it there please?


----------



## lkkane00

dangerSK said:


> I dont have any new messages in inbox, i wrote u pm now, can u sent it there please ?


got it. replied to that. hit me back on there if it works.


----------



## VPII

Hemorz said:


> I've got a 1200w PSU so it shouldn't be that..
> 
> Thanks for trying man hopefully someone can find out what's going on or atleast inno3d
> 
> I'll try my old 1000w PSU for comparison
> 
> Sent from my SM-G965F using Tapatalk


Hi Hemorz, do me a favour and run GPU-Z and tick the box on the sensors tab that records your card's output to a file. Then do a custom run of Time Spy with only the second GPU test. After the test, check what your actual GPU clocks were during the test.

Secondly, the 73°C core temp you get is too high for the GPU under water. You should get max 50°C, in my case mid 40s. Try to reseat the water block and check that the thermal pads line up with the VRAM and VRM. Don't fasten too tightly at first. First fasten each screw slightly but equally, then refasten little by little all around.

Sent from my SM-G960F using Tapatalk
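Once GPU-Z has written its sensor log, a few lines of script will pull the clock behaviour out of it. A sketch under assumptions: the column names below match what recent GPU-Z builds write, but check your own log's header, and the sample rows are invented:

```python
import csv
import io

# GPU-Z writes a comma-separated sensor log; header/rows here are
# illustrative, with the padding spaces GPU-Z puts around commas.
LOG = """\
Date , GPU Clock [MHz] , GPU Temperature [C]
2019-03-24 20:00:01 , 1350.0 , 41.0
2019-03-24 20:00:02 , 1905.0 , 55.0
2019-03-24 20:00:03 , 1875.0 , 62.0
2019-03-24 20:00:04 , 1860.0 , 66.0
"""

def clock_summary(log_text, column="GPU Clock [MHz]"):
    """Return (min, avg, max) of one sensor column from a GPU-Z log."""
    reader = csv.DictReader(io.StringIO(log_text), skipinitialspace=True)
    reader.fieldnames = [name.strip() for name in reader.fieldnames]
    clocks = [float(row[column]) for row in reader]
    return min(clocks), sum(clocks) / len(clocks), max(clocks)

lo, avg, hi = clock_summary(LOG)
print(f"min {lo:.0f} / avg {avg:.0f} / max {hi:.0f} MHz")
```

A big gap between min and average during the GPU test is exactly the throttling behaviour being chased in this thread.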


----------



## Hemorz

VPII said:


> Hi Hemorz do me a favour and run gpuz and tick the monitor tab that will record your cards output to a file. The do a custom run of timespy only the second gpu test. After the test check what was your actual gpu clocks during the test.
> 
> Secondly the 73c core temp you get is too high for the gpu under water. You shoukd get max 50c, in my case mid 40's. Try to reseat the water block and check that the thermal pads line up with vram and vrm. Dont fasten to tightly at first. First fasten each screw slightly but equally then refasten little by little all around.
> 
> Sent from my SM-G960F using Tapatalk


Hey mate, I've put in an RMA through Inno3D as I don't think there is anything that can be done. I've swapped out every component in my PC and it's still performing worse than a 980 Ti at stock settings.
I've never had a GPU this bad before... they all at least get decent benches straight out of the box. I agree with you that it could have been seated better.
Do you think it's worth me putting it back in my system?
I'm going to put the stock fan back on and put it in a completely different system on air before it gets returned. I just don't understand how a 300A binned chip that is meant to have quality control can be this bad?

Sent from my SM-G965F using Tapatalk


----------



## Hemorz

VPII said:


> Hi Hemorz do me a favour and run gpuz and tick the monitor tab that will record your cards output to a file. The do a custom run of timespy only the second gpu test. After the test check what was your actual gpu clocks during the test.
> 
> Secondly the 73c core temp you get is too high for the gpu under water. You shoukd get max 50c, in my case mid 40's. Try to reseat the water block and check that the thermal pads line up with vram and vrm. Dont fasten to tightly at first. First fasten each screw slightly but equally then refasten little by little all around.
> 
> Sent from my SM-G960F using Tapatalk


Wow, I know why the temps were comparable to air..... first time doing a GPU block = fail. But does everything else on the board look ok?










Sent from my SM-G965F using Tapatalk


----------



## J7SC

Hemorz said:


> Wow I know why *the temps were comparable to air...* first time doing a GPU block = fail. But does everything else on the board look ok?


I thought *they were aircooled* :D w/ the temps you showed earlier


----------



## ReFFrs

lkkane00 said:


> got it. replied to that. hit me back on there if it works.


Oh, you're a hero. Please check your inbox, I have written to you as well.


----------



## JustinThyme

Hemorz said:


> Wow I know why the temps were comparitive to air.....first time doing GPU block = fail. But everything else on the board look ok?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Sent from my SM-G965F using Tapatalk


Can't make out the impressions on the mosfets and chokes, but so long as you remedy the fail on the TIM and have good impressions on the thermal pads, you should see an immediate drop. I ditched EK products, but you should still get decent performance. What's the rest of your loop? Missed what you have for rad/s and pump/s and what else you have in there, to give you a guesstimate on what temps you should see. My liquid temp hovers around 28-29°C, with idle temps around 32°C and loaded at 45°C for two 2080 Tis. Check my rig for comparisons. Using Phanteks blocks and a butt load of rads, fans and pumps.


----------



## Hemorz

JustinThyme said:


> Cant make out the impressions on the mosfets and chokes but so long as you remedy the fail on the TIM and have good impressions on the thermal pads then you should see an immediate drop. I ditched EK products but you should still get decent performance. What the rest of your loop, missed what you have for rad/s and pump/s and what else you have in there to give you a guestimate on what temps you should see. My liquid temp hovers around 28-29 with idle temps around 32CC and loaded at 45C for two 2080Tis. Check rig my rig for comparisons. Using Phanteks blocks and a butt load of rads, fans and pumps.


Temps didn't bother me as much as my 2080 Ti performing like a 980 Ti....

My setup is overkill for it, just bad paste. I've got 2x 360mm rads in pull.

Gonna test it in my daughter's PC while I wait for them to get back to me with the RMA.


Whoops, it had 'warranty void if removed' stickers on it.... should I be worried?









Sent from my SM-G965F using Tapatalk


----------



## Hemorz

JustinThyme said:


> Cant make out the impressions on the mosfets and chokes but so long as you remedy the fail on the TIM and have good impressions on the thermal pads then you should see an immediate drop. I ditched EK products but you should still get decent performance. What the rest of your loop, missed what you have for rad/s and pump/s and what else you have in there to give you a guestimate on what temps you should see. My liquid temp hovers around 28-29 with idle temps around 32CC and loaded at 45C for two 2080Tis. Check rig my rig for comparisons. Using Phanteks blocks and a butt load of rads, fans and pumps.


I'm a little stuck... the air cooler is on but it feels a bit loose. Have I missed any screws? I thought there were only 4 connecting it to the actual card. The backplate is fine, but I just can't work out the fan... regret not planning it better.

Sent from my SM-G965F using Tapatalk


----------



## Hemorz

Holy **** I think it's acting like a 980 Ti because it's throttling due to needing something plugged into the fan headers..... is this a thing? I've got it on air ATM and it just gave me double my previous score on Heaven 

Sent from my SM-G965F using Tapatalk


----------



## x-speed69

Hemorz said:


> Wow I know why the temps were comparitive to air.....first time doing GPU block = fail. But everything else on the board look ok?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Sent from my SM-G965F using Tapatalk



Contact definitely looks poor. Maybe double check that you have placed the thermal pads on the right spots. If I remember correctly, the EK block came with 0.5mm and 1.0mm pads, so placing them on the wrong spots could raise the block and make poor contact with the core.
Why are the bottom RAM pads black and not blue?


----------



## Hemorz

VPII said:


> Hi Hemorz do me a favour and run gpuz and tick the monitor tab that will record your cards output to a file. The do a custom run of timespy only the second gpu test. After the test check what was your actual gpu clocks during the test.
> 
> Secondly the 73c core temp you get is too high for the gpu under water. You shoukd get max 50c, in my case mid 40's. Try to reseat the water block and check that the thermal pads line up with vram and vrm. Dont fasten to tightly at first. First fasten each screw slightly but equally then refasten little by little all around.
> 
> Sent from my SM-G960F using Tapatalk


Hey mate, it's back in my system reseated.... what's super strange is that it performed fine on air 5 degrees hotter under load, but just won't perform with my waterblock. I've flashed it to the 380W Galax BIOS and still no cigar.... after putting on new thermal paste it's idling at 30 and under load it's at 70.

I can't do a custom test ATM but can do the whole thing.. score is bad.. also, when I had the air cooler on it was double my current Heaven score. Could it be my motherboard causing me these issues, or is it just nothing plugged into the fans, like the Lightning issue?

I've never had this much bad luck with a GPU before...

Time Spy score is 7032 with a graphics score of 6836. Only 38 fps for test 2 and 45 fps for test 1....









Sent from my SM-G965F using Tapatalk


----------



## Hemorz

x-speed69 said:


> Contact definitely looks poor. Maybe double check that you have placed thermalpads on right spots. If I remember correctly EK block came with 0.5mm and 1.0mm pads so placing them on wrong spots could raise the block and make poor contact to core.
> Why is bottom ram pads black and not blue?


I've changed the bottom ones, but honestly temperature isn't my biggest issue here... I'm not thermal throttling. It's somehow in safe mode, without looking like I'm losing core clock, just **** performance.

See previous GPU-Z and score.

My air cooler was hotter and double the performance. 

Sent from my SM-G965F using Tapatalk


----------



## x-speed69

Hemorz said:


> I've changed the bottom ones but honestly temperature isn't my biggest issue here...I'm not thermal throttling. It's somehow in safe mode without looking like I'm losing core but just **** performance.
> 
> See previous gpuz and score.
> 
> My air cooler was hotter and double the performance.
> 
> Sent from my SM-G965F using Tapatalk



If possible, while on water, connect the stock fans back to the card and test it then. My MSI has a problem where it goes into safe mode when no fans are connected to the card.


edit: the MSI problem is very clear because the core is stuck at 1350, but in your case the core clocks seem fine? Very strange...


edit2: generally speaking 70°C is very good, but on water that seems way too high. My card only hits max 50°C, no CPU in the loop, one 280 rad.


----------



## BudgieSmuggler

Just jumped in, so it may have been mentioned, but is your Nvidia control panel set to prefer maximum performance? Confident you've checked it, but thought I'd throw it out there all the same


----------



## Hemorz

x-speed69 said:


> If possible while on water connect stock fans back to the card and test it then. My MSI has a problem where it goes to safe mode when no fans are connected to the card.
> 
> 
> edit: MSI problems is very clear because core is stuck 1350 but on your case core clocks seems fine? very strange...
> 
> 
> edit2: generally speaking 70c is very good but on water that seems way too high. My card only hits max 50c, no cpu in the loop, one 280 rad.


There isn't room to fit a fan under the block. 
Not too sure what to do... it's either the mobo or not having fans plugged in. With the temp, would it be just because I have slow, quiet RPM fans? 2x 360mm rads in pull for CPU and GPU. Or have I done the paste **** again...

Yeah, I've checked that box in the NVIDIA control panel 

Sent from my SM-G965F using Tapatalk


----------



## J7SC

Hemorz said:


> There isn't room to fit a fan under the block.
> Not too sure what to do...it's either mobo or not having fans plugged in.with temp would it be just because I have slow quiet rpm fans? 2 X 360mm rads in pull for CPU and GPU. Or have I don't paste **** again...
> 
> Yeah I've checked that box in NVIDIA
> 
> Sent from my SM-G965F using Tapatalk


 
If you had almost double the score on air (w/ stock GPU fans) in Heaven and now you don't when back on water, you may be running into some sort of sensor issue - which could be intended by the vendor as a 'safety'. You already know what happened to folks w/ the Lightning Z moving to water without the fans connected. It could very well be hard-wired via sensors on your Inno3D model's PCB, and it does not have to materialize just as a '1350 MHz safe mode', but could throttle and 'hedgehog' in other ways. I realize that w/ water-blocks your fan headers might be obscured, but maybe you can get something rigged up. In any case, if you had that big a jump in score (on air), you know it is not the system or card as such - and that counts as good progress!


----------



## Hemorz

J7SC said:


> If you had almost double the score on air (w/stock GPU fans) in Heaven and now you don't when being back on water, you may be running into some sort of sensor issue - which could be intended by the vendor as a 'safety'. You already know what happened to folks w/ Lightning Z moving to water and w/o the fans connected. It could very well be hard-wired via sensors on your Inno model's PCB, and it does not have to materialize just as '1350 MHz Safemode', but could throttle and 'hedgehog' in other ways. I realize that w/ water-blocks, your fans headers might be obscured, but may be you can get s.th. rigged up. In any case, if you had that big jump in score (on air), you know it is not the system or card as such - and that counts as good progress !


Yeah, and other advice I've gotten is that the main B450 board might be the culprit... the card worked fine on air on an Asus Sabertooth Z77.

I guess all I can try now is connecting it through the block on my Asus board.... What a pain.

Sent from my SM-G965F using Tapatalk


----------



## kx11

Contact Inno3D directly and get an official answer for your problem instead of testing and wasting time.


----------



## Hemorz

kx11 said:


> contact inno3d directly and get an official answer for your problem instead of testing and wasting time


I've shot off an email and support ticket as well as posted on their Facebook page lol

Sent from my SM-G965F using Tapatalk


----------



## J7SC

Hemorz said:


> Yeah and other advice I've gotten is the main b450 board might be the culprit...card worked fine on air on an Asus sabertooth z77.
> 
> I guess all I can try now is connecting it through the block on my Asus board .... *What a pain*.
> 
> Sent from my SM-G965F using Tapatalk


 
...or you could start again w/ a fresh build that gives you less pain  
(though the wallet will likely commit violence):


----------



## Cyber Locc

So, my best run thus far, what do y'all think? 

https://www.3dmark.com/spy/6676616 - I know my CPU sucks lol, an i9-9940X is on the way as we speak  However, as far as the GPU score goes, 7715 not bad? I feel I could do better with a better CPU, but then I was watching Jay's video, and the fact he had to use an AC blowing on the rad to get a similar score had me feeling good lol.


I love the card though, so ready to get another one  Wifey grounded me though, no more parts for a few months. Wives don't like it when you blow 5k on PC stuff . Next on the list is SLIing these bad boys though! And some of those 970 EVO 2TBs . My old SSDs are ready! So sick of SATA ports.


----------



## JustinThyme

If the card performed well on air, then the card is not the problem. I'd be looking at the install of the block and flow. You should be worried about temps, as that defines everything. It's most likely performing less than it should because of thermal throttling. The lower you can get the temps, the better chance at an overclock as well. With what you have listed you should never hit 50°C; mid 40s is what you should expect. Make sure you are using the correct screws to mount the block. They come with their own set, don't attempt anything else. If your block has wobble and can move around, you don't have a good mount. If you followed the instructions and used the right hardware, I'd reach out to EK and return the block.

No, you don't need anything plugged into the fans. You need something to pull the heat off the chip and the VRMs to keep it from thermal throttling.


----------



## Hemorz

JustinThyme said:


> Iff the card performed well on air then the card is not the problem. Id be looking at the install of the block and flow. You should be worried about temps as that defines everything. Its most likely performing less than it should because of thermal throttling. The lower you can get the temps the better chance at an over clock as well. With what you have listed you should never make 50C, more in the mid 40s is what you should expect. Make sure you are using the correct screws to mount the block. They come with their own set, dont attempt anything else. I your block has wobble and can move around you dont have a good mount. If you followed the instructions and used the right hardware Id reach out to EK and return the block.
> 
> 
> 
> No you dont need anything plugged into the fans. You need something to pull the heat off the chip and the VRMS to keep it from thermal throttling.


If it was thermal throttling, wouldn't the temps spike hotter than the air cooled card? 

The EK Vector block came with so many screws!!! What size should be in the middle securing it? Short or long?

Sent from my SM-G965F using Tapatalk


----------



## Hemorz

Ok, thanks for everyone's help.... I didn't realise it would thermal throttle without going close to the limit... or at least spiking there. Just reseated the GPU for the 3rd time with a 3rd paste and scored 13,500 (15k GPU score) in Time Spy, and temps only hit a max of 47 degrees on stock settings. Time to overclock now 

Damn, the coil whine is bad...


Sent from my SM-G965F using Tapatalk


----------



## Cyber Locc

Hemorz said:


> Ok thanks for everyone's help....I didn't realise it would thermal throttle without going close to the limit...or atleast spiking there. Just reseated GPU for the 3rd time with a 3rd paste and scored 13,500 (15k GPU score)in time spy and temps only hit a max of 47 degrees on stock settings. Time to overclock now
> 
> Sent from my SM-G965F using Tapatalk


AFAIK the card thermal throttles every 10 degrees from 30 forward, so if it's under 30°C you will get your best clocks; at 30-40 they will be reduced, 40-50 same deal, etc.
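GPU Boost's behaviour here can be sketched as a simple step function. All the constants below (bin start, bin width, MHz shaved per step) are rough community observations for Turing, not NVIDIA-documented values:

```python
def boost_clock_estimate(max_boost_mhz, temp_c,
                         bin_start_c=30, bin_width_c=10, step_mhz=15):
    """Rough model of GPU Boost shaving clocks as temperature rises.
    The constants are assumptions for illustration, not spec values."""
    if temp_c < bin_start_c:
        return max_boost_mhz          # coldest bin: full boost
    bins = (temp_c - bin_start_c) // bin_width_c + 1
    return max_boost_mhz - bins * step_mhz

# How a hypothetical 2100 MHz max boost would step down with temperature:
for t in (25, 35, 47, 65):
    print(f"{t}C -> ~{boost_clock_estimate(2100, t)} MHz")
```

It's why shaving even 10°C off water temps can be worth a bin or two of sustained clock, well before any hard thermal limit.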


----------



## Hemorz

Cyber Locc said:


> The card thermal throttles at every 10 degrees, by a pretty big margin, from 30 forward, so if its under 30c you will get your best clocks, at 30-40 they will be reduced, 40-50 same deal, ect.


Wow is that a new thing? I thought it would only throttle if you hit the limit 

Just got 9300 on superposition 1080p extreme... I'm just glad to not have to pull this ****er out of my loop again haha

Sent from my SM-G965F using Tapatalk


----------



## Hemorz

Hemorz said:


> Wow is that a new thing? I thought it would only throttle if you hit the limit
> 
> Just got 9300 on superposition 1080p extreme... I'm just glad to not have to pull this ****er out of my loop again haha
> 
> Sent from my SM-G965F using Tapatalk


10,066 in superposition 1080p extreme with +150 core and +600 memory
It's getting up there!

Sent from my SM-G965F using Tapatalk


----------



## Hemorz

Hey, question... if I made it into the top 50 of single 2080 Ti for Unigine Superposition 1080p Extreme, does that mean I have a great card?
I just scored pretty high with +1275 memory and +180 core at a temp of 57 degrees.

Sent from my SM-G965F using Tapatalk


----------



## J7SC

Hemorz said:


> 10,066 in superposition 1080p extreme with +150 core and +600 memory
> It's getting up there!
> 
> Sent from my SM-G965F using Tapatalk


 

...glad it's working for you now... those MHz spikes / hedgehogs were the result of very rapid temp increases because of bad contact of the w-block?! And yes, MHz reduction with temps is not a single hard border with RTX, but a step-by-step affair... I usually manage to keep mine below 38C, but not always...


----------



## The_Rocker

I have two ASUS 2080 Ti Turbo blower cards; temps are all good, below 70C with my custom fan curve, however I am hitting the power limit and clocks fluctuate between 1800-1900MHz in Firestrike Ultra.

I know it's a non-A card but it is reference design. Can I flash an A-variant BIOS such as the eVGA Ultra to it?

Or can I not flash anything that increases the power limit?

I heard the A is just a binned chip, and that it's not physically different?

Any modded BIOSes that remove the power limit from non-A cards?


----------



## MrTOOSHORT

The_Rocker said:


> I have two ASUS 2080Ti Turbo Blower cards, temps are all good, below 70c with my custom fan curve, however I am hitting the power limit and clocks fluctuate between 1800-1900Mhz in Firestrike Ultra.
> 
> I know its a Non-A card but it is reference design. Can I flash an A variant BIOS such as the eVGA Ultra to it?
> 
> Or can I not flash anything that increases power limit?
> 
> I heard the A is just a binned chip, and that its not physically different?
> 
> Any modded BIOS's that remove the power limit from Non A cards?



Can't flash a non-A card with an A BIOS. You can do the shunt mod, 3 options: either solder 8mo resistors on top of the 5mo ones, remove the 5mo ones and solder 3mo in their place, or buy a silver pen and go over the shunts:


*https://www.overclock.net/forum/27793140-post205.html*


----------



## The_Rocker

MrTOOSHORT said:


> Can't flash a non-A card with an A BIOS. You can do the shunt mod, 3 options: either solder 8ohm resistors on top of the 5ohm ones, remove the 5ohm ones and solder 3ohm in their place, or buy a silver pen and go over the shunts:
> 
> 
> *https://www.overclock.net/forum/27793140-post205.html*


Would this make a difference if the power limit is still in software (BIOS)?


----------



## MrTOOSHORT

Honestly, the shunt mod just allows the card to pull more power. So it's really only beneficial for water-cooled cards and more exotic cooling. I wouldn't do it on the stock air blower cooling.

Read this thread, good read:

*https://www.overclock.net/forum/69-nvidia/1714864-2080-ti-working-shunt-mod.html*


----------



## The_Rocker

MrTOOSHORT said:


> Honestly, the shunt mod will give the card allowable access to pull more power. So it's really only beneficial for water cooled cards and more exotic cooling. I wouldn't do it on stock air blower cooling.
> 
> Read this thread, good read:
> 
> *https://www.overclock.net/forum/69-nvidia/1714864-2080-ti-working-shunt-mod.html*


My temps are under 70C, so that suggests I have a fair amount of headroom. Even without overclocking any more, I should then be able to consistently boost over 2GHz?

EDIT:
Ordered this and will give it a go.

https://www.mgchemicals.com/product...conductive-pens/842ar-p-silver-conductive-pen


----------



## MrTOOSHORT

70°C is for the core; the VRMs will get a lot more toasty with the shunt mod.


----------



## ALSTER868

Hey guys, what does the "aux voltage" slider in MSI AB for the Lightning Z do? What's the use of it?


----------



## Renegade5399

The_Rocker said:


> My temps are under 70c, so thats suggests I have a fair amount of headroom. Even without overclocking any more I should then be able to consistently boost over 2Ghz?
> 
> EDIT:
> Ordered this and will give it a go.
> 
> https://www.mgchemicals.com/product...conductive-pens/842ar-p-silver-conductive-pen


At 70°C you have almost no headroom. You have to understand how the thermal throttling works on these cards. Those of us that have done the shunt mod try to get temps BELOW 45°C at 100% load. We have to resort to things like drawing in cold winter air or installing a ridiculous amount of radiators. I have (2) 360mm and (1) 240mm right now and am considering another 240mm, since I have 2 GPUs and a coal-fired CPU in the loop. Every 10°, beginning at (I think) 30°C, causes a reduction of clocks because of how Boost 3.0 and its associated "features" work. I would NOT do the shunt mod on those Turbo model cards with the stock cooler. The VRM and GPU, and in turn the whole back of the card, will get hot. Like, really hot. 

My suggestion is to work with what you have for now and save up for some type of water cooling for the cards. Then you can do the shunt mod, as long as you fully understand the risks (card damage, void warranty).

The power limit in the stock BIOS does not matter once the shunt mod is done. Lowering the shunts' effective resistance causes the measured voltage drop reported back to the controller to be lower. This fools the controller into thinking there is less power usage than what is actually being drawn. You hit the limit of the silicon before you hit the "new" power limit.

The nice thing about forums like this is access to knowledge of how to perform such mods. The downside is misunderstanding/misinformation. The shunts are NOT 5 Ohm. The add-on resistors for the shunts are NOT 8 Ohm. The total replacement resistor for the shunts is NOT 3 Ohm. All values are in milliohms. The stock shunts are 5 mOhm (milliohm). You add on an 8 mOhm resistor to get the target of ~3 mOhm. If you're wondering how this works and don't understand how resistors behave in parallel, please consider going out and researching that, getting a basic understanding, then consider whether modding like this is for you. Also, if you've never done more than just tin wires with a soldering iron, and/or have a 300W broad-tip soldering iron, don't try this mod. Get a proper tip for SMD work (this is personal preference; I use an SMD "broad" tip which is actually nice and tiny), a proper 150-200W or lower iron if you like, and practice on a dead board with similar SMDs on it. One of those magnifying lights really, really helps too. I'm not saying don't try it, or that you don't have the skills/understanding to do it. I am saying go do the proper prep work first.

Good luck!
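The add-on variant of the mod described above comes down to parallel-resistor arithmetic. A quick sketch of the math: the 5 and 8 mOhm values come from the post above, and the reporting ratio is just Ohm's law applied to the controller's fixed assumption, not a measured figure.

```python
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

stock_shunt = 5.0   # mOhm, stock value per the post above
piggyback   = 8.0   # mOhm, resistor soldered on top of the stock shunt

effective = parallel(stock_shunt, piggyback)
print(f"effective shunt: {effective:.2f} mOhm")

# The controller infers current from the voltage drop across the shunt
# (V = I * R) while still assuming the stock 5 mOhm, so a smaller R
# under-reports power by the ratio of the two resistances.
print(f"reported draw: {effective / stock_shunt:.0%} of actual")
```

That works out to roughly 3.08 mOhm effective, i.e. the card reports only about 62% of its real draw, which is why the "450W" it then allows itself is genuinely more heat in the VRMs than the BIOS number suggests.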


----------



## Renegade5399

ALSTER868 said:


> Hey guys, what does the ''aux voltage'' slider in MSI AB for the Lightning Z do? What's the use of it?


Typically this is a voltage adjusted while using LN2 to keep something like the RAM warm. Some cards even have dedicated warmer circuits hooked to an AUX voltage bus.


----------



## willverduzco

Hemorz said:


> Hey question.....if I made it in top 50 of single 2080ti for unigine superposition 1080p extreme...does that mean I have a great card?
> I just scored pretty high with +1275 memory and +180 core and a temp of 57 degrees.
> 
> Sent from my SM-G965F using Tapatalk


Your score of 10.3k is OK-ish for single-GPU Superposition 1080p Extreme, though definitely nothing too special. That is about what you would expect from 2070 MHz when the core isn't fluctuating due to power, voltage, or temperature limits. The only reason it "would be in the top 50" is that very few people upload to the Superposition leaderboard, since it's behind a paywall.

To show what a tiny bit more OC does, here's my card at 2115-2130 core / 8080 mem. At the time it "would have been in the top 14," but that means nothing because nobody uses that leaderboard, due to the aforementioned paywall. The score below was with the Galax 380W BIOS and only +1080 mem. I have since loaded the Strix XOC BIOS, which works quite well on my Strix PCB, and upped my Samsung memory to +1350 for daily usage, since I am no longer irrationally afraid of damaging my memory just from clocking it higher.


----------



## MrTOOSHORT

Thanks for the correction in my post Renegade5399.


----------



## fleps

Renegade5399 said:


> At 70°C you have almost no headroom. You have to understand how the thermal throttling works on these cards. Those of us that have done the shunt mod try and get temps BELOW 45°C at 100% load. We have to resort to things like drawing in cold winter air or installing a ridiculous amount of radiators. I have (2) 360mm and (1) 240mm right now and am considering another 240mm since I have 2 GPUs and a coal fired CPU in the loop. Every 10° beginning at (I think) 30°C causes a reduction of clocks because of how Boost 3.0 and it's associated "features" work. I would NOT do the shunt mod on those Turbo model cards with the stock cooler. The VRM and GPU and in turn, the whole back of the card will get hot. Like, really hot.
> 
> My suggestion is to work with what you have for now and save up for some type of water cooling for the cards. Then you can do the shunt mod, as long as you fully understand the risks (card damage, void warranty).
> 
> The power limit in the stock BIOS does not matter once the shunts are installed. Changing the value of them causes the measured voltage reported back to the controller to be higher. This fools the controller into thinking there is less power usage than what is actually being drawn. You hit the limit of the silicon before you hit the "new" power limit.
> 
> The nice thing about forums like this are access to knowledge of how to perform such mods. The downside is misunderstanding/misinformation. The shunts are NOT 5 Ohm. The add-on resistors for the shunts are NOT 8 Ohm. The total replacement resistor for the shunts are NOT 3 Ohm. All values are in milliohm. The stock shunts are 5 mOhm (milliohm). You add on an 8 mOhm resistor to get the target goal of ~3mOhm. If you find yourself in wonder as to how this works and don't understand how resistors work in parallel, please consider going out and researching that, getting a basic understanding, then consider if modding like this is for you. Also, if you've never done more than just tin wires with a soldering iron and/or have a 300W broad tip soldering iron, don't try this mod. Get a proper tip for SMD work (this is personal preference, I use an SMD "broad" tip which is actually nice and tiny), a proper 150-200W or lower if you like iron, and practice on a dead board with similar SMDs on it. One of those magnifying lights really really helps too. I'm not saying don't try it or you don't have the skills/understanding to do it. I am saying go do the proper prep work first.
> 
> Good luck!


Pretty much this, just a few comments (as far as I know and have researched): 
- It's Boost 4.0 now
- The thermal throttling starts above 45C, as far as I saw in the NVIDIA engineer interview with GamersNexus


----------



## arrow0309

Hi, there's this newly added (unverified) Galax BIOS on TechPowerUp with reference clocks but a 450W max (+50%):

https://www.techpowerup.com/vgabios/209434/209434

Also KFA2, same BIOS (I assume it's the air-cooled HOF):

https://www.techpowerup.com/vgabios/207475/207475

Has anyone tested it on a reference / FE board?


----------



## Cyber Locc

Hemorz said:


> Hey question.....if I made it in top 50 of single 2080ti for unigine superposition 1080p extreme...does that mean I have a great card?
> I just scored pretty high with +1275 memory and +180 core and a temp of 57 degrees.
> 
> Sent from my SM-G965F using Tapatalk




Very nice, I'm going to have to come for you on SP now, Soon TM. 

I was just looking through the Time Spy Extreme results and I am sad. If I wasn't being held back by my Haswell-E CPU, I would be 57th in Time Spy Extreme. Stupid CPU, good thing a new one is coming


----------



## dangerSK

arrow0309 said:


> Hi, there's this new added (unverified) Galax bios on Techpowerup with reference clocks but 450W max (+50%):
> 
> https://www.techpowerup.com/vgabios/209434/209434
> 
> Also KFA2, same bios (I assume it's the HOF aircooled):
> 
> https://www.techpowerup.com/vgabios/207475/207475
> 
> Has anyone tested it on a reference / FE board?


Wait till someone leaks the OC Lab BIOS with a 2kW power limit.


----------



## Hemorz

dangerSK said:


> Wait till someone leaks OC Lab bios with 2k W pwr limit.


Is it just temperature preventing more power on cards? 

Sent from my SM-G965F using Tapatalk


----------



## jura11

Cyber Locc said:


> So my best run thus far, what do yall think?
> 
> https://www.3dmark.com/spy/6676616 - I know my CPU sucks lol, I9 9940x on the way as we speak  However as far as the gpu score 7715 not bad? I feel I could do better with a better CPU, but then I was watching Jays video and the fact he had to use an AC blowing on the rad to get a similar score had me feeling good lol.
> 
> 
> I love the card though, so ready to get another one  wifey grounded me though, no more parts for a few months. Wifes dont like it when you blow 5k on pc stuff . Next on the list is SLIing these bad boys though! And some of those 970 Evo 2tbs . My old SSDs are Ready! So sick of Sata Ports.


Hi there

That's quite a good OC, 2175MHz on the core, and I assume +1000MHz on the VRAM?

A Graphics Score of 7715 is nice. I can do 7688 max, that's with 2115MHz and +1125MHz on VRAM, and my CPU Score is 4239 (4309 is best), that's with a 5960X at 4.7GHz 

Here is my score; please ignore the GTX 1080 Ti with which I supposedly scored this, not sure why 3DMark reports it as my main GPU, although I have 4 GPUs

https://www.3dmark.com/3dm/34874282

Hope this helps

Thanks, Jura


----------



## jura11

fleps said:


> Pretty much this, just a few comments (as far I know and researched):
> - It's Boost 4.0 now
> - The thermal throttling starts at above 45C as far I saw on the nvidia engineer interview with Gamernexus


Hi there 

I'm pretty sure the first thermal throttling or downclocking step starts at 37-38°C or somewhere around that figure; the next step I think will be around 47-48°C, though I'm not sure about that one, as I have seen such temperatures on my RTX 2080Ti 

These downclocking steps are definitely different from Pascal's NVIDIA Boost 3.0, from what I know and have tested 

https://tpucdn.com/reviews/NVIDIA/GeForce_GTX_1080/images/clock_analysis2.jpg

Hope this helps 

Thanks, Jura


----------



## Hemorz

jura11 said:


> Hi there
> 
> 
> 
> That's quite good OC 2175MHz on core and assuming 1000MHz on VRAM?
> 
> 
> 
> Graphics Score 7715 is nice,I can do max 7688 that's with 2115MHz and 1125MHz on VRAM and my CPU Score is 4239(4309 is best) that's with 5960X with 4.7GHz
> 
> 
> 
> Here is my score and please ignore GTX1080Ti with which I scored this,not sure why 3DMark reports this is my main GPU,although I have 4 GPUs
> 
> 
> 
> https://www.3dmark.com/3dm/34874282
> 
> 
> 
> Hope this helps
> 
> 
> 
> Thanks,Jura


Wait, hang on... why is my result better than 98% of all results for Time Spy? I get a graphics score of 16,500. Am I doing something wrong?

Sent from my SM-G965F using Tapatalk


----------



## jura11

Hemorz said:


> Wait hang on....why is my result better than 98% of all results for time spy? I get a graphics score of 16,500. Am I doing something wrong?
> 
> Sent from my SM-G965F using Tapatalk


Hi there 

We both ran Time Spy Extreme, not the normal one. I haven't tried the normal Time Spy, so I'm not sure what a good score is there

Can you try Time Spy Extreme and post your results? 

Hope this helps 

Thanks, Jura


----------



## J7SC

jura11 said:


> Hi there
> 
> I'm pretty sure first thermal throttling or downclocking starts at *37-38°C* or something around that figure, then next step or next thermal throttling or downclocking I think will be around 47-48°C with this I'm not sure as I have seen such temperatures on my RTX 2080Ti
> 
> These downclocking steps are definitely different to Pascal Nvidia Boost 3.0 what I know and tested
> 
> https://tpucdn.com/reviews/NVIDIA/GeForce_GTX_1080/images/clock_analysis2.jpg
> 
> Hope this helps
> 
> Thanks, Jura


 
I think you're right... Given very extensive w-cooling, and ambient at no more than 20C, I usually manage to stay below 35C max with one 2080 Ti (ie in Superposition), or 38C with two cards in other SLI tests. I always get the highest clocks (as near as I can relate it to an open GPU-Z sensor window) when staying below 38C. And as stated before, there is a series of GPU MHz down-steps (not just one or two) as temps increase with the latest NVIDIA boost on the 2080 Ti. To state the obvious [again], cooling a 2080 Ti as best as one can should be the first order of the day


----------



## Hemorz

jura11 said:


> Hi there
> 
> 
> 
> We are both run Timespy Extreme not normal one,I didn't tried normal Timespy there and therefore I'm not sure what is good score
> 
> 
> 
> Can you try Timespy Extreme and post yours results?
> 
> 
> 
> Hope this helps
> 
> 
> 
> Thanks, Jura


Getting graphics score of 7700 in extreme

Sent from my SM-G965F using Tapatalk


----------



## jura11

Hemorz said:


> Getting graphics score of 7700 in extreme
> 
> Sent from my SM-G965F using Tapatalk


Hi there 

That's not bad! Look, I'm getting 7688 with 2115MHz and +1125MHz on VRAM; anything above that will crash 

The above poster is getting 7715 with a 2175MHz OC, that's a very nice OC 

What is your OC?

Hope this helps 

Thanks, Jura


----------



## Hemorz

jura11 said:


> Hi there
> 
> 
> 
> That's not bad, have look I'm getting 7688 with 2115MHz and 1125MHz on VRAM, anything above that will crash
> 
> 
> 
> Above poster getting 7715 with 2175MHz OC that's very nice OC
> 
> 
> 
> What is yours OC?
> 
> 
> 
> Hope this helps
> 
> 
> 
> Thanks, Jura


Just playing around now... my core crashes at anything above +185, and memory can go quite high, but I'll try his settings.

Sent from my SM-G965F using Tapatalk


----------



## Hemorz

jura11 said:


> Hi there
> 
> 
> 
> That's not bad, have look I'm getting 7688 with 2115MHz and 1125MHz on VRAM, anything above that will crash
> 
> 
> 
> Above poster getting 7715 with 2175MHz OC that's very nice OC
> 
> 
> 
> What is yours OC?
> 
> 
> 
> Hope this helps
> 
> 
> 
> Thanks, Jura


Can you explain the VRAM measurement to me? Is that what you added on the slider after the stock 7000?

Sent from my SM-G965F using Tapatalk


----------



## jura11

Hemorz said:


> Can you explain to me the vram measurement? Is that what you added to the slider after stock 7000?
> 
> Sent from my SM-G965F using Tapatalk


Yes, I added an extra 1125MHz, i.e. +1125MHz on VRAM

+1125MHz is the best my memory can do; anything above crashes 

Hope this helps 

Thanks, Jura
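For reference, the slider arithmetic works out like this. A quick sketch assuming the 7000 MHz Afterburner baseline Hemorz mentions and the 2080 Ti's stock 352-bit bus; the doubling is just GDDR6 being double data rate.

```python
BASE_MHZ = 7000   # stock GDDR6 clock as shown on the Afterburner slider
BUS_BITS = 352    # 2080 Ti memory bus width

def mem_rate(offset_mhz):
    """Return (effective data rate in MT/s, bandwidth in GB/s)."""
    clock = BASE_MHZ + offset_mhz
    rate = clock * 2                          # GDDR6 is double data rate
    return rate, rate * BUS_BITS / 8 / 1000   # bits -> bytes, MB/s -> GB/s

print(mem_rate(0))     # stock
print(mem_rate(1125))  # the +1125 offset discussed above
```

So +1125 takes the card from the stock 14000 MT/s (616 GB/s) to 16250 MT/s, about 715 GB/s of theoretical bandwidth.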


----------



## Hemorz

jura11 said:


> Yes, added extra 1125MHz or +1125MHz on VRAM
> 
> 
> 
> +1125MHz is best what my memory can do, anything above crashes
> 
> 
> 
> Hope this helps
> 
> 
> 
> Thanks, Jura


What's the best way to determine it? Just leaving the core at 0 and doing memory only? If so, I am stable at +1300 (haven't tweaked further)
With core I'm stable at +175, but this could be different for every card, right?

Is it normal for the core to fluctuate during Time Spy?

Just trying to work out how to make this stable and give a more accurate result to you haha

Sent from my SM-G965F using Tapatalk


----------



## Hemorz

jura11 said:


> Yes, added extra 1125MHz or +1125MHz on VRAM
> 
> 
> 
> +1125MHz is best what my memory can do, anything above crashes
> 
> 
> 
> Hope this helps
> 
> 
> 
> Thanks, Jura


7722 stable... now fine tuning, just not sure of the best way haha

Stats are 2145 core / 2075 VRAM at 43 degrees

Edit: 2145/2081 got me a score of 7731

Edit 2: 2145/2081 but boosting core voltage by +40 gave me a score of 7750 [emoji848]









Sent from my SM-G965F using Tapatalk


----------



## jura11

Hemorz said:


> What's my best way to determine? Just leaving core at 0 and doing memory only? If so I am stable at +1300 (haven't tweaked further)
> With core I'm stable at +175 but this could be different for every card right?
> 
> Is it normal for core to fluctuate during timespy?
> 
> Just trying to work out how to make this stable and give a more accurate result to you haha
> 
> Sent from my SM-G965F using Tapatalk


Hi there 

What the best way is to determine your max is hard to say; every card and every RTX 2080Ti is different. Look, my Zotac RTX 2080Ti AMP will do 2115MHz as a max; with a bit of luck or very cold temperatures I can maybe do 2130MHz, which I think I have achieved only two or three times

For VRAM, +1125MHz is my max; if yours can do +1300MHz, that's awesome, without question 

I'm running the Galax 380W BIOS now; I assume you are on the stock BIOS 

With the core I prefer a custom Voltage/Frequency curve, and at 2115MHz I'm at +174MHz at 1.093v 

During Time Spy, are you hitting any limit (power etc.)? 

Can you check in GPU-Z if you are hitting the Power Limit etc.? 

These core fluctuations I only experienced with the stock Zotac BIOS when I hit the power limit. I would recommend trying to create a custom V/F curve for the core; maybe you can squeeze out a bit more. With a manual core OC and V/F curve I'm able to squeeze around 40-50MHz on top of what OC Scanner will "auto overclock" for me

During the core fluctuations, does your voltage fluctuate too? 

Hope this helps 

Thanks, Jura


----------



## Hemorz

jura11 said:


> Hi there
> 
> 
> 
> What is best way to determine what is yours best hard to say, every card and RTX 2080Ti is different, have look my Zotac RTX 2080Ti AMP will do 2115MHz as max,with bit of luck or very cold temperatures I can maybe do 2130MHz with which I have achieved I think only twice or three times as max
> 
> 
> 
> For VRAM +1125MHz is max, if yours can do +1300MHz that's awesome there, without the question
> 
> 
> 
> I'm running Galax 380W BIOS now, assuming you are on stock BIOS
> 
> 
> 
> With core I prefer custom Voltage/Frequency curve and at 2115MHz I'm at +174MHz at 1.093v
> 
> 
> 
> During the Timespy are you hitting any limit, power etc
> 
> 
> 
> Can you check GPU-Z if you are hitting Power Limit etc?
> 
> 
> 
> This core fluctuations I only experienced with stock Zotac BIOS and when I hit power limit, I would recommend try creating custom V/F curve for core, maybe you can squeeze bit more, with manual core and V/F curve I'm able to squeeze around 40-50MHz on top what OC scanner will "auto overclock" for me
> 
> 
> 
> Assuming during the core fluctuations yours voltage too fluctuate or not?
> 
> 
> 
> Hope this helps
> 
> 
> 
> Thanks, Jura


Hey mate, can you explain how to do the voltage curve? 

I'm flashed on the Galax 380W too... this is the best result I've gotten so far; I think the core clock does spike a little.

You got Messenger?

So that's the best score so far, with the following:

2145 core
2081 memory
41 degree temp
Time Spy Extreme GPU score 7752










Sent from my SM-G965F using Tapatalk


----------



## jura11

Hemorz said:


> Hey mate can you explain how to do the voltage curve?
> 
> I'm flashed on Galax 380w too....this is best result I've gotten so far I think core clock does spike a little.
> 
> You got messenger?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Sent from my SM-G965F using Tapatalk


Hi there 

2145MHz is quite nice there, without question. I still think you should be able to squeeze out a bit more; if your temperatures are under 37-38°C then you will gain an extra 15MHz 

With a custom V/F curve you can gain a few more MHz too; I can gain around 40-50MHz on top of what OC Scanner in MSI Afterburner reports as my max OC

Have a look at this YT video; it's for Pascal GPUs but works for Turing RTX as well 

https://youtu.be/koBxaf-KZgo

But still, +1300MHz on VRAM is very nice

Hope this helps 

Thanks, Jura


----------



## Hemorz

jura11 said:


> Hi there
> 
> 
> 
> 2145MHz is quite nice there without the question,I still think you should be able squeeze bit more if yours temperatures will be under 37-38°C then you will gain extra 15MHz
> 
> 
> 
> With custom V/F curve too you can gain extra few more MHz, I can gain around 40-50MHz on top what OC Scanner in MSI Afterburner reporting as my max OC
> 
> 
> 
> Have look on this YT video, its for Pascal GPUs but works for Turing RTX as well
> 
> 
> 
> https://youtu.be/koBxaf-KZgo
> 
> 
> 
> But still +1300MHz on VRAM is very nice
> 
> 
> 
> Hope this helps
> 
> 
> 
> Thanks, Jura


Thanks mate... hadn't even done OC Scanner haha, didn't know it was a thing. Trying that now to compare too

Sent from my SM-G965F using Tapatalk


----------



## JustinThyme

Hemorz said:


>


It would be more beneficial if you took screenshots instead of pictures of your monitor. Hard to see what's going on.
Also download and install TechPowerUp GPU-Z, run that, and take a screenshot of it as well.

In case you aren't versed in screenshots:

CTRL+PrtSc
Then open Paint and CTRL+V
You can select and crop, then save as PNG or JPEG.


----------



## Hemorz

JustinThyme said:


> It would be more beneficial if you took screen shots instead of pictures of your monitor. Hard to see whats going on.
> 
> Also download and install techpowerup GPUZ and run that, take a screen shot of that as well.
> 
> 
> 
> In case you arent versed in screen shots.
> 
> 
> 
> CTRL+PrtSc
> 
> Then open paint and CTRL+V
> 
> you can select and crop then save as PNG or JPEG.


Lol, I know how to take screenshots.. I thought all the numbers I took a photo of were legible, especially when you zoom

Just noticed GPU-Z turned out blurry... will get another, clearer picture soon

Sent from my SM-G965F using Tapatalk


----------



## Cyber Locc

jura11 said:


> Hi there
> 
> That's quite good OC 2175MHz on core and assuming 1000MHz on VRAM?
> 
> Graphics Score 7715 is nice,I can do max 7688 that's with 2115MHz and 1125MHz on VRAM and my CPU Score is 4239(4309 is best) that's with 5960X with 4.7GHz
> 
> Here is my score and please ignore GTX1080Ti with which I scored this,not sure why 3DMark reports this is my main GPU,although I have 4 GPUs
> 
> https://www.3dmark.com/3dm/34874282
> 
> Hope this helps
> 
> Thanks,Jura


Very nice score, and yeah, our Haswell-Es kill the CPU score  also you are correct, 1k on the mem 

I'm still messing with the memory. I literally put it to 1000, benched it a few times, got that score, and then had to go to bed. My wife hates my fans, which are loud when I bench, as I don't have an external rad for my TB, so 2 360 EK XEs for now with 3k Vardar Evos; I have to bench when she is sleeping lol. I haven't been able to bench since then, as other stuff has come up and a new board+CPU is on the way anyway. Now I have to add an external rad, with a second 2080 Ti going in soon, and the i9 9960X


----------



## J7SC

GN has a nice tear-down of the KingPin 2080 Ti / looks juicy, EVBot connector and all. Vince in another vid mentioned that there might be a second version, one with a full-cover water block :drool:


----------



## lkkane00

This is with the fluid at 4C, the highest I could push it for now.


----------



## The_Rocker

Renegade5399 said:


> At 70°C you have almost no headroom. You have to understand how the thermal throttling works on these cards. Those of us that have done the shunt mod try and get temps BELOW 45°C at 100% load. We have to resort to things like drawing in cold winter air or installing a ridiculous amount of radiators. I have (2) 360mm and (1) 240mm right now and am considering another 240mm since I have 2 GPUs and a coal fired CPU in the loop. Every 10° beginning at (I think) 30°C causes a reduction of clocks because of how Boost 3.0 and it's associated "features" work. I would NOT do the shunt mod on those Turbo model cards with the stock cooler. The VRM and GPU and in turn, the whole back of the card will get hot. Like, really hot.
> 
> My suggestion is to work with what you have for now and save up for some type of water cooling for the cards. Then you can do the shunt mod, as long as you fully understand the risks (card damage, void warranty).
> 
> The power limit in the stock BIOS does not matter once the shunts are installed. Changing the value of them causes the measured voltage reported back to the controller to be higher. This fools the controller into thinking there is less power usage than what is actually being drawn. You hit the limit of the silicon before you hit the "new" power limit.
> 
> The nice thing about forums like this are access to knowledge of how to perform such mods. The downside is misunderstanding/misinformation. The shunts are NOT 5 Ohm. The add-on resistors for the shunts are NOT 8 Ohm. The total replacement resistor for the shunts are NOT 3 Ohm. All values are in milliohm. The stock shunts are 5 mOhm (milliohm). You add on an 8 mOhm resistor to get the target goal of ~3mOhm. If you find yourself in wonder as to how this works and don't understand how resistors work in parallel, please consider going out and researching that, getting a basic understanding, then consider if modding like this is for you. Also, if you've never done more than just tin wires with a soldering iron and/or have a 300W broad tip soldering iron, don't try this mod. Get a proper tip for SMD work (this is personal preference, I use an SMD "broad" tip which is actually nice and tiny), a proper 150-200W or lower if you like iron, and practice on a dead board with similar SMDs on it. One of those magnifying lights really really helps too. I'm not saying don't try it or you don't have the skills/understanding to do it. I am saying go do the proper prep work first.
> 
> Good luck!


This is good advice. Luckily I have a fair bit of soldering experience, though not with SMD; I can get my hand in on some other stuff before attempting this mod, however. I have a Hakko 888 iron with variable temp and multiple tips, so I'm all good on that front.

I was just going to try the mod with the silver trace pen first, as shown in the RTX Titan thread. A good thick layer appears to work. However, it's certainly a much nicer solution if I solder 8mOhm resistors on top instead. Any particular type I need to buy? May as well get them in now.

On the cooling front, these blower cards are actually temporary replacements for my originals, which were eVGA Blacks with the dual-fan open cooler, 2 slot. I should have both of them in by the end of the week. 

However, as I have done it before numerous times, I'm feeling like I might rebuild this system into a Thermaltake W200 with a pair of Black Ice GTR 560mm rads, one loop on the cards, one on the CPU. From what I am learning about these cards, it seems like I should wait until I have the cards under water to do the shunt mod, as it heats up everything regardless of clocks?


----------



## dante`afk

jura11 said:


> Hi there
> 
> I'm pretty sure first thermal throttling or downclocking starts at 37-38°C or something around that figure, then next step or next thermal throttling or downclocking I think will be around 47-48°C with this I'm not sure as I have seen such temperatures on my RTX 2080Ti
> 
> These downclocking steps are definitely different to Pascal Nvidia Boost 3.0 what I know and tested
> 
> https://tpucdn.com/reviews/NVIDIA/GeForce_GTX_1080/images/clock_analysis2.jpg
> 
> Hope this helps
> 
> Thanks, Jura



38, 42, 45, etc.


----------



## dangerSK

Hemorz said:


> Is it just temperature preventing more power on cards?
> 
> Sent from my SM-G965F using Tapatalk


No, it's NVIDIA locking down the cards for nothing...


----------



## VPII

Cyber Locc said:


> Very nice score, and yeah, our Haswell-Es kill the CPU score. Also you are correct, 1k on the mem.
> 
> I'm still messing with the memory. I literally put it to 1000, benched it a few times, got that score, and then had to go to bed. My wife hates my fans, which are loud when I bench as I don't have an external rad for my TB, so 2 EK XE 360s for now with 3k Vardar Evos, so I have to do it when she is sleeping lol. I haven't been able to bench since then, as other stuff has come up and a new board + CPU are on the way anyway. Now I have to add an external rad, with a second 2080 Ti going in soon, and the i9-9960X.


Well, Haswell might seem like it is killing the CPU score, but then look at this.

https://www.3dmark.com/spy/6716589

https://www.3dmark.com/spy/6716339

GPU temps are down to water cooling and half the rad being put in water. But the GPU scores in both runs are actually not bad, taking 2160 in TS Extreme and 2175 in TS. As for the CPU, hell, it works great, does the job, and cost me a hell of a lot less than what I would have paid for an 8-core Intel CPU. Come, Zen 2, please, I need to upgrade.


----------



## bogdi1988

arrow0309 said:


> Hi, there's this new added (unverified) Galax bios on Techpowerup with reference clocks but 450W max (+50%):
> 
> https://www.techpowerup.com/vgabios/209434/209434
> 
> Also KFA2, same bios (I assume it's the HOF aircooled):
> 
> https://www.techpowerup.com/vgabios/207475/207475
> 
> Has anyone tested it on a reference / FE board?


Also curious if anyone has tested this on a reference type PCB.


----------



## outofmyheadyo

More like 20 to 30%


----------



## arrow0309

dangerSK said:


> Wait till someone leaks OC Lab bios with 2k W pwr limit.


That 300W - 450W (150% max PL) Galax / KFA2 BIOS would be enough for my water-cooled FE,
if it's confirmed that there are no issues whatsoever.


----------



## jura11

dante`afk said:


> 38, 42, 45, etc.


Hi there 

I'm sure the first step is at 37-38°C or around that figure. The next step is not 42°C, as I have seen that temperature on my RTX 2080 Ti and it didn't downclock by 15MHz, and the same at 45°C with no downclocking. I suspect the next step is around 46-47°C, maybe lower, but from my limited testing I know for sure it is not 42°C or 45°C.

I've got another loop where I'm running an RTX 2080 Ti with a Bykski GPU waterblock, and on this loop I see max 42-45°C with a single 360mm radiator for CPU and GPU, and there is no downclocking around these figures.

Hope this helps 

Thanks, Jura


----------



## Cyber Locc

VPII said:


> Well, Haswell might seem like it is killing the CPU score, but then look at this.
> 
> https://www.3dmark.com/spy/6716589
> 
> https://www.3dmark.com/spy/6716339
> 
> GPU temps are down to water cooling and half the rad being put in water. But the GPU scores in both runs are actually not bad, taking 2160 in TS Extreme and 2175 in TS. As for the CPU, hell, it works great, does the job, and cost me a hell of a lot less than what I would have paid for an 8-core Intel CPU. Come, Zen 2, please, I need to upgrade.


Yeah, it's not too bad, still CPU limited in the bench though.

The big thing for me with the 8-core Intel is that, performance differences aside, you cannot run SLI on that rig with a 2080 Ti, nor could you run SLI with an M.2 or PCIe drive at all. That's what forces me onto the E platform, and I've used it pretty much exclusively since X58.

I run M.2s and PCIe drives, and now with the 2080 Ti requiring x16 per card and x8 being a bottleneck, we have a problem.


----------



## Cyber Locc

jura11 said:


> Hi there
> 
> I'm sure the first step is at 37-38°C or around that figure. The next step is not 42°C, as I have seen that temperature on my RTX 2080 Ti and it didn't downclock by 15MHz, and the same at 45°C with no downclocking. I suspect the next step is around 46-47°C, maybe lower, but from my limited testing I know for sure it is not 42°C or 45°C.
> 
> I've got another loop where I'm running an RTX 2080 Ti with a Bykski GPU waterblock, and on this loop I see max 42-45°C with a single 360mm radiator for CPU and GPU, and there is no downclocking around these figures.
> 
> Hope this helps
> 
> Thanks, Jura


My card (small sample size lol, and I think it's power limiting, as I don't have the 380W BIOS on it yet) seems to drop by 15MHz at 42 as well, and around 38. I don't know about 45, as my card teeters between 41-42 most of the time, never actually hitting 43, and since it spends so much time between 41-42, that's how I see the drop.

That's with an EVGA Hydro Copper block, my OC, and dual XE360s with Vardars.

I have another card and block on the way, and a MO-RA3, so we will see what effect that has. It's not really for the GPUs as much as the 9960X; 1400W, once I do put the 380W BIOS on, is a lot to cool lol.


----------



## jura11

Cyber Locc said:


> My card (small sample size lol, and I think it's power limiting, as I don't have the 380W BIOS on it yet) seems to drop by 15MHz at 42 as well, and around 38. I don't know about 45, as my card teeters between 41-42 most of the time, never actually hitting 43, and since it spends so much time between 41-42, that's how I see the drop.
> 
> That's with an EVGA Hydro Copper block, my OC, and dual XE360s with Vardars.
> 
> I have another card and block on the way, and a MO-RA3, so we will see what effect that has. It's not really for the GPUs as much as the 9960X; 1400W, once I do put the 380W BIOS on, is a lot to cool lol.


Hi there 

I can try to do a few tests again and see if the first frequency drop is at 38°C, which we can agree on, and another at 42°C. I'm pretty sure, but maybe I'm wrong on that; I've seen 42-45°C as max on my other loop, and on my current one 36-38°C is normal in gaming or rendering right now. I'll turn the central heating on and do a few tests.

I'm running the 380W BIOS posted here and it works for me; I would say it's better than my stock Zotac RTX 2080 Ti AMP BIOS.

This is what I have seen on my loop: it spends most of the time between 41-42°C and won't go beyond that in long sessions when ambient is above 24-26°C.

I have got an EVGA Hydro Copper for the RTX 2080 Ti as well, which is not mounted yet; it's a block for a friend's build, and I'm not sure how it performs against the Phanteks or Heatkiller or even my EK RTX 2080 Ti Vector block.

With a MO-RA3 in the loop you should have a better water delta T; if you're asking whether it makes a difference when cooling a CPU and another GPU, I would say yes.

My current loop has a power draw in rendering of around 1150-1250W with all GPUs OC'd; GPU usage is 80-85% max, as there aren't many renderers which utilise 100% or more.

Hope this helps 

Thanks, Jura


----------



## Cyber Locc

jura11 said:


> Hi there
> 
> I can try to do a few tests again and see if the first frequency drop is at 38°C, which we can agree on, and another at 42°C. I'm pretty sure, but maybe I'm wrong on that; I've seen 42-45°C as max on my other loop, and on my current one 36-38°C is normal in gaming or rendering right now. I'll turn the central heating on and do a few tests.
> 
> I'm running the 380W BIOS posted here and it works for me; I would say it's better than my stock Zotac RTX 2080 Ti AMP BIOS.
> 
> This is what I have seen on my loop: it spends most of the time between 41-42°C and won't go beyond that in long sessions when ambient is above 24-26°C.
> 
> I have got an EVGA Hydro Copper for the RTX 2080 Ti as well, which is not mounted yet; it's a block for a friend's build, and I'm not sure how it performs against the Phanteks or Heatkiller or even my EK RTX 2080 Ti Vector block.
> 
> With a MO-RA3 in the loop you should have a better water delta T; if you're asking whether it makes a difference when cooling a CPU and another GPU, I would say yes.
> 
> My current loop has a power draw in rendering of around 1150-1250W with all GPUs OC'd; GPU usage is 80-85% max, as there aren't many renderers which utilise 100% or more.
> 
> Hope this helps
> 
> Thanks, Jura


One thing that may explain why we see the throttle and you don't is BIOS dependency, or even silicon. There may not be hard-set temp limits so much as some other combination of temp/power/clock that makes it downclock at different temps. Or it could be slight variations in the throttle temps across different cards' BIOSes.

I am running the EVGA power-limited BIOS lol. Hoping they will come out with an EVGA 380W one, and I just want to be sure my cards are both solid before flashing.


----------



## jura11

Cyber Locc said:


> One thing that may explain why we see the throttle and you don't is BIOS dependency, or even silicon. There may not be hard-set temp limits so much as some other combination of temp/power/clock that makes it downclock at different temps. Or it could be slight variations in the throttle temps across different cards' BIOSes.
> 
> I am running the EVGA power-limited BIOS lol. Hoping they will come out with an EVGA 380W one, and I just want to be sure my cards are both solid before flashing.


Hi there

I have just finished quick testing with the central heating turned on, and here are the quick results. Tested with my 2055MHz OC and +800MHz on VRAM; I use this in RT-enabled games like Metro Exodus etc. Anything above that will cause a TDR.

Here is my OC










In this test my max temperature was 40°C and clocks stayed at 2055MHz from start to finish, with no downclocking to 2040MHz as in the next scenario.










In this test I left the central heating at max and closed windows and doors etc. just to increase ambient temperature, and as you can see the GPU downclocked from the start to 2040MHz at 42°C. The max temperature was 41-42°C all the way, with a small bump to 43°C, but clocks stayed at 2040MHz (downclocked from 2055MHz) throughout the test.










This RTX or Turing generation is different from Pascal, which most people have more experience with and which is probably better mapped out than RTX.

Not sure if it's dependent on BIOS, maybe yes, maybe not; it's hard to say with certainty whether that's 100% true, and the silicon lottery with RTX can play a role too. I lost on this one; my old GTX 1080 Ti will do 2113MHz at 1.07V easily and max 2164MHz at 1.093V.

With the stock EVGA BIOS you already have a nice OC, which I'm not able to hit unless I use a chiller, so I would keep it. If you want, try the Galax 380W BIOS, which is perfect for me; with the stock Zotac BIOS I had a few issues, mainly with hitting the power limit. I will probably borrow my friend's EVGA RTX 2080 Ti XC and do tests on that card, as mine is a proper OC dud :h34r-smi

Hope this helps, and good luck; if you are happy with performance there is no reason to flash a different BIOS.

Thanks, Jura
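The stepped behaviour being debated here (a fixed -15MHz per temperature bin) is easy to express as a toy model. This is only a sketch of how GPU Boost binning is commonly described by users; the thresholds below are assumptions pulled from the posts above (dante`afk's 38/42/45 versus jura11 seeing his first drop at 42°C), not an official NVIDIA specification:

```python
# Toy model of GPU Boost temperature step-downs as described in this thread.
# TEMP_BINS_C and STEP_MHZ are assumptions from user reports, not NVIDIA docs.
TEMP_BINS_C = [38, 42, 45]   # assumed thresholds; reports in the thread differ
STEP_MHZ = 15                # observed size of each clock step

def effective_clock(boost_mhz, gpu_temp_c):
    """Boost clock after subtracting one step per bin at or below gpu_temp_c."""
    steps = sum(1 for t in TEMP_BINS_C if gpu_temp_c >= t)
    return boost_mhz - steps * STEP_MHZ

print(effective_clock(2055, 37))  # 2055: below the first bin
print(effective_clock(2055, 40))  # 2040: one step (jura11 instead saw the first drop at 42)
print(effective_clock(2055, 43))  # 2025: two steps
```

Plugging different bin lists into `TEMP_BINS_C` is a quick way to check whose observations a given set of thresholds would actually reproduce.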


----------



## Cyber Locc

jura11 said:


> Hi there
> 
> I have just finished quick testing with the central heating turned on, and here are the quick results. Tested with my 2055MHz OC and +800MHz on VRAM; I use this in RT-enabled games like Metro Exodus etc. Anything above that will cause a TDR.
> 
> Here is my OC
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> In this test my max temperature was 40°C and clocks stayed at 2055MHz from start to finish, with no downclocking to 2040MHz as in the next scenario.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> In this test I left the central heating at max and closed windows and doors etc. just to increase ambient temperature, and as you can see the GPU downclocked from the start to 2040MHz at 42°C. The max temperature was 41-42°C all the way, with a small bump to 43°C, but clocks stayed at 2040MHz (downclocked from 2055MHz) throughout the test.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> This RTX or Turing generation is different from Pascal, which most people have more experience with and which is probably better mapped out than RTX.
> 
> Not sure if it's dependent on BIOS, maybe yes, maybe not; it's hard to say with certainty whether that's 100% true, and the silicon lottery with RTX can play a role too. I lost on this one; my old GTX 1080 Ti will do 2113MHz at 1.07V easily and max 2164MHz at 1.093V.
> 
> With the stock EVGA BIOS you already have a nice OC, which I'm not able to hit unless I use a chiller, so I would keep it. If you want, try the Galax 380W BIOS, which is perfect for me; with the stock Zotac BIOS I had a few issues, mainly with hitting the power limit. I will probably borrow my friend's EVGA RTX 2080 Ti XC and do tests on that card, as mine is a proper OC dud :h34r-smi
> 
> Hope this helps, and good luck; if you are happy with performance there is no reason to flash a different BIOS.
> 
> Thanks, Jura


I am happy with performance, but I still want to bench. Not new to custom BIOSes either. I just want to wait and see if EVGA will step up and give us a 380/400W BIOS before I flash the Galax one.

The XOC BIOS, plus the fact the card has dual 8-pins and VGA power, means there's plenty of headroom for 400W. These are artificial limitations from the AIBs, and with the small gains over the 1080 Ti, they really need to let people use up to 400W.

I understand they have to make money, but if they artificially limit their reference cards to get people to buy custom PCBs, that is a little wrong.

Also, I do hit power limits quite a lot at 2175MHz, which causes short downclocks in heavy apps like Time Spy.


However, yes, I am pretty happy with the potential of this card; it seems to be a pretty good sample. Hopefully my new one is as good. If it's not, I'll just drop clocks for SLI and all will be well. SLI scaling is pretty good again, so that's good at least, even if it comes at a pretty hefty premium in cards and the bridge.

Funny story with that, which kind of makes me go "wth" lol. The new X299 RVIEO I have on the way comes with an HB SLI connector instead of NVLink?? Why a $750 current-gen board would include the last-gen connector but not the current-gen one is beyond me. Then with the new x16-per-card layout on the Omega, I can't use my Intel 750, so I have to buy a 970 Pro... so another $500 has to be spent to get SLI going. Pretty upset about that lol.

I also understand research time etc., but the board just came out, and the cards have been out for what, almost 6 months? And Asus had them well before that; kind of bad logic here. Sorry for ranting, and a semi-OT rant too lol.


----------



## Renegade5399

jura11 said:


> Hi there
> 
> I have just finished quick testing with the central heating turned on, and here are the quick results. Tested with my 2055MHz OC and +800MHz on VRAM; I use this in RT-enabled games like Metro Exodus etc. Anything above that will cause a TDR.
> 
> Here is my OC
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> In this test my max temperature was 40°C and clocks stayed at 2055MHz from start to finish, with no downclocking to 2040MHz as in the next scenario.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> In this test I left the central heating at max and closed windows and doors etc. just to increase ambient temperature, and as you can see the GPU downclocked from the start to 2040MHz at 42°C. The max temperature was 41-42°C all the way, with a small bump to 43°C, but clocks stayed at 2040MHz (downclocked from 2055MHz) throughout the test.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> This RTX or Turing generation is different from Pascal, which most people have more experience with and which is probably better mapped out than RTX.
> 
> Not sure if it's dependent on BIOS, maybe yes, maybe not; it's hard to say with certainty whether that's 100% true, and the silicon lottery with RTX can play a role too. I lost on this one; my old GTX 1080 Ti will do 2113MHz at 1.07V easily and max 2164MHz at 1.093V.
> 
> With the stock EVGA BIOS you already have a nice OC, which I'm not able to hit unless I use a chiller, so I would keep it. If you want, try the Galax 380W BIOS, which is perfect for me; with the stock Zotac BIOS I had a few issues, mainly with hitting the power limit. I will probably borrow my friend's EVGA RTX 2080 Ti XC and do tests on that card, as mine is a proper OC dud :h34r-smi
> 
> Hope this helps, and good luck; if you are happy with performance there is no reason to flash a different BIOS.
> 
> Thanks, Jura


I have a 1080 Ti like that as well, and it does those speeds on air with a modded stock heatsink.

I run my 2080 Tis at 2055/16000 for 24/7 operation. While that may not be the highest of speeds, it sure plays games like crazy. I just wanted to see what a single 2080 Ti could do in Fallout 76 last night. I enabled DSR and cranked the resolution to 5120x2880, leaving the other settings at the GFE-optimized values. I was getting between 90-144 fps. At that resolution even the Creation engine looks decent! LOL!


----------



## bp7178

bogdi1988 said:


> Also curious if anyone has tested this on a reference type PCB.


I've done both. 

They both work, but performance was better with the KFA2 despite the power limit difference.


----------



## bogdi1988

bp7178 said:


> I've done both.
> 
> They both work, but performance was better with the KFA2 despite the power limit difference.


So you are saying this (https://www.techpowerup.com/vgabios/207475/207475) was better than this (https://www.techpowerup.com/vgabios/209434/209434)?


----------



## J7SC

jura11 said:


> Hi there
> 
> I have just finished quick testing with the central heating turned on, and here are the quick results. Tested with my 2055MHz OC and +800MHz on VRAM; I use this in RT-enabled games like Metro Exodus etc. Anything above that will cause a TDR.
> 
> Here is my OC
> 
> 
> Spoiler
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> In this test my max temperature was 40°C and clocks stayed at 2055MHz from start to finish, with no downclocking to 2040MHz as in the next scenario.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> In this test I left the central heating at max and closed windows and doors etc. just to increase ambient temperature, and as you can see the GPU downclocked from the start to 2040MHz at 42°C. The max temperature was 41-42°C all the way, with a small bump to 43°C, but clocks stayed at 2040MHz (downclocked from 2055MHz) throughout the test.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> This RTX or Turing generation is different from Pascal, which most people have more experience with and which is probably better mapped out than RTX.
> 
> Not sure if it's dependent on BIOS, maybe yes, maybe not; it's hard to say with certainty whether that's 100% true, and the silicon lottery with RTX can play a role too. I lost on this one; my old GTX 1080 Ti will do 2113MHz at 1.07V easily and max 2164MHz at 1.093V.
> 
> With the stock EVGA BIOS you already have a nice OC, which I'm not able to hit unless I use a chiller, so I would keep it. If you want, try the Galax 380W BIOS, which is perfect for me; with the stock Zotac BIOS I had a few issues, mainly with hitting the power limit. I will probably borrow my friend's EVGA RTX 2080 Ti XC and do tests on that card, as mine is a proper OC dud :h34r-smi
> 
> Hope this helps, and good luck; if you are happy with performance there is no reason to flash a different BIOS.
> 
> Thanks, Jura


 
Thanks for the refresher on MSI AB curves, and I admire your dedication, what with turning up the central heating and all that

I'm still running the stock (Aorus) BIOS on both cards, and as this 2950X Threadripper machine is not a pure bencher/gamer but also does productivity, I'll probably keep it that way. The only thing I'm contemplating is flashing the stock BIOS from 'Card 1' onto 'Card 2' so that the curve setup is tuned against the same BIOS parameters. I'm not in a hurry to do that, since the GPU setup is proving quite capable already (incl. at 99% 2x GPU usage). Some of the scores below are actually inside the top 30 @ 3D HOF (for now, wait 5 min...).

I do wonder though about two weird observations. As context, when I originally got the first Aorus 2080 Ti, I plugged it into a test bench (just after installing Win 10 for the first time), a trusty 6700K 4c/8t Skylake at 4.7GHz w/ fast system memory. As an aside, you'd be surprised (as I was) how competitive this setup actually was in GPU tests (obviously, not so much in CPU physics). Then I got the second GPU... so I took the first one out and plugged the second one in its place, and everything worked like a charm. Then I built up the X399 Threadripper system, transferred the M.2 W10 install, used DDU etc. and updated the GPU drivers, after of course installing the X399 package. Everything is working well, but when I disable SLI in the NVIDIA driver tab and then disable GPU 2 in Device Manager (leaving the software in place), I seem to actually end up running on GPU 2 (bottom) as the 'single'. Not 100% sure about that, but it kind of looks that way, also per GPU-Z etc. I'd rather not reinstall Win 10, but maybe I really should?!

The second weird thing is that when I do some Superposition or 3DM benchmarks, some VRAM settings do not work well. Yet when I increase VRAM even more (i.e. the Superposition 1080p Extreme run below was done at an effective 8180 / +1110 MHz), the higher VRAM settings work just fine. Then another 'dead zone', then another increase and things work fine again. It's repeatable... Has anybody else observed something similar?


----------



## magnus71

Hi,
I would like to flash the Galax BIOS, but I do not know if I'll lose RGB on my Manli RTX 2080 Ti Gallardo.


----------



## jura11

J7SC said:


> Thanks for the refresher on MSI AB curves, and I admire your dedication, what with turning up the central heating and all that
> 
> I'm still running the stock (Aorus) BIOS on both cards, and as this 2950X Threadripper machine is not a pure bencher/gamer but also does productivity, I'll probably keep it that way. The only thing I'm contemplating is flashing the stock BIOS from 'Card 1' onto 'Card 2' so that the curve setup is tuned against the same BIOS parameters. I'm not in a hurry to do that, since the GPU setup is proving quite capable already (incl. at 99% 2x GPU usage). Some of the scores below are actually inside the top 30 @ 3D HOF (for now, wait 5 min...).
> 
> I do wonder though about two weird observations. As context, when I originally got the first Aorus 2080 Ti, I plugged it into a test bench (just after installing Win 10 for the first time), a trusty 6700K 4c/8t Skylake at 4.7GHz w/ fast system memory. As an aside, you'd be surprised (as I was) how competitive this setup actually was in GPU tests (obviously, not so much in CPU physics). Then I got the second GPU... so I took the first one out and plugged the second one in its place, and everything worked like a charm. Then I built up the X399 Threadripper system, transferred the M.2 W10 install, used DDU etc. and updated the GPU drivers, after of course installing the X399 package. Everything is working well, but when I disable SLI in the NVIDIA driver tab and then disable GPU 2 in Device Manager (leaving the software in place), I seem to actually end up running on GPU 2 (bottom) as the 'single'. Not 100% sure about that, but it kind of looks that way, also per GPU-Z etc. I'd rather not reinstall Win 10, but maybe I really should?!
> 
> The second weird thing is that when I do some Superposition or 3DM benchmarks, some VRAM settings do not work well. Yet when I increase VRAM even more (i.e. the Superposition 1080p Extreme run below was done at an effective 8180 / +1110 MHz), the higher VRAM settings work just fine. Then another 'dead zone', then another increase and things work fine again. It's repeatable... Has anybody else observed something similar?


Hi there 

I just wanted to see at which thermal points the GPU starts downclocking; it's still inconclusive and I will probably do more tests.

Are you running 2205MHz on your RTX 2080 Ti? That's nice. Looking at these OCs, now I know for sure my GPU is just a poor OC'er; I spoke with a friend and he is getting the same OC on his RTX 2080 Ti XC on air that I'm running, 2115MHz.

I knew Zotac aren't the best RTX 2080 Tis; I owned a Zotac GTX 1080 AMP as well, which wasn't bad but wasn't the best in terms of OC, but this RTX 2080 Ti is just one ... GPU

If you are getting such an OC on the other GPU as well, then it looks like you won the silicon lottery there; those clocks are nice.

I will probably wait on Zen 2 and see if there will be a board with multiple PCIe slots; if not, then X399 and a Threadripper refresh is what I will be getting.

Hard to say if you will gain or lose by flashing the BIOS from Card 1 to Card 2, but I must say, nice scores there.

In Superposition 1080p Extreme I'm getting around 10250-10300 and in 4K Optimized 13300-13500; these are results with 2100MHz and +1000MHz on VRAM.

Regarding your issues with Threadripper and Win10: I assume you are on 1809. If yes, check whether your old scores have changed in Cinebench, the 3DMark benchmarks, and Unigine Superposition; check that you are getting correct readouts of CPU speeds, and check IOPS as well. I ran into issues last time I updated my Windows 10 to 1809: I was running 4.6GHz but results were the same as if I were running stock speeds, and I couldn't find the issue. In the end I updated the motherboard BIOS, disabled the Meltdown and Spectre mitigations etc., and my scores came back.

If you transferred an Intel Windows 10 install to Threadripper, that's the only thing I can think of that may be causing the issue you are experiencing, but maybe I'm wrong. I only know that last time I saw it done, a friend swapped his 5960X for an AMD Threadripper 2990X and he didn't have the best results; performance wasn't bad, but in the end he reinstalled Win 10 and had no more issues.

I was contemplating a reinstall too, but I don't want to lose and reinstall all my software, which would be a major pain.

With your main GPU showing as GPU 2: can you check in MSI Afterburner whether it's GPU 1 or 2? In my case my RTX 2080 Ti is GPU 4. Please check this in SIV64 as well; in SIV64 you should see your GPU as GPU 1. If it's 2 then I'm not sure, but I have seen something similar on an Asus Maximus X Formula, where the main GPU closest to the CPU showed as GPU 2, not GPU 1; I experienced this with a Gigabyte GTX 1080 Ti Aorus Extreme.

Regarding the VRAM settings, I'm pretty sure the Gamers Nexus video with Kingpin talks about this too. That's why you'll see I used 1125MHz; at 1150MHz it will crash or, in the worst case, BSOD. I tried 1135MHz but scores were lower. Try 25MHz increments, not 10MHz.

Sometimes you will find you are hitting a wall with scores and VRAM settings, like I found with a friend's RTX 2080 Ti where +1200MHz on VRAM gave a lower score than +1175MHz.

Hope this helps 

Thanks, Jura


----------



## J7SC

jura11 said:


> Hi there
> 
> I just wanted to see at which thermal points the GPU starts downclocking; it's still inconclusive and I will probably do more tests.
> 
> Are you running 2205MHz on your RTX 2080 Ti? That's nice. Looking at these OCs, now I know for sure my GPU is just a poor OC'er; I spoke with a friend and he is getting the same OC on his RTX 2080 Ti XC on air that I'm running, 2115MHz.
> 
> 
> 
> Spoiler
> 
> 
> 
> I knew Zotac aren't the best RTX 2080 Tis; I owned a Zotac GTX 1080 AMP as well, which wasn't bad but wasn't the best in terms of OC, but this RTX 2080 Ti is just one ... GPU
> 
> If you are getting such an OC on the other GPU as well, then it looks like you won the silicon lottery there; those clocks are nice.
> 
> I will probably wait on Zen 2 and see if there will be a board with multiple PCIe slots; if not, then X399 and a Threadripper refresh is what I will be getting.
> 
> Hard to say if you will gain or lose by flashing the BIOS from Card 1 to Card 2, but I must say, nice scores there.
> 
> In Superposition 1080p Extreme I'm getting around 10250-10300 and in 4K Optimized 13300-13500; these are results with 2100MHz and +1000MHz on VRAM.
> 
> Regarding your issues with Threadripper and Win10: I assume you are on 1809. If yes, check whether your old scores have changed in Cinebench, the 3DMark benchmarks, and Unigine Superposition; check that you are getting correct readouts of CPU speeds, and check IOPS as well. I ran into issues last time I updated my Windows 10 to 1809: I was running 4.6GHz but results were the same as if I were running stock speeds, and I couldn't find the issue. In the end I updated the motherboard BIOS, disabled the Meltdown and Spectre mitigations etc., and my scores came back.
> 
> If you transferred an Intel Windows 10 install to Threadripper, that's the only thing I can think of that may be causing the issue you are experiencing, but maybe I'm wrong. I only know that last time I saw it done, a friend swapped his 5960X for an AMD Threadripper 2990X and he didn't have the best results; performance wasn't bad, but in the end he reinstalled Win 10 and had no more issues.
> 
> I was contemplating a reinstall too, but I don't want to lose and reinstall all my software, which would be a major pain.
> 
> 
> 
> With your main GPU showing as GPU 2: can you check in MSI Afterburner whether it's GPU 1 or 2? In my case my RTX 2080 Ti is GPU 4. Please check this in SIV64 as well; in SIV64 you should see your GPU as GPU 1. If it's 2 then I'm not sure, but I have seen something similar on an Asus Maximus X Formula, where the main GPU closest to the CPU showed as GPU 2, not GPU 1; I experienced this with a Gigabyte GTX 1080 Ti Aorus Extreme.
> 
> Regarding the VRAM settings, I'm pretty sure the Gamers Nexus video with Kingpin talks about this too. That's why you'll see I used 1125MHz; at 1150MHz it will crash or, in the worst case, BSOD. I tried 1135MHz but scores were lower. Try 25MHz increments, not 10MHz.
> 
> Sometimes you will find you are hitting a wall with scores and VRAM settings, like I found with a friend's RTX 2080 Ti where +1200MHz on VRAM gave a lower score than +1175MHz.
> 
> Hope this helps
> 
> Thanks, Jura


 
Thanks Jura  
I think MSI AB enumerates the two GPUs one way (as to which is #1 and which is #2) while GPU-Z seems to do it the other way around... never seen that before. I do know that the first GPU a fresh Windows install ever sees gets the original 1st position, per the registry, and it seems to relate to the VBIOS (I say this as I have had plenty of GPUs before w/ 2 or 3 BIOSes onboard, and each one would create a discrete enumerated entry in the Windows registry).

Per the attachment, I have no worries about the rest of the system, i.e. Cinebench, AIDA system RAM etc., even after moving the M.2 Win10 install from the 6700K to the X399 TR. I spent a lot of time on memory tuning and testing for this build, and in fact I'm still not done. There are a lot more memory vars to play with on Threadripper than on my other (all Intel) setups... and some improve processor or RAM scores but start to negatively impact graphics tests... just a new learning curve for me.

As to switching BIOSes and scores... I'll likely stay with the Aorus stock BIOS (which reaches just under 380W anyhow), but I noticed that even though the performance of both cards is 'close enough', WinMerge etc. still points to some slight differences in BIOS between the two cards. I prefer to attempt the MSI AB curve on two GPUs with identical BIOSes (thanks again for all the curve info you posted).


Spoiler



On the GPU scores, I'm just now really getting into some benching, and I noticed that compared to Kepler, Pascal etc., the VRAM jumps are a bit different w/ RTX & GDDR6... what I was trying to say earlier is that there seem to be 'dead zones' with the VRAM speed... I can obviously start at stock speeds (effective 7070 MHz for these cards) and work my way up to find a.) where scores start to go down and b.) where it will artifact or hang. Yet with this setup, it seems to be a bit 'discontinuous'... with all other factors held equal (CPU, GPU speed, temps, bench), increasing VRAM speed (by the minimum single steps in MSI AB) gets to the point where scores start to go down, and another few increases after that will cause a hang. But a further increase gets higher scores and no hang... rinse and repeat, up to a certain point. Obviously, it also depends on the bench itself; Superposition seems to like higher VRAM speeds in general compared to 3DM, though the phenomenon I described seems to apply to both.


 I should also mention that while the two Aorus XTR 2080 Ti WB cards are obviously not poor performers, they also have a very strong and somewhat unusual cooling system. While both GPUs are on the same loop, there is a cool-down 'loop' between the two (see attachment), and all told, GPU cooling alone has 1080x60 mm rad space, 2 pumps, and 12x 120mm fans.

Running a single-GPU bench such as Superposition (or single TimeSpy Extreme etc.) obviously means that the same 1080x60 setup now just has to cool the one GPU being tested. In the 2205 MHz Superposition run, the MAX temp was 34°C or less, and with two GPUs I usually still manage to stay below 38°C... that said, I think there might be speed step-downs happening even below 38°C.

As to the 'silicon lottery', I have had some real duds on both CPU and GPU before, though those usually teach you the most re. oc'ing. I am obviously very happy with what I ended up with now, but my biggest worry was that one GPU would be a barnstormer, while the other would be a dud - not the thing you want with SLI. In that sense, I consider myself lucky. The same holds for the 2950X; it's both low-volt/high clock and has a strong IMC. So the optimist in me thinks this was compensation for previous duds (among 30 odd GPUs of more recent vintage, and a pile of CPUs), while the pessimist thinks that I will get a lousy draw the next time (waiting w/another build for the Zen2 etc...). Time will tell :thinking:


----------



## Cyber Locc

J7SC said:


> Thanks Jura
> I think MSI AB enumerates the two GPUs one way (as to which is #1 and which is #2) while GPU-Z seems to be doing it the other way around... never seen that before. I do know that the first GPU a fresh Windows install ever sees is the original 1st position, per the registry, and it seems to relate to the VBIOS (I say this as I have had plenty of GPUs before with 2 or 3 BIOSes onboard; each one would create a discretely enumerated entry in the Windows registry).
> 
> Per attachment, I have no worries about the rest of the system, ie. Cinebench, Aida System Ram etc, even after moving the M.2 Win10 from the 6700K to the X399 TR. I spent a lot of time on memory tuning and testing this build, and in fact, I'm still not done. There are a lot more memory vars to play with in Threadripper than my other (all Intel) setups....and some improve processor or RAM scores but start to negatively impact on Graphics tests....just a new learning curve for me.
> 
> As to switching Bios, and scores...I'll likely stay with the Aorus stock Bios (which reaches just under 380w anyhow), but I noticed that even though performance of both cards is 'close enough', WinMerge etc still points to some slight differences in Bios between the two cards. I prefer to attempt doing the MSI AB curve on two GPUs with identical Bios (thanks again for all the curve info you posted).
> 
> 
> Spoiler
> 
> 
> 
> On the GPU scores, I'm just now really getting into some benching and noticed that compared to Kepler, Pascal etc, the VRAM jumps are a bit different w/ RTX & GDDR6...what I was trying to say earlier is that there seem to be 'dead zones' with the VRAM speed...I can obviously start at stock speeds (effective 7070 MHz for these cards) and work my way up to find a.) where scores start go down and b.) it will artifact or hang. Yet with this setup, it seems to be a bit 'discontinuous'...with all other factors held equal (CPU, GPU speed, temps, bench), increasing VRAM speed (per minimum, single steps in MSI AB) gets to the point where scores start to go down, and another few increases after that will cause a 'hang'. But a further increase gets higher scores and no hang...rinse and repeat, up to a certain point. Obviously, it also depends on the bench itself; SuperPosition seems to like higher VRAM speeds in general compared to 3DM, though the phenomenon I described seems to apply to both.
> 
> 
> I should also mention that while the two Aorus XTR 2080 Ti WB cards are obviously not poor performers, they also have a very strong and somewhat unusual cooling system. While both GPUs are on the same loop, there is a cool-down 'loop' between the two (see attachment), and all told, GPU cooling alone has 1080x60 mm rad space, 2 pumps, and 12x 120mm fans.
> 
> Running a single-GPU bench such as Superposition (or single TimeSpy Extreme etc.) obviously means that the same 1080x60 setup now just has to cool the one GPU being tested. In the 2205 MHz Superposition run, the MAX temp was 34°C or less, and with two GPUs I usually still manage to stay below 38°C... that said, I think there might be speed step-downs happening even below 38°C.
> 
> As to the 'silicon lottery', I have had some real duds on both CPU and GPU before, though those usually teach you the most re. oc'ing. I am obviously very happy with what I ended up with now, but my biggest worry was that one GPU would be a barnstormer, while the other would be a dud - not the thing you want with SLI. In that sense, I consider myself lucky. The same holds for the 2950X; it's both low-volt/high clock and has a strong IMC. So the optimist in me thinks this was compensation for previous duds (among 30 odd GPUs of more recent vintage, and a pile of CPUs), while the pessimist thinks that I will get a lousy draw the next time (waiting w/another build for the Zen2 etc...). Time will tell :thinking:



OT, but that's a very neat build.


----------



## fleps

So guys, I need help.

I just finished installing a Kraken G12 with an X42 on my RTX 2080, everything connected, but for some reason the card is giving me Code 43 in Device Manager and operating at low resolution.

If I try to disable/enable it, it gives me a corrupted screen full of artifacts for a few seconds, then goes back to the low resolution.

I already tried switching the PCIe slot and removing and reinstalling the driver with DDU; no change.

Did I just brick my card somehow?

Any ideas?

Thanks


----------



## Cyber Locc

fleps said:


> So guys, I need help.
> 
> I just finished installing a Kraken g12 with X42 on my RTX 2080, everything connected, but for some reason the card is giving me Code 43 on device manager and operating on low resolution.
> 
> If I try to disable / enable, it gives me a corrupted screen full of artifacts for a few seconds goes back to the low resolution.
> 
> I already tried switching the PCIE slot and removing and reinstalling the driver with DDU, no change.
> 
> Did I just bricked my card somehow?
> 
> Any ideas?
> 
> Thanks


Lots of ideas. 

First, check the PCIe cables. If they are not in all the way, this will happen. 

Then re-mount the cooler and see if the issue persists.


----------



## J7SC

...in addition, also check how much you tightened the Kraken cooler (and whether the mounting pressure is even). There have been reports of the described behavior from uneven or overly tight cooler mounting...


----------



## fleps

Cyber Locc said:


> Lots of ideas.
> 
> First check the PCIE cables. If they are not in all the way, this will happen.
> 
> Then redo the cooler, and see if there is an issue.





J7SC said:


> ...in addition, also check how much you tightened the Kraken cooler (as well as if pressure is even). There have been reports of the described behavior via uneven or overly-tight cooler mounting...


Nothing.

Did a full inspection of the cables and tested other ports on the PSU (fully modular).
The PC works normally with Intel integrated graphics.

Removed the Kraken entirely, did a full physical inspection of the card (nothing visible), re-assembled the original cooler; same result.

The card will not install/activate; every time it tries, I see a corrupted pink/green screen for a few seconds until Windows reverts to the generic driver.

Can't believe I broke this card, and I don't even know how or with what. I didn't even remove the backplate, to reduce the chances of hitting something back there. 

Oh well.


----------



## Vlada011

CPU dies, NB chipsets and these new graphics processors without an IHS are very sensitive to pressure.
It's always better to just make light contact than to apply a lot of pressure. You can't handle a bare chip the way you would a processor, where the IHS protects the die.
Just a little micro-crack and it could stop working.
Did you test the GPU before installing the cooler? For me that's natural.
I tested my M.2 at defaults as well, without a heatsink, so I don't need to wonder whether I screwed something up.
To protect the chipset on my motherboard before I installed the monoblock, I used thermal paste and then a piece of thermal pad to cover the whole chipset.
It makes nice contact without a lot of pressure and protects the chipset; EK actually advises this in the user guide.


----------



## Hdusu64346

Enjoying my 2080 Ti. I'd like to get a 240Hz 2K monitor, or, if I'm foolish enough, a 34" 4K 165Hz monitor (when they come out in late 2019).


----------



## Cyber Locc

med1kl said:


> Enjoying my 2080ti. I'd like to get a 240hz 2k monitor, or if I'm foolish enough a 34" 4k 165hz monitor(when they come out late 2019).


Go with the 4K 144Hz 🙂 they are worth every penny 🙂. I love mine. 

And wait, it's 165Hz? I don't think so; the 144Hz version already saturates DP 1.4, so 165Hz isn't possible.
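The bandwidth math checks out, for what it's worth: DP 1.4 (HBR3, 4 lanes at 8.1 Gbit/s) carries roughly 25.92 Gbit/s of payload after 8b/10b coding, and at 4K 144Hz with 8-bit RGB the active pixels alone already exceed that, before any blanking overhead. A rough sketch:

```python
def active_pixel_rate_gbps(width, height, refresh_hz, bits_per_pixel=24):
    """Data rate for the active pixels only (blanking excluded), in Gbit/s."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

DP14_PAYLOAD_GBPS = 25.92  # 4 lanes x 8.1 Gbit/s, minus 8b/10b overhead

print(active_pixel_rate_gbps(3840, 2160, 144))  # ~28.7, doesn't fit
print(active_pixel_rate_gbps(3840, 2160, 120))  # ~23.9, fits
```

Which is why the first 4K 144Hz panels drop to 4:2:2 chroma subsampling at the top refresh rates; higher refresh over plain DP 1.4 would need DSC.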


----------



## fleps

Vlada011 said:


> CPU dies, NB chipsets and these new graphics processors without an IHS are very sensitive to pressure.
> It's always better to just make light contact than to apply a lot of pressure. You can't handle a bare chip the way you would a processor, where the IHS protects the die.
> Just a little micro-crack and it could stop working.
> Did you test the GPU before installing the cooler? For me that's natural.
> I tested my M.2 at defaults as well, without a heatsink, so I don't need to wonder whether I screwed something up.
> To protect the chipset on my motherboard before I installed the monoblock, I used thermal paste and then a piece of thermal pad to cover the whole chipset.
> It makes nice contact without a lot of pressure and protects the chipset; EK actually advises this in the user guide.


It was probably this.
I'm more used to handling CPUs, where you can / should apply good pressure, and no instructions were included in the G12 manual.

So I put a lot of pressure on the screws, because visually it looked like the G12 plate still wasn't touching the brackets. But at some point the G12 plate bent a bit and I stopped applying pressure (it was even; I always apply pressure in an X pattern, a bit at a time).

One thing I noticed today when looking closer is that one of the small copper heatsinks I installed on the chips around the core got a bit bent.

I had already installed them not exactly centered on the chips, as I noticed the X42 pump is a bit "fat" on the bottom and was touching them when I was just finger-tightening the screws and checking pressure.
But I guess this particular one needed to sit even farther out; I'm afraid the plastic part of the pump caught it and pressed on the chip and damaged it internally, as there's no visible issue.

***.


----------



## VPII

I need some help..... For some reason, after flashing my Palit card with the Strix 1000W BIOS I no longer have the OC Scanner option. I'd like to have it, as when I run my GPU at 2175MHz core it only gives 1.043V, where it should be 1.057V or higher from what I've seen is stable. I really like this BIOS as my card stays on its clocks even up to 40°C core temp. Please, can someone help me find a way to get the V/F curve so I can set it as needed.


----------



## gavros777

I have the 2080 ti gigabyte card and plan to switch to an open frame case.
I wanna add some custom fan dust filter/mesh on the card, what is the best way to go about it?


----------



## jura11

J7SC said:


> Thanks Jura
> I think MSI AB enumerates the two GPUs one way (as to which is #1 and which is #2) while GPU-Z seems to be doing it the other way around... never seen that before. I do know that the first GPU a fresh Windows install ever sees is the original 1st position, per the registry, and it seems to relate to the VBIOS (I say this as I have had plenty of GPUs before with 2 or 3 BIOSes onboard; each one would create a discretely enumerated entry in the Windows registry).
> 
> Per attachment, I have no worries about the rest of the system, ie. Cinebench, Aida System Ram etc, even after moving the M.2 Win10 from the 6700K to the X399 TR. I spent a lot of time on memory tuning and testing this build, and in fact, I'm still not done. There are a lot more memory vars to play with in Threadripper than my other (all Intel) setups....and some improve processor or RAM scores but start to negatively impact on Graphics tests....just a new learning curve for me.
> 
> As to switching Bios, and scores...I'll likely stay with the Aorus stock Bios (which reaches just under 380w anyhow), but I noticed that even though performance of both cards is 'close enough', WinMerge etc still points to some slight differences in Bios between the two cards. I prefer to attempt doing the MSI AB curve on two GPUs with identical Bios (thanks again for all the curve info you posted).
> 
> 
> Spoiler
> 
> 
> 
> On the GPU scores, I'm just now really getting into some benching and noticed that compared to Kepler, Pascal etc, the VRAM jumps are a bit different w/ RTX & GDDR6...what I was trying to say earlier is that there seem to be 'dead zones' with the VRAM speed...I can obviously start at stock speeds (effective 7070 MHz for these cards) and work my way up to find a.) where scores start go down and b.) it will artifact or hang. Yet with this setup, it seems to be a bit 'discontinuous'...with all other factors held equal (CPU, GPU speed, temps, bench), increasing VRAM speed (per minimum, single steps in MSI AB) gets to the point where scores start to go down, and another few increases after that will cause a 'hang'. But a further increase gets higher scores and no hang...rinse and repeat, up to a certain point. Obviously, it also depends on the bench itself; SuperPosition seems to like higher VRAM speeds in general compared to 3DM, though the phenomenon I described seems to apply to both.
> 
> 
> I should also mention that while the two Aorus XTR 2080 Ti WB cards are obviously not poor performers, they also have a very strong and somewhat unusual cooling system. While both GPUs are on the same loop, there is a cool-down 'loop' between the two (see attachment), and all told, GPU cooling alone has 1080x60 mm rad space, 2 pumps, and 12x 120mm fans.
> 
> Running a single-GPU bench such as Superposition (or single TimeSpy Extreme etc.) obviously means that the same 1080x60 setup now just has to cool the one GPU being tested. In the 2205 MHz Superposition run, the MAX temp was 34°C or less, and with two GPUs I usually still manage to stay below 38°C... that said, I think there might be speed step-downs happening even below 38°C.
> 
> As to the 'silicon lottery', I have had some real duds on both CPU and GPU before, though those usually teach you the most re. oc'ing. I am obviously very happy with what I ended up with now, but my biggest worry was that one GPU would be a barnstormer, while the other would be a dud - not the thing you want with SLI. In that sense, I consider myself lucky. The same holds for the 2950X; it's both low-volt/high clock and has a strong IMC. So the optimist in me thinks this was compensation for previous duds (among 30 odd GPUs of more recent vintage, and a pile of CPUs), while the pessimist thinks that I will get a lousy draw the next time (waiting w/another build for the Zen2 etc...). Time will tell :thinking:


Hi there 

Hard to say why GPU-Z or any other software enumerates the GPUs the way it does; whether it's down to the VBIOS is hard to say for sure. I remember running dual GPUs in a Hackintosh, where I ran into a few issues with different VBIOSes, and flashing both with the same BIOS helped overcome a few of them. Not sure if that helps in your case.

Looking good there; Cinebench looks good. What RAM speeds are you using? Ryzen and Threadripper do like fast RAM.

The stock Aorus BIOS looks like a good one if it has such a high power limit; I didn't try that one, only the Zotac stock and the Galax 380W BIOS.

The MSI Afterburner V/F curve is easy to do; I've done it on all 4 of my GPUs and am running these clocks (#1 2080 Ti at 2055MHz, #2 GTX 1080 Ti at 2113MHz, #3 GTX 1080 at 2100MHz and #4 GTX 1080 at 2164MHz).

The curve is good for squeezing out a bit more performance or a few extra MHz; hopefully you'll be able to squeeze out a bit more with it.

From my experience with VRAM settings: I tried a few more tests last night and just couldn't break +1125MHz on the VRAM; +1200MHz or +1300MHz etc. would still hang, TDR or BSOD.

My best Time Spy result is with +1125MHz: graphics score 7512 at +800MHz versus 7688 at +1125MHz.
This is with 2115MHz core and +800MHz on VRAM:

https://www.3dmark.com/spy/6719703

This is my best, 2115MHz core and +1125MHz on VRAM:

https://www.3dmark.com/spy/6720079

As you can see, there's a difference of around 160 points from the VRAM settings alone. I haven't tried Unigine Superposition yet to see whether the difference shows up in that benchmark; I'll try later on.
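Using the two graphics scores above, the memory scaling works out to roughly 54 points per +100MHz of offset over that range; a trivial sketch of the arithmetic:

```python
def points_per_100mhz(score_lo, score_hi, offset_lo, offset_hi):
    """Graphics-score gain per +100MHz of memory offset between two runs."""
    return (score_hi - score_lo) / (offset_hi - offset_lo) * 100

# Time Spy graphics scores quoted above: 7512 at +800MHz, 7688 at +1125MHz
print(round(points_per_100mhz(7512, 7688, 800, 1125), 1))  # 54.2
```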

My loop has 4x 360mm radiators (2x HWLabs SR-2 360mm, an EK PE360, and a Mayhems Havoc 360mm 60mm-thick radiator) plus a MO-RA3 360. That mainly helps keep the water delta-T lower during rendering or gaming. The pump setup is an XSPC D5 Vario with an EK DDC 3.2 PWM Elite edition and a Barrow DDC 18W pump.

Regarding temperatures, yours are great under load; mine are a bit off, and I'll probably try a Heatkiller IV RTX 2080 Ti WB as my next block, though I suspect it won't help a lot. It could be the GPU itself, which I'd like to test in my friend's loop (an 8086K with 360mm and 240mm 60mm-thick radiators) to see if the temperatures are the same there.

In gaming I've seen frequency downclocking (from 2055MHz to 2040MHz) in the 37-38°C range, which returns to normal clocks (2055MHz in my case) once temperatures are below 34°C. I never saw this on the Pascal or Maxwell GPU generations.

My 5960X is a good OC'er and I'm very happy with the OC; it's doing 4.7GHz right now. 4.8GHz is possible, but I think it would need at least 1.4V, which I'm not prepared to run on my 5960X.

I've had good luck with the silicon lottery before on CPUs (both my 5820K and 5930K could run 4.5GHz). Pascal GPUs were more mixed: the EVGA GTX 1080 isn't perfect and OCs to 2100MHz, the Manli GTX 1080 OCs to 2164MHz, and the EVGA GTX 1080 Ti is a very good OC'er.

It's just this Zotac RTX 2080 Ti that's a poor OC'er.

Did you try your OC in the Octane RTX benchmark or in Metro Exodus, to see whether it holds the clocks there, or downclocks, or even crashes (CUDA errors in Octane, TDR in Metro Exodus)?

Would it be possible for you to try the Octane RTX benchmark?


Hope this helps 

Thanks, Jura


----------



## Cyber Locc

gavros777 said:


> I have the 2080 ti gigabyte card and plan to switch to an open frame case.
> I wanna add some custom fan dust filter/mesh on the card, what is the best way to go about it?


I don't think you really can, or should, tbh. 

They are not designed for that. You could probably do it with a flat card, but I still wouldn't. 

I would just use it as is and clean it every once in a while. Dust in an open case isn't as bad as in a closed one, tbh. Just give it a little blow out/off every few months and you will be fine. 

My rig has been living in a test bench for 2+ years now. I switched boards to a mATX board and was going to do an R40 build; got the case and all the stuff and never did the build lol (I'm not a mATX guy), so I sold the case and grew fond of my test bench as a constant fixture.

And I switched back to my mATX board in the last week or so, as I gave the RVE to my step brother, and I already can't stand it hehe. 🙂


----------



## J7SC

jura11 said:


> Hi there
> 
> Hard to say why GPU-Z or any other software enumerates the GPUs the way it does; whether it's down to the VBIOS is hard to say for sure. I remember running dual GPUs in a Hackintosh, where I ran into a few issues with different VBIOSes, and flashing both with the same BIOS helped overcome a few of them. Not sure if that helps in your case.
> 
> Looking good there; Cinebench looks good. What RAM speeds are you using? Ryzen and Threadripper do like fast RAM.
> 
> The stock Aorus BIOS looks like a good one if it has such a high power limit; I didn't try that one, only the Zotac stock and the Galax 380W BIOS.
> 
> 
> Spoiler
> 
> 
> 
> The MSI Afterburner V/F curve is easy to do; I've done it on all 4 of my GPUs and am running these clocks (#1 2080 Ti at 2055MHz, #2 GTX 1080 Ti at 2113MHz, #3 GTX 1080 at 2100MHz and #4 GTX 1080 at 2164MHz).
> 
> The curve is good for squeezing out a bit more performance or a few extra MHz; hopefully you'll be able to squeeze out a bit more with it.
> 
> From my experience with VRAM settings: I tried a few more tests last night and just couldn't break +1125MHz on the VRAM; +1200MHz or +1300MHz etc. would still hang, TDR or BSOD.
> 
> My best Time Spy result is with +1125MHz: graphics score 7512 at +800MHz versus 7688 at +1125MHz.
> This is with 2115MHz core and +800MHz on VRAM:
> 
> https://www.3dmark.com/spy/6719703
> 
> This is my best, 2115MHz core and +1125MHz on VRAM:
> 
> https://www.3dmark.com/spy/6720079
> 
> As you can see, there's a difference of around 160 points from the VRAM settings alone. I haven't tried Unigine Superposition yet to see whether the difference shows up in that benchmark; I'll try later on.
> 
> My loop has 4x 360mm radiators (2x HWLabs SR-2 360mm, an EK PE360, and a Mayhems Havoc 360mm 60mm-thick radiator) plus a MO-RA3 360. That mainly helps keep the water delta-T lower during rendering or gaming. The pump setup is an XSPC D5 Vario with an EK DDC 3.2 PWM Elite edition and a Barrow DDC 18W pump.
> 
> Regarding temperatures, yours are great under load; mine are a bit off, and I'll probably try a Heatkiller IV RTX 2080 Ti WB as my next block, though I suspect it won't help a lot. It could be the GPU itself, which I'd like to test in my friend's loop (an 8086K with 360mm and 240mm 60mm-thick radiators) to see if the temperatures are the same there.
> 
> In gaming I've seen frequency downclocking (from 2055MHz to 2040MHz) in the 37-38°C range, which returns to normal clocks (2055MHz in my case) once temperatures are below 34°C. I never saw this on the Pascal or Maxwell GPU generations.
> 
> My 5960X is a good OC'er and I'm very happy with the OC; it's doing 4.7GHz right now. 4.8GHz is possible, but I think it would need at least 1.4V, which I'm not prepared to run on my 5960X.
> 
> I've had good luck with the silicon lottery before on CPUs (both my 5820K and 5930K could run 4.5GHz). Pascal GPUs were more mixed: the EVGA GTX 1080 isn't perfect and OCs to 2100MHz, the Manli GTX 1080 OCs to 2164MHz, and the EVGA GTX 1080 Ti is a very good OC'er.
> 
> It's just this Zotac RTX 2080 Ti that's a poor OC'er.
> 
> 
> 
> Did you try your OC in the Octane RTX benchmark or in Metro Exodus, to see whether it holds the clocks there, or downclocks, or even crashes (CUDA errors in Octane, TDR in Metro Exodus)?
> 
> Would it be possible for you to try the Octane RTX benchmark?
> 
> 
> Hope this helps
> 
> Thanks, Jura


 
Thanks ...yeah, running a unified BIOS set for both cards makes sense, especially with the MSI AB curve I'm planning. I'm just wondering about the PCI ID when I flash the one from card 1 over to card 2, but there's only one way to find out 

...I haven't got Metro EX or Octane RTX benchmark downloaded yet, but will do that before I bench again.


----------



## bogdi1988

J7SC said:


> Thanks  ...yeah, running a unified Bios set for both cards makes sense, especially with the MSI AB curve I'm planning; I'm just wondering about PCI ID when I flash the one from card 1 over to card 2, but only one way to find out
> 
> ...I haven't got Metro EX or Octane RTX benchmark downloaded yet, but will do that before I bench again.


When you flash a BIOS from a different manufacturer, the PCI ID changes; they will both end up having the same ID.


----------



## arrow0309

bogdi1988 said:


> So you are saying this (https://www.techpowerup.com/vgabios/207475/207475) was better than this (https://www.techpowerup.com/vgabios/209434/209434)?


I'm also curious 
So, you tried it already?


----------



## zack_orner

Has anyone here used a Koolance liquid cooling system like the EX2-755 Rev 1.3? Thinking about this instead of the multiple fans, rads and pump. If it works, it would probably save me $300 in building my custom loop.

2700x rog cross hair hero vii 1 tb 970 evo gskiils 3200cl14 msi 2080 ti gaming x trio asus rog ryou 240 aio


----------



## JustinThyme

Tried one long ago when they first came out.......All bling and no zing. Got better temps on air.
If laziness is a contributing factor you can get much better performance with any of the AIO solutions released within the past few years.


----------



## zack_orner

JustinThyme said:


> Tried one long ago when they first came out.......All bling and no zing. Got better temps on air.
> 
> If laziness is a contributing factor you can get much better performance with any of the AIO solutions released within the past few years.


OK, I just read that it keeps coolant temp at 25C. I still want hard tubing in the case; I'd just need a bigger case to do the multiple rads the way I want. I have an AIO on the CPU now but want a monoblock and to add a water block on my GPU. I just thought that if it worked as well as or better than two or three rads, it would save me a lot of money between rads, fans, pump, res and case. Not scared of the work; I just figured I like my current case/build and could save a lot on my cooling upgrade. But if it's not worth it, I'll end up spending the money anyway. 

2700x rog cross hair hero vii 1 tb 970 evo gskiils 3200cl14 msi 2080 ti gaming x trio asus rog ryou 240 aio


----------



## bogdi1988

arrow0309 said:


> I'm also curious
> So, you tried it already?


I haven't seen a reply so I haven't tried either of them. I might try one of the files later this week, though knowing which one is better would definitely help.


----------



## bigjdubb

zack_orner said:


> _OK I just read that it keeps coolant temp at 25c_, still want hard tubing in the case just would need a bigger case to do the multi rads the way I want. I have aio on CPU now but want a mono block and to add a water block on my gpu. I just thought if it would work as good or better then two three rads would save me a lot of money between rads fans pump res and case. Not scared of the work just figured I like my current case/build and could save a lot on my cooling upgrade. But if its not worth it I'll end up spending the money anyway.
> 
> 2700x rog cross hair hero vii 1 tb 970 evo gskiils 3200cl14 msi 2080 ti gaming x trio asus rog ryou 240 aio


25 degree delta over ambient is not 25 degree coolant. It means that if your room ambient temp is 25 degrees your water temps will be 50 degrees. Those kits are essentially a 240 rad with a pump and controls built in, it doesn't do anything special that a separate rad, pump and fan controller couldn't do.
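Put another way: steady-state coolant temperature is ambient plus a delta set by the heat load over the radiator's dissipation capacity. A back-of-the-envelope sketch (the 20 W/°C figure is a made-up placeholder; real values depend on rad area and fan speed):

```python
def coolant_temp_c(ambient_c, heat_load_w, rad_w_per_deg):
    """Steady-state coolant temp: ambient plus delta-T, where delta-T is
    the heat load divided by the radiator's dissipation per degree."""
    return ambient_c + heat_load_w / rad_w_per_deg

# 500W of CPU+GPU heat into a rad setup shedding ~20W per degree of
# delta-T: a 25C room means 50C coolant, the 25-degree delta described
# above. Lowering ambient shifts coolant temp down one-for-one.
print(coolant_temp_c(25, 500, 20))  # 50.0
```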


----------



## VPII

So I flashed my Palit RTX 2080 Ti GamingPro OC with the Strix XOC BIOS for the 1000W TDP. First, I found that there is no voltage curve that can be set, as it's not available in MSI Afterburner or Precision X, so the overclock has to be all manual. Unfortunately, with the vcore stuck at 1.043V (sometimes jumping one increment higher) I cannot run my core at 2160 or 2175 like I could before. But given that I have no core speed drop, 2145 worked pretty well, and I can drop the TDP slider to 42 or 43%, as the most the card pulled was only slightly over 410W. Well, here are my results in Time Spy, Fire Strike Ultra and Port Royal.

Close to 17K gpu
https://www.3dmark.com/spy/6746701

Very close to 8K gpu
https://www.3dmark.com/spy/6737653

A little bump in FS Ultra from my previous result
https://www.3dmark.com/fs/18885725

Port Royal... not sure what to make of it
https://www.3dmark.com/pr/70064

As for cooling, I just want to put it out there: I installed an NZXT Kraken G12 with a Corsair H110 CW and I'm pretty impressed with the temps. I've never touched 50°C; the closest I got was 46°C with ambient temps of around 33°C. At present I'll drop half of the rad in a bucket of water, which shaves off 4 to 5°C, and adding ice to the water more like 8 to 9°C, keeping my GPU below 40°C under full load. It doesn't last long, but it's good for two to three runs.


----------



## zhrooms

VPII said:


> So I flashed my Palit RTX 2080 Ti Gamingpro OC with the Strix XOC bios for the 1000watt TDP. Firstly I found that there is no voltage curve that can be set as it is not available with MSI Afterburner or Precision X so overclock has to be all manual. Unfortunately due to the vcore being stuck at 1.043 and sometimes jumping on increment higher I cannot run my core at 2160 or 2175 which I was able to before. But taken that I have no core speed drop the 2145 worked pretty well and I can drop the tdp to 42 or 43% as the most the card pulled was only slightly over 410watt. Well here is my results with Time Spy, Fire Strike Ultra and Port Royal.
> 
> As for cooling, well I just want to put it out there. I installed a Nzxt Kraken G12 with a Corsair H110 CW and I am pretty impressed with the temps. I've never touched 50C and the closest I got was 46c when I sat with ambient temps of 33C or so. At present I'll drop half of the rad in a bucket of water and it would shave of 4 to 5c and adding ice to the water more like 8 to 9c keeping my gpu below 40c under full load. Does not last long but good for two to three runs.


 
It's been confirmed that the 1000W BIOS is broken on a deeper level. Personally, on my reference PCB it did not provide any additional power limit over the Galax 380W one, and on a Strix card, which it was made for, it only worked slightly better; it still wouldn't let the GPU get anywhere close to 1.093V.

The most recent development is that the 1000W BIOS was tested on a shunt-modded Strix card. That card works perfectly on the Galax 380W, sustaining the full 1.093V throughout an entire UNIGINE Superposition 8K Optimized run, but when the same test was attempted on the 1000W BIOS it resulted in a max voltage of 1.06x. The conclusion, then, is that something is severely wrong with it; I would not recommend the BIOS to anyone but actual Strix users, and even that is conditional.

Shunt modding is the only way to get rid of the power limit once and for all; it is extremely unlikely there will ever be a BIOS with a higher power limit than the Galax 380W that actually works.
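For anyone wondering why the shunt mod works: the card estimates current from the voltage drop across tiny sense resistors (shunts) and multiplies by rail voltage to get power. Soldering another resistor in parallel lowers the effective resistance, so the controller under-reads current, and the power limit never trips. A sketch of the arithmetic (the 5 mΩ values are illustrative, not the 2080 Ti's actual shunt spec):

```python
def parallel_ohms(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def reported_power_w(actual_w, r_stock=0.005, r_added=0.005):
    """Power the controller *thinks* the card draws after a shunt mod:
    the sensed voltage drop scales by r_effective / r_stock."""
    return actual_w * parallel_ohms(r_stock, r_added) / r_stock

# Stacking an equal-value shunt halves the reading: a real 380W draw
# is reported as ~190W, comfortably inside any stock power limit.
print(reported_power_w(380))
```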


----------



## zack_orner

bigjdubb said:


> 25 degree delta over ambient is not 25 degree coolant. It means that if your room ambient temp is 25 degrees your water temps will be 50 degrees. Those kits are essentially a 240 rad with a pump and controls built in, it doesn't do anything special that a separate rad, pump and fan controller couldn't do.


Thank you for the clarification I thought it was a chiller. I would have been severely disappointed. So back to looking for a case.

2700x rog cross hair hero vii 1 tb 970 evo gskiils 3200cl14 msi 2080 ti gaming x trio asus rog ryou 240 aio


----------



## ReFFrs

zhrooms said:


> It's been confirmed that the 1000W BIOS is broken on a deeper level, personally on my reference PCB it did not provide any additional power limit over the Galax 380W one


How did you test that? Are you talking about the power limit slider not being able to move above 100%? But it's already 1000W. 

The XOC BIOS does provide additional power limit, and I have actually seen my card utilizing a 480W PL. 

You should check the real power usage values in HWiNFO64 -> GPU Power, or from your PSU if it supports digital output of such data.


----------



## toncij

Is there a guaranteed-Samsung VRAM card out there? Seems that it's totally random.


----------



## bigjdubb

ReFFrs said:


> You should check for real power usage values in HWINFO64 -> GPU Power entry or from your PSU if it supports digital output of such data.


Out of curiosity, what PSU does that?


----------



## kx11

toncij said:


> Is there a guaranteed-Samsung VRAM card out there? Seems that it's totally random.



2080 Ti Lightning Z, never saw one with Micron.


----------



## Carillo

ReFFrs said:


> How did you test that? Are you talking about the power limit slider not moving above 100%? But it's already 1000W
> 
> XOC bios does provide additional power limit and I have actually seen utilizing 480W PL on my card.
> 
> You should check for real power usage values in HWINFO64 -> GPU Power entry or from your PSU if it supports digital output of such data.


Have you tried the BIOS?


----------



## toncij

kx11 said:


> 2080 ti Lightning Z , never saw one with Micron


I see... Hmm, no block for it, and it's 3-slot. The GB Aorus AIO looks like a better purchase then.


----------



## Cyber Locc

https://forums.evga.com/Can-we-get-...attage-Bios-Petition-m2938546-p2.aspx#2938819

Sign it, boys and girls! I think these are going to be my last EVGA cards; I love how they try to deny that any BIOS is higher than theirs. I actually think they are throttling cards to sell custom PCBs, and that's a new damn low.


----------



## zhrooms

ReFFrs said:


> XOC bios does provide additional power limit and I have actually seen utilizing 480W PL on my card.
> 
> You should check for real power usage values in HWINFO64 -> GPU Power entry or from your PSU if it supports digital output of such data.


 
It doesn't; your card can't really use more than about 450W because of the hard voltage limit of 1.093V.

The power reading is not that reliable; the calculation depends on multiple things and can vary by card.

You'll have to provide a sea of proof for anyone to believe your card actually pulled 480W; the card would have to run 1.093V and over 2200MHz for that usage.

That doesn't make sense even if you had a Strix and cooled it with LN2, as the BIOS didn't allow a shunt modded Strix to go above 1.06x voltage, which means sustaining above 400W is impossible (unless it's a very short peak).
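A first-order sanity check on those wattage claims: CMOS dynamic power scales roughly with frequency times voltage squared. This is only an approximation (it ignores leakage and VRAM power), and the baseline figures are Strix XOC numbers assumed for illustration:

```python
# P ~ f * V^2 first-order scaling; assumed baseline: ~415 W sustained
# at 2100 MHz / 1.043 V (illustrative; ignores leakage and memory power).

def scale_power(p0_w, f0_mhz, v0, f1_mhz, v1):
    """Project power at a new clock/voltage point from a measured baseline."""
    return p0_w * (f1_mhz / f0_mhz) * (v1 / v0) ** 2

# What would it take to hit ~480 W? Roughly 2200 MHz at the full 1.093 V:
print(round(scale_power(415, 2100, 1.043, 2200, 1.093)), "W")
```

Under this approximation, ~480W really does require both the full 1.093V and well over 2100MHz at the same time, consistent with the argument above.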
 


toncij said:


> Is there a guaranteed-Samsung VRAM card out there? Seems that it's totally random.


 
It's up to the partners. High-end cards should always have Samsung, as it overclocks better, but other than that it's hard to say for the reference cards. Early after release almost every card had Micron; this year it seems to be more Samsung than Micron, regardless of which card you buy.


----------



## toncij

Regarding the Lightning, it seems that only Bitspower makes a block for it, while the Aorus comes with one, or even as an AIO. :=) Seems like even the Asus is more accessible for water due to multiple blocks, including EKWB.


----------



## kx11

toncij said:


> I see.. Hmm, no block for it and it's 3 slot. GB Aorus AIO looks like better purchase then.



Here's the Lightning Z block which I'm using now:




https://shop.bitspower.com/index.php?route=product/product&path=67_102_349&product_id=7204


----------



## JustinThyme

bigjdubb said:


> 25 degree delta over ambient is not 25 degree coolant. It means that if your room ambient temp is 25 degrees your water temps will be 50 degrees. Those kits are essentially a 240 rad with a pump and controls built in, it doesn't do anything special that a separate rad, pump and fan controller couldn't do.


I don't even think it can match a single 140mm. I ran one for a very brief time on a quad core CPU only. A few minutes of full load and it saturated; fans at full bore and temps just kept rising to the point my OC'd chip BSOD'd. Replaced it with an H115i and CPU temps stayed 20C cooler. Just not a good cooler. The only issue with corrosion I ever had in 30+ years of doing this was from Koolance parts. This particular item comes with very thin tubing, like 5mm or close, with a spring inside to keep it from collapsing.
Either way, so outdated and so not worth it.


----------



## Shawnb99

zhrooms said:


> It's up to the partners, high end cards should always have Samsung as they overclock better, but other than that it's hard to say for the reference cards, early after release almost every card had Micron, this year it seems to be more Samsung than Micron, regardless of what card you buy.


My first EVGA Ultra OC was Micron and so is my FTW3 Hydro Copper. It seems to be the luck of the draw.


----------



## Cyber Locc

JustinThyme said:


> bigjdubb said:
> 
> 
> 
> 25 degree delta over ambient is not 25 degree coolant. It means that if your room ambient temp is 25 degrees your water temps will be 50 degrees. Those kits are essentially a 240 rad with a pump and controls built in, it doesn't do anything special that a separate rad, pump and fan controller couldn't do.
> 
> 
> 
> I dont even think it can match a single 140mm. I ran one for a very brief time on a quad core CPU only. A few minutes of full load and it saturated. Fans to full bore and temps just kept rising to the point my OCd chip BSOD'd Replaced it with a H115i and CPU temps stayed 20C cooler. Just not a good cooler. Only issue with corrosion I ever had in 30+ years of doing this was from koolance parts. This particular item comes with very thin tubing, like 5mm or close with a spring inside to keep it from collapsing.
> Either way, so outdated and so not worth it.

Exaggerating a tiny bit there? Water cooling has only been a thing for 20-ish years 😛.




Shawnb99 said:


> zhrooms said:
> 
> 
> 
> 
> It's up to the partners, high end cards should always have Samsung as they overclock better, but other than that it's hard to say for the reference cards, early after release almost every card had Micron, this year it seems to be more Samsung than Micron, regardless of what card you buy.
> 
> 
> 
> 
> My first EVGA Ultra OC was Micron and so is FTW3 Hydrocopper. It seems to be luck of the draw

Dang, sorry to hear about your luck. I was curious about this the other day, and today when unboxing my second card I was nervous lol. I tend to have very good luck with this, I never get Micron memory. I know it was huge to have Hynix back in the 290X days, and of the 5 I got, every one was Hynix lol.


----------



## J7SC

Shawnb99 said:


> My first EVGA Ultra OC was Micron and so is FTW3 Hydrocopper. It seems to be luck of the draw


I don't mind the Micron VRAM on my 2080 Tis... I posted a few results a couple of days ago with an effective RAM speed of 8180 (+1110 MHz), and I have yet to see a single artifact.

btw, how's the new build + w-cooling holding up?


----------



## ReFFrs

zhrooms said:


> It doesn't, your card can't really use more than about 450W because of the hard voltage limit of 1.093V


Try TimeSpy Extreme with an OC applied and you will get 480W as well, maybe even more.


----------



## zhrooms

Shawnb99 said:


> My first EVGA Ultra OC was Micron and so is FTW3 Hydrocopper. It seems to be luck of the draw


 
The EVGA XC Ultra is still a reference card, just with a 3-slot cooler instead of a 2-slot one, but the FTW3 (high-end, custom PCB) from this year should always have Samsung, so the question is, when did you buy it?

As previously mentioned, last year there were a lot more Micron cards out there, this year it seems to be more Samsung.
 


J7SC said:


> I don't mind the Micron VRAM on my 2080 Tis...I posted a few results a couple of days ago with effective RAM speed of 8180 (+1110 Mhz), and I have yet to see a single artifact.


 
Micron memory running on a hot air-cooled card (at around 75C) can't really do more than +850. I've tested this on three Micron cards; above +900 it artifacted on all of them, while keeping the card at a water-cooled temperature allowed the memory to go up to +1100 without artifacting. My fastest card could do +1175, the second fastest +1135.

But this is really slow. Most cards with Samsung can do up to +1400 on air; that's up to ~8% faster memory clock than my Micron on air, which is a huge deal. I'd easily pay an extra $100 for Samsung. You can just forget a good benchmark score on Micron.
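For reference, here's how those offsets translate to effective memory speed, assuming the offsets add to the ~7000 MHz clock Afterburner displays for 14 Gbps GDDR6 (a sketch; the exact percentage depends on which clock you normalize against):

```python
BASE_MHZ = 7000  # stock 2080 Ti GDDR6 as shown in Afterburner (14 Gbps effective)

def effective_mhz(offset_mhz):
    """Memory clock after applying an Afterburner-style offset."""
    return BASE_MHZ + offset_mhz

micron_air  = effective_mhz(850)    # 7850 MHz
samsung_air = effective_mhz(1400)   # 8400 MHz

gain = samsung_air / micron_air - 1
print(f"Samsung vs Micron on air: +{gain:.1%}")
```

Relative to the overclocked Micron clock this comes out to ~7%; measured against the stock 7000 MHz instead, the 550 MHz difference is ~8%, matching the figure above.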
 


ReFFrs said:


> Try TimeSpy Extreme with OC applied and you will get 480W either, maybe even more.


 
No. That's not how any of this works.


----------



## dante`afk

SLI is still bad, even with RTX Titan NVLink and 2x x16.

https://www.computerbase.de/2019-03/titan-rtx-sli-test/#abschnitt_frametimes_mit_grossen_problemen


----------



## fleps

Guys, in your opinion, is there any difference between, let's say, an MSI Gaming X Trio and an EVGA FTW3 (RTX 2080 in this case)?

I know the FTW3 has a higher power limit, but it seems that OC potential is defined by the silicon lottery after all, right?

I've seen Gaming X Trio with stable 2100 core OC's and FTW3 maxing out at 2070/2055, which is the same as my (now deceased) way cheaper Ventus OC.

So the choice is more about aesthetics and better RMA and stuff like that?


----------



## fleps

zhrooms said:


> Micron memory running on a hot air cooled card (at around 75c) can't really do more than +850, I've tested this on three micron cards, above +900 it artifacted on all of them, while keeping the card at a water cooled temperature allowed the memory to go up to +1100 without artifacting, my fastest card could do +1175, second fastest card +1135.
> 
> But this is really slow, most cards with Samsung can do up to +1400 on air, that's up to ~8% faster memory clock than my micron on air, that's a huge deal, I'd easily pay an extra $100 for Samsung. You can just forget a good benchmark score on Micron.


My Ventus OC is Micron and the memory passed all benches with +1050 on memory and 2070/2055 on core, ambient temps around 30C and the card on air always staying around 65-70C.
It's a 2080 card, so not sure if there are differences to a 2080 Ti; just wanted to mention it.


----------



## VPII

zhrooms said:


> EVGA XC Ultra is a reference card still, just 3 slot cooler instead of 2 slot, but the FTW3 (High-End, Custom PCB) from this year should always have Samsung, so question is, when did you buy it?
> 
> As previously mentioned, last year there were a lot more Micron cards out there, this year it seems to be more Samsung.
> 
> 
> 
> Micron memory running on a hot air cooled card (at around 75c) can't really do more than +850, I've tested this on three micron cards, above +900 it artifacted on all of them, while keeping the card at a water cooled temperature allowed the memory to go up to +1100 without artifacting, my fastest card could do +1175, second fastest card +1135.
> 
> But this is really slow, most cards with Samsung can do up to +1400 on air, that's up to ~8% faster memory clock than my micron on air, that's a huge deal, I'd easily pay an extra $100 for Samsung. You can just forget a good benchmark score on Micron.
> 
> 
> 
> No. That's not how any of this works.


Interestingly, with my current mod on the card, namely an NZXT Kraken G12 and Corsair H110 CW, I have no heat spreaders on the memory or VRM. I've checked the temps on the back of the card with infrared; it's a little difficult checking the front, but you can add about 10 to 15C to the back reading to estimate it, and I rarely get 50C on the back, mostly 42 to 45C. The memory on the card is Samsung, and I immediately felt when I did the mod that it ran a fair bit cooler than my previous card, which had Micron memory. The memory can easily do +1250MHz. I have not tried higher yet, but will do so in time.


----------



## JackCY

toncij said:


> Is there a guaranteed-Samsung VRAM card out there? Seems that it's totally random.


EVGA Kingpin.

Micron has higher variance in clocks among samples; Samsung lower. Both can do similar clocks, but the quality lottery is better with Samsung chips, as it has been in recent years. Some Micron chips OC less, some as well as top Samsung, while more of the Samsung chips OC well. So with Samsung you get better consistency of clocks among chips, or they simply bin them more than Micron.


----------



## Renegade5399

So after testing all sorts of BIOSes I came to the conclusion that if you have a reference PCB card, the shunt mod is the way to go. I'm running the stock EVGA BIOS with the shunt mods installed and am extremely happy. 20 minutes of work per card to tear down, solder the resistors on, and install the waterblocks. I now run 2085/16000 @ 1.05V for 24/7 operation like gaming, and so far have gotten up to ~2125/16200 @ 1.093V for benching. Load temps hit ~48°C now that I have the blocks mounted perfectly. At the 24/7 clocks, single card performance at 1440p up to 4K (with DSR) is way above satisfactory. Buildzoid was right, the reference PCB is phenomenal and there's really no reason to pay premium prices for the custom PCB cards when shunt modding the reference design works so well.


----------



## Cyber Locc

zhrooms said:


> No. That's not how any of this works.


Ya dude, that is exactly how this works. Harder loads require higher wattage; try it yourself.

Edit: I see now you are talking about 450W. That may be the wattage limit, I am not sure; I have not gotten around to checking the cards out enough to know.


----------



## ReFFrs

zhrooms said:


> It doesn't, your card can't really use more than about 450W because of the hard voltage limit of 1.093V





ReFFrs said:


> Try TimeSpy Extreme with OC applied and you will get 480W either, maybe even more.





zhrooms said:


> No. That's not how any of this works.



You should be banned permanently on this forum for spreading fake news, especially about XOC bios capabilities and card's power draw. You don't even deserve XOC to be available to you for downloading.

Here is ultimate proof that RTX 2080 Ti can indeed consume more than 450W of power with XOC bios. Shame on you.


----------



## dangerSK

ReFFrs said:


> You should be banned permanently on this forum for spreading fake news, especially about XOC bios capabilities and card's power draw. You don't even deserve XOC to be available to you for downloading.
> 
> Here is ultimate proof that RTX 2080 Ti can indeed consume more than 450W of power with XOC bios. Shame on you.


Agree, you can reach around 450W easily even in games with the XOC BIOS.


----------



## ReFFrs

dangerSK said:


> Agree, u can reach around 450W easily even in games with XOC bios.


I don't know how accurate the power limit reporting in percent is, but in TimeSpy Extreme it can go as high as 53% * 10 = 530W, even with the power limit slider set to 48% (480W).

Just a reminder: 1000W = 100% PL on the XOC BIOS.
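Since the XOC BIOS defines 100% = 1000W, the percent-to-watts conversion is just a factor of 10; a trivial sketch, but it makes the readings above explicit:

```python
# XOC BIOS: the power limit slider and telemetry are percentages of 1000 W,
# so each percent corresponds to 10 W.

XOC_TDP_W = 1000

def pl_to_watts(percent):
    """Convert an XOC BIOS power-limit percentage to watts."""
    return XOC_TDP_W * percent / 100

print(pl_to_watts(48))  # 480.0: the slider setting mentioned above
print(pl_to_watts(53))  # 530.0: the peak reading mentioned above
```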


----------



## dangerSK

ReFFrs said:


> Don't know how accurate is power limit reporting in percentage, but in TimeSpy Extreme it can go as high as 53% * 10 = 530W, even with power limit slider set to 48% (480W)
> 
> Just to remind 1000W = 100% PL on XOC bios


I am using GPU-Z for PL monitoring. I didn't see more than 480W on my card.


----------



## outofmyheadyo

fleps said:


> Guys, in your opinion, is there any difference lets say about a MSI Gaming X Trio and an EVGA FTW3? (RTX 2080 in this case)?
> 
> I know the FTW3 has a higher power limit, but it seems that OC potential is defined by silicon lottery after all right?
> 
> I've seen Gaming X Trio with stable 2100 core OC's and FTW3 maxing out at 2070/2055, which is the same as my (now deceased) way cheaper Ventus OC.
> 
> So the choice is more about aesthetics and better RMA and stuff like that?


Haven't tried the FTW3, but I can't fault my Trio; it runs cool and quiet, and I flashed the 400W BIOS on it as well, so no problems there.

Haven't even tried a manual OC, just used the RivaTuner utility: set PT +35% and memory @ +1000.

https://www.3dmark.com/spy/6748432


----------



## fleps

outofmyheadyo said:


> Havent tried the FTW3 but I cant fault my Trio it runs cool and quiet, flashed the 400W bios on it aswell so no problems there.
> 
> Havent even tried manual OC used the rivatuner utility, just set PT +35% and memory @ +1000
> 
> https://www.3dmark.com/spy/6748432


Thanks for the reply.

I have the opportunity to get a 2080 FTW3 second hand (the guy received it as an RMA 2 months ago for a faulty 1080 Ti, but decided to buy a 2080 Ti instead); he tested it and it can do 2115/2100 core and has Samsung memory.
Or, a brand new Gaming X Trio for 100 bucks more than the FTW3, but no idea of the memory or OC potential.

I think I'll go with the FTW3, even though I hate that EVGA doesn't offer RMA in my country and I would need to ship it to the US if ever needed, while MSI has amazing local RMA here.


----------



## J7SC

zhrooms said:


> (edit)
> 
> As previously mentioned, last year there were a lot more Micron cards out there, this year it seems to be more Samsung.
> 
> Micron memory running on a hot air cooled card (at around 75c) can't really do more than +850, I've tested this on three micron cards, above +900 it artifacted on all of them, while keeping the card at a water cooled temperature allowed the memory to go up to +1100 without artifacting, my fastest card could do +1175, second fastest card +1135.
> 
> (snip)


 
Temps really mean a lot, no matter whether it is Hynix (have not seen a 2080 Ti with it yet), Micron or Samsung. Vince/Kingpin in a recent vid (via GN) underscored that any GDDR6 should be kept under 75C.

As to Micron's market presence, it seems that Micron had an exclusivity arrangement with NVIDIA early on (see article here: https://www.techspot.com/news/77445-nvidia-addresses-failing-geforce-rtx-2080-ti-cards.html).

Finally, I can run my Micron VRAM well over '+1340' on top of the already boosted factory clocks; then again, the cards are w-cooled, and the real question is where the ideal VRAM setting is for max score, not max clock, and that seems to vary from bench to bench.


----------



## outofmyheadyo

fleps said:


> Thanks for the reply.
> 
> I have the opportunity to get a 2080 FTW3 "second hand" (the guy received as RMA 2 months ago from a faulty 1080 TI, but decided to buy a 2080 TI instead) that he tested and can do 2115/2100 core and have Samsung memories;
> Or, a brand new Gaming X Trio for 100 bucks more than the FTW3, but no idea of memories or OC potential.
> 
> I think Ill go with the FTW3, even that I hate that EVGA doesn't offer RMA in my country and I would need to ship it to US if ever needed, while MSI has an amazing local RMA here.


I had a Strix 2080 before and it had Samsung RAM as well. The only reason I bought the 2080 Ti is that someone decided to buy it from the store for close to 1400€ and sell it for 800€ about 2 days later while it was unused. Sometimes you get a good deal, I guess.


----------



## Cyber Locc

Well, my second card sucks 😞, but that's life, right? Can't win twice lol.

It crashes with anything over +125; however, it's a lot more stable with its clocks, likely because it's not power limited. My other card's silicon is much stronger, it just needs a higher PL than the pathetic 338W EVGA gives us.


----------



## zhrooms

ReFFrs said:


> You should be *banned permanently* on this forum for spreading *fake news*, especially about XOC bios capabilities and card's power draw. *You don't even deserve XOC* to be available to you for downloading.
> 
> Here is *ultimate proof* that RTX 2080 Ti can indeed consume more than 450W of power with XOC bios. *Shame on you.*
> 
> Don't know how accurate is power limit reporting in percentage, but in TimeSpy Extreme it can go as high as 53% * 10 = 530W, even with power limit slider set to 48% (480W).


 









Thank you for the kind words.

Your own *ULTIMATE PROOF* picture is showing the TDP reading: *468W* at just *2115MHz 1.093V*, but the card can *not* reach that power usage at that clock speed and voltage. It's like saying your 1200HP car at half throttle gets a dyno reading of 1200HP. The core clock and voltage reflect the power reading.

It has been demonstrated over at GamersNexus that the FTW3 they tested, with no power limit, reached a 439W peak at 1.093V. And that reading is directly from the new RTX feature on the FTW3: "Real-time monitoring from power connector and PCIe bus, better evaluation to determine power consumption while overclocking."
@willverduzco reached an estimated usage of 443W on his shunt modded Strix in Superposition 8K Optimized, and 423W in 1080p Extreme. The Galax Hall of Fame card also comes with a power limit of 450W, the highest of all the cards, and that is not an incidental power limit; it is just above what is needed to max the card out at 1.093V. If you pay for Hall of Fame, you are not restricted by the power limit.

I flashed the broken XOC BIOS again to show some numbers:

Galax *380W* BIOS = *2130MHz 1.050V* peak resulted in *371W*

Strix *1000W* BIOS = *2100MHz 1.043V* sustain resulted in *415W*

Starting Superposition with the GPU at 24c allowed it to reach an overclock of *2145MHz at 1.087V*, and the usage was just *422W*, compared to your *2115MHz 1.093V 468W*.

The XOC BIOS only works temporarily, and not every time, for me. The fastest I managed to run last time I had the BIOS installed was 2160MHz 1.093V with the GPU at 15c, but even at that temperature it could not hold it for more than a few seconds; it always ended up at 1.043-1.050V regardless of temperature.

The conclusion is that the BIOS is not working properly (not even on the Strix cards it was made for), and the power reading is off; for you it's even more off than for me. You claiming your stock card reached 530W in Time Spy Extreme is comical.

Picture provided with details in the spoiler,



Spoiler














_Ban yourself._


----------



## VPII

J7SC said:


> Temps really mean a lot, no matter whether it is Hynix (have not seen a 2080Ti w/ it yet), Micron or Samsung. Vince/KingPin in a recent vid (via GN) underscored that any GDDR6 should be kept under 75 C.
> 
> As to Micron's market presence, it seems that Micron had an exclusivity arrangement w/NVidia earl on (see article here: https://www.techspot.com/news/77445-nvidia-addresses-failing-geforce-rtx-2080-ti-cards.html
> 
> Finally, I can run my Micron VRAM well over ' +1340' on top of the already boosted factory clocks; then again, the cards are w-cooled, and the real question is where the ideal VRAM setting is for max score, not max clock, and that seems to vary from bench to bench.


Hi J7SC, I believe the advice about keeping memory temps below 75C does hold water. You know that I have an NZXT Kraken G12 and Corsair H110 CW installed on my card; the only problem is there is no cooling coverage over the VRM and memory. After our previous discussions I did get an infrared thermometer with which I can check the temps on the back of the card. I say the back, as it is a little difficult checking the front with the G12 cover in the way. The highest temp I've seen on the memory was 51C, same on the VRM, so adding 10 or so should give you the front reading or thereabouts. My reason for bringing this up is that I can run my memory at +1250MHz, effectively 8250MHz, with no artifacts or any issues whatsoever.


----------



## J7SC

VPII said:


> Hi J7SC, I believe that the temp on memory being kept below 75C does hold water. You know that I have a Nzxt Kraken G12 and Corsair H110 CW installed on my card. The only problem is there is no cooling coverage over the vrm and memory. After our previous discussions I did get an infrared thermometer with which I can check the temps on the back of the card. I state back as it is a little difficult checking the front with the G12 cover or hold down. The highest temps I've seen on the memory was 51C same on VRM so adding 10 or so should give you the front reading or there about. My reason for bringing this up is because I can run my memory at +1250mhz effectively 8250mhz with no artifact or any issue what so ever.


 
..yeah, Vince/KP has some good comments about it, especially re. consistency (i.e. 6 min 20 sec below...). Whatever your RTX card is equipped with, the cooler the better... that's the point. I have no qualms about posting results with my 2x Micron-equipped cards, which I really love; at the same time I'm (like a lot of folks) waiting for more info on Zen2 for a next build that will likely include 2080 Ti Kingpins with Samsung. But either way, you can bet that the VRAM et al. will be well cooled, below 50C or so.


----------



## VPII

J7SC said:


> ..yeah, Vince/ KP has some good comments about it, especially re. consistency (i.e. 6 min 20 sec below...). Whatever your RTX card is equipped with, the cooler the better...that's the point. I have no qualms about posting results w/ my 2x Micron equipped cards > which I really love; at the same time I'm (like a lot of folks) waiting for more info on Zen2 for a next build that will likely include 2080 Ti Kingpins w/ Samsung. But either way, you can bet that VRAM et al will be well cooled, below 50 C or so.
> 
> https://www.youtube.com/watch?v=F7maYyFrXF8


Ahhhh, Zen2... now we're talking. I've tried to put money aside over the past year or so just to have the cash for when Zen2 lands. No new GPU for me, but I'm sure my puppy will be happy with the new CPU and possibly motherboard.


----------



## outofmyheadyo

Do you guys manually OC or use the Afterburner tool? Recently I have used the tool and never seen a crash or problem, but yes, the OC you get is lower than manual.


----------



## Cyber Locc

So my second card is on water as of today. Dud.

It can't break over 2085, and even that is on the edge of stable on the stock XC BIOS. Can't win the silicon lottery twice, I guess... or can I? I'll let you know on Wednesday when the 9960X gets here.


----------



## ReFFrs

zhrooms said:


> Starting Superposition with the GPU at 24c allowed it to reach an overclock of 2145MHz at 1.087V, and the usage was just 422W, compared to your 2115MHz 1.093V 468W


Something is definitely wrong with you. You have been told many times that for pushing card to its maximum power you should be testing specific games and benchmarks, not that filthy Superposition you are bringing back to us. In my example I have used *TimeSpy Extreme* and it's the right way to go.




zhrooms said:


> Your own ULTIMATE PROOF picture is showing the TDP reading: 468W at just 2115MHz 1.093V, but the card can not reach that power usage at that clock speed and voltage, it's like saying your 1200HP car on a dyno gets a reading of 1200HP, when it's at half throttle.


Look at the PSU power draw chart on the left of my screenshot: https://www.overclock.net/forum/attachment.php?attachmentid=262180&d=1553873301

It displays data received via a USB dongle directly from a Corsair AX1200i PSU.

The chart peaks around a total system consumption of 740W. Max CPU power usage was 160W, as can be seen from HWiNFO on the right. So the difference between these two is the GPU consumption plus some smaller stuff like RAM and SSDs:

740 - 160 = 580W

Let's say that excluding the GPU and CPU, the other stuff is using about 100W (probably less). You still get 580 - 100 = 480W for the GPU only.
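Writing the subtraction out (same numbers as above; keep in mind the 100W "everything else" figure is a guess, and PSU telemetry has its own error margin, so this is an estimate rather than a measurement):

```python
# Estimating GPU draw from whole-system power minus everything else.
# All numbers are the ones quoted in this post; "other" is a rough guess.

system_peak_w = 740   # from the PSU's own telemetry
cpu_peak_w    = 160   # from HWiNFO
other_est_w   = 100   # RAM, SSDs, fans, board (assumed)

gpu_est_w = system_peak_w - cpu_peak_w - other_est_w
print(f"estimated GPU draw: {gpu_est_w} W")  # 480 W
```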

Now go and listen to this guy who makes GPU PCB analysis videos for GamersNexus. He clearly says that the operating parameters for the VRM on the 2080 Ti are 500kHz at 1.2V-1.8V, which is then converted down to a max of 1.093V when feeding the chip. But upstream of that, the whole card with everything onboard can pull more; that's how we get more than 450W of power usage under heavy loads.

Go to the 10:50 timestamp in the video for this specific comment.


----------



## VPII

ReFFrs said:


> Something is definitely wrong with you. You have been told many times that for pushing card to its maximum power you should be testing specific games and benchmarks, not that filthy Superposition you are bringing back to us. In my example I have used *TimeSpy Extreme* and it's the right way to go.
> 
> 
> 
> Look at PSU power draw chart on the left of my screenshot: https://www.overclock.net/forum/attachment.php?attachmentid=262180&d=1553873301
> 
> It displays data received via USB dongle directly from Corsair AX1200i PSU
> 
> Chart peaks around total system consumption of 740W. Max CPU power usage was 160W as can be seen from HWINFO on the right. So the difference between these two is GPU consumption plus some smaller stuff like RAM and SSDs
> 
> 740 - 160 = 580W
> 
> Let's say excluding GPU and CPU, other stuff is using about 100W (probably less). You still get 580 - 100 = 480W for the GPU only.
> 
> Now go and listed to this guy who is making GPU PCB analysis videos for GamersNexus. He clearly says that operating parameters for VRM on 2080 Ti are 500KHz at 1.2V-1.8V, then they are being converted to max 1.093V when feeding to chip. But initially the whole card with all stuff onboard can pull more than 1.093V, that's how we get more than 450W power usage at heavy loads.
> 
> Go to 10:50 timestamp on video for this specific comment.
> 
> https://www.youtube.com/watch?v=VdwQC00x8V4&feature=youtu.be&t=649


I'm just going to put it out there... I am using the Strix XOC BIOS, so the power limit is not an issue. I ran Timespy Extreme right through with the GPU at 2145 using 1.043V. I cannot set it higher, even though the card can do 2175MHz core without an issue in Time Spy, obviously due to the fact that there is no voltage curve option when using this BIOS, so I had to settle for 2145MHz core. It gave me better results regardless, as the clocks stayed constant at 2145MHz. Power draw when looking at GPU-Z was max 405W in game test 1 and 445W in game test 2. The card I have is obviously not bad, seeing that it can do 2145 at 1.043V without an issue. So there may be some grounds to what @zhrooms was trying to say. I still like this BIOS, as it gives better results than my previous one.

https://www.3dmark.com/spy/6765564

https://www.3dmark.com/spy/6765471


----------



## ReFFrs

VPII said:


> Power draw when looking at GPUz was max 405watt in game test 1 and 445watt in game test 2. The card I have is obviously not bad seen that it can do 2145 at 1.043v without an issue. So there may some grounds to what @zhrooms was trying to say.


That dude was trying to say you cannot exceed 450W at 1.093V.

And you are talking about 445W reached at 1.043V.

So it's obvious that at 1.093V it will go higher in power consumption, breaking 460-480W.


----------



## zhrooms

ReFFrs said:


> *Something is definitely wrong with you.* You have been told many times that for pushing card to its maximum power you should be testing specific games and benchmarks, *not that filthy Superposition* you are bringing back to *us*. In my example I have used *TimeSpy Extreme* and *it's the right way to go*.
> 
> Look at PSU power draw chart on the left of my screenshot: https://www.overclock.net/forum/attachment.php?attachmentid=262180&d=1553873301. Chart peaks around total system consumption of 740W. Max CPU power usage was 160W as can be seen from HWINFO on the right. So the difference between these two is GPU consumption plus some smaller stuff like RAM and SSDs. Let's say excluding GPU and CPU, other stuff is using about 100W (probably less). You still get 580 - 100 = 480W for the GPU only.


 









Again, thank you for your kind words.

Time Spy Extreme is not a good benchmark as it's behind a paywall, and the Superposition 8K Optimized preset is very similar in power usage. You can also modify the preset to run at 8K Extreme, which turns it into an actual slideshow and gets close to the VRAM limit, exceeding Time Spy Extreme.

Your power reading in Corsair LINK is meaningless when it comes to determining the median power consumption of the graphics card, which is what all the standards go by. By _your_ logic, the Galax 380W should be called the Galax 500W, based on the picture below.










The power supply showing higher power draw is normal, as the total power going to the card can spike a lot higher than what your actual GPU is reporting. But the GPU simply cannot reach 480W at just 2100MHz 1.093V. The only way you would reach 480W would be to shunt mod and run your card at maybe 2250MHz @ 1.093V in Time Spy Extreme / Superposition 8K Extreme at close to sub-zero temps; that would get you close to the top of every benchmark leaderboard, but you're not there, are you? Because your GPU is not actually reaching 480W usage: the power value is not accurate (proven), the math doesn't add up (proven), and the XOC BIOS is unable to sustain 1.093V over time (which the benchmark requires).
 


VPII said:


> I am using the Strix XOC bios so *power limit is not an issue*. I ran Timespy Extreme right through with GPU at 2145 using 1.043v. Cannot set higher even though the card can do 2175mhz core without an issue in Time Spy, obviously due to the fact that there is no voltage curve option when using this bios. So I had to settle with 2145mhz core. It gave me better results regardless as the clocks stayed constant at 2145mhz. Power draw when looking at GPUz was max 405watt in game test 1 and 445watt in game test 2.


 
The fact that you cannot exceed 1.043V is exactly why the power limit *IS* an issue and not working properly. The reason 2175MHz is possible in Time Spy is that it is less demanding to run, so you can reach a higher core clock and voltage; it's not because of the curve editor.

I've already demonstrated that the power reading generates inaccurate values: at *2145MHz 1.087V* vs *2100MHz 1.043V*, the power usage reads the same *±5W*.
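
As a rough sanity check on why identical readings at those two operating points are suspect: dynamic power scales roughly with f·V². This is a first-order approximation I'm adding for illustration, not an exact GPU power model:

```python
# First-order dynamic-power comparison: P is roughly proportional to f * V^2.
# A crude approximation; real GPU power also includes static/leakage draw.

def relative_power(freq_mhz, volts):
    return freq_mhz * volts ** 2

hi = relative_power(2145, 1.087)   # higher clock and voltage
lo = relative_power(2100, 1.043)   # lower clock and voltage
print(f"expected difference: {100 * (hi / lo - 1):.1f}%")  # ~10.9% higher expected
```

If the sensor reports the same wattage ±5W at both points despite a roughly 11% expected gap (tens of watts at these power levels), that supports the claim that the reading isn't tracking actual draw.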
 


ReFFrs said:


> That *dude* was trying to say you cannot exceed 450W at 1.093V. And you are talking about 445W reached at 1.043V. So it's *obvious* that with 1.093V it will go higher in power consumption breaking 460-*480W*


 
All our tools and values are based on sustained load.

We have actual evidence that at 1.093V the FTW3 showed a usage of 439W max, and a shunt modded card with a power limit of over 500W used at most an estimated 443W.

It's mind-boggling how you still believe it's possible that your card sustained 468W at 2115MHz, even after showing you the BIOS power reading is literally incorrect.

_Ban yourself? You haven't shared a shred of proof, only a fake power reading and a total power supply power consumption that is less than useless, and then you claim that anyone who defies your *ULTIMATE PROOF* should be permanently banned and hide in shame._


----------



## VETDRMS

bogdi1988 said:


> How many HP is that chiller? 1/4, 1/2 or 1?


1/4 hp, which is sized just about perfectly: no problem keeping the water at the lowest temperature setting (39F) at full load. I was considering the 1/2 hp one, but this has a very similar BTU rating (3000 for the 1/4 hp versus only 4000 for the 1/2 hp) while using less power and being quieter. The compressor cannot be restarted quickly (no sooner than every 5 minutes), so you do get a little temperature swing beyond the commanded swing setting (I use 4C). It pulls the temps down quickly, so the majority of the time it is silent. You could use a cooler or some other container with a mass of water as a thermal sink if you wanted quieter operation or longer periods between compressor cycles.
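
The control behavior described above (setpoint plus swing, with an anti-short-cycle lockout) can be sketched as a tiny Python function. The 5-minute lockout, ~4C setpoint (39F), and 4C swing are the figures from this post; the control loop itself is a hypothetical illustration, not the chiller's actual firmware:

```python
# Sketch of the chiller control behavior described above: hysteresis around
# a setpoint plus a minimum compressor off-time. Figures from the post;
# the logic is an illustrative model only.

MIN_OFF_S = 5 * 60      # compressor anti-short-cycle lockout
SWING_C = 4.0           # commanded temperature swing
SETPOINT_C = 4.0        # roughly 39 F

def compressor_should_run(temp_c, running, seconds_since_off):
    if running:
        # keep cooling until the water is pulled below the setpoint
        return temp_c > SETPOINT_C
    # start only once the water has warmed past the swing band
    # AND the anti-short-cycle timer has expired
    return temp_c >= SETPOINT_C + SWING_C and seconds_since_off >= MIN_OFF_S

print(compressor_should_run(9.0, False, 600))   # True: warm water, lockout expired
print(compressor_should_run(9.0, False, 120))   # False: still in lockout
print(compressor_should_run(5.0, True, 0))      # True: keep pulling down
```

The lockout is why a larger thermal mass (a cooler full of water) lengthens the quiet periods: the water warms through the swing band more slowly between compressor cycles.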

They are pretty cheap right now, too. $50 less than when I bought mine.

https://www.amazon.com/gp/product/B07BHHP71C


----------



## ReFFrs

zhrooms said:


> We have actual evidence that at 1.093V the FTW3 showed a usage of 439W max, and a shunt modded card with a power limit of over 500W used at most an estimated 443W.
> 
> It's mind-boggling how you still believe it's possible that your card sustained 468W at 2115MHz, even after showing you the BIOS power reading is literally incorrect.


The BIOS power reading is correct, at least in HWiNFO's "GPU Power" entry. You haven't tested Time Spy Extreme yet, and until you do, all further discussion with you is completely useless. Btw, I haven't paid a cent for it, and neither would you have to.

Here is my score with the XOC BIOS.


----------



## Nizzen

Too bad the last fun graphics card was the EVGA 780 Ti Classified. There is no fun with the 2080 Ti: stock BIOS = 2100MHz, and max OC with another BIOS + shunt mod is 2150MHz.

RIP fun overclocking with EVBot and high % gains in performance


----------



## arrow0309

zhrooms said:


> Again, thank you for your kind words.
> 
> (snip)


Hi mate, do you think I can get a little more OC headroom, even just for benchmark purposes, with the HOF 450W BIOS over the 380W BIOS?
My GPU is not so lucky; it will hold a perfectly stable 2070 / 2055 (over 38C) at the default V/F curve under liquid cooling.
Maybe 2100 / 2085 will still work in some short benches with a really cool ambient temp.


----------



## Cyber Locc

Okay, just to clear that up as this is becoming a mess lol. 


Superposition is not a good benchmark; it is not nearly as graphically intensive or power hungry as any 3DMark application. Many clocks will be stable and runnable in Unigine benches but not in any 3DMark application.

Those same clocks that are stable in Unigine but not in 3DMark will not run most games either.

Unigine is good for finding stable temps and for long-term stress testing, but its load is nowhere near that of 3DMark.

"3DMark is not a good test as it's behind a paywall": it's behind a paywall because it is the gold standard and the best bench; that's why.

Are you honestly telling me you can afford a 1200 dollar GPU but not a 30 dollar benching program?

XOC comps use 3DMark. Sorry, but it is literally the only bench of any relevance outside of benching for fun.

Your argument has devolved into serious hyperbole.


----------



## ESRCJ

Good afternoon folks. I haven't checked this thread in at least a month. Is there an XOC BIOS worth flashing on an FE for benchmark purposes? I've been using the 380W Galax reference BIOS since I got my card. I wouldn't mind a higher power limit for TS Extreme and I'd especially love to push the voltages past 1.093V.


----------



## ReFFrs

Cyber Locc said:


> Superposition is not a good Benchmark, it is not nearly as graphically intensive or power hungry as Any 3dmark application. Many clocks will be stable, and able to be ran in Unigine Benches and not able to run in any 3dmark application.


Don't even bother explaining something to that authentic dude. People have tried 10 times so far, he never listened. Maybe he's got some kind of a special edition 2080 Ti capable of competing at special olympics. There is only one discipline - Superposition and nothing else. 3DMark is for losers, you know.


----------



## Cyber Locc

gridironcpj said:


> Good afternoon folks. I haven't checked this thread in at least a month. Is there an XOC BIOS worth flashing on an FE for benchmark purposes? I've been using the 380W Galax reference BIOS since I got my card. I wouldn't mind a higher power limit for TS Extreme and I'd especially love to push the voltages past 1.093V.


There is the Asus XOC BIOS.

You cannot push voltages past 1.093V without hardware mods. That's not a BIOS issue, it's an Nvidia lockdown.


----------



## J7SC

I have been doing some single-vs-dual 2080 Ti Time Spy Extreme runs to look at temps in detail (GPUs are obviously water-cooled). Stock BIOS, no extra voltage or voltage curves (yet), same ambient temp, etc. What I find surprising is that between a single-card run (on top in the pic below) and a dual-card run, there's only a couple of degrees of difference. From what I recall with Pascal SLI, the delta was more pronounced. Perhaps the different approach taken by NVLink SLI makes a difference here?


----------



## J7SC

...and speaking of temps...looks like Galax kept a few nice 2080 TI HOF OCL cards in reserve for some LN2 fun...2080 Ti @ 2805 MHz


----------



## kx11

Very nice temps. Mine, with a +70 core OC and +1000 mem, reaches 50C max if the fans are silent (really silent) and the game is super demanding like The Division 2.

At extreme fan speeds it never passes 42C with a room temp of 23C.


----------



## DeadSec

Cyber Locc said:


> There is the Asus XOC BIOS.
> 
> You cannot push voltages past 1.093V without hardware mods. That's not a BIOS issue, it's an Nvidia lockdown.


https://www.google.com/imgres?imgur...nM:&vet=1&w=480&h=446&hl=de-DE&source=sh/x/im


----------



## VPII

What I found using the Strix XOC BIOS with the 1000W TDP is that I had to run lower clocks than before, due to there being no V-curve to set for higher voltage and speed. So I was basically stuck running 2145MHz instead of 2170MHz in Time Spy. The result in itself was pretty good, seeing that it kept the 2145MHz clock right through. Interestingly, I noticed that with core temps below 30C while the bench is starting, it fed 1.068V at 2160MHz even though I had set the manual overclock at +135MHz. Given that with this BIOS it runs stock at 2010MHz, +135MHz should mean 2145MHz. But the link I'll post with the run shows the GPU speed at 2160MHz. I believe this is all temperature related, as the temps were at 19C at idle, given that half my rad was in a bucket of ice water.

What I can confirm is that the true drop in speed happens at 41 or 42C, not before, except for this funny anomaly where the clock ran at 2160MHz when set to 2145MHz and dropped to 2145MHz at 31C.

https://www.3dmark.com/spy/6774714

I am really happy with this card; maybe not the best quality compared to others, but I have to say this Palit RTX 2080 Ti GamingPro OC does pretty well. Even with the stock cooler, the highest temp I've seen at about 30C ambient was 69C, whereas my previous card with stock cooling sat well into the 75C range.


----------



## Hemorz

Can someone with a decent overclock on a Galax 380W-flashed 2080 Ti please give me their Heaven results? I'm trying to decide whether I should send mine back due to coil whine.

Here is mine with what feels like a decent overclock (+175 core and +1000 memory), bringing it to 2125/2000.










Sent from my SM-G965F using Tapatalk


----------



## JustinThyme

Cyber Locc said:


> Exaggerating a tiny bit there? Water-cooling has only been a thing for 20-ish years


You should really reserve your judgment until you know how the water-cooling PC market took hold. I was making homemade blocks using fish-aquarium pumps, driven off the NO contact of a relay with a 12V coil that also powered 120V fans pulling heat through a heater core out of a junkyard truck (the enthusiast groups of the very young internet had spreadsheets of tried-and-true heater cores), well before the majority of posters here were born. We shared information much like today, only it wasn't about where to buy a fan shroud, because the commercial market did not yet exist; it was how to make your own. What LN2, helium, and phase change are today is what water cooling was then.

My first WC build was 1986, so I was actually being conservative. Back then it was the push to 1GHz! IIRC my first real clock that counted for anything was the 800MHz milestone. Swiftech was an early market leader, about like EK is today; now they are a name synonymous with a pump. Government installations have been at it since a quad-core processor took up 5,000 square feet. The last 50 years have truly been a period of innovation. I remember when a 40MB HDD made a good coffee table; now you can't find anything that's 40MB. The processing power that's out there now is just astounding. Take the fastest readily available platform you know, then imagine that technology spanning rack to rack for a million square feet... underground in an undisclosed location. You think Google is bad with ad prediction... LMAO.

It's been fun watching it grow from the annoying pinging of a dial-up modem that connected by putting your rotary phone receiver in a cradle, to a gigabit-over-fiber connection all the way to the OND sitting by my power meter. I'm guessing the 10Gb connection is close.


----------



## JustinThyme

ReFFrs said:


> Don't even bother explaining something to that authentic dude. People have tried 10 times so far, he never listened. Maybe he's got some kind of a special edition 2080 Ti capable of competing at special olympics. There is only one discipline - Superposition and nothing else. 3DMark is for losers, you know.



Superposition is OK once you make some tweaks to the Nvidia profile to get it to use all of your resources.

Quick run in everyday driver mode.


----------



## ReFFrs

JustinThyme said:


> Superposition is OK once you make some tweaks to the Nvidia profile to get it to use all of your resources.
> 
> Quick run in everyday driver mode.


1080p, rly? No matter what you tweak in the NV profile, running benchmarks at FullHD in 2019 is a disgrace. And don't forget that Superposition is a DX11-only bench, while Time Spy Extreme is DX12 with async compute and all the other stuff capable of pushing your card to its limit.


----------



## Cyber Locc

JustinThyme said:


> Cyber Locc said:
> 
> 
> 
> Exagrating a tiny bit there? Water-cooling has only been a thing for 20ish years
> 
> 
> 
> (snip)

Except I was making homemade blocks back in the day too.

The first documented water-cooled PC was around late '96 / early '97, so I guess you had been doing it for 11 years before that and just never told anyone, huh?

That's kind of like how everyone is an engineer, right? K, have fun with that.

You are saying '86, so you water-cooled an IBM? Lol. K dude, I think once again you mean '96.


----------



## J7SC

VPII said:


> What I found using the Strix XOC bios with the 1000watt TDP resulted in having to run lower clocks than before due to there be no Vcurve to set for higher voltage and speed. So I was basically stuck with having to run 2145mhz instead of 2170mhz on Time Spy. The result in itself was pretty good seen that it kept the 2145mhz clocks right through. Interestingly though, I noticed that with core temps being below 30C while the bench is starting it fed 1.068v with 2160mhz as the speed even though I've set the overclock manual at +135mhz. Taken that with this bios it would run stock at 2010mhz the +135mhz would mean 2145mhz. But now looking at the link I'll post with the run it will show GPU speed at 2160mhz. I do believe that this is all temperature related as the temps were at 19c at idle taken that half my rad was in a bucket of ice water.
> 
> What I can confirm is that the true drop in speed from normal happens at 41 or 42C, not before except for this funny anomaly I had with the clocks running 2160mhz when set to 2145mhz as this dropped to 2145mzh at 31c.
> (snip)


 
My experience re. temps is a bit different; I think there's a step-down in MHz in the high 30s C, and possibly even one before that, though I have to qualify that a bit: we had a central-heating outage recently, which meant ambient was roughly 18C, with the GPU (before Time Spy/E started) at around 23C. Taking advantage of that, I could get one run in at 2190MHz (likely the initial 3DM measure), but by the next run in the same session it was down to 2175MHz. Put differently, there really are two temp issues: the initial clocks at a (successful) bench start, and then the step-downs as things warm up.
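
The step-down behavior discussed in this thread can be modeled as clock bins dropped at temperature thresholds. A hedged sketch: the 31C / 41C thresholds and the ~15MHz bin size below are taken from the observations reported in these posts, not from any official Nvidia table:

```python
# Illustrative model of temperature-dependent clock step-downs on Turing.
# Thresholds and bin size come from observations in this thread; Nvidia
# does not publish an official GPU Boost step table.

BIN_MHZ = 15
# (temperature threshold in C, total bins dropped at or above it)
STEPS = [(31, 1), (41, 2)]

def effective_clock(cold_clock_mhz, temp_c):
    dropped = 0
    for threshold, bins in STEPS:
        if temp_c >= threshold:
            dropped = bins
    return cold_clock_mhz - dropped * BIN_MHZ

print(effective_clock(2160, 25))  # 2160: below the first threshold
print(effective_clock(2160, 35))  # 2145: one bin dropped
print(effective_clock(2160, 42))  # 2130: two bins dropped
```

This matches the reports above of a cold card briefly boosting one bin higher (2160 vs 2145) and the "true" drop arriving around 41-42C.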

Re. the other ongoing 'discussion' in this thread about the first use of water cooling for a computer chip: one never knows whether there was a tinkerer in a garage somewhere, but per _Wikipedia_, the Cray-2 supercomputer (1985) was one of the first, though it was a bit different than your regular block/rad: _"The dense packaging and resulting heat loads were a major problem for the Cray-2. This was solved in a unique fashion by forcing the electrically inert Fluorinert liquid through the circuitry under pressure and then cooling it outside the processor box. The unique "waterfall" cooler system came to represent high-performance computing in the public eye and was found in many informational films and as a movie prop for some time."_

But what I really want for my two Aorus 2080 TIs is the '*Corona Discharge Effect Cooling*' :headscrat also referenced in Wikipedia, the one with an '*Ionic Wind Pump*'...sadly, can't find any blocks with that for my build


----------



## VPII

J7SC said:


> My experience re. temps is a bit different, I think there's a step-down in MHz at the high 30ies C...and possibly even one before that, though I have to qualify that a bit...: We had a central heating outage recently which meant ambient was roughly 18c, with the GPU (before Time Spy/E started) at around 23 C. Taking advantage of that, I could get one run in at 2190MHz (likely initial 3DM measure), but by the next run in the same session, it was down to 2175MHz. Put differently, there really are two temp issues, the initial clocks at a (successful bench) start, and then the step-downs as things warm up.
> 
> 
> 
> Re. the other ongoing 'discussion' in this thread about first use of water-cooling for a computer chip, one never knows whether there was a tinkerer in a garage somewhere, but per _Wikipedia_, the Cray-2 supercomputer (1985) was one of the first, though it was a bit different then your regular block/rad: _"The dense packaging and resulting heat loads were a major problem for the Cray-2. This was solved in a unique fashion by forcing the electrically inert Fluorinert liquid through the circuitry under pressure and then cooling it outside the processor box. The unique "waterfall" cooler system came to represent high-performance computing in the public eye and was found in many informational films and as a movie prop for some time."_
> 
> 
> 
> But what I really want for my two Aorus 2080 TIs is the '*Corona Discharge Effect Cooling*' :headscrat also referenced in Wikipedia, the one with an '*Ionic Wind Pump*'...sadly, can't find any blocks with that for my build


You see, that is why I stated that my real overclock, 2010MHz stock plus 135MHz, would give me 2145MHz. You may have a point though: when I run Time Spy and temps are below 30C, the clock is 2160MHz with 1.068V core, but the moment it hits 31C the voltage drops to 1.043V with a 2145MHz core, which remains at that speed until the end, as the temps stay below 40C and max power pulled is around 450W in the second game test of Time Spy Extreme.

I do understand that this BIOS is not perfect, as you do not have a V-curve to work with, but I'm pretty happy running 2145MHz core and 8250MHz memory. It is a pity, seeing that the card can easily do 2175MHz in Time Spy and Extreme, but the scores are higher regardless.



Sent from my SM-G960F using Tapatalk


----------



## JustinThyme

Cyber Locc said:


> Except I was making homemade blocks back in the day too.
> 
> The first ever documented water-cooled computer was around late 96, early 1997, so I guess you been doing for 11 more years, and just never told anyone huh?
> 
> 
> That's kind of like everyone is an eningeer right? K have fun with that.
> 
> You are saying 86, so you water-cooled an IBM? Lol. K dude, I think once again you mean 96,


You missed the boat by 10 years. My first WC rig was not an IBM; it was a 386DXII circa 1985 with a stock 40MHz clock that I got to 60MHz, which doesn't really account for much. There were no benchmarks; Windows as we know it was not yet invented, simply a startup batch file with buttons to launch DOS programs. Don't be hatin' because older farts were at it before you were. The first game I installed on my monster 40MB drive a few years later was Star Wars: TIE Fighter.
LOL at "first documented in 96"; change that to ROTFLMAO. Maybe those late to the party started in the 90s; the pioneers were at it in the 80s.

The actual VERY first water-cooled computer on record, the UNIVAC I mainframe, was delivered to the US Census Bureau in 1951, 11 years before I was born.

Not everyone is an engineer... but I am.

You did get one thing right with 1996, but that was the start of the commercial market with Swiftech blocks, long after the garage builds were being made.

You are of course free to think what you want. I'm well accustomed to the personality.


----------



## JustinThyme

ReFFrs said:


> 1080p, rly? No matter what you tweak in NV profile, running benchmarks FullHD in 2019 is a disgrace. And don't forget that Superposition is DX11 bench only, while TimeSpy Extreme is DX12 with Async Compute and all other stuff capable of pushing your card to a limit.


My apologies that my 3440x1440 PG348Q monitor falls a little short of 4K, but it's hardly disgraceful. 1080p Extreme is as high as it will run.
TS Extreme? OK
https://www.3dmark.com/spy/5859180

How about Port Royal or is that weak too?
https://www.3dmark.com/pr/46520


----------



## Cyber Locc

JustinThyme said:


> ReFFrs said:
> 
> 
> 
> 1080p, rly? No matter what you tweak in NV profile, running benchmarks FullHD in 2019 is a disgrace. And don't forget that Superposition is DX11 bench only, while TimeSpy Extreme is DX12 with Async Compute and all other stuff capable of pushing your card to a limit.
> 
> 
> 
> My apologies that my 3440x1440 PG348Q monitor falls a little short of 4K but its hardly disgraceful. 1080P extreme is as high as it will run.
> TS EXtreme? OK
> https://www.3dmark.com/spy/5859180
> 
> How about Port Royal or is that weak too?
> https://www.3dmark.com/pr/46520

Time Spy Extreme renders at a 4K-equivalent load no matter your display resolution 🙂.

Why is your CPU score so low? That's weird. Are you only at 4.8 on one core? That CPU should score much higher, and it's tanking your scores.


----------



## J7SC

VPII said:


> You see that is why I stated that my real overclock 2010mhz stock plus 135mhz wpuld give me 2145mhz. You may have a point though as when I run Time Spy and temps are below 30c the clocks would be 2160mhz with 1.068v core but the moment it gets to 31c the voltage drop to 1.043v with 2145mhz core which will remain at that speed till the end as the temps will remain below 40c and max power pulled will be around 450watt in the second game test of time spy extreme.
> 
> I do understand that this bios is not perfect as you do not have a vcurve to work with but Im pretty happy running 2145mhz core and 8250mhz vmemory. It is pitty seen that the card can easily do 2175mhz in Time Spy and Extreme but the scores is higher regardless.



Yeah, the few MHz you lose from the lack of a V-curve are definitely worth the extra PL watts with that BIOS



JustinThyme said:


> You missed the boat by 10 years. My first WC rig was not an IBM it was a 386DXII circa 1985 with a stock 40MHz clock that I got to 60MHz that doesnt really account for much.
> (edit)
> 
> Not everyone is an engineer.....But I am.
> 
> You did get one thing right with 1996 but that was the start of the commercial market with swiftech blocks long after the garage brands were being made.
> 
> You are of course free to think what you want. I'm well accustomed to the personality.



...don't mean to inject myself into your argument, but I am a bit mystified by some points here... I did work with the (at the time already ancient) 386 DX, which we used to power along the dot-matrix printer. The 386 had a lot of dust bunnies in it...

That said, looking at the datasheets, there was no 386 DX/40 'stock' speed circa 1985. It wasn't until 1989 that Intel brought out the 386 DX at 33MHz. The only 386 DX/40 I can find was by AMD, introduced in 1991 after delays due to Intel court action.

Obviously I have no idea, nor do I care, what you water-cooled and how, but I wonder why... oc'ed, most of those chips wouldn't get past 5W TDP...

Anyway, sorry for the interruption _"and now back to our regularly scheduled programming"_


----------



## JustinThyme

You guys need to do better research. I know what I did and when I did it, give or take a year or two. The difference is I'm not looking it up; I lived it. In 1989 the 486 was out.


----------



## fleps

Y'all, this is really off-topic for a 2080 Ti thread. Chill.


----------



## LuckyTheWolf

I just got an MSI RTX 2080 Ti Gaming X Trio. I flashed the Gaming X Trio BIOS that raises the power limit up to 130%. Would anyone recommend a different BIOS that would yield better results? I noticed my card doesn't overclock that well: +150MHz core / +600MHz mem is stable, but going higher resulted in freezes.


----------



## dangerSK

LuckyTheWolf said:


> I just got an MSI RTX 2080 Ti Gaming X Trio. I put on the bios for the Gaming X Trio that raises the power limit up to 130% Would anyone recommend a different bios that would yield better results? I noticed my card doesn’t overclock that well. +150mhz core +600mhz mem stable going higher resulted in freezes.


Try the 450W Galax one, but unless you are hitting the power limit (can't tell, I don't know what 130% translates to in watts on your BIOS), I doubt it will improve things by a big margin.
Water cooling will help your card a lot, as Turing scales a lot with temps.
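
For reference, the power-limit slider percentage just scales the BIOS default power limit, so converting it to watts is a one-liner. The 300W default below is a hypothetical example; check the card's actual default limit in GPU-Z:

```python
# Power-limit slider % -> watts. The default limit is whatever the BIOS
# reports (visible in GPU-Z); 300 W here is a hypothetical example value.

def slider_to_watts(default_limit_w, slider_pct):
    return default_limit_w * slider_pct / 100

print(slider_to_watts(300, 130))  # 390.0 W at the 130% cap
```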


----------



## Cyber Locc

LuckyTheWolf said:


> I just got an MSI RTX 2080 Ti Gaming X Trio. I put on the bios for the Gaming X Trio that raises the power limit up to 130% Would anyone recommend a different bios that would yield better results? I noticed my card doesn’t overclock that well. +150mhz core +600mhz mem stable going higher resulted in freezes.


What's your base clock? +150 doesn't tell us much. What is your OCed clock speed?

What memory do you have?

You have a custom PCB, so I'm not sure I'd risk the Galax BIOS, especially when I doubt you are power limited with that 406W BIOS. And if that OC is around 2150 or so, which I suspect it is, that's not a bad clock at all.


----------



## Zammin

Thought you guys would like this one


----------



## zack_orner

Zammin said:


> Thought you guys would like this one
> 
> 
> 
> https://www.youtube.com/watch?v=iQI...Ve9aaDUmUje1Uq4TVOCv5ACgJl7Lb5ZoFrm1_WJXtSrrE


That was funny, a nice break from the BS going on in this forum.

2700x rog cross hair hero vii 1 tb 970 evo gskiils 3200cl14 msi 2080 ti gaming x trio asus rog ryou 240 aio


----------



## Zammin

zack_orner said:


> That was funny nice break from bs going on in this form.


Man I was blown away by how legit that fake hardware looked haha. He had fully outfitted PCBs and everything. Next level April Fools trolling.


----------



## zack_orner

Zammin said:


> Man I was blown away by how legit that fake hardware looked haha. He had fully outfitted PCBs and everything. Next level April Fools trolling.


Yeah, I bought it till the fire part, then just fell out laughing. Good stuff, thank you for the share.

2700x rog cross hair hero vii 1 tb 970 evo gskiils 3200cl14 msi 2080 ti gaming x trio asus rog ryou 240 aio


----------



## LuckyTheWolf

Cyber Locc said:


> What's your base clock? +150 doesn't tell us much. What is your OCed clockspeed.
> 
> What memory do you have?
> 
> You have a custom PCB not sure I'd risk it with the Galax bios, especially when I doubt you are power limited with that 406w bios. And if that OC is around 2150 or so, which I feel like it is, that's not a bad clock at all.


My base clock sits at 1350MHz with no OC. Under load in Time Spy Extreme I fluctuate between 1935 and 1890MHz without an OC. According to GPU-Z I'm drawing between 310-380W under load. Overclocked (+145 core, +600 mem; raising either higher makes the card unstable) I fluctuate between 2065 and 2025MHz, drawing between 330-385W. My memory is Samsung. My temps never got over 54C during the test, so as you can see I'm not boosting that high. I only hit 2100MHz once on the core throughout all my benchmarking, extremely briefly; the highest it has held for a decent amount of time is 2085MHz. I'm not sure if others had better luck using the Galax BIOS, so I'm not sure if I should try it. It looks like I got really unlucky in the silicon lottery.


----------



## J7SC

...finally got back to some Port Royal 'RTX core' benching, after TS/E. Temps at max 35 C are pretty much the same as compared to TS/E in my NVLink SLI setup (= 2x Aorus 2080 Ti, water-cooled, stock Bios w/ no extra v or v-curve; CPU is 2950X Threadripper @ 4225 / all cores)

What is interesting is that Port Royal seems to react 'better' to higher clocks than TS/E. Also, the max VRAM ceiling seems to be higher. Per below, base score (stock clocks, just max PL) was 17108, while best score was 18402. 

I kept the GPU speed at 2130 for all but the stock tests (it can go a bit higher before the v-curve etc., but I'd need to turn the heat off in our place). The main variable was VRAM speed. The best result was with VRAM +1102 = 2042 MHz (Aorus base VRAM is slightly higher than typical). At +1002, I seem to be near the VRAM efficiency peak for this Port Royal bench, so I may try something in the +1070 range next time. However, by VRAM +1132 the Port Royal score starts to fall again, ditto for still higher VRAM settings.
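Finding the VRAM sweet spot like this is just an argmax over (offset, score) pairs. A minimal sketch of that bookkeeping, using the two scores reported above plus two hypothetical fill-in scores for the other offsets mentioned (those two numbers are assumptions, not from the post):

```python
# Pick the best VRAM offset from benchmark runs recorded as (offset, score).
runs = [
    (0,    17108),  # stock clocks, max power limit (reported)
    (1002, 18350),  # hypothetical score near the efficiency peak
    (1102, 18402),  # best score (reported)
    (1132, 18290),  # hypothetical score past the peak, falling again
]

# max() over the score field gives the offset worth re-testing around.
best_offset, best_score = max(runs, key=lambda r: r[1])
print(best_offset, best_score)  # -> 1102 18402
```

In practice you would re-run each offset a few times and average, since Port Royal scores wobble run to run.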

BTW, I really do like the graphics in Port Royal, just wish it had a few more moving robots instead of the stoic ones.


----------



## bl4ckdot

Hello,
I'm looking to buy an EVGA 2080 Ti to replace my Titan XP. It will be watercooled (I need a reference PCB because of that). Now I'm wondering, is it worth getting an XC2 over a classic XC Black Edition? Does the XC2 have higher binning? The price difference isn't really big, but if they have the same binning process, I wouldn't mind saving the extra €

Thank you in advance !


----------



## fleps

bl4ckdot said:


> Hello,
> I'm looking to buy an EVGA 2080 Ti to replace my Titan XP. It will be watercooled (I need a reference PCB because of that). Now I'm wondering, is it worth getting an XC2 over a classic XC Black Edition? Does the XC2 have higher binning? The price difference isn't really big, but if they have the same binning process, I wouldn't mind saving the extra €
> 
> Thank you in advance !


Looking at the specs on EVGA's site, the *XC Black Edition* seems to be the simplest one; its boost clock is almost at non-A chip level, although the first page of this thread says it's an "A" chip. 

But besides that one, all the other XC cards seem to be basically the same card with a different cooler, so I personally would pick the cheapest one besides the Black Edition.


----------



## bl4ckdot

fleps said:


> Looking at the specs on EVGA's site, the *XC Black Edition* seems to be the simplest one; its boost clock is almost at non-A chip level, although the first page of this thread says it's an "A" chip.
> 
> But besides that one, all the other XC cards seem to be basically the same card with a different cooler, so I personally would pick the cheapest one besides the Black Edition.


Yeah, I was having a look at the first page and saw the XC Black Edition listed as an "A" chip. I really don't mind getting the "normal" XC just to be sure it's really an A chip.

So the XC2 isn't worth it for watercooling?


----------



## Renegade5399

bl4ckdot said:


> Yeah I was having a look at the first page and saw the XC Black Edition being an "A" chip. I really don't mind getting the "normal" XC just to be sure it's really an A chip.
> 
> So XC2 isn't worth it for watercooling ?


I love my XC Blacks. As has been stated by Buildzoid and a couple others, if you're going to watercool, there is no real reason to pay a premium. A base model reference 2080Ti A chip will clock just fine. These really are A chips. The stock clocks are a few MHz over the non A model. The issue you're going to run into is the power limit of the BIOS. EVGA still has not released a higher PL BIOS that goes above 363 Watts for these cards. You can play around with flashing of course. I tried all sorts of BIOS for these cards. Galax, XOC, etc. none of them allowed for high memory clocks. Could not get over 8000 no matter what and had to run 7500 most of the time. I went back to the OEM BIOS and instead performed the shunt mod. I now have fantastic performance, 2100/8200, for 24/7 operation. Even on water, the power limit will hinder you unless you do something to either increase it (BIOS) or bypass it (shunt mod).

*Note the shunt mod does also "increase" the power limit, but by fooling the voltage controller instead of increasing the PL number. That's why I said bypass.
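The "fooling the voltage controller" part comes down to Ohm's law: the controller infers current from the voltage drop across the shunt, so lowering the effective shunt resistance makes the same real current look smaller. A back-of-envelope sketch (the 5 mΩ stock shunt value is an assumption for illustration, not a measured spec):

```python
# Why stacking a resistor on the shunt "raises" the power limit:
# sensed current = V_shunt / R_shunt, so halving R halves the reading.

def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

R_STOCK = 0.005                       # assumed 5 mOhm stock shunt
R_MOD = parallel(R_STOCK, R_STOCK)    # identical shunt soldered on top

def reported_power(actual_watts: float) -> float:
    # The sensed voltage scales with R_MOD / R_STOCK, so reported power
    # scales the same way while actual draw is unchanged.
    return actual_watts * (R_MOD / R_STOCK)

print(R_MOD)                  # 0.0025 (ohms)
print(reported_power(380.0))  # controller sees only 190.0 W
```

So with a 1:1 stack the card can really pull roughly twice its BIOS power limit before the controller intervenes, which is why careful cooling matters after this mod.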


----------



## bl4ckdot

Renegade5399 said:


> I love my XC Blacks. As has been stated by Buildzoid and a couple others, if you're going to watercool, there is no real reason to pay a premium. A base model reference 2080Ti A chip will clock just fine. These really are A chips. The stock clocks are a few MHz over the non A model. The issue you're going to run into is the power limit of the BIOS. EVGA still has not released a higher PL BIOS that goes above 363 Watts for these cards. You can play around with flashing of course. I tried all sorts of BIOS for these cards. Galax, XOC, etc. none of them allowed for high memory clocks. Could not get over 8000 no matter what and had to run 7500 most of the time. I went back to the OEM BIOS and instead performed the shunt mod. I now have fantastic performance, 2100/8200, for 24/7 operation. Even on water, the power limit will hinder you unless you do something to either increase it (BIOS) or bypass it (shunt mod).
> 
> *Note the shunt mod does also "increase" the power limit, but by fooling the voltage controller instead of increasing the PL number. That's why I said bypass.


Thanks for the feedback. I originally went for EVGA because of their watercooling policy. Now, the BIOS limitation on OC is a bit of a drawback. Do you know which vendors have a reference PCB with a high power limit BIOS and no memory OC issues? Galax?


----------



## Jpmboy

bl4ckdot said:


> Thanks for the feedback. I originally went for EVGA because of their watercooling policy. Now, the BIOS limitation on OC is a bit of a drawback. Do you know which vendors have a reference PCB with a high power limit BIOS and no memory OC issues? Galax?


A standard Nvidia FE model flashed to the Galax 380W BIOS. The Samsung RAM runs >8000 with this BIOS.


----------



## bl4ckdot

Jpmboy said:


> standard Nvidia FE model flashed to the galax 380W bios. The samsung ram runs >8000 with this bios.


How is Nvidia (concerning watercooling) if I need to RMA my GPU? Do I play dumb, mount the original cooler, flash the Nvidia FE BIOS back (is that even possible?) and just say "nope, never watercooled it"?


----------



## Kanashimu

Apparently the 450W power limit has been solved. Some extreme OCers (Stavros and co.) working with the Galax HOF card got a special XOC BIOS that makes power limits non-existent. They managed to get 2940 MHz on the core; benchmarks ran at up to 2.8 GHz.

https://www.reddit.com/r/overclocki...80_ti_is_no_match_you_got_to_compete/ek1dn3k/

Also, I have a question regarding the EVGA FTW3 vs FTW3 Ultra. Does the regular FTW3 have a non A chip? Seems kinda ridiculous that they would cheap out with a non A chip on an FTW3.


----------



## J7SC

bl4ckdot said:


> Hello,
> I'm looking to buy an EVGA 2080 Ti to replace my Titan XP. It will be watercooled (I need a reference PCB because of that). Now I'm wondering, is it worth getting an XC2 over a classic XC Black Edition? Does the XC2 have higher binning? The price difference isn't really big, but if they have the same binning process, I wouldn't mind saving the extra €
> 
> Thank you in advance !


 
There might be higher binning (as is usual w/ EVGA) and the extra 90 MHz or so difference in advertised boost clocks on those models you quoted is nice (if the price difference works for you). But the main thing is to avoid 'non-A' chip cards if you can - especially with your planned water-cooling as you may want to try out a different Bios later.


----------



## bigjdubb

Kanashimu said:


> Apparently the 450W power limit has been solved. Some extreme OCers (Stavros and co.) working with the Galax HOF card got a special XOC BIOS that makes power limits non-existent. They managed to get 2940 MHz on the core; benchmarks ran at up to 2.8 GHz.
> 
> https://www.reddit.com/r/overclocki...80_ti_is_no_match_you_got_to_compete/ek1dn3k/
> 
> Also, I have a question regarding the EVGA FTW3 vs FTW3 Ultra. Does the regular FTW3 have a non A chip? Seems kinda ridiculous that they would cheap out with a non A chip on an FTW3.


People have been using the XOC bios for a while now. 


The FTW3 is listed as a non-A chip on the first page and I'm pretty sure those charts are accurate. I'm not sure which one is cheaping out with the Non A chip, EVGA or the purchaser, but if you want an FTW3 with an A chip then the Ultra is the one to get.


----------



## J7SC

OK, I have a question for the NVidia Bios flash experts among you.

Briefly, I have 2x Aorus 2080 Ti WB which are only a couple of digits apart in serial numbers and perform nicely and closely to each other. But they have slightly different BIOS versions and Device IDs (see orange bars below). Also, WinMerge identifies several differences in the hex content. Therefore, I want to flash the BIOS from Card 1 onto Card 2. Now, I saved the BIOS from Card 1 while it was originally set up in a little test bench (Gigabyte Z170 SOC) before moving it to the MSI Creation X399 build. I can obviously save them again via GPU-Z, but I want to use the original saved file for a couple of good reasons.

Here's my question: if you look at the second set of SLI GPU-Z screenshots, which are from another setup here with dual GTX 900 series cards in a Haswell-E machine, you will note that I had flashed those with their own identical BIOS, and they also ended up with the same Device ID... that was never a problem (in 5+ years) under Win 7/64 here, and they still live happily in SLI today. But I don't know how Win 10 (or for that matter the X399 mobo) will react to the same Device ID on two different cards in two different PCIe slots. I think it (or the mobo) would automatically assign the resources, but what do you think?


----------



## Tragic

2080ti MSI gaming x Trio +800mem +135core clock (2115mhz)
32GB 3300mhz 13 13 13 34 2T
i7 8700K 5.1ghz Kraken X62
512GB 970 Pro Samsung NVME
500GB 960 Evo Samsung NVME
512GB 860 Pro Samsung SSD
Asus Z370F ROG Strix Gaming MB

Userbenchmark = 251%
Fire Strike Extreme = 17,744


----------



## fleps

Hi guys.

I replaced my broken Ventus OC with an EVGA 2080 FTW3 Ultra and went to try an OC right away, and it looks like I got lucky this time.

Using the voltage curve method in MSI AB (from Sajin's great tutorial) I was able to OC it very well.

- While under 42C the core clock will peak at 2205, then 2190, but of course these are only peaks.
- Real clocks in games (like The Division 2) are 2160~2145 under 60C with fans at 65%.
- In benchmarks it will get down to ~2130 after a while as the temps pass 65C. Temps spike around 70C at 70~80% fan speed.
- Memory OC is 8200MHz.
- Ambient temp is around 27C.

PerfCap reports vRel and vOp with vcore locked at 1093mV, while Pwr is reported only very rarely; the max TDP of 338W is hit only very occasionally, even in benchmarks.

If I try increasing another step it starts throwing occasional artifacts in Heaven and The Division; very occasional and no crash, but I consider it non-stable.

*Questions:*
- As PerfCap is reporting voltage and I'm already locked at 1093mV, trying a non-official BIOS with a higher TDP won't matter, right?
- I have a Kraken G12/X42 lying around, but I wonder if it's worth the work to install it. Do you think it will be able to keep the card below 42C so it stays locked at 2205, considering on air the card hits 60~65C?

Thanks!


----------



## VPII

fleps said:


> Hi guys.
> 
> 
> 
> I replaced my broken Ventus OC with a EVGA 2080 FTW3 Ultra and went to try OC right away, and looks like I got lucky this time.
> 
> 
> 
> Using the voltage curve method on MSI AB (from Sajin great tutorial) I was able to oc it very well.
> 
> 
> 
> - While under 42C the core clock will peak 2205, then 2190, but of course these are only peaks.
> 
> - Real clock on games (like The Division 2) are 2160~2145 under 60C with fans at 65%
> 
> - On benchmarks will get get down to ~ 2130 after a while as the temps pass 65. Temp spike around 70C at 70~80% fan speed.
> 
> - Memory OC is 8200Mhz.
> 
> - Ambient temp is around 27C.
> 
> 
> 
> PerfCap report vRel and vOp with vcore locked at 1093mV, while Pwr is only reported very few times, max TDP of 338 will only happen very occasionally even on benchmarks.
> 
> 
> 
> If I try increasing another step it will star trowing ocasional artifacts on Heaven and The Division, very occasional and no crash but I consider it non-stable.
> 
> 
> 
> *Questions:*
> 
> - As perfCap is reporting voltage and I'm already locked on 1093mV, trying to put a non-official bios with a higher TDP wont matter right?
> 
> - I have a Kraken G12/X42 laying around, but I wonder if it's worth the work to install it. Do you think it will be able to keep the card below 42C so it stays locked on 2205, considering on air the card goes to 60~65?
> 
> 
> 
> Thanks!


You'll more than likely keep the card in the low to mid 40s C. My Kraken G12 with a Corsair H110 keeps my card in the low 40s C with 32+C ambient. Now with winter knocking, I'll possibly even get it to stay below 40C.

Sent from my SM-G960F using Tapatalk


----------



## fleps

VPII said:


> You'll more than likely keep the card in the low to mid 40s C. My Kraken G12 with a Corsair H110 keeps my card in the low 40s C with 32+C ambient. Now with winter knocking, I'll possibly even get it to stay below 40C.


Humm that's very good to know, thanks for the info.

Maybe it's worth it; I'll open it up and see if it's possible to install it without removing the card's front plate.
I saw some pictures and it looks like these custom PCB cards use a front plate with more space, so it's possible to fit the bracket and keep the plate.


----------



## gavros777

Cyber Locc said:


> I don't think you really can, or should tbh.
> 
> They are not designed for that. You could probably do it with a flat card, but I still wouldn't.
> 
> I would just use it as is, and clean it every once in awhile. Dust in open case isn't as bad as closed tbh. Just give it a little blow out/off every few months and you will be fine.
> 
> My rig has been living in a test bench for 2yrs+ now. I switched boards to a mATX board and was going to do an R40 build, got the case and all the stuff and never did the build lol (I'm not a mATX guy), so I sold the case, and just grew fond of my test bench as a constant fixture.
> 
> And I have switched back to my MATX board the last week, or so as I gave the RVE to my Step Brother, and already can't stand it hehe. 🙂


Thanks for the advice, I was thinking the same thing, to use a can of compressed air once a month instead.
By the way, I custom made my open frame case and got this far.
I should have it complete and running by the end of this week.


----------



## bigjdubb

fleps said:


> Hi guys.
> 
> I replaced my broken Ventus OC with a EVGA 2080 FTW3 Ultra and went to try OC right away, and looks like I got lucky this time.
> 
> Using the voltage curve method on MSI AB (from Sajin great tutorial) I was able to oc it very well.
> 
> - While under 42C the core clock will peak 2205, then 2190, but of course these are only peaks.
> - Real clock on games (like The Division 2) are 2160~2145 under 60C with fans at 65%
> - On benchmarks will get get down to ~ 2130 after a while as the temps pass 65. Temp spike around 70C at 70~80% fan speed.
> - Memory OC is 8200Mhz.
> - Ambient temp is around 27C.
> 
> PerfCap report vRel and vOp with vcore locked at 1093mV, while Pwr is only reported very few times, max TDP of 338 will only happen very occasionally even on benchmarks.
> 
> If I try increasing another step it will star trowing ocasional artifacts on Heaven and The Division, very occasional and no crash but I consider it non-stable.
> 
> *Questions:*
> - As perfCap is reporting voltage and I'm already locked on 1093mV, trying to put a non-official bios with a higher TDP wont matter right?
> - I have a Kraken G12/X42 laying around, but I wonder if it's worth the work to install it. Do you think it will be able to keep the card below 42C so it stays locked on 2205, considering on air the card goes to 60~65?
> 
> Thanks!


Nice results! Mine seems to top out around 2150 but averages just under 2100. I am pretty impressed with the cooler on the FTW3 Ultra, it's very effective and hardly makes any noise at all on my open frame.

Do you have a link for the tutorial you mentioned?


On another note, has anyone done x8 vs x16 testing? I'm thinking about getting an M.2 card (the storage options on Ryzen kind of suck) but I want to make sure the 2080 Ti won't choke on x8.


----------



## diatribe

bigjdubb said:


> On another note. Has anyone done x8 vs x16 testing? I'm thinking about getting an m.2 card (the storage options on Ryzen kind of suck) but I want to make sure the 2080ti won't choke on x8.


So long as you're using the top x16 slot on your Taichi X470 you should still have full bandwidth with a PCIe M.2 installed. Some motherboards will throttle the 2nd and 3rd PCIe slots, but I haven't heard of any that affect the primary slot. That is assuming you're not using an M.2-to-PCIe adapter in your 2nd slot; at that point the 1st slot will surely drop down to x8.

Here's a link to the manual so you can make sure: http://asrock.pc.cdn.bitgravity.com/Manual/X470 Taichi Ultimate.pdf
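For a sense of what x8 actually costs in raw bandwidth, the PCIe 3.0 numbers are easy to work out: 8 GT/s per lane with 128b/130b encoding is about 0.985 GB/s of payload per lane, per direction. A quick sketch:

```python
# PCIe 3.0 per-direction bandwidth budget for a given link width.
GTS = 8e9                   # 8 GT/s per lane (PCIe 3.0 signalling rate)
ENCODING = 128 / 130        # 128b/130b line-code efficiency
BYTES_PER_TRANSFER = 1 / 8  # 1 bit per transfer per lane -> bytes

def pcie3_bandwidth_gbs(lanes: int) -> float:
    """Theoretical payload bandwidth in GB/s for an x<lanes> PCIe 3.0 link."""
    return GTS * ENCODING * BYTES_PER_TRANSFER * lanes / 1e9

print(round(pcie3_bandwidth_gbs(16), 2))  # ~15.75 GB/s
print(round(pcie3_bandwidth_gbs(8), 2))   # ~7.88 GB/s
```

Even at x8 that is far more than a 2080 Ti typically streams during gaming, which is why x8 vs x16 tests of this era usually showed only a few percent difference.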


----------



## fleps

bigjdubb said:


> Nice results! Mine seems to top out around 2150 but averages just under 2100. I am pretty impressed with the cooler on the FTW3 Ultra, it's very effective and hardly makes any noise at all on my open frame.
> Do you have a link for the tutorial you mentioned?


Thanks! Yeah, it's a great card (just note that mine is a normal 2080, not a Ti).

This is the tutorial: https://forums.evga.com/Guide-How-t...-overclock-with-msi-afterburner-m2820280.aspx

A few notes:

1) Remember to disable any OC before starting this; *you are supposed to do it at stock*. Also check that you don't have any auto-profile in Precision being initialized.

2) You can use Precision and AB at the same time, as we need Precision to control the card's coolers. Initially, every time I opened MSI AB only the first fan would spin. I'm not sure if all of this is needed, but the way I fixed it was: 
- Set fans to 100% in Precision and minimized it.
- Set fans to 100% in MSI AB (make sure the fan curve option is disabled).
- Ran an OC Scanner pass (from MSI AB), as it forces the fans to Auto in MSI AB. After this I didn't touch the fan options in AB anymore.
- Back in Precision I set my fans as linked, and in the advanced mode set the aggressive profile with auto enabled.

Now Precision just stays minimized all the time for the coolers and LED behavior; I disabled any OSD and HWM sensors that MSI AB has access to.

3) Be aware that in the current version of MSI AB there's no "white line" anymore; it's red now.

4) The tutorial is a little confusing, but after re-reading it a few times I understood. Basically the golden rules are:
- Follow the steps to set the yellow line at max vcore (1093mV) with CTRL + L, and apply.
- The "white" (now red) line will appear a few points to the left of the yellow line. Starting from the first point where the red line sits, set it -1 and apply, which makes the line jump one point closer to the yellow line.
- Keep doing this until the red line sits together with the yellow line.
- Then hold CTRL to drag the yellow 1093mV point to whatever clock you want to OC to, and apply.

This will lock the *OC/vcore even at idle*.
So I recommend keeping a default profile with no OC or a small OC, and saving your result from this method in another profile that you activate only when benchmarking/gaming, or the card will sit at full voltage all the time.

Then you start tweaking it a bit, because the OC step you define will likely decrease as the temps get high, so you need to test and see if you can set it higher. 
I just left Heaven open, monitored, and changed the points a few times until I was happy with the results; my yellow 1093mV point is actually at a 2175 clock, for example, which gets me the results I posted before.

Also recommend doing all this with zero memory OC first.

Hope it helps.
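The net effect of steps 1-4 is easy to picture: every voltage/frequency point at or above the chosen cap gets pinned to one target clock, so the card never requests more voltage than the cap. A toy model of that curve shape (this is NOT Afterburner's API, just an illustration with made-up curve points):

```python
# Toy model of the Afterburner "flatten at max vcore" trick.

def lock_curve(curve, cap_mv, target_mhz):
    """curve: list of (millivolts, MHz) points, ascending by voltage.
    Returns the curve with everything at/above cap_mv pinned to target_mhz."""
    locked = []
    for mv, mhz in curve:
        if mv >= cap_mv:
            locked.append((cap_mv, target_mhz))      # flatten at the cap
        else:
            locked.append((mv, min(mhz, target_mhz)))  # lower points untouched
    return locked

# Hypothetical stock curve points for illustration.
stock = [(950, 1935), (1000, 1980), (1050, 2025), (1093, 2070), (1125, 2100)]
locked = lock_curve(stock, cap_mv=1093, target_mhz=2175)
print(locked[-2:])  # both points at/above 1093 mV become (1093, 2175)
```

That pinned plateau is why PerfCap then reports vRel/vOp rather than Pwr: the voltage ceiling, not the power limit, is what stops further boosting.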


----------



## Cyber Locc

So has anyone found a way to disable/enable SLI with ease? Aside from the fact that I have to close like 15 different things, which is absurd, one of these things I cannot disable. I was team red for a bit and haven't run SLI on Windows 10 until now, and this is just horrid. 

It's saying I need to close Windows PowerShell; well, that won't let me, it relaunches, and if I try to alter it to stop it from re-running it says TrustedInstaller permission is needed.


----------



## fleps

Cyber Locc said:


> So has anyone found a way to disable/enable SLI with ease? Aside from the fact, I have to close like 15 different things, which is absurd, one of these things I cannot disable. I was team red for a bit, and Have not ran SLI with windows 10 till now, and this is just horrid.
> 
> Its saying I need to close windows internal power shell, well that wont let me, it relaunches, and if I try to alter it to stop it from re running it says Trusted installer permission needed.


??

There's an SLI option inside Nvidia Control Panel under 3D settings. It's as easy as changing the resolution or disabling/enabling G-Sync.


----------



## Cyber Locc

fleps said:


> ??
> 
> There's an SLI option inside Nvidia Control Panel under 3D settings. It's as easy as changing the resolution or disabling/enabling G-Sync.


Ya I know, and when I click disable it wants me to close a bunch of programs. 16 programs to be exact lol, and 2 Windows services. I got the Windows service to close permanently. 


Also, this new CPU helps my Time Spy score a lot lol. This isn't the highest graphics score I've gotten with this card (it's really hot today); it's actually 200 points under my best. I'm working on balancing, but with a quick and dirty CPU OC I still got #100  

https://www.3dmark.com/spy/6817562


----------



## J7SC

OK, some '*Micron*' info 

...per earlier posts, I have been working with 3DM Port Royal to see where my Micron-equipped Aorus cards peak in terms of MHz and score (which, as you know, are not the same). Micron GDDR6 is thought to be less consistent at OC-ing than Samsung (which I would have picked if I'd had the choice at purchase), but Micron is certainly no reason to despair either! There are a few folks here (I think VPII among them) who had both Micron and then Samsung 2080 Tis and who observed that the Micron modules seem to run a bit hotter than the Samsung ones (correct me if I'm wrong). Given the extensive water-cooling in my 2x 2080 Ti setup, I probably neutralized that to some extent, but in any case, the message is the same for all 2080 Ti owners >>> cool, cool, cool as much as you can.

The initial two MSI AB screens show the range I know the Micron to be non-crashing in NVLink/SLI (outside Port Royal). I have yet to find the top VRAM efficiency range (table below), with a couple more Port Royal runs to do, but this should help underscore that Port Royal likes higher VRAM. GPU speed for SLI was 2130-2145 MHz (other than the stock run; same slider presets for all other runs). Comparing single card to SLI, the max GPU speed possible with this bench is about the same as in TS/E (2175-90 for single, 2130-2145 for SLI).
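For readers wondering what those VRAM clocks mean in bandwidth terms: GDDR6 transfers 8 bits per clock per pin, so bytes/s = (memory clock × 8) × (bus width / 8). A quick sanity check against the stock 2080 Ti spec (1750 MHz on a 352-bit bus → 616 GB/s), then the +1102 reading of 2042 MHz mentioned above:

```python
# Back-of-envelope GDDR6 bandwidth from the GPU-Z memory clock reading.

def gddr6_bandwidth_gbs(mem_clock_mhz: float, bus_bits: int = 352) -> float:
    data_rate_mts = mem_clock_mhz * 8            # MT/s (GDDR6 8n-prefetch)
    return data_rate_mts * (bus_bits / 8) / 1000  # bytes across the bus -> GB/s

print(gddr6_bandwidth_gbs(1750))           # 616.0, matches the spec sheet
print(round(gddr6_bandwidth_gbs(2042), 1)) # ~718.8 GB/s at the +1102 offset
```

That ~17% extra bandwidth is why bandwidth-hungry benches like Port Royal keep scaling with VRAM offset until error correction starts eating the gains.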


----------



## fleps

Cyber Locc said:


> Ya I know, and when I clcik disable it wants me to close a bunch of programs. 16 programs to be exact lol, and 2 windows services. I got the windows service to close permanently.


 
I believe you can try to identify those programs under _Manage 3D Settings > Program Settings_ and leave SLI disabled for the ones you never need, so when you need to disable SLI entirely it won't prompt about those.


----------



## Cyber Locc

fleps said:


> I believe you can try to identify those programs on _Manage 3D Settings > Program Settings_ and leave SLI disabled on the ones you never need, so when you need to disable SLI entirely it will not prompt those.


The programs it mentions are not 3D programs, but I will give that a shot and see if it lists them and allows it.

That doesn't work either; moreover, if I try to change the SLI config for, say, 3DMark per application in NV Control Panel, it still tells me to close them all. Pain in the neck, horrible system.


----------



## Pasbags

*Gigabyte RTX 2080Ti Gaming OC*

Anyone running into issues with the USB-C driver not installing correctly? I'll post a full list of my troubleshooting if anyone wants to take a guess at what's going wrong, although at this moment in time it looks like it's going back to Gigabyte...


----------



## Thoth420

Just a heads up to people with the Gaming X Trio. EK have Nickel/Plexi blocks up on their webshop for preorder.


----------



## JustinThyme

Cyber Locc said:


> Ya I know, and when I clcik disable it wants me to close a bunch of programs. 16 programs to be exact lol, and 2 windows services. I got the windows service to close permanently.


Had that happen once. Used DDU to remove the drivers in safe mode, disabled driver updates on reboot. Installed a fresh copy of the driver and enabled SLI right after. No issues since.


----------



## zack_orner

Thoth420 said:


> Just a heads up to people with the Gaming X Trio. EK have Nickel/Plexi blocks up on their webshop for preorder.


Kinda stinks that the water block doesn't work with the factory backplate; I don't like their chrome-looking backplate.

2700X | ROG Crosshair VII Hero | 1TB 970 Evo | G.Skill 3200CL14 | MSI 2080 Ti Gaming X Trio | Asus ROG Ryuo 240 AIO


----------



## VPII

J7SC said:


> OK, some '*Micron*' info
> 
> ...per earlier posts, I have been working with 3DM Port Royal to see where my Micron-equipped Aorus cards peak in terms of MHz and score (as you know, not the same). Micron GDDR6 is thought to be less consistent on oc-ing than Samsung (what I would get if I had the choice @ purchase), but certainly Micron is no reason to despair either ! There are a few folks here (I think VPII among them) who had both Micron and then Samsung 2080 Tis and who observed that the Micron seem to run a bit hotter than Samsung modules (correct me if I'm wrong). Given the extensive water-cooling for my 2x 2080 TI setup, I probably neutralized that to some extent, but in any case, the message is the same for all 2080 TI owners >>> cool, cool, cool as much as you can.
> 
> The initial two MS AB screens show the range I know the Micron to be 'non-crashing' in NVLink/SLI (outside Port Royal). I have yet to find the top VRAM efficiency range ( table below) with a couple more Port Royal runs to do, but this should help to underscore that Port Royal likes higher VRAM. GPU speed for SLI was 2130 - 2145 MHz (other than the stock run; same slider pre-sets for all other runs). Comparing single card to SLI, the max GPU speed possible with this bench is about the same as in TS/E (2175-90 for single, 2130-2145 for SLI).


 @J7SC yup, it was me who had two cards, one with Micron and my current card with Samsung. As I've stated, even with my current water cooling on the card and no cooling except a fan over the VRAM and VRM, I get the memory to stay in the low 50s C. A full water block would have meant even lower temps, but it's still pretty good I'd say, which is why I can run my memory at +1250MHz without an issue. I have not really tried higher but will do so in time.


----------



## Thoth420

zack_orner said:


> Kinda stinks that the water block doesn't work with the factory backplate; I don't like their chrome-looking backplate.
> 
> 2700X | ROG Crosshair VII Hero | 1TB 970 Evo | G.Skill 3200CL14 | MSI 2080 Ti Gaming X Trio | Asus ROG Ryuo 240 AIO


It probably does. EK uses that disclaimer liberally. 

In other news... just as I preordered a block, my Gaming X Trio crapped out on me. Womp womp.

Micron VRAM modules if anyone was curious.


----------



## zack_orner

Thoth420 said:


> It probably does. EK uses that disclaimer liberally.
> 
> 
> 
> In other news... just as I preorder a block my Gaming X Trio crapped out on me. Womp Womp.
> 
> 
> 
> Micron VRAM modules if anyone was curious.


Sorry to hear that.

2700X | ROG Crosshair VII Hero | 1TB 970 Evo | G.Skill 3200CL14 | MSI 2080 Ti Gaming X Trio | Asus ROG Ryuo 240 AIO


----------



## J7SC

Thoth420 said:


> It probably does. EK uses that disclaimer liberally.
> 
> In other news... just as I preorder a block my Gaming X Trio crapped out on me. Womp Womp.
> 
> Micron VRAM modules if anyone was curious.


 
Yeah, not good news for you, but there's RMA. Also, what do you mean by 'crapped out'? Symptoms? Boot failure, black screen, safe-mode lock-in, artifacts?


----------



## Thoth420

J7SC said:


> Yeah, not good news for you, but there's RMA. Also, what do you mean by 'crapped out' /symptoms ? Boot failure, black screen, safe mode lock-in, artifacts ?


It's fine. I have a replacement incoming by Monday and just tossed the Vega 64 in there (thankfully I hadn't gotten around to putting the block on that) for now. I'll sell the replacement (unless of course it clocks better on return).
I just wanted to report in with another dead card. I literally never messed with clocks or voltage, just turned up the fan profile, so it died at factory settings and never saw more than 61C.


----------



## J7SC

As a follow-up to my Port Royal post re. Micron, I did a run with (what I thought were) similar settings on Time Spy Extreme SLI. Turns out that while the '+ slider amount' in MSI AB is fairly accurate, the 'level reading' for VRAM in MSI AB seems to be off by as much as 100 MHz in either direction compared to both GPU-Z and the 3DM SystemInfo sheet. Inadvertently, I ran the VRAM at 2154 for TSEX (much higher than in Port Royal), though it did not seem to mind 

Also, I'm happy about the 99% usage on the GPUs (the Threadripper rumors turned out to be unfounded). Next, the two GPUs maxed out at 1.031V and 1.043V respectively, so once I flash the stock Card 1 BIOS to Card 2 and then do the MSI AB v-curves, I should have a bit of headroom. Finally, max PL in MSI AB is 122% of stock, but per the MSI AB monitoring screen during the TSEX run, *actual* was between 131% and 139%... go figure


----------



## joyzao

Hello folks, I have an RTX 2080 Ti Trio X; how do I get the maximum out of it?

I flashed the 400W BIOS, but it did not help much. The maximum I can get stable is +100MHz on the GPU clock.

I wanted to get the most out of the card.


----------



## J7SC

joyzao said:


> Hello folks, I have an RTX 2080 Ti Trio X; how do I get the maximum out of it?
> 
> I flashed the 400W BIOS, but it did not help much. The maximum I can get stable is +100MHz on the GPU clock.
> 
> I wanted to get the most out of the card.


 
...as much cooling as you can invest in


----------



## Cyber Locc

J7SC said:


> ...as much cooling as you can invest in


^ Cooling will help, but even then, there is the silicon lottery at play. 

One of my cards does 205/1700, and the other only does 105/1500, which sucks. I don't have those high-wattage BIOSes yet though; I'm on the stock XC BIOS.


Then again, I hate saying the clocks like that, because vendors' stock clocks are different, so that's 2175 and 2075 for me.


----------



## ragesaq

Vencenzo said:


> Ran into the same problem on my EK X Sea Hawk running the Trio BIOS as people on page 620. Could either OC the core or the VRAM well, but not both. Seems to be limited at 363W and 1.068V even though it's supposed to be "406w".
> Going to try the Galax 380W.


What BIOS did you end up using on your 2080 Ti Sea Hawk EK X? I need to break out of this 330W limit; my 2080 Ti is super bored there with my stupidly overkill cooling.


----------



## Nizzen

J7SC said:


> ...as much cooling as you can invest in


Water chiller or nothing 

The 2080 Ti is just boring; you can't fix it like the 780 Ti Classified with an EVBot


----------



## J7SC

Nizzen said:


> Waterchilller or nothing
> 
> 2080ti is just boring, you just can't fix it like 780ti classified with EVBOT


 
Funny enough, the Kingpin 2080 Ti with EVBot support (I still have mine, connected to a pair of 780 Tis) is supposed to release within days


----------



## Thoth420

I'm literally just going to keep returning my cards via Prime until I get a good clocker with Samsung memory on it. A $30 restocking fee per retry. Thank you, yes!


----------



## Cyber Locc

J7SC said:


> Funny enough, the Kingpin 2080 Ti with EVBot support (I still have mine, connected to a pair of 780 Tis) is supposed to release within days



It really does no good: the EVBot does not let you push over the 1.093V limit. That's a hardware limit, and EVGA is not allowed to let you go past it. The KPEs are only guaranteed to clock to 2100 MHz, and most cards can already do that. The reference PCB is all that is needed; the custom PCBs are worthless this time around. The extreme OCers have already told us this; that's why they are "selling BIOSes", because that's really all they can do.

You can shunt mod, and you can use the XOC BIOS; if you want to push further, then maybe the KPE tech will be worth it. However, the voltage locks are still in place on the KPE, so you gain nothing without doing so.





Thoth420 said:


> I'm literally just going to keep returning my cards via Prime until I get a good clocker with Samsung memory on it. A $30 restocking fee per retry. Thank you, yes!


I would tread very carefully there. Amazon is known to shut down accounts for doing that, without refunding your Prime fee or any store credit, and to reject the next refund. You can do it a few times, but those times add up as well: four returns now, then three more, then one return two years from now and they close your account. (Just an example, not hard-and-fast numbers.)


----------



## J7SC

Cyber Locc said:


> It really does no good: the EVBot does not let you push over the 1.093V limit. That's a hardware limit, and EVGA is not allowed to let you go past it. The KPEs are only guaranteed to clock to 2100 MHz, and most cards can already do that. The reference PCB is all that is needed; the custom PCBs are worthless this time around. The extreme OCers have already told us this; that's why they are "selling BIOSes", because that's really all they can do.
> 
> You can shunt mod, and you can use the XOC BIOS; if you want to push further, then maybe the KPE tech will be worth it. However, the voltage locks are still in place on the KPE, so you gain nothing without doing so.


 
I guess we have to wait and see for real-world tests of the 2080 Ti KPE. Without knowing what they did to their cards, it is difficult to tell what true max voltage they had on the GPU, but KP has submitted 2080 Ti runs over 2700 MHz, and OGS had some Galax 2080 Ti that exceeded 2900 MHz (the latter for very short benches), all on LN2 of course. I seem to recall that some of the custom PCBs were even set up with switches for voltage control.

Importantly, *EVBot is also not just for GPU voltage*. I have used EVBot on GTX 700 and 900 series custom cards, and there are a variety of different and useful parameters on it beyond GPU-v. Finally, I have posted Superposition 4K at over 2200 MHz and TS/E at 2190 MHz for single cards here, usually with about 1.043V max... that 0.05V could come in handy if I could get to it in seconds with a few simple EVBot commands


----------



## Cyber Locc

J7SC said:


> I guess we have to wait and see for real-world tests of the 2080 Ti KPE. Without knowing what they did to their cards, it is difficult to tell what true max voltage they had on the GPU, but KP has submitted 2080 Ti runs over 2700 MHz, and OGS had some Galax 2080 Ti that exceeded 2900 MHz (the latter for very short benches), all on LN2 of course. I seem to recall that some of the custom PCBs were even set up with switches for voltage control.
> 
> Importantly, *EVBot is also not just for GPU voltage*. I have used EVBot on GTX 700 and 900 series custom cards, and there are a variety of different and useful parameters on it beyond GPU-v. Finally, I have posted Superposition 4K at over 2200 MHz and TS/E at 2190 MHz for single cards here, usually with about 1.043V max... that 0.05V could come in handy if I could get to it in seconds with a few simple EVBot commands


Right, I mean it may be nice to have the EVBot, and I've seen the talk of the voltage slider; wondering if it will be like a shunt-mod switch. 

The voltage issue comes down to Nvidia requirements, not a power limit. 

Also, I saw your 3DMarks. I beat you today on x2, sorry  Your CPU is holding you back hard lol; you beat me on GPU score, but overall I passed you. 

I wanted to ask you though, are you running the Galax BIOS or some such? What cards do you have? 

My cards are still on the EVGA stock BIOS, and one of them is hard-stuck at 2085, while the other can do TSE at 2170 and Heaven at 2200 (haven't tried SP, kept forgetting, and now it's a fresh Windows install, so I have to reinstall). That 330W BIOS is holding me back HARD lol. Also, mine are about the same as yours: I don't get close to voltage limits; it's power limits that are throttling me.

That is exactly what I am saying though: EVGA is doing nothing more than selling BIOSes. You're paying for the KPE or FTW3 for a BIOS with a higher power allotment, that's it. I have seen FTW3s not even hit 2085; they are not binned better, they just allow more power. 

Oh, https://www.3dmark.com/spy/6849976 82nd on Time Spy Extreme (all configs), 63rd on x2. I don't make the GPU score leaderboard; I need more power.

My second card is kind of a letdown though. I've been thinking about just selling my second card and buying another one. I just don't know if it's worth the chance of ending up with Micron lol. What are the odds of getting another one that even clocks close to as high as my first?


----------



## J7SC

Re. your questions, this machine is stock... also, the Threadripper isn't 'holding me back'; it is supposed to be there, part of a test setup for productivity-related software at work, as we're switching from Intel Xeon to AMD Epyc Rome later in the year. That is why the TR's max voltage is capped at 1.325V. 

We are also looking at RTX potential re. our proprietary data. Both 2080 Tis are Aorus Xtreme WB and are stock, including the BIOS, and they usually stay at 1.043V or below... their 3DM HoF 2x Time Spy Extreme graphics score is (currently) 15680, so you have a little bit to go yet . I also recommend benching Port Royal, as it includes DLSS. I only use the MSI AB sliders for OC fun, but have been planning on trying out the v-curve.

...no idea whether you should trade in one of your cards; silicon lottery and all that. If benching is that important to you, get a 7980XE/9980XE or even the W3175X... either one with 2x 2080 Ti KingPins + EVBot and a custom BIOS setup. Apart from the 'unobtanium' Galax 2080 Ti HoF OCL cards, that's the setup to beat.


----------



## Qutip

MSI RTX 2080 Ti Gaming X Trio Custom PCB (2x 8-pin, 1x 6-pin) 300W x 135% Power Target BIOS (406W)

Anyone know if this BIOS works with the MSI 2080 Ti Gaming Trio (non-X)?


----------



## kx11

The 2080 Ti KingPin is up for purchase


https://www.evga.com/products/product.aspx?pn=11G-P4-2589-KR


You need a members code though


----------



## Shawnb99

kx11 said:


> The 2080 Ti KingPin is up for purchase
> 
> 
> https://www.evga.com/products/product.aspx?pn=11G-P4-2589-KR
> 
> 
> You need a members code though


Only $1,899.99

I'm sorry, but I don't see the appeal of the AIO system on it. If I'm buying the best of the best, I'm putting it in a custom loop, not some AIO system. Plus, why does it need an extra fan if it's water-cooled?

Don't get it at all


----------



## hotrod717

Yeah, with the code it's $1800. Just like any other premium card, it costs more. I couldn't resist, and I don't buy much gear these days. Inbound for Friday.


----------



## Shawnb99

If it wasn't for the AIO on it I'd be interested, but it would just cost even more to custom-cool it, plus waiting on whenever a block comes out for it.

Only $200 more than the FTW Hydro Copper, so "reasonably" priced I guess


----------



## J7SC

Shawnb99 said:


> Only $1,899.99
> 
> I'm sorry, but I don't see the appeal of the AIO system on it. If I'm buying the best of the best, I'm putting it in a custom loop, not some AIO system. Plus, why does it need an extra fan if it's water-cooled?
> 
> Don't get it at all


 
^^ well put... though Kingpin in a recent vid suggested that there will also be a second 2080 Ti KP version (Hydro Copper w/ full water block) coming. I suspect that the AIO, now at 240mm rather than the 120mm of the earlier prototype, is still meant, like the GTX 700 and later series KP versions were, for XOCers to mount LN2 pots.


----------



## Shawnb99

J7SC said:


> ^^ well put... though Kingpin in a recent vid suggested that there will also be a second 2080 Ti KP version (Hydro Copper w/ full water block) coming. I suspect that the AIO, now at 240mm rather than the 120mm of the earlier prototype, is still meant, like the GTX 700 and later series KP versions were, for XOCers to mount LN2 pots.


Yeah, it does say there's an EVGA Hydro Copper full-cover waterblock option, but to me it would make sense to release that one first. If you're going this high-end for a GPU, there's little chance you're going to use an AIO. I guess there will be some who don't know any better, but that shouldn't be the market.

I'd be interested in the Hydro Copper model, but I have no interest in the AIO


----------



## TK421

Has anyone managed to finesse EVGA into swapping their Micron cards for Samsung ones?

Wondering how I should approach them.


----------



## hotrod717

Shawnb99 said:


> If it wasn't for the AIO on it I'd be interested, but it would just cost even more to custom-cool it, plus waiting on whenever a block comes out for it.
> 
> Only $200 more than the FTW Hydro Copper, so "reasonably" priced I guess


I've used universal blocks for the past several years, and after that it's on to LN2. The AIO option makes it easy to bench on water prior to LN2 and saves a step over the stock air cooling solution.
Also nice to use on a bench setup without messing with a custom loop. It's all about the benching aspects for this card.


----------



## Shawnb99

TK421 said:


> Has anyone managed to finesse EVGA into swapping their Micron cards for Samsung ones?
> 
> 
> Wondering how I should approach them.


We can do that?


----------



## JackCY

Shawnb99 said:


> Only $1,899.99
> 
> I'm sorry, but I don't see the appeal of the AIO system on it. If I'm buying the best of the best, I'm putting it in a custom loop, not some AIO system. Plus, why does it need an extra fan if it's water-cooled?
> 
> Don't get it at all


It's an XOC card, so why are you buying it for a water-cooling loop? The fan is there to cool the VRMs, and the plate can stay while using LN2. Good luck fitting front plates on other cards when using pots.
What are you going to gain with a custom loop besides an even bigger hole in your wallet? The AIO is good enough for non-XOC use.



TK421 said:


> Has anyone managed to finesse EVGA into swapping their Micron cards for Samsung ones?
> 
> 
> Wondering how I should approach them.


If you want them to laugh at you just send them an email.


----------



## kx11

The MSI 2080 Ti Lightning Z is the first GPU I've bought from MSI in 12 years, and this thread made me regret buying an MSI GPU 





https://forum-en.msi.com/index.php?topic=316562.0


----------



## joyzao

J7SC said:


> ...as much cooling as you can invest in


I already think it runs very cool. I live in a tropical country, and here it doesn't go above 60-61 degrees in games; of all the GPUs I've had on air, it is the coolest. But I wanted to extract more power from it, and the BIOS did not help me.

Any tips?

Or is it just luck not favoring me? In Battlefield it holds a fixed 2040 MHz in every game at 60 degrees. What do you think?


----------



## Martin778

kx11 said:


> The MSI 2080 Ti Lightning Z is the first GPU I've bought from MSI in 12 years, and this thread made me regret buying an MSI GPU
> 
> 
> 
> 
> 
> https://forum-en.msi.com/index.php?topic=316562.0


I've always bought MSI Gaming series GPUs wherever possible and never had issues so far; however, the 360W TDP limit is rubbish! The LN2 BIOS should have been the unlocked one, yet it has an even lower TDP limit, like 300-320W.

Flashing a new VBIOS with the limits removed apparently also disables the RGB... sounds very shoddy if you ask me. This card was going for €1700 when it was released.


----------



## bigjdubb

joyzao said:


> I already think it runs very cool. I live in a tropical country, and here it doesn't go above 60-61 degrees in games; of all the GPUs I've had on air, it is the coolest. But I wanted to extract more power from it, and the BIOS did not help me.
> 
> Any tips?
> 
> Or is it just luck not favoring me? In Battlefield it holds a fixed 2040 MHz in every game at 60 degrees. What do you think?


How much more were you hoping to get out of it? If benchmarking is your thing, then every MHz counts, but in gaming 100 MHz isn't very noticeable. I rarely notice the difference when I forget to load my overclocked profile before gaming, and that's about a 150 MHz clock-speed difference in-game and +750 MHz on memory. It might be more noticeable if you're running 4K and chasing after every extra frame you can get.
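As a rough upper bound on what a clock bump can buy, here's a small Python sketch; it assumes a fully GPU-bound game where fps scales linearly with core clock (an optimistic assumption, real gains are usually smaller), and the 2000 MHz base is just an example sustained boost clock:

```python
# Best-case fps gain from a core-clock bump, assuming a fully
# GPU-bound game where fps scales linearly with clock (optimistic;
# real-world gains are usually smaller).
def fps_gain_percent(base_mhz: float, offset_mhz: float) -> float:
    return offset_mhz / base_mhz * 100.0

if __name__ == "__main__":
    base = 2000.0  # assumed sustained boost clock in MHz
    for off in (100.0, 150.0):
        print(f"+{off:.0f} MHz on {base:.0f} MHz -> at most "
              f"{fps_gain_percent(base, off):.1f}% more fps")
```

So +100 MHz on a ~2000 MHz card is at most about 5% fps, which is why it's hard to feel in games.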


----------



## J7SC

joyzao said:


> I already think it runs very cool. I live in a tropical country, and here it doesn't go above 60-61 degrees in games; of all the GPUs I've had on air, it is the coolest. But I wanted to extract more power from it, and the BIOS did not help me.
> 
> Any tips?
> 
> Or is it just luck not favoring me? In Battlefield it holds a fixed 2040 MHz in every game at 60 degrees. What do you think?


 


bigjdubb said:


> How much more were you hoping to get out of it? If benchmarking is your thing, then every MHz counts, but in gaming 100 MHz isn't very noticeable. I rarely notice the difference when I forget to load my overclocked profile before gaming, and that's about a 150 MHz clock-speed difference in-game and +750 MHz on memory. It might be more noticeable if you're running 4K and chasing after every extra frame you can get.


 
^^ This, primarily. There is a caveat, though: the 2080 Tis are just plain big (dies, PCB) and can easily get over 300W... if you can, I would water-cool it just for those reasons (it may also make gaming quieter, depending on your setup). You will also notice that the colder a given RTX 2080 Ti runs, the higher the boost clocks will get (up to a point determined by other factors, including the silicon lottery). So no matter what your 'lotto ticket' came out as, cooling the card as much as you can will maximize its potential.


----------



## hotrod717

kx11 said:


> The MSI 2080 Ti Lightning Z is the first GPU I've bought from MSI in 12 years, and this thread made me regret buying an MSI GPU
> 
> 
> 
> 
> 
> https://forum-en.msi.com/index.php?topic=316562.0


Not sure why. Context? I have a multitude of Hawk and Lightning cards. The only issue I ever encountered was of my own making. If you are unsure of what you are doing, don't.


----------



## 86Jarrod

I have a problem overclocking while water-cooled. I have a shunt-modded FTW3 Ultra that's back on the air cooler after trying to water-cool it. The problem is I'm getting better 3DMark scores on the stock air cooler than when water-cooled. Crazy, right? My mobo is a ****ty Asus H-110, 7700 CPU, 2400 ADATA RAM 2x8, 1200W EVGA T2 PSU. On air I can do Port Royal at 2190 @ 1.093V, and it stays there down to 2130 because the temp reaches 70C. It crashes on water @ 2130 while not even reaching 41C. Basically the same with all other benchmarks except Superposition, because of its length. Here's my air-cooled Port Royal: https://www.3dmark.com/3dm/35247509?


----------



## dangerSK

hotrod717 said:


> Not sure why. Context? I have a multitude of Hawk and Lightning cards. The only issue I ever encountered was of my own making. If you are unsure of what you are doing, don't.


The older Lightnings were fine; I also have a 680 Lightning and now a 2080 Ti Lightning. I must say MSI disappointed me: only a 400W PL limit, nothing really exceptional on the card... hmm, the KingPin and HOF are better cards for sure


----------



## kx11

dangerSK said:


> The older Lightnings were fine; I also have a 680 Lightning and now a 2080 Ti Lightning. I must say MSI disappointed me: only a 400W PL limit, nothing really exceptional on the card... hmm, the KingPin and HOF are better cards for sure



The HOF is not for anything other than LN2; the KingPin, though, is cool and has an OC guide 





https://www.evga.com/support/manuals/files/2080Ti_KINGPIN_OC_Guide.pdf


----------



## dangerSK

kx11 said:


> The HOF is not for anything other than LN2; the KingPin, though, is cool and has an OC guide
> 
> 
> 
> 
> 
> https://www.evga.com/support/manuals/files/2080Ti_KINGPIN_OC_Guide.pdf


Not true; that's the same as saying the KingPin is only good for LN2. The HOF, KingPin, and (kind of) the Lightning all fall under the XOC category. I still think the OC Lab cards won't be beaten by the KingPin.
BTW, the Lightning also has an OC guide, and the HOF too, but they're under NDA. I happen to have the Lightning XOC guide and can say it's fine.


----------



## J7SC

86Jarrod said:


> I have a problem overclocking while water-cooled. I have a shunt-modded FTW3 Ultra that's back on the air cooler after trying to water-cool it. The problem is I'm getting better 3DMark scores on the stock air cooler than when water-cooled. Crazy, right? My mobo is a ****ty Asus H-110, 7700 CPU, 2400 ADATA RAM 2x8, 1200W EVGA T2 PSU. On air I can do Port Royal at 2190 @ 1.093V, and it stays there down to 2130 because the temp reaches 70C. It crashes on water @ 2130 while not even reaching 41C. Basically the same with all other benchmarks except Superposition, because of its length. Here's my air-cooled Port Royal: https://www.3dmark.com/3dm/35247509?


 
Judging by your temps, this is not an issue of the thermal interface, pads, or other water-block mounting problems. There could be potential issues with shunt mods and cooling, via their relationship to the temperature-dependent boost on the 2080 Ti, depending on the BIOS used. I know some folks use the conductive-paste method for shunts, which is easier to reverse, i.e. just to test things out. Do you have that option?



kx11 said:


> The HOF is not for anything other than LN2; the KingPin, though, is cool and has an OC guide
> 
> 
> https://www.evga.com/support/manuals/files/2080Ti_KINGPIN_OC_Guide.pdf


 
The Galax HOF OCL (the one w/ the records) vs the KingPin is kind of a moot point, because the Galax is mostly 'unobtanium'...


----------



## 86Jarrod

J7SC said:


> Judging by your temps, this is not an issue of the thermal interface, pads, or other water-block mounting problems. There could be potential issues with shunt mods and cooling, via their relationship to the temperature-dependent boost on the 2080 Ti, depending on the BIOS used. I know some folks use the conductive-paste method for shunts, which is easier to reverse, i.e. just to test things out. Do you have that option?
> 
> 
> 
> 
> The Galax HOF OCL (the one w/ the records) vs the KingPin is kind of a moot point, because the Galax is mostly 'unobtanium'...


No, I used 8 mOhm resistors from Digi-Key. I could still easily reverse it, though. How do people get such high OCs without dice/LN2? Without a shunt? A special BIOS? The shunted card is running surprisingly well on air as long as I keep the max draw at 90 percent or lower with the stock FTW3 BIOS. Any higher and it gets too hot, but it's enough to lock in 1.093V in all the benchmarks I've done. Edit: Time Spy GT2 drops to 1.87 for a second a couple of times.
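For anyone following along, the arithmetic behind piggybacking a resistor on a shunt: the two resistances combine in parallel, so the controller sees a smaller sense resistance and under-reports power. A Python sketch, assuming 5 mOhm stock shunts (a commonly cited value for these cards, not verified here; check your own PCB):

```python
# Effect of piggybacking a resistor on a current-sense shunt.
# The 5 mOhm stock value is an assumption; measure it on your card.
def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

if __name__ == "__main__":
    stock, added = 5.0, 8.0          # milliohms
    eff = parallel(stock, added)     # ~3.08 mOhm effective shunt
    scale = stock / eff              # real power = reported * scale
    print(f"effective shunt: {eff:.2f} mOhm")
    print(f"power is under-reported by a factor of {scale:.3f}")
```

Under that assumption, an 8 mOhm piggyback gives ~3.08 mOhm effective, so the card under-reports power by a factor of about 1.625, which is why a "90 percent" slider can still mean well over the stock limit in real draw.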


----------



## dangerSK

J7SC said:


> The Galax HOF OCL (the one w/ the records) vs the KingPin is kind of a moot point, because the Galax is mostly 'unobtanium'...


Nah, you keep talking like you can't obtain one easily. On the HWBot marketplace there's always someone selling an OC Lab card. Why? Because once you've benched it, it doesn't have any value anymore for you as an overclocker, so you sell it...


----------



## J7SC

dangerSK said:


> Nah, you keep talking like you can't obtain one easily. On the HWBot marketplace there's always someone selling an OC Lab card. Why? Because once you've benched it, it doesn't have any value anymore for you as an overclocker, so you sell it...


 
Well, I'm talking about buying it new, w/ warranty, like you can w/ the KingPin. I would not have bought an LN2 / max-volt card used from myself  back in the days when I subbed LN2 at the 'bot


----------



## dangerSK

J7SC said:


> Well, I'm talking about buying it new, w/ warranty, like you can w/ the KingPin. I would not have bought an LN2 / max-volt card used from myself  back in the days when I subbed LN2 at the 'bot


Well, wait a while and the KingPin won't be available for purchase either; it's just a matter of time, like with the OC Lab...


----------



## J7SC

dangerSK said:


> Well, wait a while and the KingPin won't be available for purchase either; it's just a matter of time, like with the OC Lab...


 
All in good fun, of course  , but less funny is the fact that we seem to have reached close to the US$2k mark for Ti / non-Titan cards


----------



## dangerSK

J7SC said:


> All in good fun, of course  , but less funny is the fact that we seem to have reached close to the US$2k mark for Ti / non-Titan cards


Yes, that's a crazy amount of money, but really, who needs a 2080 Ti... if you have a 4K monitor, you should have enough money for a GPU that can drive it  Or 2K 165Hz...  you get the point.


----------



## Cyber Locc

J7SC said:


> Re. your questions, this machine is stock... also, the Threadripper isn't 'holding me back'; it is supposed to be there, part of a test setup for productivity-related software at work, as we're switching from Intel Xeon to AMD Epyc Rome later in the year. That is why the TR's max voltage is capped at 1.325V. We are also looking at RTX potential re. our proprietary data.


I meant the TR was holding you back on the TSE leaderboards lol. 



J7SC said:


> Both 2080 Tis are Aorus Xtreme WB and are stock, including the BIOS, and they usually stay at 1.043V or below... their 3DM HoF 2x Time Spy Extreme graphics score is (currently) 15680, so you have a little bit to go yet . I also recommend benching Port Royal, as it includes DLSS. I only use the MSI AB sliders for OC fun, but have been planning on trying out the v-curve.


Yeah, you have almost 40W over me with your stock BIOS. My cards are pretty wattage-starved lol. 




J7SC said:


> ...no idea whether you should trade in one of your cards; silicon lottery and all that. If benching is that important to you, get a 7980XE/9980XE or even the W3175X... either one with 2x 2080 Ti KingPins + EVBot and a custom BIOS setup. Apart from the 'unobtanium' Galax 2080 Ti HoF OCL cards, that's the setup to beat.


Right, but like I already said, the KPE's and Galax's only benefits are the BIOS, outside of LN2, and I am not doing the LN2 stuff these days; I may grab a chiller, but that's about it. My cards could handle the 400/450W, which I can cool effectively, if I shunt modded or got a different BIOS. 

As for the XE, actually most of the top benchers disable cores on the 7980XE; TSE doesn't much like over 16 cores (KPE mentioned this a few times; all his XE results have 2 cores disabled), and my 9960X score matches and exceeds 7980XE scores at the same clocks. They do get the benefit of disabling 2 of the underdog cores, however. I was forced into the 9960X by supply, and I am happy enough: 10,500-ish with crap RAM (new stuff on the way, once Newegg gets stock) and the Asus auto-overclock with some small tuning. I don't have enough cooling atm; waiting on a bigger external rad to come .


----------



## ESRCJ

Cyber Locc said:


> I meant the TR was holding you back on the TSE leaderboards lol.
> 
> 
> 
> Yeah, you have almost 40W over me with your stock BIOS. My cards are pretty wattage-starved lol.
> 
> 
> 
> 
> Right, but like I already said, the KPE's and Galax's only benefits are the BIOS, outside of LN2, and I am not doing the LN2 stuff these days; I may grab a chiller, but that's about it. My cards could handle the 400/450W, which I can cool effectively, if I shunt modded or got a different BIOS.
> 
> As for the XE, actually most of the top benchers disable cores on the 7980XE; TSE doesn't much like over 16 cores (KPE mentioned this a few times; all his XE results have 2 cores disabled), and my 9960X score matches and exceeds 7980XE scores at the same clocks. They do get the benefit of disabling 2 of the underdog cores, however. I was forced into the 9960X by supply, and I am happy enough: 10,500-ish with crap RAM (new stuff on the way, once Newegg gets stock) and the Asus auto-overclock with some small tuning. I don't have enough cooling atm; waiting on a bigger external rad to come.


You're thinking of Fire Strike for disabling 2 cores on a 7980XE. Fire Strike is best run at 16C/32T, Time Spy with HT disabled on a 7980XE, and Time Spy Extreme with all 36 threads. I've posted my results for reference:
https://www.3dmark.com/fs/17620724
https://www.3dmark.com/spy/5622114
https://www.3dmark.com/spy/5517723


----------



## dante`afk

hotrod717 said:


> Yeah, with the code it's $1800. Just like any other premium card, it costs more. I couldn't resist, and I don't buy much gear these days. Inbound for Friday.


you'd get more with a TRX lol.


----------



## hotrod717

dante`afk said:


> you'd get more with a TRX lol.


More what? Headaches trying to OC it? Or more money for it @ $2500? For context, I've owned just about every Matrix, Hawk, Lightning, or Kingpin since they started making them, and I currently have about a dozen from various generations. With all these types of cards you are paying for features, the ease with which you can overclock, and the card's ability to take it.


----------



## joyzao

bigjdubb said:


> How much more were you hoping to get out of it? If benchmarking is your thing, then every MHz counts, but in gaming 100 MHz isn't very noticeable. I rarely notice the difference when I forget to load my overclocked profile before gaming, and that's about a 150 MHz clock-speed difference in-game and +750 MHz on memory. It might be more noticeable if you're running 4K and chasing after every extra frame you can get.



I've had 4 video cards: the Strix OC, the 'normal' HOF, the HOF OC Lab, and now this MSI Trio X.

Of all of them on air, this one is the coolest for sure. The HOF reached an incredible 80 degrees in games; it had a high clock, but a high temperature. The Strix was the worst of all for clocks and temperature. The HOF OC Lab I can't really compare, since it's under water.

This MSI is very good because it reaches 2040-2055 MHz and stays in the 60s, which is great.

Yesterday I was playing with the voltage curve in Afterburner; I got better results, but since the card is unstable there, even with a higher clock I can't get better final results. It did reach up to 2130 MHz, though.

Any real tips on how I can work this voltage curve?

The BIOS I'm using is the 400W one; is that the correct one to use?

RTX 2080 Ti MSI Trio X

PS: My memory is Samsung. In games my clock stays practically stable at 2040 MHz with +100 on the GPU clock and +900 on memory; some games accept +120, but the performance does not change.

What do you think of the final result? Unfortunately I live in a country that does not have many options like that, and I went with MSI for the warranty and because I like the brand; the Asus RMA here sucks, and with Galax I had problems.

Thanks


----------



## zack_orner

joyzao said:


> I've had 4 video cards: the Strix OC, the 'normal' HOF, the HOF OC Lab, and now this MSI Trio X.
> 
> Of all of them on air, this one is the coolest for sure. The HOF reached an incredible 80 degrees in games; it had a high clock, but a high temperature. The Strix was the worst of all for clocks and temperature. The HOF OC Lab I can't really compare, since it's under water.
> 
> This MSI is very good because it reaches 2040-2055 MHz and stays in the 60s, which is great.
> 
> Yesterday I was playing with the voltage curve in Afterburner; I got better results, but since the card is unstable there, even with a higher clock I can't get better final results. It did reach up to 2130 MHz, though.
> 
> Any real tips on how I can work this voltage curve?
> 
> The BIOS I'm using is the 400W one; is that the correct one to use?
> 
> RTX 2080 Ti MSI Trio X
> 
> PS: My memory is Samsung. In games my clock stays practically stable at 2040 MHz with +100 on the GPU clock and +900 on memory; some games accept +120, but the performance does not change.
> 
> What do you think of the final result? Unfortunately I live in a country that does not have many options like that, and I went with MSI for the warranty and because I like the brand; the Asus RMA here sucks, and with Galax I had problems.
> 
> Thanks


You can try the HOF 450W BIOS from a few pages back; I've had mixed results with it. My Trio seems to bench best right after a BIOS flash, then degrades slightly over two days; it doesn't matter if it's the 406W or the 450W BIOS. I have seen the card pull 405W on the 406W BIOS and 398W on the 450W BIOS, but it clocks slightly better with the 450W one. My card has Micron memory, so hopefully your card has better results.

2700X, ROG Crosshair VII Hero, 1TB 970 Evo, G.Skill 3200 CL14, MSI 2080 Ti Gaming X Trio, Asus ROG Ryuo 240 AIO


----------



## joyzao

zack_orner said:


> You can try the HOF 450W BIOS from a few pages back; I've had mixed results with it. My Trio seems to bench best right after a BIOS flash, then degrades slightly over two days; it doesn't matter if it's the 406W or the 450W BIOS. I have seen the card pull 405W on the 406W BIOS and 398W on the 450W BIOS, but it clocks slightly better with the 450W one. My card has Micron memory, so hopefully your card has better results.
> 
> 2700X, ROG Crosshair VII Hero, 1TB 970 Evo, G.Skill 3200 CL14, MSI 2080 Ti Gaming X Trio, Asus ROG Ryuo 240 AIO



Is it safe to use this HOF BIOS on my 2080 Ti Trio X? Could you give me the (download) link if possible? It won't void the warranty, will it?

I'll try to improve then.

Does it really pull 400W of power? Awesome numbers, haha.

Is it more appropriate to use +100mV in Afterburner or to leave the voltage at default?


----------



## Cyber Locc

gridironcpj said:


> Cyber Locc said:
> 
> 
> 
> I meant the TR was holding you back on the TSE leaderboards lol.
> 
> 
> 
> Yeah, you have almost 40W over me with your stock BIOS. My cards are pretty wattage-starved lol.
> 
> 
> 
> 
> Right, but like I already said, the KPE's and Galax's only benefits are the BIOS, outside of LN2, and I am not doing the LN2 stuff these days; I may grab a chiller, but that's about it. My cards could handle the 400/450W, which I can cool effectively, if I shunt modded or got a different BIOS.
> 
> As for the XE, actually most of the top benchers disable cores on the 7980XE; TSE doesn't much like over 16 cores (KPE mentioned this a few times; all his XE results have 2 cores disabled), and my 9960X score matches and exceeds 7980XE scores at the same clocks. They do get the benefit of disabling 2 of the underdog cores, however. I was forced into the 9960X by supply, and I am happy enough: 10,500-ish with crap RAM (new stuff on the way, once Newegg gets stock) and the Asus auto-overclock with some small tuning. I don't have enough cooling atm; waiting on a bigger external rad to come.
> 
> 
> 
> You're thinking of Fire Strike for disabling 2 cores on a 7980XE. Fire Strike is best run at 16C/32T, Time Spy with HT disabled on a 7980XE, and Time Spy Extreme with all 36 threads. I've posted my results for reference:
> https://www.3dmark.com/fs/17620724
> https://www.3dmark.com/spy/5622114
> https://www.3dmark.com/spy/5517723

You are probably correct, now that you mention it. 

Even still, I'm happy. If I may ask, what are your clocks in that bench? Is it all cores unified, is what I'm asking. 

That's a good TSE result. I really need RAM; I have crap RAM in my rig atm, some old DDR4 I had around while waiting on a backorder at Newegg. The RAM is like 3200 17-18-19-39, I think; it's OG DDR4, from the first release of G.Skill Ripjaws. Terrible RAM lol.

Once my new RAM gets here, and I get this rad situation sorted out, I will push higher. Right now my limiting factor is low rad space, with only two 360s while waiting on my external rad. I can do 4.8 GHz all-core at 1.23V; I haven't pushed further, as I can't cool it lol. (I may be able to cool it, I just haven't tried lol)


----------



## bogdi1988

Just finished up my setup. TR 2990WX, X399 MEG Creation, 6x NVME, 2x 2080TI full watercooling. Running the Galax 450W bios on both cards.


----------



## Thoth420

6x NVME! Dayummmmm

Loving the distro plate


----------



## bogdi1988

Thoth420 said:


> 6x NVME! Dayummmmm
> 
> Loving the distro plate


2 in RAID 0 for OS, and 4 in RAID 0 storage. The distro plate is from Bykski. The case is the Corsair 1000D. Had to mod the distro a bit and Dremel it, as the 2080Ti plate is a bit too long. The case got a bit of modding as well; it has some stupid non-removable HDD bays, so the rivets holding them were drilled out. Had to get some spacers built up for the distro plate, as it's supposed to sit further to the right in the case, but that space is occupied by 1 of the 3 radiators. 21 total fans in that case as well. And the little thing peeking out on the bottom left is a small Intel Hackintosh (yes, the case supports 2 PSUs and 2 different motherboards). Overkill? Yes!


----------



## Cyber Locc

bogdi1988 said:


> Thoth420 said:
> 
> 
> 
> 6x NVME! Dayummmmm
> 
> Loving the distro plate
> 
> 
> 
> 2 in RAID 0 for OS, and 4 in RAID 0 storage. The distro plate is from Bykski. The case is the Corsair 1000D. Had to mod the distro a bit and dremmel it as the 2080TI plate is a bit too long. The case got a bit of modding as well, has some stupid non-removable HDD bays so the rivets holding them were drilled out. Had to get some spacers built up for the distro plate as it's supposed to sit further to the right in the case, but that space is occupied by 1 of the 3 radiators. 21 total fans in that case as well. And the little thing peeking on the bottom left is a small Intel Hackintosh (yes, the case supports 2 PSUs and 2 different motherboards) Overkill? Yes!

How'd you do the 6? The board and an AIC?


----------



## joyzao

Would this be the BIOS to try for better benchmark performance? https://www.techpowerup.com/vgabios/204869/galax-rtx2080ti-11264-180927

Is it safe to apply to an MSI Trio X 2080 Ti?

Edit: 

Can I use the Strix 1000W XOC BIOS, or is it no longer safe to use?


----------



## hotrod717

zack_orner said:


> You can try the HOF 450W BIOS from a few pages back; I've had mixed results with it. My Trio seems to bench best right after a BIOS flash, then degrades slightly over two days; doesn't matter if it's the 406 or the 450 BIOS. I have seen the card pull 405W on the 406 BIOS and 398W on the 450 BIOS, but it clocks slightly better with the 450. My card has Micron memory, so hopefully your card has better results.
> 
> 2700x rog cross hair hero vii 1 tb 970 evo gskiils 3200cl14 msi 2080 ti gaming x trio asus rog ryou 240 aio


The trick is cooling. If you do not have proper cooling, the cards won't scale regardless of BIOS. If you can keep them under 30°C, you'd notice a difference.


----------



## ESRCJ

Cyber Locc said:


> gridironcpj said:
> 
> 
> 
> 
> 
> Cyber Locc said:
> 
> 
> 
> I meant TR was holding you back in the TSE leader boards lol.
> 
> 
> 
> Ya you have almost 40ws over me with your stock bios. My cards are pretty wattage starved lol.
> 
> 
> 
> 
> Right but like I already said, the KPE and Galax only benefits are the Bios, outside of LN2, which I am not doing the LN2 stuff these days, may grab a chiller but thats about it. My cards could handle the 400/450ws, that I can cool effectively if I shunt modded or got a different bios.
> 
> As for the XE, actually most the top benchers disable the cores on the 7980xe, TSE doesn't much like over 16 cores (KPE mentioned this a few times, all his XE results have 2 cores disabled), my 9960x score matches and exceeds 7980xe scores at the same clocks. They do get the benefit of disabling 2 of underdog cores however, but I was forced into the 9960x, by supply, and I am happy enough. 10,500ish with crap ram (new stuff on the way, once new egg gets stock) and Asus Auto overclock, with some small tuning, I dont have enough cooling atm, waiting on bigger ext rad to come.
> 
> 
> 
> You're thinking of Fire Strike in disabling 2 cores on a 7980XE. Fire Strike is best run at 16C/32T, Time Spy with HT disabled for a 7980XE, and Time Spy Extreme with all 36 threads. I've posted my results for reference:
> https://www.3dmark.com/fs/17620724
> https://www.3dmark.com/spy/5622114
> https://www.3dmark.com/spy/5517723
> 
> 
> You are probably correct, now that you mention it.
> 
> 
> Even still I'm happy, if I may, what are your clocks in that bench? Is it all cores unified is what I'm asking.
> 
> That's a good TSE result. I really need ram, I have crap ram in my rig atm, it's some old DDR4 I had around waiting on backorder at Newegg. The ram is like 3200, 17-18-19-39 I think, it's OG DDR4, from the first release of Gskill Ripjaws. Terrible ram lol.
> 
> Once my new ram gets here, and I get this Rad situation sorted out, I will push higher. Right now my limiting factor is low rad space, with only 2 360s waiting on my External rad. I can do 48 all cores, at 1.23, haven't pushed further, as I can't cool it lol. (I may be able to cool it, just haven't tried lol)

If I recall, all core clocks were unified in those runs. I will try to beat my old scores if a viable XOC BIOS comes along for the 2080 Ti. That would mainly help in Time Spy Graphics Test 2, where I was definitely hitting the 380W power limit with the Galax reference BIOS. 

Memory will definitely help with some of these physics tests. It helps squeeze out those last few points. 

4.8GHz at 1.23V is very good. You may need to increase vcore when you're pushing faster memory with tighter timings.


----------



## Thoth420

bogdi1988 said:


> 2 in RAID 0 for OS, and 4 in RAID 0 storage.  The distro plate is from Bykski. The case is the Corsair 1000D. Had to mod the distro a bit and dremmel it as the 2080TI plate is a bit too long. The case got a bit of modding as well, has some stupid non-removable HDD bays so the rivets holding them were drilled out. Had to get some spacers built up for the distro plate as it's supposed to sit further to the right in the case, but that space is occupied by 1 of the 3 radiators. 21 total fans in that case as well. And the little thing peeking on the bottom left is a small Intel Hackintosh (yes, the case supports 2 PSUs and 2 different motherboards) Overkill? Yes!


No such thing as overkill. The case I am building in now (InWin 303C) has a distro plate from Barrow, but you kinda have to use their blocks too, and I already ordered mine.


----------



## J7SC

bogdi1988 said:


> Just finished up my setup. TR 2990WX, X399 MEG Creation, 6x NVME, 2x 2080TI full watercooling. Running the Galax 450W bios on both cards.


 
Very nice ! :thumb:


I'm using the 3 M.2 slots on the MSI X399 MEG Creation mobo, but have yet to add the extra four via the Aero card that came with it


----------



## J7SC

gridironcpj said:


> You're thinking of Fire Strike in disabling 2 cores on a 7980XE. Fire Strike is best run at 16C/32T, Time Spy with HT disabled for a 7980XE, and Time Spy Extreme with all 36 threads. I've posted my results for reference:
> https://www.3dmark.com/fs/17620724
> https://www.3dmark.com/spy/5622114
> https://www.3dmark.com/spy/5517723


 
...yeah, per the attachment below, the Time Spy Extreme 2x HOF leaderboard... a W3175X at 28 cores / 56 threads on LN2 certainly won't hurt the overall score, even compared to a 9980XE... wondering whether we'll see even crazier high-core-count scores in the summer/fall when some new heavies enter the CPU world...



hotrod717 said:


> The trick is cooling. If you do not have proper cooling the cards wont scale regardless of bios. If you can keep them under 30*c, you'd notice a difference.


 
^^ This I can certainly attest to. What's more, I get my best scores with low temps AND without touching the voltage sliders... typically, max GPU voltage tops out at no more than 1.031V for one card and 1.043V for the second. The lower the voltage for a given GPU MHz (helped along by cooling), the more power-limit wattage is left over.


----------



## Thoth420

Hey all, so I am having issues with my GPU on my version of windows (1809) and would like to install 1803 instead but it seems windows media creation tool doesn't have options for older versions. This install was direct to 1809 so I have no rollback options. Where do I secure a trusted copy of 1803 Pro 64bit?


----------



## Cyber Locc

Thoth420 said:


> Hey all, so I am having issues with my GPU on my version of windows (1809) and would like to install 1803 instead but it seems windows media creation tool doesn't have options for older versions. This install was direct to 1809 so I have no rollback options. Where do I secure a trusted copy of 1803 Pro 64bit?


Just uninstall the update.


----------



## Thoth420

Cyber Locc said:


> You really can't, Windows updates are not optional anymore; even if you get that very old build, it will force update.
> 
> Your 2080ti drivers, all of them, also require 1809.


https://nvidia.custhelp.com/app/ans...ing-series)-gpus-and-windows-10-compatibility
Nvidia says 1803 or newer.
I know how to prevent the version update.

Should I just try the new April 2019?


----------



## Cyber Locc

Thoth420 said:


> https://nvidia.custhelp.com/app/ans...ing-series)-gpus-and-windows-10-compatibility
> Nvidia says 1803 or newer.
> I know how to prevent the version update.
> 
> Should I just try the new April 2019?


You could try the newest one, or try to uninstall the update. 

It looks like Nvidia may have fixed it, as 1809 is known for GPU issues; however, other people tried this back at the beginning of the year, and Nvidia drivers required 1809. Though I am reading the same, that games can see issues.


https://pureinfotech.com/uninstall-windows-10-1809-october-2018-update/


----------



## Thoth420

Cyber Locc said:


> You could try the newest one, or try to uninstall the update.
> 
> It looks like Nvidia may have fixed it, as 1809 is known for GPU issues; however, other people tried this back at the beginning of the year, and Nvidia drivers required 1809. Though I am reading the same, that games can see issues.
> 
> 
> https://pureinfotech.com/uninstall-windows-10-1809-october-2018-update/


I guess I will just ride it out or try April if it interferes with my main game. It's just my side catalogue that is having problems at the moment. 
Thanks Cyber


----------



## bogdi1988

Cyber Locc said:


> How you do the 6? The board and a AIC?





J7SC said:


> Very nice ! :thumb:
> 
> 
> I'm using the 3 M.2 slots on the MSI X399 MEG Creation mobo, but have yet to add the extra four via the Aero card that came with it


The MSI Aero card is garbage, to be honest. Wayyy too big and bulky. Due to the way the MSI board does the PCIe lane split, I have the following: 3 NVMe on the motherboard M.2 slots, 2 NVMe in an Asus Hyper AIC (it is slimmer and smaller, and only uses 1 PCIe slot width!), and 1 more NVMe in a small PCIe 4x AIC, a generic one from Fry's Electronics. Before I went SLI, and only had 1 GPU, I had 2 on the mobo and 4 on the Asus AIC.


----------



## J7SC

bogdi1988 said:


> The MSI Aero card is garbage to be honest. Wayyy too big and bulky. Due to the way the MSI board does the PCI lane split, I have the following: 3 NVME on the motherboard m.2 slots. 2 NVME in an Asus Hyper AIC (it is slimmer and smaller. Only uses 1 PCI slot width!). 1 more NVME in a small PCI 4x AIC - generic one from Fry's electronics. Before I went SLI, and only had 1 GPU, I had 2 on mobo and 4 on Asus AIC.


 
...since the Aero came "free" with the mobo and I have some PCIe extenders, I plan to put it into a different build, w/o the actual cooler (the build I'm thinking about has neighboring 120mm fans..). 

Also, given your PCIe slot needs, below is a pic of an actual Epyc Rome 7nm (64c/128t capable) mobo, complete with lots of PCIe slots (incl. PCIe 4.0)... and re: other posts on the almost US$2k price for the 2080 Ti Kingpin, well, there's this $3k-per-card 2080 Ti option coming... Nice card, with both Hybrid cooling and provision for a full-custom block in one, but US$3k for a 2080 Ti :sicksmile


----------



## bogdi1988

Cyber Locc said:


> How you do the 6? The board and a AIC?





J7SC said:


> Very nice ! :thumb:
> 
> 
> I'm using the 3 M.2 slots on the MSI X399 MEG Creation mobo, but have yet to add the extra four via the Aero card that came with it





J7SC said:


> ...since the Aero came ''free'' with the mobo and I have some PCIe extenders, I plan to put it into a different build, w/o the actual cooler (the build I'm thinking about has neighboring 120mm fans..).
> 
> Also, given your PCIe slot needs, below is a pic of an actual Epyc Rome 7nm (64c/128t capable) mobo, complete with lots of PCIe slots (incl. P 4.0)....and re. other posts on the almost US$2k price for the 2080 Ti Kingpin, well, there's this $3k per 2080 Ti option coming... Nice card, with both Hybrid cooling and arrangement for full-custom block in one, but US$3k for a 2080 Ti :sicksmile


Looking at that motherboard in the picture... quite interesting but where's the motherboard chipset?  Is it on the back side of the motherboard, or?


----------



## J7SC

bogdi1988 said:


> Looking at that motherboard in the picture... quite interesting but where's the motherboard chipset?  Is it on the back side of the motherboard, or?


 
...looks like it has been covered up (competitors' eyes everywhere) in the top right quadrant. Here's a link to the current-gen Gigabyte Epyc (14nm) board I think the new board is based on https://static.gigabyte.com/Product/101/6351/2017062209201120_src.png

...I would love to do a build with a 64c/128t 7nm CPU, two 2080 Tis, and NVMe raid... the CPU might not OC that well, but it doesn't have to, and octa-channel ram @ 3(+?) GHz


----------



## Jpmboy

hotrod717 said:


> The trick is cooling. If you do not have proper cooling the cards wont scale regardless of bios. If you can keep them under 30*c, you'd notice a difference.


yo Rod... waiting to see your 2080Ti KPE in action. Have you found a firmware for EVBot?


----------



## Shawnb99

bogdi1988 said:


> 2 in RAID 0 for OS, and 4 in RAID 0 storage.  The distro plate is from Bykski. The case is the Corsair 1000D. Had to mod the distro a bit and dremmel it as the 2080TI plate is a bit too long. The case got a bit of modding as well, has some stupid non-removable HDD bays so the rivets holding them were drilled out. Had to get some spacers built up for the distro plate as it's supposed to sit further to the right in the case, but that space is occupied by 1 of the 3 radiators. 21 total fans in that case as well. And the little thing peeking on the bottom left is a small Intel Hackintosh (yes, the case supports 2 PSUs and 2 different motherboards) Overkill? Yes!


If all you're using it for is to surf the web and play Candy Crush, then it might be a little overkill; otherwise there's no such thing.


----------



## bogdi1988

Shawnb99 said:


> If all you're using it for is to surf the web and play candy crush then it might be a little overkill otherwise there's no such thing.


video editing, gaming, etc


----------



## Shawnb99

bogdi1988 said:


> video editing, gaming, etc


Then it's perfect!


----------



## Thoth420

I need some recommendations for a pump/res combo or a pump and res that can be combined for my TT P5 build. I want to use the base the chassis has to mount it to. The bigger the better. 
The 2080Ti is getting a block as well and added to the loop. Using a single 480mm rad for CPU, VRM (block built into mobo) and GPU, I want to make sure the pump has balls.
Radiator is a 480mm Black Ice Nemesis GTR from Hardware Labs.


----------



## hotrod717

Jpmboy said:


> yo Rod... waiting to see your 2080Ti KPE in action. Have you found a firmware for EVBot?


Tools are not available yet. Soon. EVGA Forum - OC Lab is what I was told. Also expect it to show up on xdevs.
I'm excited that I don't have to tear it down and put a universal on it. Should integrate into my set-up pretty easily.

Edit: Classy Tools is now available. 44 minutes later, to be exact.

@Thoth420 - I have various electronics and PSUs that make this ftw. https://www.ebay.com/itm/Iwaki-Dire...e=STRK:MEBIDX:IT&_trksid=p2060353.m1438.l2649


----------



## bigjdubb

Thoth420 said:


> I need some recommendations for a pump/res combo or a pump and res that can be combined for my TT P5 build. I want to use the base the chassis has to mount it to. The bigger the better.
> The 2080Ti is getting a block as well and added to the loop. Using a single 480mm rad for CPU, VRM(block built into mobo) and GPU I want to make sure the pump has balls.
> Radiator is a 480mm Black Ice Nemesis GTR from Hardware Labs.


I used an Alphacool VPP655 pump/top combo and a 250mm glass tube res on my p5. There is plenty of vertical space so you don't have to use a pump/res combo if you don't want to. I had a flow meter between the res and the pump but that was mostly to fill out some space. Any pump/top or pump/res that has some sort of brackets will mount up pretty easily but I think the D5 style pumps are the way to go, powerful and quiet. If you want something fantastical looking, aquatuning makes a tube res with a fountain effect that's kinda pretty.

The VPP655 didn't have any trouble pushing through my CPU, GPU and two 480mm rads on my P5, but there is room to run a dual pump top vertically if you want the added oomph and security.


----------



## kx11

Lightning Z is back under the original fans, getting ready to be sold with the block, etc. 



I loved it, but MSI disappointed me with the way they handled the 1350MHz clock issues.


----------



## J7SC

kx11 said:


> Lightning Z is back under the original fans getting ready to be sold with the block.. etc
> 
> 
> 
> i loved it but MSi disappointed me with the way they handled the 1350mhz clock issues


 
Which, if any, 2080 Ti are you getting instead ? KPE ? If so, I would try to go for the full water-blocked one, unless you plan to use a pot for LN2, DICE etc


----------



## Thoth420

bigjdubb said:


> I used an Alphacool VPP655 pump/top combo and a 250mm glass tube res on my p5. There is plenty of vertical space so you don't have to use a pump/res combo if you don't want to. I had a flow meter between the res and the pump but that was mostly to fill out some space. Any pump/top or pump/res that has some sort of brackets will mount up pretty easily but I think the D5 style pumps are the way to go, powerful and quiet. If you want something fantastical looking, aquatuning makes a tube res with a fountain effect that's kinda pretty.
> 
> The VPP655 didn't have any trouble pushing through my cpu, gpu and 2-480mm rads on my P5 but there is room to run a dual pump top vertically if you want the added oomph and security.


Thanks a lot! I'll take a look at those. The fountain sounds cool. I am going for a black oil/goo look for the coolant dye in this one.


----------



## ProfeZZor X

I guess I can officially join this club, since mine arrived in the mail yesterday. Brand new, and I only paid $700 for it.


----------



## kx11

J7SC said:


> Which, if any, 2080 Ti are you getting instead ? KPE ? If so, I would try to go for the full water-blocked one, unless you plan to use a pot for LN2, DICE etc



Yeah, it should be here Saturday. I'll slap the waterblock on it a bit later, because my case needs that front panel res/pump combo from Bitspower so I can put 3 more fans in the back with the rad, leaving the bottom of the case open for any option like a vertical mount... etc


----------



## J7SC

kx11 said:


> yeah , it should be here saturday , i'll slap the waterblock on it a bit later because my case needs that front panel res/pump combo from Bitspower so i can put 3 more fans in the back with the rad so the bottom of the case space can be open for any option like a vertical mount ... etc



:thumb: ...not sure if you have / can get an EVBot, but there's a forum at Kingpincooling.com which may well have some additional software tools (with or without EVBot)


----------



## zeall0rd

Just gave my loaner 2080Ti back. Considering buying one myself, but nothing really appeals to me right now. Hall of Fame? Horrible bang for the buck, also no Watercool/Aquacomputer blocks. FTW3? Nice, but still only an EKWB block incoming. Founders? I'd have to shunt mod it to not hamper performance via the GALAX 380W BIOS' VRAM timings. Strix? Very nice PCB, but there's that thing with waterblock compatibility due to ASUS' epoxy resin chiploc - also, only a half-working XOC BIOS without V-F. Right now, shunt modding a Founders Edition and putting a Watercool Heatkiller IV on it seems like the way to go, but to be frank, I've never done that before and I don't exactly want to take a soldering iron to a brand new card. There's the Kingpin, which is also horribly expensive and has Hybrid cooling - a big no-no for me.


----------



## NBrock

zeall0rd said:


> Just gave my loaner 2080Ti back. Considering buying one myself, but nothing really appeals to me right now. Hall of Fame ? Horrible bang for the buck, also no Watercool/Aquacomputer blocks. FTW3 ? Nice, still only EKWB block incoming. Founders ? I'd have to shuntmod it to not hamper performance via the GALAX 380W BIOS' VRAM timings. Strix ? Very nice PCB but there's that thing with the Waterblock compatibility due to ASUS' expoxy resin chiploc - also, only a half-working XOC bios without V-F. Right now, shuntmodding a Founder's Edition and putting a Watercool Heatkiller IV on it seems like the way to go, but to be frank, never done that before and I don't exactly want to take a soldering iron to a brand new card. There's the Kingpin, which is also horribly expensive and has Hybrid cooling - big no-no for me.


Founders and a water block + the GALAX 380 bios are great. No real need for shunt mod.


----------



## hotrod717

Can anyone post some information or benches of their KPE?


----------



## willverduzco

zeall0rd said:


> Just gave my loaner 2080Ti back. Considering buying one myself, but nothing really appeals to me right now. Hall of Fame ? Horrible bang for the buck, also no Watercool/Aquacomputer blocks. FTW3 ? Nice, still only EKWB block incoming. Founders ? I'd have to shuntmod it to not hamper performance via the GALAX 380W BIOS' VRAM timings. Strix ? Very nice PCB but there's that thing with the Waterblock compatibility due to ASUS' expoxy resin chiploc - also, only a half-working XOC bios without V-F. Right now, shuntmodding a Founder's Edition and putting a Watercool Heatkiller IV on it seems like the way to go, but to be frank, never done that before and I don't exactly want to take a soldering iron to a brand new card. There's the Kingpin, which is also horribly expensive and has Hybrid cooling - big no-no for me.


AND



NBrock said:


> Founders and a water block + the GALAX 380 bios are great. No real need for shunt mod.


I don't understand why so many are hesitant to perform the shunt mod. It's actually incredibly easy, safe, and removable (acetone) if you use a CircuitWriter conductive pen. And on the Strix PCB, it's even easier than for most other cards, as the 5 milliohm resistors that you'd be drawing on are right next to each of the 8-pin connectors, with no other SMCs anywhere near. Since there are no nearby components and since this is silver rather than gallium based (i.e. no interaction with solder), you can slather a bunch on, decreasing effective resistance with each pass. In any case, a shunt mod is highly recommended, as the 380W BIOS isn't quite as fast as other BIOSes when clocks are identical, and 380W is only good for about 1.05V in tough stress tests. You need about 460W to get a reliable 1.093V in the stress tests I've used (Superposition 1080p Extreme, Superposition 8k Optimized, Timespy) when under water and not running fans.

For my shunt mod, I decided to go with a 1.42x power limit boost by creating an effective resistance of 3.5 milliohms. The upper limit is around 1.625x, as that corresponds to 3.08 milliohm (1/3.08 = 1/5 + 1/8), which is the upper end of safe (before card goes to low-power limp mode). I've attached pics of my results and mod below. Tests were tightly controlled (fixed voltage and clocks such that PL and temps wouldn't affect clocks or load). I then tracked perceived power decrease as I added more and more passes of CircuitWriter. With just 3 passes, I was able to get an effective power limit of 1.26x. When I kept going with several more, I was able to get the 1.42x effective power multiplier described above. See spreadsheet for all the calculations, equations, parameters, etc.

In my experience, the shunt mod + FE or similar bios with tight memory timings (i.e. Asus 325W BIOS) yields far better results than the Galax 380W BIOS when clocks are identical. Using the 380W BIOS but no shunt mod, I was able to get a 17.1k GPU score in Timespy (top 45 GPU and top 60 overall at the time, but fell to top 80 on both after a while). Strix XOC BIOS without shunt mod works decently well on Strix PCBs (sustains 1.068V rather than 1.05V on ref cards, and 1.05V that I could sustain in stress tests on the Galax 380W), but lack of F-V curve is a killer. Halving the NVLink bandwidth is also a no-go for SLI users. Due to the lack of the FV curve, the Strix XOC BIOS only boosted my score to about 17.2k.

Reverting to stock Asus 325W BIOS and upping my effective power limit to 462W (325*1.42) via shunt mod, I went up to 17410 GPU score (and 16747 total + 13779 CPU with the 9900k at 5.4). This was good enough to place me in the top 50 overall and GPU. See screenie and link.

Superposition saw similar gains when running 1080p Extreme. I was able to get 10.53k on the Galax 380W BIOS, and slightly higher (~10.6k) when on stock BIOS but no longer power limited. That said, I didn't do more than a few passes on Superposition, and didn't push clocks much, so I think 10.7k is there if I really want to push (hence why I haven't updated my signature links to reflect the improved SuPo score yet).

Finally, unless you're going for a super aesthetic show build, going bonkers on the radiators, and/or concerned about AIO longevity/reliability/leakage, the AIO on the Kingpin isn't actually a bad thing. During the GN video, Vince mentioned really impressive temps when running at 2200 MHz and I believe 1.2V using EVBot (albeit on an open-air test bench). Likewise, there really isn't that much of an issue on the Asus having limited waterblock support (although to my knowledge they exist). You can just run a G12 + any 280mm or 360mm AIO and get similar performance to the very best blocks on custom loops with large radiators (~16-18C delta temps while in superposition or timespy). It's sort of cheating, as the AIO only has to cool the die, and water temps aren't increased by VRMs and VRAM. Check my signature for ~18C delta with 37C max on Supo when getting the 10.53k score described above... And that was with nearly silent ~950 RPM fans. I could probably knock a few degrees off that delta temp with full speed fans.

TBH, as a long time custom loop user (~20 years), I was reluctant to go AIO for this build, but did so since I was sick of maintenance and since my new case didn't have room for my old setup (swiftech-branded d5 pump + ek supremacy block + 5.25 bay res + XSPC 480mm double-thick rad + 8x 120mm gentle typhoons). In fact, I've been nothing other than extremely happy with the performance. Hell, I got that 17410 GPU score above on a 280mm AIO and a Kraken G12, and I challenge anyone here with a single card, no external volt mods (i.e. 1.093V or less), and ambient cooling (i.e. big water cooling without a chiller) to produce anything significantly higher--or even reach that.

Oh and since the VRM on the Strix is the second best from an output capacity and phase configuration standpoint (HOF > Strix > the rest of the custom cards) and third best from an output filtering standpoint (Kingpin >> HOF > Strix), it doesn't need any cooling in my experience. While drawing peaks of ~460W from the card during long bench sessions, my VRM temps stayed in the 70s, which is well within spec--and this is without even enabling the G12's included fan and while running 2160-2220 MHz at 1.093V depending on test--in a highly restrictive Phanteks Evolv ATX case with 27-28C room air and near silent fans on the two radiators.

TLDR: Do the shunt mod. Asus PCB is great, but not the best. That said, it has perhaps the third best VRM overall when considering output capacity (HOF = Strix = all other 16*70A SPS custom cards > 16*60A SPS Kingpin > Ref), phase configuration (10-phase config on HOF and Strix vs 8-phase without doublers on all other custom cards), efficiency (Kingpin > HOF > Strix > all other custom cards > ref), and output filtering (massive capacitor banks on Kingpin >> HOF > Strix > all other custom cards > ref). It also supports the OC Panel, which enables up to 1.2V on the core--though I haven't tested this yet, so I don't know if you still need to shunt mod when using the OC Panel. Ultimately, since the max power draw when restricted to 1.093V is under 500W, even the reference card's VRM is more than capable, provided cooling is adequate. In any case, don't let lack of a large variety of full cover blocks stop you if you want the Asus card and only care about performance... there are always 280/360mm AIOs and/or GPU core-only blocks if you're already on a custom loop. Oh, and if money and availability were no object, we would all (obviously) be running Kingpin or HOF cards.
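The shunt-mod arithmetic in this post reduces to two one-liners: the parallel resistance of the painted path, and the resulting factor by which the card under-reads its power draw. Here's a minimal Python sketch of those numbers (the 5 milliohm stock shunt, ~8 milliohm painted path, 3.5 milliohm target, and 325W Asus BIOS limit are taken from the post above; the function names are mine):

```python
# The card senses current as a voltage drop across a 5 mOhm shunt resistor.
# Painting a conductive path over it adds a parallel resistance, so the card
# sees a smaller drop and under-reads its own power draw by a fixed factor.

def parallel(r1_mohm, r2_mohm):
    """Effective resistance of two parallel resistors, in milliohms."""
    return 1.0 / (1.0 / r1_mohm + 1.0 / r2_mohm)

def power_multiplier(stock_mohm, effective_mohm):
    """Factor by which the real power limit rises: stock R / modded R."""
    return stock_mohm / effective_mohm

STOCK = 5.0  # mOhm, stock shunt value per the post

# Upper end of safe per the post: an ~8 mOhm painted path in parallel with
# the 5 mOhm shunt gives ~3.08 mOhm effective, i.e. a ~1.625x multiplier.
upper = parallel(STOCK, 8.0)
print(round(upper, 2), round(power_multiplier(STOCK, upper), 3))

# The post's chosen target of 3.5 mOhm effective gives roughly 1.43x,
# turning the Asus 325W BIOS limit into roughly 464W of actual draw.
print(round(325 * power_multiplier(STOCK, 3.5)))
```

Dropping the effective resistance much below that ~3.08 milliohm figure (1/3.08 = 1/5 + 1/8) is where the post warns the card falls into low-power limp mode, which is why tracking perceived draw after each pass of ink matters.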


----------



## zeall0rd

@willverduzco

> Thank you, you helped me decide. I'm going to get the 2080 Ti Strix OC and put the Watercool Heatkiller IV on it. Watercool support confirmed they are aware of the whole chiploc stuff and said not to worry since it should work out anyway, and if not, just take some sandpaper to the plastic standoffs. Now, about shunting it: I'm gonna do it.
Bonus points, strix OC is very attainable here in Germany right now, and costs as much as the FE directly from nvidia.
So, about the shunt mod. You're right. I was thinking about actually soldering on an 8 milliohm resistor or gluing it on, but when it's that easy and doable with a silver pen, heck, there's no reason not to. 
The PCB shot is a bit blurry - is the silver around the shunt resistor only that silver conductive pen, or did you put anything else on there?

As for cooling, I believe I qualify for "bonkers on the radiators" with 2x GTR 480 black ice nemesis from Hardwarelabs ^^


----------



## willverduzco

zeall0rd said:


> @willverduzco
> 
> Thank you, you helped me decide. I'm going to get the 2080 Ti Strix OC and put the Watercool Heatkiller IV on it. Watercool support confirmed they are aware of the whole chiploc stuff and said not to worry since it should work out anyway, and if not, just take some sanding paper to the plastic standoffs. Now, about shunting it, I'm gonna do it.
> Bonus points, strix OC is very attainable here in Germany right now, and costs as much as the FE directly from nvidia.
> So, about the shunt mod. You're right. I was thinking about actually soldering on an 8 milliohm resistor or glueing it on, but when it's that easy and doable with silver pen, heck, there's no reason not to.
> The PCB shot is a bit blurry - is the silver around the shunt resistor only that silver conductive pen or did you put anything else on there ?
> 
> As for cooling, I believe I qualify for "bonkers on the radiators" with 2x GTR 480 black ice nemesis from Hardwarelabs ^^


Ah, well... With that much rad surface area, it really makes sense to be after a full cover block. In all my custom loops I never went larger than a single 480 or 360, so I can only imagine the water temps you'll be maintaining even with minimal fan speed.

As for the pricing, it was relatively similar in the US for me... There was about a $60 difference vs. reference locally, which when spending $1300 is next to negligible if one card is better in some regards than the other. (Even if actually noticing the better-built VRM and output filtering is unlikely, since the reference PCB is so good already.)

As for the shunt resistors and the silver pen, you're correct. The two resistors (directly under each 8-pin connector) are covered in only silver ink. That particular pic was taken with 1 or 2 layers, but I kept on going many more times since the ink isn't extremely conductive and I wanted to get the 325W bios to have enough power rather than relying on shunt mod in conjunction with the 380W BIOS.

If I may suggest, just keep track of perceived current draw in a controlled manner as you add additional passes, as I did, to make sure you don't get too close to that 3.1 milliohm lower limit. The only other thing of note is that since Asus uses so much conformal coating on their boards, you'll have to remove it prior to applying the silver ink, either through abrasion (I used a rough Dremel bit to scuff up the upper part) or with acetone.
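For anyone wanting to sanity-check their passes, the parallel-resistance math behind the mod fits in a few lines of Python. The 5 mOhm stock shunt and ~3.1 mOhm limp-mode floor come from this thread; the 8 mOhm ink-path value is purely illustrative, not a measurement:

```python
# Parallel-resistance sketch of the silver-ink shunt mod.
# Known from the thread: stock shunts are 5 mOhm, and ~3.1 mOhm is the
# quoted floor before the card drops into limp mode. The 8 mOhm ink-path
# value below is illustrative, not a measurement.

STOCK_SHUNT_MOHM = 5.0
LIMP_FLOOR_MOHM = 3.1

def effective_resistance(stock_mohm, ink_path_mohm):
    # Ink painted across the shunt forms a parallel path: 1/R = 1/Rs + 1/Ri
    return 1.0 / (1.0 / stock_mohm + 1.0 / ink_path_mohm)

def power_multiplier(stock_mohm, eff_mohm):
    # The card senses V = I * R; a lower R under-reads current, so the
    # effective power limit rises by R_stock / R_eff
    return stock_mohm / eff_mohm

r_eff = effective_resistance(STOCK_SHUNT_MOHM, 8.0)
mult = power_multiplier(STOCK_SHUNT_MOHM, r_eff)
print(f"effective shunt: {r_eff:.2f} mOhm -> power limit x{mult:.3f}")
if r_eff <= LIMP_FLOOR_MOHM:
    print("warning: at or below the quoted limp-mode floor")
```

With an 8 mOhm ink path this reproduces the 1/5 + 1/8 example quoted elsewhere in the thread (~3.08 mOhm, ~1.625x); each extra pass lowers the ink-path resistance and pushes the multiplier further up.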


----------



## zeall0rd

willverduzco said:


> Ah, well... With that much rad surface area, it really makes sense to be after a full cover block. In all my custom loops I never went larger than a single 480 or 360, so I can only imagine the water temps you'll be maintaining even with minimal fan speed.
> 
> As for the pricing, it was relatively similar in the US for me... There was about a $60 difference for me VS reference locally, which when spending $1300 is next to negligible if one is better in some regards than the other. (Even if actually noticing the better built VRM and output filtering is unlikely, due to the ref PCB being so good already.)
> 
> As for the shunt resistors and the silver pen, you're correct. The two resistors (directly under each 8-pin connector) are covered in only silver ink. That particular pic was taken with 1 or 2 layers, but I kept on going many more times since the ink isn't extremely conductive and I wanted to get the 325W bios to have enough power rather than relying on shunt mod in conjunction with the 380W BIOS.
> 
> If I may suggest, just keep track of perceived current draw in a controlled manner as you add additional passes, as I did, to make sure you don't get too close to that 3.1 milliohm lower limit. The only other thing of note is that since Asus uses so much conformal coating on their boards, you'll have to remove it prior to applying the silver ink, either through abrasion (I used a rough Dremel bit to scuff up the upper part) or with acetone.


Excellent, thank you. Are you sure acetone would work?


----------



## willverduzco

zeall0rd said:


> Excellent, thank you. Are you sure acetone would work?


Not entirely sure, to be honest... From past experience, acetone SHOULD be able to clean up conformal coating with some scrubbing from a cotton ball or Q-tip. That said, I just used abrasion with a rough Dremel bit (sandpaper would accomplish the same effect) such that the ends of each shunt resistor changed color a little. In my opinion, the latter should be easier, as long as you're careful not to slip and damage or knock off any of the SMCs on the board itself.


----------



## zeall0rd

willverduzco said:


> Not entirely sure, to be honest... From past experience, acetone SHOULD be able to clean up conformal coating with some scrubbing from a cotton ball or Q-tip. That said, I just used abrasion with a rough Dremel bit (sandpaper would accomplish the same effect) such that the ends of each shunt resistor changed color a little. In my opinion, the latter should be easier, as long as you're careful not to slip and damage or knock off any of the SMCs on the board itself.


Alright, I'd better not use acetone then. Guess I'll just get some sandpaper. Patience wins out, after all. Shouldn't be too difficult to deal with. I'll have to look for silver conductive pens here, since I want to find something good but don't exactly want to have something shipped from the US ^^ Oh, and you covered the entire top of the SMD, right? I may do that too. Not too keen on remounting the cooler 8 times.


----------



## willverduzco

zeall0rd said:


> Alright, I'd better not use acetone then. Guess I'll just get some sandpaper. Patience wins out, after all. Shouldn't be too difficult to deal with. I'll have to look for silver conductive pens here, since I want to find something good but don't exactly want to have something shipped from the US ^^ Oh, and you covered the entire top of the SMD, right? I may do that too. Not too keen on remounting the cooler 8 times.


I'm actually in the process of responding to your PM with more detailed instructions, but yes... I covered the entire resistor as thoroughly as possible with each application. Due to the somewhat low conductance (or rather, high resistance) of the ink, I applied as much as I could without it spilling over during each application. This roughly equated to a meniscus of about 2-3mm--which looks scary, but is no issue due to the high surface tension of the ink. Then, you must allow it to fully dry (~4-6 hours under heat or overnight without heat), otherwise you'll be scraping off partially dried material while trying to add new ink.

As for testing and re-mounting the cooler 8 or so times, I still recommend testing prior to each new application to see how far you've lowered your card's perceived power draw. After using it myself, I highly doubt anyone can apply enough passes with the conductive ink to enter limp mode, but it's still a good idea to monitor to make sure everything is working. Personally, I just reverted to air cooling while doing all of my testing since re-mounting the G12 + AIO each time would have been a nightmare. I didn't even bother with the backplate or coldplate on the card itself... I only used the main heatsink (which is attached via 6 screws) for convenience.


----------



## zeall0rd

willverduzco said:


> I'm actually in the process of responding to your PM with more detailed instructions, but yes... I covered the entire resistor as thoroughly as possible with each application. Due to the somewhat low conductance (or rather, high resistance) of the ink, I applied as much as I could without it spilling over during each application. This roughly equated to a meniscus of about 2-3mm--which looks scary, but is no issue due to the high surface tension of the ink. Then, you must allow it to fully dry (~4-6 hours under heat or overnight without heat), otherwise you'll be scraping off partially dried material while trying to add new ink.
> 
> As for testing and re-mounting the cooler 8 or so times, I still recommend testing prior to each new application to see how far you've lowered your card's perceived power draw. After using it myself, I highly doubt anyone can apply enough passes with the conductive ink to enter limp mode, but it's still a good idea to monitor to make sure everything is working. Personally, I just reverted to air cooling while doing all of my testing since re-mounting the G12 + AIO each time would have been a nightmare. I didn't even bother with the backplate or coldplate on the card itself... I only used the main heatsink (which is attached via 6 screws) for convenience.


Thank you, that's awesome. Eh, overnight - I guess 8-12 hours will suffice. You're right, it's got to dry fully to really make it count. And yes, the Strix cooler is just mounted with those 6 Phillips screws after all. Did you reapply thermal paste every time? I'm seriously considering going for some sort of Corsair digital PSU to actually get good numbers for power draw.


----------



## NBrock

willverduzco said:


> AND
> 
> 
> 
> I don't understand why so many are hesitant to perform the shunt mod. It's actually incredibly easy, safe, and removable (acetone) if you use a CircuitWriter conductive pen. And on the Strix PCB, it's even easier than for most other cards, as the 5 milliohm resistors that you'd be drawing on are right next to each of the 8-pin connectors, with no other SMCs anywhere near. Since there are no nearby components and since this is silver rather than gallium based (i.e. no interaction with solder), you can slather a bunch on, decreasing effective resistance with each pass. In any case, a shunt mod is highly recommended, as the 380W BIOS isn't quite as fast as other BIOSes when clocks are identical, and 380W is only good for about 1.05V in tough stress tests. You need about 460W to get a reliable 1.093V in the stress tests I've used (Superposition 1080p Extreme, Superposition 8k Optimized, Timespy) when under water and not running fans.
> 
> For my shunt mod, I decided to go with a 1.42x power limit boost by creating an effective resistance of 3.5 milliohms. The upper limit is around 1.625x, as that corresponds to 3.08 milliohm (1/3.08 = 1/5 + 1/8), which is the upper end of safe (before card goes to low-power limp mode). I've attached pics of my results and mod below. Tests were tightly controlled (fixed voltage and clocks such that PL and temps wouldn't affect clocks or load). I then tracked perceived power decrease as I added more and more passes of CircuitWriter. With just 3 passes, I was able to get an effective power limit of 1.26x. When I kept going with several more, I was able to get the 1.42x effective power multiplier described above. See spreadsheet for all the calculations, equations, parameters, etc.
> 
> In my experience, the shunt mod + FE or similar bios with tight memory timings (i.e. Asus 325W BIOS) yields far better results than the Galax 380W BIOS when clocks are identical. Using the 380W BIOS but no shunt mod, I was able to get a 17.1k GPU score in Timespy (top 45 GPU and top 60 overall at the time, but fell to top 80 on both after a while). Strix XOC BIOS without shunt mod works decently well on Strix PCBs (sustains 1.068V rather than 1.05V on ref cards, and 1.05V that I could sustain in stress tests on the Galax 380W), but lack of F-V curve is a killer. Halving the NVLink bandwidth is also a no-go for SLI users. Due to the lack of the FV curve, the Strix XOC BIOS only boosted my score to about 17.2k.
> 
> Reverting to stock Asus 325W BIOS and upping my effective power limit to 462W (325*1.42) via shunt mod, I went up to 17410 GPU score (and 16747 total + 13779 CPU with the 9900k at 5.4). This was good enough to place me in the top 50 overall and GPU. See screenie and link.
> 
> Superposition saw similar gains when running 1080p Extreme. I was able to get 10.53k on the Galax 380W BIOS, and slightly higher (~10.6k) when on stock BIOS but no longer power limited. That said, I didn't do more than a few passes on Superposition, and didn't push clocks much, so I think 10.7k is there if I really want to push (hence why I haven't updated my signature links to reflect the improved SuPo score yet).
> 
> Finally, unless you're going for a super aesthetic show build, going bonkers on the radiators, and/or concerned about AIO longevity/reliability/leakage, the AIO on the Kingpin isn't actually a bad thing. During the GN video, Vince mentioned really impressive temps when running at 2200 MHz and I believe 1.2V using EVBot (albeit on an open-air test bench). Likewise, there really isn't that much of an issue on the Asus having limited waterblock support (although to my knowledge they exist). You can just run a G12 + any 280mm or 360mm AIO and get similar performance to the very best blocks on custom loops with large radiators (~16-18C delta temps while in superposition or timespy). It's sort of cheating, as the AIO only has to cool the die, and water temps aren't increased by VRMs and VRAM. Check my signature for ~18C delta with 37C max on Supo when getting the 10.53k score described above... And that was with nearly silent ~950 RPM fans. I could probably knock a few degrees off that delta temp with full speed fans.
> 
> TBH, as a long time custom loop user (~20 years), I was reluctant to go AIO for this build, but did so since I was sick of maintenance and since my new case didn't have room for my old setup (swiftech-branded d5 pump + ek supremacy block + 5.25 bay res + XSPC 480mm double-thick rad + 8x 120mm gentle typhoons). In fact, I've been nothing other than extremely happy with the performance. Hell, I got that 17410 GPU score above on a 280mm AIO and a Kraken G12, and I challenge anyone here with a single card, no external volt mods (i.e. 1.093V or less), and ambient cooling (i.e. big water cooling without a chiller) to produce anything significantly higher--or even reach that.
> 
> Oh and since the VRM on the Strix is the second best from an output capacity and phase configuration standpoint (HOF > Strix > the rest of the custom cards) and third best from an output filtering standpoint (Kingpin >> HOF > Strix), it doesn't need any cooling in my experience. While drawing peaks of ~460W from the card during long bench sessions, my VRM temps stayed in the 70s, which is well within spec--and this is without even enabling the G12's included fan and while running 2160-2220 MHz at 1.093V depending on test--in a highly restrictive Phanteks Evolv ATX case with 27-28C room air and near silent fans on the two radiators.
> 
> TLDR: Do the shunt mod. Asus PCB is great, but not the best. That said, it has perhaps the third best VRM overall when considering output capacity (HOF = Strix = all other 16*70A SPS custom cards > 16*60A SPS Kingpin > Ref), phase configuration (10-phase config on HOF and Strix vs 8-phase without doublers on all other custom cards), efficiency (Kingpin > HOF > Strix > all other custom cards > ref), and output filtering (massive capacitor banks on Kingpin >> HOF > Strix > all other custom cards > ref). It also supports the OC Panel, which enables up to 1.2V on the core--though I haven't tested this yet, so I don't know if you still need to shunt mod when using the OC Panel. Ultimately, since the max power draw when restricted to 1.093V is under 500W, even the reference card's VRM is more than capable, provided cooling is adequate. In any case, don't let lack of a large variety of full cover blocks stop you if you want the Asus card and only care about performance... there are always 280/360mm AIOs and/or GPU core-only blocks if you're already on a custom loop. Oh, and if money and availability were no object, we would all (obviously) be running Kingpin or HOF cards.


Never said anything about being hesitant or it not being simple or safe if done right. Just said there's no real need to on the 380W BIOS with good cooling. The shunt mod will only get you so much further before you hit the same voltage limit everyone else hits.


----------



## willverduzco

zeall0rd said:


> Thank you, that's awesome. Eh, overnight - I guess 8-12 hours will suffice. You're right, it's got to dry fully to really make it count. And yes, the Strix cooler is just mounted with those 6 Phillips screws after all. Did you reapply thermal paste every time? I'm seriously considering going for some sort of Corsair digital PSU to actually get good numbers for power draw.


You should be able to get very consistent and reliable numbers to gauge your progress without having to pay for an external current monitor or buying a new PSU... You just have to isolate all the relevant variables by locking the voltage, clocks, and fan speed as I did in my tests, as well as by making sure that the point in the F-V curve that you lock to doesn't approach the power limit on the pre-shunt modded card. If you notice in my results, the standard deviations for max and average power draw across each layer/application were quite low. That said, those digital PSUs are quite cool, and more monitoring is always better than less. 

Regarding thermal paste, I didn't want to waste my Hydronaut since it's not exactly cheap, so I simply re-used the same thermal paste across all tests (over the course of a few days), and noticed absolutely no change in my temperatures. I just made sure to cover the GPU and cooler in order to prevent dust from messing with the thermal transfer. Obviously when I went back to water, I cleaned everything on the die and applied a new layer of Hydronaut.
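The locked-variable approach described above boils down to a quick ratio calculation; here's a minimal sketch of it in Python (the before/after wattages are hypothetical examples, not readings from this thread):

```python
# Gauging shunt-mod progress from perceived power alone, with voltage,
# clocks, and fans locked as described above. All wattage numbers here
# are illustrative, not measurements.

STOCK_SHUNT_MOHM = 5.0
LIMP_FLOOR_MOHM = 3.1

def estimate_from_readings(perceived_before_w, perceived_after_w,
                           stock_mohm=STOCK_SHUNT_MOHM):
    # With the real load held constant, perceived power scales with shunt
    # resistance, so R_eff = R_stock * (P_after / P_before), and the
    # effective power-limit multiplier is the inverse ratio.
    r_eff = stock_mohm * perceived_after_w / perceived_before_w
    multiplier = perceived_before_w / perceived_after_w
    return r_eff, multiplier

# Example: a locked run that read 300 W stock now reads 240 W after
# several passes of ink.
r_eff, mult = estimate_from_readings(300.0, 240.0)
print(f"estimated shunt: {r_eff:.2f} mOhm, power limit x{mult:.2f}")
if r_eff <= LIMP_FLOOR_MOHM:
    print("warning: too close to the limp-mode floor, stop adding ink")
```

Software readouts (HWiNFO, GPU-Z, etc.) are enough for this since only the ratio between runs matters, which is why a digital PSU is a nice-to-have rather than a requirement.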


----------



## willverduzco

NBrock said:


> Never said anything about being hesitant or it not being simple or safe if done right. Just said there's no real need to on the 380W BIOS with good cooling. The shunt mod will only get you so much further before you hit the same voltage limit everyone else hits.


My apologies for misinterpreting, then. And yeah, for real-world gaming, the 380W BIOS is probably more than enough unless you're using DSR to game at super high resolution and the game has complex shaders. That said, if you are trying to sustain anything higher than 1.05V in stress tests / benchmarks, you'll quickly run into the same power limits I ran into with the 380W BIOS. At 1.093V and 2160-2220 MHz, I found that I needed slightly over 460W for Superposition 8k Optimized and Timespy to never invoke a power limit perfcap. Superposition 4k Extreme was a bit lower at 424W max draw at the same clocks and voltage, and probably more indicative of a super heavy real-world use case. Though to play devil's advocate, in Shadow of the Tomb Raider and Hitman 2 (which are both pretty intensive games), I was closer to peaks of 415W. As a result, I'm sure that 1.068V is readily sustainable in even the most demanding games with the 380W BIOS. I guess at that point there's just the slight performance deficit at any given clock, but you'd be hard-pressed to notice a 0.5% difference in the real world anyhow.


----------



## zeall0rd

willverduzco said:


> You should be able to get very consistent and reliable numbers to gauge your progress without having to pay for an external current monitor or buying a new PSU... You just have to isolate all the relevant variables by locking the voltage, clocks, and fan speed as I did in my tests, as well as by making sure that the point in the F-V curve that you lock to doesn't approach the power limit on the pre-shunt modded card. If you notice in my results, the standard deviations for max and average power draw across each layer/application were quite low. That said, those digital PSUs are quite cool, and more monitoring is always better than less.
> 
> Regarding thermal paste, I didn't want to waste my Hydronaut since it's not exactly cheap, so I simply re-used the same thermal paste across all tests (over the course of a few days), and noticed absolutely no change in my temperatures. I just made sure to cover the GPU and cooler in order to prevent dust from messing with the thermal transfer. Obviously when I went back to water, I cleaned everything on the die and applied a new layer of Hydronaut.


True, those numbers are excellent too. I might go either way, to be honest, but I've wanted a new PSU anyway, so I might as well go for that sweet AX1600i. I'll still do the math every time to make sure it's all going well, but my end target is the same as yours: lock that 1.093V in, regardless of load or application. As far as thermal paste is concerned, I guess I'll go for Hydronaut or Kryonaut and just save it for the waterblock as well.
Thank you again


----------



## Deathscythes

willverduzco said:


> AND
> 
> 
> 
> I don't understand why so many are hesitant to perform the shunt mod. It's actually incredibly easy, safe, and removable (acetone) if you use a CircuitWriter conductive pen. And on the Strix PCB, it's even easier than for most other cards, as the 5 milliohm resistors that you'd be drawing on are right next to each of the 8-pin connectors, with no other SMCs anywhere near. Since there are no nearby components and since this is silver rather than gallium based (i.e. no interaction with solder), you can slather a bunch on, decreasing effective resistance with each pass. In any case, a shunt mod is highly recommended, as the 380W BIOS isn't quite as fast as other BIOSes when clocks are identical, and 380W is only good for about 1.05V in tough stress tests. You need about 460W to get a reliable 1.093V in the stress tests I've used (Superposition 1080p Extreme, Superposition 8k Optimized, Timespy) when under water and not running fans.
> 
> For my shunt mod, I decided to go with a 1.42x power limit boost by creating an effective resistance of 3.5 milliohms. The upper limit is around 1.625x, as that corresponds to 3.08 milliohm (1/3.08 = 1/5 + 1/8), which is the upper end of safe (before card goes to low-power limp mode). I've attached pics of my results and mod below. Tests were tightly controlled (fixed voltage and clocks such that PL and temps wouldn't affect clocks or load). I then tracked perceived power decrease as I added more and more passes of CircuitWriter. With just 3 passes, I was able to get an effective power limit of 1.26x. When I kept going with several more, I was able to get the 1.42x effective power multiplier described above. See spreadsheet for all the calculations, equations, parameters, etc.
> 
> In my experience, the shunt mod + FE or similar bios with tight memory timings (i.e. Asus 325W BIOS) yields far better results than the Galax 380W BIOS when clocks are identical. Using the 380W BIOS but no shunt mod, I was able to get a 17.1k GPU score in Timespy (top 45 GPU and top 60 overall at the time, but fell to top 80 on both after a while). Strix XOC BIOS without shunt mod works decently well on Strix PCBs (sustains 1.068V rather than 1.05V on ref cards, and 1.05V that I could sustain in stress tests on the Galax 380W), but lack of F-V curve is a killer. Halving the NVLink bandwidth is also a no-go for SLI users. Due to the lack of the FV curve, the Strix XOC BIOS only boosted my score to about 17.2k.
> 
> Reverting to stock Asus 325W BIOS and upping my effective power limit to 462W (325*1.42) via shunt mod, I went up to 17410 GPU score (and 16747 total + 13779 CPU with the 9900k at 5.4). This was good enough to place me in the top 50 overall and GPU. See screenie and link.
> 
> Superposition saw similar gains when running 1080p Extreme. I was able to get 10.53k on the Galax 380W BIOS, and slightly higher (~10.6k) when on stock BIOS but no longer power limited. That said, I didn't do more than a few passes on Superposition, and didn't push clocks much, so I think 10.7k is there if I really want to push (hence why I haven't updated my signature links to reflect the improved SuPo score yet).
> 
> Finally, unless you're going for a super aesthetic show build, going bonkers on the radiators, and/or concerned about AIO longevity/reliability/leakage, the AIO on the Kingpin isn't actually a bad thing. During the GN video, Vince mentioned really impressive temps when running at 2200 MHz and I believe 1.2V using EVBot (albeit on an open-air test bench). Likewise, there really isn't that much of an issue on the Asus having limited waterblock support (although to my knowledge they exist). You can just run a G12 + any 280mm or 360mm AIO and get similar performance to the very best blocks on custom loops with large radiators (~16-18C delta temps while in superposition or timespy). It's sort of cheating, as the AIO only has to cool the die, and water temps aren't increased by VRMs and VRAM. Check my signature for ~18C delta with 37C max on Supo when getting the 10.53k score described above... And that was with nearly silent ~950 RPM fans. I could probably knock a few degrees off that delta temp with full speed fans.
> 
> TBH, as a long time custom loop user (~20 years), I was reluctant to go AIO for this build, but did so since I was sick of maintenance and since my new case didn't have room for my old setup (swiftech-branded d5 pump + ek supremacy block + 5.25 bay res + XSPC 480mm double-thick rad + 8x 120mm gentle typhoons). In fact, I've been nothing other than extremely happy with the performance. Hell, I got that 17410 GPU score above on a 280mm AIO and a Kraken G12, and I challenge anyone here with a single card, no external volt mods (i.e. 1.093V or less), and ambient cooling (i.e. big water cooling without a chiller) to produce anything significantly higher--or even reach that.
> 
> Oh and since the VRM on the Strix is the second best from an output capacity and phase configuration standpoint (HOF > Strix > the rest of the custom cards) and third best from an output filtering standpoint (Kingpin >> HOF > Strix), it doesn't need any cooling in my experience. While drawing peaks of ~460W from the card during long bench sessions, my VRM temps stayed in the 70s, which is well within spec--and this is without even enabling the G12's included fan and while running 2160-2220 MHz at 1.093V depending on test--in a highly restrictive Phanteks Evolv ATX case with 27-28C room air and near silent fans on the two radiators.
> 
> TLDR: Do the shunt mod. Asus PCB is great, but not the best. That said, it has perhaps the third best VRM overall when considering output capacity (HOF = Strix = all other 16*70A SPS custom cards > 16*60A SPS Kingpin > Ref), phase configuration (10-phase config on HOF and Strix vs 8-phase without doublers on all other custom cards), efficiency (Kingpin > HOF > Strix > all other custom cards > ref), and output filtering (massive capacitor banks on Kingpin >> HOF > Strix > all other custom cards > ref). It also supports the OC Panel, which enables up to 1.2V on the core--though I haven't tested this yet, so I don't know if you still need to shunt mod when using the OC Panel. Ultimately, since the max power draw when restricted to 1.093V is under 500W, even the reference card's VRM is more than capable, provided cooling is adequate. In any case, don't let lack of a large variety of full cover blocks stop you if you want the Asus card and only care about performance... there are always 280/360mm AIOs and/or GPU core-only blocks if you're already on a custom loop. Oh, and if money and availability were no object, we would all (obviously) be running Kingpin or HOF cards.



Thank you very much for this large piece of information, man. You helped me a lot and I am grateful.
Waterblocks for the Strix cards do exist - Bitspower and EK make them - however, I have been given really bad feedback regarding the Bitspower one. I haven't tried my EK ones yet, as I am awaiting a pair of Matrix cards that should arrive before the end of April. Speaking of which, I would be happy to share its BIOS with you, which should feature a higher power limit and maybe better memory timings... who knows =)

I can't wait! And I will be happy to share my results applying this fantastic and - it would appear - stupidly easy mod!


----------



## zeall0rd

Deathscythes said:


> Thank you very much for this large piece of information, man. You helped me a lot and I am grateful.
> Waterblocks for the Strix cards do exist - Bitspower and EK make them - however, I have been given really bad feedback regarding the Bitspower one. I haven't tried my EK ones yet, as I am awaiting a pair of Matrix cards that should arrive before the end of April. Speaking of which, I would be happy to share its BIOS with you, which should feature a higher power limit and maybe better memory timings... who knows =)
> 
> I can't wait! And I will be happy to share my results applying this fantastic and - it would appear - stupidly easy mod!


Thank you, that'd be awesome. Oh, and I'd recommend the block watercool.de makes: the Heatkiller IV for the Strix card. German company, top-notch quality, but they are kind of lagging behind right now - drowning in orders, that's how sought after they are.


----------



## J7SC

I think this recent vid fits the current shunt mod discussion... This week, well-known German XOCer 'der8auer' (also goes by 'DerBauer') is in Moscow doing some special OCing with Strix 2080 Tis and a Xeon W-3175X.

...He has a different method of doing the shunt work: soldering an additional 6-pin connector onto the Strix card, so he is adding even more watts. However, you will need the item he refers to, the Elmor IC controller (should be available for sale soon), for control. der8auer's YouTube channel has the vid in German and English. And as always, keep in mind that hard-modding your card can/will void your warranty. The aforementioned 'pen/paste' method is probably the safest 'temporary' hard mod, though vendors do check, and acetone, for example, will affect plastic parts and discolor PCBs if you're not careful (been there, done that w/ DICE, where you also use acetone).

...Finally, in addition to the Galax 380W BIOS most folks here like, there is the Strix XOC 1000W BIOS out there (with links posted in this thread), but it requires very serious cooling and a cautious hand, and there's some controversy over how many extra watts it will yield w/o hard mods.

https://www.youtube.com/user/der8auer/videos


----------



## dpoverlord

Quick question ;-) I have the EVGA 2080 Ti XC Ultra and there does not seem to be a KBoost option. I set the NVIDIA control panel to max performance, but what's a safe overclock on this card on air? I have a killer air cooling setup and just didn't want to start clocking higher than +88/200 without knowing safe limits on air with the voltage.

1350 seems so low...

I saw a Test/Scan option, and when I test, it seems not much happens; +87 does not seem to make sense considering my 1080 Ti had a much higher OC.

Love any input! (got a little lost in all the messages)


----------



## Jpmboy

willverduzco said:


> AND
> 
> 
> 
> I don't understand why so many are hesitant to perform the shunt mod. It's actually incredibly easy, safe, and removable (acetone) if you use a CircuitWriter conductive pen. And on the Strix PCB, it's even easier than for most other cards, as the 5 milliohm resistors that you'd be drawing on are right next to each of the 8-pin connectors, with no other SMCs anywhere near. Since there are no nearby components and since this is silver rather than gallium based (i.e. no interaction with solder), you can slather a bunch on, decreasing effective resistance with each pass. In any case, a shunt mod is highly recommended, as the 380W BIOS isn't quite as fast as other BIOSes when clocks are identical, and 380W is only good for about 1.05V in tough stress tests. You need about 460W to get a reliable 1.093V in the stress tests I've used (Superposition 1080p Extreme, Superposition 8k Optimized, Timespy) when under water and not running fans.
> 
> For my shunt mod, I decided to go with a 1.42x power limit boost by creating an effective resistance of 3.5 milliohms. The upper limit is around 1.625x, as that corresponds to 3.08 milliohm (1/3.08 = 1/5 + 1/8), which is the upper end of safe (before card goes to low-power limp mode). I've attached pics of my results and mod below. Tests were tightly controlled (fixed voltage and clocks such that PL and temps wouldn't affect clocks or load). I then tracked perceived power decrease as I added more and more passes of CircuitWriter. With just 3 passes, I was able to get an effective power limit of 1.26x. When I kept going with several more, I was able to get the 1.42x effective power multiplier described above. See spreadsheet for all the calculations, equations, parameters, etc.
> 
> In my experience, the shunt mod + FE or similar bios with tight memory timings (i.e. Asus 325W BIOS) yields far better results than the Galax 380W BIOS when clocks are identical. Using the 380W BIOS but no shunt mod, I was able to get a 17.1k GPU score in Timespy (top 45 GPU and top 60 overall at the time, but fell to top 80 on both after a while). Strix XOC BIOS without shunt mod works decently well on Strix PCBs (sustains 1.068V rather than 1.05V on ref cards, and 1.05V that I could sustain in stress tests on the Galax 380W), but lack of F-V curve is a killer. Halving the NVLink bandwidth is also a no-go for SLI users. Due to the lack of the FV curve, the Strix XOC BIOS only boosted my score to about 17.2k.
> 
> Reverting to stock Asus 325W BIOS and upping my effective power limit to 462W (325*1.42) via shunt mod, I went up to 17410 GPU score (and 16747 total + 13779 CPU with the 9900k at 5.4). This was good enough to place me in the top 50 overall and GPU. See screenie and link.
> 
> Superposition saw similar gains when running 1080p Extreme. I was able to get 10.53k on the Galax 380W BIOS, and slightly higher (~10.6k) when on stock BIOS but no longer power limited. That said, I didn't do more than a few passes on Superposition, and didn't push clocks much, so I think 10.7k is there if I really want to push (hence why I haven't updated my signature links to reflect the improved SuPo score yet).
> 
> Finally, unless you're going for a super aesthetic show build, going bonkers on the radiators, and/or concerned about AIO longevity/reliability/leakage, the AIO on the Kingpin isn't actually a bad thing. During the GN video, Vince mentioned really impressive temps when running at 2200 MHz and I believe 1.2V using EVBot (albeit on an open-air test bench). Likewise, there really isn't that much of an issue on the Asus having limited waterblock support (although to my knowledge they exist). You can just run a G12 + any 280mm or 360mm AIO and get similar performance to the very best blocks on custom loops with large radiators (~16-18C delta temps while in superposition or timespy). It's sort of cheating, as the AIO only has to cool the die, and water temps aren't increased by VRMs and VRAM. Check my signature for ~18C delta with 37C max on Supo when getting the 10.53k score described above... And that was with nearly silent ~950 RPM fans. I could probably knock a few degrees off that delta temp with full speed fans.
> 
> TBH, as a long time custom loop user (~20 years), I was reluctant to go AIO for this build, but did so since I was sick of maintenance and since my new case didn't have room for my old setup (swiftech-branded d5 pump + ek supremacy block + 5.25 bay res + XSPC 480mm double-thick rad + 8x 120mm gentle typhoons). In fact, I've been nothing other than extremely happy with the performance. Hell, I got that 17410 GPU score above on a 280mm AIO and a Kraken G12, and I challenge anyone here with a single card, no external volt mods (i.e. 1.093V or less), and ambient cooling (i.e. big water cooling without a chiller) to produce anything significantly higher--or even reach that.
> 
> Oh and since the VRM on the Strix is the second best from an output capacity and phase configuration standpoint (HOF > Strix > the rest of the custom cards) and third best from an output filtering standpoint (Kingpin >> HOF > Strix), it doesn't need any cooling in my experience. While drawing peaks of ~460W from the card during long bench sessions, my VRM temps stayed in the 70s, which is well within spec--and this is without even enabling the G12's included fan and while running 2160-2220 MHz at 1.093V depending on test--in a highly restrictive Phanteks Evolv ATX case with 27-28C room air and near silent fans on the two radiators.
> 
> TLDR: Do the shunt mod. Asus PCB is great, but not the best. That said, it has perhaps the third best VRM overall when considering output capacity (HOF = Strix = all other 16*70A SPS custom cards > 16*60A SPS Kingpin > Ref), phase configuration (10-phase config on HOF and Strix vs 8-phase without doublers on all other custom cards), efficiency (Kingpin > HOF > Strix > all other custom cards > ref), and output filtering (massive capacitor banks on Kingpin >> HOF > Strix > all other custom cards > ref). It also supports the OC Panel, which enables up to 1.2V on the core--though I haven't tested this yet, so I don't know if you still need to shunt mod when using the OC Panel. Ultimately, since the max power draw when restricted to 1.093V is under 500W, even the reference card's VRM is more than capable, provided cooling is adequate. In any case, don't let lack of a large variety of full cover blocks stop you if you want the Asus card and only care about performance... there are always 280/360mm AIOs and/or GPU core-only blocks if you're already on a custom loop. Oh, and if money and availability were no object, we would all (obviously) be running Kingpin or HOF cards.


So... we know (for sure) that gains made in the power circuit are limited by the thermal circuit (which has control priority), and a few % difference between Timespy runs (even an average) is confounded by all sorts of background processes, since this is pretty CPU-bound once you get to Volta or Turing, that is, unless you are using a shaved OS. All that said, when using a "conductive pen" (and I have a half dozen), getting the change in resistance correct is a crap shoot - as you probably know, lowering the resistance too much locks the card in P8. You also need to remove the conformal coating from the 5 mΩ shunts for the pen's colloidal suspension to make proper electrical contact.
The only way to alter the power circuit properly is to stack an appropriate resistor - knowing what the final milliohms are across that part. Anything else is a happy accident, or P8-lock panic at worst.

I understand your excitement, but sometimes a little QC is called for "to protect the innocent".
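The "knowing what the final milliohms are" step is easy to sanity-check in advance. A minimal sketch (function name is mine; resistance values are the ones discussed in this thread), solving the parallel-resistor equation for the part you would need to stack:

```python
# Sketch: pick the resistor to stack on a stock shunt to hit a target
# net resistance. 1/R_net = 1/R_stock + 1/R_stack, so
# R_stack = 1 / (1/R_target - 1/R_stock). All values in milliohms.

def stack_resistor(r_stock: float, r_target: float) -> float:
    if not 0.0 < r_target < r_stock:
        raise ValueError("target must be positive and below the stock value")
    return 1.0 / (1.0 / r_target - 1.0 / r_stock)

# Hitting the ~3.08 mOhm "safe floor" from a 5 mOhm shunt needs ~8 mOhm:
print(round(stack_resistor(5.0, 3.08), 1))  # 8.0
# Hitting the 3.5 mOhm target mentioned earlier would need ~11.7 mOhm:
print(round(stack_resistor(5.0, 3.5), 1))   # 11.7
```

Running the equation this way, rather than eyeballing ink passes, is exactly the pre-mod QC being argued for here.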


----------



## willverduzco

J7SC said:


> I think this recent vid fits the current shunt mod discussion ...this week, well-known German XOCer 'Der8auer' (also goes by 'DerBauer') is in Moscow doing some special OCing with Strix 2080 TIs and a Xeon W3175X
> 
> ...he has a different method of actually doing shunt work, via soldering an additional 6-pin onto the Strix card, so he is adding even more additional watts. However, you will need the item he's referring to, the Elmor eVc controller (should be available soon for sale), for control. DerBauer's Youtube channel has the vid in German and English. And as always, keep in mind that hard-modding your card can/will wreck your warranty. The aforementioned 'pen/paste' method is probably the safest 'temporary' hard mod, though vendors do check, and acetone for example will affect plastic parts and discolor PCBs if not careful (been there, done that w/ DICE where you also use acetone).
> 
> ...finally, in addition to the Galax 380W BIOS most folks here like, there is the Strix XOC 1000W BIOS out there (with links to it posted in this thread), but it requires very serious cooling and a cautious hand, and there's some controversy over how many extra watts it will yield w/o hard mods.
> 
> https://www.youtube.com/user/der8auer/videos


Interesting... I caught a video of his previous attempts months ago, in which he soldered an additional 5 milliohm resistor in parallel, yielding a net resistance of 2.5 milliohms via the parallel resistor equation. It didn't work for him because 2.5 milliohms was too low and would send the card into low power mode. That's a shame, as halving the resistance would have halved the voltage drop (Ohm's law), tricking the card into thinking it was pulling half the power and thus yielding an effective 2x power limit. That said, users on this forum have found that soldering an 8 milliohm resistor in parallel (and thus achieving a net 3.08 milliohms) works fine and yields a calculated 1.625x power draw.
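The parallel-resistance and effective-power-limit arithmetic above can be sketched in a few lines (a hedged illustration; helper names are mine, values are from the thread):

```python
# The card reads power as the voltage drop across a known shunt (Ohm's law),
# so lowering the effective shunt resistance shrinks the reading and raises
# the effective power limit by R_stock / R_net. All values in milliohms.

def parallel(r1: float, r2: float) -> float:
    return r1 * r2 / (r1 + r2)

def power_multiplier(r_stock: float, r_net: float) -> float:
    return r_stock / r_net

r_stock = 5.0
print(power_multiplier(r_stock, parallel(5.0, 5.0)))   # 5||5 = 2.50 -> 2.0x (too low: limp mode)
print(power_multiplier(r_stock, parallel(5.0, 8.0)))   # 5||8 ~ 3.08 -> 1.625x (upper end of safe)
print(power_multiplier(r_stock, parallel(5.0, 10.0)))  # 5||10 ~ 3.33 -> 1.5x
```

On a 325W BIOS the 1.625x case works out to roughly 528W of effective headroom, and the 1.5x case to 487.5W.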

Going back to Der8auer's video, however, 1.625x would not have been enough if he chose to stay on the better performing (on a clock-for-clock basis) stock 325W BIOS instead of using the higher power limit but less efficient Galax BIOS, which is presumably slower at the same clocks due to looser memory timings. I suppose this is what mandated his use of an additional power connector that was placed AFTER the shunt resistors, and thus unimpacted by the card's power monitoring. I also found it a bit strange that he used Elmor's controller rather than the Asus OC Panel II, which many people have used to unlock voltages of up to 1.2V on last generation cards, and which Buildzoid talks about in his PCB breakdown video (as well as how he's surprised Nvidia still allows them to do it).

Also interesting is that you mention permanent discoloration using Acetone. I used a bit when undoing my first pass and re-doing it after a slip of the wrist caused conductive ink to go everywhere. I didn't see any immediately noticeable marks on the PCB or resistor, but it's certainly possible that it would be detectable under closer scrutiny. Thanks for adding that caveat.



Jpmboy said:


> So.. we know (for sure) that gains made in the power circuit are limited by the thermal circuit (which has control priority) and a few % difference between timespy runs (even an average) are confounded by all sorts of background processes since this is pretty CPU bound once you get to volta or turing, that is unless you are using a shaved OS. All that said, when using a "conductive pen" (and I have a half dozen) getting the change in resistance correct is a crap shoot - as you probably know, lowering the resistance too much locks the card in P8. You also need to remove the conformal coating from the 5MOs for the pen's colloidal suspension to make proper electrical contact.
> The only way to alter the power circuit properly is to stack an appropriate resistor - knowing what the final milliohms are across that part. Anything else is a happy accident, or P8-lock panic at worst.
> 
> I understand your excitement, but sometimes a little QC is called for "to protect the innocent".


100% agree. If one is able to solder without damaging the card, the absolute best way to go about a shunt mod is simply adding an 8 milliohm resistor in parallel to the stock 5 milliohm resistor, netting a total resistance of 3.08 milliohms and yielding a 1.625x effective power multiplier. That said, many of us (myself included) are not good enough at soldering to attempt this, even on the Strix PCB, which makes the mod easier than on other cards since the shunt resistors are placed well away from other SMCs. What makes this method particularly exciting is how safe (unless you short unintended leads) and removable (discounting the plastic discoloration J7SC mentioned) it is. Unlike gallium-based liquid metal, these silver ink pens do not react with (and thus erode) solder. Seems pretty ideal for those of us unable/unwilling to solder but also unsatisfied by the 380W limit on the Galax BIOS or its lower per-clock performance.

As for conformal coating, that is true as well. As I mentioned in my follow-up posts, I used abrasion to scrape off the conformal coating myself, though presumably acetone and rubbing with a Q-tip would have worked just as well.

Regarding the measurements and happy accidents, they were quite happy indeed. As for precision of measurements, I agree as well... which is why I not only took averages but also standard deviations. If you look at my results a bit more closely, you will see that the perceived power draw as reported by the GPU's power monitoring and logged by HWInfo given the same amount of completed work (and a locked voltage, frequency, and fan speed) was highly consistent from run to run. Standard deviations were in the fractions of a Watt, which when dealing with a ~240W average load across runs is less than 0.1%. As such, I was able to calculate my net resistance to be consistent at 3.53 milliohms. While this was great in that it yielded a 1.416x effective power limit, it was still short of the 1.625x effective limit I would have achieved if I did it the "right way" by soldering. It was still enough for me, as I needed 460W for 1.093V at 2200 MHz in Superposition 8k Optimized, and I achieved ever so slightly over that with my more primitive mod.
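The back-calculation described above can be sketched as follows (illustrative wattages, not the thread's raw logs; it assumes the reported power scales linearly with the shunt's voltage drop, which is what makes the ratio trick work):

```python
# Same workload at locked voltage/clocks/fans means the true power draw is
# unchanged; only the shunt's reading shrinks after the mod. So:
#   P_reported_after / P_reported_before = R_net / R_stock
# Resistances in milliohms, power in watts. The wattages are illustrative.

def effective_resistance(r_stock: float, p_before: float, p_after: float) -> float:
    return r_stock * p_after / p_before

r_net = effective_resistance(5.0, 240.0, 169.44)
print(round(r_net, 2))        # 3.53 mOhm
print(round(5.0 / r_net, 3))  # ~1.416x effective power limit
```

Averaging several runs (and checking the standard deviation, as described above) is what makes this estimate trustworthy despite the small per-run noise.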


----------



## Modus

Anyone own either the Zotac RTX 2080 Ti Triple Fan or the AMP! Edition? Any major difference between these two? Both are on sale right now, with the AMP! being $100-ish more. Would appreciate any opinions on which card to get.

Below are the links (if that's allowed):

https://www.canadacomputers.com/product_info.php?cPath=43_1200_557_559&item_id=124177
https://www.canadacomputers.com/product_info.php?cPath=43_1200_557_559&item_id=124176 (AMP!)


----------



## J7SC

Modus said:


> Anyone own either the Zotac RTX 2080 Ti Triple Fan or AMP! Edition? any major difference between these two? Both on on sale right now with a AMP! being $100ish more. Would appreciate any opinions on which card to get?
> 
> below are the links (if thats allowed)
> 
> https://www.canadacomputers.com/product_info.php?cPath=43_1200_557_559&item_id=124177
> https://www.canadacomputers.com/product_info.php?cPath=43_1200_557_559&item_id=124176 (AMP!)


 
For the extra C$100, I would go for the AMP! edition, subject of course to your overall budget and plans for apps, OC, etc.


----------



## songi

Question for anyone that has had one of those Galax 2080 Ti HOF cards: what were your temps like on air? Not planning to watercool it, so I'm curious how hot they run.


----------



## hotrod717

RTX 2080ti Kingpin Guide is now live.

https://xdevs.com/guide/2080ti_kpe/


----------



## zeall0rd

Jpmboy said:


> So.. we know (for sure) that gains made in the power circuit are limited by the thermal circuit (which has control priority) and a few % difference between timespy runs (even an average) are confounded by all sorts of background processes since this is pretty CPU bound once you get to volta or turing, that is unless you are using a shaved OS. All that said, when using a "conductive pen" (and I have a half dozen) getting the change in resistance correct is a crap shoot - as you probably know, lowering the resistance too much locks the card in P8. You also need to remove the conformal coating from the 5MOs for the pen's colloidal suspension to make proper electrical contact.
> The only way to alter the power circuit properly is to stack an appropriate resistor - knowing what the final milliohms are across that part. Anything else is a happy accident, or P8-lock panic at worst.
> 
> I understand your excitement, but sometimes a little QC is called for "to protect the innocent".


Of course using a resistor is way easier. I would even solder on something like a 10 milliohm resistor, resulting in a 1.5x multiplier and a total max power draw of 325W * 1.5 = 487.5W. That's more than enough anyway. I'd do it myself, but TBH I am not that good at soldering, so I'll go with the silver trace pen method.


----------



## zeall0rd

Just thought y'all would appreciate this:

https://www.techpowerup.com/vgabios/210080/210080

Just found it ^^ Anyone who's got a Strix and wants to flash?
@willverduzco?


----------



## dangerSK

BTW: If someone wants to use the KP 2080 Ti 520W BIOS, it's fine, you can flash it. The Classified controller doesn't work, though.


----------



## zeall0rd

Wait, the Kingpin 520W BIOS is flashable on Asus cards? Or any card, really? It's for a 3x 8-pin card though, right? So that'd be an issue.


----------



## dangerSK

zeall0rd said:


> Wait, the Kingpin 520W BIOS is flashable on Asus cards ? Or any card, really ? It's 3-pin though right ? So that'd be an issue.


I flashed it on a Lightning card without problems; can't guarantee how it will work with Strix and other brands.


----------



## zeall0rd

dangerSK said:


> I flashed it on Lightning card without problems, can't guarantee how it will work with Strix and other brands.


Could you post that BIOS here ?


----------



## dangerSK

zeall0rd said:


> Could you post that BIOS here ?


download it here > https://xdevs.com/guide/2080ti_kpe/


----------



## zeall0rd

dangerSK said:


> download it here > https://xdevs.com/guide/2080ti_kpe/


Thank you


----------



## bigjdubb

dangerSK said:


> BTW: If someone wants to use KP 2080Ti 520W bios it's fine, u can flash it. Classified controller doesn't work.


I wonder if it would be any good for the FTW3 Ultra, seems like a nice jump in available wattage.


----------



## willverduzco

zeall0rd said:


> Just thought y'all would appreciate this:
> 
> https://www.techpowerup.com/vgabios/210080/210080
> 
> Just found it ^^ Anyone who got a Strix and wants to flash ?
> 
> @willverduzco ?


Welp, this happens literally two weeks after I spent a few days applying multiple passes of the conductive ink, letting it cure for hours, testing and retesting, and then applying more. If I had this 360W BIOS back then, I would have only aimed for a 1.25x multiplier (requiring just 3 passes), as we only need about 460W to sustain 2220 MHz at 1.093V in the benchmarks that I use (Timespy, Superposition 8k Optimized, and Superposition 4k Extreme--actually 424W). Oh well, wasted time, but no big deal.

I hope the per-clock performance on this BIOS is as good as it is on the 325W Asus BIOS. I'll do quite a bit of testing to find out, since I'm going to put a different BIOS on each position of my card's BIOS switch.


----------



## J7SC

hotrod717 said:


> RTX 2080ti Kingpin Guide is now live.
> 
> https://xdevs.com/guide/2080ti_kpe/


 
Nice find! I read it end-to-end. That's what I really like about KPE GPUs: the tips and software come right from the horse's mouth, and the support is accessible and great.




zeall0rd said:


> Of course using a resistor is way easier. I would even solder on like a 10 milliohm resistor, resulting in a 1.5 multiplicator and total max power draw of 325W*1.5 = 487.5W. That's more than enough anyway. I'd do it myself but TBH I am not that good at soldering, so I'll go with the silver trace pen method.


 
I first saw the conductive trace pen method mentioned by CallsignVega back at the beginning of January over at the Titan RTX thread; the related info in that thread might also be a good read for you for extra tips.

On the more general topic of removing mods (including soldered ones), there are safe specialty liquids for removing flux after a shunt mod.


----------



## joyzao

Could anyone confirm whether I can install the BIOS from that guide posted above? https://xdevs.com/guide/2080ti_kpe/

On an RTX card that isn't an EVGA one.

Of all the BIOSes I've tested, I had the most success with the 406W MSI one, but I wanted to try this too.


----------



## J7SC

willverduzco said:


> Interesting... I caught a video of his previous attempts months ago, in which he soldered an additional 5 milliohm resistor in parallel, yielding a net resistance of 2.5 milliohms via the parallel resistor equation. It didn't work for him
> (edit)
> Going back to Der8auer's video, however, 1.625x would have not been enough if he chose to stay on the better performing (on a clock-for-clock basis) stock 325W BIOS instead of using the higher power limit but less efficient Galax BIOS, which is presumably slower at the same clocks due to looser memory timings. I suppose this is what mandated his use of an additional power connector that was placed AFTER the shunt resistors, and thus unimpacted by the card's power monitoring. I also found it a bit strange that he used Elmor's controller rather than using the Asus OC Panel II, which many people have used to unlock voltages of up to 1.2V on last generation cards, and which Buildzoid talks about in his PCB breakdown video (as well as how he's surprised Nvidia still allows them to do it).
> (edit)
> It was still enough for me, as I needed 460W for 1.093V at 2200 MHz in Superposition 8k Optimized, and I achieved ever so slightly over that with my more primitive mod.


 
I'm getting a bit confused as to which previous DerBauer vid you're referring to. He did one about 6 months ago to enter the TSEX 2x GPU fray that Steve/GN, Jay2c and a few others had, and promptly took 1st place at 3DMark HoF 2x GPUs. That sub is still in the table, and the GPU BIOS was not the Galax one but an Asus one. The same holds for the sub they did this week (re. the vid I had posted where Asus ROG sponsored that Moscow session). BTW, that was subbed under Elmor's handle at 3DM HoF... Speaking of Elmor, your comment about the Asus OC Panel II brought a smile to my face. Elmor was a lead tech at ROG Asus for many years and left only late last fall, presumably to concentrate on his HW lab biz. Elmor in all likelihood would have had a major hand in developing the OC Panel software. Also, the Elmor eVc DerBauer was/is using in the latest vid was first introduced back in '13 or so, and folks like Kingpin supported it in posts because it works on anything that is controlled by I2C (mobos, GPUs...), even going beyond the 'hard lock' vendors may have imposed. DerBauer also used it when he oc'ed 2x AMD Epyc server CPUs in a board that doesn't allow any typical oc functions.

Finally, on my setup (2x Aorus XT WB), I usually leave GPU voltages alone because w/ an oc'ed 16c Threadripper CPU, even a 1300w Platinum PSU can get near its limits. Back in January, I posted some 2205 MHz (peak) Superposition runs, with the (single) GPU maxing at 1.043v and '0' on the MSI AB voltage slider. Adding 50% via the slider, it came to 2235 MHz (peak) at 1.075v, noting that that speed could only do light / shorter bench tests. The point is the extra voltage may appear minor, but with two totally stock cards, it starts to eat into the PL... adding even more voltage (for 2x GPUs), I start to run into PSU issues, and even if I don't, clocks may be higher but scores drop. I may update this system in the summer (currently this build has to remain mostly stock for work tasks). I'm looking at a 1600w (or higher) PSU, a chiller for the GPUs, and that Strix XOC/1000w bios. But for now, I get my best scores w/ stock voltage and very extensive w-cooling. The only mystery my current setup (again > all stock) creates is why - with a 122% max PL slider setting in MSI AB - it reports max power at up to 140% in MSI AB monitoring during 2x TSEX.


----------



## hotrod717

So, about an hour in with the 2080 Ti Kingpin, and it's kinda promising: 2175/1875 with no additional volts. Going to keep going, but there seems to be a lack of posts on these with info. Completely mediocre setup with stock air cooling atm and an ambient of 43°C. Chip stays cool and hasn't passed 45°C w/ the OEM aggressive fan profile.


----------



## J7SC

hotrod717 said:


> So about an hour with 2080ti Kingpin and kinda promising with 2175/1875 and no additional volts. Going to keep going, but there seemed a lack of posts on these with info. Completely mediocre setup with stock air cooling atm and a ambient of 43*C. Chip stays cool and haven't past 45*C w/ oem aggressive fan profile.



Looks good! What was the max wattage drawn so far, i.e. in 3DM TS?


----------



## hotrod717

J7SC said:


> Looks good ! What was the max watt drawn so far, i.e. in 3DM TS ?


300-ish per the handy-dandy LCD. Not over 310. Voltage-regulated at this point. 2205 on core seems the sweet spot for this sample. I'll give it a go with chilled air and some juice tomorrow. PX1 is broken in some ways. Kind of to be expected at this point.


----------



## willverduzco

hotrod717 said:


> 300 ish per the handy dandy lcd. Not over 310. Vol regulated at this point. 2205 on core seems the sweet spot for this sample. I'll give a go with chilled air and some juice tomorrow. PX1 is broke in some ways. Kind of to be expected atp.


Impressive preliminary results, for sure. However, I'm wondering if it's not actually maintaining those clocks, given your GPU score of just 16.6k. With just 2160 MHz core (and 8300 MHz memory) maintained throughout the entire run, I obtained a 17.41k GPU score on my shunted Strix card. Memory speed matters, but I don't think it matters THAT much. Perhaps you may want to try creating a custom curve and locking it in place using Afterburner. With a 520W power limit and a great stock cooler, you'll run into neither power nor thermal limits... not by a mile. I would also be curious to see what the plots of frequency, voltage, and power look like over time. Another thing that confuses me a bit is your power draw. In my tests, I required ~460W to maintain 2220 MHz at 1.093V in Superposition 8k Optimized. I didn't measure systematically in Timespy, but I can't imagine it being that much lower...

With all that said, you're a lucky man to get that Kingpin card... One of the two cards with even better VRM output filtering than the Strix I'm running (the other being HOF)... and since your card comes with a 520W BIOS from the factory, there's no need for any sort of silly shunt mod nonsense for us mere mortals on water (rather than LN2) and who choose to stay at 1.093V and below.



J7SC said:


> I'm getting a bit confused as to which previous DerBauer vid you're referring to. He did one about 6 month ago to enter the TSEX 2x GPU fray Steve/GN, Jay2c and a few others had, and promptly took 1st place at 3DMark HoF 2x GPUs. That sub is still in the table and the GPU bios was not the Galax one but an Asus one. The same holds for the sub they did this week (re. the vid I had posted where Asus ROG sponsored that Moscow session). BTW, that was subbed under Elmor's handle at 3DM HoF...Speaking of Elmor, your comment about the Asus OC Panel II brought a smile to my face. Elmor was a lead tech at ROG Asus for many years and left only late last fall, presumably to concentrate on his HW lab biz. Elmor in all likelihood would have had a major hand in developing the OC Panel software. Also, Elmor's eVc DerBauer was/is using in the latest vid was first introduced back in '13 or so, and folks like Kingpin supported it in posts because it works on anything that is controlled by I2C (mobos, GPUs...), even going beyond the 'hard lock' vendors may have imposed. DerBauer also used it when he oc'ed 2x AMD Epyc server CPUs in a board that doesn't allow any typical oc functions.
> 
> Finally, on my setup (2x Aorus XT WB), I usually leave GPU voltages alone because w/an oc'ed 16c Threadripper CPU, even a 1300w Platinum PSU can get near its limits. Back in January, I posted some 2205MHz (peak) Superposition runs, with the (single) GPU maxing at 1.043v and '0' on the MSI AB voltage slider. Adding 50% via the slider, it came to 2235Mhz (peak) at 1.075v, noting that that speed could only do light / shorter bench tests. The point is the extra voltage may appear minor, but with two totally stock cards, it starts to eat into the PL..adding even more voltage (for 2x GPUs), I start to run into PSU issues, and even if I don't, clocks may be higher but scores drop. I may update this system in the summer (currently this build has to remain mostly stock for work tasks). I'm looking at a 1600w (or higher) PSU, a chiller for the GPUs and that Strix XOC/1000w bios. But for now, I get my best scores w/ stock voltage and very extensive w-cooling. The only mystery my current setup (again > all stock) creates is why - with a 122% max PL slider setting in MSI AB - it reports max power at up to 140% in MSI AB monitoring during 2x TSEX :headscrat


I believe the one that I was looking at was this video from many months ago: 




In the video, Roman talks about his past shunt mod attempts--first not making an aggressive enough modification and still running into the power limit, and later trying another 5 milliohm resistor in parallel (and running into issues because 2.5 milliohms is too low). Strangely, I never realized that even back then he was using the Elmor eVc... On that note, what an amazing device, given its versatility... So much potential, as you alluded to, and as Der8auer exploited with that 2x Epyc setup. I guess that's probably my answer as to why he used it rather than the OC Panel: that versatility, plus the fact that if the OC Panel works on current-gen cards the same way I've seen it work on video for last-gen cards, it would "only" allow up to 1.2V, not the 1.25V he used in the more recent video you posted. But as you mentioned, it's very interesting to see how Elmor was more than likely behind both tools.

As for your confusion regarding 122% max PL slider setting and peaks of 140%, that's not really impossible... Prior to my shunt mod, on both the Galax 380W and the Asus 325W BIOSes, I frequently hit ~8% higher than the specified power limit for extremely short-lived, transient peaks. This is not quite as dramatic as your ~18% difference, but it's not too far of a stretch given that they're only transients. But oh boy, I can't imagine how much power you'll require with 2 heavily overclocked 2080tis and a Threadripper, once you get a more powerful PSU.


----------



## dangerSK

joyzao said:


> Could anyone confirm, if I can install the bios from that guide posted above? https://xdevs.com/guide/2080ti_kpe/
> 
> In an RTX without evga card.
> 
> Of all the bios I tested I kept more success in the bios of 406 w of the msi, but wanted to test this too.





bigjdubb said:


> I wonder if it would be any good for the FTW3 Ultra, seems like a nice jump in available wattage.


You can use the BIOS; I have it on the Lightning without problems. The power limits work, and voltage is default, so there shouldn't be any problem.


----------



## joyzao

dangerSK said:


> joyzao said:
> 
> 
> 
> Could anyone confirm, if I can install the bios from that guide posted above? https://xdevs.com/guide/2080ti_kpe/
> 
> In an RTX without evga card.
> 
> Of all the bios I tested I kept more success in the bios of 406 w of the msi, but wanted to test this too.
> 
> 
> 
> 
> 
> 
> bigjdubb said:
> 
> 
> 
> I wonder if it would be any good for the FTW3 Ultra, seems like a nice jump in available wattage.
> 
> 
> You can use the bios, I have it on Lightning without problems, power limits works, voltage is default so there shouldn't be any problem.

I tried, but for some reason my GPU clock was stuck at 300 MHz. Any idea why?


----------



## hotrod717

willverduzco said:


> hotrod717 said:
> 
> 
> 
> 300 ish per the handy dandy lcd. Not over 310. Vol regulated at this point. 2205 on core seems the sweet spot for this sample. I'll give a go with chilled air and some juice tomorrow. PX1 is broke in some ways. Kind of to be expected atp.
> 
> 
> 
> Impressive preliminary results, for sure. However, I'm wondering if it's not actually maintaining those clocks, due to your GPU score of just 16.6k. With just 2160 MHz core (and 8300 MHz memory) maintained throughout the entire run, I obtained 17.41k GPU score on my shunted Strix card. Memory speed matters, but I don't think it matters THAT much. Perhaps you may want to try creating a custom curve and locking it in place using afterburner. With a 520W power limit and a great stock cooler, you'll neither run into power nor thermal limits... not by a mile. I would also be curious to see what the plots of freq, voltage, and power look like over time. Another thing that confuses me a bit as well is your power draw. In my tests, I required ~460W to maintain 2220 MHz at 1.093V in Superposition 8k Optimized. I didn't measure systematically in Timespy, but I can't imagine it being that much lower...
> 
> With all that said, you're a lucky man to get that Kingpin card... One of the two cards with even better VRM output filtering than the Strix I'm running (the other being HOF)... and since your card comes with a 520W BIOS from the factory, there's no need for any sort of silly shunt mod nonsense for us mere mortals on water (rather than LN2) and who choose to stay at 1.093V and below.

As I said, stock, so I'm not locking voltage or clocks on those runs.
Also, this 7820X is definitely holding the overall score back. Very mediocre chip.
And the power draw was via the card's LCD, so that is up in the air for now.


----------



## willverduzco

hotrod717 said:


> As I said, stock, so I'm not locking voltage or clocks on those runs,
> Also this 7820x is definitely holding overall back. Very mediocre chip.
> And draw was via card lcd, so that is up in air for now.


Hmm, fair enough. Amazing it boosted up to that speed at stock then, even if only for a short duration.

Regarding your GPU score, your 7820X shouldn't even factor into the equation. That's the beauty of Timespy--it's fairly well isolated in its graphics and CPU tests. Your overall score is surely hurt by that chip, as 10751 is certainly not that high for an HEDT chip. The combination of both sub-scores is what determines your total score. So yes, your 7820X hurts your overall (and CPU score), but your GPU score should be (relatively) comparable to anyone else's GPU score.
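For what it's worth, the way the subscores combine can be sketched numerically; assuming Time Spy's weighted harmonic mean with a 0.85 graphics / 0.15 CPU split (per UL's technical guide), the totals quoted in this thread line up:

```python
# Time Spy total = weighted harmonic mean of the two subscores, which is
# why a weak CPU drags the total down but leaves the GPU subscore untouched.

def timespy_total(gpu_score: float, cpu_score: float,
                  w_gpu: float = 0.85, w_cpu: float = 0.15) -> float:
    return 1.0 / (w_gpu / gpu_score + w_cpu / cpu_score)

# The 17410 GPU / 13779 CPU run quoted earlier in the thread:
print(timespy_total(17410, 13779))  # close to the 16747 total reported
```

The computed total matches the reported 16747 to within subscore rounding, which supports treating the GPU subscore as CPU-independent.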


----------



## hotrod717

willverduzco said:


> Hmm, fair enough. Amazing it boosted up to that speed while stock clocks then, even if only for a short duration.
> 
> Regarding your GPU score, your 7820X shouldn't even factor into the equation. That's the beauty of Timespy--it's fairly well isolated in its graphics and CPU tests. Your overall score is surely hurt by that chip, as 10751 is certainly not that high for an HEDT chip. The combination of both sub-scores is what determines your total score. So yes, your 7820X hurts your overall (and CPU score), but your GPU score should be (relatively) comparable to anyone else's GPU score.


Lol. So if you read my first post, I refer to stock volts and list the clocks set. You are trying to pick apart what I am saying and taking it out of the context in which I am saying it. I never spoke about individual scores. Tralalalalalalalala.


----------



## dangerSK

joyzao said:


> I tried, but for some reason my clock(gpu) was set at 300 mhz, any reason?


Weird, I don't know. On my Lightning everything works fine; it even did 2.2GHz without much hassle.


----------



## mirkendargen

dangerSK said:


> Weird, I don't know. On my lightning everything works fine, even did 2.2ghz without much hassle.


I tried it on an MSI Seahawk EK X (same board as the Gaming Trio, 2x8-pin 1x6-pin) and had the same behavior. Everything is fine at the desktop, but as soon as you start a game the power limit flag goes to 1 and it runs at 300-500MHz. Oh well! My guess is it needs 3x8-pin to work right. At least the 406W Gaming Trio BIOS works great for me, so it isn't so bad.


----------



## dangerSK

mirkendargen said:


> I tried it on an MSI Seahawk EK X (same board as the Gaming Trio, 2x8pin 1x6pin) and had the same behavior. Everything is fine at the desktop, but as soon as you start a game the power limit flag goes to 1 and it runs at 300-500mhz. Oh well! My guess is it needs 3x8pin to work right. At least the 406W Gaming Trio bios works great for me, so it isn't so bad.


Yes could be, I have 3x8pin so yeah...


----------



## willverduzco

hotrod717 said:


> Lol. So if you read my first post, I refer to stock volts and list the clocks set. You are trying to pick apart what i am saying and taking it out of the context in which i am saying it. I never spoke about individual scores. Tralalalalalalalala.


Not trying to "pick apart" anything that you said, by any stretch of the imagination--so I'm not sure why you're getting defensive. I was just confused as to why the GPU score was so low in light of the "2175 MHz" clock stated in this post. If you made another post stating clocks or settings that I missed, I am sorry. Also, I apologize if it came off as me trying to pick it apart; however, the GPU score (which isn't affected to an appreciable degree by a slow CPU) as well as the low power draw lead one to believe it's not actually staying at 2175 during the entire run.

That said, I'm happy for you that not only do you have the best overall 2080 Ti available, but that at stock speeds it even grazed that clock speed. I'm just trying to get a better picture of exactly what is happening, as at 2175 you should be well over 17k GPU score... possibly around 17.5k.


----------



## J7SC

hotrod717 said:


> 300 ish per the handy dandy lcd. Not over 310. Vol regulated at this point. 2205 on core seems the sweet spot for this sample. I'll give a go with chilled air and some juice tomorrow. PX1 is broke in some ways. Kind of to be expected atp.





hotrod717 said:


> As I said, stock, so I'm not locking voltage or clocks on those runs,
> Also this 7820x is definitely holding overall back. Very mediocre chip.
> And draw was via card lcd, so that is up in air for now.


 
Since there are KPE BIOS and EVBot tools (and an up-to-520W LN2 setting) you can employ step by step, these results are very promising already. With increased cooling (at least sub-ambient, though not yet sub-zero), I can see this card do _at least_ the high 2200s... time will tell. BTW, 3DM's Port Royal / DLSS test is GPU-only, so worth a try re: the 7820X issue.


----------



## hotrod717

J7SC said:


> Since there are KPE BIOS and EVBot tools (and an up-to-520W LN2 setting) you can employ step by step, these results are very promising already. With increased cooling (at least sub-ambient, though not yet sub-zero), I can see this card do _at least_ the high 2200s... time will tell. BTW, 3DM's Port Royal / DLSS test is GPU-only, so worth a try re: the 7820X issue.


I just found that the firmware was posted for EVBot (thanks to jpmboy) and hope to give that a go this evening. I already updated and confirmed it working, but that's it so far. It appears that an adapter is needed for the pot anyway, so I'm not too broken up about forgetting to take my dewar with me for a refill yesterday. It's been a minute since I benched cold, but I definitely feel the bug coming on again. Look out bank account! Lol.


----------



## zeejc

willverduzco said:


> Welp, this happens literally two weeks after I spend a few days applying multiple passes of the conductive ink cure, letting it cure for hours, testing and retesting, and then applying more. If I had this 360W BIOS back then, I would have only aimed for a 1.25x multiplier (requiring just 3 passes), as we only need about 460W to sustain 2220MHz@1.093V in the benchmarks that I use (Timespy, Superposition 8k Optimized, and Superposition 4k Extreme--actually 424W). Oh, well, wasted time, but no big deal.
> 
> I hope the per-clock performance on this BIOS is as good as it is on the 325W Asus BIOS. I'll do quite a bit of testing to find out, since I'm going to put each bios on each position of my card's BIOS switch.


Any chance you have given this a try? I'm considering picking up a strix. I'm currently researching what the best bios/mod config to run is.


----------



## willverduzco

zeejc said:


> Any chance you have given this a try? I'm considering picking up a strix. I'm currently researching what the best bios/mod config to run is.


Ah yes, apologies on not updating everyone here after playing around with the Matrix BIOSes. I've been sharing my findings with the folks in the discord linked in the OP quite a bit, but it slipped my mind to detail my findings here as well.

After fairly extensive testing over the last two days, the 360W Matrix BIOSes work precisely as one would expect. They're just as fast on a clock-for-clock basis as the stock 325W BIOS (presumably due to shared memory timings with the stock BIOSes). The only appreciable differences that I noticed were the following:

1. Similar to the XOC BIOS, my original Strix P and Q BIOSes (both positions on the BIOS switch) locked my memory clocks at the 3D speeds at all times. When moving to the Matrix BIOS, this is no longer the case. In other words, at idle, the memory down-clocks just like other BIOSes (and even other revisions of the Strix OC BIOS). I suppose this is good from a longevity standpoint.
2. On the topic of memory speeds, the default memory speed on this BIOS is 7400 MHz when set to 0 offset rather than 7000 on the stock Strix BIOSes. As such, you have to adjust your clocks accordingly when setting the overclock. In my case, my 24/7 clocks of 8300 MHz went from being labeled as "+1300 MHz" offset to "+900" in Afterburner.
3. In addition to the stock memory clocks being 400 MHz higher, the stock F-V curve is more aggressive on the Matrix BIOSes. Due to this, you can't simply copy your old OC curve from your default Strix BIOS. The overall achievable clocks on both memory and core are the same in my experience, but don't go by offsets at a particular voltage when reconstructing your F-V curve after swapping BIOSes.
4. Unlike the XOC BIOS, the Matrix BIOS has a different device ID compared to the default Strix BIOS. As such, you have to reinstall GPU drivers after switching between the two. This makes it a bit inconvenient if you had planned on leaving one of the two Matrix BIOSes on one position and one of the two stock BIOSes (or the XOC BIOS) on the other. IMO the best thing to do at this point is just to load each position with its respective Matrix BIOS. (Matrix 360W Default Mode and Matrix 360W Quiet Mode)
5. The obvious difference of power limit (300 base * 1.20 limit = 360W total, as opposed to 260 base * 1.25 limit = 325W total).
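For anyone who wants to sanity-check the arithmetic in points 2 and 5, here's a quick sketch using only the figures from this post (the offsets are Afterburner offsets relative to each BIOS's default memory clock):

```python
# Point 5: total power limit = base power x maximum power-slider multiplier.
matrix_limit = 300 * 1.20  # Matrix BIOS: 300W base, 120% max slider -> 360W
strix_limit = 260 * 1.25   # stock Strix BIOS: 260W base, 125% max -> 325W
print(round(matrix_limit), round(strix_limit))

# Point 2: Afterburner offsets are relative to the BIOS default memory
# clock, so the same 8300 MHz target needs a different offset per BIOS.
target_mem = 8300
print(target_mem - 7000)  # stock Strix default (7000 MHz) -> +1300
print(target_mem - 7400)  # Matrix default (7400 MHz) -> +900
```

Same 8300 MHz end result either way; only the labeled offset changes.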

One thing to note is that despite the 360W limit described above (and my 1.42x effective power multiplier from performing a shunt mod), I have seen brief transient peaks in Timespy Extreme Game Test 2 reach 132% (at 2160 MHz and 1.093V). Taking into account the aforementioned 1.42x multiplier from shunt modding, this means that I briefly touched 562W. Once again, that was only a transient peak, and the vast majority of the test stayed under 110% (i.e. the majority of the test stayed under 470W at 2160/1.093V).

Aside from TimeSpy Extreme Game Test 2 (and the OC Scanner's "Test" function), I have seen no other application or game that has gone above the ~462W I specified in a previous post when at ~2200 MHz and 1.093V. (As stated before, running Superposition 8k Optimized at 2220 MHz and 1.093V generates peak loads of approximately 462W in my testing. Changing the preset to 4k Extreme at the same clocks and voltage takes the power requirements to 424W.) Due to the above, you only need a roughly 1.18x effective power limit to run 4k Extreme without throttling with these new BIOSes, or a 1.28x effective power limit for 8k Optimized. As such, with the new BIOSes, 3 passes of conductive ink would have been more than enough for my needs. FWIW, though, unless using DSR to run an unreasonably high resolution, games typically require less power to max out your GPU utilization. You would probably get away with the 360W Matrix BIOS or the 380W Galax BIOS if just gaming. Obviously YMMV in that regard, and I shunt modded to never run into the power limiter again--or at least so I thought until I ran TSE GT2.
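A minimal sketch of the shunt-mod math above, for anyone following along. Two assumptions drawn from my numbers: the card's percentage readout is relative to this BIOS's 300W base power, and the shunt mod makes the card under-report true draw by the mod's factor, so actual watts = base x percent x factor:

```python
# Shunt-mod arithmetic sketch (assumed model: readout % is relative to
# the 300W base power, and the 1.42x shunt factor scales it to true draw).
BASE_W = 300
SHUNT_FACTOR = 1.42

def actual_watts(reported_pct):
    """True power draw implied by the card's shunt-skewed readout."""
    return BASE_W * (reported_pct / 100.0) * SHUNT_FACTOR

print(round(actual_watts(132)))  # the 132% transient peak -> ~562W
print(round(actual_watts(110)))  # the sustained <110% load -> just under 470W

# Effective power multiplier needed to stay off the limiter on a 360W BIOS:
print(round(424 / 360, 2))  # Superposition 4k Extreme -> ~1.18x
print(round(462 / 360, 2))  # Superposition 8k Optimized -> ~1.28x
```

Which is where the "3 passes would have been enough" conclusion comes from: a modest multiplier covers even the heavier benchmark presets on the 360W BIOS.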

Anyway, tl;dr: the Matrix 360W is a great BIOS. It's the BIOS I'm sticking with, as I found the Galax 380W BIOS's per-clock performance to be a bit lacking. On the Matrix BIOS, everything works as expected on a Strix PCB card; you just have to take into account the +400 MHz memory offset and the increased core clocks on the default F-V curve, all of which was described above.


----------



## mercutiouk

Sorry to go over (perhaps) old ground. I've had a good search through for all references to my card.

I have the MSI Seahawk EK X 2080ti (not the hybrid).
It seems the best bios for me currently is the MSI Trio Gaming X bios (https://www.techpowerup.com/vgabios/205495/msi-rtx2080ti-11264-180930). 

There's talk about the Galax 380W reference BIOS but... it seems that my 1x6+2x8 power setup restricts my options slightly (the Seahawk X and the EK X are different cards).

Is all this still correct? 
Is there a better bios for the EK X Seahawk?

I saw mention of a little extra temperature due to the power limits actually dropping speeds slightly. I'm on... pretty damn good water, so would this still apply? (I guess the 26W makes a relative 1-2°C difference in almost any ambient/delta situation?)

EDIT: The "info" I saw was... actually just above. Was a reference to "the lightning bios doesn't work, probably because the lightning is 3x8pin the gaming X is still ok for me though" so... I'm guessing that's our best option currently. Sorry to labour over the double checking.


----------



## mirkendargen

mercutiouk said:


> Sorry to go over (perhaps) old ground. I've had a good search through for all references to my card.
> 
> I have the MSI Seahawk EK X 2080ti (not the hybrid).
> It seems the best bios for me currently is the MSI Trio Gaming X bios (https://www.techpowerup.com/vgabios/205495/msi-rtx2080ti-11264-180930).
> 
> There's talk about the galax 380W reference bios but... it seems that my 1x6+2x8 power restricts my options slightly (the Seahawk X and the EK X are different cards).
> 
> Is all this still correct?
> Is there a better bios for the EK X Seahawk?
> 
> I saw mention about a little extra temperature due to the power limits actually dropping speeds slightly. I'm on... pretty damn good water so would this still apply? (I guess the 26w makes a relative 1-2c difference in almost any ambient/delta situation?).
> 
> EDIT: The "info" I saw was... actually just above. Was a reference to "the lightning bios doesn't work, probably because the lightning is 3x8pin the gaming X is still ok for me though" so... I'm guessing that's our best option currently. Sorry to labour over the double checking.


It was actually the 520W Kingpin BIOS I was trying that didn't work. The Lightning power limit (on the BIOSes I've seen) is lower than the 406W Gaming Trio one. I haven't done any extensive clock-for-clock testing like some here have to guesstimate where the best memory timings are, but I also don't care a whole lot. My card doesn't seem to be a stupendous overclocker anyway; it tops out at 2.1GHz core / 8GHz memory (Micron) before the Metro Exodus main menu crashes instantly or quickly (sitting at the Metro Exodus main menu for 15 minutes is the best OC stress test I've found on this card... it'll be stable 50-100MHz higher in other games).


----------



## J7SC

mercutiouk said:


> Sorry to go over (perhaps) old ground. I've had a good search through for all references to my card.
> 
> I have the MSI Seahawk EK X 2080ti (not the hybrid).
> It seems the best bios for me currently is the MSI Trio Gaming X bios (https://www.techpowerup.com/vgabios/205495/msi-rtx2080ti-11264-180930).
> 
> (edit)
> it seems that my 1x6+2x8 power restricts my options slightly (the Seahawk X and the EK X are different cards).
> 
> Is all this still correct?
> Is there a better bios for the EK X Seahawk?
> 
> (edit)
> I'm on... pretty damn good water so would this still apply?


 
Generally speaking, you're right on the money. The best bet when wanting an alternate BIOS is to choose one written for the power delivery setup you have (i.e. 2x 8-pin plus 1x 6-pin in your case, so the Gaming X Trio BIOS). There are folks that have gotten BIOSes for different power delivery setups to "work" (i.e. the Strix-based XOC BIOS for 2x 8-pin has such a high limit that it can help on cards with additional EPS), but most of the time they reported that it was not worth it, if it worked at all, and reverted to something matching their card's EPS setup. Because Nvidia tends to tighten the room they give vendors (with some exceptions), a lot of the 'foreign' BIOSes do work on other vendors' cards, as the basics must be the same, but EPS delivery setup differences do matter.

Also, with 'pretty damn good water' per your post, I think you would not have to worry about the extra watts of the Gaming X Trio... keeping the 2080 Ti RTX cards as cool as possible often yields more than an exotic BIOS on its own.


----------



## Thoth420

My replacement card came with Samsung memory, so score there.
What PSU are you guys using for these cards? I plan on pairing mine with a 9900K with a 5.0-5.1 GHz all-core OC goal.


----------



## kiwivda

What about the TU102-300-K1-A1 chip cards? Any news on a BIOS with a higher PL? I have a Strix non-OC (still trying to figure out why they built it anyway...) and am planning to shunt mod it, but a BIOS would be great; maybe also swapping the device ID with an A chip to allow cross flashing.
Does no one know anything?


----------



## dangerSK

Thoth420 said:


> My replacement card came with Samsung so score there.
> What type of power are you guys you using for these cards? I plan on pairing mine with a 9900k with a 5.0 to 5.1 all core OC goal.


Depends on your workflow. I am using a 9980XE, but for games the 9900K will be best.


----------



## VPII

Finally broke the 17K GPU score barrier. This XOC BIOS is really great except for having no voltage curve, so I'm stuck at 2145 where the GPU can do 2175MHz core. Yes, I know my Ryzen 7 2700X is not helping, but I'll stick with it till Zen 2 comes knocking...

https://www.3dmark.com/spy/6908859


----------



## Shadowdane

Has anyone here with an MSI RTX 2080 Ti Duke OC card found a BIOS that still retains fan control for all 3 fans? It's funny: the Sea Hawk BIOS works and retains fan control, but 100% speed only ramps the fans up to about ~1900-2000 RPM, which isn't fast enough to keep the card running cool.

This is the card I have: https://www.techpowerup.com/vgabios/205080/msi-rtx2080ti-11264-181019


I've tried a number of different BIOS builds from various cards that use the reference PCB; the issue is that pretty much every BIOS I've tried only controls one fan on the card, so the card quickly overheats with only one fan spinning up to full speed. Really sucks, as the stock BIOS is limited to 290W, which it hits with no overclock at all. I also tried a few BIOSes from cards that use a custom PCB; the card loads into Windows but usually shows weird problems, which I guess is to be expected due to the different PCB design.


----------



## Thoth420

dangerSK said:


> Depends on your workflow, I am using 9980XE but for games 9900K will be best.


Yeah, I just game and stream, that's all.
The reason I ask about power is that I am having crashing issues with Hitman 2 and Deus Ex: Mankind Divided (which crashes on the splash screen...).
I am using individual cables for each input on the card, all from CableMod, but the original cables do the same thing.
PSU is an EVGA G3 850W.

8700K atm in a Maximus X Code.

I have a Seasonic 1000W Titanium I could try that was intended for another build.

The crash occurs with all hardware at stock as well.


----------



## J7SC

Thoth420 said:


> Yeah I just game and stream that's all.
> The reason I ask about power is because I am having crashing issues with Hitman 2 and Deus Ex Mankind Divided(which crashes on splash screen...)
> I am using individual cables for each input on the card all from Cablemod but the original cables do the same thing.
> PSU is EVGA G3 850 Watt
> 
> 8700k atm in a Maximus X Code
> 
> I have a Seasonic 1000W Titanium I could try that was intended for another build.
> 
> Crash occurs with all hardware at stock as well.


 
Not sure if you're running 2x GPUs or just 1x... with a single 2080 Ti and an OC'ed 9900K, that Seasonic 1000W Titanium should be plenty; with 2x 2080 Ti it would start to get tight... nice PSU, btw! 850W, on the other hand, ***might*** become an issue with a heavily OC'ed 9900K and a big-BIOS single 2080 Ti.


----------



## Thoth420

J7SC said:


> Not sure if you're running 2x GPU or just 1x...with a single 2080 Ti and oc'ed 9900K, that Seasonic 1000w Titanium should be plenty...on 2x 2080 Ti, it would start to get tight...nice PSU, btw! 850w on the other hand ***might*** become an issue with heavily oc'ed 9900k and big-bios single 2080 Ti.


Oh sorry. Just one GPU. I do like to have some headroom, though. I got the unit for the 9900K but might just toss it in the current rig to see if it solves my problem. I have tried everything to solve this other than swapping the PSU or refreshing my OS completely. The card is the MSI Gaming X Trio, and the card and CPU are going under water along with the VRMs on the board (Formula XI).


----------



## hotrod717

Latest run, still on AIO and stock volts. I am seeing about 350 watts maximum draw on the card during this run. On AB instead of PX1 this time.


----------



## J7SC

hotrod717 said:


> Latest, still on aio and stock volts. I am seeing about 350 watts maximum draw on card during this run. On AB instead PX1





Ugh, nice GPU score! And once cooling and v-mods kick in, you still have an additional 170W to play with before playing with 'certain switches' on the KPE PCB.


----------



## hotrod717

J7SC said:


> Ugh, nice GPU score ! And once cooling and v-mods kick in, you still have an additional 170w to play with, before playing with 'certain switches' on the KPE PCB


PX1 definitely feels broken. Feels like a different card.


----------



## 113802

Let's see some LuxMark overclocked results. 

http://luxmark.info/top_results/LuxBall HDR/OpenCL/GPU/1


----------



## hotrod717

WannaBeOCer said:


> Let's see some LuxMark overclocked results.
> 
> http://luxmark.info/top_results/LuxBall HDR/OpenCL/GPU/1


If it doesn't receive boints, I don't bench it.


----------



## J7SC

hotrod717 said:


> Latest, still on aio and stock volts. I am seeing about 350 watts maximum draw on card during this run. On AB instead PX1


 
...maybe post the PX1 issue you are running into at their KP cooling or related hub? Vince/KP & Illya/Tin are very responsive and probably want the early feedback.


----------



## hotrod717

J7SC said:


> ...may be post the PX1 issue you are running into at their KP cooling or related hub ? ...Vince/KP & Illya/Tin are very responsive and probably want the early feedback


Actually, it's the EVGA OC Lab now. PX has always had some issues.


----------



## willverduzco

hotrod717 said:


> PX1 definitely feels broke. Feels like a different card.


Now we're talking... That 17.46k GPU score you just got is exactly what one would expect from a sustained 2200 MHz. Very, very nice.

Not only is that just a great GPU score, period... But the fact you did that at 0% Power Slider (i.e. max 1.068V, and card reporting 1.05V in the render test) is even more impressive. Amazing card, and I can only imagine what it'd do with more volts and sub-ambient cooling.


----------



## Dreamdim

*Zotac AMP 2080 Ti bios advice*

Hi,

I just installed my new Zotac AMP 2080 Ti. Any advice on a BIOS update/flash (maybe one of the Galax ones?) that could allow me to perform a small, stable OC on the card?

Thanks in advance for your help !


----------



## jura11

Dreamdim said:


> Hi,
> 
> I just installed my new Zotac AMP 2080 Ti. Any advice of a new bios update / flash (Maybe Galax ones ?) that could allow me to perform a little / stable o/c of the card ?
> 
> Thanks in advance for your help !


Hi there 

I also have the Zotac RTX 2080 Ti AMP, and I highly recommend the Galax 380W BIOS for this GPU if you are under water. I haven't tried this BIOS on an air-cooled GPU and can't comment on whether RGB or fan control will work.

I can OC my VRAM to 1125MHz max and the GPU to 2115MHz at 1.093V, maybe a bit more if temperatures are under 34°C.

My Zotac is a dud and won't OC like other cards, such as my friend's Palit RTX 2080 Ti.

Hope this helps 

Thanks, Jura


----------



## kaanaslan

Hello everyone,

I'm getting ready to buy my first 2080 Ti today. I'm currently a 1080 user.

I was leaning toward the 2080 Ti Founders Edition because I like its compact look and smaller size. But there is also an option like the EVGA FTW3 Ultra Gaming. I don't overclock that much, and I'm not trying to achieve +2-5 FPS in games. The only thing I want is not to fall behind the performance of other cards. So my question is: do you think it is OK to buy the Founders Edition and never think about any performance issue, or should I definitely forget about the FE and get the FTW3?


----------



## dangerSK

kaanaslan said:


> Hello everyone,
> 
> I'm getting ready to buy my first 2080 ti today. I'm currently a 1080 user.
> 
> I was leaning to 2080 TI Founders Edition because I like its compact look and smaller size. But there is also an option like EVGA FTW3 Ultra Gaming. I'm not overclocking that much. And I'm not trying to achieve +2-5 FPS in games. The only thing I want is not to stay behind of performances of other cards. So my question is, am do you think it is ok to but the Founders Edition and never think about any performance issue? Or should I definitely forget about the FE and get FTW3?


Get the FTW3, because why not? You will have headroom for the future if you decide to overclock, and the card will also be quieter for sure.


----------



## kaanaslan

dangerSK said:


> Get the FTW3 because why not ? You will have headroom for future if want decide to overclock and the card will be quieter also for sure.


So you think that in the future the Founders Edition will suffer even when overclocked?


----------



## dangerSK

kaanaslan said:


> So you think in the future the Founders Edition will suffer even with overclocked?


Worse cooling... yeah. I would buy EVGA just for the good support they have.


----------



## dangerSK

I have a question: can I flash a BIOS without changing the PCI ID? I need the ID to stay the same. Is it possible? Because otherwise my Afterburner won't recognize my 2080 Ti.


----------



## kaanaslan

dangerSK said:


> Worse cooling... yeah. I would buy EVGA just for the good support they have.


I don't know, I'm still not convinced. Well, I have EVGA right now and I know how good their support is. But don't you think Nvidia would be ready to help if I had any problem with the card?


----------



## dangerSK

kaanaslan said:


> I don't know. I'm still not convinced  Well, I have EVGA right now and I know how good their support is. But don't you think Nvidia would be ready to help if I have any problem with the card?


Can't tell you, I've never had to RMA an Nvidia card. But I don't see a single reason to buy a Founders when you have EVGA right now and don't have any problems. Continue to support this brand, which is one of the few to support XOC and OC in general through Kingpin.


----------



## VPII

willverduzco said:


> Now we're talking... That 17.46k GPU score you just got is exactly what one would expect from a sustained 2200 MHz. Very, very nice.
> 
> Not only is that just a great GPU score, period... But the fact you did that at 0% Power Slider (i.e. max 1.068V, and card reporting 1.05V in the render test) is even more impressive. Amazing card, and I can only imagine what it'd do with more volts and sub-ambient cooling.


Taking your knowledge of what to expect with sustained clocks: I ran my card at effectively 2145MHz core and +1300MHz memory. In the link, 3DMark will show a 2160 core clock, but that lasted only a couple of seconds until temps reached 30°C; thereafter it was 2145MHz right through. Does the GPU score seem in line? Ignore the AMD Ryzen 2700X; it does its job from what I gathered.

https://www.3dmark.com/spy/6919652


----------



## dangerSK

VPII said:


> Taken you knowledge what to expect with sustained clocks. Well I ran my card effectively 2145mhz core and +1300mhz memory. In the link 3dmark will show core clock 2160 but it was only for a couple seconds until temps reached 30c there after it is all the way 2145mhz right through. Does the gpu score seem in line? Ignore the AMD Ryzen 2700X, it does its job from what I gathered.
> 
> https://www.3dmark.com/spy/6919652


Your graphics score is fine, accurate to the clock. Now you should work on that CPU.


----------



## MacMus

What about the

AMP | 3 Fan | 2.5 Slot | 308mm | RGB | 16 Power Phases | 1665 MHz Boost | 260/300 W | Reference PCB | EAN 4895173617058 | PN ZT-T20810D-10P

Is it good for watercooling?

Why is it only 300W and not 380W like the others?

Does it matter?


----------



## Dreamdim

Hi Jura,

Thanks for your answer / help. 

Currently, I've installed the vanilla card air-cooled, but I just want to OC it a little. Do I need to flash my card with a Galax BIOS? This one, for instance: https://www.techpowerup.com/vgabios/206773/galax-rtx2080ti-11264-181115

Or flash with a newer Zotac BIOS version?

Thanks in advance for your help ! Very appreciated.

Edit 1: I'm continuing my testing using EVGA Precision X1 to stabilize the OC. For the moment I can reach +125 on the clock and +113% on the Power Target (which is the maximum for me) in the software. Is a BIOS update/change required?

Do I need to change the memory value (+750 clock maybe?) and the voltage (+100?)?

Thanks for your help



jura11 said:


> Hi there
> 
> I have too Zotac RTX 2080Ti AMP and I highly recommend Galax 380W BIOS for this GPU if you are under water, not tried this BIOS on air cooled GPU and can't comment if RGB or fan control will work
> 
> I can OC my VRAM to 1125MHz as max and GPU 2115MHz at 1.093v maybe bit more if temperatures are under 34°C
> 
> My Zotac is dud and won't OC as other cards like friend Palit RTX 2080Ti
> 
> Hope this helps
> 
> Thanks, Jura


----------



## VPII

dangerSK said:


> Your graphics score is fine, accurate to the clock. Now you should work on that cpu


Zen 2...... that is all I will say.... although I do have the Dice or LN2 option for cpu.

Sent from my SM-G960F using Tapatalk


----------



## kaanaslan

dangerSK said:


> kaanaslan said:
> 
> 
> 
> > > I don't know. I'm still not convinced  Well, I have EVGA right now and I know how good their support is. But don't you think Nvidia would be ready to help if I have any problem with the card?
> 
> 
> 
> > Cant tell you, never had to RMA nvidia card. But I dont see a single reason to buy Founder when u have EVGA right now and dont have any problems  Continue to support this brand which is one of few to support XOC and OC in general through Kingpin.
Click to expand...

Ok well, I did what you suggested. Bought the EVGA FTW3 Ultra Gaming; I know I won't regret it 🙂 Thanks.


----------



## J7SC

dangerSK said:


> I have a question. Can I flash bios without changing PCI ID ? I need the ID to stay the same. Is it possible ? Because otherwise my Afterburner wont recognize my 2080Ti.


 
...I'm in the same boat, per my earlier post here (https://www.overclock.net/forum/69-nvidia/1706276-official-nvidia-rtx-2080-ti-owner-s-club-739.html#post27921260). If you find an answer elsewhere, please post it in this thread. Tx


----------



## willverduzco

VPII said:


> Taken you knowledge what to expect with sustained clocks. Well I ran my card effectively 2145mhz core and +1300mhz memory. In the link 3dmark will show core clock 2160 but it was only for a couple seconds until temps reached 30c there after it is all the way 2145mhz right through. Does the gpu score seem in line? Ignore the AMD Ryzen 2700X, it does its job from what I gathered.
> 
> https://www.3dmark.com/spy/6919652


AND


dangerSK said:


> Your graphics score is fine, accurate to the clock. Now you should work on that cpu


As Danger said, your score looks about right, though possibly a tiny, tiny bit low. Certainly in the ballpark of what one would expect. Have you checked whatever stats monitoring program you use to see if you are hitting power limiter for a tiny duration at any point in the test, by chance? This is particularly common in the more power-intensive game test 2.

If you are not hitting the power limiter, I'd dive a bit deeper and ask whether you are running an OSD like RivaTuner Statistics Server. Also, have you adjusted the image quality slider in NVCPL to "Max Performance"? If you have already disabled any OSD for the duration of the benchmark and adjusted the slider to "Max Performance," I'd say the score is indeed ever so slightly low, and my knee-jerk reaction would be to assume the power limit if you haven't shunt modded already. Or, if you're running the Galax 380W BIOS, I found that it delivers slightly less performance at a given clock speed, so that could be the culprit as well.

With all that said, any score over 17k for single GPU is what I consider as the benchmark of a "great score" and an overall well-tuned GPU. For reference, though, my sustained 2160 MHz core with 8300 MHz memory did 17.41k GPU score on the 325W Asus BIOS (with shunt mod) and absolutely no clock deviation from 2160/8300 at 1.093. At identical clocks, my repeatable performance was quite a bit lower on the Galax BIOS (~50-100 pts on average, though I didn't run stats on the figures since I just quickly saw it was no longer a suitable choice for me).

If you're unsatisfied with 17.1k (which I would definitely not be, as that's a great score), you may want to play around with shunt modding (if you haven't already) as well as a faster per-clock BIOS. I haven't tried the FE or done too much testing with the Gigabyte 366W BIOSes, but I hear they're about as fast as the Asus 325W/360W BIOSes. While it's unconfirmed as to why, the per-clock performance delta is presumably due to differences in memory timings, as that was certainly the case in previous-generation cards where we had access to functional BIOS editors and could flash them without the need for signature verification. Anyway, perhaps all 4 BIOSes I mentioned are worth a try (or really, as many as you bother giving a shot). And if you do extensive testing (controlled via locked clocks/voltages), posting results here would be quite beneficial to all of us--especially those of us who have shunt modded and no longer care about which BIOS has a higher power limit.


----------



## willverduzco

J7SC said:


> ...I'm in the same boat, per my earlier post here https://www.overclock.net/forum/69-nvidia/1706276-official-nvidia-rtx-2080-ti-owner-s-club-739.html#post27921260 If you find an answer elsewhere, please post in this thread. Tx


AND


dangerSK said:


> I have a question. Can I flash bios without changing PCI ID ? I need the ID to stay the same. Is it possible ? Because otherwise my Afterburner wont recognize my 2080Ti.


It's not possible. The BIOS is itself what contains the Device ID data. When flashing, you'll notice a device ID mismatch warning the first time you flash a BIOS that doesn't match your current card's device ID. If you were to flash the card with the identical BIOS after that, you would get no such warning since the device ID has already been modified.

Anyway, this is especially frustrating for those of us who have multi-BIOS cards. Until yesterday, I had Stock Asus 325W on one position and Matrix 360W on the other. Despite giving the XOC BIOS the same device ID as the stock Strix 325W bios, Asus (in a sheer stroke of brilliance) decided to give the Matrix 360W a different device ID. I guess they wanted it to be "special," despite sharing the same PCB as the Strix. What it really means is that every time I switched the BIOS switch from 1 to 2, Windows would have to reinstall the GPU drivers. One benefit in that particular case is that since the Matrix 360W has a different stock memory clock (7400 vs 7000 MHz) and since it has a more aggressive F-V curve from the factory, this avoids any potential issues if a user were to try to load the same AB profile onto the Matrix BIOS. Still annoying, though.

Now if we had a working BIOS editor and the ability to sign modified BIOSes in a manner that would allow NVFLASH to flash to our cards, well then... Not only would we be able to modify the device ID, but we'd fix any other random issue such as these silly power limits and never have to resort to shunt modding and the like.
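On the device-ID point: the ID lives in the standard PCI expansion-ROM structures of the BIOS image, so you can at least inspect it in a dump. A minimal read-only sketch (the filename in the usage comment is hypothetical, and some dumps carry a vendor header before the 0x55AA image, in which case you'd need to search for the signature first):

```python
# Locate the PCI vendor/device ID inside a VBIOS image, following the
# standard PCI expansion-ROM layout: a 0x55AA signature at offset 0, a
# little-endian pointer at offset 0x18 to the "PCIR" data structure, which
# holds the vendor ID (+4) and device ID (+6). This only READS the ID --
# with signed Turing BIOSes you can't simply patch it and reflash.
import struct

def read_pci_ids(rom: bytes):
    """Return (vendor_id, device_id) from a PCI expansion ROM image."""
    if rom[0:2] != b"\x55\xaa":
        raise ValueError("not a PCI expansion ROM (no 55AA signature)")
    (pcir_off,) = struct.unpack_from("<H", rom, 0x18)  # pointer to PCIR struct
    if rom[pcir_off:pcir_off + 4] != b"PCIR":
        raise ValueError("PCIR signature not found")
    return struct.unpack_from("<HH", rom, pcir_off + 4)  # (vendor, device)

# Usage on a dump (hypothetical filename):
#   vid, did = read_pci_ids(open("strix_325w.rom", "rb").read())
#   print(f"{vid:04X}:{did:04X}")   # NVIDIA's vendor ID is 10DE
```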


----------



## J7SC

willverduzco said:


> AND
> 
> It's not possible. The BIOS is itself what contains the Device ID data. When flashing, you'll notice a device ID mismatch warning the first time you flash a BIOS that doesn't match your current card's device ID. If you were to flash the card with the identical BIOS after that, you would get no such warning since the device ID has already been modified.
> 
> Anyway, this is especially frustrating for those of us who have multi-BIOS cards. Until yesterday, I had Stock Asus 325W on one position and Matrix 360W on the other. Despite giving the XOC BIOS the same device ID as the stock Strix 325W bios, Asus (in a sheer stroke of brilliance) decided to give the Matrix 360W a different device ID. I guess they wanted it to be "special," despite sharing the same PCB as the Strix. What it really means is that every time I switched the BIOS switch from 1 to 2, Windows would have to reinstall the GPU drivers. One benefit in that particular case is that since the Matrix 360W has a different stock memory clock (7400 vs 7000 MHz) and since it has a more aggressive F-V curve from the factory, this avoids any potential issues if a user were to try to load the same AB profile onto the Matrix BIOS. Still annoying, though.
> 
> Now if we had a working BIOS editor and the ability to sign modified BIOSes in a manner that would allow NVFLASH to flash to our cards, well then... Not only would we be able to modify the device ID, but we'd fix any other random issue such as these silly power limits and never have to resort to shunt modding and the like.


 
Thanks. As already posted before, each new BIOS (if different from something Windows saw earlier) will generate its own entry in the Windows Registry...so a single GPU w/ triple BIOS switch will thus get three entries, as long as all three BIOS positions are used at some point in time. With SLI, that's six entries...

...Answering my own concern 'partially' (re. identical 'ID' numbers for two different cards per GPUz after flashing), I just did some deeper digging (per attachment), and in spite of the same 'ID' per GPUz, the Mem, IO and IRQ resources assigned by Windows 7 are different, which is of course what I wanted to see and confirm...*I hope Win 10 will do the same*. The purpose of all this is for me to load the stock Bios from GPU 1 onto GPU 2; there are only minor differences but I still expect a positive impact...2nd attachment shows just some of the differences in Hex per WinMerge.

*@DangerSK *...per the first attached pic, there does seem to be an option to change settings if not on 'automatic'...haven't tried it yet, but I would think after some registry cleaning of the GPU (per BIOS) entries and booting into safe mode, you should be able to change some things around re. resources...this may or may not include the 'ID', but worth a shot, perhaps.


----------



## VPII

willverduzco said:


> AND
> 
> 
> As Danger said, your score looks about right, though possibly a tiny, tiny bit low. Certainly in the ballpark of what one would expect. Have you checked whatever stats monitoring program you use to see if you are hitting power limiter for a tiny duration at any point in the test, by chance? This is particularly common in the more power-intensive game test 2.
> 
> If you are not hitting power limiter, I'd dive a bit deeper and ask whether you are running an OSD like Rivatuner Stats server? Also, have you adjusted the image quality slider in NVCPL to "Max Performance?" If you already disabled any OSD for the duration of the benchmark and have adjusted the slider for "max performance," I'd say the score is indeed ever so slightly slow, and my knee-jerk reaction would be to assume power limit if you haven't shunt modded already. Or if you're running the Galax 380W BIOS, I found that it delivers slightly less performance at a given clock speed, so that could be the culprit as well.
> 
> With all that said, any score over 17k for single GPU is what I consider as the benchmark of a "great score" and an overall well-tuned GPU. For reference, though, my sustained 2160 MHz core with 8300 MHz memory did 17.41k GPU score on the 325W Asus BIOS (with shunt mod) and absolutely no clock deviation from 2160/8300 at 1.093. At identical clocks, my repeatable performance was quite a bit lower on the Galax BIOS (~50-100 pts on average, though I didn't run stats on the figures since I just quickly saw it was no longer a suitable choice for me).
> 
> If you're unsatisfied with 17.1k (which I would definitely not be, as that's a great score), you may want to play around with shunt modding (if you haven't already) as well as a faster per-clock BIOS. I haven't tried the FE or done too much testing with the Gigabyte 366W BIOSes, but I hear they're similarly fast as the Asus 325W/360W BIOSes. While it's unconfirmed as to why, the per-clock performance delta is presumably due to differences in memory timings, as that was certainly the case in previous generation cards where we had access to functional BIOS editors and could flash them without the need for signature verification. Anyway, perhaps all 4 BIOSes I mentioned are worth a try (or really, as many as you bother giving a shot). And if you do extensive testing (controlled via locked clocks/voltages), posting results here would be quite beneficial to all of us--especially those of us who have shunt modded and no longer care about which BIOS has a higher power limit.



Hi willverduzco, thanks for the comments. Power limit won't be an issue, as I'm using the XOC 1000W BIOS, which I know is broken (no V/F curve), but at least I get constant clocks right through a bench run. As for RivaTuner, it's running since it came with MSI AB; what do I need to do with it? Image quality is set to max performance, so that's not an issue either.


----------



## dangerSK

Thanks guys for the info. The thing is, I got an NDA LN2 BIOS from MSI which has a 1000W power limit, but my custom Afterburner doesn't work with that BIOS, so I can't set, for example, 1.3V core voltage. That Afterburner works only with the retail LN2 BIOS, and a 400W PL is not enough for 1.2V core voltage and up. I suspect it's because these two BIOSes, for whatever reason, have different PCI IDs.


----------



## Dreamdim

Hi all,

I just installed my new Zotac AMP 2080 Ti. 

Currently I'm running the vanilla card with air cooling. Do I need to flash my card with a Galax BIOS? This one, for instance: https://www.techpowerup.com/vgabios/...i-11264-181115

Or flash a newer BIOS version from Zotac?

Thanks in advance for your help! Much appreciated.

Edit 1: I'm continuing my testing using EVGA Precision X1 to stabilize the OC. For the moment I can reach +125 on the clock and +113% on the power target (the maximum for me) in the software. Is a BIOS update/change required?

Do I need to change the memory value (+750 on the clock maybe?) and the voltage (+100?)?

Thanks for your help


----------



## dante`afk

hotrod717 said:


> Latest, still on aio and stock volts. I am seeing about 350 watts maximum draw on card during this run. On AB instead PX1


which card and what mods? max temp?


----------



## dante`afk

VPII said:


> Taking your knowledge of what to expect with sustained clocks: I ran my card at effectively 2145 MHz core and +1300 MHz memory. In the link, 3DMark will show a 2160 core clock, but that was only for a couple of seconds until temps reached 30C; thereafter it's 2145 MHz right through. Does the GPU score seem in line? Ignore the AMD Ryzen 2700X, it does its job from what I gathered.
> 
> https://www.3dmark.com/spy/6919652


what mods/bios?


----------



## VPII

dante`afk said:


> what mods/bios?


I use the Strix XOC BIOS, the rest as normal.

Sent from my SM-G960F using Tapatalk


----------



## J7SC

...Finally decided to completely (soft) uninstall both Aorus 2080 Ti WB cards in Win 10 and ran DDU several times. I also completely uninstalled MSI AB and GPU-Z...per earlier posts, those apps (+3DMark) were disagreeing over which one was 'GPU 1' and which was 'GPU 2', probably because I brought the M.2 Win 10 drive over from the (different mobo/CPU) test bench. Since I want to flash the BIOS from card 1 to card 2, it's 'kind of important' to know which is the 'real GPU 1'. Anyway, after all that, all the apps now agree on which is card 1. The spoiler has some pics of the completed system, with focus on the GPUs...obviously glad that I don't have to physically remove cards now that the build is completed.


Spoiler


----------



## broken pixel

Can some of you share your OC settings for ROG-STRIX-RTX-2080TI-011G with stock BIOS

I am wondering if I got a lemon, my highest OC is Voltage +100%, Clock +115MHz, 125%PL, Memory Clock +850MHz (Micron)


----------



## axiumone

broken pixel said:


> Can some of you share your OC settings for ROG-STRIX-RTX-2080TI-011G with stock BIOS
> 
> I am wondering if I got a lemon, my highest OC is Voltage +100%, Clock +115MHz, 125%PL, Memory Clock +850MHz (Micron)


Yep, that's pretty much what I get. It's SEVERELY power limited with the 325 watt stock bios. I tend to stay around 1920-1950 on the clocks at 4k. 

Recently updated to the Matrix bios, which is 360 watts. That's allowing me to stay around 1995-2010mhz. 

https://www.techpowerup.com/vgabios/210080/asus-rtx2080ti-11264-190307-1


----------



## ESRCJ

I decided to give a BIOS other than the 380W Galax reference BIOS a try with my FE. The GB Aorus Extreme (366W power limit) seemed to give me the best results in TimeSpy.

My best score with the Galax BIOS (380W): https://www.3dmark.com/spy/5622114
My best score with the GB Aorus Extreme BIOS (366W): https://www.3dmark.com/spy/6923768

One thing I noticed immediately was that I could not achieve the same memory clocks with the Aorus BIOS as I could with the Galax BIOS. This may hint at different memory timings. Despite the slightly lower power limit and lower memory frequency, the Aorus BIOS pulled out ahead by almost 2 percent (graphics score). It very well could be the case that the Galax BIOS has looser memory timings, although I cannot confirm this with certainty. I will be sticking with the Aorus BIOS for the time being.


----------



## J7SC

gridironcpj said:


> I decided to give a BIOS other than the 380W Galax reference BIOS a try with my FE. The GB Aorus Extreme (366W power limit) seemed to give me the best results in TimeSpy.
> 
> My best score with the Galax BIOS (380W): https://www.3dmark.com/spy/5622114
> My best score with the GB Aorus Extreme BIOS (366W): https://www.3dmark.com/spy/6923768
> 
> One thing I noticed immediately was that I could not achieve the same memory clocks with the Aorus BIOS as I could with the Galax BIOS. This may hint at different memory timings. Despite the slightly lower power limit and lower memory frequency, the Aorus BIOS pulled out ahead by almost 2 percent (graphics score). It very well could be the case that the Galax BIOS has looser memory timings, although I cannot confirm this with certainty. I will be sticking with the Aorus BIOS for the time being.


 
I never bothered switching away from the stock Aorus Bios (I have two of those cards). That said, per earlier posts, there's more than one Aorus bios - even for two 'identical' 2080 Ti Xtre WB cards only 2 digits apart in serial numbers. Do you mind posting which Bios version you're using ? Per below, the 'upper' two GPUz screens show what I'm currently running on both cards (again, all stock)


----------



## dangerSK

1.093V core voltage is not a problem anymore. Excited to release this beast on LN2.


----------



## dangerSK

J7SC said:


> I never bothered switching away from the stock Aorus Bios (I have two of those cards). That said, per earlier posts, there's more than one Aorus bios - even for two 'identical' 2080 Ti Xtre WB cards only 2 digits apart in serial numbers. Do you mind posting which Bios version you're using ? Per below, the 'upper' two GPUz screens show what I'm currently running on both cards (again, all stock)


I would try all the BIOSes and then settle on the best one; maybe there's one with better timings or something.


----------



## fleps

dangerSK said:


> 1.093V core voltage is not a problem anymore  Excited to release this beast on LN2.


I wish the RTX 2080 had some BIOSes to increase core voltage.
My FTW3 is reporting perfcap VREL, locked at 1093 mV, still with headroom on power and temperatures =/


----------



## dangerSK

fleps said:


> I wish the RTX 2080 had some bioses to increase core voltage.
> My FTW3 is reporting perfCap VREL locked at 1093mv still with headroom on power and temperatures =/


The 1093 mV cap is normal for all Turing; yep, I know only a few NDA BIOSes have that bypass.


----------



## Nizzen

dangerSK said:


> 1.093V core voltage is not a problem anymore. Excited to release this beast on LN2.


Not open for everyone = pay to win.

=boring


----------



## dangerSK

Nizzen said:


> Not open for everyone = pay to win.
> 
> =boring


If you want extreme settings like unlocked core voltage, you have to pay for extreme cards (Lightning, HOF, etc.). Even that is not enough; you have to get tools from the manufacturer, etc.


----------



## VPII

willverduzco said:


> AND
> 
> 
> As Danger said, your score looks about right, though possibly a tiny, tiny bit low. Certainly in the ballpark of what one would expect. Have you checked whatever stats monitoring program you use to see if you are hitting power limiter for a tiny duration at any point in the test, by chance? This is particularly common in the more power-intensive game test 2.
> 
> If you are not hitting power limiter, I'd dive a bit deeper and ask whether you are running an OSD like Rivatuner Stats server? Also, have you adjusted the image quality slider in NVCPL to "Max Performance?" If you already disabled any OSD for the duration of the benchmark and have adjusted the slider for "max performance," I'd say the score is indeed ever so slightly slow, and my knee-jerk reaction would be to assume power limit if you haven't shunt modded already. Or if you're running the Galax 380W BIOS, I found that it delivers slightly less performance at a given clock speed, so that could be the culprit as well.
> 
> With all that said, any score over 17k for single GPU is what I consider as the benchmark of a "great score" and an overall well-tuned GPU. For reference, though, my sustained 2160 MHz core with 8300 MHz memory did 17.41k GPU score on the 325W Asus BIOS (with shunt mod) and absolutely no clock deviation from 2160/8300 at 1.093. At identical clocks, my repeatable performance was quite a bit lower on the Galax BIOS (~50-100 pts on average, though I didn't run stats on the figures since I just quickly saw it was no longer a suitable choice for me).
> 
> If you're unsatisfied with 17.1k (which I would definitely not be, as that's a great score), you may want to play around with shunt modding (if you haven't already) as well as a faster per-clock BIOS. I haven't tried the FE or done too much testing with the Gigabyte 366W BIOSes, but I hear they're similarly fast as the Asus 325W/360W BIOSes. While it's unconfirmed as to why, the per-clock performance delta is presumably due to differences in memory timings, as that was certainly the case in previous generation cards where we had access to functional BIOS editors and could flash them without the need for signature verification. Anyway, perhaps all 4 BIOSes I mentioned are worth a try (or really, as many as you bother giving a shot). And if you do extensive testing (controlled via locked clocks/voltages), posting results here would be quite beneficial to all of us--especially those of us who have shunt modded and no longer care about which BIOS has a higher power limit.


Hi @willverduzco, does this seem a little more in line? I uninstalled RivaTuner, blah blah blah, and this is what I got...

https://www.3dmark.com/spy/6936630


----------



## willverduzco

VPII said:


> Hi @willverduzco does this seem a little more in line. I uninstalled Rivatuner blah blah blah and this is what I got....
> 
> https://www.3dmark.com/spy/6936630


 Haha, that's one way to get rid of the OSD. I just use a hotkey (I found CTRL+F12 to be most convenient) in Afterburner to show/hide the OSD, but that doesn't get rid of the back-end that is still polling the hardware for data. Your way is less convenient, but lets you completely disable another Windows service that could possibly consume resources and lower the score.

As for your score, it's perfect, and just as one would expect from a sustained mid-2100s MHz core clock and the 8250 MHz memory, when there are no memory errors causing a slowdown due to repeat transfers. Good job on that score.

As an aside, I noticed you lowered your memory clock to 8250 from 8300 MHz. Did you notice an increase in score when you did that, or was that just coincidental with this benchmark run and the removal of Afterburner? For me, I notice score decreases if I push past 8400 MHz (+1400 offset on most BIOSes, +1000 on the Asus Matrix BIOS), meaning that my memory is causing issues at that point.
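Since GDDR6 error correction makes scores quietly drop past the sweet spot rather than crashing, the practical test is to sweep offsets and keep the one with the peak score. As a trivial sketch (the offsets and scores are made-up placeholders):

```python
# Pick the memory offset with the best graphics score; past the sweet spot,
# error-corrected retransfers lower the score even though runs complete.
def best_offset(results):
    """results: {+MHz offset: graphics score}. Return the peak offset."""
    return max(results, key=results.get)

results = {1000: 17250, 1200: 17330, 1400: 17390, 1500: 17310}  # placeholders
print(best_offset(results))  # -> 1400: pushing further starts to hurt
```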


----------



## VPII

willverduzco said:


> Haha, that's one way to get rid of the OSD. I just use a hotkey (I found CTRL+F12 to be most convenient) in Afterburner to show/hide the OSD, but that doesn't get rid of the back-end that is still polling the hardware for data. Your way is less convenient, but lets you completely disable another Windows service that could possibly consume resources and lower the score.
> 
> 
> 
> As for your score, it's perfect, and just as one would expect from a sustained mid-2100s MHz core clock and the 8250 MHz memory, when there are no memory errors causing a slowdown due to repeat transfers. Good job on that score.
> 
> 
> 
> As an aside, I noticed you lowered your memory clock to 8250 from 8300 MHz. Did you notice an increase in score when you did that, or was that just coincidental with this benchmark run and the removal of Afterburner? For me, I notice score decreases if I push past 8400 MHz (+1400 offset on most BIOSes, +1000 on the Asus Matrix BIOS), meaning that my memory is causing issues at that point.


Nope, I had issues with TS failing during the first test, so I reverted back to check.



----------



## Sheyster

gridironcpj said:


> I decided to give a BIOS other than the 380W Galax reference BIOS a try with my FE. The GB Aorus Extreme (366W power limit) seemed to give me the best results in TimeSpy.
> 
> My best score with the Galax BIOS (380W): https://www.3dmark.com/spy/5622114
> My best score with the GB Aorus Extreme BIOS (366W): https://www.3dmark.com/spy/6923768
> 
> One thing I noticed immediately was that I could not achieve the same memory clocks with the Aorus BIOS as I could with the Galax BIOS. This may hint at different memory timings. Despite the slightly lower power limit and lower memory frequency, the Aorus BIOS pulled out ahead by almost 2 percent (graphics score). It very well could be the case that the Galax BIOS has looser memory timings, although I cannot confirm this with certainty. I will be sticking with the Aorus BIOS for the time being.





J7SC said:


> That said, per earlier posts, there's more than one Aorus bios - even for two 'identical' 2080 Ti Xtre WB cards only 2 digits apart in serial numbers. Do you mind posting which Bios version you're using ?



I'd like to know this as well; please link the BIOS you used. I may try it myself.


----------



## J7SC

Sheyster said:


> I'd like to know this as well; please link the BIOS you used. I may try it myself.


 
...you can try ALL these here


Spoiler



https://www.techpowerup.com/vgabios/?architecture=NVIDIA&manufacturer=Gigabyte&model=RTX+2080+Ti&interface=&memType=&memSize=&since=


 ...I'm running __17.00.63 on one card and thinking of trying out __17.00.B0


----------



## dante`afk

That's the Waterforce though; he's using the Extreme, and there are 3 different Extreme BIOSes.


aorus extreme vs windforce vs galax bios

https://www.3dmark.com/compare/spy/6940677/spy/6940274/spy/6940173


Now, the Gigabyte let me put another 100 MHz on the memory and also run 2160 MHz on the core more stably; before that it'd drop frequently to 2145.

Since there are 3 different Aorus Extreme BIOSes, I'll try them all out tomorrow.


----------



## J7SC

dante`afk said:


> that's the waterforce though, he's using the extreme, and there are 3 different extreme bios'
> 
> 
> aorus extreme vs windforce vs galax bios
> 
> https://www.3dmark.com/compare/spy/6940584/spy/6940234/spy/6940173


 
I think it's just a difference in labeling (afaik, all Aorus Waterforce cards are 'Xtreme'). In any case, 90.02.17.00.63 in the above link is the BIOS for my Aorus Xtreme Waterforce WB (full block, not AIO).


----------



## arcDaniel

Has anyone already flashed the Kingpin BIOS on an FE-based card?

https://www.techpowerup.com/vgabios/210258/210258


----------



## dangerSK

arcDaniel said:


> Has someone allready flashed the Kingpin Bios on an FE Based Card?
> 
> https://www.techpowerup.com/vgabios/210258/210258


I think it won't work; the Kingpin uses 3x 8-pin. Someone reported GPU slowdown to 300 MHz under load with an MSI card.


----------



## kx11

Man, that 2080 Ti Matrix disappointed me hard; it runs hot (cooler than the FE by 2%) and hardly pushes OC above the FE.


----------



## bigjdubb

dangerSK said:


> I think it wont work, kingpin uses 3x8pin.. someone reported gpu slowdown to 300mhz in load with MSI card



That's the way I understand it as well. I was thinking about using it on my FTW3 Ultra but it doesn't have 3 power connectors.


----------



## Thoth420

kx11 said:


> man that 2080ti Matrix disappointed me hard , runs hot ( cooler than FE by 2% ) and hardly pushes OC above FE


Isn't that the ASUS card with the AIO built right into the shroud?


----------



## bigjdubb

Thoth420 said:


> Isn't that the ASUS card with the AIO built right into the shroud?


That sounds like a terrible idea. Seems like it would give you the worst of both types of solutions (air/water).


----------



## J7SC

bigjdubb said:


> That sounds like a terrible idea. Seems like it would give you the worst of both types of solutions (air/water).


 
I prefer full water-block custom cooling myself, but like the Kingpin card, this one is also designed to run LN2, the way it looks disassembled with cold plate etc., per the 4:21 mark in the vid below. Also, some folks don't want any hoses to route/show. What I find more surprising is that they didn't go with 3x 8-pin.


----------



## GanMenglin

I've flashed my MSI Sea Hawk X EK with the Kingpin BIOS and it works perfectly. I don't know why you guys' cards were stuck at 300 MHz.

The best-performing BIOS for this card is the 1000W one, but it's not perfect: only the DP1 port works, and HDMI has no sound. So for now, the Kingpin is the best option.


----------



## Sheyster

dante`afk said:


> aorus extreme vs windforce vs galax bios
> 
> https://www.3dmark.com/compare/spy/6940677/spy/6940274/spy/6940173


Thanks! Based on this I'm not gonna bother with it; I'll stick with the 380W BIOS. A 1 FPS difference between the 3 results in the graphics FPS tests = no perceivable difference in real-world use.


----------



## mirkendargen

GanMenglin said:


> I've flashed my msi seahawk X EK with kingpin bios, it works perfect. I don't know why your guys' card was stuck at 300mhz.
> 
> The best performance bios for this card is the 1000w bios, but it's not perfect, can only use DP1 port, and HDMI without sound. So till now, the kingpin is the perfect one.


Did you use it in a game? It was fine for me on the desktop too, it didn't slow down till running a game/benchmark.


----------



## VPII

willverduzco said:


> Haha, that's one way to get rid of the OSD. I just use a hotkey (I found CTRL+F12 to be most convenient) in Afterburner to show/hide the OSD, but that doesn't get rid of the back-end that is still polling the hardware for data. Your way is less convenient, but lets you completely disable another Windows service that could possibly consume resources and lower the score.
> 
> 
> 
> As for your score, it's perfect, and just as one would expect from a sustained mid-2100s MHz core clock and the 8250 MHz memory, when there are no memory errors causing a slowdown due to repeat transfers. Good job on that score.
> 
> 
> 
> As an aside, I noticed you lowered your memory clock to 8250 from 8300 MHz. Did you notice an increase in score when you did that, or was that just coincidental with this benchmark run and the removal of Afterburner? For me, I notice score decreases if I push past 8400 MHz (+1400 offset on most BIOSes, +1000 on the Asus Matrix BIOS), meaning that my memory is causing issues at that point.


Here is an update... CPU under dry ice, which is why it's at 4.8 GHz.

https://www.3dmark.com/spy/6946378



----------



## jura11

Dreamdim said:


> Hi Jura,
> 
> Thanks for your answer / help.
> 
> Currently, I've installed the Vanilla card with air cooled but I just want to o/c it a little. Do I need to flash my card with a Galax bios ? This one for instance : https://www.techpowerup.com/vgabios/206773/galax-rtx2080ti-11264-181115
> 
> or flash with a newer bios version of Zotac ?
> 
> Thanks in advance for your help ! Very appreciated.
> 
> Edit 1 : I continue my testing using EVGA Precision X1 to stabilize the O/C. For the moment i can reach +125 on clock and +113% on Power Target (which is the maximum for me) on the software. A bios update / change is required ?
> 
> Do I need to change the value of memory (+750 clock maybe ?) and the Voltage (+100 ?)
> 
> Thanks for your help


Hi there 

+125MHz is similar to what I could get from my Zotac RTX 2080 Ti AMP on the stock BIOS; with a manual V/F curve you should be able to OC a bit more, I think 140-150MHz is achievable. I'm just not sure how to create a manual V/F curve in X1, as that UI is a big mess.

Regarding the VRAM, my Zotac RTX 2080 Ti AMP could do +1125MHz. I would start with +800MHz; maybe +1000MHz should be easy to achieve on a Zotac RTX 2080 Ti.

The reason I use the Galax 380W BIOS is the higher power limit; with the stock Zotac RTX 2080 Ti AMP BIOS I hit the power limit in most applications and games.

The newer BIOS from Zotac is, I think, 300W, which is just not enough in most cases.

The Galax 380W BIOS works beautifully with the Zotac in my case, and I prefer it.

Hope this helps

Thanks, Jura
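The "manual V/F curve" trick Jura mentions is basically: raise the whole curve by an offset, then flatten everything above a chosen voltage so the card holds one clock at one voltage. A rough sketch of the arithmetic (the curve points below are invented placeholders, not real Turing data):

```python
# Flatten a V/F curve the way Afterburner's curve editor is typically used:
# shift all points up by an offset, then cap clocks above a lock voltage.
def flatten_curve(points, offset_mhz, lock_mv):
    """points: list of (mV, MHz) pairs. Return the adjusted curve."""
    shifted = [(mv, mhz + offset_mhz) for mv, mhz in points]
    lock_clock = max(mhz for mv, mhz in shifted if mv <= lock_mv)
    return [(mv, min(mhz, lock_clock)) for mv, mhz in shifted]

stock = [(900, 1800), (1000, 1900), (1093, 1980)]   # invented example points
print(flatten_curve(stock, 150, 1000))
# -> [(900, 1950), (1000, 2050), (1093, 2050)]: 2050 MHz held at 1.000 V
```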


----------



## bigjdubb

GanMenglin said:


> I've flashed my msi seahawk X EK with kingpin bios, it works perfect. I don't know why your guys' card was stuck at 300mhz.
> 
> The best performance bios for this card is the 1000w bios, but it's not perfect, can only use DP1 port, and HDMI without sound. So till now, the kingpin is the perfect one.


Well now I'm curious again. I haven't tried it because of other people having problems.


----------



## kx11

Using EVGA PX1 is a pain in the butt; the OSD isn't working and the fans aren't ramping against heat, not even manually.



can anyone help here ?


----------



## Thoth420

kx11 said:


> using EVGA PX1 is a pain in the butt , OSD not working and fans are not moving against heat , not even manually
> 
> 
> 
> can anyone help here ?


Yeah, that's par for the course. That program will crash even if you just leave it to monitor and control fans and nothing more, so the more you try to rely on it for, the less stable it is.


----------



## kx11

Thoth420 said:


> Yeah that's par for the course. That program will crash just leaving it to monitor and control fans and nothing more so the more you try to rely on it for the less stable it is.



Got the fans working. Still no OSD (RTSS worked 100%, though), and while running the FurMark stress test I see a message on the LED saying "warning: +12V too high". Should I be worried? Running the OC BIOS currently.


Edit: got the OSD working; trying to hit 2200 MHz without crashing.


----------



## acmilangr

Hi all.
I have the Asus Strix 2080 Ti OC.

- Has anyone tried flashing the Matrix BIOS?

- How do you flash the second BIOS? Shut down the PC, flip the BIOS switch, start the PC, and then flash?


----------



## GanMenglin

bigjdubb said:


> Well now I'm curious again. I haven't tried it because of other people having problems.


I totally don't have any problem with it.


----------



## GanMenglin

mirkendargen said:


> Did you use it in a game? It was fine for me on the desktop too, it didn't slow down till running a game/benchmark.


Yes, I used it for 3DMark and playing Apex. Did you OC your video memory more than +1000?


----------



## axiumone

acmilangr said:


> Hi all.
> I have Asus strix 2080ti OC.
> 
> -Does anyone have tried flash matrix bios?
> 
> -how to flash on second bios? shut down pc, change button bios, start pc and then flashing?


Yep, flash without any issues. Select the bios you'd like to flash with the physical toggle switch and flash away.


----------



## mirkendargen

GanMenglin said:


> Yes, I used it for 3dmark and play apex. Did you OC your video memory more than +1000?


I tried it with "stock" (for that BIOS) clocks and it did the same thing. It also reports 70% TDP and 150 W sitting at the desktop, when "normal" BIOSes report ~22% and 66 W. I have no idea how it's working for you; I tried it again last night. And I'm not the only one, since someone else reported the same behavior...


----------



## ESRCJ

Sheyster said:


> I'd like to know this as well; please link the BIOS you used. I may try it myself.


https://www.techpowerup.com/vgabios/209199/gigabyte-rtx2080ti-11264-181121-1

This is the one I used. Here is a comparison of the Galax 380W BIOS versus the linked GB BIOS for my card:

https://www.3dmark.com/compare/spy/6923768/spy/5622114


----------



## J7SC

gridironcpj said:


> https://www.techpowerup.com/vgabios/209199/gigabyte-rtx2080ti-11264-181121-1
> 
> This is the one I used. Here is a comparison of the Galax 380W BIOS versus the linked GB BIOS for my card:
> 
> https://www.3dmark.com/compare/spy/6923768/spy/5622114


 
...may not make a difference (or may even perform better), but the pic for that BIOS is the AIO model, not the full water-block one.


----------



## ESRCJ

J7SC said:


> ...may not make a difference (or may even perform better), but the pic for that Bios is the AIO model, not the full water-block one


Yeah I went with that one in particular because I have a friend that scored over 11K in SuperPosition 1080p Extreme with that model and at much lower clocks than my 10.8K run with the Galax reference BIOS. I figured I'd test it out. It didn't give me a boost in SuperPosition, but it gave me repeatably better results in Time Spy.


----------



## Luca Prinzi

Hi, I have a Zotac 2080 Ti AMP. I wanted to know: if I flash the BIOS, perhaps with the Galax one, roughly how many FPS would I gain in games? Thank you.


----------



## GanMenglin

mirkendargen said:


> I tried it with "stock" (for that bios) clocks and it did the same thing. It also reports 70% TDP and 150w sitting at the desktop, when "normal" bioses report ~22% and 66w. I have no idea how it's working for you, I tried it again last night. I'm not the only one since someone else reported the same behavior...


I don't have any problem. What's your video memory chip? Samsung or Micron? Mine is Micron.

By the way, I use a water chiller for the loop. When I set the water temp to 15°C, the core frequency can reach 2145 MHz. But I adjusted the water temp to 22°C yesterday due to the condensation issue, and now the core can only reach about 2115 MHz maximum. Is my card that bad?


----------



## mirkendargen

GanMenglin said:


> I don't have any problem. What's your video memory chip? Samsung or Micron? Mine is Micron.
> 
> By the way, I use a water chiller for the loop. When I set the water temp to 15°C, the core frequency can reach 2145 MHz. But I adjusted the water temp to 22°C yesterday due to the condensation issue, and now the core can only reach about 2115 MHz maximum. Is my card that bad?


I have Micron as well. No idea... Which exact bios did you use?


----------



## Thoth420

Has anyone put a FC EK block on their MSI Gaming X Trio yet? If so any snags or issues?


----------



## mirkendargen

Thoth420 said:


> Has anyone put a FC EK block on their MSI Gaming X Trio yet? If so any snags or issues?


EK makes the block for the Sea Hawk EK, which is the same PCB as the Gaming X Trio. Seems like it should be fine unless EK can't talk to themselves.


----------



## Thoth420

mirkendargen said:


> EK makes the block for the Sea Hawk EK, which is the same PCB as the Gaming X Trio. Seems like it should be fine unless EK can't talk to themselves.


Ah sweet. I did not know that... took them a while to get the block out for it. Strange.


----------



## Sheyster

gridironcpj said:


> https://www.techpowerup.com/vgabios/209199/gigabyte-rtx2080ti-11264-181121-1
> 
> This is the one I used. Here is a comparison of the Galax 380W BIOS versus the linked GB BIOS for my card:
> 
> https://www.3dmark.com/compare/spy/6923768/spy/5622114



Thank you for posting this. I've decided not to try it. During my 2-hour session of BFV Firestorm yesterday I hit 368W TDP, so I'll stick with the 380W BIOS for now.


----------



## kx11

Nice numbers for a 2080 Ti under AIO.


TS


https://www.3dmark.com/spy/6975636




FS Ultra
https://www.3dmark.com/3dm/35567612?


----------



## VPII

kx11 said:


> nice numbers for a 2080ti under AIO
> 
> 
> TS
> 
> 
> https://www.3dmark.com/spy/6975636
> 
> 
> 
> 
> FS Ultra
> https://www.3dmark.com/3dm/35567612?


Hi @kx11

I had to make use of dry ice to meet those targets... Had a rough time getting my system back up and running as normal afterwards. At least the dishwasher and oven helped get that sorted though... At present it appears to be the best run with an AMD Ryzen CPU.

https://www.3dmark.com/spy/6946378

https://www.3dmark.com/spy/6946325

Firestrike Ultra was with the processor under normal liquid cooling, so 4.28 to 4.3 GHz.

https://www.3dmark.com/fs/18885725


----------



## J7SC

...two somewhat related items...

1.) Per pic attachment below @ lower left, for those who have (including yours truly) the Aorus 2080 Ti WB Xtreme full factory water-block card(s), there were some earlier posts made here by others that you could not 'open them', i.e. for maintenance...obviously, that's not the case. The Nickel-coated block looks quite efficient, and is accessible after all, per screen grab from 'Declassified Systems' @ YouTube vid.

2.) The second item relates to a recent vid by Igor's Lab, per the link. I realize that Igor's Lab is not everyone's 'cup of tea', quite apart from the fact that the vid is in German. That said, Igor managed to almost triple his YouTube subscriber base in two months (albeit from low initial numbers) since he cleaned up his office, now with a new green-screen setup. Igor has also been a senior contributing editor for Tom's Hardware DE for ages.

In any event, ostensibly, this vid is about the Titan RTX and the glue (!) used by NVIDIA these days on their top-end cards. Igor then gets into the 'old' RTX failure discussion... since ALL initial RTX 2080 Ti cards were equipped with Micron memory, Micron got a bad rap, but it seems the issue has now been confirmed to be something very different: a weak batch of solder / SMT points. This is the reason I post this, as he mentions the various DIFFERENT torque ratings for components on the RTX 2080 Ti (and Titan RTX) PCBs.

The message is that when mounting an after-market water block on a 2080 Ti (which I would recommend, given their heat-to-boost / 380W+ design), it is better to err on the side of caution, given what seems like some empirical evidence for 'weakish' solder traces around the VRAM and other connections.

Igor's related vid is here


----------



## VPII

J7SC said:


> ...two somewhat related items...
> 
> 1.) Per pic attachment below @ lower left, for those who have (including yours truly) the Aorus 2080 Ti WB Xtreme full factory water-block card(s), there were some earlier posts made here by others that you could not 'open them', i.e. for maintenance...obviously, that's not the case. The Nickel-coated block looks quite efficient, and is accessible after all, per screen grab from 'Declassified Systems' @ YouTube vid.
> 
> 2.) The second item relates to a recent vid by Igor's Lab, per the link. I realize that Igor's Lab is not everyone's 'cup of tea', quite apart from the fact that the vid is in German. That said, Igor managed to almost triple his YouTube subscriber base in two months (albeit from low initial numbers) since he cleaned up his office, now with a new green-screen setup. Igor has also been a senior contributing editor for Tom's Hardware DE for ages.
> 
> In any event, ostensibly, this vid is about the Titan RTX and the glue (!) used by NVIDIA these days on their top-end cards. Igor then gets into the 'old' RTX failure discussion... since ALL initial RTX 2080 Ti cards were equipped with Micron memory, Micron got a bad rap, but it seems the issue has now been confirmed to be something very different: a weak batch of solder / SMT points. This is the reason I post this, as he mentions the various DIFFERENT torque ratings for components on the RTX 2080 Ti (and Titan RTX) PCBs.
> 
> The message is that when mounting an after-market water block on a 2080 Ti (which I would recommend, given their heat-to-boost / 380W+ design), it is better to err on the side of caution, given what seems like some empirical evidence for 'weakish' solder traces around the VRAM and other connections.
> 
> Igor's related vid is here https://www.youtube.com/watch?v=nYWAEn_SOfk


Hi @J7SC, a lot of pages back I actually commented on the failures and stated that when you have a processor with more pin-outs than anything currently available, you're bound to run into connectivity issues when you have temperature swings. This basically says exactly what you stated in "a weak batch of solder / SMT points".

Well what do you know....


----------



## kx11

VPII said:


> Hi @*kx11*
> 
> I had to make use of Dry Ice to meet those targets.... Had a rough time getting my system back up and running as normal afterwards. At least the Dish washer and oven helped get that sorted though.... AT present it appears to be the best run with an AMD Ryzen CPU.
> 
> https://www.3dmark.com/spy/6946378
> 
> https://www.3dmark.com/spy/6946325
> 
> Firestrike Ultra was with processor under normal liquid cooling so 4.28 to 4.3ghz
> 
> https://www.3dmark.com/fs/18885725





Nice numbers you got there. I'll push mine even further, hopefully the AIO can keep up.


----------



## worms14

Hi everyone.
I would like to ask for advice on changing the bios in my graphics card.

I have the NVIDIA RTX 2080 Ti FE and I am wondering if there is a better BIOS that I could flash?
I have EK Vector water cooling, and I admit the temperatures are quite high, but maybe they won't be a limitation when changing to a better BIOS.
I'm asking for advice on how to do it correctly and which BIOS is best to choose.
What should I look for when changing the BIOS?

I know that there is a list of bios on this site: https://www.techpowerup.com/vgabios...X+2080+Ti&interface=&memType=&memSize=&since=
Which one would you choose?

Below is a picture of the current state while playing Assetto Corsa.
I should add that I'm playing on a Pimax 5K+, where every frame is important, and that headset is very demanding.

Thank you very much for help.


----------



## worms14

I saw on the first page a modified nvflash for the FE version.
I would like to know which BIOS version is best to choose for my FE card, and whether it would pay off in my case — is it mostly about the temperatures?


----------



## VPII

kx11 said:


> Nice numbers you got there. I'll push mine even further, hopefully the AIO can keep up.


Drop the rad in a bucket of ice water and you should not have any speed drops. Well, I am using the XOC BIOS, so 1000W available, of which I use around 46% or thereabouts.


----------



## kx11

VPII said:


> Drop the rad in a bucket of ice water and you should not have any speed drops. Well, I am using the XOC BIOS, so 1000W available, of which I use around 46% or thereabouts.



No space in the room to do so, but hopefully I'll find a way to do it.


----------



## J7SC

VPII said:


> Hi @J7SC, a lot of pages back I actually commented on the failures and stated that when you have a processor with more pin-outs than anything currently available, you're bound to run into connectivity issues when you have temperature swings. This basically says exactly what you stated in "a weak batch of solder / SMT points"
> 
> Well what do you know....


 
Yeah, you/others had already zeroed in on it, but it kind of got lost / polluted in that Micron 'debate'. The reason I posted the above vid is really just to re-emphasize to those mounting an aftermarket water block on their 2080 Ti (which is to be recommended) to be extra careful when tightening the screws, given the solder/SMT point issue.


----------



## Misfit16R

Has anyone had any experience with flashing the 1000W ASUS BIOS on the EVGA FTW3? Looking at using it, but want to see if people have tried it yet.


----------



## LunaP

OK, so question: dual 2080 Ti XC Ultras modded to the FTW3 373W BIOS, getting +1100 on mem and +120 on core, stable in 3DMark Time Spy, running driver 417.71.


Wanted to see DLSS in SoTR but found I needed 419 or higher.
Updated to the latest, 425.61: getting black scanlines running the first graphics test, and the video port shuts off, i.e. the monitor can't find a signal until reboot. Dropped to +100 core / +1000 mem, still the same, just no scanlines on the first test; scanlines on the second, which crashes at the end and still disables the port. With +900 mem the first graphics test passes, no black bars or anything and the test finishes fine, but as soon as 3DMark finishes, the port shuts off and I have to restart since there's no display signal again. Other monitors work fine.

DDU'd and tried 419.61, same issue.

Back to 417.71 and no issue — I can run +1100/+120 fine.

Any suggestions as to what might be happening, or should I try another reflash?


https://www.3dmark.com/3dm/35588949? for comparison with 417.71

Fixed: on Windows 1809 (WDDM 2.5), if you're having the issue, swap to the DCH drivers after DDU instead of the Standard ones. 430 works like a charm.


----------



## Luca Prinzi

There are two Galax 380W BIOSes on this site. Which one is the best? I have a Zotac AMP 2080 Ti.


----------



## Bequis

Palit RTX 2080 Ti Dual (non-A chip) can go up to 310W with the latest BIOS.

This could be added to the first post.


----------



## enforcer3399

Sorry, couldn't find it in the thread: which BIOS is considered "best" for an MSI Lightning Z on air, for a higher power limit?


----------



## dangerSK

enforcer3399 said:


> Sorry, couldn't find it in the thread: which BIOS is considered "best" for an MSI Lightning Z on air, for a higher power limit?


On air you can use the stock MSI LN2 BIOS; 400W should be plenty, and you will reach the temp limit before the power limit.


----------



## kx11

pushed KINGPiN even more 


port Royal


https://www.3dmark.com/pr/83540


TSE , GFX score 7951


https://www.3dmark.com/spy/6975749


----------



## Kanashimu

I just bought a 2080 Ti FTW3 secondhand, and in the first two or three boots my computer froze on the desktop. I could move the mouse, but clicks did nothing. My desktop wouldn't render, and my active window (WhatsApp) turned black. The keyboard wasn't taking inputs. I had to hard reset to fix it. After a few more boots, I was able to run benchmarks and do folding just fine. Is this a serious issue that I should be returning the card for?


----------



## J7SC

Kanashimu said:


> I just bought a 2080 Ti FTW3 secondhand, and in the first two or three boots my computer froze on the desktop. I could move the mouse, but clicks did nothing. My desktop wouldn't render, and my active window (WhatsApp) turned black. The keyboard wasn't taking inputs. I had to hard reset to fix it. After a few more boots, I was able to run benchmarks and do folding just fine. Is this a serious issue that I should be returning the card for?


 
...hard to diagnose from afar, but if it occurred after installing a new-to-your-setup video card and *has NOT recurred since then*, I would not worry about it... the OS could have been searching for the PCI bridge or other items (relating to mobo PCIe, not the NVIDIA driver set).


----------



## Kanashimu

J7SC said:


> ...hard to diagnose from afar, but if it occurred after installing a new-to-your-setup video card and *has NOT recurred since then*, I would not worry about it... the OS could have been searching for the PCI bridge or other items (relating to mobo PCIe, not the NVIDIA driver set).


Yeah, I was wondering if it was related to first-time installation quirks.

I did have an EVGA 2080 SC beforehand, so I was surprised that I would still run into these issues. I might go with a DDU wipe and run an entirely new installation of drivers.


----------



## rustyk

Hello experts!

I joined a while ago and had been reading this thread prior to deciding which model of 2080 Ti to purchase. In the end I bought a Gigabyte 2080 Ti Gaming OC, as it was a good price and seemed to have a better cooling solution than the Founders Edition and the Windforce OC edition.

Long story short, I had a few stability issues and got an exchange card from Amazon, but I have some questions and an observation.

1. Does anyone here rate Kombuster as a test for memory stability? I've run various 3dmark benchmarks at different memory OCs and the system has been stable, but Kombuster can pick up artifacts. Is the artifact scanner legitimate? I wonder if the overclock is stable enough to complete benchmarks but there are anomalies that the naked eye doesn't see.

2. All things being equal, if the Afterburner/NVIDIA OC scanner is run on 2 cards and recommends a 100 MHz overclock for card A, but a 120 MHz overclock for card B, does that mean that card B is a superior overclocker?
My first card (A) didn't seem to overclock as well but seemed to peak at lower voltages. The second card (B) runs at a 120 MHz OC and seems to run at higher voltages.

3. Does anyone know what the various memory support tables in the BIOS are for the different types of VRAM? In GPU-Z (under Advanced > Nvidia BIOS), there are 6 entries: 3 for Samsung, 2 for Hynix and 1 for Micron.

4. Both of the cards are Rev 1.0 and both have Samsung memory. According to the serial numbers they were both manufactured in Feb '19. Not sure if that is useful to anyone.


----------



## HeadlessKnight

rustyk said:


> Hello experts!
> 
> I joined a while ago and had been reading this thread prior to deciding which model 2080TI to purchase. In the end I bought a Gigabyte 2080TI Gaming OC, as it was a good price and seemed to have a better cooling solution than the founders and the Windforce OC edition.
> 
> Long story short, I had a few stability issues and got an exchange card from Amazon, but I have some questions and an observation.
> 
> 1. Does anyone here rate Kombuster as a test for memory stability? I've run various 3dmark benchmarks at different memory OCs and the system has been stable, but Kombuster can pick up artifacts. Is the artifact scanner legitimate? I wonder if the overclock is stable enough to complete benchmarks but there are anomalies that the naked eye doesn't see.
> 
> 2. All things being equal, if Afterburner/Nvidia OC scanner is run on 2 cards and recommends 100Mhz overclock for card A, but 120Mhz overclock for card B, does that mean that card B is a superior overclocker?
> My first card (A) didn't seem to overclock as well but seemed to peak at lower voltages. The second card (B) runs at 120Mhz OC and seems to run at higher voltages.
> 
> 3. Does anyone know what the various memory support tables are in the BIOS for the different types of VRAM? In GPU-Z (under Advanced>Nvidia BIOS) , there are 6 entries. 3 for Samsung, 2 for Hynix and 1 for Micron.
> 
> 4. Both of the cards are Rev 1.0 and both have Samsung memory. According to the serial numbers they were both manufactured in Feb '19. Not sure if that is useful to anyone.



Welcome to OCN!

1- The anomalies mean the card is unstable; the GPU is making miscalculations because the core/memory is clocked too high. You need to tone it down. The easiest way to check whether the GPU is stable (if you are still not 100% sure) is to revert to stock and run the benchmark again; if no artifacts appear, that means you had simply clocked the card too high.

2- No. It depends on the max boost clock the card reaches at stock, as well as its base clock. If a card boosts to 2050 MHz at stock, then adding +100 will make it reach a peak boost of 2150 MHz; but if it boosts to 1980 MHz at stock, then adding +120 will give it a peak boost of 2100 MHz. So it depends on the base clock and max boost clock. Even cards with the same SKU number can have different peak boosts; it depends on the production quality of the chip. Offset values mean little to nothing, especially when comparing two different SKUs with different base clocks.
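The offset arithmetic above can be sketched in a couple of lines (illustrative only — the function and numbers simply restate the example; real boost clocks also move with temperature and power headroom):

```python
def peak_boost(stock_peak_mhz, offset_mhz):
    """Peak boost = the clock the card already reaches at stock, plus the offset.
    Illustrative only: real GPU Boost also bins by temperature and power."""
    return stock_peak_mhz + offset_mhz

# Card A: smaller offset but higher stock peak still ends up faster
card_a = peak_boost(2050, 100)  # 2150 MHz
card_b = peak_boost(1980, 120)  # 2100 MHz
print(card_a, card_b)
```

So comparing the raw offsets (+100 vs +120) tells you nothing by itself; only the resulting peak clocks are comparable.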

3- The same SKU can come with Samsung, Hynix or Micron memory chips, so memory timings and settings for each are pre-programmed in the GPU BIOS. But from my experience I've never seen an RTX 2080 Ti with Hynix memory; it must be super rare. And Micron was used only in early 2080 Ti's; most of the new ones come with Samsung memory, which is overall superior to Micron in terms of reliability and overclocking.

Hope that answered your questions.

Regards.


----------



## rustyk

HeadlessKnight said:


> Welcome to OCN!
> 
> 1- The anomalies mean the card is unstable, the gpu is doing miscalculations due to the core/memory clocked too high. You need to tone it down. The easiest way to see whether the GPU is stable or not (if you are still not 100% completely sure) is by reverting back to stock and checking the benchmark again, if no artifacts appear, that means you just clocked the card too high.
> 
> 2- No. It depends on the max boost clock the card reaches at stock and its base clock as well. If a card boosts to 2050 MHz at stock then adding +100 will make it reach a peak boost of 2150 MHz, but if it boosts at stock to 1980 MHz, then adding +120 will give it a peak boost of 2100 MHz. So it depends on the base clock and max boost clock, even cards with same SKU number can have different max peak boost, it depends on the quality of production of the chip. Offset values mean little to nothing especially if comparing two different SKUs with different base clocks.
> 
> 3- The same SKU can come with Samsung, Hynix or Micron memory chips, so memory timings and settings are pre-programmed in the GPU BIOS. But from my experience I've never seen an RTX 2080 Ti with Hynix memory, it must be super rare. and Micron has been used only in early 2080 Ti's, most of the new ones come with Samsung memory which is overall superior to Micron in terms of reliability and overclocking.
> 
> Hope that answered your questions.
> 
> Regards.


Hi,

Thanks for the warm welcome and thank you for the detailed answer and explanations, that's really useful indeed! 
For a while I thought I wouldn't get a reply because I hadn't just signed up and asked what the best BIOS is for my card ;-)

1. I'd assumed this was the case and I had already set everything back to standard speeds (I mean everything) and run an extended Memtest as well. I wanted to verify that Kombuster was reliable and it sounds like it is, which is fine.
I'm not interested in just getting the highest benchmark score possible, I'm more concerned with proving that my GPU works as designed (at this point in time).

My situation is complicated by my setup, which is a 3d vision surround config (i.e. 3 displays) and I'd been seeing strange horizontal flickering black lines but generally only when 3d gaming.
Also it was only ever on one out of the 3 displays at any given time. Unfortunately it was hard to reproduce, although for some reason running the afterburner/rivatuner OSD seemed to make it happen more often. Not always on the same screen either.

While gaming in 2d in surround I couldn't really reproduce the issue at all, although I had seen it in 2d desktop mode, where the same flickering was happening within afterburner itself, but only on one screen. If I dragged it to another screen it was fine. I think I ruled out Chrome GPU acceleration and it might have been something to do with monitor refresh rates not resetting properly.
Sorry for the essay, I was just trying to give some background to the questions. The only thing I never tried was a fresh windows 10 install, although I hadn't had any of these issues with my previous 1070 so didn't know for sure if it was a HW or SW issue. That's why I got the advance replacement from Amazon.

2. Makes perfect sense. So would GPU-Z accurately show the base and boost speeds for each card? Also, before I switched to card B I'd screen-captured the V/F curve in Afterburner and left it saved on profile one. After the OC scanner was run on card B, I compared the curves and it looks as though card B boosts higher all the way through the curve for the same input voltage. Would that imply a superior overclocker?

3. I've never heard of a Hynix equipped card either...

Thanks again!


----------



## HeadlessKnight

Hi again

If it happens even at stock and doesn't happen in 2D, it is quite likely a problem with the drivers, since NVIDIA no longer supports Surround/3D Vision like it did before; they officially dropped support for 3D Vision in their latest drivers, in anything newer than 418.
https://www.ghacks.net/2019/03/11/nvidia-3d-vision-end-of-support/
https://www.geforce.com/drivers/results/146347

As for GPU-Z, you will find the base clock in the first tab of GPU-Z along with the minimum expected boost clock, and it can report the peak boost if you keep it running in the background for the two GPUs while you game; you can open two GPU-Z instances for GPU1 and GPU2 and toggle the max reading for each GPU clock. But the most efficient way is to download MSI Afterburner and leave it running in the background; it will report the min/max value of every sensor the GPU supports.
Keep in mind also that the peak boost clock will not be reached if you launch your 3D app when the GPU temp is 40°C or more. This is a limitation imposed by GPU Boost 4.0; temps affect the peak boost the GPU will reach, and the clock curve as well.
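As a rough illustration of that temperature dependence (the 40°C threshold is from the post above; the 5°C bin size and ~15 MHz step are hypothetical placeholders, since NVIDIA does not publish the real GPU Boost tables):

```python
def boost_with_temp(peak_mhz, temp_c, threshold_c=40, bin_c=5, step_mhz=15):
    """Illustrative GPU Boost model: drop one clock step per temperature bin
    above the threshold. Real behaviour also depends on power/voltage headroom."""
    if temp_c <= threshold_c:
        return peak_mhz
    bins = -(-(temp_c - threshold_c) // bin_c)  # ceiling division
    return peak_mhz - bins * step_mhz

print(boost_with_temp(2100, 35))  # below threshold: holds the full 2100 MHz
print(boost_with_temp(2100, 55))  # 15 C over the threshold: three bins lower
```

The point of the sketch is only that the achievable clock is a staircase going down with temperature, which is why launching the app with the card already warm costs you the top bin(s).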


----------



## kx11

I bet if I had a waterblock I could game with these clocks all day (video is coming).


----------



## rustyk

HeadlessKnight said:


> Hi again
> 
> If it happens even at stock, and doesn't happen in 2D it could be highly likely a problem with the drivers since Nvidia no longer supports surround/3dvision like it did before, and they officially dropped support for 3Dvision in their latest drivers, in anything newer than 418.
> https://www.ghacks.net/2019/03/11/nvidia-3d-vision-end-of-support/ .
> https://www.geforce.com/drivers/results/146347
> 
> As for GPU-Z, you will find the base clock in the first tab of GPU-Z along with the minimum expect boost clock and it can report the peak boost if you keep it running in the background for the two GPUs when you game, you can open two GPU-Z instances for GPU1 and GPU2 and toggle the max reading for each GPU Clock. But the most efficient way is to download MSI Afterburner and make it running in the background, it will report min/max value of every sensor the GPU supports.
> Keep in mind also the peak boost clock will not be reached if you launched your 3D app when the GPU temp is 40 C or more. This is a limitation imposed by GPU Boost 4.0, temps affect the peak boost the GPU will reach and the clock curve as well.


Thanks again HeadlessKnight!
I've been really active on the 3D Vision forum for nearly 9 years now, so unfortunately I was aware that support has stopped. In fact, I asked someone there with a 2080 Ti what their opinion was, and they suspected it's a software issue, as they've had to drop the memory overclock etc. to get full stability in 3D. The (limited) consensus seems to be that the 3D Vision driver is to blame, and there are a number of well-documented issues with it already anyway, such as CPU core usage.

I know most people aren't a fan but it's the reason I upgraded again so I've just got to accept it is what it is. There is some hope that a 3rd party 3d driver can be developed, but the level of financial support and development commitment is still being scoped out.

Also, as I'm sure you're aware, every game/app etc. loads the various components of a video card differently, so although I might be able to run a firestrike ultra stress test, it doesn't necessarily equate to full stability with the same overclocks in 3d in another app. That's why I've been fishing for data and opinions.

I've read most if not all of this thread so I had some understanding of the temp/frequency factors with GPU boost. I decided that I'd go with a reasonable stock air cooled solution for now and if I *really* need more I'll put a block on it and switch to a custom loop later this year when/if I upgrade my system to a Zen 2 based build. 
But, before I got to that point I wanted to try to establish whether the card is faulty and, if not, whether it's a 'good' overclocker, in case I go on water further down the line. Having said that, I accept it's diminishing returns and I don't want to spend tons of cash just to chase a few percentage points.

I was running 1070 SLI before and have no current intention of keeping both cards.

Thanks again for your insight. I think I'll peruse some of the other subforums and spend some more time here 

ps. Would it help if I posted the voltage/frequency curves for the 2 cards? I need to work out how to attach the screen captures first (noob error), but my reasoning would be that the OC scanner would give some indication as to what was achievable for a given voltage.
I'd expect that a higher vertical curve would indicate that the card was capable of sustaining a higher frequency for a given voltage, which would make it a better card.
I think I'm satisfied that both cards are stable at stock clocks BTW.


----------



## fleps

rustyk said:


> But, before I got to that point I wanted to try to establish whether the card is faulty and, if not, whether it's a 'good' overclocker, in case I go on water further down the line. Having said that, I accept it's diminishing returns and I don't want to spend tons of cash just to chase a few percentage points.
> 
> ps. Would it help if I posted the voltage/frequency curves for the 2 cards? I need to work out how to attach the screen captures first (noob error), but my reasoning would be that the OC scanner would give some indication as to what was achievable for a given voltage.
> I'd expect that a higher vertical curve would indicate that the card was capable of sustaining a higher frequency for a given voltage, which would make it a better card.
> I think I'm satisfied that both cards are stable at stock clocks BTW.


If you want to chase the absolute max core clock, you should really follow Sajin's guide to force-lock the vCore at 1093 mV (the maximum allowed on the 20 series).

https://forums.evga.com/Guide-How-t...-overclock-with-msi-afterburner-m2820280.aspx

The guide is a little confusing to read, I must admit, so here are my version/notes on it:

1) Remember to disable any OC before starting this; you are supposed to do it at stock, so I recommend having a stock profile saved.

2) Be aware that on the current version of MSI there's no "white line" anymore, it's red now.

3) The tutorial is a little confusing, but after re-reading it a few times I understood. Basically the golden rules are:
- Follow the steps to set the yellow line to the max vCore point (1093) with CTRL + L, and apply.
- The "white" (now red) line will appear a few points to the left of the yellow line. Starting from the first point on the red line, set it -1 and apply, which makes the line jump one point closer to the yellow line;
- keep doing that until the red line meets the yellow line.
- Then hold CTRL to drag the yellow 1093 mV point to whatever clock you want to OC to, and apply.

*This will lock the OC/vCore even at idle.*
So I recommend keeping a default profile with no OC (or a small OC), and saving the result of this procedure to another profile that you activate only when benchmarking / gaming, or the card will run at full clocks and voltage all the time.

Then you start to tweak it a bit, because the OC step you define will likely decrease as the temps get high, so you need to test and see if you can set it higher.
I just left Heaven open, monitored, and changed the points a few times until I was happy with the results. For example, for my 2145 MHz stable clock (while gaming/benching) I actually have the 1093 mV point on the curve at 2175 MHz.
Also recommend doing all this with zero memory OC first.
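For anyone who finds the GUI steps hard to visualise, here is a toy model of what the finished curve looks like (purely illustrative — Afterburner has no scripting API for this, and the voltage/clock points below are made up):

```python
def lock_curve(curve_mv_to_mhz, lock_mv=1093, target_mhz=2175):
    """Model of the finished Afterburner curve lock: every point at or above
    the lock voltage sits at the target clock, so the card never requests a
    higher voltage than lock_mv; lower points stay at or below the target."""
    return {mv: target_mhz if mv >= lock_mv else min(mhz, target_mhz)
            for mv, mhz in curve_mv_to_mhz.items()}

# Hypothetical stock V/F points (mV -> MHz)
stock = {1000: 2040, 1043: 2085, 1093: 2130, 1100: 2145}
locked = lock_curve(stock)
print(locked)  # the 1093 and 1100 mV points now both sit at the target clock
```

This also shows why the lock holds at idle, as fleps warns: the curve itself is flattened, so there is no lower-voltage point for the card to fall back to.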

Hope it helps.


----------



## Glerox

kx11 said:


> pushed KINGPiN even more
> 
> 
> port Royal
> 
> 
> https://www.3dmark.com/pr/83540
> 
> 
> TSE , GFX score 7951
> 
> 
> https://www.3dmark.com/spy/6975749


What's the score difference between your MSI Lightning and the KINGPIN in TSE GFX score?


----------



## zhrooms

Bequis said:


> Palit RTX 2080 Ti Dual (Non-A chip) can go up to 310W with the latest bios.
> 
> This could be updated to the first message.


 
Jesus christ, that's actually real, wouldn't be surprised if someone at Palit got fired for that, it has no reason to exist, same boost clock and everything.

They just gave 20 cards from the original post a 50 MHz core clock bump; there's less incentive to get an A chip now, and less money to partners & NVIDIA. And they're now faster than many A cards (11, to be specific), which is comical.

All one has to do now is safely shunt-mod a non-A card with the $25 conductive (silver) pen and it'll run over 400W no sweat (increasing the power limit by ~30%), while keeping the warranty intact since the silver is easily removable.
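For the curious, the ~30% figure follows from simple parallel-resistor math: the card senses current across a small shunt resistor, and silver paint in parallel lowers the sensed resistance, so the card under-reads power. A rough Python sketch with hypothetical resistance values (not a how-to):

```python
def effective_power_limit(limit_w, r_shunt_mohm, r_paint_mohm):
    """The card computes power from the voltage drop across a shunt
    resistor. Paint in parallel lowers the sensed resistance, so the
    card under-reads and the real limit scales by r_shunt / r_parallel."""
    r_parallel = (r_shunt_mohm * r_paint_mohm) / (r_shunt_mohm + r_paint_mohm)
    return limit_w * r_shunt_mohm / r_parallel

# Hypothetical: a 5 mOhm shunt with ~16.7 mOhm of silver paint across it
# under-reads by ~1.3x, so a 310 W limit really allows ~403 W.
print(round(effective_power_limit(310, 5.0, 16.7)))  # 403
```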

On the other hand, at least in Europe, for months now it has typically cost only an extra €50 to get an A chip over a non-A; for Americans, though, the EVGA Black Edition on the EVGA shop is still just $999, so you could save a big chunk (if it were ever in stock).


----------



## acmilangr

fleps said:


> rustyk said:
> 
> 
> 
> But, before I got to that point, I wanted to try and establish whether the card is faulty and, if not, whether it's a 'good' overclocker, in case I go on water further down the line. Having said that, I accept it's diminishing returns and don't want to spend tons of cash just to chase a few percentage points.
> 
> ps. Would it help if I posted the voltage/frequency curves for the 2 cards? I need to work out how to attach the screen captures first (noob error), but my reasoning would be that the OC scanner would give some indication as to what was achievable for a given voltage.
> I'd expect that a higher vertical curve would indicate that the card was capable of sustaining a higher frequency for a given voltage, which would make it a better card.
> I think I'm satisfied that both cards are stable at stock clocks BTW.
> 
> 
> 
> If you want to chase the absolute max core clock you should really follow Sajin guide to force-lock vCore on 1093mV (maximum allowed on 20 series).
> 
> https://forums.evga.com/Guide-How-t...-overclock-with-msi-afterburner-m2820280.aspx
> 
> The guide is a little confusing to read I must admit, so here my version/notes about it:
> 
> 1) Remember to disable any OC before starting this; you are supposed to do it at stock, so I recommend having a stock profile saved.
> 
> 2) Be aware that on the current version of MSI there's no "white line" anymore, it's red now.
> 
> 3) The tutorial is a little confusing but after re-reading it a few times I understood. Basically the golden rules are:
> - First, set the yellow line to the max vCore point (1093 mV) with CTRL + L and apply.
> - Then the "white" (now red) line will appear a few points to the left of the yellow line. Starting from the first point where the red line sits, set it to -1 and apply; the line will jump one point closer to the yellow line.
> - Keep doing this until the red line merges with the yellow line.
> - Then hold CTRL, drag the yellow 1093 mV point to whatever clock you want to OC, and apply.
> 
> *This will lock the OC/vCore even on idle.*
> So I recommend keeping a default profile with no OC (or a small one), and saving this result to another profile that you activate only when benchmarking/gaming; otherwise the card will run at its locked voltage and clock all the time.
> 
> Then you start to tweak it a bit, because the clock you set will likely drop as temperatures rise, so you need to test whether you can set it higher.
> I just left Heaven open, monitored, and adjusted the points a few times until I was happy with the results. For example, for my 2145 MHz stable clock (while gaming/benching), the 1093 mV point on my curve is actually set to 2175 MHz.
> I also recommend doing all this with zero memory OC first.
> 
> Hope it helps.
Click to expand...

Thanks for the explanation (of an already good tutorial.) 
I managed to lock the voltage successfully at idle, but when I run benchmarks there is throttling and the voltage drops (and so does the clock).
So what is the point of that? I don't see any advantage.


----------



## Kanashimu

I'm having a really strange issue where I can't start 3DMark after DDUing my drivers and installing the 430 NVIDIA drivers. Has anyone experienced something like this?


----------



## Cata79

yes, dxr_info.exe was hanging.


----------



## VPII

Running the KingPin BIOS now to test, instead of the XOC BIOS, just to have a V/F curve. At present I'm running the OC scanner and it's a little crazy: still at 1 of 4 and already at +480, which I know should be a no-go without extreme cooling. Make that +510 MHz. Well, I'll see, as the XOC has been pretty good except for not having a V/F curve, so I'm stuck at +135 MHz (basically 2145 MHz) due to voltage.


----------



## J7SC

Kanashimu said:


> I'm having a really strange issue where I can't start 3DMark after DDUing my drivers and installed the 430 Nvidia drivers. Has anyone experienced something like this?





Cata79 said:


> yes, dxr_info.exe was hanging.


 
...that happened to me, too, after DDU, fresh drivers, and a clean MSI AB install... SystemInfo would just sit there with the rotating circle without starting the benchmark... oddly enough, plugging the network cable in seems to have solved it after a few reboots


----------



## fleps

acmilangr said:


> Thanks for the explanation (of an already good tutorial.)
> I managed to lock the voltage with success in idle. But when I ran benchmarks there is throttling and the voltage drops (so the clock ).
> So what is the point of that? I don't see any advantage.


The point is that with a proper curve OC you can, 99% of the time, reach a higher OC, because even if there's downclocking due to heat/power, you can keep quite a few steps down still locked at 1093 mV, which results in a more stable OC.

What is throttling on your card; what is perfCap reporting? Power? Voltage? Heat?

Clocks will of course be reduced due to heat; Boost 4.0 starts down-clocking above 38°C, if I remember correctly.

You will need to tweak and try different values; curve OC is not as simple as just putting +MHz in a field.

The curve you set will also have boost added on top of it depending on temperature, so be aware of that so you don't overshoot the core.

The tutorial is just the starting point; you need to experiment with the curve (mainly the last 5 or 6 points before the yellow line).

For example, on my card I actually have the yellow line at 1100 mV on 2190, with the 1093 mV point also on 2190, then the previous 2 dots on 2175, then the previous 5 dots on 2160.

This peaks at 2220 MHz while below 37°C (which is only for a second, of course) and gives me a realistic/stable 2175 MHz clock in benchmarks now that the card is on an AIO.
And even if it drops to 2160 MHz because of temps, the way my curve is configured it still stays at 1093 mV, which results in a more stable OC.

If I just use manual OC, the highest I can get is 2130 MHz at 1050 mV, which eventually drops to 2115 and even 2100.
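To see why the flattened top helps, here's a toy Python model of the idea; the curve values roughly mirror mine, but the 15 MHz step size is a guess, not NVIDIA's actual Boost behavior:

```python
# Toy flattened curve near the top (mV, MHz): several points before the
# 1093 mV maximum share the same clock region, as described above.
curve = [(1037, 2160), (1050, 2160), (1062, 2160),
         (1075, 2175), (1081, 2175), (1093, 2190)]

def point_after_downclock(curve, steps_down, step_mhz=15):
    """When Boost pulls the clock down by N steps due to heat, the card
    runs the highest-voltage point at or below the reduced target, so a
    flattened top keeps it on a near-max-vCore point."""
    target = curve[-1][1] - steps_down * step_mhz
    return max((mv, mhz) for mv, mhz in curve if mhz <= target)

print(point_after_downclock(curve, 1))  # (1081, 2175): voltage barely drops
```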

Hope this helps =)


----------



## dentnu

I have an MSI RTX 2080 Ti Gaming X Trio and I'm interested in trying the KingPin 2080 Ti BIOS. I've read conflicting reports about it working on other cards. Is it safe to flash my card with one of the KingPin BIOSes? Thanks


----------



## zack_orner

dentnu said:


> I have a MSI RTX 2080 Ti GAMING X TRIO and I am interested in trying the kingpin 2080 ti bios. I read conflicting reports on it working on other cards. Is it safe to update my card to one of the kingpin bios ? Thanks


You can try it, but when I tried it on my Trio it got stuck at 300 MHz.



----------



## dangerSK

dentnu said:


> I have a MSI RTX 2080 Ti GAMING X TRIO and I am interested in trying the kingpin 2080 ti bios. I read conflicting reports on it working on other cards. Is it safe to update my card to one of the kingpin bios ? Thanks


It works fine on my 2080 Ti Lightning. Well, you can try; worst case, you flash back.


----------



## TK421

Did the 2000w bios link get taken down?




Anyone tested the kingpin bios on a stock pcb card yet? Where can I download it?


----------



## VPII

TK421 said:


> Did the 2000w bios link get taken down?
> 
> 
> 
> 
> Anyone tested the kingpin bios on a stock pcb card yet? Where can I download it?


I tested the KingPin BIOS on my Palit RTX 2080 Ti Gaming Pro OC and found the clocks are really low, I mean 1300 or thereabouts at stock, so I flashed back to the XOC BIOS. Maybe I'm just unlucky, but it really does not work with my card.


----------



## iRSs

Bequis said:


> Palit RTX 2080 Ti Dual (Non-A chip) can go up to 310W with the latest bios.
> 
> This could be updated to the first message.


Actually, the 310W BIOS has a different identification number,
and its build date is January 31, 2019.

ps: actual screenshots of my palit 2080ti dual











----------



## iRSs

TK421 said:


> Did the 2000w bios link get taken down?
> 
> 
> 
> 
> Anyone tested the kingpin bios on a stock pcb card yet? Where can I download it?


what 2000w bios?



----------



## ENTERPRISE

Well, I finally got my GPUs after the nightmare with two faulty Aorus Extreme GPUs with fan failures (rubbish design with poor tolerances). I decided to go with 2x FTW3 Ultra Hybrid 2080 Tis; so far so good. It will be good to actually get gaming again.


----------



## bigjdubb

ENTERPRISE said:


> Well I finally got my GPU's after the the nightmare with two faulty Aorus Extreme GPU's with fan failures (Rubbish design with inefficient tolerances). I decided to go with 2x FTW3 Ultra Hybrids 2080Ti's, so far so good. Will be good to actually get gaming again.


I would be very curious to see what sort of clock speeds and temps you get during gaming sessions with the Hybrid. I'm wondering if it would be worth getting a hybrid kit for my FTW3. The air cooler on the FTW3 Ultra is pretty darn good, so it makes it hard to decide.


----------



## dangerSK

TK421 said:


> Did the 2000w bios link get taken down?


Only 2000W bios I know is the OC Lab NDA bios which was for sure never public, so yeah


----------



## J7SC

ENTERPRISE said:


> Well I finally got my GPU's after the the nightmare with two faulty Aorus Extreme GPU's with fan failures (Rubbish design with inefficient tolerances). I decided to go with 2x FTW3 Ultra Hybrids 2080Ti's, so far so good. Will be good to actually get gaming again.



...that seemed like a long replacement time ...and for playing Crysis, you can try out that XOC 1000w Bios discussed elsewhere in this thread


----------



## TK421

iRSs said:


> what 2000w bios?
> 





dangerSK said:


> Only 2000W bios I know is the OC Lab NDA bios which was for sure never public, so yeah



But why is it listed and crossed out in the OP though? Seems misleading to me.


----------



## dangerSK

TK421 said:


> But why is it listed and crossed out in the OP though? Seems misleading to me.


Probably because some idiot leaked it and it was accessible for a short period of time; I'm sure Galax doesn't want a 2000W NDA BIOS floating around. There are a few more NDA BIOSes not listed in this forum, for example MSI ones, but I can't really say more, as it's NDA stuff.
BTW, that BIOS is useless unless you have a genuine OC Lab card, as you need the NVVDD Tool for proper voltage control.


----------



## toncij

bigjdubb said:


> I would be very curious to see what sort of clock speeds and temps you get during gaming session with the hybrid. I'm wondering if ti would be worth getting a hybrid kit for my FTW3. The air cooler on the FTW3 Ultra is pretty darn good so it makes it hard to decide.


Probably yes, for sustained clocks. But check whether you won or lost the silicon lottery before doing that (max fan RPM). Water really kicks it up: I'm now running dual Aorus cards AIO-cooled, and they maintain 2085-2115 MHz depending on ambient (23-27°C), with the GPU staying under 60°C.


----------



## ENTERPRISE

bigjdubb said:


> I would be very curious to see what sort of clock speeds and temps you get during gaming session with the hybrid. I'm wondering if ti would be worth getting a hybrid kit for my FTW3. The air cooler on the FTW3 Ultra is pretty darn good so it makes it hard to decide.


Under full-load benchmarking the GPU sits at 53-57°C; the coolers do a great job. As for clocks, I'll have to check again, but they were between 1900-2010 MHz. That's with GPU Boost only, no manual OC or additional voltage applied.



J7SC said:


> ...that seemed like a long replacement time ...and for playing Crysis, you can try out that XOC 1000w Bios discussed elsewhere in this thread


It was a long time. I was originally waiting for the MSI Seahawk to come in stock; it never did, so when the FTW3 Hybrids came out I jumped on them instead. Crysis? Huh? Where did you get that from, lol. As for the XOC BIOS, I'm giving that a miss. The EVGA FTW3 BIOS is just fine IMHO; no point in flashing to the XOC.


----------



## bigjdubb

ENTERPRISE said:


> Full load bench marking GPU sits at 53-57c The coolers do a great job. As for clocks will have to check again but they were between 1900-2010. That is with GPU boost, no manual OC or additional voltage applied.


Those are the clocks I get with the air-cooled FTW3 in games. I haven't been running mine with an overclock (just the power slider moved up and fans at 70%), because there really doesn't seem to be much difference in performance, and for some reason Division 2 seems to crash fairly often if I change something even a little bit. 



ENTERPRISE said:


> ....As for the XOC BIOS, giving that a miss. The EVGA FTW3 Bios is just fine IMHO. No point in flashing to the XOC.


I am also sticking with the stock BIOS. I haven't been going much over 105% (peak) in games, and the only time I ever benchmark is when I get new hardware or a new benchmark comes out, so I really don't think I would get anything out of it.


----------



## fleps

bigjdubb said:


> Those are the clocks I get with the air cooled FTW3 in games. I haven't been running mine with an overclock (power slider moved up and fans to 70%) because there really doesn't seem to be much difference in performance and for some reason Division 2 seems to crash fairly often if I change something even a little bit.


Are you using DX12 mode? The Division 2 is still having issues with DX12 mode.

The game engine is annoyingly sensitive to overclocks: my 2175 MHz OC is pretty stable in all benchmarks and even in the game's own benchmark, but while playing it will randomly crash after about an hour of gameplay, especially in a group.

But using DX11 it's possible to overclock; mine sits at 2145/2130 and doesn't crash.

Give it a try.


----------



## toncij

fleps said:


> Are you using DX12 mode? The Division 2 is still having issues with DX12 mode.
> 
> The game engine is annoying sensitive to Overclock, my 2175Mhz OC is pretty stable on all benchmarks and even on the game benchmark itself, but while playing it will randomly crash after like 1h gameplay specially if in group.
> 
> But using DX11 it's possible to overclock, mine sits at 2145/2130 and doesn't crash.
> 
> Give it a try.


On air?


----------



## fleps

toncij said:


> On air?


No, after I installed a Kraken X42 I was able to bump the core clock, of course. And my card is a 2080, non-Ti.

But I'm just reporting that TD2 is particularly sensitive to OC: when I was on air my OC was stable at 2145, but in TD2 I had to run it lower, around 2115, so the information applies regardless of the clock or card you have.


----------



## J7SC

ENTERPRISE said:


> (...)
> 
> ....so when the FTW3 Hybrids came out I jumped on them instead. *Crysis ? Huh ?* Where did you get that from lol. As for the XOC BIOS, giving that a miss. The EVGA FTW3 Bios is just fine IMHO. No point in flashing to the XOC.


 
When I was young(er), 'Crysis GPU' jokes still worked... 




dangerSK said:


> Only 2000W bios I know is the OC Lab NDA bios which was for sure never public, so yeah


 
...yeah, I was watching a Galax HOF OC Lab a while back on eBay, but so were a few other folks, and the price... I'm also thinking that whoever sells those now has probably put them to 'good LN2 use' already, or maybe thinks it's not such a good sample, but who knows... Speaking of good samples, here's Rauf's latest HWBot 3DMark11 submission (WR) with his OC Lab: 2750 MHz on the GPU, with Micron memory on the card...


----------



## bigjdubb

fleps said:


> Are you using DX12 mode? The Division 2 is still having issues with DX12 mode.
> 
> The game engine is annoying sensitive to Overclock, my 2175Mhz OC is pretty stable on all benchmarks and even on the game benchmark itself, but while playing it will randomly crash after like 1h gameplay specially if in group.
> 
> But using DX11 it's possible to overclock, mine sits at 2145/2130 and doesn't crash.
> 
> Give it a try.


The game doesn't play well for me at all in DX12, lots of short little freezes. When I first started playing I was able to run the game with my overclock, but after a few patches I started having trouble. It's only about a 100 MHz difference between overclocked and not, so I just leave it stock (save for the power slider and fans). I never thought about it until you said it, but I get more crashes when I'm in a group as well.

I can run overclocked in other games with no problems but it's such a small difference I don't usually bother with selecting the profile.


----------



## jura11

fleps said:


> No, after I installed Kraken x42 I was able to bump the core clock of course. An my card is a 2080 non-TI.
> 
> But just reporting that TD2 is particularly sensitive to OC, when I was on air my OC was stable at 2145 but on TD2 I had to use it lower like 2115, so the information applies regardless the clock / card you have.


Hi there 

I wouldn't compare the RTX 2080 vs 2080 Ti in terms of clocks. I owned (and still have) a GTX 1080 Ti and a GTX 1080, which OC differently: my Manli GTX 1080 will easily do 2164 MHz at 1.08V, while my EVGA GTX 1080 Ti will do 2113 MHz at 1.07V; at 1.093V it can do 2152 MHz at best, but at that voltage I'm just hitting the power limit.

I also tried an Asus RTX 2080 with an EK Vector waterblock, and on that card I could do 2145 MHz at stock voltage; in some benchmarks I could easily do 2160 MHz with an adjusted voltage curve.

Usually a non-Ti xx80 will OC better than the Ti models, although I've only tried one RTX 2080.

I have an Asus RTX 2080 Ti Strix here for testing, which I should be putting under water in the next week or two, depending on whether the Heatkiller IV is available over here; I wish the Aquacomputer RTX 2080 Ti waterblock were available for this card. 

Hope this helps 

Thanks, Jura


----------



## TK421

dangerSK said:


> Probably because some idiot leaked it and it was accessible for a short period of time, Im sure Galax dont want 2000W NDA bios floating around  There are few more NDA bioses that are not listed in this forum for example MSI ones, but I cant say really more as its NDA stuff.
> BTW that bios is useless unless you have genuine OC Lab card as you need NVVDD Tool for proper voltage control.





Hmm that's sad.


On stock voltage the 2000w bios wouldn't make any difference would it?


----------



## fleps

jura11 said:


> Hi there
> 
> I wouldn't compare RTX 2080 vs 2080Ti in therms of clocks [...]


I think you misread my post or didn't see the initial message, as that was a follow-up to another question =)

I was not comparing clocks; it was a conversation about The Division 2 engine being very sensitive to overclocks regardless of the card =)


----------



## fleps

TK421 said:


> Hmm that's sad.
> On stock voltage the 2000w bios wouldn't make any difference would it?


If perfCap is reporting POWER, it would make the same difference as any other BIOS with a higher TDP (until perfCap stops reporting Power).

If perfCap is heat or voltage, there's no advantage in raising the power limit.
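In other words, the rule of thumb can be written down directly; the reason strings below are just labels mirroring what monitoring tools like GPU-Z report:

```python
def higher_tdp_bios_helps(perfcap_reasons):
    """A higher-TDP BIOS only helps while the limiter is power ("Pwr").
    Thermal ("Thrm") or voltage ("VRel"/"VOp") caps need better cooling
    or more vCore instead."""
    return "Pwr" in perfcap_reasons

print(higher_tdp_bios_helps({"Pwr", "VRel"}))  # True: power-limited
print(higher_tdp_bios_helps({"Thrm"}))         # False: heat-limited
```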


----------



## dangerSK

TK421 said:


> Hmm that's sad.
> 
> 
> On stock voltage the 2000w bios wouldn't make any difference would it?


Exactly, the same ****ty 1.093V. I'm so glad I don't have a core voltage limit anymore; it was very annoying.


----------



## VPII

fleps said:


> If perfCap is reporting POWER, it would make the same difference as any other bios with higher TDP (until perfCap stop reporting Power)
> 
> If perfCap is heat or voltage, no advantage on expanding power limit.


With the 1000W XOC BIOS I can confirm that the only performance cap I have is temps. To lift that cap I dropped my rad in a bucket of ice water; speeds actually went up until I reached 31°C, and then the clock stayed where I set it right through, as the card stayed below 40°C.


----------



## ENTERPRISE

J7SC said:


> When I was young(er), 'Crysis GPU' jokes still worked...
> 
> 
> 
> 
> 
> ...yeah, was watching a Galax HoF OCLab a while back on eBay, but so were a few other folks, and the price  ...also thinking that whoever sells those now probably has put them to 'good LN2 use already', or may be thinks it's not such a good sample, but who knows.... Speaking of good samples, here's Rauf latest HWBot 3DM11 sub (WR) with his OClab...2750MHz for the GPU - and Micron memory on the card...
> 
> 
> 


Haha, that joke went right over my head.

Quick question: last time I looked, the EVGA FTW3 Ultra Hybrids had a 1000W BIOS? Can anyone confirm?


----------



## dentnu

ENTERPRISE said:


> Haha that joke went right over my head.
> 
> Quick question, last time I looked the EVGA FTW3 Ultra Hybrids have a 1000Watt BIOS ? Anyone confirm ?


I don't think so; from what I've read on here, the only 1000W BIOS is the XOC BIOS.


----------



## BIOSbreaker

Hey folks,

Wanted to upgrade to a 2080, but long story short: after telling myself I wouldn't spend 50% more money for a 20-30% performance uplift, I jumped on a deal that cost about 30% more than the EVGA 2080 XC Gaming Ultra (because of that sweet 2.75-slot cooler design; the 980 Ti dumped enough heat into the case as it was) and got myself a pre-owned EVGA 2080 Ti Black Edition from an official retailer for €1020. The cheapest 2080 Tis go for around €300 more, give or take a few euros. The poor bloke at the counter tried to convince me to order an "extended warranty" from the retailer, and when I told him about EVGA's reputation for customer service he didn't really want to believe me... too bad 

I want to share my very first impressions here. The packaging is bare-bones (fine by me, I don't need more clutter), with the card nicely tucked into a thick, hard foam protective layer. I like it aesthetically: it's elegant while being pretty sturdy, and it feels quite heavy in hand. It's surprisingly quiet under load, and the fans don't sound obnoxious at higher RPMs, so good acoustics there as well. It can run north of 80°C when stressed and OC'd in Superposition, but it seems to dissipate heat into the case better than my 980 Ti did. During a few short Superposition runs and a small Rise of the Tomb Raider session, it has held a +100 core / +500 memory OC so far (lucky with Samsung chips). It boosts to 1800-ish under the power limit, and to the 1950s while playing ROTTR.

The only thing holding it back is the power limit, but I'm not paying €400 more for a few extra frames I wouldn't even notice. Overall I'm super happy; this piece of silicon is mighty impressive and I can't wait for more tests (cough, gaming sessions). My excuse of upgrading for Cyberpunk 2077 when it's out didn't quite hold up.

Cheers!


----------



## bigjdubb

BIOSbreaker said:


> Hey folks,
> 
> Wanted to upgrade to 2080 but long story short - after telling myself I won't be spending 50% more money for 20-30% more performance uplift, I had to jump on a deal that cost 30-ish per cent more than EVGA 2080 XC Gaming Ultra (because of that sweet 2,75slot cooler design, the 980Ti dumped enough heat into the case as it was) and got myself a pre-owned EVGA 2080Ti Black Edition from an official retailer for 1020€. The cheapest 2080Tis go for around 300€ more, give or take a few euros. The poor bloke at the counter tried to convince me about ordering extra "extended warranty" at the retailer and when I told him about the reputation of EVGA regarding their customer service he didn't really want to believe me... too bad
> 
> I want to share my very first impression here - the packaging is bare bones (fine by me, don't need more clutter), card nicely tucked into a thick, hard foam protective layer. I like it aesthetically - it's elegant while being pretty sturdy and feels quite heavy in hand. It's surprisingly quiet under load and the fans don't sound obnoxious on higher RPMs - good acoustics there as well. It can run north of 80°C when stressed and OCd in Superposition but it seems it dissipates heat better into the case than my 980Ti. During my few short superposition runs and a small Rise of the Tomb Raider session, it can hold a +100 core +500 memory OC so far (lucky with Samsung chips). It boosts 1800-ish under power limit, and to 1950s while playing ROTR.
> 
> The only thing holding it back is the power limit but I ain't paying 400 more € for a few more frames that I wouldn't even recognize. Overall I'm super happy, this piece of silicon is mighty impressive and I can't wait for more tests (cough cough gaming sessions). My excuse to upgrade for Cyberpunk 2077 when it's out didn't quite live it up.
> 
> Cheers!


There's a good chance that one of the higher power limit BIOSes will work on your card. Did you get the XC or non-XC version? I don't have any power limit issues on the FTW3 Ultra (373W limit), so if you can find a BIOS in the 400W range that works on your card, it should resolve the power issue. Do some research to make sure it won't cause other problems, though.


----------



## Tragic

*2080 Ti MSI Gaming X Trio*

Did mine hit a lottery?


2145mhz skydiver
2145mhz fire strike extreme and ultra

both timespys 2130mhz
GDDR6 OCs to +1575mhz 

no mods & stock bios.
on AIR with ambient @ 20C under load it never passes 62C

fan curve 75% fixed 



I'm thinking about flashing a new BIOS for increased perf, but I never have before, and this card cost more than my first car.
Should I?


----------



## Martin778

Looking pretty nice! I can only do 2100 on the L-Z.

By the way, how do I decide which of the 2 BIOSes I want to flash?!


----------



## ESRCJ

Tragic said:


> Did mine hit a lottery?
> 
> 
> 2145mhz skydiver
> 2145mhz fire strike extreme and ultra
> 
> both timespys 2130mhz
> GDDR6 OCs to +1575mhz
> 
> no mods & stock bios.
> on AIR with ambient @ 20C under load it never passes 62C
> 
> fan curve 75% fixed
> 
> 
> 
> I'm thinking about VFlashing a new bios for increased perf but never have b4 and this card cost more than my first car.
> should I?


Founders Edition? We can't make any conclusions about the silicon lottery unless we get some voltages. Also, you're definitely going to hit the power limit in Time Spy, so that 2130 MHz means nothing, since it might not even be holding that frequency while the benchmark is running.

I would say a "golden chip" can hit over 2200MHz at 1.093V in heavy loads.


----------



## ENTERPRISE

A possible change of tune on my end with the Asus XOC BIOS. Having used the XOC before on a prior air-cooled 2080 Ti, I was not all that impressed due to the temps, and it made little sense to continue with it; however, now that I have the FTW3 Ultra Hybrid, it may be a more sensible option this time. 

So my question is: with the FTW3 Ultra Hybrid having a BIOS switch to swap between ''Standard'' and ''OC'', can I assume I can flash the Asus XOC over the ''OC'' BIOS while keeping the stock EVGA BIOS intact on the ''Standard'' side, allowing me to switch back and forth easily between the two?

As my card has a custom PCB, will the XOC BIOS function, or just brick my card?


----------



## BIOSbreaker

bigjdubb said:


> There is a good chance that one of the higher power limit BIOS will work on your card, did you get the XC or non XC version? I don't have any power limit issues on the FTW3 ultra (373 watt limit) so if you can find a BIOS in the 400 watt range that works on your card it should resolve the power issue. Do some research to make sure it won't cause other issues though.


Hi and thanks for the input,

I got the Black Edition (non-XC, reference board). I've searched around and found out (on Reddit, of all places) that there's a Palit 2080 Ti Dual BIOS that works with the 2080 Ti Black and can increase the power target to 310W.

I'm a bit tempted to try it. On the other hand, after testing the card a little more, it seems to be stable (need more testing to be 100% sure, though) at +200 core and +1000 memory, where it starts hitting thermal headroom at 81°C under load with fans at 80%. That heat dumped into the case in turn heats up my CPU by about 10°C, and fans running above 80% sound like a jet engine, breaking the tolerable-acoustics barrier for me. Maybe in time I'll get the Arctic Accelero Xtreme IV Rev.2, or if Raijintek makes a Morpheus II revision for 2080 Tis where I could snap on 2x NF-A12x25, that would be lovely.

So far I'm super happy with what I got for the money.


----------



## dante`afk

Tragic said:


> Did mine hit a lottery?
> 
> 
> 2145mhz skydiver
> 2145mhz fire strike extreme and ultra
> 
> both timespys 2130mhz
> GDDR6 OCs to +1575mhz
> 
> no mods & stock bios.
> on AIR with ambient @ 20C under load it never passes 62C
> 
> fan curve 75% fixed
> 
> 
> 
> I'm thinking about VFlashing a new bios for increased perf but never have b4 and this card cost more than my first car.
> should I?



nope, lottery win is 2200+


----------



## zhrooms

TK421 said:


> Did the 2000w bios link get taken down?





dangerSK said:


> Only 2000W bios I know is the OC Lab NDA bios which was for sure never public, so yeah





TK421 said:


> But why is it listed and crossed out in the OP though? Seems misleading to me.


 
It was never uploaded because it's confidential (not to be accessed by unauthorized persons). I intend to do a thorough review of the BIOS (soon™) with comparisons, benchmarks, details and more, because the BIOS is *very interesting*, as it works on *reference PCB* and unlocks a *higher voltage*.
 


dangerSK said:


> Probably because some idiot leaked it and it was accessible for a short period of time, Im sure Galax dont want 2000W NDA bios floating around  There are few more NDA bioses that are not listed in this forum for example MSI ones, but I cant say really more as its NDA stuff.
> BTW that bios is useless unless you have genuine OC Lab card as you need NVVDD Tool for proper voltage control.


 
That is correct, it was posted here *in this thread* not long ago, and it is actually not useless since it works on essentially every card, and unlocks a higher voltage as previously mentioned.
 


J7SC said:


> ...yeah, was watching a Galax HoF OCLab a while back on eBay, but so were a few other folks, and the price  ...also thinking that whoever sells those now probably has put them to 'good LN2 use already'


 
That's the gist of it yes, pretty safe to say most of those cards were abused on LN2.
 
 
*☣* 

Do not expect it to be uploaded or leaked again; it really is *dangerous* (likely to cause problems or adverse consequences) for casual users. Not only is there *no power limit* (unlimited), but combined with the *higher voltage* it literally cannot be cooled by your average air cooler: even with the fans blasting at full speed, the temperature reached *over 80°C* on an open test bench with the 3-slot cooler pictured below. Software also reported peaks of close to 600W power usage in Time Spy Extreme (note that this power reading is neither double- nor triple-checked, but it's not far off in any case), which could be a real issue for users with a weaker PSU or those using 1x8-pin to 2x8-pin (Y) cables. Thankfully it does feature a working curve editor, so it's usable on air with a low fan speed after some adjustments.

It's definitely more powerful than the Strix 1080 Ti XOC BIOS, and far from risk-free, as you should have figured by now; we also don't know how the increased voltage affects the lifespan of the GPU, or whether any NVIDIA safety features are disabled or non-functional, so a lot more testing has to be done. For now I'd recommend the Strix 2080 Ti XOC BIOS: it runs the same (or similar) voltage under intense load as the 380W BIOS but doesn't throttle, and the lack of a curve editor makes it safe to use, as it won't reach 1.093V in intense benchmarks; in regular gaming you'd realistically only ever see around 400W.
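For context on the PSU concern, some rough arithmetic, assuming a reference board fed by the slot plus two 8-pins and a naive even split of the load:

```python
PCIE_SLOT_W = 75        # PCIe slot power spec
EIGHT_PIN_SPEC_W = 150  # spec rating per 8-pin PCIe connector

def per_connector_draw(total_w, n_eight_pin=2):
    """Naive even split of board power beyond the slot across the 8-pins."""
    return (total_w - PCIE_SLOT_W) / n_eight_pin

# At a ~600 W peak, each 8-pin carries ~262 W, about 1.75x its 150 W
# rating; a single Y-cable would funnel both shares down one lead.
print(per_connector_draw(600))  # 262.5
```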


----------



## VPII

zhrooms said:


> It was never uploaded because it's confidential (not to be accessed by unauthorized persons). I intend to do a thorough review of the BIOS (soon™) with comparisons, benchmarks, details and more, because the BIOS is *very interesting*, as it works on *reference PCB* and unlocks a *higher voltage*.
> 
> 
> 
> That is correct, it was posted here *in this thread* not long ago, and it is actually not useless since it works on essentially every card, and unlocks a higher voltage as previously mentioned.
> 
> 
> 
> That's the gist of it yes, pretty safe to say most of those cards were abused on LN2.
> 
> 
> *☣*
> 
> Do not expect it to be uploaded or leaked again, it really is *dangerous* (likely to cause problems or to have adverse consequences) to casual users, not only is there *no power limit* (unlimited), but combined with the *higher voltage* it literally cannot be cooled by your average air cooler, even with the fans blasting at full speed the temperature reached *over 80°C*, in an open test bench with the 3-slot cooler pictured below. As well as software reporting peaks of close to 600W power usage in Time Spy Extreme (note that this power reading is neither double nor triple checked to be correct, but it's not far off in any case), which could be a real issue for users with a weaker PSU or using 1x8-pin to 2x8-pin cables (Y-Cable). It does feature a working curve editor thankfully, so it's usable on Air with a low fan speed after some adjustments.
> 
> It's definitely more powerful than the Strix 1080 Ti XOC BIOS, and is far from risk free as you should have figured by now, we also do not know how the increased voltage affects lifespan of the GPU and if any safety features by NVIDIA are disabled or non-functional, a lot more testing has to be done. For now I'd recommend the Strix 2080 Ti XOC BIOS as it runs the same (or similar) voltage under intense load as the 380W BIOS, but doesn't throttle, the lack of curve editor makes it safe to use as it won't reach 1.093V in intense benchmarks, in regular gaming you'd realistically only ever see around 400W.


Hi @zhrooms, I've been using the Strix XOC 1000W BIOS for the past month or so. It does a great job keeping my overclock steady right through a bench run if I keep the core temps below 40°C, or more like 42°C from what I've seen. I understand this BIOS does not have a voltage curve, but therein lies the problem I am facing: when I used the 380W BIOS I was able to run speeds up to 2175MHz in both TS and TSE, but at present I am stuck at 2145MHz. To try to get past the curve issue I flashed my card with the Kingpin BIOS for its 520W TDP, as the most I've drawn with the XOC BIOS was around 440-450W, and I wanted to use it primarily to do an OC scan.

But unfortunately, when I ran a benchmark to check the normal stock speed, TS started at 36fps, and checking the clocks with GPU-Z showed them hovering just above or below 1000MHz. Interestingly, when I did the OC scan, the speeds it set were ridiculous, around 2600MHz or thereabouts, which upon reboot resulted in waves of pixels and then a black or white screen. Needless to say, I flashed back to the XOC BIOS. Unfortunately the most voltage I've seen pushed through is 1.043V, and only sometimes a little over 1.05V. I need 1.063V to run 2175MHz on the core; I've never once seen my card give 1.093V.


----------



## zhrooms

VPII said:


> I've been using the Strix XOC 1000W BIOS for the past month or so. It does a great job keeping my overclock right through the bench. I do understand that this BIOS does not have a vcurve, but in that lies the problem I am facing. When I used the 380W BIOS I was able to run speeds up to 2175MHz in both TS and TSE, but at present I am stuck at 2145MHz. To try and get past the vcurve issue I flashed my card with the Kingpin BIOS.
> 
> Needless to say I flashed back to the XOC bios. Unfortunately the most voltage I've seen being pushed through was 1.043 and only sometimes a little over 1.05v. I need 1.063 to run 2175mhz core. I've never ever seen my card give 1.093v to the core.


 
Yes, the Kingpin 3-connector BIOS is not compatible. And yes, the Strix XOC BIOS default curve does not go up to 1.093V, and since we can't change the curve manually it's stuck at around 1.050V max. It has to do with NVIDIA Boost; the same thing happens on other cards without this BIOS.

You'll simply have to learn to be happy with the Strix XOC, or shunt mod it with a conductive (silver) pen for ~$25 and it'll run 1.093V using a custom curve (the procedure is easy and safe, and the warranty stays intact since it's removable). But you should really be happy with 2100MHz; that is faster than most cards.
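As a back-of-the-envelope illustration of why the shunt mod raises the effective power limit: painting a conductive trace over a current-sense shunt adds a parallel resistance, so the controller under-reads current and the BIOS power limit gates a higher real wattage. The resistor values below are hypothetical round numbers for illustration only, not measurements from any particular card.

```python
# Sketch of shunt-mod math. Assumptions (hypothetical, not measured):
# the board senses current across a 5 mOhm shunt, and the conductive-pen
# trace adds roughly 15 mOhm in parallel with it.

def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

shunt = 0.005        # original shunt resistance, ohms (assumed 5 mOhm)
pen_trace = 0.015    # added parallel path, ohms (assumed 15 mOhm)

r_eff = parallel(shunt, pen_trace)   # 3.75 mOhm effective
underread = r_eff / shunt            # controller sees 75% of real current
real_limit = 380 / underread         # a 380 W BIOS limit now gates ~507 W

print(f"effective shunt: {r_eff * 1000:.2f} mOhm")
print(f"reported power is ~{underread:.0%} of actual")
print(f"380 W limit effectively becomes ~{real_limit:.0f} W")
```

Since the pen trace sits on top of the shunt and can be cleaned off, the mod is reversible, which is the "warranty intact" point above.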


----------



## VPII

zhrooms said:


> Yes, Kingpin 3 connector BIOS is not compatible. And yes, the Strix XOC BIOS default curve does not go up to 1.093V, and since we can't change the curve manually it's stuck at around 1.050V max, it has to do with NVIDIA Boost, as the same thing happens on other cards without this BIOS.
> 
> You'll simply have to learn to be happy with Strix XOC or shunt mod it with Conductive (Silver) Pen for $25 and it'll run 1.093V using custom curve (procedure is easy and safe, warranty intact since it's removable). But you should be happy with 2100MHz really, that is faster than most cards.


Hi @zhrooms, a conductive pen isn't really much of an option here, seeing that I am in South Africa. However, my friend from The Overclocker online magazine told me I can also use an HB pencil to do it, but I'll wait for him to show me when he visits me in Cape Town.


----------



## kx11

Basically, don't mess with other GPUs' BIOS files; only Strix/Aorus cards can do that.


----------



## VPII

kx11 said:


> Basically, don't mess with other GPUs' BIOS files; only Strix/Aorus cards can do that.


Sorry @kx11, but my Palit runs pretty well with the Strix XOC BIOS.

Sent from my SM-G960F using Tapatalk


----------



## J7SC

ENTERPRISE said:


> A possible change of tune my end with the Asus XOC BIOS. Having used the XOC before on a prior air cooled 2080Ti, I was not all that impressed due to the temps and it made little sense to continue with it, however now I have the FTW Ultra Hybrid, it may be a more sensible option this time.
> 
> So my question is, with the FTW3 Ultra Hybrid having a BIOS switch to swap between ''Standard'' & ''OC'', can I assume that I can flash the Asus XOC over the ''OC'' BIOS while keeping the stock EVGA BIOS intact on the ''Standard'' side, allowing me to switch back and forth easily between BIOS types?
> 
> As my card has a custom PCB, will the XOC bios function or just brick my card ?


 
...what with all the related discussions and disclaimers in mind re. that Asus-based XOC BIOS, including temps: yes, flash it over the 'OC' BIOS selection (after saving that original one first, of course). I would recommend MSI AB (and not on auto-load with Windows start), with the PL slider starting at 40%, and work your way up... from past posts by folks who have the XOC loaded successfully, 46% (460W) seems to be the max that can be expected with non-extreme PCB cards, but there's always variance.


----------



## ENTERPRISE

J7SC said:


> ...what with all the related discussions and disclaimers in mind re. that Asus-based XOC BIOS, including temps: yes, flash it over the 'OC' BIOS selection (after saving that original one first, of course). I would recommend MSI AB (and not on auto-load with Windows start), with the PL slider starting at 40%, and work your way up... from past posts by folks who have the XOC loaded successfully, 46% (460W) seems to be the max that can be expected with non-extreme PCB cards, but there's always variance.


Hey bud, 

Thanks for the reply, I may give it a go. Where do I get the XOC BIOS? Is it the one in the OP of the thread, as per: ASUS RTX 2080 Ti Strix OC Custom PCB (2x8-Pin) 1000W x 100% Power Target BIOS (1000W)


Or is there a different/better version? I assume not, but it's always good to ask before going to the trouble of flashing, haha.


----------



## zhrooms

J7SC said:


> From past posts by folks who have the XOC loaded successfully, 46% (460W) seems to be the max that can be expected with non-extreme PCB cards, but there's always variance.


 
46% doesn't mean anything other than being the cap the card can't exceed; you should just leave it at 100%.

Time Spy Extreme (GT2) wants to pull up to around 520W at 1.093V, but since the Strix XOC has no curve editor and the default curve doesn't (generally) allow the card to go over ~1.050V, it just happens to stop at around 440-480W (short peaks).
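For anyone confused by the percentages being traded here: on the Strix XOC BIOS the 100% power target is 1000W, so the usual Afterburner power-limit slider maps directly to a wattage cap. A trivial sketch of the conversion:

```python
# On the Strix XOC BIOS, the 100% power target is 1000 W, so the
# power-limit slider percentage converts straight to a wattage cap.

XOC_BASE_W = 1000  # 100% power target of the Strix XOC BIOS

def slider_to_watts(percent, base=XOC_BASE_W):
    """Convert a power-limit slider percentage to a wattage cap."""
    return base * percent / 100

for pct in (40, 46, 100):
    print(f"{pct}% -> {slider_to_watts(pct):.0f} W")  # 400, 460, 1000 W
```

This is why "leave it at 100%" is harmless advice with this BIOS: the card stops well short of the 1000W cap on its own.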
 


ENTERPRISE said:


> Where do I get the XOC Bios, is it the one in the OP of the thread as per : ASUS RTX 2080 Ti Strix OC Custom PCB (2x8-Pin) 1000W x 100% Power Target BIOS (1000W)


 
Yes, it's that one. Don't expect any improvement in games, since they use far less power than Time Spy Extreme, for example (~260W vs ~330W at 1905MHz 0.900V), and the Galax 380W BIOS at 1.050V can usually get by with its 380W; it doesn't need much more at that voltage, so the increased power limit doesn't do much. It takes really intense games or benchmarks for the added power limit to actually show an increase in performance. The loss of display outputs and the curve editor is a problem for many, so I'd generally not recommend it as a daily BIOS.


----------



## dangerSK

zhrooms said:


> It was never uploaded because it's confidential (not to be accessed by unauthorized persons). I intend to do a thorough review of the BIOS (soon™) with comparisons, benchmarks, details and more, because the BIOS is *very interesting*, as it works on *reference PCB* and unlocks a *higher voltage*.
> 
> 
> 
> That is correct, it was posted here *in this thread* not long ago, and it is actually not useless since it works on essentially every card, and unlocks a higher voltage as previously mentioned.
> 
> 
> 
> That's the gist of it yes, pretty safe to say most of those cards were abused on LN2.
> 
> 
> *☣*
> 
> Do not expect it to be uploaded or leaked again, it really is *dangerous* (likely to cause problems or to have adverse consequences) to casual users, not only is there *no power limit* (unlimited), but combined with the *higher voltage* it literally cannot be cooled by your average air cooler, even with the fans blasting at full speed the temperature reached *over 80°C*, in an open test bench with the 3-slot cooler pictured below. As well as software reporting peaks of close to 600W power usage in Time Spy Extreme (note that this power reading is neither double nor triple checked to be correct, but it's not far off in any case), which could be a real issue for users with a weaker PSU or using 1x8-pin to 2x8-pin cables (Y-Cable). It does feature a working curve editor thankfully, so it's usable on Air with a low fan speed after some adjustments.
> 
> It's definitely more powerful than the Strix 1080 Ti XOC BIOS, and is far from risk free as you should have figured by now, we also do not know how the increased voltage affects lifespan of the GPU and if any safety features by NVIDIA are disabled or non-functional, a lot more testing has to be done. For now I'd recommend the Strix 2080 Ti XOC BIOS as it runs the same (or similar) voltage under intense load as the 380W BIOS, but doesn't throttle, the lack of curve editor makes it safe to use as it won't reach 1.093V in intense benchmarks, in regular gaming you'd realistically only ever see around 400W.


When I played with the Galax BIOS, I think the max voltage was the well-known 1.093V without an option to raise it, because you would need the NVVDD tool, or am I missing something? Also, the BIOS is usable on air (though I don't recommend it) if you have one of the better coolers; I was fine on the Lightning. About leaking: it WILL eventually be leaked again, that's how people are. But your review might help educate about the "dangers" of using this BIOS for a longer period of time.


----------



## J7SC

ENTERPRISE said:


> Hey bud,
> 
> Thanks for the reply, may give it a go. Where do I get the XOC Bios, is it the one in the OP of the thread as per : ASUS RTX 2080 Ti Strix OC Custom PCB (2x8-Pin) 1000W x 100% Power Target BIOS (1000W)
> 
> 
> Or is there a different/better version ? I assume not but its always good to ask before going to the trouble of flashing haha.


 
Plenty of 'Asus Strix BIOS' files out there, so best to double-check, but yup, that's the one... it's got the right tag at the bottom: "Asus, Strix OC 2080 Ti XOC BIOS, NVIDIA GeForce RTX 2080 Ti, XOC BIOS (100% PL = 1000W) (1350 / 1750)"


----------



## MrGuru

zhrooms said:


> Yes it's that one. Don't expect any improvement in games since they use far less power than Time Spy Extreme as an example (~260W vs ~330W at 1905MHz 0.900V), and the Galax 380W at 1.050V can usually get by with 380W, it doesn't need much more at that voltage, so the increased power limit doesn't do much. It has to be really intense games or benchmarks for the added power limit to actually show an increase in performance. The loss of display outputs and curve editor is a problem for many, so I'd "generally" not recommend it as a daily BIOS.


So for a non-extreme overclocker such as myself.. would flashing my just-bought EVGA GeForce RTX 2080 Ti XC GAMING (11G-P4-2382-KR) to the Galax 380W BIOS even be worth it? I'm not planning on watercooling, but I've always tried to get my EVGA cards as high as they'll go stable and just leave it at that.

I plan on keeping this card for ~5 years, as I kept my 980 ti for just about 4 before upgrading. I play many games at 2560x1600, but it's sounding like flashing that BIOS to a higher power limit isn't worth my time or the loss of warranty?


----------



## BIOSbreaker

MrGuru said:


> So for a non-extreme overclocker such as myself.. would flashing my just-bought EVGA GeForce RTX 2080 Ti XC GAMING (11G-P4-2382-KR) to the Galax 380W BIOS even be worth it? I'm not planning on watercooling, but I've always tried to get my EVGA cards as high as they'll go stable and just leave it at that.
> 
> I plan on keeping this card for ~5 years, as I kept my 980 ti for just about 4 before upgrading. I play many games at 2560x1600, but it's sounding like flashing that BIOS to a higher power limit isn't worth my time or the loss of warranty?


Hey, I suggest stress testing and running whatever you can throw at it with the 130% power-limit BIOS that XC and higher cards possess, but I think you are going to be severely limited by the thermal/noise levels you can tolerate with the 2-slot solution on the "standard" XC Gaming. I got a very sweet deal on a BLACK edition so I jumped on it; it's a reference board with a 2-slot cooler and a "measly" 112% power limit. Judging by the pictures, the cooler seems to be the same one the XC has. I got the core OC'd to +175 and memory to +1000, and it is right on the verge of my thermal tolerance (82°C on core, heating my CPU by 10 additional degrees as the NH-D15 soaks up the hot air) and my noise tolerance (at 80% fan speed; anything higher is a jet engine).

I believe the XC Ultra Gaming, which is a 2.75-slot card, would run cooler and quieter, allowing more headroom to play with increased-wattage BIOSes. But that also depends on how you are constrained space-wise, and on the airflow in your case. Try to squeeze out whatever you can get under a prolonged stress test, and if you still haven't hit any of the thermal/noise/core clock limits, flash away.


----------



## Xeq54

I hope the 2000W BIOS gets leaked again; I'm not really a fan of these "I won't give it to you because you will hurt yourself" special-club guys who just get a feeling of false importance from it.


----------



## zhrooms

dangerSK said:


> But your review might help to educate about "dangers" of using this bios for a longer period of time.


 
Yeah, that's my intention; people deserve to know it exists and how it affects the 2080 Ti. After Palit updated the BIOS on their cheapest Non-A card, the Palit RTX 2080 Ti Dual, it became faster than 11 "premium" A-chip cards that cost $50-100 more. Meanwhile, ASUS has still not updated the BIOS on the ROG Strix OC, which remains at a measly 325W, essentially the same as the Founders Edition; it has roughly a top-3 PCB/VRM and a bottom-3 power limit, which is completely messed up.
 


MrGuru said:


> So for a non-extreme overclocker such as myself.. would flashing my just-bought EVGA GeForce RTX 2080 Ti XC GAMING (11G-P4-2382-KR) to the Galax 380W BIOS even be worth it? I'm not planning on watercooling, but I've always tried to get my EVGA cards as high as they'll go stable and just leave it at that.
> 
> I play many games at 2560x1600, but it's sounding like flashing that BIOS to a higher power limit isn't worth my time or the loss of warranty?


 
The MSI Gaming X Trio comes with a power limit of 330W and a boost clock of 1755MHz, one of the highest. This makes the card run (with no overclock) at 1905MHz on the default fan curve (very quiet); the card reaches 73°C at 1.018-1.031V.

So, is the power limit of 330W an actual problem here? The answer is: yes... and no.

After messing with the curve editor, I could get the card to run 1905MHz at just 0.900V, which only required ~290W! So the power limit was actually more than enough, but only at that lower voltage. Without the custom curve it ran at 1.018-1.031V, which bumped the power usage quite a bit closer to the limit, leaving no real room for any overclocking at that temperature.

Was I happy with this? Hell no. The MSI Gaming X Trio features one of the largest coolers on the market and can reach incredible temperatures: at the above 1905MHz and 0.900V it ran at just ~50°C with the fans on full speed, compared to the Palit 3-slot 2-fan cooler, which was not only at 60°C but also louder!

This showed that the MSI Gaming X Trio can provide amazing headroom for overclocking while remaining quiet, and this is where the power limit becomes a real issue: it's stuck at 330W while the cooler has no problem cooling an overclock that pulls 450W.

The Palit I just mentioned, on the other hand, ran 10°C hotter at the same clock speed and voltage and was louder, and getting the card to run quiet was much more difficult. In the end the final overclock was 1905MHz at 0.925V with a 75-77°C temp limit (fans under 40%); at this slightly bumped voltage and a few degrees more, I measured the power usage at ~310W.

So the conclusion is that if you have a massive cooler, there is most definitely a need for a higher power limit BIOS, and if you don't have a massive cooler, there isn't really any need to flash your card.

There are many cards on the market with just 2-slot 2-fan coolers; there is no way they can run quiet at over 1905MHz, and that means they won't require over 310W, so there's no need to flash. This also means the Non-A BIOS Palit released at 250/310W is enough to run one of the fastest boost overclocks, which is pretty amazing.

If you're looking for a very quiet 2080 Ti, there is no need to buy an A card now after the Palit 310W BIOS, as the coolers (cards) are not capable of running a faster overclock than the power limit allows.

Your EVGA XC Gaming has a 338W power limit, which is therefore more than enough if you intend to run the card quiet. Even if you run the fan a little faster than quiet, there is an extra 30W to use for an additional 50-75MHz (1950+ MHz); I've measured 1995-2010MHz at about ~360W. Only if you plan on running the fan at close to 100% would you really gain from flashing the 380W BIOS (an extra 40W), which gives you around ~2055MHz; at water temperatures it goes up to 2085-2100MHz.
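The measurements above line up roughly with the usual dynamic-power rule of thumb, P ≈ f·V² (ignoring leakage and temperature effects). A sketch of that scaling, anchored to the ~290W at 1905MHz / 0.900V data point quoted above; the clock/voltage pairs tried below are illustrative, and the outputs are ballpark estimates, not measurements:

```python
# Rule-of-thumb dynamic-power scaling, P ~ f * V^2, anchored to the
# ~290 W @ 1905 MHz / 0.900 V point measured above. Leakage and
# temperature are ignored, so treat the results as rough estimates.

def scale_power(p_ref, f_ref, v_ref, f_new, v_new):
    """Estimate power at a new clock/voltage from a reference point."""
    return p_ref * (f_new / f_ref) * (v_new / v_ref) ** 2

P_REF, F_REF, V_REF = 290.0, 1905.0, 0.900

for f_mhz, volts in [(1905, 1.031), (2055, 1.050), (2100, 1.093)]:
    est = scale_power(P_REF, F_REF, V_REF, f_mhz, volts)
    print(f"{f_mhz} MHz @ {volts:.3f} V -> ~{est:.0f} W")
```

The first estimate (~380W at stock-curve voltage) is consistent with why a 330W limit leaves no headroom, and the last lands in the 450-520W region discussed for 1.093V loads.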


----------



## dangerSK

Xeq54 said:


> I hope the 2000w bios gets leaked again, not really fan of this "I wont give it to you because you will hurt yourself" special club guys who just get the feeling of false importance from it.


I have that BIOS; it's not a bad BIOS, but for me it's useless without proper voltage control. But I feel you, the feeling of having a nice card without a way to unleash its full potential... I had that feeling for 2 months, so I can relate. Well, good for you, at some point it will get leaked like previous HOF NDA BIOSes, or you can try asking someone who has an OC Lab card.


----------



## Martin778

Martin778 said:


> Looking pretty nice! I can only do 2100 on the L-Z.
> 
> By the way, how do i decide which of the 2 BIOSes I want to flash?!


Anyone have an idea? NVflash64 -6 bios.rom seems to have flashed both at once...

I want the Kingpin one but EVGA is trying to be 'elite' again and has no stock for normal users.


----------



## zhrooms

Xeq54 said:


> I hope the 2000w bios gets leaked again, not really fan of this "I wont give it to you because you will hurt yourself" special club guys who just get the feeling of false importance from it.


 
It's more about legality (in accordance with the law). Some of us aren't anarchists.

The BIOS was specifically made for the Galax RTX 2080 Ti Hall of Fame OC Lab Limited Edition that was/is exclusively sold on galaxstore.net.

The card comes with a max power limit of 450W, but there was also an internal BIOS made for people who benchmarked it on LN2, also known as "XOC" (Extreme Overclock), that was obviously never released publicly, because why would it be? It eventually got out; the story here on OCN is that an unnamed person purchased an OC Lab second hand, and I believe a friend of his, who also owned an OC Lab, shared the BIOS with him, which he then posted. I'm assuming they were both unaware the BIOS worked on every single card on the market, not just the OC Lab.

An unknown number of people managed to download it before it was removed (by whom, I do not know, I never asked). I saved it for reasons related to the original post in this thread (collect every BIOS); I never assumed it worked on any card other than the OC Lab until someone pointed it out to me, so I tested it, and it did work.

Speaking of XOC, the Kingpin XOC BIOS was recently released publicly, but comes with multiple warnings as well as a locked fan speed of 100%.



EVGA said:


> The BIOS in this section is compatible only with EVGA RTX 2080 Ti KINGPIN card and will not work and may damage any regular 2080 Ti FTW3/SC or any other brand RTX card. It is provided AS IS, without any warranty for education purpose only. Do NOT hotlink this BIOS on forums.
> 
> All fans are 100% speed.
> Temp target is removed.
> Power limit is removed.


 
I do not know if anyone has tested it on any other card yet; if you are risking it, please feel free to share your findings. I'm assuming it doesn't work on other cards, as they state.

The fact that the Galax XOC BIOS works on every card is kind of mind-blowing; there are absolutely methods to ensure a BIOS is only compatible with specific cards, like EVGA did for the Kingpin above.

There is no reason to shunt mod, or even to purchase the MSI Lightning Z, with this BIOS around, as its increased voltage cap is higher than the +25mV provided on the Lightning. The BIOS effectively makes a $999 MSRP reference-PCB card plus a $125 AIO or $125 water block the second fastest RTX 2080 Ti you can buy, after the Kingpin. In reality this wouldn't even have been interesting if partners hadn't given the Strix OC (as an example) a measly 325W power limit when the actual specs allow for up to 375W (150+150+75), and any 3-connector card up to 525W (150+150+150+75). If that were the case, I would definitely have purchased the cheapest 3-connector card (525W power limit) and called it a day; it would have run 1.093V without throttling, at good temps, in any game or benchmark.
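The 375W and 525W figures above follow from summing the nominal PCI-SIG rating of each power input on the board; a minimal sketch of that arithmetic (nominal spec ratings, which real boards can and do exceed):

```python
# Nominal PCI-SIG power ratings per input, as used in the 375 W / 525 W
# figures above. "Maxing the spec" means a BIOS power limit equal to
# the sum of the card's inputs.

RATING_W = {
    "pcie_slot": 75,   # PCIe x16 slot
    "6pin": 75,        # 6-pin PEG connector
    "8pin": 150,       # 8-pin PEG connector
}

def board_budget(*inputs):
    """Sum the nominal wattage ratings of a card's power inputs."""
    return sum(RATING_W[i] for i in inputs)

print(board_budget("pcie_slot", "8pin", "8pin"))          # 375
print(board_budget("pcie_slot", "8pin", "8pin", "8pin"))  # 525
```

By this yardstick, the 325W Strix OC limit leaves 50W of spec-sanctioned headroom unused, which is the complaint being made here.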

The only cards that actually maxed out the power limit (following the official specs ±5W) were the FTW3 at 373W, the Galax OC at 380W, and the EVGA Kingpin at 520W. A small mention should also go to the Hall of Fame, which came with a 450W BIOS even though it could have been 520W like the Kingpin; at least it wasn't 380W like the Lightning Z, so better than nothing.

NVIDIA and its partners definitely lost a measurable amount of money after Elmor released the Strix XOC BIOS publicly not long after the 1080 Ti release; how these partners continue to release unrestricted BIOS variants is beyond me. As previously mentioned, EVGA at least took certain precautions to make sure their XOC BIOS was only compatible with their Kingpin card, unlike ASUS or Galax. Anyway, I am not the one who releases anything here; that's the bottom line.


----------



## pewpewlazer

zhrooms said:


> It's more about legality (in accordance with the law). Some of us aren't anarchists.


Are you referring to the fact that it's under NDA and thus leaking it would be a legal issue? I'm not sure what Xeq54 was implying, but it's very much a BS "special club" the manufacturers are running. I wouldn't expect an individual to break their NDA and leak something, but it's incredibly lame of the manufacturers to release some "special" BIOS like this IMO.



zhrooms said:


> There is no reason to shunt mod or even purchase the MSI Lightning Z with this BIOS as the increased voltage cap is higher than the +25mV provided on the lightning. The BIOS effectively makes a $999 MSRP Reference PCB card + $125 AIO / $125 Water Block the second fastest RTX 2080 Ti you can buy after Kingpin. Reality is this wouldn't even have been interesting if partners didn't give the Strix OC (as an example), a measly 325W power limit when the actual specs allow for up to 375W (150+150+75), and any 3 connector card up to 525W (150+150+150+75). I would definitely have purchased the cheapest 3 connector card if that was the case (525W power limit) and called it a day, it would have ran 1.093V without throttling at good temps in any game or benchmark.


Are you saying it can be flashed onto ANY 2080 Ti, including the "non A chip" cards? Or are there reference PCB A chips cards available for $999 MSRP now?



zhrooms said:


> NVIDIA/Partners definitely lost a measurable amount of $ after Elmor released the Strix XOC BIOS publicly not long after the 1080 Ti release, how these partners continue to release unrestricted BIOS variants is beyond me. Like previously mentioned, EVGA at least took certain precautions to make sure their XOC BIOS was only compatible with their Kingpin card, unlike ASUS or Galax. Anyway I am not the one who releases anything here, that's the bottom line.


I HIGHLY doubt that an "XOC BIOS" being in the wild would make any appreciable difference to their bottom line. 

How many people are there that would actually sit there and go _"oh, wow, look at this leaked XOC BIOS, I'm going to cancel my order for the $1,900 KPE I just placed an hour ago and buy a regular card for $1,200 and a water block somewhere else and flash this BIOS instead"_? I can't imagine there are many.

On the flip side, how many people are there that maybe WOULD HAVE sprung for a high dollar KPE or OC LAB edition card, but ended up purchasing a founders edition or the like because the 'special edition' cards simply weren't available at launch? 
I'm sure there are people who sold off their FE cards at a loss after 6 months and bought a crazy expensive KPE card the second they were available, but there are probably also people who would have bought a KPE type card 6 months ago but think _"meh, my 2080 Ti FE is fine, I'll wait till next gen"_ now that it's actually available.

Maybe there is someone out there with first-hand knowledge of these companies' detailed financial metrics who can tell me I'm an idiot, but I don't see a KPE or OC LAB or STRIX OC or whatever variant card as being a graphics card manufacturer's big money maker.


----------



## zhrooms

pewpewlazer said:


> Are you saying it can be flashed onto ANY 2080 Ti, including the "non A chip" cards? Or are there reference PCB A chips cards available for $999 MSRP now?


 
Yeah no, since the OC Lab is only 1E07 (A) it only works on those cards.

And yes, I've seen *A* cards at $950-975 MSRP (pre-tax) many times in Europe; the RTX 2070 and RTX 2080 have been consistently under MSRP for a long time, $60 below for both the 2070 (A) and 2080 (A) right now.

Found out some information about the Kingpin XOC: it bears some resemblance to the Galax XOC, as it overvolts as well, has a 2000W power limit too, works on reference cards, and is public (although you're not allowed to hotlink it, so don't post it here). The main issues seem to be that the curve editor does not work, the fan is locked to 100%, and the 2D clocks aren't working properly, but it's definitely more powerful than the Strix XOC, as that BIOS doesn't go over ~1.050V under intense load, while this one goes over 1.100V.


----------



## dangerSK

Martin778 said:


> Anyone an idea? NVflash64 -6 bios.rom seems to have flashed both at once...
> 
> I want the Kingpin one but EVGA is trying to be 'elite' again and has no stock for normal users.


No, you don't flash both at once; you're flashing whichever BIOS the switch is set to (the DIP switch on the card). It doesn't matter which one you flash.


----------



## Tragic

zhrooms said:


> Yeah no, since the OC Lab is only 1E07 (A) it only works on those cards.
> 
> And yes, seen *A* cards at $950-975 MSRP (pre-tax) many times in Europe, the RTX 2070 and RTX 2080 has been consistently under MSRP for a long time, $60 below for both the 2070 (A) and 2080 (A) right now.
> 
> Found out some information about the Kingpin XOC, it has some resemblance to the Galax XOC, it overvolts as well and has a 2000W power limit too, works on reference cards and is public (although you're not allowed to hotlink it, so don't post it here). Main issues seem to be that the curve editor does not work, the fan is locked to 100% and the 2D clocks aren't working properly, but it's definitely more powerful than the Strix XOC as that BIOS doesn't go over ~1.050V under intense load, while this one goes over 1.100V.


What temps will you see at those voltages with a triple-fan cooler?


----------



## dangerSK

zhrooms said:


> It's more about legality (in accordance with the law). Some of us aren't anarchists.
> 
> The BIOS was specifically made for the Galax RTX 2080 Ti Hall of Fame OC Lab Limited Edition that was/is exclusively sold on the galaxstore.net.
> 
> The card comes with a max power limit of 450W but there was also an internal BIOS made for people who benchmarked it on LN2, also known as "XOC" (Extreme Overclock)", that was obviously never released publicly because why should it? It eventually got out and the story here on OCN is that an unnamed person purchased an OC Lab second hand, and I believe his friend who also owned an OC Lab, shared the BIOS to him, and he then in turn posted it. I'm assuming they were both unaware the BIOS worked on every single card on the market, not just the OC Lab. An unknown amount of people managed to download it before it was removed (removed by whom I do not know, I never asked), I saved it for reasons related to the original post in this thread (collect every BIOS), I never assumed it worked on any other card than OC Lab until someone pointed it out to me, so I tested it out and it did work.
> 
> Speaking of XOC, the Kingpin XOC BIOS was recently released publicly, but comes with multiple warnings as well as a locked fan speed of 100%.
> 
> 
> 
> I do not know if anyone has tested it on any other card yet, if you are risking it, please feel free to share your findings. I'm assuming it doesn't work as they state.
> 
> The fact that the Galax XOC BIOS works on every card is kind of mind blowing, there are absolutely methods to ensure it's only compatible with specific cards, like EVGA did for the Kingpin above.
> 
> There is no reason to shunt mod or even purchase the MSI Lightning Z with this BIOS as the increased voltage cap is higher than the +25mV provided on the lightning. The BIOS effectively makes a $999 MSRP Reference PCB card + $125 AIO / $125 Water Block the second fastest RTX 2080 Ti you can buy after Kingpin. Reality is this wouldn't even have been interesting if partners didn't give the Strix OC (as an example), a measly 325W power limit when the actual specs allow for up to 375W (150+150+75), and any 3 connector card up to 525W (150+150+150+75). I would definitely have purchased the cheapest 3 connector card if that was the case (525W power limit) and called it a day, it would have ran 1.093V without throttling at good temps in any game or benchmark.
> 
> The only cards that actually maxed out the power limit (following the official specs ±5W) were the FTW3 at 373W, the Galax OC at 380W and the EVGA Kingpin at 520W. The Hall of Fame also deserves a small mention; it came with a 450W BIOS even though it could have been 520W like the Kingpin, but at least it wasn't 380W like the Lightning Z, so better than nothing.
> 
> NVIDIA and its partners definitely lost a measurable amount of money after Elmor released the Strix XOC BIOS publicly not long after the 1080 Ti launch; how these partners continue to release unrestricted BIOS variants is beyond me. Like previously mentioned, EVGA at least took certain precautions to make sure their XOC BIOS was only compatible with their Kingpin card, unlike ASUS or Galax. Anyway, I am not the one who releases anything here; that's the bottom line.


Actually, you're wrong; the Kingpin BIOS does and should work on every 3x8-pin card, and I heard it also runs on some 2x8-pin cards, so they didn't lock the BIOS as you're saying. I was maxing out the Kingpin BIOS on the Lightning just fine. As for the Galax BIOS, I think they just don't bother; it's a BIOS made for their LN2 events, and what happens afterwards is the users' problem. The Lightning card is very hard to review/recommend/rate, because I know maybe 5 people, including me, who have the Afterburner Extreme (core voltage up to 1.6V) and the 1000W LN2 BIOS for that card. If you're "lucky" and get these tools it's an amazing card, but for a normal consumer it's a kinda meh card, I suppose?


----------



## Tragic

gridironcpj said:


> Founder's edition? We can't make any conclusions about silicon lottery unless we get some voltages. Also, you're definitely going to hit the power limit in Time Spy, so that 2130MHz means nothing since it might not even be hitting that frequency when the benchmark is running.
> 
> I would say a "golden chip" can hit over 2200MHz at 1.093V in heavy loads.



During the FFXV benchmark (windowed), Precision X1 tells me that I'm slapping into the power limit, with voltage fluctuating between 1019mV and 1031mV at a temp of 58C (fan curve 75%, ambient 20C).
A1 chip.
Liquid metal repaste.

110% power limit.



Based on that, should I see improvements if I flash the MSI unofficial 406W BIOS instead of my stock 330W?


----------



## Martin778

Does anyone have a higher-TDP BIOS for the Lightning Z that doesn't bork the LED / OLED LCD controls?

@dangerSK,
Thanks, apparently my card needs 2-3 reboots before it jumps between BIOSes after flipping the switch.

My LN2 BIOS says 380W max...not sure what's wrong.


----------



## zhrooms

Tragic said:


> what temps will you see at those voltages with triple fan cooler?


 
I don't know, try it yourself, but be cautious about the fan speed, as the fan curve in that BIOS might differ from your original BIOS. For example, my Palit cooler only goes up to 2200RPM at 100% on the stock BIOS, but flashing a different BIOS can make it run at 3000RPM, far exceeding the intended max fan speed and possibly damaging it. And the Kingpin XOC BIOS locks the fan speed at 100%; you cannot adjust it. If it does run a lot faster than intended, you should flash back to the previous BIOS.
 


dangerSK said:


> Actually, you're wrong; the Kingpin BIOS does and should work on every 3x8-pin card, and I heard it also runs on some 2x8-pin cards, so they didn't lock the BIOS as you're saying. I was maxing out the Kingpin BIOS on the Lightning just fine. As for the Galax BIOS, I think they just don't bother; it's a BIOS made for their LN2 events, and what happens afterwards is the users' problem. The Lightning card is very hard to review/recommend/rate, because I know maybe 5 people, including me, who have the Afterburner Extreme (core voltage up to 1.6V) and the 1000W LN2 BIOS for that card. If you're "lucky" and get these tools it's an amazing card, but for a normal consumer it's a kinda meh card, I suppose?


 
I'm not wrong; I never said it didn't work or that it was "locked", I made that very clear, and I have now confirmed that it does work as well. You also make it sound like there are _many_ 3x8-pin cards, but there are only three, the HOF, the Lightning and now the Kingpin, and each of them has its own XOC BIOS now. The Kingpin XOC (and the Galax) works on every card, 2- and 3-connector, as mentioned in my latest post. Just because the BIOS was used at an event doesn't mean it was made for the event, which it obviously wasn't.

Yes, the Lightning is an awful card with the BIOS it ships with; 380W is a disgrace, and the overvolt function is useless because of it. I would like to know more about the Lightning XOC BIOS though: are the curve editor, fan adjustment and idle clocks working? Do you know if it has been tested on reference cards by anyone?


----------



## dangerSK

zhrooms said:


> Yes, the Lightning is an awful card with the BIOS it ships with; 380W is a disgrace, and the overvolt function is useless because of it. I would like to know more about the Lightning XOC BIOS though: are the curve editor, fan adjustment and idle clocks working? Do you know if it has been tested on reference cards by anyone?


Well, I can share some info. There is no voltage curve, because why would you need one when you can hard-set any voltage from 900mV to 1.6V and it will push that voltage no matter what; that is, if you have the special Afterburner made for you by MSI. The idle clock is set to 1350MHz; I've never seen it below that value. I don't think it was ever tested on a ref PCB; neither I nor the other MSI guys have a ref PCB card, but I think it will work like the other BIOSes (Kingpin, HOF). Btw, I will share one "secret": if you have the right Afterburner, you can control core voltage even with the retail LN2 BIOS (900mV to 1.6V).


----------



## Martin778

If we could only adjust the power limit instead of Vcore


----------



## Xeq54

Can confirm the XOC Kingpin BIOS works on a 2x8-pin reference PCB card (MSI Ventus, A chip with Micron memory).

Voltage is locked at 1.11V and there are absolutely no temperature clock drops. Gonna test out more.

BTW, don't run Kombustor on it. The card drew 800 watts, as the clock sticks to the set frequency no matter what. It was OK on water, but I am pretty sure this would be deadly on air.


----------



## dangerSK

Martin778 said:


> If we could only adjust the power limit instead of Vcore


Kinda sucks, yeah. For that you'd have to get the NDA BIOS, which, I can tell you, you won't get.


----------



## Martin778

I don't care for some experimental BIOS though; a 450W one with all LED controls working would suffice perfectly for an air-cooled card like the L-Z.


----------



## x-speed69

Xeq54 said:


> Can confirm the XOC Kingpin BIOS works on a 2x8-pin reference PCB card (MSI Ventus, A chip with Micron memory).
> 
> Voltage is locked at 1.11V and there are absolutely no temperature clock drops. Gonna test out more.
> 
> BTW, don't run Kombustor on it. The card drew 800 watts, as the clock sticks to the set frequency no matter what. It was OK on water, but I am pretty sure this would be deadly on air.



Just to clarify, the card really keeps the set core MHz no matter how high the temp gets?


----------



## J7SC

Xeq54 said:


> Can confirm the XOC Kingpin BIOS works on a 2x8-pin reference PCB card (MSI Ventus, A chip with Micron memory).
> 
> Voltage is locked at 1.11V and there are absolutely no temperature clock drops. Gonna test out more.
> 
> BTW, don't run Kombustor on it. The card drew 800 watts, as the clock sticks to the set frequency no matter what. It was OK on water, but I am pretty sure this would be deadly on air.



Interesting stuff. The XOC Kingpin BIOS would be for 3x PCIe 8-pin, while the MSI Ventus has 2x PCIe 8-pin... but I guess the limits (or lack thereof) of the XOC Kingpin BIOS are so high that 2/3rds is still 'way plenty'.


----------



## Martin778

Out of curiosity, how do you guys unbrick a 2080 Ti that has dual BIOS but you flashed the wrong file to one of them?

(Not that I have problems or want to try it, just asking).


----------



## plazmic

Xeq54 said:


> Can confirm the XOC Kingpin bios works on 2*8 Pin reference PCB card (MSI ventus A-chip with micron memory)
> 
> Voltage is locked at 1.11v and there are absolutely no temperature clock drops. Gonna test out more.
> 
> BTW dont run kombustor on it. Card drew 800watts as the clock sticks to the set frequency no matter what  Was ok on water, but I am pretty sure this would be deadly on AIR.


Similarly, the XOC KP BIOS works on the EVGA XC card (300A, Samsung memory). Although I don't have the same experience as you: it does hold 1.11V often, but it drops when I'm running Time Spy Extreme, and the clocks drop slightly with it. Maybe that's just vdroop for me? Running 2x EVGA XCs, flashed and on synced clocks. Power _maybe_ is an issue... running on a SuperNOVA 1600 T2, but my CPU is a W-3175X at 4.9GHz all-core, so it's probably pulling 500W alone. I may get around to throwing my old 1200W PSU on the CPU alone to verify.


The normal KPE BIOSes "work" and get past the device ID check in the Classified tools for the KPE, but I don't think the voltage controls work. Also, I couldn't seem to get the card above 300MHz; it'd start at the mid-tier around 1350MHz, then drop. I suspect it's over-reading power or temperature. I may try the LN2 BIOS with thermal protection disabled (my temps are fine).


----------



## fleps

I wish we had "some" of all these BIOSes and mods for the 2080 non-Ti, so much fun 

I thought about getting a Ti, but it's ridiculously expensive here, and I would be buying it only for all this overclocking fun, as I don't care about 4K.


----------



## plazmic

Martin778 said:


> Out of curiosity, how do you guys unbrick a 2080 Ti that has dual BIOS but you flashed the wrong file to one of them?
> 
> (Not that I have problems or want to try it, just asking).


Not guaranteed, but generally, if you have a second working card, you can boot with it and flash the bricked card back.


----------



## Martin778

So if I understand correctly, there is no way to use something like a BIOS ID switch in NVFlash to flash BIOS nr. 2 while the card is running on nr. 1?


----------



## J7SC

Martin778 said:


> So if I understand correctly, there is no way to use something like a BIOS ID switch in NVFlash to flash BIOS nr. 2 while the card is running on nr. 1?


 
I think that's correct. Each BIOS switch position (on a single card) still ends up as a separate GPU entry in the Windows registry.


----------



## Xeq54

plazmic said:


> Similarily, the XOC KP Bios works on the EVGA XC card (300a, samsung memory). ALthough, I don't have the same experience as you. It does hold 1.11v often, but it does drop when I'm running time spy extreme and the clocks drop with it slightly. Maybe that's just vdroop for me? Running 2x EVGA XC's flashed and on synced clocks. Power _maybe_ is an issue... running on supernova 1600w t2, but my CPU is w-3715x at 4.9ghz all core... its probably pulling 500w alone. I may get around to throwing my old 1200w on the cpu alone to verify.
> 
> 
> The normal KPE bioses "work" and get past the device ID check on the classified tools for KPE, but I don't think the voltage controls work. Also, I couldn't seem to get the card above 300mhz, it'd start at the mid-tier around 1350mhz then drop. I suspect its overreading power or temperature. I may try on the ln02 bios with thermal disabled (my temps are fine).


Hi, after more testing I can confirm the card drops one clock bin (15MHz) after I cross the 40-degree mark. I really had to bench for half an hour to cross this, since my loop is pretty good, so I did not notice it during my quick test beforehand. The nice thing is that the frequency sticks even in Time Spy GT2, during which the card often reaches 550-570W.

Only the frequency drops by the 15MHz for me; the voltage doesn't.

The BIOS seems to be pretty usable if you have a water cooler (due to the fan being locked at 100%). Even idle clocks work; it sits at 300MHz and 0.71V on the desktop.

Also, the Classified tool does not work for me; it does not launch (device incompatible error). So I don't have any way to control voltage. It sticks to 1.111V in Heaven or games, or 1.125V during heavier loads such as GT2.


----------



## plazmic

Xeq54 said:


> Hi, after more testing I can confirm the card drops one clock bin (15MHz) after I cross the 40-degree mark. I really had to bench for half an hour to cross this, since my loop is pretty good, so I did not notice it during my quick test beforehand. The nice thing is that the frequency sticks even in Time Spy GT2, during which the card often reaches 550-570W.
> 
> Only the frequency drops by the 15MHz for me; the voltage doesn't.
> 
> The BIOS seems to be pretty usable if you have a water cooler (due to the fan being locked at 100%). Even idle clocks work; it sits at 300MHz and 0.71V on the desktop.
> 
> Also, the Classified tool does not work for me; it does not launch (device incompatible error). So I don't have any way to control voltage. It sticks to 1.111V in Heaven or games, or 1.125V during heavier loads such as GT2.


Seeing the same things regarding the drops at 40C, and idle speeds are fine. One of my cards is mediocre, so it's limited to 2100 in Time Spy Extreme, but I finally pulled a new high score with this BIOS: 3dmark.com/spy/7093115

The Classified tool checks the device ID, and the XOC BIOS identifies as a generic 2080 Ti, so it fails the check. I have no idea if there are pinouts that allow finer voltage control on the actual KPE card, but if not, the binary can be patched to ignore the check and possibly function.


----------



## VPII

Xeq54 said:


> Hi, after more testing I can confirm the card drops one clock bin (15MHz) after I cross the 40-degree mark. I really had to bench for half an hour to cross this, since my loop is pretty good, so I did not notice it during my quick test beforehand. The nice thing is that the frequency sticks even in Time Spy GT2, during which the card often reaches 550-570W.
> 
> Only the frequency drops by the 15MHz for me; the voltage doesn't.
> 
> The BIOS seems to be pretty usable if you have a water cooler (due to the fan being locked at 100%). Even idle clocks work; it sits at 300MHz and 0.71V on the desktop.
> 
> Also, the Classified tool does not work for me; it does not launch (device incompatible error). So I don't have any way to control voltage. It sticks to 1.111V in Heaven or games, or 1.125V during heavier loads such as GT2.


An interesting thing you'll notice, well, which I noticed with the Strix XOC BIOS, is that when you run your card very cold, as in dropping the rad in an ice bucket of water, the clocks run 15MHz higher than normal until you reach 30-31C, after which they drop to the normal overclock set and run straight through, unless temps hit 41-42C.


----------



## kot0005

Can someone please share the XOC Kingpin BIOS via Google Drive with me? Thanks


----------



## ESRCJ

Tragic said:


> During the FFXV benchmark (windowed), Precision X1 tells me that I'm slapping into the power limit, with voltage fluctuating between 1019mV and 1031mV at a temp of 58C (fan curve 75%, ambient 20C).
> A1 chip.
> Liquid metal repaste.
> 
> 110% power limit.
> 
> Based on that, should I see improvements if I flash the MSI unofficial 406W BIOS instead of my stock 330W?


You might notice a slight boost, but ultimately you're limited by the cooling. Each BIOS has a different set of temperature ticks where core clocks drop 15MHz each time. I have no idea where those are with the MSI 406W BIOS, but drawing more power will lead to higher temps and could ultimately be self-defeating on air, since you'll never get temps low enough to avoid those clock speed drops. Maybe some of the other users here who have messed around with air-cooled cards more can weigh in on what kind of gains, if any, they saw with a higher power limit. For me personally, I saw roughly a 5 percent gain when going from the NVIDIA reference BIOS to the Aorus Extreme 366W BIOS, although this was with water cooling and a massive custom loop, so temps are always below 40C.
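To make the tick behaviour concrete, here is a toy model of how those bins stack up. The threshold temperatures below are made up purely for illustration, since, as noted, every BIOS has its own set.

```python
# Toy model of GPU Boost temperature bins: the core clock drops one
# 15 MHz bin each time the GPU crosses a temperature threshold.
# The threshold values here are hypothetical; each BIOS has its own.

BIN_STEP_MHZ = 15
THRESHOLDS_C = [40, 48, 56, 64]  # illustrative tick points, not from any real BIOS

def boosted_clock(base_boost_mhz, temp_c):
    """Return the effective core clock after temperature-based bin drops."""
    drops = sum(1 for t in THRESHOLDS_C if temp_c >= t)
    return base_boost_mhz - drops * BIN_STEP_MHZ

# A water-cooled card that stays under 40C keeps its full boost clock,
# while an air-cooled card at 65C loses four bins (60 MHz) in this model.
print(boosted_clock(2100, 38))  # 2100
print(boosted_clock(2100, 65))  # 2040
```

This is also why a higher power limit can be self-defeating on air: more power means more heat, which crosses more tick points and eats the clock gain back.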


----------



## zhrooms

Xeq54 said:


> ...the card often reaches 550-570W.
> 
> Even idle clocks work; it sits at 300MHz and 0.71V on the desktop.


 
I haven't been able to confirm it actually pulls that much; the highest I've seen on the Galax XOC is 582W in GT2, during the early peaks of the run, but at 1.125V it's not unreasonable in Time Spy Extreme Graphics Test 2 (which uses 33% more vertices, 266% more tessellation patches, 4% more triangles and 8% more compute shader invocations than Graphics Test 1).

Can you show us a screenshot of it running that idle? I believe you, just want some proof, as I've seen screenshots from two people that show it at 1350/7000 at the desktop.


----------



## NexusVibee

Does anyone know if the 450w HOF Bios will work on the ref PCB design?


----------



## Xeq54

zhrooms said:


> I haven't been able to confirm it actually pulls that much; the highest I've seen on the Galax XOC is 582W in GT2, during the early peaks of the run, but at 1.125V it's not unreasonable in Time Spy Extreme Graphics Test 2 (which uses 33% more vertices, 266% more tessellation patches, 4% more triangles and 8% more compute shader invocations than Graphics Test 1).
> 
> Can you show us a screenshot of it running that idle? I believe you, just want some proof, as I've seen screenshots from two people that show it at 1350/7000 at the desktop.


Here are two screens, one showing the idle clocks, the second the power consumption in GT2. 29.6% of the 2000W PL, that's 592W with default clocks (2010MHz). I also have a wall power meter which roughly confirms this reading.
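For anyone double-checking readings like this, the percent-to-watts conversion is just the reported TDP fraction times the BIOS power-limit ceiling. A quick sketch, with the 2000W ceiling taken from the post above:

```python
# Convert a reported TDP percentage into watts, given the BIOS power limit.
# XOC BIOSes report against a very high ceiling (2000 W here, per the post
# above), so low-looking percentages are still large real-world draws.

def tdp_percent_to_watts(percent, power_limit_w=2000.0):
    return percent * power_limit_w / 100.0

print(tdp_percent_to_watts(29.6))  # 592.0 W in GT2, matching the wall meter
print(tdp_percent_to_watts(9.6))   # 192.0 W
```

The same arithmetic explains the ~190W idle reading at 9.6% TDP mentioned earlier in the thread.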


----------



## zhrooms

NexusVibee said:


> Does anyone know if the 450w HOF Bios will work on the ref PCB design?


 
It does not work.
 


Xeq54 said:


> Here are two screens, one showing the idle clocks, the second the power consumption in GT2. 29.6% of the 2000W PL, that's 592W with default clocks (2010MHz). I also have a wall power meter which roughly confirms this reading.


 
Thanks. So the memory runs 3D clocks at idle (1750MHz), but not the core clock, just like the ASUS XOC.


----------



## plazmic

zhrooms said:


> Thanks. So the memory runs 3D clocks at idle (1750MHz), but not the core clock, just like the ASUS XOC.


I'm running XOC on two cards. One of the cards will drop to 300 core / 100 memory, drawing 13-64W idle. But the other card stays at 300/1750 and reports ~190W draw @ 9.6% TDP. Very strange, but the BIOS definitely _can_ clock the RAM speed down properly.


----------



## Talon2016

The Kingpin XOC vBIOS runs great on my card as well. It idles down on both core and memory, if I recall correctly; will check when I get back home. It allows for some great benchmark runs, far higher than default, as I'm no longer hitting power limits.


----------



## Martin778

Can anyone confirm the Kingpin XOC running on the L-Z?


----------



## ReFFrs

*Where are you guys getting that Kingpin XOC bios from?*


----------



## GAN77

ReFFrs said:


> *Where are you guys getting that Kingpin XOC bios from?*


https://xdevs.com/guide/2080ti_kpe/


----------



## Glerox

I finally completed my build with the 2080 Ti Lightning Z, if you guys want to check it out. Time Spy Extreme graphics score of 8167 with the regular LN2 BIOS.


----------



## jura11

Just received the Aquacomputer Kryographics RTX 2080 Ti WB with active backplate, and it looks awesome; I will be doing a few tests later this week.

I'll probably be testing this on my Zotac RTX 2080 Ti AMP with the Galax 380W BIOS, and will see how it performs against my EK Vector RTX 2080 Ti WB.

I had planned to use this block and GPU in my loop, but a friend has already bought the card from me, and for myself I have got an Asus RTX 2080 Ti Strix OC, which has Samsung VRAM modules; finding a WB for it is a bit tricky. The EK Vector is available, but I don't want to use it; we'll see.

Hope this helps 

Thanks, Jura


----------



## nmkr

Xeq54 said:


> Can confirm the XOC Kingpin BIOS works on a 2x8-pin reference PCB card (MSI Ventus, A chip with Micron memory).
> 
> Voltage is locked at 1.11V and there are absolutely no temperature clock drops. Gonna test out more.
> 
> BTW, don't run Kombustor on it. The card drew 800 watts, as the clock sticks to the set frequency no matter what. It was OK on water, but I am pretty sure this would be deadly on air.


Can confirm that on a Zotac AMP reference PCB. Sadly, mine can't clock higher, but it holds the max clock in Time Spy and even Superposition 8K, where I read 525W+ from HWiNFO.


----------



## diatribe

jura11 said:


> Just received the Aquacomputer Kryographics RTX 2080 Ti WB with active backplate, and it looks awesome...



Where did you get it from? I've been looking for one for two months and can't find it in stock anywhere. At this point I'll probably stick with the Bykski block, but I'll always wonder if I'm giving away any performance.


----------



## jura11

diatribe said:


> Where did you get it from? I've been looking for one for two months and can't find it in stock anywhere. At this point I'll probably stick with the Bykski block, but I'll always wonder if I'm giving away any performance.


Hi there 

Bought it from the Aquacomputer store; it took 3 weeks, as these blocks are not in stock as far as I know. Not sure if availability is better right now.

I have tried a few blocks, like the EK Vector RTX 2080 Ti, the Phanteks Glacier RTX 2080 Ti and the Bykski RTX 2080 Ti WB.

With my Zotac RTX 2080 Ti and the EK Vector I have seen, on this loop, idle temperatures of 18-21°C and load temperatures of 38-40°C. With the Palit RTX 2080 Ti and the Bykski WB, temperatures have been in the low 40s during gaming; the highest I have seen is 40-43°C, and that's at 23°C ambient. With the Palit RTX 2080 Ti and the Phanteks Glacier WB, temperatures have been in the 39-42°C range, again at 21-23°C ambient.

I just haven't tried the Heatkiller IV RTX 2080 Ti block, which I can't find in stock over here.

Hope this helps 

Thanks, Jura


----------



## Tragic

zhrooms said:


> I haven't been able to confirm it actually pulls that much; the highest I've seen on the Galax XOC is 582W in GT2, during the early peaks of the run, but at 1.125V it's not unreasonable in Time Spy Extreme Graphics Test 2 (which uses 33% more vertices, 266% more tessellation patches, 4% more triangles and 8% more compute shader invocations than Graphics Test 1).
> 
> Can you show us a screenshot of it running that idle? I believe you, just want some proof, as I've seen screenshots from two people that show it at 1350/7000 at the desktop.


If he's running adaptive power mode in the NVIDIA control panel, the clocks make sense at idle. As for my watts: my Gaming X Trio with the 406W BIOS peaks at 388W in the Superposition 8K test.


----------



## redking2436

Amazing thread so far; registered just to say hi and ask if anyone else has hit the issue I have.

I have an MSI 2080 Ti Sea Hawk X that has a 330W power limit and can hit a maximum of 2040MHz at +120MHz, with a +800MHz memory overclock. When I flash the KFA 380W BIOS, I can't get higher than 2025MHz at +163MHz, with a +1200MHz memory overclock. Have I done something wrong? I'm on a closed-loop cooler, so temps are generally in the 40s, but I still see it downclock to 1995MHz.


----------



## laxu

So is my Palit Gaming Pro 2080 Ti just a bad overclocker? I can't seem to go above about 1980 MHz boost clocks without games like Star Citizen and Shadow of the Tomb Raider starting to crash. No artifacts, they just throw a DirectX error and close. If the clocks stay around the 1980 area then I can play them all day long. I've tried upping the voltage % but that didn't seem to help. Heaven benchmark runs fine at about 2050 MHz clocks. Memory overclock is at +500 but at stock speed it doesn't make a difference either.

I had this issue with the stock 2.5 slot cooler and recently mounted a Corsair H55 + G10 bracket on it but that didn't improve the situation even if it made a huge difference in temps and noise. There's a 90mm and 120mm fan cooling the card VRM and VRAM as I no longer have heatsinks on those. CPU is a 6600K @ 4.6 GHz (waiting for that Ryzen 3).


----------



## Connolly

This is probably a really silly question, but I can't find an answer online. I purchased the GeForce RTX 2080 Ti GAMING X TRIO, which was listed as having a boost overclock of 1755mhz. I've just started to play around a little and noticed that the clock is only showing as 1350mhz in both MSI Afterburner and CPU-Z. Should it be overclocked to 1755mhz out of the box, or is that something that I'm supposed to do myself? I was under the impression that it should have been out of the box, but can't find anything to confirm this.

Cheers in advance!


----------



## fleps

Connolly said:


> This is probably a really silly question, but I can't find an answer online. I purchased the GeForce RTX 2080 Ti GAMING X TRIO, which was listed as having a boost overclock of 1755mhz. I've just started to play around a little and noticed that the clock is only showing as 1350mhz in both MSI Afterburner and CPU-Z. Should it be overclocked to 1755mhz out of the box, or is that something that I'm supposed to do myself? I was under the impression that it should have been out of the box, but can't find anything to confirm this.
> 
> Cheers in advance!


That seems to be the idle clock.
Did you open any 3D application, like a game or even MSI Kombustor, to see the boost engage?


----------



## Connolly

fleps said:


> That seems to be the idle clock.
> Did you open any 3D application, like a game or even MSI Kombustor, to see the boost engage?


Thanks for the reply. Seems that it was an issue with MSI Afterburner applying a profile that wasn't listing correctly, a reinstall fixed it.


----------



## Jpmboy

Martin778 said:


> Out of curiousity, how do you guys unbrick a 2080Ti that has dual BIOS but you flashed wrong file to one of them?
> 
> (Not that I have problems or want to try it, just asking).


On a dual BIOS card, simply boot with the working BIOS and, once in the OS, flip the switch to the borked BIOS and flash. The BIOS NVRAM is only read during GPU POST, so a reboot is required to read the newly flashed BIOS. In other words, once the card has passed the GPU power-on self-test and read NVRAM, the NVRAM is not accessed again until a reboot. So: boot, flip the switch, flash, and reboot to load the newly flashed NVRAM. :thumb:


----------



## bl4ckdot

Jpmboy said:


> On a dual BIOS card, simply boot with the working BIOS and, once in the OS, flip the switch to the borked BIOS and flash. The BIOS NVRAM is only read during GPU POST, so a reboot is required to read the newly flashed BIOS. In other words, once the card has passed the GPU power-on self-test and read NVRAM, the NVRAM is not accessed again until a reboot. So: boot, flip the switch, flash, and reboot to load the newly flashed NVRAM. :thumb:


Is there any 2080 TI with reference PCB and dual BIOS ?


----------



## Jpmboy

bl4ckdot said:


> Is there any 2080 TI with reference PCB and dual BIOS ?


Someone else here probably knows the answer to that, but a reference PCB by design has only one NVRAM chip.


----------



## Martin778

Jpmboy said:


> On a dual BIOS card, simply boot with the working BIOS and, once in the OS, flip the switch to the borked BIOS and flash. The BIOS NVRAM is only read during GPU POST, so a reboot is required to read the newly flashed BIOS. In other words, once the card has passed the GPU power-on self-test and read NVRAM, the NVRAM is not accessed again until a reboot. So: boot, flip the switch, flash, and reboot to load the newly flashed NVRAM. :thumb:


Never knew that, thanks!


----------



## bigjdubb

bl4ckdot said:


> Is there any 2080 TI with reference PCB and dual BIOS ?


I think having dual BIOS would make it non-reference. There might be some with a "close to reference" board that has dual BIOS, but I doubt it.


----------



## VPII

laxu said:


> So is my Palit Gaming Pro 2080 Ti just a bad overclocker? I can't seem to go above about 1980 MHz boost clocks without games like Star Citizen and Shadow of the Tomb Raider starting to crash. No artifacts, they just throw a DirectX error and close. If the clocks stay around the 1980 area then I can play them all day long. I've tried upping the voltage % but that didn't seem to help. Heaven benchmark runs fine at about 2050 MHz clocks. Memory overclock is at +500 but at stock speed it doesn't make a difference either.
> 
> I had this issue with the stock 2.5 slot cooler and recently mounted a Corsair H55 + G10 bracket on it but that didn't improve the situation even if it made a huge difference in temps and noise. There's a 90mm and 120mm fan cooling the card VRM and VRAM as I no longer have heatsinks on those. CPU is a 6600K @ 4.6 GHz (waiting for that Ryzen 3).


Hi @laxu, do me a favour: do not set any clocks, run the benchmark, and record the monitoring to a file with GPU-Z, then check what the actual stock clocks are when flashed with the 380W BIOS. My interest is based on the fact that I am running a Palit RTX 2080 Gaming Pro OC flashed with the Strix XOC BIOS. With this BIOS the card runs 2010MHz stock, and I can run it all the way up to 2145MHz. However, with the KFA 360W BIOS I can run the card all the way up to 2175MHz, the limit with the XOC BIOS being its 1.05V max core voltage. I am using a Kraken G12 with a Corsair H110 as the cooler, and my temps are pretty low, so clocks run straight through each bench at 2145MHz.
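If anyone wants to automate the "record to a file and check" step, a minimal sketch of pulling the peak core clock out of a GPU-Z sensor log might look like this; the column label is taken from my own log's header and may need adjusting to match yours.

```python
# Minimal sketch: find the peak core clock in a GPU-Z sensor log.
# Assumes the default comma-separated log with a header row; the
# column label below is an assumption, check your own log's header.
import csv

def max_core_clock(log_path, column="GPU Core Clock [MHz]"):
    peak = 0.0
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            for key, val in row.items():
                # GPU-Z pads its fields with spaces, so compare stripped keys.
                if key is None or key.strip() != column:
                    continue
                try:
                    peak = max(peak, float(val.strip()))
                except (AttributeError, ValueError):
                    pass  # skip blank or malformed samples
    return peak

# usage: print(max_core_clock("GPU-Z Sensor Log.txt"))
```

Running a benchmark with logging enabled and then calling this on the log shows the clock the card actually sustained, rather than the value set in the overclocking tool.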


----------



## Tragic

redking2436 said:


> Amazing thread so far; registered just to say hi and ask if anyone else has hit the issue I have.
> 
> I have an MSI 2080 Ti Sea Hawk X that has a 330W power limit and can hit a maximum of 2040MHz at +120MHz, with a +800MHz memory overclock. When I flash the KFA 380W BIOS, I can't get higher than 2025MHz at +163MHz, with a +1200MHz memory overclock. Have I done something wrong? I'm on a closed-loop cooler, so temps are generally in the 40s, but I still see it downclock to 1995MHz.


Silicon lotto. By chance, is it Micron memory? My Samsung memory goes up to +1550 before it goes unstable.


----------



## Tragic

Connolly said:


> This is probably a really silly question, but I can't find an answer online. I purchased the GeForce RTX 2080 Ti GAMING X TRIO, which was listed as having a boost overclock of 1755mhz. I've just started to play around a little and noticed that the clock is only showing as 1350mhz in both MSI Afterburner and CPU-Z. Should it be overclocked to 1755mhz out of the box, or is that something that I'm supposed to do myself? I was under the impression that it should have been out of the box, but can't find anything to confirm this.
> 
> Cheers in advance!



While under load it will display the true clocks achieved. Run GPU-Z in the background during a 3DMark test or a short gaming run, then look at the GPU-Z sensors for the GPU core clock and hit the little black arrow to display the highest value achieved.

I own this card, and when set to max performance in the NVIDIA control panel it idles at 1350MHz.


----------



## Tragic

Tragic said:


> Tragic.
> 2080ti MSI gaming x Trio +800mem +135core clock (2115mhz)
> 32GB 3300mhz 13 13 13 34 2T
> i7 8700K 5.1ghz Kraken X62
> 512GB 970 Pro Samsung NVME
> 500GB 960 Evo Samsung NVME
> 512GB 860 Pro Samsung SSD
> Asus Z370F ROG Strix Gaming MB
> 
> Userbenchmark = 251%
> Fire Strike Extreme = 17,744


UPDATE: it now hits 2145MHz with +1550 memory and +130 core clock, using the 406W BIOS.


----------



## ENTERPRISE

As the EVGA FTW3 Ultra Hybrid 2080 Ti only has 2x 8-pin connectors, am I realistically limited to the stock BIOS (up to 373W), or is there another one I can flash to increase the power limit? I do not want XOC, as I don't want it totally unlimited; I'm just looking for a compatible BIOS that will work with my card. I can only assume that a BIOS designed for a card with 3x power connectors will not function correctly on mine.


----------



## VPII

ENTERPRISE said:


> As the EVGA FTW3 Ultra Hybrid 2080 Ti only has 2x 8-pin connectors, am I realistically limited to the stock BIOS (up to 373W), or is there another one I can flash to increase the power limit? I do not want XOC, as I don't want it totally unlimited; I'm just looking for a compatible BIOS that will work with my card. I can only assume that a BIOS designed for a card with 3x power connectors will not function correctly on mine.


The XOC BIOS will work. It is somewhat annoying as it limits vcore to a max of 1.05 V, which means my 2175 MHz core is a no-go, but it does mean I can run every benchmark with my water-cooled card at 2145 MHz right through, as long as temps stay below 40-41°C.


----------



## Jpmboy

ENTERPRISE said:


> As the EVGA FTW3 Ultra Hybrid 2080 Ti only has 2x 8-pin connectors, am I realistically limited to the stock BIOS (up to 373 W), or is there another one I can flash to increase the power limit? I do not want the XOC BIOS, as I don't want it totally unlimited; I'm just looking for a compatible BIOS that will work with my card. I can only assume that a BIOS designed for a card with 3x power connectors will not function correctly on mine.


For the most part, if all the connectors are 8-pin on both the two- and three-connector cards, flashing a 3-connector BIOS to a 2-connector card should be fine. It's when flashing between mismatched pin counts that things get tricky, e.g. a 2x 8-pin BIOS to an 8+6-pin card: the power table may not assign the correct power limit to the correct source, asking a 6-pin for 8-pin power. That can't happen when all connectors are 8-pin.


----------



## ENTERPRISE

VPII said:


> The XOC BIOS will work. It is somewhat annoying as it limits vcore to a max of 1.05 V, which means my 2175 MHz core is a no-go, but it does mean I can run every benchmark with my water-cooled card at 2145 MHz right through, as long as temps stay below 40-41°C.





Jpmboy said:


> For the most part, if all the connectors are 8-pin on both the two- and three-connector cards, flashing a 3-connector BIOS to a 2-connector card should be fine. It's when flashing between mismatched pin counts that things get tricky, e.g. a 2x 8-pin BIOS to an 8+6-pin card: the power table may not assign the correct power limit to the correct source, asking a 6-pin for 8-pin power. That can't happen when all connectors are 8-pin.


Thanks guys for the info, 

Looking at the ASUS XOC BIOS in the OP, I notice that the memory does not downclock in 2D mode but the core does. I imagine this shouldn't cause any issues other than additional power usage and heat on the memory side? I wouldn't tend to use the curve editor anyway, so it doesn't matter too much that it's not available.

With the XOC BIOS, I imagine that while the power is "unlimited", the card will still downclock if it gets too toasty, correct?


----------



## dante`afk

jura11 said:


> Hi there
> 
> I bought it from the Aqua Computer store; it took 3 weeks, as these blocks are not kept in stock as far as I know. Not sure if availability is better right now.
> 
> I have tried a few blocks, like the EK Vector RTX 2080 Ti, Phanteks Glacier RTX 2080 Ti and Bykski RTX 2080 Ti WB.
> 
> With my Zotac RTX 2080 Ti and the EK Vector I have seen, on this loop, idle temperatures of 18-21°C and load temperatures of 38-40°C. With the Palit RTX 2080 Ti and the Bykski WB, temperatures have been in the low 40s during gaming, the highest I have seen being 40-43°C at 23°C ambient. With the Palit RTX 2080 Ti and the Phanteks Glacier WB, temperatures have been in the 39-42°C range, again at 21-23°C ambient.
> 
> I just haven't tried the Heatkiller IV RTX 2080 Ti block yet, which I can't find in stock over here.
> 
> Hope this helps
> 
> Thanks, Jura


So how are the temps on something like a 20x stress-test run of Fire Strike Ultra / Time Spy Ultra?


----------



## Jpmboy

ENTERPRISE said:


> Thanks guys for the info,
> 
> Looking at the ASUS XOC BIOS in the OP, I notice that the memory does not downclock in 2D mode but the core does. I imagine this shouldn't cause any issues other than additional power usage and heat on the memory side? I wouldn't tend to use the curve editor anyway, so it doesn't matter too much that it's not available.
> 
> With the XOC BIOS, I imagine that while the power is "unlimited", the card will still downclock if it gets too toasty, correct?


Holding the RAM at a 3D frequency while in 2D mode really has little effect on temperature, since the RAM is not being loaded and unloaded to any extent - but it sure is annoying to watch that frequency stay up. Since the RAM is doing very little in 2D mode, the power use is nil.
I don't know which OC tool is more popular, but on the 5 RTX cards I've had here (three 2080 Tis, of which I still have two, and two RTX Titans) I found that the Galax Xtreme Tuner (0919v) does a great job and easily lets the user lock the card in P0 when max performance is needed. It does not have a profiles function, but until you reset the sliders, it loads your OC each time you start the tool. The AB beta is great, but for the 2080 Ti, Xtreme Tuner works very well. No Ctrl+F / Ctrl+L curves; just find the best OC, set it and forget it.


----------



## chispy

Guys, can I flash the Kingpin XOC unlimited BIOS on an NVIDIA reference 2080 Ti (bought directly from the NVIDIA store)? I need voltage control only, as 1.093 V does not get me far (only 2220-2235 MHz at 1.050 V max on the 380 W BIOS) on a water chiller at -21°C. I'm pre-testing different BIOSes before giving it a ride on LN2. Thanks in advance.


Kind regards: Chispy


----------



## Jpmboy

chispy said:


> Guys, can I flash the Kingpin XOC unlimited BIOS on an NVIDIA reference 2080 Ti (bought directly from the NVIDIA store)? I need voltage control only, as 1.093 V does not get me far (only 2220-2235 MHz) on a water chiller at -21°C. I'm pre-testing different BIOSes before giving it a ride on LN2. Thanks in advance.
> 
> 
> Kind regards: Chispy


AFAIK, the KPE uses different VRM and power control. I'd be very cautious trying to raise the voltage on an FE with the wrong command set. Check Tin's website for help with that.


----------



## Sheyster

Jpmboy said:


> I don't know which OC tool is more popular, but on the 5 RTX cards I've had here (three 2080 Tis, of which I still have two, and two RTX Titans) I found that the Galax Xtreme Tuner (0919v) does a great job and easily lets the user lock the card in P0 when max performance is needed. It does not have a profiles function, but until you reset the sliders, it loads your OC each time you start the tool. The AB beta is great, but for the 2080 Ti, Xtreme Tuner works very well. No Ctrl+F / Ctrl+L curves; just find the best OC, set it and forget it.





ENTERPRISE said:


> The Curve editor I would not tend to use anyway so that does not matter too much that it is not available.


Earlier this year, after I flashed to the Galax 380 W BIOS, I used their tool for about 2 days. Overall I much prefer AB + a Ctrl+F/Ctrl+L curve saved to a profile. I lock a 2100 MHz OC at 1.068 V and game all day with my 3 fans locked at 60%. Temps stay in the 60s. The frequency does drop a few bins, but it's quite steady overall.

Enterprise, have you tried this with your 373 W BIOS?
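The "bins" mentioned here are GPU Boost's clock steps: the card sheds a bin at a time as temperature or power thresholds are crossed. A toy illustration of the arithmetic (the 15 MHz step is the commonly observed Turing granularity, not an official figure):

```python
BIN_MHZ = 15  # GPU Boost moves the core clock in roughly 15 MHz steps ("bins")

def clock_after_bins(locked_mhz, bins_dropped):
    """Core clock after GPU Boost sheds a number of bins under heat or power pressure."""
    return locked_mhz - bins_dropped * BIN_MHZ

print(clock_after_bins(2100, 2))  # a 2100 MHz lock down 2 bins -> 2070 MHz
print(clock_after_bins(2145, 3))  # a 2145 MHz lock down 3 bins -> 2100 MHz
```

This is why a curve point locked at one voltage can still show a slightly lower clock in a long run while staying "quite steady overall".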


----------



## Jpmboy

Sheyster said:


> Earlier this year, after I flashed to the Galax 380 W BIOS, I used their tool for about 2 days. Overall I much prefer AB + a Ctrl+F/Ctrl+L curve saved to a profile. I lock a 2100 MHz OC at 1.068 V and game all day with my 3 fans locked at 60%. Temps stay in the 60s. The frequency does drop a few bins, but it's quite steady overall.
> 
> Enterprise, have you tried this with your 373 W BIOS?


:thumb:


----------



## Spiriva

I haven't been keeping up with this thread for a while, but is there a "better" BIOS for the FE 2080 Ti (watercooled) than the Galax 380 W one?


----------



## ENTERPRISE

Sheyster said:


> Earlier this year, after I flashed to the Galax 380 W BIOS, I used their tool for about 2 days. Overall I much prefer AB + a Ctrl+F/Ctrl+L curve saved to a profile. I lock a 2100 MHz OC at 1.068 V and game all day with my 3 fans locked at 60%. Temps stay in the 60s. The frequency does drop a few bins, but it's quite steady overall.
> 
> Enterprise, have you tried this with your 373 W BIOS?


That is not a bad idea! I will look into editing the curve in AB and locking it in before doing any BIOS flashing.

*Edit* 

I seem to get stuck at 1068 mV as the max, which is a limiting factor. I may test the XOC BIOS.


----------



## Xeq54

chispy said:


> Guys, can I flash the Kingpin XOC unlimited BIOS on an NVIDIA reference 2080 Ti (bought directly from the NVIDIA store)? I need voltage control only, as 1.093 V does not get me far (only 2220-2235 MHz at 1.050 V max on the 380 W BIOS) on a water chiller at -21°C. I'm pre-testing different BIOSes before giving it a ride on LN2. Thanks in advance.
> 
> 
> Kind regards: Chispy


I have the Kingpin XOC on an MSI reference card, so it should also work on your card, though that has not been confirmed. This BIOS does have voltage control unlocked, and the default voltage it runs at under load is around 1.115 V. But so far there is no (public) tool that works with it. The updated Classified tool now launches and detects the card, but the voltage control in it does nothing.


----------



## J7SC

Xeq54 said:


> I have the Kingpin XOC on an MSI reference card, so it should also work on your card, though that has not been confirmed. This BIOS does have voltage control unlocked, and the default voltage it runs at under load is around 1.115 V. But so far there is no (public) tool that works with it. The updated Classified tool now launches and detects the card, but the voltage control in it does nothing.


 
...Interesting. Just to clarify: when you say 'Kingpin XOC on an MSI reference card', I assume you mean the NVIDIA reference design, including 2x 8-pin power and the same VRM layout? Thanks.


----------



## chispy

Thanks for the quick reply JP and for the heads up.


----------



## chispy

Xeq54 said:


> I have the Kingpin XOC on an MSI reference card, so it should also work on your card, though that has not been confirmed. This BIOS does have voltage control unlocked, and the default voltage it runs at under load is around 1.115 V. But so far there is no (public) tool that works with it. The updated Classified tool now launches and detects the card, but the voltage control in it does nothing.


Thanks for the feedback, appreciate it.


----------



## Jpmboy

chispy said:


> Thanks for the quick reply JP and for the heads up.


Yeah, the I2C commands to the VRM vary by manufacturer, so it's not surprising that the current Classified tool doesn't work on an FE or other non-KPE cards.
These two FEs still run great with the Galax BIOS, and 1.06 V is the peak voltage - no mods or shunts needed. Daily-driver clocks are 2100 and 8000.


----------



## jura11

dante`afk said:


> So how are the temps on something like a 20x stress-test run of Fire Strike Ultra / Time Spy Ultra?


Hi there 

I still haven't had time to do any tests; I'll probably do some at the end of this week or next week. I ordered 5 L of Mayhems X1 just today, and hopefully it will be delivered this week or so; if it is, I'll do a few tests.

Hope this helps 

Thanks, Jura


----------



## J7SC

FYI, given prior discussions in this thread re. the number of 8-pin PCIe power connectors, der8auer has an interesting (if a bit _tedious_) vid on his YouTube channel showing that 2x 8-pin (vs. for example 3x 8-pin) won't really be a limiting factor. He shows this by literally cutting wires in selected PCIe power connectors to a 2080 Ti Strix. Presumably, with LN2 and 700 W+, extra power delivery via a third 8-pin may yet come into play, but otherwise it shouldn't be a limiting factor.


----------



## mattxx88

Dunno if anyone has already posted it, but here in Italy Amazon.it is selling a PNY 2080 Ti as a 2080 for €900; for those living in the EU it may be interesting.


----------



## ilmazzo

mattxx88 said:


> Dunno if anyone has already posted it, but here in Italy Amazon.it is selling a PNY 2080 Ti as a 2080 for €900; for those living in the EU it may be interesting.


uh oh ah

mate!!!!


----------



## mattxx88

mattxx88 said:


> Dunno if anyone has already posted it, but here in Italy Amazon.it is selling a PNY 2080 Ti as a 2080 for €900; for those living in the EU it may be interesting.


edit: Amazon corrected their mistake, no more discount :S



ilmazzo said:


> uh oh ah
> 
> mate!!!!


----------



## VPII

ENTERPRISE said:


> Thanks guys for the info,
> 
> Looking at the ASUS XOC Bios on the OP. I notice that the memory does not downclock when in 2D mode but the core does. I should not imagine this causes any issues other than additional power usage and heat from the memory side of things ? The Curve editor I would not tend to use anyway so that does not matter too much that it is not available.
> 
> With the XOC Bios I should imagine while the power is "unlimited", the card will still downclock if it gets too toasty correct ?


Yes - if the card starts at 30°C during bench runs or gaming, it will start to downclock at around 41-42°C. Beyond that I cannot say, as 44°C was the highest my core went, but at present it won't even touch 40°C.


----------



## kawarius

I tried flashing it on my Zotac AMP (reference PCB) and it went dark the moment I started the flash. I got it back by flashing from Windows with integrated graphics output, though. But I wouldn't recommend that BIOS. And I've tried a lot of BIOSes, and none of them have bricked the card apart from the Kingpin one. Some of them were the Gaming X Trio, Galax HOF (450 W) and FTW3, and all have just ended with the power limit not working - not "bricking" it.

I'm now running the Strix one and having to live with 1.050 V instead.


----------



## bl4ckdot

kawarius said:


> I tried flashing it on my Zotac AMP (reference PCB) and it went dark the moment I started the flash. I got it back by flashing from Windows with integrated graphics output, though. But I wouldn't recommend that BIOS. And I've tried a lot of BIOSes, and none of them have bricked the card apart from the Kingpin one. Some of them were the Gaming X Trio, Galax HOF (450 W) and FTW3, and all have just ended with the power limit not working - not "bricking" it.
> 
> I'm now running the Strix one and having to live with 1.050 V instead.


You are talking about the 1000 W BIOS, right? From your experience, which worked best for you?


----------



## ENTERPRISE

Just thought I would take a look around for the Galax 2080 Ti HOF BIOS, as per: https://www.techpowerup.com/vgabios/204869/galax-rtx2080ti-11264-180927 

It has a maximum of 450 W, which could serve me a little better, and it also has the same outputs as my EVGA FTW3 Ultra Hybrid cards. Has anyone had any issues with this BIOS, and any gains?


----------



## nycgtr

Anyone with the Phanteks block mind chipping in on your terminal thickness? I noticed mine is like 13.7 mm; hell, having a stop plug on either side would impede flow.


----------



## jura11

nycgtr said:


> Anyone with the Phanteks block mind chipping in on your terminal thickness? I noticed mine is like 13.7 mm; hell, having a stop plug on either side would impede flow.


Hi there

Which one do you have? I have the Phanteks Glacier RTX 2080 Ti WB for the reference PCB, and I just checked: the terminal thickness is around 16-16.3 mm.

Here is a link to the waterblock I have:

http://www.phanteks.com/PH-GB2080TiFE.html

Hope this helps 

Thanks, Jura


----------



## nycgtr

jura11 said:


> Hi there
> 
> Which one do you have? I have the Phanteks Glacier RTX 2080 Ti WB for the reference PCB, and I just checked: the terminal thickness is around 16-16.3 mm.
> 
> Here is a link to the waterblock I have:
> 
> http://www.phanteks.com/PH-GB2080TiFE.html
> 
> Hope this helps
> 
> Thanks, Jura


I have the Strix 2080 Ti version here. I thought it looked off, and it's much narrower than last-gen Phanteks blocks.


----------



## GAN77

nycgtr said:


> Anyone with the Phanteks block mind chipping in on your terminal thickness? I noticed mine is like 13.7 mm; hell, having a stop plug on either side would impede flow.


Are you doing the installation with the stock stop plugs? Can you show a photo of the stop fitting?


----------



## nycgtr

GAN77 said:


> Are you doing installation with complete plugs? Can show photo stop fitting?


I had to swap the stop fittings on the back side for slimmer ones just to get more flow going.


----------



## Esenel

*2080 Ti FE on Kingpin XOC BIOS: 17,283 points*

Time Spy graphics score: 17,283 points

https://www.3dmark.com/3dm/35991500?

Could go up to 2160 MHz.
But it consumed up to 670 W.
Pretty nasty.

But no issues with one PCIe cable feeding both 8-pins.


----------



## GAN77

nycgtr said:


> I had to swap the stop fittings on the back side for slimmer ones just to get more flow going.


I saw a build on the forum with the same block.


----------



## bl4ckdot

Esenel said:


> Time Spy graphics score: 17,283 points
> 
> https://www.3dmark.com/3dm/35991500?
> 
> Could go up to 2160 MHz.
> But it consumed up to 670 W.
> Pretty nasty.
> 
> But no issues with one PCIe cable feeding both 8-pins.


That's insane. Gratz!


----------



## Cancretto

Guys, I need your help. Today I replaced the thermal paste on my 2080 Ti, but when I put everything back together I noticed something really strange.
With the same settings in either Afterburner or Xtreme Tuner Plus, the card now boosts significantly lower. Even though I'm using the Kingpin XOC BIOS, which has locked 3D clock speeds, the GPU and memory clocks somehow manage to randomly drop to idle. The card also has serious problems reaching 1.125 V again; it seems stuck at 1.075 V, and even the few times the card manages to reach 1.125 V, the clocks are almost 100 MHz lower. If I lock the voltage to 1.125 V and increase the MHz offset to match the previous clocks, the card just crashes. I really don't know what is going on. What do you guys think?
Thanks


----------






## J7SC

Esenel said:


> Time Spy graphics score: 17,283 points
> 
> https://www.3dmark.com/3dm/35991500?
> 
> Could go up to 2160 MHz.
> But it consumed up to 670 W.
> Pretty nasty.
> 
> But no issues with one PCIe cable feeding both 8-pins.


 
:thumb: !


----------



## jura11

nycgtr said:


> I had to swap the stop fittings on the back side for slimmer ones just to get more flow going.


This is the block I was thinking of getting as well for my Asus RTX 2080 Ti Strix, but it looks more likely I will get the EK Vector or the Heatkiller IV RTX 2080 Ti Strix, which I'm still not sure is available.

What temperatures are you getting with this block?

How much was flow impeded, or lower, with the normal stop plug fitting?

Hope this helps 

Thanks, Jura


----------



## nycgtr

jura11 said:


> This is the block I was thinking of getting as well for my Asus RTX 2080 Ti Strix, but it looks more likely I will get the EK Vector or the Heatkiller IV RTX 2080 Ti Strix, which I'm still not sure is available.
> 
> What temperatures are you getting with this block?
> 
> How much was flow impeded, or lower, with the normal stop plug fitting?
> 
> Hope this helps
> 
> Thanks, Jura


I haven't tried the Heatkiller yet; it's the only one I haven't tried (my buddy does have the Vector). I have tried the Bitspower, EK Vector, Barrow (the Bykski is pretty much the same thing internally), and now the Phanteks. In terms of performance, tbh I find all of them to be about the same so far, which is hotter than any reference block (tried BP, Vector, XSPC) would do. Looking at 20°C deltas under constant full load.


----------



## J7SC

nycgtr said:


> I haven't tried the Heatkiller yet; it's the only one I haven't tried (my buddy does have the Vector). I have tried the Bitspower, EK Vector, Barrow (the Bykski is pretty much the same thing internally), and now the Phanteks. In terms of performance, tbh I find all of them to be about the same so far, which is hotter than any reference block (tried BP, Vector, XSPC) would do. Looking at 20°C deltas under constant full load.


 
For prior NVIDIA gens (600-1000 series), I had either custom or universal blocks (+120 mm VRM fan), both of which did a very decent job... But for my 2x 2080 Ti (Aorus Xtreme WB), I can't say I have any desire to switch from the factory blocks they came with. Per the attached below, Time Spy Extreme for single (top) at an initial 2190 and dual (bottom) at an initial 2160... temps seem well controlled, and between single- and dual-GPU runs they are only a couple of degrees apart. I'll add that the GPUs do have a solid cooling solution via 3x RX 360/60 rads and two MCP655 pumps, but still, if the factory blocks were garbage, that temp performance wouldn't happen. 

BTW, for a 'closer look' at the factory block, it just so happens that well-known modder 'Declassified Systems' took the (silly) cover off one of those Aorus 2080 Ti factory blocks in a recent build... check the vid between 4m 22s and 5m 42s for more 'under-the-hood' detail.


----------



## jura11

nycgtr said:


> I haven't tried the Heatkiller yet; it's the only one I haven't tried (my buddy does have the Vector). I have tried the Bitspower, EK Vector, Barrow (the Bykski is pretty much the same thing internally), and now the Phanteks. In terms of performance, tbh I find all of them to be about the same so far, which is hotter than any reference block (tried BP, Vector, XSPC) would do. Looking at 20°C deltas under constant full load.


Hi there 

I have tried a few waterblocks too, like the Bykski, the Phanteks, the EK Vector and the Aquacomputer Kryographics RTX 2080 Ti with active backplate, which I haven't tested yet.

This new Phanteks RTX 2080 Ti Strix block I haven't tried or tested; a 20°C delta in my case would mean temperatures higher than I have right now by 2-4°C.

I'm thinking of getting the Heatkiller IV, but would prefer the Aquacomputer Kryographics RTX 2080 Ti if it were available for the Strix.

Hope this helps 

Thanks, Jura


----------



## nycgtr

jura11 said:


> Hi there
> 
> I have tried a few waterblocks too, like the Bykski, the Phanteks, the EK Vector and the Aquacomputer Kryographics RTX 2080 Ti with active backplate, which I haven't tested yet
> 
> This new Phanteks RTX 2080 Ti Strix block I haven't tried or tested; a 20°C delta in my case would mean temperatures higher than I have right now by 2-4°C
> 
> I'm thinking of getting the Heatkiller IV, but would prefer the Aquacomputer Kryographics RTX 2080 Ti if it were available for the Strix
> 
> Hope this helps
> 
> Thanks, Jura


Aquacomputer will only do the reference design; I've confirmed this with them.


----------



## mirkendargen

I tried the Kingpin XOC BIOS on my Seahawk EK X (Gaming X Trio board, 8+8+6 power) and it works great: no power limit and 1.12 V. None of the issues I had with the normal Kingpin BIOS. The Metro Exodus main menu (the most crash-prone thing I can find) is stable at 2130 and pulls 540 W.


----------



## Tragic

Skaarj said:


> Greetings to all!
> Has anyone flashed the BIOS from the MSI 2080 Ti Gaming X Trio 406 W https://www.techpowerup.com/vgabios/205495/205495 onto a reference-design PCB?
> What are your impressions? Thanks!



I have, and this is my impression:

1. Need moar.
2. Minimal improvement... but still improvement.
3. Still feels like there's more under the hood, but maybe only 4% that I'm not harnessing because of the power limit.
4. This was my first time flashing a BIOS to a GPU; learning was fun, but scary. I never should have spent this much on a GPU. Risky.
5. Trying to talk myself into flashing the Galax BIOS that everyone says is the best. No reason to; curiosity, mostly.

That is alllllllllllll.


----------



## Tragic

Cancretto said:


> Guys, I need your help. Today I replaced the thermal paste on my 2080 Ti, but when I put everything back together I noticed something really strange.
> With the same settings in either Afterburner or Xtreme Tuner Plus, the card now boosts significantly lower. Even though I'm using the Kingpin XOC BIOS, which has locked 3D clock speeds, the GPU and memory clocks somehow manage to randomly drop to idle. The card also has serious problems reaching 1.125 V again; it seems stuck at 1.075 V, and even the few times the card manages to reach 1.125 V, the clocks are almost 100 MHz lower. If I lock the voltage to 1.125 V and increase the MHz offset to match the previous clocks, the card just crashes. I really don't know what is going on. What do you guys think?
> Thanks



reflash original bios.
turn off pc.
unplug pc.
take out 2080 ti.
pray for 5 mins.
put it back. 

plug in the pc and turn it on. keep praying.
profit.


----------



## VPII

Cancretto said:


> Guys, I need your help. Today I replaced the thermal paste on my 2080 Ti, but when I put everything back together I noticed something really strange.
> With the same settings in either Afterburner or Xtreme Tuner Plus, the card now boosts significantly lower. Even though I'm using the Kingpin XOC BIOS, which has locked 3D clock speeds, the GPU and memory clocks somehow manage to randomly drop to idle. The card also has serious problems reaching 1.125 V again; it seems stuck at 1.075 V, and even the few times the card manages to reach 1.125 V, the clocks are almost 100 MHz lower. If I lock the voltage to 1.125 V and increase the MHz offset to match the previous clocks, the card just crashes. I really don't know what is going on. What do you guys think?
> Thanks



Not sure which BIOS you are referring to. If I may ask, which 2080 Ti are you running - brand and model? The vcore is usually limited to 1.093 V, even though I've never seen that on my card. I did try the standard Kingpin BIOS but found my clocks with it were very low, even less than the standard 1350 MHz. At present I am using the Strix XOC BIOS with the 1000 W TDP limit. The only drawback is that it limits the core voltage to 1.05 V, but at least it means my card can run 2145 MHz right through any benchmark or game, as my temps stay below 40°C.

Another thing to consider: maybe when you changed the thermal paste and remounted the cooler, you fastened it too tightly, which can also cause issues.


----------



## J7SC

Tragic said:


> reflash original bios.
> turn off pc.
> unplug pc.
> take out 2080 ti.
> pray for 5 mins.
> put it back.
> 
> plug in the pc and turn it on. keep praying.
> profit.


 
...an odd way to find religion, but stranger things have happened.


----------



## Cancretto

VPII said:


> Not sure which BIOS you are referring to. If I may ask, which 2080 Ti are you running - brand and model? The vcore is usually limited to 1.093 V, even though I've never seen that on my card. I did try the standard Kingpin BIOS but found my clocks with it were very low, even less than the standard 1350 MHz. At present I am using the Strix XOC BIOS with the 1000 W TDP limit. The only drawback is that it limits the core voltage to 1.05 V, but at least it means my card can run 2145 MHz right through any benchmark or game, as my temps stay below 40°C.
> 
> Another thing to consider: maybe when you changed the thermal paste and remounted the cooler, you fastened it too tightly, which can also cause issues.


I've tried various BIOSes but no success with any of them. The card is a normal reference PCB from Palit; before repasting I was using some Kryonaut, but I wanted to try Conductonaut again this time. 
The liquid-metal application was perfect - no spillage onto SMDs or anything else; I used a thin layer and closed everything back up. I also thought the problem was overtightening, so I unscrewed the waterblock from the card a bit, but still no success.
Before repasting, the Kingpin XOC BIOS had my card easily running 1.125 V at 2190 MHz with a GPU temperature of about 40-42°C, but after repasting the card's boosting seems pretty broken. It has serious difficulty reaching 1.125 V; it usually boosts to 1.075 V, and the boost curve also seems lower, as at the same voltage steps the card boosts to slightly lower clocks. Even if I lock the voltage, I have to raise the GPU offset a lot to match the same 2190 MHz, and when I launch any stress test the GPU just crashes. Oh, and temps with Conductonaut are also better, like 37-38°C under load. Another thing: with the Kingpin XOC BIOS the core and memory clocks were never at idle (always 1350/7000 MHz), but now, I don't know why, the card can reach idle clocks even with the maximum-performance power plan in the NVIDIA Control Panel. 
I also flashed back to various BIOSes, but the card just can't sustain clocks or voltage.
And finally, of course, I tried replacing the thermal paste again with Kryonaut, but still nothing.
This is the first time something like this has happened to me, and I'm really confused right now. 
I'm sorry for the long reply, but this is a very strange problem.


----------



## mattxx88

Edit: found it, sorry.


----------



## ENTERPRISE

So I tried the XOC and HOF BIOSes on my FTW3 Ultra Hybrids, and the XOC was the best, for obvious reasons. Oddly though, even though I used DDU to clean out the GPU drivers and reinstall, I tend to get driver crashes now and again. No error comes up, but the Windows GUI and everything gets really sluggish, and I have to reboot to remedy it. On top of that, MSI Afterburner and other apps detect 0 mV, even when I flash back to the stock BIOS and do a full GPU driver wipe and reinstall, making me think something has happened somewhere in Windows. So I will re-flash the XOC and do a fresh install of Windows on a test drive and see.

Has anyone else had these issues?


----------



## toncij

ENTERPRISE said:


> So I tried the XOC and HOF BIOSes on my FTW3 Ultra Hybrids, and the XOC was the best, for obvious reasons. Oddly though, even though I used DDU to clean out the GPU drivers and reinstall, I tend to get driver crashes now and again. No error comes up, but the Windows GUI and everything gets really sluggish, and I have to reboot to remedy it. On top of that, MSI Afterburner and other apps detect 0 mV, even when I flash back to the stock BIOS and do a full GPU driver wipe and reinstall, making me think something has happened somewhere in Windows. So I will re-flash the XOC and do a fresh install of Windows on a test drive and see.
> 
> 
> Has anyone else had these issues?


Even without BIOS flashing, I've lately been getting random but guaranteed driver crashes. The driver restarts itself, but it's annoying because it brings the app down.


----------



## fleps

ENTERPRISE said:


> So I tried the XOC and HOF BIOSes on my FTW3 Ultra Hybrids, and the XOC was the best, for obvious reasons. Oddly though, even though I used DDU to clean out the GPU drivers and reinstall, I tend to get driver crashes now and again. No error comes up, but the Windows GUI and everything gets really sluggish, and I have to reboot to remedy it. On top of that, MSI Afterburner and other apps detect 0 mV, even when I flash back to the stock BIOS and do a full GPU driver wipe and reinstall, making me think something has happened somewhere in Windows. So I will re-flash the XOC and do a fresh install of Windows on a test drive and see.
> 
> Has anyone else had these issues?


Driver crashes are usually an unstable overclock. The BIOS might be messing with your GDDR6 timings.

About the 0 mV: are you re-unlocking voltage monitoring/control in the MSI AB settings? When you do a card wipeout, it gets reset inside AB.


----------



## Jpmboy

ENTERPRISE said:


> So I tried the XOC and HOF BIOSes on my FTW3 Ultra Hybrids, and the XOC was the best, for obvious reasons. Oddly though, even though I used DDU to clean out the GPU drivers and reinstall, I tend to get driver crashes now and again. No error comes up, but the Windows GUI and everything gets really sluggish, and I have to reboot to remedy it. On top of that, MSI Afterburner and other apps detect 0 mV, even when I flash back to the stock BIOS and do a full GPU driver wipe and reinstall, making me think something has happened somewhere in Windows. So I will re-flash the XOC and do a fresh install of Windows on a test drive and see.
> *Has anyone else had these issues?*


Has your system completed an in-place upgrade to Windows 10 v1903? If yes, and if it has not been 10 days, try reverting back to 1809. I had the same problem(s) on one rig here with one of the early 1903 distros.


----------



## VPII

Cancretto said:


> I've tried various bios but no success with no one of them. The card is a normal reference pcb from palit, before repasting I was using some kryonaut, but I wanted to try conductonaut again this time.
> The metal application was perfect no spillage over smds or anything else, used a thin layer of metal and closed everything back. I also thought the problem was overtightening so I unscrewed a bit the waterblock from the card, but still no success.
> Before repasting I was using kingpin XOC bios that also made my card run easily 1.125V 2190mhz with a gpu temp of about 40/42° but after repasting the card boosting seems pretty broken, it has serious difficulties to reach 1.125V it usually boosts to 1.075V and also the gpu boost curve seems lower as at the same voltage steps, the card boost to slightly lower clock. Even if I lock the voltage I have to rise a lot the gpu offset to match the same 2190 mhz, but when I launch any stress the gpu just crashes. Oh and temps with conductonaut are also better, like 37/38° under load. Another thing to say is that with kingpin XOC bios the core and memory clock were never at idle (always 1350/7000mhz) but now I don't know why the card can reach idle clocks even with maximum performance power plan in Nvidia control panel.
> I also flashed back to various bioses but that card just can't sustain clocks or voltage.
> And finally of course I tried replacing thermal paste again with kryonaut but still nothing.
> To me it's the first time something like this happen and I'm really confused right now.
> I'm sorry for the long reply but this is a very strange problem.


In all honesty, I'm not really sure what might be happening. When you say it runs at lower clocks, do you mean lower than the 2190MHz you ran before, or way lower, as in 1350MHz? My reason for asking is that my previous Galax card developed an issue where the clocks locked at 1350MHz. Interestingly, the issue was not present in Windows 7, and I managed to get the card working again with a reflash, until it happened again and the card had to be RMA'd.


----------



## Cancretto

VPII said:


> In all honesty I'm not really sure what might be happening. If you say it runs at lower clocks, are you saying lower than the 2190mhz you ran before, or way lower as in 1350mhz. My reason for asking is that my previous Galax card got an issue with locked clocks at 1350mhz. Interestingly this issue was not present in Windows 7 and I manage with a reflash to get the card working again until it happened again and the card had to be RMA'd.


Yeah, with the XOC BIOS the card at idle should run locked at 1350MHz with the memory at full clock speed, but now, when not stressed, the card manages to reach idle clocks like on the stock BIOS: it goes to 350MHz on the core and 450MHz on the memory. I also can't reach the maximum boost of 2190MHz with the same offset as before, or even by raising it. I feel something is broken with GPU Boost, because the same core clock offset now corresponds to a lower boost clock.


----------



## Tragic

J7SC said:


> ...an odd way to find religion, but stranger things have happened


Prayer is an action of the heart; it's not exclusive to religion.


----------



## VPII

Cancretto said:


> Yeah, with xoc bios the card at idle should run locked at 1350mhz and full clock speed on memory, but now the card when not stressed manage to go idle clock like stock bios speeds, it goes 350 on core and 450 on memory. And also I can't reach maximum boost of 2190mhz with same offset of before, or even rising that up. I feel there's something broken with gpu boost, because it seems that the same core clock offset now correspond to a lower boost clock


Hi there, if your core clock is basically stuck at 1350MHz all through a benchmark, you'll need to RMA the card. I had a similar issue and got the card RMA'd. Galax did offer a firmware update for the card I had, since it functioned normally until it failed two or so months in, but I did not wait for that and got a replacement instead.


----------



## Cancretto

VPII said:


> Hi there, if your core clock is basically stuck at 1350mhz all through a benchmark you'll need to RMA the card. I had a similar issue and got the card RMA'd. Galax did propose a firmware update to the card I had as it was functioning normally until two or so months later, but I did not wait for that so I got a replacement.


Hi man, thanks for the help 
I did some more testing today and I think the card has some kind of VRM failure: I checked with a digital multimeter and found that some phases are shorted out. Also, today I stressed the GPU with very demanding benchmarks that utilize the card at 100%, and after a few seconds it just crashes, even on the stock BIOS at stock clocks.
I'm going to RMA the card and hope for a new GPU that doesn't clock badly.


----------



## Chris Marshall

I have an EVGA 2080 Ti XC using the Galax 380W BIOS and things are going OK.

I can hit 2115MHz stable at around 1040mV, voltage limited and with no power limit (manual offset +200).
I can go up to 2160MHz, with crashes in some games but not in 3DMark (manual offset +220).
I tried locking the voltage using the curve at 1040, 1050, 1060, 1080 and 1090mV; they all crash in games... don't know why.

Is there a better BIOS I can try with this card to stop the voltage limit and get stable over 2145MHz?
I'm on custom water, so temps are no problem.
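Not from Chris's post, but for anyone confused about why a curve lock can behave differently from a plain offset: here's a rough Python sketch of the two MSI AB operations. The curve points below are invented illustrative numbers, not real Turing V/F data.

```python
# Hypothetical V/F curve sketch: shows (with made-up numbers) how a flat
# core offset shifts every point of the boost curve, while "locking" the
# curve pins the card to a single voltage/clock point.

# (voltage in mV, boost clock in MHz) - invented example points
stock_curve = [(950, 1905), (1000, 1975), (1040, 2010), (1093, 2070)]

def apply_offset(curve, offset_mhz):
    """Shift the whole curve up by a flat offset, like AB's core slider."""
    return [(mv, mhz + offset_mhz) for mv, mhz in curve]

def lock_point(curve, lock_mv):
    """Pin the card to the single curve point at lock_mv (AB's Ctrl+L lock)."""
    return next((mv, mhz) for mv, mhz in curve if mv == lock_mv)

offset_curve = apply_offset(stock_curve, 105)  # e.g. a +105 offset
print(offset_curve)                  # every point sits 105 MHz higher
print(lock_point(offset_curve, 1040))  # one pinned voltage/clock pair
```

With an offset, the card still walks the whole (shifted) curve as load and temperature change; with a lock, it sits on one point, which is why a lock that looks equivalent on paper can still crash in games.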


----------



## Tragic

Chris Marshall said:


> I have an EVGA 2080ti XC using the GALAX 380W Bios and things are going ok.
> 
> I can hit 2115mhz stable at around 1040mv with voltage limit and no power limit. Manual offset +200
> I can go up to 2160mhz with crashes in some games but not in 3d mark. Manual offset +220
> I try locking the voltage using curve at 1040, 1050, 1060, 1080, 1090 all crashes in games.... dont know why.
> 
> 
> Is there a better bios I can try with this card to stop the voltage limit and get stable over 2145mhz?
> Im on custom water so no problem with temps.


The possibility exists that your card tops out at 2115MHz.
My Gaming X Trio hits 2115MHz stable on air with the stock BIOS, but the unofficial 400W (406W) MSI BIOS puts it at 2145MHz; any more is unstable even with temps at 54°C. Watts before the BIOS flash were 330W in GPU-Z, but after the flash it went up to 390-ish watts.


----------



## VPII

Chris Marshall said:


> I have an EVGA 2080ti XC using the GALAX 380W Bios and things are going ok.
> 
> I can hit 2115mhz stable at around 1040mv with voltage limit and no power limit. Manual offset +200
> I can go up to 2160mhz with crashes in some games but not in 3d mark. Manual offset +220
> I try locking the voltage using curve at 1040, 1050, 1060, 1080, 1090 all crashes in games.... dont know why.
> 
> 
> Is there a better bios I can try with this card to stop the voltage limit and get stable over 2145mhz?
> Im on custom water so no problem with temps.


Hey Chris, I'd say the Strix XOC BIOS might be worth it. Yes, it's scary, as it's a 1000W TDP BIOS, but with the vcore limited to 1.05V you'll hit 440 to 460W at most. I've been using it for about a month and my card runs 2145MHz all day.


----------



## dentnu

I just got the EK-Vector Trio RTX 2080 Ti block and am using it on a 360mm rad cooling CPU+GPU. I am seeing GPU idle temps of 27-28°C and a max load of 54°C running the Heaven and Time Spy benchmarks. It is overclocked to 2145MHz on the core and 8000MHz on the memory. I would like to know if these temps are acceptable, and what most of you on water are seeing? Thanks
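As a rough way to sanity-check numbers like these, here's a back-of-envelope Python sketch. The "100W per 120mm section at ~10°C water-over-ambient" figure is only a common rule of thumb (an assumption, not a measurement), as are the example heat loads.

```python
# Back-of-envelope loop temperature sketch. Rule of thumb (assumption):
# a 120mm radiator section sheds roughly 100W at about a 10C
# water-over-ambient delta with moderate fans, and the delta scales
# roughly linearly with heat load.

def water_delta(load_w, rad_120mm_sections, w_per_section=100.0, base_delta=10.0):
    """Estimated water temperature over ambient, in degrees C."""
    capacity = rad_120mm_sections * w_per_section
    return base_delta * load_w / capacity

# e.g. ~250W GPU + ~150W CPU into one 360mm (3x 120mm) rad:
delta = water_delta(250 + 150, 3)
print(round(delta, 1))  # ~13.3C of water over ambient
```

With ambient water in the low 20s plus ~13°C of loop delta and the block's own GPU-to-water gradient, a mid-50s°C load temp on a shared 360mm loop is in the expected range.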


----------



## chispy



ENTERPRISE said:


> So I tried the XOC and HOF Bios on my FTW Ultra Hybrids and the XOC was the best, for obvious reasons. Though oddly, even though I have used DDU to clean out the GPU drivers and re-install, I tend to get driver crashes now and again..does not come up with an error but the Windows GUI and everything gets really sluggish and I have to reboot to remedy. Further from that MSI afterburner and other apps detect 0mv, even when I flash back to the stock BIOS and do a full GPU Driver wipe and re-install, making me think something has happened somehwere on windows. So will re-flash XOC and do a fresh install of windows on a test drive and see.
> 
> 
> Anyone else had these issues ?


Hi Enterprise, I have had that issue once, after installing several OC tools at the same time (e.g. MSI AB / Trixx / Galax Xtreme Tuner). Somehow it corrupted my driver installs and eventually Windows. I fixed it by reinstalling Windows and installing only one OC tool, MSI AB.

I have tested nearly all the BIOSes available on my Founders Edition, and the best two I have pinned down, in my opinion, are the Galax 380W BIOS, followed by the stock NVIDIA FE BIOS with a shunt mod.

Let us know how your testing goes.


Kind regards, Chispy


----------



## Sheyster

chispy said:


> I have tested nearly all the Bios available on my founders edition and i have pin down the best 2 bios in my opinion are Galax 380w bios followed by Nvidia stock FE edition Bios with a shunt mod.


I too am a fan of the 380W Galax BIOS. That said, if I had to do it over again, I'd roll with an MSI Trio X and the unofficial 406W MSI BIOS. That is the best option right now if you want to stay with the stock air cooler and have the best BIOS possible.


----------



## Martin778

If only we had that BIOS for the Lightning, with working LED controls and all 
I'm bouncing off the power limit in games like FC5, even with the PL set to max in AB.


----------



## J7SC

VPII said:


> Hey Chris, I'd say the Strixx XOC bios might be worth it. Yes scary as it is a 1000 watt TDP bios but with the vcore limited to 1.05 you'll max hit 440 to 460 watt. I've been using it for about a month and my card run 2145 all day.


 
...I'm looking for a quick summary on the Strix XOC BIOS versus the KingPin XOC BIOS. I have those two and the Galax 380W downloaded, but so far I've never switched away from the stock Gigabyte Aorus Xtreme Waterforce BIOS for my two cards, as they regularly pull between 370W and 380W. Ideally, I would like the Gigabyte BIOS but with up to 450W; alas, there's no (publicly available) BIOS editor 

The Strix XOC BIOS has a 'technical' limit of 1000W and is written for 2x 8-pin PCIe, though as stated before it usually won't go much beyond the mid-400W range?! The KingPin XOC has a technical limit of 2000W (or none?) and was written for 3x 8-pin PCIe, though because of the high limit even 2x 8-pin will serve up more than enough? How about down-clocking both GPU and VRAM when not in 3D, and which one does what? Best OC tool, i.e. MSI AB? Which one does not have the MSI AB voltage curve option? And what about I/O connectors getting disabled, and which ones? While I have been trying to keep track of what folks have been posting here, it seems that references to 'the XOC BIOS' could mean either of the two, so I'm trying to collate the info. Tx


----------



## ENTERPRISE

chispy said:


> Hi Enterprise i have had that issue once after installing many OCing software at the same time eg. MSI AB / Trixx / Galax extreme tuner. Somehow it corrupted my driver installs and eventually windows. I fix it by re-installing windows and installing only one OCing software MSI AB.
> 
> I have tested nearly all the Bios available on my founders edition and i have pin down the best 2 bios in my opinion are Galax 380w bios followed by Nvidia stock FE edition Bios with a shunt mod.
> 
> Let us know how your testing is going.
> 
> 
> Kind Regards: Chipsy


Yeah I think that could be it you know, would make a good amount of sense. I am however doing a new Windows install anyway so this should sort it. Using the test Windows with the XOC BIOS everything is running just fine...so I will abandon the old windows install and go with a new one.


----------



## dangerSK

Martin778 said:


> If we only had that BIOS for the Lighting, with working LED controls and all
> I'm bouncing off the power limit in games like FC5, even with the PL set to max in AB.


A strange thing happened today: I was trying my Lightning with the regular LN2 BIOS on water, and safe mode never kicked in... I wonder if it's because of this specific BIOS (which also works with the custom Afterburner) or because I did some hardmods.


----------



## Tragic

alex1990 said:


> *MSI 2080ti Gaming X TRIO
> 
> BIOS 400W*
> 
> i got accept from ru support for upload this test rom
> 
> I tested this 3 weeks, all ok



Using it now; works well. Are there higher power limit BIOSes for this card?


----------



## Cancretto

J7SC said:


> ...I'm looking for a quick summary on the XOC Strix bios 'versus' the KingPin XOC Bios...I got those two and the Galax 380w downloaded, but so far never switched away from my stock Gigabyte Aorus XTReme waterforce for my two cards as they regularly pull between 370w and 380w. Ideally, I would like to get the Gigabyte Bios, but with up to 450w, alas, no (publicly available) Bios editor
> 
> The Strix XOC Bios has a 'technical' limit of 1000w and is written for 2x 8pin PCIE, though as stated before, usually won't go much beyond mid-400w ?! The KingPin XOC has a technical limit of 2000w (or any ?) but was written for 3x 8pin PCIe, but because of the high limit, even 2x 8 pin PCIe, will serve up more than enough ? How about 'down-clocking' both GPU and VRAM when not in 3D / which one does what ? Best oc tool, i.e. MSI AB ? Which one does not have the MSI AB voltage curve option ? What about and which IO connectors getting disabled ? While I have been trying to keep track of what folks have been posting here, it seems that references to 'XOC bios' could mean either of the two, so I'm trying to collate the info. Tx


Hi, I've used pretty much all the XOC BIOSes, and the KingPin 2000W XOC BIOS is way better than the Asus 1000W XOC BIOS in my opinion. Both have the V/F curve editor disabled, so the frequency is locked. The major difference between them is that the Asus one sometimes boosts the voltage up to 1.093V for a short period and then locks at 1.05V, even at 40°C; because of this, clock speeds go down or the card instantly crashes.
The KingPin XOC BIOS, by contrast, locks and overvolts the card at 1.125V when boosting, and the voltage and frequency stay steady regardless of temps. The KingPin BIOS also keeps the core clock locked at 1350MHz and the memory always at its maximum frequency, even when the card is idling, whereas the Asus BIOS can clock down to 405MHz; for me, though, that is not enough of a drawback to prefer the Asus BIOS over the KingPin one.

Also, yes: although the KingPin BIOS is designed for a card with 3x 8-pin power connectors, a reference card with 2x 8-pin connectors can provide enough power, since the reference PCB can theoretically sustain over 600W.
In conclusion, I think the KingPin XOC is a good BIOS to try, but I wouldn't use it on poorly air-cooled cards; it can make the card peak close to 600W, which can be pretty deadly for weak heatsinks, both on the VRMs and on the GPU itself.
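To put some numbers behind the 2x 8-pin discussion above, here's a small Python sanity check. The slot and per-plug figures marked "spec" are the PCI-SIG ratings; the "practical" per-plug figure is only a rough community estimate (quality 8-pin wiring carries far more than spec), included to illustrate why ~600W peaks on a reference board don't melt anything.

```python
# Connector power sanity check for a 2080 Ti style board.
SLOT_W = 75              # PCIe x16 slot, spec
PIN8_SPEC_W = 150        # per 8-pin plug, spec
PIN8_PRACTICAL_W = 300   # per 8-pin plug, rough real-world capability (assumption)

def board_power_limits(n_8pin):
    """Return (spec limit, rough practical limit) in watts."""
    spec = SLOT_W + n_8pin * PIN8_SPEC_W
    practical = SLOT_W + n_8pin * PIN8_PRACTICAL_W
    return spec, practical

spec, practical = board_power_limits(2)
print(spec, practical)  # 375 675: a ~600W peak exceeds spec but not the wiring
```

So a 2x 8-pin reference card running the KingPin XOC is well outside the 375W spec envelope, but still inside what the cabling can physically deliver; the VRM and cooling, not the connectors, are the real risk.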


----------



## krizby

Some good news for non-A chip owners: there are BIOSes with a 310W power limit now:

https://www.techpowerup.com/vgabios/208274/208274

Hopefully they will give non-A chips the same power limits as A chips in the near future.


----------



## J7SC

Cancretto said:


> Hi, i've used pretty much all XOC bioses, the kingpin 2000W XOC bios is way better than Asus 1000W XOC bios in my opinion. Both have V/F curve editor disabled, so frequency is locked, the major difference between the bioses is that the Asus one sometimes boost voltage up to 1.093V for a short period of time and then it locks at 1.05V, even at 40°C, due to this reason clock speeds goes down or the card instantly crashes.
> The kingpin XOC bios instead when boosting locks and overvolts the card at 1.125V, and despite temps the voltage and frequency are steady. Also kinpin bios has core clock locked at 1350mhz and memory frequency always at it's max, even when the card is idling, the asus bios instead can clock down to 405mhz, but for me this is not enough of drawback to prefer the asus bios instead than the kingpin one.
> 
> Also, yes although the kingpin bios is designed for a card with 3x8 power connectors, a reference card with 2x8 connectors can provide enough power for the card, since reference pcb can teorically sustain over 600W of power.
> However in conclusion, i think kingpin xoc i'ts a good bios to try, but i wouldn't use it on crappy air cooled cards, it can make the card peak close to 600W, so i think it can potentially be pretty deadly on crappy heatsinks for VRMs and GPU itself


 
Thanks Cancretto, this is much appreciated! :thumb: As mentioned, the term 'XOC BIOS' was getting a bit confusing in some posts; a la Eminem, 'will the real XOC BIOS please stand up...'.

As to cooling, I'm running 2x MCP655 pumps and 3x XSPC 360/60 rads just for the two (factory) water-cooled 2080 Ti cards, and per the re-post below, the setup can sustain Time Spy Extreme for both cards (bottom = 2 cards, top = 1 card) at well below 40°C. Obviously, that's on the stock (~375W) BIOS, though. The other pic is from very light GPU-Z max-clock stock rendering, so voltages aren't even the major concern; a sustained PL of 430W to 450W is what I'm after. Beyond that, my PSU limit (it also supplies an overclocked 16c/32t Threadripper) is going to end the party anyhow...
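That PSU-limit worry is easy to put in numbers. A hedged Python sketch, with all the draws assumed for illustration (a 1600W unit, 450W per GPU, ~350W for the overclocked Threadripper, ~100W for the rest; none of these are stated figures):

```python
# PSU headroom sketch with assumed component draws (illustrative only).

def psu_headroom(psu_w, draws_w, margin=0.80):
    """Remaining watts against a conservative 80%-of-rating ceiling."""
    usable = psu_w * margin
    return usable - sum(draws_w)

# e.g. an assumed 1600W unit, 2x 450W GPUs, ~350W CPU, ~100W rest:
left = psu_headroom(1600, [450, 450, 350, 100])
print(left)  # -70.0: already past the 80% ceiling
```

Under those assumptions, two cards at a sustained 450W each already push a big unit past a conservative loading ceiling, which is exactly the "PSU ends the party" point.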

I think I'm going to try the KingPin XOC...


----------



## Cancretto

J7SC said:


> Thanks Cancretto - this is much appreciated ! :thumb: As mentioned, the term 'XOC Bios' was getting a bit confusing in some posts - a la Eminem 'would the real XOC Bios please stand up...'.
> 
> As to cooling, I'm running 2x MPC 655 pumps and 3x XSPC 360/60 rads just for the two (factory-) water cooled 2080 ti cards, and per re-post below, the cooling setup can sustain TimeSpy EX for both cards (bottom = 2 cards, top = 1 card) at well below 40 C....obviously, that's for stock (~375w) Bios, though. The other pic is for very light GPUz ***max clock*** stock rendering, so voltages aren't even the major concern, but sustained PL of 430w - 450w is what I'm after....after that, my PSU limit (also supplying a 16c/32t oc'ed Threadripper) is going to end the party anyhow...
> 
> I think I'm going to try the KingPin XOC...


No problem man. With your waterblocks, heat should definitely not be a problem; however, watch out not to overload your power supply, as two cards with that BIOS are really power hungry. If unlimited power is too much for your PSU, you can lower the power limit a bit via software and you're good to go.
Have fun!


----------



## Martin778

> Download MSI RTX 2080 Ti Lightning Z Custom PCB (3x8-Pin) 1000W x 100% Power Target BIOS (1000W) Unofficial (Source) Read Me


Someone? Anyone?


----------



## dangerSK

Martin778 said:


> Someone? Anyone?


Not gonna happen. I believe Zhrooms added it there because I told him about this BIOS; very few overclockers have it, and I believe nobody (including me) will leak it.


----------



## Martin778

Bummer, that means the RTX Trio is actually a much better choice, as it can go 400W+.


----------



## VPII

J7SC said:


> ...I'm looking for a quick summary on the XOC Strix bios 'versus' the KingPin XOC Bios...I got those two and the Galax 380w downloaded, but so far never switched away from my stock Gigabyte Aorus XTReme waterforce for my two cards as they regularly pull between 370w and 380w. Ideally, I would like to get the Gigabyte Bios, but with up to 450w, alas, no (publicly available) Bios editor
> 
> 
> 
> The Strix XOC Bios has a 'technical' limit of 1000w and is written for 2x 8pin PCIE, though as stated before, usually won't go much beyond mid-400w ?! The KingPin XOC has a technical limit of 2000w (or any ?) but was written for 3x 8pin PCIe, but because of the high limit, even 2x 8 pin PCIe, will serve up more than enough ? How about 'down-clocking' both GPU and VRAM when not in 3D / which one does what ? Best oc tool, i.e. MSI AB ? Which one does not have the MSI AB voltage curve option ? What about and which IO connectors getting disabled ? While I have been trying to keep track of what folks have been posting here, it seems that references to 'XOC bios' could mean either of the two, so I'm trying to collate the info. Tx


I'm a little hesitant to try the KingPin XOC BIOS. Not sure if it will work, and I don't really want to brick my card. At present the Strix XOC does well, as I can run 2145MHz all day, but it limits my clocks due to the 1.05V vcore limit. The max VRM temp I measured was just over 50°C, but with the KP XOC we're talking maybe 550 to 600W, whereas the Strix XOC is only 450 to 470W.

Sent from my SM-G960F using Tapatalk


----------



## dangerSK

Martin778 said:


> Bummer, that means the RTX Trio is actually a much better choice as it can go 400W+.


Hmm? No, the Lightning goes up to 410W on the stock LN2 BIOS.


----------



## Martin778

I could swear it throttled earlier than on the normal BIOS. I will do some tests tonight! Do you have the LN2 VBIOS file that you're sure has the 410W PL?


----------



## dante`afk

Esenel said:


> Timespy Graphics Points 17283 points
> 
> https://www.3dmark.com/3dm/35991500?
> 
> Could go up to 2160 MHz.
> But it consumed up to 670W.
> Pretty nasty.
> 
> But no issues with one PCI-E cable having 2x8Pin.


Any complications? Did any ports stop working on the FE?

Is it this one? https://www.techpowerup.com/vgabios/210258/210258


----------



## kawarius

That was with the KingPin 2000W BIOS. My card is back now, and I might try again now that I know how to unbrick it. I'm running the Strix as my daily until then.


----------



## MostUnclean

Hey guys, haven't been on the forums in a LOOOOONG time lol. I'm wondering if you can help me with something: the ASUS-ROG-STRIX-2080TI-O11G-GAMING has a part number of 90YV0CC0-M0NM00. The card I just purchased, on the other hand, has part number 90YV0CC0-M0AA00. Does anyone else have this card, and could you let me know if there are any differences that would affect the fitment of water blocks? I've emailed Watercool, EK, Phanteks and Asus; EK said they have no idea, and I'm still waiting on the others. I have seen the recent article showing the "glue" on the corners of the dies, but I have not found any info on which part numbers are associated with those changes.


----------



## Renegade5399

MostUnclean said:


> Hey Guys, haven't been on the forums in a LOOOOONG time lol. I'm wondering if you guys can help me with something, ASUS-ROG-STRIX-2080TI-O11G-GAMING, has a part number of 90YV0CC0-M0NM00. I, on the other hand, just purchased this card, but my part number is 90YV0CC0-M0AA00. I'm wondering if anyone else has this card, and could let me know if there are any differences that would effect the fitment of water blocks? I've emailed Watercool, EK, Phanteks, and Asus, EK said they have no idea, and the others I am still waiting for a reply. I have seen the recent article, showing the "glue" on the corners of the dies, however, I have not found any info on the part numbers associated with those changes.


Same card, different regions of sale. Where did you get the card from?


----------



## MostUnclean

I bought it from Memory Express up here in Canada. So they have different part numbers for different regions? Honestly, I haven't seen that before; or rather, I haven't noticed.


----------



## VPII

J7SC said:


> ...I'm looking for a quick summary on the XOC Strix bios 'versus' the KingPin XOC Bios...I got those two and the Galax 380w downloaded, but so far never switched away from my stock Gigabyte Aorus XTReme waterforce for my two cards as they regularly pull between 370w and 380w. Ideally, I would like to get the Gigabyte Bios, but with up to 450w, alas, no (publicly available) Bios editor
> 
> The Strix XOC Bios has a 'technical' limit of 1000w and is written for 2x 8pin PCIE, though as stated before, usually won't go much beyond mid-400w ?! The KingPin XOC has a technical limit of 2000w (or any ?) but was written for 3x 8pin PCIe, but because of the high limit, even 2x 8 pin PCIe, will serve up more than enough ? How about 'down-clocking' both GPU and VRAM when not in 3D / which one does what ? Best oc tool, i.e. MSI AB ? Which one does not have the MSI AB voltage curve option ? What about and which IO connectors getting disabled ? While I have been trying to keep track of what folks have been posting here, it seems that references to 'XOC bios' could mean either of the two, so I'm trying to collate the info. Tx


Okay, so I flashed my card with the KingPin XOC BIOS... Interestingly, at stock it shows 2070MHz in NVInspector, but running a bench it is actually 2055MHz. Not sure if it was cooling, as this BIOS runs the core a little hotter than the Strix XOC due to the 1.1V stock vcore. I did actually get 2175MHz to work, but it dropped to 2160MHz after touching 43°C. Not sure if I like this BIOS; power was 600+ watts.


----------



## Renegade5399

MostUnclean said:


> I bought it from Memory Express up here in Canada. So they have different part numbers for different regions? Honestly I haven't seen that before, or, rather, I haven't noticed.


Either they got a shipment meant for the Eurasian region due to shortages or someone made a boo-boo. Either way, the card is physically the same.


----------



## MostUnclean

I appreciate the info! Makes shopping for Water blocks that much simpler.


----------



## LanceBoyle

*Gigabyte RTX 2080 Ti WF OC + Alphacool Eiswolf 240*

I flashed the KingPin XOC BIOS to the Gigabyte Windforce OC (Alphacool Eiswolf 240). Voltages are ~1.1V, GPU ~2025MHz in GTA V. It then failed with +600 on the memory, so I'm trying to flash back.


----------



## jura11

MostUnclean said:


> Hey Guys, haven't been on the forums in a LOOOOONG time lol. I'm wondering if you guys can help me with something, ASUS-ROG-STRIX-2080TI-O11G-GAMING, has a part number of 90YV0CC0-M0NM00. I, on the other hand, just purchased this card, but my part number is 90YV0CC0-M0AA00. I'm wondering if anyone else has this card, and could let me know if there are any differences that would effect the fitment of water blocks? I've emailed Watercool, EK, Phanteks, and Asus, EK said they have no idea, and the others I am still waiting for a reply. I have seen the recent article, showing the "glue" on the corners of the dies, however, I have not found any info on the part numbers associated with those changes.



Hi there,

I'm pretty sure I've got the 90YV0CC0-M0NM00, and this one has the glue in the corners of the die. I bought the EK Vector RTX 2080 Ti Strix waterblock and removed four of those standoffs just to be sure.

I still haven't tested how the GPU performs or how well it overclocks, as I bought a 2080 Strix backplate instead of the 2080 Ti Strix one; the replacement will probably be here by Wednesday or Thursday at the latest, and I'll probably run tests at the end of the week if my back allows.

Not sure what the difference is between the two part numbers. Can you check your PCB revision? Mine is CG150P REV 1.01X, which is under the OEM backplate.

A shame that on the EK Vector RTX 2080 Ti Strix the BIOS switch is not well positioned, or rather it's covered by the terminal; you can flip it with a thin screwdriver, but it still looks like a pain in the backside.

Hope this helps.

Thanks, Jura


----------



## ENTERPRISE

Looking to possibly try the KingPin XOC BIOS on my FTW Ultra Hybrids, but I notice there is a lack of fan control, as mentioned by the download in the OP. The EVGA KingPin card has fans, so obviously it has fan control. Does that just mean that flashing the KingPin XOC to another card (a custom PCB, in my case) loses fan control due to the PCB differences between the KingPin and my own/any other card?


*EDIT*

Answered my own question: since the BIOS is designed for extreme overclocking, it automatically sets your fans to 100%, and you cannot adjust them lower.


----------



## J7SC

VPII said:


> Okay so I flashed my card with the Kingpin XOC bios..... Interestingly stock it shows in Nvinspector 2070mhz but running a bench it is actually 2055mhz. Not sure if it was cooling as the bios runs the core a little hotter than the Strix XOC due to the 1.1vcore stock. I did actually get 2175mhz to work, but it dropped to 2160 after touching 43c. Not sure if I like this bios, power was 600+ Watt


 
Thank you VPII  ...a few quick follow-up questions: is the 600W figure the max, and was it at 1.1V / 2175MHz? Any equivalent reading for the KPE XOC at 1.05V? And the max wattage for the Strix XOC at 1.05V (max?), or failing that, at 1.093V? Thanks


----------



## Esenel

dante`afk said:


> any complications? any ports stopped working on the FE?
> 
> is it this one? https://www.techpowerup.com/vgabios/210258/210258


No complications; all ports usable.
This BIOS works great, but as it doesn't have any restrictions, I flashed back to the 380W one for daily use.

I downloaded it from here:
https://xdevs.com/guide/2080ti_kpe/

XOC BIOS, Version 90.02.17.40.88
XOC overclocking version for RTX 2080 Ti KINGPIN card.

Report back on what you can achieve


----------



## kawarius

Just flashed my card again (Zotac AMP), and the KingPin BIOS works fine now; no bricking this time around. But this time I flashed from the Galax BIOS, not the Strix BIOS as I did when I bricked it.

Running on custom water; temps are fine so far, and the clocks are good too. Hit 2160MHz earlier, so I'm doing some runs tomorrow to see if I can push more.


----------



## bl4ckdot

Can you flash the EVGA XC2 Ultra and keep the iCX2 features (temp sensors)? I doubt it, but it's worth asking.


----------



## kx11

A new leak suggests a 2080 Ti-R GPU with 12GB of VRAM, more CUDA cores, and 16Gbps memory.


----------



## Angrycrab

Which KingPin BIOS is compatible with the XC Hybrid? And is the normal KingPin BIOS better than the 380W one from Galax?


----------



## Martin778

No way, a 2080 Ti-R would completely wipe the Titan RTX off the map.


----------



## J7SC

...as with all rumours like that (but especially anonymous ones), several pounds of salt are in order. Still, I wouldn't put it past NVIDIA, which is still a few quarters away from putting the mining bubble behind them in terms of year-over-year stock price comps; and looking at the 1660/Ti/1650 releases, anything is possible. Then you have AMD getting ready to release new GPUs in the lower-to-upper mid-market tiers, and Intel apparently making progress on their GPU(s).

All that said, a 2080 Ti-R would probably tick off both recent higher-end 2080 Ti customers and Titan RTX owners at the same time; not that advisable with increased competition on the horizon for NVIDIA.


----------



## kx11

Martin778 said:


> No way, 2080Ti-R would completely wipe the Titan RTX off the map.



The Titan RTX is for a very specific group of people; the regular 2080 Tis that are filling stores around the world will take a bad hit if that GPU is real.


----------



## VPII

J7SC said:


> Thank you VPII  ...a few quick follow-up questions: Is the 600w figure 'max', and for 1.1v / 2175 MHz ? Any equivalent reading for the KPE XOC at 1.05v ? Max watt figure for Strix XOC at 1.05v (max?) or if not, at 1.093v ? Thanks


It was a little more than 600W, as in up to 608W or thereabouts. As for 1.05V on the KP XOC, I don't think that's possible, as the lowest vcore I've seen was 1.100V. Max wattage on the Strix XOC was around 470W or thereabouts, given that the vcore is limited to 1.05V.

Sent from my SM-G960F using Tapatalk


----------



## J7SC

VPII said:


> I was a little more than 600watt as in up to 608watt or there about. As for 1.05v on the KP XOC don't think possible as the lowest vcore Ive seen was 1.100v. Max wattage on Strix XOC was around 470watt or there about seen that vcore limited to 1.05v.
> 
> Sent from my SM-G960F using Tapatalk


 
Thanks


----------



## ENTERPRISE

Looking here: https://xdevs.com/guide/2080ti_kpe/#cbios 



> Real voltage software monitoring
> As you already know, regular hardware health monitoring software is usually unable to show real voltages supplied to the GPU and also disregard additional voltages like memory and PLL power. This is due to the fact that the NVIDIA driver only supports readout of “requested” voltages, or GPU VID setting. It does not have a standard mechanism to report real voltages presented from VRM. Most end users are not aware of this detail and expect to see real voltages. That’s why we often see many confused people in forums/discussions who see LN2 records with 2500 MHz+, but are often confused about reported clock/voltage measurements and left scratching their heads wondering how is such a high clock possible with only 1.093V?
> 
> Precision X1 and Kingpin Edition RTX 2080 Ti will report real monitoring, just like standalone DMM connected to ProbeIt. Now with onboard OLED display, there is no need for separate DMM when running benchmark sessions. To further comply with this, EVGA will sell optional “Extreme OC pack” with metal frame to support OLED display and it’s interconnect board on main graphics card PCB, while allowing full removal of Hybrid cover and watercooling solution to allow LN2 pot setup.
> 
> BIOS & Tools
> BIOSes in this section are compatible only with EVGA RTX 2080 Ti KINGPIN card and will not work and may damage any regular 2080 Ti FTW3/SC or any other brand RTX card.
> 
> Just like all previous KINGPIN Edition cards, 2080 Ti KPE has three independent different BIOS ROMs and corresponding switch to select between them. Default configuration with switch locked at right (towards power plugs), setting on NORMAL BIOS position. BIOS difference comparison between the three are presented in table 4.












I can see they have three non-XOC BIOSes for the Kingpin card. I am tempted to try one of these three to see if they offer better potential than the Asus XOC BIOS. The Asus XOC BIOS is limited to 1.05 volts, which is ultimately the limiting factor. Can anyone confirm whether the three BIOSes listed in the image below have a higher voltage limit? There does not seem to be any confirmation that I can see.

Thanks in advance.
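The xdevs passage quoted above (driver reports the "requested" VID, not the real voltage at the die) can be illustrated with a toy model. This is only a sketch of the general idea: the hardware offset and droop figures below are made-up illustrative values, not measurements from any card.

```python
# Toy model of why software-reported GPU voltage (the driver's "requested"
# VID) differs from the real voltage measured at the die with a DMM, as the
# xdevs guide explains. The offset and droop numbers are illustrative
# assumptions, not measured values.

def reported_voltage(vid: float) -> float:
    """The NVIDIA driver only exposes the requested VID setting."""
    return vid

def real_voltage(vid: float, hw_offset: float, droop: float) -> float:
    """A DMM on a ProbeIt-style header sees the VID plus any hardware
    overvolt applied behind the driver's back, minus load-line droop."""
    return vid + hw_offset - droop

vid = 1.093        # the familiar driver-side 1.093 V cap
hw_offset = 0.150  # e.g. extra voltage dialed in by an external tool (assumed)
droop = 0.020      # load-line drop under benchmark load (assumed)

print(reported_voltage(vid))                           # software still shows 1.093
print(round(real_voltage(vid, hw_offset, droop), 3))   # the die actually sees ~1.223
```

This is why a 2500 MHz+ LN2 run can show "1.093 V" in monitoring software while the core is really being fed considerably more.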


----------



## Cancretto

ENTERPRISE said:


> Looking here: https://xdevs.com/guide/2080ti_kpe/#cbios
> 
> I can see they have three non XOC Bios's for the Kingpin card. I am tempted to try one of these three BIOS's to see if they offer better potential over the Asus XOC BIOS. The Asus XOC BIOS is limited to 1.05 volts which is ultimately the limiting factor. Can anyone confirm if the these three BIOS's as listed below in the image have a higher voltage limit ? There does not seem to be any confirmation that I can see.
> 
> Thanks in advance.


Hi, as far as I know those three BIOSes are still voltage-locked at 1.093V, at least until you use the EVGA Classified tool to manually raise the voltage, which only works on Kingpin cards.
For this reason I would suggest you try the Kingpin XOC BIOS, which completely removes the temp and power limits and also overvolts non-Kingpin cards up to 1.125V. I think that BIOS is much better than the Strix one.


----------



## ENTERPRISE

Cancretto said:


> Hi, as far as i know those 3 bios are still voltage locked at 1.093V, at least untill you use evga classified tool to manually rise up voltage, that works only on kingpin cards.
> For this reason i would suggest you to try the kingpin xoc bios, that completely remove temp and power limit and also overvolts non kingpin cards up to 1.125V. I think this bios is much better than the strix one.


The Kingpin XOC does not remove temp limits; again, that is only possible on the Kingpin cards due to a hardware difference, as per the linked documentation, but that would not bother me too much. I am hesitant to use the Kingpin XOC BIOS as I am looking for a daily driver, so I am not sure running it all the time would be ideal. Further, there is no fan control, so it would sound like a jet engine all the time.

The Asus XOC BIOS is locked to 1.093V as far as I recall, even though it only ever shows up as 1.05V, but as I understand it that is an Nvidia driver limitation. I may end up sticking with the Asus XOC then, as the Kingpin XOC will drive me nuts at 100% fans all the time. I was just wondering if any of the standard Kingpin BIOSes were any good, but they may not offer me anything over the Asus XOC.


----------



## THC Butterz

I have an XC Ultra and have flashed the Galax 380W BIOS in the past, and it worked brilliantly, but this new "official" one in the OP breaks my card: it shows up as an MSI card, not Galax, and all 3D acceleration is broken. I am attaching a screenshot of the BIOS I flashed and how it shows up in Afterburner (all sliders are disabled).


----------



## J7SC

Oh, how I long for the pre-UEFI Bios days...i.e. on GTX 670/680 series cards...save Bios, convert to .txt file, use notepad to change parameters like PL, save and convert back to .rom...

Re. 2080 Ti custom Bios, max cooling (starting no higher than in the low 20s C and never exceeding 38 C) yields both higher clocks and more PL 'budget' available. Presumably, that still holds for both the Strix XOC and KPE XOC with their much higher PL limits, so even with 1.05v per driver limitation, low temps can help a lot with overall MHz and FPS / scores.
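The temperature/clock relationship described above can be sketched as a toy GPU Boost model: the algorithm sheds roughly one clock bin each time the core crosses a temperature threshold, which is why a loop that never exceeds ~38°C holds more of its boost. The bin size, thresholds, and step below are illustrative assumptions, not NVIDIA's published values.

```python
# Rough sketch of why low coolant temperatures yield higher sustained clocks
# under GPU Boost: one ~15 MHz bin is dropped per temperature step above a
# first threshold. All constants here are illustrative assumptions.

def boost_clock(max_clock_mhz: int, temp_c: float,
                first_threshold_c: float = 38.0,
                step_c: float = 5.0, bin_mhz: int = 15) -> int:
    """Drop one boost bin for every `step_c` degrees above the threshold."""
    if temp_c <= first_threshold_c:
        return max_clock_mhz
    bins_lost = int((temp_c - first_threshold_c) // step_c) + 1
    return max_clock_mhz - bins_lost * bin_mhz

print(boost_clock(2130, 35))  # cold water loop: full boost held
print(boost_clock(2130, 52))  # warm loop: a few bins shed
print(boost_clock(2130, 70))  # air cooler: noticeably lower sustained clock
```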


----------



## Jpmboy

J7SC said:


> Oh, how I long for the pre-UEFI Bios days...i.e. on GTX 670/680 series cards...save Bios, convert to .txt file, use notepad to change parameters like PL, save and convert back to .rom...
> 
> Re. 2080 Ti custom Bios, max cooling (starting no higher than in the low 20s C and never exceeding 38 C) yields both higher clocks and more PL 'budget' available. Presumably, that still holds for both the Strix XOC and KPE XOC with their much higher PL limits, so even with 1.05v per driver limitation, low temps can help a lot with overall MHz and FPS / scores.


yeah, a Turing (or even Pascal) bios editor would be wonderful!


----------



## VPII

@J7SC I noticed something interesting. I picked it up from the AMD section: the Windows 10 1903 update helps the Physics and Combined scores in Fire Strike, and also increases the Graphics score in Time Spy but drops the CPU score. Link to my TS run last night attached.

http://www.3dmark.com/spy/7181910

And to compare the one with the cpu at 4.8ghz under Dice

https://www.3dmark.com/spy/6946378



----------



## Tragic

Sheyster said:


> I too am a fan of the 380W GALAX BIOS. This said, if I had to do it over again, I'd roll with an MSI Trio X and the 406w unofficial MSI BIOS. That is the best option right now if you want to stay with the stock air cooler and have the best BIOS possible.


Agreed. I love my X trio

Would you believe the sag bracket, thanks to vibration from the fans ramping up to max when the PC started, moved into the third fan's area and broke off two blades, screwing over my warranty? I had to take the card apart and superglue the blades back on, and now it works perfectly again. I also ordered replacement fans on AliExpress to make it perfect again. The fans I ordered are most likely from the 1080 Ti X Trio version, but they should be fine.

I was so pissed, f*** that sag bracket.


----------



## GAN77

Hi guys!

Does it make sense to use liquid metal when installing a water block? What is the benefit compared to thermal paste?


----------



## bl4ckdot

GAN77 said:


> Hi guys!
> 
> Does it make sense to use liquid metal when installing a water block? What is the profit compared to thermal grease?


That's a bad idea.


----------



## Martin778

GAN77 said:


> Hi guys!
> 
> Does it make sense to use liquid metal when installing a water block? What is the profit compared to thermal grease?


Worked perfectly on 1080Ti, just covered the spots around the core with electrical tape.


----------



## GAN77

Martin778 said:


> Worked perfectly on 1080Ti, just covered the spots around the core with electrical tape.


I used liquid metal on an air-cooled Asus 1080 Ti and got minus 3-5°C.
But I have no experience with liquid metal under water cooling.


----------



## Martin778

I was running like 35-38*C under full load on MSI Gaming X 1080Ti's........had a 560mm HW labs rad for them.


----------



## nizmoz

Just ordered the Gigabyte Aorus Xtreme RTX 2080 ti. It was ordered with my completely new system build linked below. Hopefully I made a good decision! 

https://pcpartpicker.com/list/F937QZ


----------



## bigjdubb

Time will tell; new CPUs are right around the corner.


----------



## dante`afk

For AMD, yeah; for Intel, more like 2020/21.


----------



## J7SC

VPII said:


> @J7SC I noticed something interesting. I picked it up from AMD section the Windows 10 update 1903 assist the physics and combined score in Firestrike but also increases the graphics score in Timespy but drop the cpu score. Attached the link to my TS run last night.
> 
> http://www.3dmark.com/spy/7181910
> 
> And to compare the one with the cpu at 4.8ghz under Dice
> 
> https://www.3dmark.com/spy/6946378
> 
> Sent from my SM-G960F using Tapatalk


 
Thanks - I think this might be the result of MS mitigation of an earlier security update which reduced Physics, but I'm not sure. I'm still on 1809 but will check for the update. Also, I downloaded 1usmus' new Ryzen DRAM Calc 1.51, and actually got tFAW 16 working on my quad-channel setup... that bumped TS Physics up nicely as well. When I have a bit more time, I'll do some new runs for 3DM to see how that impacts the GPU tests.


----------



## kgtuning

I'm thinking of buying an EVGA 2080 Ti XC Gaming. Does anyone have one of these GPUs air-cooled in an SFF case who can tell me how hot it runs? I'm currently running an EVGA 1080 Ti FTW3 without issue (max temp 75°C under load), but I'd like a little more horsepower.


----------



## xl_BlackHawk_lx

Tragic said:


> Agreed. I love my X trio
> 
> would you believe the bracket due to vibration caused by the fans ramping up to max when the pc started moved into the third fan area and broke off 2 blades, scewing over my warranty? i had to take the card apart and superglue the blades back on and now it works perfect again. I also ordered replacement fans on ALI Express to make it PERFECT again. the fans i ordered most likely are from the 1080 ti x trio version but should be fine.
> 
> I was so pissed, f*** that sag bracket.



I have the same GPU (2080 Ti Gaming X Trio) on air cooling and have flashed it to the MSI 2080 Ti Gaming X Trio 406W BIOS. It all went fine and it's working normally. However, I am finding it hard to find a good, stable overclock combination.

If I do more than +104 core and +1000 memory (2025-2010 MHz and 8000 MHz), it crashes. I have set the power limit to 135% and voltage to +100 (1.069 V); fans are spinning at 90% and temps are around 58-62°C while benchmarking.

Am I missing anything here, or is this the limit on this card unless I go water cooling?

Please share your thoughts and overclocking numbers.

Thanks in advance!
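The numbers in the post above follow simple slider arithmetic, sketched below. The ~300 W default board power for the Gaming X Trio 406 W BIOS is an assumption inferred from its 135% maximum power slider, not a documented figure.

```python
# Sanity-check arithmetic for Afterburner-style sliders, using the figures
# from the post above. The 300 W default board power is an assumption.

def power_limit_watts(default_w: float, slider_pct: float) -> float:
    """Power sliders are a percentage of the BIOS default board power."""
    return default_w * slider_pct / 100.0

def displayed_mem_mhz(stock_mhz: int, offset_mhz: int) -> int:
    """Afterburner shows 2080 Ti GDDR6 at 7000 MHz stock; a +1000 offset
    gives the 8000 MHz reading quoted above."""
    return stock_mhz + offset_mhz

print(power_limit_watts(300, 135))                # 405.0 W, in line with the "406W" BIOS name
print(displayed_mem_mhz(7000, 1000))              # 8000 MHz as displayed
print(displayed_mem_mhz(7000, 1000) * 2 / 1000)   # 16.0 Gbps effective per pin (double data rate)
```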


----------



## Martin778

The Aorus and Xtreme are...
uhm... rubbish; shame you didn't watch the reviews. The PCB is OK, but the coolers are very weak.
I sold my Xtreme after a month. 70°C with fans at 100% is unacceptable.


----------



## J7SC

I'm very happy with my two Aorus Xtreme 2080 Tis (for about 6 months now), but they are the factory full-waterblock ones. Same PCB as the air-cooled ones, but I have heard of several folks with issues re. the air-cooled model's fans. If you can change your order to the waterblock model (or at least the AIO one), you might be happier in the long run.


----------



## Cancretto

GAN77 said:


> Hi guys!
> 
> Does it make sense to use liquid metal when installing a water block? What is the profit compared to thermal grease?


Hi, it probably has some benefits; I also tried it, but I experienced pretty weird behavior applying liquid metal on a 2080 Ti with a waterblock. In my case, after insulating all around the die with nail polish and electrical tape, and after a correct application of the liquid metal, when I reassembled everything the card obviously worked, even with low temps, but as soon as I launched something that stressed the GPU it would instantly crash. After a couple of days of testing I could not find why this happens.
So as long as you are careful with the application it's fine, and maybe it will work well for you, but in my case the card just refuses liquid metal.


----------



## 86Jarrod

Cancretto said:


> Hi, it probably has some benefits, i also tried to do it, but i experienced pretty weird stuff when applying liquid metal on 2080 ti with a waterblock. Simply in my case, after insulating with nail polisher and electrical tape all around the die and after a correct application of liquid metal, when i reassamble everything, the card obviously works, even with low temps, but as soon i launch something that stresses the gpu, it will istantly crash. After a couple of days testing this issue i could not find why this happens.
> So as long you are careful with application, it's fine and maybe it can work well for you, but in my case the card just refuses liquid metal.


Had the exact same issue. Reapplied conformal coating and liquid metal multiple times with no success. Great temps, but it would always crash around 100 MHz lower or so than with normal thermal paste. Had to go with Kryonaut.


----------



## zigfr33

Hey all 

I am happy with my EVGA 2080 Ti Black, but this card is severely power-limited, with no official BIOS to increase the power limit. I saw some people having success with a Palit non-A 124% BIOS, so I decided to try it. It is recommended to do this with third-party cooling, as the max fan speed on this BIOS is only 2200 RPM, but even so I still get a decent jump in my 3DMark scores using the stock cooler. Is there any way to increase the max fan speed without going to a third-party cooling solution?


----------



## VPII

J7SC said:


> Thanks - I think this might be the result of MS mitigation of an earlier security update which reduced physics, but then, I'm not sure. I'm still on '1809' but will check for the update. Also, I uploaded 1usmus' new Ryzen Dram Calc. 1.51, and actually got tFAW 16 working on my quad channel setup...that bumped TS physics up nicely as well. When I have a bit more time, I'll do some new runs for 3DM to see how that impacts GPU tests.


Hi there, so with the Windows update it also updated my Nvidia driver to 430.39, not even the latest. So I reinstalled my old 419 driver, and Physics was back more or less where it should be, with Graphics still good.

Sent from my SM-G960F using Tapatalk


----------



## J7SC

VPII said:


> Hi there, so with the win update it also updated my nvidia driver to 340.39 not even the latest. So I installed my old 319 driver and physics were back more or less where it shpuld be with graphics still good.
> 
> Sent from my SM-G960F using Tapatalk


 
Thanks - AFAIK, that Win 10 1903 ("May update") version is not officially out yet... is there an early build floating around for download? Also, I fine-tuned the quad-channel system RAM some more before doing some additional 2080 Ti GPU runs on the weekend. Water-cooled TR 2950X at a 'daily' 4.25 below.


----------



## VPII

J7SC said:


> Thanks - AFAIK, that Win 10 / 1903 update version is not officially out yet / ("May update")...is there an early version for download floating around ? Also, fine-tuned quad-c system RAM some more before doing some additional 2080ti GPU on the weekend. Water-cooled TR 2950x at 'daily' 4.25 below


Get it with Windows insider program under Windows Update. That is how you can get it. Not officially released yet but if available under insider it is good to go.



----------



## bl4ckdot

bl4ckdot said:


> Can you flash the EVGA XC2 Ultra and keep the iCX2 features (temp sensors) ? I doubt it but it's worth to ask


Any idea ?


----------



## GAN77

Colleagues, I need advice, help, and some knowledge of history)
I'm considering buying a 2080 Ti on the reference Inno3D PCB for water cooling. The card will wait at the pick-up point for three days.
Inspection showed a production date of November 2018.
Who remembers the history of the early Turing batch failures: when did the reliability situation stabilize, and from what production date did the first cards with Samsung memory appear?

Thank you in advance for the answers!


----------



## wheatpaste1999

bl4ckdot said:


> Any idea ?


If you flash a bios from a card that doesn't have the ICX sensors you will lose the ability to monitor those sensors.


----------



## jura11

GAN77 said:


> Colleagues, you need advice, help, knowledge of history)
> I consider buying 2080TI on ref. Inno3D design for water cooling. A card at the pick-up point will wait three days.
> Inspection showed the release date in November 2018.
> Who remembers the history of failures of touring the first installments, when the situation with reliability stabilized, when the first maps appeared and with what date of production with Samsung memory?
> 
> Thank you in advance for the answers!



Hi there 

For November 2018 I would guess it will have Micron memory modules, but I have seen RTX 2080 Tis from that time with Samsung modules too. My Zotac came with Samsung memory, while a friend's Palit RTX 2080Ti OC came with Micron; if you buy a Palit RTX 2080Ti OC now, most of them have Samsung memory modules.

Not sure though; without disassembly or GPU-Z we would just be guessing.

Hope this helps 

Thanks, Jura


----------



## bl4ckdot

wheatpaste1999 said:


> If you flash a bios from a card that doesn't have the ICX sensors you will lose the ability to monitor those sensors.


Thanks for the confirmation
+rep


----------



## wheatpaste1999

bl4ckdot said:


> Thanks for the confirmation
> +rep


Thanks for that! The ICX sensors really are handy.

I'm not completely sure it'll work, but if you wanted to flash a different BIOS and keep the ICX sensors, you might try one of the higher power limit FTW3 or KPE BIOSes. Not 100% sure the ICX sensors will carry through, but it's worth trying those BIOSes first.


----------



## bl4ckdot

wheatpaste1999 said:


> Thanks for that! The ICX sensors really are handy.
> 
> I'm not completely sure if it'll work but if you wanted to flash a different bios and keep the ICX sensors you might try one of the higher power limit FTW3 or KPE BIOS. Not 100% sure the ICX sensors will carry through but it's worth trying with those BIOS' first.


I see. I am waiting for Computex before buying a 2080 Ti (we never know what could be announced, and it's only 2 weeks away), and I'm looking for a reference PCB so I can then flash the 380W BIOS or better.


----------



## J7SC

VPII said:


> Get it with Windows insider program under Windows Update. That is how you can get it. Not officially released yet but if available under insider it is good to go.
> 
> Sent from my SM-G960F using Tapatalk


 
Ahh, good to know... my Insider account may yet be working (I was part of it during the pre-roll-out of Win10, back during the last ice age... well, a long time ago). BTW, re. your comment above on Nvidia driver 430.39 vs 419, what were the positive and negative impacts of that driver on GPU and CPU, i.e. in Time Spy?


----------



## escalibur

bl4ckdot said:


> I see. I am waiting Computex to buy a 2080Ti (we never know what could be announced and its only 2 weeks away) and I'm looking for a reference PCB to then flash the 380W Bios or better


I'm also after a Ti, especially since I saw that the Strix's price has dropped to €1199 including shipping _(with QRNK4DYS discount code)_. The 2080 is just 'meh' for an ultrawide monitor.

Computex might be worth waiting for, though I wouldn't hold my breath in terms of lower prices. I do really hope AMD comes up with something that would force NVIDIA to drop RTX prices, even by 10-15%.


----------



## VPII

J7SC said:


> Ahh, good to know...my Insider account may yet be working (was part of it during pre-roll-out of Win10) back during the last ice age (well, a long time ago). BTW, re. your comment above on Nvidia driver 340.39 vs 319, what were the positive and negative impacts of that driver on GPU and CPU, i.e. in TimeSpy ?



Hi there. Well, first of all, I found that 419.xx is better, as the Physics score doesn't go for a ball of hoo-ha. You lose about 1 fps in GPU test 2 but gain 1 or more fps in GPU test 1. I'll stick with this driver for now, as I've tested all of the new ones and they are lacking, or so it seems.


----------



## fireanimal

Strange issue with the EVGA XOC 2000W BIOS: OpenCL benchmark scores are very low, for example in RealBench. I tried different drivers and it made no difference; flashing back to the original BIOS brought the scores back up to normal. Any idea why that may be?


----------



## J7SC

VPII said:


> Hi there, well first of all I found that 419,xx is better as the Physics score do not go for a ball of hoo haa. You do lose about 1 fps in GPU test 2 but gain 1 or more fps in GPU test 1. I'll stick with this driver for now as I've tested all of the new ones and they are lacking, or so it seem.


 
Thanks - I usually stay two or three driver generations behind the latest, unless there was a security fix.


----------



## acmilangr

Hi.

Soon I will get an evga XC gaming 2080ti. (with kraken AIO) 

What is the best bios I can use on that?


----------



## Cancretto

acmilangr said:


> Hi.
> 
> Soon I will get an evga XC gaming 2080ti. (with kraken AIO)
> 
> What is the best bios I can use on that?


Hi, I think you can try the unlimited-power-limit Kingpin XOC BIOS and see how it goes, or you can try the Asus Matrix BIOS and shunt mod the card to achieve a higher power limit.


----------



## acmilangr

Cancretto said:


> acmilangr said:
> 
> 
> 
> Hi.
> 
> Soon I will get an evga XC gaming 2080ti. (with kraken AIO)
> 
> What is the best bios I can use on that?
> 
> 
> 
> Hi, I think you can try unlimited power limit kingpin xoc bios and see how it goes or you can try asus matrix bios and shunt mod the card to achieve a higher power limit

Thanks for the answer

This card has been tested with the 380W Galax BIOS and gets great clocks (up to 2205 MHz in Superposition 4K).

I saw these very high power limit BIOSes and I have some questions:

- Is there any danger from such high power to the VRM or anything else on the card?

- If it is unlimited, will I have any issues with my EVGA 750W G3 power supply?

- This card has a reference PCB. Are the BIOSes you recommend compatible with the EVGA XC Gaming?


----------



## dangerSK

- If you own 2080Ti Lightning Z -

After testing for a few days, I've found that BIOS 90.02.17.40.89 is the best one. Why? It supports Afterburner Extreme, but most importantly it doesn't have the "no fans" bug or the "safe mode" bug! I've run the card on water without issues. I'm also using a shunt mod to increase the power limit from 400W to around 600W (I don't know the exact value; at 1.1V core in Time Spy I read only 200W power consumption, since the shunts "trick" the power monitoring).
Download the BIOS here: https://drive.google.com/open?id=1q6I1fID9inGDUHHiXMCerv5OOnfBtAk9
Please share your results with me so I can improve testing in the future, thanks.
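The under-reporting described above follows from how shunt sensing works: the VRM controller computes current as V_sense / R_shunt, so lowering the shunt resistance scales the reported power down by the same ratio. A sketch, assuming an illustrative 5 mΩ stock shunt with an identical shunt stacked in parallel (the actual values on any given card will differ):

```python
# Why a shunt-modded card under-reports power: the controller still divides
# by the stock shunt value, so the reported figure shrinks by the ratio of
# new to old resistance. The 5 mOhm stock value is an illustrative assumption.

def parallel(r1_ohm: float, r2_ohm: float) -> float:
    """Resistance of two shunts stacked in parallel."""
    return (r1_ohm * r2_ohm) / (r1_ohm + r2_ohm)

def actual_power(reported_w: float, r_stock: float, r_modded: float) -> float:
    """True draw is the reported figure scaled by r_stock / r_modded."""
    return reported_w * (r_stock / r_modded)

r_stock = 0.005                      # assumed 5 mOhm stock shunt
r_mod = parallel(r_stock, r_stock)   # identical shunt stacked on top -> 2.5 mOhm
print(actual_power(200, r_stock, r_mod))  # a 200 W reading implies ~400 W real draw
```

Under this 2x assumption, the ~200 W Time Spy reading in the post above is consistent with roughly double that at the card.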


----------



## Cancretto

acmilangr said:


> Thanks for the answer
> 
> This card have tested with 380w Galax bios and have great clocks (up to 2205mhz on superposition 4k).
> 
> I saw these very high power limit bios and I have some questions :
> 
> -is there any dangerous with so high
> power on VRM or any other on the card?
> 
> -if it is unlimited will I have any issue with my EVGA 750W G3 Power supply?
> 
> -this card have reference pcb. Are these bios you recommend compatible with evga XC gaming?


Yeah, the 380W BIOS is a nice one, but slightly slower than the Matrix BIOS, so if you want to shunt mod your card to raise the power limit, I'd suggest the Matrix BIOS.

Anyway, with the XOC BIOS the card under heavy load may peak around 600W, but even the reference PCB can hold up to over 600W of power.
Obviously your VRM temps will rise a bit, but with a fan pointed at them they will be more than fine, so I'd say it's not a problem.

Your PSU can sustain the card itself even with the XOC BIOS, but depending on your other hardware, if your full system goes under full load you may max out your PSU.

Also yes, those BIOSes are definitely compatible with reference boards; I've been using them for some months on my own reference card.
Hope this helps
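The PSU caution above is easy to put in numbers. A rough budget check for a 750 W unit feeding an XOC-BIOS card that can peak around 600 W; every component draw besides the GPU peak is an illustrative assumption.

```python
# Rough PSU budget check for the scenario discussed above: an XOC-BIOS
# 2080 Ti peaking ~600 W on a 750 W PSU. Non-GPU draws are assumptions.

def headroom_watts(psu_w: float, draws_w: dict) -> float:
    """Remaining capacity after summing every component's draw."""
    return psu_w - sum(draws_w.values())

system = {
    "gpu_peak": 600,         # XOC BIOS transient peak, per the discussion
    "cpu": 120,              # assumed mainstream CPU under combined load
    "board_ram_drives": 60,  # assumed
    "fans_pump": 20,         # assumed
}

print(headroom_watts(750, system))  # negative: simultaneous full load can exceed the PSU
```

A negative result is exactly the "full system under full load may max out your PSU" case above; in GPU-only benchmarks the CPU rarely draws its full figure, which is why it usually works anyway.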


----------



## acmilangr

Cancretto said:


> acmilangr said:
> 
> 
> 
> Thanks for the answer
> 
> This card have tested with 380w Galax bios and have great clocks (up to 2205mhz on superposition 4k).
> 
> I saw these very high power limit bios and I have some questions :
> 
> -is there any dangerous with so high
> power on VRM or any other on the card?
> 
> -if it is unlimited will I have any issue with my EVGA 750W G3 Power supply?
> 
> -this card have reference pcb. Are these bios you recommend compatible with evga XC gaming?
> 
> 
> 
> Yeah, 380W bios is a nice one but slightly slower than matrix bios, so if you want to shunt mod your card to rise power limit, I'd suggest you to use matrix bios
> 
> Anyways with xoc bios, the card under heavy load may peak around 600W, but even reference pcb can hold up over 600W of power.
> Obviously your vrm temps will rise a bit, but with a fan pointed at them they will be more than fine, so I'd say it's not a problem.
> 
> Your PSU can sustain the card itself even with xoc bios, but depending on your other hardware, if your full system goes under full load, you may max out your PSU.
> 
> Also yes, those bios are for sure compatible with reference boards, I've been using them for some months on my own reference card.
> Hope this helps /forum/images/smilies/smile.gif

Thanks for the answer. 
What is your card?


----------



## Cancretto

acmilangr said:


> Thanks for the answer.
> What is your card?


I had a Palit Gaming Pro OC liquid-cooled with a Kraken X62 from October 2018; just recently I replaced it with an EVGA FTW3.


----------



## mattxx88

Hi guys, I have an RTX 2080 Ti MSI Gaming X Trio in a custom watercooled system, and this GPU seems very hot to me.
I flashed the 400W BIOS, and I see 52°C in games (my old 1080 Ti never went over 40°C).
Could someone else running this card watercooled share their temps, just to give me an idea?


----------



## Xeq54

mattxx88 said:


> hi guys, i have an rtx 2080Ti MSI gaming X trio, i am on a custom watercooled system and this gpu seems to be very hot to me.
> I flashed 400W bios, and i have 52° in game (my old 1080Ti never did over 40°)
> Some1 else using this cards watercooled can share his temps? just to have an idea


Turing runs a bit hotter than Pascal. I have a custom loop with 2x 280 rads in push/pull, and with fans at 800 RPM I reach around 45 degrees when gaming. With fans at 1800 RPM it stays around 39-42 degrees depending on the game and CPU load, since the CPU is on the same loop.

I had a Pascal card on the same loop and, as you described, it never exceeded 40 degrees while gaming at 800 RPM.


----------



## GAN77

Hello guys!

Can you recommend a BIOS for a reference PCB with Micron memory, with a 360-400 watt limit?


----------



## Nicklas0912

Hello.

Did anyone try the Asus RTX 2080 Ti Strix OC custom-PCB (2x 8-pin) 1000W / 100% power target BIOS on a reference PCB? I heard from someone that it works...


----------



## Jpmboy

GAN77 said:


> Hello guys!
> 
> How can you recommend bios for reference PCB with micron memory, with limit 360-400 watts?


the Galax 380W bios works fine on the FE with micron memory...


----------



## GAN77

Jpmboy said:


> the Galax 380W bios works fine on the FE with micron memory...


Thanks for the answer!

While waiting for the water block, I'm testing the Inno3D RTX 2080 Ti Gaming OC X3 on air.

The card is absolutely not suited to air cooling((
So in the meantime I've been busy undervolting)




Spoiler


----------



## Cancretto

Nicklas0912 said:


> Hello.
> 
> Did any one try the ASUS RTX 2080 Ti Strix OC Custom PCB (2x8-Pin) 1000W x 100% Power Target BIOS (1000W) on a ref PCB? I hear from someone that it works...


Hi, yes that BIOS will work on reference boards; however, unless you're on sub-ambient cooling at low temperature, the card will only boost to around 1.050V, and since the BIOS lacks a V/F curve you can't lock higher voltages. It's pretty similar to the 380W Galax BIOS, but without any frequency fluctuation thanks to the unlimited power limit.
For this reason I don't think it's a great BIOS overall. A better choice is the Kingpin XOC BIOS, which will boost your card at 1.125V without fluctuations, so you can squeeze some more frequency out of your card.


----------



## escalibur

GAN77 said:


> Hello guys!
> 
> How can you recommend bios for reference PCB with micron memory, with limit 360-400 watts?


Gigabyte Windforce OC?


----------



## Nicklas0912

Cancretto said:


> Hi, yes that bios will work on reference boards, however unless you're on sub-ambient cooling with low temperature, the card will boost voltages just around 1.050V and since it lacks of V/F curve, you can't lock higher voltages. It's pretty similar to 380W Galax bios, but without any frequency fluctuation due to unlimited power limit.
> For this reason I think it's not a great bios overall, a better choice you can try is the kingpin xoc bios, it will boost your card at 1.125V without fluctuations so you can squeeze some more frequency out your card


HMMM, where can I find that? I could not see it in the OP


----------



## GAN77

escalibur said:


> Gigabyte Windforce OC?


Inno3D RTX2080Ti Gaming OC X3


----------



## Martin778

Soooo..... I can get a Kingpin for €2k.

How certain is it that I'll get s....ed over by Nvidia, and that within 3-4 months they'll release a 2080 Ti with more CUDA cores that's a lot faster?


----------



## ENTERPRISE

Martin778 said:


> Soooo.....I can get a Kingpin for €2k.
> 
> How certain is I will get s....ed over by Nvidia and within 3-4 months they will release a 2080Ti with more CUDA cores that's a lot faster?


Fairly unlikely, BUT Nvidia does have a habit of releasing new "in-between" products, so it is not impossible; then again, you face that with any hardware upgrade you make.


----------



## Martin778

I'm getting a bit tired of my Lightning Z bouncing off the power limit all the time, and I kind of hope the Kingpin would run under 50°C with 4x Noctua NF-A12x25s.


----------



## Cancretto

Nicklas0912 said:


> HMMM, where can I find that? I could not see it in the OP


You can find kingpin bios here https://xdevs.com/guide/2080ti_kpe/


----------



## Martin778

Ok, bought the Kingpin. YOLO, as they say.


----------



## acmilangr

As I told you, I have an EVGA XC with a Kraken AIO.

With the default BIOS I get about a 40,500 GPU score in Fire Strike, which I think is a really good score.

It is strange that with the Galax BIOS I get worse results, even though I get better clock speeds (2205 MHz max).

Any idea why this happens?


----------



## zack_orner

acmilangr said:


> As I told you I have evga XC with kraken AIO.
> 
> With default bios I get about 40500 gpu score on firestrike that it is really good score I think.
> 
> It is strange that with Galax bios I get worst results even if I have better clock speed (2205 max)
> 
> Any idea why is this happened?


It could be better memory timings on the default BIOS.

2700x rog cross hair hero vii 1 tb 970 evo gskiils 3200cl14 msi 2080 ti gaming x trio asus rog ryou 240 aio


----------



## acmilangr

zack_orner said:


> acmilangr said:
> 
> 
> 
> As I told you I have evga XC with kraken AIO.
> 
> With default bios I get about 40500 gpu score on firestrike that it is really good score I think.
> 
> It is strange that with Galax bios I get worst results even if I have better clock speed (2205 max)
> 
> Any idea why is this happened?
> 
> 
> 
> It could be better memory timmings on the default bios
> 
> 2700x rog cross hair hero vii 1 tb 970 evo gskiils 3200cl14 msi 2080 ti gaming x trio asus rog ryou 240 aio

Any recommendation bios to try?


----------



## Nicklas0912

Cancretto said:


> You can find kingpin bios here https://xdevs.com/guide/2080ti_kpe/


And the Kingpin XOC BIOS works fine on a reference PCB even though it only has 2x 8-pin, while the Kingpin card is 3x 8-pin?


----------



## tmac666

Just tried flashing the Kingpin XOC onto my reference-PCB Galax SG edition.

I can push to 2280 core, temperature 58°C, power up to 600W peak (I am using a 1000W Gold PSU).

But it is not stable.......... sometimes games crash.

Going back to the Galax 380W, it becomes stable again: 2180 core, stable.

So the Kingpin BIOS is not good for me.


----------



## acmilangr

Nicklas0912 said:


> Cancretto said:
> 
> 
> 
> You can find kingpin bios here https://xdevs.com/guide/2080ti_kpe/
> 
> 
> 
> And the kingpin xoc bios works fine on a ref PCB even thoug is only 2x 8 pin? and the kingpin card is 3x 8 pin.

Did you try it?


----------



## MacMus

Could anybody tell me how to decode an EVGA serial number into the manufacture date?

I heard that the 2080 Ti had a lot of manufacturing issues. Recently I have noticed strange behavior like:
- small creeping artifacts in PUBG and other games
- sometimes Windows comes up with a black screen and I need to reboot completely
- small, short millisecond freezes and stutters

This only started happening recently, so first of all I would like to rule out a bad batch...


----------



## Martin778

The rotten apples were mostly the first batches with Micron memory. When did you buy your card?


----------



## Glerox

Oh oh... two evga kingpin incoming. Ouch


----------



## vmanuelgm

Martin778 said:


> The rotten apples were mostly the first batches with Micron memory. When did you buy your card?



Micron and Samsung cards were both affected; it wasn't a defective memory batch but poor quality control in building the card, as Nvidia published.

I've had two cards, same manufacturer, different dates and memory. The first card, a Gainward with Micron from release date, is still alive after shunt soldering.

The second card, a Gainward with Samsung and one of the latest PCBs, is alive and shunt soldered.

If you compare the release cards with the latest ones, speaking of reference boards, some components have changed, for example on the back side of the core.

I also believe some guys broke their cards and claimed they were defective.


----------



## Cancretto

acmilangr said:


> Did you tried it?


Yes, the Kingpin BIOS, even though it is designed for a 3x 8-pin card, definitely works fine on any 2x 8-pin board. I tried it on 2 reference boards and on a FTW3.


----------



## Jpmboy

tmac666 said:


> just tried to flash kingpin xoc to ref pcb Galax SG edition
> 
> can push to 2280 core, temperature 58C, power up to 600w peak, i am using 1000w gold psu
> 
> but it is not stable..........sometimes the game crash
> 
> go back to Galax 380w, it becomes stable again, 2180 core stable.
> 
> so the kingpin is not good


I doubt the galax bios would be game stable at 2280 on that card either. So... Kingpin bios is fine.


----------



## acmilangr

Cancretto said:


> Nicklas0912 said:
> 
> 
> 
> HMMM, where can I find that? I could not see it in the OP
> 
> 
> 
> You can find kingpin bios here https://xdevs.com/guide/2080ti_kpe/




Cancretto said:


> acmilangr said:
> 
> 
> 
> Did you tried it?
> 
> 
> 
> Yes, kingpin bios even though is designed for a 3x 8 pin card definitely works fine on any 2x 8 pin board. I tried it on 2 reference boards and on a ftw3

XOC BIOS, Version 90.02.17.40.88
Is this kingpin bios we are talking?


----------



## Cancretto

acmilangr said:


> XOC BIOS, Version 90.02.17.40.88
> Is this kingpin bios we are talking?


Yes, that's it


----------



## acmilangr

Cancretto said:


> acmilangr said:
> 
> 
> 
> XOC BIOS, Version 90.02.17.40.88
> Is this kingpin bios we are talking?
> 
> 
> 
> Yes, that's it

Thanks a lot, I hope I will not brick it 😛


----------



## bagelybagels

What is the best bios for the Zotac Amp cards that still has hdmi 4k support?


----------



## raceitchris

So questions and concerns – strictly related to reference cards on Kingpin XOC…

I want to give the Kingpin XOC 90.02.17.40.88 a shot on my reference EVGA 2080 Ti Hybrid card, but I'm just nervous about the sheer number of watts. Cancretto mentions in post #7844 that he took a digital multimeter to his card and the VRMs had shorted. Was this due to the Kingpin XOC BIOS you said you were using at the time, or was it some other unrelated anomaly from re-TIMing that card with liquid metal and remounting the cooler? Xeq54 mentions in post #7742 that his reference card drew 800 watts on the Kingpin XOC BIOS while running Kombustor.

The Q&A on the first page of this forum says, "the reference card can handle far above 500W".

Watching Buildzoid's teardown videos, there's no doubt the reference cards have strong VRMs, but in all seriousness, a high-end modern game or benchmark is going to pull 600 watts out of the card on the Kingpin XOC.

I'm not an electrical engineer, but maybe someone who is can chime in on this? Is the reference card's VRM safe at a constant 600W for a 5-hour gaming session under water with great temps, and if so, can you please explain how and why (or why not)?

Is anyone out there running 5-hour gaming sessions with their reference card on Kingpin XOC without issues?


----------



## Cancretto

raceitchris said:


> So questions and concerns – strictly related to reference cards on Kingpin XOC…
> 
> I want to give the Kingpin XOC 90.02.17.40.88 a shot on my reference EVGA 2080 Ti Hybrid card, but just nervous about sheet number of watts. Cancretto mentions in post #7844 that he took a digital multi-meter to his card and the VRM's had shorted. Was this due to the Kingpin XOC bios you said you were using at the time or was it due some other unrelated anomaly in re-timming that card with the liquid metal and remounting of the cooler? Xeq54 mentions in post #7742 that his reference card drew 800 watts on the Kingpin XOC bios while running kombustor.
> 
> The Q&A on the first page of this forum says, “the reference card can handle far above 500W”
> 
> In watching Buildzoid’s tear down video’s there’s no doubt the reference cards have strong VRM’s, but in all seriousness - a high end modern game and benchmark is going to pull 600 watts out of the card on Kingpin XOC.
> 
> I’m not an electrical engineer, but maybe someone that is can chime in on this? Is the reference card VRM safe at a constant 600Watts 5-hour gaming session under water with great temps and if so can you please explain how and why / or not?
> 
> Is anyone out there running 5-hour gaming sessions with their reference card on Kingpin XOC without issues?


Hi, with the previous card that was sent back, I thought the VRM had shorted out, but I hadn't considered that on these cards two phases act as one, so now I'm not even sure anymore whether the card had a VRM failure or something else. Anyway, the problem for sure wasn't related to the Kingpin or any other XOC BIOS.

Also, yes, apparently on Turing liquid metal between the waterblock and the die is a big issue; it messes up the card pretty badly even without shorting or touching any component, yet liquid metal on the same card works flawlessly with the stock air cooler.

Returning to the main topic: the reference board can handle the Kingpin XOC BIOS without problems. I measured VRM temps multiple times with an infrared thermometer, and with only the fan of the Kraken G12 the VRM temps were about 55-60°C, so no problem at all.
Also, before I RMA'd that card I was using that BIOS daily and had no issues. Now I'm running it again on my current card.


----------



## arrow0309

Hiya, just noticed (a bit late though) these new blocks from EK, the Vector RE.
Does anyone have one of these?
I was looking to improve my GPU temps a little (getting up to 46°C with warmer ambient), but I really don't know if the new Vector would be any improvement over my (previous edition) Vector nickel/acetal.
Alternatively I could go back to the Watercool Heatkiller (or even try the new and fancy, yet unavailable, Aqua Computer block), but in those cases I'd have to replace my backplate as well.


----------



## raceitchris

Cancretto said:


> Hi, the previous card that was sent back, i thought had vrm shorted out, but i didn't thought about the fact that on these cards 2 phases act like one, so now i'm not even sure anymore if the card had a vrm failure or something else. Anyways the problem for sure wasn't related to kinpin or other xoc bios.
> 
> Also, yes apparently on turing, liquid metal between waterblock and die is a big issue, it messes up the card pretty badly, even without shorting or touching any component, but liquid metal on the same card works flawlessly on stock air cooler.
> 
> Returning to main topic, reference board can handle kingpin xoc bios without problems, i measured vrms temps multiple times with an infrared thermometer and with only the fan of kraken g12 vrm temps were about 55-60°C, so no problem at all.
> Also before i rma'd that card i was using that bios daily, and i had no issues. Now i'm running it again on my actual card


Thanks for the info. Just flashed to the Kingpin XOC on my reference card. It went smooth as butter. I'm getting an extra 40 on the core and 100 on the memory compared to all the other BIOSes. Feels real good to see that 1.125V!!!

https://www.3dmark.com/spy/7264612


----------



## SimoneTek

Hi guys, I have a Zotac RTX 2080 Ti AMP! Edition, but the BIOS is very limited... only 300W. So I would like to flash the Galax 380W, but I'm not sure about the exact cmd strings. Can someone type out all the cmd commands? Thanks


----------



## arrow0309

SimoneTek said:


> Hi guys, I have a Zotac RTX 2080Ti Amp! Edition but the bios is very limited.. Only 300w... So I would like to flash the Galax 380w but I'm not sure about all string of cmd.. Can someone type all the cmd codes? Thanks


First page op, FLASH | GUIDE (Click SHOW)
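Since this question comes up a lot: the guide in the OP boils down to a handful of nvflash commands. Here's a dry-run sketch that only echoes each step (the `.rom` filenames are placeholders, exact flags can vary between nvflash versions, and flashing is at your own risk):

```shell
#!/bin/sh
# Dry-run sketch of a typical 2080 Ti BIOS flash sequence.
# It echoes each command instead of executing it; swap 'echo' for
# the real call and run from an elevated prompt when ready.

run() { echo "$@"; }

run nvflash64 --save backup.rom    # 1. back up the current BIOS first
run nvflash64 --protectoff         # 2. disable the EEPROM write protection
run nvflash64 -6 galax380.rom      # 3. flash, acknowledging the ID mismatch prompt
# 4. reboot; a couple of full power cycles may be needed afterwards
```

Restoring is the same `-6` flash pointed at `backup.rom`, which is why step 1 matters.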


----------



## SimoneTek

Oh sorry.. I didn't know that there was a guide.. Thank you man


----------



## Pedropc

Sorry for my bad English. Can you flash an FE with the BIOS of the EVGA FTW3? Do all the video outputs still work? Thanks and best regards.


----------



## Spiriva

I've got an Nvidia FE 2080 Ti (with the 380W BIOS flashed) and an EK waterblock on it. Would you guys recommend the Kingpin "XOC BIOS, Version 90.02.17.40.88" as a daily driver instead?


----------



## Asmodian

Stock BIOS with a shunt mod is the way to go.
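For context on the shunt-mod suggestion: the card estimates power from the voltage drop across small sense resistors, so soldering an equal-value shunt on top halves what the monitor sees and roughly doubles the effective limit. A quick sketch of that arithmetic (the 5 mΩ stock value and the 380W limit are illustrative assumptions, not measured from any card here):

```python
# Shunt mod math: the controller computes power from the voltage drop
# across a sense resistor. Paralleling an equal-value shunt halves the
# sensed resistance, so the card reports half the true power and the
# BIOS power limit is effectively doubled.

def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

R_STOCK = 0.005                       # 5 mOhm stock shunt (assumed typical)
R_MOD = parallel(R_STOCK, R_STOCK)    # equal shunt stacked on top

scale = R_STOCK / R_MOD               # factor by which true power exceeds reported
print(scale)                          # 2.0

bios_limit_w = 380
print(bios_limit_w * scale)           # true draw when the card "reports" 380 W: 760.0
```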


----------



## GAN77

Spiriva said:


> I got an Nvidia FE 2080ti (with the 380W bios flashed) and a EK waterblock on it. Would you guys recommend the kingpin "XOC BIOS, Version 90.02.17.40.88" bios as a daily driver driver instead ?


The Nvidia FE has a 380-watt BIOS?


----------



## bp7178

No, he flashed a different BIOS on it.

I have a 2080 Ti FE with a 380W BIOS on it, but I hit the voltage cap. So unless you're actually being capped by the power limit, I don't know that it makes sense to flash the card just to raise the power limit.


----------



## pewpewlazer

bp7178 said:


> No, he flashed a different BIOS on it.
> 
> I have a 2080 Ti FE with a 380w bios on it, but I hit the voltage cap. So unless you're actually getting capped by the power limit, IDK it makes any sense to flash the card in an attempt to raise the power limit.


EVERY 2080 Ti is capped by power limit with a 380w or lower limit BIOS.


----------



## Spiriva

Asmodian said:


> Stock BIOS with a shunt mod is the way to go.


No, I flashed the Galax 380W BIOS to it. But I'm curious whether anyone has flashed the Kingpin BIOS on an FE card and used it 24/7, and how that goes?


----------



## SimoneTek

Spiriva said:


> Asmodian said:
> 
> 
> 
> Stock BIOS with a shunt mod is the way to go.
> 
> 
> 
> No i flashed the galax 380w bios to it. But Im curious if anyone have flashed the kingpin bios on a FE card and use it 24/7, and how that goes ?
Click to expand...

The problem is that there is a physical limit on the PCI power connectors. An 8-pin can't handle much more than 200W reliably (the spec limit is 150-170W), so for a 2x 8-pin 2080 Ti the best BIOS is the Galax 380W.

In fact:
PCIe slot: 75W
8-pin: 150W (x2) = 300W

375W total. The Galax is perfect.
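That arithmetic is just the connector spec ratings added up; as a sketch (the ~200W "stable" figure above is anecdotal headroom, only the spec numbers are summed here):

```python
# Nominal PCIe power budget, per the connector spec ratings.
PCIE_SLOT_W = 75     # PCIe x16 slot
EIGHT_PIN_W = 150    # spec rating per 8-pin PCIe connector

def board_power_budget(n_eight_pin: int) -> int:
    """Spec power budget in watts for a card with n 8-pin connectors."""
    return PCIE_SLOT_W + n_eight_pin * EIGHT_PIN_W

print(board_power_budget(2))  # reference 2080 Ti: 375
print(board_power_budget(3))  # Kingpin-style 3x 8-pin: 525
```

Which is why the 3x 8-pin Kingpin card can carry a far higher limit in spec than any reference board.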


----------



## J7SC

SimoneTek said:


> The problem is that there is a physical limit about the Pci Power connector.. A 8 pin cannot handle more than 200w stable (the atx limit is 150-170)...so for the 2x8 2080Ti the best bios is the Galax 380w
> 
> Infact:
> PCI express: 75w
> 8 pin: 150w (X2) =300w
> 
> 375w total.. The Galax is perfect


 
Do you know how many times this has been covered in this thread?

Anyhow, have a look at just the latest vids on this topic:


----------



## SimoneTek

J7SC said:


> Do you know how many times this has been covered in this thread ?
> 
> 
> 
> Anyhow, have a look at just the latest vids on this topic :
> 
> 
> 
> https://www.youtube.com/watch?v=fRVSGFjKf4E


Honestly, no.

I will watch the video.

Sent from my LYA-L29 using Tapatalk


----------



## Spiriva

I flashed the Kingpin BIOS and tried it out a bit on my Nvidia 2080 Ti FE card; it worked fine.










I never saw it pull more than ~580W. I pulled the memory speed down from 8000 MHz to ~7400 MHz. The highest it would run was ~2180 MHz; it crashed at 2190 MHz.
It never went above 42°C, although it's pretty cold in Sweden today, ~10°C outside, and I had the window open, so it might have been around 18-19°C in the room.

All in all, this BIOS gave an extra ~40 MHz.
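A note on those memory numbers: GDDR6 is double-data-rate, so tools that report the DDR clock (e.g. 7000 MHz at the 2080 Ti's stock 14 Gbps) show half the effective rate. Assuming the 8000/7400 figures above are that kind of readout, a quick conversion sketch:

```python
# Convert a GDDR6 "MHz" readout (DDR clock, as many monitoring tools
# report it) to the effective data rate in Gbps (two transfers/clock).
def effective_gbps(reported_mhz: float) -> float:
    return reported_mhz * 2 / 1000

print(effective_gbps(7000))  # stock 2080 Ti: 14.0 Gbps
print(effective_gbps(8000))  # the +1000 MHz overclock above: 16.0 Gbps
print(effective_gbps(7400))  # the dialed-back setting: 14.8 Gbps
```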


----------



## Juub

I have an EVGA 2080 Ti Black Edition that is a lot slower than it should be. Simply put, at stock clocks it's only a hair faster than my previous 2080 Ti, and it gets pretty low scores in Time Spy, Fire Strike and games. Temps, clocks, power usage, voltage and loads are all within the norm, but it's just extremely slow.

Examples:

So far I tried: 

1. Updating all the drivers
2. Doing a clean install of all the drivers using DDU
3. Doing a clean install of Windows
4. Updating all of my motherboard's drivers
5. Checked the PCIe slots speed to make sure it was running at 16x
6. Moved it to a different PCIe slot (which defaulted to 8x for some reason; only the top one does not, and according to the manual all three are 16x)

I tried to overclock the core clock by 250MHz. I now hit 1900+MHz in games when it's boosting. I think the highest I reached was 2040MHz. Performance went up by a substantial amount in both benchmarks and games to the point I am within normal range of a stock 2080 Ti. Performance went up by 12-15% across the board. This is with my CPU OC'd to 5GHz.

I decided to drop the CPU down to 4GHz and kept the GPU overclock but my performance went south again. Then I decided to revert the GPU overclock but overclock the CPU to 5GHz, again, bad performance. If I overclock both the CPU and GPU I get a much higher graphics score.

It goes from 6128 to 6928 on the graphics score alone. That's an enormous ~13% improvement over stock clocks for both CPU and GPU. That's in Time Spy Extreme. In games such as Far Cry 5 and Shadow of the Tomb Raider, performance also goes up significantly (around 15%), which makes little sense because it would suggest I am hitting both a CPU and a GPU bottleneck running a 9900K + 2080 Ti on a high-end Z390 motherboard with 3000MHz RAM. I'm rather confused at the moment.

I already made a separate thread in this section but got only one response after 36hrs so I was thinking maybe folks in the owner's thread could lend a hand.

I started chatting with EVGA to see if there were any other potential issues but I think I'll have to RMA it.

Thanks to everyone for any kind of help.
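For anyone cross-checking the Time Spy Extreme numbers quoted above, the 6128 to 6928 graphics-score jump works out to about 13%:

```python
# Percent change between two benchmark scores.
def pct_change(before: float, after: float) -> float:
    return (after - before) / before * 100

print(round(pct_change(6128, 6928), 1))  # 13.1
```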


----------



## Spiriva

Juub said:


> .



You did not forget to turn off g-sync before running 3dmark ?


----------



## SimoneTek

Zotac Amp 2080Ti with Bios Galax 380w

Before and After (2140mhz)















Sent from my LYA-L29 using Tapatalk


----------



## Juub

Spiriva said:


> Juub said:
> 
> 
> 
> .
> 
> 
> 
> 
> You did not forget to turn off g-sync before running 3dmark ?

 Did not. I wish it was that but nope.


----------



## vmanuelgm

Cancretto said:


> Hi, the previous card that was sent back, i thought had vrm shorted out, but i didn't thought about the fact that on these cards 2 phases act like one, so now i'm not even sure anymore if the card had a vrm failure or something else. Anyways the problem for sure wasn't related to kinpin or other xoc bios.
> 
> Also, yes apparently on turing, liquid metal between waterblock and die is a big issue, it messes up the card pretty badly, even without shorting or touching any component, but liquid metal on the same card works flawlessly on stock air cooler.
> 
> Returning to main topic, reference board can handle kingpin xoc bios without problems, i measured vrms temps multiple times with an infrared thermometer and with only the fan of kraken g12 vrm temps were about 55-60°C, so no problem at all.
> Also before i rma'd that card i was using that bios daily, and i had no issues. Now i'm running it again on my actual card



Liquid metal messes things up between the die and the block???

What do you mean???


----------



## bp7178

pewpewlazer said:


> EVERY 2080 Ti is capped by power limit with a 380w or lower limit BIOS.


My point was that when gaming my card shows it is hitting the voltage cap, not the power limit. With the 380w BIOS, I am not hitting the power limit, but the voltage limit. Changing the power limit to something higher than 380w isn't going to help in my use case.


----------



## pewpewlazer

bp7178 said:


> My point was that when gaming my card shows it is hitting the voltage cap, not the power limit. With the 380w BIOS, I am not hitting the power limit, but the voltage limit. Changing the power limit to something higher than 380w isn't going to help in my use case.


Sounds like an extra couple mhz won't help in your "use case" either. Any modern game at 4k is going to smash the power limit hard, even at 380w. Even at a lower resolution, I can't imagine any relatively modern game not hitting the 380w limit at 1.093v unless you're CPU bottlenecked. The 1440p versions of 3dMark firestrike/timespy hit power limit long before voltage. And obviously with ray tracing you'll be crippled by power limit at 380w as well at lower resolutions.


----------



## Angrycrab

Spiriva said:


> I flashed the Kingpin bios and tried some on my Nvidia 2080ti FE card, it worked fine.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I never saw it pull more then ~580W. I pulled the mem speed down from 8000mhz to ~7400mhz. The highest it would run was ~2180mhz, it crashed at 2190mhz.
> It never went above 42c. Altho its pretty cold in Sweden today, ~10c outside and had my window open so it might have been around 18c-19c in the room.
> 
> All in all, this bios gave an extra ~40mhz.


Can the fans be adjusted on xoc bios or it’s stuck at 100%?


----------



## Cancretto

vmanuelgm said:


> Liquid metal messes up between DIE and Block???
> 
> What do u mean???


I don't know exactly what is happening with the liquid metal, but using it between the GPU die and the waterblock really messes the card up. I mean, the card seems to run fine, it can boot into the OS with no display artifacts and so on, but when it goes under load it instantly crashes even at stock speeds, or it runs but a lot of weird stuff happens, like not overclocking as high as before, sometimes not boosting frequency properly and/or not sustaining maximum voltages.

Obviously temps after liquid metal were a bit better, so I don't think that's the problem. I'm also pretty sure it's not related to possible LM spillage onto GPU components, since I covered the SMDs around the die with nail polish and a bit of insulation tape.

I also tried loosening the waterblock mounting screws, thinking the problem was caused by mounting pressure, but that didn't fix anything either.
I swapped between liquid metal and normal paste a few times, and every time I used liquid metal the GPU had problems, whereas with normal paste I never had that problem.

In the end I just gave up, but it still bothers me a lot and I can't figure out why this happens, especially because before using a waterblock I ran liquid metal between the card and the stock heatsink for months and the card never had a problem.


----------



## Alemancio

*About Kingpin BIOS on EVGA FTW3:*

As far as I've read, the BIOS is compatible, right?

To those who have tried it, how did your clocks/temps change? Would you recommend this BIOS?

Thanks!


----------



## ENTERPRISE

Angrycrab said:


> Can the fans be adjusted on xoc bios or it’s stuck at 100%?


Going by all the documentation on that BIOS, it's stuck at 100% fan. If it weren't, I would leap onto it, but for now I'm using the Asus XOC.


----------



## vmanuelgm

Cancretto said:


> Don't know exactly what is happening with liquid metal, but using it between gpu die and waterblock really messes the card up. I mean, the card seems to run fine, it can boot into OS, no display artifacts and so on, but when it goes under load, it istantly crashes even at stock speeds or it runs, but a lot of weird stuff happens, like not overclocking as high as before, sometimes can't boost frequency properly and/or sustain maximum voltages.
> 
> Obviously temps after liquid metal were a bit better, so i dont think that's the problem. Also I'm pretty sure it's not related to possible LM spillage over gpu components since i covered smd around gpu die with nail polisher and a bit of insulation tape.
> 
> Also tried to loosen waterblock mounting screws, thinking the problem is caused by mounting pressure, but that didn't fix anything either.
> Swapped few times between liquid metal and normal paste and everytime i used liquid metal gpu had problems, instead when i used normal paste i never had that problem.
> 
> At the end i just gave up, but it still bothers me a lot and i can't figure out why this happens, especially because before using a waterblock, i used liquid metal between card and stock heatsink for months and card never had a problem.




I have been using liquid metal (first Liquid Ultra and now Conductonaut) for years on Nvidia GPUs and never had any of those problems, even without protection around the core (I have no protection right now).

Have you tried several cards???

I would like to try the HOF OC Lab 2000W BIOS, since it is supposed to have the voltage/frequency editor enabled...


----------



## Martin778

Never had problems with LM on a waterblock. Maybe the block is bent and occasionally messes up the card? I've had 1080 Ti blocks from EK that were awfully warped from the factory.


----------



## Cancretto

vmanuelgm said:


> Have been using liquid metal (first the Liquid Ultra and now the Conductonaut) for years in Nvidia gpus and never had any of those problems, even without protection around the core (I have no protection right now).
> 
> Have u tried several cards???
> 
> I would like to try the HOF OC Lab 2000w bios since it is supposed to have the voltage/frequency editor enabled...


Yeah, I've been using liquid metal on GPUs for a good amount of time too, even without sealing SMDs and other components, and I never had a problem. This thing really bothers me a lot, because if I could make my card literally 1/2° cooler I wouldn't hit the second clock step-down at around 46° and crash the card.

Also, yes, I tried liquid metal on 3 different cards (2 reference and 1 FTW3), but always with the same Kraken AIO on them. With liquid metal between die and waterblock all of them had problems running, but all of them were also fine with liquid metal between die and stock air cooler.

Maybe the problem is the cooler itself, but I can't tell for sure; the way the card behaves when using LM makes no sense to me at all.


----------



## Spiriva

Angrycrab said:


> Can the fans be adjusted on xoc bios or it’s stuck at 100%?


I've got an EK waterblock on my card, so I didn't check that. But I think it was set to 100%. I'm not home for a few days, so I can't check right now.


----------



## Cloudie

I haven't checked in for a while now, but I would like to know what the recommended BIOS for the Asus Strix OC is for daily use.
There are so many pages right now, and I've been browsing through them but can't seem to find a best solution. The 1000W BIOS loses some options while the 380W BIOS is for reference cards, so I'm not too sure which one to choose, or whether there is an even better version available.

Cheers!


----------



## Martin778

Soooo, I replaced the MSI L-Z with the Kingpin and oh man, oh man. Currently 21st in 4K Superposition: https://benchmark.unigine.com/leaderboards/superposition/1.x/4k-optimized/single-gpu/page-1

It runs 2100-2130 daily and 8000 on the VRAM, rock stable. Only once have I seen it hit the power limit, when trying Furmark to see where the limit was: 426W peak, according to GPU-Z. This is with the slider at 144%.

The temps are ridiculous too, barely hitting 50°C with the fans at around 1200 RPM. I will go balls to the wall with that radiator; I've already ordered 4 more Noctua A12x25s.
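On the 144% slider: the percentage multiplies the BIOS's default power target, so a 426W peak lines up if that BIOS defaults to roughly 300W. The 300W default below is an assumption for illustration, not a confirmed figure for this BIOS:

```python
# Power-limit slider math: the slider percentage scales the BIOS's
# default power target. DEFAULT_W is an assumed value for illustration.
DEFAULT_W = 300

def power_ceiling(slider_pct: float, default_w: float = DEFAULT_W) -> float:
    return default_w * slider_pct / 100

print(power_ceiling(144))  # 432.0 W ceiling, consistent with a 426 W observed peak
```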


----------



## Sheyster

Cloudie said:


> I havnt checked in for a while now but would like to know what the recommended Bios for the Asus Strix OC is for daily use?
> There are so many pages right now and I've been browser through them but can't seem to find a best solution. The 1000W bios loses some options while the 380W bios is for reference cards, so Im not to sure which one to choose or if there is an even better version available.
> 
> Cheers!


The 380W BIOS is fine for daily use/gaming, especially if you have a G-Sync or FreeSync monitor. IMHO there's no reason to use anything else for gaming. If you're benching and want the best numbers your card is capable of, use the Kingpin BIOS.

Edit - I believe the 380W BIOS works with Strix cards. Can someone confirm?


----------



## Alemancio

Bumping this, guys: does the KPE BIOS work on the FTW3?



Alemancio said:


> *About Kingpin BIOS on EVGA FTW3:*
> 
> As far as I've read, the BIOS is compatible, right?
> 
> To those who have tried it, how did your clocks/temps change? Would you recommend this BIOS?
> 
> Thanks!


----------



## 86Jarrod




Alemancio said:


> Bumping this guys, KPE Bios on FTW3, works?


Works just fine on my FTW3 Ultra; I have it saved in my OC BIOS slot. I found I get better scores with the shunt-modded regular BIOS, even though it runs less voltage at 1.093V. It's about 15 to 30 MHz less stable with the KPE XOC. Your card might see an improvement, especially if you haven't shunt modded. Give it a try, just watch your temps.


----------



## Alemancio

86Jarrod said:


> Works just fine on my ftw3 ultra. Have it saved on my OC bios. I found i get better scores with the shunt modded regular bios even though less volts at 1.093. About 15 to 30 mhz less stable with KPE XOC. Your card might see an improvement especially if you haven't shunt modded. Give it a try just watch your temps.


Thanks, appreciate your input. I'll give it a shot then!


----------



## Alemancio

86Jarrod said:


> Works just fine on my ftw3 ultra. Have it saved on my OC bios. I found i get better scores with the shunt modded regular bios even though less volts at 1.093. About 15 to 30 mhz less stable with KPE XOC. Your card might see an improvement especially if you haven't shunt modded. Give it a try just watch your temps.


I flashed the KPE BIOS on a FTW3 Ultra, *however the GPU clock is stuck between 350-580 MHz.* The card barely uses 90W regardless of where I move the slider.

Any idea what's up? I've tried reflashing, flashing back (retesting stock, all good) and flashing again. Also, tried the Normal KPE and the OC KPE Bios, no luck...

Help?


----------



## Juub

@ everyone
Can you guys let me know your graphics scores in Time Spy, Time Spy Extreme, Fire Strike, Fire Strike Extreme and Fire Strike Ultra?

Additionally, can you post your frame rates for the benchmarks of Far Cry 5, AC: Odyssey, AC: Origins, Shadow of the Tomb Raider, Rise of the Tomb Raider and Middle Earth: Shadow of War?

At stock clocks (1350MHz), 3840x2160 and the highest presets available.

I would like to compare the scores to mine, because the benchmark scores online are all over the place.


----------



## 86Jarrod

Alemancio said:


> I flashed KPE Bios on a FTW3 Ultra, *however the GPU Clock is stuck between 350~580Mhz.* The card barely uses 90W regardless of what I move the slider to.
> 
> Any idea what's up? I've tried reflashing, flashing back (retesting stock, all good) and flashing again. Also, tried the Normal KPE and the OC KPE Bios, no luck...
> 
> Help?


I'm not sure what to tell you; I just switched to it and am on it now. I've found, with all the BIOSes I've tried, that sometimes after flashing you need to restart the PC a couple of times for the flash to fully take. Not sure if that's it or not.


----------



## nikoli707

I have the chance to buy a Gigabyte Turbo 11G, non-OC, 300 (non-A) chip model for cheap. I would be putting it under a Kraken X42 with a G12 bracket.

Is there a BIOS flash I can do to get past the 280W limit? Are there any current tricks that can help me use the extra OC headroom I will have? Am I really going to see any significant difference between a bottom-barrel 300A and a 300?


----------



## Alemancio

86Jarrod said:


> I'm not sure what to tell you i just switched to it and am on it now. I found after all the bios i've tried that sometimes after flashing you need to restart pc a couple times for flash to fully work. Not sure if that's it or not.


Did you do anything else besides NVFLASH64 -6 with the KPE BIOS? I tried both the OC and Normal ROMs from here (unless I should be using a different one?). I'm guessing the PCIe ID flashes also, no? Along with the firmware (and therefore the card is now detected as a Kingpin Hybrid and no longer an FTW3).


----------



## 86Jarrod

Alemancio said:


> Did you do anything else besides NVFLASH64 -6 KPE'sBIOS? I tried both OC and Normal ROMS from here (unless I should be using a different one?). I'm guessing that the PCIe ID flashes also, no? along with the Firmware (and therefore the card is detected as a Kingpin Hybrid now and no longer an FTW3)


I haven't tried the two regular BIOSes, just the 90.02.17.40.88 XOC from the same place you linked. I'm not sure what else it could be. I am using an FTW3 Ultra, shunt modded, but I don't think that should matter. Maybe someone with more knowledge about these cards will have an idea. If you have any more questions I'll try to answer them if I can. Good luck.


----------



## raceitchris

Alemancio said:


> Did you do anything else besides NVFLASH64 -6 KPE'sBIOS? I tried both OC and Normal ROMS from here (unless I should be using a different one?). I'm guessing that the PCIe ID flashes also, no? along with the Firmware (and therefore the card is detected as a Kingpin Hybrid now and no longer an FTW3)


Here is a good guide for flashing bios on 2080 Ti:

https://www.overclockersclub.com/guides/how_to_flash_rtx_bios/


----------



## Cancretto

Alemancio said:


> I flashed KPE Bios on a FTW3 Ultra, *however the GPU Clock is stuck between 350~580Mhz.* The card barely uses 90W regardless of what I move the slider to.
> 
> Any idea what's up? I've tried reflashing, flashing back (retesting stock, all good) and flashing again. Also, tried the Normal KPE and the OC KPE Bios, no luck...
> 
> Help?


I've also used that BIOS on my FTW3 and had no problem; make sure you flashed the right one. The BIOS you want is the 90.02.17.40.88.

Also, it's normal that the regular and OC Kingpin BIOSes won't work properly on your card, since the FTW3 is missing the third 8-pin connector.

Also, as Jarrod suggested, a couple of power cycles can help after flashing the card.


----------



## Alemancio

86Jarrod said:


> I haven't tried the two regular bios just the 90.02.17.40.88 XOC from the same place you linked. I'm not sure what else it could be. I am using FTW3 Ultra shunt modded but that shouldn't matter I don't think. Maybe someone with more knowledge than me about these cards will have an idea. Anymore questions i'll try to answer them if i can. Good luck


Gotcha, maybe that's the issue, because I wanted to try the regular BIOSes.



raceitchris said:


> Here is a good guide for flashing bios on 2080 Ti:
> 
> https://www.overclockersclub.com/guides/how_to_flash_rtx_bios/


Thanks! This is exactly how I had been doing it - no luck


----------



## raceitchris

Alemancio said:


> Gotcha, maybe that's the issue, because I wanted to try the regular BIOSes
> 
> 
> Thanks! This is exactly how I had been doing it - no luck


I think I'm reading you correctly, but just to make sure... you're saying you're stuck on the KPE XOC BIOS on your FTW3 and cannot flash out of it?


----------



## Spiriva

I use the Kingpin 90.02.17.40.88 BIOS on my 2080 Ti FE as my "daily driver". No problems so far; the GPU stays around 40-43C (waterblock from EK). I like this BIOS as it holds the clock speed a lot better than the Galax 380W BIOS I used before.
This Kingpin BIOS holds 2165 MHz even over 40C (43C).


----------



## dentnu

Does anyone know if the Kingpin XOC BIOS works on the MSI 2080 Ti Trio? Thanks


----------



## RaMsiTo

Which BIOS is best for an MSI 2080 Ti Gaming X Trio on air?

thx.


----------



## dentnu

RaMsiTo said:


> what bios is better for a 2080 ti msi gaming x trio by air?
> 
> thx.


The best BIOS in my opinion for that card on air is the 406W BIOS made for that card.

https://www.techpowerup.com/vgabios/205495/msi-rtx2080ti-11264-180930


----------



## RaMsiTo

dentnu said:


> Best bios in my opinion for that card on air is the 406w bios made for that card.
> 
> 
> 
> https://www.techpowerup.com/vgabios/205495/msi-rtx2080ti-11264-180930




Thank you 


Sent from my iPhone using Tapatalk


----------



## vmanuelgm

Alemancio said:


> Gotcha, maybe that's the issue coz I wanted to try the regular BIOS'
> 
> 
> Thanks! This is exactly how I had been doing it - no luck



Some BIOSes won't work on reference designs, such as the regular Kingpin ones or the MSI Trio one (406W), since the power connector configuration is different. The card will boot but will downclock under load.


----------



## sblantipodi

so what is it super?


----------



## Tragic

dentnu said:


> Best bios in my opinion for that card on air is the 406w bios made for that card.
> 
> https://www.techpowerup.com/vgabios/205495/msi-rtx2080ti-11264-180930





I have to agree; that's the BIOS I'm using and it is killer. A wattmeter at the wall shows I hit 550W during the GPU tests in Time Spy Extreme and 350W in the CPU test, at 2145 MHz core with memory from 7000 MHz to 8500 MHz (+1500).
Temps: 75C with a 75% fan curve, case side panel on, ambient 20C (68F).

Temps: 65C with a 75% fan curve, case side panel off, ambient 20C (68F).
Time Spy Extreme score: 7186
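A quick note on the memory numbers being thrown around here: Afterburner reports GDDR6 at half its effective rate, so "7000 MHz" stock and "8500 MHz" overclocked are 14000 and 17000 MT/s effective. A minimal sketch of the conversion, assuming the usual GDDR6 convention and the 2080 Ti's 352-bit bus:

```python
# Illustrative conversion between the memory clock figures quoted in this
# thread. Assumptions: Afterburner shows the double-data-rate figure, and
# the effective (marketing "MHz") rate is twice that for GDDR6.
def gddr6_effective(afterburner_mhz: float) -> float:
    """Afterburner MHz -> effective transfer rate in MT/s."""
    return afterburner_mhz * 2

def bandwidth_gbs(effective_mts: float, bus_bits: int = 352) -> float:
    """Effective transfer rate -> memory bandwidth in GB/s."""
    return effective_mts * bus_bits / 8 / 1000

print(gddr6_effective(7000))        # stock: 14000 MT/s
print(gddr6_effective(8500))        # +1500 offset: 17000 MT/s
print(round(bandwidth_gbs(14000)))  # 616 GB/s, matching the spec sheet up top
```

So a +1500 offset is roughly a 21% memory overclock, which lines up with the bandwidth-hungry gains people report in Port Royal.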


----------



## Cloudie




Sheyster said:


> The 380W BIOS is fine for daily use/gaming, especially if you have a G-sync or Freesync monitor. IMHO no reason to use anything else for gaming. If you're benching and want the best bench numbers your card is capable of, use the Kingpin BIOS.
> 
> Edit - I believe the 380W BIOS works with Strix cards. Can someone confirm?


Thanks!

Could someone confirm this?


----------



## Jpmboy

Spiriva said:


> I use the Kingpin 90.02.17.40.88 BIOS on my 2080 Ti FE as my "daily driver". No problems so far; the GPU stays around 40-43C (waterblock from EK). I like this BIOS as it holds the clock speed a lot better than the Galax 380W BIOS I used before.
> This Kingpin BIOS holds 2165 MHz even over 40C (43C).


can you post up or PM that exact bios version please...


----------



## Spiriva

Jpmboy said:


> can you post up or PM that exact bios version please...


Sure, sent you a PM.


----------



## CptSpig

Jpmboy said:


> can you post up or PM that exact bios version please...





Spiriva said:


> Sure, sent you a PM.


Hey JP, let me know how this BIOS works. If it's better than the 380W, please forward it for me to try. Thanks


----------



## bogdi1988

CptSpig said:


> Hey, JP let me know how this bios works. If it's better than the 380w please forward for me to try. Thanks


Same here please


----------



## Jpmboy

Spiriva said:


> Sure, sent you a PM.


got it. Thanks! +1



CptSpig said:


> Hey, JP let me know how this bios works. If it's better than the 380w please forward for me to try. Thanks


will do. Hopefully I get time to flash tomorrow...


----------



## MonarchX

I just got an EVGA GeForce RTX 2080 Ti FTW3 to play with. One issue I have so far is that there is some sort of third device... I use clean NVIDIA drivers with files only for video, audio, and PhysX. All of those are installed now, but Device Manager still shows an Unknown PCI Device. Any idea what that is and which NVIDIA folder has the drivers for it? Attached is what I have in my NVIDIA driver directory for installation; it was perfect for the GTX series.


----------



## Alemancio

Cancretto said:


> I've also used that bios on my ftw3 and had no problem, make sure you flashed the right one, the bios you want is the 90.02.17.40.88
> 
> Also it's normal that regular or OC kingpin bios won't work properly on your card, since ftw3 is missing the third 8pin connector.
> 
> Also, as suggested by Jarrod, a couple of power cycles can help after flashing the card


*Thanks! That was it.* I tried the XOC BIOS, and while it works, it's not worth it: I'm doing 2085 MHz @ 55C in DX12 games, and this KPE XOC barely does 2115 MHz @ 58C on my FTW3.


----------



## Angrycrab

MonarchX said:


> I just got EVGA GeForce RTX 2080 Ti FTW3 to play with. One issue I have so far is that there is some sort of 3rd device... I use clean NVidia drivers with files only for video, audio, and PhysX. All of those are installed now, but Device Manager still has an Unknown PCI Device. Any idea what that is and which NVidia folder has drivers for it? Attached is what I have in my NVidia drivers directory for installation, it was perfect for GTX series.


You need to install the USB-C folder from the NVIDIA drivers.


----------



## bl4ckdot

Spiriva said:


> Sure, sent you a PM.


Is it the same bios that is on the KPE website ? I'm interested in that as well !


----------



## Jusiz

Spiriva said:


> I use the Kingpin 90.02.17.40.88 BIOS on my 2080 Ti FE as my "daily driver". No problems so far; the GPU stays around 40-43C (waterblock from EK). I like this BIOS as it holds the clock speed a lot better than the Galax 380W BIOS I used before.
> This Kingpin BIOS holds 2165 MHz even over 40C (43C).


I'm interested in this BIOS too... can you send it by PM?


----------



## mattxx88

I've been testing an MSI 2080 Ti Gaming X Trio for two weeks, and I've had so many problems I'm thinking of selling it.

1st problem: I think EK made a mistake in the depth measurement of its waterblock. When I booted the PC for the first time I got 60° at default. I took the block off and saw NO contact between the die and the block. I tightened the 4 screws near the die further and the situation improved, but it's still not good. At default the temp is now about 27° (still too high in my opinion), but during gaming it's not possible to stay below 55° on water versus 51° with the stock air cooler (even though that's the best air cooler I've ever seen on a stock GPU).

2nd problem: even after flashing the MSI 400W BIOS, my GPU still drops frequency and I don't know why.
A few moments ago I saw this post on Reddit:
https://www.reddit.com/r/nvidia/comments/9ncmyv/msi_2080ti_gaming_x_trio_power_limit/

Edit: I forgot to add this info: under load, GPU-Z shows the GPU never going beyond 360W. It should hit at least 400W with the updated BIOS, right?

Could this be the problem?


----------



## Cloudie

So I've installed the 380W BIOS on my Strix and did some overclocking and benchmarks to see how much of a difference it makes compared to the stock BIOS.
To my surprise the benchmark results actually went down, even though you can clearly see it drawing 380W while stock used only 330W. I've tested this at multiple frequencies and kept a close eye on the temp of the card, which was about the same, give or take 1-2 degrees.

How come?


----------



## Spiriva

bl4ckdot said:


> Is it the same bios that is on the KPE website ? I'm interested in that as well !





Jusiz said:


> I interested this bios too...can send it PM?




I got the bios from a friend who got a kingpin card.
Sent PM to both of you with it.

On my FE card +115/+1000 put the card at around 2165mhz-2180mhz/8000mhz. It crashed at 2190mhz almost at once in 3dmark.

☆Goes w/o saying but ofc use at your own risk


----------



## vmanuelgm

Spiriva said:


> I got the bios from a friend who got a kingpin card.
> Sent PM to both of you with it.
> 
> On my FE card +115/+1000 put the card at around 2165mhz-2180mhz/8000mhz. It crashed at 2190mhz almost at once in 3dmark.
> 
> ☆Goes w/o saying but ofc use at your own risk



Power limit???

Voltage/frequency editor enabled???


----------



## Spiriva

vmanuelgm said:


> Power limit???
> 
> Voltage/frequency editor enabled???


A picture from a 3DMark run. I had the memory at +400 in this test, as I wanted to find the max speed of the GPU. I've seen it pull ~580W max, running at ~2180 MHz


----------



## Cancretto

Cloudie said:


> So I've installed the 380W bios on my strix and did some overclocking and benchmarks to see how much of a difference it makes compaired to the stock bios.
> To my surprise the benchmark results actually went down even though you can clearly see it using 380W while the stock used only 330W. I've tested this on multiple frequencies and kept a close eye of the temp of the card which was about the same give or take 1-2 degrees.
> 
> How come?


Hi, your scores are probably lower because the Galax 380W BIOS is slower than the Asus one


mattxx88 said:


> is 2 weeks i'm testing an MSI 2080Ti Gaming X Trio, and i got so many troubles i'm thinking to sell it
> 
> 1st problem, i think EK made a mistake in depth measurement of his waterblock. when i booted up pc the first time i got 60° @def. Then i took wb down and i see NO contact between die and wb. i fastened more the 4 screws near die and situation got better, but still not good. Now @def temps was about 27° (still too high in my opinion) but it's not possible to reach 55° on water and 51° with stock air cooler (even tough is the best air cooler i ever see on a stock GPU) during gaming
> 
> 2nd problem, even flashing MSI 400W bios, my gpu still drop frequency and i dunno why.
> few moments ago i saw this post on reddit:
> https://www.reddit.com/r/nvidia/comments/9ncmyv/msi_2080ti_gaming_x_trio_power_limit/
> 
> edit: i forgot to add this info, when under load in GPUZ i can see that gpu never go beyond 360w, i think it should hit at least 400w using updated bios, right?
> 
> may be this the problem?


Hi, 55°C on water, especially with that BIOS, is bad. Regarding the GPU clocking down, you should check in GPU-Z what the limiting factor is. Another cause of the clock slowdown is your GPU temperature: at 55°C you should have hit the third or fourth thermal stepdown.

I don't know if with that BIOS you can still rely on software readings for power consumption, but if they're accurate and the card never goes beyond 360W, you have some strange power limit in play.
A possible cause I'm thinking of could be some odd mounting pressure from that waterblock on the PCB.


----------



## mattxx88

Cancretto said:


> Hi, 55°C on water, especially with that bios is bad, regarding gpu clocking down you should look in gpu-z what is the limiting factor. Also another cause of your clock slowdown is your gpu temperature, at 55°C you should have reached the third or fourth thermal slowdown.
> 
> I don't know if with that bios you can still rely on software readings for power consumption, but if it's accurate and card never go beyond 360W you have some strange power limit reason.
> A possibile cause I'm thinking could be sone strange mounting pressure caused by that waterblock on the pcb.


I think the limiting factor is the temperature (I'll receive a new block from Bitspower tomorrow, so I can compare it with the EK one), but also the power limit.
At 1.050V my GPU core clock often bounces up and down; with a fixed 1.000V the frequency is a bit more stable. This makes me think I'm hitting the power limit, and no BIOS has fixed it yet.
I was wondering about a shunt mod; I did one on my old Pascal Titan and it worked great.
What do you think about shunt mods? Has anyone done one yet?


----------



## vmanuelgm

Spiriva said:


> A picture from 3dmark run. I Got the mem put to +400 in this test.
> As I wanted to make sure I could find the max speed of the gpu. Ive seen it pull ~580W as max, running at ~2180mhz



So it is the same xoc bios from the first post???

Or a different one with voltage/freq editor enabled???


----------



## Cancretto

mattxx88 said:


> i think limiting factor is (yes the temperature, i'll receive tomorrow a new block from bitspower, so i can compare with the one from EK) but also the power limit
> my gpu core with 1.050v goes up and down oftenly on the other hand if i set a fixed 1.000v frequency is a bit more stable. this make me assume that i am on power limit issues and no bios could fix this yet.
> i was wondering about shunt mod, i did it on my old Titan pascal and worked great
> what do you think about shunt? someone did it yet?


If the power limit is the problem, you can try the Kingpin XOC BIOS; it has an unlimited power limit and overvolts your card up to 1.125V. I think it will work on your card, since the Gaming X Trio is basically a reference design with an extra 6-pin.

Shunt modding the card plus the Asus Matrix BIOS or the Gigabyte Aorus Xtreme BIOS should make the card perform a bit better than the Kingpin XOC BIOS alone, but the whole process is a bit more tedious.

If you want to try the Kingpin BIOS, you can find it here: https://xdevs.com/guide/2080ti_kpe/

The exact vBIOS version is 90.02.17.40.88
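For anyone wondering why the shunt mod raises the power ceiling: the card measures current from the voltage drop across small sense resistors, so soldering another resistor in parallel makes the controller under-read current, and therefore power. A rough sketch of the math (the 5 mΩ stock value is an assumption for illustration; verify your own card's shunts before modding):

```python
# Why a shunt mod works, in numbers. The controller computes current as
# V_sense / R_shunt; a piggyback resistor lowers the resistance it *thinks*
# it has, so it under-reads power by that same ratio.
# Assumption: 5 mOhm stock shunts (check your card's actual parts).
def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def reported_fraction(r_stock: float, r_added: float) -> float:
    """Fraction of the true power the controller reports after the mod."""
    return parallel(r_stock, r_added) / r_stock

frac = reported_fraction(0.005, 0.005)  # piggyback an identical 5 mOhm shunt
print(frac)                             # 0.5 -> card reports half its real draw
print(380 / frac)                       # a 380 W cap now really allows ~760 W
```

This is also why software power readings stop being trustworthy on a shunt-modded card: they're scaled down by the same fraction.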


----------



## nikoli707

Would $750 be a decent deal for the lowly Gigabyte Turbo 11G 300 non-A? I'm putting it under a G12 + X42 Kraken and would use the Palit 310W BIOS. Seems like good price-to-performance. I just need those rays traced properly.


----------



## dentnu

mattxx88 said:


> is 2 weeks i'm testing an MSI 2080Ti Gaming X Trio, and i got so many troubles i'm thinking to sell it
> 
> 1st problem, i think EK made a mistake in depth measurement of his waterblock. when i booted up pc the first time i got 60° @def. Then i took wb down and i see NO contact between die and wb. i fastened more the 4 screws near die and situation got better, but still not good. Now @def temps was about 27° (still too high in my opinion) but it's not possible to reach 55° on water and 51° with stock air cooler (even tough is the best air cooler i ever see on a stock GPU) during gaming
> 
> 2nd problem, even flashing MSI 400W bios, my gpu still drop frequency and i dunno why.
> few moments ago i saw this post on reddit:
> https://www.reddit.com/r/nvidia/comments/9ncmyv/msi_2080ti_gaming_x_trio_power_limit/
> 
> edit: i forgot to add this info, when under load in GPUZ i can see that gpu never go beyond 360w, i think it should hit at least 400w using updated bios, right?
> 
> may be this the problem?


I also just got this block for my MSI 2080 Ti Gaming X Trio and I am seeing the same temps as you: 27C idle and 55C under load. I spent the last two weeks trying to figure out why I was getting high temps, and even asked on here a few weeks ago whether those temps were normal, but got no reply. I remounted it 4 times; same temps. It shoots straight up from 27C to 45C as soon as there is any load on it, and then hits 55C to 60C within 5 minutes. I just requested a refund for it and plan on getting the Bitspower block. Let me know how it goes with the new block.


----------



## Martin778

EK junk at its finest; I've had so many problems with their stuff in the past... warped blocks, gunky coolants... I can tell you the Kingpin with its 240mm AIO is running OC'd at around 40°C with 4x Noctua 12x25s in push-pull. The max I've seen it reach in synthetics was 45°C.
Not sure how much the Trio costs in the US, but if the total including the block is like 1500-1600 bucks, you could just as well buy the Kingpin with a discount through an associate code and call it a day. Quality-wise it's the best 2080 Ti on the market.

@nikoli707,
Only if you have the full warranty (box and the original invoice, otherwise GB will give you the middle finger), and if it still has Micron VRMs I would pass on it anyway.
Don't get fooled by the low price, because if the card dies and you're missing the paperwork, you'll end up with a $700 doorstop.


----------



## Spiriva

There are new NVIDIA drivers in the Windows 10 Insider channel now: version 435.27.

https://www.mediafire.com/file/pb67ea28aadwnf7/Nvidia+435.27.rar

Install via Device Manager, if setup.exe doesnt work for you.


----------



## Renegade5399

Asmodian said:


> Stock BIOS with a shunt mod is the way to go.


Yup. Best advice I took with these cards was yours.



SimoneTek said:


> The problem is that there is a physical limit about the Pci Power connector.. A 8 pin cannot handle more than 200w stable (the atx limit is 150-170)...so for the 2x8 2080Ti the best bios is the Galax 380w
> 
> Infact:
> PCI express: 75w
> 8 pin: 150w (X2) =300w
> 
> 375w total.. The Galax is perfect


:kookoo:


----------



## RaMsiTo

dentnu said:


> Best bios in my opinion for that card on air is the 406w bios made for that card.
> 
> https://www.techpowerup.com/vgabios/205495/msi-rtx2080ti-11264-180930



Very happy with the Trio's 406W BIOS: 700 more points in Port Royal and very stable frequencies while playing.

Thanks for the advice!


----------



## MonarchX

What OC should I try on my EVGA RTX 2080 Ti FTW3? +100 MHz on the core and +500 MHz on the VRAM seems to be stable.


----------



## Martin778

Renegade5399 said:


> Yup. Best advice I took with these cards was yours.
> 
> :kookoo:


Yep, that's already been debunked by der8auer.

What you do have to keep in mind is the wire gauge (AWG).
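Putting some numbers on the connector debate: the per-wire current, even well past the 150W spec, is modest, which is why wire gauge and terminal quality matter more than the label. A back-of-the-envelope sketch, assuming the usual three 12V circuits in an 8-pin PCIe connector (the other pins are grounds/sense):

```python
# Rough per-wire current math behind the "8-pin can't do more than 150 W"
# claim. Assumptions: 12 V rail, three current-carrying 12 V wires per
# 8-pin PCIe connector.
def amps_per_wire(watts: float, volts: float = 12.0, wires: int = 3) -> float:
    """Current through each 12 V wire for a given connector load."""
    return watts / volts / wires

print(round(amps_per_wire(150), 1))  # at the 150 W spec: ~4.2 A per wire
print(round(amps_per_wire(290), 1))  # a 580 W card over two 8-pins: ~8.1 A
```

Typical Mini-Fit-style terminals are rated for several times the spec's ~4.2A per contact (the exact figure depends on terminal series and gauge), which is why cards can pull far past 150W per connector without issue on decent cabling.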


----------



## mattxx88

dentnu said:


> I also just got this block for my MSI 2080Ti Gaming X Trio and I am seeing the same temps as you 27c idle and 55c under load. I spent the last two weeks trying to figure out why I was getting high temps and even asked on here a few weeks ago if those temps were normal but got no reply. I remounted it 4 times and same temps. I see it shoots straight up to 45c from 27c as soon as there is any load on it and then eventually within 5 mins hits 55c to 60c. I just requested a refund for it and plan on getting the bits-power block. Let me know how it goes with the new block.


Hi dentnu, I'm posting the screenshot from EK support here:



I've opened that damned GPU 6 times now; my last try will be to take some material off those 8 standoffs near the die.

Yesterday I mounted the Bitspower Lotan, and sadly I can tell you the situation is much the same: I played The Division 2 for an hour and the GPU reached 52°.
It's not a problem with my loop, since my CPU is at the same temps as always.

So I'll go for the EK waterblock modification and let you know, but it will be next week since I'm away from tomorrow until Tuesday.

EDIT: I also noticed another thing about the Gaming X Trio: I tried several BIOSes and ALL of them refuse to go over 120% PL (around 360 watts). It seems like this GPU has a hardware PL lock; the only BIOS that bypasses it is the Kingpin one.

Can you verify this with me?

Look at this screenshot with the 400W MSI BIOS:


If you look at the Afterburner graphs you can clearly see it wall at around 120/125% PL in every condition.
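Since the power-limit slider is just a percentage of the BIOS's base power target, a hard wall at "120%" corresponds to a fixed wattage. A quick sketch; the 300W base target for the Gaming X Trio is an assumption, but it's the value that would explain the ~360W ceiling:

```python
# Mapping the Afterburner PL slider to board power. Assumption: the Gaming X
# Trio's BIOS base power target is 300 W, which makes the observed numbers
# line up with a 120% wall.
def board_power(base_w: float, slider_pct: float) -> float:
    """Allowed board power for a given base target and slider position."""
    return base_w * slider_pct / 100

print(board_power(300, 120))  # 360.0 W -- the wall seen in GPU-Z
print(board_power(300, 133))  # ~400 W -- what a "400 W" BIOS should allow
```

So if the card really walls at 360W regardless of the flashed BIOS, something other than the BIOS's stated limit (a locked slider cap or a board-level limit) is deciding the ceiling.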


----------



## JustinThyme

Martin778 said:


> EK junk at it's finest, how many problems I've already had with their stuff in the past...warped blocks, gunky coolants...I can tell you the Kingpin with it's 240mm AiO is running OC'ed at around 40*C with 4x Noctua 12x25's in push-pull. Max I've seen it reach in synthetics was 45*c.
> Not sure how much the Trio costs in the US but if the total, including block is like 1500-1600 bucks, you could just aswell buy the Kingpin with discount through an associate code and call it a day. Quality wise it's the best 2080Ti on the market.
> 
> @nikoli707,
> Only if you have fully warranty (box and the original invoice, otherwise GB will give you the middle finger) and if it still has Micron VRM's I would pass on it anyways.
> Don't get fooled by the low price because if the card dies and you're missing the paperwork, you'll end up with a $700 doorstop.


I've not had good luck with EK lately either. I can't see using any of the cards with AIOs that just dump hot air into the case; maybe if you're running a test bench or an open case like a P5. Calling the Kingpin the best quality is a bit brazen; there are plenty of cards that hold up just fine, and most of the top contenders perform within a margin of error. I'm not privy to having seen any Micron VRMs. Lots of cards with Micron memory, though.


----------



## Martin778

I run my R6 with the side panel open, and you can easily push those Noctua 12x25 fans up to 80-100% without going crazy from the noise. If you're familiar with the Gentle Typhoon's sound signature, the Noctuas are even better; they produce pretty much no low frequencies.

That said, there are cases like the Lian Li O11 that let you mount both AIOs as exhaust!


----------



## acmilangr

Superposition 4K benchmark:
Min 2205 MHz
Max 2250 MHz

(EVGA XC Gaming with the 380W Galax BIOS)

Did I win the silicon lottery?


----------



## Nicklas0912

So, is there any other BIOS besides the 380W Galax? I'm using that at the moment and maxing it out...

Core clock: 2190 MHz
Mem clock: 8100
https://www.3dmark.com/spy/7327203

Max temp was 47C.

Will I get a better overclock if I lower the temps?

Or is there a BIOS I can use that will increase my OC? I peak at 383.7W, so sadly I'm at the limit of the BIOS :/
It's a reference RTX 2080 Ti, aka EVGA RTX 2080 Ti XC Ultra.


----------



## Cancretto

Nicklas0912 said:


> So, is there any other bios besides the 380watt Galaxy? Im useing that atm, and maxing it now...
> 
> Core clock: 2190Mhz
> Mem clock: 8100
> https://www.3dmark.com/spy/7327203
> 
> Max Temp was 47c.....
> 
> Will I get better overclock if I lowered the temps?
> 
> Or is there a bios I can use that can incresse my OC, Im peak at 383.7Watt 24/7, so im at the limit of the bios sadly :/
> Is a Ref RTX 2080 TI aka Evga RTX 2080 TI XC Ultra.


Man, are you really pushing 2190 MHz with only that 380W BIOS? With that BIOS, due to the power limit, the card should be running around 1.050V, and a sustained 2190 MHz at that voltage under load is very, very good. Did you shunt mod your card?

Anyway, yes, if you can keep your card at a lower temperature you can achieve a higher clock: due to GPU Boost, at 47C you're hitting the second thermal stepdown, which is around 44-46C.
If you manage to keep your GPU around 40C, the card should automatically run one clock step higher, which is 2200 MHz.

Unfortunately that's all theoretical, since at least with that BIOS you're probably running into power limitations.

In conclusion, I'd suggest you try the Kingpin XOC BIOS, which has an unlimited power limit and also overvolts your card up to 1.125V.

If you want to try the BIOS out, you can find it here: https://xdevs.com/guide/2080ti_kpe/
The exact vBIOS version is 90.02.17.40.88
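The thermal stepdown behavior described above can be sketched as a toy model. The thresholds and the 15 MHz step size here are assumptions for illustration only; NVIDIA doesn't publish GPU Boost's tables, and they vary per BIOS:

```python
# Toy model of GPU Boost's temperature stepdown. THRESHOLDS_C and STEP_MHZ
# are illustrative assumptions, not values from NVIDIA documentation.
THRESHOLDS_C = [36, 45, 54, 63]   # assumed stepdown points
STEP_MHZ = 15                     # assumed clock drop per threshold crossed

def boost_clock(requested_mhz: int, temp_c: float) -> int:
    """Effective clock after temperature-based stepdowns."""
    steps = sum(1 for t in THRESHOLDS_C if temp_c >= t)
    return requested_mhz - steps * STEP_MHZ

print(boost_clock(2220, 40))  # one step crossed  -> 2205 MHz
print(boost_clock(2220, 47))  # two steps crossed -> 2190 MHz
```

Under these assumed numbers, a card asking for 2220 MHz would show exactly the 2190 MHz @ 47C reported above, and cooling it below the second threshold would buy back one 15 MHz step.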


----------



## dentnu

mattxx88 said:


> hi dentu, i post here the screen from EK support:
> 
> 
> 
> i opened that damned gpu 6 times now, last try will be to take off some material frome those 8 standoffs near DIE
> 
> Yesterday i mounted Bitspower Lotan and i sadly can tell you the situation is most the same
> i played for 1 hour at the dvision 2 and gpu reached 52°
> it's not a problem of mine loop cause my cpu is at same temp as always
> 
> so i will go for EK wb modify and let you know, but it will be next week cause i'm away from tomorrw till tuesday
> 
> EDIT: i also noticed another thing form Gaming X Trio, i tried several bios and ALL of them don't go over 120% PL (round 360watt) seems like this gpu has an hardware PL lock
> the only bios that bypass this thing is the kingpin bios
> 
> can you verify this with me?
> 
> loock at this screen with 400W MSI bios:
> 
> 
> if you look at afterburner graphs you can see it clearly wall around 120/125 PL in every condition


Hi mattxx88,

I no longer have my card on the block, as I removed it to ship the block back. It has the stock air cooler on it now, and testing with the 400W BIOS I have no issue hitting 400W. The pic shows me hitting 392W, but I can assure you I was hitting 400W+ when my card was on the waterblock. I don't understand why your card won't go over 120% PL; there might be some issue with it, as mine doesn't have this problem.


----------



## MrTOOSHORT




----------



## J7SC

MrTOOSHORT said:


>


 
... :thumb:  perfect combo ...got chiller ?


----------



## MrTOOSHORT

No chiller, just regular water cooling. I've wanted another Kingpin for a long time; my last one was the 780 Ti. I'm sure I won't be disappointed!


----------



## Martin778

Good choice, you will love it.


----------



## J7SC

MrTOOSHORT said:


> No chiller, just regular water cooling. Just wanted another kingpin for a longtime, last one was the 780ti. I'm sure I won't be disappointed!



I really like the KPE w/ the full water block option, never mind all the voltage and other control tools. Also, Edmonton winters = auto-chiller ?!


----------



## acmilangr

https://youtu.be/3XtlcouO-RE


----------



## Nicklas0912

Cancretto said:


> Man, are you really pushing 2190MHz with only that 380W bios? I mean with that bios, due to power limit, the card should be running around 1.050V, and man sustained 2190mhz at that voltage under load it's very very good. Did you shunt modded your card?
> 
> Anyways, yes if you can keep your card at a lower temperature you can achieve a higher clock, due to gpu boost at 47C you're hitting the second thermal clock slowdown, wich is around 44/46C.
> If you manage to keep your gpu around 40C the card should automatically run a higher clock step, wich is 2200MHz.
> 
> Unfortunately that's all theoretical since, at least with that bios you're probably running into power limitations.
> 
> In conclusion I'd suggest you to try kingpin xoc bios, wich has unlimited power limit and also overvolts your card up to 1.125V.
> 
> If you want to try the bios out, you can find it here: https://xdevs.com/guide/2080ti_kpe/
> Exact vbios version is 90.02.17.40.88


Yes, it can do 2175-2190 MHz depending on the temps; it's really hot here in Denmark at the moment, so I'll get a chiller soon.

You're right, my problem is that I'm running into power limitations...

The card won't brick if I use the Kingpin BIOS?


----------



## acmilangr

Nicklas0912 said:


> Cancretto said:
> 
> 
> 
> Man, are you really pushing 2190MHz with only that 380W bios? I mean with that bios, due to power limit, the card should be running around 1.050V, and man sustained 2190mhz at that voltage under load it's very very good. Did you shunt modded your card?
> 
> Anyways, yes if you can keep your card at a lower temperature you can achieve a higher clock, due to gpu boost at 47C you're hitting the second thermal clock slowdown, wich is around 44/46C.
> If you manage to keep your gpu around 40C the card should automatically run a higher clock step, wich is 2200MHz.
> 
> Unfortunately that's all theoretical since, at least with that bios you're probably running into power limitations.
> 
> In conclusion I'd suggest you to try kingpin xoc bios, wich has unlimited power limit and also overvolts your card up to 1.125V.
> 
> If you want to try the bios out, you can find it here: https://xdevs.com/guide/2080ti_kpe/
> Exact vbios version is 90.02.17.40.88
> 
> 
> 
> Yes, it can do the 2175-2190Mhz depends on the temps, is really hot here in Denmark. atm So will get a chiller soon.
> 
> Your right, my problem is that im running into power limitations...
> 
> The card wont brik if I use the kingpin bios?

If you try it please let me know.
thanks


----------



## Cancretto

Nicklas0912 said:


> Yes, it can do 2175-2190 MHz depending on the temps; it's really hot here in Denmark at the moment, so I'll get a chiller soon.
> 
> You're right, my problem is that I'm running into power limitations...
> 
> The card won't brick if I use the Kingpin BIOS?


Hi man, the card should work fine with that BIOS; I've heard of people already running it on their Gaming Trios. You won't have any more power limit problems with that BIOS


----------



## dante`afk

Nicklas0912 said:


> So, is there any other bios besides the 380watt Galaxy? Im useing that atm, and maxing it now...
> 
> Core clock: 2190Mhz
> Mem clock: 8100
> https://www.3dmark.com/spy/7327203
> 
> Max Temp was 47c.....
> 
> Will I get better overclock if I lowered the temps?
> 
> Or is there a bios I can use that can incresse my OC, Im peak at 383.7Watt 24/7, so im at the limit of the bios sadly :/
> Is a Ref RTX 2080 TI aka Evga RTX 2080 TI XC Ultra.


Your clock is not stable; I've seen people with waaaaay lower clock speeds and higher scores.




I flashed the Kingpin XOC BIOS today for testing. More vcore does not necessarily equal more clock for my FE card; it still does a max of 2160 MHz


----------



## jim2point0

I finally took the plunge... and by that, I impulse bought one at Microcenter when I was grabbing 2 more sticks of RAM and a new case.

The best one they had there was the Asus Strix O11G. My 1080 Ti was an Asus Strix and I was satisfied with how cool and quiet it ran, so I figured this wouldn't be a terrible choice.

When benchmarking or playing games, the clock speed mostly hangs around ~1960-2040 MHz, with the power limit seemingly causing a lot of fluctuation. Temps top out at around 76C with a fan curve that doesn't get TOO loud... but it's definitely waaaay hotter and louder than my 1080 Ti was. I guess that's probably normal.

Nothing I can really do about the power limit, huh? I can only take it to 120% or 125% in Afterburner. Seems like a poor overclocker overall, but how much more performance could I possibly squeeze out of it on air anyway?


----------



## acmilangr

Will I have a problem running the Kingpin XOC BIOS with my power supply?
It's a 750W EVGA G3


----------



## Jpmboy

MrTOOSHORT said:


>


 i know... tempting, right? Gotta say tho, the 780Ti KPE was an epic Vince production. The 2080Ti KPE is the only one I have not bought (yet  ). Still have your EVBOT?
Post back with how it compares to your FE (?). Which OC'd very well if I recall correctly.


----------



## ENTERPRISE

Just as an FYI for SLI users (the few that are left): the Asus XOC BIOS and the Kingpin XOC BIOS are not SLI friendly, as they bring your NVLink speed down to about 11 GB/s when it should be 90+ GB/s. Furthermore, the clocks in SLI mode do not align between the cards, so even though you're in SLI, one card will run at 300 MHz while the other runs at full boost. This cripples all performance.


----------



## bigjdubb

Whoa, the Kingpin BIOS is not SLI friendly? Isn't benchmarking the whole point of that card? I doubt anyone runs SLI just for gaming, because benchmarking is the only place it really starts to feel like your machine is better with SLI. This seems like something that will be fixed so they can actually set records with the darn things.


----------



## Cancretto

jim2point0 said:


> I finally took the plunge... and by that, I impulse bought one at Microcenter when I was grabbing 2 more sticks of RAM and a new case.
> 
> The best one they had there was the Asus Strix 011G. My 1080Ti was an Asus Strix and I was satisfied with how cool and quiet that ran. So I figured this wouldn't be a terrible choice.
> 
> When benchmarking or playing games, the clock speed mostly seems to hang around ~1960-2040mhz, with the power limit seemingly causing a lot of fluctuation. Temps top out at around 76c with a fan curve that doesn't get TOO loud... but it's definitely waaaay hotter and louder than my 1080TI was. I guess that's probably normal.
> 
> Nothing I can probably do about the power limit, huh? I can only take it to like 120% or 125% in Afterburner. Seems like a poor overclocker overall, but how much more performance could I possibly squeeze out of it on air anyway?


Hi man, really good choice for the card; the Strix has the 3rd best PCB for 2080 Tis after the Kingpin and HOF.
To squeeze more out of your card you need to lower temperatures, maybe by replacing the stock thermal paste and using a more aggressive fan curve.
To solve the power limit issue, you'd need to flash an XOC BIOS or shunt mod your card.


----------



## J7SC

ENTERPRISE said:


> Just as an FYI for SLI users (the few that are left): the Asus XOC BIOS and the Kingpin XOC BIOS are not SLI friendly, as they will bring your NVLink speed down to about 11GB/s when it should be 90GB/s+. Furthermore, the clocks in SLI mode do not align between the cards, so even though you're in SLI, one will run at 300MHz while the other runs at full boost. This cripples all performance.


 


bigjdubb said:


> Whoa, the Kingpin BIOS is not SLI friendly? Isn't benchmarking the whole point of that card? I doubt anyone runs SLI just for gaming, because benchmarking is the only place it really starts to feel like your machine is better with SLI. This seems like something that will be fixed so that they can actually set records with the darn things.


 
Whoa is right, and it seems to apply to both the Asus XOC and KPE XOC... hard to believe that KPE's Vince and TiN, who submit SLI runs on LN2, aren't working on this. But thanks @ENTERPRISE for pointing this out; I do run 2x 2080 Ti and was getting a bit tempted by the XOC BIOS "goodies"


----------



## Offfline

Hi guys, is the KPE XOC BIOS compatible with a Gigabyte AORUS _2080 Ti XTREME_ WATERFORCE _WB_?


----------



## Jpmboy

ENTERPRISE said:


> Just as an FYI for SLI users (the few that are left): the Asus XOC BIOS and the Kingpin XOC BIOS are not SLI friendly, as they will bring your NVLink speed down to about 11GB/s when it should be 90GB/s+. Furthermore, the clocks in SLI mode do not align between the cards, so even though you're in SLI, one will run at 300MHz while the other runs at full boost. This cripples all performance.





J7SC said:


> Whoa is right, and it seems to apply to both the Asus XOC and KPE XOC... hard to believe that KPE's Vince and TiN, who submit SLI runs on LN2, aren't working on this. But thanks @*ENTERPRISE* for pointing this out; I do run 2x 2080 Ti and was getting a bit tempted by the XOC BIOS "goodies"



Daaum, and I thought I borked the flash on my 2 cards! I saw the same thing with the KPE BIOS posted a couple of days ago. Didn't post about it because my first thought was that I was doing something wrong, and I haven't had time to ferret out what was going on.


Unfortunately, can't rep ENT


----------



## Nicklas0912

dante`afk said:


> your clock is not stable. I've seen people with waaaaay less clockspeeds and higher score.
> 
> 
> 
> 
> flashed the kingpin xoc bios today for testing, more vcore does not necessarily equal more clock for my FE card  still does max 2160mhz



Indeed, the 17k is a bit off, and I don't understand why; it's the highest GPU score I can get. I tried everything: 2100-2140MHz core, 800MHz mem, 600MHz mem. The clocks I'm running on that pass give me the highest score of everything I tried.

I will try the Kingpin BIOS tomorrow to see.

Also, I see you hitting 40C max compared to my 47-48C here; maybe that's doing the trick?


----------



## VPII

Just running a couple more times before I get a Ryzen R9 3900X. I'm pretty impressed, seeing that the CPU was running 4.24GHz and I still managed to pull out a nice graphics score. The score would be even better with higher clocks, but I'm sure with AMD Zen 2 I'll get even more from it.

https://www.3dmark.com/3dm/36566883?

Max GPU temp was 39C during the run.


----------



## jelome1989

Anybody bought the Asus RoG Matrix 2080 Ti yet? How are the temps? Power limits?


----------



## willverduzco

jelome1989 said:


> Anybody bought the Asus RoG Matrix 2080 Ti yet? How are the temps? Power limits?


If you go to the Discord linked in the OP, my friend Deathscythes may be able to give you more information. He is running the Matrix cards (in SLI), but is planning on ditching the stock coolers soon (if he hasn't yet already).


----------



## Cancretto

Offfline said:


> Hi guys, the KPE xoc bios can be compatible with an Gigabyte AORUS _2080 Ti XTREME_ WATERFORCE _WB_?


Hi, yes it should work fine on your card


----------



## jelome1989

willverduzco said:


> If you go to the Discord linked in the OP, my friend Deathscythes may be able to give you more information. He is running the Matrix cards (in SLI), but is planning on ditching the stock coolers soon (if he hasn't yet already).


Thanks, already sent him a message.


----------



## acmilangr

acmilangr said:


> Will I have problem trying kingpin xoc bios with my power supply?
> It is 750w evga g3


Could someone answer me on that, please?


----------



## Martin778

With a single card? Nah.


----------



## jim2point0

Cancretto said:


> Hi man, really good choice for the card; the Strix has the 3rd best PCB for 2080 Tis after the Kingpin and HOF.
> To squeeze more out of your card you need to lower temperatures, maybe by replacing the stock thermal paste and using a more aggressive fan curve.
> To solve the power limit issue, you'd need to flash an XOC BIOS or shunt mod your card.


Hey, thanks for the response!

What do you recommend for the thermal paste? All I have at home is kryonaut (used for my CPU). Not sure if there's anything better for GPUs atm. 

I'd use a more aggressive fan curve but the noise levels are currently set to "the loudest I can tolerate" in order to keep it at 76C. 

Is it worth it to use the XOC bios? Core clock seems to top out at +116. Anything higher and it crashes. If there's some gaming performance to be gained here, I'm all for it. But if I recall from my bios flashing 980Ti days, I only did it for slightly higher benchmark scores. Which I don't care so much about now.


----------



## mattxx88

dentnu said:


> Hi Mattxx88
> 
> I no longer have my card installed on the block as I removed it to ship it back. It now has the stock air cooler on it, and I was able to test with the 400-watt BIOS; I do not have any issue hitting 400 watts. The pics show me hitting 392 watts, but I can assure you I was hitting 400+ watts when my card was on the waterblock. I do not understand why your card won't go over 120% PL. There might be some issue with it, as mine does not have this problem.


Thank you dentnu, this is very helpful to me.
Now I'll try a deep driver clean and the Afterburner beta.

Sorry for the delay, I was out for a while.


----------



## acmilangr

It is strange that it crashes above +116, because the step is 15MHz. So whether you add +106 or +119, it will be the same; the same curve point will be chosen.


The steps are these :
+105
+120
+135
+150
+165
+180
+195
+210
+225
And so on....
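The step list above boils down to a simple round-down, assuming (as described) that an Afterburner offset snaps to the nearest 15 MHz bin at or below the requested value; the function name here is mine, not Afterburner's:

```python
# Hedged sketch: per the post above, Turing core offsets apply in 15 MHz
# steps, so any requested offset is effectively rounded down to a bin.
STEP_MHZ = 15

def effective_offset(requested_mhz: int, step: int = STEP_MHZ) -> int:
    """Round a requested core offset down to the step actually applied."""
    return (requested_mhz // step) * step

# +106 and +119 land in the same bin as +105; +120 is the next bin up.
print(effective_offset(106), effective_offset(119), effective_offset(120))
```

Which is why, per the post, +106 and +119 behave identically: the same curve point is chosen for both.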


----------



## ENTERPRISE

J7SC said:


> Whoa is right / and it seems to apply to both the Asus XOC and KPE XOC...hard to believe that KPE's Vince w/ TiN who sub SLI on LN2 aren't working on this. But thanks @ENTERPRISE for pointing this out, I do run 2x 2080 Ti and was getting tempted a bit by XOC Bios ''goodies''


Happy to help  In the end I dumped all of the XOC BIOSes. I wanted to be able to use my setup in SLI, so I had to go back to a regular BIOS, but in all honesty the XOC BIOS, while an advantage, will not yield many meaningful gains for casual use.


----------



## acmilangr

So I am ready to flash kingpin xoc bios to my evga XC (on water).

Press the button? Are you sure?


----------



## Cancretto

jim2point0 said:


> Hey, thanks for the response!
> 
> What do you recommend for the thermal paste? All I have at home is kryonaut (used for my CPU). Not sure if there's anything better for GPUs atm.
> 
> I'd use a more aggressive fan curve but the noise levels are currently set to "the loudest I can tolerate" in order to keep it at 76C.
> 
> Is it worth it to use the XOC bios? Core clock seems to top out at +116. Anything higher and it crashes. If there's some gaming performance to be gained here, I'm all for it. But if I recall from my bios flashing 980Ti days, I only did it for slightly higher benchmark scores. Which I don't care so much about now.


Hi man, the best thermal paste you can get is technically Kryonaut, but in my personal experience, out of 5 syringes, Cooler Master's Maker Nano always outperformed Kryonaut by a few degrees; I don't know if I was unlucky or it's just a bit worse. Anyway, the difference between those 2 isn't enormous, so just pick one of them at your own preference.

The XOC BIOS is a really good way to squeeze some more performance out of your card, but keep in mind that higher voltage and an unlimited power limit will make your card run hotter; also, with that particular BIOS, the card's fan speed will be fixed at 100%.
Since you're on air, even not considering temperatures, I think it will make your card too loud to keep near you. Maybe, if you're up for it, a better choice would be to shunt mod your card to remove the PL issues and flash the Matrix BIOS for better performance per clock.

Also, Turing cards are super temperature sensitive, so higher temps will result in lower clock speeds and may cause stability issues; using a good AIO on that card will already make a good difference.


----------



## J7SC

ENTERPRISE said:


> Happy to help  In the end I dumped all of the XOC BIOSes. I wanted to be able to use my setup in SLI, so I had to go back to a regular BIOS, but in all honesty the XOC BIOS, while an advantage, will not yield many meaningful gains for casual use.


 
I can't complain at all about the performance of these two (water-cooled) 2080 Tis; in fact, I'd probably be PSU-limited with both on XOC. But performance tinkering is so much fun...


----------



## Offfline

Hi J7SC, I saw that you have the same graphics card as me (Aorus 2080 Ti WB). I would like to know if you have already changed the BIOS to something like the KPE XOC, and if you can explain to me how you did it...
Thanks a lot
This is what I can do with the original BIOS: https://www.3dmark.com/fs/19504022


----------



## jim2point0

Cancretto said:


> The XOC BIOS is a really good way to squeeze some more performance out of your card, but keep in mind that higher voltage and an unlimited power limit will make your card run hotter; also, with that particular BIOS, the card's fan speed will be fixed at 100%.
> Since you're on air, even not considering temperatures, I think it will make your card too loud to keep near you. Maybe, if you're up for it, a better choice would be to shunt mod your card to remove the PL issues and flash the Matrix BIOS for better performance per clock.
> 
> Also, Turing cards are super temperature sensitive, so higher temps will result in lower clock speeds and may cause stability issues; using a good AIO on that card will already make a good difference.


Great info all around. It does seem particularly sensitive, more so than previous cards I've owned. I don't much like the idea of a constant 100% fan speed though. Why does it do that, anyway?

Not sure I trust myself to do a shunt mod. So for now, I guess I'll leave it how it is until I can implement a better cooling solution. I'd love an AIO cooler (hybrid cards are great, but so expensive) but I'll wait for a bit until I can justify throwing more funds at it. 



acmilangr said:


> It is strange that it is crashes on more than 116.
> Becouse the step is 15mhz. So if you add +106 or +119 it will be the same. the curve will be choose the same.


Was that a response to me (because I mentioned crashes going higher than +116 on the core)?
If it was, just to clarify: +116 is the highest step I can run a benchmark with (I think I was using Fire Strike Extreme). The next step is +123, and that would crash. I don't know why either... and it seems kinda low compared to what a lot of people are getting out of their 2080 Tis, but that's just my luck.


----------



## J7SC

Offfline said:


> Hi J7SC, I saw that you have the same graphics card as me (Aorus 2080 Ti WB). I would like to know if you have already changed the BIOS to something like the KPE XOC, and if you can explain to me how you did it...
> Thanks a lot
> This is what I can do with the original BIOS: https://www.3dmark.com/fs/19504022


 
Hello - I run two of those cards, and thus the KPE (and also Asus) XOC BIOS won't work right specifically in SLI, per related posts over the last couple of days (i.e. by 'ENTERPRISE'). I did flash the second Aorus 2080 Ti with the BIOS of the first just to have them identical, and I used the 'flash guide' in the OP of this thread. That said, your card's speed per the 3DM posting looks excellent already...


----------



## blazarware

*VRAM Brand and S/N*

Hi
I'm trying to confirm something related to MSI RTX 2080 Ti Trios.

S/N: 1XSB18XX Micron VRAM
S/N: 1XSB19XX Can't tell
S/N: 4XSB18XX Samsung VRAM
S/N: 4XSB19XX Samsung VRAM
Others, can't tell.

I already went through 2 failing 2080 Ti Trios, both 1XSB18XX Micron. Here in my country we get really old stock, and I am tired of waiting months sending cards to the U.S. for RMA. The 2nd lasted 2 months, the first 1 week. No OC.
I have done some research and I think that's my conclusion. I really liked the card, performance and ray tracing, but it's killing me.

Could someone confirm or refute this by checking their serial and GPU-Z, please!
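For what it's worth, the pattern above can be captured in a small lookup table; these prefixes are just the forum-reported observations (with the trailing XX taken as wildcards), not anything official from MSI, and the helper name is mine:

```python
from typing import Optional

# Serial prefix -> reported VRAM brand for MSI RTX 2080 Ti Trio cards,
# as observed in this thread; None means unconfirmed. Verify with GPU-Z.
VRAM_BY_SN_PREFIX = {
    "1XSB18": "Micron",
    "1XSB19": None,       # can't tell yet
    "4XSB18": "Samsung",
    "4XSB19": "Samsung",
}

def vram_brand(serial: str) -> Optional[str]:
    """Look up the reported VRAM brand by the first six serial characters."""
    return VRAM_BY_SN_PREFIX.get(serial[:6].upper())
```

Treat any hit as a hint only; GPU-Z's memory vendor readout is the actual confirmation.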


----------



## mattxx88

dentnu said:


> Hi Mattxx88
> 
> I no longer have my card installed on the block as I removed it to ship it back. It now has the stock air cooler on it, and I was able to test with the 400-watt BIOS; I do not have any issue hitting 400 watts. The pics show me hitting 392 watts, but I can assure you I was hitting 400+ watts when my card was on the waterblock. I do not understand why your card won't go over 120% PL. There might be some issue with it, as mine does not have this problem.


can i ask you what version of NVflash did u use to flash?



blazarware said:


> Hi
> I'm trying to confirm something related MSI RTX 2080TI Trio's
> 
> S/N: 1XSB18XX Micron VRAM
> S/N: 1XSB19XX Can't tell
> S/N: 4XSB18XX Samsung VRAM
> S/N: 4XSB19XX Samsung VRAM
> Others, can't tell.
> 
> I already went through 2 failing 2080 Ti Trios, both 1XSB18XX Micron. Here in my country we get really old stock, and I am tired of waiting months sending cards to the U.S. for RMA. The 2nd lasted 2 months, the first 1 week. No OC.
> I have done some research and I think that's my conclusion. I really liked the card, performance and ray tracing, but it's killing me.
> 
> Could someone confirm or refute this by checking their serial and GPU-Z, please!


Mine has Samsung; I'll tell you the S/N ASAP.


----------



## HeadlessKnight

blazarware said:


> Hi
> I'm trying to confirm something related MSI RTX 2080TI Trio's
> 
> S/N: 1XSB18XX Micron VRAM
> S/N: 1XSB19XX Can't tell
> S/N: 4XSB18XX Samsung VRAM
> S/N: 4XSB19XX Samsung VRAM
> Others, can't tell.
> 
> I already went through 2 failing 2080 Ti Trios, both 1XSB18XX Micron. Here in my country we get really old stock, and I am tired of waiting months sending cards to the U.S. for RMA. The 2nd lasted 2 months, the first 1 week. No OC.
> I have done some research and I think that's my conclusion. I really liked the card, performance and ray tracing, but it's killing me.
> 
> Could someone confirm or refute this by checking their serial and GPU-Z, please!


Sorry to hear that. Looks like you got completely unlucky, or there is another problem such as an unreliable power supply; it could be a bad batch too. I have had this Zotac 2080 Ti AMP with Micron memory since November 2018 and it is still going strong with a +600 memory OC. The memory sucks compared to my Samsung card, which can do +1400, but other than that it is completely solid.


----------



## GAN77

Del


----------



## Offfline

J7SC said:


> Hello - I run two of those cards, and thus the KPE (and also Asus) XOC BIOS won't work right specifically in SLI, per related posts over the last couple of days (i.e. by 'ENTERPRISE'). I did flash the second Aorus 2080 Ti with the BIOS of the first just to have them identical, and I used the 'flash guide' in the OP of this thread. That said, your card's speed per the 3DM posting looks excellent already...



Thank you for your answer. So you think I can flash the KPE BIOS without problems or risk of bricking, even though there are only 2x 8-pins and not 3x 8-pins? Is the fact that my card runs on a riser not an additional problem?


----------



## ENTERPRISE

Offfline said:


> Thank you for your answer. So you think I can flash the KPE BIOS without problems or risk of bricking, even though there are only 2x 8-pins and not 3x 8-pins? Is the fact that my card runs on a riser not an additional problem?


It will be fine; however, due to the different power configurations (the KPE BIOS is designed for 3x 8-pin connectors vs. the 2x 8-pin connectors on your card), you may see boosting issues whereby your GPU does not hit its boost potential. You will only know by testing, however. It will not brick your card; you would just have to flash back to a more suitable BIOS.


----------



## acmilangr

ENTERPRISE said:


> Offfline said:
> 
> 
> 
> Thank you for your answer. So you think I can flash the KPE BIOS without problems or risk of bricking, even though there are only 2x 8-pins and not 3x 8-pins? Is the fact that my card runs on a riser not an additional problem?
> 
> 
> 
> It will be fine; however, due to the different power configurations (the KPE BIOS is designed for 3x 8-pin connectors vs. the 2x 8-pin connectors on your card), you may see boosting issues whereby your GPU does not hit its boost potential. You will only know by testing, however. It will not brick your card; you would just have to flash back to a more suitable BIOS.

Hello, I have an EVGA XC Gaming 2080 Ti that clocks really high (2205-2235 in Superposition 4K). I have a 14415 score using the Galax 380W BIOS.

Do you recommend flashing the Kingpin XOC BIOS on that? Is it safe?
What about my PSU, can it handle it? It is an EVGA 750 G3.

Thanks in advance


----------



## MonarchX

I recently went with the EVGA GeForce RTX 2080 Ti FTW3 and have some questions:
1. Does anyone know what the "NGXCore" folder in NVIDIA drivers is for? It's not necessary to install, even on an RTX 2080 Ti, but some custom clean driver packs that get rid of NVIDIA driver junk do NOT get rid of the "NGXCore" folder.
2. Are there any tweaks or updated BIOS files for the EVGA GeForce RTX 2080 Ti FTW3 (3-fan version)? I can keep it stable by maxing out voltage and power and setting the core to +100MHz, but if I also add +500MHz on VRAM, my games crash. The card tends to clock itself to about 2055MHz.


----------



## Cancretto

jim2point0 said:


> Great info all around. It does seem particularly sensitive. Moreso than previous cards I've owned. I don't much like the idea of a constant 100% fan speed though. Why does it do that, anyways?
> 
> Not sure I trust myself to do a shunt mod. So for now, I guess I'll leave it how it is until I can implement a better cooling solution. I'd love an AIO cooler (hybrid cards are great, but so expensive) but I'll wait for a bit until I can justify throwing more funds at it.


No problem man, glad to help. Your best choice would be buying a Kraken G12 mount and pairing it with a good gen 4/5 280mm AIO like the Kraken X62; you'll see a good boost in performance.



acmilangr said:


> Hello, I have an EVGA XC Gaming 2080 Ti that clocks really high (2205-2235 in Superposition 4K). I have a 14415 score using the Galax 380W BIOS.
> 
> Do you recommend flashing the Kingpin XOC BIOS on that? Is it safe?
> What about my PSU, can it handle it? It is an EVGA 750 G3.
> 
> Thanks in advance


Hi, if your card is on water I don't see any problem trying it out. The BIOS should work fine. PSU-wise, I think you may be at the limit: with the XOC BIOS your card can peak around 600W in heavy benchmarks, which is fine on its own, but once you add your other components' power draw you may max out your PSU.

Also, for benchmarking it's better to use Superposition at 1080p Extreme or 8K Optimized; it will stress your card more.
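As a rough back-of-the-envelope check of that PSU concern (the GPU peak is the figure quoted above; the CPU and system numbers are illustrative assumptions, not measurements):

```python
# Hedged headroom check: ~600 W GPU peak (XOC BIOS) on a 750 W unit.
PSU_WATTS = 750

draw_watts = {
    "gpu_peak_xoc": 600,   # peak reported above for the XOC BIOS
    "cpu_load": 120,       # assumed CPU draw under combined load
    "rest_of_system": 50,  # assumed fans, drives, motherboard
}

total = sum(draw_watts.values())
headroom = PSU_WATTS - total
print(f"total draw ~{total} W, headroom {headroom} W")  # negative = over budget
```

With those assumed numbers the budget already goes slightly negative, which is exactly the "you may max your PSU" point.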


----------



## VPII

Cancretto said:


> No problem man, glad to help. Your best choice would be buying a Kraken G12 mount and pairing it with a good gen 4/5 280mm AIO like the Kraken X62; you'll see a good boost in performance.
> 
> 
> 
> 
> 
> 
> 
> Hi, if your card is on water I don't see any problem trying it out. The BIOS should work fine. PSU-wise, I think you may be at the limit: with the XOC BIOS your card can peak around 600W in heavy benchmarks, which is fine on its own, but once you add your other components' power draw you may max out your PSU.
> 
> Also, for benchmarking it's better to use Superposition at 1080p Extreme or 8K Optimized; it will stress your card more.


What I found with the Kingpin XOC BIOS is that higher clocks are doable due to the higher core voltage compared to the Strix XOC BIOS, but it does come with more heat, and I prefer the card staying below 40C when benching, as the first core drop due to temps is around 41C.

Sent from my SM-G960F using Tapatalk
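The temperature behavior described above (first core drop around 41C) can be sketched as GPU Boost shedding one ~15 MHz bin per temperature threshold crossed. Only the first threshold comes from the post; the other thresholds and the function are illustrative guesses, not NVIDIA's actual table:

```python
# Hedged model of GPU Boost temperature throttling on Turing: one 15 MHz
# bin is dropped at each temperature threshold. Only the ~41C first drop
# comes from the post above; the rest are assumed for illustration.
BIN_MHZ = 15
TEMP_THRESHOLDS_C = [41, 50, 58, 66, 74]

def boost_clock(max_clock_mhz: int, temp_c: float) -> int:
    """Estimate the sustained clock after temperature-based bin drops."""
    bins_dropped = sum(1 for t in TEMP_THRESHOLDS_C if temp_c >= t)
    return max_clock_mhz - bins_dropped * BIN_MHZ

print(boost_clock(2160, 39), boost_clock(2160, 41))
```

That is why keeping the card under 40C on water holds the top bin, while every few degrees above it costs another step.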


----------



## acmilangr

VPII said:


> Cancretto said:
> 
> 
> 
> No problem man, glad to help. Your best choice would be buying a kraken g12 mound and pair it with a good gen 4.5/5 280mm ai like kraken x62, you'll see a good boost in performance
> 
> 
> 
> 
> 
> 
> 
> Hi, if your card is on water I don't see any problem to try it out. Bios should work fine. PSU wise, I think you're maybe on the limit, with xoc bios your card can peak around 600W on heavy benchmarks, so it's fine, but if you add also your other components power draw, you may max your PSU .
> 
> 
> 
> Also for benchmarking it's better to use superposition at 1080p extreme or 8k optimized, it will stress more your card.
> 
> 
> 
> What I found with the Kingpin Xoc bios is that higher clocks is doable due to higher core voltage when compared to the Strix Xoc bios but it does come with more heat and I prefer the card staying below 40c when benching as the first core drop due to temps is around 41c.
> 
> Sent from my SM-G960F using Tapatalk

Thanks for the answer.
So the Strix XOC is also compatible with my card? (EVGA XC Gaming, as I said)


----------



## VPII

acmilangr said:


> Thanks for the answer.
> So strix xoc is also compatible with my card? (evga XC gaming as I told)


Yes it should work.

Sent from my SM-G960F using Tapatalk


----------



## MrTOOSHORT

Jpmboy said:


> i know... tempting, right? Gotta say tho, the 780Ti KPE was an epic Vince production. The 2080Ti KPE is the only one I have not bought (yet  ). Still have your EVBOT?
> Post back with how it compares to your FE (?). Which OC'd very well if I recall correctly.



No evbot, sold that a long time ago to marc. Stuff came in btw!


----------



## bl4ckdot

MrTOOSHORT said:


> No evbot, sold that a long time ago to marc. Stuff came in btw!


Have fun !


----------



## acmilangr

VPII said:


> acmilangr said:
> 
> 
> 
> Thanks for the answer.
> So strix xoc is also compatible with my card? (evga XC gaming as I told)
> 
> 
> 
> Yes it should work.
> 
> Sent from my SM-G960F using Tapatalk

"Should" is not enough for me! I want to be sure!

Anyway, thanks a lot!


----------



## MrTOOSHORT

So I just got it up and running. I put the little voltage dip switches to "on" and it reads 1.15-1.17V on the OLED display on the card when loaded. Clocks were a sustained 2160MHz on the LN2 BIOS. Won't have time to put on the water block until the weekend. So far so good!

*https://www.3dmark.com/3dm/36610732*


----------



## VPII

acmilangr said:


> "should" is not enough for me! I want to be sure!
> 
> Anyway thanks alot!


Well, it works on my Palit GamingPro OC, so I can't see why it wouldn't work.

Sent from my SM-G960F using Tapatalk


----------



## acmilangr

VPII said:


> acmilangr said:
> 
> 
> 
> "should" is not enough for me! I want to be sure!
> 
> Anyway thanks alot!
> 
> 
> 
> Well it works on my Palit Gamingpro Oc so can't see why it should not work.
> 
> Sent from my SM-G960F using Tapatalk

Well, I tried it. It works. It is good, but not really different from the Galax 380W (actually a slightly worse result in Superposition 4K).

It uses 600W and has no curve option.

Is there any other BIOS I can safely try?

Matrix?


----------



## Nicklas0912

acmilangr said:


> Well, I tried it. It works. It is good, but not really different from the Galax 380W (actually a slightly worse result in Superposition 4K).
> 
> It uses 600W and has no curve option.
> 
> Is there any other BIOS I can safely try?
> 
> Matrix?


Did you try the kingpin bios?


----------



## Nicklas0912

MrTOOSHORT said:


> So just got it up and running. I put the little voltage dip switches to "on" and it reads 1.15-1.17v on the OLED display on the card when loaded. Clocks were sustained 2160MHz on the LN2 bios. Won't have time to put on the water block until the weekend. So far so good!
> 
> *https://www.3dmark.com/3dm/36610732*


Nice score! How are you getting such a high score? I'm doing 2160MHz and +1000 on mem as well, only getting 9777..

https://www.3dmark.com/3dm/36619753?

I see your mem doing 2125MHz, but still, is that making up the rest?


----------



## acmilangr

Nicklas0912 said:


> acmilangr said:
> 
> 
> 
> Well I tried. It works. It is good but not really difference than Galax 380w. (actually a little worst result on superposition 4k)
> 
> It uses 600w.and not curve option.
> 
> Is there any other bios I can try with safe?
> 
> Matrix?
> 
> 
> 
> Did you try the kingpin bios?

Yes kingpin xoc


----------



## Nicklas0912

acmilangr said:


> Yes kingpin xoc


How was that? the best you tried or?


----------



## acmilangr

Nicklas0912 said:


> acmilangr said:
> 
> 
> 
> Yes kingpin xoc
> 
> 
> 
> How was that? the best you tried or?

I prefer Galax 380w


----------



## Nicklas0912

acmilangr said:


> I prefer Galax 380w


Really? Do you know if the Classified tool works with the KP BIOS? Just for memory.


----------



## ESRCJ

Has anyone done a waterblock comparison? I have the EK block on my FE and I've heard some people say it's not a very good block. I get about 9-10C over fluid temp under full load. I'm curious to know if Watercool is as good as people say.


----------



## JustinThyme

gridironcpj said:


> Has anyone done a waterblock comparison? I have the EK block on my FE and I've heard some people say it's not a very good block. I get about 9-10C over fluid temp under full load. I'm curious to know if Watercool is as good as people say.


Watercool is very good. The problem is lead time. I was just recently informed that the green "available" indication means available to order; they stock nothing. You order, and when you get it depends on where the production run for that part fits into their schedule. I ordered two of the Strix blocks about 3 weeks ago now and still no shipping. I contacted them and they said they ran out of acrylic raw materials, and it would be an additional 3-5 days before shipment.

I don't know that I would change if you have a 10C delta; you're not getting much better than that. Good is considered less than 15C, which most of the big names manage. Everything after that is gravy.

I do find the fit and finish to be of higher quality on the Watercool blocks.


----------



## acmilangr

Nicklas0912 said:


> acmilangr said:
> 
> 
> 
> I prefer Galax 380w
> 
> 
> 
> really? Do you know if the classfield tool works with the KP bios? just for memorys.

What is this tool?


----------






## acmilangr

Question, please.
As I said, I tried the Kingpin XOC BIOS:

XOC BIOS, Version 90.02.17.40.88


Because I want to be able to see the curve and also control the power limit, can I try this Kingpin BIOS instead?

Normal BIOS (Green LED), Version 90.02.30.00.77

I want to be sure that it will not brick my card....


----------



## VPII

Had a bit of a balls-up flashing my GPU with the Kingpin XOC BIOS this morning. Now, firstly, it worked fine with the Kingpin XOC BIOS before, but I was not happy with the vGPU stuck at 1.1V or thereabouts, as it generated added heat, so I went back to the Strix XOC BIOS. This morning we have some cooler weather, with my GPU temps idling around 20 to 21C, so I thought I'd give the Kingpin XOC BIOS a go. After flashing and restarting, my system would hang just as I got into Windows. I tried to start up in Safe Mode, but for some reason it wouldn't accept my password. So I used my USB flash drive with the Windows install to get into a command prompt and flashed it back to the Strix XOC BIOS, and all is good now.

I think it might have been something else, other than the BIOS flash, that caused the issue, but it's just interesting that it happened.


----------



## ENTERPRISE

VPII said:


> Had a bit of a balls-up flashing my GPU with the Kingpin XOC BIOS this morning. Now, firstly, it worked fine with the Kingpin XOC BIOS before, but I was not happy with the vGPU stuck at 1.1V or thereabouts, as it generated added heat, so I went back to the Strix XOC BIOS. This morning we have some cooler weather, with my GPU temps idling around 20 to 21C, so I thought I'd give the Kingpin XOC BIOS a go. After flashing and restarting, my system would hang just as I got into Windows. I tried to start up in Safe Mode, but for some reason it wouldn't accept my password. So I used my USB flash drive with the Windows install to get into a command prompt and flashed it back to the Strix XOC BIOS, and all is good now.
> 
> I think it might have been something else, other than the BIOS flash, that caused the issue, but it's just interesting that it happened.


If anything it would have been a driver issue within Windows. When you flash a new BIOS the driver needs to essentially re-install the card as it now has a new hardware ID etc. BIOS Flash itself was likely fine.


----------



## acmilangr

VPII said:


> Had a bit of a balls-up flashing my GPU with the Kingpin XOC BIOS this morning. Now, firstly, it worked fine with the Kingpin XOC BIOS before, but I was not happy with the vGPU stuck at 1.1V or thereabouts, as it generated added heat, so I went back to the Strix XOC BIOS. This morning we have some cooler weather, with my GPU temps idling around 20 to 21C, so I thought I'd give the Kingpin XOC BIOS a go. After flashing and restarting, my system would hang just as I got into Windows. I tried to start up in Safe Mode, but for some reason it wouldn't accept my password. So I used my USB flash drive with the Windows install to get into a command prompt and flashed it back to the Strix XOC BIOS, and all is good now.
> 
> I think it might have been something else, other than the BIOS flash, that caused the issue, but it's just interesting that it happened.


What card do you own?


----------



## VPII

Palit Gamingpro Oc

Sent from my SM-G960F using Tapatalk


----------



## acmilangr

VPII said:


> Palit Gamingpro Oc
> 
> Sent from my SM-G960F using Tapatalk


Is this a reference PCB? Can I try the Strix XOC on my EVGA XC Gaming?


----------



## VPII

acmilangr said:


> Is this reference pcb? Can I try xoc strix on my evga XC gaming?


I do believe it is reference. The Strix XOC is actually pretty good, as it limits vGPU to 1.05V or thereabouts, with max wattage pulled around 460 to 480 watts.

Sent from my SM-G960F using Tapatalk


----------



## acmilangr

VPII said:


> acmilangr said:
> 
> 
> 
> Is this reference pcb? Can I try xoc strix on my evga XC gaming?
> 
> 
> 
> I do believe it is reference. The strix xoc is actually pretty good as it limits vgpu to 1.05v or there about and max wattage pulled around 460 to 480watt
> 
> Sent from my SM-G960F using Tapatalk

This is actually what I need.

Is it compatible with mine? (EVGA XC Gaming)

Anyone know?


----------



## smonkie

Phew, Quake RTX really knows how to warm up my Duke 2080Ti. This is the first time I've seen it reach 80°C. :O


----------



## DMatthewStewart

Whoa, the Hydro version of the 2080Ti Kingpin was released? I never even saw it on their website. I was waiting for this because I didn't want to buy the regular version and then also have to buy a water block. Where did you get it?



MrTOOSHORT said:


> No evbot, sold that a long time ago to marc. Stuff came in btw!


----------



## J7SC

DMatthewStewart said:


> Whoa, the Hydro of the 2080Ti Kingpin was released? I never even saw it on their website. I was waiting for this because I didnt want to buy the reg version and then also have to buy a water block. Where did you get it?


 
...seems you buy the regular KPE, then add the Hydro block, which is sold separately. A great combo for all but the LN2ers.


----------



## mikezachlowe2004

Can anyone tell me if the cooler for the RTX 2080Ti FE is the same cooler used on the RTX 2080 FE? Are they interchangeable between the two?


----------



## DMatthewStewart

Pull up the cooling config on the EK website. If they offer the same block for both, then they're likely interchangeable. Plus they have a link to the PCB.



mikezachlowe2004 said:


> Can anyone tell me if the cooler for the RTX 2080Ti FE is the same cooler used on the RTX 2080 FE? Are they interchangeable between the two?


----------



## Cancretto

acmilangr said:


> Question please.
> As I told, I tried the Kingpin XOC bios:
> 
> XOC BIOS, Version 90.02.17.40.88
> 
> 
> Because I want to be able to see the curve and also control the power limit, can I try this Kingpin bios?:
> 
> Normal BIOS (Green LED), Version 90.02.30.00.77
> 
> I want to be sure that it will not brick my card....


Hi, it depends on your card. If you have a 2x 8-pin board, the normal Kingpin bios will probably not work on your card; it won't brick it, but the card will have problems boosting to full frequencies and may show strange power-limit behavior due to the missing third power connector. The Kingpin XOC bios, although the lack of a curve editor sucks, is the best no-power-limit bios out there.


----------



## Gripen90

Got a MSI RTX 2080Ti Duke OCV1 and an Inno3D RTX 2080Ti Gaming OC X3.


----------



## JustinThyme

mikezachlowe2004 said:


> Can anyone tell me if the cooler for the RTX 2080Ti FE is the same cooler used on the RTX 2080 FE? Are they interchangeable between the two?


Looking at the bottom of the blocks, the secondary VRMs are much different and the layout of the VRAM is different, so I would say no.


----------



## jura11

Hi there

Tested my Asus RTX 2080Ti Strix today with the EK Vector RTX 2080Ti Strix waterblock, which hasn't been without problems.

First, with the EK Vector RTX 2080Ti Strix backplate fully tightened I couldn't boot and the GPU wasn't recognised in Windows; with one screw (around the core) loosened, I could boot and the GPU was recognised fine.

Temperature-wise, temps are higher by 2-6°C under load and 2-4°C at idle; the highest temperature I have seen is 42-44°C.

The stock Strix BIOS is just rubbish; the 315W power limit is barely enough for such a card, and I hit the power limit in every game and benchmark. At stock clocks the most I have seen is 2010MHz. I tried OC Scanner, but it doesn't work for me; I'm running multiple GPUs and there seems to be an issue with that.

Tried +100MHz on the core, which the GPU couldn't hold; I hit the power limit and clocks started dropping from 2115MHz to 2070MHz to 2055MHz etc. VRAM +1000MHz is easy with this card, no issues, and my Strix has Samsung VRAM modules.

Flashed to the Asus RTX 2080Ti Strix XOC BIOS, and the best OC I could achieve is +150MHz, which in theory should be 2160MHz but runs at 2145MHz, still at 1.05v. Tested in 3DMARK and Unigine Superposition with no stability issues or crashes. I set the power limit to 42%, which is 420W, and the max I have seen is 389W in HWiNFO with +1000MHz on the VRAM.

What I hate about the XOC BIOS is that the VRAM runs at full speed and doesn't downclock at idle.

Will do a few more tests and see.

Hope this helps.

Thanks, Jura
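
The slider math implied above can be sketched quickly. On the XOC BIOS, "42% which is 420W" suggests the power-limit slider maps onto a 1000W ceiling; that 1000W base is an inference from this post, not a documented spec.

```python
# Sketch of the XOC BIOS power-limit slider math implied by the post above.
# The 1000 W base is an assumption inferred from "42% which is 420W",
# not a documented specification.
XOC_BASE_W = 1000

def slider_to_watts(percent: float, base_w: float = XOC_BASE_W) -> float:
    """Convert a power-limit slider percentage to a wattage cap."""
    return base_w * percent / 100

print(slider_to_watts(42))  # 420.0 W cap, vs. the 389 W peak seen in HWiNFO
```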


----------



## MacMus

jura11 said:


> Hi there
> As first with EK Vector RTX 2080Ti Strix Backplate tighten up I couldn't boot and GPU hasn't been recognised in Windows, however with one screw(around the core) untighten I could boot and GPU have been recognised etc


That is very strange. Was the screw pressing on and shorting something?
Maybe the card has some HW issues?


----------



## MrTOOSHORT

Did some benching this morning with my KPE. Put on the HC block; seems the AIO is about as good as the block, and they compete really well with each other temps-wise. I have the block for looks and an easy swap-in, but could have lived without it. Port Royal bench below.

*https://www.3dmark.com/pr/100538*


----------



## acmilangr

Cancretto said:


> acmilangr said:
> 
> 
> 
> Question please.
> As I told, I tried the Kingpin XOC bios:
> 
> XOC BIOS, Version 90.02.17.40.88
> 
> 
> Because I want to be able to see the curve and also control the power limit, can I try this Kingpin bios?:
> 
> Normal BIOS (Green LED), Version 90.02.30.00.77
> 
> I want to be sure that it will not brick my card....
> 
> 
> 
> Hi, it depends on your card. If you have a 2x 8-pin board, the normal Kingpin bios will probably not work on your card; it won't brick it, but the card will have problems boosting to full frequencies and may show strange power-limit behavior due to the missing third power connector. The Kingpin XOC bios, although the lack of a curve editor sucks, is the best no-power-limit bios out there.

Yes, it seems you are right.

I think I'll stay on the Galax 380W.
At 1.093v it stays at 2205MHz.


----------



## MrTOOSHORT

The Kingpin .88 bios works fine on two-8-pin cards such as the FE. I have the FE and it worked just fine, water cooled. But it's not really a 24/7 bios, as it loads 1.125v. For benching it's great, and a bunch of people have been rebenching their cards with that bios and beating their old scores with it.

I wouldn't use the Kingpin bios on air.


----------



## Cosmicshroom

Anyone here rocking the 2080ti Kingpin hybrid card?


----------



## jura11

MacMus said:


> That is very strange ? Was screw shorting pressuring and shorting something ?
> Maybe card has some HW issues?


Hi there 

Agreed, it's very strange. I checked every screw to make sure it's the same size, compared against my Zotac RTX 2080Ti AMP with the EK Vector RTX 2080Ti WB and backplate, and all are the correct ones.

My Asus RTX 2080Ti Strix is the revised model which has a tiny bead of glue around the GPU die, and as per EK's recommendations I removed the 4 standoffs.

https://www.ekwb.com/blog/did-you-know-compatibility-issues-with-strix-rtx-2080-ti-water-blocks/

Not sure what is happening. I tried without the backplate, tightened the block as usual, and had no issues; the PC boots up without a single problem.

With the backplate, I need to loosen the one screw that sits around the core, and then the PC and GPU boot up.

Whether the card has some HW issue I'm not sure, but it certainly overclocks very nicely. Will do a few more tests and see where the limit of the card is later tonight.

The max VRM temperatures I have seen on this card have been in the high 40s to mid 50s.

Hope this helps 

Thanks, Jura


----------



## kx11

MrTOOSHORT said:


> Did some benching this morning with my KPE. Put on the HC block, seems the AIO is pretty good as the block and it compete really well with each other temps wise. I have the block for looks and easy swap in. But could have lived with out it. Port Royal bench below.
> 
> *https://www.3dmark.com/pr/100538*



1600+ on memory? That's the LN2 bios, I guess.


----------



## Nunzi

Can I use the Galax 380w bios on the Asus strix oc ?

Thanks


----------



## MrTOOSHORT

So I wanted to redo my shunt mod on the FE today. I had 8mΩ resistors soldered on top of the 5mΩ stock shunts, but then the backplate never fit right and it bothered me. Plus, in case I sell the card, I wanted it done right. I ordered some 3mΩ resistors, took off the old stacked resistors, and soldered the 3mΩ ones on. Took a while, but I finally got the job done. So I reinstalled the FE in the PC and ran a Port Royal bench to compare against the KPE, and got a nice result IMO. Bios is the XOC KPE bios.

*https://www.3dmark.com/3dm/36690059*
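
The arithmetic behind the two shunt-mod approaches above can be sketched; the formulas are standard resistor math, while the interpretation (the controller under-reads power by the resistance ratio) is the usual reasoning behind these mods rather than anything stated in the post.

```python
# Sketch of the shunt-mod arithmetic. Stacking an 8 mΩ resistor on the stock
# 5 mΩ shunt gives a parallel resistance close to the discrete 3 mΩ parts
# used for the clean redo. Either way the power controller sees a smaller
# voltage drop, so real power is roughly reported power * (5/3).
def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

stacked = parallel(8.0, 5.0)   # ~3.08 mΩ, close to a discrete 3 mΩ part
scale = 5.0 / 3.0              # actual power ≈ reported power * ~1.67
print(round(stacked, 2), round(scale, 2))
```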


----------



## dante`afk

MrTOOSHORT said:


> So just got it up and running. I put the little voltage dip switches to "on" and it reads 1.15-1.17v on the OLED display on the card when loaded. Clocks were sustained 2160MHz on the LN2 bios. Won't have time to put on the water block until the weekend. So far so good!
> 
> *https://www.3dmark.com/3dm/36610732*


Let's see what you can pull with the waterblock. A lot of FE cards, or custom cards for $1,199, do 2160.




gridironcpj said:


> Has anyone done a waterblock comparison? I have the EK block on my FE and I've heard some people say it's not a very good block. I get about 9-10C over fluid temp under full load. I'm curious to know if Watercool is as good as people say.







AC Kryographics > Watercool > EK


----------



## papasmurf211

Nunzi said:


> Can I use the Galax 380w bios on the Asus strix oc ?
> 
> Thanks


I am also wondering this. What is the best bios to use on the Strix OC? I have downloaded the unlimited bios but I'm not sure if this is a good idea on air.


----------



## jura11

papasmurf211 said:


> I am also wondering this. What is the best bios to use on the Strix OC? I have downloaded the unlimited bios but I'm not sure if this is a good idea on air.


Hi there 

I'm pretty sure someone over here already tested the Galax 380W BIOS on the Asus RTX 2080Ti Strix, and it seems that with the Galax 380W you will lose one DP port and the HDMI port etc.

https://www.overclock.net/forum/27731052-post3978.html

I would recommend checking the IO panel of every GPU on Google and comparing it against your Strix's IO; if it's the same or similar, then in theory you can flash the BIOS without losing any of the DP or HDMI ports.

I'm using the XOC BIOS on my Asus RTX 2080Ti Strix and it performs pretty well; my main issues are that the VRAM doesn't downclock and that you can't use the V/F curve.

Right now I'm at 2145MHz and +1000MHz on the VRAM. I will try to OC a bit more today and see where the limit of my Asus RTX 2080Ti Strix is.

Hope this helps 

Thanks, Jura


----------



## jura11

papasmurf211 said:


> I am also wondering this. What is the best bios to use on the Strix OC? I have downloaded the unlimited bios but I'm not sure if this is a good idea on air.


Hi there 

I have just tried the Asus RTX 2080Ti Matrix BIOS, which works on the Asus RTX 2080Ti Strix without losing any DP or HDMI ports; every port is working.

https://www.techpowerup.com/vgabios/210080/asus-rtx2080ti-11264-190307-1

It has a 360W power limit. Stock clocks under water are 2070MHz, around 60MHz more than on the Strix, whose stock clocks are around 2010MHz. Stock VRAM is overclocked too, at 7400MHz, which is 400MHz over the stock BIOS. I'm currently running 2160MHz, which drops to 2145MHz when temperatures reach 38-40°C, with +700MHz on the VRAM, which is 8100MHz.

I have still hit the 360W power limit in benchmarks like Unigine Superposition and 3DMARK, though.

With the XOC BIOS I run +135MHz or +150MHz, which gives me 2145MHz or 2160MHz, and +1100MHz on the VRAM; the difference in benchmarks is, I would say, pretty minimal.

Hope this helps 

Thanks, Jura
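
The step-down behavior described above (2160MHz dropping to 2145MHz once the card reaches 38-40°C) follows the usual GPU Boost pattern of shedding clock in ~15MHz bins as temperature thresholds are crossed. A simplified model, with the 39°C threshold taken from the post and everything else an assumption (real boost behavior also depends on power and voltage limits):

```python
# Simplified model of GPU Boost temperature binning: the core clock drops
# one ~15 MHz bin per temperature threshold crossed. The 39 °C threshold is
# taken from the post above ("38-40°C"); the rest is an illustrative sketch.
STEP_MHZ = 15

def boosted_clock(temp_c: float, top_clock: int = 2160,
                  thresholds=(39,)) -> int:
    """Return the sustained clock after temperature-based bin drops."""
    bins_dropped = sum(1 for t in thresholds if temp_c >= t)
    return top_clock - bins_dropped * STEP_MHZ

print(boosted_clock(35), boosted_clock(40))  # 2160 2145
```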


----------



## MonarchX

Would that XOC BIOS work on an EVGA RTX 2080 Ti FTW3 (well air-cooled) without losing any ports? I heard it can safely increase voltage and power for air-cooled cards to get a higher OC. 2055MHz is where my card is stuck. Where do I get that XOC BIOS? Can it be flashed with the latest standard version of NVFlash? What does XOC stand for?


----------



## papasmurf211

jura11 said:


> Hi there
> 
> I have just tried Asus RTX 2080Ti Matrix BIOS which works with Asus RTX 2080Ti Strix without losing DP ports or HDMI ports, every port is working
> 
> https://www.techpowerup.com/vgabios/210080/asus-rtx2080ti-11264-190307-1
> 
> Its have 360W power limit, stock clocks are under water at 2070MHz which is around 60MHz more than on Asus RTX 2080Ti Strix which stock clocks are around 2010MHz and stock VRAM is OC too at 7400MHz which is 400MHz extra over stock BIOS,currently running 2160MHz which would drop to 2145MHz when temperatures reach 38-40°C and +700MHz on VRAM which is 8100MHz
> 
> Although I have hit 360W power limit in benchmarks like in Unigine Superposition or 3DMARK
> 
> With XOC BIOS running +135MHz or +150MHz which it gives me 2145MHz or 2160MHz and +1100MHz on VRAM and difference in benchmarks is I would say pretty minimal
> 
> Hope this helps
> 
> Thanks, Jura


Awesome, thank you very much!


----------



## Nunzi

jura11 said:


> Hi there
> 
> I have just tried Asus RTX 2080Ti Matrix BIOS which works with Asus RTX 2080Ti Strix without losing DP ports or HDMI ports, every port is working
> 
> https://www.techpowerup.com/vgabios/210080/asus-rtx2080ti-11264-190307-1
> 
> Its have 360W power limit, stock clocks are under water at 2070MHz which is around 60MHz more than on Asus RTX 2080Ti Strix which stock clocks are around 2010MHz and stock VRAM is OC too at 7400MHz which is 400MHz extra over stock BIOS,currently running 2160MHz which would drop to 2145MHz when temperatures reach 38-40°C and +700MHz on VRAM which is 8100MHz
> 
> Although I have hit 360W power limit in benchmarks like in Unigine Superposition or 3DMARK
> 
> With XOC BIOS running +135MHz or +150MHz which it gives me 2145MHz or 2160MHz and +1100MHz on VRAM and difference in benchmarks is I would say pretty minimal
> 
> Hope this helps
> 
> Thanks, Jura


Thank you, going to try the Matrix BIOS!


----------



## sagascor14

I'm new to overclocking but have my first custom loop running nicely. 9900k clocked to 4.9. I have EVGA 2080 TI FTW3 with a hydrocopper block on it.

My settings in Precision X1 are +70 / +1100, with +124 power target and +1000 mv. That is the highest I can get stable in these bench tests:

- Superposition 1080 Extreme (score 9771)
- Timespy free (13565)
- Firestrike free (25432)

Bench scores seem average to me, considering GPU temp is <50 at max load and CPU is 70s max.

I don't want to do a shunt mod but is there anything else I could consider? For instance, is it possible/wise to flash another bios to the FTW3? I see there's a Galax 380w (vs 373w FTW3) bios and also a KiNGPiN 520w bios but I don't know enough about this to know whether that's going to get me any gains and what the risk is?
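
The "+124" power target and the 373W FTW3 figure above are consistent with a 300W base TDP (300W × 1.24 ≈ 372W). The 300W base is an inference from those two numbers, not a quoted spec, but the math itself is just a percentage:

```python
# Power-target slider math: the cap is base TDP times the slider percentage.
# The 300 W base below is inferred from the "+124" target and 373 W figures
# in the post above, not a quoted specification.
def power_target_watts(base_w: float, target_percent: float) -> float:
    """Wattage cap implied by a power-target percentage."""
    return base_w * target_percent / 100

print(round(power_target_watts(300, 124)))  # ~372 W, matching the FTW3 cap
```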


----------



## Judah R

Question: How safe is the XOC BIOS, Version 90.02.17.40.88, on a reference 2080ti card? I have an EVGA GeForce RTX 2080 Ti XC BLACK EDITION GAMING, 11G-P4-2282-KR, with an EK waterblock. I was using the Galax 380W bios but am on the XOC bios atm, and WOW... it pulled 662 watts peak on a Time Spy Extreme run. It runs about 450-500 watts in games, up to 530 watts in demanding games such as Metro Exodus or Shadow of the Tomb Raider. I haven't even tried to push the overclock with the 1.125 volts it's running... it might do 2150-2200, I don't know. I could only get about 2055-2070 with the Galax or stock bios.


----------



## Cancretto

MonarchX said:


> Would that XOC BIOS work on EVGA RTX 2080 Ti FTW3 (well air-cooled) without losing any ports? I heard it can increase voltage and power safely for air-cooled cards to get higher OC. 2055Mhz is where my card is stuck. Where do I get that XOC BIOS? Can it be flashed with the latest standard version of NVFlash? What XOC stand for?


Hi, with the Kingpin XOC bios you should be able to keep all video outputs. The voltage itself with that bios isn't a problem, but it will increase your temps, and for Turing cards temperature is the key, so keep an eye on that.
If you want to try it out, you can find it here: https://xdevs.com/guide/2080ti_kpe/
The exact bios version is 90.02.17.40.88



sagascor14 said:


> I'm new to overclocking but have my first custom loop running nicely. 9900k clocked to 4.9. I have EVGA 2080 TI FTW3 with a hydrocopper block on it.
> 
> My settings in Precision X1 are +70 / +1100, with +124 power target and +1000 mv. That is the highest I can get stable in these bench tests:
> 
> - Superposition 1080 Extreme (score 9771)
> - Timespy free (13565)
> - Firestrike free (25432)
> 
> Bench scores seem average to me, considering GPU temp is <50 at max load and CPU is 70s max.
> 
> I don't want to do a shunt mod but is there anything else I could consider? For instance, is it possible/wise to flash another bios to the FTW3? I see there's a Galax 380w (vs 373w FTW3) bios and also a KiNGPiN 520w bios but I don't know enough about this to know whether that's going to get me any gains and what the risk is?


Hi man, since you're on water I'd suggest you try the Kingpin XOC bios; you may squeeze some more performance out of your card.
As long as your cooling is adequate you won't run into trouble; you can find the bios at the link above.



Judah R said:


> Question : How Safe is the XOC BIOS, Version 90.02.17.40.88 on a Reference 2080ti Card? I have a EVGA GeForce RTX 2080 Ti XC BLACK EDITION GAMING, 11G-P4-2282-KR with an EK waterblock. I was using the galax 380watt bios but am on the xoc bios atm and WOW... it pulled 662 watts peak on a time spy ex run...... it runs about 450-500 watts in games... up to 530 watts in demanding games such at metro exodus or shadow of the tomb raider. I havent even tried to push the overclock with the 1.125 volts it running.... it might do 2150-2200 idk... could only get about 2055-2070 with galax or stock bios.
> 
> https://www.youtube.com/watch?v=ww28UxMJASw


Hi, as said above, as long as your cooling system is good enough you should be fine; reference boards can hold up above 600W of power.
Also, Time Spy Extreme is a pretty intense benchmark, so it's expected that the card pulled a lot of power. Averaging ~500W is not a problem.


----------



## jim2point0

Been going through my gaming backlog lately. Finished Darksiders 3 with my 2080TI. +116 on the core was stable in all games and benchmarks, including that game... played at 4K the whole time, too.

Then I start up Dishonored 2 and it crashes, crashes, crashes. It was fine if I left my card at stock. Started stepping down my OC and it still kept crashing. Finally I just said **** it and ran OC Scanner, which came up with a value of +51 for the core. That leaves it around 1850-1950MHz average from what I see in game. May as well leave the card stock at that point...

I really have **** luck with overclocks, that's for sure.


----------



## im late

MonarchX said:


> Would that XOC BIOS work on EVGA RTX 2080 Ti FTW3 (well air-cooled) without losing any ports? I heard it can increase voltage and power safely for air-cooled cards to get higher OC. 2055Mhz is where my card is stuck. Where do I get that XOC BIOS? Can it be flashed with the latest standard version of NVFlash? What XOC stand for?


I am using the 90.02.17.40.88 KP XOC bios in my FTW3 under ambient custom water.

I have an external 1080 rad (just for the GPU) and only bench in the cold winter nights (winter here in Kangaroo Land).

I used the latest NVflash with no issue, all display outputs work fine.

Did 2205 recently in TS and PR: <-- not the highest on ambient, but a good start IMO

https://www.3dmark.com/spy/7369184

https://www.3dmark.com/pr/99883

Have not gone over +550 on the RAM yet. Core was +148.

It was done on a rainy night, so outdoor ambient temps were warmer.

As most know, temps are the key. Waiting to put the X299 stuff on the bench and for lower night temps (remember, I am in hot kangaroo land).... lol

If I were benching on air I would be cranking those fans to 100% during bench runs and letting the room take in cold night air for hours before benching.

As has been stated above, monitor temps and be careful on air. Not to scare you off at all; you should be fine, just monitor.


----------



## Spiriva

Judah R said:


> Question : How Safe is the XOC BIOS, Version 90.02.17.40.88 on a Reference 2080ti Card? I have a EVGA GeForce RTX 2080 Ti XC BLACK EDITION GAMING, 11G-P4-2282-KR with an EK waterblock. I was using the galax 380watt bios but am on the xoc bios atm and WOW... it pulled 662 watts peak on a time spy ex run...... it runs about 450-500 watts in games... up to 530 watts in demanding games such at metro exodus or shadow of the tomb raider. I havent even tried to push the overclock with the 1.125 volts it running.... it might do 2150-2200 idk... could only get about 2055-2070 with galax or stock bios.


I've been using the Kingpin XOC bios on my 2080ti FE for about 2-3 weeks now. It uses between 200W and 580W depending on the title (World of Warcraft stresses the 2080ti less, so around 200-230W; in GTA5 it's more like 550+W). So far so good.
It overclocks to around 2165-2180MHz with the XOC bios.


----------



## SimoneTek

Nvidia will release a new 2080Ti... I think I will sell my 2080Ti Amp and get a Radeon VII... It is a respect issue.

Sent from my LYA-L29 using Tapatalk


----------



## SimoneTek

SimoneTek said:


> Nvidia will release a new 2080Ti... I think I will sell my 2080Ti Amp and get a Radeon VII... It is a respect issue.
> 
> Sent from my LYA-L29 using Tapatalk


No, I'm not serious ahah, but I'm very angry 

Sent from my LYA-L29 using Tapatalk


----------



## dante`afk

Just do the math: you'll get a 2060 Super, 2070 Super and 2080 Super.

The supposedly not-yet-confirmed 2080 Ti Super, how much better is it going to be? 1%? Binned chips? There's no headroom between the Ti and the Titan.


----------



## JustinThyme

Until the earth ceases to spin on its axis........
There will ALWAYS be a new release.
That being said, the best one can do is build on what's there at the time of the build. If you wait for the next best "new" thing to roll out, you will be waiting forever.
The smart ones are those that wait for the next new thing, then buy one cycle older at an often significant cost reduction. Problem is, most enthusiasts are not among this group.....


----------



## SimoneTek

That is correct.. But think about the 2060... only 4 months and a better new GPU, probably at the same price... Maybe I'm not an enthusiast, but I don't have money to throw around.

Sent from my LYA-L29 using Tapatalk


----------



## kot0005

not sure if real but....
https://wccftech.com/exclusive-nvidias-super-gpus-unleashing-monsters/


----------



## SimoneTek

It is a new chip... more a 2090 than a 2080Ti Super.. Probably ~4500 CUDA cores with 14/16GB of memory at 16Gbps.

Sent from my LYA-L29 using Tapatalk


----------



## jim2point0

Well, I just bought a 2080TI and it's a dud on almost all fronts... so I wonder if I can still return it and wait out this announcement to see what comes. If this 2080TI SUPER is real, it will be a matter of pricing.


----------



## kot0005

jim2point0 said:


> Well I just bought a 2080TI and its a dud on almost all fronts... so I wonder if I can return it still and wait out this announcement to see what comes. If this 2080TI SUPER is real, it will be a matter of pricing.


With a 7700k a 2080ti will obviously look like a dud if you aren't on 4K...


----------



## Martin778

*Hehe lol* 
*Proceeds to cry in a corner*
*F>>>ING NGREEDIA BESTERDSS*

Seriously, it's just that I play at 3440x1440 @ 144Hz or 4K 60Hz and squeeze every bit of juice out of my Kingpin 2080TI; otherwise I'd have taken the Radeon VII in a heartbeat with a 12-core Ryzen.


----------



## jim2point0

kot0005 said:


> with a 7700k 2080ti will obv look like a dud if u arent on 4k...


I play mostly at 5160x2160 (via DSR). That's about 3 megapixels higher than 4K.

When I say dud... I mean it goes as high as 1850MHz and sounds like a ******* jet engine just to keep the temps under 80C. 

My 1080Ti was whisper quiet and could hit 2000MHz easily. 

So that's what I mean by dud. It's loud, hot, and doesn't clock nearly as high as I think it should. I don't know what I should do with it.
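
The resolution claim above checks out: 5160x2160 via DSR is roughly 2.85 megapixels more than 4K UHD (3840x2160).

```python
# Checking the "about 3 megapixels higher than 4K" claim above.
def megapixels(w: int, h: int) -> float:
    """Pixel count of a w x h resolution, in megapixels."""
    return w * h / 1e6

dsr = megapixels(5160, 2160)   # ~11.15 MP (5160x2160 via DSR)
uhd = megapixels(3840, 2160)   # ~8.29 MP (4K UHD)
print(round(dsr - uhd, 2))     # ~2.85 MP more, i.e. "about 3 megapixels"
```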


----------



## sagascor14

Cancretto said:


> Hi man, since you're on water, I'd suggest you to try kingpin xoc bios, you may squeeze some more performance out of your card.
> As long your cooling is adequate you won't run in troubles, you can find the bios on the link above


Thanks for replying. That 90.02.17.40.88 XOC comes with a warning that it's only for benching and not an every day bios and also that it can brick an FTW3 so I'm too scared to take that risk. Is it possible to flash the "green" KPE bios 90.02.30.00.77 to a FTW3 to get the power target max from 124% to 144%?


----------



## 113802

kot0005 said:


> with a 7700k 2080ti will obv look like a dud if u arent on 4k...


Nothing is wrong with gaming on a 7700k with a 2080 Ti. I noticed practically no difference going from my 6700k to the 9900k when gaming.

https://www.techpowerup.com/reviews/Intel/Core_i9_9900K/12.html


----------



## LuckLess7

I have just moved my Computer from my office to another room. About the same time it got hotter outside, (room temperature without AC about 25°C).

I did all my testing with the card back when I got it. Unfortunately I don't really remember all the values, BUT yesterday I played Senua's Sacrifice and noticed the temperature hitting about 86°C. I turned the game off because that seemed awfully high, and I cannot remember if I ever had these temperatures.

I was playing on a 4k screen. 

I launched Battlefield V as I think I remember not exceeding 80°C, but the card got hotter in that game too.

Does the room change and the change in ambient temperature really make this much of a difference?

The card doesn't seem to throttle and just runs at 86°C, but I kind of don't feel comfortable with the card running this hot.

I opened the case and directed a room ventilator at the open case, which helped with temperatures a little. I brought it down to 81-82. 

This still seems like a bad solution. Also ambient temps are at 53°C. 

What do you guys think about these temps?


----------



## Cancretto

sagascor14 said:


> Thanks for replying. That 90.02.17.40.88 XOC comes with a warning that it's only for benching and not an every day bios and also that it can brick an FTW3 so I'm too scared to take that risk. Is it possible to flash the "green" KPE bios 90.02.30.00.77 to a FTW3 to get the power target max from 124% to 144%?


Hi there, the XOC bios won't brick your FTW3; as long as temperatures are fine you won't have any problem. I've flashed it on 4 cards, 2 reference and 2 FTW3, and I'm still using it daily on my FTW3.
Also no, unfortunately the other normal Kingpin bioses (green, yellow and red) won't work on your card, since the FTW3 is missing the 3rd power connector. It will flash, but the card won't work/boost frequencies properly.


----------



## sagascor14

Cancretto said:


> Hi there, xoc bios won't brick your ftw3, as long temperature are fine you won't have any problem. I've flashed it on 4 cards, 2 references and 2 ftw3 and still using it daily on my ftw3.
> Also no, unfortunately other normal kingpin bioses like green, yellow and red won't work on your card since ftw3 is missing 3rd power connector. It will flash but card should not work/boost frequencies properly


OK, thanks again. I'm thinking of starting with something a little more gentle - what is the "next best bios" for the FTW3, and do you know where I can find it please? I've seen mention of a 400W (MSI?) or a 380W (Galax) bios, but I don't want to grab the wrong one lol.

Also, if I do summon the courage to flash the XOC bios, will that start eating power like crazy? As it would be a 24/7 bios, I would not want to be running 500W+ just when using Firefox lol!

Last question - if I flash to a non-EVGA bios, do I lose PX1 functionality?

Sorry for all the questions and thanks for helping me along


----------



## Jpmboy

Hey guys.. throw your 2080Tis at *this month's FAT*! The card pulls over 3 million PPD! Even run it in what would be your normal downtime for the 3-day event.


----------



## J7SC

Jpmboy said:


> hey guys.. throw your 2080Tis at *this month's FAT*! The card pulls over 3 million PPD! Even run it in what would be your normal down time for the 3 day event.



I initially misread your post as '...throw out your 2080 TI (for Super?), but not until...'


----------



## Martin778

LOL, so did I!


----------



## JustinThyme

jim2point0 said:


> I play mostly at 5160x2160 (via DSR). That's about 3 megapixels higher than 4K.
> 
> When I say dud... I mean it goes as high as 1850Mhz and sounds like a ******* jet engine just to keep the temps under 80C.
> 
> My 1080Ti was whisper quiet and could hit 2000mhz easily.
> 
> So that's what I mean by dud. It's loud, hot, and doesn't clock nearly as high as I think it should. I don't know what I should do with it.


1850 will translate to 2100+
All 2080Tis run hot. I had two on air in an Enthoo Elite case for a short time while waiting for blocks for the Strix cards to come out. This is why the power limit is so tight. Under any load at all I had to leave the door open. Put blocks on them and they idle at 27C and load up at 40C. They are not 1080Tis. Compare benches from both; I got about a 30% performance increase between the two.


----------



## Martin778

Yep, they do run hot like hell except for the 240mm AiO versions and even these need a custom fan curve.

I've seen some benches claiming that Radeon VII uses less power than 2080Ti. Is that true?


----------



## ThrashZone

JustinThyme said:


> 1850 will translate to 2100+
> All 2080Tis run hot, I had two on air in an Enthoo Elite case for a short time while waiting for block for the strix cards to come out. This is why they have a power limit so tight. Any load at all and I had to leave the door open. Put blocks on them and they idle at 27C and load up at 40C. They are not 1080Tis. *Compare benches from both. I got about a 30% performance increase between the two*.


Hi,
Unfortunately the price jump was more than 30%


----------



## Martin778

Only a 141% price difference from the 1080Ti GamingX to the 2080Ti Kingpin. Was it worth it? Hell nah, now Nscums are going to release a "super" 2080Ti that will make me sell my KP and lose a ton of cash to buy a 'super' card with inferior cooling, because there won't be any new Super-Kingpin.
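
For reference, a 141% increase works out like this; the €750 / €1810 figures below are purely hypothetical, chosen only to illustrate the arithmetic, since the post doesn't state what was actually paid.

```python
# Price-increase percentage. The 750 and 1810 figures are hypothetical
# illustrations only; the post above does not state the actual prices paid.
def price_increase_percent(old: float, new: float) -> float:
    """Percentage increase going from an old price to a new price."""
    return (new - old) / old * 100

print(round(price_increase_percent(750, 1810)))  # ~141%
```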


----------



## jim2point0

JustinThyme said:


> 1850 will translate to 2100+


I assume you mean when watercooled?

Yeah, that's currently the plan. I'm going to return the Asus Strix. The extra I'm paying for it is doing jack monkey squat when it doesn't seem to perform better than an FE in terms of temps or performance. Going to get a cheaper model, then slap an AIO on it. The price should work out to be the same, and then I'll have a quiet GPU that runs a little bit faster on top. Might play around with a custom bios for more power draw, but we'll see. One step at a time. Tearing off a GPU's cooler and attaching an AIO is more than I've ever done with a GPU in my lifetime, so it will be a learning experience.


----------



## truehighroller1

Just wanted to say that I was able to flash the XOC BIOS and wow!

https://www.3dmark.com/spy/7427729

Love the new unlimited power oh oh oh.

I have the Asus Dual OC on an EK waterblock with four radiators and two nice pumps, in case anyone is wondering about my setup.

GPU-Z showed me pulling 600+ watts. Temps were 50C tops. I can't wait for winter again now lol.


----------



## JustinThyme

ThrashZone said:


> Hi,
> Unfortunately the price jump was more than 30%


Depends on when you bought the 1080Ti. Initially it was around $700, then went up to the same level as the 2080Ti and is still up there. A new 1080Ti right now is running in the same range as the 2080Ti, thanks to miners cleaning out the supply. I got mine in the $700 range for the Strix cards, sequential serials even, from Microcenter. A month later you couldn't buy one from any manufacturer or supplier, as there were none to be had, and when they did pop back up the same card went to $1300.


----------



## JustinThyme

ThrashZone said:


> Hi,
> Unfortunately the price jump was more than 30%





jim2point0 said:


> I assume you mean when watercooled?
> 
> Yeah, that's currently the plan. I'm going to return the Asus Strix. The extra I'm paying for it is doing jack monkey squat when it doesn't seem to perform better than an FE in terms of temps or performance. Going to get a cheaper model, then slap an AIO on it. The price should work out to be the same, and then I'll have a quiet GPU that runs a little bit faster on top. Might play around with a custom BIOS for more power draw, but we'll see. One step at a time. Tearing off a GPU's cooler and attaching an AIO is more than I've ever done with a GPU in my lifetime, so it will be a learning experience.


The only difference between FE and anything else is custom PCBs and binned chips, as well as binned memory and a guaranteed boost rate. FE is a crapshoot for performance. I've not seen anyone post up 1850 on an FE yet. I tried it on the 1080Ti and lost the lottery.

I'm happy with the performance of my pair of 2080Tis. I tried upping the power limit with a BIOS flash and it didn't gain me much, like 50MHz, so I went back to stock and live stable and comfortable at 2100MHz without even bumping up the core voltage.


----------



## JackCY

Martin778 said:


> Only a 141% price difference from the 1080Ti GamingX to the 2080Ti Kingpin. Was it worth it? Hell nah. Now Nscums are going to release a "super" 2080Ti that will make me sell my KP and lose a ton of cash buying a 'super' card with inferior cooling, because there won't be any new Super-Kingpin.


The Kingpin card made no sense in the first place when they didn't go with a refreshed Turing chip. It's an XOC card for the very few, and it will be beaten on the epeen records faster than you could drink its purchase price in beers.


----------



## J7SC

Martin778 said:


> Only a 141% price difference from the 1080Ti GamingX to the 2080Ti Kingpin. Was it worth it? Hell nah. Now Nscums are going to release a "super" 2080Ti that will make me sell my KP and lose a ton of cash buying a 'super' card with inferior cooling, because there won't be any new Super-Kingpin.



...who knows, perhaps there will be a Galax 2080 Ti Super HoF OCL you can save up for  (availability = unobtainable for mortals, Price = $izzling)


----------



## Martin778

It's not so much the price I mind as the availability, which is rubbish; barely any KFA2/Galax cards make it to Europe. The EVGA Kingpin is the best you can find here, and even then I couldn't purchase it directly through EVGA.eu, only through Caseking.


----------



## truehighroller1

The limitless power is so awesome OMG.

https://www.3dmark.com/spy/7430137

I just keep going higher and higher!


----------



## Martin778

That all you got? 
https://www.3dmark.com/spy/7430363


----------



## willverduzco

Martin778 said:


> That all you got?
> https://www.3dmark.com/spy/7430363


That all you got? 
https://www.3dmark.com/spy/6785502

(In all fairness, great scores to the two above! Just 2 months ago, my score was top 50 overall and 51st place for GPU-score for single card... and now look how far we've fallen. I need to re-run my benches now that I've increased my cooling setup with Noctua A14-iPPC radiator fans. Y'all may want to also try Superposition 1080p Extreme preset. I've gotten 10680, and I'm sure you'd come reasonably close since your GPU score in Timespy was only a few hundred below mine.)


----------



## truehighroller1

https://www.3dmark.com/3dm/36775107?

That's about it until it gets colder yes lol.


----------



## truehighroller1

Martin778 said:


> That all you got?
> https://www.3dmark.com/spy/7430363





willverduzco said:


> That all you got?
> https://www.3dmark.com/spy/6785502
> 
> (In all fairness, great scores to the two above! Just 2 months ago, my score was top 50 overall and 51st place for GPU-score for single card... and now look how far we've fallen. I need to re-run my benches now that I've increased my cooling setup with Noctua A14-iPPC radiator fans. Y'all may want to also try Superposition 1080p Extreme preset. I've gotten 10680, and I'm sure you'd come reasonably close since your GPU score in Timespy was only a few hundred below mine.)





https://www.3dmark.com/3dm/36775107?

That's about it until it gets colder yes lol.

Thanks for the tip on the superposition preset.

Edit: Summoned a little more power.

https://www.3dmark.com/3dm/36776129?


----------



## willverduzco

truehighroller1 said:


> Edit: Summoned a little more power.
> 
> https://www.3dmark.com/3dm/36776129?


Fantastic progress, man! You're now ahead of Martin, and only 245 below mine! That CPU score of yours is beastly, thanks to that 7900X getting 13966. My lowly 9900K at 5.4 cowers in fear with just 13779 points. Now you gotta get that graphics score up to match your CPU score, though. At 17049, you still have a good 400+ points for it to stretch its legs.

Also IMO, you may find better luck with one of the lower power limit BIOSes + a conductive ink shunt mod than any of the XOC BIOSes. I've tried all of them (including the 3 OC Labs BIOSes) over the past 2.5 months; and on my Samsung memory card, all of the options perform poorly on a clock-for-clock basis compared to other BIOSes, presumably due to worse memory timings. Perhaps the looser timings will allow the mem to OC higher, but I have been too lazy/busy to check more thoroughly.

For those interested in per-clock performance on two of these "forbidden fruit" BIOSes when running Superposition 1080p Extreme and 8k Optimized, check the attachments. In my experience, in 8k Optimized (high mem bandwidth requirements), the performance deficit is roughly 0.6% versus the Matrix BIOS on the older 1010P BIOS, and about 0.3% on the newer 1130A BIOS. Also, take the three raw values highlighted in yellow with a grain of salt, as they are suspected outliers. Removing them from analysis makes the trend mentioned above even cleaner.
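willverduzco's clock-for-clock comparison above boils down to dividing a benchmark score by the core clock it was achieved at and comparing that ratio across BIOSes. A minimal Python sketch of the method; the scores and clocks are hypothetical placeholders (chosen so the deficit lands near the ~0.6% he reports), not his actual data:

```python
# Per-clock BIOS comparison: normalize a benchmark score by the core
# clock it ran at, then compare BIOSes on that ratio.
# All numbers below are hypothetical placeholders.

def per_clock(score: float, core_mhz: float) -> float:
    """Benchmark points per MHz of core clock."""
    return score / core_mhz

def deficit_pct(candidate: float, baseline: float) -> float:
    """Percent per-clock deficit of a candidate BIOS vs. a baseline."""
    return (1 - candidate / baseline) * 100

# Hypothetical Superposition 8K Optimized runs at identical clocks:
matrix = per_clock(5000, 2100)     # baseline (Matrix BIOS), placeholder score
xoc_1010p = per_clock(4970, 2100)  # older XOC BIOS, placeholder score

print(f"deficit: {deficit_pct(xoc_1010p, matrix):.1f}%")  # -> deficit: 0.6%
```

Running both BIOSes at the same clocks, as here, makes the normalization trivial; the ratio matters when the achievable clocks differ between BIOSes.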


----------



## truehighroller1

willverduzco said:


> truehighroller1 said:
> 
> 
> 
> Edit: Summoned a little more power.
> 
> https://www.3dmark.com/3dm/36776129?
> 
> 
> 
> Fantastic progress, man! You're now ahead of Martin, and only 245 below mine! That CPU score of yours is beastly, thanks to that 7900X getting 13966. My lowly 9900K at 5.4 cowers in fear with just 13779 points. Now you gotta get that graphics score up to match your CPU score, though. At 17049, you still have a good 400+ points for it to stretch its legs.
> 
> Also IMO, you may find better luck with one of the lower power limit BIOSes + a conductive ink shunt mod than any of the XOC BIOSes. I've tried all of them (including the 3 OC Labs BIOSes) over the past 2.5 months; and on my Samsung memory card, all of the options perform poorly on a clock-for-clock basis compared to other BIOSes, presumably due to worse memory timings. Perhaps the looser timings will allow the mem to OC higher, but I have been too lazy/busy to check more thoroughly.
> 
> For those interested in per-clock performance on two of these "forbidden fruit" BIOSes when running Superposition 1080p Extreme and 8k Optimized, check the attachments. In my experience, in 8k Optimized (high mem bandwidth requirements), the performance deficit is roughly 0.6% versus the Matrix BIOS on the older 1010P BIOS, and about 0.3% on the newer 1130A BIOS. Also, take the three raw values highlighted in yellow with a grain of salt, as they are suspected outliers. Removing them from analysis makes the trend mentioned above even cleaner.

Thanks. I've had this CPU at 5.25GHz; this is the 24/7 clock I'm hitting these scores at. As far as my GPU BIOS goes, this is the best BIOS I've ever run on this card, hands down. The highest I achieved with any other BIOS was 15999. The stock BIOS for this card is a joke.

I'm smashing my old records with the Kingpin XOC BIOS and I'm not even pushing things. I'm tempted, however, to try the volt mod you speak of, because the card is still limiting itself voltage-wise, just not power-wise. Ohhh the power mmm... Lol.


----------



## Cancretto

sagascor14 said:


> OK thanks again. I'm thinking of starting with something a little more gentle - what is the "next best bios" for the FTW3 and do you know where I can find it please? I've seen mention of a 400w (MSI?) or a 380w (Galax) bios but I don't want to grab the wrong one lol.
> 
> Also, if I do summon the courage to do the XOC bios, will that start eating power like crazy? As it would be a 24/7 bios I would not want to be running 500w+ just when using firefox lol!
> 
> Last question - if I flash to a non-EVGA bios do I lose PX1 functionality?
> 
> Sorry for all the questions and thanks for helping me along


Hi man, glad to help. The only other standard BIOS with a higher PL than your FTW3 is this one: https://www.techpowerup.com/vgabios/204557/kfa2-rtx2080ti-11264-180910
It is the Galax 380W one, but honestly I don't think it will change much from yours. I'd still suggest you try out the XOC BIOS and see how it behaves on your card. Also, don't worry: with the XOC BIOS the GPU won't pull 500W while browsing or in low-load applications.


----------



## JustinThyme

truehighroller1 said:


> The limitless power is so awesome OMG.
> 
> https://www.3dmark.com/spy/7430137
> 
> I just keep going higher and higher!





Martin778 said:


> That all you got?
> https://www.3dmark.com/spy/7430363





willverduzco said:


> That all you got?
> https://www.3dmark.com/spy/6785502
> 
> (In all fairness, great scores to the two above! Just 2 months ago, my score was top 50 overall and 51st place for GPU-score for single card... and now look how far we've fallen. I need to re-run my benches now that I've increased my cooling setup with Noctua A14-iPPC radiator fans. Y'all may want to also try Superposition 1080p Extreme preset. I've gotten 10680, and I'm sure you'd come reasonably close since your GPU score in Timespy was only a few hundred below mine.)





truehighroller1 said:


> https://www.3dmark.com/3dm/36775107?
> 
> That's about it until it gets colder yes lol.


That all you got?

https://www.3dmark.com/spy/6182114


----------



## ThrashZone

JustinThyme said:


> Depends on when you bought the 1080Ti. Initially it was around $700, then it went up to the same level as the 2080Ti and is still up there. A new 1080Ti right now runs in the same range as the 2080Ti, thanks to miners cleaning out the supply. I got mine in the $700 range for the Strix cards, sequential serials even, from Microcenter. A month later you couldn't buy one from any manufacturer or supplier, as there were none to be had, and when they did pop back up the same card went to $1300.


Hi,
Indeed, I bought my 1080Ti FTW3 for nearly $800, so yeah, they got stupidly expensive from miners later on, which is why I can't count that as a point of reference; it was so messed up.
I still have a little in-store warranty left on my 1080Ti FTW3; if the Super drops while I still have time to take it back, maybe I will.



JustinThyme said:


> The only difference between FE and anything else is custom PCBs and binned chips, as well as binned memory and a guaranteed boost rate. FE is a crapshoot for performance. I've not seen anyone post up 1850 on an FE yet. I tried it on the 1080Ti and lost the lottery.
> 
> I'm happy with the performance of my pair of 2080Tis. I tried upping the power limit with a BIOS flash and it didn't gain me much, like 50MHz, so I went back to stock and live stable and comfortable at 2100MHz without even bumping up the core voltage.


Hi,
Yeah, I'll never buy another FE after my disappointing Titan Xp venture. That is probably why I'm sitting out this series, frankly. And that was at the mining-craze time; if I hadn't bought it, I probably would have done a 2080Ti, but still not from Nvidia.


----------



## AlKappaccino

Greetings,

When it says "300/360 W" for example on the Strix, this is for standard and maximum power target, I guess?

I'm planning on buying an RTX 2080Ti within the next several months when the price comes down a bit, so I'm researching which card is best to get.

I will not watercool it, so actual hardcore overclocking will not be possible. I'm assuming that getting a card that does at least 380W out of the box would be the best bet, since then I don't risk losing voltage control via BIOS flashing? At the moment I'm focusing on EVGA's FTW3 Ultra and MSI's Gaming X Trio. The Lightning Z also looks tempting, but I want to vertical-mount it and as far as I can tell a triple-slot card would not fit (or is anyone doing this and made it work?).


----------



## Globespy

AlKappaccino said:


> Greetings,
> 
> When it says "300/360 W" for example on the Strix, this is for standard and maximum power target, I guess?
> 
> I'm planning on buying an RTX 2080Ti within the next several months when the price comes down a bit, so I'm researching which card is best to get.
> 
> I will not watercool it, so actual hardcore overclocking will not be possible. I'm assuming that getting a card that does at least 380W out of the box would be the best bet, since then I don't risk losing voltage control via BIOS flashing? At the moment I'm focusing on EVGA's FTW3 Ultra and MSI's Gaming X Trio. The Lightning Z also looks tempting, but I want to vertical-mount it and as far as I can tell a triple-slot card would not fit (or is anyone doing this and made it work?).


I doubt the 2080Ti will ever come down to a price point that's even remotely a good deal in performance/$.
The only thing that could have justified the price was if it could do 120-144FPS with RTX on, but we all now know this generation of cards was simply a teaser.

The Turing-refresh 'Super' series of GPUs, likely to drop in the next month, will be (if rumors and leaks are correct) a much wiser investment, with the RTX 2080 'S' (no Ti version of the 'Super' variants has been announced yet) likely performing within 5-10% of the 2080Ti for half the price.
Remember, the 2080Ti in non-RTX games was barely 15-20% faster than the 1080Ti, yet cost more than double, and in some variants closer to triple.

Navi is going to do what AMD always does and fall short of the hype, and Nvidia is planning this 'Super' launch to put the nail in the coffin, so to speak.
AMD is going to do very well on the CPU side until Intel wakes up from its monopolist slumber; I think we will see some hard-hitting responses in 2020.

It's smart to wait and see what happens, but it's pretty certain that this first gen of Turing GPUs (including the 2080Ti) is not what you want to be spending your money on.
We need to see a return to 40% performance gains across GPU generations, and pricing getting back to sanity levels of $899 for the 'Ti' cards.
Despite Nvidia's crafty plan to control the supply of 2080Tis to create a sense of urgency, it's now clear they are sitting on a LOT of inventory; unless you can get a 2080Ti for very close to the price of one of the new 2080 'S' cards, I wouldn't bother.


----------



## AlKappaccino

Globespy said:


> I doubt the 2080Ti will ever come down to a price point that's even remotely a good deal in performance/$.
> The only thing that could have justified the price was if it could do 120-144FPS with RTX on, but we all now know this generation of cards was simply a teaser.
> 
> The Turing-refresh 'Super' series of GPUs, likely to drop in the next month, will be (if rumors and leaks are correct) a much wiser investment, with the RTX 2080 'S' (no Ti version of the 'Super' variants has been announced yet) likely performing within 5-10% of the 2080Ti for half the price.
> Remember, the 2080Ti in non-RTX games was barely 15-20% faster than the 1080Ti, yet cost more than double, and in some variants closer to triple.
> 
> Navi is going to do what AMD always does and fall short of the hype, and Nvidia is planning this 'Super' launch to put the nail in the coffin, so to speak.
> AMD is going to do very well on the CPU side until Intel wakes up from its monopolist slumber; I think we will see some hard-hitting responses in 2020.
> 
> It's smart to wait and see what happens, but it's pretty certain that this first gen of Turing GPUs (including the 2080Ti) is not what you want to be spending your money on.
> We need to see a return to 40% performance gains across GPU generations, and pricing getting back to sanity levels of $899 for the 'Ti' cards.
> Despite Nvidia's crafty plan to control the supply of 2080Tis to create a sense of urgency, it's now clear they are sitting on a LOT of inventory; unless you can get a 2080Ti for very close to the price of one of the new 2080 'S' cards, I wouldn't bother.


You're right, the 2080Ti is not a great value product. But still, let's see what those "Super" cards change, since there is also the rumor of a 2080Ti "Super". Hence I want to wait, since the cards have already dropped a few hundred bucks. Thing is, Cyberpunk 2077 and the new Vampire: Bloodlines 2 will release with raytracing, and while it's uncertain how well it will run or even improve the graphical fidelity, I'd like to settle for the max I can get before then. I'm also playing at 3440x1440 on high refresh rates, which is something to consider.


----------



## JustinThyme

The same is said on every release. 980Ti owners said that about the 1080Ti, 8800 Ultra owners about the 980Ti, etc. etc.
So long as mining is on, the prices will stay up.


----------



## J7SC

truehighroller1 said:


> Thanks. I've had this CPU at 5.25Ghz. This is my 24/7 clock I'm hitting these scores at. As far as my gpu bios goes. This is the best bios I've ever ran on this card hands down. The highest I achieved with other bios is 15999. Stock bios for this card is a joke.
> 
> I'm smashing my old records with the kingpin xoc bios and I'm not even pushing things. I'm tempted however to try the volt mod you speak of because it is limiting itself voltage wise still just not power.. Ohhh the power mmm... Lol.



Just to confirm: the KPE XOC BIOS raises the power limit by a mile and then some, but not the voltage compared to stock? I thought the KPE XOC would do 1.25V? Or are you talking about beyond 1.25V? Also, try to run a few 4K (even 8K) benches as well, as that isolates the GPU setup better from the CPU. Either way, it sounds like you've got a great setup happening... always nice to take an existing piece of hardware and get more out of it.


----------



## Cancretto

J7SC said:


> Just to confirm: the KPE XOC BIOS raises the power limit by a mile and then some, but not the voltage compared to stock? I thought the KPE XOC would do 1.25V? Or are you talking about beyond 1.25V? Also, try to run a few 4K (even 8K) benches as well, as that isolates the GPU setup better from the CPU. Either way, it sounds like you've got a great setup happening... always nice to take an existing piece of hardware and get more out of it.


The Kingpin XOC BIOS removes the power limitations and allows the card to go up to 1.125V; to go further with voltage you'll need the EVC tool that Elmor made.


----------



## J7SC

Cancretto said:


> The Kingpin XOC BIOS removes the power limitations and allows the card to go up to 1.125V; to go further with voltage you'll need the EVC tool that Elmor made.



Tx, apart from Elmor's tool, EVBot should also work, I guess?!

EDIT... to clarify, EVBot for an actual 2080Ti KPE (thinking of getting a regular non-Super water-blocked one, once the 'Super' release with reviews is done)


----------



## willverduzco

JustinThyme said:


> Thats a you got?
> 
> https://www.3dmark.com/spy/6182114


Not horrible, but oh man, that CPU is holding you back so much. 12k CPU score from a 14-core is shockingly low. I'd also expect much, much more GPU score from an SLI 2080TI setup, when the bench offers nearly perfect scaling in that subscore (95%+). (For reference my 8-core 9900K at just 5.4 GHz gets nearly 2000 more CPU score than yours, and your dual-gpu setup is only getting a 64% higher GPU score than my single card... despite this test having near-perfect scaling.)

Are you hitting thermal/power limits pretty hard, by chance? If you look at the 2x GPU Hall-of-Fame, 31k+ GPU score seems to be very attainable (e.g. this guy with 31187 GPU score, currently in 50th place at just 2070c/8000m).
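The scaling arithmetic behind the post above is simple to sketch: compare the observed dual-GPU uplift against the near-perfect scaling the benchmark can achieve. The scores below are hypothetical placeholders, not the actual results being compared:

```python
# SLI scaling check: observed uplift of a dual-GPU score over a single
# card, versus the ~95% scaling quoted in the post.
# Scores are hypothetical placeholders.

def sli_uplift_pct(dual_score: float, single_score: float) -> float:
    """Observed percent uplift of a dual-GPU score over a single card."""
    return (dual_score / single_score - 1) * 100

single = 17000   # hypothetical single-card GPU subscore
dual = 27880     # hypothetical dual-card GPU subscore

observed = sli_uplift_pct(dual, single)
print(f"observed uplift: {observed:.0f}% (vs ~95% achievable)")
# -> observed uplift: 64% (vs ~95% achievable)
```

A gap that large between observed and achievable scaling usually points at a power, thermal, or CPU bottleneck rather than the benchmark itself.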


----------



## MrTOOSHORT

9940X: disable HT for TimeSpy to get a higher CPU score:


----------



## maxjivi05

So I have the MSI Sea Hawk X "AIO". Which of these BIOSes do I flash to get a higher PL, because the one listed for it is apparently just the stock BIOS? I also don't get fan control. I'm assuming I can use "Download MSI RTX 2080 Ti Gaming X Trio Custom PCB (2x8-Pin, 1x6-Pin) 300W x 135% Power Target BIOS (406W) Unofficial (Source)", but I don't want to pull the trigger without confirmation!


----------



## willverduzco

MrTOOSHORT said:


> 9940X: disable HT for TimeSpy to get a higher CPU score:


Now _THAT's_ more like it for a 9940X! Fantastic GPU score! Strange that HT causes issues on this benchmark; all the more reason for JustinThyme (witty name) to re-run his benches with it disabled!


----------



## willverduzco

maxjivi05 said:


> So I have the MSI Sea Hawk X "AIO". Which of these BIOSes do I flash to get a higher PL, because the one listed for it is apparently just the stock BIOS? I also don't get fan control. I'm assuming I can use "Download MSI RTX 2080 Ti Gaming X Trio Custom PCB (2x8-Pin, 1x6-Pin) 300W x 135% Power Target BIOS (406W) Unofficial (Source)", but I don't want to pull the trigger without confirmation!


This is your card, correct? If so, it's a 2x8-pin card, and flashing a bios meant for a 3-connector card will result in a lower than expected effective power limit. You won't harm anything, and it's certainly worth at least trying, but I wouldn't expect much. Perhaps you may want to try the various ~360W BIOSes such as one of the Gigabyte 366W options, or the Galax 380W (although that one has slower per-clock performance, presumably due to memory timings, as do their private OC LABS BIOSes).
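One plausible mechanism behind the "lower than expected effective power limit" mentioned above is that these BIOSes budget power per input rail, so the share assigned to a connector the card doesn't physically have can never be drawn. A rough sketch under that assumption; the rail figures are made up for illustration and are not read from any real BIOS:

```python
# Illustrative only: a hypothetical 406W BIOS power budget split across
# input rails. On a 2x8-pin card the 6-pin rail is physically absent,
# so its share of the budget is unreachable.

rail_budget_w = {     # hypothetical per-rail limits, summing to 406W
    "pcie_slot": 66,
    "8pin_1": 120,
    "8pin_2": 120,
    "6pin": 100,      # rail present in the donor BIOS table...
}
present_rails = {"pcie_slot", "8pin_1", "8pin_2"}  # ...but not on this card

effective = sum(w for rail, w in rail_budget_w.items() if rail in present_rails)
print(f"nominal limit: {sum(rail_budget_w.values())} W, effective: {effective} W")
# -> nominal limit: 406 W, effective: 306 W
```

Under this assumption, flashing a 3-connector BIOS onto a 2-connector card is harmless but caps out well below the headline wattage, which matches the advice to try a 360-380W two-connector BIOS instead.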


----------



## maxjivi05

The listing says the Sea Hawk EK X is the same card, just without the AIO, so everything else should be the same? This specific BIOS wouldn't raise the PL?


----------



## truehighroller1

JustinThyme said:


> Thats a you got?
> 
> https://www.3dmark.com/spy/6182114





J7SC said:


> Just to confirm, KPE XOC Bios raises the Power Limit by a mile and then some, but not voltage compared to stock ? I thought KPE XOC would do 1.25v ? Or are you talking about beyond 1.25v...  ? Also, try to run a few 4k (even 8k) benches as well, as that isolates GPU setup better from CPU. Either way, it sounds like you got a great setup happening...always nice to take an existing piece of hardware and get more out of it.


Justin,

I'm running one card; try running one and coming close to us, then we'll talk. Still nice scores, though, by all means.

J7SC,

Yes, my card is still hard-locking my voltage at 1.095V, I believe, but the power limit is limitless at this point. GPU-Z reported 600+ watts last night when I looked at it. I'll try the voltage program homeboy mentioned sometime tonight and post back with the results for you.


----------



## maxjivi05

OK, just read that the EK X and X do not have the same PCB, so would it be bad to flash the EK X BIOS onto a 2080 Ti Sea Hawk X?


----------



## willverduzco

maxjivi05 said:


> OK, just read that the EK X and X do not have the same PCB, so would it be bad to flash the EK X BIOS onto a 2080 Ti Sea Hawk X?


Not "bad" as in "harmful," but it just may not have the desired outcome. It's certainly worth a shot, though... Just keep track of what clocks and voltages you can reach without PL throttling in XYZ benchmarks prior to the flash, and then again after... and also try the other BIOSes I mentioned (one of the many Gigabyte 366W ones, and one of the two Galax/KFA2 380W ones).


----------



## J7SC

MrTOOSHORT said:


> 9940X: disable HT for TimeSpy to get a higher CPU score:



...very nice :thumb: and very interesting! I'll have to try that (disabling HT) on the 16c TR.

On a more general note, below is a screenie from Port Royal when I first ran it (I haven't touched it since, which means it is now position #55 instead of #24 in the HoF...). That run was done in April, with 2x 2080 Ti Aorus XTR WB on the stock BIOS (375W).

The point is that above and below in that table you see plenty of Titan RTX cards; therefore, let's first see what the config and reviews are for the 2080 Ti 'Super'... and also *how tied down the BIOS is, the eventual availability of a custom XOC BIOS (if any), and custom PCBs.* That said, on paper the 2080 Ti Super can do very well if not too constrained. I mentioned in a post above that I'm thinking of an additional build with a single GPU; if the 2080 Ti Super is too 'tied down', I'd probably opt instead for the 2080 Ti KPE w/ waterblock... I still have an EVBot and all that from earlier-gen CL/KPE builds.

Speaking of waterblocks, the Port Royal setup above has heavy water-cooling, similar to the second graph below it (temps for single and dual GPUs), though that one was for another 3DM. I maintain that keeping these GPUs below 40C is still 'job 1'...


----------



## maxjivi05

Flashed KFA2.RTX2080Ti.11264.180910.rom and had to go back to my backup; the screen went small and said the device wasn't working properly. What is it I'm looking for exactly, because the 2x8 pins were the same on both boards?


----------



## sagascor14

Cancretto said:


> Hi man, glad to help. The only other standard BIOS with a higher PL than your FTW3 is this one: https://www.techpowerup.com/vgabios/204557/kfa2-rtx2080ti-11264-180910
> It is the Galax 380W one, but honestly I don't think it will change much from yours. I'd still suggest you try out the XOC BIOS and see how it behaves on your card. Also, don't worry: with the XOC BIOS the GPU won't pull 500W while browsing or in low-load applications.


OK mate, I flashed the XOC BIOS to my FTW3 BIOS #2. My TimeSpy results are:

BEFORE
PX1 70/1100 clocks and other sliders maxed. Aida64 reports a draw of up to 375W, and PerfCap is usually Power or Power + Reliability Voltage.
SCORE: 14,851 (Graphics 16,159, CPU 10,183).

AFTER
PX1 70/1100 clocks and other sliders maxed. Aida64 reports a max draw of up to 100W??? PerfCap is always Reliability Voltage + Max Operating Voltage.
SCORE: 15,536 (Graphics 16,639, CPU 11,294).

So this is a more impressive result. But why on Earth is it only showing a draw of <100W? I thought Aida64 would show it drawing 500W-ish?

My PSU is a Seasonic Prime 850 Titanium, btw.
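For anyone who wants to reproduce the before/after comparison, the percentage gains work out as follows (numbers copied from the post):

```python
# Percent gains from the TimeSpy before/after scores quoted above.

before = {"total": 14851, "graphics": 16159, "cpu": 10183}
after  = {"total": 15536, "graphics": 16639, "cpu": 11294}

for key in before:
    gain = (after[key] / before[key] - 1) * 100
    print(f"{key}: +{gain:.1f}%")
# -> total: +4.6%
# -> graphics: +3.0%
# -> cpu: +10.9%
```

The ~3% graphics gain is the flash doing its work; the CPU subscore moving ~11% on the same chip suggests run-to-run variance or different background load rather than anything the BIOS changed.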


----------



## Spiriva

maxjivi05 said:


> Flashed KFA2.RTX2080Ti.11264.180910.rom and had to go back to my backup; the screen went small and said the device wasn't working properly. What is it I'm looking for exactly, because the 2x8 pins were the same on both boards?


Run 'cmd' as admin and go to where you saved the BIOS and the flash tool (from post #1 in this thread; let's say C:\2080ti for this example),

and type: nvflash64 -6 KFA2.RTX2080Ti.11264.180910.rom

It will prompt you to press Y once or twice; afterwards you need to reboot your computer, and you might need to reinstall your Nvidia drivers.


----------



## truehighroller1

sagascor14 said:


> OK mate, I flashed the XOC BIOS to my FTW3 BIOS #2. My TimeSpy results are:
> 
> BEFORE
> PX1 70/1100 clocks and other sliders maxed. Aida64 reports a draw of up to 375W, and PerfCap is usually Power or Power + Reliability Voltage.
> SCORE: 14,851 (Graphics 16,159, CPU 10,183).
> 
> AFTER
> PX1 70/1100 clocks and other sliders maxed. Aida64 reports a max draw of up to 100W??? PerfCap is always Reliability Voltage + Max Operating Voltage.
> SCORE: 15,536 (Graphics 16,639, CPU 11,294).
> 
> So this is a more impressive result. But why on Earth is it only showing a draw of <100W? I thought Aida64 would show it drawing 500W-ish?
> 
> My PSU is a Seasonic Prime 850 Titanium, btw.


Use GPU-Z and I bet it tells you the right wattage you're drawing. You're hitting the same voltage wall as me, but you're now wide open power-wise, thus the better results.


----------



## J7SC

truehighroller1 said:


> Use GPU-Z and I bet it tells you the right wattage you're drawing. You're hitting the same voltage wall as me, but you're now wide open power-wise, thus the better results.


 
What max temps (+ ambient) are you hitting in 3DM and similar tests with the wide open XOC Bios ?


----------



## truehighroller1

J7SC said:


> What max temps (+ ambient) are you hitting in 3DM and similar tests with the wide open XOC Bios ?


28C idle and 50C tops in full-throttle testing. Ambient is about 70F in my office space. Just opened my windows; it's going to be about 50F tonight.


----------



## J7SC

truehighroller1 said:


> 28C idle and 50C tops in full-throttle testing. Ambient is about 70F in my office space. Just opened my windows; it's going to be about 50F tonight.


 
...good temps, what with the high power limit Bios.

I just got a portable AC (still unpacked) for the home office...plan is to have some detachable shrouds direct cold air to the rads of the main rig, with an extra hose for cooling my beer


----------



## truehighroller1

J7SC said:


> ...good temps, what with the high power limit Bios.
> 
> I just got a portable AC (still unpacked) for the home office...plan is to have some detachable shrouds direct cold air to the rads of the main rig, with an extra hose for cooling my beer


Lol not a bad idea man. One good thing about winter is being able to close my office door and insulate the door so the rest of the house stays warm and just freeze my room out and crank the cpu up to the max. Grab some blankies and some coffee and have at it all night lol.


----------



## maxjivi05

truehighroller1 said:


> Use GPU-Z and I bet it tells you the right wattage you're drawing. You're hitting the same voltage wall as me, but you're now wide open power-wise, thus the better results.


It showed 0 cards detected? I have an MSI Sea Hawk X; I was trying to use another card's BIOS for higher wattage and something went out of whack, so I had to restore the original BIOS.


----------



## J7SC

truehighroller1 said:


> Lol not a bad idea man. One good thing about winter is being able to close my office door and insulate the door so the rest of the house stays warm and just freeze my room out and crank the cpu up to the max. Grab some blankies and some coffee and have at it all night lol.



...credit has to go to various others for the general AC idea, though the built-in custom beer-cooler mod came to me in a flash of inspiration; per below, I even bought some extra hoses for it.

My home office is usually pretty cool (20C or below), but between 6pm and 8pm (prime benching hours!) in the summer it can pick up an extra 4 to 5C due to direct sun exposure during those hours... not good for 2080Tis, nor brewskies


----------



## truehighroller1

maxjivi05 said:


> truehighroller1 said:
> 
> 
> 
> Use GPU-Z and I bet it tells you the right wattage you're drawing. You're hitting the same voltage wall as me, but you're now wide open power-wise, thus the better results.
> 
> 
> 
> It showed 0 cards detected? I have an MSI Sea Hawk X; I was trying to use another card's BIOS for higher wattage and something went out of whack, so I had to restore the original BIOS.

I'm assuming you rebooted after the BIOS flash. That's odd. Yeah, my Asus Dual OC card took to the Kingpin XOC BIOS without a hitch. Maybe try again after flushing the driver with Display Driver Uninstaller.


----------



## zhrooms

Thought I'd share a little info I gathered while testing the Galax XOC BIOS on my latest card (Reference PCB & Samsung).

Superposition 1080p Extreme Preset
+1300 Memory (26/32/33 = Temperature (C) Fan In / Radiator / Fan Out)


Code:


0.900 V (+00) ~ ~ 290 W (+00) == 2010 MHz (+00) - 41c (26/32/33)
0.950 V (+50) ~ ~ 330 W (+40) == 2070 MHz (+60) - 43c (26/33/34)
1.000 V (+50) ~ ~ 370 W (+40) == 2100 MHz (+30) - 43c (26/33/34)
1.050 V (+50) ~ ~ 410 W (+40) == 2130 MHz (+30) - 46c (26/33/34)
1.093 V (+43) ~ ~ 450 W (+40) == 2160 MHz (+30) - 48c (27/35/37)
1.125 V (+32) ~ ~ 460 W (+10) == 2190 MHz (+30) - 48c (27/35/37)

Quick & dirty graph, showing that peak efficiency is around 0.950 V, and that 1.125 V is more efficient than 1.093 V.
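If anyone wants to redo the math themselves, here's a minimal sketch using the values from the table above ("marginal" here is MHz gained per extra watt for each voltage step, which is the sense in which 0.950 V is the sweet spot of the early steps and the 1.125 V step beats the 1.093 V one):

```python
# Voltage / board power / stable clock points copied from the run above
points = [
    (0.900, 290, 2010),
    (0.950, 330, 2070),
    (1.000, 370, 2100),
    (1.050, 410, 2130),
    (1.093, 450, 2160),
    (1.125, 460, 2190),
]

for (v0, w0, f0), (v1, w1, f1) in zip(points, points[1:]):
    marginal = (f1 - f0) / (w1 - w0)  # MHz gained per extra watt at this step
    print(f"{v1:.3f} V: {f1 / w1:.2f} MHz/W overall, {marginal:.2f} MHz/W marginal")
```

Note that overall MHz/W always falls as voltage rises; it's the per-step (marginal) number that makes the 0.950 V and 1.125 V points stand out.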










Got an amazing (efficient) result at 15°C ambient, 0.962V @ 2100MHz (Windows 10 v1903 & GeForce 430.86 DCH) on the stock Palit 310W BIOS that is also available for all non-A cards; power consumption never even reached 310 W, peak was just above 300.










So if you're stuck with a non-A card, there are ways to reach a good overclock using the 310W BIOS: mount a $90 AIO, create an overclock using the curve editor, and away you go!


----------



## Angrycrab

So I tried the Asus XOC on my XC Hybrid and, as everyone mentioned, it is stuck at 1080p, but I was able to create a custom 4K resolution and it switched to it successfully. My only issue is that Windows does not detect my TV as HDR capable. Looking at Nvidia Control Panel, it was labeling the connection to my TV as DVI, not HDMI, which is what I'm using. Why can't we have nice things

Is there any way to fix this issue? I just want to game with no power limit, I'm not trying to break world records.


----------



## truehighroller1

Well, I think I've reached my high spot for the current weather/temperatures in my area..

https://www.3dmark.com/3dm/36802134? 

I'm sitting at the 11th spot for similar setups, and two or four of them have SLI, so I'm content.

I cranked my CPU up to 5GHz for this run.


----------



## willverduzco

truehighroller1 said:


> Well, I think I've reached my high spot for this weather in my area, temperatures..
> 
> https://www.3dmark.com/3dm/36802134?
> 
> I'm sitting at the number 11th spot for similar setups and two or four of them have sli so I'm content.
> 
> I cranked up my cpu to 5ghz for this run.


And much more importantly, 82nd in the overall hall of fame, when looking at single-GPU setups. Good job!


----------



## Spiriva

zhrooms said:


> Thought I'd share a little info I gathered while testing the Galax XOC BIOS on my latest card (Reference PCB & Samsung).


Is that the with the bios "dancop" posted ?


----------



## sagascor14

truehighroller1 said:


> Use gpu-z and I bet it tells you the right wattage you're drawing. You're hitting the same voltage wall as me but wide open power wise now thus the better results.


Hmm, strangely GPU-Z reports 2400 W! Yes, 2.4 kW!


----------



## maxjivi05

OK, re-flashed the same stuff and now it works crazy well. Guess I needed to flash both of my cards in SLI to test it properly. lol


----------



## acmilangr

https://youtu.be/3XtlcouO-RE

With Galax 380w bios


----------



## maxjivi05

Benchmark with new BIOs so far. better score by a decent margin!


----------



## truehighroller1

willverduzco said:


> And much more importantly, 82nd in the overall hall of fame, when looking at single-GPU setups. Good job!


Thank you, thank you.. Not bad for a mid-price card honestly. I could do better, but it's not cold enough right now. I cleaned my radiators today and ordered some quick disconnects for them as well, so I can clean them better with water; they need it. I have an external radiator setup I made.


----------



## Padinn

Could someone tell me more about the Gaming X Trio 406W BIOS? Seems like it's an unofficial mod; where did it originate? If I switched to it, what risks do I have? I have the 2080 Ti Gaming X Trio.


----------



## Padinn

Found the answer in the OP, don't know how I missed that...


----------



## ntuason

To those GPU experts: 

Do you think it's bad or dumb to run my video card 24/7 with these settings? It's stable, but will it degrade the card in any way?

MSI GeForce RTX 2080 TI GAMING X TRIO


----------



## SimoneTek

ntuason said:


> To those GPU experts:
> 
> 
> 
> Do you think it bad or dumb to run my video card 24/7 with these settings? Its stable but will it degrade the card in anyway?
> 
> 
> 
> MSI GeForce RTX 2080 TI GAMING X TRIO


It's probably my fault, but I'm not able to read anything [emoji28].. Sorry

Sent from my LYA-L29 using Tapatalk


----------



## Tragic

ntuason said:


> To those GPU experts:
> 
> Do you think it bad or dumb to run my video card 24/7 with these settings? Its stable but will it degrade the card in anyway?
> 
> MSI GeForce RTX 2080 TI GAMING X TRIO





I've run that exact card and the 406W BIOS for over a month now. First off, you can push the card SO much harder: try +130 core and +1300 to 1500 on the GDDR6 (Samsung, not Micron), and push that voltage slider to 100% (placebo slider on this BIOS).


As for degradation, just drop it 20% below max stable for gaming and other operations, and max it at 100% for benches; should be PERFECT. This card is VERY capable. Just keep the fan curve at a fixed 75% or higher and take the case side panel off during benches (fat card) for better thermals. Report back brother. Thermals are key with these cards, very sensitive to temps with GPU Boost... use Precision X1 to control the fans; once you set the curve to 100% for benches / 75% for gaming, you can close Precision X1 and the curve will stay.


----------



## NewType88

Is the consensus that kingpin xoc bios should be limited to benching ? Anyone use a bios like that for everyday gaming ? I'm not really into benching, but I wouldn't mind getting some more juice out of my card - I got the cooling for it.


----------



## Cancretto

NewType88 said:


> Is the consensus that kingpin xoc bios should be limited to benching ? Anyone use a bios like that for everyday gaming ? I'm not really into benching, but I wouldn't mind getting some more juice out of my card - I got the cooling for it.


Hi, I've actually used it for some months now, and I'm still using it daily without problems; obviously my card is on liquid, so temperature is not an issue. 2080 Tis should handle this BIOS fine, even reference boards are fine with it.


----------



## Judah R

ntuason said:


> To those GPU experts:
> 
> Do you think it bad or dumb to run my video card 24/7 with these settings? Its stable but will it degrade the card in anyway?
> 
> MSI GeForce RTX 2080 TI GAMING X TRIO


I think you are more than fine running those GPU settings 24/7... although your CPU overclock is UNSTABLE... WHEA errors... you need moar Vcore... is that a 9900K?


500+ watts in Witcher 3 on a reference board... XOC BIOS

My HWiNFO64 info image below


----------



## ntuason

Judah R said:


> I think you are more the fine 24/7 those GPU settings.... although your CPU overclock is UNSTABLE... Whea errors... you need moar Vcore....is that a 9900k?
> 
> 
> 500+ watts in witcher 3 on reference board... XOC bios
> 
> My HWiFO64 Info image below
> 
> https://youtu.be/OjOoUVa2-NA




Yes, 9900K. I don't know what those errors are. I've tried giving my CPU more voltage, upwards of 1.35V, with LLC on the higher end (Turbo on the Aorus Master), and it still persists.
It passes the Aida64 stability test (1 hour), Cinebench (20 runs), and ROG RealBench. So irritating, these errors.


----------



## sagascor14

Cancretto said:


> hi, I've actually used it for some months now, and still using it without problems daily, obviously my card is on liquid so temperature is not an issue. 2080 Tis should handle fine this bios, even reference boards are fine with it


Could you do me a favour please? GPU-Z is showing me that the XOC bios is drawing 2400w in Timespy and Superposition which is obviously wrong. What does your GPU-Z say?


----------



## jim2point0

Yay. I'm in the double AIO master race now!










Super stressful experience overall. But I learned a lot about how resilient PCBs are, because I feel like I manhandled this thing pretty badly while installing it.

But it all works, and now I can control both my X52 and X62 krakens via software. Super nice 










The Asus Strix required some seriously loud fans to keep the GPU at 80°C for me, and settled at ~1800MHz. A cheaper EVGA with an X62 attached to it runs super quiet and stays at 49-50°C with an overclock. Steady 2085-2100MHz.

Maaaaaybe I could squeeze a little more out of it by flashing a different bios... but I'm not sure it's worth it. I'm not really chasing massive overclock scores, and the added voltage might push the fans into "I can hear them" territory to keep it cool. I think I'm happy where it's at 

.... but somehow I managed to break Asus Aura... the software that controls my RAM and Motherboard LEDs. It crashes immediately when I open it, and I've tried all manner of things to fix it. But I give up. Rainbow RAM is my life now.


----------



## sagascor14

jim2point0 said:


> .... but somehow I managed to break Asus Aura... the software that controls my RAM and Motherboard LEDs. It crashes immediately when I open it, and I've tried all manner of things to fix it. But I give up. Rainbow RAM is my life now.


Aura is garbage software. It's a well-known issue that it loses settings, but Asus won't fix it. Try reinstalling and also disable Fast Boot. I have to reinstall every few days.


----------



## dante`afk

willverduzco said:


> That all you got?
> https://www.3dmark.com/spy/6785502


that's all you got? 

https://www.3dmark.com/spy/7344366

wish my card would do higher clocks, that's the max it can do even with kingpin xoc bios and 35c max


----------



## Spiriva

NewType88 said:


> Is the consensus that kingpin xoc bios should be limited to benching ? Anyone use a bios like that for everyday gaming ? I'm not really into benching, but I wouldn't mind getting some more juice out of my card - I got the cooling for it.


Ive been running it daily for a few weeks. EK block.


----------



## jim2point0

sagascor14 said:


> Try reinstalling and also disable fast boot. I have to reinstall every few days.


These are both among the many things I've tried in order to fix it. Fast boot disabled. Uninstalled and reinstalled so many times. Made sure I grabbed the specific version for my exact motherboard. Updated motherboard bios. All manner of suggestions...

The only thing I can think of that would have broken it was maybe installing the Asus AI Suite for Fan Expert (control chassis fans with software, and it actually works). Since then, Aura stopped working. Even tried uninstalling AI Suite and reinstalling Aura. But no luck.

G.Skill has software to control the LEDs. Might try that. Or just leave it on rainbow. **** it. Kinda looks neat.


----------



## sagascor14

jim2point0 said:


> Updated motherboard bios.


You sure the BIOS update didn't deactivate it in the BIOS?


----------



## NewType88

Cancretto said:


> hi, I've actually used it for some months now, and still using it without problems daily, obviously my card is on liquid so temperature is not an issue. 2080 Tis should handle fine this bios, even reference boards are fine with it





Spiriva said:


> Ive been running it daily for a few weeks. EK block.


Ok, thanks for the feedback. I just did it and its working. Temps so far are not really different, so that's good. Now just gotta up the clocks.


----------



## willverduzco

dante`afk said:


> that's all you got?
> 
> https://www.3dmark.com/spy/7344366
> 
> wish my card would do higher clocks, that's the max it can do even with kingpin xoc bios and 35c max


Great score... but mine's 110 higher than yours...


----------



## jim2point0

sagascor14 said:


> You sure the BIOS update didn't deactivate it in Bios?


Aura stopped working before that. The BIOS update was an attempt to fix it.

Only solution I can think of is a fresh Windows install. Which I'm not in the mood to do right now.


----------



## JustinThyme

truehighroller1 said:


> Justin,
> 
> I'm running one card, try running one and coming close to us, Then we'll talk. Still nice scores though by all means.
> 
> J7SC,
> 
> Yes, my card is still hard locking my voltage at 1.095v I believe but the power limit is limitless at this point. GPU-Z reported 600watts plus last night when I looked at it. I 'll try the program for voltage homeboy mentioned sometime tonight and post back with the results for you.


I can run one and get about the same as you guys. Run two and join the azz-kicking club... then we'll talk... LOL
It's all good, we're in it for the same reasons.


----------



## JustinThyme

willverduzco said:


> Not horrible, but oh man, that CPU is holding you back so much. 12k CPU score from a 14-core is shockingly low. I'd also expect much, much more GPU score from an SLI 2080TI setup, when the bench offers nearly perfect scaling in that subscore (95%+). (For reference my 8-core 9900K at just 5.4 GHz gets nearly 2000 more CPU score than yours, and your dual-gpu setup is only getting a 64% higher GPU score than my single card... despite this test having near-perfect scaling.)
> 
> Are you hitting thermal/power limits pretty hard, by chance? If you look at the 2x GPU Hall-of-Fame, 31k+ GPU score seems to be very attainable (e.g. this guy with 31187 GPU score, currently in 50th place at just 2070c/8000m).


That's the CPU at stock. I could get a better CPU score if I wanted, but I don't run graphics tests to get a CPU bench; I use Luxmark, RealBench and Cinebench for that. 31K in the hall of fame is on LN2. I hold the top spot for my specs of a 9940 and 2x 2080 Ti cards. Don't be hatin'...

I think I missed your link...


----------



## NewType88

When I ran Time Spy Extreme, my max wattage in HWiNFO was 650 W. Is that reading right? FTW3 with the Kingpin BIOS.


----------



## 86Jarrod

dante`afk said:


> that's all you got?
> 
> https://www.3dmark.com/spy/7344366
> 
> wish my card would do higher clocks, that's the max it can do even with kingpin xoc bios and 35c max


Now I just need to figure out why my cpu score is so low. https://www.3dmark.com/spy/7337919


----------



## Cancretto

sagascor14 said:


> Could you do me a favour please? GPU-Z is showing me that the XOC bios is drawing 2400w in Timespy and Superposition which is obviously wrong. What does your GPU-Z say?


Hi man, yeah, you can't rely on software readings; I also get messed-up power consumption readings in software.


----------



## bp7178

86Jarrod said:


> Now I just need to figure out why my cpu score is so low. https://www.3dmark.com/spy/7337919


IIRC, Time Spy uses an AVX workload, so if you're limiting your CPU clocks with a high AVX offset, your scores will suffer. Could also be a stability thing.
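For the curious, here's a toy illustration of why an AVX offset tanks an AVX-heavy bench (the ratios below are hypothetical, not anyone's actual settings):

```python
# Hypothetical 9900K-style overclock: the AVX offset is subtracted from the
# core ratio whenever the CPU executes AVX code, so an AVX-heavy benchmark
# like Time Spy's CPU test runs at the lower clock.
bclk_mhz = 100
core_ratio = 50   # 5.0 GHz all-core
avx_offset = 3    # drops the ratio to 47 under AVX load

normal_mhz = bclk_mhz * core_ratio
avx_mhz = bclk_mhz * (core_ratio - avx_offset)
print(f"non-AVX: {normal_mhz} MHz, AVX workload: {avx_mhz} MHz")
```

So a rig that benches "5.0 GHz" everywhere else may only be running 4.7 GHz during the CPU test, which shows up directly in the subscore.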


----------



## dante`afk

willverduzco said:


> Great score... but mine's 110 higher than yours...


Talking purely GPU score here, since this is a GPU benchmark, not CPU.

Your CPU is running 200MHz more than mine.





86Jarrod said:


> Now I just need to figure out why my cpu score is so low. https://www.3dmark.com/spy/7337919


throttling? sick gpu clock


----------



## JustinThyme

Guys, I wouldn't worry about CPU score in 3DMark benches... any of them. These are meant for testing graphics. An overclocked 6700K can beat a 9980XE simply because that's just how the bench works: it prefers a single core with a high clock over an HCC die with more cores at lower clocks. Worry about the GPU score and use one of the many other benches that will test the CPU. Asus RealBench is a decent one that hits everything and will actually test your CPU for processing power and not just clock cycles.

Here's my RealBench; top it and I'll concede.


----------



## MrTOOSHORT

Look, a 9980XE pushing a 2080 Ti on water to over an 18,000 GPU score in TS:

*https://www.3dmark.com/spy/7384223*


----------



## doggymad

Hi Everyone!

I bit the bullet today and bought an MSI Sea Hawk X. I managed to get it at a decent price. I had been waiting for the Navi cards but they don't seem to be able to come anywhere near it and the RVII was only a few £££ less than I paid.

I'm going to pair it with a Ryzen 9 3900X, X570 MB and 32GB RAM. Hoping it runs like a beast!

So - I guess I'm in the club!!


----------



## J7SC

...tongue-in-cheek vid for 2080 Ti owners with a strong stomach...


----------



## JustinThyme

MrTOOSHORT said:


> Look a 9980xe pushing a 2080ti on water over 18,000 gpu score in TS:
> 
> *https://www.3dmark.com/spy/7384223*


Obtainable when you run it on a chiller or LN2 with a high clock and half the cores disabled or hyperthreading turned off.
Now put it at default and see what you get....


----------



## MrTOOSHORT

No chiller or LN2. Just a water loop. KPEs can do 2205MHz no problem on the AIO or HC block.


----------



## im late

Another custom ambient water loop - 2205 owner here:

https://www.3dmark.com/spy/7477215

And here:

https://www.3dmark.com/spy/7477329

And here:

https://www.3dmark.com/pr/104396


----------



## dante`afk

pretty low score for that clock.


----------



## NexusVibee

Shunt modded my card with 8 mOhm shunts, and I get about 570W on the GPU during Time Spy 4K GT2. It's ******* insane. This is with the 380W BIOS, giving me a max theoretical total of 380W x 1.625 = 617.5W. I am forced to use the curve editor, or else my voltage doesn't hit max (1.093V). The curve makes my scores really inefficient, which sucks. https://imgur.com/a/btafdry
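For reference, the 1.625x figure falls out of the parallel-resistance math. A minimal sketch, assuming the stock shunts are 5 mOhm (typical for reference 2080 Ti boards; treat that value as an assumption):

```python
# Stacking an 8 mOhm resistor on top of a stock shunt puts the two in
# parallel; the card then under-reads current (and power) by the same ratio.
stock = 5.0    # assumed stock shunt value, mOhm
stacked = 8.0  # resistor soldered on top, mOhm

parallel = stock * stacked / (stock + stacked)  # effective shunt, ~3.08 mOhm
multiplier = stock / parallel                   # real power vs reported power

print(f"multiplier: {multiplier:.3f}")
print(f"380 W BIOS limit -> ~{380 * multiplier:.1f} W real ceiling")
```

Different stock shunt values change the multiplier, so check your own board before trusting the 1.625x number.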


----------



## MrTOOSHORT

@NexusVibee

Looks like you had a hell of a time getting those resistors on reasonably well. I sure did. I had 8 mOhm on top of 5 mOhm, but the EK backplate wouldn't go on right, so I removed the 8 and 5 mOhm and soldered just a 3 mOhm in their place. Was a chore when you're not a soldering expert.


----------



## NexusVibee

MrTOOSHORT said:


> @NexusVibee
> 
> Looks like you had a hell of time getting those resistors on reasonably well. I sure did. I had 8mo on top of 5mo, but the EK backplate wouldn't go on right, so removed the 8 and 5mo and soldered just a 3mo in their place. Was a chore when you're not a soldering expert.


Had a friend do it that had done it to his card already. Funnily enough my EK backplate fits on fine.


----------



## sdubbin

Have a Sea Hawk EK X and am trying to update the BIOS to the 406W Gaming X Trio BIOS, but I'm getting this error message:

Please press 'y' to confirm override of PCI Subsystem ID's:
Overriding PCI subsystem ID mismatch
Current - Version:90.02.30.00.65 ID:10DE:1E07:1462:3717
GPU Board (Normal Board)
Replace with - Version:90.02.0B.40.56 ID:10DE:1E07:1462:3715
GPU Board (Normal Board)

Update display adapter firmware?
Press 'y' to confirm (any other key to abort):
EEPROM ID (EF,6014) : WBond W25Q80EW 1.65-1.95V 8192Kx1S, page

XUSB FW component of the input GPU firmware image is imcompatible, please use
a newer version of GPU firmware image for this product.
BIOS Cert 2.0 Verification Error, Update aborted.
Nothing changed!

Any ideas? And is that the best BIOS?


----------



## J7SC

latest 2080 Ti from the 'unobtanium department' <> 'new' Galax 10th anniversary 2080 Ti HoF water + air cooling


----------



## Jpmboy

unobtanium is right.


----------



## Xdrqgol

Hi guys,

I have the following card EVGA GeForce RTX 2080 Ti XC ULTRA:

Device ID: 10DE 1E07
Memory Type : GDDR6 (Micron)
Power Target in Precision X1: 130%

and I was wondering what BIOS you would recommend flashing to get a bit more out of it... if any?


Really appreciate the help!


----------



## NexusVibee

Xdrqgol said:


> Hi guys,
> 
> I have the following card EVGA GeForce RTX 2080 Ti XC ULTRA:
> 
> Device ID: 10DE 1E07
> Memory Type : GDDR6 (Micron)
> Power Target in Precision X1: 130%
> 
> and I was wondering , what BIOS would you recommend to flash to get a bit more out of it..if any...?
> 
> 
> Really appreciate the help!


The 380W Galax/KFA2. Worked on my card, and I have the exact same one. 380W should be fine to run on air; just bump up your fan curve.


----------



## sblantipodi

New cards after six months. This market is sick


----------



## Xdrqgol

NexusVibee said:


> 380w Galax/KFA2. Worked on my card and i have the exact same. 380w should be fine to run on air. Just up your fan speed curve


Thank you!

What were your results after flashing the new BIOS? Was it a considerable improvement?
Thanks!


----------



## Padinn

Try LLC High and 1.36 or 1.37V. Worked for me on the same setup.


----------



## whwidjaja

Just did my first run on RTX 2080 Ti Sea Hawk EK X:

https://www.3dmark.com/spy/7519888


----------



## Xdrqgol

whwidjaja said:


> Just did my first run on RTX 2080 Ti Sea Hawk EK X:
> 
> https://www.3dmark.com/spy/7519888



+175 core, is that the maximum you can go?


Decent score, but on my rig with my EVGA XC Ultra 2080 Ti and i9 9900K @ 5.1GHz, I get a 16056 total score.


----------



## whwidjaja

Xdrqgol said:


> 175+ core, is the maximum you can go with?
> 
> 
> Decent score, but on my rig with my EVGA XC Ultra 2080 ti and i9 9900K @5.1Ghz - I get 16056 total score.


Thanks, I haven't tried to find the maximum yet. This is my first run, and based on what I've seen so far it looks like 150-175 is the max core offset most people can go for. I think this card could go quite a bit higher, but only if I used the 406W BIOS, which I really don't want to do since it will void the warranty.


----------



## Xdrqgol

whwidjaja said:


> thanks, I haven't tried to find the maximum one. this is my first run and based on what i've seen so far looks like 150 - 175 is the max core offset that most people can go for. I think this card can go quite high only if I use to 406W bios which i really don't want to do since it will void the warranty.


Makes sense, good numbers, have fun with it!


----------



## JustinThyme

whwidjaja said:


> thanks, I haven't tried to find the maximum one. this is my first run and based on what i've seen so far looks like 150 - 175 is the max core offset that most people can go for. I think this card can go quite high only if I use to 406W bios which i really don't want to do since it will void the warranty.


For what it's worth, I've come to find there's a max ceiling unless you go with something like LN2. Water cooled, you get to about 2100, maybe pull 2200 out of your arse, no matter what BIOS you run. I tried an XOC BIOS and gained only like 50MHz, so I'm back at stock with 2150 rock solid and happy with it.


----------



## NexusVibee

Xdrqgol said:


> Thank you!
> 
> What was your results after flashing the new Bios. Was it a considerable improvement?
> Thanks!


I've got my card shunt modded and can get around 2200 with the curve. But yes, 380W is all you need, and you can get a decent amount more OC out of it.


----------



## NexusVibee

whwidjaja said:


> Just did my first run on RTX 2080 Ti Sea Hawk EK X:
> 
> https://www.3dmark.com/spy/7519888


Here is my shunt modded card. Runs 2190 at 1.093V the whole test. CPU is the same as yours as well. Idk why I get a lower CPU score tho.

https://www.3dmark.com/3dm/37034106


----------



## whwidjaja

NexusVibee said:


> Here is my shunt modded card. Runs 2190 1.093v whole test. CPU is the same as yours aswell. Idk why i get lower cpu score tho
> 
> https://www.3dmark.com/3dm/37034106


Nice score!! I don't think I have the guts to shunt mod my card. It will void the warranty, won't it? As for the CPU score, it may be down to BIOS settings optimization, including memory timing tuning, since I used Raja's B-die single rank preset.


----------



## RAGEdemon

Trying to flash MSI Duke OC.

Getting the same problem as the fella above: https://www.overclock.net/forum/69-...tx-2080-ti-owner-s-club-828.html#post28012994

====================
"XUSB FW component of the input GPU firmware image is imcompatible [sic], please use
a newer version of GPU firmware image for this product." 

BIOS Cert 2.0 Verification Error, Update aborted.

Nothing changed!

ERROR: Invalid firmware image detected.
=====================

Looks like an issue with the USB Type-C connector firmware?

This is with both the 400W "unofficial" MSI bios, as well as the official Galax/KFA2 380W bios.

Insight would be much appreciated - it's a travesty that this card has been locked to a 111%/290w power limit.

Edit:
Added GPU-Z screenshot showing current bios info etc.


----------



## Sheyster

RAGEdemon said:


> Trying to flash MSI Duke OC.
> 
> Getting the same problem as the fella above: https://www.overclock.net/forum/69-...tx-2080-ti-owner-s-club-828.html#post28012994
> 
> ====================
> "XUSB FW component of the input GPU firmware image is imcompatible [sic], please use
> a newer version of GPU firmware image for this product."
> 
> BIOS Cert 2.0 Verification Error, Update aborted.
> 
> Nothing changed!
> 
> ERROR: Invalid firmware image detected.
> =====================
> 
> Looks like an issue with the USB Type-C connector firmware?
> 
> This is with both the 400W "unofficial" MSI bios, as well as the official Galax/KFA2 380W bios.
> 
> Insight would be much appreciated - it's a travesty that this card has been locked to a 111%/290w power limit.
> 
> Edit:
> Added GPU-Z screenshot showing current bios info etc.


I had no trouble flashing my Duke OC to the 380w BIOS. Can you post how you are trying to flash it? (Specific nvflash commands you're using.)


----------



## RAGEdemon

Thanks for the response.

The commands and the feedback from the prompts are exactly as per the flashing instructions in the first post. Every response is also exactly in line with those shown in the OP... until this last section.

=========
c:\nvflash>nvflash64 --protectoff
NVIDIA Firmware Update Utility (Version 5.560.0)
Copyright (C) 1993-2019, NVIDIA Corporation. All rights reserved.

Adapter: GeForce RTX 2080 Ti (10DE,1E07,1462,3710) H:--:NRM S:00,B:01,D:00,F:00

EEPROM ID (EF,6014) : WBond W25Q80EW 1.65-1.95V 8192Kx1S, page

Setting EEPROM software protect setting...

Remove EEPROM write protect complete.


c:\nvflash>nvflash64 -6 msi400w.rom
NVIDIA Firmware Update Utility (Version 5.560.0)
Copyright (C) 1993-2019, NVIDIA Corporation. All rights reserved.


Checking for matches between display adapter(s) and image(s)...

Adapter: GeForce RTX 2080 Ti (10DE,1E07,1462,3710) H:--:NRM S:00,B:01,D:00,F:00


EEPROM ID (EF,6014) : WBond W25Q80EW 1.65-1.95V 8192Kx1S, page

WARNING: Firmware image PCI Subsystem ID (1462.3715)
does not match adapter PCI Subsystem ID (1462.3710).

Please press 'y' to confirm override of PCI Subsystem ID's:
Overriding PCI subsystem ID mismatch
Current - Version:90.02.17.00.2E ID:10DE:1E07:1462:3710
GPU Board (Normal Board)
Replace with - Version:90.02.0B.40.56 ID:10DE:1E07:1462:3715
GPU Board (Normal Board)

Update display adapter firmware?
Press 'y' to confirm (any other key to abort):
EEPROM ID (EF,6014) : WBond W25Q80EW 1.65-1.95V 8192Kx1S, page

XUSB FW component of the input GPU firmware image is imcompatible, please use
a newer version of GPU firmware image for this product.

BIOS Cert 2.0 Verification Error, Update aborted.

Nothing changed!

ERROR: Invalid firmware image detected.

c:\nvflash>
=============

From my (uneducated) observation, the problem seems to be that the firmware on my Duke is a later version than the one in the 400W / 380W firmwares. If you look at your original firmware version, I would wager that it is not 90.02.17.00.2E, or at least not the same USB Type-C firmware component.

Presumably, either a newer firmware image is needed, or a way to skip the USB Type-C section of the firmware.

My uneducated 2 cents... Welcoming all responses in return 

-------------
On a related note: Was there a good reason you chose the 380W Galax over the 400W MSI? Do the 3 fans work as they should at the right speed? Do they still all stop at idle? Appreciate the info


----------



## Sheyster

RAGEdemon said:


> Thanks for the response.
> 
> c:\nvflash>nvflash64 -6 msi400w.rom


Try:



Code:


nvflash64 -4 -5 -6 msi400w.rom

Since the MSI Trio is a 3 (power) connector card and the Duke OC is a 2 connector reference PCB card, you should use the 380w BIOS for best results.

One of the fans still turns at idle, the other 2 do not, with the 380w BIOS.


----------



## RAGEdemon

Thanks, but unfortunately:

===========
c:\nvflash>nvflash64 -4 -5 -6 msi400w.rom
NVIDIA Firmware Update Utility (Version 5.560.0)
Copyright (C) 1993-2019, NVIDIA Corporation. All rights reserved.


Command format not recognized

Press 'Enter' to list of the available commands, or 'Q' to quit.

============

The nvflash64 help does not show any meaning for option -4 or -5. What do they do? I think these are outdated flags.

With respect, I would like to try something well documented and tested - my pockets are unfortunately very shallow when it comes to £1,200 graphics cards


----------



## NexusVibee

whwidjaja said:


> nice score!! I don't think I have the gut to shunt mod my card. It will void the warranty, won't it? for the CPU Score, at may be down to BIOS settings optimization including memory timing tuning since I used Raja's B-Die single rank preset.


Yes, it 100% voids the warranty. I don't mind, as I've had this card for about 6 months already and it hasn't died yet; I don't see it dying any time soon. I do have 16GB sticks that don't tighten well. But sometimes if you run the test again, your CPU score goes back to normal. Idk why that happens.


----------



## Sheyster

RAGEdemon said:


> Thanks, but Unfortunately:
> 
> ===========
> c:\nvflash>nvflash64 -4 -5 -6 msi400w.rom
> NVIDIA Firmware Update Utility (Version 5.560.0)
> Copyright (C) 1993-2019, NVIDIA Corporation. All rights reserved.
> 
> 
> Command format not recognized
> 
> Press 'Enter' to list of the available commands, or 'Q' to quit.
> 
> ============
> 
> nvflash64 description does not show any meaning for option -4 nor -5. What do they do? I think these are outdated flags.
> 
> With respect, I would like to try something well documented and tested - my pockets are unfortunately very shallow when it comes to £1,200 graphics cards


It's possible -4 and/or -5 are obsolete now. They were various overrides to force the flash.


----------



## sdubbin

I tried everything with the guys on Discord and we couldn't get it to flash. Let me know if you have any luck.


----------



## jura11

sdubbin said:


> I tried everything with the guys on discord we couldn’t get it to flash, let me know if you have any luck


Hi there 

Can you please try this:
c:\nvflash>nvflash64 --protectoff -6 msi400w.rom

If I remember correctly, I needed to use this with the second BIOS on my Asus RTX 2080 Ti Strix, because I couldn't flash the BIOS otherwise

Hope this helps 

Thanks, Jura


----------



## RAGEdemon

jura11 said:


> Hi there
> 
> Can you please try this there
> c:\nvflash>nvflash64 --protectoff -6 msi400w.rom
> 
> If I remember correctly, this I needed to use with second BIOS on my Asus RTX 2080Ti Strix because I couldn't flash there BIOS
> 
> Hope this helps
> 
> Thanks, Jura


That doesn't work, mate. "--protectoff" generally needs to be run as a separate command preceding the flash command regardless.


----------



## Roen

Anyone with a Zotac 2080 Ti notice that the fans ramp up to 3000 RPM above 78°C regardless of the fan curve setting? Is this intended behavior?


----------



## acmilangr

Can I try the Asus Matrix BIOS on my EVGA XC Gaming? Is it safe?


----------



## jura11

acmilangr said:


> Can i try asus matrix bios on my evga XC gaming? Is it safe?


Hi there 

I wouldn't use the Matrix BIOS, or the Strix or XOC BIOS, on a reference PCB, because I think you will lose one DP port and the HDMI port with that BIOS.

The Matrix BIOS is a nice BIOS for the Strix RTX 2080 Ti, with good clocks and a power limit, which I still think is way too low at 360W.

Hope this helps 

Thanks, Jura


----------



## jura11

RAGEdemon said:


> That doesn't work mate. "--protectoff" generally needs to be on a separate line preceding the flash command regardless.


Hi there 

Sorry, forgot that; yes, agreed, you need to use --protectoff first, then the normal command like -6, etc.

Thanks, Jura


----------



## acmilangr

jura11 said:


> acmilangr said:
> 
> 
> 
> Can i try asus matrix bios on my evga XC gaming? Is it safe?
> 
> 
> 
> Hi there
> 
> I wouldn't use Matrix BIOS or Strix or XOC BIOS on reference PCB because you will loose one DP port and HDMI port I think with that BIOS
> 
> Matrix BIOS is nice BIOS for Strix RTX 2080 Ti with good clocks and power limit which I still think is way too low at 360W
> 
> Hope this helps
> 
> Thanks, Jura

It is strange that even though I get higher clocks with the Galax BIOS (2235 MHz max), I get lower results in 3DMark. That is why I want to try another BIOS, but I want to be sure the card will not get bricked.


----------



## jura11

acmilangr said:


> It is strange that even if with Galax I have higher clocks (2235 max), I have lower results on 3dmark. This is why I want to try another bios but I want to be sure it will not bricked.



Hi there 

The lower scores with the Galax 380W BIOS are probably down to the VRAM timings, which are likely looser in the Galax 380W BIOS than in your stock BIOS.

As for which BIOS will work with your PCB: I would first check the I/O on the back of your GPU and compare it against the card whose BIOS you want to use, then find the right BIOS from there.

I'm pretty sure the Matrix BIOS will not brick your card; you will probably just lose one DP port and the HDMI port, but if you are OK with that, then no problem.

Scores are slightly higher with the Matrix BIOS compared with the Asus XOC BIOS, though I'm hitting the power limit in Superposition and 3DMark.

What is your best OC on your stock BIOS?


Hope this helps 

Thanks, Jura


----------



## Shawnb99

Well, I think my second 2080 Ti just died on me. I made a few changes to my loop; upon booting I get error 43 in Device Manager, and when I try to reinstall the drivers I get artifacts and they don't install.

I've never even OC'd a card, yet now a second one seems to have died.


----------



## acmilangr

jura11 said:


> acmilangr said:
> 
> 
> 
> It is strange that even if with Galax I have higher clocks (2235 max), I have lower results on 3dmark. This is why I want to try another bios but I want to be sure it will not bricked.
> 
> 
> 
> 
> Hi there
> 
> Why you are getting lower scores with Galax 380W BIOS its probably down to VRAM timing which are probably on looser side with Galax 380W BIOS than with stock
> 
> Which BIOS do work with yours PCB, I would as first check I/O on back of yours GPU and compare this against the GPU which you would like to use it, then finding right BIOS
> 
> Matrix BIOS I'm pretty sure it will not brick yours card, just you will probably loose one DP port and HDMI port but if you are OK with that then no problem
> 
> Scores are slightly higher with Matrix BIOS if I'm comparing with Asus XOC BIOS, just I'm hitting power limit in Superposition or 3DMARK
> 
> What is yours best OC on yours stock BIOS?
> 
> 
> Hope this helps
> 
> Thanks, Jura

Maximum clock is about 2190 MHz with the stock BIOS; I hit the power limit too often.

With the Galax BIOS I get a great score in Superposition 4K (14415), but not in 3DMark.


----------



## acmilangr

Also a question:
if the card gets bricked, is there any way to unbrick it using the iGPU, or is it dead?
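For what it's worth, as long as a badly flashed card still enumerates on the PCIe bus, it can usually be recovered blind via the iGPU. A rough sketch of the procedure (the ROM filename is a placeholder for a previously saved stock dump):

```shell
# 1. In the motherboard BIOS, set the iGPU as the primary display, then boot from it.
# 2. Check that the dead card is still detected on the bus:
nvflash64 --list

# 3. Reflash the previously saved stock ROM, overriding the mismatch prompt:
nvflash64 --protectoff
nvflash64 -6 stock_backup.rom
```

If the card no longer enumerates at all, a hardware SPI programmer is the remaining option.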


----------



## Shawnb99

Considering both my dead cards had Micron memory, I'm more open to the conspiracy theory that it was the issue. Hopefully they send me a replacement with Samsung memory and I can have a card that works for more than 3 months.


----------



## JustinThyme

Did you have them under water? I see a Hydro Copper listed in your build. I think it's more than a conspiracy.
To put it simply, without argument or possible rebuttal:
Every 2080 Ti that I've seen or heard of failing had Micron memory.
Of those, a lot were flat-out recalled in the beginning to change the heatsinks for more active cooling on the memory, and they were all Micron.
In the past, with anything you bought, it was a bummer to look down and see Micron memory.
You don't see posts about any coveted Micron memory.
From what I've seen and heard, Micron memory is more prone to breaking down under the heat produced by the GPU. Normally this can be mitigated with water cooling, but when you look at some of these heatsinks, the VRAM is left to fend for itself, with either zero cooling, no contact with anything, or just part of the frame (not even all of it) acting as passive cooling.
I've seen a few get decent results with Micron, but all of them unverified, as in just a picture of GPU-Z with the VRAM clocks; I can damn near double the speed of my Samsung VRAM cards and get a GPU-Z screenshot of it too. The problem is: load them down and watch for a while... as in a locked-up display.
Have you considered a different manufacturer? Previously EVGA was among the go-tos for GPUs, but not so much with the 2080 Ti. I think FTW now means... For The Wait on the RMA. There are lots of others to choose from. I'm running Strix O11G OC cards that all have not only Samsung memory but binned Samsung memory. ASUS did well with these, other than the sideways choke making it a pain for waterblock makers; that same board from the O11G is also used on the Matrix. An ASUS employee confirmed that the sideways choke was so they could get the AIO mounted for the Matrix. I'm still no fan of that either: it still dumps the hot air into your case. Not a problem for an open test bed or something like a Core P5, but a big problem for a closed case.


----------



## Shawnb99

The first one wasn't under water; it was just the Ultra OC model from EVGA. Only the latest one was the Hydro Copper. Both were Micron, so that may well be the reason.
I just hope they send me one with Samsung as a replacement.


----------



## J7SC

Shawnb99 said:


> First one wasn’t under water was just the Ultra OC Model from EVGA, just the latest one was the Hydropcopper. Both were micron so that may just be the reason.
> I just hope they send me one with Samsung as a replacement.


Hi Shawnb99 - I recall your initial 2080 Ti issue, then that EK barb thing, now another 2080 Ti. If it makes you feel any better, in 20-odd years in the computer field with work and private builds, I had one similar 'build-from-hell': an early X99 where I had three expensive Asus mobos die in a row (1 Deluxe, 2x Rampage), usually while in the BIOS during setup. Now, as to 'Micron > round 17', just have a quick look at the attachment below, referring to code 43 on a Hydro Copper, artifacts et al, though with Samsung memory. Rather than getting into the usual argument (more than one Samsung card has also died the same death, as per earlier posts in this thread), I'd like to raise some hopefully constructive points:

1.) A good article by TechSpot on 2080 Ti failures is here: https://www.techspot.com/news/77445-nvidia-addresses-failing-geforce-rtx-2080-ti-cards.html A telling excerpt which always made me wonder is: _"the issue seems restricted to the RTX 2080 Ti, which makes it unique to some feature that is not shared with the 2080. Since the RTX 2080 Ti and non-Ti cards share the same memory controller and chips, it's highly unlikely this was caused by a hardware defect in those components (I have a feeling these are power issues)"_. Also, there is the part about virtually all early 2080 Tis being equipped with Micron, so no matter what the true issue(s) in the early production runs were, Micron will look suspect.

2.) Whatever the root cause is, I find this whole 2080 Ti problem area underscores how weak consumer protection still is with electronics such as GPUs. If this were the car industry, there would be recalls, consumer protection agencies and lawyers all over it, IMO.

3.) My personal preference is Samsung memory, though my two 2080 TIs with Micron (custom-PCBs, factory water-cooled) have performed flawlessly since I got them last November. A lot of the top WRs @ HWBot are also 2080 Tis with Micron. The point is not to defend Micron one way or another, but to point out that the RTX roll-out seemed rushed (perhaps due to the mining market changes), and I personally wonder about actual quality control in that regard.

4.) I wonder if you have the serial numbers handy for your original 2080 Ti card and the subsequent RMA card which also just died... I realize they are slightly different models, but sometimes an RMA is just going out of the frying pan into the fire. Our office had eight top-of-the-line EVGA cards a few years back, four of which developed a (non-fatal) flaw re PCIe link speed, even when switched to different mobos. The RMA replacement arrived in a blank white box, the GPU shroud clearly had major usage marks on it (apart from a 68% ASIC...), and it had other issues later on.

5.) The final point relates to 2080 Tis in general. I find these cards very long and very heavy, and I really do wonder about PCB flex and the related issues that can arise with custom (i.e. water block) cooling. Apart from the artifacts issue, there were also several cards with the 'stuck at 1350' problem (beyond disconnected fan headers). The point is that these cards are so big and heavy that flexing **might** contribute to some of the failures, just my :2cents: . Since I mounted my two cards (with bracing, and reinforced PCIe slots), I haven't touched them - the size and weight make me a trifle nervous. Hopefully there will soon be 7nm (or smaller) cards w/ HBM at the top consumer end which are much smaller and lighter. Anyway, sorry for the long(ish) post - I have been watching your odyssey for a while now and hope that you'll get it all finally sorted out!


----------



## jura11

Hi @acmilangr

It's really hard to understand that message, but I will try to answer it.

2190 MHz is a very nice OC, no question there; my Asus RTX 2080 Ti Strix will do 2160 MHz, which drops to 2145 MHz when temperatures are in the 38-42°C range.

With the stock Asus Strix BIOS I would hit the power limit too often. The XOC BIOS is probably best for my Asus RTX 2080 Ti Strix, or the Matrix BIOS with its 360W power limit, which is not bad; with the XOC BIOS I usually set my power limit to 42%, which should be 420W.

Again, the XOC BIOS does work on the reference PCB, but just as with the Matrix BIOS you will lose one DP port and the HDMI port; if you are OK with that, then I would try it.

You shouldn't brick the GPU by flashing a BIOS. I flashed my Zotac RTX 2080 Ti AMP a few times with several other BIOSes, like the Galax 380W, the Asus XOC, and the Matrix, which did work, but I lost one DP port and the HDMI port, which is no good for me as I use a two-monitor setup.

Hope this helps 

Thanks, Jura
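As a general precaution when experimenting with cross-vendor BIOSes like the ones discussed above, it is worth saving the stock BIOS first so there is always a known-good image to return to. A minimal sketch of that sequence (the ROM filenames are placeholders):

```shell
# Back up the currently installed BIOS before touching anything.
nvflash64 --save stock_backup.rom

# Lift write protection, then flash the cross-vendor image;
# -6 overrides the PCI subsystem ID mismatch prompt.
nvflash64 --protectoff
nvflash64 -6 galax380w.rom
```

Keeping that backup on a second drive or USB stick makes a blind recovery flash possible later if the new image misbehaves.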


----------



## Shawnb99

J7SC said:


> Hi Shawnb99 - I recall your inital 2080 Ti issue, then that EK barb thing, now another 2080 Ti . If it makes you feel any better, in the 20-odd years in the computer field with work and private builds, I did have one similar 'build-from-hell'...an early X99 whereby I had three expensive Asus mobos die in a row (1 Deluxe, 2x Rampage), usually while in Bios during setup. Now, as to the 'Micron > round 17', just have a quick look at the attachment below, referring to code 43 on a Hydro Cooper, artifacts et al, though with Samsung memory. Rather than getting into the usual argument (more than one Samsung card also has died the same death as per earlier posts in this thread), I like to raise some hopefully constructive points:
> 
> 1.) A good article by TechSpot on 2080 Ti failures is here: https://www.techspot.com/news/77445-nvidia-addresses-failing-geforce-rtx-2080-ti-cards.html A telling excerpt which always made me wonder is that_ "the issue seems restricted to the RTX 2080 Ti, which makes it unique to some feature that is not shared with the 2080. Since the RTX 2080 Ti and non-Ti cards share the same memory controller and chips, it's highly unlikely this was caused by a hardware defect in those components (I have a feeling these are power issues)"_. Also, there is the part about virtually all early 2080 Tis being equipped with Micron, so no matter what the true issue(s)in the early production runs, Micron will look suspect.
> 
> 2.) Whatever the root cause is, I find this whole 2080 Ti problem area underscores how weak consumer protection still is with electronics such as GPUs etc. If this would be the car industry, there would be re-calls, consumer protection agencies and lawyers all over it, IMO.
> 
> 3.) My personal preference is Samsung memory, though my two 2080 TIs with Micron (custom-PCBs, factory water-cooled) have performed flawlessly since I got them last November. A lot of the top WRs @ HWBot are also 2080 Tis with Micron. The point is not to defend Micron one way or another, but to point out that the RTX roll-out seemed rushed (perhaps due to the mining market changes), and I personally wonder about actual quality control in that regard.
> 
> 4.) I wonder if you have the serial numbers handy for your original 2080 Ti card and the subsequent RMA card which also just died...I realize they are slightly different models, but sometimes, RMA is just going from the fire into the frying pan. Our office had eight top of the line EVGA cards a few years back, four of which developed a (non-fatal) flaw re PCIe link speed, even when switched to different mobos. The RMA which arrived came in a blank white box and the GPU shroud clearly had major usage marks on it (apart from an Asic 68%...) and had other issues later on.
> 
> 5.) The final point relates to 2080 Tis in general. I find these cards very long and very heavy, and I really do wonder about PCB flex, and also related issues that can arise with custom (ie water block) cooling. Apart from the artifacts issue, there were also several with the 'stuck at 1350'problem (beyond disconnected fan headers). The point is that these cards are so big and heavy that flexing **might** contribute to some of the failures, just my :2cents: . Since i mounted my two cards (with bracing, and reinforced PCIe slots), I haven't touched them - the size and weight makes me a trifle nervous. Hopefully soon, there will be 7nm (or less cards w/ HBM) at the top consumer end which are much smaller and lighter. Anyway, sorry for the long(ish) post - I have been watching your odyssey for a while now and hope that you'll get it all finally sorted out !




Yep, it really has been the build from hell. I think I've had 2 months of use out of the 7 so far. If it's not one part breaking, it's something else.


----------



## acmilangr

jura11 said:


> Hi @acmilangr
> 
> Its really hard to understand that message but I will try to answer it
> 
> 2190MHz its very nice OC without the question there, my Asus RTX 2080Ti Strix will do 2160MHz which will drops to 2145MHz when temperatures are in 38-42°C
> 
> With stock Asus Strix BIOS I would too often hit power limit, but XOC BIOS its probably best for my Asus RTX 2080Ti Strix or Matrix BIOS which have 360W power limit which is not bad, with XOC I usually set my power limit to 42% which should be 420W
> 
> Again XOC BIOS do work with reference PCB just as with Matrix BIOS you will loose one DP port and HDMI port, if you are OK with that then I would try it
> 
> You shouldn't brick GPU with flashing BIOS, I flashed my Zotac RTX 2080Ti AMP few times with several other BIOS like Galax 380W or Asus XOC BIOS or Matrix which did work but as I lost one DP port and HDMI port which is not good for me as I use 2 monitor setup
> 
> Hope this helps
> 
> Thanks, Jura


Thanks for the answer.
Which XOC BIOS are you talking about? Do you have a link?


----------



## jim2point0

jura11 said:


> Hi @acmilangr
> You shouldn't brick GPU with flashing BIOS, I flashed my Zotac RTX 2080Ti AMP few times with several other BIOS like Galax 380W or Asus XOC BIOS or Matrix which did work but as I lost one DP port and HDMI port which is not good for me as I use 2 monitor setup


That's what everyone told me. And yet, somehow I did manage to brick mine. 

I had an EVGA XC Gaming, which uses the reference PCB if I recall correctly. But I was told that I could use the Galax 380W bios just fine (from the OP). 

But for some reason... something went wrong when I attempted to flash it. I already tried another BIOS which went smoothly. I flashed back to my stock bios, then decided to try the GALAX 380W. But... that one failed. And even though it says "nothing changed" something was broken. My 2nd monitor didn't work anymore, and when I restarted my computer, NEITHER monitor worked. Windows refused to work with the card. So here's what I got from NVFlash when I first tried to flash it:










And this is what I saw in the device manager.










I tried plugging into my onboard video HDMI in order to flash my stock bios again... but this is what I got after every single bios flash attempt after the failure:










I had TiN on the EVGA forums giving me some suggestions, but nothing worked. The only recommendation was a hardware flasher, but at this point I preferred to spend a little more money to ship the card to someone that knows how to do that. Will update here when it gets to him and I hear back.

If it works, great. If not, I learned a valuable lesson. I will never flash another bios again. Because I will always have flashbacks to what happened here. Which sucks because 99.9% of the time, it seems to go just fine. I've done it before on other cards and even on this card. But this one instance will just scar me for life.


----------



## acmilangr

jim2point0 said:


> jura11 said:
> 
> 
> 
> Hi @acmilangr
> You shouldn't brick GPU with flashing BIOS, I flashed my Zotac RTX 2080Ti AMP few times with several other BIOS like Galax 380W or Asus XOC BIOS or Matrix which did work but as I lost one DP port and HDMI port which is not good for me as I use 2 monitor setup
> 
> 
> 
> That's what everyone told me. And yet, somehow I did manage to brick mine.
> 
> I had an EVGA XC Gaming, which uses the reference PCB if I recall correctly. But I was told that I could use the Galax 380W bios just fine (from the OP).
> 
> But for some reason... something went wrong when I attempted to flash it. I already tried another BIOS which went smoothly. I flashed back to my stock bios, then decided to try the GALAX 380W. But... that one failed. And even though it says "nothing changed" something was broken. My 2nd monitor didn't work anymore, and when I restarted my computer, NEITHER monitor worked. Windows refused to work with the card. So here's what I got from NVFlash when I first tried to flash it:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> And this is what I saw in the device manager.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I tried plugging into my onboard video HDMI in order to flash my stock bios again... but this is what I got after every single bios flash attempt after the failure:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I had TiN on the EVGA forums giving me some suggestions, but nothing worked. The only recommendation was a hardware flasher, but at this point I preferred to spend a little more money to ship the card to someone that knows how to do that. Will update here when it gets to him and I hear back.
> 
> If it works, great. If not, I learned a valuable lesson. I will never flash another bios again. Because I will always have flashbacks to what happened here. Which sucks because 99.9% of the time, it seems to go just fine. I've done it before on other cards and even on this card. But this one instance will just scar me for life.

Wow, too bad. I'm sorry about that; I hope you find a solution.

Just so you know, I have flashed the Galax 380W BIOS several times and it is my main BIOS at this time. I've never had any issue, and I have the same card as you.


----------



## x-speed69

Sorry for the slightly off-topic question, but what's going on with @acmilangr's messages and quotes?


----------



## acmilangr

x-speed69 said:


> Sorry about little offtopic but what's going on with @acmilangr messages and quotes?


They look correct on my phone.


----------



## bigjdubb

It's a mobile issue that's been around for a few weeks now.


----------



## jim2point0

That's so random. It's only happening for the letter S...


----------



## bigjdubb

I dunno why it is, just that it is. I know how to use a keyboard and I know how to use a forum, but I don't know how to use a keyboard to make a forum.


----------



## jura11

jim2point0 said:


> That's what everyone told me. And yet, somehow I did manage to brick mine.
> 
> I had an EVGA XC Gaming, which uses the reference PCB if I recall correctly. But I was told that I could use the Galax 380W bios just fine (from the OP).
> 
> But for some reason... something went wrong when I attempted to flash it. I already tried another BIOS which went smoothly. I flashed back to my stock bios, then decided to try the GALAX 380W. But... that one failed. And even though it says "nothing changed" something was broken. My 2nd monitor didn't work anymore, and when I restarted my computer, NEITHER monitor worked. Windows refused to work with the card. So here's what I got from NVFlash when I first tried to flash it:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> And this is what I saw in the device manager.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I tried plugging into my onboard video HDMI in order to flash my stock bios again... but this is what I got after every single bios flash attempt after the failure:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I had TiN on the EVGA forums giving me some suggestions, but nothing worked. The only recommendation was a hardware flasher, but at this point I preferred to spend a little more money to ship the card to someone that knows how to do that. Will update here when it gets to him and I hear back.
> 
> If it works, great. If not, I learned a valuable lesson. I will never flash another bios again. Because I will always have flashbacks to what happened here. Which sucks because 99.9% of the time, it seems to go just fine. I've done it before on other cards and even on this card. But this one instance will just scar me for life.


Hi there 

Sorry to hear that about your GPU.

Many people here have flashed their reference-PCB RTX 2080 Ti with the Galax 380W BIOS without issue. I tried a few BIOSes on my Zotac RTX 2080 Ti AMP too, and similarly flashed my Asus RTX 2080 Ti Strix with the Galax 380W BIOS, which didn't work out as I lost one DP port and the HDMI port; since then I have been running the XOC BIOS, with the Matrix BIOS as my second BIOS.

That "PCI block corrupted" error I have only seen twice, when I flashed a BIOS that was already corrupted (someone dumped it with GPU-Z and something seems to have gone wrong between the dump and the upload). The only thing that helped in that case was flashing back to the stock BIOS. I saw that error a few years back, with a GTX 680 I think, or a 780, I'm not sure now; it has been a long time.

The "PMU command complete with error, Error code = 0x0044" I have not seen before.

Is your GPU for sure an A chip, or is it a non-A chip?

I would try flashing back to your stock BIOS.

I've flashed my RTX 2080 Ti several times and never had issues.

Hope this helps 

Thanks, Jura


----------



## jim2point0

jura11 said:


> I would try flash back with yours stock BIOS





> I tried plugging into my onboard video HDMI in order to flash my stock bios again... but this is what I got after every single bios flash attempt after the failure:


Yeah... I tried  

It doesn't matter what BIOS I try to flash now; same error. In any case, I didn't post this to get additional troubleshooting help, as the card is currently being shipped to someone who can hopefully fix it. I just wanted to post it because... apparently random stuff that SHOULDN'T happen can happen when flashing, and it can essentially brick a card. So... yeah. I just hope lightning doesn't strike twice, I guess.


----------



## acmilangr

My card (EVGA Gaming XC with a Kraken) gets a really good clock. I am using the Galax 380W BIOS, but I need a little more power, something like 450-500W.


----------



## elderan

Gigabyte AORUS GeForce RTX 2080 Ti Xtreme WATERFORCE 
https://www.amazon.com/Gigabyte-WATERFORCE-Pre-Installed-Gv-N208TAORUSX-WB-11GC/dp/B07JVJHWMQ/

or

EVGA GeForce RTX 2080 Ti FTW3 Ultra Hydro Copper Gaming
https://www.amazon.com/EVGA-GeForce-Copper-11G-P4-2489-KR-Technology/dp/B07PWHZT73/

I'm doing an SLI config. Not totally against getting another 2080 Ti and adding my own waterblock if there is some major reason to do so.


----------



## OrionBG

Hey guys, quick question:
If a person can get his/her hands on an almost-new EVGA RTX 2080 Ti XC Gaming GPU, with warranty, for about 850 Euro, would he/she be stupid, very stupid, or have serious mental issues not to buy it?
Asking for a friend... 

Now seriously, is this a good price especially having in mind the current and soon to come new offerings from both team green and team red?


----------



## Shawnb99

OrionBG said:


> Hey guys, quick question:
> 
> If a person can get his/hers hands on an almost new EVGA RTX2080TI XC Gaming GPU with warranty, for about 850 Euro, will He/She be stupid, very stupid, or with serious mental issues not to buy it?
> 
> Asking for a friend...
> 
> 
> 
> Now seriously, is this a good price especially having in mind the current and soon to come new offerings from both team green and team red?




If you do grab it make sure you get some kind of written invoice stating you bought the card or you can’t RMA it if anything else goes wrong.


----------



## OrionBG

Shawnb99 said:


> If you do grab it make sure you get some kind of written invoice stating you bought the card or you can’t RMA it if anything else goes wrong.


They will provide the original invoice supposedly.


----------



## Shawnb99

OrionBG said:


> They will provide the original invoice supposedly.




You still need something in writing showing it's been sold to you, I believe.

From the Warranty page

Secondhand owners should provide a proof of purchase showing the sale between the current owner (i.e. YOU) and previous owner (i.e. the person selling the card). This may be as simple as a screenshot of a PayPal invoice, Venmo invoice, Amazon Payments invoice, Ebay invoice, etc. It is understood that sometimes sales are not done online, so another means of showing the sale, even if it's made for the purpose of this proof of purchase, is acceptable. 
Secondhand owners should not provide the original owner's proof of purchase from an authorized reseller, or the proof of purchase between previous owners of the card.

All info can be found here 

https://forums.evga.com/m/tm.aspx?m=2969709&p=3#/m/tm.aspx?m=2848860&fp=1


----------



## OrionBG

Shawnb99 said:


> Still need something written that it’s been sold to you I believe.
> 
> From the Warranty page
> 
> Secondhand owners should provide a proof of purchase showing the sale between the current owner (i.e. YOU) and previous owner (i.e. the person selling the card). This may be as simple as a screenshot of a PayPal invoice, Venmo invoice, Amazon Payments invoice, Ebay invoice, etc. It is understood that sometimes sales are not done online, so another means of showing the sale, even if it's made for the purpose of this proof of purchase, is acceptable.
> Secondhand owners should not provide the original owner's proof of purchase from an authorized reseller, or the proof of purchase between previous owners of the card.
> 
> All info can be found here
> 
> https://forums.evga.com/m/tm.aspx?m=2969709&p=3#/m/tm.aspx?m=2848860&fp=1


Hmm... EVGA have made it a lot more complicated than before... I currently have an EVGA 1080 Ti FE that I also purchased secondhand, about a year ago, but I guess that was before this change went into effect, as I even purchased an extended warranty for it from EVGA.
Anyway, Thanks for the info!


----------



## NBrock

How many Founders cards out there with Kingpin bios and full cover block? Debating if it is worth switching from the Galax Bios.


----------



## Spiriva

NBrock said:


> How many Founders cards out there with Kingpin bios and full cover block? Debating if it is worth switching from the Galax Bios.


I've got a 2080 Ti FE with an EK full-cover block running the Kingpin BIOS; it's been going for ~2 months now.


----------



## dansi

Are all cards on sale now A chips?
Why does the non-A indicator differ from the TechPowerUp database? Which is more correct?


----------



## Gustavoh

Spiriva said:


> I got a 2080ti fe/ek full cover block with the kingpin bios, been running it now for ~2months.


Compared to the Galax 380W BIOS, what more does the Kingpin BIOS offer exactly? No power limit and a voltage limit up to 1.125V, is that correct?


----------



## Spiriva

Gustavoh said:


> Compared to the Galax 380W bios, what more does the kingpin bios offer exactly? No power limit and voltage limit up to 1.125mV, is that correct?


Yes, and about 30-45 MHz more. In my case the Kingpin BIOS also seems to "hold" the card at max speed even above 40°C. My card runs at ~43°C in the summer, in the north of Sweden.


----------



## ntuason

For all the knowledgeable users out there: do you know what these indicators (beside the red dots) imply? It seems that when I'm running intensive GPU tasks, one or all of them trigger "YES". Am I being bottlenecked because of this?

Thanks!


----------



## Tragic

ntuason said:


> For all the knowledgeable users out there, do you know what these indications (beside red dots) imply? It seems when I’m running intensive GPU tasks one or all triggers ‘YES’. Am I getting bottlenecked because of this?
> 
> Thanks!



For the GPU in question, you are hitting the power and voltage limits imposed by the BIOS. The hardware limit for voltage is 1.093V, so you still have a little potential left without a shunt mod.
My best advice is to use NVFlash to flash the MSI BIOS that gives you a 406W power limit (135%).
My Gaming X Trio gets +130 MHz core and +1550 MHz GDDR6.
i7 8700K 5ghz
Firestrike extreme = 18122
Firestrike Ultra = 9639
Time spy = 15187
Time spy extreme = 7088
Superposition 4K = 14211
Superposition 8K = 6120
Highest power consumption during testing was 405.8W; highest temp 64°C, ambient 20°C, fan curve 100%.
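One way to sanity-check whether a flashed power limit actually took effect is to query the driver directly; nvidia-smi ships with the NVIDIA driver, so no extra tools are needed:

```shell
# Print the power readings and limits the driver currently enforces.
nvidia-smi -q -d POWER
# Compare the reported power limit fields (e.g. "Max Power Limit")
# against the expected value, such as the 406W limit discussed above.
```

If the reported maximum still matches the old BIOS, the flash did not take and the card is throttling against the original limit.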


----------



## ntuason

Tragic said:


> For the GPU in question, you are hitting the power and voltage limits imposed by the bios. hardware limit for voltage is 1.093v so you still have a lil bit of potential without shunt.
> My best advice is to use NVflash and use the MSI bios that gives you 406W power limit. (135%)
> My gaming X trio gets +130mhz core and +1550mhz gddr6
> i7 8700K 5ghz
> Firestrike extreme = 18122
> Firestrike Ultra = 9639
> Time spy = 15187
> Time spy extreme = 7088
> Superposition 4K = 14211
> Superposition 8K = 6120
> Highest power consumption during testing was 405.8W highest temps 64C ambient 20C fan curve 100%


This is so strange. I have the 406W (135%) power limit BIOS flashed on my 2080 Ti X Trio, but it won't surpass 115%, so it's power throttling? And my GPU power consumption never goes above 350W. Am I doing something wrong?


----------



## SsXxX

Hello, I need some help please. I need a replacement fan for my 2080 Ti Aorus Xtreme; my story in detail is in the thread below:

https://www.overclock.net/forum/69-...ginal-cooler-waterblock-please-assist-me.html

Can somebody please assist me with this? I would much appreciate it.


----------



## dante`afk

Anyone else having issues after the Win10 1903 update? One of my displays keeps switching inputs back and forth randomly.

It got a bit better going from HDMI to DP, but still.

Wondering if my card is dying.


----------



## NBrock

dante`afk said:


> anyone else having issuess after win1903 update, one of my displays keep switching inputs back and forth randomly.
> 
> It got a bit better going from hdmi to dp, bust still.
> 
> Wondering if my card is dying.


I am pretty sure I did a clean install of the drivers after the update. Pretty much every "big" Windows update has caused some issue or another, and I have needed to do a clean driver install; issues ranging from overclocking no longer working well, to some games not working properly, to display driver crashes.

A clean install of your drivers might not be a bad starting point... but I haven't come across an issue like the one you are describing.


----------



## Pepillo

dante`afk said:


> Anyone else having issues after the Win 1903 update? One of my displays keeps switching inputs back and forth randomly.
> 
> It got a bit better going from HDMI to DP, but still.
> 
> Wondering if my card is dying.


I have two monitors, each on DP, and the one I use as an auxiliary loses its signal. At first it happened with both of them: they went black and the computer stopped responding.

After testing many things, it was partially fixed with the 435.27 drivers. With those, only the auxiliary went black when a game entered 3D. I changed the DP port and now it happens randomly and briefly: the screen turns off and back on. It started with 1903, and I've also wondered whether the graphics card is dying, but it looks like a driver thing.


----------



## HeliXpc

Question, everyone: I have a Zotac 2080 Ti non-A version (the twin fan), and the power limit is very low. Does anyone know if I can flash it to a BIOS with a higher power limit? Any help is appreciated. I have nvflash ready to go; I tried flashing an A-variant BIOS and got the GPU mismatch error, so I now know I can't flash an A BIOS onto a non-A 2080 Ti.


----------



## axiumone

You're out of luck on BIOS flashing: non-A chips cannot be flashed with A BIOSes. Your only option is a shunt mod.


----------



## HeliXpc

WOOOHOOOO! So I flashed my non-A 2080 Ti with this BIOS, https://www.techpowerup.com/vgabios/208761/208761, using nvflash 5.527.0 Modified: https://onedrive.live.com/?authkey=...!122&parId=D7731B0FB754D70F!114&action=locate (password is OCN, caps).


----------



## wasdz0r

Guys, I'm considering buying an RTX 2080 Ti, and I have a question: do these cards still come with defective GPUs that cause BSODs, artifacts, black screens, etc.? I read a couple of owner reviews reporting that they are on their 2nd or 3rd RMA and the card failed AGAIN, as recently as July 2019.


----------



## J7SC

wasdz0r said:


> Guys, I'm considering buying an RTX 2080 Ti, and I have a question: do these cards still come with defective GPUs that cause BSODs, artifacts, black screens, etc.? I read a couple of owner reviews reporting that they are on their 2nd or 3rd RMA and the card failed AGAIN, as recently as July 2019.



...lots of discussion in this thread if you look through it, including over the last two weeks or so. Generally speaking, I would go for a good-quality brand and confirm by serial number (or ask the vendor!) that it is from a recent production run. Also consider water cooling if you can. I've had my 2x 2080 Ti (w-c) since late last year, no problems. 2080 Tis are great fun

Related article on failures here https://www.techspot.com/news/77445-nvidia-addresses-failing-geforce-rtx-2080-ti-cards.html


----------



## Tragic

ntuason said:


> This is so strange. I have the 406W (135%) power limit BIOS flashed on my 2080 Ti X Trio, but it won't go past 115%, so it's power throttling? And my GPU power consumption never goes above 350W. Am I doing something wrong?



Do me a favor and download/install/run Superposition 8K Optimized or 3DMark Time Spy Extreme; Heaven does not have enough balls to make the 2080 Ti truly sweat. Also make sure the NVIDIA Control Panel 3D options are set to prefer maximum performance and high performance, then go to Edit Power Plan and make sure the CPU power plan is max performance, not balanced or power saving. Apply everything, restart, then try a bench with GPU-Z open and show me the results.


----------



## kx11

this is my benchmark results from Division 2 , 4k ultra using 431.36 driver


----------



## Jpmboy

wasdz0r said:


> Guys, I'm considering buying an RTX 2080 Ti, and I have a question: do these cards still come with defective GPUs that cause BSODs, artifacts, black screens, etc.? I read a couple of owner reviews reporting that they are on their 2nd or 3rd RMA and the card failed AGAIN, as recently as July 2019.


well... basically, if you see the same user(s) having repeated failures with several RMA cards... it is usually not the card that is the root-cause of the repeated problem.


----------



## tijgert

I have a Gigabyte 2080 Ti Xtreme (non-Waterforce) that I'm putting an EK block on. I'd like to flash a more powerful BIOS to it, but I'm afraid it's a non-reference card, and the exciting BIOSes are all said to be for reference cards only.

The regular Windforce cards are said to be reference, but they're not waterblock-compatible with the Xtreme Waterforce cards (or the regular Xtreme), which supports my suspicion that the Xtreme cards are non-reference.

So what can I do / am I supposed to do if I want to flash something fancier?

And by extension, if other BIOSes won't work (due to it being non-reference), would it be possible/beneficial to flash a Waterforce BIOS to my regular Xtreme?


----------



## wasdz0r

Jpmboy said:


> well... basically, if you see the same user(s) having repeated failures with several RMA cards... it is usually not the card that is the root-cause of the repeated problem.


IDK, they are reporting that cards fail within a week or less.
Are they trying to disassemble them with a hammer?
And what about those who RMA'ed only once?


----------



## Jordel

So, for a reference-PCB 2080 Ti (300A), the BIOS with the highest power limit is the Galax at 380W?

The only other option is a shunt mod, yeah? Do we know for sure which shunt mods work with the 2080 Ti, considering the changes NVIDIA made?


----------



## Shawnb99

wasdz0r said:


> Guys, I'm considering buying an RTX 2080 Ti, and I have a question: do these cards still come with defective GPUs that cause BSODs, artifacts, black screens, etc.? I read a couple of owner reviews reporting that they are on their 2nd or 3rd RMA and the card failed AGAIN, as recently as July 2019.




Just had my second one die on me last week, so yeah, it can still happen.


----------



## wasdz0r

Shawnb99 said:


> Just had my second one die on me last week, so yeah, it can still happen.


Which model? How hard did you overclock your card? How long did the last one last? When was it produced?


----------



## Shawnb99

wasdz0r said:


> Which model? How hard did you overclock your card? How long did the last one last? When was it produced?




The most recent to die was the Hydro Copper in my sig; I had it maybe 6 months, with maybe 2 months of use. Never overclocked, never stress tested, never played a game on it.

The first one was an EVGA Ultra OC model; it didn't last 2 weeks, never overclocked.

No idea when either was produced; both had Micron memory, though.


----------



## Jordel

What are people's power usage for a given voltage and frequency?

I'm seeing 380W according to GPU-Z (hitting PerfCap Pwr at that point), even though I'm only at ~1900 MHz and 0.86 V on the core. How the hell is it drawing so much power? Loads of power leakage? Or perhaps hot VRMs?
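To first order, dynamic GPU power scales with frequency times voltage squared, which gives a quick sanity check on whether 380W at 1900 MHz / 0.86 V is plausible from the core alone. A back-of-the-envelope sketch; the 250 W / 1545 MHz / 1.00 V reference point is an assumption taken from the stock spec, and static leakage, memory power, and VRM losses are ignored:

```python
# Crude dynamic-power scaling from a known reference operating point:
#   P ~ P_ref * (f / f_ref) * (V / V_ref)^2
# ASSUMED reference: ~250 W at the 1545 MHz stock boost and ~1.00 V.

def scaled_power(f_mhz: float, v_core: float,
                 p_ref: float = 250.0, f_ref: float = 1545.0,
                 v_ref: float = 1.00) -> float:
    """Estimate core switching power at (f_mhz, v_core) relative to a reference point."""
    return p_ref * (f_mhz / f_ref) * (v_core / v_ref) ** 2

# 1900 MHz at 0.86 V:
print(round(scaled_power(1900, 0.86), 1))  # 227.4 -- well under the observed 380 W
```

The gap between that rough ~227W estimate and the observed 380W suggests the extra draw comes from leakage, GDDR6, and board losses rather than the core's switching power, which is consistent with the "power leakage or hot VRMs" guesses above.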


----------



## mattxx88

Jordel said:


> What are people's power usage for a given voltage and frequency?
> 
> I'm seeing 380W according to GPU-Z (hitting PerfCap Pwr at that point), even though I'm only at ~1900 MHz and 0.86 V on the core. How the hell is it drawing so much power? Loads of power leakage? Or perhaps hot VRMs?


With this tool:
https://www.nvidia.com/en-us/geforc...er-and-performance-benchmarking-app-download/
you can see the exact consumption of your GPU chip (die only) and of the whole board, reported separately.


----------



## ntuason

Tragic said:


> Do me a favor and download/install/run Superposition 8K Optimized or 3DMark Time Spy Extreme; Heaven does not have enough balls to make the 2080 Ti truly sweat. Also make sure the NVIDIA Control Panel 3D options are set to prefer maximum performance and high performance, then go to Edit Power Plan and make sure the CPU power plan is max performance, not balanced or power saving. Apply everything, restart, then try a bench with GPU-Z open and show me the results.


Hello,

I did as you asked and my results: 

It definitely looks like the card is running much harder and is not capping at 115%. Can the card actually pull 400W+ at 135% power, or does it stay under that on purpose for safety?

Thanks


----------



## SupernovaBE

Spiriva said:


> Yes, and about 30-45 MHz more. And in my case the Kingpin BIOS seems to "hold" the card at max speed even over 40C. My card runs at ~43C in the summer, in the north of Sweden.


Nice!!

I'm in the same boat: I want to ditch the 380W BIOS and go to the Kingpin.
Is there a downside?
Are all ports working? DP, HDMI, etc.?
I'm on an EKWB FC block.

I've upgraded my CPU to the 18-core/36-thread and I want to push higher in the HOF.
I'm running a 60mm-thick 480mm rad for the CPU and another 480 for the GPU.
Under load the GPU gets to 40-ish.
I hooked my AC up to one 480mm rad and got an 8C water temp (18C under load): 2190 MHz. At room temp I get 2145 MHz max.
I definitely need to up the power limit; the card needs to stay in boost to get the best out of it.


----------



## truehighroller1

Download Galax RTX 2080 Ti HOF OC Lab Custom PCB (3x8-Pin) 2000W x 100% Power Target BIOS (2000W) Unofficial (Source) Read Me

How come I can't download this bios??


----------



## wsarahan

Hi guys how are you?

Yesterday my PNY RTX 2080 Ti XLR8 Gaming arrived, but the temps seem a little hot to me: with the max power limit and +110 on the core / +600 on the memory, I get about 81-82C at 100% fan speed. I've seen that the 2080 Ti runs a little hotter than the 1080 Ti; should I be worried? Should I return this card, or is it OK?

Thanks


----------



## Jordel

wsarahan said:


> Hi guys how are you?
> 
> Yesterday my PNY RTX 2080 Ti XLR8 Gaming arrived, but the temps seem a little hot to me: with the max power limit and +110 on the core / +600 on the memory, I get about 81-82C at 100% fan speed. I've seen that the 2080 Ti runs a little hotter than the 1080 Ti; should I be worried? Should I return this card, or is it OK?
> 
> Thanks


I've got the same model. My card loves to peg 88 degrees and throttle, so seems to not be too unusual. I'm running at ~2050 MHz on core, around 24 degrees room temp.


----------



## wsarahan

Jordel said:


> I've got the same model. My card loves to peg 88 degrees and throttle, so seems to not be too unusual. I'm running at ~2050 MHz on core, around 24 degrees room temp.


So it seems that card just runs hot... I guess it's normal, then.


----------



## 86Jarrod

truehighroller1 said:


> Download Galax RTX 2080 Ti HOF OC Lab Custom PCB (3x8-Pin) 2000W x 100% Power Target BIOS (2000W) Unofficial (Source) Read Me
> 
> How come I can't download this bios??


There's an NDA, I think, that people have to agree to before getting the BIOS from Galax. The problem is that whenever it gets leaked anywhere, it gets taken down immediately. I messaged a few people on here who have it but don't have Galax cards, so they obviously got it from a leak, and I didn't receive a single reply. It took me hours and hours of searching to find it. It works on my FTW3 but is weird about saving profiles in MSI Afterburner. I like it better than the KPE XOC just because of the volt curve: sometimes the XOC BIOSes drop from 1.125 to 1.118-ish and cause a crash. If you want, I'll hook you up with the link. These hidden BIOSes for YouTubers and "special" overclockers that aren't attainable by us regular people are kinda bull**** imo. Just be aware there's always a chance of hurting your GPU, and make sure you have proper cooling. I OC knowing I'm going to kill my card at some point, and I'm completely fine with that.


----------



## truehighroller1

86Jarrod said:


> There's an NDA, I think, that people have to agree to before getting the BIOS from Galax. The problem is that whenever it gets leaked anywhere, it gets taken down immediately. I messaged a few people on here who have it but don't have Galax cards, so they obviously got it from a leak, and I didn't receive a single reply. People don't want their scores beaten or something. It took me hours and hours of searching to find it. It works on my FTW3 but is weird about saving profiles in MSI Afterburner. I like it better than the KPE XOC just because of the volt curve: sometimes the XOC BIOSes drop from 1.125 to 1.118-ish and cause a crash. If you want it, I won't be like the others; I'll hook you up with the link. These hidden BIOSes for YouTubers and "special" overclockers that aren't attainable by us regular people are kinda bull**** imo. Just be aware there's always a chance of hurting your GPU, and make sure you have proper cooling. I OC knowing I'm going to kill my card at some point, and I'm completely fine with that.


Bro,

Thank you for responding. I'll hit you up via IM. +REP


----------



## Kaltenbrunner

What happened with the artifacting and/or bricking problems? Was it only early cards, or is the problem ongoing?


----------



## truehighroller1

Kaltenbrunner said:


> What happened with the artifacting and/or bricking problems? Was it only early cards, or is the problem ongoing?


Some people are getting RMA replacements that fail with the same issue, without overclocking or even heavy gaming involved. It's random, and no one knows why; from what I've seen, it has happened to Samsung-memory and Micron-memory owners just the same.


----------



## fleps

ntuason said:


> Hello,
> 
> I did as you asked and my results:
> 
> It definitely looks like the card is running much harder and is not capping at 115%. Can the card actually pull 400W+ at 135% power, or does it stay under that on purpose for safety?
> 
> Thanks


Time Spy will probably top it out; it's heavier than Superposition.

You may also try a curve OC and lock the vcore at 1.093 V in benchmarks to see if it gives you better results.


----------



## AvengedRobix

Waiting for my 3900X so I can play with my dirty baby


----------



## maxjivi05

Alright, I'm back after using the 380W BIOS on my MSI Seahawk X 2080 Ti for a few months, and I'm curious whether the Kingpin BIOS everyone is talking about would work on it. I have good temps; I just want a little more power so I can stop bouncing off the limit. Safe flash?


----------



## dansi

HeliXpc said:


> WOOOHOOOO, so i flashed my non A 2080ti with this bios, https://www.techpowerup.com/vgabios/208761/208761 using the nvflash 5.527.0 Modified. https://onedrive.live.com/?authkey=...!122&parId=D7731B0FB754D70F!114&action=locate password is OCN (caps)


What BIOS is that? The link is down.
I'm thinking of getting a cheaper non-A 2080 Ti and flashing it to enable a higher power limit for OC.


----------



## sblantipodi

Is the 3900X setting new 3DMark records with the Tis?


----------



## Sheyster

sblantipodi said:


> Is the 3900X setting new 3DMark records with the Tis?


The Time Spy Extreme hall of fame is pretty much all Intel HEDT and Xeon processors.


----------



## kx11

If only I could get my CPU to run all cores at 4.9 GHz on water...

But this is better than nothing:



https://www.3dmark.com/pr/120401


----------



## kx11

No. 35 is not bad, right?




https://benchmark.unigine.com/results/rid_81ebde27bd9d46dd925d240d20785eea


----------



## Piroxbot14

Thanks to this guide I was able to top the chart for my system specs in Port Royal with my 2080 Ti Black Gaming edition on water! https://www.3dmark.com/pr/120516


----------



## pt0x-

Hello,

For the last few days I have been reading the internetz, and especially this forum, trying to figure out which card to buy. Obviously I'm buying a 2080 Ti to replace my current 1080 Ti Strix (non-OC version).
What I'm struggling with is whether I should go with a custom PCB (Strix OC) or a reference PCB (XC Ultra).

I have a custom loop (2x 360) for my GPU and a 9700K @ 5 GHz, so I'm 100% sure I'm going to water cool the new card. With that in mind, the question is the power limit on both of these cards, as I will be overclocking, though not super hardcore. The goal is max clocks and stability for gaming, plus a little benchmarking for fun where I can push the card even further.

After extensive reading I can't find a clear answer on the pros and cons here. I want to keep all controls (except fan control) and have a power limit of around 400W, which I expect I'll easily be able to cool: my current 1080 Ti is running a 340W BIOS and never gets hotter than 43C (VRM 50C), even when stressing it and pulling max power, and even with the CPU stressed as well.

If I'm correct, this leads to the following:
The Strix can keep all controls with the Matrix BIOS @ 360W
The Strix can keep all controls with the Galax BIOS @ 380W
The Strix can get up to 1000W with the XOC BIOS but will lose the curve editor and memory downclocking + lose a DisplayPort/HDMI port?
The Strix has a dual-BIOS option
The Strix has even better VRMs
The Strix has no overvoltage BIOS option
The Strix only allows ugly waterblocks (EK with the metal plate; Watercool with a scaled-down block and weird plate; Phanteks sort of pretty, but not as good in performance)
This last one is of course 100% personal opinion

The XC Ultra can keep all controls with the FTW3 BIOS @ 374W
The XC Ultra can keep all controls with the Galax BIOS @ 380W
The XC Ultra can get up to 1000W with the XOC BIOS but will lose the curve editor and memory downclocking + lose a DisplayPort/HDMI port?
The XC Ultra does not have dual BIOS
The XC Ultra has stock VRMs
The XC Ultra does not have overvoltage BIOS options
The XC Ultra has better-looking and better-performing waterblocks, like the Heatkiller IV or Kryographics Next

Are these statements true? I have read so much conflicting information over the last few days that I just can't draw a conclusion at the moment.
Beyond that, I can't seem to find a solid answer on the following:

Can I flash the Kingpin or HOF OC Lab BIOS to either of these cards, or will that limit my power / brick it?
Is there a 400W private EVGA BIOS, or is that just a rumor? If it exists, is it a better option?
To keep all controls on both cards, is shunting a viable long-term solution compared to flashing?

I know and understand if your first thought is... UTFS! But this thread is already 8,379 posts long, and things change in the 2080 Ti world from month to month. Searching for these things still leads straight down a rabbit hole with different answers from month to month.

Thanks for reading, understanding and if you feel like it, replying


----------



## krizby

pt0x- said:


> Hello,
> 
> For the last few days I have been reading the internetz, and especially this forum, trying to figure out which card to buy. Obviously I'm buying a 2080 Ti to replace my current 1080 Ti Strix (non-OC version).
> What I'm struggling with is whether I should go with a custom PCB (Strix OC) or a reference PCB (XC Ultra).


There is literally no reason to buy the Strix OC if you're just going to slap a waterblock on it right away. The VRM on the reference PCB is enough to handle LN2.
Most users here report that the Galax 380W BIOS is the best for gaming or light benching; compared to the XOC BIOS, whose V/F curve is locked out, you can squeeze out some additional performance by modifying the V/F curve.


----------



## tijgert

I'd like to flash a Gigabyte Xtreme Waterforce BIOS to my regular non-Waterforce Xtreme to get higher power limits (it'll be fine, I have an EK Vector block installed).

Can anyone confirm this will work?
Can anyone confirm the boards are 100% identical?
TechPowerUp has several BIOS versions; is any of them specifically for Micron memory?

I already did the shunt mod; I'd just like to get more power. More, MORE!


----------



## Shawnb99

******* EVGA just returned my defective card with no ducking word as to why. The RMA shows as complete, yet it's the same damn card.


----------



## Shawnb99

Shawnb99 said:


> ******* EVGA just returned my defective card with no ducking word as to why. The RMA shows as complete, yet it's the same damn card.




Nice! Just got an email back from EVGA: they are now setting up a second RMA for me and will be sending me a brand-new card.

Very happy now; I was not expecting this.
I wasn't upset that they found nothing wrong with it, just at the lack of communication over it. Looking forward to getting a new card now.
Hopefully I get some Samsung memory this time.


----------



## Gaffydk

Hey Everyone

I just got a 2080 Ti and would like to put a custom BIOS on it, since I've invested in a waterblock to get it up and running in my custom loop.

I have an MSI DUKE OC, a nice card with an A chip; that's what I researched beforehand. But now I seem to have hit a wall:

When I try and use NV flash (version 5.541.0 with Mismatch Disabled) - and try and flash the Galax 380W bios:

nvflash --protectoff
nvflash -6 newbios.rom

I get the following:

====================
"XUSB FW component of the input GPU firmware image is incompatible [sic], please use
a newer version of GPU firmware image for this product." 

BIOS Cert 2.0 Verification Error, Update aborted.

Nothing changed!

ERROR: Invalid firmware image detected.
=====================

Any ideas? Returning the board is sadly not an option, since I already put it under my waterblock, thinking that as long as it was an A chip everything would be fine.

Really hoping for some pointers.

I see a fellow user here has had the same problem, but it's the only mention of the DUKE OC and a custom BIOS I can find.

All the best - Yngve


----------



## schoolofmonkey

Just picked up a ROG RTX 2080 Ti Strix Advanced. Just opening the box blew me away; this card is massive compared to the GTX 1080 Ti I had (well, the wife has it now).

Haven't done any overclocking with it yet; temps max out at 70C without changing anything.


----------



## J7SC

schoolofmonkey said:


> Just picked up a ROG RTX 2080 Ti Strix Advanced, *just opening the box blew me away; this card is massive compared to the GTX 1080 Ti I had, well, the wife has it now.*
> 
> Haven't done any overclocking with it yet; temps max out at 70C without changing anything.



...sounds like this is an opportunity you could benefit from for a while. Enjoy the 2080 Ti's hidden benefits!


----------



## Misfit16R

Does anyone have one of the XOC BIOSes working on the 2080 Ti FTW3, or know if it will work?


----------



## MrTOOSHORT

Misfit16R said:


> Does anyone have any of the XOC bios working on the 2080ti FTW3 or know if it will work?


It works, MrFox benched with the XOC bios on his HC FTW3:

*https://www.overclock.net/forum/27980568-post2143.html*


----------



## Misfit16R

MrTOOSHORT said:


> It works, MrFox benched with the XOC bios on his HC FTW3:
> 
> *https://www.overclock.net/forum/27980568-post2143.html*


Do you know which xoc bios he used? Was it the kingpin one?


----------



## MrTOOSHORT

Misfit16R said:


> Do you know which xoc bios he used? Was it the kingpin one?


Kingpin XOC, the one I'm using on my KPE:


----------



## Misfit16R

MrTOOSHORT said:


> Kingpin XOC, the one I'm using on my KPE:


Thanks man, appreciate it.


----------



## Pandora's Box

Decided enough is enough: for the past couple of months I've been staring at 2080 Tis, and the upgrade itch just became too much. Found her as an open-box item at Micro Center, marked down to $1015 USD. She's a beauty alright... Oh yeah, and I scored a 3900X open-box for $424.


----------



## ICHOR

So I'm gonna bite the bullet as well and buy one. As far as I can see, if you're going to stick with air, the best bet is the MSI Trio X with the 406W BIOS, is that correct?

Torn between buying the Trio X and the EVGA FTW3 Ultra iCX2 card.


----------



## Tragic

ICHOR said:


> So I'm gonna bite the bullet as well and buy one. As far as I can see, if you're going to stick with air, the best bet is the MSI Trio X with the 406W BIOS, is that correct?
> 
> Torn between buying the Trio X and the EVGA FTW3 Ultra iCX2 card.





Trio. You get a sexy backplate and the best RGB in the business.


----------



## J7SC

ICHOR said:


> So I'm gonna bite the bullet as well and buy one. As far as I can see, if you're going to stick with air, the best bet is the MSI Trio X with the 406W BIOS, is that correct?
> 
> Torn between buying the Trio X and the EVGA FTW3 Ultra iCX2 card.



What about the MSI Seahawk X EK (like the Trio, but with a factory-installed EK block)? Could be nice and 'cool'. If you're not set on MSI or EVGA, you might also consider the Gigabyte Aorus Xtreme WB version; I had great luck with two of them.


----------



## ICHOR

J7SC said:


> What about the MSI Seahawk X EK (like trio, but w/ factory installed EK block)...could be nice and 'cool'. If you're not set on MSI or EVGA, you might also consider the Gigabyte Aorus Xtreme WB version, I had great luck w/ two of them.


True, but won't you get more of an overclock with the Trio X and that BIOS? I just want the best card that can give me the highest clock without having to buy a block and water cool it.


----------



## J7SC

ICHOR said:


> True, but won't you get more of an overclock with the Trio X and that BIOS? I just want the best card that can give me the highest clock without having to buy a block and water cool it.



Water cooling, custom or AIO, is not everyone's cup of tea. But given the way NVIDIA's GPU Boost works, with temperature as one of its variables, and with up to 406W (or more with some custom BIOSes) of heat to dispose of, water cooling will help get better OCs out of a given silicon-lottery-level 2080 Ti.


----------



## bigjdubb

ICHOR said:


> True, but won't you get more of an overclock with the Trio X and that BIOS? I just want the best card that can give me the highest clock without having to buy a block and water cool it.


I would take a look at the actual range of clocks people are getting, and the performance that corresponds with it, because it isn't very big. I think your luck (silicon lottery) plays as much of a role in the clock speeds you get as the card you pick does, if not more.

It's a bit different if you are into the sport of benchmarking, where you chase points like a race car driver chases tenths of a second; an extra 5 frames per second can be huge in benchmarking.


----------



## Shawnb99

Replacement GPU should be here tomorrow. It'll be a brand-new card, so here's hoping I win the silicon lottery this time. Hope I get Samsung RAM as well.


----------



## Neosin

You guys have any ideas? 

So I'm trying to flash my bios and i'm getting this error when i do

Update display adapter firmware?
Press 'y' to confirm (any other key to abort):
EEPROM ID (EF,6014) : WBond W25Q80EW 1.65-1.95V 8192Kx1S, page

XUSB FW component of the input GPU firmware image is imcompatible, please use
a newer version of GPU firmware image for this product.

BIOS Cert 2.0 Verification Error, Update aborted.

Nothing changed!

ERROR: Invalid firmware image detected.


I have the MSI RTX 2080 Ti Gaming X Trio and I'm trying to get the 400W BIOS on it, as I'm limited to 330W.

I'm using nvflash from page 1: protectoff, then -6 romname, etc. I think I'm doing it all correctly.

Any ideas?


----------



## Tragic

Neosin said:


> You guys have any ideas?
> 
> So I'm trying to flash my bios and i'm getting this error when i do
> 
> Update display adapter firmware?
> Press 'y' to confirm (any other key to abort):
> EEPROM ID (EF,6014) : WBond W25Q80EW 1.65-1.95V 8192Kx1S, page
> 
> XUSB FW component of the input GPU firmware image is imcompatible, please use
> a newer version of GPU firmware image for this product.
> 
> BIOS Cert 2.0 Verification Error, Update aborted.
> 
> Nothing changed!
> 
> ERROR: Invalid firmware image detected.
> 
> 
> I have the MSI RTX 2080 Ti Gaming X Trio and I'm trying to get the 400W BIOS on it, as I'm limited to 330W.
> 
> I'm using nvflash from page 1: protectoff, then -6 romname, etc. I think I'm doing it all correctly.
> 
> Any ideas?





nvflash64


----------



## Neosin

Tragic said:


> nvflash64


That's what I'm using..


----------



## mattxx88

Neosin said:


> You guys have any ideas?
> 
> So I'm trying to flash my bios and i'm getting this error when i do
> 
> Update display adapter firmware?
> Press 'y' to confirm (any other key to abort):
> EEPROM ID (EF,6014) : WBond W25Q80EW 1.65-1.95V 8192Kx1S, page
> 
> XUSB FW component of the input GPU firmware image is imcompatible, please use
> a newer version of GPU firmware image for this product.
> 
> BIOS Cert 2.0 Verification Error, Update aborted.
> 
> Nothing changed!
> 
> ERROR: Invalid firmware image detected.
> 
> 
> I have the MSI RTX 2080 Ti Gaming X Trio and I'm trying to get the 400W BIOS on it, as I'm limited to 330W.
> 
> I'm using nvflash from page 1: protectoff, then -6 romname, etc. I think I'm doing it all correctly.
> 
> Any ideas?


Did you rename the BIOS .rom file? Make it shorter, e.g. 440w.rom.

I have the same GPU and I'm waiting on your test, because I have problems with my card: I tried several BIOSes and none of them let my GPU go over 360W max, so I was forced to shunt it.
This is my experience: https://www.overclock.net/forum/69-nvidia/1714864-2080-ti-working-shunt-mod-13.html#post27995866


----------



## Neosin

mattxx88 said:


> Did you rename the BIOS .rom file? Make it shorter, e.g. 440w.rom.
> 
> I have the same GPU and I'm waiting on your test, because I have problems with my card: I tried several BIOSes and none of them let my GPU go over 360W max, so I was forced to shunt it.
> This is my experience: https://www.overclock.net/forum/69-nvidia/1714864-2080-ti-working-shunt-mod-13.html#post27995866


I've tried a short name (400w.rom) and the long name (MSI.RTX2080Ti.11264.180930.rom); both give the same error.


----------



## mattxx88

Neosin said:


> I've tried a short name (400w.rom) and the long name (MSI.RTX2080Ti.11264.180930.rom); both give the same error.


Did you put the nvflash folder in the root of your SSD (C:\ in my case, and in the guide's example)?


----------



## Neosin

mattxx88 said:


> Did you put the nvflash folder in the root of your SSD (C:\ in my case, and in the guide's example)?


Yup, followed it exactly. I'm a 20-year IT veteran and normally pretty savvy about this kind of thing, but as experience has shown me, it's going to be either something incredibly, stupidly simple, or some kind of issue with nvflash or the BIOS file. It's complaining about the XUSB firmware, so: has anyone actually flashed an MSI RTX 2080 Ti Gaming X Trio to the 400W BIOS, and which version of nvflash did they use? I'm thinking either nvflash or the 400W BIOS isn't compatible with the card.


----------



## TK421

My memory apparently maxes out at +500 before artifacting. Micron card.

Is this defective, or just a case of massive bad luck?



EVGA XC Ultra (stock PCB) with the 380W Galax vBIOS from the front page.


----------



## Neosin

TK421 said:


> My memory apparently maxes out at +500 before artifacting. Micron card.
> 
> Is this defective, or just a case of massive bad luck?
> 
> 
> 
> EVGA XC Ultra (stock PCB) with the 380W Galax vBIOS from the front page.



Yeah, Micron memory doesn't like higher clocks; it generates too much heat and craps out. Samsung memory does a much better job: mine easily hits +1350 (gaming/benchmarking) and can mine ETH at +1400, 64.5 MH/s.


----------



## J7SC

TK421 said:


> My memory apparently maxes out at +500 before artifacting. Micron card.
> 
> Is this defective, or just a case of massive bad luck?
> 
> EVGA XC Ultra (stock PCB) with the 380W Galax vBIOS from the front page.



...well, it depends on your starting point (re: '500') and whether you're talking about 'effective' or 'actual' speed, given GDDR6... The pic below is from a post I did back in early spring for 2x 2080 Ti NVLink (Aorus XTR, w-cooled, stock BIOS) with Micron memory. Beyond around +1400, diminishing returns / declining scores.


----------



## TK421

Neosin said:


> Yeah, Micron memory doesn't like higher clocks; it generates too much heat and craps out. Samsung memory does a much better job: mine easily hits +1350 (gaming/benchmarking) and can mine ETH at +1400, 64.5 MH/s.



Hmm that's quite good. I'll see if I can convince an RMA for a samsung card.







J7SC said:


> ...well, depends on your starting position (re '500') and whether you're talking about 'effective' or 'actual' speed, given GDDR6... Pic below is from a post I did back in early spring for 2x 2080 Ti NVLink (Aorus XTR, w-cooled, stock Bios) and Micron memory. Beyond that, at around +1400, diminishing returns / declining scores



With +500, MSI AB reports 7500 MHz as the memory clock (±1 MHz).
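To make the 'effective' vs 'actual' distinction concrete, here's a minimal sketch of the conversion, assuming the usual 2080 Ti figures (1750 MHz actual command clock, Afterburner reporting 4x that, effective data rate 8x). The function name is just for illustration:

```python
# Rough sketch of how a GDDR6 offset in MSI Afterburner maps to the other
# clock figures people quote. Assumptions: a stock 2080 Ti runs a 1750 MHz
# actual (command) clock, Afterburner reports 4x that (7000 MHz), and the
# "effective" data rate is 8x actual (14000 MT/s = 14 Gbps).

STOCK_ACTUAL_MHZ = 1750  # GDDR6 command clock on a stock 2080 Ti

def clocks_from_ab_offset(offset_mhz: int) -> dict:
    """Translate an Afterburner memory offset into the three common figures."""
    reported = STOCK_ACTUAL_MHZ * 4 + offset_mhz   # what Afterburner shows
    actual = reported / 4                          # true command clock
    effective = reported * 2                       # data rate in MT/s
    return {"reported_mhz": reported, "actual_mhz": actual, "effective_mts": effective}

print(clocks_from_ab_offset(500))
# +500 -> Afterburner shows 7500 MHz, i.e. 15000 MT/s effective
```

That lines up with the +500 / 7500 MHz reading above: the same offset can be read as 7500 reported, 1875 actual, or 15 Gbps effective.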


----------



## mattxx88

Neosin said:


> Yup, followed it exactly. 20-year IT veteran, I'm normally pretty savvy about this kind of thing. But as experience has shown me, it's gonna be something incredibly stupid and simple, or there's some kind of issue with nvflash, the BIOS file, something. Since it's complaining about the USB, I was wondering: has anyone actually flashed an MSI RTX 2080 Ti Gaming X Trio to the 400W BIOS? Which version of nvflash did they use? I'm thinking it's nvflash, or the 400W BIOS isn't compatible with the card.


i flashed it (the 400W BIOS) on my Gaming X Trio, following the 1st page 

my problem is that it's recognized correctly by GPU-Z, but in real use the card is capped at a 360W draw max


----------



## Neosin

TK421 said:


> Hmm that's quite good. I'll see if I can convince an RMA for a samsung card.


Just some screenshots of it in action, one with OC and one with stock numbers (with the fan at 100%)


----------



## kot0005

Not sure if it's a good result, but here is my score with the new build: Strix 2080 Ti OC using the Matrix BIOS, 3900X stock and 3200 CL14 RAM https://www.3dmark.com/fs/19964274


----------



## Neosin

kot0005 said:


> Not sure if it's a good result, but here is my score with the new build: Strix 2080 Ti OC using the Matrix BIOS, 3900X stock and 3200 CL14 RAM https://www.3dmark.com/fs/19964274


Some rookie numbers  

Not bad at all


----------



## Gaffydk

@Neosin

I have the exact same problem as you - but with an MSI Duke OC.

I'm getting the same XUSB error - and I even tried flashing an older MSI BIOS onto the same card - no luck.

Did you manage to flash your card?


----------



## mattxx88

kot0005 said:


> Ok, so I tried using the 406W Trio X BIOS on my Seahawk EK and the power usage wouldn't go past 355W for some reason 😮
> 
> I tried the 380w galax bios but had to revert back because it disables the leds.


I was reading old posts, and it seems I'm not the only one who has problems flashing the 400W BIOS on the MSI Gaming X

Found this old post dated November 2018

Also my GPU doesn't go over a 360W max peak


----------



## Neosin

Gaffydk said:


> @Neosin
> 
> I have the exact same problem as you - but with an MSI Duke OC.
> 
> I'm getting the same XUSB error - and I even tried flashing an older MSI BIOS onto the same card - no luck.
> 
> Did you manage to flash your card?


Nope, have not found a solution.


----------



## mattxx88

Neosin said:


> Nope, have not found a solution.


what if you flash an official MSi bios?
https://forum-en.msi.com/index.php?topic=310009.msg1783773#msg1783773

same error?


----------



## Sheyster

Neosin said:


> Nope, have not found a solution.



Have you tried to disable the card in device manager before flashing it? This should not be required anymore but who knows, it might help.
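For anyone who wants to script that procedure (disable the card, flash, re-enable), a hypothetical sketch follows. The pnputil switches and nvflash64 flags here are assumptions based on common usage, not a verified recipe; when in doubt, run nvflash by hand per the first-page guide.

```python
# A sketch of the flash sequence described in this thread, including the
# "disable the card first" step. Command names and flags are assumptions,
# not verified against any particular nvflash build: pnputil's
# /disable-device switch needs a recent Windows 10, and "-6" is the flag
# commonly used to override a PCI subsystem ID mismatch.

def flash_sequence(instance_id: str, rom: str) -> list[str]:
    """Return the ordered commands for a (hypothetical) scripted flash."""
    return [
        f'pnputil /disable-device "{instance_id}"',  # disable GPU in Device Manager
        "nvflash64 --protectoff",                    # lift the EEPROM write protect
        f"nvflash64 -6 {rom}",                       # flash, overriding the ID mismatch
        f'pnputil /enable-device "{instance_id}"',   # re-enable, then reboot
    ]

# The device instance ID below is a placeholder, not a real one.
for cmd in flash_sequence(r"PCI\VEN_10DE&DEV_1E07...", "galax_380w.rom"):
    print(cmd)
```

The point is the ordering: the disable step comes before any nvflash call, and the card is only re-enabled after the flash reports success.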


----------



## KCDC

Howdy! I've joined the 2080ti club with a couple Aorus waterforce xtremes. Finished adding them to my loop last night, fiddled a bit before going to bed. Didn't get amazingly different scores on firestrike ultra yet, maybe 10% better so far. Was able to push them to around 2100 before crashes, however I think one card can clock higher than the other. Haven't pushed the ram yet or checked to see what chips they are. Plan to do more tonight. Still some air lock in the blocks. Also noticing they idle about 6c warmer than the 1080tis, but that could be due to the air bubbles still trapped. 

I am wondering if anyone with dual cards has successfully OC'd them separately with different settings. I wasn't able to accomplish this successfully on my 1080ti's, it would always choose the lower settings for both cards. Also couldn't get a custom voltage curve to work properly with two cards. I gave up trying after a while.

Excited to get to gpu rendering on these! Tried wolfenstein 2 new colossus quickly at 7680x1440 uber settings, my fps jumped from the high 80s to over 160fps constant.


----------



## J7SC

KCDC said:


> Howdy! I've joined the 2080ti club with a couple Aorus waterforce xtremes. Finished adding them to my loop last night, fiddled a bit before going to bed. Didn't get amazingly different scores on firestrike ultra yet, maybe 10% better so far. Was able to push them to around 2100 before crashes, however I think one card can clock higher than the other. Haven't pushed the ram yet or checked to see what chips they are. Plan to do more tonight. Still some air lock in the blocks. Also noticing they idle about 6c warmer than the 1080tis, but that could be due to the air bubbles still trapped.
> 
> I am wondering if anyone with dual cards has successfully OC'd them separately with different settings. I wasn't able to accomplish this successfully on my 1080ti's, it would always choose the lower settings for both cards. Also couldn't get a custom voltage curve to work properly with two cards. I gave up trying after a while.
> 
> Excited to get to gpu rendering on these! Tried wolfenstein 2 new colossus quickly at 7680x1440 uber settings, my fps jumped from the high 80s to over 160fps constant.



Congrats...You're going to have some fun with those...  
Per pics in the spoiler below, I got my two Aorus Xtreme Waterforce around Christmas and I am extremely pleased with them. If you check back in this thread to April or so, I had posted single TimeSpy at GPU 2175Mhz and dual 2145MHz, and Superposition 4K (one card) ran as high as 2205 per earlier posts....all on stock bios. I should add though that there's a massive cooling system for just the two GPUs (2 pumps, 3 RX360/60 rads, 12 120mm fans etc). Whatever extra cooling you can give those cards is worth it as they respond well to cool(er) temps with higher clocks. Highest power consumption I have seen is 379W (stock bios).

I haven't tried to override MSI AB yet by overclocking the two cards separately / differently as they're plenty fast as is. But sooner or later, I'll try the MSI AB fixed voltage curve.




Spoiler


----------



## KCDC

J7SC said:


> Congrats...You're going to have some fun with those...
> Per pics in the spoiler below, I got my two Aorus Xtreme Waterforce around Christmas and I am extremely pleased with them. If you check back in this thread to April or so, I had posted single TimeSpy at GPU 2175Mhz and dual 2145MHz, and Superposition 4K (one card) ran as high as 2205 per earlier posts....all on stock bios. I should add though that there's a massive cooling system for just the two GPUs (2 pumps, 3 RX360/60 rads, 12 120mm fans etc). Whatever extra cooling you can give those cards is worth it as they respond well to cool(er) temps with higher clocks. Highest power consumption I have seen is 379W (stock bios).
> 
> I haven't tried to override MSI AB yet by overclocking the two cards separately / differently as they're plenty fast as is. But sooner or later, I'll try the MSI AB fixed voltage curve.
> 
> 
> 
> 
> Spoiler


Cool, great numbers! My loop is a single loop with 2 nemesis 420 gtr rads, one in push/pull and has held up nicely through all of my builds, the 1080tis would stay in the 45c area at 2075 under constant load and idle at 27c. These are idling around 33-35c currently. Just upgraded to a new D5 pump as well while I had everything down. I think I have air in my system still, so I'm gonna bleed the system tonight and get that stable first before pushing further as the GPUs would hit almost 60c at 2080-2100 after a couple runs. Unless that's expected on these cards. Hope they're not duds...


----------



## J7SC

KCDC said:


> Cool, great numbers! My loop is a single loop with 2 nemesis 420 gtr rads, one in push/pull and has held up nicely through all of my builds, the 1080tis would stay in the 45c area at 2075 under constant load and idle at 27c. These are idling around 33-35c currently. Just upgraded to a new D5 pump as well while I had everything down. I think I have air in my system still, so I'm gonna bleed the system tonight and get that stable first before pushing further as the GPUs would hit almost 60c at 2080-2100 after a couple runs. Unless that's expected on these cards. Hope they're not duds...



I can't quite recall the exact number, but around 38 C with load or so, you start to lose MHz. If you hit 2080-2100 @ 60 C, your cards are definitely not duds.

That cooling setup I mentioned above for this machine, which is dual-use 'work and play', really helps to control temps, per the attached pic for temps in TimeSpy Extreme...top is single GPU, bottom dual GPU (back-to-back runs). Your aforementioned 60 C does sound a bit high though, as your cooling setup is very good...you might be right about air in the system, or maybe have some sort of flow restriction. Finally, the dual pumps (MCP 655, similar to D5) I have for the GPUs are set to 4/5.
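As a toy illustration of the 'lose MHz past ~38 C' behaviour, here's a rough boost-bin model. The 15 MHz bin size is the commonly cited Turing step; the one-bin-per-5-C rate is purely an assumption for illustration, since the real thresholds vary by card and BIOS:

```python
# Toy model of the temperature behaviour described above: GPU Boost sheds
# ~15 MHz bins as the core warms up. The ~38 C start point comes from the
# post; the "one bin per 5 C" rate is an assumption for illustration only.

BIN_MHZ = 15          # Turing boost adjusts clocks in 15 MHz steps
START_C = 38          # temperature where bins start dropping (per the post)
DEGREES_PER_BIN = 5   # assumed; real thresholds vary per card / BIOS

def boosted_clock(cold_clock_mhz: int, temp_c: float) -> int:
    """Estimate the sustained clock at a given core temperature."""
    if temp_c <= START_C:
        return cold_clock_mhz
    bins_lost = int((temp_c - START_C) // DEGREES_PER_BIN) + 1
    return cold_clock_mhz - bins_lost * BIN_MHZ

print(boosted_clock(2145, 35))   # below the threshold: full clock
print(boosted_clock(2145, 60))   # a warm water loop costs several bins
```

Even this crude model shows why sub-40 C water temps are worth chasing: a card that boots at 2145 MHz can end up holding 2070-2100 once the loop warms up.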


----------



## KCDC

J7SC said:


> I can't quite recall the exact number, but around 38 C with load or so, you start to lose MHz. If you hit 2080-2100 @ 60 C, your cards are definitely not duds.
> 
> That cooling setup I mentioned above for this machine, which is dual-use 'work and play', really helps to control temps, per the attached pic for temps in TimeSpy Extreme...top is single GPU, bottom dual GPU (back-to-back runs). Your aforementioned 60 C does sound a bit high though, as your cooling setup is very good...you might be right about air in the system, or maybe have some sort of flow restriction. Finally, the dual pumps (MCP 655, similar to D5) I have for the GPUs are set to 4/5.


Thanks, yes, I think after I get all possible air out of rads and blocks, I hope to have lower temps overall. I believe I remember the crashing starting when it was trying to hit 2100 @ 1.093v on FS Ultra NVLink. Before hitting the higher bin due to temp climbing, it seemed to be running ok 2100 at lower voltage. I think 1.038, can't remember from memory. I'll have more time tonight to figure out what's going on. Excited to see what redshift 3.0 can do on these cards! They're getting closer to nvlink memory pooling, but not just yet. GPU rendering doesn't respond well to OCing, so I won't be doing that on the work side. In the future, I may use my other d5 pump for a dual design. One rad/pump for both cards, the other for the CPU. Or one pump and two 420 rads for both GPUs and one 420 rad/pump for CPU. Overkill, but I do have the space for one more rad.


----------



## mattxx88

mattxx88 said:


> I was reading old posts, and it seems I'm not the only one who has problems flashing the 400W BIOS on the MSI Gaming X
> 
> Found this old post dated November 2018
> 
> Also my GPU doesn't go over a 360W max peak





Sheyster said:


> Have you tried to disable the card in device manager before flashing it? This should not be required anymore but who knows, it might help.


OMG man, you were right

i just tried reflashing the 400W BIOS after disabling the card, and it worked now

I also get the correct subvendor!!! I owe you a beer



this is a crucial step, it should be mentioned in the flashing guide on the 1st page


----------



## Sheyster

*Double post*


----------



## Sheyster

mattxx88 said:


> OMG man, you were right
> 
> i just tried reflashing the 400W BIOS after disabling the card, and it worked now
> 
> I also get the correct subvendor!!! I owe you a beer



Glad it worked out!


----------



## mattxx88

i'm very happy with my card now
pushed the RAM to the limit at +1500, and the core is now stable


----------



## kx11

trying the new Wolfenstein: Youngblood w/ the KingPin


it's so sweet seeing it hold 2.2 GHz through the video


----------



## KCDC

I was hoping I got a bundle code with my cards to play youngblood, but alas, nothing... Guess I'll spend more money


----------



## J7SC

...at least the Quake 2 RTX demo is a free download and works great with NVLink/SLI https://www.nvidia.com/en-us/geforce/news/quake-ii-rtx-ray-traced-remaster-out-now-for-free/


----------



## KCDC

Haha, true. It was more a joke that all the other Gbyte cards appear to have that bundle deal, but the craziest version of their cards doesn't get free games. I did get a $20 steam credit when I registered them on the site. It'll go towards youngblood or that other one.


----------



## vasyltheonly

Well, really glad to be part of the 2080 Ti club, especially with the card I got! MSI Ventus 2080 Ti with Samsung GDDR6 RAM that can overclock to +150 core (2085 MHz) / +1350 memory (8352 MHz). This is the best I got when pushing my 3600X to 4.4 GHz (benchmark stable) with 3600 CL14-ish (very loose). 
https://www.3dmark.com/spy/7913959


----------



## silis

Oh yes, very happy to join the RTX 2080 Ti club. I just couldn't wait anymore and upgraded from an MSI GTX 1080 Gaming X non-Ti (had it 3 years) to an Asus 2080 Ti Strix OC (Samsung GDDR6). With a 1440p 144 Hz monitor it's a nice bump in fps, the sweet spot IMO. I've always been an MSI guy on the GPU side of things, but I have to admit this, my first Asus AIB card, is a very nice, sturdy product, and it runs about 15 C cooler than my old MSI 1080 Gaming X

Stock score

https://www.3dmark.com/3dm/37978992

EDIT: Woot, it sits on the desktop at 2560x1440 144 Hz with a 300 MHz core clock; the 1080 didn't go below 1200 MHz


----------



## kata40

For the MSI 2080 Ti Duke, will the 2080 Ti Gaming X Trio BIOS work without causing problems?


----------



## Gaffydk

mattxx88 said:


> OMG man, you were right
> 
> i just tried reflashing the 400W BIOS after disabling the card, and it worked now
> 
> I also get the correct subvendor!!! I owe you a beer
> 
> 
> 
> this is a crucial step, it should be mentioned in the flashing guide on the 1st page


Matt - what problem with flashing did you have before?
And glad you got it solved! 

Now I'm just waiting for a solution to the XUSB problem


----------



## mattxx88

Gaffydk said:


> Matt - what problem with flashing did you have before?
> And glad you got it solved!
> 
> Now I'm just waiting for a solution to the XUSB problem


Before, the flash was reported as fine in CMD, but afterwards the subvendor appeared as "NVIDIA" instead of the correct "MSI".
Second, looking at the "Advanced" tab in GPU-Z and checking "NVIDIA BIOS" correctly showed a maximum power target of 400W.
But when testing it, my GPU never went over 360W instead of the reported 400W.
Also the frequency was not stable (I'm under liquid cooling, so I have no temperature limit), and this was clearly a power management problem.

Now, having reflashed the correct way, I get the correct subvendor and perfect power management on my board, with a stable core frequency.

In a few words, flashing the BIOS without disabling the GPU in Device Manager seems to result in a half-flash; the GPU did not work the correct way.
Hope I've been clear enough, sorry for my bad English.


----------



## Gaffydk

Thanks - Glad you got it working!

Still looking for a solution for the problem Neosin and I are encountering


----------



## mattxx88

Gaffydk said:


> Thanks - Glad you got it working!
> 
> Still looking for a solution for the problem Neosin and I are encountering


Still experimenting last night, I got a board mismatch error using the normal version of nvflash.
Then I tried the modified nvflash64 version linked here: https://www.overclockersclub.com/guides/how_to_flash_rtx_bios/

and I got it done

Did you try this route?


----------



## Gaffydk

It does something a bit different - it runs nvflash in a new window, instead of the same one - but the result is the same XUSB error.

Ohh well - thanks for trying to help


----------



## KCDC

Good lord, the Aorus RGB software is horrendous, wouldn't work with iCue installed, and when it was installed and working, was difficult to get it to stick to anything. I used to think Aura Sync was the worst, but now I think this one's the winner for last. Unfortunate that I can't use JackNet's rgb sync with these cards, but I at least saved some low light neutral colors to the cards before uninstalling that crap. RGB isn't a huge deal, but it's fun to play with from a designer's standpoint.


Firestrike extreme with nvlink crashes at 2100 regardless of my temps. I believe nVidia is still working out NvLink bugs? At least that's what I hear from the gpu rendering side of things, which is why they haven't implemented memory pooling yet. I also get frame stuttering at the beginning of each scene. I read this has to do with GeForce Experience overlay not being on, but I don't have that installed. I have it installed via Steam, perhaps that's the culprit? About to try out the other single card benches, which have zero stutter. Hoping for higher clocks, but still pleased with what I have. I haven't been benching in a long time, so I may be late to the game with this info. FSE never stuttered like this in the past on other cards.


----------



## ntuason

KCDC said:


> Good lord, the Aorus RGB software is horrendous,


Couldn't agree more. My computer won't even shut down properly when RGB Fusion is installed, and when it does, it doesn't save the RGB settings...


----------



## KCDC

ntuason said:


> Couldn't agree more. My computer wont even shut down properly when RGBFusion is installed and when it does it doesn't save RGB settings...



It froze my machine when I booted. Had to go safe mode and uninstall. Even then, it wouldn't uninstall properly, had to delete the program folder. Aura's pretty bad, but they really showed how bad an app can be...


----------



## kata40

For the MSI 2080 TI Duke, can I find a better bios than the original one?


----------



## Sheyster

kata40 said:


> For the MSI 2080 TI Duke, can I find a better bios than the original one?



Yes, I own the Duke OC and I use the 380W GALAX BIOS from the front page.


----------



## J7SC

KCDC said:


> Good lord, the Aorus RGB software is horrendous, wouldn't work with iCue installed, and when it was installed and working, was difficult to get it to stick to anything. I used to think Aura Sync was the worst, but now I think this one's the winner for last. Unfortunate that I can't use JackNet's rgb sync with these cards, but I at least saved some low light neutral colors to the cards before uninstalling that crap. RGB isn't a huge deal, but it's fun to play with from a designer's standpoint.
> 
> 
> Firestrike extreme with nvlink crashes at 2100 regardless of my temps. I believe nVidia is still working out NvLink bugs? (...).



Aorus RGB, along with Asus Aura, was identified as a major security risk in late '18...presumably updated since, but apart from that I recall it was very buggy. One 'trick', at least on a dual-boot system (Win 7 64; Win 10), was to install it (subject to security requirements) in Win 7, set the RGB colours as you want them, reboot, uninstall the Aorus RGB software, and it would stay at the chosen settings. Might work with Win 10 as well, but I never bothered trying this step on that OS.

I haven't run into the Firestrike Ex issue, but if the same NVLink settings work w/o issue in TimeSpy Ex etc. for you, then I would not get too concerned. I believe the 2nd graphics test in FSE is particularly sensitive to VRAM oc...try dialing the VRAM oc back first with a given GPU oc and see what happens.


----------



## kata40

Sheyster said:


> Yes, I own the Duke OC and I use the 380W GALAX BIOS from the front page.


Work fine, no problems?


----------



## Sheyster

kata40 said:


> Work fine, no problems?



Yes, much better than the original BIOS.


----------



## KCDC

J7SC said:


> Aorus RGB, along with Asus Aura, was identified as a major security risk in late '18...presumably updated since, but apart from that I recall it was very buggy. One 'trick', at least on a dual-boot system (Win 7 64; Win 10), was to install it (subject to security requirements) in Win 7, set the RGB colours as you want them, reboot, uninstall the Aorus RGB software, and it would stay at the chosen settings. Might work with Win 10 as well, but I never bothered trying this step on that OS.
> 
> I haven't run into the Firestrike Ex issue, but if the same NVLink settings work w/o issue in TimeSpy Ex etc. for you, then I would not get too concerned. I believe the 2nd graphics test in FSE is particularly sensitive to VRAM oc...try dialing the VRAM oc back first with a given GPU oc and see what happens.



That's what I did for the software, they're using neutral colors. 

As far as my OCs, I can't get anything in 3DMark to go past 2100 at any temp or voltage. I got one run of FS Extreme to finish at 2100; after that it crashes. Tried with the RAM at stock and multiple settings. Samsung RAM on both, thankfully, but I don't think these cards will go past 2100 reliably. The serial numbers are sequential, but I'm not sure if that means they're from the same batch.


----------



## kot0005

Damn, this chip is really good.. 2205 MHz in Port Royal at just 1.068v using the Matrix BIOS on the Asus Strix OC

https://www.3dmark.com/pr/126860

2190 MHz in TimeSpy, but the clock wasn't at 2190 all the time here 

https://www.3dmark.com/spy/7939603


----------



## J7SC

KCDC said:


> That's what I did for the software, they're using neutral colors.
> 
> As far as my OCs, I can't get anything in 3dMark to go past 2100 at any temp or voltage. I got one run of FS Extreme to finish at 2100, after that it crashes. Tried with the ram at stock and multiple settings. Samsung ram on both thankfully, But I don't think these cards will go past 2100 reliably. The serial numbers are sequential, but not sure if that mean's they're from the same batch.



Is this just for Firestrike/Ex or 3DMark in general, including the newer ones (TimeSpy etc) ? Also, I presume it's SLI, not single card. My two Aorus XTR WB serial numbers are just two apart. Both have Micron though, and while Samsung would have been preferable, I have been getting decent oc VRAM results...bottom two sections in pic below have stock VRAM speed on the left and max bench-useful oc VRAM on the right. The top segments are not the usual GPU bench speeds btw, but just comparing how the voltage slider impacts max GPU oc in rendering. I find that leaving the voltage slider alone gets me slightly better overall results even at marginally lower oc, presumably by leaving a bit more in the overall power budget. 

Also, in case you ever want to take the Aorus XTR WB apart, i.e. to check whether there's 'guck' build-up etc or for cleaning, this vid gives some hints (around 4min 30s) :


----------



## kx11

i can't close the case, the card is dusty now


----------



## pt0x-

I'm officially a member now  

Bought the Asus 2080 ti Strix OC with a Heatkiller IV Strix and backplate, very happy with it!

Mounting the block was very straightforward, including the backplate. My 1080 Ti Strix had a Phanteks block, and I must say this block is better quality. It's in the details, like the smaller fins and better thermal pads. Other machining is comparable though.

First benchmark session results:

*Settings*

Bios: Galax 380W 
Vcore: 2100 / 2085 MHz @ 1075mV 
Vram: +1250 MHz (16,500 MHz effective)

*3DMark*

Time Spy: 14893
https://www.3dmark.com/spy/7943787

Port Royal: 10243
https://www.3dmark.com/pr/127025

Fire Strike: 29042
https://www.3dmark.com/fs/19998749

*Superposition*

1080P Extreme: 10286
4K Optimized: 13927

More info on Watercooling hardware used:
https://www.reddit.com/r/watercooling/comments/ciul3k/phanteks_evolv_x_custom_loop_updated/

More pics:
in above URL


----------



## Shawnb99

After two cards with Micron, I'm happy to see my new replacement card has Samsung memory


----------



## OdinValk

Just got my Strix 2080 Ti yesterday. It has quite an annoying amount of coil whine. Sounds like there's a cricket in my PC. Is this normal on these cards?


----------



## kata40

Sheyster said:


> Yes, much better than the original BIOS.


I'm not a gamer, but in what I do, video editing, I don't see much difference
Maybe more power


----------



## Sheyster

kata40 said:


> I'm not a gamer, but in what I do, video editing, I don't see much difference
> Maybe more power


If you're not already pushing the card to its limit (only 290W for the original BIOS) then don't bother flashing the GALAX BIOS. The reason to flash is for higher power limit.


----------



## kata40

Sheyster said:


> If you're not already pushing the card to its limit (only 290W for the original BIOS) then don't bother flashing the GALAX BIOS. The reason to flash is for higher power limit.


Adobe Premiere and even Adobe Media Encoder do, however, use the video card to the utmost.
I tried the EVGA UC BIOS, the Asus Strix OC one, and the GALAX one you mentioned here on the forum, but so far the Palit 2080 Ti Pro BIOS seems to fit best. With what I do in DVDFab using NVENC, it pulled 970 fps on a 1080/50p video.


----------



## schoolofmonkey

OdinValk said:


> Just got my Strix 2080ti yesterday. It has quite an annoying amount of coilwhine. Sounds like there is a cricket in my PC. Is this normal on these cards?


Mine's the same, I don't usually hear it unless it's quiet, which isn't very often.
There's not much you can do about it really, it's certainly not going to get RMA'd.


----------



## J7SC

Sheyster said:


> If you're not already pushing the card to its limit (only 290W for the original BIOS) then don't bother flashing the GALAX BIOS. The reason to flash is for higher power limit.



That's a good point to reiterate. With NVIDIA's multivariate Boost 4.0 algorithm, max GPU MHz should not be the 'only focus' folks look at... 

...I last ran Port Royal back in late March w/ snow outside, and my two 2080 Tis managed to get into overall position 28 in the 3DMark HoF then...over time, that result dropped to position 90. Yet when I reran it w/ the same GPU settings to check updates in system RAM, I had a small improvement. But with the hottest days of the year here over the last 72 hrs (ambient about 30 C in my home office), I dialed the clocks down a bit, by 45 MHz or so...in the end, I picked up over 1,000 pts and was back up to position 28 in the HoF. By dialing down the clocks (and indirectly, the related volts), also in regard to the higher ambient, the overall available power budget did the rest https://www.3dmark.com/hall-of-fame-2/port+royal+3dmark+score+performance+preset/version+1.0


----------



## KCDC

J7SC said:


> Is this just for Firestrike/Ex or 3DMark in general, including the newer ones (TimeSpy etc) ? Also, I presume it's SLI, not single card. My two Aorus XTR WB serial numbers are just two apart. Both have Micron though, and while Samsung would have been preferable, I have been getting decent oc VRAM results...bottom two sections in pic below have stock VRAM speed on the left and max bench-useful oc VRAM on the right. The top segments are not the usual GPU bench speeds btw, but just comparing how the voltage slider impacts max GPU oc in rendering. I find that leaving the voltage slider alone gets me slightly better overall results even at marginally lower oc, presumably by leaving a bit more in the overall power budget.
> 
> Also, in case you ever want to take the Aorus XTR WB apart, i.e. to check whether there's 'guck' build-up etc or for cleaning, this vid gives some hints (around 4min 30s) : https://www.youtube.com/watch?v=glpZ0nEe78Y



Thanks for chiming in. I'm trying superposition. I don't think it's temps or the blocks, once I got all the air out, they stay under 50c. They get up to about 47c on benches. So, you're leaving the voltage slider at 0 instead of 100 in MSI AB? I'll give that a shot.


----------



## J7SC

KCDC said:


> Thanks for chiming in. I'm trying superposition. I don't think it's temps or the blocks, once I got all the air out, they stay under 50c. They get up to about 47c on benches. *So, you're leaving the voltage slider at 0 instead of 100 in MSI AB? I'll give that a shot.*



Yes, seems to work better re. overall score (unless you have an XOC BIOS w/ really high or unlimited max power). Also, only half tongue-in-cheek, I figure those gazillion RGB LEDs probably eat up between 10 and 15 watts per card; one of these days I'll disconnect them for a higher available power envelope on this stock BIOS


----------



## KCDC

Well, I was able to get a consistent 2115-2130 in SP 4K benches. 8019 on the RAM, might be able to push that more. Temps hit about 48 C @ 2115, 1.043v. At 2130-2145 I get crashes. Voltage slider at 0; it doesn't seem to make a difference at 100 anyway. 



Division 2 7680x1440 won't run at these settings, or anything over 2100. Gonna try some other games for the hell of it, see what they can do.


----------



## J7SC

KCDC said:


> Well, I was able to get consistent 2115-2130 in SP 4k benches. 8019 on the RAM, might be able to push that more. Temps hit about 48c @2115 1.043v. 2130-2145 I get crashes. Voltage slider at 0, doesn't seem to make a difference at 100 anyway.
> 
> 
> 
> Division 2 7680x1440 won't run at these settings, or anything over 2100. Gonna try some other games for the hell of it, see what they can do.



7680 x 1440 is pushing a lot of pixels! Re. single 4K as well as multi-monitor benches and games, one thing you might do is have a GPU-Z 'sensor window' open for each card separately, and set power consumption to 'show highest reading'. For Superposition 4K, 8K etc., both my Aorus XTR WBs read between 373W - 380W peak each (stock BIOS), and your cards should be similar.
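If you'd rather log than watch the sensor windows, GPU-Z can also write a sensor log, and the peak can be pulled out afterwards. A sketch, assuming a comma-separated log with a 'Board Power Draw [W]' column; adjust the column name to whatever your log actually uses:

```python
# Pull the peak power reading out of a GPU-Z-style sensor log. This is a
# sketch: the column name "Board Power Draw [W]" and the comma-separated
# layout are assumptions about the log format, so adapt them to your file.
import csv
import io

def peak_board_power(log_text: str, column: str = "Board Power Draw [W]") -> float:
    """Return the highest power reading found in a sensor log."""
    reader = csv.DictReader(io.StringIO(log_text))
    readings = []
    for row in reader:
        cell = (row.get(column) or "").strip()
        try:
            readings.append(float(cell))
        except ValueError:
            continue  # skip blank or malformed rows
    return max(readings)

# Tiny made-up sample in the assumed layout:
sample = (
    "Date,Board Power Draw [W],GPU Temperature [C]\n"
    "2019-07-28 20:01,372.1,46\n"
    "2019-07-28 20:02,379.8,47\n"
)
print(peak_board_power(sample))   # -> 379.8
```

Run one log per card and you get the same 'highest reading' number the sensor window shows, without babysitting two windows during a bench.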


----------



## TK421

Anything higher than 380W on a ref. PCB?


----------



## kata40

Sheyster said:


> Yes, I own the Duke OC and I use the 380W GALAX BIOS from the front page.


One last question: with that BIOS, how far have you managed to overclock the card?


----------



## Sheyster

kata40 said:


> One last question: with that BIOS, how far have you managed to overclock the card?


Every GPU is different, it's a silicon lottery as they say. In my case I can run at 2100 MHz in games. I have not pushed it higher, there is no need to since I don't bench anymore. Your card may clock higher or lower than mine.


----------



## Xdrqgol

Hi all,

I am looking for a new 2080 Ti and wanted to get your opinion:

Is the INNO3D GeForce RTX 2080 Ti Twin X2 any good?

I am looking for something cheap that I can flash and maybe get some extra MHz from.

Looking for recommendations, if any!

Thanks!


----------



## ESRCJ

Did Port Royal performance improve through recent driver updates? My score went up almost 6 percent with the same core and memory frequencies on my 2080 Ti.

https://www.3dmark.com/compare/pr/128011/pr/90096#


----------



## J7SC

gridironcpj said:


> Did Port Royal performance improve through recent driver updates? My score went up almost 6 percent with the same core and memory frequencies on my 2080 Ti.
> 
> https://www.3dmark.com/compare/pr/128011/pr/90096#



I only got a 0.9% gain from the driver update from 430.86 to 431.60 for my dual 2080 Ti setup w/ stock Bios (score went from 19671 to 19848). *That said*, Win 10 / 1903 and related Windows updates along with recent AMD chipset driver updates for my TR certainly helped with a 5% gain 

https://www.3dmark.com/hall-of-fame-2/port+royal+3dmark+score+performance+preset/version+1.0


----------



## kata40

Thanks for answer Sheyster


----------



## AvengedRobix

ATTENTION PLEASE: anyone who has a Galax 2080 Ti OC Lab edition and uses HWiNFO for monitoring: with the latest version they added support for the Infineon chip, which shows you the temperature of the GPU's VRM. Disable monitoring of this chip immediately. If it stays active, after 1 or 2 minutes you get nothing but artifacts and your GPU crashes. Please share this info with other people


----------



## majestynl

@Gaffydk @sdubbin @Neosin @RAGEdemon or any other member with suggestions!

Did you guys find a solution for flashing another BIOS? I'm also getting the message below with the MSI Duke OC:

------------

XUSB FW component of the input GPU firmware image is imcompatible, please use
a newer version of GPU firmware image for this product.



BIOS Cert 2.0 Verification Error, Update aborted.


Nothing changed!



ERROR: Invalid firmware image detected.

----------------

Looks like our cards shipped with newer firmware supporting some new XUSB component. I haven't yet tried downgrading the official BIOS and then flashing the 380W one.

*Edit: See my post below. Figured out whats going wrong here!*


----------



## Talon2016

majestynl said:


> @Gaffydk @sdubbin @Neosin @RAGEdemon or any other member with suggestions!
> 
> Did you guys find a solution for flashing another bios. Im also getting below message with MSI Duke OC :
> 
> ------------
> 
> XUSB FW component of the input GPU firmware image is incompatible, please use
> a newer version of GPU firmware image for this product.
> 
> 
> 
> BIOS Cert 2.0 Verification Error, Update aborted.
> 
> 
> Nothing changed!
> 
> 
> 
> ERROR: Invalid firmware image detected.
> 
> ----------------
> 
> Looks like our cards shipped with newer firmware supporting some new XUSB component. I haven't yet tried downgrading the official BIOS and then flashing the 380W one.


Same issue with my FTW3, which I've had for a few months. Nothing has worked; it appears to be a new firmware version that is not compatible with NVFlash, the modded NVFlash, or any other 'old' vBIOS.


----------



## J7SC

TweakTown had this bit of news re. the new EVGA Precision X1 with boost lock. I guess it is a bit like the MSI AB 'voltage curve' approach...


----------



## kx11

Tried that on my KP; it doesn't work as it should. I couldn't hold a 2000 MHz core clock with that thing enabled.


----------



## majestynl

Talon2016 said:


> Same issue with my FTW3 that I've had a few months. Nothing has worked, it appears to be a new firmware version that is not compatible with Nvflash, the modded Nvflash, or using any other 'old' vbios.


Thanks for the info. Do you still have the FTW3?

It's really sad if we can't upgrade the BIOS. This card is so power-limited, as we know. There must be something. I hope it's only a compatibility issue with NVFlash, and not new hardware that isn't supported by some older BIOS versions. 

Yesterday I saw an old post with a modded NVFlash for the 900 series with roughly the same issue. Maybe we need to modify NVFlash?


----------



## Talon2016

majestynl said:


> Thanks for the info. Do you still have the FTW3?
> 
> It's really sad if we can't upgrade the BIOS. This card is so power-limited, as we know. There must be something. I hope it's only a compatibility issue with NVFlash, and not new hardware that isn't supported by some older BIOS versions.
> 
> Yesterday I saw an old post with a modded NVFlash for the 900 series with roughly the same issue. Maybe we need to modify NVFlash?


No, I whipped out my cheap USB programmer from Amazon and flashed the vBIOS chip directly, and had no issue going that route. A bit of an annoyance, but having a dual-BIOS setup lets me put an alternate vBIOS on the "OC" mode and flip back and forth whenever I want.


----------



## majestynl

So I figured out what's wrong with flashing the Galax 380W BIOS. The Galax version is an older build with an older XUSB firmware version.
NVFlash won't accept an older XUSB FW if a newer one is already installed in the currently loaded vBIOS!

My vBIOS came with *XUSB-FW Version ID: 0x70090003*, while the current Galax 380W BIOS on TPU has the older *XUSB-FW Version ID: 0x70060003*.

What I did:

1. Backed up my original ROM and flashed the same one back, to check that NVFlash etc. was working properly.
2. Checked the TPU BIOS library for a BIOS with a better power limit that was built the same month as, or later than, my current BIOS version.
3. Found the EVGA XC and MSI Sea Hawk X (reference board).
4. Double-checked that the XUSB FW was at least the same version. It was!
5. Flashed the EVGA BIOS (338W) with NVFlash (Board ID Mismatch version).

The flash succeeded and the card is already running better. Need to check which BIOS is best for now.
If we want to use the 380W BIOS, we need a newer BIOS file from that card with the same or newer XUSB FW; otherwise we can't flash it!

I believe it's not only the MSI Duke OC; most newly bought cards are running with an updated XUSB FW in their latest vBIOS!
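For anyone curious, the gate described above amounts to a simple version comparison. A minimal sketch (my assumption of the rule NVFlash applies, using the version IDs from this post; not NVFlash's actual code):

```python
# Assumed rule: the flasher rejects an image whose XUSB FW version ID is
# older than the one in the currently installed vBIOS.
def can_flash(installed_xusb_fw: int, image_xusb_fw: int) -> bool:
    return image_xusb_fw >= installed_xusb_fw

installed = 0x70090003   # XUSB-FW in my current vBIOS (MSI Duke OC)
galax_380w = 0x70060003  # older XUSB-FW in the TPU Galax 380W BIOS

print(can_flash(installed, galax_380w))  # False -> "Update aborted"
print(can_flash(installed, 0x70090003))  # True  -> EVGA XC flash succeeds
```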


_See screenshot below: MSI Duke OC running an EVGA BIOS at 338W_


----------



## newari

So I lost all my sensibility regarding spending money on computer hardware and joined the club with an ASUS Turbo model.
My first reaction is confusion: this model is supposed to be the non-A variant, and GPU-Z reports device ID 1E04, but I could swear I saw 300A etched on the GPU when installing the water block...
The card has Samsung memory, but the OC is kind of disappointing (compared to the high OCs here). I start getting black screens and driver crashes at +1200 (8200 in Afterburner). The core with a +150 offset hovers in the 1970-2070 MHz range depending on load (max temp around 52°C). 

The non-A BIOS link on the front page leads nowhere, but I assume the newest Palit Dual BIOS works.
And what happens if I try to flash an A-version BIOS on this card? Because again, I'm like 80% certain the GPU said 300A on it.


----------



## Sheyster

majestynl said:


> I believe not only MSI Duke OC, but most new bought cards are running with an updated XUSB FW on their last vBios!


Did you try disabling the video card in Device Manager before flashing it? This worked for an MSI Duke OC owner who was not able to flash his card; before he did that, nothing else had worked.


----------



## majestynl

Sheyster said:


> Did you try disabling the video card in device manager before flashing it? This worked for an MSI Duke OC owner who was not able to flash his card. Before he did that nothing else had worked.


Definitely. That wasn't the issue here. Like I mentioned in my post:
you can't flash a BIOS with a lower XUSB FW if you already have a newer version of the XUSB FW running. Nothing more!
Even with an older BIOS from the same brand and model!

Anyway, there is also a Gigabyte BIOS with a 366W PL. That one has the same XUSB FW version!
All working!
@Vipeax Maybe mod NVFlash further to bypass this check... if possible!
Because we will see more people having this issue. I found out most new cards have a higher XUSB version than the 380W BIOS that's available on TPU!


----------



## liquoredonlife

Hi all, I'm interested in getting a 2080 Ti.

Goals:
- swap to a nickel plated full cover block
- run it in SFF (ncase M1)
- modest overclocks (memory bump initially, possible bios flash)
- 1440p/144hz+ in most games

I can get a 2080 Ti FE at a slight discount. Would it be good to go with this card and pair with an EK block?

Concerns:
- nvidia support seems iffy in case of issues/RMA
- is this the best way to go as far as money and fewest headaches are concerned, in support of specified goals? or should I go with another brand's reference board despite said discount?

Thanks! Thrilled to be building a new rig again and lay to rest this i7-2600k/GTX680.


----------



## MrTOOSHORT

liquoredonlife said:


> Hi all, I'm interested in getting a 2080 Ti.
> 
> Goals:
> - swap to a nickel plated full cover block
> - run it in SFF (ncase M1)
> - modest overclocks (memory bump initially, possible bios flash)
> - 1440p/144hz+ in most games
> 
> I can get a 2080 Ti FE at a slight discount. Would it be good to go with this card and pair with an EK block?
> 
> Concerns:
> - nvidia support seems iffy in case of issues/RMA
> - is this the best way to go as far as money and fewest headaches are concerned, in support of specified goals? or should I go with another brand's reference board despite said discount?
> 
> Thanks! Thrilled to be building a new rig again and lay to rest this i7-2600k/GTX680.


Yes, the FE is a fine card with a beefy VRM and is compatible with most BIOSes. The Galax 380W BIOS is the most popular (recommended). You can use the Kingpin XOC BIOS to get 1.125V under load instead of the 1.09V on the other BIOSes. Just make sure you have plenty of rad to keep your card cool. And as long as you put the card back to its original state (air cooler), you should be fine with RMA. I had an FE with an EK block; it ran great and flow rate was good too. I now have a KPE with an HC block, and the flow rate isn't that good.


----------



## J7SC

Has anyone here tried the new NVIDIA 'Studio' driver (431.70) compared to the latest 'Game Ready' WHQL driver (431.60) on their 2080 Ti? Any feedback re. install and performance? I read that a handful of folks have had install issues with the Studio driver. Tx


----------



## KCDC

J7SC said:


> Has anyone here tried the new NVidia 'studio driver' (431.70) as compared to the latest 'Game Ready' WHQL driver (431.60) on their 2080 Ti ? Any feedback re. install and performance ? I read that a handful of folks have install issues with the studio driver. Tx



I've been using it, and now the DCH version after some confusion on my part. Haven't done much gaming, but The New Colossus is still the same FPS. I'm only using it due to the number of 3D/2D/VFX programs I'm using; otherwise I'd just stick with the Game Ready driver. One note: with the DCH driver I had to grab the NVIDIA Control Panel separately from the Windows Store, which was odd. Had to look that one up. Not sure if that's just my situation, since I DDU'd a standard version first.


----------



## J7SC

KCDC said:


> I've been using it and now the DCH version after some confusion on my part. Haven't done much gaming, but new colossus is still the same fps. I'm only using it due to the amount of 3d/2d/vfx programs I'm using. Otherwise, I'd just stick with the game-ready. One note is that with the DCH driver, I had to grab the nvidia control panel separately from the windows store which was odd. Had to look that one up. Not sure if that's just my situation since I DDU'd a standard version first.


 
Thanks much! I'm interested in some 10-bit capabilities for Adobe and will probably give this a shot (after I ghost the current M.2 boot drive, just in case...)

---

EDIT: For general interest, I noticed this Phobya 400 x 85 full copper rad at a decent price...just the ticket if you run a custom / XOC bios on your 2080 Ti but want slower-turning (180mm) fans for noise control. Of course, mounting space might be a bit of an issue, depending on your case :thinking:


----------



## KCDC

I'm not sure if this one includes the latest "security issues" brought up since this was published before that came out.


----------



## J7SC

KCDC said:


> I'm not sure if this one includes the latest "security issues" brought up since this was published before that came out.



...I also noticed that the studio driver did not carry the 'WHQL' moniker but I'll check NVidia's site again in a few days


----------



## KCDC

Just FYI: unless you have a true 10-bit 4:4:4 output monitor, like a pro reference monitor, you really won't benefit, since we can go up to 32-bit floating point on the software side but still output 8-bit. For my purposes in VFX/comp, I'd love to use it on a 10-bit reference monitor, but that's a lot of cash, so I end up gambling until I hear back from the client. You're really only going to get the full benefit with a ProArt or a Panasonic or NEC reference, i.e. lots of cash. I was hoping to grab an old one from the studio I'm at, but they're all BNC and DVI and only 1080p. I'm figuring out if I can find a BNC-to-USB-C breakout box; then I'd be good to go, but most stuff is 4K now, so it wouldn't be much help to me.


----------



## maneil99

I have flashed my 980 Ti and 1080 Ti, but this seems a bit different?

I have a 2080 Ti Gaming X Trio. Is it possible to flash this for a higher power limit? What BIOS would I use?


----------



## J7SC

KCDC said:


> Just fyi, Unless you have a true 10-bit 4:4:4 output monitor, like a pro reference, you really won't benefit since we can go up to 32 bit floating on the software side, but still output 8. For my purposes in vfx/comp, I'd love to use it on a 10-bit ref monitor, but it's a lot of cash, so I end up gambling until I hear back from client. You're really only going to get the full benefit with a ProArt or a Panny, NEC reference.. i.e. lotsa cash. I was hoping to grab an old one from the studio I'm at, but they're all BNC and DVI and only 1080p. Figuring out if I can find a bnc-to-usb-c breakout box, then I'd be good to go, but most stuff is 4k now, so it wouldn't be much help to me.


 
tx - too bad my old monochrome 14-inch CRT TV won't benefit ... but I'll try it on a 55-inch 10-bit 4:4:4 IPS first; otherwise the office just has to $pring for new panels. 

BTW, according to NVIDIA's site, drivers 431.60 and 431.70 (the latter being the Studio driver) are allegedly unaffected by the security bug (until another is found, due to their built-in big data DLLs)


----------



## thauch

Excited to join the club. Just scored a Gigabyte Gaming OC version for dirt cheap on Marketplace.

Bought the EK Classic block and replaced my old 1080. I am one happy camper!

I do have one question though. I bought the matching EK backplate, and it isn't incorporated into cooling the backside. The instructions don't say anything about using pads or using it for cooling. This is the Classic, which I needed because of the RGB header cutout on the block. The Vector series backplate, however, comes with pads and is used for cooling.

Sent from my iPhone using Tapatalk


----------



## Angrycrab

Does anyone know how to lock the card at 2100 MHz? My card is shunt modded, so I don't have power issues. The clock speed seems to drop with temperature.


----------



## NexusVibee

Angrycrab said:


> Anyone knows how to lock the card at 2100mhz? My card Is shunt modded so I don’t have power issues. The clock speed seems to drop with temperature.


I have yet to find a way to lock the clock without losing performance. Using the curve to lock max voltage and clock makes you lose performance; I'm not sure why this is. If someone knows how to lock clock and voltage without losing perf, I'd love to know.


----------



## ESRCJ

Angrycrab said:


> Anyone knows how to lock the card at 2100mhz? My card Is shunt modded so I don’t have power issues. The clock speed seems to drop with temperature.


Your clock speed will drop if you pass a particular temperature threshold. From what I know, there are a few different thresholds, and passing each one will drop your frequency by 15 MHz. Each threshold is BIOS-specific. Mine will drop 15 MHz if I ever pass 40°C, although there are some BIOSes in which the drop occurs at even lower temperatures. 

If you're using MSI Afterburner, you can lock the frequency by selecting a point on the VF "curve" and pressing L, then apply. The VF "curve" is really a step function and the applied voltage will be the smallest value for a given frequency step. You will only maintain the locked frequency as long as you don't trigger the power limit, one of the temperature thresholds, and if the GPU is actually stable at that voltage-frequency pair.
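The threshold behavior can be sketched like this (a toy model; the 15 MHz step comes from the post, but the threshold values here are made-up examples, since the real ones are BIOS-specific):

```python
# Toy model of temperature-based clock bins: each threshold crossed
# costs one 15 MHz drop from the locked frequency.
def effective_clock(locked_mhz: int, temp_c: float,
                    thresholds=(40, 48, 56), step_mhz=15) -> int:
    drops = sum(1 for t in thresholds if temp_c > t)
    return locked_mhz - drops * step_mhz

print(effective_clock(2100, 35))  # 2100 (below every threshold)
print(effective_clock(2100, 45))  # 2085 (one drop)
print(effective_clock(2100, 60))  # 2055 (three drops)
```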


----------



## Angrycrab

ESRCJ said:


> Your clock speed will drop if you pass a particular temperature threshold. From what I know, there are a few different thresholds and passing each one will drop your frequency by 15MHz. Each threshold is BIOS-specific. Mine will drop 15MHz if I ever pass 40C, although there are some BIOS in which the drop occurs at even lower temperatures.
> 
> If you're using MSI Afterburner, you can lock the frequency by selecting a point on the VF "curve" and pressing L, then apply. The VF "curve" is really a step function and the applied voltage will be the smallest value for a given frequency step. You will only maintain the locked frequency as long as you don't trigger the power limit, one of the temperature thresholds, and if the GPU is actually stable at that voltage-frequency pair.


Tried setting a curve and pressing L, but unfortunately clock drops still happen due to temps. I guess it's designed that way.

Thanks anyways.


----------



## Sheyster

Angrycrab said:


> Tried setting a curve and pressing L but unfortunately Clocks drops still happen due to temps. I guess it’s designed that way.
> 
> Thanks anyways.


I'm using the F+L method successfully. You have to do a little "recon" if you want to target a certain frequency, to account for the bin drops due to temp. For instance, if your card settles in at 2070 MHz at 70°C when you actually dialed in 2100 MHz, you should crank up the fan slightly (I recommend using a fixed fan speed with these cards) and bump up the core clock a few bins in your AB curve (typically 30, 45, or 60 MHz). The card will briefly run a little faster than your target speed, then settle in nicely at your target, assuming voltages are okay at the speed you targeted.
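The "recon" arithmetic above can be written down in a few lines (a sketch of the idea, not any tool's actual logic; the 15 MHz bin size is taken from the posts above):

```python
# How many 15 MHz bins to add so the card settles on target after temp drops.
BIN_MHZ = 15

def extra_offset(target_mhz: int, settled_mhz: int) -> int:
    deficit = target_mhz - settled_mhz
    bins = -(-deficit // BIN_MHZ)  # ceiling division to whole bins
    return bins * BIN_MHZ

# Example from this post: dialed in 2100 but settling at 2070 at 70 C,
# so bump the curve by two bins (30 MHz).
print(extra_offset(2100, 2070))  # 30
```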


----------



## jlp0209

*Edit: Problem solved. The card was crashing at stock in Time Spy and the Heaven benchmark, with occasional stutter in games. EVGA Precision X1 was the culprit. Uninstalled it and replaced it with Afterburner. Also connected the 2x ML120 radiator fans to motherboard PWM headers, which can be controlled based on PCIEX_16 temp, which is great. Shaved off several degrees with an OC of +125 to +150 on the core, which gives me 2085-2100 MHz. In Time Spy, F1 2019, and Heaven my max GPU temp never exceeds 70°C now. Phew.*

_____________________

Just upgraded from a 1080 Ti SC2 Hybrid to a 2080 Ti FTW3 Hybrid. I have a question about high temps, the answer may just be I need to buy a new case. 

I want the smallest ATX case possible and am using a Corsair 400C, simply with a 280mm AIO CPU cooler mounted at the front of the case as intake and my Hybrid 120mm GPU radiator mounted at the rear as exhaust. No other case fans. Running a 4K 144Hz G-sync monitor. 

I was alarmed when I first ran the 2080 Ti and hit 86°C on the GPU in the Time Spy benchmark using a +75 core OC. Memory was also mid-80s. I adjusted the fan curve to aggressive in EVGA X1 and it has since improved; there may have been air bubbles or something out of the box. 

I've replaced the stock EVGA radiator fan with 2x ML120 fans in push-pull. The 2nd fan is connected to the AUX fan connector on the GPU itself. Also added a crappy Corsair stock 120mm case fan at the top of my case for intake blowing right into the radiator fan. 

The GPU reaches 74°C under load in F1 2019, and the Memory 3 ICX reading gets to 80°C. These temps are with a +75 core OC getting me 2025-2040 MHz, dropping to 1980-1995 MHz when temps break 70°C. 

For comparison, when I upgraded my monitor I saw my 1080 Ti Hybrid reach mid 60s celsius under load when gaming at 4k / 144Hz with F1 2019. 

I know this GPU runs hotter than the 10 series. I'm also worried I'm suffocating the GPU fans / radiator with this setup.

I'm looking into the Corsair Obsidian 500D case so that I can run 2x Noctua 140mm fans at front as intake, 280 AIO CPU cooler at the top as exhaust, and the GPU radiator at rear as exhaust. Do you think I'll see lower temps by switching to this set up? I really don't want to downgrade my CPU cooler to a 240mm AIO and would like the 280mm at the top, which is why I'm looking at the 500D. Thank you very much for any input.


----------



## majestynl

Is there anybody over here with a Galax/KFA2 380W PL card? Maybe you can share your BIOS. 
I need a different one from the one on TPU; the XUSB FW on that one is too low!


----------



## technodanvan

*August 2019 Foldathon!*

Hello 2080Ti Owner's Club! 

I apologize for potentially derailing otherwise productive conversation, but this will only take a moment (yes this is a canned message I'm using in other clubs too, sorry)

You might have noticed the banner link to the upcoming Foldathon, August 2019 Edition. The OCN Folding team has been losing 24/7 membership for some time, and to make up for that we encourage other OCN users to jump in for a couple of days and donate a little bit of CPU and GPU time to help find a cure for a variety of diseases, notably various cancers but also Alzheimer's and others. We'd really appreciate it if you would join us!

Link to the Foldathon is here: August 2019 Foldathon

Link to the Stanford Folding@home website is here: Folding@home Windows Download Page

I promise it's easy to set up! You don't need to worry about changing drivers or anything like that. Just do the following:

1a. If you don't already have an FAH passkey, create one here: Passkey Request Form

1b. If you haven't had a passkey before, you might want to run this for a day or two leading up to the Foldathon in order to qualify for bonus points.

2. Download/run the installer: Folding@home Windows Download Page

3. Once installed it'll autorun; enter the account and passkey you just created, and enter "37726" as your team number for OCN

4. Once you have it working, head to the Foldathon page (linked above) to register!

After that, you're pretty much done! If you don't want your CPU to fold, you can either pause it individually or disable it completely in the advanced controller: Configure -> Slots -> Select CPU -> Select 'Remove' -> Done.

This whole process will take less than ten minutes, I promise. Put that studly computer to work and help us out! You can always shut down or remove the FAH controller afterwards.


----------



## Tolkmod

Proud owner of an ASUS ROG Strix 2080 Ti that'll be using an EK full coverage block once my new system is built.

Right now I've got it plugged into my older system and it's working wonders but I haven't had a chance to really tweak and play with it yet. Can't wait to pair it with the new hardware and really make this beast shine.


----------



## kx11

Saw this thing available; someone might find it useful:

https://galaxstore.net/Galax-GeForce-RTX-2080Ti-HOF-10th-Anniversary-OC-Lab-Edition_p_204.html


----------



## schoolofmonkey

kx11 said:


> saw this thing available , someone might find it useful
> https://galaxstore.net/Galax-GeForce-RTX-2080Ti-HOF-10th-Anniversary-OC-Lab-Edition_p_204.html


If anyone can get their hands on a Galax HOF, I'd say do it. I had the 780 Ti version (when the company was called Galaxy); that card blew me away with performance.


----------



## ntuason

Goddamn that Galax GeForce RTX 2080Ti HOF 10th Anniversary OC Lab Edition looks beautiful.


----------



## Jpmboy

it sure is a good looking card... a bit pricey this late in the product cycle tho.


----------



## kx11

I wonder if watercooling this card is as simple as hooking tubes up to that terminal?

Too bad the HOF AI app is broken as hell; it's been in beta since last November.


----------



## J7SC

Jpmboy said:


> it sure is a good looking card... a bit pricey this late in the product cycle tho.


 
^^ This ... I posted about the card in another thread back during Computex, but another two months have passed since then, with on/off rumors of a 2080 Ti Super. Also, there was an earlier version of the Galax 2080 Ti HOF OC Lab available quite early in the 2080 Ti product cycle (below). I wonder if the cards are the same PCB-wise, just with a different, though admittedly very nice, cooling setup with the hybrid cooler addition.


----------



## JustinThyme

Tolkmod said:


> Proud owner of an ASUS ROG Strix 2080 Ti that'll be using an EK full coverage block once my new system is built.
> 
> Right now I've got it plugged into my older system and it's working wonders but I haven't had a chance to really tweak and play with it yet. Can't wait to pair it with the new hardware and really make this beast shine.


Good cards. The EK blocks on these are a bit lacking, though. The only blocks I'd put on them would be Phanteks or Heatkiller. For some reason the flow to the main VRMs on the EK block is lacking, as in almost nonexistent; VRM temps on air are actually better. It's a strange card layout, with a choke on the secondary VRMs mounted 90° off from the rest. I asked what the purpose of this was, and it turns out the Strix O11G Gaming is identical to the Matrix; the twisted choke is there to mount the AIO on the Matrix. 

If you have the card that uses the reference design, the EK block may work OK, but it looks like you have what I bought two of, the O11G.


----------



## Bradwell

I have 2 Strix OC 2080 Tis and was wondering what the best compatible BIOS is for them. The Kingpin BIOS is the best, but I don't think it's compatible. The Galax BIOS offers +50% power, but yet again there are compatibility issues. The 1000W Strix BIOS offers +0% power, so I don't know how that compares to stock. Thanks!


----------



## Asmodian

Bradwell said:


> The 1000W Strix BIOS offers +0% power, so I don't know how that compares to stock. Thanks!


Wait, a 1000W BIOS is way above stock... at 1000W the power limit is effectively disabled so an additional +15% would be really pointless.


----------



## Tolkmod

JustinThyme said:


> Good cards. EK blocks on these are a bit lacking though. Only thing Id put on them would be phanteks or heatkiller. For some reason the flow to the main VRMs on the EK is lacking as in almost non existent. VRM temps on air is actually better. Strange card layout with a choke on the secondary VRMs mounted 90s off from the rest. I asked what the purpose for this was and it turns out the strix O11G gaming is identical to the Matrix. The twisted choke is to mount the AIO on the Matrix.
> 
> 
> If you get the card that accepts the reference design the EK may work OK but it looks like you have what I bought two of, the O11G.


Even with their purpose built Strix block for this card, you're still seeing the same issues?


----------



## Sheyster

Asmodian said:


> Wait, a 1000W BIOS is way above stock... at 1000W the power limit is effectively disabled so an additional +15% would be really pointless.


Most folks will dial down the power limit with that BIOS, to 50% or 45%.
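In other words, the slider just scales the BIOS power target, so the math is trivial (illustrative arithmetic only, assuming the slider percentage applies directly to the BIOS limit):

```python
# Effective power limit = BIOS power target scaled by the slider percentage.
def effective_limit_w(bios_limit_w: float, slider_pct: float) -> float:
    return bios_limit_w * slider_pct / 100

print(effective_limit_w(1000, 45))  # 450.0
print(effective_limit_w(1000, 50))  # 500.0
```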


----------



## jura11

JustinThyme said:


> Good cards. EK blocks on these are a bit lacking though. Only thing Id put on them would be phanteks or heatkiller. For some reason the flow to the main VRMs on the EK is lacking as in almost non existent. VRM temps on air is actually better. Strange card layout with a choke on the secondary VRMs mounted 90s off from the rest. I asked what the purpose for this was and it turns out the strix O11G gaming is identical to the Matrix. The twisted choke is to mount the AIO on the Matrix.
> 
> 
> If you get the card that accepts the reference design the EK may work OK but it looks like you have what I bought two of, the O11G.


Hi there 

Are you sure VRM temperatures are better on air than water? I have an Asus RTX 2080Ti Strix with the EKWB Vector RTX 2080Ti Strix block and no issues there; temperatures are 36-42°C max, only in hotter ambient do I see 44°C, and the highest VRM temps I have seen are in the 46-49°C range.

https://i.imgur.com/gQ1LWgi.jpg


Hope this helps 

Thanks, Jura


----------



## jura11

Bradwell said:


> I have 2 Strix OC 2080tis and I was wondering what the best compatible Bios is for them. The kingpin bios is the best bios but I don’t think they’re compatible, the galax bios offers +50% power but yet again there’s compatibility. The 1000w Strix bios offers +0% power so I don’t know how hat compares to stock. Thanks!


Hi there 

Best BIOS for the Asus RTX 2080Ti Strix is the Matrix BIOS in my view; you will not lose any HDMI or DP ports like with the Galax 380W BIOS. 

The Matrix BIOS is 360-365W I think, and it's OK there; the stock Strix BIOS is just rubbish at 315W. 

The XOC BIOS is OK, but VRAM doesn't downclock to idle speeds and in my case that caused a few issues; other than that, no issues. With XOC my best OC is 2190MHz and with the Matrix BIOS 2145-2160MHz. I personally set the power limit with XOC to 42-45%.

Hope this helps 

Thanks, Jura


----------



## thauch

jura11 said:


> Hi there
> 
> Best BIOS for the Asus RTX 2080Ti Strix is the Matrix BIOS in my view; you will not lose any HDMI or DP ports like with the Galax 380W BIOS.
> 
> The Matrix BIOS is 360-365W I think, and it's OK there; the stock Strix BIOS is just rubbish at 315W.
> 
> The XOC BIOS is OK, but VRAM doesn't downclock to idle speeds and in my case that caused a few issues; other than that, no issues. With XOC my best OC is 2190MHz and with the Matrix BIOS 2145-2160MHz. I personally set the power limit with XOC to 42-45%.
> 
> Hope this helps
> 
> Thanks, Jura




The Galax loses HDMI/DP? Glad I didn't flash. I have the Gigabyte 360W stock already, so I'll just live with that.


Sent from my iPhone using Tapatalk


----------



## jura11

thauch said:


> The Galax loses hdmi/dp? Glad I didn’t flash. I have the gigabyte 360 stock already so I’ll just live with that.
> 
> 
> Sent from my iPhone using Tapatalk


Hi there 

That's the case with the Galax 380W BIOS on the Asus RTX 2080Ti Strix: with it you will lose one DP port and the HDMI port.

Not sure how it is on other makers' cards, whether you lose a DP port and HDMI port or not.

Hope this helps 

Thanks, Jura


----------



## JustinThyme

jura11 said:


> Hi there
> 
> Are you sure with VRM temperatures are better on air than water, because I have Asus RTX 2080Ti Strix with EKWB Vector RTX 2080Ti Strix and no issues there, temperatures are in 36-42°C as max, only in hotter ambient I see 44°C as max and max VRM I have seen in 46-49°C range
> 
> https://i.imgur.com/gQ1LWgi.jpg
> 
> 
> Hope this helps
> 
> Thanks, Jura


No, what I'm saying is the VRMs are better on air than with the EK blocks; with Phanteks and Heatkiller they are greatly improved over air. I was a bit disappointed, and happy to find someone to take the EK Strix O11G blocks off my hands. Which Strix card are you running? Only the O11G and Matrix have the PCB with the 90°-rotated choke, making it a PITA to get coolant to; compare the standard RTX Vector block to the other and you can see what I mean. Your capture shows the VRMs at a max of 41°C. I loaded mine up and saturated my loop, which is way overkill with two 60mm-thick 480 rads, one 60mm 360 rad, and 20 fans total; with EK my VRM temps were peaking at 60°C, whereas with the Phanteks and Heatkiller blocks they never pass 38°C and I've not seen the GPU past 40°C. Maybe it's just me, but I've not had good luck with EK products with my X299 build at all. Previously I had used EK exclusively.


----------



## JustinThyme

Tolkmod said:


> Even with their purpose built Strix block for this card, you're still seeing the same issues?


That's the only block they have that will fit it. I tried others too and was disappointed while waiting on EK or Watercool to start production. The first was the TT Pacific, which was no big surprise; then a Bitspower, which was better but still not great; and then a well-known retailer in the US talked me into trying Barrow, which left me bleeding from a sloppy finish, and a test fit of the block showed it only making contact with about half of the primary VRMs and none of the secondary. I didn't even run water through those hunks of junk; packed them up and sent them back. I will say, in that retailer's defense, that they took them back, paid for return shipping, and charged no restocking fee after I sent them the pictures. I think those were the worst blocks of any kind I've ever tried.

Phanteks is good but uses the original backplate, which I think holds heat in, as it makes zero contact with the card. The Heatkiller with passive backplate dropped temps a few degrees more than the Phanteks, which outperformed the EK for me anyway.


----------



## ESRCJ

JustinThyme said:


> No, what I'm saying is the VRMs are better on air than with the EK blocks; with Phanteks and Heatkiller they are greatly improved over air. I was a bit disappointed, and happy to find someone to take the EK Strix O11G blocks off my hands. Which Strix card are you running? Only the O11G and Matrix have the PCB with the 90°-rotated choke, making it a PITA to get coolant to; compare the standard RTX Vector block to the other and you can see what I mean. Your capture shows the VRMs at a max of 41°C. I loaded mine up and saturated my loop, which is way overkill with two 60mm-thick 480 rads, one 60mm 360 rad, and 20 fans total; with EK my VRM temps were peaking at 60°C, whereas with the Phanteks and Heatkiller blocks they never pass 38°C and I've not seen the GPU past 40°C. Maybe it's just me, but I've not had good luck with EK products with my X299 build at all. Previously I had used EK exclusively.


It's not just you. EK blocks seem to be getting worse across the board. My first EK block was the Supremacy Evo CPU block, which worked fine for the most part although I want to give a Heatkiller block a try. However, the RVIE monoblock was absolutely terrible, as my CPU temps were 20C higher than with my Evo block because EK did a terrible job with the cold plate (circular elevated portion that didn't even make full contact with the IHS). I also bought a Velocity block for the aesthetics, as I figured thermal performance would be no worse than my Evo. It turned out that the cold plate was too convex and it didn't make sufficient contact with the IHS, resulting in 20C higher temps on average. The EK Vector block for my 2080 Ti is meh. I feel like I should be able to do better given all of the rad space I have and that my water temps never exceed 30C under full load. All of my future blocks will be from anyone but EK at this point.


----------



## JustinThyme

ESRCJ said:


> It's not just you. EK blocks seem to be getting worse across the board. My first EK block was the Supremacy Evo CPU block, which worked fine for the most part although I want to give a Heatkiller block a try. However, the RVIE monoblock was absolutely terrible, as my CPU temps were 20C higher than with my Evo block because EK did a terrible job with the cold plate (circular elevated portion that didn't even make full contact with the IHS). I also bought a Velocity block for the aesthetics, as I figured thermal performance would be no worse than my Evo. It turned out that the cold plate was too convex and it didn't make sufficient contact with the IHS, resulting in 20C higher temps on average. The EK Vector block for my 2080 Ti is meh. I feel like I should be able to do better given all of the rad space I have and that my water temps never exceed 30C under full load. All of my future blocks will be from anyone but EK at this point.


Yeah, I have two EK monoblocks on my wall of shame: M8E and RVIE. After the RVIE monoblock and then the 2080 Ti Strix blocks, I'm at the point where it's just time to move on. I got great results using a Heatkiller IV Pro on the CPU and a separate VRM block.


----------



## ESRCJ

JustinThyme said:


> Yeah, I have two EK monoblocks on my wall of shame: M8E and RVIE. After the RVIE monoblock and then the 2080 Ti Strix blocks, I'm at the point where it's just time to move on. I got great results using a Heatkiller IV Pro on the CPU and a separate VRM block.


The Heatkiller VRM block is great. I regret not getting the Heatkiller 2080 Ti block.


----------



## MADworld

Zer0G said:


> Yesterday I watercooled my Palit GamingPro OC. I noticed the PCB differs from the FE's; I marked the parts. Some parts are missing in the upper marked area, and some are added in the lower area.
> 
> My question is: is there any added part which has to be cooled? I'm not sure whether I should put a cooling pad on one of the parts.


I was looking at watercooling the Palit GamingPro, the non-OC model but still an A-chip; I assume the PCB is the same as the OC model.

So I would really like to know whether the PCB differs from the reference model in some way, and what effect that has.

His picture is attached.


----------



## jura11

JustinThyme said:


> No, what I'm saying is that the VRMs are better on air than with the EK blocks; with Phanteks and Heatkiller they are greatly improved over air. I was a bit disappointed, and happy to find someone to take the EK Strix O11G blocks off my hands. Which Strix card are you running? Only the O11G and Matrix have the PCB with the 90-degree rotated choke, making it a PITA to get coolant to. Compare the standard RTX Vector block to the other and you can see what I mean. Your capture shows the VRMs at a max of 41. I loaded it up and saturated my loop, which is way overkill with two 60mm 480 rads, one 60mm 360 rad and 20 fans total, and with EK my VRM temps were peaking at 60°C, whereas with the Phanteks and Heatkiller blocks they never pass 38°C and I've not seen the GPU past 40°C. Maybe it's just me, but I've not had good luck with EK products on my X299 build at all. Previously I had used EK exclusively.



Hi there 

My loop is also overkill: 4x 360mm radiators plus a MO-RA3 360. It cools four GPUs (an Asus RTX 2080 Ti Strix at 2160MHz, a GTX 1080 Ti at 2113MHz, a GTX 1080 at 2164MHz and another GTX 1080 at 2100MHz) plus a 5960X at a 4.7GHz OC, and when all GPUs are rendering I've never seen a water delta T higher than 5°C; it's usually 3-4°C, and 1-2°C max in gaming.

The GTX 1080 Ti and GTX 1080s run cooler, usually 34-38°C max, while the Asus RTX 2080 Ti Strix usually runs around 38-42°C max.

Just yesterday I did a render in Blender Cycles using all four GPUs, and temperatures were 36-38°C on the Pascal GPUs and 41°C on the RTX 2080 Ti, with the VRM at 43°C.

In my case the VRM usually runs 2-5°C higher than the GPU core temperature.

I think my card is the A11G Advanced/Gaming, not the O11G, but I have used the O11G as well in a friend's build, where we used the same block plus backplate, and temperatures were comparable to mine; his VRMs are in the 40s, and I never saw them in the 50s while I tested that card.

I haven't tried the Heatkiller IV RTX 2080 Ti Strix block or the Phanteks one; I've only tested the normal Heatkiller IV RTX 2080 Ti on an EVGA RTX 2080 Ti XC, where temperatures were 38-42°C on a normal larger loop with 2x 360mm radiators, and the Phanteks Glacier RTX 2080 Ti only on a Palit RTX 2080 Ti, where temperatures were similar.

I had planned to get the Heatkiller IV block for my Asus RTX 2080 Ti Strix as well, but it wasn't in stock, which is why I bought this one.

We'll see if I get the Heatkiller IV RTX 2080 Ti Strix block just for testing, but right now I'm happy with my temperatures.


Hope this helps.

Thanks, Jura


----------



## MsNikita

May I join the party? 

Here they are: three EVGA GeForce RTX 2080 Ti XC Ultra Gaming cards... and I've just realised I need the NVLink bridge for two of them. Whoops!


----------



## JustinThyme

jura11 said:


> Hi there
> 
> My loop is also overkill: 4x 360mm radiators plus a MO-RA3 360. It cools four GPUs (an Asus RTX 2080 Ti Strix at 2160MHz, a GTX 1080 Ti at 2113MHz, a GTX 1080 at 2164MHz and another GTX 1080 at 2100MHz) plus a 5960X at a 4.7GHz OC, and when all GPUs are rendering I've never seen a water delta T higher than 5°C; it's usually 3-4°C, and 1-2°C max in gaming.
> 
> The GTX 1080 Ti and GTX 1080s run cooler, usually 34-38°C max, while the Asus RTX 2080 Ti Strix usually runs around 38-42°C max.
> 
> Just yesterday I did a render in Blender Cycles using all four GPUs, and temperatures were 36-38°C on the Pascal GPUs and 41°C on the RTX 2080 Ti, with the VRM at 43°C.
> 
> In my case the VRM usually runs 2-5°C higher than the GPU core temperature.
> 
> I think my card is the A11G Advanced/Gaming, not the O11G, but I have used the O11G as well in a friend's build, where we used the same block plus backplate, and temperatures were comparable to mine; his VRMs are in the 40s, and I never saw them in the 50s while I tested that card.
> 
> I haven't tried the Heatkiller IV RTX 2080 Ti Strix block or the Phanteks one; I've only tested the normal Heatkiller IV RTX 2080 Ti on an EVGA RTX 2080 Ti XC, where temperatures were 38-42°C on a normal larger loop with 2x 360mm radiators, and the Phanteks Glacier RTX 2080 Ti only on a Palit RTX 2080 Ti, where temperatures were similar.
> 
> I had planned to get the Heatkiller IV block for my Asus RTX 2080 Ti Strix as well, but it wasn't in stock, which is why I bought this one.
> 
> We'll see if I get the Heatkiller IV RTX 2080 Ti Strix block just for testing, but right now I'm happy with my temperatures.
> 
> 
> Hope this helps.
> 
> Thanks, Jura


Yes, two totally different PCBs and blocks. If what you have is working well, there's no need to change. My comparison applies only to the O11G and Matrix, as they require a strange configuration that goes up and over to get coolant to the primary VRM sections. Here are stock photos of the two different HK blocks: the first is the standard RTX 2080 Ti, the second is the one for the O11G and Matrix, and the third is the nekked card showing the strange layout.


----------



## Tolkmod

I've been looking closely at the two blocks, and I fail to see any major difference between them other than the machining cuts; every section covers the same areas. So either I'm totally not seeing what you're seeing, or there's not much difference beyond preference (or maybe quality?).

I've attached a photo of the backside of the EK Strix block.


----------



## zazzn

Hi All,

I've been trying to flash my MSI Trio X with the 400W BIOS, but it won't accept it.

It claims the image is "imcompatible" (lol). I've tried the modified NVFlash 5.527 (which says I need a newer version to flash) and the official 5.558.0. Here's the output when I try to flash with the official 5.558.0. I was able to flash the stock BIOS back on after saving it, without issue. I also tried disabling the video card and running the flash, but it didn't help (someone else in this thread had success doing that, from what I googled).

Update display adapter firmware?
Press 'y' to confirm (any other key to abort):
EEPROM ID (EF,6014) : WBond W25Q80EW 1.65-1.95V 8192Kx1S, page

XUSB FW component of the input GPU firmware image is imcompatible, please use
a newer version of GPU firmware image for this product.



BIOS Cert 2.0 Verification Error, Update aborted.


PMU command complete with error, Error code = 0x00A5

Nothing changed!



ERROR: Invalid firmware image detected.


----------



## Talon2016

zazzn said:


> Hi All,
> 
> I've been trying to flash my MSI Trio X with the 400W BIOS, but it won't accept it.
> 
> It claims the image is "imcompatible" (lol). I've tried the modified NVFlash 5.527 (which says I need a newer version to flash) and the official 5.558.0. Here's the output when I try to flash with the official 5.558.0. I was able to flash the stock BIOS back on after saving it, without issue. I also tried disabling the video card and running the flash, but it didn't help (someone else in this thread had success doing that, from what I googled).
> 
> Update display adapter firmware?
> Press 'y' to confirm (any other key to abort):
> EEPROM ID (EF,6014) : WBond W25Q80EW 1.65-1.95V 8192Kx1S, page
> 
> XUSB FW component of the input GPU firmware image is imcompatible, please use
> a newer version of GPU firmware image for this product.
> 
> 
> 
> BIOS Cert 2.0 Verification Error, Update aborted.
> 
> 
> PMU command complete with error, Error code = 0x00A5
> 
> Nothing changed!
> 
> 
> 
> ERROR: Invalid firmware image detected.


The newly shipping GPUs have newer firmware that isn't compatible with older vbios versions. Essentially we need either an NVFlash that will bypass this check, or vbios dumps from the newer cards.


----------



## majestynl

zazzn said:


> Hi All,
> 
> I've been trying to flash my MSI Trio X with the 400W BIOS, but it won't accept it.
> 
> It claims the image is "imcompatible" (lol). I've tried the modified NVFlash 5.527 (which says I need a newer version to flash) and the official 5.558.0. Here's the output when I try to flash with the official 5.558.0. I was able to flash the stock BIOS back on after saving it, without issue. I also tried disabling the video card and running the flash, but it didn't help (someone else in this thread had success doing that, from what I googled).
> 
> Update display adapter firmware?
> Press 'y' to confirm (any other key to abort):
> EEPROM ID (EF,6014) : WBond W25Q80EW 1.65-1.95V 8192Kx1S, page
> 
> XUSB FW component of the input GPU firmware image is imcompatible, please use
> a newer version of GPU firmware image for this product.
> 
> 
> 
> BIOS Cert 2.0 Verification Error, Update aborted.
> 
> 
> PMU command complete with error, Error code = 0x00A5
> 
> Nothing changed!
> 
> 
> 
> ERROR: Invalid firmware image detected.






Talon2016 said:


> The newly shipping GPUs have newer firmware that isn't compatible with older vbios versions. Essentially we need either an NVFlash that will bypass this check, or vbios dumps from the newer cards.


Or find a 400W BIOS with the same XUSB version. You could also flash a newer version, but I'm afraid you can't go back to your backup BIOS if that one has a lower version!

How to check the version:

- Copy the downloaded BIOS into the nvflash folder
- Run nvflash with the command: nvflash --version biosname.rom

_Change biosname.rom to your downloaded BIOS file name!_
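If you save the `nvflash --version` output for both ROMs, the comparison can be scripted. A minimal Python sketch; note that the exact label nvflash prints for the XUSB firmware component varies between nvflash releases, so the "XUSB" pattern and the sample strings below are assumptions rather than real dumps:

```python
import re

def xusb_version(nvflash_output: str):
    """Return the first dotted version string on a line mentioning XUSB, else None."""
    for line in nvflash_output.splitlines():
        if "XUSB" in line.upper():
            m = re.search(r"([0-9A-Fa-f]+(?:\.[0-9A-Fa-f]+)+)", line)
            if m:
                return m.group(1)
    return None

# Hypothetical `nvflash --version <rom>` fragments (placeholder values):
backup    = "Image Version   : 90.02.0B.40.95\nXUSB-FW Version : 70.00.4A"
candidate = "Image Version   : 90.02.17.00.B2\nXUSB-FW Version : 70.00.4A"

# Matching XUSB versions means the flash shouldn't trip the XUSB check:
print(xusb_version(backup) == xusb_version(candidate))  # True
```

Eyeball your own nvflash output first and adjust the pattern if the label differs.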


----------



## J7SC

jura11 said:


> Hi there
> 
> My loop is too overkill, 4*360mm radiators plus MO-ra3 360mm, this loop cools 4*GPUs(Asus RTX 2080Ti Strix with 2160MHz, GTX1080Ti with 2113MHz, GTX1080 with 2164MHz and GTX1080 with 2100MHz) and 5960x with 4.7GHz OC and in rendering when all GPUs are rendering I never seen higher water delta T than 5°C usually is in 3-4°C and in gaming 1-2°C as max
> 
> (...)



I have done something similar before: giant simultaneous 'loop' setups for three-plus mobos and six water-cooled GPUs... the only problem with that was starting it all up just to quickly check email; a lot of noise for a single email...

That said, temps make all the difference, as has been mentioned numerous times here, given NVIDIA's boost algorithms. The only difference between the two runs below was about 9°C lower ambient, which equated to about 150 points in that bench. Stock BIOS, around 370-380W in GPU-Z.


----------



## Nunzi

Can someone please tell me what BIOS is good for the EVGA FTW3 Ultra? Stock is 373W.

Thanks... will the stock Kingpin BIOS work? 500W would be perfect.


----------



## majestynl

Nunzi said:


> Can someone please tell me what BIOS is good for the EVGA FTW3 Ultra? Stock is 373W.
> 
> Thanks... will the stock Kingpin BIOS work? 500W would be perfect.


This thread is dying 

You can use an XOC BIOS and set a lower power target: 25% or 50%, depending on the BIOS's max wattage!
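For anyone doing the math on that: the power-target slider is just a percentage of whatever maximum the flashed BIOS reports, so a huge XOC limit dialed down still lands at a sane wattage. A trivial sketch (the 2000W figure matches the Galax HOF XOC BIOS discussed in this thread; the function name is made up):

```python
def target_watts(bios_max_watts: float, power_target_pct: float) -> float:
    """Effective board power limit for a given power-target slider setting."""
    return bios_max_watts * power_target_pct / 100.0

# A 2000W XOC BIOS with the power target set to 25% behaves like a 500W limit:
print(target_watts(2000, 25))  # 500.0
```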


----------



## Xdrqgol

Hey guys,

I just got my GALAX 2080 Ti HOF the other day, and after some 3DMark Port Royal runs I've seen it locked at 350W in GPU-Z.

I can't seem to find the XOC BIOS for it (the usual first-page location is no longer available for the GALAX HOF).

Is there a way to extend this power limit to, say, 500W or thereabouts? FYI, I'm using its default triple-fan cooler, and temps are around 64°C after one run of Port Royal with max fan settings...

Thanks in advance for the help!


----------



## Nunzi

majestynl said:


> this thread is dying
> 
> You can use a XOC bios and set Lower Power! 25% or 50% depending on bios max watt!


Thank You .


----------



## Talon2016

Xdrqgol said:


> Hey guys,
> 
> I just got my GALAX 2080 Ti HOF the other day, and after some 3DMark Port Royal runs I've seen it locked at 350W in GPU-Z.
> 
> I can't seem to find the XOC BIOS for it (the usual first-page location is no longer available for the GALAX HOF).
> 
> Is there a way to extend this power limit to, say, 500W or thereabouts? FYI, I'm using its default triple-fan cooler, and temps are around 64°C after one run of Port Royal with max fan settings...
> 
> Thanks in advance for the help!


As you own the HOF card, your best bet would be to reach out to Galax support and see if you can get the vbios from them.


----------



## AvengedRobix

Xdrqgol said:


> Hey guys,
> 
> I just got my GALAX 2080 Ti HOF the other day, and after some 3DMark Port Royal runs I've seen it locked at 350W in GPU-Z.
> 
> I can't seem to find the XOC BIOS for it (the usual first-page location is no longer available for the GALAX HOF).
> 
> Is there a way to extend this power limit to, say, 500W or thereabouts? FYI, I'm using its default triple-fan cooler, and temps are around 64°C after one run of Port Royal with max fan settings...
> 
> Thanks in advance for the help!


Contact the OC Lab team or Mad Tse on Facebook and they'll send you a BIOS.


----------



## majestynl

@oreonutz This thread is a better place 

I have an MSI Duke OC (reference PCB), and I'm just looking for the "Galax RTX 2080 Ti HOF OC Lab Custom PCB (3x8-Pin) 2000W x 100% Power Target BIOS (2000W)" as stated in the first post here. But the link is probably dead.

btw: thanks for your effort m8!


----------



## oreonutz

majestynl said:


> @oreonutz This thread is a better place
> 
> I have a MSI Duke OC (Ref PCB),
> just looking for the "Galax RTX 2080 Ti HOF OC Lab Custom PCB (3x8-Pin) 2000W x 100% Power Target BIOS (2000W)" as stated in the first post here. But the link is killed probably.
> 
> btw: thanks for your effort m8!


No problem. This damn thing is elusive as hell. I've found a few dead links, but I'm still on the hunt.

I take it the Kingpin and Asus XOC BIOSes are both a no-go for some reason or another?

EDIT: FYI, I may have a lead on it; I'll PM you if it comes through, which may take a day or two. In the meantime, I doubt this helps at all, but I believe the specific BIOS version in question is 90.02.0B.40.95.

Also, again, I have the Kingpin BIOS that is supposed to remove the power limit altogether posted for you over in the wrong thread; probably a bad everyday OC, but if you feel like playing, it's there, PW and all.


----------



## King4x4

Got a refurbished RTX 2080 Ti Aorus Waterforce incoming from Newegg.

Should be joining this club on Sunday if everything is okay.


----------



## majestynl

oreonutz said:


> No Problem. This damn thing is elusive as hell. Have found a few dead links, but I am on the hunt still.
> 
> I take it the Kingpin and Asus XOC Bios's are both a no go for some reason or another?
> 
> EDIT: FYI, I may have a lead on it, will PM You if it comes through, may take a day or 2. In the mean time, I doubt this helps at all, but I believe the Specific BIOS Number in Question we are looking for is: 90.02.0B.40.95
> 
> Also, again, I have the Kingpin BIOS, posted over in the wrong Thread for you, that is supposed to remove the Power limit all together, probably a bad Every Day OC, but if you feel like playing it is there, PW and all.


Yep, I tried the Asus but it wasn't that special for my specific card; I need one with more voltage  Tomorrow I will try the Kingpin, that's also an option.
Let me know if you find the HOF.

ps: I don't know if it's 90.02.0B.40.95; before I flash it I need to do a version check for the right XUSB version.

thanks again...


----------



## 86Jarrod

For those of you wanting to use the Galax HOF XOC BIOS, there are a few things to know first. On non-Galax HOF cards it doesn't play nicely with OC software. For instance, with MSI Afterburner, if you save a profile and then try to load it, or if you try to lower the power limit, it will crash. With the Galax HOF Suite OC software you can change power limits, but if you click reset-to-default it will crash. Basically, like I said, it doesn't "play" well with non-Galax cards.

That being said, it's the daily BIOS I use for everything; once you get to know the bugs of your favorite OC software it's great. I game at 1.125V all day with no drops below 1.125V, unlike the KPE BIOS, which causes the occasional crash if you're pushing frequency to its limits. It might be finicky in different ways with different cards; I use the FTW3 Ultra.

If you are NOT WATER COOLED, IT MOST LIKELY WILL BE AN ISSUE running any XOC BIOS. Also, there's always a chance it will HURT YOUR CARD, so it's on the user if anything happens. Hope this helps.


----------



## oreonutz

86Jarrod said:


> For those of you wanting to use the Galax HOF XOC BIOS, there are a few things to know first. On non-Galax HOF cards it doesn't play nicely with OC software. For instance, with MSI Afterburner, if you save a profile and then try to load it, or if you try to lower the power limit, it will crash. With the Galax HOF Suite OC software you can change power limits, but if you click reset-to-default it will crash. Basically, like I said, it doesn't "play" well with non-Galax cards. That being said, it's the daily BIOS I use for everything; once you get to know the bugs of your favorite OC software it's great. I game at 1.125V all day with no drops below 1.125V, unlike the KPE BIOS, which causes the occasional crash if you're pushing frequency to its limits. It might be finicky in different ways with different cards; I use the FTW3 Ultra. If you are NOT WATER COOLED, IT MOST LIKELY WILL BE AN ISSUE running any XOC BIOS. Also, there's always a chance it will HURT YOUR CARD, so it's on the user if anything happens. Hope this helps.


It definitely does, appreciate it! I got a buddy who is going to send it to me, so I can't wait to play with it! Appreciate your Words of Wisdom!


----------



## J7SC

86Jarrod said:


> For those of you wanting to use the Galax HOF XOC BIOS, there are a few things to know first. On non-Galax HOF cards it doesn't play nicely with OC software. For instance, with MSI Afterburner, if you save a profile and then try to load it, or if you try to lower the power limit, it will crash. With the Galax HOF Suite OC software you can change power limits, but if you click reset-to-default it will crash. Basically, like I said, it doesn't "play" well with non-Galax cards. That being said, it's the daily BIOS I use for everything; once you get to know the bugs of your favorite OC software it's great. I game at 1.125V all day with no drops below 1.125V, unlike the KPE BIOS, which causes the occasional crash if you're pushing frequency to its limits. It might be finicky in different ways with different cards; I use the FTW3 Ultra. If you are NOT WATER COOLED, IT MOST LIKELY WILL BE AN ISSUE running any XOC BIOS. Also, there's always a chance it will HURT YOUR CARD, so it's on the user if anything happens. Hope this helps.


 
That's good info to have :thumb: I'm still on the stock Aorus Waterforce BIOS as it has been working great, but I may try the Galax HOF XOC and/or the Asus XOC BIOS in the fall/winter.


----------



## kx11

Finally, the Lian Li O11D XL is around the corner. This means I can put a block on my KingPin and close the case.


----------



## majestynl

King4x4 said:


> Got a refurbished RTX 2080ti Aorus Waterforce incoming from newegg.
> 
> Should be joining this club on sunday if everything is okay.


Welcome!




86Jarrod said:


> For those of you wanting to use the Galax HOF XOC BIOS, there are a few things to know first. On non-Galax HOF cards it doesn't play nicely with OC software. For instance, with MSI Afterburner, if you save a profile and then try to load it, or if you try to lower the power limit, it will crash. With the Galax HOF Suite OC software you can change power limits, but if you click reset-to-default it will crash. Basically, like I said, it doesn't "play" well with non-Galax cards. That being said, it's the daily BIOS I use for everything; once you get to know the bugs of your favorite OC software it's great. I game at 1.125V all day with no drops below 1.125V, unlike the KPE BIOS, which causes the occasional crash if you're pushing frequency to its limits. It might be finicky in different ways with different cards; I use the FTW3 Ultra. If you are NOT WATER COOLED, IT MOST LIKELY WILL BE AN ISSUE running any XOC BIOS. Also, there's always a chance it will HURT YOUR CARD, so it's on the user if anything happens. Hope this helps.


Thanks for the info!




oreonutz said:


> It definitely does, appreciate it! I got a buddy who is going to send it to me, so I can't wait to play with it! Appreciate your Words of Wisdom!


Good luck m8 



kx11 said:


> Finally, the Lian Li O11D XL is around the corner. This means I can put a block on my KingPin and close the case.


Nice! Good cases. I got the O11 ROG version.


----------



## J7SC

King4x4 said:


> Got a refurbished RTX 2080ti Aorus Waterforce incoming from newegg.
> 
> Should be joining this club on sunday if everything is okay.


 
You should have some fun with that... I have been running two of them since late last year and they love plenty of rad (to avoid heat soak). The temp pic is from a 2xGPU Port Royal run (stock BIOS).


----------



## King4x4

majestynl said:


> Welcome!


Thanks! Coming from a 2x 1080 Ti system. None of the newer games allow SLI to scale like last year's games did, so I basically gave up on SLI.



J7SC said:


> You should have some fun with that...I have been running two of them since late last year & they love plenty of rad (to avoid heat-soak). Temp pic is from 2xGPU PortRoyal run (stock bios)


No issues; I've got a TH10 with three EK XTX 480 rads in push-pull config, and the only other component being actively cooled is the CPU.

I love stacking rads so I can run the fans at 900rpm. Maximum cooling at minimum dBA.

I keep hearing horror stories about Newegg's refurbished cards, but I've bought a few before and they worked nicely, and finding a waterblocked GPU at $1,100 internationally shipped was too good a deal to pass up (it would have cost me at least $1,550 otherwise: $1,300 for the card plus shipping, and another $250 to ship the block).


----------



## J7SC

King4x4 said:


> Thanks! Coming from a 2x 1080 Ti system. None of the newer games allow SLI to scale like last year's games did, so I basically gave up on SLI.
> 
> 
> 
> No issues; I've got a TH10 with three EK XTX 480 rads in push-pull config, and the only other component being actively cooled is the CPU.
> 
> I love stacking rads so I can run the fans at 900rpm. Maximum cooling at minimum dBA.
> 
> I keep hearing horror stories about Newegg's refurbished cards, but I've bought a few before and they worked nicely, and finding a waterblocked GPU at $1,100 internationally shipped was too good a deal to pass up (it would have cost me at least $1,550 otherwise: $1,300 for the card plus shipping, and another $250 to ship the block).



I'm a rad-stacker myself  Then again, I had all the water-cooling items lying around anyhow from older builds. ...I have bought from Newegg on more than one occasion and found them to be quite good. Besides, if there should be an issue with the card (against expectations), you'll find out pretty quickly and can let them know.


----------



## Mektor

Got one of them 2080 Tis: an original pre-order from NVIDIA, the 2080 Ti Founders Edition.

First I used it in my Threadripper system on the stock air cooler.

Then it got pulled from the Threadripper and replaced with my old GTX 1080 plus a Koolance waterblock on the 1080; I slapped an EK block on the 2080 Ti and put it in the Intel rig I built last month for gaming.


----------



## ntuason

Damn, the Founders Edition looks nice; kinda wish I'd gotten that instead.


----------



## Mektor

Those NVIDIA Founders Edition stock coolers are frikken HEAVY! Seriously, you would swear they were made out of lead. Imagine the weight of a brick... that's about the weight of that cooler. It's the heaviest video card cooler I've ever held, but very sturdy; they put some seriously thick aluminum framing around it. Quality material for sure... too bad I got the junk Micron memory, so I don't overclock my memory beyond what it boosts to on its own. I do push the GPU to 2100MHz, though.


----------



## TK421

Anything above +455 on my Micron VRAM gives artifacts; is this just a really bad silicon lottery draw?

On the GPU core, at stock voltage with the 370W vbios at 70°C, the max overclock I get is +85MHz, which results in around 1950-1980MHz.


EVGA XC Ultra model (stock PCB)

:\


----------



## 86Jarrod

86Jarrod said:


> For those of you wanting to use the Galax HOF XOC BIOS, there are a few things to know first. On non-Galax HOF cards it doesn't play nicely with OC software. For instance, with MSI Afterburner, if you save a profile and then try to load it, or if you try to lower the power limit, it will crash. With the Galax HOF Suite OC software you can change power limits, but if you click reset-to-default it will crash. Basically, like I said, it doesn't "play" well with non-Galax cards. That being said, it's the daily BIOS I use for everything; once you get to know the bugs of your favorite OC software it's great. I game at 1.125V all day with no drops below 1.125V, unlike the KPE BIOS, which causes the occasional crash if you're pushing frequency to its limits. It might be finicky in different ways with different cards; I use the FTW3 Ultra. If you are NOT WATER COOLED, IT MOST LIKELY WILL BE AN ISSUE running any XOC BIOS. Also, there's always a chance it will HURT YOUR CARD, so it's on the user if anything happens. Hope this helps.


Another thing I should add, so people can learn from my mistakes: with the Galax XOC BIOS, never save an OC software profile to start with Windows, because you might crash on Windows startup, and then you have to boot into safe mode and remove the OC software. Those of us with dual-BIOS switches don't have to worry too much, but it'd be a pain for those who don't have one.


----------



## kx11

TK421 said:


> Anything above +455 on my Micron VRAM gives artifacts; is this just a really bad silicon lottery draw?
> 
> On the GPU core, at stock voltage with the 370W vbios at 70°C, the max overclock I get is +85MHz, which results in around 1950-1980MHz.
> 
> 
> EVGA XC Ultra model (stock pcb)
> 
> :\





Sorry to hear that; it looks like a bad chip. Normal good chips hit +1000 on memory easily, and very, very good ones hit +1200. I would recommend replacing it with an FTW, or keeping it if the performance is good. The temps are normal for a dual-fan setup running the default fan profile.


----------



## TK421

kx11 said:


> Sorry to hear that; it looks like a bad chip. Normal good chips hit +1000 on memory easily, and very, very good ones hit +1200. I would recommend replacing it with an FTW, or keeping it if the performance is good. The temps are normal for a dual-fan setup running the default fan profile.




I don't think we can upgrade to another model during the RMA process, AFAIK.

Or am I wrong @kx11?


----------



## J7SC

kx11 said:


> Sorry to hear that; it looks like a bad chip. Normal good chips hit +1000 on memory easily, and very, very good ones hit +1200. I would recommend replacing it with an FTW, or keeping it if the performance is good. The temps are normal for a dual-fan setup running the default fan profile.


 
As long as you *cool the Micron VRAM* appropriately, they can be good performers. Quite a few of the HWBot WR scores were done with Micron, and even Steve at GN had a bit of a surprise with Micron vs Samsung GDDR6, though that's probably not the rule (see 13m55s onward in the video).

It varies a bit from bench to bench, but for my 2x 2080 Ti SLI/NVLink with Micron, the sweet spot is between 2030 and 2070MHz. While I can go higher without artifacts, scores start to drop. This is more or less consistent with many of the other entries around that score: https://www.3dmark.com/pr/134118
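The "higher clocks, lower scores" pattern is commonly attributed to GDDR6's link error detection and retry: past a point the memory silently retransmits instead of artifacting, so throughput drops. Finding the sweet spot is then just a max over your own measured runs; a trivial sketch with made-up numbers:

```python
# (memory offset in MHz as applied in the OC tool, benchmark score) -- made-up example data
runs = [(+500, 13850), (+700, 13990), (+900, 14050), (+1100, 13920)]

# The sweet spot is simply the offset with the highest measured score:
best_offset, best_score = max(runs, key=lambda r: r[1])
print(best_offset)  # 900
```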


----------



## kx11

TK421 said:


> I don't think we can upgrade to another model during RMA process, AFAIK.
> 
> Or am I wrong @*kx11* ?



Yeah, you can't do that, unfortunately.


----------



## kx11

J7SC said:


> As long as you *cool the Micron VRAM* appropriately, they can be good performers. Quite a few of the HWBot WR scores were done with Micron, and even Steve at GN had a bit of a surprise with Micron vs Samsung GDDR6, though that's probably not the rule (see 13m55s onward in the video).
> 
> It varies a bit from bench to bench, but for my 2x 2080 Ti SLI/NVLink with Micron, the sweet spot is between 2030 and 2070MHz. While I can go higher without artifacts, scores start to drop. This is more or less consistent with many of the other entries around that score: https://www.3dmark.com/pr/134118
> 
> 
> https://www.youtube.com/watch?v=deLBTEqynhg



It's a silicon lottery situation with a lot of the early release models. In his case it was not a good chip, but at least he can OC it a little. My first 2080 Ti was a Palit GamingPro OC and it wouldn't OC even +1 on core/mem; it had Micron memory chips.


----------



## TK421

J7SC said:


> As long as you *cool the Micron VRAM* appropriately, they can be good performers. Quite a few of the HWbot WR scores were done with Micron, and even Steve at GN had a bit of a surprise with Micron vs Samsung GDDR6, though that's probably not the rule (see 13m55s + in vid).
> 
> It varies a bit from bench to bench, but for my 2x 2080 Ti SLI/NVL with Micron, the sweet spot is between 2030 and 2070 MHz. While I can go higher without artifacts, scores start to drop. This is more or less consistent with many of the other entries around that score https://www.3dmark.com/pr/134118
> 
> 
> https://www.youtube.com/watch?v=deLBTEqynhg


Well, EVGA seems pretty diligent with their VRAM heatsink.

https://xdevs.com/guide/evga_2080tixc/


So I guess it's just lottery?


----------



## Talon2016

86Jarrod said:


> For those of you wanting to use the Galax HOF XOC bios, there's a few things to know first. On non-Galax HOF cards it doesn't play nicely with OC software. For instance, using MSI Afterburner, if you save a profile and then try to load it, or if you try to lower the power limit, it will crash. Using the Galax HOF Suite OC software you can change power limits, but if you click reset to default it will crash. Basically, like I said, it doesn't "play" well with non-Galax cards. That being said, it's the daily bios I use for everything; once you get to know the bugs of your favorite OC software it's great. I game at 1.125 all day with no drops below 1.125, unlike the KPE bios, which causes the occasional crash if you're pushing frequency to its limits. It might be finicky in different ways with different cards. I use the FTW3 Ultra. If you are NOT WATER COOLED IT MOST LIKELY WILL BE AN ISSUE running any XOC bios. Also, there's always a chance it will HURT YOUR CARD, so it's on the user if anything happens. Hope this helps.


90.02.0B.40.95

That’s an ‘old’ version. Newer versions don’t have any issue using Afterburner for power limits, or saving profiles.


----------



## J7SC

TK421 said:


> Well evga seems pretty diligent in their vram heatsink.
> 
> https://xdevs.com/guide/evga_2080tixc/
> 
> 
> So I guess it's just lottery?


 
Well, even if the silicon lottery dealt you a dud, throwing max cooling at it will make it a faster dud, I reckon....especially with Micron, which seems to be more sensitive to temps than Samsung


----------



## TK421

J7SC said:


> Well, even if the silicon lottery dealt you a dud, throwing max cooling at it will make it a faster dud, I reckon....especially with Micron, which seems to be more sensitive to temps than Samsung


I'm just gonna RMA it then.


----------



## majestynl

King4x4 said:


> Thanks! Coming from a 2x1080ti system. None of the newer games allow for SLI to scale like last year games so basically gave up on SLI .


Yeap looks like SLI is going to DIE (games)




Talon2016 said:


> 90.02.0B.40.95
> 
> That’s an ‘old’ version. Newer versions don’t have any issue using Afterburner for power limits, or saving profiles.


Hmm.. do you have a newer version? I have one where the XUSB version is also too old! Can't flash it!



J7SC said:


> Well, even if the silicon lottery dealt you a dud, throwing max cooling at it will make it a faster dud, I reckon....especially with Micron, which seems to be more sensitive to temps than Samsung


Can confirm 100%! I have much better Mem OC now with a water-block installed!


----------



## jura11

TK421 said:


> Anything above 455 on my micron vram gives artifacts; is this just really bad silicon lottery?
> 
> On my GPU core, at stock voltage, 370w vbios, 70c, the max overclock I get is +85mhz. Which results in around 1950-1980mhz.
> 
> 
> EVGA XC Ultra model (stock pcb)
> 
> :\


Hi there 

As above, try RMA, then sell it and get a better card. I've been unlucky myself with a Zotac RTX 2080Ti AMP, which for the love of god wouldn't do more than 2115MHz, and that only in low ambient temperatures; in rendering I could run that card only at stock clocks, because any OC above stock would crash the render with TDR errors. I currently have an Asus RTX 2080Ti Strix which will do 2160MHz.

I expect under water your card will probably do 2100MHz as max, maybe less, around 2085MHz.

I've tried a few RTX 2080Tis with Micron memory and most of them would do at least +700MHz, but only one of them would do +1000MHz on VRAM, and even that was only possible in the first days of trying the card; my friend then reported it was unstable with anything above +1000MHz, dropped to +850MHz, and now he is running +600MHz as max.

Hope this helps 

Thanks, Jura


----------



## JustinThyme

majestynl said:


> Yeap looks like SLI is going to DIE (games)


SLI is still very much alive. Depends on who bothers to support. 


Here is one short list

https://www.gamingscan.com/best-games-that-support-sli/


----------



## majestynl

JustinThyme said:


> SLI is still very much alive. Depends on who bothers to support.
> 
> 
> Here is one short list
> 
> https://www.gamingscan.com/best-games-that-support-sli/


Quote: "Looks" and "Going"


----------



## kx11

JustinThyme said:


> SLI is still very much alive. Depends on who bothers to support.
> 
> 
> Here is one short list
> 
> https://www.gamingscan.com/best-games-that-support-sli/



yeah i won't bother with SLi


i'd rather invest in 1 powerful gpu


----------



## Mektor

This is what my memory boosts to on its own. Not sure if it's good for Micron or not, but it does what it wants lol. I push the GPU core to 2100 though. It doesn't like going much higher than that, as it will make my screen go black if I push it much further.


----------



## J7SC

kx11 said:


> yeah i won't bother with SLi
> 
> 
> i'd rather invest in 1 powerful gpu


 
...with SLI/NVLink, you get to invest in TWO powerful GPUs


----------



## majestynl

Mektor said:


> This is what my memory boosts to on its own. Not sure if it's good for Micron or not, but it does what it wants lol. I push the GPU core to 2100 though. It doesn't like going much higher than that, as it will make my screen go black if I push it much further.


- 7000 on memory is factory; you can try +500/+1000 or more.
- And +136 is effectively equal to +135, AFAIK; the offset applies in steps of 15MHz... 15/30/45/60/75/90/105/120/135/150.....
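To illustrate the 15MHz granularity mentioned above, here's a quick sketch (plain Python, function name is mine) of how the driver effectively snaps whatever offset you type into the slider:

```python
def effective_offset(requested_mhz, step=15):
    """Snap a requested clock offset to the step size the driver
    actually applies (15 MHz on Turing, per the post above).
    Truncates toward zero, so +136 lands on +135."""
    return int(requested_mhz / step) * step

print(effective_offset(136))   # -> 135, as noted above
print(effective_offset(100))   # -> 90
print(effective_offset(-50))   # -> -45
```

So entering +136 or +140 buys you nothing over +135; might as well dial in multiples of 15 directly.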


----------



## KCDC

JustinThyme said:


> SLI is still very much alive. Depends on who bothers to support.
> 
> 
> Here is one short list
> 
> https://www.gamingscan.com/best-games-that-support-sli/



Here's to hoping CDPR does right and adds support for Cyberpunk...


----------



## sblantipodi

It seems that we will see a rtx2080ti super and a titan rtx soon.


----------



## keikei

sblantipodi said:


> It seems that we will see a rtx2080ti super and a titan rtx soon.


Yeah, I saw the article. 2080S replaces 2080 @ $700. I can only imagine the same scenario with the 2080ti S. A large performance bump would be very surprising.


----------



## ducky083

Hi all,


Do you know if it's possible to flash an ASUS 2080 TI STRIX ADVANCED EDITION with the OC or MATRIX or unofficial bios?


Thanks for your help !


----------



## Talon2016

majestynl said:


> Yeap looks like SLI is going to DIE (games)
> 
> 
> 
> 
> hmm.. do you have a newer version? I have one where the XUSB version is also to old! Cant flash it!
> 
> 
> 
> Can confirm 100%! I have much better Mem OC now with a water-block installed!


Don't have it; I've only heard that it's newer and fixed the issues. It has a newer XUSB version and will flash without issue using Nvflash.


----------



## majestynl

Talon2016 said:


> Don't have it, only heard of it being newer and fixed issues. It has a newer xusb version and will flash without issue using Nvflash.


Need that one


----------



## zazzn

majestynl said:


> Or find the 400w bios with same USB-X version. You could also flash a newer version but im afraid you cant go back to your backup bios if that one has a lower version!
> 
> How to check version:
> 
> - Copy the downloaded bios in the nvflash folder
> - Run nvflash with command: nvflash --version biosname.rom
> 
> _change biosname.rom to your downloaded bios name!_


Thanks for the reply, maybe we can ask the guy that got the original 400W bios if he can get another.


----------



## zazzn

Talon2016 said:


> The new shipping GPUs have a newer firmware that isn’t compatible with older vbios. Essentially we need either an Nvflash that will bypass this check or simply get vbios from the newer cards.




Any idea if there is an NVflash that will get the VBIOS from the newer cards? Any other firmware known to work of these cards?


----------



## sblantipodi

Guys, can someone tell me how much faster a 2080 Ti is than a 2080, please?

Is the 2080 Ti 30% faster in normal games and 50% faster in RTX-enabled games?
If not, can you give me a more precise percentage?


----------



## Nizzen

sblantipodi said:


> Guys, can someone tell me how much faster a 2080 Ti is than a 2080, please?
> 
> Is the 2080 Ti 30% faster in normal games and 50% faster in RTX-enabled games?
> If not, can you give me a more precise percentage?


Read reviews of the games you play 

There is plenty out there.

25-35% faster. It depends on the game.

There is no precise %.
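As Nizzen says, there's no single number, but the arithmetic for any given review is simple. A throwaway sketch (the fps figures here are made up for illustration, not real benchmark results):

```python
def speedup_pct(fps_fast, fps_slow):
    """Percentage advantage of the faster card over the slower one."""
    return (fps_fast / fps_slow - 1.0) * 100.0

# Illustrative numbers only -- read reviews of the games you play
print(round(speedup_pct(91.0, 70.0), 1))  # -> 30.0
```

Plug in the fps numbers from a review of your game at your resolution and you get the percentage that actually matters to you.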


----------



## sblantipodi

Nizzen said:


> sblantipodi said:
> 
> 
> 
> Guys, can someone tell me how much faster a 2080 Ti is than a 2080, please?
> 
> Is the 2080 Ti 30% faster in normal games and 50% faster in RTX-enabled games?
> If not, can you give me a more precise percentage?
> 
> 
> 
> 
> Read reviews of the games you play
> 
> There is plenty out there.
> 
> 25-35% faster. It depends of the game.

Honestly, I don't see reviews that concentrate on RTX games. I know that with RTX the difference is bigger. Am I right?


----------



## Nizzen

sblantipodi said:


> Honestly, I don't see reviews that concentrate on RTX games. I know that with RTX the difference is bigger. Am I right?


2080 vs 2080ti
30% faster in BF V 1440p with RTX @ ultra
https://www.hardocp.com/article/201..._nvidia_ray_tracing_rtx_2080_ti_performance/8

More in 4k

https://www.hardocp.com/article/201..._nvidia_ray_tracing_rtx_2080_ti_performance/9


----------



## JustinThyme

KCDC said:


> Here's to hoping CDPR does right and adds support for Cyberpunk...



The ability is there. Nvidia stopped working on it for a while, as did game developers, but it's making a pretty good comeback. I'd say about 70% of the games I play do support it, and when they do it's darn near perfect scaling with the 20XX cards that support it. There is no more HB SLI bridge, which hit 3GB/s; now it's NVLink, which hits 100GB/s, and that's why the scaling is so good. Between that and RTX, at least on the 2080 Ti, the GPUs can actually saturate x8 and both will make use of x16 slots. With NVLink, two 2080 Tis can run anything at max settings and not flinch, whereas before there were some games I couldn't manage with a single 1080 Ti, much less two of them. The biggest issue I had when I went vertical with my GPUs was finding a 2-slot NVLink bridge: plenty of 3- and 4-slot, but no 2-slot. Turns out the bridge for the RTX Quadro cards is the same. On the next rearranging I'm gonna paint it solid black, or something ridiculous like red.


----------



## KCDC

JustinThyme said:


> The ability is there. Nvidia stopped working on it for a while, as did game developers, but it's making a pretty good comeback. I'd say about 70% of the games I play do support it, and when they do it's darn near perfect scaling with the 20XX cards that support it. There is no more HB SLI bridge, which hit 3GB/s; now it's NVLink, which hits 100GB/s, and that's why the scaling is so good. Between that and RTX, at least on the 2080 Ti, the GPUs can actually saturate x8 and both will make use of x16 slots. With NVLink, two 2080 Tis can run anything at max settings and not flinch, whereas before there were some games I couldn't manage with a single 1080 Ti, much less two of them. The biggest issue I had when I went vertical with my GPUs was finding a 2-slot NVLink bridge: plenty of 3- and 4-slot, but no 2-slot. Turns out the bridge for the RTX Quadro cards is the same. On the next rearranging I'm gonna paint it solid black, or something ridiculous like red.



NVLink is a pretty amazing step forward considering the memory pooling for GPU rendering. The bandwidth is great as well; I tested mine and I hit about 93 GB/s. I made the switch from 2x 1080 Ti once I learned the pooling wasn't a dream. Just waiting for Redshift to get there. Rise of the Tomb Raider and Shadow of War have been perfect @ 7680x1440 surround. Assassin's Creed Odyssey, however, is so bad that I have to force NVLink off just to get it to work at a decent fps, so no Surround for that one. Metro Exodus is only using one card, but I'm surprised at the framerates (70s) on moderate settings with RTX on while in Surround. I did this for my design work mostly, but I'm still happy to see that SLI/NVLink isn't totally dead. If there is a studio that would provide this sort of multi-GPU support, even if it's down the road, it would be a studio like CDPR.



I think I like your Nvidia bridge better than the aorus from a design standpoint.


EDIT: We have the same case; I was worried I couldn't fit two cards vertically, but it seems you've achieved this. Is the fit fine without any DIY adjustment? Reread your post; looks like the big issue was the 2-slot bridge. Disregard.


----------



## kithylin

I know this is a weird question but.. has anyone here tested a 2080 Ti (Or any Turing card on the 2000 series) with older Pre-UEFI motherboards? Specifically... LGA-775 boards? I'm just curious if the cards actually work at all in these systems.


----------



## J7SC

...somewhat ironically re. posts above, I'm running a (modded) Asus RoG NV bridge on my dual Aorus setup ...I have considered the vertical mount w/ PCIe extenders, but am still wondering about signal deterioration and so forth. Besides, I came up with some extra bracing (re. dual GPU with w-block weight) and I'd rather not take that apart anytime soon. 

Still, what are the best aftermarket PCIe extenders to use...just in case ?


----------



## kithylin

J7SC said:


> Still, what are the best aftermarket PCIe extenders to use...just in case ?


Usually the ones made by 3M I would think. At least that's the first thing that comes to mind when I think of PCIE Extenders.


----------



## sblantipodi

Nizzen said:


> sblantipodi said:
> 
> 
> 
> Honestly, I don't see reviews that concentrate on RTX games. I know that with RTX the difference is bigger. Am I right?
> 
> 
> 
> 2080 vs 2080ti
> 30% faster in BF V 1440p with RTX @ ultra
> https://www.hardocp.com/article/201..._nvidia_ray_tracing_rtx_2080_ti_performance/8
> 
> More in 4k
> 
> https://www.hardocp.com/article/201..._nvidia_ray_tracing_rtx_2080_ti_performance/9

Thank you, nice links 👍
It seems that in DXR the performance difference is huge.
I don't understand whether the big difference is because the 2080's VRAM is not enough, or just because the Ti is more powerful.


----------



## Tolkmod

Great info, makes me that much more happy that I bought a 2080 Ti


----------



## Jubijub

Hey everyone,

Which 2080 Ti would you recommend with the following considerations:
- single card use only, no SLI
- I intend to run it @stock frequencies
- I want a) efficient cooling and b) silence (I am open to a closed-loop cooler, but I intend to get rid of my custom loop)

(My config has just died, and I am not sure whether the 1080 Tis inside are the culprit or not. I also think I will move away from my custom loop because I dread having to rebuild the whole thing (I literally have to tear down the whole PC, and there is A LOT of stuff inside); it would take me days as opposed to mere hours with a "dry" system.)


----------



## KCDC

J7SC said:


> ...somewhat ironically re. posts above, I'm running a (modded) Asus RoG NV bridge on my dual Aorus setup ...I have considered the vertical mount w/ PCIe extenders, but am still wondering about signal deterioration and so forth. Besides, I came up with some extra bracing (re. dual GPU with w-block weight) and I rather not take that apart anytime soon.
> 
> Still, what are the best aftermarket PCIe extenders to use...just in case ?



I know I keep bringing up redshift, but since it requires accuracy above all things, they do not suggest using any cheap, non-shielded pcie extenders, the ones that look like recycled IDE cables from China, and instead use the super shielded ones from 3M as previously mentioned. In my case, I'd probably use the Phanteks ones, but I also don't mind it so much as it is. On the other hand, a friend of mine is building a "budget" GPU render farm with 2080s and his old Xeon boards for his studio and is using the cheap China risers that look like IDE cables with zero extra shielding. I haven't heard back from him, but he's pretty confident it will be fine. 



There's a forum thread on this topic over at Redshift, and the consensus has been to spend the money on the rubber-shielded ones, stay away from the fabric-covered thinner ones, and be careful how much they bend. Thermaltake and Phanteks have them; they're probably from the same manufacturer. I'm just trying to figure out if the Phanteks risers are required to fit properly in the vertical mount cage for the Elite, or if the mounts are all the same?


EDIT: A quote from one RS post:


"
I use the 3M brand riser cables. I did a lot of googling to find these things a few years ago. I’m guessing Thermaltake wasn’t selling them at that time. The 3M cables have been very reliable.
When riser cables are unreliable, they are also unpredictable. They will work for 2 days straight and then fail. They will load the card on one boot but not on the next. After trying a few different ones, I found that 3M makes some and I haven't looked back. For a couple of years now, these cables are entirely reliable. 
I have used them on numerous motherboards and hardware configurations. I’m currently using one on the popular Asus X99 E-WS. I have one of my 4 GPUs mounted onto the case with a 3rd party PCI-E mounting bracket and the other 3 GPUs nicely spaced out on the motherboard. I have also used 3 of these cables on one motherboard with success. 
In my tests, I could not conclude that cable length makes a difference in reliability. Off-brand cables were similarly unreliable regardless of their length. 3M sells one that is very long. I don’t recall the exact length but it’s longer than any others I have seen. I tried it out and same thing… it worked just fine. I ended up returning it because it was so much longer than I actually needed, but you could take one of those and mount your cards on the top of your case if you just had a slot to run the riser through.
My best guess as to an explanation for the inconsistency is that the lack of an organized and enforced standard spec leads to a lot of products on the market that only test for a small portion of use cases and hardware combinations. 3M is a proven leader in mitigating similar problem areas so, unless anyone reports issues, this seems like the right answer.
I purchased mine from mouser.com:
https://www.mouser.com/m_new/3m/3M-PCI-Express-Extender-Assemblies/"


Last EDIT: https://videocardz.com/review/pci-express-riser-extender-test


----------



## J7SC

kithylin said:


> Usually the ones made by 3M I would think. At least that's the first thing that comes to mind when I think of PCIE Extenders.


 


KCDC said:


> I know I keep bringing up redshift, but since it requires accuracy above all things, they do not suggest using any cheap, non-shielded pcie extenders, the ones that look like recycled IDE cables from China, and instead use the super shielded ones from 3M as previously mentioned. In my case, I'd probably use the Phanteks ones, but I also don't mind it so much as it is. On the other hand, a friend of mine is building a "budget" GPU render farm with 2080s and his old Xeon boards for his studio and is using the cheap China risers that look like IDE cables with zero extra shielding. I haven't heard back from him, but he's pretty confident it will be fine.
> 
> There's a forum thread on this topic over at redshift and the consensus has been to spend the money on the rubber shielded ones and stay away from the fabric covered thinner ones and to be careful how much they bend. Thermaltake and Phanteks has them, they're probably the same manufacturer. I'm just trying to figure out if the Phanteks risers are required to fit properly in the vertical mount cage for the Elite or if the mounts are all the same?


 
Thanks guys  ...sounds like 3M is the way to go if I want to mount the GPUs vertical. The CoreP5 / MSI Creation setup did come with a PCIe extender and that funny looking Aero card (not a GPU but a 4x M.2), but apart from only having one extender, I had read about redshift. Besides, if you look at all the trouble GPU and mobo vendors go through to have short and perhaps isolated (V)RAM traces, only the top-of-the-line extenders make sense.


----------



## KCDC

J7SC said:


> Thanks guys  ...sounds like 3M is the way to go if I want to mount the GPUs vertical. The CoreP5 / MSI Creation setup did come with a PCIe extender and that funny looking Aero card (not a GPU but a 4x M.2), but apart from only having one extender, I had read about redshift. Besides, if you look at all the trouble GPU and mobo vendors go through to have short and perhaps isolated (V)RAM traces, only the top-of-the-line extenders make sense.



I posted this edit https://videocardz.com/review/pci-express-riser-extender-test


And another user reported the Corsair risers being fine as well since they're a bit cheaper.


----------



## J7SC

KCDC said:


> I posted this edit https://videocardz.com/review/pci-express-riser-extender-test
> 
> 
> And another user reported the Corsair risers being fine as well since they're a bit cheaper.


 
Tx....yeah, and prices might also be coming down a bit due to the mining market 'condition'...


----------



## ESRCJ

Any leads on when the alleged 2080 Ti Super could launch? I was about to pull the trigger on another 2080 Ti for some 2-way fun, but when I heard a Super variant could be coming, I figured it might be worth waiting for another 5-10%.


----------



## Juub

For those under water, what are your gaming and idle temps? Something strange happened to me. I just finished installing my custom loop and my idle temps were 29C, with gaming temps at full load about 48-49C. I added a bit of OC to the core at +100 (1450MHz on the core / 1870 boost now). Performance improved a bit. Then I tried to see if adjusting the voltage could yield any result and cranked it to the right. It didn't, so I put it back to 0. After that, my idle temps went up to 33-35C and my gaming temps easily reach 58C. I did a Time Spy stress test and the temps were constantly at 57-58C. I am no longer applying the extra voltage, so I was wondering what could have caused this. Reading reviews online, I should hardly go over 51C during a 3 minute benchmark.

My radiator is running at 4500 RPM constantly as well. Pretty much 100% speed.

I have the Gigabyte Aorus XTreme Waterforce WB. Radiator is 240mm/38mm.

These temps seem rather high under water no?


----------



## matrix17

*Bricked 2080 ti.*

Hello everyone,
I have an Asus Dual OC 2080 Ti. I was experimenting with it and tried the Galax bios a couple of times without issues. Three days ago I installed an Accelero, and due to the better thermals I thought I could give the Galax bios one more try. Thing is, flashing worked, but the card is stuck at 1350MHz. No matter what I tried (drivers, re-flashing, putting my cooler back), Afterburner's power limit is greyed out, and even touching the sliders brings up a black screen. Any help would be hugely appreciated.


----------



## majestynl

matrix17 said:


> Hello everyone,
> I have an Asus Dual OC 2080 Ti. I was experimenting with it and tried the Galax bios a couple of times without issues. Three days ago I installed an Accelero, and due to the better thermals I thought I could give the Galax bios one more try. Thing is, flashing worked, but the card is stuck at 1350MHz. No matter what I tried (drivers, re-flashing, putting my cooler back), Afterburner's power limit is greyed out, and even touching the sliders brings up a black screen. Any help would be hugely appreciated.


I heard someone had the same issue; dunno where I read it, but the guy fixed it. I will try to find the info. If I remember well, he fixed it by flashing the original ROM to the BIOS chip directly with a clip!


----------



## J7SC

Juub said:


> For those under water are your gaming temps and idle? Something strange happened to me. I just finished installing my custom loop and my idle temps were 29C and gaming temps at full load about 48-49C. I added a bit of OC to the core at +100(1450MHz on the core/1870 boost now). Performance improved a bit. Then I tried to see if adjusting the voltage could yield any result and cranked it to the right. It didn't so I put it back to 0. Then after that my idle temps went up to 33-35C and my gaming temps easily reach 58C. I did a Time Spy Stress Test and the temps were constantly at 57-58. I am no longer applying more voltage so I was wondering what could have caused this. Reading reviews online, I should hardly go over 51C during a 3 minutes benchmark.My radiator is running at 4500 RPM constantly as well. Pretty much 100% speed. I have the Gigabyte Aorus XTreme Waterforce WB. Radiator is 240mm/38mm. These temps seem rather high under water no?


 
...it's not so much the absolute temps but the change from before which raises an eyebrow. While I run 2x Aorus XTreme Waterforce WB, I use a lot more rads/fans/pumps in the loop so my temps (2x = low to mid 30s in PortRoyal etc) are probably not representative. Still, try disabling the card, uninstall the drivers with DDU >>> but also uninstall 'completely' (incl. older saved profiles) MSI AB or whatever app you use to oc...I had problems before with MSI AB on rare occasions re. voltage settings / release.




matrix17 said:


> Hello everyone,
> I have an Asus Dual OC 2080 Ti. I was experimenting with it and tried the Galax bios a couple of times without issues. Three days ago I installed an Accelero, and due to the better thermals I thought I could give the Galax bios one more try. Thing is, flashing worked, but the card is stuck at 1350MHz. No matter what I tried (drivers, re-flashing, putting my cooler back), Afterburner's power limit is greyed out, and even touching the sliders brings up a black screen. Any help would be hugely appreciated.


 
Apart from the above tips, I can only recall three conditions for the '1350'... 
1.) card actually bricked 
2.) when installing after-market cooler, a fan header on the GPU PCB was/is not connected (issue w/ certain MSI cards which go into safe mode then) or 
3.) when installing after-market cooler, slightly uneven pressure on the mounting screws.


----------



## Juub

J7SC said:


> ...it's not so much the absolute temps but the change from before which raises an eyebrow. While I run 2x Aorus XTreme Waterforce WB, I use a lot more rads/fans/pumps in the loop so my temps (2x = low to mid 30s in PortRoyal etc) are probably not representative. Still, try disabling the card, uninstall the drivers with DDU >>> but also uninstall 'completely' (incl. older saved profiles) MSI AB or whatever app you use to oc...I had problems before with MSI AB on rare occasions re. voltage settings / release.
> 
> 
> 
> 
> 
> Apart from the above tips, I can only recall three conditions for the '1350'...
> 1.) card actually bricked
> 2.) when installing after-market cooler, a fan header on the GPU PCB was/is not connected (issue w/ certain MSI cards which go into safe mode then) or
> 3.) when installing after-market cooler, slightly uneven pressure on the mounting screws.


I was thinking maybe it's me. I just started testing it yesterday and the system had been off for hours. Now it's idling at 31-33C, which seems to be within the norm. It still reaches 58C in Time Spy, which I admittedly had not run yesterday. I just ran Shadow of the Tomb Raider, which capped at about 49, but it's a short bench, so after a few hours of being on, the few degrees might be the difference.

I'll keep troubleshooting.


----------



## J7SC

Juub said:


> I was thinking maybe it's me. I just started testing it yesterday and the system had been off for hours. Now it's idling at 31-33C, which seems to be within the norm. It still reaches 58C in Time Spy, which I admittedly had not run yesterday. I just ran Shadow of the Tomb Raider, which capped at about 49, but it's a short bench, so after a few hours of being on, the few degrees might be the difference.
> 
> I'll keep troubleshooting.



...obviously, also keep track of ambient during troubleshooting, and how long the system has been running near/at full tilt. The Aorus XTR WB stock bios regularly pulls close to 380W on mine....that's a lot of heat energy to disperse.


----------



## KCDC

Juub said:


> I was thinking maybe it's me. I just started testing it yesterday and the system had been off for hours. Now it's idling at 31-33C, which seems to be within the norm. It still reaches 58C in Time Spy, which I admittedly had not run yesterday. I just ran Shadow of the Tomb Raider, which capped at about 49, but it's a short bench, so after a few hours of being on, the few degrees might be the difference.
> 
> I'll keep troubleshooting.



If you have a temp probe, I'd add that to your rad exhaust and see how hot your water's getting while you're running. It won't be 100% accurate as a probe getting temp from the water directly, but should be close enough. What temp are the fans tied to? Preferably, they should be based on the water temp if you have a way of doing that, normally requires an external controller unless your mobo has a water temp plug. Is this loop just the GPU or is your CPU also in it? The card's gonna pull more power once you start to OC, so would be best to keep tabs on your water's temp. My whole system idles about 7c higher when I switched from 1080tis to 2080tis, now at 33-35c depending on ambient. Plus 9900x runs pretty hot. I've kept both cards at about 50c max when benching/gaming.


----------



## King4x4

Just received my refurbished Gigabyte Aorus Waterforce 2080 ti from newegg:

15mins later:

It's up!

Ran to GPU-Z dreading the micron curse... but came up with Samsungs!

Fired up 3dmark Time Spy and got a decent 15608 GPU score and 14475 as an overall.

https://www.3dmark.com/spy/8193365

Started playing the settings... the max I could go on the Core clock was like 100+ and about +2300mhz according to Aorus OC software (3dmark read it as 325mhz increase though)

https://www.3dmark.com/spy/8194154

I am one happy monkey.


----------



## matrix17

J7SC said:


> ...
> 
> 
> 
> 
> 
> Apart from the above tips, I can only recall three conditions for the '1350'...
> 1.) card actually bricked
> 2.) when installing after-market cooler, a fan header on the GPU PCB was/is not connected (issue w/ certain MSI cards which go into safe mode then) or
> 3.) when installing after-market cooler, slightly uneven pressure on the mounting screws.


I'm afraid it's not the cooler; it was working fine before flashing. Maybe the VRMs got heated up, or the card is failing. Having such an expensive brick is bad for the health!


----------



## thauch

King4x4 said:


> Just received my refurbished Gigabyte Aorus Waterforce 2080 ti from newegg: (...)
> 
> I am one happy monkey.




Is your case inverted or is that an upside down pic?


Sent from my iPhone using Tapatalk


----------



## J7SC

King4x4 said:


> Just received my refurbished Gigabyte Aorus Waterforce 2080 ti from newegg: (...)
> 
> I am one happy monkey.


 
Nice! Also, I posted this before, but per the vid below (around 4m35s+), you can actually take the Aorus XTR Waterforce WB block apart, i.e. for checking the coolant / block out after a year or so of use, or if you want to mod the whole external look. The pic below the vid was the mount of my card-1 in an old Z170 testbench... happy monkey!


----------



## King4x4

thauch said:


> Is your case inverted or is that an upside down pic?
> 
> 
> Sent from my iPhone using Tapatalk


It's inverted.



J7SC said:


> Nice ! Also, I posted this before but per vid below (around 4m35s ++), you can actually take the Aorus XTR Waterforce WB block apart, i.e. for checking coolant / w-block things out after a year or so of use - or if you want to mod the whole external look. Pic below vid was the mount of my card-1 in an old Z170 testbench...happy monkey !
> 
> https://www.youtube.com/watch?v=glpZ0nEe78Y


Nice! might do that down the line.

REPed.


----------



## KingEngineRevUp

Sorry to bug you all, I have a question about whether my card is an OC dud or not. I'm pretty sure it is. 

TL;DR: Guru3D at 2100 MHz core gets a Timespy GPU score of 15,283. My card, which seems like an OC dud, gets a Timespy GPU score of 15,777 with an AIO. Do I have a dud? Does thermal throttling have more of an impact because of GPU Boost? Of course power limiting is an issue too.

Is it more about the sustained clocks versus the maximum applied clock? I come from a 1080 Ti; I recently purchased a 2080 Ti Sea Hawk. Temperatures stay below 60C.

I was bummed out because I feel like I got an overclocking dud. The best I can do for the Firestrike Extreme stress test is:
- 2010 MHz core sustained (+100 on core)
- 8150 MHz memory (+1150 on memory)
- Timespy GPU score of 15,777 https://www.3dmark.com/fsst/1216197

I can run benchmarks at +118 on the core in Timespy and get a GPU score of 16,064, but just because I can benchmark and play some games with this OC doesn't mean I trust it, since it can't pass the Firestrike Extreme stress test, which I find to be a hard stress test for GPU OCs to pass. https://www.3dmark.com/spy/8189073

Are my scores good? I notice reviewers are able to get 2100-2115 MHz on the core, but looking at their clocks, they're thermally throttled and the core relaxes down to 1980 MHz.
- Timespy for Guru3D with a 2100 MHz OC gets a GPU score of 15,283. https://www.guru3d.com/articles_pages/geforce_rtx_2080_ti_founders_review,37.html

Even though my score beats Guru3D's at lower clock speeds, I still feel like my card is probably a dud. The AIO is probably what's helping me get a good benchmark score, since my clocks don't fluctuate as badly. Or is it because I'm power limited and can't access more voltage to get higher clocks? Either way, I'm not getting a good OC at low voltages, I guess. 

What are your thoughts? Sorry for the long post.


----------



## ESRCJ

KingEngineRevUp said:


> Sorry to bug you all, I had a question regarding if my card is a OC dud or not. I'm sure it is.
> 
> TL;DR: Guru3D with 2100 Mhz core gets Timespy GPU score of 15283. My card, which seems like a OC dud, gets a Timespy GPU score of 15,777 with an AIO. Do I have a dud? Is thermal throttling more of a impact because of RTX boost? Of course power limiting is an issue.
> 
> Is it more about the sustained clocks versus the max applied clock? I come from a 1080 Ti, I recently purchased a 2080 Ti Seahawk. Temperatures stay below 60C.
> 
> I was bummed out because I feel like I got a overclocking dud. The best I can do for Firestrike Extreme stress test is
> - 2010 Mhz Core sustained (+100 on core)
> - 8150 Mhz Memory (+1150 on memory)
> - Timespy GPU score is 15,777 https://www.3dmark.com/fsst/1216197
> 
> I can run benchmarks at +118 on the core in Timespy and get a GPU score of 16,064 but because I can benchmark and play some games with this OC doesn't make me trust it since it can't pass Firestrike Extreme stress testing, which I find to be a hard stress test to pass for GPU OCs. https://www.3dmark.com/spy/8189073
> 
> Are my scores good? I notice reviewers are able to get 2100-2115 Mhz on the core but from looking at their clocks, they are thermal throttled and core clocks relax down to 1980 Mhz.
> - Timespy for Guru3D with 2100 Mhz OC gets a GPU score of 15,283. https://www.guru3d.com/articles_pages/geforce_rtx_2080_ti_founders_review,37.html
> 
> Even though my score beats Guru3D with lower clock speeds, I still feel like my card is probably a dud. The AIO is what's probably helping me get a good benchmark score since my clocks don't fluctuate as bad. Or is it because I'm power limited and can access more voltage to get my higher clocks? Either way, I'm not getting good OC at low voltages I guess.
> 
> What are your thoughts? Sorry for the long post.


A few things (in no particular order):

(i) The clocks reported by reviewers aren't necessarily sustained throughout their testing.

(ii) A frequency without an associated voltage is meaningless. What voltage are you targeting for a given clock? For most BIOS, you won't even hit the 1.093V limit without modifying the VF curve manually. Adjusting the slider will likely get you up to 1.063V at most, as the default VF curves for most BIOS are flat beyond that.

(iii) These cards throttle for a variety of reasons. The most common will be related to temperature and it's also BIOS-specific. You can expect a 15MHz decrease once you pass 36-40C (again, the exact value is BIOS-specific). There are a few more with higher temperatures, but I don't know what they are since I've never experienced them. 

(iv) When running stress tests and some very demanding games at 4K, power throttling is very likely with most BIOS. This one is simple: if your card hits the power limit, it will run at lower voltages and frequencies to remain under the limit. 

(v) Regarding your score, you're likely being held back by clock speeds, both maximum and sustained. Graphics Test 2 will cause your card to draw a lot of power, so the lower-end of your VF curve will play a huge role on your score. The first test will depend more so on the upper end of the VF curve, as it's not nearly as power-intensive. However, I could certainly see the 325W BIOS resulting in throttling even in Graphics Test 1, given you're actually running at the higher end of the VF curve. 

(vi) Now, is 2010MHz any good? Again, it's hard to say without seeing the associated voltage on the GPU. For example, if you're hitting 2010MHz at 0.9V, then I'd say you probably have very good silicon. If you're hitting 2010MHz and can't push any further at, let's say, 1.063V, then you have something mediocre at best, maybe even a bit below average. I don't have any aggregate data to base this on, though, only anecdotal evidence.
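The temperature behaviour in (iii) can be sketched as a toy model: the card sheds one ~15 MHz boost bin each time it crosses a temperature threshold. The thresholds below are made-up placeholders (the real ones are BIOS-specific, as noted), so treat this as an illustration rather than a lookup table:

```python
# Toy model of Turing's temperature-based clock throttling: one ~15 MHz
# boost bin is lost per temperature threshold crossed. Threshold values
# here are illustrative only; real ones vary by BIOS.

BIN_MHZ = 15
TEMP_THRESHOLDS_C = [38, 46, 54, 62]  # assumed, BIOS-specific in reality

def effective_clock(requested_mhz: int, gpu_temp_c: float) -> int:
    """Clock after temperature bins; ignores power/voltage limits."""
    bins_lost = sum(1 for t in TEMP_THRESHOLDS_C if gpu_temp_c >= t)
    return requested_mhz - bins_lost * BIN_MHZ

# A card asking for 2010 MHz at 58C would sit three bins lower:
print(effective_clock(2010, 58))  # -> 1965
```

This is why a "2100 MHz" overclock from a reviewer can quietly spend most of a benchmark run 45-60 MHz below that.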


----------



## KingEngineRevUp

ESRCJ said:


> A few things (in no particular order):
> 
> (i) The clocks reported by reviewers aren't necessarily sustained throughout their testing.
> 
> (ii) A frequency without an associated voltage is meaningless. What voltage are you targeting for a given clock? For most BIOS, you won't even hit the 1.093V limit without modifying the VF curve manually. Adjusting the slider will likely get you up to 1.063V at most, as the default VF curves for most BIOS are flat beyond that.
> 
> (iii) These cards throttle for a variety of reasons. The most common will be related to temperature and it's also BIOS-specific. You can expect a 15MHz decrease once you pass 36-40C (again, the exact value is BIOS-specific). There are a few more with higher temperatures, but I don't know what they are since I've never experienced them.
> 
> (iv) When running stress tests and some very demanding games at 4K, power throttling is very likely with most BIOS. This one is simple: if your card hits the power limit, it will run at lower voltages and frequencies to remain under the limit.
> 
> (v) Regarding your score, you're likely being held back by clock speeds, both maximum and sustained. Graphics Test 2 will cause your card to draw a lot of power, so the lower-end of your VF curve will play a huge role on your score. The first test will depend more so on the upper end of the VF curve, as it's not nearly as power-intensive. However, I could certainly see the 325W BIOS resulting in throttling even in Graphics Test 1, given you're actually running at the higher end of the VF curve.
> 
> (vi) Now is 2010MHz any good? Again, it's hard to say without seeing the associated voltage on the GPU. For example, if you're hitting 2010MHz at 0.9V, then I'd say you probably have very good silicon. If you're hitting 2010MHz and can't push any further at let's say 1.063V, then you have something mediocre at best, maybe even a bit below average. Although I don't have any aggregate data to base this off of, but rather anecdotal evidence.


Thank you for responding. What you're saying is what I suspected. 0.993V is where I'm at for 2010 MHz. Of course my GPU clocks fluctuate everywhere because I'm hitting the power limit. GPU power in intensive benchmarks hits about 325-330 watts, the max my card can handle. 

I'm going to flash the Galax 380 Watt bios to see if that gives me access to my other voltage points, hopefully I don't crash.


----------



## Juub

King4x4 said:


> Just received my refurbished Gigabyte Aorus Waterforce 2080 ti from newegg:
> 
> 
> Fired up 3dmark Time Spy and got a decent 15608 GPU score and 14475 as an overall.
> 
> https://www.3dmark.com/spy/8193365
> 
> Started playing the settings... the max I could go on the Core clock was like 100+ and about +2300mhz according to Aorus OC software (3dmark read it as 325mhz increase though)
> 
> https://www.3dmark.com/spy/8194154
> 
> I am one happy monkey.


Freakn' A mate. Got the same card too for an incredible price two days ago. Our scores are mostly the same as well, and yeah, the max I was able to go is also +100MHz on the core. I think we're running into power limits, as I seem to max out at 366W. Seems we have to look at flashing the BIOS to go over those scores.

What are your temps?


----------



## KingEngineRevUp

@ESRCJ well I guess at 380 watts, my GPU score went up to 16,096. Not sure if it's worth pushing an extra 50 watts.


----------



## Juub

KingEngineRevUp said:


> @ESRCJ well I guess at 380 watts, my GPU score went up to 16,096. Not sure if it's worth pushing an extra 50 watts.


It increased by how much and did you flash your BIOS?


----------



## ESRCJ

KingEngineRevUp said:


> @ESRCJ well I guess at 380 watts, my GPU score went up to 16,096. Not sure if it's worth pushing an extra 50 watts.


Are you running your radiator fan at max RPM? If not, you could always go that route for benchmarking. Keep in mind, a 120mm AIO isn't going to vastly outperform some of these triple fan air-cooled variants with massive heatsinks when it comes to cooling, if at all. So if you're hitting 60C, I would imagine you're hitting a couple of 15MHz frequency drops from temps alone. You could try locking your frequency and voltage to a single point on the VF curve, which may help a little. At the end of the day though, your GPU is probably operating below 2000MHz some of the time during the benchmark and I would say 16K graphics score is about where you should expect to see it with those frequencies. If you're looking for 17K or higher, you need very good silicon, more cooling capacity, and a high power limit (which you have now).
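For anyone unsure what "locking your frequency and voltage to a single point" means in practice: conceptually you flatten the VF curve at a chosen voltage so boost can't walk past it. A minimal sketch with an invented curve (the `vf_curve` values are not from any real BIOS):

```python
# Sketch of locking a voltage/frequency curve at a single point: every
# entry at or above the chosen voltage gets clamped to that point's
# frequency. Curve values below are illustrative, not from a real BIOS.

vf_curve = [(0.80, 1830), (0.90, 1935), (1.00, 2025), (1.063, 2085)]

def lock_point(curve, voltage):
    """Flatten the curve at `voltage`: boost can no longer climb past it."""
    locked_freq = max(f for v, f in curve if v <= voltage)
    return [(v, min(f, locked_freq)) for v, f in curve]

print(lock_point(vf_curve, 1.00))
```

In Afterburner this is done from the VF curve editor (Ctrl+F); the sketch only shows the resulting shape of the curve, not the tool's UI.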


----------



## KingEngineRevUp

Juub said:


> It increased by how much and did you flash your BIOS?


Yes, I flashed the Galax BIOS. My performance went up approximately 3%. Not bad, but the card has to draw 50 watts more for that increase, which isn't worth it IMO. Perhaps I could set a higher OC at 1.063V; I didn't try that. If I had a 140mm radiator, I would probably get better results with 380 watts. 
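For what it's worth, the stable-for-stable numbers in this exchange (15,777 at ~330 W on the stock BIOS, 16,096 at 380 W on the Galax BIOS) put the trade at roughly a 2% score gain for ~15% more power, which backs up the "not worth it" call; quick arithmetic:

```python
# Score gained vs extra power drawn for the Galax 380W flash.
# Scores and wattages are the ones reported in this exchange.

def perf_per_watt(score: float, watts: float) -> float:
    return score / watts

stock = perf_per_watt(15_777, 330)    # stock BIOS, ~330 W observed
flashed = perf_per_watt(16_096, 380)  # Galax BIOS, 380 W limit

gain_pct = (16_096 / 15_777 - 1) * 100
print(f"score gain: {gain_pct:.1f}%")                        # ~2%
print(f"pts/W: {stock:.1f} stock vs {flashed:.1f} flashed")  # efficiency drops
```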



ESRCJ said:


> Are you running your radiator fan at max RPM? If not, you could always go that route for benchmarking. Keep in mind, a 120mm AIO isn't going to vastly outperform some of these triple fan air-cooled variants with massive heatsinks when it comes to cooling, if at all. So if you're hitting 60C, I would imagine you're hitting a couple of 15MHz frequency drops from temps alone. You could try locking your frequency and voltage to a single point on the VF curve, which may help a little. At the end of the day though, your GPU is probably operating below 2000MHz some of the time during the benchmark and I would say 16K graphics score is about where you should expect to see it with those frequencies. If you're looking for 17K or higher, you need very good silicon, more cooling capacity, and a high power limit (which you have now).


Yeah, I had my fan at max. I also repasted with Gelid GC-Extreme, which lowered my temperatures by 3C. The temperatures still went up to 58-60C with 380 watts into the card. I think I'm content with my results. It would be nice to OC higher, but you can't win the silicon lottery every time. My memory can do +1150 though, so I think that's pretty decent. I've seen some people who can't go past +800.

My 1080 Ti OC'd like a champ and undervolted well too. I guess you win some and lose some. Time to enjoy some games!


----------



## JustinThyme

KCDC said:


> NVLink is a pretty amazing step forward considering the memory pooling for GPU rendering. The bandwidth is great as well, I tested mine and I hit about 93 GB/s. I made the switch from 2x 1080ti once I learned the pooling wasn't a dream. Just waiting for redshift to get there. Rise of the Tomb Raider and Shadow of War have been perfect @ 7680x1440 surround. Assassin's creed Odyssey, however is so bad that I have to force NVLink off just to get it to work at a decent fps, so no Surround for that one. Metro Exodus is only using one card, but I'm surprised at the framerates on moderate settings with RTX on while on Surround. 70s. I did do this for my design work mostly, but still happy to see that SLI/NVLink isn't totally dead. If there is a studio that would provide this sort of multi gpu support, even if it's down the road, it would be a studio like CDPR.
> 
> 
> 
> I think I like your Nvidia bridge better than the aorus from a design standpoint.
> 
> 
> EDIT: We have the same case, I was worried I couldn't fit two cards vertically, but it seems that you've achieved this. is the fit without any DIY adjustment? Reread your post, looks like the big issue was the 2-slot bridge. disregard.


I'm running the Enthoo Elite. I had the ROG spanning 3 slots with an ASUS R6E. No worries getting the vertical mount. The hunt was for a 2-slot NVLink bridge, which is the same as the Quadro RTX bridge. It's the one for the 6000 on this page: https://www.nvidia.com/en-us/design-visualization/quadro-store/ 

No one else makes them, only 3- and 4-slot. 

The caveat to the vertical mount is that it covers all the PCIe slots and you lose HDMI on the forward-most card. I use DisplayPort anyhow, and only a single monitor, a ROG PG348Q. You could fit other PCIe devices, but you would have to use extensions and mount them on a standing cage on the bottom. Did it for a while with 2 Intel 900P AIC SSDs in VROC, but ditched it because the latency was killing my low queue depth reads and writes. 5000MB/s sequential is fine if all you do is move a bunch of large files around all day, but for 4K it slows things down, even the boot time. So out they went and I'm using a single U.2 905P hiding around the back.

Which PCIe extension matters too. I tried a few and got the best results, with no noticeable loss, with a brand called Uphere I think; I'll dig through the boxes tomorrow and post back. 

I used the vertical cage that came with the case, mounted all the way to the bottom. Gives better clearance and still leaves a little room under.


Blocks are just too purdy to leave hidden.

Kinda have to question the code that makes anything run worse with more under the hood. 

Rise of the Tomb Raider is awesome with it. Anthem isn't too shabby either. There isn't much out there that will use SLI, so I try to stick to what does. I'm not one to buy the latest titles; I wait for them to develop a bit first. The last thing I bought new had a million patches within the first few months. So what if it's 6 months old, it's new to me!


----------



## Juub

matrix17 said:


> Hello eveyone
> I have an Asus dual oc 2080 ti.I was experimenting with it,tried the Galax bios a couple of times without issues.Three days ago i installed an accelero,and due to the better thermals i ghought i could give the Galax bios one more try.Thing is flashing worked,but card is stuck at 1350 mhz..No matter what i i tried,drivers,flashing,puting my cooler back,afterburner's power limit us greyed out,and even touching the sliders will bring up a black screen.Any help would be hugely appreciated


Seems it's similar to this.

https://www.overclock.net/forum/69-nvidia/1722102-2080ti-lightning-dead-after-xoc-bios-flash.html

Seems your card is running in safe mode.


----------



## matrix17

J7SC said:


> ...it's not so much the absolute temps but the change from before which raises an eyebrow. While I run 2x Aorus XTreme Waterforce WB, I use a lot more rads/fans/pumps in the loop so my temps (2x = low to mid 30s in PortRoyal etc) are probably not representative. Still, try disabling the card, uninstall the drivers with DDU, but also uninstall 'completely' (incl. older saved profiles) MSI AB or whatever app you use to oc... I had problems before with MSI AB on rare occasions re. voltage settings / release.


Juub said:


> Seems it's similar to this.
> 
> https://www.overclock.net/forum/69-nvidia/1722102-2080ti-lightning-dead-after-xoc-bios-flash.html
> 
> Seems your card is running on safe mode.


Thanks for the info. The problem did occur with the Accelero installed, however I have already reverted to the stock cooler and flashing does not help.


----------



## J7SC

Juub said:


> Freakn' A mate. Got the same card too for an incredible price two days ago. Our scores are mostly the same as well and yeah max I was able to go is also +100MHz on the core. I think we're running into power limits as I seem to max out the 366W. Seems we have to look at flashing the BIOS to go over those scores.
> 
> What are your temps?



My two Aorus WB typically pull between 375W and 380W (per GPU-Z) on the stock BIOS in something like Superposition 4K or 8K. I figure flashing the Galax 380W won't really make much of a difference, apart from potentially losing some of the IO. There are XOC BIOS with very high power limits, but they apparently affect IO as well, plus you REALLY do need A LOT of rad for those.


----------



## Intrud3r

I noticed I can only get +100 on my core too on my Aorus 2080 ti. Mem is running solid @ +200 atm ... still wanna try higher on that one ... stock card.

+125 runs benches no problem
Gaming however, it freezes up. Didn't matter if I upped my voltage by +40mv

And yeah ... stock bios on mine, already saw 370W+ showing up in HWiNFO.

(And somehow my backplate LED won't light up. With an older RGB Fusion version it doesn't show the logo as an option at all, with the latest it shows the backplate logo as an option, it's clickable, but it just won't light up ... not that it matters much, as I probably would have disabled it anyway, but just wanna throw it out here. Side logo and name + fans RGB works flawlessly)


----------



## J7SC

Intrud3r said:


> I noticed I can only get +100 on my core too on my Aorus 2080 ti. Mem is running solid @ +200 atm ... still wanna try higher on that one ... stock card.
> 
> +125 runs benches no problem
> Gaming however, it freezes up. Didn't matter if I upped my voltage by +40mv
> 
> And yeah ... stock bios on mine, already saw 370W+ showing up in HWiNFO.
> 
> (And somehow my backplate LED won't light up. With an older RGB Fusion version it doesn't show the logo as an option at all, with the latest it shows the backplate logo as an option, it's clickable, but it just won't light up ... not that it matters much, as I probably would have disabled it anyway, but just wanna throw it out here. Side logo and name + fans RGB works flawlessly)


I never touch the voltage slider, nor run RGB Fusion (after I tried it out once: buggy, hackable...). I did notice that 90% of the time, RGB cycling on the GPUs stops when running 3DMark SysInfo for some weird reason, even after the bench has finished and I'm doing other stuff, until I reboot. As for clocks, I can get a bit higher than that, but the two Aorus have their own loop with 3x RX360/60 rads and related 'condiments'...


----------



## KingEngineRevUp

Intrud3r said:


> I noticed I can only get +100 on my core too on my Aorus 2080 ti. Mem is running solid @ +200 atm ... still wanna try higher on that one ... stock card.
> 
> +125 runs benches no problem
> Gaming however, it freezes up. Didn't matter if I upped my voltage by +40mv
> 
> And yeah ... stock bios on mine, already saw 370W+ showing up in HWiNFO.
> 
> (And somehow my backplate LED won't light up. With an older RGB Fusion version it doesn't show the logo as an option at all, with the latest it shows the backplate logo as an option, it's clickable, but it just won't light up ... not that it matters much, as I probably would have disabled it anyway, but just wanna throw it out here. Side logo and name + fans RGB works flawlessly)


I have found the Firestrike Extreme stress test to be an invaluable tool. I can game and bench at 2050-2070 MHz, but I can't pass the Firestrike Extreme stress test with it. 

I have backed down to 2010 MHz on the core and +1150 MHz on the memory because this passes Firestrike Extreme. So far, out of all the stress tests I have done, this one kicks my GPU's ass even harder than Timespy. 

https://www.3dmark.com/fsst/1216508

EDIT: This is +100 on the core for the Seahawk X and +130 on the Galax Bios.


----------



## Juub

Intrud3r said:


> I noticed I can only get +100 on my core too on my Aorus 2080 ti. Mem is running solid @ +200 atm ... still wanna try higher on that one ... stock card.
> 
> +125 runs benches no problem
> Gaming however, it freezes up. Didn't matter if I upped my voltage by +40mv
> 
> And yeah ... stock bios on mine, already saw 370W+ showing up in HWiNFO.
> 
> (And somehow my backplate LED won't light up. With an older RGB Fusion version it doesn't show the logo as an option at all, with the latest it shows the backplate logo as an option, it's clickable, but it just won't light up ... not that it matters much, as I probably would have disabled it anyway, but just wanna throw it out here. Side logo and name + fans RGB works flawlessly)





J7SC said:


> I never touch the voltage slider, nor run RGB Fusion (after I tried it out once <> buggy, hackable...). I did notice that 90% of the time, RGB 'cycling' of the GPUs stops when running 3DMark Sysinfo, for some weird reason - even after the bench has finished and I'm doing other stuff, until I reboot. As to clocks, I can get a bit higher than that, but the two Aorus have their own loop with 3x RX360/60 rads and related 'condiments'...


I've never seen 380W. The most I've seen is 366W. I figured I was power limited and used the XOC BIOS from the Kingp!n card, but it didn't help one bit. Power usage increased to 530W and the clocks were staying at 2100MHz, but the scores were worse than at 2070MHz. Ultimately I reverted back to the stock BIOS. I also tried the Strix BIOS, which has a lower factory overclock, and tried going higher than 2085MHz; no luck.

I envy those guys pulling 2190MHz on the core.


----------



## Xdrqgol

Some weeks ago I got my GALAX 2080 Ti HOF, 3-fan edition, and I was asking around here for the XOC BIOS and was messaging @86Jarrod, who provided me some info on how to set things up. 

While I was tempted to flash the card, I realized that I am not ready for watercooling (I get really bad anxiety just visualizing a water loop in my system. Really scared that it will somehow leak water into the system... :/). So I actually did some benchmarks with the triple fan and reached the following:

140+ Mhz on Core
1400+ Mhz on Memory

!Rate my overclock potential on air - ambient temp 26-27C!

P.S. I am pretty sure I can push it more on water, like... much much more! I am kind of happy... what do you say? Is this a good OC card?

\o/


----------



## KingEngineRevUp

Xdrqgol said:


> Some weeks ago, i got my GALAX 2080 ti HOF , 3 fan edition and i was asking for the XOC bios...around here and was messaging @86Jarrod , providing me some info on how to setup things
> 
> While I was tempted to flash the card, i realized that I am not ready for watercooling (I really have some bad anxiety only visualizing having that water loop in my system. Really scared that it will leak somehow water in the system...:/). So , I actually did some benchmarks with the triple fan and reached the following:
> 
> 140+ Mhz on Core
> 1400+ Mhz on Memory
> 
> !Rate my overclock potential on air - ambient temp 26-27C!
> 
> P.S. I am pretty sure I can push it more on water, like...much much more ! I am a bit happy kinda off...what do you say? is this a good OC card?
> 
> \o/


What are your actual clock speeds? I know people like to type +XXX, but a lot of BIOS are different. For example, +100 for my MSI Sea Hawk X is +130 on the Galax BIOS.

EDIT: I'd also recommend running the Firestrike Extreme stress test. I was playing, stress testing and benching fine at +118 (+148 on the Galax BIOS), and it still ended up not doing well in the Firestrike Extreme stress test. 

Firestrike has a way of revealing unstable overclocks.


----------



## Juub

Xdrqgol said:


> Some weeks ago, i got my GALAX 2080 ti HOF , 3 fan edition and i was asking for the XOC bios...around here and was messaging @86Jarrod , providing me some info on how to setup things
> 
> While I was tempted to flash the card, i realized that I am not ready for watercooling (I really have some bad anxiety only visualizing having that water loop in my system. Really scared that it will leak somehow water in the system...:/). So , I actually did some benchmarks with the triple fan and reached the following:
> 
> 140+ Mhz on Core
> 1400+ Mhz on Memory
> 
> !Rate my overclock potential on air - ambient temp 26-27C!
> 
> P.S. I am pretty sure I can push it more on water, like...much much more ! I am a bit happy kinda off...what do you say? is this a good OC card?
> 
> \o/


Unlikely you can push it more. You can probably get more stable clocks though.


----------



## chibi

Hi there, will the official Nvidia store 2080 Ti FE card be a good candidate for the Galax 380W BIOS? Any quirks to flashing that card with said BIOS? Would it be as easy as flashing, installing Afterburner and upping the power limit to the max, and then I'm good to go for gaming without hitting the PL?

Edit - will be on a full custom loop with 2x 360mm rads, 1x CPU and 1x GPU full blocks.


----------



## KingEngineRevUp

chibi said:


> Hi there, will the official nvidia store 2080 ti fe card be a good candidate for the galax 380w bios? Any quirks to flashing that card with said bios? Would it be as easy as flashing, installing Afterburner and upping the power limit to the max and I should be good to go for gaming and not hitting PL?


Sure, but know that you're putting an extra 50 watts of heat into your card, so you'll have to fight that off too. More power means higher temperatures. If you get pushed past a boost threshold, you might lose 15 MHz on your core clock anyway, and it could be counterproductive. 

Example (numbers are made up): you're at 2000 MHz, 1V @ 73C, and thermal throttling occurs at 76C, where you lose 15 MHz. You put an extra 50 watts into your card, which pushes you to 78C; you lose 15 MHz, but the power limit lets you run at a higher voltage now. You're now running 2000 MHz, 1.03V @ 78C. 

The BIOS helps a lot, but it won't give you performance without lower temperatures to go with it. I think people on a full custom water loop benefit the most from it: people who keep their card at 40-45C even with 380 watts going into it.


----------



## chibi

KingEngineRevUp said:


> Sure, but know that you're getting an extra 50 watts of heat into your card so you'll have to fight that off also. With more heat means more higher temperatures. If you get pushed past your boost threshold, you might lose 15 Mhz on your core clock anyways and it might be counter productive.
> 
> Example (numbers are made up), you're at 2000 Mhz 1V @ 73C, thermal throttling occurs at 76C where you lose 15 Mhz. You put an extra 50 watts on your card, that pushes you to 78C, you lose 15 Mhz but the power limit lets you run at a higher voltage now. You're now running 2000 Mhz 1.03V @78C.
> 
> The bios helps a lot, but the bios won't give you performance without the help of lowering your temperature. I think people on a full custom water loop benefit the most from it. People that keep their card at 40-45C even with 380 watts on the card.



Thanks! I just edited my post. My current Titan Xp is under 40 degrees with my custom loop so temps shouldn't be an issue.


----------



## Xdrqgol

KingEngineRevUp said:


> What are you actual clock speeds? I know people like to type +XXX, but a lot of Bios are different. For example, +100 for my MSI Sea Hawk X is +130 on the Galax bios.
> 
> EDIT: I'd also recommend running Firestrike Extreme Stress Test. I was playing, stress testing and benching fine with +118 (+148 on galax bios) and it ended up not doing well with Firestrike Extreme Stress Test.
> 
> Firestrike has a way to reveal unstable overclocks.


I did run Port Royal and Time Spy with these settings I mentioned above, many times...

Will double check the clocks and come back here with an update.


----------



## KingEngineRevUp

Xdrqgol said:


> I did run Port Royale and Time Spy with these settings I mentioned above, many times...
> 
> Will double check the clocks and come back here with an update.


Yeah, Timespy doesn't work your card as hard as Firestrike does. I discovered that after passing the Shadow of the Tomb Raider and Timespy stress tests but failing Firestrike. 

https://www.tomshardware.com/reviews/how-to-stress-test-graphics-cards,5449-3.html

Notice the Firestrike stress test worked the card much harder and raised it 2C higher. It's a really good test for judging an everyday overclock. I just do the standard 20-run loop.


----------



## Xdrqgol

KingEngineRevUp said:


> Yeah, Timespy doesn't work your card as hard as Firestrike does. I discovered that after passing Shadow of the Tomb Raider and TimeSpy stress test but I couldn't pass Firestrike.
> 
> https://www.tomshardware.com/reviews/how-to-stress-test-graphics-cards,5449-3.html
> 
> Notice Firestrike Stress Test worked the card much harder and raised the card 2C higher. It's a real good test to run to judge an everyday overclock. I just do the standard 20 run loop.


Yes yes, although I think I haven't made myself clear...

The +140 core / +1400 memory were only tested in benchmarks...

Will have to try and see what is stable in Firestrike Extreme and Shadow of the Tomb Raider.

Shadow of the Tomb Raider is a very good test for memory specifically. I think it is the best out there. If there are any problems with the memory, Shadow of the Tomb Raider will fail 100%. 


Will have to try and get back here with some results. Although I am sure I can push it even more on water... :/


----------



## kithylin

Xdrqgol said:


> Yes yes, although I think I haven't made myself clear...
> 
> The +140 core /+1400 memory , where only tested in benchmarks...
> 
> Will have to try and see what is stable in FireStrike Extreme and Shadow of the Tomb Raider
> 
> Shadow of the Tomb Raider is a very good test for MEMORY only. I think it is the best out there. If there are any problems with the memory , Shadow of Tomb Raider will fail 100%.
> 
> 
> Will have to try and get back here with some results. Although I am sure I can push it even more on water...:/


People have made the same mistake posting their benchmarks and results with the 900 series and the 1000 series, and now apparently again with the 2000 series. A +XXX MHz overclock doesn't mean jack squat to anyone. With how NVIDIA's boost works, +150 MHz will produce an entirely different resulting clock speed based on the temperature of your room, your card, the airflow in your case, the temperature of components inside your case, where your case is positioned in your room, etc. +150 MHz will be a wildly different actual clock speed for every one of the millions of NVIDIA owners. When posting anything about NVIDIA cards online, remember to post the actual physical core MHz you are seeing, not the offsets.
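To illustrate the offsets-vs-actual-clocks point, here's a rough Python sketch of the behavior described above. NVIDIA's boost algorithm is proprietary; the 15 MHz bin size, 5C step, and 40C throttle start are illustrative assumptions, not published values.

```python
# Illustrative model only: NVIDIA GPU Boost is proprietary. This sketches
# the commonly observed behavior where the effective core clock drops in
# small fixed steps as the GPU warms up, so the same offset produces
# different real clocks in different rooms and cases.

BIN_MHZ = 15           # assumed size of one boost bin
TEMP_STEP_C = 5        # assumed temperature interval per dropped bin
THROTTLE_START_C = 40  # assumed temperature where bins start dropping

def effective_clock(peak_mhz, gpu_temp_c):
    """Rough effective core clock at a given GPU temperature."""
    if gpu_temp_c <= THROTTLE_START_C:
        return peak_mhz
    bins_lost = (gpu_temp_c - THROTTLE_START_C) // TEMP_STEP_C
    return peak_mhz - bins_lost * BIN_MHZ

# Same card, same offset reaching a 2100 MHz peak:
print(effective_clock(2100, 38))  # cool water loop: holds 2100
print(effective_clock(2100, 75))  # hot air-cooled case: settles at 1995
```

Which is exactly why the observed core MHz is the useful number to post, not the offset.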


----------



## J7SC

kithylin said:


> People have made the same mistake posting their benchmarks and results with the 900 series and the 1000 series and now apparently again with the 2000 series. +XXX Mhz overclock doesn't mean jack squat to anyone. With how nvidia boost works, +150 mhz will be an entirely different resulting clock speed based on the temperature of your room, your card, the airflow in your case, and the temperature of components inside your case, where your case is positioned in your room, etc. +150 Mhz will be a wildly different resulting actual clock speed for every one of the millions of nvidia owners. When posting anything about nvidia cards online.. remember to post the actual physical core mhz you are seeing, not any offsets.



^^ That !

The *stock* 2080 Ti boost clock is advertised at 1545 MHz.
The *FE* 2080 Ti boost clock is advertised at 1635 MHz.
The Aorus XTR WB is advertised with a boost clock of 1770 MHz ...so +120 means different things to different people.

...and all that is somewhat irrelevant anyway, depending on cooling and other factors. While I can 'hit' 2235 MHz with the Aorus in the GPU-Z render test (the second part is max VRAM before crashing), that certainly doesn't apply to tougher benches on either GPU or VRAM, noting that the 2080 Ti usually clocks a bit higher in DX12 than DX11. 

A lot of it - per NVIDIA's Boost 4.0 algorithm - simply comes down to temp control... so if you think you got a dud, max out whatever you got with serious cooling! You may not get the max 'MHz', but you will see a score improvement instead.


----------



## kithylin

J7SC said:


> ...and all that is somewhat irrelevant anyways, depending on cooling and other factors. While I can 'hit' 2235 MHz w/ Aorus in GPUz render (second part is max VRAM before crashing), that certainly doesn't apply to tougher benches either on GPU or VRAM, noting that 2080 Ti RTX usually clocks a bit higher w/ DX12 than DX 11.


That's the other thing... I would be more curious what people are actually sustaining stable over a wide variety of games. A 2100 MHz core speed, for example, doesn't mean anything if you're only seeing that for the first 30 seconds and then it throttles down to, say, 1950 MHz after 30-45 minutes of gaming once the card heats up and reaches its actual core speed (I'm making these numbers up as a theoretical example).


----------



## JustinThyme

kithylin said:


> That's the other thing.. I would be more curious what people are actually sustaining stable over a wide variety of games. 2100 Mhz core speed for example doesn't mean anything if you're only seeing that for the first 30 seconds and then it throttles down to say 1950 Mhz after 30~45 minutes of gaming once the card heats up and reaches it's actual core speed (I'm making this up theoretical as an example).


This only applies to air and undersized liquid cooling systems.


----------



## kithylin

JustinThyme said:


> This only applies to air and under rated Liquid systems.


I would think statistically that there are probably a lot more people with air-cooled video cards than water-cooled ones. But even water cooling systems (even big ones) will eventually heat up over time. Admittedly it takes a lot longer than with air-cooled cards, but it does happen too.


----------



## Tragic

kithylin said:


> That's the other thing.. I would be more curious what people are actually sustaining stable over a wide variety of games. 2100 Mhz core speed for example doesn't mean anything if you're only seeing that for the first 30 seconds and then it throttles down to say 1950 Mhz after 30~45 minutes of gaming once the card heats up and reaches it's actual core speed (I'm making this up theoretical as an example).





I'm at +130 (2145 MHz) and +1550 MHz on the GDDR6.
MSI Gaming X Trio, fan curve 80%, ambient temp 22C.
Firestrike Extreme = 18979 using an 8700K @ 5 GHz.
Max temp after 3 runs = 67C.
Sustained OC core over time = 2115 MHz.


----------



## KingEngineRevUp

kithylin said:


> That's the other thing.. I would be more curious what people are actually sustaining stable over a wide variety of games. 2100 Mhz core speed for example doesn't mean anything if you're only seeing that for the first 30 seconds and then it throttles down to say 1950 Mhz after 30~45 minutes of gaming once the card heats up and reaches it's actual core speed (I'm making this up theoretical as an example).


I don't think clocks are that important anymore; benchmark scores matter more. For example, all the review websites can clock their cards to 2100 or 2150, but I saw a screenshot from Guru3D and their card had throttled down to 1980. Their score was lower than my sustained 2010 MHz. My GPU initially clocks up to 2070 for the first five seconds or so of a benchmark.


----------



## vmanuelgm

New 436.02 driver, standard GRD Windows 10 x64 version:


https://mega.nz/#!RyAnHCKa!VsybrTptKUwhvnf4p-W5XT9GYacDnGekOYhkiF7maFc


----------



## JustinThyme

kithylin said:


> I would think statistically that there is probably quite a lot more people with air cooled video cards vs water cooled ones. But even water cooling systems (even big ones) will eventually heat up over time. Admittedly it will take a lot longer than air cooled cards but it does happen too.


Statistically on a global scale, you are probably right. But this is OCN, so statistics for the average joe who barely knows the difference between RAM and a PSU are moot. If you stay on air, no GPU, especially RTX (the hottest-running to date), will ever reach its potential unless you are above the arctic circle and leave your rig in the frozen tundra. 

I can leave both of mine running any stress test you want 24x7, clocked to 2150, for a month or more and they will never top 40C. 

BTW, clocks are what get you the higher benches. Better cooling gets you the higher clocks. Cards throttle down due to heat. You may want to have another conversation with Mr. Jack Squat.....


----------



## Juub

JustinThyme said:


> Statistically on a global scale, you are probably right. This is OCN so statistics for the average joe who barely knows the difference between RAM and a PSU is moot. If you stay on air no GPU, especially RTX (hottest running to date) will ever reach its potential unless you are above the arctic circle and leave your rig in a frozen tundra.
> 
> I can leave both if mine running any stress test you want 24x7 clocked to 2150 for a month or more and they will never top 40C.
> 
> BTW Clocks are what gets you the higher Benches. Better cooling gets you the higher clocks. Cards throttle down due to heat. You may want to have another conversation with Mr Jack Squat.....


How do you even keep such low temps? I have a custom loop with a 240mm rad and my temps max out at about 55-58C. Ambient temps are in the 25-28C range.


----------



## KingEngineRevUp

Juub said:


> How do you even keep such low temps? I have a custom loop with a 240mm rad and my temps max out at about 55-58C. Ambient temps are in the 25-28C range.


Do you happen to monitor your liquid temperatures?


----------



## Juub

KingEngineRevUp said:


> Do you happen to monitor your liquid temperatures?


The sensors haven't arrived yet. I ordered this.

https://www.aliexpress.com/item/32863978933.html


----------



## kithylin

JustinThyme said:


> Statistically on a global scale, you are probably right. This is OCN so statistics for the average joe who barely knows the difference between RAM and a PSU is moot. If you stay on air no GPU, especially RTX (hottest running to date) will ever reach its potential unless you are above the arctic circle and leave your rig in a frozen tundra.
> 
> I can leave both if mine running any stress test you want 24x7 clocked to 2150 for a month or more and they will never top 40C.
> 
> BTW Clocks are what gets you the higher Benches. Better cooling gets you the higher clocks. Cards throttle down due to heat. You may want to have another conversation with Mr Jack Squat.....


It wasn't me that said clocks aren't important, that's KingEngineRevUp above. And that's great that you manage that. Congrats on successfully building a large custom water loop. Awesome temps there too. :thumb:


----------



## chibi

Juub said:


> How do you even keep such low temps? I have a custom loop with a 240mm rad and my temps max out at about 55-58C. Ambient temps are in the 25-28C range.



His 2x 480mm + 1x 360mm rads > your 240mm rad.


----------



## Juub

chibi said:


> His 2x 480mm + 1x 360mm rads > your 240mm rad.


I didn't think anything above 240mm made such a difference. Thought it would drop the temp by 1-3C while being much quieter. How many radiators can be used to cool down one card anyway?


----------



## kithylin

Juub said:


> I didn't think anything above 240mm made such a difference. Thought it would drop the temp by 1-3C while being much quieter. How many radiators can be used to cool down one card anyway?


It's not just a single 360mm. According to chibi, JustinThyme has two 480mm rads plus a 360mm rad, for a total of 1320mm of radiator space. You only have a single 240mm. More radiator surface area = lower component temps, all the way down until you have enough radiator to hold your components at the room's ambient temp.
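The arithmetic behind that comparison, as a quick sketch. Treating each 120mm of nominal rad length as one fan's worth of area is only a rough proxy; thickness, fin density, and fan speed all matter too.

```python
# Rough proxy for loop cooling capacity: sum the nominal radiator lengths
# and count 120mm-fan equivalents. Ignores thickness, fin density, and
# fan speed, which all matter in practice.

def rad_summary(rads_mm):
    """Total nominal radiator length (mm) and 120mm-fan equivalents."""
    total = sum(rads_mm)
    return total, total // 120

print(rad_summary([480, 480, 360]))  # JustinThyme's loop: (1320, 11)
print(rad_summary([240]))            # a single 240mm rad: (240, 2)
```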


----------



## J7SC

Juub said:


> I didn't think anything above 240mm made such a difference. Thought it would drop the temp by 1-3C while being much quieter. How many radiators can be used to cool down one card anyway?


 
...it probably isn't so much about cooling GPUs down more, but rather keeping them and their cooling loop from heating up too much in the first place (see pic). My 2x 2080 Ti loop has a total of 1080x60mm of cooling area - I had the rads, pumps, and fans lying around anyway from various other builds over the years. 

The upshot is that with ambient at about 22C, repeated Port Royal 2x GPU runs like the one below keep the temps from rising beyond the low 30s, and thus keep the clocks from dropping, etc.


----------



## Juub

kithylin said:


> It's not just a single 360mm. According to Juub, JustinThyme has two 480mm rads + a 360mm rad for a total surface area of 1320mm of radiator space. You only have a single 240mm. More radiator surface area = lower component temps all the way down until you have enough radiators to keep your components at the current room's ambient temps.


Got it. I had assumed he had used his 480mm for his CPU/GPU and the 360mm for something else.

My Corsair H100i V2 uses a 240mm rad. My custom loop another 240mm one.

This is my case

https://www.thermaltake.com/core-x71.html

Currently have: 1x 120mm rear fan for exhaust, 2x 120mm front fans for intake, 1x240mm side radiator for the GPU waterblock, 1x240mm top radiator for the CPU AIO cooler.


----------



## ESRCJ

Juub said:


> I didn't think anything above 240mm made such a difference. Thought it would drop the temp by 1-3C while being much quieter. How many radiators can be used to cool down one card anyway?


The more radiators you have, the more you can keep the fluid temperature in the loop down. Think about two metrics in particular when it comes to water cooling temperatures: (i) the difference (delta) in temperature between the component in question (the GPU in this case) and the fluid, and (ii) the fluid temperature itself.

Metric (i) is a matter of how efficient the waterblock is at removing heat from the GPU. This one is the bottleneck for those of us with massive loops. Metric (ii) is a matter of cooling capacity via the radiators. This one is the bottleneck for those with less radiator capacity.

So if you can keep your fluid temps below 30C throughout a heavy load thanks to great radiator capacity, and your GPU block can get you a 9C delta, then staying under 40C is obviously not an issue. Ambient temperature plays a role here too, so a more consistent version of metric (ii) would be the difference between ambient and fluid temperatures at steady state.
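The two metrics add up directly, which a tiny sketch makes concrete. The 5C/20C fluid rises and the 9C block delta are example numbers, not measurements from any particular loop.

```python
# Steady-state GPU temperature decomposed into the two metrics above:
# (i)  block delta = GPU temp minus fluid temp (waterblock efficiency)
# (ii) fluid rise  = fluid temp minus ambient (radiator capacity)

def gpu_temp_c(ambient_c, fluid_rise_c, block_delta_c):
    """GPU temp = ambient + fluid rise over ambient + block delta."""
    return ambient_c + fluid_rise_c + block_delta_c

# Big loop: fluid held ~5C over ambient, 9C block delta -> under 40C.
print(gpu_temp_c(25, 5, 9))   # 39
# Small loop: same block, but the fluid climbs 20C over ambient.
print(gpu_temp_c(25, 20, 9))  # 54
```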


----------



## KingEngineRevUp

The weirdest thing happens when I use the Galax BIOS: in Google Chrome, the whole window is black. I have to disable hardware acceleration to use Chrome. Went back to my original BIOS and the issue went away. 

What gives? Anyone else have this issue and knows of a possible solution?


----------



## majestynl

KingEngineRevUp said:


> The weirdest thing happens when I use the Galax bios. When I use google chrome, the whole chrome is black screened. I have to disable hardware acceleration to use chrome. Went back to my original BIOs and that issue went away.
> 
> What gives? Anyone else have this issue and knows of a possible solution?


Don't worry, it happens on a lot of systems, even on a system where I have a low-end Radeon GPU. It's a Chrome issue in the current version! Google it and you will find more info..


----------



## KingEngineRevUp

But that doesn't resolve my issue. Why does Chrome work fine with my standard BIOS but not with the Galax BIOS? How do I fix this?


----------



## kithylin

KingEngineRevUp said:


> How do I fix this issue?


Use Firefox until Google fixes it. It's a problem with Google Chrome, as documented above.


----------



## JustinThyme

kithylin said:


> It's not just a single 360mm. According to Juub, JustinThyme has two 480mm rads + a 360mm rad for a total surface area of 1320mm of radiator space. You only have a single 240mm. More radiator surface area = lower component temps all the way down until you have enough radiators to keep your components at the current room's ambient temps.



Bingo! More rads = lower temps and lower fan speeds. My pumps and fans are controlled by an Aquaero and adjust automatically. My coolant never passes 30C with a 9940X and two 2080 Tis running full bore, and the fans seldom reach 1000 RPM; they normally run around 600 RPM. You gain lower temps, which allow a higher OC, and a quieter system overall. I could add more rads but it's a moot point; it would maybe get me slower fan speeds. The only things I think would top it now would be a chiller, phase change, or LN2. I'm contemplating a chiller, in which case I'd can the pumps and most of the fans but leave the rads, just to add coolant volume and a little airflow into the case for passive cooling of whatever doesn't have blocks on it.


----------



## KingEngineRevUp

If you use a chiller, you will have to worry about condensation. Water vapor in the air condenses on any surface that drops below the dew point.

Your chiller would have to collect that water like an air conditioner does.


----------



## kithylin

KingEngineRevUp said:


> If you use a chiller, you will have to worry about condensation. The water in the air will turn to liquid when it touches or passes over surfaces lower than ambient temperature.
> 
> Your chiller would have to collect that water like an air conditioner does.


There's someone in the 1080 Ti owners' thread who uses a chiller with a small Arduino controlling it, so it cuts off just 1 degree above the room's dew point to prevent condensation.
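A cutoff like that needs a dew-point estimate. A common choice is the Magnus approximation; the coefficients below are the standard Sonntag values, and the 1-degree margin mirrors the setup described above. This is a sketch, not that user's actual Arduino code.

```python
import math

# Magnus approximation for dew point (Sonntag coefficients, valid for
# roughly -45C to 60C). Sketch of a condensation-safe chiller setpoint.
A, B = 17.62, 243.12

def dew_point_c(temp_c, rel_humidity_pct):
    """Approximate dew point from air temperature and relative humidity."""
    gamma = math.log(rel_humidity_pct / 100.0) + A * temp_c / (B + temp_c)
    return B * gamma / (A - gamma)

def chiller_setpoint_c(temp_c, rel_humidity_pct, margin_c=1.0):
    """Lowest safe coolant temperature: margin_c above the dew point."""
    return dew_point_c(temp_c, rel_humidity_pct) + margin_c

# A 25C room at 60% RH has a dew point around 16.7C,
# so the chiller would cut off around 17.7C.
print(round(chiller_setpoint_c(25, 60), 1))
```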


----------



## KingEngineRevUp

kithylin said:


> There's someone in the 1080 Ti owner's thread that uses a chiller and uses a small Arduino computer to control the thing so it cuts off just 1 degree before dew point in the room and prevent condensation.


I guess the question is whether it's worth running a whole other system drawing 150+ watts to pull 50-75 watts of heat out of the water for an extra 30 MHz.


----------



## dantoddd

There was a rumor that NVIDIA is looking to release a 2080 Ti Super. Does anyone know anything more about that?


----------



## ESRCJ

dantoddd said:


> There was a rumor that Nvidia are looking to release a 2080 Ti Super. does anyone know anything more about that?


I've been digging around and I haven't found anything concrete. I'm also curious about this since I was planning on getting another 2080 Ti, but I'd rather sell my current one and get a pair of 2080 Ti Supers if they indeed release anytime soon.


----------



## dantoddd

ESRCJ said:


> I've been digging around and I haven't found anything concrete. I'm also curious about this since I was planning on getting another 2080 Ti, but I'd rather sell my current one and get a pair of 2080 Ti Supers if they indeed release anytime soon.


Yeah, I'm also thinking about upgrading to a Super if it makes 4K ray tracing viable.


----------



## x-speed69

matrix17 said:


> Im afraid its not the cooler,it was working fine before flashing.Maybe the vrms got heated up,or the card is failing.Having such an expensive brick is bad for health!



Did you fix it?
I bet it's the fan bug; my MSI gets it every time I do anything to the BIOS.


I have seen some Asus users who have experienced this as well.


----------



## J7SC

dantoddd said:


> There was a rumor that Nvidia are looking to release a 2080 Ti Super. does anyone know anything more about that?


 
2080 Ti Super release timing may depend on the release of the rumored AMD XTs such as the XT 5900 and XT 5950, with some new AMD models apparently being dual-GPU single-PCB cards again.


----------



## ESRCJ

dantoddd said:


> Yeah I'm also thinking about upgrading to a super if it gives makes 4K Ray tracing viable.


I wouldn't expect much more than 5-10% over a 2080 Ti.


----------



## matrix17

x-speed69 said:


> Did you fix it?
> I bet it's the fan bug, my MSI gets it everytime I do something to bios.
> 
> 
> I have seen some Asus users who have experienced this as well.


I'm afraid not, I haven't found any solution.


----------



## max883

The Super cards come with an improved Samsung/Micron memory controller, so no more space invaders.


----------



## kx11

back to basics


this baby can do +120 core / +1200 mem out of the box


----------



## bleomycin

Just searched the thread and wasn't able to find anyone else with my card so wanted to verify before I risk bricking it.

Asus 2080 Ti Turbo, water block installed, TU102-300 chip. To increase the power limit I just grab the Palit BIOS from the Palit website here (the source link in the first post): http://www.palit.com/palit/vgapro.php?id=3055&lang=en&pn=NE6208T020LC-150A&tab=do

Then I use the non-modified NVFlash utility linked in the first post and follow the first post's flashing instructions? I'm not missing anything critical? Thanks!


----------



## JustinThyme

bleomycin said:


> Just searched the thread and wasn't able to find anyone else with my card so wanted to verify before I risk bricking it.
> 
> Asus 2080 ti turbo, water block installed, TU102-300 chip. To increase the power limit I just grab the Palit bios from the Palit website here (the source link in first post): http://www.palit.com/palit/vgapro.php?id=3055&lang=en&pn=NE6208T020LC-150A&tab=do
> 
> Then I use the first post linked non-modified NVFlash utility and follow the 1st post flashing instructions? I'm not missing anything critical? Thanks!


Just don't expect much from it. The Turbo is a lower binning tier with a 1560 MHz boost, and I think they all have Micron memory. 
You could also use the Matrix BIOS, which is an ASUS BIOS, and get more than you will ever use. What I've come to find is that no matter which BIOS is used, the power limit isn't that limiting. I get 2150 with the stock BIOS and with the 380W BIOS without ever increasing the core voltage, just upping the power limit to max, which I don't hit. This is on a pair of O11G OC cards.


----------



## kithylin

JustinThyme said:


> Just dont be expecting much from it. The turbo is lower tier on binning with a 1560 turbo and i think they all have micron memory.
> You could also use the matrix BIOS which is an ASUS BIOS and get you more than you will ever use. What Ive come to find is no matter what BIOS is used the power limit isnt that limiting. I get 2150 with stock BIOS and with 380W BIOS without ever increasing the core voltage, just upping power limit to max which I dont hit. This is on a pair of O11G OC cards.


I'm curious: Is overclocking the 2080 Ti pretty much temperature-dependent like with Pascal? I.e., keeping the card colder = better overclocking, even on the stock BIOS?


----------



## bleomycin

JustinThyme said:


> Just dont be expecting much from it. The turbo is lower tier on binning with a 1560 turbo and i think they all have micron memory.
> You could also use the matrix BIOS which is an ASUS BIOS and get you more than you will ever use. What Ive come to find is no matter what BIOS is used the power limit isnt that limiting. I get 2150 with stock BIOS and with 380W BIOS without ever increasing the core voltage, just upping power limit to max which I dont hit. This is on a pair of O11G OC cards.


Thank you for the reply! I confirmed my card has Samsung memory when I put the waterblock on, and GPU-Z agrees. I ran OC Scanner earlier and it said "Dominant limiters: Power". I figured a new BIOS allowing more would possibly help, but I'm admittedly very clueless! In your opinion, given this information, should I bother flashing the card? Is this the Matrix BIOS you mentioned that will be compatible: https://www.techpowerup.com/vgabios/210080/asus-rtx2080ti-11264-190307-1

Thanks!


----------



## dantoddd

J7SC said:


> 2080 Ti Super release/ timing may depend on the release of the rumored AMD XTs such as XT 5900, XT 5950, with some new AMD models apparently being dual-GPU single PCB cards again.


I don't think dual-GPU designs will happen again because of the state of SLI. I don't think they would have enough support.


----------



## kithylin

dantoddd said:


> I don't think dual-GPU designs will happen again because the state of SLi. I don't think they will have enough support.


The above comment you quoted was referring to AMD coming out with another Dual-GPU design like the R9-295X2, or the Radeon Pro Duo. Nvidia might need something to counter that "threat". They may release the 2080 Ti Super if that thing appears.


----------



## JustinThyme

bleomycin said:


> Thank you for the reply! I confirmed my card has Samsung memory when I put the waterblock on and gpu-z also agrees. I ran OC Scanner earlier and it said "Dominant limiters: Power". I figured a new bios allowing more would possibly help but I'm admittedly very clueless! In your opinion given this information should I bother flashing the card? Is this the matrix gpu bios you mentioned that will be compatible: https://www.techpowerup.com/vgabios/210080/asus-rtx2080ti-11264-190307-1
> 
> Thanks!


What are you getting from the card now? Point is, I've been down that road already with several different ROMs. In the end, upping the power limit did much of nothing. I could push the card past its limits without hitting the power limit. Want better performance? Cool it down.



kithylin said:


> The above comment you quoted was referring to AMD coming out with another Dual-GPU design like the R9-295X2, or the Radeon Pro Duo. Nvidia might need something to counter that "threat". They may release the 2080 Ti Super if that thing appears.


SLI is still very much alive. Seeing how everyone cries about how FS is more brutal.......

or port royal
https://www.3dmark.com/pr/46520

or TS
https://www.3dmark.com/spy/6182114

https://www.3dmark.com/fs/18715181

or Realbench


----------



## Talon2016

KingEngineRevUp said:


> But that doesn't resolve my issue. Why does Chrome work fine with my standard bios but it doesn't work with with Galax bios? How do I fix this issue?


I don't have this issue on my card with that vbios. Which version are you running?


----------



## majestynl

I have tried most of the BIOSes available out there. This is my conclusion so far, so if somebody has feedback it's welcome!
_Normally I'm used to OCing Radeon GPUs; it's been a while since I used a high-end NVIDIA card. But anyway, it is still fun. The only issue I have is the power limit and the voltage hard-limit set by NVIDIA._

Card: MSI Duke OC (Ref PCB)
Cooling: Heatkiller waterblock
Temps: at ambient (25C) / Idle: 30C / Load, games: 41C / Load, benchmarking: 41-44C
Memory: Micron / Max 8400MHz+ in benchmarks / 8200 in games

*Stock:* OK-ish / heavily power-limited in benchmarks
*EVGA XC Ultra:* Currently the best working BIOS on my card. Max clocks in games 2175MHz steady at 1.075v (41C) / 2190MHz below 41C / Slightly power-limited in benchmarks
*EVGA FTW3:* Strange BIOS. Even with more power, the XC Ultra does much better with the same OC!
*Gigabyte GOC:* Works almost the same as the EVGA XC Ultra but can't hold the clocks in games at the same voltages. Sometimes it's fine, but then the next day it gets unstable.
*Galax HOF:* Not so impressed. Lower clocks, using a lot of power, but I can't control the voltage/frequency curve. It just doesn't accept the frequencies; it stays at the original P-states.
*KingPin XOC:* Wow, this BIOS uses so much power. Even at 40% power it's hovering around 550-650 watts. Need more time to fiddle! It also uses high voltages, and the curve isn't working; dunno if I want to use this. Definitely not for daily use.
*Asus XOC:* Really nice BIOS. Doing well in benchmarks with steady clocks because of the power limit. But I can't use the voltage/frequency curve, so max clocks depend on the original P-states.

*Galax 380:* Can't flash this BIOS because of an older X-USB version. Dunno if somebody has a version with X-USB version xxx90003!
*Asus Matrix:* Can't flash this BIOS because of a newer X-USB version. Dunno if somebody has a version with X-USB version xxx90003!

*Highest benchmark runs I've got as of this post:*
- Superposition 4K (34th place): https://benchmark.unigine.com/results/rid_dda505640f354dbdbbbccc1a9b25ad4c
- Port Royal (HoF 67th place): https://www.3dmark.com/pr/138069

Note: some were done with an AC blowing into the case! If I find the best BIOS I will focus on benchmarks even more. It would be perfect if I had a dual BIOS; then I would have one for benchmarking and one for daily use. 

If somebody has a suggestion I will try/test it! Thanks


----------



## MrTOOSHORT

The KingPin XOC is the best BIOS for benching on a reference-PCB 2080 Ti.


----------



## majestynl

MrTOOSHORT said:


> KingPin XOC is the best bios for benching and reference pcb 2080ti.


Yep, it was looking promising if you don't look at the wattage. 
Will definitely try again and see how I can lock it down for daily use with a PL!

What are you using for daily?


----------



## bleomycin

JustinThyme said:


> What are you getting from the card now? Point is Ive been down that road already with several different roms. In the end Upping the power limit did much of nothing. I coud push the card past its limits without hitting power limit. Want better performance, cool it down.


I've attached my results from OC Scanner. Should I just leave it be? Thanks!


----------



## ESRCJ

So the 2080 Ti "Super" leak turned out to just be a Tesla for Geforce Now servers:

https://www.pcgamesn.com/nvidia/geforce-rtx-t10-8-geforce-now-ray-tracing

On another note, for those of you who have recently bought 2080 Ti, what kind of clock speeds are you getting? I'm curious to know if the average silicon has possibly improved over the past year. I may be getting another 2080 Ti for some NVLink fun.


----------



## bleomycin

ESRCJ said:


> So the 2080 Ti "Super" leak turned out to just be a Tesla for Geforce Now servers:
> 
> https://www.pcgamesn.com/nvidia/geforce-rtx-t10-8-geforce-now-ray-tracing
> 
> On another note, for those of you who have recently bought 2080 Ti, what kind of clock speeds are you getting? I'm curious to know if the average silicon has possibly improved over the past year. I may be getting another 2080 Ti for some NVLink fun.


I just bought my 2080ti this week, you can see my results in the post above yours.


----------



## kx11

bleomycin said:


> I've attached my results from OC Scanner. Should I just leave it be? Thanks!



Cool values for an FE card. What matters is the core clock while gaming. 


Mine can do +135 core and +1000 mem, but I see the core clock hardly reaching 2100 MHz because of the cooling.


----------



## J7SC

majestynl said:


> Yeap it was looking promising if you don't look at the wattage
> Will definitely try again and see how i can lock it down for daily use with a PL!
> 
> What are you using for daily?


 
^^ Your point about wattage is a good one, especially if you run 2x 2080 Ti, which with the stock BIOS can hit 380+ W - each - depending on model and BIOS. In my case, throw in an OC'ed HEDT CPU and a pile of pumps and fans, and voila, a 1300W Platinum PSU doesn't seem to be that much anymore...

The other thing is 'binning'. Per the Port Royal HoF below, I certainly don't mind benching, but for most anything else (including gaming on this dual-use rig for work+play), I don't even bother kicking in any kind of OC. I do know that in DX12, single GPU typically hits 2190+ with my MSI AB settings, and dual GPU 2145+. But what difference does that make compared to 'stock'?

...I can't tell, unless it actually is a benchmark. Obviously, with the 2080 Ti being near/at the top of the heap (for now...), folks here want to get the most out of it... but 'the great OCer in the sky hates me because I got a slightly less OC-capable 2080 Ti' makes little sense. Between my two 2080 Tis, there's only a 3-digit difference in serial numbers, yet they have different OC ceilings to the tune of 45 MHz / 30 MHz (GPU / VRAM)... but none of that affects my overall appreciation of the underlying system.


----------



## ESRCJ

bleomycin said:


> I just bought my 2080ti this week, you can see my results in the post above yours.


OC Scanner isn't necessarily the best way to evaluate the overclocking capability of a 2080 Ti. For one, the current draw is well beyond any game you'll play, even at 4K. Heck, I just ran OC Scanner and I was hitting the power limit with my voltage at around 1V, and that's with a 380W power limit. The other issue is that the summary only shows the average OC as an increase over the stock curve, but the stock curve is BIOS-dependent, so I have no idea what "average overclock is 175MHz" results in as a final clock. Lastly, it does not test the higher voltages along the curve, so you still have performance on the table.

I think a quick way to do a simple comparison is to find the minimum voltage needed for a particular clock, fully stable of course. Cooling obviously plays a role though.

How far have you been able to push your memory? Is yours Samsung?


----------



## Sheyster

ESRCJ said:


> OC Scanner isn't necessarily the best way to evaluate the overclocking capability of a 2080 Ti. For one, the current draw is well beyond any game you'll play, even at 4K. Heck, I just ran OC Scanner and I was hitting the power limit with my voltage at around 1V, and this is with a 380W power limit. The other issue is that the summary only shows the average OC as an increase over the stock curve; since the stock curve is BIOS-dependent, I have no idea what final clock "average overclock is 175MHz" translates to. Lastly, it does not test the higher voltages along the curve, so there is still performance left on the table.
> 
> I think a quick way to do a simple comparison is to find the minimum voltage a card needs for a particular clock, fully stable of course. Cooling obviously plays a role though.
> 
> How far have you been able to push your memory? Is yours Samsung?


I don't use OC Scanner at all. With the 2080 Ti, my approach is to use the curve: pick my voltage point on the X axis and lock in as high a +core offset as possible while remaining stable. If it passes Fire Strike Extreme, I try some actual gaming with a game or two known to push the limits of the overclock.
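For readers new to the curve method, the flattening step can be sketched like this (the curve points and offset below are illustrative, not values from any particular card; in practice MSI Afterburner applies this when you lock a point on the voltage/frequency curve):

```python
# Sketch of locking an Afterburner-style voltage/frequency curve at one point:
# raise the chosen voltage point by an offset, then flatten every
# higher-voltage point down to that frequency so the card never boosts past it.
def lock_curve(curve, lock_mv, offset_mhz):
    """curve: list of (millivolts, MHz) points sorted by voltage."""
    locked_mhz = dict(curve)[lock_mv] + offset_mhz
    return [(mv, locked_mhz if mv >= lock_mv else mhz) for mv, mhz in curve]

stock = [(900, 1860), (950, 1920), (1000, 1965), (1050, 2010), (1093, 2040)]
print(lock_curve(stock, 1000, 120))
# [(900, 1860), (950, 1920), (1000, 2085), (1050, 2085), (1093, 2085)]
```

The flat section is what keeps the card at a fixed voltage/clock pair instead of boosting into a higher, possibly unstable point.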


----------



## KCDC

OC Scanner would just lock up on my dual 1080 Tis, so I stopped trying. Maybe it was a dual-card thing; not sure about that.

My constant for both is 2100 MHz at 47-50°C max when OCing; they tend to get unstable past 2115, so I just kept them there. When rendering they stay around 1980 MHz at 40°C max with the OC turned off. I'm wondering if adding a third 420 GTR and a second pump will do anything for me temp-wise. Water gets up to about 37°C with all fans at full tilt and ambient around 28°C when rendering over time. It gets a bit annoying hearing the fans while trying to work with my render viewport updating. I haven't done much in particle sims with the GPUs yet; I can't seem to get RealFlow to use both cards, and I'm curious whether it's any faster than using the 9900X.


----------



## bleomycin

ESRCJ said:


> OC Scanner isn't necessarily the best way to evaluate the overclocking capability of a 2080 Ti. For one, the current draw is well beyond any game you'll play, even at 4K. Heck, I just ran OC Scanner and I was hitting the power limit with my voltage at around 1V, and this is with a 380W power limit. The other issue is that the summary only shows the average OC as an increase over the stock curve; since the stock curve is BIOS-dependent, I have no idea what final clock "average overclock is 175MHz" translates to. Lastly, it does not test the higher voltages along the curve, so there is still performance left on the table.
> 
> I think a quick way to do a simple comparison is to find the minimum voltage a card needs for a particular clock, fully stable of course. Cooling obviously plays a role though.
> 
> How far have you been able to push your memory? Is yours Samsung?



Ahh, good to know. I just threw that info out there not knowing how useful it was; it's all I have. I haven't OC'd a GPU in probably close to 10 years, so I'm completely out of my depth now. I haven't even tried to push the memory yet; I just set +500 MHz because I figured that would be no problem. The memory is Samsung, the card is just the Asus Turbo reference PCB, and I have it on water, so cooling is not much of a problem. I know now that I kind of screwed myself buying this particular card due to the power limits, but oh well, next time. I will educate myself and attempt to OC this card a bit more manually, then report back with my results.


----------



## MrTOOSHORT

majestynl said:


> Yeap it was looking promising if you don't look at the wattage
> Will definitely try again and see how i can lock it down for daily use with a PL!
> 
> What are you using for daily?


I have a KPE now and use the XOC bios, but the voltage and LLC jumpers are on the lowest settings 24/7.


----------



## 86Jarrod

matrix17 said:


> Im afraid not,i havent found any solution


I had a problem with a Gigabyte 2080 Ti where I installed a Kraken G12 bracket with an X62 AIO, and a day or two later I was suddenly stuck at 1350 MHz. After reinstalling the stock cooler the frequency returned to normal; when I tried the G12 again, it went back to 1350. After taking it apart a few times and getting it to work again, I realized it was mounting pressure from the weird plate on the AIO, and I had to use a loose mount. Not sure if this helps or not; I returned the card anyway. Some of these cards are just plain finicky.


----------



## ducky083

Hi all, 

Is it possible that a 2080 Ti HOF can't pass 2115 MHz on the GPU with the stock cooler (GPU at 68°C)?

OC via Afterburner with the power limit maxed...


Thanks


----------



## AvengedRobix

ducky083 said:


> Hi all,
> 
> Is it possible that a 2080 Ti HOF can't pass 2115 MHz on the GPU with the stock cooler (GPU at 68°C)?
> 
> OC via Afterburner with the power limit maxed...
> 
> 
> Thanks


The 10th Anniversary edition? The first HOF 2080 Ti only comes under water.


----------



## ducky083

AvengedRobix said:


> The 10th Anniversary edition? The first HOF 2080 Ti only comes under water.



No, the standard 2080 Ti HOF with the three white 90mm fans.


VGPU is at 1.050 V max (not touching the VGPU for the moment).


----------



## dantoddd

kithylin said:


> The above comment you quoted was referring to AMD coming out with another Dual-GPU design like the R9-295X2, or the Radeon Pro Duo. Nvidia might need something to counter that "threat". They may release the 2080 Ti Super if that thing appears.


I had the R9 295X2, which I used for a couple of years. It was decent but had a few issues where some games didn't use the second GPU. I'm not sure we'll see dual-GPU cards this generation, though, because both AMD and Nvidia seem to have left them behind.


----------



## J7SC

KCDC said:


> OC Scanner would just lock up on my dual 1080 Tis, so I stopped trying. Maybe it was a dual-card thing; not sure about that.
> 
> My constant for both is 2100 MHz at 47-50°C max when OCing; they tend to get unstable past 2115, so I just kept them there. When rendering they stay around 1980 MHz at 40°C max with the OC turned off. I'm wondering if adding a third 420 GTR and a second pump will do anything for me temp-wise. Water gets up to about 37°C with all fans at full tilt and ambient around 28°C when rendering over time. It gets a bit annoying hearing the fans while trying to work with my render viewport updating. I haven't done much in particle sims with the GPUs yet; I can't seem to get RealFlow to use both cards, and I'm curious whether it's any faster than using the 9900X.


  @KCDC and others with dual 2080 Ti Aorus XTR / WB. As mentioned before, my two cards are only a few serial numbers apart, but in spite of the same Bios date, the Bios version numbers are different for the cards. The second bit in the pic below is a screenshot of WinMerge showing that there are indeed differences beyond the Bios version number. Do your two 2080 Ti Aorus XTR / WB have the same or different Bios versions (and how far apart are they in serial numbers) ? Tx


----------



## kithylin

dantoddd said:


> I had the R9 295X2, which I used for a couple of years. It was decent but had a few issues where some games didn't use the second GPU. I'm not sure we'll see dual-GPU cards this generation, though, because both AMD and Nvidia seem to have left them behind.


Just like Nvidia dual-GPU cards, or any other CrossFireX or SLI setup: most of the time you have to make your own manual profiles with RadeonPro / Nvidia Profile Inspector (third-party programs) to get the most out of a setup like that. I hope some company brings back dual-GPU cards again some day.


----------



## J7SC

kithylin said:


> Just like Nvidia Dual-GPU Cards, or any other CrossFireX or SLI setup: Most of the time you have to make your own manual profiles with RadeonPro / Nvidia Profile Inspector (3rd party programs) to get the most out of a setup like that. I hope some company brings back Dual-GPU cards again some day.


 
We had 8990s (dual AMD GPUs) at work, and they were compute monsters. The few games we tried seemed to work just fine, and as you pointed out, there was/is also RadeonPro for custom profiles. I think AMD will bring them back for a variety of reasons, including the investments they made into new interconnects for their 7nm pro series...companies like to recover their R&D expenditures across as many applications as possible.


----------



## KCDC

The only reason I could see AMD making another dual GPU situation would be for the next Mac Pro like they did for the last one.


----------



## KCDC

J7SC said:


> @*KCDC* and others with dual 2080 Ti Aorus XTR / WB. As mentioned before, my two cards are only a few serial numbers apart, but in spite of the same Bios date, the Bios version numbers are different for the cards. The second bit in the pic below is a screenshot of WinMerge showing that there are indeed differences beyond the Bios version number. Do your two 2080 Ti Aorus XTR / WB have the same or different Bios versions (and how far apart are they in serial numbers) ? Tx



My serials are sequential, as in one number apart; GPU-Z reports identical BIOSes as well: 90.02.17.00.63


----------



## J7SC

KCDC said:


> My serials are sequential, as in one number apart, GPUz reports identical BIOS as well 90.02.17.00.63


 
Tx  ...mine end in .00.41 and .00.63 :thinking:


----------



## KCDC

J7SC said:


> Tx  ...mine end in .00.41 and .00.63 :thinking:



Would having closer serials mean they're from the same "batch" or is that just speculation? It would be interesting to know if these cards are actually "siblings" or just random cards that happen to have sequential serials.


Also, I think Gigabyte has a BIOS update app on their site; I'm sure the software is awful like all of theirs, but it might be worth trying.


----------



## J7SC

KCDC said:


> Would having closer serials mean they're from the same "batch" or is that just speculation? It would be interesting to know if these cards are actually "siblings" or just random cards that happen to have sequential serials.
> 
> 
> Also, I think GByte has a bios update app on their site, I'm sure the software is awful like all of it, but might be worth trying.


 
My cards' serial numbers are something like xxxxxxx1 and xxxxxxx3, so almost sequential. I am asking because I have never run the MSI AB voltage/OC curve on these GPUs but would like to try...easier with the same BIOS. I never got around to flashing the cards with an identical BIOS...


----------



## KCDC

J7SC said:


> My cards' serial numbers are something like xxxxxxx1 and xxxxxxx3, so almost sequential. I am asking because I have never run the MSI AB voltage/OC curve on these GPUs but would like to try...easier with the same BIOS. I never got around to flashing the cards with an identical BIOS...



I couldn't get the voltage curve to work with my 1080 Tis; it would just default to the lowest point on the curve and never budge. Someone in that forum mentioned it may be due to having multiple cards, but there was no confirmation, so I just gave up on it and moved on. It would be cool to see it working, with two different curves per card, to really dial them in.


----------



## J7SC

KCDC said:


> I couldn't get the voltage curve to work with my 1080 Tis; it would just default to the lowest point on the curve and never budge. Someone in that forum mentioned it may be due to having multiple cards, but there was no confirmation, so I just gave up on it and moved on. It would be cool to see it working, with two different curves per card, to really dial them in.


 
...I don't think the voltage/OC curve is the be-all and end-all anyway, but I want to try locking in the voltage for each GPU at its observed natural max, just for benching. Most of the time, I don't even OC the cards.


----------



## TheLittleDetail

*Kingpin Bios*

So if I am understanding correctly from reading this thread...the Kingpin BIOS works on reference-PCB cards even though that BIOS was designed for a card with a custom PCB?


----------



## jura11

KCDC said:


> OC Scanner would just lock up on my dual 1080tis, so I stopped trying. Maybe it was a dual card thing, not sure about that.
> 
> My constant for both is 2100 47c-50c max when OCing, tends to get unstable past 2115 so I just kept it there. When rendering they stay around 1980 @ 40c max with OC turned off. Wondering if adding a third 420 GTR and second pump will do anything for me temp-wise. Water gets up to about 37c with all fans at full tilt and ambient around 28c when rendering over time. Gets a bit annoying hearing the fans while trying to do work and my render viewport updating. Haven't done much in particle sims with the GPUs yet, can't seem to get real flow to use both cards, curious to see if it's any faster than using the 9900x.


Hi there,

I can confirm that OC Scanner doesn't work with multiple GPUs. I'm personally running 4 GPUs (Asus RTX 2080 Ti Strix, GTX 1080 Ti, GTX 1080 and GTX 1080), and OC Scanner would lock up whether I used it with the RTX 2080 Ti Strix or any other GPU.

Regarding rendering with the RTX 2080 Ti: I use my GPUs mainly for rendering, with some occasional gaming and benching.

What I can confirm is that in Octane the RTX 2080 Ti is unstable beyond 2055 MHz; I usually render with a 2025 MHz OC profile, which is rock stable. I've tried several renderers (V-Ray Next, Redshift, Blender Cycles, DAZ3D Iray, etc.), and anything above 2055 MHz crashes the render or results in a TDR. I tried several drivers, thinking it was a driver issue. My RTX 2080 Ti temperatures in rendering are below 40°C, usually 36-38°C, the same as the other GPUs.

What is stranger, my other GPUs have run the same OCs from day one (GTX 1080 Ti at 2113 MHz, GTX 1080 at 2164 MHz and GTX 1080 at 2100 MHz) with no crashes or TDRs in rendering. I previously used a two-PSU setup (Seasonic 1250W and Corsair RM750W) and thought the PSUs were at fault, so I went with a Super Flower 2000W PSU (bought from a friend).

Water temperature will have an impact on overall temperatures, and if you have space for a 420mm radiator I would add it; you will have a lower water delta T for sure. My loop has 4x 360mm radiators plus a MO-RA3 360, and my water delta T in rendering is 3-5°C at most, with fans spinning at 900 RPM max (that's on the MO-RA3) and the other fans at 750-800 RPM.
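As a rough cross-check on delta-T numbers like these, the steady-state temperature rise of the coolant from loop inlet to outlet follows Q = ṁ·c_p·ΔT (note this is the water's in/out rise, distinct from the water-over-ambient delta). The wattage and flow figures below are illustrative assumptions, not measurements from either system:

```python
# Estimate coolant inlet-to-outlet temperature rise from heat load and flow.
WATER_CP = 4186.0   # specific heat of water, J/(kg*K)
WATER_RHO = 1.0     # density, kg/L (close enough for loop estimates)

def loop_delta_t(heat_watts, flow_l_per_h):
    """Steady-state water temperature rise (K) across the loop."""
    m_dot = flow_l_per_h * WATER_RHO / 3600.0  # mass flow in kg/s
    return heat_watts / (m_dot * WATER_CP)

# e.g. ~1000W of GPU + CPU heat at a modest 250 L/h D5 flow rate:
print(round(loop_delta_t(1000, 250), 2))  # 3.44 K
```

This is why even heavily loaded loops show only a few degrees between inlet and outlet; added radiator area mostly lowers the water-over-ambient delta instead.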

Particle simulation: personally I don't use particle simulation a lot; if I do, I use Houdini. What software or plugin are you using for particle sims?

Hope this helps 

Thanks, Jura


----------



## GRABibus

Hi,
I would like to test the MSI Sea Hawk 2080 Ti.
What is the best modded BIOS to consider for it?
The Galax 380W one linked on the first page, "GALAX/KFA2 RTX 2080 Ti OC Reference PCB (2x8-Pin) 300W x 127% Power Target BIOS (380W)"?
The problem is I can't download it (no working link).

Where can I find this BIOS?

Is it this one?

https://www.techpowerup.com/vgabios/204557/kfa2-rtx2080ti-11264-180910

Thank you.
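For reference, the 380W in that BIOS description is just the base board power scaled by the maximum power-target slider; a quick check of the arithmetic:

```python
# 300W base board power with the power-target slider capped at 127%.
def max_board_power(base_watts, target_pct):
    return base_watts * target_pct / 100.0

print(max_board_power(300, 127))  # 381.0, rounded to "380W" in the name
```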


----------



## Zfast4y0u

jura11 said:


> Hi there,
> 
> I can confirm that OC Scanner doesn't work with multiple GPUs. I'm personally running 4 GPUs (Asus RTX 2080 Ti Strix, GTX 1080 Ti, GTX 1080 and GTX 1080), and OC Scanner would lock up whether I used it with the RTX 2080 Ti Strix or any other GPU.
> 
> Regarding rendering with the RTX 2080 Ti: I use my GPUs mainly for rendering, with some occasional gaming and benching.
> 
> What I can confirm is that in Octane the RTX 2080 Ti is unstable beyond 2055 MHz; I usually render with a 2025 MHz OC profile, which is rock stable. I've tried several renderers (V-Ray Next, Redshift, Blender Cycles, DAZ3D Iray, etc.), and anything above 2055 MHz crashes the render or results in a TDR. I tried several drivers, thinking it was a driver issue. My RTX 2080 Ti temperatures in rendering are below 40°C, usually 36-38°C, the same as the other GPUs.
> 
> What is stranger, my other GPUs have run the same OCs from day one (GTX 1080 Ti at 2113 MHz, GTX 1080 at 2164 MHz and GTX 1080 at 2100 MHz) with no crashes or TDRs in rendering. I previously used a two-PSU setup (Seasonic 1250W and Corsair RM750W) and thought the PSUs were at fault, so I went with a Super Flower 2000W PSU (bought from a friend).
> 
> Water temperature will have an impact on overall temperatures, and if you have space for a 420mm radiator I would add it; you will have a lower water delta T for sure. My loop has 4x 360mm radiators plus a MO-RA3 360, and my water delta T in rendering is 3-5°C at most, with fans spinning at 900 RPM max (that's on the MO-RA3) and the other fans at 750-800 RPM.
> 
> Particle simulation: personally I don't use particle simulation a lot; if I do, I use Houdini. What software or plugin are you using for particle sims?
> 
> Hope this helps
> 
> Thanks, Jura


I have 2 GPUs in my system and OC Scanner works just fine on both (1080 Tis).


----------



## sblantipodi

The sharpening filter introduced in the latest drivers is super awesome, guys; give it a try.

Metro Exodus + RTX + DLSS in 4K with the sharpening filter is incredible.
All the glorious 4K RTX with the performance of DLSS, without the blur of DLSS, thanks to the sharpening.

WOW. Congrats, Nvidia.


----------



## KCDC

jura11 said:


> Hi there,
> 
> I can confirm that OC Scanner doesn't work with multiple GPUs. I'm personally running 4 GPUs (Asus RTX 2080 Ti Strix, GTX 1080 Ti, GTX 1080 and GTX 1080), and OC Scanner would lock up whether I used it with the RTX 2080 Ti Strix or any other GPU.
> 
> Regarding rendering with the RTX 2080 Ti: I use my GPUs mainly for rendering, with some occasional gaming and benching.
> 
> What I can confirm is that in Octane the RTX 2080 Ti is unstable beyond 2055 MHz; I usually render with a 2025 MHz OC profile, which is rock stable. I've tried several renderers (V-Ray Next, Redshift, Blender Cycles, DAZ3D Iray, etc.), and anything above 2055 MHz crashes the render or results in a TDR. I tried several drivers, thinking it was a driver issue. My RTX 2080 Ti temperatures in rendering are below 40°C, usually 36-38°C, the same as the other GPUs.
> 
> What is stranger, my other GPUs have run the same OCs from day one (GTX 1080 Ti at 2113 MHz, GTX 1080 at 2164 MHz and GTX 1080 at 2100 MHz) with no crashes or TDRs in rendering. I previously used a two-PSU setup (Seasonic 1250W and Corsair RM750W) and thought the PSUs were at fault, so I went with a Super Flower 2000W PSU (bought from a friend).
> 
> Water temperature will have an impact on overall temperatures, and if you have space for a 420mm radiator I would add it; you will have a lower water delta T for sure. My loop has 4x 360mm radiators plus a MO-RA3 360, and my water delta T in rendering is 3-5°C at most, with fans spinning at 900 RPM max (that's on the MO-RA3) and the other fans at 750-800 RPM.
> 
> Particle simulation: personally I don't use particle simulation a lot; if I do, I use Houdini. What software or plugin are you using for particle sims?
> 
> Hope this helps
> 
> Thanks, Jura



I have messed a bit with OCing for GPU rendering, but there's not much of a speed difference (a few seconds), and it's been recommended, at least by Redshift support, to leave GPUs at stock when rendering to avoid instability issues. I currently only own Redshift and haven't used the others at length. I am loving the speed increase over my 1080 Tis; I can play with SSS materials without waiting 20 minutes or faking it in some way.

I am currently using the RealFlow plugin for C4D and haven't really noticed much difference when turning on GPU in its settings. I've used it in the past and may push to get the full version instead of the plugin, as the stand-alone engine offers more, including more GPU support with their Hybrido solver. I am getting back into Houdini as well, but have had no time to train yet. A friend has been showing off what he's already accomplished with its dynamics capabilities, so it is next on my list once I have the time. I've been doing more mograph than VFX these days, so I keep pushing it aside. I also have FumeFX on the list, but that also needs time to train and learn. I'm done with Maya, have been for years, so I am looking at Houdini as my VFX replacement.

I am probably going to get the third radiator, but I'm wondering whether one D5 pump is enough for efficient flow across all components with that addition, so I may add a second pump; for now I will enjoy what I currently have. Are you running more than one pump in your setup? I think my first mistake was getting the HWL GTRs originally, as they are most efficient at higher RPMs. I've spent enough (too much) cash on this build for now! Maybe after a few more projects I'll budget it in, as it will also require some downtime. Thanks for chiming in.


----------



## majestynl

J7SC said:


> ^^ Your point of wattage is a good one, especially if you run 2x 2080 Ti which with stock Bios can hit 380+ w - each, depending on model and Bios. In my case, throw in an oc'ed HEDT CPU, a pile of pumps and fans, and voila, a 1300w Platinum PSU doesn't seem to be that much anymore...


Lol, that's true! I'm used to high wattage anyway... coming from Radeon GPUs.



J7SC said:


> The other thing is 'binning'. Per Port Royal HoF below, I certainly don't mind benching, but for most anything else (including gaming on this dual use rig for work+play), I do not even bother kicking in any kind of OC. I do know that in DX 12, single GPU typically hits 2190+ in MSI AB settings, dual GPU is at 2145+. But what difference does that make compared to 'stock' ?



Yeah, it's marginal... gaming doesn't scale that well with frequency.



MrTOOSHORT said:


> I have a KPE now and use the XOC bios, but the voltage and LLC jumpers are on the lowest settings 24/7.


Got ya... thanks for sharing!



sblantipodi said:


> The sharpening filter introduced in the latest drivers is super awesome, guys; give it a try.
> 
> Metro Exodus + RTX + DLSS in 4K with the sharpening filter is incredible.
> All the glorious 4K RTX with the performance of DLSS, without the blur of DLSS, thanks to the sharpening.
> 
> WOW. Congrats, Nvidia.


Will try... It's a direct copy from AMD; I don't understand why NVIDIA didn't introduce it earlier, but instead came out with "DLURRY-SS".


----------



## JustinThyme

bleomycin said:


> I've attached my results from OC Scanner. Should I just leave it be? Thanks!


OC Scanner is a good newbie tool, but you can almost always do better. On air the temps aren't horrendous, but they will be a limiting factor; the best OCs on these hot cards come with liquid cooling. You aren't even hitting the power limit.



ESRCJ said:


> So the 2080 Ti "Super" leak turned out to just be a Tesla for Geforce Now servers:
> 
> https://www.pcgamesn.com/nvidia/geforce-rtx-t10-8-geforce-now-ray-tracing
> 
> On another note, for those of you who have recently bought 2080 Ti, what kind of clock speeds are you getting? I'm curious to know if the average silicon has possibly improved over the past year. I may be getting another 2080 Ti for some NVLink fun.


I'm running two Strix 2080 Ti OC O11Gs and getting a max of 2150, or 1950-2150 depending on the bench. These were bought when they first hit the shelves. Like I've said in a previous post, a higher power limit BIOS gets me nowhere; maybe another 20 MHz. I tried a few and went back to stock. It works just fine, as you can see from these scores. There are some higher with LN2, but on liquid it's within the top 20 or so, and top spots for my CPU/GPU combo.

https://www.3dmark.com/pr/46520

https://www.3dmark.com/spy/6182114

https://www.3dmark.com/fs/17956694

Just a few examples. Fire Strike is the hardest to get an OC through, and Port Royal taxes it pretty hard too.








bleomycin said:


> Ahh, good to know. I just threw that info out there not knowing how useful it was; it's all I have. I haven't OC'd a GPU in probably close to 10 years, so I'm completely out of my depth now. I haven't even tried to push the memory yet; I just set +500 MHz because I figured that would be no problem. The memory is Samsung, the card is just the Asus Turbo reference PCB, and I have it on water, so cooling is not much of a problem. I know now that I kind of screwed myself buying this particular card due to the power limits, but oh well, next time. I will educate myself and attempt to OC this card a bit more manually, then report back with my results.


You can probably get +800 on the VRAM.


----------



## J7SC

majestynl said:


> Lol, that's true! I'm used to high wattage anyway... coming from Radeon GPUs. (...)


 
Yeah, as mentioned, we ran multiple Radeon 8990s...not exactly prudent on power consumption... I also noticed that the new dual Pro card for the new Mac Pro (the one with the optional $1k monitor stand) has a special PCIe connector delivering up to 475W...


----------



## kithylin

J7SC said:


> Yeah, as mentioned, we ran multiple Radeon 8990s...not exactly prudent on power consumption... I also noticed that the new dual pro card for the new Apple pro (the one with the optional $1k monitor stand  ) has a special PCIe connector with up to 475W...
> 
> <snip>


To be completely honest, this looks more like someone took a single video card, copied it, and stamped it down in Photoshop than an actual card.


----------



## J7SC

kithylin said:


> To be completely honest this just looks like someone took a single video card and copied it and stamped it down using photoshop more than an actual card.


 
Maybe, but at least they didn't Photoshop a cheese grater for the case pic...


----------



## owcraftsman

I finally bit the bullet and would like to join the owners' club, and to begin overclocking and fine-tuning games.
I have an EVGA GeForce RTX 2080 Ti XC ULTRA 11G-P4-2383-KR








Time Spy score: 11693, GPU at default clocks (Rig 1 in sig below)
GPU-Z validation

Two questions:
1) What kind of OC can I expect?
2) Most games so far have played well, but I get intermittent stutter in BFV using GeForce-optimized defaults. Any suggestions on how to eliminate it?


----------



## Sheyster

owcraftsman said:


> I finally bit the bullet and would like to join the owners club to begin overclocking and fine-tuning games.
> I have a EVGA GeForce RTX 2080 Ti XC ULTRA 11G-P4-2383-KR
> 
> Two questions:
> 1) What kind of OC can I expect?
> 2) Most games so far have played well, but I get intermittent stutter in BFV using GeForce-optimized defaults. Any suggestions on how to eliminate it?


1) It's a silicon lottery. That card uses a reference PCB. Every card is different OC-wise.

2) Are you using RTX? If so, turn it off. If that doesn't work, use DX11 instead of DX12. That should fix any issues unless something else is going on.


----------



## owcraftsman

Sheyster said:


> 1) It's a silicon lottery. That card uses a reference PCB. Every card is different OC-wise.
> 
> 2) Are you using RTX? If so, turn it off. If that doesn't work, use DX11 instead of DX12. That should fix any issues unless something else is going on.



Thanks for the reply.
Of course it's a lottery; I was looking for an average OC as a starting point, but I'll figure it out.
Yes, ray tracing is enabled. Was I wrong to think it would be working well by now?


----------



## pewpewlazer

owcraftsman said:


> Thanks for the reply.
> Of course it's a lottery; I was looking for an average OC as a starting point, but I'll figure it out.
> Yes, ray tracing is enabled. Was I wrong to think it would be working well by now?


Your problem is that you have a 4c/8t CPU. BFV uses up to 11 threads. I wouldn't expect that game to run smoothly on anything less than a 6c/12t Intel chip, or maybe a 9700K, or a 2700X/3700X/etc.

OC-wise you can probably expect to hit 2055-2070 MHz, give or take a bit, on the stock BIOS. But like I said, every card is different.


----------



## TheLittleDetail

owcraftsman said:


> Thanks for the reply.
> Of course it's a lottery; I was looking for an average OC as a starting point, but I'll figure it out.
> Yes, ray tracing is enabled. Was I wrong to think it would be working well by now?


BFV does that for everyone. I have a 2080 Ti as well and have the same issue; it is well known. Since your card is under water, you could use an XOC BIOS to get a better overclock.


----------



## JustinThyme

TheLittleDetail said:


> BFV does that for everyone. I have a 2080 Ti as well and have the same issue; it is well known. Since your card is under water, you could use an XOC BIOS to get a better overclock.


I have 2x 2080 Tis and a 14C/28T CPU, all under water, and don't have any issues with everything enabled.


----------



## vmanuelgm

JustinThyme said:


> I have 2x 2080 Tis and a 14C/28T CPU, all under water, and don't have any issues with everything enabled.



No video=doesn't exist!!!


----------



## Sheyster

TheLittleDetail said:


> BFV does that for everyone. I have a 2080 Ti as well and have the same issue; it is well known. Since your card is under water, you could use an XOC BIOS to get a better overclock.


BFV is buttery smooth for me: RTX off, DX12 off.


----------



## kithylin

Sheyster said:


> BFV is buttery smooth for me: RTX off, DX12 off.


Those are some of the best new features about the game though. Both of those are a lot of the reason people actually buy BFV. I hear the game play isn't that great but at least it (usually) looks great.


----------



## Sheyster

kithylin said:


> Those are some of the best new features about the game though. Both of those are a lot of the reason people actually buy BFV. I hear the game play isn't that great but at least it (usually) looks great.


RTX is just too big of a hit FPS-wise, for anyone IMHO. If you're even moderately competitive you don't want it on. I'm more focused on the multiplayer game play and being somewhat competitive than the image quality.


----------



## kithylin

Sheyster said:


> RTX is just too big of a hit FPS-wise, for anyone IMHO. If you're even moderately competitive you don't want it on. I'm more focused on the multiplayer game play and being somewhat competitive than the image quality.


According to a friend of mine, if you play it with RTX on, with DLSS on, and the new sharpening filter, on a 2080 Ti, supposedly the FPS impact is negligible, or 1~5 FPS vs with it off. I don't play BF-V myself though but that's what I'm being told.


----------



## Sheyster

kithylin said:


> According to a friend of mine, if you play it with RTX on, with DLSS on, and the new sharpening filter, on a 2080 Ti, supposedly the FPS impact is negligible, or 1~5 FPS vs with it off. I don't play BF-V myself though but that's what I'm being told.


Nope, not true. RTX is a huge hit to FPS in BFV. 30-40%+ FPS hit on most maps. Turning on DLSS or anything else does not help.


----------



## sblantipodi

RTX is real crap. Damn, I spent €1300 on a GPU (2080 Ti) and I can't even play with that crappy DLSS...
Control, even with that crappy DLSS, is a stuttery mess on my Ti.

The only way to play it decently is at 1080p. What's the sense in having RTX if I need to play at 1080p?


----------



## ntuason

Just curious, what stuttering are you guys experiencing? What resolution are you running BFV at? I have it at 2560x1440 @ 144Hz with everything on ultra, DX12 and DXR. I average about 80-90 FPS, dropping to the mid-60s in explosions.


----------



## sblantipodi

sblantipodi said:


> RTX is real crap. Damn, I spent €1300 on a GPU (2080 Ti) and I can't even play with that crappy DLSS...
> Control, even with that crappy DLSS, is a stuttery mess on my Ti.
> 
> The only way to play it decently is at 1080p. What's the sense in having RTX if I need to play at 1080p?


I think Control has some driver problems.
With 4K DLSS (so rendering at 1440p) it runs pretty well, around 50 FPS, but there are hitches and hiccups.

I hope they will fix it.


----------



## kithylin

sblantipodi said:


> I think Control has some driver problems.
> With 4K DLSS (so rendering at 1440p) it runs pretty well, around 50 FPS, but there are hitches and hiccups.
> 
> I hope they will fix it.


What CPU / Processor are you using in your computer?


----------



## ESRCJ

Aside from being able to use SLI, I'm not sure why anyone with a 2080 Ti would play BFV with DX11 over DX12. Heck, even at 4K I can't get full GPU usage under DX11; you need to run DX12 for that. I guess there are just too many draw calls for DX11 to handle.


----------



## JustinThyme

vmanuelgm said:


> No video=doesn't exist!!!


What? Because I built a system that takes everything I throw at it without so much as a belch, and it runs well, it doesn't count?

OMG!!

To answer another question: 3440x1440 @ 100Hz, DX12.


----------



## sblantipodi

kithylin said:


> sblantipodi said:
> 
> 
> 
> I think that control has some driver problems.
> With 4k DLSS (so at 1440p) it run pretty well around 50fps but there are hitching and hiccups.
> 
> Hope they will solve.
> 
> 
> 
> What CPU / Processor are you using in your computer?
Click to expand...


9900k @ 5GHz currently


----------



## pewpewlazer

TheLittleDetail said:


> BFV does that for everyone. I have a 2080ti as well and I have the same issue. It is well known. Since your card is under water, you could use an XOC bios to get a better overclock.


Again, the game uses 11 threads. Frostbite engine always has been very CPU intensive and the latest version is no exception. As you can see by all the comments from people saying they don't have this issue, BFV clearly does not "do that for everyone". If you'll also notice, everyone saying the game runs fine has a 6c/12t or higher processor.



kithylin said:


> Those are some of the best new features about the game though. Both of those are a lot of the reason people actually buy BFV. I hear the game play isn't that great but at least it (usually) looks great.


DX12 isn't really a "feature" anyone would be excited about. Unless you mean DLSS? In which case, no, it's a blurry ugly mess. Ray tracing looks OK, but cripples performance.


----------



## Intrud3r

BFV and Control run butter smooth on my system; no problems whatsoever... I'm using RTX and DLSS in BFV, and Control was at max settings.


----------



## TheLittleDetail

pewpewlazer said:


> Again, the game uses 11 threads. Frostbite engine always has been very CPU intensive and the latest version is no exception. As you can see by all the comments from people saying they don't have this issue, BFV clearly does not "do that for everyone". If you'll also notice, everyone saying the game runs fine has a 6c/12t or higher processor.


I have an 8086k OC'd to 5.2 GHz and a 2080 Ti, and I have literally not talked to one person who does not get these stutters.


----------



## Talon2016

Control 4K max settings including max ray tracing with DLSS 1440p render and it runs great. I haven’t had a single hitch or hiccup that some describe here. Running newest driver released on 27th. I average around 55fps and my 4K monitor is 144hz with Gsync. 

5ghz 9900K
2080 Ti FTW3 Hybrid
32gb (4x8gb) 4000 CL17
NVME 1TB


----------



## kithylin

pewpewlazer said:


> DX12 isn't really a "feature" anyone would be excited about. Unless you mean DLSS? In which case, no, it's a blurry ugly mess. Ray tracing looks OK, but cripples performance.


I haven't tried it myself because I don't use GeForce Experience, but one of the newer features added with a recent driver was a sharpening filter for Nvidia Freestyle, which lives inside GeForce Experience. The sharpening filter is supposed to clean up the "blurry ugly mess" from DLSS. Perhaps you may want to take a look at that?


----------



## sblantipodi

kithylin said:


> pewpewlazer said:
> 
> 
> 
> DX12 isn't really a "feature" anyone would be excited about. Unless you mean DLSS? In which case, no, it's a blurry ugly mess. Ray tracing looks OK, but cripples performance.
> 
> 
> 
> I haven't tried it myself because I don't use GeForce Experience. But one of the newer features added with a recent driver was a Sharpening filter for Nvidia FreeStyle, which is supposedly something inside GeForce Experience. The sharpening filter is supposed to "clean up" the "blurry ugly mess" from DLSS. Perhaps you may want to take a look at that?
Click to expand...

I agree, that sharpening filter with DLSS is pretty miraculous


----------



## sblantipodi

Talon2016 said:


> Control 4K max settings including max ray tracing with DLSS 1440p render and it runs great. I haven’t had a single hitch or hiccup that some describe here. Running newest driver released on 27th. I average around 55fps and my 4K monitor is 144hz with Gsync.
> 
> 5ghz 9900K
> 2080 Ti FTW3 Hybrid
> 32gb (4x8gb) 4000 CL17
> NVME 1TB


Your system can't be that different from all the others; the only reason you don't have the problem is that you don't see it 😉


----------



## sblantipodi

Talon2016 said:


> Control 4K max settings including max ray tracing with DLSS 1440p render and it runs great. I haven’t had a single hitch or hiccup that some describe here. Running newest driver released on 27th. I average around 55fps and my 4K monitor is 144hz with Gsync.
> 
> 5ghz 9900K
> 2080 Ti FTW3 Hybrid
> 32gb (4x8gb) 4000 CL17
> NVME 1TB


Did you set "Power management mode" from Adaptive to Maximum Performance?

Setting this param in the driver really helps me a lot in reducing hiccups.


----------



## Talon2016

sblantipodi said:


> Your system can't be different since all others, the only reason why you don't have the problem is because you don't see it 😉


I just went back and tested again for another 10 minutes of gameplay and nada, nothing. Smooth as 55fps can feel lol. 

No, the hitching and hiccups you report are NOT there at all for me; please don't tell me I don't see something just because you are having an issue. I have an overlay on with a frame time graph, watch my fps, and am sensitive to such things, so if it was there I would see it. It isn't. I run at a fairly consistent 55fps average even during large fights and explosions. I leave my graphics control panel alone except for changing G-Sync to full and windowed mode, Latency Mode to Ultra, and Preferred Refresh Rate to Highest Available; I also use Nvidia Color Settings. Other than that I don't change anything, and I am on Optimal Power mode as I've found it makes zero difference in games or benchmarks aside from causing a high idle clock on the desktop. 

The hitching and hiccups you are experiencing would likely be fixed by using EmptyStandbyList.exe and setting it up to run every five minutes via Task Scheduler.
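For anyone wanting to try the five-minute scheduler setup described above, it can be done with Windows Task Scheduler from an elevated command prompt. This is just a sketch; `C:\Tools\EmptyStandbyList.exe` is an assumed path, so point it at wherever you saved the tool:

```shell
:: Run EmptyStandbyList.exe every 5 minutes with highest privileges
:: (C:\Tools\EmptyStandbyList.exe is an example path - adjust to your install location)
schtasks /Create /TN "EmptyStandbyList" /SC MINUTE /MO 5 ^
    /TR "C:\Tools\EmptyStandbyList.exe" /RL HIGHEST /F
```

`/RL HIGHEST` matters because the tool needs admin rights to purge the standby list; `/F` just overwrites the task if it already exists.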


----------



## sblantipodi

Talon2016 said:


> I just went back and tested again for another 10 minutes of gameplay and nadda, nothing. Smooth as 55fps can feel lol.
> 
> No, the hitching and hiccups you report are NOT there at all for me, please don't tell me I don't see something just because you are having an issue. I have an overlay on with a frame time graph, watch my fps, and sensitive to such things so if it was there I would see it. It isn't. I run at a fairly consistent 55fps average even during large fights and explosions. I leave my graphics control panel alone except for changing G-Sync to full and windowed mode, Latency Mode to Ultra, Preferred Refresh Rate to Highest Available, I also use Nvidia Color Settings. Other than that I don't change anything and I am on Optimal Power mode as I've found it makes zero difference in games or benchmarks aside from causing a high idle on the desktop.
> 
> The hitching and hiccups you are experiencing would likely be fixed by using emptystandbylist.exe and setting it up to run every five minutes via a scheduler.


OK, sorry. The problem seems to occur on most 2080 Tis, even in other games; I had the same problem with AC Odyssey too.
It's probably a problem that happens on Founders or similar cards. What card do you have?

It's surely a software problem, since it can be fixed by setting the power option to Maximum Performance.


----------



## Talon2016

sblantipodi said:


> ok sorry, the problems seems to occur on most 2080Ti even on other games, I had the same problem even with AC Odyssey.
> probably "is a problem" that happen on Founders or similar cards, what card do you have?
> 
> surely is a software problem since it can be fixed by setting the power option to maximum performance.


If you are having the issue in other games then it's not an issue with Control or all 2080 Ti. The issue you are describing has been an issue for almost 2 years now on Windows 10. I think it's when they released the Fall patch or whatever, I've lost track at this point, but it introduced microstutters in pretty much all games for everyone. A lot of people went back to older Win10 versions which ran fine, some dealt with it, others fixed it with EmptyStandbyList.exe. I am telling you this fixed it for me, and after each clean install of Windows, it's one of the first things I do. After this, hitching and stutters are gone in all games. 







https://wj32.org/wp/software/empty-standby-list/ -- EmptyStandbyList.exe download. The download count shows a lot of people are also using this fix.

I am on a FTW3 Hybrid with XOC vbios.


----------



## ducky083

Hi, I need your help, please.

I have a GALAX 2080 Ti HOF.

When I push the button to switch BIOS (the card has dual BIOS), GPU-Z always shows the same BIOS version (90.02.17.40.32) and the same 450W max power limit...

I also have a ZOTAC 2080 Ti Triple Fan (reference PCB) flashed with the 380W GALAX BIOS.

Comparing Fire Strike scores on the two cards:

ZOTAC => 37751 graphics at an average of 2115 MHz

GALAX => 37385 graphics at an average of 2115 MHz

If I set a higher frequency on the GALAX, like 2160 MHz (stable with stock cooling at max fan), the Fire Strike score is barely better, around 37600...

Is there a problem with my GALAX card?

Thanks for your help!


----------



## Apothysis

I've flashed my MSI Trio X with the 406W bios. After flashing GPU-Z reads the correct bios version and I can set a power limit of 135% in Afterburner but it still hits Power perfcap around 350W in 3DMark. I've followed the nvflash guide step by step and also tried manually disabling the device before flashing without success, which seems to have worked for another user earlier in this thread. Any other suggestions that might help?


----------



## JustinThyme

sblantipodi said:


> ok sorry, the problems seems to occur on most 2080Ti even on other games, I had the same problem even with AC Odyssey.
> probably "is a problem" that happen on Founders or similar cards, what card do you have?
> 
> surely is a software problem since it can be fixed by setting the power option to maximum performance.


What's the rest of your specs?
Mine are in my sig, and I'm not seeing any of this in any games or anything else.
If it's there and I'm not seeing it, then it's still fine, as "I" don't see anything out of the norm.


Maybe non-optimized driver settings?
Maybe insufficient RAM?


----------



## vmanuelgm

JustinThyme said:


> What? Because I built a system that takes everything I throw at without so much as a belch and it runs well it doesnt count?
> 
> OMG!!
> 
> To answer another question 3440x1440 @ 100Hz DX12



I would like to see a ShadowPlay recording of your rig running BFV like you said... Just curiosity...


----------



## KCDC

delete, wrong thread.


----------



## Spiriva

Vulkan Beta 436.20 for Windows

https://developer.nvidia.com/vulkan-beta-43620-windows-10

Fixes:

General performance improvements over 436.15 and 435.17

https://developer.nvidia.com/vulkan-driver


----------



## sblantipodi

Control is one of the few games where the 2080 Ti shows its muscles over the 1080 Ti.

The 2080 Ti is really, really faster than the 1080 Ti in this game.

https://youtu.be/wmleyuN7-Ew


----------



## kithylin

sblantipodi said:


> Control is one of the few games where 2080ti shows his muscles over 1080ti.
> 
> 2080ti is really really really faster than 1080ti on this game
> 
> https://youtu.be/wmleyuN7-Ew


And? You're showing that as if it's a good thing? It's still running < 60 FPS @ 4K even with a single 2080 Ti. To me it shows more than anything that a 2080 Ti still isn't fast enough for acceptable 4K performance.


----------



## kx11

kithylin said:


> And? You're showing that as if it's a good thing? It's still running < 60 FPS @ 4K even with a single 2080 Ti. To me it shows more than anything that a 2080 Ti still isn't fast enough for acceptable 4K performance.



pretty sure my game was running more than 60fps with ray tracing set to medium and DLSS @ 4k




here's my performance

https://www.youtube.com/watch?v=MFSAkOm-VTA&t=4s


----------



## kithylin

kx11 said:


> pretty sure my game was running more than 60fps with ray tracing set to medium and DLSS @ 4k
> 
> 
> 
> 
> here's my performance
> 
> 
> https://www.youtube.com/watch?v=MFSAkOm-VTA&t=4s


The same 45-58 FPS as the video above showed.

Also, I've been reading into it and found out this game uses forced motion blur with no way to disable it. That's probably a pretty big source of poor performance on everyone's systems. I'm just overall curious about the game and its performance, but I won't actually be buying it for a long time due to EGS crap. I mainly play high-refresh-rate 1080p myself; I have a 144Hz screen that overclocks to 170Hz.


----------



## Talon2016

kithylin said:


> The same 45-58 FPS as the video above showed.
> 
> Also though I'm reading into it and I found out this game uses forced motion blur with no way to disable it. That's probably a pretty big source for poor performance with everyone's system. I'm just overall curious about the game and it's performance, but I won't be actually buying it for a long time due to EGS crap. I mainly play high refresh rate 1080p myself. I have a 144hz screen that overclocks to 170 Mhz.


The game is extremely demanding, using multiple ray-traced features. It's essentially a great benchmark for the future; it reminds me of when Crysis first came out. I have a feeling this game will remain a benchmark staple for the foreseeable future. That said, the 2080 Ti is delivering upwards of 75% more performance than the old 1080 Ti. I expect this trend to continue as developers have more time to work with Turing.


----------



## owcraftsman

pewpewlazer said:


> Your problem is you have a 4c/8t CPU. BF5 uses up to 11 threads. I wouldn't expect that game to run smoothly on anything less than a 6c/12t Intel chip, maybe a 9700k, or a 2700x/3700x/etc.
> 
> OC wise you can probably expect to hit 2055-2070mhz give or take a bit on stock BIOS. But like said, every card is different.


After hours of testing my stable OC (startup clock) is +120 CC and +600 MC. Not bad, but I don't feel like I hit the lottery here. 


Thanks for the suggestion I suppose I already knew I was in need of an upgrade or two, but the prospects have been dismal the last few years. My normal upgrade cycle was every two years at most and that's going back to Athlon days on my DFI LanParty. 
I'm used to being at the top of the stack, but Intel fubarred that IMHO with HEDT, which is more platform than I need and more than I want to spend that kind of money on. 

To go AMD or not to go AMD? 
Intel is still the king of IPC but it lacks PCIe 4.0 and Intel won't have that or 5.0 until 2nd half 2020 and I don't want to wait that long to buy a somewhat future proof new build suitable for my 2080 Ti. 

I may go 3900x on x570 if I can find a 32GB (4x8) set of RAM that works well at 3600MHz which should last a couple of years without too much compromise.


----------



## kithylin

owcraftsman said:


> I may go 3900x on x570 if I can find a 32GB (4x8) set of RAM that works well at 3600MHz which should last a couple of years without too much compromise.


Kind of off topic for this thread, but if you do go AMD, 3600 MHz RAM is the "sweet spot" for Zen 2 / Ryzen 3000 series right now. Here's one of the best 3600 kits right now, 15-15-15 @ 3600, on sale for $150 for a 16GB set: https://www.newegg.com/g-skill-16gb-288-pin-ddr4-sdram/p/N82E16820232306


----------



## sblantipodi

Guys, I have a serious doubt about my rig; I don't know if it depends on the GPU, the monitor, or something else.
When I connect certain things to the PC
(headphones, a USB gamepad, a micro-USB stick, or sometimes even a USB charger plugged into the wall socket near the PC),

the monitor blinks: it goes black for a split second and then returns to normal.

This started happening with my new Acer XB271HK and RTX 2080 Ti.

It never happened before, so I'm pretty worried. Is anyone else having this problem?


----------



## GRABibus

Hello,

I would like to mention an issue. I am not the only one in the world with it (I have posted some related links below).
There are so many threads dedicated to this issue...

To summarize:
As soon as I activate DXR in BF5, after some seconds (even in the game menu) or after some minutes in game, the game crashes to desktop (without any error message).
If I disable DXR and play either DX12 or DX11, the game runs perfectly.

I have this problem with my Sea Hawk 2080 Ti, but I also had it with my former GTX 1080 Ti!
Even with CPU, RAM and GPU at stock frequencies, I have the issue.

A lot of people face exactly this issue and nobody has been able to say what the solution could be...

My specs: see sig.
My NVIDIA drivers are 436.15. I tried rolling back to 431.60, but no luck...
My Windows 10 is version 1903, 64-bit Professional.


Here are some links:
https://www.nvidia.com/en-us/geforc...80-frequent-hangscrashes-in-battlefield-v-bu/

https://www.nvidia.com/en-us/geforc...ashing-with-ray-tracing-enabled-on-2080ti-fe/

https://answers.ea.com/t5/Technical...don-t-know-what-to-do-HELP/m-p/8162073#M19114


Should I open a new thread for this ?


----------



## smilinjohn

sblantipodi said:


> guys I have a serious doubt about my rig, I don't know if it depends on the GPU or the Monitor or what else.
> when I connect "some things" to the PC,
> a headphone, a USB gamepad, a microUSB stick, or sometimes even if I connect a USB charger to the wall near the PC socket,
> 
> monitor blinks for a microsecond, it became black for a microsecond and then return to normal.
> 
> this started happening since my new Acer XB271HK and RTX2080Ti.
> 
> it never happened before so I'm pretty worried, is there someone else with this problem?



I have something similar with my XB271HU on a 1080Ti, but mine only does it when I stand up from my chair. Nothing is touching the stand the monitor is on, so it's not static; I'm not bumping it, so it's not that; it's not the cables, as I have tried different ones; and settings don't matter either. I can sit in my chair for hours playing games, working on schoolwork, or just watching a movie without an issue, never a flash or blink, but stand up from my chair and it will flash. I even sent it to Acer for them to look at after I had it for about a week, and they had no idea what causes it; at this point I wish I had sent it back for a refund instead and just bought a different gaming monitor. They kept it so long I had to contact them to see what was going on with it. I even shot some video of the problem and sent it in with the monitor on a DVD (they probably didn't even watch it) to show them what it was doing. They reflashed it (so they said) while they had it, and that helped for a little while, but recently it has started doing it again.


I have read on other sites that others have this problem under different conditions, so it's not an isolated issue. I'm probably going to go with a different brand and replace this monitor eventually, which sucks because I really do like the display, but I'm worried that one day it's going to flash and stop working, leaving me without a monitor.


----------



## trivium nate

just picked up my 2080TI from BB!!!!


----------



## ntuason

trivium nate said:


> just picked up my 2080TI from BB!!!!


What card were you using before?


----------



## trivium nate

a gtx 1080


----------



## GRABibus

I have repasted my MSI Sea Hawk X with Conductonaut (AIO water-cooled card).
At full load, I see 10°C to 15°C less on the GPU than with the factory paste!!
That sounds good.

I am at 50°C on the GPU at 26°C ambient in Time Spy Extreme, with the overclock described in the attached picture:

Do you think I can try the ASUS XOC BIOS?
Is it compatible with my card?
Are there big risks in doing it?

Thanks.


----------



## kithylin

GRABibus said:


> I have repaste my MSI Sea Hawk X with Conductonaut (AIO water cooled card)
> Full load, I have 10C to 15°C less on the GPU than with factory paste !!
> That sounds good.
> 
> I am at 50°c on GPU at 26°C ambiant in Time Spy Extreme with overclock described in attached picture. :
> 
> Do you think I can try to use the ASUS XOC Bios ?
> is it compatible with my card ?
> Are there big risks to do it ?
> 
> Thanks.


Does that use a copper water block on top? If so you don't want to use liquid metal on copper blocks. It will ruin/destroy the block over time. Depending on MSI's rules in your country it may void warranty too.


----------



## GRABibus

kithylin said:


> Does that use a copper water block on top? If so you don't want to use liquid metal on copper blocks. It will ruin/destroy the block over time. Depending on MSI's rules in your country it may void warranty too.


Yes, maybe some degradation over time, but overall, with copper, Conductonaut is OK... (not 100% the best, for sure). I took the risk!! 

https://www.gamersnexus.net/guides/...cts-copper-nickel-and-aluminum-corrosion-test

And what about the BIOS? (See my question above.)


----------



## sblantipodi

smilinjohn said:


> I have something similar with my XB271HU on a 1080Ti, but mine only does it when I stand up from my chair. Nothing touching the stand I have the monitor on so it's not static, not bumping it so it's not that, it's not the cables as I have tried different ones, and settings don't matter either. I can sit in my chair for hours playing games, working on schoolwork, or just watching a movie without an issue, never flashes or blinks, but stand up from my chair and it will flash. I've even sent it to Acer for them to look at after I had it for about a week and they don't have any idea about what causes it, at this point, I wish I would have sent it back for a refund instead and just bought a different gaming monitor. They kept it so long I had to contact them to see what was going on with it, I even shot some video of the problem and sent it in with the monitor on a DVD (they probably didn't even watch it) to show them what it was doing. They reflashed it (so they said) when they had it and it helped for a little while, but here recently it has started doing it again.
> 
> 
> I have read on the internet on other sites that others have this problem under different conditions but it's not an isolated issue. I'm probably going to go with a different brand and replace this monitor eventually which sucks because I really do like the display but I'm worried that one day it's going to flash and stop working leaving me without a monitor.


I can't understand how standing up from a chair can cause the issue.


----------



## J7SC

sblantipodi said:


> can't understand how standing up from the chair can cause the issue.


 
I was wondering about the same thing. That said, some years back, I had a monitor cable that must have had some torn wires inside the normal-looking shielding because the slightest move could at times disrupt the video signal...new cable solved it.


----------



## kithylin

GRABibus said:


> Yes, maybe some degradations over time, but globally, with copper, Conductonaut is ok...(Not 100% the best for sure). I took the risk !!


The issue is not that it degrades over time, and it doesn't "Dry out" over time either. It's that the liquid metal is literally absorbed by the copper over time. It just means at some point you will have to disassemble it again and put more of it in there.


----------



## GRABibus

ntuason said:


> Just curious what stuttering you guys are experiencing? What resolution are you guys running BFV? I have it on 2560 x 1440P @144Hz with everything ultra DX12 and DXR. I average about 80-90FPS dropping to mid 60s in explosions.


That's pretty common, especially with DXR quality on Ultra!

At least you can play... For my part, as soon as I enable DXR, I crash to desktop in BF5.
I experienced this issue even with my former GTX 1080 Ti...
A lot of people have this issue... some don't...

If you Google it, you will see tons of threads about this problem.


----------



## zrstoddart

majestynl said:


> So i figured out whats wrong with flashing the Galax 380w Bios. The Galax version is an older build with a older XUSB Firmware version.
> NVFlash doesn't accept an older XUSB Fw if there is one newer installed on current loaded Vbios!
> 
> My Vbios was included with *XUSB-FW Version ID : 0x70090003 * and the current Galax 380w on TPU has an older version *XUSB-FW Version ID: 0x70060003*
> 
> What i did:
> 
> 
> Backup-ed my Original Rom and just flashed same one back to check if NVflash etc is working properly
> Checked TPU Bios libary to find an Bios who had better Power limit and was build on same month or later then my current bios version!
> I found the EVGA XC and MSI Seahawk X (Ref board)
> Double-checked if the XUSB had at least same FW version. It did!
> Flashed EVGA Bios (338w) with NvFlash (Board Id Mismatch version)
> 
> 
> Flashed successful and the card is running already better  Need to check which Bios is the best for now.
> If we want to use the 380w bios, we need a newer Bios file from that card with at least same or newer XUSB Fw, otherwise we cant flash it!
> 
> I believe not only MSI Duke OC, but most new bought cards are running with an updated XUSB FW on their last vBios!
> 
> 
> _See below for Screenshot Running MSI Duke OC with a EVGA Bios running 338w_


So I have to dig this up because it's driving me nuts. I have been reading page after page of this thread and no answers...
_Are there any BIOSes I can use with a version the same as or newer than the OLD 400W MSI Gaming X Trio BIOS posted at the top of this page https://www.overclock.net/forum/27716280-post3683.html_? I have looked, and it seems like there is no way for me to use this card anywhere near its full potential... (I bought it because it was supposed to have a BIOS I could flash that would allow me to push 400W.)

My card shipped with a newer version than this BIOS, and there is no way for me to flash an older BIOS onto this card, so I am stuck at 330W and a 110% power limit on this monster card with 2x 8-pin & 1x 6-pin power connectors that should easily hit 400W and a 135% power limit. I have been reading since I first saw this issue come up on page 841:



Neosin said:


> You guys have any ideas?
> 
> So I'm trying to flash my bios and i'm getting this error when i do
> 
> Update display adapter firmware?
> Press 'y' to confirm (any other key to abort):
> EEPROM ID (EF,6014) : WBond W25Q80EW 1.65-1.95V 8192Kx1S, page
> 
> XUSB FW component of the input GPU firmware image is imcompatible, please use
> a newer version of GPU firmware image for this product.
> 
> BIOS Cert 2.0 Verification Error, Update aborted.
> 
> Nothing changed!
> 
> ERROR: Invalid firmware image detected.
> 
> 
> I have the MSI RTX 2080ti Gaming X Trio and trying to get the 400w bios on it, as i'm limited at 330w.
> 
> Using nvflash from page 1, protectoff, then -6 romname, etc i think i'm doing it all correct.
> 
> Any ideas?


Please say someone has a solution to this by now. Is anyone able to save me from this 330W prison?
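For anyone following along, the flash sequence people in this thread describe (backup, `--protectoff`, then `-6` to force past the board ID mismatch) looks roughly like this. A sketch only, run from an elevated prompt; the ROM filenames are examples:

```shell
:: Back up the current vBIOS first (filename is an example)
nvflash64 --save backup.rom

:: Check the BIOS version info of a downloaded ROM before flashing
nvflash64 --version gamingxtrio400w.rom

:: Disable EEPROM write protection, then force-flash past the board ID mismatch
nvflash64 --protectoff
nvflash64 -6 gamingxtrio400w.rom
```

As the quoted error shows, even `-6` will not get past the XUSB firmware check: nvflash refuses any ROM whose XUSB FW is older than the one currently installed, so the only way out is a donor BIOS built with the same or newer XUSB FW.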


----------



## kithylin

zrstoddart said:


> Please say someone has a solution to this by now. Is anyone able to save me from this 330W prison?


My answer probably won't be what you want to hear but I just thought you may want to know that currently you have +10% allowed on your bios. 330 + 10% = 363. You're currently limited to 363 watts, not 330 watts.
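The arithmetic behind this exchange is just the BIOS's default power target scaled by the power-limit slider; both readings of the numbers in the two posts can be sketched like this (the 300W default in the second case is an inference from "330 after 110%", not a figure stated in the thread):

```python
def effective_power_limit(default_watts: float, slider_percent: float) -> float:
    """Max board power = BIOS default power target scaled by the power-limit slider."""
    return default_watts * slider_percent / 100.0

# Reading 1 (kithylin): a 330W default with a +10% slider would allow 363W
print(effective_power_limit(330, 110))  # 363.0

# Reading 2 (zrstoddart's correction): 330W is already the post-slider cap,
# i.e. an (assumed) 300W default with a 110% max slider
print(effective_power_limit(300, 110))  # 330.0
```

The disagreement in the thread is exactly which of the two numbers GPU-Z's 330W refers to: the pre-slider default or the post-slider ceiling.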


----------



## zrstoddart

kithylin said:


> My answer probably won't be what you want to hear but I just thought you may want to know that currently you have +10% allowed on your bios. 330 + 10% = 363. You're currently limited to 363 watts, not 330 watts.


Nope. 330 after 110%. See Pic.


----------



## kithylin

zrstoddart said:


> Nope. 330 after 110%. See Pic.


My mistake. You didn't say it that way in your post, so I read it differently. Sorry.


----------



## kwalker99

zhrooms said:


> *└  Power Limit: Unlimited | Overvoltage: No (up to 1.093V) | Voltage/Frequency Curve Editor: Disabled | 2D Clocks: Core (Yes) / Memory (No) | Fan Adjustable: Yes ┘ *


Please can you clarify what these features are? Some are self-explanatory but a few are less so; a key/legend explaining them would help.

Does a BIOS having no Voltage/Frequency Curve Editor option result in the card running at maximum voltage/clock speed all the time, even when idle?

What does the '2D Memory Clock' feature relate to?

Thanks for an amazing post. If only we had an uncapped FTW3 Ultra BIOS :lmaosmile

KP


----------



## kwalker99

J7SC said:


> This came up before in this thread, mostly re. the 450w Galax HOF OCLabs Bios...flashing a Bios from a card with 3 physical EPS connectors (either 2x8 pin + 1x6 pin, or 3x 8 pin) onto a card with 2 physical EPS connectors will not yield the full power target but gimp it, as was also the experience of the folks who had tried it


What about the other way round? Has anyone managed to get the 450w Galax HOF OCLabs Bios running on a MSI Trio?


----------



## owcraftsman

kithylin said:


> Kind of off topic for this thread but if you do go AMD, 3600 mhz ram is the "Sweet Spot" for Ryzen Gen2 / 3000 series right now. Here's one of the best 3600 kits right now, 15-15-15 @ 3600, on sale for $150 for a 16GB set: https://www.newegg.com/g-skill-16gb-288-pin-ddr4-sdram/p/N82E16820232306


Thanks, I'm looking for single-rank memory with Samsung B-die (20nm) or SK Hynix CJR (18nm) chips.


----------



## schoolofmonkey

I'd been running the 436.15 WHQL drivers.
I thought there was something wrong with my system; I was getting a stupid amount of stuttering in different games, to the point where they nearly stopped (single-digit fps). I rolled back to 431.70 and it's all gone.
On 431.70, Control and The Dark Pictures Anthology: Man of Medan get much better frame rates, with RTX on in Control, and no more stuttering making the games unplayable.

My GPU is a ROG RTX 2080 ti Advanced


----------



## kithylin

owcraftsman said:


> Thanks, I'm looking for single rank memory with Samsung B (20nm) or SK-Hynix CJR (18nm) die chips.


https://benzhaomin.github.io/bdiefinder/


----------



## smilinjohn

sblantipodi said:


> can't understand how standing up from the chair can cause the issue.



Me either; the damnedest thing I've ever seen. Like I said in my post, I'm not bumping the stand my monitor is on, and nothing is touching it, but every time I stood up it would flicker. It started as an intermittent problem, and at first I thought I was imagining it altogether, but I had a friend over and he said something about it when he saw it happen a couple of times. It got progressively worse and would flicker every single time I stood up. I was able to shoot 20 minutes of video with my phone, explaining the issue while I demonstrated it. I burned that to DVD and sent it in with the monitor when I sent it to Acer. They said they couldn't find anything wrong, and when it came back (scratched in several places to boot) it was fine; then just a few weeks ago it started doing it again every now and then. 





J7SC said:


> I was wondering about the same thing. That said, some years back, I had a monitor cable that must have had some torn wires inside the normal-looking shielding because the slightest move could at times disrupt the video signal...new cable solved it.



I've tried several brand new cables, and it doesn't matter if I use DP or HDMI, it'll flicker when I stand up, very weird.


----------



## J7SC

smilinjohn said:


> Me either...(...)
> 
> 
> I've tried several brand new cables, and it doesn't matter if I use DP or HDMI, it'll flicker when I stand up, very weird.


 
...very weird indeed. Who knows, maybe there's some sort of electromagnetic interference in the room that gets redirected when you stand up. Try tinfoil  ?


----------



## sblantipodi

This video saved my day.

https://youtu.be/WzsHYeIpFP4

I had tons of stuttering in DX12 titles; it's pretty much solved now


----------



## smilinjohn

J7SC said:


> ...very weird indeed. Who knows, may be there's some sort of electro-radiation in the room that gets redirected when you stand up. Try tinfoil  ?





Piss on that, if it gets too bad I'll try another brand of monitor :thumb:


----------



## GRABibus

sblantipodi said:


> This video saved my day.
> 
> https://youtu.be/WzsHYeIpFP4
> 
> I had tons of stuttering in dx12 titles, pretty solved now



I have tried with BF1.exe, no change. Still stutter fest in DX12....


----------



## Cancretto

smilinjohn said:


> Me either, the damndest thing I've ever seen. Like I said in my post I'm not bumping the stand my monitor is on, nothing touching it, but every time I stood up it would flicker. It started as an intermittent problem and at first, I thought I was imaging it all together, but I had a friend over and he said something about it when he saw it happen a couple of times. It got progressively worse and would flicker every single time I stood up. I was able to shoot 20 minutes of video with my phone all the while explaining the issue while I demonstrated it. I burned that to DVD and sent it in with the monitor when I sent it to Acer. They said they couldn't find anything wrong and when it came back (scratched in several places to boot) it was fine, just a few weeks ago it started doing in again every now and then.
> 
> 
> 
> 
> 
> 
> I've tried several brand new cables, and it doesn't matter if I use DP or HDMI, it'll flicker when I stand up, very weird.


I feel you man, I have an XB271HU too and have the same issue; when I stand up from my chair, sometimes the monitor flickers for a second. 
A similar thing happens when I turn on a light anywhere in my house.
The most hilarious thing is that, rarely, it still happens when I walk in front of my room door, the monitor flickers 
Don't know if Acer monitors have such bad electronics inside or it's just my house wiring that's garbage


----------



## smilinjohn

Cancretto said:


> Can feel you man, I have an xb271hu too and have the same issue when I stand up from my chair sometimes monitor flickers for a second.
> Similar thing happen when I turn on a light anywhere in my house.
> Most hilarious thing is that rarely but still happens when I walk in front of my room door, the monitor flickers
> Don't know if Acer monitors has such bad electronics inside or it's just my hose wiring that's garbage



I couldn't rule out the wiring in this apartment, but I don't think that's it; if it was the wiring, then it should do it the whole time I'm sitting in front of my monitor, not ONLY when I stand up. Some of the things they do in Morehead, KY really make me scratch my head and ask *** were they thinking, then I realize they weren't. But that's what happens when someone SAYS they can do the work but really has no clue as to how to do it; they're just trying to work off overdue rent... I do maintenance on the side for my landlord, so I see a lot of really F'ed up stuff. He owns several properties, and there are a couple of them I WON'T touch no matter what, I absolutely refuse to.


----------



## sblantipodi

Cancretto said:


> smilinjohn said:
> 
> 
> 
> Me either, the damndest thing I've ever seen. Like I said in my post I'm not bumping the stand my monitor is on, nothing touching it, but every time I stood up it would flicker. It started as an intermittent problem and at first, I thought I was imaging it all together, but I had a friend over and he said something about it when he saw it happen a couple of times. It got progressively worse and would flicker every single time I stood up. I was able to shoot 20 minutes of video with my phone all the while explaining the issue while I demonstrated it. I burned that to DVD and sent it in with the monitor when I sent it to Acer. They said they couldn't find anything wrong and when it came back (scratched in several places to boot) it was fine, just a few weeks ago it started doing in again every now and then.
> 
> 
> 
> 
> 
> 
> I've tried several brand new cables, and it doesn't matter if I use DP or HDMI, it'll flicker when I stand up, very weird.
> 
> 
> 
> Can feel you man, I have an xb271hu too and have the same issue when I stand up from my chair sometimes monitor flickers for a second.
> Similar thing happen when I turn on a light anywhere in my house.
> Most hilarious thing is that rarely but still happens when I walk in front of my room door, the monitor flickers
> Don't know if Acer monitors has such bad electronics inside or it's just my hose wiring that's garbage

If the flicker that occurs is similar to this 
https://youtu.be/gFDtw8jh-Cw

it's not the one I am talking about. All AUO 4K G-Sync panels do this kind of flicker, and it should be considered normal.
There is no real correlation with a particular event; the flicker occurs due to the G-Sync module and some sort of bad design. It should cause zero problems though.


... But I was talking about another kind of flicker, where the screen goes black for a second and then returns to normal.


----------



## schoolofmonkey

GRABibus said:


> I have tried with BF1.exe, no change. Still stutter fest in DX12....


What driver version are you using? 436.15 is horrible for me; 431.70 works the best, no stuttering/hitching.

Used the posted fix ages ago to get better frame rate in GTA V.


----------



## Serchio

Hello RTX 2080 Ti owners. 

I would like to get some advice from you. I have recently bought an EVGA RTX 2080 Ti XC - it was the only card I could get from a brick-and-mortar shop in my city. I have tinkered a bit with MSI Afterburner to keep the card temps below 70C ([email protected] @ 100% GPU) and I was thinking about building a custom water loop for it. I am not 100% sure whether I want to do it, since it would be my first time and it would cost me around 45% of the price I paid for the card. The second idea I have is to just return it - I have 45 days to do so - and get another RTX 2080 Ti. I have tried to find out which RTX 2080 Ti I should buy, but I am a bit lost right now - would any of you recommend going with an AIO, or should I stick to the air-cooled cards? Any recommendations up to the MSI Lightning Z price level?


----------



## smilinjohn

sblantipodi said:


> If the flicker that occurs is similar to this
> https://youtu.be/gFDtw8jh-Cw
> 
> It's not the one I am talking about.



No, that is not what I'm talking about, though how that could be considered "normal" I don't know. If I had a monitor that did that, I'd get rid of it. 





sblantipodi said:


> the screen goes black for a second and then return to normal.



^ The video I had showed this: it goes black, just flashes, for just a second.


Though I do have to say that it didn't do it at all yesterday. I just swapped out motherboards late Saturday?


----------



## GRABibus

schoolofmonkey said:


> What driver version are you using, 436.15 is horrible for me, 431.70 works the best, no stuttering/hitching.
> 
> Used the posted fix ages ago to get better frame rate in GTA V.


Hey Schoolofmonkey! 
I am on 436.15.
On BF1, whatever the driver, I always stuttered in DX12.
BFV is much smoother; no issues so far in DX12, even with 436.15.


----------



## sblantipodi

smilinjohn said:


> No that is not what I'm talking about, though how that could be considered "normal" I don't know. If I had a monitor that did that I'd get rid of it.
> 
> 
> 
> 
> 
> 
> ^ The video I had showed this, goes black, just flashes, for just a second.
> 
> 
> Though I do have to say that it didn't do it at all yesterday. I just swapped out MBs late Saturday?


It happens very rarely; if you stay at the PC all day, you might see that flicker twice in a day, if at all.
Can you show me the video where your flicker happens, please?


----------



## majestynl

zrstoddart said:


> So I have to dig this up because it's driving me nuts. I have been reading page after page of this thread and no answers...
> 
> 
> 
> Spoiler
> 
> 
> 
> _Are there any BIOS I can use with a version the same or newer than the OLD 400W MSI Gaming X Trio Bios posted at the top of this page https://www.overclock.net/forum/27716280-post3683.html_? I have looked and it seems like there is no way for me to use this card to anywhere near its full potential... (which I bought since it was supposed to have a bios I could flash that would allow me to push 400W).
> 
> My card shipped with a newer version than this bios, and there is no way for me to flash an older bios onto this card, so I am stuck at 330W and a 110% power limit on this monster card with 2x 8-pin & 1x 6-pin power connectors that should easily hit 400W and a 135% power limit. I have been reading since I first saw this issue come up on page 841:
> 
> 
> Please say someone has a solution to this by now. Is anyone able to save me from this 330W prison?


No easy solution yet. You can force-flash the bios chip with a clip, or try to find somebody who can mod NVFlash to lower the min. version check of the XUSB FW!


----------



## mattxx88

majestynl said:


> So I figured out what's wrong with flashing the Galax 380W bios. The Galax version is an older build with an older XUSB firmware version.
> NVFlash doesn't accept an older XUSB FW if a newer one is installed in the currently loaded vbios!
> 
> My vbios came with *XUSB-FW Version ID: 0x70090003* and the current Galax 380W on TPU has an older version *XUSB-FW Version ID: 0x70060003*
> 
> What I did:
> 
> 
> Backed up my original ROM and flashed the same one back, to check that NVFlash etc. was working properly
> Checked the TPU bios library to find a bios that had a better power limit and was built in the same month as, or later than, my current bios version!
> I found the EVGA XC and MSI Seahawk X (ref board)
> Double-checked that the XUSB had at least the same FW version. It did!
> Flashed the EVGA bios (338W) with NvFlash (board ID mismatch version)
> 
> 
> Flashed successfully and the card is already running better  Need to check which bios is the best for now.
> If we want to use the 380W bios, we need a newer bios file from that card with at least the same or a newer XUSB FW, otherwise we can't flash it!
> 
> I believe not only the MSI Duke OC, but most newly bought cards are running an updated XUSB FW on their latest vbios!
> 
> 
> _See below for screenshot: MSI Duke OC running an EVGA 338W bios_


I only read this today, Majesty, and I am so glad that you solved it 

Can I ask you how to check the XUSB FW version?


----------



## smilinjohn

sblantipodi said:


> it happen very rarely, if you stay to the PC all day, you can see that flickering two times in a day. if so.
> can you show me the video where your flicker happen please?





I've spent all day today in front of my monitor playing GTA V and don't get the flicker the video on here showed. 



My monitor flashed (screen went black for half a second) once today when I got up to go smoke a cigarette earlier, and that's the first time it's done that since swapping out my MB Saturday. 



I wish I could show the video I took before I sent it off to Acer, but I guess I didn't save it to the computer, and I know it isn't in my phone anymore; I don't keep pictures or videos around past their usefulness to me. I'm not one of those people who needs pictures and/or video every time an ant farts or someone's kid picks their nose. Right now it doesn't flash consistently enough to capture video, but if it starts back up I'll start a thread here.


----------



## kx11

Do you guys think the O11D XL can house the KingPiN 2080 Ti Hydro Copper block? Or do I have to go vertical?


The KP specs page confused me with the height and length 




Here are the O11D XL specs 

http://www.lian-li.com/pc-o11d-rog/




KP dimensions 





*Dimensions*



Height: 5.48 in - 139.3 mm
Length: 11.45 in - 290.83 mm
Width: Dual Slot


----------



## J7SC

kx11 said:


> do you guys think O11D XL can house KingPiN 2080ti hydrocopper block ? or do i have to go vertical ?
> 
> KP specs page confused me with the height and length
> 
> (...)


 
Gamers Nexus just reviewed that case, I think (below, not sure about the 'D'). There seems to be massive clearance for a long GPU like the KingPiN 2080 Ti Hydro Copper, subject only to your specific arrangement of rads / rad fans etc. If you check my 'sig' rig, of the three XSPC 360/60 rads for the dual w-cooled 2080 Tis, the two 360s on the front of the Core P5 are turned on their side (just one option for you). All in all, I think it shouldn't be a real issue with that case if you have some flexibility re. rad, fan and reservoir mounts.

https://www.youtube.com/watch?v=Ahkbz1DGLCw


----------



## krizby

Serchio said:


> Hello RTX 2080 Ti owners.
> 
> I would like to get some advice from you. I have recently bought EVGA RTX 2080 Ti XC - it was the only card I could get from a stationary shop in my city. I have tinkered a bit with MSI afterburner to keep the card temps below 70C ([email protected] @ 100% GPU) and I was thinking about building a custom water loop for it. I am not 100% whether I want to do it since it would be my first time and that would cost me around 45% of the price I have paid for the card. The second idea I have is to just return it - I have 45 days to do it and just get other RTX 2080 Ti. I have tried to find out which RTX 2080 Ti I should buy but I am a bit lost right now - would any of you recommend to go with AIOs or should I stick to the air cooled cards? Any recommendations up to MSI Lightning Z price level?


Well, if the noise is bothering you it wouldn't hurt going the watercooling route; if not, then there is no point, because there is little gain in gaming performance compared to the cost. You can try finding the max stable overclock out of your card first before deciding to exchange it; if you can reach something like 2070MHz/1.05V then you should keep it, as there is a chance you would get an inferior chip even on more expensive models.


----------



## kx11

J7SC said:


> Gamers Nexus just reviewed that case, I think (below, not sure about the 'D'). There seems to be massive clearance for a long GPU like KingPiN 2080ti hydrocopper, subject only to your specific arrangements of rads / rad fans etc. If you check my 'sig' rig, of the three XSPC 360/60 rads for dual 2080 Ti w-cooled, the two 360s on front of the Core P5 are turned on their side (just one option for you). All in all, I think it shouldn't be a real issue with that case if you have some flexibility re. rad, fan and reservoir mounts.
> 
> https://www.youtube.com/watch?v=Ahkbz1DGLCw





i hope you're right , thnx man


----------



## Serchio

krizby said:


> Well if the noise is bothering you it wouldn't hurt going the watercooling route, if not then there is no point because there is little gain in gaming performance compare to the cost. You can try finding the max stable overclock outta your card first before deciding to exchange it, if you can reach something like 2070mhz/1.05v then you should keep it as there are chances you will get an inferior chip even on more expensive models.




You are right. The noise does not bother me as much as the temperatures of the card. 

Unfortunately, the cooler on the non-Ultra XC is too small to even try overclocking right now. The card gets hot really fast; that is why I keep it at [email protected], and with 90-95% usage the temp can reach 78C after 40 minutes. To watercool the card I would need to spend around $600, which is twice as much as the cost of switching to an MSI Lightning Z or EVGA FTW3 Ultra.


Sent from my iPad using Tapatalk


----------



## krizby

Serchio said:


> You are right. The noise do not bother me as much as temperatures of the card.
> 
> Unfortunately, the cooler on non-ultra XC is too small to even try overclocking right now. The card gets hot really fast that is why I keep it at [email protected] and with 90-95% usage the temp can reach 78C after 40 minutes. To watercool the card I would need to spend around 600$ which is twice as much as the cost of switching to msi lightning z or evga ftw3 ultra.
> 
> 
> Sent from my iPad using Tapatalk


If you can get the Asus ROG Strix 2080 Ti OC version, that would be good; it has the best cooler out of all the custom 2080 Tis. The FTW3 Ultra may be a tad more noisy, but that's not something you would notice. 

https://www.techpowerup.com/review/msi-geforce-rtx-2080-ti-lightning-z/31.html


----------



## kicurry

I would go with the Eiswolf AIO by Alphacool. There are unused B-stock units going for around $120. They work great. Just be sure to get the one that matches your GPU. Check the Alphacool website and look for B-stock.


----------



## schoolofmonkey

kicurry said:


> I would go with the Eiswolf AIO by Aphacool. There are B stock unused going for around $120 bucks. They work great. Just be sure to get the one that matches your gpu. Check the Aquacooling website look for B stock.


Man, they even have one for my RTX 2080 Ti Strix Advanced, I know what I'll be getting..
I didn't even know these were a standalone thing, thanks :thumb:


----------



## kithylin

Serchio said:


> You are right. The noise do not bother me as much as temperatures of the card.
> 
> Unfortunately, the cooler on non-ultra XC is too small to even try overclocking right now. The card gets hot really fast that is why I keep it at [email protected] and with 90-95% usage the temp can reach 78C after 40 minutes. To watercool the card I would need to spend around 600$ which is twice as much as the cost of switching to msi lightning z or evga ftw3 ultra.
> 
> 
> Sent from my iPad using Tapatalk


Just so you are aware: 80C - 85C core temps are considered "normal" for modern video cards and are not "too hot" / dangerous at all.


----------



## SubZeroBR

Arrived a bit late to the party, now a proud owner of a Zotac RTX 2080 Ti Amp Extreme


----------



## Cancretto

Serchio said:


> You are right. The noise do not bother me as much as temperatures of the card.
> 
> Unfortunately, the cooler on non-ultra XC is too small to even try overclocking right now. The card gets hot really fast that is why I keep it at [email protected] and with 90-95% usage the temp can reach 78C after 40 minutes. To watercool the card I would need to spend around 600$ which is twice as much as the cost of switching to msi lightning z or evga ftw3 ultra.
> 
> 
> Sent from my iPad using Tapatalk


Hi man, if you want to switch your card just forget the FTW3; a Lightning would be a viable option, but since everything you need to do proper OC is under NDA, I don't know how worth it it can still be. 

I agree with krizby on getting a Strix at this point, it has the best VRMs of all the standard available cards


----------



## Ironcobra

I just received my 2080 Ti XC Ultra and I'm trying to update the bios to the 380W version. Everything appears to go as the guide says, but when I restart after flashing it's still the same stock 330W bios, with the same release date as the stock bios. What am I doing wrong here?


----------



## Cancretto

Ironcobra said:


> I just recieved my 2080 ti xc ultra and Im trying to update the bios to the 380w version and everything appears to go as the guide says but when I restart after flashing its still the same stock bios 330w and the same release date of the stock bios. What am I doing wrong here.


Are you forcing the flash with the nvflash64 -6 command? 

The sequence should be 
nvflash64 --protectoff
and
nvflash64 -6 bios.rom

Also, making your system do some power cycles sometimes fixes the problem, or perhaps try a different nvflash version
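If you end up re-flashing a lot, the two-step sequence above is easy to script. Here is a minimal Python sketch (a hypothetical wrapper, not an official tool; it assumes `nvflash64` is on your PATH and that you run it from an elevated prompt):

```python
import subprocess

def flash_commands(rom_path):
    """The nvflash sequence from the post above: turn write protection
    off, then force the flash past the board ID check with -6."""
    return [
        ["nvflash64", "--protectoff"],
        ["nvflash64", "-6", rom_path],
    ]

def flash(rom_path, dry_run=False):
    """Run each step, stopping on the first failure.
    dry_run=True just returns the commands without executing anything."""
    cmds = flash_commands(rom_path)
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds
```

This only automates the exact commands quoted above; whether the flash actually sticks still depends on the vbios version checks discussed in the thread.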


----------



## Ironcobra

Yeah, I tried all of those commands. I'll try the other nvflash in the first post.


----------



## mattxx88

Ironcobra said:


> yeah I tried all of those commands, Ill try the other nvflash in the first post.


The guide on the 1st page doesn't mention that the GPU needs to be disabled in Device Manager before flashing

https://www.overclockersclub.com/guides/how_to_flash_rtx_bios/#&gid=1&pid=99256

I had the same problem and it was caused by this
give it a try


----------



## jdmachogg

To anyone with a newer card, if you're still having issues flashing the bios even after disabling the GPU in device manager - you will need a newer bios version, because of the XUSB version.

I successfully flashed my MSI 2080 ti Gaming X Trio with the latest 380w lightning Z BIOS. Nothing older than that worked.


----------



## Ironcobra

mattxx88 said:


> the guide in the 1st page don't mention the fact that is needed to disable from Device Manager the gpu, before flashing
> 
> https://www.overclockersclub.com/guides/how_to_flash_rtx_bios/#&gid=1&pid=99256
> 
> i got same your problem and was caused by this
> give it a try





jdmachogg said:


> To anyone with a newer card, if you're still having issues flashing the bios even after disabling the GPU in device manager - you will need a newer bios version, because of the XUSB version.
> 
> I successfully flashed my MSI 2080 ti Gaming X Trio with the latest 380w lightning Z BIOS. Nothing older than that worked.


Thanks guys, easy peasy with this info. That's a pretty important step to leave out of the guide.


----------



## mattxx88

Ironcobra said:


> Thanks guys easy peasy with this info, thats a pretty important step to leave out of the guide.


absolutely agree!



jdmachogg said:


> To anyone with a newer card, if you're still having issues flashing the bios even after disabling the GPU in device manager - you will need a newer bios version, because of the XUSB version.
> 
> I successfully flashed my MSI 2080 ti Gaming X Trio with the latest 380w lightning Z BIOS. Nothing older than that worked.


I have the same card, thanks for your info, I'll try it tonight


----------



## Ironcobra

mattxx88 said:


> absolutely agree!
> 
> 
> 
> I have the same card, thanks for your info, i'll try it tonight


https://www.techpowerup.com/vgabios/207628/msi-rtx2080ti-11264-181118
That's the version I used. Let me ask, what's the catch here????? I was able to add 300 more to get to 1000 on the memory, and 10 more to get to 110 on the core, first try, with temps down 10C. The performance gains in AC Odyssey allowed me to get 4K 60 with everything but clouds (on high) at ultra!? Why are the bigger vendors not using this wattage? If there's less heat for me, I don't see any minuses. What's a good overclock for these cards on air (XC Ultra)? Is it worth pushing on?


----------



## mattxx88

Ironcobra said:


> https://www.techpowerup.com/vgabios/207628/msi-rtx2080ti-11264-181118
> Thats the version I used. Let me say whats the catch here????? I was able to add 300 more to get to 1000 on the memory and 10 to 110 on the core first try with temps down 10c. The performance gains in ac odyssey allowed me to get 4k 60 with everything buy clouds on high at ultra!? Why are the bigger vendors not using this wattage? If theres less heat for me I dont see any minuses. Whats a good overclock for these cards on air(xc ultra) Is it worth pushing on?


Imo, the sweetspot on air for these cards is 0.95/0.975V voltage-locked with an overclock to 2000MHz on the core


----------



## Angrycrab

Is it safe to pull 680 watts on the card with OCCT? I'm trying to check my overclock for errors.
I'm using the XOC bios.


----------



## Asmodian

Angrycrab said:


> Is It safe to pull 680watts on the card with OCCT? I'm trying to check for errors with my overclock.
> I'm using XOC Bios.


Theoretically, if you are using a water block to keep the VRMs and everything else cool, yes.

Personally I wouldn't run OCCT on my GPU to check for stability. It is like flooring your car at redline to check that the engine is fine. It does work, but you are using up the life of your GPU during the testing more than anywhere else, just for peace of mind that doesn't really exist. You can pass OCCT and crash in a game; not super likely, but it does happen. I am not sure OCCT testing is worth anything besides testing whether the water loop can handle the heat. Better to just play your games and drop it a bin or two if it crashes.


----------



## mattxx88

I found a Lightning bios with a 400W PL on the MSI forum

I tested it on my Gaming X Trio and it works

If someone wants to try, you can find it here:
https://forum-en.msi.com/index.php?topic=314643.0
post #13, bios named * NV377MH_125.rar


----------



## Angrycrab

Asmodian said:


> Theoretically, if you are using a water block to keep the VRMs and everything else cool, yes.
> 
> Personally I wouldn't run OCCT on my GPU to check for stability. It is like flooring your car at redline to check that the engine is fine. It does work but you are using up the life of your GPU during the testing more than anywhere else just for piece of mind that doesn't really exist. You can pass OCCT and crash in a game, not super likely but it does happen. I am not sure OCCT testing is worth anything besides testing if the water loop can handle the heat. Better to just play your games and drop it a bin or two if it crashes.


I am using a hybrid cooler from EVGA; XC temps max out at 57C with a custom fan curve. I just wanted to check whether my overclock is stable, since I prefer not to crash in game, but I guess I'll just play games from now on instead of stress testing.

Thanks for the input.


----------



## GRABibus

Hey,
I have flashed my MSI SeaHawk X with the Asus XOC 2x8-pin 1000W bios, mainly for benchmarking.
I am currently testing it in games and I have some questions.

In BFV, here are the settings:
2560x1440 144Hz
All graphics settings at maximum.

OC on the GPU:
+130MHz on core 
+1300MHz on memory 
Power limit at 100% = 1000W
Fan at 100%.
Room temperature at 23 degrees Celsius.

At the beginning of the game, the GPU is at 2100MHz and voltage is 1.075V.
GPU temp is 42 degrees.
Then, after some seconds, GPU frequency goes down to 2085MHz and voltage to 1.068V. GPU temperature is 46 degrees.

GPU power consumption is around 300W.

My feeling is that this bios, due to GPU Boost, decreases GPU clock and voltage when temperature rises (here from 42 degrees to 46 degrees).

BUT, my understanding was that, as this is a roughly unlimited power bios, GPU Boost should increase voltage in order to reach the maximum possible overclock... even when temperature rises.
Why, in this case, doesn't GPU Boost increase voltage, for example to 1.093V, in order to maintain 2100MHz, even if temperature has increased?

Power draw is only 300W, it is a 1000W bios, and I throttle??


----------



## maivorbim

GPU Boost prioritizes temps over power, as chip stability is dependent on temperature. It does not matter that you are not hitting power limits; you will lose about 15MHz at every temperature step if you are above 40-42C (not sure of the lowest temperature at which you're no longer temperature throttled). That's why cooling is the most important aspect of RTX overclocking.

Using a higher voltage will not make GPU Boost raise the frequency. Higher voltages are tied only to frequency stability. For example, you might not be stable at [email protected], but might be stable at [email protected] If you already have temperatures above 42-44, raising voltage will only increase temperatures even further, which in turn might lead to more downclocking due to temperature throttling.

I suggest flashing the GALAX/KFA2 RTX 2080 Ti OC Reference PCB (2x8-Pin) 300W x 127% Power Target BIOS (380W). You do not need the 1000W bios, as even in BFV you will rarely hit the power limits. Compared to the XOC bios, the Galax bios has frequency curves enabled. The best way of overclocking, from my experience, is to open the frequency curves, pick the point on the graph corresponding to 1.093V, and raise the expected frequency at that voltage level until it is not stable anymore.

If you hit power limits (for example, set the power limit to 200W), you will downclock even if you are under 40C. But there are very few workloads that go above the 380W limit. I only hit the 380W limit in BFV on Narvik if I am looking at snow; otherwise I am at around 300-350W maximum.

What FPS are you getting in BFV at ultra settings, 3440*1440, rtx off? I am getting around 120-144 fps depending on the map, at 2070MHz @ 1.093V, 64C, air cooled, 84% fans.
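The temperature-bin behaviour described above can be sketched as a toy model. The 42 °C start point, ~5 °C bin size and 15 MHz step are taken from this post and rough community observation, not from any NVIDIA spec:

```python
def boost_clock(max_mhz, temp_c, start_c=42, bin_c=5, step_mhz=15):
    """Toy model of GPU Boost temperature throttling: at or below start_c
    the card holds its maximum boost bin; above it, one step_mhz bin is
    dropped for every bin_c degrees (all values here are assumptions)."""
    if temp_c <= start_c:
        return max_mhz
    bins_lost = (temp_c - start_c - 1) // bin_c + 1
    return max_mhz - bins_lost * step_mhz
```

With the numbers from the BFV post above, `boost_clock(2100, 42)` holds 2100 MHz while `boost_clock(2100, 46)` drops one bin to 2085 MHz, matching the observed behaviour even though the card is nowhere near its power limit.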


----------



## cp9a

mattxx88 said:


> I found on MSI forum a Lightning bios with 400w PL
> 
> i tested it on my Gaming X Trio and it works
> 
> if some1 wanna try, you can find it here:
> https://forum-en.msi.com/index.php?topic=314643.0
> post #13, bios named * NV377MH_125.rar


Maybe I am missing something here.. What are the benefits of this Lightning bios? The TRIO already has one with 406W..


----------



## GRABibus

maivorbim said:


> Gpu boost prioritizes temps over power, as chip stability is dependant on temperatures. It does not matter that you are not hitting power limits, you will lose about 15mhz every temperature step if you are above 40-42 C (not sure on the lowest temperature where you're not temperature throttled). That's why cooling is the most important aspect for RTX overclocking.
> 
> Using a higher voltage will not raise frequency by gpu boost. Higher voltages are tied only with frequency stability. For example, you might not be stable at [email protected], but might be stable at [email protected] If you are already having temperatures above 42-44, raising voltage will only increase temperatures even further, which in turn might lead to more downclocking due to temperature throttling.
> 
> I suggest flashing GALAX/KFA2 RTX 2080 Ti OC Reference PCB (2x8-Pin) 300W x 127% Power Target BIOS (380W). You do not need the 1000W bios, as even in BFV you will rarely hit the power limits. Compared to the XOC bios, the Galax bios has frequency curves enabled. The best way of overclocking from my experience is to open frequency curves, pick a point on the graph corresponding to 1.093V at raise the expected frequency at that voltage level until it is not stable anymore.
> 
> If you hit power limits (for example, set power limit to 200W), you will downclock even if you are under 40C. But there are very few workloads that go above the 380W limit. I only hit 380W limit in BFV on Narvik if I am looking at snow, otherwise I am at around 300-350W maximum.
> 
> What FPS are you getting in BFV at ultra settings, 1440p, rtx off? I am getting around 120-144 fps depending on map, at 2070 mhz @ 1.093V, 64C, air cooled, 84% fans.


Thank you very much for this swift explanation.
I understand now that when GPU temperature increases above 40-42°C, my GPU frequency decreases, as does my GPU voltage, even if my GPU is far below the power limit.

I use this bios because I don't get crashes in BFV with DXR enabled with it, at +130MHz GPU/+1300MHz GDDR6.
With the other bioses, Galax 380W or MSI Sea Hawk X 330W, I crash in BFV with DXR enabled, even at stock!!

I explained this here:
https://www.overclock.net/forum/78-pc-gaming/1732450-bf5-crashes-desktop-when-dxr-enabled.html
https://answers.ea.com/t5/Technical...o-desktop-with-DXR-enabled/m-p/8164387#M19122

I will try the Galax 380W bios again and will tweak the V/F curves again. The conclusions were not so satisfying, but I will give it another try.
With the XOC bios, it is not possible to edit the V/F curve.

To your question on fps, I just checked.

In the multiplayer "Provence" map:
2560x1440 144Hz
DX12 enabled
DXR disabled
G-Sync enabled
All graphics settings at ultra
[email protected] (+130MHz on core in MSI AB)
[email protected] (+1300MHz on GDDR6 in MSI AB)
BIOS ASUS XOC 1000W

Between 150fps and 190fps.

With DXR enabled on "normal" level: between 95fps and 130fps


----------



## mattxx88

cp9a said:


> Maybe I am missing something here.. What are the benefits of this lightning bios? TRIO already has one with 406W..


A newer xusbc version,
and also a higher base clock.
Out of the box the GPU runs at 2025MHz @ 1.050V.


----------



## maivorbim

See this for tips on how to set VF curves: https://forums.evga.com/m/tm.aspx?m=2820280&p=1

I just reset the curve to default, pick 1.093V and raise the frequency at that voltage by 15-30 MHz until it is no longer stable. You don't have to lower the V/F curve at other voltages, as MSI AB will do that for you once you hit Apply.

Crashes are caused either by too-low voltage (raise the voltage to 1.093V) or too-high temperatures. I have not seen any difference in stability between the Asus 1000W and KFA 380W BIOSes, but your results may vary. I prefer the KFA BIOS because it has voltage curves, which I find a better way to OC.
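The single-point method described above can be sketched as a small loop. This is only an illustration of the procedure, not a real tool: `run_stress_test` is a hypothetical stand-in for manually running Heaven / Time Spy / BFV at each step, and the 15 MHz step matches the clock bin size discussed in this thread.

```python
# Sketch of the "raise the 1.093V point until unstable" method.
# run_stress_test(clock_mhz) -> bool is a hypothetical stand-in for a
# manual stability test (Heaven, Time Spy, BFV...). MSI Afterburner
# flattens the rest of the V/F curve itself when you hit Apply.
def find_max_stable_clock(start_mhz, run_stress_test, step_mhz=15, limit_mhz=2250):
    clock = start_mhz
    # Step up one bin at a time while the next step still passes.
    while clock + step_mhz <= limit_mhz and run_stress_test(clock + step_mhz):
        clock += step_mhz
    return clock  # last clock that passed the stress test

# Example with a fake tester that "fails" above 2070 MHz:
print(find_max_stable_clock(1995, lambda mhz: mhz <= 2070))  # → 2070
```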


----------



## pewpewlazer

maivorbim said:


> If you hit power limits (for example, set power limit to 200W), you will downclock even if you are under 40C. But there are very few workloads that go above the 380W limit. I only hit 380W limit in BFV on Narvik if I am looking at snow, otherwise I am at around 300-350W maximum.
> 
> What FPS are you getting in BFV at ultra settings, 1440p, rtx off? I am getting around 120-144 fps depending on map, at 2070 mhz @ 1.093V, 64C, air cooled, 84% fans.


1.093v on air?! Holy smokes!

The people who comment that very few workloads go above the 380 watt power limit, or that they can run 1.093v for any appreciable length of time always baffle me. I guess maybe I could get a voltage that high to run most of the time at 1080p or 1440p, but what's the point? I don't need an extra 15mhz at that res. Most recent games I've played at 4k absolutely hammer the card in terms of power draw, especially stuff like Metro Exodus or Control with DXR enabled. 

I'm running 2100mhz @ 1.025v (or 1.031v depending on what GPU Boost feels like doing) and power draw in GPU intensive modern titles at 4k usually hovers around 360-380w. Very rarely it will power throttle down to 2085mhz at a slightly lower voltage for a few brief seconds in intense scenes. I have found that the Port Royal benchmark is pretty indicative of what kind of voltages I can run for real world 4k gaming as well. As soon as I try pushing 1.043-1.05v, the frequency graph will start to look like a roller coaster. I'd rather stick with a voltage that gives me consistent frequency instead of jumping all over the place.

Just for another data point, I was pulling 120-144 fps (144 cap in game) in BF5 (DX12) at 1440p ultra DXR off as well. Pretty much any drop in fps seemed to be a CPU bottleneck from what I saw. My grandpa Haswell-E chip is starting to show its age.


----------



## maivorbim

I'm not stable @ 2070mhz below 1.093v, that's why I have to run it so high. I'm getting a constant 64C at that voltage, with no power limits being reached, so my frequency is a constant 2070, maybe with a few dips to 2055 mhz, but not lower than that. 

I am not experiencing the frequency rollercoaster you are describing. That sounds like what would happen when you are hitting power limits, as the card will most likely downclock to around 2000 MHz, bounce to 2040-2055 MHz, and go back and forth.

I'm not benching at 4k, only at 1440p, that's why I'm not hitting power limits. 

If you can be stable at lower voltages, there's no reason to push it to 1.093V. GRABibus was asking for tips on benchmarking, where the 1.093v might be useful for hitting higher frequencies if you can keep temps in control.

If you're not using RTX, try using DX11. I seem to get about +10fps compared to DX12. BFV is hard to benchmark on, as depending on the amount of explosions / actors on screen your fps will vary significantly. Best way I found is to load the practice range and see how many fps you get when spawning in, as that provides the same environment each time.


----------



## cp9a

mattxx88 said:


> A newer xusbc version,
> and also a higher base clock.
> Out of the box the GPU runs at 2025MHz @ 1.050V.


That's great. Will give it a shot. No issues whatsoever? Is fan control ok for both controllers?


----------



## mattxx88

cp9a said:


> That's great. Will give it a shot. No issues whatsoever? Is fan control ok for both controllers?


Dunno about fans; mine is under water. I see the fan slider at 25%, so I assume the fans work.

Edit: another important thing, I think this BIOS has better VRAM timings, because with the Gaming X Trio 400W BIOS I can push the VRAM to 8500MHz, while with this one 8500MHz is unstable, so I think it has lower timings (which is better).


----------



## marcus556

What do the red and yellow fonts over the fan size on the graphics cards mean?


----------



## GRABibus

maivorbim said:


> See this for tips on how to set VF curves: https://forums.evga.com/m/tm.aspx?m=2820280&p=1
> 
> I just reset the curve to default, pick 1.093V and raise the frequency at that voltage by 15-30 mhz until it is no longer stable. You don't have to lower the V/F curve at other voltages, as MSI AB will do that for you once you hit apply.
> 
> Crashes are caused either by too low voltages (raise voltage to 1.093V) or too high temperatures. I have not seen any difference in stability between Asus 1000W and KFA 380W, but your results may vary. I prefer the KFA bios because it has voltage curves which are a better way of OC.


As a first shot, I tested the overclock from the picture: Galax 380W BIOS.
+170MHz on core
+1300MHz on memory.
2175MHz @ 1.093V.

I passed several sessions of Heaven and Time Spy without any issue,
also 30 minutes of COD BO3.

But in COD BO3, the game starts at:
2160MHz (as the power limit is nearly reached in the menu!)
1.093V
[email protected]°C

Then I play, and after some minutes I am at:
2130MHz
Still 1.093V
[email protected]°C.

So I throttle... Why? The power limit is never reached during the game.

And you, at 63°C, you stay at 2070MHz?

I don't understand.


----------



## Sheyster

GRABibus said:


> As a first shot, I tested the overclock from the picture: Galax 380W BIOS.
> +170MHz on core
> +1300MHz on memory.
> 2175MHz @ 1.093V.
> 
> I passed several sessions of Heaven and Time Spy without any issue,
> also 30 minutes of COD BO3.
> 
> But in COD BO3, the game starts at:
> 2160MHz (as the power limit is nearly reached in the menu!)
> 1.093V
> [email protected]°C
> 
> Then I play, and after some minutes I am at:
> 2130MHz
> Still 1.093V
> [email protected]°C.
> 
> So I throttle... Why? The power limit is never reached during the game.
> 
> And you, at 63°C, you stay at 2070MHz?
> 
> I don't understand.


As GPU core temp goes up you drop bins (in 15 MHz increments). No way to avoid that as far as I know.

EDIT - Keeping temps from going up will avoid it, but that's easier said than done unless you have exotic cooling (chiller, etc.)


----------



## GRABibus

Sheyster said:


> As GPU core temp goes up you drop bins (in 15 MHz increments). No way to avoid that as far as I know.
> 
> EDIT - Keeping temps from going up will avoid it, but that's easier said than done unless you have exotic cooling (chiller, etc.)


But Maivorbim has a stable 2070MHz at 63 degrees...
I am at 50 degrees.
We have the same BIOS.

I have the same phenomenon in the Heaven benchmark:
it starts at [email protected]@39°C,
then goes down a few seconds later to [email protected]@42°C... etc. And the 380W PL is never reached during this.

So, PL not reached, temperature at 42°C, and I throttle??


----------



## pewpewlazer

maivorbim said:


> I'm not stable @ 2070mhz below 1.093v, that's why I have to run it so high. I'm getting a constant 64C at that voltage, with no power limits being reached, so my frequency is a constant 2070, maybe with a few dips to 2055 mhz, but not lower than that.
> 
> I am not experiencing the frequency rollercoaster you are describing. That sounds like what would happen when you are hitting power limits, as the card will most likely downclock to around 2000 mhz, bounce to 2040-2055 mhz and go back and forth.
> 
> I'm not benching at 4k, only at 1440p, that's why I'm not hitting power limits.
> 
> If you can be stable at lower voltages, there's no reason to push it to 1.093V. GRABibus was asking for tips on benchmarking, where the 1.093v might be useful for hitting higher frequencies if you can keep temps in control.
> 
> If you're not using RTX, try using DX11. I seem to get about +10fps compared to DX12. BFV is hard to benchmark on, as depending on the amount of explosions / actors on screen your fps will vary significantly. Best way I found is to load the practice range and see how many fps you get when spawning in, as that provides the same environment each time.


Wow. Really puts the importance of cooling into perspective when overclocking these cards. And yes, the frequency roller coaster happens because the card hits the power limit.

DX12 seemed to net me higher CPU usage and thus higher frame rates in BF5. Nothing crazy, but better nonetheless. Plus you don't have to enable FFR with DX12. DX12 might render ahead 3 frames or whatever anyway for all I know, but at least I'll still have the placebo effect of seeing FFR turned off.



GRABibus said:


> But Maivorbim has a stable 2070MHz at 63 degrees...
> I am at 50 degrees.
> We have the same BIOS.
> 
> I have the same phenomenon in the Heaven benchmark:
> it starts at [email protected]@39°C,
> then goes down a few seconds later to [email protected]@42°C... etc. And the 380W PL is never reached during this.
> 
> So, PL not reached, temperature at 42°C, and I throttle??


Yes. That is the joy of GPU Boost. As temps rise, clocks fall. 42°C seems to be the starting point where it will begin knocking off clocks 15MHz at a time, with another drop around 47°C. I've even seen mine drop 15MHz when temps hover in the 50-51°C range.
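The bin behaviour described above can be written down as a toy model. The 42/47/51°C thresholds and the 15 MHz bin size are taken from the observations in this thread; they vary card to card and are not official NVIDIA numbers:

```python
BIN_MHZ = 15                      # one GPU Boost clock bin
TEMP_THRESHOLDS_C = [42, 47, 51]  # approximate drop points observed in this thread

def effective_clock(cold_clock_mhz, temp_c):
    """Clock after temperature-based bin drops, per the toy model above."""
    bins_dropped = sum(1 for t in TEMP_THRESHOLDS_C if temp_c >= t)
    return cold_clock_mhz - bins_dropped * BIN_MHZ

# A card boosting to 2115 MHz cold would settle at 2070 MHz once warm:
print(effective_clock(2115, 39))  # → 2115 (below every threshold)
print(effective_clock(2115, 48))  # → 2085 (two bins gone)
print(effective_clock(2115, 64))  # → 2070 (all three bins gone)
```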


----------



## GRABibus

pewpewlazer said:


> Yes. That is the joy of GPU Boost. As temps rise, clocks fall. 42°C seems to be the starting point where it will begin knocking off clocks 15MHz at a time, with another drop around 47°C. I've even seen mine drop 15MHz when temps hover in the 50-51°C range.


Ok, thanks.
So why is Maivorbim not throttling, with a stable frequency of 2070MHz@63°C?


----------



## GRABibus

pewpewlazer said:


> Just for another data point, I was pulling 120-144 fps (144 cap in game) in BF5 (DX12) at 1440p ultra DXR off as well. Pretty much any drop in fps seemed to be a CPU bottleneck from what I saw. My grandpa Haswell-E chip is starting to show its age.


I also have an HW-E, [email protected]@1.20V.

DXR disabled, 2560x1440, 144Hz, all ultra, G-Sync enabled and DX12: I get between 120fps and 190fps depending on maps and scenes (in multiplayer).

Our HW-E chips are old, yes, but I love mine. A golden sample.
Stable at [email protected] in Realbench.
I also have [email protected]@1.25Vcache.


----------



## GRABibus

maivorbim said:


> I'm not stable @ 2070mhz below 1.093v, that's why I have to run it so high. I'm getting a constant 64C at that voltage, with no power limits being reached, so my frequency is a constant 2070, maybe with a few dips to 2055 mhz, but not lower than that.
> 
> I am not experiencing the frequency rollercoaster you are describing. That sounds like what would happen when you are hitting power limits, as the card will most likely downclock to around 2000 mhz, bounce to 2040-2055 mhz and go back and forth.
> 
> I'm not benching at 4k, only at 1440p, that's why I'm not hitting power limits.
> 
> If you can be stable at lower voltages, there's no reason to push it to 1.093V. GRABibus was asking for tips on benchmarking, where the 1.093v might be useful for hitting higher frequencies if you can keep temps in control.
> 
> If you're not using RTX, try using DX11. I seem to get about +10fps compared to DX12. BFV is hard to benchmark on, as depending on the amount of explosions / actors on screen your fps will vary significantly. Best way I found is to load the practice range and see how many fps you get when spawning in, as that provides the same environment each time.


Can you please explain how you can have a stable frequency at 63°C without throttling?
As you explained, and as I saw also, throttling starts at roughly 42°C with a 15MHz drop in frequency, then another 15MHz less at roughly 48°C... etc.

Maybe you mean you throttled from a certain frequency/temperature until it stabilized at 2070MHz/63°C?
That would mean you have maybe 2115MHz at 40°C before the game or benchmark, and then, after some minutes of throttling, you stabilize at 2070MHz/63°C?


----------



## kithylin

GRABibus said:


> Can you please explain how you can have a stable frequency at 63°C without throttling?
> As you explained, and as I saw also, throttling starts at roughly 42°C with a 15MHz drop in frequency, then another 15MHz less at roughly 48°C... etc.
> 
> Maybe you mean you throttled from a certain frequency/temperature until it stabilized at 2070MHz/63°C?
> That would mean you have maybe 2115MHz at 40°C before the game or benchmark, and then, after some minutes of throttling, you stabilize at 2070MHz/63°C?


Silicon lottery. Different cards will run different clocks at different temperatures. Just because their card can do it does not mean all cards can do it. Some can some can't.


----------



## J7SC

kithylin said:


> Silicon lottery. Different cards will run different clocks at different temperatures. Just because their card can do it does not mean all cards can do it. Some can some can't.


 
^That... 'In the old days', there were ASIC quality values embedded in the GPU-Z info, but too much RMA abuse got that scuttled. Still, I think the UNMODIFIED MSI AB voltage curve is probably a good proxy for ASIC quality / the 'silicon lottery'.


----------



## GRABibus

kithylin said:


> Silicon lottery. Different cards will run different clocks at different temperatures. Just because their card can do it does not mean all cards can do it. Some can some can't.


Thanks.
I am not talking about overclocking potential here. Yes, you are right, the silicon lottery has a big influence on the frequency/voltage pairing.

I am talking about throttling versus temperature.

I just want to understand his "2070MHz/64°C" stable frequency.
As throttling starts at 41-42°C (lowering the frequency by 15MHz), I just want to understand how he reaches a stable 2070MHz@64°C@1.093V.

I asked him for feedback, his V/F curve, etc...

Because, let's suppose he sets 2070MHz in MSI AB at 1.093V before gaming. Let's suppose also that he has 35°C on the GPU before gaming (at idle).
After minutes of gaming, if he reaches 64°C on the GPU, the frequency would theoretically have decreased... Then he should be far below 2070MHz, due to throttling starting at 42°C.

Let's wait for his feedback.


----------



## Asmodian

GRABibus said:


> As throttling starts at 41-42°C (lowering the frequency by 15MHz), I just want to understand how he reaches a stable 2070MHz@64°C@1.093V.


"Throttling" (frequency+voltage+temperature bins) starts below that; it is just that most don't get their temps that low.

Just pick an overclock that hits 2070 MHz @ 64°C, with the card at ~64°C the clocks are stable. What is the confusion?


----------



## GRABibus

Asmodian said:


> "Throttling" (frequency+voltage+temperature bins) starts below that, it is just that most don't get their temps that low.
> 
> Just pick an overclock that hits 2070 MHz @ 64°C, with the card at ~64°C the clocks are stable. What is the confusion?


What do you mean by "pick an overclock that hits 2070 MHz @ 64°C" ?


----------



## Sheyster

GRABibus said:


> Thanks.
> I am not talking about overclocking potential here. Yes, you are right, the silicon lottery has a big influence on the frequency/voltage pairing.
> 
> I am talking about throttling versus temperature.
> 
> I just want to understand his "2070MHz/64°C" stable frequency.
> As throttling starts at 41-42°C (lowering the frequency by 15MHz), I just want to understand how he reaches a stable 2070MHz@64°C@1.093V.
> 
> I asked him for feedback, his V/F curve, etc...
> 
> Because, let's suppose he sets 2070MHz in MSI AB at 1.093V before gaming. Let's suppose also that he has 35°C on the GPU before gaming (at idle).
> After minutes of gaming, if he reaches 64°C on the GPU, the frequency would theoretically have decreased... Then he should be far below 2070MHz, due to throttling starting at 42°C.
> 
> Let's wait for his feedback.


No need to wait. People with RTX cards tend to compensate for GPU temperature on the curve. It's simple, really: if you know your card is going to run at ~XX°C at 2100 MHz, simply add a few bins to your +core offset on the curve and lock it with Ctrl+L in AB. The card will drop bins accordingly as temps rise and eventually settle in nicely based on the bins you've compensated for up front. Of course, extreme GPU usage and ambient temps can still affect the card's temp. This is not an exact science, but hopefully you get my point.
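The compensate-up-front trick is just arithmetic over those bins. A hypothetical helper, reusing the bin size and the approximate temperature thresholds reported in this thread:

```python
BIN_MHZ = 15                      # one GPU Boost clock bin
TEMP_THRESHOLDS_C = [42, 47, 51]  # approximate drop points from this thread

def bins_to_add(settled_temp_c):
    """How many +15 MHz bins to dial in up front so the card lands on
    the clock you actually want once it reaches its settled temperature."""
    return sum(1 for t in TEMP_THRESHOLDS_C if settled_temp_c >= t)

# To settle at 2070 MHz at ~64°C, set the cold curve point three bins higher:
target_mhz = 2070
cold_setting = target_mhz + bins_to_add(64) * BIN_MHZ
print(cold_setting)  # → 2115
```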


----------



## Asmodian

GRABibus said:


> What do you mean by "pick an overclock that hits 2070 MHz @ 64°C" ?


I think Sheyster already explained it pretty well but the overclock would be something like 2100 or 2115 MHz at 30°C, dropping to 2070 MHz at 64°C.


----------



## maivorbim

GRABibus said:


> As a first shot, I tested the overclock from the picture: Galax 380W BIOS.
> +170MHz on core
> +1300MHz on memory.
> 2175MHz @ 1.093V.
> 
> I passed several sessions of Heaven and Time Spy without any issue,
> also 30 minutes of COD BO3.
> 
> But in COD BO3, the game starts at:
> 2160MHz (as the power limit is nearly reached in the menu!)
> 1.093V
> [email protected]°C
> 
> Then I play, and after some minutes I am at:
> 2130MHz
> Still 1.093V
> [email protected]°C.
> 
> So I throttle... Why? The power limit is never reached during the game.
> 
> And you, at 63°C, you stay at 2070MHz?
> 
> I don't understand.


Hi, sorry for the late reply. Your card seems to be performing normally.

You are correct: I also start at 2130MHz@1.093V@40°C but quickly drop to 2070MHz@1.093V. 2070MHz is my stable frequency during long gaming sessions; that's why I quoted it.

I will post my AB curve and more explanations once I get back home tomorrow.


----------



## jura11

GRABibus said:


> Thanks.
> I am not talking about overclocking potential here. Yes, you are right, the silicon lottery has a big influence on the frequency/voltage pairing.
> 
> I am talking about throttling versus temperature.
> 
> I just want to understand his "2070MHz/64°C" stable frequency.
> As throttling starts at 41-42°C (lowering the frequency by 15MHz), I just want to understand how he reaches a stable 2070MHz@64°C@1.093V.
> 
> I asked him for feedback, his V/F curve, etc...
> 
> Because, let's suppose he sets 2070MHz in MSI AB at 1.093V before gaming. Let's suppose also that he has 35°C on the GPU before gaming (at idle).
> After minutes of gaming, if he reaches 64°C on the GPU, the frequency would theoretically have decreased... Then he should be far below 2070MHz, due to throttling starting at 42°C.
> 
> Let's wait for his feedback.


Hi there,

I know from my experience with the Asus RTX 2080 Ti Strix that clocks will drop by 15MHz if temperatures go above 38-42°C. It's literally temperamental: once, clocks dropped from 2145MHz to 2130MHz at 38°C, while the next time clocks dropped at 42-44°C; clocks go back up to the original 2145MHz if temperatures are below 35-37°C, then drop again once temperatures are in the 40s.

On Pascal GPUs we knew NVIDIA GPU Boost 3.0 pretty well, because every 10 or so degrees clocks would drop by 13MHz; on Turing I'm literally only guessing at what temperature I will see the frequency drop.

I would suggest trying this: leave the OC as it is, set the power and temperature limits to max, don't OC the VRAM or core, and check what clocks you get; compare those clocks to a review of your card. Try something other reviewers test, like Unigine Valley (which is a pointless test for an OC'd GPU, because I can pass it at 2175MHz, and with ambient temperatures now dropping to the low 10s during the night I hope I can push my GPU to 2190MHz).

My stock Strix BIOS, I think, would boost to 2025MHz with the power limit and temperature limit set at maximum; with the Matrix BIOS my boost is 2070MHz.

On Pascal I used a manual V/F curve, which helped me squeeze out a bit more MHz, and I ran my GTX 1080 Ti at 2113MHz at 1.07V. With Turing I use an offset, which works for me. I tried the V/F curve, and I would probably gain an extra 15MHz at 1.093V, but for that I would need a higher power limit, because the 360W Matrix BIOS is just not enough. As a second BIOS I use the XOC, which is good, but I miss the manual V/F curve there and wish the VRAM would downclock to idle speeds.

What GPU, or rather which RTX 2080 Ti, do you have?

I have tested a few of them in my build and can compare them.

Hope this helps.

Thanks, Jura


----------



## GRABibus

maivorbim said:


> Hi, sorry for the late reply. Your card seems to be performing normally.
> 
> You are correct: I also start at 2130MHz@1.093V@40°C but quickly drop to 2070MHz@1.093V. 2070MHz is my stable frequency during long gaming sessions; that's why I quoted it.
> 
> I will post my AB curve and more explanations once I get back home tomorrow.


Thanks.
This is what I supposed.

I post my curve again:


----------



## GRABibus

jura11 said:


> Hi there,
> 
> I know from my experience with the Asus RTX 2080 Ti Strix that clocks will drop by 15MHz if temperatures go above 38-42°C. It's literally temperamental: once, clocks dropped from 2145MHz to 2130MHz at 38°C, while the next time clocks dropped at 42-44°C; clocks go back up to the original 2145MHz if temperatures are below 35-37°C, then drop again once temperatures are in the 40s.
> 
> On Pascal GPUs we knew NVIDIA GPU Boost 3.0 pretty well, because every 10 or so degrees clocks would drop by 13MHz; on Turing I'm literally only guessing at what temperature I will see the frequency drop.
> 
> I would suggest trying this: leave the OC as it is, set the power and temperature limits to max, don't OC the VRAM or core, and check what clocks you get; compare those clocks to a review of your card. Try something other reviewers test, like Unigine Valley (which is a pointless test for an OC'd GPU, because I can pass it at 2175MHz, and with ambient temperatures now dropping to the low 10s during the night I hope I can push my GPU to 2190MHz).
> 
> My stock Strix BIOS, I think, would boost to 2025MHz with the power limit and temperature limit set at maximum; with the Matrix BIOS my boost is 2070MHz.
> 
> On Pascal I used a manual V/F curve, which helped me squeeze out a bit more MHz, and I ran my GTX 1080 Ti at 2113MHz at 1.07V. With Turing I use an offset, which works for me. I tried the V/F curve, and I would probably gain an extra 15MHz at 1.093V, but for that I would need a higher power limit, because the 360W Matrix BIOS is just not enough. As a second BIOS I use the XOC, which is good, but I miss the manual V/F curve there and wish the VRAM would downclock to idle speeds.
> 
> What GPU, or rather which RTX 2080 Ti, do you have?
> 
> I have tested a few of them in my build and can compare them.
> 
> Hope this helps.
> 
> Thanks, Jura


Thanks for the reply and the advice.
In fact I have already done everything you mention, as I have spent a week now overclocking this card.

I have the MSI 2080 Ti Sea Hawk X flashed with the Galax 380W BIOS.
Formerly, I had flashed it with the ASUS XOC 1000W BIOS for benchmarking, but also because the ASUS XOC BIOS helps me avoid crashes to desktop with DXR (ray tracing) enabled in BFV... Don't ask me why, but with the Galax BIOS, even at stock frequencies, I get a 100% crash to desktop in BFV with DXR enabled.
This is why I am reluctant to use the Galax BIOS, only for this...
Let's wait for COD MW, which will be released in 2 weeks with an open beta. As there will be ray tracing in this game, I will also check whether I crash with the Galax BIOS with RT in it...

At 24°C ambient temperature, with the power target at 126%, the voltage slider at 100% (even if it seems to be useless) and the fan at 100%, I boost to 1950MHz on the core with Vddc=1.05V in the Heaven benchmark.
With this Galax BIOS, the default clock is 1350MHz and the default boost clock is 1620MHz.

I set the curve (see attached picture): stable in Heaven and Time Spy Extreme,
also 30 minutes of COD BO3 stable.
My frequency fluctuates between 2115MHz and 2160MHz depending on the game and room temperature... the throttling effect...

I could also set an overclock with much less throttling, due to the low voltage and thus low power draw:
it stabilizes at 2055MHz in Heaven (no throttling) with Vddc=0.967V!
GPU temp = 45°C
Room temperature = 24°C
If my room temperature had been 28°C, 4°C more than in the above test, my GPU would have risen to 49°C, and the GPU frequency would then have decreased to 2040MHz (throttling, 15MHz less than 2055MHz...).

I think I am gonna install my PC in a freezing container.

This is not a golden sample, but this is a really good card!!!


----------



## J7SC

...just a quick note about the latest (few) 3DMark updates for RTX. I like running Port Royal as it relates to DLSS work etc., and have been doing well with it. However, two 3DMark updates ago, I got error messages about the 'update not found in the expected path'. That was finally solved with the latest 3DMark/Futuremark update (today?), but soon after installing it, I got black-screening and auto-reboots on my 2x 2080 Ti GPU setup, on the stock BIOS and even with no OC. That didn't happen before, and it affected other RTX/DLSS titles and apps as well!

However, after re-installing 3DMark *.6751, the version used in the pic below, everything was working perfectly again in repeated runs, at stock and OC'd. So if you have issues in DLSS/RTX after updating 3DMark, try reverting to version *.6751.

BTW, I have already emailed 3DMark staff to let them know.


----------



## mattxx88

GRABibus said:


> But Maivorbim has stable 2070mhz at 63degrees....
> I am at 50 degrees.
> We have the same bios
> 
> i have the same phenomena in Heaven benchmark :
> it starts at [email protected]@39°C
> Then, goes down some seconds after to [email protected]@42°C...etc....And PL=380W is never reached during this.
> 
> So, PL not reached and temperature at 42°C and I throttle ??


Hi GRABibus, I see you have an MSI Sea Hawk, and my GPU (Gaming X Trio with an EK waterblock) has the same "issue". It seems our MSI GPUs have a hardware PL protection, because with any BIOS I try, my GPU always drops frequency at around a 360W PL (more or less).


----------



## GRABibus

mattxx88 said:


> Hi GRABibus, I see you have an MSI Sea Hawk, and my GPU (Gaming X Trio with an EK waterblock) has the same "issue". It seems our MSI GPUs have a hardware PL protection, because with any BIOS I try, my GPU always drops frequency at around a 360W PL (more or less).


Hello,
Power depends on the graphics application running on your GPU.
Maybe the graphics applications you use never require more than 360W.
Check graphics test 2 of Time Spy Extreme; you will probably see more than 360W in some scenes.
Using the 1000W ASUS XOC BIOS, I could see up to 500W in graphics test 2 of Time Spy Extreme.


----------



## jura11

GRABibus said:


> Thansk for reply and advises.
> In fact I did already everything you mention as I spend overclocking since 1 week now on this card.
> 
> I have the MSI 2080Ti Sea Hawk X flashed with Bios Galax 380W.
> Formerly, I had flashed it with ASUS XOC 1000W Bios for benchmark, but alos as ASUS XOC Bios helps me in getting no crash to desktop with DXR enabled (Ray tracing) in BFV...Don't ask me why, but with Galax Bios, even at stock frequencies, I get 100% crash to Desktop in BFV with DXR enabled.
> This is why I am reluctant to use Galax Bios, only for this...
> Let's wait for COD MW which will be released in 2 weeks with Open beta. As there will be Ray Tracing in this game, I will check also if I crash with Galax Bios with RT in this coming game...
> 
> At 24°C ambient temperature with power target to 126%, voltage slider to 100% (Even if it seems to be useless) and fan at 100%, I boost to 1950MHz on Core with Vddc=1.05V in Heaven Benchmark.
> with this Galax Bios, default clock is 1350MHz and default boost clock is 1620MHz.
> 
> I set the curve (see attached picture) : stable in Heaven and Time Spy extreme.
> Also 30minutes of COD BO3 stable
> My frequency fluctuates betwwen 2115MHz and 2160MHz depending on the game and room temperature...Throttling effect...
> 
> I could also set an overclokc with much more less throttling effect, due to low voltage and then low drawn power :
> Stabilize at 2055MHz in Heaven (No throttling) with Vddc=0.967V !
> GPU temp=45°C
> Room temperature = 24°C
> If my room temperature would have ben at 28°c, then 4°C more than above test, my GPU would have raised to 49°C, then GPU freq would have decreased to 2040MHz (Throttling with 15MHz les than 2055MHz...)
> 
> I think I am gonna install my PC in a freezing container
> 
> This is a not a golden sample, but this is really a good card !!!


Hi there,

Looks like you have a really good sample of the MSI RTX 2080 Ti Sea Hawk X. I have used this Galax 380W BIOS on my Zotac RTX 2080 Ti AMP, and from my experience the Galax 380W BIOS seems to have loose VRAM timings, but that's just my view. I compared a few BIOSes on my Zotac RTX 2080 Ti AMP previously and would still choose the Galax 380W BIOS.

I read your comment on the Guru3D forum regarding ray tracing in BFV, and I can confirm I experienced something similar on my old Zotac RTX 2080 Ti AMP with the Galax 380W BIOS, just not only in BFV. I tried every RT game available: in Metro Exodus I could run at max 2055MHz, in BFV I could run only at 2040MHz, and Tomb Raider was challenging because with anything above stock I would get a TDR. I tried Control too; in that game with the Galax 380W BIOS the best I could achieve was 2025MHz, I think.

On the other hand, with the Matrix BIOS on my Asus RTX 2080 Ti Strix, BFV is stable with no crashes at 2070-2085MHz, Metro Exodus the same at 2070-2085MHz, Tomb Raider 2070-2085MHz, and Control at 2070-2085MHz is easy. I tried my normal 2145MHz OC, but there is no way it would pass these games with such an OC.

Similarly in rendering, where my PC spends more time, I can't OC as high as in games. Usually in non-RT games I can use 2145MHz; in games with RT I need to run my 2070-2085MHz profile; and in Blender Cycles, Octane or iray my OC profile is 2055-2070MHz at max. Anything above that would crash with TDRs and CUDA errors.

I think it is down to the Tensor cores on Turing; power usage is similar on all cards, and on Pascal I can use my best OC (GTX 1080 Ti at 2113MHz, one GTX 1080 at 2100MHz and another GTX 1080 at 2164MHz) with no issues, which is strange.

Yes, in some games I can use a higher OC (2160MHz) too, and in some I use my normal 2145MHz profile, which is OK.

My normal GPU temperatures are 38-42°C. Last night I played Control for a few hours at 2085MHz with no issues; temperatures were 36-37°C at max. A few days back I played Assassin's Creed with no issues on the 2145MHz OC profile, and temperatures were 36°C at max.

How many radiators do you have, and at what speed are you running the fans?

Hope this helps.

Thanks, Jura


----------



## GRABibus

jura11 said:


> Hi there,
> 
> Looks like you have a really good sample of the MSI RTX 2080 Ti Sea Hawk X. I have used this Galax 380W BIOS on my Zotac RTX 2080 Ti AMP, and from my experience the Galax 380W BIOS seems to have loose VRAM timings, but that's just my view. I compared a few BIOSes on my Zotac RTX 2080 Ti AMP previously and would still choose the Galax 380W BIOS.
> 
> I read your comment on the Guru3D forum regarding ray tracing in BFV, and I can confirm I experienced something similar on my old Zotac RTX 2080 Ti AMP with the Galax 380W BIOS, just not only in BFV. I tried every RT game available: in Metro Exodus I could run at max 2055MHz, in BFV I could run only at 2040MHz, and Tomb Raider was challenging because with anything above stock I would get a TDR. I tried Control too; in that game with the Galax 380W BIOS the best I could achieve was 2025MHz, I think.
> 
> On the other hand, with the Matrix BIOS on my Asus RTX 2080 Ti Strix, BFV is stable with no crashes at 2070-2085MHz, Metro Exodus the same at 2070-2085MHz, Tomb Raider 2070-2085MHz, and Control at 2070-2085MHz is easy. I tried my normal 2145MHz OC, but there is no way it would pass these games with such an OC.
> 
> Similarly in rendering, where my PC spends more time, I can't OC as high as in games. Usually in non-RT games I can use 2145MHz; in games with RT I need to run my 2070-2085MHz profile; and in Blender Cycles, Octane or iray my OC profile is 2055-2070MHz at max. Anything above that would crash with TDRs and CUDA errors.
> 
> I think it is down to the Tensor cores on Turing; power usage is similar on all cards, and on Pascal I can use my best OC (GTX 1080 Ti at 2113MHz, one GTX 1080 at 2100MHz and another GTX 1080 at 2164MHz) with no issues, which is strange.
> 
> Yes, in some games I can use a higher OC (2160MHz) too, and in some I use my normal 2145MHz profile, which is OK.
> 
> My normal GPU temperatures are 38-42°C. Last night I played Control for a few hours at 2085MHz with no issues; temperatures were 36-37°C at max. A few days back I played Assassin's Creed with no issues on the 2145MHz OC profile, and temperatures were 36°C at max.
> 
> How many radiators do you have, and at what speed are you running the fans?
> 
> Hope this helps.
> 
> Thanks, Jura


Hi,
So, do you also conclude that DXR requires downclocking the GPU?
I will give it another try tomorrow, but as far as I remember, even at stock frequencies with the Galax BIOS, I crashed to desktop in BFV with DXR enabled.
I have no radiators.
I just use my Sea Hawk with [email protected]%.
What I did was repaste the GPU with Conductonaut; I could decrease GPU temperature by 10 to 15 degrees.


----------



## maivorbim

Here are my Afterburner settings. As you can see, it is set for 2130MHz @ 1.093V, which I hit only for a few seconds in BFV before it downclocks to 2070 [email protected] as temps rise and stabilise at around 64°C. Since your card is doing the same, it is running normally.

The easiest way to see if you are reaching any limits is to go to Afterburner > Settings > Monitoring and enable temp/power/voltage limit monitoring in the RTSS OSD. When you play BFV or a benchmark, the OSD will show when you're hitting a power limit.
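For anyone without the RTSS OSD handy, roughly the same limit monitoring can be done from the command line with `nvidia-smi`. A minimal sketch, assuming a recent NVIDIA driver; the query field names below are an assumption and should be checked against `nvidia-smi --help-query-gpu`:

```python
# Sketch: command-line alternative to the RTSS limit OSD, via nvidia-smi.
# The throttle-reason field names are assumed; verify them on your driver.
import subprocess

FIELDS = [
    "clocks.gr",                                   # current graphics clock
    "temperature.gpu",                             # core temperature
    "power.draw",                                  # board power draw
    "clocks_throttle_reasons.sw_power_cap",        # power-limit throttling?
    "clocks_throttle_reasons.hw_thermal_slowdown", # thermal throttling?
]

def parse_sample(csv_line: str) -> dict:
    """Turn one CSV row from nvidia-smi into a field -> value dict."""
    values = [v.strip() for v in csv_line.split(",")]
    return dict(zip(FIELDS, values))

def query_gpu() -> dict:
    """Query the first GPU (requires an NVIDIA driver to be installed)."""
    out = subprocess.check_output(
        ["nvidia-smi", f"--query-gpu={','.join(FIELDS)}",
         "--format=csv,noheader"], text=True)
    return parse_sample(out.splitlines()[0])

if __name__ == "__main__":
    # Example row in the shape a driver might print (values illustrative only):
    sample = "1935 MHz, 64, 278.50 W, Active, Not Active"
    info = parse_sample(sample)
    if info["clocks_throttle_reasons.sw_power_cap"] == "Active":
        print("Hitting the power limit -> clocks will drop")
```

Run in a loop while benchmarking, this shows the same thing as the OSD: the moment the power-cap flag goes "Active", the clock readout starts stepping down.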


----------



## J7SC

FYI, a quick follow-up to my note above about the latest 3DMark Port Royal black-screening issue: it seems related to the latest version of 3DMark SystemInfo.


----------



## maivorbim

GRABibus said:


> What I did was repaste the GPU with Conductonaut; I could decrease GPU temperature by 10 to 15 degrees.


Did you use conformal coating for the resistors near the chip? Did you have to replace the thermal pads? 

You say you decreased temps by 10-15°C, but from what I have read, people are reporting at most a 1-2°C difference after Conductonaut:

https://www.reddit.com/r/nvidia/comments/bnqs14/cleaning_off_my_2080_hybrid_before_conductonaut/
https://www.youtube.com/watch?v=r_BsEwSM0ys&feature=youtu.be&t=683
I am asking because I am also looking to apply conductonaut to my Asus Strix 2080ti, but I am not sure if it is worth the trouble.


----------



## GRABibus

maivorbim said:


> Did you use conformal coating for the resistors near the chip? Did you have to replace the thermal pads?
> 
> You say you decreased temps by 10-15°C, but from what I have read, people are reporting at most a 1-2°C difference after Conductonaut:
> 
> https://www.reddit.com/r/nvidia/comments/bnqs14/cleaning_off_my_2080_hybrid_before_conductonaut/
> https://www.youtube.com/watch?v=r_BsEwSM0ys&feature=youtu.be&t=683
> 
> I am asking because I am also looking to apply conductonaut to my Asus Strix 2080ti, but I am not sure if it is worth the trouble.


No, I checked the thermal pads and they were OK.
Applying Conductonaut requires patience and real care.
Also, be sure that the cooling plate of the card is not aluminum!
Even copper is not ideal, as the liquid metal will migrate into the copper of my water cooler over time. Let's see in a few months.

Yes, temps decreased by 10 degrees:
I went from 60 degrees to 50 degrees at full overclock, with a 25-degree room temperature.


----------



## GRABibus

maivorbim said:


> Here are my Afterburner settings. As you can see, it is set for 2130 mhz @ 1.093V, which I hit only for a few seconds in BF V, then downclock to 2070 [email protected] as temps rise and stabilise at around 64C. Since your card is doing the same, it is running normally.
> 
> The easiest way to see if you are reaching any limits is to go to Afterburner>settings> monitoring> enable Temp/Power/Voltage limit monitoring in RTSS OSD. When you will play BFV or benchmark, the OSD will show when you're hitting a power limit


Yes, I monitor a lot of things with RTSS.
I also learned that you can monitor GPU power in RTSS by importing a plugin (.dll) from HWMonitor or AIDA64.

https://www.overclock.net/forum/69-nvidia/1732714-screen-wattage.html#post28115444


----------



## kithylin

maivorbim said:


> I am asking because I am also looking to apply conductonaut to my Asus Strix 2080ti, but I am not sure if it is worth the trouble.


If you have a copper cold plate on the bottom of your video card's heatsink, using liquid metal will permanently alter the chemical composition of the copper, and you cannot go back. It will slowly be absorbed by the copper over time, and even if you remove the heatsink and wipe the liquid metal off, it will remain alloyed to the bottom of the copper cold plate forever.


----------



## GRABibus

jura11 said:


> Hi there
> 
> Looks like you have a really good sample of the MSI RTX 2080 Ti Sea Hawk X. This Galax 380W BIOS is the one I used on my Zotac RTX 2080Ti AMP, and from my experience it seems to have loose VRAM timings, but that's just my view; I compared a few BIOSes on my Zotac RTX 2080Ti AMP previously and would still choose the Galax 380W BIOS.
> 
> I read your comment on the Guru3D forum regarding ray tracing in BFV, and I can confirm I experienced something similar on my old Zotac RTX 2080Ti AMP with the Galax 380W BIOS, just not in BFV; I tried every RT game available. In Metro Exodus I could run 2055MHz at most, in BFV only 2040MHz, and Tomb Raider was challenging because anything above stock would give me a TDR. In Control, the best I could achieve with the Galax 380W BIOS was 2025MHz, I think.
> 
> On the other hand, with the Matrix BIOS on my Asus RTX 2080Ti Strix, BFV is stable with no crashes at 2070-2085MHz, Metro Exodus is the same at 2070-2085MHz, and Tomb Raider and Control at 2070-2085MHz are easy. I tried my normal 2145MHz OC, but there's no way it would pass these games at that clock.
> 
> Similarly in rendering, where my PC spends more time, I can't OC as high as in games. Usually in non-RT games I can use 2145MHz, in games with RT I need to run a 2070-2085MHz profile, and in Blender Cycles, Octane or Iray my OC profile tops out at 2055-2070MHz; anything above that would crash with TDR and CUDA errors.
> 
> I think it's down to the Tensor cores on Turing. Power usage is similar on all cards, and on Pascal I can use my best OC (GTX 1080 Ti at 2113MHz, one GTX 1080 at 2100MHz and another GTX 1080 at 2164MHz) with no issues, which is strange.
> 
> Yes, in some games I can use an even higher OC (2160MHz), and in others I use my normal 2145MHz profile, which is OK.
> 
> My normal GPU temperatures are in the 38-42°C range. Last night I played Control for a few hours at 2085MHz with no issues; temperatures were 36-37°C at most. A few days back I played Assassin's Creed with no issues on the 2145MHz OC profile, with temperatures at 36°C max.
> 
> How many radiators do you have, and at what speed are you running the fans?
> 
> Hope this helps
> 
> Thanks, Jura


I will try to find a "ray-tracing-stable" overclock, as the one I mentioned above crashes in BF5 with DXR.
But as I said, I remember that even at the card's stock frequencies with the Galax BIOS, BF5 crashed with ray tracing... weird...
I will recheck and keep you posted on the results.


----------



## jura11

GRABibus said:


> Hi,
> So, do you also conclude that DXR requires downclocking the GPU?
> I will give it another try tomorrow, but as far as I remember, even at stock frequencies with the Galax BIOS, I crashed to desktop in BFV with DXR enabled.
> I have no radiators.
> I just use my Sea Hawk with [email protected]%.
> What I did was repaste the GPU with Conductonaut; I could decrease GPU temperature by 10 to 15 degrees.


Hi there 

Yes, I concluded that a while back: in any game where RT is present you need to downclock the GPU, but strangely in 3DMark Port Royal you don't need to downclock at all.

Try a different TIM, like Thermalright TFX, which seems to outperform Thermal Grizzly Kryonaut by 2-3°C, maybe a bit more.

I wouldn't use Conductonaut on stock coolers; if anything, only on a cooler that is at least nickel plated.

Getting better temperatures is possible, but I would say only with a chiller would you get much better temperatures.

Hope this helps 

Thanks, Jura


----------



## JustinThyme

kithylin said:


> If you have a copper cold plate on the bottom of the heatsink on your video card then using liquid metal will permanently alter the chemical composition of the copper and you can not go back. It will slowly be absorbed by the copper over time and even if you remove the heat sink and wipe the liquid metal off, it will still be alloy'd to the bottom of the copper cold plate forever.


+1 on this. No need for it either. I'm getting around a 10°C delta on my Strix OC cards with Heatkiller blocks and Kryonaut.


----------



## szeged

My Maxwell Titan X is finally heading into retirement. Which 2080 Ti should I get to replace it? I was thinking either an Asus Strix or an EVGA FTW3.

It's going to be on air for a while at first; maybe I'll put it under water later.

Or do I spend more for one of the AIO-cooled cards? I haven't looked into GPU stuff since the end of 2015, so I have no idea what's good anymore.


----------



## kithylin

szeged said:


> My maxwell titan X is finally heading to retirement, which 2080ti should i get to replace it? was thinking either an asus strix or a evga ftw3.
> 
> going to be on air for a while at first, maybe put it under water.
> 
> or do i spend more for one of the AIO cooled cards? havent looked into gpu stuff since the end of 2015 so i have no idea whats good anymore.


Today it's all about temperatures. The colder you can get (and maintain) the card under load, the better it will do. If you can keep a card at 50°C or below with a custom water loop, even a reference card should be as good as any aftermarket one. And cards with a reference PCB have an easier time with water block compatibility, too; it's more difficult to find blocks for cards with a custom-designed PCB. To check whether a card uses a reference or custom PCB, look up its exact model number on this site: https://www.ekwb.com/configurator/


----------



## KCDC

JustinThyme said:


> +1 on this. No need for it either. Im getting around 10C delta on my strix OC cards with heat killer blocks and Kryonaut.



The only thing I used Conductonaut for was an attempt to shunt-mod my 1080 Tis. It worked great for a few weeks, but after learning it could seep over and degrade the solder points, I took it off before any damage was done. That stuff is dangerous for long-term use: gallium is malleable, but it loves to bind with other metals like copper and aluminum, degrading them in the process. Not worth it for bench points unless you're competing professionally.


----------



## GRABibus

KCDC said:


> Only thing I used Conductonaut for was an attempt to shunt mod my 1080TIs. It worked great for a few weeks, but after learning it could seep over and degrade the solder points, I took it off before any damage was made. That stuff is dangerous for long-term use. Gallium is malleable, but loves to bind with other metals like copper and aluminum, degrading it in the process. Not worth it for bench points unless you're competing professionally.


With copper there is migration, but that should be OK.
I used Conductonaut on my former 1080 Ti for a year and a half without any issue, with stable temperatures from the beginning.
If some people have had big problems with Conductonaut after some months, it would be nice to have their feedback.


----------



## szeged

kithylin said:


> Today it's all about temperatures. The colder you can get (and maintain) the card under load the better the cards will do. If you can keep a card at 50c or below with a custom water loop then even a reference card should be just as good as any aftermarket one. And reference cards with a reference PCB have a easier time of water block compatibility too. It's more difficult to find blocks for cards with a custom-designed PCB. If you want to check if a card uses a reference PCB or custom PCB, you can go through picking it's model number exactly in this site: https://www.ekwb.com/configurator/


Thanks.

Unfortunately, right now the budget is about $1400 max; I'm spending a lot of money on another hobby currently.

Also, my D5 pump finally died on me, so that's another $80 I'd have to spend, as well as $140+ on the block to bring the GPU into the loop, lol.


----------



## kithylin

szeged said:


> thanks.
> 
> unfortunately right now the budget is about $1400 max, spending a lot of money on another hobby currently.
> 
> also my d5 pump died on me finally so theres another $80 id have to spend as well as the $140+ on the block to bring the gpu into the loop lol


Yep, no problem. I'll elaborate a bit further for you. With the 1000 and 2000 series (and, from what I understand, modern AMD cards too, though I know less about those and might be wrong), the cards reduce their boost speed depending on how hot the core runs. The Maxwell days, where you could set a card to, say, 1200MHz core and it would maintain 1200MHz no matter how hot it got (even to its own detriment), are over. Today you can set a card to 2100MHz and apply it, but if it runs hot (around 80°C) it will drop itself to 1900-2000MHz from the heat, no matter what you set, what BIOS you use, or what voltage you give it. If the card runs hot, the clocks go down. I believe it starts at about 40°C core temp and drops the clock by about 15MHz every 10°C; I'm not sure of the exact values, but that's what other people have reported. All versions of these cards do this, aftermarket or reference.

There is also the power limit, and the cards reduce their clocks based on that too: if they hit the power limit, clocks go down. There's currently no way to modify the BIOS the way we could with Maxwell and older cards; we can't flash manually edited custom BIOSes anymore. Now the only option is to cross-flash a BIOS from another manufacturer that has a higher default power limit.

The difference between aftermarket and stock/reference cards is the cooler design. If you're not going full custom water loop, an AIO hybrid card would probably be your next best option. After that, if you're going air, you probably want an aftermarket card with the biggest heatsink and the most fans. Bigger cooler on the card = card runs colder = better clocks.


----------



## J7SC

kithylin said:


> Today it's all about temperatures. *The colder you can get (and maintain)* the card under load, the better it will do. If you can keep a card at 50°C or below with a custom water loop, even a reference card should be as good as any aftermarket one. And cards with a reference PCB have an easier time with water block compatibility, too; it's more difficult to find blocks for cards with a custom-designed PCB. To check whether a card uses a reference or custom PCB, look up its exact model number on this site: https://www.ekwb.com/configurator/


 
Your 'maintain' comment is important to highlight, given that NVIDIA's boost algorithm will down-clock throughout a stressful task, not just at the beginning, depending on temps (and power limits). I have posted a pic like this before, which underscores the point: Port Royal temps for dual Aorus 2080 Ti Xtreme WB cards on a big loop (total GPU loop = 1080/60), stock BIOS and factory stock waterblocks. Note how the temps jump up from an essentially 23°C ambient when the test starts, but then hold at around 32-34°C.


----------



## JustinThyme

szeged said:


> thanks.
> 
> unfortunately right now the budget is about $1400 max, spending a lot of money on another hobby currently.
> 
> also my d5 pump died on me finally so theres another $80 id have to spend as well as the $140+ on the block to bring the gpu into the loop lol


You should have some kids; they will erase the phrase "expendable income" from your vocabulary! LOL. One of mine is a junior in college and another is a high-school senior... Good thing my wife makes a great salary! LMAO

Too many hobbies myself, but the PCs take the top spot in the money pit. It used to be boats, but I got tired of that. My other big one is RC cars; I race competitively. The cheapest car I have has a bit over $2K in it, and the highest has around $5K. Nothing cooler than watching a 1/5-scale car whizzing past most cars on the road. I clocked it at 108MPH and had to follow it in another car to do that; otherwise you lose sight of it by the time it hits the 80MPH mark.


----------



## GRABibus

kithylin said:


> If you have a copper cold plate on the bottom of the heatsink on your video card then using liquid metal will permanently alter the chemical composition of the copper and you can not go back. It will slowly be absorbed by the copper over time and even if you remove the heat sink and wipe the liquid metal off, it will still be alloy'd to the bottom of the copper cold plate forever.


So what do you advise?
Should I take it apart and repaste with Noctua or Kryonaut?

I applied the Conductonaut only 10 days ago.


----------



## kithylin

GRABibus said:


> So what do you advise ?
> I unmount and repaste with noctua or kryonaut ?
> 
> I did apply conductonaut only 10 days ago.


You've already put Conductonaut on there, so there's really no going back, even though it's only been a few days. At this point you may as well go with it, I guess. If it were my card, though, I would worry about it dripping or leaking onto other components.


----------



## GRABibus

kithylin said:


> You've already put conductonaut on there.. there's kind of no going back even though it's just been a few days. At this point you may as well go with it I guess. I would worry about it dripping/leaking on to other components though if it was my card.


So I keep the Conductonaut?
As I change GPU roughly every generation, this should be OK?
It is the third component I have put Conductonaut on, and I haven't had any issues so far (a 5930K with an EVGA CLC 280 in February 2019, and a 1080 Ti around the same time).
If there is no leak, the migration shouldn't damage the GPU or the copper block over a two-year period.


----------



## kithylin

GRABibus said:


> So I keep conductonaut ?
> As I change GPU roughly at each generation, this should be ok ?
> It is the third component on which I put conductonaut and I didn’t get any issues until now (5930k with EVGA clc 280 in February 2019 and 1080ti same period).
> If there is no leak , the migration shouldn’t damage the GPU and copper block on 2 years period.


If you're rotating in new cards every 1-2 years, you probably won't have issues. Liquid metal on cards is more of an issue for folks like me who only get a new card once every 5 years or so. I have a nickel-plated water block, so I'll probably go liquid metal if I re-do my loop for my GPU.


----------



## GRABibus

kithylin said:


> I believe it starts at about 40c core temps and it reduces it's clocks by about -15 Mhz every 10c. I'm not sure of the exact values but that's what other people have reported.


From my side, I don't really see this rule.

For example, in BFV:

If I set [email protected] in the V/F curve, at 22°C ambient,
the game starts at 38°C with the clock already down to 2160MHz.
Then, when the temperature is roughly 41-42°C, the clock goes down to 2145MHz.
At 47-48°C, the clock goes down to 2130MHz.

And I am sure that at 50-52°C it would go down to 2115MHz.

So you see: 2160MHz - 2130MHz = 30MHz = 2 x 15MHz while the GPU temperature goes from 38°C to 48°C... 10°C...

So for me, it is more like -15MHz for each additional 5°C on the GPU.

Maivorbim wrote: "it is set for 2130 mhz @ 1.093V, which I hit only for a few seconds in BF V, then downclock to 2070 [email protected] as temps rise and stabilise at around 64C".
I don't know what temperature his GPU is at when he starts playing BFV.

I don't know for other people...
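The stepping described above can be sketched as a small model. To be clear, this is only a fit to the temperatures reported in this one post: the bin thresholds, the 15MHz bin size, and the 2175MHz base clock (the set clock is redacted above; 2175MHz is inferred from the first observed step) are all assumptions, not anything from NVIDIA documentation:

```python
# Toy model of the temperature-based boost stepping reported above.
# Assumptions (not official): 15 MHz lost per bin; bins drop at roughly the
# observed temperatures; 2175 MHz base inferred from 2160 MHz showing at 38C.
BASE_CLOCK_MHZ = 2175
BIN_SIZE_MHZ = 15
# Approximate temperatures (C) at which a bin is lost, read off the post.
THRESHOLDS_C = [38, 41, 47, 50]

def boosted_clock(temp_c: float) -> int:
    """Estimated effective core clock at a given GPU core temperature."""
    bins_lost = sum(1 for t in THRESHOLDS_C if temp_c >= t)
    return BASE_CLOCK_MHZ - bins_lost * BIN_SIZE_MHZ

if __name__ == "__main__":
    for temp in (36, 38, 42, 48, 52):
        print(f"{temp}C -> {boosted_clock(temp)} MHz")
```

Run against the reported temperatures, this reproduces the 2160/2145/2130/2115MHz steps; whether the real step width is 5°C or 10°C is exactly what is being debated here.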


----------



## Sheyster

GRABibus said:


> From my side, I don't really see this rule.
> 
> For example, in BFV:
> 
> If I set [email protected] in the V/F curve, at 22°C ambient,
> the game starts at 38°C with the clock already down to 2160MHz.
> Then, when the temperature is roughly 41-42°C, the clock goes down to 2145MHz.
> At 47-48°C, the clock goes down to 2130MHz.
> 
> And I am sure that at 50-52°C it would go down to 2115MHz.
> 
> So you see: 2160MHz - 2130MHz = 30MHz = 2 x 15MHz while the GPU temperature goes from 38°C to 48°C... 10°C...
> 
> So for me, it is more like -15MHz for each additional 5°C on the GPU.
> 
> Maivorbim wrote: "it is set for 2130 mhz @ 1.093V, which I hit only for a few seconds in BF V, then downclock to 2070 [email protected] as temps rise and stabilise at around 64C".
> I don't know what temperature his GPU is at when he starts playing BFV.
> 
> I don't know for other people...


This is all by design; NVIDIA wants the card to drop bins as it heats up. You sound almost OCD about this behavior! Just run the thing at 2070 for gaming and call it a day. :thumb:


----------



## GRABibus

Sheyster said:


> This is all by design, nVidia wants the card to drop bins as it heats up. You sound like you're almost OCD about this behavior! Just run the thing at 2070 for gaming and call it a day. :thumb:


What is OCD?
I was just commenting on the rule of -15MHz per 10-degree increase, which seems not to be true.


----------



## mouacyk

GRABibus said:


> What is OCD ?
> I was just commenting on the rule of -15MHz per 10-degree increase, which seems not to be true.


overclocking disease


----------



## Sheyster

GRABibus said:


> What is OCD ?


OCD:
Short for obsessive-compulsive disorder.
"having a tendency towards excessive orderliness, perfectionism, and great attention to detail."


----------



## GRABibus

Sheyster said:


> OCD:
> Short for obsessive-compulsive disorder.
> "having a tendency towards excessive orderliness, perfectionism, and great attention to detail."


Lol


----------



## ducky083

GRABibus said:


> Hello,
> Power depends on the graphics application applied to your GPU.
> Maybe the graphics applications you use never require more than 360W.
> Check in graphics test 2 of Time Spy Extreme; you will probably see more than 360W in some scenes.
> Using the 1000W Asus XOC BIOS, I could see up to 500W in graphics test 2 of Time Spy Extreme.



Hi,

Can we put the Asus XOC BIOS on a 2080 Ti reference PCB without any problems, even if my screen is connected via DisplayPort?

How is it possible with this XOC BIOS to keep the voltage at 1.093V all the time?

Thanks


----------



## kithylin

ducky083 said:


> Hi,
> 
> 
> 
> Can we put Asus XOC bios on 2080ti reference pcb without any problem ? even if my screen is connected by Display Port ?
> 
> 
> How it's possible with this XOC bios to let the voltage at 1.093v all time ?
> 
> 
> 
> Thanks


From what others in this thread have commented, the XOC BIOS causes extremely excessive power draw from these cards, as much as double what any "normal" 2080 Ti BIOS would draw. It may not be a good idea to run it long-term; I believe the XOC BIOS is meant for competitive overclocking, liquid nitrogen, etc.


----------



## GRABibus

ducky083 said:


> Hi,
> 
> Can we put the Asus XOC BIOS on a 2080 Ti reference PCB without any problems, even if my screen is connected via DisplayPort?
> 
> How is it possible with this XOC BIOS to keep the voltage at 1.093V all the time?
> 
> Thanks


I tested it on my MSI Sea Hawk.
It is perfect for benching, especially Time Spy graphics test 2, where it helps a lot (because this test draws a lot of power and all cards reach their power limit there).
At least a good AIO water cooler is required.

The problem with this BIOS: voltage is limited to 1.093V, and it is impossible to edit the V/F curve to tweak the overclock.

For gaming, as long as the power limit is not exceeded, this BIOS is useless.
Better to use the Galax 380W for gaming, tweaking the V/F curve to optimise the overclock.


----------



## edychi

A shop where I live is promoting this model; I'd appreciate your opinion, thank you:

Gigabyte NVIDIA GeForce RTX 2080 Ti WindForce 11GB GDDR6 - GV-N208TWF3-11GC

Today I am using an EVGA GTX 1080 Ti FTW3.


----------



## J7SC

A quick note for "non-A" chip card owners (of which I'm not one, and my knowledge of those specific cards is limited). It seems that for RTX 2070 and RTX 2080 'non-A' GPUs there is now, after all, a BIOS update with higher power limits. The updated BIOSes were quietly released by NVIDIA to vendor partners, and some of them have shown up in the TechPowerUp BIOS database; even cross-flashing might be possible as long as the hardware ID matches.

Now, this is the 2080 Ti thread, and there was no mention of this for the 2080 Ti 'non-A', but given that the new BIOS effort was made for the RTX 2070 and RTX 2080 'non-A', there is at least a possibility it might also happen for the 2080 Ti 'non-A'; it is worth checking the relevant sections of the TechPowerUp BIOS database. I got the info from Igor's Lab (YouTube video below); it is in German, but the above is the gist of it.


----------



## maivorbim

For anyone trying to apply liquid metal to an Asus RTX 2080 Ti Strix: don't bother! After applying Conductonaut to both the die and the cold plate, I saw increased temperatures compared to stock (from 64°C to 69°C under load). The screws were all tightened in an X pattern. Most likely the cooler plate does not make perfect contact with the die, and the liquid metal cannot bridge the gap, being less viscous than thermal paste. I have cleaned off the Conductonaut and applied Kryonaut, and I am now back to identical stock temperatures.

In conclusion, the stock thermal paste used by Asus is about as good as it gets (within a margin of 1-2°C). Don't bother opening the card and repasting.


----------



## AvengedRobix

maivorbim said:


> For anyone trying to apply liquid metal to an Asus RTX 2080 TI Strix card, don't bother! After applying Conductonaut to both the die and die plate, I saw increased temperatures compared to stock (from 64C to 69C during load). Screws were all tightened in a X pattern. Most likely, the cooler plate does not make perfect contact with the die and the liquid metal cannot bridge the gap, being less viscous than thermal paste. I have cleaned the conductonaut and applied Kryonaut. I am now back to identical stock temperatures.
> 
> 
> 
> In conclusion, the stock thermal paste used by Asus is as good as it gets (with a margin of 1-2C). Don't bother opening the card and repasting.


You must have made a mistake applying the LM... I'm 15/10 degrees lower with LM.

Sent from my ONEPLUS A6013 using Tapatalk


----------



## max883

What about your warranty? Doesn't removing the sticker from the screw void it?


----------



## kithylin

max883 said:


> What about your warranty? Doesn't removing the sticker from the screw void it?


EVGA and MSI don't enforce that in the USA. They can't legally enforce it anymore; no one in this country can. "Warranty void if removed" stickers aren't valid in the USA anymore.


----------



## GRABibus

maivorbim said:


> For anyone trying to apply liquid metal to an Asus RTX 2080 TI Strix card, don't bother! After applying Conductonaut to both the die and die plate, I saw increased temperatures compared to stock (from 64C to 69C during load). Screws were all tightened in a X pattern. Most likely, the cooler plate does not make perfect contact with the die and the liquid metal cannot bridge the gap, being less viscous than thermal paste. I have cleaned the conductonaut and applied Kryonaut. I am now back to identical stock temperatures.
> 
> In conclusion, the stock thermal paste used by Asus is as good as it gets (with a margin of 1-2C). Don't bother opening the card and repasting.


From my side, on the MSI Sea Hawk, Conductonaut improved things a lot.

At 25 degrees ambient:
Stock paste => 60 degrees under the Heaven benchmark
Noctua NT-H1 => 53 degrees under the Heaven benchmark
Conductonaut => 48 degrees under the Heaven benchmark


----------



## maivorbim

AvengedRobix said:


> You must have made a mistake applying the LM... I'm 15/10 degrees lower with LM.
> 
> Sent from my ONEPLUS A6013 using Tapatalk



What 2080 ti model do you have?


----------



## maivorbim

GRABibus said:


> From my side, on the MSI Sea Hawk, Conductonaut improved things a lot.
> 
> At 25 degrees ambient:
> Stock paste => 60 degrees under the Heaven benchmark
> Noctua NT-H1 => 53 degrees under the Heaven benchmark
> Conductonaut => 48 degrees under the Heaven benchmark


Your post is what convinced me to try liquid metal, and I wish I had gotten the same results as you. However, I am not the only one who has seen no improvement with liquid metal:

https://www.reddit.com/r/nvidia/comments/a6n0bj/anyone_tried_liquid_metal_on_a_2080_ti_yet_my/




https://www.reddit.com/r/nvidia/comments/bnqs14/cleaning_off_my_2080_hybrid_before_conductonaut/
https://www.reddit.com/r/nvidia/comments/9m4j5u/does_repasting_the_fe_2080_ti_lower_temps/
https://www.reddit.com/r/nvidia/comments/9iy73g/anyone_planning_on_trying_liquid_metal_on_turing/

My suspicion is that it is due to poor contact between the cold plate and the die. The Strix has a metal frame around the die, while yours seems not to have one. I'm thinking that because of the metal frame, the cold plate sits higher and has less contact pressure with the die. I don't think it is user error, as I have used liquid metal before and tightened the screws really well.

Strix: https://tpucdn.com/review/asus-geforce-rtx-2080-ti-strix-oc/images/front_small.jpg
Sea Hawk x: https://imgur.com/a/3WDEXZk#U9ARHY9


----------



## GRABibus

OK, thanks for the links.

What we're missing is feedback from people who applied LM to a GPU some years ago.


----------



## majestynl

GRABibus said:


> From my side, I don't really see this rule.
> 
> For example, in BFV:
> 
> If I set [email protected] in the V/F curve, at 22°C ambient,
> the game starts at 38°C with the clock already down to 2160MHz.
> Then, when the temperature is roughly 41-42°C, the clock goes down to 2145MHz.
> At 47-48°C, the clock goes down to 2130MHz.
> 
> And I am sure that at 50-52°C it would go down to 2115MHz.
> 
> So you see: 2160MHz - 2130MHz = 30MHz = 2 x 15MHz while the GPU temperature goes from 38°C to 48°C... 10°C...
> 
> So for me, it is more like -15MHz for each additional 5°C on the GPU.
> 
> Maivorbim wrote: "it is set for 2130 mhz @ 1.093V, which I hit only for a few seconds in BF V, then downclock to 2070 [email protected] as temps rise and stabilise at around 64C".
> I don't know what temperature his GPU is at when he starts playing BFV.
> 
> I don't know for other people...


I also didn't understand how he could step down from 2130 to 2070MHz!
Yes, it's in steps of 15MHz for sure; for me mostly at 37°C and 42°C, so 5 degrees! But sometimes there is a certain compensation where it can hold the frequency, but at a higher voltage.





ducky083 said:


> Hi,
> 
> Can we put Asus XOC bios on 2080ti reference pcb without any problem ? even if my screen is connected by Display Port ?
> How it's possible with this XOC bios to let the voltage at 1.093v all time ?
> 
> Thanks


Yes, you can. In my case it wasn't at 1.093V all the time! It just uses the normal P-states like the rest of the BIOS versions, so that is probably how this BIOS will run on your card!




kithylin said:


> From what others in this thread have commented.. the XOC bios causes extremely excessive power draw from these cards. As much as +2x / double what any other "normal bios" for 2080 Ti's would draw. It may not be a good idea to run this long-term. I believe the XOC bios is more for competitive overclocking / Liquid nitrogen / etc.


Hmm, I can't confirm that with the ASUS XOC on my card. In my test results it used just slightly more power than the other 330-380W BIOS versions! Definitely not double or anything. The Kingpin XOC was drawing double, for sure.
And the ASUS XOC is not that special: there is no power limit, but most of us will be voltage limited, and the ASUS keeps the regular 1.093V limit.

For competitive OC and liquid nitrogen I would definitely use a BIOS with a higher voltage limit! Kingpin or HOF XOC!




max883 said:


> What about your warranty? Removing the sticker from the screw?


I'm in Europe, so we're not covered! But it's just a silly sticker. Use a hairdryer, gently lift the sticker with a utility knife, and stick it somewhere safe. If you need to RMA, just put the silly sticker back.


----------



## ducky083

Thanks for all your answers ;-)


----------



## kithylin

majestynl said:


> Hmm, I can't confirm that with the ASUS XOC on my card. In my test results it used just slightly more power than the other 330-380W BIOS versions! Definitely not double or anything. The Kingpin XOC was drawing double, for sure. And the ASUS XOC is not that special: there is no power limit, but most of us will be voltage limited, and the ASUS keeps the regular 1.093V limit.


I didn't know there was more than one XOC bios for these cards.. my mistake.


----------



## ducky083

Hi, 



Is it possible to bypass the 1.093V limit on the reference PCB? Is there a voltage mod, like soldering in a variable resistor?


Or a way to bypass the frequency throttling tied to GPU temperature?


Is it still not possible to modify the original BIOS, like with the Maxwell BIOS Tweaker (to remove throttling, the power limit, etc.)?


Thanks !


----------



## majestynl

kithylin said:


> I didn't know there was more than one XOC bios for these cards.. my mistake.


NP!




ducky083 said:


> Hi,
> 
> 
> 
> Is it possible to bypass the 1.093V limit on the reference PCB? Is there a voltage mod, like soldering in a variable resistor?
> 
> 
> Or a way to bypass the frequency throttling tied to GPU temperature?
> 
> 
> Is it still not possible to modify the original BIOS, like with the Maxwell BIOS Tweaker (to remove throttling, the power limit, etc.)?
> 
> 
> Thanks!


More voltage + more mods: Kingpin XOC bios + their special tool 

check out:
https://xdevs.com/guide/2080ti_kpe/#cbios


----------



## ducky083

With the 380W Galax BIOS, is it possible to use both fan connectors, or just one?


----------



## ducky083

majestynl said:


> NP!
> 
> 
> 
> 
> More voltage + more mods: Kingpin XOC bios + their special tool
> 
> check out:
> https://xdevs.com/guide/2080ti_kpe/#cbios



Is that possible on the reference PCB? Or is it better to shunt the three resistors on the card to remove the power limit?


----------



## joyzao

An upcoming RTX 2080 Ti Windforce: Non-A | 3 Fan | 2 Slot | 280mm | RGB | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN 4719331303815 | PN GV-N208TWF3-11GC

I was wondering, would I have a problem flashing an A-chip BIOS onto a non-A card?

Which BIOS would be recommended to get a better overclock?


----------



## kithylin

ducky083 said:


> Or to bypass frequency throttling with the gpu temperature ?


No. Not possible.



ducky083 said:


> Still not possibility to modify original bios like maxwell bios tweaker ? ( to remove throttling, power limit...)


Also not possible. Nope.


----------



## gfunkernaught

Hello all. I've been lurking this forum for quite some time; glad to have finally joined! My water-cooled 2080 Ti has driven me nuts for the past few months, trying to figure out how GPU Boost works with temps, throttling, power limits, and all that good stuff. I've tried the KFA2/Galax 380W BIOS: crashes. The XOC BIOS works great for benchmarks; I hit [email protected], max temp was 28°C, but that was with my portable A/C blowing directly into the case for about an hour. Any normal game crashed. So I ended up going back to the stock BIOS that I saved from the card when I bought it. The default clock, after setting the power and temp sliders in AB all the way to the right, was 1999MHz, while the KFA2 BIOS defaulted to 1950MHz. Now the card will hit 2100MHz as long as I keep it below 39°C, which is next to impossible; then it comes down to 2085MHz, and in games like Control the power limit is constantly being hit and I've seen it go as low as 2055MHz. So I think I lost the lottery big time. A hard pill to swallow considering I paid $1250 for this card. It seems like the norm is water-cooled cards hitting 2100MHz+ with ease at low voltages. If anyone has insight from a similar experience, I'm all ears.


----------



## AvengedRobix

Satisfied =)


----------



## joyzao

I have this card https://www.newegg.com/gigabyte-geforce-rtx-2080-ti-gv-n208twf3-11gc/p/N82E16814932079

What is the best BIOS option to get the maximum out of it? Would the Palit be the best?


----------



## Xeq54

GRABibus said:


> Ok. Thanks for the links.
> 
> What we miss is the feedback some years after of people who applied LM on GPU.


I've been running a GTX 1080 with a waterblock and Conductonaut for almost 2 years. I did not use any additional protection (coating the area around the die, etc.) and there were no issues. Temperatures were about 5 degrees better compared to Kryonaut.

When I sold it, I put back the stock cooler and standard paste (Kryonaut), and the card is working just fine.

There was a small stain on the GPU die that would not come off, but it did not cause any issues; it was nothing structural, there was no bump on the surface or anything, just slight discoloration. Same with the copper block, but it does not affect anything.


----------



## GRABibus

Xeq54 said:


> Been running GTX 1080 with waterblock and conductonaut for almost 2 years. I did not use any additional protection (coating the area around the die etc.) There were no issues. Performance was better by about 5 degrees compared to kryonaut.
> 
> When I sold it, I put back the stock cooler and standard paste (kryonaut) and the card is working just fine.
> 
> There was a small stain on the GPU die that would not come off but it did not cause any issues, it was nothing structural, there was no bump on the surface or anything, just slight discoloration. Same with the copper block, but it does not affect anything.


Thanks for feedback.


----------



## kx11

I was in a good mood


----------



## majestynl

ducky083 said:


> Is that possible on the reference PCB? Or is it better to shunt the three resistors on the card to remove the power limit?


It works on the reference PCB for sure!

After removing the PL you are still voltage limited... I would go for the XOC BIOS. Easier!


----------



## GRABibus

majestynl said:


> It works on the reference PCB for sure!
> 
> After removing the PL you are still voltage limited... I would go for the XOC BIOS. Easier!


The Asus XOC BIOS?
That one is also voltage limited, at 1.093V.


----------



## kicurry

As Nvidia does not report the hotspot temperature, you probably lowered it by more than that.


----------



## gfunkernaught

majestynl said:


> It works on the reference PCB for sure!
> 
> After removing the PL you are still voltage limited... I would go for the XOC BIOS. Easier!


The XOC BIOS had odd behavior on my reference PCB. The voltage would max out at 1125mV, but then fluctuate between 1100mV and 1125mV, occasionally dropping down to 1093mV. My PSU is 750W with 60A on the +12V rail, so power delivery shouldn't be an issue; just keep that in mind when overclocking. The base clock on the XOC BIOS was 2000MHz, so I set the offset to +145, netting 2145MHz, which would eventually drop down to 2130MHz due to thermal throttling around 40°C; but then that voltage fluctuation would occur, and eventually any game I was playing would crash.


----------



## Asmodian

ducky083 said:


> Is that possible on the reference PCB? Or is it better to shunt the three resistors on the card to remove the power limit?


Do not shunt all three! You do not want to shunt the one for the PCIe slot power, it doesn't gain you anything and it might damage your motherboard.
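
To see why this matters, here is the arithmetic behind a shunt mod. The controller measures the voltage drop across each sense shunt and assumes the stock resistance, so stacking a resistor on top makes it under-read the current. A minimal sketch; the 5 mΩ stock value and the equal-value stacked resistor are assumptions for illustration:

```python
def reported_power_w(true_power_w, r_stock_mohm=5.0, r_added_mohm=5.0):
    """Power the card *reports* after a resistor is soldered in parallel
    across a current-sense shunt: the controller still divides the sensed
    voltage drop by the stock resistance, so the reading scales by
    r_parallel / r_stock."""
    r_parallel = (r_stock_mohm * r_added_mohm) / (r_stock_mohm + r_added_mohm)
    return true_power_w * (r_parallel / r_stock_mohm)

# An equal-value resistor on top halves the reading: a true 380W draw
# reports as 190W, effectively doubling the power limit.
print(reported_power_w(380.0))  # 190.0
```

This is also why the PCIe-slot shunt is the dangerous one to touch: the slot is only specified for 75W, and fooling that sensor lets the card pull well beyond that through the motherboard traces.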


----------



## ncpneto

Hello Guys,

This is my first post here, and I'd really appreciate it if you could help me with the following issue:
(I've already read pages and pages of this forum before posting.)

I own a Galax 2080 Ti HOF (300-450W PL) with a third-party water block on it.

With the original BIOS, when the card draws more than 380W, the GPU-Z and HWiNFO64 sensor readings show "power limit exceeded" and the voltages and clocks start to throttle.
Does anyone know why this is happening? The PL is set to 150% (450W), yet at 380W I get the "power limit exceeded" flag.

I also tried the OC Lab BIOS, but I had exactly the same problem.

So now I'm looking for the HOF XOC BIOS, to test whether the problem persists.

Temperature is not an issue (50-55°C max) and I have an HX850i PSU.

Thank you!


----------



## majestynl

GRABibus said:


> Asus XOC bios ?
> Also voltage limited at 1,093V


The ASUS one, yes. But again, it's easier to flash a BIOS than to shunt mod.

The HOF OC Lab and Kingpin BIOSes do have a higher voltage limit!




gfunkernaught said:


> The XOC BIOS had odd behavior on my reference PCB. The voltage would max out at 1125mV, but then fluctuate between 1100mV and 1125mV, occasionally dropping down to 1093mV. My PSU is 750W with 60A on the +12V rail, so power delivery shouldn't be an issue; just keep that in mind when overclocking. The base clock on the XOC BIOS was 2000MHz, so I set the offset to +145, netting 2145MHz, which would eventually drop down to 2130MHz due to thermal throttling around 40°C; but then that voltage fluctuation would occur, and eventually any game I was playing would crash.


Hmm, dunno, I didn't have those issues with the XOC BIOSes...
But it's annoying, because the frequency/voltage curve is not editable with those, and I get the best performance from precise curve adjustment...


----------



## gfunkernaught

Sorry, I forgot to mention that I used the Kingpin XOC, not the Strix one. The Strix XOC limited my video output to 1080p, so I opted for the Kingpin since it didn't break any video output. I honestly think my card ranks about a 3 in quality, 1 being a brick and 10 being a solid-gold silicon lottery winner. I base this on my own research of other owners getting higher clocks at higher temps than I do. 2190MHz was bench-stable at 28°C, but not stable at all for gaming. I used to rely on the V/F curve with the KFA2 BIOS, but had an issue where the thermal throttle wouldn't kick in at a predictable temp. For example, I found that with the KFA2 380W BIOS, [email protected] was stable, and the curve was set to 2100MHz so it would throttle down to 2085MHz. But it wouldn't always throttle, and then it would crash, because 1032mV is not enough for 2100MHz.


----------



## kx11

Some pro OCer is about to get busy


----------



## ducky083

gfunkernaught said:


> The XOC BIOS had odd behavior on my reference PCB. The voltage would max out at 1125mV, but then fluctuate between 1100mV and 1125mV, occasionally dropping down to 1093mV. My PSU is 750W with 60A on the +12V rail, so power delivery shouldn't be an issue; just keep that in mind when overclocking. The base clock on the XOC BIOS was 2000MHz, so I set the offset to +145, netting 2145MHz, which would eventually drop down to 2130MHz due to thermal throttling around 40°C; but then that voltage fluctuation would occur, and eventually any game I was playing would crash.



I don't understand. I can have more than 1.093V with an XOC BIOS on the reference PCB??? Which one?


----------



## ducky083

Asmodian said:


> Do not shunt all three! You do not want to shunt the one for the PCIe slot power, it doesn't gain you anything and it might damage your motherboard.



Thanks for the clarification ;-)


----------



## EarlZ

What's the best and most efficient way to test a memory overclock?


----------



## gfunkernaught

ducky083 said:


> I don't understand. I can have more than 1.093V with an XOC BIOS on the reference PCB??? Which one?


I only have extensive experience with the Kingpin XOC BIOS. The only thing I don't like about those BIOSes is that you cannot change the voltage unless you drop the power limit, which makes it pointless. Plus there is the thermal throttling, which exists on all Turing GPUs regardless of which BIOS you have on them. Very unhappy with this generation; I feel like I wasted my money.


----------



## ahnafakeef

Hello everyone!

How does a 9900K + 2080 Ti setup fare at 4K at maximum settings with RT turned on? What would the min/avg FPS be like in games like Tomb Raider and Metro Exodus?

Thank you!


----------



## GRABibus

ahnafakeef said:


> Hello everyone!
> 
> How does a 9900K+2080Ti setup fare at 4K at maximum settings with RT turned on? What would the min/avg FPS be like on games like Tomb Raider and Metro Exodus?
> 
> Thank you!


https://www.youtube.com/watch?v=50Y90U2dxjA

----------



## ahnafakeef

GRABibus said:


> https://www.youtube.com/watch?v=50Y90U2dxjA


Thank you very much! 

1. Does DLSS off look better than on, even at the cost of performance?
2. Is there a way to prevent frame drops, e.g., when using the lighter to clear the cobwebs, or during action scenes?
3. Does overclocking a 9900K beyond stock provide any benefit at 4K?

Thank you!


----------



## EeK9X

I'm considering upgrading from my EVGA 1080 Ti SC2 Hybrid, which has served me well for almost two years. It runs cool and quiet, and, for those reasons, I'm only interested in GPUs with a Closed Loop Cooler (CLC).

I have no interest in building a full custom loop, and have already discarded the options from MSI (Sea Hawk X - too ugly), Asus (ROG Matrix - not sold on their integrated solution) and the other EVGA alternatives (Kingpin - too expensive - and XC Hybrid - not worth it compared to the FTW3 Ultra and also not available where I live).

That leaves us with the aforementioned Aorus Xtreme Waterforce and EVGA FTW3 Ultra Hybrid. Here's my list of pros and cons between the two:

Aorus Xtreme Waterforce

Pros:

+ Slightly higher boost clock - 1770Mhz (same as the EVGA Kingpin), versus 1755Mhz on the FTW3 Ultra. Allegedly, both have binned chips, but a higher boost clock is usually indicative of better overclocking performance;
+ Bigger radiator - 240mm (with two 120mm RGB fans), versus 120mm (with one regular fan) on the FTW3 Ultra;
+ More HDMI outputs - 3 in total, versus 1 on the FTW3 Ultra. Useful to me, since my PC is connected to a TV through an AVR, and running a single HDMI cable through the AVR can cause issues with some non-standard resolutions and refresh rates not being supported by that device. That could potentially be solved by connecting one HDMI directly to the TV, and another to the AVR, for multi-channel audio;
+ Slightly cheaper than the FTW3 Ultra.

Cons:

- A bigger radiator means that I'd need a new case. My NZXT S340 Elite only has room for one 280mm rad, and that space is taken by a Corsair H110i. I was planning on purchasing a new case along with a new processor and a new 360mm cooler - but only next year, when Intel's upcoming line of CPUs is likely going to be released;
- Can take up to 30 business days to be delivered.

Unsure:

The Waterforce has an all-around cooling solution that cools not only the GPU but also the VRAM and MOSFETs. The FTW3 Ultra, on the other hand, offers a hybrid solution, with liquid cooling only the GPU itself and a fan for the other components. I'm not sure which one is better. An all-around solution sounds good on paper, but it can cause the GPU to run warmer due to the shared cooling. EVGA's blower fan is barely used on my 1080 Ti, in comparison, as the VRAM and MOSFETs can handle more heat than the GPU.

EVGA FTW3 Ultra

Pros:

+ Already familiar with the brand, know what to expect from the product (never owned anything from Gigabyte before);
+ Replacing my current GPU is a matter of unplugging the old card and plugging in the new one, no other changes required;
+ Readily available.

Cons:

- The opposite of the Waterforce's pros.

Unsure:

May be able to use an "unofficial" BIOS with it? The FTW3 Ultra allows for switching between two BIOS. Not sure if that means being able to use [the BIOS designed to increase the power limit in Turing GPUs](https://xdevs.com/guide/2080ti_kpe/), or if that's limited to the Kingpin variant;

Again, not sure which card would end up running cooler: the Waterforce, with its bigger rad, but all-in-one cooling system; or the FTW3 Ultra, with its smaller rad, but hybrid solution of CLC plus blower fan.

What do you guys think? I'm all ears (or, in this case, eyes).


----------



## Sheyster

ahnafakeef said:


> Thank you very much!
> 
> 1. Does DLSS off look better than on, even if it is at the cost of performance?
> 2. Is there a way to prevent frame drops e.g., when using the lighter to clear the cobwebs, or during action scenes?
> 3. Does overclocking a 9900K beyond stock provide any benefit at 4K?
> 
> Thank you!


1. DLSS is a blurry mess. The new GFE Sharpen filter helps. Recommendation: Avoid DLSS.

2. If using RT, no. Recommendation: If you value FPS (esp. competitive multiplayer games like BF5) avoid RT.

3. Depends on the game, 5 GHz is good enough for gaming. Recommendation: Lock all cores at 5 GHz and call it a day. Minimize vcore as much as possible.


----------



## Sheyster

EeK9X said:


> I'm considering upgrading from my EVGA 1080 Ti SC2 Hybrid, which has served me well for almost two years. It runs cool and quiet
> 
> ..
> ..
> 
> What do you guys think? I'm all ears (or, in this case, eyes).


If you've waited this long then wait for the next gen IMHO (or 2080 Ti Super if that actually happens). I actually regret getting rid of my Titan Xp. I should have waited.


----------



## GRABibus

jura11 said:


> Hi there
> 
> Yes, I concluded that a while back: in any game where RT is present you need to downclock the GPU, but strangely, in 3DMark Port Royal you don't need to downclock at all
> 
> Try a different TIM, like Thermalright TFX, which seems to outperform Thermal Grizzly Kryonaut by 2-3°C, maybe a bit more
> 
> I wouldn't use Conductonaut on stock coolers; if anything, only on a cooler that is at least nickel-plated
> 
> Getting better temperatures is possible, but I would say you'd really need a chiller to get much better temperatures
> 
> Hope this helps
> 
> Thanks, Jura


Hi Jura11,

After hours of tweaking, I could play a few hours of BFV with DXR enabled (DXR set to "Normal") with the following OC (see the attached V/F curve: +160MHz on the core with V/F point [email protected], and +1275MHz on the memory).

Reminder: MSI 2080 Ti Sea Hawk X with the Galax 380W BIOS.

Here are 2 videos:

https://www.youtube.com/watch?v=_Ja7KOh_PM0

https://www.youtube.com/watch?v=Ex_mqD9hJ40

Still stuttering sometimes (as you can see 3 or 4 times in the videos), especially when entering a map at the beginning of a match. Some maps stutter more than others...

Since Update 4.4 of BFV, I can now play for hours with the OC on my GPU.
Before this update, even at stock frequencies (with the Galax BIOS), I crashed to desktop with DXR enabled.

Not sure if this OC is fully stable in BFV with DXR, but at least I'm on the right track.


----------



## jura11

GRABibus said:


> Hi Jura11,
> 
> After hours of tweaking, I could play a few hours of BFV with DXR enabled (DXR set to "Normal") with the following OC (see the attached V/F curve: +160MHz on the core with V/F point [email protected], and +1275MHz on the memory).
> 
> Reminder: MSI 2080 Ti Sea Hawk X with the Galax 380W BIOS.
> 
> Here are 2 videos:
> 
> https://www.youtube.com/watch?v=_Ja7KOh_PM0
> 
> https://www.youtube.com/watch?v=Ex_mqD9hJ40
> 
> Still stuttering sometimes (as you can see 3 or 4 times in the videos), especially when entering a map at the beginning of a match. Some maps stutter more than others...
> 
> Since Update 4.4 of BFV, I can now play for hours with the OC on my GPU.
> Before this update, even at stock frequencies (with the Galax BIOS), I crashed to desktop with DXR enabled.
> 
> Not sure if this OC is fully stable in BFV with DXR, but at least I'm on the right track.


Hi there

Glad you got BFV working. I usually test every game with my OC, and if I'm getting TDRs or CTDs then tweaking is the only option.

From what I remember, in BFV I ran everything at max with a 2070-2085MHz OC; for RT I think I used the highest preset, and I had no TDR or CTD issues. I uninstalled BFV a few months back after I finished the SP; for some reason I just don't play MP games at all.

I would try a single-player mission with your settings; then you should see whether it's stable. For most RT-enabled games I'm using a 2070-2085MHz OC profile, which is OK.

For non-RT games I use a 2130-2145MHz OC profile.

Hope this helps

Thanks, Jura


----------



## ahnafakeef

Sheyster said:


> 1. DLSS is a blurry mess. The new GFE Sharpen filter helps. Recommendation: Avoid DLSS.
> 
> 2. If using RT, no. Recommendation: If you value FPS (esp. competitive multiplayer games like BF5) avoid RT.
> 
> 3. Depends on the game, 5 GHz is good enough for gaming. Recommendation: Lock all cores at 5 GHz and call it a day. Minimize vcore as much as possible.


Thank you very much!

Is 5GHz a common OC for 9900K chips? Should I expect better thermals, and consequently a higher OC, with a 9900KF vs. a 9900K?

Apologies for the off-topic question


----------



## GRABibus

jura11 said:


> Hi there
> 
> Glad you got BFV working. I usually test every game with my OC, and if I'm getting TDRs or CTDs then tweaking is the only option.
> 
> From what I remember, in BFV I ran everything at max with a 2070-2085MHz OC; for RT I think I used the highest preset, and I had no TDR or CTD issues. I uninstalled BFV a few months back after I finished the SP; for some reason I just don't play MP games at all.
> 
> I would try a single-player mission with your settings; then you should see whether it's stable. For most RT-enabled games I'm using a 2070-2085MHz OC profile, which is OK.
> 
> For non-RT games I use a 2130-2145MHz OC profile.
> 
> Hope this helps
> 
> Thanks, Jura



Here is my curve for non-DXR games (not yet tested in all my games, but I'm confident):

+175MHz on the core with V/F point 1.093V/2160MHz
+1275MHz on the memory


----------



## TheRedViper

New card inbound for new build


----------



## GRABibus

jura11 said:


> Hi there
> 
> Glad you got BFV working. I usually test every game with my OC, and if I'm getting TDRs or CTDs then tweaking is the only option.
> 
> From what I remember, in BFV I ran everything at max with a 2070-2085MHz OC; for RT I think I used the highest preset, and I had no TDR or CTD issues. I uninstalled BFV a few months back after I finished the SP; for some reason I just don't play MP games at all.
> 
> I would try a single-player mission with your settings; then you should see whether it's stable. For most RT-enabled games I'm using a 2070-2085MHz OC profile, which is OK.
> 
> For non-RT games I use a 2130-2145MHz OC profile.
> 
> Hope this helps
> 
> Thanks, Jura


Hey Jura11,
I have tried something: disabling the FFR option (Future Frame Rendering) in the advanced graphics options.

I could play for an hour with my so-called "non-DXR games" overclock, which had never happened until now!
With this OC, I usually crashed to desktop within 10 minutes of entering a match, or even in the menu.

I will try again tomorrow.

Do you have FFR enabled?
If yes, try disabling it and playing with DXR enabled on your maximum overclock profile, to see if it helps.

Bye.


----------



## J7SC

TheRedViper said:


> New card inbound for new build


 
...'grats! Very, very nice :wubsmiley ...I checked on those yesterday on eBay just for the fun of it. Are you going to water-cool them, given their hybrid setup?


----------



## Sheyster

ahnafakeef said:


> Thank you very much!
> 
> Is 5GHz a commonplace OC for 9900K chips? Should I expect better thermals and consequently higher OC with a 9900KF vs a 9900K?
> 
> Apologies for the off-topic question


It's a silicon lottery. Some of them can do 5 GHz all-core at a lower voltage than others. 5 GHz all-core is not too unusual; it's more about what voltage it can be achieved at, and 1.25V would be considered very good. I don't expect there is much difference between the two; there are good and bad samples of each model. It seems like the price on the KF has been lower lately; I saw Newegg had a sale on them for $419 recently.


----------



## TheRedViper

J7SC said:


> ...'grats ! Very, very nice :wubsmiley ...checked on those yesterday on eBay just for the fun of it. Are you going to water-cool them, given their hybrid setup ?


Of course


----------



## AndrejB

I have an Asus Strix 2080 Ti OC and was wondering which BIOS I can flash for everyday use. Also, can I flash over the silent BIOS instead of the main one?


----------



## gfunkernaught

jura11 said:


> Hi there
> 
> Glad you got BFV working. I usually test every game with my OC, and if I'm getting TDRs or CTDs then tweaking is the only option.
> 
> From what I remember, in BFV I ran everything at max with a 2070-2085MHz OC; for RT I think I used the highest preset, and I had no TDR or CTD issues. I uninstalled BFV a few months back after I finished the SP; for some reason I just don't play MP games at all.
> 
> I would try a single-player mission with your settings; then you should see whether it's stable. For most RT-enabled games I'm using a 2070-2085MHz OC profile, which is OK.
> 
> For non-RT games I use a 2130-2145MHz OC profile.
> 
> Hope this helps
> 
> Thanks, Jura


which gpu and bios are you using?


----------



## J7SC

TheRedViper said:


> Of course


 
...post a link to your build thread here when it's ready; I like over-the-top


----------



## DrunknFoo

Oh man, it's been a while since I looked at this thread, and I find 600 pages to catch up on...

Quick question: is there any newer BIOS for the reference PCB that pushes out more than the Galax "127" 380W one? And confirmed working? Not sure if the first page has been updated in a while...


----------



## mattxx88

DrunknFoo said:


> Oh man, it's been a while since I looked at this thread, and I find 600 pages to catch up on...
> 
> Quick question: is there any newer BIOS for the reference PCB that pushes out more than the Galax "127" 380W one? And confirmed working? Not sure if the first page has been updated in a while...


Both of these BIOSes work with reference cards:


mattxx88 said:


> I found a Lightning BIOS with a 400W PL on the MSI forum.
> 
> I tested it on my Gaming X Trio and it works.
> 
> If someone wants to try it, you can find it here:
> https://forum-en.msi.com/index.php?topic=314643.0
> Post #13, the BIOS named NV377MH_125.rar


and this: https://www.techpowerup.com/vgabios/205495/msi-rtx2080ti-11264-180930


----------



## DrunknFoo

mattxx88 said:


> Both of these BIOSes work with reference cards:
> 
> 
> and this: https://www.techpowerup.com/vgabios/205495/msi-rtx2080ti-11264-180930


Hmm, I'll dig a bit deeper for more confirmation.

But the card you are using is a custom PCB, correct?
Out of curiosity, what clock frequency are you able to maintain with this BIOS?


----------



## joyzao

My RTX is the 11G-P4-2281-KR.

So that linked BIOS won't work for me...

Because it's a non-A chip, I can't put an A-chip BIOS on it, so I'm after a modified BIOS for a non-A chip to get the most out of the TDP.


----------



## jura11

GRABibus said:


> Here is my curve for non-DXR games (not yet tested in all my games, but I'm confident):
> 
> +175MHz on the core with V/F point 1.093V/2160MHz
> +1275MHz on the memory


Hi there 

I can achieve that 2160MHz too. Yesterday I tried Wolfenstein II: The New Colossus, and there I saw 2160MHz, which dropped to 2145MHz when temperatures rose to 42°C, I think. During the game the clocks stayed in the 2145-2160MHz range; that's with the Matrix BIOS, and I saw GPU power peak at 359W in this game.

I've tested all my games with that OC and there are no issues.

Hope this helps 

Thanks, Jura


----------



## jura11

gfunkernaught said:


> which gpu and bios are you using?


Hi there 

I've got the Asus RTX 2080 Ti Strix, and I use the Asus Matrix BIOS, which I think is the best BIOS for the Strix.

Hope this helps 

Thanks, Jura


----------



## jura11

AndrejB said:


> Have an asus strix 2080ti oc and was wondering which bios can flash for everyday use? Also can I flash over the silent bios instead of the main?


Hi there 

Personally, I would suggest trying the Asus Matrix BIOS, which I use on my Asus RTX 2080 Ti Strix. It offers a higher power limit (360W), whereas the stock Strix BIOS has a lower power limit of 320-325W, which is just not enough.

Another BIOS that works with the Strix is the XOC BIOS, but be aware you will need to lower the power limit from 100% to 38% or 42%; what I also don't like about that BIOS is that the VRAM runs at full speed.

For those reasons I would recommend the Matrix BIOS. Of course, you can try and flash any BIOS, but be aware you may lose a DP port and the HDMI port; if you want to cross-flash a different BIOS, check the I/O panel of your GPU against that of the GPU whose BIOS you want to use.

Hope this helps 

Thanks, Jura


----------



## jura11

GRABibus said:


> Hey Jura11,
> I have tried something: disabling the FFR option (Future Frame Rendering) in the advanced graphics options.
> 
> I could play for an hour with my so-called "non-DXR games" overclock, which had never happened until now!
> With this OC, I usually crashed to desktop within 10 minutes of entering a match, or even in the menu.
> 
> I will try again tomorrow.
> 
> Do you have FFR enabled?
> If yes, try disabling it and playing with DXR enabled on your maximum overclock profile, to see if it helps.
> 
> Bye.



Hi there 

Sadly, I uninstalled BFV, and I'm not sure I want to reinstall it, as I finished the game.

I'll probably do a few tests in other games. In Control I've tried everything, and the best OC there is the same 2070-2085MHz; I'll probably try disabling a few features if it CTDs or I get a TDR.

Personally, I have the TDR delay set to a higher value, as I render on my GPUs, and any instability in games usually ends in a freeze or hard lock.

But of course I'll do a few tests later on and see if it passes.

Hope this helps 

Thanks, Jura


----------



## AndrejB

jura11 said:


> Hi there
> 
> Personally, I would suggest trying the Asus Matrix BIOS, which I use on my Asus RTX 2080 Ti Strix. It offers a higher power limit (360W), whereas the stock Strix BIOS has a lower power limit of 320-325W, which is just not enough.
> 
> Another BIOS that works with the Strix is the XOC BIOS, but be aware you will need to lower the power limit from 100% to 38% or 42%; what I also don't like about that BIOS is that the VRAM runs at full speed.
> 
> For those reasons I would recommend the Matrix BIOS. Of course, you can try and flash any BIOS, but be aware you may lose a DP port and the HDMI port; if you want to cross-flash a different BIOS, check the I/O panel of your GPU against that of the GPU whose BIOS you want to use.
> 
> Hope this helps
> 
> Thanks, Jura


Hi Jura, thank you for your response.

Just for sanity, can you let me know which version of nvflash you use? And can I flash over the silent BIOS?


----------



## jura11

AndrejB said:


> Hi Jura, thank you for your response.
> 
> Just for sanity, can you let me know which version of nvflash you use? And can I flash over the silent BIOS?


Hi Andrej

I used this version of nvflash:
NVIDIA NVFlash with Board Id Mismatch Disabled v5.541.0

https://www.techpowerup.com/download/nvidia-nvflash-with-board-id-mismatch-disabled/

Matrix BIOS 

https://www.techpowerup.com/vgabios/210080/asus-rtx2080ti-11264-190307-1

I think I flashed the Matrix BIOS over the silent BIOS; as the first BIOS I use the Asus XOC BIOS, and as the second, my daily BIOS, the Matrix BIOS.

Hope this helps 

Thanks, Jura
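
For anyone following along, a cross-flash session with that nvflash build typically looks something like the sketch below. The executable and ROM filenames are placeholders, and this is a rough outline rather than a guaranteed procedure; always dump your stock BIOS first so you can flash back.

```shell
# Dump the currently installed BIOS to a file first (your way back).
nvflash64 --save stock_backup.rom

# Disable the EEPROM write protection.
nvflash64 --protectoff

# Flash the new ROM; -6 overrides the PCI subsystem ID mismatch check
# when writing another vendor's BIOS (e.g. the Matrix ROM onto a Strix).
nvflash64 -6 Matrix.rom
```

On a dual-BIOS card like the Strix, flip the hardware switch to the position you want to overwrite (e.g. the quiet BIOS) before flashing, and reboot afterwards.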


----------



## sblantipodi

Guys, I generally play slow games like Assassin's Creed and action RPGs in general.

I have a 2080 Ti, but I like to play at 4K for the best image quality.

I have an Acer Predator XB271HK [email protected] G-Sync monitor.
Do you think I should upgrade to a 144Hz panel to reduce image blur?
Is blur that much better on higher-refresh-rate monitors?


----------



## Lao Tzu

Hi guys, my Gigabyte RTX 2080 Ti with Aorus Engine v1.5.0.1 has the power limit at +12%, and can OC +270 core / +1330 memory (Samsung), reaching 2145MHz/15330MHz!!!


----------



## mattxx88

DrunknFoo said:


> hmm ill dig a bit deeper for more confirmations
> 
> but the card you are using is a custom pcb correct?
> outa curiosity what clock frequency are you able to maintain with this bios?


Others in this thread can confirm this; a friend of mine has a watercooled reference board and is using the Lightning Z BIOS. He also tried the Kingpin BIOS and that worked too (only for benching and only on water).

My card is a custom PCB, that's correct. My daily is 2020MHz @ 0.975V or 2130MHz @ 1.050V.
Mine is watercooled too.


----------



## ahnafakeef

Hello everyone! 

1. Which is the best 2080Ti for an air-cooled setup if I want to flash a custom BIOS on it?
2. Are Lightning Z cards binned, or in any way better overclockers than other versions of 2080Ti?
3. If not, which card would overclock the best?

Thank you!


----------



## Lao Tzu

I was wrong about the memory, thinking the numbers in Aorus Engine were x2; now I put +2400 on the memory to achieve 16400MHz!!! Is it a golden chip and memory?


----------



## ntuason

ahnafakeef said:


> Hello everyone!
> 
> 1. Which is the best 2080Ti for an air-cooled setup if I want to flash a custom BIOS on it?
> 2. Are Lightning Z cards binned, or in any way better overclockers than other versions of 2080Ti?
> 3. If not, which card would overclock the best?
> 
> Thank you!


I read everywhere that the Kingpins are the top overclockers. As for thermals I have no idea.


----------



## AvengedRobix

ahnafakeef said:


> Hello everyone!
> 
> 
> 
> 1. Which is the best 2080Ti for an air-cooled setup if I want to flash a custom BIOS on it?
> 
> 2. Are Lightning Z cards binned, or in any way better overclockers than other versions of 2080Ti?
> 
> 3. If not, which card would overclock the best?
> 
> 
> 
> Thank you!


Kingpin or HOF OC Lab Edition

Sent from my ONEPLUS A6013 using Tapatalk


----------



## GRABibus

Does anyone know why the V/F curve is not accessible with the ASUS XOC BIOS?
Is there a particular reason?
Is there no plan for a version of this BIOS with the ability to tweak the V/F curve?


----------



## gfunkernaught

I don't know why either, but the Kingpin BIOS doesn't have it either. My guess is that for LN2 overclocking EVGA has their own voltage utility that they use in conjunction with either AB or PX, so if you use an external utility for voltage and Precision X for frequency, then you don't need a V/F curve. I wish the XOC BIOS had a V/F curve; it's kind of useless to me for gaming since all games crash on it. The 380W BIOS that's been floating around also crashes in games at anything above 2085MHz once the temp hits 40-43C. I have a crap OC'er.
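For anyone unsure what the V/F curve discussion above is about: a rough sketch of what "locking" a point on the voltage/frequency curve does (the thing Afterburner's curve editor allows and these XOC BIOSes don't). The curve values here are hypothetical examples, not real BIOS data.

```python
# Illustrative sketch only: "locking" a V/F point means flattening the curve
# so every voltage at or above the chosen point maps to the same frequency,
# so the card never requests more voltage than the locked point.

def lock_vf_point(curve, v_lock, f_lock):
    """Return a new V/F curve locked at (v_lock, f_lock).

    curve: list of (voltage_mV, frequency_MHz) pairs, sorted by voltage.
    Points below v_lock keep their frequency (clamped to f_lock);
    points at or above v_lock are flattened to f_lock.
    """
    locked = []
    for v, f in curve:
        if v < v_lock:
            locked.append((v, min(f, f_lock)))
        else:
            locked.append((v, f_lock))
    return locked

# Hypothetical stock curve fragment (mV, MHz)
stock = [(725, 1635), (900, 1950), (1006, 2100), (1093, 2175)]
locked = lock_vf_point(stock, 1006, 2100)
print(locked)
```

After the lock, every point from 1006mV upward runs 2100MHz, so the card never needs to go past 1.006V for its maximum boost clock.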


----------



## Asmodian

gfunkernaught said:


> The 380W BIOS that's been floating around also crashes in games at anything above 2085MHz once the temp hits 40-43C. I have a crap OC'er.


My shunt-modded 2080 Ti FE with the stock BIOS behaves about the same; 2070 MHz is my max daily driver clock if it's in the mid 40s. It can hold 2070 MHz with any workload though. 

I don't think the various BIOSes help you overclock; the higher-power ones simply stop the card throttling under higher power loads.


----------



## DrunknFoo

mattxx88 said:


> DrunknFoo said:
> 
> 
> 
> hmm ill dig a bit deeper for more confirmations
> 
> but the card you are using is a custom pcb correct?
> outa curiosity what clock frequency are you able to maintain with this bios?
> 
> 
> 
> other in this thread can confirm this, a friend of mine have a reference board watercooled and is using lightnin z bios, he also tried Kingpin bios and also that worked (only for bench and only on water)
> 
> my card is a custom pcb, is correct. My daily is 2020mhz @0.975v or 2130mhz @1.050v
> mine is watercooled too


So in other words there's no real point changing from the Galax? My card peaks at 2200 and holds 2170... Hmmm


----------



## mattxx88

DrunknFoo said:


> So in other words no real point changing from the galax? My card peaks at 2200 holds 2170.... Hmmm


Exactly.
I noticed one thing with the Lightning Z BIOS: I'm not completely stable at 8500MHz VRAM frequency like I am with the Gaming X Trio 400W BIOS.
This makes me assume it has tighter VRAM timings, like the Matrix BIOS.


----------



## kot0005

Oops, wrong thread lol.


----------



## MGTG

jura11 said:


> Hi Andrej
> 
> I used this one version of Nvflash
> NVIDIA NVFlash with Board Id Mismatch Disabled v5.541.0
> 
> https://www.techpowerup.com/download/nvidia-nvflash-with-board-id-mismatch-disabled/
> 
> Matrix BIOS
> 
> https://www.techpowerup.com/vgabios/210080/asus-rtx2080ti-11264-190307-1
> 
> I think I flashed the Matrix BIOS over the Silent BIOS; as the first BIOS I use the Asus XOC BIOS, and as the second (my daily BIOS) the Matrix BIOS
> 
> Hope this helps
> 
> Thanks, Jura


I am currently on the Matrix BIOS for my Strix 2080 Ti. I still hit the power limit in heavy stress tests, with temps approaching 80C. Is it worth going for the Galax 380W BIOS? :/


----------



## ESRCJ

MGTG said:


> I am currently on the Matrix BIOS for my Strix 2080 Ti. I do still hit the power limit in heavy stress tests with temps approaching 80C. Is it worth it going for the Galax 380W BIOS? :/


I don't think it makes much sense to use a higher power limit BIOS if your biggest limiting factor is cooling. Your frequency drops 15MHz at various temperature ticks. I would imagine there are a handful of ticks leading up to 80C.


----------



## jura11

MGTG said:


> I am currently on the Matrix BIOS for my Strix 2080 Ti. I do still hit the power limit in heavy stress tests with temps approaching 80C. Is it worth it going for the Galax 380W BIOS? :/


Hi there 

You need lower temperatures first, because these temperatures are a bit high for a Strix.

Whether the Galax 380W BIOS would help is hard to say. I have hit the power limit several times in Wolfenstein: The New Colossus, in Control (in some scenes), and in a few other games with the Matrix BIOS as well, but it is still less frequent than with the stock Strix BIOS. 

I also tried the XOC BIOS in Wolfenstein and Control, where I set the power limit at 42% (420W), and I would still hit the power limit in some scenarios.

In which stress tests are you hitting the power limit?

Just be aware that with the Galax 380W BIOS you will lose one DP port and the HDMI port when you flash it on your Strix.

Hope this helps 

Thanks, Jura


----------



## GRABibus

We have recently debated here about whether to use liquid metal or not, on copper or not, etc.

Here is a very instructive and interesting video on this topic:


----------



## gfunkernaught

Asmodian said:


> My shunt modded 2080 Ti FE with stock BIOS behaves about the same, 2070 MHz is my max daily driver clock if at mid 40s. It can hold 2070 MHz with any workload though.
> 
> I don't think the various BIOSes help you overclock, the higher power ones simply stop it throttling with higher power loads.


That's what I'm beginning to understand about these cards: every single graphics card is unique. The stock BIOS seemed to hover between 2055MHz and 2100MHz based on temp and power. With the 380W BIOS, again, the sweet spot was [email protected]. I recently upgraded my loop and added a 200mm fan to the side of my case (Corsair 760T, no side fans), which now provides fresh air to the top 360 rad and the VRM and CPU area. So the temps are a bit more under control; they top out at around 41-42C, and it hit 43C once while playing Metro Exodus. Looks like I could be wrong about my card being a crap OC'er; it may not be a golden sample, but it's not crap. I can be dramatic lol. The ambient temp has dropped significantly now that we are heading into fall, so now [email protected] is stable, no throttling, temps 39-41C, and the power limit doesn't get touched in Control and Metro. I also noticed something about how the thermal throttle happens: I think the algorithm is based on how quickly the temps are rising or approaching a defined throttle point(?). Control pushed the card to 42C at certain times, but no throttling occurred last night, whereas a couple weeks ago, when it was warmer, it did. It's all strange, frustrating, and interesting as hell!


----------



## kien18

Hello, this is my first time ever attempting to update the vBIOS on my MSI RTX 2080 Ti Gaming X Trio, but I don't know how to do it. Can anyone explain it to me in detail here? Thank you.
(I got the vBIOS directly from MSI already.)


----------



## mattxx88

kien18 said:


> hello this is my first ever time attemping to update my gpu vbios for my MSI RTX 2080 TI GAMING X TRIO but i dont know how to do it can anyone explain to me in detail here ? thank you
> (got the vbios directly from msi already)


this guide is well done:

https://www.overclockersclub.com/guides/how_to_flash_rtx_bios/

If you received the BIOS directly from MSI, remember to rename it to .rom.
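The sequence in the linked guide boils down to two nvflash commands, sketched below as a small helper that also checks the .rom rename. The flag names (`--save`, `-6`) match common nvflash usage, but verify them against the guide before flashing anything; a bad flash can brick the card.

```python
# A sketch of the flashing sequence from the linked guide, expressed as the
# commands you would run from an elevated prompt. Nothing here touches
# hardware; it only builds and validates the command list.

import os

def nvflash_plan(new_bios):
    """Build the ordered nvflash command list for flashing new_bios."""
    root, ext = os.path.splitext(new_bios)
    if ext.lower() != ".rom":
        # MSI support may send the file without the .rom extension;
        # nvflash expects a .rom, so rename it first.
        raise ValueError(f"rename {new_bios!r} to {root + '.rom'!r} first")
    return [
        "nvflash --save backup.rom",   # 1. back up the current vBIOS
        f"nvflash -6 {new_bios}",      # 2. flash, overriding ID mismatch checks
    ]

print(nvflash_plan("trio_stock.rom"))
```

Keeping the `backup.rom` somewhere safe is the important part: it is the only way back to the stock BIOS if the new one misbehaves.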


----------



## GAN77

kien18 said:


> hello this is my first ever time attemping to update my gpu vbios for my MSI RTX 2080 TI GAMING X TRIO but i dont know how to do it can anyone explain to me in detail here ? thank you
> (got the vbios directly from msi already)



Can you share the BIOS? Which power limit?


----------



## ncpneto

Please, for those who are using the Kingpin XOC BIOS, is there any way to lock the voltage? The AB curve editor is disabled, so I wonder if there is another way to do it...
Thanks


----------



## Vlada011

What's the biggest manipulation by NVIDIA or AMD that you can remember?
For me it's the overclocking of the RTX Founders Edition cards, done to avoid the humiliation of NVIDIA showing up with an RTX 2080 and RTX 2080 Ti at the same clocks they gave to partners.
ASUS, MSI, and Gigabyte offered the RTX 2080 and RTX 2080 Ti with clocks 100MHz lower than the NVIDIA Founders Edition. Of course they had custom models, but the reference RTX 2080 and RTX 2080 Ti were 100MHz slower than the FE. 
That 100MHz was the only way for NVIDIA to beat the GTX 1080 Ti Founders Edition with the RTX 2080 Founders Edition and show the ~19% improvement from the GTX 1080 Ti FE to the RTX 2080 FE, and half the people never figured out what they had done. To them the RTX 2080 is naturally faster than the GTX 1080 Ti, because it was measured FE against FE; everything except the FE and the faster customs was slower than the GTX 1080 Ti FE.
Nice move.

The situation was like this, a few weeks before the launch date, somewhere inside NVIDIA...
OK, what do we have? We have an RTX 2080 that is obviously, in every situation, slower than the GTX 1080 Ti, with no way to manipulate the scores, FPS, or drivers; it's a 5-10% slower card depending on the benchmark or game. And an RTX 2080 Ti that's somewhere around 20% faster.
What can we do? 
The only choice is overclocking the reference models. But a good percentage of chips won't just run 100MHz faster like that, will they? 
No, we will not OC all of them. Vendors will get the original reference clock, slower than the GTX 1080 Ti FE, but we will pick a significant number of nice chips, overclock them by 100MHz, and sell them for $100 more; vendors will not get such cards. Of course we need to replace the cooler: the blower from the GTX 780 Ti FE, GTX 980 FE, and GTX 1080 FE can't serve any more; we need a dual-fan cooler. We will advertise FE clocks and compare them with the previous generation, but partners will get the lower clock. 
Reviews will then test an FE with a 100MHz higher boost than the cheapest reference models partners get, but people are used to the FE being the reference card, so they will naturally think that if the Turing FE is faster than the Pascal FE, every Turing card is faster.


----------



## jura11

Lao Tzu said:


> Was wrong about memory, thinking that numbers in Aorus Engine was x2, now i put +2400 on memory to achive 16400MHz!!! its golden chip and memory?


Hi there 

2145MHz on air is a pretty good result, without question. Does that 2145MHz frequency hold during a benchmark or game, or does it drop to a lower frequency?

Here is my result in Unigine Superposition with a 2145MHz OC and a 5960X running at 4.6GHz, I think:

https://i.imgur.com/XTCo8yj.jpg

My old Zotac RTX 2080Ti AMP would do 13382 with 2100MHz and +1100MHz on VRAM 

https://i.imgur.com/XymE3E3.png


Hope this helps 

Thanks, Jura


----------



## keikei

Greetings Members, 

Is a mild overclock worth it for the extra frames in general on this card? Playing @ 4K. Thanks.


----------



## Sheyster

keikei said:


> Greetings Members,
> 
> is a mild overclock worth it for the extra frames in general for this card? Playing @ 4K. Thanks.


I think it is... I use the 380w GALAX BIOS and usually game at a +120 core OC. For gaming I don't think it's worth it to max out your OC.


----------



## sblantipodi

sblantipodi said:


> Guys, I generally play slow games like Assassin's Creed and action RPGs.
> 
> I have a 2080 Ti, but I like to play in 4K for the best image quality.
> 
> I have an Acer Predator XB271HK G-Sync monitor.
> Do you think I should upgrade to a 144Hz panel to reduce image blur?
> Is blur that much better with higher refresh rate monitors?


bump question


----------



## kot0005

Sheyster said:


> I think it is... I use the 380w GALAX BIOS and usually game at a +120 core OC. For gaming I don't think it's worth it to max out your OC.



I benched Gears 5 at 2100MHz and 2175MHz at 1440p. It's like 1 fps: 115 vs 116.


----------



## Sheyster

kot0005 said:


> I benched Gears 5 with 2100Mhz and 2175Mhz at 1440p. its like 1 fps. 115 vs 116.


It really depends on the game and if you're CPU bound or not. I assume you mean 1 FPS when comparing the maximum FPS numbers. IMHO minimum FPS is actually more important especially in FPS multiplayer games. Almost every game will benefit from a GPU OC for min FPS. You're seldom CPU bound at the lower end of the FPS range.
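Sheyster's point about minimum FPS mattering more than the average can be made concrete with a few lines of code: compute both from per-frame times and see how a flat average hides the dips. The frame-time numbers below are made up for illustration.

```python
# Average FPS vs "1% low" FPS from per-frame render times. The 1% low
# (FPS over the slowest 1% of frames) is the metric a GPU OC tends to
# help with, as discussed above.

def fps_stats(frame_times_ms):
    """Return (average_fps, one_percent_low_fps) from frame times in ms."""
    avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
    worst = sorted(frame_times_ms, reverse=True)  # slowest frames first
    slowest_1pct = worst[: max(1, len(worst) // 100)]
    one_pct_low = 1000.0 * len(slowest_1pct) / sum(slowest_1pct)
    return avg_fps, one_pct_low

# Hypothetical capture: mostly 8 ms frames (125 FPS) with a few 20 ms spikes
frames = [8.0] * 195 + [20.0] * 5
avg, low = fps_stats(frames)
print(f"avg {avg:.0f} FPS, 1% low {low:.0f} FPS")  # avg 120 FPS, 1% low 50 FPS
```

The average barely notices five slow frames, while the 1% low drops by more than half, which is exactly the stutter you feel in multiplayer.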


----------



## Sheyster

sblantipodi said:


> bump question


If you play FPS games 144 Hz is absolutely the way to go, and I would even suggest avoiding 4K completely if you're at all competitive. I've been on 144 Hz since 2013. My first 144 Hz monitor was an ASUS VG278, it was epic at the time but pretty old now. I currently own two 144 Hz monitors: A Predator Z35 21:9 35" (can run at 200 Hz but I run the native 144 Hz) and a Samsung CHG70 16:9 32" (144 Hz native). I have my eye on the new LG 38" coming out soon:

https://www.lg.com/us/monitors/lg-38GL950G-B-gaming-monitor

It's going to be very pricey but initial reviews seem very good. There is one up on YouTube.


----------



## keikei

Philips is supposedly launching a 4K/120Hz monitor that's under $1000 in the 1st quarter of 2020. It's the first piece of tech on my list for the year. There are way too many issues with the ASUS monitor for me to go near it.


*Anyone else not able to hit near 100% GPU usage playing Subnautica? Mine hovers around 50-60% for some reason, with frames at 40 or so. Vsync is off and there is no frame cap option in the game. Gaming @ 4K. The game itself is not the most optimized and stable; probably the worst game to gauge a card with, sadly. I also know frames tank the longer you play it. Even with a game reinstall, I still get frame/performance issues. Weird.


----------



## majestynl

kot0005 said:


> I benched Gears 5 with 2100Mhz and 2175Mhz at 1440p. its like 1 fps. 115 vs 116.


Yeah, I can confirm, not much difference between those clocks.
And the mins look the same too. Maybe 1 or 2 FPS on the averages, but nothing more...

Don't know what's happening, but my FPS fluctuates between 80-140, only playing Versus. GPU usage is around 60%!

1440p @ Ultra.

No stuttering or anything bad besides those fluctuations. Tried many things, no luck so far. How about yours?


----------



## kot0005

Sheyster said:


> It really depends on the game and if you're CPU bound or not. I assume you mean 1 FPS when comparing the maximum FPS numbers. IMHO minimum FPS is actually more important especially in FPS multiplayer games. Almost every game will benefit from a GPU OC for min FPS. You're seldom CPU bound at the lower end of the FPS range.


It's the average FPS number from the benchmark.


----------



## sblantipodi

Sheyster said:


> sblantipodi said:
> 
> 
> 
> bump question
> 
> 
> 
> If you play FPS games 144 Hz is absolutely the way to go, and I would even suggest avoiding 4K completely if you're at all competitive. I've been on 144 Hz since 2013. My first 144 Hz monitor was an ASUS VG278, it was epic at the time but pretty old now. I currently own two 144 Hz monitors: A Predator Z35 21:9 35" (can run at 200 Hz but I run the native 144 Hz) and a Samsung CHG70 16:9 32" (144 Hz native). I have my eye on the new LG 38" coming out soon:
> 
> https://www.lg.com/us/monitors/lg-38GL950G-B-gaming-monitor
> 
> It's going to be very pricey but initial reviews seem very good. There is one up on YouTube.

But are 144Hz monitors really sharper than 60Hz monitors during animations? Can I enjoy the higher refresh rate even in slower games?


----------



## kot0005

majestynl said:


> Yeap can confirm, not much difference between those clocks.
> And also the mins are looking the same. Maybe 1 or 2 FPS on averages but nothing more...
> 
> Don't know what's happening but my FPS is fluctuating between 80-140. only playing versus. GPU usage is around 60%!
> 
> 1440p @ Ultra.
> 
> No stuttering or anything bad but those fluctuations. Tried many things. No luck so far. How about yours?


I haven't tried multiplayer in Gears, only single player. The game runs butter smooth; in game I get around 130-144fps (144Hz cap). The benchmark is a bit more taxing.


----------



## xrenj

Hello,

Is there a BIOS for the EVGA 2080 Ti XC Ultra with fan stop disabled? I can't stand the noise of both fans constantly turning off and on. A fan curve is not a solution for me because I sometimes use Linux on my PC.

Thanks in advance.


----------



## Sheyster

sblantipodi said:


> But are 144Hz monitors really sharper than 60Hz monitors during animations? Can I enjoy the higher refresh rate even in slower games?


You won't appreciate 144 Hz as much with RTS and MMO games, although in WoW I felt it was better. FPS and third-person (Tomb Raider/The Division style games) will benefit the most.


----------



## keikei

I know the Cyberpunk demo a few months back was running on a 2080 Ti. 1080p with RT on, if I'm not mistaken? I may not turn on RT if it's that much of a performance hit. I hope CDPR can optimize RT as much as possible.


----------



## jepz

keikei said:


> I know the Cyberpunk demo a few months back was running with a 2080Ti. 1080p/RT on if I wasn't mistaken? I may not turn on RT if its that much of a performance hit. I hope CDPR can optimize RT as much as possible.


I think it was running on a Titan RTX.


----------



## keikei

jepz said:


> I think it was running on a Titan RTX.



DF claims it's a Ti. More interesting is that DF was there to see the actual gameplay, and what they saw back then is not the same as what CDPR released. Methinks the released gameplay is the console version and CDPR is tempering expectations. DF was blown away by the first-hand footage, but as the game is still in development, things will change.


----------



## jepz

keikei said:


> DF claims its a Ti. More interestingly is that DF was there to see the actual gameplay and from what they saw back then to what CDPR released its not the same. Me thinks the released gameplay is the console version and CDPR is tempering expectations. DF was blown away from the first-hand footage, but as the game is still in development, things will change.
> 
> 
> https://www.youtube.com/watch?v=SaVm7jQ4TT4


I just remember it was something like the PC below running it at E3 2019:

https://www.guru3d.com/news-story/cyberpunk-2077-demo-was-nvidia-titan-rtx.html

CPU: Intel i7-8700K @ 3.70 GHz
Motherboard: ASUS ROG STRIX Z370-I GAMING
RAM: G.Skill Ripjaws V, 2x16GB, 3000MHz, CL15
GPU: Titan RTX
SSD: Samsung 960 Pro 512 GB M.2 PCIe
PSU: Corsair SF600 600W


----------



## ESRCJ

keikei said:


> Greetings Members,
> 
> is a mild overclock worth it for the extra frames in general for this card? Playing @ 4K. Thanks.


I personally think it is. I under-volt the GPU vcore to 1.006V and run it at 2100MHz for my 4K gaming. Very smooth experience with zero clock drops, although mine is water cooled and has a 380W BIOS. The 2080 Ti is a great 4K card. Feel free to check out some bench-a-thon results for various games:

https://sites.google.com/view/benchathon/home?authuser=0


----------



## bl4ckdot

Do people still run the KPE BIOS on a reference 2080 Ti? I'm still using the Galax 380W and was wondering how much you can really push it. I'm at ~17100 graphics score in Time Spy at the moment; can this BIOS reach 18K, or is an XOC BIOS required?


----------



## keikei

jepz said:


> I just remembered something like the PC bellow running on E3 2019:
> 
> https://www.guru3d.com/news-story/cyberpunk-2077-demo-was-nvidia-titan-rtx.html
> 
> CPU: Intel i7-8700K @ 3.70 GHz
> Motherboard: ASUS ROG STRIX Z370-I GAMING
> RAM: G.Skill Ripjaws V, 2x16GB, 3000MHz, CL15
> GPU: Titan RTX
> SSD: Samsung 960 Pro 512 GB M.2 PCIe
> PSU: Corsair SF600 600W


Good find. Luckily (for us) the engine favors NVIDIA. If TW3 can run on a Switch, I don't expect too much worry regarding optimization, minus RT of course.


----------



## ahnafakeef

Hello everyone.

Is Metro Exodus a good enough gaming stress test for a 2080Ti overclock?

With the stock ASUS Strix OC BIOS, I can run +100 core and +500 memory with the power limit at 125%. +200 core crashes immediately, even with zero memory overclock. The core clock fluctuates between 1800 and 2000 with +100.

Can I get a higher overclock with the stock BIOS? If yes, how?

As for a custom BIOS, can someone please recommend one that will prevent me from inadvertently frying my card?

The card is hardly breaking 60c at 100% usage. So I suppose there's quite a bit of performance to be gained from overclocking? 

Playing at 4K. Advice and recommendations welcome and appreciated.

Thank you.


----------



## ESRCJ

ahnafakeef said:


> Hello everyone.
> 
> Is Metro Exodus a good enough gaming stress test for a 2080Ti overclock?
> 
> With the stock ASUS Strix OC BIOS, I can run +100 core and +500 memory with the power limit at 125%. +200 core crashes immediately, even with zero memory overclock. The core clock fluctuates between 1800 and 2000 with +100.
> 
> Can I get a higher overclock with the stock BIOS? If yes, how?
> 
> As for a custom BIOS, can someone please recommend one that will prevent me from inadvertently frying my card?
> 
> The card is hardly breaking 60c at 100% usage. So I suppose there's quite a bit of performance to be gained from overclocking?
> 
> Playing at 4K. Advice and recommendations welcome and appreciated.
> 
> Thank you.


I would recommend tuning the voltage-frequency curve for overclocking. In particular, locking at a lower voltage-frequency point may be ideal if you want consistency and efficiency. You can expect a few frequency drops by the time you reach 60C, as there are various temperature ticks that will prompt a 15MHz drop each. This is BIOS-specific. You may also lose some frequency from hitting the power limit. 

As for whether or not Metro is a good stress test, I'm not sure since I've never played the new one. I use the TimeSpy Extreme stress test as a good proxy for heavy 4K gaming and an undervolt of 1.006V. For lighter titles, I just run my card at the minimum voltage of 0.725V and 1635MHz if that's sufficient for a solid 120FPS at 4K. Basically I think it's a good idea to undervolt for gaming since you cut down on power, resulting in less heat. My card can do 2100MHz at 1.006V for more demanding games and 2175MHz at 1.093V. The latter isn't much of an improvement in performance, but adds a bit more in power draw and so I personally don't find it worthwhile. 

I would not recommend a BIOS with a higher power limit for air cooling. The benefit is partly negated by the frequency drops due to temperature. There are also some games at 4K that will get very close or even hit the 380W power limit on my card, such as Gears 5 and Division 2. Again though, your temps will be quite high at 380W and you'll lose enough frequency as a result to make the BIOS flash almost pointless.
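The temperature-tick behavior described above can be sketched as a toy model: GPU Boost steps the clock down one 15MHz bin as each temperature threshold is crossed. The thresholds below are made up for illustration; the real ones are BIOS-specific, as noted.

```python
# Toy model of Turing's 15 MHz temperature ticks: each crossed threshold
# costs one 15 MHz bin off the sustained boost clock. Thresholds here are
# hypothetical examples, not values from any real BIOS.

TEMP_TICKS_C = [40, 48, 56, 64]  # hypothetical thresholds

def boost_clock(base_mhz, temp_c, ticks=TEMP_TICKS_C):
    """Effective clock after temperature-tick throttling."""
    drops = sum(1 for t in ticks if temp_c >= t)
    return base_mhz - 15 * drops

for temp in (35, 45, 60, 80):
    print(temp, boost_clock(2100, temp))
```

This is why a higher power limit alone doesn't buy much on air: the extra watts raise the temperature, which crosses more ticks, which claws back the frequency the power headroom was supposed to enable.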


----------



## gfunkernaught

ESRCJ said:


> I personally think it is. I under-volt the GPU vcore to 1.006V and run it at 2100MHz for my 4K gaming. Very smooth experience with zero clock drops, although mine is water cooled and has a 380W BIOS. The 2080 Ti is a great 4K card. Feel free to check out some bench-a-thon results for various games:
> 
> https://sites.google.com/view/benchathon/home?authuser=0


Which 2080 Ti do you have? Which games are you playing? I can't hit 2100MHz gaming unless I push at least 1050mV.


----------



## ESRCJ

gfunkernaught said:


> ESRCJ said:
> 
> 
> 
> I personally think it is. I under-volt the GPU vcore to 1.006V and run it at 2100MHz for my 4K gaming. Very smooth experience with zero clock drops, although mine is water cooled and has a 380W BIOS. The 2080 Ti is a great 4K card. Feel free to check out some bench-a-thon results for various games:
> 
> https://sites.google.com/view/benchathon/home?authuser=0
> 
> 
> 
> which 2080 ti do you have? which games are you playing? i cant hit 2100mhz gaming unless i push at least 1050mv.

Founders Edition with a Heatkiller waterblock. Currently I've only had time to play Gears 5. Like I said, the 2080 Ti can draw a lot of power in this game; I was seeing over 370W at 2175MHz 1.093V. A few months ago I was playing Forza Horizon 4, Shadow of the Tomb Raider, Division 2, and Forza Motorsport 7.


----------



## ahnafakeef

ESRCJ said:


> I would recommend tuning the voltage-frequency curve for overclocking. In particular, locking at a lower voltage-frequency point may be ideal if you want consistency and efficiency. You can expect a few frequency drops by the time you reach 60C, as there are various temperature ticks that will prompt a 15MHz drop each. This is BIOS-specific. You may also lose some frequency from hitting the power limit.
> 
> As for whether or not Metro is a good stress test, I'm not sure since I've never played the new one. I use the TimeSpy Extreme stress test as a good proxy for heavy 4K gaming and an undervolt of 1.006V. For lighter titles, I just run my card at the minimum voltage of 0.725V and 1635MHz if that's sufficient for a solid 120FPS at 4K. Basically I think it's a good idea to undervolt for gaming since you cut down on power, resulting in less heat. My card can do 2100MHz at 1.006V for more demanding games and 2175MHz at 1.093V. The latter isn't much of an improvement in performance, but adds a bit more in power draw and so I personally don't find it worthwhile.
> 
> I would not recommend a BIOS with a higher power limit for air cooling. The benefit is partly negated by the frequency drops due to temperature. There are also some games at 4K that will get very close or even hit the 380W power limit on my card, such as Gears 5 and Division 2. Again though, your temps will be quite high at 380W and you'll lose enough frequency as a result to make the BIOS flash almost pointless.


Thank you for the prompt response. I've been out of the loop for a while, so please bear with me.

1. The thermal throttling limit used to be 80c. Has it been lowered to 60c with recent GPUs?
2. Why undervolt the card? Shouldn't I be pushing the voltage as high as possible until I hit the temp limit in order to maximise the overclock?
3. What power limit would you recommend for air cooling?

Last time I overclocked a GPU was with Titan X Pascal SLI. The process I followed was basically:
1. Flash a new BIOS with higher power limit and manual voltage control
2. Set a higher power limit and custom fan curve in MSI AB
3. Set voltage to the highest amount within the safety threshold
4. Keep pushing core clock until the highest stable clock is found
5. Save overclock profile in AB and apply every time before gaming

Is the process any different for 2080Ti GPUs?

The idea is to ascertain the maximum stable overclock for my card and have it running constantly at those speeds in order to maximise frame rate and minimise frame drops when gaming at 4K.

Which BIOS would you recommend to begin with? And which guide would you recommend I follow for the process?

Thank you!
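Step 4 of the process listed above ("keep pushing core clock until the highest stable clock is found") is just a scan, which can be sketched in a few lines. The `is_stable` callable is a stand-in for a real stress run (e.g. a Time Spy Extreme loop); nothing here talks to actual hardware, and the +120MHz ceiling in the example is hypothetical.

```python
# A sketch of the max-stable-offset search: raise the offset one 15 MHz
# bin at a time (Turing clocks move in 15 MHz steps) and keep the last
# offset that passed the stability check.

def find_max_offset(is_stable, start=0, step=15, limit=300):
    """Largest offset (MHz) for which is_stable(offset) holds."""
    offset = start
    while offset + step <= limit and is_stable(offset + step):
        offset += step
    return offset

# Hypothetical card that becomes unstable above +120 MHz
best = find_max_offset(lambda off: off <= 120)
print(best)  # 120
```

In practice each `is_stable` check should be a long run at settled temperatures, since (as discussed above) a card that passes cold at 2100MHz may throttle or crash once it crosses its temperature ticks.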


----------



## kithylin

ahnafakeef said:


> With the stock ASUS Strix OC BIOS, I can run +100 core and +500 memory with the power limit at 125%. +200 core crashes immediately, even with zero memory overclock. The core clock fluctuates between 1800 and 2000 with +100.


Just a note for you for the future: with NVIDIA cards and how NVIDIA's boost works, you probably want to avoid posting "+100" or "+200" core offsets on the forums when referring to your core speed overclock. That doesn't mean anything, and no one can possibly know what speed your card is running at, because even with +100 or +200 set in Afterburner, the resulting core MHz will be completely different for every single user's setup.

Instead, post the actual core MHz you are running at after the +100 or +200 offset, once your card warms up and settles in under gaming load for a few minutes.
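kithylin's point in code form: the same Afterburner offset lands at different real clocks on different cards, because it shifts each card's own boost table. The stock boost values below are hypothetical examples.

```python
# Why "+100" is meaningless on its own: the offset is added to each card's
# individual settled boost, then snapped to a 15 MHz bin, so two cards with
# the same offset run different clocks.

def resulting_clock(stock_boost_mhz, offset_mhz, bin_mhz=15):
    """Actual clock: stock settled boost plus offset, snapped down to a bin."""
    clock = stock_boost_mhz + offset_mhz
    return clock - clock % bin_mhz

card_a = resulting_clock(1935, 100)  # one card's typical settled boost
card_b = resulting_clock(2010, 100)  # another card, same +100 offset
print(card_a, card_b)  # 2025 2100
```

Same "+100", a 75MHz difference in what the cards actually run, which is why the settled MHz under load is the only number worth comparing.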


----------



## ahnafakeef

kithylin said:


> Just a note for you for the future: With nvidia cards and how nvidia's boost works you probably want to avoid ever posting the "+100" or "+200" for your core offset on the forums when referring to your core speed overclock. That doesn't mean anything and no one can possibly have any idea what speed your card is running at, because even with +100 or +200 set in afterburner, the resulting core Mhz speed will be completely different for every single user's setup.
> 
> Instead you want to post what actual core Mhz speed you are running at after the +100 or +200 offset, after your card warms up and settles in with gaming load for a few minutes.


Noted. Thank you.

The offsets were more meant as a reference point as to what I had changed in AB than what the actual effect was, which is why I mentioned the 1800-2000 range. I noticed the boost affecting the core even before I changed the core clock in AB, resulting in the aforementioned arbitrarily fluctuating speeds across different times during gameplay.


----------



## FedericoUY

Wow, these 2080 Tis are a beast. I've got a sample and am testing it out now; it does 2100 with 950mV and 8000 VRAM out of the box. I cannot go higher on clocks because of temps, but they really are much more clock-capable than the 1080 Tis... Insane.


----------



## ESRCJ

ahnafakeef said:


> Thank you for the prompt response. I've been out of the loop for a while, so please bear with me.
> 
> 1. The thermal throttling limit used to be 80c. Has it been lowered to 60c with recent GPUs?
> 2. Why undervolt the card? Shouldn't I be pushing the voltage as high as possible until I hit the temp limit in order to maximise the overclock?
> 3. What power limit would you recommend for air cooling?
> 
> Last time I overclocked a GPU was with Titan X Pascal SLI. The process I followed was basically:
> 1. Flash a new BIOS with higher power limit and manual voltage control
> 2. Set a higher power limit and custom fan curve in MSI AB
> 3. Set voltage to the highest amount within the safety threshold
> 4. Keep pushing core clock until the highest stable clock is found
> 5. Save overclock profile in AB and apply every time before gaming
> 
> Is the process any different for 2080Ti GPUs?
> 
> The idea is to ascertain the maximum stable overclock for my card and have it running constantly at those speeds in order to maximise frame rate and minimise frame drops when gaming at 4K.
> 
> Which BIOS would you recommend to begin with? And which guide would you recommend I follow for the process?
> 
> Thank you!


Your frequency will drop 15MHz once certain temperature thresholds have been met, and this is BIOS-specific. For example, once you hit 40C, your frequency will drop 15MHz (just an example, although a lot of BIOSes have a frequency drop around 40C). There may be another in the 50s. The same kind of mechanism was in place with Pascal; the difference is that Turing clock speeds adjust in increments of 15MHz. Also, I would say that 60C is a best-case scenario for air cooling. You'll be hitting higher temperatures with a high power limit BIOS.
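The stepped behavior described here can be sketched as a simple lookup. The threshold values below are made-up illustrations (the real ones vary per BIOS); only the 15 MHz step size reflects the Turing behavior described above:

```python
# Illustrative model of GPU Boost temperature binning.
# The thresholds are invented examples; real values are BIOS-specific.
BIN_STEP_MHZ = 15  # Turing adjusts clocks in 15 MHz increments

def boost_clock(base_boost_mhz, temp_c, thresholds=(40, 54, 62)):
    """Drop one 15 MHz bin for each temperature threshold crossed."""
    bins_dropped = sum(1 for t in thresholds if temp_c >= t)
    return base_boost_mhz - bins_dropped * BIN_STEP_MHZ

# A card holding 2100 MHz below 40C:
# 39C -> 2100, 45C -> 2085, 60C -> 2070
```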

With that said, there isn't much of a point in flashing a BIOS with a higher power limit on an air-cooled card, since the benefit will be partially negated by what I just mentioned in the previous paragraph. Undervolting the card is the way to go because you will cut down on power and thus have lower temperatures. The bottleneck with your card is the cooling, not the power limit. Also note that transistors perform better at cooler temperatures, so simply cranking everything to the max may not work in your favor.
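The undervolting argument can be made concrete with the usual dynamic-power approximation (power roughly proportional to V² times frequency). The voltages and clocks below are illustrative assumptions, not measurements from any particular card:

```python
def relative_power(v, f, v_ref=1.05, f_ref=2040.0):
    """Dynamic power scales roughly with V^2 * f; capacitance cancels in the ratio."""
    return (v / v_ref) ** 2 * (f / f_ref)

# Dropping from ~1.05V/2040MHz to 0.95V/1995MHz trades ~2% clock
# for roughly a 20% cut in dynamic power (and therefore heat).
savings = 1 - relative_power(0.95, 1995.0)
```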

If you still have the desire to flash your BIOS for a higher power limit (again, I'm not endorsing it for air cooling), then the OP has information on this. And as far as your "minimizing frame drops" statement goes, having your card throttle due to temperatures or power limit will not help with that. Not to mention, overclocking your CPU and tuning your memory accordingly will go a long way with frame time consistency. 



FedericoUY said:


> Wow, these 2080 Tis are beasts. I've got a sample and am testing it out now; it does 2100 with 950mv and 8000 on the VRAM out of the box. It cannot go higher on clocks because of temps, but they really are much more clock-capable than 1080 Tis... Insane.


If you're able to do 2100MHz at 0.95V in a load such as Time Spy Extreme with consistency, then that would be a silicon lottery winner and likely capable of over 2200MHz at 1.093V. There's no way you'd be able to test this on air though since you're definitely not going to be able to sustain that clock speed in a heavy workload with the way GPU boost works.


----------



## JustinThyme

ESRCJ said:


> Your frequency will drop 15MHz once certain temperature thresholds have been met, and this is BIOS-specific. For example, once you hit 40C, your frequency will drop 15MHz (just an example, although a lot of BIOSes have a frequency drop around 40C). There may be another in the 50s. The same kind of mechanism was in place with Pascal; the difference is that Turing clock speeds adjust in increments of 15MHz. Also, I would say that 60C is a best-case scenario for air cooling. You'll be hitting higher temperatures with a high power limit BIOS.
> 
> With that said, there isn't much of a point in flashing a BIOS with a higher power limit on an air-cooled card, since the benefit will be partially negated by what I just mentioned in the previous paragraph. Undervolting the card is the way to go because you will cut down on power and thus have lower temperatures. The bottleneck with your card is the cooling, not the power limit. Also note that transistors perform better at cooler temperatures, so simply cranking everything to the max may not work in your favor.
> 
> If you still have the desire to flash your BIOS for a higher power limit (again, I'm not endorsing it for air cooling), then the OP has information on this. And as far as your "minimizing frame drops" statement goes, having your card throttle due to temperatures or power limit will not help with that. Not to mention, overclocking your CPU and tuning your memory accordingly will go a long way with frame time consistency.
> 
> 
> 
> If you're able to do 2100MHz at 0.95V in a load such as Time Spy Extreme with consistency, then that would be a silicon lottery winner and likely capable of over 2200MHz at 1.093V. There's no way you'd be able to test this on air though since you're definitely not going to be able to sustain that clock speed in a heavy workload with the way GPU boost works.


While the 2080 Ti is a beast, it's a hot beast. When I first got my pair of Strix 2080 Ti OC O11G cards, I didn't have water blocks for them because they didn't exist yet. Any time I did anything with even a moderate GPU load, the fans cranked to jet-engine speeds and all the heat dumped into the case. It would actually raise my water loop temps because it was pulling the hot air through the rads; I had to leave my case door open until I got blocks. After you put them under water, though, that's another story. I can get 2150 sustained out of mine with temps topping out at around 40C after several minutes of intense load. Of course, this is all dependent on the loop temps: overkill on the rads, pumps, and fans, plus a good way to control them (I use an Aquaero), is definitely a plus, so that 40C is based on keeping the liquid temps in the 25C to 30C range.


----------



## keikei

ahnafakeef said:


> Hello everyone.
> 
> Is Metro Exodus a good enough gaming stress test for a 2080Ti overclock?
> 
> With the stock ASUS Strix OC BIOS, I can run +100 core and +500 memory with the power limit at 125%. +200 core crashes immediately, even with zero memory overclock. The core clock fluctuates between 1800 and 2000 with +100.
> 
> Can I get a higher overclock with the stock BIOS? If yes, how?
> 
> As for a custom BIOS, can someone please recommend one that will prevent me from inadvertently frying my card?
> 
> The card is hardly breaking 60c at 100% usage. So I suppose there's quite a bit of performance to be gained from overclocking?
> 
> Playing at 4K. Advice and recommendations welcome and appreciated.
> 
> Thank you.


I was curious about what game to test as an insane metric. I popped in *FFXV* and cranked up most of the bells & whistles; I left out a few of the GameWorks options & AA. It definitely dropped frames below 60. The game did have some odd stops/stutters, which might explain some of the mixed reviews, though.


----------



## FedericoUY

ESRCJ said:


> If you're able to do 2100MHz at 0.95V in a load such as Time Spy Extreme with consistency, then that would be a silicon lottery winner and likely capable of over 2200MHz at 1.093V. There's no way you'd be able to test this on air though since you're definitely not going to be able to sustain that clock speed in a heavy workload with the way GPU boost works.


I'm not sure the clocks are rock solid, since I've only been playing BF V with them (a pretty intensive game anyway). I could not go higher on clocks at 1v: even after setting 2150 it would start throttling down because of temps, so I went down to 2100 and started undervolting. At 950mv I was able to play without problems, while at 925mv it would crash the driver after a few seconds. At 2150 with 1v, it wasn't crashing the driver.
This sample keeps its clock value almost all the time; sometimes, when reaching 48 to 50 degrees, it will throttle down to 2085 but stay there. The card runs between 45 and 50 degrees Celsius (with a custom fan curve).

Just to clarify, my CPU is a 7700k at 5GHz, so the card is not pegged at 99% all the time; this CPU may be bottlenecking. It rides at 9x% almost all the time, and in some cases 8x%. This may also be a factor in why temps, clocks, and volts stay at good values even on air cooling.
The CPU stays in the same utilization range (between 80 and 95%).

I'll try later some 3dmarks and report.


----------



## sblantipodi

Guys, this year and the next will be the years of cloud gaming platforms like Google Stadia, NVIDIA GeForce Now, and Microsoft's game-something.

Do you think PC gaming is nearing its end?
Will it make sense to buy a GPU for "local rendering" in one or two years?


----------



## kithylin

sblantipodi said:


> Guys, this year and the next will be the years of cloud gaming platforms like Google Stadia, NVIDIA GeForce Now, and Microsoft's game-something.
> 
> Do you think PC gaming is nearing its end?
> Will it make sense to buy a GPU for "local rendering" in one or two years?


Of course it will. Those services will never be able to handle running 1440p @ 144+ Hz or 4K @ 120Hz. No one has the internet speed capable of that, on their end or ours. They won't ever be able to cope with VR, either.


----------



## sblantipodi

Using an RTX 2080 Ti here on a [email protected] monitor for max image quality, and there is really no horsepower left for raising the FPS on my system either.

I don't think high FPS can be the reason for most people not to switch to the cloud.


----------



## kithylin

sblantipodi said:


> Using an RTX 2080 Ti here on a [email protected] monitor for max image quality, and there is really no horsepower left for raising the FPS on my system either.
> 
> I don't think high FPS can be the reason for most people not to switch to the cloud.


There are new 120Hz 4K monitors coming out (some are already out), and people are going to want to play at that soon. Besides, it's already been demonstrated on many review websites that a single 2080 Ti cannot maintain 60 FPS minimums at all times at 4K in most modern AAA games. Not if you turn on all the bells and whistles, which includes RTX and anti-aliasing. Almost all people playing at 4K have to turn something off (AA, shadows, etc.) to manage 60 FPS, depending on the title.


----------



## keikei

kithylin said:


> There are new *120Hz 4K* monitors coming out (some are already out), and people are going to want to play at that soon. Besides, it's already been demonstrated on many review websites that a single 2080 Ti cannot maintain 60 FPS minimums at all times at 4K in most modern AAA games. Not if you turn on all the bells and whistles, which includes RTX and anti-aliasing. Almost all people playing at 4K have to turn something off (AA, shadows, etc.) to manage 60 FPS, depending on the title.



I'm looking forward to the Philips model with its mainstream pricing. I believe 30+ inches under $1000? Too lazy to Google. Speaking of expectations, I don't expect Cyberpunk to do 4K/60fps (no RT) with this card, but I'm hopeful.


----------



## gfunkernaught

kithylin said:


> Of course it will. Those services will never be able to handle running 1440p @ 144+ Hz or 4K @ 120Hz. No one has the internet speed capable of that, on their end or ours. They won't ever be able to cope with VR, either.


You don't understand technology...

Internet speed has done anything but decrease since broadband was released to the public. Some switches use copper uplinks that run 10Gbps, and consumer fiber internet has been in the 1Gbps range for years now. It will only increase. Streaming will become the industry standard: it will be easier for everyone and give more control to the devs and suppliers, who won't have to worry about how their content runs on a variety of hardware configurations. At some point a TV and/or VR headset is all players will need. There is latency now, sure, but that will shrink. Local rendering will become a sort of luxury, a vanity, that only people willing to spend top dollar will have. Look at it now, $1,200 for a consumer gaming GPU...? For what? Ray tracing? *chuckle* OK, sure, yeah, give us more money so you can render at home. Tech will change once we can no longer keep electrons from flowing through transistor gates. Intel and Panasonic are already working on light-based circuitry, which, in case you aren't aware, is basically this: look at any circuit board, see those tiny copper connections all over the place, replace them with fiber optics, and figure out how fast that would be. So yeah, streaming [email protected] is inevitable.


----------



## Bradwell

I have a 2080ti Strix OC on water that is running the 1000w XOC bios and I have tons of thermal headroom and power headroom. Is there any program that I can use to boost the voltage to tap into that extra headroom? I tried the K|ngp|n RTX tool but that only works with those cards.


----------



## Serchio

Bradwell said:


> I have a 2080ti Strix OC on water that is running the 1000w XOC bios and I have tons of thermal headroom and power headroom. Is there any program that I can use to boost the voltage to tap into that extra headroom? I tried the K|ngp|n RTX tool but that only works with those cards.




As far as I know, the only way to increase the voltage is to modify the card by placing a shunt resistor.


----------



## FedericoUY

Bradwell said:


> I have a 2080ti Strix OC on water that is running the 1000w XOC bios and I have tons of thermal headroom and power headroom. Is there any program that I can use to boost the voltage to tap into that extra headroom? I tried the K|ngp|n RTX tool but that only works with those cards.


What temps and volts are you handling at around 2150 or 2200?


----------



## Bradwell

FedericoUY said:


> What temps and volts are you handling at around 2150 or 2200?


At 2190/2175, both cards hover at 39-42C depending on ambient temperatures. When using a single GPU, the main card stays at about 37C. I will check voltages once I get home, but I know the voltage-frequency curve does not work with the XOC BIOS.


----------



## Bradwell

Are there any tools to adjust NVVDD? Besides the classified tool?


----------



## keikei

Is RT worth it in BFV?


----------



## sblantipodi

kithylin said:


> There are new 120Hz 4K monitors coming out (some are already out), and people are going to want to play at that soon. Besides, it's already been demonstrated on many review websites that a single 2080 Ti cannot maintain 60 FPS minimums at all times at 4K in most modern AAA games. Not if you turn on all the bells and whistles, which includes RTX and anti-aliasing. Almost all people playing at 4K have to turn something off (AA, shadows, etc.) to manage 60 FPS, depending on the title.


Guys, the difference for the masses can't be that.
The masses will not care about uber framerates or uber resolutions. IMHO, PC gaming is changing forever, and in one or two years we could see the end of the gaming PC as we know it now.
Will we see an RTX 4080 Ti? I'm not so sure.


----------



## Serchio

sblantipodi said:


> Guys, the difference for the masses can't be that.
> 
> The masses will not care about uber framerates or uber resolutions. IMHO, PC gaming is changing forever, and in one or two years we could see the end of the gaming PC as we know it now.
> 
> Will we see an RTX 4080 Ti? I'm not so sure.




Can you imagine playing multiplayer games that way? I can’t and I do not think that it will change anything within the next 10 years.


----------



## JustinThyme

Serchio said:


> Can you imagine playing multiplayer games that way? I can’t and I do not think that it will change anything within the next 10 years.


Ditto. MP games are now and will be for the foreseeable future limited by the latency of the net.


----------



## Serchio

JustinThyme said:


> Ditto. MP games are now and will be for the foreseeable future limited by the latency of the net.




It is true, but the difference is that if you have a game client on your PC and latency to the server is <= 50 ms, the game feels responsive to your actions. There may always be issues with a game's netcode, but if it is written right you will not notice it. When the game client is in the cloud with the same latency, you will notice the game lagging in response to your actions. It only makes sense when you are close to the data center where the game is hosted, and that does not seem possible for all players. That is why I do not believe cloud gaming will change anything for the majority of gamers.
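That difference can be framed as a simple input-to-photon budget. Every number below is an illustrative assumption (render time, display lag, encode/decode cost), not a measurement:

```python
def input_to_photon_ms(render_ms, display_ms, network_rtt_ms=0.0, codec_ms=0.0):
    """Total delay from pressing a key to seeing the result on screen."""
    return render_ms + display_ms + network_rtt_ms + codec_ms

# Local client: only rendering and display lag sit in the input path;
# the 50 ms server RTT affects game state, not your own input response.
local = input_to_photon_ms(render_ms=16.7, display_ms=10.0)

# Cloud client: the same RTT plus video encode/decode lands on every frame.
cloud = input_to_photon_ms(render_ms=16.7, display_ms=10.0,
                           network_rtt_ms=50.0, codec_ms=15.0)
```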


----------



## sblantipodi

Serchio said:


> Can you imagine playing multiplayer games that way? I can’t and I do not think that it will change anything within the next 10 years.


OK, competitive games can be limited, but what about other games?
Will it make sense to spend 1300€ on a GPU just for gaming? I mean, next year or in two years.


----------



## AndrejB

Heaven extreme preset (1965 MHz avg):
Hooked up to the 1440p 144Hz monitor (DP cable): 5600 score
Hooked up to the 4K 60Hz TV (HDMI cable): 6800 score

So the monitor is pulling much more resources than the TV for the desktop?


----------



## J7SC

A quick update on the perennial 2080 Ti temps and also voltages themes...

The two 3DM PortRoyal runs below (for 2x Aorus 2080 Ti / factory w-cooled) used identical 'oc' settings in every way - the *ONLY* difference was ambient temp (17c vs 21c) as it is cool fall day and I opened all the windows. Of course, the difference is not noticeable in other apps, such as gaming, but it does show how sensitive NVBoost algorithms are to temp changes. The system does have an ample 1080/60 total rad space just for the GPUs.

The other thing is that while lots of folks here try to max voltages, the lead card per above usually runs at ~ 1.043v in benches, while the second one is slightly 'worse' at ~ 1.06v. MS AB voltage sliders are left at default, and no custom AB curve. What with the NVBoost algorithms, I suspect that unless you really have sub-ambient / sub-zero cooling, it might be better not to push voltages, even with an XOC Bios, given the trade-off with temps in NVBoost. Of course, that's just my :2cents: and I really should load XOC bios to make a more definitive statement...but so far, temps have had the greatest impact on my setup in therms of FPS etc.


----------



## acmilangr

My EVGA 2080 Ti XC Gaming (with a Kraken X62) is stable at 1.093v/2190MHz with the Galax BIOS (2220MHz in Superposition 4K).

Is there any other BIOS I can test safely? Is there any BIOS that can let me go to 1.1v?


----------



## gfunkernaught

I'm pretty much convinced that simply flashing to a high power limit BIOS isn't a good thing for my card. I've been using the 380W BIOS and it just crashed while playing Control, and not for the first time either. Temps maxed at 41c, the clock was [email protected] The power limit indicator on my OSD was shown, but the clock/voltage didn't drop. So I'm convinced that my card absolutely cannot handle anything above what the stock BIOS (EVGA XC Ultra, 338W) was written for. Should have bought the Founders Edition.


----------



## JustinThyme

Serchio said:


> It is true, but the difference is that if you have a game client on your PC and latency to the server is <= 50 ms, the game feels responsive to your actions. There may always be issues with a game's netcode, but if it is written right you will not notice it. When the game client is in the cloud with the same latency, you will notice the game lagging in response to your actions. It only makes sense when you are close to the data center where the game is hosted, and that does not seem possible for all players. That is why I do not believe cloud gaming will change anything for the majority of gamers.


Latency is everything. If two players are playing on even ground with the same latency it won't matter, but different latency changes the game. Hedge fund data centers pay big bucks to be in or next to a main net hub, where every ms counts. Anything more than 4 ms is unacceptable; they prefer less than 2 ms.


----------



## kithylin

JustinThyme said:


> Hedge fund data centers pay big bucks to be in or next to a main net hub where every ms counts. Anything more than 4 ms is unacceptable. They prefer less than 2 ms.


What does this have to do with RTX 2080 Ti's?


----------



## Sheyster

kithylin said:


> What does this have to do with RTX 2080 Ti's?


If the hedge fund performs well I can buy a second 2080 Ti?


----------



## Jonas Larsson

Could anyone be so kind as to send me the 2080 Ti HOF OC Lab XOC BIOS? I use the KINGPIN XOC at the moment, but it won't hold the voltage.


----------



## bl4ckdot

Didn't get any feedback:
Do people still run the KPE BIOS on a reference 2080 Ti? I'm still using the Galax 380W and was wondering how much you can really push it. I'm at ~17100 graphics score in Time Spy at the moment; can this BIOS push 18K, or is an XOC BIOS required?
TL;DR: how do I push to 18K on water?


----------



## J7SC

Here is an interesting vid re GPU Boost and temps, voltages etc, especially from ~11:40 onward. Jay2C suggests that the first drop in frequency bin happens at 25c (I thought it was at 28c), but in any case, GPU frequency starts to drop quite early, and the lower the voltage for a given frequency setting and setup, the less heat and thus more headroom.


----------



## gfunkernaught

bl4ckdot said:


> Didn't get any feedback:
> Do people still run the KPE BIOS on a reference 2080 Ti? I'm still using the Galax 380W and was wondering how much you can really push it. I'm at ~17100 graphics score in Time Spy at the moment; can this BIOS push 18K, or is an XOC BIOS required?
> TL;DR: how do I push to 18K on water?


I ran that BIOS for a while just for benching; I had my A/C blowing directly onto the GPU and got to [email protected], with a max load temp of 28c during Time Spy Extreme and Port Royal. Gaming crashed within seconds, even at 28c. I tried the Galax 380W BIOS and my card doesn't like it; it crashes occasionally. So I'm stuck with the EVGA XC Ultra stock 338W BIOS, $1250 for a lottery loser.


----------



## m4fox90

gfunkernaught said:


> You don't understand technology...
> 
> Internet speed has done anything but decrease since broadband was released to the public. Some switches use copper uplinks that run 10Gbps, and consumer fiber internet has been in the 1Gbps range for years now. It will only increase. Streaming will become the industry standard: it will be easier for everyone and give more control to the devs and suppliers, who won't have to worry about how their content runs on a variety of hardware configurations. At some point a TV and/or VR headset is all players will need. There is latency now, sure, but that will shrink. Local rendering will become a sort of luxury, a vanity, that only people willing to spend top dollar will have. Look at it now, $1,200 for a consumer gaming GPU...? For what? Ray tracing? *chuckle* OK, sure, yeah, give us more money so you can render at home. Tech will change once we can no longer keep electrons from flowing through transistor gates. Intel and Panasonic are already working on light-based circuitry, which, in case you aren't aware, is basically this: look at any circuit board, see those tiny copper connections all over the place, replace them with fiber optics, and figure out how fast that would be. So yeah, streaming [email protected] is inevitable.


You don't live in the States, do you? Most internet plans are throttled heavily and data capped, rarely ever reaching advertised speeds.

I finally got my baby under water and it's been very nice. Case is so quiet now!


----------



## JustinThyme

kithylin said:


> What does this have to do with RTX 2080 Ti's?


Nothing, just like a great deal of your posts. The conversation took a twist onto gaming and latency; this was in reply to that. 3/4 of this thread has nothing to do with 2080 Tis. While we like to stay on topic, the conversation is bound to have twists along the way. There is not a single thread here where 100% of the posts are 100% on topic.


----------



## gfunkernaught

m4fox90 said:


> You don't live in the States, do you? Most internet plans are throttled heavily and data capped, rarely ever reaching advertised speeds.
> 
> I finally got my baby under water and it's been very nice. Case is so quiet now!


I do live in the States, and the speeds I'm paying for are very close to what they advertise. Down speed obviously depends on the server, but on a good day a speed test netted 940 up and down. Even "4K" isn't actually 4000 horizontal or vertical pixels (unless it's a 4096x2160 TV) :thumb:

I got into water cooling to get a quieter case too. Then I started to overclock; it's still way quieter than air cooling. Plus, I'm either wearing headphones or blasting the 5.1 speakers, so I really don't mind the noise. At idle I can't even hear it.


----------



## JustinThyme

kithylin said:


> What does this have to do with RTX 2080 Ti's?





m4fox90 said:


> You don't live in the States, do you? Most internet plans are throttled heavily and data capped, rarely ever reaching advertised speeds.
> 
> I finally got my baby under water and it's been very nice. Case is so quiet now!


I do live in the States, with 1Gbps over fiber up and down, and I never get capped. It may not always run the entire 1Gbps, but it stays pretty close. Traffic is seldom the issue until you hit a high-latency connection. I do know, though, that nearly every cable connection gets capped, which is the majority of the country. Cable users also share a common connection with multiple users all the way back to the head end. Fiber uses common fibers, but every connection down that fiber gets a dedicated spectrum color.

Glad to see you finally got your card under water. It will make a huge performance increase so long as the loop can handle it. 2080 Tis are hotheads; they can't reach full potential on air unless you jam them in a snow bank.


----------



## Botrin

Hello everyone. I'd like to ask for opinions on what is happening with my 2080 Ti. In very demanding games such as Metro Exodus or Control, I use MSI Afterburner to monitor and overclock. With all voltage values, power limit, etc. at default, everything runs perfectly without any crashes, but when I raise the power limit in Afterburner, without any overclock or anything, the game freezes shortly after and hangs Windows. I should mention I have the latest BIOS with a higher TDP. I changed the power supply in case that was the problem, but it still does the same. Greetings.


----------



## MrTOOSHORT

Proud to break 11,000 in Port Royal:

*https://www.3dmark.com/pr/156539*


----------



## keikei

Surprisingly, even bad case airflow can heat up a decent card. I set my case fans to silent and the card hit 83C, lol. It was a warm day the other day, but clearly I need some better case fans. Looking at these: https://www.newegg.com/noctua-nf-f12-pwm-case-fan/p/N82E16835608026 My current Corsairs look nice but get extremely audible when set to performance mode.

NVIDIA Releases GeForce *436.48* Game Ready Drivers

I'm more interested in TD2 DX12 crash fix.


----------



## devilhead

MrTOOSHORT said:


> Proud to break 11,000 in Port Royal:
> 
> *https://www.3dmark.com/pr/156539*


Damn, that memory clock... that's why I'm 7 positions lower.


----------



## J7SC

MrTOOSHORT said:


> Proud to break 11,000 in Port Royal:
> 
> *https://www.3dmark.com/pr/156539*


 
Very nice :thumb: I think you have the full water block version of the KPE, no? Also, for OCing, are you using the EVBot or 'just' the desktop software app?


----------



## JustinThyme

MrTOOSHORT said:


> Proud to break 11,000 in Port Royal:
> 
> *https://www.3dmark.com/pr/156539*


Yah that one is a bear
https://www.3dmark.com/pr/46520



keikei said:


> Surprisingly, even bad case airflow can heat up a decent card. I set my case fans to silent and the card hit 83C, lol. It was a warm day the other day, but clearly I need some better case fans. Looking at these: https://www.newegg.com/noctua-nf-f12-pwm-case-fan/p/N82E16835608026 My current Corsairs look nice but get extremely audible when set to performance mode.
> 
> NVIDIA Releases GeForce *436.48* Game Ready Drivers
> 
> 
> 
> I'm more interested in TD2 DX12 crash fix.


I don't know that all the airflow in the world is enough to cool a 2080 Ti to achieve its potential, unless it's fans blowing through a radiator.


----------



## MrTOOSHORT

Thx guys. 

I have the full HC block, yes, and I'm using the Classified Tool. The block isn't as good as the EK blocks I've always used for my past cards. Flow isn't that great, or maybe contact isn't the best, but it does look nice. I had and sold my EVBot a few years ago, with regret now; it's hard to find one for sale nowadays. The tool helps so much with the high memory clock. I gave the VRAM 1.6v, no problem. Offset +1800 is very difficult to pass Port Royal with, but it makes a big difference for the scoring.


----------



## FarmerJo

hey quick question,

Does anyone else have an EVGA FTW3 and has tried the XOC BIOS on air? Does only one fan spin for you as well?

thanks


----------



## kata40

I'm not a big gamer; I do something else with video cards.
What is the difference between 16 and 19 power phases on a 2080 Ti?
More processing power or more power consumption?


----------



## bhsmurfy

Been a LONG time since I upgraded. (Can't tell I'm a lurker by my post count.)
Well, I'm part of the club now:
EVGA XC Ultra on the EVGA XOC BIOS.

Still waiting on the new CPU/mobo/mem to arrive, but happy with it so far.
Got it ponied to its own 750W PSU.
Parts should arrive this week so I can replace this ol' 4690k/8GB DDR3 and let this baby really rip.


----------



## keikei

JustinThyme said:


> Yah that one is a bear
> https://www.3dmark.com/pr/46520
> 
> 
> 
> I dont know that all the air flow in the world is enough to cool a 2080Ti to achieve its potential, unless ists fans blowing through a radiator.


That's the thing, I'm not even OCing yet, just the factory settings. The RGB factor definitely swayed my bad decision to buy my current case fans, but due to the size of my case and the large number of HDDs I have (a huge tower behind the front fans), I need more airflow pushing through the rig. I was not expecting to drop over a bill on fans, but here we are. On another note: I'm super excited to play the Metro map for BFV. Huge fan of the original and the first remake; I heard the same designer made this one. RT on!


----------



## bhsmurfy

4690k, err...

2080 Ti powered by a 4690k:
https://www.3dmark.com/pr/156922
Gonna try and break 10k, lol.
OK, I'm happy. Broke 10k with this CPU:
https://www.3dmark.com/pr/156926

Can't wait for the skylark to arrive.

Also, does the Classified Tool work on non-KPE cards? I have my XC Ultra flashed to the XOC BIOS and am running 2160MHz at around 50C under 100% GPU load.


----------



## acmilangr

Is it safe to try the Gigabyte Gaming OC BIOS on my EVGA XC Gaming?


----------



## J7SC

MrTOOSHORT said:


> Thx guys.
> 
> I have the full HC block yes and I'm using the Classified Tool. The block isn't as good as an EK block I always use for my past cards. Flow isn't that great, or maybe contact isn't the best. But it does look nice. I had and sold my Evbot a few years ago with regret now. Hard to find one for sale now a days. The tool helps so much with the high memory clock. I gave the vram 1.6v, no problem. Offset +1800 is very difficult to pass Port Royal, but it makes a big difference for the scoring.


 
Thanks. I hope you can get your hands on an EVBot (I used them before on KPEs/CLs 700 and 900 series). That said, does the Classified Tool not include most/all of the EVBot functions (ie PLL etc), other than obviously not adjusting things on the fly ?

Also, the latest NVIDIA driver, 436.48 WHQL, seems to be ever so slightly faster than 436.20. I haven't run it at full tilt yet (the heat-off has to wait for the weekend)...but in my base-mode run for Port Royal, which is stock everything other than PowerTarget at 110%, it gained a consistent 50 points in Port Royal in SLI compared to 436.20 (last 436.20 Port Royal SLI run: https://www.3dmark.com/pr/155922 )


----------



## EarlZ

Does anyone know if the NZXT G12 will fit on the Gigabyte 2080Ti Gaming OC ?


----------



## gfunkernaught

Timespy 14663
https://www.3dmark.com/3dm/39893618

Timespy Extreme 6921
https://www.3dmark.com/spy/8758206

Port Royal 10246
https://www.3dmark.com/pr/157114


----------



## MsNikita

Hey guys... has anyone else had issues with their *EVGA RTX 2080 TI XC Ultra* P/N: 11G-P4-2387-KR?

I made the original purchase less than six months ago and am already on my second replacement. The first one suddenly died with no artefacts or anything; it simply stopped powering up. Then last week its replacement also died in a similar fashion, but this time on a different board. I'm starting to think there's a dodgy batch out there, or am I extremely unlucky?

Anyone else had any similar issues? Right now I'm getting them swapped out by my supplier, who's been nice enough to put up with me. I don't wanna have a third board, only to experience the same problem... It's a hassle draining my loops.


----------



## bhsmurfy

UltraNEO said:


> Hey guys.. Has anyone else had issues with their *EVGA RTX 2080 TI XC Ultra* P/N: 11G-P4-2387-KR ??
> 
> I made the original purchase less than six months ago, already on my second replacement. The first one suddenly died with no artefacts or anything, it simply stopped powering up. Then last week it's replacement also died in a similar fashion but this time on a different board. I'm starting to think there's a dodgy batch out there, or am I extremely unlucky?
> 
> Anyone else had any similar issues? Right now I'm getting them swapped out from my supplier who's been nice enough to put up with me. I don't wanna have a third board, only to experience the same problem.. It's a hassle draining my loops.


I got mine from Best Buy about half a month ago. No problems so far except user error.
I've been running the XOC BIOS on mine. Max watts 720W @ 1.3 V. I've frozen it a couple of times to where I had to put an older GPU in and reflash it back to stock and start from scratch. At the same time one of my mounting screws was loose on the mobo/case, so I think that was the cause of a short, and I probably didn't need to flash the BIOS back to stock. Did all that while taking components out of the case and laying them on cardboard, and now everything works.

Hope yours decides to work.

I've gotten one clean bench @ 2200 MHz on Port Royal. Couldn't get Fire Strike to finish; it stops at the last second on the combined test. I figure it's my dated 4690K with a bent pin though.

What's your PSU? No, I'm not gonna blame this on the PSU like 700 other posts; just curious. I couldn't run my 2080 Ti XC Ultra stock with a stock 4690K and 8GB RAM, no RGB fans or any other extra power draw, on a CX750M. So I'm powering everything with my old 600M/750M.
I have an 8-pin going to each PSU, while the 750M powers the mobo/CPU and the 600M has the SATAs.









Also: does the Classified Tool work on XOC-flashed non-KPE cards?
I have the XOC BIOS on my XC Ultra and it's working fine, considering the KPE has 3 connectors. I do notice I don't get a voltage reading on mem or core in x1, probably because the hardware isn't there, kinda like iCX.

Curious whether what the Classified Tool controls on the card is hardware-dependent somewhere?

I live in basically a chiller (push-pull on the CPU AIO rad blowing into the GPU); ambient temps from the push-pull are normally 5-11°C. Yes, I live where it's freezing and have cutouts in the door for my rad. So my max at full load has been 50°C (on air, albeit) @ 1.125 V @ 2190 MHz. I have problems getting into 2200 though. Max wattage on the card via GPU-Z says 690W at peaks.

EDIT2: Just peaked at 716.4W but still at 1.1250 V, max 56°C now. How do I turn this into clean power? I feel like with my thermals and the wattage going in, it should be able to hold 2200 MHz core.

EDIT3: Can't tell VRM temp because there's no iCX, but touching the backplate it's cool. Like cold. Not room temp, but cold.

My temps are telling me I'm good. Just need some more volts.









Can't get past 2205 MHz now :*(



At 10480 in Port Royal now. How much more score can I figure on going from a 4690K on 8GB DDR3 (two sticks running single-channel, mind you) when I get my new XE and other supporting mods?

EDIT
[email protected] 2190gpu/2193.7mem
I know this card can handle more 









Yeah, I know my Haswell loves juice. I also have a broken pin on my mobo; anything on core 3 stutters a tad.


TL;DR
Is my next step a shunt?


----------



## TK421

Does anyone have data supporting that Samsung VRAM scales positively in frequency with higher temperatures, while garbage-tier Micron and Hynix scale positively with lower temperatures?


----------



## keikei

UltraNEO said:


> Hey guys.. Has anyone else had issues with their *EVGA RTX 2080 TI XC Ultra* P/N: 11G-P4-2387-KR ??
> 
> I made the original purchase less than six months ago, already on my second replacement. The first one suddenly died with no artefacts or anything, it simply stopped powering up. Then last week it's replacement also died in a similar fashion but this time on a different board. I'm starting to think there's a dodgy batch out there, or am I extremely unlucky?
> 
> Anyone else had any similar issues? Right now I'm getting them swapped out from my supplier who's been nice enough to put up with me. I don't wanna have a third board, only to experience the same problem.. It's a hassle draining my loops.


How soon did both die? My purchase has only been a few weeks old. I would have to double-check that exact model #.

*driver hot fix: https://videocardz.com/driver/nvidia-geforce-game-ready-436-51-hotfix


----------



## FedericoUY

bhsmurfy said:


> Spoiler
> 
> 
> 
> I got mine out of best buy about half a month ago. No problem so far except user error.
> I've been running XOC bios on mine. Max watts 720w @ 1.3mV. I've froze it a couple times to where i had to put an older gpu in and relfash it back to stock and start from scratch. At the same time one of my mounting screws where loose on the mobo/case so think that was cause of short and probably didnt need to flash bios back to stock. did all that while taking componets out of case and laying on cardbord and now everything works.
> 
> Hope yours decides to work.
> 
> I've gotten one clean bench @ 2200MHz on Port Royal, Couldnt get firestrick to finish, stops last second on combined test. figure its my dated 4690k with a bent pinn tho.
> 
> Whats your PSU. No im not gonna blame thison the PSU like 700 other post. Just curious. I couldnt run my 2080 ti xc ultra stock with a 4690k stock and 8gb ram no rgb fans or anything extra power draw. was with a CX750M So im powering my old 600m/750m to everything.
> I have a 8pin going to each PSU while 750M is power Mobo/CPU and 600M has satas.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Also. Does classified tool work on XOC flashed non KPE cards?
> I have XOC on my XC Ultra and working fine consider it KPE has 3 connectors. I do notice I dont read mV on mem or core in x1, probably because the hardware isnt there. kinda like icx.
> 
> Curious if what controls classified tool -> card is hardware dependency somewhere?
> 
> I live in basically a chiller(push pull on cpu aio rad blowing into the gpu) ambient temps from push pull are norm 5c-11c; yes i live where its freezing and have cut outs on the door for my rad. so my max at full load has been 50c(air beit) @ 1.125mV @ 2190Mhz I have problems getting into 2200 tho. Max wattage on the card via gpu-z says 690w at peaks.
> 
> EDIT2: Just peaked at 716.4W but still at 1.1250V max 56c now. How do i turn this into clean power. I feel like with my thermal and wattage in it should be able to hold 2200MHz core.
> 
> EDIT69??: Cant tell vrm temp cause no icx but touching backplate is cool. like cold. not room temp. but cold.
> 
> My temps are telling me im good. Just need some more volts.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 2205Mhz cant get past now :*(
> 
> 
> 
> @10480 Port Royal now. How much more of a score can i figure from going from a 4690k on ddr3 8gb (dual running single-chan mind you) when i get my new XE and other supporting mods?
> 
> EDIT
> [email protected] 2190gpu/2193.7mem
> I know this card can handle more
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Yeah i know my haswell loves juice. I have a broke pin on my mobo also. anything on core 3 stutters a tad.
> 
> 
> TL;DR
> Is my next step a shunt?


Which test are you throwing at it? Can you sustain 2190 on air?


----------



## MsNikita

keikei said:


> How soon did both die? My purchase has only been a few weeks old. I would have to double-check that exact model #.
> 
> *driver hot fix: https://videocardz.com/driver/nvidia-geforce-game-ready-436-51-hotfix


Within a couple of weeks of its replacement!

When I said it stopped working, I mean it literally died. No power to the board whatsoever! Doesn't show up in the BIOS/UEFI, not seen by Windows; even the lights on the fan are completely off. I even tried it in another system, same... Dead as a dodo.


----------



## acmilangr

My EVGA 2080 Ti XC Gaming (with a Kraken X62 AIO) is stable at 2190 MHz / 1.093 V.
It can pass the Superposition 4K benchmark at 2225 MHz.

But there is very small difference in performance from default bios. 

Is there any other bios I can try?


----------



## bhsmurfy

acmilangr said:


> My evga 2080ti XC gaming (with kraken x62 AIO) is stable At 2190mhz/1.093v.
> Can pass 2225mhz at superposition 4k benchmark.
> 
> But there is very small difference in performance from default bios.
> 
> Is there any other bios I can try?


I use the EVGA XOC BIOS. The voltage limit goes to 1.125 V.


----------



## acmilangr

bhsmurfy said:


> acmilangr said:
> 
> 
> 
> My evga 2080ti XC gaming (with kraken x62 AIO) is stable At 2190mhz/1.093v.
> Can pass 2225mhz at superposition 4k benchmark.
> 
> But there is very small difference in performance from default bios.
> 
> Is there any other bios I can try?
> 
> 
> 
> I use the EVGA XOC bios. Power limit goes to 1.125mV

But can you use curve?


----------



## bhsmurfy

acmilangr said:


> But can you use curve?


No, but you really don't need it as far as I can tell.
10.55k
https://www.3dmark.com/pr/157200
2190 MHz GPU core (dips to 2160) / 2193 MHz mem.
55°C max, air cooled / 1.125 V

10.58k
https://www.3dmark.com/pr/157211
2205 MHz GPU core (dips to 2175) / 2194 MHz mem
48°C max, air cooled / 1.125 V


I don't think the Classified Tool works with the XC Ultra on the XOC BIOS though.
Gonna be getting a KPE soon anyway, but I wanted to push this non-custom-PCB XC Ultra to its limits.

If you're hitting a constant 2225 at 1.093 V, I'd like to see what you could get on the XOC BIOS with the extra voltage.

I think we both got pretty lucky with our XC Ultras.


----------



## acmilangr

bhsmurfy said:


> acmilangr said:
> 
> 
> 
> But can you use curve?
> 
> 
> 
> No but you really dont need it as far as I can tell.
> 10.55k
> https://www.3dmark.com/pr/157200
> 2190MHz GPU Core(Dip to 2160)/2193Mhz Mem.
> 55c max Air Cooled/ 1.125mV
> 
> 10.58k
> https://www.3dmark.com/pr/157211
> 2205Mhz GPU Core(Dip to 2175)/2194Mhz Mem
> 48c Max Air Cooled/ 1.125mV
> 
> 
> I dont think the classified tool works with the XC Ultra with XOC Bios tho
> Gonna be getting a KPE soon anyways but wanted to push this non custom-pcb xc ultra to its limits.
> 
> If you're hitting 2225 constant with 1.093mV I'd like to see what you could get on the XOC Bios with the extra voltage.
> 
> I think we both got pretty lucky with our XC Ultras.

2220 (sorry, I was wrong about 2225), and only in Superposition. 2190 is stable everywhere.

https://youtu.be/3XtlcouO-RE


----------



## FedericoUY

acmilangr said:


> 2220 (sorry I was wrong 2225) only on superposition. 2190 stable everywhere.
> 
> https://youtu.be/3XtlcouO-RE


Nice results there. I don't think those clocks are possible on air. Did you try yours on air?


----------



## sblantipodi

Is now a good time to buy an RTX 2080 Ti, or would it be better to wait until Christmas for an RTX 3080 Ti?
When is the new RTX expected?


----------



## FedericoUY

sblantipodi said:


> is it a good day to buy a RTX2080Ti or it will be better to wait for Christmas for a RTX3080Ti ?
> when new RTX is expected?


There's no point in always waiting for newer versions; you could be waiting forever. You can be sure there will be no 3080 / Ti by Christmas.
The 2080 Ti is a great card, still fairly new, and it performs like hell. Go ahead and grab one!

*PS: The "always wait" comment isn't aimed at you in particular; it's because I see many comments along these lines, many people who do this...


----------



## sblantipodi

FedericoUY said:


> There's no point on always wait newer versions, you could be almost always wating for newer versions. 2080 ti is a great card, pretty new and performs like hell. Go ahead and take one!


Yes, but when is the new RTX expected?
Christmas?


----------



## FedericoUY

sblantipodi said:


> yes but when the new RTX is expected?
> christmas?


I don't think there will be a mainstream GPU (excluding Titans) more powerful than the 2080 Ti before the second quarter of 2020...


----------



## sblantipodi

FedericoUY said:


> I don't think there will be a mainstream gpu (excluding titans) more powerful than 2080 ti before 2nd quarter of 2020...


ok thanks


----------



## acmilangr

No I didn't


----------



## ahnafakeef

sblantipodi said:


> is it a good day to buy a RTX2080Ti or it will be better to wait for Christmas for a RTX3080Ti ?
> when new RTX is expected?


Judging by the performance I'm getting at 4K from my one-week-old build, I'd say it's worth it, since you'd be getting top-tier performance right now for however many months away the new flagship is. And you can always sell it around the 3080 Ti launch without taking too much of a hit to your wallet.

However, this does depend on whether the performance increase over your current GPU warrants an immediate upgrade. If you're happy to stick it out with your current GPU until whenever the next flagship is released, you might not want the hassle of buying and selling a new GPU within a span of mere months.

Also, I'm not aware of when Nvidia might launch the new card. If it's less than 2-3 months away, my point might be entirely moot. But I doubt it'll be that soon; otherwise we'd be seeing rumors flying around left and right. Just my two cents though.


----------



## acmilangr

FedericoUY said:


> acmilangr said:
> 
> 
> 
> 2220 (sorry I was wrong 2225) only on superposition. 2190 stable everywhere.
> 
> https://youtu.be/3XtlcouO-RE
> 
> 
> 
> Nice results there. I think on air those clocks are non possible. Did you tried yours on air?

 No I didn't


----------



## bhsmurfy

FedericoUY said:


> Nice results there. I think on air those clocks are non possible. Did you tried yours on air?


https://www.3dmark.com/pr/157211
2080 Ti on AIR.
Can't get higher though.


----------



## bhsmurfy

sblantipodi said:


> is it a good day to buy a RTX2080Ti or it will be better to wait for Christmas for a RTX3080Ti ?
> when new RTX is expected?


You think there will be a 3080 Ti launch in 2019 or 2020? Confused..
They're still working on the 20-series; it's kind of a pipe dream to think a 30-series is coming any time soon.


----------



## bhsmurfy

FedericoUY said:


> Wich is the test are you throwing at it? Can you sustain 2190 on air?


Port Royal, considering it's not a 4K setup yet.


----------



## Diablix6

I'm thinking about buying a second GPU for my main build for NVLink SLI, but I don't want to break the bank. I can buy the MSI 2080 Ti Ventus GP, without USB-C, for 1000EUR, but it has a 1545 MHz boost clock so it should be a non-A chip.


I think I read an article a couple of months ago about new chips for the 2080 Ti, saying new stock should come with an unlocked BIOS. Do I remember correctly, or was that about another GPU in Nvidia's stack?


I could buy the Gigabyte 2080 Ti Gaming OC for 80EUR more, but if the Ventus GP can be opened up with a 380W BIOS, I guess I don't need to pay another 80EUR for better cooling and USB-C when I plan to use GPU water blocks.


Thanks for any thoughts.


----------



## JohanssoN85

*Help with MSI 2080ti Seahawk EK X bios*

Hello, I would like to flash the vBIOS on this card to get a higher power limit, to see if it helps with OC.

But since I'm a noob when it comes to flashing GPUs, I'm not sure what vBIOS version I can use. Can anyone help me with this?

Am I taking a big risk if I try to flash another vBIOS onto my card?

Right now I'm reaching 2100 MHz on my card at 45°C when gaming, but I'm constantly at the power limit.


----------



## Asmodian

JohanssoN85 said:


> Hello, i would like to flash my vbios on this card to get a higher powerlimit to see if it helps with OC


It will not help increase your max stable clock at all. A higher power limit does not improve the OC itself; it simply stops the card down-clocking under higher power loads. If you want to increase the voltage it can help prevent throttling, but it does not increase stability by itself.

I like a shunt mod a lot more than BIOS flashing. If you use a BIOS others have reported working on your card the risk is low, but there can be weird issues if your card's PCB is not exactly the one the BIOS is intended for.


----------



## kithylin

Diablix6 said:


> But I think I read article couple months ago about new chips for 2080 Ti, where new stock should be with opened bios. Do I remember correctly, or is it about other gpu in Nvidia stack?


Nvidia disabled our ability to write a custom BIOS to the cards starting with the Pascal (GTX 1000) series, and the same holds for the RTX 2000 series and GTX 1600 series. The days of Nvidia video cards having an "open BIOS", where we could manipulate it and flash our own customized options back, are gone and part of history now.


----------



## J7SC

kithylin said:


> Nvidia disabled our ability to write a custom bios to the cards with the Pascal (GTX 1000) series, also well for the RTX 2000 series and GTX 1600 series. The days of Nvidia video cards having an "Open Bios" where we can manipulate it and flash our own customized options back to it are gone and a part of history now.



...yes, a sad trip down memory lane. I remember that for my GeForce 600 series cards, I would save the BIOS, rename it to .txt, plug in values as I saw fit, save, rename it back to .rom, and flash it onto the cards. Voila!


----------



## JustinThyme

I don't think the encrypted BIOS changes anything. I tried several, and it's more about cooling than anything else, at least for me. No matter what the power limit is and no matter what the voltage is, the highest I've seen is 2200, and that's only good for a few quick runs; 2150 is more realistic and 2100 is VERY stable. These cards run hot. On air you don't stand a chance unless it's -40 where you live and you put the machine out on a balcony or whatever.


----------



## kx11

ghost recon breakpoint performance 4k ultimate graphics , kingpin under hydrocopper block


----------



## Diablix6

kithylin said:


> Nvidia disabled our ability to write a custom bios to the cards with the Pascal (GTX 1000) series, also well for the RTX 2000 series and GTX 1600 series. The days of Nvidia video cards having an "Open Bios" where we can manipulate it and flash our own customized options back to it are gone and a part of history now.



Sorry, I meant to say that I remember reading an article a couple of months ago saying they won't separate new 2080 Ti chips into A and non-A versions. Or is that just my imagination? Are chips still binned into A and non-A versions?


----------



## FedericoUY

kx11 said:


> ghost recon breakpoint performance 4k ultimate graphics , kingpin under hydrocopper block
> 
> 
> 
> Spoiler
> 
> 
> 
> https://www.youtube.com/watch?v=VmNpNMsQ_KE


Nice results there. What is your vgpu for that?


----------



## kx11

FedericoUY said:


> Nice results there. What is your vgpu for that?



Well, it's a 2080 Ti Kingpin running the OC BIOS.


----------



## FedericoUY

kx11 said:


> well it's a 2080 ti Kingpin running the OC bios ,


That's OK, but I was asking: at that clock speed (2145 MHz), which voltage is set on the GPU? Did you set a curve, or just leave it automatic?


----------



## acmilangr

Is it safe to try the Gigabyte Windforce OC or Gaming OC BIOS on my EVGA XC Gaming?


----------



## Sheyster

acmilangr said:


> is it safe to try Gigabyte Windforce OC or Gaming OC bios on my EVGA XC GAMING?


Why not just use the GALAX 380W BIOS? That's a reference PCB card as far as I know.


----------



## kx11

FedericoUY said:


> That's ok, but I was asking at that clock speed (2145mhz), wich voltage is setting to the gpu... ? Did you curved it, or just automatic?



That was at stock voltage, meaning I didn't use the Classified E200 tool that removes the Nvidia voltage protection. The clocks were not curved (+120 in AB).


I have a curve for this GPU, and let's just say it does very, very well at stock voltage.


----------



## acmilangr

Sheyster said:


> acmilangr said:
> 
> 
> 
> is it safe to try Gigabyte Windforce OC or Gaming OC bios on my EVGA XC GAMING?
> 
> 
> 
> Why not just use the GALAX 380W BIOS? That's a reference PCB card as far as I know.

That's the one I'm using.
Just for testing; sometimes a BIOS with a lower power limit performs better.


----------



## keikei

sblantipodi said:


> yes but when the new RTX is expected?
> christmas?



Not anywhere near Xmas, but if this prediction is correct, then next summer or earlier. Seems plausible as Navi 2 is also rumored around that timeframe.


----------



## J7SC

keikei said:


> Not anywhere near Xmas, but if this prediction is correct, then next summer or earlier. Seems plausible as Navi 2 is also rumored around that timeframe.



...yeah, I think that NVidia might have something in the pipeline (a Ti Super?), but it will really come down to when AMD's big Navi (and perhaps Intel's discrete Xe GPU) makes an appearance, and what they offer.

Also, a single 2080 Ti (never mind dual) is still more than plenty. I ran some 8K Superposition at 18°C ambient just now. While it hits 2220 MHz, it does cycle back to 2205 as temps reach and stay at 28°C. Subtest #8 in Superposition is a good one for checking temps vs. GPU speed, as it is a longer, stable scene. Max watts were just under 374W on the stock-BIOS Aorus XTR (factory water-cooled), and without touching the voltage slider, max voltage was 1.068 V.

I have had the card as high as 2235 (bottom), but that needed the voltage adjusted to 1.075 V and overall scores were lower, probably due to the power limit. I'm also not sure yet about the ideal VRAM speed for Superposition 8K, but it is definitely lower than in Port Royal.


----------



## bhsmurfy

Sheyster said:


> Why not just use the GALAX 380W BIOS? That's a reference PCB card as far as I know.


Probably because those won't flash after new drivers. If he has an EVGA card he needs to load the EVGA XOC BIOS.


----------



## Sheyster

bhsmurfy said:


> Probably because those wont flash after new drivers. If he nas EVGA he needs to load the EVGA XOC bios.


Won't flash after new drivers?? Do you mean a newer BIOS/updated revision of the card? The original EVGA 2080 Ti cards were all flashable as far as I know.


----------



## bhsmurfy

Sheyster said:


> Won't flash after new drivers?? Do you mean a newer BIOS/updated revision of the card? The original EVGA 2080 Ti cards were all flashable as far as I know.


I can't get anything non-EVGA onto my card. Tried --protectoff, -6, -f, etc.

Ironically, the custom-PCB XOC BIOS runs just fine on my reference PCB..
Dunno, I'm stumped.
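For readers following along: those flags are nvflash switches. Exact switch names vary between nvflash builds, so treat the following as a rough sketch of a typical cross-flash attempt rather than a guaranteed recipe, and always save the stock image first (flashing the wrong image can brick the card):

```
:: Back up the current vBIOS before touching anything
nvflash64 --save stock_backup.rom

:: Disable the EEPROM write protection
nvflash64 --protectoff

:: Flash another vendor's image; -6 overrides the PCI subsystem ID mismatch check
nvflash64 -6 target_bios.rom
```

As discussed below, this whole sequence can still be refused on newer card revisions regardless of flags.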


----------



## Talon2016

bhsmurfy said:


> I cant get anything non evga onto my card. tried --protectoff, -6, -f etc.
> 
> Ironically the custom PCB XOC bios runs just fine on my ref pcb..
> Dunno, im stumped.


You can't flash the new/recent cards with an older vBIOS, as the newer cards contain newer XUSB firmware.


----------



## NeonFlak

Talon2016 said:


> You can't flash the new/recent cards with older vbios as they contain a newer xusb firmware.



Would this be why the MSI X Trio 406w bios in the OP won't flash on my newer X Trio (March 2019 build date)?


----------



## Talon2016

NeonFlak said:


> Talon2016 said:
> 
> 
> 
> You can't flash the new/recent cards with older vbios as they contain a newer xusb firmware.
> 
> 
> 
> 
> Would this be why the MSI X Trio 406w bios in the OP won't flash on my newer X Trio (March 2019 build date)?

Yes this is exactly why.


----------



## AndrejB

I flashed the Matrix BIOS to my Strix.

As soon as I raise the power limit (in GPU Tweak II), Heaven crashes.

Any suggestions? Should I just go back to the original BIOS?


----------



## kithylin

AndrejB said:


> I flashed the matrix bios to my strix.
> 
> As soon as I raise the power limit (in gpu tweak ii) heaven crashes.
> 
> Any suggestions? Should I just go back to the original bios?


Perhaps you should remove some variables. Are you trying to overclock the core or memory as well? Or are you leaving the core and memory alone at +0 and just raising the power limit?


----------



## AndrejB

kithylin said:


> AndrejB said:
> 
> 
> 
> I flashed the matrix bios to my strix.
> 
> As soon as I raise the power limit (in gpu tweak ii) heaven crashes.
> 
> Any suggestions? Should I just go back to the original bios?
> 
> 
> 
> Perhaps you may want to remove variables. Are you trying to overclock the core or memory with it as well? Or are you leaving the core and memory alone at +0 and just trying to raise the power limit?

Just the power limit; everything else is at stock.

I'm on air, stock cooler; not sure if that makes a difference, but I did set the fan to 80% speed at 70°C.


----------



## J7SC

...re. the questions about new NVidia GPUs raised again on the previous page(s) of this thread: Raja (formerly with Asus, now with Intel) is quoted as suggesting ~June 2020 for Intel's discrete GPU release, per below. Also, yesterday there was a news story about NVidia's 7nm 'Ampere' RTX coming earlier, in the first half of 2020, though the page linked for that story at Igor's Lab quickly disappeared. AMD's big Navi also figures in that general time window.

While NVidia may yet pull a 2080 Ti Super 'refresh' out of the hat before then, it seems like late next spring to early fall of 2020 is going to be exciting for GPU performance aficionados... and if verified rumours about the performance of Intel's top card start to hit earlier, NVidia might have to rein in prices, just as Intel did with last week's announcement re. the 10980XE.


----------



## bhsmurfy

J7SC said:


> ...re. questions about new NVidia GPUs raised again on the previous page(s) in this thread, Raja (formerly with Asus, now with Intel) is quoted as suggesting ~ June 2020 for Intel's discreet GPU release, per below. Also, yesterday there was a news story about NVIdia 'Ampere' 7nm RTX coming earlier in the 1st half of 2020, though the page linked to for that story at Igor's Lab quickly disappeared. AMD's big Navi also figures in that general time window.
> 
> While NVidia may yet pull a 2080 Ti Super 'refresh' out of the hat before then, it seems like late next spring to early fall of 2020 is going to be exciting for GPU performance aficionados...if verified rumours about the performance of Intel's top card start to hit earlier, NVidia might have to reign in prices, just as Intel did with last week's announcement re. 10980XE


I think I see their game. The prices Intel is cutting leave room for Nvidia to drop the double whammy on us, and then raise them again.


----------



## jura11

AndrejB said:


> I flashed the matrix bios to my strix.
> 
> As soon as I raise the power limit (in gpu tweak ii) heaven crashes.
> 
> Any suggestions? Should I just go back to the original bios?


Hi Andrej

I would rather use MSI Afterburner than GPU Tweak II; some software can cause issues.

I'm running the Matrix BIOS on my Asus RTX 2080 Ti Strix with no issues or crashes (the only crash I know of is more related to Turing itself: in some rendering software, and in some games where RT is used, you need to downclock the GPU a bit).

You have two BIOSes (Quiet and the normal one); try the untouched BIOS and see whether it crashes or not. If it still does, you have an issue somewhere else, and the first thing I would do is raise the TDR; I have TDR set at 8.

Hope this helps

Thanks, Jura
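For anyone wanting to replicate the TDR tweak: TdrDelay is a documented Windows Timeout Detection and Recovery registry value under the GraphicsDrivers key. A minimal .reg fragment matching the value of 8 (seconds) mentioned above might look like this; it's a sketch, a reboot is needed for it to take effect, and edit the registry at your own risk:

```
Windows Registry Editor Version 5.00

; Seconds the GPU may stay unresponsive before Windows resets the display driver (default is 2)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
"TdrDelay"=dword:00000008
```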


----------



## AndrejB

jura11 said:


> AndrejB said:
> 
> 
> 
> I flashed the matrix bios to my strix.
> 
> As soon as I raise the power limit (in gpu tweak ii) heaven crashes.
> 
> Any suggestions? Should I just go back to the original bios?
> 
> 
> 
> Hi Andrej
> 
> I would rather use MSI Afterburner than GPU Tweak II, some SW can cause issues
> 
> I'm running Matrix BIOS on my Asus RTX 2080Ti Strix and no issues or crashes(only crash which I know its more related to Turing itself is in some rendering SW and some games where RT is used and in these SW or games you need to downclock GPU a bit)
> 
> You have two BIOS(Quiet and normal one), you can try untouched BIOS if its crashes or not, if yes then you have issue somewhere else and as furst I would raise TDR, I have set TDR at 8
> 
> Hope this helps
> 
> Thanks, Jura

Hey Jura,

Thank you for the suggestion; I'll raise the Windows TDR to 8 when I get home.
At this point I think the card is a dud (silicon-wise), because the default OC profile on the default BIOS crashes as well.

Maybe the TDR will help; I'll let you know.


----------



## AndrejB

jura11 said:


> AndrejB said:
> 
> 
> 
> I flashed the matrix bios to my strix.
> 
> As soon as I raise the power limit (in gpu tweak ii) heaven crashes.
> 
> Any suggestions? Should I just go back to the original bios?
> 
> 
> 
> Hi Andrej
> 
> I would rather use MSI Afterburner than GPU Tweak II, some SW can cause issues
> 
> I'm running Matrix BIOS on my Asus RTX 2080Ti Strix and no issues or crashes(only crash which I know its more related to Turing itself is in some rendering SW and some games where RT is used and in these SW or games you need to downclock GPU a bit)
> 
> You have two BIOS(Quiet and normal one), you can try untouched BIOS if its crashes or not, if yes then you have issue somewhere else and as furst I would raise TDR, I have set TDR at 8
> 
> Hope this helps
> 
> Thanks, Jura

Unfortunately, no go. Raising TdrDelay to 8 and using Afterburner to raise the power limit gives the same result: Superposition and Heaven crash after 2-3 minutes.


----------



## jura11

AndrejB said:


> Unfortunately no go, raising TdrDelay to 8 and using afterburner to raise power limit, same result, superposition and heaven crash after 2,3 min


Hi Andrej

Can you please check in Event Viewer what is crashing: whether it's the app or the OC? If it's the app, I suggest closing Asus GPU Tweak and trying again. If it's the OC, you'll usually get an nvlddmkm error or "Display Driver Stopped Responding and Has Recovered", which usually points to the OC, but sometimes to the driver, etc.

It's strange you're getting crashes with the stock Matrix BIOS; in my case there are no issues with this BIOS.

Hope this helps

Thanks, Jura
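As a side note, the Event Viewer check can also be done from PowerShell. Something along these lines should list recent entries from the NVIDIA display driver; this is a Windows-only sketch, and the provider name can differ between driver installs, so treat 'nvlddmkm' as an assumption to confirm in Event Viewer first:

```
# List the ten most recent System-log events from the nvlddmkm provider
Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'nvlddmkm' } -MaxEvents 10 |
    Format-List TimeCreated, Id, LevelDisplayName, Message
```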


----------



## AndrejB

jura11 said:


> AndrejB said:
> 
> 
> 
> Unfortunately no go, raising TdrDelay to 8 and using afterburner to raise power limit, same result, superposition and heaven crash after 2,3 min
> 
> 
> 
> Hi Andrej
> 
> Can you please check what is crashing in Event Viewer,if its app crashes or if its OC, if its app then I suggest close Asus GPU Tweak and try again, if its OC then usually you will get nvlddmkm error or Display Driver Stopped Responding and Has Recovered which usually points to OC but sometimes driver etc
> 
> Strange with stock Matrix BIOS you are getting crashes, in my case no issues with this BIOS
> 
> Hope this helps
> 
> Thanks, Jura

It's the display driver.
I used MSI Afterburner to raise the power limit and ran the tests.
On the stock Matrix BIOS the card works without issues; it can even run Tomb Raider.

I'll reinstall Windows in the next few days and try again; maybe this installation is a bit corrupted.


----------



## ntuason

Hey guys,

I’m picking up an ASUS Strix 2080 Ti O11G tomorrow, strictly for the OC capability and the 1000W x 100% power limit. I saw in past posts that the Gaming X Trio can't flash modded BIOSes anymore due to newer BIOS update(s)? Just want to make sure the ASUS doesn't have this problem, correct?


----------



## JustinThyme

ntuason said:


> Hey guys,
> 
> I’m picking up an ASUS Strix 2080 Ti O11G tomorrow strictly for the oc capability and 1000W x 100% Power. I saw in the past post that the Gaming X Trio cannot flash modded bios anymore due to newer bios update(s)? Just want to Make sure that the ASUS doesn’t have this correct?


I have two of the same card in SLI. I have flashed them with several BIOS ROMs but went back to stock, as the best I got was actually with the Matrix BIOS, and it only gained 25 MHz or so stable. 2100-2150 MHz, depending on what's running, is about as good as I could get, and I get that on the stock BIOS. Cool is the name of the game, as in under water; it's too hot to get a decent OC otherwise, but that's the case with all the 2080 Tis.


----------



## mattxx88

ntuason said:


> Hey guys,
> 
> I’m picking up an ASUS Strix 2080 Ti O11G tomorrow strictly for the oc capability and 1000W x 100% Power. I saw in the past post that the Gaming X Trio cannot flash modded bios anymore due to newer bios update(s)? Just want to Make sure that the ASUS doesn’t have this correct?


My Gaming X Trio is from April 2019 and I'm currently running the XOC BIOS without any issues.


----------



## jura11

AndrejB said:


> It's the display driver.
> I used msi afterburner to raise the power limit and ran the tests.
> On stock matrix bios the card works without issues, can even run tomb raider.
> 
> I'll reinstall Windows these days and try again, maybe this installation is a bit corrupt.


Hi Andrej 

It's strange that you are getting the nvlddmkm error with the power limit set at max in Heaven or Superposition.

What frequencies or clocks are you seeing in gaming, Heaven, or Superposition?

If a Windows reinstall helps, maybe yes; also try creating a second administrator account and running Heaven or Superposition there.

These two benchmarks are not as heavy as other ones, and it's really strange that they crash.

Hope this helps 

Thanks, Jura


----------



## jura11

ntuason said:


> Hey guys,
> 
> I’m picking up an ASUS Strix 2080 Ti O11G tomorrow strictly for the oc capability and 1000W x 100% Power. I saw in the past post that the Gaming X Trio cannot flash modded bios anymore due to newer bios update(s)? Just want to Make sure that the ASUS doesn’t have this correct?


Hi there 

As @JustinThyme mentioned, the Matrix BIOS is the best BIOS for the Asus RTX 2080 Ti Strix; I use this BIOS and my best OC is in the 2145-2160 MHz range.

You shouldn't have issues with the XOC BIOS either; I have used it, but what I didn't like about that BIOS is that the VRAM runs at full speed.

Hope this helps 

Thanks, Jura


----------



## AndrejB

jura11 said:


> AndrejB said:
> 
> 
> 
> It's the display driver.
> I used msi afterburner to raise the power limit and ran the tests.
> On stock matrix bios the card works without issues, can even run tomb raider.
> 
> I'll reinstall Windows these days and try again, maybe this installation is a bit corrupt.
> 
> 
> 
> Hi Andrej
> 
> Its strange you are getting nvlddmkm error with power limit set at max in Heaven or Superposition
> 
> What frequency or clocks are you seeing in gaming or Heaven or Superposition?
> 
> If Windows reinstall helps, maybe yes, try create second administrator account and try there Heaven or Superposition there
> 
> These two benchmarks are not as heavy as other ones, and it's really strange that they crash
> 
> Hope this helps
> 
> Thanks, Jura
Click to expand...

I was using the default 1800/7400 clocks on the Matrix and seeing 2025-2055 MHz in the benchmarks; should I have lowered them?
Also, I was monitoring with HWiNFO, which may have been conflicting with the benchmarks; but again, it's weird that it didn't crash on stock (without the power limit increase).


----------



## jura11

AndrejB said:


> I was using the default 1800/7400 clocks on the matrix and seeing 2025-2055 in the benchmarks, should of I lowered them?
> Also I was monitoring with hwinfo that may have been conflicting with the benchmarks, but again it's weird that it didn't crash on stock (without the power limit increase)



Hi Andrej 

2025-2055 MHz are quite nice clocks on the Matrix BIOS, because the stock clocks are 2070 MHz, which is what I see under water. I don't think it's related to the OC. Can you please confirm whether the memory (VRAM) modules on your Asus RTX 2080 Ti Strix are Samsung or Micron?

I would try lowering the VRAM by 200 MHz; maybe that helps.

I don't think it's down to HWiNFO, because I use HWiNFO all the time; in my case it runs in the background alongside Aquasuite and SIV64.

I assume the GPU temperatures are good, and the VRM temperatures are good too?

Agree, it's weird, mate.

Hope this helps 

Thanks, Jura


----------



## AndrejB

jura11 said:


> AndrejB said:
> 
> 
> 
> I was using the default 1800/7400 clocks on the matrix and seeing 2025-2055 in the benchmarks, should of I lowered them?
> Also I was monitoring with hwinfo that may have been conflicting with the benchmarks, but again it's weird that it didn't crash on stock (without the power limit increase)
> 
> 
> 
> 
> Hi Andrej
> 
> 2025-2055MHz are quite nice clocks on Matrix BIOS because stock clocks are 2070MHz which I can see under water, I don't think its related to OC, can you please confirm yours memory (VRAM) modules on yours Asus RTX 2080Ti Strix are Samsung or Micron
> 
> I would try lower VRAM by 200MHz, maybe it helps
> 
> Don't think its down to HWiNFO because I'm using all the time HWiNFO and in my case HWiNFO is running at background with Aquasuite and SIV64
> 
> Assume GPU temperatures are good and VRM temperatures are good too?
> 
> Agree its weird mate
> 
> Hope this helps
> 
> Thanks, Jura
Click to expand...

Those clocks were before crashing.

It just seems I got the short straw for the core.
At 70°C core and VRM:
+50 on the default BIOS is the max stable for Heaven (1930-2010 MHz).
Default everything on the Matrix gives a bit better performance (1950-2040 MHz).

On the flip side, it seems I got golden memory (Samsung): 16000 MHz without issues; it can pull off 17000, but with errors (small freezes).

Anyway, thank you for the help, Jura.

Oh, also: the new nvflash allows flashing different BIOSes; it just gives a warning, no more device ID mismatch error.
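
For reference, the usual flashing flow with nvflash looks roughly like the sketch below (a hedged example, not gospel: exact flag behavior varies between nvflash builds, and `modded-bios.rom` / `stock-backup.rom` are just placeholder names):

```shell
# Back up the card's current BIOS before touching anything
nvflash64 --save stock-backup.rom

# Disable the EEPROM write protection before flashing
nvflash64 --protectoff

# Flash the new image; -6 accepts the PCI subsystem ID mismatch
# prompt that newer nvflash builds downgrade to a warning
nvflash64 -6 modded-bios.rom
```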


----------



## FarmerJo

Hey, has anyone been able to get around the XUSB error when flashing a BIOS? I got my hands on the Galax HOF BIOS, but it isn't flashing onto my card.


----------



## dante`afk

kx11 said:


> ghost recon breakpoint performance 4k ultimate graphics , kingpin under hydrocopper block
> 
> 
> 
> 
> 
> https://www.youtube.com/watch?v=VmNpNMsQ_KE


That is all? 2145 with drops to 2130? I'd be pretty pissed paying $1800+ only to achieve clocks that pretty much any standard 2080 Ti does.


----------



## Lownage

Did anyone try the Lightning Z LN2 Bios on their Trio X Gaming?


----------



## FedericoUY

dante`afk said:


> that is all ? 2145 with drops to 2130? i'd be pretty pissed paying 1800$+ only to achieve clocks that pretty much any standard 2080Ti does.


:O :O :O Pretty aggressive comment there! Lol

BTW, checking your sig: which 2080 Ti, cooling, and voltage are you running there?


----------



## acmilangr

acmilangr said:


> is it safe to try Gigabyte Windforce OC or Gaming OC bios on my EVGA XC GAMING?


Could someone please answer this?


----------



## AndrejB

acmilangr said:


> acmilangr said:
> 
> 
> 
> is it safe to try Gigabyte Windforce OC or Gaming OC bios on my EVGA XC GAMING?
> 
> 
> 
> Could someone please answer on this?
Click to expand...

The thing that can happen with flashing other BIOSes is that you can lose the ports (in your case, if the Gigabyte BIOS maps HDMI to your DP and so on), so if you lose all the ports and don't have an iGPU, you will need another PC to flash back the original BIOS.
I doubt you can completely brick your card, but don't take my word for it.


----------



## kx11

dante`afk said:


> that is all ? 2145 with drops to 2130? i'd be pretty pissed paying 1800$+ only to achieve clocks that pretty much any standard 2080Ti does.





oh you want the other video with the crazy OC ?!


----------



## J7SC

kx11 said:


> oh you want the other video with the crazy OC ?!
> (...)



...GPU speed is great, but the driving...


----------



## kx11

J7SC said:


> ...GPU speed is great, but the driving...



Coming straight from Borderlands 3, my driving got a lot worse in RAGE 2.


----------



## keikei

I was expecting horrible RT performance, honestly. Great news.


----------



## iamjanco

keikei said:


> I was expect horrible RT performance honestly. Great news.


You just hit 10,000 posts. There should be a badge for that


----------



## keikei

iamjanco said:


> You just hit 10,000 posts. There should be a badge for that


Appreciate it.  Mostly filler.


----------



## acmilangr

AndrejB said:


> The thing that can happen with flashing other bioses is that you can loose the ports (in your case if the gigabyte bios maps hdmi to your dp and so on), so if you loose all the ports and don't have a igpu you will need another pc to flash back the original bios.
> I doubt you can completely brick your card but don't take my word for it.


Thanks for the answer. I tried it, and it works.


----------



## FedericoUY

kx11 said:


> oh you want the other video with the crazy OC ?!


Nice, man! Just to know... is the monitoring the real-time overlay from Afterburner? Do you configure it some way?


----------



## JackCY

keikei said:


> I was expect horrible RT performance honestly. Great news.


In that case it had better be sold on a 250GB SSD, since it supposedly requires 175GB of free space. What the heck is that? Can't sell enough drives? Persuade game makers to blow up game sizes 10x for no reason or quality gain.


----------



## keikei

JackCY said:


> In that case it better be sold on a 250GB SSD since it supposedly requires 175GB of free space. What the heck is that. Can't sell enough drives? Persuade game makers to blow up the game sizes 10x for no reason or quality gain.


Supposedly, it represents all future DLC content. It is still ginormous, though.


----------



## kx11

FedericoUY said:


> Nice man! Just to know... The monitoring is RT from afterburner? Do you configure it someway?





Yes, through the MSI AB monitoring tab in settings; you can go extreme and enable HWiNFO.dll and monitor RAM frequency / HDD / SSD / network DL, all on screen.


----------



## kx11

keikei said:


> I was expect horrible RT performance honestly. Great news.





People always forget how amazing the COD engine is; aside from Ghosts, every COD release was amazing and delivered on all aspects of PC gaming.


----------



## kithylin

keikei said:


> I was expect horrible RT performance honestly. Great news.


Which game is this again?


----------



## keikei

kithylin said:


> Which game is this again?



Apologies. The new COD. I had linked it in the official thread, but apparently not this one. Duh. https://wccftech.com/modern-warfare...el-and-nvidia-highlights-available-at-launch/


----------



## JonnyV75

New owner of a Evga 2080 Ti XC Ultra with a Heatkiller IV waterblock ... is the Galax bios still the recommended one to flash with?


----------



## TK421

JonnyV75 said:


> New owner of a Evga 2080 Ti XC Ultra with a Heatkiller IV waterblock ... is the Galax bios still the recommended one to flash with?


For daily use, yes; 380W is more than enough.

Maybe look for the 1000W Strix or 2000W Galax BIOS for benchmarks; they're not practical for daily use, since the small relative performance gain comes in exchange for an astronomical increase in power consumption.






I also have this 2080 Ti XC at 0.9V, +150 MHz, eventually 1920 MHz on the core, and +1300 stable so far.

Golden sample or too early to tell?


----------



## JustinThyme

TK421 said:


> for daily use yes, 380w is more than enough
> 
> maybe look for 1000w strix or 2000w galax bios for benchmarks, not practical for daily use since small relative perf gain in exchange for astronomical increase in power consumption
> 
> 
> 
> 
> 
> 
> I also have this 2080ti xc at .9v, +150mhz, eventual 1920mhz on core and +1300 so far stable
> 
> Golden sample or too early to tell?


Can't read the numbers very well, but +150 is meh at best when even meager cards run that. My Strix OC cards are hitting an eventual 2150 MHz.


----------



## TK421

JustinThyme said:


> Cant read the numbers very well but +150 is Meh at best when even meager cards run that. My Strix OC cards are hitting an eventual 2150MHz.


with .9v?


----------



## gfunkernaught

@TK421, that's pretty good for the temp of your GPU. Air cooled, right? Those Strix OC cards that settle around 2150 MHz are way above 0.9V, and probably water cooled too. My XC Ultra with the 380W BIOS starts at [email protected] without an offset. I have +160 on the GPU core and +1000 on the RAM, so mine settles around 2085 MHz, and the temps max out around 38-41°C depending on ambient temp.


----------



## JonnyV75

TK421 said:


> for daily use yes, 380w is more than enough
> 
> maybe look for 1000w strix or 2000w galax bios for benchmarks, not practical for daily use since small relative perf gain in exchange for astronomical increase in power consumption
> 
> 
> 
> 
> 
> 
> I also have this 2080ti xc at .9v, +150mhz, eventual 1920mhz on core and +1300 so far stable
> 
> Golden sample or too early to tell?


Galax 380 it is.

Golden sample? I did get Samsung memory, but I don't think so; average at best? Still testing, but I was able to OC +170 on the core and +1200 on memory with no crashes in Time Spy and Time Spy Extreme, but it crashed in Port Royal. To get Port Royal stable I had to dial it back to +150/+1100; the clock hits 2115 MHz.

That’s with voltage at +100 and power at 130 via Precision X.


----------



## FedericoUY

My MSI X Trio did 2025 with 0.9V, and 2100 with 0.96V. A golden sample should do 2100 stable at 0.95V, and 2200 stable at less than 1.05V. Otherwise, I don't call it a golden, or even a good, 2080 Ti. Let us know how it behaves...

EDIT: Please, when talking about clocks, quote absolute clock speeds (e.g. 2100 MHz) and not "+xxx".
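
Since offsets and absolute clocks keep getting mixed up in this thread, here is a tiny sketch of the arithmetic (the function names and the example figures are just illustrative; the doubling assumes Afterburner's usual convention of showing GDDR6 memory as its DDR rate):

```python
def absolute_core_mhz(observed_boost_mhz: int, offset_mhz: int) -> int:
    """An Afterburner '+xxx' offset only means something relative to the
    clock the card actually boosts to on a given BIOS, voltage, and temp."""
    return observed_boost_mhz + offset_mhz

def effective_mem_mts(afterburner_mem_mhz: int) -> int:
    """Afterburner typically shows GDDR6 as the DDR rate; the effective
    transfer rate is double what it displays."""
    return afterburner_mem_mhz * 2

# A card boosting to 1935 MHz with a +150 offset lands at 2085 MHz
print(absolute_core_mhz(1935, 150))  # 2085
# The '7400' Matrix memory clock quoted above is 14800 MT/s effective
print(effective_mem_mts(7400))       # 14800
```

So "+150" on one card and "+150" on another can be 100 MHz apart in absolute terms, which is why quoting the final clock is more comparable.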


----------



## gfunkernaught

FedericoUY said:


> My msi x trio did 2025 with .9, and did 2100 with .96. Golden sample should be .95 at 2100 stable, and 2200 with less than 1.05 stable. Otherwise, I do not see a golden not even good 2080 ti. Let us know how it behaves...
> 
> EDIT: Please people talking about clocks talk in absolute clocks speed (fe 2100mhz) and not "+xxx".


Offsets can still be mentioned, since there isn't an "absolute" clock speed unless you have the XOC BIOS, and even those throttle based on temp.

I managed [email protected] with the XOC BIOS in Time Spy Extreme and Port Royal, but I had to use my A/C to cool the GPU; the max temp was 28°C after about 20 min of cooling. No crashes during the benchmarks, but any game that uses the RT cores crashed within minutes. I've tried [email protected] with the 380W BIOS and it wasn't stable. I have what is known as the trash sample in terms of overclock capability.


----------



## J7SC

FedericoUY said:


> My msi x trio did 2025 with .9, and did 2100 with .96. Golden sample should be .95 at 2100 stable, and 2200 with less than 1.05 stable. Otherwise, I do not see a golden not even good 2080 ti. Let us know how it behaves...
> 
> EDIT: Please people talking about clocks talk in absolute clocks speed (fe 2100mhz) and not "+xxx".



^^yeah, +xxx doesn't mean much; absolute clocks on the other hand allow for straight(er) comparisons, subject to the all-important temps re. 2080 Ti. Besides, talking about 'Golden Sample' never leads to much good anyways, either way...

:2cents: IMO, a consistent 2145 on GPU and 2040+ on VRAM for a single card means you've got a very good sample, especially if air-cooled. I've run 2160 GPU and 2056 VRAM / Micron for 2x / NVLink / SLI (w-cooled) and spent much of the year in the top 20 of 3DM HoF / Port Royal (still there), but that also meant 'heat off' in my SoHo, and all that jazz...brrr.

For my top card, 2190 on Port Royal, TimeSpyEX and so forth is not too difficult, and I've also posted select 2235 MHz GPUz rendering...but at the end of the day, I cannot tell any difference at all between stock bios and MSI AB + 180 or so in games, unless I have FPS counters open...


----------



## JonnyV75

Well, I can't flash the EVGA XC Ultra with the Galax 380 due to the XUSB error.

Has a newer Galax firmware been made available, or an NVFlash bypass? Or am I just SOL for now?


----------



## keikei

Gud news. The other reason why we bought this card right?


----------



## J7SC

keikei said:


> Gud news. The other reason why we bought this card right?


 
...nice visual improvements ! Now, an 'official' SLI/NVLink DX12 profile would be icing on the cake


----------



## gfunkernaught

J7SC said:


> ^^yeah, +xxx doesn't mean much; absolute clocks on the other hand allow for straight(er) comparisons, subject to the all-important temps re. 2080 Ti. Besides, talking about 'Golden Sample' never leads to much good anyways, either way...
> 
> :2cents: IMO, a consistent 2145 on GPU and 2040+ on VRAM for a single card means you've got a very good sample, especially if air-cooled. I've run 2160 GPU and 2056 VRAM / Micron for 2x / NVLink / SLI (w-cooled) and spent much of the year in the top 20 of 3DM HoF / Port Royal (still there), but that also meant 'heat off' in my SoHo, and all that jazz...brrr.
> 
> For my top card, 2190 on Port Royal, TimeSpyEX and so forth is not too difficult, and I've also posted select 2235 MHz GPUz rendering...but at the end of the day, I cannot tell any difference at all between stock bios and MSI AB + 180 or so in games, unless I have FPS counters open...


Agreed. I only said offsets can still be mentioned because I can set an offset of +160, which on my card would net 2100 MHz but eventually falls to 2085 MHz and sometimes dips to 2070 or 2055, based on the power limit. Also, if you're going to post your speeds, please post the voltage; I see a lot of speeds with no voltages or temps.


----------



## J7SC

gfunkernaught said:


> agreed. i only said offsets can still be mentioned because i can set an offset of +160 which on my card would net 2100mhz but eventually falls to 2085mhz and sometimes dips to 2070 or 2055, based on power limit. also please if you're going to post your speeds, *please post the voltage. i see a lot of speeds with no voltages or temps*.



...posted that just over a week ago, along with Superposition 8K. Also, cards + bios are bone stock.


----------



## Socio

keikei said:


> Gud news. The other reason why we bought this card right?
> 
> 
> https://www.youtube.com/watch?v=PBhnTVuD31I


Just started playing the main game; the visuals with ray tracing are already phenomenal. Can't wait to see the improvements in this expansion.


----------



## JustinThyme

gfunkernaught said:


> @TK421, thats pretty good for the temp of your gpu. air cooled right? those strix oc that settle around 2150mhz, voltage is way above .9v, probably water cooled too. my xc ultra with the 380w bios starts at [email protected] without offset. i have +160 on the gpu core and 1000+ on the ram. so mine settles around 2085mhz, and the temps max around 38-41c depending on ambient temp.


Yes, watercooled. +125 on power, +165 on the GPU, +800 on the VRAM, and voltage left at +0, where it maxes out under load at 0.998V, and temps max out at 42°C.


----------



## Unl33t

Just ordered mine! Gigabyte RTX 2080Ti Gaming OC. Can't wait till it gets here!


----------



## J7SC

Quick question: what are the best driving/racing games out there with great, if not demanding, graphics that one or two 2080 Tis could stretch their legs with? Any with ray tracing and/or NVLink/SLI?

I still play NFS: Most Wanted, but haven't kept up with what is new out there in this genre. Tx


----------



## kithylin

J7SC said:


> Quick question: What are the best driving / racing games out there with great if not demanding graphics one or two 2080 TIs could stretch their legs with ? Any with Ray tracing and/or NVLink/SLI ?
> 
> I still play NFS: Most Wanted, but haven't kept up with what is new out there in this genre. Tx


From the reviews I have seen online, Project C.A.R.S. 1 & 2 both scale well with SLI and support Nvidia Surround. I don't know if they support RTX, though. I also don't currently own an SLI system myself, so I can't confirm this firsthand, but I have seen reviews online showing the series scaling +70% to +75% with a second card in SLI, at least with a 1080 Ti.


----------



## J7SC

kithylin said:


> From the reviews I have seen online, Project C.A.R.S. 1 & 2 both scale well with SLI And support Nvidia Surround. I do not know if they support RTX though. I also do not currently personally own a SLI system so I don't know if this is actually true. But I have seen reviews online showing this game series scaling +70% to +75% with a second card in SLI, at least with a 1080 Ti.


Thanks, kithylin - I'll check Project C.A.R.S. out


----------



## kithylin

J7SC said:


> Thanks, kithylin - I'll check Project C.A.R.S. out


Remember: if you aren't getting good SLI scaling in a game, search Google for an Nvidia profile to import into Nvidia Profile Inspector. Oftentimes a lot of games actually can scale well; it's just that Nvidia's default profiles are (sometimes) garbage/useless. Sometimes people in the community (Reddit and forums) have found better profiles for specific games that yield better SLI scaling. It takes just a few seconds: open the program, click import, browse to the file, import, save, close the program, go game. And it's a one-time thing.


----------



## J7SC

kithylin said:


> Remember: If you aren't getting good SLI Scaling in a game, search in google for a nvidia profile to import in to nvidia profile inspector. Often times a lot of games out there actually can scale well, it's just nvidia's default profiles are (sometimes) garbage / useless. Sometimes people out there in the community (reddit and forums) have found better profiles for specific games that yeild better SLI scaling. It's just a few seconds to open program, click import, browse to file, import, save, close program go game. And it's a one-time thing.


Thanks - back when I did a lot of HWBot, I played with NV Inspector. I recently downloaded a new copy/version for the 2080 Tis and just started playing with it (i.e. using 3DM Port Royal as master/default), never mind all the SLI options... I haven't blown up anything yet, though I've had the odd bit of visual weirdness.


----------



## kithylin

J7SC said:


> Thanks - back when I did a lot of HWBot, I played with NV Inspector. I recently downloaded a new copy/version for the 2080 Tis and just started playing with it (ie using 3DM PortRoyal as master/default), never mind all the SLI options...haven't blown up anything yet, though got the odd visual weirdness


Nvidia Profile Inspector does have an option where you can change one innocent thing, like texture filtering or something to make it a "Customized" profile and then you can export it and save it as a file on your system. That way after tuning things if you blow up a game and can't make it work, you can always import the saved file and "restore" it, which is nice.


----------



## Socio

J7SC said:


> Quick question: What are the best driving / racing games out there with great if not demanding graphics one or two 2080 TIs could stretch their legs with ? Any with Ray tracing and/or NVLink/SLI ?
> 
> I still play NFS: Most Wanted, but haven't kept up with what is new out there in this genre. Tx


Assetto Corsa Competizione; it utilizes ray tracing.

https://www.assettocorsa.net/competizione/


----------



## J7SC

Socio said:


> Assetto Corsa Competizione it utilizes ray tracing
> 
> https://www.assettocorsa.net/competizione/



Thanks !


----------



## monza1412

Socio said:


> Assetto Corsa Competizione it utilizes ray tracing
> 
> https://www.assettocorsa.net/competizione/



errrr...no


https://www.pcgamesn.com/assetto-corsa-competizione/nvidia-rtx-support


----------



## J7SC

monza1412 said:


> errrr...no
> 
> 
> https://www.pcgamesn.com/assetto-corsa-competizione/nvidia-rtx-support



With next-gen consoles and Intel's upcoming discrete GPU apparently adding hardware ray tracing, Assetto Corsa may yet eventually get around to it as well. More importantly, it seems to work in NVLink / 5K...


https://www.youtube.com/watch?time_continue=30&v=Yt3M40ur2-k


----------



## Socio

monza1412 said:


> errrr...no
> 
> 
> https://www.pcgamesn.com/assetto-corsa-competizione/nvidia-rtx-support


I have seen it in several ray tracing game lists, and it looked like it was going to:

https://www.nvidia.com/en-us/geforce/news/assetto-corsa-competizione-rtx-ray-tracing/


----------



## Robostyle

Has anyone used/bought refurbished 2080 Tis? How are they? I have an opportunity to buy one, a Sea Hawk EKWB specifically, for ~$930.

Dunno if it will last as well as any other refurbished card, or if it just isn't worth it; better to wait for the 3080 Ti?


----------



## JustinThyme

Robostyle said:


> Had anyone used/bought refurbished 2080 Ti's? How are they? I have an opportunity to buy one, Sea Hawk EKWB specifically, for ~930$.
> 
> Dunno, if it will last as well as any other refurbished card, or it just isn't worth it, better wait 3080Ti?


A refurb of anything is all about the price and whether they offer a warranty on it. If it's a chump-change discount with no warranty, then it's not worth it. If it's 20% or better with a full manufacturer's warranty, then go for it.


----------



## keikei

Robostyle said:


> Had anyone used/bought refurbished 2080 Ti's? How are they? I have an opportunity to buy one, Sea Hawk EKWB specifically, for ~930$.
> 
> Dunno, if it will last as well as any other refurbished card, or it just isn't worth it, better wait 3080Ti?


As long as it's under warranty you should be gud. Something happens, and you get a new one.


----------



## Robostyle

JustinThyme said:


> Refurb of anything is all about the price and if they offer a warranty on it. If its a chump change discount with no warranty then its not worth it. If its 20% or better with a full manufacturers warranty then go for it.


3 months' warranty; not that much, actually.

Discount? Well, I think everyone here knows the prices for the 2080 Ti, starting from $1200 and going up to $1600-2400, so yeah, it's like 25% off the cheapest ones, like the Inno. And almost a 50% discount if you compare it to a brand-new Sea Hawk.


----------



## mllrkllr88

I will join up 


I have a nice-clocking 2080 Ti Kingpin Edition. It's running 2250/275 in Fire Strike Ultra with a normal-ambient custom water loop. I tested it quickly on LN2 and it was easily doing 2655 on the core in Port Royal. Cheers


----------



## gfunkernaught

Here's some odd behavior: I tried to undervolt my 2080 Ti to [email protected] and played Quake 2 RTX for about 2 hrs; temps never went above 39°C, and no crash. I played Control for 30 min: crash, same temp. I figured Quake 2 RTX would be heavier, since it is fully ray-traced as opposed to Control's partial ray tracing.


----------



## kx11

nice results with the new WOT ray tracing benchmark


----------



## keikei

kx11 said:


> nice results with the new WOT ray tracing benchmark


I did see the news regarding RT and that game. I wonder if the devs would let RT loose, though, instead of limiting it to the tanks? The option for full RT would be noice.


----------



## kx11

keikei said:


> i did see the news regarding RT and that game. I wonder if the devs would let loose RT though instead of just for the tanks? The option to full RT would be noice.



It's mostly on the shadows in this benchmark; really good performance at 4K without things like DLSS.


----------



## keikei

Just a typo right?  https://www.guru3d.com/news-story/i...force-rtx-2080-ti-super-on-their-website.html


----------



## kithylin

JustinThyme said:


> Refurb of anything is all about the price and if they offer a warranty on it. If its a chump change discount with no warranty then its not worth it. If its 20% or better with a full manufacturers warranty then go for it.


EVGA sends refurbished cards as warranty replacements. I once received a warranty replacement from EVGA (a refurb card) and accidentally fried it just by trying to flash a BIOS to it. And it wasn't me; I've flashed BIOSes to cards over a thousand times by now and never fried a card from it before. I'm commenting because you may have similar issues with other factory-refurbished cards if you get one.


----------



## J7SC

keikei said:


> Just a typo right?  https://www.guru3d.com/news-story/i...force-rtx-2080-ti-super-on-their-website.html



The plot thickens... the 2080 Ti Super has apparently been around as a prototype for many months now, "just in case" a big Navi was introduced, and it would not surprise me at all if Nvidia brings it out (for Christmas?), certainly before Ampere. Perhaps it will be paired with slightly faster VRAM. None of this will please Titan RTX owners, especially if the 2080 Ti Super is available with a custom PCB / higher power limits.


----------



## keikei

J7SC said:


> The plot thickens...yet 2080 Ti Super has apparently been around as a prototype for many months now 'just in case' a big Navi was introduced, and it would not surprise me at all if NVidia brings it out (for Christmas ?), certainly before Ampere. Perhaps it will be paired with a bit faster VRAM. None of this will please Titan RTX owners, especially if the 2080 Ti Super will be available with custom PCB / higher PLs.



As a prominent member has already mentioned, there is plenty of wiggle room for a 2080 Ti Supa. It's just a question of when Green wants to drop it and just how powerful it will be. A Christmas launch would certainly help sales (with a price drop for the OG version) and help market RTX in general. Didn't we see the Star Wars-themed TITANs launch around the holiday season?


----------



## Section31

If you're an RTX 2080 Ti owner, your only upgrade path is Ampere (3080 Ti) or big Navi (5900XT). Nvidia is always like that: they let AMD jump ahead and implement the cutting-edge stuff while they slowly cook and develop it, then release a fully ready version. I hope the Ampere launch is more like the GTX 10xx launch, with the 1080 Ti delivering 50%+ more real-world performance than the 1080.


----------



## bhsmurfy

Wouldn't mind someone accidentally PM'ing me the Galax XOC BIOS.


----------



## Serchio

bhsmurfy said:


> Wouldnt mind someone accidently PM'ing me the Galax XOC bios



https://app.box.com/s/8hk4gatmadgjer9tbna8f17bxraowz9v

I'm running a watercooled EVGA RTX 2080 Ti XC at 2175 MHz (1935 default) on the core and 8002 MHz on the memory at 1.05V. I'm using the Galax 380W BIOS. Superposition 8K causes the core to drop to 2160 MHz for almost the whole run (power limit). I have played Control for an hour, and some other games later, to make sure that the OC is stable.

Edit: Max temp after a few hours of gaming was 45°C.
Edit 2: Not the Galax XOC BIOS, but the Galax 380W BIOS.


----------



## GRABibus

Hello,
No new BIOS for 2x8-pin TU102 cards?
Is the 380W Galax still the best one?


----------



## Sheyster

GRABibus said:


> Hello,
> no new Bios for 2x8pins T102 cards ?
> is the 380W Galax still the best one ?


For 24/7 and gaming use, yes.


----------



## bhsmurfy

GRABibus said:


> Hello,
> no new Bios for 2x8pins T102 cards ?
> is the 380W Galax still the best one ?


It comes down to what XUSB firmware you have and what ports you want to sacrifice if you're using a reference PCB.
I'm assuming when you just say TU102 you're talking about non-A. Technically, all 2080 Tis are TU102s; some are non-As.
Can't speak to a non-A, but when I'm benching I use the EVGA XOC, as its XUSB firmware is compatible with my PCB. I don't lose any ports, but I lose the V/F curve, and the fans are stuck at 110% speed.
I have an EVGA XC Ultra.

I've tried installing the Galax XOC BIOS with paper'squad's installer, but it hits an nvflash error somewhere during the install, and I can't see a log to tell what the error is. I'm assuming it's the XUSB firmware, though. Which sucks, because the Galax XOC BIOS has fan control.

I do have a good board, though; I'm assuming so, at least. I know it's not my CPU that helped me get into the top 100; it's the only i5 in the top 100 in sight, nothing but XEs and 9900K/KFs. Should I expect a couple hundred more points when I upgrade the CPU, or is Port Royal purely GPU-bound? I figured my CPU would be bottlenecking this score some.
I got it to run 2205 GPU / 2200 mem for a couple of Port Royal runs.
Only got one score to stick on the Hall of Fame for 1x GPU, though.
I'm happy with #67, and on air.
Can't wait until my other cooling goodies get here.

I'll probably offer this card up for trade +cash for one of the custom pcb boards. Probably Kingpin or Galax. Probably KP tho since it comes with LN2 goodies.


For xusb firmware workaround has anyone been able to decompile a bios with NASM or something and c&p the differences for xusb?


----------



## Unl33t

Just received mine yesterday and installed it. All I can say is: wow, these things are fast straight out of the box! Can't wait to see what gains can be had from here. So far, though, I am very happy I went this path.


----------



## keikei

Unl33t said:


> Just received mine yesterday and installed. All I can say is wow these things are fast straight out of the box! Can't wait to see what gains can be had from here. So far though, I am very happy I went this path.



Fast, right? I'm looking forward to the higher res/Hz monitors (specifically the Philips) in early 2020. 4K/60fps is too easy with this card.


----------



## kithylin

keikei said:


> Fast right?  I'm looking forward to the higher rez/hz monitors (specifically the phillips) in early 2020. 4k/60fps is too easy with this card.


Remember to max out anti-aliasing and all of the little bells and whistles options in game graphics and then see if you still get 60 FPS minimums at all times in all places in games @ 4K. Supposedly from the game reviews online (And Linus even demonstrates it too), even 2080 Ti's will struggle with 60 FPS minimums @ 4K in most AAA titles @ max settings.


----------



## Unl33t

kithylin said:


> Remember to max out anti-aliasing and all of the little bells and whistles options in game graphics and then see if you still get 60 FPS minimums at all times in all places in games @ 4K. Supposedly from the game reviews online (And Linus even demonstrates it too), even 2080 Ti's will struggle with 60 FPS minimums @ 4K in most AAA titles @ max settings.


I only had a small window to fiddle around with it last night, but I was able to run everything on Ultra, including HairWorks, in The Witcher 3 at 4K and was getting a buttery-smooth 60FPS. Haven't had a chance to test other titles yet, unfortunately.

My main reason for upgrading to this card (and an i9-9900K in the coming months) was preparation for Cyberpunk 2077 at 4K.


----------



## memery.uag

*GIGABYTE AORUS RTX 2080 Ti XTREME WaterForce WB*

Cool thread, lots of info. Wish I had come here when I first got this beast. This is an upgrade from a Founders Edition GTX 970 so quite a jump. Here is my GPU-Z validation too, for kicks: https://www.techpowerup.com/gpuz/details/kp28x

Cheers!


----------



## ahnafakeef

kithylin said:


> Remember to max out anti-aliasing and all of the little bells and whistles options in game graphics and then see if you still get 60 FPS minimums at all times in all places in games @ 4K. Supposedly from the game reviews online (And Linus even demonstrates it too), even 2080 Ti's will struggle with 60 FPS minimums @ 4K in most AAA titles @ max settings.


It might be my less-than-perfect eyes, but I can’t see the difference between medium and ultra AA settings in most games on my 32” 4K screen. And with AA turned down and RT turned off, Exodus and Odyssey run between 50 and 60 FPS most of the time.

Granted, that is not maxed out, and certainly not a minimum of 60 FPS at 4K, but it is quite close, and pretty amazing for a chip that isn’t even fully unlocked. Bodes very well for 4K120 with Ampere SLI, don’t you think?

P.S. I’m living in denial of the existence of RT until cards can run it better. Ignorance is bliss.


----------



## keikei

kithylin said:


> Remember to max out anti-aliasing and all of the little bells and whistles options in game graphics and then see if you still get 60 FPS minimums at all times in all places in games @ 4K. Supposedly from the game reviews online (And Linus even demonstrates it too), even 2080 Ti's will struggle with 60 FPS minimums @ 4K in most AAA titles @ max settings.


I'll agree that maxing all settings tanks frames, but if it's not one's first time around the block, then settings should be tuned on a per-game basis. I can't speak to every current AAA title struggling with 4K/60fps, but one game that stands out for me is the current Metro. That game really humbles the current Ti, and I'm not even mentioning RT being enabled.


----------



## J7SC

memery.uag said:


> Cool thread, lots of info. Wish I had come here when I first got this beast. This is an upgrade from a Founders Edition GTX 970 so quite a jump. (...)
> 
> Cheers!



I got my Gigabyte Aorus Xtr WBs last December...no problems and great fun, especially if you can throw lots of radiator space at them.



Spoiler


----------



## bhsmurfy

Looking for suggestions for a water block for my EVGA XC Ultra 2080 Ti. Also, which fittings do you guys trust?


----------



## gfunkernaught

bhsmurfy said:


> Looking for suggestion for a water block for my EVGA XC Ultra 2080 ti. Also fittings you guys trusts.


The EKWB Vector is what I use, with EK fittings. I've read that those new Heatkiller blocks perform better than EK on the 2080 Ti.


----------



## gfunkernaught

Serchio said:


> https://app.box.com/s/8hk4gatmadgjer9tbna8f17bxraowz9v
> 
> I’m running watercooled EVGA RTX 2080TI XC at 2175MHz(1935 default) core and 8002MHz mem at 1.05V. I’m using galaxy 380W bios. Superposition 8k causes core to drop to 2160MHz for almost the whole run - power limit. I have played Control for an hour and some other games later to make sure that the oc is stable.
> 
> Edit. Max temp after a few hours of gaming was 45C.
> Edit 2. Not galay xoc bios but galaxy 380w bios.


Is that the same BIOS as the KFA2 380W BIOS? With the KFA2 380W BIOS I can't get past [email protected] for gaming.


----------



## MrTOOSHORT

The old EK Supreme HF goes well with the Kingpin backplate.


----------



## memery.uag

J7SC said:


> I got my Gigabyte Aorus Xtr WBs last December...no probs and great fun, especially if you can throw lot's of radiator space at it.
> 
> 
> 
> Spoiler


Hey, I was wondering what BIOS you are using, and are you on the latest RGB Fusion 2.0? Mine won't detect my card, and Aorus Engine can't read the BIOS or update it. Any help is appreciated!


----------



## ntuason

Hey guys, quick question for the knowledgeable folks. What determines a “Golden Chip”, or even an above-average one, with the 2080 Ti? What frequency and overclock should a 2080 Ti be able to attain to qualify as a good chip?

Thanks


----------



## J7SC

memery.uag said:


> Hey I was wondering what BIOS are you using? and are you using the latest RGB Fusion 2.0? Mine won't detect my card and Aorus Engine can't read the BIOS or update it. Any help is appreciated!


 
I never changed the BIOS on either card, so whatever stock BIOS came with it is still on there (i.e. this one: https://www.techpowerup.com/vgabios/208102/gigabyte-rtx2080ti-11264-181206 ). You can use GPU-Z to read/save your current BIOS, and NVFlash plus the step-by-step on the OP page in this thread to update (if that is what you want to do). As mentioned before, 'big' cooling is probably the best performance enhancer (and can outweigh the GPU 'lottery'), not least because these Aorus xTr waterblock cards already pull just under 380W max with the stock BIOS, so the rads and fans have to dissipate that.

On RGB Fusion: I *installed* ver. 2.0, changed the colours to my liking (everything worked), then rebooted and *uninstalled* RGB Fusion. The cards hold whatever colour combo you set even without RGB Fusion running or installed. I don't like bloatware (never mind low-level HW-access apps, for security reasons).


----------



## JustinThyme

ntuason said:


> Hey guys quick question for the knowledgeable folks. What determines a “Golden Chip” or even a above average one with the 2080 Ti? At what frequency and overclock should a 2080 Ti able to attain to be qualified as a good chip?
> 
> Thanks


From what I've seen there are no golden chips. They all seem to hit about the same core clocks IF you water cool. On air you are SOL: too hot to get anywhere, and you will throttle quickly. Even with the XOC BIOS, 2100-2150 is about the average. A very few may get 20MHz over that on one run out of the blue.

The only real difference I've seen is in the memory: Micron vs. Samsung. I've seen a few cards get decent clocks from Micron, but for the most part the Samsung memory just clocks better. On my cards I'm getting 2150 on the core and +900 on the VRAM. I can push it to +1000 in some things, but since it doesn't hold everywhere I call +900 stable, as it's repeatable. I've only seen artifacts once at +900, so if you are being super picky, knock it back to +800 and run that all day long.

I have a pair of ASUS Strix 2080 Ti O11G cards with Heatkiller IV blocks and backplates. I had Phanteks blocks on them originally that performed well, but they reused the original backplates, and the little extra passive cooling on the new backplates got me a few degrees lower. The thing with these cards is keeping them cool.

When I first installed the cards I didn't have any blocks, because no one was making them yet. The first thing I tried was Barrow, because they were first to market. What a huge disappointment that was; I never even ran water through them. A test fit showed warped blocks where the GPU was the only thing making OK contact, and the VRAM and MOSFET contact was horrendous: about half the VRAM and 3 of 16 MOSFETs making an impression in the thermal pads. To add insult to injury, I couldn't even fit an NVLink bridge because the water ports were in the way. Anyhow, on air I had to leave the door open on a huge Enthoo Elite case, because the heat from the cards just kept rising, and all that hot air was getting sucked through the rads, heating up the loop temps more than the CPU and two GPUs in the loop do.


----------



## bhsmurfy

ntuason said:


> Hey guys quick question for the knowledgeable folks. What determines a “Golden Chip” or even a above average one with the 2080 Ti? At what frequency and overclock should a 2080 Ti able to attain to be qualified as a good chip?
> 
> Thanks


Seems to be anything stable benching at 2200 core / 2200 mem is golden, while achieving 10.5k+ in Port Royal.
2600 core if on LN2.

Gunslinger has one bench on the HoF for PR with his core at 2475MHz though, and he only scored 10534.
I got a 10583 @ 2205MHz, so a higher clock isn't always faster with this card.


----------



## acmilangr

https://www.3dmark.com/hall-of-fame-2/fire+strike+3dmark+score+ultra+preset/version+1.1/1+gpu

I am in rank #89


----------



## MrTOOSHORT

bhsmurfy said:


> Seems to be anything stable benching at 2200core/2200mem is golden. while achieving 10.5k+ in Port Royal.
> 2600core if ln2.
> 
> Gunslinger has one bench on the HoF for PR with his core at 2475Mhz tho and he only scored 10534
> I got a 10583 @ 2205Mhz so higher clock isnt always faster with this card.


Use the driver Gunslinger used at the time and see what you get.


----------



## kithylin

acmilangr said:


> https://www.3dmark.com/hall-of-fame-2/fire+strike+3dmark+score+ultra+preset/version+1.1/1+gpu
> 
> I am in rank #89


Congrats on getting in the top-100! :thumb:


----------



## bhsmurfy

acmilangr said:


> https://www.3dmark.com/hall-of-fame-2/fire+strike+3dmark+score+ultra+preset/version+1.1/1+gpu
> 
> I am in rank #89


Nice 9700k you got there.


----------



## bhsmurfy

MrTOOSHORT said:


> Use the driver Gunslinger used at the time and see what you get.


Holy **** 417.35 SUCKS. Can barely break 10k @ 2205....
436.15 seems to be best for Port Royal that I can find.


----------



## Serchio

gfunkernaught said:


> that the same bios as the kfa 380w bios? with the kfa 380w bios i cant get past [email protected] for gaming.




Yes, it’s the same one.


----------



## GRABibus

I have big questions about all those people who say "I am stable at 2205MHz"... That's great, but have they tested all the games they play for hours, or only 2 or 3 benchmarks?
For my part, I consider a graphics card overclock stable when I can pass all my benchmark tests and play all my games without any issues. So it takes some weeks of tweaking.

I was happy to bench Time Spy and Heaven at 2175MHz/16600MHz/1.093V (settings in MSI AB).
BUT, once I started playing all my games, I had to decrease those frequencies gradually: 2175MHz on the GPU made me crash to desktop in BFV.
Also, 16600MHz produced some artifacts in the same game.

So yesterday I was at 2145MHz/16500MHz, considering that stable.
BUT, playing COD WWII, I saw some corrupted textures... I then decreased the memory from 16500MHz to 16400MHz, and no more corrupted textures.

I am now at 2145MHz/16400MHz/1.093V...

BUT maybe tomorrow, if I play a new game, I will have to reduce it due to some instability.

Overclocking a graphics card and considering the overclock stable, meaning in all the benchmarks and games you play, takes some weeks of testing.


----------



## bhsmurfy

GRABibus said:


> I have big questions about all those people who say "I am stable at 2205MHz"....That's great but do they have tested all the games they play during hours or only in 2 or 3 benchmarks ?
> From my side I consider a graphics card overclock as stable, when I can pass all my benchmarks tests and play all my games without any issues. So it takes some weeks of tweakings.
> 
> I was happy to bench TimeSpy and Heaven Benchmarks at 2175MHz/16600MHz/1.093V (settings in MSI AB)
> BUT, by starting playing all my games, I had to decrease gradually those frequencies : 2175MHz on GPU makes me Crashed To Desktop in BFV.
> Also, 16600MHz made some artefacts in same game.
> 
> So, I was yesterday at 2145MHz/16500Mz considering it as stable.
> BUT, playing COD WWII, I saw some corrupted textures...I then decreased Memory from 16500MHz to 16400MHz and no more corrupted textures.
> 
> I am now at 2145MHz/16400MHz/1.093V....
> 
> BUT, maybe, tomorrow, if I play a new game, I will have to reduce it due to some instabilities.
> 
> Overclocking a graphics card and considering an overclock as stable, I mean in all benchmarks and games you play, takes some weeks of tests.


Different programs/games/benchmark utilities all have different settings they like. PUBG is a good example; it doesn't even like OCs on CPUs.
Stable is in the eye of the beholder. Stable to me is being able to duplicate something.
I can run Port Royal all day @ 2205.
I game at 2175. The OC doesn't really scale that well in games that don't have RT.


----------



## acmilangr

bhsmurfy said:


> Different programs/games/benchmark utilities all have different settings they like. PUBG is a good example. It doesn't even like OC's on cpus
> Stable is in the eye of the beholder. Stable to me is being able to duplicate something.
> Can run Port Royal all day @ 2205
> I game at 2175. OC doesnt really scale that well in games that dont have RT.


What BIOS are you using?
How did you get to 1.125V?


----------



## ntuason

JustinThyme said:


> From what Ive seen there are no golden chips. They all seem to get the same core clocks IF you water cool. If on air you are SOL. Too hot to get anywhere and you will throttle quickly. Generally even with the XOC BIOS 2100-2150 is about the average. Very few may get 20Hz over that on one run out of the blue.
> 
> The only real difference Ive seen is the capability of the memory....Micron VS Samsung. Ive seen a few get decent clocks from Micron but for the most part the Samsung memory just clocks better. On my cards Im getting 2150 on the core and +900 on the VRAM. I can push it to +1000 on some things but some doesnt cut it so I call +900 Stable as its repetitive. Ive only seen artifacts once on +900 so if you are being super picky knock that back to +800 all day long and tomorrow as well. I have a pair of ASUS Strix 2080Ti O11G cards with Heatkiller IV blocks and back plates. I had phanteks blocks on them originally that still performed well but they used the original back plates and the little extra passive cooling on theback plates gave me a few degrees lower. Thing with these cards is keeping them cool. When I first installed the cards I didnt have any blocks because no one was making them yet. First thing I tried was Barrow because they were the first to market. What a huge disappointment that was. Never even ran water through them. Test fit showed warped blocked where the GPU was the only thing making OK contact. Vram and mosfets was horrendous. like half the Vram and 3 of 16 mostfets making an impression in the thermal pads. Add insult to injury and the couldnt even fit an Nvlink bridge on because the water ports were in the way. Anyhow on air I had to leave the door open on a huge Enthoo Elite case because the heat from the cards just kept rising and all that hot air was getting sucked through the rads heating up the loop temps more than with CPU and two GPUs in the loop does.





bhsmurfy said:


> Seems to be anything stable benching at 2200core/2200mem is golden. while achieving 10.5k+ in Port Royal.
> 2600core if ln2.
> 
> Gunslinger has one bench on the HoF for PR with his core at 2475Mhz tho and he only scored 10534
> I got a 10583 @ 2205Mhz so higher clock isnt always faster with this card.


Thank you for the reply. I asked because I think my card may be pretty decent, as it does +175 on core and +1350 on memory solidly. But I’m tempted to exchange it because it has coil whine, and I’m afraid that the new card might be a worse overclocker and still have coil whine, LOL! Dilemma, dilemma...


----------



## GRABibus

bhsmurfy said:


> Different programs/games/benchmark utilities all have different settings they like. PUBG is a good example. It doesn't even like OC's on cpus
> Stable is in the eye of the beholder. Stable to me is being able to duplicate something.
> Can run Port Royal all day @ 2205
> I game at 2175. OC doesnt really scale that well in games that dont have RT.


You probably use one of these 2 BIOSes:
EVGA RTX 2080 Ti Kingpin Custom PCB (3x8-Pin) 2000W x 100% Power Target BIOS (2000W)
Galax RTX 2080 Ti HOF OC Lab Custom PCB (3x8-Pin) 2000W x 100% Power Target BIOS (2000W)

These are the only ones going up to 1.125V.
They are 3x 8-pin BIOSes.

Can they be used on your 1x 6-pin + 2x 8-pin card?


----------



## GRABibus

ntuason said:


> Thank you for the reply. I asked because my I think my card may be pretty decent as it does +175 on core and +1350 memory solidly. But I’m tempted to exchange it because it has coil whine but I’m afraid that the new card might be a worse overclocker and still have coil whine LOL! Dilemma, dilemma...


Keep it !!


----------



## Sheyster

GRABibus said:


> I have big questions about all those people who say "I am stable at 2205MHz"....That's great but do they have tested all the games they play during hours or only in 2 or 3 benchmarks ?
> From my side I consider a graphics card overclock as stable, when I can pass all my benchmarks tests and play all my games without any issues. So it takes some weeks of tweakings.
> 
> I was happy to bench TimeSpy and Heaven Benchmarks at 2175MHz/16600MHz/1.093V (settings in MSI AB)
> BUT, by starting playing all my games, I had to decrease gradually those frequencies : 2175MHz on GPU makes me Crashed To Desktop in BFV.
> Also, 16600MHz made some artefacts in same game.
> 
> So, I was yesterday at 2145MHz/16500Mz considering it as stable.
> BUT, playing COD WWII, I saw some corrupted textures...I then decreased Memory from 16500MHz to 16400MHz and no more corrupted textures.
> 
> I am now at 2145MHz/16400MHz/1.093V....
> 
> BUT, maybe, tomorrow, if I play a new game, I will have to reduce it due to some instabilities.
> 
> Overclocking a graphics card and considering an overclock as stable, I mean in all benchmarks and games you play, takes some weeks of tests.





bhsmurfy said:


> Different programs/games/benchmark utilities all have different settings they like. PUBG is a good example. It doesn't even like OC's on cpus
> Stable is in the eye of the beholder. Stable to me is being able to duplicate something.
> Can run Port Royal all day @ 2205
> I game at 2175. OC doesnt really scale that well in games that dont have RT.


I run 2100 at 1.05V for gaming, 100% stable, and I agree that BFV is a very good GPU OC test. It's just not worth it to push a couple of bins higher at 1.093V; there is virtually no difference while gaming. All you get is more TDP and more heat.


----------



## acmilangr

ntuason said:


> Thank you for the reply. I asked because my I think my card may be pretty decent as it does +175 on core and +1350 memory solidly. But I’m tempted to exchange it because it has coil whine but I’m afraid that the new card might be a worse overclocker and still have coil whine LOL! Dilemma, dilemma...


Try a different PSU.
Maybe it sounds weird, but Corsair PSUs quite often cause coil whine on GPUs. I have seen it myself.


----------



## ntuason

GRABibus said:


> Keep it !!


Ya I’m starting to lean towards sticking with it.



acmilangr said:


> Try a different PSU.
> maybe it sounds weird but Corsair PSU causes too often coil whine on GPU. i have seen that on myself


I’ll give my old SilverStone Decathlon 1000W a try, though I’ve heard this PSU had bad ratings.

Thanks


----------



## JohanssoN85


I'm still looking for a vBIOS for my MSI 2080 Ti Sea Hawk EK X. Is the only option I have the Gaming X Trio version with a power limit of 406W? I have tried it, and I'm hitting the power limit at 360W, only a 30W increase from stock, which seems weird.

Is there any other vBIOS I can try? I have 2x 8-pin + 1x 6-pin on my card.


----------



## pantsoftime

bhsmurfy said:


> Looking for suggestion for a water block for my EVGA XC Ultra 2080 ti. Also fittings you guys trusts.


I'm using the EK block also. It seems to work fairly well. Can't comment on whether it's better than the Heatkiller; I chose it because it was in stock at Microcenter when I picked up the card.


----------



## kithylin

ntuason said:


> Thank you for the reply. I asked because my I think my card may be pretty decent as it does +175 on core and +1350 memory solidly. But I’m tempted to exchange it because it has coil whine but I’m afraid that the new card might be a worse overclocker and still have coil whine LOL! Dilemma, dilemma...


What actual core speed are you running at? +175 doesn't tell us anything. We still have no idea what speed your card is running at.
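An offset only translates into an actual clock if you also know the card's rated boost and memory base. A rough sketch of that arithmetic follows; the rated-boost figure here is just an example (real sustained clocks vary with GPU Boost and temperature), and it assumes the common Afterburner convention that a GDDR6 memory offset counts double toward the effective transfer rate:

```python
# Rough conversion of MSI Afterburner offsets into approximate clocks.
# Assumptions (not from this thread): an example rated boost of 1635 MHz,
# and the 2080 Ti's stock effective memory rate of 14000 MT/s.
RATED_BOOST_MHZ = 1635   # varies per card model; check GPU-Z for yours
MEM_BASE_MTS = 14000     # 2080 Ti stock GDDR6 effective rate

def core_clock(offset_mhz: int, rated_boost_mhz: int = RATED_BOOST_MHZ) -> int:
    """Approximate core clock; GPU Boost will still move it with temperature."""
    return rated_boost_mhz + offset_mhz

def mem_rate(offset_mhz: int, base_mts: int = MEM_BASE_MTS) -> int:
    """Effective memory rate; the Afterburner offset is doubled for GDDR6."""
    return base_mts + 2 * offset_mhz

print(core_clock(175))  # 1810 (MHz) on this assumed card
print(mem_rate(1350))   # 16700 (MT/s effective)
```

Which is exactly the point above: "+175" could mean anything from roughly 1700 to over 1900 MHz depending on the card's rated boost.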


----------



## bhsmurfy

GRABibus said:


> You probably use one of those 2 bios :
> EVGA RTX 2080 Ti Kingpin Custom PCB (3x8-Pin) 2000W x 100% Power Target BIOS (2000W)
> Galax RTX 2080 Ti HOF OC Lab Custom PCB (3x8-Pin) 2000W x 100% Power Target BIOS (2000W)
> 
> These are the only ones going until 1.125V.
> They are 3x8pins Bios.
> 
> They can be used on you 1x6pin + 2x8pin card ?


I use the KPE XOC, and I have an EVGA XC Ultra with 2x 8-pins.


----------



## Lownage

JohanssoN85 said:


> im still looking for a vbios to my MSI 2080ti Seahawk EK X, is the only option i have the Gaming X trio version with powerlimit of 406w? i have tried it and im hitting powerlimit at 360w only a 30w increase from stock, seems weird.
> 
> is there any other vbios i can try, i have 2x8pin + 1x6pin on my card


I have the same with my X Trio: the 406W BIOS results in 360W drawn... Same with the Galax one.


----------



## ntuason

kithylin said:


> What actual core speed are you running at? +175 doesn't tell us anything. We still have no idea what speed your card is running at.


At 45-50°C it hovers around 2130 down to 2115, and when it hits 55-60°C it runs 2100 down to 2085. The slowest I've seen was 2050, and that was in OCCT at 77°C. All this is with the 1000W BIOS.

These seem like standard OC results, and if so I’d have no problem exchanging it for another copy to hopefully get one with no coil whine.

Thanks


----------



## JohanssoN85

I don't understand why it's not drawing more than 360W. Anyway, I would actually like a vBIOS made for my specific card; if I benchmark with the 406W vBIOS, the results say I have a Gaming X Trio. That isn't really a problem, but it would be nice to be able to list the correct card when uploading a benchmark score.

* I have had one problem when changing vBIOS: for some reason my browser (Chrome) gets all blacked out after I change vBIOS. When I start Chrome it's just a black screen. Anyone know why?
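For what it's worth, a 360W ceiling is well below what that connector layout is rated for on paper. A quick sketch of the nominal PCIe power budget using spec values (75W from the slot, 75W per 6-pin, 150W per 8-pin); this says nothing about what a given vBIOS actually allows, it only shows the cap is a firmware limit rather than a cabling one:

```python
# Nominal PCIe power budget for a card, from the published spec ratings.
# (Boards can and do exceed per-connector ratings; this is the paper budget.)
SLOT_W = 75        # PCIe x16 slot allocation
SIX_PIN_W = 75     # 6-pin auxiliary connector rating
EIGHT_PIN_W = 150  # 8-pin auxiliary connector rating

def board_budget(eight_pins: int, six_pins: int) -> int:
    """Sum of the slot allocation plus each auxiliary connector's spec rating."""
    return SLOT_W + eight_pins * EIGHT_PIN_W + six_pins * SIX_PIN_W

# The Sea Hawk / Gaming X Trio layout discussed above: 2x 8-pin + 1x 6-pin.
print(board_budget(2, 1))  # 450 (W) - far above the observed 360W draw
```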


----------



## acmilangr

bhsmurfy said:


> I use KPE XOC and I have a EVGA XC Ultra 2x 8pins.


Link to this BIOS please?

Does it allow you to use the curve?


----------



## kithylin

ntuason said:


> 45c-50c it hovers around 2130 down to 2115 and when it hits 55c to 60c it 2100 down to 2085. The slowest that I've seen was 2050 that was on OCCT 77C. All this is with the 1000w bios.
> 
> These seem like standard oc and if so I’d have no problem exchanging it for another copy to probably get one with no coil whine.
> 
> Thanks


That's about standard / normal from what I've seen in this thread for an air cooled card. You're probably not going to get anything better unless you go to water.


----------



## JohanssoN85

kithylin said:


> That's about standard / normal from what I've seen in this thread for an air cooled card. You're probably not going to get anything better unless you go to water.


Is that really standard for an air-cooled card? I'm on water and I don't reach 2130 stable. And the temps? 45-50°C on an air-cooled card must be extremely rare.


----------



## bhsmurfy

acmilangr said:


> Link of this bios please?
> 
> Does this allow you to use curve?


It's on page one, and no.
KPE = Kingpin Edition, so it's the EVGA KPE XOC; KPE for short.


----------



## kithylin

JohanssoN85 said:


> is that really standard for an aircooled card? im on water and i dont reach 2130 stable, and the temps? 45-50c on a aircooled card must be extremely rare


I'm not sure what you're going on about; I wasn't even replying to you. I was replying to ntuason, who said their card was running between 55°C and 77°C, and they have an air-cooled card.

Also, more than half of the USA right now (and most of Canada) is already into winter with sub-freezing ambient temps. It's not impossible for air-cooled cards to hover around 50-60°C under load in the winter, especially with a very large 3-fan cooler like the one on ntuason's card.

Lots of people in this thread have reported 2100-2150MHz stable for their 2080 Tis under water. But there are tons of factors in play. Maybe you got a dud?

From most of what folks in here seem to be reporting, 1900-2000MHz is average for 2080 Tis under "very good" air cooling, and 1800-1900MHz looks common for reference cards.


----------



## ntuason

JohanssoN85 said:


> is that really standard for an aircooled card? im on water and i dont reach 2130 stable, and the temps? 45-50c on a aircooled card must be extremely rare


Which 2080 Ti do you have? When I was 2130 stable, if I remember correctly, it was around 20°C in my room (maybe even lower) with the side panel off at 80% fan speed. I was trying to find the highest core/memory OC my card would run stable, which is +195 on core and +1375 on memory. Past that, Fire Strike, Time Spy and Port Royal just crash to desktop with a 0 score.


----------



## JohanssoN85

Kithylin, you answered ntuason that 2100-2130MHz on an air-cooled card is pretty standard, or maybe I misunderstood, and I felt the temps seemed low for an air-cooled card, which is why I asked.

Okay, sure, it's winter, but I'm guessing you still have decent temps inside.

My card isn't far off in terms of MHz; I'm stable around 2100, but that is on water. I didn't think air cooling could get close to that.

Ntuason, I have an MSI 2080 Ti Sea Hawk EK X.


Edit: I can increase the core clock for 3DMark runs, but it won't run all games at the highest core clock that works in 3DMark.


----------



## kithylin

JohanssoN85 said:


> Kithylin ,you answered ntuason that 2100-2130mhz on an aircooled card is pretty standard or maybe i misunderstood.


You did. They said between 2085 and 2100 with temps around 55-60°C, and 2050 at 77°C. Here's the original post:


ntuason said:


> 45c-50c it hovers around 2130 down to 2115 and when it hits 55c to 60c it 2100 down to 2085. The slowest that I've seen was 2050 that was on OCCT 77C. All this is with the 1000w bios.


Air or water, those seem to be common clocks for those temps. The 2080 Ti's clocks are primarily driven by temperature. It doesn't matter how you get there (water cooling, or very good air cooling in winter), as long as you can maintain those temps.

I'm basing this on what other people have been posting in this thread over the past few months.
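That temperature dependence can be pictured with a toy model: GPU Boost steps the core clock down in roughly 15 MHz bins as the die crosses temperature thresholds. To be clear, this is an illustration, not NVIDIA's actual algorithm; the bin size and the threshold temperatures below are assumptions for the sketch:

```python
# Toy model of temperature-driven boost bins (illustrative only; NVIDIA's
# real GPU Boost behaviour also depends on power, voltage, and workload).
BIN_MHZ = 15  # approximate size of one boost step

def boost_clock(max_clock_mhz: int, temp_c: int,
                thresholds=(50, 55, 60, 65, 70, 75)) -> int:
    """Drop one bin for every assumed temperature threshold crossed."""
    bins_dropped = sum(temp_c >= t for t in thresholds)
    return max_clock_mhz - bins_dropped * BIN_MHZ

print(boost_clock(2130, 45))  # 2130 - a cool card holds its peak clock
print(boost_clock(2130, 62))  # 2085 - three thresholds crossed, three bins lost
```

Plugging in temps like those reported above gives clock steps in the same ballpark, which is the point: hold the temperature and you hold the clock, regardless of how the cooling is done.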


----------



## jura11

bhsmurfy said:


> Looking for suggestion for a water block for my EVGA XC Ultra 2080 ti. Also fittings you guys trusts.


Hi there,

Personally I would get the Aquacomputer Kryographics NEXT RTX 2080 Ti with the active backplate if you can find it in stock. I bought mine from the Aquacomputer store and it arrived after 3-4 weeks, but it is well worth it.

Another option is the Heatkiller IV RTX 2080 Ti.

The Phanteks Glacier RTX 2080 Ti is another option.

I would put the EK Vector RTX 2080 Ti WB in last place, mainly on build quality; its performance is somewhere in the middle.

Barrow and Bykski WBs I have used on two builds, and on a friend's EVGA RTX 2080 Ti temperatures have been 42-45°C; that's with a single 360mm radiator cooling an 8086K at a 5.2GHz OC and the RTX 2080 Ti at 2115MHz.

For fittings I personally use Barrow or Bykski and have never had issues with them.

Hope this helps.

Thanks, Jura


----------



## JustinThyme

jura11 said:


> Hi there
> 
> Personally I would consider or get Aquacomputer Kryographics NEXT RTX 2080Ti with active backplate if you can find it in stock, bought mine from Aquacomputer store and arrived after 3-4 weeks but is well worth it there
> 
> Another option is Heatkiller IV RTX 2080Ti
> 
> Phanteks Glacier RTX 2080Ti is another option
> 
> I would put EK Vector RTX 2080Ti WB at last place there, just for build quality and performance is somewhere in middle
> 
> Barrow or Bykski WB I have used on two builds and on friend EVGA RTX 2080Ti temperatures have been in 42-45°C, that's with single 360mm radiator which cooled 8086k with 5.2GHz OC and RTX 2080Ti with 2115MHz
> 
> For fittings I personally use Barrow or Bykski and never have issues with them
> 
> Hope this helps
> 
> Thanks, Jura


Excellent recommendations. I would love it if Aquacomputer made blocks for the Strix O11G cards but they dont. They make some of the best blocks on the market but unfortunately stick with reference cards. So im in the boat of your second choice with the HK IV with a passive backplate. The Phanteks blocks are also very nice, they just use the original backplate which does absolutely nothing for cooling, Zero contact with any surfaces. Their block fit and finish are right up there with the HK blocks. Ive not had good luck with my last few attempts with EK products so my go to now is HK for everything. Barrow........I hate to even mention that name. They were the first to come to market with a Strix block and saying they are 100% junk is an understatement. Between warped blocks, Terminal blocks that block access to install any Nvlink bridge and just sloppy production, crappy plating and failure to debur that actually took some of my blood they were by far the worst blocks Ive had the displeasure of installing on anything to date, even from the old days of home made blocks. Ill post up a few pics of that adventure as its worth it to warn off people. I cant say one way or another on the Byski blocks other than they are big in the asian market and just now showing up in the US market. PPCs was bad mouthing them one day about some undsclosed business ethics then selling them the next. Some of what Ive seen look nice but looks arent everything.

Check out the Barrow junk that PPCs sold, stating they were as good as EK and lying that they sold designs to EK and HK, but then later wouldn't produce anything to back that up. The only honorable thing they did was take them back without charging me a restocking fee. Extremely sharp edges, VRMs with no contact points at all and crappy contact with most of the primary VRMs. One burr on a screw hole that literally had a shard protruding is the one that took my blood. One would not think they needed Kevlar gloves to install a water block.


----------



## JustinThyme

Phanteks, on the other hand, has a very nice presentation from the moment you open the box. Bought these directly from Phanteks.
The only shortcoming I saw is that I'm not too thrilled with the water terminals, and no backplate is available for the Strix.
I'll post up some HK pics later.


----------



## Serchio

I'm quite happy with the HK IV: the GPU stays below 45°C, and the clock stays at 2160MHz (2175 on the voltage curve).


----------



## mattxx88

JohanssoN85 said:


> I don't understand why it's not drawing more than 360W. Anyway, I would actually like a vBIOS that is made for my specific card; if I benchmark with the 406W vBIOS it says in the results that I have a Gaming X Trio. That isn't really a problem, but it would be nice to be able to list the correct card when uploading a benchmark score.
> 
> * I have had one problem when changing vBIOS: for some reason my browser (Chrome) gets all blacked out after I change vBIOS. When I start Chrome it's just a black screen. Anyone know why?





Lownage said:


> I have the same with my X Trio. The 406W BIOS results in 360W drawn... Same with the Galax



I've been fighting this Gaming X Trio issue for months; here is an old post of mine:
*edit: Johansson, my Chrome goes black too. You can solve it with Win 8 compatibility mode, then going into Chrome's options and disabling hardware acceleration, then reverting to Win 10 compatibility*



mattxx88 said:


> I flashed it (the 400W BIOS) on my Gaming X Trio, following the 1st page
> 
> my problem is that it's recognized correctly by GPU-Z, but in real use the card is capped at a 360W draw max


I've actually partially solved it with the Asus XOC BIOS.

This card is really a pain in the @$$


----------



## JustinThyme

mattxx88 said:


> I've been fighting this Gaming X Trio issue for months; here is an old post of mine:
> *edit: Johansson, my Chrome goes black too. You can solve it with Win 8 compatibility mode, then going into Chrome's options and disabling hardware acceleration, then reverting to Win 10 compatibility*
> 
> 
> 
> I've actually partially solved it with the Asus XOC BIOS.
> 
> This card is really a pain in the @$$


You may have other limiting factors. A power limit opened up to 1000W doesn't mean the card will ever pull 1000 watts. It just effectively takes that limit out of the way so it doesn't hinder you from running as much voltage and clocks as the card will handle, which in your case seems to be... 360 watts.
This is why I just went back to my stock OC vBIOS. Upping the power limit only got me a few MHz more. Think of it as running a 2-inch fuel line to a 1.4L 4-cylinder: you have made sure it won't starve for fuel, but it will never use that capacity... ever.

Also, look at what PSU you are running. Is it capable of delivering the power to run everything?

Just food for thought.
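The point about the power limit being a ceiling rather than a target can be sketched as a toy model (the wattage figures are illustrative, not measured from any card):

```python
def effective_draw(demand_w: float, power_limit_w: float) -> float:
    """The card draws whatever its voltage/frequency operating point demands,
    clamped by the vBIOS power limit. Raising the limit past the demand
    changes nothing; only lowering it below the demand forces a throttle."""
    return min(demand_w, power_limit_w)

# Hypothetical card whose chosen V/F point demands ~360W:
demand = 360.0
print(effective_draw(demand, 300.0))   # limit below demand: throttled to 300.0
print(effective_draw(demand, 406.0))   # 406W BIOS: draw stays at 360.0
print(effective_draw(demand, 1000.0))  # 1000W BIOS: still 360.0
```

So a 406W or 1000W BIOS on a card that only asks for 360W at its stable clocks will report 360W either way, which matches what the X Trio owners above are seeing.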


----------



## jura11

JustinThyme said:


> Excellent recommendations. I would love it if Aquacomputer made blocks for the Strix O11G cards, but they don't. They make some of the best blocks on the market but unfortunately stick with reference cards. So I'm in the boat of your second choice, the HK IV with a passive backplate. The Phanteks blocks are also very nice; they just use the original backplate, which does absolutely nothing for cooling: zero contact with any surfaces. Their block fit and finish are right up there with the HK blocks. I've not had good luck with my last few attempts with EK products, so my go-to now is HK for everything.
> 
> Barrow... I hate to even mention that name. They were the first to come to market with a Strix block, and saying they are 100% junk is an understatement. Between warped blocks, terminal blocks that block access to install any NVLink bridge, sloppy production, crappy plating and a failure to deburr that actually took some of my blood, they were by far the worst blocks I've had the displeasure of installing on anything to date, even from the old days of home-made blocks. I'll post up a few pics of that adventure, as it's worth it to warn people off.
> 
> I can't say one way or another on the Bykski blocks, other than that they are big in the Asian market and just now showing up in the US market. PPCs was bad-mouthing them one day about some undisclosed business ethics, then selling them the next. Some of what I've seen looks nice, but looks aren't everything.
> 
> Check out the Barrow junk that PPCs sold, stating they were as good as EK and lying that they sold designs to EK and HK, but then later wouldn't produce anything to back that up. The only honorable thing they did was take them back without charging me a restocking fee. Extremely sharp edges, VRMs with no contact points at all and crappy contact with most of the primary VRMs. One burr on a screw hole that literally had a shard protruding is the one that took my blood. One would not think they needed Kevlar gloves to install a water block.


Hi there

Sadly Aquacomputer doesn't make waterblocks for the Strix or other non-reference PCBs; they make waterblocks only for the reference PCB, which is a shame.

I did use one of their waterblocks on a friend's loop and must admit it's one of the nicest waterblocks on the market, and performance is better than with the EK Vector RTX 2080Ti WB. I didn't compare it against the Heatkiller IV RTX 2080Ti because I didn't have one here for testing; I used only my old Zotac RTX 2080Ti AMP.

Temperatures with the EKWB Vector RTX 2080Ti have been 38-42°C max with the Galax 380W BIOS in gaming or benchmarks, at 21-25°C ambient, with a max OC of 2115MHz which dropped to 2100MHz.

Temperatures with the Aquacomputer Kryographics NEXT RTX 2080Ti with active backplate have been 35-38°C at the highest, at a max OC of 2115MHz with the Galax 380W BIOS; ambient temperature was the same in both tests. I didn't try to OC my Zotac RTX 2080Ti AMP further, but with such temperatures I would expect to see an increase in core clocks, as with this block the clocks stayed at 2115MHz the whole time without downclocking.

Agree on Phanteks. I used their waterblocks on a Pascal Gigabyte GTX1080Ti Aorus Extreme and an MSI GTX1080Ti Gaming X 11G with no issues, and I expected something similar on the RTX 2080Ti. Even though they disappointed me a bit, like in your case with the backplate, temperatures have been similar to the EK Vector RTX 2080Ti WB.

The Heatkiller IV RTX 2080Ti I have used on a friend's build, and temperatures have been around 2-3°C higher than with the Aquacomputer Kryographics RTX 2080Ti with active backplate.

Regarding Barrow, sorry to hear you have had a bad experience with them. I have used their waterblocks on several occasions with no issues, and temperatures have always been good for the price my friend paid for these blocks, usually around 2-3°C worse than EK (on one build I have seen better temperatures by 1-2°C).

Bykski I have tested on two builds too, one with a Palit RTX 2080Ti and one with an EVGA RTX 2080Ti XC, and temperatures have been good as well, mostly low 40s to mid 40s (45°C) max; both ran the Galax 380W BIOS.

I use their fittings (Barrow or Bykski) without a single issue, and use their reservoirs and pumps too without issues.

If time permits I will do bigger tests with all four waterblocks (Aquacomputer, Heatkiller, EK and Bykski); I just need to find time for this.

Hope this helps

Thanks, Jura


----------



## ducky083

Hi, 



What do you think about this waterblock :


XSPC Razer Neo (for reference pcb only)...


Maybe better like the Aqua Computer ????


----------



## ducky083

jura11 said:


> Hi there
> 
> Sadly Aquacomputer doesn't make waterblocks for the Strix or other non-reference PCBs; they make waterblocks only for the reference PCB, which is a shame.
> 
> I did use one of their waterblocks on a friend's loop and must admit it's one of the nicest waterblocks on the market, and performance is better than with the EK Vector RTX 2080Ti WB. I didn't compare it against the Heatkiller IV RTX 2080Ti because I didn't have one here for testing; I used only my old Zotac RTX 2080Ti AMP.
> 
> Temperatures with the EKWB Vector RTX 2080Ti have been 38-42°C max with the Galax 380W BIOS in gaming or benchmarks, at 21-25°C ambient, with a max OC of 2115MHz which dropped to 2100MHz.
> 
> Temperatures with the Aquacomputer Kryographics NEXT RTX 2080Ti with active backplate have been 35-38°C at the highest, at a max OC of 2115MHz with the Galax 380W BIOS; ambient temperature was the same in both tests. I didn't try to OC my Zotac RTX 2080Ti AMP further, but with such temperatures I would expect to see an increase in core clocks, as with this block the clocks stayed at 2115MHz the whole time without downclocking.
> 
> Agree on Phanteks. I used their waterblocks on a Pascal Gigabyte GTX1080Ti Aorus Extreme and an MSI GTX1080Ti Gaming X 11G with no issues, and I expected something similar on the RTX 2080Ti. Even though they disappointed me a bit, like in your case with the backplate, temperatures have been similar to the EK Vector RTX 2080Ti WB.
> 
> The Heatkiller IV RTX 2080Ti I have used on a friend's build, and temperatures have been around 2-3°C higher than with the Aquacomputer Kryographics RTX 2080Ti with active backplate.
> 
> Regarding Barrow, sorry to hear you have had a bad experience with them. I have used their waterblocks on several occasions with no issues, and temperatures have always been good for the price my friend paid for these blocks, usually around 2-3°C worse than EK (on one build I have seen better temperatures by 1-2°C).
> 
> Bykski I have tested on two builds too, one with a Palit RTX 2080Ti and one with an EVGA RTX 2080Ti XC, and temperatures have been good as well, mostly low 40s to mid 40s (45°C) max; both ran the Galax 380W BIOS.
> 
> I use their fittings (Barrow or Bykski) without a single issue, and use their reservoirs and pumps too without issues.
> 
> If time permits I will do bigger tests with all four waterblocks (Aquacomputer, Heatkiller, EK and Bykski); I just need to find time for this.
> 
> Hope this helps
> 
> Thanks, Jura



Hi, these temperatures are with which radiator and pump?


thanks for your answer ;-)


----------



## mattxx88

JustinThyme said:


> You may have other limiting factors. A power limit opened up to 1000W doesn't mean the card will ever pull 1000 watts. It just effectively takes that limit out of the way so it doesn't hinder you from running as much voltage and clocks as the card will handle, which in your case seems to be... 360 watts.
> This is why I just went back to my stock OC vBIOS. Upping the power limit only got me a few MHz more. Think of it as running a 2-inch fuel line to a 1.4L 4-cylinder: you have made sure it won't starve for fuel, but it will never use that capacity... ever.
> 
> Also, look at what PSU you are running. Is it capable of delivering the power to run everything?
> 
> Just food for thought.


With the card under water, the only limiting factors are temperature and GPU die quality.
And I see that using the XOC BIOS, the core frequency doesn't cut back as it did with the 400W and 380W BIOSes, and my power supply can't be the problem; it's a Corsair AX1200.

How is it possible that three users with the same card are all capped at 360W?


----------



## GAN77

ducky083 said:


> Hi,
> 
> 
> 
> What do you think about this waterblock :
> 
> 
> XSPC Razer Neo (for reference pcb only)...
> 
> 
> Maybe better like the Aqua Computer ????


A high-performing and beautiful waterblock.


----------



## jura11

ducky083 said:


> Hi,
> 
> 
> 
> What do you think about this waterblock :
> 
> 
> XSPC Razer Neo (for reference pcb only)...
> 
> 
> Maybe better like the Aqua Computer ????


Hi there 

The Pascal XSPC Razer WB was on par with the Heatkiller IV, and in one review the XSPC outperformed the Heatkiller IV.

Not sure how the XSPC will compare against the Aquacomputer; I haven't tried that waterblock.

Hope this helps 

Thanks, Jura


----------



## jura11

ducky083 said:


> Hi, these temperatures are with which radiator and pump?
> 
> 
> thanks for your answer ;-)


Hi there 

I have tried or tested them on my build or loop (4x 360mm radiators plus a MO-RA3 360, with dual DDC pumps and a D5 pump, and another three GPUs in the loop: a GTX1080Ti at a 2113MHz OC, a GTX1080 at 2164MHz and a GTX1080 at a 2100MHz OC); only the Bykski and Barrow I tested on different loops.

Bykski I tested with an 8086K at 5.2GHz, a single 360mm radiator and an 18W DDC pump from Barrow; with a 240mm radiator in the loop I think temperatures would be maybe a touch better.

Barrow I tested on an 8700K at 5.1GHz with a 360mm and a 240mm radiator, and the pump used was an EK 3.2 PWM Elite edition.

I plan to test them on my loop again if time permits.

Hope this helps 

Thanks, Jura


----------



## ducky083

jura11 said:


> Hi there
> 
> The Pascal XSPC Razer WB was on par with the Heatkiller IV, and in one review the XSPC outperformed the Heatkiller IV.
> 
> Not sure how the XSPC will compare against the Aquacomputer; I haven't tried that waterblock.
> 
> Hope this helps
> 
> Thanks, Jura



Thanks for all your answers ;-)


----------



## ducky083

GAN77 said:


> A high-performing and beautiful waterblock.



Have you got it?


Can you tell me more about it, please?


What is the max temperature in the Superposition 1080p Extreme benchmark?


What cooling is used?


Please, tell me more ;-)


----------



## JustinThyme

mattxx88 said:


> With the card under water, the only limiting factors are temperature and GPU die quality.
> And I see that using the XOC BIOS, the core frequency doesn't cut back as it did with the 400W and 380W BIOSes, and my power supply can't be the problem; it's a Corsair AX1200.
> 
> How is it possible that three users with the same card are all capped at 360W?


Possibly because, as you stated and as I implied, that's all it can take: die limitations.


----------



## J7SC

ducky083 said:


> Hi, What do you think about this waterblock : XSPC Razer Neo (for reference pcb only)...(...)


 
That does look like a very nice and capable 2080 Ti water block, though w/o a trusted-source, standardized & head-to-head comparison test, performance is hard to gauge against AquaComputer, Heatkiller et al. Still, every single XSPC w-cooling product I have (including 7x RX 360 rads since 2012) has been superb re. performance and quality.

Somewhat related re. 2080 Ti w-cooling, I wonder who makes the OEM Aorus 2080 Ti water blocks for Gigabyte. So far, they've delivered outstanding performance and quality on my 2080 Tis, but still... it would be nice to know who makes them. Sort of looks like Bykski, perhaps... screen grabs below are from YouTube / Declassified Systems.


----------



## mattxx88

JustinThyme said:


> Possibly because, as you stated and as I implied, that's all it can take: die limitations.


Hmm... maybe I was not clear enough before; let me explain the situation better:

Galax 380W / MSI 400W BIOS, Superposition bench, 1.075V 2200MHz fixed with the MSI AB curve:

cut at 360W, and the core goes down to 2145/2160MHz

Asus XOC BIOS, Superposition bench, 1.075V 2200MHz:

NO CUT

so my "engine", as you stated before, is a V8 with clear exhaust/[email protected] problems that limit it


----------



## GAN77

ducky083 said:


> Have you got it?
> 
> 
> Can you tell me more about it, please?
> 
> 
> What is the max temperature in the Superposition 1080p Extreme benchmark?
> 
> 
> What cooling is used?
> 
> 
> Please, tell me more ;-)


Yes, I got this block.
I use three 420mm external radiators.
At a room temperature of 25-27°C, the temperature in the Superposition 1080p Extreme benchmark is about 32-34°C.


----------



## ducky083

GAN77 said:


> Yes, I got this block.
> I use three 420mm external radiators.
> At a room temperature of 25-27°C, the temperature in the Superposition 1080p Extreme benchmark is about 32-34°C.



Lol, three 420mm rads! That's not a classic cooling configuration ;-)


----------



## GAN77

ducky083 said:


> Lol, three 420mm rads! That's not a classic cooling configuration ;-)


For me, it's a classic cooling system)

Some install a Watercool MO-RA3; I assembled my own external radiator)


----------



## keikei

https://videocardz.com/driver/nvidia-geforce-game-ready-440-97-whql


----------



## chibi

Hey guys, is it worth upgrading from Titan Xp -> NVIDIA 2080 Ti FE GPU? I run 3440x1440 resolution for gaming. Thanks


----------



## kithylin

chibi said:


> Hey guys, is it worth upgrading from Titan Xp -> NVIDIA 2080 Ti FE GPU? I run 3440x1440 resolution for gaming. Thanks


https://gpu.userbenchmark.com/Compare/Nvidia-Titan-Xp-vs-Nvidia-RTX-2080-Ti/m265423vs4027

Expect at most +24% gains in most scenarios. For a $1000 card.


----------



## JustinThyme

chibi said:


> Hey guys, is it worth upgrading from Titan Xp -> NVIDIA 2080 Ti FE GPU? I run 3440x1440 resolution for gaming. Thanks


For gaming, I would have to say yes. The Titan XP lags behind a bit and was never meant for gaming.


----------



## JustinThyme

mattxx88 said:


> Hmm... maybe I was not clear enough before; let me explain the situation better:
> 
> Galax 380W / MSI 400W BIOS, Superposition bench, 1.075V 2200MHz fixed with the MSI AB curve:
> 
> cut at 360W, and the core goes down to 2145/2160MHz
> 
> Asus XOC BIOS, Superposition bench, 1.075V 2200MHz:
> 
> NO CUT
> 
> so my "engine", as you stated before, is a V8 with clear exhaust/[email protected] problems that limit it


Well, I guess you are just screwed then.
I've not seen anyone on anything but extreme cooling doing much better than the 2150 range, give or take a few MHz.

I still think you have just met your card's limitations.


----------



## bhsmurfy

Looking to go sub-zero. Will be using dice/acetone, dice/meth, and dice/eth; I know the dangers of the latter two. I am wondering if anyone has dimensions for the pot KP has for his 2080 Ti, so I can make a custom pot for mine. It would save me a day of measuring.


----------



## jbow7

Hey DangerSK... are you still running that BIOS version (90.02.17.40.89) with no fan-bug issues? I still have my Dual OC on air due to the repeating safe-mode issue... I would like to flash a BIOS that works so I can put it back on water. You're still not having issues with it? Thanks!


Gah... never mind... I just realized that this is one I have already tried. Does anyone have the MSI "no fan bug" BIOS that you have to send them a serial number to get?


----------



## pattiri

Hello Guys,


I'm a newbie and I have a question. I'm using my 2080 Ti Strix OC with the 1000W BIOS. I can easily go up to 400W and more, but I can't go over +104 GPU and +800 memory in MSI Afterburner.



I'm using the stock cooler, and when I run Superposition 1080p Extreme or 8K Optimized the GPU voltage never reaches 1.093V. When the GPU temp reaches 60°C, the GPU clock stays at 2010 or 2025MHz and the voltage is 1.043V.


I think if I could give the GPU a stable 1.093V maybe I could reach higher GPU clocks, but I don't know how to do this. Which BIOS or settings should I use?


Thanks for your help


----------



## kithylin

pattiri said:


> Hello Guys,
> 
> 
> I'm a newbie and I have a question. I'm using my 2080 Ti Strix OC with the 1000W BIOS. I can easily go up to 400W and more, but I can't go over +104 GPU and +800 memory in MSI Afterburner.
> 
> 
> 
> I'm using the stock cooler, and when I run Superposition 1080p Extreme or 8K Optimized the GPU voltage never reaches 1.093V. When the GPU temp reaches 60°C, the GPU clock stays at 2010 or 2025MHz and the voltage is 1.043V.
> 
> 
> I think if I could give the GPU a stable 1.093V maybe I could reach higher GPU clocks, but I don't know how to do this. Which BIOS or settings should I use?
> 
> 
> Thanks for your help


Try actually reducing your voltage to get the card to run cooler and thus faster. With modern video cards it's all about temperature now and much less about voltage. Too much voltage can sometimes actually hurt your overclocking goals more than anything. Often, if you can somehow get the temps down, you may overclock better even with the stock voltage on the stock BIOS.

Nvidia starts reducing the core clock by (I think) 12-15MHz for roughly every +10°C, starting at around 35°C core temperature and upwards. I may be off a little, but that's pretty close.
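That step-down behaviour can be sketched as a toy model of GPU Boost's thermal bins (the 15MHz step and the threshold temperatures are approximations discussed in this thread, not official NVIDIA numbers):

```python
def boost_clock(max_clock_mhz: int, temp_c: float,
                bin_temps=(35, 45, 55, 65), step_mhz: int = 15) -> int:
    """Drop one clock bin (step_mhz) for each temperature threshold crossed."""
    bins_dropped = sum(1 for t in bin_temps if temp_c >= t)
    return max_clock_mhz - bins_dropped * step_mhz

print(boost_clock(2100, 30))  # below every threshold -> 2100
print(boost_clock(2100, 40))  # one bin dropped       -> 2085
print(boost_clock(2100, 60))  # three bins dropped    -> 2055
```

The practical takeaway is the same as the advice above: shaving 10°C off the core buys back a clock bin or two without touching voltage at all.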


----------



## AndrejB

@jura11 all this time it was ghub causing crashes and limiting my overclock. Very weird...


----------



## jura11

AndrejB said:


> @jura11 all this time it was ghub causing crashes and limiting my overclock. Very weird...


Hi Andrej

Some software can cause headaches; a friend had a few issues with Razer software, and with McAfee AV as well, which caused BSODs.

Regarding G Hub I'm not sure; I have a Logitech G502, which is my favourite mouse to date, and I use their software with no issues.

Is your OC better now? Maybe ambient temperature plays a bigger role?

Hope this helps

Thanks, Jura


----------



## J7SC

kithylin said:


> Try actually reducing your voltage to get the card to run cooler and thus faster. With modern video cards it's all about temperature now and much less about voltage. Too much voltage can sometimes actually hurt your overclocking goals more than anything. Often, if you can somehow get the temps down, you may overclock better even with the stock voltage on the stock BIOS.
> 
> Nvidia starts reducing the core clock by (I think) 12-15MHz for roughly every +10°C, starting at around 35°C core temperature and upwards. I may be off a little, but that's pretty close.


 
^ Yeah, many folks still think RTX OC'ing is like Pascal, but on RTX, dropping the voltage (and thus heat generation) as much as you can for a given clock is the way to go. Even with a "100,000W BIOS", the NV temp parameter still kicks in somewhere between 35°C and 38°C, and then again at higher temps. If you run Superposition 4K or 8K, you can see the clocks drop (i.e. in the 'calmer' subtest #8) by 15MHz as temps reach 38°C, at least on my setup. So far 38°C is also the max temp in my best Superposition and Port Royal runs... but one of these (fall/winter) days I should get it down to 35°C or less, I hope.
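The reasoning behind undervolting for a given clock can be illustrated with the usual first-order dynamic-power approximation, P ∝ V²·f (real cards add leakage and other terms, and the 300W/1.05V/2100MHz reference point below is hypothetical):

```python
def dynamic_power(power_ref_w: float, v_ref: float, f_ref_mhz: float,
                  v: float, f_mhz: float) -> float:
    """Scale a reference power figure by (V/V_ref)^2 * (f/f_ref)."""
    return power_ref_w * (v / v_ref) ** 2 * (f_mhz / f_ref_mhz)

# Same 2100MHz clock, undervolted from 1.05V to 0.95V:
p = dynamic_power(300.0, 1.05, 2100.0, 0.95, 2100.0)
print(round(p, 1))  # 245.6 -> roughly 18% less heat at the same clock
```

Less heat means the thermal bins described above kick in later, which is why a modest undervolt can end up holding a higher sustained clock than a higher-voltage setting.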


----------



## AndrejB

jura11 said:


> AndrejB said:
> 
> 
> 
> @jura11 all this time it was ghub causing crashes and limiting my overclock. Very weird...
> 
> 
> 
> Hi Andrej
> 
> Some software can cause headaches; a friend had a few issues with Razer software, and with McAfee AV as well, which caused BSODs.
> 
> Regarding G Hub I'm not sure; I have a Logitech G502, which is my favourite mouse to date, and I use their software with no issues.
> 
> Is your OC better now? Maybe ambient temperature plays a bigger role?
> 
> Hope this helps
> 
> Thanks, Jura

Nope, it's not ambient, as it's reproducible: as soon as I install G Hub, the NV driver starts crashing.
I think it's the G Pro keyboard that has a bug somewhere. Good thing I can just switch off the RGB, and my mouse has onboard memory 🙂

Now getting 2055MHz @ 65°C on the Matrix BIOS (just OC Scanner).

Manual +90 & +900 gets me 2085/8300 @ 65°C


----------



## bhsmurfy

10640 Port Royal
https://www.3dmark.com/pr/165677


----------



## MrTOOSHORT

2205MHz, nice :thumb:


----------



## xkm1948

Has anyone used EK's MLC 2080Ti block plus 140mm core? How does it stack up versus say an EVGA 2080Ti Hybrid kit? Looking to water cool a new 2080Ti I just bought without too much hassle.


----------



## jbow7

Has anyone successfully flashed a Gigabyte Xtreme Waterforce BIOS onto an ASUS Dual OC? I am on a never-ending search for a fan-bug-free BIOS version so I can put my card back on water.


Or conversely... has anyone used the Arctic fan-header adapter as a workaround for the safe-mode issue?


----------



## bhsmurfy

MrTOOSHORT said:


> 2205MHz, nice :thumb:


Yeah, I got a good 40-50 runs at 2205. Only the past 3 stuck above 10.6; I think I am at the limit for my card, unless I need to be under 40°C at full load for more than 2205.
Not sure what I have to do to get more out of it. I think I'm at my thermal limit considering this is on air* (*card sits on a window sill in freezing ambient temps); I'm at 40°C full load, peaking at 48-51°C right at the end of the benchmark. I'm saving up for some upgrades: CPU/mobo/DDR4/M.2 SSD.

It'd be nice to see what this card can do when I finish my dice pot for it. I'm still thinking not too much higher though, unless I can get some more volts from somewhere?

Gratz on your HoF run


----------



## Jpmboy

xkm1948 said:


> Has anyone used EK's MLC 2080Ti block plus 140mm core? How does it stack up versus say an EVGA 2080Ti Hybrid kit? Looking to water cool a new 2080Ti I just bought without too much hassle.


Full water cooling will do much better than an AIO. 


bhsmurfy said:


> Yeah, I got a good 40-50 runs at 2205. Only the past 3 stuck above 10.6; I think I am at the limit for my card, unless I need to be under 40°C at full load for more than 2205.
> Not sure what I have to do to get more out of it. I think I'm at my thermal limit considering this is on air* (*card sits on a window sill in freezing ambient temps); I'm at 40°C full load, peaking at 48-51°C right at the end of the benchmark. I'm saving up for some upgrades: CPU/mobo/DDR4/M.2 SSD.
> It'd be nice to see what this card can do when I finish my dice pot for it. I'm still thinking not too much higher though, unless I can get some more volts from somewhere?
> Gratz on your HoF run


It should drop the first clock bin (15MHz) at 32°C, the next at 38°C and 40°C. Run the GPU-Z sensors tab during a run to see what the clocks actually are when the card is loaded and hot.
What BIOS?


----------



## JustinThyme

Jpmboy said:


> Full water cooling will do much better than an AIO.


+10 on this. If you have the ability to do a custom loop (it doesn't even have to be hard tubing), the possibilities are almost endless. You can choose a much better block (I'm not a big fan of EK these days) and expand the rad(s) as much as needed. It also gives you the option of larger rads with lower-RPM fans for quieter operation. I get it when a user just doesn't feel comfortable, and an AIO will beat out air nearly always, but a custom loop trumps both.


----------



## keikei

Gett'em fresh and hot! *GeForce 441.08*



> New Game Ready Driver Released: Includes Support For GeForce GTX 1660 SUPER; Adds ReShade Filters To GeForce Experience, Image Sharpening To NVIDIA Control Panel, G-SYNC To Ultra Low-Latency Rendering; and Support For 7 New G-SYNC Compatible Gaming Monitors


----------



## J7SC

keikei said:


> Gett'em fresh and hot! *GeForce 441.08*


 
...what the __xx__ ? I _just_ finished some runs with freshly installed 440.97  ...windows open, heat off > ambient 16c. Oh well, I'll try the new driver later in the week....

Anyway, below is a SLI/NVLink Port Royal with temps, staying just below 29c. https://www.3dmark.com/pr/166143 
Bottom pic is SuperPosition 8k (automatically single card), also showing temps, GPU speed and voltage etc. Bios is stock - which nevertheless hit almost 130% - and 1.05v max GPU. Once I load the 441.08 driver, I might try voltage curve(s).


----------



## Sheyster

JustinThyme said:


> I get it when a user just doesn't feel comfortable, and *an AIO will beat out air nearly always*, but a custom loop trumps both.


A Noctua NH-D15, Dark Rock Pro 4 or Thermalright TS 140 Power will give any AIO a run for its money, no water needed. 

I'm not dissing custom loops here; they certainly have their place. AIO cost vs. effectiveness/reliability is debatable at best.


----------



## Sheyster

keikei said:


> Gett'em fresh and hot! *GeForce 441.08*


2 drivers in one day?? I think I'll wait another day or two, maybe a third one is in the works.


----------



## chibi

Hey everyone, I'm building a new 9900KS rig. Should I get the 2080 Ti FE card now, or wait for the Ampere 3080 Ti? I don't have a spare GPU, so it will have to work on the iGPU if I wait.

Question: if this were you, would you buy into the 2080 Ti a year after release? It's kind of at the end of its life cycle.


----------



## Brodda-Syd

chibi said:


> Hey everyone, I'm building a new 9900KS rig. Should I get the 2080 Ti FE card now, or wait for the Ampere 3080 Ti? I don't have a spare GPU, so it will have to work on the iGPU if I wait.
> 
> Question: if this were you, would you buy into the 2080 Ti a year after release? It's kind of at the end of its life cycle.


It all depends on the type of games you play and what resolution you game at.
We do not know the specs of the 3080 Ti, and it probably won't be released until next summer at the earliest.
So:
Are you OK gaming on the 9900KS's UHD 630 graphics until next summer? I don't think so.

I game at 4K, and in August I got fed up waiting to upgrade my GTX 1080s, so I splashed the cash for two RTX 2080 Tis.
If you game at 1440p and you're not interested in ray tracing, buy an RTX 2080 Ti today.
If you game at 4K and want ray tracing, still buy an RTX 2080 Ti today, and when summer comes around, sell the 2080 Ti and use the money to upgrade to a 3080 Ti.
I'm assuming the 3080 Ti will have more than double the ray-tracing capability of the 2080 Ti.

But of course the decision is up to you.


----------



## ahnafakeef

chibi said:


> Hey everyone, I'm building a new 9900KS rig. Should I get the 2080 Ti FE card now, or wait for the Ampere 3080 Ti? I don't have a spare GPU, so it will have to work on the iGPU if I wait.
> 
> Question: if this were you, would you buy into the 2080 Ti a year after release? It's kind of at the end of its life cycle.


On the performance side of things, as long as you're not aiming for something like 4K120 (or 8K) and don't mind turning RTX off, you should be more or less satisfied with a 2080Ti for gaming at 4K/60Hz or 1440P/144Hz. Or you could always get two if your goals are more ambitious.

As for waiting for the next flagship, rumors have suggested a release date of 1H 2020, but those rumors are largely unsubstantiated as of right now. So you could be waiting anywhere between 2 to 8 months for a 3080Ti, while missing out on all the big titles that are being released in Q4 2019.

But the 2080Ti being relatively towards the end of its life is a rather safe assumption, given how long it's been since Nvidia last released a new flagship. But if you don't mind taking the trouble of selling the 2080Ti and the extra expenditure for a 3080Ti when it comes out, I'd say however much you'd lose on the 2080Ti when selling it is worth the 2 to 8 months of gaming you will get out of it.

Having played through Exodus and Odyssey at 4K60 on my new 9900K+2080Ti build that I got just a month ago, I'd say that the performance in games is definitely there with a 2080Ti (minus outlandish resolutions, obviously). I suppose the question is whether you're willing to accept the downsides of selling the 2080Ti when you eventually upgrade to a 3080Ti. Given that the 2080Ti is delivering the performance I'm looking for, I'm happy to be able to play all these AAA titles at (almost) 60FPS at the visual fidelity of 4K. And that is also accounting for the fact that I'm looking forward to upgrading to a couple of 3080Tis when they eventually come out.

I was pretty much in your exact situation about a month ago, so hopefully this adds some perspective to your dilemma.


----------



## chibi

I'll be on a Dell AW3418DW 3440 x 1440 120 Hz so I feel the 2080 Ti will definitely be a solid card. Previously I was using the Titan Xp with my 8700K and it was no slouch either. I'm just on the fence with the 2080 Ti being so _old_ now that I don't want to pay the premium for an end of life part hah. But then ampere is so far away that I don't know if I could live with the iGPU for 21:9.


----------



## kx11

Hey guys, I did a video about the impact of GPU OC on 4K gaming in The Outer Worlds. Hope you like it!


----------



## ahnafakeef

chibi said:


> I'll be on a Dell AW3418DW 3440 x 1440 120 Hz so I feel the 2080 Ti will definitely be a solid card. Previously I was using the Titan Xp with my 8700K and it was no slouch either. I'm just on the fence with the 2080 Ti being so _old_ now that I don't want to pay the premium for an end of life part hah. But then ampere is so far away that I don't know if I could live with the iGPU for 21:9.


Get a 2080Ti. You won't be disappointed with the performance. The hassle of reselling it will be worth not having to deal with iGPU, imho.


----------



## chibi

ahnafakeef said:


> Get a 2080Ti. You won't be disappointed with the performance. The hassle of reselling it will be worth not having to deal with iGPU, imho.



With the 2080 Ti FE, are there any issues flashing the bios with the Galax 380W? Any loss of ports or performance hits?


----------



## J7SC

chibi said:


> Hey everyone, I'm building a new 9900KS rig. Should I get the 2080 Ti FE card now, or wait for Ampere 3080 Ti? I don't have a spare GPU so it will have to work on the iGPU if wait.
> 
> Question - if this were you, would you buy into the 2080 Ti 1 year after release? Kind of at the end of it's life cycle.


 


chibi said:


> I'll be on a Dell AW3418DW 3440 x 1440 120 Hz so I feel the 2080 Ti will definitely be a solid card. Previously I was using the Titan Xp with my 8700K and it was no slouch either. I'm just on the fence with the 2080 Ti being so _old_ now that I don't want to pay the premium for an end of life part hah. But then ampere is so far away that I don't know if I could live with the iGPU for 21:9.


 
I'm not sure I could survive on the iGPU either, especially with a shiny new 9900KS rig. If the OP wants to wait for Ampere (uncertain release date, first-gen 7nm EUV process) instead of getting a 2080 Ti now, why not look at a used 1080 / Ti, or perhaps even a 980 Ti? Better than the iGPU.


----------



## ahnafakeef

chibi said:


> With the 2080 Ti FE, are there any issues flashing the bios with the Galax 380W? Any loss of ports or performance hits?


I haven't tried other BIOSes on my 2080Ti, but to the best of my knowledge you shouldn't face any of those issues. That said, the 380W BIOS apparently makes cards run hotter and hit the thermal limit much quicker, which is supposedly an issue with any air-cooled 2080Ti. Hopefully someone with more experience will chime in with more concrete information.

Personally, I'm running a very modest overclock (+125 power, +100 core, +500 memory) on the stock ASUS Strix BIOS via MSI Afterburner with a custom fan curve. It usually keeps the card around 2000MHz on the core, which has sufficed to keep FPS in Exodus and Odyssey around 60 in most instances. And I'm happy with that for now.


J7SC said:


> I am not sure either I could survive on iGPU, especially with a shiny new 9900KS rig. If OP wants to wait for Ampere (uncertain release date, 1st gen 7nm EV process) instead of a 2080 Ti now, why not look at a used 1080 / Ti or perhaps even 980 Ti - better than iGPU


If s/he's happy with the performance of those GPUs, they might be an option. Although I'd personally want every last bit of GPU horsepower I can get at 3440x1440 120Hz.


----------



## keikei

Hey, whatever helps right? I enabled image sharpening as well. I still need to do the reshade thing.


----------



## J7SC

I guess the 'Low Latency Mode' used to be called 'Maximum pre-rendered frames' in earlier driver releases :headscrat


----------



## keikei

J7SC said:


> I guess the 'Low Latency Mode' used to be called 'Maximum pre-rendered frames' in earlier driver releases :headscrat



More likely just marketing, as I don't expect to see much of a playable difference. The sharpness option may be more perceptible. https://wccftech.com/game-ready-dri...ning-g-sync-with-ultra-low-latency-rendering/


----------



## J7SC

keikei said:


> More likely just marketing as I don't expect too see such a playable difference. The sharpness option maybe more perceptible. https://wccftech.com/game-ready-dri...ning-g-sync-with-ultra-low-latency-rendering/


 
...The NV control panel item location and description look similar per the attachment, but NV marketing 'might have spoken'.  I'll play with that sharpness option in the driver menu a bit...just picked up a slightly used Philips 4K 40-inch monitor last week and am running it over DisplayPort.


----------



## JustinThyme

Sheyster said:


> Noctua NH-D15, Dark Rock 4 Pro and Thermalright TS 140 Power will give any AIO a run for its money, no water needed.
> 
> I'm not dissing custom loops here, they certainly have their place. AIO cost vs. effectiveness/reliability is debatable at best.


I'm afraid I'm going to have to respectfully disagree on that one, and in a huge way. Maybe the bottom-of-the-bucket AIOs will be comparable, but the high-end ones will outperform air coolers, and do so much quieter. The other advantage of an AIO or a custom loop is that it doesn't dump the heat into the case to recycle right back across the heatsink, unless of course you're running some serious fans to pull it out. If air is all your budget will allow, I'm not knocking anyone on that; it is what it is.

Same goes for air cooling on a 2080Ti. The cards are hot, no argument to be had there. I had to leave the door open on my big aZZ Enthoo Elite when I first got my pair of 2080Tis, as under load they dumped so much heat into the case that the fans for my rads were pulling the hot air through, making my loop temp climb steadily. Had to wait for the Strix blocks to become available to get my machine right. Liquid temp on air with the door closed would easily pass 45C with next to no CPU load; it hasn't seen more than 30C since. Add to all that a huge honking piece of metal that makes it next to impossible to get to your RAM, and heats up the RAM in the process.

Anything is debatable, and I realize that anyone worried about cost will have an issue with a good AIO, as they aren't cheap, much less a custom loop where the fittings alone can run you into the poor house. You have to lift your leg high if you want to run with the big dogs.

Of course as you already agreed no air cooler or AIO will ever come close to the performance of a well designed custom loop.


----------



## Sheyster

J7SC said:


> I guess the 'Low Latency Mode' used to be called 'Maximum pre-rendered frames' in earlier driver releases :headscrat


Yep, they changed the name in response to AMD making a big deal about their own similar setting.


----------



## chris89

Can someone fix the reference PCB blower 2080 Ti's fan curve BIOS so that on auto it runs at 75% at 75C and 100% at 84C? Basically, I need a linear curve from 0% to 100% over 0 to 84C.

Thanks!


----------



## ahnafakeef

chris89 said:


> Can someone fix the reference PCB blower 2080 ti's fan curve BIOS so on auto it'll be at 75% at 75c and 100% 84c.... basically need a linear curve from 0 to 84c from 0% to 100%
> 
> thanks


IIRC, the BIOS cannot be modded arbitrarily on 2080Tis due to some restriction from Nvidia. You can only flash certain compatible BIOSes, and you're stuck with the settings they come with.

Although, I have to ask: why do you want a linear curve from 0 to 84C? My card never goes above 60C and throttles quite a bit when it hits 60C. So my custom curve in AB is linear from 0% to 100% over 0 to 55C, to maximize cooling and minimize the throttling at 60C as much as possible. Am I doing this wrong?
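
For what it's worth, both curves are just straight-line interpolations from (0C, 0%) up to a chosen top temperature. A minimal sketch of the idea, using the two endpoints discussed here:

```python
# Minimal sketch of a linear fan curve: duty cycle rises in a straight
# line from 0% at 0C to 100% at max_temp_c, then stays pinned at 100%.

def fan_duty(temp_c: float, max_temp_c: float) -> float:
    """Fan duty in percent for a 0C..max_temp_c linear curve."""
    return min(100.0, max(0.0, 100.0 * temp_c / max_temp_c))

# chris89's requested 0-84C curve:
print(round(fan_duty(75, 84)))   # ~89% at 75C
# a steeper 0-55C curve is already maxed out by 60C:
print(fan_duty(60, 55))          # 100.0
```

Note that a strict 0-84C line actually puts the fan near 89% by 75C, a bit above the 75% asked for; hitting exactly (75C, 75%) and (84C, 100%) would need a second, steeper segment rather than a single line from zero.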


----------



## chibi

J7SC said:


> I am not sure either I could survive on iGPU, especially with a shiny new 9900KS rig. If OP wants to wait for Ampere (uncertain release date, 1st gen 7nm EV process) instead of a 2080 Ti now, why not look at a used 1080 / Ti or perhaps even 980 Ti - better than iGPU



I had the Titan Xp and felt the need to upgrade. Just didn't time it right and now I'm stuck between getting the 2080 Ti now, or wait for 3080 Ti.


----------



## J7SC

chibi said:


> I had the Titan Xp and felt the need to upgrade. Just didn't time it right and now I'm stuck between getting the 2080 Ti now, or wait for 3080 Ti.


 
...that's a tough one to decide, especially if you want to drive high resolutions with high refresh rates; as others here suggested, the iGPU may have trouble with that. The 3080 Ti is supposed to bring rasterization and ray-tracing improvements, along with 16 GB VRAM. On the flip side, it will be the initial 7nm EUV guinea pig, and even better NV cards are sure to follow, depending on when/what the competing 'big Navi' and Intel GPUs bring to the consumer. 

I don't think the 2080 Ti will be out of date for several years, though, re. games, given its current 'top performer' status; after all, game developers are looking for high-volume sales. I picked my two 2080 Tis (Aorus, w-blocked) up last December, after skipping two GPU generations, and drive 4K with one / both, depending on app and title (re. SLI/NVLink) without any issues whatsoever. I will probably skip the 3080 Ti for the gen after that, but I really want to see first what the 3080 Ti, big Navi and Intel's GPU are all about before making any concrete plans... 

At the end of the day, you need to figure out if 8 months or so with a super gaming rig like the 9900K/S is worth the roughly $600 you might lose on the 2080 Ti and its subsequent resale when the 3080 Ti comes out :2cents:


----------



## chibi

J7SC said:


> ...that's a tough one to decide, especially if you want to drive high resolutions with high refresh rates - as other here suggested, iGPU may have trouble with that. The 3080 Ti is supposed to have rasterization and ray tracing improvement, along with 16 GB VRAM. On the flip side, it will be the initial '7nm EUV' guinea pig and even better NV cards are sure to follow, depending on when/what competing 'big Navi' and 'Intel X GPU' bring to the consumer.
> 
> I don't think 2080 Ti will be out of date for several years, though, re. games, given its current 'top performer' status - after all, game developers are looking for high-volume sales. I picked my two 2080 Tis (Aorus w-blocked) up last December, after skipping two GPU generations, and drive 4K with one / both, depending on app and title (re. SLI/NVLink) without any issues whatsoever. I will probably skip 3080 Ti for the gen after that, but I really want to see first what 3080 Ti, big Navi and Intel X GPU are all about before making any concrete plans...
> 
> At the end of the day, you need to figure out if 8 months or so with a super gaming rig like 9900K/S is worth around $600 you might loose with 2080 Ti and subsequent resale when 3080 Ti comes out :2cents:



Thanks for your input, I will sleep on it as I wait for the 9900KS to ship.


----------



## H3MPY

*2080 ti XC Ultra flash to Galax 380w*

Hey guys, got an XC Ultra not too long ago, and today I'm trying to flash it to the Galax 380W BIOS. I'm using the patched nvflash and keep getting the cert 2.0 verification error. I've been digging around the thread for similar issues, and from what I can see so far, people either stopped posting about it or maybe just forgot to disable the device first. I've been trying everything and am stumped for now. Any input is greatly appreciated. Thanks!!


----------



## bhsmurfy

H3MPY said:


> Hey guys got a xc ultra not too long ago and today I am trying to flash it to the galax 380w bios. I am using the patched nvflash and keep getting the cert 2.0 verification error. ive been digging around the thread for similar issues and from what I can see so far people stopped posting about it or maybe just forgot to disable device. Been trying everything and am stumped for now. Any input is greatly appreciated. Thanks!!


You have a different xusb firmware and would have to hardware-program the BIOS onto the vBIOS chip. Either way, your power limit is a whopping 37 watts lower with the regular EVGA XC Ultra BIOS (Galax is 380W, EVGA is 343W IIRC). If you want more power, your only option is the KPE XOC BIOS (EVGA Kingpin Edition); those share the same xusb firmware. Certain options are then buggy under certain OC programs (change too many settings in EVGA's tool and it freezes, etc.); you just have to find the quirks.
The XC Ultra is a good card, though. At least mine is: 2205/2200, #64 single-GPU on the Port Royal Hall of Fame.


----------



## H3MPY

bhsmurfy said:


> You have a different xusb firmware and would have to hardware program the bios onto the vbios chip. Either way your power limit is a wopping 37watts lower with the regular EVGA XC Ultra bios(Galax is 380 EVGA is 343 iirc). If you want more power your only option is the KPE XOC Bios(EVGA Kingpin Edition) Then certain options are buggy under certain OC Programs. Change too many settings in evga it freezes etc. Just have to find the quirks. They have same xusb firmwares.
> The XC Ultra is a good card tho. At least mine is. 2205/2200 #64 x1GPU on Port Royal Hall of Fame


Hey thanks for the response. I have seen multiple posts of people claiming to have flashed that Galax bios onto their XC Ultras. Not sure if they were just lying then or what not.


----------



## dante`afk

chibi said:


> Hey everyone, I'm building a new 9900KS rig. Should I get the 2080 Ti FE card now, or wait for Ampere 3080 Ti? I don't have a spare GPU so it will have to work on the iGPU if wait.
> 
> Question - if this were you, would you buy into the 2080 Ti 1 year after release? Kind of at the end of it's life cycle.


Well, do you need 2080Ti performance now?

If yes: buy now.
If no: wait for Ampere, which might be released mid/late 2020 or even 2021.


If money is no object, buy the 2080Ti now and Ampere when it's released.


----------



## Brodda-Syd

chibi said:


> I'll be on a Dell AW3418DW 3440 x 1440 120 Hz so I feel the 2080 Ti will definitely be a solid card. Previously I was using the Titan Xp with my 8700K and it was no slouch either. I'm just on the fence with the 2080 Ti being so _old_ now that I don't want to pay the premium for an end of life part hah. But then ampere is so far away that I don't know if I could live with the iGPU for 21:9.


I see your point, as I was waiting for AMD to release their "NVIDIA killer", hoping it would bring down the price of the overly expensive 2080 ti. But I came to the realization that the "Nvidia killer" is not a priority for AMD, and I would probably be waiting until next summer for that as well.
My friends and work colleagues told me they were "sick of hearing me whining about the exorbitant price of the 2080 ti; it's not like you can't afford it, so shut up and buy it". So I had to compromise on my principles and splash the cash.
Now that Intel is joining the GPU party, the future should see enough competition to keep GPU prices reasonable, unless they form a cartel. 

I would say: if you can afford a 2080 ti, buy it.


----------



## chibi

Brodda-Syd said:


> Splash the cash.



Looks like you did a belly flop from a 20 ft diving board and went with 2x! 

The general consensus is to get the 2080 Ti now. I'll wait until Black Friday and see what deals can be had. Can't be too much longer.


----------



## J7SC

chibi said:


> Looks like you did a belly flop from a 20 ft diving board and went with 2x!


chibi said:


> General consensus is that get 2080 Ti now. I'll wait until black Friday and see what deals can be had. Can't be too much longer.



...the same 'belly flop' happened to me last December; you should also get two, so it's like you have 'a 4080 Ti'  

If you can get a good deal on a 2080 Ti around Black Friday, use the money you saved to get one for which water blocks are readily available... 2080 Tis, with their typical 300W to 400W BIOS range, love water cooling, and even the cheaper Bykski blocks are still quite good.


----------



## kithylin

Brodda-Syd said:


> I see your point as I was waiting for AMD to release their "nVIDIA KILLER" hoping it will bring down the price of the overly expensive 2080 ti, but I came to the realization that the "nVidia Killer" is not a priority for AMD and I will probably be waiting until next summer for that as well.


Currently AMD is about 2-3 years behind in terms of video cards. They are only just now releasing the 5700 XT, which roughly matches the 1080 Ti in most games, 3 years after the 1080 Ti came out. Those are based on Navi 10, though, and there are rumors of a "Big Navi" or bigger card based on Navi 12, but it may not exist or may never come out. Even if it does, it most likely won't match the 2080 Ti, sadly.

I really do wish AMD or Intel or -SOMEONE- could match Nvidia's performance at a lower price to rein in Nvidia's crazy prices.


----------



## chibi

J7SC said:


> ...same 'belly flop' happened to me last December , you should also get two so it's like you have 'a 4080 Ti'
> 
> If you can get a good deal on a 2080 Ti card around black Friday, try to get one for which water-blocks are readily available with the money you saved on the deal...2080 Ti and their 300w to 400w typical bios range love water-cooling...even the cheaper Bykski blocks are still quite good.




That's a good option to consider. I usually stick with Watercool blocks, though, and it doesn't look like any pre-blocked cards use them. I'll most likely get the Founders Edition.


----------



## keikei

The most I've seen in discounts was about $100 off a high-end card, most likely a 1080Ti back in the day, and it didn't last too long. Maybe Black Friday through Cyber Monday.


----------



## TK421

H3MPY said:


> Hey thanks for the response. I have seen multiple posts of people claiming to have flashed that Galax bios onto their XC Ultras. Not sure if they were just lying then or what not.


I have my xc ultra (not icx) 2080ti with the galax 380w bios, no problem here


----------



## gfunkernaught

TK421 said:


> I have my xc ultra (not icx) 2080ti with the galax 380w bios, no problem here


Those BIOSes crash my XC Ultra when I OC and play games that use the RT cores. There's probably a reason my card in particular has a 330W limit: the board simply cannot handle more than that.


----------



## Brodda-Syd

chibi said:


> Looks like you did a belly flop from a 20 ft diving board and went with 2x!
> 
> General consensus is that get 2080 Ti now. I'll wait until black Friday and see what deals can be had. Can't be too much longer.


Yes it was a Big and Ugly BELLY FLOP SPLASH and it HURT BAD!!! (Mainly coz I got a Big Belly).
But with Crysis2/3, Battlefield 1/4/5/Hard Line, Mass Effect: Andromeda, Star Wars: Battlefront 2, Call of Duty WW2, Arma 3, Ghost Recon: Wildlands, Metal Gear Solid V, Metro 2033/Exodus, Deus Ex: Mankind, Titanfall1/2, Apex Legends....
*** ALL IN GLORIOUS 4K *** with "SLI" frame-rates of 100 minimum with all effects MAXED-OUT I have a BIG BIG BIG SMILE ON MY FACE!!!

Yes. BLACK Friday would be the perfect time to try and get a good deal on a 2080 ti or TWO as an early Christmas Present.


----------



## TheRedViper

J7SC said:


> ...post link to your build-thread here when ready - I like over-the-top


There you go 

Thermaltake Core P5, i9-7900X at 4.8GHz all cores, Galax 2080ti 10th Anniversary on the XOC BIOS, Corsair Vengeance RGB 4x8GB 3200MHz, Asus Prime X299 Deluxe II mobo, Aqua Computer cuplex kryos NEXT nickel-plated CPU cooler, Asus ROG Thor 1200W PSU.


----------



## Brodda-Syd

TheRedViper said:


> There you go
> 
> Thermaltake core P5, I9-7900x 4.8GHz all cores, Galax 2080ti 10th anniversary on XOC BIOS, corsair vengeance RGB 4x8GB 3200MHz, Asus deluxe prime II X299 mobo, Aquacomputer cuplex kryos next Nickle plated cpu cooler, asus rog thor 1200W psu.


Wow.
More Pics Please


----------



## J7SC

TheRedViper said:


> There you go
> 
> Thermaltake core P5, I9-7900x 4.8GHz all cores, Galax 2080ti 10th anniversary on XOC BIOS, corsair vengeance RGB 4x8GB 3200MHz, Asus deluxe prime II X299 mobo, Aquacomputer cuplex kryos next Nickle plated cpu cooler, asus rog thor 1200W psu.


 
beautiful :thumb: ...Core P5 are so much fun to work with ...more pics are always welcome, including detailed ones of Galax 2080ti 10th anniversary


----------



## GRABibus

gfunkernaught said:


> those bios crash my xc ultra when i oc and play games that use RT cores. there is probably a reason why my card in particular has a 330w limit, the board simply cannot handle more than that.


It's not the BIOS that crashes your GPU when playing ray-traced games with an overclock.
Ray tracing is so demanding that we usually have to dial back our overclocks in DXR games versus non-DXR games.

For example, on my side I have an overclock at [email protected] which works fine in DX11 games but crashes immediately in BFV or CoD MW with DXR enabled.

With DXR enabled, my stable overclocks top out at 2055MHz-2085MHz.


----------



## ahnafakeef

Brodda-Syd said:


> Yes it was a Big and Ugly BELLY FLOP SPLASH and it HURT BAD!!! (Mainly coz I got a Big Belly).
> But with Crysis2/3, Battlefield 1/4/5/Hard Line, Mass Effect: Andromeda, Star Wars: Battlefront 2, Call of Duty WW2, Arma 3, Ghost Recon: Wildlands, Metal Gear Solid V, Metro 2033/Exodus, Deus Ex: Mankind, Titanfall1/2, Apex Legends....
> *** ALL IN GLORIOUS 4K *** with "SLI" frame-rates of 100 minimum with all effects MAXED-OUT I have a BIG BIG BIG SMILE ON MY FACE!!!
> 
> Yes. BLACK Friday would be the perfect time to try and get a good deal on a 2080 ti or TWO as an early Christmas Present.


Which monitor/TV are you using? And how are you doing 4K 120Hz without HDMI 2.1? Any compromises in picture settings (e.g. chroma subsampling)?


----------



## kithylin

ahnafakeef said:


> Which monitor/TV are you using? And how are you doing 4K 120Hz without HDMI 2.1? Any compromises in picture settings (e.g. chroma subsampling)?


3840x2160 @ 120Hz is only about 29.86 Gbps of data bandwidth for the video signal (per the calculator below). DisplayPort 2.0, rated for up to 77.37 Gbps total, could easily handle 4K @ 120Hz; note, though, that the 2080 Ti itself exposes DisplayPort 1.4a, whose ~25.92 Gbps payload needs DSC or chroma subsampling for that mode.

Resolution bandwidth calculator: https://k.kramerav.com/support/bwcalculator.asp

Displayport specifications: https://en.wikipedia.org/wiki/DisplayPort#Specifications
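
Those numbers are easy to sanity-check yourself: uncompressed video bandwidth is just pixels x refresh rate x bits per pixel. A quick back-of-the-envelope sketch (active pixels only; real links add blanking overhead, which is why calculators like the Kramer one report a somewhat higher figure):

```python
# Back-of-the-envelope video bandwidth: active pixels x refresh x bits/px.
# Blanking intervals are ignored, so treat the result as a lower bound.

def video_gbps(width: int, height: int, hz: int, bits_per_pixel: int = 24) -> float:
    return width * height * hz * bits_per_pixel / 1e9

print(f"{video_gbps(3840, 2160, 120):.2f} Gbps")  # 23.89 Gbps for 4K120 8-bit RGB
```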


----------



## ahnafakeef

kithylin said:


> 3840x2160 @ 120hz is only 29.86 Gbps of data bandwidth for the video signal. Displayport 2.0 is rated for up to 77.37 Gbps maximum total data rate. Displayport 2.0 could easily handle 4K @ 120 Hz.
> 
> Resolution bandwidth calculator: https://k.kramerav.com/support/bwcalculator.asp
> 
> Displayport specifications: https://en.wikipedia.org/wiki/DisplayPort#Specifications


I know DP can handle it, which is why I led with the question "which monitor/TV are you using". 

With the mention of 4K 120Hz, I half expected the screen to be an LG C9 or a Samsung Q90R.


----------



## J7SC

ahnafakeef said:


> I know DP can handle it, which is why I led with the question "which monitor/TV are you using".
> 
> With the mention of 4K 120Hz, I half expected the screen to be an *LG C9 or a Samsung Q90R*.


 
One of each, please, Santa...


----------



## kx11

ahnafakeef said:


> I know DP can handle it, which is why I led with the question "which monitor/TV are you using".
> 
> With the mention of 4K 120Hz, I half expected the screen to be an LG C9 or a Samsung Q90R.





The LG C9 just got the G-Sync firmware update (you do need the latest GeForce driver from 2 days ago).


----------



## kx11

TheRedViper said:


> There you go
> 
> Thermaltake core P5, I9-7900x 4.8GHz all cores, Galax 2080ti 10th anniversary on XOC BIOS, corsair vengeance RGB 4x8GB 3200MHz, Asus deluxe prime II X299 mobo, Aquacomputer cuplex kryos next Nickle plated cpu cooler, asus rog thor 1200W psu.





Wow, that is pretty! 



Do you need the XOC BIOS on the HOF 10th Anniversary edition? I thought this GPU had huge OC headroom on the vanilla BIOS.


----------



## Driller au

GRABibus said:


> This is not the bios who crashes your GPU when playing Ray Tracing games with overclock.
> Ray tracing is so much demanding that usually, we have to decrease overclocks when playing DXR games versus non DXR games.
> 
> For example, from my side, I have an overclock at [email protected] which works fine in DX11 games but crashes immediately in BFV or CoD MW with DXR enabled.
> 
> When DXR enabled, my stable overclokcs are at maximum 2055MHz-2080MHz.


My experience exactly, too. I just flashed the 380W Galax BIOS a few nights ago to see if it made a difference, but it's still the same.


----------



## ahnafakeef

J7SC said:


> One of each, please, Santa...
> 
> https://www.youtube.com/watch?v=KkXE_n4Pn2Q


I'm good with just one of them 

Why would you want a QLED in a world where OLED exists?



kx11 said:


> LG C9 just got the Gsync firmware update ( you do need the latest geforce driver from 2 days ago )


I've updated it already, thanks to GeForce Experience. 

Speaking of which, is GeForce Experience the best way to install drivers? Or should I use DDU to do a clean uninstall and then install manually?


----------



## J7SC

ahnafakeef said:


> I'm good with just one of them
> 
> Why would you want a QLED in a world where OLED exists? (...)


 
...need an extra monitor to watch in the bathroom...


----------



## bhsmurfy

Driller au said:


> My experience exactly to, i just flashed the 380W galax bios a few nights ago to see if it made a difference but still the same.



I have the opposite experience. I clock higher in DX12 and have to drop to 2130-2160 for DX11 apps.


----------



## Driller au

bhsmurfy said:


> I have opposite experience. I clock higher on dx12 and have to clock to 2130-2160 for dx11 apps.


Yep, but what we're talking about is when DXR is enabled. I'd be interested in your experience with that higher-voltage BIOS in games; I saw your Port Royal result, which is impressive, but how does it go in games like CoD MW or BFV, if you have them?


----------



## ahnafakeef

J7SC said:


> ...need an extra monitor to watch in the bathroom...


You know what's better than a C9?

TWO C9s!


----------



## Brodda-Syd

ahnafakeef said:


> Which monitor/TV are you using? And how are you doing 4K 120Hz without HDMI 2.1? Any compromises in picture settings (e.g. chroma subsampling)?


At 4K I'm still using my 60Hz Dell UltraSharp UP3214Q at the moment, but I will be upgrading to a 32-inch 144Hz 4K monitor after Christmas. Not entirely sure which one I'll get yet.


----------



## ahnafakeef

Brodda-Syd said:


> At 4K I'm still using my 60Hz Dell UltraSharp UP3214Q at the moment but will be upgrading to a 144Hz 4K 32 inch monitor after Christmas. Not Entirely sure which one I will be getting at the moment.


I see. You mentioned "over 100FPS in glorious 4K" so I thought you were already playing on a 4K 120/144Hz screen.


----------



## Brodda-Syd

ahnafakeef said:


> I see. You mentioned "over 100FPS in glorious 4K" so I thought you were already playing on a 4K 120/144Hz screen.


Maybe I was a bit misleading there. 
I am getting around 115fps in the Crysis 3 Jungle level at 4K, but my 4K display is only 60Hz.

There are plenty of 27-28 inch G-SYNC 144Hz 4K displays, but I'm looking for a 32-inch one.
I am specifically waiting for a 32-inch version of https://uk-store.acer.com/predator-x-gaming-monitor-predator-x27-black
As this video https://www.youtube.com/watch?v=YjBqlpu-V_0 shows, Linus is testing with a 32 inch G-SYNC 144Hz 4K display, so they DO EXIST!!!
Plus, why did they lend Linus a 32-inch display? Why didn't they lend him a 27-inch display for testing?

Now I could go bigger with one of these https://www.amazon.co.uk/Acer-Preda...+4k+display&qid=1572611105&s=computers&sr=1-4 
I have checked it out, but it's more like a TV than a monitor, as it needs to be at least 3+ feet from your eyes.

I will be upgrading, but I cannot say when at this time.


----------



## ahnafakeef

Brodda-Syd said:


> Maybe I was a bit mis-leading there.
> I am getting around 115fps in the Crysis 3 Jungle level at 4K, but my 4K display is only 60Hz.
> 
> There are plenty of 27-28 inch GSYNC 144Hz 4K displays but I'm looking for a 32 inch GSYNC 144Hz 4K display.
> I am specifically waiting for a 32 inch version of https://uk-store.acer.com/predator-x-gaming-monitor-predator-x27-black
> As this video https://www.youtube.com/watch?v=YjBqlpu-V_0 shows that Linus is testing with a 32 inch GSYNC 144Hz 4K display so they DO EXIST!!!
> Plus why did they lend Linus 32 inch display? Why didn't they lend him a 27 inch display for testing?
> 
> Now I could go bigger with one of these https://www.amazon.co.uk/Acer-Preda...+4k+display&qid=1572611105&s=computers&sr=1-4
> And I have checked it out but its more like a TV than a monitor as it needs to be at least 3+ feet away from your eyes.
> 
> I will be upgrading, but I cannot at this time say when


I love my 32" for a desk setup where it's literally an arm's length away from me. I wouldn't go any smaller or bigger for this distance.


----------



## J7SC

ahnafakeef said:


> You know what's better than a C9?
> 
> TWO C9s!


 
Good point...there's still room in the garage


----------



## Sheyster

Brodda-Syd said:


> Now I could go bigger with one of these https://www.amazon.co.uk/Acer-Preda...+4k+display&qid=1572611105&s=computers&sr=1-4
> And I have checked it out but its more like a TV than a monitor as it needs to be at least 3+ feet away from your eyes.


The review for the ASUS version of that monitor/panel on TFTcentral was not impressive.


----------



## sblantipodi

Brodda-Syd said:


> We do not know the spec of the 3080 ti and it probably wont be released until next summer at the earliest.


Considering the history of Nvidia releases, has any card stayed on top for two years?
Do you think the 2080Ti can last two years?


----------



## gfunkernaught

sblantipodi said:


> considering the history of the nvidia releases, is there a card that lived two years?
> do you think that 2080Ti can live 2 years?


Those days are long gone. It used to be that the most expensive gaming card was Nvidia's best model, the 8800 GTX for instance; that card lasted two generations, especially in SLI. Once we started seeing $500-650 cards released with only a 256-bit bus, followed by the Ti 384-bit bus variant, it was game over. I guess the Pascal series is still kicking ass, though.


----------



## sblantipodi

gfunkernaught said:


> those days are long gone. it used to be the most expensive gaming card was nvidia's best model, 8800 GTX for instance. the card lasted two generations, especially in SLI. once we started to see a $500-650 being released with only a 256-bit bus followed by the Ti 384-bit bus variant, it was game over. i guess the pascal series is still kicking ass.


So is it reasonable that we will see the 3080ti this Christmas?


----------



## bl4ckdot

sblantipodi said:


> So is it reasonable that we will see the 3080ti this Christmas?


No


----------



## sblantipodi

bl4ckdot said:


> No


What would most consider the earliest reasonable date?


----------



## bl4ckdot

I don't expect anything until march 2020


----------



## JustinThyme

One thing I've learned over the years is not to expect anything. They could have a new horizon just sitting there that won't be released until they've exhausted sales on what's out now. They just released the "Super" cards, so that buys them time on their schedule as well. 

As for monitors, I'm using an ASUS PG348Q 3440x1440 100Hz that's working great for me. I see plenty of people complain about backlight bleed on all these monitors, because they're all using the same panel. Hmmm. Let's first turn out all the lights, crank brightness and backlight to max and contrast to max with a black screen, and look for bleed; they will all have it to a degree. Thing is, I don't see it on mine, but then again I don't sit in a pitch-black room with brightness and backlight maxed out and contrast cranked, staring at a black screen either. I set it to where I use it and don't see anything out of the norm. Just 34 inches of ultrawide goodness.


----------



## ikomiko

Hi,

I have problems getting 1.093 V on my Asus Strix 2080 Ti. Currently I'm using the XOC BIOS. I tried the Matrix BIOS and it was possible for me to get 1.093 V with the voltage/frequency curve, but I wasn't able to use all of my HDMI/DP connections anymore.

Is there a BIOS where I can use 1.093 V with all my connections?


----------



## kithylin

ikomiko said:


> Hi,
> 
> I have problems getting 1.093 V on my Asus Strix 2080 Ti. Currently I'm using the XOC BIOS. I tried the Matrix BIOS and it was possible for me to get 1.093 V with the voltage/frequency curve, but I wasn't able to use all of my HDMI/DP connections anymore.
> 
> Is there a BIOS where I can use 1.093 V with all my connections?


I think you're going at this video card overclocking thing in the wrong direction. With modern cards like the 2080 Ti you actually want to -REDUCE- voltage and see if you can overclock with less volts (and thus less heat). The old way of overclocking video cards was "more volts = higher overclock", but that's not how video cards work today.
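The flattened voltage/frequency curve is the usual Turing undervolt method: in an Afterburner-style curve editor, pick a voltage point, raise it to your target clock, and flatten everything to its right so the card never requests more volts. A rough sketch of that idea; the curve numbers and the `flatten_curve` helper are purely illustrative, not any real tool's API.

```python
# Hedged sketch of an Afterburner-style "flatten" undervolt. The curve maps
# millivolts -> MHz; every point at or above the chosen voltage is clamped to
# the target clock, so the card holds target_mhz without boosting volts higher.
# All numbers are illustrative, not measured values for any specific card.

def flatten_curve(curve, max_mv, target_mhz):
    """Clamp every point at or above max_mv to target_mhz."""
    flattened = {}
    for mv, mhz in sorted(curve.items()):
        if mv >= max_mv:
            flattened[mv] = target_mhz   # flattened region: clock held, voltage capped
        else:
            flattened[mv] = min(mhz, target_mhz)
    return flattened

# Illustrative stock curve for a hypothetical 2080 Ti
stock = {800: 1650, 900: 1850, 1000: 1980, 1043: 2040, 1093: 2100}

undervolted = flatten_curve(stock, max_mv=900, target_mhz=1905)
print(undervolted)
```

The win is that heavy loads now run at the capped voltage instead of chasing the 1.043-1.093 V bins, which cuts heat and, per the temperature discussion in this thread, often nets higher sustained clocks.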


----------



## skupples

ladies and gents,

I assume it's not normal for my new card to idle @ 1350 MHz, 0.78 V?


----------



## J7SC

skupples said:


> ladies and gents,
> 
> I assume it's not normal for my new card to idle @ 1350 MHz, 0.78 V?



...'Yes' - just run a quick rendering test in GPU-Z to see the card exceed 1350 (by a lot...) when 3D kicks in, and/or keep a sensor tab open while you run Fire Strike or Port Royal.

I should add that it also comes down to what kind of power regime you have set in the Nvidia driver tab (optimal vs. prefer max performance), and whether you're SLI'ing.


----------



## kithylin

skupples said:


> ladies and gents,
> 
> I assume it's not normal for my new card to idle @ 1350 MHz, 0.78 V?


I think this might be Nvidia's old multi-monitor bug. In the past this usually happened if you have 2 or more monitors at different refresh rates, like one high-refresh screen in the center and 2x 60 Hz screens on either side. Check that all of your monitors are the same resolution and refresh rate and it should drop down to "proper" idle clocks.


----------



## skupples

Yeah, definitely reminds me of the old MM bug, except I'm only on a single input ATM. I'm wondering if 4K native is what's triggering it. I'll have to drop res this evening when I get home. Aside from that, I'm gonna refresh the BIOS.

I'm wondering if the previous owner was a tinkerer. Why else would someone in Wisconsin have liquidated 9 cards that smell and look nigh brand new, with intact warranty seals, besides golden sample hunting?

This is me thinking about it from a totally up & up standpoint, of course.

I've purchased used mining cards, and have a strong nose. New hardware & used hardware always smell different, even if the stock cooler was tucked away in storage while it did work.

Oh, you'll find this funny.

All my 1080 Ti issues? They seem to be related to NV Experience having an issue with a very specific set of games. Deduced this the hard way, with lots of money. But yeah, removing Experience resolves all of my 1080 Ti weirdness, excluding SLI stuff, but we won't go there. In short, it seems Experience "blocks unsigned code" if you will, though I'm sure it's a totally different reason why torrented titles won't run while NV update is installed.


----------



## keikei

Don't worry folks, we can still hit 4K/60 fps... we just have to settle for high settings instead.


----------



## JustinThyme

skupples said:


> ladies and gents,
> 
> I assume it's not normal for my new card to idle @ 1350 MHz, 0.78 V?


Mine do.


----------



## skupples

keikei said:


> Don't worry folks. We can still hit 4k/60fps...we have to settle for high settings instead.


Is that post-today's-updates? It's gonna keep getting better, this is kinda how it works. You should know this, old timer.

The game is butter ATM on mostly high + ultra textures & lighting, and honestly expecting 4K ultra in something this shiny and view-distance-heavy, even with a 2080 Ti or Titan, is a bit premature. I ended up setting internal resolution to 90%; I can't see a difference @ 2 feet, but it allowed me to squeeze in a little more shiny.



JustinThyme said:


> Mine do.


good to know.

think i'll be ordering a kryo or heatkiller this weekend.


----------



## keikei

skupples said:


> Is that post-today's-updates? It's gonna keep getting better, this is kinda how it works. You should know this, old timer.
> 
> The game is butter ATM on mostly high + ultra textures & lighting, and honestly expecting 4K ultra in something this shiny and view-distance-heavy, even with a 2080 Ti or Titan, is a bit premature. I ended up setting internal resolution to 90%; I can't see a difference @ 2 feet, but it allowed me to squeeze in a little more shiny.
> 
> 
> 
> good to know.
> 
> think i'll be ordering a kryo or heatkiller this weekend.



Vid posted yesterday. I expect more tweaks. Green doesn't like to lose.


----------



## J7SC

:ninja: You're going to have some fun with a water-cooled 2080 Ti, i.e. with Heatkiller and similar blocks. While there certainly are several impressive air-cooled 2080 Ti models out there, typically the 2080 Ti uses between 300 W and 400 W on stock BIOS / full load... custom water (never mind more exotic) setups are de rigueur to dissipate that kind of heat quietly, especially if you want max 4K performance w/o 70%+ fan noise.


----------



## skupples

keikei said:


> Vid posted yesterday. I expect more tweaks. Green doesn't like to lose.
> 
> 
> https://www.youtube.com/watch?v=yD3xd1qGcfo


OK, well... all of that data is a bit different now after today's 3 GB patch. I just spent 3 hours in, and it was all glorious Irish cream butter.

The best bet is always to play the game, instead of listening to trust fund hipsters ramble on about the FPS and the settings and blah blah blah.
The game's got 40-some settings to toggle & play with, and at this point most of my stuff is set to ultra & very high, still only ~90% avg GPU usage. Pegged @ 60, 24/7.

Only trick I've pulled is the 9/10ths internal render.



J7SC said:


> :ninja: You're going to have some fun with a water-cooled 2080 Ti, ie. with Heatkiller and similar blocks  While there certainly are several impressive air-cooled 2080 Ti models out there, typically, the 2080 Ti uses between 300w and 400w on 'stock bios / full load' ...custom water (never mind more exotic) setups are 'de rigueur' to dissipate that kind of heat energy quietly, especially if you want max 4K performance w/o 70%+ fan noise.



Good to see not a whole lot's changed. I got used to undervolt tuning w/ Pascal, so this should be fun. I've already got +150/+300 dialed in. I'll need water for anything beyond that though, and it's probably already ninja-throttling me, right?


----------



## J7SC

skupples said:


> (...) good to see not a whole lot's changed. I got used to undervolt tuning w. pascal, so this should be fun. I've already got 150/300 dialed in. I'll need water for anything beyond that though, and it's probably already ninja throttling me, right?


 
...those damn ninja throttlers pop up when you least expect it.... but keeping temps low is step one, and volts as low as possible for given stability is step 2, to fight the evil master-Ninja NVBoost 3.0 (its selfie > below)


----------



## MrTOOSHORT

300 on memory? Get er up to 1000+. 300 is for kids


----------



## bhsmurfy

MrTOOSHORT said:


> 300 on memory? Get er up to 1000+. 300 is for kids


QFT

I don't start artifacting until +1775


----------



## kx11

As long as I'm away from water, the frame rate seems to hold 60+ fps @ 4K.


----------



## J7SC

kx11 said:


> as long as i'm away from water the fps seems to hold 60+ fps @ 4k https://www.youtube.com/watch?v=NnU1zNd9JEc&t=5s


 
...tx, that's some seriously nice eye candy :thumb: 

btw, SLI/NVLink profile available for this (just in case I want to go into the water...)?


----------



## sblantipodi

The graphics are good, but nothing that can justify less than 60 fps on an RTX Ti card. Don't forget that this crap runs on a toy like a PS4 Pro.


----------



## jura11

ikomiko said:


> Hi,
> 
> I have problems getting 1.093 V on my Asus Strix 2080 Ti. Currently I'm using the XOC BIOS. I tried the Matrix BIOS and it was possible for me to get 1.093 V with the voltage/frequency curve, but I wasn't able to use all of my HDMI/DP connections anymore.
> 
> Is there a BIOS where I can use 1.093 V with all my connections?


Hi there,

It's strange that you can't use all ports with the Matrix BIOS, because I use the Matrix BIOS on my Asus RTX 2080 Ti Strix and I can use all ports.

Regarding the XOC BIOS, sadly you can't use the voltage/frequency curve on it.

But as above, more voltage sometimes doesn't mean you can OC much higher; with Turing GPUs my OC is much better with less voltage.

Hope this helps.

Thanks, Jura


----------



## VPII

Just a quick question, I have not posted in this thread in a long time. At present I'm just about in summer, with ambient temps sitting close to or around 30C if not more. Do you lose max overclock with ambient temps being high? My reason for asking is that in winter I could bench 2160 to 2175 MHz core without an issue. At present I cannot even do 2145 and sometimes not even 2130. Please be kind.


----------



## jura11

VPII said:


> Just a quick question, I have not posted in this thread in a long time. At present I'm just about in summer, with ambient temps sitting close to or around 30C if not more. Do you lose max overclock with ambient temps being high? My reason for asking is that in winter I could bench 2160 to 2175 MHz core without an issue. At present I cannot even do 2145 and sometimes not even 2130. Please be kind.


Hi there,

Yes, most of us will lose clocks with higher ambient or water temperatures, which you can experience in hotter weather.

Similar to you, I usually lose 15-30 MHz in hotter ambient temperatures.

What GPU temperatures are you seeing?

Hope this helps.

Thanks, Jura


----------



## skupples

MrTOOSHORT said:


> 300 on memory? Get er up to 1000+. 300 is for kids


good to know. 

I was being conservative, cuz air. 

I typically start @ +500 and go from there with a block. 

gotta remember though, I'm in games 99% of the time, not benches
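For context on what those offsets are worth: the 2080 Ti's 616 GB/s spec comes from a 352-bit bus at 14 Gbps effective. A back-of-envelope sketch, assuming (hedged; as commonly reported for Turing) that the overclocking offset applies to the 7000 MHz double-data-rate reading:

```python
# Rough bandwidth math for the 2080 Ti's 352-bit GDDR6 bus.
# Assumption (hedged): the overclocking offset adds to the 7000 MHz
# double-data-rate figure, so effective rate per pin = (7000 + offset) * 2 Mb/s.

BUS_BITS = 352

def bandwidth_gbs(offset_mhz=0, ddr_mhz=7000):
    effective_mtps = (ddr_mhz + offset_mhz) * 2   # MT/s per pin
    return BUS_BITS / 8 * effective_mtps / 1000   # bus width in bytes * rate -> GB/s

print(round(bandwidth_gbs(0)))      # stock 2080 Ti
print(round(bandwidth_gbs(500)))    # the conservative air offset
print(round(bandwidth_gbs(1000)))   # the "+1000" suggested above
```

Stock works out to 616 GB/s, matching the spec sheet; +500 is about 660 GB/s and +1000 about 704 GB/s, roughly 7% and 14% more raw bandwidth.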


----------



## kithylin

VPII said:


> Just a quick question, I have not posted in this thread in a long time. At present I'm just about in summer, with ambient temps sitting close to or around 30C if not more. Do you lose max overclock with ambient temps being high? My reason for asking is that in winter I could bench 2160 to 2175 MHz core without an issue. At present I cannot even do 2145 and sometimes not even 2130. Please be kind.


Yes. Nvidia GPUs starting from the 1000 series -> 1600 series -> 2000 series all overclock based on temperatures above all else. They lose 10 to 15 MHz for every +10C, roughly. This cannot be disabled or avoided. So as your card gets hotter, it will automatically lose clocks no matter what you do. And higher ambient temperatures = hotter running temperatures for your card. Even if you are water cooled in a custom loop, ambient temperatures still affect your water temps. Custom water loops cannot cool below ambient, only down to ambient.


----------



## Pauliesss

Well, of course I am back, having issues with crashes in The Division 2, which I thought was a driver related issue (Nvidia claimed), but now exactly the same issue is with Red Dead Redemption 2.

The game will crash, sometimes in 30 min, sometimes in 1 hour, and the only error in the event viewer I can find is this:


> Display driver nvlddmkm stopped responding and has successfully recovered.
> Event ID: 4101


I have tried to do a clean installation of the newest drivers, I have tried some registry **** with TdrDelay, nothing seems to help.

I am starting to think that my GAINWARD GeForce RTX 2080 Ti Phoenix GS just cannot handle the constant load in GPU-hungry games.

What do you guys think? What should I try next?

I am afraid this will not get through the RMA process.


----------



## keikei

^Uninstall nvidia experience?


----------



## Pauliesss

I never had it installed.


----------



## vmanuelgm

Red Dead Redemption 2 gameplay with the 2080 Ti (still processing; 4K in a couple of hours or so):






High settings in-game, Image Sharpening on, 441.12 drivers (nice performance in several games).


----------



## KCDC

Pauliesss said:


> Well, of course I am back, having issues with crashes in The Division 2, which I thought was a driver related issue (Nvidia claimed), but now exactly the same issue is with Red Dead Redemption 2.
> 
> The game will crash, sometimes in 30 min, sometimes in 1 hour, and the only error in the event viewer I can find is this:
> 
> 
> I have tried to do a clean installation of the newest drivers, I have tried some registry **** with TdrDelay, nothing seems to help.
> 
> I am starting to think that my GAINWARD GeForce RTX 2080 Ti Phoenix GS just cannot handle the constant load in GPU-hungry games.
> 
> What do you guys think? What should I try next?
> 
> I am afraid this will not get through the RMA process.


Are you using DX12 on these? If so, try switching that in the settings to DX11/Vulkan. I would get errors in TD2 with Dx12, not sure about RDR2.


----------



## Pauliesss

KCDC said:


> Are you using DX12 on these? If so, try switching that in the settings to DX11/Vulkan. I would get errors in TD2 with Dx12, not sure about RDR2.


Yeah, but still, it's strange, I saw people using DX12 with no crashes with the same drivers, I think this may be GPU related.


----------



## J7SC

VPII said:


> Just a quick question, I have not posted in this thread in a long time. At present I'm just about in summer, with ambient temps sitting close to or around 30C if not more. Do you lose max overclock with ambient temps being high? My reason for asking is that in winter I could bench 2160 to 2175 MHz core without an issue. At present I cannot even do 2145 and sometimes not even 2130. Please be kind.





kithylin said:


> Yes. Nvidia GPUs starting from the 1000 series -> 1600 series -> 2000 series all overclock based on temperatures above all else. They lose 10 to 15 MHz for every +10C, roughly. This cannot be disabled or avoided. So as your card gets hotter, it will automatically lose clocks no matter what you do. And higher ambient temperatures = hotter running temperatures for your card. Even if you are water cooled in a custom loop, ambient temperatures still affect your water temps. Custom water loops cannot cool below ambient, only down to ambient.


 
^^ Yeah, depending on your specific w-cooling (ambient) setup, your card(s) will see a typical increase of between 10C and 17C with a challenging benchmark or game load (PortRoyal and Superposition8k with some temps in attachment below from a prior post). So if you START at around 30C ambient, you'll likely end up at 45C ++. That in turn means a loss of about 30 MHz JUST via temps and NVBoost 3.0 compared to a start of a bench or game with ambient temp in the mid-to-high teens. BTW, winter is coming in my neck of the woods :applaud:


----------



## ntuason

Damn didn’t know these new generation cards are dependent on temps. I’m on my 4th or 5th Strix OC thinking there was something wrong with my card as it kept decreasing clocks as temps went up. 

The current card I have can overclock to +175 on core and +1450 on memory, tops out at 60C - 62C and stays at 2100MHz or 2085MHz on air. Is this bad, mediocre, good or excellent?


----------



## skupples

kithylin said:


> Yes. Nvidia GPUs starting from the 1000 series -> 1600 series -> 2000 series all overclock based on temperatures above all else. They lose 10 to 15 MHz for every +10C, roughly. This cannot be disabled or avoided. So as your card gets hotter, it will automatically lose clocks no matter what you do. And higher ambient temperatures = hotter running temperatures for your card. Even if you are water cooled in a custom loop, ambient temperatures still affect your water temps. Custom water loops cannot cool below ambient, only down to ambient.


For every 10+ over like 55, right?



ntuason said:


> Damn didn’t know these new generation cards are dependent on temps. I’m on my 4th or 5th Strix OC thinking there was something wrong with my card as it kept decreasing clocks as temps went up.
> 
> The current card I have can overclock to +175 on core and +1450 on memory, tops out at 60C - 62C and stays at 2100MHz or 2085MHz on air. Is this bad, mediocre, good or excellent?



This has been the case since Kepler; it's just even more so now.


----------



## kithylin

skupples said:


> For every 10+ over like 55, right?


All Nvidia GPUs from the 1000 -> 2000 series start dropping clocks at 35C core temp and every 10C thereafter. The only way to avoid losing clock speed from temperatures is to keep the card below 35C during gaming / load at all times. This is why I and everyone else keep trying to remind folks to reduce voltage when overclocking. Less voltage = less heat = higher clocks.
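A toy model of those temperature bins, using the rough figures quoted in this thread (a first drop around 35C and roughly 15 MHz per 10C after that; actual thresholds and step sizes vary by card and BIOS, so treat this as illustration, not spec):

```python
# Hedged toy model of GPU Boost temperature bins: the card sheds a fixed
# clock step each time core temperature crosses another threshold.
# Thresholds/steps below are the rough figures from this thread, not a spec.

def boost_clock(cool_clock_mhz, core_temp_c, threshold_c=35, step_c=10, drop_mhz=15):
    """Estimated sustained clock after temperature-based bin drops."""
    if core_temp_c < threshold_c:
        return cool_clock_mhz
    bins = (core_temp_c - threshold_c) // step_c + 1   # first bin drops at the threshold
    return cool_clock_mhz - bins * drop_mhz

for temp in (30, 35, 45, 62):
    print(temp, boost_clock(2100, temp))
```

On this model a 10-15C swing in running temperature costs one or two bins, which lines up with the 15-30 MHz winter-to-summer losses described above.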


----------



## skupples

Thanks for the #.

Makes sense. Weird place we're at, undervolting FTW.

That's funny, the damn thing essentially throttles @ ambient for folks like me in the sub/actual tropics.

Even funnier that it's 20C above idle temp on most models.


I suppose this means I should actually research blocks first. I was just gonna toss a kryographics + backplate at it.


----------



## J7SC

I am not 100% certain, but the first temp-related GPU MHz drop might actually occur in the mid/high 20s C range... but at the end of the day, that doesn't change the recommendation that the colder, the better (both via the cooling setup, and less heat generation in the first place) for the 2080 Ti & co.


----------



## AndrejB

2070/7500 (not pushed yet) @ 0.950 V, 270 W max, 55C (in Apex).

Dropped temps by 10C; these cards really like low power.
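That result lines up with first-order CMOS scaling: dynamic power goes roughly as frequency times voltage squared, so a modest undervolt sheds watts quickly. A hedged back-of-envelope; the reference clock and voltage are illustrative, and real cards also have static/leakage power this ignores:

```python
# First-order dynamic power scaling: P ~ f * V^2 (hedged approximation;
# leakage and memory power are ignored). Reference point is illustrative.

def relative_power(f_mhz, v, f_ref=1950, v_ref=1.043):
    """Power relative to a reference clock/voltage operating point."""
    return (f_mhz / f_ref) * (v / v_ref) ** 2

# A ~2% clock sacrifice at 0.950 V vs a stock-ish 1950 MHz @ 1.043 V:
print(round(relative_power(1905, 0.950), 2))
```

That works out to about 0.81, i.e. nearly a fifth of the dynamic power saved for a ~2% clock loss, which is consistent with the ~10C temperature drops reported above.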


----------



## schoolofmonkey

Pauliesss said:


> Yeah, but still, it's strange, I saw people using DX12 with no crashes with the same drivers, I think this may be GPU related.


OK, so it's not just me; I'm having nothing but trouble with RDR 2 on my ROG RTX 2080 Ti Strix.
With the "Game Ready" 441.12 drivers I can't even get through a benchmark without a hang using DirectX 12. With Vulkan I can, but I still get hangs during gameplay; I can alt-tab out into Windows fine.

Tried driver version 441.08: got through the benchmark with DirectX 12, 2 hours of gameplay, hang. Same with Vulkan.

Now I'm testing 436.51: better benchmark score, 3 hours of gameplay with DirectX 12, no hangs so far (had to stop playing as I needed to do some stuff).

With all tests I had to increase the power limit to 125%.

Even the logs were all over the place, from "Display driver nvlddmkm stopped responding and has successfully recovered" to "The program RDR2.exe version 1.0.1207.60 stopped interacting with Windows and was closed. To see if more information about the problem is available, check the problem history in the Security and Maintenance control panel.".

But I've never had a single problem with any other game. Recently played The Outer Worlds, Borderlands 3, Metro Exodus with RTX on, hammered the GPU over and over again with the Time Spy Extreme stress test, not a single problem with any of it.

At first I thought my vertical GPU mount was at fault; took it out, didn't stop the RDR 2 issues. Then I started to think like you that there's a GPU problem, but again it doesn't fail or crash in any other game with any driver I use. It has to be RDR 2; GTA V works perfectly.

Yes, some people aren't having trouble at all.

I feel like I'm troubleshooting a problem that isn't really my problem at all...


----------



## VPII

@J7SC and @jura11 and @kithylin I think you may have misunderstood my question. I do not mean speed drops during bench runs due to temps; I know that I lose my first 15 MHz at 41C and another 15 MHz at 45C. What I mean is actual stable clocks when running benchmarks. I was able to bench at 2160 sometimes, and 2145 MHz ran TS and TSE no problem, but right now it will only run 2115 MHz in Time Spy, while FS is all good up to 2145 MHz. Anything higher than 2115 MHz in TS will crash during or after the first game test.


----------



## Pauliesss

schoolofmonkey said:


> Ok so it's not just me, I'm having nothing but trouble with RDR 2 on my ROG RTX 2080ti Strix.
> The "Game Ready" drivers 441.12 I can't even get through a benchmark without a hang using directx 12, Vulkan I can, but I still get hangs through game play, I can alt-tab out into Windows fine.
> 
> Tried driver version 441.08, got through the benchmark with directx 12, 2 hours of game play, hang, same with Vulkan.
> 
> Now I'm testing 436.51, better benchmark score, 3 hours of game play with directx 12, no hangs so far (had to stop playing as I needed to do some stuff).
> 
> With all tests I had to increase power limit to 125%.
> 
> Even logs, were all over the place, from "Display driver nvlddmkm stopped responding and has successfully recovered" to "The program RDR2.exe version 1.0.1207.60 stopped interacting with Windows and was closed. To see if more information about the problem is available, check the problem history in the Security and Maintenance control panel.".
> 
> But I've never had a single problem with any other game, recently played The Outer Worlds, Borderlands 3, Metro Exodus RTX On, hammered the GPU over and over again with Timespy Extreme stress test, not a single problem with any of it.
> 
> At first I thought my vertical GPU mount was at fault, took it out, didn't stop the RDR 2 issues, then started to think like you there's a GPU problem, but again doesn't fail or crash in any other game with any driver I use, it has to be RDR 2, GTA V works perfectly.
> 
> Yes some people aren't having trouble at all.
> 
> I feel like I'm trouble shooting a problem that isn't really my problem at all...


I don't know what to do, to be honest. Played for 1 hour, perhaps a bit more, and crashed again; same error in the event log.

I even tried setting my fan manually to 85% to keep temps under 80°C, but it crashed anyway, so it is not related to that.

My PSU is really old, almost 7 years (850 W); could be that as well, I don't know really...


----------



## keikei

Pauliesss said:


> I don't know what to do, to be honest. Played for 1 hour, perhaps a bit more, and crashed again, same error in the event log.
> 
> I even tried to put my fan manually to 85% to keep temps under 80°C, but it crashed anyway, so it is not related to that.
> 
> My PSU is really old, 850W, almost 7 years old, could be that as well, I don't know really...


Are you playing DX12? TD2 is super unstable in it, from my experience.


----------



## Pauliesss

keikei said:


> You are playing DX12? TD2 is super unstable while playing it from my experience.


Yes, but the same crash and error happens in TD2 and RDR2 (in DX12). The only difference is that in TD2, it can happen after 2 or more hours, while in RDR2 it usually happens after 1 hour or so.

There is either something wrong with DX12 implementation in both games, or my PSU / GPU is bad. Really hard to tell, I would need another 2080Ti and/or PSU to test that.

Error message is "Display driver nvlddmkm stopped responding and has successfully recovered." with Event ID 4101.
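For anyone tracking these crashes, a small sketch that counts nvlddmkm TDR recoveries in a plain-text export of the System event log (export via Event Viewer; the sample text below is made up for illustration, and file handling is omitted):

```python
# Count "display driver stopped responding" (TDR) recoveries in an exported
# Windows System log. The sample string below is illustrative, not real data.
import re

TDR_PATTERN = re.compile(r"nvlddmkm stopped responding", re.IGNORECASE)

def count_tdrs(log_text):
    """Number of nvlddmkm TDR recovery messages in the log text."""
    return len(TDR_PATTERN.findall(log_text))

sample = """\
Display driver nvlddmkm stopped responding and has successfully recovered.
The program RDR2.exe version 1.0.1207.60 stopped interacting with Windows and was closed.
Display driver nvlddmkm stopped responding and has successfully recovered.
"""

print(count_tdrs(sample))  # 2 recoveries in the sample
```

Tallying the count per gaming session against the driver version in use would make this kind of A/B driver comparison concrete instead of anecdotal.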


----------



## keikei

Try a separate display port.


----------



## Pauliesss

Tried a separate DisplayPort, and did chkdsk and memory diagnostics with no issues; crashed after 20 minutes, same error.

Is there any way to check if the power supply may be the issue? Like it cannot provide enough power after some time, causing the crash?


----------



## kithylin

VPII said:


> @J7SC and @jura11 and @kithylin I think you may have misunderstood my question. I do not mean speed drops during bench runs due to temps; I know that I lose my first 15 MHz at 41C and another 15 MHz at 45C. What I mean is actual stable clocks when running benchmarks. I was able to bench at 2160 sometimes, and 2145 MHz ran TS and TSE no problem, but right now it will only run 2115 MHz in Time Spy, while FS is all good up to 2145 MHz. Anything higher than 2115 MHz in TS will crash during or after the first game test.


I think we did understand you. Your maximum stable overclock can and will change depending on ambient temperatures and how hot your card runs. You're trying to push it up to your previous stable winter clocks to compensate for the clocks lost to hotter summer temps, and the card simply isn't capable of that.


----------



## schoolofmonkey

Pauliesss said:


> Tried a separate display port, and did chkdsk and memory diagnostics with no issues, crashed after 20 minutes, same error.
> 
> Is there any way to check if the power supply may be an issue? Like it cannot provide enough power after some time and therefore causing to crash?


And yet I'm getting the exact same thing: brand-new power supply, 9900K, RAM, MB, 2080 Ti, all 3 months old, and every game works perfectly.
I even reinstalled 441.12, and guess what: I got 4 benchmark runs and 2 hours of RDR 2 gaming, quit, fired it back up, ran a benchmark, hard lock. I didn't do anything but quit and then reload the game.
But this time there is nothing in the event log or Security and Maintenance about any crash like previously; it has to be the game.

Under Documents\Rockstar Games\Red Dead Redemption 2\Settings there are a few PipelineCacheWindows files. When I deleted them everything was fine until I fired up the game a second time. I do have my Documents mapped to a mechanical drive.

The other thing I noticed: Fullscreen never sticks with this game, it always ends back at Borderless Window.

I am at the point where I might try a clean Windows 10 install like I've seen a few others try, because I'm out of ideas. It's not the GPU; it can't be, since more demanding games have no problems.


----------



## Pauliesss

schoolofmonkey said:


> And yet I'm getting the exact same thing, brand new power supply, 9900k, Ram, MB, 2080ti all 3 months old, every game works perfectly.
> I even reinstalled 441.12 and guess what I got 4 benchmark runs and 2 hours of RDR 2 gaming, quit, fired it back up ran a benchmark hard lock, I didn't do anything but quit then reload the game.
> But this time there is nothing in the event log or Security and Maintenance about any crash like previously, it has to be the game.
> 
> Under Documents\Rockstar Games\Red Dead Redemption 2\Settings there is a few PipelineCacheWindows files, when I deleted them everything was fine until I fired up the game a second time, I do have my documents mapped to a mechanical drive.
> 
> The other thing I noticed Fullscreen never stick with the game, it always ends back with Borderless Window.
> 
> I am at the point I might try a clean Windows 10 install like I've seen a few others try, because I'm out of ideas as it's not the GPU, it can't be more demanding games have no problems.


Right now I cannot do a clean Windows 10 installation, for various reasons, however, if you are willing to try that and report back, it would be great.

I'm also thinking about contacting Nvidia directly, but not sure if it's worth the time.


----------



## keikei

You guys try reverting everything back to stock settings?


----------



## schoolofmonkey

keikei said:


> You guys try reverting everything back to stock settings?


My card is stock, I even removed my vertical GPU mount.

I'm running a different test: since my last post I've been running the benchmark over and over with G-Sync off, not a single crash. Will now try it with G-Sync.

The odd thing about the benchmark, for anyone thinking it's accurate: it's not. Things change; Arthur's clothes change to what you're wearing in that part of the game, sometimes the horses don't die or the camera doesn't rotate where it should, so don't rely on those scores for performance rating. So far runs can be 10-30 fps different.

Edit Update:
G-Sync on: 2 runs, hard lock at the exact same place as last night's hard lock with G-Sync on.
No recorded Event Log error; that only happens if I use 441.08, with 441.12 it never records one.

And yet it was fine all day yesterday using 436.51 with G-Sync on.


----------



## Pauliesss

That's not a bad idea at all; it seems G-Sync does not work with Vulkan, and that may be the reason why I'm not crashing when I switch from DX12 to Vulkan.

I will try this as well tomorrow, disable G-Sync, but keep DX12, and see what happens.


----------



## schoolofmonkey

Pauliesss said:


> That's not a bad idea at all, it seems G-Sync does not work with Vulkan, that may be the reason why I'm not crashing when I switch from DX12 to Vulkan.
> 
> I will try this as well tomorrow, disable G-Sync, but keep DX12, and see what happens.


Just posted a update on my last post.


----------



## schoolofmonkey

OK: clean Windows install, all updated, latest drivers, Rockstar launcher and game installed.
441.12 DCH, G-Sync on, DirectX 12: froze at the same spot in the benchmark.


----------



## J7SC

schoolofmonkey said:


> OK: clean Windows install, all updated, latest drivers, Rockstar launcher and game installed.
> 441.12 DCH, G-Sync on, DirectX 12: froze at the same spot in the benchmark.


 
...just an idea: 441.12 was released with some medium- and high-severity security bug fixes... I wonder if that driver has to go back in the oven, or if there's a potential conflict on your specific setup with other security software.


----------



## schoolofmonkey

J7SC said:


> ...just an idea: 441.12 was released with some medium and high security bug fixes...I wonder if that driver has to go back in the oven, or if there's a potential conflict on your specific setup re. other security software


I don't know. I do know it's a completely new Windows 10 install; the system is a 9900K, Z390 Apex, 32GB (2x16) 3200 MHz Trident Z RAM, nothing out of the ordinary.

It's just so odd that this is the only game I'm having this problem with. Nothing in the last few months had any issues, and I even went back and tested some; it's just RDR 2. If it was a faulty GPU I'd surely be having this problem in something else.

Oh, I do get the same lockup with Vulkan, it just takes longer to happen.

Seems there's a reddit thread, so it's not just us.

https://www.reddit.com/r/reddeadred..._source=amp&utm_medium=&utm_content=post_body


----------



## Pauliesss

schoolofmonkey said:


> I don't know, I do know it's a complete new Windows 10 install, system is a 9900k, z390 Apex, 32GB (2x16) 3200Mhz Trident z ram, nothing out of the ordinary.
> 
> It's just so odd that this is the only game I'm having this problem with, nothing in the last few months had any issues, I even went back and tested some, it's just RDR 2, if it was a faulty GPU I'd surely be having this problem in something else.
> 
> Oh I do get the same lockup with Vulkan but it just takes longer to happen.
> 
> Seems there's a reddit thread, so it's not just us.
> 
> https://www.reddit.com/r/reddeadred..._source=amp&utm_medium=&utm_content=post_body


Can you try to play without G-Sync again and see what happens?

I will do the same today.

Edit: I wish we could try our GPUs in a different system, or a different GPU in our system, but I don't know anyone who has a 2080 Ti, nor anyone whose build could even fit mine.


----------



## schoolofmonkey

Pauliesss said:


> Can you try to play without G-Sync again and see what happens?
> 
> I will do the same today.
> 
> Edit: I wish we could try our GPUs in a different system, or use a different GPU in our system, but I don't know anyone who has 2080Ti, nor anyone who could even fit my 2080Ti.


Well my day has been this, trying to fix the problem.

Clean Windows 10 install, installed updates, motherboard drivers, the 441.12 Nvidia DCH driver and the Rockstar launcher, on top of a BIOS update; system running at defaults with no overclock.

After the first setup, a hard lock on the first benchmark run with DirectX 12; that was at 12:59pm.

Swapped out my RAM with my wife's machine's; that actually let me benchmark for over 2 hours. Then, just when I thought it was good, a hard lock at 3:10pm, but you can still alt-tab out; you just need to sign out to get the damn game off the screen.
Why, after such an extended period, would it hard lock? If there were a real fault it would just happen straight away.

It's certainly not a hardware issue; it's something with this bloody DRM they are using. I tested my RAM in the other machine, no fault; I've tested my GPU in the other machine, still fine; the CPU is stable, and that's been hammered ever since I dialed in the original overclock.
But then there's the fact that I've been gaming perfectly on this machine since I bought it in July, with a lot more demanding titles, so what the hell is going on with RDR 2?

This is the error you get in the event log, when it records one; most of the time Windows doesn't even notice a problem, which again is odd.

Faulting application name: RDR2.exe, version: 1.0.1207.60, time stamp: 0x5dc2dfaf
Faulting module name: RDR2.exe, version: 1.0.1207.60, time stamp: 0x5dc2dfaf
Exception code: 0xc0000005
Fault offset: 0x0000000002b4f61a
Faulting process id: 0x1994
Faulting application start time: 0x01d596a8d7e98c56
Faulting application path: E:\Program Files\Rockstar Games\Red Dead Redemption 2\RDR2.exe
Faulting module path: E:\Program Files\Rockstar Games\Red Dead Redemption 2\RDR2.exe
Report Id: 6eb68a03-defb-421d-a37a-01f060edea7e
Faulting package full name:
Faulting package-relative application ID:

It's either the DRM or the game, because I just rolled back the driver to 441.08 and I'm now getting these event logs:

Under System:
Display driver nvlddmkm stopped responding and has successfully recovered.

Under Application:
Faulting application name: RDR2.exe, version: 1.0.1207.60, time stamp: 0x5dc2dfaf
Faulting module name: RDR2.exe, version: 1.0.1207.60, time stamp: 0x5dc2dfaf
Exception code: 0xc0000005
Fault offset: 0x0000000002b4f61a
Faulting process id: 0x25f8
Faulting application start time: 0x01d596c1ed34db4d
Faulting application path: E:\Program Files\Rockstar Games\Red Dead Redemption 2\RDR2.exe
Faulting module path: E:\Program Files\Rockstar Games\Red Dead Redemption 2\RDR2.exe
Report Id: d8291739-13c9-4fa5-a5da-2065ffcc7e1d
Faulting package full name:
Faulting package-relative application ID:

This was on a fresh driver install with the first benchmark run: it hung for about 5 seconds, then crashed to desktop, and I even got a Rockstar Launcher error:
Red Dead Redemption 2 exited unexpectedly.
Please click Retry below to enter the game again, or click Safe Mode to launch the game with reduced graphics settings.

So errors on both drivers; one just handles the problem less aggressively.

This is all with DirectX 12, G-Sync on or off (off just takes longer to hang). Testing Vulkan now.
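Side note for anyone following along: the event-log entries above are easy to compare across runs with a few lines of Python. This is just a rough sketch; the two report strings below are trimmed copies of the entries quoted earlier in this post.

```python
import re

# Trimmed copies of the two Application-log entries quoted above.
report_a = """Faulting application name: RDR2.exe, version: 1.0.1207.60
Exception code: 0xc0000005
Fault offset: 0x0000000002b4f61a
Faulting process id: 0x1994"""

report_b = """Faulting application name: RDR2.exe, version: 1.0.1207.60
Exception code: 0xc0000005
Fault offset: 0x0000000002b4f61a
Faulting process id: 0x25f8"""

def parse(report):
    # Pull "key: value" pairs out of a Windows Error Reporting text block.
    return dict(re.findall(r"^(.+?): (.+)$", report, re.MULTILINE))

a, b = parse(report_a), parse(report_b)

# The same exception code and the same fault offset across separate runs
# point at one repeatable crash site in the game binary, which fits the
# "not random hardware corruption" conclusion.
print(a["Exception code"] == b["Exception code"])   # True
print(a["Fault offset"] == b["Fault offset"])       # True
```

If the fault offset were jumping around between crashes, flaky hardware would be a much stronger suspect.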


----------



## Kaltenbrunner

I'd almost forgotten there actually was a real 2080 Ti, and now the 2080 Super is priced around what a 2080 was last Christmas.

So when's the next gen expected? Maybe next summer?


----------



## Pauliesss

@schoolofmonkey Just woke up, disabled G-Sync while keeping DX12 (not Vulkan yet), and it crashed in about 30 minutes.

However, with G-Sync disabled, the error in the Event Viewer is different now:
Application RDR2.exe has been blocked from accessing Graphics hardware.
Event ID: 4109

Application log:
Faulting application name: RDR2.exe, version: 1.0.1207.60, time stamp: 0x5dc2dfaf
Faulting module name: RDR2.exe, version: 1.0.1207.60, time stamp: 0x5dc2dfaf
Exception code: 0xc0000005
Fault offset: 0x0000000002b4f61a
Faulting process id: 0x188c
Faulting application start time: 0x01d596daca4a802e
Faulting application path: C:\Program Files\Rockstar Games\Red Dead Redemption 2\RDR2.exe
Faulting module path: C:\Program Files\Rockstar Games\Red Dead Redemption 2\RDR2.exe
Report Id: 4c0e8270-884a-43d9-838c-7d167d9b56e5
Faulting package full name: 
Faulting package-relative application ID: 

...what the hell is going on.


----------



## Sheyster

schoolofmonkey said:


> Well my day has been this, trying to fix the problem.


Are you overclocking the RAM? If yes, try 3200 with default timings.


----------



## Brodda-Syd

Pauliesss said:


> I don't know what to do, to be honest. Played for 1 hour, perhaps a bit more, and crashed again, same error in the event log.
> 
> I even tried to put my fan manually to 85% to keep temps under 80°C, but it crashed anyway, so it is not related to that.
> 
> My PSU is really old, 850W, almost 7 years old, could be that as well, I don't know really...


If your power supply weren't up to it, you would have problems with other things, not just one game.
I have an OCZ ZX850W Gold that's about 7-8 years old, and it handles two factory-overclocked RTX 2080 Tis in SLI and 8 disk drives without any problems.


----------



## schoolofmonkey

Sheyster said:


> schoolofmonkey said:
> 
> 
> 
> Well my day has been this, trying to fix the problem.
> 
> 
> 
> Are you overclocking the RAM? If yes, try 3200 with default timings.

Nope, everything is stock; I even defaulted my 5GHz overclock.


----------



## Pauliesss

Alright, the next thing I tried was bringing the core clock and memory clock down by 200 MHz on the GPU, just to see what happens (G-Sync is still off).

I was able to play for more than 2 hours, and then it crashed again, same error as I mentioned on the previous page.


----------



## skupples

RDR2 defaults to Vulkan for a reason. It looks slightly better and runs far better, at least on the bigger cards, it seems. Folks on the lower end are reporting the opposite; who knows, though. Folks are still going off day-one data, two patches later.


----------



## schoolofmonkey

Pauliesss said:


> Alright, the next thing I tried was bringing the core clock and memory clock down by 200 MHz on the GPU, just to see what happens (G-Sync is still off).
> 
> I was able to play for more than 2 hours, and then it crashed again, same error as I mentioned on the previous page.


Well I got 5 hours out of Vulkan before I went to bed; I got 5–30 minutes out of DirectX 12.

The downside to Vulkan is the minor stuttering, but it's more stable on my 2080ti for now.

I can bet anything there's a memory leak, because I ran the benchmark for 5 hours continuously last night and the longer it ran the lower the frame rates got. It never crashed with Vulkan; just the frame rates dropped.
First-hour lows were a repeatable 45–50fps; after that they dropped as low as 20fps. I never quit the game.
When I did quit and restart the benchmark, the lows went back to 45–50fps.

I pretty much tore my machine apart making sure the hardware isn't faulty for this game; it's not, btw.

Edit:
This game is whacked. I've been running the DirectX 12 benchmark again to send the crash dumps to Nvidia and guess what, it's not crashing like before; I haven't changed anything besides the API.


----------



## keikei

skupples said:


> RDR2 defaults to Vulkan for a reason. It looks slightly better and runs far better, at least on the bigger cards, it seems. Folks on the lower end are reporting the opposite; who knows, though. Folks are still going off day-one data, two patches later.



I suspect that if it does come out, Green will do a repeat of the 2080S: mainly bump up the RT cores while leaving the CUDA counts alone. Possibly faster memory as well. There is still time for a holiday release. Its gud to be king.


----------



## Pauliesss

schoolofmonkey said:


> Well I got 5 hours out of Vulkan before I went to bed; I got 5–30 minutes out of DirectX 12.
> 
> The downside to Vulkan is the minor stuttering, but it's more stable on my 2080ti for now.
> 
> I can bet anything there's a memory leak, because I ran the benchmark for 5 hours continuously last night and the longer it ran the lower the frame rates got. It never crashed with Vulkan; just the frame rates dropped.
> First-hour lows were a repeatable 45–50fps; after that they dropped as low as 20fps. I never quit the game.
> When I did quit and restart the benchmark, the lows went back to 45–50fps.
> 
> I pretty much tore my machine apart making sure the hardware isn't faulty for this game; it's not, btw.
> 
> Edit:
> This game is whacked. I've been running the DirectX 12 benchmark again to send the crash dumps to Nvidia and guess what, it's not crashing like before; I haven't changed anything besides the API.


Yep, Vulkan it is then. Played for 6 hours or so, no crash at all, but I have to agree, Vulkan does seem to stutter a bit; DX12 seems more fluid/responsive.

Keep me posted about how it goes with Nvidia, thanks.


----------



## schoolofmonkey

Pauliesss said:


> Yep, Vulkan it is then. Played for 6 hours or so, no crash at all, but I have to agree, Vulkan does seem to stutter a bit; DX12 seems more fluid/responsive.
> 
> Keep me posted about how it goes with Nvidia, thanks.


Joker's got you covered:


----------



## changboy

Guys, look at this one:


----------



## ahnafakeef

Can someone please help me set up G-Sync properly? I've set Monitor Technology to G-Sync Compatible in NVCP. Should I set V-Sync to On, Off, 3D Application, or Fast? I've tried both on and off, and I'm still getting tearing in movies played via VLC.

Please advise me as to what other settings I need to change, and what values I need to set them to. Thank you!


----------



## skupples

keikei said:


> I suspect that if it does come out, Green will do a repeat of the 2080S: mainly bump up the RT cores while leaving the CUDA counts alone. Possibly faster memory as well. There is still time for a holiday release. Its gud to be king.


It's possible, but it seems highly unlikely for NV to let any more cats out of the bag until Navi is competing with the 2080 and up. Christmas is around the bend; the rumor mill would already be spewing hot GPU vomit if something were coming in the next 5 weeks from either camp.


----------



## MrTOOSHORT

ahnafakeef said:


> Can someone please help me set up G-Sync properly? I've set Monitor Technology to G-Sync Compatible in NVCP. Should I set V-Sync to On, Off, 3D Application, or Fast? I've tried both on and off, and I'm still getting tearing in movies played via VLC.
> 
> Please advise me as to what other settings I need to change, and what values I need to set them to. Thank you!



Make sure the circled part is checked:


----------



## ahnafakeef

MrTOOSHORT said:


> Make sure the circled part is checked:


Thank you!

Should I select "Enable for full screen" or "Enable for full screen and windowed mode"?

Also, is it supposed to say "display not validated as G-Sync compatible" in that window?

And is there anything else I need to change?


----------



## skupples

You wouldn't even have the G-Sync tab if it wasn't actually supported on some level.

For example, my driver has no mention of G-Sync of any kind.


----------



## JustinThyme

MrTOOSHORT said:


> Make sure the circled part is checked:


Mine stops at the display; I don't have any of that bottom part. Works OK, although I seldom use it. I only turn G-Sync on when I experience tearing, which is pretty much never. Monitor is a G-Sync model, the 120Hz ASUS PG349Q. I don't play many games that don't support SLI, so my twin 2080 Tis can pretty much pump out frames faster than the 120Hz refresh with maxed-out settings in pretty much anything.


----------



## ahnafakeef

skupples said:


> you wouldn't even have the G-sync tab if it wasn't actually supported on some level.
> 
> for example - my driver has zero mention of G-sync of any kind.


You're right. The G-Sync options weren't there before. 


JustinThyme said:


> Mine stops at the display; I don't have any of that bottom part. Works OK, although I seldom use it. I only turn G-Sync on when I experience tearing, which is pretty much never. Monitor is a G-Sync model, the 120Hz ASUS PG349Q. I don't play many games that don't support SLI, so my twin 2080 Tis can pretty much pump out frames faster than the 120Hz refresh with maxed-out settings in pretty much anything.


SLI 2080Tis will probably crush any standard resolution/refresh rate combo. I'm playing at "only" 4K 60Hz, and even then I could have used a second one just for RTX since that's the one setting that seems to quite literally halve the FPS. 

My only hope with G-Sync was that it would make 40FPS as playable as 60FPS so that I could enable RTX at least at high if not ultra. But even high is dropping the FPS below 40, which is when it becomes distinctly noticeable since G-Sync doesn't work for me below 40FPS. To make matters worse, Tomb Raider doesn't seem to like even the slightest overclock (+100 core, +500 mem) for me, despite the same overclock being stable in Exodus and Odyssey. 

Shadow of the Tomb Raider looks absolutely glorious on a 4K OLED with HDR and RTX though. So much so that I'm still enjoying the scenery at a sluggish ~30FPS. 

Not even thinking about RDR2 at this point. Probably just gonna wait till I get SLI Amperes in my system so I can ultra everything at 4K 120Hz.


----------



## J7SC

I OC'ed my 40-inch 4K Philips monitor to 75 Hz at 4K on dual 2080 Tis via DisplayPort... Not the ultimate gaming setup, but it sure looks and plays nice (perhaps because of advancing age and related eyesight :lachen: )


----------



## Sketchus

Sorry if this is covered elsewhere, but I've searched fairly thoroughly and can't find anything. My Gaming X Trio can't be flashed to the 406W BIOS because of that XUSB error. I guess my card shipped with an updated version.

Is there anything I can do? Or even an alternative?

Thanks


----------



## Chobbit

Nice, how did you do that?

I have a Philips 43-inch 4K IPS monitor; would it work on that?



J7SC said:


> I OC'ed my 40-inch 4K Philips monitor to 75 Hz at 4K on dual 2080 Tis via DisplayPort... Not the ultimate gaming setup, but it sure looks and plays nice (perhaps because of advancing age and related eyesight :lachen: )


----------



## Chobbit

Hey so I've finally got my new PC an upgrade from my older 1080ti/5960x setup and I'm more than happy with it:

Thermaltake tower 900
i9 9060x
ROG Rampage VI Extreme Omega board
2 x Sea Hawk 2080ti's
128GB 3000MHz Corsair RGB RAM
2 Custom loops (1 for CPU & 1 for GPU) with 560 radiators 

I've got a couple of questions though, 

1. What's a decent overclock on the core and memory? 
When I pushed the Power Limit up to 110% and began overclocking the cards with Afterburner, they didn't go over 50 degrees C. However, I found that an extra +125 on the core was about the max; the next step, +150, only worked if I increased pump and fan speed for some reason, otherwise I got a driver stop or a bench stop.
So I just left it at +125 and moved on to memory overclocking, where I went in steps of 100. However, even though I haven't found my memory overclock limit, I've noticed the more memory speed I add, the more the core is sometimes throttled back, in steps of about 15MHz. It got to the point where +600 on the memory gave me a lower score than +500, and +700 gave me just about the same score as +500.
Is this normal, and are my ideal clocks basically +125/+500?

2. Has anyone else found that an SLI setup works great but the second card doesn't seem to output?
I can't plug my monitor or HTC Vive into the second card and get any display signal, but it works fine on the first card. I've never used NVLink before, so I don't know if this is an issue or if there's something I need to change in the BIOS.


Cheers


----------



## ntuason

Chobbit said:


> Hey so I've finally got my new PC an upgrade from my older 1080ti/5960x setup and I'm more than happy with it:
> 
> Thermaltake tower 900
> i9 9060x
> ROG Rampage VI Extreme Omega board
> 2 x Sea Hawk 2080ti's
> 128GB 3000MHz Corsair RGB RAM
> 2 Custom loops (1 for CPU & 1 for GPU) with 560 radiators
> 
> I've got a couple of questions though,
> 
> 1. What's a decent overclock on the core and memory?
> As when I've pushed the Power Limit up to 110% & then began overclocking the cards with afterburner they don't go over 50 degrees C, however I found that an extra +125 on the core was kind of a current max as the next step +150 only worked if I increased pump and fan speed for some reason otherwise I had a driver stop or bench stop.
> So I just left it at +125 and moved on to memory overclocking, where I went in steps of 100. However, even though I haven't found my memory overclock limit, I've noticed the more memory speed I add, the more the core is sometimes throttled back, in steps of about 15MHz. It got to the point where +600 on the memory gave me a lower score than +500, and +700 gave me just about the same score as +500.
> Is this normal and is my ideal clocks basically +125/+500?


I'd say at least +150 on core, +1100 on memory. But what matters is your actual core clock and where it stays under load. At 50C my card stays at 2130MHz and just slows down as temps increase; at 60C it stays at 2100.
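For what it's worth, here's the back-of-envelope bandwidth math behind those memory offsets, as a quick Python sketch. One assumption to flag: I'm treating a +500 Afterburner offset as +1000 MT/s effective (Afterburner's GDDR6 readout is typically half the effective data rate); check how your tool reports clocks before trusting the exact numbers.

```python
# Rough check on what a memory offset buys you in bandwidth on a 2080 Ti.
# Stock GDDR6: 14000 MT/s effective on a 352-bit bus (616 GB/s per spec).
def bandwidth_gbs(data_rate_mts, bus_bits):
    # MT/s * bus width in bits / 8 bits-per-byte -> MB/s, then -> GB/s
    return data_rate_mts * bus_bits / 8 / 1000

stock = bandwidth_gbs(14000, 352)            # matches the 616 GB/s spec sheet
# Assumption: +500 in Afterburner ~= +1000 MT/s effective for GDDR6.
plus_500 = bandwidth_gbs(14000 + 1000, 352)

print(stock, plus_500)   # 616.0 660.0
```

So +500 is roughly a 7% bandwidth bump on paper; the core throttling back in ~15MHz steps, as described above, is why the benchmark score can stop scaling (or regress) past a certain offset.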


----------



## JustinThyme

ahnafakeef said:


> You're right. The G-Sync options weren't there before.
> 
> SLI 2080Tis will probably crush any standard resolution/refresh rate combo. I'm playing at "only" 4K 60Hz, and even then I could have used a second one just for RTX since that's the one setting that seems to quite literally halve the FPS.
> 
> My only hope with G-Sync was that it would make 40FPS as playable as 60FPS so that I could enable RTX at least at high if not ultra. But even high is dropping the FPS below 40, which is when it becomes distinctly noticeable since G-Sync doesn't work for me below 40FPS. To make matters worse, Tomb Raider doesn't seem to like even the slightest overclock (+100 core, +500 mem) for me, despite the same overclock being stable in Exodus and Odyssey.
> 
> Shadow of the Tomb Raider looks absolutely glorious on a 4K OLED with HDR and RTX though. So much so that I'm still enjoying the scenery at a sluggish ~30FPS.
> 
> Not even thinking about RDR2 at this point. Probably just gonna wait till I get SLI Amperes in my system so I can ultra everything at 4K 120Hz.


G-Sync doesn't pump out frames faster. It slows the monitor's refresh rate down to what the GPU is putting out to prevent tearing, but the sad part is that the communication between monitor and GPU takes clock cycles and may actually cost you a few FPS.




Chobbit said:


> Hey so I've finally got my new PC an upgrade from my older 1080ti/5960x setup and I'm more than happy with it:
> 
> Thermaltake tower 900
> i9 9060x
> ROG Rampage VI Extreme Omega board
> 2 x Sea Hawk 2080ti's
> 128GB 3000MHz Corsair RGB RAM
> 2 Custom loops (1 for CPU & 1 for GPU) with 560 radiators
> 
> I've got a couple of questions though,
> 
> 1. What's a decent overclock on the core and memory?
> As when I've pushed the Power Limit up to 110% & then began overclocking the cards with afterburner they don't go over 50 degrees C, however I found that an extra +125 on the core was kind of a current max as the next step +150 only worked if I increased pump and fan speed for some reason otherwise I had a driver stop or bench stop.
> So I just left it at +125 and moved on to memory overclocking, where I went in steps of 100. However, even though I haven't found my memory overclock limit, I've noticed the more memory speed I add, the more the core is sometimes throttled back, in steps of about 15MHz. It got to the point where +600 on the memory gave me a lower score than +500, and +700 gave me just about the same score as +500.
> Is this normal and is my ideal clocks basically +125/+500?
> 
> 2. Has anyone else experienced that a SLI setup works great however the second card doesn't seem to output?
> I can't plug my monitor or HTC Vive into the second card and get any display signal but works fine on first card. I've never used NVLink before and I didn't know if this was an issue or there's something I may need to change in the BIOS?
> 
> 
> Cheers






ntuason said:


> I'd say at least +150 on core, +1100 on memory. But what matters is your actual core clock and where it stays under load. At 50C my card stays at 2130MHz and just slows down as temps increase; at 60C it stays at 2100.


First thing is to max out the power limit on the GPU, and be a bit more specific about what you are trying to OC and what you have for cooling... All I see reference to is a case and a pump, which is an awfully big window.


----------



## ahnafakeef

JustinThyme said:


> G-Sync doesn't pump out frames faster. It slows the monitor's refresh rate down to what the GPU is putting out to prevent tearing, but the sad part is that the communication between monitor and GPU takes clock cycles and may actually cost you a few FPS.


So it's not supposed to make 40FPS G-Sync feel smoother than 40FPS without G-Sync?

Also, my G-Sync compatible screen has a G-Sync range of 40-60FPS at 4K. Is this range the same with all G-Sync screens, or do gaming monitors with a G-Sync module support a bigger range of frame rates with G-Sync?


----------



## J7SC

Chobbit said:


> Nice how did you do that?
> 
> I have a Phillips 43 inch 4k IPS monitor would it work on that?


 
...here you go, per the vid below. For now I have stopped at 75 Hz (4K via DisplayPort), though I have yet to run into any test issues, so I might be able to go a bit higher, maybe...


----------



## JustinThyme

ahnafakeef said:


> So it's not supposed to make 40FPS G-Sync feel smoother than 40FPS without G-Sync?
> 
> Also, my G-Sync compatible screen has a G-Sync range of 40-60FPS at 4K. Is this range the same with all G-Sync screens, or do gaming monitors with a G-Sync module support a bigger range of frame rates with G-Sync?


The concept is to slow the monitor's refresh rate down to match the output of the GPU to prevent tearing, which happens when your display repeats part of what has already been processed. Say your GPU is only putting out 40FPS and your monitor is at 100Hz: the refresh will be happening more than twice as fast as the frames the GPU is putting out. It also works vice versa if your GPU is pumping out 100FPS and you have a 60Hz refresh rate. The idea is to match them up. Below is a random sample I grabbed off the net that shows tearing.

Don't know what G-Sync Compatible means; sounds like marketing verbiage. Either it has G-Sync, which comes at a premium, or it doesn't. It's an added two-way communication between the GPU and monitor to sync the frame rate to the refresh rate. It's not meant to give you extra frames; it normally does the exact opposite and costs a few frames of performance, as it's just another process going on behind the scenes.
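To put rough numbers on the mismatch being described, here's a toy sketch; the 40 FPS / 100 Hz figures are just the example from above, not anything measured.

```python
# How many display refreshes elapse per rendered frame when the GPU's
# frame rate and the panel's refresh rate don't match.
def refreshes_per_frame(fps, refresh_hz):
    frame_time_ms = 1000 / fps           # how long one rendered frame lasts
    refresh_time_ms = 1000 / refresh_hz  # how often the panel scans out
    return frame_time_ms / refresh_time_ms

# GPU at 40 FPS on a fixed 100 Hz panel: each frame is scanned out 2.5
# times, so some scans begin mid-frame-update, which shows up as tear lines.
print(refreshes_per_frame(40, 100))   # 2.5

# With VRR (G-Sync / FreeSync) the panel waits for the GPU instead, so the
# ratio is pinned to 1.0 while fps stays inside the monitor's VRR range.
print(refreshes_per_frame(40, 40))    # 1.0
```

Same arithmetic in the other direction: 100 FPS on a 60 Hz panel gives 0.6 refreshes per frame, i.e. multiple frames per scan-out, which also tears.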


----------



## Nastya

I flashed a BIOS with a higher power limit onto my card (same vendor and PCB) and now OC Scanner won't work. I've already reinstalled the drivers.
Did the flash go wrong?


----------



## mattxx88

AndiWandi said:


> I flashed a higher power limit BIOS onto my card (same vendor, PCB) and now OC scanner won't work. Reinstalled drivers already.
> Did the flash go wrong?


Did you follow the first-page guide? It's missing one bit of info: you have to disable the card in Device Manager before flashing it.
https://www.overclockersclub.com/guides/how_to_flash_rtx_bios/


----------



## Nastya

I flashed once with the device disabled beforehand, and once without. It didn't work with either method.


----------



## ahnafakeef

JustinThyme said:


> The concept is to slow the monitor's refresh rate down to match the output of the GPU to prevent tearing, which happens when your display repeats part of what has already been processed. Say your GPU is only putting out 40FPS and your monitor is at 100Hz: the refresh will be happening more than twice as fast as the frames the GPU is putting out. It also works vice versa if your GPU is pumping out 100FPS and you have a 60Hz refresh rate. The idea is to match them up. Below is a random sample I grabbed off the net that shows tearing.
> 
> Don't know what G-Sync Compatible means; sounds like marketing verbiage. Either it has G-Sync, which comes at a premium, or it doesn't. It's an added two-way communication between the GPU and monitor to sync the frame rate to the refresh rate. It's not meant to give you extra frames; it normally does the exact opposite and costs a few frames of performance, as it's just another process going on behind the scenes.


Thank you for the explanation!

Compatible for my C9 basically means the HDMI Variable Refresh Rate on the TV is supported by Nvidia to function similarly to G-Sync when used with a compatible GPU on Windows.


----------



## J7SC

On G-Sync, Wikipedia has a decent write-up (below), and if implemented correctly, G-Sync (and for that matter FreeSync) can help with big swings in FPS within a specific game, and also between different games. I am wondering, though, whether setting 'prerendered frames' to 0 or 1 in the NV driver tab (now called Low Latency mode / 'Ultra') also plays a role here, i.e. re: tearing and double frames :headscrat


----------



## ahnafakeef

J7SC said:


> On G-Sync, Wikipedia has a decent write-up (below), and if implemented correctly, G-Sync (and for that matter FreeSync) can help with big swings in FPS within a specific game, and also between different games. I am wondering, though, whether setting 'prerendered frames' to 0 or 1 in the NV driver tab (now called Low Latency mode / 'Ultra') also plays a role here, i.e. re: tearing and double frames :headscrat


Given how long I have been green with envy of people using G-Sync since it came out, I really thought the kinks would have been worked out by now.

Is there a guide on how to set up G-Sync properly? I don't know why but I was expecting it to work by simply setting "V-Sync" to "G-Sync" in the game settings.


----------



## skupples

ahnafakeef said:


> You're right. The G-Sync options weren't there before.
> 
> SLI 2080Tis will probably crush any standard resolution/refresh rate combo. I'm playing at "only" 4K 60Hz, and even then I could have used a second one just for RTX since that's the one setting that seems to quite literally halve the FPS.
> 
> My only hope with G-Sync was that it would make 40FPS as playable as 60FPS so that I could enable RTX at least at high if not ultra. But even high is dropping the FPS below 40, which is when it becomes distinctly noticeable since G-Sync doesn't work for me below 40FPS. To make matters worse, Tomb Raider doesn't seem to like even the slightest overclock (+100 core, +500 mem) for me, despite the same overclock being stable in Exodus and Odyssey.
> 
> Shadow of the Tomb Raider looks absolutely glorious on a 4K OLED with HDR and RTX though. So much so that I'm still enjoying the scenery at a sluggish ~30FPS.
> 
> Not even thinking about RDR2 at this point. Probably just gonna wait till I get SLI Amperes in my system so I can ultra everything at 4K 120Hz.


I'd be all about a 2nd 2080 Ti if SLI was actually still a thing in most titles, instead of 1 in 20, with limited hacked-in support that screws minimums and injects stutter.

As to the bugs, I've seen a very specific set of steps given out a few times now for how to "properly configure" G-Sync, since apparently there's more than one way to do it wrong.


----------



## J7SC

ahnafakeef said:


> Given how long I have been green with envy of people using G-Sync since it came out, I really thought that the kinks would have been worked out by now
> 
> Is there a guide on how to set up G-Sync properly? I don't know why but I was expecting it to work by simply setting "V-Sync" to "G-Sync" in the game settings.


 
Have you seen this '101' guide ? https://www.blurbusters.com/gsync/gsync101-input-lag-tests-and-settings/14/


----------



## ahnafakeef

skupples said:


> i'd be all about a 2nd 2080ti, if it was actually still a thing in most titles, instead of 1 in 20. with limited hacked in support that screws minimums and injects stutter
> 
> as to the bugs, i've seen a very specific set of steps given out a few times now for how to "properly configure" Gsync, since apparently there's more than one way to do it wrong.


As someone who plays very few games and mostly only big-name titles, I'd be happy if they offered SLI support at least in the big-budget, graphically demanding titles. And I don't care what they say about a 2080Ti performing well enough for most people at most resolutions to not warrant an investment in SLI on the devs part (if that is in fact the excuse they're going with). It is clearly not enough for my needs. I want my games to run on 4K at Ultra locked at 60FPS - in a world where the possibility clearly exists - and I'm willing to pay twice as much for the GPU horsepower. Like, just give me my frames already, dammit! /endrant

Please do tell where I might be able to find this magical set of instructions that makes G-Sync sing. Thank you!


----------



## JustinThyme

ahnafakeef said:


> As someone who plays very few games and mostly only big-name titles, I'd be happy if they offered SLI support at least in the big-budget, graphically demanding titles. And I don't care what they say about a 2080Ti performing well enough for most people at most resolutions to not warrant an investment in SLI on the devs part (if that is in fact the excuse they're going with). It is clearly not enough for my needs. I want my games to run on 4K at Ultra locked at 60FPS - in a world where the possibility clearly exists - and I'm willing to pay twice as much for the GPU horsepower. Like, just give me my frames already, dammit! /endrant
> 
> Please do tell where I might be able to find this magical set of instructions that makes G-Sync sing. Thank you!


I'm with you there. Nvidia fully supports it; it's just game titles that don't. Plenty out there that do, though.

Here's a short list, and most of what I have is on it.

https://www.gamingscan.com/best-games-that-support-sli/


----------



## sblantipodi

I would be really curious to know what is the average monitor of a 2080Ti owner.

Is 4k something interesting for those users?
Most people here say that 144Hz is quite mandatory for gaming; am I the only one who prefers a good 4K 60Hz display to a 2K 144Hz one for non-fast-paced games?

Am I the only one using 4k on a 2080Ti?


----------



## kx11

sblantipodi said:


> I would be really curious to know what is the average monitor of a 2080Ti owner.
> 
> Is 4k something interesting for those users?
> Most people here says that 144Hz is quite mandatory for gaming, am I the only one who prefers a good 4k 60Hz display to a 2k 144Hz for non fast paced games?
> 
> Am I the only one using 4k on a 2080Ti?





You're not. After I got the PG27UQ I wouldn't go below 4K in games; I'm trying to hit 4K 120fps most of the time, and it's not easy with maxed-out graphics.


----------



## keikei

sblantipodi said:


> I would be really curious to know what is the average monitor of a 2080Ti owner.
> 
> Is 4k something interesting for those users?
> Most people here says that 144Hz is quite mandatory for gaming, am I the only one who prefers a good 4k 60Hz display to a 2k 144Hz for non fast paced games?
> 
> Am I the only one using 4k on a 2080Ti?


4K/60Hz, but looking forward to moving up to 4K/120 in 2020. I believe there are a few monitors with those specs expected to be released around then.


----------



## mattxx88

sblantipodi said:


> I would be really curious to know what is the average monitor of a 2080Ti owner.
> 
> Is 4k something interesting for those users?
> Most people here says that 144Hz is quite mandatory for gaming, am I the only one who prefers a good 4k 60Hz display to a 2k 144Hz for non fast paced games?
> 
> Am I the only one using 4k on a 2080Ti?


You're not alone.
PG27UQ here and a 2080 Ti Gaming X Trio.


----------



## sblantipodi

OK, I'm not alone, but people with the PG27UQ can't speak in this "poll" because those people have it all:
4K, 144Hz, HDR1000, G-Sync. That is the god monitor; we are talking about monitors for humans.


----------



## Sheyster

sblantipodi said:


> Most people here says that 144Hz is quite mandatory for gaming, am I the only one who prefers a good 4k 60Hz display to a 2k 144Hz for non fast paced games?


I think 144+ Hz is mandatory for competitive FPS gaming; many FPS pros are using 240Hz now. For RPGs, MMOs, etc., I believe a good 4K monitor is fine, especially if it's G-Sync or FreeSync enabled.


----------



## Sheyster

sblantipodi said:


> ok I'm not alone but people with PG27UQ can't speak in this "poll" because those people have it all
> 4K, 144Hz, HDR1000, GSYNC, that is the god monitor, we are talking about monitors for humans


Indeed, but it's really too bad it's not a 32-inch monitor.


----------



## skupples

ahnafakeef said:


> As someone who plays very few games and mostly only big-name titles, I'd be happy if they offered SLI support at least in the big-budget, graphically demanding titles. And I don't care what they say about a 2080Ti performing well enough for most people at most resolutions to not warrant an investment in SLI on the devs part (if that is in fact the excuse they're going with). It is clearly not enough for my needs. I want my games to run on 4K at Ultra locked at 60FPS - in a world where the possibility clearly exists - and I'm willing to pay twice as much for the GPU horsepower. Like, just give me my frames already, dammit! /endrant
> 
> Please do tell where I might be able to find this magical set of instructions that makes G-Sync sing. Thank you!





JustinThyme said:


> Im with you there. Nvidia fully supports it, its just game titles that dont. Plenty out there that do though.
> 
> heres a short list and most of what I have is on the list.
> 
> https://www.gamingscan.com/best-games-that-support-sli/


Looks like we're all with you on that one.

Nvidia is already in the business of bribing devs for features; not sure why SLI can't go on that list, at least for AAA releases. You know, like how it used to be not all that long ago.

As for that guide, it seems to leave out the fact that you'll need to roll drivers back for SLI to work in many of those titles. For example, FO4's SLI did not work the last time I tried it. Witcher 3 was the last AAA release I remember playing where SLI just worked.


----------



## mattxx88

sblantipodi said:


> I would be really curious to know what is the average monitor of a 2080Ti owner.
> 
> Is 4k something interesting for those users?
> Most people here says that 144Hz is quite mandatory for gaming, am I the only one who prefers a good 4k 60Hz display to a 2k 144Hz for non fast paced games?
> 
> Am I the only one using 4k on a 2080Ti?


I think that a wide-refresh-range G-Sync monitor (144+ Hz) is mandatory because it covers every gaming scenario you need.

To give an example: when I play an FPS (the BF series, say), I drop all graphics settings except textures and meshes to achieve high FPS on my screen and get a better view of enemies.
On the other hand, I also like games such as AC Odyssey or the new RDR2, in which I focus on graphics quality; maybe you can't reach a solid 60fps every time, but G-Sync helps a lot.
Dunno if I've been clear enough.


----------



## KCDC

sblantipodi said:


> I would be really curious to know what is the average monitor of a 2080Ti owner.
> 
> Is 4k something interesting for those users?
> Most people here says that 144Hz is quite mandatory for gaming, am I the only one who prefers a good 4k 60Hz display to a 2k 144Hz for non fast paced games?
> 
> Am I the only one using 4k on a 2080Ti?


I like to do both, depending on the game. With the monitors in my sig, it's either 1440p 165Hz or 7680x1440 if the game natively supports it. However, I do love playing at 4K60 on my LG OLED, mostly with HDR games. Gears 5 with ultra textures looks pretty amazing on it, and it holds a full 60fps at all times with settings maxed using one of the cards. I did a comparison of it vs. the Xbox One X version, and the difference in quality/framerate is pretty nuts.


----------



## skupples

I'd go 144Hz over 4K60 if I spent any amount of time in competitive gaming.


ALSO! Damn, the back of these cards gets hot; I forgot the 1080 Ti didn't have rear memory. I guess I'm gonna have to re-rig my backplate typhoon.


----------



## gfunkernaught

Wanted to share my 3DMark results, run in my garage at 4°C ambient.
https://www.3dmark.com/spy/9238408
https://www.3dmark.com/pr/172411

KPXOC BIOS on a water-cooled 2080 Ti XC Ultra (reference PCB), running [email protected], memory at 8500MHz. 3DMark doesn't seem to read the memory clock correctly, so I took screenshots toward the end of GPU tests 1 and 2. I also ran this on a 1080p monitor, so there is downscaling involved here with the RTSS OSD. Also threw in a GTA5 benchmark screenshot. I totally forgot to grab screens from Port Royal; next time it gets colder, probably freezing, I'm going back out there to push the clock even further.

https://imgur.com/a/53B5gmE has screenshots of the runs. I tried to use the insert-image function, but it didn't work, and I didn't want to attach the files themselves to the post.


----------



## kithylin

gfunkernaught said:


> Wanted to share my 3dmark results, while in my garage at 4c ambient.
> https://www.3dmark.com/spy/9238408
> https://www.3dmark.com/pr/172411
> 
> KPXOC bios on a water cooled 2080 ti xc ultra (reference pcb), running [email protected], memory at 8500mhz. 3dmark doesnt seem to be reading the memory correctly so i took screenshots towards the end of gpu test 1 and 2. I also ran this on a 1080p monitor so there is downscaling involved here with the rtss osd. Also threw in a GTA5 benchmark screenshot. I totally forgot to grab screens from port royal. next time it gets colder, probably freezing, im going back out there to push the clock even further.
> 
> https://imgur.com/a/53B5gmE screenshots of the runs. i tried the to use the insert image function but it didnt work and i didnt want to attach the files themselves to the post.


Sadly, 3DMark is not a sustained load, only a short burst of a couple of minutes. Do you happen to know what clocks you actually manage to sustain under gaming loads when running games for a few hours?


----------



## Talon2016

sblantipodi said:


> I would be really curious to know what is the average monitor of a 2080Ti owner.
> 
> Is 4k something interesting for those users?
> Most people here says that 144Hz is quite mandatory for gaming, am I the only one who prefers a good 4k 60Hz display to a 2k 144Hz for non fast paced games?
> 
> Am I the only one using 4k on a 2080Ti?


I game at 4K 144Hz G-Sync on my Acer XB273K with a 9900KS and 2080 Ti and I absolutely love it. I would never go back to anything with a lower resolution.


----------



## keikei

Speaking of 4K/high-res, this is supposed to be out in the U.S. now, but I'm not seeing it currently: https://www.guru3d.com/news-story/new-predator-cg437k-p-delivers-4kup-to-144hz.html


----------



## skupples

Yep, it's beginning. DP 1.4 galore.

I expect there to be genuinely affordable models by the time the new consoles drop.


----------



## TK421

skupples said:


> i'd go 144hz ove 4K60, if I spent any amount of time in competitive gaming.
> 
> 
> ALSO! damn the back of these cards get hot, I forgot 1080ti didn't have rear memory. I guess i'm gonna have to re-rig my back plate typhoon.


Do you have a picture of that?


----------



## skupples

The rigging?

I have a CaseLabs STH10, so I just strap an AP15 to the bottom horizontal with thick zip ties. I just need to bring some home from work; haven't needed to do this since GK110.

The whole 1350MHz-idle thing is really annoying. How has NV not figured this out by now? I came home to a hot room today because of it.


----------



## ahnafakeef

sblantipodi said:


> ok I'm not alone but people with PG27UQ can't speak in this "poll" because those people have it all
> 4K, 144Hz, HDR1000, GSYNC, that is the god monitor, we are talking about monitors for humans


I don't think the PG27UQ has it all. It's not an OLED panel, and 27" is way too small for general PC usage at 4K, or to appreciate the details of 4K in games.

The one that "has it all" would be the LG B9/C9/E9 OLED TV. 4K, 120Hz, Dolby Vision, G-Sync, OLED, HDMI 2.1 - all in one package. Even the new Dell 55" OLED gaming monitor is lacking a feature, I think. HDR, was it?

Jumping from 32" 4K to 65" 4K, I really thought the drop in pixel density would be noticeable, but given the distance I have to sit at with 65", I really can't tell the difference. The only gripe I have with my C9 is the input latency (or is it input lag? I always mix them up). I didn't think 13ms would be this noticeable, but it really is in Tomb Raider. Although I have introduced a wireless KB/M into the mix, so that might be worsening it even further.

So if you're talking "god monitor", OLED is really where it's at. If competitive gaming or professional-grade color accuracy isn't a concern, I really can't recommend anything else, especially for people in the US, where a C9 55" is $1500 and the 65" is $2400. And that's not counting Black Friday deals or resellers like Greentoe.


----------



## skupples

^^ It's OLED or wait for a cheaper, smaller OLED. Just adjust your desk for BIG.


----------



## bhsmurfy

skupples said:


> the rigging?
> 
> I have a CaseLabs STH10, so I just strap an AP15 to the bottom horizontal with thick zipties. I just need to bring some home from work. Haven't needed to use it since GK110.
> 
> the whole 1350 idle thing is really annoying. How has NV not figured this out by now? I came home to a hot room today because of it.


What BIOS?

My card idles at 300MHz, even when the KPE XOC BIOS is loaded... Maybe it's not NV but you that hasn't figured it out?


----------



## skupples

Oh hey, it stopped. The only change is that I switched over to HDMI to see if my monitor-flicker issue would go away; unfortunately it did not, and I lost 10-bit. The flicker is the monitor, though, as it's persisted across many GPUs. Or at least, it's not the GPU.

BIOS should be stock for the EVGA SC... 90.02.17.00.8E


----------



## ahnafakeef

skupples said:


> ^^ its OLED or wait for cheaper smaller OLED  . Just adjust your desk for BIG.


At $1500 MSRP before any discount, the 55" C9 is substantially cheaper than any of the screens that are being marketed as Nvidia's Big Format Gaming Displays. And then you have the B9 one tier below it with an even smaller price tag. And then I think there are slightly less popular brands with even more competitive prices on their OLEDs. Heck, the Dell AW55 cost $5000 when it first launched (heard it dropped to $2799 to compete with something else though). So I don't think most people in the market for these specs would have much to complain about OLED prices (or LG's prices, for that matter).

As for size, I've wall-mounted mine and got a desk on wheels that I can move around to adjust the distance for movies or gaming or browsing the internet. And that's with a 65". 55" would be very usable on a slightly large-sized desk, I presume.


----------



## bhsmurfy

skupples said:


> oh hey, it stopped. Only change is I switched over to HDMI to see if my monitor flicker issue would go away, unfortunately it did not, and I lost 10bit. the flicker is the monitor though, as its persisted across many GPUs. Or at least, it's not the GPU
> 
> bios should be stock for evga sc... 90.02.17.00.8E


Maybe you have a rogue process keeping it from idling correctly.

I have the same card with the same BIOS, and I idle at 300MHz.

My GPU power reading is not correct for some reason; it reads what it thinks is being drawn from the wall by the whole system.

Have you also tried unplugging the monitor for some time? I had this weird bug on an Asus VG (forget the numbers), the 24" 1080p 1ms monitor. One day I rebooted (I have a separate PSU for my GPU), and I guess some memory somewhere was holding onto some settings such that nothing worked the way it was supposed to. Unplugged everything for about 10 minutes, cleared my CMOS, and everything was back to good. Can't explain it, but that's what happened.
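On the idle question: rather than eyeballing an overlay, you can ask the driver directly whether the card is dropping to a low power state. A minimal sketch using `nvidia-smi` (assumed to be on the PATH; the ~300MHz / P8 idle behavior is just what's being reported in this thread, not an official spec):

```python
import subprocess

def parse_gpu_state(csv_line):
    """Parse one row of `nvidia-smi --query-gpu=pstate,clocks.sm --format=csv,noheader`,
    e.g. 'P8, 300 MHz'."""
    pstate, clock = (field.strip() for field in csv_line.split(","))
    return {"pstate": pstate, "sm_mhz": int(clock.split()[0])}

def is_idling(state, max_idle_mhz=350):
    # A healthy idle is a low P-state AND a low core clock; a card stuck
    # at ~1350 MHz in P0 at the desktop fails both checks.
    return state["pstate"] not in ("P0", "P2") and state["sm_mhz"] <= max_idle_mhz

if __name__ == "__main__":
    try:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=pstate,clocks.sm", "--format=csv,noheader"],
            text=True,
        )
    except (OSError, subprocess.CalledProcessError):
        print("nvidia-smi not available on this machine")
    else:
        for line in out.strip().splitlines():
            state = parse_gpu_state(line)
            print(state, "idling OK" if is_idling(state) else "stuck high")
```

Run it with the desktop idle: a card that never leaves P0 at full core clock is the symptom being discussed here.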


----------



## KCDC

ahnafakeef said:


> At $1500 MSRP before any discount, the 55" C9 is substantially cheaper than any of the screens that are being marketed as Nvidia's Big Format Gaming Displays. And then you have the B9 one tier below it with an even smaller price tag. And then I think there are slightly less popular brands with even more competitive prices on their OLEDs. Heck, the Dell AW55 cost $5000 when it first launched (heard it dropped to $2799 to compete with something else though). So I don't think most people in the market for these specs would have much to complain about OLED prices (or LG's prices, for that matter).
> 
> As for size, I've wall-mounted mine and got a desk on wheels that I can move around to adjust the distance for movies or gaming or browsing the internet. And that's with a 65". 55" would be very usable on a slightly large-sized desk, I presume.



I love my 65" B7 OLED, but I really wish I'd waited for the 9 series for that sweet, sweet G-Sync. From what I remember, all LG OLEDs use the same panel from the B up to the top tier, but other things are sacrificed, like lower-quality stands or no soundbar. The B model also gets the lower-tier processor, but you still get the same panel.

In Assassin's Creed Odyssey, I get slight tearing when settings are maxed while trying to keep 60fps. Might also be my cable, but it's HDMI certified at 25 ft. So far it's the only game I really notice it in; turning a few settings down takes it back to a solid 60 with no tearing or stuttering.

Getting close to 120Hz at 4K on the 9 series will require quite a bit of graphics tuning, I'd imagine, even on a 2080 Ti. The BFGD was such a great idea, but they didn't go OLED, so it was dead from the get-go at that price, IMO.

Black Friday/Cyber Monday should be an interesting time for the 8 series, as they've already dropped a good bit if you don't care about G-Sync.

The Sony Bravia series is still the panel to beat as far as OLED goes, but LG has the best value.


----------



## Chobbit

ahnafakeef said:


> skupples said:
> 
> 
> 
> i'd be all about a 2nd 2080ti, if it was actually still a thing in most titles, instead of 1 in 20. with limited hacked in support that screws minimums and injects stutter /forum/images/smilies/frown.gif
> 
> as to the bugs, i've seen a very specific set of steps given out a few times now for how to "properly configure" Gsync, since apparently there's more than one way to do it wrong.
> 
> 
> 
> As someone who plays very few games and mostly only big-name titles, I'd be happy if they offered SLI support at least in the big-budget, graphically demanding titles. And I don't care what they say about a 2080Ti performing well enough for most people at most resolutions to not warrant an investment in SLI on the devs part (if that is in fact the excuse they're going with). It is clearly not enough for my needs. I want my games to run on 4K at Ultra locked at 60FPS - in a world where the possibility clearly exists - and I'm willing to pay twice as much for the GPU horsepower. Like, just give me my frames already, dammit! /endrant
> 
> Please do tell where I might be able to find this magical set of instructions that makes G-Sync sing. Thank you!

To be fair, I'm in the same boat; however, I decided I wanted 2080 Ti SLI for rendering. I like building and streaming in Planet Coaster, and I've already been working on a park for a while. With a single 2080 Ti and almost max details, the park would run as low as 40fps at 4K 😞.

I noticed it wasn't running SLI and then found a profile through Nvidia Inspector that enabled it to use 96% of both GPUs (I think I've just hit my i9 9960X's CPU bottleneck), and that makes a massive difference; it doesn't drop below 60 now. Also, Witcher 3 with the extreme settings mod (and many others) uses SLI well and is buttery smooth.

I've had SLI in the past, and I think NVLink, even though it's still acting as an SLI bridge on these cards, does an amazing job of removing the stuttering you normally get with SLI. It's hardly (if at all) noticeable compared to my past SLI setups, and some games feel so much better to play now.

If only more games supported SLI I would highly recommend it, but there are sometimes workarounds, and they work better than they used to.
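For anyone wanting to confirm that an Inspector profile like this is actually engaging both cards, sampling per-GPU utilization while the game runs is a quick check. A sketch along those lines; it assumes `nvidia-smi` is on the PATH and is not specific to any one game:

```python
import subprocess

def parse_utilization(csv_text):
    """Parse `nvidia-smi --query-gpu=index,utilization.gpu --format=csv,noheader`
    output (rows like '0, 96 %') into (gpu_index, percent) pairs."""
    pairs = []
    for line in csv_text.strip().splitlines():
        idx, util = (field.strip() for field in line.split(","))
        pairs.append((int(idx), int(util.rstrip(" %"))))
    return pairs

if __name__ == "__main__":
    try:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=index,utilization.gpu", "--format=csv,noheader"],
            text=True,
        )
    except (OSError, subprocess.CalledProcessError):
        print("nvidia-smi not available on this machine")
    else:
        # Both GPUs near the same high percentage suggests the SLI profile is
        # doing its job; one card pegged and the other near 0% means the game
        # is effectively running on a single GPU.
        for idx, pct in parse_utilization(out):
            print(f"GPU {idx}: {pct}%")
```

Sample it a few times mid-game rather than in a menu, since menus often load only one card.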


----------



## ahnafakeef

KCDC said:


> I love my 65" B7 OLED but really wish I'd have waited for the 9 series for that sweet sweet Gsync. From what I remember, all LG OLEDs have the same panel from B up to the top tier, but other stuff is sacrificed like lower quality stands or no soundbar. B model also gets the lower tier processor, but you'll still get the same panel.
> 
> 
> 
> In Assassins Creed Odyssey, I get slight tearing when settings are maxed trying to keep 60 fps. Might also be my cable but it's HDMI certified 25 ft. So far it's the only game I really notice it with. Turning a few settings down takes it back to solid 60 no tearing or stuttering.
> 
> 
> To get close to 120hz on the 9s @4k will require quite a bit of gfx tuning I'd imagine even on a 2080ti. BFGD was such a great idea, but they didn't go OLED so it was dead from the get-go at that price IMO
> 
> 
> 
> Black Friday/Cyber Monday should be an interesting time with the 8 series as they've already dropped a good bit if you don't care about gsync.
> 
> 
> The Sony Bravia series is still the panel to beat as far as OLED goes, but LG has the best value.


To be fair, if you can lower just a few settings and turn RTX off, you should be around 60FPS most of the time in Odyssey and Shadow of the Tomb Raider. A bit less so in Exodus, but it was still very playable for me with RTX off. In which case, you aren't missing G-Sync as much. I myself finished Exodus and Odyssey on a non-G-Sync 4K screen. I don't care for the built-in speakers or the stand since I've wall-mounted mine, and I'll be using headphones (or a speaker system), so I probably could have gotten by with a B9, but I just wanted the latest processor they offer since I plan on keeping this screen for a while (fingers crossed for no burn-in).

I'm getting tearing in movies played on VLC, if you can believe it! In Odyssey, all I was required to turn down was anti-aliasing to medium to get it to stay close to 60FPS most of the time. But that was at 4K on a 32" screen, and I personally didn't notice the difference, so I was okay without much AA.

I didn't know the length of the cable made a difference. I got myself a 6.6ft, a 10ft, and a 25ft cable because I wasn't sure how I was going to set everything up. Ended up needing no more than the 6.6ft. Now I'm feeling a bit glad about that. Now if I could only figure out a way to reduce this infernal input latency that I'm experiencing.

And 120FPS at 4K on a C9 is purely future-proofing at this point, since 4K 120Hz is only available via HDMI 2.1 and there are no HDMI 2.1 GPUs at the moment. So I don't expect to run anything at 4K 120FPS unless SLI returns to its former glory and I end up with two Ampere flagships in my system. And yeah, agreed about the BFGDs; their existence is redundant (if not irrational) for me with the C9 available on the market.

Absolutely vouch for the C8 if you don't care for G-Sync, have the GPU to run 4K60, or can compromise settings enough to get 4K60. The panel is supposed to be very similar to, if not the same as, that of the C9. I don't know if these are Black Friday discounts, but I've seen people posting links to $1800 65" C9s on Reddit already. Very good deal, if you ask me.

Agreed on the Sony bit as well. They definitely have the better processor (I don't know about the panel as it's supposed to be the same one as the C9). However, their latest TVs do not have G-Sync or HDMI 2.1, which would make 4K120 impossible, which made the C9 the default winner as a gaming screen.


Chobbit said:


> To be fair I'm in the same boat however I decided I wanted the 2080ti sli for rendering, I like building and stream on Planet Coaster and already have been working on a park for awhile. With a single 2080ti and almost max details the park would run as low as 40fps on 4k 😞.
> 
> I noticed it wasn't running sli and then found a profile through Nvidia Inspector that enabled to use 96% of both GPU's (I think I've just hit my i9 9960x's CPU bottleneck) and that makes a massive difference where it doesn't drop below 60 currently. Also Witcher 3 with the extreme setting mod (and many others) use the sli well and is bettery smooth.
> 
> I have had sli in the past and I think the nvlink even though it's still acting as an sli bridge on these cards does an amazing job of removing the stuttering you normally get with sli as it's hardly (if at all noticible) compared to my past sli setups and it feels so much better to play some games now.
> 
> If only more games supported sli I would highly recommend it but there is sometimes work around and they work better than they used too.


My hands are itching to pull the trigger on a second 2080 Ti! Having to disable things like RTX in year-old games on a shiny new rig is exasperating. If only they had SLI support in AAA titles. I guess I'll just wait till Ampere for SLI.

I've had SLI in the past as well with the Titan X (I think it was the X. It was one of the Titans that was released sometime after the original Titan in a black shroud). Only had a 1080p screen at the time, so I ended up playing Witcher 3 at 3x DSR (essentially 3K). Got around 50FPS at that resolution, so I can only imagine how smooth it must be with 2080Tis in SLI. And the SLI experience wasn't nearly as bad as people made it sound. So I'm very much a proponent of SLI if it will give me the FPS I want. I could complain about Nvidia's prices though. Paid a sweet $1800 for my 2080Ti Strix OC, as opposed to $1000 a pop for my Titans back then. But it is Nvidia's market to rule, so it is what it is really.


----------



## KCDC

ahnafakeef said:


> To be fair, if you can lower just a few settings and turn RTX off, you should be around 60FPS most of the time in Odyssey and Shadow of the Tomb Raider. A bit less so in Exodus, but it was still very playable for me with RTX off. In which case, you aren't missing G-Sync as much. I myself finished Exodus and Odyssey on a non-G-Sync 4K screen. I don't care for the built-in speakers or the stand since I've wall-mounted mine, and I'll be using headphones (or a speaker system), so I probably could have gotten by with a B9, but I just wanted the latest processor they offer since I plan on keeping this screen for a while (fingers crossed for no burn-in).
> 
> I'm getting tearing in movies played on VLC, if you can believe it! In Odyssey, all I was required to turn down was anti-aliasing to medium to get it to stay close to 60FPS most of the time. But that was at 4K on a 32" screen, and I personally didn't notice the difference, so I was okay without much AA.
> 
> I didn't know the length of the cable made a difference. I got myself a 6.6ft, a 10ft, and a 25ft cable because I wasn't sure how I was going to set everything up. Ended up needing no more than the 6.6ft. Now I'm feeling a bit glad about that. Now if I could only figure out a way to reduce this infernal input latency that I'm experiencing.
> 
> And 120FPS at 4K on a C9 is purely future-proofing at this point. 4K 120Hz isn't usable at this point since it's only available via HDMI 2.1 and there are no HDMI 2.1 GPUs at the moment. So I don't expect to run anything at 4K at 120FPS unless SLI returns to its former glory and I end up with two Ampere flagships in my system. And yeah, agreed about the BFGD. Their existence is redundant (if not irrational) for me with the C9 available in the market.
> 
> Absolutely vouch for the C8 if you don't care for G-Sync, have the GPU to run 4K60, or can compromise settings enough to get 4K60. The panel is supposed to be very similar to, if not the same as, that of the C9. I don't know if these are Black Friday discounts, but I've seen people posting links to $1800 65" C9s on Reddit already. Very good deal, if you ask me.
> 
> Agreed on the Sony bit as well. They definitely have the better processor (I don't know about the panel as it's supposed to be the same one as the C9). However, their latest TVs do not have G-Sync or HDMI 2.1, which would make 4K120 impossible, which made the C9 the default winner as a gaming screen.
> 
> My hands are itching to pull the trigger on a second 2080Ti! Having to disable things like RTX in year-old games on a shiny new rig is exasperating. Only if they had SLI support on AAA titles. I guess I'll just wait till Ampere for SLI.
> 
> I've had SLI in the past as well with the Titan X (I think it was the X. It was one of the Titans that was released sometime after the original Titan in a black shroud). Only had a 1080p screen at the time, so I ended up playing Witcher 3 at 3x DSR (essentially 3K). Got around 50FPS at that resolution, so I can only imagine how smooth it must be with 2080Tis in SLI. And the SLI experience wasn't nearly as bad as people made it sound. So I'm very much a proponent of SLI if it will give me the FPS I want. I could complain about Nvidia's prices though. Paid a sweet $1800 for my 2080Ti Strix OC, as opposed to $1000 a pop for my Titans back then. But it is Nvidia's market to rule, so it is what it is really.



A friend of mine has one of the Bravias with the speakers behind the panel, and aside from better panel quality control, that wall of speakers makes an amazing difference when paired with a capable surround system; even alone, the range it can handle is impressive and really fills in the gaps. But that price is nuts to me for what I want it for. I went with the B since I already had an SJ9 soundbar setup, so the audio wasn't needed.

I do have to make sure all post-processing is off to reduce the lag issues; you may want to make sure any post-processes (super resolution, denoising, TruMotion, etc.) are turned off, under the Picture menu I believe. That helped with input lag on my consoles.

As far as HDMI cables go, I know that up to 25-ish feet should be fine for a solid signal, but the biggest thing is to make sure it's truly HDMI certified (orange tag with the QR code on the packaging, or on the thumbnail if from Amazon); otherwise you never know what cable you're getting. I'm 90% sure it's not my cable, and ACO is just annoying to keep at a stable fps.



It's tough to justify dual cards for games these days if it's just for that one game, or in the hope that others will support it. I wouldn't have two if it weren't for design/rendering purposes.


----------



## ahnafakeef

KCDC said:


> A friend of mine has one of the Bravias with the speakers behind the panel and aside from better panel quality control, the wall of speakers makes an amazing difference when paired with a capable surround system, even alone it's impressive the range it can handle, really fills in the gaps. But that price is nuts to me for what I want it for. I went with the B since I had a SJ9 sound bar setup already, so the audio wasn't needed. I do have to make sure all post processing is off to reduce the lag issues, you may want to make sure you have any post processes turned off (super res, denoising, true motion etc) Under the Picture menu I believe. That helped with input lag on my consoles. As far as HDMI cables go, I know that up to 25ish feet should be good for solid signal but the biggest thing is to make sure it's truly HDMI certified (orange tag with the QR code on the packaging or thumbnail if from amazon). Otherwise you never know what cable you're getting. I'm 90% sure it's not my cable and ACO being annoying to keep at a stable fps.
> 
> 
> 
> It's tough to justify dual cards for games these days if it's just for that one game or hopeful others will support it. I wouldn't have two if it wasn't for design/rendering purposes.


Oh yeah, if you want Sony, you've got to be willing to pay extra, which I honestly wouldn't have minded since I'm using this one screen for gaming, movies, TV, and everyday PC usage. But the lack of HDMI 2.1 and G-Sync is what made my decision. As for the behind-the-panel speakers, hopefully I won't miss them much if I end up getting a surround system.

Will turning off the post-processing options help with PC gaming? Also, will disabling them have any adverse effects on picture quality or the viewing experience in general?

Weirdly enough, things don't operate exactly the same when using it as a monitor vs viewing content directly from the TV's apps. Netflix from the TV app seems to go into Dolby Vision mode automatically when it's available, whereas Netflix via Chrome on Windows does not. Same for the motion smoothing. Netflix is super smooth, whereas playing anything via PC (Netflix, VLC etc) doesn't seem to have the motion smoothing. Which is concerning because I intended to watch 4K Bluray movies primarily via the PC. Watching it directly on the TV would require me to put it on a flash drive and then connect it via the TV's USB port, which is an extra step I'd much rather avoid if possible.

And yeah, I really needed the extra GPU horsepower for the games. But hey, at least you're getting all the GPU horsepower you need for your purposes, so good for you.

Oh, and just a PSA: in case anyone reading this wants a Titan RTX, Nvidia has a 20% Education Discount on the card on their website, which brings it down to $2000 apiece. Understandably small niche I presume, but letting you guys know just in case.


----------



## TK421

I believe that with a dual-monitor setup, regardless of resolution and refresh rate, the card will always stay at P0.


----------



## skupples

Thanks for the response.

Nope, it's clearly related to using DisplayPort. The card sleeps just fine after switching to HDMI, but it returns to 1350MHz as soon as I flip it back to DP.

Good news is, I'ma be the daddy of a shiny new C9 this holiday. I've finally replaced every last possible piece of hardware and the flicker continues, thus it's my monitor (YAY!!! EXCUSE TO SPEND MONEY!).

I figure I should probably pick it up from the local Best Buy so I can easily swap it if needed.


----------



## gfunkernaught

kithylin said:


> gfunkernaught said:
> 
> 
> 
> Wanted to share my 3dmark results, while in my garage at 4c ambient.
> https://www.3dmark.com/spy/9238408
> https://www.3dmark.com/pr/172411
> 
> KPXOC bios on a water cooled 2080 ti xc ultra (reference pcb), running [email protected], memory at 8500mhz. 3dmark doesnt seem to be reading the memory correctly so i took screenshots towards the end of gpu test 1 and 2. I also ran this on a 1080p monitor so there is downscaling involved here with the rtss osd. Also threw in a GTA5 benchmark screenshot. I totally forgot to grab screens from port royal. next time it gets colder, probably freezing, im going back out there to push the clock even further.
> 
> https://imgur.com/a/53B5gmE screenshots of the runs. i tried the to use the insert image function but it didnt work and i didnt want to attach the files themselves to the post.
> 
> 
> 
> Sadly 3DMark is not a sustained load, only a short burst for a couple minutes. Do you happen to know what clocks you manage to actually sustain under gaming loads when running games for a few hours?

I know. I was just posting to showcase how far I can push my particular GPU. Not sad at all, actually; that's why benchmarks exist! Sustained [email protected] Some dips due to the power limit. Benchmarks for me are a way to confirm what your GPU is actually capable of, like confirming the card's ceiling. I can confirm without a doubt that I have a not-so-good overclocker if I need to cool the thing down to 17°C to reach [email protected] I've tried different BIOSes with different power limits, and they're all BS for gaming. Each board has a power limit for a reason: it physically cannot handle more than it was rated for. Mine is 336W; more than that the card cannot handle at full load (all CUDA and RT cores in use).
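To put a number on "sustained" clocks rather than eyeballing an OSD, one option is to log `nvidia-smi` at an interval during a long session, e.g. `nvidia-smi --query-gpu=clocks.sm,power.draw,temperature.gpu --format=csv,noheader -l 5 -f clocks.log`, and summarize the log afterwards. A rough sketch (the `clocks.log` filename and field order are assumptions matching that command):

```python
def summarize_log(lines):
    """Summarize rows like '1995 MHz, 330.12 W, 44': min/avg sustained core
    clock and peak board power over the whole logged session."""
    clocks, watts = [], []
    for line in lines:
        clock, power, _temp = (field.strip() for field in line.split(","))
        clocks.append(int(clock.split()[0]))
        watts.append(float(power.split()[0]))
    return {
        "min_mhz": min(clocks),
        "avg_mhz": sum(clocks) / len(clocks),
        "max_w": max(watts),
    }

if __name__ == "__main__":
    try:
        with open("clocks.log") as fh:  # produced by the nvidia-smi command above
            print(summarize_log(fh))
    except FileNotFoundError:
        print("no clocks.log found; run the nvidia-smi logging command first")
```

Peak power sitting right at the board's limit alongside dips in the minimum clock is exactly the power-limit throttling being described here.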


----------



## KCDC

ahnafakeef said:


> Oh yeah, if you want Sony, you gotta be willing to pay extra, which I honestly wouldn't have minded since I'm using this one screen for gaming, movies, TV, and everyday PC usage. But the lack of HDMI 2.1 and G-Sync is what made my decision. As for the behind the panel speaker, hopefully I won't miss it much if I end up getting a surround system.
> 
> Will turning off the post-processing help with PC gaming? Also, will disabling them have any adverse effects on the picture quality or viewing experience in general?
> 
> Weirdly enough, things don't operate exactly the same when using it as a monitor vs viewing content directly from the TV's apps. Netflix from the TV app seems to go into Dolby Vision mode automatically when it's available, whereas Netflix via Chrome on Windows does not. Same for the motion smoothing. Netflix is super smooth, whereas playing anything via PC (Netflix, VLC etc) doesn't seem to have the motion smoothing. Which is concerning because I intended to watch 4K Bluray movies primarily via the PC. Watching it directly on the TV would require me to put it on a flash drive and then connect it via the TV's USB port, which is an extra step I'd much rather avoid if possible.
> 
> And yeah, I really needed the extra GPU horsepower for the games. But hey, at least you're getting all the GPU horsepower you need for your purposes, so good for you.
> 
> Oh, and just a PSA: in case anyone reading this wants a Titan RTX, Nvidia has a 20% Education Discount on the card on their website, which brings it down to $2000 apiece. Understandably small niche I presume, but letting you guys know just in case.


I can't really speak to playing content from my PC on the OLED; I keep that on the console/TV-app side. HDR does kick in automatically in any game supporting it from Windows, and it worked on YouTube 4K HDR when I tested it. I'd check your color settings in NVCP to make sure your OLED output is YCbCr 4:2:2 12-bit; that's all I had to do, and HDR works fine, but check that HDR stays turned on in Windows settings.
None of the post-processing effects on the TV affect the color quality of the image; they're meant to make crappier sources look better via upscaling, denoising, frame blending, etc., at the cost of latency. I noticed the lag most when playing Pinball FX3: post-processing would cause me to lose games because the button-press-to-flipper response was that bad. Once I turned it all off, it was back to a playable state. Being in Game Mode also gets rid of more lag, so you may want to check that your HDMI input is set to that as well.


----------



## gfunkernaught

ESRCJ said:


> Founder's edition with a Heatkiller waterblock. Currently I've only had time to play Gears 5. Like I said, the 2080 Ti can draw a lot or power in this game. I was seeing over 370W at 2175MHz 1.093V. A few months ago I was playing Forza Horizon 4, Shadow of the Tomb Raider, Division 2, and Forza Motorsport 7.


Ah, of course. There's a reason NVIDIA held onto them. I was looking at the new Corsair 2080 Ti water block, and on one forum someone posted results showing it beat the EK Vector by 5-6C. What are your load temps at both of those clocks?


----------



## gfunkernaught

Has anyone successfully used a V/F curve to set multiple points? I've tried setting a V/F point at several spots, hoping it would drop down to where I set it when power and heat come into play, but instead only the clocks drop and the voltage stays the same. Undervolting never works properly for me, since the clock will drop at a certain temp. I'd set a point at [email protected], thinking it would drop to 2085MHz where I want it to, but it always drops further, leading to a crash in games like Shadow of the Tomb Raider and Control.
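What an Afterburner-style multi-point undervolt is supposed to do can be sketched as a curve clamp. This is a minimal sketch; all voltage/clock points below are hypothetical, not taken from any particular card:

```python
# Sketch of an Afterburner-style undervolt: the stock voltage/frequency curve
# is flattened past a chosen voltage, so the card never requests more voltage
# than that point even when boost wants a higher clock.
# All point values below are hypothetical, not from any specific card.

STOCK_CURVE = [  # (voltage in mV, clock in MHz)
    (800, 1800),
    (900, 1950),
    (1000, 2085),
    (1093, 2175),
]

def flatten_curve(curve, max_mv):
    """Clamp the V/F curve: every point above max_mv is pinned to the
    clock of the highest point at or below max_mv."""
    capped_clock = max(clk for mv, clk in curve if mv <= max_mv)
    return [(min(mv, max_mv), min(clk, capped_clock)) for mv, clk in curve]

undervolted = flatten_curve(STOCK_CURVE, 1000)
# Every point at or above 1000 mV now runs the 1000 mV clock,
# so boost cannot raise the voltage further.
print(undervolted)
```

In practice GPU Boost also steps the whole curve down as temperature rises (roughly one small clock bin per temperature threshold), which would match the clock dropping below the pinned point as described.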


----------



## ntuason

Hello, 

I just wanted to get a second opinion or a few. My friend has an old AMD 5870 that's acting up; he wants to upgrade to a 2080 Ti (which hopefully will last him 5 years). Is it a bad time to upgrade this late in the cycle, or should I tell him to wait for next gen? 

Thanks


----------



## TWiST2k

Something new will probably be out within 6 months, either a 2080 Ti Super or Ampere. I think whether we get a refresh or a new card will really depend on what AMD has coming. But only time will tell for sure. I have been keeping an eye on things, and prices have been holding pretty steady for about 13 months.


----------



## bhsmurfy

ntuason said:


> Hello,
> 
> I just wanted to get a second opinion or a few. My friend has an old AMD 5870 that's acting up; he wants to upgrade to a 2080 Ti (which hopefully will last him 5 years). Is it a bad time to upgrade this late in the cycle, or should I tell him to wait for next gen?
> 
> Thanks


If you have the money, upgrade.

Something better always comes out every 6 months.

It's a strong card and you won't have buyer's remorse.

Also, have your buddy join the forums?? lol


----------



## keikei

ntuason said:


> Hello,
> 
> I just wanted to get a second opinion or a few. My friend has an old AMD 5870 that's acting up; he wants to upgrade to a 2080 Ti (which hopefully will last him 5 years). Is it a bad time to upgrade this late in the cycle, or should I tell him to wait for next gen?
> 
> Thanks


Rumors point to about a year until Ampere. Assuming the average generational uptick of around 30%, and the current Ti-replacement price. Also assuming NVIDIA releases the entire line. A lot of assumptions. Is he/she willing to wait that long for that much of a performance increase? If he/she is playing the long game, wait for Ampere, as RT is expected to be much better. RT right now is not adequate. I want to use another word, but...
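The back-of-the-envelope behind that wait-or-buy call can be written out; the 30% figure is the post's own assumption, not a confirmed spec:

```python
# Rough wait-vs-buy arithmetic from the post: assume a ~30% generational
# uplift at roughly the same price point. The 30% is the post's assumption.

current_fps = 100          # whatever the 2080 Ti does in your game today
uplift = 0.30              # assumed generational uptick
next_gen_fps = current_fps * (1 + uplift)

print(next_gen_fps)        # ~1 year of waiting buys ~30 extra fps per 100
```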


----------



## JustinThyme

Unless you have a firm release date from the manufacturer, go with what's available when you need it. As stated, there's always something new coming out. I'm habitually every other generation, and I'll be skipping the 3080 Ti. I did go with a 2080 Ti over a 1080 Ti for a 30% performance increase; the spread wasn't that big from a 980 to a 1080. Right now everything on the 7nm launch is 100% conjecture, as it always is. It's all over the place: some say March, others say June. My experience tells me the first FE 3080 Ti cards will start trickling out in the August time frame at best. That is not a short wait. The only thing new coming out soon is the announced release of the Intel 109xx chips, and still no firm date; that is also all over the place, with some saying shipping as early as 11/25. That I would consider a wait-for item, not something more than 6 months out, especially seeing how they just threw out the bone of the Super GPUs, which aren't all that super.


----------



## kx11

Trying the new Neon Noir ray tracing benchmark @ 4K











runs alright mostly


----------



## GRABibus

Nice.
Isn't ray tracing supposed to work only with D3D12 and not D3D11?
Your OSD shows the D3D11 API.


----------



## kx11

GRABibus said:


> Nice.
> Ray tracing is not supposed to work only with D3D12 and not D3D11 ?
> On your OSD it is D3D11 API.



Ray tracing can work with whatever API you like; the World of Tanks benchmark already did that.


----------



## J7SC

I think the benchmark defaults to DX11, though the upcoming engine version is supposed to support DX12 and Vulkan as well. My 4K Ultra single run, posted in the Neon Noir thread yesterday, includes some MSI AB monitoring. Decent VRAM usage, but not that stressful on GPU temps...it also seems to involve the CPU a lot; I had 16 threads humming away.

Next, I have to set it up for SLI / NVLink, apparently with the AFR2 SLI setting? Anyone know for sure?


----------



## GRABibus

kx11 said:


> raytracing can work with whatever APi you like , world of Tanks benchmark did that already
> 
> https://www.youtube.com/watch?v=Qin1nKMZ7qI


I asked because I play 2 games with ray tracing: BFV and CoD MW.

Ray tracing can't be enabled with DX11 in BFV, only with DX12.
In CoD MW there is no choice; the game is DX12 only.


----------



## JustinThyme

J7SC said:


> I think the benchmark is DX11 default, though the upcoming engine version is supposed to be DX12, Vulkan as well. My 4k Ultra single run posted at Neon Noir thread yesterday, includes some MSI AB monitoring. Decent VRAM usage, but not that stressful on GPU temps...also seems to involve the CPU a lot, had 16 threads humming away.
> 
> Next, I have to set it up for SLI / NVLink, apparently with AFR2 SLI setting ? Anyone know for sure ?



Good question. My second GPU never left 1350.


----------



## JustinThyme

A little more realistic. The first one was at the default resolution, which is 1600x900. This is 3440x1440. I have the ASUS PG349Q 34" widescreen monitor; the 4K 27" just doesn't do it for me. Have one, but I use it for connecting my laptops.

I have an NVIDIA profile to force SLI in Superposition. Maybe I can hack that up a bit to get this to use both cards.


----------



## J7SC

JustinThyme said:


> Good question. My second GPU never left 1350


 
Attachment is a screenshot re: SLI/NVLink tips...I have not tried it yet, and per the related vid below, probably not that much dual-GPU scaling, but still worth checking SLI/NVL out.


----------



## skupples

ntuason said:


> Hello,
> 
> I just wanted to get a second opinion or a few. My friend has an old AMD 5870 that's acting up; he wants to upgrade to a 2080 Ti (which hopefully will last him 5 years). Is it a bad time to upgrade this late in the cycle, or should I tell him to wait for next gen?
> 
> Thanks


You'll have the 3080 by midsummer, and the 3080 Ti by this time next year. 

Assuming NV does what NV usually does, which is usually a good bet.


----------



## JustinThyme

J7SC said:


> Attachment is a screenshot re. SLI/NVLink tips...have not tried it yet, and per related vid below, probably not that much dual GPU scaling, but still, worth checking SLI/NVL out
> 
> https://www.youtube.com/watch?v=phutT0LVUOw



Didn't see any tips on how to get the second card to show up. 

Added the exe; whether or not it's the right one is another story. Long path and hard to find, but it says Neon Noir game launcher at the end.
Forced alternate frame rendering 2, and my score actually dropped.


----------



## J7SC

JustinThyme said:


> Didnt see an tips on how to get second card to show up.
> 
> Added exe, whether or not its the right one is another story. Long path and hard to find but it says Neon Noir game launcher in the end
> Forced alternate frame rendering 2 and my score actually dropped.


 
I just posted a quick-and-dirty 4K Ultra / SLI here: https://www.overclock.net/forum/226-software-news/1736286-cryengine-ray-tracing-everyone-neon-noir-benchmark-tool-released-7.html#post28200564 

BTW, I also read (on the internet, so it must be true  ) that the Crysis 3 profile **works** for the Neon Noir bench, but I haven't checked that out yet.


----------



## VPII

@J7SC I'm sitting with a slight issue when running Time Spy. I only noticed it recently: in the second game test I get stutter, so bad that the average fps is 10 to 12 lower than normal. I tried various drivers and even a fresh Windows installation, but still the same issue.

Sent from my SM-G960F using Tapatalk


----------



## J7SC

VPII said:


> @J7SC Im sitting with a slight issue when running Timespy. I only noticed recently when running the second game test stutter but so worse that the average fps is 10 to 12 lower than normal. I tried various drivers and even a fresh windows installation but still same issue.
> 
> Sent from my SM-G960F using Tapatalk


 
I think GT2 is the one that stresses VRAM (while GT1 is apparently more GPU intensive). The first step I would take is to drop VRAM speed by a chunk, just to see whether the stutter disappears or persists. Could anything have affected VRAM cooling, btw?


----------



## ahnafakeef

skupples said:


> you'll have the 3080 by mid summer, and 3080ti by this time next year.
> 
> assuming nv does what nv usually does, which is usually a good bet.


How reliable is this estimate? Am I going to get at least a good six months of usage out of a second 2080Ti if I were to buy one now?

And speaking of:
1. Do I need an SLI bridge for SLI/NVLink/MGPU? Do they function in essentially the same manner, or do they need to be set up differently?
2. Would air-cooling work better if I mount the second card vertically instead of one on top of the other? (In a Corsair 680X, if that helps)
3. Which SLI bridge should I get for a dual ASUS ROG STRIX 2080Ti OC setup? Would this work? https://www.amazon.com/ASUS-GeForce-Nvlink-Graphic-ROG-NVLINK-3/dp/B07KKPWQ5P


----------



## Sketchus

Just to check, I'm still stuffed on the newer XUSB version on my Gaming X Trio? No way to flash to a higher power limit.


----------



## ntuason

bhsmurfy said:


> If you have the money upgrade.
> Something better always comes out every 6 months.
> Its a strong card...





keikei said:


> Rumors mark about 1 yr until ampere...





skupples said:


> you'll have the 3080 by mid summer, and 3080ti by this...


Thanks for the info. He’s decided to wait it out another year for 3080 Ti to do a complete rig upgrade.


----------



## VPII

J7SC said:


> :headscrat...I think GT2 is the one that stresses VRAM (while GT1 is apparently more GPU intensive). First step I would take is to drop VRAM speed by a chunk just to see if the stutter disappears or persists. Could anything have affected VRAM cooling, btw ?


Hi there. So I decided to drop my entire overclock on the graphics card, running core and memory at stock. Well, I still get the same issue. At first it seems to run as normal, but about a quarter of the way through the bench I start getting the stutters again. FPS would be around 90 or so and would just drop to 60.


----------



## skupples

J7SC said:


> I just posted a quick-and-dirty 4K Ultra / SLI here: https://www.overclock.net/forum/226-software-news/1736286-cryengine-ray-tracing-everyone-neon-noir-benchmark-tool-released-7.html#post28200564
> 
> BTW, I also read (on the internet, so it must be true  ) that the Crysis 3 profile **works** for the NeonNoir bench, but I haven't checked that out yet


LOL... Maybe it does, maybe it doesn't? 

Good ol' internet SLI advice tells you that using the bits of a different game on the same engine MUST WORK, ALWAYS, ALL THE TIME, PERFECT 8K240HZ 99.99% GPU SCALING! And/or BF:V bits will work if not.  

When in reality...


----------



## jura11

Hi guys


I have tried this Crytek Neon Noir benchmark, and for the love of god my scores are the same at 1080p, 1440p, or even ultrawide 3440x1440; scores are around 3200-3300 points.

Looks like I'm CPU limited, or my CPU bottlenecks very badly in this benchmark.

Running a 5960X at 4.6GHz currently, and an Asus RTX 2080 Ti Strix with a 2160MHz OC.

Although I'm running RAM at 2113MHz, which can make a bit of a difference, my scores are still very low compared against a 4690K or 6700K etc. with a GTX 1080 Ti. 

GPU usage is very low in this benchmark; it literally sits at 20-40% during the whole run, and only in two instances have I seen full boost clocks at 2145-2160MHz.

Thanks, Jura
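The symptom above (near-identical scores at 1080p, 1440p, and 3440x1440) is the classic signature of a non-GPU limit. A tiny sketch of that sanity check, with illustrative scores only:

```python
# Quick heuristic for the symptom described above: if the score barely moves
# as resolution changes, the GPU is not the limiter (CPU bound, or the bench
# is not detecting the GPU properly). The scores below are illustrative.

def looks_cpu_bound(scores_by_resolution, tolerance=0.05):
    """True if all scores are within `tolerance` (fractional) of each other."""
    vals = list(scores_by_resolution.values())
    return (max(vals) - min(vals)) / max(vals) <= tolerance

scores = {"1080p": 3300, "1440p": 3250, "3440x1440": 3200}
print(looks_cpu_bound(scores))  # True: resolution isn't changing the result
```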


----------



## J7SC

jura11 said:


> Hi guys
> 
> I have tried this Crytek Neon Noir Benchmark and for love of god my scores are same like with 1080p or 1440p or even in Ultrawide 3440x1440, scores are around 3200-3300 points
> 
> Looks like I'm CPU limited or my CPU does bottleneck very badly in this benchmark
> 
> Running 5960x with 4.6Ghz currently and Asus RTX 2080Ti Strix with 2160MHz OC
> 
> Although I'm running RAM at 2113MHz which can make a bit difference but still my scores are very low if I'm comparing against the 4690k or 6700k etc and GTX1080Ti
> 
> GPU usage its very low in this benchmark, literally sits in 20-40% during the whole benchmark and in only two instances seen full boost clocks at 2145-2160MHz
> 
> Thanks, Jura


 
You're not alone, check out the vid in this NeonNoir thread re. GPU usage : https://www.overclock.net/forum/226-software-news/1736286-cryengine-ray-tracing-everyone-neon-noir-benchmark-tool-released-7.html#post28200440 

I have to run 4K Ultra before single 2080 Ti GPU (left pic below) reaches 99% max usage. For 4K NVL/SLI, primary GPU hits up to 99%, secondary 86%. This bench does seem to use relatively many CPU threads, though at lighter-per thread loads....wondering how significantly system ram is in NeonNoir :thinking:


----------



## sblantipodi

STADIA is out, and I think it's the end of gaming as we know it.
RTX cards will make no sense in 2020/2021.

If you can play 4K 60FPS for 10 dollars a month, why spend €1300 on a GPU?
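For what it's worth, the arithmetic behind that question works out like this (using the post's own numbers, and treating the $10 as roughly €10):

```python
# Break-even arithmetic for subscription streaming vs a one-off GPU purchase.
# Prices are the ones quoted in the post; $10 is treated as ~10 EUR.

gpu_price_eur = 1300
stadia_eur_per_month = 10

breakeven_months = gpu_price_eur / stadia_eur_per_month
print(breakeven_months)  # 130 months, i.e. roughly 10.8 years of subscription
```

Whether the two experiences are comparable at all (latency, mods, library) is exactly what the following replies argue about.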


----------



## HeadlessKnight

jura11 said:


> Hi guys
> 
> 
> I have tried this Crytek Neon Noir Benchmark and for love of god my scores are same like with 1080p or 1440p or even in Ultrawide 3440x1440, scores are around 3200-3300 points
> 
> Looks like I'm CPU limited or my CPU does bottleneck very badly in this benchmark
> 
> Running 5960x with 4.6Ghz currently and Asus RTX 2080Ti Strix with 2160MHz OC
> 
> Although I'm running RAM at 2113MHz which can make a bit difference but still my scores are very low if I'm comparing against the 4690k or 6700k etc and GTX1080Ti
> 
> GPU usage its very low in this benchmark, literally sits in 20-40% during the whole benchmark and in only two instances seen full boost clocks at 2145-2160MHz
> 
> Thanks, Jura


It is not a CPU bottleneck, but rather a compatibility issue or something like that, because I can get 99% GPU usage with my sig-rig laptop at 720p (4720HQ and 980M) and smoke my 5960X 4.5GHz 2x 2080 Ti NVLink setup with the same settings. I checked the log file, and it is detecting the graphics card as Microsoft Standard VGA Adapter. I tinkered for hours to solve the issue, then gave up and haven't worried about it since, because it is the only thing that gives me issues.


----------



## skupples

sblantipodi said:


> STADIA is out and I think that it's the end of gaming as we know it.
> RTX cards will have no sense in 2020/2021.
> 
> if you can play 4K 60FPS for 10 dollars a month, why spend 1300€ in a GPU?


Latency.
Direct control over games.
Highly doubt we'll be modding ES6 if it's hosted in the cloud.

All I know is GeForce NOW is still a joke, even after changing names from GRID.


----------



## sblantipodi

skupples said:


> sblantipodi said:
> 
> 
> 
> STADIA is out and I think that it's the end of gaming as we know it.
> RTX cards will have no sense in 2020/2021.
> 
> if you can play 4K 60FPS for 10 dollars a month, why spend 1300€ in a GPU?
> 
> 
> 
> latency.
> direct control over games.
> highly doubt we'll be modding ES6 if its hosted in the cloud.
> 
> all I know is GeforceNOW is still a joke, even after changing names from GRID.
Click to expand...

Did you read some reviews? Latency on Stadia is not a problem. There's really no need to buy newer GPUs now that Stadia is a reality.


----------



## Sheyster

sblantipodi said:


> Did you read some reviews? Latency on stadia is a no problem. Really no need to buy newer GPUs now that stadia is reality.


Read reviews? No thank you, not for me. Sounds like it might be an option for console gamers once there are more than 22 titles available, but any serious PC gamer will be highly disappointed, IMHO.


----------



## skupples

The only way to know how Stadia is going to work in your home is to play Stadia in your home. Unless it was your neighbor that reviewed it? 

22 titles? Good start. Lemme know when we're close to tens of thousands, spanning 20 years, THAT CAN BE MODDED <<< CAN'T SAY THIS LOUD ENOUGH.


This kinda makes me think modding will be dead in the new Oblivion, cuz they're probably secretly devving it for Stadia and other streamed services XD

Honestly? Aside from AAA (which is quite disappointing 99% of the time these days; even the new Star Wars is MEHHHH, an Uncharted clone with Star Wars textures), the loss won't be felt by most of us, and could even help the segment by removing all those silly console/PC flip-flopping WoW kiddies that showed up so many years ago. 

Lemme know when they let me run a full local-access virtual machine on their blades; until then I'm good.


----------



## Sheyster

skupples said:


> 22 titles? good start, lemme know when we're close to 10s of thousands, spanning 20 years, THAT CAN BE MODDED <00000 CAN'T SAY THIS LOUD ENOUGH.


I guess I should have mentioned there are only 2-3 of those I'd actually play!  Didn't mean to sound too optimistic.


----------



## sblantipodi

Sheyster said:


> sblantipodi said:
> 
> 
> 
> Did you read some reviews? Latency on stadia is a no problem. Really no need to buy newer GPUs now that stadia is reality.
> 
> 
> 
> Read reviews, no thank you for me. Sounds like it might be an option for console gamers once there are more than 22 titles available, but any serious PC gamer will be highly disappointed IMHO.
Click to expand...

I don't understand your point. 
It runs 4K 60fps with high settings. Why would serious PC gamers be disappointed?


----------



## skupples

Sheyster said:


> I guess I should have mentioned there are only 2-3 of those I'd actually play!  Didn't mean to sound too optimistic.


lol no worries, its rare my panties are ever wadded in any way shape or form. #Freeball 


sblantipodi said:


> Don't understand your point.
> It runs 4k 60fps with high settings. Why serious pc gamers would be disappointed?


Sir, have you personally sat down and used the product?

My Shield can't even do a high-quality streaming service, and that's from my PC to my TV via an internal, stupidly overbuilt home network. 

I mean, it's OK. It works for single player, but aside from that? Blaaaaaah. 

I can also still access my desktop to get at mod files, etc.

I'm nowhere close to a rich man, but as stated, I'll stick to my own game streaming solutions for now.

(Let's be real, $1200 every year or two is quite cheap compared to most hobbies.)


----------



## Sheyster

sblantipodi said:


> Don't understand your point.
> It runs 4k 60fps with high settings. Why serious pc gamers would be disappointed?


I run 2K (2560x1440) @ 144 Hz. They can call me when they have 4K @ 120 Hz available. 60 Hz is for plebs at any resolution, including 4K.  Until 4K120 is available I'm really not interested at all. Heck, I'd even venture to say that 4K144 should be the ultimate goal.


----------



## skupples

I'd assume it can do 2K120 OR 4K60... I mean, that is the eternal trade-off, and has been for a couple of generations now.

I'm pretty sure I can do that with my Shield, assuming it's a setting that can be dialed in on the monitor and detected by Android.
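The 2K-high-refresh versus 4K60 trade-off is ultimately a bandwidth budget. A rough sketch of the raw (active-pixel) bit rates involved; blanking intervals and protocol overhead are ignored, so real link requirements are somewhat higher:

```python
# Raw video bandwidth scales linearly with pixel count and refresh rate.
# Figures ignore blanking and encoding overhead, so they understate the
# actual link requirement, but the ratios hold.

def raw_gbps(width, height, hz, bits_per_pixel=24):  # 24 = 8-bit RGB
    return width * height * hz * bits_per_pixel / 1e9

modes = {
    "2560x1440 @ 144": raw_gbps(2560, 1440, 144),
    "3840x2160 @ 60":  raw_gbps(3840, 2160, 60),
    "3840x2160 @ 120": raw_gbps(3840, 2160, 120),
}
for name, gbps in modes.items():
    print(f"{name}: {gbps:.1f} Gbit/s")
# 4K120 needs roughly double the raw bandwidth of either 2K144 or 4K60,
# which is why it had to wait for HDMI 2.1 / DP 1.4 with DSC.
```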


----------



## JustinThyme

Sheyster said:


> I run 2K (2560x1440) @ 144 Hz. They can call me when they have 4K @120 Hz available. 60 Hz is for plebs, at any resolution including 4K.  Until 4K120 is available I'm really not interested, at all. Heck, I'd even venture to say that 4K144 should be the ultimate goal.


If we are aiming for that, we may as well shoot for 8K at 200Hz. I about crapped when I saw 8K TVs hit the market. What's the point? The 4K platform is not fully developed, with extremely limited media. TV programming and even movie titles are still very limited, and there are no 8K titles for anything...save a few demos to show you how great the TV is, until you plug it into any service and find limited 4K is the best you can do. 

Honestly, and maybe because I'm an old fart, taking all other aspects of refresh rates out of the picture: on a 27" screen the only discernible difference is my icons and text shrink. Looking at the graphics, I can tell no difference. Now when you are talking a 55" and bigger TV screen, of course it's noticeable to everyone but my wife, who can't tell the difference between 480p and 1080p......or so she says. (I keep telling her to quit cheating the optometrist out of their income.) 

I don't intend on changing from 3440x1440 120Hz until the monitor craps out. It just fits my workspace and for now maxes out anything that's out there as far as discernible differences. Sure, you may see a slight difference between two 27" monitors, 2K vs 4K, side by side, but my bet is if you were put in a blind test separated by a little time, you would not.
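The 27-inch 2K-vs-4K question above comes down to pixel density. A quick sketch of the PPI arithmetic:

```python
# Pixel density for a given resolution and panel diagonal. Whether ~163 PPI
# is visibly sharper than ~109 PPI at desk distance is exactly the
# blind-test question raised in the post.
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch along the panel diagonal."""
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(2560, 1440, 27)))  # ~109 PPI for 27" 1440p
print(round(ppi(3840, 2160, 27)))  # ~163 PPI for 27" 4K
```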


----------



## Sheyster

JustinThyme said:


> If we are Aiming for that may as well shoot for 8K at 200Hz. I about crapped when I saw 8K TVs hit the market. Whats the point? 4K platform is not fully developed with extremely limited media. TV programming and even movie titles are still very limited and there are no 8K titles for anything...saving a few demos to show you how great the TV is until you plug it in to any service and find limited 4K is the best you can do.


I saw the 8K Samsung at CEDIA last year. Amazing for sure. My post was primarily referencing high refresh rate FPS gaming on gaming monitors specifically. I should have been more clear about that. I'll probably stay on 2K until there is a decent 4K gaming monitor on the market that is 32" or larger. The new 38" LG looks interesting but is way too pricey IMHO:

https://www.lg.com/us/monitors/lg-38GL950G-B-gaming-monitor

Not quite 4K but close.


----------



## TWiST2k

majestynl said:


> So i figured out whats wrong with flashing the Galax 380w Bios. The Galax version is an older build with a older XUSB Firmware version.
> NVFlash doesn't accept an older XUSB Fw if there is one newer installed on current loaded Vbios!
> 
> My Vbios was included with *XUSB-FW Version ID : 0x70090003 * and the current Galax 380w on TPU has an older version *XUSB-FW Version ID: 0x70060003*
> 
> What i did:
> 
> 
> Backup-ed my Original Rom and just flashed same one back to check if NVflash etc is working properly
> Checked TPU Bios libary to find an Bios who had better Power limit and was build on same month or later then my current bios version!
> I found the EVGA XC and MSI Seahawk X (Ref board)
> Double-checked if the XUSB had at least same FW version. It did!
> Flashed EVGA Bios (338w) with NvFlash (Board Id Mismatch version)
> 
> 
> Flashed successful and the card is running already better  Need to check which Bios is the best for now.
> If we want to use the 380w bios, we need a newer Bios file from that card with at least same or newer XUSB Fw, otherwise we cant flash it!
> 
> I believe not only MSI Duke OC, but most new bought cards are running with an updated XUSB FW on their last vBios!
> 
> 
> _See below for Screenshot Running MSI Duke OC with a EVGA Bios running 338w_


Picked up an XC Ultra, was trying to flash and got the error, did some reading, and came across this post, but did not see a solution.

Does anyone know if there is an updated 380W BIOS around? 

Thanks!
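The flashing failure described in the quoted post boils down to a version comparison on the XUSB firmware ID. A sketch using the two IDs quoted above; treating the IDs as plainly ordered integers is an assumption about how NVFlash compares them:

```python
# The flash refusal reduces to a version check: NVFlash rejects a BIOS whose
# embedded XUSB firmware ID is older than the one currently on the card.
# The two IDs are the ones quoted in the post; the ordering rule is assumed.

def can_flash(current_xusb_id, target_xusb_id):
    """Accept the target BIOS only if its XUSB firmware is the same
    version or newer than what is already on the card."""
    return target_xusb_id >= current_xusb_id

current = 0x70090003      # XUSB-FW on a newer factory vBIOS
galax_380w = 0x70060003   # XUSB-FW in the TPU Galax 380W BIOS

print(can_flash(current, galax_380w))  # False: matches the flash error
```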


----------



## TK421

TWiST2k said:


> Picked up a XC Ultra and was trying to flash and got the error, did some reading, came across this post, but did not see any solutions?
> 
> Does anyone know if there is an updated 380w bios around?
> 
> Thanks!


I have an XC Ultra (non-ICX) and can use the Galax 380W BIOS no problem.


----------



## TWiST2k

TK421 said:


> I have xc ultra (non ICX) and can use the galax 380w bios no problem


I am sure you and many others can, but my card is brand new and has a new BIOS on it. NVFlash will not allow an older XUSB to be flashed over a newer version; it's all in the post I quoted.


----------



## Sheyster

TWiST2k said:


> Does anyone know if there is an updated 380w bios around?


I'm only aware of the original 380W BIOS. I don't believe there is one with newer XUSB. 

If one exists, please post it if you have it!


----------



## KCDC

Nvidia Drivers Reportedly Enable New Checkerboard Rendering on Multi-GPUs

https://www.tomshardware.com/news/nvidia-driver-checkerboard-rendering-multi-gpu-sli

It only shows up in Inspector currently and doesn't do much of anything yet. Has it been added for something in the future? Multi-GPU without devs needing to support it, maybe? Seems like a strange thing to add at this time.
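As a toy illustration of what checkerboard rendering means here (as opposed to AFR's whole-frame alternation), tiles of a single frame alternate between the two GPUs:

```python
# Toy illustration of checkerboard rendering as described in the article:
# the frame is split into tiles, and alternate tiles go to each GPU, so both
# GPUs work on the same frame (unlike AFR, which alternates whole frames).

def assign_tiles(cols, rows):
    """Return a grid where tile (x, y) is assigned GPU 0 or 1
    in a checkerboard pattern."""
    return [[(x + y) % 2 for x in range(cols)] for y in range(rows)]

grid = assign_tiles(4, 2)
for row in grid:
    print(row)
# [0, 1, 0, 1]
# [1, 0, 1, 0]
```

Splitting within a frame avoids AFR's frame-pacing (microstutter) problem, at the cost of having to share scene data between the GPUs, which is presumably where NVLink bandwidth comes in.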


----------



## skupples

the future slowly trickles in.


----------



## TWiST2k

Sheyster said:


> I'm only aware of the original 380W BIOS. I don't believe there is one with newer XUSB.
> 
> If one exists, please post it if you have it!


I have been doing some digging today and if I come across anything that works properly, I will be sure to post an update here!


----------



## JustinThyme

Sheyster said:


> I saw the 8K Samsung at CEDIA last year. Amazing for sure. My post was primarily referencing high refresh rate FPS gaming on gaming monitors specifically. I should have been more clear about that. I'll probably stay on 2K until there is a decent 4K gaming monitor on the market that is 32" or larger. The new 38" LG looks interesting but is way too pricey IMHO:
> 
> https://www.lg.com/us/monitors/lg-38GL950G-B-gaming-monitor
> 
> Not quite 4K but close.


Yeah, that looks like better specs but pretty damn steep: pre-order for $1,800 USD.
Not much better than what I already have at 3440x1440.


----------



## keikei

If true, looks to be a decent bump: https://www.guru3d.com/news-story/is-nvidia-working-on-a-geforce-rtx-2080-ti-super-after-all.html


----------



## Sheyster

keikei said:


> If true, looks to be a decent bump: https://www.guru3d.com/news-story/is-nvidia-working-on-a-geforce-rtx-2080-ti-super-after-all.html


I'm starting to think we'll see a Ti Super soon.


----------



## keikei

Sheyster said:


> I'm starting to think we'll see a Ti Super soon.


The space is there in both timing and specs (between the OG 2080 Ti and the TITAN). What is a little surprising is the CUDA core count; it doesn't look like what NVIDIA did with the 2080. The 'Super' in this instance may actually live up to the moniker.


----------



## skupples

2080 Ti with one more cluster? Woot, I guess, lol.

As a side note, I was thinking about how folks around here keep referencing the cost of a car vs. the cost of a GPU.

Folks, if you can only afford a $1,500 car, you probably shouldn't be buying $1,500 GPUs. I suppose it's all priorities, though. I only spent $5K on my Civic after my CTS-V's engine popped.


----------



## ThrashZone

HI,
Lots of good deals on cars in the $1K+ price range. 
Saw a 2001 Mustang V6 that was a basket case, but the body was in good shape and so was the paint. 
One you can likely drive down the road with a little work; with the other, you can make believe you're driving...


----------



## chibi

I'm looking to purchase either the NVIDIA FE or the EVGA FTW3 Ultra 2080 Ti next week during Black Friday. I am not new to watercooling, so that may be an option in the future, but for now I would like to stay with the default air cooler.

That being said, is there any other card you would recommend besides the two I posted? I like the FE card for the best aesthetics, and the FTW3 for the performance. I'll be using a Meshify C case and am undecided between a D15S or an AIO for the CPU (9900KS).

What are your thoughts on the best 2080 Ti for this use case? Thanks


----------



## J7SC

VPII said:


> Hi there, so I decided to drop my entire overclock on the graphics card, core and memory running stock. Well I still get the same issue. At first it seems to be running as normal but about a quarter way through the bench I started getting the stutters again. FPS would be around 90 or so and would just drop to 60.



Weird...any idea what changed between the earlier stutter-free runs and the later ones? NV driver or Win 10 updates? Also, you obviously will have run the test with temp recording in the background, to make sure it is not down-stepping the GPU clock. 

When you run the test back-to-back, with just a minute or so of cooldown in between and no other apps open, do you get the same stuttering? The only other thing I can think of is some sort of system cache or PCIe lane bottleneck, but all this being 'long distance', I am just guessing...


----------



## postem

KCDC said:


> Nvidia Drivers Reportedly Enable New Checkerboard Rendering on Multi-GPUs
> 
> https://www.tomshardware.com/news/nvidia-driver-checkerboard-rendering-multi-gpu-sli
> 
> Only shows up in Inspector currently and doesn't do much of anything yet. It's been added for something in the future? Multi-gpu without devs needing to support it maybe? Seems like a strange thing to add at this time.


I'm quite happy I sold both my 1080s for a 2080 Ti. SLI wasn't that painful, but it almost always introduced microstutter, and SLI support was quite abysmal except for the few titles that supported DX12 multi-GPU rendering.

Everything that adds performance is useful, and since AMD has done it and Intel is working on interposers, it's natural to think NVIDIA is again considering multi-GPU cards. You can't keep increasing the die area; the only real problem is latency.


----------



## VPII

J7SC said:


> Weird...any idea what changed between the earlier stutter-free runs and the later ones ? NV driver or Win 10 updates ? Also, you obviously will have run the test with temp recording in the background to make sure it is not down-stepping GPU clock.
> 
> When you run the test 'back-to-back' with just a minute or so for cool down in-between and no other apps being opened, you get the same stuttering ? The only other thing I can think of is some sort of system cache or PCIe lane bottle-neck, but all this being 'long distance', I am just guessing...


Thanks, yup, there were driver updates. Funny thing is Fire Strike runs fine, and in some cases even better, but Time Spy is an issue, and only in the second game test. I'll check the PCIe setup in the BIOS just to confirm what it states.


----------



## skupples

postem said:


> Im quite happy i sold both my 1080s for a 2080ti. SLI wasnt that painful, however it almost always introduce microstutters and SLI support was quite abysmal, except for the few that supported dx12 multigpu render.
> 
> Everything that adds performance is useful, and probably as AMD done, and intel is working on interposers, its natural to think again nvidia is considering multi gpu cards. You cant keep increasing the die area, the only real problem is latency.


EXACTLY!

and how else will PC gaming separate itself from the onslaught of new console propaganda about 4K120-capable HDMI 2.1? The return of "dual GPU" and ACTUALLY maxing out that 4K120 4:4:4 capable port  

We're already seeing it, especially in Vulkan.


----------



## keikei

KCDC said:


> Nvidia Drivers Reportedly Enable New Checkerboard Rendering on Multi-GPUs
> 
> https://www.tomshardware.com/news/nvidia-driver-checkerboard-rendering-multi-gpu-sli
> 
> Only shows up in Inspector currently and doesn't do much of anything yet. It's been added for something in the future? Multi-gpu without devs needing to support it maybe? Seems like a strange thing to add at this time.


Green looks to branch out in terms of benefits of going to their side. Atm, both crossfire/SLI are deemed non-existent. If this tech comes to fruition, they will have another option over Red. I like competition as it pushes innovation. Maybe I can actually use those cards sitting on my shelf sooner than later. Imagine 100% scaling for quad cards. Jebus Christ. Lol.


----------



## kithylin

keikei said:


> Green looks to branch out in terms of benefits of going to their side. Atm, both crossfire/SLI are deemed non-existent. If this tech comes to fruition, they will have another option over Red. I like competition as it pushes innovation. Maybe I can actually use those cards sitting on my shelf sooner than later. Imagine 100% scaling for quad cards. Jebus Christ. Lol.


Just so everyone knows.. Checkerboard Rendering is -NOT- a new technology. PS4 has been doing it since it was first released and AMD has been using Checkerboard Rendering for CrossFire since the 2000's.
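The basic idea is simple enough to sketch in a few lines of Python (a toy illustration of tile assignment only, not how any actual driver implements it):

```python
# Toy sketch of checkerboard rendering: split the frame into tiles and
# assign them to two GPUs in an alternating pattern, so each GPU renders
# roughly half the pixels of every frame.

def assign_tiles(cols, rows):
    """Map each (col, row) tile index to GPU 0 or 1, checkerboard-style."""
    return {(c, r): (c + r) % 2 for c in range(cols) for r in range(rows)}

tiles = assign_tiles(4, 4)
# Adjacent tiles always land on different GPUs, and the work splits
# evenly: 8 tiles per GPU on a 4x4 grid.
print(sum(1 for gpu in tiles.values() if gpu == 0))  # 8
```

The even split is the whole appeal: unlike alternate-frame SLI, both GPUs work on the *same* frame, which is why it can avoid the frame-pacing microstutter AFR is known for.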


----------



## sblantipodi

Hope that Stadia will force NVIDIA to lower its prices.
With Stadia pushing titles at great quality, it gets ever more difficult to justify the price of a GPU.


----------



## skupples

good point, here's for hoping. 

and here's for a 3080ti *ticks money into side account*


----------



## J7SC

skupples said:


> good point, here's for hoping.
> 
> and here's for a 3080ti *ticks money into side account*



...*ticks money into side account* for a Ti Super... 3080 Ti / Ampere... MCM Hopper...


----------



## JustinThyme

postem said:


> Im quite happy i sold both my 1080s for a 2080ti. SLI wasnt that painful, however it almost always introduce microstutters and SLI support was quite abysmal, except for the few that supported dx12 multigpu render.
> 
> Everything that adds performance is useful, and probably as AMD done, and intel is working on interposers, its natural to think again nvidia is considering multi gpu cards. You cant keep increasing the die area, the only real problem is latency.


I'm even happy that I replaced my pair of 1080 Tis with a pair of 2080 Tis; about a 30% performance increase. I got in early on the 1080 Tis, before the bitcoin miners drove the price up to what the 2080 Tis cost at release.


----------



## keikei

JustinThyme said:


> Im even happy that I replaced my pair of 1080Tis with a pair of 2080Tis. about a 30% performance increase. I got in early on the 1080Tis before the bit miners drove up the price to make them cost as much as the 2080Tis when they were released.


The early mGPU driver is great news. I just hope it's released sometime next year. I would luv 100% SLI scaling automatically, without dev implementation. I could see this as a HUGE benefit. The up-and-coming 2020 games look to aim squarely at raising the graphical bar.


----------



## KCDC

Redshift, the GPU renderer that I use has finally added full NVLink geometry and VDB support. I finally have 20ish GB VRAM to use! Excited to test it out tonight.


----------



## J7SC

keikei said:


> The mgpu early driver is great news. I just hope its revealed sometime next year. I would luv 100% scaling SLI automatically without dev implementation. I could see this as a HUGE benefit. The up and coming 2020 games look to aim squarely at raising the graphical bar.





KCDC said:


> Redshift, the GPU renderer that I use has finally added full NVLink geometry and VDB support. I finally have 20ish GB VRAM to use! Excited to test it out tonight.



...my dual 2080 Ti GPUs have already been coming in handy in more than one app/game, but this is potentially great news


----------



## KCDC

J7SC said:


> ...my dual 2080 Ti GPUs have already been coming in handy in more than one app/game, but this is potentially great news


Yes it's pretty much what I've been waiting/hoping for since I bought these cards and a big selling point over quadro/pro cards. VRay just added it as well.


----------



## keikei

J7SC said:


> ...my dual 2080 Ti GPUs have already been coming in handy in more than one app/game, but this is potentially great news



2020 seems to have several stars aligning for me in regards to a big rig update. Mainly, a 2nd GPU and monitor. Maybe I should be 'gud' after that. Lol.


----------



## J7SC

keikei said:


> 2020 seems to have several stars align for me in regards a big rig updated. Mainly, 2nd gpu and monitor. Maybe *i should be 'gud' after that*. Lol.



...Even if true, I suspect the industry will do everything in its power to persuade you of the opposite


----------



## keikei

J7SC said:


> ...Even if true, I suspect the industry will do everything in its power to persuade you of the opposite



No doubt. mGPU capability is a very compelling factor, especially for midrange: buy one now, then another one later, and all of a sudden you've got top-tier performance. I suspect AMD is also looking into this; these guys tend to copy/match whatever the other one does. Great for us. Thank God I had the foresight to get a full tower a year ago. I tend to buy stuff impulsively.


----------



## gfunkernaught

does anyone think that 42c is too warm for [email protected]?


----------



## J7SC

gfunkernaught said:


> does anyone think that 42c is too warm for [email protected]?



...'off the cuff', no - but would need to know more about your cooling setup

...while I can keep dual 2080 TIs on their own custom loop below 32 C now, that is with a big custom water setup...when initially test-benching the cards individually on a single pump / low setting and single 360 with quiet-mode fans, peak temps at 2160+ / 1.043v was in the low 40s with 21c ambient


----------



## jura11

gfunkernaught said:


> does anyone think that 42c is too warm for [email protected]?


Hi there 

It depends on ambient, radiators, pump and other things like power limit and BIOS too, but in higher ambient temperatures I would say those temperatures are not bad, in games as well; some games pull crazy wattage, and in some games you can see a higher power limit or power draw.

In the current colder weather (18-21°C) my 4 GPUs won't pass 33-34°C in rendering; in a normal gaming scenario or benchmarks my Asus RTX 2080 Ti Strix with a 2160MHz OC wouldn't pass 36-38°C, and in most games I'm sitting at 32-34°C.

I dropped my fans to 600-650RPM from 700-800RPM in the current weather; I still think 500RPM would be enough fan speed for my loop and temperatures wouldn't suffer much.

Hope this helps 

Thanks, Jura


----------



## looniam

KCDC said:


> Nvidia Drivers Reportedly Enable New Checkerboard Rendering on Multi-GPUs
> 
> https://www.tomshardware.com/news/nvidia-driver-checkerboard-rendering-multi-gpu-sli
> 
> Only shows up in Inspector currently and doesn't do much of anything yet. It's been added for something in the future? Multi-gpu without devs needing to support it maybe? Seems like a strange thing to add at this time.


Someone claims to have it working:

https://www.forum-3dcenter.org/vbulletin/showthread.php?p=12144578#post12144578

The whole reason I popped in was to drop that link. Sounds like you need NVLink to get it working properly (mostly?).

where is @Baasha ?


----------



## HeadlessKnight

jura11 said:


> Hi guys
> 
> 
> I have tried this Crytek Neon Noir Benchmark and for love of god my scores are same like with 1080p or 1440p or even in Ultrawide 3440x1440, scores are around 3200-3300 points
> 
> Looks like I'm CPU limited or my CPU does bottleneck very badly in this benchmark
> 
> Running 5960x with 4.6Ghz currently and Asus RTX 2080Ti Strix with 2160MHz OC
> 
> Although I'm running RAM at 2113MHz which can make a bit difference but still my scores are very low if I'm comparing against the 4690k or 6700k etc and GTX1080Ti
> 
> GPU usage its very low in this benchmark, literally sits in 20-40% during the whole benchmark and in only two instances seen full boost clocks at 2145-2160MHz
> 
> Thanks, Jura


Hi

Disabling HPET fixed it for me, using these commands from an elevated command prompt:

bcdedit /deletevalue useplatformclock

bcdedit /set disabledynamictick yes

Regards.


----------



## gfunkernaught

J7SC said:


> ...'off the cuff', no - but would need to know more about your cooling setup
> 
> ...while I can keep dual 2080 TIs on their own custom loop below 32 C now, that is with a big custom water setup...when initially test-benching the cards individually on a single pump / low setting and single 360 with quiet-mode fans, peak temps at 2160+ / 1.043v was in the low 40s with 21c ambient


This is my custom setup:
res/pump > CPU > 240mm rad with 4x120mm static-pressure fans push/pull > GPU > 360mm rad with 3x120mm fans push > res/pump

The system typically cools the GPU to 18-20c above ambient. I'm convinced my card isn't a good overclocker at all, but the other day CoD MW crashed as the GPU was approaching 42c, after it had maintained 42c for a few minutes at the v/f I mentioned.


----------



## JustinThyme

gfunkernaught said:


> this is my custom setup:
> res/pump>cpu>240 rad with 4x120mm static fans push/pull>gpu>360mm rad with 3x120mm fans push>res/pump
> 
> the system typically has a capacity to cool the gpu at 18-20c above ambient. im convinced that my card isnt a good overclocker at all. but the other day codmw crashed when the gpu was approaching 42c and finally maintained 42c for a few minutes at the v/f i mentioned.


A 15c delta or less is desirable. What block? How are your liquid temps?


----------



## J7SC

gfunkernaught said:


> this is my custom setup:
> res/pump>cpu>240 rad with 4x120mm static fans push/pull>gpu>360mm rad with 3x120mm fans push>res/pump
> 
> the system typically has a capacity to cool the gpu at 18-20c above ambient. im convinced that my card isnt a good overclocker at all. but the other day codmw crashed when the gpu was approaching 42c and finally maintained 42c for a few minutes at the v/f i mentioned.


 
...assuming ambient temps around 20c give or take, I don't think 42c would be the cause of the crash. By 42c, the card will have started to down-clock by at least one bin anyway. Perhaps try to step down your initial 'oc' just a bit (ie one bin)...you might even get overall higher FPS/scores as a result, given the way NVidia boost works these days. Also, do you have some way to measure VRAM temps (ie via an infrared device) ? Even with an overall well-cooled card, if some of the VRAM chips present a hot spot, that can cause a bit of an issue.
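The bin stepping described above can be modeled roughly like this (a toy sketch: the temperature thresholds and the ~15 MHz bin size are illustrative assumptions, not NVIDIA's actual boost tables):

```python
# Rough sketch of GPU Boost-style thermal down-binning: the card sheds
# one clock bin (~15 MHz assumed here) each time a temperature threshold
# is crossed. Threshold values are made up for illustration.

BIN_MHZ = 15

def boosted_clock(max_boost_mhz, temp_c, thresholds=(42, 48, 54, 60)):
    """Return the effective clock after thermal down-binning."""
    bins_dropped = sum(1 for t in thresholds if temp_c >= t)
    return max_boost_mhz - bins_dropped * BIN_MHZ

# Below the first threshold the card holds full boost; at the threshold
# it has already shed a bin, which is why a slightly lower manual OC can
# sometimes sustain a higher *average* clock.
print(boosted_clock(2100, 38))  # 2100
print(boosted_clock(2100, 43))  # 2085
```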


----------



## gfunkernaught

JustinThyme said:


> 15C delta or less is desirable. What block? Hows your liquid temps?


EKWB Vector. I don't have any water temp sensor, but the CPU temp is similar to the GPU's while gaming, so I'm guessing the water temp is a few degrees cooler.

I have a Corsair 760T case, not ideal for real custom loops. I have a 200mm fan installed in the side panel blowing directly onto the CPU, because I saw lower GPU temps that way (cool air gets delivered to the 360mm rad at the top of the case) than when I had it mounted in the center of the side panel. The 240mm rad is mounted vertically at the front of the case, with its fans blowing air out of the case. I used to have them reversed, since traditionally front-mounted fans should be intake, but I saw higher temps because the rad heated the air going into the case. So now the 200mm fan is the only intake.


----------



## gfunkernaught

J7SC said:


> ...assuming ambient temps around 20c give or take, I don't think 42c would be the cause of the crash. By 42c, the card will have started to down-clock by at least one bin anyway. Perhaps try to step down your initial 'oc' just a bit (ie one bin)...you might even get overall higher FPS/scores as a result, given the way NVidia boost works these days. Also, do you have some way to measure VRAM temps (ie via an infrared device) ? Even with an overall well-cooled card, if some of the VRAM chips present a hot spot, that can cause a bit of an issue.


I don't have an IR imaging device; I do have an IR gun that shows temps but no image.

I don't know if it's GPU Boost 4 / Turing, or just my card in particular, but it can be very inconsistent when it comes to voltage/frequency. I use Afterburner. I set the power/temp limit to max first, then start Heaven, and the clock starts at [email protected] I set the clock offset to +130, which gives me [email protected] Then, like you said, while I game and the card hits 40c, it steps down to [email protected], and if the power limit is reached it will step down further to [email protected], [email protected], etc. Other times, though, it will drop the clock to 2085MHz but keep the voltage at 1050mv, which I think warms the card and uses extra power needlessly.

I have also tried using the V/F curve, with poor results. I'd set a point at [email protected] because I think that is the card's sweet spot. Most of the time it maintains that setting; other times it will drop the clock to 2070MHz and keep the voltage at 1043mv, again needlessly. So I'm at a loss. No matter what BIOS I use, except the KINGPIN XOC BIOS, it behaves the same.

Just to give you an idea of how poor a sample this card is: with the XOC BIOS, I took my PC outside at 4c ambient and ran benchmarks at [email protected] successfully, but as soon as I played a game it crashed within seconds of load. So I guess the lower a card sits in the silicon lottery, the colder it has to be to get a good overclock.


----------



## gfunkernaught

I reversed the fans on the top and front rads, so now fresh air comes in from the front and top, and I also reversed the 200mm side fan to pull warm air out. The rear fan is still exhausting. Temps seem a bit more under control: I ran 8K Superposition twice in a row at 20c ambient and the card hit 36c by the end of the run, and in CoD MW at 4K max settings, 39c was the max temp with 22c ambient.

I've seen other water blocks, like the Heatkiller and Corsair's new one for the 2080 Ti, which seem to outperform the Vector. I bought a Microcenter warranty with this card, so I'm not sure I'm going to keep it if it's not good at overclocking; it may not be worth investing more money into it.


----------



## GRABibus

jura11 said:


> Hi there
> 
> Depending on ambient and radiators, pump and other things like power limit and BIOS too but in higher ambient temperatures these temperatures I would say they're not bad there and on games as well, some games pulls crazy wattage or in some games you can see higher power limit or power draw
> 
> In current colder weather(18-21°C) my 4*GPUs in rendering won't pass 33-34°C,in normal gaming scenario or benchmarks Asus RTX 2080Ti Strix with 2160MHz OC wouldn't pass 36-38°C but in most games I'm sitting in 32-34°C
> 
> I dropped my fans in current weather to 600-650RPM from 700-800RPM, still think 500RPM would be just enough as fan speed for my loop and temperatures wouldn't suffer as much
> 
> Hope this helps
> 
> Thanks, Jura


Hello Jura,
Referring to the discussion we had on this thread several weeks ago about GPU overclocking in DXR games: I noticed that I could play several hours of BFV with DXR enabled on my MSI AB profile "[email protected]" without any crash to desktop.
The game stabilizes at [email protected] with [email protected]°C at 22°C ambient temperature.

Some weeks ago when we discussed it, everything above 2070MHz (even at 1.093V) made me crash to desktop with DXR enabled in BFV, after only minutes of play.

Maybe you should give it a new try.

Stability seems to have improved (recent BFV patches? Drivers?...). This must of course be confirmed by more hours of gaming.


----------



## JackCY

It's more like you're riding the edge of instability if simple software changes affect the stability of your hardware. In other words, some loads are stable and others aren't, whereas you want everything stable to call the OC proper and have no issues in any use. Games are hardly the best stability test for GPUs.


----------



## gfunkernaught

GRABibus said:


> Hello Jura,
> refering to the discussion we had on this thread several weeks ago on GPU overclock with DXR games, I noticed than I could play several hours of BFV with DXR enabled with my MSI AB profile "[email protected]" without any "Crash To Desktop".
> The game stabilizes at [email protected] with [email protected]°C at 22°c ambiant temperature.
> 
> Some weeks ago when we discussed about it, everything above 2070MHz (Even at 1.093V) made me Crash To Desktop with DXR enabled in BFV, only after minutes of game.
> 
> Maybe you shoudl give a new try.
> 
> Stability seems have to improve (Last BFV patches ? Drivers ?...). This must be confirmed of course byh more hours of gaming.


What I've seen is that the RT cores are more sensitive to clock speed and temperature. Try turning DXR off in BFV with the same clock/voltage and see if you still get a crash. More voltage isn't always good; every piece of silicon has a sweet spot you can find. Mine seems to be [email protected], though I have played games, including ones that use ray tracing, for hours and hours at [email protected], but sometimes I'd get a random crash.


----------



## skupples

keikei said:


> No doubt. Mgpu capability is a very compelling factor, especially for midrange. Buy one now, then another one later. All of a sudden you go top tier performance. I suspect AMD is also looking into this, these guys tend to copy/match whatever the other one does. Great for us. Thank God I had the foresight to get a full tower a year ago. I tend to buy stuff impulsively.


unfortunately, it's never gonna be that rosy.

the best bet is always going to be to buy the best single card you can, and this will remain the case until mid-range cards have the same memory setups as the big chip, which will never happen.


----------



## GRABibus

gfunkernaught said:


> what ive seen is that the RT cores are more sensitive to clock speed and temperatures. Try turning DXR off in BF5 with the same clock/voltage and see if you still get a crash. more voltage isnt always good. every piece of silicon has a sweet spot that can find. mine seems to be [email protected] but i have played games including ones that use ray tracing for hours and hours at [email protected] but sometimes i'd get a random crash.


I don't want to turn DXR off; since I have an RTX card, I want to get all the benefits of it.
So I have set up different OC profiles in MSI AB that haven't crashed so far, to account for DXR or not depending on the game.
It took me several weeks of tests... as for everybody here 


For "Non DXR games" :
Profile 1 : + 160MHz offset in MSI AB and then V/F curve on [email protected]
Profile 2 : + 160MHz offset in MSI AB and then V/F curve on [email protected]

Both seems to be stable since weeks in all my games without DXR.


With DXR, it is another story.

For BFV with DXR enabled :
Profile 3 : + 160MHz offset in MSI AB and then V/F curve on [email protected] => to be tested longer.
Profile 4 : + 145MHz offset in MSI AB and then V/F curve on [email protected]

For CoD MW with DXR enabled :
Profile 5 : + 130MHz offset in MSI AB and then V/F curve on [email protected]
Other potential profile : + 115MHz offset in MSI AB and then V/F curve on [email protected]

For BFV and CoD MW without DXR enabled, profiles 1 and 2 above seem stable as well, but I didn't play either game long enough without ray tracing.
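For reference, what those "offset + curve point" profiles do can be sketched like this (a toy model of the Afterburner procedure; the curve numbers are invented for illustration):

```python
# Toy model of the MSI Afterburner "offset then flatten" V/F trick:
# shift the whole stock curve up by the offset, then flatten every point
# at or above the chosen voltage so the card never requests more voltage
# than that. Curve values below are invented, not from a real card.

def oc_curve(stock_curve, offset_mhz, cap_mv):
    """stock_curve: dict of millivolts -> MHz. Returns the capped OC curve."""
    shifted = {mv: mhz + offset_mhz for mv, mhz in stock_curve.items()}
    cap_mhz = shifted[cap_mv]
    return {mv: (cap_mhz if mv >= cap_mv else mhz)
            for mv, mhz in shifted.items()}

stock = {950: 1905, 1000: 1950, 1043: 1995, 1093: 2025}
curve = oc_curve(stock, offset_mhz=145, cap_mv=1043)
# Points above the cap voltage collapse to the cap clock, so the card
# tops out at the frequency of the chosen voltage point.
print(curve[1093] == curve[1043])  # True
```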


----------



## skupples

memory left stock?


----------



## GRABibus

skupples said:


> memory left stock?


No :
8150MHz in CoD MW
8200MHz in all other games


----------



## J7SC

gfunkernaught said:


> i reversed the fans of the top rad and front rad so now fresh air comes in from the front and top, and also reversed the 200mm side fan to pull warm air out. the rear fan is still pulling warm air out. temps seem a bit more in control. i ran 8k superposition twice in a row with ambient temp at 20c and the card hit 36c by the end of the run. codmw at 4k max settings, 39c was the max temp with 22c ambient. i've seen these other water blocks like the heatkiller and corsair's new one for the 2080 ti which seems to outperform the vector. i bought a microcenter warranty with this card so im not sure if im going to keep it if its not good at overclocking, may not be worth investing more money into it.


 
Your temps seem quite good now after the loop changes. For what it is worth, I usually run separate loops for CPU and GPU (especially with dual GPUs with as much wattage as 2080 Ti), but I know that is not absolutely necessary - I just like building weird loops as I have a store room of old but functional w-cooling stuff and also mostly use HEDT 'heater' CPUs  . The key with cooling these GPUs is keeping them from raising the temps past a given delta (delta preferably below 20 C, depending on rad setup) even after hours of play.

As to trading the card in, you obviously have to consider what the new GPU might or might not be able to do in 'oc'. In daily use, I doubt that an extra 30 MHz (ie 2060 vs 2090 or so) really makes that much of a difference, but that said, I tend to push every piece of computer equipment to its sustainable max myself...


----------



## scaramonga

So, fitting my new Gigabyte Aorus Geforce RTX 2080 ti Xtreme Waterforce WB tomorrow, and wish to know which BIOS to flash for best clocks, bearing in mind that this card is fitted with a waterblock 

The Galax one?

Thx guys!


----------



## GRABibus

scaramonga said:


> So, fitting my new Gigabyte Aorus Geforce RTX 2080 ti Xtreme Waterforce WB tomorrow, and wish to know which BIOS to flash for best clocks, bearing in mind that this card is fitted with a waterblock
> 
> The Galax one?
> 
> Thx guys!


It's a nice card.
You should first test it and try some OCs with the factory BIOS, to check whether it is a good clocker or not.
Whatever the BIOS, and even though it is watercooled, a poor clocker is a poor clocker... silicon lottery...

As your card is an "A" card with 2x8-pin, the Galax 380W one should work. But you do it at your own risk 

Your power limit is 366W and the Galax one is 380W. Not much gain in terms of power limit.


----------



## J7SC

...I have been running two of those Aorus Xtr WB cards for almost a year and really like them. Good advice above re. benching them first to establish a baseline, for example with Superposition 4K or 8K, TimeSpy or PortRoyal. Per past posts, even with stock bios, I hit 375 W to 380 W in GPUz regularly with PL maxed. I ended up not even bothering with another bios...just using good old MSI AB 'sliders', usually w/o any extra voltage. 

That said, I recall that the Asus Strix XOC bios (way high power limit) has worked for others with this card model, albeit w/o all the output ports working (DSP?). Apparently, that XOC bios though will really heat things up, w/o large MHz gains, so caution. In any case, these cards do their best with lots and lots of cooling (my separate GPU loop for the two has a total of 1080/60 rads etc). The colder you can get the card(s), the better.


----------



## gfunkernaught

J7SC said:


> Your temps seem quite good now after the loop changes. For what it is worth, I usually run separate loops for CPU and GPU (especially with dual GPUs with as much wattage as 2080 Ti), but I know that is not absolutely necessary - I just like building weird loops as I have a store room of old but functional w-cooling stuff and also mostly use HEDT 'heater' CPUs  . The key with cooling these GPUs is keeping them from raising the temps past a given delta (delta preferably below 20 C, depending on rad setup) even after hours of play.
> 
> As to trading the card in, you obviously have to consider what the new GPU might or might not be able to do in 'oc'. In daily use, I doubt that an extra 30 MHz (ie 2060 vs 2090 or so) really makes that much of a difference, but that said, I tend to push every piece of computer equipment to its sustainable max myself...


Yeah, actually it is better than before. Unless I keep a window open, the ambient temp will rise due to the heat exhausted from the PC; runaway heat effect. It hit 39 after hours of play because the air surrounding the PC got warm. I'm still playing with the fan speeds to find a good noise/performance ratio.

And yes, I agree with you on the trade-in. It's always a gamble, and it seems NVIDIA keeps their best-binned GPUs for themselves; I've read that if you want the best water-cooling experience with a reference board, get the Founders Edition, which makes sense. The warranty is up around May 2021, so I'll wait until something else comes out and just have a $1250 Microcenter gift card to use whenever I need to go there. :specool:


----------



## skupples

What?

I think you're confused about something, or attempting to commit fraud. You typically get the same thing you sent in back on a warranty claim, and how do you expect to turn a 2080 Ti back into $1,250 at Microcenter?


----------



## kithylin

skupples said:


> what?
> 
> i think you're confused about something, or attempting to commit fraud. you typically get the same thing you sent in on a warranty claim. and how do you expect to turn a 2080ti back into 1250 @ microcenter?


They're hoping that they will get a better binning in the silicon lotto by returning it and trying a second card. Yes this is 100% completely a fraudulent misuse of the return policy: Returning something when nothing is actually wrong with it just because it doesn't overclock as they want.


----------



## keikei

G-Sync + OLED = wowie. We don't have 4K/144Hz OLED yet.


----------



## gfunkernaught

kithylin said:


> skupples said:
> 
> 
> 
> what?
> 
> i think you're confused about something, or attempting to commit fraud. you typically get the same thing you sent in on a warranty claim. and how do you expect to turn a 2080ti back into 1250 @ microcenter?
> 
> 
> 
> They're hoping that they will get a better binning in the silicon lotto by returning it and trying a second card. Yes this is 100% completely a fraudulent misuse of the return policy: Returning something when nothing is actually wrong with it just because it doesn't overclock as they want.

Actually it's in the terms that I can return it for whatever reason. It's a paid warranty. They are aware of it and capitalize on it by putting the same GPU back on the shelves, where they can collect yet another extended warranty fee for it. I am way past the 30-day return period. There's nothing fraudulent about claiming a card doesn't overclock well and bringing it back to the store two years later for store credit when you purchased the right to do so. I once bought a used card from them that didn't say anything was wrong on the store label, but it showed artifacts right at the desktop without a load; I brought it back the same day and told them about it, and a few weeks later I found it back on the shelf. Fraudulent would be swapping the PCB and all the serial numbers and stickers onto an older or cheaper GPU that's dead and returning that.


----------



## kithylin

gfunkernaught said:


> Actually it's in the terms that I can return it for whatever reason. It's a paid warranty. They are aware and capitalize on it by putting the same gpu back on shelves and can collect yet another extended warranty fee for it. I am way passed the 30-day return period. There's nothing fraudulent about claiming a card doesn't overclock well and bringing it back to the store two years later for store credit when you purchased the right to do so. I've bought a used card from them once and it didn't say anything was wrong with it on the store label, but it showed artifacts right at the desktop without a load. Brought it back the same day and told them about it. I checked their stock a few weeks later and found it back on the shelf. Fraudulent would be swapping the PCB and all the serial numbers and stickers with an older or cheaper gpu that's dead and returning it.


Do be aware that EVGA will only honor their warranty so many times before they deny it on you, paid or not. If you intend to keep warrantying cards until you find a golden sample that runs 2200 mhz then you will be cut off at some point. And once you get warranty denied, you won't be able to file any further claims on your card, even if it actually dies a year later. They keep a record of all of your warranty claims. Also EVGA does send you back refurbished cards as warranty replacement, just so you know.


----------



## keikei

For those running SLI on non-WC, should I be worried about heat blowing on the bottom card? Do I just need to crank up the case fans?


----------



## ZealotKi11er

keikei said:


> For those running SLI on non-WC, should I be worried about heat blowing on the bottom card? Do I just need to crank up the case fans?


I would not SLI unless you are WC. These GPUs run hot in a single GPU configuration already. Case Airflow will only help so much.


----------



## Sheyster

keikei said:


> For those running SLI on non-WC, should I be worried about heat blowing on the bottom card? Do I just need to crank up the case fans?


Having a fan directly above the cards helps a lot, and good case air-flow is critical. When I ran 2 Titan-X Maxwells I had a 200mm fan directly above the cards. It was very effective.


----------



## Sheyster

keikei said:


> For those running SLI on non-WC, should I be worried about heat blowing on the bottom card? Do I just need to crank up the case fans?


Here is a pic, you can see the edge of the 200mm fan on top. Cards ran nice and cool at 1450 MHz all day.


----------



## skupples

using an in-store extended warranty is a bit different than using EVGA's extended warranty.

either way, I highly doubt the extended warranty says you can return a card for no reason, or because it won't OC an extra 15 MHz.

So badly want to add C9 to my roster, but I think i'll be adding INDEX this year instead. Keep my 4K60 until a bit more HDMI 2.1 competition shows up.


----------



## KCDC

skupples said:


> So badly want to add C9 to my roster, but I think i'll be adding INDEX this year instead. Keep my 4K60 until a bit more HDMI 2.1 competition shows up.


I keep stopping myself from getting one, my B7 works/looks great and gsync 120FPS 4K isn't that important... that's what I continue to tell myself... it's not really working, though. 

Played Assassin's Creed Odyssey on it for the majority of the weekend after getting the settings right for mostly solid 60 FPS, and it feels/looks better than anything I could fathom on console, so I'm happy with what I have. Plus no crazy loud fans going in my media center. Getting anything near 120 would start to lean it closer to console quality. Probably a generation or two of cards before it makes the most sense. Got the Shadow of the Tomb Raider collection (70% off on Steam, btw) and looking forward to seeing how that looks. Really shouldn't have plugged my PC into my OLED, now no one sees me anymore.

Still trying to get the HDR settings just right on the TV side. I took settings from my Xbox One X and the results are close but not perfect. Color banding and such in some things like HUD and interface elements.


----------



## KCDC

looniam said:


> someone claims to have it working:
> 
> https://www.forum-3dcenter.org/vbulletin/showthread.php?p=12144578#post12144578
> 
> the whole reason i popped in was to drop that link. need NVLink to get it working proper (mostly?)
> 
> where is @Baasha ?


From what I remember reading, NVLink is needed for the bandwidth required. Looks like I am messing with Inspector again. Maybe it will work for something this time 

EDIT: Maybe I can get it to work for something this time.


----------



## skupples

KCDC said:


> I keep stopping myself from getting one, my B7 works/looks great and gsync 120FPS 4K isn't that important... that's what I continue to tell myself... it's not really working, though.
> 
> Played Assassin's Creed Odyssey on it for the majority of the weekend after getting the settings right for mostly solid 60 FPS, and it feels/looks better than anything I could fathom on console, so I'm happy with what I have. Plus no crazy loud fans going in my media center. Getting anything near 120 would start to lean it closer to console quality. Probably a generation or two of cards before it makes the most sense. Got the Shadow of the Tomb Raider collection (70% off on Steam, btw) and looking forward to seeing how that looks. Really shouldn't have plugged my PC into my OLED, now no one sees me anymore. Still trying to get the HDR settings just right on the TV side. I took settings from my Xbox One X and the results are close but not perfect. Color banding and such in some things like HUD and interface elements.


yep, i'm thinking about grabbing the Asus TUF 3440x1440 to replace my dying ASUS Color Pro 4K60. it's only $600, compared to Alienware's $1200 for G-Sync and 20 more Hz.


----------



## jura11

GRABibus said:


> Hello Jura,
> referring to the discussion we had on this thread several weeks ago about GPU overclocking in DXR games, I noticed that I could play several hours of BFV with DXR enabled using my MSI AB profile "[email protected]" without any "Crash To Desktop".
> The game stabilizes at [email protected] with [email protected]°C at 22°C ambient temperature.
> 
> Some weeks ago when we discussed it, everything above 2070MHz (even at 1.093V) made me Crash To Desktop with DXR enabled in BFV, after only minutes of play.
> 
> Maybe you should give it a new try.
> 
> Stability seems to have improved (last BFV patches? Drivers?...). This must be confirmed, of course, by more hours of gaming.



Hi there 

Glad you now have a better OC experience with your GPU. Sadly I didn't have much time to do more tests in games; I played BFV probably around when it was released, and sadly I don't play MP. I will probably retest the game when time allows, and will need to test new drivers too. I don't update my drivers because I use the PC for rendering, where I need stability above all, or rather I prefer stability over better performance.

Maybe the CTD errors were caused by the drivers or the game itself, hard to say, but it seems the OC in your case is a lot better.

I will do a few tests later in the week, because I'm interested too in whether the OC will be better in rendering, since currently I just can't push my Asus RTX 2080 Ti Strix beyond 2070MHz in rendering.


Hope this helps 

Thanks, Jura


----------



## KCDC

jura11 said:


> Hi there
> 
> Glad you now have a better OC experience with your GPU. Sadly I didn't have much time to do more tests in games; I played BFV probably around when it was released, and sadly I don't play MP. I will probably retest the game when time allows, and will need to test new drivers too. I don't update my drivers because I use the PC for rendering, where I need stability above all, or rather I prefer stability over better performance.
> 
> Maybe the CTD errors were caused by the drivers or the game itself, hard to say, but it seems the OC in your case is a lot better.
> 
> I will do a few tests later in the week, because I'm interested too in whether the OC will be better in rendering, since currently I just can't push my Asus RTX 2080 Ti Strix beyond 2070MHz in rendering.
> 
> 
> Hope this helps
> 
> Thanks, Jura


I've found very minimal frame time differences when GPU rendering overclocked vs. stock card settings, maybe a few seconds per frame depending on the scene. If you're rendering out tens of thousands of frames, I could see the benefit. Redshift straight up crashes to desktop if I attempt anything close to my stable game OC settings; this may have something to do with RTX options being enabled. Of course, the devs also recommend leaving the hardware at stock settings to minimize stability issues. Not sure if Arnold, Octane or V-Ray are the same. Cycles doesn't care and renders fine with the game OC, but I think that's because it's not as robust of an engine. Still not much of a time difference for the extra strain and heat produced, IMO. The one area I see a slight difference is in real-time previews when editing scenes.


----------



## gfunkernaught

to those who are confused about the off-topic warranty thing: it's Microcenter's warranty (which I said in an earlier post), not EVGA's. and yes, Microcenter says if you give us more money you can bring that thing back any time within 2-3 years for whatever reason and we'll test it and put it back on the shelf. I'm obviously paraphrasing, but it's the truth. anyway...

now that I reversed the airflow in my case to pull air in from the top and front through my rads, my GPU temp dropped about 2-3C; the highest I've seen was 39C with 22C ambient. not great, but better than before. using the 380W bios, I'll start up a game like CoD MW and the clock stays at [email protected] until the temps hit 38C, then it changes to [email protected], and at 39C drops to [email protected]. weird. why would it raise the voltage like that?


----------



## skupples

must be part of the reason why they never moved past a couple dozen stores (good things don't go far). I'd pay it too, if Microcenter did any of their cool in-store stuff for web customers. I miss Circuit City, CompUSA, etc. Bezos, you don't come anywhere close. 1-2 days? psssh, how about driving to the store to stuff a cart full of parts. Best I can do in FL is driving to the TigerDirect warehouse store in Miami, or up to Orlando-ish for WC parts.


----------



## JustinThyme

skupples said:


> must be part of the reason why they never moved past a couple dozen stores (good things don't go far)  I'd pay it too, if Microcenter did any of their cool in-store stuff for web customers. I miss circuit city, compUSA, etc. Bezos, you don't come anywhere close. 1-2 days? psssh, how about driving to the store to stuff a cart full of parts. Best I can do in FL is driving to the TigerDirect warehouse store in miami, or up to Orlando'ish for WC parts.


I'm lucky enough to have two stores to choose from, one in Brooklyn and one in Paterson, NJ. I try to stay out of Brooklyn though. I don't buy the extended warranties. I generally run the crap out of the hardware while it's still in the standard return window to weed out weak parts or just plain infant mortality. I've bought my 1080 Tis and 2080 Tis there for a pretty good discount over other outlets. Like you said though, the thing that sucks is that to get those deals you have to drive to the store.


----------



## skupples

the last cards I purchased in-store were GK110 Titans from CompUSA. They were $999, while everyone else was paying at least a $100 premium over MSRP. Not much of a deal really, but it was so nice to be able to just stroll in, grab a couple, n stroll out (after paying)

the voltage is dropping because the card likes to keep itself cool, is basically my understanding. The warmer they are, the lower they clock. I'd assume this is reflected in voltage fluctuations as well. Clocks drop, volts drop, 1C is saved. I bet these beasts are loving winter. too bad we don't get that here.


----------



## ahnafakeef

skupples said:


> yep, i'm thinking about grabbing Asus TUF 3440x1440p to replace my dying ASUS Color Pro 4K60. it's only $600, compared to alienware's 1200 for gsync and 20 more hz.


B9 55" for $1200: https://www.videoonly.com/lg-ratings/

Reference: https://www.reddit.com/r/OLED/comments/e1kkqn/black_friday_deals/


----------



## gfunkernaught

skupples said:


> the last cards I purchased in-store were GK110 titans from compUSA  They were $999, while everyone else was paying at least $100 premium over msrp. Not much of a deal really, but it was so nice to be able to just stroll in, grab a couple, n stroll out (after paying)
> 
> the voltage is dropping because the card likes to keep itself cool, is basically my understanding. The warmer they are, the lower they clock. I'd assume this is reflected in voltage fluctuations as well. Clocks drop, volts drop, 1c is saved. I bet these beasts are loving winter. too bad we don't get that here.


I get that the frequency and voltage drop to keep cool. Did you read my post? My card will start at [email protected], then raise the voltage to 1068mV once the temp hits 38C. Counter-productive, don't you think?


----------



## Brodda-Syd

keikei said:


> For those running SLI on non-WC, should I be worried about heat blowing on the bottom card? Do I just need to crank up the case fans?


Yes!!!
Case fans on high when SLI is enabled!
My main card will go to 70C at 1975MHz when I game using a single card.
However, with SLI enabled it will hit 83C and clock down to 1800MHz, while the 2nd card stays at 67C.
Thankfully my Silverstone Raven RV02-E is just about the best airflow case ever designed.
In any other case your cards will clock down even further.


----------



## NBrock

skupples said:


> yep, i'm thinking about grabbing Asus TUF 3440x1440p to replace my dying ASUS Color Pro 4K60. it's only $600, compared to alienware's 1200 for gsync and 20 more hz.



The Alienware 3440x1440 @ 120Hz monitor is $649 right now. The deal is on Slickdeals; it's Newegg's eBay store.

https://slickdeals.net/f/13597105-3...c-curved-ips-led-monitor-650?src=featured-cat

I have the monitor and love it. I picked it up from Microcenter a while back for $800. I got to compare it to the Asus and Acer, and the quality/feel is much better than the models I saw in person. I also have zero light bleed.

I also used the calibration setting TFT Central tested and recommended and I couldn't be happier. 

https://www.tftcentral.co.uk/reviews/dell_alienware_aw3418dw.htm


----------



## skupples

hmm, interesting... 

the one I referenced is the newest model, looks like. it's super sleek now, thus totally justifies putting it back up @ $1200   

which asus did you compare it with?


----------



## dph314

Hey guys. Just had a quick question that I was hoping someone might be able to help with. I searched for the issue and found a few people with the same one back in May/June but after reading through several pages afterwards it doesn't look like they solved it (or I just missed it).

Either way, I have an EVGA XC2 Ultra (1E07) and I've been trying to flash the Galaxy 380W BIOS, but I keep getting this error:

"XUSB FW component of the input GPU firmware image is incompatible, please use
a newer version of GPU firmware image for this product.
BIOS Cert 2.0 Verification Error, Update aborted.
Nothing changed!
ERROR: Invalid firmware image detected."

I followed the directions exactly, used the "nvflash64 --protectoff" and "nvflash64 -6 GalaxyBIOS.rom" commands, but keep getting that error. I also saw it suggested that it might be a driver issue from swapping cards but I ran DDU twice and just reinstalled the drivers. Any assistance would be appreciated.


----------



## NBrock

skupples said:


> hmm, interesting...
> 
> the one I referenced is the newest model, looks like. it's super sleek now, thus totally justifies putting it back up @ $1200
> 
> which asus did you compare it with?



I'm not sure what the model number was. It was the 3440x1440 @ 100hz that came out shortly before the Alienware 3440x1440 @120hz. Same with the Acer.


Edit:
I didn't know they came out with an updated version (of the Alienware). After checking, I see what you mean about the redesign...but honestly the redesign and the slightly faster response time don't seem worth double the price. I have mine on a VESA mount and arm to save some room on the desk. The previous model's stand took up a lot of room, but it is sturdy AF and looks nice enough. It's heavy though...but that's because it is mostly metal and heavy-duty plastic.

I'd have to see the "nano color" ips in person to know how much that really makes a difference over the original model.


----------



## NewType88

Those with flashed BIOSes: when it comes time to sell your card, do you typically flash it back to stock, or just leave it for the informed buyer? What's typical?


----------



## chibi

NewType88 said:


> Those with flashed bios, when it comes time to sell your card, do you flash it back to stock typically or just leave it for the informed buyer ? What's typical ?



Inform new buyer. Then let them know you can flash back to the original bios if preferred. Hopefully you have a backup of the bios. :thumb:


----------



## NewType88

chibi said:


> Inform new buyer. Then let them know you can flash back to the original bios if preferred. Hopefully you have a backup of the bios. :thumb:


Ya, I followed the instructions and made a backup. I guess I just do the steps over again and select the saved backup BIOS to reflash to stock? I'm getting ahead of myself. I want a new block to match my build better, but I'll probably just wait for next gen.


----------



## chibi

NewType88 said:


> Ya, I followed the instructions and made a back up. I guess I just do the steps over again and select the saved back up bios, to reflash to stock ? I'm getting ahead of myself. I want a new block to match my build better, but ill probably just wait to next gen.



That's right, just do the same steps, but select the original BIOS instead of the upgraded one. Good luck! I'm really tempted to get the EVGA 2080 Ti FTW3 this Black Friday, but like you, I'm going to try and wait for next gen.


----------



## skupples

I'd rather throw that kinda cash at a C9 or Valve Index.


----------



## Cobra652

*rtx bundle activation*

Is there a helpful old and reliable member here who can help me with a Modern Warfare RTX bundle activation?
I activated one with my RTX card, but I got another copy and want to activate it for my brother as a Christmas gift.
Can anyone help me, please?
Thank you very much!


----------



## JustinThyme

Cobra652 said:


> Is there a helpful old and reliable member here who can help me with a Modern Warfare RTX bundle activation?
> I activated one with my rtx card, but i got another copy and want to activate it to my brother for christmas gift.
> Anyone can help me please?
> Thank you very much!


What I do know is the bundles are locked to the serial numbers. I thought I could install one of two and found out differently. Had the same thought of passing it on to someone else. I have heard of putting one card in at a time; alternatively, you may be able to put one of the cards in long enough to install the game and then take it back out. This is all provided the window to claim the game isn't over.


----------



## skupples

I assume you'd get the key after the serialization pass. I didn't get far enough with my 5700 XT to find out; I got just far enough to learn I had to install their serializing tool, and/or email them with proof of purchase via the support system.


either way, if you need help, I got you. Zero interest in FPS, mucho interest in keeping the embers of OCN alive.


----------



## JustinThyme

skupples said:


> I assume you'd get the key after the serialization pass. I didn't get far enough with my 5700XT to find out, I got just far enough to find out I had to install their serializing tool, and or email them w. proof of purchase via the support system.
> 
> 
> either way, if you need help. I got you. Zero interest in FPS, mucho interest in keeping the embers of ocn alive.


My last two were emails from the retailer then the hoop jumping began.


----------



## Cobra652

JustinThyme said:


> What I do know is the bundles are locked to the serial numbers. I thought I could install one of two and found out different. Had the same thought of passing it on to someone else. I have heard of putting one card in at a time or alternately you may be able to put one of the cards in long enough to install the game and take it back out this is all provided the time to claim the game isn't over.


Yes, it's locked to the VGA hardware. I have 2 MW bundle codes. One is redeemed with my card. The second, I can't redeem with my card. So I need someone who can activate it for me with another RTX card via GeForce Experience.
I can pay 5-10 dollars if needed. So I'm searching for a helpful member. Thanks!


----------



## dph314

So I read some more, and some more, and found out that my issue with not being able to flash the Galaxy 380W BIOS was that my card came with a newer XUSB firmware, which the card wouldn't let me revert by flashing an older BIOS, hence the error. But then the topic just kinda dropped off again. So, my new question is: with my EVGA XC2 Ultra, should I try looking for a newer version of the Galaxy 380W than what's linked in the OP, or are there any other BIOSes there that would work on my card, like the Extreme-OC ones? I'm not really too worried about losing an HDMI port, just don't want to brick the card or anything. Any insight into the easiest/safest route here would be much appreciated.


----------



## J7SC

dph314 said:


> So I read some more, and some more, and found out that my issue with not being able to flash the Galaxy 380W BIOS was that my card came with a newer XUSB firmware that the card wouldn't let me revert by flashing an older BIOS, hence the error. But then the topic just kinda dropped off again. So, my new question is- with my EVGA XC2 Ultra, should I try looking for a newer version of the Galaxy 380W than what's linked to in the OP or are there any other BIOS's there that would work on my card, like the Extreme-OC ones? I'm not really too worried about losing a HDMI-port, just don't want to brick the card or anything. Any insight into the easiest/safest route here would be much appreciated.


 
What is the maximum watt rating again on your EVGA XC2 Ultra (w/ power sliders on max) on its stock BIOS?


Edit - Linus doing some 2x 2080 Ti tile CFR here; looks like Metro Exodus plays well in that mode


----------



## dph314

J7SC said:


> What is the maximum watt rating again on your EVGA XC2 Ultra (w/ power sliders on max) on its stock bios ?


The card at its max +130% Power Target tops out at 338W. And it throttles badly, hitting 130% before even reaching a full load most of the time, so overclocking is pointless so far. No fun. But yeah, it's 2x 8-pin and stays at 63C. Just begging for more power. I was all excited to flash, but I'm still not sure if one of the Extreme ones would be okay or if I should try to find a newer Galaxy 380W one.
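As a sanity check on the slider math (using only the numbers in this post; the 260W baseline is inferred, not taken from EVGA's spec sheet), a +130% power target capping at 338W implies a roughly 260W stock limit:

```python
# Infer the stock power limit from the maxed power-target slider.
# Numbers come from the post above; the baseline is inferred, not official.
MAX_TARGET = 1.30   # power slider maxed at +130%
MAX_WATTS = 338.0   # observed cap at that setting

base_watts = MAX_WATTS / MAX_TARGET
print(f"implied stock limit: {base_watts:.0f} W")
```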


----------



## kithylin

dph314 said:


> The card at its max +130% Power Target has a max of 338W. And it throttles bad, hits 130% before even getting to hit a full load most of the time, so overclocking is pointless so far. No fun . But yeah it's a 2-8pin, stays at 63C. Just begging for more power. Was all excited to flash but I'm still not sure if one of the Extreme ones would be okay or if I should try to find a newer Galaxy 380W one.


Have you tried under-volting your card? That should reduce the power load and help prevent it from hitting the power limit while also reducing heat and may allow you to overclock the card properly.


----------



## dph314

kithylin said:


> Have you tried under-volting your card? That should reduce the power load and help prevent it from hitting the power limit while also reducing heat and may allow you to overclock the card properly.


No, haven't tried that yet. The voltage slider is all the way to the left in Afterburner though; it just goes from +0 to +100, so I don't know. Either way, I definitely want to get rid of this power limit. It does 2000MHz stock before the load gets too high and power-limit throttling kicks in, so annoying. And usually at only 1.062V too. A higher power limit and running 1.093V, or whatever the max is, is what I'm looking for. I just want to make sure I'm flashing the right BIOS; the Galaxy one linked in the OP won't work, but I'm hoping someone can confirm another XOC one works on reference boards (specifically the XC2 Ultra).


----------



## J7SC

dph314 said:


> No, haven't tried that yet. The voltage slider is all the way to the left in Afterburner though, just goes from +0 to +100, so I don't know. Either way, I definitely want to get rid of this Power Limit. Does 2000mhz stock before the load gets too high and power-limit throttling kicks in, so annoying. And usually at only 1.062v too. A higher power limit and running 1.093v or whatever the max is is what I'm looking for. I just want to make sure I'm flashing the right BIOS, the Galaxy linked in the OP won't work but I'm hoping someone else can confirm another XOC one works on reference boards (specifically the XC2 Ultra).


 
A new BIOS certainly would help with the PL, but throttling also occurs via temps per the NV Boost algorithm. At 63C (quite good for an air-cooled card, btw) you will have already lost at least 30 MHz. This is not to say that a new BIOS with a higher power limit is a bad idea, but the extra wattage (40W or so) will result in extra heat you might want to address as well.
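GPU Boost's temperature stepping can be pictured with a toy model. The bin size and thresholds below are illustrative assumptions chosen to match the "at least 30 MHz lost at 63C" observation above, not NVIDIA's actual tables:

```python
# Toy model of GPU Boost shedding clock bins as temperature rises.
# BIN_MHZ, STEP_C and BASE_C are assumed values, not official ones.
BIN_MHZ = 15    # assumed size of one boost bin
STEP_C = 10     # assumed degrees C per bin lost
BASE_C = 35     # assumed temperature where stepping begins

def boost_loss_mhz(temp_c):
    """Estimated boost clock lost to temperature under the toy model."""
    if temp_c <= BASE_C:
        return 0
    return int((temp_c - BASE_C) // STEP_C) * BIN_MHZ

print(boost_loss_mhz(63))  # 30 MHz lost at 63 C, in line with the post
```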


----------



## kithylin

dph314 said:


> No, haven't tried that yet. The voltage slider is all the way to the left in Afterburner though, just goes from +0 to +100, so I don't know. Either way, I definitely want to get rid of this Power Limit. Does 2000mhz stock before the load gets too high and power-limit throttling kicks in, so annoying. And usually at only 1.062v too. A higher power limit and running 1.093v or whatever the max is is what I'm looking for. I just want to make sure I'm flashing the right BIOS, the Galaxy linked in the OP won't work but I'm hoping someone else can confirm another XOC one works on reference boards (specifically the XC2 Ultra).


Actually that's not really what you should be looking for. Generally with the 2080 Ti series you want to try reducing voltage, not increasing it. Increasing volts is counter-productive, really, in the grand scheme of things. With modern cards, when we reduce voltage it also reduces heat and power usage, which helps prevent the cards from dropping clocks due to heat and hopefully keeps them from bouncing off the power limiter. The cards are governed by heat now and there's nothing any BIOS can do about that. As the cards get hotter they will reduce clocks; even if you had an unlimited BIOS with no power limit, you would still lose clock speed due to heat. It's how the new cards are designed.

With Afterburner open, press Ctrl+F and try 0.96V for 2000 MHz and see if that works better for you.
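The payoff of that undervolt can be roughed out with the first-order CMOS dynamic-power relation P ∝ V²·f (a textbook approximation, not a measurement of any specific card; the voltages are the ones quoted in these posts):

```python
# First-order CMOS dynamic power scales with V^2 * f, so holding
# 2000 MHz while dropping from ~1.062 V (stock) to 0.96 V cuts
# switching power by roughly 18%.  Voltages taken from the posts above.
V_STOCK = 1.062
V_UNDERVOLT = 0.96

ratio = (V_UNDERVOLT / V_STOCK) ** 2
print(f"power at the same clock: {ratio:.0%} of stock")
```

Static leakage and VRM losses aren't captured by this, so treat it as a ballpark for why undervolting keeps the card off the power limiter.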


----------



## dph314

J7SC said:


> A new bios certainly would help with PL, but throttling also occurs via temps per NVBoost algorithm. At 63 c, while quite good for an air-cooled card btw, you will have already lost at least 30 MHz. This is not to say that a new bios with a higher power limit is a bad idea, but the extra wattage (40w or so) will result in extra heat you might want to address as well.


Yeah I've noticed that drop before. Have pretty decent case fans I'll have to bump up a bit.



kithylin said:


> Actually that's not really what you should be looking for. Generally with the 2080 Ti series you want to try reducing voltage, not increasing it. Increasing volts is counter-productive, really, in the grand scheme of things. With modern cards, when we reduce voltage it also reduces heat and power usage, which helps prevent the cards from dropping clocks due to heat and hopefully keeps them from bouncing off the power limiter. The cards are governed by heat now and there's nothing any BIOS can do about that. As the cards get hotter they will reduce clocks; even if you had an unlimited BIOS with no power limit, you would still lose clock speed due to heat. It's how the new cards are designed.
> 
> With Afterburner open, press Ctrl+F and try 0.96V for 2000 MHz and see if that works better for you.


Thanks, I'll definitely give that a shot. I'd still like to have the extra room, just to have some freedom to play around with it. Is there really no newer BIOS's posted aside from the one with the old XUSB version that won't flash to newer cards?


----------



## JustinThyme

kithylin said:


> Actually that's not really what you should be looking for. Generally with the 2080 Ti series you want to try reducing voltage, not increasing it. Increasing volts is counter-productive, really, in the grand scheme of things. With modern cards, when we reduce voltage it also reduces heat and power usage, which helps prevent the cards from dropping clocks due to heat and hopefully keeps them from bouncing off the power limiter. The cards are governed by heat now and there's nothing any BIOS can do about that. As the cards get hotter they will reduce clocks; even if you had an unlimited BIOS with no power limit, you would still lose clock speed due to heat. It's how the new cards are designed.
> 
> With Afterburner open, press Ctrl+F and try 0.96V for 2000 MHz and see if that works better for you.


It's about finding a happy medium with your card, not just the manufacturer but also the silicon lottery.
The problem is he's on air. That's what's holding him back. No different than overclocking a CPU: you need volts and efficient cooling. I have my cards at +40, which peaks the voltage at 1.068V, and they never pass 42C, but I'm not on air. I can't come anywhere near stability undervolted and still expect a decent OC, and yes, it will never hit a thermal throttle or even come close to the stock power limit, but it also won't get anywhere near 2150, which is about where I live. If I go past +40, I hit the power limit. If I install a different BIOS with a higher power limit, I just hit the limitations of the silicon.


----------



## dph314

JustinThyme said:


> Its about finding a happy medium with your card. Not just manufacturer but also the silicon lottery.
> Problem is he is on air. Thats whats holding him back. No different than over clocking a CPU. Need volts and efficient cooling. I have my cards at +40 which peaks voltage to 1.068 and never pass 42C but Im not on air. I cant come anywhere near stability undervolted and expect a decent OC and yes it will never hit a themal throttle or even close to stock power limit but it also wont get anywhere near 2150 which is about where I live. I go past +40 and I hit power limit. I install different BIOS with higher power limit and I just hit the limitations of the silicon.


Yeah, I'd like to have that freedom too. And I'd like to try out kithylin's idea with the lower voltage either way, but I still want this power limit out of the way, especially when it's so easy to remove. Well, it used to be easy. I can't remember the last card I had that I didn't flash for one reason or another; I'm comfortable with it. But with this error, and that Galaxy BIOS being the only one in the OP listed for A/reference cards, I'm just unsure of which BIOS to use. Is there maybe a modded version of nvflash with the XUSB version check disabled?


----------



## Lynkdev

*So many versions*

I want to drop some money on two 2080 Tis today and was looking for the most preferred of the cards from an overclocking and water cooling perspective. Which would be best in your guys' opinion, please? I was looking at the FE and FTW.

I'm putting my new system together, replacing two Titan X Maxwells.


----------



## J7SC

Lynkdev said:


> I want to drop some money on two 2080 Ti's today and was looking for the most preferred of the cards from an overclocking and water cooling perspective. What would be the next best in your guys opinion please? i was looking at the FE and FTW.
> 
> I'm putting my new system together replacing two titan x maxwells.


 
...sidestepping your avatar's implications, this is like one of those 'man walks into a bar' jokes where a fight broke out because he asked whether the drink is spelled Whisky or Whiskey... First, I would recommend a full water-block-cooled card no matter what the label, be that factory-original or after-market. RTX 2080 Tis pull a lot of watts (between 300W and 400W on stock BIOS, depending), and that is heat energy to be gotten rid of, especially with the NVIDIA boost logic, where temps play a big role in clock reductions.

My :2cents:: I have had 2x Aorus 2080 Ti Xtreme Waterforce full factory block cards (pls see sig rig) for about a year now and couldn't be happier...one clocks to above 2205, the other to 2145 at load (and, I should mention, with triple 360/60 rads for the two). Hard to find right now at a decent price, though... The EVGA FTW Hydro Copper also seems like a good choice, but it is usually sold out. Then there is the MSI Sea Hawk EK X... and for real overkill (and big $$), there is the KingPin w/ Hydro Copper full water block (rather than AIO).

Alternatively, if you're comfortable fitting your own water blocks, folks will probably suggest that all you really should want/need is a 2080 Ti Founders Edition with an aftermarket block from Alphacool or Heatkiller, and maybe the 380W Galax BIOS to boot...


----------



## Lynkdev

J7SC said:


> ...sidestepping your avatar's implications, this is like one of those 'man walks into a bar' jokes where a fight broke out because he asked whether the drink is spelled Whisky or Whiskey... First, I would recommend a full water-block-cooled card no matter what the label, be that factory-original or after-market. RTX 2080 Tis pull a lot of watts (between 300W and 400W on stock BIOS, depending), and that is heat energy to be gotten rid of, especially with the NVIDIA boost logic, where temps play a big role in clock reductions.
> 
> My :2cents:: I have 2x Aorus 2080 Ti Xtreme Waterforce full factory block cards (pls see sig rig), now for about a year, and couldn't be happier...one clocks to above 2205, the other to 2145 at load (and, I should mention, with triple 360/60 rads for the two). Hard to find right now at a decent price, though...The EVGA FTW Hydrocopper also seems like a good choice, but it is usually sold out. Then there is the MSI Sea Hawk EK X...and for real overkill (and big $$), there is the KingPin with the Hydrocopper full water block (rather than the AIO).
> 
> Alternatively, if you're comfortable installing your own water blocks, folks probably will suggest that all you really want/need is a 2080 Ti Founders Edition with an aftermarket block from Alphacool or Heatkiller, and maybe the 380W Galax BIOS to boot...


thanks, think I'll be going with the FE after all.


----------



## kithylin

Lynkdev said:


> thanks, think I'll be going with the FE after all.


As with most other NVIDIA cards: FE cards with the reference-design PCB are a lot easier to find a block for, and you will have a significantly broader range of choices. Custom-PCB models are more difficult to find blocks for and will limit your options. Sometimes you end up with a custom-PCB model that has one block from one company, and that's the only choice. And if you don't like how it looks, tough cookies.

But I have one tip: there's a video on Gamers Nexus somewhere of what the process is like to disassemble the cooler on the FE cards. It's... very complicated and frustrating. I might suggest you at least watch that through before buying. Several other vendors sell blower-style cards with reference PCBs that would be a lot easier to disassemble. Since you're going water-block with your new cards, it doesn't matter what name is on the card; all reference-design cards use the exact same PCB underneath. Refer to the EK Cooling Configurator website to determine whether a card is reference PCB or custom PCB first.


----------



## JustinThyme

dph314 said:


> Yeah I'd like to have that freedom too. And I'd like to try out kithylin's idea with the lower voltage either way but I still want this Power Limit out of the way, especially when it's so easy to. Well, it used to be easy. I can't remember the last card I had that I didn't flash for one reason or another, I'm comfortable with it. But with this error, and that Galaxy BIOS being the only one in the OP listed for A/reference cards, I'm just unsure of which BIOS to use. Is there maybe a modded version of nvflash with the XUSB version-check disabled?


If it's available, they have it on TechPowerUp.

I've not seen anyone gain much, if anything, from a higher power limit unless they are using a chiller or better. The entire reason for the power limit in the first place is the simple fact that these cards are hot little buggers. When I was waiting for blocks to become available for my Strix, I had to leave the case door open, as the heat from a pair of GPUs was running up my liquid temp more than it does now with them in the loop, from all that heat getting pulled through the rads. They would run up to 65C in a heartbeat and top out closer to 85C with the fans running full bore, sounding like the approach at the airport.

Dropping the voltage drops the power draw and thereby the temps, but it also drops stability. The A reference should give you better reach, but in the end thermals are going to kill you on air. Like I mentioned earlier, I flashed from the Strix to the Matrix BIOS with no doubts, as the cards are identical in every respect but the BIOS and the Matrix's built-in AIO. Even gaining a little more on the power limit, I still hit the wall of the silicon: 2150 is it, without going into the power limit at all and without going past 42C. I get the same either way, so I went back to the stock BIOS.

I've seen folks undervolt to cool down something that runs hot (did so myself with a gaming laptop), but I'd never heard of lowering the voltage to raise the clocks. The BIOS being encrypted does make for a PITA in what you can choose from; it used to be you could edit it yourself. Either way, tweak with what you have and good luck. If you can't go full custom loop, I'd at least entertain an AIO to bring the temps down.





kithylin said:


> As with most other NVIDIA cards: FE cards with the reference-design PCB are a lot easier to find a block for, and you will have a significantly broader range of choices. Custom-PCB models are more difficult to find blocks for and will limit your options. Sometimes you end up with a custom-PCB model that has one block from one company, and that's the only choice. And if you don't like how it looks, tough cookies.
> 
> But I have one tip: there's a video on Gamers Nexus somewhere of what the process is like to disassemble the cooler on the FE cards. It's... very complicated and frustrating. I might suggest you at least watch that through before buying. Several other vendors sell blower-style cards with reference PCBs that would be a lot easier to disassemble. Since you're going water-block with your new cards, it doesn't matter what name is on the card; all reference-design cards use the exact same PCB underneath. Refer to the EK Cooling Configurator website to determine whether a card is reference PCB or custom PCB first.


+1 on that. The customs limit your choices, but at this stage in the game most of the top runners have the custom PCBs worth having covered. The one with the best-performing blocks, but only for the reference design, is Aqua Computer; they stay backlogged, though, so no need to branch out, I guess. Heatkiller is another good one and has more of a selection; if you order direct from them, just be prepared to wait, as they are backlogged too. Even Phanteks is making decent blocks these days. Sorry, not a fan of EK.


----------



## Talon2016

dph314 said:


> JustinThyme said:
> 
> 
> 
> Its about finding a happy medium with your card. Not just manufacturer but also the silicon lottery.
> Problem is he is on air. Thats whats holding him back. No different than over clocking a CPU. Need volts and efficient cooling. I have my cards at +40 which peaks voltage to 1.068 and never pass 42C but Im not on air. I cant come anywhere near stability undervolted and expect a decent OC and yes it will never hit a themal throttle or even close to stock power limit but it also wont get anywhere near 2150 which is about where I live. I go past +40 and I hit power limit. I install different BIOS with higher power limit and I just hit the limitations of the silicon.
> 
> 
> 
> Yeah I'd like to have that freedom too. And I'd like to try out kithylin's idea with the lower voltage either way but I still want this Power Limit out of the way, especially when it's so easy to. Well, it used to be easy. I can't remember the last card I had that I didn't flash for one reason or another, I'm comfortable with it. But with this error, and that Galaxy BIOS being the only one in the OP listed for A/reference cards, I'm just unsure of which BIOS to use. Is there maybe a modded version of nvflash with the XUSB version-check disabled?

Just use the EVGA FTW3 Ultra vBios on that XC2. I used to have an XC Ultra which is the same PCB/card minus the extra sensors and it worked flawlessly with the FTW3 vbios and that has a 373w limit.


----------



## zhrooms

dph314 said:


> Just had a quick question that I was hoping someone might be able to help with. I searched for the issue and found a few people with the same one back in May/June but after reading through several pages afterwards it doesn't look like they solved it.
> 
> Either way, I have a EVGA XC2 Ultra (1E07) and I've been trying to flash the Galaxy 380W BIOS, but I keep getting this error:
> 
> "XUSB FW component of the input GPU firmware image is incompatible"


 
Hey, we've known about this issue for a few months now; I've added a brief note about it in the original post warning people about newer cards. There is no known bypass. What I can suggest is to try the Gigabyte 366W ones, as many of them are from 2019, so you should be able to find a compatible one. The only other way is to physically shunt mod the card to increase the power limit, which most people obviously do not want to do, but it still works.


----------



## jura11

Hi there

On which card to choose, or which card I would choose: I have tried a few RTX 2080 Ti GPUs, like the Asus RTX 2080 Ti Strix, Zotac RTX 2080 Ti AMP, Palit RTX 2080 Ti, and EVGA RTX 2080 Ti XC Ultra and FTW3. The Strix will do 2160MHz with the Matrix BIOS, the Zotac AMP will do 2100-2115MHz with the Galax 380W BIOS, the XC Ultra would do 2085-2100MHz and that's it, and the FTW3 would do 2100MHz (in some instances 2115MHz or 2130MHz, but that has been very, very rare; usually it has sat at 2100-2115MHz during gaming). The Palit RTX 2080 Ti cards have been a disappointment: one of them would do 2100MHz and the other only 2070MHz with the Galax 380W BIOS.

Temperatures in all tests peaked at 38-42°C.

It depends on what you are planning to do. For an air-cooled build, choose the quietest card, like the Asus RTX 2080 Ti Strix; the Zotac RTX 2080 Ti AMP has a noisy cooler, and the FTW3 has a noisier cooler too.

For a water-cooled build, the Asus RTX 2080 Ti Strix cards are nice; I would probably choose the FE too.

The Asus RTX 2080 Ti Strix is quite a good card with the Matrix BIOS or the XOC BIOS too.

Hope this helps 

Thanks, Jura


----------



## dph314

JustinThyme said:


> If its available they have it on techpowerup.
> 
> I've not seen anyone gain much, if anything, from a higher power limit unless they are using a chiller or better. The entire reason for the power limit in the first place is the simple fact that these cards are hot little buggers. When I was waiting for blocks to become available for my Strix, I had to leave the case door open, as the heat from a pair of GPUs was running up my liquid temp more than it does now with them in the loop, from all that heat getting pulled through the rads. They would run up to 65C in a heartbeat and top out closer to 85C with the fans running full bore, sounding like the approach at the airport.
> 
> Dropping the voltage drops the power draw and thereby the temps, but it also drops stability. The A reference should give you better reach, but in the end thermals are going to kill you on air. Like I mentioned earlier, I flashed from the Strix to the Matrix BIOS with no doubts, as the cards are identical in every respect but the BIOS and the Matrix's built-in AIO. Even gaining a little more on the power limit, I still hit the wall of the silicon: 2150 is it, without going into the power limit at all and without going past 42C. I get the same either way, so I went back to the stock BIOS.
> 
> I've seen folks undervolt to cool down something that runs hot (did so myself with a gaming laptop), but I'd never heard of lowering the voltage to raise the clocks. The BIOS being encrypted does make for a PITA in what you can choose from; it used to be you could edit it yourself. Either way, tweak with what you have and good luck. If you can't go full custom loop, I'd at least entertain an AIO to bring the temps down.


Damn. Well, thanks for the info. I just have the one card, and an AIO for my CPU; haven't seen the card above 65C yet. I'm able to keep the room pretty cool though, just crack the window above the PC and let that Buffalo winter air in.



Talon2016 said:


> Just use the EVGA FTW3 Ultra vBios on that XC2. I used to have an XC Ultra which is the same PCB/card minus the extra sensors and it worked flawlessly with the FTW3 vbios and that has a 373w limit.


Nice, thanks. I'll give it a shot, would at least be a good ~12% extra to play with. 



zhrooms said:


> Hey, we've known about this issue for a few months now, I've added a brief text about it in the original post warning people about new cards. There is no known bypass. What I can suggest for you is try the Gigabyte 366W ones, as many of them are from 2019, should be able to find a compatible one. The only other way is to physically shunt mod the card to increase the power limit, which most people obviously do not want to do, but it still works.


Thanks, I'll give that one a shot if for whatever reason the FTW3 one has an issue. Yeah I don't wanna bother with the shunt if I'm just going to be on air, from the sound of things.


----------



## marrawi

Hi all, wanted to get your opinion/experience with OC'ing the TI's.

I have 2 RTX 2080 Tis on a custom loop. The highest stable clocks I got were +220 (2100) on the GPU and +800 (1945) on the memory @ 124% power target. Is this average? I am asking because I have one non-A card and I'm limited to 124% power.

3d Mark: With above OC, I got 22,486 score on Time Spy: https://www.3dmark.com/spy/9463069

Temps: over a full cycle of the 3DMark Time Spy Extreme stress test, temps with the overclock hit 48C and 51C; I have the blocks connected in parallel. Is this high? Especially considering the current lower ambient temperature (22C).

Other specs: i7 9700k 5.2 ghz, 32GB ddr4 @ 3600
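For reference, that 124% cap works out like this (a quick sketch; the 250W baseline is the reference TDP from the spec sheet, and individual BIOSes set their own base board power, so treat the numbers as illustrative):

```python
# Convert an Afterburner-style power-target percentage into watts.
# Assumes a 250 W reference board power for a 2080 Ti; actual BIOS
# limits vary per vendor, so this is only a rough estimate.

def power_target_watts(board_power_w: float, target_percent: float) -> float:
    """Board power scaled by the power-target slider."""
    return board_power_w * target_percent / 100.0

# A non-A reference card capped at 124% of 250 W:
print(power_target_watts(250, 124))  # 310.0 -> ~310 W ceiling
```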








Sent from my SM-G975U using Tapatalk


----------



## ADSL

Is the best BIOS for the MSI 2080 Ti Gaming X Trio still the one posted in the first post?


----------



## J7SC

zhrooms said:


> Hey, we've known about this issue for a few months now, I've added a brief text about it in the original post warning people about new cards. There is no known bypass. What I can suggest for you is try the Gigabyte 366W ones, as many of them are from 2019, should be able to find a compatible one. The only other way is to physically shunt mod the card to increase the power limit, which most people obviously do not want to do, but it still works.


 


dph314 said:


> (...) Thanks, I'll give that one a shot if for whatever reason the FTW3 one has an issue. Yeah I don't wanna bother with the shunt if I'm just going to be on air, from the sound of things.


 
Yeah, the Gigabyte BIOSes (stock on my cards) are pretty good; they regularly pull 370W-380W in heavy load scenarios. Here is the download link (look for Xtreme WaterForce): https://www.techpowerup.com/vgabios/?architecture=NVIDIA&manufacturer=Gigabyte&model=RTX+2080+Ti&interface=&memType=&memSize=&since=


----------



## dph314

J7SC said:


> Yeah, the Gigabyte bios (stock on my cards) are pretty good; regularly pull 370W - 380W in heavy load scenarios. Here is the download link (look for Xtreme WaterForce)


Nice. I'll give it a shot. I checked out my stock XC2 Ultra BIOS in GPU-Z and the build date is March '19, while all the BIOSes listed (like the FTW3 one or the Xtreme WaterForce you just recommended) were built in late 2018. Does that mean I'm out of luck? Because the Galaxy 388W one I initially wanted is listed with a September '18 build date, and it wouldn't flash. And the one you recommended is November, not much newer. Nowhere near the March '19 that mine is.


----------



## J7SC

dph314 said:


> Nice. I'll give it a shot. I checked out my stock XC2 Ultra one in GPU-Z and the Build Date is March '19, and all the BIOS's listed (like the FTW3 one or the Xtreme WaterForce that you just recommended) are all built in late 2018. Does that mean I'm out of luck? Because the Galaxy 388W one I initially wanted is listed as September '18 Build Date, and it wouldn't flash. And the one you recommended is November, not much newer . No where near the March '19 that mine is.


 
Gigabyte's global site has some dated late February 2019...also, I think the PCBs for the air, AIO, and full-WB Xtreme 2080 Ti are the same, and the BIOSes should be very similar.


----------



## dph314

J7SC said:


> Gigabyte's global site has some dated late February 2019...also, I think the PCBs for air, AIO and full WB Xtreme 2080 Ti are the same, and bios should be very similar


Sweet, thanks. I'll check it out soon and give it a shot.



Also, thanks to kithylin for first suggesting the voltage curve. I have a solid 1995MHz @ 1013mV going (after the temp-related drops from hitting 63C), instead of sinking to 1920MHz or 1890MHz every few seconds from bumping into the power target. Love seeing that. I just chose those values at random to start, but as you said, I can probably go lower on the volts or higher on the clock.
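The intuition behind why a curve point like that dodges the power limit is the usual first-order dynamic-power approximation, P ∝ f·V². A toy comparison (the 1950MHz @ 1050mV reference point below is a made-up stock-ish operating point, purely for illustration):

```python
# Back-of-envelope dynamic-power scaling, P ~ f * V^2. Real cards add
# static/leakage power and boost behavior, so this is only a sketch.

def relative_power(f_mhz: float, v_mv: float, f_ref: float, v_ref: float) -> float:
    """Power at (f, V) relative to a reference point, per P ~ f * V^2."""
    return (f_mhz / f_ref) * (v_mv / v_ref) ** 2

# A 1995 MHz @ 1013 mV curve point vs a hypothetical 1950 MHz @ 1050 mV
# stock-ish point: slightly higher clock, yet slightly *less* power.
print(round(relative_power(1995, 1013, 1950, 1050), 3))  # 0.952
```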


----------



## jura11

marrawi said:


> Hi all, wanted to get your opinion/experience with OC'ing the TI's.
> 
> I have 2 RTX 2080 TI on custom loop. The highest stable clocks I got was +220 (2100) on GPU and +800 (1945) on mem @124% power target. Is this average? I am asking because I have one non-A card and I'm limited at 124% power.
> 
> 3d Mark: With above OC, I got 22,486 score on Time Spy: https://www.3dmark.com/spy/9463069
> 
> Temps: with 3dmark Time Spy Extreme Stress test full cycle, temps with overclock hits 48c and 51c, I have the blocks in connected in parallel. Is this high? Especially considering current lower ambient temperature (22c)
> 
> Other specs: i7 9700k 5.2 ghz, 32GB ddr4 @ 3600
> View attachment 309088
> 
> 
> Sent from my SM-G975U using Tapatalk


Hi there 

I can't comment on SLI results as I never tried SLI RTX 2080Ti 

Temperatures are a bit higher, but that depends on a lot of factors, like the waterblocks and how many radiators you have. If you are running just a single 480mm radiator, then those temperatures are not bad. I built a loop for a friend with an 8700K at 5.2GHz and an Asus RTX 2080 Ti Strix on a single 360mm radiator, and we are seeing temperatures of 42-45°C at a 2115-2130MHz OC on that card. I think with more radiator space we could push his GPU to 2145-2160MHz quite easily.

Hope this helps 

Thanks, Jura
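A rough way to size rad space for loops like these is the old rule of thumb of roughly 100W of heat per 120mm of radiator at modest fan speeds (an illustrative figure only; thickness, fin density, and fan speed move it a lot):

```python
import math

# Rule-of-thumb radiator sizing: ~100 W dissipated per 120 mm section
# at modest fan speeds. The 100 W/section figure is a loose community
# heuristic, not a measured spec.

def rad_120mm_needed(total_heat_w: float, watts_per_120mm: float = 100.0) -> int:
    """Number of 120 mm radiator sections needed for a given heat load."""
    return math.ceil(total_heat_w / watts_per_120mm)

# Two 2080 Tis at ~380 W each plus a ~150 W overclocked CPU:
print(rad_120mm_needed(380 * 2 + 150))  # 10 -> e.g. 3x 360 + 1x 120
```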


----------



## marrawi

jura11 said:


> Hi there
> 
> 
> 
> I can't comment on SLI results as I never tried SLI RTX 2080Ti
> 
> 
> 
> Temperatures are bit higher but this depends on lots of factors why are high like waterblocks and how many radiators do you have, if you are running just single 480mm radiator then temperatures are not bad, I have build for friend loop with 8700k with 5.2GHz and Asus RTX 2080Ti Strix with single 360mm radiator and temperatures we are seen 42-45°C with 2115-2130MHz OC on this cards,I think with more radiator space we could push his GPU to 2145-2160MHz quite easily
> 
> 
> 
> Hope this helps
> 
> 
> 
> Thanks, Jura


Thanks Jura, 

I have 4x 120mm/60mm rads. It's a ridiculous setup, for looks. I have seen average temps in the high 40s on a single card. It's amazing that your buddy's temps are mid 40s with a single 360 rad!

Thanks for responding, I might add more rad space in summer. 

Sent from my SM-G975U using Tapatalk


----------



## J7SC

marrawi said:


> Thanks Jura,
> 
> I have 4*120MM/60mm rads. Its a ridiculous setup for looks. I have seen average temps of high 40's on a single card. It's amazing that ur buddy's temps are mid 40's with a single 360 rad!
> 
> Thanks for responding, I might add more rad space in summer.
> 
> Sent from my SM-G975U using Tapatalk


 
FYI on dual-card temps, I have been running water-cooled 2080 Ti SLI/NVL for a year now...in 3DMark, the top graph is a single-card run temp, the bottom graph is SLI/NVL...very minor difference! These runs were done in the summer with ambient in the low 20s C; the dedicated GPU loop has a total of 1080/60 rad space.


----------



## marrawi

J7SC said:


> FYI on dual card temps, I have been running water-cooled 2080 Ti SLI/NVL for a year now...3dm top graph is single card run temp, bottom graph is SLI/NVL...very minor difference ! These runs were done in the summer with ambient in low 20ies c; dedicated GPU-loop has a total of 1080/60 rad space.


Wow! Great temps! What GPU blocks you use? Also, are those temps with OC? If so, what clocks?

Thanks for sharing!



Sent from my SM-G975U using Tapatalk


----------



## J7SC

marrawi said:


> Wow! Great temps! What GPU blocks you use? Also, are those temps with OC? If so, what clocks?
> 
> Thanks for sharing!
> 
> Sent from my SM-G975U using Tapatalk


 
Factory-installed GPU blocks (Aorus Xtreme Waterforce WB); 'typical' single-GPU clocks 2190 to 2205, 'typical' dual-GPU clocks 2145 to 2160...and yes, the earlier graphs were with the OC, no extra voltage in MSI AB, power limit at max (max observed in GPU-Z = 379W).


----------



## Cobra652

Cobra652 said:


> Is there a helpful old and reliable member here who can help me with a Modern Warfare RTX bundle activation?
> I activated one with my rtx card, but i got another copy and want to activate it to my brother for christmas gift.
> Anyone can help me please?
> Thank you very much!


Anyone please? 
If needed i can pay 5-10 dollars via paypal for helping.


----------



## marrawi

Cobra652 said:


> Anyone please?
> 
> If needed i can pay 5-10 dollars via paypal for helping.


Why couldn't you activate? 

Sent from my SM-G975U using Tapatalk


----------



## chibi

Cobra652 said:


> Anyone please?
> If needed i can pay 5-10 dollars via paypal for helping.


It may be locked unless you have an RTX card in the system. Does your brother have an RTX GPU? If not, that may be why he cannot activate. Loan him your GPU to install. :thumb:


----------



## Cobra652

marrawi said:


> Why couldn't you activate?
> 
> Sent from my SM-G975U using Tapatalk





chibi said:


> It may be locked unless you have a RTX card in the system. Does your brother have a RTX GPU? If not, that may be why he cannot activate. Loan him your GPU to install. :thumb:



Because you can only activate one Modern Warfare RTX bundle per RTX card. I activated one MW with my RTX card; the second bundle code needs another RTX card. So I need someone with an RTX card (without MW activated on it).


----------



## dph314

kithylin said:


> With afterburner open press Control-F and try for 0.96v for 2000 mhz and see if that works better for you.


Well, after a little testing, 2025MHz @ 1006mV is stable, but at 993mV the game just closes after a few minutes. So...not the best luck, I assume, based on your initial recommendation. Still not bad I guess, especially considering the first 2080 Ti I had before I even knew about all this A/non-A stuff: a non-A that stayed at 1760MHz in everything because of the ridiculous power limit.
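That stable/unstable bracketing is basically a manual bisection, which can be sketched like this (`is_stable` is a placeholder for "run your stress test at this voltage and see if it survives"; it is not a real API, and the 1000mV threshold in the demo is made up):

```python
# Bisection for the lowest stable voltage at a fixed clock, assuming
# stability is monotonic in voltage (usually true in practice).

def min_stable_mv(lo: int, hi: int, is_stable, step: int = 6) -> int:
    """Lowest voltage (mV) in [lo, hi] that passes the stability test,
    to within `step` mV (Turing voltage steps are ~6 mV)."""
    while hi - lo > step:
        mid = (lo + hi) // 2
        if is_stable(mid):
            hi = mid  # passed: the floor is at or below mid
        else:
            lo = mid  # failed: the floor is above mid
    return hi

# Toy stand-in: pretend anything >= 1000 mV is stable at this clock.
print(min_stable_mv(950, 1050, lambda mv: mv >= 1000))  # 1000
```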


----------



## MonarchX

Any firm words on 2080 Ti Super release lately?


----------



## ntuason

MonarchX said:


> Any firm words on 2080 Ti Super release lately?


Only rumors for now, pointing to Q1 2020; no solid info. I'm returning my 2080 Ti and willing to wait till March.


----------



## keikei

ntuason said:


> Only rumors for now and states Q1 2020, no solid info. I'm returning my 2080 Ti and willing to wait till March.



Yeah, sounds about right. The Ti looks to be a decent bump though, as opposed to the 2080S.


----------



## J7SC

keikei said:


> Yeah, sounds about right. Ti looks to be a decent bump though, as oppose to the 2080S.


 
I'm still wondering what that might do to their Titan RTX sales, especially if it is the same full-meal-deal chip and available on custom PCBs. Granted, the Titan RTX has 24GB, and maybe they'll upgrade that to the latest, greatest 16Gbps memory, but still...


----------



## JustinThyme

marrawi said:


> Hi all, wanted to get your opinion/experience with OC'ing the TI's.
> 
> I have 2 RTX 2080 TI on custom loop. The highest stable clocks I got was +220 (2100) on GPU and +800 (1945) on mem @124% power target. Is this average? I am asking because I have one non-A card and I'm limited at 124% power.
> 
> 3d Mark: With above OC, I got 22,486 score on Time Spy: https://www.3dmark.com/spy/9463069
> 
> Temps: with 3dmark Time Spy Extreme Stress test full cycle, temps with overclock hits 48c and 51c, I have the blocks in connected in parallel. Is this high? Especially considering current lower ambient temperature (22c)
> 
> Other specs: i7 9700k 5.2 ghz, 32GB ddr4 @ 3600
> View attachment 309088
> 
> 
> Sent from my SM-G975U using Tapatalk


Having different cards isn't the best option, especially with one being a non-A core. Not a horrible score; mine is only about 1,200 higher on matching A cards.

Temps are a bit high and getting into the thermal-throttling zone. I don't know what else you have in the loop as far as components. 22C ambient won't matter much if you don't have enough rads and fans to dissipate the heat. Just for comparison, I have two cards peaking at 2150, right on the edge of the power limit without actually hitting it, and temps don't pass 42C on either. Rig is in sig.


----------



## marrawi

JustinThyme said:


> Having different cards isnt the best option especially one being a non A core. Not a horrible score, mine is only about 1200 higher on matching A cards.
> 
> 
> 
> Temps are a bit high and getting into the thermal throttling zone. Dont know what else you have in the loop as far as components. 22C ambient wont matter much if you don't have enough rad/s anf fans to dissipate the heat. Just for a comparison I have two peaking at 2150 right on the edge of power limit without actually hitting it and temps don't pass 42C on either. Rig is in sig.


Thanks for sharing your input. I went with a limiting case and couldn't fit more radiator space. I'm looking at decade-old external radiators from Watercool or Phobya to mount on the back of the case. Throwback Sunday it is!

Sent from my SM-G975U using Tapatalk


----------



## szeged

Was going to get a 2080 super but the ftw3 went OOS as I was ordering, so i said screw it and was just going to get a 2080ti but figured I'd double check about a new release since the 2080ti has been out a while...anything actually looking like it might release early next year? 

Worth getting a 2080ti now or wait 3-4 months to find out?


----------



## keikei

szeged said:


> Was going to get a 2080 super but the ftw3 went OOS as I was ordering, so i said screw it and was just going to get a 2080ti but figured I'd double check about a new release since the 2080ti has been out a while...anything actually looking like it might release early next year?
> 
> Worth getting a 2080ti now or wait 3-4 months to find out?



Best case scenario, we get a cheap RTX TITAN with less memory and you have the fastest gaming card around the release of Cyberpunk 2077.


----------



## szeged

keikei said:


> Best case scenario, we get a cheap RTX TITAN with less memory and you have the fastest gaming card around the release of Cyberpunk 2077.
> 
> 
> https://www.youtube.com/watch?v=0L595AHrhWE&t=403s



So your opinion would be to wait for next year? If so, I'll probably just get a 2070 Super to hold me over till then, because my Maxwell Titan X is finally getting retired; it just can't hold its own at 4K anymore.


----------



## sblantipodi

ntuason said:


> MonarchX said:
> 
> 
> 
> Any firm words on 2080 Ti Super release lately?
> 
> 
> 
> Only rumors for now and states Q1 2020, no solid info. I'm returning my 2080 Ti and willing to wait till March.

The Super version wouldn't be that much faster than the standard one; no need to return your card.


----------



## Laiq

Which bios is the best i can get for my Palit 2080ti gaming pro oc? watercooled.


----------



## keikei

sblantipodi said:


> The Super version wouldn't be that much faster than the standard one; no need to return your card.


Agree, unless $ is no object. Ampere is supposedly not too far away either. I'm more excited about that early mGPU driver. I'd luv to go back to dual cards.


----------



## dph314

J7SC said:


> Gigabyte's global site has some dated late February 2019...also, I think the PCBs for air, AIO and full WB Xtreme 2080 Ti are the same, and bios should be very similar


I find myself at a crossroads once more. So I checked Gigabyte's site. The Waterforce Xtremes are custom PCBs; are those safe to flash to my reference EVGA XC2 Ultra?

Then I checked some others they've got, reference ones like the Gaming OC and the Windforce, and you were right about the much newer BIOS build dates. But they all say "For MICRON Memory" in the description, and I have Samsung. So I don't know if those would be safe to flash either now.




Laiq said:


> Which bios is the best i can get for my Palit 2080ti gaming pro oc? watercooled.



The GALAX/KFA2 RTX 2080 Ti one for 1E07 reference cards in the first post, if it'll flash. Otherwise, that's what I'm trying to figure out myself. That one won't take on mine (I get an error because my card is too new for it), but you could try.



szeged said:


> So your opinion would be to wait for next year? If so I'll probably just get a 2070 super to hold me over till then because my Maxwell titan x is finally getting retired, just can't hold it's own at 4k anymore.


Hey I remember you. I haven't been around here much in quite a while, but I definitely haven't forgotten that avatar, ha. Thanks for all the laughs back in the day.


----------



## Sheyster

keikei said:


> Agree, unless $ is no object. Ampere supposedly not too far away either. I'm more excited regarding that early mgpu driver. I'd luv to go back to dual cards.


Agreed, I too am looking forward to more info on this new development. Maybe Nvidia will pull a rabbit out of their hat.


----------



## skupples

szeged said:


> Was going to get a 2080 super but the ftw3 went OOS as I was ordering, so i said screw it and was just going to get a 2080ti but figured I'd double check about a new release since the 2080ti has been out a while...anything actually looking like it might release early next year?
> 
> Worth getting a 2080ti now or wait 3-4 months to find out?


No high-end replacements until the middle of next year. The 3080 will roll in June and be a bit faster than the 2080 Ti in all tasks.

The Super will be at least 15% faster, based on swaps folks have done with the newest, fastest memory: a 2080 Ti with the Super's memory runs 10% faster. So add one more SMX = 15?
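For the napkin math behind that guess: combining the observed ~10% memory gain multiplicatively with an SM-count bump (here assuming a hypothetical full 72-SM TU102, the Titan RTX configuration, rather than the 2080 Ti's 68) gives a naive upper bound in that ballpark:

```python
# Naive combined-speedup estimate: memory gain times SM-count ratio.
# Assumes perfectly linear SM scaling, which real workloads rarely hit,
# so treat the result as an optimistic upper bound.

def combined_speedup(mem_gain: float, sm_before: int, sm_after: int) -> float:
    """Fractional speedup from a memory gain plus extra SMs."""
    return (1 + mem_gain) * (sm_after / sm_before) - 1

# 10% memory gain, 68 SMs (2080 Ti) -> 72 SMs (full TU102):
print(round(combined_speedup(0.10, 68, 72) * 100, 1))  # 16.5
```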


----------



## MonarchX

skupples said:


> no high end replacements until middle of next year. 3080 will roll in june, and be a bit faster than 2080ti in all tasks.
> 
> the super will be at least 15% faster based on swaps folks have done with the newest fastest memory. 2080ti w. super memory runs 10% faster. So add one more SMX = 15?


Something like that and I bet CyberPunk won't run all that well...


----------



## J7SC

dph314 said:


> I find myself at a crossroads once more. So I checked Gigabyte's site. The Waterforce Xtreme's are custom-PCB's, are those safe to flash to my reference EVGA XC2 Ultra?
> 
> Then, I checked some others they got, reference ones like the Gaming OC and the Windforce, and you were right about the much newer BIOS build dates. But, they all say "For MICRON Memory" in the description, and I have Samsung. So I don't know if those would be safe to flash either now  (...).


 
With any kind of BIOS flash, especially a non-native one, a good helping of caution is not misplaced, i.e. having another NVIDIA GPU (even if not the same model) ready 'just in case' to help re-flash the 2080 Ti back to the original.

As to the 'Micron' part, I think that just indicates a specific change in timings or something similar for Micron-equipped cards (I should try it...). That said, the BIOS usually contains tags internally that read 'Micron...Samsung...Hynix', as it has different segments for (and works on) all three. Check your current BIOS info tab in GPU-Z; you should see all three.
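As a toy illustration of that vendor-tag idea, one could grep a saved ROM dump for the vendor names (this assumes the names appear as plain ASCII, which GPU-Z's "supported memory" field suggests, though an actual Turing ROM is partly encrypted, so this is purely illustrative):

```python
# Toy check for which memory-vendor tags a BIOS dump mentions. Purely
# illustrative: assumes the vendor names are stored as plain ASCII.

VENDORS = (b"Micron", b"Samsung", b"Hynix")

def memory_vendors(rom: bytes):
    """Return the vendor names found anywhere in the ROM bytes."""
    return [v.decode() for v in VENDORS if v in rom]

# Fake ROM blob for demonstration (not a real dump):
print(memory_vendors(b"...Samsung...Micron..."))  # ['Micron', 'Samsung']
```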


----------



## szeged

skupples said:


> no high end replacements until middle of next year. 3080 will roll in june, and be a bit faster than 2080ti in all tasks.
> 
> the super will be at least 15% faster based on swaps folks have done with the newest fastest memory. 2080ti w. super memory runs 10% faster. So add one more SMX = 15?



So we are expecting a 2080ti super to hold nvidia over till the 3080? Or just hopeful rumors at this point?


----------



## keikei

MonarchX said:


> Something like that and I bet CyberPunk won't run all that well...


CDPR games tend to favor Nvidia. For whatever reason, their engine luvs tessellation.


----------



## J7SC

keikei said:


> Agree, unless $ is no object. Ampere supposedly not too far away either. I'm more excited regarding that early mgpu driver. I'd luv to go back to dual cards.


 
...same here. With 2x w-cooled and decent-clocking 2080 Tis, mGPU is going to be my focus until I have a chance to review a 3080 Ti custom-PCB Ampere.




szeged said:


> So we are expecting a 2080ti super to hold nvidia over till the 3080? Or just hopeful rumors at this point?


 
...rumours (and probably the card itself) have been around since the summer (basically a back-stop in case big Navi had delivered?). According to TPU, still only a rumour now, but one with a bit of potential 'meat' on it. https://www.techpowerup.com/261328/nvidia-readying-geforce-rtx-2080-ti-super-after-all 

...Hard to understand why they didn't release a 2080 Ti-S for the holiday season, though. Perhaps it is a refresh for the new year's $low $ales period? But with Ampere (according to the same rumour sources) coming sometime around mid-year, a 2080 Ti-S halo would only last a few months?


----------



## szeged

Guess the 2070 Super is gonna hold me over till a 3080, or whatever it's called this time, is released. The Titan X Maxwell has been doing fine for most stuff, but in some games at 4K it really drags its feet.


----------



## skupples

MonarchX said:


> Something like that and I bet CyberPunk won't run all that well...


did you read my prediction on that one as well? 

Cyberpunk - the not next gen, next gen title. 

"console limitations are why we didn't get x, Y, & Z!" incoming. (see evolution of watch dogs, or FF11 MMO, or oh so many others)



szeged said:


> So we are expecting a 2080ti super to hold nvidia over till the 3080? Or just hopeful rumors at this point?


unless they release the one additional SMX + 10% faster 2080 super memory, yessir. 

Still zero reason for NV to do anything. Big Navi is still vaporware, and will likely be only as fast as, not faster than, existing 20x0 series products.

AMD seems to only be able to keep up when they're on the smaller node.
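The "10% faster" memory claim above roughly matches the raw bandwidth arithmetic, assuming the 2080 Super's 15.5 Gbps GDDR6 chips were dropped onto the Ti's 352-bit bus:

```python
# GDDR6 bandwidth: bus_width (bits) * data_rate (Gbps) / 8 -> GB/s

def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits * gbps / 8

stock  = bandwidth_gbs(352, 14.0)   # 2080 Ti stock: 616.0 GB/s
superm = bandwidth_gbs(352, 15.5)   # with 2080 Super's 15.5 Gbps chips: 682.0 GB/s
print(f"{(superm / stock - 1) * 100:.1f}% more bandwidth")  # -> 10.7% more bandwidth
```

So a ~10% gain from the memory swap alone is plausible, before counting any extra SM.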


----------



## skupples

oops


----------



## Talon2016

zhrooms said:


> Hey, we've known about this issue for a few months now, I've added a brief text about it in the original post warning people about new cards. There is no known bypass. What I can suggest for you is try the Gigabyte 366W ones, as many of them are from 2019, should be able to find a compatible one. The only other way is to physically shunt mod the card to increase the power limit, which most people obviously do not want to do, but it still works.


The only bypass is use of a USB programmer/test clip. I've used this method to bypass the XUSB error. It works and you can flash older vBIOS to newer cards without any issues.
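The shunt mod mentioned in the quote works by fooling the power sensing: soldering a second resistor in parallel across each current-sense shunt lowers the voltage drop the controller measures, so it under-reports current (and thus power) by the resistance ratio. A rough sketch of the arithmetic, with assumed example values (a 5 mΩ stock shunt is common, but check your own board):

```python
# Effect of paralleling a resistor across a current-sense shunt.
# The controller computes current as V_drop / R_stock, but after the
# mod the real resistance is the parallel combination, so reported
# power scales down by parallel(R_stock, R_added) / R_stock.

def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def reported_power(true_power: float, r_stock: float, r_added: float) -> float:
    """Power the card *thinks* it draws after the mod (watts)."""
    return true_power * parallel(r_stock, r_added) / r_stock

# Stacking an identical 5 mOhm resistor halves the reported draw,
# effectively doubling the headroom under a fixed power limit:
print(reported_power(500.0, 0.005, 0.005))  # -> 250.0
```

That is also why it defeats the BIOS lock entirely: the limit enforcement never sees the true draw.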


----------



## gecko991

I think the Super Ti will be coming, another holdover cash-grab perhaps, and people will buy it, as the 3080 may get pushed back into fall. I just grabbed another 2080 Ti, not a problem.


----------



## dph314

J7SC said:


> With any kind of bios flash, especially a non-native one, a good helping of caution is not misplaced...i.e. having another NVidia GPU (even if not the same model) ready 'just in case' to help re-flash the 2080 Ti back to the original.
> 
> As to the 'Micron' part, I think that just indicates a specific change in timings or s.th. similar for Micron-equipped cards (I should try it...). That said, the Bios usually contains a tag internally that reads 'Micron...Samsung...Hynix' as it has different segments for (and work) on all three. Try that on your current Bios info tab in GPUz, you should see all three.


Thanks, good to know. Yeah, I do have those tags listed in GPU-Z for mine. So, I guess I should be good then? Any Gigabyte reference-card BIOS should be fine? And I do have an RX 580 laying around, should the unfortunate need arise.


----------



## dph314

skupples said:


> oops


Congrats on 20k 



Talon2016 said:


> The only bypass is use of a USB programmer/test clip. I've used this method to bypass the XUSB error. It works and you can flash older vBIOS to newer cards without any issues.


That's interesting, just looked up a video on those. I don't think I'd go that far on my 2080ti, but it'd be a good thing to have some day.


----------



## J7SC

dph314 said:


> Thanks, good to know. Yeah, I do have those tags listed in GPU-Z for mine. So, I guess I should be good then? Any Gigabyte reference-card BIOS should be fine? And I do have a RX580 laying around, should the unfortunate need arise.


 
Any Gigabyte BIOS with the highest power limit (officially 366W, though it will go a bit beyond) and, of course, the newer build date. So at the very least the Aorus 'Xtreme' PCB ones, per the OP table on page 1.


----------



## sblantipodi

So is multi-GPU coming back in 2020?
Personally, with single cards costing as much as two used to, I really doubt I would SLI €1300 cards 😄


----------



## Sheyster

skupples said:


> Still zero reason for NV to do anything. Big Navi is still vaporware, and will likely be only as fast as, not faster than, existing 20x0 series products.


A little extra milking never hurt, right? I'm sure a certain percentage of OG 2080 Ti users would upgrade if performance was +15%. Ampere is imminent so they might as well milk Turing a little more while they can.


----------



## sblantipodi

Sheyster said:


> skupples said:
> 
> 
> 
> Still zero reason for NV to do anything. Big Navi is still vaporware, and will likely be only as fast as, not faster than, existing 20x0 series products.
> 
> 
> 
> A little extra milking never hurt, right? I'm sure a certain percentage of OG 2080 Ti users would upgrade if performance was +15%. Ampere is imminent so they might as well milk Turing a little more while they can.

What does "imminent" mean? Will we see it before the summer? I mean, available in shops, not on PowerPoint slides. 🙂


----------



## Sheyster

sblantipodi said:


> What imminent means? Will we see it before the summer? I mean, available in shops not on PowerPoints. 🙂


We don't have an exact release date obviously but we can bet on 2020 sometime. I would guess no later than June or July for a 3080 if they take the old approach to their releases (cut-down GPU first followed by bigger versions later).


----------



## chibi

I'm afraid if the 2080 Ti Super comes out, I won't have the resolve to hold out for 3080 Ti Ampere. I just put together a 9900KS system and have a 1060 3GB GPU as a place holder. I figured a 2080 Ti was not big enough of a jump from my previous Titan Xp so looking at all these BF/CM deals has been rough, haha.


----------



## J7SC

chibi said:


> I'm afraid if the 2080 Ti Super comes out, I won't have the resolve to hold out for 3080 Ti Ampere. I just put together a 9900KS system and have a 1060 3GB GPU as a place holder. I figured a 2080 Ti was not big enough of a jump from my previous Titan Xp so looking at all these BF/CM deals has been rough, haha.


 
Why stop with a (maybe) 2080 Ti-S? This setup should keep you happy for a while


----------



## skupples

dph314 said:


> Congrats on 20k
> 
> 
> 
> That's interesting, just looked up a video on those. I don't think I'd go that far on my 2080ti, but it'd be a good thing to have some day.


thanks, about 19,995 shartposts, but posts nonetheless. I've always read quite a few forums, but I only really ever ended up posting here.

 and those only have 8 pins of power, dafuq?


----------



## Chobbit

Hey guys I'm hoping one of my cards isn't dead  but more information here if anyone can help:
https://www.overclock.net/forum/69-...eezing-relating-gpu-drivers.html#post28221546

Basically Windows kept crashing/freezing, rarely at first, but it quickly became common. I got rid of my overclocks (hadn't increased voltage at all yet) and tried different drivers, and the only ones that stopped the crashes/freezes were drivers that didn't allow SLI on the 2080 Tis. I went back to recent drivers and the crashing only stopped when I disabled SLI. 

Noticed that the NVLink bridge would stop lighting up when it froze/crashed, and it still does after a little while, but because it's not in SLI it doesn't affect the first card and the display. When the NVLink stops lighting up I then look in MSI Afterburner and that card returns no readings, and the SLI option disappears from the NVIDIA Control Panel. Also, when I then check in Device Manager there is an ! icon on the second card, and viewing its properties gives this message:

"This device is not working properly because Windows cannot load the drivers required for this device. (Code 31)

The specified request is not a valid operation for the target device"

I then try updating the drivers and it says "The best drivers for your device are already installed"

Is there anything that can be done for this? Could a bios update fix it or is it something else?

Hoping it's not a dead card as this system was not built to easily remove these cards this was supposed to stay together for a long time.

Cheers


----------



## Sheyster

Chobbit said:


> Hey guys I'm hoping one of my cards isn't dead  but more information here if anyone can help:
> https://www.overclock.net/forum/69-...eezing-relating-gpu-drivers.html#post28221546
> 
> Basically windows kept crashing/freezing, at first rearly but quickly it became common. Got rid of my overclocks (hadn't increased voltage at all yet) and tried different drivers and the only one that stopped the crashes/freezes were drivers that didn't allow SLI on the 2080ti's, went back to recent drivers and crashing only stopped when I disabled SLI.
> 
> Noticed that nvlink bridge would stop lighting up when froze/crashed and still does after a little while but because not in SLI it doesn't affect the first card and the display. When nvlink stop lighting up I then look in MSI afterburner and that card returns no readings, SLI option disappears from NVidia Control panel. Also when I then check in Device manager there it an ! icon on the second card and viewing it's properties is this message:
> 
> "This device is not working properly because Windows cannot load the drivers required for this device. (Code 31)
> 
> The specified request is not a valid operation for the target device"
> 
> I then try updating the drivers and it says "The best drivers for your device are already installed"
> 
> Is there anything that can be done for this? Could a bios update fix it or is it something else?
> 
> Hoping it's not a dead card as this system was not built to easily remove these cards this was supposed to stay together for a long time.
> 
> Cheers


If reinstalling a clean Windows doesn't work then it's probably the card. That's all I got.


----------



## skupples

as i said in your other thread, have you used guru3d display driver uninstaller, & started over?

also, are you sure the cards are properly socketed? Are you sure the slots are clean? 

also, do not install experience after running DDU. Test again.

I had a similar issue with me 1080tis. Ended up being a dirty slot, and experience breaking programs at launch. 

experience has been off my machine for 2 months now, and I haven't had a single issue since.

also, if I remember correctly. You wanna do the driver install with both cards in, and the bridge on.


----------



## NBrock

chibi said:


> I'm afraid if the 2080 Ti Super comes out, I won't have the resolve to hold out for 3080 Ti Ampere. I just put together a 9900KS system and have a 1060 3GB GPU as a place holder. I figured a 2080 Ti was not big enough of a jump from my previous Titan Xp so looking at all these BF/CM deals has been rough, haha.



The 2080 Ti was a pretty big jump vs my Titan Xp. Especially after the driver updates that have come as Turing matured, and the fact that I was able to flash a higher-wattage BIOS onto my FE card. Both the TXp and the 2080 Ti were watercooled. 

My TXp would run 2050 core and +500 on memory for most work loads. My 2080ti runs about 2130 core and +1000 mem for most gaming loads.


----------



## Chobbit

I've only just seen these replies, should I DDU in safe mode and then install the latest drivers without Experience?


Cheers




skupples said:


> as i said in your other thread, have you used guru3d display driver uninstaller, & started over?
> 
> also, are you sure the cards are properly socketed? Are you sure the slots are clean?
> 
> also, do not install experience after running DDU. Test again.
> 
> I had a similar issue with me 1080tis. Ended up being a dirty slot, and experience breaking programs at launch.
> 
> experience has been off my machine for 2 months now, and I haven't had a single issue since.
> 
> also, if I remember correctly. You wanna do the driver install with both cards in, and the bridge on.


----------



## skupples

correct.

DDU in in safe mode with restart option, then install driver of choice when back into normal windows, without experience. 

cross fingers,

report back.

(prays to the RNG gods)


----------



## Chobbit

It only bloomin worked  I've been testing the SLI for over an hour of benching and it's solid.

Thank you so much. Do we know why GeForce Experience can cause these conflicts? I thought it was just a game-settings optimisation tool that looks for driver updates.

You've saved me the hassle of stripping my system and getting a new card, thanks






skupples said:


> as i said in your other thread, have you used guru3d display driver uninstaller, & started over?
> 
> also, are you sure the cards are properly socketed? Are you sure the slots are clean?
> 
> also, do not install experience after running DDU. Test again.
> 
> I had a similar issue with me 1080tis. Ended up being a dirty slot, and experience breaking programs at launch.
> 
> experience has been off my machine for 2 months now, and I haven't had a single issue since.
> 
> also, if I remember correctly. You wanna do the driver install with both cards in, and the bridge on.





skupples said:


> correct.
> 
> DDU in in safe mode with restart option, then install driver of choice when back into normal windows, without experience.
> 
> cross fingers,
> 
> report back.
> 
> (prays to the RNG gods)


----------



## Zed03

Has anyone else noticed every TU102A card is sold out everywhere? Even well before Black Friday? Gigabyte, MSI, EVGA (except for the Kingpin) are all on backorder with "unknown time to restock".

I'm starting to think all these binned A chips are being put toward 2080 ti supers instead.


----------



## skupples

i have a theory about Experience, and that theory is that it's flipping insane and blocking content it thinks is pirated.

what titles did you have issues with black screening on launch? 90% of my steam titles worked, but anything with an external launcher (social club, origin, ubi) wouldn't work. 

bro, i spent thousands on this issue.

new board, GPUs; it didn't click for me until I had zero issues with a 5700 XT. IDK what the true issue is, but I think Experience is acting as a content filter / anti-piracy tool as well.

I was thinking I had waterlogged one or both of my 1080ti's, cuz I had a minor leak. turns out the issues were experience all along.


----------



## Zed03

The Ubi launcher doesn't start if Windows is in test mode or has kernel debugging enabled. Did you have that on?


----------



## Chobbit

Yikes, sorry mate, that's rough, and over something as stupid as a game.

Pretty much all my games are Steam-bought. It was doing it on anything and nothing; sometimes it would just crash the card, and if SLI was enabled it would crash the other card and my display with it.

I'm so glad it's sorted and simple, you're a lifesaver.



skupples said:


> i have a theory about experience, that theory is that is flipping inane and blocking content it thinks is pirated.
> 
> what titles did you have issues with black screening on launch? 90% of my steam titles worked, but anything with an external launcher (social club, origin, ubi) wouldn't work.
> 
> bro, i spent thousands on this issue.
> 
> new board, GPUs, it didn't click for me until I had zero issues with 5700XT. IdK what the true issue is, but I think experience is acting as a content filter / anti-piracy tool as well.
> 
> I was thinking I had waterlogged one or both of my 1080ti's, cuz I had a minor leak. turns out the issues were experience all along.


----------



## skupples

glad to hear DDU/no experience saved another life  


--------

nope nope nope

it's not the launchers that won't run, it's games from the launchers. They get stuck on a black screen during first launch, or crash out, or white screen. 

only known fix on my system is AMD or no Experience.

I'm wondering if it has anything to do with my Windows not being licensed. I'm truly curious if Nvidia is doing some sort of content blocking/moderation/filtering thru Experience, and it's confused by my not-licensed OS.


----------



## Chobbit

Yeah, my Seahawk EK cards (300A cards) were bought in June, and it wasn't long before stock had sold out everywhere I looked after that. 

The system and cooling were slowly built over the next few months. By the time I got the actual system working, I could only find one place selling them, for about £300 more each than I paid 6 months ago, and even then it's an import from Australia.

You're right, these cards are selling out fast and becoming rare quickly.



Zed03 said:


> Has anyone else noticed every TU102A card is sold out everywhere? Even well before black friday? Gigabyte, MSI, EVGA (except for Kingpin) all on backorder with "unknown time to restock".
> 
> I'm starting to think all these binned A chips are being put toward 2080 ti supers instead.


----------



## Sheyster

Chobbit said:


> It only bloomin worked  I've been testing the SLI for over an hour of benching and it's solid.
> 
> Thank you so much, do we know why Geforce Experience can cause these conflicts? I thought it was just a game setting optimisation tool and looks for driver updates.
> 
> You've saved me the hassle of stripping my system and getting a new card, thanks



If DDU had not worked, you should try a fresh Windows install before stripping the system down. I've seen instances where DDU does not work but a fresh Windows install does.

I agree with skupples: do not use GFE. The only benefit to it was the filters, and those are now available in the NVCP with the driver-only installation.


----------



## skupples

The only benefit was streaming to my TV in the other room via Shield, but given this current issue, combined with the 2017 Shield Pro being one of the best current homebrew boxes on earth? Not much love lost; at least the Shield still has a use. I really like their controller too; I wish they'd get around to adding Windows support, like they said they would years ago.


----------



## Chobbit

I've actually reinstalled Windows twice in the last 4 days lol, but obviously after you do the Windows install and install the chipset and mobo drivers, you tend to install graphics drivers, and naturally you would install Experience with them.

Bugger, I like Shadowplay to record and live stream. Can you install Shadowplay without Experience?


----------



## J7SC

Zed03 said:


> Has anyone else noticed every TU102A card is sold out everywhere? Even well before black friday? Gigabyte, MSI, EVGA (except for Kingpin) all on backorder with "unknown time to restock".
> 
> I'm starting to think all these binned A chips are being put toward 2080 ti supers instead.


 
...don't know where you are located, but several big retailers in the EU have been selling a lot of them and currently have them in stock, i.e. caseking.de


----------



## skupples

Chobbit said:


> I've actually reinstalled windows twice in the last 4 days lol but obviously after you do the windows install and install chipsets and mobo drivers you tend to install graphic drivers and naturally you would install experience with it.
> 
> Buggar I like shadowplay to record and live stream. Can you install shadow play without experience?


yeah, it's not the OS. It's something with Experience, and/or a certain combo of hardware and licensing/region? idk, it's a clusterfark, whatever it is, and we aren't the only ones to have the issue. You just don't hear about it much, as it seems to affect those running bootlegged stuff the most. 

guru3d recently re-released an overhauled Clean Install tool; it's your best bet for finding an answer if someone doesn't directly confirm with proof.


----------



## Sheyster

Chobbit said:


> I've actually reinstalled windows twice in the last 4 days lol but obviously after you do the windows install and install chipsets and mobo drivers you tend to install graphic drivers and naturally you would install experience with it.
> 
> Buggar I like shadowplay to record and live stream. Can you install shadow play without experience?



As far as I know Shadowplay is part of GFE, that's the only way to get it. Look for alternatives like OBS and others.


----------



## Apothysis

I've tried both the Trio 406W bios and the Galax 380W bios but my MSI 2080Ti Trio X still seems to be stuck at a power limit of ~110%? GPU-Z reads the updated version correctly and Afterburner lets me set the new limit but I just can't get it to go above 110%. It's under a waterblock and temps are in the lower 40s. Any suggestions?
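One thing worth checking when comparing flashed BIOSes: the slider percentage is relative to each BIOS's own default power target, so the same watt ceiling shows up as different percentages. A quick converter, with assumed example numbers (the 330W default target used here is an assumption for illustration; check your board's actual default and max in GPU-Z's Advanced tab):

```python
# Convert a power-limit slider percentage to watts and back.
# The slider is relative to the BIOS's *default* power target, so a
# "110%" cap means different watts on different BIOSes.

def slider_to_watts(default_w: float, percent: float) -> float:
    return default_w * percent / 100

def watts_to_slider(default_w: float, watts: float) -> float:
    return watts / default_w * 100

# Assumed example: a 330 W default target capped at 110% lands near
# the ~360 W ceiling several owners report (330 * 1.10 = 363 W).
print(slider_to_watts(330, 110))  # -> 363.0
```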


----------



## JustinThyme

Zed03 said:


> Has anyone else noticed every TU102A card is sold out everywhere? Even well before black friday? Gigabyte, MSI, EVGA (except for Kingpin) all on backorder with "unknown time to restock".
> 
> I'm starting to think all these binned A chips are being put toward 2080 ti supers instead.


Plenty to be had in the US. I just randomly entered my part number and pretty much every retailer has them. I did note that a lot of retailers are not carrying the non-binned chips any longer. I'm guessing that is due to the other "Super" cards that have hit the market reducing demand for the lesser cards.


----------



## Zed03

JustinThyme said:


> Plenty to be had in the US. I just randomly entered my part number and pretty much every retailer has them. I did note that a lot of retailers are not carrying the non binned chips any longer. Im guessing that is due to otehr "super" cards that have hit the market reducing demand of the lesser cards.


I'm in Canada and I'm having trouble finding a Gigabyte 2080 TI WB 11g. My local shop has had it on back order for over 1.5 months  We even tried to find a blower/fan TU102A and order an EK block to put it on water manually, but those are sold out, too.

Eg:

https://www.newegg.ca/gigabyte-gefo...on=11g wb&cm_re=11g_wb-_-14-932-074-_-Product
https://www.memoryexpress.com/Products/MX74288 "special order" = not in stock
https://www.canadacomputers.com/product_info.php?cPath=43_557_559&item_id=129640 none online, just 1 at random physical location


----------



## J7SC

Zed03 said:


> I'm in Canada and I'm having trouble finding a Gigabyte 2080 TI WB 11g. My local shop has had it on back order for over 1.5 months  We even tried to find a blower/fan TU102A and order an EK block to put it on water manually, but those are sold out, too.
> 
> Eg:
> 
> https://www.newegg.ca/gigabyte-gefo...on=11g wb&cm_re=11g_wb-_-14-932-074-_-Product
> https://www.memoryexpress.com/Products/MX74288 "special order" = not in stock
> https://www.canadacomputers.com/product_info.php?cPath=43_557_559&item_id=129640 none online, just 1 at random physical location


 
I am in W. Canada and bought my two Gigabyte Aorus 2080 Ti WBs from Memoryexpress a year ago, but they never had that many to begin with. That said, I just checked Amazon.ca, there are a couple for sale...with Amazon Prime, should be ok https://www.amazon.ca/GIGABYTE-GeForce-WATERFORCE-GV-N208TAORUS-WB-11GC/dp/B07JVJHWMQ

Edit...says 2 new, but also in stock Dec 19?


----------



## JustinThyme

J7SC said:


> I am in W.Canada and bought my two Gigab. Aorus 2080 Ti WB from Memoryexpress a year ago, but they never had that many to begin with. That said, I just checked Amazon.ca, there are a couple for sale...with Amazon Prime, should be ok https://www.amazon.ca/GIGABYTE-GeForce-WATERFORCE-GV-N208TAORUS-WB-11GC/dp/B07JVJHWMQ
> 
> Edit...says 2 new, but also in stock Dec 19 ?


Those are a little more scarce but still available here. You guys in Saskatoon get good OC weather but get shafted on the retail end.

Anything with a block already on it becomes a rarity after sales have peaked and begun to wane. There are lots of others to choose from and you can put your own block on, but by the time you get it all together the backorder will most likely be over. Place your order and wait is all the advice I can offer, or wait for the next generation. Right now is probably the worst time of the year for stock levels on just about anything: after Black Friday and before Christmas.


----------



## keikei

Chobbit said:


> I've actually reinstalled windows twice in the last 4 days lol but obviously after you do the windows install and install chipsets and mobo drivers you tend to install graphic drivers and naturally you would install experience with it.
> 
> Buggar I like shadowplay to record and live stream. Can you install shadow play without experience?


Oh yeah? What's your channel? I'm not surprised Experience is causing some sort of issue. Like another member has mentioned, it seems like some sort of DRM shiet. At minimum, Green is tracking us. Blows my mind why it's associated with the GPU software. And why precisely do I need to 'log in'? That alone raised suspicion.


----------



## Lownage

Apothysis said:


> I've tried both the Trio 406W bios and the Galax 380W bios but my MSI 2080Ti Trio X still seems to be stuck at a power limit of ~110%? GPU-Z reads the updated version correctly and Afterburner lets me set the new limit but I just can't get it to go above 110%. It's under a waterblock and temps are in the lower 40s. Any suggestions?


I'm having the exact same issue. Doesn't matter which BIOS is installed, max power limit is ~350W.

The Galax BIOS gave me strange effects in Chrome (it turned white, couldn't use it anymore). 

Currently using the 406W BIOS @ 2115MHz @ 1.05V curve, +900 VRAM (Micron); max power draw I've seen is 360W.

Did you try the 450W Lightning Z BIOS? I read somewhere that it is usable on the Trio as well. It has even better memory timings. Had no balls to try it though...


----------



## mattxx88

Apothysis said:


> I've tried both the Trio 406W bios and the Galax 380W bios but my MSI 2080Ti Trio X still seems to be stuck at a power limit of ~110%? GPU-Z reads the updated version correctly and Afterburner lets me set the new limit but I just can't get it to go above 110%. It's under a waterblock and temps are in the lower 40s. Any suggestions?





Lownage said:


> I´m having the exat same issue. Doesn´t matter which bios is installed, max powerlimit is ~350W
> 
> Galax bios gave me strange effects in Chrome (turned white, couldn´t use it anymore).
> 
> 
> Actually using the 406W bios @ 2115Mhz @ 1,05V Curve +900 VRAM (Micron) max powerdraw I´ve seen is 360W.
> 
> Did you try the 450 Lightning Z bios? I read somewhere that this is usable on the Trio as well. It even has better memory timings. Had no balls to try it though...


you are in good company, 3 of us here with the same issue

X Trio with waterblock, max power draw 360W
I am using the Lightning BIOS, 2130MHz @ 1.050V curve, +1300 VRAM (Samsung) 

with the 400W BIOS I can push the RAM to +1500, but the Lightning has better timings


----------



## chibi

Anyone receive a newer Founders Edition card recently? Just wondering if the 380W Bios is compatible with the newer FE card? I see they just came back in stock yesterday and my finger is itching to order.

I read a few pages back about a newer aib card not being able to flash that older bios.


----------



## skupples

Moonlight or Steam streaming are the best replacements for Shadowplay & Shield streaming; however, neither matches the quality, though they claim to use similar compression tech.


----------



## Apothysis

Lownage said:


> I´m having the exat same issue. Doesn´t matter which bios is installed, max powerlimit is ~350W
> 
> Galax bios gave me strange effects in Chrome (turned white, couldn´t use it anymore).
> 
> Actually using the 406W bios @ 2115Mhz @ 1,05V Curve +900 VRAM (Micron) max powerdraw I´ve seen is 360W.
> 
> Did you try the 450 Lightning Z bios? I read somewhere that this is usable on the Trio as well. It has even better memory timings. Had no balls to try it though...





mattxx88 said:


> you are in good company, 3 here with the same issue
> 
> X Trio with waterblock, max power draw 360w
> i am using Lightning bios 2130mhz @1,050v curve +1300 VRAM (Samsung)
> 
> with 400w bios i can push ram to +1500 but Lightning have better timings



Interesting. I have Samsung memory on mine, running a fixed +120 core and +1300 VRAM. Also getting 2130 (2115 above 40C). I'll give the Lightning BIOS a try!


----------



## gfunkernaught

I want to clarify something I've been reading about newer cards not taking an older BIOS. So the 2080s and Tis that were recently manufactured won't accept a BIOS from late 2018 / early 2019? Maybe I got luckier than I thought with my card. It would be a bummer not being able to flash.


----------



## J7SC

JustinThyme said:


> Those are a little more scarce but still available here. You guys in Saskatoon get good OC weather but shafted in the retail end.
> 
> Anything with a block already on it becomes a rarity after sales have peaked and begun to wain. Lots of others to choose from and you can put your own block on but by the time you get it all together the back order will be over most likely. Make your order and wait is all the advice I can offer or wait for next generation. Right now is probably the worst time of the year for stock levels on just about anything. After Black Friday and Before Christmas.


 
...While I've been to Saskatoon in the winter (brrr), I actually live on Canada's Pacific Coast >> we like our snow on the mountains, not in our driveways  That said, it has been chilly by our standards lately.

I actually got a decent price on the two Aorus WBs late last year. One was open box (C$100 off; the original purchaser thought it was an AIO and returned it a day later). Shortly after, prices started to kick up and availability dropped...but generally speaking, CDN prices (and availability) are worse than in the US for most computer components. I always wonder when folks go on about their local Microcenter etc... In any case, I am still enjoying these cards tremendously, and beginning to have some 'discovery-fun' (and the odd frustration) with the CFR checkerboard SLI/NVLink.


----------



## bp7178

Lownage said:


> Galax bios gave me strange effects in Chrome (turned white, couldn´t use it anymore).


Had a similar problem myself after flashing an Asus Matrix BIOS onto a Strix card. When I flashed the Strix BIOS back onto the card, Chrome would open then go all black. If I flashed the Matrix BIOS back onto the card, Chrome would work fine. 

I would have to disable the card in Device Manager, open Chrome, then turn off hardware acceleration to make it usable. Ultimately I got the Strix BIOS to work again by flashing the card and reinstalling Chrome. 



gfunkernaught said:


> I want to clarify something I've been reading about newer cards not taking older bios. So the 2080s and ti's that are recently manufactured won't accept bios from late 2018 early 2019? Maybe I got luckier than I thought with my card. It would be a bummer not being able to flash.


I couldn't flash certain older BIOS files onto two recently purchased Asus Strix 2080 Ti cards. Anecdotally, when I flashed FE cards way back when the 2080 Ti was released, I don't remember having any issues doing so. It was way more of a pain with the Strix cards, which were purchased in the last month or so. I ended up returning the two Strix cards, as the coil whine was a bit much for a $1300 card for me. I replaced them with an EVGA FTW3 Ultra 2080 Ti and have been much happier. With the 377-watt BIOS it comes with, I don't see a need to flash it anymore.


----------



## dph314

bp7178 said:


> Had a similar problem myself after flashing an Asus Matrix BIOS onto a Strix card. When I flashed the Strix BIOS back onto the card, Chrome would open and then go all black. If I flashed the Matrix BIOS back onto the card, Chrome would work fine.
> 
> I would have to disable the card in device manager, open Chrome, then turn off hardware acceleration to make it usable. Ultimately I got the Strix BIOS to work again by flashing the card and reinstalling Chrome.
> 
> 
> 
> I couldn't flash certain older BIOS files onto two recently purchased Asus Strix 2080 Ti cards. Anecdotally, when I flashed FE cards way back when the 2080 Ti was released, I don't remember having any issues. It was much more of a pain with the Strix cards purchased in the last month or so. I ended up returning both Strix cards, as the coil whine was a bit much for a $1,300 card. I replaced them with an EVGA FTW3 Ultra 2080 Ti and have been much happier. With the 377-watt BIOS it comes with, I don't see a need to flash it anymore.


What is your max voltage / core clock that doesn't throttle in a game like, say, RDR2 on Ultra? Sorry if I missed it in another post. But yeah I'm stuck with 338W and the highest I could go without hitting the Power Limit at all is 1006mv, and the best I can do at that is 2010mhz. I figure if a 2-second BIOS-flash is all I need for another voltage step or two then why not? I really gotta get to Gigabyte's site and try the Gaming OC BIOS, just been kinda busy this week. Temperatures aren't a problem yet, could keep it at 63C with a bit more voltage.


----------



## Lownage

mattxx88 said:


> you are in good company, 3 here with the same issue
> 
> X Trio with waterblock, max power draw 360W
> I am using the Lightning bios, 2130MHz @ 1.050V curve, +1300 VRAM (Samsung)
> 
> With the 400W bios I can push the RAM to +1500, but the Lightning has better timings


Which of the Lightning BIOSes are you using? There are plenty, and some of them even have problems flashing when no fans are plugged in.

This seems to be the fixed one: https://www.dropbox.com/s/8s0vewi7ggymmfb/V377_FanRF_Ln2.7z?dl=0 (https://community.hwbot.org/topic/188615-msi-rtx-2080-ti-lightning-thread/)


----------



## mattxx88

Lownage said:


> Which of the Lightning BIOSes are you using? There are plenty, and some of them even have problems flashing when no fans are plugged in.
> 
> This seems to be the fixed one: https://www.dropbox.com/s/8s0vewi7ggymmfb/V377_FanRF_Ln2.7z?dl=0 (https://community.hwbot.org/topic/188615-msi-rtx-2080-ti-lightning-thread/)


If I remember right, I am using this version, post #13:

https://forum-en.msi.com/index.php?topic=314643.0

Tonight when I get back from work I'll check more carefully


----------



## Apothysis

Lownage said:


> Which of the Lightning BIOSes are you using? There are plenty, and some of them even have problems flashing when no fans are plugged in.
> 
> This seems to be the fixed one: https://www.dropbox.com/s/8s0vewi7ggymmfb/V377_FanRF_Ln2.7z?dl=0 (https://community.hwbot.org/topic/188615-msi-rtx-2080-ti-lightning-thread/)



I'll try this one out as well. I just tried the 350W/380W bios on TechPowerup and that at least seems to have gotten my power limit up to 330W with some spikes to 340W which is a better result than I've seen on stock, the 406W trio, and the GALAX 380W bios.


Update: Worse result for me, getting 300-310W.


----------



## renhanxue

bp7178 said:


> I couldn't flash certain older BIOS files onto two recently purchased Asus Strix 2080 Ti cards. Anecdotally, when I flashed FE cards way back with the 2080 Ti was released I don't remember having any issues in doing so. It was way more of a pain with Strix cards which were purchased in the last month or so. I ended up returning two Strix cards as the coil whine was a bit much for a $1300 card for me. I replaced it with an EVGA FTW3 Ultra 2080 Ti and have been much happier. With the 377 watt BIOS it comes with I don't see a need to flash it anymore.


I bought a 2080Ti Strix just last week and the VBIOS version (with the switch in performance mode) is 90.02.17.00.47, modification date 11/22/18. That's identical to the one on Techpowerup for this card. Was that the one that your cards had? The Matrix VBIOS on Techpowerup is newer.

I haven't tried flashing it yet but I will probably do so in the next few days - I'll report back with my results.


----------



## Spiriva

Apothysis said:


> I'll try this one out as well. I just tried the 350W/380W bios on TechPowerup and that at least seems to have gotten my power limit up to 330W with some spikes to 340W which is a better result than I've seen on stock, the 406W trio, and the GALAX 380W bios.
> 
> 
> Update: Worse result for me, getting 300-310W.


For my Nvidia FE 2080 Ti I flashed the "Galax RTX 2080 Ti HOF OC Lab Custom PCB (3x8-Pin) 2000W x 100% Power Target BIOS (2000W)" BIOS. It works fine; during 3DMark it can pull around 630W, running at ~2180-2200MHz. For normal use, when I'm not benchmarking, the card pulls around 400-500W with this BIOS. Memory is set to +1000.
For me this is the best BIOS I've tried. The 380W BIOS was okay, but my card wouldn't pass 2145MHz and the memory wouldn't stay stable over +600.

Watercooled.
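For anyone translating the memory numbers in posts like this (+1000, "16000 mem", etc.): Afterburner reports GDDR6 at half its effective data rate, so a stock 2080 Ti reads 7000MHz for 14000MT/s effective. A small sketch of the conversion (the helper name is mine):

```python
def effective_gddr6_rate(offset_mhz, base_mhz=7000):
    """Convert an Afterburner memory offset to the effective (DDR) data rate.

    Afterburner shows GDDR6 at half the effective rate: a stock
    RTX 2080 Ti reads 7000 MHz for 14000 MT/s effective.
    """
    return 2 * (base_mhz + offset_mhz)

print(effective_gddr6_rate(0))     # stock: 14000
print(effective_gddr6_rate(1000))  # +1000 offset: 16000
```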


----------



## bp7178

dph314 said:


> What is your max voltage / core clock that doesn't throttle in a game like, say, RDR2 on Ultra? Sorry if I missed it in another post. But yeah I'm stuck with 338W and the highest I could go without hitting the Power Limit at all is 1006mv, and the best I can do at that is 2010mhz. I figure if a 2-second BIOS-flash is all I need for another voltage step or two then why not? I really gotta get to Gigabyte's site and try the Gaming OC BIOS, just been kinda busy this week. Temperatures aren't a problem yet, could keep it at 63C with a bit more voltage.


IMHO, if you are at 63C there's not much point in getting too far into the weeds with power limits. If you can keep the card under 40C, then sure. Once you cross 40C it's going to start dropping clocks anyway. You may improve short-duration performance, say for a benchmark, but in games it's not worth the hassle.


----------



## MikeJeffries

Hi guys!

Anyone here have the Gigabyte Aorus Extreme 2080 Ti?
If so, what kind of overclocks have you hit for the memory and core clock?


Sent from my iPhone using Tapatalk


----------



## J7SC

MikeJeffries said:


> Hi guys!
> 
> Anyone here have the Gigabyte Aorus Extreme 2080 Ti?
> If so, what kind of overclocks have you hit for the memory and core clock?
> 
> 
> Sent from my iPhone using Tapatalk




There are three versions (air, AIO, factory full water block) of the 'Gigabyte Aorus Extreme 2080 Ti'. I have two of the full water block ones. Max VRAM is 2058 (for best performance before diminishing returns). GPU clocks are here (note the extensive GPU loop keeping temps in the 30s under load): https://www.overclock.net/forum/69-nvidia/1706276-official-nvidia-rtx-2080-ti-owner-s-club-963.html#post28218142


----------



## MikeJeffries

Yeah, I meant the tri-fan one


Sent from my iPhone using Tapatalk


----------



## dangerSK

Apothysis said:


> I'll try this one out as well. I just tried the 350W/380W bios on TechPowerup and that at least seems to have gotten my power limit up to 330W with some spikes to 340W which is a better result than I've seen on stock, the 406W trio, and the GALAX 380W bios.
> 
> 
> Update: Worse result for me, getting 300-310W.


Well, it's the no-bug BIOS. If you want max performance you need the Lightning NDA BIOS (edit: the Lightning 1000W BIOS is only for XOCers) or the HOF 2000W BIOS.


----------



## MonarchX

Lightning NDA?


----------



## gtz

Just got my new 2080Ti in the mail. Can't wait to mess with it. Got a new Zotac AMP.


----------



## skupples

ooof

Hope it turns out better than the only Zotac card I've ever owned. The blower was so bad that the card spent 99% of its life thermal-throttled to hell and back.

Wake me up when 4K120 hits, folks. Got my 3440x1440 @ 120Hz all set up, and good lord I love being back on high refresh rates, but mother of god I already miss super shiny 4K with no AA nonsense.


----------



## Xyxyll

Has anyone learned of any successful BIOS options for the EVGA FTW3 Ultra? 373W isn't bad, but it could be so much better.

The FTW3 Ultra is the 19 power phase custom PCB.


----------



## chibi

Xyxyll said:


> Has anyone learned of any successful BIOS options for the EVGA FTW3 Ultra? 373W isn't bad, but it could be so much better.
> 
> The FTW3 Ultra is the 19 power phase custom PCB.



Does the Kingpin bios work on it?


----------



## ntuason

chibi said:


> Does the Kingpin bios work on it?


I've tried the Kingpin XOC on my FTW3 and it works. The only problem is you lose control of your fans; they're set to 100% at all times. I'd only recommend it if you're on water.


----------



## gtz

skupples said:


> ooof
> 
> Hope it turns out better than the only Zotac card I've ever owned. The blower was so bad that the card spent 99% of its life thermal-throttled to hell and back


To be fair, any high-powered card with a blower should be outlawed. Back when I bought my 1080 Ti, all I could get at retail price was an EVGA blower style (crypto pricing was horrible) and I had a similar experience.

The Zotac I purchased is a triple fan design. Not the prettiest, but at $850 new it was a no-brainer.


----------



## Xyxyll

ntuason said:


> I've tried the Kingpin XOC on my FTW3 and it works. The only problem is you lose control of your fans; they're set to 100% at all times. I'd only recommend it if you're on water.


Wow, I didn't think the Kingpin BIOS would work, considering they're different PCBs. Thanks.

I'm beginning to think I should have gone with a reference board 2080 Ti.


----------



## skupples

gtz said:


> To be fair, any high-powered card with a blower should be outlawed. Back when I bought my 1080 Ti, all I could get at retail price was an EVGA blower style (crypto pricing was horrible) and I had a similar experience.
> 
> The Zotac I purchased is a triple fan design. Not the prettiest, but at $850 new it was a no-brainer.


Ok, cool. I didn't see which cooler the AMP came with.


----------



## bp7178

Xyxyll said:


> Has anyone learned of any successful BIOS options for the EVGA FTW3 Ultra? 373W isn't bad, but it could be so much better.
> 
> The FTW3 Ultra is the 19 power phase custom PCB.


Unless you are using a full-cover waterblock, it's a waste of time to go beyond 373W. Temps will throttle you back long before power does on air cooling. If anything, that power limit is keeping temps in check.


----------



## Xyxyll

bp7178 said:


> Unless you are using a full-cover waterblock, it's a waste of time to go beyond 373W. Temps will throttle you back long before power does on air cooling. If anything, that power limit is keeping temps in check.


You're not wrong. I'm just eyeing this temperature headroom, since the giant air cooler is keeping it under 60C.


----------



## Renegade5399

Xyxyll said:


> You're not wrong. I'm just eyeing this temperature headroom, since the giant air cooler is keeping it under 60C.


I went through about 10 different BIOS files on these cards until I gave up. Some gave more core, but killed the RAM performance. Others seemed to just not work right. I ended up just taking the latest OEM BIOS and got a waterblock along with performing the shunt mod. The card I kept in the machine and modded runs at 2070 core/16000 mem and hits 49°C core at load. If it ticks up to 52°C due to room temps, it drops to 2055 core. The power draw with the GPU at 100% load during a stress test can hit almost 600W. Under normal gaming conditions it pulls about 360-470W.


----------



## Sheyster

Renegade5399 said:


> I went through about 10 different BIOS files on these cards until I gave up. Some gave more core, but killed the RAM performance. Others seemed to just not work right. I ended up just taking the latest OEM BIOS and got a waterblock along with performing the shunt mod. The card I kept in the machine and modded runs at 2070 core/16000 mem and hits 49°C core at load. If it ticks up to 52°C due to room temps, it drops to 2055 core. The power draw with the GPU at 100% load during a stress test can hit almost 600W. Under normal gaming conditions it pulls about 360-470W.


I know some say the 380W Galax BIOS has loose timings. This is not necessarily a bad thing, as you might be able to OC the VRAM higher with looser timings. Which BIOS did you find best for VRAM, timings-wise?


----------



## ESRCJ

Sheyster said:


> I know some say the 380W Galax BIOS has loose timings. This is not necessarily a bad thing, as you might be able to OC the VRAM higher with looser timings. Which BIOS did you find best for VRAM, timings-wise?


I haven't tested all of them, but among those I have, the Gigabyte Aorus Extreme (366W) and Asus Matrix BIOSes come to mind. I have also found that I get better performance with the Galax 380W BIOS if I use Galax Xtreme Tuner instead of MSI AB. The one downside is there's no V/F curve editing, and your vcore won't go past 1.063V as a result, so you will be limited a little. Despite this, I still get better benchmark results.
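The voltage cap mentioned above effectively truncates the V/F curve: whatever clock the curve reaches at the cap is all you get, no matter how much thermal headroom remains. A toy illustration (the curve points below are made-up numbers for the example, not measurements from any card):

```python
def max_clock_at_cap(vf_curve, vcap_mv):
    """Highest boost clock reachable when vcore is capped.

    vf_curve: list of (voltage_mv, clock_mhz) points, ascending voltage.
    """
    return max(clock for volt, clock in vf_curve if volt <= vcap_mv)

# Made-up example curve: each ~31 mV step buys one 15 MHz bin.
curve = [(1000, 1995), (1031, 2010), (1063, 2025), (1093, 2040)]
print(max_clock_at_cap(curve, 1063))  # capped at 1.063 V -> 2025
print(max_clock_at_cap(curve, 1100))  # uncapped -> 2040
```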


----------



## sblantipodi

A question for the experts:
do you think the upcoming RTX 3080 Ti will use PCIe 4.0, and will it benefit from the added bandwidth of PCIe 4.0?


----------



## ZealotKi11er

sblantipodi said:


> A question for the experts:
> do you think the upcoming RTX 3080 Ti will use PCIe 4.0, and will it benefit from the added bandwidth of PCIe 4.0?


It depends. Nvidia would probably need to validate it on AMD motherboards. I can tell you for sure they have been using Intel platforms in their labs for at least 7 years.


----------



## Dennis1989

I have a 2080 Ti XC Gaming (338W) from EVGA and I am looking for a BIOS. The card has the new XUSB firmware, so I cannot flash an old BIOS. Is there a newer BIOS that I can flash?


----------



## Jpmboy

Can a single 2080 Ti saturate PCIe 3.0 at any single-panel resolution? (Yeah, 3x 4K panels can, but that's just academic.)


----------



## JustinThyme

Jpmboy said:


> Can a single 2080 Ti saturate PCIe 3.0 at any single-panel resolution? (Yeah, 3x 4K panels can, but that's just academic.)


Not that I've noticed. It barely saturates PCIe 3.0 at x8. With the 1080 Ti there was no difference for me between x8 and x16; with the 2080 Ti there's a 6% bump at x16. Not much, but enough to note that it does just about saturate x8 PCIe 3.0.
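The x8 vs. x16 observation lines up with the raw link math: PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding, so x16 tops out around 15.75 GB/s per direction and x8 at about half that. A quick sanity-check sketch (the function name is mine):

```python
def pcie3_bandwidth_gb_s(lanes):
    """Theoretical one-direction PCIe 3.0 bandwidth in GB/s.

    8 GT/s per lane, 128b/130b line encoding, 8 bits per byte.
    """
    return lanes * 8 * (128 / 130) / 8

print(round(pcie3_bandwidth_gb_s(16), 2))  # x16: ~15.75 GB/s
print(round(pcie3_bandwidth_gb_s(8), 2))   # x8:  ~7.88 GB/s
```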


----------



## jeanjean15

Spiriva said:


> For my Nvidia FE 2080 Ti I flashed the "Galax RTX 2080 Ti HOF OC Lab Custom PCB (3x8-Pin) 2000W x 100% Power Target BIOS (2000W)" BIOS. It works fine; during 3DMark it can pull around 630W, running at ~2180-2200MHz. For normal use, when I'm not benchmarking, the card pulls around 400-500W with this BIOS. Memory is set to +1000.
> For me this is the best BIOS I've tried. The 380W BIOS was okay, but my card wouldn't pass 2145MHz and the memory wouldn't stay stable over +600.
> 
> Watercooled.


Hi.

Where is it possible to download this BIOS?

Indeed, it isn't available on the first page anymore.

Thanks in advance.


----------



## Dennis1989

dph314 said:


> What is your max voltage / core clock that doesn't throttle in a game like, say, RDR2 on Ultra? Sorry if I missed it in another post. But yeah I'm stuck with 338W and the highest I could go without hitting the Power Limit at all is 1006mv, and the best I can do at that is 2010mhz. I figure if a 2-second BIOS-flash is all I need for another voltage step or two then why not? I really gotta get to Gigabyte's site and try the Gaming OC BIOS, just been kinda busy this week. Temperatures aren't a problem yet, could keep it at 63C with a bit more voltage.


Did you already try it? I'm facing the same problem.


----------



## VPII

For me the problem with the RTX 2080 Ti is temps. I flashed my card with the Strix XOC BIOS, so basically a 1000W limit, and it works great if I can keep temps at or below 40C. The moment it reaches 41C, my core clock drops by 15MHz. Interestingly, if I drop my card's rad in a bucket of ice water and clock the card +150MHz over the stock 2010MHz (which should mean 2160MHz), it actually sits at 2175MHz at the start of a bench until the card reaches 33-36C. I have a water cooler fitted to the card (NZXT Kraken G12 with a Corsair H110) and temps are kept pretty well, but unfortunately with the high ambient now (28-35C), temps sometimes reach 43C.
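The 15MHz drop described above is GPU Boost shaving one clock bin as the core crosses a temperature threshold. A rough toy model of that behavior (the threshold temperatures and bin spacing here are illustrative assumptions, not Nvidia's actual tables):

```python
def boosted_clock(set_clock_mhz, temp_c, bin_mhz=15, thresholds=(41, 47, 53, 59)):
    """Model GPU Boost shaving one clock bin per crossed temperature threshold."""
    drops = sum(1 for t in thresholds if temp_c >= t)
    return set_clock_mhz - drops * bin_mhz

print(boosted_clock(2160, 39))  # below 41C: full 2160
print(boosted_clock(2160, 41))  # first bin dropped: 2145
print(boosted_clock(2160, 43))  # still only one bin: 2145
```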


----------



## TK421

Renegade5399 said:


> I went through about 10 different BIOS files on these cards until I gave up. Some gave more core, but killed the RAM performance. Others seemed to just not work right. I ended up just taking the latest OEM BIOS and got a waterblock along with performing the shunt mod. The card I kept in the machine and modded runs at 2070 core/16000 mem and hits 49°C core at load. If it ticks up to 52°C due to room temps, it drops to 2055 core. The power draw with the GPU at 100% load during a stress test can hit almost 600W. Under normal gaming conditions it pulls about 360-470W.





Sheyster said:


> I know some say the 380W GLAX BIOS has loose timings. This is not necessarily a bad thing as you might be able to OC the VRAM higher with looser timings. What BIOS did you find best for VRAM, timings wise?







Wait, it has looser mem timings?

I was able to get +1300 stable on my Samsung 2080 Ti XC Ultra.

What other BIOSes should I look at? I want to know the VRAM timing differences between all of them.


----------



## jeanjean15

VPII said:


> For me the problem with the RTX 2080 Ti is temps. I flashed my card with the Strix XOC BIOS, so basically a 1000W limit, and it works great if I can keep temps at or below 40C. The moment it reaches 41C, my core clock drops by 15MHz. Interestingly, if I drop my card's rad in a bucket of ice water and clock the card +150MHz over the stock 2010MHz (which should mean 2160MHz), it actually sits at 2175MHz at the start of a bench until the card reaches 33-36C. I have a water cooler fitted to the card (NZXT Kraken G12 with a Corsair H110) and temps are kept pretty well, but unfortunately with the high ambient now (28-35C), temps sometimes reach 43C.


Hi.

Does your card have a reference PCB?

I would be interested in flashing mine with this BIOS.


----------



## Spiriva

jeanjean15 said:


> Hi.
> 
> Where is it possible to download this BIOS?
> 
> Indeed, it isn't available on the first page anymore.
> 
> Thanks in advance.


I sent you a PM with the bios.


----------



## jeanjean15

Spiriva said:


> I sent you a PM with the bios.



Thank you.

Did you also try the BIOS of the ASUS RTX 2080 Ti Strix OC Custom PCB?

If so, is it working well?


----------



## Spiriva

jeanjean15 said:


> Thank you.
> 
> Did you also try the BIOS of the ASUS RTX 2080 Ti Strix OC Custom PCB?
> 
> If so, is it working well?


I tried the Galax 380W BIOS, the Kingpin XOC BIOS, and then the Galax HOF OC.
For me the HOF OC was the best; keep in mind my card is watercooled (EK block) with dual 480 rads (16 fans).

On air I would stick with the Galax 380W BIOS.


----------



## jeanjean15

Spiriva said:


> I tried the Galax 380W BIOS, the Kingpin XOC BIOS, and then the Galax HOF OC.
> For me the HOF OC was the best; keep in mind my card is watercooled (EK block) with dual 480 rads (16 fans).
> 
> On air I would stick with the Galax 380W BIOS.



Thanks for the advice.

However, my card is EK watercooled too, with one 280, one 240 and one 360 rad. lol.

And I put my PC outside, at an ambient temperature of 5°C, when I want to do some benchmarking. lol.


----------



## GTANY

Could someone upload the following BIOS: Palit RTX 2080 Ti Dual (Non-A) Reference PCB (2x8-Pin) 250W x 124% Power Target BIOS (310W)?

Indeed, I downloaded the file from the source on Palit's site, but I expect the executable not to be compatible with any brand other than Palit (mine will be a Gigabyte).
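For anyone puzzled by the "250W x 124% Power Target (310W)" naming convention in these BIOS listings, it is just the default TDP multiplied by the maxed-out power-target slider. A trivial sketch (the helper name is mine):

```python
def max_board_power_w(default_tdp_w, max_power_target_pct):
    """Maximum board power with the power-target slider maxed out."""
    return default_tdp_w * max_power_target_pct / 100

print(max_board_power_w(250, 124))  # reference 250W BIOS at 124%: 310.0
```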


----------



## GRABibus

Can the Galax HOF 3x8-pin 2000W BIOS be flashed on my MSI Sea Hawk 2080 Ti (which is a 2x8-pin card)? This is the Sea Hawk X version, not the EK one.


----------



## J-P Saari

*What's wrong with this picture: 2080 Ti Lightning Z, LN2 BIOS, stress testing*

Hi all

I have a 2080 Ti Lightning Z on the LN2 BIOS.

Why is my power limit only 280W? Is it because of the temperature? I'm waiting for my Bitspower GPU block to watercool the Lightning Z.

Also, any idea why Afterburner no longer shows "LN2" after the GPU name like it used to?

Thanks for your input.

(Edit) Actually, in this picture I was testing the Kingpin BIOS. The situation was the same with the LN2 BIOS.


----------



## VPII

jeanjean15 said:


> Hi .
> 
> Your card has got a reference PCB ?
> 
> Would be interested in flashing mine with this bios .


Yes, I believe it is a reference PCB. I found that the XOC BIOS helps a lot with keeping the card at the set overclock right through benches, if you can keep your temps at 40C max.


----------



## dangerSK

J-P Saari said:


> Hi all
> 
> I have a 2080 Ti Lightning Z on the LN2 BIOS.
> 
> Why is my power limit only 280W? Is it because of the temperature? I'm waiting for my Bitspower GPU block to watercool the Lightning Z.
> 
> Also, any idea why Afterburner no longer shows "LN2" after the GPU name like it used to?
> 
> Thanks for your input.
> 
> (Edit) Actually, in this picture I was testing the Kingpin BIOS. The situation was the same with the LN2 BIOS.


Reflash the LN2 BIOS again and check it. Also, what kind of load are you running?


----------



## jeanjean15

VPII said:


> Yes I believe it is a reference PCB. I found that the XOC bios helps a lot keeping the speed at the set overclock right through benches if you can keep your temps at 40C max.


I tried to flash the "ASUS RTX 2080 Ti Strix OC Custom PCB" BIOS, but it didn't succeed. Indeed, the flash failed (black screen!) and I had to restore my old BIOS by booting from a second card.

So be careful with this BIOS.

However, the XOC BIOS from Galax can be flashed without any problem, but it didn't improve the overclocking capability of my card much. Very similar to the 380W Galax BIOS in the end.

For your information, I have a card with a reference PCB.


----------



## VPII

jeanjean15 said:


> I tried to flash the "ASUS RTX 2080 Ti Strix OC Custom PCB" BIOS, but it didn't succeed. Indeed, the flash failed (black screen!) and I had to restore my old BIOS by booting from a second card.
> 
> So be careful with this BIOS.
> 
> However, the XOC BIOS from Galax can be flashed without any problem, but it didn't improve the overclocking capability of my card much. Very similar to the 380W Galax BIOS in the end.
> 
> For your information, I have a card with a reference PCB.


The XOC BIOS won't improve overclocking, as you do not have control over the voltage to the GPU; what it does is keep your core clock constant under load, since you never hit the power limit.


----------



## Nano2k

Hey guys, hope someone can help me out.

I've been running my Gainward GS with the Galax 380W BIOS for almost a year without issues. I run it undervolted at 0.925V; clocks are stable around 1960-1980MHz, temps below 70°C on the stock air cooler.

The other day while I was playing Dead Space 3 (I had never finished it...) it crashed with the X0 artifacts. I thought it might have had to do with it running at low volts and clocks because the game didn't load the card enough.
Rebooted and could finish the game without issues.

Today as I powered on my computer, it either gave me the X0 artifacts at the Windows login or on the desktop and would hang, with no blue screen.

I booted using my CPU's integrated graphics and flashed the original Gainward BIOS, and it seems to be working OK again...

What do you think: is my card dying, or can these issues come from a software/driver problem?

Thanks for the help!


----------



## Sheyster

VPII said:


> The XOC BIOS won't improve overclocking, as you do not have control over the voltage to the GPU; what it does is keep your core clock constant under load, since you never hit the power limit.


It won't help with temperature issues and associated bin drops. Almost no point in using it unless your card is in a loop.


----------



## keikei

https://www.techpowerup.com/261996/nvidia-releases-geforce-441-66-whql-game-ready-drivers


----------



## J-P Saari

J-P Saari said:


> Hi all
> 
> I have a 2080 Ti Lightning Z on the LN2 BIOS.
> 
> Why is my power limit only 280W? Is it because of the temperature? I'm waiting for my Bitspower GPU block to watercool the Lightning Z.
> 
> Also, any idea why Afterburner no longer shows "LN2" after the GPU name like it used to?
> 
> Thanks for your input.
> 
> (Edit) Actually, in this picture I was testing the Kingpin BIOS. The situation was the same with the LN2 BIOS.





dangerSK said:


> Reflash the LN2 BIOS again and check it. Also, what kind of load are you running?


Hi

I was running Gears of War 5 at well over 5K resolution to maximize the load.

I've tried reflashing the BIOS. Are there any specific parameters I should use in nvflash? I've been using -6.


----------



## J-P Saari

J-P Saari said:


> Hi
> 
> I was running Gears of War 5 at well over 5K resolution to maximize the load.
> 
> I've tried reflashing the BIOS. Are there any specific parameters I should use in nvflash? I've been using -6.







EDIT: Can I use the Kingpin BIOS to unlock voltage?


----------



## VPII

Sheyster said:


> It won't help with temperature issues and associated bin drops. Almost no point in using it unless your card is in a loop.


NZXT Kraken G12 + Corsair H110i cooler, thank you. The highest temp I've seen with 32C ambient was 43C.


----------



## skupples

Yeah, but what about the rest of the board components? The core is only 1/3 of what needs to be cooled.


----------



## VPII

skupples said:


> yeah, what about the rest of the board components? Core is 1/3 of what needs to be cooled.


VRM and Ram at 40C max measured with an infrared thermometer.


----------



## bigjdubb

I have a question for those of you who have added a block to your 2080 Ti. Would a 360mm rad be overwhelmed by a CPU (3700X) and a 2080 Ti? I don't run either of them overclocked, other than sliding the power limit to the right on my 2080 Ti FTW3 Ultra.

I don't really need the 2080 Ti to be watercooled, because it is really quiet and runs cool with the massive air cooler it comes with, but I am going to be doing a loop for the CPU, so I was considering adding a block for the 2080 Ti. If adding the 2080 Ti means I have to crank the fans, then it isn't worth it, but I'm fine with keeping it in the 70-ish degree area it runs at now with the stock fan profile.

I have always been on the overkill side of things with my radiators, so I am not really sure how things run in the minimalist camp.


----------



## J7SC

bigjdubb said:


> I have a question for those of you who have added a block to your 2080 Ti. Would a 360mm rad be overwhelmed by a CPU (3700X) and a 2080 Ti? I don't run either of them overclocked, other than sliding the power limit to the right on my 2080 Ti FTW3 Ultra.
> 
> I don't really need the 2080 Ti to be watercooled, because it is really quiet and runs cool with the massive air cooler it comes with, but I am going to be doing a loop for the CPU, so I was considering adding a block for the 2080 Ti. If adding the 2080 Ti means I have to crank the fans, then it isn't worth it, but I'm fine with keeping it in the 70-ish degree area it runs at now with the stock fan profile.
> 
> I have always been on the overkill side of things with my radiators, so I am not really sure how things run in the minimalist camp.


 
I probably qualify for the 'overkill side of things' with rads and pumps (2x pumps, 3x thick 360s for the dual 2080 Ti loop), but I had all that w-cooling equipment from various earlier builds anyhow. However, when I first tested the cards individually on a 6700K test bench with a single thick 360 rad and 3x 120mm Noctua fans set to whisper quiet, the results were, IMO, pretty good...about 15-20C max above ambient under load for both CPU and GPU. Assuming you're talking about pairing the 2080 Ti with the 3700X (per your first sig rig), without a heavy OC it should work fine...better/quieter than straight GPU air cooling.


----------



## GRABibus

GRABibus said:


> Can the Galax HOF 3x8-pin 2000W BIOS be flashed on my MSI Sea Hawk 2080 Ti (which is a 2x8-pin card)? This is the Sea Hawk X version, not the EK one.


Has no one flashed the Galax RTX 2080 Ti HOF OC Lab Custom PCB (3x8-Pin) 2000W BIOS on a 2x8-pin reference PCB card (like my MSI Sea Hawk X)?
Any feedback?


----------



## Sheyster

bigjdubb said:


> I have a question for those of you who have added a block to your 2080ti. Would a 360mm rad be overwhelmed with a cpu (3700x) and a 2080ti? I don't run either one of them overclocked other than sliding the power limit to the right on my 2080ti FTW3 Ultra.
> 
> I don't really need the 2080ti to be watercooled because it is really quiet and runs cool with the massive air cooler it comes with but I am going to be doing a loop for the cpu so I was considering adding a block to the 2080ti. If adding the 2080ti means I have to crank the fans then it isn't worth it, but I'm fine with keeping it the 70ish degree area that it runs at now with the stock fan profile.
> 
> I have always been in the overkill side of things with my radiators so I am not really how things run in the minimalist camp.


Based on your use case I would not bother to watercool that card, especially this late in the game.

IMHO no watercooling is "minimalist". I am 100% aircooled with 5 GHz x 8 cores and over 2000 MHz on my 2080 Ti for gaming. That is as minimalist as it gets cooling-wise for this kind of performance. The icing on the cake: It's actually not very loud.


----------



## Spiriva

GRABibus said:


> Has no one flashed the Galax RTX 2080 Ti HOF OC Lab Custom PCB (3x8-Pin) 2000W BIOS on a 2x8-pin reference PCB card (like my MSI Sea Hawk X)?
> Any feedback?


I use that BIOS on my Nvidia FE 2080 Ti. It works well for me.


----------



## dangerSK

J-P Saari said:


> Hi
> 
> I was running gears of war 5 way over 5k resolution to maximize the usage.
> 
> I've tried reflashing the bios, Is there any specific parameters I should use in nvflash? I've been using -6


Nah, the -6 parameter is fine. Also, you can use the BIOS I'm attaching; it has been a good BIOS so far, with voltage options.


----------



## J-P Saari

dangerSK said:


> Nah, the -6 parameter is fine. Also, you can use the BIOS I'm attaching; it has been a good BIOS so far, with voltage options.


Cool thanks. I'll try it right now.


----------



## J-P Saari

dangerSK said:


> Nah, the -6 parameter is fine. Also, you can use the BIOS I'm attaching; it has been a good BIOS so far, with voltage options.


A quick test shows me potential in the voltage. However, the power limit seems to be 250W, am I right?


----------



## GRABibus

So I have flashed my MSI Sea Hawk X with the Galax HOF XOC 3x8-pin 2000W BIOS => it's OK.

And it is promising, as I could pass several sessions of Heaven and Time Spy at [email protected] with memory at 8300MHz.
Until now, with the Galax 380W BIOS, I was limited in those benchmarks to [email protected] with memory at 8200MHz; anything higher crashed.

Question: I use MSI AB, but each time I save an OC profile and launch it, BSOD!

Do we have to use another OC tool than MSI AB with this BIOS, or should MSI AB work (in which case I have an issue...)?

Thank you.


----------



## GRABibus

Spiriva said:


> I use that bios on my Nvidia FE 2080ti. Works good for me.


Thanks.


----------



## GRABibus

GRABibus said:


> So I have flashed my MSI Sea Hawk X with HOF XOC Galax 3x8 pin 2000W Bios => It is ok.
> 
> And it is promising as I could pass several sessions of Heaven and Time Spy at [email protected] and Memory at 8300MHz.
> until now, with Galax 380W Bios, I was limited in those benchmarks at [email protected] and memory at 8200MHz. anything higher, crash.
> 
> question : I use MSI AB.
> But each time I save an OC profile and launch it, BSOD !
> 
> Do we have to use another OC tool than MSI AB with this Bios or should MSI AB works (And then I should have an issue...) ?
> 
> Thank you.


Precision XOC gives me the same issue.


----------



## J7SC

GRABibus said:


> Precision XOC gives me the same issue.


 
...have you tried the Galax Xtreme Tuner Plus utility? There might also be a second, 'unofficial' XOC one (hard to get)... but try the regular one first


----------



## 86Jarrod

GRABibus said:


> Precision XOC gives me the same issue.


Crashes on mine too. Galax Xtreme Tuner crashes if you click the default button, if I remember right. Not sure about saving profiles with it though. KPE XOC works with profiles, if that's what you want.


----------



## dangerSK

J-P Saari said:


> Just a quick test shows me potential in the voltage. However the powerlimit seems to be 250W, am I right?


You can go up to 111% TDP = 300W on that bios


----------



## J-P Saari

dangerSK said:


> You ccan go up to 111% tdp = 300W bios


Isn't the newest one 380W? Is the voltage limit lower in that or am I totally in the wrong here?


----------



## dangerSK

J-P Saari said:


> Isn't the newest one 380W? Is the voltage limit lower in that or am I totally in the wrong here?


Yeah, this one is only 300W but has voltage control. Well, if u want max power then go KP bios or HOF 2000W


----------



## GRABibus

J7SC said:


> ...have you tried the Galax Xtreme Tuner plus utility ? There might also be a second, 'unofficial' XOC one (hard to get)...but try the regular one first


How do you edit the V/F curve with the Galax Xtreme Tuner Plus utility?

Update: I can't even launch it. I get a BSOD when I try to launch the software.


----------



## GRABibus

86Jarrod said:


> Crashes on mine too. Galax extreme tuner it crashes if you click default button if I remember right. Not sure about saving profiles with it though. KPE xoc works with profiles if that's what you want.


What I want is to be able to edit the V/F curve, to be able to fix the voltage above 1.093V, and to have no BSOD with profiles.


----------



## Spiriva

GRABibus said:


> What I want is to be able to edit V/F curve , to be able to fix voltage above 1,093V and no bsod with profiles )


Have you tried uninstalling AB and installing it again ?


----------



## J-P Saari

dangerSK said:


> yea this one is only 300W but has voltage control, well if u want max power then go KP bios or HOF 2000w


Alright, thanks for the tips. Have the voltage controls been tested to work, or are they just for show?


----------



## kithylin

GRABibus said:


> How do you edit V/F curve with Galax Xtreme Tuner plus utility ?
> 
> Update : I even can't launch it. i crash BSOD when i try to launch this soft.


Remember to completely uninstall (and delete the folders for) MSI AB and/or PrecisionX, and any/all other overclocking and tuning software. They can and will conflict just by being installed in the system, even if they aren't running.


----------



## pXuis

GRABibus said:


> How do you edit V/F curve with Galax Xtreme Tuner plus utility ?
> 
> Update : I even can't launch it. i crash BSOD when i try to launch this soft.


You need to delete the User Settings file in the install directory to avoid the blue screen.
Also, clicking on "reset to defaults" may cause a blue screen as well.


----------



## Zed03

Jpmboy said:


> Can a single 2080Ti saturate PCIEx3 at any single panel resolution? (Yeah I mean 3 x 4K panels can, but that's just academic.)


Steve at GN was able to saturate an x8 PCIe 3.0 link with NVLink, but didn't even come close to hitting the limits of x16 PCIe 3.0.


----------



## ntuason

Zed03 said:


> Steve at GN was able to saturate an 8x PCIE 3.0 with NVLINK, but didn't even come close to hitting limits of 16x PCIE 3.0.


So a 2080 Ti on AMD's latest platform, which has PCIe 4.0, will yield no difference versus PCIe 3.0? At least until GPUs that are natively 4.0 release?


----------



## skupples

ntuason said:


> So 2080 Ti on AMDs latest platform which has PCIE 4.0 will yield no differences against PCIE 3.0? Until GPUs that are native 4.0 releases?


Close enough. It won't make a difference until the cards support it, AND until they use the extra bandwidth. We're just now seeing it start to happen at x8 3.0, and only when running multiples.


----------



## GRABibus

Spiriva said:


> Have you tried uninstalling AB and installing it again ?





kithylin said:


> Remember to completely uninstall (and delete the folders for it) for MSI AB and/or PrecisionX, any/all other overclocking and tuning softwares. They can and will conflict just being installed in the system, even if they aren't running.





pXuis said:


> You need to delete the User Settings file in the install directory to avoid the blue screen.
> Also, clicking on "reset to defaults" may cause blue screen as well.


I have cleanly uninstalled MSI AB and removed the profiles in the "Profiles" folder. No luck.

Still getting a BSOD (nvlddmkm.sys stopped working, code: SYSTEM_SERVICE_EXCEPTION) when I launch a profile saved in MSI AB.

I will do a DDU run and reinstall the drivers, let's see....


----------



## pXuis

I'm referring to the user settings file in the Galax Xtreme utility directory.


----------



## GRABibus

pXuis said:


> I'm referring to the user settings file in the Galax Xtreme utility directory.


I uninstalled that software... and I can't find any remaining folder from it anywhere.
I'll just stick with MSI AB.


----------



## pXuis

If someone could give some advice. I'm starting to think I might have a faulty card.
So I got my hands on a *2080Ti HOF Lab Edition*, second hand obviously, to replace my FE card. I can post pics on request.

I've got the LN2 and normal 450W bios (which is actually detected as a 400W bios by nvidia-smi).

Anyway, the card is a really bad clocker so far. Regardless of voltage, I can't pass Time Spy at anything higher than 2115MHz or 2130MHz (if temps permit) without crashing. Considering my FE could do 2160MHz, this is a real letdown. Most other Lab Editions I've seen can hit 2200MHz no problem.

Here's the weird part, which leads me to believe it might be faulty.
In GPU-Z or any other tool, the card never pulls more than 200-230 watts. At first I thought it might be a software/sensor thing. But I've got a digital meter to measure power at the wall, and subtracting the CPU package etc. from the wall reading aligns with the 220-ish watt power draw reported by GPU-Z.

Is it possible that the card can hit 2115/2130MHz at a mere 220W? (Increasing voltage slightly increases wattage.) Would the low power draw explain the relatively bad OC potential under water? I'm thinking bad VRM?
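The wall-meter sanity check described above is just arithmetic; a minimal sketch, where the PSU efficiency and the CPU/rest-of-system figures are assumptions you'd substitute with your own readings (all numbers below are illustrative, not from the post):

```python
def estimated_gpu_watts(wall_watts, psu_efficiency, cpu_package_watts, rest_of_system_watts):
    """Estimate GPU power draw from a wall-meter reading.

    wall_watts            -- reading from the wall meter (AC side)
    psu_efficiency        -- assumed PSU efficiency at this load (e.g. 0.90)
    cpu_package_watts     -- CPU package power from software sensors
    rest_of_system_watts  -- drives, fans, board etc. from a prior idle measurement
    """
    dc_watts = wall_watts * psu_efficiency  # DC power actually delivered by the PSU
    return dc_watts - cpu_package_watts - rest_of_system_watts

# Illustrative numbers landing near the ~220 W the post describes
print(round(estimated_gpu_watts(wall_watts=420, psu_efficiency=0.90,
                                cpu_package_watts=95, rest_of_system_watts=60)))  # 223
```

The main source of error is the efficiency guess, so treat the result as a cross-check on the software sensor, not an absolute number.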


----------



## GRABibus

GRABibus said:


> I uninstalled this soft...And I can't find anywhere any remaining folder of it.
> I only stick with MSI AB.


I have tested the Galax HOF XOC 3x8-pin 2000W bios in CoD MW with the following profile:
[email protected]
+1300MHz on memory

=> I get between 5% and 10% fewer fps than with the Galax 380W bios at
[email protected]
+1150MHz on memory.

The HOF XOC helps in getting higher stable clocks (both core and memory), but gives worse performance than the lower frequencies with the Galax 380W....

If someone can explain this....


----------



## Jpmboy

Zed03 said:


> Steve at GN was able to saturate an 8x PCIE 3.0 with NVLINK, but didn't even come close to hitting limits of 16x PCIE 3.0.


with NVLink... I think the question was about a single card. That said, I can't get SLI 2080 Tis to saturate x8 at 1440p (which is consistent with the math). 4K60? ... probably.


GRABibus said:


> What I want is to be able to edit V/F curve , to be able to fix voltage above 1,093V and no bsod with profiles )


Use Afterburner. The Galax tool does not have a "VF" curve; you just set the voltage with the slider, and this will lock the card(s) in the P0 state.


kithylin said:


> Remember to completely uninstall (and delete the folders for it) for MSI AB and/or PrecisionX, any/all other overclocking and tuning softwares. They can and will conflict just being installed in the system, even if they aren't running.


IDK, I have AB and Xtreme Tuner installed on several machines. No conflicts or clashes (SLI 2080 Tis). EVGA XOC is a different beast, and on one machine here it will clash with itself and corrupt the NVIDIA driver. The only reason I use it is for a dual-fan EVGA card, and even there it "fouls" occasionally, requiring a system restore point to get rid of the "grey-screen" desktop at launch. Crap software IMO.


GRABibus said:


> I have tested the Galax HOF XOC 3x8pins 2000W in CoD MW with following profile :
> [email protected]
> +1300MHz on memory
> 
> => I have between 5% and 10% less fps than with Galax 380W bios at
> [email protected]
> +1150MHz on memory.
> The HOF XOC helps in having higher stable clocks (Both core and memory), but, less performances versus lower frequencies with Galax 380W....
> If someone can explain this ....


Is the 2000W bios loaded on a 3x8-pin PCIE card? Check (if you can) whether the bios is trying to pull the 3rd PCIE 8-pin from the ATX rail. That said, higher clocks do not mean better memory or core efficiency, since there are a crap-load of error-correcting routines (MCEs) that attempt to match checksums by repeating procedure calls... higher "stable" clocks, but inefficient due to error correction.


pXuis said:


> If someone could give some advice. I'm starting to think I might have a faulty card.
> So I got my hands on a *2080Ti HOF Lab Edition*, second hand obviously, to replace my FE card. I can post pics on request.
> I've got the LN2 and normal 450W bios (Which is actually detected as a 400W bios by Nvidia-Smi).
> Anyway, the card is a really bad clocker so far. Regardless of voltage, I can't pass TimeSpy on anything higher than 2115Mhz or 2130Mhz (if temps permit) without crashing. Considering my FE could do 2160Mhz, this is a real letdown. Most other Lab editions I've see can hit 2200Mhz no problem.
> 
> Here's the weird part which leads me to believe it might be faulty.
> In GPU-Z or any other tool, the card never pulls more than 200-230watts. At first I thought it might be a software/sensor thing. But I've got a digital meter to measure power at the wall, and subtracting CPU package etc from the wall reading aligns with the 220ish watt power draw reported by GPU-Z.
> 
> *Is it possible that the card can hit 2115/30Mhz at a mere 220W*? (Increasing voltage slightly increases wattage). Would the low power draw explain the relatively bad OC potential under water? I'm thinking bad VRM?


Sure, why not?
Does the card have any residual LET (water-proofing) or other signs of extreme OC? Did the previous owner show you any benchmarks with the card (like video proof that it worked) before you bought it?


----------



## J7SC

pXuis said:


> (...)
> In GPU-Z or any other tool, the card never pulls more than 200-230watts. At first I thought it might be a software/sensor thing. But I've got a digital meter to measure power at the wall, and subtracting CPU package etc from the wall reading aligns with the 220ish watt power draw reported by GPU-Z.
> 
> Is it possible that the card can hit 2115/30Mhz at a mere 220W? (Increasing voltage slightly increases wattage). Would the low power draw explain the relatively bad OC potential under water? I'm thinking bad VRM?


 
A genuine Galax HoF "OC Labs" might very well be slower under regular water, as it is designed for extensive chilled-water and/or sub-zero cooling. In the bad old days when GPU-Z would still divulge ASIC quality (see pic), you got that explanation about ASIC quality vs. temperature. You might want to try the voltage-curve view in MSI AB (w/o making & applying any changes; you're just looking at the raw curve), as that would give you *an idea* about MHz vs. voltage and ASIC.

Also, there is an "NDA-type" overclocking software tool for the "OC Labs" with more parameters and higher limits (though caution is in order with 'just' water cooling). Perhaps the seller has a copy of the software?


----------



## dangerSK

J7SC said:


> A genuine Galax HoF "OC Labs" might very well be slower under regular water as it is designed for extensive chilled water and/or sub-zero cooling. In the bad old days when GPUz would still divulge ASIC quality (see pic), you got that explanation about the ASIC vs. temp applications. You might want tor try the voltage curve process in MSI AB (w/o making & applying any changes, you're just looking for the raw curve) as that would give you *an idea* about MHz vs. voltage and ASIC.
> 
> Also, there is a "NDA type" overclocking software tool for the "OC Labs" with more parameters and higher limits (though caution is in order with 'just' water cooling). Perhaps the seller has a copy of the software ?


The NVVDD tool is not hard to get hold of though; if u have GOC u can even request it from Galax.


----------



## GRABibus

Jpmboy said:


> is the 2000W bios loaded on a 3x8 pin PCIE card? Check (if you can) if the bios is trying to pull the 3rd PCIE 8 pin from the ATX rail. That said, Higher clocks does not mean better memory or core efficiency since there are a crap-load of error correcting routines (MCEs) that attempt to match checksums by repeating procedure calls... higher "stable" clocks but inefficient due to error correction.


My card is a 2x8-pin one.
I don't know how to check the power pull you mention.

Do you know why I get a BSOD when I launch a saved profile in AB? (See my post above for the code and error message.)
I also get this BSOD randomly if I click on "reset" in MSI AB. Same issue with the Galax Xtreme tool or EVGA PX.
This happens only with the HOF Galax XOC bios.
It happens neither with the Galax 380W bios nor with the EVGA Kingpin bios....


----------



## Jpmboy

GRABibus said:


> My card is a 2x8 pins one.
> I don’t know how to check the pulling power you mention.
> 
> Do you know why I get Bsod when I launch a saved profile in AB ? (see my post above for the code and error message).
> I also get this Bsod randomly if I click on »reset« in MSI AB. Same issue with Galax Xtreme tool or EVGA PX.
> This happens only with the HOF Galax XOC bios.
> It doesn’t happen neither with Galax 380 W bios nor the EVGA kingpin bios....


Both tools work with the driver - I'd clean the driver with DDU and delete the profile files in that folder, reload the NVIDIA driver, and try again.
A 3x8-pin bios has to enumerate the power rails, and when you flash a 3x bios onto a 2x PCB, the bios can assign the 3rd (PCIE) source to the only other available source on a 2x card - which is the PCIE slot (off the ATX and any "SLI" aux power connector to the MB). We had no problem dealing with this when we had access to the bios (via tweaker), but now it's a blind crap shoot. If you have an AXi-series PSU, Corsair Link or the newer Corsair PSU software will let you monitor the current draw on the various rails the PSU has.


----------



## GRABibus

Jpmboy said:


> Both tools work with the driver - i'd clean th edriver with DDU and delete the profile files in this folder. reload NVD, and try again.
> A 3x8pin bios has to enumerate the power rails and when you flash a 3x on to a 2x PCB, the bios can assign the 3rd (PCIE) source to the only other available source on a 2X card - which is the PCIE slot (off the ATX and any "SLI" aux power connector to the MB). We had no problem dealing with this when we had access to the Bios (via tweaker), buit now it's a blind crap shoot. If you have an AXi series PSU, the Corsair link or the newer corsair PSU software will let you monitor the current draw on the various rails the PSU has.


Thank you for the explanations.
I did clean the drivers yesterday with DDU and deleted the profiles in the folder you mentioned. I did everything.
No luck.
What is strange is that this is the only bios where I have this issue.

I did reflash the Galax 380W, and I see the same fps loss in CoD MW..
So the fps loss I mentioned with the HOF XOC is maybe not due to the HOF XOC bios, but to the latest driver set (441.66 WHQL) or the latest CoD MW patch....

So it is really a pity that I experience these BSODs when launching saved MSI AB profiles with the HOF XOC, because with this bios I could bench Time Spy and Heaven (Unigine) at [email protected] and can play some DX11 games at this frequency too, so nice!
*** is this BSOD issue...


----------



## 1M4TO

Hello folks, I'm in need of help and would appreciate it a lot.
I've got an EVGA 2080 Ti Gaming, and while I was playing Apex Legends I got some repetitive sounds for a few seconds, the screen went black, and I had to hard reset the PC.
After the boot I got no signal and the fans are not spinning anymore. I tried removing/swapping RAM and so on, but none of it woke it up.
Then I put in a 2080 XC Gaming; it did not start at first either, but after a CMOS reset and re-seating all the components the 2080 worked, and it's working just fine.
I didn't try other PSU VGA cables (750W EVGA G3), but do the fans not spinning and the LED not lighting up point to a dead card?

CPU is a 3700X with DDR4-3600 Corsair, no overclock, just the DOCP (XMP) RAM profile.
No bios mod, stock card. I'm worried because I had a waterblock on it, but there were no leaks.

Any suggestions before I RMA it?
Thaaaaanks


----------



## Alecs010

*Overclocking Bios rtx 2080ti*

Hi Guys,

Does anyone have a link or download address for the ROM file to flash my MSI RTX 2080 Ti Sea Hawk EK cards? 380 to 400 watts is OK; I am running on the stock bios now.

Regards Alex


----------



## kithylin

Alecs010 said:


> Hi Guys,
> 
> Does anyone has a link ore download adress, for the rom file to flash my msi rtx 2080ti seahawk ek cards. 380 to 400 watts is oke, i am running now on stock bios.
> 
> Regards Alex


All of the information you asked for should be on the first post / first page of this thread.


----------



## GRABibus

Has anyone tested this bios? Power limit = 400W
Dated 26th July 2019

https://www.techpowerup.com/vgabios/215860/galax-rtx2080ti-11264-190726

GPU Device Id: 0x10DE 0x1E07
Version: 90.02.30.00.E1
NVIDIA GeForce RTX 2080 Ti VGA BIOS V6577
Copyright (C) 1996-2019 NVIDIA Corp.
GPU Board
Connectors
1x HDMI
1x USB-C
3x DisplayPort
Board power limit
Target: 300.0 W
Limit: 400.0 W
Adj. Range: -67%, +33%
Thermal Limits
Rated: 84.0C
Max: 88.0C
Memory Support
GDDR6, Samsung
GDDR6, Micron
GDDR6, Hynix
Boost Clock: 1740 MHz
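The "Adj. Range" line in these TechPowerUp listings is just a percentage of the power target; a quick sketch of the arithmetic (the listed percentages are themselves rounded, which is why -67%/+33% of 300 W comes out at 99/399 rather than the even 100/400 W shown as the limit):

```python
def power_range(target_watts, adj_min_pct, adj_max_pct):
    """Turn a BIOS power target and its adjustment range into absolute watt limits."""
    low = round(target_watts * (1 + adj_min_pct / 100))   # power slider at minimum
    high = round(target_watts * (1 + adj_max_pct / 100))  # power slider maxed out
    return low, high

# The Galax bios above: 300 W target, adjustment range -67% .. +33%
print(power_range(300, -67, 33))  # (99, 399)
```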


----------



## Sheyster

GRABibus said:


> Do some people have tested this Bios ? Power limit=400W
> Dated from 26th July 2019
> 
> https://www.techpowerup.com/vgabios/215860/galax-rtx2080ti-11264-190726
> 
> GPU Device Id: 0x10DE 0x1E07
> Version: 90.02.30.00.E1
> NVIDIA GeForce RTX 2080 Ti VGA BIOS V6577
> Copyright (C) 1996-2019 NVIDIA Corp.
> GPU Board
> Connectors
> 1x HDMI
> 1x USB-C
> 3x DisplayPort
> Board power limit
> Target: 300.0 W
> Limit: 400.0 W
> Adj. Range: -67%, +33%
> Thermal Limits
> Rated: 84.0C
> Max: 88.0C
> Memory Support
> GDDR6, Samsung
> GDDR6, Micron
> GDDR6, Hynix
> Boost Clock: 1740 MHz


I'd be curious to know if anyone with a reference PCB card has tested it.


----------



## Alecs010

kithylin said:


> All of the information you asked for should be on the first post / first page of this thread.


OK, but can you or somebody else tell me if the 406W bios for the MSI RTX 2080 Ti Gaming X Trio works on an MSI RTX 2080 Ti Sea Hawk EK? Because from what I was reading, if something goes wrong I've screwed my GPU.


----------



## Spiriva

Does anyone know if the screws that come with the EK block for the FE 2080 Ti can also be screwed into the original Nvidia FE cooler?


----------



## Spiriva

Sheyster said:


> I'd be curious to know if anyone with a reference PCB card has tested it.


I tried it on my 2080 Ti FE; I've got it on there right now. Seems to work about the same as the 380W Galax bios.


----------



## sblantipodi

Is there any chance that Xbox series X will be more powerful than current rtx cards?


----------



## kithylin

sblantipodi said:


> Is there any chance that Xbox series X will be more powerful than current rtx cards?


No, because Xbox exclusively uses AMD technology, not Nvidia. AMD has a contract with Microsoft for Xbox. And AMD cards are only just now catching up with the GTX 1080 Ti graphics-wise.


----------



## keikei

sblantipodi said:


> Is there any _chance_ that Xbox series X will be more powerful than current rtx cards?



Sure, but I'd keep expectations low. I estimate performance between the 1080 Ti and 2080 Ti. 5700 XT performance is 'close' to the 1080 Ti atm, and that is midrange. The 5700 XT is RDNA1; the Series X will use RDNA2, which will have RT capabilities.


----------



## Shadowdane

I have this MSI 2080 Ti Duke OC card; I bought it way before I realized the damn thing only has a 290W power limit!
https://www.techpowerup.com/vgabios/205080/msi-rtx2080ti-11264-181019


I noticed the Gigabyte Windforce OC card, which is also a reference PCB with 3 fans and a 366W power limit on its bios. Does anyone know if the Gigabyte bios works fine with the MSI Duke OC card and still controls all 3 fans?
https://www.techpowerup.com/vgabios/205080/msi-rtx2080ti-11264-181019


It wouldn't really be worth using if the cooling performance suffers from losing 1 or more fans with the different bios.


----------



## skupples

sblantipodi said:


> Is there any chance that Xbox series X will be more powerful than current rtx cards?


Completely plausible that it'll contend with the 2060. NV's been on cruise control for a while now with very little competition. They really haven't done a whole lot, and a massive part of every RTX core sits around doing nothing unless you turn on the FPS-nuking shiny stuff in a handful of titles.

The rumors, if tempered, sound like a ~5700 (not XT) is essentially what will be driving the new consoles. So that, combined with a custom API, means they're gonna be a decent little shartbox this time around.

Gotta remember that consoles have a custom API which lets the hardware they have be much more efficient.


----------



## TK421

Anyone with 2080ti kingpin can share oc experience under ambient?


----------



## Kalm_Traveler

TK421 said:


> Anyone with 2080ti kingpin can share oc experience under ambient?


I've had one for a week so far.. Waiting for the cpu to show up so I can put it into a build. 

I'll post my OC once I have a chance to get it dialed in. Really hoping it will boost higher than my Titans.


----------



## kx11

TK421 said:


> Anyone with 2080ti kingpin can share oc experience under ambient?



Well, it's quite good. Mine is under a Hydro Copper block and runs cool even when I game at those clocks; room temp is around 23C.




The clocks are this low because the fan profile is extreme (2440 RPM); with silent mode the GPU temp goes up to 52C max.


----------



## TK421

Kalm_Traveler said:


> I've had one for a week so far.. Waiting for the cpu to show up so I can put it into a build.
> 
> I'll post my OC once I have a chance to get it dialed in. Really hoping it will boost higher than my Titans.





kx11 said:


> well it's quite good , mine is under a hydrocopper block and runs cool even when i game with those clocks , room temp is around 23c
> 
> 
> 
> 
> the clock are this low because the fan profile is extreme (2440rpm) with silent mode the gpu temps goes up to 52c max





Have any of you got a memory OC above a +1300 offset? Currently my XC Ultra 2080 Ti is stable at mem (Samsung) 8300MHz (+1300 offset in AB) and core 2050 (offset +120 afaik), or core 1950 (locked 0.95V, offset +150).


There's a local seller who wants to swap it for a Kingpin pretty cheap; not sure if I should go for it or not, because the XC Ultra is pretty nicely binned too.
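For context on what these Afterburner offsets mean: AB shows GDDR6 at its double-data-rate figure, 7000 MHz stock on a 2080 Ti (14 Gbps effective), so a +1300 offset lands at the 8300 MHz quoted here. A quick sketch of the arithmetic, using the card's 352-bit bus from the spec sheet:

```python
AB_STOCK_MHZ = 7000     # memory clock Afterburner reports at stock (DDR figure)
BUS_WIDTH_BITS = 352    # 2080 Ti memory bus width

def ab_memory_clock(offset_mhz):
    """Afterburner-reported memory clock for a given offset."""
    return AB_STOCK_MHZ + offset_mhz

def bandwidth_gbs(ab_mhz):
    """Theoretical bandwidth in GB/s: effective rate is 2x the AB figure on GDDR6."""
    return ab_mhz * 2 * (BUS_WIDTH_BITS // 8) / 1000

print(ab_memory_clock(1300))                 # 8300
print(bandwidth_gbs(AB_STOCK_MHZ))           # 616.0 (matches the spec sheet)
print(bandwidth_gbs(ab_memory_clock(1300)))  # 730.4
```

Real-world gains stop tracking this line once GDDR6 error correction kicks in, which is why people here test for performance rather than just stability.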


----------



## MrTOOSHORT

^^

I did, but bench only!


----------



## Sheyster

Shadowdane said:


> I have this MSI 2080 Ti Duke OC card bought it way before I realized the damn thing only has a 290W power limit!



I have the same card. It is a reference PCB card with a much better cooler. Install the GALAX 380w BIOS. It is a beast with that BIOS installed.


----------



## kx11

TK421 said:


> Any of you got memory OC above +1300 offset? Currently my xc ultra 2080ti is stable mem (samsung) 8300mhz (+1300 offset on AB) and core 2050 (offset +120 afaik) or core 1950 (locked 0.95v, offset +150) on the xc ultra.
> 
> 
> There's a local seller who wants to swap it for a Kingpin pretty cheap, not sure if I should go for it or not, because the xc ultra is pretty nicely binned too.



The screenshot I posted shows how far my GPU is OC'd.


----------



## Shadowdane

Sheyster said:


> I have the same card. It is a reference PCB card with a much better cooler. Install the GALAX 380w BIOS. It is a beast with that BIOS installed.


I'll have to give that one a try.. so all 3 fans continue to work on the GALAX bios? I can't remember which bios I tried a few months ago, but I remember one of the fans stopped working and my cooling performance got worse.


----------



## TK421

MrTOOSHORT said:


> ^^
> 
> I did!, but bench only



What is your 24/7 stable memory OC? Did you add any additional voltage?








kx11 said:


> the screenshot i posted shows how much my gpu is OC


 Oops, got it. 



That's really good, 8400 stable 24/7 shows just how good Samsung memory is.


Did you add any additional voltage to the VRAM?


----------



## Sheyster

Shadowdane said:


> I'll have to give that one a try.. so all 3 fans continue to work on the GALAX bios? I can't remember which bios I tried a few months ago, but I remember one of the fans stopped working and my cooling performance got worse.


Yes, all 3 still work.


----------



## TK421

Sheyster said:


> Yes, all 3 still work.


Do you have any idea whether the Galax 380W bios really has the looser memory timings it supposedly has?


I want to know how it actually compares to other bioses' timings, if it's any different at all.


----------



## Sheyster

TK421 said:


> Do you have any idea on how the Galax 380w bios supposedly has looser timings?
> 
> 
> I want to know how it actually compares to other bios timings. If it's any different at all.


Someone earlier in the thread compared it to a Gigabyte 366w BIOS and said the Gigabyte BIOS was slightly better in benchmarks. That's all I know about it. I believe everyone agrees that for gaming use it's the best choice overall.


----------



## AndrejB

Really hope someone will upload the Asus 2080ti oc white bios.

My Strix can barely run the Matrix clocks... it can only just reach 2085MHz (+15). Would like to have a bit more headroom...


----------



## pattiri

AndrejB said:


> Really hope someone will upload the Asus 2080ti oc white bios.
> 
> My strix can barely run the matrix clocks... can only reach 2085mhz barely (+15). Would like to have a bit more headroom...



Which bios is this? I'm using my ASUS Strix 2080 Ti OC with the 1000-watt bios and the factory 3-fan cooler. Max temp is 65, but I can hit 2010 MHz @ 65 Celsius. Around 40 Celsius I can get 2070, and below 40, 2085, but after a couple of seconds the temp goes 55+, so mostly I'm using it @ 2025-2040 MHz and 7750 MHz RAM. So you are lucky, I guess


----------



## AndrejB

pattiri said:


> which bios is this? I'm using my asus strix 2080ti OC with 1000 watt bios, factory 3 fan cooler. max temp is 65 but I can hit 2010 Mhz @ 65 celcius. Around 40 celcius I can get 2070 and below 40 2085 and after a couple of seconds temp goes 55+ so mostly I'm using it @ 2025-2040 MHz and 7750 Mhz RAM. so you are lucky I guess

It's a new card, the same as ours, just white. But it probably has a different V/F curve, and it's clocked at 1770 MHz instead of 1800.

Regarding the clocks, I have exactly the same as yours, 2010MHz at 65C; that's why I said "barely", because the Matrix runs at 1995 @ 65C at default.


----------



## TK421

Sheyster said:


> Someone earlier in the thread compared it to a Gigabyte 366w BIOS and said the Gigabyte BIOS was slightly better in benchmarks. That's all I know about it. I believe everyone agrees that for gaming use it's the best choice overall.



Ah ok, got it.


But there's no way to find out the exact memory timings?











AndrejB said:


> Really hope someone will upload the Asus 2080ti oc white bios.
> 
> My strix can barely run the matrix clocks... can only reach 2085mhz barely (+15). Would like to have a bit more headroom...





pattiri said:


> which bios is this? I'm using my asus strix 2080ti OC with 1000 watt bios, factory 3 fan cooler. max temp is 65 but I can hit 2010 Mhz @ 65 celcius. Around 40 celcius I can get 2070 and below 40 2085 and after a couple of seconds temp goes 55+ so mostly I'm using it @ 2025-2040 MHz and 7750 Mhz RAM. so you are lucky I guess





AndrejB said:


> It's a new card, same as ours just white. But it has probably a different vf curve and it's clocked at 1770 mhz instead of 1800.
> 
> Regarding the the clocks, i have exactly the same as yours at 65c 2010mhz, that's why I said barely because matrix runs at 1995 @ 65c at default.





Do you guys think the Matrix 1800MHz is a better binned chip compared to the Kingpin 1770MHz?


----------



## Sheyster

TK421 said:


> Ah ok, got it.
> 
> But there's no way to find out the exact memory timings?


I don't think so, I don't believe GPU-Z shows that info. There might be something else out there that does.


----------



## joyzao

Hello everyone, how's everything? I have an RTX 2080 Ti ASUS Strix OC; with which bios can I get better benchmark performance? I tried the MSI Trio X one but it doesn't flash. The ASUS Strix XOC is not going well either. Any bios tips?

Appreciate it


----------



## ntuason

joyzao said:


> Hello everyone, everything good? I have a rtx 2080 ti asus strix oc, which bios can I have better benchmark performance? I tried the msi trio x but it doesn't flash. The asus strix xoc is not going well, any tips bios?
> 
> appreciate


 ASUS RTX 2080 Ti Strix OC XOC is as good as it gets. What OC are you getting?


----------



## joyzao

ntuason said:


> ASUS RTX 2080 Ti Strix OC XOC is as good as it gets. What OC are you getting?




The Galax bios, for example, gives this error. Why is that? Would the best bios be the ASUS XOC?

The score was a little higher than with the board's standard bios, so I found it weak, and it greatly increased the temperature.

Which bios would you suggest I test?


error message : 
Update display adapter firmware?
Press 'y' to confirm (any other key to abort):
EEPROM ID (9D,7014) : ISSI IS25WP080 1.65-1.95V 8192Kx1S, page

XUSB FW component of the input GPU firmware image is imcompatible, please use
a newer version of GPU firmware image for this product.



BIOS Cert 2.0 Verification Error, Update aborted.


Nothing changed!



ERROR: Invalid firmware image detected.

Nvflash CPU side error Code:2Error Message: Falcon In HALT or STOP state, abort uCode command issuing process.


----------



## ntuason

joyzao said:


> galax bios for example gives this error, why is it? the best bios would be asus xoc?
> 
> The score was little higher than the standard bios of the board, so I found it weak, and it greatly increased the temperature.
> 
> Which bios could you test?
> 
> 
> error message :
> Update display adapter firmware?
> Press 'y' to confirm (any other key to abort):
> EEPROM ID (9D,7014) : ISSI IS25WP080 1.65-1.95V 8192Kx1S, page
> 
> XUSB FW component of the input GPU firmware image is imcompatible, please use
> a newer version of GPU firmware image for this product.
> 
> 
> 
> BIOS Cert 2.0 Verification Error, Update aborted.
> 
> 
> Nothing changed!
> 
> 
> 
> ERROR: Invalid firmware image detected.
> 
> Nvflash CPU side error Code:2Error Message: Falcon In HALT or STOP state, abort uCode command issuing process.


Are you trying to flash a Galax BIOS onto your ASUS Strix?! Oh man, thank the lord for that BIOS compatibility check, or else you would have bricked your card. Use the ASUS RTX 2080 Ti Strix OC Custom PCB (2x8-Pin) 1000W x 100% Power Target BIOS for your card, max the power limit via Afterburner, see how high your OC can go, then slowly turn the power limit down until you hit the power limit in HWiNFO/GPU-Z.
That's what I did.


----------



## joyzao

ntuason said:


> Are you trying to flash galax bios on your Asus Strix?! Oh man, thank lord for that compatibility bios check or else you would of bricked your card. Use the ASUS RTX 2080 Ti Strix OC Custom PCB (2x8-Pin) 1000W x 100% Power Target BIOS for your card, max the power limit via after burner, then see how high your OC can go then slowly turn down power limit until you hit power limit on HWiNfo/GPUz.
> Thats what I did.




However, the HOF BIOS, which is for a custom PCB, doesn't work either. I am testing the ASUS XOC now.


----------



## JustinThyme

ASUS XOC or ASUS Matrix.
Honestly, I have a pair of the same cards and ended up putting them back on the stock BIOS. 2150 is it, no matter the power limit. They never go above 45C running any BIOS, it just won't clock any higher, and it never hits the power limit. It always locks up when I try to push that last little bit. Maybe a chiller or LN2 would get you somewhere; you're definitely not going anywhere on air, as it will throttle on thermals first. I get 2150 at 42C with the stock BIOS: +30 on volts and power limit maxed, memory at +800 is where I live. Gets decent results. I'm happy. HK blocks on an overkill custom loop.


----------



## AndrejB

joyzao said:


> ntuason said:
> 
> 
> 
> Are you trying to flash galax bios on your Asus Strix?! Oh man, thank lord for that compatibility bios check or else you would of bricked your card. Use the ASUS RTX 2080 Ti Strix OC Custom PCB (2x8-Pin) 1000W x 100% Power Target BIOS for your card, max the power limit via after burner, then see how high your OC can go then slowly turn down power limit until you hit power limit on HWiNfo/GPUz.
> Thats what I did.
> 
> 
> 
> 
> 
> However, hof bios that is custom pcb doesn't work either. I am testing the xoc asus.
Click to expand...

You got a card with a newer XUSB firmware; I'm not sure you can flash any of these BIOSes until someone uploads a newer one.
I tried the Galax BIOS; you just lose a port.

Regarding the power limit, I found it depends on the VCCIO/VCCSA voltages.

For instance, if I run mine at 1.15v, I get instant crashing moving from 300W to 360W.
If I run them at 1.2v, Heaven/Superposition stay at 340W.
If I go to 1.22v, Heaven/Superposition take 360W.

(This part I'm not sure about:) depending on the game, the higher power limit lets you run more stable clocks. I noticed Mass Effect was dropping clocks to 1890MHz at 300W; at 360W it stayed at 2010.


----------



## joyzao

AndrejB said:


> You got a card with a newer XUSB firmware, not sure you can flash any of these bioses. Until someone uploads...
> I tried the galax bios, you just loose a port.
> 
> Regarding the PL, I found it depends on Vccio/Vccsa voltages.
> 
> For instance if I run my mine at 1.15v, I get instant crashing moving from 300w>360w.
> If I run them at 1.2v, heaven/superposition stay at 340w
> If I go to 1.22v heaven/superposition take 360w.
> 
> (This part I'm not sure about) depending on the game, pl allows you to run more stable clocks, noticed that mass effect was dropping clocks to 1890mhz at 300w. At 360w it stayed at 2010



Thanks for the reply.

How can I get around the XUSB firmware problem?

I was able to flash the XOC 1000W BIOS, but the others were unsuccessful. I want to try the best possible BIOS for 24/7 use and also for benchmarking.

Thank you


----------



## AndrejB

joyzao said:


> AndrejB said:
> 
> 
> 
> You got a card with a newer XUSB firmware, not sure you can flash any of these bioses. Until someone uploads...
> I tried the galax bios, you just loose a port.
> 
> Regarding the PL, I found it depends on Vccio/Vccsa voltages.
> 
> For instance if I run my mine at 1.15v, I get instant crashing moving from 300w>360w.
> If I run them at 1.2v, heaven/superposition stay at 340w
> If I go to 1.22v heaven/superposition take 360w.
> 
> (This part I'm not sure about) depending on the game, pl allows you to run more stable clocks, noticed that mass effect was dropping clocks to 1890mhz at 300w. At 360w it stayed at 2010
> 
> 
> 
> 
> Thanks for the reply.
> 
> How can I cheat the xusb firmware problem?
> 
> I flash the bios xoc 1000w, but the others unsuccessfully. I wanted to try the best possible for 24/7 and also for benchmark.
> 
> Thank you
Click to expand...

If the XOC worked, I would try the Matrix if you are on air. If not, you can stay on the XOC, but since the XOC doesn't have V/F curve editing, I don't think you can use OC Scanner with it.


----------



## Talon2016

joyzao said:


> Thanks for the reply.
> 
> How can I cheat the xusb firmware problem?
> 
> I flash the bios xoc 1000w, but the others unsuccessfully. I wanted to try the best possible for 24/7 and also for benchmark.
> 
> Thank you


You have two options to flash the Galax HOF XOC: use the 'old' vBIOS that is floating around if you have an older card, or, on a newer card, use a USB programmer/clip to force-flash directly to the vBIOS chip. Alternatively, you can try to acquire the newer Galax HOF XOC vBIOS that uses the newer XUSB firmware and will flash on newer cards.


----------



## TK421

Talon2016 said:


> You have two options to flash the Galax HOF XOC, use the 'old' vBIOS that is easily floating around with an older card, or a newer card but use a usb programmer/clip to force flash direct to the vBIOS chip. Or you can try and acquire the newer Galax HOF XOC vBIOS that uses a newer XUSB firmware and will flash on newer cards.





Maybe contact OP, he has the old vbios version in stock.




I never made this post btw.


----------



## BleedOutCold

Has anyone identified a source for a late 2019 version of the 380W bios that would avoid the XUSB FW hardware incompatibility referenced in the 1st post's bios flash section?


----------



## Prod1702

BleedOutCold said:


> Has anyone identified a source for a late 2019 version of the 380W bios that would avoid the XUSB FW hardware incompatibility referenced in the 1st post's bios flash section?


Hoping for this as well, since I just got my 2080 Ti and none of the higher power limit BIOSes I have tried work. Thanks for any help.


----------



## Sheyster

BleedOutCold said:


> Has anyone identified a source for a late 2019 version of the 380W bios that would avoid the XUSB FW hardware incompatibility referenced in the 1st post's bios flash section?


I believe someone found a newer 366w Gigabyte BIOS that worked with newer cards in the techpowerup VBIOS database. It was mentioned earlier in this thread, sorry I don't have a direct link.


----------



## skupples

Anyone else pre-order Optimus's block?

Should be here in January.


----------



## BleedOutCold

Sheyster said:


> I believe someone found a newer 366w Gigabyte BIOS that worked with newer cards in the techpowerup VBIOS database. It was mentioned earlier in this thread, sorry I don't have a direct link.


Thank you, I'll dig around for that one.




skupples said:


> anyone else pre-order optimus's block?
> 
> should be here in January,


Thought really hard about it, but couldn't quite wrap my head around the thickness of their acrylic cover and went for a heatkiller IV instead. I think they may need a couple generations' iteration to find a balance between their performance/build quality focus and something that looks good mounted vertically inside a showcase build. I expect I'll happily put their blocks into my windowless S5 build later in 2020 though.


----------



## Spiriva

skupples said:


> anyone else pre-order optimus's block?
> 
> should be here in January,


That 2080ti block looks really nice. I currently got the EK block on my 2080ti, but might try out the optimus one as it looks really sweet.


----------



## skupples

The thickness did stand out to me at first, but it's all subjective, especially if it's mounted on a PCIe extender. You'll be looking at the big V instead of discoloration and dye stuck in the islands, which pulls the eye away from anything else.

Though I suppose if your goal is upvotes and not performance, I can understand... we all know people love things staying the same while being renewed yearly.


----------



## chibi

I've been eyeing the EVGA RTX 2080 Ti FTW3 Ultra card and it's regularly priced between $1,899 to $1,999 CAD. Next week it will go on sale for $1,599 CAD which is pretty decent in terms of pricing. The only time it's been lower was at $1,549 according to PCPP for 1 day.

Should I take the plunge? I'm currently sitting on a placeholder 1060 3GB card paired with 34" 1440P 120Hz. I really don't want to wait for a summer release NVIDIA 3-series and have to wait even longer for the Ti towards the end of 2020. Sigh.


----------



## Sheyster

chibi said:


> I've been eyeing the EVGA RTX 2080 Ti FTW3 Ultra card and it's regularly priced between $1,899 to $1,999 CAD. Next week it will go on sale for $1,599 CAD which is pretty decent in terms of pricing. The only time it's been lower was at $1,549 according to PCPP for 1 day.
> 
> Should I take the plunge? I'm currently sitting on a placeholder 1060 3GB card paired with 34" 1440P 120Hz. I really don't want to wait for a summer release NVIDIA 3-series and have to wait even longer for the Ti towards the end of 2020. Sigh.


At this point in the game why not just buy the cheapest A-chip 2080 Ti you can find?


----------



## chibi

Sheyster said:


> At this point in the game why not just buy the cheapest A-chip 2080 Ti you can find?



The price difference is only $100 CAD tops between the low bin A chips. For an extra hundo I believe the raised PL, beefy cooler and EVGA brand is worth it for the Boxing Day sale.


----------



## AndrejB

Del


----------



## ftln

Hi All,

Just picked up an XC Hybrid. How do I know if I have the newer XUSB firmware that is incompatible with older BIOSes?

If I flash an older BIOS, will it just fail with no further damage, or will it brick the card?


----------



## Sheyster

ftln said:


> Hi All,
> 
> Just picked up a XC Hybrid, how do I know if I have newer XUSB firmware which is incompatible with older bios's ?
> 
> If I flash older bios will it just fail with no further damage or will it brick the card ?



Yes, you'll only get an error if you try to flash. It won't harm the card.
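For anyone nervous about trying it, here is a minimal dry-run sketch (an editor's illustration, not from this thread's OP) of the usual backup-then-flash sequence. The nvflash flags match the ones posted elsewhere in the thread; the .rom filenames are placeholders, and with DRY_RUN=1 it only prints the commands instead of running them:

```shell
# Dry-run wrapper around the nvflash sequence discussed in this thread.
# Filenames are placeholders; set DRY_RUN=0 only if nvflash is installed
# and you are sure the target .rom suits your card.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"      # preview mode: print instead of execute
  else
    "$@"                      # live mode: actually invoke nvflash
  fi
}

run nvflash --save original_backup.rom   # always back up the stock BIOS first
run nvflash --protectoff                 # lift the EEPROM write protection
run nvflash -6 new_bios.rom              # -6 overrides the subsystem ID mismatch check
```

If the card has the newer XUSB firmware, the live flash attempt simply aborts with the cert/XUSB error quoted earlier in the thread, leaving the card as it was.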


----------



## ftln

Sheyster said:


> Yes, you'll only get an error if you try to flash. It won't harm the card.


Thanks 

Anybody know which bios I should try ?

This is the bios my card shipped with: https://www.techpowerup.com/vgabios/216367/216367

Filename:	216367.rom
VBIOS Version:	90.02.30.40.1A
UEFI Supported:	Yes
BIOS Build date:	2019-09-02 00:00:00
Date added:	2019-12-18 23:57:39
VBIOS Size:	1022 KB
MD5 Hash:	2e0410d8cae5881fe3e778f62a22d2a7
SHA1 Hash:	f2a2d4ba44df956abb8857e420a5f0811639aad0

GPU Device Id: 0x10DE 0x1E07
Version: 90.02.30.40.1A
PG150 SKU 32 VGA BIOS
Copyright (C) 1996-2019 NVIDIA Corp.
GPU Board
Connectors
1x HDMI
1x USB-C
3x DisplayPort
Board power limit
Target: 260.0 W
Limit: 338.0 W
Adj. Range: -62%, +30%
Thermal Limits
Rated: 84.0C
Max: 88.0C
Memory Support
GDDR6, Samsung
GDDR6, Micron
GDDR6, Hynix
Boost Clock: 1635 MHz

Manufacturer:	EVGA
Model:	RTX 2080 Ti
Device Id:	10DE 1E07
Subsystem Id:	3842 2384
Interface:	PCI-E
Memory Size:	11264 MB
GPU Clock:	1350 MHz
Boost Clock:	1635 MHz
Memory Clock:	1750 MHz
Memory Type:	GDDR6


----------



## Spiriva

ftln said:


> Thanks
> 
> Anybody know which bios I should try ?


Try the Galax 380W BIOS from the first page; it's a good, solid BIOS that works with air cooling. If you put the card under water, there are other BIOSes to try that might work better.
Don't forget to save your original BIOS in case you ever need to flash it back or send the card in for RMA.

https://www.techpowerup.com/vgabios/204557/KFA2.RTX2080Ti.11264.180910.rom


nvflash --save 2080tiorg.rom
nvflash --protectoff
nvflash -6 KFA2.RTX2080Ti.11264.180910.rom

(When you download it you can save it under a shorter name, for example kfa2.rom; the last step then becomes nvflash -6 kfa2.rom.)


----------



## jura11

joyzao said:


> Hello everyone, everything good? I have a rtx 2080 ti asus strix oc, which bios can I have better benchmark performance? I tried the msi trio x but it doesn't flash. The asus strix xoc is not going well, any tips bios?
> 
> appreciate


Hi there 

Personally I would try the Matrix BIOS whether you are on air or water. I use the Matrix BIOS on my ASUS RTX 2080 Ti Strix, and as with everything it depends on temperatures: try to keep them under the 70s, or rather in the low 60s, to get the best clocks.

With the Matrix BIOS I can do 2160-2175MHz and +800MHz on VRAM.

The XOC BIOS does work, but with that BIOS you really need to be under water to get the best performance. With the XOC BIOS my best OC is close to 2200MHz (2190MHz), but that's at a very low ambient of 13°C, where I have seen a water temperature of just 15°C under load.

Hope this helps 

Thanks, Jura


----------



## ftln

So I just tried the Gigabyte 360W 2019 BIOS (https://www.techpowerup.com/vgabios/216078/216078) on my 2019 333W EVGA XC Hybrid, and after flashing I was unable to control the fans on the XC Hybrid. I did a fresh install of Win10 just to make sure, and the fans spin very slowly on the VRM and the GPU rad even when maxing out the fan curve in X1 or Afterburner. So I have gone back to the original BIOS.

The original BIOS is still pretty good though: https://www.3dmark.com/spy/9760917 - 7719 GPU score in Time Spy Extreme


----------



## Prod1702

I bought an MSI 2080 Ti Gaming X Trio last week. I just flashed the Galax BIOS here (https://www.techpowerup.com/vgabios/215860/galax-rtx2080ti-11264-190726), which worked great. I still have fan control and so far no problems. It is allowing me to get a stable 2100MHz core (+135) and 8100MHz mem (+1100) with no drops running 3DMark. Power was hitting 370W to 380W. Temps stayed around 64C with 70% fan speed. The only con is I lost RGB control, but the RGBs are lit up, just running the rainbow colors. Doesn't really matter much to me since I plan on putting a waterblock on it soon.


----------



## Bart

Hey dudes, need some input from 2080TI overclockers. I bought what I presumed would be the most El Cheapo 2080TI you can buy, one of these (I've had good luck with Zotac over the years):










Then I put it under water, with a Heatkiller IV, using Arctic Cooling MX-4 paste:










Now I've been water cooling and overclocking for decades, but I'm seeing weird stuff here. GOOD weird!! The first weird thing is the temps. This thing TOPS OUT at 35C! Now if that GPU were stock boosting at 1750mhz, I'd still be impressed. But that's after firing up MSI Afterburner, setting +250 core OC, +700 memory OC (with 120% power limit), and running Time Spy Extreme / Firestrike Ultra stress tests REPEATEDLY. I've been testing for hours. The core is reportedly hitting 2145mhz, memory is hitting 8000mhz, and 35C is the reported max temp (!!!). Those core clocks seem suspiciously good, but this is my first 2080 series card, so no experience with these things. Did I just win the silicon lottery on this GPU?


----------



## Sheyster

Bart said:


> Did I just win the silicon lottery on this GPU?


I would not consider anything under 2200 core/8000 memory 100% stable to be "golden". It's all relative though, others may think differently.


----------



## Bart

Sheyster said:


> I would not consider anything under 2200 core/8000 memory 100% stable to be "golden". It's all relative though, others may think differently.


Cool, so I'll aim for 2200 then, thanks! I doubt it will hit that, but it will be fun trying.


----------



## ESRCJ

Bart said:


> Hey dudes, need some input from 2080TI overclockers. I bought what I presumed would be the most El Cheapo 2080TI you can buy, one of these (I've had good luck with Zotac over the years):
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Then I put it under water, with a Heatkiller IV, using Arctic Cooling MX-4 paste:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Now I've been water cooling and overclocking for decades, but I'm seeing weird stuff here. GOOD weird!! The first weird thing is the temps. This thing TOPS OUT at 35C! Now if that GPU were stock boosting at 1750mhz, I'd still be impressed. But that's after firing up MSI Afterburner, setting +250 core OC, +700 memory OC (with 120% power limit), and running Time Spy Extreme / Firestrike Ultra stress tests REPEATEDLY. I've been testing for hours. The core is reportedly hitting 2145mhz, memory is hitting 8000mhz, and 35C is the reported max temp (!!!). Those core clocks seem suspiciously good, but this is my first 2080 series card, so no experience with these things. Did I just win the silicon lottery on this GPU?


The temperature you're reporting is nice, but it's meaningless without the proper context, such as ambient temperature or, more relevantly for water cooling, the delta between fluid temperature and GPU temperature. Basically, there should be nothing out of the ordinary there compared to other 2080 Tis using the same block.

As for the reported clock speeds, unless the card is passing something like the Time Spy Extreme stress test at the reported clocks, and doing so without throttling, the silicon cannot be fully evaluated. With that said, the voltage also matters. If you're hitting 2145MHz stable at 1.093V (the cap on vcore), I would not at all consider that winning the silicon lottery, though it would be above average. I've seen some very poor silicon, such as FTW3s that can barely manage over 2000MHz. As for buying a cheaper variant, it really doesn't matter; unless you're buying a Kingpin or Matrix, I don't think there's much binning going on.



Sheyster said:


> I would not consider anything under 2200 core/8000 memory 100% stable to be "golden". It's all relative though, others may think differently.


"Golden" is indeed subjective, since we don't know the distribution of silicon quality on these cards and the cutoff for "golden" would be in itself subjective. With that said, I would also not consider anything under 2200MHz at 1.093V to be golden.


----------



## Bart

ESRCJ said:


> The temperature you're reporting is nice, but it's meaningless without the proper context, such as ambient temperature, more more relevantly for water cooling, the delta between fluid temperature and GPU temperature. Basically there should be nothing out of the ordinary there compared to other 2080 Tis using the same block.


The temps must be wrong. The rig is in a cold basement (plus Canadian winter), ambient 18C-20C, in an overkill loop (2 88ml 360 Monstas, one 360 X-flow, and a 240, all with good fans). But I've done overkill rad space for years and haven't seen this. Best temps I've seen under full load in similar conditions were my 1080TIs topping out at 46C, so I'm assuming this GPU temp sensor is just plain goofy. It got into the liquor-filled egg nog or something.


----------



## jura11

Bart said:


> The temps must be wrong. The rig is in a cold basement (plus Canadian winter), ambient 18C-20C, in an overkill loop (2 88ml 360 Monstas, one 360 X-flow, and a 240, all with good fans). But I've done overkill rad space for years and haven't seen this. Best temps I've seen under full load in similar conditions were my 1080TIs topping out at 46C, so I'm assuming this GPU temp sensor is just plain goofy. It got into the liquor-filled egg nog or something.


Hi there 

These temperatures are great, and I would expect such temperatures with such a loop.

For comparison, my loop has 4 GPUs in it, with 4x 360mm radiators plus a MO-RA3 360mm. My ASUS RTX 2080 Ti Strix will do 2145-2160MHz with temperatures topping out at 38-42°C at a normal ambient in the mid 20s; at lower ambient temperatures in normal stress tests I see 36-38°C with a 2145MHz OC. My GTX 1080 Ti will do 2113MHz at 1.07v, and temperatures are in the mid 30s (33-36°C) in gaming or rendering.

I just did a quick Superposition test with ambient at 20°C, and temperatures never passed 36°C on the RTX 2080 Ti Strix.

My water delta-T during the test was 1-1.5°C.

What waterblock did you use with your GTX 1080 Ti, and which GTX 1080 Ti did you have? My friend's MSI GTX 1080 Ti Gaming X 11G was one of the hottest GPUs I have tested; I tried a few waterblocks on that card, but the best temperatures I saw were in the mid 40s to low 50s, and that's with 2x 360mm radiators plus a 240mm radiator.

The Heatkiller and the Aqua Computer kryographics RTX 2080 Ti are the two best waterblocks on the market for the RTX 2080 Ti.

I don't think your GPU temperature sensor is bad or reporting wrong values; these are temperatures I would expect too.

Hope this helps

Thanks, Jura


----------



## Bart

Thanks Jura, that's good to know! I have a pair of Zotac 1080 Tis with Heatkiller IV blocks in a Threadripper system, and they top out at around 46C at 2100 core. Heatkillers are my go-to, but it appears the 2080 Ti runs a bit cooler than the 1080 Ti, or Watercool made some serious improvements in their blocks between the 1080s and 2080s. Good to know these readings are accurate, and awesome!


----------



## ESRCJ

Bart said:


> The temps must be wrong. The rig is in a cold basement (plus Canadian winter), ambient 18C-20C, in an overkill loop (2 88ml 360 Monstas, one 360 X-flow, and a 240, all with good fans). But I've done overkill rad space for years and haven't seen this. Best temps I've seen under full load in similar conditions were my 1080TIs topping out at 46C, so I'm assuming this GPU temp sensor is just plain goofy. It got into the liquor-filled egg nog or something.


If your ambients are 18-20C, then 35C sounds about right for a reference BIOS near the power limit. Igor's Lab tested the Heatkiller block by fixing the fluid temp to 20C with a chiller and with a full load on the 380W BIOS and was hitting 35C on average, so a 15C delta between the fluid and GPU temps. Obviously without a chiller, it would take some time for that delta to reach a steady state. 

The temperature reported should be accurate. You should expect less than a 15C delta between fluid temp and GPU temp if the card isn't hitting 380W.


----------



## sblantipodi

guys, is there any nvidia roadmap where they show PCIe 4.0 in future cards?
do you think that nvidia will ever ship a PCIe 4.0 card in the near future or if it will jump to PCIe 5.0 directly?


----------



## shiokarai

Is there any way to bypass newer cards XUSB FW error when flashing the BIOS or newer cards are essentially BIOS locked?


----------



## Prod1702

shiokarai said:


> Is there any way to bypass newer cards XUSB FW error when flashing the BIOS or newer cards are essentially BIOS locked?



You need a newer BIOS that works with the newer firmware. Galax has a newer 380W one that you can try. I have an MSI 2080 Ti with the new firmware, and the latest Galax 380W BIOS works for me, but the older ones give me an error.


----------



## chibi

Prod1702 said:


> You need a newer bios that works with the newer FW. Galaxy has a new 380w that you can try. I have a MSI 2080 Ti with the new FW and the latest Galaxy 380W bios works for me but the older ones give me an error.



Where can you source the newer Galaxy 380W Bios? Please share, thank you. :thumb:


----------



## Prod1702

chibi said:


> Where can you source the newer Galaxy 380W Bios? Please share, thank you. :thumb:


This is the one that I am using.

https://www.techpowerup.com/vgabios/215860/galax-rtx2080ti-11264-190726

You can also search on there for other ones to see if one works better for you. If you need steps on how to flash with nvflash, I would look at the OPs posted on the first page.


----------



## chibi

Prod1702 said:


> This is the one that I am using.
> 
> https://www.techpowerup.com/vgabios/215860/galax-rtx2080ti-11264-190726
> 
> You can also search on there for other ones to see if one works better for you. If you need steps on how to flash with nvflash, I would look at the OPs posted on the first page.



Thanks brother, +Rep! I know a lot of people were stuck trying to flash the older BIOS on new cards. This will help a lot of us here. :thumb:


----------



## kithylin

sblantipodi said:


> guys, is there any nvidia roadmap where they show PCIe 4.0 in future cards?
> do you think that nvidia will ever ship a PCIe 4.0 card in the near future or if it will jump to PCIe 5.0 directly?


Nvidia has not officially confirmed anywhere in any public space that they have any future plans to offer a PCI-Express 4.0 video card at this time. Ampere (next generation coming up "Soon") may have this feature, but they have not released any specifications on Ampere yet. So no one knows.


----------



## gfunkernaught

Sheyster said:


> I would not consider anything under 2200 core/8000 memory 100% stable to be "golden". It's all relative though, others may think differently.


A golden sample can be many things. For example, I have an ASUS ROG laptop with a 980M, and playing CoD MW 2019 at 900p with all settings maxed it was getting 55-60fps with vsync, with temps around 70-72C, and that's with the laptop on my lap. That to me sounds like a golden sample, because that is amazing for a laptop GPU. Another example of a golden sample for a 2080 Ti would be, say, [email protected] or less voltage on a reference PCB. 2200MHz+ stable in games is custom-PCB territory only, unless some rare platinum sample exists on a reference PCB with a flashed BIOS.


----------



## joyzao

Prod1702 said:


> This is the one that I am using.
> 
> https://www.techpowerup.com/vgabios/215860/galax-rtx2080ti-11264-190726
> 
> You can also search on there for other ones to see if one works better for you. If you need steps on how to flash with nvflash, I would look at the OPs posted on the first page.



I can flash the BIOS, but after restarting the PC there is no video output. The only BIOSes I could get working so far were the Strix XOC and the ASUS Matrix. How can I use this Galax BIOS?


----------



## AndrejB

joyzao said:


> Prod1702 said:
> 
> 
> 
> This is the one that I am using.
> 
> https://www.techpowerup.com/vgabios/215860/galax-rtx2080ti-11264-190726
> 
> You can also search on there for other ones to see if one works better for you. If you need steps on how to flash with nvflash, I would look at the OPs posted on the first page.
> 
> 
> 
> 
> I can flash bios, but when restarting the pc it runs out of video, the only bios I could work so far was strix xoc and asus matrix, how can I use this bios from galax?
Click to expand...

By flashing, then physically changing which DisplayPort you're plugged into, and restarting.

Or if you already know which port doesn't work, switch to the other one beforehand.


----------



## TK421

chibi said:


> Thanks brother, +Rep! I know a lot of people were stuck trying to flash the older BIOS on new cards. This will help a lot of us here. :thumb:





Prod1702 said:


> This is the one that I am using.
> 
> https://www.techpowerup.com/vgabios/215860/galax-rtx2080ti-11264-190726
> 
> You can also search on there for other ones to see if one works better for you. If you need steps on how to flash with nvflash, I would look at the OPs posted on the first page.



Are you guys both on reference pcbs?



I wonder if this new 400w galax vbios would flash into an older reference pcb that's compatible with the 380w vbios?


----------



## Prod1702

TK421 said:


> Are you guys both on reference pcbs?
> 
> 
> 
> I wonder if this new 400w galax vbios would flash into an older reference pcb that's compatible with the 380w vbios?


No, I have an MSI 2080 Ti Gaming X Trio; it has a custom PCB.


----------



## shiokarai

Prod1702 said:


> chibi said:
> 
> 
> 
> Where can you source the newer Galaxy 380W Bios? Please share, thank you. :thumb:
> 
> 
> 
> This is the one that I am using.
> 
> https://www.techpowerup.com/vgabios/215860/galax-rtx2080ti-11264-190726
> 
> You can also search on there for other ones to see if one works better for you. If you need steps on how to flash with nvflash, I would look at the OPs posted on the first page.
Click to expand...

Well, I have a reference card (EVGA XC Ultra), and this is a custom PCB with 3x 8-pin vs 2x 8-pin on the reference board... wouldn't this cause issues?


----------



## ESRCJ

shiokarai said:


> Well, I have reference card (EVGA XC Ultra) and this is custom PCB with 3x8pin vs 2x8pin on reference board.... wouldn’t this cause issues?


I gave it a try with my FE. The BIOS works, but the power limit isn't 400W with a dual 8-pin. Instead, it caps at a little under 350W.


----------



## Sheyster

ESRCJ said:


> I gave it a try with my FE. The BIOS works, but the power limit isn't 400W with a dual 8-pin. Instead, it caps at a little under 350W.


That's typical, hence why everyone likes the original 380w BIOS.


----------



## BigMack70

I am getting tired of hearing how obnoxiously loud this card's stock cooler can get (Founders Edition)... thinking of biting the bullet and getting a Kraken G12 and putting it under water. I know that works but also that it is a bit of a hack; anyone have any advice for or against doing that? 

I'm not terribly concerned about VRM temps; the case has good airflow and this thing has enough phases that I doubt they will overheat on a stock BIOS. 

My bigger concern is how much of a pain in the ass it might be to reassemble the card in 6 months when it comes time to sell it to upgrade to whatever Nvidia releases next. I know the stock card has thermal pads everywhere. Is it too bad to reassemble? Will the removed thermal pads on the stock cooler dry out and become unusable in the meantime?


----------



## shiokarai

Sheyster said:


> ESRCJ said:
> 
> 
> 
> I gave it a try with my FE. The BIOS works, but the power limit isn't 400W with a dual 8-pin. Instead, it caps at a little under 350W.
> 
> 
> 
> That's typical, hence why everyone likes the original 380w BIOS. ;)
Click to expand...

That’s basically the XC Ultra power limit (about 340W), so no difference... which means there is still no way to get a better BIOS for the newer reference PCB cards, I assume.


----------



## EugW

Why does the XOC BIOS not work with DaVinci Resolve? I tried both the Kingpin and the ASUS one. When I start DaVinci Resolve my GPU clocks are 645MHz core and 450MHz mem. Pls help


----------



## kithylin

EugW said:


> why does the XOC bios not work with davinci resolve? I tried both kingpin and asus. When I start davinci resolve my gpu clocks are 645mhz gpu and 450mhz mem. Pls help


Does it work at stock speed with a stock bios?


----------



## EugW

kithylin said:


> Does it work at stock speed with a stock bios?


Yes it does. After a conversation with a pro overclocker we figured out that launching DaVinci forces my card into the P8 state. He also tested DaVinci and it works for him, so I think the problem is somewhere in Windows.


----------



## motorcyclerider

Sheyster said:


> That's typical, hence why everyone likes the original 380w BIOS. ;)


The 380 watt bios doesn't work with new 2019 cards. I have a EVGA XC Hybrid and have tried to flash a good many bios. Most 2018 bios versions simply want flash. Out out the ones I could get to flash only 3 gave me increased wattage. The King ping, EVGA FTW3, and the ASUS ROG Matrix. The King Ping is to much, the FTW3 gave me 370watts out of 370watts its suppose to, and the ASUS Matrix gave me all the 370watts it's good for. The ASUS didn't play nice with my ports. So I went with this ASUS bios.

https://www.techpowerup.com/vgabios/208661/evga-rtx2080ti-11264-181104


Has anyone gotten the 380-watt BIOS, or any other BIOS with 400+ watts, to work on a 2019 EVGA?


----------



## chibi

What about the late 2019 FE cards? Does the 380W bios flash on those?


----------



## kx11

if anyone wants the 2080ti strix white edition bios i can upload it to TPU


----------



## axiumone

Anyone with dual FE cards with waterblocks interested in swapping for two Strix cards with either EK or Bitspower blocks? The Strix cards are a little large for me and, mostly, I need cards with 3 DisplayPorts. My cards stay around 2070 MHz on a 365W Strix BIOS.


----------



## motorcyclerider

chibi said:


> What about the late 2019 FE cards? Does the 380W bios flash on those?


I don't believe so. Whoever did the really nice write-up at the beginning of this thread touched on this issue.


----------



## madno

There was a discussion about serial numbers and dead cards during the past year. I got a card with 03202... (not 0323). Is any information (maybe I should say guesses or rumours) about that range available?


----------



## motorcyclerider

Spiriva said:


> I sent you a PM with the bios.


Please PM me too.


----------






## GRABibus

kx11 said:


> if anyone wants the 2080ti strix white edition bios i can upload it to TPU


What are its specifications?
Better than the Galax 380W BIOS?


----------



## AndrejB

kx11 said:


> if anyone wants the 2080ti strix white edition bios i can upload it to TPU


Yes please. This BIOS might be more stable than the Matrix, for us lottery losers.


----------



## shiokarai

motorcyclerider said:


> The 380-watt BIOS doesn't work with the new 2019 cards. I have an EVGA XC Hybrid and have tried to flash a good many BIOSes. Most 2018 BIOS versions simply won't flash. Of the ones I could get to flash, only three gave me increased wattage: the Kingpin, the EVGA FTW3, and the ASUS ROG Matrix. The Kingpin is too much, the FTW3 gave me the full 373 watts it's supposed to, and the ASUS Matrix gave me all the 370 watts it's good for. The ASUS didn't play nice with my ports, so I went with this EVGA BIOS.
> 
> https://www.techpowerup.com/vgabios/208661/evga-rtx2080ti-11264-181104
> 
> 
> Has anyone gotten the 380-watt BIOS, or any other BIOS with 400+ watts, to work on a 2019 EVGA?


Could you elaborate? How was the Kingpin BIOS "too much"? Anything wrong with it? Have you successfully flashed the Kingpin XOC BIOS to the 2019 EVGA XC Hybrid?

Also, why did you go with the ASUS Matrix BIOS, and not the EVGA FTW3?


----------



## shiokarai

So, if I understand correctly, the only BIOSes working right now for the 2019/late-2019 RTX 2080 Ti cards (based on the FE PCB) are the Asus XOC BIOS and the Kingpin XOC BIOS? Or are there issues with those BIOSes too?


----------



## kx11

GRABibus said:


> What are his specifications ?
> Better than Galax 380W Bios ?



I don't know if it's better than the GALAX, but it runs the memory clock with a +400 offset by default.


----------



## kx11

AndrejB said:


> Please, this bios would be more stable than the matrix, for us lottery losers.



It should be, since I updated it last night; the main feature of the update was an AURA lighting fix.





Hope it works out fine.


https://www.techpowerup.com/vgabios/216667/216667


----------



## GRABibus

kx11 said:


> it should be since i updated it last night which included AURA lights fix as the main feature
> 
> 
> 
> 
> 
> hope it works out fine
> 
> 
> https://www.techpowerup.com/vgabios/216667/216667


Thanks.
It's a 360W power-limit BIOS, so less than the Galax 380W. With this ASUS one I would hit the power-limit wall most of the time in some games and benchmarks.


----------



## pattiri

kx11 said:


> it should be since i updated it last night which included AURA lights fix as the main feature
> 
> 
> 
> 
> 
> hope it works out fine
> 
> 
> https://www.techpowerup.com/vgabios/216667/216667



Thanks for the link. I've tried this and went back to the 1000W BIOS; for me it's the best. I set the power limit to 420W and reach a max of around 410W during 8K Superposition. The 1080p Extreme benchmark starts at 2070 MHz at 40°C, and by the end it's at 2025 MHz at around 60-62°C. VRAM is at 7800 MHz, and the max score I get is 9868. The card is air-cooled, and these are the best stable clocks I can get from my silicon-lottery-loser card.


----------



## shiokarai

pattiri said:


> Thanks for the link. I've tried this and went back to the 1000W BIOS; for me it's the best. I set the power limit to 420W and reach a max of around 410W during 8K Superposition. The 1080p Extreme benchmark starts at 2070 MHz at 40°C, and by the end it's at 2025 MHz at around 60-62°C. VRAM is at 7800 MHz, and the max score I get is 9868. The card is air-cooled, and these are the best stable clocks I can get from my silicon-lottery-loser card.


ASUS XOC bios? on 2019 FE PCB card?


----------



## AndrejB

kx11 said:


> AndrejB said:
> 
> 
> 
> Please, this bios would be more stable than the matrix, for us lottery losers.
> 
> 
> 
> 
> it should be since i updated it last night which included AURA lights fix as the main feature
> 
> 
> 
> 
> 
> hope it works out fine
> 
> 
> https://www.techpowerup.com/vgabios/216667/216667
Click to expand...

Thank you.

Small question, how did you update the bios? Through GpuTweak?


----------



## GRABibus

AndrejB said:


> Thank you.
> 
> Small question, how did you update the bios? Through GpuTweak?


With NVFlash, I assume.
See the first page of this thread for how to flash (at your own risk).
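For reference, the usual NVFlash sequence looks roughly like the sketch below. This is a generic outline rather than thread-specific instructions, and the filenames are placeholders; flashing a mismatched BIOS can brick a card, so always back up first:

```shell
# Always save the current BIOS before flashing anything else.
nvflash --save backup.rom

# Disable the EEPROM write protection.
nvflash --protectoff

# Flash the downloaded BIOS. The -6 switch overrides the PCI
# subsystem ID mismatch check that cross-vendor flashes normally trip.
nvflash -6 newbios.rom
```

A reboot afterwards is required, and the saved backup.rom is the only way back if the new image misbehaves.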


----------



## acoustic

I've considered flashing the Kingpin BIOS to my EVGA 2080 Ti FTW3 Ultra on the stock 3-fan air cooler, just for the higher power limit. I've run 2100/8000 since I've owned it, but it'll crash if the temps hit 70°C. The only game that's gotten it that hot is the new Star Wars game, surprisingly. I use a 1:1 fan curve. I don't really know if there's anything to gain unless I put it under water. I'm pretty sure this card would be a monster if I could get a block on it.


----------



## AndrejB

GRABibus said:


> With Nvflash I assume.
> See first page of this thread for how to flash (At your own risk)


The question was regarding the new asus strix white oc card, that doesn't have any bios on tpu...

I asked, because I'm wondering how asus sends official bios updates.


----------



## superino

Guys, I need a hand. Let me explain: I own a Gigabyte GV-N208TAORUSX-WB-11GC GeForce RTX 2080 Ti; do you think I can mount the EK-Quantum Vector Aorus RTX 2080 Ti water block on it? I cracked a hole where the fitting connects.


----------



## motorcyclerider

shiokarai said:


> motorcyclerider said:
> 
> 
> 
> The 380-watt BIOS doesn't work with the new 2019 cards. I have an EVGA XC Hybrid and have tried to flash a good many BIOSes. Most 2018 BIOS versions simply won't flash. Of the ones I could get to flash, only three gave me increased wattage: the Kingpin, the EVGA FTW3, and the ASUS ROG Matrix. The Kingpin is too much, the FTW3 gave me the full 373 watts it's supposed to, and the ASUS Matrix gave me all the 370 watts it's good for. The ASUS didn't play nice with my ports, so I went with this EVGA BIOS.
> 
> https://www.techpowerup.com/vgabios/208661/evga-rtx2080ti-11264-181104
> 
> 
> Has anyone gotten the 380-watt BIOS, or any other BIOS with 400+ watts, to work on a 2019 EVGA?
> 
> 
> 
> Could you elaborate/explain? How was kingpin bios "too much"? Anything wrong with it? Have you successfully flashed kingpin XOC bios to the EVGA XC Hybrid 2019?
> 
> Also, why did you go with the ASUS matrix bios, nor the EVGA FTW3?
Click to expand...

It was a typo, I did use the EVGA FTW3 BIOS. I was able to flash the Kingpin. The Kingpin runs the fans and pump at what seems like over 100%, extremely noisy to the point that it sounds like it's going to burn something up. I also could not get Afterburner or Precision X1 to work correctly with the Kingpin BIOS; X1 wouldn't even open.


----------



## motorcyclerider

shiokarai said:


> So, if I understand correctly, the only working right now bioses for the 2019/late 2019 RTX 2080 ti cards (based on FE PCB) are the Asus XOC bios and Kingpin XOC bios? Or are there issues with those bioses too?


The Asus will flash on my late 2019 card, but it messes up my HDMI port.


----------



## shiokarai

motorcyclerider said:


> shiokarai said:
> 
> 
> 
> So, if I understand correctly, the only working right now bioses for the 2019/late 2019 RTX 2080 ti cards (based on FE PCB) are the Asus XOC bios and Kingpin XOC bios? Or are there issues with those bioses too?
> 
> 
> 
> The Asus will flash on my late 2019 but it messes up my HDMI Port.
Click to expand...

Anything else not working or bugged? Is there an actual improvement?


----------



## kx11

AndrejB said:


> Thank you.
> 
> Small question, how did you update the bios? Through GpuTweak?



The Armoury Crate app. It comes with recent Asus motherboards (late 2018 and onwards), and it won't work without one.


----------



## motorcyclerider

shiokarai said:


> motorcyclerider said:
> 
> 
> 
> 
> 
> shiokarai said:
> 
> 
> 
> So, if I understand correctly, the only working right now bioses for the 2019/late 2019 RTX 2080 ti cards (based on FE PCB) are the Asus XOC bios and Kingpin XOC bios? Or are there issues with those bioses too?
> 
> 
> 
> The Asus will flash on my late 2019 but it messes up my HDMI Port.
> 
> Click to expand...
> 
> anything else not working/bugged/etc? Is there an actual improvement?
Click to expand...

Not much improvement with the 373-watt EVGA BIOS; I need more power. I wish the Kingpin BIOS didn't have the issues it does. I can't clock my card high enough to even make it run above 60°C. I can get a good stable 2070 MHz clock with both the stock BIOS and the FTW3 BIOS.


----------



## AndrejB

kx11 said:


> AndrejB said:
> 
> 
> 
> Thank you.
> 
> Small question, how did you update the bios? Through GpuTweak?
> 
> 
> 
> 
> armoury crate app , it can be found with Asus recent mobos (late 2018 and afterwards ) as it won't work without one
Click to expand...

Thanks.

And lol, so because I have a Gigabyte board I can't update the GPU BIOS until it's released on TPU, nice... (I knew LiveUpdate in GPU Tweak didn't do s....)

Well played Asus, well efing played....


----------



## skupples

If you're like many other people recently, you'll have had your last straw with ASUS.

My second-to-last ASUS straw: the ASUS Z390 PRIME-A being absolute trash. Cheap boards do NOT have to be garbage, and ASUS used to know this.

The final flipping straw: my Asus ProArt 10-bit 4K60 started artifacting on either half of the screen randomly after 14 months of use, and it's slowly getting worse. No, it wasn't the cable, or the GPU, or the motherboard, or the storage.... *wallet dies* I like this AW3420 though; it's a nice holdover until 4K 120Hz OLED shrinks in size.


----------



## BleedOutCold

shiokarai said:


> So, if I understand correctly, the only working right now bioses for the 2019/late 2019 RTX 2080 ti cards (based on FE PCB) are the Asus XOC bios and Kingpin XOC bios? Or are there issues with those bioses too?


I can confirm the following 380W BIOS working fine on a new (11/2019) 2080 Ti FE: https://www.techpowerup.com/vgabios/204557/kfa2-rtx2080ti-11264-180910. Max power draw is reported as 390W by HWiNFO64.

EDIT: nope, for whatever reason this new card has the older ISSI EEPROM.


----------



## pattiri

shiokarai said:


> ASUS XOC bios? on 2019 FE PCB card?



Nope, my card is old.


----------



## JustinThyme

skupples said:


> if you're like many other people as of recent. You'll have had your last straw drawn with ASUS.
> 
> My 2nd to last ASUS straw - ASUS z390 PRIME - A being absolute trash. Cheap boards do NOT have to be garbage, and ASUS used to know this.
> 
> The final flipping straw - my Asus Pro Art 10bit 4K60 started artifacting on either half of the screen randomly after 14 months of use. It's slowly getting faster & faster. No it wasn't the cable, or the GPU, or the motherboard, or the storage.... *wallet dies* i like this AW3420 though. it's a nice hold over until 4K OLED 120 shrinks in size.


Well, that sucks the big old donkey.......
I've had good luck: the R6E is holding up, a pair of 2080 Ti Strix O11G GPUs are running strong, and the PG349Q works great. The only ASUS product I've bought that I've regretted is the POS Spatha mouse. It works OK, but it has the battery life of a five-year-old gaming on an iPhone 4; I just don't get that one. I use a Logitech MX Master for the most part, which will go a month without charging, yet they can't get 24 hours out of the Spatha even with all the lighting turned off. They've done plenty of other things to chap my a$$, so the next build is going a different direction.

On the topic of the cards: even keeping them cool, it doesn't matter what BIOS I run... 2150 is it. They're not doing 2151 no matter what I run on them. I tried several, including the XOC and Matrix, and while I didn't hit the power limit....... 2150 was it.


----------



## chibi

BleedOutCold said:


> I can confirm the following 380W bios working fine on a new (11/2019) 2080TI FE: https://www.techpowerup.com/vgabios/204557/kfa2-rtx2080ti-11264-180910 Max power draw is reported as 390W by HWInfo64.



Good to know! Are there any issues with fan control or loss of video ports?


----------



## BleedOutCold

chibi said:


> BleedOutCold said:
> 
> 
> 
> I can confirm the following 380W bios working fine on a new (11/2019) 2080TI FE: https://www.techpowerup.com/vgabios/204557/kfa2-rtx2080ti-11264-180910 Max power draw is reported as 390W by HWInfo64.
> 
> 
> 
> 
> Good to know! Are there any issues with fan control or loss of video ports?
Click to expand...

No loss on the one DisplayPort I'm using; haven't tried the others. Can't speak to the fans, the card is under a Heatkiller IV block. I still need to explore what headroom or memory instability (if any) this BIOS brings. I also need to try the Kingpin and see if the added heat and electricity of pulling another ~100W can be justified by the headroom.


----------



## shiokarai

BleedOutCold said:


> I can confirm the following 380W bios working fine on a new (11/2019) 2080TI FE: https://www.techpowerup.com/vgabios/204557/kfa2-rtx2080ti-11264-180910 Max power draw is reported as 390W by HWInfo64.


XUSB FW mismatch, still doesn't work...


----------



## motorcyclerider

shiokarai said:


> BleedOutCold said:
> 
> 
> 
> I can confirm the following 380W bios working fine on a new (11/2019) 2080TI FE: https://www.techpowerup.com/vgabios/204557/kfa2-rtx2080ti-11264-180910 Max power draw is reported as 390W by HWInfo64.
> 
> 
> 
> 
> XUSB FW mismatch, still doesn't work...
Click to expand...

Doesn't work for me either.


----------



## BleedOutCold

shiokarai said:


> BleedOutCold said:
> 
> 
> 
> I can confirm the following 380W bios working fine on a new (11/2019) 2080TI FE: https://www.techpowerup.com/vgabios/204557/kfa2-rtx2080ti-11264-180910 Max power draw is reported as 390W by HWInfo64.
> 
> 
> 
> XUSB FW mismatch, still doesn't work...
Click to expand...




motorcyclerider said:


> shiokarai said:
> 
> 
> 
> 
> 
> BleedOutCold said:
> 
> 
> 
> I can confirm the following 380W bios working fine on a new (11/2019) 2080TI FE: https://www.techpowerup.com/vgabios/204557/kfa2-rtx2080ti-11264-180910 Max power draw is reported as 390W by HWInfo64.
> 
> 
> 
> 
> XUSB FW mismatch, still doesn't work...
> 
> Click to expand...
> 
> Doesn't work for me either.
Click to expand...

Interesting. Looking back at the flash log, despite being a new FE, mine has the older USB firmware and ISSI EEPROM.


----------



## TK421

I have a bit of a complicated question for Kingpin 2080 Ti owners.












Here's two screenshots of the card running a looped 3dmark timespy extreme benchmark.

https://i.imgur.com/dVlTr8d.png

https://i.imgur.com/zFoBCHa.png

Any idea why the memory bank on the left side is hotter than the rest? There's about a 10°C difference between the hottest and coldest memory banks.

The cooling plate is connected to the AIO itself, so the temp difference shouldn't be that big here either.

https://i.imgur.com/BwWbGXG.png

https://i.imgur.com/efFyxiM.png

By logic, mem2 (in the front) should be coolest, while mem1 (top) and mem3 (bottom) should be roughly the same. I don't think losing one IC cuts heat output so drastically that it makes upwards of a 10°C difference?







I also tried overclocking the core. I'm on stock voltage; the maximum boost seems to be 2115 MHz (+75 on the core), the core voltage jumps between 1.042 and 1.07V, and power consumption is 290W on average (HWiNFO).


The GPU sits at 40-41°C. I thought I should be able to get more (around 2200 on the core at least) given it's a highly binned card?


----------



## AndrejB

kx11 said:


> AndrejB said:
> 
> 
> 
> Please, this bios would be more stable than the matrix, for us lottery losers.
> 
> 
> 
> 
> it should be since i updated it last night which included AURA lights fix as the main feature
> 
> 
> 
> 
> 
> hope it works out fine
> 
> 
> https://www.techpowerup.com/vgabios/216667/216667
Click to expand...

Would you mind uploading your BIOSes to TPU through GPU-Z, so they can be verified and we have them all in one place?


----------



## Kalm_Traveler

TK421 said:


> Have a bit of complicated question for Kingpin 2080Ti owners.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Here's two screenshots of the card running a looped 3dmark timespy extreme benchmark.
> 
> https://i.imgur.com/dVlTr8d.png
> 
> https://i.imgur.com/zFoBCHa.png
> 
> Any idea why memory bank on the left side is hotter than the rest? There's like 10c difference between hottest and coldest memory banks.
> 
> The cooling plate is connected to the AIO itself, so the temp difference shouldn't be that big here either.
> 
> https://i.imgur.com/BwWbGXG.png
> 
> https://i.imgur.com/efFyxiM.png
> 
> By logic mem2 (in the front) should be coldest while mem1 (top) and mem3 (bottom) should have roughly same. I don't think losing 1 IC chip cuts power output that drastically where it can be upwards of 10c difference?
> 
> 
> 
> 
> 
> 
> 
> I also tried overclocking the core. I'm on stock voltage, the maximum boost seems to be 2115MHz (+75 on core), the core voltage is jumping from 1.042-1.07 and power consumption is 290w average (hwinfo).
> 
> 
> The gpu is sitting 40-41c, I thought I should be able to get more (around 2200 core at least) given it's a highly binned card?


I don't have the stock cooler on mine anymore but I would expect the side with 3 memory chips instead of 4 to be significantly cooler, that's 25% less heat generation.

As far as binning, that is correct, but remember AIB companies are left with whatever Nvidia sells to them, whereas Nvidia can bin the chips as they come off the fab and keep the best for their own cards.


----------



## TK421

Kalm_Traveler said:


> I don't have the stock cooler on mine anymore but I would expect the side with 3 memory chips instead of 4 to be significantly cooler, that's 25% less heat generation.
> 
> As far as binning, that is correct but remember AIB companies are left with whatever Nvidia sells to them, where as Nvidia can bin the chips as they come off the fab to keep the best for their own cards.



That's a good point.


But I don't think they keep the highest-binned silicon? FE cards don't seem to clock any better than AIB ones.








Just kinda disappointed in the binning quality of this Kingpin, tbh.


----------



## kx11

AndrejB said:


> Would you mind uploading your bioses to tpu through gpuz, so they can be verified and we can have them in one place?





That's how I did it.


----------



## AndrejB

kx11 said:


> AndrejB said:
> 
> 
> 
> Would you mind uploading your bioses to tpu through gpuz, so they can be verified and we can have them in one place?
> 
> 
> 
> 
> 
> 
> that's how i did it
Click to expand...

Oh ok, not sure why it's showing unverified still.
In any case thank you very much, if you get any updates please let me know.


----------



## TK421

Out of curiosity, would there be interest in a card with the older XUSB version?


----------



## keikei

Regarding the 'max frame rate' feature, does that mean ALL games will have this? https://wccftech.com/nvidia-ces-2020-game-ready-driver-release-rtx-on/


----------



## Sheyster

keikei said:


> Regarding the 'max frame rate' feature, does that mean ALL games will have this? https://wccftech.com/nvidia-ces-2020-game-ready-driver-release-rtx-on/


That's just a new frame-rate limiter you can use (or, I assume, not use) in the NVCP. I assume it will work with any game. Why are you concerned about it?


----------



## munchkin1

For anyone with a newer XUSB firmware struggling to find a BIOS: here are my findings so far on the limited number of suitable newer BIOSes around with higher power limits than my card's (which is 330W).

My card is an MSI 2080ti Gaming X Trio purchased Oct 2019. 
Tested bios:

1. Galax HOF 10yr 400w. https://www.techpowerup.com/vgabios/215860/galax-rtx2080ti-11264-190726
- Appeared to work well
- Actually hits the power limit around 345-350W. Won't go over that, and GPU-Z/HWiNFO report it as power-limited despite also showing the limit as 400W
- I believe this is because the Galax is a 3x8-pin card, while the MSI is a 2x8-pin with a fake 6-pin (so it can be treated as 2x8-pin)
- Able to add 30 MHz more to the core for benching. Not stable enough for a daily OC though
- Major improvement in memory OC for benching. The MSI BIOS maxed at 8280; able to bench at 8420 on the Galax BIOS.

2. Gigabyte Gaming OC 360w. https://www.techpowerup.com/vgabios/209314/gigabyte-rtx2080ti-11264-190218
- Fans are bad: RPM reports incorrectly and the fans never ramp up, so the card was 25+ degrees hotter under load, counter-intuitively
- Afterburner did have some control, but even at 100% the fans were very slow
- Worse performance due to temperature
- Maybe OK on other cards; the MSI fans just did not work well with it

3. EVGA FTW3 373w. https://www.techpowerup.com/vgabios/207756/evga-rtx2080ti-11264-181029
- Working OK, fans etc. good. RGB appears to be gone (it worked fine on the other BIOSes), but that doesn't concern me too much
- Since this is a 2x8-pin BIOS, the power limit seems to work properly. Have seen it just above 360W during gaming.
- Benching OC is barely improved over the HOF BIOS. The core can't OC higher; memory can bench at 8480.
- Daily gaming OC is improved significantly. Without the power limit it seems happy to run a lot higher with daily stability, as opposed to 'enough stability to pass a bench but will freeze after 10 minutes of gaming'

Other bios I had lined up to try but didn't get around to due to being happy with the EVGA bios performance:
- Strix White OC
- Strix XOC
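For comparisons like the ones above, the limit the driver actually enforces can be cross-checked from the command line rather than trusting a single monitoring tool. A minimal sketch using nvidia-smi (bundled with the NVIDIA driver); the exact field labels can vary slightly between driver versions:

```shell
# Dump the power-related readings: current draw, enforced limit,
# default limit, and the min/max the BIOS allows.
nvidia-smi -q -d POWER

# Or as a compact one-liner:
nvidia-smi --query-gpu=power.draw,power.limit,power.max_limit --format=csv
```

If the reported maximum limit doesn't match the flashed BIOS's advertised limit, the flash most likely didn't take, or the card is capping below the limit table for another reason (e.g. connector configuration, as with the Galax BIOS above).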


----------



## shiokarai

munchkin1 said:


> For anyone with a newer XUSB firmware struggling to find bios. Here are my findings so far on the limited number of suitable new bios around with higher power limits than my card (which is 330w)
> 
> My card is an MSI 2080ti Gaming X Trio purchased Oct 2019.
> Tested bios:
> 
> 1. Galax HOF 10yr 400w. https://www.techpowerup.com/vgabios/215860/galax-rtx2080ti-11264-190726
> - Appeared to work well
> - Actually hits power limit around 345-350w. Won't go over that and reports as hitting power limit in GPUZ/HWinfo despite also showing limit as 400w
> - I believe this is because the Galax is a 3x8pin, while MSI is a 2x8pin with fake 6pin (can be treated as being 2x8pin)
> - Able to add 30mhz more to core for benching. Not stable enough for daily OC though
> - Major improvement on benching memory OC. MSI bios max 8280. Able to bench at 8420 on Galax bios.
> 
> 2. Gigabyte Gaming OC 360w. https://www.techpowerup.com/vgabios/209314/gigabyte-rtx2080ti-11264-190218
> - Fans bad. RPM report incorrectly and fans never ramp up so card was 25+ deg hotter under load, counter intuitive
> - Afterburner did have some control but even at 100%, fans were very slow
> - Worse performance due to temperature
> - Maybe OK on other cards, just the MSI fans did not work well
> 
> 3. EVGA FTW3 373w. https://www.techpowerup.com/vgabios/207756/evga-rtx2080ti-11264-181029
> - Working OK, fans etc good. RGB appears to be gone (worked fine on the other bioses) but that doesn't concern me too much
> - Since this is a 2x8pin bios, it seems the power limit is working properly. Have seen it just above 360w during gaming.
> - Benching OC is barely improved over HOF bios. Core can't OC higher. Memory can bench at 8480.
> - Daily gaming OC is improved significantly. Without power limit it seems happy to run a lot higher with daily stability as opposed to 'enough stability to pass a bench but will freeze after 10mins gaming'
> 
> Other bios I had lined up to try but didn't get around to due to being happy with the EVGA bios performance:
> - Strix White OC
> - Strix XOC


Thank you! Now try XOC bios and report back


----------



## Neosin

munchkin1 said:


> For anyone with a newer XUSB firmware struggling to find bios. Here are my findings so far on the limited number of suitable new bios around with higher power limits than my card (which is 330w)
> 
> My card is an MSI 2080ti Gaming X Trio purchased Oct 2019.
> Tested bios:
> 
> 1. Galax HOF 10yr 400w. https://www.techpowerup.com/vgabios/215860/galax-rtx2080ti-11264-190726
> - Appeared to work well
> - Actually hits power limit around 345-350w. Won't go over that and reports as hitting power limit in GPUZ/HWinfo despite also showing limit as 400w
> - I believe this is because the Galax is a 3x8pin, while MSI is a 2x8pin with fake 6pin (can be treated as being 2x8pin)
> - Able to add 30mhz more to core for benching. Not stable enough for daily OC though
> - Major improvement on benching memory OC. MSI bios max 8280. Able to bench at 8420 on Galax bios.
> 
> 2. Gigabyte Gaming OC 360w. https://www.techpowerup.com/vgabios/209314/gigabyte-rtx2080ti-11264-190218
> - Fans bad. RPM report incorrectly and fans never ramp up so card was 25+ deg hotter under load, counter intuitive
> - Afterburner did have some control but even at 100%, fans were very slow
> - Worse performance due to temperature
> - Maybe OK on other cards, just the MSI fans did not work well
> 
> 3. EVGA FTW3 373w. https://www.techpowerup.com/vgabios/207756/evga-rtx2080ti-11264-181029
> - Working OK, fans etc good. RGB appears to be gone (worked fine on the other bioses) but that doesn't concern me too much
> - Since this is a 2x8pin bios, it seems the power limit is working properly. Have seen it just above 360w during gaming.
> - Benching OC is barely improved over HOF bios. Core can't OC higher. Memory can bench at 8480.
> - Daily gaming OC is improved significantly. Without power limit it seems happy to run a lot higher with daily stability as opposed to 'enough stability to pass a bench but will freeze after 10mins gaming'
> 
> Other bios I had lined up to try but didn't get around to due to being happy with the EVGA bios performance:
> - Strix White OC
> - Strix XOC


You actually got a BIOS to flash to the MSI 2080 Ti Gaming X Trio?

Here's the thing: I have a portable air chiller and I'm able to get the card's temp down to about 15-25°C under load, so it'd be easy to run extra power. But I cannot get anything to flash to it. Which nvflash are you using?

I think we need something in the 400-450W range to really push this card to its limits bench-wise, but I'll be damned if anything works. I think the GPU list needs to be updated to remove the 400W stuff, because it just does not work, won't flash, etc. I have not tried the EVGA FTW3 one...

A note on the MSI power pins: I've tested all three plugs, and none of them are dummy or "fake", they are all pulling power. The card doesn't "power balance"; instead it depends on the PSU to handle it, and I have the Corsair AX1600i, which does a fine job of keeping them really close amp-wise. No idea why MSI did that; it does absolutely nothing for the card, the dual 8-pins are more than enough. Just a dumb marketing gimmick: yes it works, it's functional, but it doesn't do anything performance-wise for the card. ???MAYBE??? it lowers the temp on the wires by 1°C? My temp gun couldn't really notice any difference in temps on the wires; it was all within 1% error with all three plugged in or not.

Second thought: the card has fuses on each plug. No idea what they are or where they pop (if they did pop I'd just solder them solid); they are completely useless too, as you'd have to be at like over 700W to do any physical damage to the PCB traces or something stupid like that, and I don't even think other cards have them. Honestly, I can't believe they did this to a flagship card that, stock under load, stays at like 54°C, lol. They slapped on a beautiful cooler design that works amazingly well, and then the card is completely useless to a power user. Let's just say I won't ever be buying another MSI video card. I knew better; I've been an EVGA fanboy forever, took a chance, and my gut was right. MSI support has no clue and looks down on you for being a "power user" and even suggesting wanting a BIOS with more power (I've had some interesting convos with their support). Like, yeah fellas, I bought your top high-end card to be treated like I bought your low-end card. Guess what, MSI... ,.|... Never again. I'll just go Kingpin or something I know for a fact will have higher power. /rant ROFL

Gonna try flashing the EVGA BIOS and see what happens. Honestly, at this point I'd rather eBay this card with a big nasty open letter to MSI about it (it would be a good laugh to send that to them) and some no-reserve bids. ROFL, what a riot that would be. Like, if you're gonna f' your high-end power users, at least use some f'ing lube. Can you tell I'm a wee bit upset with MSI right now? Man, what a pile of trash MSI's GPU department is. What are you guys smoking over there? This idea that we have to hack up BIOSes just because MSI doesn't want to support us is absolutely nuts. Like, I don't care, VOID my warranty; I want everything this card can give, I paid for it, take your stupid limits-for-dummies off.


----------



## shiokarai

munchkin1 said:


> For anyone with a newer XUSB firmware struggling to find bios. Here are my findings so far on the limited number of suitable new bios around with higher power limits than my card (which is 330w)
> 
> My card is an MSI 2080ti Gaming X Trio purchased Oct 2019.
> Tested bios:
> 
> 1. Galax HOF 10yr 400w. https://www.techpowerup.com/vgabios/215860/galax-rtx2080ti-11264-190726
> - Appeared to work well
> - Actually hits power limit around 345-350w. Won't go over that and reports as hitting power limit in GPUZ/HWinfo despite also showing limit as 400w
> - I believe this is because the Galax is a 3x8pin, while MSI is a 2x8pin with fake 6pin (can be treated as being 2x8pin)
> - Able to add 30mhz more to core for benching. Not stable enough for daily OC though
> - Major improvement on benching memory OC. MSI bios max 8280. Able to bench at 8420 on Galax bios.
> 
> 2. Gigabyte Gaming OC 360w. https://www.techpowerup.com/vgabios/209314/gigabyte-rtx2080ti-11264-190218
> - Fans bad. RPM report incorrectly and fans never ramp up so card was 25+ deg hotter under load, counter intuitive
> - Afterburner did have some control but even at 100%, fans were very slow
> - Worse performance due to temperature
> - Maybe OK on other cards, just the MSI fans did not work well
> 
> 3. EVGA FTW3 373w. https://www.techpowerup.com/vgabios/207756/evga-rtx2080ti-11264-181029
> - Working OK, fans etc good. RGB appears to be gone (worked fine on the other bioses) but that doesn't concern me too much
> - Since this is a 2x8pin bios, it seems the power limit is working properly. Have seen it just above 360w during gaming.
> - Benching OC is barely improved over HOF bios. Core can't OC higher. Memory can bench at 8480.
> - Daily gaming OC is improved significantly. Without power limit it seems happy to run a lot higher with daily stability as opposed to 'enough stability to pass a bench but will freeze after 10mins gaming'
> 
> Other bios I had lined up to try but didn't get around to due to being happy with the EVGA bios performance:
> - Strix White OC
> - Strix XOC


https://www.techpowerup.com/vgabios/207756/evga-rtx2080ti-11264-181029 FTW3 BIOS working here, on the RTX 2080 Ti XC ULTRA, late 2019. Nothing strange to report, no lost display outputs etc.; everything works as with the stock BIOS. A small improvement in clocks, but it holds clocks during gaming much better. 2130 core seems to be the limit for my card with this BIOS (custom loop, card never exceeds 35C), mem 8150, will try higher. Still power and voltage limited. What next? XOC BIOS? Which one?


----------



## munchkin1

I'm currently daily OC stable at 2145 core, 8000 mem. Can bench at 2160 core, 8480 mem. Can probably push memory higher stable but haven't tried much as I'm getting awesome performance anyway here (paired with 8700k @ 5.1ghz).

As for MSI power: I know the 6 pin does actually pull power, but it's not treated like an actual extra power delivery (see Buildzoid's video); it's piggybacked onto the 8pin input. So in terms of choosing a BIOS, the MSI Gaming X Trio can essentially be treated as a 2x8pin, which my testing so far also agrees with (a 3x8pin BIOS tops out at 350w regardless of power limit, a 2x8pin BIOS goes higher). Now I could be wrong here, I'm just saying it seems to be the case based on the result I got.

I saw a lot of rubbish on reddit and some here too about 'oh noes do not flash that custom PCB bios you'll brick your card'. Obviously that's BS. Seems to me the real factors to consider when trying another BIOS are:
1. Need a compatible XUSB version
2. Fan control isn't uniform between all BIOSes; some cards' BIOS causes fan issues on other cards
3. Stick to a 2x8pin BIOS if the card is 2x8pin (which includes the MSI Gaming X Trio), 3x8pin if the card is 3x8pin
4. If display outputs are mismatched, some ports might break

I'll probably give the Strix White OC and Strix XOC BIOSes a run this weekend, too busy with work during the week. The EVGA FTW3 BIOS is easily the best for me so far. Great daily OC stability, and during gaming I'm seeing it pull 340-350w regularly with occasional bumps to 360w+. It still hits the 373w power limit, just nowhere near as often as when it was 330w limited (it was basically on the power limit constantly then). So that suggests there may be a little more headroom, however I think I'm pretty close to the card's max now.

While I'm not thrilled about the MSI turning out to have newer XUSB and thus harder to find bios, and having mediocre factory power limit, on the whole I'm quite happy with the card. Great air cooler (thanks to stupidly big heatsink I assume, the card is obnoxiously massive) and very happy with stable OC. However from this experience, I think next gen I'd be more inclined to get a reference board and just go custom loop water.


----------



## chibi

For the FE owners here: would/have you flashed the 380w BIOS while keeping the stock air cooler? Or do you find it overkill and not recommended with the stock air cooler?


----------



## AndrejB

@kx11
Sorry for bothering, just wanted to see what kind of clocks you're getting with the white Asus?


----------



## JustinThyme

munchkin1 said:


> I'm currently daily OC stable at 2145 core, 8000 mem. Can bench at 2160 core, 8480 mem. Can probably push memory higher stable but haven't tried much as I'm getting awesome performance anyway here (paired with 8700k @ 5.1ghz).
> 
> As for MSI power: I know the 6 pin does actually pull power, but it's not treated like an actual extra power delivery (see Buildzoid's video); it's piggybacked onto the 8pin input. So in terms of choosing a BIOS, the MSI Gaming X Trio can essentially be treated as a 2x8pin, which my testing so far also agrees with (a 3x8pin BIOS tops out at 350w regardless of power limit, a 2x8pin BIOS goes higher). Now I could be wrong here, I'm just saying it seems to be the case based on the result I got.
> 
> I saw a lot of rubbish on reddit and some here too about 'oh noes do not flash that custom PCB bios you'll brick your card'. Obviously that's BS. Seems to me the real factors to consider when trying another BIOS are:
> 1. Need a compatible XUSB version
> 2. Fan control isn't uniform between all BIOSes; some cards' BIOS causes fan issues on other cards
> 3. Stick to a 2x8pin BIOS if the card is 2x8pin (which includes the MSI Gaming X Trio), 3x8pin if the card is 3x8pin
> 4. If display outputs are mismatched, some ports might break
> 
> I'll probably give the Strix White OC and Strix XOC BIOSes a run this weekend, too busy with work during the week. The EVGA FTW3 BIOS is easily the best for me so far. Great daily OC stability, and during gaming I'm seeing it pull 340-350w regularly with occasional bumps to 360w+. It still hits the 373w power limit, just nowhere near as often as when it was 330w limited (it was basically on the power limit constantly then). So that suggests there may be a little more headroom, however I think I'm pretty close to the card's max now.
> 
> While I'm not thrilled about the MSI turning out to have newer XUSB and thus harder to find bios, and having mediocre factory power limit, on the whole I'm quite happy with the card. Great air cooler (thanks to stupidly big heatsink I assume, the card is obnoxiously massive) and very happy with stable OC. However from this experience, I think next gen I'd be more inclined to get a reference board and just go custom loop water.



That's a lot of trouble to go through for a small gain. I'm getting 2150 all day with +800 memory on the stock ASUS 2080 Ti O11G BIOS and zero voltage increase. Tried the XOC and the Matrix BIOS and all I did was add heat; still topped out about the same without hitting the power limit, maybe 2160.


----------



## AndrejB

JustinThyme said:


> That's a lot of trouble to go through for a small gain. I'm getting 2150 all day with +800 memory on the stock ASUS 2080 Ti O11G BIOS and zero voltage increase. Tried the XOC and the Matrix BIOS and all I did was add heat; still topped out about the same without hitting the power limit, maybe 2160.


Is it possible that I got two cards that top out at 2010 MHz? (+130 manual, same thing with the OC scanner)

Just received a new card (RMA'd the previous one because of a fan sensor issue) and got the oddly specific 2010 MHz again...

Aorus Master
9900K
DDR4 4000C17
Strix OC


----------



## kx11

AndrejB said:


> @*kx11*
> Sorry for bothering, just wanted to see what kind of clock are you getting with the white asus?



It goes up to 2135 MHz and comes down from there with heat, etc.


----------



## shiokarai

How's the performance with the Asus XOC BIOS vs the 380w Galax BIOS/other BIOSes with a similar power limit? The Asus XOC BIOS doesn't have memory OC, right? So this would be detrimental to actual card performance, no? Why bother with the XOC BIOS then?


----------



## JustinThyme

AndrejB said:


> Is it possible that I got two cards that top out at 2010 MHz? (+130 manual, same thing with the OC scanner)
> 
> Just received a new card (RMA'd the previous one because of a fan sensor issue) and got the oddly specific 2010 MHz again...
> 
> Aorus Master
> 9900K
> DDR4 4000C17
> Strix OC


Yes, totally possible, and you will always default to the slower card. Silicon lottery at play.


----------



## pattiri

shiokarai said:


> How's the performance with the asus XOC bios vs 380w galax bios/other bioses with similar power limit? asus XOC bios doesn't have memory OC right? So this would be detrimental to the actual card performance, no? Why bother with this XOC bios then?



Asus XOC has memory OC. I'm using my Asus Strix OC with that BIOS, +104 GPU and +800 memory. On air while gaming, the card is stable at 55-57 Celsius and 2025-2040 MHz. (Note: in my experience, +90 and +104 give the same clock speeds; I guess it's because of the 15 MHz steps)


----------



## shiokarai

pattiri said:


> Asus XOC has memory OC. I'm using my Asus Strix OC with that BIOS, +104 GPU and +800 memory. On air while gaming, the card is stable at 55-57 Celsius and 2025-2040 MHz. (Note: in my experience, +90 and +104 give the same clock speeds; I guess it's because of the 15 MHz steps)


You mean the Strix OC BIOS, or the ASUS XOC BIOS with the 1000w limit? I was under the assumption the XOC BIOS doesn't have memory OC? (it's on the first page of this thread)


----------



## pattiri

shiokarai said:


> You mean the Strix OC BIOS, or the ASUS XOC BIOS with the 1000w limit? I was under the assumption the XOC BIOS doesn't have memory OC? (it's on the first page of this thread)



Yes, I'm using the XOC on the first page. It allows overclocking memory, but memory always stays at that clock and never downclocks at idle.


http://prntscr.com/qkvo9n


----------



## shiokarai

pattiri said:


> Yes, I'm using the XOC on the first page. It allows overclocking memory, but memory always stays at that clock and never downclocks at idle.
> 
> 
> http://prntscr.com/qkvo9n


OK, now that's interesting. Are you running it with an ASUS card or another card? FE PCB, custom PCB? Any errors, lost display ports, etc.?


----------



## shiokarai

shiokarai said:


> ok, now that's interesting. you're running it with ASUS card or other card? FE pcb, custom PCB? Any errors, lost display ports etc.?


Okay, tried the ASUS XOC 1000w BIOS on my XC ULTRA, and sadly no improvement: the highest draw when gaming went up only by 10w or so (388 vs ~375 with the FTW3 Ultra BIOS) and stability is the same, can't get more than 2130 on the core stable while gaming. Also no memory downclocking when idle = bad, and I lost the first DP. So back to the FTW3 BIOS for me. Maybe with the second card there will be more room to play.


----------



## JustinThyme

You're pushing it on air no matter what BIOS, unless you dump a 5K BTU AC in there. Once you get to like 45C that's it, no matter the power limit. Even an AIO would help. I remember when I first put my pair of 2080 Tis in, they ran so hot on air (not even OC'd past 1650) that the hot air they dumped in my case made my liquid loop go up more than it does now with them under blocks and in the loop. Had to run with the door off my case as 60C was the norm with fans running full tilt!


----------



## newls1

I know I'm a little late to the game, but just looking for someone to clue me in on some average MEMORY OCs to hopefully achieve using SAMSUNG ICs on my new 2080 Ti. I'm currently at +700 (using Afterburner) but was just seeing what some people's average clocks are at. If it matters any, this 2080 Ti is cooled by a Bitspower FCWB. Appreciate any feedback.


----------



## pattiri

shiokarai said:


> ok, now that's interesting. you're running it with ASUS card or other card? FE pcb, custom PCB? Any errors, lost display ports etc.?


I'm using it with the Asus ROG Strix OC. I'm using only one display port and didn't try the others, so I'm not sure if I lost any ports, but I got no errors.



https://prnt.sc/qkyt3p


Check this: here's my 2080 Ti pulling 572 watts. This was taken from HWiNFO, so it's a real reading.


----------



## NBrock

newls1 said:


> i know im a little late to the game, but just looking for someone to clue me in on what are some average MEMORY oc's to hopefully achieve using SAMSUNG IC's on my new 2080ti? Im currently at +700 (Using afterburner) but was just seeing what are some peoples average clocks at. if it matters any, this 2080ti is cooled by a bitspower fcwb. Appreciate any feedback


Since you are watercooled I believe you should be able to shoot for 1000+

I have samsung memory on mine, it tops out at about +1225 even with my water cooling.


----------



## HeadlessKnight

newls1 said:


> i know im a little late to the game, but just looking for someone to clue me in on what are some average MEMORY oc's to hopefully achieve using SAMSUNG IC's on my new 2080ti? Im currently at +700 (Using afterburner) but was just seeing what are some peoples average clocks at. if it matters any, this 2080ti is cooled by a bitspower fcwb. Appreciate any feedback


My Micron tops out at +600 and Samsung at +1300 on air, gaming stable. The Samsung is probably stable at +1400 too, but I didn't game on it, only benched.


----------



## newls1

Holy damn, that is a huge OC... Sammy seems to be good stuff. Appreciate the feedback.
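Rough arithmetic on what those memory offsets buy, as a sketch: this assumes Afterburner's ~7000 MHz displayed stock memory clock on a 2080 Ti, double-data-rate GDDR6, and the card's 352-bit bus (the function name is mine, not anything from a tool):

```python
def bandwidth_gbps(ab_offset_mhz, base_mhz=7000, bus_bits=352):
    """Approximate 2080 Ti memory bandwidth for an Afterburner offset.

    Afterburner displays ~7000 MHz stock on a 2080 Ti; GDDR6 transfers
    twice per clock, and the 352-bit bus moves 44 bytes per transfer.
    """
    displayed = base_mhz + ab_offset_mhz        # MHz shown in Afterburner
    data_rate = displayed * 2                   # effective MT/s (DDR)
    return data_rate * (bus_bits // 8) / 1000   # GB/s

print(bandwidth_gbps(0))     # stock: 616.0 GB/s
print(bandwidth_gbps(1000))  # +1000: 704.0 GB/s, roughly 14% more bandwidth
```

The stock result matches the card's rated 616 GB/s, which is why a big stable offset shows up so clearly in memory-heavy benches.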


----------



## shiokarai

JustinThyme said:


> Your pushing it on air no matter what BIOS unless you dump a 5K BTU AC in there. Once you get to like 45C that’s it no matter the power limit. Even AIO would help. I remember when I first put my pair of 2080Tis in they ran so hot on air not even OCD past the 1650 that the hot air they dumped in my case made my liquid loop go up more than it does with them now being with blocks and in the loop. Had to run with the door off my case as 60C was the norm with fans running full tilt!



It's on water; even when pushed for benching, temps never exceed 35C (a beefy custom loop in a CaseLabs STH10 + an Aquacomputer block on the card), so it's basically the limit of my silicon, i.e. 2130 on the core, game 24/7 stable (core +160/170, mem +1150; trying +180 = crash/freeze).


----------



## mattxx88

Did a little test tonight. Some pages ago someone mentioned the KFA 380w BIOS; I tried it, but again my card walls at 350/360w (2080 Ti MSI Gaming X Trio).
Today I finally got the HOF OC LAB 2000W BIOS from a friend.
I was really interested in this BIOS because of the curve-mode overvoltage (which the Kingpin BIOS does not allow).

These are the tests. I wanted to keep a fixed voltage of 1.075V to max out the power draw using the KFA 380w BIOS.

NOTE: I am on a custom loop, so temperature is out of the equation.

KFA 380w bios:

 

As you can see above, it cuts down repeatedly between 2130 and 2115 MHz

  

and max power draw is 350w 

 

Now I flashed the HOF BIOS and fixed a voltage of 1.075V via the curve as above; frequency remains fixed at 2145 MHz.

 


What I want to understand now is this:

Why doesn't the MSI Gaming X Trio accept more than 360w with any BIOS I tried (Galax 380w, MSI 400w custom, KFA 380w, Asus Matrix, Lightning Z LN2 380w, EVGA FTW3), except for the Kingpin 2000w and HOF 2000w?
Why doesn't it reach 380w, for example, with the Galax BIOS?

I've been asking myself why for months but can't find a plausible answer:
https://www.overclock.net/forum/27983876-post8032.html
https://www.overclock.net/forum/28167134-post9252.html
https://www.overclock.net/forum/28222886-post9694.html

Now I've reached peace of mind with this HOF BIOS.
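For what it's worth, the ~360w wall described above lines up with the PCIe spec ratings for a card that behaves as a 2x8pin design. A sketch of that spec arithmetic (boards can and routinely do exceed these ratings, so treat it as a rule of thumb, not a hard law):

```python
# PCIe spec power ratings per input, in watts
RATING_W = {"slot": 75, "6pin": 75, "8pin": 150}

def board_power_budget(inputs):
    """Sum the spec-rated wattage of a card's power inputs."""
    return sum(RATING_W[i] for i in inputs)

# A card that behaves as a reference-style 2x8pin (as the Trio apparently does):
print(board_power_budget(["slot", "8pin", "8pin"]))           # 375 W
# A true 3x8pin board such as the Galax HOF:
print(board_power_budget(["slot", "8pin", "8pin", "8pin"]))   # 525 W
```

375 W sits right where the Trio's observed 360-373 W ceiling lands, which is consistent with munchkin1's "treat it as 2x8pin" theory.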


----------



## J7SC

JustinThyme said:


> Your pushing it on air no matter what BIOS unless you dump a 5K BTU AC in there. Once you get to like 45C that’s it no matter the power limit. Even AIO would help. I remember when I first put my pair of 2080Tis in they ran so hot on air not even OCD past the 1650 that the hot air they dumped in my case made my liquid loop go up more than it does with them now being with blocks and in the loop. Had to run with the door off my case as 60C was the norm with fans running full tilt!


 
It's worth repeating (IMO) that good temps trump a special BIOS most of the time, no matter the PL, as temp is one of the constraints in the current NVBoost. This is not to say that the silicon lottery doesn't play a role, but no matter what one is dealt with a 2080 Ti purchase, lowering temps will get better results out of it.

Speaking of the silicon lottery, I mentioned some months back that the two Aorus 2080 Ti Xtreme factory full-water-blocked cards I have are very close in terms of serial numbers ("xyz0001 vs xyz0003"), yet one is 45 MHz faster than the other with the same cooling loop, driver and (stock) BIOS. The better of the two runs at 2190 MHz all day (and a bit higher with ambient below 20C); the other one tops out at 2145 to 2160 MHz, depending on ambient.

As to VRAM type, both my cards have Micron. While Samsung generally seems preferable, Micron-equipped cards can certainly do quite well in their own right (several top 2080 Ti scores at HWBot are Micron), especially with good VRAM cooling... Micron VRAM seems a bit more sensitive to heat compared to Samsung. Per the MSI AB VRAM slider, +1220 to +1240 is the Micron VRAM sweet spot for my (heavily w-cooled) setup...


----------



## 86Jarrod

The Galax HOF 2000w BIOS was uploaded to TechPowerUp the day after Christmas, for anyone interested. BIOS 90.02.0B.40.95.


----------



## eminded1

I've been running +100 on core and +500 on mem with Samsung memory on my EVGA RTX 2080 Ti XC Black Edition card... I did go up to +150 and +750 and it crashed in game, so I turned it back down and haven't touched it since. Temps stay around 74C under any load.


----------



## newls1

eminded1 said:


> I've been running +100 on core and +500 on mem with Samsung memory on my EVGA RTX 2080 Ti XC Black Edition card... I did go up to +150 and +750 and it crashed in game, so I turned it back down and haven't touched it since. Temps stay around 74C under any load.


More than likely it crashed because of the +150 core and a lack of voltage to support it. I doubt the +750 mem caused it, as I've learned on this forum in the past 3 days of owning my 2080 Ti with Sammy mem that it reminds me of the old days of TCCD memory... it just keeps going and going and going. I thought +500 was a lot to OC, then was told 1000+ is easily obtained. So I just applied +850 and gained nearly 100 points in the Time Spy run I did a few mins ago. Drop your core back to +125 and bring mem up to +850 and I'll bet you anything it will be just fine.


----------



## skupples

Yep. You can start at like +850-1000 on mem on most 2080 Tis and not have slowdown errors.

It's been at least +500 since Kepler.


----------



## dangerSK

86Jarrod said:


> The Galax HOF 2000w was uploaded on techpowerup the day after Christmas for anyone interested. Bios 90.02.0B.40.95


Whoever did that is ******; it's not a "safe" BIOS and can damage your card. There is a reason why it was kept private.


----------



## iRSs

dangerSK said:


> Whoever did that is ******, its not "safe" bios and can damage your card. There is a reason why it is kept private.


use it at your own risk!



----------



## Salvo_ARC

MSI Gaming X trio owner here. My 2nd card actually. First one died in 3 months. Just finished running some benchmarks after adding 2 case fans under the gpu and was very impressed with temps.

Timespy score 14,239 combined with a graphics score of 16,236 https://www.3dmark.com/spy/10123720
Port Royal score 10,044 https://www.3dmark.com/pr/203280

Currently the oc is +130 on the core and +1200 on the memory. Max gpu temp during both tests was 55c.


----------



## Tragic

Salvo_ARC said:


> MSI Gaming X trio owner here. My 2nd card actually. First one died in 3 months. Just finished running some benchmarks after adding 2 case fans under the gpu and was very impressed with temps.
> 
> Timespy score 14,239 combined with a graphics score of 16,236 https://www.3dmark.com/spy/10123720
> Port Royal score 10,044 https://www.3dmark.com/pr/203280
> 
> Currently the oc is +130 on the core and +1200 on the memory. Max gpu temp during both tests was 55c.







Same 8700K OC 

Same card but OC is +125 clock and + 1500 mem

Only difference is I use the 135% power limit Vbios
I score the same as you with the 110% PL Vbios
The score with the flashed 135% PL Vbios is 15220


**Important:** In the second half of 2019, many new cards sold come with a different XUSB FW version ID, making them unable to flash almost every BIOS available in the TechPowerUp BIOS collection, so be aware when purchasing a brand new card today that you might not be able to flash a higher power limit BIOS onto it. Example: an MSI Gaming X Trio purchased in Q3 2018 came with a BIOS from September 6th and the 0x70060003 firmware with a build date of 2018-08-29; the same card purchased in late Q3 2019 came with a BIOS from May 28th, firmware 0x70100003 and the date 2018-12-28. The newer card also had a different EEPROM (Winbond instead of ISSI) and simply cannot flash the now almost year-old MSI Gaming X Trio 406W BIOS.
- If yours is Q3 2018 with the correct build dates, then here you go (do it correctly): https://www.overclock.net/forum/27716280-post3683.html Been using it heavily in VR and 2D/3D flat screening for 8 months now.
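The 110%/135% slider talk above is just multiplication on the board's base power. A sketch with a hypothetical 300 W base board power, chosen because it lines up with the 330 W stock and ~406 W flashed limits mentioned in this thread:

```python
def power_limit_watts(base_power_w, slider_pct):
    """Convert a power-limit slider percentage into a wattage cap."""
    return base_power_w * slider_pct / 100

BASE = 300  # hypothetical base board power in watts, not a confirmed spec
print(power_limit_watts(BASE, 110))  # 330.0 W, the stock max slider
print(power_limit_watts(BASE, 135))  # 405.0 W, roughly the flashed 406 W BIOS
```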


----------



## mattxx88

Tragic said:


> Same 8700K OC
> 
> Same card but OC is +125 clock and + 1500 mem
> 
> Only difference is I use the 135% power limit Vbios
> I score the same as you with the 110% PL Vbios
> The score with the flashed 135% PL Vbios is 15220
> 
> 
> ****Important** In the second half of 2019, many new cards sold comes with a different XUSB FW Version ID, making them unable to flash almost every single BIOS available in the TechPowerUp BIOS Collection, so be aware when purchasing a brand new card today that you might not be able to flash a higher power limit BIOS on it. Example: MSI Gaming X Trio purchased in Q3 2018 came with a BIOS from September 6th and has the 0x70060003 firmware with a build date of 2018-08-29, the same card purchased in late Q3 2019 came with a BIOS from May 28th, firmware 0x70100003 and the date 2018-12-28, this newer card also had a different EEPROM (WBond instead of ISSI), and simply cannot flash the now almost year old MSI Gaming X Trio 406W BIOS.****
> *- If yours is Q3 2018 with the correct build dates then here you go (do it correctly) https://www.overclock.net/forum/27716280-post3683.html Been using it heavily in VR and 2 - 3D flat screening for 8 months now.
> *


How do I check the firmware build date? nvflash?


----------



## shiokarai

mattxx88 said:


> Tragic said:
> 
> 
> 
> Same 8700K OC
> 
> Same card but OC is +125 clock and + 1500 mem
> 
> Only difference is I use the 135% power limit Vbios
> I score the same as you with the 110% PL Vbios
> The score with the flashed 135% PL Vbios is 15220
> 
> 
> ****Important** In the second half of 2019, many new cards sold comes with a different XUSB FW Version ID, making them unable to flash almost every single BIOS available in the TechPowerUp BIOS Collection, so be aware when purchasing a brand new card today that you might not be able to flash a higher power limit BIOS on it. Example: MSI Gaming X Trio purchased in Q3 2018 came with a BIOS from September 6th and has the 0x70060003 firmware with a build date of 2018-08-29, the same card purchased in late Q3 2019 came with a BIOS from May 28th, firmware 0x70100003 and the date 2018-12-28, this newer card also had a different EEPROM (WBond instead of ISSI), and simply cannot flash the now almost year old MSI Gaming X Trio 406W BIOS.****
> *- If yours is Q3 2018 with the correct build dates then here you go (do it correctly) https://www.overclock.net/forum/27716280-post3683.html Been using it heavily in VR and 2 - 3D flat screening for 8 months now.
> *
> 
> 
> 
> How do I check the firmware build date? nvflash?

GPU-Z


----------



## Salvo_ARC

Tragic said:


> Same 8700K OC
> 
> Same card but OC is +125 clock and + 1500 mem
> 
> Only difference is I use the 135% power limit Vbios
> I score the same as you with the 110% PL Vbios
> The score with the flashed 135% PL Vbios is 15220
> 
> 
> ****Important** In the second half of 2019, many new cards sold comes with a different XUSB FW Version ID, making them unable to flash almost every single BIOS available in the TechPowerUp BIOS Collection, so be aware when purchasing a brand new card today that you might not be able to flash a higher power limit BIOS on it. Example: MSI Gaming X Trio purchased in Q3 2018 came with a BIOS from September 6th and has the 0x70060003 firmware with a build date of 2018-08-29, the same card purchased in late Q3 2019 came with a BIOS from May 28th, firmware 0x70100003 and the date 2018-12-28, this newer card also had a different EEPROM (WBond instead of ISSI), and simply cannot flash the now almost year old MSI Gaming X Trio 406W BIOS.****
> *- If yours is Q3 2018 with the correct build dates then here you go (do it correctly) https://www.overclock.net/forum/27716280-post3683.html Been using it heavily in VR and 2 - 3D flat screening for 8 months now.
> *


Can't flash this card. According to GPU-Z, the BIOS version 90.02.30.00.65 (which I have) was released 3/28/2019.


----------



## JosiahBradley

Hello! I just got my new system built with 2 x Asus Strix 2080 Ti OC. After some initial testing I used this thread to flash both cards to Asus Matrix 2080 Tis. Just with the power slider / temp and +100 core with no voltage changes I'm running at 2130Mhz on both cards. Netted me 13540 in Timespy Extreme. Pretty happy start as I plan on continuing to overclock the CPU and push the GPUs harder. Thanks for the very handy info! https://www.3dmark.com/spy/10142464


----------



## DirtyScrubz

My Asus Strix 2080 Ti OC is from Nov w/Samsung memory. How high are you guys OC'ing your memory for daily usage?


----------



## shiokarai

DirtyScrubz said:


> My Asus Strix 2080 Ti OC is from Nov w/Samsung memory. How high are you guys OC'ing your memory for daily usage?


As much as possible, preferably 1000+


----------



## JustinThyme

I leave the voltage slider at 0. Plus 800 on VRAM and 2150 all day long on the GPU with minimal heat. I can get +1000 or even +1200 on Samsung memory, but it's a balancing act: more core or more mem. More volts means more heat. I've tried several different BIOS files and ended up going back to stock. I gained maybe 50 MHz on the Matrix and XOC with no power limit; I stay stock, 50 MHz less, and still no power limit. My issue is, has been and always will be heat. Once it passes 45C, thermal throttle gets you, so unless you are on LN2 or a chiller, flashing for a higher power limit is useless. With the stock O11G BIOS I never pass 42C under water. If you are on air... you're not getting anywhere.
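The thermal throttling described here can be sketched as GPU Boost shedding ~15 MHz bins as temperature thresholds are crossed. The threshold temperatures below are illustrative guesses, not published NVIDIA values:

```python
def boost_clock(max_boost_mhz, temp_c, thresholds=(35, 45, 54, 63)):
    """Rough GPU Boost model: shed one 15 MHz bin per temperature
    threshold crossed. Threshold temps are illustrative guesses."""
    bins_dropped = sum(1 for t in thresholds if temp_c >= t)
    return max_boost_mhz - 15 * bins_dropped

print(boost_clock(2160, 30))  # cool card, full boost: 2160
print(boost_clock(2160, 46))  # two thresholds crossed: 2130
```

It's only a model, but it captures why a water-cooled card that stays under ~35C holds its top bin while an air-cooled card at 60C+ sits several bins lower regardless of power limit.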


----------



## shiokarai

JustinThyme said:


> I leave the voltage slider at 0. Plus 800 on VRAM and 2150 all day long on the GPU with minimal heat. I can get +1000 or even +1200 on Samsung memory, but it's a balancing act: more core or more mem. More volts means more heat. I've tried several different BIOS files and ended up going back to stock. I gained maybe 50 MHz on the Matrix and XOC with no power limit; I stay stock, 50 MHz less, and still no power limit. My issue is, has been and always will be heat. Once it passes 45C, thermal throttle gets you, so unless you are on LN2 or a chiller, flashing for a higher power limit is useless. With the stock O11G BIOS I never pass 42C under water. If you are on air... you're not getting anywhere.


The first downclock is at 35C or so, right? (the first 15 MHz step down). I've found that too: more mem OC or more core OC; extreme mem OC lowers my core OC.


----------



## ntuason

DirtyScrubz said:


> My Asus Strix 2080 Ti OC is from Nov w/Samsung memory. How high are you guys OC'ing your memory for daily usage?


My 24/7 settings are: GPU Core - +165 and Memory - +1350 on MSI Afterburner since September 2019 with XOC Bios.


----------



## BleedOutCold

JustinThyme said:


> I leave the voltage slider at 0. Plus 800 on VRAM and 2150 all day long on the GPU with minimal heat.


Serious question, not trying to gotcha anyone: how is the core running at 2150? I can keep 2130, 2145, or 2160 locked, but it seems like it moves in 15 MHz steps. FE card flashed with the 380W BIOS, if that matters.


----------



## J7SC

On 2080 Ti VRAM: going from MSI AB 7070 MHz (stock on these cards) to 8244 MHz with everything else (i.e. PL, GPU MHz) held equal costs about 1% of power consumption per card in a quick Port Royal run. Not much, but when nearing the limit it can make a difference. With Superposition 8K I have seen a bit more than that, but still under 10 watts per card for the boosted VRAM; probably the same as the RGB uses on each card (it has a lot of RGB).


----------



## skupples

BleedOutCold said:


> Serious question, not trying to gotcha anyone: how is the core running at 2150? I can keep 2130, 2145, or 2160 locked, but it seems like it's in 15Mhz steps. FE card flashed with 380W bios if that matters.


Wouldn't surprise me. That's kinda been the SOP for clock bins since Kepler: 12-15 MHz per bump.

idk for sure though.


----------



## Sheyster

BleedOutCold said:


> Serious question, not trying to gotcha anyone: how is the core running at 2150? I can keep 2130, 2145, or 2160 locked, but it seems like it's in 15Mhz steps. FE card flashed with 380W bios if that matters.


It's probably actually running at 2145, since 2145 (143 × 15) is the nearest multiple of 15. You can dial in +whatever in AB, but if the result isn't a whole multiple of 15, it will fall back to the closest one.
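That quantization is easy to sketch; `snap_to_bin` is a hypothetical helper illustrating the rounding, not anything in the driver:

```python
def snap_to_bin(requested_mhz, bin_mhz=15):
    """Snap a requested core clock to the nearest whole 15 MHz bin,
    one plausible reading of how dialed-in offsets get quantized."""
    return bin_mhz * round(requested_mhz / bin_mhz)

print(snap_to_bin(2150))  # 2145 (5 MHz below beats 10 MHz above)
print(snap_to_bin(2158))  # 2160
```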


----------



## JustinThyme

BleedOutCold said:


> Serious question, not trying to gotcha anyone: how is the core running at 2150? I can keep 2130, 2145, or 2160 locked, but it seems like it's in 15Mhz steps. FE card flashed with 380W bios if that matters.


Dunno, that’s just how it rolls. I don’t try to dig into what’s there. Just know I tried higher power limit BIOS and got nowhere but hotter and throttled on thermal.


----------



## Johnny Rook

GIGABYTE GeForce RTX 2080 Ti GAMING OC 11G


----------



## BigMack70

Anyone who has disassembled their card to put it under water - how complicated would reassembling the card in the future (for resale) be? Do you need to purchase new thermal pads for everything if you disassemble/reassemble the card?


----------



## lowrider_05

BigMack70 said:


> Anyone who has disassembled their card to put it under water - how complicated would reassembling the card in the future (for resale) be? Do you need to purchase new thermal pads for everything if you disassemble/reassemble the card?


I reassembled a 2080 Ti for RMA after the VMEM got errors; if you disassemble it carefully enough, the thermal pads should come off in one piece and can be reused.


----------



## BigMack70

lowrider_05 said:


> I reassembled a 2080 Ti for RMA after the VMEM got errors; if you disassemble it carefully enough, the thermal pads should come off in one piece and can be reused.


Awesome; thanks! I was worried maybe they dried out or became brittle or otherwise just generally became a problem. I know new cards are coming out relatively soon, so I was going to postpone putting my G12 on if it meant a huge headache when upgrade time comes.


----------



## Medusa666

I got my hands on a Galax RTX 2080 Ti HOF OC Lab Edition and I noticed that it has a 1830 MHz boost clock stock according to GPU-Z. 

Compared to the other 2080 Ti that I have owned this is significantly higher, and I can't find any official Galax card with that specified boost. 

Is there any other card with higher stock boost than 1830? 

I have not overclocked it yet as I'm waiting for a block from Bitspower 

Thank you.


----------



## newls1

BigMack70 said:


> Anyone who has disassembled their card to put it under water - how complicated would reassembling the card in the future (for resale) be? Do you need to purchase new thermal pads for everything if you disassemble/reassemble the card?


I just re-apply all the thermal pads to the HSF unit (the ones that stuck to the card; most will stay on the HSF), put all the little screws and bits in a zip-lock bag, wrap the HSF unit in plastic wrap, and place it all inside the graphics card box. If anything happens and I need to revert, everything is ready.


----------



## Uns33n

Hey guys, I am about to pick up a used EVGA 2080 Ti FTW3. I have always used AMD cards; this will be my first Nvidia card. He has had it for a year and says it's still under warranty (2 years left). How do I go about transferring the warranty to me? Anything I should be looking for specifically on the card when I inspect it? Should I bother asking if it's Micron or Samsung memory? Thanks


----------



## Wrathlon

For anyone getting the XUSBFW error when trying to flash something like a Gaming X Trio you can use the BIOS from the Galax HOF 10 Year which is a) newer and thus works and b) has a higher 400w limit.

https://www.techpowerup.com/vgabios/215860/galax-rtx2080ti-11264-190726


----------



## bp7178

Uns33n said:


> Hey guys, I am about to pick up an EVGA 2080 Ti FTW3 used. I have always used AMD cards; this will be my first Nvidia card. He has had it for a year and says it's still under warranty (2 years left). How do I go about transferring the warranty to me? Anything I should be looking for specifically on the card when I inspect it? Should I bother asking if it's Micron or Samsung memory? Thanks


If the prior owner registered it, they will have to unregister it on EVGA's site. My 2080 Ti FTW3 Ultra has Samsung memory but IIRC this is just an availability thing, just what they had on hand when making the card. 

You likely won't be able to detect much by visual inspection, just if the card has been disassembled or not. I wouldn't consider that a deal breaker. Ask questions and see how the seller responds. If he or she comes off like a doofus, you may want to reconsider.


----------



## BigMack70

Successfully got my FE card under water using a Kraken G12 bracket to mount a Kraken X62 to the card... I should have done this a year ago. Free 5-10% performance bump and no more annoying fan noise. Temps max out around 45C with 800 RPM fans. I'm bummed I talked myself out of it back then.

I don't have thermal contact on the VRMs/memory, but I went over everything on the PCB that I could with a thermal IR gun and didn't see anything over 80C (most things 50-70C), so I'm calling it good to go.


----------



## sanj

I just overclocked my ROG Strix OC; max 70C temps, but 59% automatic fan speed, which is pretty loud. What's your fan curve?


----------



## ntuason

sanj said:


> I just overclocked my ROG Strix OC; max 70C temps, but 59% automatic fan speed, which is pretty loud. What's your fan curve?


I'm on air too. I use aggressive settings to keep temps down to 60C max though. The O11 Dynamic does a pretty good job sealing away the fan noise.


----------



## kithylin

Uns33n said:


> Hey guys, I am about to pick up an EVGA 2080 Ti FTW3 used. I have always used AMD cards; this will be my first Nvidia card. He has had it for a year and says it's still under warranty (2 years left). How do I go about transferring the warranty to me? Anything I should be looking for specifically on the card when I inspect it? Should I bother asking if it's Micron or Samsung memory? Thanks


Switching warranties on EVGA cards means your new warranty (after you register it) will run from the original manufacture date, not the purchase date, which may leave less than 2 years. Only the original first owner, with the original proof of purchase, gets the full warranty from the original purchase date if registered. They need to unregister it from their EVGA account, and then you can re-register it with your account. Make sure the original EVGA sticker with the serial number on the back of the card is intact, and that there hasn't been any physical modification to the PCB in any way.


----------



## JustinThyme

sanj said:


> I just overclocked my ROG Strix OC; max 70C temps, but 59% automatic fan speed, which is pretty loud. What's your fan curve?


My fan curve is at zero.


----------



## ftln


Wrathlon said:


> For anyone getting the XUSBFW error when trying to flash something like a Gaming X Trio: you can use the BIOS from the Galax HOF 10 Year, which (a) is newer, and thus works, and (b) has a higher 400W limit.
> 
> https://www.techpowerup.com/vgabios/215860/galax-rtx2080ti-11264-190726


Will I get the full 400W power limit on a 2x8-pin card using this BIOS?


----------



## Wrathlon

ftln said:


> Will I get the full 400W power limit on a 2x8-pin card using this BIOS?


I have no idea; I have a 3-connector Gaming X Trio.


----------



## Tonza

Has anyone taken the fans and shroud off a Strix 2080 Ti and slapped 2x 120mm fans on there? If so, what were the results?


----------



## pattiri

sanj said:


> I just overclocked my ROG Strix OC; max 70C temps, but 59% automatic fan speed, which is pretty loud. What's your fan curve?



Here is my fan curve for my Strix OC. It's noisy, but it works.











I'm using the below V/F curve with the Asus Matrix BIOS. My card stays at 55-57 Celsius while gaming at 2070 or 2050 MHz. Earlier I was using the XOC BIOS, and since I had no control over voltage, the card wasn't stable even at 2040 MHz, because it only got 1.043 V at 2040. Now, with this V/F curve, the card gets 1.093 V from 2130 down to 2055, and it's stable.


----------



## Mooncheese

Hi all, I recently picked up a used Gigabyte Aorus Waterforce Xtreme 2080 Ti (WB) and am thinking about returning it because, for one, I paid way too much for it ($1200); two, it no longer has the original warranty; and three, I feel really stupid, seeing as NGreedia may be announcing Ampere in March, no later than June of this year, with the '3080' probably going to be 25% faster than the 2080 Ti in rasterization and 50-100% faster in RT, all at 200W for $800.

If this thing can't do 2150 MHz solid it's going back. 

Given the fact that these cards were notorious for the Micron memory failing (from what I gather, I could be mistaken) I kind of don't even want to mess with it to be honest. 

I feel really stupid, especially since I knew Ampere was due out in March before pulling the trigger. I'm just tired of waiting with my full-block 1080 Ti FE @ 3440x1440; I'm seeing 55 FPS in DX12 with some settings turned down in The Division 2, same with some other demanding games. I got some disability back-pay from the VA (U.S. Army) and pretty much lost all sense and pulled the trigger on this thing. It looks beautiful, and I have a Gigabyte Aorus Gaming 7 mobo with an RGB monoblock that would really complement it, but yeah: no 4-year warranty, a high failure rate with the memory, and Ampere out in 2-5 months with cards 25-50% faster for $800-$1000. I am probably going to send it back.

Thoughts appreciated!


----------



## Tonza

I don't think the 3080 will put out that kind of numbers vs the 2080 Ti. The 3080 Ti, on the other hand, will be something like that, or even more, but that thing will probably cost even more than the 2080 Ti at launch, which is already an unbelievable price for a GPU.


----------



## Shawnb99

Tonza said:


> I don't think the 3080 will put out that kind of numbers vs the 2080 Ti. The 3080 Ti, on the other hand, will be something like that, or even more, but that thing will probably cost even more than the 2080 Ti at launch, which is already an unbelievable price for a GPU.




Plus, no idea if the Ti will launch at the same time as the 3080. Generally the Ti releases about 6 months after the base x80 card.


----------



## Mooncheese

Tonza said:


> I don't think the 3080 will put out that kind of numbers vs the 2080 Ti. The 3080 Ti, on the other hand, will be something like that, or even more, but that thing will probably cost even more than the 2080 Ti at launch, which is already an unbelievable price for a GPU.


To the contrary: solely in rasterization terms, I believe we will see the kind of jump in performance we saw going from Maxwell to Pascal, i.e. 980 Ti to 1080 (25%) and 980 Ti to 1080 Ti (50%+). RT will be 50% faster between the 2080 Ti and 3080, and possibly 100% faster between the 2080 Ti and 3080 Ti (because they placed emphasis on a huge improvement in RT, and 25% is not an improvement worth upgrading for when you're at 50 FPS in Shadow of the Tomb Raider with RT on). There will also be new Ampere-specific features like there are with every new architecture, DLSS 2 or some other "Only on Ampere" thing. NGreed have already stated they are also bringing prices down across the board, which probably means -$100 for the mid-tier cards, the 70 and 80, and presumably -$200 for the "3080 Ti".

What they may do is launch big-die Ampere as the Titan card at launch, like they have historically done, for $1500 or something ridiculous (because they can still point to the Titan V and say "look at how much cheaper it is in comparison") and then launch the same card minus 5% of the SMs next year as the "3080 Ti" for $1k.

Look at the gains we saw halving the node from Maxwell to Pascal (28 to 16nm). We are going to see that all over again. No way are they halving the node for less than a 50% increase in performance across the board solely in rasterization.

RT is going to be twice as fast, mark my words. RT didn't take off because the feature destroyed performance, and NGreed sorely want their pet technology to succeed.
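As an aside, the "halving the node" framing can be sanity-checked from the node names alone. A minimal sketch, keeping in mind that node names are largely marketing labels rather than literal feature sizes, so these are nominal figures only:

```python
# Nominal scaling implied by the 28 nm -> 16 nm transition referenced above.
# Node names are marketing labels, so treat this as a rough back-of-the-envelope
# check, not a real transistor-density figure.
old_nm, new_nm = 28.0, 16.0

linear_shrink = old_nm / new_nm      # 1.75x linear, not quite a true halving
area_scaling = linear_shrink ** 2    # ~3.06x nominal density (area scales as the square)

print(f"linear shrink: {linear_shrink:.2f}x, nominal density: {area_scaling:.2f}x")
```

So 28 nm to 16 nm is a 1.75x linear shrink, which nominally triples transistor density rather than literally halving the node.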


----------



## Mooncheese

Just viewed this after writing my last post:


----------



## motorcyclerider

munchkin1 said:


> For anyone with a newer XUSB firmware struggling to find bios. Here are my findings so far on the limited number of suitable new bios around with higher power limits than my card (which is 330w)
> 
> My card is an MSI 2080ti Gaming X Trio purchased Oct 2019.
> Tested bios:
> 
> 1. Galax HOF 10yr 400w. https://www.techpowerup.com/vgabios/215860/galax-rtx2080ti-11264-190726
> - Appeared to work well
> - Actually hits power limit around 345-350w. Won't go over that and reports as hitting power limit in GPUZ/HWinfo despite also showing limit as 400w
> - I believe this is because the Galax is a 3x8pin, while MSI is a 2x8pin with fake 6pin (can be treated as being 2x8pin)
> - Able to add 30mhz more to core for benching. Not stable enough for daily OC though
> - Major improvement on benching memory OC. MSI bios max 8280. Able to bench at 8420 on Galax bios.
> 
> 2. Gigabyte Gaming OC 360w. https://www.techpowerup.com/vgabios/209314/gigabyte-rtx2080ti-11264-190218
> - Fans bad. RPM report incorrectly and fans never ramp up so card was 25+ deg hotter under load, counter intuitive
> - Afterburner did have some control but even at 100%, fans were very slow
> - Worse performance due to temperature
> - Maybe OK on other cards, just the MSI fans did not work well
> 
> 3. EVGA FTW3 373w. https://www.techpowerup.com/vgabios/207756/evga-rtx2080ti-11264-181029
> - Working OK, fans etc good. RGB appears to be gone (worked fine on the other bioses) but that doesn't concern me too much
> - Since this is a 2x8pin bios, it seems the power limit is working properly. Have seen it just above 360w during gaming.
> - Benching OC is barely improved over HOF bios. Core can't OC higher. Memory can bench at 8480.
> - Daily gaming OC is improved significantly. Without power limit it seems happy to run a lot higher with daily stability as opposed to 'enough stability to pass a bench but will freeze after 10mins gaming'
> 
> Other bios I had lined up to try but didn't get around to due to being happy with the EVGA bios performance:
> - Strix White OC
> - Strix XOC


You are the man! The Galax HOF 10yr 400W is the first ROM to get me over 373 watts. It's giving me around 410 watts on my late-model 2019 card. THANK YOU.


----------



## Uriette

New to Overclock.net as of today; you can add me to the list.

Running a wonderful AORUS GeForce RTX 2080 Ti Xtreme 11G since October-November, if I'm correct (I need to check when I bought it, lol; I'm not at home now). Craziest GPU I've ever had in my PC-enthusiast life. My two previous GPUs were an EVGA GeForce GTX 980 Ti Superclocked+ (in SLI for 5-6 months) and an ASUS ROG R9 290X Matrix Platinum.


----------



## keikei

Uriette said:


> New to Overclock since today, you can add me to the list.
> 
> Running a wonderful AORUS GeForce RTX 2080 TI Xtreme 11G since October-November if I'm correct (I need to check when I bought it lol I'm not at home now). Craziest GPU I've ever had in my PC enthusiast life. My 2 previous GPUs were EVGA GeForce GTX 980 Ti Superclocked+ (in SLI for 5-6 months) and an ASUS ROG R9 290X Matrix Platinum.


Welcome to OCN. Dat indeed is one helluva upgrade. I remember rockin' crossfire 290X's and failing hard at 4K. Now, not so much, but with every game release they seem to push the card harder and harder.


----------



## Uriette

keikei said:


> Welcome to OCN. Dat indeed is one helluva upgrade. I remember rockin' crossfire 290X's and failing hard at 4K. Now, not so much, but with every game release they seem to push the card harder and harder.


Thank you very much!

Yes, that's a very big step up. My first 980 Ti was used for like 4 years or something like that, and it was used very often in heavy games. The second one, which I bought for SLI, I kept for 5-6 months; I sold it because I got no real in-game performance improvement (it was mostly for the lulz, to run SLI once in my life). Still a solid GPU today, but I got the chance to buy an RTX 2080 Ti today by selling a lot of CSGO skins. ^^
The R9 290X Matrix is different. It was pretty powerful (it was the best GPU at the time), but it also ran way too hot and was very loud. So I decided to sell it a year later and went for the 980 Ti SC+ a few months after the official release of the GTX 980 Ti from nVidia.

As for 4K, yes, some games are really, really heavy, and even the highest-end GPU on the market can have issues running the biggest AAA games smoothly. (Example with R2R haha ^^)


----------



## BigMack70

If you're self-employed (and for all intents and purposes, Linus and other folks making a living full time on Youtube are self-employed), work/life balance is hard to get right - even harder than it usually is. There's constant pressure to be in "work mode", or to have things running in your mind about work - even in the background. Unless you are exceptionally good at compartmentalizing your life, everything tends to bleed together. Perspective on life, quality time with family, etc... those things take a hit. Not sure it was worth a 20 minute video, but it's not surprising he's had to wrestle with those things.


----------



## JustinThyme

As of right now, any speculation on the next-gen Nvidia platform is just that: speculation and conjecture. I'm sure something will be pushed out, but it may be something like Volta that does squat for anything but serious GPU workloads and costs $5K apiece. We'll just have to wait and see until late summer for the smaller cards, and fall into winter for the Ti, if it happens.


----------



## JustinThyme

Mooncheese said:


> Just viewed this after writing my last post:
> 
> https://youtu.be/DAGLpRN_RV8


Even the producer said this video was 100% conjecture.


----------



## kithylin

JustinThyme said:


> As of right now, any speculation on the next-gen Nvidia platform is just that: speculation and conjecture. I'm sure something will be pushed out, but it may be something like Volta that does squat for anything but serious GPU workloads and costs $5K apiece. We'll just have to wait and see until late summer for the smaller cards, and fall into winter for the Ti, if it happens.


Well we do know it will be named Ampere. That's all we know though.


----------



## J7SC

I look forward to hearing more REAL information about Ampere if and when it becomes available (ditto for AMD's 'big' GPU and Intel's Xe). But for now, I am more interested in mastering checkerboard rendering on my 2x 2080 Tis (which, btw, have been great fun for more than a year now). I haven't had time to dig deeper into the checkerboard option (other than enabling it for some tests), but with my setup driving a 4K / 75Hz (DisplayPort) monitor, I really have not run into any bottlenecks.


----------



## EvilPerm

Add me to the list please 


Quick question... I'm new to overclocking. I have clocked my Gigabyte RTX 2080 Ti Gaming OC to 2145 core and 2069 memory @ 1.05 V (MSI Afterburner curve). Temps are around 45 deg and all seems stable after a few hours of testing. Is this any good, and OK for long gaming sessions (8 hr+)?


According to GPU-Z it stays at 1.05 V and 2130/45 on the core for the whole benchmark.


edit - it does dip below 1.05 V, so it's not set to constantly run at 1.05 (when idle, etc.). Excuse a new overclocker not knowing the exact terms :/





I can run at 1.00 volts for 2100 core and same oc on memory and again stable.


using the https://www.techpowerup.com/vgabios/209314/gigabyte-rtx2080ti-11264-190218 BIOS that came installed on the card



Sorry in advance if these are stupid questions etc...


----------



## newls1

EvilPerm said:


> Add me to the list please
> 
> 
> Quick question... I'm new to overclocking. I have clocked my Gigabyte RTX 2080 Ti Gaming OC to 2145 core and 2069 memory @ 1.05 V (MSI Afterburner curve). Temps are around 45 deg and all seems stable after a few hours of testing. Is this any good, and OK for long gaming sessions (8 hr+)?
> 
> 
> According to GPU-Z it stays at 1.05 V and 2130/45 on the core for the whole benchmark.
> 
> 
> edit - it does dip below 1.05 V, so it's not set to constantly run at 1.05 (when idle, etc.). Excuse a new overclocker not knowing the exact terms :/
> 
> 
> 
> 
> 
> I can run at 1.00 volts for 2100 core and same oc on memory and again stable.
> 
> 
> using https://www.techpowerup.com/vgabios/209314/gigabyte-rtx2080ti-11264-190218 bios that was installed on card
> 
> 
> 
> Sorry in advance if these are stupid questions etc...


Guessing with a 45C temp you are watercooled? Those speeds are just fine, and 1.05 V isn't gonna hurt a thing. We have the same exact card, and I'm just using +130 core and +1000 mem in Afterburner. I'm completely content with this OC, and my temps don't go above 33/34C after playing pretty much any game. Unless we experiment with modded BIOSes to increase the power limit, we are pretty much stuck, but I will NOT BIOS-mod this card. She is running way too good and has plenty of power for anything I'll ever need out of it.


----------



## DirtyScrubz

Mooncheese said:


> To the contrary: solely in *rasterization terms, I believe we will see the kind of jump in performance *we saw going from Maxwell to Pascal, i.e. 980 Ti to 1080 (25%) and* 980 Ti to 1080 Ti (50%+)*. RT will be 50% faster between the 2080 Ti and 3080, and possibly* 100% faster between the 2080 Ti and 3080 Ti* (because they placed emphasis on a huge improvement in RT, and 25% is not an improvement worth upgrading for when you're at 50 FPS in Shadow of the Tomb Raider with RT on). There will also be new Ampere-specific features like there are with every new architecture, DLSS 2 or some other "Only on Ampere" thing. NGreed have already stated they are also *bringing prices down across the board, which probably means -$100 for the mid-tier cards, the 70 and 80, and presumably -$200 for the "3080 Ti". *
> 
> What they may do is launch big-die Ampere as the Titan card at launch, like they have historically done, for $1500 or something ridiculous (because they can still point to the Titan V and say "look at how much cheaper it is in comparison") and then launch the same card minus 5% of the SMs next year as the "3080 Ti" for $1k.
> 
> Look at the gains we saw halving the node from Maxwell to Pascal (28 to 16nm). We are going to see that all over again. No way are they halving the node for less than a 50% increase in performance across the board solely in rasterization.
> 
> RT is going to be twice as fast, mark my words. RT didn't take off because the feature destroyed performance, and NGreed sorely want their pet technology to succeed.



LOL prepare to be disappointed..sorely.


----------



## kithylin

Mooncheese said:


> RT is going to be twice as fast, mark my words. RT didn't take off because the feature destroyed performance and NGreed sorely want their pet technology to succeed.


Just so you know, it's already been documented that the "twice as fast" claim that Nvidia are advertising for Ampere is only going to be in relation to the RT Cores / AI Cores being twice as fast vs Turing's RT / AI Cores. The actual rasterization speed will probably only be +30% at best and that's being super optimistic.


----------



## DirtyScrubz

kithylin said:


> Just so you know, it's already been documented that the "twice as fast" claim that Nvidia are advertising for Ampere is only going to be in relation to the RT Cores / AI Cores being twice as fast vs Turing's RT / AI Cores. The actual rasterization speed will probably only be +30% at best and that's being super optimistic.



Yeah he's setting himself up for disappointment w/those lofty expectations.


----------



## BigMack70

kithylin said:


> The actual rasterization speed will probably only be +30% at best and that's being super optimistic.


+30% would be - by FAR - the worst architectural performance improvement in the past two decades for Nvidia. The average is around 60%. Low end tends to be around 40% - and that typically only happens when a new architecture does not come with a new process node. Last time we got a new process node and new architecture, we got +80% performance (Pascal over Maxwell).

Only way the jump will be that bad is if they are essentially only focusing on ray tracing performance, with minimal improvements to anything else. I expect more of a balance... 50% ish rasterization gains along with major RT improvements.


----------



## Mooncheese

BigMack70 said:


> +30% would be - by FAR - the worst architectural performance improvement in the past two decades for Nvidia. The average is around 60%. Low end tends to be around 40% - and that typically only happens when a new architecture does not come with a new process node. Last time we got a new process node and new architecture, we got +80% performance (Pascal over Maxwell).
> 
> Only way the jump will be that bad is if they are essentially only focusing on ray tracing performance, with minimal improvements to anything else. I expect more of a balance... 50% ish rasterization gains along with major RT improvements.


Thank you; so much denial in this thread. I returned that 2080 Ti, by the way. No way am I paying $1300 after taxes for a 30% bump over the 1080 Ti.

Here's what happened the last time the node size dropped 41%: 

GTX 980, 50% faster than GTX 680, GTX 980 Ti, 50% faster than 780 Ti

GTX 1080, 50% faster than GTX 980, GTX 1080 Ti, 60% faster than 980 Ti 

We will likely see more than 50% gain from one card to the next in Ray Tracing. 

We cannot look to the performance difference between Pascal and Turing for an idea as to what we can expect between Turing and Ampere (node size only dropped 25% and Turing had 30% of the physical die allocated to RT). Needless to say, Turing was anomalous against the generational leaps in performance going back to Fermi. 

The Taipei investment meeting leak, where insiders stated that Ampere will be 50% faster and/or 50% more efficient, is not an exaggeration. It's fully within the realm of likelihood given the lithography change, increase in frequency, and architectural improvements.

I'm predicting: 

RTX 3080 @ 220W, 50% faster in rasterization, 70% faster in Ray Tracing vs RTX 2080. MSRP $700 ($800 for FE at launch) 
RTX 3070 @ 180W, 50% faster in rasterization, 70% faster in Ray Tracing vs RTX 2070. As fast as RTX 2080 Ti at launch and faster as drivers mature (see: GTX 970 vs 780 Ti, GTX 1070 vs 980 Ti). MSRP: ~$400 ($500 for FE at launch). 

They may release "Titan A" at launch for $1500 (they can still state "It's $1000 cheaper than Titan V!"). We will probably see 3080 Ti early-to-middle of 2021 for $1000. It will be 50% faster than 2080 Ti, probably do RT 70-100% faster.


----------



## BigMack70

Mooncheese said:


> Thank-you, so much denial in this thread. I returned that 2080 Ti by the way, no way am I paying $1300 after taxes for a 30% bump up and over 1080 Ti.
> 
> Here's what happened the last time the node size dropped 41%:
> 
> GTX 980, 50% faster than GTX 680, GTX 980 Ti, 50% faster than 780 Ti
> 
> GTX 1080, 50% faster than GTX 980, GTX 1080 Ti, 60% faster than 980 Ti
> 
> We will likely see more than 50% gain from one card to the next in Ray Tracing.
> 
> We cannot look to the performance difference between Pascal and Turing for an idea as to what we can expect between Turing and Ampere (node size only dropped 25% and Turing had 30% of the physical die allocated to RT). Needless to say, Turing was anomalous against the generational leaps in performance going back to Fermi.
> 
> The Taipei investment meeting leak, where insiders stated that Ampere will be 50% faster and/or 50% more efficient, is not an exaggeration. It's fully within the realm of likelihood given the lithography change, increase in frequency, and architectural improvements.
> 
> I'm predicting:
> 
> RTX 3080 @ 220W, 50% faster in rasterization, 70% faster in Ray Tracing vs RTX 2080. MSRP $700 ($800 for FE at launch)
> RTX 3070 @ 180W, 50% faster in rasterization, 70% faster in Ray Tracing vs RTX 2070. As fast as RTX 2080 Ti at launch and faster as drivers mature (see: GTX 970 vs 780 Ti, GTX 1070 vs 980 Ti). MSRP: ~$400 ($500 for FE at launch).
> 
> They may release "Titan A" at launch for $1500 (they can still state "It's $1000 cheaper than Titan V!"). We will probably see 3080 Ti early-to-middle of 2021 for $1000. It will be 50% faster than 2080 Ti, probably do RT 70-100% faster.


Just so everyone can be on the same page, here are the "big chip" performance improvements over time, from architecture to architecture. The numbers are of course rough estimates, since exact figures depend on the particular test suite of games, but the TPU test suite is broad enough that I think this works fairly well for estimating relative performance over time.

G80 --> Tesla (8800 GTX -- GTX 285) = *+59% performance*
Tesla --> Fermi (GTX 285 -- GTX 580) = *+67% performance*
Fermi --> Kepler (GTX 580 -- GTX 780 Ti) = *+104% performance*
Kepler --> Maxwell (GTX 780 Ti -- Titan X Maxwell) = *+45% performance*
Maxwell --> Pascal (Titan XM -- GTX 1080 Ti) = *+76% performance* _(NOTE: this is based on Titan XM = 980 Ti + 5% performance)_
Pascal --> Turing (GTX 1080 Ti -- RTX 2080 Ti) = *+38% performance*

I recognize that it is sometimes useful to compare actual graphics cards to other cards, in which case the jumps are often smaller (e.g. GTX 580 -- GTX 680). But I think generally speaking it is most helpful to compare architectures, rather than cards, because Nvidia can and does play all kinds of shenanigans with how they cut down their chips and bin everything out into different SKUs. It might be 18 months before we see the highest end Ampere SKU, but it's very reasonable to expect that when it does arrive it will be significantly faster than the 2080 Ti. +60% performance is a safe prediction. +40% pessimistic. +80% optimistic.
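The "average is around 60%" claim from earlier in the thread can be checked against the list above with a few lines of Python. This is just a sketch over the post's own rough estimates; the geometric mean is the fairer average for compounding ratios:

```python
# Rough check of the generation-over-generation gains listed above
# (the forum post's own estimates, not official figures).
gains = {
    "G80 -> Tesla": 0.59,
    "Tesla -> Fermi": 0.67,
    "Fermi -> Kepler": 1.04,
    "Kepler -> Maxwell": 0.45,
    "Maxwell -> Pascal": 0.76,
    "Pascal -> Turing": 0.38,
}

# Simple average of the per-generation gains.
arithmetic_mean = sum(gains.values()) / len(gains)

# Geometric mean answers "what single per-generation multiplier would
# compound to the same total gain?" -- the fairer average for ratios.
product = 1.0
for g in gains.values():
    product *= 1.0 + g
geometric_mean = product ** (1 / len(gains)) - 1

print(f"arithmetic mean: +{arithmetic_mean:.0%}")
print(f"geometric mean:  +{geometric_mean:.0%}")
```

Both averages land in the low-to-mid 60% range, consistent with the ~60% figure cited above, with Turing (+38%) well below that average.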


----------



## Mooncheese

BigMack70 said:


> Just so everyone can be on the same page, here's the "big chip" performance improvements over time from architecture to architecture, just for everyone's reference...
> 
> G80 --> Tesla (8800 GTX -- GTX 285) = *+59% performance*
> Tesla --> Fermi (GTX 285 -- GTX 580) = *+67% performance*
> Fermi --> Kepler (GTX 580 -- GTX 780 Ti) = *+104% performance*
> Kepler --> Maxwell (GTX 780 Ti -- Titan X Maxwell) = *+45% performance*
> Maxwell --> Pascal (Titan XM -- GTX 1080 Ti) = *+76% performance* _(NOTE: this is based on Titan XM = 980 Ti + 5% performance)_
> Pascal --> Turing (GTX 1080 Ti -- RTX 2080 Ti) = *+38% performance*
> 
> I recognize that it is sometimes useful to compare actual graphics cards to other cards, in which case the jumps are often smaller (e.g. GTX 580 -- GTX 680). But I think generally speaking it is most helpful to compare architectures, rather than cards, because Nvidia can and does play all kinds of shenanigans with how they cut down their chips and bin everything out into different SKUs. It might be 18 months before we see the highest end Ampere SKU, but it's very reasonable to expect that when it does arrive it will be significantly faster than the 2080 Ti. +60% performance is a safe prediction. +40% pessimistic. +80% optimistic.


That's a lot of work, thanks for this. However, I take issue with the 38% performance difference cited between the 1080 Ti and 2080 Ti. 38% is on the high end, seen only in a minority of titles; other titles show only 23%. I think the actual average is closer to 30%.

Hardware Unboxed did a good job showing this (27:06 mark)


----------



## BigMack70

Mooncheese said:


> However, I take issue with the 38% performance difference cited between 1080 Ti and 2080 Ti. 38% is on the high end, only seen in a minority of titles


It's just dependent on the exact suite of games tested. TPU's review was on the higher end (I think the range on reviews was +25% to +45%, with most at 30-35%, IIRC).

But I think it's a good enough estimate. You can probably manipulate any of those numbers up or down 10% by changing what review sites you go back and look at, but TPU is the easiest site to track performance over this length of time.

Their newest reviews with more modern titles have it as +47% faster. So if we assume they're still overestimating by a bit, it's probably reasonable that a re-test from others would have the 2080 Ti as 35-40% faster than the 1080 Ti.


----------



## Mooncheese

BigMack70 said:


> It's just dependent on the exact suite of games tested. TPU's review was on the higher end (I think the range on reviews was +25% to +45%, with most at 30-35%, IIRC).
> 
> But I think it's a good enough estimate. You can probably manipulate any of those numbers up or down 10% by changing what review sites you go back and look at, but TPU is the easiest site to track performance over this length of time.
> 
> Their newest reviews with more modern titles have it as +47% faster. So if we assume they're still overestimating by a bit, it's probably reasonable that a re-test from others would have the 2080 Ti as 35-40% faster than the 1080 Ti.


Fair enough. 

For me, a major determinant in my decision to return the 2080 Ti was the fact that, even with a 16+3 phase VRM and a 366W TDP (Gigabyte Aorus Waterforce Xtreme, an actual water block, not an AIO), the performance difference I could expect to see was only 35% at most, and only in a few specific titles, i.e. The Witcher 3, Shadow of the Tomb Raider, The Division 2 (DX12), etc. And that's only if the card could run stable at 2150 MHz, as my 1080 Ti FE has a healthy 15% overclock (2012-2025 MHz, with an undervolt of 1.025 V mitigating the 300W TDP wattage starvation, under a full water block with load temps of ~45C).

Compared to 2080 Ti FE with a 300W TDP the difference is only 15-25%. 

Example: 

In SOTTR benchmark, my 1080 Ti does 92 FPS @ 2560x1440 @ 2000 MHz with the same in-game graphics settings used here: 

https://www.overclock3d.net/reviews...2080ti_aorus_extreme_waterforce_11g_review/16

A 300W TDP 2080 Ti FE does 113 FPS. 

I mean that's what, 25%? 

You need 380W and to have won the silicon lottery @ 2150 MHz to see 120 FPS with a 2080 Ti and then the difference is only 30%. 

Shadow of the Tomb Raider is one of the titles that the 2080 Ti sees a large performance difference. 

30%. 
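Those percentages are easy to sanity-check. A quick sketch using the FPS figures above (92 FPS for the overclocked 1080 Ti, 113 FPS for a 300W 2080 Ti FE, 120 FPS for a golden-sample 2150 MHz card):

```python
# Relative speedup between two FPS readings, using the numbers quoted above.
def speedup(baseline_fps: float, new_fps: float) -> float:
    """Return the fractional gain of new_fps over baseline_fps."""
    return new_fps / baseline_fps - 1.0

fe_gain = speedup(92, 113)   # 2080 Ti FE (300W) vs 1080 Ti @ 2000 MHz
oc_gain = speedup(92, 120)   # 2150 MHz 2080 Ti vs the same 1080 Ti

print(f"FE gain: +{fe_gain:.1%}, OC gain: +{oc_gain:.1%}")
```

That works out to about +23% for the FE and +30% for the 2150 MHz card, matching the rough figures above.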

That's not worth $1300 after taxes. I will simply wait 2-5 months for +50% solely in rasterization, probably well over 50% in RT, all while taking 100W out of my loop (220W vs 300W+). RTX 3080: 25-30% faster and ~40% lower power consumption vs the Gigabyte Waterforce Xtreme 2080 Ti (220W vs 366W), while costing 50% less and possibly having 2x the video memory (if they offer 20GB at launch)!

Waiting is a no-brainer. 

2-5 months, who can't wait for that? https://www.reddit.com/r/nvidia/comments/et3yes/2080ti_upgrade_finally/

Also, if you have a 2080 Ti, the time to sell is now. Once Ampere is released 2080 Ti value will drop down to what a 3070 will cost and the only reason you may still sell at that price is because said potential buyer didn't F5 fast enough the morning they became available through NGreedia.


----------



## kithylin

I think you both seem to be forgetting that in general rasterization the 2080 Ti was about +40% over the 1080 Ti at best, and in some titles only +30%, depending on the in-game settings and resolution.

The days of these huge +50% and +60% gains every generation are long gone, because it's getting harder and harder to shrink the technology with each passing year. At some point in the future (when companies reach 1nm) they won't be able to rely on shrinking the silicon at all anymore, and they'll have to rely on other methods (probably chiplet designs) to get performance improvements. We should all expect dwindling performance gains with every future generation as the norm.

New families are also likely to be further and further apart; starting soon, 2-4 years between generations won't be uncommon. It takes a good bit of time and a lot of money to shrink to the next node.

This is why Nvidia added RT cores / AI cores to their video cards: they aren't able to push rasterization speeds as high with each new family as they could previously, but now that they have these RT cores, they can accelerate those to nearly double with each new card and technically claim "DOUBLE PERFORMANCE VS PREVIOUS CARDS!" in marketing without getting slapped with false-advertising suits. It's kind of technically true, but the fine print is that only the RT cores are doubled. Honestly, I would be surprised to see even +30% in rasterization, and that's very hopeful. Pushing RTX and ray-tracing performance is going to be the future aim for Ngreedia; they aren't really focused on rasterization performance as a selling point anymore.


----------



## sblantipodi

Cyberpunk 2077 has been postponed to push the 3080, imho. xD

The 2080 Ti is at the end of its cycle.


----------



## sblantipodi

The 2080 and 2080 Ti were launched at the same time;
I don't like the fact that the 3080 will be released before the 3080 Ti.


----------



## keikei

sblantipodi said:


> The 2080 and 2080 Ti were launched at the same time;
> I don't like the fact that the 3080 will be released before the 3080 Ti.



I expect Ampere to have a launch identical to Turing's, seeing as AMD is finally targeting the high end.


----------



## BigMack70

kithylin said:


> I think you both seem to be forgetting that in general rasterization the 2080 Ti was only about +40% over the 1080 Ti at best, and in some titles only +30%, depending on the in-game settings and resolution.


I'm not sure how you can read my post, where I literally link a ton of performance summary graphs, and conclude that I am forgetting how the card performed, when I just showed how it performed.



> The days of these huge +50% and +60% gains every generation are long gone and over because it's getting harder and harder to shrink the technology with each passing year. *At some point in the future...*


The sky is falling!! Didn't you hear? I definitely heard the sky is falling. 

However, in other news, nothing you say here has anything to do with Ampere. But I'm sure the sky will fall on us one day.



> Also they are likely going to be further and further apart between new updates. 2-4 years between the next family starting soon won't be uncommon.


I have not said a word about the pace of improvement over time, so I'm not sure why you are talking about this. Although if we *are* going to talk about it, it's interesting to note that you still appear to be very misinformed: release timings have been very inconsistent across the past 15 years, with little in the way of discernible patterns. Anything between 12 months and 3 years has not been uncommon for a new architecture refresh. It may eventually slow down as that sky falls down on us all, but right now, 2-3 years between architectures is not uncommon... and hasn't been for a decade.



> This is why nvidia added RT Cores / AI cores to their video cards: They aren't able to push rasterization speeds as high with each new family as they could previously


Good thing we have an Nvidia engineer on these forums to tell us what they can or can't do, and to explain the hidden reasoning behind their design decisions. I'm particularly glad to know that it was impossible for them to use all that die space given to RT cores for CUDA cores. I thought they might have just shipped bare metal in the place of those RT cores if they didn't invest in RT.



> Honestly I would be surprised to see +30% in rasterization at all and that's very hopeful


Yup... that sky is falling fast. Better get a good umbrella.




*Good. Grief.*


----------



## J7SC

I don't quite get why folks want to turn this 2080 Ti-specific thread into a hate-fest for the 2080 Ti (there are separate threads in the news section for the 3070 and 3080 if you want to indulge in rumors).

Also, the 2080 Ti won't be 'at the end of its cycle' when next gen comes out. Game developers and producers, for example, like to make big revenues, which is why they try to cover a fairly wide range of GPU performance envelopes. Where the 2080 Ti is currently in 1st or 2nd spot in a bench, it might slip to fourth or fifth (depending also on AMD, and possibly Intel), but that's not the same as being useless.

I've been in the GPU upgrade game for a long time (I currently have over 30 functional GPUs), but I also skip a generation from time to time, e.g. the 10-series. I just hope that NVidia doesn't have the same kind of release debacle for next gen as they had with the RTX 2000 series. Once the new top-line (i.e. Ti) Ampere GPUs are out, I'll wait some months before even considering a purchase, to see what the verdict is after the hype dies down and real-world results (and user experiences) are in.


----------



## kithylin

J7SC said:


> I don't quite get why folks want to turn this 2080 Ti-specific thread into a hate-fest for the 2080 Ti (there are separate threads in the news section for the 3070 and 3080 if you want to indulge in rumors).
> 
> Also, the 2080 Ti won't be 'at the end of its cycle' when next gen comes out. Game developers and producers, for example, like to make big revenues, which is why they try to cover a fairly wide range of GPU performance envelopes. Where the 2080 Ti is currently in 1st or 2nd spot in a bench, it might slip to fourth or fifth (depending also on AMD, and possibly Intel), but that's not the same as being useless.
> 
> I've been in the GPU upgrade game for a long time (I currently have over 30 functional GPUs), but I also skip a generation from time to time, e.g. the 10-series. I just hope that NVidia doesn't have the same kind of release debacle for next gen as they had with the RTX 2000 series. Once the new top-line (i.e. Ti) Ampere GPUs are out, I'll wait some months before even considering a purchase, to see what the verdict is after the hype dies down and real-world results (and user experiences) are in.


No one is specifically hating on the 2080 Ti; it's a great card. But its time as the "current card" is nearly over, and it's scheduled to be replaced in a few months. I'll stop talking about anything else, though... I don't want to get flagged as off topic.


----------



## Mooncheese

J7SC said:


> I don't quite get why folks want to turn this 2080 Ti-specific thread into a hate-fest for the 2080 Ti (there are separate threads in the news section for the 3070 and 3080 if you want to indulge in rumors).
> 
> Also, the 2080 Ti won't be 'at the end of its cycle' when next gen comes out. Game developers and producers, for example, like to make big revenues, which is why they try to cover a fairly wide range of GPU performance envelopes. Where the 2080 Ti is currently in 1st or 2nd spot in a bench, it might slip to fourth or fifth (depending also on AMD, and possibly Intel), but that's not the same as being useless.
> 
> I've been in the GPU upgrade game for a long time (I currently have over 30 functional GPUs), but I also skip a generation from time to time, e.g. the 10-series. I just hope that NVidia doesn't have the same kind of release debacle for next gen as they had with the RTX 2000 series. Once the new top-line (i.e. Ti) Ampere GPUs are out, I'll wait some months before even considering a purchase, to see what the verdict is after the hype dies down and real-world results (and user experiences) are in.


If you went from a 980 Ti to a 2080 Ti then it definitely was an upgrade; it just sucks for those of us with a 1080 Ti. What a disappointment. At least I will have gotten three years out of my card. Totally got my money's worth.

The 2080 Ti may slip down to 4th or 5th (Titan A >/= 3080 Ti > 3080 > Big Navi > 3070 >/= 2080 Ti), but it won't be irrelevant in rasterization. In ray tracing, however, it will remain the comparison piece every outlet uses to show how huge the gains were between the 3070, 3080, 3080 Ti etc. and the first-gen RT 2080 Ti. I hate to say it, but it WILL become irrelevant in RT. 45 FPS @ 4K with RT + DLSS turned on (SOTTR), when all RT is doing is shadows? Yeah, that's borderline unplayable. More demanding console ports are going to be even more RT intensive, and the 2080 Ti will just not be up to snuff. 2080 or 2070? You're not going to turn RT on, period.

Early adopter's fee and all that. I kind of get it, though: they wanted to introduce ray tracing, and the best way they knew how was to allocate 30% of the die to RT cores. Hence how underwhelming Turing was coming from Pascal in rasterization terms. It basically failed at everything; it can't really do RT very well, and unless you were ready to part with $1300 after taxes for a ~30% bump over the 1080 Ti... I mean, yeah.

But why didn't they just make a dedicated ray tracing card? I mean, how many of us have 3 or 4 PCI-E slots unpopulated? "Want ray tracing? Here's an RT card with the ENTIRE die dedicated to RT and DLSS for $500," or whatever.

It was a painful birthing process, but I believe we will be back on track to 50%+ jumps in performance between generations, tier for tier, at least with Ampere and probably with whatever comes after it (assuming they go with AMD's chiplet approach).


----------



## Mooncheese

kithylin said:


> I think you both seem to be forgetting that in general rasterization the 2080 Ti was only about +40% over the 1080 Ti at best, and in some titles only +30%, depending on the in-game settings and resolution.
> 
> The days of huge +50% and +60% gains every generation are long gone, because it's getting harder and harder to shrink the technology with each passing year. At some point in the future (when companies reach 1nm) they won't be able to rely on shrinking the silicon at all anymore and will have to get performance improvements by other methods (probably chiplet designs). We should all expect dwindling performance gains with every future generation as the norm. New families are also likely to be spaced further and further apart; gaps of 2-4 years between generations won't be uncommon, because it takes a good bit of time and a lot of money to shrink to the next node. This is why Nvidia added RT cores and AI cores to their video cards: they can't push rasterization speeds as high with each new family as they could previously, but they can nearly double RT throughput with each new card and technically claim "DOUBLE PERFORMANCE VS PREVIOUS CARDS!" in marketing without getting hit with false-advertising suits. It's technically true; the fine print is that only the RT cores doubled. Honestly I would be surprised to see even +30% in rasterization, and that's hopeful. Pushing RTX and ray tracing performance is the future aim for Ngreedia; rasterization performance isn't really a selling point for them anymore.


Chiplets. And 40% is pushing it; aside from a handful of newer games where NGreedia can "optimize the driver for the new architecture" (a euphemism properly translated as: gimping the old architecture), the difference is nearer 20%, maybe 30-35% in select titles:

What needs to be pointed out in all of these "stock clock" comparisons is that the 1080 Ti runs about 100 MHz slower than the 2080 Ti it's compared against (because NGreedia overclocked the 2080 Ti FE by 100 MHz out of the box). When you run both at the same frequency, the difference is that much smaller. There's only a 12% difference (92 vs 103 FPS) between my 1080 Ti @ 2000 MHz and a 2080 Ti @ stock clocks in SOTTR, and only a 23% difference (92 vs 113 FPS) with the 2080 Ti overclocked to the same frequency:

https://www.overclock3d.net/reviews...2080ti_aorus_extreme_waterforce_11g_review/16

Both the 1080 Ti FE and 2080 Ti FE can do about 2000-2050 MHz, with AIB variants of both cards able to do about another 100 MHz.

I'd say 25-30% difference is fair. 

40%+, keep dreaming.


----------



## DirtyScrubz

BigMack70 said:


> Just so everyone can be on the same page, here's the "big chip" performance improvements over time from architecture to architecture, just for everyone's reference. The numbers are of course rough estimates, as exact figures are determined by the particular test suite of games, but the TPU test suite is broad enough that I think this works fairly well for estimating relative performance over time.
> 
> G80 --> Tesla (8800 GTX -- GTX 285) = *+59% performance*
> Tesla --> Fermi (GTX 285 -- GTX 580) = *+67% performance*
> Fermi --> Kepler (GTX 580 -- GTX 780 Ti) = *+104% performance*
> Kepler --> Maxwell (GTX 780 Ti -- Titan X Maxwell) = *+45% performance*
> Maxwell --> Pascal (Titan XM -- GTX 1080 Ti) = *+76% performance* _(NOTE: this is based on Titan XM = 980 Ti + 5% performance)_
> Pascal --> Turing (GTX 1080 Ti -- RTX 2080 Ti) = *+38% performance*
> 
> I recognize that it is sometimes useful to compare actual graphics cards to other cards, in which case the jumps are often smaller (e.g. GTX 580 -- GTX 680). But I think generally speaking it is most helpful to compare architectures, rather than cards, because Nvidia can and does play all kinds of shenanigans with how they cut down their chips and bin everything out into different SKUs. It might be 18 months before we see the highest end Ampere SKU, but it's very reasonable to expect that when it does arrive it will be significantly faster than the 2080 Ti. +60% performance is a safe prediction. +40% pessimistic. +80% optimistic.



12nm -> 7nm won't be a huge jump from Turing to Ampere, because RT and Tensor cores now sit alongside the raster hardware, so you can't extrapolate from previous node jumps that yielded massive gains. Just look at how huge the 2080 Ti chip is: even if they shrank it to 7nm, you'd have to add lots of transistors to push rasterization up 50%+, and you'd end up with a power-hungry, gigantic chip. Each successive node yields smaller gains these days; realistic expectations peg Ampere at anywhere from 25-30% faster in rasterization, and maybe 50% in RT if they really improve it. I wouldn't count on anything more than that.


For example, looking at AMD's move from 14nm to 7nm with Vega 64 to Radeon VII, we only saw a 24% uplift in rasterization performance. The Navi 5700 XT (10.3B transistors) is more efficient than either of those and ended up only 6% slower than the Radeon VII (13.2B transistors), but the gains still weren't massive. People expecting 50-100% uplifts in raster performance are in for a rude awakening with Ampere, IMO. Personally I'd love for NVIDIA to wave a magic wand and pull it off, since I'd end up buying one, but I don't see it happening.


----------



## Mooncheese

DirtyScrubz said:


> 12nm->7nm won't be a huge jump for Turing to Ampere because of RT cores + Tensor now in addition to raster...


Because Turing doesn't have those.


----------



## moogleslam

Tried this on reddit, but you guys might have more answers:

I can't seem to get my card to do what I see others do in screenshots, video, etc.

EVGA 2080 Ti Black (11G-P4-2281-KR)
i9-9900KF @ 5.1 GHz
Motherboard: ASRock Z390 Phantom Gaming-ITX/ac
Memory: Corsair Vengeance LPX 32 GB (2 x 16 GB) DDR4-3000 (CMK32GX4M2D3000C16)
PSU: Corsair Professional Gold 850 W 80+ Gold (CMPSU-850AX)

In Precision X1, I moved my power target to the maximum available of 112%, with the GPU temp target linked to it (88 degrees). I added 100 to core clock and 100 to voltage, added 500 to memory, and set fans to 100%.

Then I ran the Superposition benchmark and got 7635. Does this sound low? The benchmark shows temps never going above 54 degrees but utilization at 99% most of the time. It correctly shows the 500 MHz memory overclock, but the GPU clock never moves from 1545 MHz, and in Precision X1 the log shows total power never going above 74%.

Is this making sense, or am I overlooking something or is something just not working right?

Thanks!

EDIT: According to CPUID HWMonitor, Voltage min is 0.762 and max is 0.819. That right?

EDIT2: Running latest NVIDIA drivers and used DDU before installing.

EDIT3: Set to "prefer max performance" in NVIDIA Control Panel

SOLVED!!!!! OMG, I figured it out! There was a BIOS setting called "PCIE1 Link Speed" set to Auto. I changed it to Gen3 and reran the benchmark, this time with an actual 150 MHz overclock, and got a score of 9001!!!! Utilization was 56% at most, and temps still never went over 60 degrees! So happy! Also horrified that I've been running my PC like this for 7 months!!!!!
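For anyone else hit by this: here's a back-of-the-envelope sketch of what each PCIe generation gives you at x16, from the published per-lane line rates (Gen1/Gen2 use 8b/10b encoding, Gen3 uses 128b/130b). A link that auto-negotiates below Gen3 loses a big chunk of host-to-GPU bandwidth, which is one way a card ends up throttled despite low temps:

```python
# Approximate one-direction PCIe bandwidth from line rate and encoding overhead.
# Figures are the published per-lane rates; real-world throughput is a bit lower.
GENS = {
    1: (2.5, 8 / 10),     # 2.5 GT/s, 8b/10b encoding
    2: (5.0, 8 / 10),     # 5.0 GT/s, 8b/10b encoding
    3: (8.0, 128 / 130),  # 8.0 GT/s, 128b/130b encoding
}

def bandwidth_gbs(gen, lanes):
    """Approximate one-direction bandwidth in GB/s for a PCIe link."""
    rate_gts, efficiency = GENS[gen]
    return rate_gts * efficiency * lanes / 8  # Gbit/s per lane -> GB/s total

for gen in GENS:
    print(f"Gen{gen} x16 ~ {bandwidth_gbs(gen, 16):.2f} GB/s")
```

A link stuck at Gen1 x16 (~4 GB/s) has roughly a quarter of Gen3's ~15.75 GB/s, so forcing the slot to Gen3 plausibly explains the benchmark jump.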


----------



## BigMack70

Mooncheese said:


> Because Turing doesn't have those.


LOL... Thanks for this. I needed a laugh. 

I mean... Come on guys, I know there might be plenty of reasons to disagree with my logic here (for example, that past precedent never necessarily predicts future outcomes), but some of you are posting straight-up uninformed nonsense that betrays that you're either not reading or not thinking before you type, or legitimately don't know what you're talking about.

In an owner's thread for a $1000+ GPU. I don't want to point out that some people have more money than sense... But it's getting close.


----------



## kithylin

Mooncheese said:


> Because Turing doesn't have those.


What? The Turing series totally has Tensor cores, at least the RTX cards do. That was one of the biggest selling points of the Turing series after all.


----------



## Mooncheese

kithylin said:


> What? The Turing series totally has Tensor cores, at least the RTX cards do. That was one of the biggest selling points of the Turing series after all.


I suppose I should have ended that with "/s"; at least BigMack70 got it.

Read my comment in context.


----------



## Mooncheese

BigMack70 said:


> LOL... Thanks for this. I needed a laugh.
> 
> I mean... Come on guys, I know there might be plenty of reasons for someone to disagree with my logic here (for example, that past precedent never necessarily predicts future outcomes), but some of you guys are just posting straight up uninformed nonsense that betrays you are either not reading or not thinking before you type, or you legitimately don't know what you are talking about.
> 
> In an owner's thread for a $1000+ GPU. I don't want to point out that some people have more money than sense... But it's getting close.


This is the 2080 Ti thread after all. 

More money than sense sounds about right.

It just sucks that stupid people making stupid decisions positively reinforced NGreedia's new prices. They effectively doubled prices once you acknowledge that the entire stack was renamed one card higher: the RTX "2070" (no SLI, a 60-class SKU, only as fast as the outgoing 80 card rather than the 80 Ti like every previous incoming 70 card) was a glorified $600 60-class card at launch; the "2080" a glorified $900 70-class card; and the "2080 Ti" a $1300-after-taxes 80-class card, some 30% faster than the 80 Ti before it, which is fully the purview of a new 80 card.

$1200 gets you 80 class performance from 2017 onward because people have more money than sense. 

Market economics is interesting: if you think something is overpriced, DON'T BUY IT! Guess what happens if enough people do that? ..................Stock remains high and prices come down! Last time I checked, it costs NGreedia around $400 all-in (R&D, logistics, labor, etc.) to make a 2080 Ti. They could sell them at $700 and still clear nearly $300 a card! The notion that the card is this expensive because of high manufacturing costs is absolute bollocks.

Thank god the pressure is on NGreedia something fierce in 2020, with a resurgent AMD, Intel breaking into the GPU scene, next-gen consoles, and NGreedia's need to restore goodwill among its consumer base.

Wait until everyone here realizes how ripped off they got, and how everyone sitting tight with a 1080 Ti telling them "DON'T BUY" was right, when they see the incoming 70 card run as fast as a 2080 Ti for $500 and STILL do RT 25%+ faster @ 180W.

Maybe then people will sit up and listen, although I'm not so sure. 

Thankfully the pressure means we will probably not see another $600 60 card or $900 70 card (I suppose they will now actually be 70 and 80 cards compared to Turing), and the 80 Ti card will probably come down to $1000, which will seem like a bargain in light of Turing's price gouging.

I'm looking forward to upgrading to the 3080, I will never give NGreedia more than $700-800 for a GPU. 

It's just such a ridiculous value proposition given how quickly they become obsolete.

I've been getting every 80 Ti card going back to Kepler, but no more; I'm stepping down to the 80 card from here on out. Coming from a 1080 Ti, it's still a 50% jump to the 3080, and I am excited for ray tracing.

If I upgrade to a 3080 Ti, then I'd need to upgrade to a 4080 Ti two-ish years later to see that 50%+ performance increase again.

$700-800 vs $1000-$1200+ every two years. 

I can put the difference towards a new chipset, VR, a drone, new dumbbells, you name it. $400 every two years is a lot of money so I hereby announce that I am no longer a Ti enthusiast.

Looking in the rearview mirror, I think Turing will be remembered with mixed emotions. On the one hand, it's where ray tracing as a concept became a reality. On the other, everyone who bought a Turing card coming from Pascal will be reminded of how much they got ripped off, especially given the low number of titles that actually had ray tracing, how unimpressive the effect was most of the time (only shadows in Shadow of the Tomb Raider), and how heavy the performance hit was.


----------



## JustinThyme

My experience going from 1080 Ti SLI to 2080 Ti SLI (both ASUS OC editions) was a 30% gain on air, where it would run the factory OC and a little more, and better than 35% under water with a loop that stays under 32C max. The new NVLink makes a difference, I'm sure; 3 GB/s to 100 GB/s is quite a jump for SLI. And before anyone goes there: not all titles support SLI, but that's laziness on the devs' part. Most of the titles I play support it.


----------



## DirtyScrubz

Mooncheese said:


> Because Turing doesn't have those.



The point being, these aren't the simple raster chips of the early 2000s that scaled with Moore's law. You are looking at increasingly diminishing returns with each successive generation; that's why I gave the AMD example, whose 14nm-to-7nm move is a bigger jump than 12nm to 7nm. The current 2080 Ti is 754 mm². TSMC's 12nm node is ~33.8 MTr/mm² and 7nm roughly doubles that at ~67 MTr/mm², so if you took the 2080 Ti 1:1 and shrank it to 7nm using their high-power library, you'd need roughly a 350-370 mm² chip to match the 2080 Ti and maybe 400 mm² to get something faster (not counting architectural improvements). Will NVIDIA really want to push out giant chips at 7nm, especially with the increased costs vs 12nm?


They could push out another gigantic chip and charge $1500-1800 a pop for a halo card, I suppose, but I think they'd rather start with smaller chips to make more money. Big Navi isn't expected to be more than 15-20% faster than the 2080 Ti from the leaks I've seen, so NVIDIA could match that at 7nm with a 400 mm² chip. So I don't think we'll see a consumer-grade chip from them with a huge uplift (e.g. 50-100% raster performance) over the 2080 Ti, at least not initially, due to costs.
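The die-size argument above can be sketched with quick arithmetic. This is only a rough illustration using the density figures quoted in the post; the 0.75 derating factor is my own assumption (real designs never hit peak density because SRAM, I/O, and analog blocks scale worse than logic):

```python
# Rough die-size estimate for a node shrink from quoted transistor densities.
TRANSISTORS_MTR = 18_600   # RTX 2080 Ti (TU102), millions of transistors
DENSITY_12NM = 33.8        # MTr/mm^2, TSMC 12nm (figure quoted above)
DENSITY_7NM = 67.0         # MTr/mm^2, TSMC 7nm (figure quoted above)

def die_area_mm2(mtr, peak_density, derate=0.75):
    """Estimated die area when only a fraction of peak density is achievable."""
    return mtr / (peak_density * derate)

print(f"12nm: ~{die_area_mm2(TRANSISTORS_MTR, DENSITY_12NM):.0f} mm^2")
print(f" 7nm: ~{die_area_mm2(TRANSISTORS_MTR, DENSITY_7NM):.0f} mm^2")
```

The 12nm estimate (~734 mm²) lands close to TU102's actual 754 mm², and the 7nm figure (~370 mm²) matches the 350-370 mm² ballpark above, so the quoted densities plus a derating factor are at least self-consistent.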


----------



## DirtyScrubz

Mooncheese said:


> 40%+, keep dreaming.



It can hit 40%+, just depends on what game, API and resolution you're looking at:


This gets close enough to 40%:


I could keep going but I think you get the point.


----------



## Kaltenbrunner

holy crap, these cards are expensive new and used


----------



## DirtyScrubz

Kaltenbrunner said:


> holy crap, these cards are expensive new and used



Depends where you look. I picked up my Strix 11G OC for $800 new off this guy on CL.


----------



## kithylin

Kaltenbrunner said:


> holy crap, these cards are expensive new and used


And the brand-new cards are coming out literally within the next 10 months or less. So either wait for those, OR... once they come out, perhaps the 2080 Tis will drop in price.


----------



## sultanofswing

So I have a newer-BIOS EVGA XC Ultra 2080 Ti. On TechPowerUp I found a 520W BIOS for the Kingpin Hybrid that also uses the newer BIOS. I have successfully flashed the card 3 times, but no matter what I do the card will not go into 3D clock mode; GPU-Z says it's being PWR limited.

I assume there is just some incompatibility.


----------



## BigMack70

DirtyScrubz said:


> So I don't think we'll see a consumer grade chip from them that gives a huge uplift (e.g. 50-100% raster performance) over 2080 Ti, *at least not initially* due to costs.


Time will tell, but this is also my expectation as regards timing. I'm personally expecting their big chip next spring or summer. Would love to be wrong, though.

Remember, though: for this to be possible, they need to beat the Turing big chip with their midrange Ampere chip, paving the way for a substantial performance uplift (well above 30%) when the big chip finally drops.


----------



## Mooncheese

DirtyScrubz said:


> It can hit 40%+, just depends on what game, API and resolution you're looking at:
> 
> 
> https://images.tweaktown.com/conten...enchmarked-dx12-testing-20-graphics-cards.png
> https://tpucdn.com/review/control-benchmark-test-performance-nvidia-rtx/images/1440.png
> https://tpucdn.com/review/call-of-d...nce-analysis/images/performance-3840-2160.png
> https://tpucdn.com/review/call-of-d...nce-analysis/images/performance-2560-1440.png
> 
> 
> This gets close enough to 40%:
> https://www.guru3d.com/index.php?ct=articles&action=file&id=49430
> I could keep going but I think you get the point.


Ah yes, more comparisons of the factory-overclocked 2080 and 2080 Ti FE against the previous-gen 1080 Ti FE that ran 100 MHz slower (but can run at the same frequency).

Let's take apart your examples of "40%" one by one, shall we?



"MSI 2080 Ti" is an AIB card with greater power delivery and cooling; to do this fairly, we'd need to include a 350-400W AIB 1080 Ti that runs 15C cooler with another 50-100W on tap. Since all we have here is "1080 Ti FE", that entry is out of the equation, so we move to the next entry down the list, "2080 Ti FE" at 69 FPS.

Everyone knows the 1080 Ti FE is as fast as the 2080 FE when both cards run at the same frequency, so for comparison's sake we will use the 2080 FE entry from here on and compare it to the 2080 Ti FE, not any AIB entries.

Hitman 2

50 vs 69 FPS = 38%

https://www.calculatorsoup.com/calculators/algebra/percent-change-calculator.php


Control 

55 vs 69 FPS = 25.4%


COD: Modern Warfare 

62.5 vs 77.9 = 24%


The Division 2 

49 vs 64 = 30%
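For anyone wanting to check these numbers, the percent-change formula behind the linked calculator is just:

```python
def pct_faster(base_fps, new_fps):
    """Percent increase of new_fps over base_fps."""
    return (new_fps - base_fps) / base_fps * 100

# e.g. Control at the FPS figures quoted above:
print(f"{pct_faster(55, 69):.1f}%")  # ~25.5%
```

Plug in any pair from the charts to see where a comparison really lands before calling it "40%".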


----------



## Mooncheese

kithylin said:


> And the brand-new cards are coming out literally within the next 10 months or less. So either wait for those, OR... once they come out, perhaps the 2080 Tis will drop in price.


2 months. Potentially. No later than 5.


----------



## Mooncheese

Kaltenbrunner said:


> holy crap these cards are expense new and used


Wait 2-5 months, and either get 2080 Ti rasterization performance @ 180W from the 3070 (which will probably do RT 25%+ faster) for $500, or hell, buy a used 2080 Ti then for $500.

Two months. 

Everyone still buying 2080 Ti at full price or $900 used is going to be both shocked and upset in 2-5 months time.


----------



## Mooncheese

DirtyScrubz said:


> The point being, these aren't the simple raster chips of the early 2000s that scaled with Moore's law. You are looking at increasingly diminishing returns with each successive generation; that's why I gave the AMD example, whose 14nm-to-7nm move is a bigger jump than 12nm to 7nm. The current 2080 Ti is 754 mm². TSMC's 12nm node is ~33.8 MTr/mm² and 7nm roughly doubles that at ~67 MTr/mm², so if you took the 2080 Ti 1:1 and shrank it to 7nm using their high-power library, you'd need roughly a 350-370 mm² chip to match the 2080 Ti and maybe 400 mm² to get something faster (not counting architectural improvements). Will NVIDIA really want to push out giant chips at 7nm, especially with the increased costs vs 12nm?
> 
> They could push out another gigantic chip and charge $1500-1800 a pop for a halo card, I suppose, but I think they'd rather start with smaller chips to make more money. Big Navi isn't expected to be more than 15-20% faster than the 2080 Ti from the leaks I've seen, so NVIDIA could match that at 7nm with a 400 mm² chip. So I don't think we'll see a consumer-grade chip from them with a huge uplift (e.g. 50-100% raster performance) over the 2080 Ti, at least not initially, due to costs.


That's what naysayers have been saying going back to Fermi: "No way can they make this any faster; no way is the 970 going to be as fast as the 780 Ti with 50% less TDP; no way is the 1070 going to be as fast as the 980 Ti with 50% less TDP."

It's happening. The Taipei Times produced a fairly reliable leak from an insider indicating exactly that.


----------



## Mooncheese

The Division 2 

https://www.overclock3d.net/reviews...2080ti_aorus_extreme_waterforce_11g_review/17

Since there is no 1080 Ti here, we'll use my bench, but first we need some math to translate it to 2560x1440.

My card does 70 FPS @ 3440x1440, and 3440x1440 has about 34% more pixels than 2560x1440.

The Aorus Waterforce 2080 here does 96.7 FPS, about 38% over 70 FPS, reasonably close to the pixel-count difference.

My personal bench @ 2000 MHz, same settings used by OC3D: https://imgur.com/a/6Qhofxl

So around 95 FPS @ 2560x1440 is probably accurate (I can do a bench @ 2560x1440 later if you like):

95 vs 132 FPS = 38% difference.
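The resolution translation above is just linear scaling by pixel count, a rough heuristic that assumes the game is fully GPU-bound (the helper below is my own sketch, not anything from the review):

```python
def scale_fps(fps, from_res, to_res):
    """Scale an FPS result between resolutions by raw pixel count.

    Assumes a fully GPU-bound workload; CPU-bound games won't scale this way.
    """
    return fps * (from_res[0] * from_res[1]) / (to_res[0] * to_res[1])

est = scale_fps(70, (3440, 1440), (2560, 1440))
print(f"~{est:.0f} FPS @ 2560x1440")  # ~94 FPS
```

That lands within a frame of the ~95 FPS estimate used above.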

The Gigabyte 2080 Ti Waterforce Xtreme here does 132 FPS @ 2560x1440, but this isn't an FE; it has robust power delivery and a 366W TDP, and at 132 FPS it's overclocked to 2150 MHz. It sits right under the Kingpin card in terms of performance, and I do mean right under. Overclocked, it's actually 17% faster than a 2080 Ti FE at factory clocks, and 7% faster than an overclocked 2080 Ti FE in SOTTR.

https://www.overclock3d.net/reviews...2080ti_aorus_extreme_waterforce_11g_review/16

For the sake of fairness, considering I don't have a 350-400W 1080 Ti but only a 1080 Ti FE with the factory 300W vBIOS, we should be comparing against the overclocked 2080 Ti FE figure. To do this, we can just take off 7% (the difference between the overclocked Aorus Gigabyte Waterforce Xtreme @ 366W / 2150 MHz and an overclocked 2080 Ti FE):

38% - 7% = 31%

The Division 2, along with SOTTR, is one of the handful of titles with more than a 22-25% performance difference over the 1080 Ti. Having just gotten into The Division 2, it was a leading factor in my impulsive decision to upgrade to this very same GPU (Aorus Waterforce Xtreme 2080 Ti), which I have, fortunately, come to my senses and returned (the card arrived with a bent backplate and PCB and was scratched, among other issues, which was grounds for return as "item not as described").

But yeah, I was looking at a near-40% performance difference going from a 1080 Ti FE to basically the fastest 2080 Ti money can buy (assuming it could do 2150 MHz stable).

If we compare, say, a 1080 Ti Kingpin or 1080 Ti Aorus Waterforce Xtreme to this card, the difference isn't 40%, it's 30%.

This is the problem with nearly all of the benchmark comparisons out there: AIB cards with more TDP and cooling get thrown in, and uninformed people in forums try (and fail) to point to them as a metric when comparing against a different, non-AIB card.

Regarding the aforementioned Shadow of the Tomb Raider bench, my 1080 Ti did 92 FPS with the exact settings used by OC3D, putting it basically where "2080 FE OC" is.

Which is the point of this post, as a follow-up to my previous one: outside of a handful of games that lean on FP32 throughput (e.g. Wolfenstein: The New Order), the RTX 2080 isn't actually faster than the 1080 Ti in rasterization when both cards run at the same frequency (the Turing FE came pre-overclocked by 100 MHz).

I'm glad I returned the card. It had no warranty to speak of, and the bent PCB and backplate indicated it had been disassembled and reassembled with the screws around the die overtightened, bending the PCB and backplate; it was probably among the slew of 2080 Tis that suffered from Micron memory failure. If all they did during refurbishing was replace the Micron modules with the same Micron modules, then I could probably expect failure in short order, given the conditions remain constant.

If the 3080 is what the leaks indicate, I'm looking at something 50% faster than the 2080, or 25% faster than the Gigabyte Aorus Waterforce Xtreme @ 366W once said 3080 is similarly overclocked, and at 220W, possibly with 20GB of video memory, probably doing RT 50% faster than the 2080 Ti, with new Ampere-exclusive features, etc., for $800!

Did I mention that I paid $1180 for said used Aorus Waterforce Xtreme 2080 Ti? 

Yeah, not too proud of that one, but I do have a Gigabyte mobo that would have matched exactly, and I was enticed by having one of the fastest 2080 Tis; this card did go for over $1500 new (since discontinued, probably due to the Micron memory failures). 

You can find a bunch of these from the same seller on eBay; I'm nearly certain they are all refurbished (listed as "pre-owned").


----------



## keikei

Kaltenbrunner said:


> holy crap these cards are expensive new and used



Yeah, you're looking to spend about $1000, give or take a few hundred. It's the top dawg atm. We should be seeing this performance for less soon enough with Ampere, though.


----------



## DirtyScrubz

Mooncheese said:


> That's what naysayers have been saying going back to Fermi: "No way can they make this any faster, no way is the 970 going to be as fast as the 780 Ti with 50% less TDP, no way is the 1070 going to be as fast as the 980 Ti with 50% less TDP."
> 
> It's happening. The Taipei Times produced a fairly reliable leak from an insider indicating exactly as much.
> 
> https://youtu.be/886lbuKB3HI



Did you bother reading the Taipei Times source? They just mentioned it in passing without any source whatsoever, nor did they even bother attempting to name one. WCCFTech is a joke; you should never link it. And yes, you overpaid for that Waterforce card; it's totally not worth that much used. Like I said, I got my Strix OC 11G for $800 new.


----------



## Mooncheese

DirtyScrubz said:


> Did you bother reading the Taipei times source? They just mentioned it in passing w/out any source whatsoever nor did they even bother attempting to name one. WCCFTech is a joke, you should never link that. And yes you overpaid for that Waterforce card, totally not worth that much used. Like I said, I got my Strix OC 11G for $800 new.


Waterforce Xtreme > Strix OC 11G. 

Also, add $170 for a water block for the Strix OC. You may have purchased it for $800 (when, how?) but they are going for $1299 retail ($1229 on Amazon, "on sale"). 

The Waterforce Xtreme weighs 3.5 lbs, 3 lbs of which is the copper block, and has a 16+3 phase VRM and a higher TDP. 

Go ahead and run those benchmarks; I bet you will be at minimum 7% slower than the Waterforce Xtreme, and that isn't even the full waterblock version tested there but the AIO version. 

https://www.overclock3d.net/reviews...2080ti_aorus_extreme_waterforce_11g_review/16

Waterforce Xtreme (waterblock) was over $1500 new.

Edit: 

You don't even need to do that, Strix overclocked is already in the comparison. 

ROG Strix does 106 FPS in SOTTR with no overclock

vs

Aorus Waterforce Extreme @ 116 FPS, no overclock. And that's the AIO version, not even the waterblock one; there is a difference. With the AIO you have a 240mm rad, while with a waterblock you have far more copper in the block and unlimited rad surface-area potential (slim 420 and 360 PE here). The difference is about 10°C: the AIO variant still gets into the mid-50s, while properly cooled waterblock variants can stay as cool as 40-45°C at 366W+.

I can guarantee you that the waterblock variant with 1 kW or more of rad capacity is even faster.

https://www.overclock3d.net/reviews...2080ti_aorus_extreme_waterforce_11g_review/16

Also, looks way better. 






You're in denial, dude. I so wish there was a RemindMe! bot here; we shall find out in 2-5 months. 

There is so much pressure on NGreedia to perform, and we are getting leaks from multiple different sources pointing to a 50% uplift in rasterization alone over Turing. I mean, they are dropping the node from 12nm to 7nm; look at the performance difference the last time they dropped the node: 41%. 

Get ready to eat your hat because I will be back here to rub it in your face.


----------



## Mooncheese




----------



## DirtyScrubz

Mooncheese said:


> Waterforce Xtreme > Strix OC 11G.
> 
> Also, add $170 for a water block for Strix OC. You may have purchased it for $800 (when, how?) but they are going for $1299 retail ($1229 on amazon, "on sale").
> 
> Waterforce Extreme weighs 3.5 lbs, 3 lbs of which is copper block, has 16+3 phase VRM, higher TDP.
> 
> Go ahead and run these benchmarks, I bet you will be at minimum 7% slower than Waterforce Extreme and this isn't even the full water-block version tested here but the AIO version.
> 
> https://www.overclock3d.net/reviews...2080ti_aorus_extreme_waterforce_11g_review/16
> 
> Waterforce Xtreme (waterblock) was over $1500 new.
> 
> Edit:
> 
> You don't even need to do that, Strix overclocked is already in the comparison.
> 
> ROG Strix does 106 FPS in SOTTR with no overclock
> 
> vs
> 
> Aorus Waterforce Extreme @ 116 FPS no overclock (AIO, not even waterblock, there is a difference, as with AIO you have a 240mm rad, with waterblock you have way more copper in the block and unlimited rad surface area potential, slim 420 and 360 PE here, the difference is like 10C as AIO variant still gets in the mid 50's and properly cooled waterblock variants can be as cool as 40-45C full @ 366w+)
> 
> I can guarantee you that the water block variant with 1kw or more rad surface area is even faster.
> 
> https://www.overclock3d.net/reviews...2080ti_aorus_extreme_waterforce_11g_review/16
> 
> Also, looks way better.
> 
> https://youtu.be/E9BuTfffbsY
> 
> Youre in denial dude, I so wish there was RemindMe! bot here, we shall find out in 2-5 months.
> 
> There is so much pressure on NGreedia to perform and we are getting leaks from multiple different sources pointing to 50% uplift in rasterization alone over Turing. I mean they are dropping the node from 12 to 7nm, look at the performance difference the last time they dropped the node 41%.
> 
> Get ready to eat your hat because I will be back here to rub it in your face.





I think the Waterforce is ugly as are most AIO based cards so looks are subjective, not that it matters as my Strix has an EK Vector block on it anyhow. So your benchmark comparisons mean nothing to me since I can flash whatever bios I want on this thing (not that it even needs it due to voltage limits). I've provided logical reasons for why I don't think you'll see Big Ampere this year and you've provided...youtube videos.


----------



## sultanofswing

It looks like the Strix XOC BIOS works on my newer 2019 card that I posted about in the original post. Shame the memory clocks do not go into 2D mode, and not having the voltage/frequency curve also kinda sucks.


----------



## Mooncheese

DirtyScrubz said:


> I think the Waterforce is ugly as are most AIO based cards so looks are subjective, not that it matters as my Strix has an EK Vector block on it anyhow. So your benchmark comparisons mean nothing to me since I can flash whatever bios I want on this thing (not that it even needs it due to voltage limits). I've provided logical reasons for why I don't think you'll see Big Ampere this year and you've provided...youtube videos.


I stated that I had the waterblock variant, what, 8 times? Learn to read slowly and correctly before spouting off rubbish. The entire point of the vid below my last post was to show the card in question; you can see right from the vid snapshot that it's not the AIO variant. You're either blind, illiterate, or dumb, or all three, but this is the "2080 Ti" forum after all. Just like the Titan forum from yesteryear, it's a forum comprised of nearly illiterate, stupid people with too much money, driving up the prices for the 99% of us who either can't afford a $1200 GPU purchase every other year or think that's a waste of money.

We can share benchmarks in two to three months when I put my block on my 3080. Can't wait to see a 220W card annihilate your 380W XOC-BIOS card. I really can't wait to see the RT difference.


----------



## Mooncheese

https://twitter.com/CorgiKitty/status/1218542322306056192

"48x128 and 60x128 FMA CUDA cores

U sure this is correct?"


----------



## kithylin

Mooncheese said:


> https://twitter.com/CorgiKitty/status/1218542322306056192
> 
> "48x128 and 60x128 FMA CUDA cores
> 
> U sure this is correct?"


It's probably not. Since no specifications have been released by Nvidia, everything right now is pure speculation. That user could have created that in a few minutes in Photoshop. No one knows anything for sure about the new cards at this point in time.


----------



## Vipeax

nvflash64_patched_5.590.0.exe

https://drive.google.com/open?id=1r_rROoTMOcg90YWUyCQgHV5WooopQEMP
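For anyone new to flashing, a typical workflow with a patched nvflash like the one above looks roughly like this (the `.rom` filenames are placeholders; flash at your own risk, since a BIOS for a mismatched board can brick the card):

```shell
nvflash64 --list                   # confirm the card is detected
nvflash64 --save stock_backup.rom  # ALWAYS save the current BIOS first
nvflash64 -6 target_bios.rom       # flash; -6 overrides the subsystem-ID mismatch check
```

The `-6` override is what lets a cross-vendor BIOS (e.g. the Kingpin or Galax ROMs discussed in this thread) go onto a different board; the patched build additionally removes signature checks that stock nvflash enforces.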


----------



## DirtyScrubz

Mooncheese said:


> DirtyScrubz said:
> 
> 
> 
> I think the Waterforce is ugly as are most AIO based cards so looks are subjective, not that it matters as my Strix has an EK Vector block on it anyhow. So your benchmark comparisons mean nothing to me since I can flash whatever bios I want on this thing (not that it even needs it due to voltage limits). I've provided logical reasons for why I don't think you'll see Big Ampere this year and you've provided...youtube videos.
> 
> 
> 
> I stated that I had the waterblock variant, I don't know, 8 times? Learn to read slowly and correctly before spouting off rubbish. (The entire point of the vid below my last post was to show the card in question, I mean you can see right from the vid snapshot that it's not the AIO variant, youre either blind, illiterate and or dumb or all three, but this is the "2080 Ti" forum after all, just like the Titan forum from yesteryear, a forum comprised of nearly illiterate stupid people with too much money driving up the prices for the 99% of us who either can't afford a $1200 GPU purchase every other year or think that's a waste of money.
> 
> We can share benchmarks in two-three months when I put my block on my 3080. Can't wait to see a 220W card annihilate your 380W XOC Bios card. I really can't wait to see the RT difference.

So you bought a card with a pre-installed WB that looks ugly, were dumb enough to pay over $1300 used for it, and then cried that it cost too much--well no **** genius. Go mow some more lawns, kid; maybe you'll be able to afford the 3080 Ti in a few years and won't have to come on these forums spouting trash to justify not being able to afford one.


----------



## acoustic

Imagine spending all this time ranting about the 2080 Ti when you don't own one. I've enjoyed great performance for a while now; $1200 on release means I've spent $2.46 (1200 / 487) per day for hours upon hours of top-notch performance.

Please go to the 1080 Ti thread where you belong instead of trashing this thread for no reason. Obviously, buying a 2080 Ti for $1000 right now would be silly, but then again, you don't really see too many people doing that unless they game at 4K, which I understand, as a 2080 Ti is really the only card capable of a playable experience at that resolution.

Was the card overpriced? Yup. Welcome to a market with ZERO competition. Any company with a brain would have done what NV did in their shoes.


----------



## BigMack70

acoustic said:


> $1200 on release means I spent $2.46 (1200 / 487) per day for hours upon hours of top notch performance.


It's not even that bad for anyone who resold their 1080 Ti... I got $650 for mine, so it's just $1.13 / day... and it will also bring down the purchase price of any future upgrades, because it will have the highest possible resale value of anything this generation.
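The cost-per-day arithmetic in the last couple of posts can be sketched in a few lines (numbers taken straight from the thread: $1200 at release, 487 days of ownership, $650 resale; note that (1200 - 650) / 487 works out to about $1.13/day):

```python
# Effective daily cost of a GPU purchase, optionally netting out
# the resale value of the card it replaced.
def cost_per_day(price, days, resale=0):
    return (price - resale) / days

print(round(cost_per_day(1200, 487), 2))       # 2.46 -- acoustic's figure
print(round(cost_per_day(1200, 487, 650), 2))  # 1.13 after a $650 1080 Ti resale
```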


----------



## J7SC

DirtyScrubz said:


> So you bought a card with a pre-installed wb that looks ugly and was dumb enough to pay over $1300 used for it and then cried it cost too much--well no **** genius. Go mow some more lawns kid, maybe you'll be able to afford the 3080 Ti in a few years and not have to come on these forums spouting trash to justify you not being able to afford one.


 
...not to mix into your *argument(s)* with Mooncheese, not least as all this seems devoid of basic courtesy. But since I've owned 2x of the Aorus Xtr factory full-waterblock cards for well over a year now, purchased for both work and play, I have no idea what you are going on about....not a single problem with them, and with the water-blocks included and mounted, quite a good deal to boot.

Performance-wise, with the stock BIOS (and with extensive rad space), I have spent pretty much all year in the top 25 or so (pic below) in the 3DMark Port Royal HOF. The lower graph in the first pic shows the temps for the Port Royal run, and it underscores how exceptionally well the factory water-blocks perform. As to the looks of the card, it's a matter of taste; I think they look great, if a bit RGB-heavy. But if you don't like the look, the cover comes off pretty easily (also per the first pic below). 

Anyway, back to real life..


----------



## Sheyster

acoustic said:


> Imagine spending all this time ranting about the 2080TI when you don't own one. I've enjoyed great performance for a while now; $1200 on release means I spent $2.46 (1200 / 487) per day for hours upon hours of top notch performance.
> 
> Please go to the 1080TI thread where you belong instead of trashing this thread for no reason. Obviously buying a 2080TI for $1000 right now would be silly, but then again you don't really see too many people doing that right now unless they game at 4K, which I understand at that point as a 2080TI is really the only card capable of a playable experience at that resolution.
> 
> Was the card overpriced? Yup. Welcome to a market that has ZERO competition. Any company with a brain would have done what NV did in their shoes.



I think Moon Cheese owns a 2080 Ti right? He's just super salty he paid a lot for it from what I gather.

If he does not own one then you're right, he's trolling hard and should get the heck out of this thread.


----------



## DirtyScrubz

Sheyster said:


> I think Moon Cheese owns a 2080 Ti right? He's just super salty he paid a lot for it from what I gather.
> 
> If he does not own one then you're right, he's trolling hard and should get the heck out of this thread.



He paid $1300 for a used waterforce extreme and realized how stupid that was and that he couldn't really afford it. After returning it, he came on this thread to troll everyone else to make himself feel better.


----------



## chibi

DirtyScrubz said:


> He paid $1300 for a used waterforce extreme and realized how stupid that was and that he couldn't really afford it. After returning it, he came on this thread to troll everyone else to make himself feel better.



That's kind of sad... lol.


----------



## DirtyScrubz

J7SC said:


> ...not to mix in your *argument(s)* with Mooncheese, not least as all this seems to be void of basic courtesy. But since I own 2x of the Aorus Xtr factory full-waterblock cards for well over a year now that were purchased for both work and play, I have no idea what you are going on about....not a single problem with them  , and with included and mounted water-blocks, quite a good deal to boot.
> 
> Performance-wise, with stock Bios (and with extensive rad space), I have spent pretty much all year in the top 25 or so (pic below) in 3DM HOF Port Royal. The lower graph in the first pic shows you the temps for the Port Royal run and it underscores the factory water-blocks performing exceptionally well. As to the looks of the card, it's a matter of taste. I think they look great if a bit RGB-heavy. But if you don't like the look, the cover comes off pretty easily (also per first pic below).
> 
> Anyway, back to real life..



Like I initially said, looks are subjective. Nothing wrong with the card; just paying $1300 used for one right now isn't worth it, especially since you can buy other brands for much cheaper on hardwareswap and other places. The funny thing is he comes in here saying how crappy the 2080 Ti is because he's dreaming of how great the 3080 might be, but what he doesn't realize is most here who have a 2080 Ti will just sell it and grab the 3080 Ti or whatever when the time comes. Anytime I see someone say "NGreedia" over and over, I already know they're salty.


----------



## J7SC

DirtyScrubz said:


> Like I initially said, looks are subjective. Nothing wrong w/the card, just paying $1300 used for one right now isn't worth it, especially since you can buy other brands for much cheaper on hardwareswap and other places. The funny thing is he comes in here saying how crappy the 2080 Ti is because he's dreaming of how great the 3080 might be but what he doesn't realize is most here who have a 2080 Ti will just sell and grab the 3080 Ti or whatever when the time comes. Anytime I see someone say "NGreedia" over and over, I already know they're salty.


 
...I know. I just grabbed your previous post because, frankly...it was much shorter and easier to grab than Mooncheese's latest iterations...

For what it's worth, we paid the equivalent of US $ 1.2k NEW for the first card in late November '18 (we're in Canada), and US $1.1k for the second one a few weeks later (it was 'open box' by someone who returned it the same day of purchase after realizing it was not the AIO version, per validated receipts). What has always intrigued me is that they are only 2 serial numbers apart, yet have different GPU and VRAM limits, and even Bios versions per GPUz. Granted, this particular Aorus WB card model is the top-binned for Gigabyte anyway, yet there is a 60 MHz diff on GPU top speed ("1" > "2") though about 30 MHz ("2" vs "1") in VRAM...so much for serial numbers and silicon lottery...btw, both pull about 380w on stock Bios with 122% (max) PL.

All else aside, what I really take away is that, very obviously, temps matter with NV Boost far more than anything else: the colder, the better, no matter what the silicon lottery dealt you. In addition, SLI / NVLink works far better than most folks would have you believe in most of the apps and games I have run across, so no... the 3070 / 3080 / 3080 Ti won't rob me of sleep just yet


----------



## sultanofswing

Does anyone by any chance have the Galax HOF 2000w BIOS they would be willing to share?


----------



## Kalm_Traveler

Kind of funny results tonight on my Kingpin card - swapped from an EK VGA Supremacy block + some cheapy copper memory heatsinks and the stock VRM cooling to a Bykski water block.

GPU temp is identical, memory is a bit cooler, and the VRMs seem about the same, but running OC Scanner in Afterburner now maxes this thing out at 2085, whereas on the previous janky setup (where the memory was getting pretty hot) OC Scanner clocked it up to 2150...


----------



## J7SC

Kalm_Traveler said:


> Kind of funny results tonight on my Kingpin card - swaped from an EK VGA Supremacy block + some cheapy copper memory heatsinks and the stock VRM cooling to a Bykski water block.
> 
> GPU temp is identical, memory is a bit cooler, VRMs seem about the same, but running OC scanner in Afterburner it clocked this thing max at 2085, where on the previous janky setup the memory was getting pretty hot but OC Scanner clocked it up to 2150...


 
That's genuinely weird. Are there no steep, if fleeting, 'peaks' when running OC Scanner with the new block (wondering about mounting pressure and/or thermal pads; perhaps remount)? Also, I take it you put the card back in the same PCIe slot with the same BIOS? (A different slot for the same card can change the Windows registry enumeration, ditto for a new/flashed BIOS, and that also has an impact on Afterburner.) If you solve it, please post about it here, because it's a head-scratcher...


----------



## Kalm_Traveler

J7SC said:


> That's genuinely weird :headscrat Are there no steep if fleeting 'peaks' when running OC scanner with the new block (wondering about mounting pressure and/or thermal pads / perhaps remount) ? Also, I take it that you put the card back in the same PCIe slot with the same bios (different slot for the same card can change Win registry enumeration, ditto for new / flashed bios. It also has an impact on Afterburner). If you solve it, please post about it here because it's a head scratcher...


no temp spikes in Afterburner that I can see. Yes same slot. Didn't change any settings or BIOS on anything whatsoever. Removed card from PC, swapped cooling solution, reinstalled card. 

It is also still on the middle BIOS switch position (OC).

Yes very very strange indeed - I'll do some more testing tonight and let you know if I find anything obvious but so far I'm super confused.


----------



## D-EJ915

DirtyScrubz said:


> Like I initially said, looks are subjective. Nothing wrong w/the card, just paying $1300 used for one right now isn't worth it, especially since you can buy other brands for much cheaper on hardwareswap and other places. The funny thing is he comes in here saying how crappy the 2080 Ti is because he's dreaming of how great the 3080 might be but what he doesn't realize is most here who have a 2080 Ti will just sell and grab the 3080 Ti or whatever when the time comes. Anytime I see someone say "NGreedia" over and over, I already know they're salty.


Spending that much used is a bad deal for sure, since new ones on Amazon are $1,350. As for the card itself, at least it's not a Hydro Copper. I'm over here still rocking my 1080 Tis; I hope next gen brings an actual upgrade to the $700 price bracket again.


----------



## ESRCJ

sultanofswing said:


> Does anyone by any chance have the Galax HOF 2000w BIOS they would be willing to share?


I second that request. I'd love to give it a go!


----------



## truehighroller1

ESRCJ said:


> I second that request. I'd love to give it a go!


It sucked compared to the Kingpin BIOS, trust me. I'm still running the Kingpin myself on my Asus Dual OC 2080 Ti under water cooling. I have it but will not release it, as I was asked not to by a little birdie, but trust me, you don't want it compared to the Kingpin BIOS anyway.


----------



## sultanofswing

truehighroller1 said:


> It sucked compared to the kingpin bios trust me. I'm still, running the kingpin myself on my dual oc asus 2080ti that's using water cooling. I have it but will not release it as I was asked not to by a little birdie but trust me you don't want it compared to the kingpin bios any way.


I’m currently using the Kingpin BIOS, but I don’t like the way the memory doesn’t go into 2D clocks, and I’d love to have the voltage/frequency table back. On the Kingpin BIOS I’m able to keep temps no higher than 43°C at 560W, so I have good cooling to handle it.


----------



## Shawnb99

D-EJ915 said:


> Spending that much used is a bad deal for sure since new ones on amazon are 1350. As for the card itself, at least it's not a hydro copper. I'm over here rocking my 1080 Tis still, I hope next gen brings an actual upgrade for the 700 price bracket again.




What’s wrong with the Hydrocopper?


----------



## schumeiker

I have an RTX 2080 Ti Aorus Xtreme. It easily reaches 75 degrees in games. I set 0.981V at 2075 MHz, but during games it drops to 2025 MHz at 65 degrees. Would there be a better way to optimize performance without raising the temperature too much?


----------



## J7SC

schumeiker said:


> I have an RTX 2080 Ti Aorus Xtreme. It easily reaches 75 degrees in games. I set 0.981V at 2075 MHz, but during games it drops to 2025 MHz at 65 degrees. Would there be a better way to optimize performance without raising the temperature too much?


 
I've got two of those Aorus Xtr cards, but the factory water-blocked WB version. There are also aftermarket water coolers available, and as with any 2080 Ti model / brand, any time you can lower your temps, you get more 'stable' and higher (up to a point) clocks. For every given step of temp increase (starting in the 30°C range), you lose 15 MHz or so.
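That binning behavior can be sketched as a toy model (assumptions: one ~15 MHz boost bin lost per ~5 °C above roughly 35 °C; the real bin boundaries and thresholds vary by card and BIOS, so treat the numbers as illustrative only):

```python
# Toy model of GPU Boost temperature binning: the card sheds one
# fixed-size clock bin for every few degrees above a threshold.
def boost_estimate(max_boost_mhz, temp_c, start_c=35, step_c=5, bin_mhz=15):
    bins_lost = max(0, (temp_c - start_c) // step_c)
    return max_boost_mhz - bins_lost * bin_mhz

print(boost_estimate(2100, 34))  # cold card keeps full boost: 2100
print(boost_estimate(2100, 65))  # 6 bins down: 2010
print(boost_estimate(2100, 75))  # 8 bins down: 1980
```

The point of the model is just that tens of degrees translate into a ~100 MHz swing, which is why the waterblock cards hold their clocks so much better than air-cooled ones.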


----------



## kithylin

schumeiker said:


> I have an RTX 2080 Ti Aorus Xtreme. It easily reaches 75 degrees in games. I set 0.981V at 2075 MHz, but during games it drops to 2025 MHz at 65 degrees. Would there be a better way to optimize performance without raising the temperature too much?


Try seeing if you can reduce the voltage instead of increasing it. Increasing voltage on air-cooled 2080 Tis is counter-productive for overclocking: it just makes the card produce more heat, which makes it throttle down further. Sometimes you can reduce volts, still OC, and make less heat, and thus throttle less / run higher clocks. In the end, your enemy with these cards is temperature. You may not be able to run above 2025 MHz on that card unless you water-cool it. Try increasing airflow in your case and perhaps reducing your ambient / room temps.


----------



## Uriette

You should be able to lower the voltage for sure. My 2080 Ti Xtreme can reach 2055 MHz through OC Scanner, perfectly stable, without any voltage increase or power-target bump.

Mine reaches 73-74°C under heavy gaming sessions, and it's not really bad IMHO; my previous GTX 980 Ti Superclocked+ from EVGA used to hit similar temps under heavy load as well.


----------



## J7SC

...below is from an earlier article by TH showing the relationship between temps and GPU clock. Increasing cooling will help a lot, as would lowering voltage (less heat) if stable; 'extra heat' can 'cost' as much as 100 MHz or more


----------



## ESRCJ

truehighroller1 said:


> It sucked compared to the kingpin bios trust me. I'm still, running the kingpin myself on my dual oc asus 2080ti that's using water cooling. I have it but will not release it as I was asked not to by a little birdie but trust me you don't want it compared to the kingpin bios any way.


The Kingpin XOC BIOS didn't work out so well for me; it was very unstable using the same voltage-frequency steps I've used with non-XOC BIOSes. The main appeal to me is the lack of an effective power limit and the slight voltage headroom up to 1.125V.


----------



## ESRCJ

J7SC said:


> ...below is from an earlier article by TH and shows the relationship between temps and GPU clock. So increasing cooling will help a lot, as would lowering voltage (> less heat) if stable. 'Extra heat' can 'cost' as much as an extra 100MHz or more


Very small changes in temperature can make a difference. Just switching from Kryonaut to Conductonaut between my waterblock and GPU resulted in a 3°C decrease in load GPU temperature, allowing me to shift each point of my V/F curve 2 spaces to the left (a 0.0125V "saving" per clock).
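As a sketch of that "shift the curve left" idea, assuming Afterburner's V/F curve points sit 6.25 mV apart (so a two-point shift is the 12.5 mV mentioned above), moving left just means reaching each frequency at a lower voltage. The curve values below are made up for illustration:

```python
VOLT_STEP = 0.00625  # assumed spacing between V/F curve points, in volts

# Re-map each (voltage, MHz) point so the same clock is reached
# `steps` curve points lower in voltage.
def shift_left(curve, steps):
    return [(round(v - steps * VOLT_STEP, 5), mhz) for v, mhz in curve]

curve = [(0.98125, 2025), (1.00000, 2040), (1.01875, 2055)]
print(shift_left(curve, 2))  # each clock now costs 12.5 mV less
```

Less voltage at the same clock means less heat, which in turn keeps the card in a higher boost bin, so the gains compound a little.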


----------



## D-EJ915

Shawnb99 said:


> What’s wrong with the Hydrocopper?


Overpriced with poor cooling: https://www.techpowerup.com/review/evga-hydro-copper-gtx-1080/6.html This is an old model, but they're always mediocre at best.


----------



## sultanofswing

ESRCJ said:


> The Kingpin XOC BIOS didn't work out so well for me. Basically it was very unstable for me using the same voltage-frequency steps as I've used with non-XOC BIOS. The main appeal to me is the lack of an effective power limit and the slight voltage headroom up to 1.125V.


How did you manage to use voltage-frequency steps with the XOC BIOS? The curve doesn't work with that BIOS.


----------



## truehighroller1

sultanofswing said:


> I’m currently using the kingpin bios but I don’t like the way the memory doesn’t go into 2d clocks and I’d love to have the voltage/frequency table back. On the kingpin bios I am able to keep temps no higher than 43c at 560watts so I have good cooling to handle it.


The voltage table was bugged and caused major issues for me. My guess is that's why the Kingpin XOC locked it out: they couldn't get around whatever the bug is at the voltage-table level of the BIOS.




sultanofswing said:


> How did you get to use a voltage-frequency step with the XOC bios as it doesn't work with that Bios.


He must be thinking of a different BIOS he was using, is my guess. Agreed.


----------



## sultanofswing

truehighroller1 said:


> The voltage table was bugged and caused major issues for me. My guess is that's why kingpin xoc locked it out they couldn't get around whatever the bug is at the voltage table level of the bios.
> 
> 
> 
> 
> He must be thinking of a different bios he was using is my guess. Agreed.


Ok, yeah, I was using the Kingpin XOC BIOS, where that table does not work. I tried 3 different "normal" Kingpin BIOSes (520W) from TechPowerUp that will flash onto my late-2019 card, but with those, the instant I started a benchmark it would hit the power limit and the clocks would just tank, even though it was only drawing around 200 watts.


----------



## truehighroller1

sultanofswing said:


> Ok yea I was using the Kingpin XOC Bios which that table does not work, I tried 3 different "normal" kingpin Bios(520w) from Techpowerup that will flash on my late 2019 card but with those the instant I went to do a benchmark it would go into Power Limit and the clocks would just tank even though it was only using like 200watts.


The Kingpin XOC is the best-tweaked unlocked BIOS, imo; they perfected it about as well as you can. I used to create custom BIOSes for people here for a while. It makes me so... mad that I can't mod my own BIOS on this beautiful card. That BIOS you want is a dud, trust me. I played with it for a while and couldn't get around the voltage-table glitch no matter what I did. It would blue-screen when I opened Afterburner, or any other application for testing purposes, after setting up the voltage table. Even their own app for the card the BIOS was released for.

That was only after rebooting my PC, as well. I could run it after setting it up, but reboot and try again: bam, BSOD. After that I had to uninstall the driver etc. to get back to normal behavior. It was a headache. 

Now if they've fixed that somehow it might be worth a try, but considering they haven't released a Kingpin BIOS with the voltage table enabled, it must be an inherent glitch.


----------



## sultanofswing

truehighroller1 said:


> The kingpin xoc is the best tweaked unlocked imo. They perfected it as good as you can imo. I used to create custom bios for people here for awhile. It makes me so... mad that I can't mod my own bios on this beautiful card. That bios you want is a dud trust me. I played with it for awhile and couldn't get around the voltage table glitch no matter what I did. It would blue screen when I opened afterburner or any other application for testing purposes after setting up the voltage table is what it was doing. Even their app for the card the bios was released for.
> 
> That was only after rebooting my PC as well. I could run it after setting it up, but reboot and try bam bsod. After that I had to uninstall the driver etc to get back to normal behavior. It was a headache.
> 
> Now if they've fixed that somehow it might be worth a try, but considering they haven't released a Kingpin BIOS with the voltage table enabled, it must be an inherent glitch.


Yeah, the XOC works fine for me other than the memory not going into 2D mode.
I find it weird that the "regular" Kingpin BIOS hits the power limit the instant you start any 3D program; it's like it drops to 2D mode. Not sure if that's normal behavior or because I'm using it on an FE card.
Even if I use Afterburner to lock the curve, it locks to my desired frequency/voltage, but as soon as I start a 3D program, bam, instant power limit, no matter what the slider is set to.


----------



## truehighroller1

sultanofswing said:


> Yea, the XOC works fine for me, other than the memory will not go into 2D mode.
> I find it weird that the "regular" Kingpin BIOS hits the power limit the instant you start any 3D program; it's like it drops to 2D mode. Not sure if that is normal behavior or if it's because I am using it on a FE card.
> Even if I use Afterburner to lock the curve, it will lock to my desired frequency/voltage, but as soon as I start a 3D program, bam, instant power limit no matter where the slider is set.


My guess is my 2D memory clocks don't work either. I'll look tomorrow and tell you. I just redownloaded the Galax BIOS in case I didn't make a local backup of it. It's still out there on the internet; it says it's been downloaded 29 times now. 

On a side note: this card will be worth more money on resale because it has the unlocked older BIOS, yay!! Lol.

When the 3080 Ti comes out, I'll see what they do with the BIOS this time around before I sell this one and buy one.


----------



## sultanofswing

truehighroller1 said:


> My guess is my 2D memory clocks don't work either. I'll look tomorrow and tell you. I just redownloaded the Galax BIOS in case I didn't make a local backup of it. It's still out there on the internet; it says it's been downloaded 29 times now.
> 
> On a side note: this card will be worth more money on resale because it has the unlocked older BIOS, yay!! Lol.
> 
> When the 3080 Ti comes out, I'll see what they do with the BIOS this time around before I sell this one and buy one.


I really want to test the Galax BIOS, but the one out there that someone uploaded to TechPowerUp does not work with my newer BIOS.


----------



## Shawnb99

D-EJ915 said:


> Overpriced with poor cooling: https://www.techpowerup.com/review/evga-hydro-copper-gtx-1080/6.html This is an old model, but they're always mediocre at best.


When I bought mine I was expecting to pay a bit more and wasn't expecting the best results. I grabbed mine more for the fact that it came pre-installed, saving me from having to install a block of my own, since I've never done that on a GPU before and it seemed like a lot of work.

If Optimus goes ahead with offering GPU blocks, plus the option of sending them your card to have the block installed, I won't be buying a Hydro Copper ever again.

Really, the main and only reason to pick the Hydro Copper was to save me the trouble of installing it myself and screwing something up. 

I already have enough issues getting the CPU block to sit just right and using enough LM; I didn't want to go through that again with the GPU. I'm sure EVGA knows this, hence the markup for it.

When I upgrade to the 3080 Ti, whenever that comes out, I won't be making the same mistake.


----------



## Tyemcho

Hi guys, I have a question: what is the difference between TU102-300A-K1, TU102-300A-K5, and TU102-300A-K6-A1? I'm planning to replace my 2080 Ti chip, so I searched on Taobao, but I don't know what will happen if I put a K5 or K6 model on my PCB.


----------



## Medizinmann

Shawnb99 said:


> When I bought mine I was expecting to pay a bit more and wasn't expecting the best results. I grabbed mine more for the fact that it came pre-installed, saving me from having to install a block of my own, since I've never done that on a GPU before and it seemed like a lot of work.
> 
> If Optimus goes ahead with offering GPU blocks, plus the option of sending them your card to have the block installed, I won't be buying a Hydro Copper ever again.
> 
> Really, the main and only reason to pick the Hydro Copper was to save me the trouble of installing it myself and screwing something up.
> 
> I already have enough issues getting the CPU block to sit just right and using enough LM; I didn't want to go through that again with the GPU. I'm sure EVGA knows this, hence the markup for it.
> 
> When I upgrade to the 3080 Ti, whenever that comes out, I won't be making the same mistake.


You could replace the TIM with LM... I am sure this would help a lot.

I use an Alphacool GPX Pro on my rather cheap Palit Gaming OC 2080 Ti (flashed with the 380W BIOS), with LM as TIM, and temps never exceed 40°C, even under full load for prolonged periods. In everyday use it's never more than 30°C (with 22°C ambient), and idle is just 1°C above the water temps in the loop.

The "big hassle" (at least for me) with installing the water block wasn't the LM on the GPU die (BTW: I always use conformal coat* around the CPU/GPU as a precaution) - it was placing all the thermal pads right… so just lifting the water block and replacing the TIM with LM shouldn't be a very big thing...

Greetings,
Medizinmann

*https://www.amazon.co.uk/MG-Chemica...formal+coat&qid=1580384438&s=computers&sr=8-1


----------



## Shawnb99

Medizinmann said:


> You could replace the TIM with LM...I am sure this would help a lot.
> 
> 
> 
> I use an Alphacool GPX Pro on my rather cheap Palit Gaming OC 2080 Ti (flashed with the 380W BIOS), with LM as TIM, and temps never exceed 40°C, even under full load for prolonged periods. In everyday use it's never more than 30°C (with 22°C ambient), and idle is just 1°C above the water temps in the loop.
> 
> 
> 
> The "big hassle" (at least for me) with installing the water block wasn't the LM on the GPU die (BTW: I always use conformal coat* around the CPU/GPU as a precaution) - it was placing all the thermal pads right… so just lifting the water block and replacing the TIM with LM shouldn't be a very big thing...
> 
> 
> 
> Greetings,
> 
> Medizinmann
> 
> 
> 
> *https://www.amazon.co.uk/MG-Chemica...formal+coat&qid=1580384438&s=computers&sr=8-1



I thought about replacing the TIM with LM but am going to wait for Optimus to come out with a block for the card.


----------



## J7SC

Medizinmann said:


> You could replace the TIM with LM... I am sure this would help a lot (...) The "big hassle" (at least for me) with installing the water block wasn't the LM on the GPU die (BTW: I always use conformal coat* around the CPU/GPU as a precaution) - it was placing all the thermal pads right… so just lifting the water block and replacing the TIM with LM shouldn't be a very big thing...
> 
> Greetings,
> Medizinmann
> 
> *https://www.amazon.co.uk/MG-Chemica...formal+coat&qid=1580384438&s=computers&sr=8-1



You probably already know this, but for others who plan to water-cool and don't: you can also use nail polish around the CPU/GPU as a precautionary measure when using LM.


----------



## Jpmboy

Medizinmann said:


> You could replace the TIM with LM...I am sure this would help a lot.
> 
> I use an Alphacool GPX Pro on my rather cheap Palit Gaming OC 2080 Ti (flashed with the 380W BIOS), with LM as TIM, and temps never exceed 40°C, even under full load for prolonged periods. In everyday use it's never more than 30°C (with 22°C ambient), and idle is just 1°C above the water temps in the loop.
> 
> The "big hassle" (at least for me) with installing the water block wasn't the LM on the GPU die (BTW: I always use conformal coat* around the CPU/GPU as a precaution) - it was placing all the thermal pads right… so just lifting the water block and replacing the TIM with LM shouldn't be a very big thing...
> 
> Greetings,
> Medizinmann
> 
> *https://www.amazon.co.uk/MG-Chemica...formal+coat&qid=1580384438&s=computers&sr=8-1


Yep - MG conformal coating is the best to use. It's easy to see under UV, and it's simply the "right stuff" :thumb:


----------



## KCDC

With GPUs being so sensitive to temps for higher clocks these days, I wonder if I would benefit from swapping the stock Aorus Xtreme waterblocks for Phanteks Glacier blocks while also using better thermal pads/paste. The tinker bug is biting me and I should ignore it. I haven't found any comparison reviews between this and the stock Aorus block, probably because no one is silly enough to swap one for the other. Still curious, though. 

http://www.phanteks.com/PH-GB2080TiGB.html


----------



## J7SC

KCDC said:


> With GPUs being so sensitive to temps for higher clocks these days, I wonder if I would benefit from swapping the stock Aorus Xtreme waterblocks for Phanteks Glacier blocks while also using better thermal pads/paste. The tinker bug is biting me and I should ignore it. I haven't found any comparison reviews between this and the stock Aorus block, probably because no one is silly enough to swap one for the other. Still curious, though.
> 
> http://www.phanteks.com/PH-GB2080TiGB.html


 
With my own 2x Aorus Xtreme factory-waterblocked cards, I have always wondered who actually makes them as OEM for Gigabyte/Aorus (Bykski, Barrowch, BitFenix, other? 2nd pic below) and whether you can improve upon them (seems likely). 

The Phanteks blocks look interesting... On the other hand, per the attached first pic, temp degradation isn't really a problem with the OEM blocks either in repeated Port Royal 380W x2 runs :2cents: - though that is via a 1080/55 rad setup with dual D5 pumps for the GPU-only loop. Still, 'even better' blocks would probably save a few degrees (which matters). I'd love to see your before-and-after results if you decide to go ahead!


----------



## KCDC

J7SC said:


> With my own 2x Aorus Xtreme factory-waterblocked cards, I have always wondered who actually makes them as OEM for Gigabyte/Aorus (Bykski, Barrowch, BitFenix, other? 2nd pic below) and whether you can improve upon them (seems likely).
> 
> The Phanteks blocks look interesting... On the other hand, per the attached first pic, temp degradation isn't really a problem with the OEM blocks either in repeated Port Royal 380W x2 runs :2cents: - though that is via a 1080/55 rad setup with dual D5 pumps for the GPU-only loop. Still, 'even better' blocks would probably save a few degrees (which matters). I'd love to see your before-and-after results if you decide to go ahead!


Phanteks appears to be the only one making compatible blocks for the Aorus Xtreme cards; I was hoping for a Heatkiller, but sadly no. 

Overall, I don't know the quality or efficiency of Phanteks GPU blocks. If I let the bug get to me and go for it, I'll be sure to report back. It would be cool to see a sustained bin or two lower, but that may be hopeful.

Edit: Looks like Bitspower makes one as well.


----------



## wyattneill

I have an ASUS 2080 Ti OC that I have shunt modded and flashed with the ASUS unlimited-power-limit BIOS. I get very strong overclocks and benchmark scores when running chilled water: 2190 on the core and 8300 on the memory. So my question would be: can I flash the 2000W Galax BIOS that has voltage control onto my Strix OC? I saw a 3DMark Time Spy Extreme score from der8auer running this card himself, with a core clock of 2450; of course I assume that was LN2, but I feel there is more to be had here with chilled water. So I just wanted to ask before I go ahead and give the unlimited-power Galax BIOS a try. Constructive criticism is welcomed, of course. I am really trying to pick up a Kingpin, but until then I will just be playing with this one.
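For anyone weighing the same mod, the arithmetic behind a shunt mod is simple enough to sketch. The resistor values and wattages below are illustrative assumptions, not measured board values: stacking an identical shunt on top of the stock current-sense resistor halves the sensed voltage, so the card under-reports its true draw and never trips its firmware power limit.

```python
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

R_STOCK = 0.005                      # assumed 5 mOhm sense shunt (illustrative)
r_mod = parallel(R_STOCK, R_STOCK)   # identical shunt soldered on top
scale = r_mod / R_STOCK              # sensed voltage drops by this factor (0.5)

def reported_power(true_watts):
    """Power the card *thinks* it is drawing after the mod."""
    return true_watts * scale

# A true 500 W draw now reads as 250 W, comfortably under a 380 W limit.
print(reported_power(500))
```

This is also why shunt-modded cards show misleading power readings in monitoring tools: the telemetry is scaled down by the same factor.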


----------



## sultanofswing

After bricking my MX-BIOS-chip 2080 Ti four times while trying to USB-program flash it, I finally got it working, and I was also able to get the Galax HOF XOC BIOS installed; it actually works very well.
2D clocks work on the desktop like they are supposed to.
3D clocks work properly.
The voltage table in MSI Afterburner worked like it is supposed to as well.

Assuming nothing happens, this will be the BIOS that I keep on it for a while.


----------



## truehighroller1

sultanofswing said:


> After bricking my MX-BIOS-chip 2080 Ti four times while trying to USB-program flash it, I finally got it working, and I was also able to get the Galax HOF XOC BIOS installed; it actually works very well.
> 2D clocks work on the desktop like they are supposed to.
> 3D clocks work properly.
> The voltage table in MSI Afterburner worked like it is supposed to as well.
> 
> Assuming nothing happens, this will be the BIOS that I keep on it for a while.


Huh, interesting, so you used the one on TechPowerUp and it's working?


----------



## sultanofswing

truehighroller1 said:


> Huh, interesting, so you used the one on TechPowerUp and it's working?


Yessir


----------



## truehighroller1

sultanofswing said:


> Yessir


I have a funny feeling the one I have must be an older version. You're not getting blue screens after setting up the voltage table, doing a proper reboot or shutdown, coming back up into Windows, and then opening Afterburner and enabling the profile you set up? I'm going to try the one on TechPowerUp and report back what happens. Thanks bub.


----------



## sultanofswing

truehighroller1 said:


> I have a funny feeling the one I have must be an older version. You're not getting blue screens after setting up the voltage table, doing a proper reboot or shutdown, coming back up into Windows, and then opening Afterburner and enabling the profile you set up? I'm going to try the one on TechPowerUp and report back what happens. Thanks bub.


I have not gotten quite that far; I flashed the card last night, set up a curve in MSI Afterburner, and ran a benchmark, and it worked fine. I do not save profiles in MSI Afterburner; I only dial in my settings and then set it to apply those settings on boot.

I will set up a curve and try a few reboots and see what happens.


----------



## truehighroller1

sultanofswing said:


> I have not gotten quite that far; I flashed the card last night, set up a curve in MSI Afterburner, and ran a benchmark, and it worked fine. I do not save profiles in MSI Afterburner; I only dial in my settings and then set it to apply those settings on boot.
> 
> I will set up a curve and try a few reboots and see what happens.


I just did and yep, blue screen as soon as you load the profile.


----------



## sultanofswing

truehighroller1 said:


> I just did and yep, blue screen as soon as you load the profile.


Just tried it myself and I did get a BSOD; I didn't save a profile, but it still did it.

I really wish we could find the newer version of that BIOS, because this was reported fixed in that version.


----------



## truehighroller1

sultanofswing said:


> Just tried it myself and I did get a BSOD; I didn't save a profile, but it still did it.
> 
> I really wish we could find the newer version of that BIOS, because this was reported fixed in that version.


Agreed. I'm gonna do some searching.


----------



## 86Jarrod

truehighroller1 said:


> Agreed. I'm gonna do some searching.


Let me know if you find anything. It took me a couple of months to track down the old version. I tried to find screenshots from overclockers and Google the BIOS version they were using with all different sorts of keywords; I finally found it, but it was incredibly time-consuming. A version with just tighter mem timings would be awesome. BTW, it shouldn't be on TechPowerUp for any noob to find and fry their air-cooled card. I suppose it was inevitable though.


----------



## sultanofswing

If you guys find anything, let me know, as this BIOS would be near perfect without the BSOD.
Right now I just set Afterburner not to apply settings on startup.


----------



## gfunkernaught

Anyone else sick of how inconsistent overclock stability is with their 2080 Tis? You find a clock that works for months, and then all of a sudden, with no driver changes, it isn't stable anymore?


----------



## JustinThyme

gfunkernaught said:


> Anyone else sick of how inconsistent overclock stability is with their 2080 Tis? You find a clock that works for months, and then all of a sudden, with no driver changes, it isn't stable anymore?


Mine's been rock solid. I tried the XOC and Matrix BIOSes on a pair of Strix 2080 Ti OC cards, and in the end I flashed back to the stock BIOS because I saw no real gain. 2145-2160 is all she wrote. Tune it to just below the power limit to get 2145 with zero voltage increase.

The only thing that chaps my butt is the different tuners. GPU Tweak, which was made for these cards, is inconsistent, so I usually use MSI Afterburner.


----------



## dangerSK

gfunkernaught said:


> Anyone else sick of how inconsistent overclock stability is with their 2080 Tis? You find a clock that works for months, and then all of a sudden, with no driver changes, it isn't stable anymore?


There shouldn't be any; you can't degrade the card at stock voltage. 99% of the time the fault is in software.


----------



## J7SC

gfunkernaught said:


> Anyone else sick of how inconsistent overclock stability is with their 2080 Tis? You find a clock that works for months, and then all of a sudden, with no driver changes, it isn't stable anymore?


 
My cards have been at the same OC for over a year, but I did run into one weird issue recently when my saved OC profiles suddenly didn't work right. I traced it to MSI AB / Settings / "synchronize GPUs"... it's supposed to always be on, and I hadn't gone into the MSI AB settings beforehand, but Windows 10 had _just_ done several updates right before... 

Anyhow, all back to normal.


----------



## Hardrock

Hi guys, I am having an issue, maybe with my 2080 Ti, but I'm not totally sure, so I figured I would start here. 

Basically, up until last week I was getting 175-200 FPS in COD Modern Warfare multiplayer at 3440x1440 with medium-high graphics settings and ray tracing turned off. I am not sure what happened, but in the last two days I am getting 100-125 FPS with occasional jumps to the mid-140s in MP. Even if I drop all settings to low in COD MP and drop the render resolution from 100% to 50%, I only get 180 FPS, which to me seems crazy. 

I uninstalled the latest Nvidia drivers and went back to an older driver version. I used DDU to remove any old drivers hanging around, but this made no difference.

I had an overclock on the card, +200 on the GPU and +100 on the RAM, so I thought maybe it was a bad overclock, but reverting to the default gaming profile for the card made no difference. 

Maybe it's a game optimization problem in the latest COD update? Anyone else notice this happening to them recently?

Any thoughts appreciated, TY. Here are my system specs:

Ryzen 9 3950X - stock settings (water cooled)
ASUS ROG Strix 2080 Ti - stock settings (water cooled)
32GB G.Skill Neo DDR4 (16-16-16-36) - stock settings confirmed in BIOS
Corsair MP600 M.2 NVMe PCIe 4.0 for the OS
Win 10 64-bit, fresh install
Samsung EVO 860 2TB drive for games
EVGA SuperNOVA G2 1000 PSU
ASUS ROG PG35V display @ 200Hz, 3440x1440


----------



## NBrock

It could have been a game update that caused it. It might be worth checking the game forums to see if others are having the same issue. 

Not related to the game, but you should be able to push more than +100 on the memory. Most 2080 Tis I have seen can do at least +800.


----------



## keikei

Hardrock said:


> Hi guys, I am having an issue, maybe with my 2080 Ti, but I'm not totally sure, so I figured I would start here.
> 
> Basically, up until last week I was getting 175-200 FPS in COD Modern Warfare multiplayer at 3440x1440 with medium-high graphics settings and ray tracing turned off. I am not sure what happened, but in the last two days I am getting 100-125 FPS with occasional jumps to the mid-140s in MP. Even if I drop all settings to low in COD MP and drop the render resolution from 100% to 50%, I only get 180 FPS, which to me seems crazy.
> 
> I uninstalled the latest Nvidia drivers and went back to an older driver version. I used DDU to remove any old drivers hanging around, but this made no difference.
> 
> I had an overclock on the card, +200 on the GPU and +100 on the RAM, so I thought maybe it was a bad overclock, but reverting to the default gaming profile for the card made no difference.
> 
> Maybe it's a game optimization problem in the latest COD update? Anyone else notice this happening to them recently?
> 
> Any thoughts appreciated, TY. Here are my system specs:
> 
> Ryzen 9 3950X - stock settings (water cooled)
> ASUS ROG Strix 2080 Ti - stock settings (water cooled)
> 32GB G.Skill Neo DDR4 (16-16-16-36) - stock settings confirmed in BIOS
> Corsair MP600 M.2 NVMe PCIe 4.0 for the OS
> Win 10 64-bit, fresh install
> Samsung EVO 860 2TB drive for games
> EVGA SuperNOVA G2 1000 PSU
> ASUS ROG PG35V display @ 200Hz, 3440x1440


Try the new driver; there is a COD-related fix in it.


----------



## Hardrock

Thanks, it looks like they released a new driver today for a frame rate issue. Hopefully this fixes it. Thanks!


----------



## Shawnb99

J7SC said:


> You probably already know this, but for others who plan to water-cool and don't: you can also use nail polish around the CPU/GPU as a precautionary measure when using LM.


Thanks, I'll try this method next time. The first time I used LM I spilled some on the motherboard and had to get a new one; on the second attempt I used a kneadable eraser with the same idea and had no issues. Nail polish sounds a lot easier.


----------



## jura11

gfunkernaught said:


> anyone else sick of how inconsistent overclock stability is with their 2080 ti's? you find a clock that works for months and then all of a sudden, with no driver changes now it isnt?


Hi there, 

I can say for myself that my Asus RTX 2080Ti Strix with the Matrix BIOS is rock solid; my OC varies from 2145-2175MHz. I tried 2160MHz in Witcher 3 with no issues, and 2175MHz also ran without issues in Witcher 3. 

On the other hand, with RT enabled, 2085-2115MHz is my max. In non-RT games I just use my normal OC profiles of 2160-2175MHz; in some games 2145MHz is the max I'm able to run, but in others 2160-2175MHz is achievable quite easily.

Hope this helps. 

Thanks, Jura


----------



## jura11

gfunkernaught said:


> Anyone else sick of how inconsistent overclock stability is with their 2080 Tis? You find a clock that works for months, and then all of a sudden, with no driver changes, it isn't stable anymore?


If you are getting inconsistent OC stability, try different drivers or software. Personally I use MSI Afterburner; I won't touch the EVGA or Asus OC software.

Hope this helps 

Thanks, Jura


----------



## Medizinmann

J7SC said:


> You probably already know this, but for others who plan to water-cool and don't: you can also use nail polish around the CPU/GPU as a precautionary measure when using LM.



Yes, you could use nail polish, as der8auer (or his assistant) is always demonstrating. But I didn't feel comfortable with it and wanted something more heat resistant…
And I paid 17€ for the bottle of MG conformal coat (heat resistant up to 200°C) on Amazon at the time, which was very okay considering the total cost of my build.


...but yeah, in the end the temps are so low now that nail polish would have been safe to use...



Greetings,
Medizinmann


----------



## J7SC

Medizinmann said:


> Yes, you could use nail polish, as der8auer (or his assistant) is always demonstrating. But I didn't feel comfortable with it and wanted something more heat resistant…
> And I paid 17€ for the bottle of MG conformal coat (heat resistant up to 200°C) on Amazon at the time, which was very okay considering the total cost of my build.
> 
> 
> ...but yeah, in the end the temps are so low now that nail polish would have been safe to use...
> 
> 
> 
> Greetings,
> Medizinmann


 
...if one already has conformal coat available, that's perfect. If not, (clear, or at least non-metal-flake) nail polish works, as does liquid electrical tape (LET). I have also used MX-4 (non-conductive) TIM many times to insulate components near liquid metal... back in the day when I could get those big 20g MX-4 tubes locally  But as you state, by the time you are already forking out for liquid metal, you might as well add some conformal coating to the order...


----------



## QuickPwn

Hi everyone, I got a 2080 Ti Strix and I wanted to know what you guys suggest to flash my 2080 Ti with. I was looking at the EVGA Kingpin vBIOS or the ASUS 2080 Ti Matrix; I am kind of scared to use the XOC vBIOS since they are unlimited. Not sure what you guys think, let me know, thank you.


----------



## AndrejB

QuickPwn said:


> Hi everyone, I got a 2080 Ti Strix and I wanted to know what you guys suggest to flash my 2080 Ti with. I was looking at the EVGA Kingpin vBIOS or the ASUS 2080 Ti Matrix; I am kind of scared to use the XOC vBIOS since they are unlimited. Not sure what you guys think, let me know, thank you.


If it's air cooled, then the Matrix or Strix White BIOS (the latter is somewhere in this thread and has a better stock fan curve than the Matrix).


----------



## Shewie

Hello everyone.
It's my first post here, but I believe I couldn't find a better community forum to ask this question and share my insights.
I'm not a modder or watercooling enthusiast, etc. I buy stuff, place it inside the case, and expect it to work. OC - yes, and with RTX it's never been simpler.
The question, however, originates from today's experience with a newly purchased 2080 Ti from Amazon.
I previously had an EVGA FTW3 2080 Ti which I used for the past 11 months. Then an opportunity came and I decided to sell the card to my friend for a very good price for a used card, with minimal loss in value.
I then decided to buy the cheapest possible 2080 Ti with that money from amazon.co.uk, which cost significantly less than the FTW3.
The card came yesterday, but today I decided to put it to the test to see how far below the much-hailed EVGA FTW3 this card would be,
and until now I can't really comprehend the results I got after giving this Windforce OC a benchmark go.
The first thing I noticed was that the card uses Samsung memory dies instead of Micron.
If you go to Google and type "Windforce OC 2080ti PCB", all of the pictures show the PCB with Micron chips on it.
I know from recent experience that the Micron memory on my setup never went past +850MHz.
The FTW3 with Samsung, on the other hand, maxed out at +1050MHz before it started dropping artifacts.

But this small and insignificant-looking Gigabyte Windforce takes a +1500MHz memory OC without giving me a single hiccup whilst benchmarking.
That +1500MHz OC gives this card an enormous 750GB/s of memory bandwidth.

I ran all the benchmarks without a single crash, freeze or even hiccup.
The GPU boost is not that significant, due to its thermal limitations or even subpar silicon quality compared to the FTW3, which allowed me to boost the card up to 2075-2080MHz.

Do you actually think manufacturers could start supplying new cards with the faster memory known from the 2080 Super cards?
I think I could easily go past that +1500MHz mark, but MSI Afterburner does not allow me to go over it. 

I'm baffled, confused, and can't really articulate my conclusion after this test.
But I know I'm in the right place to address this matter and get some insights from those who are far more knowledgeable than myself.

*My test methodology has always been the same.
CPU: 8700K @ 5.1GHz
RAM: 64GB 3200MHz

I've never modded any of my cards or changed BIOS, firmware, etc. It's always been factory.
Screenshot of the results, HWiNFO readings, etc.
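For what it's worth, the ~750GB/s figure checks out arithmetically. A quick sketch, assuming (as is commonly reported) that the Afterburner memory offset is applied to the command clock and therefore counts double toward the effective GDDR6 data rate:

```python
def bandwidth_gb_s(bus_bits, effective_mhz):
    """Peak memory bandwidth: bytes per transfer x transfers per second, in GB/s."""
    return bus_bits / 8 * effective_mhz / 1000

stock = bandwidth_gb_s(352, 14000)          # 2080 Ti spec sheet: 616.0 GB/s
oc = bandwidth_gb_s(352, 14000 + 2 * 1500)  # +1500 offset -> 748.0 GB/s
print(stock, oc)
```

748GB/s rounds to the ~750GB/s quoted above, so the offset really is delivering that bandwidth (assuming no error correction is silently eating transfers).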


----------



## truehighroller1

86Jarrod said:


> Let me know if you find anything. It took me a couple of months to track down the old version. I tried to find screenshots from overclockers and Google the BIOS version they were using with all different sorts of keywords; I finally found it, but it was incredibly time-consuming. A version with just tighter mem timings would be awesome. BTW, it shouldn't be on TechPowerUp for any noob to find and fry their air-cooled card. I suppose it was inevitable though.


Meh, I couldn't find it posted anywhere. I'm back on the Kingpin XOC, as the other Kingpin BIOSes seem to still limit my card power-wise. This is the BIOS my card likes, and I guess I can deal with that lol. At least it likes this one, I suppose.


----------



## jura11

QuickPwn said:


> Hi everyone i got a 2080 ti strix and i wanted to know whats you guys suggest to flash my 2080 ti i was looking at the evga king pin vbios or the asus 2080 ti matrix i am kinda of scarde to use the xoc vbios since they are unlimited no sure what you guys think let me know thank you


Hi there, 

With the Kingpin BIOS I'm not sure if it will be compatible, or whether you will lose a DisplayPort, on your Asus RTX 2080Ti Strix.

The Matrix BIOS is the best one for me: I'm running 2160-2175MHz (+100 to +115MHz on the core) on my Asus RTX 2080Ti Strix with no issues, rock stable in games, benchmarks and rendering (in RT-specific renderers I need to run stock because it would otherwise crash, but that's normal for rendering since it uses the Tensor/RT cores etc.). 

The XOC BIOS I'm not sure I would recommend; it's a great OC BIOS with an unlimited power limit, but sadly with that BIOS your VRAM will run at full speed all the time, which can cause a few issues.

The Asus Strix White BIOS could be another option; I haven't tried it on my Asus RTX 2080Ti Strix. 

Hope this helps. 

Thanks, Jura


----------



## jura11

Shewie said:


> Hello everyone.
> It's my first post here, but I believe I couldn't find a better community forum to ask this question and share my insights.
> I'm not a modder or watercooling enthusiast, etc. I buy stuff, place it inside the case, and expect it to work. OC - yes, and with RTX it's never been simpler.
> The question, however, originates from today's experience with a newly purchased 2080 Ti from Amazon.
> I previously had an EVGA FTW3 2080 Ti which I used for the past 11 months. Then an opportunity came and I decided to sell the card to my friend for a very good price for a used card, with minimal loss in value.
> I then decided to buy the cheapest possible 2080 Ti with that money from amazon.co.uk, which cost significantly less than the FTW3.
> The card came yesterday, but today I decided to put it to the test to see how far below the much-hailed EVGA FTW3 this card would be,
> and until now I can't really comprehend the results I got after giving this Windforce OC a benchmark go.
> The first thing I noticed was that the card uses Samsung memory dies instead of Micron.
> If you go to Google and type "Windforce OC 2080ti PCB", all of the pictures show the PCB with Micron chips on it.
> I know from recent experience that the Micron memory on my setup never went past +850MHz.
> The FTW3 with Samsung, on the other hand, maxed out at +1050MHz before it started dropping artifacts.
> 
> But this small and insignificant-looking Gigabyte Windforce takes a +1500MHz memory OC without giving me a single hiccup whilst benchmarking.
> That +1500MHz OC gives this card an enormous 750GB/s of memory bandwidth.
> 
> I ran all the benchmarks without a single crash, freeze or even hiccup.
> The GPU boost is not that significant, due to its thermal limitations or even subpar silicon quality compared to the FTW3, which allowed me to boost the card up to 2075-2080MHz.
> 
> Do you actually think manufacturers could start supplying new cards with the faster memory known from the 2080 Super cards?
> I think I could easily go past that +1500MHz mark, but MSI Afterburner does not allow me to go over it.
> 
> I'm baffled, confused, and can't really articulate my conclusion after this test.
> But I know I'm in the right place to address this matter and get some insights from those who are far more knowledgeable than myself.
> 
> *My test methodology has always been the same.
> CPU: 8700K @ 5.1GHz
> RAM: 64GB 3200MHz
> 
> I've never modded any of my cards or changed BIOS, firmware, etc. It's always been factory.
> Screenshot of the results, HWiNFO readings, etc.


Hi there, 

Earlier RTX 2080Tis did use Micron VRAM chips; later ones seem to use only Samsung VRAM, which usually clocks higher. I have seen and tried a few GPUs with Micron, and on them I could clock the VRAM to +1175-1200MHz at most. 

I think the max on my Asus RTX 2080Ti Strix is around +1000MHz, which puts me at 739GB/s, maybe +1015-1030MHz in the right ambient temperature; I don't think I can do +1500MHz. 

Maybe flashing a different BIOS would unlock extra potential. Did you try the 4K Optimized benchmark in Unigine Superposition? 

Temperatures are okay: you hit 61°C, which is low, so you should get a good OC. 

On the Matrix BIOS, if I do +115MHz on the core I hit 2175MHz without a problem; that's the OC I'm running on my Asus RTX 2080Ti Strix. 

Putting your GPU under water would maybe unlock an extra 50-75MHz, maybe more, not sure; you need to ask yourself if it's worth the extra spending. 

Did you try testing with stock settings versus your OC settings, and what difference did you see? 

From that you can decide if it's worth it to OC or not.

It's hard to say or recommend; you can try a different BIOS that is compatible with your RTX 2080Ti and your I/O.

Hope this helps. 

Thanks, Jura


----------



## truehighroller1

jura11 said:


> Hi there
> 
> With the Kingpin BIOS I'm not sure if it will be compatible, or whether you will lose a DP port with that BIOS on your Asus RTX 2080Ti Strix
> 
> The Matrix BIOS is the best one for me; I'm running my Asus RTX 2080Ti Strix at 2160-2175MHz (+100MHz or +115MHz on the core) with no issues, rock stable in games, benchmarks and rendering (in RT-specific renderers I need to run stock because it would crash, but that's normal for rendering since it uses the Tensor/RT cores etc.)
> 
> The XOC BIOS I'm not sure I would recommend; it's a great OC BIOS with an unlimited power limit, but sadly with that BIOS your VRAM will run at full speed all the time, which can cause a few issues
> 
> The Asus Strix White BIOS could be another option; I haven't tried it on my Asus RTX 2080Ti Strix
> 
> Hope this helps
> 
> Thanks, Jura


The Matrix BIOS cuts my power limit at 315 watts or so, otherwise I would use it like you. For some background on my setup: with my custom water cooling system I get top temps of 40°C while pulling 550 watts on my card with the Kingpin XOC BIOS. I tried about seven different BIOSes yesterday and landed back on this one as the best, power-limit-wise etc. Also no issues with my VRAM running at 3D clocks full time; I only load my overclock profile, which ramps the clocks up, when gaming.


----------



## Shewie

jura11 said:


> Hi there
> 
> Earlier RTX 2080Ti cards used Micron VRAM chips; later ones seem to use only Samsung VRAM, which usually clocks higher. On the few Micron-equipped GPUs I've seen or tried, +1175-1200MHz was the max I could clock the VRAM.
> 
> I think the max on my Asus RTX 2080Ti Strix is around +1000MHz, which puts me at 739GB/s, maybe +1015-1030MHz in the right ambient temperature. I don't think I can do +1500MHz.
> 
> Maybe flashing a different BIOS would unlock extra potential. Did you try the 4K Optimized benchmark in Unigine Superposition?
> 
> Temperatures are okay; you are hitting 61°C, which is low, so you should be able to get a good OC.
> 
> On the Matrix BIOS, if I do +115MHz on the core I will hit 2175MHz without a problem; that's the OC I'm running on my Asus RTX 2080Ti Strix.
> 
> Putting your GPU under water might unlock an extra 50-75MHz, maybe more, I'm not sure; you need to ask yourself if it's worth the extra spending.
> 
> Did you test with stock settings and with OC settings, and what differences did you see?
> 
> From that you can decide whether the OC is worth it or not.
> 
> Hard to say or recommend more; you can try a different BIOS that is compatible with your RTX 2080Ti and its I/O.
> 
> Hope this helps
> 
> Thanks, Jura


Thanks for your input.
I just realized that I was benchmarking my card on a faulty riser, which turned my PCIe 3.0 x16 into x8 1.1; you can even see this on screen (GPU-Z).
So I removed the cable and plugged the card straight into the x16 slot; with this I was able to squeeze another 5% to 10% out of the Final Fantasy benchmark.
My Superposition 4K Optimized score is 13890 points.
I have absolutely zero experience flashing a card's BIOS; I'd need to be walked through the process as well as pointed at the right BIOS.
I don't know if the manufacturer matters here, because you've offered me the Matrix BIOS and my PC sports a Gigabyte card; I just don't know if this marriage will do well? But I'm here to listen to much more experienced people.
I've ordered myself an EVGA Hybrid kit for the XC 2080 Ti. Nothing fancy, but I'll see if there is any improvement under a simple AIO or if it's just low-quality silicon. Superposition keeps stable clocks just under 2000 MHz.
As mentioned before, I haven't had a single benchmark end in a crash or freeze, so I think there is still room for improvement.
But I can't go any further with memory clocks in MSI Afterburner.
The card is also pulling 296W max in this configuration (if that information matters).

Thanks again!


----------



## sultanofswing

truehighroller1 said:


> The Matrix BIOS cuts my power limit at 315 watts or so, otherwise I would use it like you. For some background on my setup: with my custom water cooling system I get top temps of 40°C while pulling 550 watts on my card with the Kingpin XOC BIOS. I tried about seven different BIOSes yesterday and landed back on this one as the best, power-limit-wise etc. Also no issues with my VRAM running at 3D clocks full time; I only load my overclock profile, which ramps the clocks up, when gaming.


Weird; on my EVGA XC Ultra with the Kingpin XOC BIOS that is listed on the first page, no matter what I do the memory stays in 3D mode. No amount of Windows or NVIDIA power settings changes it either.

I think that if the 30 series is worth anything and EVGA releases a Kingpin version, I may just hop on it. I usually try to stay away from AIB cards due to their size and waterblock compatibility, but I may just bite the bullet regardless.


----------



## J7SC

sultanofswing said:


> Weird; on my EVGA XC Ultra with the Kingpin XOC BIOS that is listed on the first page, no matter what I do the memory stays in 3D mode. No amount of Windows or NVIDIA power settings changes it either.
> 
> I think that if the 30 series is worth anything and EVGA releases a Kingpin version, I may just hop on it. I usually try to stay away from AIB cards due to their size and waterblock compatibility, but I may just bite the bullet regardless.


 
I have been buying custom AIB / PCB cards since...forever, it seems, and (usually) haven't regretted it. They cost a bit more (and take longer to get to market after a new GPU generation releases), but the custom BIOS and cooling solutions are worth it, at least to me. This btw does not mean that 'regular / OEM' cards are no good.

That said, especially with cards such as the 2080 Ti that can hit 360W-400W out of the box in some AIB configs, their custom cooling solutions (especially water, if available) come in handy. Kingpin editions are always among the best, but with the current 2080 Ti gen they took an awfully long time to become available. Hopefully, the Kingpin Ampere / 3000 series is different on that count.


----------



## truehighroller1

Shewie said:


> Thanks for your input.
> I just realized that I was benchmarking my card on a faulty riser, which turned my PCIe 3.0 x16 into x8 1.1; you can even see this on screen (GPU-Z).
> So I removed the cable and plugged the card straight into the x16 slot; with this I was able to squeeze another 5% to 10% out of the Final Fantasy benchmark.
> My Superposition 4K Optimized score is 13890 points.
> I have absolutely zero experience flashing a card's BIOS; I'd need to be walked through the process as well as pointed at the right BIOS.
> I don't know if the manufacturer matters here, because you've offered me the Matrix BIOS and my PC sports a Gigabyte card; I just don't know if this marriage will do well? But I'm here to listen to much more experienced people.
> I've ordered myself an EVGA Hybrid kit for the XC 2080 Ti. Nothing fancy, but I'll see if there is any improvement under a simple AIO or if it's just low-quality silicon. Superposition keeps stable clocks just under 2000 MHz.
> As mentioned before, I haven't had a single benchmark end in a crash or freeze, so I think there is still room for improvement.
> But I can't go any further with memory clocks in MSI Afterburner.
> The card is also pulling 296W max in this configuration (if that information matters).
> 
> Thanks again!


What version Gigabyte card and what version BIOS?





sultanofswing said:


> Weird; on my EVGA XC Ultra with the Kingpin XOC BIOS that is listed on the first page, no matter what I do the memory stays in 3D mode. No amount of Windows or NVIDIA power settings changes it either.
> 
> I think that if the 30 series is worth anything and EVGA releases a Kingpin version, I may just hop on it. I usually try to stay away from AIB cards due to their size and waterblock compatibility, but I may just bite the bullet regardless.


Yep, the memory stays clocked at 3D speeds; I have the same issue with it. I tried a slew of BIOSes the other day for the fun of it and landed right back on the Kingpin as stated, though, because of power-draw limits causing issues with the other BIOSes I tried.



J7SC said:


> I have been buying custom AIB / PCB cards since...forever, it seems, and (usually) haven't regretted it. They cost a bit more (and take longer to get to market after a new GPU generation releases), but the custom BIOS and cooling solutions are worth it, at least to me. This btw does not mean that 'regular / OEM' cards are no good.
> 
> That said, especially with cards such as the 2080 Ti that can hit 360W-400W out of the box in some AIB configs, their custom cooling solutions (especially water, if available) come in handy. Kingpin editions are always among the best, but with the current 2080 Ti gen they took an awfully long time to become available. Hopefully, the Kingpin Ampere / 3000 series is different on that count.


I'll be getting a Galax HOF series myself, but yes, the Kingpin is nice too. They never release enough of them, though, and the people who get first dibs because of loyalty plans etc. always buy them all up and resell them for profit; it just turns me off to them, tbh. I wouldn't mind getting one if I could actually get one before all the hawkers buy them up.

Edit: I just flashed the Galax 380W BIOS and it works as it should. Everything works voltage-table-wise, the wattage pulls a proper 380 watts, and the memory clocks down as it should, etc.
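For anyone wanting to verify the same thing, a quick way to check whether the memory actually drops out of 3D clocks at idle is an nvidia-smi query (this is a generic check I use, not something specific to any one BIOS):

```shell
# Current memory clock and performance state; at an idle desktop this should
# fall back to the low idle P-state rather than staying pinned at 3D clocks.
nvidia-smi --query-gpu=clocks.mem,pstate --format=csv

# Fuller breakdown of all clock domains (graphics, SM, memory, video):
nvidia-smi -q -d CLOCK
```

If the memory clock stays at full speed with nothing running, the BIOS is holding the card in the 3D state, as described above.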


----------



## Shewie

@truehighroller1

GV-N208TWF3OC-11GC
Bios Ver: 90.02.17.00.52


----------



## truehighroller1

Shewie said:


> @truehighroller1
> 
> GV-N208TWF3OC-11GC
> Bios Ver: 90.02.17.00.52



You can safely use the one linked below, and it will boost you to 380 watts. If you're going for the highest overclock score on a cold-night run, have extreme water cooling, and know what you're doing, you can temporarily flash the Kingpin XOC 2000W BIOS for said runs, but I would flash back to the one below afterwards. Don't use the Kingpin one if you're on air.

https://www.techpowerup.com/vgabios/204557/kfa2-rtx2080ti-11264-180910

Check out the OP in this thread to see how to flash it. About halfway through, after the cards and their descriptions, there's a Show button under the how-to-flash guide. It goes through it step by step. It's easy, honestly.
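If it helps, the usual nvflash sequence looks roughly like the sketch below; the .rom filenames are placeholders, and the OP's guide is the authoritative version of the steps:

```shell
# Back up the current BIOS first (64-bit nvflash, elevated prompt).
nvflash64 --save original_backup.rom

# Temporarily disable the EEPROM write protection before flashing.
nvflash64 --protectoff

# Flash the new image; -6 overrides the PCI subsystem-ID mismatch check,
# which a cross-vendor flash (e.g. a KFA2 BIOS on a Gigabyte card) will trip.
nvflash64 -6 kfa2_380w.rom
```

Keep the backup somewhere safe so you can flash back if ports or clocks misbehave.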


----------



## J7SC

truehighroller1 said:


> (...) I'll be getting a Galax HOF series myself, but yes, the Kingpin is nice too. They never release enough of them, though, and the people who get first dibs because of loyalty plans etc. always buy them all up and resell them for profit; it just turns me off to them, tbh. I wouldn't mind getting one if I could actually get one before all the hawkers buy them up. (...)


 
Galax HOF / HOF OCL cards (or their KFA2 HOF brand in some markets) are great, but next to impossible to get in Canada, at least with a functional warranty.


----------



## Hardrock

keikei said:


> Try the new driver. There is a COD related fix.


Thanks, that was the fix. I overclocked my 2080 Ti to 1850 MHz on the core and 15500 MHz on the memory, with the power target at +125. I'm now getting a consistent 180-200 FPS at 3440 x 1440 in COD multiplayer, and I saw my card holding a constant boost of 2100 MHz during gaming at 97% utilization.


----------



## jura11

Shewie said:


> @truehighroller1
> 
> GV-N208TWF3OC-11GC
> Bios Ver: 90.02.17.00.52



Hi there 

Can you please try this BIOS for the Gigabyte RTX 2080Ti Windforce OC:

https://www.techpowerup.com/vgabios/204664/gigabyte-rtx2080ti-11264-181014

This is the Gigabyte RTX 2080Ti Windforce OC BIOS with an updated power limit; with it, your power limit will increase to 366W.

I would try this BIOS first. Don't even try the Matrix BIOS; you would lose one DP port and the HDMI port on your GPU. Is your Windforce based on the reference PCB, or is it a custom one?

I assume you are hitting the power limit or voltage limit in Superposition or any benchmark?


Hope this helps 

Thanks, Jura


----------



## jura11

truehighroller1 said:


> Matrix BIOS cut's my power limit at 315 Watts or so other wise I would use it like you. With my Water cooling system which is custom I get top temps of 40 celcius with pulling 550 watts on my card with the kingpin xoc bios just for some background on my setup. I tried about seven different BIOS yesterday and landed back on this one as the best power limit wise etc. Also no issues with my vram running at 3d clocks full time. I only load my overclock profile which ramps the clocks up when gaming.


Hi there 

If your GPU is based on the reference PCB, then the Matrix or XOC BIOS will disable one DP port and the HDMI port. The Matrix BIOS is 360W; the XOC is, I think, 1000W. I've only tested the XOC on my loop up to 550W, beyond that I didn't try, and I'm not sure if that BIOS has any ill effect on scores or the actual gaming experience.

315W with the Matrix BIOS is hard to explain; in most games I'm hitting close to the max power limit at 340-360W, though on a few occasions I don't even hit the power limit because my 5960X is the limiting factor.

I will post my screenshot later from Witcher 3, where I was trying a 2175MHz OC; I played for 4 hours without a single problem.

Yes, temperatures will help there; if you are able to stay under the 40s you should have a nice OC. In my case the first downclock starts at 37-39°C, but in some cases I have seen the first downclock at 41°C.

I tried a few different BIOSes on my old Zotac RTX 2080Ti AMP as well, so I know how it is.

Hope this helps 

Thanks, Jura


----------



## Shewie

@jura11 great success.
I was able to flash and run all the necessary benchmarks to evaluate the gains.
It went okay; I immediately hit the 370W limit while benching Time Spy.
Everything went super well: zero hiccups, frozen frames, etc.
With better cooling I think I can squeeze out even more.

The GPU AIO kit is on the way.
I was asked in another group to check the code on the memory chips, because some suggested there was a possibility that I could (somehow) have HC16 Samsung memory installed on my card, and not HC14 like all the non-Super cards.
We will see once I open the card to mount the AIO. I also need to get heatsinks for the VRM and power stages.

I can't go past +1500MHz / 8500MHz on the memory because I'm limited by the MSI Afterburner app. I still believe I could squeeze out an extra couple hundred MHz there for sure, knowing how this card has performed in all the benchmarks so far, even after the mod.
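For what it's worth, the bandwidth numbers in this thread can be sanity-checked with simple arithmetic; this is my own back-of-envelope sketch, assuming Afterburner's 7000 MHz base reading for 14 Gbps GDDR6 on the 2080 Ti's 352-bit bus:

```python
# Back-of-envelope GDDR6 bandwidth for the 2080 Ti's 352-bit bus.
# Afterburner reads 7000 MHz at stock for 14 Gbps GDDR6; offsets add to that.
def bandwidth_gbs(offset_mhz, base_mhz=7000, bus_bits=352):
    effective_mtps = 2 * (base_mhz + offset_mhz)   # DDR: two transfers per clock
    return bus_bits / 8 * effective_mtps / 1000    # bytes/transfer x MT/s -> GB/s

print(bandwidth_gbs(0))      # stock: 616.0 GB/s
print(bandwidth_gbs(1500))   # +1500: 748.0 GB/s, i.e. the ~750 GB/s figure
```

The same formula gives roughly 704 GB/s for a +1000 offset, so the exact figures people quote depend a bit on how their tool counts the offset.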


----------



## J7SC

...you can check the rated memory speed if you take the cooler off and look at the back of the memory chips --- if it is the faster Samsung used on the 2080 Super cards, it will say something like HC16, which is rated at 2000 MHz (pic below from TechPowerUp)...2000 MHz is of course BEFORE oc'ing, since even my two Micron-equipped cards bench hard at 2061 MHz VRAM in SLI/NVLink (higher VRAM is possible, but with lower bench results). The latest Samsung "..HC16" should have much more headroom than that.

...in any case, it looks like you got past the original power limit with the BIOS suggestions jura11 made :thumb: With a very nice card like yours, extensive water cooling will also pay significant dividends!


----------



## kithylin

Shewie said:


> I can't go past +1500MHz / 8500MHz on the memory because I'm limited by the MSI Afterburner app. I still believe I could squeeze out an extra couple hundred MHz there for sure, knowing how this card has performed in all the benchmarks so far, even after the mod.


Try EVGA PrecisionX; it has a higher limit on the memory slider than Afterburner does. Remember to back up your profiles and completely uninstall Afterburner first.


----------



## JustinThyme

Shewie said:


> @jura11 great success.
> I was able to flash and run all the necessary benchmarks to evaluate the gains.
> It went okay; I immediately hit the 370W limit while benching Time Spy.
> Everything went super well: zero hiccups, frozen frames, etc.
> With better cooling I think I can squeeze out even more.
> 
> The GPU AIO kit is on the way.
> I was asked in another group to check the code on the memory chips, because some suggested there was a possibility that I could (somehow) have HC16 Samsung memory installed on my card, and not HC14 like all the non-Super cards.
> We will see once I open the card to mount the AIO. I also need to get heatsinks for the VRM and power stages.
> 
> I can't go past +1500MHz / 8500MHz on the memory because I'm limited by the MSI Afterburner app. I still believe I could squeeze out an extra couple hundred MHz there for sure, knowing how this card has performed in all the benchmarks so far, even after the mod.


If you can get that on air, you will definitely do better under water. Mine would always throttle on thermals no matter what power limit I put in, until they were nice and cool, with cores never passing 45°C on a looped stress test. In most benches they don't pass 42°C, and it's nice and quiet. If it were me, I'd skip the AIO and just dive into a custom loop.


----------



## Medizinmann

Shewie said:


> @jura11 great success.
> I was able to flash and run all the necessary benchmarks to evaluate the gains.
> It went okay; I immediately hit the 370W limit while benching Time Spy.
> Everything went super well: zero hiccups, frozen frames, etc.
> With better cooling I think I can squeeze out even more.
> 
> The GPU AIO kit is on the way.
> I was asked in another group to check the code on the memory chips, because some suggested there was a possibility that I could (somehow) have HC16 Samsung memory installed on my card, and not HC14 like all the non-Super cards.
> We will see once I open the card to mount the AIO. I also need to get heatsinks for the VRM and power stages.
> 
> I can't go past +1500MHz / 8500MHz on the memory because I'm limited by the MSI Afterburner app. I still believe I could squeeze out an extra couple hundred MHz there for sure, knowing how this card has performed in all the benchmarks so far, even after the mod.


Very impressive - I can't get anything beyond +775 MHz stable on the memory under water... The GPU I can get up to 2145 MHz, though, with temps never exceeding 42°C.
I use the KFA2 BIOS on my Palit Gaming Pro OC, with a power limit of 380W, and see up to 390W in HWiNFO.



JustinThyme said:


> If you can get that on air, you will definitely do better under water. Mine would always throttle on thermals no matter what power limit I put in, until they were nice and cool, with cores never passing 45°C on a looped stress test. In most benches they don't pass 42°C, and it's nice and quiet. If it were me, I'd skip the AIO and just dive into a custom loop.


Or do something in between…
Alphacool has a system of quick-connect AIOs which can be expanded as one wishes.
I use an Eisbaer 360mm for my CPU and an Eiswolf GPX Pro 240mm for my GPU, both in one loop with an added 120mm rad, all set up with Noctua NF-A12s in push & pull…
Okay, it isn't a cost saver, but it was easy to install.

Greetings,
Medizinmann


----------



## Kaltenbrunner

So the 1080 Ti was $800 and the RTX 2080 Ti was $1,200???

That's crazy


----------



## Shewie

@J7SC

Sadly, no secret sauce here. It's the HC14 Samsung memory chip.
I'm almost done with my setup, but my bracket chipped off when I was adjusting the screw size.

The good thing is that Gigabyte's cold plate is detachable from the heatsink, so it can, and will, be used as a handy extension to my project.
I didn't want to go full water cooling now, because the imminent 3xxx launch within the next quarter will make me switch...
After searching some completed listings on eBay, a water block looks like a non-recoverable investment, unless the new card were fully supported (which I doubt).

I'm surprisingly happy with the quality of the ID-Cooling ICEKIMO 240VGA solution; it's robust, with a full metal cover and a quiet fan and pump.
I gave it a dry run for two hours yesterday, sitting at my desk, to check that there is no "leakage" situation.
I paid £65 brand new and consider it great value for the money so far.

I will test it later, once my bits and pieces arrive, to finally have this working.


@Medizinmann
The performance of those memory chips is gobsmacking. I did a last run yesterday at +1700; again no hiccups, but I was again hitting the 370W limit, and it was certainly keeping me from pulling anything more out of the card.
I also saw an almost 10°C jump from my previous setup (finally seeing the thermal limitations of the small Gigabyte cooling solution), so I decided to wait for my AIO and extra heatsinks before giving it another go.


----------



## J7SC

Shewie said:


> @J7SC
> 
> Sadly, no secret sauce here. It's the HC14 Samsung memory chip.
> I'm almost done with my setup, but my bracket chipped off when I was adjusting the screw size. (...) I'm surprisingly happy with the quality of the ID-Cooling ICEKIMO 240VGA solution; it's robust, with a full metal cover and a quiet fan and pump.
> I gave it a dry run for two hours yesterday, sitting at my desk, to check that there is no "leakage" situation.
> I paid £65 brand new and consider it great value for the money so far.
> 
> I will test it later, once my bits and pieces arrive, to finally have this working. (...) I also saw an almost 10°C jump from my previous setup (finally seeing the thermal limitations of the small Gigabyte cooling solution), so I decided to wait for my AIO and extra heatsinks before giving it another go.


 
...doesn't really matter re. HC14 if they perform like HC16s. As to cooling, I recently posted the Tom's Hardware table below showing max clocks at different temps, given NVIDIA's boost algorithms. Any temp improvement is useful, even via a 'custom' approach with an AIO, VRM fan(s) and aftermarket heatsinks.
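The temperature/clock behavior in that kind of table can be sketched with a toy model; the ~15 MHz bin size and ~5 °C spacing here are my own assumptions from observed Turing behavior, not official NVIDIA numbers:

```python
# Toy model of GPU Boost shaving one clock bin as the core warms past
# successive temperature thresholds. Bin size (~15 MHz) and spacing (~5 C)
# are assumptions based on observed Turing behavior, not an NVIDIA spec.
def boosted_clock(max_mhz, temp_c, first_drop_c=38, c_per_bin=5, mhz_per_bin=15):
    if temp_c < first_drop_c:
        return max_mhz                               # below threshold: full boost
    bins = 1 + (temp_c - first_drop_c) // c_per_bin  # bins lost so far
    return max_mhz - bins * mhz_per_bin

print(boosted_clock(2175, 35))   # 2175: no bins lost under water
print(boosted_clock(2175, 61))   # 2100: five bins lost on an air cooler
```

It's crude, but it shows why people in this thread see the first downclock around 37-39 °C and why water cooling keeps cards pinned at their top bin.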


----------



## sultanofswing

Doing some more testing on my XC Ultra. Guess I didn't win the silicon lottery.
2160 MHz produces core artifacts regardless of voltage; I tested 2160 from 1050-1093 mV without success.

This is with keeping temps just below 40°C, usually around 38-39°C.


----------



## Bill Shull

I have joined the club and become the proud owner of a 2080 Ti Kingpin card. Long story as to why I ended up with such an expensive card, but here I am.

Last night, after watching GN and a few videos, I attempted to OC the card.

Memory at 8475 MHz and core at 2190 MHz, and this was very stable in GTA V and Far Cry 5.

My card stayed right around 31-35°C.

Any suggestions on settings to break 2200?


----------



## Kalm_Traveler

Bill Shull said:


> I have joined the club and become the proud owner of a 2080 Ti Kingpin card. Long story as to why I ended up with such an expensive card, but here I am.
> 
> Last night, after watching GN and a few videos, I attempted to OC the card.
> 
> Memory at 8475 MHz and core at 2190 MHz, and this was very stable in GTA V and Far Cry 5.
> 
> My card stayed right around 31-35°C.
> 
> Any suggestions on settings to break 2200?


How are you keeping it that cool?

With the stock AIO in ~20°C ambient I was still hitting 42-45°C at about 2150 MHz.


----------



## truehighroller1

Kalm_Traveler said:


> How are you keeping it that cool?
> 
> With the stock AIO in ~20°C ambient I was still hitting 42-45°C at about 2150 MHz.


Power management mode: optimal power, adaptive perhaps?


----------



## kithylin

Kalm_Traveler said:


> How are you keeping it that cool?
> 
> With the stock AIO in ~20°C ambient I was still hitting 42-45°C at about 2150 MHz.


They're probably using G-Sync, V-Sync, or the "new" NVIDIA frame rate limiter and keeping it on, like most folks should for gaming. If you're playing with no frame rate limiter at all, then that would be why your card runs hotter.


----------



## Kalm_Traveler

kithylin said:


> They're probably using G-Sync, V-Sync, or the "new" NVIDIA frame rate limiter and keeping it on, like most folks should for gaming. If you're playing with no frame rate limiter at all, then that would be why your card runs hotter.


Ah, I game with G-Sync on, but for benchmarking you're supposed to turn it off. I can't hold 2150 MHz at normal room temperatures with my Kingpin 2080 Ti actually pushing the GPU, hence the question. I guess if you are artificially limiting the load, it makes sense that you can run higher clocks at lower temps.


----------



## kithylin

Kalm_Traveler said:


> Ah, I game with G-Sync on, but for benchmarking you're supposed to turn it off. I can't hold 2150 MHz at normal room temperatures with my Kingpin 2080 Ti actually pushing the GPU, hence the question. I guess if you are artificially limiting the load, it makes sense that you can run higher clocks at lower temps.


It's not about artificially limiting the load for no reason; it's about matching your monitor's refresh rate. There's no reason to go above what your monitor can render. Without some sort of limiter matching what your monitor can do, you just pointlessly render extra frames and make your card run hot for nothing. This is basic PC gaming 101 and applies to all video cards.


----------



## Kalm_Traveler

kithylin said:


> It's not about artificially limiting the load for no reason; it's about matching your monitor's refresh rate. There's no reason to go above what your monitor can render. Without some sort of limiter matching what your monitor can do, you just pointlessly render extra frames and make your card run hot for nothing. This is basic PC gaming 101 and applies to all video cards.


I know that (I've been building my own rigs since 2000). My apologies for not being clearer: I am referring to benchmarking. I did not say that artificially limiting the load is for no reason, but it IS an artificial limitation, and it does not give a proper indication of what the hardware can actually handle at full load.

Clarifying that the person I quoted was talking about a G-Sync-(artificially-)limited load was my acknowledgement that we are not talking about the same overclocking scenario.


----------



## sultanofswing

G-Sync doesn't limit your frames unless you set up a limit; going over the G-Sync max refresh rate just turns G-Sync off.


----------



## J7SC

Below is some useful info on GPU and CPU degradation (temps, volts, current), with special mention of the 2080 Ti. Buildzoid has a communication style that takes some getting used to, but he really knows his stuff. His channel is worth following for PCB breakdowns, shunt mods and more.


----------



## jura11

Medizinmann said:


> Very impressive - I can't get anything beyond +775 MHz stable on the memory under water... The GPU I can get up to 2145 MHz, though, with temps never exceeding 42°C.
> I use the KFA2 BIOS on my Palit Gaming Pro OC, with a power limit of 380W, and see up to 390W in HWiNFO.
> 
> 
> 
> Or do something in between…
> Alphacool has a system of quick-connect AIOs which can be expanded as one wishes.
> I use an Eisbaer 360mm for my CPU and an Eiswolf GPX Pro 240mm for my GPU, both in one loop with an added 120mm rad, all set up with Noctua NF-A12s in push & pull…
> Okay, it isn't a cost saver, but it was easy to install.
> 
> Greetings,
> Medizinmann


Hi there 

From personal experience, I've tested two Palit RTX 2080Ti Gaming Pro OCs. Neither would do 2100MHz with the stock BIOS or with the Galax 380W BIOS; one would do 2085MHz max and the other 2115MHz max, with +800MHz on the VRAM on one and +700MHz max on the other. Both ran under water: temperatures were 42-45°C on the first (that loop used a single 360mm radiator cooling the RTX 2080Ti plus an 8700K at 5.2GHz), and on the second I saw 36-38°C max. I tried a few blocks on these GPUs, Bykski, EKWB and Phanteks; the best was the Phanteks, with the Bykski a close second and the EKWB last. On the second build I tested all three blocks.

What max OC are you getting on your Palit RTX 2080Ti Gaming Pro OC?

Hope this helps 

Thanks, Jura


----------



## jura11

Yesterday I ran a few tests and benchmarks, played a few games, and mainly tested stability in games and rendering. Pretty happy with the final results, although 442.01 is not the best driver for benching; 430.86 is the better one for sure.

2190MHz / +700MHz on VRAM (didn't push clocks beyond +700MHz)

2175MHz / +700MHz on VRAM; this one is my daily OC

The BIOS I'm using is the Asus Matrix BIOS.

Hope this helps

Thanks, Jura


----------



## sultanofswing

How are you managing only 367-ish watts at 2190 MHz and 1.094 volts?
My card at the same clock and voltage, with a temp around 38°C, is in the 380-watt range.
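For what it's worth, a rough dynamic-power rule of thumb (P scales with V² x f) shows why a small voltage difference moves the wattage more than the clock does; this ignores leakage, VRM efficiency and how each BIOS meters power, so treat it strictly as a sketch:

```python
# Rough CMOS dynamic-power scaling: P2 ~= P1 * (V2/V1)^2 * (f2/f1).
# Ignores static leakage, VRM efficiency and per-BIOS power metering.
def scale_power(p_w, v_from, v_to, f_from_mhz, f_to_mhz):
    return p_w * (v_to / v_from) ** 2 * (f_to_mhz / f_from_mhz)

# Example: what ~367 W at 2190 MHz / 1.094 V would look like stepped
# down to a hypothetical 1.05 V / 2130 MHz operating point:
print(round(scale_power(367, 1.094, 1.05, 2190, 2130)))  # ~329 W
```

Differences in how two BIOSes count board power can easily swamp a gap of this size, which is probably the real answer here.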


----------



## jura11

sultanofswing said:


> How are you managing only 367-ish watts at 2190 MHz and 1.094 volts?
> My card at the same clock and voltage, with a temp around 38°C, is in the 380-watt range.


Hi there 

Assuming you are running the Galax 380W BIOS? If so, around 380W is what I would expect.

As for how I'm managing only 367W at 2190MHz at 1.09V: I'm on a totally different BIOS to yours. I'm running the Matrix BIOS, which I think is a 360W BIOS.


Hope this helps 

Thanks, Jura


----------



## Medizinmann

jura11 said:


> Hi there
> 
> From personal experience, I've tested two Palit RTX 2080Ti Gaming Pro OCs. Neither would do 2100MHz with the stock BIOS or with the Galax 380W BIOS; one would do 2085MHz max and the other 2115MHz max, with +800MHz on the VRAM on one and +700MHz max on the other. Both ran under water: temperatures were 42-45°C on the first (that loop used a single 360mm radiator cooling the RTX 2080Ti plus an 8700K at 5.2GHz), and on the second I saw 36-38°C max. I tried a few blocks on these GPUs, Bykski, EKWB and Phanteks; the best was the Phanteks, with the Bykski a close second and the EKWB last. On the second build I tested all three blocks.
> 
> What max OC are you getting on your Palit RTX 2080Ti Gaming Pro OC?
> 
> Hope this helps
> 
> Thanks, Jura


With Time Spy/Firestrike/Port Royal it is 2145 MHz on the GPU and +770 MHz max on the VRAM.

I use an Alphacool Eiswolf GPX Pro 240mm in a loop with an Alphacool Eisbaer 360 + an additional 120mm rad, all with Noctuas in push & pull, and LM as the TIM on the GPU; temps won't exceed 42°C (with 24°C ambient) in Time Spy, even over multiple runs.

I use the Galax/KFA2 380W BIOS, and remember the GPU pulling up to 390W in total.

I will retest it this afternoon…and look at the numbers reported by HWiNFO.

Greetings,
Medizinmann


----------



## Medizinmann

jura11 said:


> Yesterday I ran a few tests and benchmarks, played a few games, and mainly tested stability in games and rendering. Pretty happy with the final results, although 442.01 is not the best driver for benching; 430.86 is the better one for sure.
> 
> 2190MHz / +700MHz on VRAM (didn't push clocks beyond +700MHz)
> 
> 2175MHz / +700MHz on VRAM; this one is my daily OC
> 
> The BIOS I'm using is the Asus Matrix BIOS.
> 
> Hope this helps
> 
> Thanks, Jura


Would this run on a reference PCB like the Palit Gaming Pro OC?

Greetings,
Medizinmann


----------



## jura11

Medizinmann said:


> Would this run on a reference PCB like the Palit Gaming Pro OC?
> 
> Greetings,
> Medizinmann


Hi there 

Yes, it would, but you would lose one DP port and the HDMI port on reference-PCB GPUs, so you need to decide whether it's worth it or not.

I tried the Galax 380W BIOS on my Asus RTX 2080Ti Strix as well, and on it I lost a DP port and the HDMI port.

The Matrix BIOS seems best for my Asus RTX 2080Ti Strix, although I haven't tried the Asus RTX 2080Ti Strix White BIOS, which seems to have a higher power limit.

Hope this helps 

Thanks, Jura


----------



## jura11

Medizinmann said:


> With Time Spy/Firestrike/Port Royal it is 2145 MHz on the GPU and +770 MHz max on the VRAM.
> 
> I use an Alphacool Eiswolf GPX Pro 240mm in a loop with an Alphacool Eisbaer 360 + an additional 120mm rad, all with Noctuas in push & pull, and LM as the TIM on the GPU; temps won't exceed 42°C (with 24°C ambient) in Time Spy, even over multiple runs.
> 
> I use the Galax/KFA2 380W BIOS, and remember the GPU pulling up to 390W in total.
> 
> I will retest it this afternoon…and look at the numbers reported by HWiNFO.
> 
> Greetings,
> Medizinmann


Hi there 

That's a great OC. A friend's Palit RTX 2080Ti Gaming Pro OC would do 2115MHz at most, and my old Zotac RTX 2080Ti AMP wouldn't do more than 2115MHz either; on both I have run the Galax 380W BIOS. 

There was no way I could push my Zotac RTX 2080Ti AMP to 2145MHz or more. I think the max I have seen is 2130MHz, but that was in very low ambient temperatures, like 15-17°C. 

My friend's Palit RTX 2080Ti also wouldn't do more than +800MHz on VRAM, not sure why. We both checked whether he is running Samsung or Micron VRAM, and he is definitely running Samsung memory.

Great setup there. You have 2x 360mm radiators, which should be more than enough for a great OC on the GPU and CPU, and your water delta T should be good too. 

That's something I never tried, or rather chickened out of: using LM. I'm still using Kryonaut on my GPUs, and on the other 3 GPUs I'm using Noctua NT-H1, which seems to work fine there with no issues. When I rebuild the loop, I will probably try Thermalright TFX. 

Hope this helps 

Thanks, Jura


----------



## Medizinmann

jura11 said:


> Hi there
> 
> Yes, it would, but you would lose one DP port and the HDMI port on reference-PCB GPUs, so you need to decide whether it's worth it or not.
> 
> I tried the Galax 380W BIOS on my Asus RTX 2080Ti Strix as well, and on my Asus RTX 2080Ti Strix I lost a DP port and the HDMI port.
> 
> The Matrix BIOS seems to be the best for my Asus RTX 2080Ti Strix, although I haven't tried the Asus RTX 2080Ti Strix White BIOS, which seems to have a higher power limit.
> 
> Hope this helps
> 
> Thanks, Jura


Okay, I will stick to the KFA2 BIOS then, as it works with all DP ports and HDMI, and I need all of them: this is a medical imaging workstation setup with 3x medical displays plus a gaming monitor, so gaming is just a side product… :thumb:



jura11 said:


> Hi there
> 
> That's a great OC. A friend's Palit RTX 2080Ti Gaming Pro OC would do 2115MHz at most, and my old Zotac RTX 2080Ti AMP wouldn't do more than 2115MHz either; on both I have run the Galax 380W BIOS.
> 
> There was no way I could push my Zotac RTX 2080Ti AMP to 2145MHz or more. I think the max I have seen is 2130MHz, but that was in very low ambient temperatures, like 15-17°C.
> 
> My friend's Palit RTX 2080Ti also wouldn't do more than +800MHz on VRAM, not sure why. We both checked whether he is running Samsung or Micron VRAM, and he is definitely running Samsung memory.
> 
> Great setup there. You have 2x 360mm radiators, which should be more than enough for a great OC on the GPU and CPU, and your water delta T should be good too.
> 
> That's something I never tried, or rather chickened out of: using LM. I'm still using Kryonaut on my GPUs, and on the other 3 GPUs I'm using Noctua NT-H1, which seems to work fine there with no issues. When I rebuild the loop, I will probably try Thermalright TFX.
> 
> Hope this helps
> 
> Thanks, Jura


I use LM on the GPU and CPU, and have painted everything else around the CPU/GPU with conformal coating as a precaution...

Yeah thanks - will stick to the KFA2 BIOS!

Greetings,
Medizinmann


----------



## mardon

Joining the club tomorrow. Very excited. Got a Gigabyte Gaming OC and a Kraken G12 to go on it! Let the BIOS mods and overclocking begin!

Am I best just getting the latest Gigabyte BIOS, if it's not on there already, or the higher-power KFA2 one? The card is second-hand, so hopefully it doesn't have a BIOS lock on it.


----------



## Robostyle

That's cool. Totally. Raytracing realism I always dreamed of. 


@feat 2080Ti


----------



## BulletSponge

I'm a little late to the party but add me as well. This card's a beast.


----------



## JustinThyme

BulletSponge said:


> I'm a little late to the party but add me as well. This card's a beast.


Running two of them with Heatkiller blocks.


----------



## kamueone kamue

Sorry, how do I delete a post?


----------



## kamueone kamue

ReFFrs said:


> Works with AORUS 2080 Ti Xtreme Waterforce
> 
> Tested with a core clock max of 2130 MHz @ 1.093V / mem 16340 MHz
> 
> Power limit @ 420W (42% of the XOC max limit of 1000W)
> 
> Beware: peak power consumption reached 48% (480W) in Time Spy Extreme, which is considered the most power-hungry test. Then clocks started to drop, of course, following the power limit, so it's only a short-time peak.
> 
> You can see settings here and the result:


I also have an Aorus Waterforce hybrid AIO

https://www.gigabyte.com/de/Graphics-Card/GV-N208TAORUSX-W-11GC#kf

but when I flash the BIOS you mentioned, the first HDMI port shows only a DVI signal, the second shows nothing, and the third HDMI port still has the second BIOS flashed on it (dual-BIOS card).
I have no input other than HDMI on my monitor.

Hope someone can help. I would need a BIOS without the power limitation. All BIOSes I flashed had the same issue.

Thanks in advance.


----------



## kamueone kamue

jura11 said:


> Hi there
> 
> From personal experience, I have tested two Palit RTX 2080Ti Gaming Pro OCs. One of them wouldn't do 2100MHz with the stock BIOS or with the Galax 380W BIOS; one would do 2085MHz max and the other 2115MHz max, with +800MHz on VRAM on one and +700MHz max on the other. Both of them ran under water: temperatures were 42-45°C on the first (on this loop I used a single 360mm radiator which cooled the RTX 2080Ti plus an 8700K at 5.2GHz), and on the second I saw 36-38°C max. I tried a few blocks on these GPUs, like Bykski, EKWB and Phanteks; the best was the Phanteks, with the Bykski a close second and the EKWB last. On the second build I tested all these blocks.
> 
> What max OC are you getting on yours Palit RTX 2080Ti Gaming Pro OC?
> 
> Hope this helps
> 
> Thanks, Jura


Hi Jura,

I have the Palit 2080 Ti Dual, and it should be similar to your card. I have looked at mine: it's an A-chip under the cooler.

So my question is: do you know the BIOS with the highest power limit that will NOT disable the HDMI output, with control of memory and GPU clocks? Because my monitor only has HDMI input.

I would try flashing it and report the results.

My other RTX 2080 Ti is an Aorus Xtreme Waterforce hybrid AIO. Every BIOS I have flashed so far disabled the HDMI port or turned it into a DVI signal with 1080p max.

Maybe someone knows what to do? 

Thank you all. Best 2080 Ti thread ever.


----------



## Stuf zor

Hey, I also own the Aorus Xtreme Waterforce AIO 2080 Ti. So far I have tried a lot of BIOSes to improve power, but have had no luck finding a good one. I really hope to find one that lets everything work like it should on my card while increasing power. If anyone knows a working one, please share.


----------



## kamueone kamue

So you have one that increases the power target, with working HDMI?


----------



## olrdtg

I asked this in the Titan RTX owners club thread, but it's been pretty dead, and seeing as how the PCB and shunt modding are the same, I thought I'd ask here as well. I purchased some Panasonic 3M0 current-sense resistors, and I've seen people both replacing the resistors on the card with these and soldering them on top. Am I able to stack these 3M0 resistors on the 5M0 ones, or do I have to remove the 5M0 ones?


----------



## MrTOOSHORT

olrdtg said:


> I asked this in the Titan RTX owners club thread, but it's been pretty dead, and seeing as how the PCB and shunt modding are the same, I thought I'd ask here as well. I purchased some Panasonic 3M0 current-sense resistors, and I've seen people both replacing the resistors on the card with these and soldering them on top. Am I able to stack these 3M0 resistors on the 5M0 ones, or do I have to remove the 5M0 ones?


Same as the 2080 Ti; replace with 3M0:

*https://www.overclock.net/forum/27803830-post24.html*

*https://www.overclock.net/forum/69-nvidia/1714864-2080-ti-working-shunt-mod.html*

I replaced mine on my prior Founders 2080 Ti; it worked great.
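For anyone weighing stacking versus replacing: the card computes board power from the voltage drop across its current-sense (shunt) resistors, so lowering the effective resistance makes it under-read power by the same factor. A quick sketch of the arithmetic (assuming, as in the posts above, 5 mΩ stock shunts and 3 mΩ Panasonic parts; actual values on a given PCB may differ):

```python
# Shunt-mod arithmetic: power is sensed via V/R across the shunt, so the card
# under-reads real power by (R_stock / R_effective).

def parallel(r1, r2):
    """Effective resistance of two resistors stacked (in parallel)."""
    return r1 * r2 / (r1 + r2)

R_STOCK = 0.005   # 5 mOhm original shunt (assumed)
R_ADDED = 0.003   # 3 mOhm Panasonic current-sense resistor

stacked = parallel(R_STOCK, R_ADDED)   # stacking on top -> ~1.875 mOhm
scale_stacked = R_STOCK / stacked      # card under-reads by ~2.67x
scale_replaced = R_STOCK / R_ADDED     # replacing outright -> ~1.67x

print(f"stacked:  {stacked * 1000:.3f} mOhm, power under-read x{scale_stacked:.2f}")
print(f"replaced: {R_ADDED * 1000:.1f} mOhm, power under-read x{scale_replaced:.2f}")
```

So stacking is electrically valid, but it shifts the reading much further than replacing: with these assumed values, a 250W limit would correspond to roughly 667W of real draw when stacked, versus about 417W when replaced.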


----------



## jura11

kamueone kamue said:


> hi Jura,
> 
> i have the Palit 2080 ti Dual. And it should be similar to your card. I have looked at mine. Its an A-chip under the cooler.
> 
> So my question is, do you know a Bios with the highest Powerlimit, that will NOT disable the HDMI output.With control of Memory and GPU clock. Because my monitor only has Hdmi input.
> 
> I would try flashing and report the results.
> 
> My other RTX 2080 ti is an Aorus Xtreme Waterforce Hybrid AIO. Every Bios i flashed so far, disabled the HDMI Port or made a DVI Signal with 1080p max out of it.
> 
> Maybe someone know what to do?
> 
> thank you all. Best 2080ti thread ever.



Hi there 

I would personally try the Galax 380W BIOS on the Palit RTX 2080Ti. If it's based on the reference PCB, this BIOS shouldn't disable the HDMI or DP ports; it's one of the highest-power-limit BIOSes for reference PCBs. 

For the RTX 2080Ti Aorus Xtreme Waterforce, I'm not sure. The stock Aorus Xtreme Waterforce has a power limit of 366W, I think, which should be OK there. I'm running the Matrix BIOS on my Asus RTX 2080Ti Strix, which is 360W or so, and I can do 2175-2190MHz on it; in lower ambient temperatures it will do 2205MHz. 

I would recommend checking the I/O connectors on your RTX 2080Ti and comparing them to other RTX 2080Tis; if the I/O matches, you can try that BIOS.

I'm pretty sure someone over here is running an Aorus Xtreme Waterforce, and you can ask him.

Hope this helps 

Thanks, Jura


----------



## kamueone kamue

jura11 said:


> Hi there
> 
> I would personally try the Galax 380W BIOS on the Palit RTX 2080Ti. If it's based on the reference PCB, this BIOS shouldn't disable the HDMI or DP ports; it's one of the highest-power-limit BIOSes for reference PCBs.
> 
> For the RTX 2080Ti Aorus Xtreme Waterforce, I'm not sure. The stock Aorus Xtreme Waterforce has a power limit of 366W, I think, which should be OK there. I'm running the Matrix BIOS on my Asus RTX 2080Ti Strix, which is 360W or so, and I can do 2175-2190MHz on it; in lower ambient temperatures it will do 2205MHz.
> 
> I would recommend checking the I/O connectors on your RTX 2080Ti and comparing them to other RTX 2080Tis; if the I/O matches, you can try that BIOS.
> 
> I'm pretty sure someone over here is running an Aorus Xtreme Waterforce, and you can ask him.
> 
> Hope this helps
> 
> Thanks, Jura


Thank you very much for the info.

For the Palit, I would try the Galax HOF 450W BIOS. Or is that too much? I would try this one:


https://www.techpowerup.com/vgabios/209434/galax-rtx2080ti-11264-181127


For the Aorus Waterforce: I have checked the I/O connections of all of the cards, and there is no card with 3x HDMI, 3x DisplayPort and 1x USB-C, and nothing looking even close.

Maybe someone can help.

thank you


----------



## J7SC

kamueone kamue said:


> Thank you very much for the info.
> 
> For the Palit, I would try the Galax HOF 450W BIOS. Or is that too much? I would try this one:
> 
> 
> https://www.techpowerup.com/vgabios/209434/galax-rtx2080ti-11264-181127
> 
> 
> For the Aorus Waterforce: I have checked the I/O connections of all of the cards, and there is no card with 3x HDMI, 3x DisplayPort and 1x USB-C, and nothing looking even close.
> 
> Maybe someone can help.
> 
> Thank you


 
Have you tried the Aorus Waterforce Xtreme 'WB' (= full factory waterblock) BIOS? The Aorus AIO, WB and regular 'air' versions of these cards seem to have the same PCB, including I/O. Now, it was my understanding that they all pull about the same watts, but then again I run two of those 'WB' full factory waterblock cards and regularly pull between 375W and 380W, per the pic below. With this in mind, the 380W Galax doesn't make much sense, also in light of the I/O port issues referenced before. 

Some of the XOC BIOSes (Asus, KP and perhaps Galax) have been known to work on these PCBs, but apparently with the I/O issues. Besides, you really would want excessive cooling for those. Speaking of cooling, have you considered changing the fans on the AIO rad, or perhaps even going push/pull? Extra cooling really helps with those (and really, any 2080 Ti). I recently posted some graphs from THW which showed very significant MHz gains with reduced temps (reposted below).


----------



## kamueone kamue

J7SC said:


> Have you tried the Aorus Waterforce Xtreme 'WB' (= full factory waterblock) BIOS? The Aorus AIO, WB and regular 'air' versions of these cards seem to have the same PCB, including I/O. Now, it was my understanding that they all pull about the same watts, but then again I run two of those 'WB' full factory waterblock cards and regularly pull between 375W and 380W, per the pic below. With this in mind, the 380W Galax doesn't make much sense, also in light of the I/O port issues referenced before.
> 
> Some of the XOC BIOSes (Asus, KP and perhaps Galax) have been known to work on these PCBs, but apparently with the I/O issues. Besides, you really would want excessive cooling for those. Speaking of cooling, have you considered changing the fans on the AIO rad, or perhaps even going push/pull? Extra cooling really helps with those (and really, any 2080 Ti). I recently posted some graphs from THW which showed very significant MHz gains with reduced temps (reposted below).



Thank you.

More cooling would make sense, thanks.

The AIO temps are very good. I am pushing the card to the power limit; that's why I cannot get more than around 2150 MHz. At around 2200 MHz it will just hang or crash. 

The power limit of both cards, the Aorus Waterblock and the Aorus AIO, is 366 watts. My card runs at 370-375 watts; when the power limit is reached, it drops the GPU clock.

This card could do so much more, but sadly my hands are tied. 

Can anybody point me in a direction where I can learn more about BIOS modding? I would try it myself; I would take the risk, no problem at all. There should be a way to re-sign the patched BIOS without the "master key".

I would also hardware-flash it if needed.

I hope someone can point me in a direction.

Thank you,


----------



## J7SC

kamueone kamue said:


> Thank you.
> 
> More cooling would make sense, thanks.
> 
> The AIO temps are very good. I am pushing the card to the power limit; that's why I cannot get more than around 2150 MHz. At around 2200 MHz it will just hang or crash. (...) Can anybody point me in a direction where I can learn more about BIOS modding? I would try it myself; I would take the risk, no problem at all. There should be a way to re-sign the patched BIOS without the "master key".
> 
> I would also hardware-flash it if needed. I hope someone can point me in a direction. Thank you,


 
I can share your frustration, what with one of my cards doing some (very light) render testing at over 2230MHz, per below, and the other not that far behind, all on the stock BIOS. But I doubt that you can rewrite this BIOS yourself, even apart from some hardwired NVBoost parameters in use, i.e. temps. 

Still, if someone points you in the right direction and there is a BIOS solution for these PCBs (apart from the referenced XOC BIOSes and their I/O limitations), I would be elated. All that said, for now I am focusing on heavy temp control and on multi-GPU checkerboard rendering for my two cards. Frankly, that's plenty for me (for now)


----------



## kamueone kamue

J7SC said:


> kamueone kamue said:
> 
> 
> 
> Thank you.
> 
> More cooling would make sense, thanks.
> 
> The AIO temps are very good. I am pushing the card to the power limit; that's why I cannot get more than around 2150 MHz. At around 2200 MHz it will just hang or crash. (...) Can anybody point me in a direction where I can learn more about BIOS modding? I would try it myself; I would take the risk, no problem at all. There should be a way to re-sign the patched BIOS without the "master key".
> 
> I would also hardware-flash it if needed. I hope someone can point me in a direction. Thank you,
> 
> 
> I can share your frustration, what with one of my cards doing some (very light) render testing at over 2230MHz, per below, and the other not that far behind, all on the stock BIOS. But I doubt that you can rewrite this BIOS yourself, even apart from some hardwired NVBoost parameters in use, i.e. temps.
> 
> Still, if someone points you in the right direction and there is a BIOS solution for these PCBs (apart from the referenced XOC BIOSes and their I/O limitations), I would be elated. All that said, for now I am focusing on heavy temp control and on multi-GPU checkerboard rendering for my two cards. Frankly, that's plenty for me (for now)



Those are really nice frequencies you can run on your cards.

I would flash the BIOS of the WB version you have. But then would the AIO/pump not work anymore?


----------



## Emf0r

Hey, so I've been snooping around for the past couple of days trying to wrap my head around it. One thing I noticed is that the OP guide doesn't mention turning off the drivers for his card, even though he isn't in DOS. Does that not need to be done with the patched version? Also, with the new XUSB, is there not an option to do this in DOS for safety?


----------



## sultanofswing

So, whoever posted the HOF XOC BIOS on TechPowerUp, if you are in here, something isn't right with it.
I USB-programmed the BIOS about 3 weeks ago, and everything was perfect with it other than the BSOD if you saved a profile or tried to lower the power target.

I have since switched to a few different BIOSes and ended up USB-programming back to the HOF XOC.
I notice now there is a disclaimer in the description, and the BIOS will no longer allow the PC to wake from sleep without restarting on its own, which it did not do the first time I flashed it, so it must have been edited.

Also, the first time I flashed that BIOS, right before the OS loaded I would get a black screen showing the Nvidia BIOS and BIOS number, and now it does not do that.

Hopefully you are in here and can take a look at it; if not, it kind of sucks, because it has ruined the BIOS.

I have USB-programmed it 4 times now and cannot get the same result that I did the first time I programmed it.


----------



## J7SC

kamueone kamue said:


> That is really nice the frequencies you can run on your cards.
> 
> I would flash the bios of the WB version you have. But then the AIO / Pump would not work anymore?


 
...I don't know, as I've only ever been on the stock WB BIOS. But if you don't mind 'flashing for fun', why not give it a go and (cautiously watching) see what happens to the AIO pump start-up. Here's a link at TechPowerUp for it: https://www.techpowerup.com/vgabios/206789/gigabyte-rtx2080ti-11264-181022


----------



## kamueone kamue

J7SC said:


> ...I don't know, as I've only ever been on the stock WB BIOS. But if you don't mind 'flashing for fun', why not give it a go and (cautiously watching) see what happens to the AIO pump start-up. Here's a link at TechPowerUp for it: https://www.techpowerup.com/vgabios/206789/gigabyte-rtx2080ti-11264-181022


So I tried a couple of things:

First, to be sure, I disconnected the pump and the fans from the card, attached the fans to the motherboard, and soldered a cable for the pump. Then I fed a max of 12V to the pump over the motherboard header. 

After starting the system, everything worked fine with a custom fan curve and the pump running at full speed. 

I started 3DMark to put the system under load. After 5 seconds it crashes the card. I think there is a circuit checking the pump and/or fan stats; when they are not connected, the system crashes under load.

Then I reattached the fans and pump and flashed the F2 BIOS you mentioned, with the H ending, not the P (because I use the 2 HDMI outputs on the upper left).

The system starts and the fans kick in at around 40-50%. The pump works. But under load (3DMark) and with a little bit of OC, the card crashes.

So I flashed back to the backup BIOS I keep for when nothing works. My temps are at 40-50°C under load. I can get GPU clocks of 2160 MHz (+133) and memory clocks of around 8300-8400 MHz (+1300).

3DMark score: 14546 overall and a GPU score of around 16553.

Still the same problem: hitting the GPU power limit. I would like to avoid the shunt mod.

Thank you to all.
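As an aside, the VRAM numbers quoted throughout this thread (a tool reading of ~7000-8400 MHz plus an offset) can be sanity-checked against the effective GDDR6 data rate. This sketch assumes the overclocking tool shows 7000 MHz (DDR) at stock, applies its offset to that figure, and that the effective per-pin rate is twice the shown value (14 Gbps stock on the 2080 Ti); tools report memory clocks inconsistently, so treat it as a rough check:

```python
# GDDR6 memory clock sanity check for the RTX 2080 Ti.
# Assumption: the tool shows 7000 MHz (DDR) at stock and applies its offset
# to that figure; effective per-pin data rate is 2x the shown value.

BASE_DDR_MHZ = 7000  # stock reading (14 Gbps effective)

def effective_gbps(offset_mhz):
    """Effective per-pin data rate in Gbps for a given tool offset."""
    return (BASE_DDR_MHZ + offset_mhz) * 2 / 1000

for offset in (0, 770, 1300):
    print(f"+{offset} MHz -> shown {BASE_DDR_MHZ + offset} MHz, "
          f"~{effective_gbps(offset):.2f} Gbps effective")
```

Under these assumptions, the "8300-8400 MHz (+1300)" above works out to roughly 16.6-16.8 Gbps effective.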


----------



## truehighroller1

kamueone kamue said:


> Thank you very much for the info.
> 
> For the Palit, I would try the Galax HOF 450W BIOS. Or is that too much? I would try this one:
> 
> 
> https://www.techpowerup.com/vgabios/209434/galax-rtx2080ti-11264-181127
> 
> 
> For the Aorus Waterforce: I have checked the I/O connections of all of the cards, and there is no card with 3x HDMI, 3x DisplayPort and 1x USB-C, and nothing looking even close.
> 
> Maybe someone can help.
> 
> Thank you


That Galax BIOS you're referring to doesn't work for me in regards to power draw either; it caps itself at around 290 or some odd number. I'm using the 
Download GALAX/KFA2 RTX 2080 Ti OC Reference PCB (2x8-Pin) 300W x 127% Power Target BIOS (380W) Official (Source)

└ Compatible with all TU102-300A (1E07) cards ┘ 

It works as it should. The other BIOS that works for me, but doesn't clock the memory down, is the Kingpin XOC LN2. It has no power limit, period, and works as it should power-wise; I was pulling around 550 watts with it. I used the Kingpin for my Hall of Fame run, and it was nice for benching.

Also worth noting: I don't care about ports being disabled, as I have one gaming monitor and that's it.
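The "300W x 127%" naming above is just the BIOS's base power multiplied by its maximum power-target slider; the absolute cap is the product. A trivial check (the 300 W / 127% figures come from the label above):

```python
# Decode a "base W x max power-target %" BIOS label into an absolute cap.

def power_cap_w(base_w, max_target_pct):
    """Maximum board power allowed with the power-target slider maxed out."""
    return base_w * max_target_pct / 100

# Galax/KFA2 reference-PCB BIOS: 300 W base, 127% maximum slider.
print(power_cap_w(300, 127))  # 381.0, marketed as the "380W" BIOS
```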


----------



## dangerSK

truehighroller1 said:


> That Galax BIOS you're referring to doesn't work for me in regards to power draw either; it caps itself at around 290 or some odd number. I'm using the
> Download GALAX/KFA2 RTX 2080 Ti OC Reference PCB (2x8-Pin) 300W x 127% Power Target BIOS (380W) Official (Source)
> 
> └ Compatible with all TU102-300A (1E07) cards ┘
> 
> It works as it should. The other BIOS that works for me, but doesn't clock the memory down, is the Kingpin XOC LN2. It has no power limit, period, and works as it should power-wise; I was pulling around 550 watts with it. I used the Kingpin for my Hall of Fame run, and it was nice for benching.
> 
> Also worth noting: I don't care about ports being disabled, as I have one gaming monitor and that's it.


I wouldn't use the Kingpin XOC LN2, or to be precise, I'd use it only for benching. Running it daily is not a good idea; it's a pretty dangerous BIOS.


----------



## truehighroller1

dangerSK said:


> I wouldn't use the Kingpin XOC LN2, or to be precise, I'd use it only for benching. Running it daily is not a good idea; it's a pretty dangerous BIOS.


That's what I said.


----------



## OrionBG

Hey guys,
I have two 2080Ti cards, one EVGA XC and one EVGA XC Ultra; both are reference PCBs.
What is the highest-TDP BIOS I can flash on those? I'll be running them water-cooled.
On my old 1080Ti I had an unlimited-everything BIOS, but I don't know if there is one for the 2080Ti compatible with reference PCBs.
Ideally I'm looking for a BIOS that can do around 500-600W, as I'll be running them both in the same system alongside a Threadripper 3960X, so I want to have some power left for it 
I guess this has been asked many times, but the thread is so big nowadays... and the first post did not say (or at least I did not find) whether a BIOS from a non-reference card can be flashed to a reference-PCB card...


----------



## sultanofswing

OrionBG said:


> Hey guys,
> I have two 2080Ti cards, one EVGA XC and one EVGA XC Ultra; both are reference PCBs.
> What is the highest-TDP BIOS I can flash on those? I'll be running them water-cooled.
> On my old 1080Ti I had an unlimited-everything BIOS, but I don't know if there is one for the 2080Ti compatible with reference PCBs.
> Ideally I'm looking for a BIOS that can do around 500-600W, as I'll be running them both in the same system alongside a Threadripper 3960X, so I want to have some power left for it
> I guess this has been asked many times, but the thread is so big nowadays... and the first post did not say (or at least I did not find) whether a BIOS from a non-reference card can be flashed to a reference-PCB card...


You have a few options, as I have the same card, depending on what your BIOS chip is, of course.
For a BIOS that can do 500-600W there are really only 3 options:
Asus XOC
Kingpin XOC
Galax HOF XOC
Those BIOSes have pretty much unlimited power limits, but they have some slight drawbacks.
The Asus XOC has a 1000-watt limit, no overvoltage, memory clocks stuck in 3D mode, and you cannot use the voltage curve editor.
The Kingpin XOC works great, but the voltage is locked to 1.125V, the memory clocks do not go into 2D mode, and you cannot use the voltage curve editor.
The Galax HOF XOC works better, I feel, as you can use the voltage curve to control the voltage and clocks work normally in 2D mode; but with the only BIOS available for download you cannot adjust the power target, and you cannot save a profile, as it will cause a BSOD.

The Kingpin XOC can be flashed to any card regardless of the BIOS chip version.
The only available download of the HOF XOC BIOS can only be flashed to older BIOS chip versions, unless you have a CH341A USB programmer (that is how I do it).

As far as I know, there are only a few BIOSes that work perfectly on the XC Ultra that I would use, and those are: 
Asus Matrix/White, with a 360-watt power limit.
EVGA FTW3, with a 373-watt power limit.
Galax/KFA2, with a 380-watt power limit.

There are a few more BIOS files with a higher limit, but they do not work properly; for example, the 450-watt HOF BIOS does not work properly on the XC Ultra, as the XC/XC Ultra only have 2x 8-pin connectors.

Personally, I use the HOF XOC BIOS for daily use, as the voltage curve works: when I want to just game I can use my normal [email protected], and then if I want to bench I can crank it up however high I want.


----------



## OrionBG

sultanofswing said:


> ...The Galax HOF XOC works better, I feel, as you can use the voltage curve to control the voltage and clocks work normally in 2D mode; but with the only BIOS available for download you cannot adjust the power target, and you cannot save a profile, as it will cause a BSOD...


Probably a stupid question, but what do you mean by saving a profile? Do you mean in MSI Afterburner/Precision X1, or something else?


----------



## sultanofswing

OrionBG said:


> Probably a stupid question but what do you mean by saving a profile? Do you mean in MSI Afterburner/Precision X1 or something else?


If you set an overclock and have your program set to apply those settings on boot, it will cause a BSOD. It will do it with any of the overclocking apps.


----------



## dangerSK

truehighroller1 said:


> That's what I said.


Just saying, as I've seen enough dumb people running it daily.


----------



## OrionBG

sultanofswing said:


> If you set an overclock and have your program set to apply those settings on boot, it will cause a BSOD. It will do it with any of the overclocking apps.


I see. I never do that, actually. 
Usually I run without any changes daily and only overclock the card when benchmarking.
So if I apply OC settings in the apps manually, without setting the OC to autoload, everything is fine with that BIOS?
Both my cards were manufactured around the end of 2018 to early 2019, so I guess I should have no issues with BIOS flashing...
BTW, you mentioned that you are flashing your BIOSes with a programmer. Is it an easy task (for you), or does it involve desoldering and soldering the chip?
I'm asking because I did a "deep dive" into TechPowerUp's GPU BIOS database, specifically the unverified section, and I found some potential gems.
I don't have the instruments to recover from a bad/wrong flash, but you might, and if you are willing to test some BIOSes, I can give you the list to try.


----------



## sultanofswing

OrionBG said:


> I see. I never do that, actually.
> Usually I run without any changes daily and only overclock the card when benchmarking.
> So if I apply OC settings in the apps manually, without setting the OC to autoload, everything is fine with that BIOS?
> Both my cards were manufactured around the end of 2018 to early 2019, so I guess I should have no issues with BIOS flashing...
> BTW, you mentioned that you are flashing your BIOSes with a programmer. Is it an easy task (for you), or does it involve desoldering and soldering the chip?
> I'm asking because I did a "deep dive" into TechPowerUp's GPU BIOS database, specifically the unverified section, and I found some potential gems.
> I don't have the instruments to recover from a bad/wrong flash, but you might, and if you are willing to test some BIOSes, I can give you the list to try.


I do not have to unsolder the BIOS chip. I just drain my loop, pull the card, then pull the cooler. I have a test clip that clips onto the BIOS chip, and I program it that way. The good thing about that method is I can recover from a bad BIOS flash, or flash anything and everything I want to the chip, with no issues. It's pretty much the exact same method the manufacturers use, as the BIOS chips are programmed before they are even soldered onto the PCB.

Correct; as long as you do not set the overclock to auto-load, you are fine.


----------



## Emf0r

Hey hey,
Would anyone happen to know if there are any BIOSes available for late-2019 FTW3 Ultras with the new XUSB?


I was thinking maybe the 400W Galax HOF 10-Year, as someone mentioned success flashing it on their XC Gaming Hybrid card and getting past 400W with it. (GALAX.RTX2080Ti.11264.190726.rom, I think.)



Any help would be super appreciated!


----------



## dangerSK

Emf0r said:


> Hey hey,
> Would anyone happen to know if there are any BIOSes available for late-2019 FTW3 Ultras with the new XUSB?
> 
> 
> I was thinking maybe the 400W Galax HOF 10-Year, as someone mentioned success flashing it on their XC Gaming Hybrid card and getting past 400W with it. (GALAX.RTX2080Ti.11264.190726.rom, I think.)
> 
> 
> 
> Any help would be super appreciated!


The new 10-Year HOF should be flashable; try it.


----------



## sultanofswing

Emf0r said:


> Hey hey,
> Would anyone happen to know if there are any BIOSes available for late-2019 FTW3 Ultras with the new XUSB?
> 
> 
> I was thinking maybe the 400W Galax HOF 10-Year, as someone mentioned success flashing it on their XC Gaming Hybrid card and getting past 400W with it. (GALAX.RTX2080Ti.11264.190726.rom, I think.)
> 
> 
> 
> Any help would be super appreciated!


I have not tried that particular BIOS, but I can. I do know that flashing a BIOS from a card that has more than 2x 8-pin onto a card that has only 2x 8-pin causes higher-than-normal idle power consumption, and the card also will not hit the correct power target.

For example, on my XC Ultra with a BIOS from a card with 2x 8-pin, my idle power consumption is about 14W.
With a BIOS from a card with 3x 8-pin, my idle power consumption is 46W, and it hits a power limit below what the BIOS is supposed to allow.


----------



## OrionBG

sultanofswing said:


> I do not have to unsolder the BIOS chip. I just drain my loop, pull the card, then pull the cooler. I have a test clip that clips onto the BIOS chip, and I program it that way. The good thing about that method is I can recover from a bad BIOS flash, or flash anything and everything I want to the chip, with no issues. It's pretty much the exact same method the manufacturers use, as the BIOS chips are programmed before they are even soldered onto the PCB.
> 
> Correct; as long as you do not set the overclock to auto-load, you are fine.


In that case, would you be willing to test any of these BIOSes:

https://www.techpowerup.com/vgabios/208990/208990
https://www.techpowerup.com/vgabios/210258/210258
https://www.techpowerup.com/vgabios/210469/210469
https://www.techpowerup.com/vgabios/213408/213408
https://www.techpowerup.com/vgabios/210130/210130

Those are all from the unverified section, as I mentioned, and there are some very interesting ones, like the ASUS XOC and the EVGA Kingpin (there are three of them)


----------



## sultanofswing

OrionBG said:


> In that case whould you be willing to test any of those BIOSes:
> 
> https://www.techpowerup.com/vgabios/208990/208990
> https://www.techpowerup.com/vgabios/210258/210258
> https://www.techpowerup.com/vgabios/210469/210469
> https://www.techpowerup.com/vgabios/213408/213408
> https://www.techpowerup.com/vgabios/210130/210130
> 
> Those are all from the Unverified section as I've mentioned and there are some very interesting ones like the ASUS XOC and the EVGA Kingpin (there are three of them)


I have tested all of those already.
The ASUS XOC BIOS has a locked voltage and doesn't allow the voltage curve to function.
The Kingpin BIOS files with a 520W limit all flash just fine, but because the XC Ultra doesn't have the right power circuitry, it puts the card in a weird state: the instant you start a 3D app, the clocks get stuck at 300MHz.
The Kingpin XOC BIOS you listed there is the same Kingpin XOC listed on the first page, which is the BIOS TiN uploaded.


----------



## dangerSK

OrionBG said:


> In that case whould you be willing to test any of those BIOSes:
> 
> https://www.techpowerup.com/vgabios/208990/208990
> https://www.techpowerup.com/vgabios/210258/210258
> https://www.techpowerup.com/vgabios/210469/210469
> https://www.techpowerup.com/vgabios/213408/213408
> https://www.techpowerup.com/vgabios/210130/210130
> 
> Those are all from the Unverified section as I've mentioned and there are some very interesting ones like the ASUS XOC and the EVGA Kingpin (there are three of them)


Hmm, when TiN said not to upload LN2 BIOSes to forums and such, he knew why.


----------



## sultanofswing

dangerSK said:


> Hmm when Tin said to not upload LN2 bios to forums and stuff he knew why


To be fair, TiN has the LN2 BIOS listed right on xDevs, which is the same one on TechPowerUp that he just posted.


----------



## dangerSK

sultanofswing said:


> To be fair TiN has the Ln2 Bios listed right on Xdevs which is the same one on Techpowerup that he just posted.


For sure, but there it comes with warnings, a detailed description of what the BIOS does and what might happen, and the RAR is password-protected (Ipromisenottormathiscard). My point: XOC BIOSes shouldn't be uploaded to "normie" sites.


----------



## sultanofswing

dangerSK said:


> For sure, but there are warnings, detailed description of what the bios does and what might happen and the RAR is protected by PWD Ipromisenottormathiscard - My point, XOC bioses shouldnt be upload on "normies" sites.


I agree, especially for someone who might still be on air cooling.


----------



## sultanofswing

Long story short:
If you have a card with two 8-pins, stick to a BIOS from a card that also has two 8-pins.
If you are air cooled, I would not recommend any of the XOC BIOSes, as I have seen them pull upwards of 500 watts easily.


----------



## dangerSK

sultanofswing said:


> Long story short,
> If you have a 2 pin card stick to a BIOS from a card that also has 2 pins.
> If you are air cooled I would not recommend any of the XOC Bios as I have seen upwards of 500watts easily.


On air cooling you want to stay under 400W, as that's already very hard to keep in a safe temp range, even on my Lightning with its beefy 3-slot, 3-fan design.


----------



## sultanofswing

dangerSK said:


> On aircooling u want to stay under 400W as thats very hard to keep in safe temp range, even on my Lightning with beefy 3 slot 3fan design


Yeah, being on water with way more radiator surface area than I need, it's no issue for me.


----------



## dangerSK

sultanofswing said:


> Yea myself being on water with way more radiator surface area than I need it is no issue.


On water I pushed a max of 1.25V core on the Lightning; power consumption was for sure 600W+. I don't know exactly, as I've shunt-modded all the shunts in the past (now it's useless) and I can't track the power limit accurately xD
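For anyone wondering why the reading becomes useless after a shunt mod: the controller computes current from the voltage drop across a sense resistor whose stock value it assumes, so paralleling an extra resistor makes the card under-read. A rough sketch; the 0.005 ohm stock shunt value is a made-up number for illustration:

```python
# Why a shunt-modded card under-reads: the monitoring IC measures the
# voltage across a sense resistor and assumes the stock resistance.
# Soldering an equal resistor in parallel halves the resistance, so the
# measured voltage (and thus reported current/power) halves as well.
# The 0.005 ohm stock value is an illustrative assumption.
def reported_power_w(actual_w, r_stock=0.005, r_added=0.005):
    r_eff = (r_stock * r_added) / (r_stock + r_added)  # parallel combination
    return actual_w * (r_eff / r_stock)

print(reported_power_w(600))  # 300.0 -- a real 600 W draw reads as 300 W
```

This is also why the effective power limit doubles after the mod: the card throttles based on the halved reading.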


----------



## kx11

What the crap is NVIDIA teasing on Twitter?!


----------



## kithylin

kx11 said:


> what the crap is Nvidia teasing on twitter ?!


I looked through the first few pages of NVIDIA's Twitter and I don't see anything teased anywhere. Could you give us a direct link to the tweet you are referring to?


----------



## jura11

kamueone kamue said:


> Thank you very much for the info.
> 
> For the Palit I would try the Galax HOF 450W BIOS. Or is this too much? I would try this one:
> 
> 
> https://www.techpowerup.com/vgabios/209434/galax-rtx2080ti-11264-181127
> 
> 
> For the Aorus Waterforce: I have checked the I/O connections of all the cards; there is no card with 3x HDMI, 3x DisplayPort and 1x USB-C, and nothing looking even close.
> 
> Maybe someone can help.
> 
> Thank you


Hi there,

As the guys above said, I wouldn't get or try the Galax HOF 450W BIOS. I'm not sure it works on reference-PCB GPUs, and even if it does, you may lose one DP port and the HDMI port, and I'm still not sure it works properly at all there.

The Galax 380W BIOS is probably the best for a reference PCB. I'm sure there's a link to it somewhere over here; if not, I'll upload it later on.

For the Aorus Extreme Waterforce I really don't know which BIOS I would try. If the I/O doesn't match, you will for sure lose a DP port and the HDMI port as well; that's what I learned from the few GPUs I tried.

Hope this helps and good luck,

Thanks, Jura


----------



## kx11

kithylin said:


> I looked through the first few pages on nvidia's twitter and I don't see anything teased anywhere?? Could you give us an actual direct link to the tweet you are referring to?





https://twitter.com/NVIDIAGeForce/s.../nvidia-new-geforce-rtx-graphics-card-teaser/


----------



## Emf0r

sultanofswing said:


> I have not tried that particular BIOS but I can. I do know flashing a BIOS that is from a card that has more than 2 8 pins onto a card that has only 2 8 pins causes Higher than normal Idle power consumption and also will not hit the correct target.
> 
> Example is on my XC ultra a bios from a card with 2 8 pins my idle power consumption is about 14w.
> Flashing a BIOS from a card with 3 8 pins my idle power consumption is 46w and it will hit a power limit below what the BIOS is supposed to.





Interesting, somebody on here flashed that BIOS onto their XC Gaming Hybrid and was hitting ~410 watts.

At the moment, would you happen to recommend any 2x8-pin BIOS?


----------



## kithylin

kx11 said:


> https://twitter.com/NVIDIAGeForce/s.../nvidia-new-geforce-rtx-graphics-card-teaser/


That's just the NVIDIA RTX 3000 series, AKA Ampere. Everyone already knows that's coming and that it will be announced at GTC 2020 in a few months. This is not news, and it's not a secret by now. Also, that's over on the secondary @NVIDIAGeForce Twitter page; no wonder I couldn't find it. When you said "NVIDIA is teasing on Twitter" I thought you meant the official @Nvidia Twitter page, which has nothing.


----------



## sultanofswing

Emf0r said:


> Interesting, somebody on here flashed that Bios on to therir XC hybrid gaming, and were Hitting ~410 wattage.
> 
> 
> 
> At the moment would you happen to recommend any 8x2 pin Bios?


I can try it and report back. Currently I'd go with either the FTW3 373-watt BIOS or the KFA2 380-watt BIOS.


----------



## sultanofswing

Anyway, just tried that BIOS.
Ran a 10-minute session of Modern Warfare with the clock set to 2160MHz at 1087mV; it only drew about 372 watts according to HWiNFO64.
At 38C it downclocked to 2145MHz; usually the first downclock step for me is 40C, but still not too bad.

The only thing I see is that idle power consumption is higher than with a 2x8-pin BIOS.
On the FTW3 BIOS that I pretty much run daily, idle power consumption is around 14 watts.
With this BIOS, idle power consumption seems to be right at 60 watts.

Going to run Time Spy to see if I can get it to hit the power limit and find where it is.
Edit: BIOS is no good, like I thought. It stays on the power limit in Time Spy at a reported wattage of only 360 watts.
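The temperature-related downclocking above follows GPU Boost's thermal stepping: the clock drops one bin (about 15MHz on Turing) each time the core crosses a temperature threshold. An illustrative model; the threshold values here are assumptions and vary per card and BIOS:

```python
# Illustrative GPU Boost-style thermal stepping: the clock drops one
# ~15 MHz bin each time the core crosses a temperature threshold.
# The thresholds below are assumptions; they vary per card and BIOS.
def boosted_clock_mhz(set_clock, temp_c, thresholds=(38, 46, 54), step=15):
    bins_crossed = sum(temp_c >= t for t in thresholds)
    return set_clock - step * bins_crossed

print(boosted_clock_mhz(2160, 35))  # 2160 -- below every threshold
print(boosted_clock_mhz(2160, 38))  # 2145 -- first bin, as observed above
```

This is why keeping the core under the first threshold (low water temps, high fan speed) is what holds the top bin.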


----------



## Emf0r

sultanofswing said:


> Not sure why it double posted like that.
> Anyway just tried that Bios.
> Ran a 10 minute session of Modern Warfare with a clock set to 2160mhz at 1087mv, This only drew about 372 watt according to HWinfo64.
> At 38c it downclocked to 2145mhz, usually the first downclock step for me is 40c but still not too bad.
> 
> The only thing I see is the idle power consumption is higher than a 2 pin bios.
> On the FTW3 Bios that I pretty much daily use idle power consumption is around 14watts
> With this BIOS idle power Consumption seems to be right at 60watts.
> 
> Going to Timespy test it to see if I can get it to hit the limit to see where it is.
> Edit-Bios no good like I thought, Stays on the power limit in Timespy at a reported wattage of only 360watt.





Ouch, definitely not great. I wonder how that one managed to report ~410. Also, at 60W idle I can leave it. Thank you for running that! I appreciate it.

Is the KFA2 380 an updated XUSB version, or would it require a force flash with a clip? I need to go back in and double-check my nvflash status; I can't remember what my EEPROM said.


----------



## sultanofswing

Emf0r said:


> Ouch, Definitely not great. Wonder how that managed to come out with a 410. also at 60w idle I can leave it. Thank you for running that! I appreciate it.
> 
> Is the KFA2 380 an updated XUSB? or would it require a force flash with a clip? I need to go back in and double check my nvflash stat, Can't remember what my EEPROM said.


The 380W KFA2 BIOS seems to be an older version only.


----------



## Emf0r

sultanofswing said:


> The 380w KFA2 Bios seems to be only an older version.



Bummer, might just stick with the FTW3 BIOS. Not that it's anything to complain about, but I was excited to play around a bit.


As it stands, I'm able to top out at 2150 on air, dropping to 2085-2100 at under 60C, and water cooling is just around the corner unless this whole Ampere craze turns out to be worthwhile.


----------



## keikei

https://wccftech.com/nvidia-new-geforce-rtx-graphics-card-teaser/


----------



## truehighroller1

keikei said:


> https://wccftech.com/nvidia-new-geforce-rtx-graphics-card-teaser/


Has anyone tried opening the image in Photoshop to see if they can remove the layers they added to the picture?


----------



## dangerSK

truehighroller1 said:


> Has anyone tried to open the image in photoshop to see if they can remove the layers that they added to the picture?


Dude, that's not possible; that's not how it works. A published image is flattened, so you can't unblur or sharpen something that was already processed. At best you can try to recreate the image.


----------



## truehighroller1

dangerSK said:


> Dude thats not possible, not how it works. U cant unblur or sharpen something that was already processed. U can maybe try to recreate the image at best


They unlayered Obama's birth certificate, so I disagree lol.


----------



## dangerSK

truehighroller1 said:


> They unlayered obamas birth certificate so I disagree lol.


I never said it's entirely impossible, just that the Photoshop method he wanted to use isn't going to work. With machine learning and neural networks I believe it's possible to recreate the image, which is also what I mentioned previously.


----------



## keikei

The color looks to be yellow and not the SUPER green. Tells me it's Ampere and not the Ti. Nice Monday surprise though.


----------



## dangerSK

keikei said:


> The color looks to be yellow and not the Supa green. Tells me its Ampere and not the Ti. Nice monday surprise though.


I'm here laughing at how all of you might have just gotten jebaited xD. Dude, they said it's a limited edition for Cyberpunk, and the main color of the game is yellow. What if that card is just a 2080 Ti limited edition with a yellow Cyberpunk accent?


----------



## Medizinmann

jura11 said:


> Hi there
> 
> That's a great OC. A friend's Palit RTX 2080 Ti Gaming Pro OC would do 2115MHz at max, and my old Zotac RTX 2080 Ti AMP wouldn't do more than 2115MHz either; on both we ran the Galax 380W BIOS.
> 
> There's no way I could push my Zotac RTX 2080 Ti AMP to 2145MHz or more. I think the max I have seen is 2130MHz, but that's in very low ambient temperatures, like 15-17°C.
> 
> My friend's Palit RTX 2080 Ti also wouldn't do more than +800MHz on the VRAM, not sure why; we both checked whether he is running Samsung or Micron VRAM, and he is for sure running Samsung memory.
> 
> Great setup there; you have 2x 360mm radiators, which should be more than enough for a great OC on the GPU and CPU, and your water delta-T should be good too.
> 
> That's what I never tried, or rather chickened out of: using LM. I'm still using Kryonaut on my GPUs, and on 3 other GPUs I'm using Noctua NT-H1, which seems to work okay with no issues. When I rebuild the loop I will probably try Thermalright TFX.
> 
> Hope this helps
> 
> Thanks, Jura


I did some testing, and by backing off a little on the GPU speed (max 2130MHz instead of 2145MHz) I was able to get memory up to +1000MHz stable, and +1100MHz for a few benches. But all in all, only with cooling ramped up to the max: the GPU doesn't exceed 37°C, but again, that is with all fans blazing at max speed.









For my daily use I went back to 2130MHz on the GPU and +770MHz on memory...

Greetings,
Medizinmann


----------



## keikei

dangerSK said:


> Im here laughing how all of u just might got jebaited xD Dude they said its limited edition for cyberpunk = main color of the game is yellow, what if that card is only 2080Ti Limited edition with yellow cyberpunk accent ?


Yeah, makes sense. Not Ti though, TITAN.


----------



## dangerSK

keikei said:


> Yeah, makes sense. Not Ti though, TITAN.


A Titan wouldn't make sense; it's not even a gaming card, and NVIDIA knows it. It's probably just a 2080 Ti with a "custom" reference cooler.


----------



## keikei

dangerSK said:


> Titan wouldnt make sense, its not even a gaming card and Nvidia knows it, its probably just "custom" reference cooler 2080Ti


Those green/red Star Wars cards were TITANs. This could also be a custom version of the Ti SUPER, with a normal Ti SUPER at hand as well. Who knows; either way, it'll make $$ for NVIDIA. And yes, the TITAN is used purely for workstations.


----------



## dangerSK

keikei said:


> Dem green/red starwars cards were TITANS. This can also be the custom version of the Ti S, but also have a normal Ti S at hand as well. Who knows, either way, it'll make $$ for nvidia. Yes, TITAN used purely for workstation.


Keep dreaming: https://www.facebook.com/NVIDIAGeForceCZ/videos/1505206129648853/ I was totally right; it's just a 2080 Ti with a custom cooler.


----------



## J7SC

Who knows, they might even try to slip that on-again/off-again 2080 Ti 'Super' in before the Ampere launch if they have extra GPU dies...


----------



## looniam

[/lurk]





https://www.nvidia.com/en-us/geforce/news/cyberpunk-2077-geforce-rtx-2080-ti-gpu/

that is all.
[lurk]


----------



## sblantipodi

Pretty strange that they chose to produce a Cyberpunk 2077 2080 Ti instead of launching the new 3080 or 3080 Ti with that branding.


----------



## mardon

I'm seeing people across the internet claiming to run their 2080 Tis at 2100MHz+ on the core?? How, and is this locked in, or just for the first 10 seconds of a benchmark?

I've got a water-cooled Gigabyte 2080 Ti Gaming OC (A chip) mounted under a Kraken G12 with a Corsair H50 AIO, a Noctua 1700rpm 120mm fan on the rad, a Noctua 92mm mounted to the G12, and copper heatsinks on the RAM and VRMs. This is installed in an SFF V6 Ncase M1 with another 120mm intake fan mounted below the card. I have flashed the card with a KFA2 130% power limit BIOS.

When gaming in Metro Exodus or benching Time Spy Ultra I am hitting the power limit; the card can run at 2140MHz for a short period, then settles around 2070MHz with temps around 72C. That's at 4K/60 with RTX on.

I tried undervolting the card, and it now runs at 1920MHz and 62C during Metro, or 52C in PUBG (3440x1440/100Hz). I'll be keeping this setting, as I've maxed out the refresh rate of the respective monitors.

More for my personal interest or future performance headroom, but how are people hitting 2100MHz+ on air without hitting a voltage limit? What am I doing wrong?

As a side note... wow, DLSS 2.0 in 'Deliver Us the Moon' is a game changer! A decent performance bump for no discernible drop in image quality at 4K, and lovely, lovely reflections. Please, please, Deep Silver, go back and patch Metro Exodus, especially now that it's on Steam!
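As a back-of-the-envelope check on why the undervolt above runs so much cooler: dynamic power scales roughly with frequency times voltage squared. A minimal sketch, where the voltages are my own guesses (the post gives clocks and temps, not voltages):

```python
# Rough f*V^2 scaling for dynamic power. The 1.05 V stock and 0.90 V
# undervolt figures are assumptions for illustration, not measured values.
def relative_power(f_mhz, v, f0_mhz=2140.0, v0=1.05):
    """Power of (f_mhz, v) relative to a (f0_mhz, v0) baseline."""
    return (f_mhz / f0_mhz) * (v / v0) ** 2

print(round(relative_power(1920, 0.90), 2))  # 0.66 -- ~2/3 of the stock-OC power
```

A ~10% clock reduction paired with a voltage drop cutting power by roughly a third is consistent with the 72C-to-62C change reported.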


----------



## sblantipodi

mardon said:


> I'm seeing across the internet people claiming to run their 2080Ti's are 2100mhz+ on the core?? How and is this locked or for the first 10 seconds of a benchmark?
> 
> I've got a water cooled Gigabyte 2080Ti Gaming OC (A Chip) mounted under Kraken G12 Corsair H50 AIO with Noctua 1700rpm 120mm fan on the rad, Noctua 92mm mounted to the G12 and copper heat sinks on the RAM and VRMS. This is installed in a SFF V6 Ncase M1 with another 120mm intake fan mounted below the card. I have flashed the card with a KFA 130% power limit BIOS.
> 
> When gaming in metro exodus or benching time spy ultra I am hitting the power limit and card can run at 2140mhz for a short period then settles around 2070mhz with temps around 72C. Thats at 4k/60 RTX on.
> 
> I tried undervolting the card and it now runs at 1920mhz and 62C during metro or 52C on PUBG (3440x1400/100hz). I'll be keeping this setting as i've maxed out the refresh rate of the respective monitors.
> 
> More for my personal interest or future performance headroom but how are people hitting 2100mhz+ on air without hitting a voltage limit? What am I doing wrong?
> 
> As a side note.. wow DLSS 2.0 on 'Deliver us the Moon' is a game changer! Decent performance bump for no discernable drop in image quality at 4K and lovely lovely reflections  Please please Deep Sliver go back and patch metro Exodus especially now its on steam!


same for Wolfenstein Youngblood


----------



## Woundingchaney

sblantipodi said:


> pretty strange that they choosed to produce a Cyberpunk 2077 2080 Ti instead of launching the new 3080 or 3080Ti with that "brandization"


Cyberpunk was pushed back about 6 months; I'm sure they have been planning the branding for a while now. I would imagine that if Cyberpunk had released on its initial launch date, NVIDIA wouldn't have been able to have the 3000 series out in time. Now that Cyberpunk is most likely releasing around the same time as the 3000 series, they probably can't produce enough GPUs for a limited run of a 3000-series branded model.


----------



## kithylin

mardon said:


> I'm seeing across the internet people claiming to run their 2080Ti's are 2100mhz+ on the core?? How and is this locked or for the first 10 seconds of a benchmark?
> 
> I've got a water cooled Gigabyte 2080Ti Gaming OC (A Chip) mounted under Kraken G12 Corsair H50 AIO with Noctua 1700rpm 120mm fan on the rad, Noctua 92mm mounted to the G12 and copper heat sinks on the RAM and VRMS. This is installed in a SFF V6 Ncase M1 with another 120mm intake fan mounted below the card. I have flashed the card with a KFA 130% power limit BIOS.
> 
> When gaming in metro exodus or benching time spy ultra I am hitting the power limit and card can run at 2140mhz for a short period then settles around 2070mhz with temps around 72C. Thats at 4k/60 RTX on.
> 
> I tried undervolting the card and it now runs at 1920mhz and 62C during metro or 52C on PUBG (3440x1400/100hz). I'll be keeping this setting as i've maxed out the refresh rate of the respective monitors.
> 
> More for my personal interest or future performance headroom but how are people hitting 2100mhz+ on air without hitting a voltage limit? What am I doing wrong?
> 
> As a side note.. wow DLSS 2.0 on 'Deliver us the Moon' is a game changer! Decent performance bump for no discernable drop in image quality at 4K and lovely lovely reflections  Please please Deep Sliver go back and patch metro Exodus especially now its on steam!


They're all using custom water loops with 2, 3, or 4 radiators in the loop. You're never going to see custom-loop temps with an AIO, no matter how big the AIO is or how many fans and heatsinks you throw at it. Nothing beats a custom water loop except liquid nitrogen.


----------



## Nizzen

mardon said:


> I'm seeing across the internet people claiming to run their 2080Ti's are 2100mhz+ on the core?? How and is this locked or for the first 10 seconds of a benchmark?
> 
> I've got a water cooled Gigabyte 2080Ti Gaming OC (A Chip) mounted under Kraken G12 Corsair H50 AIO with Noctua 1700rpm 120mm fan on the rad, Noctua 92mm mounted to the G12 and copper heat sinks on the RAM and VRMS. This is installed in a SFF V6 Ncase M1 with another 120mm intake fan mounted below the card. I have flashed the card with a KFA 130% power limit BIOS.
> 
> When gaming in metro exodus or benching time spy ultra I am hitting the power limit and card can run at 2140mhz for a short period then settles around 2070mhz with temps around 72C. Thats at 4k/60 RTX on.
> 
> I tried undervolting the card and it now runs at 1920mhz and 62C during metro or 52C on PUBG (3440x1400/100hz). I'll be keeping this setting as i've maxed out the refresh rate of the respective monitors.
> 
> More for my personal interest or future performance headroom but how are people hitting 2100mhz+ on air without hitting a voltage limit? What am I doing wrong?
> 
> As a side note.. wow DLSS 2.0 on 'Deliver us the Moon' is a game changer! Decent performance bump for no discernable drop in image quality at 4K and lovely lovely reflections  Please please Deep Sliver go back and patch metro Exodus especially now its on steam!


You "need" to stay under 50-55C load to get a stable 2100+.

2070MHz is pretty much the max on my MSI 2080 Ti Trio X with auto fan (silent).
Custom water is the best way to cool the GPU.


----------



## mardon

kithylin said:


> They're all using custom water loops with 2 or 3 or 4 radiators in the loop. You're never going to see custom water loop temps with an AIO, no matter how big your AIO is or how many fans and heatsinks you throw at it. Nothing beats custom water loops except liquid nitrogen.


I thought that must be the case, but when you read throwaway comments like "mine does 2150 on the core easy" without letting on that they're fully custom water cooled, it gives you false expectations. I've done some reading and YouTube watching this lunchtime, and it seems that with my case and small 120mm AIO my results are pretty decent! Either way, I'm over the moon with the card in general. The build was challenging but good fun too.

I enjoy benchmarking and reckon I can get a bit more out of her yet!


----------



## mardon

Nizzen said:


> You "need" under 50-55c load to get stable 2100+
> 
> 2070mhz is pretty much max on my Msi 2080ti trio x. With auto fan (silent)
> Custom water is the best way to cool the gpu.


What temps are you getting with your fans on silent? Also, what games are you playing for it to be able to sit at 2070? That's a great result on an air-cooled card!

For comparison, in my tiny case with the stock Gigabyte Gaming OC shroud on, I was hitting the thermal limit when OC'd.


----------



## truehighroller1

mardon said:


> More for my personal interest or future performance headroom but how are people hitting 2100mhz+ on air without hitting a voltage limit? What am I doing wrong?



The newer ones seem to have better yields, overclocking-wise. That's all. Mine runs at 2085, and I have about the best water-cooling setup you can get around here. My memory was among the best clockers, or close, and now some of these new ones are clocking past +1500 easily. You probably got an older production card or didn't hit the lottery, one of the two. My setup stays under 38C on the GPU at all times, no matter how hard I hit it.

I have four radiators in my loop with 14 fans total running them, and they're all outside of my case.


----------



## Medizinmann

kithylin said:


> They're all using custom water loops with 2 or 3 or 4 radiators in the loop. You're never going to see custom water loop temps with an AIO, no matter how big your AIO is or how many fans and heatsinks you throw at it. Nothing beats custom water loops except liquid nitrogen.


Well, one could use Alphacool's AIO concept and combine an Eiswolf GPX Pro GPU block with an Eisbaer CPU block, quick-connecting both into one loop with two pumps plus bigger rads (240mm + 360mm + 1200mm, with Noctuas in push/pull)... :thumb:
...like I did, and use LM as TIM. Not exactly cost-saving, but easy to do, in comparison to a custom loop at least.
My temps don't exceed 43°C in the low-noise config at 2100MHz, and with all fans blazing I can get to 2130-2145MHz with temps up to 37°C.



mardon said:


> I'm seeing across the internet people claiming to run their 2080Ti's are 2100mhz+ on the core?? How and is this locked or for the first 10 seconds of a benchmark?
> 
> I've got a water cooled Gigabyte 2080Ti Gaming OC (A Chip) mounted under Kraken G12 Corsair H50 AIO with Noctua 1700rpm 120mm fan on the rad, Noctua 92mm mounted to the G12 and copper heat sinks on the RAM and VRMS. This is installed in a SFF V6 Ncase M1 with another 120mm intake fan mounted below the card. I have flashed the card with a KFA 130% power limit BIOS.
> 
> When gaming in metro exodus or benching time spy ultra I am hitting the power limit and card can run at 2140mhz for a short period then settles around 2070mhz with temps around 72C. Thats at 4k/60 RTX on.
> 
> I tried undervolting the card and it now runs at 1920mhz and 62C during metro or 52C on PUBG (3440x1400/100hz). I'll be keeping this setting as i've maxed out the refresh rate of the respective monitors.
> 
> More for my personal interest or future performance headroom but how are people hitting 2100mhz+ on air without hitting a voltage limit? What am I doing wrong?


You could try LM as TIM; it should bring an improvement of around 5-10°C on the GPU core. You could also run push/pull on the rad, which should bring 1-2°C better water temps, and increase overall airflow in your case.
...but in the end a Kraken isn't a full waterblock, and a 240mm rad is the bare minimum for an OCed 2080 Ti.



Nizzen said:


> You "need" under 50-55c load to get stable 2100+
> 
> 2070mhz is pretty much max on my Msi 2080ti trio x. With auto fan (silent)
> Custom water is the best way to cool the gpu.


2070MHz silent on air is great!!! :thumb:

He would need a bigger rad than the 240mm and a real waterblock.

Greetings,
Medizinmann


----------



## mardon

Thanks for the replies, everyone. That confirms my suspicion that you need some pretty hefty cooling hardware to get those figures, so I should be happy with my numbers considering the size constraints of my system.
@truehighroller1 My card is an older October 2018 purchase. Got it for £750, so I can't complain, as it's still got warranty.

Great idea about push/pull. I can probably get a 15mm and a 25mm 120mm fan on my 120mm rad. I'm nervous about using LM TIM, as I did it on my 2080 Super and ended up with a faulty DVI port, and that was using nail polish and electrical tape on the components. Couldn't find any sign of a spill either.

Currently running 1700rpm Noctua 120mm Redux fans. May have to go for the horribly coloured Noctua NF-A12x25s all round.


----------



## Carillo

Hello guys!

Just bought this Dell card coming out of an Aurora R8 from a guy complaining about noise: 680 dollars converted from Norwegian kroner. Just look at the cooler and you understand why. I was 100% sure it was a non-A chip, hence the blower cooler, but not only is it an A chip, it's probably a top-5% binned 2080 Ti die. Found an EK waterblock I had lying around, and the results are great. The CPU is holding me back though (poor Ivy). The 39000 graphics score is on the Galax 380W BIOS and the 41000 on the Kingpin BIOS. 2310MHz is with chilled water and not stable at all, just playing around; 2250MHz is stable.


----------



## J7SC

Not quite a Bugatti Atlantique... but still a very nice 'barn find', so to speak :thumb: I do wonder how many top-bin CPUs and GPUs quietly exist in an office or factory somewhere without ever exceeding 10% of their true potential.


----------



## Carillo

J7SC said:


> not quite a Bugatti Atlantique...but still a very nice 'barn find', so to speak :thumb: I do wonder how many top-notch-bin CPUs and GPUs quietly exist in an office or factory somewhere without ever exceeding 10% of their true potential


Lol, that exact thought has crossed my mind a thousand times: picturing some guy punching numbers into Excel, rocking a top-binned chip with a stock Intel cooler and a barking 1500 RPM HDD in a corner office, never knowing (or caring) what could have been... all the fame and glory faded away.


----------



## Mooncheese

Hi all, I just got a great deal on a used 2080 Ti. Lo and behold, it's power-limited to a 112% PT. It's a reference PCB; I believed it might be one of the non-binned TU102 variants. I'm under a full waterblock with 1kW of radiator surface area; what BIOS can I flash to increase the PT?

Thanks for any help!

Correction: it's actually a binned variant going by GPU-Z: TU102-300A-K1 A1.

Current card: https://www.techpowerup.com/gpu-specs/alienware-rtx-2080-ti.b6857


----------



## Mooncheese

Carillo said:


> Hello Guys!
> 
> Just bought this DELL card comming out of a Aurora R8 from a guy complaining about noice.. 680 dollar converted from norweigian kroners, Just look at the cooler, and you understand why.. I was 100% sure it was a NON-A chip hence the blower-cooler.. But not only was it a A-chip, but probably top 5% binned 2080 ti dies. Found a EK waterblock i had lying around, and the results are great.. CPU i holding me back tough (poor Ivy) The 39000 graphic score is Galax 380w bios and the 41000 is KingPin bios. 2310mhz is with chilled water, and not stable at all.. just playing around  2250mhz is stable


Speak of the devil! I picked up the same card, and mine is limited to a 112% PT in MSI Afterburner! Did you have this problem? According to GPU-Z mine is also revision 300A K1-A1.

https://www.techpowerup.com/gpu-specs/alienware-rtx-2080-ti.b6857

The previous owner also said he felt this card was binned because it could do 2100MHz with ease, but I can't see that at a 112% PT. Help!


----------



## Mooncheese

I got it fixed by scrutinizing Carillo's post and flashing the Galax 380W BIOS from page 1. I'm now at a 126% PT, and it's holding at least 2115MHz @ 43-45C (SE 420 + PE 360 at 40% fan speed, 72F ambient). I haven't tried for more; will see if it's stable here.

Firestrike:

https://www.3dmark.com/3dm/44042156?

I don't even know where to begin with the memory overclocking. While putting the card under water I peeled away the thermal pads to see what kind of memory it was rocking, and it appeared to be Micron. I'm at +400MHz on the memory for now.

No additional voltage.

The card is very nice coming from a 1080 Ti @ 2000MHz: my FPS in The Witcher 3 @ 3440x1440 with Phoenix Lighting Mode and Halk HD Textures went from ~70 to ~100-105, probably a 45% gain. Going to try The Division 2 benchmark to see how it does; my last run was 71 FPS using Hardware Unboxed optimization settings @ 3440x1440.
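A quick sanity check of the percentage math on those frame rates:

```python
# Percentage gain from a before/after FPS pair.
def pct_gain(before_fps, after_fps):
    return (after_fps / before_fps - 1) * 100

print(round(pct_gain(70, 100), 1))  # 42.9
print(round(pct_gain(70, 105), 1))  # 50.0 -- so calling it ~45% is fair
```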

I should have waited for Ampere but at $750 including a Phanteks Glacier block, offline (no sales tax) I couldn't resist. 

I had a problem with Google Chrome appearing all black (latest Nvidia driver) but this command solved the problem for me, in case anyone else is affected by this with this GPU: (everything in between brackets) ["C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --disable-gpu"]

What is interesting is that with the stock VBIOS the problem wasn't present; only after flashing to the Galax VBIOS, wiping the driver with DDU, restarting and clean-installing the display driver did the issue manifest.


----------



## Mooncheese

The Division 2 shows easily a 50% increase from 1080 Ti to 2080 Ti; I literally can't believe the performance difference. It started at 2115 MHz and dropped down to 2110 MHz @ 44C, but no power starvation. This is a binned FE, TU102-300A-K1-A1, on the Galax 380W VBIOS. 

I'm tempted to try 2150 MHz. 

https://imgur.com/a/1D73BAw

Update: 

Playing a bit, it did get up to 53C (fans are an inaudible 40% RPM, I don't like noise) but clocks didn't dip below 2100 MHz even there. I'm tempted to try an undervolt. 

Not sure if the thermal compound needs to cure or not, Gelid GC Extreme. 

Really, really impressed with this in two games so far. Going to try Shadow of the Tomb Raider later. 

Can't believe over 50% gain in The Division 2.

Edit: Ambient is actually 75F, not 72F; a south-facing apartment tends to warm up in the evenings on clear days. On cooler days I'm fairly confident it won't go beyond 50C.


----------



## sultanofswing

Mooncheese said:


> I got it fixed by scrutinizing Carillo's post and flashing the Galax 380W BIOS from page 1. I'm now at 126% PT and it's holding at least 2115 MHz @ 43-45C (SE 420 + PE 360 at 40% fan speed, 72F ambient). I haven't tried for more; I'll see if it's stable here
> 
> Firestrike:
> 
> https://www.3dmark.com/3dm/44042156?
> 
> I don't even know where to begin with the memory overclocking, whilst putting the card under water I peeled away the thermal pads to see what kind of memory it was rocking and they appeared to be Micron. I'm at +400 MHz on the memory for now.
> 
> No additional voltage.
> 
> Card is very nice coming from 1080 Ti @ 2000 Mhz, my FPS went from ~70 in The Witcher 3 @ 3440x1440 + Phoenix Lighting Mode and Halk HD Textures to ~100-105, probably a 45% gain. Going to try the Division 2 bench to see how it does, my last was 71 FPS using Hardware Unboxed optimization settings @ 3440x1440.
> 
> I should have waited for Ampere but at $750 including a Phanteks Glacier block, offline (no sales tax) I couldn't resist.
> 
> I had a problem with Google Chrome appearing all black (latest Nvidia driver), but this command solved the problem for me, in case anyone else is affected by this with this GPU: "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --disable-gpu
> 
> What is interesting is that with stock VBIOS the problem wasn't present, it was only after flashing to Galax VBIOS, wiping the driver with DDU and then restarting and clean-installing the display driver did the issue manifest.


Chrome does that to me any time I flash a BIOS to my card, and I discovered my own fix. Go to C:\Users\<your account>\AppData\Local\Google\Chrome\User Data.
There you will find a folder labeled ShaderCache; just delete that folder and it will fix the black screen, as Chrome will rebuild the shader cache once you start it again.
With this method there is no need to disable the GPU.
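The same fix can be scripted. A hedged sketch, assuming a default per-user Chrome install on Windows (where the `LOCALAPPDATA` environment variable points at `C:\Users\<you>\AppData\Local`; the exact cache folder name can vary between Chrome versions):

```python
# Delete Chrome's shader cache so it is rebuilt on the next launch.
# Close Chrome first. Path assumes a default Windows per-user install.
import os
import shutil

cache = os.path.join(os.environ.get("LOCALAPPDATA", ""),
                     "Google", "Chrome", "User Data", "ShaderCache")
shutil.rmtree(cache, ignore_errors=True)  # silently does nothing if absent
```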


----------



## Mooncheese

The card will get up to 115-120% PT @ 380W; how is this even possible without a shunt mod? I've seen 117% while casually monitoring my last session of The Division 2, and MSI AB is showing a full peak of 126% (380W!). This is a reference PCB, how is this possible?!



sultanofswing said:


> Chrome does that to me any time I flash a BIOS to my card, and I discovered my own fix. Go to C:\Users\<your account>\AppData\Local\Google\Chrome\User Data.
> There you will find a folder labeled ShaderCache; just delete that folder and it will fix the black screen, as Chrome will rebuild the shader cache once you start it again.
> With this method there is no need to disable the GPU.


Dude thank you so much for this!


----------



## sultanofswing

Mooncheese said:


> The card will get up to 115-120% PT @ 380W; how is this even possible without a shunt mod? I've seen 117% while casually monitoring my last session of The Division 2, and MSI AB is showing a full peak of 126% (380W!). This is a reference PCB, how is this possible?!
> 
> 
> 
> Dude thank you so much for this!


No Problem!


----------



## sblantipodi

Do you think a 2080 Ti will run Cyberpunk 2077 at 4K with DLSS maxed out at a decent framerate, or will they use the game to drive 3080 sales by handicapping the 2080 Ti's performance/optimization in that game?


----------



## sultanofswing

sblantipodi said:


> Do you think a 2080 Ti will run Cyberpunk 2077 at 4K with DLSS maxed out at a decent framerate, or will they use the game to drive 3080 sales by handicapping the 2080 Ti's performance/optimization in that game?


I don't think anyone here knows how Cyberpunk is going to run.


----------



## kithylin

sultanofswing said:


> I don't think anyone here knows how Cyberpunk is going to run.


It had darn well better be extremely well optimized after all this time they took to develop it and the multiple delays.


----------



## Mooncheese

How hot is everyone's back-plate getting? Even under a full water block the back-plate is very hot to the touch, so much so that even though the core is showing 50C or less, I decided to reduce PT from 126 to 120% (may go down to 115%) and drop the memory overclock from +400 down to 0 to mitigate some of the lost power. FPS in a demanding scene went from 97 to 95-94 doing so. Also, 2100 was not stable; instead of bumping up voltage I ran OC Scanner in MSI AB and attained a 90% confidence rating at +160 core (2070 MHz), up from a 70% confidence rating at 2100 MHz (where I got a display driver failure and recovery). 

Should the back-plate be getting this hot with a water-block? I'm asking because my 1080 Ti was cool to the touch @ 300W. Some differences to note though: on this backplate there are three thermal pads that interface with the PCB, whereas the 1080 Ti FE had basically no thermal pads, so it could be that more heat is being conveyed to the back-plate, in conjunction with a higher TDP (assuming 350W; I have to figure out how to save my settings before nuking and updating HWiNFO64 so I can actually determine the power draw, as MSI AB only seems to want to show PT percentage-wise, not actual wattage). But yeah man, it's as hot or hotter than my 1080 Ti FE was at 300W on the factory air-cooler.

The block in question is a Phanteks Glacier, and I ensured that the memory and MOSFETs were covered and in contact with the block. I have no idea why it's running this hot. It has to be like 100F at least, if not more.

I don't know what it is about TW3, but it always pounds power. If I don't cap it, it will do 126% PT on the 380W Galax BIOS, sustained in some scenes/environments! Seeing it sitting between 120-125% PT is what prompted me to close the game and check how hot the card was getting to the touch. Sure enough, the back-plate is HOT. Hell, I may have to dial it back to 115% PT on this BIOS until I can figure out whether this is normal. Hopefully it isn't an issue of something not making adequate contact with the water block. 
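Since Afterburner only reports the power target as a percentage, converting to watts is a single multiply. A sketch, assuming the Galax 380W BIOS uses a ~300W base (100%) TDP; that base figure is my assumption, inferred from 126% topping out at roughly 380W in this post:

```python
# Convert an MSI Afterburner power-target percentage to watts,
# given the BIOS's base (100%) TDP.
def pt_to_watts(base_tdp_w: float, pt_percent: float) -> float:
    return base_tdp_w * pt_percent / 100.0

BASE_TDP = 300.0  # assumed 100% TDP of the Galax 380W BIOS
for pt in (100, 115, 126):
    print(f"{pt}% PT -> {pt_to_watts(BASE_TDP, pt):.0f} W")
# 126% works out to 378 W, in line with the ~380W peaks reported above
```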



sblantipodi said:


> Do you think a 2080 Ti will run Cyberpunk 2077 at 4K with DLSS maxed out at a decent framerate, or will they use the game to drive 3080 sales by handicapping the 2080 Ti's performance/optimization in that game?


That's what they do every generation, they call it "focusing on the new architecture". But yeah, 7nm EUV should be 50% faster at similar die size regardless, they won't need to "focus on the new architecture" that much this time around to see massive improvement.


----------



## Mooncheese

Well I updated Hwinfo64 to the latest version so I can see what the 2080 Ti is doing, I added Core Voltage and GPU Power to OSD / RTSS and after reducing PT from 126 to 115% it's indicating 340-350W power @ 1.063v @ 2070-2085 MHz around 90-100 FPS fully maxed + Phoenix Lighting + HD Reworked in TW3 in demanding scenes (basically nearly everywhere). 

Wanting to undervolt, I tried opening the freq curve in MSI AB (Ctrl+F) to see if I could, say, put the curve at 2000 MHz @ 1.025v, but doing so means there is no way to limit the curve to 2000 MHz unless I reduce PT down to 90% on the Galax BIOS, and then the clocks are all over the place, bouncing from 1950 to 2040 MHz. 

Still getting the hang of this. Maybe flashing to a lower TDP bios might help. At least it's not doing more than 1.063v, looking at the freq. curve in Galax KAF 2 bios the voltage seems to scale up to like 1.2v @ 2100+ MHz! 

Still don't know why the back-plate is getting as hot as it's getting, I mean it is 350W and 1.063v but my 1080 Ti FE was cool to the touch at 300W and 1.025v. 

This forum is like a ghost town now? I have like one reply to all of this. Maybe I need to find a different 2080 Ti forum.


----------



## Carillo

Mooncheese said:


> Speak of the devil! I picked up the same card and mine is limited to 112% PT in MSI Afterburner! Did you have this problem? According to GPU-Z mine is also Revision 300A K1-A1.
> 
> https://www.techpowerup.com/gpu-specs/alienware-rtx-2080-ti.b6857
> 
> The owner also said that he felt this card was binned because it could do 2100 MHz with ease, but I can't see that at 112% PT, help!


Hey, I see you found out about the BIOS. Did you save your stock BIOS? If you did, please send it to me, as I forgot to save mine.


----------



## jura11

@Mooncheese

Can you monitor the VRM on your RTX 2080 Ti? In my case VRM temperatures usually sit within an 8-12°C delta T of the core; something like core at 38°C and VRM at 46°C, or core at 43°C and VRM at 55°C. The highest VRM temperature I have seen is 58°C.

From what I remember when I tested the Galax 380W BIOS on my Zotac RTX 2080 Ti AMP, I could run a stable 2115-2130MHz in most games; just in BFV, Control or Metro Exodus I needed to downclock the GPU to 2085MHz or so, because it would crash at my normal OC. Usually I run a 2100MHz OC most of the time. 

On my Asus RTX 2080 Ti Strix I'm running a 2175MHz OC on the Matrix BIOS, which is stable in Witcher 3 with similar mods to the ones you are running, plus a grass mod and a few others on top. With everything on Ultra I'm getting 105-125 FPS quite easily; in towns it depends, but usually the 90s to 100s, with a few dips to the 80s that can be down to background characters, which I have set to Ultra.

I can try Witcher 3 tonight and check how hot my backplate gets, but I expect it will be hot as in your case, or maybe cooler, hard to say.



Hope this helps 

Thanks, Jura


----------



## Mooncheese

Carillo said:


> Hey, I see you found out about the BIOS. Did you save your stock BIOS? If you did, please send it to me, as I forgot to save mine.


I did but I don't think you want it as it was limited to 112% PT!

Before putting it under water I ran it with the default Aurora cooler and I couldn't believe how bad the performance was. It would boost to 1890 MHz at first, then immediately, like before 60C, it would drop down to 1800-1755 MHz! And the noise! Dear god, like ear deafening, I'm not kidding, insane level of noise, must have been 60 db easily. 

I feel sorry for the countless people with this card under the stock cooler and vbios stuck down there at 1755 MHz because under water block with more TDP this thing can do 2070-2100 MHz fairly stable. 

How do I get the BIOS to you? But yeah, you're better off with an Nvidia 2080 Ti FE BIOS, assuming that one also isn't limited to 112% PT. Honestly I have no idea why this card was limited to 112% PT. Even after putting it under water it was wattage-throttling down to 1900 MHz, with clocks bouncing between 1900-1950 @ 42C. 

Also, an update on dialing it in: reducing PT to 115% and reducing the memory overclock from +400 MHz to 0 did bring the temp on the back-plate down a few degrees. I don't have an IR thermometer, but core temp dropped from 50-53C to 47-50C. 

Question: Did you re-use the stock AW backplate and is it also getting really hot?


----------



## Carillo

Mooncheese said:


> I did but I don't think you want it as it was limited to 112% PT!
> 
> Before putting it under water I ran it with the default Aurora cooler and I couldn't believe how bad the performance was. It would boost to 1890 MHz at first, then immediately, like before 60C, it would drop down to 1800-1755 MHz! And the noise! Dear god, like ear deafening, I'm not kidding, insane level of noise, must have been 60 db easily.
> 
> I feel sorry for the countless people with this card under the stock cooler and vbios stuck down there at 1755 MHz because under water block with more TDP this thing can do 2070-2100 MHz fairly stable.
> 
> How do I get the BIOS to you? But yeah, you're better off with an Nvidia 2080 Ti FE BIOS, assuming that one also isn't limited to 112% PT. Honestly I have no idea why this card was limited to 112% PT. Even after putting it under water it was wattage-throttling down to 1900 MHz, with clocks bouncing between 1900-1950 @ 42C.
> 
> Also, an update on dialing it in: reducing PT to 115% and reducing the memory overclock from +400 MHz to 0 did bring the temp on the back-plate down a few degrees. I don't have an IR thermometer, but core temp dropped from 50-53C to 47-50C.
> 
> Question: Did you re-use the stock AW backplate and is it also getting really hot?


I just need the BIOS in case something happens and I need to flash it back. Would be great if you could send it to [email protected]. Are you talking about jumping frequencies? That sounds like you're hitting the power limit. I always use the curve editor in Afterburner, except when using the KingPin BIOS (because it's disabled there). When running Time Spy, even at only 1000mV the card hits the 380W power limit, and the result is frequencies all over the place like you describe. With the KingPin BIOS that is not an issue, but I see wattage peaks of 650W.

Thanks


----------



## Mooncheese

jura11 said:


> @Mooncheese
> 
> Can you monitor on yours RTX 2080Ti VRM? Usually in my case VRM temperatures are in 8-12°C delta T from core,something like this core temperature is 38° C amd VRM temperature is 46°C or core temperature is 43°C and VRM temperature is 55°C,highest VRM temperature I have seen in 58°C
> 
> What I remember or what I tested or run Galax 380W BIOS on my Zotac RTX 2080Ti AMP I could run stable 2115-2130MHz in most of the games, just in BFV or Control or Metro Exodus I needed to downclock GPU to 2085MHz or so because would crash with my normal OC, but usually I run 2100MHz OC for most of the time
> 
> On my Asus RTX 2080Ti Strix I'm running 2175MHz OC on Matrix BIOS which is stable in Witcher 3 with similar mods like you are running plus on top grass mod and few other mods, with everything on Ultra I'm getting 105-125FPS quite easily, in towns depends but usually in 90's to 100's there few dips to 80's but that's can be down to background characters I have set to Ultra
> 
> I can try tonight Witcher 3 and check my backplate how hot it is or so, but I expect it will be hot as in your case maybe cooler hard to say
> 
> 
> 
> Hope this helps
> 
> Thanks, Jura


Thanks for the reply! 2175 MHz, that is insane! Is it stable? In the process of upgrading I discovered a really neat feature in MSI AB, "OC Scanner", which basically attempts to determine the stable frequency of the card. At 2100 MHz it attained a "70% confidence" rating; reducing that to 2070 MHz bumped it up to 90%. I will run it here and adjust accordingly. Have you tried that feature in MSI AB @ 2175 MHz? That frequency sounds like golden sample material. 

In regards to monitoring VRM temp, I don't believe the reference PCB (essentially 2080 Ti FE) VRM allows for temp monitoring. 

I'm just really concerned because it gets HOT, like the last time I felt a back-plate get that hot it was 1080 Ti FE on the stock cooler. When I put that card under a full EK block the back-plate became nearly cool to the touch (luke warm). 

I was shocked to find that this backplate is as hot or hotter than that with a full block underneath it. I'm concerned that something generating a lot of heat isn't in proper contact with the block and that heat that would otherwise be absorbed by said block is going through the back-plate now instead. 

But again, some differences to note: the 1080 Ti FE back-plate had basically no thermal pads on it, and I presume the heat that accumulated there was residual heat from the PCB with the stock cooler, just by proximity. This back-plate (Alienware Aurora variant) has three rather large thermal pads that interface directly with the PCB: one large one on the back side of the GPU core, and the others, I forget where exactly (presumably over the MOSFETs). There is also more current AND voltage running through the card; with the 1080 Ti FE I ran an undervolt via the MSI AB voltage/freq curve at 2000 MHz @ 1.025v, whereas here it's doing 350W @ 1.063v @ 2070 MHz with the Galax 380W BIOS @ 115%. If I try to reduce PT below 115%, it begins to wattage-throttle down from 2070 MHz. 

Ultimately I may have to sacrifice 50 MHz and just see if reducing the OC to 2000 MHz helps bring both the wattage and voltage down on its own. 

If anyone else under water can chime in and tell me whether I should be concerned about the hot back-plate and dial it back, or if this is normal under a full water-block, I would be forever grateful!


----------



## Mooncheese

Update: 

I remembered how to undervolt via freq / voltage curve in MSI AB and created a new profile of 2000 MHz @ 1.025v @ 110% PT (310-320W) that OC Scanner gave a 90% Confidence Rating. Peak temp during the 5 min test was 42C. 

I assigned that to the primary profile and moved the 2070 MHz @ 1.063v profile to profile 2 and may switch to that in game via hotkey if I feel that I need that extra 70 MHz. For example, I actually capped TW3 to 90 FPS in Rivatuner as I felt that yeah, 105-110 FPS is nice but it's not the same difference as say between 70 and 90 FPS G-Sync. 90 FPS feels pretty good and this way I can run the less TDP and voltage profile. 

If I encounter a title that is problematic (No Man's Sky etc.) I will run the hotter profile. 

This has been my approach to overclocking for a while now, and it's the same with a CPU: there is a point with a given chip where additional frequency begins to require an inordinate increase in voltage. Take my 8700K for example: @ 5.0 GHz, AVX offset 0 (delid, monoblock, 1kW of rad surface area), it requires 1.342v. If I try for 5.1 GHz, the required voltage is 1.4-1.425v. That additional 100 MHz is now asking for another 60-80mv. 

That additional 70 MHz on 2080 Ti FE is asking for another 40 mv. 
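The diminishing-returns point described above can be put in numbers as megahertz gained per extra millivolt. A sketch using the figures quoted in this post (the midpoints of the voltage ranges are my reading, not exact measurements):

```python
# Crude "MHz per extra mV" metric for judging when an overclock
# stops being worth the added voltage.
def mhz_per_mv(mhz_gain: float, mv_cost: float) -> float:
    return mhz_gain / mv_cost

# 8700K: 5.0 -> 5.1 GHz (+100 MHz) costs roughly 1.342 -> ~1.41 V (~70 mV)
cpu = mhz_per_mv(100, 70)   # ~1.43 MHz per mV
# 2080 Ti: 2000 -> 2070 MHz (+70 MHz) costs 1.025 -> 1.063 V (~40 mV)
gpu = mhz_per_mv(70, 40)    # 1.75 MHz per mV
print(f"CPU: {cpu:.2f} MHz/mV, GPU: {gpu:.2f} MHz/mV")
```

When this ratio drops sharply between two steps, the higher step is usually the one to skip, which is exactly the trade-off made above.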

Anyhow, those are my thoughts on this card thus far, thanks for reading and please share your insights if you have any.


----------



## Mooncheese

Addendum: 

I forgot to add this to my last post, so figured I'd make another in case anyone already read it, as this is a pretty interesting and relevant part of the upgrade experience coming from the 1080 Ti. 

I forgot to add how much of an increase I've seen while mining with the 2080 Ti compared to the 1080 Ti. Up until just yesterday, when I pulled the 1080 Ti out and put the 2080 Ti in, I'd been seeing ~.9-.95 mBTC on average on CMinderCuda9.0+ / GrinCuckatoo31, for about $1 a day (the CPU is also mining at 50% load on RandomXmonero). I didn't expect such a huge jump in earnings on this algo: it's now doing ~1.4 mBTC on the same algo, and I'm getting like $1.60 a day with the same power consumption. 

While mining, PT is at 50% (160W) and there is only a memory OC of +400 MHz. 

GPU is around 35C while mining and voltage is like .700v. 

It's a nice way to offset my heating bill, apartment is very cold in the morning if I don't mine overnight. I'm basically just doing it for the heat at this point. 

At this rate, an average increase of $.50 a day works out to $15 a month, making this $750 total purchase (with Phanteks Glacier block, no tax, local sale) a $660 purchase six months hence, when Ampere is most likely to release. $660 for a card that will probably be 25% slower than an $800-before-taxes card; not too shabby at all!
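The payback arithmetic above can be spelled out explicitly (a sketch; the $0.50/day delta and the six-month window are the figures from this post):

```python
# Effective card cost after offsetting the purchase with extra mining income.
def effective_cost(price: float, extra_per_day: float, days: int) -> float:
    return price - extra_per_day * days

cost = effective_cost(750.0, 0.50, 180)  # ~6 months until an expected Ampere launch
print(f"${cost:.0f}")  # 750 - 0.50 * 180 = $660, matching the figure above
```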

3080 will probably be $800 at launch, "FE suckers price", add another $150-200 for a water-block (back-plate optional) to that makes for $950-$1000 before taxes or around $1100 at ~10% sales tax all said and done. 

I will rock this until 3080 Ti which will probably release in March of next year, a year from now, hell I may rock this well into 3080 Ti and swoop in and pick that up late in the cycle like I did 2080 Ti. Not enough titles with RT to be honest. That's the only shortcoming of this decision, like I literally cannot turn RT on at 3440x1440 and have a playable experience. Ampere will probably do RT some 75-100% faster on top of 50% faster rasterization tier for tier. 

Anyhow, loving the upgrade thus far, I picked up 50% gain in two titles I'm currently enjoying on the AW3418DW (love this panel) and while mining. 

25% my ass! (I know some titles, including synthetics like Firestrike the difference is only 25%).


----------



## jura11

Mooncheese said:


> Thanks for the reply! 2175 MHz that is insane! Is it stable? In the process of upgrading I discovered a really neat feature in MSI AB, "OC Scanner", that basically attempts to determine the stable freq. of the card. @ 2100 MHz it attained "70% confidence" rating, reducing that to 2070 MHz bumped that up to 90%, I will run it here and will adjust accordingly. Have you tried that feature in MSI AB @ 2175 MHz? That freq. sounds like golden sample material.
> 
> In regards to monitoring VRM temp, I don't believe reference PCB (essentially 2080 Ti FE) VRM allow for temp monitoring.
> 
> I'm just really concerned because it gets HOT, like the last time I felt a back-plate get that hot it was 1080 Ti FE on the stock cooler. When I put that card under a full EK block the back-plate became nearly cool to the touch (luke warm).
> 
> I was shocked to find that this backplate is as hot or hotter than that with a full block underneath it. I'm concerned that something generating a lot of heat isn't in proper contact with the block and that heat that would otherwise be absorbed by said block is going through the back-plate now instead.
> 
> But again, some differences to note, 1080 Ti FE back-plate had basically no thermal pads on it and I presume that the heat that was accumulating there was residual heat from the PCB with stock cooler, just by proximity. This back-plate (Alienware Aurora variant) has three rather large thermal pads that interface directly with the block, one large one on the back-side of GPU core and the others, I forget where they are exactly (presume over MOSFET). There is also more current AND voltage running through the card, with 1080 Ti FE I ran an undervolt via MSI AB voltage / freq. curve of 2000 MHz @ 1.025v. Here it's doing 350W @ 1.063v @ 2070 MHz with Galax 380W bios @ 115%. If I try to reduce PT below 115% it begins to wattage throttle down from 2070 MHz.
> 
> Ultimately I may have to sacrifice 50 MHz and just see if reducing OC to 2000 MHz helps bring both the wattage and voltage down on it's own.
> 
> If anyone and everyone also on under water can chime in and tell me whether or not I should be concerned with the hot back-plate and dial it back or if this is normal under full water-block I would be forever grateful!


 @Mooncheese

2175MHz is stable in Witcher 3 and other games; 2190MHz or 2205MHz is stable in most benchmarks, though in games it depends on the game. Because of this I'd rather use the 2175MHz OC, which I know is stable with no issues in gaming. 

Sadly I can't use OC Scanner in MSI Afterburner because I'm running multiple GPUs (a GTX 1080 Ti with a 2113MHz OC, a GTX 1080 with 2113MHz and a GTX 1080 with 2164MHz). I use my PC mostly for rendering rather than gaming, and with such a setup MSI Afterburner's OC Scanner would crash or black-screen. 

Golden sample I wouldn't say; there are better GPUs than mine. It's an okay sample, just not quite golden. 

I forgot that on the reference PCB there is no way to monitor VRM temperatures.

What temperatures are you seeing in games like Witcher 3 or others? 

Regarding the backplate temperatures, I just tried Witcher 3 now. Temperatures I have seen on the core were 38-40°C at 2160MHz, and VRM temperatures were 48-50°C most of the time. Clocks were stable for the whole session, no downclocking etc. 

During this session I checked the backplate temperature with an IR thermometer gun. I measured 37-38°C at the highest, close to the PCIe slot and in the middle of the GPU near the PCIe slot; at other spots temperatures were 25-27°C at most, from what I remember. The backplate was quite hot to the touch, I would say; you wouldn't burn yourself on it, but for sure it's quite hot. That is quite normal though: I have tried a few RTX 2080 Tis and a few waterblocks, and have seen such temperatures and experienced hot backplates. The coldest backplate I have experienced was with the Aquacomputer Kryos RTX 2080 Ti waterblock and active backplate, which I tested with a Palit RTX 2080 Ti. 

You can try getting a different backplate if it helps, but I'm not sure how much it would, or you can try running without the backplate to see if temperatures improve. I remember running the stock backplate with an EVGA Hybrid AIO on an EVGA Titan X SC with a 1469MHz OC, and the backplate was so hot, my god, I couldn't believe it. Then I swapped the EVGA Hybrid AIO for a Raijintek Morpheus II cooler, ran it without the backplate, and temperatures dropped immediately: previously on the EVGA Hybrid AIO I normally saw 55-60°C, with the Raijintek Morpheus II temperatures were 42-45°C max, and then I went with a waterblock on it and temperatures were 36-38°C. I checked the backplate as well; it was always okay, no issues. 

The RTX 2080 Ti runs hotter than previous generations of GPUs, I would say; many people can confirm that. If you're asking whether I would be concerned by it, I wouldn't for now; just monitor the temperature.

Hope this helps 

Thanks, Jura


----------



## Mooncheese

jura11 said:


> @Mooncheese
> 
> 2175MHz is stable in Witcher 3 and other games, 2190MHz or 2205MHz is stable in most benchmarks, in games depending on game I would say, due this I rather use 2175MHz OC which I know is stable and no issues in gaming
> 
> Sadly I can't use OC Scanner in MSI Afterburner because I'm running multiple GPUs(GTX1080Ti with 2113MHz OC and GTX1080 with 2113MHz and GTX1080 with 2164MHz OC),I use my PC mostly for rendering than gaming, with such setup MSI Afterburner OC Scanner would crash or black screen
> 
> Golden sample, I wouldn't say this there, there are better GPU than my, its okay sample just not quite golden
> 
> I forgot on reference PCB there is no way how to monitor VRM temperatures
> 
> What temperatures are you seeing in games like Witcher 3 or others?
> 
> Regarding the backplate temperatures, I just tried Witcher 3 now, temperatures I have seen on core 38-40°C with 2160MHz and VRM temperatures have been in 48-50°C for most of the time, clocks have been stable for whole session, no downclocking etc
> 
> During this session I checked backplate temperature with IR thermometer gun, with IR thermometer gun I measured 37-38°C highest temperature which is close to PCI_E slot or in middle of GPU closer to PCI_E slot, at other spots temperatures have been in 25-27°C top what I remember and backplate have been quite hot on touch I would say, you wouldn't burn yourself by it but for sure its quite hot, bit this is quite normal there, I have tried few of RTX 2080Ti and few waterblocks as well and seen such temperatures or experienced such thing like hot backplates, coldest back or backplate I have experienced with Aquacomputer Kryos RTX 2080Ti waterblock and active backplate, this one I tested with Palit RTX 2080Ti
> 
> You can try get different backplate if its helps, but not sure how much there would help or you can try to run it without the backplate if temperatures would improve, I remember I run with EVGA Hybrid AIO stock backplate on EVGA Titan X SC with 1469MHz OC and backplate have been so hot my god I couldn't believe it,then I swapped EVGA Hybrid AIO for Raijintek Morpheus II cooler and run it without the backplate and temperatures dropped immediately, previously on EVGA Hybrid AIO I have seen 55-60°C normally with Raijintek Morpheus II cooler temperatures have been in 42-45°C max, then I went with waterblock on it and temperatures been in 36-38°C and checked as well backplate, always have been okay no issues
> 
> RTX 2080Ti are running hotter than previous generations of GPUs I would say there, many people can confirm that, if I would say I would be concerned by it, I wouldn't for now, just monitor temperature
> 
> Hope this helps
> 
> Thanks, Jura


Thanks for the reply; I need to get an IR thermometer. Thanks for the advice, too. I was thinking the same thing this morning, looking for a back-plate with an integrated heat-sink. Anything better than this? 

https://www.alphacool.com/shop/new-...or-eisblock-gpx-n-rtx-2080-acetal-plexi-light


But it seems to drop temps by at least 20°C around the VRM (a different card is pictured; the VRM on the 2080 Ti is on the right side, perpendicular to the GPU core): 

https://www.tomshardware.com/uk/rev...px-a-water-cooling-radeon-rx-vega,5557-2.html


Using a search engine, I couldn't find any analysis of the thermal performance of this back-plate with a 2080 Ti. 

Any recommendation that will work with reference PCB without having to buy a new water-block?


----------



## jura11

Mooncheese said:


> Thanks for the reply, I need to get an IR thermometer. Thanks for the advice, I was thinking the same thing this morning, looking for a back-plate with an integrated heat-sink. Anything better than this?
> 
> https://www.alphacool.com/shop/new-...or-eisblock-gpx-n-rtx-2080-acetal-plexi-light
> 
> 
> But seems to drop temps by at least 20C about VRM (different card pictured, VRM on 2080 Ti is on right side perpendicular to GPU core)
> 
> https://www.tomshardware.com/uk/rev...px-a-water-cooling-radeon-rx-vega,5557-2.html
> 
> 
> Using search engine I couldn't find any analysis of the thermal ability of this back-plate in conjunction with 2080 Ti.
> 
> Any recommendation that will work with reference PCB without having to buy a new water-block?


Hi there

Not sure on that; it may well be better than the stock one, hard to say. I'd say you will see an improvement over stock, but how much, I could only guess.

Most normal backplates are mostly for aesthetics and can help with rigidity etc. The only backplate that definitely helps is the Aquacomputer kryographics active backplate, but sadly I don't think it can be used with waterblocks other than Aquacomputer's, which is a shame. 

https://shop.aquacomputer.de/product_info.php?products_id=3795

Link on review 






I have used the AQ kryographics RTX 2080 Ti waterblock with the active backplate on, I think, two builds, and they outperformed most of the waterblocks I tested: EKWB Vector, Phanteks Glacier, Heatkiller IV Pro, Bykski, Barrow, and an XSPC one as well. Only Alphacool and Bitspower I never tried or used. 

Here is probably my best result (2205MHz) on my Asus RTX 2080 Ti Strix; I tested it in Firestrike, Unigine Superposition and a few other benchmarks, plus Crysis 3.










Hope this helps

Thanks, Jura


----------



## Intrud3r

I'm thinking about getting my air cooled Aorus 2080 ti some water cooling sweetness .... 

Are there any people in here that can shove me in the right direction to get this done?

Which parts should I use? What is the best?

I have the Gigabyte Aorus 2080 ti 11GB

I've got room for a 240/280mm rad on top in my case ...


----------



## Mooncheese

Intrud3r said:


> I'm thinking about getting my air cooled Aorus 2080 ti some water cooling sweetness ....
> 
> Are there any people in here that can shove me in the right direction to get this done?
> 
> Which parts should I use? What is the best?
> 
> I have the Gigabyte Aorus 2080 ti 11GB
> 
> I've got room for a 240/280mm rad on top in my case ...


Water cooling is 100% the way to go! I like EK parts, although the competitors' stuff is pretty much as good as EK's now for less. (I was hearing about issues with other brands' radiators, i.e. Alphacool etc., in terms of fittings being prone to leaking, but that was years ago and a lot of it is hearsay. EK has been around the longest, though their products do carry a price premium.)

It's intimidating but you don't even need to do hard tubing, here's my rig (before adding the 2080 Ti, I intend to do a follow up vid)

I recommend going with the most radiator surface area as you can right from the get go. The most appealing thing about water cooling is the elimination of fan noise, more rad surface area means you can run the fans at lower RPM for the most part. 

Bear in mind that some competitors do offer better products, EK is by no means the best in terms of water-block or radiator performance, (but they do tend to be at the mid-to-top of the charts in terms of performance), EK has just been around for the longest and is the most reputable I would say. 

Do some research, here's EK's configurator if you want to get an idea as to what you would need and where to get started. 

If you do go this route, make sure you watch plenty of liquid-cooling-for-beginners videos on YouTube. It's a bit more complicated than air cooling.

For example, I just drained and refilled my loop after swapping from the 1080 Ti to the 2080 Ti. To fill the loop you need to disconnect the power cables from the motherboard (so as not to fry your CPU). The pump is still connected to the PSU, so you need to trick the PSU into thinking it's connected to the motherboard, either with a 24-pin jumper that you plug into your 24-pin power cable or with a paperclip bridging the right pins. Then, with the pump/reservoir combo full of water, you hit the power on the PSU and carefully watch (it happens fast) as the water is distributed through the loop, being careful to turn the PSU off as the water approaches the bottom of the pump-res. You have to do this a few times until your loop is full. Ideally you should be doing this with the PSU pulled out of the chassis, and with no PSU cable to the GPU either, same concept as the mobo and CPU.

Then you're supposed to leak test for 24 hours (I give it an hour or so and watch closely; with hard tubing you actually want to leak test longer, because if you didn't chamfer the edges of a cut tube you risk the tube coming loose from a fitting). I'm making it sound more complicated than it is; it's really not THAT complicated. When leak testing is done you simply power down the PSU, reconnect power to the motherboard and GPU, and turn it on for the first time.

The cooling performance, looks and silence are what make liquid cooling worth the time and money, oh, and the performance. Running a 2080 Ti at 47-50C @ 2025-2070 MHz solid with zero noise is really nice.


----------



## Intrud3r

Mooncheese said:


> Water cooling is 100% the way to go! I like EK parts, although the competitors' stuff is pretty much as good as EK's now for less. (I was hearing about issues with other brands' radiators, i.e. Alphacool etc., in terms of fittings being prone to leaking, but that was years ago and a lot of it is hearsay. EK has been around the longest, though their products do carry a price premium.)
> 
> It's intimidating but you don't even need to do hard tubing, here's my rig (before adding the 2080 Ti, I intend to do a follow up vid)
> 
> I recommend going with the most radiator surface area as you can right from the get go. The most appealing thing about water cooling is the elimination of fan noise, more rad surface area means you can run the fans at lower RPM for the most part.
> 
> Bear in mind that some competitors do offer better products, EK is by no means the best in terms of water-block or radiator performance, (but they do tend to be at the mid-to-top of the charts in terms of performance), EK has just been around for the longest and is the most reputable I would say.
> 
> Do some research, here's EK's configurator if you want to get an idea as to what you would need and where to get started.
> 
> If you do go this route, make sure you watch plenty of liquid-cooling-for-beginners videos on YouTube. It's a bit more complicated than air cooling. For example, I just drained and refilled my loop after swapping from the 1080 Ti to the 2080 Ti. To fill the loop you need to disconnect the power cables from the motherboard (so as not to fry your CPU). The pump is still connected to the PSU, so you need to trick the PSU into thinking it's connected to the motherboard, either with a 24-pin jumper that you plug into your 24-pin power cable or with a paperclip bridging the right pins. Then, with the pump/reservoir combo full of water, you hit the power on the PSU and carefully watch (it happens fast) as the water is distributed through the loop, being careful to turn the PSU off as the water approaches the bottom of the pump-res. You have to do this a few times until your loop is full. Ideally you should be doing this with the PSU pulled out of the chassis, and with no PSU cable to the GPU either, same concept as the mobo and CPU. Then you're supposed to leak test for 24 hours (I give it an hour or so and watch closely; with hard tubing you actually want to leak test longer, because if you didn't chamfer the edges of a cut tube you risk the tube coming loose from a fitting). I'm making it sound more complicated than it is; it's really not THAT complicated. When leak testing is done you simply power down the PSU, reconnect power to the motherboard and GPU, and turn it on for the first time. The cooling performance, looks and silence are what make liquid cooling worth the time and money, oh, and the performance. Running a 2080 Ti at 47-50C @ 2025-2070 MHz solid with zero noise is really nice.


Thank you for the information. I'm getting darn annoyed by the sounds the fans make with the card being air cooled ...


----------



## Mooncheese

Intrud3r said:


> Thank you for the information. I'm getting darn annoyed by the sounds the fans make with the card being air cooled ...


No problem. Having a look at your signature, where you're at 1935 MHz @ 1.05v: the true value of liquid cooling isn't just dead silence, you can really go for INSANE overclocks. We have the same GPU and my card is stable at 2000-2025 MHz @ 1.025v (90% Confidence Rating in OC Scanner; yet to have a display driver failure here in the past two days of fairly lengthy gaming sessions). It can do 2070 MHz at 1.063v with a 90% confidence rating. And that's because the core doesn't exceed 50C with the undervolt. With an air cooler, at 300-320W TDP I imagine the temps would be around 75C best case (AIB card) or 85C on the FE cooler.

That's just the GPU. As for the CPU, dude, if you want to see my temps back when I first plumbed my loop, check this out:




Like 60C under the full AVX instruction set with an 8700k @ 5.0 GHz @ 1.375v (voltage was higher back then as I was on an older BIOS, F4; the F6 BIOS has allowed for reducing core voltage further).

So you could probably get another 100 MHz with both components at same voltage, whisper quiet, and just really pretty to look at. 

It's worth the money, trust me, been doing this for a while. Liquid cooling is where it's at.

The lower the temp of the component, the less voltage required to sustain a given frequency; less voltage means less wattage at the same frequency, and less wattage means less heat. And on and on in a self-reinforcing manner.


----------



## Intrud3r

Mooncheese said:


> No problem, having a look at your signature where youre at 1935 MHz @ 1.05v, the true value of liquid cooling isn't just dead silence, you can really go for INSANE overclocks. We have the same GPU and my card is stable at 2000-2025 MHz @ 1.025v (90% Confidence Rating in OC Scanner, yet to have a display driver failure here in past two days of fairly lengthy gaming sessions). It can do 2070 Mhz at 1.063v with 90% confidence rating. And that's because the core doesn't exceed 50C with the undervolt. With air cooler, at 300-320W TDP I imagine the temps would be around 75C best case scenario (AIB card) or 85C on FE cooler.
> 
> That's just the GPU, CPU, dude if you want to see my temps back when I first plumbed my loop check this out: https://youtu.be/BAsInofP3SQ
> 
> Like 60C under full AVX instruction set with 8700k @ 5.0 GHz @ 1.375v (voltage was higher back then as I was on an older BIOS, F4, F6 BIOS has allowed for reducing core voltage further).
> 
> So you could probably get another 100 MHz with both components at same voltage, whisper quiet, and just really pretty to look at.
> 
> It's worth the money, trust me, been doing this for a while. Liquid cooling is where it's at.
> 
> The lower the temp of the component the less voltage required to sustain a given freq., less voltage means higher freq with less wattage. Less voltage and wattage means less heat. And on and on in a self-reinforcing manner.


How often do you have to clear a loop with like 1x360 + 1x240/280 + 1x120 (guess that's where i should go to then) ... maybe the last 120 not needed ... i donnow.


----------



## Mooncheese

Intrud3r said:


> How often do you have to clear a loop with like 1x360 + 1x240/280 + 1x120 (guess that's where i should go to then) ... maybe the last 120 not needed ... i donnow.


I just drained my loop for the first time in 2.5 years upgrading from 1080 Ti to 2080 Ti. Block was immaculate, even the micro-fin array, next-to-no corrosion as I only use distilled water + EK biocide. 

The only issue I'm having now is that the soft tubing seems to have become discolored (as you can see in the pics in my post above). I'm changing the back-plate and need to tear down the block again anyhow, because one of the thermal pads covering a memory chip was either light blue or dark blue, and I need to make sure it's not dark blue: that would mean the previous owner replaced the pad at some point and forgot to remove the dark blue protective plastic film in the process (the reason said film is dark blue). I wasn't 100% there mentally that day (under the weather and short on sleep), so I should have caught it then, but part of me thinks it was light blue thermal tape, as I'm fairly certain I would have noticed dark blue plus reflective plastic.

Anyhow, I'm going to take the opportunity to replace the tubing with something more durable that doesn't discolor so easily. I'm thinking of hard tubing, but honestly I don't like the idea of swapping the GPU again with hard tubing down the road. It looks better, but it would be way busier in my case, as I have most of the soft tubing routed behind the motherboard wall to the respective components.

Ideally you want your pump/res high in the loop and your drain at the bottom. If you look at my pics and vid you will note that the 360 PE on the right side of the case has a valve on one of the outlet fittings for draining. I attached a ~6" piece of tubing to that via a fitting, put the PC on the kitchen table with a bucket on a chair under it, and it drained well enough to at least get the GPU out of the loop (getting all of the water out of the radiators can be challenging).


----------



## Intrud3r

That sounds reasonable. I could live with that ... every 2.5+ years ... 

Time to watch some vids I guess


----------



## Mooncheese

Here's how the 1080 Ti block looks after 2 years and 3 months of not flushing the loop, with just distilled water and biocide. I also forgot to add that another benefit of liquid cooling is greatly extending the lifespan of components. I've run my 1080 Ti FE 24 hours a day for the past 3 years (if not gaming, I'm mining at 50% PT, at 35C) and it shows no signs of wear. It did 2000 MHz with an undervolt of 1.025v when new, and it still held that right up until a few days ago when I pulled it out of the loop for the upgrade. I'm seeing a litany of posts all over the web from people saying their 1080 Ti died and they therefore had to upgrade to, say, a 2080 S, and that if the 1080 Ti hadn't died they would have waited for Ampere. A LOT of comments like this.

Component A at 80C on core and 100C on VRM / MOSFET sustained temp @ 1.063v+ is only going to last a few years. 

Component B at 50C or less across both core and VRM / MOSFET with a slight undervolt? I wouldn't be surprised if that component could go on like that for 15-20 years.

Because dropping the temp allows for running a given freq. with less voltage, you can run the same component at the same or higher freq as it would otherwise hold at 80C, but with WAY less voltage. Or you can find a happy medium: a slight undervolt and still way higher clocks than on air (2013-2025 MHz here @ 1.025v @ 47C load, 90% Confidence Rating in OC Scanner)

On air the clocks start dropping as temps rise, usually around 15 MHz per 10C, so 1950 MHz becomes roughly 1900 MHz going from 45C to 80C+.
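As a rough sketch of that estimate (the 15 MHz-per-10C slope is just this post's anecdotal figure, not an official NVIDIA spec; Turing boost clocks do move in 15 MHz steps):

```python
# Rough sketch of how temperature eats into sustained boost clocks.
# The 15 MHz-per-10C slope is the anecdotal figure from the post above,
# not an NVIDIA spec; Turing clocks step in 15 MHz bins.

BIN = 15  # MHz per boost bin on Turing

def estimated_clock(base_clock_mhz, temp_c, ref_temp_c=45, mhz_per_10c=15):
    """Estimate the sustained clock after the thermal drop from ref_temp_c to temp_c."""
    drop = max(0.0, (temp_c - ref_temp_c) / 10.0 * mhz_per_10c)
    # Snap down to the nearest 15 MHz boost bin
    return int((base_clock_mhz - drop) // BIN * BIN)

print(estimated_clock(1950, 45))  # cool under water: 1950
print(estimated_clock(1950, 80))  # warm on air: 1890
```

Which reproduces the roughly 50-60 MHz drop described above when going from water-cooled to air-cooled temperatures.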

Liquid cooling is where it's at, if you can swing like $700 on a loop DO IT. 

Don't forget the monoblock for the CPU, it's like a full length GPU block in that it cools the VRM. VRM temp is a huge problem with my motherboard, Gigabyte Z370 Aorus Gaming 7, but with monoblock VRM temps don't exceed 45C! 

https://www.ekfluidgaming.com/why-liquid-cooling/


----------



## JustinThyme

Mooncheese said:


> Don't forget the monoblock for the CPU, it's like a full length GPU block in that it cools the VRM. VRM temp is a huge problem with my motherboard, Gigabyte Z370 Aorus Gaming 7, but with monoblock VRM temps don't exceed 45C!
> 
> https://www.ekfluidgaming.com/why-liquid-cooling/


Worst junk to hit the market ever in terms of performance. Tried several and they were all fails. Best results come from a separate cooler for each: CPU and VRM. Monoblocks are aesthetically pleasing but the performance is garbage.

This is how you do it!

Or better yet second pic with Optimus block.


----------



## Medizinmann

Intrud3r said:


> How often do you have to clear a loop with like 1x360 + 1x240/280 + 1x120 (guess that's where i should go to then) ... maybe the last 120 not needed ... i donnow.


Look into, e.g., the JayzTwoCents channel. As a rule of thumb: a 240mm rad per component, and that is without OC. I would go for at least 360mm and either lower fan speed for less noise or more headroom for OC.

If you aren't into the aesthetics of your build, you could look into the Alphacool concept of AIOs with quick connects (or any other vendor's) and combine an Eiswolf GPX Pro for the 2080 Ti with an Eisbaer CPU cooler in one loop. IMHO less hassle than a custom water loop, and easily expandable with add-on rads... :thumb:






A custom loop is arguably more pleasing to the eye, but quick connects IMHO are easier to install...

Greetings
Medizinmann


----------



## Mooncheese

JustinThyme said:


> Worst junk to hit the market ever in terms of performance. Tried several and they were all fails. Best results are to have a seperate cooler for each. CPU and VRM. Monoblocks are aesthetically pleasing but performance is garbage.
> 
> This is how you do it!
> 
> Or better yet second pic with Optimus block.


CPU is at 60C under full instruction set and VRM at 40C (vs 100C with factory heat-sink). 

How are separate coolers better? Coolant enters the block and hits the CPU die first, then goes to the VRM. I mean, that may be what actually happened: if you don't actually read and follow the instructions, you can easily reverse the inlet and outlet and have the coolant enter the monoblock in reverse order, over the VRM first and then the CPU, and that will result in less-than-stellar temps on the CPU. Then you can take to the internet and spout nonsense such as "CPU monoblocks are garbage", and everyone who doesn't know any better assumes that "CPU monoblocks are garbage".

You actually don't need a separate cooler for each. A monoblock looks way better, and you need fewer short line runs and fittings, and therefore fewer potential leak sources (the chance of a leak grows with the total number of fittings in the loop: if each fitting has, say, a 5% chance of failing, a loop with 15 fittings is noticeably more likely to spring at least one leak than a loop with 10). Less complicated is better.
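That fitting-count argument is just the "at least one failure" calculation. A quick sketch, using the hypothetical 5% per-fitting figure from the post (real fittings fail far less often than that):

```python
# Probability that at least one fitting leaks, assuming independent failures.
# The 5% per-fitting figure is the hypothetical from the post, not real data.

def p_any_leak(p_per_fitting, n_fittings):
    """P(at least one of n independent fittings fails)."""
    return 1 - (1 - p_per_fitting) ** n_fittings

print(f"{p_any_leak(0.05, 15):.1%}")  # 15 fittings -> 53.7%
print(f"{p_any_leak(0.05, 10):.1%}")  # 10 fittings -> 40.1%
```

So under that (pessimistic) assumption, five fewer fittings knocks a meaningful chunk off the chance of any leak at all.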

As shown around 21:45 mark in Hwinfo64 during Prime95, Small FFT: 

VRM MOS: (VRM MOSFET) 41C MAX

CPU Core: 58-60C MAX 

https://youtu.be/BAsInofP3SQ?t=1299

If you like I can dredge up an older post of mine here created around the time of the build (Nov 2017) with pictures showing peak temps for both CPU and VRM MOS after 1 hour straight of Prime95 small FFT. Temps were like 65C max on CPU and 45C on VRM. 

Just getting ready for the "you only ran Prime95 briefly lolz" retort.


----------



## Shawnb99

Mooncheese said:


> CPU is at 60C under full instruction set and VRM at 45C (vs 100C with factory heat-sink).
> 
> 
> 
> How are separate coolers better? Coolant enters block and hits CPU die first, then goes to VRM? You actually don't need a separate cooler for each, monoblock looks way better, you need less fittings, is less complicated. Not sure what you did wrong.
> 
> 
> 
> As shown around 21:45 mark in Hwinfo64 during Prime95, Small FFT:
> 
> 
> 
> VRM MOS: (VRM MOSFET) 41C MAX
> 
> 
> 
> CPU Core: 58-60C MAX
> 
> 
> 
> https://youtu.be/BAsInofP3SQ?t=1299




It doesn't matter which component the coolant enters first; the coolant temperature is essentially constant around the loop. The reason monoblocks suck is that they usually sacrifice a better mount on the CPU so they can also cool the VRMs.


----------



## Mooncheese

Medizinmann said:


> You look into i.e. JayzTwoCents channel - and as a rule of thumb 240mm rad per component – that is without OC – I would go for at least 360mm and either lower fan speed for less noise or have more headroom for OC.
> 
> If you aren’t into the aesthetics of your build – you could look into the Alphacool concept of AIOs with quick connects(or any other Vendor) – and combine an Eiswolf GPX Pro for 2080TI with an Eisbaer CPU-Cooler in one loop. IMHO less hassel than a custom waterloop and easily expandable with add on Rads…:thumb:
> 
> https://www.youtube.com/watch?v=VD_Z6QL1dI0
> 
> A custom loop is arguable more pleasing for the eye - but quick connects IMHO are easier to install...
> 
> Greetings
> Medizinmann


The actual rule of thumb generally accepted by both the manufacturers and enthusiasts is one 120mm x 25mm radiator per component (per ~100W, I believe), with another 120mm if overclocking:

https://www.ekwb.com/blog/how-big-should-my-radiator-be/

That said my philosophy is as much radiator surface area as your wallet and chassis (if you don't want to route an external rad) can handle. 

I'm running a slim 420 SE (single fan, 25mm) and a 360 PE (push-pull, I believe 40mm) for just an i7 8700k @ 5.0 GHz (170W) and a 2080 Ti @ 2000 MHz @ 1.025v (~325W). Basically the equivalent of a 360 PE (360x40mm) per component. This makes for a whisper-quiet experience with amazing load temps.
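For what it's worth, that EK rule of thumb is easy to sketch (120mm of radiator per component, plus another 120mm for overclocking; rough guidance only, since rad thickness, fin density and fan speed all matter at least as much):

```python
# Back-of-the-envelope radiator sizing using the old "120mm per component,
# plus 120mm if overclocking" rule of thumb from EK's blog linked above.
# Treat the result as a floor, not a target.

def min_rad_mm(n_components, overclocked=True):
    """Minimum total radiator length in mm (counted in 120mm units)."""
    return 120 * (n_components + (1 if overclocked else 0))

# Overclocked CPU + GPU loop:
print(min_rad_mm(2))                    # 360 (e.g. one 360mm rad)
print(min_rad_mm(2, overclocked=False)) # 240
```

My own loop (420 + 360) is roughly double that minimum, which is what buys the low fan speeds.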


----------



## Mooncheese

Shawnb99 said:


> It doesn’t matter what the coolant enters first, the temperature is constant. The reason monoblocks suck is they usually sacrifice better mounting on the CPU so they can also cool the VRM’s.


That monoblock I'm using is like 2 lbs of nickel coated copper. It has enough thermal mass to avoid heat-soak. Coolant flow direction actually does matter, it's why manufacturers instruct which way the inlet and outlet should go. If it didn't matter they wouldn't even say anything.

Let me guess, you also tried a monoblock ignoring the coolant flow instructions and had less than satisfactory results?


----------



## Shawnb99

Mooncheese said:


> That monoblock I'm using is like 2 lbs of nickel coated copper. It has enough thermal mass to avoid heat-soak. Coolant flow direction actually does matter, it's why manufacturers instruct which way the inlet and outlet should go. If it didn't matter they wouldn't even say anything.
> 
> Let me guess, you also tried a monoblock ignoring the coolant flow instructions and had less than satisfactory results?


Yeah, that must be it. I'm too stupid to read.

Coolant flow direction into some blocks matters; that's why they have inlet and outlet ports. My point was that no item heats up the coolant enough to affect the other items, i.e. GPU before CPU or CPU before GPU makes no difference.

Monoblocks are crap compared to a block for just the CPU.


----------



## JustinThyme

Mooncheese said:


> CPU is at 60C under full instruction set and VRM at 40C (vs 100C with factory heat-sink).
> 
> How are separate coolers better? Coolant enters block and hits CPU die first, then goes to VRM? I mean that may have been what actually happened, if you don't like actually read and follow the instructions you can easily reverse the inlet and outlet and have the coolant enter the monoblock in reverse order, over the VRM first and then your CPU, and that will result in less-than-stellar temps on the CPU. Then you can take to the internet and spout nonsense such as "CPU monoblocks are garbage" and everyone who doesn't know any better assumes that "CPU monoblocks are garbage".
> 
> You actually don't need a separate cooler for each, monoblock looks way better, you need less short line runs and fittings and therefore less potential leak sources (the chance of a leak goes up with the total number of fittings in a loop, i.e., if there is a 5% chance fitting failure and you have 15 fittings there is a chance that one of them will fail whereas with 10 fittings there is less of a chance that any will fail) less complicated is better.
> 
> As shown around 21:45 mark in Hwinfo64 during Prime95, Small FFT:
> 
> VRM MOS: (VRM MOSFET) 41C MAX
> 
> CPU Core: 58-60C MAX
> 
> https://youtu.be/BAsInofP3SQ?t=1299
> 
> If you like I can dredge up an older post of mine here created around the time of the build (Nov 2017) with pictures showing peak temps for both CPU and VRM MOS after 1 hour straight of Prime95 small FFT. Temps were like 65C max on CPU and 45C on VRM.
> 
> Just getting ready for the "you only ran Prime95 briefly lolz" retort.


Your choice to run what you like. I'm speaking from experience. You WILL get better performance 100% of the time with separate blocks; this one has been beaten to death and proven many times over.

Running in reverse? Even EK will tell you that's a no-no on a CPU block, be it mono or standalone.

I have 3 of them on my wall of shame, tried every which way they can be run: the first one from an M8E and two from X299 platforms. 45C on the VRMs is hot compared to a proper setup with separate blocks and sufficient rad surface. On the X299 RVIE I've not seen above 32C since going with a separate block, where I was getting 65 and 70 with the monojunk. Most that have been at this long enough will tell you that P95 = how to turn your PC into a space heater. Run Blender, Realbench, Cinebench, or AIDA64 with the FPU stress ticked as well as CPU. Go over to the ROG forums under the X299 section and look for the thread "To monoblock or not to monoblock": a very long thread with fail after fail.

I don't spout garbage; monojunks are garbage. They look pretty, but that's where it ends. I wouldn't give one to someone I don't even like. I leave them on my wall of shame to remind me never to go there again.


----------



## JustinThyme

Mooncheese said:


> The actual rule of thumb generally accepted by both the manufacturers and enthusiasts is 1x 120mm x 25mm radiator per component (I believe 100W) with another 120mm if overclocking:
> 
> https://www.ekwb.com/blog/how-big-should-my-radiator-be/
> 
> That said my philosophy is as much radiator surface area as your wallet and chassis (if you don't want to route an external rad) can handle.
> 
> I'm running a slim 420 SE (single fan, 25mm) and a 360 PE (push-pull, I believe 40mm) for just i7 8700k @ 5.0 GHz (170W) and 2080 Ti @ 2000 MHz @ 1.025v (~325w). Basically the equivalent of a 360 PE (360x40mm) per component. This makes for a whisper quiet experience with amazing load temps.


The 120 rule is very old news. I'm cooling a 9940X, 2x 2080 Tis and the VRMs, and a single 480 would let it run hot as hell, with liquid temps past 45C. Keep in mind that even when that rule was acceptable, it was minimal, and it dates from a time when CPU and GPU heat dissipation was well under 100 watts, more like 65 watts. My freakin' VRMs pump out more than that. When I crank my machine to max I'm maxing out a 1600-watt PSU, and I had to upgrade my UPS from 1600VA to 2200VA to keep it from going into overload just running Time Spy.

If your setup works for you, that's all fine and dandy, but not every system will perform the same.

As for maintenance, I do it every year without fail in the December-to-January time frame, to coincide with any new products I may wish to try, one of them NEVER again being a monojunk.


----------



## Nizzen

JustinThyme said:


> The 120 rule is very old news. Im cooling a 9940X, 2X 2080Tis and the VRM and a single 480 may allow it to run hot as hell with liquid temps hitting past 45C and more. Keep in mind that even when that was acceptable it was minimal and a long time ago when CPU and GPU heat dissipation was well under 100 Watts, like 65 watts. My freakin VRMS pump out more than that. I crank my machine to max and Im maxing out a 1600 watt PSU and had to upgrade UPS from 1600vA to 2200vA to keep it from going into overload from running just time spy.
> 
> If your set up works for you thats all fine and dandy but not every system will perform the same.
> 
> As for maintenance, I do it every year without fail in the December to January time frames to coincide with any new products I may wish to try, one of them NEVER again being a monojunk.


I ran a 3930k + 3x Titan X Maxwell (EK water) on one EK 480 XTX radiator. It worked for my use.


----------



## truehighroller1

You're all wrong... Phase change is where it's at..

/S


----------



## jura11

Nizzen said:


> I ran 3930k + 3x titan X maxwell (Ek water) on one ek 480 xtx radiator. It worked for my use.


Hi there 

Yup, it will work for sure. I have run a 5960X with a 4.7GHz OC and a 3-GPU setup (a GTX 1080 Ti at 2113MHz, a GTX 1080 at 2113MHz and a GTX 1080 at 2164MHz) on a 360mm 60mm-thick radiator on top and a 240mm 60mm-thick radiator on the bottom. Temperatures I have seen in rendering were 36-38°C, and in gaming very similar; I tried mining with the 3-GPU setup and temperatures were usually 38-40°C max with no issues, but on that build I ran the fans at 900-1100RPM

Not sure I would run 3 GPUs on a single radiator or just a 480mm radiator; if I did, I would need to run the fans much faster. Currently I have a 4-GPU setup with 4x 360mm radiators plus a MO-RA3 360, and most of the fans run at 650-750RPM most of the time; only the Arctic Cooling P12 PWM fans on the MO-RA3 run at higher speeds, 1000-1100RPM, and on two other radiators I run them at 800RPM. That's in a Caselabs M8 with pedestal, and the build is quiet under load (the Superflower 8Pack 2000W PSU fan is the loudest fan in the build). Water delta T in gaming is 1-3°C max, and in rendering 2-4°C max with the fans around 650-750RPM; in the old build, water delta T was 4-6°C in gaming, 6-8°C in rendering and 8-12°C in mining

Hope this helps 

Thanks, Jura


----------



## Mooncheese

Shawnb99 said:


> Yeah that must be it. I'm to stupid to read.
> 
> Coolant flow into some blocks matter that's why they have inlet and outlet ports. My point was no item heats up the coolant to affect other items. IE GPU before CPU or CPU before GPU makes no difference.
> 
> Monoblocks are crap compared to a block for just the CPU.


Actually it does make a difference, irrespective of flow rate. 

What makes more sense, 380W 2080 Ti outlet going directly into CPU inlet?

Or

380W 2080 Ti outlet to radiator > radiator outlet to CPU inlet (after conveying heat to air via radiator, hence cooling the water as it travels through the rad)?

Same concept with a monoblock, yes flow direction 100% matters, if you reverse the inlet and outlet now you have water going over potentially 120C (on air) VRM then the CPU core. 

Yes, I have read this argument one thousand times, that with adequate flow rate this doesn't matter but that's not actually how thermodynamics works. 

I can tell you right now that if you plumb a 380W GPU outlet directly into CPU inlet with no radiator in between it's going to affect your CPU temps. 

The reason I'm seeing 60C load under full AVX instruction set with 8700k @ 5.0 GHz @ 1.38v (see vid in last post) with a load on the GPU (~200W) is because I applied systems thinking to my loop. Systems thinking is looking at each component in relation to everything around it. 

Just putting rads in a system, setting all fans up as intake, plumbing the GPU right into the CPU: this is mindless, and I can tell you right now that you will NEVER see a 58C load on a 170W+ CPU, with a 200W GPU also running, with that approach.


Also, in the vid I posted above with 58C peak CPU core temp at 1.38v (170W) during Prime95 small FFT whilst also mining with the GPU, those rads are set up to push OUT of the case. 

You will read comments where people state that having the rad fans as intake into the PC is better (the argument is that outside air is cooler), but I completely disagree. When you intake that heat into the system it's absorbed by all of the components in question: the coolant in the tubing, the reservoir, the GPU block and back-plate, your system memory, the motherboard, all of your storage, your PSU, and even passively by the radiators themselves. You want the rear fan set up as intake and all rad fans set up as exhaust, to evacuate that heat OUT of the case.

Go ahead and run Prime95 small FFT with a load on the GPU and try to get 58C on the core of 8700k / 9900k @ 5.0 GHz with nearly 1.4v with all of the rads intaking into the case and your GPU feeding right into the CPU and post your results. 

All of these things matter. You want one rad in between your components: ideally the lower-TDP component (usually the CPU) goes to a rad, then to your pump-res, then to the higher-TDP component (GPU), then to the largest radiator (for me that's the PE 360 in push-pull), and you exhaust the heat OUT of your case.

This is applying systems thinking to cooling.


----------



## z390e

Mooncheese said:


> Go ahead and run Prime95 small FFT with a load on the GPU and try to get 58C on the core of 8700k / 9900k @ 5.0 GHz with nearly 1.4v with all of the rads intaking into the case and your GPU feeding right into the CPU and post your results.



Quoted for truth, given that this is literally my exact setup and can confirm.


----------



## Shawnb99

Mooncheese said:


> Actually it does make a difference, irrespective of flow rate.
> 
> What makes more sense, 380W 2080 Ti outlet going directly into CPU inlet?
> 
> Or
> 
> 380W 2080 Ti outlet to radiator > radiator outlet to CPU inlet (after conveying heat to air via radiator, hence cooling the water as it travels through the rad)?
> 
> Same concept with a monoblock, yes flow direction 100% matters, if you reverse the inlet and outlet now you have water going over potentially 120C (on air) VRM then the CPU core.
> 
> Yes, I have read this argument one thousand times, that with adequate flow rate this doesn't matter but that's not actually how thermodynamics works.
> 
> I can tell you right now that if you plumb a 380W GPU outlet directly into CPU inlet with no radiator in between it's going to affect your CPU temps.
> 
> The reason I'm seeing 60C load under full AVX instruction set with 8700k @ 5.0 GHz @ 1.38v (see vid in last post) with a load on the GPU (~200W) is because I applied systems thinking to my loop. Systems thinking is looking at each component in relation to everything around it.
> 
> Just putting rads in a system, setting up fans as intake, plumbing the GPU right into the CPU: this is mindless, and I can tell you right now that you will NEVER see a 58°C load on a 170W+ CPU with a 200W GPU also running with that approach.
> 
> 
> Also, in the vid I posted above with 58C peak CPU core temp at 1.38v (170W) during Prime95 small FFT whilst also mining with the GPU, those rads are set up to push OUT of the case.
> 
> You will read comments where people state that having rad fans as intake into the PC is better (the argument being that outside air is cooler), but I completely disagree. When you intake that heat into the system it's absorbed by everything in its path: the coolant in the tubing, the reservoir, the GPU block and back-plate, your system memory, the motherboard, all of your storage, your PSU, and even passively by the radiators themselves. You want the rear fan set up as intake and all rad fans set up as exhaust to evacuate that heat OUT of the case.
> 
> Go ahead and run Prime95 small FFT with a load on the GPU and try to get 58C on the core of 8700k / 9900k @ 5.0 GHz with nearly 1.4v with all of the rads intaking into the case and your GPU feeding right into the CPU and post your results.
> 
> All of these things matter. You want a rad in between your components: ideally the lower-TDP component (usually the CPU) goes to a rad, then to your pump-res, then to the higher-TDP component (the GPU), then to the largest radiator (for me that's the PE 360 in push-pull), and you want to exhaust the heat OUT of your case.
> 
> This is applying systems thinking to cooling.


I have temp sensors on every inlet/outlet of my loop and there's less than a degree of difference between any of them. Loop order doesn't matter and never has. It's a myth.


----------



## truehighroller1

Shawnb99 said:


> I have temp sensors on every inlet/outlet of my loop and there's less than a degree of difference between any of them. Loop order doesn't matter and never has. It's a myth.


The inlet for the CPU block is put in a certain position to get fresh flow centered over the CPU cores, so I agree with him: you're trolling.


----------



## jura11

Hi @Mooncheese

Reversing flow on GPUs usually adds 2-4°C in my testing, on every block I have tested or run. On the EK Vector RTX 2080 Ti WB it added around 2-4°C in my case; I tried something similar on Bykski and Heatkiller IV RTX 2080 Ti blocks, and only on the Phanteks Glacier WB have I seen higher deltas between the IN and OUT, which if I remember correctly were 5-6°C.

Reversing flow on a CPU block can make a bigger difference in temperatures, because most blocks are directional or use a jet plate; in most cases reversing it will restrict flow and therefore your temperatures will rise.

There are several tests with the GPU before or after the CPU, and in most cases it really doesn't make a difference whether the GPU feeds directly into the CPU or the other way around; the difference in temperatures is minimal to none.

Not enough radiator space will result in higher water temperatures and a higher water delta T, which means you will need to run the fans much faster than you would with more radiators.

As for monoblocks, I have tried only two of them, on an 8086K and a 5960X, and both performed around 5-7°C worse than the EK Supremacy EVO, which is not the best CPU water block; the older Aquacomputer Kryos HF outperformed the Supremacy EVO by a good 5-7°C as well in my case.

I built a loop for a friend: a Phanteks EVOLV ATX with an Asus Maximus X Formula, an 8086K running 5.2 GHz at 1.45v (delidded, I think) and a Palit RTX 2080 Ti with the 380W BIOS, using Bykski CPU and GPU water blocks, all cooled by a single HWLabs GTS360 360mm radiator. GPU temperatures in gaming were 42-45°C at 25-27°C ambient, and CPU temperatures in the Realbench stress test or OCCT 4.4.2 were 82-85°C on the package with fans running at 1000-1200 RPM.

I will try to post pictures of that loop if I can find them on my phone

Hope this helps 

Thanks, Jura


----------



## kithylin

I thought this was supposed to be the RTX 2080 Ti thread.. why are there 2-3 pages discussing CPU monoblocks?  There's a CPU section of the forums for that.


----------



## Intrud3r

kithylin said:


> I thought this was supposed to be the RTX 2080 Ti thread.. why are there 2-3 pages discussing CPU monoblocks?  There's a CPU section of the forums for that.


Sorry for triggering all this


----------



## z390e

Just added the Asus OC 11GB Gaming model....how much difference do people see on the VAR cards with air cooling vs a DIY liquid cooled FE card? Assuming we can push them a bit more with liquid/custom, but I see card maxing around 2050. Seems only the K|NGP|N one allows a VBIOS that can surpass that?


----------



## Mooncheese

z390e said:


> Quoted for truth, given that this is literally my exact setup and can confirm.


How hot does your 9900K get? I've been contemplating upgrading to that, but I would really only see gains in DX12 and Vulkan titles that use the additional cores, and the 9900K has only slightly better IPC; and because Spectre and Meltdown mitigation has been implemented through the BIOS, one cannot simply turn it off the way you can on the older architecture. An 8700K @ 5.2 GHz is basically as fast as a 9900K @ 5.0 GHz in single-core speed. The fact is that the 9900K can clock higher because of fab and yield improvements (a higher number of samples bin higher because less is lost to fabrication; I believe HPTC retooled their assembly line recently, but I could be mistaken). I know this to be the case with the 9900KS, but I think nearly all 9900Ks can do 5.0 GHz, whereas only about 70% of 8700Ks can. Various benchmarks show that the Spectre and Meltdown mitigations degrade performance by roughly the equivalent of 200 MHz, i.e. an 8700K without the mitigations @ 5.0 GHz is as fast as a 9900K with the protection in place via the BIOS. With the 8700K, as long as one doesn't update to a BIOS from 2018 onward, when the Spectre and Meltdown vulnerabilities became known and highly publicized (this was the true start of the fall of Intel BTW; for all we know it was AMD who discovered this and leaked it), one can simply turn off the mitigation that was implemented through Windows with InSpectre (in collaboration with Intel). Vast research has been done on Spectre and Meltdown, and the likelihood of actually being affected is about the same as the odds of getting struck by lightning. I'll take my chances for 200 MHz.
Anyhow, the point I'm trying to get at is that I would only see gains in DX12 and Vulkan titles, and only if I'm not GPU-bottlenecked at 3440x1440. Right now, honestly, I'm still bottlenecked in most of my titles, having upgraded from a 1080 Ti to a 2080 Ti at this res; I'm seeing around a 50% uplift. In reality, I can't believe how much faster the 2080 Ti actually is in some titles, most of which I still have to make my way through: No Man's Sky went from 60 FPS average with the 1080 Ti @ 2 GHz to 90 FPS with the 2080 Ti; same story with The Division 2, where my benchmark went from 71 FPS to 110 FPS @ 3440x1440; and The Witcher 3 went from 60 FPS lows to 100 in Toussaint with HD Reworked and Phoenix Lighting, everything maxed, Hairworks off. The only game that showed very little improvement was AC: Odyssey, where some CPU bottleneck is present and the GPU doesn't seem to get above 90% utilization much; I didn't see a big gain in FPS in that title for some reason.

Anyhow, yeah, I may just keep this setup until Intel finally moves off the 14nm node (I hear 10nm Ice Lake is next). There are too many titles where single-core speed is still of utmost importance, and the IPC isn't changing much on this node between the 8th, 9th and now 10th gen processors. It's sad when you realize that the actual reason Intel isn't already on a smaller node is that they had a monopoly and were milking the masses. Nvidia took a page right out of Intel's playbook when they kept the 20-series GPUs on basically the same node and architecture (very little difference in actual process size and layout between Turing and Pascal). Why make the next big jump, which costs money, when you can keep selling products on the current node because people will buy whatever you sell them?

Happy to see a resurgent AMD, honestly. I'm about to go look at overclocked 3900X vs 8700K gaming benchmarks (I already know 7nm AMD is faster in productivity). I think the 8700K is still faster in single-core speed, for the same reason the 9900K is still faster for gaming: single-core speed. AMD has more cores but can't do more than about 4.7 GHz (whereas both the 8700K and 9900K can do 5.2-5.3 GHz on a golden sample).

Anyhow, this has become quite the ramble, I think I will move on to the next comment reply. 



Shawnb99 said:


> I have temp sensors on every inlet/outlet of my loop and there's less than a degree of difference between any of them. Loop order doesn't matter and never has. It's a myth.


Yeah, if it didn't matter EKWB wouldn't recommend putting a rad before their pump: 

"Radiators are usually placed just before the reservoir in order to remove heat before the pump, but you can also have radiators between arrays of blocks (largest radiator before GPU and smallest radiator before CPU for optimal thermal performance if your chassis can accommodate it)."

https://www.ekwb.com/blog/does-loop-order-matter/

They recommend placing a radiator before the reservoir, which flows into the pump; inherent in this advice is putting the radiator after the component in question. If loop order didn't matter they wouldn't even recommend this.

As for measuring the temperature at the outlet of each of your components and finding a ~1°C variance, there are multiple factors at play. You may have a mostly non-restrictive loop with one or two radiators, only a CPU block and a single GPU, and a pump running at 100% RPM. But take my loop: two very large radiators (EK 420 SE and 360 PE), a CPU + VRM monoblock (more channels, so it's about as restrictive as a GPU block), and a pump running at 50% RPM, because pumps make noise and you can hear this one above 50% RPM with extremely quiet fans (120mm Eloop B12-PS PWM and Phanteks PH-F140MP) running at 40% RPM. I have actual 5.1 surround and don't have the isolation that headphones provide (along with the attendant discomfort, and the fact that emulated surround is not as good as actual surround; bass in headphones is a gimmick, whereas when the floor shakes because of an explosion, that's closer to how it feels in real life). I'm positive that if I were to plumb my GPU before the CPU I would have higher CPU temps, because the lower flow rate and the restriction of the aforementioned large radiators and the extra channels of the CPU monoblock would allow marginal heat-soak into the water: the water sits next to the hot component for a longer period of time and therefore absorbs more heat.

I found this while trying to gauge what actual research on the matter shows, and yes, for the most part loop order does not matter for the average person who is just going to run a single radiator and run their pump and fans at 70-100% RPM when the system is under duress.

Another reason I don't run my pump at more than 50% RPM: as with anything heat- and electricity-based, given the nature of entropy, a pump running at 100% RPM will die in half the time of the same pump running at 50% RPM. Pump failure = fried components, fast. You had better be watching your temps in the OSD; we are talking about an increase from, say, 50°C to 100°C+ in the span of a minute or less, because with no water flow the water right next to the hot component immediately begins to boil, which creates pressure by way of expansion (the reason pot lids jump off the pot when boiling water) and can basically cause your loop to burst. The question is whether said water hits your PSU and shorts your entire system, possibly starting a fire in the process, before or after your GPU literally cooks to death (the GPU core is the hottest part at nearly 400W TDP) and shuts down the PC. Will your GPU work again after that? Sometimes they do, sometimes they don't come back (the temp trigger is 99°C on Nvidia, I believe).

Yeah fun times, if you intend to run your pumps at 100% RPM I would probably run two of them in tandem with the secondary at 50% RPM solely as a back-up for primary. 

Conversations that corroborate this elsewhere: 

Fronzel

https://rog.asus.com/forum/showthre...op-does-the-order-of-components-really-matter

"The temperature of the water never varies more than a few degrees C through the entire loop in a typical setup.
However, this all comes down to waterflow. With good flow you are never even going to see 1 degree C of difference.

I don't have temperatures to show from my own system, but we can do some simple math to illustrate:
Water has a heat capacity of about 4182 J per kg per K at sea level and room temperature.
Let's take a MCPx35 with 1050 L/h, and assume you get less than half that flow in an actual WC loop: 500 L/h.
Mass moving through the system every second (500/60/60) = 0.139 kg
Heat capacity of that mass (0.139 × 4182 J) ≈ 580 J per degree Kelvin

This means you have 580 Watts of heat being removed from your system for every one degree Kelvin/Celsius of difference between inlet and outlet temperature on the radiator. And this is the highest temperature difference you are going to have in a loop with this flow rate.

With a very low flow rate you will get higher temperature differences through the loop, and placement of components in relation to radiators might become a factor.

And the conclusion: now i want a flow meter and temperature sensors in my loop"
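The quoted arithmetic generalizes to any heat load and flow rate. A minimal sketch in Python (the 500 L/h flow figure and the 4182 J/kg·K heat capacity are the assumptions from the quoted post, not measurements):

```python
# Worst-case water temperature rise across a component in a loop.
# Assumptions (from the quoted post): water heat capacity ~4182 J/(kg*K),
# density ~1 kg per litre, steady-state flow.

HEAT_CAPACITY = 4182.0   # J per kg per K, water at room temperature
DENSITY = 1.0            # kg per litre (close enough for water)

def loop_delta_t(heat_load_w, flow_l_per_h):
    """Water temperature rise (K) across a component dumping
    heat_load_w watts into the coolant at the given flow rate."""
    mass_flow = flow_l_per_h * DENSITY / 3600.0   # kg per second
    return heat_load_w / (mass_flow * HEAT_CAPACITY)

# At 500 L/h, each 1 K of delta-T carries ~580 W, as the post says:
print(round(loop_delta_t(580, 500), 2))   # 1.0
# A 380W GPU at the same flow warms the water well under 1 K:
print(round(loop_delta_t(380, 500), 2))   # 0.65
# At a badly throttled 60 L/h, loop order starts to matter:
print(round(loop_delta_t(380, 60), 2))    # 5.45
```

This is the crux of the whole debate: the delta-T scales inversely with flow rate, so loop order only becomes visible once flow drops far below what a healthy D5 delivers.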



jura11 said:


> Hi @Mooncheese
> 
> 
> 
> Reversing flow on GPUs usually adds 2-4°C in my testing, on every block I have tested or run. On the EK Vector RTX 2080 Ti WB it added around 2-4°C in my case; I tried something similar on Bykski and Heatkiller IV RTX 2080 Ti blocks, and only on the Phanteks Glacier WB have I seen higher deltas between the IN and OUT, which if I remember correctly were 5-6°C.
> 
> Reversing flow on a CPU block can make a bigger difference in temperatures, because most blocks are directional or use a jet plate; in most cases reversing it will restrict flow and therefore your temperatures will rise.
> 
> There are several tests with the GPU before or after the CPU, and in most cases it really doesn't make a difference whether the GPU feeds directly into the CPU or the other way around; the difference in temperatures is minimal to none.
> 
> Not enough radiator space will result in higher water temperatures and a higher water delta T, which means you will need to run the fans much faster than you would with more radiators.
> 
> As for monoblocks, I have tried only two of them, on an 8086K and a 5960X, and both performed around 5-7°C worse than the EK Supremacy EVO, which is not the best CPU water block; the older Aquacomputer Kryos HF outperformed the Supremacy EVO by a good 5-7°C as well in my case.
> 
> I built a loop for a friend: a Phanteks EVOLV ATX with an Asus Maximus X Formula, an 8086K running 5.2 GHz at 1.45v (delidded, I think) and a Palit RTX 2080 Ti with the 380W BIOS, using Bykski CPU and GPU water blocks, all cooled by a single HWLabs GTS360 360mm radiator. GPU temperatures in gaming were 42-45°C at 25-27°C ambient, and CPU temperatures in the Realbench stress test or OCCT 4.4.2 were 82-85°C on the package with fans running at 1000-1200 RPM.
> 
> I will try to post pictures of that loop if I can find them on my phone
> 
> Hope this helps
> 
> Thanks, Jura


Great insights; yes, a perfect example of how high flow rate renders loop order irrelevant. With an EK block with a dense micro-fin array over the CPU, you have to place it before the VRM, because that restriction slows the flow through the entire component, and the other way around the water would heat up over the VRM first. This basically supports my earlier assertion that flow direction 100% matters in a monoblock, which is basically the same design (albeit with no channels for memory) as a GPU block, AND it also supports my assertion that component order actually DOES matter, unless the system builder intends to run the pump(s) at 100% RPM and is ready to tolerate the noise and shortened lifespan, or to go through the hassle of running pumps in tandem. I'm actually using a Phanteks Glacier block, and because there are fewer fins spaced further apart there is less resistance in the block, so flow direction should not matter; however, I still placed the inlet over the CPU and the outlet after the VRM, purely to account for my reduced flow rate.

As for the performance difference you saw between the monoblock and the dedicated CPU block with the 8086K: because you are cooling both the VRM and the CPU core, the heat generated by the two components reinforces each other. If the processor is using a lot of voltage and current it will create near-exponentially more heat when run at, or especially beyond, its limit, and that heat will warm the entire monoblock. If you intend to push your component beyond the limit with a dangerous overclock, i.e. 5.3 GHz @ 1.425v (it will run like this for a year or two, but the component is going to burn out over 1.4v), all that extra voltage (and therefore current, because the hotter the component, the more current is lost as heat and the less efficient it is) is going to heat up the VRM, which heats up the monoblock, which heats up the CPU.

Look at the 2080 Ti. Before I reduced my voltage from 1.068v (or even 1.093v; I had to update HWiNFO64 before it could see the sensor, and by then I had already found enough sense to undervolt), simply opening the door to the PC and gently putting my hand on the back-plate (after grounding myself), it was nearly too hot to touch. I couldn't hold my hand on it. After dropping from 2100 MHz at default voltage on the Galax 380W BIOS @ 126% power limit (I can't believe the FE can pull 380W with factory power delivery; this is a huge increase over the 1080 Ti, which can't do more than 300W) down to 2010-2025 MHz @ 1.025v on the frequency curve, I can actually keep my hand on the back-plate; it's still hot, but not to the point of burning skin. To do this: click on the telemetry window, hit Ctrl+F to pull up the curve, take 100 MHz off of where you are stable at default voltage, place the top of the curve at that frequency at 1.025v, flatten the curve all the way to the right, apply the settings, and save to the 3rd profile to see the performance and temp difference in game. For me this costs maybe 5 FPS @ 3440x1440, and the temps on the core are about 3-5°C lower as the card went from drawing 380W to 320W. Anyhow, the CPU is the same story: the temp of the VRM goes up with voltage.

A CPU block will always outperform a monoblock, but monoblocks are nice because they reduce the complexity of the loop (not to mention they look better than CPU blocks). If you're on hard tubing this becomes important, because as the number of fittings in the system goes up, so does the likelihood of an O-ring failure. It happens: https://youtu.be/MhlL_IWJqag?t=430

For me, I went with a monoblock because I wanted to cool the VRM; I didn't just want to put a CPU block on it and use the factory VRM heat-sink. Buying a separate VRM heat-sink to use alongside a CPU block sounded like it would introduce too much complexity and inherent risk into the system for a marginal gain. (By your account the CPU block only performed about 5°C cooler than the monoblock, and for all we know that 8086K was running at high voltage and wattage, and yeah, that heat is going to heat the entire block.) I was 100% going to cool the VRM with water, and adding a separate VRM block with two additional failure points (should I elect to go with hard tubing, with the potential O-ring failure from not perfectly chamfering a section of tubing) was something I avoided.




kithylin said:


> I thought this was supposed to be the RTX 2080 Ti thread.. why are there 2-3 pages discussing CPU monoblocks?  There's a CPU section of the forums for that.


Why are there people who insist that the discussion of a particular graphics card be reduced to "which air cooler is the best", "are my temps ok?", "which 2080 Ti looks the best", etc.? We are discussing the 2080 Ti, just in the broader context of cooling it with liquid, and we have at least one person who is learning a lot about it in the process (Intrud3r).

All of my recent comments pertain directly to 2080 Ti performance, longevity, acoustics, etc. If you want to have a simplistic conversation about the GPU try Nvidia Reddit: https://www.reddit.com/r/nvidia/comments/f71tu3/transporting_2080ti_in_suitcase_quick_question_on/



z390e said:


> Just added the Asus OC 11GB Gaming model....how much difference do people see on the VAR cards with air cooling vs a DIY liquid cooled FE card? Assuming we can push them a bit more with liquid/custom, but I see card maxing around 2050. Seems only the K|NGP|N one allows a VBIOS that can surpass that?


Define DIY. 

If your definition of DIY is an NZXT Kraken G12 and a single 120mm AIO, the air cooler may rival or outperform the AIO, and there is no concern of the VRMs cooking to death.

If your definition of DIY is EK's aluminum Fluid Gaming series (I think they offer a 360mm rad now), then that will outperform any AIB air cooler, but you're asking for galvanic corrosion should you forget not to mix aluminum with copper on your next GPU and monoblock upgrade. This also limits your selection of GPUs, because aftermarket pre-blocked variants (i.e. the Gigabyte Aorus Waterforce Extreme) are copper only, so now you're looking at doing an actual loop anyway or choosing a different GPU.


----------



## sultanofswing

I have a total of 7 temp sensors in my loop as well as an extremely accurate flow sensor.

My setup consists of 3 360mm radiators and a D5 pump.
I see a difference of about 0.8-1°C between the GPU in vs the out.

I also run my pump at 100% max speed; in my setup that gives me 245 L/h (just over a GPM). If I start running the pump below 70%, flow falls off really fast (125 L/h).
Without a very accurate flow sensor, assuming you can run a pump at half speed may be seriously hurting your performance.
Also, D5 pumps are rated all the way up to 24v, so I don't think running them at 100% (12v) really affects them.
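For reference, the unit conversion behind "just over a GPM", as a quick sketch (assuming US gallons):

```python
# Converting the quoted pump flow figures from litres/hour
# to US gallons/minute (1 US gallon = 3.785411784 L).

LITRES_PER_US_GALLON = 3.785411784

def l_per_h_to_gpm(l_per_h):
    """Litres per hour to US gallons per minute."""
    return l_per_h / LITRES_PER_US_GALLON / 60.0

print(round(l_per_h_to_gpm(245), 2))  # 1.08 -- "just over a GPM" at full speed
print(round(l_per_h_to_gpm(125), 2))  # 0.55 -- the below-70% figure
```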


----------



## Mooncheese

Just to sum up, loop order does matter if your flow-rate isn't high. 

The perfect analog to the logic is the fact that the inlet and outlet can work either way on the Phanteks Glacier 2080 Ti block, because it is less restrictive than a block with greater fin density on the GPU core. The block itself is a near-perfect microcosm of a loop: think of the VRM section as a second component in series with the core. Without high flow rate you wouldn't want to direct the water over the VRM before the core; with high flow rate this doesn't matter much.

About pump failure: a typical D5 140 pump has an MTBF of 50k hours. I have run my PC 24 hours a day since Nov 2017 (if not gaming, then mining), which makes for approximately 800 days out of roughly 2,083 (50,000 / 24), or about 40% of its rated lifespan @ 25°C water temp.

https://linustechtips.com/main/topic/602797-ek-140-d5-life-span/
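Spelling out that wear math (a sketch: the 50,000-hour MTBF figure is the one claimed for a D5 in the linked thread, 24 h/day duty is from the post, and a Nov 1, 2017 start date is assumed):

```python
from datetime import date

MTBF_HOURS = 50_000    # claimed D5 rating at 25C water temp (linked thread)
HOURS_PER_DAY = 24     # pump running around the clock (gaming or mining)

mtbf_days = MTBF_HOURS / HOURS_PER_DAY                   # days of 24/7 use
days_run = (date(2020, 2, 3) - date(2017, 11, 1)).days   # Nov 2017 to this post
fraction_used = days_run / mtbf_days

print(round(mtbf_days))         # 2083
print(days_run)                 # 824
print(round(fraction_used, 2))  # 0.4 -- roughly 40% of the rated life so far
```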

Anyone out there with this pump going on year 3 might want to start considering adding an auxiliary pump. I'm thinking about adding one now that I have to take the loop apart to change the soft tubing, which has become discolored from direct and indirect sunlight. I've decided to stay on soft tubing in the end; I'll take a significantly reduced chance of a leak over tubing that looks better but offers few other benefits (PETG and acrylic).

Pump RPM directly affects pump longevity, and you only need to run your pump at 100% RPM to maintain a high flow rate in order to circumvent a poorly configured loop, i.e., a 400W GPU right before your pump (which EK recommends against in the aforementioned link) or before your CPU.

Also the noise, dear god, the buzz is very loud with a D5 140 RGB EX-RES at 100% RPM. It was literally the first thing I did after booting into Windows after piecing together my build: figuring out which PWM header it was connected to and going into the BIOS to turn it down to 50% RPM.

But hey, if you've got pumps in tandem, or don't mind your loop literally exploding every three years because you didn't run out and spend $150 on a new pump, and you use headphones and don't mind the noise, then by all means, put your GPU right before your CPU and just turn the pump up to 100% RPM.


----------



## Mooncheese

sultanofswing said:


> I have a total of 7 temp sensors in my loop as well as an extremely accurate flow sensor.
> 
> My setup consists of 3 360mm radiators and a D5 pump.
> I see a difference of about 0.8-1°C between the GPU in vs the out.
> 
> I also run my pump at 100% max speed; in my setup that gives me 245 L/h (just over a GPM). If I start running the pump below 70%, flow falls off really fast (125 L/h).
> Without a very accurate flow sensor, assuming you can run a pump at half speed may be seriously hurting your performance.
> Also, D5 pumps are rated all the way up to 24v, so I don't think running them at 100% (12v) really affects them.


Yes, exactly: running the pump at 50% (or it could be 70%, I need to double check in the BIOS) like I am reduces the flow rate. You may have slightly more restriction in your loop with 3x 360 radiators, but I'm right behind you with a 420, a 360 and a monoblock. Check my last post above: the MTBF for a D5 is 50k hours. If rough math puts you even halfway there, I would either replace the pump or add an auxiliary pump for the eventual failure, because you're running your pump at 100% RPM.

We could get a lot of insight if you were willing to run your pump at 50% RPM, allow for adequate heat-soak, and then test the temp difference at the outlet of every component.


----------



## JustinThyme

truehighroller1 said:


> You're all wrong... Phase change is where it's at..
> 
> /S


LOL, been there and done that, and yep, sub-ambient without filling an LN2 pot. My problem is I got carried away and cooked too many boards from condensation. I've been thinking about a chiller that's not sub-ambient but will keep the loop right at ambient, which would buy me an 8°C drop in liquid temp.

As for the loop order debate, it doesn't amount to a hill of beans. Once you are warmed up and the liquid gets to temp, it's less than 1°C difference measured anywhere in the loop, with one exception: *you are using weak a$$ pump/s.* I run 3, all near 100% all the time, only staggered slightly in speed to cancel out the high-pitched whine from the D5s.

The only things on curve controllers are the rad fans. For EK to say you should run a rad before the CPU but in the same breath say it's OK to run the block backwards is, well, lunacy. Mostly EK lunacy, and why I don't use their blocks anymore. "Oh, we have this specially designed jet plate at the inlet that causes turbulence over the fins for improved performance." Next page: "You can run the block reverse flow with little to no performance change." They were OK like 5 years ago, but now they're more worried about RGB than performance and have done little to change the basic fluid dynamics. But they look purdy! That's what's driving their market ATM.

People snag it up and get all proud over having 20 EK badges in their case, all accented by RGB lighting. All I've got left is 2 rads that work fine, but the badges got taken off long ago when I first put them in; I painted the rads first, as the way they came from the factory direct order was insulting. The EK badges were all covered up with film and nice and shiny though. I'd offer them up for prizes if I hadn't tossed them in the trash.

I usually split my rads up between components, not for performance but for easier runs. The only things that matter in loop order are res before pump, and res above pump inlet. Oh, one more thing: in through the inlet port and out through the outlet.

Right now mine is: res > D5 Next pump mounted to bottom of res > inlet to a pair of D5s mounted to a Bitspower serial head > CPU block > VRM block > 420 rad in the penthouse > 2x 2080 Tis in parallel > filter > 480 fatboy in the basement > 360 slim going down the front, and back to the res.

I've changed this a million different ways in search of the great loop order god that never appeared. The only thing that screwed up performance was a distro plate that paralleled the CPU and GPUs, causing more total flow but most of it going to the GPUs. Yanking that piece of crap back out dropped my CPU temp under heavy load by 4°C, and the total flow rate of the loop went from 620 L/h to 540 L/h. I tried that BS about a rad before the CPU and all it did was make me bend more tube and buy more fittings; sometimes it's a better fit though, depending on the case. Stress test runs still gave me the same results: 14 cores @ 4.8 at 1.220 Vcore, 70-80°C from coldest to hottest core. Idles around 30°C. All with an ambient of 22°C. The only time I see temps under 70 is if I either crack the window with a fan in the winter and chap the wife, or in summer when I put a 500 BTU window AC in that same window behind my PC and direct most of the flow to the back cover inlet.

Other than that, I can run them in any arbitrary order, and for a given ambient there is no change in the thermals of any component. Feel free to argue all day about how the temp leaving a 3 lb solid copper block is 1°C higher with a 0.06 PSI pressure drop. In the end all that matters to me are the numbers in my 5GHz club on HEDT. If you can't hit 5 GHz on a 9900K on air, they have special schools for those people. If you want the best cooling results: don't use monoblocks, get as much rad area as you can, decent high static pressure fans, and a good fan controller like an Aquaero, so it can stealth when you're posting on OCN and ramp up as necessary when loaded up. Otherwise why bother; buy an air cooler for your CPU and call it a day, or feel special and call it water cooled with a pair of AIOs.

None of my statements are based on arguments and conjecture. It's all personal experience over many years and many wasted $$$. My intention is not to cause ill feelings, just to save people from spending money they don't have to, because my dumba$$ already did it for you, reaching for the grail that doesn't exist. That is, unless you go extreme like phase change, a serious sub-ambient chiller, or LN2.


----------



## sultanofswing

Mooncheese said:


> Yes, exactly: running the pump at 50% (or it could be 70%, I need to double check in the BIOS) like I am reduces the flow rate. You may have slightly more restriction in your loop with 3x 360 radiators, but I'm right behind you with a 420, a 360 and a monoblock. Check my last post above: the MTBF for a D5 is 50k hours. If rough math puts you even halfway there, I would either replace the pump or add an auxiliary pump for the eventual failure, because you're running your pump at 100% RPM.
> 
> We could get a lot of insight if you were willing to run your pump at 50% RPM, allow for adequate heat-soak, and then test the temp difference at the outlet of every component.


I have tested it; there is about an 8°C difference in water temp from 50% to 100%. At 100% my water temp never goes over 30°C even with my 23°C ambient, and that is the way I plan to keep it.


----------



## z390e

Mooncheese said:


> How hot does your 9900K get?


No problems really; I run the Corsair H150i PRO 360mm and haven't had thermal issues or anything else at a 5 GHz OC, not that I'm really pushing it all that hard.





Mooncheese said:


> Define DIY.
> 
> If your definition of DIY is an NZXT Kraken G12 and a single 120mm AIO, the air cooler may rival or outperform the AIO, and there is no concern of the VRMs cooking to death.
> 
> If your definition of DIY is EK's aluminum Fluid Gaming series (I think they offer a 360mm rad now), then that will outperform any AIB air cooler, but you're asking for galvanic corrosion should you forget not to mix aluminum with copper on your next GPU and monoblock upgrade. This also limits your selection of GPUs, because aftermarket-blocked variants (i.e. Gigabyte Aorus Waterforce Extreme) are copper only, so now you're looking at having to do an actual loop anyway, or choosing a different GPU.


Have had pretty good success with EVGA/Corsair coolers in general: pretty simple to put together, minimal maintenance, and decent thermals that typically prevented throttling. Did anyone ever benchmark that type of stuff to see what kind of gains are typical? Would love to see Gamers Nexus do a big block/cooler thermal test across the board. I'd pay a few hundred extra for a 10% or more improvement, but if it's 1% or 3%, probably not, especially when those kinds of upgrades also involve increased maintenance. This card seems pretty good tbh, even without the liquid cooling.


----------



## Mooncheese

sultanofswing said:


> I have tested it, there is about a 8c difference in water temp from 50% to 100%. At 100% My water temp never goes over 30c even with my 23c ambient and that is the way I plan to keep it.


Can you measure the temp at the outlet of every component in the loop (CPU > Radiator > Pump+Res > GPU > Radiator)? I'm curious to see what the difference in water temp would be coming out of a 380W 2080 Ti with the pump at 50% speed in your system, with adequate heat soak.

The argument here is this: just as there is a greater need to direct flow in a certain direction within a GPU or CPU block when the flow rate is lower, due to the greater surface area of the GPU heat-sink fin array, so there would also be a need with the entire cooling system. Just as water absorbs a greater amount of heat flowing slowly over the VRM within the block, water flowing slowly through the entire loop needs to flow in a certain direction so as to expel the heat from the hot components into open air via a water-to-air radiator, instead of into, say, your CPU monoblock or your pump motor.

I can't remember if it was you, and I'm too stoned to want to waste any more time with this, but someone stated that they measured the water temp at the inlet and outlet of each and every component in the loop, and stated the half-truth that flow direction doesn't matter.

I stated that it does matter when the flow is lower than moderate (50% pump speed, 2-3 large radiators, a CPU block or monoblock and a GPU block, not counting the various 90 degree bends in the hard-line that all reduce the liquid velocity), and I gave the analogy of the GPU block's water flow needing to go in a certain direction when the flow rate is constricted. The water spends more time on the hot component and becomes hotter, but it also bleeds heat out into the case to an equal degree, since that water is also spending more time over various heat sinks (the plumbing, fittings, pump + res, etc.), so it ultimately normalizes. But that's only if you order the components properly, where the 380W of heat coming out of the GPU is able to travel immediately to the component with the greatest heat-to-air surface area: a large radiator with adequate airflow across it. A hot CPU block / monoblock, which is also dealing with higher water temperature because of the power produced by the component it's trying to keep cool, isn't going to have anywhere near the heat-transfer ability of an open-air water-to-air radiator. So the two components heat the water synergistically, in a self-reinforcing manner, and you wind up with HOT water coming out of that CPU block, which by then, hopefully, flows into a radiator and not your pump+res. Otherwise you've concluded that 100% pump speed, with the attendant risk of premature failure in ~3-5 years (vs 6-10 years running the pump at 50% RPM) and the increased noise, is an acceptable trade-off for being lazy and plumbing your GPU right into your CPU or pump+res.

Running the pump at 50% RPM doubles the life expectancy, so no potential failure around year 3 sneaking up on you really fast. You get nearly the same cooling potential if you apply systems thinking to your PC: placing one radiator of adequate cooling capacity right after each hot component allows for running the pump at lower speed, thereby reducing noise, doubling the life of the pump, and stretching the replacement interval (instead of spending $150 every 2 years because you don't want to risk failure in year 3, at 50% RPM the failure window is pushed from 3-5 years out to 6-10 years, so pump replacement can be pushed out to every 5 years = $150 per replacement pump).
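As a sanity check on those intervals, here is a rough sketch of the arithmetic, assuming the ~50k hour D5 MTBF figure quoted earlier in the thread; the duty-cycle parameter is illustrative only:

```python
# Rough pump-lifespan arithmetic. Assumes the ~50,000 hour D5 MTBF
# figure quoted earlier in the thread; duty cycle is illustrative.
HOURS_PER_YEAR = 24 * 365

def mtbf_years(mtbf_hours: float, duty_cycle: float = 1.0) -> float:
    """Convert an MTBF in hours to calendar years at a given duty cycle."""
    return mtbf_hours / (HOURS_PER_YEAR * duty_cycle)

print(round(mtbf_years(50_000), 1))          # ~5.7 years running 24/7
print(round(mtbf_years(50_000, 8 / 24), 1))  # ~17.1 years at 8 h/day
```

So the often-quoted "3-5 years" only lines up with the MTBF figure if the pump runs close to 24/7; a machine that games a few hours a day sits much further from the wear-out window.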

3x360 mm radiators, you have no excuse not to put one radiator after each hot component. 

I'm willing to wager that at 50% RPM in a constrictive system, the temp coming out of the hot components is 5-10°C hotter than it would otherwise be had the water had a chance to immediately dump that heat into a large water-to-air medium at the same pump speed.

If you can, measure the water temperature after each hot component with your system running at 50% pump speed, after adequate heat soak has set in (i.e., run Superposition looped for about an hour alongside Prime95). Measure not the water temperature at some random location, but before and after each hot component, then do it all again at 100% pump speed, with the same 30-60 minutes of Superposition and Prime95 to induce heat soak.


----------



## sultanofswing

Mooncheese said:


> Can you measure the temp at the outlet of every component in the loop: CPU>Radiator>Pump+Res>GPU>Radiator>? I'm curious to see what the difference in water temp there would be with water coming out of a 380W 2080 Ti with pump speed at 50% in your system with adequate heat soak.
> 
> The argument here is that just as there would be a greater need to direct flow in a certain direction with a GPU or CPU when the rate of flow is lower, due to the greater surface area of the GPU heat-sink fin array, so would there also be a need with the entire cooling system: just as water would absorb a greater amount of heat flowing slower over the VRM within the block, water flowing slower in the entire loop would need to flow in a certain direction so as to expel the heat from the hot components into open air via a water-to-air radiator instead of, say, your CPU monoblock or your pump motor.
> 
> I can't remember if it was you and I'm too stoned to want to waste any more time with this but someone stated that they measured the water temp at the inlet and outlet after each and every component in the loop and stated the half-truth that flow direction doesn't matter.
> 
> I stated that it does matter when the flow is lower than moderate (50% pump speed, 2-3 large radiators, a CPU block or monoblock and a GPU block, not counting the various 90 degree angle bends in the hard-line that all reduce the liquid velocity) and I gave the perfect analogy of the GPU block water flow needing to go in a certain direction if the flow rate is constricted. Because the water spends more time on the hot component and becomes hotter but also bleeding heat out into the case in an equal degree as said water is also spending more time over various heat sinks: the plumbing, fittings, pump + pump res etc so it ultimately normalizes, but that's if you order the components properly, where 380W of heat coming out of the GPU is able to immediately travel to the component with the greatest heat-to-air surface area (a large radiator with adequate airflow about the surface area). A hot CPU block / monoblock that is also dealing with greater water temperature because of the power produced by the components that it's trying to keep cool isn't going to have nearly the heat-to-medium ability as an open-air water-to-air radiator. So the two components heat the water synergistically, in a self-reinforcing manner, and you wind up with HOT water coming out of that CPU block which by then, hopefully, into a radiator and not your pump+res because you concluded that 100% pump speed and attendant risk of premature failure, in ~3-5 years, vs 6-10 years running the pump at 50% RPM and increased noise is an acceptable trade-off for being lazy and plumbing your GPU right into your CPU or Pump+Res.
> 
> Running the pump at 50% RPM doubles the life expectancy so no potential failure around year 3 that snuck up on you really fast. Nearly the same cooling potential if you apply systems thinking to your PC: placing one radiator of adequate cooling capacity right after each hot component allows for running the pump at lower speed, thereby reducing noise and doubling the life of the pump, reducing replacement interval (save money: $150 every 2 years because you don't want to risk failure on year 3 whereas at 50% RPM failure is extended from 3-5 years to 6-10 years so pump replacement interval can be pushed out to every 5 years = $150 spent on a replacement pump.).



I can test more, but everything on my system shows me that the higher the flow rate, the better the cooling potential, up until about 1 GPM. After that it's diminishing returns.


----------



## JustinThyme

sultanofswing said:


> I can test more, but everything on my system shows me that the higher the flow rate, the better the cooling potential, up until about 1 GPM. After that it's diminishing returns.


Mine makes a difference across the board, from dialed back at 5 L/min (1.3 GPM) to 100%, or close to it, at 9 L/min (2.4 GPM). Mine stays tweaked in at 550 L/hr (145 GPH).
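The flow figures in this thread jump between L/min, L/hr, GPM, and GPH; a minimal conversion sketch (assuming US gallons at 3.78541 L each) that reproduces the numbers quoted above:

```python
# Flow-rate unit conversions (assuming US gallons, 3.78541 L each).
L_PER_US_GAL = 3.78541

def lpm_to_gpm(lpm: float) -> float:
    """Liters per minute -> US gallons per minute."""
    return lpm / L_PER_US_GAL

def lph_to_gph(lph: float) -> float:
    """Liters per hour -> US gallons per hour."""
    return lph / L_PER_US_GAL

print(round(lpm_to_gpm(5), 2))   # 5 L/min  -> ~1.32 GPM
print(round(lpm_to_gpm(9), 2))   # 9 L/min  -> ~2.38 GPM
print(round(lph_to_gph(550)))    # 550 L/hr -> ~145 GPH
```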


----------



## sultanofswing

JustinThyme said:


> Mine makes a difference across the board, from dialed back at 5 L/min (1.3 GPM) to 100%, or close to it, at 9 L/min (2.4 GPM). Mine stays tweaked in at 550 L/hr (145 GPH).


Yeah, single pump here, so I have no test data over 250 L/hr.

I just know that in my system it works like this:

Low water flow = low water temp and high component temp
High water flow = higher water temp and lower component temp

At the end of the day, the higher the flow rate, the better off you will be.
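A first-order heat balance backs up that observation: the coolant temperature rise across a block is dT = P / (m_dot * c_p), so higher flow means a smaller rise across each component even though the loop-average water temp can sit a touch higher. A minimal sketch, assuming water properties and the ~380 W GPU load discussed in the thread (the flow values are illustrative):

```python
# First-order heat balance for a water loop: the coolant temperature
# rise across one component is dT = P / (m_dot * c_p).
# Assumptions: water c_p ~4186 J/(kg*K), density ~1 kg/L, ~380 W GPU.
C_P = 4186.0  # specific heat of water, J/(kg*K)

def delta_t(power_w: float, flow_lph: float) -> float:
    """Coolant temperature rise (degrees C) across a component."""
    m_dot = flow_lph / 3600.0  # L/hr -> kg/s, assuming ~1 kg/L
    return power_w / (m_dot * C_P)

# A 380 W GPU at roughly half vs full pump speed:
print(round(delta_t(380, 125), 2))  # ~2.61 C rise at 125 L/hr
print(round(delta_t(380, 250), 2))  # ~1.31 C rise at 250 L/hr
```

At typical custom-loop flow rates, the rise across any single block is only a degree or two, which is why measured inlet/outlet deltas (and loop order) tend to matter less than people expect once flow is decent.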


----------



## J7SC

kithylin said:


> I thought this was supposed to be the RTX 2080 Ti thread.. why are there 2-3 pages discussing CPU monoblocks?  There's a CPU section of the forums for that.


 
bump !


----------



## UniverseN

Hi guys, my MSI 2080 Ti Gaming X Trio is hitting its thermal performance limit according to HWiNFO. It doesn't happen often, but it shouldn't get there at all! The thing is, the core gets to 69°C max. I have checked and rechecked all the VRM/memory thermal pads and changed the thermal paste; it all looks in order. I have a custom, pretty aggressive fan curve and pretty good airflow in the case. Power limit in Afterburner: 110%. Temperature limit: 85°C. Any ideas? Do the MSI 2080 Ti Gaming X Trios even have thermal sensors on the VRMs/memory?


----------



## Mooncheese

UniverseN said:


> Hi, guys, my MSI 2080Ti Gaming X Trio is reaching thermal performance limits according to hwinfo. It doesn't happen often, but it shouldn't get there at all! The thing is, the core gets to max 69 celsius. I have checked and rechecked all the vrm/memory thermal pads, changed the thermal paste, it all looks in order. I have a custom, pretty aggressive fan curve, pretty good airflow in the case. Power limit in Afterburner - 110%. Temperature limit - 85 celsius. Any ideas? Do the MSI 2080Ti Gaming X Trio's even have thermal sensors in vrm's/memory?


What do you mean it's reaching the "performance limit" according to HWiNFO64?

If you don't want thermal throttling, unlink the thermal limit from the power target in MSI Afterburner and set your fan profile so that after 50°C it scales to 100% RPM.


----------



## Mooncheese

sultanofswing said:


> Yea Single pump here so I have no test data over 250 l/hr.
> 
> I just know that in my system it works like this
> 
> Low water flow= low Water temp and high component temp
> High water flow=Higher water temp and lower component temp.
> 
> At the end of the day the higher the flow rate the better off you will be.


I'd like to see someone with a constrictive system (2-3 large radiators, CPU and GPU block(s), pump+res) measure the water temp before and after each component, right at the inlet and outlet, with all components under duress (Prime95 on the CPU and Superposition on the GPU) for about 30-60 minutes (for adequate heat soak to occur), to see the temp difference running the pump at 50% RPM and at 100% RPM.

Will we see a 5°C difference at the outlet of a 2080 Ti that is plumbed right into a CPU with the pump at 50% RPM vs 100% RPM? We will never know, because due to growing illiteracy no one can read this very request, even though I've only typed it, I don't know, 3-4 times over the span of all my previous posts here?

Is your water temp measured coming out of a radiator, where it was able to shed a greater amount of heat to the open air because of the slower flow rate, and therefore the conclusion is "slower flow rate = same water temp but hotter components"?

I don't know, I just give up with this, and then the people who only want to talk about what cooler looks the most "pimp" or what temps are normal or what is the "best" 2080 Ti insist that all of this is going off on some kind of tangent and want to steer the conversation "back on track" (what cooler is the best, what aftermarket 2080 TI looks the most "pimp", are my temps ok?). 

Is this overclock.net or is it r/nvidia?

I mean, we only have, I don't know, 2 years' worth of "what 2080 Ti looks the most pimp?", "are my temps ok?", "how do I overclock?". Is it really a problem if an exchange about liquid cooling (that directly pertains to the 2080 Ti) spans a whopping 2 pages out of 1035 during the last 3-4 months of the GPU's effective life cycle?

"Let's get back on track, I want to keep talking about what 2080 Ti looks the most pimp!"

What happened to OC.net?


----------



## UniverseN

Mooncheese said:


> What do you mean it's reaching "performance limit" according to Hwinfo64?
> 
> If you don't want thermal throttling unlink thermal limit from power target in MSI AIB and run your fan profile so that after 50C it scales to 100% RPM.


Hi, thanks for the reply. HWiNFO shows "Performance limit - thermal: Yes" after playing. Power and temp limits are unlinked in Afterburner. It hits the power limit all the time, as all 2080 Tis do; it doesn't trigger the thermal limit at the same time, though.


----------



## z390e

Mooncheese said:


> I don't know, I just give up with this, and then the people who only want to talk about what cooler looks the most "pimp" or what temps are normal or what is the "best" 2080 Ti insist that all of this is going off on some kind of tangent and want to steer the conversation "back on track" (what cooler is the best, what aftermarket 2080 TI looks the most "pimp", are my temps ok?).
> 
> What happened to OC.net?


You and a few others are moaning on and on about CPU temps and watercooling temperatures in a thread dedicated to RTX 2080ti's and you are the one asking "What happened to OC.net?". Your comments and the rest of the nonsense about it aren't specific to this card. So go ahead and get a warm cup of "move your tripe to another thread".

Different forum, same problems. Some know-it-all always wants to do what they want and not follow the guidelines. Today that appears to be you, here on OC.net.


----------



## Barefooter

Mooncheese said:


> I'd like to see someone with a constrictive system (2-3 large radiators, CPU and GPU block(s), pump+res) measure the water temp before and after each component, right at the inlet and outlet, with all components under duress (prime95 on CPU and GPU Superposition) for about 30-60 minutes (for adequate heat-soak to occur) to see the temp difference running the pump at 50% RPM and at 100% RPM.
> 
> SNIP


I would not consider my system constrictive, but it does have six radiators and seven water blocks.

I don't want to put a wall of text up here on the 2080 Ti Owner's Club about flow testing, so if you want to see the details you can click *here* to see the post on my build log.

Below is the chart I made for it.


----------



## daddyd302

I finally caved in and got the EVGA 2080 Ti FTW3 Ultra. I decided to switch when I saw it used on Amazon for $1194. I came from two 1080 Tis to my current 2080 Ti. So far so good; the highest temp I've seen is 67°C.


----------



## JustinThyme

Mooncheese said:


> I'd like to see someone with a constrictive system (2-3 large radiators, CPU and GPU block(s), pump+res) measure the water temp before and after each component, right at the inlet and outlet, with all components under duress (prime95 on CPU and GPU Superposition) for about 30-60 minutes (for adequate heat-soak to occur) to see the temp difference running the pump at 50% RPM and at 100% RPM.
> 
> Will we see a 5°C difference at the outlet of a 2080 Ti that is plumbed right into a CPU with the pump at 50% RPM vs 100% RPM? We will never know, because due to growing illiteracy no one can read this very request, even though I've only typed it, I don't know, 3-4 times over the span of all my previous posts here?
> 
> Is your water temp measure coming out of a radiator where it was able to remove a greater amount of heat to the open air because of slower flow rate and therefore the conclusion is: "slower flow rate = same water temp but hotter components".
> 
> I don't know, I just give up with this, and then the people who only want to talk about what cooler looks the most "pimp" or what temps are normal or what is the "best" 2080 Ti insist that all of this is going off on some kind of tangent and want to steer the conversation "back on track" (what cooler is the best, what aftermarket 2080 TI looks the most "pimp", are my temps ok?).
> 
> Is this overclock.net or is it r/nvidia?
> 
> I mean we only have, I don't know, 2 years worth of "what 2080 Ti looks the most pimp?", "are my temps ok?", "how do I overclock?", is it really a problem if an exchange about liquid cooling (that directly pertains to 2080 Ti) happens that spans a whopping 2 pages out of 1035 pages during the last 3-4 months of the GPU's effective life-cycle?
> 
> "Let's get back on track, I want to keep talking about what 2080 Ti looks the most pimp!"
> 
> What happened to OC.net?


Go ahead, we will all be waiting. I'm running two temp sensors: one at the pump outlets before the CPU, and one after all the components, before the water hits the 480x60 and 360x30 rads on the way back to the res. One pump at 100%, one at 96%, one at 94%... Under load, as you can see, the two sensors read within a margin of error of each other; both are Aquaero digital sensors on aquabus.

I do consider my system restrictive, but I have enough pump power to keep the flow rate at 560 L/hr at full speed: pumps > CPU > VRMs > 420 HWL GTR > 2x 2080 Tis > filter > 480x60 EK > 360x30 EK > res.

My 2080Tis are pimping!


----------



## sultanofswing

So I may have gotten a pretty good deal. Just traded my XC Ultra for a Brand New Kingpin, Straight trade!

She does 2205 with ease!


----------



## mardon

Hi, currently using the KFA high power limit BIOS.
I'm running a Kraken G12 with a Noctua Redux 120mm 1700rpm fan. It's running about 1400rpm at the 41% minimum fan curve, which is annoyingly audible when not gaming.
Is there a BIOS with a similar power limit and the I/O port activated, but with a 0 rpm mode?
Otherwise I'm going to have to faff with SpeedFan. Only issue is my mini-ITX board only has two fan headers and I've got 5 fans.


----------



## JustinThyme

sultanofswing said:


> So I may have gotten a pretty good deal. Just traded my XC Ultra for a Brand New Kingpin, Straight trade!
> 
> She does 2205 with ease!


If you are cranking 2205 with ease, that's a golden beast.
Let's see some benches!


----------



## sultanofswing

JustinThyme said:


> If you are cranking 2205 with ease that's a golden beast.
> lets see some benches!


Gonna play around with it some today and see what I can get. Ordered the Bykski water block for it last night, but I might dunk the AIO radiator in some ice water today and see what it will do.


----------



## J7SC

sultanofswing said:


> Gonna play around with it some today and see what I can get. Ordered the Bykski Waterblock for it last night but I might dunk the AIO Radiator in some Ice water today and see what it will do.


 
That's a nice card and clock on AIO, and going with a full waterblock is the smart thing to do, ESPECIALLY for a KingPin card. 

Below are some Superposition runs I did at (more or less) 2205 @ 1.063v, done in the wintertime with a room ambient of 17°C. That's obviously with the faster of my two cards... The separate GPU cooling system, built to handle two of those cards with ease, consists of dual D5 pumps and 3x 360/55 rads. With only one card running, the delta temps rose by only 11°C. Put differently, throw as much cooling at your KingPin as you can, as demonstrated by another Superposition run clock measurement at the bottom.

Finally, when things are more or less ready with your updated cooling (or even with ice on the AIO rad), can you do a separate stress run at your convenience with MSI AB or PrecX recording overall watts / PL? I'd really love to see a genuine Kingpin card graph of that.


----------



## sultanofswing

J7SC said:


> That's a nice card and clock on AIO, and going with a full waterblock is the smart thing to do, ESPECIALLY for a KingPin card.
> 
> Below are some Superposition runs I did at (more or less) 2205 @ 1.063v, done in the winter time, with room ambient 17c. That's obviously with the faster of my two cards... The separate GPU cooling system - built to handle two of those cards with ease - consists of dual D5 pumps and 3x 360/55 rads. With only one card running, the delta temps only rose by 11c. Put differently, throw as much cooling at your KingPin as you can as is demonstrated by another Superposition run clock measurement at the bottom.
> 
> Finally, when things are more or less ready re. your updated cooling (or even w/ice on AIO rad), can you do a separate stress run at your convenience with MSI AB or PrecX recording overall watt / PL ? I really love to see a genuine Kingpin card graph on that


Just did a Time Spy run at 2205. Board power maxed out at 334 W, it appears, with a max GPU temp of 34°C in the ice water.

Seems the first 15 MHz temp step happens at 34°C; gonna try 2215.


----------



## J7SC

sultanofswing said:


> Just did a Timespy run at 2205. Board power maxed out at 334watt it appears with a max GPU temp of 34c in the ice water.
> 
> Seems the first 15mhz temp step happens at 34c, gonna try 2215


 
Nice, tx - and with 'only' 334 W, you still have lots of headroom to go with all those special KingPin BIOSes (which I would only use after upgrading the water cooling). Also, though I am not sure about this, there might even be one more 15 MHz temp step threshold below 34°C, given my own runs. BTW, 2215 might not work 'for real'; I think it goes 2190 > 2205 > 2220 > 2235 etc.
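That sequence is consistent with GPU Boost clocks landing on a ~15 MHz grid; a sketch of snapping a requested clock to the nearest bin (assuming bins at exact multiples of 15 MHz, which matches 2190 > 2205 > 2220 > 2235 above):

```python
# Boost clocks land on ~15 MHz steps; a requested clock gets snapped
# to the grid (assuming bins at exact multiples of 15 MHz, matching
# the 2190 > 2205 > 2220 > 2235 sequence observed in the thread).
STEP_MHZ = 15

def snap_to_bin(requested_mhz: int) -> int:
    """Round a requested core clock to the nearest 15 MHz boost bin."""
    return round(requested_mhz / STEP_MHZ) * STEP_MHZ

print(snap_to_bin(2215))  # -> 2220, 2215 is not a real bin
print(snap_to_bin(2205))  # -> 2205, already on the grid
```

Which is why dialing in "2215" ends up running 2220 in practice, as reported a few posts down.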


----------



## sultanofswing

J7SC said:


> Nice, tx - and with 'only' 334w, you still have lots of headroom to go with all those special KingPin bios (which I would only use after upgraded w-cooling). Also - though I am not sure about this - there might even be one 15 MHz temp step threshold below the 34c, given my own runs. BTW, 2215 might not work 'for real'. I think it goes 2190 > 2205 > 2220 > 2235 etc


Yeah, it was 2220. No issues with "only" 334 W, considering the stock BIOS has a 520 W limit without the need for an XOC BIOS.


----------



## sultanofswing

2220 passes Timespy at 1.08v


----------



## z390e

Really nice pushing on that card, sultanofswing. If you see someone else wanting to do that trade, let me know lol.

How does someone even get offered that trade?

Pretty sure 2 nicely OC'd Kingpins w/NVLink is the best that can be done with NV right now.


----------



## z390e

Nice setup as well there, J7SC. I love seeing these 2080 Tis push above 2100.

What's the max power you've seen them pull down?


----------



## sultanofswing

So it seems the pump may not have liked the radiator in the ice bucket. Took all that back out and put the fans back on, and now the pump is making a loud whine and temps are 48°C at idle. Ugh.


----------



## kithylin

sultanofswing said:


> 2220 passes Timespy at 1.08v


Have you tried playing games on it for a few hours non-stop yet? Time Spy is a small momentary test of just a few minutes. Does it hold that 2220 MHz over time, after a few hours of gaming and heat soak?


----------



## sultanofswing

kithylin said:


> Have you tried playing games on it for a few hours non-stop yet? Time spy is a small momentary test for just a few minutes. Does it hold that 2220 Mhz over time after a few hours of gaming and heat soak?


I am only benchmarking the card, not concerned about gaming.


----------



## truehighroller1

sultanofswing said:


> So it seems the pump may have not liked the radiator in the Ice bucket. took all that back out and put the fans back on and now the pump is making a loud whine and temps are 48c at idle. Uh


Froze the grease perhaps causing damage?


----------



## sultanofswing

truehighroller1 said:


> Froze the grease perhaps causing damage?


Doubt it, the lowest temp I got it to was 15°C.


----------



## J7SC

z390e said:


> nice setup as well there J7SC I love seeing these 2080ti's push above 2100
> 
> whats the max power you've seen them pull down?


 
Tx - These cards are on the Aorus Xtreme *stock* BIOS, and in heavy loads such as Superposition or Port Royal they both pull between 375 W and 380 W (each). Now, a KingPin card should go higher with its exotic BIOS, and so should the Galax/KFA2 HOF OC Lab (KP and Galax are the two top cards to beat :2cents:), especially with sub-zero cooling. That said, I'm thrilled with my Aorus WBs and their consistent, and uncomplicated, performance for over a year now.




sultanofswing said:


> So it seems the pump may have not liked the radiator in the Ice bucket. took all that back out and put the fans back on and now the pump is making a loud whine and temps are 48c at idle. Uh


 
As long as the pump of the KingPin is not on the rad (doesn't look that way), I wouldn't worry too much. There may have been some air bubbles that came loose during the ice bucket run and the related moving of the rad. I would just let everything get back to room temp for a few hours, then try again, including shaking and tilting the rad a bit. Besides, you already ordered the full GPU block, and I presume you'll throw some nice custom loop / pump / rad combo at it.


----------



## sultanofswing

J7SC said:


> Tx - These cards are on the Aorus Xtreme *stock* BIOS, and in heavy loads such as Superposition or Port Royal they both pull between 375 W and 380 W (each). Now, a KingPin card should go higher with its exotic BIOS, and so should the Galax/KFA2 HOF OC Lab (KP and Galax are the two top cards to beat :2cents:), especially with sub-zero cooling. That said, I'm thrilled with my Aorus WBs and their consistent, and uncomplicated, performance for over a year now.
> 
> 
> 
> 
> 
> As long as the pump of the KingPin is not on the rad (doesn't look that way), I wouldn't worry too much. There may have been some air bubbles which came loose during the ice bucket run and related moving of the rad. I would just let everything get back to room temp for a few hours, then try again, including shaking and tilting the rad a bit. Besides, you already ordered the full GPU block, and I presume you'll throw some nice custom loop / pump / rad combo at it


I’ve been tilting and shaking this thing, turning it off and letting it sit multiple times with no luck.


----------



## J7SC

sultanofswing said:


> I’ve been tilting and shaking this thing, turning it off and letting it sit multiple times with no luck.


 
I don't know what kind of pump the KP card is using, but if there was air in it and the pump spun for more than just a brief moment without lubricating cooling liquid, the bearing might have been hurt. I would wait a while longer, just letting it sit, then try again with the shaking and especially tilting (I assume the AIO rad was above the GPU plane before, then below it for the ice bucket?). Also, there are plenty of vids on how to mod AIOs, including pump and liquid replacements... Finally, might be time to plan some D5 (or so) shopping for the full block...


----------



## JustinThyme

J7SC said:


> That's a nice card and clock on AIO, and going with a full waterblock is the smart thing to do, ESPECIALLY for a KingPin card.
> 
> Below are some Superposition runs I did at (more or less) 2205 @ 1.063v, done in the winter time, with room ambient 17c. That's obviously with the faster of my two cards... The separate GPU cooling system - built to handle two of those cards with ease - consists of dual D5 pumps and 3x 360/55 rads. With only one card running, the delta temps only rose by 11c. Put differently, throw as much cooling at your KingPin as you can as is demonstrated by another Superposition run clock measurement at the bottom.
> 
> Finally, when things are more or less ready re. your updated cooling (or even w/ice on AIO rad), can you do a separate stress run at your convenience with MSI AB or PrecX recording overall watt / PL ? I really love to see a genuine Kingpin card graph on that


If you're gonna run SP, you're gonna have to do better than that! Yeah, I know mine is not 4K; the monitor is an ultrawide 1440p.


Yes, you definitely need liquid flowing.
Dunno about Bykski. I tried Chinese once and won't do it again, with a Barrow that Matt O from Performance PCs talked me into because EK and HK were still pounding their puds over a Strix 2080 Ti block. By far the worst blocks I'd ever seen; one of them was left so rough it literally left me bleeding. That's another story in itself. Can't speak for Bykski, but stay the hell away from POS Barrow blocks.


----------



## J7SC

JustinThyme said:


> If your gonna run SP gonna have to do better than that!


 
...I showed SP 8K (!) at 2205...


----------



## sultanofswing

I finally got this thing to get all the air out. I ended up cutting the damn AIO radiator off the card, swapping it for a 280mm radiator, and just kept messing with it.

Anyone need a Heatkiller block for a Founders 2080 Ti? I no longer need mine since I got rid of the Founders card.


----------



## JustinThyme

sultanofswing said:


> I finally got this thing to get all the air out. I ended up cutting the damn AIO Radiator off the card, swapping it with a 280mm Radiator and just kept messing with it.
> 
> Anyone need a Heatkiller Block for a Founders 2080ti? No longer need mine since I got rid of the founders card.



Wish I could say yes; I'm running Strix cards that are so far off from reference it's unreal. HK is one of the best GPU blocks on the market. The only thing I'd take over it is the Aquacomputer one with the active backplate.


----------



## sultanofswing

JustinThyme said:


> Wish I could say yes, Im running Strix cards that are so far off from reference its unreal. HK is one of the best GPU blocks on the market. Only thing Id take over that is the aquacompter with the active back plate.



Yeah, I hear the AQ block is really good. I was going to get that one before I got the HK, but it was not as easy to get. It would have been nice to have it along with the rest of the Aquacomputer stuff I have (I absolutely love the aquaero 6 XT and the flow meters etc. etc.).

I think while I wait for the Block to show up for the Kingpin I am going to break out the old 900d and slap my 2 480 rads in it and dedicate those to just the Kingpin. 

So far I am in love with this card. I have been testing to see where I can run it daily; my first test was [email protected], and I did a 30-minute session of Modern Warfare at 1440p without any issues at all.

Here is how I have my AQ setup.


----------



## J7SC

sultanofswing said:


> I finally got this thing to get all the air out. I ended up cutting the damn AIO Radiator off the card, swapping it with a 280mm Radiator and just kept messing with it.
> 
> Anyone need a Heatkiller Block for a Founders 2080ti? No longer need mine since I got rid of the founders card.


 
:thumb: ...and once your full w-block for the KP arrives, you get to bleed the air out all over again - but enjoy even better temps / MHz


----------



## z390e

sultanofswing said:


> So far I am in love with this card, I have been testing to see where I can run the card daily, My first test was [email protected] and I did a 30 min session of Modern Warfare without any issues at all at 1440p res.


keep us posted sultanofswing it's exciting to see someone pushing the Kingpin without LN2


----------



## sultanofswing

J7SC said:


> :thumb: ...and once your full w-block for the KP arrives, you get to bleed the air out all over again - but enjoy even better temps / MHz


No issues there.


----------



## sultanofswing

z390e said:


> keep us posted sultanofswing it's exciting to see someone pushing the Kingpin without LN2



Meh, I'm late to the party as usual but the Founders card on water was not cutting it for me.


----------



## JustinThyme

sultanofswing said:


> Yea I hear the AQ block is really good, I was going to get that one before I got the HK but it was not as easy to get but it would have been nice to have that along with the rest of the Aquacomputer stuff I have (I absolutely love the 6xt and the flow meters etc etc).
> 
> I think while I wait for the Block to show up for the Kingpin I am going to break out the old 900d and slap my 2 480 rads in it and dedicate those to just the Kingpin.
> 
> So far I am in love with this card, I have been testing to see where I can run the card daily, My first test was [email protected] and I did a 30 min session of Modern Warfare without any issues at all at 1440p res.
> 
> Here is how I have my AQ setup.


Mine


----------



## salamizone

Hi
Thanks to this forum I have learned a lot. This is my first post; I hope you can help me.


I have an MSI Gaming Trio + EKWB + the EVGA FTW3 373W BIOS.

Is this score normal? On HWBot there are better scores with lower clocks.

+1500 memory and stable... could something be wrong?

Thanks!


----------



## Renegade5399

Galax RTX 2080 Ti HOF OC Lab Custom PCB (3x8-Pin) 2000W x 100% Power Target BIOS (2000W)

This BIOS is now available on TPU and is a Verified BIOS:

Right here

Can this be added back to the first post?


----------



## Shewie

Hi again folks, I was away for a little while because I was waiting for missing parts for the AIO cooling solution for my Gigabyte 2080 Ti with those crazy memory chips.

You suggested I flash the Waterforce BIOS on my Gigabyte Windforce 11GB.
I think I've started hitting some power limitations: I can't get past 370W, and all my benchmarks end with fairly similar scores to before.
Temps dropped by 15°C in TimeSpy, from 68°C max to 54°C.
The memory is not really cooled and the VRM has no heatsinks at all, because I'm still waiting for those to be delivered from AliExpress.

Would you suggest flashing the KFA 380W BIOS, or going above that?
The card is now running an AIO with a 240mm rad in a push-pull sandwich configuration.

CC: @jura11


----------



## J7SC

salamizone said:


> Hi
> Thanks to this forum I have learned a lot. This is my first post; I hope you can help me. I have an MSI Gaming Trio + EKWB + the EVGA FTW3 373W BIOS. Is this score normal? On HWBot there are better scores with lower clocks. +1500 memory and stable... could something be wrong?
> 
> Thanks!


 
First post @ OCN > welcome  
Your score looks good and is in line with a few of the other higher scores. Your VRAM speed in particular is outstanding, and it does help with Superposition. Re. HWBot scores that are higher with lower clocks, there can be a variety of reasons... including extreme sub-zero cooling. But also don't forget that HWBot allows LOD bias for that bench, which will artificially boost results w/o corresponding clocks.




Shewie said:


> Hi again folks, I was away for a little while because I was waiting for missing parts for my AIO cooling solution for my Gigabyte 2080ti with those crazy Memory chips.
> 
> You suggested me to flash waterforce bios on my Gigabyte Windforce 11GB.
> I think I've started hitting some power limitations. I can't get past 370W and all my benchmark ends with fairly similar score than AIO water-cooled.
> Temps dropped by 15oC from 68oC Max to 54oC in TimeSpy.
> Memories are not really cooled and VRM is not cooled at all with heatsinks because I'm still waiting for those to be delivered from aliexpress. Would you suggest to flash KFA 380W Bios? or go above that? Card is now running with AIO supplied with a 240mm rad, push-pull sandwich configuration.(...)


 
If you're already hitting 370W with the Gigabyte BIOS, I doubt the Galax/KFA2 380W BIOS is worth switching to. Then again, if you do not mind flashing again, why not give it a try? At the same time, while 54°C in TS is nothing to sneeze at, you will still gain from a more extensive cooling arrangement, i.e. keeping the GPU temp below 38°C if possible.


----------



## Shewie

J7SC said:


> First post @ OCN > welcome
> Your score looks good and is inline with a few other higher scores. Your VRAM speed in particular is outstanding, and it does help with Superposition. Re. HWBot scores that are higher with lower clocks, there can be a variety of reasons...including extreme sub-zero cooling. But also don't forget that HWBot allows LOD bias for that bench which will artificially boost results w/o corresponding clocks.
> 
> If you're already hitting 370w with the Gigabyte Bios, I doubt that the Galax/KFA2 380w bios is worth switching to. Then again, if you do not mind flashing again, why not give it a try ? At the same time, while 54c in TS is nothing to sneeze at, you will still gain with just a more extensive cooling arrangement, ie keeping the GPU temp below 38c, if possible



It's just for the benchmark run, nothing else. I wanted to squeeze out as much as possible for the single run; other than that, I run this card idle 99% of the time.
I'm using only a single DP port because the iGPU is driving another screen. So the only option for me is the KFA 380W BIOS?

The card is idling at 26°C.


----------



## J7SC

Shewie said:


> It's just for the benchmark run, nothing else. I wanted to squeeze as much as possible for the single run other than that I run this card idle 99% of the time.
> I'm using only single DP port because iGPU is juicing another screen. So the only option for me is the KFA 380W bios?
> 
> Card is idling at 26oC


 
I haven't used the Galax/KFA2 380 (since it is so close to my stock Gigabyte Aorus in terms of PL), but over the year or so that I've been in this thread, it seems to be very popular and the most versatile in terms of loading onto non-Galax/KFA2 boards. If you're just trying to run a few quick benches, you can try putting your AIO rad into a bucket of ice water (after taking the fans off). HOWEVER, this can shake air bubbles loose and have them lodge in all the wrong places - best to always keep the rad above the pump plane and in the same position it was mounted in before, and also follow the related advice from today's GN vid (around 18m30s).


----------



## Uns33n

Just picked up my first 2080 Ti. This card is a BEAST. So far +175 core & +1000 mem at stock voltage, benchmark- and gaming-stable. 60°C MAX. 

Can't wait to see what this card can really do.


----------



## JustinThyme

salamizone said:


> Hi
> Thanks to this forum I have learned a lot, it is my first post, I hope you can help me
> 
> 
> I have MSI Gaming Trio + EKWB + Bios is EVGA FTW3 373w
> 
> Is this score normal? in hwbot there are better with less
> 
> +1500 memory and stable ... will it be wrong?
> 
> Thank!


Yeah, don't pay much attention to HWBot scores. There are too many variables at play, and not everyone uses or knows the cheats that are allowed. The same goes for overclocking CPUs. I've always been the type to say that if it isn't all cores then it's not an OC, it's a partial one - but they allow OCing the best core while turning the others back, disabling HT, etc. to get that one high frequency. 

You have a good score, especially for that high of a resolution, and a very good memory clock.


----------



## jura11

Shewie said:


> Hi again folks, I was away for a little while because I was waiting for missing parts for my AIO cooling solution for my Gigabyte 2080ti with those crazy Memory chips.
> 
> You suggested me to flash waterforce bios on my Gigabyte Windforce 11GB.
> I think I've started hitting some power limitations. I can't get past 370W and all my benchmark ends with fairly similar score than AIO water-cooled.
> Temps dropped by 15oC from 68oC Max to 54oC in TimeSpy.
> Memories are not really cooled and VRM is not cooled at all with heatsinks because I'm still waiting for those to be delivered from aliexpress.
> 
> Would you suggest to flash KFA 380W Bios? or go above that?
> Card is now running with AIO supplied with a 240mm rad, push-pull sandwich configuration.
> 
> CC: @jura11


Hi there 

Going from a 370W to a 380W BIOS is not a big jump in power limit, and I'm really not sure you will gain much headroom from such a BIOS. You could try the Asus XOC BIOS, which should in theory give you 1000W at 100% power limit. I would use that BIOS only for benchmarks; not sure I would use it as my daily BIOS. 

Good drop in temperatures there, for sure. You will still probably gain by going to a proper water loop - how much, I'm not sure.

Hope this helps 

Thanks, Jura
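As a side note on that XOC route: Afterburner's power-limit slider scales the BIOS's rated target linearly, so with a BIOS rated 1000W at 100% you would dial the slider well down to land at a sane ceiling. A minimal sketch of that arithmetic (the 1000W and 380W figures are the ones quoted in this thread; the linear scaling is the usual slider behavior):

```python
# Rough power-target arithmetic for BIOS power-limit sliders.

def power_limit_watts(bios_rated_w: float, slider_pct: float) -> float:
    """Watts allowed at a given slider percent (slider scales the rating linearly)."""
    return bios_rated_w * slider_pct / 100.0

def slider_for_target(bios_rated_w: float, target_w: float) -> float:
    """Slider percent needed to cap a BIOS at a target wattage."""
    return 100.0 * target_w / bios_rated_w

print(power_limit_watts(1000, 40))   # XOC BIOS at 40% -> 400.0 W
print(slider_for_target(1000, 380))  # 38.0% matches the KFA2 380W ceiling
```

So an XOC flash used only for benching still lets you run a civilized daily wattage by leaving the slider low.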


----------



## Uns33n

How common is +1500(8500) memory OC on stock volts? Just tried it and completely stable on bench and games so far.
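For anyone decoding the notation: Afterburner displays the 2080 Ti's GDDR6 at 7000MHz stock (14Gbps effective), so "+1500(8500)" means a displayed 8500MHz, or 17Gbps. A small sketch of what that buys in bandwidth, using the 352-bit bus from the spec table up top (the 7000MHz stock readout is the usual Afterburner display; treat it as an assumption if your tool reports differently):

```python
# Effective data rate and bandwidth from an Afterburner memory offset (RTX 2080 Ti).
BASE_CLOCK_MHZ = 7000   # stock readout in Afterburner (double-pumped to 14 Gbps)
BUS_WIDTH_BITS = 352    # from the spec table

def effective_gbps(offset_mhz: float) -> float:
    # Displayed MHz doubles once more to give the per-pin data rate in Gbps.
    return (BASE_CLOCK_MHZ + offset_mhz) * 2 / 1000.0

def bandwidth_gbs(offset_mhz: float) -> float:
    # GB/s = Gbps per pin * bus width in bits / 8 bits per byte
    return effective_gbps(offset_mhz) * BUS_WIDTH_BITS / 8

print(bandwidth_gbs(0))      # 616.0 GB/s stock, matching the spec sheet
print(bandwidth_gbs(1500))   # 748.0 GB/s at +1500 ("8500")
```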


----------



## Shewie

Done some testing; not sure if the scores are impressive considering the current setup and zero effective cooling on the VRAM, except for a fan blowing across the PCB, because I'm still waiting for my heatsinks to be delivered from China. 
Air-cooled, the bare RAM can pull +1700MHz right before I start seeing some tiny artifacts. Once I finish with the RAM cooling I will try again, maybe with the ASUS XOC.

No one is suggesting the HOF BIOS as a starter before jumping to XOC? Any drawbacks, or is it just not suitable for FE-based PCB cards?

Here are some scores.
Marginal gains in TimeSpy compared to the 370W Waterforce BIOS.


----------



## JustinThyme

Shewie said:


> Done some testing, not sure if the scores are impressive considering current setup and zero effective cooling on VRAM except for a fan that is blowing PCB because I'm still waiting for my heatsinks to delivered from china
> Aircooled Barebone Ram can pull +1700Mhz right before I start seeing some tiny artifacts. Once I finish with RAM cooling I will try again maybe with ASUS XOC.
> 
> No one is suggesting HOF Bios as a starter before jumping to XOC? Any drawbacks or it's just not suitable for FE based PCB cards?
> 
> Here are some scores.
> Marginal gains on TimeSpy compared to 370W Waterforce Bios.



Run Fire Strike Ultra. It's an older bench but it taxes your GPU more. I've done OCs that passed everything else, then locked up in Fire Strike Ultra.
Also, in Heaven, open up the resolution.


----------



## Shewie

Last time I checked Fire Strike Ultra, that benchmark was called "irrelevant" by Gamers Nexus in 2019. But that's OK:
Firestrike - GPU Score 9 491 https://www.3dmark.com/fs/21908020
TimeSpy - GPU Score 16 680 https://www.3dmark.com/spy/10743898

BIOS KFA2. Max temp 48°C in Fire Strike, max temp 52°C in TimeSpy.

I won't be changing settings in Heaven. The broader community forum that gathers scores from around the world accepts only Extreme preset final scores. If you change a single setting there, you invalidate your credibility, because the final score is then presented as "custom". 
That's why for some time now we have had the Unigine Superposition benchmark, which trades blows properly with the top line of cards.


----------



## dangerSK

JustinThyme said:


> Yeah dont pay much attention to HWbot scores. Too many variables at play where not everyone uses or knows the cheats that are allowed. Same goes for overclocking CPUS's. Ive always been the type that if it isnt all cores then its not an OC, its a partial but they will allow going by the core and OCing the best core and turning back others, disabling HT etc to get that one high Freq.
> 
> You have a good score, especially for that high of a resolution, and a very good memory clock.


Learning a few tweaks for 3D isn't the worst idea: the LOD tweak, Inspector tweaks in general, maybe a stripped OS with maxmem, etc.


----------



## sultanofswing

You guys have some really good Superposition scores. Not sure what I am doing wrong with my setup.
At 2160/8000 I only get an 8K Optimized score of 5872.
This is with an [email protected]
I have the normal NVIDIA Control Panel settings, like texture filtering quality set to performance.

Is there a secret to Superposition, or am I just not setting something up right?


----------



## JustinThyme

dangerSK said:


> Learning few tweaks for 3D isnt worst idea, LOD tweak, inspector tweaks in general, maybe stripped OS with maxmem etc


Not when they are tweaks simply to jack up the score of a synthetic benchmark that usually end up giving worse real-world performance.


----------



## J7SC

sultanofswing said:


> You guys have some really good superposition scores. Not sure what I am doing wrong with my setup.
> 2160/8000 I only get a 8k Optimized score of 5872
> This is with an [email protected]
> Have the normal Nvidia Control panel settings like texture filtering quality set to performance.
> 
> There a secret to Superposition or Am I just not setting something up right.


 
Well, what I posted was with the GPU at 2205/82xx (compared to yours above)... Re. your 8700K/5.1, CPU speed matters a bit but not that much with 4K and 8k benches. But GPU temp control matters a great deal - with only one of two cards used, that single card has a total of 1080x55 rad space to itself in my setup... To be able to run 2205, ambient temp has to be 17c or so. My typical single-GPU speed is 2190 for 'everything' and with a typical 21c or so ambient, all on stock GPU Bios. With two cards pulling about 380w each already plus an oc'ed TR, my 1300w PSU does not have enough headroom left, otherwise I would try one of the XOC bios - I am sure I would gain a decent amount re. PL.

The real issue with tweaks is if people compare 'apples and oranges'. There are indeed a myriad of tweaks such as LOD Bias (especially if you have the right unofficial NVInspector folks over at HWBot use - and where those kind of tweaks are allowed). In general, Unigine does not have the same type of system check apps 3DM has to validate, even though a new tweak will take an update by 3DM system check app to catch it (regarding accepting 3DM submission for HoF etc via the 'valid score' tag). I use 4K and especially 8K Superposition to give the 11GB of VRAM a real standardized test and workout. I always loved Unigine's graphics (not only Superposition but Valley and Heaven 4)
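On the PSU headroom point: two cards near 380W plus a heavily overclocked Threadripper does crowd a 1300W unit. A rough sketch of the budget (only the 1300W PSU and 380W-per-card figures come from the post; the CPU and rest-of-system wattages are assumptions for illustration):

```python
# Rough PSU headroom estimate for a dual-2080-Ti + OC'd Threadripper rig.
PSU_WATTS = 1300      # from the post above
GPU_WATTS = 380       # per card, from the post above
CPU_WATTS = 350       # assumed draw for a heavily OC'd Threadripper
SYSTEM_WATTS = 100    # assumed: board, RAM, drives, fans, pumps

def headroom(n_gpus: int) -> int:
    """Watts left on the PSU after the estimated system draw."""
    return PSU_WATTS - (n_gpus * GPU_WATTS + CPU_WATTS + SYSTEM_WATTS)

print(headroom(2))  # 90 W left: no room for an XOC-BIOS power limit
print(headroom(1))  # 470 W left with a single card
```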


----------



## sultanofswing

J7SC said:


> Well, what I posted was with the GPU at 2205/82xx (compared to yours above)... Re. your 8700K/5.1, CPU speed matters a bit but not that much with 4K and 8k benches. But GPU temp control matters a great deal - with only one of two cards used, that single card has a total of 1080x55 rad space to itself in my setup... To be able to run 2205, ambient temp has to be 17c or so. My typical single-GPU speed is 2190 for 'everything' and with a typical 21c or so ambient, all on stock GPU Bios. With two cards pulling about 380w each already plus an oc'ed TR, my 1300w PSU does not have enough headroom left, otherwise I would try one of the XOC bios - I am sure I would gain a decent amount re. PL.
> 
> The real issue with tweaks is if people compare 'apples and oranges'. There are indeed a myriad of tweaks such as LOD Bias (especially if you have the right unofficial NVInspector folks over at HWBot use - and where those kind of tweaks are allowed). In general, Unigine does not have the same type of system check apps 3DM has to validate, even though a new tweak will take an update by 3DM system check app to catch it (regarding accepting 3DM submission for HoF etc via the 'valid score' tag). I use 4K and especially 8K Superposition to give the 11GB of VRAM a real standardized test and workout. I always loved Unigine's graphics (not only Superposition but Valley and Heaven 4)


Yeah, I just see the leaderboards filled with scores higher than mine at much lower GPU clocks, and it makes me wonder. Other than pushing to 2220MHz the other day, I have not done much more testing, as I am still waiting for the waterblock to show up for the card. I do not expect much better temps, but I will be dedicating either a 480 and a thick 360 radiator, or two 480s, strictly to the GPU.

Right now I am still testing to see how far I can go at just 1 volt. I was messing around today, and it seems I got a great core but the memory is not the best. I see slight artifacts at +1200 and driver crashes at +1250; I even bumped up the memory voltage a little bit with no luck. Maybe once the full-cover block is on the card and the memory temps get a little better we will see, but it's doubtful.


----------



## Shewie

sultanofswing said:


> You guys have some really good superposition scores. Not sure what I am doing wrong with my setup.
> 2160/8000 I only get a 8k Optimized score of 5872
> This is with an [email protected]
> Have the normal Nvidia Control panel settings like texture filtering quality set to performance.
> 
> There a secret to Superposition or Am I just not setting something up right.



I'm running all my tests with the 8700K clocked at 5.1GHz.
My 2080 Ti isn't even a good bin; I'm getting a max of 2080MHz for a split second, then a stable 2055MHz for the rest of the test - the longer the test takes, the lower it clocks through the run.

My GPU bin isn't top notch, but my memory chips really are (in my opinion). 
I'm consistently able to clock my VRAM much higher than the vast majority of people here, without having any passive cooling on the memory dies for now  
My AIO cooling on the GPU is still a "work in progress"

My current BIOS - KFA2 380W.


----------



## dangerSK

JustinThyme said:


> Not when they are tweaks simply to jack the score of a synthetic benchmark and usually end up with worse real world performance.


He asked about his score, so I guess he's a bit interested in benching, and for proper benching you need these tweaks.


----------



## dangerSK

Shewie said:


> I'm running all my tests with 8700k clocked at 5.1Ghz.
> My 2080ti isn't even a good bin, I'm getting max 2080Mhz for split second then stable 2055Mhz for the rest of the test, the longer the test takes the lower it clocks thrugh the run.
> 
> My GPU Bin isn't top notch but my memory chips really are (in my opinion).
> I'm constantly able to clock my VRAM much higher than vast majority of people here, without having any passive cooling on Memory Die as for now
> My AIO Cooling on GPU is still a "work in progress"
> 
> My current BIOS - KFA2 380W.


The problem is that memory OC on the 2080 Ti isn't crucial, and it's hard to bin a 2080 Ti for core. That's why you want top-bin cards like the HOF, KP or Lightning.


----------



## TK421

dangerSK said:


> Problem is mem oc on 2080Ti isnt crucial, its hard to bin 2080ti for core. Thats why u want top bin cards like HOF, KP or Lightning.


Is it true that the FE cards have a higher chance of getting the same quality die as the top bins from AIBs? Since NVIDIA has first dibs on binning for FE before the cores are sent out to the partners who buy them.


----------



## J7SC

I saw this article linked at one of OCN's news feeds and followed it to Techspot...interesting stuff > about DLSS


----------



## JustinThyme

dangerSK said:


> He asked about score, so I guess hes a bit interested in benching and for proper benching u need these tweaks.


He was asking if something was wrong with his rig. Cheats and renamed tweaks do not a performer make. For proper benching you need proper hardware set up correctly for the performance you paid for, not a number for HWBot. It's not like they pay out...


----------



## Uns33n

Is it possible to still brick a card when flashing if the card has a dual BIOS switch? Thinking of flashing the Galax BIOS to my Strix.


----------



## dangerSK

JustinThyme said:


> He was asking if something was wrong with his rig. Cheats and renamed tweaks does not a performer make. For proper benching you need proper hardware set up correctly for the performance you paid for, not a number for HwBot. Not like they pay out.....


Okay, tbh, what's cheating or incorrect about setting your GPU up in "performance" mode? You can use the LOD "tweak" for games if you want FPS. Setting the performance preference in the control panel isn't a cheat either. Using the best drivers for performance isn't a cheat, either. So what's wrong with that? The only tweaks I can recall that aren't ideal for daily use are maxmem, as you're limiting your RAM size, and a stripped OS (not ideal, but usable).


----------



## dangerSK

Uns33n said:


> Is it possible to still brick a card when flashing if card has dual bios switch? Thinking of flashing Galax bios to my strix.


No, even without dual BIOS you can easily recover from a bad flash.


----------



## dangerSK

TK421 said:


> is it true that the FE cards have higher chances of getting the same quality die as the top bin from AIB? since nvidia has the first dibs in binning for FE before the cores are sent out to customers who buy them


From my experience, FE cards have a very decent bin - not always a golden one, but a lot better than AIB cards. With AIB cards it's pretty random. But the best bins should still be the KP and GALAX. As for the Lightning, I'm not sure how hard (if at all) they are binning.


----------



## TK421

dangerSK said:


> From my experience FE cards have very decent bin, not always golden one but a lot better than AIB cards. With AIB cards its pretty random. But best bins should still be KP and GALAX, About Lightning Im not sure how hard (or not whatsoever) are they binning.



Around what level is the highest-end (but not XOC) card from each AIB? Such as the FTW3 Ultra, Strix OC, Gaming X Trio, etc.


----------



## eddy5667

sultanofswing said:


> So whoever posted the HOF XOC Bios on Techpowerup if you are in here something isn't right with it.
> I USB Programmed the BIOS about 3 weeks ago and everything was perfect with it other than the BSOD if you saved a profile or tried to lower the power target.
> 
> I since switched to a few different BIOS's and ended up USB programming back to the HOF XOC.
> I notice now there is a disclaimer on the description, and now the BIOS will not allow the PC to wake from sleep without the PC restarting on its own, which it did not do the first time I flashed it, before it must have been edited.
> 
> Also the first time I flashed that BIOS right before the OS loaded I would get a black screen showing Nvidia BIOS and the BIOS number and now it does not do that.
> 
> Hopefully you are in here and can take a look at it, if not it kinda sucks because it has ruined the BIOS.
> 
> I have USB programmed it 4 times now and cannot get the same result that I did the first time I Programmed it.


Can you please do me a favor and email me this Galax XOC BIOS that worked fine for you on your reference 2080 Ti?

[email protected]

thanks !!


----------



## dangerSK

TK421 said:


> Around where the level of the highest end card (but not XOC) for each AIB? Such as FTW3 Ultra, Strix OC, Gaming X Trio etc.


I believe they are the same; they are not binned, so it's totally random. However, you also need a decent PCB to match with a good core, so from the ones you listed I would go for the Strix - the best PCB of those, good XOC support, etc.


----------



## Sheyster

JustinThyme said:


> He was asking if something was wrong with his rig. Cheats and renamed tweaks does not a performer make. For proper benching you need proper hardware set up correctly for the performance you paid for, not a number for HwBot. Not like they pay out.....


Some people value e-peen and bragging rights more than pay-outs!

FWIW, I feel the same way you do about it. Take anything competitive, analyze it and there will always be tweaks, cheats, short-cuts, etc. that folks use to come out on top.


----------



## TK421

dangerSK said:


> I believe they are same, they are not binned so its totally random, however u also need decent PCB to match with good core so from the ones u listed I would go for Strix, best PCB from these, good XOC support etc.


According to the reference PCB analysis, the reference board is powerful enough even for overclocking on water.


----------



## dangerSK

TK421 said:


> according to reference pcb analysis the reference one is powerful enough even for overclocking on water


Well, I'm not even considering water as serious OC; I was talking about going really cold (LN2, dry ice). You want cards with good XOC support so you can easily change voltages etc. In terms of VRMs, the FE (or reference) is powerful enough.


----------



## sultanofswing

Looks like I am able to carry 2205MHz throughout Superposition 8K with a room temp of 21°C and a max GPU temp of 43°C on the modified AIO.
Still waiting on my waterblock to show up for this card!
Anyway, the Superposition score is still on the low side, I feel.


----------



## JustinThyme

Sheyster said:


> Some people value e-peen and bragging rights more than pay-outs!
> 
> FWIW, I feel the same way you do about it. Take anything competitive, analyze it and there will always be tweaks, cheats, short-cuts, etc. that folks use to come out on top.


Hell, we all like bragging rights. However, they lose their lustre when cheats are used to get there. The one that kills me the most is CPU OC: sure, we will give you credit for a 5GHz OC on an 18-core CPU when one core is at 5GHz and the rest are dialed back to nothing. Takes the wind out of my sails; it's all or nothing in my book.


----------



## TK421

dangerSK said:


> Well im not even considering water as a serious OC, I was talking going rly cold (ln2,dice) u want cards with good XOC support so u can easily change voltages etc. In terms of VRMs FE (or reference) is powerful enough



The only one I've seen so far is the KP, which has overkill memory power filtering.


I'm not sure if VRM quality (as opposed to power delivery spec) matters for ambient OC.









sultanofswing said:


> Looks Like I am able to carry 2205mhz throughout superposition 8k with a room temp of 21c and a max GPU temp of 43c on the modified AIO.
> Still waiting on my waterblock to show up for this card!
> Anyway, Superposition score still on the low side I feel.



how did you modify?


----------



## sultanofswing

TK421 said:


> the only one I've seen so far is the KP having overkill memory power filtering
> 
> 
> I'm not sure if vrm quality (not power delivery spec) matters for ambient oc
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> how did you modify?


Just swapped the 240mm radiator for a 280. Realistically, a 240mm radiator on a card pulling 350 watts isn't enough; hell, even the 280 is rather small.


----------



## J7SC

sultanofswing said:


> Just swapped the 240mm radiator with a 280, Realistically a 240mm radiator on a card at 350 watts isn't enough, Hell even the 280 is rather small.


 
...re. your Superposition, can you get the VRAM speed up (from 8000 to 8200 - 8300) or do you get artifacts etc ? Full waterblock should help with VRAM temps as well when it arrives


----------



## sultanofswing

J7SC said:


> ...re. your Superposition, can you get the VRAM speed up (from 8000 to 8200 - 8300) or do you get artifacts etc ? Full waterblock should help with VRAM temps as well when it arrives


VRAM seems to go to about 8200 before I see artifacts; at 8250 it crashes even if I up the voltage to the memory, so I just settle on 8000 and run that daily.


----------



## dangerSK

TK421 said:


> the only one I've seen so far is the KP having overkill memory power filtering
> 
> 
> I'm not sure if vrm quality (not power delivery spec) matters for ambient oc


It doesn't even matter at LN2; in general the 2080 Ti reference VRM is very solid. What you're paying Galax/MSI/Kingpin for is the support and features: you get voltage control with Afterburner and custom voltage tools, the cards support all sorts of LN2 pots, they have probe points... My point is you want these features on cold; for ambient OC even a normal 2080 Ti is fine.


----------



## TK421

sultanofswing said:


> Just swapped the 240mm radiator with a 280, Realistically a 240mm radiator on a card at 350 watts isn't enough, Hell even the 280 is rather small.



How did you do this? Isn't the unit on the Kingpin card a sealed Asetek unit?









J7SC said:


> ...re. your Superposition, can you get the VRAM speed up (from 8000 to 8200 - 8300) or do you get artifacts etc ? Full waterblock should help with VRAM temps as well when it arrives





sultanofswing said:


> VRAM seem's to go to about 8200 before I see artifacts, 8250 it crashes even if I up the Voltage to the memory so I just settle on 8000 and run that daily.



Weirdly enough, I can run +1300 stable 24/7 on my Samsung 2080 Ti XC Ultra (1650MHz boost).


Some people say that the Galax 380W BIOS uses looser timings, but I cannot be sure about this statement.


There are also zhrooms and the r/overclocking Discord saying that Micron benefits more from low temperatures (waterblock; not achievable on air) and that Samsung clocks better when the temperature is warmer (as with most air coolers on the market).















dangerSK said:


> It doesnt even matter at LN2, in general 2080Ti reference VRM is very solid, what youre paying to Galax/Msi/Kingpin for is the support and features, u get voltage control with afterburner and custom voltage tools, cards support all sorts of ln2 pots, they have probe points... My point is u want these features on cold, for ambient OC even normal 2080ti is fine.



Ah ok.


Currently I'm eyeing the Gaming X Trio 2080 Ti; it seems that the binning is 1755MHz, which is similar to the HOF OC WB on the front page.


I'm not so sure if the silicon on the X Trio is top tier or not. Can anyone comment on this?


----------



## dangerSK

TK421 said:


> How did you do this? Isn't the unit on the Kingpin card a sealed Asetek unit?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Weirdly enough I can run +1300 stable 24/7 on my Samsung 2080Ti XC Ultra (1650mhz boost)
> 
> 
> Some people say that the galax 380w bios is using looser timings, but I cannot be sure about this statement.
> 
> 
> There's also zhrooms and r/overclocking discord saying that Micron benefits more from low temperatures (waterblock, cannot be achieved on air) and Samsung clocks better if the temperature is warmer (for most air coolers on the market).
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Ah ok.
> 
> 
> Currently I'm eyeing the Gaming X Trio 2080ti, it seems that the binning is 1755MHz, which is similar to HOF OC WB on the front page.
> 
> 
> I'm not so sure if the silicon on the X Trio, is top tier or not. Can anyone comment on this?


Random bin. If you want serious OC and MSI's top-tier product, look for the MSI Lightning; the Trio has power balance issues, from what I heard.


----------



## sultanofswing

TK421 said:


> How did you do this? Isn't the unit on the Kingpin card a sealed Asetek unit?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Weirdly enough, I can run +1300 stable 24/7 on my Samsung 2080 Ti XC Ultra (1650 MHz boost).
> 
> 
> Some people say that the Galax 380W BIOS uses looser timings, but I can't be sure about this claim.
> 
> 
> There's also zhrooms and the r/overclocking Discord saying that Micron benefits more from low temperatures (waterblock; can't be achieved on air) and Samsung clocks better when the temperature is warmer (i.e. with most air coolers on the market).
> 
> 
> Ah ok.
> 
> 
> Currently I'm eyeing the Gaming X Trio 2080 Ti; it seems the binning is 1755 MHz, which is similar to the HOF OC WB on the front page.
> 
> 
> I'm not so sure whether the silicon on the X Trio is top tier or not. Can anyone comment on this?


It's a sealed unit, but the hoses come right off just like on any other AIO. All you have to do is submerge the radiator in distilled water with a little bit of glycol mixed in and hold the pump's suction side under the water until it runs out of the radiator outlet. While it is still submerged, you just pop the hose back on.

You got a good bin on your memory, it seems.
I got a good core bin but not a good memory bin, which I am fine with.


----------



## dangerSK

JustinThyme said:


> Hell, we all like bragging rights. However, they lose their lustre when cheats are used to get there. The one that kills me the most is CPU OC. Sure, we'll give you credit for a 5 GHz OC on an 18-core CPU when one core is at 5 GHz and the rest are dialed back to nothing. Takes the wind out of my sails; it's all or nothing in my book.


Don't know if this was directed at me, but I'm running my 9980XE at 5 GHz all-core, and I've never seen anyone brag about a one-core boost (***?).


----------



## TK421

dangerSK said:


> Random bin, if u want serios OC look for MSI Lightning in case u want MSI Top tier product, TRIO has power balance issues as I heard



What do you mean by power balance issue?


sultanofswing said:


> It's a sealed unit but the hoses come right off just like on any other AIO. All you have to do is submerge the radiator in distilled water with a little bit of glycol mixed in and put the suction from the pump under the water til it runs out of the radiator outlet. While it is submerged you just pop the hose back on.
> 
> You got a good bin on your memory it seems.
> I got a good core bin but not memory bin which that I am fine with.



How did you detach the tubes from the fittings?

Curious question: would you be willing to sell that modified AIO, or an unmodified stock unit, once a waterblock comes in?






Did you increase the memory voltage when overclocking the KP?


----------



## long2905

Guys, long-time lurker, first time posting here. I was using an MSI Gaming X Trio, which works great except for the part where I can only OC it to +130 MHz and no more. That's still fine, of course, as the card is a beast and it runs quiet.

Getting bored, however, I got a Palit 2080 Ti Dual from eBay just for the heck of it. Lo and behold, the screenshot from the seller, as well as me opening up GPU-Z, shows the card's device ID as 1E07, aka a 300A binned chip. The vendor was listed as Zotac/PC Partner, which is odd.

I then read around Reddit and here and tried to flash an EVGA XC Ultra vBIOS, and it succeeded. I didn't pay much attention at the time to whether it was 300 or 300A, only amazed that this Palit card can get a zero fan RPM mode with just a vBIOS flash.

Then I tried to revert to a stock Palit Dual vBIOS and got a device ID mismatch, and nothing changed. So I tried to dig deeper, and everything points to a Palit Dual having a 300 chip (1E04), but not mine?

I then flashed the KFA 380W vBIOS successfully, and then a Palit Gaming Pro OC vBIOS with no issue either. So now I'm trying to get the Palit ThunderMaster app to recognize the LED on the card to control it.

Any thoughts on this, guys? Am I just lucky, or did the seller somehow slap a Palit Dual cooler on a Zotac AMP card (that's the result I got when I clicked Lookup in GPU-Z)?

EDIT: this is what I got with the Palit Gaming Pro OC vBIOS so far. I'm not too much of an enthusiast, though, so I don't want to bother with Superposition, for now anyway.

https://www.3dmark.com/spy/10785181


----------



## JustinThyme

dangerSK said:


> Dont know If this was directed at me but Im running my 9980xe all core 5Ghz and never seen anyone brag with one core boost (***?)..


Not directed at anyone, just the general idea, which is fact. Look here, and especially at the bell ringers on HWBot.
BTW, I don't see you listed in the 5GHz club here, but several that are listed aren't on all cores.


----------



## JustinThyme

long2905 said:


> guys long time lurker first time posting here. i was using a MSI Gaming X Trio which works great except for the part where I can only OC it to +130Mhz and not more. thats still fine of course as the card is a beast and it runs quiet.
> 
> getting bored however i got a Palit 2080ti dual from ebay just for the heck of it. low and behold the screenshot from the seller as well as me opening up gpuz have the card's device ID as 1E07 aka a 300A binned chip. The vendor was listed as Zotac/PC Partner which is odd.
> 
> I then read around reddit and here and tried to flash a EVGA XC Ultra vBIOS and it succeeded. I didnt pay much attention at the time whether its 300 or 300A, only amazed that this Palit card can get zero fan RPM mode with just a vbios flash.
> 
> Then I try to revert to a stock Palit Dual vBIOS and then got the device ID mismatch and nothing changed. So I tried to dig deeper and all points lead to a Palit Dual having a 300 chip 1E04 but not mine?
> 
> I then tried to flash the KFA 380W vBIOS and succeeded, and then a Palit Gaming Pro OC vBIOS with no issue either. So now I'm trying to get the Palit ThunderMaster app to recognize the led on the card to control it.
> 
> Any thoughts on this guys? Am I just lucky or did the seller somehow slap a Palit Dual cooler on a Zotac AMP card (thats the result I got when i click on look up in GPUZ)
> 
> EDIT: this is what I got with the Palit Gaming Pro OC vBIOS so far. I'm not too much of an enthusiast though so dont want to bother with SuperPosition, for now anyway.
> 
> https://www.3dmark.com/spy/10785181


Not too bad for a single card, but not a valid score either.
Clocks look decent.


----------



## dangerSK

JustinThyme said:


> Not directed at anyone, just the general idea that is fact. Look here and especially the bell ringers on HwBot.
> BTW, Dont see you listed on the 5GHZ club here but there are several listed that are and not on all cores.


5GHz club? What's that, do I need to be in it? https://hwbot.org/submission/4084882_slovak_killer_cinebench___r15_core_i9_9980xe_5503_cb


----------



## truehighroller1

JustinThyme said:


> Not directed at anyone, just the general idea that is fact. Look here and especially the bell ringers on HwBot.
> BTW, Dont see you listed on the 5GHZ club here but there are several listed that are and not on all cores.



Hey hey hey now, lol. I reached 5.3 GHz on all cores, thank you very much, on my 7900X, and on my old CPU I reached 5 GHz (and possibly more) on all cores too, lol.

Hell, 2Pac might have still been alive back when I had my first one, lol.

Also, that's without cheats, both times.


----------



## long2905

JustinThyme said:


> Not too bad for single card but not a valid score either.
> 
> Clocks look decent.




Thanks. The invalid score was due to me using an ES chip, but that's fine by me.

Technically I have two cards right now, but I don't need that much power.




----------



## sultanofswing

TK421 said:


> What do you mean by power balance issue?
> 
> 
> How did you detach the tubes from fittings?
> 
> Curious question, would you be willing to sell that modified AIO or an unmodified stock unit once a waterblock comes in?
> 
> 
> Did you increase the memory voltage when overclocking the KP?


I'll have to see when, and if, the waterblock comes in. Hopefully it's not infected when it shows up, LOL.
I've increased memory voltage, but it didn't help in my situation.


----------



## TK421

sultanofswing said:


> I'll have to see when and if the waterblock comes in. Hopefully it's not infected when it shows up LOL.
> I've increased memory voltage but it didn't help in my situation.



Yeah, the growth thing is what I'm afraid of.


----------



## J7SC

...some interesting Metro Exodus DX12 4K results using the CFR / checkerboard mGPU setting. It's on 2x Titan RTX, not 2x 2080 Ti, but close enough:

https://www.youtube.com/watch?v=HX5pNpVMShA


----------



## JustinThyme

long2905 said:


> Thanks. The validity was due to me using a ES chip but thats fine to me.
> 
> Technically i have 2 cards right now but i dont need that much power.


I don't most of the time either, but I run two... because I can.


----------



## long2905

JustinThyme said:


> I don't most of the time either but run two......because I can.


Can't argue with that.

Anyway, it looks like the KFA vBIOS is the best for my card so far; it can score 16k GPU points in Time Spy.


----------



## TK421

long2905 said:


> cant argue with that
> 
> anyway it looks like the KFA vBIOS is best for my card so far. can score 16k GPU point in timespy



The KFA 380W, for reference?


----------



## long2905

TK421 said:


> kfa 380w for reference?




Yeah, that's what I flashed, with no issues so far. The card can sustain high clocks for longer. According to GPU-Z, the card draws as much as 392W.


----------



## TheFinnishOne

Hi, I have a Palit 2080 Ti, a non-"A" model card, cooled by an Alphacool Eiswolf 240 GPX Pro, a 240mm AIO.
I run it with +150 on the core and +1000 on the memory; when it's boosting as high as it's supposed to, that results in 2100 MHz on the core and 8000 MHz on the memory.
But the problem is that although both CPU and GPU temps are fine, just below 55C, for the majority of the time I only get 1995 MHz from the core.
Same games, same GPU tests; sometimes I get 2100 MHz, but usually 1995 MHz. What causes that?
Here's the best 3DMark Time Spy result I've had: https://www.3dmark.com/spy/7516312
A GPU score of 16,129, and that was below the highest 2100/8000 MHz I could push the card to.


----------



## sultanofswing

TheFinnishOne said:


> Hi, I have a Palit 2080 ti, non "A" model card, cooled by a Alphacool Eiswolf 240 GPX Pro. 240mm AIO
> I run it with plus 150 on the core and 1000 on the memory, when its boosting as high as its supposed to be, that results in a 2100Mhz core and 8000Mhz on the memory.
> But the problem is that although both cpu temps are just fine, and gpu temps are fine, just below 55c, for the majority of the time I only get 1995Mhz from the core.
> Same games, same gpu tests, sometimes I get 2100, usually 1995Mhz, what causes that?
> Here's the best 3D Mark Timespy result I've had https://www.3dmark.com/spy/7516312
> GPU score of 16 129 and that was below the highest 2100/8000Mhz I could push the card to.


This is normal, as GPU Boost will drop 15 MHz at certain temperature intervals: minus 15 MHz at 36C, minus 15 MHz at 40C, etc.
If you want the card to run 2100 MHz, overshoot the clock by however many 15 MHz intervals it drops due to temps.
I also recommend using the voltage curve editor rather than just entering a + value in the slider window; with the slider method the card will use whatever voltage is available, and sometimes that is more than is needed.
Example: stock, my card runs 2040 MHz, and the voltage is 1.05-1.06V.
I use the voltage curve to run [email protected]


----------



## MrTOOSHORT

sultanofswing said:


> You guys have some really good Superposition scores. Not sure what I am doing wrong with my setup.
> At 2160/8000 I only get an 8K Optimized score of 5872.
> This is with an [email protected]
> I have the normal Nvidia Control Panel settings, like texture filtering quality set to performance.
> 
> Is there a secret to Superposition, or am I just not setting something up right?


I did some testing on my KPE; you need more voltage. I can pass 2175 MHz under 1V (a real 0.92V) in 8K Superposition, but the score sucks: 58xx. I added a little more voltage via the Classified tool (a real 1.02V, measured with a Fluke DMM), and the score jumped up a bit: 6258.


----------



## Mooncheese

Uns33n said:


> Just picked up my first 2080Ti card. This card is a BEAST. So far +175 core & +1000 mem, stock voltage, benchmarks and gaming stable. 60c MAX.
> 
> Can't wait to see what this card can really do.


This card has impressed me, but I wouldn't have paid full price for it. So far I've seen my FPS go from ~70 to 100 in The Witcher 3 (with HD Reworked), The Division 2, and No Man's Sky with mostly maxed settings, and in Shadow of the Tomb Raider I'm actually shocked to see that performance with RT and DLSS on is the same as it was with the 1080 Ti with no DLSS or RT (~70 FPS, dips to 60 in demanding areas) @ 3440x1440. Crypto hash rate also saw a 50% bump. I was under the impression that the 2080 Ti was only 25% faster than the 1080 Ti on average, but it's been more than that thus far, at least in a handful of titles.

Really looking forward to seeing how much faster the 2080 Ti is vs the 1080 Ti in Wolfenstein II: The New Colossus, as that is one title that leverages the FP32 performance of Turing heavily in favor of the 2080 Ti. Probably looking at 50% faster there as well, and there's also Middle-earth: Shadow of War.

Also glad I picked this up ahead of the coronavirus-hysteria-induced shutdown of the global electronics supply chain in China; I wonder how that may impact the arrival of Ampere, Big Navi, and Intel's 10th-gen processors. Not that I'm interested in more 14nm+++++++++; the IPC of Comet Lake will be the same as Coffee Lake, just with 4 more cores. That's great if the title in question is DX12, or if I'm doing a lot of video editing, but it's not worth buying a new mobo IMHO. I will rock the 8700K until 10nm at minimum; I like to get 3-5 years out of my components.






Stoked that I got my A1-chip card for $750 total, including a $150 Phanteks Glacier block. I wouldn't pay more than that for a 2080 Ti honestly, even if the gains are near 50% in some titles. But yeah, it's nicer than I thought it would be; DLSS is definitely awesome tech, and I wish more games supported it.



TheFinnishOne said:


> Hi, I have a Palit 2080 ti, non "A" model card, cooled by a Alphacool Eiswolf 240 GPX Pro. 240mm AIO
> I run it with plus 150 on the core and 1000 on the memory, when its boosting as high as its supposed to be, that results in a 2100Mhz core and 8000Mhz on the memory.
> But the problem is that although both cpu temps are just fine, and gpu temps are fine, just below 55c, for the majority of the time I only get 1995Mhz from the core.
> Same games, same gpu tests, sometimes I get 2100, usually 1995Mhz, what causes that?
> Here's the best 3D Mark Timespy result I've had https://www.3dmark.com/spy/7516312
> GPU score of 16 129 and that was below the highest 2100/8000Mhz I could push the card to.





sultanofswing said:


> This is normal as GPU boost will drop 15mhz at different temp intervals. 36c minus 15mhz, 40c minus 15mhz etc etc
> If you want the card to run 2100mhz overshoot the clock by however many 15mhz intervals it drops due to temps
> I also recommend using the Voltage curve editor vs just entering a +value in the slider window as doing that method the card will use whatever voltage is available and sometimes it is more than is needed.
> Example, Stock my card runs 2040mhz and then voltage is 1.05-1.06
> I use the Voltage curve to run [email protected]


1995 MHz at 55C, whereas it will do 2100 MHz in other games, going by what you're saying. This sounds more like wattage starvation than anything else, a problem that is actually exacerbated by using more voltage. By undervolting you can sustain a given frequency with less wattage, but only if the temps are low enough, and 55C is definitely low enough.

Let me give you an example.

With the 1080 Ti FE, default vBIOS, in The Witcher 3 in 3D Vision (the load induced by this particular title in 3D Vision is something else), I will see the clocks dip all the way down to 1935 MHz with the PT maxed (120% = 300W) at the default voltage of 1.063V.

If I undervolt to 1.025V, the clocks typically don't dip below 2000-1987 MHz with the same load, same scene, etc. In fact you can test this yourself: set one MSI AB profile via a frequency/voltage undervolt curve (Alt+F on the telemetry graph brings up the curve editor; full instructions in the link below), save that to a profile, assign it to a hotkey, say Ctrl+Shift+2, have your default voltage profile assigned to Ctrl+Shift+1, and simply switch between profiles in the game that is causing your clocks to dip down to 1995 MHz and see if this alleviates the issue.

Throwing more voltage at the issue is NOT the solution, as this will only exacerbate the problem.

The problem is, you may not get away with an undervolt on a non-A binned TU102, but there is no harm in trying. 55C is much lower than 85C, so you may actually stand a good chance. For example, my TU102-300A-K1-A1 does 2040 MHz at 1.025V @ 35C, where it eventually settles at 2025 MHz at a 40-43C load (1kW of radiator surface area, shared with an 8700K).

I recommend using OC Scanner to test your undervolt for stability; my experience has been that if you get a 90% confidence rating there, you're extremely unlikely to get a display driver failure. I've yet to get one at a 90% confidence rating on my profiles, and I've had my 2080 Ti for about a month now, running it at 100% load for around 3 hours per evening across multiple games, many of which are extremely problematic (The Division 2 is notorious for GPU-related crashing).

https://www.reddit.com/r/nvidia/comments/9idtco/rtx_2080_downvolt_the_easy_way/
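The wattage argument above can be sketched numerically. Dynamic power scales roughly with frequency times voltage squared, so a small undervolt frees meaningful headroom under a fixed power limit. A minimal sketch, assuming that simple scaling law; the two operating points are the voltages quoted in the post, used here purely as illustrative inputs, not measurements from any specific card:

```python
# Rough sketch of the wattage argument above: dynamic power scales roughly
# as P ~ f * V^2, so a small undervolt frees real headroom under a fixed
# power limit. The operating points below are illustrative, not measured.

def relative_power(freq_mhz: float, volts: float) -> float:
    """Relative dynamic power in arbitrary units (P ~ f * V^2)."""
    return freq_mhz * volts ** 2

# Default-voltage point vs. an undervolted point at the same clock:
stock = relative_power(2000, 1.063)      # ~1.063 V default, per the post
undervolt = relative_power(2000, 1.025)  # the 1.025 V undervolt

savings = 1 - undervolt / stock
print(f"~{savings:.1%} less power at the same clock")
```

That works out to roughly 7% less power at the same clock, which is exactly the headroom that keeps the card from dipping below its target frequency at the power limit.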


----------



## sultanofswing

MrTOOSHORT said:


> I did some testing on my KPE, you need more voltage. I can pass 2175Mhz under 1v(real 0.92v) on 8K Superposition, but score sucks, 58xx. Added a little more voltage via classy tool(1.02v real with Fluke dmm), and score jumped up a bit. 6258


More voltage does not seem to help in my case.


----------



## TheFinnishOne

Thank you both sultanofswing and Mooncheese.
Very informative answers, will experiment with downvolting.
Will report back here with results.


----------



## KCDC

J7SC said:


> ...some interesting Metro Exodus DX12 4K using the CFR / checkerboard MGPU setting. It's on 2x Titan RTX, not 2x 2080 Ti, but close enough
> 
> https://www.youtube.com/watch?v=HX5pNpVMShA


I tried getting this to work on the latest Tomb Raider @ 4K with Inspector, but it was a crashy mess. Haven't tried it on Exodus, but now I will!


----------



## Mooncheese

TheFinnishOne said:


> Thank you both sultanofswing and Mooncheese.
> Very informative answers, will experiment with downvolting.
> Will report back here with results.


Curious as to what your findings were! I only assumed wattage starvation because I believe you indicated that you were using a reference-PCB 2080 Ti (the Palit variant), and you should not be seeing a frequency drop of 100 MHz going from, say, 35C to 55C (the clocks drop 15 MHz every 10C, so 55C should be like 2070 MHz vs 2100 MHz at 35C). Either way, undervolting is the way to go with both the 1080 Ti and the 2080 Ti.


----------



## JustinThyme

Mooncheese said:


> Curious as to what your findings were! I only assumed wattage starvation because I believe you indicated that you were using a reference PCB 2080 Ti (Palit variant) and you should not be seeing a freq. drop of 100 MHz going from say 35 to 55C (the clocks drop 15 MHz every 10C, so 55C should be like 2070 vs 2100 MHz at 35C). Either way, undervolting is the way to go with both 1080 Ti and 2080 Ti.


Water cooling is the way to go; I never make it to 40C.


----------



## sultanofswing

Mooncheese said:


> Curious as to what your findings were! I only assumed wattage starvation because I believe you indicated that you were using a reference PCB 2080 Ti (Palit variant) and you should not be seeing a freq. drop of 100 MHz going from say 35 to 55C (the clocks drop 15 MHz every 10C, so 55C should be like 2070 vs 2100 MHz at 35C). Either way, undervolting is the way to go with both 1080 Ti and 2080 Ti.


On my card there is a 15 MHz drop at 35-36C and then another 15 MHz drop at 40-41C.
If we continue this trend, the drops seem to come at roughly 5-6C intervals, which, if you were at 2100 MHz with idle temps of 28C, means you would drop to:
2085 at 35-36C
2070 at 40-41C
2055 at 45-46C
2040 at 50-51C
2025 at 55-56C

A person on air cooling may never notice the first two clock-speed decreases, as the card heats up way faster than a card on water.
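The stepping described above can be written as a tiny lookup. A sketch assuming the exact thresholds from the quoted table; real bin boundaries vary from card to card, so treat the numbers as placeholders:

```python
# Sketch of the temperature-stepped GPU Boost behavior described above:
# one 15 MHz bin is lost at each ~5C threshold. The thresholds mirror the
# quoted table and are assumptions; real bin boundaries vary card to card.

BIN_THRESHOLDS_C = (36, 41, 46, 51, 56)  # temps where a 15 MHz bin drops
BIN_STEP_MHZ = 15

def effective_clock(cold_clock_mhz: int, gpu_temp_c: float) -> int:
    """Boost clock after temperature bins, given the cold (idle-temp) clock."""
    bins_lost = sum(1 for t in BIN_THRESHOLDS_C if gpu_temp_c >= t)
    return cold_clock_mhz - bins_lost * BIN_STEP_MHZ

for temp in (30, 36, 41, 46, 51, 56):
    print(f"{temp}C -> {effective_clock(2100, temp)} MHz")
```

This also shows why "overshooting" works: to hold 2100 MHz at 55C you would dial in a cold clock high enough that, after four lost bins, the result is still 2100.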


----------



## Mooncheese

JustinThyme said:


> water cooling is the way to go, I never make it to 40C.


Nice! How much rad surface area do you have?

Since tearing down my loop, removing all of the plasticizer residue from the soft tubing, and switching to EK's Quantum backplate with contact over the VRM (not sure if this is a contributing factor), I'm now seeing a 42-43C load at 2040-2025 MHz with a 420 SE + 360 PE. I have 14/10 acrylic tubing (and lots of it), a Barrow distro plate, and about $150 worth of Bykski fittings coming from AliExpress; I can't wait to get this EK DuraClear out of my system. What a mess! If you're reading this and contemplating a full loop, please take this advice: skip right over the soft tubing and go straight to hard tubing, preferably acrylic. (PETG will still do everything that soft tubing will: discoloration in direct sunlight, water permeation out of the loop, and the plasticizer crap; it just takes 2-3x as long.)





sultanofswing said:


> On my card there is a 15mhz Drop at 35-36c and then a 15mhz drop at 40-41c.
> If we continue this trend it seems the drops come in at or around the 5-6c range which if you were at 2100 and idle temps were 28 you would drop to
> 2085 at 35-36c
> 2070 at 40-41c
> 2055 at 45-46c
> 2040 at 50-51c
> 2025 at 55-56c
> 
> A person on Air cooling may never see the first 2 clockspeed decreases as the card heats up way faster than a card on water.


Thanks for the correction; on the 980 Ti and 1080 Ti it was around 13 MHz every 10C, but you're right, it's much more aggressive with the 2080 Ti. My clocks start out at 2040 MHz @ 35C and then drop to 2025 MHz at 40C, so the -15 MHz every 5C is accurate.

It's possible that TheFinnishOne is losing all of the frequency to thermal drops, but there's still 35 MHz unaccounted for if their 2100 MHz @ 35C down to 1995 MHz @ 55C is an accurate statement.


----------



## JustinThyme

Mooncheese said:


> Nice! How much rad surface area do you have?


Passive backplates do make a difference.

My rad surface is overkill, to say the least. Enough so that it won't be part of the equation:
480x60 XE with push/pull ML120s in the bottom
360 SE with push QL fans in the front
360 HW Labs GTS mid, between the basement and upper compartment, with pull QLs
420 HW Labs GTR with pull Noctua 140 industrial 2000 RPM fans

Right now it's about as good as it's going to get inside a case. I've been contemplating getting a MO-RA 420 to get all of that out of the case, or just getting an 800W chiller and keeping it just to where there is no condensation.


----------



## Medizinmann

JustinThyme said:


> Passive back plates do make a difference.
> 
> My rad surface is overkill to say the least. Enough so that it wont be part of the equation.
> 480x60 XE with push pull ML 120s in the bottom.
> 360 SE with push QL fans in the front
> 360 HW labs GTS mid between basement and upper comparment with pull QL
> 420 HW Labs GTR with pull Noctua 140 industrial 2000 rpm fans.
> 
> Right now its about as good as its going to get inside of a case. Been contemplating getting a MO-RA 420 to get all of that out of the case or just getting an 800W chiller and keeping it just to where there is no condensation.


Yep, the next logical step. I've thought about this one:
https://www.alphacool.com/shop/new-products/21410/alphacool-eiszeit-2000-chiller-black

Greetings,
Medizinmann


----------



## Mooncheese

JustinThyme said:


> Passive back plates do make a difference.
> 
> My rad surface is overkill to say the least. Enough so that it wont be part of the equation.
> 480x60 XE with push pull ML 120s in the bottom.
> 360 SE with push QL fans in the front
> 360 HW labs GTS mid between basement and upper comparment with pull QL
> 420 HW Labs GTR with pull Noctua 140 industrial 2000 rpm fans.
> 
> Right now its about as good as its going to get inside of a case. Been contemplating getting a MO-RA 420 to get all of that out of the case or just getting an 800W chiller and keeping it just to where there is no condensation.


Jesus Christ, I literally laughed out loud reading this; that's an insane amount of radiators. Corsair 1000D?

You're making me want to upgrade from the View 71, but I just splurged on a $160 distribution plate for it.

Anyhow, you should be doing better than 40C with that much rad surface area; are you running a 400W BIOS and an overvolt? For comparison, I'm seeing 43C @ 2040-2025 MHz @ 1.025V with only an EK 420 SE and 360 PE (the latter in push/pull) with the fans at 40% RPM, and that's shared with an 8700K @ 5.0 GHz @ 1.342V.

Maybe you have a 300W CPU or something.

Edit: I just checked your signature, trying to gauge how much an overclocked 14-core/28-thread 9940X draws; I think around 300W?

And thanks for mentioning a chiller; I didn't even know what one was until today. Watching this now, very interesting: https://youtu.be/VDEu7zPTNu4


----------



## JustinThyme

Mooncheese said:


> Jesus Christ, I literally laughed out loud reading this, that's an insane amount of radiators, Corsair 1000D?
> 
> Youre making me want to upgrade from View 71 but I just splurged on a $160 distribution plate for it.
> 
> Anyhow, you should be better than 40C with that much rad surface area, are you running a 400W BIOS and an over-volt? For comparison, I'm seeing 43C @ 2040-2025 MHz @ 1.025v with only an EK 420 SE and 360 PE (latter in push pull) with the fans at 40% RPM and that's shared with an 8700k @ 5.0GHz @ 1.342v.
> 
> Maybe you have a 300W CPU or something.
> 
> Edit, I just check your signature, trying to gauge how much 14 core / 28 thread 9940x draws with an overclock, I think around 300W?
> 
> And thanks for mentioning a chiller, I didn't even know what this was until today. Watching this now, very interesting: https://youtu.be/VDEu7zPTNu4


LOL. Enthoo Elite case. I have a 900D that's sitting empty and may get refilled with another HEDT build, but I'm on the fence about running down a mini-ITX X299 board that will fit in a Phanteks Evolv Shift X and slapping in one of the 1080 Tis I have collecting dust for an overkill HTPC.

You'd be surprised how much current draw comes from a 9940X OC'd to 4.8 on all cores (it will go higher, but temps past 80C make me nervous). Pair that with an active cooling block on the VRMs and a pair of 2080 Tis, and when I hit a heavy load on everything it pushes a 1600W PSU as far as I'd like to take it. I've actually slammed the CPU with multiple AVX workloads at the same time and, with just the CPU loaded and the GPUs at idle, pulled 900 watts from the wall. When I load up everything, it will run the fans up to nearly 100% and maintain a 30-31C liquid temp, so a 9-10C delta, which is very good. Anything less than 15 is decent.

I've run several different BIOS versions and they all end the same: I get 2150, give or take, max with no added volts. If I want to run up the VRAM I have to give it a little kick to +20, which ends up at about 1.035V peak, never hitting the stock power limit at 125%. Ran the Matrix BIOS and the XOC, same results. One GPU by itself is a slightly better bin than the other, but not by much. What's the fun of running one a little higher when you can run two?

I’ll take it!
https://www.3dmark.com/spy/10762306

A lot of different opinions on SLI. NVLink bandwidth at 100 GB/s makes the old HB bridges from last generation look sick at 3 GB/s. Nvidia stopped developing it for a while but is back in full swing. Some titles don't support it, but nearly every title I bother with does, and when using GPUs for a workload other than gaming, two cards blow one out of the water.

Yeah, it's a lot of $$, but I'm not starving. Old and decrepit doesn't mean washed up and in the poorhouse. At a comfortable level with two brats. One is a college junior on a full ride playing VB, and brat two is like half the members here: an 18-year-old HS senior who knows everything. I keep telling him to pack his crap, get out, make a million in his first year, and make a fool out of me while he still knows everything, because in a few years he's going to wake up, realize he doesn't know diddly, and be broke busing tables.

He hates school with a passion, so no college is gonna happen there. Trying to get him into VoTech, as that's where the money is now.

A million kids come out with a degree in forensic anthropology from watching too much TV, then wonder why they're standing in the unemployment line: for the 10K who graduated with that major this year, there is one job opening. Yet welders and other skilled trades are starting at $75K.

Too many are on pipe dreams, thinking they'll come out of a 4-year liberal arts program and land a job with a starting salary in the 6-figure range. My wife is highly educated, with double masters in marketing and business management from an Ivy League school. Rowed her way through college. She's an operations manager for a fairly large company, and a new gal shows up with a masters in marketing. She hires her, and 3 days later the new hire comes barging into my wife's office demanding a massive pay and benefits increase: "This person is making $25K/year more than me and gets 30 days of paid vacation; I should get no less!" My wife didn't even look up from what she was doing; she just told her this isn't going to be a good fit for you, come back Friday to collect your 3-day paycheck. The dummy didn't stop to think that she was demanding to match someone who has been in the position for 20 years!

Best part of that story: I have a measly BSEE from Tulane, working in critical power platform engineering, and my wife makes me look poor! LOL. She gets the $$ and I get the benefits. Hard to top top-tier health insurance and 6 weeks of PTO. The laughing stops every April 15th, though, and turns to ????


----------



## kithylin

Mooncheese said:


> Maybe you have a 300W CPU or something.
> 
> Edit, I just check your signature, trying to gauge how much 14 core / 28 thread 9940x draws with an overclock, I think around 300W?


https://www.overclock.net/forum/297...ial-5ghz-overclock-club-871.html#post28333300

They are running their 9940X at 5.1 GHz @ 1.295V all-core. According to the OuterVision power supply calculator ( https://outervision.com/power-supply-calculator ), that should be roughly 451 watts for the CPU. I know that site isn't 100% accurate, but it's all we have.
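The calculator's number can be sanity-checked with a back-of-the-envelope scaling of the chip's TDP by clock ratio and voltage ratio squared. A sketch under stated assumptions: the 9940X's 165W TDP and 3.3 GHz base clock are published specs, but the ~1.0V stock voltage is a guess, so the result is a ballpark only:

```python
# Back-of-the-envelope cross-check on the calculator's 451 W figure:
# scale the CPU's TDP by the clock ratio and the voltage ratio squared.
# The ~1.0 V stock voltage is an assumed figure, so this is a ballpark.

def scaled_cpu_power(tdp_w: float, base_ghz: float, base_v: float,
                     oc_ghz: float, oc_v: float) -> float:
    """P_oc ~ TDP * (f_oc / f_base) * (V_oc / V_base)^2."""
    return tdp_w * (oc_ghz / base_ghz) * (oc_v / base_v) ** 2

# i9-9940X: 165 W TDP at its 3.3 GHz base clock; the OC is 5.1 GHz @ 1.295 V.
estimate = scaled_cpu_power(165, 3.3, 1.0, 5.1, 1.295)
print(f"~{estimate:.0f} W")
```

That lands around 428W, in the same ballpark as the calculator's 451W, so the figure isn't unreasonable.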


----------



## JustinThyme

Medizinmann said:


> Yepp - tbe next logical step - thought about this one…
> https://www.alphacool.com/shop/new-products/21410/alphacool-eiszeit-2000-chiller-black
> 
> Greetings,
> Medizinmann


Nice-looking chiller, but I don't see any available in the US. 1500W would be awesome. By the time I finished ordering one from the UK, with shipping and VAT and customs, I'd have to take out a second mortgage. The only PC-specific units available here that I've found are the Koolance 800W. Another option is to convert an aquarium chiller.


----------



## JustinThyme

kithylin said:


> https://www.overclock.net/forum/297...ial-5ghz-overclock-club-871.html#post28333300
> 
> They are running their 9940X @ 5.1 Ghz @ 1.295v all-core. According to the outer vision power supply calculator ( https://outervision.com/power-supply-calculator ) that should be 451 watts for the CPU, roughly. I know that site isn't 100% accurate but it's all we have.


I use the network management card on my UPS, which gives me total output. Granted, the best measurement I can get is with the CPU loaded up, the GPUs at idle, and the pumps and fans running; three D5s add about 50 watts to the load. When I'm benching long, hard 30-minute runs, I have it polling all power data every 60 seconds.


----------



## J7SC

Not sure if this was mentioned here already, but last week Tom's HW mentioned two new Nvidia GPUs popping up in some benches with some insane CUDA core counts (7,552 and 6,912 respectively). For comparison, the 2080 Ti has 4,352. Apart from the usual grain of salt with such early stories, and some odd memory sizes, these are likely engineering samples (Quadro) and may be destined for data centres. Still, if this in any way relates to next-gen Ampere, even a cut-down 7nm 3080 Ti would be a monster (or rather mon$ter). link

All the more reason to refine CFR NVLink for dual 2080 Ti (8,704 CUDA cores).


----------



## Mooncheese

J7SC said:


> Not sure if this was mentioned here already, but last week, Tom's HW mentioned two new NVidia GPUs popping up in some benches w/ some insane Cuda core counts (7,552 CU and 6,912 cu respectively). For comparison, 2080 Ti has 4352 Cu. Apart from the usual grain of salt with such early stories and some odd memory sizes, these are likely engineering samples (Quadro) and may be destined for data centres. Still, if this in any way relates to next-gen Ampere, a cut-down 3080 Ti 7nm still would be a monster (or rather mon$ter). link
> 
> All the more reason to refine CFR NVLink for dual 2080 TI (8704 Cu)


Those leaked specs are 100% for a Quadro card for data centers, or a purpose-built rendering card like the Titan V, but the relevant bit is that the card is on average 40-50% faster than the Titan V. The 24-48 GB of video memory suggests the card is a V100 or Titan V successor.

The big question is, will that card even go into production now that Intel has had a major algorithmic breakthrough and managed to get a pair of 22-core Xeons to be 3.5 times faster than a V100 in deep learning: https://wccftech.com/intel-ai-breakthrough-destroys-8-nvidia-v100-gpu/

So even if this card is 40-50% faster than a V100, if that's what it is meant to replace, it's already pointless for that market. 

This may be good news for us though, as Nvidia would now be forced to focus more on their gaming segment. Not sure.


----------



## z390e

J7SC said:


> All the more reason to refine CFR NVLink for dual 2080 TI (8704 Cu)


When you say refine CFR NVLink for dual 2080 Ti, can you elaborate? It seems like it would only really bring value at 4K or 8K for gaming. I'm already at the max framerate for my monitor (144Hz) in games like Division 2 with a single 2080 Ti at 1440p. A game like Escape from Tarkov is what I'd like to see a CFR A/B test of with dual 2080 Tis.

In that game I can't get above about 90fps outside Shoreline dorms, even at 1440p with everything I can throw at it. I'd love to see someone with NVLink and 2080 Tis post a video of Tarkov. Hell, I'd even PayPal someone to buy the game to do the test (plus a tip), assuming they released the game back to me after the test.


----------



## z390e

Mooncheese said:


> Those leaked specs are 100% for a Quadro card for data centers or a purpose built rendering card like Titan V but the relevant bit is that the card on average is 40-50% faster than Titan V. 24-48 GB of video memory says that that card is likely a V100 or Titan V successor.
> 
> Big question is, will that card even go into production now that Intel has had a major algorithmic breakthrough and managed to get a pair of 22-core Xeons to be 3.5 times faster than a V100 in deep learning: https://wccftech.com/intel-ai-breakthrough-destroys-8-nvidia-v100-gpu/
> 
> So even if this card is 40-50% faster than a V100, if that's what it is meant to replace, it's already pointless for that market.
> 
> This may be good news for us though, as Nvidia would now be forced to focus more on their gaming segment. Not sure.



I actually suspect that's why we're seeing these leak out now, as NV tries to see what they can do before they're made obsolete.


----------



## J7SC

Mooncheese said:


> Those leaked specs are 100% for a Quadro card for data centers or a purpose built rendering card like Titan V but the relevant bit is that the card on average is 40-50% faster than Titan V. 24-48 GB of video memory says that that card is likely a V100 or Titan V successor.
> 
> Big question is, will that card even go into production now that Intel has had a major algorithmic breakthrough and managed to get a pair of 22-core Xeons to be 3.5 times faster than a V100 in deep learning: (...)


 
I read that article earlier. It still might be a bit early to weigh the costs and benefits of each approach (matrix multiplications vs hashing). I was also thinking about AMD's massive cache HEDTs in that context :thinking: But clearly, there's a lot more work to be done re. AI and DL. 'Somewhat' related:





 



z390e said:


> When you say refine CFR NVLInk for dual 2080ti can you elaborate? It seems like it would only really bring value on 4K or 8K for gaming, Im already at max framerate for my monitor (144hz) for games like Division 2 with a single 2080ti on 1440p. A game like Escape from Tarkov is what I'd like to see a CFR A/B test of with dual 2080ti's.
> 
> That game I cant get above about 90fps outside Shoreline dorms even at 1440p with everything I can throw at it. I'd love to see someone with NVLink and 2080ti's post a video of Tarkov. Hell, Id even paypal someone to buy the game to do the test (+ a tip) assuming the re-released the game back to me after the test.


 
For now, I have just followed the original CFR RTX NVLink instructions for games like Metro Exodus (the YouTube video I posted earlier in the week was from someone else, though), and I'm on a 4K monitor (DisplayPort) anyhow. But when I have more time, I look forward to diving deeper into tiles / CFR, because it strikes me as a potentially superior multi-GPU solution compared to AFR.


----------



## Mooncheese

Welp, bad news everyone: my 2080 Ti TU-102-300A-K1-A1 started having issues today. I got a display driver failure while trying to run a game this afternoon (Mafia 3), and opening Event Viewer I noticed there was not only a display driver failure from that, but 5+ more from earlier in the day when I was mining crypto at 50% PT, +400 MHz on the memory.

At first, while trying not to panic, I re-ran OC Scanner at default clocks and voltage, +0 core and memory, and it failed with a 0% confidence rating. Then, wanting to rule out a corrupted display driver, I DDU'ed the driver and installed the same driver, 442.19, creating a restore point beforehand in case this didn't alleviate the issue (so many Nvidia Control Panel settings to lose), and the problem persisted. Out of curiosity I ran 3DMark Fire Strike to check for artifacts, and there were artifacts within 5-10 seconds of starting GPU test 1, followed by a display driver failure necessitating a hard reset. I forgot to mention that the majority of attempts to run OC Scanner resulted in the screen freezing.

So then I began to panic and took to the internet. I know my card has Micron memory, and I believe this is where the problem is. Since I bought the card second-hand, I can't seek warranty assistance, unfortunately. I tried underclocking the memory by -300 MHz and it alleviated the issue, but only temporarily it seems, as artifacts now appear in Fire Strike and it refuses to pass the OC Scanner stability test. Tried -500 MHz, same issue.
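For a sense of scale, a memory offset maps directly onto lost bandwidth. A minimal sketch, assuming the offset applies to the card's 1750 MHz base memory clock and that GDDR6 moves 8 bits per pin per clock (assumptions chosen because they reproduce the 2080 Ti's 616 GB/s stock figure):

```python
# GDDR6 bandwidth from memory clock and bus width.
# Assumes the software offset applies to the 1750 MHz base clock and that
# GDDR6 transfers 8 bits per pin per clock (14 Gbps effective at stock).
def bandwidth_gbs(mem_clock_mhz, bus_bits=352):
    effective_gbps = mem_clock_mhz * 8 / 1000   # per-pin data rate in Gbps
    return effective_gbps * bus_bits / 8        # GB/s across the whole bus

print(bandwidth_gbs(1750))        # 616.0 GB/s (stock 2080 Ti)
print(bandwidth_gbs(1750 - 500))  # 440.0 GB/s (with a -500 MHz offset)
```

So a -500 MHz offset gives up roughly 29% of the card's bandwidth, which is why it is a diagnostic step rather than a fix you'd want to live with.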

******* Micron memory, I swear to god if Nvidia releases 3000 series with that garbage I am 100% not buying any of it. 

Really disappointed. At least I'm not out $1200, but if I had bought new, at least I would be under warranty right now. I will never buy a used GPU again. Thing is, I don't believe it's the seller's fault. I swapped back-plates recently, and the EK Quantum 2080 Ti FE backplate didn't have any thermal pads on the underside covering the memory, only the GPU core and the VRM all the way to the right (I followed the instructions). Looking at the underside of the Alienware Aurora back-plate, there are thermal pads where the memory is. So it's possible I cooked the memory to death by swapping the back-plate, though I'm not sure how that's even possible, because the water-block is making contact with the modules. I tried putting the old back-plate back on, but it's already too late; one or more of the crappy Micron modules is probably gone, as doing this didn't resolve the issue. This is an FE-PCB card. How can EK sell their Quantum back-plate for the 2080 Ti FE with no thermal pad placement for the memory, and how is this not a widespread problem?


----------



## z390e

Mooncheese said:


> 2. Don't buy any GPU with Micron memory. Ever.


Sorry to hear about your issues, but we've known they had problems with the Micron memory since 2018, when they stopped shipping with it. I feel like this one is a buyer-beware lesson. Hopefully the card can be saved.


----------



## Shawnb99

Mooncheese said:


> Welp, bad news everyone, my 2080 Ti TU-102-300A-K1-A1 started having issues today. I got a display driver failure while trying to run a game this afternoon (Mafia 3) and opening Event Viewer I noticed that there was not only a display driver failure from that but like 5+ from earlier in the day when I was mining crypto at 50% PT, +400 MHz on the memory.
> 
> 
> 
> At first, while trying not to panic, I re-ran OC Scanner at default clocks and voltage, +0 core and memory and it failed with a 0% confidence rating. So then wanting to rule out a corrupted display driver I DDU'ed the driver and installed the same driver 442.19, creating a restore point before doing so in case this didn't alleviate the issue (so many Nvidia Control Panel settings to lose) and the problem persisted. Out of curiosity I decided to run a program to see if there were artifacts, 3DMark Firestrike and there were artifacts within 5-10 seconds of starting GPU test 1 followed by a display driver failure, necessitating a hard reset. I forgot to mention that the majority of attempt to run OC Scanner resulted in the screen freezing.
> 
> 
> 
> So then I begin to panic and take to the internet, I know that my card has Micron memory, and I believe this is where the problem is. Considering that I bought the card second-hand I can't seek warranty assistance unfortunately. Tried underclocking the memory to -300 MHz and it alleviated the issue but it seems only temporarily as now artifacts appear in Firestrike and it refuses to pass OC Scanner stability test. Tried -500 MHz, same issue.
> 
> 
> 
> ******* Micron memory, I swear to god if Nvidia releases 3000 series with that garbage I am 100% not buying any of it.
> 
> 
> 
> Really disappointed, at least I'm not out $1200 but if I bought new at least I would be under warranty right now. I will never buy a used GPU again. Thing is, I don't believe it's the seller's fault, I swapped back-plates recently and the EK Quantum 2080 Ti FE backplate didn't have any thermal pads on the underside covering the memory, only the GPU core and VRM all the way to the right (I followed the instructions). Looking at the under-side of the Alienware Aurora back-plate and there are thermal pads where the memory is. So it's possible that I cooked the memory to death having swapped the back-plate, how this is even possible I'm not sure because the water-block is making contact with the modules. I tried to put the old back-plate back on but it's already too late, one of or more of the crappy Micron modules is probably gone as doing this didn't resolve the issue. This is an FE PCB card, how EK can sell their Quantum back-plate for 2080 Ti FE with no thermal pad placement for the memory, how can this not be a widespread problem?
> 
> https://youtu.be/t5memuI5WD4
> 
> 
> 
> Tomorrow I will put the 1080 Ti back in. Good thing I didn't rush to sell the 1080 Ti, then I would be out of a GPU for a while.
> 
> 
> 
> I suppose I will take this as a lesson.
> 
> 
> 
> 1. Don't buy used GPU's. (this card was fine, I believe the swapping of the back-plate is what caused the issue, I just have no warranty to fall back on).
> 
> 
> 
> 2. Don't buy any GPU with Micron memory. Ever.
> 
> 
> 
> I can't believe how widespread this issue is, 2080 Ti FE must have a massive failure rate.
> 
> 
> 
> https://www.gamersnexus.net/guides/...cting-failure-analysis-crashing-black-screens
> 
> 
> 
> https://www.pcbuildersclub.com/en/2...ches-from-micron-to-samsung-for-gddr6-memory/
> 
> 
> 
> I tried to see if anyone offers GPU repair service, I know of one tech group down in Brazil who swapped out the memory modules with the newer faster memory from 2080 S but this is beyond my capability. Does anyone know if anyone offers repair service that includes replacing memory modules?
> 
> 
> 
> Card is fine except for this garbage ass Micron memory.
> 
> 
> 
> https://www.youtube.com/watch?v=UqlKrGFmxKY




If buying used, buy EVGA, since you can transfer the warranty. Otherwise it's a crapshoot buying cards second-hand.


----------



## jura11

Mooncheese said:


> Nice! How much rad surface area do you have?
> 
> Since tearing down my loop, removing all of the plasticizer from the soft tubing, and switching to EK's Quantum back-plate with contact over the VRM (not sure if this is a contributing factor), I'm now seeing 42-43C load at 2040-2025 MHz with a 420 SE + 360 PE. I have 14/10 acrylic tubing (and lots of it), a Barrow distro plate, and about $150 worth of Bykski fittings coming from AliExpress; can't wait to get this EK Duraclear out of my system. What a mess! If you're reading this and contemplating a full loop, please take this advice: just skip right over the soft tubing and go straight to hard tubing, preferably acrylic (PETG will still do everything that soft tubing will: discoloration in direct sunlight, water permeation out of the loop, and plasticizer crap; it just takes 2-3x as long)
> 
> 
> 
> 
> 
> Thanks for the correction. On the 980 Ti and 1080 Ti it was around 13 MHz every 10C, but you're right, it's much more aggressive with the 2080 Ti. My clocks start at 2040 MHz @ 35C and then drop to 2025 at 40C, so the -15 MHz every 5C is accurate.
> 
> It's possible that 'TheFinnishOne' is losing all of the freq from thermal drops, but there's still 35 MHz unaccounted for if their 2100 MHz @ 35C down to 1995 MHz @ 55C is an accurate statement.


Hi there 

If you really want to use soft tubing, then I would suggest EK ZMT or something like Tygon A-60-G, which is similar to ZMT, or any other EPDM tubing; they're zero-maintenance and usually last up to 10 years.

With any soft tubing you should keep in mind that water temperature shouldn't be higher than 35-40°C; if it is, you will see plasticizer develop more quickly. EK DuraClear I wouldn't touch; in my loop it lasted 3 months, because after that the tubing started yellowing, and if you see yellowing you know the water temperature is too high. At that time I ran a 5820K at 4.6GHz and a 3-GPU setup (GTX 1080 Ti at 2113MHz OC, GTX 1080 at 2164MHz, and GTX 1080 at 2100MHz OC) with a 360mm and a 240mm 60mm-thick radiator on the bottom in a Phanteks Enthoo Primo.

I remember water temperature in that old 3-GPU loop being 30-35°C max under load or during gaming, but I still saw yellowing; 40°C I have seen only in very warm weather, like during past heatwaves over here, when water temperature was in the 30s even at idle.

Because of this I switched to EK ZMT / EPDM tubing. I usually tear down the loop every 6 months to check tubing and blocks, and I'm now running EK ZMT for the 2nd year with no issues.

I use Bykski and Barrow fittings on my loop, with no issues with them.

During colder days or nights I tried to see what my Asus RTX 2080 Ti Strix can achieve; best to date is 2220MHz, which drops to 2205MHz in Fire Strike, but clocks are usually stable in such benchmarks.

For gaming, 2175-2190MHz are my preferred clocks; 2205MHz I can use in some games, but not every game. 2175-2190MHz is usually most stable, with GPU temperatures at 36-40°C max.

If you can, add a MO-RA3 360 to your loop; you will definitely benefit from it. I'm running 4x 360mm radiators plus a MO-RA3 360 and I'm very happy with this setup: fans on the main radiators run at 650-750RPM, and the Arctic Cooling P12 PWM fans on the MO-RA3 run at 1000-1200RPM, because on that radiator you simply can't hear them. I have another set of P12 PWMs on the other radiators, running at 700-800RPM max.

The loudest thing in my loop is the PSU, a Super Flower 8Pack 2000W.

If you're mining, pay attention to water temperature; if it runs high, you can expect plasticizer to develop because of the tubing and water temperature.

The best way to counter it is more radiator space and fans.

Hope this helps 

Thanks, Jura


----------



## kithylin

Just a note on tubing: personally, I use exclusively Primochill Advanced LRT tubing. I have had some loops in the past using this tubing with water temps up to 50-55°C (back when I was running an i7-980X @ 4.6 GHz plus dual overclocked GTX 480s in the same loop years ago; those cards put out some -INSANE- heat to contend with) and I have never had any detrimental effects, and no plasticizer in the loops. I called their office and spoke with a human to confirm that the Advanced LRT line is plasticizer-free, much like medical-grade tubing.


----------



## jura11

JustinThyme said:


> Passive back plates do make a difference.
> 
> My rad surface is overkill, to say the least. Enough so that it won't be part of the equation.
> 480x60 XE with push-pull ML120s in the bottom.
> 360 SE with push QL fans in the front.
> 360 HW Labs GTS mid, between basement and upper compartment, with pull QL.
> 420 HW Labs GTR with pull Noctua 140 industrial 2000 RPM fans.
> 
> Right now it's about as good as it's going to get inside of a case. Been contemplating getting a MO-RA 420 to get all of that out of the case, or just getting an 800W chiller and keeping it just to where there is no condensation.



Hi there 

There is no such thing as overkill in water cooling. I'm running 4x 360mm radiators plus a MO-RA3 360; when I ran just the 4x 360mm radiators without the MO-RA3, temperatures were OK, but I still wasn't 100% happy because I needed to run the fans faster in some cases. Now with the MO-RA3 there are no issues: fans run low with no noise, and the PSU fan is the loudest thing in my loop. I'm contemplating removing the PSU fans and replacing them with something way quieter, controlled with an Aquaero.

I was also contemplating getting a chiller for hot summer days.

More radiators will help with a good water delta T, but won't help much when ambient temperature is high, because you can't go below ambient with normal water cooling unless you have a water chiller.
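That ambient floor can be put into rough numbers: at steady state, water settles near ambient plus heat load divided by the radiators' dissipation per degree of water-to-air delta. A sketch with assumed figures (the 60 W/°C is an illustrative guess for a large low-RPM radiator array, not a measurement):

```python
# Steady-state loop temperature estimate: water ≈ ambient + Q / k,
# where k (watts dissipated per °C of water-to-air delta) is assumed here.
def water_temp_c(ambient_c, heat_load_w, rad_w_per_c):
    """Rough steady-state water temperature for a given heat load and radiator capacity."""
    return ambient_c + heat_load_w / rad_w_per_c

# ~600 W of CPU + GPU heat into a big radiator array at low fan speed:
print(water_temp_c(25, 600, 60))  # 35.0 °C — never below the 25 °C ambient
```

Adding radiators or fan speed raises k and shrinks the delta, but the ambient term never goes away, which is exactly why only a chiller gets you below room temperature.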

I built a loop for a friend who is running just an 8086K at 5.2GHz and an Asus RTX 2080 Ti Strix in an Enthoo Primo; temperatures we're seeing are 36-38°C, with a 360mm 60mm-thick radiator on top and a 240mm 60mm-thick radiator on the bottom. His Strix is OK and will do 2145-2160MHz in any game or benchmark.

In his case we tried a few fans; in the end I swapped all his Corsair SP120s for Arctic Cooling P12 PWMs, and he can finally enjoy low noise while gaming, because the Corsair SP120s were noisy at any RPM. On another loop I use Noiseblocker BlackSilent PL2 Pros, which are nice, very quiet fans, although they don't push a lot of air through a 60mm radiator.

Hope this helps 

Thanks, Jura


----------



## jura11

kithylin said:


> Just a note on tubing: Personally I use exclusively Primochill Advanced LRT tubing. I have had some loops in the past using this tubing getting water temps up to 50c - 55c (back when I was doing i7-980X @ 4.6 Ghz + Dual overclocked GTX 480's in the same loop years ago. Those cards put out some -INSANE- heat to contend with) and I have never had any detrimental effects. And no plasticizer in the loops. I have called their office and spoke with a human to confirm that the Advanced LRT line is Plasticizer-free much like medical grade tubing.


Hi there 

PrimoChill Advanced LRT, it depends. I have used two batches of LRT; one is perfect on a friend's build (I used it only at the back of the case, because I ran out of EPDM tubing), and on another loop we are starting to see yellowing.

As for plasticizer, it's hard to say; both loops are quite freshly built (3 months), and if we see any build-up of plasticizer after 1 year, we'll know.

Some people do have issues with LRT tubing, and personally I don't use it; I prefer EK ZMT or EPDM or Tygon A-60-G tubing, which I know will last years.

Hope this helps 

Thanks, Jura


----------



## jura11

Mooncheese said:


> Welp, bad news everyone, my 2080 Ti TU-102-300A-K1-A1 started having issues today. I got a display driver failure while trying to run a game this afternoon (Mafia 3) and opening Event Viewer I noticed that there was not only a display driver failure from that but like 5+ from earlier in the day when I was mining crypto at 50% PT, +400 MHz on the memory.
> 
> At first, while trying not to panic, I re-ran OC Scanner at default clocks and voltage, +0 core and memory and it failed with a 0% confidence rating. So then wanting to rule out a corrupted display driver I DDU'ed the driver and installed the same driver 442.19, creating a restore point before doing so in case this didn't alleviate the issue (so many Nvidia Control Panel settings to lose) and the problem persisted. Out of curiosity I decided to run a program to see if there were artifacts, 3DMark Firestrike and there were artifacts within 5-10 seconds of starting GPU test 1 followed by a display driver failure, necessitating a hard reset. I forgot to mention that the majority of attempt to run OC Scanner resulted in the screen freezing.
> 
> So then I begin to panic and take to the internet, I know that my card has Micron memory, and I believe this is where the problem is. Considering that I bought the card second-hand I can't seek warranty assistance unfortunately. Tried underclocking the memory to -300 MHz and it alleviated the issue but it seems only temporarily as now artifacts appear in Firestrike and it refuses to pass OC Scanner stability test. Tried -500 MHz, same issue.
> 
> ******* Micron memory, I swear to god if Nvidia releases 3000 series with that garbage I am 100% not buying any of it.
> 
> Really disappointed, at least I'm not out $1200 but if I bought new at least I would be under warranty right now. I will never buy a used GPU again. Thing is, I don't believe it's the seller's fault, I swapped back-plates recently and the EK Quantum 2080 Ti FE backplate didn't have any thermal pads on the underside covering the memory, only the GPU core and VRM all the way to the right (I followed the instructions). Looking at the under-side of the Alienware Aurora back-plate and there are thermal pads where the memory is. So it's possible that I cooked the memory to death having swapped the back-plate, how this is even possible I'm not sure because the water-block is making contact with the modules. I tried to put the old back-plate back on but it's already too late, one of or more of the crappy Micron modules is probably gone as doing this didn't resolve the issue. This is an FE PCB card, how EK can sell their Quantum back-plate for 2080 Ti FE with no thermal pad placement for the memory, how can this not be a widespread problem?
> https://youtu.be/t5memuI5WD4
> 
> Tomorrow I will put the 1080 Ti back in. Good thing I didn't rush to sell the 1080 Ti, then I would be out of a GPU for a while.
> 
> I suppose I will take this as a lesson.
> 
> 1. Don't buy used GPU's. (this card was fine, I believe the swapping of the back-plate is what caused the issue, I just have no warranty to fall back on).
> 
> 2. Don't buy any GPU with Micron memory. Ever.
> 
> I can't believe how widespread this issue is, 2080 Ti FE must have a massive failure rate.
> 
> https://www.gamersnexus.net/guides/...cting-failure-analysis-crashing-black-screens
> 
> https://www.pcbuildersclub.com/en/2...ches-from-micron-to-samsung-for-gddr6-memory/
> 
> I tried to see if anyone offers GPU repair service, I know of one tech group down in Brazil who swapped out the memory modules with the newer faster memory from 2080 S but this is beyond my capability. Does anyone know if anyone offers repair service that includes replacing memory modules?
> 
> Card is fine except for this garbage ass Micron memory.
> 
> https://www.youtube.com/watch?v=UqlKrGFmxKY



Hi there 

It's sad to hear you have issues with your RTX 2080 Ti, or rather with the Micron memory.

A few months ago I built loops for friends using Palit RTX 2080 Tis with Micron VRAM, and both failed after 3 months of constant use. I was lucky because I could at least RMA them; in your case it's hard to say. Did you try underclocking to something like -700MHz? In my friend's case it helped a bit, because he could at least play games and do something.

Replacing the VRAM with Samsung or something like that would, I think, be possible, but you would need a BGA rework machine. You can try contacting someone in your area who can do it, but I'm not sure you'd be able to find Samsung GDDR6.

Hope this helps 

Thanks, Jura


----------



## JustinThyme

Sorry to hear of the 2080 Ti's passing. There was a lot of speculation early on, with the first 2080 Tis rolling out, that Micron was the culprit. I was seeing that too, with underclocking the VRAM being nearly a 100% fix. A lot of manufacturers rushed back to the drawing board to either use a different memory vendor or change the heat-removal method for the VRAM. Some have reported good success with Micron chips, and in the same breath, I've not heard of anyone having thermonuclear meltdowns with Samsung memory. Some bins still clock higher than others, but Samsung seems to be very solid and stable. I bought two cards that aren't the best OCs I've seen posted in the OC department, but they are stable. Actually sequential serial numbers. 1600 is about where I land 100% stable and reliable on Samsung VRAM.


----------



## Mooncheese

jura11 said:


> Hi there
> 
> It's sad to hear you have issues with your RTX 2080 Ti, or rather with the Micron memory.
> 
> A few months ago I built loops for friends using Palit RTX 2080 Tis with Micron VRAM, and both failed after 3 months of constant use. I was lucky because I could at least RMA them; in your case it's hard to say. Did you try underclocking to something like -700MHz? In my friend's case it helped a bit, because he could at least play games and do something.
> 
> Replacing the VRAM with Samsung or something like that would, I think, be possible, but you would need a BGA rework machine. You can try contacting someone in your area who can do it, but I'm not sure you'd be able to find Samsung GDDR6.
> 
> Hope this helps
> 
> Thanks, Jura


Thanks for the consolation. Yeah, this unit is from 2018; although I just purchased it used, its manufacture date was Nov 2018, going by the original BIOS in GPU-Z. What is worrisome is that at first I was able to underclock the memory by 300 MHz and pass Fire Strike without artifacts, and it passed MSI AB OC Scanner with a 70% rating (down from 90% with the same core clocks). Then, while trying to further diagnose it and get the confidence rating up to 100%, it wouldn't even pass OC Scanner (0% confidence), or it would freeze, and now it's presenting artifacts in Fire Strike at -300, -400 and even -500 MHz. I'm going to try taking it apart, putting the thicker Fujitsu thermal pads I currently have on the VRM onto the memory, and putting the thinner blue pad that was on the memory modules onto the MOSFET chokes. For all I know it could be a clearance issue with the water-block. Looking at the row of modules nearest the terminal, the memory there is making contact with the water-block, so I'm assuming all of the modules are. The 4 screws encircling the GPU core are going into the water-block very tight, nearly as tight as they will go (but not so much that it would crack the die), so I'm nearly positive the memory modules are making contact with the water-block.

I will tear it all back apart this morning and give it one more shot before putting the 1080 Ti back in. It just sucks that replacing this card is now a $1200+ before-taxes affair, if I were to go that route. And that's really the only route I would go; I would never take another chance on a used 2080 Ti, even with Samsung memory. So I suppose I will just buckle up, sit tight, watch the COVID-19 supply-chain disruption ****-show, and hope that Ampere is still on track for release sometime in June-July, then upgrade to a 3080 at that point. I don't think I will ever have it in me to buy a $1200 before-taxes GPU. Not with these kinds of failure rates.



JustinThyme said:


> Sorry to hear of the 2080Tis passing. There was a lot of speculation early on with the first 2080Tis rolling out from everyone in the Micron was the culprit. I was seeing that with underclocking the Vram being nearly a 100% fix rate. A lot of manufacturers rushed back to the drawing board to either use a different memory vendor or make changes on the heat removal method from the Vram. Some have reported good success with Micron chips and in the same breath I’ve not heard of anyone having thermonuclear meltdowns with Samsung memory. Some bins still clock higher than others but Samsung seems to be very solid and stable. I bought two cards that aren’t the best OCs I’ve seen posted in the OC dept but they are stable. Actually sequential serial numbers. 1600 is about where I land 100% stable and reliable on Samsung Vram.


Yeah, this unit has Micron memory, and going by the release date of its original BIOS in GPU-Z (Sept-Nov 2018), it's one of the first-wave Micron units to top it off. I tried underclocking by -300 MHz and had some initial success, but then, after trying to place the thermal pads from the original back-plate over the memory on the back side of the PCB in conjunction with the EK Quantum back-plate and re-attempting, there are now artifacts in Fire Strike at -300 MHz, -400 MHz, etc. -500 MHz is the limit. I still don't know if it's an inadequate-pressure issue because of the EK back-plate, or if the failed memory module is deteriorating very quickly, given that underclocking the VRAM no longer clears up the artifacts in Fire Strike or gets it through OC Scanner.

Very sad. This sucks. I've read about the failures, but to have it happen... man, I just feel stupid for even getting this thing. I should have just waited, but I got tired of waiting and didn't want to spend $1200 before taxes for it. Now I'm really going to wait, and I'm out nearly $800 all said and done. A lot of money for me as a veteran on partial disability (I make $1100 a month, and nearly half of it goes to rent and utilities).

I didn't realize how widespread the Micron memory failure problem was; I ran across comments from various threads amounting to "I work at an RMA center and we can't keep track of the number of returns for 20-series cards".

What is disgusting is that NGreedia went with Micron memory to save what, $1-2, maybe $3 per memory module? That kind of bean-counting bull**** on a $1200 GPU? I tried contacting NGreedia to see what kind of repair service, if any, I could avail myself of, and there's basically nothing. No one repairs these things, apparently. I mean, all that's wrong with it is the Micron memory. No one can change that out for a fee?

Just sad and enraged. 

I almost just want to not go with NGreedia for my next GPU at this point. Just sell the AW3418DW, hope that AMD's version of ray tracing is as good or better, and get a FreeSync panel. I'm nearly that done with them at this point.

Ridiculous price-gouging with the 20 series, and they use cheap Micron memory to save pennies in the grand scheme of things.


----------



## J7SC

*Re. Micron memory*

...sorry to hear that another card is biting the dust. This isn't my first post on this and probably won't be my last... Personally, I prefer Samsung over Micron, though I've had no problems with Micron on my two custom-PCB boards. Importantly, the original speculation / blame game about 'bad' Micron chips was superseded *once NVidia updated its official position on the early production-run failures*. The best article on that is by Techspot > *here* 

I highly recommend you read it, especially as some YouTubers simply never bothered updating their original speculation. Briefly, almost all early 2080 Tis had Micron, so any other issue with the cards just got tagged onto Micron. Also, RTX 2080 (non-Ti) cards did not have the problem of what NVidia called "test escapes" on the RTX 2080 Ti - yet the RTX 2080 and RTX 2080 Ti share the same memory controller and chips...

Folks with early cards should have exchanged them long ago.


----------



## sultanofswing

Maybe you can try a VBIOS with looser memory timings.


----------



## Mooncheese

J7SC said:


> *Re. Micron memory*
> 
> ...sorry to hear that another card is biting the dust. This isn't my first post on this and probably won't be my last...personally, I prefer Samsung over Micron, though I have had no problem w/ Micron on my two custom PCB bards. Importantly, the original speculation / blame game about 'bad' Micron chips was superseded *once NVidia updated its official position on the early production-run failures*. The best article on that is by Techspot > *here*
> 
> I highly recommend you read it, especially as some YouTubers simply did not bother updating their original speculation. Briefly, almost all early 2080 TIs had Micron, so any other issue with the cards just got tagged onto Micron. Also, RTX 2080 (non-Ti) cards did not have the problem of what NVidia called "Test escapes" for the RTX 2080 Tis - yet RTX 2080 and RTX 2080 Ti share the same memory controller and chips...
> 
> Folks with early cards should have exchanged them long ago.


Could it be the Galax BIOS? I'm doubtful. I just took the card and block out and replaced the thermal pads on the memory with slightly taller Fujitsu thermal pads, to ensure that it wasn't an issue of the memory having inadequate contact with the block. I put it back in and the problem persists; worse, underclocking the memory to the -500 MHz limit doesn't alleviate it, so it's not as though I can limp this thing along until Ampere releases. I basically have an $800 paperweight. I will try flashing to the factory 2080 Ti FE VBIOS, 90.02.30.00.05, from here: https://www.techpowerup.com/vgabios/?architecture=NVIDIA&model=RTX+2080+Ti&page=2

I have nothing left to lose, though I highly doubt flashing to a different BIOS will fix this; it feels like failed memory. 

Problem is, the artifacts I'm getting aren't the space-invader or firework variety; they appear as corrupt vertical and horizontal purple lines that flash briefly and then disappear. At first I was certain it was memory, but now I don't know anymore. The GPU core hasn't seen more than 43C recently and has even been undervolted. Changing the voltage doesn't fix the problem. 

Is anyone offering GPU repair service? I can't believe NGreedia doesn't offer to repair these things, what a waste.


----------



## kithylin

jura11 said:


> Hi there
> 
> PrimoChill Advanced LRT, it depends; I have used two batches of LRT. One is perfect on a friend's build (I used it only at the back of the case, because I ran out of EPDM tubing), and on the other loop we are starting to see yellowing
> 
> As for plasticizer, it's hard to say right now because both loops are quite freshly built (3 months); if there is any build-up of plasticizer after 1 year, we will see it then
> 
> Some people do have issues with LRT tubing, and personally I don't use it; I prefer EK ZMT, EPDM, or Tygon A-60-G tubing, which I know will last years
> 
> Hope this helps
> 
> Thanks, Jura


I'm not trying to argue, but just so you know: the Advanced LRT line of tubing will never put plasticizer in your loop. It physically isn't in the chemical formula of the tubing, so it will never break down in a loop and add anything to the water. I've only used the UV Blue line, though. If you are using clear tubing, all clear tubing yellows when run in a closed loop system, no matter who makes it. Yellowing of tubing does not mean there is plasticizer in the loop; it's just a natural thing.


----------



## Mooncheese

The 2080 Ti FE BIOS didn't solve the problem. The 2080 Ti is going out and the 1080 Ti is back in. Sad day. I can't believe my 3-year-old 1080 Ti is more durable than this piece of **** Micron-memory 2080 Ti. 

Thinking of dumping NGreed completely with RDNA 2. It was their ******* price gouging that got me in this situation in the first place. Had they priced the 80 Ti at $700 like they have going back to Kepler, I wouldn't have paid $700 for a used unit without a warranty. 

Honestly, they can go **** themselves. Pretty much done with NGreedia. And there's nothing I can do; apparently no one in the U.S. (or the world) offers GPU repair service. No warranty? You're ****ed.


----------



## J7SC

Mooncheese said:


> The 2080 Ti FE BIOS didn't solve the problem. The 2080 Ti is going out and the 1080 Ti is back in. Sad day. I can't believe my 3-year-old 1080 Ti is more durable than this piece of **** Micron-memory 2080 Ti.
> 
> Thinking of dumping NGreed completely with RDNA 2. It was their ******* price gouging that got me in this situation in the first place. Had they priced the 80 Ti at $700 like they have going back to Kepler, I wouldn't have paid $700 for a used unit without a warranty.
> 
> Honestly, they can go **** themselves. Pretty much done with NGreedia. And there's nothing I can do; apparently no one in the U.S. (or the world) offers GPU repair service. No warranty? You're ****ed.


 
If the OEM BIOS roll-back didn't fix it AND your old 1080 Ti works fine in the same system (including monitor), then that's not encouraging. Still, if you have another, older Windows machine, trying the 2080 Ti there is another step toward triangulating the problem(s).

Now, some years back I did subzero / LN2, and there were times hardware (including GPUs) got hurt and started to display weirdness. I used the 'oven solder re-flow' method on two occasions to fix it. Have a look at the vid below, but make *sure to use extreme caution* (i.e. max heat limit; duration), and only if you have otherwise exhausted all other options (incl. checking on warranty). Also, preferably do not use your main oven, as there will be toxic fumes. Anyhow, good luck


----------



## Mooncheese

I called a repair shop in San Diego, and in the course of talking with the technician I realized that repair is basically impossible without being able to buy the Samsung memory modules in question. They can technically be replaced (it takes good soldering skills), but without the modules, repair is out of the question.


----------



## reflex75

Cards with Samsung memory have died too.
https://www.nvidia.com/en-us/geforc...s-cards/5/297061/explanation-about-dying-rtx/


----------



## Avacado

Mooncheese said:


> And there's nothing I can do, apparently no-one in the U.S. (or the world) offers GPU repair service. No warranty? Youre ****ed.


I mean, I just recently got my 2080 Ti from EVGA with a 5 year warranty, and it's pretty solid.


----------



## Mooncheese

Avacado said:


> I mean, I just recently got my 2080 Ti from EVGA with a 5 year warranty, and it's pretty solid.


I should have paid $1,300 after taxes for mine so I could have a warranty. I got the 2080 Ti for $750 including a $150 Phanteks Glacier block; granted, it only lasted two weeks, but yeah. 

Anyhow, the 1080 Ti is back in and good lord did it need a repaste. I ran The Witcher 3 at 100% load @ 2025 MHz @ 1.025 V for 15 minutes and the temps didn't break 38C! Still averaging 75-80 FPS in a demanding scene with a lot of foliage in Toussaint @ 3440x1440 with HD Reworked and the Phoenix Lighting mod, everything maxed except Hairworks (off). I'm going to miss RT in Shadow of the Tomb Raider, but honestly I'm at a part in the game where the skybox in Paititi shifts and now it's afternoon, and RT is basically doing nothing but still inducing a performance penalty in the entire area. I'm talking about the part where you have to put on the green indigenous outfit and go save the captured soldiers, etc. 

I can't believe how solid the 1080 Ti is; 3 years old and it runs like the day it was new. The 2080 Ti lasts two weeks and then ****s the bed with Micron memory. Good job, Huang, you made a lot more money to buy new Bugatti Chirons to put on your private Hawaiian island. Maybe you can get a new leather jacket. Just thinking about this knob makes me want to punch him in the face like 20 times in a row. $1,200 before taxes for a glorified 80 card (a 25% bump over the 1080 Ti makes it the 20-series 80 card, and the "2080" a glorified $900-at-launch 70 card, nice rename scam), and NGreedia tries to skimp and save $2 per GB of memory going with Micron. Golf clap. 

Not sure if I want to stay with Team Green. Might go RDNA 2 and pair that with Samsung's upcoming Odyssey G9 panel. I love my AW3418DW though, that's the only thing keeping me in this crappy overpriced Apple emulating ecosystem since NGreed, in their infinite wisdom, killed off 3D Vision last year. 

Three year old GPU vs 2080 Ti turned paperweight: 

https://www.3dmark.com/compare/fs/21991559/fs/21983782

No idea how the combined score is 15% higher with the 1080 Ti.
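For what it's worth, a "15% faster" figure like that is just the ratio of the two sub-scores; a quick sanity check (with made-up score values, since the real ones sit behind the 3DMark link):

```python
def pct_faster(a, b):
    """How much faster score a is than score b, in percent."""
    return (a / b - 1) * 100

# Hypothetical combined scores, not the actual 3DMark results:
old_card, new_card = 11500, 10000
print(f"{pct_faster(old_card, new_card):.0f}% faster")  # → 15% faster
```

A combined score favouring the older card usually points at the physics/CPU-bound portion of the test rather than raw graphics throughput.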


----------



## truehighroller1

Mooncheese said:


> I should have paid $1,300 after taxes for mine so I could have a warranty. I got the 2080 Ti for $750 including a $150 Phanteks Glacier block; granted, it only lasted two weeks, but yeah.
> 
> Anyhow, the 1080 Ti is back in and good lord did it need a repaste. I ran The Witcher 3 at 100% load @ 2025 MHz @ 1.025 V for 15 minutes and the temps didn't break 38C! Still averaging 75-80 FPS in a demanding scene with a lot of foliage in Toussaint @ 3440x1440 with HD Reworked and the Phoenix Lighting mod, everything maxed except Hairworks (off). I'm going to miss RT in Shadow of the Tomb Raider, but honestly I'm at a part in the game where the skybox in Paititi shifts and now it's afternoon, and RT is basically doing nothing but still inducing a performance penalty in the entire area. I'm talking about the part where you have to put on the green indigenous outfit and go save the captured soldiers, etc.
> 
> I can't believe how solid the 1080 Ti is; 3 years old and it runs like the day it was new. The 2080 Ti lasts two weeks and then ****s the bed with Micron memory. Good job, Huang, you made a lot more money to buy new Bugatti Chirons to put on your private Hawaiian island. Maybe you can get a new leather jacket. Just thinking about this knob makes me want to punch him in the face like 20 times in a row. $1,200 before taxes for a glorified 80 card (a 25% bump over the 1080 Ti makes it the 20-series 80 card, and the "2080" a glorified $900-at-launch 70 card, nice rename scam), and NGreedia tries to skimp and save $2 per GB of memory going with Micron. Golf clap.
> 
> Not sure if I want to stay with Team Green. Might go RDNA 2 and pair that with Samsung's upcoming Odyssey G9 panel. I love my AW3418DW though, that's the only thing keeping me in this crappy overpriced Apple emulating ecosystem since NGreed, in their infinite wisdom, killed off 3D Vision last year.
> 
> Three year old GPU vs 2080 Ti turned paperweight:
> 
> https://www.3dmark.com/compare/fs/21991559/fs/21983782
> 
> No idea how the combined score is 15% higher with the 1080 Ti.


I've had my ASUS Dual OC 2080 Ti for a while now with Sammy memory and it's running fine. My 1080 Ti, which I had for god knows how long, is still running solid in my brother's PC too.


----------



## Talon2016

Mooncheese said:


> 2080 Ti FE bios didn't solve the problem. 2080 Ti is going out and 1080 Ti back in. Sad day. I can't believe my 3 year old 1080 Ti is more durable that this piece of **** micron memory 2080 Ti.
> 
> Thinking of dumping NGreed completely with RDNA 2. It was there ******* price gouging that got me in this situation in the first place. Had they priced 80 Ti at $700 like they have going back to Kepler I wouldn't have paid $700 for a used unit without a warranty.
> 
> Honestly they can go **** themselves. Pretty much done with NGreedia. And there's nothing I can do, apparently no-one in the U.S. (or the world) offers GPU repair service. No warranty? Youre ****ed.


Sorry to hear about your loss, that really sucks. Unfortunately you're not gonna like me saying this, but you really should have done more research. Early 2080 Tis built in 2018 were having failure issues related to the memory (even a few Samsung cards failed), possibly down to the way it was soldered or mounted at the factory, or simply early GDDR6 failure. Either way, cards were failing; Nvidia fixed the issue as quietly as possible, and soon after, most cards have been rock solid. That said, buying a 2018 card with absolutely no warranty, no matter the price, was a really bad gamble, since those cards IMO could be ticking time bombs, as you found out in the worst way. Had you purchased a second-hand EVGA card, you would have full manufacturer warranty from the time it was built. This is the reason I will only purchase a used EVGA card. My year-old 2080 Ti FTW3 Hybrid with Samsung memory is rock solid, and I purchased the extended 5-year warranty. 

Did you at least purchase this GPU using PayPal? If so, you effectively have 6 months of warranty through them, since you can return the item for up to 6 months after purchase for issues exactly like this; you can set up a return through PayPal and the seller will have to fully refund you. If you paid cash, well, I think you already know you are out of luck there.


----------



## Mooncheese

Talon2016 said:


> Sorry to hear about your loss, that really sucks. Unfortunately you're not gonna like me saying this, but you really should have done more research. Early 2080 Tis built in 2018 were having failure issues related to the memory (even a few Samsung cards failed), possibly down to the way it was soldered or mounted at the factory, or simply early GDDR6 failure. Either way, cards were failing; Nvidia fixed the issue as quietly as possible, and soon after, most cards have been rock solid. That said, buying a 2018 card with absolutely no warranty, no matter the price, was a really bad gamble, since those cards IMO could be ticking time bombs, as you found out in the worst way. Had you purchased a second-hand EVGA card, you would have full manufacturer warranty from the time it was built. This is the reason I will only purchase a used EVGA card. My year-old 2080 Ti FTW3 Hybrid with Samsung memory is rock solid, and I purchased the extended 5-year warranty.
> 
> Did you at least purchase this GPU using PayPal? If so, you effectively have 6 months of warranty through them, since you can return the item for up to 6 months after purchase for issues exactly like this; you can set up a return through PayPal and the seller will have to fully refund you. If you paid cash, well, I think you already know you are out of luck there.


I was aware of the Micron memory failure issue. The card in question was listed on eBay by a seller with stellar feedback who agreed to terminate the auction to sell locally. I met with him and believe him to be honest; given that he had been using it for a while, I assumed that any kind of memory issue would have manifested under use by now. It was a judgment call, a risk whose consequences I was aware of before making the decision.

To be fair, the card worked fine until I decided to tear apart my loop to deal with a plasticizer problem (EK Duraclear) that surfaced after swapping the 1080 Ti out for the 2080 Ti. The problem could have been as simple as the block not making adequate contact with the memory modules after disassembling to check that the thermal pads were on the memory correctly and to swap the factory Alienware Aurora backplate for EK's Quantum plate, which has no thermal pads covering the memory modules (per the installation manual). The problem could have surfaced after that point; it could be as simple as using an ever so slightly longer screw where a short screw should have been, and thus inadequate pressure on the memory modules, with no way to relieve the heat out of the back side of the PCB through the original back-plate. Or the problem could have been there since the beginning, just waiting to manifest. Or, although I feel this is unlikely, the owner acquired the card, recognized that it was one of the original wave of Micron-memory units (the original BIOS had a release date of Sept. or Oct. 2018) and, not wanting to take a chance with it, prudently decided to sell it. 

Again, having met with the owner, I doubt this to be the case. 

Anyhow, this could be WAY worse. I was looking at 2080 Tis for weeks before settling on this one; I nearly pulled the trigger on an Aorus Waterforce Extreme variant (I actually bid $1,050 for it; the auction ended at $1,180, and new this is a $1,500 variant). Had I won that auction, and had that card failed on day 31 instead of day 30, without a transferable warranty I would be out around $1,200. If I had been hasty and wanted to recoup some money before the Ampere refresh, and sold my 1080 Ti with EK water block for, say, $400 on eBay, I wouldn't have a card to fall back on. Then I'd be out $1,200 with no GPU and no eBay buyer protection beyond 30 days, in the same situation, looking at having to buy a new 2080 right before the Ampere refresh (not wanting to gamble with a used GPU again), or just trying to content myself with the emulators I have on my phone (I have emulators for every 8- and 16-bit system from the 1990s on my phone; they are awesome). 

I'm out $750. I learned a lesson. Honestly, $750 with a water block was just too enticing an offer to refuse. The card worked up until it didn't, so it's not as though the seller sold me a card that didn't work. 

I had a bad feeling about getting the card and it looks as though my intuition about the memory proved to be correct in the end. 

That card ran insanely hot on the back-plate. I have no idea why. My EK-waterblocked 1080 Ti's factory FE back-plate is literally lukewarm to the touch, and that's with thermal pads on the back of the GPU die, the memory, and the VRM / MOSFETs. 

Anyhow, in the grand scheme of things, even if I stay with NGreed I'm not out much more than many of you who bought a 2080 Ti FE new and threw a $150 water block on it ($1,470 by my estimation with 8-10% sales tax). I can still go out and buy a new 3080 for $800, throw a water block on it, and only exceed what you paid for your 2080 Ti by a few hundred dollars, and that's including the $750 lost to the 2080 Ti: $750 + $880 + $150 (lost 2080 Ti + 3080 with 10% sales tax + water block) = $1,780 - $1,470 ($1,320 + $150) = $310 difference. And going by what data we have thus far, the next architecture is shaping up to be 50% faster than current across the product stack, so a 3080 works out to around 25% faster than the 2080 Ti, presumably much more in ray tracing? That's assuming I stay with Team Greed. This entire fiasco has left a bad taste in my mouth something fierce. The whole reason I had to stoop to buying used, offline and local, is that the 2080 Ti, the only upgrade path for someone with a $700 1080 Ti from 2017, is $1,320 after taxes new, before a $150 water block. 
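For the record, that back-of-envelope comparison checks out in a few lines (same figures as above; the $800 3080 price and the flat 10% tax rate are the post's own assumptions, not known prices):

```python
TAX = 0.10                               # assumed 10% sales tax
lost_2080ti = 750                        # sunk cost of the dead used card
new_3080    = round(800 * (1 + TAX))     # hypothetical $800 3080 with tax -> 880
block       = 150                        # water block
my_total    = lost_2080ti + new_3080 + block   # 1780

fe_2080ti   = round(1200 * (1 + TAX))    # 2080 Ti FE at launch with tax -> 1320
their_total = fe_2080ti + block          # 1470

print(my_total - their_total)            # → 310
```

So the "fiasco premium" over someone who bought an FE card new with a block is about $310 under those assumptions.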

I just got tired of waiting, I should have waited a little longer. 

Ah well, hopefully this human-malware-hysteria-induced Asian supply-chain disruption lifts and we get back on track to seeing the Ampere refresh sometime in June-July, which is only 4 months away. 

I can wait 4 months. 

I will not buy another used GPU for the rest of my life. 

Anyone reading this contemplating buying a used 2080 Ti on eBay: DON'T.


----------



## Mooncheese

Maybe this is why NGreedia doesn't have the 2080 Ti for sale anymore; 100% of production is going towards RMA replacement: 

https://www.youtube.com/watch?v=kYIv2mz1fZA
The comments are interesting: "I'm on my 3rd RMA!"


----------



## Talon2016

Mooncheese said:


> I was aware of the micron memory failure issue. The card in question was listed on ebay from a seller with stellar feedback who agreed to terminate the auction to sell locally. [...]
> 
> Ah well, hopefully this human malware hysteria induced Asian supply-chain disruption lifts and we get back on track to seeing Ampere refresh sometime in June-July, which is only 4 months away. [...]
> 
> Anyone reading this contemplating buying a used 2080 Ti on Ebay. DON'T.


I hope it's soon; I want a 3080 Ti or a Titan Ampere. I've never gone Titan, but I might this round, since I'm gaming at 4K 144 Hz and might go a full block/water build next time.


----------



## JustinThyme

Mooncheese said:


> Maybe this is why NGreedia doesn't have the 2080 Ti for sale anymore; 100% of production is going towards RMA replacement:
> 
> https://www.youtube.com/watch?v=kYIv2mz1fZA
> 
> The comments are interesting "I'm on my 3rd RMA!"


Are all of yours FE Nvidia cards with micron?

They are still selling plenty.
https://www.nvidia.com/en-us/shop/geforce/?page=1&limit=9&locale=en-us
https://www.amazon.com/NVIDIA-GeFor...CBMDAQRTMDJ&psc=1&refRID=VC2MMCE4WCBMDAQRTMDJ

Top tiers are fetching a higher price now.


----------



## Mooncheese

JustinThyme said:


> Are all of yours FE Nvidia cards with micron?
> 
> They are still selling plenty.
> https://www.nvidia.com/en-us/shop/geforce/?page=1&limit=9&locale=en-us
> https://www.amazon.com/NVIDIA-GeFor...CBMDAQRTMDJ&psc=1&refRID=VC2MMCE4WCBMDAQRTMDJ
> 
> Top tiers are fetching a higher price now.


"Out of Stock" on the official site is not indicative of selling plenty; it's indicative of a supply-side choke point. Basically they aren't making them anymore (or they are, and 100% of production is going towards RMA replacement, as I stated; the problem appears to be very widespread), and the second link is some scalper off-loading inventory on Amazon. $1,500 before taxes for what is in essence a $700 GPU. 

Yeah, NGreed and Intel: it's great when there isn't competition and they can just run roughshod over the consumer base. We then turn around and reward their monopoly with our continued steadfast allegiance. 

Economics are interesting: if you don't like the price of something, don't buy it, and the price will come down. How people can justify a $1,300 expense for a trivial bump over the 1080 Ti is mind-boggling. The super rich buy this crap, and then the crowd that doesn't have money pouring out of their ears, who normally buys the 80 Ti card every generation for $700, can just bugger off and play the "gamble with a used 2080 Ti" game. 

That was a fun game. 

Thanks for keeping the prices high. 

It must be nice being super rich. 

So basically all the rich idiots who normally buy the Titan X card every generation are now here in this forum. 

NGreed are real smart. 

They basically threw the 80 Ti customer-base under the bus for profit and the super-wealthy obliged!

Guess who isn't going to stay with NGreed any longer?

Looking forward to a $700 GPU some 50% faster than 2080 Ti from AMD by the end of the year. I will put the $500 that would have otherwise gone to 3080 Ti towards pairing that with a Free-Sync panel. 

The logic works like this: both next-gen consoles will have ray tracing facilitated by AMD hardware, so it's axiomatic that PC ports of those games will do RT better natively on AMD hardware. 

Staying with NGreed after they killed off 3D Vision, and then the disgusting price-gouging with the 20 series, where they renamed the entire product stack one GPU higher to essentially "justify" doubling the price (the RTX "2070", replete with a 60-card SKU and no SLI, 100% a $600 60 card at launch; the RTX "2080", only as fast as the previous gen's 80 Ti card, not 25%+ faster like every new 80 card going back to at least Kepler, 100% a $900 70 card at launch; and the "2080 Ti", 100% a $1,300+-after-taxes 80 card)? Yeah, sorry, I'm not a sadomasochist. 

Some of you enjoy getting punched in the face, "thank-you sir, may I have another"

Or you're really rich and $1,500 for a single GPU + water block is basically wallet lint for you. 

I'm not either. 

**** NGreedia. 

Thinking of Jensen Huang in his leather jacket, I swear to god I want to punch him in the face as hard as I can until I'm exhausted. 

"Just buy it"

"It just works"

Yeah, no, it just doesn't ******* work; your profit-oriented manufacturing decisions pretty much determined that. 

But you did save a whopping $20 in memory costs across all 11 GB of memory modules on the 2080 Ti by going with Micron! 

Real good cost / benefit analysis here: "let's double the price of the GPUs; we can rename the entire product stack one GPU higher and they won't be the wiser!" 

Huang: "That's a great idea! I like money. Yeah I like money. I can have another Bugatti Chiron!" 

Peterson: "You can buy a fleet of them!"

Huang: "I like money"

Peterson: "AND, not only can we double our profit, but we can make $20 more per GPU by using Micron memory, who have agreed to sell us memory at $2 less per GB than Samsung!"

Huang: "I like money"

https://youtu.be/y0O7_3o3BrI


----------



## Avacado

Some spend money hooking up their cars, some travel, some spend on clothes or jewelry. $1,300 on a GPU doesn't make you super rich; it makes you employed with an expensive hobby, no different than anyone else. "It must be nice to be super rich" just translates into "I don't make a decent wage and I'm pissed at everyone else who does." I also have to note the irony in that you own one and have such discontent for it.


----------



## JustinThyme

Mooncheese said:


> "Out of Stock" on the official site is not indicative of selling plenty. That's indicative of supply side choke point. Basically they aren't making them anymore (or they are, and 100% of production is going towards RMA replacement as I stated, the problem appears to be very widespread), and the 2nd link is a some scalper who is also off-loading inventory on Amazon. $1500 before taxes for what is in essence a $700 GPU.
> 
> Yeah, NGreed and Intel, it's great when there isn't competition and they can just run rough-shod over the consumer-base. We then turn around and reward their monopoly with our continued steadfast allegiance.
> 
> Economics are interesting, if you don't like the price of something, don't buy it and the price will come down. How people can justify a $1300 expense for a trivial bump up and over 1080 Ti is mind-boggling. The super rich buy this crap, and then the crowd that doesn't have money pouring out of their ears who normally buys the 80 Ti card every generation for $700, they can just bugger off and play the "gamble with a used 2080 Ti game".
> 
> That was a fun game.
> 
> Thanks for keeping the prices high.
> 
> It must be nice being super rich.
> 
> So basically all the rich idiots who normally buy the Titan X card every generation and now here in this forum.
> 
> NGreed are real smart.
> 
> They basically threw the 80 Ti customer-base under the bus for profit and the super-wealthy obliged!
> 
> Guess who isn't going to stay with NGreed any longer?
> 
> Looking forward to a $700 GPU some 50% faster than 2080 Ti from AMD by the end of the year. I will put the $500 that would have otherwise gone to 3080 Ti towards pairing that with a Free-Sync panel.
> 
> The logic works like this. Both next-gen consoles will have ray tracing facilitated by AMD hardware. It's axiomatic that PC ports of said games will do RT better natively with AMD hardware.
> 
> Staying with NGreed after they killed off 3D Vision, then the disgusting price-gouging with the 20 series where they renamed the entire product stack one GPU higher to essentially "justify" doubling the price (RTX "2070", replete with 60-card SKU, no SLI, 100% a $600 60 card at launch; RTX "2080", only as fast as the previous-gen 80 Ti card, not 25%+ faster like every new 80 card in existence going back to at least Kepler, 100% a $900 70 card at launch; and "2080 Ti", 100% a $1300+ after taxes 80 card), yeah, sorry, I'm not a sadomasochist.
> 
> Some of you enjoy getting punched in the face, "thank-you sir, may I have another"
> 
> Or you're really rich and $1500 for a single GPU + water-block is basically wallet lint for you.
> 
> I'm not either.
> 
> **** NGreedia.
> 
> Thinking of Jensen Huang in his leather jacket, I swear to god I want to punch him in the face as hard as I can until I'm exhausted.
> 
> "Just buy it"
> 
> "It just works"
> 
> Yeah no, it just doesn't ******* work, your profit oriented manufacturing decisions pretty much determined that.
> 
> But you did save a whopping $20 in memory costs for all 11 GB of memory modules on the 2080 Ti going with Micron memory!
> 
> Real good cost / benefit analysis here, "let's double the price of the GPU's, we can rename the entire product stack one GPU higher and they won't be the wiser!"
> 
> Huang: "That's a great idea! I like money. Yeah I like money. I can have another Bugatti Chiron!"
> 
> Peterson: "You can buy a fleet of them!"
> 
> Huang: "I like money"
> 
> Peterson: "AND, not only can we double our profit, but we can make $20 more per GPU by using Micron memory, who have agreed to sell us memory at $2 less per GB than Samsung!"
> 
> Huang: "I like money"
> 
> https://youtu.be/y0O7_3o3BrI


You are free to buy what you like. Out of stock means they can't keep up with demand. They are very much still being made and very much still being sold. You always have the choice to go AMD or start up your own brand.

When I see someone posting about 3 RMAs, I personally tend to come to the same conclusion, and it has nothing to do with the manufacturer or the retail chain.
Funniest one lately was IPS backlight bleed. Some fool posts up, "Oh dude, crank your brightness and contrast to max, then turn out all the lights, sit in a pitch-black room, and stare at a static black image to test your monitor for backlight bleed." I'm still laughing over all the fools that did this. That's like saying inflate your 35 psi tires to 150 psi and drive down a road covered in spikes to see if they blow. Personally, I don't sit around in the dark staring at static black images. End result: all IPS manufacturers now post labels on their boxes saying, if you are stupid enough to crank settings to max and sit around in a dark room staring at black images... get off the crack pipe. Or something to that effect.

I don't get why jabs about something low-tier being a POS always come from the cheap seats, blaming those who can afford better (or maybe couldn't, and made their kids starve for a week to buy something they couldn't afford). Nvidia is a publicly traded company, as is AMD. They answer to stockholders, and they do fairly well. I just dumped 100 shares on Feb 20 that I bought at the end of May 2019. Turned $13K into $31K for a capital gain of $18K; yes, I more than doubled my investment in less than a year. Am I loaded, with money falling out of my pockets? Hell no; every cent of that so-called gain just cut my losses on college tuition for one of my kids, pretty much covering dorm and meal plan for the year.

I'm no market player. I just know what to look for, and if I can wrestle up the money to go long for at least a year, I do. That cash came from a payout of my company stock options; the rest went into 401K low-yield mutual funds where they do a 100% match, so it was doubled, then another 2%. It doesn't take a genius, just ordinary people who are not fools and know that one day they are going to need something to live off of, and SS isn't going to cut it. When the last kid is out of school, that's it for me; that's all I work for now. When they are off to their own rat race I'm hanging up my hat, probably having a massive sale on PC gear, and hitting the road to who knows where before I'm too old and really decrepit to enjoy it.

Compare Nvidia's stock to AMD's and Intel's: even Nvidia's low points make a joke out of both AMD and Intel, which stay within a few bucks of each other. AMD would have been a good investment before Ryzen, when it was selling for $20 a share. Now the two of them stay around the same, and for all the TR4 hoopla, Intel's stock is still higher by about $7/share.

They have plenty of these in stock
https://www.nvidia.com/en-us/deep-learning-ai/products/titan-rtx/


----------



## Mooncheese

JustinThyme said:


> You are free to buy what you like. Out of stock means they cant keep up with demand. They are very much still being made and very much still being sold. You always have the choice to go AMD or start up your own brand.
> 
> When I see someone posting about 3 RMAs personally I generally come to the same conclusion and it has nothing to do with the manufacturer or the retail chain.
> Funniest one lately was IPS back light bleed. Some fool posts up "Oh dude, crank your brightness and contrast to max then turn out all the lights and sit in a pitch black room and start at a static black image to test your monitor for blacklight bleed. Im still laughng over all the fools that did this. Thats like saying inflate your 35psi tires to 150 psi and drive down a road with nothing but spikes to see if they blow. Personally I dont sit around in the dark staring at static black images. End result, all IPS manufacturers post labels on their boxes saying if you are stupid enough to crank settings to max and sit around in a dark room staring at black images......get off the crack pipe. Or something to that effect.
> 
> I dont get it why jabs of how something at the low tier being a POS jabs always come from the cheap seats and blame those who can afford better or maybe couldn't and made their kids starve for a week to buy something they couldn't afford. Nvidia is a publicly traded company as is AMD. They answer to stock holders and they do fairly well. I just dumped 100 shares Feb 20 that I bought the end of may 2019. Turned $13K into $31K for a capital gain of $18K. Yes more than doubled my investment in less than a year. Am I loaded with money falling out of my pockets? Hell no, every cent of that so called gain just cut my losses on college tuition on one of my kids, pretty much covered Dorm and meal plan for the year. Im no market player. Just know what to look for and if I can wrestle up the money to go long for at least a year I do. That cash came from a payout of from my company stock options, the rest went into 401K low yield mutual funds where they do 100% match so it was doubled then another 2%. It doesnt take a genius, just ordinary people who are not fools and know one day they are going to need something to live off of and SS isnt going to cut it. When last kid is out of school that's it for me. That's all I work for now. When they are off to their own rat race I'm hanging up my hat, probably having a massive sale on PC gear and hitting the road to who knows where before I'm too old and really decrepit to enjoy it.
> 
> Compare Nvidia Stock to AMD and Intel and even Nvidias low point make a joke out of both AMD and Intel that stay within a few bucks. AMD would have been a good investment before Ryzen when it was selling for $20 a share. Now the two of them stay around the same with for all the TR4 hoopla Intels stock is still higher by about $7/share
> 
> They have plenty of these in stock
> https://www.nvidia.com/en-us/deep-learning-ai/products/titan-rtx/


RDNA 2 will do to Nvidia what Ryzen did to Intel. Bet. 

Anyhow, thanks for the oven advice, I actually discovered an alternative method from TechYesCity that involves a heat-gun, which I just happened to acquire as I intend to replace the EK Duracrap with acrylic and a distribution plate once it finally gets here from Ali Express (China). 






I will try this later on. There's a slim chance that the problem is malformed solder from memory overheating, but if it's the memory modules themselves, re-heating and reflowing the solder won't fix this.

Sorry for the insults, I'm just pissed that this is where the hobby is now, where formerly attainable hardware is now priced at 2x the historical price. 

This 100% is the result of income inequality and a monopoly in the industry. Please don't take this as an insult per se, it's just an objective assessment of where we are with 20 series pricing (see also: Samsung $1k+ Galaxy S20)

AMD is really going to upset NGreedia this year, and at that point, I may want to upgrade to whatever they have up their sleeve because, as I stated, RT will be done much better natively with AMD hardware for the vast majority of titles, which will all be console ports with few exceptions.

And at this point we absolutely must support the competition. It's ******* ridiculous that what amounts to the 80 card costs $1200 before taxes with this generation. That's more insulting than drip drip iterative crap released by Intel on the same process node because there is (was) no competition from AMD. 

Now look at Intel, caught with their pants down, can't wait to see Huang's smug little face prance around on stage in his worn out leather jacket "Just buy it" "starting from $1500" when AMD releases RDNA 2. Watch him and the rest of his cohorts who thought it was a good idea to double the price of their products and throw the 80 Ti enthusiasts under the bus with Turing **** their pants when we all jump ship because having properly killed off 3D Vision and the entire panel industry having adopted VESA 2.0 Free-sync, done natively without the need for a $250 module, they will have no more proprietary gimmicks up their sleeve to justify 2x the asking price for comparable performance from AMD. 

And that control panel, dear god, they ask for $1200 before taxes for the 80 card and they can't even modernize the UI of Nvidia Control Panel? It looks identical to the one I used back in 2011 when I had a pair of 580M's. That searing white panel is like looking into head-lights at night when the rest of Windows 10 offers a Dark Mode and File Explorer and 90% of the OS UI can be set to black / dark. Look at AMD's control panel, complete with WattMan for comparison. 

NGreedia is done, they just don't know it yet. 

Fanboys flame on, I don't give a ****.


----------



## truehighroller1

Mooncheese said:


> RDNA 2 will do to Nvidia what Ryzen did to Intel. Bet.
> 
> Anyhow, thanks for the oven advice, I actually discovered an alternative method from TechYesCity that involves a heat-gun, which I just happened to acquire as I intend to replace the EK Duracrap with acrylic and a distribution plate once it finally gets here from Ali Express (China).
> 
> https://www.youtube.com/watch?v=Vz1UTQW7PC0
> 
> I will try this later on. There's a slim chance that the problem is malformed solder from memory overheating, but if it's the memory modules themselves, re-heating and reflowing the solder won't fix this.
> 
> Sorry for the insults, I'm just pissed that this is where the hobby is now, where formerly attainable hardware is now priced at 2x the historical price.
> 
> This 100% is the result of income inequality and a monopoly in the industry. Please don't take this as an insult per se, it's just an objective assessment of where we are with 20 series pricing (see also: Samsung $1k+ Galaxy S20)
> 
> AMD is really going to upset NGreedia this year, and at that point, I may want to upgrade to whatever they have up their sleeve because, as I stated, RT will be done much better natively with AMD hardware for the vast majority of titles, which will all be console ports with few exceptions.
> 
> And at this point we absolutely must support the competition. It's ******* ridiculous that what amounts to the 80 card costs $1200 before taxes with this generation. That's more insulting than drip drip iterative crap released by Intel on the same process node because there is (was) no competition from AMD.
> 
> Now look at Intel, caught with their pants down, can't wait to see Huang's smug little face prance around on stage in his worn out leather jacket "Just buy it" "starting from $1500" when AMD releases RDNA 2. Watch him and the rest of his cohorts who thought it was a good idea to double the price of their products and throw the 80 Ti enthusiasts under the bus with Turing **** their pants when we all jump ship because having properly killed off 3D Vision and the entire panel industry having adopted VESA 2.0 Free-sync, done natively without the need for a $250 module, they will have no more proprietary gimmicks up their sleeve to justify 2x the asking price for comparable performance from AMD.
> 
> And that control panel, dear god, they ask for $1200 before taxes for the 80 card and they can't even modernize the UI of Nvidia Control Panel? It looks identical to the one I used back in 2011 when I had a pair of 580M's. That searing white panel is like looking into head-lights at night when the rest of Windows 10 offers a Dark Mode and File Explorer and 90% of the OS UI can be set to black / dark. Look at AMD's control panel, complete with WattMan for comparison.
> 
> NGreedia is done, they just don't know it yet.
> 
> Fanboys flame on, I don't give a ****.


Tryin to catch me trolling dirty.


----------



## Shewie

Sorry for breaking the long ongoing conversation.
A week ago I got an RTX 2080 SUPER from AORUS on a quick loan from my friend.
Absolutely a looker; a fantastic, heavy, well-built card.
Installed it, ran some benchmarks and then decided to give it a go with some OC... and then came my utter disappointment.
Knowing from reviews and the community how spectacular that new 16 Gbps Samsung memory die was,
I couldn't get past +1000 MHz with it. The modules on my current 2080 Ti easily go past +1700 MHz without any dedicated cooling on them.

I was, generally speaking, rather disappointed with the card's OC potential, and with temperatures going past 77°C in a very well ventilated case. The AORUS cooling solution offers nothing but an RGB show; sad fact.
The card was rather loud under a casual RDR2 load, generating a lot of heat as well.

At the absolute max stable OC, with the fans (card and case) cranked to absolute max speed, I was only able to score 15 points (fifteen points) more in Time Spy than an OC'd 2080 Gaming X Trio (the same applies to Fire Strike and Superposition, where the gains were negligible).
Of course the card was delivering the advertised specs, but to be frank I was expecting something more out of a SUPER card.

Just my 5 pence.

P.S. Due to the coronavirus situation, I think we can all agree to expect a significant delay for the 3xxx cards, and if so the initial batch will be extremely scarce; saturating the market with the right number of cards is an impossible task given the current situation, especially on the memory and PCB supply side.
Some may see an opportunity here, clearing out stock from the initial batches for personal gain and selling later on eBay at a significant markup.


----------



## 113802

What frequency can most of you run at with 0.950 V?


----------



## JustinThyme

Mooncheese said:


> RDNA 2 will do to Nvidia what Ryzen did to Intel. Bet.
> 
> Anyhow, thanks for the oven advice, I actually discovered an alternative method from TechYesCity that involves a heat-gun, which I just happened to acquire as I intend to replace the EK Duracrap with acrylic and a distribution plate once it finally gets here from Ali Express (China).
> 
> https://www.youtube.com/watch?v=Vz1UTQW7PC0
> 
> I will try this later on. There's a slim chance that the problem is malformed solder from memory overheating, but if it's the memory modules themselves, re-heating and reflowing the solder won't fix this.
> 
> Sorry for the insults, I'm just pissed that this is where the hobby is now, where formerly attainable hardware is now priced at 2x the historical price.
> 
> This 100% is the result of income inequality and a monopoly in the industry. Please don't take this as an insult per se, it's just an objective assessment of where we are with 20 series pricing (see also: Samsung $1k+ Galaxy S20)
> 
> AMD is really going to upset NGreedia this year, and at that point, I may want to upgrade to whatever they have up their sleeve because, as I stated, RT will be done much better natively with AMD hardware for the vast majority of titles, which will all be console ports with few exceptions.
> 
> And at this point we absolutely must support the competition. It's ******* ridiculous that what amounts to the 80 card costs $1200 before taxes with this generation. That's more insulting than drip drip iterative crap released by Intel on the same process node because there is (was) no competition from AMD.
> 
> Now look at Intel, caught with their pants down, can't wait to see Huang's smug little face prance around on stage in his worn out leather jacket "Just buy it" "starting from $1500" when AMD releases RDNA 2. Watch him and the rest of his cohorts who thought it was a good idea to double the price of their products and throw the 80 Ti enthusiasts under the bus with Turing **** their pants when we all jump ship because having properly killed off 3D Vision and the entire panel industry having adopted VESA 2.0 Free-sync, done natively without the need for a $250 module, they will have no more proprietary gimmicks up their sleeve to justify 2x the asking price for comparable performance from AMD.
> 
> And that control panel, dear god, they ask for $1200 before taxes for the 80 card and they can't even modernize the UI of Nvidia Control Panel? It looks identical to the one I used back in 2011 when I had a pair of 580M's. That searing white panel is like looking into head-lights at night when the rest of Windows 10 offers a Dark Mode and File Explorer and 90% of the OS UI can be set to black / dark. Look at AMD's control panel, complete with WattMan for comparison.
> 
> NGreedia is done, they just don't know it yet.
> 
> Fanboys flame on, I don't give a ****.


Nvidia is far from done. Their stock is dropping ATM, as it historically does when the current platform has reached its peak. As for the cost increases, that's all part of supply and demand. You can thank crypto mining for that, buying up anything and everything with Ti at the end and leaving none for the rest of the world. The miners didn't reach for the Titans because of the price spread, even though those are much better for such tasks.

I was watching a documentary of sorts on a crypto mining farm not too long ago: 3 people running it with no AC in a tropical environment, because the cost of cooling would kill the profits. This place was unreal! Huge exhaust fans on the walls just pulling the heat out, a utility bill of around $20K USD monthly, and about 500K sq ft of nothing but mining racks. This is what happens when GPU compute power outperforms CPUs for certain applications. Some of the DOD sites I work in run 25 megawatts of power to supply machines that are mostly GPUs. I can't tell you what they do with them, as I'm not even supposed to know, but even governments are buying them all up. I watched when the market flipped: I bought my pair of 1080 Tis for less than $700 each new, and a month later they were going for $1100, almost double. If you want to get back at someone for the cost hikes, find your local crypto miner and slap them so hard the picture on their mother's driver's license starts crying.


----------



## cg4200

*got kinda lucky*

I had an EVGA 2080 Ti non-A chip with Micron memory; it OC'd okay.
It was a good card, no problems except hitting the power limit, and there was no way to flash it.
I sold it on eBay just so I don't have to meet up, and we are both covered.
Bought an Anniversary Edition 5700 XT on eBay that boosted up to 2100 with the loud blower, so I bought a water block and was gaming all day at 2150... but I did not like the broken drivers limiting OCing; I had to go back to a 19.something driver.
Anyway, I bought a 65" LG which has G-Sync, so I bought a used Zotac Gaming for $790.00 (crappy eBay now charges taxes in my state).
It smelled like a non-smoker's card; once installed, it would hit 2100, so I ran it for a week to make sure it had no issues. This one has Samsung memory. Bought a water block and finished water cooling; I only have a dual 360 for it, downsized from a Thermaltake case with a 480 plus a 360.
And I got lucky 2 times in a row with good silicon, about time! This is my Fire Strike: https://www.3dmark.com/3dm/44533645?
I did flash to the Galax BIOS. Great reading through a lot of the pages lol


----------



## Shewie

cg4200, post scores for Superposition, Time Spy, Fire Strike Extreme, and Unigine Heaven.


----------



## z390e

A few days ago, Mooncheese, you were all happy, and now your card has crapped the bed and it's "F Nvidia".

What you should be saying is "F my own lack of research; I stupidly bought a card the vendor knew had a crappy component, one they even replaced going forward".

Don't let the door to AMD burn you on the way in with a "blame everyone but myself" attitude like that.


----------



## kithylin

WannaBeOCer said:


> What frequency can most of you run at with 0.950 V?


The way modern video cards work, that question is not really answerable by anyone else. What someone else gets at that voltage may not be possible with your card, because modern cards run different clocks at different core temperatures even when the voltage is the same, and they may have a different card from yours with different thermals. Temperature is everything with modern video cards.
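That behavior can be sketched with a toy model: GPU Boost steps the clock down a bin as the core crosses certain temperatures, so two cards at the same voltage land on different frequencies. The thresholds and the 15 MHz bin size below are illustrative stand-ins, not published Nvidia figures.

```python
# Toy model of temperature-dependent boost bins (illustrative numbers only):
# the effective clock drops one ~15 MHz bin at each temperature threshold,
# so identical voltage does not imply identical frequency across cards.

def boost_clock(requested_mhz: int, temp_c: float,
                thresholds=(40, 50, 58, 65), step_mhz: int = 15) -> int:
    """Return the effective clock after temperature-based bin drops."""
    drops = sum(1 for t in thresholds if temp_c >= t)
    return requested_mhz - drops * step_mhz

# A water-cooled card sitting at 38 °C keeps the full clock...
print(boost_clock(2100, 38))   # 2100
# ...while an air-cooled card at 66 °C has lost four bins.
print(boost_clock(2100, 66))   # 2040
```

This is why the thread's water-cooled cards holding sub-40°C report flat clocks while air-cooled cards see them sag under load.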


----------



## 113802

kithylin said:


> The way modern video cards work, that question is not really answerable by anyone else. What someone else gets at that voltage may not be possible with your card, because modern cards run different clocks at different core temperatures even when the voltage is the same, and they may have a different card from yours with different thermals. Temperature is everything with modern video cards.


I should have said FE users with water blocks. I'm trying to find the average of what other users are running. I can run my card at 2055 MHz/2075 MHz with 0.950 V. I understand how modern video cards work.

At 0.950 V the card uses around 225 W and stays below 40°C with the fans at 900 RPM, while 2100 MHz requires 1.012 V, which uses around 300 W and hits 42°C when gaming, which drops the frequency down to 2085 MHz.
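As a rough cross-check of those two operating points, first-order dynamic-power scaling (P ∝ f·V²) can be applied; real silicon adds leakage that grows with voltage and temperature, so this kind of estimate comes in below the observed ~300 W.

```python
# First-order dynamic-power scaling, P ∝ f * V^2, applied to the figures
# above (225 W at 2055 MHz / 0.950 V vs ~300 W at 2100 MHz / 1.012 V).
# Leakage current is ignored, which is why the estimate lands low.

def scale_power(p0: float, f0: float, f1: float, v0: float, v1: float) -> float:
    """Scale a measured power p0 from operating point (f0, v0) to (f1, v1)."""
    return p0 * (f1 / f0) * (v1 / v0) ** 2

est = scale_power(225, 2055, 2100, 0.950, 1.012)
print(round(est))  # 261 -- versus ~300 W observed, the gap being leakage
```

The ~40 W shortfall is a decent illustration of why small voltage bumps cost disproportionately more power on these cards.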


----------



## JustinThyme

WannaBeOCer said:


> I should have said FE users with water blocks. I'm trying to find the average of what other users are running. I can run my card at 2055 MHz/2075 MHz with 0.950 V. I understand how modern video cards work.
> 
> At 0.950 V the card uses around 225 W and stays below 40°C with the fans at 900 RPM, while 2100 MHz requires 1.012 V, which uses around 300 W and hits 42°C when gaming, which drops the frequency down to 2085 MHz.


I still can't answer that question. Mine get clocked as high as they will go without hitting the power limit; I keep them just under that. Frequency and voltage depend on what I'm doing. They idle down around 350 and turbo out to 2160 peak, although not that high all that often. Generally speaking, ultra settings on anything I do won't push them much past 1900 or so. I don't watch power consumption or cooling, as I have more of both than I'd ever need. Ever since I went to HK blocks with passive backplates, my GPU cores don't go past 40°C and idle darn near at liquid temp, maybe +1°C.


----------



## 113802

JustinThyme said:


> I still can't answer that question. Mine get clocked as high as they will go without hitting the power limit; I keep them just under that. Frequency and voltage depend on what I'm doing. They idle down around 350 and turbo out to 2160 peak, although not that high all that often. Generally speaking, ultra settings on anything I do won't push them much past 1900 or so. I don't watch power consumption or cooling, as I have more of both than I'd ever need. Ever since I went to HK blocks with passive backplates, my GPU cores don't go past 40°C and idle darn near at liquid temp, maybe +1°C.


If you have time, set your max voltage to 0.950 V and raise the core clock until you find the max stable clock you can run at that voltage.
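That procedure is essentially a one-dimensional search. Here is a minimal sketch of the loop, with `is_stable()` as a stub standing in for a real stress run (a Time Spy or game loop in practice); the step size mirrors the ~15 MHz boost bins.

```python
# Sketch of "lock 0.950 V, raise the clock until unstable, back off one step".
# is_stable() is a stand-in for an actual stress test at the locked voltage.

def find_max_stable(start_mhz: int, is_stable, step: int = 15,
                    ceiling: int = 2300) -> int:
    """Raise the core clock one bin at a time; stop before the first failure."""
    clock = start_mhz
    while clock + step <= ceiling and is_stable(clock + step):
        clock += step
    return clock

# Stub example: pretend everything up to 2055 MHz passes at 0.950 V.
best = find_max_stable(1900, lambda mhz: mhz <= 2055)
print(best)  # 2050 -- the highest 15 MHz step that still passed
```

Each candidate clock should really soak for a while under load, since instability at a fixed voltage often only shows up after the core warms.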


----------



## JustinThyme

WannaBeOCer said:


> If you have time, set your max voltage to 0.950 V and raise the core clock until you find the max stable clock you can run at that voltage.



Oh, I know how to; I have no reason to, was my point that got turned into a garbled post. I go for max OC and let her rip and self-regulate on Vcore and frequencies, or lock in turbo when benching.


----------



## jura11

WannaBeOCer said:


> I should have said FE users with water blocks. I'm trying to find the average of what other users are running. I can run my card at 2055 MHz/2075 MHz with 0.950 V. I understand how modern video cards work.
> 
> At 0.950 V the card uses around 225 W and stays below 40°C with the fans at 900 RPM, while 2100 MHz requires 1.012 V, which uses around 300 W and hits 42°C when gaming, which drops the frequency down to 2085 MHz.


Hi there,

I can do a few tests later on at 0.950 V or so and see what OC I can achieve at that voltage.

For reference, at 1.093-1.094 V my Asus RTX 2080 Ti Strix will do 2205-2220 MHz with the GPU pulling in excess of 360-366 W I think, and temperatures still stay at 38-40°C; in lower ambient temperatures I can keep the GPU at 36-38°C constantly. I usually run a 2160-2175 MHz OC, though, which is more than enough for gaming, because 2205-2220 MHz is only stable in some games, although it is stable in every benchmark I have tried. Wattage I think is pretty much the same, and there are no issues with temperatures: I'm running 4x 360 mm radiators plus a MO-RA3 360 with fans at 650-750 RPM, and on the MO-RA3 I'm running P12 PWM fans at 1000-1200 RPM, but you literally won't hear them.

I remember a friend has an EVOLV ATX with a single 360 mm radiator which cools an 8086K at a 5.2 GHz OC plus an RTX 2080 Ti, and he runs the fans there at literally just 800-850 RPM with the GPU at a 2145 MHz OC on the Galax 380 W BIOS; his temperatures are in the 40s (42-45°C).

How many radiators do you have in your case?

Hope this helps.

Thanks, Jura


----------



## sultanofswing

Is anyone having issues with driver 442.50? I have a G-Sync monitor, and even with G-Sync turned off I keep getting black screens that flash in games.
A few others have reported it on Nvidia's forums as well.


----------



## J7SC

sultanofswing said:


> Is anyone having issues with driver 442.50? I have a Gsync Monitor and even with Gsync turned off I keep getting black screens that flash in games.
> Few others have reported it on Nvidia's forums as well.


 
...the only issue I noticed so far with 442.50 (on 2x Aorus w-c 2080 Ti, 4K DSP 75 MHz) is an ever so slight loss of bench performance. Everything else with it seems to work fine.


----------



## Hiikeri

Zotac with Microns here since the 2080 Ti release in 10/2018, running +1000 MHz (16 Gbps) on the VRAM and the Galax 380W BIOS since 11/2018, all on stock air cooling.

No problems at all here.


----------



## sultanofswing

Got the Bykski block on the Kingpin, Still testing ambient cooling out.


----------



## trivium nate

so these are my specs

Thermaltake P3 White//AMD RYZEN 7 3800X 8-Core 3.9 GHz (4.5 GHz Turbo)
Corsair H100i-PRO//GIGABYTE GA-AX370-Gaming K5 MB
EVGA RTX-2080TI (11GB) 1TB-SSD//8TBHDD(x2)
G.SKILL TridentZ Series 32GB//1000 Watt Corsair PSU
65"TCL-4KHDR-TV//Windows 10-(X64)

using BIOS F40; my GPU is the 11G-P4-2383-KB, running the latest drivers

Now that I've updated my BIOS, upgraded my CPU from a 1700X to a 3800X, set the BIOS back to factory defaults (not an overclocker), and done a fresh install of Windows, how come when I boot my PC the GPU fan or fans start up pretty loud for a little bit, until my motherboard RGB turns on? EVGA Precision and Nvidia Control Panel are also set to defaults, just wondering. Other than the fans starting up loud when I start my PC, everything works fine and I have no issues. And yes, all of the wiring is plugged in.
JW
TY


----------



## EarlZ

I currently have a Gigabyte 2080 Ti Gaming OC and I am looking at getting the Accelero Xtreme IV Rev 2; would it be fair to expect around 20°C better temps? My ambient is 29-33°C depending on the time of day, and my card can easily hit 85°C in some games running at 1950 MHz.
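One back-of-envelope way to frame a question like this: the core-to-ambient delta is roughly board power times the cooler's effective thermal resistance (°C/W). All numbers below are illustrative guesses (the board power especially), not measured figures for this card or the Accelero.

```python
# Rough sanity check on a hoped-for 20 °C drop. delta_T ≈ P * R_theta,
# where R_theta is the cooler's effective thermal resistance in °C/W.
# power_w is an assumed figure, not a measurement of this card.

power_w = 260                            # assumed board power at ~1950 MHz
ambient_c = 31                           # middle of the stated 29-33 °C range
stock_delta = 85 - ambient_c             # observed core-to-ambient delta, °C

r_stock = stock_delta / power_w          # implied stock resistance, °C/W
r_needed = (stock_delta - 20) / power_w  # resistance needed for a 20 °C drop

print(round(r_stock, 3), round(r_needed, 3))  # 0.208 0.131
```

In other words, a 20°C improvement would require the replacement cooler to cut the effective thermal resistance by more than a third at the same power draw, which is a tall order for an air cooler swap.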


----------



## JustinThyme

trivium nate said:


> so these are my specs
> 
> Thermaltake P3 White//AMD RYZEN 7 3800X 8-Core 3.9 GHz (4.5 GHz Turbo)
> Corsair H100i-PRO//GIGABYTE GA-AX370-Gaming K5 MB
> EVGA RTX-2080TI (11GB) 1TB-SSD//8TBHDD(x2)
> G.SKILL TridentZ Series 32GB//1000 Watt Corsair PSU
> 65"TCL-4KHDR-TV//Windows 10-(X64)
> 
> using BIOS F40; my GPU is the 11G-P4-2383-KB, running the latest drivers
> 
> Now that I've updated my BIOS, upgraded my CPU from a 1700X to a 3800X, set the BIOS back to factory defaults (not an overclocker), and done a fresh install of Windows, how come when I boot my PC the GPU fan or fans start up pretty loud for a little bit, until my motherboard RGB turns on? EVGA Precision and Nvidia Control Panel are also set to defaults, just wondering. Other than the fans starting up loud when I start my PC, everything works fine and I have no issues. And yes, all of the wiring is plugged in.
> JW
> TY


Not out of the question. I have water blocks on mine and I get 2 seconds of fan blast at boot, but that's because I have them set to boost at start so they aren't slowly spinning up from a dead stop. You may want to check whether there are any boost-on-start options in your fan controllers.


----------



## trivium nate

Okay then


----------



## trivium nate

I don't see that anywhere in my stuff: BIOS, EVGA Precision, or Corsair iCUE.


----------



## JustinThyme

trivium nate said:


> I dont see that anywhere in my stuff bios or evga precision or corsair icu



Could be the BIOS doing it without your permission. If it's too bothersome you can always roll back. I'm used to the 2-second rule.
I don't have anything connected to the MOBO except the output of the Aquaero for speed feedback, because that little tick-box to ignore the CPU fan speed doesn't work; I can't boot without something there. I had a single fan connected, but that's even more annoying when it ramps up every time the CPU spikes while launching something. Up and down and up and up and down. That's why I use the Aquaero even though this high-end MOBO is supposed to handle all that. Bool sheet!! Unless I set a curve so wide I can fit a fat lady's a$$ in there.


----------



## sultanofswing

Still doing testing on Ambient cooling.

All I will try for tonight.


----------



## GraphicsWhore

EarlZ said:


> I currently have a Gigabyte 2080 Ti Gaming OC and I am looking at getting the Accelero Xtreme IV Rev 2; would it be fair to expect around 20°C better temps?


No.


----------



## MrTOOSHORT

Have to bench in the morning after work to take advantage of the weather!


----------



## J7SC

MrTOOSHORT said:


> Have to bench in the morning after work to take advantage of the weather!


 
...technically, that's still 'ambient cooling'. Not quite as good as Triton, mind you, with frozen nitrogen as a surface. Edmonton is closer, though.



Spoiler


----------



## MrTOOSHORT

J7SC said:


> ...technically, that's still 'ambient cooling'  Not quite as good as Triton, mind you, w/ frozen nitrogen as a surface. Edmonton is closer, though
> 
> 
> 
> Spoiler


----------



## JustinThyme

MrTOOSHORT said:


> Have to bench in the morning after work to take advantage of the weather!


brrrrrrrrrrr


----------



## Mooncheese

Update: 

I contacted TecLab, yes that TecLab, the Brazilian tech outfit who successfully transplanted 2080S memory onto a 2080 Ti, to see if they could help replace the fouled Micron memory on my failed 2080 Ti. Ronaldo, extremely nice, agreed to help, but we couldn't get replacement GDDR6 anywhere. I spent about a week looking high and low at repair options before giving up and just listing the 2080 Ti on eBay as "broken, for parts", explaining the failed-memory situation, and within one day someone made a $420 best offer (Buy It Now was $499) which I could not refuse. The very next morning there was an auction for an EVGA 2080 Ti XC Ultra from August 2019 with a transferable warranty, Buy It Now $849, that I won with an $800 best offer. I contacted Phanteks and ordered the exact thermal pads to be used with their Glacier block, as I believe the memory failure happened because Micron memory is sensitive to "warm" temperatures and there may have been inadequate contact / pressure between the block and the memory modules: not having the correct thermal pads on hand, I had simply re-used the original ones (I did do a visual inspection to ensure contact with the block, though). The card should ship out Monday and I hope to get it before next weekend.

In the meantime I'm playing games that run exceedingly well with the 1080 Ti @ 3440x1440, currently that's Prey, Redout, Devil May Cry 5 and sometimes Mafia 3 (although I should be saving Mafia 3 for the 2080 Ti). I've put SOTTR and The Witcher 3 on the back-burner for now. 

Hoping the replacement card has Samsung GDDR6 and does the same or better on the core. I don't know how good 2100 MHz @ default voltage @ 45C with the original card is in the grand scheme of things (90% Confidence Rating in OC Scanner), but considering that some here are at 2200 MHz+, I'm assuming that's average to above average. 

Anyhow: $800 with a transferable warranty for a reference PCB, no new water block needed, and $400 recouped from the failed 2080 Ti makes this a $400 boo-boo in the end. I'm OK with that, I guess. 

I'm just wondering if I should have waited for the 3080 Ti, but honestly there's no telling how Asian supply-chain disruption will impact the planned Q4 release window. I intend to sell my 1080 Ti this time around once I have the 2080 Ti up and running for at least 2-3 weeks, to be somewhat safe. I'd like to wait longer than that, but I don't want to get too close to the Ampere refresh, because I imagine post-GTC Ampere excitement may diminish demand for the 1080 Ti a bit. 

Hoping to get $400 for the 1080 Ti with a $150 EK waterblock in near-pristine condition; that will make the upgrade much less painful in the end. If the 3080 Ti rolls around sometime in Q1 2021 because COVID-19 hysteria persists, I will buy again and keep the 2080 Ti as back-up in case there are issues like we saw with the Micron memory failures on early TU102 and I need to RMA. 

Thing is, I'm now seeing a CPU bottleneck in some titles with the 2080 Ti @ 3440x1440 and an 8700K @ 5.0 GHz actual (0 AVX offset). In SOTTR with DLSS on / RT off I'm seeing 90% GPU utilization in Paititi with DX12! How?! AC: Odyssey is bottlenecked pretty hard in Athens. I imagine a completely new build may be in order in 2021 with the 3080 Ti. It will be like upgrading to the 1080 Ti on a 4930K all over again, where I was only really able to appreciate the GPU after upgrading to the 8700K. 

Looking forward to seeing who has the fastest single-core speed in 2021, and whether the next-gen consoles will force greater multi-core optimization and make more cores relevant in gaming. 

Anyhow, thanks for reading, figured I'd let everyone know that I'm still lurking around here.


----------



## happyluckbox

Just bought a Founders Edition 2080 Ti. Do I still need to flash the BIOS these days, or do they come from the factory with a higher power limit now?

I just read the OP and it mentions not being able to flash the higher BIOS on cards shipped in the latter half of 2019. Can somebody confirm whether this is the case?


----------



## sultanofswing

happyluckbox said:


> Just bought a founders edition 2080ti. Do I still need to flash bios these days or do they come from the factory with a higher power limit now?
> 
> I just read the op and it mentions not being able to flash the higher bios on cards shipped in the later half of 2019. Can somebody confirm if this is the case?


If the card was made in the latter half of 2019, it will have a different BIOS chip than the cards made before then, so yes, that has been confirmed.
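For anyone new to the process, the general cross-flash workflow on the older (pre-2H2019) BIOS chips looks roughly like the sketch below. This is a hedged outline, not vendor instructions: `nvflash` is NVIDIA's board flashing tool, the ROM filename is a placeholder, and a higher power-limit BIOS only makes sense on a matching reference-PCB card. Always keep the backup.

```shell
# 1. Back up the card's current VBIOS before anything else.
nvflash --save backup.rom

# 2. Flash the higher power-limit ROM (placeholder filename).
#    The -6 switch overrides the PCI subsystem ID mismatch check
#    that a cross-vendor ROM will otherwise trigger.
nvflash -6 higher_power_limit.rom

# 3. Reboot, then confirm the new limit took.
nvidia-smi --query-gpu=power.limit,power.max_limit --format=csv
```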


----------



## happyluckbox

sultanofswing said:


> If the card was made in the later half of 2019 then it will have a different BIOS chip than the cards that were made before then so yes it has been confirmed.


Is there a way to check when the card was made? Also, if I do have the new chip, am I stuck with the lower power-limit BIOS until one of the AIBs comes out with a higher-wattage BIOS using the new chip?


----------



## cosmomobay

Hi all, I have been a member since 2013 but haven't posted much. I just finished upgrading my old rig to an EVGA RTX 2080 Ti, vertically mounted and water cooled.


----------



## Talon2016

Mooncheese said:


> https://youtu.be/UqlKrGFmxKY
> 
> Update:
> 
> I contacted TecLab, yes that Teclab, the Brazilian tech associates who successfully transplanted 2080S memory onto a 2080 Ti to see if they could help replace the fouled Micron memory on my failed 2080 Ti. Ronaldo, extremely nice, agreed to help but we can't seem to get replacement GDDR6 anywhere. I spent about a week looking high and low at repair options until giving up and just listing the 2080 Ti on ebay as "broken, for parts", explaining the failed memory situation and within one day someone offered a $420 best offer (Buy it now was $499) which I could not refuse. That very next morning there was an auction for an EVGA 2080 Ti XC Ultra from August 2019 with a transferable warranty "Buy it now": $849, that I won with an $800 best offer. I contacted Phanteks and ordered the exact thermal pads to be used in conjunction with their glacier block as I believe the memory failure was because Micron memory is sensitive to "warm" temperature and there may have been inadequate contact / pressure between the block and the memory modules as not having the correct thermal pads on hand I simply re-used the original ones (I did do a visual inspection to ensure contact with the block though). The card should ship out next week Monday and I hope to get it before next weekend.
> 
> In the meantime I'm playing games that run exceedingly well with the 1080 Ti @ 3440x1440, currently that's Prey, Redout, Devil May Cry 5 and sometimes Mafia 3 (although I should be saving Mafia 3 for the 2080 Ti). I've put SOTTR and The Witcher 3 on the back-burner for now.
> 
> Hoping the replacement card has Samsung GDDR6 and does the same or better on the core. I don't know how good 2100 Mhz @ default voltage @ 45C with the original card is in the grand scheme of things (90% Confidence Rating in OC Scanner) but considering that some here are like at 2200 MHz+ I'm assuming that that's average to above average. Hoping the replacement does the same or better.
> 
> Anyhow, $800 with a transferable warranty for reference PCB without having to buy a new water block and recouping $400 from the failed 2080 Ti, making this a $400 boo boo in the end, I'm ok with that I guess.
> 
> I'm just wondering if I should have waited for 3080 Ti, but honestly there's no telling how Asian supply chain disruption will impact planned Q4 release window. I intend to sell my 1080 Ti this time around once I have the 2080 Ti up and running for at least 2-3 weeks to be somewhat safe. I kind of want to wait longer than that but I don't want to wait too close to Ampere refresh because I imagine Ampere excitement post GTC may diminish demand for 1080 Ti a bit.
> 
> Hoping to get $400 for the 1080 Ti with a $150 EK waterblock in near pristine condition. That will make the upgrade much less painful in the end. 3080 Ti rolls around some time in Q1 2021 if COVID-19 hysteria persists and I will buy again and keep the 2080 Ti as back-up in case there are issues again like we saw with the Micron memory failure with early TU-102 and I need to RMA.
> 
> Thing is, I'm seeing a CPU bottleneck in some titles now with 2080 Ti @ 3440x1440 with an 8700k @ 5.0 GHz actual (0 AVX). SOTTR with DLSS on / RT off I'm seeing 90% GPU utilization in Paititi with DX12! How?! AC: Odyssey, bottlenecked pretty hard here in Athens. I imagine a completely new build may be in order in 2021 with 3080 Ti. It will be like when I upgraded to 1080 Ti with 4930k all over again, where I was only really able to appreciate the GPU upgrading to 8700k.
> 
> Looking forward to seeing who has the fastest single core speed in 2021 or whether or not the next-gen consoles are going to force greater multi-core optimization and make more cores relevant in gaming.
> 
> Anyhow, thanks for reading, figured I'd let everyone know that I'm still lurking around here.


Glad you were able to figure out a workable solution. I figured that broken 2080 Ti would recoup some good cash. Even if it doesn't have Samsung memory, the later built RTX cards are reliable and they fixed whatever initial issue they were having, even on the Micron stuff. I would expect it has a fair chance at having Samsung memory though. Make sure the previous owner unregisters the card with EVGA and register it yourself on EVGA website and you're good to go for the remainder of the warranty.


----------



## jura11

Hi @Mooncheese

Finding Samsung GDDR6 memory modules is very hard; maybe you can check Alibaba or AliExpress, where in theory you can find them much more easily than on eBay. 

Earlier RTX 2080Tis were equipped with Micron memory, much like the GTX 1080 Ti, which used almost exclusively Micron modules, though you can find GTX 1080 Tis with Hynix or Samsung, and those have usually been the best for OC. My GTX 1080 Ti wouldn't do more than +400 MHz on VRAM, and that's with Micron memory. 

I've built a few PCs with RTX 2080Tis; 4 of them ran cards with Micron memory, and every one of those failed or started artifacting at stock settings after 6 months or so. 

The other builds run RTX 2080Tis with Samsung memory and have no issues; they're running strong. A few of them have good OCs in the region of 2100-2145 MHz, while some are really poor OC'ers hahaha, like they won't do more than 2085 MHz. 

If you are buying a used RTX 2080Ti, get one with a warranty if possible; EVGA and many others will allow you to transfer the warranty. 

I have tried a few EVGA RTX 2080Tis; most of them would do 2100 MHz. The FTW3 I tried or tested wouldn't do more than 2130 MHz, and that was at a literally cold ambient temperature (10°C) where the GPU wouldn't pass 34°C in gaming or benchmarks. 

Yup, I can confirm that in Paititi at 3440x1440 on an RTX 2080Ti Strix I'm seeing low GPU utilisation too, and similarly in AC: Odyssey, and I'm running a 5960X with a 4.7 GHz OC. You can also find poor utilisation in some Witcher 3 scenes or missions where GPU usage won't go beyond 80%. 

Hope this helps and good luck mate 

Thanks, Jura


----------



## z390e

Mooncheese said:


> Hoping to get $400 for the 1080 Ti with a $150 EK waterblock in near pristine condition.


Seems like a hell of a deal, let me know when you post it lol. Good to hear you have gotten some resolution on this and I would again suggest not burning the NV bridge just because of one crappy card that even they acknowledged.


----------



## e64462

Just bought a hydro copper 2080 ti from evga. I flipped through the included documentation, and did some google searches but couldn't find an answer to my question.

How should I prepare this waterblock for installation? Vinegar & Distilled Water or is all the flux removed prior to shipping?


----------



## J7SC

e64462 said:


> Just bought a hydro copper 2080 ti from evga. I flipped through the included documentation, and did some google searches but couldn't find an answer to my question.
> 
> How should I prepare this waterblock for installation? Vinegar & Distilled Water or is all the flux removed prior to shipping?


 
I typically wipe water-block metal contact areas off with a bit of isopropyl alcohol (99%) if there was either adhesive or, more likely, my own greasy fingerprints.


----------



## e64462

Hi J7SC,

Thanks for the reply. The waterblock is already installed, so the contact areas aren't my direct concern. I'm more worried about whether I should try to rinse out the water block at all before installation and if so, what is recommended.

Thanks again!


----------



## J7SC

e64462 said:


> Hi J7SC,
> 
> Thanks for the reply. The waterblock is already installed, so the contact areas aren't my direct concern. I'm more worried about whether I should try to rinse out the water block at all before installation and if so, what is recommended.
> 
> Thanks again!


 
Oh, I see  ....I always flush new blocks (and for that matter rads) several times over with just distilled water...bathtubs are great for that


----------



## Mooncheese

happyluckbox said:


> Is there a way to check when the card was made? Also, if I do have the new chip, am I stuck with the lower power limit bios until one of the AIB's come out with a higher Wattage bios using the new chip?


Unless the BIOS was flashed (and I could be mistaken), the "Release Date" in GPU-Z either indicates the date the VBIOS was flashed onto the hardware, giving an approximate date of manufacture, or the date that VBIOS version was created, in which case it may apply to cards made well after that point. Not sure which. You should also be able to tell from the S/N on some cards, with certain digits denoting the month and year of manufacture. Maybe someone can chime in and elaborate on this. 
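On the GPU-Z point above: one low-effort cross-check (a sketch, assuming a working NVIDIA driver install, whose bundled `nvidia-smi` utility can report the VBIOS version) is to read the VBIOS string and compare it against a public ROM database. The string doesn't encode a manufacture date itself, but it narrows down which BIOS revision era the card belongs to.

```shell
# Report board name and VBIOS version; output is CSV, values vary per card.
nvidia-smi --query-gpu=name,vbios_version --format=csv
```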



cosmomobay said:


> Hi All, I have being a member since 2013. I haven't posted much. I have just finish upgrading my old rig, upgraded to EVGA RTX 2080TI Vertical installed Water Cooled.


Hi! Personally I like the way GPUs are traditionally mounted, especially if you have a nickel back-plate with just the RGB spilling out of the side. But to each their own. 



Talon2016 said:


> Glad you were able to figure out a workable solution. I figured that broken 2080 Ti would recoup some good cash. Even if it doesn't have Samsung memory, the later built RTX cards are reliable and they fixed whatever initial issue they were having, even on the Micron stuff. I would expect it has a fair chance at having Samsung memory though. Make sure the previous owner unregisters the card with EVGA and register it yourself on EVGA website and you're good to go for the remainder of the warranty.


Thanks for the advice; getting the warranty sorted out will be high on my priority list next week. I can't believe I sold it for $400, to be honest. The thing is, I have no idea how the buyer is going to fix it, unless they're taking GDDR6 off donor cards, because I can't find GDDR6 anywhere, even AliExpress. 



jura11 said:


> Hi @Mooncheese
> 
> Finding Samsung GDDR6 memory modules its very hard,maybe you can try check Alibaba or Aliexpress where in theory you can find them much easier than on eBay
> 
> Earlier RTX 2080Ti have been equipped with Micron memories like almost every GTX1080Ti which has used only Micron memory modules but you can find GTX1080Ti with Hynix or Samsung and usually these have been best for OC, my GTX1080Ti wouldn't do more than +400MHz on VRAM that's with Micron memories
> 
> I built few PC's with RTX 2080Ti, 4 of them have run RTX 2080Ti with Micron memories and every of RTX 2080Ti with Micron memories failed or started artifact at stock setting after 6 months or so
> 
> Other builds running which I built running RTX 2080Ti with Samsung memory and no issues, they're running strong and no issues, few of them have good OC in region of 2100-2145MHz, some are really poor OC'ers hahaha like they won't do more than 2085MHz
> 
> If you are buying used RTX 2080Ti get with warranty if its possible, EVGA and many other will allow you to transfer warranty
> 
> I have tried few EVGA RTX 2080Ti, most of them would do 2100MHz, FTW3 which I have tried or tested wouldn't do more than 2130MHz at literally cold ambient temperature (10°C) and GPU wouldn't pass 34°C in gaming or benchmarks
> 
> Yup I can confirm in Paititi at 3440x1440 on RTX 2080Ti Strix I'm seeing too low GPU utilisation, similarly too in AC: Odyssey but I'm running 5960x with 4.7GHz OC, similarly you can find poor utilisation in some Witcher 3 scenes or mission where GPU usage wouldn't go beyond 80%
> 
> Hope this helps and good luck mate
> 
> Hope this helps
> 
> Thanks, Jura


Thanks for the kind words. Yeah, I'm really hoping this one has Samsung memory, but if not, at least it will have roughly 2 years of the transferable warranty remaining. It's interesting how all of your units with Micron memory failed; how can people argue that it's not the Micron memory? I've read in a few places that it's not the memory per se but the memory controller. I think it's most likely the memory failing due to heat, because my card failed once I replaced the factory Alienware Aurora back-plate (which had thermal pads connecting the back side of the memory to said back-plate) with the EK Quantum back-plate, which, per the instructions, did not have pads there. 

What is interesting is that the average core temp dropped from 50-51C to 43C! In my naivete I assumed the slightly greater mass of the EK back-plate, along with removing the filter from the rear 140mm intake fan, was behind the 7C drop in GPU core. In retrospect, I believe the memory modules were never making adequate contact with the block: the previous owner had discarded the Phanteks thermal pads that go over the memory (or acquired the block without them), and I made the mistake of re-using the factory Alienware Aurora cooler pads, which were too thin. This entire time the only way the modules were shedding heat was through the pads on the back side of the PCB, heat-soaking the thin back-plate and then likely passing through the back side of the GPU core down into the water-block. I was quite surprised the EK Quantum block dropped the core temp from 51C to 43C; fresh Gelid GC Extreme was used in both cases and the screws were adequately tightened both times. It didn't even dawn on me that the drop in core temp was because the memory modules were no longer dumping their heat through the back-plate and the GPU core, thanks to inadequate contact with the block. 

My advice to anyone reading this: if you ever acquire a used water-block without its thermal pads, do not assume it's safe to re-use the pads from your factory cooler. Again, I did do a visual inspection and the pads appeared to be making contact with the block, but there may not have been enough pressure. 

Writing all of this, I wish EVGA's XC2 variant were compatible with the Phanteks Glacier block; I probably would have paid a little more for that, because it's a reference PCB with temp probes on the memory and VRM. But the only XC2s I could find on eBay were around $1100-1200 used. Someone was selling one locally for $1000, but I would have needed to buy a compatible block, and although EK states their Quantum FE block is compatible (visual: customer), the fact that the XC2 isn't on Phanteks' list of compatible 2080 Tis was cause for concern. The only block I know for sure works with the XC2 is EVGA's Hydro Copper, not a bad-looking or expensive block ($150) per se, but that's $150 on top of $1000-1100 used, versus $800 out the door for the XC Ultra. All I wanted was a 300A reference PCB for under $900 with a warranty. 

And this would explain my observation about the back-plate being so hot, I'm talking TOO hot to the touch: the memory modules weren't making contact with the block. The previous owner even went out of his way to CNC-machine a rather elaborate aluminum back-plate with two small fans sending air through deep channels. If the water-block had been making proper contact with the memory modules, none of that would have been needed. I'm thinking about getting an IR thermometer this time around, not sure. I do have some temp probes that came with my motherboard; maybe I can figure out how to attach them between the memory modules as they sit on the PCB. 

For comparison, my 1080 Ti FE, with adequate contact on the modules, is literally cool to the touch on the back-plate @ 300W. 

Concerning sourcing GDDR6, I don't think it can be done without donor cards; TecLab couldn't find any and neither can I. I checked AliExpress per your advice, out of curiosity, and couldn't find any. 






e64462 said:


> Hi J7SC,
> 
> Thanks for the reply. The waterblock is already installed, so the contact areas aren't my direct concern. I'm more worried about whether I should try to rinse out the water block at all before installation and if so, what is recommended.
> 
> Thanks again!


Don't ever mix alcohol with acrylic. Alcohol can break down the polymer chains of acrylic and result in cracking. 

If you have any acrylic anywhere in your system, just steer clear of it altogether. 

Distilled vinegar can erode nickel plating. https://www.reddit.com/r/watercooli...do_not_clean_you_nickel_plated_fittings_with/

Just use distilled water for the fill, you don't need to rinse the inside of the GPU block unless it was used and it looks like it needs it.


----------



## RAGEdemon

Dear Gentlemen,

Can anyone recommend a good overclocking bios for the reference PCB MSI Duke OC where all 3 fans will stop when the card is idle as with the vanilla BIOS? Unfortunately, this card is locked at max 290W.

This card has very good cooling, and has had liquid metal applied. Vanilla bios OC load temps are currently 2055 MHz @ 55 degrees C @ 25 degrees C ambient.

My system is usually on 24/7 and therefore collects a lot of dust even with filters. A saving grace is technologies where all fans stop - which stops the heatsinks getting clogged up with dust.

With the Galax 380W BIOS, the middle fan still keeps spinning.
MSI Trio BIOS? Bad power balancing?
MSI Lightning Z BIOS? I think FROZR tech only disables all fans in LN2 mode. 
Others?

I would appreciate your help and recommendations - Thank you.


----------



## jura11

Hi @Mooncheese

When you buy a used GPU water-block, never re-use the stock thermal pads from the stock cooler. This happened to a friend of mine who likewise used EVGA pads on the Phanteks Glacier WB for his GTX 1080 Ti Aorus Extreme. He bought two blocks, one new and one used, built the loop, and after 2 days told me he had temperature problems, way too high for water-cooled GPUs: 65°C on one GPU and just 38°C on the other in games that support SLI. In the end I sent a message to Phanteks, who kindly sent replacement pads free of charge; afterwards his temperatures dropped to 36-38°C on both GPUs and he never had an issue with them. 

Regarding the backplate, I have tested several of them, and every EKWB, Phanteks, and Heatkiller backplate has been anywhere from warm to hot to the touch; only the Aquacomputer Kryographics RTX 2080Ti with the active backplate has been literally cold to the touch under load. 

As for why RTX 2080Tis with Micron memory fail more often than those with Samsung, it's hard to say; it could come down to the memory controller. I never investigated the issue too deeply, and I assume it's a combination of both, plus Nvidia did once apologise for the screw-up with the RTX 2080Ti. 

My Asus RTX 2080Ti Strix with the EKWB Vector RTX 2080Ti WB and their backplate is, like in your case, hot to the touch under load; an IR thermometer is good to have. 

I usually use Alibaba for sourcing parts which aren't widely available in the UK or on Amazon, eBay, or AliExpress. 

I think I found a few GDDR6 memory modules there for £22-£32 per module. 

Hope this helps and good luck there

Thanks, Jura


----------



## Mooncheese

RAGEdemon said:


> Dear Gentlemen,
> 
> Can anyone recommend a good overclocking bios for the reference PCB MSI Duke OC where all 3 fans will stop when the card is idle as with the vanilla BIOS? Unfortunately, this card is locked at max 290W.
> 
> This card has very good cooling, and has had liquid metal applied. Vanilla bios OC load temps are currently 2055 MHz @ 55 degrees C @ 25 degrees C ambient.
> 
> My system is usually on 24/7 and therefore collects a lot of dust even with filters. A saving grace is technologies where all fans stop - which stops the heatsinks getting clogged up with dust.
> 
> The Galax 380w bios middle fan still keeps spinning.
> MSI Trio bios? Bad power balancing?
> MSI Lightening Z Bios? - I think FROZR tech only disables all fans in LN2 mode.
> Others?
> 
> I would appreciate your help and recommendations - Thank you.


Rage how is it going?! I didn't know you were on here, my handle is StarMan on Nvidia official forum and 3D Vision discord. 55C on air that is amazing! 

Not sure what the issue is with the fans, wish I could be of more help. 



jura11 said:


> Hi @Mooncheese
> 
> When you are buying used GPU waterblock, never use stock thermal pads from stock cooler, this has happened to friend of mine who used as well EVGA pads on his GTX1080Ti Aorus Extreme Phanteks Glacier WB,ge bought two blocks one new and second used, he build the loop and then after 2 days he told me he have problems with temperatures because they way to high for water cooled GPUs, he saw 65°C on one and on second he saw just 38°C in games which supports SLI, at the end I sent message to Phanteks which kindly send me replacement pads FOC, afterwards his temperatures dropped to 36-38°C on both GPUs and he never had a issue with them
> 
> Regarding the backplate, I have tested several of them and every one of EKWB or Phanteks abd Heatkiller backplates have been from warm to hot to touch, only Aquacomputer Kryographics RTX 2080Ti with active backplate have been literally cold on touch under load
> 
> Why RTX 2080Ti GPUs with Micron fail more likely than with Samsung memories, hard to say, can be down to the memory controller which can be true, I never investigated this issue too much there, I assume its combination of both there plus once Nvidia apologised for the screw up with RTX 2080Ti
> 
> My Asus RTX 2080Ti Strix with EKWB Vector RTX 2080Ti WB and their backplate is like in yours case hot on touch under load, IR thermometer is good to have
> 
> Alibaba usually I use for sourcing some parts which are not largely available in the UK or on Amazon, eBay or Aliexpress
> 
> I think I found there few GDDR6 memory modules for £22-£32 per module
> 
> Hope this helps and good luck there
> 
> Thanks, Jura


22 British pounds works out to close to $30 US per module, and that's on the low end. Assuming some middle ground of $35 per module, not counting shipping etc., it looks like I made the right decision, because that would have been around $400 to replace 11 GB of memory modules. I sold the broken card for $400 and bought one with 2.5 years of warranty remaining for $800, and who knows, this one may have Samsung memory (EVGA XC Ultra, manufacture date August 2019; not sure if EVGA is putting Samsung modules on non-Kingpin cards). Then I would have had to send it down to Brazil, pay customs duties, and I would have wanted to compensate TecLab somehow, even though they offered to fix it for free. 
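A quick sketch of that back-of-the-envelope math (assumptions: 11 × 1 GB GDDR6 modules on the 352-bit card, an illustrative £→$ rate of roughly 1.30, and no shipping, labour, or duties):

```python
# Rough parts-only cost to replace all GDDR6 on a 2080 Ti.
MODULES = 11        # 11 GB at 1 GB per module (352-bit bus = 11 x 32-bit chips)
GBP_TO_USD = 1.30   # illustrative early-2020 exchange rate

def replacement_cost_usd(price_gbp_per_module: float) -> float:
    """Total USD cost for a full memory swap at a given per-module price."""
    return MODULES * price_gbp_per_module * GBP_TO_USD

low = replacement_cost_usd(22)    # quoted low end
high = replacement_cost_usd(32)   # quoted high end
print(f"parts alone: ${low:.0f}-${high:.0f}")   # roughly $315-$458
```

Even at the low end, parts alone land in the same ballpark as the $400 the broken card sold for, before shipping or labour.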

I just lucked out and found a buyer for $400; otherwise replacing the modules would have been the way to go, though it would probably have come closer to $600 or more. 

I'm surprised you found the memory in question, can you post a link here or is it against site terms? 

Thanks again.


----------



## RAGEdemon

Mooncheese said:


> Rage how is it going?! I didn't know you were on here, my handle is StarMan on Nvidia official forum and 3D Vision discord. 55C on air that is amazing!
> 
> Not sure what the issue is with the fans, wish I could be of more help.


Hey StarMan, been here a while; being a pain in nVidia's bum. I remember starting a 300-page thread which contributed to the class-action lawsuit against nVidia that made them pay out to American consumers years ago... https://www.overclock.net/forum/69-nvidia/1535502-gtx-970s-can-only-use-3-5gb-4gb-vram-issue.html

This forum (and the 3DV community) are one of the few bastions left for intelligent discussions on the net, amongst overwhelming toxicity... a gem really. Hope you got your 3DV driver install issues fixed; see you around man.


----------



## Mooncheese

RAGEdemon said:


> Hey StarMan, been here a while; being a pain in nVidia's bum. I remember starting a 300 page thread which contributed to the class action lawsuite against nVidia which made them pay out to american consumers years ago... https://www.overclock.net/forum/69-nvidia/1535502-gtx-970s-can-only-use-3-5gb-4gb-vram-issue.html
> 
> This forum (and the 3DV community) are one of the few bastions left for intelligent discussions on the net, amongst overwhelming toxicity... a gem really. Hope you got your 3DV driver install issues fixed; see you around man.


Wow man, serious props, I didn't know your thread contributed to the creation of that class-action lawsuit. Yeah it seems critical thinking and intelligent discussions are seriously in decline nearly everywhere (or censored). 

I'm currently on 442.19. Although Losti's thread was locked over at the official GeFarce board, I figured I'd use his method to add the 3D Vision driver bits to this driver after I get my replacement 2080 Ti next week. Something broke along the way updating via 3D Fix Manager, and 3D Vision is completely broken in a lot of titles, including, but not limited to, Devil May Cry 5 and Hellblade: Senua's Sacrifice. 3D Vision isn't working at all (double image), accompanied by a red warning: "Warning: attempt to run stereoscopic 3D in non-stereoscopic mode". Master Otaku suggested modifying the PG278Q EDID file with CRU to limit it to 120 Hz, as the red warning appears because 3D Vision is trying to run at 72 FPS. The thing is, I've never had this problem before; it cropped up somewhere along the way simply updating one of the recent drivers with 3D Fix Manager. I'm hoping Losti's method will fix this with 442.19. 

I'm still on Windows 1809. 

Thanks for any help with this. 

https://www.nvidia.com/en-us/geforce/forums/3d-vision/41/299949/the-way-its-meant-to-be-played/

It's ironic that GeFarce moderators working out of India who don't know **** from shinola about the technology in question locked down the last thread with information on how to use 3D Vision with newer drivers, after NGreed bean-counters killed off the feature to save a whopping $100 per driver update, consigning thousands of us (many of whom, myself included, had just purchased their top-tier GPU) to no longer being able to use 3D Vision, all because said thread encourages leaving the GeFarce forums for https://www.mtbs3d.com/. 

I mean the ******* irony: they kill off a feature, essentially delete the past 10+ years of forum posts with their horrible forum redesign, and then lock the thread that has information on how to continue using the technology they no longer support, because it encourages leaving the forum in question. Huh? 

Like if you try to google information from threads from before the forum redesign you will find threads like this that only result in a Bad Header request: 

https://www.nvidia.com/en-us/geforc...-run-stereoscopic-in-non-stereo-display-mode/

If that's not censorship I don't know what is. 

Honestly, I think the forum redesign was them cracking down on all the discontent over the Turing rename-and-double-the-price scam. The last thing they wanted was investors going to the official forum to gauge customer satisfaction and finding the overall sentiment negative. So they used the redesign as an excuse to wipe all of that clean: it rendered threads before a certain date unviewable ("Bad Header Request") and also made it impossible to search for anything on the forum. 

Like nobody points this out, all you find on the internet is mindless cheerleading about how great Nvidia is. If you go to r/Nvidia, that's 99% of the posts there. 

This is the reality: Nvidia abandons all of their pet technologies eventually. They engage in insane bean-counting cost-saving measures like discontinuing 3D Vision (how much does it cost to pay a single programmer to copy the 3D Vision driver bits from the previous driver to the next one? 15-30 minutes tops, so maybe $100 per driver? Meanwhile there were literally thousands of us still using and enjoying the feature, all of whom stayed with Nvidia this whole time, when they could have bought less expensive equivalent AMD hardware, solely because of the technology in question, many of whom, myself included, even purchased their $1200-before-taxes 80-class card). And they engage in active censorship, in the form of the forum redesign, including paying moderators to sit on r/Nvidia and censor posts that don't amount to cheerleading for Nvidia (FACT, known from personal experience: I have MULTIPLE posts calling them out for all of the above, without profanity or anything that violates the subreddit rules, that never post, and the moderators never respond to my inquiries as to why). 

Anyhow, preaching to the choir here, as I'm sure you know all of this; it feels good in an odd way to say it to someone who knows what is really going on.

R.I.P: 

PhysX
3D Vision
Hairworks


Dead men walking: 

SLI
G-Sync
RT-core-accelerated ray tracing (when the non-RT-core methods used by both the next-gen consoles and future AMD GPUs can do path tracing without RT cores, RT cores will go the way of the G-Sync module). 

NGreedia will continue to find new expensive pet technologies though. Anything that is anti-consumer they will do because their reason for existence is to extract as much $ from your wallet as possible. 

My G-Sync panel is the only thing keeping me tenuously trapped in their anti-consumer ecosystem. I'm seriously considering an all-AMD build sometime in 2021-2022, paired with Samsung's Odyssey G9, a serious upgrade over the AW3418DW if you like curved ultrawides: https://www.theverge.com/circuitbre...g-monitor-49-inch-qled-display-specs-ces-2020


----------



## kithylin

Mooncheese said:


> Yeah it seems critical thinking and intelligent discussions are seriously in decline nearly everywhere (or censored).





Mooncheese said:


> NGreedia will continue to find new expensive pet technologies though. Anything that is anti-consumer they will do because their reason for existence is to extract as much $ from your wallet as possible.


You speak of intelligent discussions yet you're bashing Nvidia in what is supposed to be a thread where we discuss and support Nvidia and owners of Nvidia's products as if we were high school kids in a flame war on the interwebs.


----------



## truehighroller1

kithylin said:


> You speak of intelligent discussions yet you're bashing Nvidia in what is supposed to be a thread where we discuss and support Nvidia and owners of Nvidia's products as if we were high school kids in a flame war on the interwebs.


So many unstable trolls going around in here the last few weeks, I've noticed. Wonder why that is? I bet it's because the Wuhan flu has everyone off work, so the trolls have no one to kick while they're down, and they're coming out in droves on the internet to get their jollies now.


----------



## TK421

jura11 said:


> Hi @*Mooncheese*
> 
> Finding Samsung GDDR6 memory modules is very hard; maybe you can check Alibaba or AliExpress, where in theory you can find them much more easily than on eBay
> 
> Earlier RTX 2080 Tis were equipped with Micron memory, like almost every GTX 1080 Ti, which used only Micron modules, but you can find GTX 1080 Tis with Hynix or Samsung, and usually those have been best for OC; my GTX 1080 Ti wouldn't do more than +400MHz on VRAM with Micron memory
> 
> I built a few PCs with the RTX 2080 Ti; 4 of them ran cards with Micron memory, and every one of those failed or started artifacting at stock settings after 6 months or so
> 
> The other builds I put together run RTX 2080 Tis with Samsung memory and have no issues; they're running strong, a few of them with good OCs in the region of 2100-2145MHz, though some are really poor OC'ers hahaha, like they won't do more than 2085MHz
> 
> If you are buying a used RTX 2080 Ti, get one with warranty if possible; EVGA and many others will allow you to transfer the warranty
> 
> I have tried a few EVGA RTX 2080 Tis; most of them would do 2100MHz. The FTW3 I tested wouldn't do more than 2130MHz, at literally cold ambient temperature (10°C), and the GPU wouldn't pass 34°C in gaming or benchmarks
> 
> Yup, I can confirm that in Paititi at 3440x1440 on my RTX 2080 Ti Strix I'm seeing too-low GPU utilisation, similarly in AC: Odyssey (I'm running a 5960X with a 4.7GHz OC), and you can find poor utilisation in some Witcher 3 scenes or missions where GPU usage won't go beyond 80%
> 
> Hope this helps and good luck mate
> 
> Thanks, Jura



I've had a Micron card degrade from a +800MHz max down to only +125-150.

RMA'd that garbage right away.

Micron is literally the worst company for a GPU overclocker to come across. Literally worse than Hitler.

Regarding the 1080 Ti, were there really Hynix and Samsung modules for GDDR5X? I thought Micron had exclusivity on GDDR5X production?


----------



## J7SC

This thread needs:

a.) cleaning 

b.) ration 

...not necessarily in that order


----------



## z390e

kithylin said:


> You speak of intelligent discussions yet you're bashing Nvidia in what is supposed to be a thread where we discuss and support Nvidia and owners of Nvidia's products as if we were high school kids in a flame war on the interwebs.


he's been doing it in every thread I've seen him in, ever since he bought a piece-of-crap 2080 Ti with Micron memory on eBay and it (predictably) crapped the bed on him

until then he was Mr. Nvidia

edit: and he's linking articles from The Verge for reference; I mean seriously, does that site have any place even being posted here after their video?


----------



## RAGEdemon

Important distinction: We are consumers, not blind fanboys. It is absolutely possible to love a product and its capabilities, yet dislike the manufacturer and its practices. This is not duality, this is simply a depth of human capacity.

To young folks and those who have not been directors / know business law: 

Speaking fairly, the primary objective for any for-profit company is to maximise shareholder wealth (make money) - it is literally illegal for the company not to.

The shareholders would sue the directors and CEO (aka managing director) if they did something against the best interests of the company, and it is again illegal for the directors to act against the company's best interests - it is considered a dereliction of a director's fiduciary duties, and a director can be held personally liable, facing monetary compensation and jail time.

This is simply the truth, and is not up for debate.

However, ethics plays a certain role in this: German companies are required by law to have half the board represent workers' interests, which ensures balance within the company between employees and management - these kinds of countries take ethics seriously. Then there are countries such as the USA, where profit is seen as the 'only' aim of the company - there is little to no worker or public-interest representation, so you get something called runaway capitalism: grinding workers and the consuming public into dust for the smallest bit of profit, which secures the directorship huge bonuses; society and company employees be damned. This is the law that the USA-based nVidia follows.

NVidia is not evil, but they will take advantage if the opportunity presents itself. Mooncheese is right in that there is absolutely no doubt NVidia is profiteering from us all - they make billions in profit each year - but they are only doing what the law and shareholders require of them. Cancelling non-profit-making programs within the company, such as 3D Vision, is exactly what ought to be expected.

Pricing products not on what they cost to make but on how much consumers are willing to pay is also exactly in line with corporate strategy everywhere, hence the latest pricing shenanigans. RTX cards are selling even though people said they would be dead on arrival; the six-figure-salaried professionals working as strategy heads at these companies know more about us consumers than we know about ourselves. If a company can sell a product for $100 but chooses to sell it for $50 out of some misplaced notion of generosity, the directorship would be sued into oblivion by the shareholders for not doing their best to maximise shareholder wealth. This is not hyperbole; it is simply fact.

nVidia didn't set those ridiculous prices - we consumers did. nVidia's strategists researched and then intelligently gambled that we would pay those ridiculous prices, and they were right; we did. We have no one to blame but ourselves. And now all future cards will be similarly priced too.

I cannot stress this enough: The primary goal, legally and otherwise for any company's directorship, is to increase its shareholder wealth. - Not to be nice to consumers, or to be ethical, or to sell things slightly above manufacturing cost (there is a token ethics element, but it often goes ignored with little consequence). Being nice and ethical only comes in at a very far second place in jurisdictions such as the USA.

AMD would do mostly the same were they in the same position as Intel and nVidia (they are Canadian, so their laws aren't as aggressive as the USA's when it comes to requiring directors to treat generating shareholder wealth as the overriding corporate strategy).

Gentlemen, the harsh reality is that companies are not your friends - they would not spit on you if you were on fire. They only want to sell you their wares for profit, so they pretend to be nice through PR (a company's propaganda department). This includes giveaways, excellent customer service, and so-called 'freebies'. Here, the company has simply calculated that if it spends X on these things, it can make Y extra profit on top of the money spent, plus Z from the brand's increased value due to the public's goodwill.

We as consumers owe it to ourselves and the rest of society to be critical and objective about any company's products. We absolutely should not be the fanboys or brain-dead PR regurgitators that companies' PR departments want us to be. What we should do, however, is come together as a community and help each other get the best out of any company's products, whether it be price, performance, or something else entirely.

We also need to ensure we support consumer protection laws such as right to repair bills, as well as ethical workplace laws such that the factory workers are not treated badly, e.g. minimum wage. We have voting power on our side, but unfortunately the companies have money on their side, which they use to "lobby" (legally bribe) lawmakers into giving them more free money while curtailing consumer/worker/environmental rights.

Unfortunately, in places such as the USA, money speaks louder than votes, and it doesn't help when the consumer gets brainwashed into voting against their own best interest (fanboyism). Certainly, we should never be 'loyal' to any company - doing so would again betray ourselves and our fellow man.

Live long and prosper, my friends...


----------



## Imprezzion

I have done something quite YOLO.

I bought a 2080 Ti listing here locally, it's basically a Inno3D Jet Blower card with a Accelero IV on it. The seller had multiple cards, all of them as bare PCB's with Accelero's on them. No stock cooler, no box, nothing. I offered a pretty low amount for a bare PCB without the cooler as I have my own Accelero IV laying around anyway.

It's probably a non-A reference board unless it's a PCB from a Inno3D X2 that has a A chip.

I did buy a bunch of those enzotech and alphacool VRAM and VRM sinks. They should not be needed but still.


----------



## Mooncheese

RAGEdemon said:


> Important distinction: We are consumers, not blind fanboys. It is absolutely possible to love a product and its capabilities, yet dislike the manufacturer and its practices. This is not duality, this is simply a depth of human capacity.
> 
> [...]
> 
> Live long and prosper, my friends...



Rage, speaking of how the NGreedia consumer base has really become like the Apple consumer base (mindless morons), check this out (repost from the Ampere 40% higher thread): 

https://www.overclock.net/forum/379...x-2080-ti-q4-2020-launch-15.html#post28369256






For $1000 or less. 

It's safe to say that NGreedia is in serious trouble. 

Potential entrants into the NGreedia ecosystem, those building a PC for the first time because they want higher frame rates or want to be able to use peripherals (FPS games), whatever the reason, are not going to want to pay 3x the amount for a performance-equivalent PC that can't even do some of the things the Xbox One X can do. 

Wait until you see how quick switching games is: https://youtu.be/7Fjn4GRw8qE?t=489

So NGreedia's consumer base is pretty much guaranteed to be limited to whoever already has a PC and feels the need to upgrade their GPU this year. 

Consoles are seriously going to make a comeback this year. 

AMD RDNA 2 to top it off. 

Say goodbye to NGreedia "rename the entire product stack on GPU higher to double the price" shenanigans. 

What's interesting, in a kind of Stanford Prison Experiment way, is that I provided clear proof that NGreedia is absolutely, 100% gimping the slightly older architecture ("focusing on the new architecture"), with Red Dead Redemption 2 showing the 2080 Ti 85% faster than the 1080 Ti, in a sister thread containing no profanity or inflammatory remarks that would justify its removal from the Nvidia subreddit, and no one even blinked. 

The only responses I got here were that the performance difference between 60 and 110 FPS isn't 100% but 85%, and another about how the CPUs used were different. No one seemed to care that this is not only an abundantly clear example of NGreedia intentionally gimping the slightly older GPU to make the newer, considerably more expensive GPU seem faster, but also a clear example of paid moderators sitting on r/Nvidia whose sole job is to remove threads that are even remotely critical of Nvidia. 

No one even batted an eyelash at any of this. And that's worrisome, because come GPU-refresh time, those of us who don't find needlessly upgrading to the next new $1200 GPU palatable can fully expect them to "focus on the new architecture" again. Can we expect the 3080 Ti to magically be 85% faster than the 2080 Ti in, say, Cyberpunk 2077? Will you bat an eyelash then? Or is everyone here simultaneously flush with cash and sodium-fluoride addled, having lost all critical faculties to Mountain Dew and GMO cheese-goodies over the years? 

Is this what Nvidia consumer-base is now? Mindless morons who are either too rich or too stupid to care?


----------



## sblantipodi

My RTX 2080 Ti isn't able to max out Gears 5 at a locked 60fps in 4K, but the new Xbox Series X supposedly can. 

How stupid I feel for spending 1300€ on a GPU 😂


----------



## mattxx88

sblantipodi said:


> My RTX 2080 Ti isn't able to max out Gears 5 at a locked 60fps in 4K, but the new Xbox Series X supposedly can.
> 
> How stupid I feel for spending 1300€ on a GPU 😂


you are talking about a 2-year-old GPU vs. a not-yet-existent shoebox. Makes no sense to me


----------



## EarlZ

Can anyone point me to a recent guide on how to properly use the voltage curve editor in MSI AB? I've been trying to set 0.9xx V for 1905MHz, but it still bumps the voltage to 1.025.
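Not a full guide, but the usual reason the editor "bumps" you to a higher voltage is that points to the right of your target still carry higher clocks, so GPU Boost keeps climbing the curve. The common fix is to raise the point at your target voltage and flatten everything to its right (selecting a point and pressing L in Afterburner locks it similarly). A toy model of that flattening, with made-up curve values, not real card data:

```python
# Toy model of the "flatten the voltage/frequency curve" trick.
# Curve points below are illustrative, not real 2080 Ti values.

def flatten_curve(curve, target_mv, target_mhz):
    """curve: list of (millivolts, MHz) points, ascending by voltage.

    Caps every point at or above target_mv to target_mhz, so the boost
    algorithm has no higher-clocked point to chase at higher voltages.
    """
    flattened = []
    for mv, mhz in curve:
        if mv < target_mv:
            flattened.append((mv, min(mhz, target_mhz)))
        else:
            flattened.append((mv, target_mhz))
    return flattened

stock = [(850, 1740), (900, 1800), (950, 1860), (1000, 1920), (1025, 1950)]
locked = flatten_curve(stock, 900, 1905)
# every point from 900 mV up now reads 1905 MHz, so the card gains
# nothing by raising the voltage past 0.900 V
```

If the curve to the right of your chosen point isn't flat, boost will happily jump to the 1.025 V point the moment power and temperature headroom allow it.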


----------



## Imprezzion

So, basically, as mine's a non-A chip it limits like mad, even on the Palit 310W BIOS. Plus, with my frankenstein Accelero Xtreme IV mod (two 140mm RGB fans on the card's PWM header), I really don't like the 35% minimum fan speed. I want lower for the 140s. 

I found a patched nvflash version on Reddit that allows flashing an A BIOS onto a non-A chip, with the ID check somehow disabled. 
I'm just scared as balls to actually try it, as my card doesn't have dual BIOS and I'm afraid of permabricking it if the flash goes bad.

Has anyone ever tried to overwrite a non-A with an A BIOS using a patched nvflash? And did it work? Lol? 

The other option for me would be to shunt mod it, but I don't know if I can cool that much, lol. I'd prefer to keep some control over the power limit.

EDIT: I've been running Borderlands 3 in a borderless window at 1080p with 150% resolution scale, all Ultra/Badass settings, with MSI AB set to always-on-top, for 30 or so minutes, and dialed in a curve OC on the Palit 310W BIOS. 

The card seems to really like 1025mV, as that sits right at the point of not throttling at ~120-123% power, so I'm on my way to finding the max overclock it can do at 1025mV. I started with the VRAM; it likes +1000MHz (8000MHz shown). It ran +1200 for a while but started glitching textures after 10 minutes or so, so I went back to +1000 and it's fine there. The card has Samsung memory, by the way.

As for the core, it's been running at 1965MHz for a good 20 minutes now, and temps stabilized at 63c with my Accelero mod. It very rarely drops to 1950MHz for half a second when it hits 124%, but otherwise it seems happy here so far. I'll raise it one clock bin higher to 1980 and see what happens.

Are RTX 2xxx cards just as sensitive to temperature as GTX 1xxx was? I have a LOT of room left on fan speed, I'm only at ~44%, so if it helps with general stability I can go a bit higher.
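One note on the memory numbers here: for GDDR6, the clock Afterburner displays (7000 MHz stock on a 2080 Ti) is half the effective data rate, so +1000 shown as 8000 MHz is really 16 Gbps at the pins. A quick sanity check against the card's spec sheet (352-bit bus, 616 GB/s stock):

```python
# GDDR6 bandwidth sanity check for the 2080 Ti. MSI Afterburner shows
# the memory at 7000 MHz stock; the effective data rate is twice that
# (14 Gbps), and bandwidth = data rate * bus width / 8.

def gddr6_bandwidth_gbs(afterburner_mhz, bus_width_bits=352):
    effective_mts = afterburner_mhz * 2               # MT/s (double the shown clock)
    return effective_mts * bus_width_bits / 8 / 1000  # GB/s

print(gddr6_bandwidth_gbs(7000))  # 616.0 GB/s, matching the stock spec
print(gddr6_bandwidth_gbs(8000))  # 704.0 GB/s with a +1000 MHz offset
```

So a stable +1000 offset is worth roughly 14% more memory bandwidth, which lines up with why Samsung-equipped cards that hold it are prized.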


----------



## sultanofswing

After doing a bunch of testing on my card, I have discovered that while the voltage curve editor is great for keeping the card at lower voltages and temps, it seems to hinder performance.
I have been getting a lot of frame hitching in a lot of the games I play.
I was normally running [email protected] without issues and no crashes whatsoever.

I talked to a few people in the overclocking community (Vince and a few others), and they told me that even though the card is stable and never has any issue at those settings, it needs more voltage.

I proceeded to do a bunch of testing to confirm whether this was true or not.
I ran multitudes of benchmarks and played actual games, and here is what I found.
2115MHz using an offset of +60 in Precision X1 yielded me 300-400 more points in the Superposition 8K benchmark than my normal [email protected] when using the voltage curve.
This got me even more curious, so I used the voltage curve, set it to [email protected], and still got worse scores and worse gameplay than with the +60 clock offset.

I then decided to ditch the voltage curve altogether and use the +60 clock offset, and in games my 1% lows got way better and all the frame hitching I was seeing was gone.

I see everyone wants a higher-current BIOS while running temps over 50c; a higher-current BIOS will not do you much good unless you can get temps below 45c (40c preferably).
As temperature climbs at the same voltage, the power draw gets higher, so keeping temps as low as possible is the key, not a higher-power-limit BIOS.
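The temperature/power point comes down to leakage: at a fixed voltage, leakage current grows with die temperature, so the same workload eats more of the power budget on a hot card. A toy model with assumed coefficients (the ~2%/°C leakage growth and the wattage split are illustrative, not measured 2080 Ti data) shows the direction of the effect:

```python
import math

# Illustrative only: dynamic power is fixed by workload/voltage/clock,
# while leakage grows roughly exponentially with temperature.
# The 2%/degree coefficient and the 220 W / 30 W split are assumptions.

def power_draw_w(dynamic_w, leakage_w_at_25c, temp_c, k=0.02):
    return dynamic_w + leakage_w_at_25c * math.exp(k * (temp_c - 25))

water = power_draw_w(220, 30, 40)  # ~40c water-cooled card
air = power_draw_w(220, 30, 75)    # ~75c air-cooled card
assert air > water  # same voltage and clocks, more total draw when hot
```

That is why a sub-40c card gets more real headroom out of the same power limit than one sitting at 60c+.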


----------



## sultanofswing

Imprezzion said:


> So, basically, as mine's a Non-A chip it limits like mad, even on the Palit 310w BIOS.
> 
> [...]
> 
> Are RTX2xxx cards just as sensitive to temperatures as GTX1xxx was? I have a LOT of room on the fanspeed left, I'm only on ~44% so.. if it helps with stability i can go a bit higher if it helps general stability.


Turing is even more sensitive to temperature than Pascal was. Turing will drop 15MHz for every 5c gain in temp; there is a 15MHz drop at 35c, 40c, 45c, 50c, and so on.
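That rule of thumb can be written down as a toy function; the 35 °C starting point and the 15 MHz per 5 °C bins are community observations of GPU Boost behavior, not anything NVIDIA documents:

```python
# Sketch of the Turing/Pascal thermal bins described above: GPU Boost
# sheds one 15 MHz bin at ~35c and another every 5c after that.
# Thresholds are approximate observations, not official spec.

def boost_clock_at_temp(cool_clock_mhz, temp_c, start_c=35, step_c=5, bin_mhz=15):
    if temp_c < start_c:
        return cool_clock_mhz          # below the first threshold: full boost
    bins_dropped = (temp_c - start_c) // step_c + 1
    return cool_clock_mhz - bins_dropped * bin_mhz

# e.g. a card that boosts to 2100 MHz when cold:
print(boost_clock_at_temp(2100, 34))  # 2100
print(boost_clock_at_temp(2100, 35))  # 2085 (first bin)
print(boost_clock_at_temp(2100, 63))  # 2010
```

On this model, a card at 63c (like the Accelero build discussed above) sits six bins, i.e. 90 MHz, below its cold clock, which is roughly the water-vs-air gap people report.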


----------



## Imprezzion

Yeah, true, but using Curve OC negates that completely. 

I more or less meant that Pascal was very sensitive to temperature for stability as well. A clock that was stable at 45c could instantly crash at 65c, for example. 

Haven't seen that happen on Turing yet, but then again I've only had an RTX 2080 Gaming X Trio and now a non-A reference-PCB 2080 Ti.

It did crash once at 1965MHz though, so I'm back to 1950/8000 at 1.025v and it seems totally fine and happy there.


----------



## sultanofswing

Imprezzion said:


> Yeah true but using Curve OC negates that completely.
> 
> I more or less meant that Pascal was very sensitive to temperatures in stability as well. A clock that was stable at 45c could instantly crash at 65c for example.
> 
> Haven't seen that happen on Turing yet but then again i only had a RTX2080 Gaming X Trio and now a non-A ref PCB 2080 Ti.
> 
> It did crash once on 1965Mhz tho so i'm back to 1950/8000 on 1.025v and it seems totally fine and happy there.



Using the curve does not do anything about the temperature-vs-clockspeed decrease. 
Turing is no different concerning clockspeed and temperature; it will act just like Pascal.


----------



## Imprezzion

sultanofswing said:


> Using the curve does not do anything with the temperature vs clock speed decrease.
> Turing is no different concerning clockspeed and temperature it will act just like pascal.


Then how can both my 1080 Ti and 2080 Ti stay on the exact clocks I set in the curve regardless of temperature? There's a second tab on the curve screen, "temperature", where you can set a temperature curve. I have mine flat up to 88c, and my clocks do not change at all between 23c idle and 62c load. The voltage curve is set to 1950 @ 1.025, and that's just what it runs.


----------



## sultanofswing

Imprezzion said:


> Then how can both my 1080 Ti and 2080 Ti Stay on the exact clocks I set in the curve regardless of temperature? I mean, there's a second tab on the curve screen "temperature" where you can set a temperature curve. I have mine flat up to 88c and my clocks do not change at all between idle 23c and load 62c. The voltage curve is set to 1950 @ 1.025 and that's just what it runs.


Mine is set up the same way, and it still drops clocks just like everyone else's here.

I have a Kingpin, which pretty much bypasses all of Nvidia's limits except the temp-vs-clockspeed curve. 

It's hardcoded in the BIOS and not bypassable.

Sent from my GM1915 using Tapatalk


----------



## jura11

Hi @Imprezzion 

I just tested what clocks I'm able to achieve at 1.024v, in a few games like BFV and Witcher 3, in Unigine Superposition, and in the 3DMark suite. 2100MHz is quite easily achievable on my Asus RTX 2080 Ti Strix; temperatures have been quite good at 34-35C max, utilisation has been 90-110%, and power draw around 329.9W. I didn't try going above those clocks, but I think I could bench at 2130-2145MHz quite easily. I've folded at 2100MHz at 1.024v with no issues to report. At such clocks, though, rendering in Cycles or any software that uses RT cores or OptiX is impossible because you will get a CUDA error, so for rendering I usually keep my 2025MHz profile at 1.024v, which is stable for prolonged runs.

Hope this helps

Thanks,Jura


----------



## Imprezzion

jura11 said:


> Hi @Imprezzion
> 
> [...] 2100MHz is quite easy achievable on my Asus RTX2080Ti strix [...]
> 
> Thanks, Jura


Well, yes and no. It's a Strix, so it's most likely an A chip. Mine's a bottom-of-the-barrel non-A chip, which is why it was SO cheap, but it won't really do much above 1965MHz unless I raise the voltage to like 1.062v, and then it throttles so hard on the power limit that it still runs the same clocks lol.

I've had it stable at 1935-1920 depending on temps at 1.012v, but that's barely above stock boost. 

Not that it matters; even at these clocks it's way, way faster than my 1080 Ti Lightning @ 2067MHz will ever be. But yeah, I know it can do so much more if I shunt mod it... 

I don't have the right resistors, unfortunately. I have a bunch of 5 mΩ ones from the old 780/780 Ti days, but I read a 5 mΩ piggyback triggers protection mode and I need something like an 8 mΩ or 12 mΩ, which I would have to order first.


----------



## loginwong

I ordered the GALAX GeForce® RTX 2080 Ti HOF 10th Anniversary OC LAB Edition today and will get the card next week. Can anyone PM me the Galax RTX 2080 Ti HOF OC Lab custom PCB (3x8-pin) 2000W BIOS?


----------



## sultanofswing

loginwong said:


> I ordered the GALAX GeForce RTX 2080 Ti HOF 10th Anniversary OC LAB Edition today and will get the card next week. Can anyone PM me the Galax RTX 2080 Ti HOF OC Lab custom PCB (3x8-pin) 2000W BIOS?


That BIOS is on TechPowerUp. It's an older version that has some issues, but it does work. 
You can also try contacting Galax with your info and your card's serial number and see if they can send you the newest version.

Sent from my GM1915 using Tapatalk


----------



## loginwong

thanks, I will take a look on that


----------



## gfunkernaught

All this talk about NGreedia, and how ethics in business doesn't matter, and keeping the shareholders wealthy... you're all right. I was against the 2080 Ti when I had a 1080 Ti. Then Nvidia unlocked DXR on Pascal, late, on purpose, so 1080 Ti owners could get a taste. Let's face it, ray tracing DOES look good. Quake 2 RTX DOES look good. No one can argue that ray tracing is merely okay compared to raster lights and shadows; shadows that for a decade have looked like garbage at certain angles. 

But my point is, when someone says "rich" and "mindless" referring to Nvidia customers, not all of us are. Nvidia is kind of like the hottest stripper in the club, like how can you not? Now I get it, after being burned by the b***h! Nvidia is a succubus, a temptress. Meanwhile AMD is like wife material: not super hot, but won't gouge your wallet and will still deliver.


----------



## TedOlsthoorn

Hi everyone!

Does anyone know if there's a BIOS version that's compatible with the Gigabyte 2080 Ti Windforce (non-OC model) to raise the power limit?
Gigabyte released a BIOS version with higher power limits for the Windforce OC and Gaming OC versions, but not the standard Windforce model.
On the Gigabyte forums there's a couple of BIOSes floating around, but it's never really clear if they are for the Windforce non-OC or the OC model. Has anyone tried flashing the updated Windforce OC bios to a non-OC card? I'm afraid to brick mine, I've never flashed vBIOS before.

TL;DR: I'm looking for a BIOS for raised power limits for a Windforce non-OC model. Thanks in advance!


----------



## Medizinmann

TedOlsthoorn said:


> Hi everyone!
> 
> Does anyone know if there's a BIOS version that's compatible with the Gigabyte 2080 Ti Windforce (non-OC model) to raise the power limit?
> Gigabyte released a BIOS version with higher power limits for the Windforce OC and Gaming OC versions, but not the standard Windforce model.
> On the Gigabyte forums there's a couple of BIOSes floating around, but it's never really clear if they are for the Windforce non-OC or the OC model. Has anyone tried flashing the updated Windforce OC bios to a non-OC card? I'm afraid to brick mine, I've never flashed vBIOS before.
> 
> TL;DR: I'm looking for a BIOS for raised power limits for a Windforce non-OC model. Thanks in advance!


AFAIK you could use the Palit BIOS for "non-A" cards with a 310W power limit (see page one of this thread).

And you should always backup your existing BIOS before flashing.

Greetings,
Medizinmann


----------



## TedOlsthoorn

Thanks Medizinmann! I've used NVFlash to backup my existing BIOS.

Only I've read that by flashing the Palit BIOS, the middle fan of my card will stop working, since that BIOS is made for a two-fan card.
Any ideas if the Windforce OC model BIOS update could work for me? Since the cards are basically identical other than my non-OC version having a non-A chip.
I have the BIOS update file but I'm hesitant to try it out because I'm not sure what will happen, lol.


----------



## Medizinmann

TedOlsthoorn said:


> Thanks Medizinmann! I've used NVFlash to backup my existing BIOS.
> 
> Only I've read that by flashing the Palit BIOs I'll miss the middle fan of my card working, since that BIOS is made for a two-fan card.
> Any ideas if the Windforce OC model BIOS update could work for me? Since the cards are basically identical other than my non-OC version having a non-A chip.
> I have the BIOS update file but I'm hesistant to try it out because I'm not sure what will happen, lol.


Sorry, don't know this exactly.

Another idea would be to look into the Gainward Phoenix BIOS - it also raises the power target up to 310W, is for a non-A chip, and is a reference PCB design with 3 fans!
https://www.techpowerup.com/vgabios/215370/215370

*Disclaimer: But it is an unverified BIOS!*

Someone on reddit reported it to be successful though!
https://www.reddit.com/r/nvidia/comments/cae5ob/request_gainward_2080_ti_phoenix_owner_to_flash/

Greetings,
Medizinmann


----------



## TedOlsthoorn

Awesome, thanks a lot Medizinmann! I've downloaded the BIOS from the link you posted, ready to flash it to my card whenever I feel is the right time, haha.
When I've done so I'll report back!

Edit: I couldn't resist so flashed the BIOS with NVFlash, using the -6 command to get around the subsystem vendor ID problem it gave me. It worked, the power limit is now higher but the fan speeds seem very limited even at 100% manual speed in Afterburner, probably due to the fact that the BIOS is for a Gainward card and the fan speeds for the Gainward and Windforce cards aren't the same. Annoying, because I need the extra fan speed to keep the card cool!


----------



## Imprezzion

TedOlsthoorn said:


> Awesome, thanks a lot Medizinmann! I've downloaded the BIOS from the link you posted, ready to flash it to my card whenever I feel is the right time, haha.
> When I've done so I'll report back!
> 
> Edit: I couldn't resist so flashed the BIOS with NVFlash, using the -6 command to get around the subsystem vendor ID problem it gave me. It worked, the power limit is now higher but the fan speeds seem very limited even at 100% manual speed in Afterburner, probably due to the fact that the BIOS is for a Gainward card and the fan speeds for the Gainward and Windforce cards aren't the same. Annoying, because I need the extra fan speed to keep the card cool!


It limits RPM to about 2200 right? I use the Palit BIOS on my card but I Frankenstein-cool mine with an Accelero IV and 2 140mm RGB fans which have their PWM wire and RPM wire on the card. They only do 1600RPM max so no problems for me tho.

I might buy an AIO for it just for the fun of it so I can shunt mod it without overwhelming the Accelero's heatsink.

I'll order a proper soldering station, some 10 mOhm resistors and maybe a Kraken G12 or whatever to strap an AIO to it.

I do actually have an H110i GTX here somewhere but I don't think that pump style can mount to a GPU, as it has the square pump.


----------



## TedOlsthoorn

Imprezzion said:


> It limits RPM to about 2200 right? I use the Palit BIOS on my card but I Frankenstein cool mine with a Accelero IV and 2 140mm RGB fans which have their PWM wire and RPM wire on the card. They only do 1600RPM max so no problems for me tho.
> 
> I might buy a AIO for it just for the fun of it so I can shuntmod it without overwhelming the Accelero's heatsink.
> 
> I'll order a proper soldering station, some 10 mOhm resistors and maybe a Kraken G12 or whatever to strap a AIO to it.
> 
> I do actually have a H110i GTX here somewhere but I don't think that pump style can mount to a GPU as it has the square pump.


Yeah, it seems to be limited to about 2200rpm for all three fans at 100%, which isn't enough with this stock cooler for the higher power limit unfortunately.

I'm not familiar with hardware modding, so I won't try to give you any advice on your project, haha. Good luck though!


----------



## long2905

Imprezzion said:


> It limits RPM to about 2200 right? I use the Palit BIOS on my card but I Frankenstein cool mine with a Accelero IV and 2 140mm RGB fans which have their PWM wire and RPM wire on the card. They only do 1600RPM max so no problems for me tho.
> 
> I might buy a AIO for it just for the fun of it so I can shuntmod it without overwhelming the Accelero's heatsink.
> 
> I'll order a proper soldering station, some 10 mOhm resistors and maybe a Kraken G12 or whatever to strap a AIO to it.
> 
> I do actually have a H110i GTX here somewhere but I don't think that pump style can mount to a GPU as it has the square pump.



Google "Alphacool AIO" specifically for the 2080 Ti reference PCB. They cover everything and it looks nice enough. I'm tempted to pull the trigger but the price is still high (for me).


----------



## ESRCJ

If you guys need something to bench for your 2080 Tis, I have a bench-a-thon going on right now for Metro Exodus:

https://www.overclock.net/forum/410...344-metro-exodus-bench-thon.html#post28377750

I plan on including the OCN community a bit more with these moving forward. Feel free to join in!


----------



## Hundsbuah

Just found this BIOS - does someone know if this GPU has a reference PCB?

https://www.techpowerup.com/vgabios/215860/galax-rtx2080ti-11264-190726


----------



## loginwong

Hundsbuah said:


> just found this bios - does s.o. know if this gpu has a reference pcb?
> 
> https://www.techpowerup.com/vgabios/215860/galax-rtx2080ti-11264-190726


This GPU does not have a reference PCB.


----------



## RAGEdemon

MSI Duke OC XUSB ([email protected]%TDP Max) firmwares: more recent firmwares have been uploaded to TechPowerUp, which now flashed OK.

My Duke is Liquid Metal re-pasted. Fans set to 100% @ 55C; 0% @ 50C (Very Steep though Very Quiet). I also have 2x120mm case fans directly pointed at the card hacked into the PC case cover, so there is no real heat buildup.

Every card, and especially every manufacturer, has a different fan configuration and type - i.e. number of fans and their RPM range - meaning that firmware for one reference PCB card may not cool as well on another reference PCB card. I believe this is the reason for such large variations between users.

I have tested 3 BIOSes:

1. MSI Gaming X Trio [3 connector] 330W - Automatic OC scanner - TDP maxes out at 330W and the power limit is always hit in the final stage. Final OC results are 100MHz above the default Duke OC firmware. GPU-Z main tab shows a solid core OC set at 1900MHz. The Duke used to do 1800 at OC scanner max overclock. This has been excellent for daily use. Temps increased from 55C to 59C.

A note here - the Gaming X Trio doesn't have power balancing circuitry on the PCB, but the firmware is agnostic to this. The Duke OC DOES have power balancing, so arguably, Duke OC + Gaming X Trio BIOS is actually 'better+safer+cheaper', in my personal opinion, than the stock Gaming X Trio card.

2. EVGA FTW3 [3 connector] 373W - Automatic OC scanner - TDP maxes out at 373W and the power limit is always hit in the final stage. Final OC results are 80MHz above the default Duke OC firmware OC scanner max overclock. GPU-Z main tab shows a solid core OC set at 1880MHz (previously 1800MHz OC scanner max overclock). Temps increased from 55C to 65C. Went back to the Gaming X Trio BIOS.

3. Gigabyte Gaming OC [2 connector] 366W - Automatic OC scanner - TDP maxes out at 366W and the power limit is always hit in the final stage. Final OC results are WORSE than the original Duke OC firmware OC scanner max overclock. Temps increased from 55C to 77C.

Changing the core voltage to +100 in MSI Afterburner resulted in around -15MHz OC, so I left that at 0 - generally the power limit ensures the voltage never reaches the +X you set, so ultimately this is useless unless you're on water.

All tests done at 50% Humidity @ 25C.

EDIT:
Testing shows that although there was a 100MHz increase in overclocking capability in the OC scanner, this does not translate into overclocking while gaming.

All the extra-power BIOSes I tried actually have a DETRIMENTAL effect on overclocking, and this is why:

Simply, the extra power = extra heat = lowered OC.

I went back to the original Duke OC BIOS and set the fan to 100% @ 45C. By far, this gives the best gaming overclocking results; e.g. in Control at 4K with the GPU at 100%, the card stays @~2050 for the most part.

Comparing this with the other 'high power' BIOSes, they went down to <1990MHz during gameplay.

The trick is to have better cooling, ensuring the higher power is dissipated and the temperature doesn't downclock your overclock.

Summary: If you are on air - stay with the original Duke OC BIOS for optimum overclocking, folks...


----------



## Medizinmann

TedOlsthoorn said:


> Yeah, it seems to be limited to about 2200rpm for all three fans at 100%, which isn't enough with this stock cooler for the higher power limit unfortunately.
> 
> I'm not familiar with hardware modding, so I won't try to give you any advice on your project, haha. Good luck though!


The "easy" way and IMHO best solution would be an Alphacool Eiswolf 240 GPX Pro (and of course LM as TIM while you're at it :thumb...), but it will cost around 180-200€ (maybe less, since some models are discounted right now to 160€). But all needed thermal pads are provided and the manual is easy to follow.

Greetings,
Medizinmann


----------



## TedOlsthoorn

Medizinmann said:


> The "easy" way and IMHO best solution would be an Alphacool Eiswolf 240 GPX Pro (and of course LM as TIM while your are at it :thumb...but it will cost around 180-200€(maybe less since some modells are discounted right now to 160€). But all needed thermal pads are provided and the manual is easy to follow.
> 
> Greertings,
> Medizinmann


I think at this point I'd rather wait it out until the next generation drops instead of spending more for a liquid cooler. Thanks for the tips though!


----------



## Imprezzion

Hmm, mine's running Prolimatech PK-3 TIM now as that's my go-to performance paste, but I also have a full tube of Conductonaut laying here that I was going to use to delid my 9900K. I won't do that as it runs 5.1/4.8 just fine in terms of temperature on just a lapped IHS.

I could use the Conductonaut on the Accelero, maybe that'll drop temperatures even further?


----------



## Medizinmann

Imprezzion said:


> Hmm, mine's running Prolimatech PK-3 TIM not as that's my go-to performance paste but I also have a full Conductonaut laying here I was goin to use to delid my 9900K but I won't do that as it runs 5.1/4.8 just fine in terms of temperature on just a lapped IHS.
> 
> I could use the Conductonaut on the Accelero maybe that'll drop temperatures even further?


It would for sure - I would expect 5-10°C - LM as TIM is very effective with GPUs since we are talking about direct die cooling!

Question is - what material is the coldplate of the Accelero made of? It looks like copper - which should work...but I don't know for sure.
And make sure to use conformal coating or something similar to protect the components around the GPU die...

BTW: What do you use to cool the VRMs and RAM? Did you install extra heatsinks there?

Greetings,
Medizinmann


----------



## kithylin

Hundsbuah said:


> just found this bios - does s.o. know if this gpu has a reference pcb?
> 
> https://www.techpowerup.com/vgabios/215860/galax-rtx2080ti-11264-190726


I wanted to share a little information for you for the future: you could figure this out yourself. First do a Google image search for "RTX 2080 Ti Founders Edition PCB". Then do another image search for "Galax RTX 2080 Ti Hall of Fame PCB". Comparing the images, you can see the Galax card is very different and not a reference design at all.


----------



## Imprezzion

Medizinmann said:


> It would for sure - I would expect 5-10°C - LM as TIM is very effective with GPUs since were are talking about direct die cooling!
> 
> Question is - what material is the coldplate of the Accelero made of? It looks like Copper - which should work...but I don't know for sure.
> And make sure to use conformal coat of something similar to protect componente around the GPU die...
> 
> BTW: What to you use to cool VRMs and RAM? Did you install extra heatsinks there?
> 
> Greetings,
> Medizinmann


Yeah I did. Enzotech 14x14mm heatsinks for the RAM (come with pre-applied thermal tape) and Alphacool 6.5x6.5mm VRM heatsinks (don't come with tape, I used Akasa tape).


----------



## AndrejB

Anyone know what thickness thermal pads go on a Strix and which brand?

The factory ones seem not to be mounted properly as the vrms are always 5c higher than the core. (Metro exodus 70c core, 75c vrms @ 1.006v)

This is my 3rd card, RMAd the first two (finally got a strong core and sammy mem) but with the previous two the vrms were always the same temp as the core.


----------



## jura11

AndrejB said:


> Anyone know what thickness thermal pads go on a Strix and which brand?
> 
> The factory ones seem not to be mounted properly as the vrms are always 5c higher than the core. (Metro exodus 70c core, 75c vrms @ 1.006v)
> 
> This is my 3rd card, RMAd the first two (finally got a strong core and sammy mem) but with the previous two the vrms were always the same temp as the core.


Hi Andrej 

I think they're using 1mm thermal pads. As for which are best - Fujipoly are among the best, along with the Alphacool ones, which I have used on two RTX 2080 Tis, and with both of them I've had the best experience and results.

I would try measuring them personally. I remember once I used thinner pads on a Phanteks Glacier WB and my temperatures went through the roof - a friend bought the WB used without the pads, and at the time I used my EK thermal pads, which wasn't the best solution hahaha, because under water we were seeing 60°C on the core and 75°C on the VRM.

In the end I contacted Phanteks and they sent me thermal pads, and no more issues - temperatures on the core are 38-42°C and the VRM has been 46-50°C max.

In my case my VRM temperatures are always higher than core temperatures; this has been the case from day one on my Asus RTX 2080Ti Strix with the EK Vector RTX 2080Ti WB.

Hope this helps 

Thanks, Jura


----------



## Imprezzion

They will always be higher. A VRM isn't a "chip". It doesn't have to run under 80c. Most VRMs are rated for like 125c and they don't perform (much) worse at 75c compared to 45c. It's a power component, not a microchip.

VRM 5c higher than core is amazing actually lol. Anything under 90c is plenty of headroom.


----------



## AndrejB

Thank you both, for all the info!!!

So one more small question, I would have to get two thermal sheets, one 0.5mm for the chokes and one 1mm for the vrm? Like on these ekwb instructions:

https://www.google.com/url?sa=t&sou...Vaw18kGdLR7mvO8BXBlSZJadN&cshid=1585137993939

I won't be doing this now; I guess all the issues I had with the previous two cards (DXGI device hung in most games and barely any OC room on the Matrix BIOS) were maybe caused by the VRMs not delivering full power. Currently I can push 2070MHz @ 1v with no issues.


----------



## dave.knb

*Anyone got experience with flashing a Bios to a new xusb fw card?*

Hi All,

Unfortunately I got an MSI RTX 2080 Ti Gaming X Trio with a new XUSB FW ID: 0x70100003,
and this one has the following BIOS on it (see GPU-Z screenshot).

The problem is that there is no BIOS out there that I am aware of that will flash onto my card (XUSB FW ID mismatch).

Do you guys know any solution for how I could get another BIOS on my card with a power limit bigger than 330W?
And which BIOS would you recommend flashing?


----------



## kithylin

Imprezzion said:


> They will always be higher. A VRM isn't a "chip". It doesn't have to run under 80c. Most VRM's are rated for like 125c and they don't perform (much) worse at 75c compared to 45c. It's a physical component not a microchip.
> 
> VRM 5c higher than core is amazing actually lol. Anything under 90c is plenty of headroom.


Yep, totally this. Back in the GTX 700 and GTX 900 days the VRMs on almost all cards tended to run at least +20c hotter than core temps, and sometimes +30c more. It's all perfectly normal and nothing to be concerned about. Even if your VRMs were running 100c when your core was 60c, that's still perfectly normal and nothing to worry about, even for RTX 2080 Tis.


----------



## sultanofswing

dave.knb said:


> Hi All,
> 
> Unfortunately i got a MSI RTX 2080 Ti Gaming X Trio with a new XUSB FW ID: 0x70100003
> and this one has the following bios on it (see GPU-Z Screenshot)
> 
> And the problem is that there is no Bios out there that I am aware of that would flash onto my card (XUSB FW ID Mismatch)
> 
> Do you guys know any solution how I could get a another Bios with a bigger Powerlimit on my card than 330w?
> and which bios would you recommend to flash?



Since you have the newer card, which has the newer vendor for the BIOS chip (MX vs ISSI), you can flash it with any MX BIOS available.

The best option I used on my old MX-based FE was the EVGA FTW3 373W BIOS.

If you want to flash a BIOS like, let's say, the Galax 380W BIOS, and the only option is the older ISSI-version BIOS, you will have to hard flash the card using the USB CH341A programmer method, which is actually pretty simple.


----------



## J7SC

FYI, DerBauer did a quick 'show and tell' of the 2080 Ti Cyberpunk 2077 edition


----------



## AndrejB

Anyone notice that after running oc scanner, the card can't drop to pcie 1.1, rather stays at pcie 2 (810mhz mem instead of 405)?


----------



## jura11

AndrejB said:


> Thank you both, for all the info!!!
> 
> So one more small question, I would have to get two thermal sheets, one 0.5mm for the chokes and one 1mm for the vrm? Like on these ekwb instructions:
> 
> https://www.google.com/url?sa=t&sou...Vaw18kGdLR7mvO8BXBlSZJadN&cshid=1585137993939
> 
> I won't be doing this now, guess all the issues I had with the previous two cards (dxgi device hung in most games and barely any oc room on the matrix bios) was maybe caused by the vrms not delivering full power. Currently I can push 2070mhz @ 1v with no issues



Hi Andrej 

These instructions are for the EKWB Vector waterblocks, so I'm not sure if they're applicable to your situation as you are running the stock air cooler, and I'm not 100% sure on the thickness of the thermal pads.

EKWB's are not the best thermal pads; try Fujipoly or Alphacool Eisschicht pads, which I think are rebranded Fujipoly pads or very similar to them.

That's a great OC at 1.0v. I folded in Folding@Home with 1.0v at 2130MHz with no issues; temperatures have been 38-40°C max.

For some other tests I use 1.02v, where 2130-2145MHz is possible - just not in rendering, because as always it will crash with CUDA errors.

Hope this helps 

Thanks, Jura


----------



## newls1

Quick question: I have a Gigabyte Gaming OC under a full cover waterblock, and while gaming the temps usually hover in the 31-33c range loaded, playing games like FC5 or Metro Exodus etc., with a room temp of 66-67f. Using an MSI Afterburner OC over the past several months of +150MHz core / +1100 mem, with the power slider maxed out, my GPU clock boosts to 2130 and sometimes drops to 2115MHz. Are these about average speeds for a reference 2080 Ti on water? I mean, I'm super happy and not 1 issue ever with it "yet", but was just wondering what your average joe gets for OCs using water on these huge ass GPUs... Thanks


----------



## AndrejB

jura11 said:


> Hi Andrej
> 
> These instructions are for EKWB Vector waterblocks and not sure if they're applicable to your case or yours situation as you are running stock air cooler and I'm not 100% on thickness of thermal pad
> 
> EKWB are not the best thermal pads, try Fujipoly or Alphacool Eisschicht pads which I think are rebranded Fujipoly pads or are very similar to them
> 
> That's great OC at 1.0v, I folded in [email protected] with 1.0v at 2130MHz and no issues, temperatures have been in 38-40°C as max and no issues
> 
> For some other tests I use 1.02v where 2130-2145MHz is possible just not in rendering as always because will crash with CUDA errors
> 
> Hope this helps
> 
> Thanks, Jura


Thanks Jura, guess I'll ask asus support when the time comes for specs.

I did find a retailer here in Canada that has the Fujipoly pads (the 13 W/mK and also the 17 W/mK ones, but those are a bit expensive and probably excessive).

Finally have a card that can compare with yours 🙂


----------



## chibi

AndrejB said:


> Thanks Jura, guess I'll ask asus support when the time comes for specs.
> 
> I did find a retailer here in canada that has the fujipoly pads (has 13kW/h and also 17kW/h but these are a bit expensive and probably excessive)
> 
> Finally have a card that can compare with yours 🙂



Curious, who is the retailer if I may ask?


----------



## AndrejB

chibi said:


> Curious, who is the retailer if I may ask?


Of course, seems like a small store in Toronto.

https://www.dazmode.com/


----------



## changboy

I sold my Gigabyte Aorus 1080 Ti Extreme WB and the guy should come Saturday to take it, but now I'm not sure if I should buy a 2080 Ti or wait for the new generation. I read it's been delayed and I don't know how long I'd need to wait; I have an older R9 290 I can put back in my main PC to help me hold out until the launch of the 3080 Ti.

What do you guys think about this? I remember when I bought my 1080 Ti, they launched the 2080 Ti 8 months later and I was a bit sad.


----------



## Medizinmann

AndrejB said:


> Thanks Jura, guess I'll ask asus support when the time comes for specs.
> 
> I did find a retailer here in canada that has the fujipoly pads (has 13kW/h and also 17kW/h but these are a bit expensive and probably excessive)
> 
> Finally have a card that can compare with yours 🙂





chibi said:


> Curious, who is the retailer if I may ask?





jura11 said:


> Hi Andrej
> 
> These instructions are for EKWB Vector waterblocks and not sure if they're applicable to your case or yours situation as you are running stock air cooler and I'm not 100% on thickness of thermal pad
> 
> EKWB are not the best thermal pads, try Fujipoly or Alphacool Eisschicht pads which I think are rebranded Fujipoly pads or are very similar to them
> 
> That's great OC at 1.0v, I folded in [email protected] with 1.0v at 2130MHz and no issues, temperatures have been in 38-40° C as max and no issues
> 
> For some other tests I use 1.02v where 2130-2145MHz is possible just not in rendering as always because will crash with CUDA errors
> 
> Hope this helps
> 
> Thanks, Jura


Last but not least - you could take a look at Thermal Grizzly Minus Pads - they seem to be more reasonably priced.

https://www.thermal-grizzly.com/en/products/13-minus-pad-8-en

https://www.newegg.com/p/pl?d=Thermal+Grizzly+Pad+Thermo+Minus+Pad+

Greetings,
Medizinmann


----------



## iSpark

I'm at the bottom of the top! lol

Have you ever wanted to have something that was top end, but you were always a day late and a dollar short? Always having to settle for the bottom or middle of the road?

Well, today I moved up to the bottom of the top end even though it was a "day late". hah
My current card is an EVGA GeForce GTX 680. Hey, progress right? Keep moving forward, even if it does take years...

Anywho, I now have an EVGA RTX 2080 Ti Black Edition.

It will be going under water, and the EK-Vector block is already installed. I ordered a combo block and backplate from Titan Rig but they didn't send the backplate with the order. sigh... -.-

Can this card be run without a backplate, at least until the backplate arrives?


----------



## Imprezzion

iSpark said:


> I'm at the bottom of the top! lol
> 
> Have you ever wanted to have something that was top end, but you were always a day late and a dollar short? Always having to settle for the bottom or middle of the road?
> 
> Well, today I moved up to the bottom of the top end even though it was a "day late". hah
> My current card is a EVGA GeForce GTX_ 680_. Hey, progress right? Keep moving forward, even if it does take years...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Anywho, I now have a EVGA RTX 2080 Ti Black edition.
> 
> It will be going under water, and the EK-Vector block is all ready installed. I ordered a combo block and backplate from Titan Rig but they didn't send the backplate with the order. sigh... -.-
> 
> Can this card be ran without a backplate, or at least until the backplate arrives?


Yeah sure. Only 1 downside to this specific model under water: the power limit. It's a Black Edition, which is a non-A chip, so it maxes out at 310w as there's no higher BIOS for non-A models.


----------



## philhalo66

Time for me to join the 2080 Ti club. grabbing a FTW3 Ultra Hybrid next week sometime will post a picture when i get it.


----------



## Imprezzion

Well, I just bought another 2080 Ti from my local supplier for a great deal. Last time I got a non-A Inno3D card and now I got an A card, the Gainward Phoenix Golden Sample.
I really hope that card has Sammy memory and will clock properly compared to my super terrible non-A card that barely holds 1935MHz on 1.043v and STILL throttles with a 310w BIOS.

Can a Gainward GS with the 380w Galax BIOS (I checked, it's compatible, both reference 3-fan PCBs) actually run 1.093v without throttling, or is even 380w not enough for that?

DW about the cooling, the Gainward cooler can hold its own for a while, I still have my modded Accelero IV laying around from the Inno3D, and otherwise I'll fab up some AIO cooler for the core and make a nice VRM heatsink like I did with my Inno3D.
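As a rough sanity check on whether 380w covers 1.093v: to first order, dynamic power scales with V² × f. Using the numbers from the post as a baseline (1.043v / 1935MHz at roughly the 310w limit) and assuming a hypothetical ~2100MHz at the higher voltage:

```python
# Back-of-envelope estimate of power draw at a higher voltage/clock.
# Baseline figures (1.043 V, 1935 MHz, ~310 W) come from the post above;
# the 2100 MHz target clock at 1.093 V is an assumption.

def scaled_power(p_base, v_base, f_base, v_new, f_new):
    """First-order CMOS dynamic power estimate: P scales with V^2 * f."""
    return p_base * (v_new / v_base) ** 2 * (f_new / f_base)

est = scaled_power(310, 1.043, 1935, 1.093, 2100)
print(f"~{est:.0f} W estimated at 1.093 V / 2100 MHz")
```

That lands around 370W, so a 380w limit would be marginal at best - and since leakage grows faster than V² as the card heats up, real draw could still clip it.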


----------



## GoldCartGamer

I have a MSI Gaming X Trio. My XUSB-FW is 0x70090003. If I am reading the OP correctly, I cannot flash the 400W unofficial bios posted due to it being older than my card? Is there a reason to at minimum update my card to the latest official bios? Or is there a different recommended bios for my card? My card is on the stock cooler.


----------



## sultanofswing

I personally feel it's a waste of time to flash a higher power limit BIOS to a card that is still running on an air cooler. No BIOS is going to gain you anything really unless you can keep the temps below 45c (40c preferably).


----------



## z390e

sultanofswing said:


> I personally feel that it's a waste of time to flash a higher power limit BIOS to a card that is still running on a air cooler. No BIOS is going to gain you anything really unless you can keep the temps below 45c (40c preferably).


This 100%


----------



## dave.knb

*Anyone got experience with flashing a Bios to a new xusb fw card?*



sultanofswing said:


> Since you have the newer card which has the newer vendor for the bios chip (MX vs ISSI) you can flash it with any MX bios available.
> 
> The best option I used on my old MX based FE was the EVGA FTW 3 373watt BIOS.
> 
> If you want to flash a BIOS like lets say the Galax 380 watt bios and the only option is the older ISSI version BIOS you will have to hard flash the card using the USB CH341A Programming method which is actually pretty simple.



Okay, so to my GPU: https://www.techpowerup.com/vgabios/213896/msi-rtx2080ti-11264-190528
I can flash a BIOS like this?: https://www.techpowerup.com/vgabios/207756/evga-rtx2080ti-11264-181029

Even when it's an older BIOS?
So will the XUSB FW IDs of these two match? And how can I check (with nvflash)?

--

This USB CH341A programming method - are there any guides on the web for how to use it?
And of course I have to check that my GPU is going to work with the BIOS I am flashing, else I'm a bit screwed, right?


Thanks in advance for the helpful answers


----------



## sultanofswing

dave.knb said:


> Okay, so to my GPU: https://www.techpowerup.com/vgabios/213896/msi-rtx2080ti-11264-190528
> i can flash a bios like this?: https://www.techpowerup.com/vgabios/207756/evga-rtx2080ti-11264-181029
> 
> even when its an older BIOS?
> so will the xusb fw id match with these two? and how could i check (with nvflash)?
> 
> --
> 
> This USB CH341A Programming method, are there any Guides how to use that on the web?
> And of course i have to check that my GPU is going to work with the bios I am flashing, else im a bit screwed right?
> 
> 
> Tanks in advance, for the helpful Answers


When I had my FE 2080ti this is the FTW3 BIOS I ran https://www.techpowerup.com/vgabios/207291/evga-rtx2080ti-11264-181107

To tell which BIOS you have you can run the command nvflash64 --protectoff
It will then tell you which BIOS chip you have.


----------



## long2905

sultanofswing said:


> When I had my FE 2080ti this is the FTW3 BIOS I ran https://www.techpowerup.com/vgabios/207291/evga-rtx2080ti-11264-181107
> 
> To tell which BIOS you have you can run the command nvflash64 --protectoff
> It will then tell you which BIOS chip you have.



Did you get zero fan and led changing ability when flashing the ftw3 vbios?


----------



## sultanofswing

long2905 said:


> Did you get zero fan and led changing ability when flashing the ftw3 vbios?


My card was on water so I didn't care about the LED and as far as 0 fan i'm not sure as it was on water.


----------



## Endeav

Which bios works with the new 2080Ti EEPROM's? I have a 2080Ti XC Ultra that I received as an RMA earlier this month, came in a newer box and not an EVGA RMA box. The 373 watt FTW3 and 400w galax bios I had flashed to my XC Ultra before RMA don't work with this new one.


----------



## long2905

sultanofswing said:


> My card was on water so I didn't care about the LEDs, and as for zero fan I'm not sure, for the same reason.



Very tempted to go complete custom loop but at this time ehhh


----------



## Imprezzion

I'm curious which version of EEPROM my Gainward Phoenix GS will have when it gets here. And if I flash it with the Galaxy or EVGA BIOS I am also quite curious if the fans and LED's will still work as intended.

Anyone here happen to have a Gainward Phoenix GS with a flashed BIOS? I mean, the card's PCB itself isn't anything special, just a 16 phase reference A chip PCB, but would be nice to know in advance which BIOS I should try.


----------



## Carillo

Hello guys  

Can anyone be so kind as to share the Galax XOC BIOS (link)? It seems to not be available on the front page. 

Jon


----------



## sultanofswing

Carillo said:


> Hello guys
> 
> Can anyone be so kind as to share the Galax XOC BIOS (link)? It seems to not be available on the front page.
> 
> Jon


https://www.techpowerup.com/vgabios/216553/galaxy-rtx2080ti-11264-181010

Use at your own risk. I would not use it on an air-cooled card either.


----------



## Endeav

sultanofswing said:


> When I had my FE 2080ti this is the FTW3 BIOS I ran https://www.techpowerup.com/vgabios/207291/evga-rtx2080ti-11264-181107
> 
> To tell which BIOS you have you can run the command nvflash64 --protectoff
> It will then tell you which BIOS chip you have.


Just want to say this BIOS worked on my 2080 Ti XC Ultra with the MX BIOS chip. Thanks.


----------



## Carillo

sultanofswing said:


> https://www.techpowerup.com/vgabios/216553/galaxy-rtx2080ti-11264-181010
> 
> Use at your own risk. I would not use it on an air-cooled card either.


Thanks, works great on my 2080 Ti FE PCB. Peaking at 480 watt in BF5 at 4K


----------



## sultanofswing

Carillo said:


> Thanks, works great on my 2080 Ti FE PCB. Peaking 480 watt in BF5 in 4k



What clockspeed?


----------



## Carillo

sultanofswing said:


> What clockspeed?


2250MHz core plus +1000 mem @ 1.125V, using a Predator XB3 4K 144Hz monitor


----------



## Imprezzion

Carillo said:


> Thanks, works great on my 2080 Ti FE PCB. Peaking 480 watt in BF5 in 4k


Lol, that BIOS works on a Founders PCB? Never knew... If I find a 380W BIOS not enough I might just run this one.. And yes, I will run it on air because I'm YOLO like that. My 1080 Ti Armor ran fine for ages with the XOC BIOS at 1.150v and similar power draw, so this should be juuuust fine.

Of course I will watercool it later on but as I'm still binning 2080 Ti's I'm just aircooling them until I have one that will do 2200+ easily.


----------



## Carillo

Imprezzion said:


> Lol, that BIOS works on a Founders PCB? Never knew... If I find a 380W BIOS not enough I might just run this one.. And yes, I will run it on air because I'm YOLO like that. My 1080 Ti Armor ran fine for ages with the XOC BIOS at 1.150v and similar power draw, so this should be juuuust fine.
> 
> Of course I will watercool it later on but as I'm still binning 2080 Ti's I'm just aircooling them until I have one that will do 2200+ easily.


I think your VRMs will burn up within hours if using this BIOS on air  Yes it does work, almost perfectly. Both the KingPin and Asus XOC BIOSes are useless if not on LN2. It's the FE design, but it's a DELL card with a PCB made by Texas Instruments, real quality, not china bull 

Couple of issues I noticed. 1: If a saved overclock in MSI Afterburner is applied, the driver will crash and I get a blue screen, so I need to set the curve every time I start Windows. 2: If the power limit slider is touched, same blue screen.

With my old 1440p monitor the 380W limit was no issue, but at 4K I really need the extra juice, and I can really see a big FPS gain. Both BF5 and Modern Warfare are pushing 440 watt on average


----------



## Imprezzion

Carillo said:


> I think your VRMs will burn up within hours if using this BIOS on air  Yes it does work, almost perfectly. Both the KingPin and Asus XOC BIOSes are useless if not on LN2. It's the FE design, but it's a DELL card with a PCB made by Texas Instruments, real quality, not china bull 
> 
> Couple of issues I noticed. 1: If a saved overclock in MSI Afterburner is applied, the driver will crash and I get a blue screen, so I need to set the curve every time I start Windows. 2: If the power limit slider is touched, same blue screen.
> 
> With my old 1440p monitor the 380W limit was no issue, but at 4K I really need the extra juice, and I can really see a big FPS gain. Both BF5 and Modern Warfare are pushing 440 watt on average


I'm on 1080p 144Hz but use upscaling in most games that natively support it like Modern Warfare and Borderlands 3. Both games on Ultra with 125% scaling absolutely smash the 310w limit my non-A chip has even at 1.043v and it limits me to ~1935-1950Mhz as this non-A chip is the bottom of the barrel bin. Also some random OEM card made by Inno3D but it has a single slot I/O and such.

I decided to swap that card for a Gainward Phoenix GS, which is an A chip 16 phase reference board and was the cheapest A chip ref board here for watercooling. But I don't wanna re-do my loop and buy a block before I know I have a good card / bin.

It will have to survive a few days on 380w or more for the binning unfortunately hehe.


----------



## Medizinmann

Carillo said:


> Thanks, works great on my 2080 Ti FE PCB. Peaking 480 watt in BF5 in 4k



Are all DP ports still usable with it?


Greetings,
Medizinmann


----------



## Carillo

Imprezzion said:


> I'm on 1080p 144Hz but use upscaling in most games that natively support it like Modern Warfare and Borderlands 3. Both games on Ultra with 125% scaling absolutely smash the 310w limit my non-A chip has even at 1.043v and it limits me to ~1935-1950Mhz as this non-A chip is the bottom of the barrel bin. Also some random OEM card made by Inno3D but it has a single slot I/O and such.
> 
> I decided to swap that card for a Gainward Phoenix GS, which is an A chip 16 phase reference board and was the cheapest A chip ref board here for watercooling. But I don't wanna re-do my loop and buy a block before I know I have a good card / bin.
> 
> It will have to survive a few days on 380w or more for the binning unfortunately hehe.


If you are aiming for a 2205+ bin with normal water cooling, you'll probably still be binning cards when the 3080 Ti is old


----------



## Carillo

Medizinmann said:


> Are all DP ports still usable with it?
> 
> 
> Greetings,
> Medizinmann


Yes, as I said I'm on 4K 144Hz, and that's not possible using HDMI. The DisplayPort I'm using right now is at least working; I have not tried the others.


----------



## Imprezzion

Carillo said:


> If you are aiming for a 2205+ bin with normal water cooling, you'll probably still be binning cards when the 3080 Ti is old


Well yeah 2200+ might be a bit much, I'm honestly aiming for one that stays above 2100Mhz in any situation. Closer to 2200 would be nice but money isn't infinite either and I don't plan to bin like 15 of them or something. Just like, when I see a cheap one float around either new, open box, or even second hand, I will buy it and bin it lol. I mean, I had a 1080 Ti that, with XOC and 1.200v, could do 2180Mhz just fine. Not on air tho.

Oh, and 144Hz works fine on HDMI btw. Just create a custom resolution in the Nvidia CP and it runs just fine here over HDMI on an Iiyama GB2788HS.


----------



## Carillo

Imprezzion said:


> Well yeah 2200+ might be a bit much, I'm honestly aiming for one that stays above 2100Mhz in any situation. Closer to 2200 would be nice but money isn't infinite either and I don't plan to bin like 15 of them or something. Just like, when I see a cheap one float around either new, open box, or even second hand, I will buy it and bin it lol. I mean, I had a 1080 Ti that, with XOC and 1.200v, could do 2180Mhz just fine. Not on air tho.
> 
> Oh and 144Hz works fine on HDMI btw. Just create a custom resolution in nVidia CP and it runs it just fine here on HDMI on a Iiyama GB2788HS.


Yes, for 144Hz at 1440p you will need at least HDMI 2.0 or DisplayPort 1.2, but for 4K 144Hz you are going to need HDMI 2.1 (which I don't have) or, alternatively, DisplayPort 1.4, which maxes out at 120Hz without compression
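A rough sanity check of those limits, counting active pixels only (real links also carry blanking, so the true requirement is a bit higher; the payload figures below are the commonly quoted post-encoding maxima for HDMI 2.0 and DP 1.4 HBR3, not official numbers from this thread):

```python
# Rough uncompressed bandwidth check for common display links.
# Active-pixel data rate only; blanking overhead is ignored.

def gbps_needed(width, height, hz, bits_per_pixel=24):
    """Raw video data rate in Gbit/s for uncompressed 8-bit RGB."""
    return width * height * hz * bits_per_pixel / 1e9

# Approximate usable payload (Gbit/s) after 8b/10b line coding.
LINKS = {
    "HDMI 2.0":    14.4,   # 18.0 Gbps raw link rate
    "DP 1.4 HBR3": 25.92,  # 32.4 Gbps raw link rate
}

for hz in (60, 120, 144):
    need = gbps_needed(3840, 2160, hz)
    fits = [name for name, cap in LINKS.items() if cap >= need]
    print(f"4K @ {hz} Hz needs ~{need:.1f} Gbps -> {fits or 'needs DSC / chroma subsampling'}")
```

4K120 squeaks under the DP 1.4 payload, while 4K144 exceeds it, which is why 144Hz at 4K needs HDMI 2.1 or DSC.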


----------



## Apothysis

Carillo said:


> I think your VRMs will burn up within hours if using this BIOS on air  Yes it does work, almost perfectly. Both the KingPin and Asus XOC BIOSes are useless if not on LN2. It's the FE design, but it's a DELL card with a PCB made by Texas Instruments, real quality, not china bull 
> 
> Couple of issues I noticed. 1: If a saved overclock in MSI Afterburner is applied, the driver will crash and I get a blue screen, so I need to set the curve every time I start Windows. 2: If the power limit slider is touched, same blue screen.
> 
> With my old 1440p monitor the 380W limit was no issue, but at 4K I really need the extra juice, and I can really see a big FPS gain. Both BF5 and Modern Warfare are pushing 440 watt on average


What about saving a profile and applying it? I also noticed the same crash on Windows login with OC applied on startup. Wonder if there's a way to apply the profile automatically a few moments after afterburner starts..


BUT! Important news for anyone who like me is sitting on an MSI Trio X that seems to be hardlocked at 330W TDP: The GALAX XOC-bios *actually works*. I've been getting 400+ W and perflimit reasons are voltage-related. Keep in mind this bios also goes up to 1.125v so it's not for everyone.

So far I've also tried the 406w Trio-bios, 380w Galax-bios, 380w Lightning Z-bios - none of them exceed 330W and hit PL.


Also, any thoughts on running 1.125v daily under a block? Staying below 50c at all times.


----------



## Carillo

Apothysis said:


> What about saving a profile and applying it? I also noticed the same crash on Windows login with OC applied on startup. Wonder if there's a way to apply the profile automatically a few moments after afterburner starts..
> 
> 
> BUT! Important news for anyone who like me is sitting on an MSI Trio X that seems to be hardlocked at 330W TDP: The GALAX XOC-bios *actually works*. I've been getting 400+ W and perflimit reasons are voltage-related. Keep in mind this bios also goes up to 1.125v so it's not for everyone.
> 
> So far I've also tried the 406w Trio-bios, 380w Galax-bios, 380w Lightning Z-bios - none of them exceed 330W and hit PL.
> 
> 
> Also, any thoughts on running 1.125v daily under a block? Staying below 50c at all times.


Also, saving a profile and applying it gets me a blue screen. I have to manually set the curve on boot. Writing a script, or finding another OC tool, might be a workaround. So far I have tried EVGA X1 (fails when touching the curve) and Asus GPU Tweak (profile works, even auto-applies after boot, but voltage regulation is all over the place), so I'm still using Afterburner. I have to add that I flashed last night, so I haven't had much time to try things.

As for what voltage is safe for 24/7? I would say it's temp dependent. As long as you are able to cool it, I personally would have no problem using 1.125V 24/7. But who knows? Guess I have to find out. Also curious if the card can handle 500 watt daily usage over time  BF5 is hitting 495 watt.

I find it hard to believe that there is no one in here who can share some experience using this BIOS?


----------



## Imprezzion

I don't think anyone has even tried that BIOS on other cards, as it's a non-reference BIOS that shouldn't work on other non-ref cards like an X Trio, or even on a reference card, but apparently it does lol. With the exception of the crashes in Afterburner, that is. And I wonder if the RGB and fan control work properly...

We'll see tomorrow when my Gainward gets here.


----------



## Apothysis

Imprezzion said:


> I don't think anyone even tried that BIOS on other cards as it's a non-reference BIOS that shouldn't work on other non-ref cards like a X Trio or even a reference card but apparently it does lol. With the exception of the crashes in Afterburner and I wonder if the RGB and fan control works properly...
> 
> We'll see tomorrow when my Gainward gets here.



Yeah I'm tempted to try the Galax 450W bios now in the hopes that it'll work both PL-wise and without the bugs.


Update: Galax 450W bios also works on MSI X Trio. Sadly, much like with other bios, it's stuck at power limit @ 330W.


Update #2: After playing around with this a bit, I think the issue with the X Trio cards that have this bug is that they ignore the % power limit and default back to the card's original 110%. Most BIOSes have a base TDP of 300W, which ends up at 330W with the card's default PL %, and that lines up with my readings in GPU-Z (for instance, the Galax 450W BIOS has a base TDP of 300W but extends the power limit % to 150%, yet it still caps out at 330W and 110% TDP). I believe this is why the Galax XOC BIOS works: it sets the base TDP to 2000W while the percentage remains at 100%. This is probably also why the Lightning Z 380W BIOS works somewhat, with its base at 350W.
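The arithmetic behind that theory can be sketched as a toy model (the wattages are the ones quoted in this thread; the "PL% gets ignored" fallback is the hypothesis above, not documented behaviour):

```python
# Toy model: effective power limit = base TDP x applied power-limit %.

def effective_limit_w(base_tdp_w, pl_percent):
    """Effective board power limit in watts."""
    return base_tdp_w * pl_percent / 100

# Normal case: Galax 450W BIOS with the slider at its 150% maximum.
print(effective_limit_w(300, 150))   # 450W as advertised

# Hypothesised X Trio bug: PL% ignored, card falls back to its own 110%.
print(effective_limit_w(300, 110))   # 330W cap observed in GPU-Z

# Galax XOC BIOS: enormous base TDP at a fixed 100%, so the bug is moot.
print(effective_limit_w(2000, 100))  # effectively uncapped
```

This is consistent with the readings reported here: every 300W-base BIOS lands at 330W on the X Trio regardless of its advertised PL range.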


----------



## Medizinmann

Carillo said:


> Yes, as I said I'm on 4K 144Hz, and that's not possible using HDMI. The DisplayPort I'm using right now is at least working; I have not tried the others.



Yeah one is/was expected - but I need them all + USB-C... 


Greetings,
Medizinmann


----------



## Carillo

I understand


----------



## Carillo

Imprezzion said:


> I don't think anyone even tried that BIOS on other cards as it's a non-reference BIOS that shouldn't work on other non-ref cards like a X Trio or even a reference card but apparently it does lol. With the exception of the crashes in Afterburner and I wonder if the RGB and fan control works properly...
> 
> We'll see tomorrow when my Gainward gets here.


I will try to create a script in PowerShell using an MSI API created some years ago. If it works, I will let you know and share it here if anyone is interested.
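In the meantime, something like the following could re-apply a saved profile shortly after login instead of using Afterburner's own startup option (a sketch, not a tested fix: the install path is an assumption for your system, and `-Profile1` is Afterburner's command-line switch for applying saved profile slot 1):

```python
import subprocess
import time

# Assumed default install path -- adjust for your system.
AFTERBURNER = r"C:\Program Files (x86)\MSI Afterburner\MSIAfterburner.exe"

def build_apply_command(profile_slot=1):
    """Command line asking Afterburner to apply saved profile slot N."""
    return [AFTERBURNER, f"-Profile{profile_slot}"]

def apply_profile_after_delay(delay_s=30, profile_slot=1):
    """Wait for the desktop to settle, then apply the saved profile.

    Schedule this at login (e.g. via Task Scheduler) so the curve is
    re-applied automatically, sidestepping the blue screen that the
    'apply at startup' option triggers with this BIOS.
    """
    time.sleep(delay_s)
    subprocess.run(build_apply_command(profile_slot), check=True)

# Show the command that would be scheduled, without running it.
print(build_apply_command(1))
```

Whether applying the profile this way also blue-screens like Afterburner's startup option is untested; worth trying with a long delay first.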


----------



## truehighroller1

Carillo said:


> Also, saving a profile and applying it gets me a blue screen. I have to manually set the curve on boot. Writing a script, or finding another OC tool, might be a workaround. So far I have tried EVGA X1 (fails when touching the curve) and Asus GPU Tweak (profile works, even auto-applies after boot, but voltage regulation is all over the place), so I'm still using Afterburner. I have to add that I flashed last night, so I haven't had much time to try things.
> 
> As for what voltage is safe for 24/7? I would say it's temp dependent. As long as you are able to cool it, I personally would have no problem using 1.125V 24/7. But who knows? Guess I have to find out. Also curious if the card can handle 500 watt daily usage over time  BF5 is hitting 495 watt.
> 
> I find it hard to believe that there is no one in here who can share some experience using this BIOS?



I've used it on my Asus Dual OC card myself. Same results as you though, blue screens etc..


----------



## Apothysis

I tested yet another XOC BIOS, the EVGA one with 2000W, and it definitely worked as well (hitting ~550W and perfcap was voltage). And finally the EVGA Kingpin Hybrid BIOS with 361W/520W base/boost TDP, and it FREAKING WORKS! So it definitely seems that the bug with the X Trio cards is related to the base TDP and ignoring the % power setting.










So to anyone stuck with an MSI X Trio that won't exceed 330W, give the EVGA 361W BIOS a try if you're not comfortable with an XOC BIOS.


----------



## Carillo

Apothysis said:


> I tested yet another XOC bios, the EVGA one with 2000W and it definitely worked as well (hitting ~550W and perfcap was voltage) and finally the EVGA Kingpin Hybrid bios with 361W/520W base/boost TDP and it FREAKING WORKS! So it definitely seems that the bug with the X Trio-cards is related to base TDP and ignoring the % power-setting.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> So to anyone stuck with an MSI X Trio that won't exceed 330W, give the EVGA 361W bios a try if you're not comfortable with XOC-bios.


The "EVGA one" you are referring to, is that the KingPin BIOS? Or something else? Please link the BIOSes you are talking about  Does the curve editor work on those EVGA BIOSes? I know it's disabled on the Kingpin XOC BIOS, so useless for me

EDIT: Tried the Kingpin Hybrid BIOS now, and my card got stuck @ 300MHz hitting PL


----------



## DADDYDC650

I'm sure this has been asked a billion times but would it be worth flashing my EVGA GeForce RTX 2080 Ti BLACK EDITION GAMING? It currently runs between 1860-1980Mhz at max. What would be the best BIOS for my card? Ty in advance.


----------



## Carillo

DADDYDC650 said:


> I'm sure this has been asked a billion times but would it be worth flashing my EVGA GeForce RTX 2080 Ti BLACK EDITION GAMING? It currently runs between 1860-1980Mhz at max. What would be the best BIOS for my card? Ty in advance.


That is a non-A chip, so the Palit 310W non-A bios you find on the first page will work.


----------



## DADDYDC650

Carillo said:


> That is a non-A chip, so the only non-A bios you find on the first page will work.


Ty for responding. Do you know what people are getting after flashing with my card? Would it be worth it to flash?


----------



## Apothysis

Carillo said:


> The "EVGA one" you are referring to, is that the KingPin BIOS? Or something else? Please link the BIOSes you are talking about  Does the curve editor work on those EVGA BIOSes? I know it's disabled on the Kingpin XOC BIOS, so useless for me
> 
> EDIT: Tried the Kingpin Hybrid BIOS now, and my card got stuck @ 300MHz hitting PL



Seems my post was a bit premature, I noticed similar behaviour just now actually. It seems to draw A LOT of power for some reason? Throttling very hard in Timespy due to power @ 450W, losing ~100 MHz on the core.


----------



## J7SC

FYI, the stock BIOS on my two Aorus Xtreme WB cards (which have 2x 8-pin EPS) regularly pulls between 375-380W... may be helpful to some of you, with the usual warnings: use at your own risk if you want to try, and some of your ports may be impacted, given the custom PCB of the factory water-blocked cards.

https://www.techpowerup.com/vgabios/208102/gigabyte-rtx2080ti-11264-181206


----------



## Carillo

DADDYDC650 said:


> Ty for responding. Do you know what people are getting after flashing with my card? Would it be worth it to flash?


Well, your card is 280W with your current BIOS, so 30 watts extra sure helps. How much you will gain all depends on your current cooling solution.


----------



## DADDYDC650

Carillo said:


> Well, your card is 280W with your current BIOS, so 30 watts extra sure helps. How much you will gain all depends on your current cooling solution.


According to the OP it doesn't seem possible to flash my GPU since it's a non-A. Bummer.

NVM. I guess I can use the Palit non-A BIOS, but the link is broken.


----------



## Carillo

DADDYDC650 said:


> According to OP it doesn't seem possible to flash my GPU since it's a non-A. Bummer.


Yes, it's possible. It depends on whether your card is pre-mid-2019 or later... read the front page


----------



## DADDYDC650

Carillo said:


> Yes, it's possible. It depends on whether your card is pre-mid-2019 or later... read the front page


Purchased in February of 2019. So much to read...


----------



## DADDYDC650

Does anyone have the Palit RTX 2080 Ti Dual (Non-A) Reference 310w BIOS? OP link is broken.


----------



## Imprezzion

DADDYDC650 said:


> Does anyone have the Palit RTX 2080 Ti Dual (Non-A) Reference 310w BIOS? OP link is broken.


Yeah, I do somewhere. But I'm not sure if it's Palit. There are more brands that have the 310W BIOS and I'm not sure which one I have. 

I'll check after dinner.


----------



## Carillo

Apothysis said:


> Yeah I'm tempted to try the Galax 450W bios now in the hopes that it'll work both PL-wise and without the bugs.
> 
> 
> Update: Galax 450W bios also works on MSI X Trio. Sadly, much like with other bios, it's stuck at power limit @ 330W.
> 
> 
> Update #2: After playing around with this a bit, I think the issue with the X Trio-cards that has this bug is that it'll ignore the % power limit and default back to the cards original 110%. Most bios have a base TDP of 300W which ends up at 330W with the cards default PL % and it lines up with my readings in gpu-z (for instance, the Galax 450W bios has a base TDP of 300W but extends the power limit % to 150%, but its still capping out at 330W and 110% TDP). I believe this is why the Galax XOC-bios works, it sets the base TDP to 2000W while the percentage remains at 100%. This is probably why the Lightning Z 380W bios works somewhat with the base at 350W.


It does not work on the FE PCB cards I have tried; PL hits @ 200 watts. The reason is the way current is measured over the power connectors: when using a BIOS meant for three connectors on a PCB having only two, the BIOS thinks you draw a lot more than you actually do


----------



## Medizinmann

DADDYDC650 said:


> Does anyone have the Palit RTX 2080 Ti Dual (Non-A) Reference 310w BIOS? OP link is broken.



This seems to be it...

https://www.techpowerup.com/vgabios/208274/palit-rtx2080ti-11264-190131




Greetings,
Medizinmann


----------



## sultanofswing

The Kingpin non-XOC BIOS will not work properly on an FE card.


----------



## DADDYDC650

Medizinmann said:


> DADDYDC650 said:
> 
> 
> 
> Does anyone have the Palit RTX 2080 Ti Dual (Non-A) Reference 310w BIOS? OP link is broken.
> 
> 
> 
> 
> This seems to be it...
> 
> https://www.techpowerup.com/vgabios/208274/palit-rtx2080ti-11264-190131
> 
> 
> 
> 
> Greetings,
> Medizinmann

Ty kind sir. Will flash soon.


----------



## Apothysis

Carillo said:


> It does not work on the FE PCB cards I have tried; PL hits @ 200 watts. The reason is the way current is measured over the power connectors: when using a BIOS meant for three connectors on a PCB having only two, the BIOS thinks you draw a lot more than you actually do


Played with a bunch of different BIOSes now. Some work, some don't. The whole base-TDP-plus-10% argument seems to hold up in most cases? But for instance the Galax 400W/450W BIOS only pulls 300-330W.. Very annoying bug.

I'm back on the Lightning Z BIOS for now; it seems to be the most problem-free even though the PL is low. The EVGA 360W/520W BIOS gives me over 400W in Timespy but doesn't seem to recognize some games as load and stays idle, while also pulling much more power than seems normal; not sure if it's the sensors that get borked because of the mismatch.

It's a shame the Galax XOC has the issues it does, otherwise I'd probably daily that  At least I managed to witness the card pull over 550W today; I thought it was doomed to stay at 330W


Definitely dodging MSI when Ampere comes


----------



## sultanofswing

Apothysis said:


> Carillo said:
> 
> 
> 
> It does not work on the FE PCB cards I have tried; PL hits @ 200 watts. The reason is the way current is measured over the power connectors: when using a BIOS meant for three connectors on a PCB having only two, the BIOS thinks you draw a lot more than you actually do
> 
> 
> 
> Played with a bunch of different bioses now. Some work, some don't. The whole 10% of base TDP argument seems to hold up in most cases? But for instance the Galax 400W/450W bios only pulls 300-330W.. Very annoying bug.
> 
> I'm back on the Lightning Z-bios for now, it seems to be the most problem-free even though the PL is low. The EVGA 360W/520W bios gives me over 400W in timespy but doesn't seem to recognize some games as load and stays idle while also pulling much more power than seems normal, not sure if its the sensors that get borked because of mismatch.
> 
> It's a shame the Galax XOC has the issues it does otherwise I'd probably daily that  At least I managed to witness the card pull over 550W today, I thought it was doomed to stay at 330W
> 
> 
> definitely dodging MSI when Ampere comes

This is the main reason I said screw it and bought a Kingpin. Nvidia locked these cards down pretty hard.


----------



## Carillo

sultanofswing said:


> This is the main reason I said screw it and bought a Kingpin. Nvidia locked these cards down pretty hard.


Well, the Galax XOC BIOS seems to be the best BIOS so far for the FE if 380 watt is not enough, and I do believe I have tried over 30 different BIOSes on 6 different cards.. The BIOS works great, I've been playing games for hours with it.. normal power draw, VF curve, high performance, with the only downside that I have to apply the curve manually every boot. No problem, I do it gladly


----------



## sultanofswing

Carillo said:


> Well, the Galax XOC BIOS seems to be the best BIOS so far for the FE if 380 watt is not enough, and I do believe I have tried over 30 different BIOSes on 6 different cards.. The BIOS works great, I've been playing games for hours with it.. normal power draw, VF curve, high performance, with the only downside that I have to apply the curve manually every boot. No problem, I do it gladly


Yea I ran that BIOS on my FE when I had it. It's a great BIOS but the Voltage curve method of overclocking gives poor results.


----------



## Imprezzion

Bit of a shame MSI uses custom PCBs for almost everything except the very cheapest models. This means the otherwise great quality VRM and such can't be used with most BIOSes, so it gets held back big time.


----------



## Apothysis

Imprezzion said:


> Bit of a shame MSI uses custom PCBs for almost everything except the very cheapest models. This means the otherwise great quality VRM and such can't be used with most BIOSes, so it gets held back big time.


It's weird though, because for some reason I have had 0 issues with the flashing process. Every BIOS so far from EVGA and GALAX (and other MSI BIOSes) that I've tested has seemed to work fine; no issues bricking the card or going into safe mode. The bug also happens with the X Trio 406W BIOS, and that's made for this card; it was the first BIOS I tested.. A couple of others in this thread have had the same issue with this particular card. It's very strange.



But, paying as much as I did for the X Trio only to find out that it's pretty much the only card that lacks power balancing, and that the extra 6-pin is completely useless, is very frustrating.


----------



## Carillo

sultanofswing said:


> Yea I ran that BIOS on my FE when I had it. It's a great BIOS but the Voltage curve method of overclocking gives poor results.


Poor results? Please elaborate. The voltage curve, or VF curve, is one of the main reasons I use the Galax XOC, and it's the only XOC BIOS as far as I know that actually has it activated.. that's the only way to get full control of the relation between core clock and voltage, which is a big advantage if you want your overclock stable...


----------



## sultanofswing

Carillo said:


> sultanofswing said:
> 
> 
> 
> Yea I ran that BIOS on my FE when I had it. It's a great BIOS but the Voltage curve method of overclocking gives poor results.
> 
> 
> 
> Poor results? Please elaborate. The voltage curve, or VF curve, is one of the main reasons I use the Galax XOC, and it's the only XOC BIOS as far as I know that actually has it activated.. that's the only way to get full control of the relation between core clock and voltage, which is a big advantage if you want your overclock stable...

Run a benchmark like this.
Target a clockspeed, let’s say 2100mhz.
Adjust core clock slider offset to whatever + value nets you 2100mhz and also max out voltage slider and run a benchmark take note of the voltage during the benchmark.

Now go back to default, now use your voltage curve editor to set 2100mhz at the voltage that was used in the first benchmark and post the results. You can even play around and use whatever voltage you’d like. Also make sure power limit and temp limit slider is maxed.


On my card using the core clock “slider” produces higher benchmark scores than the curve editor.

On my card I can run [email protected] using curve editor but running 2115 with the offset slider uses more power and produces better scores.

This is why you can overclock higher using the voltage curve because it does not use as much power as using the core offset slider.

[email protected] using voltage curve is completely stable for me.
2160 using the core offset slider is the most I can go using that method.

This is also the main reason most XOC bios out there disable the voltage curve. Extensive testing has been done by pretty much all the major overclockers and the consensus was the voltage curve editor was useless for them.


----------



## Imprezzion

sultanofswing said:


> Run a benchmark like this.
> Target a clockspeed, let’s say 2100mhz.
> Adjust core clock slider offset to whatever + value nets you 2100mhz and also max out voltage slider and run a benchmark take note of the voltage during the benchmark.
> 
> Now go back to default, now use your voltage curve editor to set 2100mhz at the voltage that was used in the first benchmark and post the results. You can even play around and use whatever voltage you’d like. Also make sure power limit and temp limit slider is maxed.
> 
> 
> On my card using the core clock “slider” produces higher benchmark scores than the curve editor.
> 
> On my card I can run [email protected] using curve editor but running 2115 with the offset slider uses more power and produces better scores.
> 
> This is why you can overclock higher using the voltage curve because it does not use as much power as using the core offset slider.
> 
> [email protected] using voltage curve is completely stable for me.
> 2160 using the core offset slider is the most I can go using that method.
> 
> This is also the main reason most XOC bios out there disable the voltage curve. Extensive testing has been done by pretty much all the major overclockers and the consensus was the voltage curve editor was useless for them.


This is exactly how the XOC 1080 Ti BIOS acted on my Lightning 1080 Ti, but it DID work with the curve on a Gaming X PCB. It's PCB and component dependent, apparently. I tested with Borderlands 3: standing still in one spot, 140 FPS at slider 2070MHz, 111 FPS at curve 2250MHz 1.200v. But with XOC on the Gaming X it ran the curve just fine at 2115MHz 1.180v at about 146 FPS, and it also applied after reboot just fine.

Probably the exact same for the 2080 Ti. It might work properly on certain PCBs and not on others.

I hope the delivery of my Gainward is not delayed any further and when it arrives tomorrow I'll try to see how the XOC BIOS behaves on that PCB which is just a 16 phase reference A chip PCB.


----------



## sultanofswing

Imprezzion said:


> This is exactly how the XOC 1080 Ti BIOS acted on my Lightning 1080 Ti but it DID work with Curve on a Gaming X PCB. It's PCB and component dependant appearantly. I tested with Borderlands 3. Standing still in 1 spot, 140 FPS at slider 2070Mhz, 111 FPS at Curve 2250Mhz 1.200v. But, with XOC on the Gaming X it ran curve just fine at 2115Mhz 1.180v at about 146FPS and also applied after reboot just fine.
> 
> Probably the exact same for the 2080 Ti. It might work properly on certain PCB's and not on others.
> 
> I hope the delivery of my Gainward is not delayed any further and when it arrives tomorrow I'll try to see how the XOC BIOS behaves on that PCB which is just a 16 phase reference A chip PCB.


Mine worked that way on my FE 2080 Ti on the stock BIOS, and it works the same way on my Kingpin on the stock BIOS or the XOC BIOS


----------



## Madblaster6

So sadly my MSI 2080 Ti Gaming X Trio's memory gave out. I originally had a card with Micron memory and an RMA did nothing. I recently purchased a new one and luckily it has Samsung memory that I can get +1300 on, instead of the +300 I got on the Micron one. Sadly this one came with the 0x70100003 firmware. I was wondering if anyone has successfully found a way to flash the 400W BIOS on these cards yet.


----------



## Carillo

sultanofswing said:


> Run a benchmark like this.
> Target a clockspeed, let’s say 2100mhz.
> Adjust core clock slider offset to whatever + value nets you 2100mhz and also max out voltage slider and run a benchmark take note of the voltage during the benchmark.
> 
> Now go back to default, now use your voltage curve editor to set 2100mhz at the voltage that was used in the first benchmark and post the results. You can even play around and use whatever voltage you’d like. Also make sure power limit and temp limit slider is maxed.
> 
> 
> On my card using the core clock “slider” produces higher benchmark scores than the curve editor.
> 
> On my card I can run [email protected] using curve editor but running 2115 with the offset slider uses more power and produces better scores.
> 
> This is why you can overclock higher using the voltage curve because it does not use as much power as using the core offset slider.
> 
> [email protected] using voltage curve is completely stable for me.
> 2160 using the core offset slider is the most I can go using that method.
> 
> This is also the main reason most XOC bios out there disable the voltage curve. Extensive testing has been done by pretty much all the major overclockers and the consensus was the voltage curve editor was useless for them.



Do you have any source references showing that this is why the curve is disabled on XOC BIOSes? I did as you said: pulled the slider until I had 2145 MHz @ 1.125V (which is max volts) and ran a benchmark, then used the V/F curve at 2145 @ 1.125V... average FPS differed by 0.9 FPS. So my suspicion was confirmed: 2145 MHz SLIDER = 2145 MHz CURVE. That's probably something you're doing wrong.


----------



## sultanofswing

Carillo said:


> Do you have any source references showing that this is why the curve is disabled on XOC BIOSes? I did as you said: pulled the slider until I had 2145 MHz @ 1.125V (which is max volts) and ran a benchmark, then used the V/F curve at 2145 @ 1.125V... average FPS differed by 0.9 FPS. So my suspicion was confirmed: 2145 MHz SLIDER = 2145 MHz CURVE. That's probably something you're doing wrong.


How would it be something I am doing wrong? There are only two ways to do it.
At any rate, it gives me worse results, and others have reported worse results too, so I will leave it at that.


----------



## sultanofswing

This was testing from a few weeks ago using the 2 methods. The results are minor but there is a difference.


----------



## Carillo

sultanofswing said:


> How would it be something I am doing wrong, there are only 2 ways to do it.
> Any rate it gives me worse results and has been reported by others to give them worse results so I will leave it at that.


Tested a lot of games now; there is no difference between curve and slider performance-wise at the same clocks and voltage... why would there be? And regarding your statement that "this is the reason it's disabled in other XOC BIOSes": what is your source for that theory?


----------



## sultanofswing

Carillo said:


> sultanofswing said:
> 
> 
> 
> How would it be something I am doing wrong? There are only two ways to do it.
> At any rate, it gives me worse results, and others have reported worse results too, so I will leave it at that.
> 
> 
> 
> Tested a lot of games now; there is no difference between curve and slider performance-wise at the same clocks and voltage... why would there be? And regarding your statement that "this is the reason it's disabled in other XOC BIOSes": what is your source for that theory?

I talked to Vince (Kingpin) about it and a few users on here have reported the same findings.

If you look at the little sheet I posted, tell me why 2115 on the curve uses only 304W but 2115 using the slider uses 346W at 1.075?

That testing was done on the standard Kingpin BIOS, not an XOC BIOS, by the way.


----------



## Mooncheese

sultanofswing said:


> When I had my FE 2080ti this is the FTW3 BIOS I ran https://www.techpowerup.com/vgabios/207291/evga-rtx2080ti-11264-181107
> 
> To tell which BIOS you have you can run the command nvflash64 --protectoff
> It will then tell you which BIOS chip you have.


How was this BIOS on the FE PCB? I finally got my replacement 2080 Ti FE; this time around it's an XC2 Ultra with Samsung B-die that does 1100 MHz at a 90% confidence rating and 2070 on the core with an undervolt of 1.013v @ ~37C (it drops down to 2055 MHz @ 1.006v above 39C).

I'm noticing some slight wattage-starvation-induced clock throttling down to 2040-2025 MHz in Timespy @ 337-345W with the XC2 VBIOS, and I'm tempted to try the FTW3, but I'm worried I might lose the ICX2 sensors in the process, even though both cards have them, since it's a different VBIOS and PCB. I'm also not very eager to flash this card after burning out my last 2080 Ti FE-PCB 300A card, which had Micron memory, with the Galax BIOS. This one does have 2.5 years remaining on the warranty, but I don't want to push it.

Really glad I found this XC2 locally for $900. It's basically an XC Ultra with ICX2 sensors, which adds tremendous peace of mind considering my last card's memory burned out because of inadequate contact between the memory modules and the waterblock: the previous owner was apparently using the factory thermal pads, which are thinner than the ones Phanteks supplies with their Glacier block, and although I visually confirmed the modules were making contact, it may not have been adequate, as the backplate was incredibly hot. This time around the backplate is barely lukewarm (same backplate).

Here's some running around in Ghost Recon Breakpoint after the recent Survival update @ 2055 MHz @ 1.006v. This is from when I had the card at only +600 MHz on the memory, before dialing the memory overclock in to 1100 MHz. It can do 1200 MHz and pass Port Royal, Timespy and Firestrike, but that results in a 66% confidence rating in MSI OC Scanner; 1100 MHz gives a 90% confidence rating, so this is about as good as it gets for the memory. I apologize for the quality; I was using NVIDIA GeForce Experience at only 4 Mbps (I've since turned it up to 7 Mbps) for this video:





This is what the memory temps look like in The Witcher 3 @340W (see attachment) 

I have 14mm OD acrylic, a distribution block, and all the fittings etc. on hand and ready to go; I'm waiting for taller M2.5x8mm backplate screws so I can secure the backplate, supposed to arrive by Friday.

I will probably do a pre-hard tubing video update with the current EK Duraclear before then though. 

Love this card but I want more power; curious how compatible the FTW3 VBIOS is with the reference PCB, longevity-wise. I'm certain the Micron memory failed on my last card, but I wonder if that Galax VBIOS had anything to do with it.

Oh, and the benches. Of particular note, Port Royal really responded to the memory overclock; we're talking about a 10% performance gain going from +500 MHz to +1200 MHz on the memory.

https://www.3dmark.com/compare/pr/237543/pr/235141

Timespy: (from left to right, +1200 MHz memory, +600 MHz memory, 1080 Ti): 

https://www.3dmark.com/compare/spy/11266430/spy/11255515/spy/2949106

Nearly 60% gain in Timespy vs 1080 Ti! 

Firestrike (from left to right, +1200 MHz memory, +600 Mhz memory, +0 Mhz memory): 

https://www.3dmark.com/compare/fs/22228119/fs/22210593/fs/21849140#


----------



## sultanofswing

Mooncheese said:


> sultanofswing said:
> 
> 
> 
> When I had my FE 2080ti this is the FTW3 BIOS I ran https://www.techpowerup.com/vgabios/207291/evga-rtx2080ti-11264-181107
> 
> To tell which BIOS you have you can run the command nvflash64 --protectoff
> It will then tell you which BIOS chip you have.
> 
> 
> 
> How was this BIOS on FE PCB? I finally got my replacement 2080 Ti FE, this time around an XC2 Ultra with Samsung B-Die that does 1100 MHz @ 90% Confidence Rating and 2070 on the core with an undervolt of 1.013v @ ~37C (it drops down to 2055 MHz @ 1.006v above 39C).
> 
> I'm noticing some slight wattage starvation induced clock throttling down to 2040, 2025 MHz in Timespy @ 337-345W with XC2 VBIOS and I'm tempted to try FTW3 but I'm worried that somehow I may lose the ICX2 sensors in the process, even though both cards have them as it's a different VBIOS and PCB. Also not very eager to flash this card after burning out my last 2080 Ti FE PCB 300A card that had Micron memory with the Galax BIOS. This one does have 2.5 years remaining on the warranty but I don't want to push it.
> 
> Really glad I found this XC2 local for $900, it's basically an XC Ultra with ICX2 sensors which adds tremendous peace of mind considering my last card's memory burned out because of inadequate contact with the memory modules and the waterblock as previous owner was apparently using factory thermal pads that are thinner than the ones supplied by Phanteks for their Glacier block and although I visually ascertained that the modules were making contact they may not have been making adequate contact as the back-plate was incredibly hot. This time around the back-plate is barely lukewarm (same backplate).
> 
> Here's some running around in Ghost Recon Breakpoint after the recent Survival update @ 2055 MHz @ 1.006v. This is from when I had the card only at +600 MHz on the memory before dialing in the memory overclock to 1100 MHz. It can do 1200 MHz and pass Port Royal, Timespy and Firestrike but results in a 66% Confidence Rating in MSI OC Scanner. 1100 MHz is 90% confidence rating so this is about as good as it gets for the memory. I apologize for the quality, I was using Nvidia Geforce Experience @ only 4 MBPS (I turned this up to 7 MBPS) with this video:
> 
> 
> 
> 
> 
> This is what the memory temps look like in The Witcher 3 @340W (see attachment)
> 
> I have 14mm OD Acrylic and a distribution block on hand and all the fittings etc ready to go, waiting for taller M2.5x8mm back-plate screws to arrive so I can secure the back-plate, supposed to arrive by friday.
> 
> I will probably do a pre-hard tubing video update with the current EK Duraclear before then though.
> 
> Love this card but wanting more power, curious as to how compatible the FTW3 vbios is with reference PCB longevity wise. I'm certain the Micron memory failed with my last card but I wonder if that Galax VBIOS had anything to do with it.
> 
> Oh and the benches, of particular note, Port Royal really responded to the memory overclock, we're talking about a 10% performance gain going from +500 Mhz to +1200 Mhz on the memory.
> 
> https://www.3dmark.com/compare/pr/237543/pr/235141
> 
> Timespy: (from left to right, +1200 MHz memory, +600 MHz memory, 1080 Ti):
> 
> https://www.3dmark.com/compare/spy/11266430/spy/11255515/spy/2949106
> 
> Nearly 60% gain in Timespy vs 1080 Ti!
> 
> Firestrike (from left to right, +1200 MHz memory, +600 Mhz memory, +0 Mhz memory):
> 
> https://www.3dmark.com/compare/fs/22228119/fs/22210593/fs/21849140#

The FTW3 BIOS worked OK but not much better than the default BIOS. My old card was an XC Ultra, so it didn't have ICX sensors.

You may not be seeing wattage starvation vs. it just being normal clock scaling with temperature. Remember, it will start scaling as low as 36C, kicks in every 5-6C, and drops 15 MHz each time, unless you are hitting the power limit. I think my XC Ultra's default power limit was 360-ish watts? Can't remember off the top of my head.

With you running 2055 @ 1.006v it appears you are using the voltage curve, which will starve the card for voltage.
Try 2055 with the offset slider and see what it does.
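For what it's worth, the temperature stepping described above can be sketched roughly like this. The ~36C threshold, ~5-6C bin width, and 15 MHz step are the values quoted in this thread, not NVIDIA-documented constants, and the 2085 MHz top clock is just an example figure:

```python
def boost_clock(top_clock_mhz: int, gpu_temp_c: float,
                first_bin_c: float = 36.0,
                bin_width_c: float = 6.0,
                step_mhz: int = 15) -> int:
    """Rough sketch of GPU Boost temperature stepping: the card holds
    its top bin below ~36C, then sheds ~15 MHz for every ~5-6C beyond
    that (unless the power limit kicks in first, which this ignores)."""
    if gpu_temp_c < first_bin_c:
        return top_clock_mhz
    bins_dropped = int((gpu_temp_c - first_bin_c) // bin_width_c) + 1
    return top_clock_mhz - bins_dropped * step_mhz

# A card topping out at 2085 MHz on water would step like this:
for t in (30, 38, 44, 47):
    print(f"{t}C -> {boost_clock(2085, t)} MHz")
```

This matches the pattern reported earlier in the thread (2085 under 37C, 2070 to ~42C, 2055 around 47C), though real bin boundaries vary per card.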


----------



## Asmodian

sultanofswing said:


> With you running 2055 @ 1.006v it appears you are using the voltage curve which will starve the card for voltage.


What does "starve the card for voltage" mean?


----------



## Imprezzion

Asmodian said:


> What does "starve the card for voltage" mean?


Just my opinion, but it's not starved. Yes, 1.006v is way below "stock", which is 1.043v-1.062v, but if it's stable, stays cooler, and draws less power, why not?


----------



## Asmodian

Yes, this is my understanding as well. "Starved for voltage" simply doesn't make sense to me.


----------



## sultanofswing

If you give it more voltage and it either gets better FPS or benchmarks higher, then it was starving for voltage. The only way you will know is by testing the various methods.

When I get home from work I’ll post up some examples and let you guys decide.


----------



## Mooncheese

sultanofswing said:


> The FTW3 bios worked ok but not much better than the default BIOS. My old card was a XC ultra so it didn’t have icx sensors.
> 
> You may not be seeing wattage starvation vs it just being normal clock scaling with temp. Remember it will start scaling as low as 36c and kick in every 5-6c and drop 15mhz each time, that is unless you are hitting the power limit. Think my XC ultra default power limit was 360ish watts or so? Can’t remember off the top of my head.
> 
> With you running 2055 @ 1.006v it appears you are using the voltage curve which will starve the card for voltage.
> Try 2055 with the offset slider and see what it does.


If I remember correctly, the XC Ultra and XC2 Ultra are both 330W (but hit 345W in actuality, at least according to HWiNFO64). What I'm experiencing isn't thermal throttling. The thermal stepping I see is that the card does 2085 MHz @ 1.019v under 37C, then 2070 MHz @ 1.013v up to 42C, then 2055 MHz @ 1.006v from there to 47C, the highest I've seen the core thus far. In Timespy, around the midpoint of Graphics Test 2, the clocks dip all the way to 2040-2025 MHz with 330-345W indicated usage. Given my experience undervolting a 1080 Ti, this is indeed wattage starvation, which I've already managed to mitigate to a great degree with the undervolt.



Imprezzion said:


> Just my opinion but it's not starved. Yes, 1.006v is way below "stock" which is 1.043v-1.062v but if it's stable, stays cooler and draws less power, why not.


Yes, exactly; that's the whole point of undervolting: the ability to run a greater frequency with less wattage.

Stock voltage is 1.069v on my card. If I run Firestrike with no undervolt/frequency curve, the graphics score is 37,800 @ 2070 core and +500 memory. With the undervolt it's 38,800 graphics at the same core and memory frequency, because there is less wattage starvation happening.

I actually resorted to undervolting, which this card does admirably, after failing to secure 2100 MHz at default voltage. For whatever reason, more voltage doesn't help secure more core frequency on my sample, so I went in the other direction with the undervolt. So far my overclock of 2070 @ 1.013-1.019v (settling to 2055 MHz @ 1.006-1.013v) and +1100 MHz is stable.

If an overclock is unstable, I will get a display driver failure within 10 minutes of playing F1 2019 @ 3440x1440, of all games, for some reason.

This overclock has a 90% confidence rating in MSI AB OC Scanner and is thus far stable throughout all of the previously listed benches, and I just played F1 2019 for 2 hours. Needless to say, I'm ecstatic that it holds +1100 MHz on the memory and 2055 MHz on the core @ 1.006-1.013v.

Those benches I listed are pretty damn good for 330W. Compared to a default 2080 Ti FE on air, it's quite a bit faster; hell, it's as fast as non-reference cards.

2080 Ti FE, Timespy: 13,610 Graphics (15,250 overclocked):
https://www.guru3d.com/articles_pages/geforce_rtx_2080_ti_founders_review,37.html

Gaming X Trio: 15,200 Graphics
https://www.guru3d.com/articles_pages/msi_geforce_rtx_2080_ti_gaming_x_trio_review,28.html

Lightning Z: 16,350 Graphics
https://www.guru3d.com/articles_pages/msi_geforce_rtx_2080_ti_lightning_z_review,30.html

This XC2 @ 2070 MHz +1100 Memory with undervolt, reference PCB @ ~340w: 16,320 Graphics
https://www.3dmark.com/spy/11266430

Undervolting is the way to go; it's so underrated.

If anyone else has any experience with FTW3 BIOS on reference PCB please share!
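As a back-of-the-envelope check on why an undervolt frees up headroom under a fixed power cap: dynamic switching power scales roughly with f x V^2 (a first-order approximation that ignores leakage and memory power). Using the voltages from the post above, dropping from stock 1.069v to 1.006v at the same clock should cut the core's dynamic power by around 11%:

```python
def dynamic_power_ratio(v_new: float, v_old: float,
                        f_new: float = 1.0, f_old: float = 1.0) -> float:
    """First-order CMOS dynamic power scaling: P is proportional to f * V^2.
    Returns new power as a fraction of old power."""
    return (f_new / f_old) * (v_new / v_old) ** 2

# Same clock, undervolted from 1.069v (stock) to 1.006v:
ratio = dynamic_power_ratio(1.006, 1.069)
print(f"~{(1 - ratio) * 100:.1f}% less dynamic core power")
```

Against a 330W board power cap, that kind of saving is exactly the headroom that keeps the boost clocks from dipping.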


----------



## Mooncheese

Please excuse the rambling; this is completely unscripted and I bounce around a bit. If you want to skip to how the card runs without adequate heat-soak in the system, skip to after the 10-minute mark. In the first half I describe my previous 2080 Ti failure, my hard-tubing plans for this weekend, and why I went with acrylic over PETG.






Here's Greg Salazar's (formerly Science Studio) recent PETG tube failure, and my comment, which can be found below that video:






"Greg, this is one of the many disadvantages of PETG, it can deform and dislodge from a fitting with water temp above 55C. Acrylic doesn't have this problem. Also, PETG suffers from evapo-transpiration, where water from the loop will evaporate through the molecules of the tubing. My current soft-tubing (EK Duraclear) exhibits this problem and is soon to be replaced with 14mm OD Acrylic. Also, PETG suffers from discoloration with direct and even indirect sunlight. Any windows in your room? Give it 2 years and your PETG will be discolored. 


Acrylic doesn't have this problem. 

Additionally, chamfering tube ends the way you do in the video can result in ends that aren't smooth and can rupture the rubber O-ring with PETG. That chamfering tool is meant to be used with Acrylic. Chamfering with Acrylic is much cleaner and this problem doesn't present itself as with PETG. 

Additionally Acrylic is much clearer. 

The only downsides to Acrylic are that it isn't as impact resistant which is relevant for those who swing hammers around in their PC's. 

PETG is for amateurs, I'm surprised that you not only advocate it over Acrylic but then suffer from tube failing from water temp exceeding 55C and or a ruptured rubber O-ring all in one fell swoop. 

Maybe I should become a youtuber and make content. 

Stop being an amateur, do it right next time."


----------



## sultanofswing

I use acrylic and acrylic only myself.


----------



## Mooncheese

sultanofswing said:


> I use acrylic and acrylic only myself.


I'm the only person pointing out that PETG can deform and dislodge in Greg Salazar's disaster video (previous post). Everyone else is speculating about pressure in the reservoir, etc.

I guess it's not a well-known fact yet, but it soon will be with these monster 300W-TDP chips.

Apparently this happens all the time with PETG; if you search you will find numerous examples, yet PETG is still praised as the go-to hard tubing.

I mean it's easier to cut? Golf clap?






Edit: 

A few examples: 

https://www.reddit.com/r/watercooling/comments/dyxk7n/petg_melted_and_got_coolant_all_over_my_gpu/

https://www.reddit.com/r/watercooling/comments/amv7jy/so_my_petg_melted_whilst_gaming_how/


----------



## sultanofswing

So I did more testing once I got home, like I said I would. I'll post the results and let you guys decide whether you think the info I'm giving is false.
My card runs 2055 by default, so for the test I used +50 on the offset slider.
For the voltage curve editor I used 2100 at different voltages, and these are my results.
All tests were done with Timespy.

Test 1: +50 on the core clock slider / +1000 on the memory slider.
This netted a clock speed of 2100 MHz.
Timespy graphics score = 16,649
Power draw reported by HWinfo64 = 362.66W

Test 2: voltage curve editor set to 2100 MHz at the same voltage as the above test, +1000 on the memory slider.
Timespy graphics score = 16,426
Power draw reported by HWinfo64 = 361.16W

Test 3: voltage curve editor set to 2100 MHz at a lower voltage, +1000 on the memory slider.
Timespy graphics score = 15,619
Power draw reported by HWinfo64 = 316W

You guys can look this over if you want, but as you can see, in the first two tests running the voltage curve editor produces lower scores for me.
You can also see from test 3 that just because you can run a certain clock speed with less voltage does not mean it will give the same results (for me at least). Test 3 is what is known as voltage-starving the card; you can indeed starve the card for voltage and still be 100% perfectly stable.
There is a 1,030-point score drop going from test 1 to test 3.

I have screenshots from all 3 runs that I can post also.
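Taking the three runs above at face value, it's worth separating raw score from efficiency. A quick tabulation of the posted numbers shows the slider run wins on absolute score while the lower-voltage curve run wins on points per watt, which is consistent with both sides of this argument:

```python
# Timespy graphics scores and HWinfo64 power draw from the three runs above.
runs = {
    "slider +50":         (16_649, 362.66),
    "curve, same volts":  (16_426, 361.16),
    "curve, lower volts": (15_619, 316.00),
}

for name, (score, watts) in runs.items():
    print(f"{name:20s} {score:6d} pts  {watts:6.2f} W  {score / watts:5.1f} pts/W")

best_score = max(runs, key=lambda n: runs[n][0])                # highest raw score
best_eff   = max(runs, key=lambda n: runs[n][0] / runs[n][1])   # best pts per watt
```

So the slider is the pick if you are chasing a leaderboard score, and the undervolted curve is the pick if you are working inside a fixed power or thermal budget.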


----------



## Mooncheese

sultanofswing said:


> So I did more testing once I got home like I said I would and I will post the results and let you guys decide whether you think the info I am giving is false.
> My card runs 2055 by default so for the test I use +50 on the offset slider.
> For the Voltage editor I used 2100 at different voltages and these are my results.
> All tests were done with Timespy.
> 
> Test 1- +50 on the core clock slider/+1000 on memory slider
> This netted a clock speed of [email protected]
> Timespy Graphics score=16,649
> Power draw reported by HWinfo64=362.66w
> 
> Test 2-Voltage editor set to [email protected] (same voltage as above test) +1000 on the memory slider
> Timespy Graphics score-16,426
> Power draw reported by HWinfo64=361.16w
> 
> Test 3-Voltage editor set to [email protected] +1000 on the memory slider
> Timespy graphics score=15,619
> Power draw reported by Hwinfo54=316w
> 
> You guys can look over this if you want but as you can see in the first 2 tests for me running the Voltage curve editor Produces lower scores.
> You can also see in test 3 just because you can run a certain clockspeed with less voltage does not mean it will give the same results (for me at least). Test 3 is what is known as voltage starving the card, You can indeed starve the card for voltage and still be 100% perfectly stable.
> There is a 1,030 point score drop from going from test 1 to test 3
> 
> I have screenshots from all 3 runs that I can post also.


Very interesting! Actually, I remember seeing this behavior before back with 780 Ti now that I think about it. 

There are some notable variables that I have to ask if you accounted for. 

Did you run the three tests in order and were the temperatures the same? 

What were the temps throughout the benchmark for all three runs (accounting for changes in ambient)? 

What could also be happening is that although you're not seeing a crash per se, the core isn't stable at the lower voltage and is resulting in "error correction". This is how memory overclocking behaves on the 1080 Ti: beyond a certain frequency there is no more performance to be gained, because although the GPU hasn't crashed under load, there is "error correction" going on. For my 1080 Ti there was no point in overclocking more than +425 MHz; +550 MHz would yield zero performance increase over +425 MHz.

In this instance, assuming that memory voltage is not somehow tied to GPU voltage, the error correction could be taking place within the GPU core and is reduced with greater voltage, as reflected in your results.

Anyhow, thanks for sharing this. I may actually do a run at 2085 MHz @ 1.069v on the frequency curve to try this out, but my experience thus far with Firestrike is that my GPU score was 37,800 with default voltage vs. 38,800 with the undervolt, at the same core and memory frequencies (2070 MHz, +500 memory). So for my card the opposite was true.

But I will do a run with the increased voltage to see the difference. 

Unfortunately, I missed the 3DMark sale and have to let my card sit through the lengthy demo at the beginning, which means by the time I'm at GPU Test 1 the card is already at 42+C.
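The diminishing-returns behavior described above for 1080 Ti memory overclocking can be modeled crudely like this. The 5505 MHz base is the 1080 Ti's stock GDDR5X clock as Afterburner reports it, the +425 MHz "knee" is the figure from this post, and the flat line past the knee is a deliberately simple stand-in for error-correction replay eating the extra clock:

```python
def effective_mem_mhz(offset_mhz: int, base_mhz: int = 5505,
                      knee_mhz: int = 425) -> int:
    """Toy model of GDDR error correction: effective throughput scales
    with the overclock offset up to a knee, after which replayed
    (error-corrected) transfers cancel out any further clock gains."""
    return base_mhz + min(offset_mhz, knee_mhz)

# Past the knee, a bigger offset buys nothing:
print(effective_mem_mhz(300), effective_mem_mhz(425), effective_mem_mhz(550))
```

Real cards degrade gradually past the knee rather than flat-lining, but the practical conclusion is the same: benchmark the overclock rather than trusting the highest offset that doesn't crash.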


----------



## sultanofswing

Mooncheese said:


> Very interesting! Actually, I remember seeing this behavior before back with 780 Ti now that I think about it.
> 
> There are some notable variables that I have to ask if you accounted for.
> 
> Did you run the three tests in order and were the temperatures the same?
> 
> What were the temps throughout the benchmark for all three runs (accounting for changes in ambient)?
> 
> What could also be happening is that although youre not seeing a crash per se, the core isn't stable with less voltage and is resulting in "error correction". This is how overclocking memory on 1080 Ti is, where beyond a certain freq. there is no more performance to be derived because although the GPU hasn't crashed under load there "error correction" going on. For my 1080 Ti there was no point in overclocking more than +425 MHz. +550 MHz would yield zero performance increase over +425 MHz.
> 
> In this instance, assuming that memory voltage is not tethered to GPU voltage somehow, the error correction could be taking place withing the GPU core and is reduced with greater voltage as reflected with your results.
> 
> Anyhow, thanks for sharing this, I may actually do a run with 2085 MHz @ 1.069v on the freq curve to try this out but my experience thus far with Firestrike is that my GPU score was 37,800 with default voltage, same freq on the core and memory (2070 MHz, +500 Memory) vs 38,800 MHz with the undervolt. So for my card the opposite was true.
> 
> But I will do a run with the increased voltage to see the difference.
> 
> Unfortunately for me I missed the 3DMark sale and have to have my card sit through the lengthy demo at the beginning which means by the time I'm at GPU Test 1 the card is already at 42+C.


All 3 tests were done back to back to back. GPU temp never went over 36C.
I would love to say it's error correction, but in my earlier testing with the voltage curve editor, even though the scores were lower than with the offset slider, I could go all the way up to 2250 MHz and the scores would still rise.

There could possibly be some temperature variations, but ambient only rose 0.6 degrees and the core maxed out at 36C on all 3 runs.
The PC has been on for 4 hours now and ambient is at its highest. I'll run test 1 again and see if that score gets lower.

I reran test 1 now that ambient has gone up right at 2 degrees, and these are the results:
Timespy graphics score = 16,635
HWinfo64 reported wattage = 365.128W
Max GPU temp = 38C

For me, from what I have found on my card, even if I run the exact same clock speed and the exact same voltage, the offset slider beats the curve editor every single time, as shown by my initial test 1 and test 2. It is not much in terms of benchmark scores, but in games I see far fewer frametime spikes and stutters when I use the offset slider instead of the voltage curve editor.

And trust me, I am not trying to tell people not to use it; I used to use only the voltage curve editor. When I had my 1080 Ti it was the only way the card would do over 2100 MHz; with the offset slider the card would crash well before 2100 MHz.

The only reason I switched is that I was seeing much lower benchmark scores while running higher clocks than other people with the same hardware. I did some testing and was told by a few people I talk to personally that the curve editor was not the proper way and to start using the offset slider. Once I did, benchmark scores started going up at lower clocks and games seemed to run better.

Core voltage and memory voltage on my card are handled by 2 separate controllers.

At any rate, these are my findings for my card; I choose to do what gives me the best results, and the offset slider gives me the best results.

Here is 2145, so close to that 17k graphics score. I bet with a 9900K it would possibly get there, but doubtful. This little 8700K is still chugging along though.
https://www.3dmark.com/spy/10894537


----------



## Mooncheese

sultanofswing said:


> All 3 test were done back to back to back. GPU temp never went over 36c.
> I would love to say it's error correction but before when I was testing all this when using the Voltage curve editor even though the scores were lower vs using the offset slider I could go all the way up to 2250mhz and the scores would raise.
> 
> There could possibly be some temp variatons but Ambient only raised .6 degrees and the core maxed out at 36c on all 3 runs.
> The PC has been on now for 4 hours and ambient is the highest. I'll run test 1 again and see if that score gets lower.
> 
> I reran test 1 as ambient has went up right at 2 degrees and these are the results
> Timespy Graphics score=16635
> HWinfo64 Reported wattage=365.128
> Max GPU temp=38c


Well, my results were worse without the undervolt, just like with Firestrike.

I simply set the core frequency to the same (2070 MHz; on the XC2 Ultra BIOS this is apparently +60 MHz under water) and increased vcore "+100" on the slider in MSI AB, which worked out to 1.050v according to the OSD. I recorded the demo and GPU Test 1 with both this setting and my undervolt, and the clocks dip much harder without the undervolt, with wattage sitting at pretty much 335W for much more of the time under load.

It was a small loss, but this doesn't reflect the potentially higher temps from the additional voltage and wattage without the undervolt, so it's not a true reflection of the undervolt's performance benefit.

The higher score is the undervolt: 

https://www.3dmark.com/compare/spy/11274403/spy/11266430

I will upload the videos showing how hard the clocks dip in the demo without the undervolt.

Oh, and interestingly, at first I simply tried to set the core to +60 MHz without also maxing the voltage slider, and that resulted in a display driver failure. So the same frequency at default voltage is apparently impossible.

What card and BIOS are you running? 

I remember you stating that you used the FTW3 Hydro Copper BIOS with the FE PCB. How long did you use that BIOS with that PCB, and what were your observations? I'm somewhat concerned because the FTW3 has 19 power phases vs. 16, and I'm worried this may put some kind of undue amperage load on the FE PCB's 16 power phases, but I'm not qualified to make that kind of assumption.

I would love for someone with more technical expertise to offer up their opinion. 

But yeah, I'm tempted to try the FTW3 VBIOS, though I'm somewhat convinced the Galax HOF BIOS may have had something to do with my previous reference-PCB 2080 Ti's early demise, aside from the near certainty that the Micron memory wasn't making adequate contact with the water block.

I'm just scared to run a non-reference VBIOS on a reference PCB because I can't completely rule that out.


----------



## mllrkllr88

I did a little write-up about my experience modding and overclocking a reference 2080 Ti vs. the Kingpin Edition:

https://www.overclock.net/forum/69-...e-modding-extreme-oc-2080ti.html#post28391432


----------



## Mooncheese

mllrkllr88 said:


> I did a little write up about my experience modding and overclocking reference 2080Ti vs Kingpin Edition
> 
> https://www.overclock.net/forum/69-...e-modding-extreme-oc-2080ti.html#post28391432


Great write-up! IMHO, 5% isn't worth the Kingpin's asking price though.

What are your thoughts on using FTW3 Ultra / Hydrocopper BIOS on a reference PCB card (XC2 Ultra)?


----------



## sultanofswing

Mooncheese said:


> Well my results were worse without the undervolt, just like with Firestrike.
> 
> I simply set core freq to the same (2070 MHz), on XC2 Ultra bios this is +60 MHz apparently under water, and I increased vcore "+100" on the slider in MSI AB. This worked out to 1.050v according to OSD. I recorded the demo and GPU Test 1 with both this setting and my undervolt and the clocks dip way harder without the undervolt with wattage pretty much sitting at 335W for much more of the time under load.
> 
> It was a small loss, but this doesn't reflect potentially higher temps from the additional voltage and wattage without the undervolt so it's not a true reflection of the performance benefit of the undervolt.
> 
> The higher score is the undervolt:
> 
> https://www.3dmark.com/compare/spy/11274403/spy/11266430
> 
> I will upload the videos showing how the clocks dip really hard in the demo without the undervolt.
> 
> Oh and interestingly, at first I simply tried to set the core to +60 MHz and run it without also turning voltage to max and that resulted in a display driver failure. So the same freq. with default voltage is impossible apparently.
> 
> What card and BIOS are you running?
> 
> I remember you saying that you used FTW3 Hydrocopper with an FE PCB; how long did you use that BIOS with that PCB and what were your observations? I'm somewhat concerned because FTW3 has 19 power phases vs 16, and I'm worried that this may put some kind of undue amperage load on the FE PCB's 16 power phases, but I'm not qualified to make this kind of assumption.
> 
> I would love for someone with more technical expertise to offer up their opinion.
> 
> But yeah, I'm tempted to try FTW3 vbios but I'm somewhat convinced that Galax HOF bios may have had something to do with my previous reference PCB 2080 Ti's early demise, aside from the fact that it's a near certainty that the micron memory wasn't making adequate contact with the water block.
> 
> I'm just scared to run non-reference vbios on reference PCB because I can't completely rule that out.


I used to have an XC Ultra and that is what I ran a plethora of BIOS options on. I have a USB programmer so no BIOS was left unturned, I settled on the ftw3 bios for daily use but I only kept that card for 2 months.
I am currently on a Kingpin.


----------



## sultanofswing

Just passed Timespy at 2280mhz, LUL!


----------



## Mooncheese

sultanofswing said:


> I used to have an XC Ultra and that is what I ran a plethora of BIOS options on. I have a USB programmer so no BIOS was left unturned, I settled on the ftw3 bios for daily use but I only kept that card for 2 months.
> I am currently on a Kingpin.





sultanofswing said:


> Just passed Timespy at 2280mhz, LUL!


Nice! How much was KP? How long did you own the XC Ultra? Why did you like the FTW3 bios the best? Do you have any benches from that card with FTW3 bios? Which FTW3 bios exactly did you use? My card is one of the newer cards that needs a newer vbios but I think you may have already linked to the correct one (the Hydrocopper variant). 

What are your thoughts about using non-reference PCB vbios on reference PCB? 

Here's Metro Exodus, all settings maxed, one bench with RT on High and one with RT on Ultra @ 3440x1440; I am uploading a video now. 69 FPS avg with RT, that's playable! So glad I'm not at 3840x2160. Hoping they bring DLSS 2.0 to this game.

https://imgur.com/a/sEpPeTe


----------



## sultanofswing

Mooncheese said:


> Nice! How much was KP? How long did you own the XC Ultra? Why did you like the FTW3 bios the best? Do you have any benches from that card with FTW3 bios? Which FTW3 bios exactly did you use? My card is one of the newer cards that needs a newer vbios but I think you may have already linked to the correct one (the Hydrocopper variant).
> 
> What are your thoughts about using non-reference PCB vbios on reference PCB?
> 
> Here's Metro Exodus all settings maxed, one bench at RT on High and the Ultra @ 3440x1440, I am uploading a video now. 69 FPS avg with RT, that's playable! So glad I'm not at 3840x2160. Hoping they bring DLSS 2.0 to this game.
> 
> https://imgur.com/a/sEpPeTe


I personally think vbios flashing should be the last resort; focus on temperature control first.
Turing downclocks 15 MHz for every 5C gain in temp once you cross 35C (then again at 40, 45, 50, 55, 60).
If you can keep the temp below 40C and you are hitting the power limit, then by all means raise the power limit. I have some benchmarks of the XC Ultra, but that card was nothing special, and in the end a higher power limit BIOS would not have helped it, as the core was not as good as others.
Turing likes to be cold: if you can run, let's say, 2100 below 40C but crash at 2115 at 42C, then having a higher power limit won't help (I am only using that as an example).
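
The stepping above can be sketched as a toy model. The 35C threshold and 15 MHz / 5C step are the numbers from this post; real GPU Boost behavior varies by card and driver, so treat it as illustration only:

```python
import math

def boosted_clock(base_mhz, temp_c, threshold_c=35, step_c=5, drop_mhz=15):
    """Expected boost clock after thermal down-stepping (toy model)."""
    if temp_c <= threshold_c:
        return base_mhz
    # one 15 MHz drop per 5C bin past the 35C threshold
    bins = math.ceil((temp_c - threshold_c) / step_c)
    return base_mhz - bins * drop_mhz

# A card holding 2100 MHz below 35C:
for t in (34, 36, 41, 46):
    print(t, boosted_clock(2100, t))  # 2100, 2085, 2070, 2055
```

Those steps line up with what owners report in this thread (2085 in the high 30s, 2070 in the low 40s, 2055 past the mid 40s).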

As far as running a non-reference BIOS on a reference card and hurting it? I seriously doubt it will hurt it, as the core and the memory have a strict window of voltages that AIBs have to stay within, and only 3 AIB cards can bypass that (the Kingpin, Lightning Z and the Galax HOF), so flashing a normal AIB BIOS should not hurt anything.
The worst that I can see happening is you may get incorrect Wattage measurements reported by software due to power planes and phases being different but I doubt it would hurt anything.

I can't say what I paid for the Kingpin, but it's not cheap; for me it was worth it. As I have gotten older, when I want to game I personally do not even overclock the card that much. I may do +50 on the offset slider, but trying to push every last drop out of it while playing games isn't really beneficial unless the game responds extremely well to overclocking, and most do not.

Yes, the Hydrocopper BIOS I linked earlier was the one that I used.
If I had to go back to an FE card and pick a BIOS to run on a 24/7 daily basis without a USB programmer, it would be the FTW3 BIOS.
If you can get your hands on a CH341 USB programmer, then I would run the Galax 380W BIOS for a daily.

I can help with the USB programming method as it is actually very, very easy.


----------



## Madness11

Hey guys, please help... how can I fix this?
https://youtu.be/e8plSuJ4v-A
It only happens in the evening... please help (Zotac RTX 2080 Ti AMP)


----------



## JustinThyme

Throw a stick of dynamite in there! 

Need more specifics than a video. I hear and see a fan busting ass, but why? Idle? Fan curve at 100%? BIOS flashed with other than stock? Software driving it up? Or is the fan even what you are referring to?


----------



## JustinThyme

Sultan is right. It's all about cooling. Out of the box the KP isn't really much different than any other high end card: it gets hot, it throttles. 2080 out of the box is about what you get. Get a good water block on it; not your normal block, as the construction of the card will still leave you with fan-cooled VRMs. You basically have to find a CPU block with a flat top that will fit it and cover just the core, and be able to keep the temp at 40C or lower; the stock cooler will not do that in normal environments.

You are paying for a few things in its $1900 price tag: binning, binning, binning.
You can flash any high end card with any BIOS, up to and including the XOC BIOS, and it won't amount to a hill of beans if you can't keep it cool.

A chiller will get you 100 MHz above others. Where it shines is doing crazy sheet like LN2 where it will hit 2500. Not everyone has a supply of LN2 or someone to keep the pot full while you are gaming. Good for setting records on benchmark runs. 

Day to day use out of the box you won't get enough performance gain to really notice any difference over other cards, and honestly having to deal with the AIO concept blows. For idiots like me it's not about the cost being $600 over other cards. If anyone made a custom block for it that covers the whole card I'd be down for it; they don't.

The one up I'd give it over something like the Matrix is that the AIO at least is made to dump the heat out of your case, instead of dumping it back into it like an air cooled card such as the Matrix does. The difference, though, is that you can buy an aftermarket block for the Matrix, as its PCB has the same layout as the Strix O11G because it is the same PCB. The only difference is the binned chips, its price point isn't much different, and it's now even harder to find than the KP.

The card was meant for crazy bass turds that are going to yank all the stock stuff off of it, load up an XOC bios and run LN2.


----------



## Carillo

sultanofswing said:


> Just passed Timespy at 2280mhz, LUL!



Slider-OC ? Post your score


----------



## sultanofswing

No, the card will not do 2280 with the slider unless it is chilled. That was the voltage curve locked to 1.093 on just regular ambient water. I'm at work, but the score at that clock was worse than my score at 2100mhz using the slider.

This is the main reason I stopped using the voltage curve editor.
Under ambient water cooling and stock voltage my card will crash at anything above 2145mhz using the offset slider, but using the voltage curve I can go all the way to 2280 (may go more); the performance just isn't there.


----------



## Mooncheese

sultanofswing said:


> I personally think Vbios flashing should be the last resort that anyone should do and to focus on temperature control first.
> Turing has 15mhz downclocks for every 5c gain in temp once you cross 35c
> 40
> 45
> 50
> 55
> 60
> If you can keep the temp below 40c and you are hitting the power limit then by all means you can raise the power limit. I have some benchmarks of the XC Ultra but that card was nothing special and in the end a higher power limit BIOS would not help it as the core was not as good as others.
> Turing likes to be cold, if you can run lets say 2100 below 40c but crash at 2115 at 42c then having a higher power limit wont help (I am only using that as an example).
> 
> As far as running a non reference BIOS on a Reference card and hurting it? I seriously doubt it will hurt it as the core and the memory have a strict window of voltages that AIB's have to stay within and there are only 3 AIB's that can bypass that with those cards being the Kinpin,Lightning-Z and the Galax HOF so flashing a normal AIB BIOS should not hurt anything.
> The worst that I can see happening is you may get incorrect Wattage measurements reported by software due to power planes and phases being different but I doubt it would hurt anything.
> 
> I can't say what I paid for the Kingpin but It's not cheap but for me it was worth it. As I have gotten older when I want to game I personally do not even overclock the card that much, I may do +50 on the offset slider but trying to push every last drop out of it while playing games isn't really beneficial unless the game responds extremely well to overclocking but most do not.
> 
> Yes the Hydrocopper BIOS I linked earlier was the one that I used.
> If I had to go back with a FE card and had to pick a BIOS to run on a 24/7 daily basis and you did not have a USB programmer it would be the FTW3 BIOS.
> If you can get your hands on a CH341 USB programmer then I would run the Galax 380w BIOS for a daily.
> 
> I can help with the USB programming method as it is actually very very easy.


I could be mistaken, but a 300A chip is a 300A chip. The only reason the KP is faster is because the cleaner power delivery allows for tighter memory timings, as per "mllrkllr88": https://www.overclock.net/forum/69-...e-modding-extreme-oc-2080ti.html#post28391432

KP isn't binned this time around. You're getting a random 300A chip.

You probably didn't win the silicon lottery with your XC Ultra. Mine does 2070 MHz @ 1.013v; that's pretty damn good. My score of 16,350 @ 2070 MHz @ 1.013v with the XC2 Ultra is very close to your 16,450 @ 2100 MHz @ 1.075v with the Kingpin. Again, your 300A isn't better than my 300A per se; the reason you can hit 2200 MHz is the tighter timings and cleaner power delivery. And just because it will pass 3DMark at 2200 MHz doesn't mean it's stable: I can pass 3DMark at 2100 MHz with 1.013v, but it will crash within 10 minutes of playing F1 2019.

Getting the core to stay under 40C is unreasonable in the extreme, short of using a chiller in tandem with a massive external radiator. I'm already at 1kW of rad capacity, and 43-45C is about as good as it gets without an inordinate amount of rad surface area and a chiller. From what I've seen, lower temps don't magically correlate to significantly higher clocks either: my card starts out at 2085 MHz until 37-39C, then drops to 2070 MHz until 42-43C, and then it's 2055 MHz from there until 47C+. We are talking about 30 MHz over a 10C difference.

That 380W Galax HOF bios may have had something to do with cooking my previous card to death. I'm afraid to flash a non-reference bios to a reference PCB at this point, given that the power delivery system is completely different. You have a bios that thinks there are 19 phases among which to distribute amperage, but there are only 16 phases on the reference PCB. I get that the reference PCB is over-engineered; I just don't want to wind up with a $900 paperweight this time around, as I'm quite fond of my XC2 and don't want to abuse EVGA's warranty.

If I were EVGA and someone sent a card in for RMA and it wasn't using the factory bios, that would be immediate grounds for refusing to honor the RMA. Reading comments here like "yeah, I ran the FTW3 bios on my last card that had to be RMA'ed to EVGA" is somewhat maddening. Do you think said vbios had anything to do with said early demise?

Still going to wait to hear from someone who can state authoritatively that they ran a non-reference bios, i.e. FTW3, on their reference PCB for more than 2 months before flashing anything to this card. In the grand scheme of things I'm only looking at increasing TDP by 30W, so yeah, I will probably not mess with it.
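
For what it's worth, the phase-count worry is easy to put rough numbers on. A sketch, where the 16 vs 19 phase counts come from the posts above, but the ~1.05 V core voltage, the 320 W core-power figure, and the even current sharing are all my simplifying assumptions (real boards also dedicate some phases to memory):

```python
def amps_per_phase(core_watts, phases, vcore=1.05):
    """Average output-side current per VRM phase, assuming even sharing."""
    total_amps = core_watts / vcore  # output-side current at the core rail
    return total_amps / phases

# Hypothetical 320 W delivered to the core:
print(round(amps_per_phase(320, 16), 1))  # reference PCB -> 19.0 A/phase
print(round(amps_per_phase(320, 19), 1))  # FTW3 layout   -> 16.0 A/phase
```

So at the same board power, the reference layout simply carries a few more amps per phase; flashing a different BIOS changes the limits, not the phase count, which fits the earlier point that the main side effect is misreported wattage.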


----------



## sultanofswing

KP is most definitely binned. EVGA tests all the chips and saves the best for the KP.


----------



## mllrkllr88

Mooncheese said:


> Great write-up! IMHO 5% isn't worth the asking price for Kingpin though.
> 
> What are your thoughts on using FTW3 Ultra / Hydrocopper BIOS on a reference PCB card (XC2 Ultra)?



Agreed, for the normal user/gamer it's not even close to worth it, even considering the non-modded difference is higher than 5%.


As for the bios, I couldn't really say. For water cooling I'd imagine bios-based mods will give you enough headroom to not throttle. However, if LN2 is the goal then the card will need serious hard mods to get the most out of it.


----------



## truehighroller1

sultanofswing said:


> No, card will not do 2280 with the slider unless it is chilled. That was voltage curve locked to 1.093 on just regular ambient water. I’m at work but the score at that clock was worse than my score at 2100mhz using the slider.
> 
> This is the main reason I stopped using the voltage curve editor.
> Under ambient water cooling and stock voltage my card will crash at anything above 2145mhz using the offset slider but using voltage curve I can go all the way to 2280(may go more) but performance isn’t there.


My Asus Dual OC 2080 Ti does the same thing. At a certain point you saturate your power limit, if you will. Don't care about the terminology; that's what it comes down to in layman's terms.

I can use the XOC Kingpin bios on my card and get way more power draw too, FYI; tested it, scored it, recorded it, etc. This is normal behavior.


----------



## sultanofswing

I know, but apparently I'm wrong according to everyone else, so I will leave it at that.


----------



## sultanofswing

I can game at 2200 all day long using the voltage curve. No issues at all, but it's not the proper way to do it.


----------



## Mooncheese

Some more gameplay footage; note the memory and power delivery temps below the "Vcore" row in the OSD. Very cool, not used to being able to see mem and pwr on the fly.


----------



## Mooncheese

sultanofswing said:


> KP is most definitely binned. EVGA tests all the chips and saves the best for the KP.


I stand corrected then; I wasn't aware that EVGA was doing internal binning. I know they were doing that with the 980 Ti, where if you paid more you could get a chip with a higher ASIC, but I thought they stopped doing that after the 980 Ti.



mllrkllr88 said:


> Agreed, for the normal user/gamer...it's not even close to worth it even considering non-modded difference is higher than 5%.
> 
> 
> As for the bios, I couldn't really say. For water cooling i'd imagine bios based mods will give you enough headroom and not throttle. However, if LN2 is the goal then the card will need serious hard mods to get the most out of it.


My concern isn't just the temperatures of the components that are in contact with the water block, but of other ones on the board, i.e. the voltage controller etc. I'd like more feedback about using the FTW3 vbios on a reference PCB before trying that.



sultanofswing said:


> I can game at 2200 all day long at 2200 using voltage curve. No issues at all but it’s not the proper way to do it.


Awesome. Yeah, my card will pass 3DMark at slightly higher clocks but will crash while gaming @ 2100 MHz; 2070 MHz has been mostly stable. The way you know for sure is extended gaming sessions.

BTW, anyone have any experience with Metro: Exodus? I actually got a display driver failure last night after about 15-20 minutes of play time; my overclock has been fairly robust up until now. Googling the issue, apparently it's a common problem with DX12 and this title, with the solution being to disable DX12. I will try later with no overclock to see if that is the problem, but I figured I would ask you guys first.

https://steamcommunity.com/app/412020/discussions/0/3658515990045391329/?ctp=4

https://steamcommunity.com/app/412020/discussions/0/1864994060719712567/

"choppinbladz 17 hours ago 
I also have this same problem. I have been reading every post I can find where people have had issues with it crashing with Directx12 since it first came out. But other people say they can run it in Directx12 with everything maxed out and have no issues. I have a brand new rig with a clean install of Windows, latest Nvidia driver 445.75:

Ryzen 7 3700x - Not overclocked, properly cooled, stable
Asus X570 Tuf Gaming Wifi Mobo
Corsair RGB Vengeance Pro with XMP profile at 3200Mhz 2x8GB - Stable, properly cooled
EVGA XC Ultra RTX 2080ti - Not overclocked, properly cooled, stable
Western Digital Black PCIe NVMe 500GB M.2 SSD
Clean install of Windows 10 Pro 1909 with all latest updates installed

When I run it in DirectX 12, all settings maxed, ray tracing ultra, DLSS disabled, Hairworks On, Advanced PhysX On, it plays butter smooth and looks fantastic, but only runs anywhere from 2 minutes to about 10 minutes, then freezes and crashes to desktop. I have ran it with and without Rivatuner showing stats, uninstalled Geforce Experience since some people mentioned the overlay and/or Ansel were the cause, have disabled ray tracing, disabled hairworks, disabled advanced PhysX, tried every possible graphics setting, it doesn't matter still crashes in Directx12. Directx11 plays perfectly fine all day. Now, the problem can't be DirectX12 because I can play Shadow of the Tomb Raider and Control with Directx12 and Ray tracing maxed out all day long without any issues.
I have used DDU to fully uninstall and reinstall the driver, uninstalled and reinstalled the game, have completely uninstalled all other applications like Razer Synapse, Corsair ICUE, Sound Blaster, etc. still crashes to desktop in DirectX12.
This makes absolutely no sense and is stressing me out. I refuse to accept that I have to play this game in DirectX 11 on a brand new, expensive rig that I have saved for specifically to play with ray tracing.
If anybody has any other ideas I would love to hear them."

Edit: 

4A Games tech support's solution is to disable ray tracing!

https://steamcommunity.com/app/412020/discussions/0/1864994060719712567/

I'm getting a refund for this, what bollocks.


----------



## kithylin

Mooncheese said:


> I stand corrected then, I wasn't aware that EVGA was doing internal binning. I know they were doing that with 980 Ti where if you paid more you could get a chip with a higher ASIC but I thought they stopped doing that with after 980 Ti.


I'm not sure about the 2080 Ti Kingpin, but in a video interview with JayzTwoCents, Vince himself described how he hand-designed the entire PCB layout for the 1080 Ti Kingpin and actually hand-binned chips inside EVGA himself: putting the chips in a test rig and trying to OC them internally with his own hands. Only the chips that OC'd the highest were hand selected for the 1080 Ti Kingpin cards. They were a limited run, and EVGA couldn't even replace them for RMA returns.


----------



## Mooncheese

kithylin said:


> I'm not sure about the 2080 Ti Kingpin but Vince himself was quoted in a video interview with Jayz twocents showing how he both hand-designed the entire PCB layout for the 1080 Ti Kingpin himself and he actually hand binned chips inside of EVGA himself for the 1080 Ti Kingpin. Like putting the chips in a test rig and trying to OC them internally with his own hands. Only the chips that OC'd the highest were hand selected for the 1080 Ti Kingpin cards. And they were a limited run and EVGA couldn't even replace them for RMA returns.


With internal binning it's nearly impossible for them to test each and every GPU core because of time and resources. In actuality they are likely taking a percentage of 300A chips, running them through some kind of benchmark / stability test, and keeping the best performers from the ones they test.

There are going to be plenty of 300A chips out in the wild that would otherwise clock as well as a Kingpin variant had they the same power delivery and PCB.

Ultimately it still boils down to a lottery. I mean, you're more likely, or guaranteed, to get a good 300A for $1900 (before taxes?) with the Kingpin; whether or not that additional 100 MHz is worth the additional cost depends on how much disposable income you have on hand.

For the money though, used XC Ultra for $900-1000 local sale with 2 years remaining on the warranty is probably your best bet. 

EVGA is awesome just for this reason, I will probably stay within their ecosystem from here on out, next card will most definitely be EVGA. 

$900 out the door / local sale for my XC2 Ultra, warrantied until Sept 2022, with Samsung B-die that does 2070 MHz core, +1100 MHz memory, with an undervolt of 1.013v, and I can see the temps of the memory, VRM and MOSFETs. I'm beyond ecstatic!

Ampere can take its time, I'm loving this card: 70 FPS average in Metro Exodus, all settings maxed (RT: High) @ 3440x1440 with DLSS 1.0. I'm good for now!


----------



## sultanofswing

I’ve never had any issues with exodus no matter what settings I used


----------



## J7SC

sultanofswing said:


> I’ve never had any issues with exodus no matter what settings I used


 
...yeah, some related vids below (single RTX, dual RTX w/ CFR). 

On binning, NVIDIA does the pre-binning, and to get the good bins the vendor has to have bought a given quantity of not-so-good ones. That said, I would think there is further binning by Vince & Co for the KingPin. Also, with the stock BIOS and an un-modded voltage curve, I posted a few months back that this is sort of like the old 'ASIC' quality: while not a percentage, it tells you the MHz / voltage relationship for a given GPU chip.


----------



## sultanofswing

This was on my Kingpin when I got it, someone signed off on it using this number.


----------



## Carillo

17872 graphics score in Time Spy, using the VF curve, 572W max... Must be something funky going on with those Kingpin cards... I'll just leave it at that  

https://www.3dmark.com/3dm/45360171


----------



## sultanofswing

Show your settings, also what are your temps?


----------



## Carillo

sultanofswing said:


> Show your settings, also what are your temps?


The 3DMark info shows pretty much everything... what settings do you need to know? The funniest thing about this FE card is that it's a Dell card coming from an Alienware computer which I paid 650 dollars for


----------



## sultanofswing

Your afterburner settings, what voltages and what temps.


----------



## Carillo

sultanofswing said:


> Your afterburner settings, what voltages and what temps.



[email protected] VF curve +1200 mhz mem... 35 degree gpu  Good luck


----------



## JustinThyme

Carillo said:


> [email protected] VF curve +1200 mhz mem... 35 degree gpu  Good luck


Says Galax card


----------



## Carillo

JustinThyme said:


> Says Galax card

Galax BIOS reports Galax; Asus BIOS reports Asus. What PCB you have does not matter when it comes to the ID.


----------



## Carillo

sultanofswing said:


> Show your settings, also what are your temps?



Still waiting for your 2280mhz score... can't wait, I'm so excited


----------



## J7SC

2280MHz ? - child's play 



Spoiler



Don't forget, posted on APRIL 1st  Dual blower cooler and GDDR7 are a nice touch


----------



## Carillo

J7SC said:


> 2280MHz ? - child's play
> 
> 
> 
> Spoiler
> 
> 
> 
> Don't forget, posted on APRIL 1st  Dual blower cooler and GDR7 are a nice touch
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 

yeah, it will probably outperform a GTX 1070


----------



## sultanofswing

I'm at work. I also never tried to say I had the best scores; I was just posting my findings with my particular card. You took it upon yourself to be the one-upper.


----------



## Carillo

sultanofswing said:


> I’m at work, I also never tried to say I had the best scores I just was posting my findings with my particular card. You took it upon yourself to be the one upper.


Just proving your curve theory wrong... yeah, for some reason you can't use it. But saying it's disabled in XOC bioses because it performs badly? You are not referring to a specific card; you also mentioned your FE card as a low performer using the curve. The reason you and some others experience this is that you are hitting the power limit. POINT is, stop spraying BS


----------



## sultanofswing

I was never hitting the power limit on my FE when I used the Galax XOC bios.

Yes, I used the Galax XOC bios on my FE card and it still did the exact same thing my Kingpin does.

I've tested and tested for weeks on end trying to come up with an answer for it, and was told by Kingpin himself that the voltage curve editor was buggy for them, so the XOC bios has it disabled.


----------



## JustinThyme

You guys are cracking me up. I'm sitting over here happy I can hit 2150 with two 2080 Tis and keep them under 40C. Time Spy is baby crap; run Firestrike Extreme, it's harder to pass.


----------



## jura11

Hi @sultanofswing

Here are my results of a manual curve vs an OC offset on my Asus RTX 2080 Ti Strix with the Matrix BIOS.

Here is manual curve with 2100MHz at 1.01v










And here is the offset OC (+30MHz) at the same 2100MHz; the voltage I have seen during this Superposition test has been 1.04-1.06v 










As you can see, the difference between the offset and the manual curve/frequency is minimal, and whether it's worth it depends on the situation. The power draw between the two settings is also minimal in my case: with the manual curve I see 328W, and with the OC offset I have seen 346W as a max. What's mainly important for me is that if it passes rendering with such a voltage/frequency, then it's stable for gaming or anything else I use my GPUs for. I really recommend people do some tests, compare results, and mainly run the games they're actually playing to see if it's worth it; benchmarks are not necessarily the correct way to spot the difference.
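
Those two power readings line up roughly with the usual CMOS dynamic-power rule of thumb, P ≈ C·V²·f. A back-of-envelope check using the voltages and wattage quoted above; the model ignores static leakage and VRM efficiency, so it's only a sanity check, not a prediction:

```python
def scaled_power(p_ref_w, v_ref, v_new, f_ref_mhz, f_new_mhz):
    """Scale a measured power draw to a new voltage/clock point via P ~ V^2 * f."""
    return p_ref_w * (v_new / v_ref) ** 2 * (f_new_mhz / f_ref_mhz)

# 2100 MHz at 1.01 V drew 328 W; the offset ran the same clock at ~1.05 V:
predicted = scaled_power(328, 1.01, 1.05, 2100, 2100)
print(round(predicted))  # ~354 W, in the ballpark of the measured 346 W
```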

Hope this helps 

Thanks, Jura


----------



## sultanofswing

jura11 said:


> Hi @sultanofswing
> 
> Here are mys results of manual curve and OC offset on my Asus RTX 2080Ti Strix with Matrix BIOS
> 
> Here is manual curve with 2100MHz at 1.01v
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> And here is it offset OC(+30MHz) with same 2100MHz,voltage I have seen during the this Superposition test has been in 1.04-1.06v
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> As you can see, the difference between the offset and the manual curve/frequency is minimal, and whether it's worth it depends on the situation. The power draw between the two settings is also minimal in my case: with the manual curve I see 328W, and with the OC offset I have seen 346W as a max. What's mainly important for me is that if it passes rendering with such a voltage/frequency, then it's stable for gaming or anything else I use my GPUs for. I really recommend people do some tests, compare results, and mainly run the games they're actually playing to see if it's worth it; benchmarks are not necessarily the correct way to spot the difference.
> 
> Hope this helps


Thanks @jura11

Thank you for doing that test; it confirms that either something is weird with my system, or something else is happening that I simply cannot understand. I ran the exact same settings as you for the core on both of these tests.
Now someone explain this:

Test 1 - Superposition 4K, voltage/frequency curve set to [email protected] = 13,538
This is right in line with your test, @jura11.

Test 2 - Superposition 4K, offset slider set to +50. This netted 2100mhz


----------



## sultanofswing

@jura11 if you do not mind can you post up how your Voltage curve looked on that run so I can compare?


----------



## sultanofswing

I think I may have just had a breakthrough.


----------



## AndrejB

@jura11 do you maybe have the matrix bios file?

The one techpowerup has been changed to a max of 2055 (tested on two cards already)


----------



## Apothysis

sultanofswing said:


> Thanks @*jura11*
> 
> Thank you for doing that test, it confirms that either something is weird with my system or something else that I just simply cannot understand is happening. I ran the exact same settings as you for the core on both of these tests.
> Now someone explain this
> 
> Test 1-Superposition 4k, Voltage/frequency curve set to [email protected]=13,538
> This is right in line with your test #Jural.
> 
> Test 2-Superposition 4k, offset slider set to +50. This netted 2100mhz



Your results make a lot of sense if you're hitting the power limit. An offset raises the entire curve, so you get equal increases along the whole voltage range as you downclock for things like power limits. Manually adjusting a single point (like 1.093v) leaves the entire curve below that point (aside from a tiny ramp just before 1.093v) at more or less stock frequency, so you're losing out on the entire offset.

https://i.imgur.com/vTayW52.png
https://i.imgur.com/LWNjHj8.png

As you can see, if you hit the power limit while manually tuning for 1.093v, you're most likely going to drop down to sub-1.062v, at which point you're sitting below 2000 MHz on the core. That's a pretty significant difference, since with an offset you're sitting above 2000 MHz even at 0.950v (in my case).

If you have a bios like the GALAX XOC, which completely negates the power limit, you're never dropping below the set voltage, so you'll never drop in frequency. At least to me that's what it looks like, because that's exactly the result I see with my 330W power limit.
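
That difference can be sketched with a toy V/F table. All the voltages and clocks below are made up for illustration; real curves have far more points:

```python
# Toy V/F curve: voltage -> boost clock (MHz). Illustrative numbers only.
STOCK_CURVE = {0.950: 1950, 1.000: 2010, 1.050: 2055, 1.093: 2100}

def apply_offset(curve, offset_mhz):
    """Offset slider: every point on the curve moves up together."""
    return {v: f + offset_mhz for v, f in curve.items()}

def pin_point(curve, voltage, freq):
    """Curve editor: raise a single point, leaving the rest at stock."""
    c = dict(curve)
    c[voltage] = freq
    return c

offset = apply_offset(STOCK_CURVE, 75)        # +75 on the slider
pinned = pin_point(STOCK_CURVE, 1.093, 2280)  # one point locked high

# At full voltage the pinned curve looks faster...
print(offset[1.093], pinned[1.093])  # 2175 2280
# ...but when the power limiter forces the card down to 1.000 V,
# the pinned curve falls back to stock while the offset keeps its gain:
print(offset[1.000], pinned[1.000])  # 2085 2010
```

Which is exactly the pattern reported above: the pinned curve posts a huge peak clock but loses the benchmark once the limiter kicks in.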


----------



## Imprezzion

My card should finally arrive today, the A-chip one that is, so I can test with a 380W BIOS, and an XOC one if I can cool XOC on air. I know the risks and I don't care about them on air. The 2080 Ti VRM is strong enough to handle a bit of abuse IMO, and as long as my core temps are reasonable (under 80C) I'm just going to push it on XOC. I don't wanna invest in a waterblock and such before I know for sure my card benefits from running XOC at all.

If, for example, I only gain like 50Mhz over a non-XOC BIOS, it's not worth it.


----------



## EarlZ

I think that my card is running warmer than usual; no changes within my PC, and the room temperature is about the same, 29-30C in the day time and 27-28C at night. I am running at 0.987v with 1920Mhz on the core and no memory overclocking. I am now getting 83-85C in The Division 2; IIRC I was getting around 74-75C before. The fan speed curve has not changed. Maybe it's about time to re-apply the GPU thermal paste?


----------



## MrTOOSHORT

EarlZ said:


> I think that my card is running warmer than usual, no changes with in my PC and the room temperature is about the same 29-30c day time and 27-28c during night time, I am running at 0.987v with 1920Mhz on core and no MEM overclocking. I am now getting 83-85c on The Division 2, IIRC I was getting around 74-75c before. Fan speed curve has not changed. Maybe its about time to re-apply the GPU thermal paste?



Time to blow out the air cooler of dust.


----------



## EarlZ

MrTOOSHORT said:


> Time to blow out the air cooler of dust.


Thats the first thing I did, minimal dust accumulation.


----------



## Imprezzion

Ok holy god that Gainward gets hot  I mean, compared to my Accelero non-A card. This cooler is a LOT smaller and it's VERY quiet, hell even 85% fanspeed is almost silent, but it hits about 77c on maxed power limit and auto fans. This goes down to like, high 60's low 70's on 90-100% fanspeed. 

I don't have a lot of hope of cooling this thing with an XOC BIOS at all. I'm just going to try a 380W one first and be a bit gentle with the voltage curve lol. It absolutely hammers the stock BIOS power limit of 126% @ +100mV, running about 1043mV and downclocking all the way to 1845MHz lol.

A dirty manual 1.012v at 2010MHz with 8000MHz memory stays JUST under the limit by 1-3% and runs fine so far. This alone is miles ahead of my non-A, which barely ran 1935MHz @ 1.043v stable.

EDIT: Flashed the KFA2/GALAX 380W BIOS, works great. The power limit is a total non-issue now; even at 1.093v it barely touches 90%. And it unlocked WAY higher fan speeds as well: before, it was locked to 2250RPM even at 100%, now the card can do 2900RPM. RGB also still works fine.

Did a quick test in Borderlands 3 for temps, power and clocks.
It isn't as good on air with 60+C temps as some binned cards, but it seems to run around 2100MHz just fine at 1.093v. 2115MHz almost immediately crashed, so maybe 2100/2085MHz isn't exactly 100% stable, but at least it RUNS it, unlike my non-A which wouldn't even start a game at 2050MHz on any voltage.

This card also came with the old firmware and with Samsung memory, so the memory is an absolute beast as well.

At 100% fan speed temps are perfectly manageable, but this is of course not where I'm going to keep it 24/7 until I upgrade the cooling with either an AIO, an Accelero IV or a full custom block.

EDIT2: Ok, it got hot and crashed; 2070MHz @ 1.062v seems to be more reliable. But the biggest downside to this cooler is probably that the VRAM and VRM are all cooled by the same block, so that's a LOT of extra heat for the poor thing.


----------



## sultanofswing

Apothysis said:


> Your results make a lot of sense if you're hitting power limit. Offset increases the entire curve so that you get equal increases along the entire voltage curve as you downclock for stuff like power limits. Manually adjusting a single point (like 1.093v) leaves the entire curve below that point (aside from a tiny ramp just before 1.093v) at more or less stock frequency so you're losing out on the entire offset.
> 
> https://i.imgur.com/vTayW52.png
> https://i.imgur.com/LWNjHj8.png
> 
> As you can see if you hit the power limit while manually tuning for 1.093v you're most likely going to drop down to sub 1.062v at which point you're sitting below 2000 MHz on the core, a pretty significant difference since with offset you're sitting above 2000 MHz even at 0.950v (in my case).
> 
> If you have a BIOS like the GALAX XOC, which completely negates the power limit, you're never dropping below the set voltage, so you'll never drop in frequency. At least that's what it looks like to me, because that's exactly the result I see with my 330W power limit.


The card is a Kingpin; the default BIOS has a power limit of 520W, which I am NOT hitting.
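To make the quoted offset-vs-single-point behaviour concrete, here's a toy model in Python. The curve points, the +120MHz offset and the 2160MHz single point are made-up illustrative numbers, not real Turing V/F tables or NVIDIA's actual boost algorithm:

```python
# Toy model of the quoted offset-vs-single-point explanation. The curve points
# below are made up for illustration; they are not real Turing V/F tables.

stock_curve = {  # voltage (V) -> stock boost clock (MHz)
    0.950: 1905,
    1.000: 1950,
    1.050: 1995,
    1.093: 2040,
}

def offset_curve(curve, offset_mhz):
    """A core offset raises EVERY point, so a power-limit downclock to a
    lower voltage still lands on a raised frequency."""
    return {v: mhz + offset_mhz for v, mhz in curve.items()}

def single_point_curve(curve, voltage, mhz):
    """Dragging one point up leaves everything below it at stock."""
    new = dict(curve)
    new[voltage] = mhz
    return new

def clock_at(curve, voltage):
    return curve[voltage]

offset = offset_curve(stock_curve, 120)               # +120 MHz offset
manual = single_point_curve(stock_curve, 1.093, 2160) # 2160 MHz @ 1.093 V only

# With no power limit hit, both sit at the same top clock:
print(clock_at(offset, 1.093), clock_at(manual, 1.093))
# But once the power limit forces a drop to the 1.050 V point:
print(clock_at(offset, 1.050), clock_at(manual, 1.050))
```

The takeaway: under a power limit the card walks down to a lower voltage point on the curve, and only the offset version keeps a raised clock there, which is why the two methods score differently on power-limited BIOSes but not on an XOC one.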


----------



## jura11

sultanofswing said:


> Thanks @jura11
> 
> Thank you for doing that test, it confirms that either something is weird with my system or something else that I just simply cannot understand is happening. I ran the exact same settings as you for the core on both of these tests.
> Now someone explain this
> 
> Test 1-Superposition 4k, Voltage/frequency curve set to [email protected]=13,538
> This is right in line with your test, @jura11.
> 
> Test 2-Superposition 4k, offset slider set to +50. This netted 2100mhz


Hi there 

Not sure if something is wrong or weird in your system, but from what I remember my old Zotac RTX 2080Ti AMP never liked a manual V/F curve, at least with the Galax 380W BIOS.

At 2100MHz with downvolting you are getting similar scores to what I'm getting, which is okay, but with offset OC your score is higher by a good 400-500 points.

Your GPU utilisation is at 99%, mine is at 98%, and I'm running a 5960X with a 4.6GHz OC and 96GB of 2133MHz RAM.

I will definitely post my V/F curve and offset as well so you can see there is no trickery. In all tests I ran the fans at similar speeds and didn't change anything; just with downvolting, temperatures are better by 2-3°C.

In one of the tests the core jumped to [email protected], then dropped off to 2100MHz, where it stayed for the whole test.

Hope this helps 

Thanks, Jura


----------



## jura11

AndrejB said:


> @jura11 do you maybe have the matrix bios file?
> 
> The one techpowerup has been changed to a max of 2055 (tested on two cards already)


Hi Andrej 

I think in your case the reason you are seeing only 2055MHz with the Matrix BIOS is temperatures. I'm getting 2070MHz with the stock Matrix BIOS and my temperatures are at 36-38°C most of the time; 38-40°C I only see when I really push GPU clocks, render with other GPUs, or fold in Folding@home.

But yes, I will upload my Matrix BIOS over here and you can try it.

I didn't test this BIOS on a stock Asus RTX 2080Ti Strix or with an air cooler, so I can't comment on whether clocks stay at 2070MHz or you see downclocking first at 2055MHz.

Hope this helps 

Thanks, Jura


----------



## Imprezzion

After loads of testing different BIOSes and curves I found out my Gainward doesn't like the Galax/KFA2 380W BIOS all that much lol. It works fine, but fan speed is capped to a 41% minimum, which is still quite loud, as the RPM scaling is also all over the place.

I then tested a Gigabyte Windforce OC reference BIOS, and that kind of works better, but only the middle fan responds to PWM input and the outside 2 fans stay at 20%, so temps went up very fast. So that BIOS was a no-go as well.

Then I remembered I read here that someone ran the EVGA FTW3 BIOS on a reference card with no issues, so I tried that, as that's also a triple-fan design, and yes, that works perfectly fine so far. Fan speed goes from 20-100% fine, scaling with RPM is good, all 3 fans respond fine; now to test my curve on this BIOS.

I have to say this card is definitely not a "Golden Sample", as Gainward lovingly calls it, as it will not really do 2100MHz+ at air-cooled temperatures.
I now seem to stabilize somewhere around the 2055-2070MHz mark at 1.062v. Adding more voltage doesn't really make the card run any higher frequencies. It will run 2115MHz for a while at 1.093v, but as it heats up past the 68-70C point it gets very unstable and throws random DirectX crashes.

So far I can lock in 2055-2070MHz at 1.062v with acceptable noise levels and power sitting around 110-117% with the EVGA BIOS. It has room to go as high as 124%, so I'm close but not there yet, and it doesn't throttle so far.

That was better on the Galax/KFA2 BIOS btw. That ran at like 95% power limit without me even needing to raise it at all. So, if for some reason this BIOS is unstable or hits the limit, I'll just have to accept a bit louder idle noise. Or strap different fans to the cooler like I do with my Acceleros  (I generally just take the fans off and strap two 140mm 1600RPM Cooler Master MF140s on the cooler, re-wired to take power from the motherboard and PWM + RPM sense from the card).


----------



## jura11

Mooncheese said:


> I stand corrected then, I wasn't aware that EVGA was doing internal binning. I know they were doing that with 980 Ti where if you paid more you could get a chip with a higher ASIC but I thought they stopped doing that with after 980 Ti.
> 
> 
> 
> My concern isn't just the temperatures about the components that are in contact with the water-block but other ones on the board, i.e. the voltage controller etc. I'd like more feedback about using FTW3 vbios on reference PCB before trying that.
> 
> 
> 
> Awesome, yeah for my card it will pass 3DMark at slightly higher clocks but will crash while gaming @ 2100 MHz but 2070 MHz has been mostly stable. The way you know for sure is extended gaming sessions.
> 
> BTW, anyone have any experience with Metro: Exodus, I actually got a display driver failure last night after about 15-20 minutes of play time, my overclock has been fairly robust up until now. Googling the issue, apparently it's a common problem with DX12 and this title with the solution being disabling DX12. I will try later with no overclock to see if that is the problem but I figured I would ask you guys first.
> 
> https://steamcommunity.com/app/412020/discussions/0/3658515990045391329/?ctp=4
> 
> https://steamcommunity.com/app/412020/discussions/0/1864994060719712567/
> 
> "choppinbladz 17 hours ago
> I also have this same problem. I have been reading every post I can find where people have had issues with it crashing with Directx12 since it first came out. But other people say they can run it in Directx12 with everything maxed out and have no issues. I have a brand new rig with a clean install of Windows, latest Nvidia driver 445.75:
> 
> Ryzen 7 3700x - Not overclocked, properly cooled, stable
> Asus X570 Tuf Gaming Wifi Mobo
> Corsair RGB Vengeance Pro with XMP profile at 3200Mhz 2x8GB - Stable, properly cooled
> EVGA XC Ultra RTX 2080ti - Not overclocked, properly cooled, stable
> Western Digital Black PCIe NVMe 500GB M.2 SSD
> Clean install of Windows 10 Pro 1909 with all latest updates installed
> 
> When I run it in DirectX 12, all settings maxed, ray tracing ultra, DLSS disabled, Hairworks On, Advanced PhysX On, it plays butter smooth and looks fantastic, but only runs anywhere from 2 minutes to about 10 minutes, then freezes and crashes to desktop. I have ran it with and without Rivatuner showing stats, uninstalled Geforce Experience since some people mentioned the overlay and/or Ansel were the cause, have disabled ray tracing, disabled hairworks, disabled advanced PhysX, tried every possible graphics setting, it doesn't matter still crashes in Directx12. Directx11 plays perfectly fine all day. Now, the problem can't be DirectX12 because I can play Shadow of the Tomb Raider and Control with Directx12 and Ray tracing maxed out all day long without any issues.
> I have used DDU to fully uninstall and reinstall the driver, uninstalled and reinstalled the game, have completely uninstalled all other applications like Razer Synapse, Corsair ICUE, Sound Blaster, etc. still crashes to desktop in DirectX12.
> This makes absolutely no sense and is stressing me out. I refuse to accept that I have to play this game in DirectX 11 on a brand new, expensive rig that I have saved for specifically to play with ray tracing.
> If anybody has any other ideas I would love to hear them."
> 
> Edit:
> 
> 4A Game tech support solution is to disable ray tracing!
> 
> https://steamcommunity.com/app/412020/discussions/0/1864994060719712567/
> 
> I'm getting a refund for this, what bollocks.



Hi there 

I'm pretty sure EVGA always bins their highest models like the Kingpin. I only owned the 980Ti Kingpin and 780Ti Kingpin models, and after that never had the chance to get the RTX 2080Ti Kingpin because of the price; not sure if they bin the FTW3 and others.

Regarding your issues with Metro Exodus (and other people's): from what I remember, I got a similar issue in Metro Exodus, mainly when I OC'd my old Zotac RTX 2080Ti AMP with the Galax 380W BIOS. I couldn't run 2100MHz in that game, no way; in other games I could run 2115MHz, but in this game I had to run 2085MHz or 2070MHz if I didn't want to crash. There were some places where I could run 2100MHz, but for most parts of the game no way; similarly in Control or in BFV.

Can you please try Control and BFV with RT settings at max or Ultra and check whether the game crashes or not?

I tried Control a few days ago with the Asus RTX 2080Ti Strix, and the best I could run was 2160-2175MHz with RT set to max, no DLSS, at 3440x1440. 2190MHz was stable, but only in lower ambient temperatures; in hotter weather it would downclock to 2175MHz.

In Metro Exodus I recommend you downclock the GPU by 30MHz from your original stable OC, and make sure both it and the VRAM are stable.

Hope this helps 

Thanks, Jura


----------



## z390e

EarlZ said:


> I think that my card is running warmer than usual, no changes with in my PC and the room temperature is about the same 29-30c day time and 27-28c during night time, I am running at 0.987v with 1920Mhz on core and no MEM overclocking. I am now getting 83-85c on The Division 2, IIRC I was getting around 74-75c before. Fan speed curve has not changed. Maybe its about time to re-apply the GPU thermal paste?


Playing the same game at 1440p with an RTX 2080ti and getting MAX 65°C even at 99% usage; did you maybe change your settings?


----------



## Imprezzion

What other BIOSes are floating around for a reference card?

I've tried: 

- XOC ASUS: works fine, but lower voltage than any other BIOS, as there's no curve, so it drops to 1.050v.

- Galax/KFA2 380W BIOS: almost perfect, but the idle fans locked to a 41% minimum is way, way too much.

- Gigabyte Windforce 366W BIOS: only lets the middle fan respond to PWM; the other 2 stay at idle.

- EVGA FTW3 Ultra 373W BIOS: running it now, and it has everything I want. Fans work, the curve works, everything works, BUT it's reporting the power limit at least 20% higher than any other 370-380W BIOS, and it's getting very close to hitting it. Don't like that.

So, is there any other BIOS that works on a reference card with a ~380W power limit, proper fan control with 20-25% idle, and proper voltage & curve control?


----------



## truehighroller1

Imprezzion said:


> What other BIOS's are floating around for a reference card?
> 
> I've tried:
> 
> - XOC ASUS: works fine, but lower voltage than any other BIOS, as there's no curve, so it drops to 1.050v.
> 
> - Galax/KFA2 380w BIOS. Almost perfect but idle fans locked to 41% minimum is way way too much.
> 
> - Gigabyte Windforce 366w BIOS. Only lets the middle fan respond to PWM. Other 2 stay idle.
> 
> - EVGA FTW3 Ultra 373W BIOS: running it now, and it has everything I want. Fans work, the curve works, everything works, BUT it's reporting the power limit at least 20% higher than any other 370-380W BIOS, and it's getting very close to hitting it. Don't like that.
> 
> So, is there any other BIOS that works on reference with a ~380w power limit, proper fan control with 20-25% idle and proper voltage & curve control?


You know what's funny about you making this comment, but not funny to us from a customer standpoint? The people who decided to lock the BIOS down on a video card the customer is paying so much hard-earned money for have no flipping clue how to mod these BIOSes. I used to do it back when you could, I was very good at it, and I can see exactly where they're failing at it so hard.

This whole locking down of hardware that I paid hard-earned money for is B.S. You people (if you're seeing this comment, marketing team, or PR team, or whatever they call you) making these decisions at the top of these organizations for obvious financial reasons are butt clowns, and you're ruining the overclocking culture, and with it innovation, at the same time.

Thinking further outside the box for the PR people or whatever for these big wigs:

If your people who are getting paid to lock things down and mod things in the background to push your bottom dollar up can't flipping do it right, save your bottom line some money and open things back up a little. Let the modding community do the research for you for free, copy what we're doing in your next release of hardware, and reap more bottom-dollar benefits in the long run.

The lock-the-community-down approach to making more money is stupid, and IMO you'll lose more money in the long run by hurting the community. Like a trickle-down effect, in a way.

I know I didn't answer your question at all, and I apologize. Someone will help you shortly, no doubt about it, because that's what us true modders are about: helping people when we know how. I just wanted to make this point because your comment hit that nail for me.

I looked over the last part of your comment and can't comment on the fan issue because I use water, so I haven't paid attention to the fan curves at all, I'm sorry.


----------



## Imprezzion

truehighroller1 said:


> You know what's funny about you making this comment, but not funny to us from a customer standpoint? The people who decided to lock the BIOS down on a video card the customer is paying so much hard-earned money for have no flipping clue how to mod these BIOSes. I used to do it back when you could, I was very good at it, and I can see exactly where they're failing at it so hard.
> 
> This whole locking down of hardware that I paid hard-earned money for is B.S. You people (if you're seeing this comment, marketing team, or PR team, or whatever they call you) making these decisions at the top of these organizations for obvious financial reasons are butt clowns, and you're ruining the overclocking culture, and with it innovation, at the same time.
> 
> Thinking further outside the box for the PR people or whatever for these big wigs:
> 
> If your people who are getting paid to lock things down and mod things in the background to push your bottom dollar up can't flipping do it right, save your bottom line some money and open things back up a little. Let the modding community do the research for you for free, copy what we're doing in your next release of hardware, and reap more bottom-dollar benefits in the long run.
> 
> The lock-the-community-down approach to making more money is stupid, and IMO you'll lose more money in the long run by hurting the community. Like a trickle-down effect, in a way.
> 
> I know I didn't answer your question at all, and I apologize. Someone will help you shortly, no doubt about it, because that's what us true modders are about: helping people when we know how. I just wanted to make this point because your comment hit that nail for me.
> 
> I looked over the last part of your comment and can't comment on the fan issue because I use water, so I haven't paid attention to the fan curves at all, I'm sorry.


And that is where I totally agree with you. Hell, I had the time of my life modding GTX 6xx / GTX 7xx and even GTX 9xx BIOSes with the GUI BIOS editors for those. Also AMD BIOSes, and unlocking HD 69xx/79xx cards and such.

I would love to do some BIOS modding again, only I know absolutely zero about hex editing or even attempting to "crack" the BIOS checksum stuff. I'm not a programmer or a writer, I am a hardware man. If that means learning how to use a GUI editor like KBT, or something like nvflash, that's easily read and learned, but I'm not the kind of guy to learn hex for weeks or months just to MAYBE stand a chance of making a custom BIOS.

You know what, you've kinda inspired me to put some time into this. I found the Gigabyte BIOS in the TPU GPU BIOS database as well, just scrolling through the 1E07-compatible BIOSes.
I'm going to keep searching through them, also in the unverified ones, and maybe I'll find a BIOS that somehow, for some reason, just works better so I can share it here. Or maybe not. Who knows!

EDIT: The thing I find worst about this new Nvidia style is the whole temperature downclocking stuff.
I mean, my card will probably run 2085MHz all day at 73C and 1.068v, but because it runs around 73C it downclocks 5 or 6 bins, so I have to start the card at 2130MHz at idle temperature, which is probably so high it won't ever be stable long enough to heat up and drop clocks. So, because of this stupid temperature throttling that can't be disabled, it's going to cost me at least 50MHz in clocks, and I have to run a way higher voltage for the same clock just to compensate for the high clock speed when starting a game from idle.

The GTX 1080 Ti XOC BIOS at least had that disabled and always ran the same clock speed. Unfortunately, even with an XOC BIOS the RTX 2080 Ti will STILL temperature throttle, even though temperature isn't even shown in MSI Afterburner anymore.
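For anyone curious, the binning behaviour described above can be sketched roughly like this. The thresholds below are guesses fitted to what's reported in this post (2130MHz cold, about 5 bins gone by 73C); NVIDIA doesn't publish the real table, so treat every number as illustrative:

```python
# Rough sketch of GPU Boost temperature binning as described above: the clock
# steps down in 15 MHz bins as the core warms up. Thresholds are guesses fitted
# to this post's observations, not anything from NVIDIA documentation.

BIN_MHZ = 15  # one boost bin on Turing

# (temperature threshold in C, cumulative bins dropped) - illustrative values
TEMP_BINS = [(35, 0), (45, 1), (53, 2), (60, 3), (66, 4), (71, 5), (76, 6)]

def boost_clock(cold_clock_mhz, temp_c):
    """Effective clock after temperature binning, from the cold-card clock."""
    dropped = 0
    for threshold, bins in TEMP_BINS:
        if temp_c >= threshold:
            dropped = bins
    return cold_clock_mhz - dropped * BIN_MHZ

print(boost_clock(2130, 30))  # cold card holds the full 2130 MHz
print(boost_clock(2130, 73))  # ~73 C: five bins down, 2055 MHz
```

Which is exactly the trap described: the cold-start clock has to be set high enough that, after the temperature bins come off, you land on the clock you actually want to run.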


----------



## truehighroller1

Imprezzion said:


> And that is where I totally agree with you. Hell, I had the time of my life modding GTX 6xx / GTX 7xx and even GTX 9xx BIOSes with the GUI BIOS editors for those. Also AMD BIOSes, and unlocking HD 69xx/79xx cards and such.
> 
> I would love to do some BIOS modding again, only I know absolutely zero about hex editing or even attempting to "crack" the BIOS checksum stuff. I'm not a programmer or a writer, I am a hardware man. If that means learning how to use a GUI editor like KBT, or something like nvflash, that's easily read and learned, but I'm not the kind of guy to learn hex for weeks or months just to MAYBE stand a chance of making a custom BIOS.
> 
> You know what, you've kinda inspired me to put some time into this. I found the Gigabyte BIOS in the TPU GPU BIOS database as well, just scrolling through the 1E07-compatible BIOSes.
> I'm going to keep searching through them, also in the unverified ones, and maybe I'll find a BIOS that somehow, for some reason, just works better so I can share it here. Or maybe not. Who knows!


Yeah, you struck the right note with that comment of yours. They don't have the intuition to make the fan curves for this person or that person, or, more than likely, the real answer is that they don't care to spend the money on some BIOS tweaker they're paying $2.50 an hour to make it that way, because it would cost them bottom-line money.

This goes for the voltage curve bug as well, the one we all see right in our faces that they couldn't give two cents to fix. In all honesty it would be so simple to fix, but that's $20 of extra pay out of their pockets for some poor, way-underpaid person in India... Gotta have that new outfitted yacht to float out in the Pacific somewhere, right?

They locked down everything to have more variants and make more money, and they're doing a crap job at it, and it sucks. Glad to be your inspiration; you were mine for venting lol.

Also, last time it was cold and I was benching for the fun of it, I was able to reach the settings directly in memory through Afterburner with a nifty hacking tool.

It would just be better if we could mod our own hardware.


----------



## Imprezzion

Well, one tip for everyone who's running into problems with a non-original BIOS: try this one.

https://www.techpowerup.com/vgabios/206432/gigabyte-rtx2080ti-11264-181023

It's an A-chip 1E07 reference-PCB BIOS from a Gigabyte Windforce / Gaming OC, and it has a power limit target of 300W with power limit +22% at 366W.
The only downside I ran into with my card is that the fan controller doesn't work properly on my Gainward. All fans spin, but only the middle one responds to PWM commands.

There's a second version from the Aorus Xtreme, which is not reference as far as I know, so it may or may not work on reference. I didn't try it, as the limits are the same, but maybe it will work for someone.
https://www.techpowerup.com/vgabios/205897/gigabyte-rtx2080ti-11264-181025-1

So far I'm personally on 2070MHz @ 1.068v under load. This starts at 2130MHz at idle and is quite unstable there, but usually it heats up fast enough, as I run 15% fan speed up to 60C, so it warms up faster and passes the crashing point.

I tried to go higher, to 2150MHz starting clocks, to end up around 2100-2085MHz, but that instantly crashed even loading Borderlands 3. I have it forced to 1.093v now at 2160MHz, and yes, it starts the game at 2160MHz, but it won't survive 10 minutes of playing even at 2115MHz. It just gets too hot and too close to the power limit. It seems to run fine, then it hits the mid 70s, and as soon as it hits the 373W limit the voltage briefly drops to 1.081 or lower, and that instantly crashes the game lol. Temps + 370W power @ 1.093v is just too much for the poor stock air cooler.
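A quick bit of arithmetic on why one BIOS "reports" a higher percentage than another at similar wattage: the percentage monitoring tools show is relative to each BIOS's own 100% base power, not an absolute number. The bases below come from figures quoted in this thread; the 301W EVGA base is just 373/1.24, so treat it as an estimate:

```python
# Why the same wattage reads as a different percentage on different BIOSes:
# tools report draw relative to each BIOS's 100% base power. Bases are taken
# from figures in this thread; the 301 W EVGA base is 373 / 1.24 (an estimate).

def percent_of_limit(draw_w, base_w):
    """Power draw expressed as the percentage a monitoring tool would show."""
    return round(100 * draw_w / base_w)

bios_base_w = {
    "Gigabyte Windforce (+22% = 366 W)": 300,
    "EVGA FTW3 Ultra (124% = 373 W)": 301,
    "Galax/KFA2 380 W": 380,
}

draw = 350  # watts, an example heavy gaming load
for name, base in bios_base_w.items():
    print(f"{name}: reads as {percent_of_limit(draw, base)}%")
```

So a load that shows ~92% on the Galax BIOS shows well over 110% on a 300W-base BIOS, even though the card is pulling the same watts.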


----------



## sultanofswing

Trying to do 2100+ on air with temps over 50C is always a crap shoot. "Usually," to get a solid 2100+ overclock you want temps below 45C; with clock/temp scaling, staying under 40C means you only hit 1 clock/temp step down and can run a higher frequency for a longer duration.


----------



## drnooob

*question about 2080ti nvlink*

Hi, and thank you everyone for all the information provided here. I'm just trying to see if I can get higher stable clocks with my 2080 Tis in NVLink. I have an EVGA 2080 Ti XC Ultra and an EVGA 2080 Ti XC in NVLink. I noticed I was only getting just over 1900MHz in Heaven when running in SLI, and when I looked at the voltage/frequency curves for the two cards in MSI Afterburner, I noticed the XC card had a lower curve than the XC Ultra, thereby needing higher voltage than the XC Ultra to maintain the same clock speeds.

I thought this was related to the BIOS versions of the two cards, so I went ahead and flashed both to the Galax 380W BIOS successfully. I re-enabled SLI after both cards were flashed to the same BIOS, and it's still happening: the voltage curve for one card is for some reason lower than the other, so one is only pushing 950mV while the other card needs 1037mV to push the same clock speeds. I'm not sure what the discrepancy is or how to fix it.

The perf cap reason for the card with the lower voltage requirement is SLI, so I'm assuming the cap is because of the clock speeds of the other card. The perf cap reason for the other card is VRel and SLI, and occasionally power, so this card seems to be pushed to the max, although the wattage is still only around 240W and this BIOS has a 380W limit. Can anyone help?


----------



## Imprezzion

sultanofswing said:


> Trying to do 2100+ on air with temps over 50c is always a crap shoot, “usually” to get a solid 2100+ overclock you want temps below 45c but with clock/temp scaling, staying under 40 means you only hit 1 clock/temp step down and can run a higher frequency for a longer duration.


Yeah, I noticed hehe. It will actually hold 2100MHz at 1.081v if I just smash the fan speed to 100% and only change to 2100MHz with the curve after it heats up fully, but there's no way to launch a game or benchmark at the clocks it runs at idle temps.

I have it dialed in now at 2055-2070MHz core and 8000"MHz" memory, depending on the game/load, at 1.062v. Played Warzone and Borderlands 3 for a few hours with zero issues. It starts at 2115MHz, but since it's at a much lower temperature it generally holds that long enough to heat up, drop clocks accordingly, and not crash.

Temps sit in the mid 60s at 70% fan speed @ ~2100RPM (locked to 70% between 60-80C). It's audible, but not obnoxiously loud or anything.

I'm running the EVGA FTW3 373W BIOS. The power limit is perfect for this cooler: it is 124% @ 373W, and games run anywhere from 95-115% at the above clocks, so it can never get too crazy with power and overheat like mad. Even with Superposition looping and constantly smashing 124% power it doesn't really go above 75-76C, so it's pretty "safe".

So, if I look at it as a pure air OC, it's actually pretty high, right?

Honestly, I'm very impressed with this Gainward Phoenix GS. Seems like a pretty darn good card for air. I did repaste it of course, with PK-3. The cooler design is very efficient: it's actually quite small for a triple-fan cooler, and it also cools both the VRM and VRAM directly with the main heatsink. Parts of the VRM are even direct-touch heatpipes.
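The fan strategy described in this post (quiet while cold so the card warms past the unstable cold-clock range quickly, then a locked 70% in the working band) can be sketched as a simple piecewise curve. The breakpoints mirror the settings mentioned here and are just an example, not anyone's recommended curve:

```python
# Sketch of the stepped fan curve described above: 15% while cold so the card
# warms past the unstable cold-clock range quickly, a locked 70% in the
# 60-80 C working band, then 100% as a safety net. Breakpoints are examples.

FAN_CURVE = [  # (temp in C, fan %)
    (0, 15),
    (59, 15),
    (60, 70),
    (80, 70),
    (81, 100),
]

def fan_percent(temp_c):
    """Piecewise-linear interpolation across the curve points."""
    points = FAN_CURVE
    if temp_c <= points[0][0]:
        return points[0][1]
    for (t0, f0), (t1, f1) in zip(points, points[1:]):
        if temp_c <= t1:
            return f0 + (f1 - f0) * (temp_c - t0) / (t1 - t0)
    return points[-1][1]

print(fan_percent(45))   # cold: stays at 15%
print(fan_percent(70))   # working band: locked at 70%
print(fan_percent(85))   # past 80 C: full speed
```

Most fan-control tools (Afterburner included) interpolate between user-set points in roughly this way, so the flat 59-to-60C step is what makes the "locked" band behave like a switch rather than a ramp.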


----------



## J7SC

FYI, re: 2080 Ti temps vs GPU MHz with various BIOS, voltage and fan settings, I posted this summary (THW) a couple of months back. It seems worth a re-post given the current discussion. The best thing you can do is lower temps as much as possible; for example, between 35C and 77C, their GPU clock dropped by 150MHz or more...

Source


----------



## Mooncheese

I got it done! 

I just spent the past 24 hours putting the hard tubing loop together (off and on, maybe 10 hours total in actuality): 




Update:

Addendum:


----------



## EarlZ

z390e said:


> Playing the same game at 1440p with an RTX 2080ti and getting MAX 65°C even at 99% usage; did you maybe change your settings?


No changes to my game settings. 65C is amazing; which card is that, and what are your ambient temps?


----------



## sultanofswing

Mooncheese said:


> I got it done!
> 
> I just spent the past 24 hours putting the hard tubing loop together (off and on, maybe 10 hours total in actuality): https://youtu.be/XXMqMGJ2l-c


Looking good there!


----------



## Avacado

Fantastic job bro!


----------



## Mooncheese

sultanofswing said:


> Looking good there!





Avacado said:


> Fantastic job bro!


Thanks!!!!


----------



## Madblaster6

Heads up. Found a new BIOS for the Gaming X Trio: 90.02.42.00.14

https://www.techpowerup.com/vgabios/219116/219116


----------



## z390e

EarlZ said:


> No changes to my game settings. 65C is amazing; which card is that, and what are your ambient temps?


Asus Strix OC RTX 2080ti

Ambient right now is 18 degrees C


----------



## Imprezzion

Well, I just bought a whole bunch of stuff for my GPU and the build in general: a Kraken G12 Black, a whole load of copper VRM/VRAM heatsinks (Enzotech, as always ), a Lamptron LAMP-FARGB 10-port PWM fan and ARGB splitter, a bunch of adapter cables for GPU PWM to normal PWM and RGB stuff, a Phanteks ARGB kit to stick on the G12 and match / connect directly to my Phanteks case's DRGB daisychain connector, and some other misc stuff.

Now all I need is a good AIO to strap to it. It has to be a 240mm, 25mm or 30mm thick one. Those are the only sizes that will fit with my 280mm front-mounted CPU rad, which is push-pull and one thicc boi, so it interferes with top-mounted 360mm radiators. And a 280mm is too wide and will hit my motherboard's VRM and RAM heatsinks. The Phanteks has 240mm mounts offset from the board, so that will fit fine, but only push-pull with 25 or 30mm.

I'm looking at an H100i v2, as they ARE actually compatible with the G12, even though not officially, but I might also just go for an X52 or whatever.


----------



## jura11

sultanofswing said:


> @jura11 if you do not mind can you post up how your Voltage curve looked on that run so I can compare?


Hi there

Here are my curves:

Offset +30MHz (2100MHz)

Curve (2100MHz)

Stock Curve

Hope this helps

Thanks, Jura


----------



## Imprezzion

Here's my curve for 2070Mhz 1.068v.


----------



## Mooncheese

Imprezzion said:


> Well, I just bought a whole bunch of stuff for my GPU and build in general. Including a Kraken G12 Black, a whole load of copper VRM / VRAM heatsinks (Enzotech as always ) a Lamptron LAMP-FARGB 10 port PWM fan and ARGB splitter, a bunch of adapter cables for GPU PWM to normal PWM and RGB stuff, a Phanteks ARGB kit to stick on the G12 and match / connect directly to my Phanteks case DRGB daisychain connector and some other misc stuff.
> 
> Now all I need is a good AIO to strap to it. It has to be a 240mm 25mm or 30mm thick one. Those are the only sizes that will fit with my 280mm frontmounted CPU rad which is push-pull and one thicc boi so it interferes with top mount 360mm radiators. And a 280mm is too wide and will hit my motherboards VRM and RAM heatsinks. So, as the Phanteks has 240mm mounts offset from the board that will fit fine but only push-pull with 25 or 30mm.
> 
> I'm looking at a H100i v2 as they ARE actually compatible with the G12 even tho not officially but might also just go for a X52 or whatever.


I was where you are back in 2013-2014, with AIOs on everything. My advice: skip that nonsense and do a proper loop. You can actually sell the H100i; I've sold many AIOs on eBay, people will buy them. Sell all that nonsense, and if budget-limited just go with EK's Fluid Gaming kit; that way you will have all components plumbed into the same rad surface area. But bear in mind that the blocks and rads are aluminum. If you don't intend to upgrade a rad and don't mind waiting for EK Fluid Gaming water-block compatibility as newer GPUs come out, then there's no issue.

A fan on the VRM and nothing on the memory is just monkey business. I mean, been there, done that, got the T-shirt (Kraken G10 on 780 Ti SLI, 980 Ti and 1080 Ti); it works, but it's less than ideal.


----------



## J7SC

Has anyone got Unigine Superposition working with *NVLink 'CFR' / checkerboard* instead of the standard *AFR*? Below is a quick NVLink AFR Superposition 8K run with dual 2080 Tis, but I'd really like to see if CFR would improve things. 

Temps are not an issue, fortunately (2 pumps, 3x 360/55 rads for a separate GPU loop, per the 2nd pic)


----------



## Imprezzion

Mooncheese said:


> I was where you were back in 2013-2014 with AIO's on everything, my advice is, skip that nonsense and do a proper loop. You can actually sell the H100i. I've sold many AIO's on ebay, people will buy them. Sell all that nonsense and if budget limited just go with EK's Fluid Gaming kit, that way you will have all components plumbed into the same rad surface area. But bear in mind that the blocks and rads are aluminum. If you don't intend to upgrade a rad and don't mind waiting for EK Fluid Gaming water-block compatibility as newer GPU's come out then there's no issue.
> 
> Fan on the VRM and nothing on the memory, it's just monkey business. I mean, been there done that and I have the T-Shirt, (Kraken G10 on 780 Ti SLI, 980 Ti and 1080 Ti), it works, but it's less than ideal.


I always had multiple Accelero Hybrids laying around for most of my cards, but they don't sell them anymore here. 

I switch hardware SO often I can't warrant buying a new full-cover block and backplate for every card I buy / own that happens to be a different design or whatever. That's why I use an EK Phoenix 280 CPU loop (I wanted to go custom but this was on clearance for next to nothing), but then again my CPU doesn't need any more cooling. It's at 5.1, and 5.2 needs WAY too much voltage; it doesn't get hot on the EK Phoenix either. Never reaches 70c. 

As for GPUs, I switch like every month basically, and I have multiple GPUs at home as well. I currently have a non-A 2080 Ti from Inno3D, an MSI GTX 1080 Ti Lightning Z, this Gainward 2080 Ti, some older AMD cards like a bunch of R9 280Xs and 290(X)s, and 2 780 Tis. There should be a 980 Ti somewhere, but I think I kinda permanently lent that out to a buddy of mine.. 

And whenever I spot a good deal on a 2080 Ti I will probably buy it anyway. What else is there to do in these boring times. And if that happens to be like a custom-PCB MSI or ASUS or whatever, then my block would be useless again. 
So I want a universal solution that works on all cards, also non-reference (which a G12 will do just fine 90% of the time). 

And it's purely meant for noise and lower temps, to make it temperature throttle less. 

And also looks. I mean, a full push-pull 240mm AIO with RGB fans and RGB strips in the Kraken.. I mean, RGB all the stuff..


----------



## sultanofswing

Today is testing day: seeing what voltage this card uses for offset-slider overclocks, so I can then see if I can figure out a voltage curve.


----------



## Mooncheese

Imprezzion said:


> I always had multiple Accelero Hybrid's laying around on most of my cards but they don't sell anymore here.
> 
> I switch hardware SO often I can't warrant buying a new full cover block and backplate for every card I buy / own that happens to be a different design or whatever. That's why I use a EK Phoenix 280 CPU loop (i wanted to buy custom but this was on clearance for next to nothing) but then again my CPU doesn't need any more cooling. It's on 5.1 and 5.2 needs WAY too much voltage and doesn't get hot on the EK Phoenix either. Never reaches 70c.
> 
> As for GPU's i switch like, every months basically and i have multiple GPU's at home as well, I currently have a non-A 2080 Ti from Inno3D, a MSI GTX1080 Ti Lightning Z, this Gainward 2080 Ti, some older AMD cards like a bunch of R9 280X and 290(x)'s, 2 780 Ti's. There should be a 980 Ti somewhere but I think i kinda permanently lended that out to a boddy of mine..
> 
> And whenever I spot a good deal on a 2080 Ti I will probably buy it anyway. What else to do in these boring times. And if that happens to be like a custom PCB MSI or ASUS or whatever then my block would be useless again.
> So, I want a universal solution that can work on all cards, also non-reference (which a G12 will 90% of the times do just fine).
> 
> And, it's purely meant for noise and lower temps to make it temperature throttle less.
> 
> And also looks. I mean, a full push-pull 240mm AIO with RGB fans and RGB strips in the Kraken.. I mean, RGB all the stuff..


Ah I see, in that case NZXT's AIO bracket is the way to go.


----------



## Imprezzion

Mooncheese said:


> Ah I see, in that case NZXT's AIO bracket is the way to go.


Yeah, I got a Kraken X52 to go along with it, which will be running 4 Cooler Master MF120 PWM fans controlled by the card. I will make an adapter to take the RPM and PWM signals from the card's GPU 4-pin, with 12v+ and ground from the motherboard or even SATA.


----------



## Emetsys

Hi,
I wanted to flash my 2080 Ti, but since the card shipped with a newer BIOS than the ones available in this thread, I can't use them.
It is an MSI Gaming X Trio with firmware 0x70100003.
There is the new Galax HOF 10th Anniversary Edition, which has the same revision.
Is it dangerous to flash my card with it?


----------



## J7SC

Quick question: I never bothered with the voltage curves (other than checking stock speed-voltage for each card) and just went with the MSI AB sliders and OC profiles. Is it possible to save voltage curve settings as a profile in MSI AB and just load them when I want to? If so, slider vs. voltage curve isn't really an either-or choice :headscrat


----------



## jura11

J7SC said:


> Quick question: I never bothered with the voltage curves (other than checking stock speed-voltage for each card) and just went for the MSI AB sliders with oc profiles. Is it possible to save voltage curve settings as a profile in MSI AB and just load them when I want to ? If so, slider or voltage curve isn't really an either-or-choice :headscrat


Yes, you can do that. I have several profiles, like a manual V/F curve OC and a normal offset OC, with different OCs for benchmarks only, rendering, gaming and folding.

For rendering or folding I prefer a manual curve with low voltage; for gaming, a manual curve at 1.09v or an offset works just as well for me.



Hope this helps 

Thanks, Jura


----------



## J7SC

jura11 said:


> Yes you can do that, I have several profiles like manual V/F OC Curve and normal offset OC with several OC like for benchmarks only, for rendering, gaming and folding
> 
> For rendering or folding I prefer manual curve with low voltage, for gaming manual curve with 1.09v or offset works as well for me there
> 
> 
> 
> Hope this helps
> 
> Thanks, Jura


 
Thanks for that info


----------



## sultanofswing

OK, does anyone see anything wrong with this voltage curve set for [email protected]?


----------



## Mooncheese

sultanofswing said:


> Ok, Does anyone see anything wrong with this voltage curve set for [email protected]?


Nope, looking good!


----------



## Mooncheese

Imprezzion said:


> Yeah, I got a Kraken X52 to go along with it which will be running 4 Cooler Master MF120 PWM fans controlled by the card. I will make an adapter for it to go from GPU 4 pin to RPM and PWM from the card and power 12v+ and - from the motherboard or even SATA.


Just for the VRM fan? On some wattage-limited cards you could recoup a few watts throwing the card under water, as whatever wattage is required for the fan(s) is taken from the GPU's budget. This can be 3-5W or so, if I remember correctly. It's not a lot in the grand scheme of things, but with the 1080 Ti FE default vbios that 5W is less power for the GPU core. I would just run the VRM fan off the mobo at a constant 70% RPM; that small fan is nearly inaudible even at 70%.


----------



## 86Jarrod

sultanofswing said:


> Today is testing day, Seeing what this card uses for voltage for Offset slider overclocks so I can then see if I can figure out a Voltage Curve.


This really interested me, so I did some testing also. It looks like on my card I have the same performance loss just sliding 1.093 on the v-curve to where I want it. I figured that wouldn't matter because of my shunt mod; power limit wouldn't be a factor. So when you look at my testing you can see I have similar results. Nvidia control panel all default except power set to adaptive and sharpening off. But I thought: if I raise the slider to a number just below where I want 1.093 to be, and then bump up 1.093, what happens? So I raised the slider to +147 (2175MHz range), then dragged 1.093 up to +150 (2190MHz range) and tested a couple of times. Turns out it scored way higher than just dragging 1.093, but with the benefit of higher frequency due to voltage! Crazy I'm just figuring this out today. Now I gotta re-try the Galax XOC bios on my next benching day, whenever that'll be.
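A toy model of why the two methods differ, assuming Afterburner-style curve behavior (a global offset shifts every V/F point, while raising a single point flattens all higher-voltage points down to it); the curve values below are made up for illustration, not measured:

```python
# Toy model of an Afterburner-style V/F curve.
# Keys are voltages in mV, values are frequencies in MHz (illustrative).

def apply_offset(curve, offset):
    """Global core-clock offset: every point moves up by `offset` MHz."""
    return {mv: f + offset for mv, f in curve.items()}

def drag_point(curve, mv, new_freq):
    """Raise one point; points at higher voltages are clamped to it,
    so the card can never boost past the dragged point's frequency."""
    out = dict(curve)
    out[mv] = new_freq
    for v in out:
        if v > mv:
            out[v] = new_freq
    return out

stock = {900: 1800, 1000: 1935, 1068: 2025, 1093: 2040}

# Method A: drag the 1093 mV point straight to 2190 MHz.
a = drag_point(stock, 1093, 2190)

# Method B: +147 global offset first, then nudge 1093 mV to 2190 MHz.
b = drag_point(apply_offset(stock, 147), 1093, 2190)

print(a[1068], b[1068])  # 2025 vs 2172
```

Both methods cap boost at the same 2190 MHz, but with the offset applied first the intermediate points sit ~147 MHz higher, so any transient drop to a lower-voltage point costs far less clock, which would explain the higher scores.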


----------



## Imprezzion

Mooncheese said:


> Just for the VRM fan? On some wattage limited cards you could recoup a few watts throwing the card under water as whatever wattage was required for the fan(s) are taken from GPU. This can be 3-5w or so if I remember correctly. It's not a lot in the grand scheme of things, but with 1080 Ti FE default vbios that 5W is less power for GPU core. I would just run the VRM fan off of mobo at 70% RPM constant, that small fan is nearly inaudible even at 70% RPM.


Nah, the radiator fans get controlled by the card through a Lamptron PWM and ARGB hub. I'm making a custom cable for that, so the PWM and RPM signals go to the card, and power comes from either the motherboard, or, if the controller doesn't need power on the input side and just takes it over SATA, I'll run just PWM and RPM.


----------



## sultanofswing

86Jarrod said:


> This really interested me so I did some testing also. It looks like on my card I have the same performance loss just sliding 1.093 on the v-curve to where I want it. I figured that wouldn't matter because my shunt mod, power limit wouldn't be a factor. So when you look at my testing you can see I have similar results. Nvidia control panel all default except power to adaptive and sharpening off. But I thought, if I try to raise the slider to a number just lower to where I want 1.093 to be and then bump up 1.093 what would happen? So I raised the slider to +147(2175mhz range) and then dragged 1.093 up to +150(2190mhz range) and tested a couple times. Turns out It scored way higher than just dragging 1.093 but with the benefit of higher frequency due to voltage! Crazy I'm just figuring this out today. Now I gotta re-try the Galax XOC bios on my next benching day, whenever that'll be.



You're on a chiller or chilled water, I assume?

Also, disable the OSD if you have it enabled; your score will go up quite a bit.


----------



## 86Jarrod

sultanofswing said:


> You on a chiller or chilled water I assume?
> 
> Also Disable the OSD if you have it enabled, your score will go up quite a bit.


Oh I do, different control panel settings and all. I wasn't going for score; my mem wasn't OC'd or anything. It was just to show you're right about the v-curve in that you can't just drag the highest available mv to where you want it, even with no power limit. The base frequency settings have to be somewhat close to the wanted frequency/mv to achieve better performance and scores, as shown in the last test. So the v-curve works, just not how we thought it might by dragging up one mv/freq point.

Edit: No chiller, just a cold room, 560/80mm Monsta rad, 4 NF-A14 140mm Noctuas @ 35%, pump at max. I also shortened the posts on my Hydro Copper so that the die and plate touch so I could apply LM; dropped temps by 7c. I never see more than a 2c room-to-loop delta, or an ambient-to-die delta of more than like 13 or 14 on XOC. Less if the fans are blazing and my window's open in winter.


----------



## sultanofswing

@86Jarrod I tried your method and got this result.

+90 offset slider (2145MHz), then dragged 1.093v to 2160.

Weird thing is, the second test showed it hitting the power limit constantly. HWiNFO64 shows a peak of 422W, which is weird since this BIOS is good for 520W.


----------



## 86Jarrod

@sultanofswing Weird. This is one of the few I ran when I moved the slider to 147 (2175MHz), then opened the v-curve and bumped 1.093 up 3 bins to +150 (2190MHz). Maybe because mine's shunt modded? I have no idea then.


----------



## sultanofswing

86Jarrod said:


> Weird. This is one of the few I ran when I move the slider to 147(2175mhz) then opened v-curve and bumped 1.093 up 3 to +150(2190mhz). Maybe because mines shunt modded? I have no idea then.


I've had this happen before. Gonna shut down and flip the BIOS switch, reboot, then flip it back and see if that works. Nice temps!


----------



## EarlZ

z390e said:


> Asus Strix OC RTX 2080ti
> 
> Ambient right now is 18 degrees C


The cooler on that card is on a different level compared to the Gigabyte Gaming OC; it's about 20-25c better.

I decided to do a repaste and it was worth the effort. The TIM Gigabyte used was already dried up and flaked. Used the Noctua paste I had around, and my temps are about 11-12c lower at max load; it barely breaks 74c with The Division 2 now.


----------



## Mooncheese

sultanofswing said:


> @86Jarrod I tried your method and got this result.
> 
> +90 offset slider(2145mhz), Then dragged 1093mv to 2160.
> 
> Weird thing is the second test showed it being all over the power limit. Hwinfo64 shows a peak of 422 watt, Weird since this Bios is good for 520 watt.


17,300 - that is awesome.


----------



## Imprezzion

EarlZ said:


> The cooler on that card is on a different level compared to the Gigabyte Gaming OC, its about 20-25c better.
> 
> I decided to do a repaste and it was worth the effort. The TIM Gigabyte used was already dried up and flaked. Used the Noctua paste that I had around and my temps are abou 11-12 lower on MAX load and it bearly breaks 74c with the division 2 now.


The Gainward Phoenix GS I have also had terrible paste. The paste itself was actually pretty good and gooey, not dried up, but it had quite a few air pockets in it and not the best coverage at all. Replaced it with Prolimatech PK-3 and it dropped temps like a brick.


Now I'm waiting for my X52 + G12 to show up to see how low I can get it to go with an AIO. Should I liquid metal it, or?
I'm now at 2070MHz 1.068v with the EVGA FTW3 BIOS; it was around 70c at 100% fan speed in games like The Division 2. Now it's about 65c at 70% fan speed.

Oh, by the way, @86Jarrod and @sultanofswing, this totally works and I can confirm it completely with real-world tests. 

I use a very, very simple performance test, imo: Borderlands 3 on Badass, 1080p, 150% resolution scale, standing still at the exact point the game launches in Sanctuary. Very repeatable, as you always spawn in exactly the same location facing the exact same direction. 

2100MHz @ 1.093v with a 0MHz slider and just slamming the curve up to 1.093v @ 2100 results in 113 FPS. 
2100MHz @ 1.093v with a +200MHz slider (2055 @ 1.068v) and the core voltage slider at +100mv, and THEN doing the curve to 2100MHz @ 1.093v, results in 118 FPS. Quite a difference, and I can repeat it time after time. 

*So yeah, only use the curve with as high a core slider OC as possible and +100mv on the slider. It somehow gives more performance compared to a 0MHz slider and then a big jump in the curve!*

Sidenote: I'm back on the KFA2/Galax BIOS, as the EVGA FTW3 Ultra doesn't behave in terms of power limit. It is 373W, which is only 7W less than the KFA2's 380W, yet it shows 120-124% all the time, while the KFA2 at a higher clock and higher voltage barely shows 102%, so a 20% difference. The KFA2 doesn't throttle. The EVGA does at 1.093v. 

And yes, the air cooler on the Gainward Phoenix GS really struggles with 1.093v at any reasonable fan speed. It stabilizes around 66c after a long while, but only at 100% fan speed, as VRAM and VRM heat is also dumped directly into the main heatsink, so more voltage means roughly twice the heat as the VRM gets hotter as well. 

Glad to see my Kraken G12 + X52 + Enzotech VRAM/VRM heatsinks coming in sometime next week. Until then I'll just have to live with the noise
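One possible explanation for the 120-124% vs 102% readings: the power limit percentage is relative to each BIOS's *default* power target, not an absolute wattage, so the same draw reads very differently across BIOSes. A quick sketch (the ~300W FTW3 default below is an assumption for illustration):

```python
# Power limit "%" as reported relative to a BIOS's default power target.

def power_percent(draw_w, default_w):
    """Percentage readout for a given draw against a BIOS default target."""
    return 100 * draw_w / default_w

# EVGA FTW3 BIOS: assumed ~300W default target, 373W max slider.
print(round(power_percent(373, 300)))  # a 373W draw reads ~124%

# KFA2/Galax BIOS: 380W default target.
print(round(power_percent(380, 380)))  # a 380W draw reads 100%
```

If those defaults are right, the two BIOSes are drawing nearly the same watts while displaying a 20-point gap, which matches the observation above.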


----------



## freezer2k

Hi, 

I have the EVGA GeForce RTX 2080 Super Black Gaming 8 GB (08G-P4-3081-KR).

I want to do Folding@Home using Linux Ubuntu 19.10, or soon 20.04.

The whole folding part I already got working. But I really want to undervolt the cards to keep power costs and temps low. This worked wonderfully using the Voltage/Frequency curve in MSI Afterburner under Win10.
I assume there's nothing like this available in Linux? 

Can I flash a custom VF-Curve BIOS on this card?

Thanks


----------



## Imprezzion

freezer2k said:


> Hi,
> 
> I have the EVGA GeForce RTX 2080 Super Black Gaming 8 GB (08G-P4-3081-KR).
> 
> I want to do Folding@Home using Linux Ubuntu 19.10, or soon 20.04.
> 
> The whole folding part I already got working. But I really want to undervolt the cards to keep power costs and temps low. This worked wonderfully using the Voltage/Frequency curve in MSI Afterburner under Win10.
> I assume there's nothing like this available in Linux?
> 
> Can I flash a custom VF-Curve BIOS on this card?
> 
> Thanks


It's not actually possible for us to edit the BIOS directly, due to the new checksum and encryption on these BIOS files compared to GTX 9xx and older. 
Maybe virtualize Windows in Linux and see if an OC applied in the virtual Windows sticks in Linux as well? I really see that as a 2% chance of working, but if we don't try random stuff we'll never know 

BTW, for you other guys on water / chilled: did you notice a big difference between what clock was stable at what voltage, compared to air and higher temperatures?
I mean, I've just been doing so much testing with the curve discussion from previous posts, and also just testing my card in general, and I really seem to hit a pretty hard wall at 2070MHz 1.068v. It's "hot" at 66-67c load, but rock-solid stable. The weird thing is, even 2100MHz @ 1.093v will not run one minute of Borderlands 3 at 67c and crashes all over the place, but when it's cold, say 27c idle temps, and it warms up just starting a game, it runs 2115MHz just fine until it drops to 2100-2085 and eventually 2070MHz as it gets up to temperature.
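For what it's worth, that cold-vs-warm behavior is just GPU Boost dropping bins as temperature rises. A toy model (the ~15 MHz bin size matches Turing's V/F curve steps; the threshold temperatures below are guesses for illustration, they vary per card) reproduces the 2115 → 2070 MHz drop described above:

```python
# Rough model of GPU Boost temperature binning: one ~15 MHz bin is
# dropped at each temperature step. Thresholds here are illustrative.

BIN_MHZ = 15
THRESHOLDS_C = [40, 54, 64]  # assumed step temperatures, not measured

def expected_clock(cold_clock_mhz, temp_c):
    """Clock after Boost sheds one bin per threshold crossed."""
    bins_lost = sum(1 for t in THRESHOLDS_C if temp_c >= t)
    return cold_clock_mhz - bins_lost * BIN_MHZ

print(expected_clock(2115, 30))  # cold card: 2115
print(expected_clock(2115, 67))  # warmed up: 2070
```

Which is also why a clock that passes benchmarks on a cold core can crash at the same frequency once the card soaks to 67c: the voltage needed for a given clock effectively rises with temperature.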


----------



## freezer2k

freezer2k said:


> Hi,
> 
> I have the EVGA GeForce RTX 2080 Super Black Gaming 8 GB (08G-P4-3081-KR).
> 
> I want to do Folding@Home using Linux Ubuntu 19.10, or soon 20.04.
> 
> The whole folding part I already got working. But I really want to undervolt the cards to keep power costs and temps low. This worked wonderfully using the Voltage/Frequency curve in MSI Afterburner under Win10.
> I assume there's nothing like this available in Linux?
> 
> Can I flash a custom VF-Curve BIOS on this card?
> 
> Thanks


I found this for Linux:
https://www.reddit.com/r/nvidia/comments/amoi4y/how_to_do_gpu_underclocking_in_linux/

So, setting a custom power limit: still using the default V/F curve, but limiting power. I guess it will just run at lower frequencies then?

I wonder how the power/performance ratio compares to an MSI Afterburner OC curve with a locked-in voltage+frequency point.
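For reference, the stock Linux driver tools can approximate an undervolt without Afterburner: cap board power and lock the core clock range with nvidia-smi (driver 415+ supports `-lgc` on Turing), which keeps the card on the lower-voltage part of the stock V/F curve. A sketch that just builds the commands (the wattage and clock values are placeholders, not tuned settings; running them requires root):

```python
# Build the nvidia-smi invocations for a power-cap + clock-lock
# "undervolt" on Linux. Values are examples only.

def undervolt_cmds(gpu=0, power_w=200, max_core_mhz=1830):
    return [
        f"nvidia-smi -i {gpu} -pm 1",                    # persistence mode
        f"nvidia-smi -i {gpu} -pl {power_w}",            # power limit (W)
        f"nvidia-smi -i {gpu} -lgc 300,{max_core_mhz}",  # lock clock range
    ]

for cmd in undervolt_cmds():
    print(cmd)  # run each with sudo
```

It's cruder than a hand-tuned curve, since you only choose the clock ceiling and let the stock curve pick the voltage, but for folding it gets most of the efficiency benefit.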


----------



## Imprezzion

Well, balls. It really did not like that.. the card was running fine at 2085 @ 1.093 around 67c when my PC just turned off completely and restarted.

It didn't do any damage to the card, of course; heck, it'll take a lot more than that to kill a card in such a short run, but it tripped a protection somewhere, and judging by the loud relay click on shutdown it was my PSU.

So, are you kidding me? A full Seasonic Focus Plus Gold 750W can't keep up with a 5.1GHz 9900K, which at best pulls 240W in a stress test, and a 380W 2080 Ti? I mean, I have 12 fans and a LOT of RGB hooked up to it, but still... it's a 750W, ffs. I might try connecting the card to 2 separate PCI-E leads, as it's on a single split 8+8 cable now.

And the FTW3 (Ultra) BIOS also does the exact same thing: power draw and FPS are way lower on just a very high curve at offset 0, compared to offset +150 and then 1 bin up in the curve. 
The difference is like, a LOT. 

I have 2 profiles next to each other with the exact same clocks, just a different curve. You can see the difference in power limit and such as I apply each of them. MSI AB doesn't log my FPS here in the test I was running, but it went from 103 fps to 111 fps and back, switching between the profiles. 

So yeah, curve with offset is MUCH better, but it also puts WAY more load on the poor thing, and it cannot run the same clocks for me. 2070MHz is nowhere near stable at the "higher load" of the better curve; it barely runs 10 minutes without throwing a crash to desktop or a DirectX error. Well, it never gets boring in these isolation times!


----------



## Mooncheese

Imprezzion said:


> It's not actually possible for us to edit the BIOS directly due to new checksum and encryption on these BIOS files compared to GTX9xx and older.
> Maybe virtualize a Windows in Linux and see if a OC applied in the virtual windows will stick on Linux as well? I really really see that as a 2% chance of working but if we don't try random stuff we'll never know
> 
> BTW for you other guys on water / chilled. Did you notice a big difference between what clock was stable at what voltage compared to air and higher temperatures?
> I mean, i've just been doing so much testing with the Curve discussion from previous posts and also just testing my card in general and i really seem to hit a pretty hard wall at 2070Mhz 1.068v. I mean, it's "hot" at 66-67c load but it is rock solid stable. Wierd thing is, even 2100Mhz @ 1.093v just will not even run 1 minute of Borderlands 3 at 67c and crashes all over the place but when it's cold, say idle temps 27c, and it warms up when just starting a game it runs 2115Mhz just fine untill it drops to 2100-2085 and eventually 2070Mhz when it gets up to temperature.


Nope, my card also exhibits a hard wall at 2100 MHz above 40c. It will do 2100 MHz and pass all benchmarks, presumably because the core starts out at ~27-31c and typically doesn't get hotter than 40-42c whilst benching (Superposition will get it up to 43-44c because it's near 340W the entire time), but once over 40c it doesn't matter if I do +100 on the voltage slider in MSI AB: it's going to CTD + display driver failure, guaranteed. So I'm also seeing a hard wall at 2100 MHz with my 300A.

The consolation is that my card will do 2070 MHz @ 1.013v, which is fantastic because bringing the voltage down drops wattage consumption quite a bit. The difference for my card is substantial, much more than running the card at 2100 MHz with as much voltage as I can pump into it. In Firestrike, for example, simply turning the core freq up +120 MHz with +500 MHz on the memory, it did 37,800 Graphics (2070 MHz core actual) @ 1.069v throughout the test (going by the OSD). Dialing it in on the freq/voltage curve so that it does 2070 MHz @ 1.013v, it does 38,800 Graphics. Closely monitoring wattage while trying to bear out sultanofswing's voltage starvation hypothesis (which works on his card because he has a KP), with +100 on the voltage slider and 2070 MHz core in the Timespy demo it was pulling on average 30-40W more than with the undervolt, and the clocks were dipping harder and more frequently. 

Anyhow, yeah, 2100 MHz: seems you need an above-average core and clean power delivery to attain that or more. 

That said, my clocks of 2070 MHz @ 1.013v are solid; I just played for around 3-4 hours total yesterday without a crash. 
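As a sanity check on that 30-40W figure: with the clock held at 2070 MHz in both cases, dynamic power scales roughly with f·V², so the voltage drop alone predicts savings in that range (the ~340W baseline is taken from the Superposition observation above; the scaling law is an approximation that ignores static leakage):

```python
# Back-of-envelope undervolt savings via P ~ f * V^2 (f held constant).

def scaled_power(base_w, v_old, v_new):
    """Power after a voltage change at the same clock, per f*V^2 scaling."""
    return base_w * (v_new / v_old) ** 2

saved = 340 - scaled_power(340, 1.069, 1.013)
print(round(saved))  # ~35 W, in line with the observed 30-40 W drop
```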

Here's 42 minutes of Ghost Recon Breakpoint "bug testing" (the radial menu disappears at the end, and in the beginning I try in vain to trigger the "press A to open parachute" prompt):


----------



## J7SC

At the end of the day, for gaming with a 2080 Ti, I doubt many folks would notice the extra MHz (say, above 2070) unless they have an FPS overlay. 

For the record, when I first got the GB Aorus XTR WB cards, each was tested with an AIO on a Z170 test bench, and they would still run a minimum of 2190 MHz and 2145 MHz respectively, even at 46c (w/ 23c ambient). Since then, a much more extensive GPU loop for both cards typically keeps them well below 40c even for hours. As mentioned before, I rarely touch the voltage slider, still run the stock BIOS (max 375W+) and never got around to trying the voltage curve. FYI, I just posted related 'collage' graphs in another thread here: https://www.overclock.net/forum/61-water-cooling/584302-ocn-water-cooling-club-picture-gallery-11007.html#post28396312


----------



## sultanofswing

Imprezzion said:


> Well balls. It really did not like that.. card was running fine 2085 @ 1.093 on 67c ish when my PC just turned off completely and restarted.
> 
> Didn't do any damage to the card ofcourse, heck it'll take a lot more then that to kill a card in such a short run, but it tripped a protection somewhere and judging by the loud relais click on shutdown it was my PSU.
> 
> So, are you kidding me. A full Seasonic Focus Plus Gold 750w can't keep up with a 5.1Ghz 9900K which at best pulls 240w in a stress test and a 380w 2080 Ti? I mean, I have 12 fans and a LOT of RGB hooked up to it but still... it's a 750w ffs. I might try to connect the card to 2 separate PCI-E leads as it's on a single split 8+8 cable now.
> 
> And also, on the FTW3 (Ultra) BIOS it also does the exact same.. Power draw and FPS is way lower on just a very high curve on offset 0 compared to offset +150 and then 1 bin up in the curve.
> Difference is like, a LOT.
> 
> I have 2 profiles next to eachother with the exact same clocks, just a different curve. You can see the difference in power limit and such as I apply each of them and MSI AB doesn't log my FPS here in the test i was running but it went from 103 fps to 111 fps and back switching between the profiles.
> 
> So yeah, curve with offset is MUCH better but also puts WAY more load on the poor thing and it cannot run the same clocks for me. 2070Mhz is nowhere near stable at the "higher load" of the better curve. It barely runs 10 minutes without throwing a crash to desktop or a DirectX error. Well, it never gets boring like this in these isolation times!


It's your power supply. The Focus series (unless you have the revised version) had a well-known issue with higher-wattage cards: the transient response of the card boosting would cause the supplies to freak out and just shut off.


----------



## skupples

Mooncheese said:


> I'm the only person pointing out the fact that PETG can deform and dislodge in Greg Salar's disaster video (previous post). Everyone else is speculating about pressure in the reservoir etc.
> 
> I guess it's not a well know fact yet, but it will soon be with these monster 300W TDP chips.
> 
> Apparently this happens all of the time with PETG, if you google you will find numerous examples, yet PETG is still praised as the go to hard tubing.
> 
> I mean it's easier to cut? Golf clap?
> 
> https://youtu.be/8tqyxxVQGFU
> 
> Edit:


I know this response is a few days old, but... 

I was always under the impression that PETG's only strong point over standard lines is that it's easier to work with, and that it's rather porous and ages poorly (been said for years over in the watercooling master thread). 

And as far as higher TDP goes, we already know how that works thanks to people that benchmark with 2-3 overclocked GPUs at a time, each dumping 350-400W. 

Also, Aquacomputer's burpy valve never hurt anyone; just toss it atop your tube res and never worry about the woes of pressure again. 

I've definitely experienced pressure leaks, but they always happened at a fitting, not major O-rings!


----------



## kithylin

Imprezzion said:


> So, are you kidding me. A full Seasonic Focus Plus Gold 750w can't keep up with a 5.1Ghz 9900K which at best pulls 240w in a stress test and a 380w 2080 Ti? I mean, I have 12 fans and a LOT of RGB hooked up to it but still... it's a 750w ffs. I might try to connect the card to 2 separate PCI-E leads as it's on a single split 8+8 cable now.


Perhaps check your math: 240W CPU + 380W GPU = 620 watts together, leaving 130 watts of your 750W PSU. Let's be conservative and say you have low-speed fans that you run at 30% most of the time. Most modern PWM case fans are about 8 watts each; some are more, so 8W is conservative. 12 case fans @ 8 watts = 96 watts. That leaves just 34 watts for your hard drives, SSDs, RGB, and whatever other accessories you have. So yeah, you're running that poor power supply at 98-100% capacity. It's no wonder it shut off. You barely have any room left in that thing for overclocking the video card or anything else. And that's assuming it's a good power supply that can actually handle the full 750W written on the sticker; not all of them can sustain the load printed on the sticker at 100% constantly. The general rule with computers is you want a power supply rated, at bare minimum, 10% above what you actually use; +20% is more realistic. You don't want to run power-handling components at 100% load non-stop. Most of them can't handle it, and those that do won't do it for very long.
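The budget above, as arithmetic (all wattages are kithylin's ballpark estimates, not measurements):

```python
# PSU headroom check for the build described above.

PSU_W  = 750
CPU_W  = 240        # 9900K stress-test peak
GPU_W  = 380        # 2080 Ti with raised power limit
FANS_W = 12 * 8     # 12 case fans at ~8W each

load = CPU_W + GPU_W + FANS_W
print(load, PSU_W - load)  # 716W drawn, only 34W of headroom
```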


----------



## spin5000

Is there a BIOS editor available yet, so I can simply save my card's default BIOS, edit it to just increase the power limits, then flash the card with the updated (higher power limit) BIOS?

The reason I ask is that there are all sorts of variables in a BIOS, including ones people hardly ever mention, such as optimizations for different memory (Micron, Samsung, Hynix), and not only that, but also different memory timings for the same memory (e.g. Samsung) between different graphics cards/manufacturers. All that stuff, plus more buried settings in a BIOS, means using BIOSes made for other cards with possibly different components isn't ideal and can even negatively affect both stability and performance...


----------



## J7SC

kithylin said:


> Perhaps check your math: 240W CPU + 380W GPU = 620 watts together. Leaving 130 watts left out of your 750W PSU. Let's be super conservative and think you have low speed fans that you run at 30% most of the time. Most modern PWM case fans are about 8 watts each. Some are more, so 8W is conservative. So 12 case fans @ 8 watts = 96 watts. So then that's leaving just 34 watts left for your hard drives, SSD's, RGB, and what ever other accessories you have. So yeah you're running that poor power supply at 98% - 100% capacity. It's no wonder it shut off. (...)


 
^ :thumb:

It is easy to forget 'peripherals'... for example, I know what my dual 2080 Tis use at full tilt (~760W max), plus CPU (~280W), but 4x D5s (75% setting)? 18x 120mm fans, some biggies? 3x M.2? 2x SSDs? And so forth. Best to keep plenty of headroom in the PSU power budget (I'm running an Antec HPC 1300W for now, but can easily add a 2nd one via Antec's 'OC-Link')


----------



## Imprezzion

kithylin said:


> Perhaps check your math: 240W CPU + 380W GPU = 620 watts together. Leaving 130 watts left out of your 750W PSU. Let's be super conservative and think you have low speed fans that you run at 30% most of the time. Most modern PWM case fans are about 8 watts each. Some are more, so 8W is conservative. So 12 case fans @ 8 watts = 96 watts. So then that's leaving just 34 watts left for your hard drives, SSD's, RGB, and what ever other accessories you have. So yeah you're running that poor power supply at 98% - 100% capacity. It's no wonder it shut off. You barely have any room left in that thing for overclocking the video card or anything else. And that's assuming it's a good power supply that can actually handle the full 750W written on the sticker. Not all of them can actually handle the load they put on the sticker @ 100% load constantly. The general rule with computers is you want a power supply that will be at bare minimum rated for +10% above what you actually use. +20% is more realistic. You don't want to run power handling components at 100% load non-stop. Most of them can't handle it. Or if they do handle it they won't do it for not very long.


Yeah, I kinda agree. My math was way off.

It's going to be too close for comfort, as I'm about to strap a Kraken G12 and a further two fans onto the GPU, meaning quite a bit of extra load again.

I mean, the Focus Plus Gold is a VERY good design and should be able to handle it, as the CPU will of course only pull 240W in something like a Prime95 AVX stress test. In games it sits around 140W.
Another thing I have to take into account is the RAM in my rig. It's only a 2x8GB kit, but it's running 1.544v with 1.30v SA and 1.25v IO as well. That takes quite a bit more power than a stock 3200C16 1.35v kit.

But my next upgrade will be a Seasonic PSU compatible with my CableMod cables, 850W or more, depending on what I can source secondhand. Can't be bothered to buy a new one, as I have enough spare cables, so I'll search for a cheap secondhand one with missing cables or something. Luckily, Seasonic uses the exact same cables for pretty much all modern generations of PSUs from the Focus Plus up, so it shouldn't be that difficult to find one.

For now it hasn't happened again, and I've been gaming all evening (Warzone) at 2085-2070MHz, 1.093v, with the 380W limit in place.


----------



## sultanofswing

Made some changes to my setup today.
Ditched the Enthoo Elite and went back to my O11 XL.
Got it set up running triple D5 pumps in series now.

It ain't all that pretty, but it works.


----------



## J7SC

^ nice :thumb: ...can never have enough pumps and XSPC rads  

The O11 XL case looks like it can swallow a lot


----------



## sultanofswing

J7SC said:


> ^ nice :thumb: ...can never have enough pumps and XSPC rads
> 
> The O11 XL case looks like it can swallow a lot


Yea you can fit quite a bit in this case, the width is what it really has going for it.


----------



## sultanofswing

So, interesting thing I learned tonight: my card will not pass Time Spy at [email protected] but will pass at [email protected]

So for the hell of it I decided to try [email protected] and then use the Kingpin's feature to set NVVDD (core voltage) to 1.143V.

It appears the voltage curve is working properly now. Gonna try to get temps down this week and push for more.


----------



## Imprezzion

sultanofswing said:


> So, interesting thing I learned tonight: my card will not pass Time Spy at [email protected] but will pass at [email protected]
> 
> So for the hell of it I decided to try [email protected] and then use the Kingpin's feature to set NVVDD (core voltage) to 1.143V.
> 
> It appears the voltage curve is working properly now. Gonna try to get temps down this week and push for more.


What are your current temps at?

My whole power debacle has been solved. I pulled the trigger on a Seasonic Prime Ultra Gold 1000W that I know is compatible with my cables, and if shilka tells you the Prime Ultra is one of the best PSU lines out there, then you buy it.. It was on clearance as well, €166 with next-day shipping. Ain't bad at all.


----------



## sultanofswing

Imprezzion said:


> What are your current temps at?
> 
> My whole power debacle has been solved. I pulled the trigger on a Seasonic Prime Ultra Gold 1000w that I know is compatible with my cables and if shilka tells you the Prime Ultra is one of the best PSU lines out there then you buy it.. It was on a clearance sale as well, €166 with next-day shipping. Ain't bad at all.


That run hit 39C on the card, so still on normal, non-chilled cooling.


----------



## J7SC

Imprezzion said:


> What are your current temps at?
> 
> My whole power debacle has been solved. I pulled the trigger on a Seasonic Prime Ultra Gold 1000w that I know is compatible with my cables and if shilka tells you the Prime Ultra is one of the best PSU lines out there then you buy it.. It was on a clearance sale as well, €166 with next-day shipping. Ain't bad at all.


 
That's a nice PSU at a good price! It also doesn't hurt to keep the 'old' one as a spare for emergencies, or for testing new components on a test bench, etc.


----------



## Imprezzion

J7SC said:


> That's a nice PSU, at a good price ! Also doesn't hurt to have the 'old' one as a spare for emergencies, or new component testing on a testbench etc


That's why I jumped right on it. It's only a Gold, but I don't really care about Titanium vs. Platinum vs. Gold; quality is more important than 3% efficiency. And I have CableMod Focus Plus CPU EPS 8-pin cables, plus PCIe and 24-pin ATX, and the Prime Ultra series uses the same cables (the SE/RT series from CableMod), so I can keep them in place and just replace the PSU itself.

Should be here tomorrow along with the rest of my Kraken G12 and RGB stuff. The Kraken X52 already arrived lol.

I am now running the EVGA FTW3 Ultra BIOS, limited to 350W at 1.012v and 2025-2010MHz. No problems so far.


----------



## spin5000

Is there a BIOS editor available yet, so I can simply save my card's default BIOS, edit it to just increase the power limits, and then flash the card with the updated (higher power limit) BIOS?

The reason I ask is that there are all sorts of variables in a BIOS, including ones people hardly ever mention, such as optimizations for different memory (Micron, Samsung, Hynix), and also different memory timings for the same memory (e.g. Samsung) between different graphics cards/manufacturers. All that, plus other buried settings in a BIOS, means using a BIOS made for another card with possibly different components isn't ideal and can even negatively affect both stability and performance...

Basically, I have a Gigabyte 2080 Ti Turbo ( https://www.gigabyte.com/ca/Graphics-Card/GV-N208TTURBO-11GC-rev-10#kf ) and would like to raise the power limit from the current 280W to somewhere between 300W and 330W. The cooling can easily take it; I've done tons of tests.


----------



## Imprezzion

spin5000 said:


> Is there a bios editor available yet so I can simply save my card's default bios, edit it to just increase the power limits, then flash the card with the updated (higher power limits) bios?
> 
> The reason I ask is because there are all sorts of variables in bios, including ones people hardly ever mention such as optimizations for different memory (micron, samsung, hynix) and not only that but also different memory timings using the same memory (eg. samsung) but between different graphics cards/manufacturers. All that stuff and more buried settings in a bios means using bios' made for other cards with possibly different components isn't ideal and can even negatively affect both stability and performance...
> 
> Basically, I have a Gigabyte 2080 Ti Turbo ( https://www.gigabyte.com/ca/Graphics-Card/GV-N208TTURBO-11GC-rev-10#kf ) and would like to raise the power limit from the current 280w to somewhere between 300w and 330w. The cooling can easily take it. I've done tons of tests.


I don't know if that's such a good idea. The Turbo blower is a non-A, so they all have identical PCBs and optimizations, and there is only one BIOS available with a higher limit (310W). The problem with that specific card is that Gigabyte removed power phases; it only has 11. That could be a problem at 300W+.

An editor doesn't exist and probably never will, due to the encryption and checksums on the BIOS.


----------



## kithylin

spin5000 said:


> Is there a bios editor available yet so I can simply save my card's default bios, edit it to just increase the power limits, then flash the card with the updated (higher power limits) bios?
> 
> The reason I ask is because there are all sorts of variables in bios, including ones people hardly ever mention such as optimizations for different memory (micron, samsung, hynix) and not only that but also different memory timings using the same memory (eg. samsung) but between different graphics cards/manufacturers. All that stuff and more buried settings in a bios means using bios' made for other cards with possibly different components isn't ideal and can even negatively affect both stability and performance...
> 
> Basically, I have a Gigabyte 2080 Ti Turbo ( https://www.gigabyte.com/ca/Graphics-Card/GV-N208TTURBO-11GC-rev-10#kf ) and would like to raise the power limit from the current 280w to somewhere between 300w and 330w. The cooling can easily take it. I've done tons of tests.


They haven't even cracked the BIOSes to release a reliable BIOS editor for Pascal (GTX 1000 series) cards yet, and those were released in 2016, four years ago. There's a long, long way to go before we get any kind of BIOS editing for the newer Turing cards, if we ever do.


----------



## Imprezzion

Well, my Kraken G12 came in, along with all my adapters, hubs, cables and such. Going to be rebuilding my card now; I'll update when it's installed.


----------



## rares495

Imprezzion said:


> That's why I jumped right on it. It's only a Gold but i don't really care about Titanium vs Platinum vs Gold, quality is more important then 3% efficiency.


A higher efficiency rating guarantees higher-quality components inside; that's the only way to achieve those numbers. And that's what you buy Platinum and Titanium units for in the first place, not the damn 3% improvement.


----------



## truehighroller1

rares495 said:


> Higher efficiency rating guarantees higher quality components inside. That's the only way to achieve those numbers. And that's what you buy platinum and titanium units for in the first place, not the damn 3% improvement.


I recently bought the AX1600i. Solid freaking PSU, man. Solid.


----------



## Imprezzion

rares495 said:


> Higher efficiency rating guarantees higher quality components inside. That's the only way to achieve those numbers. And that's what you buy platinum and titanium units for in the first place, not the damn 3% improvement.


There are really good Gold PSUs and really bad Platinums floating around, man. It's totally not a guarantee, and not every Gold PSU uses inferior components with a shorter lifespan. We're talking about a Prime Ultra here, not some random no-name Gold unit. And no, it does not guarantee it. At all. There are plenty of Platinum/Titanium units with less filtering, more ripple, and worse Chinese caps than many top-tier Gold units.


----------



## rares495

Imprezzion said:


> There are really good Gold PSU's and really bad Platinums floating around man. It's totally not a guarantee. Not every Gold PSU uses inferior components with a lesser lifespan. We're talking about a Prime Ultra here, not some random no-name brand Gold unit. And no, it does not guarantee it. At all. There's plenty of Platinum / Titaniums with less filtering and more ripple and worse Chinese caps then many top-tier Gold units.


Ok. Show me some of those bad units.


----------



## Duskfall

For anyone who tries to flash the Palit RTX 2080 Ti Dual (non-A) BIOS to an ASUS ROG Strix non-A: don't do it, because it soft-bricks the card, probably due to the custom PCB.


----------



## kithylin

rares495 said:


> Ok. Show me some of those bad units.


Enermax 80+ Gold units from the 2004-2008 era were literally catching fire in people's computers. It was all over the internet; that brand was pretty well known for it. So much so that I personally would never allow any of their units in any of my computers today in 2020, even if they are Titanium or whatever. Dell used them in their desktops for a number of years, and after the fire problem and some Dells catching fire, they dropped them. Anyway, this whole power supply discussion is pretty far off topic for this thread, isn't it?


----------



## rares495

kithylin said:


> Enermax 80+ gold units in the 2004-2008 years that were literally catching fire in people's computers. It was all over the internet. That brand was pretty well known for it. So much so that personally I would never ever allow any of their units in any of my computers today in 2020, even if they are titanium or whatever. Dell had used them in their desktops for a number of years and then after the fire problem and some dells catching fire they dropped em. Anyway.. this whole power supply discussion is pretty far off topic from this thread I think, isn't it?


Anyone and their fox can do 80+ Gold. Show me bad Platinum/Titanium units.



Blah blah, the 2080 Ti is the best gaming GPU. I'd love to have one.


----------



## Imprezzion

Well, that's done. Kraken G12 + X52 mounted push-pull. I'll post some pics later.

As for results: I did NOT expect that. At all. Fans on 100% for a quick test. With the stock Phoenix GS air cooler at 2070MHz 1.093v on the EVGA FTW3 Ultra BIOS, 100% fan speed ran in the mid 60s and was loud as heck.

Now, with the Kraken's fans on 100% as a quick test, it didn't even hit 40C.. I made a super silent fan curve and it stabilized at 48C on 37% fan speed. I also bought an EK GPU 4-pin to standard PWM adapter cable and run the radiator fans off the card's PWM controller, so I can control them through Afterburner.

Also, where 2085MHz 1.093v would crash within 10 minutes and 2100+MHz wouldn't even load a game properly, it now runs 2100MHz at 47C just fine, at least for a couple of minutes, which it never did before lol..

I had to cut a LOT of the VRAM and VRM heatsinks to size, because they are too tall to fit under the brackets and the corners overlap the VRAM.

I also had to go through my stash of old aluminium heatsinks from random cards and motherboards and cut up a bunch of them for the chokes, VRM and such.
Can't ever have enough heatsinks lol.

Also, I got the stock Phoenix backplate to fit. I had to remove the spacers in the middle of the backplate that normally sit around the core, but it fits just fine with them removed.

EDIT: Did a few runs of my current favorite power and stability benchmark, Superposition. It passes a run at 2190MHz but with some black artifacts here and there. Another run at 2175MHz had a lot fewer artifacts, but still some visible; at 2160MHz they are completely gone.
I also didn't run 100% fan speed, so it got up to about 42C at ~48% on the radiator fans. Scored 10216 in 1080p Extreme.

The only problem now with the EVGA BIOS is the power limit again. It occasionally hits 124% and drops to around 2115-2145MHz.

Once my new PSU comes in tomorrow, I'll go back to the KFA2 BIOS, as that reads over 10% lower power draw, so it shouldn't throttle anymore.


----------



## kithylin

Imprezzion said:


> I had to cut a LOT of heatsinks for the VRAM and VRM to size because they are too high to fit under the brackets and the corners overlap the VRAM.
> 
> Also had to go through my stash of old random aluminium heatsinks from random cards and motherboards and such and cut up a bunch of them for the chokes, VRM and such.
> Can't ever have enough heatsinks lol.


Are those heatsinks permanently attached with thermal adhesive? Or removable later?


----------



## J7SC

Imprezzion said:


> Well, that's done. Kraken G12 + X52 push-pull mounted. I'll post some pics later.


 
Looks (and apparently runs) cool :thumb: The look reminds me a bit of the KP edition.


----------



## philhalo66

I'm officially a member now. https://www.techpowerup.com/gpuz/details/86y3e


----------



## Imprezzion

kithylin said:


> Are those heatsinks permanently attached with thermal adhesive? Or removable later?


Removable; the Enzotechs come with pre-applied thermal tape, which is actually very high-quality stuff. The others are applied with Akasa thermal tape, the big cut-to-size sheets of it.

I did the finger test on the VRM, and even after 30 minutes of looping Superposition they barely get warm to the touch at 380W. Nice.

I'm going back to the KFA2 BIOS now for some more testing. The PSU isn't here yet, but as long as I don't load the CPU too much it should hold.

EDIT: Tried the KFA2; still the same issues I had with it, even though I'm controlling the radiator fans now. The 41% minimum fan speed is 1400RPM on my push-pull 120mms. Not exactly quiet for idle. I mean, it's perfectly doable under load, but at idle?

Plus, while the power limit is way higher (it barely touches 100% of the 126% available now), the Superposition score was actually 200 points worse than with the EVGA BIOS, even though that one throttled slightly. Also: coil whine with the KFA2, none with the EVGA. I'm going back to the EVGA one and letting the limit stay at 380W. That way I keep all the benefits of 1.093v and lower temperatures, and IF it ever gets into a game or situation where it wants to draw 400W+, endangering the card, it will throttle itself. Sounds like a perfect plan for fun and safety.

I mean, I could try the XOC just for the heck of it... (Who am I kidding, it's already flashing)

EDIT2: It does not like 1.125v and ~500W of power draw. It runs it, and barely gets any hotter (up to 46C at 50% fan speed), but it isn't very stable lol. It barely clocks any higher; 2190MHz is possible now, but the risk of drawing over 500W all over the place is quite large and the improvement is tiny. It gained about 100 points at best.

So yeah, back to the EVGA FTW3 Ultra. That works by far the best of all the BIOSes for my card and application. The KFA2 would be better if the fan speed weren't locked to a 41% / 1400RPM minimum.


----------



## z390e

great testing info @Imprezzion


----------



## Imprezzion

Here you go guys, probably the final one for now.

Game-stable clocks, tested with Borderlands 3 at Badass graphics, 1080p 144Hz, 150% resolution scale.

Everything's in the screenshot: temps, power, radiator fan RPM, the curve I use, clocks and so forth.

As you can see, it's running around 48C at 51% radiator fan speed. If I crank them to 100% it immediately drops to 38-39C, but it doesn't need to be that cold; it seems to run 2130MHz just fine at 48C. Yes, it needs sub-45C for 2170MHz, but it can't launch games at 2190/2205, so I can't run higher than a 2170MHz initial clock before heating up anyway. This is a nice balance between noise and performance.

So yeah, bottom line: RTX 2xxx REALLY wants to be cool. A clock speed that will instantly crash at 65C will run forever at 46C, for example.
Is it worth going with an AIO G12+X52-style solution? No, totally not worth it; I gained maybe 60MHz and some noise reduction. But I did it for the looks and the fun of building it.
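That temperature sensitivity lines up with how GPU Boost steps the clock down as the core heats up. A tiny illustrative model (the 15MHz step matches Turing's boost bins; the temperature thresholds here are assumptions for demonstration, not the real firmware tables):

```python
# Illustrative model of GPU Boost temperature down-binning. The 15 MHz bin
# size matches Turing; the threshold values below are assumptions.
BIN_MHZ = 15

def effective_clock(set_clock_mhz, temp_c, bin_every_c=10, start_c=40):
    """Drop one 15 MHz bin for every `bin_every_c` degrees above `start_c`."""
    bins_lost = max(0, (temp_c - start_c) // bin_every_c)
    return set_clock_mhz - bins_lost * BIN_MHZ

print(effective_clock(2130, 48))  # 2130 MHz holds while the card stays cool
print(effective_clock(2130, 65))  # two bins lost -> 2100 MHz
```

The practical takeaway is the same as in the post: a curve point validated at 46C is effectively a different (hotter, lower-binned) operating point at 65C.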


----------



## keikei

Any SLI users here (non-WC)?


----------



## Nizzen

keikei said:


> Any SLI users here (nonWC)?


Yes, but not on my 3900X. I need 4-slot spacing, so I use X299 for SLI, to get enough air to the first card.


----------



## keikei

Nizzen said:


> Yes, but not on my 3900x. I need 4 slot space, so I use X299 for SLI. To get enough air for the first card.


Airflow/temps are my main concern. Which slots are occupied?


----------



## Nizzen

keikei said:


> Air flow/temp is my main concern. What slots are occupied?


On my ASUS X299 Apex: slot 1 and slot 3, using the 4-slot NVLink bridge.

With 3-slot spacing you need water cooling, or else GPU #1 will throttle hard and/or the fans will spin at 100% under load.


----------



## keikei

Nizzen said:


> On my Asus X299 Apex: Slot 1 and slot 3. Using 4 space nvlink.
> 
> 3 space, and you need watercooling, or else gpu nr1 will throttle hard, and/or fans will spin 100% with load.


What are the temp differences between the 2 cards? FE?


----------



## Nizzen

keikei said:


> What are the temp differences between the 2 cards? FE?


With 4-slot spacing and 2x MSI 2080 Ti Trio X (a 2.75-slot cooler), the temperature difference under load is about 12-14C.

With 3-slot spacing, the best cooler is a "blower" style, at least for GPU #1.


----------



## keikei

Nizzen said:


> With 4 slot spacing and 2x msi 2080ti trio X (2,75 slot cooler), the temp difference in load is about 12-14c.
> 
> With 3 slot space, the best cooler is "blower" style. Atleast for gpu nr 1.


I appreciate the feedback. I have another card untouched, and seeing as I'm getting a Nitro soon, I'll need the extra power. I'm willing to deal with the SLI gamble if there's a gud chance for moar frames.


----------



## Nizzen

keikei said:


> I appreciate the feedback. I have another card untouched and seeing as im getting a Nitro soon, illl need the extra power. I'm willing to deal with the SLI gamble if theres a gud chance for moar frames.


Battlefield V in DX11 with the SLI fix scales VERY well in 4K.

Many other games work great with SLI, and others don't. Have fun!


----------



## Antsu

Imprezzion said:


> Removable, the Enzotechs come with pre-applied thermal tape which is actually very high quality stuff. The others are applied with Akasa thermal tape. The big cut-to-size sheets of it.
> 
> I did the finger test on the VRM and even after 30 minutes of looping Superposition they barely get warm to the touch on 380w. Nice.
> 
> I'm going back to the KFA2 BIOS now and testing some more. PSU isn't here yet but as long as I don't load the CPU too much it should hold.
> 
> EDIT: Tried the KFA2, still the same issues I had with it even tho i'm controlling radiator fans now. 41% minimum fanspeed which is 1400RPM on my push-pull 120mm's. Not exactly quiet for idle. I mean, it's perfectly do-able under load but idle?
> 
> Plus, the power limit is way higher, it barely touches 100% now of the 126% available but the score in Superposition actually was 200 points worse then with the EVGA BIOS even tho that limited slightly. Also, coil whine with KFA2, not with EVGA. I'm going back to the EVGA one. Let the limit stay at 380w. This way I have all the benefits of 1.093v and lower temperature and IF it ever gets into a game or situation where it wants to draw 400w+ endagering the card it will throttle itself. Sounds like a perfect plan for fun and safety
> 
> I mean, I can try XOC just for the heck of it... (Who am I kidding, it's already flashing)
> 
> EDIT2: It does not like 1.125v and like, 500w of power draw. It runs it, and barely gets any hotter, got up to 46c there at 50% fanspeed, but it isn't very stable lol. It barely clocks any higher, like 2190Mhz is possible now, but the risk is quite large drawing over 500w all over the place and the improvement is tiny. It gained about 100 points at best.
> 
> So yeah, back to EVGA FTW3 Ultra. That works by far the best out of all the BIOS for my card and application. The KFA2 would be better if fanspeed wasn't locked to 41% with 1400 RPM..



I actually just got the same card yesterday and slapped my X61 + G10 on it. I can run 2190MHz @ 1.125V on the core too, but my memory clocks like **** now. I had it stable at +1050 on air (with 2900RPM fans, though), and now the max is +800-900 depending on load, even with heavy airflow over the card. It is Micron though, so maybe it's just more sensitive to temperature? I already have some Alphacool heatsinks and Akasa thermal tape on the way; we'll see if that improves the situation.


----------



## Lurifaks

Hi,

What do you think is better for the longevity of the fans: a zero-fan BIOS, or having them constantly on?


----------



## philhalo66

Anyone else with the FTW3 Hybrid card care to share your temps? I'm not sure if it's just my card, but this 2080 Ti seems to run hotter than the surface of the sun. Coming from a 1080 Ti SC that almost never broke 60C at 65% fan speed, I'm blown away to see this card, with two Corsair SP120L fans at max speed, barely keeping it below 60C. Does Turing really run this hot?


----------



## Emetsys

Hello,

I have a Gaming X Trio with a high XUSB.

There are few compatible BIOSes, but I found the Galax HOF 10 Years Edition.

I'm currently running an ASUS 366W BIOS, and it's a bit better than the stock one (330W).

Can the HOF BIOS be dangerous for my card?

Thanks in advance


----------



## truehighroller1

philhalo66 said:


> Anyone else with the FTW3 Hybrid card care to share your temps? I'm not sure if its just my card but this 2080 Ti seems to run hotter than the surface of the sun. Coming from a 1080 Ti SC that almost never broke 60C with 65% fan speed im blown away to see this card with 2 corsair SP120L's fans barely keeping it below 60C with the fans max speed. Does turing really run this hot?



Yep. I would make sure you're getting good airflow through your case, but yes, they run hot. Just open the side of your case and see if your temps move much; if not, it's normal.


----------



## Imprezzion

Antsu said:


> I actually just got the same card yesterday and slapped my X61 + G10 on it. I can run 2190MHz @ 1.125V on the core too, but my memory clocks like **** now. I had them stable at +1050 on air (with 2900RPM fans tho ) and now max is +800-900 depending on load, even with heavy airflow on the card. They are Micron tho, so maybe they are just more sensitive to temperature? I already have some Alphacool heatsinks and Akasa thermal tape on the way, will see if it improves that situation.


Been gaming all night at 2145MHz @ 1.093v, around 46-48C. Perfectly stable.

My memory ran +1000 on air (Samsung) and now seems to do +1200 with no issues with the Enzotech heatsinks. Whether it's truly stable is a little doubtful, though: after a few hours of Borderlands 3 with no issues in-game, closing the game and going back to the desktop made the memory completely spaz out, with the desktop and YouTube glitching all over the place.. hehe..

As for the EVGA Hybrid question: that temp is pretty high, even for a 120mm. EVGA lets you repaste the card without losing the warranty, right? Just try remounting it with fresh, proper paste.

A Galax HOF BIOS on an X Trio?
Of course it's dangerous. It's an unlimited 2000W BIOS with 1.125v. Pay attention to your temperatures and limit the power to something like 24% (480W), so you don't instantly nuke the card if something goes wrong.

Also, the fans and RGB might not work anymore.
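For reference, that 24% figure falls out of the power slider being a percentage of the BIOS's base power target. A quick sketch (the 2000W base is the XOC BIOS mentioned above; other base values are just examples):

```python
def power_limit_percent(target_watts, base_watts):
    """The power-limit slider is a percentage of the BIOS base power target."""
    return 100 * target_watts / base_watts

# Capping a 2000 W XOC BIOS at roughly 480 W:
print(power_limit_percent(480, 2000))  # 24.0
```

The same math explains why the identical 380W draw reads as a very different percentage on two BIOSes with different base targets.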


----------



## philhalo66

truehighroller1 said:


> Yep. I would make sure that you're getting good airflow out of your case but yes they're hot. Just open the side of your case and see if your temps move much if not, then it's normal.


I've got a new case coming in tomorrow, a Graphite 780T, and I'll be using an EVGA 360mm CLC for my 9900K. I plan to put the GPU rad in the front pulling in fresh air along with two Corsair LL120s, with the 360mm cooler venting heat out the top and a 140mm Noctua fan venting heat out the back. I'm hoping that drops the temps down to the mid 40s. Right now, with the fans maxed, it loads at 53C and boosts to 2055MHz even in Metro Exodus maxed out. I'm still blown away by the insane amount of heat this thing puts out; it rivals my 9900K at 5GHz 1.355V, which is really saying something.


----------



## Imprezzion

I have my 9900K's EK Phoenix 280 in the front of my Phanteks Evolv X as intake, and the Kraken X52 for the GPU as exhaust in the top. So far temps are great and perfectly stable at 46-48C. They don't creep up slowly, so my internal case temps aren't very high even with the CPU loaded. I want to keep it like this: flipping the GPU rad in the top to intake might drop GPU temps slightly, but case temps would go up massively, since I'd only have the rear exhaust left.

For my specific case and cooler setup, I think this is the best arrangement.

But yes, the amount of heat coming off the GPU rad is insane haha.


----------



## truehighroller1

philhalo66 said:


> I got a new case coming in tomorrow its a Graphite 780T and ill be using an EVGA 360MM CLC for my 9900K and i plan to put the GPU rad in the front pulling in fresh air along with 2 corsair LL120's with the 360mm cooler venting heat out the top as well as a 140mm noctua fan venting heat out the back, I'm hoping that drops the temps down to the mid 40's. right now with the fans maxed it loads at 53C and seems to turbo to 2055Mhz even in metro exodus maxed out. Im still blown away at the insane amount of heat this thing puts out, it rivals my 9900K at 5Ghz 1.355V which is really saying something.


Sounds like you know what you're doing from your comment, I'll give you that. My dual-fan ASUS 2080 Ti OC, which I have water-cooled, never goes over 40C. Water-cool it and you'll be way happier with the card.



Imprezzion said:


> I have my 9900K's EK Phoenix 280 in the front of my Phanteks Evolv X as intake and the Kraken X52 for the GPU as outtake in the top. So far temps are great and perfectly stable at 46-48c. They don't rise slowly or whatever so my internal case temps with the CPU loaded up aren't very high and don't rise so I kinda wanna keep it like this as reversing the GPU rad in the top to intake would drop GPU temps slightly maybe but would mean the case temps go up massively as I only have rear outtake then.
> 
> For my specific case and cooler setup I think this is the best.
> 
> But yes, the amount of heat coming from the GPU rad is insane haha.


I bet that comment was helpful to him. Love you three. Forever ever.. inside joke to op, just ignore this part, they get it.


----------



## Emetsys

Imprezzion said:


> Galaxy HOF on a X Trio?
> Of course it's dangerous. It's a unlimited 2000w BIOS with 1.125v. Pay attention to your temperatures, limit the power to like, 24% (480w) so you don't instantly nuke it in case something goes wrong.


Isn't it limited to 400W? I'm talking about this BIOS: https://www.techpowerup.com/vgabios/215860/galax-rtx2080ti-11264-190726

My card is water-cooled, so the fans/RGB are not a problem, and the temps are OK, but yes, I don't want to pull too much power through the regulators.


----------



## Imprezzion

Emetsys said:


> Isn't it limited to 400w? I'm talking about this bios: https://www.techpowerup.com/vgabios/215860/galax-rtx2080ti-11264-190726
> 
> My card is watercooled, so the fan/rgb is not a problem, and the temp are ok, but yes I don't want to draw to much power of the regulators.


Yeah, that one is limited to 400W. I thought you meant the 20th anniversary one, which is 2000W.

I have kind of a weird problem with my VRAM overclock, though.
I normally run +1000, which is stable in every way, so I tried pushing a bit higher. +1200 seems to run fine in games; FPS is slightly higher and scores scale in benchmarks, so it isn't error-correcting too much. But whenever I shut down a game and go back to the desktop, the VRAM totally flips out and crashes the entire driver, or black-screens and such. Has anyone else seen this with a high VRAM OC?


----------



## Antsu

Imprezzion said:


> Been gaming all night with 2145Mhz @ 1.093v around 46-48c. Perfectly stable.
> 
> My memory with aircooling ran +1000 (Samsung) and now seems to do +1200 with no issues with the Enzotechs heatsinks. It is a little doubtful whether it is stable it not as after a few hours of Borderlands 3 with no issues in-game and closing the game with going back to my desktop the memory completely spazzed out and the desktop and YouTube glitched all over the place so.. hehe..


Your memory results are encouraging; too bad I already ordered those cheap aluminium sinks. I don't think those will ever get used now, as I see my local store has some Enzotechs... God damn you, dude; my GPU loves you, but my wallet really hates you. Spending €70 on a few pieces of copper to MAYBE OC the memory a bit higher really makes you question your hobbies...


----------



## philhalo66

truehighroller1 said:


> Sounds like you know what you're doing from your comment. I'll give you this. My dual fan oc Asus 2080ti which I have water cooled, never goes over 40c. Water cool it, you'll be way happier with the card.
> 
> 
> 
> I bet that comment was helpful to him. Love you three. Forever ever.. inside joke to op, just ignore this part, they get it.


I'd like to get a full-cover water block, but I'm just not comfortable doing custom water cooling anymore. I haven't done it since the GeForce 200 days, and it's a lot more work than I care to get into.


----------



## truehighroller1

philhalo66 said:


> i'd like to get a full cover waterblock, but im just not comfortable doing custom watercooling anymore. Haven't done it since the geforce 200 days and it's a lot more work than i care to get into.


This one was my first and agreed, it was a job.


----------



## philhalo66

truehighroller1 said:


> This one was my first and agreed, it was a job.


Yeah, it's a lot of work to drain the loop every few months and refill it/test for leaks. And if it leaks, good luck: no manufacturer will cover that.

On a different topic, has anyone with the EVGA 2080 Ti Hybrid tried a third-party fan? I got a Corsair SP120L and it won't do the full 2700 RPM; it stops at 2010 like some sort of soft lock.


----------



## kithylin

philhalo66 said:


> Yeah it's a lot of work to drain the loop every few months and refill it/test for leaks. If it leaks lol good luck no manufacturer will cover that.
> 
> On a different topic, anyone with the EVGA 2080 Ti Hybrid try a third party fan? i got a corsair SP120L and it wont do the full 2700 RPM it stops at 2010 like some sort of soft lock.


Then you're either using the wrong fluids or designing the loop (and the metals within it) poorly. I've been water cooling my computers since the early 2000's. It's really simple. Use similar metals in the loop and just straight distilled water and I typically run my loops 1-4 years without draining things. And when I do finally take it apart the parts inside are just as clean as the day I built the loop. Just add more and top it off once a month and everything's hunky dory. Sorry, kinda off topic but I just felt compelled to chime in.. the misconception of having to drain a water loop twice a year is silly.


----------



## philhalo66

kithylin said:


> Then you're either using the wrong fluids or designing the loop (and the metals within it) poorly. I've been water cooling my computers since the early 2000's. It's really simple. Use similar metals in the loop and just straight distilled water and I typically run my loops 1-4 years without draining things. And when I do finally take it apart the parts inside are just as clean as the day I built the loop. Just add more and top it off once a month and everything's hunky dory. Sorry, kinda off topic but I just felt compelled to chime in.. the misconception of having to drain a water loop twice a year is silly.


I won't pretend to be an expert, that's for sure, but every single expert I've heard from says to flush the loop every 6-8 months even with similar metals. Who knows, maybe they're full of crap? I don't remember my older loops too well, but I do remember the coolant turning a funny color after a few months. Not algae or anything like that, but it kinda stained the GPU and CPU blocks along with the tubes. It's been a long time, so maybe I'm forgetting some details.


----------



## kithylin

philhalo66 said:


> i wont pretend to be an expert that's for sure, but every single expert i have heard from say to flush the loop every 6-8 months even with similar metals. Who knows maybe they're full of crap? I don't really remember too well my older loops but i do remember the coolant kinda turning a funny color after a few months like not algae or anything like that but it kinda stained the gpu and cpu block along with the tubes. Been a long time so maybe im forgetting some details.


Yeah it depends on what fluids you use. Some fluids have to be changed more often than others. Some fluids you can just drain it and refill. Some fluids with dyes in em you have to completely disassemble the entire loop, take all the blocks off and scrub the insides twice a year. Personally I'm all about performance and low maintenance instead of appearance so I just go straight distilled water and leave it for a long time. Usually the only time I ever drain my loops is if I need to change something, like a hardware upgrade.


----------



## Mooncheese

philhalo66 said:


> Yeah it's a lot of work to drain the loop every few months and refill it/test for leaks. If it leaks lol good luck no manufacturer will cover that.
> 
> On a different topic, anyone with the EVGA 2080 Ti Hybrid try a third party fan? i got a corsair SP120L and it wont do the full 2700 RPM it stops at 2010 like some sort of soft lock.


You're making liquid cooling out to be more daunting than it is. I've had a soft tubing loop since 2017, zero leaks, and I only flushed the loop after upgrading to the 2080 Ti 3-4 months ago. So 2.5 years without having to flush the loop. I only use distilled water, no pastel garbage, which will aggregate and harden in your fin arrays, requiring a complete disassembly of your blocks.

If you want more cooling performance, you're going to need to increase your radiator surface area. Full stop. Getting your 120mm fan to run at 2700 RPM (full blast) instead of 2100 RPM may help by 2-3 degrees, but it completely defeats the primary benefit of liquid cooling: relative silence.

I've taken a look at your CLC (YouTube is your friend) and essentially it's an Asetek pump (they all are) that can be modified: the tubing is cut at the base of the pump and replaced with soft tubing to a larger radiator of your choosing.

EVGA 2080 Ti Hybrid installation (with a good view of the pump and barbs): 




How to modify any AIO, including yours, so that it will work with any radiator(s): 




I recommend going with the biggest, thickest rad you can shoehorn into your case. If you can fit two, the more the merrier (just string them together). 

It's funny, because this is basically a half-assed way of doing a loop.

I'm telling you right now that EK's soft tubing loop + fittings is nearly bulletproof; the hoses are extremely robust, impervious to rupture and puncture, and they go onto the barbs extremely well. You're better off just stepping up to a soft tubing loop with a full water block.

Stop monkeying around with AIOs. They have their uses (for those who swap GPUs frequently and can't afford / don't want to buy a $100+ water block every time), but if you only upgrade your GPU every 3 years, dude, just step up to a soft tubing loop.

Edit: 

And flushing the loop, even if I had to do it every week, is a non-event. Here's what flushing the loop looks like:

1. Power down the PC, disconnect power, all USB peripherals and DisplayPort: 2 min
2. Take the side panels off: 2 min
3. Pick the PC up and place it on a table: 1 min
4. Disconnect motherboard and GPU power: 30 sec
5. Unscrew the pump/res top (or the top fitting plug on a distribution plate) and attach a 1/4" fitting with a coolant line to it, for refilling and to let air enter the loop from the top as liquid exits the drain port at the bottom: 1 min
6. Attach a drain hose to the bottom port of the pump/res/distro plate, same as above: 1 min
7. Drain the loop: 2 min
8. Add fresh distilled water while power-cycling: 3-4 min
9. Remove the fill and drain hoses, button up the pump/res/distro plate
10. Return the PC to its original position, reinstall the side panels, plug in peripherals, DisplayPort and PSU power, power on: 3 min

Draining and filling the loop can literally be done in 15-20 minutes. 

You're making it out to be something dreadful because you've never done it.

I'm telling you right now it's easier than it sounds. If you can't do a loop for lack of money, that's one thing, but you're kidding yourself if you think this is complicated or difficult.

The above process looks something like this somewhere in the middle: https://youtu.be/XXMqMGJ2l-c?t=530


----------



## Mooncheese

https://www.ekwb.com/shop/ek-kit-classic-rgb-p360

$319, and it includes their leak tester. Add another $150 for a Quantum block and you're looking at a full loop for under $500.


----------



## kithylin

Mooncheese said:


> https://www.ekwb.com/shop/ek-kit-classic-rgb-p360
> 
> $319 and it includes their leak tester. Add another $150 for a quantum block youre looking at a full loop under $500.


Similar kit (I can't find anything different?) listed for $289: https://www.ekwb.com/shop/ek-kit-classic-rgb-s360


----------



## Gonzo28

Hey there, I just got my Zotac 2080 Ti blower. I installed the EK Water Blocks water block and now I've realised that it has a power limit of 112%. Can anyone help me? Is flashing a BIOS dangerous on this card? Is there a BIOS for this card that raises the power target? Can I flash any 2080 Ti BIOS on it? Questions upon questions, thanks in advance.

The device ID is 10DE 1E07 19DA 1503.

The forum says non-A for the blower, but if I understand that correctly, my device ID shows that it's an "A" chip,

but I can't find anything about it in the tutorial?!


----------



## Mooncheese

kithylin said:


> Similar kit (I can't find anything different?) listed for $289: https://www.ekwb.com/shop/ek-kit-classic-rgb-s360


For only $30 more, the $319 version comes with a 360 PE; the $289 kit comes with a 360 SE, a significantly slimmer rad.


----------



## Mooncheese

Gonzo28 said:


> Hey there, i just got my zotac 2080ti blower. I installed the ek water blocks water cooler and now i realised that it has a power limit of 112%. Can anyone help me? Is flashing a bios dangerous to this card? is there a bios for this card to enhance the power target? Can i flash any 2080ti bios on it? questions over questions, thanks in advance .
> 
> device id is 10DE 1E07 19DA 1503
> 
> the forum says non-a for the blower, but if i understand that correctly, my device id shows that its an "a" chip
> 
> but i cant find anything in the tutorial?!


I had the exact same problem with the Alienware Aurora variant I had before the XC2 I have now; although it was a 300A, for whatever reason it was limited to 112% PT and like 280W. Reference PCB is reference PCB: if you're certain you have a 300A, you can flash any 300A BIOS, including mine @ 340W, and including the 2080 Ti FE.


----------



## Mooncheese

I also posted this over in the water-cooling section, but I figured I'd share it here because more and more people are breaking into water cooling and a lot of us here are under water:

I recently upgraded from soft tubing with EK's D5 140 EX RES pump/res combo to a distro plate + DDC pump and 14/10 hard acrylic, and I can't run the DDC above 55% RPM without it being louder than I'd like; even at 55% it still sounds like a fish tank. The D5 140 I was using before (which I still have on hand) was nowhere near as loud, so I looked at data comparing DDC to D5 in terms of noise, flow rate and pressure.

A D5 at 100% RPM flows 1500 liters per hour at 3.9 meters of head. A DDC at 100% RPM flows 1000 liters per hour at 5.2 meters of head. My D5 at 70% RPM sounds as loud as my DDC at 50% RPM. 70% of 1500 is 1050; 50% of 1000 is 500. 70% of 3.9 m is about 2.7 m; 50% of 5.2 m is about 2.6 m. So my D5 at 70% RPM, at the same or lower noise output as the DDC at 50% RPM, has double the flow rate at around the same head pressure.

I actually started digging into the flow rate data not just because of the noise (I really need to run this DDC at no more than 35% RPM for it to be nearly inaudible, ideally 25%, but then my temps suffer terribly) but because my GPU core temp went up 7C! My 2080 Ti was no more than 43C under full load @ 340W with an undervolt of 1.013v (2070 MHz core, 8150 MHz memory). Luckily, EK sells a standalone acetal top for any D5 pump for $50, so I didn't need to buy a new pump.

DDC @ 100% RPM, GOOD LORD THAT'S A BUZZ. Think refrigerator level of hum. I'm not kidding. 

https://www.ekwb.com/shop/ek-xtop-revo-d5-acetal

D5 top arrives in 5 days, I can't wait for my rig to no longer sound like a fish tank. Hell I could run the D5 at 50% RPM and it would be completely inaudible and I'm still looking at 750 liters per hour vs 500.
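The back-of-envelope scaling above can be sketched in a few lines. This assumes flow and head scale linearly with pump RPM, which real pump curves only approximate, so treat the numbers as rough estimates rather than datasheet values:

```python
# Back-of-envelope comparison of D5 vs DDC at reduced RPM, assuming flow
# and head scale roughly linearly with pump speed (real pump curves are
# not perfectly linear, so these are estimates).

def scaled(flow_max_lph, head_max_m, rpm_fraction):
    """Estimate flow (L/h) and head (m) at a fraction of max RPM."""
    return flow_max_lph * rpm_fraction, head_max_m * rpm_fraction

d5_flow, d5_head = scaled(1500, 3.9, 0.70)    # D5 at 70% RPM
ddc_flow, ddc_head = scaled(1000, 5.2, 0.50)  # DDC at 50% RPM

print(f"D5  @ 70%: {d5_flow:.0f} L/h, {d5_head:.1f} m head")
print(f"DDC @ 50%: {ddc_flow:.0f} L/h, {ddc_head:.1f} m head")
```

At roughly equal noise levels, the D5 comes out around 1050 L/h vs 500 L/h at comparable head, which is the whole argument for swapping the DDC out.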


----------



## sultanofswing

Distro plates are cool and all but they won't come anywhere near my build.

Also, on flow rates: it all depends on the setup, so you really need an accurate flow meter to test with.

I have the Aquacomputer High Flow sensor in mine, which has been tested to be as accurate as a $700 lab-grade test meter, so I know exactly what the flow rate in my loop is.

I am running triple D5 pumps in series in my setup and this gives me 400 L/h.

The manufacturer's flow rating is with the pump running open (just pumping without being connected to anything).
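That last point can be illustrated with a toy model: the loop settles at the flow where the pump's pressure curve meets the loop's resistance curve, which is always below the open (unrestricted) rating. The linear pump model and the loop resistance coefficient below are made-up illustrative numbers, not measured data:

```python
# Illustrative sketch: why a pump's rated (open) flow overstates in-loop flow.
# The operating point is where the pump's pressure-flow curve meets the
# loop's resistance curve. All numbers here are hypothetical; real pump
# curves come from the manufacturer's datasheet.

def loop_flow(head_max_m, flow_open_lph, k_loop):
    """Find the flow where a linearized pump curve meets a quadratic loop curve.

    Pump:  head = head_max * (1 - Q / flow_open)
    Loop:  head = k_loop * Q**2
    Solve  k*Q^2 + (head_max/flow_open)*Q - head_max = 0 for Q > 0.
    """
    a = k_loop
    b = head_max_m / flow_open_lph
    c = -head_max_m
    return (-b + (b * b - 4 * a * c) ** 0.5) / (2 * a)

# D5-like pump (3.9 m max head, 1500 L/h open flow) in a fairly restrictive
# loop (hypothetical resistance coefficient):
q = loop_flow(3.9, 1500, 2e-5)
print(f"Estimated in-loop flow: {q:.0f} L/h")  # well below the 1500 L/h rating
```

With these made-up numbers the intersection lands in the high 300s of L/h, the same ballpark as the measured 400 L/h above, despite a 1500 L/h open rating.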


----------



## z390e

@sultanofswing Great post. One question: why such a variance on MEM temps? There's a 13-Celsius difference between 1 and 3. What's causing that kind of variance?


----------



## sultanofswing

z390e said:


> @sultanofswing Great post. One question, why such a variance on MEM temps? 13 celsius difference between 1 and 3. Whats causing that type of variance?


Not 100% sure. I always noticed it but never really bothered to look into it. The card is under a Bykski full-cover block, so it's possible it could be a mounting issue, but I won't know unless I pull it apart.

I do know the memory closest to the VRM will run warmer too. I'd have to check PX1 to see if that's the case.


----------



## MrTOOSHORT

^^^

My temp separation is consistent with yours, but about 3-5°C warmer. Less rad than you, and on a HC block. I think you're fine.


----------



## Imprezzion

Gonzo28 said:


> Hey there, i just got my zotac 2080ti blower. I installed the ek water blocks water cooler and now i realised that it has a power limit of 112%. Can anyone help me? Is flashing a bios dangerous to this card? is there a bios for this card to enhance the power target? Can i flash any 2080ti bios on it? questions over questions, thanks in advance .
> 
> device id is 10DE 1E07 19DA 1503
> 
> the forum says non-a for the blower, but if i understand that correctly, my device id shows that its an "a" chip
> 
> but i cant find anything in the tutorial?!


1E07 is an A chip; 1E04 is non-A, so you're good there. Flash the KFA2/Galax 380W BIOS from the OP.
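For reference, checking A vs non-A from a GPU-Z-style ID string boils down to a lookup on the device ID (the second field). A minimal sketch: the two device IDs are the known 2080 Ti variants mentioned above; the helper name and everything else is illustrative:

```python
# Quick check of whether a 2080 Ti is an "A" chip from its PCI ID string,
# as reported e.g. by GPU-Z ("10DE 1E07 19DA 1503" = vendor, device,
# subsystem vendor, subsystem device). Only the two known 2080 Ti device
# IDs are handled; this is an illustrative helper, not a real tool.

CHIP_BY_DEVICE_ID = {
    "1E07": "TU102-300A-A1 (A chip)",
    "1E04": "TU102-300-A1 (non-A chip)",
}

def identify_chip(id_string):
    vendor, device, *_ = id_string.split()
    if vendor != "10DE":           # 10DE is NVIDIA's PCI vendor ID
        return "not an NVIDIA GPU"
    return CHIP_BY_DEVICE_ID.get(device, "unknown device ID")

print(identify_chip("10DE 1E07 19DA 1503"))  # the Zotac blower above: A chip
```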


----------



## ssateneth

I didn't see anyone in this thread talk about taking more drastic measures to bypass the XUSB error to flash a higher power limit BIOS

====================
"XUSB FW component of the input GPU firmware image is imcompatible [sic], please use
a newer version of GPU firmware image for this product."

BIOS Cert 2.0 Verification Error, Update aborted.

Nothing changed!

ERROR: Invalid firmware image detected.
=====================

I will let you guys know that you CANNOT use a CH341A SPI flash programmer to bypass this limit / XUSB error. Using a CH341A will brick your 2080 Ti, and you will NOT be able to flash it back into a working state with either the CH341A or NVFlash or patched NVFlash, even if it is detected in windows, even with using the same BIOS that was on the card before.

Source: Myself.


----------



## sultanofswing

ssateneth said:


> I didn't see anyone in this thread talk about taking more drastic measures to bypass the XUSB error to flash a higher power limit BIOS
> 
> ====================
> "XUSB FW component of the input GPU firmware image is imcompatible [sic], please use
> a newer version of GPU firmware image for this product."
> 
> BIOS Cert 2.0 Verification Error, Update aborted.
> 
> Nothing changed!
> 
> ERROR: Invalid firmware image detected.
> =====================
> 
> I will let you guys know that you CANNOT use a CH341A SPI flash programmer to bypass this limit / XUSB error. Using a CH341A will brick your 2080 Ti, and you will NOT be able to flash it back into a working state with either the CH341A or NVFlash or patched NVFlash, even if it is detected in windows, even with using the same BIOS that was on the card before.
> 
> Source: Myself.


Not sure what you are trying to do but I CH341A flashed a few 2080ti's without any issues.
Sounds like you used the wrong settings in the Programmer software.


----------



## spin5000

I was going to buy an MSI Gaming X Trio off someone I know, but he ended up selling it. Now I know someone else who said they'd sell me a Zotac AMP, but A) he's asking $20 more than the Gaming X Trio, and B) it's just the AMP, not the AMP Extreme Core, let alone the AMP Extreme.

It's really hard to find comparison reviews and forum threads about the Zotacs (most are about MSI, Gigabyte or ASUS), so how much worse is the Zotac AMP non-Extreme than the MSI Gaming X Trio?

Super high-end models aside (MSI Lightning, Zotac AMP Extreme, Gigabyte Aorus Extreme, etc.), the MSI Gaming X Trio seems to cool insanely well and overclock well too (well into 2000+ MHz core sustained) from all the reviews and forum threads I've read. How comparable is the Zotac AMP non-Extreme in terms of pure cooling (I don't care about noise) and overclocking? The Zotac also seems to have a lower max power limit.

I know the Zotac AMP non-Extreme has a 2.5 or 2.75-slot cooler, but I've seen reviews of two cards, both with 2.5/2.75-slot coolers, where there were still fairly big differences in cooling between them, so it's not a given that both cards will cool similarly just because both have 2.5/2.75-slot coolers...


----------



## Talon2016

ssateneth said:


> I didn't see anyone in this thread talk about taking more drastic measures to bypass the XUSB error to flash a higher power limit BIOS
> 
> ====================
> "XUSB FW component of the input GPU firmware image is imcompatible [sic], please use
> a newer version of GPU firmware image for this product."
> 
> BIOS Cert 2.0 Verification Error, Update aborted.
> 
> Nothing changed!
> 
> ERROR: Invalid firmware image detected.
> =====================
> 
> I will let you guys know that you CANNOT use a CH341A SPI flash programmer to bypass this limit / XUSB error. Using a CH341A will brick your 2080 Ti, and you will NOT be able to flash it back into a working state with either the CH341A or NVFlash or patched NVFlash, even if it is detected in windows, even with using the same BIOS that was on the card before.
> 
> Source: Myself.


I flashed it with a cheap usb programmer without issue. Not sure what you did, but I had no issues.


----------



## Imprezzion

spin5000 said:


> I was going to buy an MSI Gaming X Trio off someone I know but he ended up selling it. Now, I know someone else who said they'd sell me a Zotac AMP but A) he's asking $20 more than the Gaming X Trio, B) it's just the AMP, not the AMP Extreme Core let alone the AMP Extreme.
> 
> It's really hard to find comparison reviews and forum threads about the Zotacs (most are about MSI, Gigabyte, or ASUS) therefore how much worse is the Zotac AMP non-extreme than the MSI Trio X Gaming?
> 
> Super high-end models aside (MSI Lightning, Zotac AMP Extreme, Gigabyte Aorus Extreme, etc.), the MSI Gaming X Trio seems to cool insanely good and overclock well too (well into 2000+ MHz core sustained) from all the reviews and forum threads I've read. How comparable is the Zotac AMP non-extreme in terms of pure cooling (I don't care about noise) and overclocking? The Zotac also seems to have lower max power limit
> 
> 
> I know the Zotac AMP non-extreme has a 2.5 or 2.75 slot cooler but I've seen reviews of 2 cards, both with 2.5/2.75 slot coolers, where there were still fairly big differences in cooling between the 2 cards therefore it's not automatically a given card both cards will have similar cooling just because both have 2.5/2.75 slot coolers........


Tom's has a great review of the 2080 (non-Ti) AMP:
https://www.tomshardware.com/reviews/zotac-geforce-rtx-2080-amp,5839-5.html

It seems like a good card overall. About 65°C on auto fan speed, and the regular AMP is a reference PCB with an A chip, so about the best combination you can get honestly, as it has far wider BIOS-flashing compatibility than an X Trio. Not a bad buy at all.


----------



## Mooncheese

sultanofswing said:


> Distro plates are cool and all but they won't come anywhere near my build.
> 
> Also on Flow rates, it all depends on the setup so you really need to get an accurate flow meter to test with.
> 
> I have the Aquacomputer High Flow sensor in mine which has been tested to be as accurate as a 700 dollar lab grade test meter so I know exactly what the flow rate in my loop is.
> 
> I am running triple D5 pumps in series in my setup and this give me 400lph.
> 
> The Manufacturers flow rating of the pumps is with them running open (just pumping without being connected to anything).


Great post and data! Ah, I see, so 1000 L/h is with zero resistance. What is your opinion on D5 vs DDC? Why are you running three D5s in series, for head pressure?



z390e said:


> @sultanofswing Great post. One question, why such a variance on MEM temps? 13 celsius difference between 1 and 3. Whats causing that type of variance?


My card does the same thing, and it's the memory bank nearest the SLI tab. From what I gather, the leftmost VRM bank communicates with the rightmost VRM bank via traces in the board that route through / near this memory bank, but I could be mistaken (see attached image).





Imprezzion said:


> 1E07 is an A chip. 1E04 is non-A so your good there. Flash the KFA2/Galax 380W BIOS from the OP.





sultanofswing said:


> Not 100% sure. I always noticed it but never really worried to look into it. Card is under a Bykski full cover block so it’s possibly it could be a mounting issue but I won’t know unless I pull it apart.
> 
> I do know the memory closest to the VRM will run warmer too. I’d have to check PX1 to see if that’s the case


The Galax HOF is a non-reference PCB though, with a higher phase count; wouldn't its BIOS load up a lower phase count with additional amperage? That's the BIOS I was running when I experienced card failure about a month ago on my 300A Alienware Aurora variant. The cause of the failure was almost certainly the Micron memory overheating, but I haven't ruled out this BIOS being implicated.

Personally, I wouldn't flash a non-reference-PCB BIOS to a reference-PCB card. It could be completely incidental, but I can't rule it out.

I'm curious to hear what others think, because I'm still curious how EVGA's FTW3 Ultra BIOS would work on my card, considering I'm seeing a power bottleneck at 340W some of the time. But having experienced a failed card, I won't flash a non-reference-PCB BIOS to a reference PCB unless I can get ample assurance that it's safe.


----------



## Mooncheese

ssateneth said:


> I didn't see anyone in this thread talk about taking more drastic measures to bypass the XUSB error to flash a higher power limit BIOS
> 
> ====================
> "XUSB FW component of the input GPU firmware image is imcompatible [sic], please use
> a newer version of GPU firmware image for this product."
> 
> BIOS Cert 2.0 Verification Error, Update aborted.
> 
> Nothing changed!
> 
> ERROR: Invalid firmware image detected.
> =====================
> 
> I will let you guys know that you CANNOT use a CH341A SPI flash programmer to bypass this limit / XUSB error. Using a CH341A will brick your 2080 Ti, and you will NOT be able to flash it back into a working state with either the CH341A or NVFlash or patched NVFlash, even if it is detected in windows, even with using the same BIOS that was on the card before.
> 
> Source: Myself.


Yikes! Good to know and sorry about your loss! What did you do with the bricked card? What cards and BIOS are affected by this error? Thanks for the warning!


----------



## sultanofswing

Mooncheese said:


> Yikes! Good to know and sorry about your loss! What did you do with the bricked card? What cards and BIOS are affected by this error? Thanks for the warning!



Nah, he just doesn't know the proper settings to use, I bet.

I run three D5s so that I can run them at a slower speed and still have plenty of flow.
I will only run D5s, personally.


----------



## J7SC

...just a little fun with Quake II RTX on 2080 Ti NVLink (no issues with dual cards). This is v1.1 @ 1080 HD. I was going to download v1.2 for the 4K DSP monitor, but instead just started to shoot a few bad (viral muted?) monsters.


----------



## JustinThyme

spin5000 said:


> I was going to buy an MSI Gaming X Trio off someone I know but he ended up selling it. Now, I know someone else who said they'd sell me a Zotac AMP but A) he's asking $20 more than the Gaming X Trio, B) it's just the AMP, not the AMP Extreme Core let alone the AMP Extreme.
> 
> It's really hard to find comparison reviews and forum threads about the Zotacs (most are about MSI, Gigabyte, or ASUS) therefore how much worse is the Zotac AMP non-extreme than the MSI Trio X Gaming?
> 
> Super high-end models aside (MSI Lightning, Zotac AMP Extreme, Gigabyte Aorus Extreme, etc.), the MSI Gaming X Trio seems to cool insanely good and overclock well too (well into 2000+ MHz core sustained) from all the reviews and forum threads I've read. How comparable is the Zotac AMP non-extreme in terms of pure cooling (I don't care about noise) and overclocking? The Zotac also seems to have lower max power limit
> 
> 
> I know the Zotac AMP non-extreme has a 2.5 or 2.75 slot cooler but I've seen reviews of 2 cards, both with 2.5/2.75 slot coolers, where there were still fairly big differences in cooling between the 2 cards therefore it's not automatically a given card both cards will have similar cooling just because both have 2.5/2.75 slot coolers........


I can't speak for the current models, but they had one of the highest clock rates on the 1080 Ti, and at a lower price point, until the miners ruined the market.


----------



## Imprezzion

Mooncheese said:


> Great post and data! Ah I see, so 1000 lph is with zero resistance, what is your opinion on D5 vs DDC? Why are you running 3 D5 in serial, for head pressure?
> 
> 
> 
> My card does the same thing and it's the memory bank that is nearest to the SLI tab. From what I gather the left most VRM bank is communicating with the right-most VRM bank via traces in the board that route through / near this memory bank but I could be mistaken (see attached image).
> 
> 
> 
> 
> 
> 
> 
> Galax HOF is non-reference PCB though, has a higher number of phases, wouldn't it load up the lower number of phases with additional amperage? Because this is the BIOS that I was running when I experienced card failure about a month ago with my 300A Alienware Aurora variant. The cause of the failure is most definitely the Micron memory which overheated, but I haven't ruled out this BIOS being implicated in the failure.
> 
> Personally I wouldn't flash a non-reference PCB BIOS to a reference PCB card. It could be completely incidental, but I can't rule it out.
> 
> I'm curious to hear what others think because I am still curious as to how EVGA's FTW Ultra BIOS would work on my card considering I am seeing a power bottleneck at 340W some of the time but I having experienced a failed card I won't flash a non-reference PCB BIOS to reference PCB unless I can get ample assurance that it's safe.


Well, what kind of proof do you want for it to be safe? I mean, my card is a Gainward Phoenix GS under a G12 + Kraken X52. It's a full reference PCB, A chip, with Samsung memory.

Any BIOS will actually flash on reference without bricking it. Not everything will work properly, but it will flash and boot.

- KFA2/Galax 380W: works perfectly fine with one exception: fan speed is locked to 41% minimum and 100% maximum, meaning the lowest your fans can run is 1400 RPM, so it's loud at idle.
- EVGA FTW3 Ultra / HC: the best BIOS for reference. Gives you a little less power than the KFA2 one, but enough for 1.093v. Fan speed and RGB work fine.
- HOF 20th Anniversary: works fine on reference. Fans work, power is unlimited, voltage does 1.125v fine, might lose a DP port though?
- Gigabyte Windforce: works fine with one exception: if you have a triple-fan card with 3 connectors, only the middle fan responds to PWM. The rest stay at idle RPM.
- ASUS Strix: works fine but loses a DP port and maybe some fan control.


----------



## J7SC

...per my post above, a quick update re. Quake II RTX. Just downloaded v1.3... clearly improved eye candy (can hardly wait to free up the 4K DSP monitor, currently on the testbench).


----------



## sultanofswing

J7SC said:


> ...per my post above, a quick update re. Quake II RTX. Just downloaded v1.3...clearly improved eye candy (....can hardly wait to free up 4K DSP monitor, currently on testbench)


Very nice. Next gen release, I plan to update my whole system, and I may go back to SLI if next gen works out.
I talk to Vince (Kingpin) quite a bit, and according to him the 3080 Ti Kingpin is already in the works and should release a lot faster, so I may just snag up two of them.

SLI/NVLink may be hit or miss, but if we would quit getting half-assed games that are nothing but betas, we'd probably see more benefits.


----------



## J7SC

sultanofswing said:


> Very nice, Next gen release I plan to update my whole system and I may go back to SLI if next gen works out.
> I talk to Vince (Kingpin)quite a bit and according to him 3080ti Kingpin is already in the works and should release a lot faster so I may just snag up 2 of them.
> 
> SLI/NVLink may be hit or miss but if we would quit getting half asses developed games that are nothing but Betas we would probably be able to see more benefits.


 
...I'm not a hardcore gamer and I use SLI/NVLink for some productivity apps as well. Still, with the games I do play, I've had zero problems (incl. frame times etc). Then there's the enabling of 'CFR' (checkerboard tile) SLI instead of AFR... still early days with that, and it may not be widely applicable to the 2080 Ti / 3080 Ti, but the industry is moving to mGPU in coming gens...

Below is a re-post of Metro Exodus on (Titan) RTX w/ CFR


----------



## spin5000

Imprezzion said:


> spin5000 said:
> 
> 
> 
> I was going to buy an MSI Gaming X Trio off someone I know but he ended up selling it. Now, I know someone else who said they'd sell me a Zotac AMP but A) he's asking $20 more than the Gaming X Trio, B) it's just the AMP, not the AMP Extreme Core let alone the AMP Extreme.
> 
> It's really hard to find comparison reviews and forum threads about the Zotacs (most are about MSI, Gigabyte, or ASUS) therefore how much worse is the Zotac AMP non-extreme than the MSI Trio X Gaming?
> 
> Super high-end models aside (MSI Lightning, Zotac AMP Extreme, Gigabyte Aorus Extreme, etc.), the MSI Gaming X Trio seems to cool insanely good and overclock well too (well into 2000+ MHz core sustained) from all the reviews and forum threads I've read. How comparable is the Zotac AMP non-extreme in terms of pure cooling (I don't care about noise) and overclocking? The Zotac also seems to have lower max power limit
> 
> 
> I know the Zotac AMP non-extreme has a 2.5 or 2.75 slot cooler but I've seen reviews of 2 cards, both with 2.5/2.75 slot coolers, where there were still fairly big differences in cooling between the 2 cards therefore it's not automatically a given card both cards will have similar cooling just because both have 2.5/2.75 slot coolers........
> 
> 
> 
> Tom's has a great review of the 2080 non Ti AMP.
> https://www.tomshardware.com/reviews/zotac-geforce-rtx-2080-amp,5839-5.html
> 
> It seems like a good card overall. About 65c on Auto fanspeed and the regular AMP is a reference PCB with an A chip so about the best combination you can get honestly as it has a way eider compatibility with BIOS flashing than a X Trio. Not a bad buy at all.

I'm wondering about the 2080 Ti though, not the non-Ti...


----------



## Mooncheese

sultanofswing said:


> Nah he just doesn't know the proper settings to use I bet.
> 
> I run 3 D5's that way I can run them at a slower speed and still have plenty of flow.
> I will only Run D5's personally.


You didn't answer my question though: why do you prefer the D5 over the DDC? I'm looking for input here, as I have a D5 Revo top coming for my D5 140, which I'm thinking of using to replace the DDC that came with the distro plate.

How do the two compare in terms of flow rate, temps and noise, in your opinion? Thanks!



Imprezzion said:


> Well, what kinda proof you want for it to be safe? I mean, my card is a Gainward Phoenix GS under a G12 + Kraken X52. It's a full reference PCB A chip with Samsung memory.
> 
> Any BIOS will flash on reference without bricking it actually. Not everything will work properly but it will flash and boot.
> 
> - KFA2/Galax 380W: Works perfectly fine with 1 exception. Fanspeed is locked to 41% minimum and 100% maximum meaning the lowest your fans can run is 1400 RPM so it's loud idle.
> - EVGA FTW3 Ultra / HC: The best BIOS for reference. Gives you a little less power compared to the KFA2 one but enough for 1.093v. fanspeed and RGB works fine.
> - HOF 20th anniversary: works fine in reference. Fans work, power is unlimited, voltage does 1.125v fine, might lose a DP port tho?
> - Gigabyte Windforce: works fine with 1 exception, if you have a triple fan card with 3 connectors only the middle fan responds to PWM. The rest stays at idle RPM.
> - ASUS Strix: works fine but loses a DP port and maybe less fan control.


Well this is what I'm trying to get more information about: how reference-PCB cards handle BIOSes that were designed for a greater power phase count. How long have you been using a non-reference BIOS on a reference PCB, how are the temps, any performance difference, etc.?

I've got my eye on this BIOS: https://www.techpowerup.com/vgabios/207291/evga-rtx2080ti-11264-181107

If I remember correctly sultanofswing said that this works with newer 300a reference PCB cards and was what he was using before upgrading to KP? 

I have a newer XC2 Ultra manufactured in May of 2019. 

What should I do to avoid bricking the card? Reading 'ssateneth's' recounting of bricking a card is concerning. 

Thanks for any help with this.


----------



## z390e

sultanofswing said:


> Very nice, Next gen release I plan to update my whole system and I may go back to SLI if next gen works out.
> I talk to Vince (Kingpin) quite a bit and according to him the 3080 Ti Kingpin is already in the works and should release a lot faster, so I may just snag up 2 of them.
> 
> SLI/NVLink may be hit or miss, but if we would quit getting half-assed games that are nothing but betas, we would probably be able to see more benefits.


----------



## Mooncheese

J7SC said:


> ...I/m not a hardcore gamer and use the SLI/NVLink for some productivity apps as well. Still, with the games I do play, I've had zero problems (incl. on frame time etc). Then there's the enabling of 'CFR' (checkerboard tile) SLI instead of AFR....still early days with that, and may be not widely applicable to 2080 Ti / 3080 Ti, but the industry is moving to mGPUs in coming gens...
> 
> Below is a re-post of Metro Exodus on (Titan) RTX w/ CFR
> 
> https://www.youtube.com/watch?v=HX5pNpVMShA
> 
> https://www.youtube.com/watch?v=PBO1jDM3atA


Something is seriously off with the performance here! 

With my 2080 Ti @ 2070 MHz I'm seeing 120 FPS at the start of the game as the camera pans down to Artyom taking the top off the sewer entrance, dipping to 85 FPS while descending into the tunnel and then settling around 100 FPS inside. This is at 3440x1440, which is about 40% fewer pixels than 3840x2160 (4K is ~67% more pixels), but still, that's over 200% more FPS! This is with Ray Tracing on High and everything else maxed out, Hairworks on. I can upload a video later, but it's rather boring gameplay as I spend a lot of time trying to see the difference between RT on and off. Not kidding though: it's 90-110 FPS in the same section where that single Titan RTX is doing 30 FPS @ 4K.

I do notice that the Titan RTX dips down to 1680-1750 MHz though, and I am running my GDDR6 at 8150 MHz (the Titan RTX uses GDDR6 as well, not HBM2). I figured the Titan RTX was basically as fast as a 2080 Ti.
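For reference, the raw pixel-count math behind that resolution comparison is just arithmetic, nothing card-specific:

```python
# Pixel counts of the two resolutions being compared
uw = 3440 * 1440    # ultrawide 1440p -> 4,953,600 pixels
uhd = 3840 * 2160   # 4K UHD          -> 8,294,400 pixels

print(round(uhd / uw, 2))            # 1.67 -> 4K pushes ~67% more pixels
print(round(100 * (1 - uw / uhd)))   # 40   -> ultrawide is ~40% fewer pixels
```

So at roughly 60% of the pixels, a 3x framerate gap points at DLSS and clocks rather than resolution alone.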

Edit: 

Video uploading now, will provide link shortly.


----------



## Imprezzion

Mooncheese said:


> You didn't answer my question though, why do you prefer D5 over DDC? I'm looking for input in this regard as I have a D5 Revo top coming for my D5 140 that I am thinking about replacing the DDC that came with the distro block with.
> 
> How do the two compare in terms of flow rate, temps and noise in your opinion? Thanks!
> 
> 
> 
> Well this is what I'm trying to get more information about, how reference PCB cards handle BIOS that were designed with greater power phase count. How long have you been using a non-reference BIOS on reference PCB, how are the temps, performance difference etc?
> 
> I've got my eye on this BIOS: https://www.techpowerup.com/vgabios/207291/evga-rtx2080ti-11264-181107
> 
> If I remember correctly sultanofswing said that this works with newer 300a reference PCB cards and was what he was using before upgrading to KP?
> 
> I have a newer XC2 Ultra manufactured in May of 2019.
> 
> What should I do to avoid bricking the card? Reading 'ssateneth's' recounting of bricking a card is concerning.
> 
> Thanks for any help with this.


The Hydro Copper BIOS is pretty much the same as the FTW3 Ultra's. I have only had this specific 2080 Ti for like two weeks, so long-term effects? Not a clue. I'm finding out as we go. As far as I can tell from reading here, phase count isn't really a problem; the differences between BIOSes are more about fan speed, RGB, and I/O.

So, as far as long term goes, I'm not the person to answer that yet. I can't imagine it becoming a problem, as the reference board is built like a tank phase-wise and should handle any load up to 380W just fine provided it's properly cooled.


----------



## JustinThyme

J7SC said:


> ...per my post above, a quick update re. Quake II RTX. Just downloaded v1.3...clearly improved eye candy (....can hardly wait to free up 4K DSP monitor, currently on testbench)


The PG43UQ will probably be my next monitor upgrade; the PG38UQ is great but still only 1440p.


----------



## J7SC

JustinThyme said:


> PG43UQ will probably be my next monitor upgrade on a PG38UQ that is great but still 2K at 1440P


 
:thumb: Here's a little 4K 'hors d'oeuvre' (a la Quake II RTX v1.3)


----------



## Mooncheese

Is this Superposition 4K Optimized score any good? Also, fantastic news! It holds 8150 MHz memory stable; I've been running it like this for about a week, no crashes and a 90% Confidence Rating in OC Scanner. 8200 MHz however is not stable and only gets a 69% Confidence Rating in OC Scanner.

I just checked Google to see what kind of score a 2080 Ti FE at default clocks on air does: 10,500! We are talking about a roughly 29% performance disparity here, 79 FPS to 102 FPS avg, YOWZA!

2080 Ti FE factory clocks: 




My XC2 @ 2070 MHz @ 1.013v, 8100 MHz memory:


----------



## JustinThyme

My issue is going to be space. I have a 27 inch 4K, but honestly I can't tell the difference between 1080p and 4K on a small screen unless I don't scale icons and text, and then they become so small I have to use a telescope to see them. My work laptop is a Dell X1 tablet that has a 4K screen in 13 inches. What a waste. None of our proprietary software will scale, so I have to set the display to 1920x1080 to see text that is still tiny. Now on my man cave 70 inch screen the 4K is night and day. I'm hoping that the 43 inch will make it worth it as well, but I'm in no hurry. First wait for that price to come down, then figure out where I'm gonna put something that big.

My 34 inch 1440P ultrawide is already pushing the limits. I think I can squeeze wider in, as the 34 is already at an angle in the hole, which is just how I sit anyhow. It has about 6 inches of play up and down and can still be in the hole. If I go out of that by a lot, I'll just have to toss or sell off a very nice $5000 solid wood L-shaped desk with hutch and filing cabinets etc. that the wifey got me. I think she'd be more pissed off over that than she was when I pulled up in the driveway with an HD Fatboy. She got over it though LOL.

She won't get on it and forbids the kids to get on it, but I'm OK with that. She is afraid of them because her ex damn near killed himself being an idiot, coming off an interstate ramp in excess of 100mph and losing it. He's permanently disfigured but lived, with one arm anyway. It's not a crotch rocket, I'm too old for that ****. I just like to cruise back country roads. I don't do 4 lane highways or bigger, and damn sure where I live I don't get on the parkway or turnpike with those idiots that scare the crap out of me in a car.


----------



## sultanofswing

Mooncheese said:


> You didn't answer my question though, why do you prefer D5 over DDC? I'm looking for input in this regard as I have a D5 Revo top coming for my D5 140 that I am thinking about replacing the DDC that came with the distro block with.
> 
> How do the two compare in terms of flow rate, temps and noise in your opinion? Thanks! .


The D5 is quieter, has a higher flow rate, and from my experience has just been a better all-around pump honestly.

The biggest issue with the DDC-style pumps is the noise.


----------



## sultanofswing

Next year I'm going to upgrade this tired old 8700k/Z370 setup.
Cannot decide what I want to do though.

I used to be on a X299 setup with a 7820x and have been contemplating going back to X299.

I honestly do not game that much, I mostly tinker and run benchmarks so the X299 setup may be the way to go with a High Core count chip.
Have thought about AMD but still not 100% sure.

We will see what Q3 and Q4 of this year bring.


----------



## EarlZ

What's the best test for a memory overclock to ensure that I am not losing performance?


----------



## Mooncheese

Imprezzion said:


> The Hydrocopper is the same BIOS as the FTW3 Ultra has pretty much. I have only had this specific 2080Ti for like, 2 weeks so long term effects? Not a clue. I'm finding out as we go. As far as I can read from here phase count isn't really a problem. More fanspeed, RGB and I/O on different BIOS.
> 
> So, as far as long term goes I'm not the person to answer that yet. I can't imagine it to become a problem as the reference board is built like a tank phase wise and it should handle any load up to 380w just fine providing it's properly cooled.


Ah I see. I am still mulling it over; to be honest 342W might be enough with the undervolt, as it seldom gets that high @ 2055-2070 MHz. For whatever reason 2100 MHz isn't attainable with my sample, irrespective of the voltage, so 2070 MHz with an undervolt is as good as it gets. But I love that this card does 8150 MHz on the memory stable, and that makes a massive difference in RT. My Port Royal score went from 9200 to 10200 simply going from +500 to +1200 on the memory, about an 11% improvement.
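For a sense of why the memory offset matters so much in RT workloads, here's the peak-bandwidth arithmetic for the 2080 Ti's 352-bit bus. One assumption, flagged in the comments: Afterburner's memory readout is half the effective GDDR6 data rate, so "8150 MHz" corresponds to ~16.3 Gbps per pin.

```python
def gddr6_bandwidth_gbs(effective_gbps_per_pin, bus_width_bits=352):
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width in bytes."""
    return effective_gbps_per_pin * bus_width_bits / 8

# Stock 2080 Ti: 14 Gbps effective on a 352-bit bus
print(gddr6_bandwidth_gbs(14.0))  # 616.0 GB/s, matching the spec sheet

# Assumption: Afterburner's "8150 MHz" is half the DDR rate -> 16.3 Gbps
print(gddr6_bandwidth_gbs(16.3))  # ~717 GB/s, roughly a 16% bandwidth bump
```

A ~16% bandwidth bump lining up with a ~11% Port Royal gain is consistent with the BVH traversal in RT titles being heavily bandwidth-bound.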

As for that promised gameplay footage running Metro Exodus at 90-110 FPS with RT on High, all other settings maxed @ 3440x1440, here it is: https://youtu.be/WzMUaLolL34. I realized that the Titan RTX 3840x2160 footage you linked here was with DLSS off, hence 30 FPS avg on a single Titan RTX and 45 FPS on 2x Titan RTX CFR. DLSS 1.0 in this game has seen a huge improvement with a patch and no longer looks blurry; why anyone would skip DLSS and cripple their framerate is beyond me. I'm seeing 3x the framerate in the same section of the game with RT on High, all other settings maxed. That's about 40% fewer pixels but 3x the framerate. Again, they fixed DLSS and it no longer looks blurry; if you're not using DLSS with RT in this game you're doing it wrong.




Edit: Sharpen in Nvidia Freestyle Filter (Alt+F3) is a preset I use in all of my games, it's basically Lumasharpen from Reshade, which I was a huge fan of. 

I have a somewhat hilarious video where I fight the Volga mutants and Ray Tracing bug at the same time with commentary that I'm uploading now. 



JustinThyme said:


> PG43UQ will probably be my next monitor upgrade on a PG38UQ that is great but still 2K at 1440P


Have you not seen or used a curved ultrawide? They are where it's at! You won't want to look at flat 16:9 ever again; it's like when the standard went from 4:3 to 16:9 all over again, but more pronounced. And movies align perfectly with a 21:9 panel. I have the AW3418DW, love this panel!


----------



## Imprezzion

Mooncheese said:


> Ah I see, I am still mulling it over, to be honest it 342W might be enough with the undervolt as it seldom gets that high @ 2055-2070 MHz. For whatever reason 2100 MHz isn't attainable with my sample, irrespective of the voltage, so 2070 MHz with an undervolt is as good as it gets. But I love that this card does 8150 Mhz on the memory stable and that makes a massive difference in RT. My Port Royal score when from 9200 to 10200 simply going from +500 to +1200 on the memory, a 10% improvement.
> 
> As for that promised gameplay footage running Metro Exodus at 90-110 FPS with RT on High, all other settings maxed @ 3440x1440 here it is. I realized that the Titan RTX 3840x2160 footage you linked to here was with DLSS off, hence 30 FPS avg single Titan RTX and 45 FPS 2x Titan RTX CFR. DLSS 1.0 in this game has seen a huge improvement with a patch and no longer looks blurry, why anyone would not want to use DLSS and cripple their framerate is beyond me. I'm seeing 3x the framerate in the same section of the game with RT on High, all other settings maxed. That's 67% less pixels but 3x the framemrate. Again, they fixed DLSS and it no longer looks blurry, if youre not using DLSS with RT in this game youre doing it wrong. https://youtu.be/WzMUaLolL34
> 
> Edit: Sharpen in Nvidia Freestyle Filter (Alt+F3) is a preset I use in all of my games, it's basically Lumasharpen from Reshade, which I was a huge fan of.
> 
> I have a somewhat hilarious video where I fight the Volga mutants and Ray Tracing bug at the same time with commentary that I'm uploading now.
> 
> 
> 
> Have you not seen or used a curved ultrawide? They are where it's at! You won't want to look at flat 16:9 ever again, it's like when the standard resolution went from 4:3 to 16:9 all over again but more pronounced. And movies align perfectly with a 21:9 panel. I have AW341DW, love this panel!
> 
> https://youtu.be/0afwE6GVLQw


Yeah, memory is a massive boost on RTX 2xxx. Mine luckily does +1100 fine. Samsung memory FTW lol. If this card had come with anything else I would've sold it again lol.

Maxing out at 2055-2070 MHz with voltage not making any difference is most likely temperature. Mine does the exact same on air at around 60-70°C. Now that I have water and 44-48°C load temps on low fan speed, I can run 2130 MHz just fine, and even 2170-2190 MHz if I max the rad fans and run at 38-39°C load. That crashes as soon as it gets above 42°C tho.
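That temperature dependence matches how GPU Boost sheds clock in ~15 MHz bins as the core crosses temperature thresholds. Below is a toy model only; the thresholds are made up to mirror the behavior described in this thread (full boost until ~45°C, then one bin down), not values read from any BIOS:

```python
BIN_MHZ = 15  # GPU Boost adjusts the core clock in ~15 MHz steps

def boost_clock(max_boost_mhz, temp_c, thresholds=(45, 54, 63)):
    """Drop one ~15 MHz bin for each (hypothetical) temperature threshold crossed."""
    bins_dropped = sum(1 for t in thresholds if temp_c >= t)
    return max_boost_mhz - bins_dropped * BIN_MHZ

print(boost_clock(2070, 40))  # 2070 -> below every threshold, full boost
print(boost_clock(2070, 45))  # 2055 -> first bin dropped
print(boost_clock(2070, 63))  # 2025 -> three bins dropped
```

The real boost table has far more entries and also folds in voltage and power headroom; the point is just that colder cards hold higher bins, which is why water lifted the stable clocks here.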


----------



## kithylin

sultanofswing said:


> Next year I'm going to upgrade this tired old 8700k/Z370 setup.
> Cannot decide what I want to do though.
> 
> I used to be on a X299 setup with a 7820x and have been contemplating going back to X299.
> 
> I honestly do not game that much, I mostly tinker and run benchmarks so the X299 setup may be the way to go with a High Core count chip.
> Have thought about AMD but still not 100% sure.
> 
> We will see what Q3 and Q4 of this year bring.


Kind of off topic, but I thought I would mention that the big 3 memory vendors (SK Hynix, Samsung, Micron) have all announced that their DDR5 desktop RAM designs will be hitting markets in 2021. Wikipedia says the minimum JEDEC specification for DDR5 will be 5300 MT/s and the maximum official JEDEC spec will be 6400 MT/s. Single-channel DDR5-5300 works out to about 42.4 GB/s of theoretical bandwidth (DDR5-6400 is 51.2 GB/s). Dual channel in most systems typically nets roughly +80% over single channel in practice, so we should realistically be looking at around 76 GB/s as "common" for the slowest/cheapest desktop DDR5 sold. I don't know what your budget is, but if you can, it may be worth seeing which company (Intel or AMD) will be the first to release a DDR5 HEDT platform with quad-channel RAM in 2021.
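The JEDEC arithmetic behind those figures is just transfers per second times the 8-byte width of a channel:

```python
def ddr_bandwidth_gbs(mt_per_s, channels=1, bus_bits_per_channel=64):
    """Theoretical DDR bandwidth in GB/s: MT/s x 8 bytes per 64-bit channel."""
    return mt_per_s * 1e6 * (bus_bits_per_channel / 8) * channels / 1e9

print(ddr_bandwidth_gbs(5300))     # 42.4 GB/s, slowest JEDEC DDR5, one channel
print(ddr_bandwidth_gbs(6400))     # 51.2 GB/s, fastest JEDEC DDR5, one channel
print(ddr_bandwidth_gbs(6400, 4))  # 204.8 GB/s, the quad-channel HEDT case
```

(DDR5 actually splits each DIMM into two 32-bit subchannels, but the total per DIMM is still 64 bits, so the totals come out the same.)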


----------



## Meisgoot312

Hey guys, been lurking on this thread for a while now, looking for a way to flash a non-A reference 2080 Ti (MSI Ventus GP). I'm going to try the hardware flash using the external test clip (whenever my watercooling parts arrive from China, fingers crossed), since waiting for a new-firmware non-A BIOS with a 300W+ power limit isn't working out (been waiting for months).

Also, I would like to add (since it wasn't mentioned in the first post): I haven't disassembled my GPU yet, but I've noticed in an online teardown that the BIOS chip is actually a 1.8V chip (MX25U8033E), while most of the hardware clip kits are 5V or 3.3V. I have a 2020 MSI Ventus and I haven't checked whether the BIOS chip is the same part, but if it is, then a 5V or even 3.3V flasher will damage it. You would need a 1.8V adapter to obtain safe voltage levels.


----------



## sultanofswing

Meisgoot312 said:


> Hey guys, been lurking on this thread for a while now looking for a way to flash a non-A reference 2080 Ti (MSI Ventus GP). I'm going to try the hardware flash using the external test clip (whenever my watercooling parts arrive from China, fingers crossed), since it looks like waiting for a new FW non-A BIOS with a 300W+ BIOS isn't working out (been waiting for months).
> 
> 
> Also I would like to add (since it wasn't mentioned in the first post). I haven't disassembled my GPU yet, but I've noticed on an online breakdown that the BIOS chip is actually a 1.8V Bios chip (MX25U8033E) and that most of the Hardware Clip kits are 5V and 3.3V. I have a 2020 MSI Ventus and I haven't checked if the BIOS chip is the same chip, but if it is, then a 5V or even 3.3V flasher will damage the BIOS chip. You would need a 1.8V adapter to obtain safe voltage levels.


I ran into this as well; my FE card had a 1.8V MX (Macronix) BIOS chip, but it flashed just fine without using the 1.8V adapter.

I purchased a 1.8v adapter and flashed it with that again and it still worked fine.


----------



## spin5000

I have a Zotac 2080 Ti AMP non-Extreme. Stock clocks and cooling are very nice, with 3DMark Fire Strike (Ultra 4K & Extreme) and Unigine Superposition Extreme 1080p both not passing 56 or 57 degrees with the conservative stock fan curve (which I never use). This results in very consistent clocks in the mid-to-high 1800s or, if I raise the power limit, mid-to-high 1900s.

Regardless of whether I'm hitting mid-to-high 1800s with the default power limit or mid-to-high 1900s with the maxed power limit, the power limit flag/notification is constantly going off (checked in MSI Afterburner & GPU-Z).

So, from everything I've read, it seems well recommended & safe to upgrade this card to the Galax HOF 380W BIOS...


I went over this thread's OP but am a little unclear on BIOS flashing, because it states that newer 2080 Tis have a different firmware which prohibits flashing, but then there's a note which I think says that the newer-firmware cards can still be flashed if you skip using NVFlash. How can you flash a BIOS while skipping the NVFlash part? Isn't NVFlash what's doing the flashing?


----------



## sultanofswing

spin5000 said:


> Have a Zotac 2080 Ti Amp non-extreme. Stock clocks and cooling are very nice with 3D Mark Firestrike (Ultra 4K & Extreme) and Unigine Superposition Extreme 1080P both not passing 56 or 57 degrees with the conservative stock fan curve (which I never use). This results in very consistent clocks in the mid-to-high 1800s or, if I raise power limit, mid-to-high 1900s.
> 
> Regardless of if I'm hitting mid-to-high 1800s with default power limit or mid-to-high 1900s with maxed power limits, power limit flag/notification is constantly going off (checked MSI Afterburner & GPU-Z).
> 
> So, from everything I read, it seems quite recommended & safe to upgrade this card to the Galax HOF 380W BIOS...
> 
> 
> I went over this thread's OP but am a little unclear on BIOS flashing because it states that newer 2080 Tis have a different firmware which prohibits flashing but then there's a note which I think says that the newer firmware cards can still be flashed but you have to skip using NVFlash. How can you flash a BIOS while skipping the NVFlash part? Isn't NVFlash what's doing the flashing??????


You have to hard flash the card with a CH341A programmer. Cooler has to come off and you put a test clip on the bios chip and flash it.

It's easy to do, but it's beyond the scope of your average user.
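If you do go the test-clip route, it's worth dumping the chip twice and comparing the reads before writing anything. The helper below is an illustrative sketch (the filenames and function are hypothetical, not from any flashing tool); it checks that a dump is the full 1 MiB you'd expect from the 8 Mbit MX25U8033E part mentioned in this thread and that two consecutive reads agree:

```python
import hashlib
from pathlib import Path

def verify_dumps(dump_a, dump_b, expected_size=1024 * 1024):
    """Sanity-check two consecutive BIOS-chip reads before flashing.

    Matching, correctly sized dumps are decent evidence the test clip is
    seated and the programmer is reading reliably; a garbage backup is
    worthless the day you need it to restore a bricked card.
    """
    a, b = Path(dump_a).read_bytes(), Path(dump_b).read_bytes()
    if len(a) != expected_size:
        return False, f"unexpected dump size: {len(a)} bytes"
    if a != b:
        return False, "reads differ; reseat the test clip and dump again"
    return True, hashlib.sha256(a).hexdigest()  # keep this hash with the backup
```

Keep the verified dump somewhere safe; it's the only way back if a ROM from the first page turns out not to agree with your card.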


----------



## Meisgoot312

sultanofswing said:


> I ran into this as well, my FE card had a 1.8v MX (Macronix) bios chip but it flashed just fine without using the 1.8v adapter.
> 
> I purchased a 1.8v adapter and flashed it with that again and it still worked fine.


Thanks for the info. I was just concerned since the MX BIOS chip has an "absolute max" of 2.5V. In my industry we're taught to avoid absolute-max values like the plague (for good reason), but a lot of times it's up to the silicon lottery whether or not you fry a chip past that value! Out of curiosity, which BIOS and programmer software did you use? I'm planning on hitting it with the Palit 124% one on the first page.


----------



## sultanofswing

Meisgoot312 said:


> sultanofswing said:
> 
> 
> 
> I ran into this as well, my FE card had a 1.8v MX (Macronix) bios chip but it flashed just fine without using the 1.8v adapter.
> 
> I purchased a 1.8v adapter and flashed it with that again and it still worked fine.
> 
> 
> 
> Thanks for the info, I was just concerned since the MX Bios Chip has an "absolute max" of 2.5V. In my industry we're taught to avoid absolute max values like the plague (for good reason), but a lot of times its up to the silicon lottery whether or not you fry your chip past that value! Out of curiosity, which BIOS and programmer software did you use? I'm planning on hitting it with the Palit 124% one on the first page.

I used Asprogrammer. I’d use the adapter just to be on the safe side. I may have just gotten lucky.


----------



## Meisgoot312

sultanofswing said:


> I used Asprogrammer. I’d use the adapter just to be on the safe side. I may have just gotten lucky.


Thanks for the assist man, I'll post back on here eventually when I get my parts.


----------



## sultanofswing

Meisgoot312 said:


> sultanofswing said:
> 
> 
> 
> I used Asprogrammer. I'd use the adapter just to be on the safe side. I may have just gotten lucky.
> 
> 
> 
> Thanks for the assist man, I'll post back on here eventually when I get my parts.

No problem. My first time flashing it I bricked it; that's when I learned about the different ICs and was able to find the correct IC in AsProgrammer. After that I flashed the card maybe 5 or 6 different times trying different ROMs without any issues. Just don't connect the test clip backwards or you can fry the chip.


----------



## Mooncheese

Imprezzion said:


> Yeah memory is a massive boost on RTX2xxx. Mine luckily does +1100 fine. Samsung memory FTW lol. If this card would've come with anything else I would've sold it again lol.
> 
> 2055-2070Mhz Max with voltage not making a difference with higher clocks is most likely temperature. Mine does the exact same on air around 60-70c. Now that I have water and 44-48c load temps on low fanspeed I can run 2130Mhz just fine and even 2170-2190Mhz if I Max the rad fans and run 38-39c load. This crashes as soon as it gets above 42c tho.


My load temps are the same; it doesn't matter. Even if I do a volt/freq curve with 1.069v @ 2100 MHz it's not stable, but 2070 MHz @ 50mv less and it's rock solid! Color me perplexed! Anyhow, this is OK, because above 2055-2070 MHz it would wattage-throttle in some games @ 340W anyway. So the icing on the cake is that I can run the card with way less voltage and wattage and it basically never throttles below 2055 MHz. It does 2070 MHz until it hits 45°C, then it dips to 2055 MHz from there on out; I think I've seen it dip to 2040 MHz briefly once in the past month or so of ownership. So I'm not sure the FTW3 BIOS would be of any benefit, as I'd only need the extra wattage above my current peak frequency, i.e. 2150 MHz, if that were stable. But alas, this isn't a binned chip, just average to above average (2070 MHz core and 8150 MHz memory @ 1.013v is pretty impressive, at least to me). Love this card honestly, love the temp sensors on the memory and MOSFETs. Oh, and did I mention I got it for $900 out-the-door from Craigslist, and it has 2.5 years remaining on the warranty (manufacture date Sept 2019)!

Here's what the temps look like at 70F ambient after accounting for heat-soak: 








kithylin said:


> Kind of off topic but I thought I would mention that the big 3 memory vendors (SK Hynix, Samsung, Micron) have all announced their DDR5 desktop ram designs will be hitting markets in 2021. Wikipedia says the minimum JEDEC specification for DDR5 will be 5300 Mhz and the maximum JEDEC official spec will be 6400 Mhz. Single-channel DDR5 at 5300 will have to be minimum 51.2 GB/s memory speed, per the JEDEC Spec. Dual channel in most computer systems typically nets roughly +80% over single channel so we should realistically be looking at around 92 GB/s ram speeds as "common" for the new "Desktop" class DDR5 systems bare minimum for the slowest/cheapest DDR5 sold. I don't know what your budget is but if you can it may be worth it to see which company (Intel or AMD) will be the first to release a DDR5 HEDT platform with quad-channel ram in 2021.


Wow, just in time for the 3080 Ti. Might be time to upgrade from the 8700K. And to think that I just spent $200 on Royals solely to increase my memory speed from 3200 MHz to 3600 MHz (I did double the capacity though, and some games run better with more than 16GB).

Hopefully they soon figure out Sony's extremely fast storage tech, where you can load or transition between games in like 4 seconds; that's a platform upgrade that is worth the money. But to be honest, the 8700K @ 5.0 GHz may even be fine with GA102. I seldom see a CPU bottleneck with the 2080 Ti, even in CPU-demanding games (Watch Dogs 2, certain areas of Shadow of the Tomb Raider, i.e. Paititi, though RT @ 80 FPS tends to keep that CPU bottleneck from appearing much).


----------



## JustinThyme

Mooncheese said:


> Ah I see, I am still mulling it over, to be honest it 342W might be enough with the undervolt as it seldom gets that high @ 2055-2070 MHz. For whatever reason 2100 MHz isn't attainable with my sample, irrespective of the voltage, so 2070 MHz with an undervolt is as good as it gets. But I love that this card does 8150 Mhz on the memory stable and that makes a massive difference in RT. My Port Royal score when from 9200 to 10200 simply going from +500 to +1200 on the memory, a 10% improvement.
> 
> As for that promised gameplay footage running Metro Exodus at 90-110 FPS with RT on High, all other settings maxed @ 3440x1440 here it is. I realized that the Titan RTX 3840x2160 footage you linked to here was with DLSS off, hence 30 FPS avg single Titan RTX and 45 FPS 2x Titan RTX CFR. DLSS 1.0 in this game has seen a huge improvement with a patch and no longer looks blurry, why anyone would not want to use DLSS and cripple their framerate is beyond me. I'm seeing 3x the framerate in the same section of the game with RT on High, all other settings maxed. That's 67% less pixels but 3x the framemrate. Again, they fixed DLSS and it no longer looks blurry, if youre not using DLSS with RT in this game youre doing it wrong. https://youtu.be/WzMUaLolL34
> 
> Edit: Sharpen in Nvidia Freestyle Filter (Alt+F3) is a preset I use in all of my games, it's basically Lumasharpen from Reshade, which I was a huge fan of.
> 
> I have a somewhat hilarious video where I fight the Volga mutants and Ray Tracing bug at the same time with commentary that I'm uploading now.
> 
> 
> 
> Have you not seen or used a curved ultrawide? They are where it's at! You won't want to look at flat 16:9 ever again, it's like when the standard resolution went from 4:3 to 16:9 all over again but more pronounced. And movies align perfectly with a 21:9 panel. I have AW341DW, love this panel!
> 
> https://youtu.be/0afwE6GVLQw


Curved ultrawide is what I'm using now: ASUS PG348Q, 3440x1440, 100Hz, G-Sync. The ASUS and the Alienware use the same panel, LG IIRC. The one you listed in the vid is the same as the ASUS PG349UQ; the change? 120Hz. I'm looking for a large 4K display. The PG43UQ is 43 inch and 4K.


----------



## Mooncheese

JustinThyme said:


> Curved ultrawide is what I’m using now. ASUS PG348Q 3440x1440 100Hz. Gsync. The ASUS and Alienware uses the same panel, LG IIRC. The one you listed in the vid is the same as the ASUS PG349UQ. Change? 120Hz. Looking for large 4K display. The PG43UQ is 43 inch and 4K.


The PG348Q isn't the same panel as the AW3418DW; it's an older panel, and the latter is newer and can overclock to 120 Hz without issue (I've been running mine at 120 Hz since around June 2018). There is zero downside to overclocking the panel; they just ship at 100 Hz because panel lottery is like silicon lottery. One panel will do 120 Hz, the next will only do 110, so they ship them at 100 Hz, but most will do 120 Hz no problem.

AW3418DW and X34P share the same panel. 

This is an older model; Alienware released the AW3420DW mid-2019, which shares the same panel with the LG34GK940G and, compared to the AW3418DW, has better color gamut and contrast and does 120 Hz native, but isn't true HDR. The only ultrawide HDR panel we have at present is the PG35VQ, and if I had $2500 on hand it would probably be sitting on my desk with the AW3418DW in the closet as a back-up panel.

If you're hell-bent on 4K, don't forget that there's also ultrawide 4K, 5120x2160, available since at least 2018:




But I'm eagerly waiting for Samsung Odyssey G9, which is slated for release next year, which a 3080 Ti under full water block would drive nicely. 

I don't know, Samsung Odyssey G9 vs PG35VQ (HDR) that would be a tough decision! 

I don't know how you could ever consider going back to 16:9!






ALL games and genres look better on curved 21:9. ALL OF THEM. Racing, 3rd person shooter / action adventure, 1st person shooter, racing sims, flight sims, there's not a single genre where I'm like "boy I wish I was at 16:9". 

https://www.theverge.com/circuitbre...g-monitor-49-inch-qled-display-specs-ces-2020


----------



## z390e

JustinThyme said:


> Curved ultrawide is what I’m using now. ASUS PG348Q 3440x1440 100Hz. Gsync. The ASUS and Alienware uses the same panel, LG IIRC. The one you listed in the vid is the same as the ASUS PG349UQ. Change? 120Hz. Looking for large 4K display. The PG43UQ is 43 inch and 4K.


You have games that are hitting 120fps+ @ 4k with a single 2080ti? Damn. Which games?


----------



## Mooncheese

z390e said:


> You have games that are hitting 120fps+ @ 4k with a single 2080ti? Damn. Which games?


Oh yeah dude, F1 2019 maxed out hits 120 FPS. If I turn Ray Tracing off in both Shadow of the Tomb Raider and Metro Exodus, they both run at 120 FPS (Metro Exodus runs at 120 FPS with like 69% load, as shown here):




Dude, honestly, 3440x1440 is where it's at; you couldn't pay me any amount of money to trade this for 3840x2160. The added peripheral vision with the curvature, and the fact that 1440p is plenty sharp, even with aided vision @ 3ft away. I just don't understand the allure of flat 16:9 4K, at all.


BTW, I edited my previous comment after you replied, here's the part you missed: 

I don't know how you could ever consider going back to 16:9!






ALL games and genres look better on curved 21:9. ALL OF THEM. Racing, 3rd person shooter / action adventure, 1st person shooter, racing sims, flight sims, there's not a single genre where I'm like "boy I wish I was at 16:9".


----------



## Mooncheese

In case you missed it I added Wolfenstein: New Colossus running @ 120 FPS @ 3440x1440 with 1080 Ti to previous post in a post script edit. Also, F1 2019 and all of my uploads are nearly maxed out / maxed out!


----------



## spin5000

sultanofswing said:


> You have to hard flash the card with a CH341A programmer. Cooler has to come off and you put a test clip on the bios chip and flash it.
> 
> It’s easy to do but it’s above the scope of your average user.


Wow. Short of manufacturers releasing a program to update the BIOS, the CH341A method is the only way? I read here https://www.overclock.net/forum/28073720-post8480.html that the newer cards with the newer firmware can still be flashed, as long as the BIOS being flashed onto the card is also a BIOS from a newer-firmware card. So you're saying this is actually not the case, despite what's written in that post, and we need to use the CH341A method?

NVFlash Reports:
- XUSB-FW Version ID: 0x70090003
- Build Date: Oct. 9, 2018
- Modification Date: Oct. 21, 2018
- XUSB-FW Build Time: 2018-10-04


----------



## Medizinmann

z390e said:


> You have games that are hitting 120fps+ @ 4k with a single 2080ti? Damn. Which games?


E.g. StarCraft II, where on the other hand it actually wouldn't hurt if one only got like 80-100 FPS...
CS:GO will run above 120 FPS…

Besides that... uhm, mostly no... he is talking about 1440p or 1440p widescreens specifically – which is a good idea, since there is no point in 4K for gaming – at least not on the usual monitor.

…it would be great on a TV screen though.


----------



## Medizinmann

sultanofswing said:


> You have to hard flash the card with a CH341A programmer. Cooler has to come off and you put a test clip on the bios chip and flash it.
> 
> It’s easy to do but it’s above the scope of your average user.


Question is - can he try NVFlash first and fall back to the programmer if it fails, or could he brick the card with NVFlash?


----------



## Medizinmann

spin5000 said:


> Have a Zotac 2080 Ti Amp non-extreme. Stock clocks and cooling are very nice with 3D Mark Firestrike (Ultra 4K & Extreme) and Unigine Superposition Extreme 1080P both not passing 56 or 57 degrees with the conservative stock fan curve (which I never use). This results in very consistent clocks in the mid-to-high 1800s or, if I raise power limit, mid-to-high 1900s.
> 
> Regardless of if I'm hitting mid-to-high 1800s with default power limit or mid-to-high 1900s with maxed power limits, power limit flag/notification is constantly going off (checked MSI Afterburner & GPU-Z).
> 
> So, from everything I read, it seems quite recommended & safe to upgrade this card to the Galax HOF 380W BIOS...


You mean the Galax/KFA2 380W BIOS - the Galax HOF 2000W BIOS isn't safe by any means…;-)



> I went over this thread's OP but am a little unclear on BIOS flashing because it states that newer 2080 Tis have a different firmware which prohibits flashing but then there's a note which I think says that the newer firmware cards can still be flashed but you have to skip using NVFlash. How can you flash a BIOS while skipping the NVFlash part? Isn't NVFlash what's doing the flashing??????


It seems this depends on the date the card was manufactured. The text says that cards past Q2 2019 are problematic – can you see when your card was manufactured?


----------



## spin5000

Medizinmann said:


> You mean the Galax/KFA2 380W BIOS - the Galax HOF 2000W BIOS isn't safe by any means…;-)
> 
> 
> 
> It seems this depends on the date the card is manufactured. The text says that cards past Q2 2019 are problematic – can you see when your cards has been manufactured?


I see lots of dates:

NVFlash Reports:
- XUSB-FW Version ID: 0x70090003
- Build Date: Oct. 9, 2018
- Modification Date: Oct. 21, 2018
- XUSB-FW Build Time: 2018-10-04

I'm guessing I have the older, flashable version but the OP does not mention XUSB-FW 0x70090003. It only mentions 0x70060003 and 0x70010003.


I just checked and can confirm that the Galax 380W BIOS is 0x70060003 August 2018 while my Zotac AMP is 0x70090003 October 2018...

Can the different XUSB-FW version affect stability, overclocking, etc.? Can different XUSB-FW use different VRAM timings and therefore affect performance, overclocking, and/or stability? 



Medizinmann said:


> You mean the Galax/KFA2 380W BIOS - the Galax HOF 2000W BIOS isn't safe by any means…;-)


Haha, yes, my bad.


----------



## sultanofswing

spin5000 said:


> Wow. Short of manufacturers releasing a program to update the BIOS, the CH341A method is the only way? I read from here https://www.overclock.net/forum/28073720-post8480.html that the newer cards with the newer firmware can still be flashed as long as the bios being flashed onto the card is also a bios from a newer firmware card. So you're saying this is actually not the case, despite what's written in that post, and we need to use the CH341A method?
> 
> NVFlash Reports:
> - XUSB-FW Version ID: 0x70090003
> - Build Date: Oct. 9, 2018
> - Modification Date: Oct. 21, 2018
> - XUSB-FW Build Time: 2018-10-04



No, the newer cards can be flashed with NVFlash64 within Windows, provided the BIOS you are trying to flash comes from a card that has the changes that were made in 2019.
The only time you have to use the CH341A method is if, say, you have a card from 3/2019 but you want to flash a BIOS that came off a card from 6/2018.

You can download a couple of BIOS files from TPU and try flashing them; if the XUSB does not match, it will let you know without causing any issues.
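For anyone following along, the back-up / inspect / flash sequence described above can be sketched as a short script. This is a hedged sketch, not a definitive procedure: the nvflash64 flags shown (`--save`, `--version`, `-6`) are the ones commonly cited in community flashing guides - verify them against your own nvflash build's help output - and the BIOS filename is just a placeholder.

```python
import subprocess

# Hypothetical filename -- substitute the BIOS you actually downloaded from TPU.
NEW_BIOS = "gigabyte_gamingoc_366w.rom"

def flash_plan(new_bios: str) -> list[list[str]]:
    """The three-step sequence: back up, inspect the new file, then flash."""
    return [
        ["nvflash64", "--save", "backup.rom"],   # 1) back up the current BIOS first
        ["nvflash64", "--version", new_bios],    # 2) print the file's info, incl. XUSB-FW build
        ["nvflash64", "-6", new_bios],           # 3) flash, acknowledging the subsystem-ID mismatch
    ]

if __name__ == "__main__":
    for cmd in flash_plan(NEW_BIOS):
        print(" ".join(cmd))
        # subprocess.run(cmd, check=True)  # only run on the machine with the card installed
```

As noted above, a mismatched XUSB-FW should be rejected at the flash step without harm, which makes trying a candidate BIOS low-risk compared to the CH341A route.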


----------



## smonkie

I'm using a 2080 Ti Duke in an X570 Aorus Master + 3950X with a Noctua D15. The space is very tight between the Noctua and the GPU, and the chipset fan/heat - which is just trapped behind the Duke - is making things worse. GPU temps hover around 80°C during gaming sessions, so I guess it will be bad when summer comes.

That's why I'm thinking of moving the GPU to the second slot. I've read several topics about the loss of bandwidth, but it doesn't seem to affect the fps that much. Has anyone tried this with their 2080 Ti?


----------



## Mooncheese

smonkie said:


> I'm using a 2080 Ti Duke in an X570 Aorus Master + 3950X with a Noctua D15. The space is very tight between the Noctua and the GPU, and the chipset fan/heat - which is just trapped behind the Duke - is making things worse. GPU temps hover around 80°C during gaming sessions, so I guess it will be bad when summer comes.
> 
> That's why I'm thinking of moving the GPU to the second slot. I've read several topics about the loss of bandwidth, but it doesn't seem to affect the fps that much. Has anyone tried this with their 2080 Ti?


My first PC had a massive air cooler rivaling Noctua's D14 / D15 in size, the Phanteks TC14PE - great cooler, but I noticed the same problem with temps. Essentially the air cooler obstructs airflow that would otherwise pass over the backplate, and many backplates aren't just passive / aesthetic but serve as heatsinks that help a bit. When I switched from that to an AIO, my temps on the primary card dropped like 5C, because I had my ceiling fans and rear 140mm fan mounted as intake (3 Corsair H55s mounted as exhaust in the front of an Air 540 - one AIO for the CPU and the other two with NZXT Kraken G10 brackets on the 780 Tis).

Don't get me wrong, I like the simplicity and bulletproofness of air coolers, but it seems that they do obstruct airflow over the top of a GPU.

You're going to lose some performance at x8 speed (compare Titan RTX @ x8 to x16 here: https://www.overclock.net/forum/69-nvidia/1709322-2080-ti-pci-e-x8.html)
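For a rough sense of what x8 gives up on paper, here's a quick calculation of theoretical PCIe 3.0 link bandwidth (8 GT/s per lane with 128b/130b encoding). Real-world throughput is lower, and as the linked thread shows, the fps impact is much smaller than the raw bandwidth halving suggests.

```python
# Back-of-envelope PCIe 3.0 bandwidth for the x16 vs x8 comparison.
GT_PER_S = 8          # transfers per second per lane (PCIe 3.0)
ENCODING = 128 / 130  # 128b/130b: usable bits per transferred bit

def link_bandwidth_gbps(lanes: int) -> float:
    """Theoretical one-way bandwidth in GB/s for a PCIe 3.0 link."""
    return GT_PER_S * ENCODING * lanes / 8  # /8 converts bits to bytes

print(f"x16: {link_bandwidth_gbps(16):.2f} GB/s")  # ~15.75
print(f"x8:  {link_bandwidth_gbps(8):.2f} GB/s")   # ~7.88
```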

I think it's time to ditch the air cooler brother! AIO or full loop, your choice, but everyone ends up with a full loop eventually. 








----------



## long2905

Guys, last time I shared that I got a Palit Dual 2080 Ti and somehow managed to flash the 300A vbios on it.

Today I got bored and decided to repaste the card and sate my curiosity about what kind of chip I got exactly, thus the picture above.

Good news: it's an A chip! Bad news: the RAM is Micron, though. I have no problem so far OC'ing it +1000.

But then I had a look at the PCB and compared it to the stock Palit Dual PCB online, and they do not match, nor do a few other possibilities like Zotac or EVGA.

Based on the picture above, is it possible to determine which AIB this board belongs to? This card has both the FE fan header and 2 standard VGA fan headers at the bottom right corner, and most PCBs I've compared so far only have one or the other, not both. There is the text "Astron" on the power plug, which may help identify it, but I have yet to find an exact match.


----------



## long2905

Pic of the die itself


----------



## J7SC

long2905 said:


> Guys, last time I shared that I got a Palit Dual 2080 Ti and somehow managed to flash the 300A vbios on it.
> 
> Today I got bored and decided to repaste the card and sate my curiosity about what kind of chip I got exactly, thus the picture above.
> 
> Good news: it's an A chip! Bad news: the RAM is Micron, though. I have no problem so far OC'ing it +1000.
> 
> But then I had a look at the PCB and compared it to the stock Palit Dual PCB online, and they do not match, nor do a few other possibilities like Zotac or EVGA.
> 
> Based on the picture above, is it possible to determine which AIB this board belongs to? This card has both the FE fan header and 2 standard VGA fan headers at the bottom right corner, and most PCBs I've compared so far only have one or the other, not both. There is the text "Astron" on the power plug, which may help identify it, but I have yet to find an exact match.


..Good news re. the A chip. Re. the PCB, check out Buildzoid's YouTube channel - he does a lot of PCB reviews with pics, and you might be able to match it. Re. the 'bad news' on the Micron memory, there continues to be a lot of misinformation spread... While Samsung is preferable and Micron also runs a bit warmer, Micron also seems to use tighter timings. My two 2080 Tis (Aorus, factory water-blocked) both have Micron, and my typical 'oc' is an effective 8222 MHz... I can run it higher than that depending on the app, but fps start dropping even with no artifacts. Currently #3 in Heaven 4 / 4K, and I've spent almost a year now in the top 30 of the 3DMark Port Royal HoF with that VRAM at ambient. Best advice is to cool the GPU AND VRAM as much as possible.


----------



## long2905

Think I found a match. It's actually a Zotac card with a Palit cooler :O

https://www.techpowerup.com/review/zotac-geforce-rtx-2080-ti-amp/


----------



## long2905

J7SC said:


> ..Good news re. the A chip. Re. the PCB, check out Buildzoid's YouTube channel - he does a lot of PCB reviews with pics, and you might be able to match it. Re. the 'bad news' on the Micron memory, there continues to be a lot of misinformation spread... While Samsung is preferable and Micron also runs a bit warmer, Micron also seems to use tighter timings. My two 2080 Tis (Aorus, factory water-blocked) both have Micron, and my typical 'oc' is an effective 8222 MHz... I can run it higher than that depending on the app, but fps start dropping even with no artifacts. Currently #3 in Heaven 4 / 4K, and I've spent almost a year now in the top 30 of the 3DMark Port Royal HoF with that VRAM at ambient. Best advice is to cool the GPU AND VRAM as much as possible.


Yeah, the bad-Micron reputation is what I got from reading here and a few other places like Reddit, but my experience so far is perfectly fine.

Guys, since there are so many of you here putting a waterblock on your 2080 Ti, is there any chance someone has no use for their stock cooler, complete with backplate, anymore? I just need one that fits the FE PCB, preferably with 3 fans. I would love to take it off your hands.


----------



## james dantow

Hi All,

Scrolling through here, lots of great info. Learning a lot - new to OCing. Thanks to everyone.

Current setup:
Zotac 2080 Ti AMP Extreme Core, 3 fans, A-chip card, Samsung vram.
380W BIOS flashed
2160 MHz core and 8200 MHz memory clocks set
Currently it's watercooled with an EKWB CPU block, several Thermaltake fans, and additional copper heatsinks.

It seems to be hitting Vrel during Time Spy at only 1.05V, but at 380W I can't seem to get the voltage to go any higher. Temps are peaking at 39C at sustained load, 21C ambient.

Anything more I can do to this card or is it pretty much maxed out? Seems like there is some thermal headroom there... I couldn't see anywhere if I can flash a BIOS higher than the 380W one.

Any suggestions?

Thank you!


----------



## Mooncheese

Nevermind the fact that Nvidia completely dropped Micron from the cards that they themselves manufacture and sell. 

It doesn't mean that ALL cards with Micron memory will have a problem, and the problem may not manifest if you have the card under a full waterblock with an inordinate amount of radiator surface area and mem temps that don't exceed 40C, but to say that there isn't / wasn't a problem with a percentage of Micron memory failing is disingenuous.

Again, speaking from direct experience: my Micron-memory card ran so hot around the backplate that the previous owner saw fit to fabricate an elaborate replacement backplate, replete with recesses for small fans and deep channels. When he gave that to me along with the Phanteks waterblock I was somewhat puzzled - a waterblock should cool the memory and all components sufficiently by itself. But after affixing the Phanteks block to the card, I quickly discovered to my dismay how incredibly hot the backplate was. I didn't have an IR thermometer, but you wouldn't want to keep your hand on the backplate for more than say 5-10 seconds, because it was uncomfortably hot. The replacement card with Samsung memory is lukewarm on the backplate. (Yes, said card died from Micron memory failure within 4 days of my switching from the factory backplate, which contacted the memory modules, to EK's Quantum backplate, which didn't recommend thermal pad placement on the back side of the PCB where the memory is.)

I have no idea what kind of voltage Micron's memory required but I surmise that it easily ran 20-30C+ warmer than Samsung's.

https://www.pcbuildersclub.com/en/2...ches-from-micron-to-samsung-for-gddr6-memory/

"Nvidia currently has problems with failures of the RTX 2080 Ti. The replacement cards now use Samsung GDDR6 memory instead of Micron.


RTX 2080 Ti: what’s the problem?
The new Turing generation is currently experiencing some glitches. In addition to high prices, poor availability and raytracing at a low level, there has been another problem in recent weeks. Many users reported failures of the flagship RTX 2080 Ti. Primarily it was about problems with the Founders Edition, which Nvidia offers directly. But also with models made by the AIBs there were picture errors or complete crashes. Therefore, Nvidia recently removed the RTX 2080 Ti from its own store. The graphics card is now also available again from Nvidia.

But where did the problems originally come from? The RTX 2080 ti as Founders Edition uses the reference board just like some of the partner cards. The GDDR6 memory came from Micron. Exactly this is said to have caused the problem in combination with cold solder joints and unsprung screw connections of the cooler, as Igor Wallossek showed. Parts of these bugs are fixed by Nvidia now.


Nvidia no longer ships GPUs with micron memory
It is difficult to say how often the memory was responsible for the death of the graphics cards. However, the frequency of the cases shows that the Micron memory probably caused many problems. That’s why Nvidia is now shipping the RTX 2080 Ti with Samsung memory. The RTX 2080 and RTX 2070 as volume models were already converted to Samsung memory earlier. The first models and of course the Founders Edition are or were all equipped with GDDR6 memory from Micron, which might have caused the problems in many cases. That’s why Nvidia is now shipping the RTX 2080 Ti replacement models with Samsung memory. The AIBs are also supposed to receive only bundles with GDDR6 memory from Samsung. Whether there were also other changes is not yet known. It remains to be hoped that Nvidia has solved the problems with it and that there will be no more notable failures."


----------



## spin5000

Does anyone know a good test for 2080 Ti memory OC stability? I'm currently using 3DMark Ultra and have slowly made my way up to maxing out the MSI Afterburner slider at +1500 MHz (8500 MHz, 17000 MHz effective) without a sign of artifacts... Stock mem is 7000 MHz (14000 MHz effective)... This can't be right...
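For anyone double-checking the arithmetic: Afterburner reports GDDR6 at half the data rate, so a +1500 offset on the 7000 MHz baseline works out as below (a quick sketch, assuming the usual double-data-rate convention).

```python
# Sanity check of the Afterburner numbers: 7000 MHz reported baseline
# for a 14 Gbps-effective card, doubled for the "effective" rate.
BASE_MHZ = 7000

def effective_rate(offset_mhz: int) -> int:
    """Return the double-data-rate 'effective' clock for a given slider offset."""
    return (BASE_MHZ + offset_mhz) * 2

print(effective_rate(0))     # stock: 14000
print(effective_rate(1500))  # +1500 slider: 17000
```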




Mooncheese said:


> https://youtu.be/t5memuI5WD4
> 
> Nevermind the fact that Nvidia completely dropped Micron from the cards that they themselves manufacture and sell.
> 
> It doesn't mean that ALL cards with Micron memory will have a problem and the problem may not manifest if you have the card under full waterblock with an inordinate amount of radiator surface area with mem temps that don't exceed 40C, but to say that there isn't / wasn't a problem with a percentage of Micron memory failing is disingenuous.
> 
> Again, speaking from direct experience, my Micron memory card ran so hot about the back-plate the previous owner saw fit to fabricate an elaborate back-plate replete with recesses for small fans and deep channels. When he gave that to me along with the Phanteks water-block I was somewhat puzzled, a water-block should cool the memory and all components sufficiently by itself. But after affixing the Phanteks block to the card I quickly discovered to my dismay how incredibly hot the backplate was. I didn't have an IR thermometer but you wouldn't want to keep your hand on the backplate for more than say 5-10 seconds because it was uncomfortably hot. The replacement card (yes, said card died from Micron memory failure when I switched from the factory back-plate that connected the memory modules with EK's Quantum back-plate that didn't recommend thermal pad placement on the back-side of the PCB where the memory is within 4 days) with Samsung memory is luke warm on the back-plate.
> 
> I have no idea what kind of voltage Micron's memory required but I surmise that it easily ran 20-30C+ warmer than Samsung's.
> 
> https://www.pcbuildersclub.com/en/2...ches-from-micron-to-samsung-for-gddr6-memory/
> 
> "Nvidia currently has problems with failures of the RTX 2080 Ti. The replacement cards now use Samsung GDDR6 memory instead of Micron.
> 
> 
> RTX 2080 Ti: what’s the problem?
> The new Turing generation is currently experiencing some glitches. In addition to high prices, poor availability and raytracing at a low level, there has been another problem in recent weeks. Many users reported failures of the flagship RTX 2080 Ti. Primarily it was about problems with the Founders Edition, which Nvidia offers directly. But also with models made by the AIBs there were picture errors or complete crashes. Therefore, Nvidia recently removed the RTX 2080 Ti from its own store. The graphics card is now also available again from Nvidia.
> 
> But where did the problems originally come from? The RTX 2080 ti as Founders Edition uses the reference board just like some of the partner cards. The GDDR6 memory came from Micron. Exactly this is said to have caused the problem in combination with cold solder joints and unsprung screw connections of the cooler, as Igor Wallossek showed. Parts of these bugs are fixed by Nvidia now.
> 
> 
> Nvidia no longer ships GPUs with micron memory
> It is difficult to say how often the memory was responsible for the death of the graphics cards. However, the frequency of the cases shows that the Micron memory probably caused many problems. That’s why Nvidia is now shipping the RTX 2080 Ti with Samsung memory. The RTX 2080 and RTX 2070 as volume models were already converted to Samsung memory earlier. The first models and of course the Founders Edition are or were all equipped with GDDR6 memory from Micron, which might have caused the problems in many cases. That’s why Nvidia is now shipping the RTX 2080 Ti replacement models with Samsung memory. The AIBs are also supposed to receive only bundles with GDDR6 memory from Samsung. Whether there were also other changes is not yet known. It remains to be hoped that Nvidia has solved the problems with it and that there will be no more notable failures."


Do you think the firmware on the cards/BIOS could also have been responsible for, or contributed to, the issues? The reason I ask is that my Zotac AMP 2080 Ti (non-extreme) comes with what seems like a slightly newer firmware, 0x70090003 (October 2018), rather than the slightly older 0x70060003 (August 2018), which seems to be common. Perhaps there were some improvements and fixes in the newer FW?

This FW thing is the only thing making me nervous about flashing my card's BIOS. I've flashed other cards many times in the past, so it's not the BIOS flashing itself that I'm concerned about - it's the different FW, since nobody seems to really know what the FW does and what was changed between XUSB-FW 0x70090003 (October 2018) and XUSB-FW 0x70060003 (August 2018)...


----------



## J7SC

Mooncheese said:


> (...)


 
This is the gift that keeps on giving...  Please note (again, squared):

1.) Almost all early RTX cards had Micron VRAM, so _any other issue_ would still be a failure of a 'card with Micron', and a lot of self-proclaimed experts jumped on that simple correlation
2.) As reported in this and other threads and sites, some early Samsung-VRAM-equipped cards also failed
3.) The 2080 and 2080 Ti used the same memory controller and Micron memory; the 2080 was not affected
*4.) NVidia finally addressed the issue*, per the report by Techspot below (and others). NVidia: "Limited test escapes from early boards caused the issues some customers have experienced with RTX 2080 Ti Founders Edition. We stand ready to help any customers who are experiencing problems"

It may be that we'll never know. I do think that early Founders Edition cards (and those with related PCBs) are best treated with caution on the used market. I also add - again - that my preference, all else being equal, is Samsung. Yet I have no reason to complain about my experiences with two Micron-equipped cards since December '18, along with many others who have posted on this over the last year.

Source


----------



## z390e

long2905 said:


> yeah the bad Micron myth is what i got from reading here and a few other places like Reddit. but my experience so far is perfectly fine.


There are actual members of this forum who have been burned by the Micron memory in the 2080ti. I would be extra careful OC'ing those cards or making extra sure I kept them incredibly cool.


----------



## Nizzen

z390e said:


> There are actual members of this forum who have been burned by the Micron memory in the 2080ti. I would be extra careful OC'ing those cards or making extra sure I kept them incredibly cool.


There will always be someone with failed hardware  5 pcs of 2080 Ti with Micron here, and none dead YET


----------



## z390e

james dantow said:


> I couldn't see anywhere if I can flash a bios higher than the 380w one.
> 
> Any suggestions?
> 
> Thank you!


I thought both the Galax HOF and the KINGPIN BIOSes would go above 380W? Not sure if those work on that card though.


----------



## long2905

z390e said:


> There are actual members of this forum who have been burned by the Micron memory in the 2080ti. I would be extra careful OC'ing those cards or making extra sure I kept them incredibly cool.


I appreciate the concern and advice, which is why I'm actively looking for a better cooler sans a custom loop, as that would increase the cost and hassle if and when I move to a new card/system, with the next generation looming.

Speaking of which, you don't happen to have a stock cooler lying around that you'd be okay with parting with, do you?


----------



## z390e

Haha, I do, but for a Maxwell Titan and a 1080 (non-Ti). Sadly none for a 2080 Ti.


----------



## J7SC

long2905 said:


> I appreciate the concern and advice, which is why I'm actively looking for a better cooler sans a custom loop, as that would increase the cost and hassle if and when I move to a new card/system, with the next generation looming.
> 
> Speaking of which, you don't happen to have a stock cooler lying around that you'd be okay with parting with, do you?


 
"Speaking of which"...if you cannot find someone willing to part with a stock cooler, and your card's PCB matches the Zotac Amp, there are several water-blocks available, including (but not limited to):


https://www.ekwb.com/configurator/waterblock/3831109810477

https://www.amazon.com/Copper-Graphics-Water-Cooling-2080Ti/dp/B07NV6XYJM


----------



## JustinThyme

z390e said:


> You have games that are hitting 120fps+ @ 4k with a single 2080ti? Damn. Which games?


I’m running 2 cards at 100Hz 2K.
I’m looking to go to 4K ultrawide, for which the selection is just now coming to market. Plenty of 27 inch 4K but still slim pickings on UW. I laughed when I saw a few HUGE 60 inch monitors and was like dayum! Then I read the spec sheet. 1080P


----------



## spin5000

OK, so there are at least 3 different XUSB firmware versions around. I'm assuming flashing the BIOS requires another BIOS with the identical XUSB firmware version, yes?

The reason I'm asking is that, according to the OP, the Gigabyte Gaming OC and Windforce OC have the highest power limit (366 W) for reference-PCB 2080 Tis besides the Galax/KFA2 380 W. So I downloaded 5 or 6 of the Gigabyte Gaming OC and Windforce OC BIOSes and used NVFlash64 to find their firmware versions. I found 2 (both Gaming OC) which use the same XUSB-FW version as my stock Zotac Amp non-extreme (0x70090003). I guess that means I should be able to flash with that particular BIOS?

Also, can anyone explain why the Gigabyte 2080 Ti Gaming OC in the OP has a different colour of text for the word "reference" in the chart? The Windforce OC's "reference" text is blue while the Gaming OC's is light orange. Does light orange mean it's not recommended to use that card's BIOS to flash another reference card, despite the Giga being labelled a reference PCB?
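To put those power limits in perspective, here's a quick sketch relative to the 250 W reference TDP. The wattages are the figures quoted in this thread, not independently verified.

```python
# Power limits as a percentage of the 2080 Ti's 250 W reference TDP.
REFERENCE_TDP_W = 250

def pct_of_reference(watts: int) -> int:
    """Power limit as a whole percentage of the reference TDP."""
    return round(watts / REFERENCE_TDP_W * 100)

for name, w in [("Gigabyte Gaming OC / Windforce OC", 366), ("Galax/KFA2 380 W", 380)]:
    print(f"{name}: {w} W -> {pct_of_reference(w)}% of reference")
```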


----------



## Mooncheese

J7SC said:


> This is the gift that keeps on giving...  Please note (again, squared):
> 
> 1.) Almost all early RTX card had Micron VRAM, so _any other issue_ would still be a failure of a 'card with Micron', and a lot of self-proclaimed experts jumped on that simple correlation
> 2.) As reported in this and other threads and sites, some early Samsung VRAM equipped cards also failed
> 3.) 2080 and 2080 Ti used the same memory controller and Micron memory; 2080 was not affected
> *4.) NVidia finally addressed the issue*, per report by Techspot below (and others). NVidia: "Limited test escapes from early boards caused the issues some customers have experienced with RTX 2080 Ti Founders Edition. We stand ready to help any customers who are experiencing problems"
> 
> It may be that we'll never know. I do think that early Founders Edition cards (and those with related PCBs) are best treated with caution in the used market. I also add - again - that my preference with all else being equal is Samsung. Yet I have no reason to complain with the experiences with two Micron-equipped cards since December '18, along w/ many others that posted on this over the last year.
> 
> Source


1. Yes, but the failure rate of Samsung cards is way lower, and there are reasons behind TU102 failures other than memory - I've heard that a lot of TU102s were dying because of imperfections in the GPU core / die itself, due to the die size.

2. See above.

3. Of course the RTX 2080 isn't going to be affected - it's a 220W TDP card. You have to understand that current runs through traces in the PCB that pass directly through the memory area to the GPU core (and to the memory itself); it's axiomatic that this will heat up the memory.

Again, the RTX 2080 is down 100W, has fewer traces routed through where the memory is located, has less memory requiring less power to begin with, and the card runs way cooler overall. The cause of premature Micron memory failure is that it gets hot enough to fail - we are probably talking about sustained temps of 90-110C under the factory cooler at default fan speed. Buyer A takes the card home to 85 ambient and the card is dead in short order; Buyer B takes the card home, puts it under a full water block, doesn't overclock it, and the Micron memory survives. Samsung memory runs cooler, presumably because it requires less voltage and power to run at a given frequency.

Temperature is the problem. Your Micron memory is doing great because it probably doesn't exceed 35C with 3x 360 radiators dedicated solely to the GPU. Who else is doing that? Answer: next to no one in reality has more than one 360 rad per component, let alone 3 of them just for the GPU(s) (actually 1.5 rads per GPU in your case - I forgot you have SLI).

But even owners of your card had a high failure rate! Ebay was flooded with GB WF Xtreme cards, all from the same seller, likely affiliated with Gigabyte doing RMA turn-arounds where they simply replaced the failed memory (hopefully not with like kind, but that's probably what they did, as the cards sold have no warranty) and then threw them back up on ebay!

Again, the fact of the matter is that Nvidia themselves saw fit to eliminate Micron memory from the cards they themselves manufacture and sell; that fact alone should speak volumes about the source of the high failure rate of the RTX 2080 Ti at and after launch!


----------



## Mooncheese

JustinThyme said:


> I’m running 2 cards at 100Hz 2K.
> I’m looking to go to 4K ultrawide which the selection is just now coming to market. Plenty of 27 inch 4K but still slim pickings on UW. I laughed when I saw a few HUGE 60 inch monitors and was like dayum! Then I read the spec sheet. 1080P


4K is so overrated dude! Just content yourself with 1440p, it looks plenty sharp! Loving 90-120 FPS maxed out on my 3440x1440 panel, no desire for more!


----------



## spin5000

spin5000 said:


> OK, so there's at least 3 different XUSB-firmware versions around. I'm assuming flashing the bios requires another bios with the identical XUSB-firmware version, yes?
> 
> The reason I'm asking is because, according to the OP, the Gigabyte Gaming OC and Windforce OC have the highest power limit (366 W) for reference PCB 2080 Tis besides the Galax/KFA2 380 W. So I downloaded 5 or 6 of the Gigabyte Gaming OC and Windforce OC BIOSs and used NVFlash64 to find their firmware versions. I found 2 (both Gaming OC) which use the same XUSB-FW version as my stock Zotac Amp non-extreme (0x70090003). I guess that means I should be able to flash with that particular BIOS?
> 
> Also, can anyone explain why the Gigabyte 2080 Ti Gaming OC in the OP has different colour text for the word "reference" in the chart? The Windforce OC's "reference" text is blue while the Gaming OC's is light-orange. Does light-orange mean it's not recommended to use that card's BIOS to flash another reference card despite the Giga being labelled a reference PCB?


Anyone??...

I'd love to update my Zotac BIOS for higher power. I've found a Gigabyte Gaming OC BIOS (366 W) and an EVGA XC Ultra BIOS (336 W) that are both XUSB-FW version 0x70090003, like my Zotac AMP non-extreme, so they should flash fine, but I'm getting mixed messages on whether the Gigabyte Gaming OC is a reference PCB and whether it's therefore safe to use its BIOS or not.


----------



## Medizinmann

smonkie said:


> I'm using a 2080Ti Duke in a x570 Aorus Master + 3950X with Noctua D15. The space is very tight between the Noctua and the gpu, and now the chipset fan/heat -which is just trapped behind the Duke, is making things worse. GPU temps hover around 80º during gaming sessions, so I guess it would be bad when summer comes.
> 
> That's why I'm thinking in moving the gpu to the second slot. I've read several topics about the loss of bandwith, but it doesn't seem to affect that much the fps. Has anyone tried this with his 2080Ti?


It actually shouldn't matter much - tests by Gamers Nexus showed like a 1-2% loss in some games, mostly everything within the margin of error - but with cooler temps you might even get better performance, since the GPU should boost higher and hold higher frequencies longer. :thumb:

If you need every FPS and want to squeeze out the last bit of performance, you must inevitably go full water block anyhow… the easiest way IMHO would be an Alphacool GPX Pro, and you can add a CPU block later… the other solution is a hybrid cooler (NZXT etc.).

But for the moment, and since this is just changing the slot, you might just give it a try and put the card in the second slot.

You could do some benchmarks before and after to evaluate. :thumb:

Greetings,
Medizinmann


----------



## Medizinmann

james dantow said:


> Hi All,
> 
> Scrolling through here, lots of great info. Learning a lot - new to OCing. Thanks to everyone.
> 
> Current Setup:
> Zotac 2080ti AMP Extreme Core, 3 fan, A-card, Samsung vram.
> 380w bios flashed
> 2160MHz and 8200MHz clocks set
> Currently its watercooled with an EKWB CPU block and several thermaltake fans, and additional copper heatsinks.
> 
> It seems to be hitting Vrel during timespy at only 1.05v but at 380w, I can't seem to get voltage to go any higher. Temps are peaking at 39C at sustained load, 21C ambient.


Did you try Timespy 4k?

My card (Palit Gaming OC 2080 Ti with the 380 W BIOS) hits 2145 MHz at 1.087 V and 386 W on a regular basis - with all fans blazing at full speed I've seen 1.093 V and 390 W... until temps in the loop go over 40°C - and it holds 330 W at 2100 MHz pretty constantly when playing games, with reasonable noise levels, allowing loop temps up to 52°C.



> Anything more I can do to this card or is it pretty much maxed out? Seems like there is some thermal headroom there... I couldn't see anywhere if I can flash a bios higher than the 380w one.
> 
> Any suggestions?
> 
> Thank you!





z390e said:


> I thought both the Galax HOF and the KINGPIN bios's would go above 380w? Not sure if those work on that card though.


Well, some people reported success with the Galax HOF 2000W BIOS - it seems to mostly work, with some glitches like losing a DP or two and some instability.

If you need stability and all DPs (like I do), it isn't recommended though.

Greetings,
Medizinmann


----------



## Imprezzion

This might sound funny but it is very serious: yes, watercool it, even if it's with a G12 + Kraken X-series AIO or whatever. The stock cooler's fans and RGB draw power directly from the card's power budget. Going to water lowered my power consumption by something like 8-10% compared to 100% fan speed on the stock triple-fan cooler with RGB. Hehe.

Even with a 1.093 V curve OC at 2145-2130 MHz with 8000 MHz memory and the EVGA FTW3 Ultra BIOS, it hardly ever goes over 110% power now and never throttles.


----------



## J7SC

^^ actually makes sense, what with tight PL budgets. My cards are factory water-blocked (so no fans), but I turned off the rather extensive RGB, which translates to between 10 W and 12 W of savings per card

Did a few more Unigine runs to complement some earlier ones I did (Superposition, Heaven 4, Valley). I really like the Unigine benches. I'll do more runs soon, but for now it seems that in Superposition, v1.1 *seems* marginally slower than v1 :headscrat.

4k and 8k Superposition runs at 2190 (2175 - 2205) for single cards, with effective VRAM speed at 8222 MHz - and no RGB


----------



## Mooncheese

J7SC said:


> ^^ actually makes sense, what with tight PL budgets. My cards are factory water-blocked (so no fans), but I turned off the rather extensive RGB which translates to between 10w and 12w savings per card
> 
> Did a few more Unigine runs to compliment some earlier ones I did (Superposition, Heaven 4, Valley). I really like Unigine benches. I'll do more runs soon but for now it seems that with Superposition, v1.1 *seems* marginally slower than v1 :headscrat.
> 
> 4k and 8k Superposition runs at 2190 (2175 - 2205) for single cards, with effective VRAM speed at 8222 MHz - and no RGB


Nice, your 1.1 is about 5% faster avg FPS but your mins and max are way higher than mine!

The difference between 2070 and 2190 MHz on the core (and 8150 vs 8222 on the memory) is around 5%, which correlates neatly with the bump in average performance.
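A quick sanity check on that estimate - assuming near-linear scaling with core clock, which is a best case (real workloads scale sub-linearly, and memory-bound passes follow the VRAM delta instead):

```python
# Back-of-envelope check of the clock deltas quoted above.

def pct_delta(low: float, high: float) -> float:
    """Percent increase from low to high."""
    return (high / low - 1.0) * 100.0

core = pct_delta(2070, 2190)   # core clock difference, ~5.8%
mem = pct_delta(8150, 8222)    # effective VRAM difference, ~0.9%

print(f"core: +{core:.1f}%  mem: +{mem:.1f}%")
```

So the ~5% average-FPS gap is right in line with the core clock gap alone; the memory bump adds under a percent on top.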


----------



## Imprezzion

Well, I think I'm at the end of the tweaking I can do with my card. With the G12 + X52, temps are mid-40s at acceptable fan noise (rad is push-pull). VRAM and VRM are covered with copper heatsinks; I don't have a temp sensor, but after hours of gaming they're hot to the touch, as is the backplate, which has thermal pads under it as well. Hot to the touch can't be much more than 50-60°C.

The EVGA FTW3 Ultra BIOS is on it, curve OC at 1.093 V and 2145-2130 MHz depending on whether it hits the thermal throttle point in the mid-40s, with 8000 MHz memory.

It's rock solid in any game or bench I run, and higher clocks on either memory or core give random black artifacts or crash DirectX, so no room there anymore.

Don't really know what else I can do.. I even put a Phanteks RGB strip in the Kraken and linked that to the rest of my RGB setup so..


----------



## J7SC

Re. Superposition, apart from v1 <> 1.1, I also think some more recent Windows 10 'updates' as well as NVidia driver security-related updates might zap scores just a bit.



Imprezzion said:


> Well, I think I'm at the end of the tweaking I can do with my card. With the G12+X52 temps are mid 40's on acceptable fan noise (rad is push pull), VRAM and VRM is done with copper heatsinks, don't have a temp sensor but after hours of gaming they are hot to the touch as is the backplate which has thermal pads under it as well. Hot to the touch can't be much more then 50-60c so.
> 
> EVGA FTW3 Ultra BIOS is on it, Curve OC at 1.093v 2145-2130Mhz depending on whether it hits the thermal throttle point in the mid 40's or not, 8000Mhz memory.
> 
> It's Rock solid in any game or bench I run and higher clocks on either memory or core gives random black artifacts or crashes direct X so no room there anymore.
> 
> Don't really know what else I can do.. I even put a Phanteks RGB strip in the Kraken and linked that to the rest of my RGB setup so..


With games, a solid 2145 MHz vs, say 2175 MHz (= +1.4%), would barely be noticeable. That said, getting to the highest possible 'stable' speed for most apps is what it's all about. But if your copper heatsinks are hot to the touch after hours of play, would it make sense to mount (another?) 120mm fan blowing 'into' the vertical plane of the card ?


----------



## Mooncheese

Imprezzion said:


> Well, I think I'm at the end of the tweaking I can do with my card. With the G12+X52 temps are mid 40's on acceptable fan noise (rad is push pull), VRAM and VRM is done with copper heatsinks, don't have a temp sensor but after hours of gaming they are hot to the touch as is the backplate which has thermal pads under it as well. Hot to the touch can't be much more then 50-60c so.
> 
> EVGA FTW3 Ultra BIOS is on it, Curve OC at 1.093v 2145-2130Mhz depending on whether it hits the thermal throttle point in the mid 40's or not, 8000Mhz memory.
> 
> It's Rock solid in any game or bench I run and higher clocks on either memory or core gives random black artifacts or crashes direct X so no room there anymore.
> 
> Don't really know what else I can do.. I even put a Phanteks RGB strip in the Kraken and linked that to the rest of my RGB setup so..


Could you upload a screenshot of your freq. curve @ 1.093 V? Also, I'm curious how a reference PCB performs with that BIOS - would you mind running Timespy, Superposition and Port Royal? So the BIOS goes up to 1.093 V?

I'm contemplating this BIOS, but I need to get my temps lower as it is. 

I just ordered EK's XTOP Revo Dual D5 Serial and a second EK D5 G2. With two D5's in serial running at 100% RPM it will be quieter than this DDC affixed to my distro plate; both head pressure and flow will increase by ~50%, and most importantly, if one pump goes I have redundancy.






My GPU temps dropped from 50-52°C under sustained load in titles that push the wattage up around 320 W to 45-47°C, simply by increasing the DDC's pump speed from 55% to 100% RPM. Considering the radiators are cool to the touch, I'm sure there's room for further improvement. The DDC does 1000 lph at 5.2 m of head at 100% RPM; a single D5 does 1500 lph at 3.9 m of head at 100% RPM. Two D5's in serial nearly double the head pressure - so something like 1500 lph and 7-ish m of head, and it will be quieter!

Tired of spending money on my computer but this was a no brainer.
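The serial-pump math above works out, under the usual idealization that two identical pumps in series add their pressure heads at a given flow rate while max free flow stays that of a single pump (actual loop flow depends on where the loop's restriction curve crosses the combined pump curve). Using the specs from the post:

```python
# Rough serial-pump numbers for the dual-D5 upgrade described above.
D5_MAX_FLOW_LPH = 1500    # single D5, max free flow
D5_HEAD_M = 3.9           # single D5, max head (metres of water)
DDC_HEAD_M = 5.2          # the DDC being replaced

serial_head = 2 * D5_HEAD_M                        # 7.8 m, the "7-ish m" estimate
head_gain_vs_ddc = serial_head / DDC_HEAD_M - 1.0  # ~+50% head vs the DDC
flow_gain_vs_ddc = D5_MAX_FLOW_LPH / 1000 - 1.0    # ~+50% max flow vs the DDC

print(f"2x D5 serial: {D5_MAX_FLOW_LPH} lph max, {serial_head:.1f} m head")
```

Both the "~50% more head" and "~50% more flow" claims hold relative to the 1000 lph / 5.2 m DDC, which is presumably the comparison being made.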


----------



## sultanofswing

Here is my Superposition 4K Optimized score. I don't ever run the 4K benchmark; I usually do the 8K one.
This was 2145 MHz, dropping to 2130 right at the end of the benchmark.


----------



## J7SC

^^ nice. I wish Unigine would come up with another/new benchmark, one - like Valley - where you can free-float through the landscape. That's always fun.


----------



## Imprezzion

Mooncheese said:


> Could you upload a screen shot of your freq. curve @ 1.093v? Also, I'm curious as to how reference PCB performs with that BIOS, would you mind doing Timespy, Superposition and Port Royal? So the BIOS goes up to 1.093v?
> 
> I'm contemplating this BIOS, but I need to get my temps lower as it is.
> 
> I just ordered EK's XTOP Revo Dual D5 Serial and a second EK D5 G2. With two D5's in serial running at 100% RPM it will be quieter than this DDC affixed to my distro plate and both head pressure and flow will increase by 50%, and most importantly if one pump goes I have redundancy.
> 
> https://youtu.be/h1IgnvLwen4
> 
> My GPU temps dropped from 50-52C under sustained load in titles that get the wattage up around 320W to 45-47C simply increasing the pump speed on the DDC from 55% to 100% RPM. Considering the radiators are cool to the touch I'm sure there's room for improvement further. DDC is 1000 lph @ 5.2 m/bar @ 100% RPM, single D5 is 1500 lph @ 3.9 m/bar @ 100% RPM. Two D5's in serial nearly doubles the head pressure. So something like 1500 lph and 7'ish m/bar and it will be quieter!
> 
> Tired of spending money on my computer but this was a no brainer.


Done. I don't own the Port Royal upgrade and my Steam has no funds on it atm, so..

It does kind of ride the power limit in Superposition and sometimes drops to 2100-2115 MHz, but not a lot - it's like 2-3% over the limit from time to time. For benches like this I should really be running either the KFA2 or the Galax HOF BIOS, as those two don't hit the limiter, but they make my fans incredibly loud at idle: the KFA2 has a 41% minimum fan speed, which is almost 1400 RPM for my rad fans (controlled by the card), and that is LOUD.


----------



## Mooncheese

Imprezzion said:


> Done. I do not own the upgrade for Port Royal and my steam has no funds on it atm so..
> 
> It does kind of ride the power limit in Superposition and sometimes drops to 2100-2115Mhz but not a lot. It's like 2-3% over the limit from time to time. For benches like this I should really be running either the KFA2 or the Galax HOF BIOS as those 2 don't hit limiter but they make my fans incredibly loud on idle as KFA2 has 41% minimum fanspeed which is almost 1400RPM for my rad fans (controlled by the card) and that is LOUD.


Nice! Thanks for that. How the hell is everyone's min and max so much higher than mine?! You're like the third 2080 Ti benchmark in the last two pages with a min of 85 FPS and a max of ~130, while mine only does 75 min and 116 max. Similar averages though. Could it be G-Sync?

Your Timespy is a bit higher too, 16,777 vs my 16,350. 

I'm tempted to try this BIOS - any idea if the temp sensors will keep working (XC2 Ultra)? I have one of the newer cards that are more difficult to flash; what's the best procedure to flash to the FTW3 Ultra / Hydro Copper BIOS? After reading about someone bricking their card attempting a flash a few pages ago, I'm kind of afraid to mess with it.

https://www.3dmark.com/3dm/45318342

Your CPU score is making me want to upgrade to a 9900K. At least I have an upgrade path with the Gigabyte Aorus Gaming 7 Z370: the 9900K works with a BIOS update and the VRM on this board is up to the task. But I'm just tired of throwing money at the computer, and outside of benchmarking I won't see a benefit from two additional cores, considering most titles, even in 2020, still have poor multi-core support outside of a handful of DX12 and Vulkan titles. Single-core speed and IPC are comparable between the 8700K and 9900K, and the $500+ price tag (more if you buy one guaranteed to do 5.1 GHz from Silicon Lottery = $750) is difficult to swallow for marginal real-world gains when 95% of the time you're GPU-bottlenecked and/or the title in question is DX11.

I reckon that $750 for a binned 5.1 GHz 0 AVX 9900k would be better used in 2021 towards a new DDR5 chipset, either 10nm from Intel or 4th or 5th gen Ryzen.


----------



## keikei

Hey, looks like I may actually use dem RT cores:


----------



## MrTOOSHORT

My KPE 4K last year:


----------



## Mooncheese

MrTOOSHORT said:


> My KPE 4K last year:


Dude that's savage, that's probably the highest score I've seen, where are you in relation to the leaderboard? 

I forgot to include my volt + freq curve for my XC2 Ultra, it does 2085 MHz until 39C then dips to 2070 MHz until like 43-45C and then it's 2055 MHz @ 1.006v at which point I've yet to see it dip under.


----------



## Mooncheese

keikei said:


> Hey, looks like I may actually use dem RT cores:
> 
> 
> https://www.youtube.com/watch?v=s_eeWr622Ss


I'm putting them to good use in both Metro Exodus and Shadow of the Tomb Raider! Looking forward to picking up Control when it goes on sale, I'm glad I didn't jump on that title when it was first released as they've only recently added DLSS 2.0 support which adds huge performance boost (and image sharpness!). Looking forward to all future RTX titles having DLSS 2.0 support right out of the gate, it's seriously a game changer.


----------



## ThrashZone

Hi,
Mr.TOOSAVAGE


----------



## Imprezzion

Mooncheese said:


> Nice! Thanks for that, how the hell is everyone's min and max so much higher than mine?! Youre like the 3rd 2080 Ti benchmark in the last two pages that has a min of 85 FPS and max of ~130, meanwhile mine is only doing 75 min and 116 max. Similar averages though. Could it be G-Sync?
> 
> Your Timespy is a bit higher too, 16,777 vs my 16,350.
> 
> I'm tempted to try this BIOS, any idea if the temp sensors will continue to work (XC2 Ultra)? I have one of the newer cards that is more difficult to flash, what is the best procedure to flash to FTW3 Ultra / Hydrocopper BIOS? After reading about someone bricking their card attempting to flash a few pages ago I'm kind of afraid to mess with it.
> 
> https://www.3dmark.com/3dm/45318342
> 
> Your CPU score is making me want to upgrade to 9900k. At least I have an upgrade path with Gigabyte Aorus Gaming 7 Z370, 9900k works with a BIOS update and the VRM on this board is up to task, I'm just tired of throwing money at the computer and outside of benchmarking won't see added benefit of two additional cores considering most titles are still, even in 2020, have poor multi-core support outside of a handful of DX12 and Vulkan titles. Single core speed and IPC is comparable between 8700 and 9900k and $500+ pricetag (more if you elect to buy one guaranteed to do 5.1 GHz from Silicon Lottery = $750) is difficult to swallow for marginal real-world gains when 95% of the time youre GPU bottlenecked and /or the title in question is DX11.
> 
> I reckon that $750 for a binned 5.1 GHz 0 AVX 9900k would be better used in 2021 towards a new DDR5 chipset, either 10nm from Intel or 4th or 5th gen Ryzen.


A 9900K isn't worth it when you have an 8700K. I came from a 7700K, and the 9900K secondhand was just as expensive as an 8700K was new, so..
This 9900K is an early-batch P0 secondhand one, and after lapping it, it's one of the best P0 chips I've seen so far. I got massively lucky to find one that can do 5.1 GHz AVX-0 at 1.248 V VR VOUT (1.334 V BIOS/CPU-Z). In 3DMark with full AVX it hit 70°C on the hottest core. Fans don't ramp up for the CPU loop below 74°C, so..

Could be G-Sync, yeah. My monitor is an Iiyama G-Master GB2788HS, which is a 144 Hz 1080p panel, but it only supports FreeSync, so I run it without sync at 120 Hz so my GPU can downclock at idle with a second 75 Hz monitor (144 + 75 Hz means higher idle clocks).


----------



## Mooncheese

Imprezzion said:


> 9900K isn't worth it whe you have a 8700K. I came from a 7700K and the 9900K secondhand was just as expensive as a 8700K was new so..
> This 9900K is a P0 early batch secondhand one and after lapping it it is one of the best P0 chips i've seen so far. Got massively lucky to find one that can do 5.1Ghz AVX 0 at 1.248v VR VOut (1.334v BIOS/CPU-Z). In 3DMark with full AVX it hit 70c hottest core. Fans don't ramp up for the CPU loop below 74c so..
> 
> Could be g-sync yeah. My monitor is a Iiyama G-Master GB2788HS which is a 144Hz 1080P panel but it only supports Freesync so I run it without sync at 120Hz so my GPU can downclock in idle with a second 75Hz monitor. (144+75Hz means higher idle clocks)


Did you see my volt + freq curve for comparison? 2085 MHz @ 1.018 V and 8150 MHz on the memory! It's still 400 points shy of your Timespy score though, and I'm somewhat tempted to try the FTW3 Ultra / Hydro Copper BIOS. Is there much risk of bricking with the newer 2080 Tis? I hear the flashing procedure is different for them.

If you ever upgrade and want to sell your 9900k PM me!


----------



## GoldCartGamer

Thoughts on the 16Gbps memory?

https://www.techpowerup.com/265870/...ti-gaming-z-trio-featuring-mighty-fast-memory


----------



## Mooncheese

WOW, big update for me. Out of curiosity I ran Timespy again and was in complete disbelief: my score had plummeted from 16,3xx down to 15,700. Sometimes I get better scores right after the PC boots, so I started there to rule that out - it did the same thing. Then, trying to remember whether I had the memory at +600 or +1200 for that 16,3xx score, I reduced the mem overclock to +600 (to rule out wattage starvation - a case of robbing Peter to pay Paul, since +1150 uses another ~15 W over +600 from what I can tell) and the score went down!

So then I went into full-blown troubleshooting mode. I figured maybe the display driver had become corrupted (I'm still getting bad crashes in Metro Exodus - the only game - and it's a widespread problem with that game!), but before doing a clean driver reinstall I tried disabling the GeForce Experience overlay, and that right there was the problem!

The GeForce overlay was eating up 700 points' worth of performance. GOOD LORD.

I'm tempted to do a run with MSI AB disabled now to see if that too is sapping performance. 

With and without Geforce Experience overlay enabled: 

https://www.3dmark.com/compare/spy/11517855/spy/11516998

I love GeForce Experience though - being able to record on the fly, and the Freestyle filters - I'm not sure I can live without that. But good lord, that's a lot of performance lost solely to an overlay!


----------



## Mooncheese

GoldCartGamer said:


> Thoughts on the 16Gbps memory?
> 
> https://www.techpowerup.com/265870/...ti-gaming-z-trio-featuring-mighty-fast-memory


Well, I'm at 16.3 Gbps effective here (+1150 MHz memory) with Samsung B-die. My question is: is MSI using Samsung B-die that they've overclocked to 8000 MHz themselves, or are the memory modules the newer, faster ones found on the 2080 and 2070 Super?

This could just be a case of MSI pre-overclocking binned Samsung B-die.

Memory speed most definitely matters though, especially with ray tracing! My Port Royal score went from 9,200 @ +600 (7600 MHz, or 15.2 Gbps) to 10,200 @ +1200 (8200 MHz, or 16.4 Gbps) - a 10% gain! +1200 vs +0 is probably closer to a 20% gain, but I haven't tried it (a 2080 Ti FE at default clocks does 7,900 in Port Royal). A 2080 Ti @ 16.3 Gbps handles RT with ease - I'm seeing 90-110 FPS maxed out in Metro Exodus (RT: High) nearly everywhere @ 3440x1440, and around 80 FPS average in Shadow of the Tomb Raider.

RT is extremely responsive to memory speed.
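For anyone puzzling over the offset-to-Gbps conversions above: they assume Afterburner's stock memory readout of 7000 MHz for the 2080 Ti's 14 Gbps GDDR6 (double data rate, two transfers per clock) and the card's 352-bit bus, which is consistent with the numbers quoted in the post.

```python
# Map MSI Afterburner memory offsets to effective per-pin speed and bandwidth.
STOCK_MHZ = 7000   # Afterburner's baseline readout (14 Gbps effective / 2)
BUS_BITS = 352     # 2080 Ti memory bus width

def effective_gbps(offset_mhz: int) -> float:
    """Effective per-pin data rate in Gbps for a given Afterburner offset."""
    return (STOCK_MHZ + offset_mhz) * 2 / 1000.0

def bandwidth_gbs(offset_mhz: int) -> float:
    """Total memory bandwidth in GB/s."""
    return effective_gbps(offset_mhz) * BUS_BITS / 8

for off in (0, 600, 1150, 1200):
    print(f"+{off:>4} MHz -> {effective_gbps(off):.1f} Gbps, "
          f"{bandwidth_gbs(off):.0f} GB/s")
```

At stock this reproduces the spec-sheet 616 GB/s; +1150 lands at the 16.3 Gbps (~717 GB/s) figure above.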


----------



## sultanofswing

Mooncheese said:


> WOW, big update for me, out of curiosity I ran Timespy again and was in complete disbelief as my score had plummeted from 16,3xx down to 15,700. Sometimes I get better scores right after the PC boots so I figured I would start there to rule that out, and it did the same thing. So then, trying to remember whether or not I had the memory at +600 or +1200 for that 16,3xxx score I reduced the mem overclock to +600 (to rule out wattage starvation / case of robbing Peter to pay Paul, +1150 uses another ~15W from what I can tell up and over +600) and the score went down!
> 
> So now I'm in full-blown troubleshooting mode, I figured maybe the display driver became corrupted (still getting bad crashes in Metro Exodus, the only game, and it's a widespread problem with the game!) but before I would clean reinstall the driver I figured I would try disabling Geforce Experience overlay and that right there was the problem!
> 
> Geforce Overlay was eating up 700 points worth of performance, GOOD LORD.
> 
> I'm tempted to do a run with MSI AB disabled now to see if that too is sapping performance.
> 
> With and without Geforce Experience overlay enabled:
> 
> https://www.3dmark.com/compare/spy/11517855/spy/11516998
> 
> Love Geforce experience though, being able to record on the fly and the Freestyle filters, I'm not sure I can live without that but good lord that's a lot of performance lost solely to an overlay!



Don't run any overlays with 3DMark. It kills the score - even the MSI Afterburner overlay affects it.




----------



## Mooncheese

sultanofswing said:


> Don’t run any overlays with 3dmark. It kills the score, even msi afterburner overlay will also effects the score.


Yeah it does - I picked up 150 points, now at 16,456! Not bad for 340 W and 1.006-1.019 V on a reference PCB!

https://www.3dmark.com/3dm/45828289?


----------



## Mooncheese

Here's Superposition with all overlays disabled - I picked up 10 FPS on the min and 12 FPS on the max: 

With GeForce Experience and RTSS enabled: 13,606

With GeForce Experience and RTSS disabled: 13,818
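Putting the overlay overhead from the last few posts into percentages - the ~700-point Time Spy drop reported earlier and the Superposition scores above:

```python
# Percent of benchmark performance lost with overlays enabled.
def overhead_pct(score_on: int, score_off: int) -> float:
    """Performance lost (%) comparing a run with overlays on vs off."""
    return (1.0 - score_on / score_off) * 100.0

print(f"Time Spy:      {overhead_pct(15_700, 16_400):.1f}%")  # ~4.3%
print(f"Superposition: {overhead_pct(13_606, 13_818):.1f}%")  # ~1.5%
```

So the hit varies by benchmark - a few percent at worst - which squares with the "about 3%" in-game estimate given further down.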


----------



## J7SC

keikei said:


> Hey, looks like I may actually use dem RT cores:
> 
> 
> https://www.youtube.com/watch?v=s_eeWr622Ss


 
Don't forget Quake II RTX 


Spoiler














Re. Superposition v1 and v1.1, I did some more testing this morning, and while version 1.0 is a smidgen ahead most (though not all) of the time, it's within the margin of error. I suspect Win 10 and Nvidia driver 'security updates' had more of the earlier-referenced impact on scores. :thinking:


----------



## mardon

Double POST


----------



## mardon

How are you disabling Geforce experience? Whole uninstall or just disabling the overlay? 

Can you still access Alt+F3 for sharpening etc?


----------



## Mooncheese

mardon said:


> How are you disabling Geforce experience? Whole uninstall or just disabling the overlay?
> 
> Can you still access Alt+F3 for sharpening etc?


No, unfortunately you lose access to the Freestyle filters and video recording capability. The gain is marginal; it should really only be done to compare your card's performance to others', because others are doing the same. I re-enabled GeForce Experience after the benchmarking yesterday because I really like the Freestyle filters (Sharpen, and sometimes Color - Color at default settings is fun to look at for a while in games that skew too far into the green spectrum, i.e. The Division 2), and I've gotten into recording my gameplay sessions in Metro Exodus. The performance loss is about 3%: when you're getting 90-100 FPS, we're talking 87 vs 90 or 97 vs 100 FPS. To me it's worth the hit. If a title were extremely demanding and I were struggling to maintain, say, a 70 FPS minimum, I would definitely disable GeForce Experience (well, it depends - Sharpen really makes every game I've used it on look WAY better). 

To turn it off, open GeForce Experience > settings wheel/cog > In-Game Overlay (toggle the slider)


----------



## z390e

mardon said:


> How are you disabling Geforce experience? Whole uninstall or just disabling the overlay?
> 
> Can you still access Alt+F3 for sharpening etc?


turn off in-game overlay on the geforce settings panel

you can re-enable it later

you also want to turn off g-sync and stuff on your monitor for benchmarking imo


----------



## Meisgoot312

sultanofswing said:


> No problem, my first time flashing it I bricked it, that’s when I learned about the different IC’s and was able to find the correct IC in AsProgrammer and after that I flashed the card maybe 5 or 6 different times trying different ROMS without any issues. Just don’t connect the test clip backwards or you can fry the chip.


Originally I was going to wait until I got my watercooling parts, but I decided to test out the flashing anyway. The hardware and the actual flashing work fine, but the BIOS I tried - the Palit Dual BIOS for non-A chips - didn't. I flashed the Palit Dual BIOS (the one on the front page) and it disabled all but one of my DP ports and didn't change any of my power limits. I was able to flash the original BIOS back, so that's how I know the hardware works.

Out of curiosity, what BIOS did you flash?

I might just need to try older versions of that BIOS, since it seems to be the only non-A BIOS with a >280 W power limit. It would be sad if there weren't a BIOS that worked. I want to avoid doing a shunt mod since I still want to keep the original PCB as untouched as I can (also, working from home means no fancy soldering tools - I'm spoiled).

EDIT: I looked at the BIOS in a hex editor and noticed that the "faulty" Palit BIOS is padded with FFs from the end of the BIOS image to the end of the chip's address space, while my working MSI BIOS is padded with 00s. This might have an effect on my card - it looks like AsProgrammer pads the end with FF instead of 00. I'll probably try fixing it one more time tomorrow.
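If the padding really is the only difference, re-padding is easy to script. A minimal sketch, with the assumptions labeled: the 1 MiB chip size and the filenames are hypothetical (check the flash chip's datasheet), and it naively assumes trailing 0xFF/0x00 bytes are padding rather than real data - this is a diagnostic aid, not a verified fix.

```python
# Re-pad a dumped ROM image with a chosen fill byte (0x00 vs 0xFF).
CHIP_SIZE = 1024 * 1024  # assumed 1 MiB SPI flash; verify against your chip

def repad(image: bytes, fill: int, size: int = CHIP_SIZE) -> bytes:
    """Strip trailing 0xFF/0x00 padding and re-pad with `fill` up to `size`.

    Naive: any trailing FF/00 bytes are treated as padding.
    """
    body = image.rstrip(b"\xff").rstrip(b"\x00")
    if len(body) > size:
        raise ValueError("image larger than chip")
    return body + bytes([fill]) * (size - len(body))

# Usage (hypothetical filenames):
# rom = open("palit_nonA.rom", "rb").read()
# open("palit_nonA_00pad.rom", "wb").write(repad(rom, 0x00))
```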


----------



## sultanofswing

Meisgoot312 said:


> Originally, I was going to wait until I got my watercooling parts, but I decided to test out the flashing. When it comes to the hardware and actually flashing, it works fine, but the BIOS I tried, the Palit Dual Bios for non-A chips didn't work. I flashed the Palit Dual Bios (the one on the front page) and it disabled all but one of my DP ports and didn't change any of my power limits. I was able to flash the original BIOS back, so that's how I know the hardware works.
> 
> Out of curiosity, what BIOS did you flash?
> 
> I might need to just try older versions of that BIOS since it seems that its the only non-A bios with >280W power limit. It would be sad if there wasn't a BIOS that worked . I want to avoid doing a shunt mod since I still want to retain the original PCB as much as I can (also working from home means no fancy soldering tools, I'm spoiled).
> 
> EDIT: I looked into the BIOS using a Hex editor and I noticed that the end of the "faulty" Palit BIOS ends in FFs from the end of end bios to the end of the memory, while my working MSI BIOS is filled with 00. This might have an effect on my card, it looks like AsProgrammer pads the ending with FF instead of 00. I'll probably try fixing it tomorrow one more time.



So sometimes the flash works just fine but Windows decides to play funky games.
Usually after flashing I'd end up rebooting a few times for the changes to take full effect.
What exactly didn't work about the BIOS for non-A chips?


----------



## Meisgoot312

sultanofswing said:


> So sometimes the flash will work just fine but Windows decides to play funky stuff.
> Usually after I would flash I would end up rebooting a few times for the changes to take full effect.
> What didn't work about the BIOS for non A chips?


I'll try again tomorrow - tearing apart the RTX 2080 Ti just to flash the BIOS is a little annoying, and I'll soon need to replace some of the thermal pads from taking it apart so much. The non-A Palit BIOS only had one working DP port, and the power limit didn't change. I don't know why I didn't consider restarting a few times - missed opportunity, dang. Thanks for your help; I'll let you know what happens whenever I get to it! (I need to buy more thermal pads first, though.)

Do you have a non-A card?


----------



## sultanofswing

Meisgoot312 said:


> I'll try again tomorrow, tearing apart the RTX 2080 Ti just to flash the BIOS is a little annoying, also I'll start needing to replace some of the thermal pads from taking it apart so much. The non-A palit bios only had 1 working DP port and the power limit didn't change. I don't know why I didn't consider restarting a few times, missed opportunity, dang. Thanks for your help, I'll let you know what happens whenever I decide to do it! (I need to buy more thermal pads though).
> 
> Do you have a non-A card?


No, I don't have a non-A card, but in reality it shouldn't matter.

Not all cards/manufacturers have the same port layouts, so it's pretty common for a BIOS file from one vendor to have different ports that work/don't work compared to another vendor.


----------



## Meisgoot312

sultanofswing said:


> No I don't have a non A card but in reality it should not matter.
> 
> Not all cards/manufacturers have the same port layouts so it's pretty common for one bios file from 1 vendor to have different ports that work/don't work from another vender.


Damn, I was hoping that wouldn't be the case, but it makes sense. I guess my only option is the shunt mod if the BIOS really does disable my other DP ports (I really need both my monitors). I have some 15 mOhm resistors lying around that I could use for a ~35% shunt. Dang, oh well.
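For anyone wondering where that ~35% comes from: a shunt mod solders a resistor in parallel with the stock current-sense shunt, so the controller under-reads power by the resistance ratio. The 5 mOhm stock shunt value below is the commonly cited figure for reference 2080 Ti boards - an assumption here, not a measurement; the math lands at about +33%, close to the ~35% estimate.

```python
# Effective power-limit increase from a parallel shunt resistor.
R_STOCK = 5.0    # mOhm, stock current-sense shunt (assumed reference value)
R_ADDED = 15.0   # mOhm, resistor soldered in parallel

r_combined = (R_STOCK * R_ADDED) / (R_STOCK + R_ADDED)  # parallel combo: 3.75 mOhm
read_fraction = r_combined / R_STOCK                    # controller sees 75% of real power
headroom = 1.0 / read_fraction - 1.0                    # ~+33% real power at the same limit

print(f"combined {r_combined:.2f} mOhm -> reads {read_fraction:.0%}, "
      f"+{headroom:.0%} effective power limit")
```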


----------



## sultanofswing

Meisgoot312 said:


> Damn, I was hoping that wouldn't be the case, but it makes sense. I guess my only option is the power shunt if the BIOS really does disable my other DP ports (I really need both my monitors). I have some 15 mOhm resistors laying around that I could use for a ~35% shunt. Dang, oh well.


I would just try to find a BIOS that has the same DisplayPort configuration and roll with it.

It's a lot of testing but I am sure there is a BIOS that would work just fine.

I would try flashing any of the regular A-chip BIOS files and see if any of them work.


----------



## Meisgoot312

sultanofswing said:


> I would just see if you can try and find a BIOS that has the same display port configuration and roll with it.
> 
> It's a lot of testing but I am sure there is a BIOS that would work just fine.
> 
> I would try flashing any of the regular A chip Bios files and see if any of them work.


I might try that. Are there any hardware differences between the A and non-A chips other than being listed as different parts? I know nvflash will prevent you from flashing across the different part IDs, but I wonder if there's any actual difference in the silicon beyond binning. Thanks for the suggestion!


----------



## sultanofswing

Meisgoot312 said:


> I might try that, are there no hardware differences between the A and non-A chips other than being listed as different parts? I know nvflash will prevent you from flashing the different parts, but I wonder if it actually has any difference in the silicon other than binning. Thanks for the suggestion!


That I can't actually speak to, as I don't have a non-A chip to test with. The beauty is that you have the programmer, so if a BIOS doesn't work it's not like it's bricked - you can always flash back.

Flashing this way can be a PITA; every time I had to do it, I had to drain my water loop and all that jazz.


----------



## spin5000

I flashed my Zotac AMP non-Extreme to the EVGA XC Ultra BIOS, since that's the highest-power-limit BIOS (336 W) available for reference boards (besides the Gigabyte Windforce 366 W and Galax/KFA2 380 W, which I can't use since no one's posted an Oct. 2019 FW version of either). Results were originally very disappointing - I saw no improvement, because the EVGA BIOS gave the GPU a ****ty frequency-to-voltage curve, basically applying more voltage at each frequency step than the Zotac default BIOS, which is odd. So I ran MSI Afterburner's OC scanner three times: the first run gave an overclock of +153, then +146, and the final run +165.

I saved the +165 one but was still a little disappointed with the overclock, so I then added more on top using the core-clock slider in MSI AB to raise the entire auto-OC curve. That's much better. My GPU is now hitting 2130 MHz at times and is almost always locked in the mid-to-high 20xx range.

3DMark froze once when the card boosted to either 2130 or 2145, so I went into the OC curve and lowered every frequency step above 2115 MHz down to 2115 MHz, regardless of voltage. Since then, I haven't had a single issue over 20 runs of each of the following:

- Firestrike
- Firestrike Extreme
- Firestrike Ultra
- Superposition 1080p Extreme
- Superposition 8K Optimized

And I'm barely hitting 60 degrees with a linear fan curve that doesn't hit 100% until 65 or 70 degrees (still experimenting with the two).

All those tests are also with memory at +1500 MHz (8500 MHz actual, 17000 MHz effective), which shocks me: it's perfectly stable and also brings framerate increases (tested in Unigine Valley staring at one spot, at three different points).

If I can just get my hands on an Oct. 2019 firmware version of the Gigabyte Windforce 366 W or Galax/KFA2 380 W BIOS, I think I'd be able to minimize or eliminate the core-clock fluctuations from hitting the power limit. At 2050-2115 MHz I'm perfectly happy with raw speed, very happy in fact; now I'm just looking for more consistency (staying at something like 2070 or 2085 MHz without moving would be beautiful).
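The curve edit described above (auto-scan, raise the whole curve with the slider, then flatten every point above a ceiling) can be sketched roughly like this; the function and the sample curve points are hypothetical illustrations, not anything Afterburner actually exposes:

```python
# Hypothetical sketch of the procedure: take an auto-scanned
# voltage/frequency curve, apply a flat offset (the AB core-clock
# slider), then clamp every point above a ceiling to that ceiling.

def clamp_curve(curve, offset_mhz, ceiling_mhz):
    """curve: list of (voltage_mv, freq_mhz) points from an OC scan."""
    out = []
    for mv, mhz in curve:
        mhz += offset_mhz                        # raise the whole curve
        out.append((mv, min(mhz, ceiling_mhz)))  # flatten anything above the ceiling
    return out

# Made-up scan result for illustration
scanned = [(900, 1980), (1000, 2070), (1043, 2130), (1093, 2145)]
print(clamp_curve(scanned, 0, 2115))
# points at or below 2115 pass through; 2130 and 2145 become 2115
```

The effect is the same as dragging every point above 2115 MHz down in the curve editor: the card can still request higher voltages, but the frequency it asks for at those voltages is capped.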


----------



## mardon

Mooncheese said:


> No, unfortunately you lose access to Freestyle filters and video recording capability. The gain here is marginal; it should just be done to compare the performance of your card to others, because others are also doing this. I re-enabled GeForce Experience after the benchmarking yesterday because I really like the Freestyle filters (Sharpen, sometimes Color; Color with default settings is fun to look at for a while in games that are too green-spectrum oriented, i.e. The Division 2) and I have gotten into recording my gameplay sessions in Metro Exodus. The performance loss is 3%. When you're getting 90-100 FPS, we are talking about 87 vs 90 or 97 vs 100 FPS. To me it's worth the performance hit. If the title were extremely demanding and I was struggling to maintain, say, a 70 FPS minimum, I would most definitely disable GeForce Experience (well, it depends; Sharpen really makes every game I've used it on look WAY better).
> 
> To turn it off open Geforce Experience > Settings Wheel / Cog > In Game Overlay (move slider)


Thanks will give it a go tonight. Doing some benchmarking shortly so every little helps.

What sort of temp drops are people getting for thermal grizzly kryonaut vs thermal grizzly conductonaut? Thinking of making the move to Liquid Metal.

Currently at 50C in game. What is the golden temp for an additional core speed step? 45C?


----------



## Jpmboy

mardon said:


> Thanks will give it a go tonight. Doing some benchmarking shortly so every little helps.
> 
> What sort of temp drops are people getting for thermal grizzly kryonaut vs thermal grizzly conductonaut? Thinking of making the move to Liquid Metal.
> 
> Currently at 50C in game. What is the golden temp for an additional core speed step? 45C?


There are several clock-bin drops above 40C and up to Tmax. These depend not only on the temperature, but also on the power draw.
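As a rough illustration of how those bins behave, here's a toy model; the thresholds and the 15 MHz step below are assumptions for illustration, not NVIDIA-documented values, and the real behavior also shifts with power draw:

```python
# Rough, hypothetical model of GPU Boost thermal bins: each time core
# temperature crosses a threshold, the boost clock drops one ~15 MHz bin.

BIN_THRESHOLDS_C = [38, 46, 54, 62]   # assumed bin edges, for illustration only
BIN_STEP_MHZ = 15                     # one clock bin

def boost_clock(base_boost_mhz, temp_c):
    bins_dropped = sum(temp_c >= t for t in BIN_THRESHOLDS_C)
    return base_boost_mhz - bins_dropped * BIN_STEP_MHZ

print(boost_clock(2085, 36))  # below the first threshold: full boost
print(boost_clock(2085, 50))  # two bins dropped in this model
```

The practical takeaway matches the posts above: keeping the core under the first threshold or two is worth one or two 15 MHz steps, independent of any manual overclock.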


----------



## mardon

I'm trying to work out if it's worth trying to drop another 4-5C if I don't get an additional bump in clock speed. My current 24/7 clock is 1950 MHz @ 875 mV, but when I first boot it'll sit at 1980 MHz while the temperatures are a little lower. It would be good to hold that clock speed.

Since my 3440x1440 monitor tops out at 100 Hz and my 4K OLED stops at 60 Hz, I don't actually need any more speed day to day. The rest is just benchmarking fun.


----------



## J7SC

Still on my Unigine kick (thanks, Covid-19...), this time with Valley / 4K / dual cards (per the 'dead' Valley thread settings). "Free" and "Walk" roaming beyond the benchmark is still a bag of fun, especially around the mountain tops. Time flies with that...


----------



## Mooncheese

spin5000 said:


> I flashed my Zotac AMP non-extreme to the EVGA XC Ultra BIOS since that's the highest power limit BIOS - 336 W - that's available for reference boards (besides Gigabyte Windforce 366 W and Galax/KFA2 380 W which I can't use since no ones's posted an Oct. 2019 FW version of either). Results were originally very disappointing as I saw no improvement because the EVGA BIOS gave the GPU a ****ty frequency-to-voltage curve and was basically applying more voltage for each frequency step than the ZOTAC default BIOS which is odd. So I used MSI Afterburner's OC test/scan thing; I did it 3 times. The first time I got an overclock of +153, then +146, final time was +165.
> 
> I saved the +165 one but was still a little disappointed with the overclock. So I then added some more overclock using the slider in MSI AB to raise the entire OC curve from the already raised auto-OC one. That's much better. My GPU is now hitting 2130 MHz at times and almost always locked in the mid-to-high 20xx.
> 
> 3DMark froze once when the card either boosted to 2130 or 2145 so what I did was go into the OC curve and lower every frequency-step above 2115 MHz down to 2115 MHz no matter the voltage. Since then, I haven't had a single issue with 20x runs of each of the following:
> 
> - Firestrike
> - Firestrike Extreme
> - Firestrike Ultra
> - Superposition 1080p Extreme
> - Superposition 8K Optimized
> 
> And I'm barely hitting 60 degrees with a linear fan curve that doesn't hit 100% until 65 degrees or 70 degrees (still experimenting with the 2).
> 
> All those tests are also with mem at +1500 MHz (8500 MHz or 17000 MHz effective) which is shocking to me that it's perfectly stable and also bringing framerate increases (tested in Unigine Valley looking in 1 spot, at 3 different points).
> 
> If I can just get my hands on an Oct. 2019 FW version of the Gigabyte Windforce 366 W or Galax/KFA2 380 W BIOS, I think I'd be able to minimize or eliminate the core clock fluctuations due to power limit hitting. At 2050 - 2115 MHz, I'm perfectly happy with raw speed, very happy in fact, now I'm just looking for more consistency (staying at something like 2070 or 2085 MHz without moving would be beautiful).


You can see fairly accurately how stable your overclock is with OC Scanner in MSI AB. Anything under a 90% confidence rating and you're going to get a display-driver-failure-induced CTD (or black screen) in many games. I can pass all of the benchmarks at 2100 MHz on the core with an undervolt (1.019v), but that results in a confidence rating of 69% and I will get crashes in games, e.g. F1 2019 crashing within about 15 minutes of a session. 90% = no crashing in F1 2019, period.

Care to post your voltage/frequency curve? I'm curious how it looks. If you get a 90% confidence rating in OC Scanner I may try your curve, as I'd like to secure more than 2070 on the core, ideally 2150 MHz, but that may be wishful thinking at this point. And if I have to run 1.069v, which raises my temps 5-7C, for another 75 MHz, I'm not sure it's worth it, considering I'll lose at least 15 MHz of that once the core gets over 50C, and undervolting prolongs the life of the silicon. Running 1.069v to 1.093v is the same story as running a CPU at either 5.0 GHz @ 1.342v or 5.1 GHz @ 1.425v: yeah, you got another 100 MHz, but that additional voltage isn't good for the silicon.

Let me elaborate on what prolonging the life of the silicon means: a processor degrades much more slowly at lower voltage. Say you just got a brand-new CPU, an 8700K or whatever, and you want to secure 5.1 GHz, so you run 1.425v or even 1.45v. The processor can theoretically take 1.425v, but it's not good for it. You run 5.1 GHz for 6 months, and the next thing you know you're getting blue screens; it's no longer stable. After loosening your memory timings and raising your auxiliary voltages without success, you finally realize you have to drop your CPU frequency. Now it does 5.0 GHz but still requires 1.425v, whereas when it was new it could do that at 1.34v. Six months later it's no longer stable at 5.0 GHz either; eventually you realize you had too much voltage, but by then it's too late. The same goes for GPUs. Plenty of anecdotal stories abound where a user ran their 1080 Ti @ 1.093v and, yeah, it did 2050 MHz great for about 6 months, then it was only stable at 2000 MHz at the same voltage (previously attainable at 1.063v), and then even that wasn't stable any longer.

I've actually experienced this first-hand, over-volting a 780 Ti years ago. I learned real quick that the best approach to overclocking is keeping the core as cool as possible (hence my foray into water cooling) in conjunction with undervolting. The 1080 Ti I purchased in March 2017 still does 2000 MHz @ 1.025v to this day, 100% rock solid, and it has been under a water block for 80% of its life (I only upgraded to the 2080 Ti about 6 weeks ago). I leave my computer on 24/7 (I mine crypto to heat the apartment / offset my heating bill and actually game on it maybe 3-5 hours a day). Meanwhile, I'm seeing daily reports of 1080 Ti owners having their cards die on them.

The best approach is to bring the temperature of the component down as much as possible and undervolt!



mardon said:


> Thanks will give it a go tonight. Doing some benchmarking shortly so every little helps.
> 
> What sort of temp drops are people getting for thermal grizzly kryonaut vs thermal grizzly conductonaut? Thinking of making the move to Liquid Metal.
> 
> Currently at 50C in game. What is the golden temp for an additional core speed step? 45C?


Sajin's post here is the most relevant to your question: 

https://forums.evga.com/1080TI-FTW3-with-Liquid-Metal-m2687019.aspx

If that liquid metal goes off the die and onto one of those lands, kiss your GPU goodbye; liquid TIM is electrically conductive. I tried it once with a laptop CPU (i7-3920XM) a few years ago and it was maybe 4-5C cooler than Gelid GC Extreme. I really wanted to try it not for the performance but because I was tired of tearing down the laptop to replace the TIM every 4-5 months. Sadly, the liquid metal also needed to be replaced; it just took a few more months, and it was an absolute PITA to remove, having permanently attached itself to the heat-sink.

It eats aluminum, BTW. I wouldn't put that crap on my 2080 Ti for 4-5C.


----------



## J7SC

mardon said:


> Thanks will give it a go tonight. Doing some benchmarking shortly so every little helps.
> 
> What sort of temp drops are people getting for thermal grizzly kryonaut vs thermal grizzly conductonaut? Thinking of making the move to Liquid Metal.
> 
> Currently at 50C in game. What is the golden *temp for an additional core speed step? 45C*?


 
...on my 2080 Ti cards, 38C is where the first step kicks in (though there might be an earlier one according to some writers). Anyway, if you can keep it close to 38-40C, you're good.

Regarding liquid metal, I used to use it a lot on delidded CPUs and also various GPUs. You have to insulate the PCB layer between the inner die square and the outer square, either with professional insulators or even nail polish if you cannot find the pro materials. I have even successfully used a barrier of MX4 on multiple occasions between the actual die, the adjacent PCB, and the outer square on earlier GeForce GTX cards without ever running into trouble. It also depends a little on how you mount the GPU; with vertical mounts, extra caution is in order.

GN had a recent, somewhat related video (below, around the 14-minute mark) about some upcoming Asus ROG laptops that use LM along with a stainless-steel shim guard arrangement. Can DerBauer be far behind with custom guards?


----------



## Mooncheese

J7SC said:


> ...on my 2080 Ti cards, 38C is where the first step kicks on (though there might be an earlier one according to some writers). Anyway, if you can keep it close to 38C-40C, you're good.
> 
> Regarding liquid metal, I used to use it a lot on delidded CPUs and also various GPUs. You have to insulate the pcb layer between the inner die square and outer square...either with professional insulators or even nail polish if you cannot find the pro materials. I even successfully used a barrier of MX4 on multiple occasions between the actual die, the adjacent PCB and outer square on earlier GeForce GTX cards w/o ever running into trouble. It also depends a little bit on how you mount the GPU, meaning w/ vertical mounts, extra caution is in order.
> 
> GN had a recent, somewhat related video (below, around 14 min) referring to some upcoming Asus ROG laptops that use LM, along with a stainless steel shim guard arrangement. Can DerBauer be far behind w/ custom guards ?
> 
> https://www.youtube.com/watch?v=U-pCQcCEp68


Actually, after reading the thread I linked in my previous post all the way through, I want to try switching to liquid TIM on the GPU! How durable is nail polish? At some point (I believe on page 3 of the thread) an EVGA rep comments on the matter, stating that liquid metal will eventually eat through both the GPU die and nail polish. I don't know how much truth there is to that. I do know that IC Diamond scratches dies real good (even though IC Diamond claims it doesn't), and I've only used Gelid GC Extreme since trying IC Diamond and experiencing that, and also trying Coollaboratory Liquid Ultra and not being satisfied that I still had to replace the TIM after 6-9 months (laptop).

I'm seriously considering this now, so "liquid duct tape" is in order? Another poster warned against using black electrical tape, as it may not adhere to the PCB well enough, and the liquid metal can "wick" (seep) under it and onto the sensitive sub-components that surround the GPU die.

I have ample rad surface area and my rads don't even get hot. With my serial D5 pump upgrade I'd have a lot to gain from conveying more heat to the rads, but that can only be accomplished with liquid metal.

https://forums.evga.com/1080TI-FTW3-with-Liquid-Metal-m2687019-p4.aspx


----------



## mardon

I was going to go the nail polish and electrical tape route, and then strap an additional NF-A12x15 to the thicker 48mm radiator for push-pull. Hoping the two upgrades push me into the mid-40s?

Here's a link to the build on Reddit. I'm working in a pretty confined space.

https://www.reddit.com/r/sffpc/comments/fvzzib/can_i_squeeze_out_any_more_cooling_performance/


----------



## Mooncheese

mardon said:


> I was going to go the nail polish and electrical tape route and then strap an additional NF-A12x15 to the thicker 48mm radiator for push pull. Hoping the two upgrades should push me into mid 40's?
> 
> Here's a link to the build on redit.. I'm working in a pretty comfined space.
> 
> https://www.reddit.com/r/sffpc/comments/fvzzib/can_i_squeeze_out_any_more_cooling_performance/


I'm right behind you lol, going to get nail polish. If you can afford it, there's a proper acrylic coating: https://www.amazon.com/gp/product/B008O9YIV6/?th=1

The only thing that sucks now is that I can't find this stuff anywhere other than performancepcs.com and Amazon, and Amazon has been delaying all "non-essential" orders. I have my D5 serial pump housing arriving on Tuesday; I was looking to replace this noisy DDC as soon as possible, but I suppose I can live with near-refrigerator hum for a few more days while waiting for the TGC to arrive.

Everyone who has tried this stuff with ample cooling has seen 5-7C+. The stories where no gains were had are from people using TGC with factory blower-style coolers that are already heat-soaked, where it doesn't matter how much more heat is conveyed to them.

My only concern is whether or not gallium / liquid metal really does eat into the exposed silicon GPU die or not. 

My last experience with CLU also wasn't a good one, but there are notable differences: that i7-3920XM ran at 80-90C overclocked, and it's heat that hardens and dries out liquid metal, necessitating replacement in literally 6-9 months. When I went in to replace it, it had hardened to such an extent that I had to resort to sandpaper to get it off the copper heat-sink. I don't think it will do that at temps of, say, 45C, but that was my last experience. I'm also wondering what it does to nickel-plated copper (water block); I hear only aluminum is to be avoided. Not sure; if anyone else has experience with this on GPUs and water blocks, please offer up your advice!


----------



## J7SC

mardon said:


> I was going to go the nail polish and electrical tape route and then strap an additional NF-A12x15 to the thicker 48mm radiator for push pull. Hoping the two upgrades should push me into mid 40's?
> 
> Here's a link to the build on redit.. *I'm working in a pretty confined space*.
> 
> https://www.reddit.com/r/sffpc/comments/fvzzib/can_i_squeeze_out_any_more_cooling_performance/


 
...oh wow, you most certainly are! Space-efficient, though, if difficult to cool?


----------



## mardon

Mooncheese said:


> I'm right behind you lol, going to get nail polish, if you can afford it there's a proper acrylic coating: https://www.amazon.com/gp/product/B008O9YIV6/?th=1
> 
> The only thing that sucks now is that I can't find this stuff anywhere other than performancepcs.com and amazon and amazon has been delaying all "non-essential" orders. I have my D5 serial pump housing arriving on Tuesday, I was looking to replace this noisy DDC as soon as possible but I suppose I can live with near-refrigerator hum for a few more days while waiting for TGC to arrive.
> 
> Everyone who has tried this stuff with ample cooling has seen 5-7C+. The stories where no gains are to be had are those who were using TGC in conjunction with factory blower style coolers that are heat-soaked and it doesn't matter how much more heat is effectively conveyed to them.
> 
> My only concern is whether or not gallium / liquid metal really does eat into the exposed silicon GPU die or not.
> 
> My last experience with CLU also wasn't a good one, but there are notable differences, that i7 3920 ran at 80-90C overclocked, and it's the heat that hardens and dries out liquid metal, necessitating replacement in literally like 6-9 months time. And when I went in to replace it it had hardened to such the extent that I had to resort to using sandpaper to get it off the copper heat-sink. I don't think it will do that with temps of say 45C, but that was my last experience. Also wondering what it does to nickel plated copper (waterblock). I hear only aluminum is to be avoided. Not sure, if anyone else has experience with this on GPU's and water-blocks please offer up your advice!
> 
> https://youtu.be/IIikmLf_H48
> 
> https://youtu.be/lrMzB_K8mNU


I'll probably get round to it on Sunday. I've got an NF-A12x25 and an NF-A9 to go underneath the Kraken G12 while I'm at it. I think I'm going to have to take the CPU block off to try a different piping orientation to allow for push-pull.



J7SC said:


> ...oh wow, you most certainly are ! Space-efficient though, if difficult to cool ?


The NCase M1 is probably the best case of its type, so I'm actually really happy with the thermals; I'm being greedy if anything. Two hours of COD Warzone and my max temp was 51C, which really isn't bad all things considered. The 9700K does get loaded massively, so it sits around 50C. However, I loaded up Minecraft RTX on the Neon City level and saw 95% CPU usage and 70C, which is the highest I've ever seen it!


----------



## Mooncheese

mardon said:


> I'll probably get round to it on Sunday. I've got anNF-A12x25 and NFA9 to go underneath the kraken G12 while i'm at it. Think i'm going to have to take the CPU block off to try a different piping orientation to allow for push pull.
> 
> 
> The NCase M1 is probably the best case of it type so i'm actually really happy with the thermals. I'm being greedy if anything. 2 Hours of COD Warzone and my max temp was 51C which really isn't bad all things considered. The 9700k does get loaded massivly so sits around 50C. However I loaded up minecraft RTX on the Neon City level and say 95% CPU usage and 70C which is the highest i've ever seen it!


Wow, I had a look at your rig and that's a lot of power in a tiny package! Not sure what kind of advice I could impart. You're right that the gains with an actual full water block would be marginal to non-existent, but you may be able to save some space where the GPU is vs the G12 bracket. If money were no object you could also do an external radiator, but that kind of defeats the purpose of the small form factor, unless you used quick-disconnect fittings and wanted to retain the form factor for portability (with 2x 240 rads in the chassis and an external 560 rad or larger).

Anyhow, I just wanted to thank you for your earlier post; I don't know why I didn't think of liquid metal sooner! I ordered MG Chemicals 422B-55ML Silicone Conformal Coating from eBay for the lands / capacitors that encircle the GPU die; it should arrive around the same time as the TGC liquid metal (hopefully right before next weekend). I need to tear the loop apart anyhow, because the run from the GPU outlet to the distribution plate isn't quite long enough, and I'm worried that if it's bumped with sufficient force (while cleaning, etc.) it could become a leak source down the road. I have to drain the loop anyway, because I'm swapping out the DDC for 2x D5 in serial (EK-XTOP Revo Dual D5 Serial).

Looking forward to seeing how much lower the GPU core will run with liquid metal and a good 50% increase in both flow rate and head pressure over the single DDC, while quieting the system down and adding the peace of mind of a back-up pump, should the one going on 3 years of 24/7 duty give up the ghost. Also looking forward to not having to repaste the GPU for a few years. When I put my 1080 Ti under a water block in Nov 2017 with Gelid GC Extreme, the temps slowly, imperceptibly increased to the point where 50C under load at room temperature with an undervolt (1.025v) was normal (1080 Ti FE @ 300 W default vBIOS). After falling back on my 1080 Ti when my first 2080 Ti suffered memory failure (Micron), I took the opportunity to repaste the GPU, and the temperature dropped 7C (same TIM)! I guess Gelid GC Extreme doesn't last forever. I hear liquid metal lasts a lot longer, though it depends on how hot the component gets. I'm anticipating a drop of 5-7C from the TIM alone, and maybe another 2-3C from the increase in flow rate and head pressure.

Right now the core gets up to 50C in extended gaming sessions; it starts out at 43-45C but slowly climbs. The radiators aren't even warm! You probably know what I'm talking about here: when I was still using AIOs, I remember how hot the Corsair H55s (120x25mm) attached to my 780 Tis would get; put your hand against the rads themselves and they were PIPING hot, as hot as a hot back-plate. That prompted me to upgrade to NZXT X41s (140x38mm), which dropped the temps 5-7C, and those rads were cooler but would still get warm. Fast forward to today: I have a 420 SE (420x25mm) push-only in the ceiling and a 360 PE (360x38mm) push-pull in the front of the case, and both of these rads are nearly cool to the touch.

I don't have a water temp sensor, but I'd guesstimate the water is near ambient, around 30C, meaning what is likely happening is that the water block on the GPU is itself heating up due to flow restriction from the single DDC. I saw a 5C reduction in temps by raising the DDC from 55% to 100% RPM, so I know there is still room for improvement, as the rads are still cool to lukewarm to the touch. By upping flow rate from 1000 L/h to 1500 L/h (100% RPM), or even 1200 L/h (80% RPM, a compromise between performance and acoustics), and head pressure from roughly 5 m to 7 m, in conjunction with conveying the GPU core's heat to the water block more effectively, I'm hoping a 10C reduction is possible, i.e. 35-40C under sustained full load. That may allow 2100 MHz (it starts at 2085 MHz @ 1.019v, dropping to 2070 MHz somewhere around 40-43C) under 40C @ 1.019v, as the cooler core needs less voltage for a given frequency and avoids the -15 MHz drop at 40C.
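For what it's worth, a back-of-envelope number supports the "rads barely warm" observation: the steady-state coolant temperature rise across the loop is P / (ṁ·c_p), which at these flow rates is tiny. The figures below are the rough ones from this post (300 W card, 1000 L/h flow), not measurements:

```python
# Back-of-envelope check on why the radiators barely warm up: the
# steady-state coolant temperature rise across the loop is P / (m_dot * c_p).

def coolant_delta_t(power_w, flow_l_per_h):
    c_p = 4186.0                     # J/(kg*K), specific heat of water
    m_dot = flow_l_per_h / 3600.0    # kg/s (1 L of water ~ 1 kg)
    return power_w / (m_dot * c_p)

dt = coolant_delta_t(300, 1000)
print(round(dt, 2))  # ~0.26 C rise in the water itself
```

So the water really does sit near ambient even under load; the big temperature gradient is at the die / TIM / cold-plate interface, which is exactly where liquid metal helps.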

I'm super excited at this point, again, thanks for reminding me of liquid metal!


----------



## Mooncheese

Looking forward to never having to change the TIM again on this card (assuming I don't upgrade to 3080 Ti next year): https://www.gamersnexus.net/guides/...e-year-test-how-often-to-replace-liquid-metal

The Gelid GC Extreme I'm currently using does in fact need to be changed, presumably on an annual basis, considering my 1080 Ti got around 7C warmer over 2.5 years of not changing the TIM under a full water block. I like that the liquid metal used on CPUs, at much higher temps, only saw about a 1C increase after a year's time according to GamersNexus.


----------



## spin5000

Mooncheese said:


> You can see how stable your overclock is fairly accurately with OC Scanner in MSI AB. Anything under 90% Confidence Rating and youre going to get a display driver failure induced CTD (or black screen) in many games. I can pass all of the benchmarks at 2100 MHz on the core with an undervolt (1.019v) but that results in a confidence rating of 69% and I will get crashes in games, i.e. F1 2019 crashing within like 15 minutes of a session. 90% = no crashing in F1 2019 period.
> 
> Care to post your volt + freq curve? I'm curious as to how it looks, if you get a 90% Confidence Rating in OC Scanner I may try your curve as I'd like to secure more than 2070 on the core, ideally 2150 MHz but it may be wishful thinking at this point, and if I have to run 1.069v which increases my temps 5-7C for another 75 MHz I'm not sure if it's worth it considering I will lose at least 15 MHz of that once the core gets up and over 50C and undervolting prolongs the life of the silicon. Running 1.069v to 1.093v, it's the same story as either running a CPU at 5.0 GHz @ 1.342v or 5.1 GHz @ 1.425v, yeah you got another 100 MHz, but that additional voltage isn't good for the silicon.
> 
> Let me elaborate on what prolonging the life of the silicon means. This means that a processor degrades much slower with less voltage. Say you just got a brand new CPU, take 8700k or whatever, and you want to secure 5.1 GHz, so you run 1.425 or even 1.45v, this processor can theoretically take 1.425v, but it's not good for it. You run 5.1 GHz for 6 months, next thing you know youre getting blue screens, it's no longer stable, after loosening your memory timings, increasing your auxiliary voltages without success you finally realize that you have to drop your CPU freq down. Now it does 5.0 GHz but it still requires 1.425v whereas before, when it was new, it could do this at 1.34v. Then 6 months later it's no longer stable at 5.0 GHz, eventually you realize that you have too much voltage but by then it's too late. Same goes for GPU's. Plenty of anecdotal stories abound where user A ran their 1080 Ti @ 1.093v and yeah, it did 2050 MHz great for about 6 months, and then it wasn't stable any longer, was only stable at 2000 MHz with the same voltage (which was previously attainable with 1.063v), then that wasn't stable any longer.
> 
> I've actually experienced this first-hand with over-volting with 780 Ti years ago. I learned real quick that the best approach to overclocking is keeping the core as cool as possible (hence my foray into water cooling) in conjunction with undervolting. My 1080 Ti that I purchased in March of 2017 still does 2000 Mhz @ 1.025v to this day 100% rock solid as has been under a water block for 80% of it's life (I only just upgraded to 2080 Ti as of about 6 weeks ago) and I leave my computer on 24/7 (I mine crypto to heat the apartment / offset my heating bill and actually use it for gaming maybe 3-5 hours a day) . I'm seeing reports on a daily basis of 1080 Ti owners having their cards die on them.
> 
> Best approach is bring the temp of the component down as much as possible and undervolt!


I can't test the OC with the MSI AB OC Scanner because every time I try, MSI AB automatically lowers my entire saved OC curve. The program is ********.

Here's a pic of the curve before MSI AB screws it up:

Just to reiterate: I used the MSI AB OC Scan curve, then added a further OC on top by raising the MSI AB core-clock slider, then went into the curve and manually lowered every point above 2115 (2130, 2145, etc.) to 2115, since I once had a benchmark freeze when the card hit either 2130 or 2145 MHz (can't remember which).


P.S. Does the 2080 Ti's BIOS control maximum fan speed? I could have sworn my Zotac AMP non-extreme stock BIOS ran faster fan speeds than the EVGA XC Ultra BIOS I flashed to. The actual RPM for a given fan-speed %, as well as the max (100%) speed, seem lower with the EVGA BIOS. Is this all in my head? I know that with the 980 Ti, fan speeds (curve and max) were literally dictated by the BIOS...


----------



## mardon

Mooncheese said:


> Wow I had a look at your rig and that's a lot of power in a tiny package! Not sure what kind of advice I could impart, youre right that the gains with an actual full water block would be marginal to non-existent, but you may be able to save some space where the GPU is vs the G12 bracket. If money was no object you could also do an external radiator, but that kind of defeats the purpose of the small form factor, unless you used quick-disconnect fittings and were wanting to retain the form factor for portability reasons (with 2x 240 rads in the chassis, external 560 rad or larger).
> 
> Anyhow, I just wanted to thank you for your post earlier, I don't know why I didn't think of liquid metal earlier! I ordered MG Chemicals 422B-55ML Silicone Conformal Coating for the lands / capacitors that encircle the CPU die from ebay, should arrive here around the same time as the TGC liquid metal does (hopefully right before next weekend). I need to tear apart the loop anyhow because the run going from the GPU outlet to the distribution plate isn't exactly long enough and I'm worried that if it's bumped into with sufficient force (whilst cleaning etc.) that it may be a leak source down the road and I have to drain the loop anyway because I'm swapping out the DDC for 2x D5 in serial (EK-XTOP Revo Dual D5 Serial).
> 
> Looking forward to how much lower the GPU core will be with liquid metal and a good 50% increase in both flow rate and head pressure over single DDC, while quieting the system down and adding peace of mind with a back-up pump should the one that is going on 3 years of 24/7 give up the ghost. Also, looking forward to not having to repaste the GPU for a few years; when I put my 1080 Ti under a water block in Nov 2017 with Gelid GC Extreme the temps slowly, imperceptibly increased to the point where 50C under load at room temperature with an undervolt (1.025v) was normal (1080 Ti FE @ 300w default vbios). After having to fall back on my 1080 Ti after my first 2080 Ti suffered from memory failure (Micron) I took the opportunity to repaste the GPU and the temperature dropped 7C! (same TIM) I guess Gelid GC Extreme doesn't last forever. I hear liquid metal lasts a lot longer but it depends on how hot the component gets, I'm anticipating a drop of 5-7C from the TIM alone, and maybe another 2-3C from the increase in flow rate and head pressure. Right now the core actually gets up to 50C under extended gaming sessions. It starts out at like 43-45C but slowly climbs up there. Radiators aren't even warm! You probably know what I'm talking about here, when I was still using AIO's myself I remember how hot the Corsair H55's (120x25mm) would get that were attached to my 780 Ti's, if you put your hand against the rads themselves they were PIPING hot, like as hot as a hot back-plate. This prompted me to upgrade to NZXT x41's (140x38mm) and that did drop the temps like 5-7C and those rads were cooler, but would still get warm / hot. Fast forward to today and now I have a 420 SE (420x25mm) push only in the ceiling and a 360 PE (360x38mm) push-pull in the front of the case and both of these rads are nearly cool to the touch. 
> I don't have a water temp sensor, but I'd guesstimate the water temp is near ambient, around 30C, meaning what is likely happening is that the water block on the GPU is itself heating up due to flow restriction from the single DDC. I saw a 5C reduction in temps increasing the DDC from 55% to 100% RPM, so I know there is still room for improvement, as the rads are still cool to lukewarm to the touch. Upping flow rate from 1000 lph to 1500 lph (100% RPM), or even 1200 lph (80% RPM, a compromise between performance and acoustics), and head pressure from ~5 to ~7 m, in conjunction with effectively conveying the heat from the GPU core to the water block, I'm hoping a 10C reduction is possible, or 35-40C under full load, sustained. This may allow for 2100 MHz (it starts out at 2085 MHz @ 1.019v, dropping to 2070 MHz somewhere around 40-43C) under 40C @ 1.019v, as the core will be 10C cooler, requiring less voltage to secure more frequency and countering the -15 MHz drop at 40C.
> 
> I'm super excited at this point, again, thanks for reminding me of liquid metal!


Wow, you really know your stuff. I love the idea of an additional radiator in the ceiling!
I think at this point I'm just looking to refine my build given the money already spent. If I were to start again I'd definitely go full custom loop, and on my next build I'll go that route.

I'm looking forward to seeing what RTX performance the 3000 series brings. Minecraft RTX has actually blown my mind. The fact it can make something so graphically simple look stunning is very exciting for me. The 2080 Ti with DLSS does chug a little at 3440x1440. If we get future modern games with full path tracing we're going to need some more tensor cores!

I've used liquid metal a few times with great success on laptops. However, I had a 2080 Super for a time, and despite no obvious spillage I lost a DisplayPort to the application, so I am nervous. Perhaps I'll let you go first and report your temperature drop?! The video I watched last night saw only a 1C drop on a 2080. Stupidly, I didn't record before-temperatures on mine, but they were definitely in the ballpark of what most others were getting for the Zotac Dual Fan (read: hot and loud in an SFF PC).



spin5000 said:


> I can't test the OC with MSI AB OC Test because every time I try, MSI AB automatically lowers my entire saved OC curve. The program is ********.
> 
> Here's a pic of the curve before MSI AB screws it up:
> 
> Just to reiterate. I used the MSI AB OC Scan curve. Then I added a further OC to that curve by upping the MSI AB core clock slider. Then I went into the curve and manually lowered every point above 2115 (2130, 2145, etc.) to 2115 since I had a benchmark freeze once when the card either hit 2130 or 2145 MHz (can't remember).
> 
> 
> P.S. Does the 2080 Ti's BIOS control fan max speeds? I could have sworn my Zotac AMP non-extreme stock BIOS had faster fan-speeds than the EVGA XC Ultra BIOS I flashed to. Fan speed for a given fan speed % as well as max speed (100%) seem lower with the EVGA BIOS. Is this all in my head??? I know with the 980 Ti, fan speeds (curve and max speed) were literally dictated by the BIOS...


Ok, so I'm not going mad. Re-finding my OC last night I had exactly the same experience! Very annoying. I've not had a crash yet with a 90% certainty scan, despite the annoyance of curve nodes jumping around or the whole curve moving up or down. I then usually do a TimeSpy Extreme stress test; if the settings pass that, they've so far been good to go.

For reference here are my curves: 
Non Demanding
1935Mhz Core | 7000Mhz Mem | 887mV

24/7
2025Mhz Core | 7700Mhz Mem | 981 mV

Mild Overclock
2070Mhz Core | 7800Mhz Mem | 1000mV

I’m not dialing in my top overclock until i’ve sorted the final tweaks.

The difference between Non Demanding and my mild OC is around 4-7 FPS.


----------



## sultanofswing

I've used Liquid Metal on almost all of my GPU's.
Usually the temp difference between Liquid metal and the Kryonaut I usually use is 2c.


----------



## mardon

sultanofswing said:


> I've used Liquid Metal on almost all of my GPU's.
> Usually the temp difference between Liquid metal and the Kryonaut I usually use is 2c.


If that's the case, the risk/reward may not be worth it.


----------






## Mooncheese

sultanofswing said:


> I've used Liquid Metal on almost all of my GPU's.
> Usually the temp difference between Liquid metal and the Kryonaut I usually use is 2c.


Any idea how Kryonaut compares to Gelid GC Extreme? I'm really curious how you're only seeing a 2C difference with that much radiator surface area, as others with good coolers are seeing at least a 5C difference between liquid metal and the best non-liquid-metal TIM. Have you tried 100% pump speed? Maybe your waterblock is doing what mine is doing: heating up and not conveying the heat to the water effectively because the flow rate isn't adequate. Not sure.

Maybe it's a lack of mounting pressure? 

Follow this up: https://www.youtube.com/watch?v=HNxuhLUAh8A
With this, TechPowerUp saw a 10C reduction, in conjunction with washers accounting for the thickness of the thermal pad (something der8auer failed to account for), and that's with the factory air cooler to top it off: https://www.techpowerup.com/review/amd-radeon-vii/33.html

If we include the HBM2, the Radeon VII has a nearly comparable die-size heat dissipation area to the 2080 Ti.

I'm seriously curious how you're only seeing a 2C gain over conventional TIM with that much rad surface area and a water block. Inadequate mounting pressure? Not sure.


----------



## sultanofswing

Mooncheese said:


> Any idea how Kryonaut compare to Gelid GC Extreme? Really curious as to how youre only seen a 2C difference with that much radiator surface area as others with good coolers are seeing at least 5C difference between liquid metal and the best non liquid metal TIM. Have you tried with 100% pump speed? Maybe your waterblock is doing what mine is doing, heating up and not conveying the heat to the water effectively because the flow rate isn't adequate. Not sure.
> 
> Maybe it's a lack of mounting pressure?
> 
> Follow this up: https://www.youtube.com/watch?v=HNxuhLUAh8A
> 
> With this, Tech Powerup saw a 10C reduction in conjunction with washers accounting for the thickness of the thermal pad, something Der8auer failed to account for) and that's with the factory air cooler to top it off. : https://www.techpowerup.com/review/amd-radeon-vii/33.html
> 
> If we include the HBM2, Radeon 7 has a near comparable die size wattage dispersion area to 2080 Ti.
> 
> I'm seriously curious as to how youre only seeing a 2C gain over conventional TIM with that much rad surface area and a water block. Inadequate mounting pressure? Not sure.


Flow rate is definitely not an issue with Triple D5 pumps in my loop.

There are way too many variables as to why some people see a bigger temp decrease than others. 

My setup is already very efficient for what it is. With an ambient temp of 21-22C and my card at 2145 MHz, it takes roughly 2 hours for my card to reach 40C while playing games that work the card hard (just tested this with the Minecraft RTX beta).

If I could get a 5c drop in temp with Liquid Metal on my setup I would do it but I've tested it enough to know that it's not worth it for me.

Another thing to remember is that the colder the temperature, the less effective liquid metal is: as temperatures decrease, liquid metal's thermal conductivity goes down.

I use Conductonaut when I use liquid metal; it's been on my 8700K for 3 years now.


----------



## Mooncheese

sultanofswing said:


> Flow rate is definitely not an issue with Triple D5 pumps in my loop.
> 
> There are way too many variables as to why some people see a bigger temp decrease than others.
> 
> My setup is already very efficient for what it is. With an Ambient temp of 21-22c and my card at 2145mhz it takes roughly 2 hours for my card to reach 40c while playing games that work the card hard (just tested this with Minecraft RTX Beta)
> 
> If I could get a 5c drop in temp with Liquid Metal on my setup I would do it but I've tested it enough to know that it's not worth it for me.
> 
> Another thing to remember is the colder the temperature the less effective Liquid Metal is, As temperatures decrease Liquid Metals Thermal conductivity goes down.
> 
> I use Conductonaut for Liquid metal when I use it, It's been on my 8700k for 3 years now.


Oh, that's good to know; I wasn't aware that the colder the temperature, the less conductive liquid metal becomes. Maybe it looks like -3C at 45C, but that same component would see -7 to -10C at 70C?

Well, my core is getting up to 50C and I'd like to bring that down. I believe it's not a matter of inadequate radiator surface area; it's a combination of flow rate and the GPU core conveying its heat to the water block, then to the water, effectively.

My card will get up to 50C within about an hour, maybe 1.5 hours of gaming. Possibly the waterblock is slowly accumulating heat and not being adequately cooled by the single DDC; again, the radiators are nearly cool to the touch.

So you were using liquid metal and you switched back to conventional TIM? Why the switch? 

The other thing that attracts me to liquid metal is that I am seeing rather substantial performance degradation with my current thermal paste over time: my 1080 Ti went from around 43C full load with fresh paste in Nov 2017 to hitting 50C recently. So +7C in around 2 years.

It's my understanding that liquid metal is very robust and lasts nearly indefinitely. I intend to use liquid metal in conjunction with the conformal coating on the SMDs / SMTs from here on out. I'm tired of losing performance over time and needing to tear the GPU apart down the road.

I want to do it now because I want to see what kind of gains I can expect with a core that does get up to 50C. 

To me 50C with as much radiator surface area as I have and a full water block is fail territory. 

I should be mid to low 40's.


----------



## sultanofswing

Mooncheese said:


> Oh that's good to know, I wasn't aware that the colder the temperature the less conductive liquid metal becomes. Maybe it looks like -3C @ 45 degrees but that same component would be like minus 7-10C @ 70C?
> 
> Well my core is getting up to 50C and I'd like to bring that down. I believe that it's not a matter of inadequate radiator surface area, it's a combination of flow rate and GPU core conveying the heat to the water-block then to the water effectively.
> 
> My card will get up to 50C within about an hour, maybe 1.5 hours of gaming. Possibly the waterblock is slowly accumulating with heat and not being adequately cooled with the single DDC as again, the radiators are nearly cool to the touch.
> 
> So you were using liquid metal and you switched back to conventional TIM? Why the switch?
> 
> The other thing that attracts me to liquid metal is the fact that I am seeing a rather substantial performance degradation with my current thermal paste over time, my 1080 Ti went from like 43C full load with new paste in Nov 2017 to now hitting 50C as of recent. So +7C in around 2 years.
> 
> It's my understanding that liquid metal is very robust and lasts nearly indefinitely. I intend to use liquid metal in conjunction with the non-conformal paste on the SMD / SMT's from here on out. Tired of losing performance over time and having / needing to go in and tear the GPU apart down the road.
> 
> I want to do it now because I want to see what kind of gains I can expect with a core that does get up to 50C.
> 
> To me 50C with as much radiator surface area as I have and a full water block is fail territory.
> 
> I should be mid to low 40's.


What fans are you running on your Radiators?


----------



## Mooncheese

sultanofswing said:


> What fans are you running on your Radiators?


Noiseblockers and Vardar EVOs in push-pull @ 40% RPM on the 360 PE, and Phanteks PH-F140MPs on the 420 SE, also at 40% RPM.

It's not an inadequate-airflow problem; the radiators aren't even getting hot. If I had to guess, water temp is at 30C.


----------



## Mooncheese

For anyone considering this with a copper heat-sink, I found this bit of information. Basically, gallium alloys with copper over time, leading to the perception of it "drying out" (exactly this happened to me with my laptop and a copper heat-sink; it took about 6-9 months, but the 70-80C temperatures helped expedite the process). If you have nickel-plated copper you're pretty much good to go:

http://forum.notebookreview.com/thr...-works-why-it-fails-and-how-to-use-it.809332/

"Dangers of Liquid Metal:
Reactivity with Copper Heatsinks

It's been debated whether liquid metals damage copper heatsinks.
They do: over time, the gallium in liquid metal will be absorbed into the copper heatsink, causing the LM to "dry" out.

Let's explain in more detail:
The electrode potential of gallium is -0.53V, nickel is -0.24V, and copper is +0.34V.
The difference between gallium and copper favors a reaction that occurs even at room temperatures.

Obviously, all liquid metals have a high gallium content (plus other metals to reduce the melting point). When the gallium contacts pure copper, the metals irreversibly alloy. This reaction proceeds until there is no more copper or all the gallium is consumed [3].

The reaction is: Ga + Cu → CuGa2 [67%] + Cu3Ga [11%]. ( + Ga2O3 [12%])
Both CuGa products are stable until 175C[3][4].


This means liquid metal will literally eat into the copper until the gallium is gone, and the resulting copper-gallium alloy is a silver-ish color. Yes, in case you are wondering, the gallium in liquid metal reacts this way despite the fact that there are other metal stabilizers present in LM[5].

The non-gallium components (indium, tin, etc) of the liquid metal[3] which are solid at room temperature will be left behind on the heatsink surface as this process occurs. The formation of this non-gallium metal deposit is most obvious visually when the gallium is totally absorbed into copper. Do note that this residual non-gallium liquid metal is hard and brittle, as you would expect. While this deposit is technically metal and is a good heat conductor, it does not form evenly and therefore it's highly likely that an air gap will also form between the die and the heatsink, and your laptop will hit thermal runaway at this point. This video [link] is a good example of the consequences of this process. This mechanism appears to be the most common cause of long-term failure in LM applications.


Note that at higher temperatures, the invasion of liquid gallium into the copper heatsink only gets faster.[6] Anecdotally, it appears that this process can take anywhere from just a couple months to a year+ until a point of failure is reached.

The factors influencing the speed of this process include obvious ones like temperature, formula of LM used, surface roughness, and amount of LM used. But, the porosity and purity of the copper heatsink may also play a role. Due to all these variables, accurately predicting the rate of erosion for an application of LM is simply not possible.
Here's a graph of the mass fractions of the obtained CuGa alloys at various temperatures (oxygen atmosphere) for you nerds: 

This effect is less observed in the classic delidded desktop CPU because the gallium in the liquid metal is far less reactive against the nickel plating of the CPU heatspreader. (The nickel plating is designed to protect the copper against normal solder alloying, but also happens to be effective vs gallium[7]).
If some people tell you that LM 'dries out' while others say it's totally stable, now you know why. LM is fine under a CPU IHS, and it's even fine when used between a die and pure copper, but in long-term use it will pit copper surfaces, and this can lead to temperatures that stay stable for months but suddenly spike towards the end of the LM's usable life. Again, nickel surfaces are also pitted, just at a significantly slower rate:
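The electrode potentials quoted above make the "why copper and not nickel" point easy to quantify. A quick sketch using only the values from the article (the comparison of potential differences is illustrative, not a full electrochemistry model):

```python
# Standard electrode potentials (V) as given in the quoted article.
potentials = {"gallium": -0.53, "nickel": -0.24, "copper": +0.34}

def driving_force(metal):
    """Potential difference between a heatsink metal and gallium.

    A larger positive difference means a stronger thermodynamic
    drive for gallium to alloy with that metal.
    """
    return potentials[metal] - potentials["gallium"]

for metal in ("copper", "nickel"):
    print(f"Ga -> {metal}: {driving_force(metal):+.2f} V")
# Copper (+0.87 V) has roughly three times the driving force of
# nickel (+0.29 V), matching the observation that nickel plating
# is pitted far more slowly than bare copper.
```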


----------



## sultanofswing

Mooncheese said:


> Noiseblockers and Vardar EVO's push-pull @ 40% RPM on the 360PE and Phanteks PH-F140MP on the 420SE, also at 40% RPM.
> 
> It's not an inadequate airflow problem, the radiators aren't even getting hot. If I had to guess water temp is at 30C.


Without a temp sensor you will never know, my rads never get hot either so that doesn't really tell you anything.

I run 17 Arctic P12's at 1400 RPM; the hottest I have ever seen my water temp is 30.8C, and that was with my ambient up near 24C.

My water temp delta from idle to full load is 5C.


----------



## J7SC

sultanofswing said:


> *Without a temp sensor you will never know*, my rads never get hot either so that doesn't really tell you anything.
> 
> I run 17 Artic P12's at 1400rpm, the hottest I have ever seen my water temp was 30.8c and that was with my ambient up near 24c.
> 
> my water temp delta from Idle to full load is 5c.


 
^^...this

Also, per below, some of the temp differences are quite minimal in the real world. I have used Coollaboratory Liquid Ultra and Pro (LM), Gelid GC Extreme, Noctua NT-H1 and MX-4. Obviously, LM performs best, but for longer-term and/or regular machine use I stick with MX-4, mostly because it is easily available locally. I have machines with MX-4 that have been temp-steady for 8 years without a remount. I like Gelid Extreme as well for non-LM applications, but at the end of the day the difference to MX-4 is 0.6C (per below), and it is harder to get here.


----------



## Mooncheese

sultanofswing said:


> Without a temp sensor you will never know, my rads never get hot either so that doesn't really tell you anything.
> 
> I run 17 Artic P12's at 1400rpm, the hottest I have ever seen my water temp was 30.8c and that was with my ambient up near 24c.
> 
> my water temp delta from Idle to full load is 5c.


The rads not getting hot tells me a lot. I've had inadequate rad surface area before: with a Corsair H55 (120x25mm) affixed to a 780 Ti, the radiator was incredibly hot. I then upgraded to a 980 Ti and, later, an NZXT X41 (140x38mm), and that radiator, while also warm/hot, wasn't nearly as hot as the 120x25mm rad. Both of my radiators are cool to the touch now, so the heat on the GPU core isn't down to high water temp; either the GPU block isn't being cooled adequately because of inadequate flow / head pressure, or there is room for improvement in the GPU core conveying its heat to the block. In my case I believe it's a combination of both.


----------



## sultanofswing

J7SC said:


> ^^...this
> 
> Also, per below, some of the temp differences are quite minimal in the real world. I have used Coolaboratory Ultra and Pro (LM), Gelid Extreme, Noctua NH1 and MX4...obviously, LM performs best but for longer term and/or regular machine use, I stick with MX4, mostly as it is easily available locally. I have machines with MX4 that have been temp-steady for 8 years without remount. I like Gelid Extreme as well for non-LM applications, but at the end of the day, the difference to MX4 is 0.6 C (per below), and it is harder to get here.


Exactly my findings over the years with different generations of cards and testing applications.
I stopped using LM on my GPU's as it was just not worth it. 

I also do a crap ton of testing; I think I remounted a 2080 Ti 6 times in one weekend testing different thermal pastes/liquid metals, and at the end of the day Kryonaut works just as well and is easier to clean up than liquid metal, which can be a PITA as it likes to get everywhere.


----------



## Mooncheese

J7SC said:


> ^^...this
> 
> Also, per below, some of the temp differences are quite minimal in the real world. I have used Coolaboratory Ultra and Pro (LM), Gelid Extreme, Noctua NH1 and MX4...obviously, LM performs best but for longer term and/or regular machine use, I stick with MX4, mostly as it is easily available locally. I have machines with MX4 that have been temp-steady for 8 years without remount. I like Gelid Extreme as well for non-LM applications, but at the end of the day, the difference to MX4 is 0.6 C (per below), and it is harder to get here.


Here's another relevant observation of mine. When I repaste with Gelid GC Extreme the temps are amazing, but after a week or two the temps go up by 3-4C as the paste "cures". You're not going to see that effect in a test where they simply swap TIMs between 20-30 different manufacturers and don't return to note any temp changes 1-2 weeks down the road. Liquid metal doesn't do this; there is no curing going on.


----------



## J7SC

Mooncheese said:


> Here's another relevant observation of mine. When I repaste with Gelid GC Extreme the temps are amazing, but then after a week or two it seems that the temps go up by 3-4C as the paste "cures". Youre not going to see that effect in at test where they simply swap TIM between like 20-30 different manufacturers and don't return to see / note any temp changes 1-2 weeks down the road. Liquid metal doesn't do this, there is no curing going on.


 
This did NOT happen with my Gelid Extreme and MX4 CPU and GPU applications. Temps stayed where they were initially, subject only to ambient temp variance.


----------



## jura11

@Mooncheese

On Pascal GPUs I used Noctua NT-H1, with which I have the best experience; temperatures have always been good, in the 36-38°C range.

With Turing GPUs I have tried Noctua NT-H1, Kryonaut, Gelid (EKWB Ecotherm), LM and Thermalright TF6, TF8 and TFX, and in my case TFX is one of the best TIMs I have used on a CPU or GPU.

The EKWB 420 SE is not the best radiator; you want something like HWLabs or XSPC. SE radiators are among the worst-performing radiators.

With your radiator space you shouldn't see such temperatures. Personally, I would have a look at the Aquacomputer Quadro for fan control and use their sensors, which are really good ones.

Another option is adding a MO-RA3 360 to your loop, which should lower the water delta T to reasonable levels.

I only see 42°C on my RTX 2080 Ti at 2145 MHz when folding in Folding@Home, as the other 3 GPUs contribute heat to the loop.

DDC pumps shouldn't be a problem for such a loop. I run a single EK DDC 3.2 PWM Elite with a 360mm radiator, a 240mm 60mm-thick radiator, 3 GPUs, and a 5960X under an Aquacomputer Kryos HF, which is one of the most restrictive waterblocks, and I don't think the DDC is the issue in your loop.

Hope this helps 

Thanks, Jura


----------



## Mooncheese

jura11 said:


> @Mooncheese
> 
> On Pascal GPUs I used Noctua NT-H1 with which I have best experience and temperatures have been always good in 36-38°C
> 
> With Turing GPUs I have tried like Noctua NT-H1, Kryonaut, Gelid(EKWB Ecotherm), LM and Thermalright TF6, TF8 and TFX and in my case TFX is one of best TIM which I have used on CPU or GPUs
> 
> EKWB 420SE is not the best radiator, ypu want something like HWLabs or XSPC, SE radiators are one of worst performing radiators
> 
> With your radiator space you shouldn't see such temperatures, personally I would have look on Aquacomputer Quadro for fan control and for sensors use their sensors which are really good ones
> 
> Other option is adding MO-ra3 360mm to your loop which will and should lower water delta T to reasonable levels
> 
> I can see 42°C on my RTX 2080Ti with 2145MHz only when folding in Folding@Home as other 3*GPUs contribute to heat in the loop
> 
> DDC pumps shouldn't be a problem for such loop there, I run single EK DDC 3.2 PWM Elite edition with 360mm radiator and 240mm 60mm thick radiator and 3*GPUs setup with 5960x and Aquacomputer Kryos HF which is one of most restrictive waterblock and don't think DDC is issue in your loop
> 
> Hope this helps
> 
> Thanks, Jura


The 420 SE (3x140, 28mm) in pull performs on par with the 360 PE (3x120, 38mm) in push-pull, considering the fans only operate at 40% RPM. If I were to increase the fan speed to 70% or more on the 360 PE, it would outperform the slim 420, but I like a quiet system and anything above 40% RPM is too loud for me.






As you can see from testing, radiator thickness only provides a benefit in push-pull with high fan speed; here, look at the 360 PE vs the 360 XE:

EK 360 XE @ 1850 RPM (100%) push-pull: 472w
EK 360 PE @ 1850 RPM push-pull: 383w

EK 360 XE @ 1350 RPM (70%) push only: 318w
EK 360 PE @ 1350 RPM push only: 293w

EK 360 XE @ 750 RPM (35%) push-pull: 215w
EK 360 PE @ 750 RPM push-pull: 204w

EK 360 XE @ 750 RPM push only: 176w
EK 360 PE @ 750 RPM push only: 182w

As you can see, the thinner radiator in push only at lower RPM actually outperforms the thicker radiator. 

https://www.xtremerigs.net/2015/05/31/ek-coolstream-xe-360mm-radiator-review/5/
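To make the pattern explicit, here's the same data as a small sketch computing the thick radiator's percentage advantage at each test point (numbers transcribed from the figures above; nothing new measured):

```python
# Dissipation (W) as reported above: (radiator, rpm, config) -> watts.
data = {
    ("360 XE", 1850, "push-pull"): 472,
    ("360 PE", 1850, "push-pull"): 383,
    ("360 XE", 750, "push-pull"): 215,
    ("360 PE", 750, "push-pull"): 204,
    ("360 XE", 750, "push"): 176,
    ("360 PE", 750, "push"): 182,
}

def advantage(rpm, config):
    """Percent advantage of the 60mm XE over the 38mm PE."""
    xe = data[("360 XE", rpm, config)]
    pe = data[("360 PE", rpm, config)]
    return 100 * (xe - pe) / pe

print(f"1850 RPM push-pull: XE leads by {advantage(1850, 'push-pull'):.0f}%")
print(f" 750 RPM push-pull: XE leads by {advantage(750, 'push-pull'):.0f}%")
print(f" 750 RPM push only: XE leads by {advantage(750, 'push'):.0f}%")
# At high RPM the thick rad wins big (~23%); at low RPM push-pull the
# gap shrinks to ~5%; and in low-RPM push only the thin PE wins (~3%).
```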

I went with the 420 SE in pull in the ceiling because a 420 XE in push-pull would put the rad and fans down another 2-3 inches from where it is now, near the top of the monoblock and flirting with the memory. I wanted the top rad out of sight, out of mind; I didn't want this: https://linustechtips.com/main/uplo...humb.jpg.d4b38081a3eb82f209ebdf12731264bd.jpg

Additionally, the benefit of a thicker radiator can only be seen when running the fans at 70% RPM or more. If you run the fans down at 40% RPM as I do there is little to be gained with a thicker radiator. 

Slim radiators get a bad rap, but end users don't really understand that unless you are running push-pull with the fans above 60-70% RPM, the additional thickness yields little benefit at best, or is a detriment at worst, as we see with the 60mm-thick 360 XE at 750 RPM push only vs the 38mm-thick 360 PE.

At 40% RPM, a slim 420 in push only is about on par with a 38mm 360 rad in push-pull. We are talking about 28 vs 38mm here. The 38mm rad in push-pull may outperform the slim 420 with both rads' fans set to 70%+ RPM, but again, that's too loud for me, and the performance difference may be marginal at best.

Honestly, I'm a fan of 140mm fans and radiators; I only went with a 360 rad in the front because a 420 wouldn't fit flush there (Thermaltake View 71). Technically a 420 rad can be placed there, but the outer fans would be obstructed by the front case shroud behind the tempered glass (see attached pics). IMHO, longer and wider but slimmer doesn't mean inferior. The 420 SE in push only is AT LEAST on par with the 10mm-thicker 360 PE in push-pull; for all I know it's actually better, but neither ExtremeRigs nor anyone else has done a review of the 420 SE. I went with the slim 420 because there is no middle ground here: the alternative is the 420 CE, which is 45mm thick, would only yield additional benefit with fans in push-pull, and would sit 45mm further down towards the monoblock and memory modules.

At this point I would need to redo the loop and buy a new radiator and 3 more fans for a 420 CE, but that's not the problem! I have ample radiator surface area. My problem is the GPU core conveying its heat to the water block, and the water block conveying its heat to the water, not a shortage of radiator surface area.

With that out of the way, 2x 360 radiators of moderate thickness is triple the one-120x25mm-radiator-per-component rule of thumb (for 50-60C) for just a CPU and GPU.

I am certain that my water temp is 30C even without a temp sensor, because as soon as I exit a benchmark or a game that puts full load on the GPU, the temps immediately subside to 30C, and from there it takes maybe 30 minutes to drop down to 26-27C (near ambient = 24C). That tells me the water temp is around 30C, as the GPU block immediately drops to whatever the water temp is once 300-350w of heat is removed, and then it takes another 15-30 minutes for the water temp to slowly drop back down to ambient.
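That cooldown reasoning can be sketched with Newton's law of cooling; the time constant below is a made-up illustrative value, not a measurement of this loop:

```python
import math

def water_temp(t_min, t_start, t_ambient, tau_min):
    """Loop water temperature after t_min minutes of idle cooldown,
    assuming simple exponential decay toward ambient (Newton's law
    of cooling) with time constant tau_min (illustrative)."""
    return t_ambient + (t_start - t_ambient) * math.exp(-t_min / tau_min)

# Start at an estimated 30C water with 24C ambient; pick tau = 25 min
# (arbitrary) and see where the loop sits after half an hour.
print(f"after 30 min: {water_temp(30, 30.0, 24.0, 25.0):.1f}C")
# Lands in the 25-27C range, consistent with the observed
# "drops to 26-27C in ~30 minutes" behavior.
```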

I mean, I suppose I could put additional fans on the slim 420; I just don't see the point. The radiators don't exhibit heat-soak; they are nearly cool to the touch.


----------



## sultanofswing

Ditch those Vardar fans for something better and you could run them at a higher speed, move more air, and be quieter.

My Arctic P12's @ 1400 RPM are damn near silent, and if I open my side panel I can move paper around my desk with the airflow they put out.
I also have some Noctua NF-A12x25's that I have tested, which are regarded as some of the best radiator fans out there, and the Arctics are quieter and move just as much air through a radiator.
At 27.99 for a pack of 5, the P12's get high praise from me.

The only other fans that I would recommend would be the Silent Wings 3's.


I have- 
XSPC 54mm 360 in Push/Pull
XSPC 30mm 480 in Push/Pull
XSPC 30mm 360 in Pull

I also have an Alphacool XT45 360 lying around here that I may throw in at a later date, but I'm leaving the system how it is for now.


----------



## Mooncheese

sultanofswing said:


> Ditch those Vardar fans for something better and you could run them at a higher speed,move more air and be quieter.
> 
> My Arctic P12's @ 1400 RPM are damn near silent and if I open my side panel I can move paper around my desk from the airflow they put out.
> I also have some Noctua NF-A12x25's that I have tested which are regarded as one of the best radiator fans out there and the Artics are quieter and move just as much air through a radiator.
> At 27.99 for a pack of 5 of the P12's they get a high praise from me.
> 
> The only other fans that I would recommend would be the Silent Wings 3's.
> 
> 
> I have-
> XSPC 54mm 360 in Push/Pull
> XSPC 30mm 480 in Push/Pull
> XSPC 30mm 360 in Pull
> 
> Also have a Alphacool XT45 360 laying here I may throw in at a later date but leaving the system how it is for now.


Declining marginal return! 

You have the definition of overkill. 

Vardar EVO 120s (1150 RPM max) @ 40% RPM is as quiet as it gets!

Noiseblocker 120mm and the Phanteks fans cited were also selected because they are nearly inaudible at or below 40% RPM! 

While editing my previous comment, wanting to drive home the fact that a longer, wider, thinner radiator will dissipate more heat than a shorter but thicker radiator, I found XtremeRigs' 420 CE review:

360 XE (60mm) @ 750 RPM push-pull: 210w (push only: 176w) - https://www.xtremerigs.net/2015/05/31/ek-coolstream-xe-360mm-radiator-review/5/
420 CE (45mm) @ 750 RPM push-pull: 260w (push only: 233w) - https://www.xtremerigs.net/2015/10/06/ek-ce-420-radiator-review/5/


That's how I know that my slim 420 (28mm, pull only) is AT LEAST on par with my 360 PE (38mm, push-pull) at 40% RPM.

Again, the problem isn't inadequate fan speed or radiator surface area! It's inadequate thermal transfer from the GPU core to the water block, and from the water block to the loop! I've already seen a drop of 5C increasing the single DDC from 55% RPM to 100% RPM, and I believe there is more room to be had here! 2x D5 in serial at 80% RPM (going for acoustics) will increase theoretical flow rate from 1000 lph (single DDC @ 100% RPM) to 1200 lph, and head pressure from 5.2 m to 6.1 m! If serial D5 is quiet enough and I elect to run them at 100% RPM, we are talking about a 50% increase in flow rate and head pressure over the single DDC: 1500 lph and ~7.6 m of head.
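For what it's worth, the percentages work out roughly as claimed, taking the no-restriction ratings above at face value:

```python
# No-restriction pump ratings being compared (figures from the post).
ddc_flow, ddc_head = 1000, 5.2          # single DDC @ 100%: lph, m
d5x2_80_flow, d5x2_80_head = 1200, 6.1   # 2x D5 serial @ 80%
d5x2_100_flow, d5x2_100_head = 1500, 7.6  # 2x D5 serial @ 100%

def gain(new, old):
    """Percent increase of new over old."""
    return 100 * (new - old) / old

print(f"flow @ 80%:  +{gain(d5x2_80_flow, ddc_flow):.0f}%")
print(f"head @ 80%:  +{gain(d5x2_80_head, ddc_head):.0f}%")
print(f"flow @ 100%: +{gain(d5x2_100_flow, ddc_flow):.0f}%")
print(f"head @ 100%: +{gain(d5x2_100_head, ddc_head):.0f}%")
# 20%/17% at 80% pump speed, 50%/46% at 100%: "roughly 50%" holds
# for flow, and a little less for head.
```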

50C on the GPU core with this much radiator surface area, with an undervolt (~1.013v) at "only" 340w, there's a problem! I should be mid 40's! My 1080 Ti with the single D5 and the older soft-tubing loop was at 40-42C with Gelid GC Extreme at full load @ 300w @ 1.025v (2000 MHz core, +450 memory)! The soft tubing had at least 15 fewer 90-degree bends, if we count the fittings on the distribution plate as 90-degree turns, which they are! The single DDC isn't up to the task! My 2080 Ti was running at 45C load with the single D5 and the old soft tubing, minus those 15+ 90-degree bends, with no other changes made to the loop. I'm now 5C hotter! It's a flow-rate problem, 100% certain! 

I intend to prove my hypothesis!


----------



## sultanofswing

We've been over this, you will not see 1200 lph with dual D5's.


----------



## Mooncheese

sultanofswing said:


> We've been over this you will not see 1200lph with dual d5's.


2x serial D5 at 80% RPM will flow more and have more head pressure than a single DDC @ 100% RPM. A single DDC is rated 1000 lph. I understand that this rating is with no resistance, but that's beside the point; that's how pumps are rated, with no resistance, because resistance varies from one loop to the next. 

https://www.ekwb.com/blog/which-pump-should-you-use-d5-or-ddc/
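For what it's worth, the "rated with no restriction" caveat can be sketched numerically. This is a toy model with guessed numbers (linear pump curves, and a quadratic restriction constant picked to stand in for a restrictive hard-tube loop): pumps in series add head at a given flow, and the loop settles where the combined pump curve meets the restriction curve, well below either free-flow rating.

```python
# Toy operating-point model for loop flow. All constants are guesses,
# not measurements; real pump curves are not linear.

def pump_head(flow_lph, max_head_m, max_flow_lph):
    """Linear pump curve: full head at zero flow, zero head at free-flow rating."""
    return max(0.0, max_head_m * (1.0 - flow_lph / max_flow_lph))

def loop_loss(flow_lph, k=6.1e-6):
    """Quadratic restriction curve: head loss ~ k * Q^2."""
    return k * flow_lph ** 2

def operating_point(n_pumps, max_head_m, max_flow_lph):
    """Flow where combined (series) pump head equals loop head loss."""
    q = 0.0
    while n_pumps * pump_head(q, max_head_m, max_flow_lph) > loop_loss(q):
        q += 1.0
    return q

ddc = operating_point(1, 5.2, 1000)   # single DDC: 5.2 m / 1000 lph ratings
d5x2 = operating_point(2, 3.9, 1500)  # serial D5s: 3.9 m / 1500 lph each
print(f"single DDC ~{ddc:.0f} lph, serial D5 ~{d5x2:.0f} lph")
```

On these made-up numbers, both setups land far below their free-flow ratings; the serial D5s still flow more than the single DDC, just nowhere near the rated 1500 lph.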


----------



## J7SC

sultanofswing said:


> Ditch those Vardar fans for something better and you could run them at a higher speed,move more air and be quieter.
> 
> My Arctic P12's are damn near silent and if I open my side panel I can move paper around my desk from the airflow they put out.
> I also have some Noctua NF-A12x25's that I have tested, which are regarded as some of the best radiator fans out there, and the Arctics are quieter and move just as much air through a radiator.
> At $27.99 for a pack of 5 of the P12's, they get high praise from me.
> 
> The only other fans that I would recommend would be the Silent Wings 3's.
> 
> 
> I have-
> XSPC 54mm 360 in Push/Pull
> XSPC 30mm 480 in Push/Pull
> XSPC 30mm 360 in Pull
> 
> Also have an Alphacool XT45 360 laying here that I may throw in at a later date, but I'm leaving the system how it is for now.


 
Are those the same Arctic fans DerBauer was talking about in the vid below? 

https://youtu.be/_BvR173EwaM?t=593


----------



## sultanofswing

Mooncheese said:


> 2x serial D5 at 80% RPM will flow more and have more head pressure than a single DDC @ 100% RPM. A single DDC is rated 1000 lph. I understand that this rating is with no resistance, but that's beside the point; that's how pumps are rated, with no resistance, because resistance varies from one loop to the next.
> 
> https://www.ekwb.com/blog/which-pump-should-you-use-d5-or-ddc/


I understand this and know all about the pumps and how they are rated.

Just don't post flow rates without knowing what flow rate you actually have and what it will be with the new pumps.
Someone might look at your post and think that's the flow rate they're getting because they have X pump, when in reality that's not how it works.

The easy way to tell if you need more flow rate is your water temp vs your component temp:

High water temp, low component temp = good flow rate.
Low water temp, high component temp = less than ideal flow rate.
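A toy model of that rule of thumb (made-up thermal resistances, not measurements): at steady state the radiators set how far water sits above ambient, while flow and block transfer set the gap between component and water, so a large component-to-water gap points at the block and flow rather than the rads.

```python
# Illustrative steady-state model of the water-vs-component diagnostic.
# r_block_cw is a made-up block/flow thermal resistance in C per watt;
# rad_w_per_c is a made-up radiator capacity in watts per C over ambient.

def temps(power_w, ambient_c, r_block_cw, rad_w_per_c):
    water = ambient_c + power_w / rad_w_per_c   # rads set water over ambient
    component = water + power_w * r_block_cw    # flow/block set core over water
    return round(water, 1), round(component, 1)

print("good flow:", temps(340, 24, 0.02, 40))  # small core-to-water gap
print("poor flow:", temps(340, 24, 0.06, 40))  # same rads, big gap
```

Same power, same rads, same water temp; only the block-to-water transfer changed, and the core temp moved by over 13C.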


----------



## sultanofswing

J7SC said:


> Are those the same Arctic fans DerBauer was talking about per vid below ?
> 
> https://youtu.be/_BvR173EwaM?t=593


Yes, the P12 PWM PST. I highly recommend them.

https://www.amazon.com/ARCTIC-ACFAN...d=1587329520&sprefix=artic+p12,aps,209&sr=8-1


----------



## kev_800

Quick question for you 2080 Ti owners. Apologies if this is the wrong place for this question and if it's a silly one, but I haven't built a PC in 6 years.

In which slot do you install the 2.7-slot-width (~3 slot) cards, i.e. the EVGA FTW3 2080 Ti or Asus Strix? I have an Asus Maximus XI Hero coming, and I'm not sure these cards will fit in the top slot. It seems like a silly question and I assume they do, but I also have a Dark Rock 4 cooler and the top slot seems very close on this schematic from Asus (attached). Or does the card orient downwards towards the other slots?

I must use the top PCIe x16_1 slot for adequate bus speed, correct?

In terms of cooling the 2080 Ti, will it make any difference if I have an HT Omega Claro sound card in the PCIe slot on the bottom? I can use the onboard sound on the Maximus mobo if that will make a difference. Just want to make sure the card is adequately cooled.


----------



## Mooncheese

sultanofswing said:


> I understand this and know all about the pumps and how they are rated.
> 
> Just don't post flow rates without knowing what flow rate you have and the flow rate with the new pumps.
> Someone might look at your post and think that is what their flow rate is because they have X pump when in reality that's not how it works.
> 
> The easy way to tell if you need more flow rate is your water temp vs your component temp
> 
> High water temp, Low component temp=Good flow Rate
> Low water temp,High Component temp=Less than ideal flow Rate.


Yes, as I stated, I have high component temp + low water temp. When exiting benchmarking or any game inducing 99% load on the GPU @ 300-340W sustained, with load temps of 47-50C, the GPU core immediately drops to 30C and stays there for 15-30 minutes before gradually reducing to near ambient (26-27C, ambient = ~24C). I'm talking IMMEDIATELY drops to 30C and then stays there. In this regard, the component itself is acting as a water temp sensor: with no load or wattage across the component and it just hovering at 30C, that tells me the water temp is 30C. I don't need a water temp sensor, this is common sense. 15-30 minutes later it slowly drops to 26-27C. If the core dropped to 35 or 40C and hung out there for 15-30 minutes, that would tell me the water temp is 35-40C. The water block only needs a few moments to cool to water temp, so the block is basically acting as a water temp sensor once the component stops adding wattage to the loop. Common sense applied, no water temp sensor needed. 
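That cooldown behaviour can be sketched with two time constants (illustrative values, not measurements): the block dumps its heat into the water in seconds, so the core reading collapses onto the water temp almost immediately, while the loop's water takes tens of minutes to fall back to ambient.

```python
# Two-time-constant sketch of the post-load cooldown described above.
# tau_block (seconds) and tau_water (seconds) are illustrative guesses.
import math

def core_temp(t_s, core0=50.0, water0=30.0, ambient=24.0,
              tau_block=20.0, tau_water=900.0):
    # water decays slowly toward ambient...
    water = ambient + (water0 - ambient) * math.exp(-t_s / tau_water)
    # ...while the core-over-water excess decays fast toward the water temp
    return water + (core0 - water0) * math.exp(-t_s / tau_block)

for t in (0, 60, 300, 1800):
    print(t, "s:", round(core_temp(t), 1), "C")
```

Within a minute the core is sitting within a degree of water temp (the plateau), and only after a half hour does it approach ambient, matching the observation above.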

The core hits 47-50C with the single DDC @ 100% RPM, and was hitting 50-55C @ 55% RPM, now with hard tubing and 15+ 90-degree bends! 90-degree bends slow down flow rate, big time!

With the older soft-tubing loop with no 90-degree bends and a single D5, the same GPU, same wattage, same voltage, same TIM was hitting 42-45C @ 60% RPM on the pump! 

Real easy to deduce that I have a flow restriction problem. Also, although I like my slim 420, it's my understanding that slim radiators are more restrictive flow-wise vs thicker ones.


----------



## keikei

kev_800 said:


> Quick question for you 2080 ti owners. Apologize if this is the wrong place to put this question and if it's a silly question, but I haven't built a PC in 6 years
> 
> In which slot do you install the 2.7 width (~3 slot) cards, i.e., the EVGA FTW3 2080ti, or Asus Strix? I have an Asus Maximus XI Hero coming, and not sure if these cards will fit in the top slot -- seems like a silly question and I assume they do, but I also have a Dark Rock 4 cooler and the top slot seems very close on this schematic from Asus (attached). Or does the card orient downwards towards the other slots?
> 
> I must use the top PCI-e x16_1 slot, for adequate bus speed, correct?
> 
> In terms of cooling the 2080ti, will it make any difference if I have a HT Omega Clara sound card in the pci-e slot on the bottom? I can use the onboard sound capabilities on the Maximus Mobo if that will make a difference. Just want to make sure the card is adequately cooled.



Unless the Dark Rock 4 cooler is hanging over the PCIe slot, the card should fit. Very close, but it should fit.


----------



## kev_800

keikei said:


> Unless the Dark Rock 4 cooler is hanging over the PCIE, the card should fit. Very close, but should fit.


Got it, thanks... I will dry fit it just to be sure I don't need to install the 2080 Ti first.

Keikei -- 1300 watts? That seems a bit much for your rig? I am planning to re-use my 850 watt Gold-rated Corsair... I am wondering if that will be sufficient now.


----------



## Imprezzion

kev_800 said:


> Got it, thanks... I will dry fit it just to be sure I don't need to install the 2080 ti first.
> 
> Keikei -- 1300 watts? That seems a bit much for your rig? I am planning to re-use my 850 Watt Gold Rated Corsair.... I am wondering if that will be sufficient now


It will be, just. I mean, I ran my sig rig on a Focus Plus Gold 750w and it would handle it, barely. With all my fans, dual watercooling and the amount of RGB, it wouldn't handle a 380w BIOS anymore. It shut down several times in games when the CPU was also loaded up. So yeah, a 750w with full max overclocks and loads of RGB and fans isn't enough.

You're on a bit older 850w which is arguably even better quality, so it should be fine as long as you don't run an insane CPU overclock or 15 RGB fans hehe.


----------



## keikei

kev_800 said:


> Got it, thanks... I will dry fit it just to be sure I don't need to install the 2080 ti first.
> 
> Keikei -- 1300 watts? That seems a bit much for your rig? I am planning to re-use my 850 Watt Gold Rated Corsair.... I am wondering if that will be sufficient now



Atm it is indeed overkill, but I've planned ahead for potential SLI. Come to find out the Nitro I ordered is backordered, so I gotta wait potentially 2 weeks. Ugh.


----------



## J7SC

keikei said:


> Atm, it is indeed overkill, but i've planned ahead for potential SLI. Come to find out the Nitro i order is backstocked, so i gotta wait potentially 2 weeks. Ugh.


 
^...good idea to leave lots of headroom, especially when cabling in a custom build may not be so easy to change. 

I'm running a 1300w PSU with a TR 2950X @ 4.3 all-core OC (280w), 2x 2080 Ti (760w peak combined), plus dual loops w/ 4x D5s and plenty of fans. The 1300w Antec HPC Platinum has just enough headroom for now, but does have an 'OC-link' for a second HPC to plug in if I decide to upgrade to, and OC, a TR3 3970X.


----------



## Mooncheese

J7SC said:


> ^...good idea to leave lots of headroom, especially when cabling in a custom build may not be so easy to change.
> 
> I'm running a 1300w TR 2950X @ 4.3 all-core oc (280w), 2x 2080 Ti (760w peak combined) plus dual loops w/ 4x D5s and plenty of fans. The 1300 W Antec HPC Platinum has just enough headroom for now, but does have an 'oc-link' for a second HPC to plug in if I decide to upgrade to and oc a TR3 3970X.


I hear a D5 pulls 30w each, so add 120 to 1040 if my math is correct; not counting whatever else is consuming power (fans, RGB, etc.), you're probably close to 1200w. It's my understanding that you don't want to use more than 70% of a PSU's rated output for longevity and efficiency? In your case that would be around 900w. 

For example, I have a 1kw PSU but only a single 2080 Ti, a 180w CPU and 2x D5. So I'm using maybe 60% of the rated output, tops.
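To put the headroom discussion in numbers, here's a rough worst-case tally using the figures from the posts above (the fans/misc line is a pure guess, and real draw is lower with the pumps at 70% and components rarely all peaking together):

```python
# Rough PSU headroom check. Wattages are worst-case/illustrative,
# taken or guessed from the discussion above.
loads = {
    "TR 2950X @ 4.3 all-core": 280,
    "2x RTX 2080 Ti (peak)":   760,
    "4x D5 pumps @ 24w max":    96,
    "fans / misc (guess)":      60,
}
total = sum(loads.values())
psu_w = 1300
print(f"{total} W of {psu_w} W = {total / psu_w:.0%} load")
```

Worst case lands around 92% of the 1300w rating, which is exactly why the OC-link option for a second PSU matters if everything ever peaks at once.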


----------



## J7SC

Nah - D5s are 24w max, and also set to 70%, RGB off. Never ever had this PSU hit OCP in this build. This Antec PSU is highly rated, and I have two spares anyhow (i.e. one for OC-link w/ dual PSUs, but not necessary right now)


----------



## Mooncheese

J7SC said:


> nah - D5s are 24max, and also set to 70%, RGB off. Never ever had this PSU OCP in the build. This Antec PSU is highly rated, and I have two spare anyhow (ie. one for OC-link w/ dual PSUs, but not necessary right now)


True, plus you're not going to have all three components at 100% all of the time, especially with SLI support being what it is today. 

Following up on our previous discussion here, I monitored my temps closely upon exiting a game with full load yesterday that got up to 44-46C on the core; it took 3.5 minutes total to drop from 35 to 30C. Immediately upon exiting the game, the readout (and the graph in MSI AB) showed it was actually at 35C, not 30C; it then took 1 minute to drop to 33C and another 2.5 minutes to drop to 30C, so my water temp is probably somewhere around 33C. 

I spent the bulk of the day looking at options to expand my radiator surface area and concluded, after much research (and the Hardware Labs Black Ice Nemesis 420 GTX being out of stock everywhere except one last white one on Amazon), that there is room for improvement upgrading the EK SE 420 (28mm, 22 FPI) to the EK CE 420 (45mm, 16 FPI). 

I could not, for the life of me, find any data on the 420 SE, so what I did was compare the performance difference between the EK PE 360 and EK SE 360 (38 and 28mm respectively), and the thicker rad outperforms the thinner one by quite a bit at 750 RPM. In fact the SE 360 was rated very poorly by XtremeRigs. 

Anyhow, curious as to what the rated performance difference is between the CE 420 and SE 420, I re-entered all of the information in EK's configurator and holy smokes, we are talking about going from 580w to 707w cooling capacity! (See attachment; I've circled the PE 360, CE 420 and SE 420.) By the way, according to EK, their slim 420 outperforms their PE 360 by quite a bit.

I've already made the measurements and it will fit nicely (single fan); hopefully everything arrives by the end of the week. 

Here's what I'm doing: 

1. Thermal Grizzly Conductonaut on GPU core. 
2. Switching from single DDC to 2x D5 in serial. 
3. Upgrading from SE 420 to CE 420. 
4. Replacing the 2.0mm thermal pad that came with EK's Quantum back-plate that goes over the GPU core with Gelid GC Extreme's thermal pad (thermal conductivity: 12 W/mK).

Hopefully all of the above brings my core temp down; I'm hoping for 7C but optimistic that I may see 10C, as I'm currently seeing 50C, which is way too high for as much rad surface area as I have at present.
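For perspective on what 580w vs 707w buys, here's a back-of-envelope, assuming (and this is an assumption about how EK's configurator rates rads) that the figures are watts dissipated at a fixed 10C water-to-ambient delta, so capacity scales linearly:

```python
# What the 580w -> 707w configurator ratings could mean for water temp,
# under the assumption the ratings are watts at a 10C water-ambient delta.

def water_delta(power_w, rating_w, rated_delta_c=10.0):
    """Water-over-ambient at a given heat load, scaling the rating linearly."""
    return power_w * rated_delta_c / rating_w

se420 = water_delta(340, 580)   # current slim rad, 340w GPU load
ce420 = water_delta(340, 707)   # thicker replacement, same load
print(f"SE 420: +{se420:.1f}C water, CE 420: +{ce420:.1f}C water")
```

On these assumptions the rad swap alone is worth roughly a degree of water temp at 340w; the rest of any improvement would have to come from the liquid metal, the pad, and the flow changes.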


----------



## sultanofswing

Mooncheese said:


> True, plus you're not going to have all three components at 100% all of the time, especially with SLI support being what it is today.
> 
> Following up on our previous discussion here, I monitored my temps closely upon exiting a game with full load yesterday that got up to 44-46C on the core; it took 3.5 minutes total to drop from 35 to 30C. Immediately upon exiting the game, the readout (and the graph in MSI AB) showed it was actually at 35C, not 30C; it then took 1 minute to drop to 33C and another 2.5 minutes to drop to 30C, so my water temp is probably somewhere around 33C.
> 
> I spent the bulk of the day looking at options to expand my radiator surface area and concluded, after much research (and the Hardware Labs Black Ice Nemesis 420 GTX being out of stock everywhere except one last white one on Amazon), that there is room for improvement upgrading the EK SE 420 (28mm, 22 FPI) to the EK CE 420 (45mm, 16 FPI).
> 
> I could not, for the life of me, find any data on the 420 SE, so what I did was compare the performance difference between the EK PE 360 and EK SE 360 (38 and 28mm respectively), and the thicker rad outperforms the thinner one by quite a bit at 750 RPM. In fact the SE 360 was rated very poorly by XtremeRigs.
> 
> Anyhow, curious as to what the rated performance difference is between the CE 420 and SE 420, I re-entered all of the information in EK's configurator and holy smokes, we are talking about going from 580w to 707w cooling capacity! (See attachment; I've circled the PE 360, CE 420 and SE 420.) By the way, according to EK, their slim 420 outperforms their PE 360 by quite a bit.
> 
> I've already made the measurements and it will fit nicely (single fan); hopefully everything arrives by the end of the week.
> 
> Here's what I'm doing:
> 
> 1. Thermal Grizzly Conductonaut on GPU core.
> 2. Switching from single DDC to 2x D5 in serial.
> 3. Upgrading from SE 420 to CE 420.
> 4. Replacing the 2.0mm thermal pad that came with EK's Quantum back-plate that goes over the GPU core with Gelid GC Extreme's thermal pad (thermal conductivity: 12 W/mK).
> 
> Hopefully all of the above brings my core temp down; I'm hoping for 7C but optimistic that I may see 10C, as I'm currently seeing 50C, which is way too high for as much rad surface area as I have at present.


My advice is to fit as many radiators as you can in your case. After about 3 good 360 rads you will not gain any better component temps, but you will be able to keep the water temp lower for that much longer.

A good setup will put a 2080 Ti in the 38-40C range, and if you can get ambient around 20-21C you can top out around 36C on the 2080 Ti.

When I set up a watercooling build I do not try to get component temps as cool as possible; instead I plan my builds with water-to-ambient temp as the main priority. Getting and maintaining the water temp as close to ambient as possible is the #1 key; the component temps will follow suit.

This is one of the main reasons I won't run a distro plate. The space the distro plate takes up can be used for a radiator.
Most of those fancy builds I see on Reddit or in Facebook groups with crazy distros and a single rad on a 2080 Ti system, or 2 rads on an SLI 2080 Ti system, kill me.
Talked to a guy the other day that did a build and was showing off how good it looked. It did look good, but when asked about his temps he said his 2080 Ti ran around 58-60C and he was perfectly happy with that.
Not me, not this guy. If I'm gonna water cool, performance comes first.
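The diminishing-returns point falls out directly if water-over-ambient scales as power over total rad capacity (illustrative per-rad figure of ~58 W/°C, i.e. roughly 580w at a 10C delta):

```python
# Diminishing returns from stacking radiators, assuming water-over-ambient
# scales as power / (rads * per-rad capacity). Numbers are illustrative.

def water_over_ambient(power_w, n_rads, per_rad_w_per_c=58.0):
    """Steady-state water temp above ambient for n identical rads."""
    return power_w / (n_rads * per_rad_w_per_c)

for n in range(1, 6):
    print(n, "rads:", round(water_over_ambient(340, n), 2), "C over ambient")
```

The first extra rad buys almost 3C of water temp; going from 3 rads to 4 buys under half a degree, which is the "after about 3 good rads" point in practice.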


----------



## J7SC

...yeah, controlling temps even over longer runs is key. Below (posted before) is the temp graph of dual  2080 Tis (>extra heat) on full tilt @ 3DM Port Royal. 1080/55 rad space for the GPU loop keeps temps at bay


----------



## sultanofswing

J7SC said:


> ...yeah, controlling temps even over longer runs is key. Below (posted before) is the temp graph of dual  2080 Tis (>extra heat) on full tilt @ 3DM Port Royal. 1080/55 rad space for the GPU loop keep temps at bay


Yea 2 cards at 300+ watts each is a lot of heat to dissipate.


----------



## J7SC

sultanofswing said:


> Yea 2 cards at 300+ watts each is a lot of heat to dissipate.


 
...GPUs are just under 760w combined 

As to distro plates, as a user of a TT Core P5, I'm potentially interested in the custom-fitted one (with a D5) that TT introduced for sale for the Core P5, for a future build (first pic)...minus the RGB. 

The second pic is a prototype TT showed @ CES whereby the WHOLE back is a distro plate, including mobo mounts - now that would allow for plenty of rad space. I think there are other firms already putting out such designs, such as Singularity Computers' Spectre case, but at a much higher price than TT is expected to charge. Obviously, one would need to read some reviews first re: sturdiness, flow etc.


----------



## sultanofswing

J7SC said:


> ...GPUs are just under 760w combined
> 
> As to distro plates, as a user of a TT Core P5, I'm potentially interested in a custom-fitted one for the Core P5 with a D5 TT introduced for sale for a future build (first pic)...minus the RGB.
> 
> The second pic is a prototype TT showed @ CES whereby the WHOLE back is a distro plate, including mobo mounts - now that would allow for plenty of rad space  I think there are other firms already putting out such designs, such as the Singularity Computers' Spectre Case, but at a much higher price than TT is expected to charge. Obviously, one would need to read some reviews first re. sturdiness, flow etc.


My biggest issue with the Distros is every in/out that they have is basically a 90 degree fitting.


----------



## J7SC

sultanofswing said:


> My biggest issue with the Distros is every in/out that they have is basically a 90 degree fitting.


 
...could be a market opportunity for whoever comes up with a different in/out angle for a distro, i.e. via a raised acrylic block with side in/out. Sort of like this unusual GPU block in/out arrangement (per screen grab from GGF Events vid)


----------



## Mooncheese

sultanofswing said:


> My advice is to fit as many radiators as you can in your case. After about 3 good 360 rads you will not gain any better component temps, but you will be able to keep the water temp lower for that much longer.
> 
> A good setup will put a 2080 Ti in the 38-40C range, and if you can get ambient around 20-21C you can top out around 36C on the 2080 Ti.
> 
> When I set up a watercooling build I do not try to get component temps as cool as possible; instead I plan my builds with water-to-ambient temp as the main priority. Getting and maintaining the water temp as close to ambient as possible is the #1 key; the component temps will follow suit.
> 
> This is one of the main reasons I won't run a distro plate. The space the distro plate takes up can be used for a radiator.
> Most of those fancy builds I see on Reddit or in Facebook groups with crazy distros and a single rad on a 2080 Ti system, or 2 rads on an SLI 2080 Ti system, kill me.
> Talked to a guy the other day that did a build and was showing off how good it looked. It did look good, but when asked about his temps he said his 2080 Ti ran around 58-60C and he was perfectly happy with that.
> Not me, not this guy. If I'm gonna water cool, performance comes first.


I could technically fit a CE 420 in the front as well, but that would be another $200 counting having to buy 3x 140mm fans. Right now it's a CE 420 and a PE 360. According to EK's configurator, a single 707w CE 420 is overkill for the estimated 530w they give for a 2080 Ti + 8700K (they state 29C water temp with a single CE 420; note that I selected "silent overclocking"). I could technically put a 360 radiator where the distro plate is in the Thermaltake View 71, but honestly, I love the way this looks.

50-60C on the GPU core with water-cooling is unacceptable. Right now I'm seeing ~45-ish at 20-21C (~70F) ambient sustained load, accounting for heat soak, and upwards of 50C when it's a little warmer at ~23C (~73F). I'd like to bring that down to 40 with the cooler ambient and 45 when it's warmer, which I believe is fully within reach with the changes enumerated in my previous post.

I'm excited to try the liquid metal; many if not most are seeing massive gains (7C) with air coolers, so I don't see how I won't see at least 5C from that alone. Add the additional ~100w of cooling capacity (707 - 580 = 127, minus ~25w for the additional D5), the increased flow rate and head pressure of the additional D5, and the better heat transfer to the back-plate with the more thermally conductive Gelid GC Extreme thermal pad over the GPU core, and I'm hopeful this thing will be in the high 30's at 21C and low 40's at 23C, to be honest (-7C).

I'm expecting 5C minimum, possibly 7-10C with the improvements. 



J7SC said:


> ...yeah, controlling temps even over longer runs is key. Below (posted before) is the temp graph of dual  2080 Tis (>extra heat) on full tilt @ 3DM Port Royal. 1080/55 rad space for the GPU loop keeps temps at bay


That chart sucks! It's almost as bad as Nvidia's Turing marketing hype graph! I really can't tell the GPUs' temps throughout that run; maybe include the MSI AB graph along with it next time!


----------



## Medizinmann

Mooncheese said:


> True, plus you're not going to have all three components at 100% all of the time, especially with SLI support being what it is today.
> 
> Following up on our previous discussion here, I monitored my temps closely upon exiting a game with full load yesterday that got up to 44-46C on the core; it took 3.5 minutes total to drop from 35 to 30C. Immediately upon exiting the game, the readout (and the graph in MSI AB) showed it was actually at 35C, not 30C; it then took 1 minute to drop to 33C and another 2.5 minutes to drop to 30C, so my water temp is probably somewhere around 33C.
> 
> I spent the bulk of the day looking at options to expand my radiator surface area and concluded, after much research (and the Hardware Labs Black Ice Nemesis 420 GTX being out of stock everywhere except one last white one on Amazon), that there is room for improvement upgrading the EK SE 420 (28mm, 22 FPI) to the EK CE 420 (45mm, 16 FPI).
> 
> I could not, for the life of me, find any data on the 420 SE, so what I did was compare the performance difference between the EK PE 360 and EK SE 360 (38 and 28mm respectively), and the thicker rad outperforms the thinner one by quite a bit at 750 RPM. In fact the SE 360 was rated very poorly by XtremeRigs.
> 
> Anyhow, curious as to what the rated performance difference is between the CE 420 and SE 420, I re-entered all of the information in EK's configurator and holy smokes, we are talking about going from 580w to 707w cooling capacity! (See attachment; I've circled the PE 360, CE 420 and SE 420.) By the way, according to EK, their slim 420 outperforms their PE 360 by quite a bit.
> 
> I've already made the measurements and it will fit nicely (single fan); hopefully everything arrives by the end of the week.
> 
> Here's what I'm doing:
> 
> 1. Thermal Grizzly Conductonaut on GPU core.
> 2. Switching from single DDC to 2x D5 in serial.
> 3. Upgrading from SE 420 to CE 420.
> 4. Replacing the 2.0mm thermal pad that came with EK's Quantum back-plate that goes over the GPU core with Gelid GC Extreme's thermal pad (thermal conductivity: 12 W/mK).
> 
> Hopefully all of the above brings my core temp down; I'm hoping for 7C but optimistic that I may see 10C, as I'm currently seeing 50C, which is way too high for as much rad surface area as I have at present.


IMHO you should also think about a sensor for the water temp - I bound my fan speeds to the water temp.

As for me, I usually see GPU temps 1°C above water temp at idle on the Windows desktop. Gaming, it goes up to 52°C with water temp going up to 48°C - with all fans blazing at full speed the GPU temp doesn't exceed 43°C while the water in the loop never goes over 38°C – right now with ambient at 22-23°C…

I am using LM as TIM (the Alphacool variant, as it was available faster at the time of building and approved for the CPU and GPU block) with an Alphacool GPX Pro + Alphacool Eisbaer and 240+360+120 rads – with Noctuas on all of them in push&pull config – I might upgrade the 240 to a 360 as this would fit…

On the Windows desktop it is silent, and gaming is okay (I don't hear the fans with headphones), but fans at full speed is not acceptable full time…

Greetings,
Medizinmann


----------



## J7SC

Mooncheese said:


> (...)
> That chart sucks! It's almost as bad as Nvidia's Turing marketing hype graph! I really can't tell the GPUs' temps throughout that run; maybe include the MSI AB graph along with it next time!


 
That's the standard 3DM 'detail - temp' chart from a HoF run, with temp scale on the Y axis - I find it quite readable and informative


----------



## long2905

Hi everyone, just checking if anyone here has a reference PCB and has attempted to put an FE cooler on it. Did you have trouble with one fan not turning? I have the FE cooler in transit right now and want to prepare for it as best I can.


----------



## Mooncheese

Medizinmann said:


> IMHO - you should also think about a sensor for the water temps - I bound my fan speeds to the water temp.
> 
> As for me I usually see GPU temps 1°C above water temp in idle and with Windows Desktop. Gaming it goes up to 52°C with water temp going up to 48°C - with all fans blazing at full speed GPU temp doesn't exceed 43°C while water in the loop never goes over 38°C – right now with ambient 22-23°C…
> 
> I am using LM(using the Alphacool variant as it was faster available at the time of building and approved with CPU and GPU-block) as TIM and Alphacool GPX Pro + Alphacool Eisbaer with 240+360+120 rads – with Noctuas on all of them in push&pull config – might upgrade the 240 to a 360 as this would fit…
> 
> With Windows Desktop it is silent gaming is okay (don’t hear the fans with headphones) but fans at full speed is not acceptable full time…
> 
> Greetings,
> Medizinmann


Yes, that's pretty much what I stated earlier: water temp can be deduced fairly accurately with no load on the GPU and no heat in the system. For my system with the current slim 420, the GPU idles at 26-27C @ 21C (~70F) ambient and 30-31C @ 23C (~73F) ambient. 

When I exit a game that is inducing full load and there is adequate heat soak (i.e. 45-47C), the GPU core is already at 35C within a few seconds of hitting Alt+F4 and looking at my readout in the task-bar. I then pulled up the MSI AB graph and timed it, and it took 1 minute to drop from 35 to 33C and another 2.5 minutes to drop from 33 to 30C. I presume that this isn't water temp per se, but that the GPU block has accumulated heat soak and is what is actually cooling, as it has a good deal of copper mass to it, and the flow rate with the distribution plate, the 15 or so 90-degree bends and the single DDC is less than ideal. I know flow rate is part of the problem; the question is how much more is still to be had with more flow. I was actually seeing upwards of 47-49C @ 21C ambient and 50-55C @ 23C ambient with the DDC at 55% RPM! Increasing it to 100% RPM dropped the core temp on the GPU by 5C, but it's got a mighty buzz to it that rivals the refrigerator in the kitchen (it's as though they are communicating with each other now, plotting an escape!). 

I'm anticipating a drop of 5C at bare minimum, and I will be surprised if it doesn't drop 7C. We have about the same amount of radiator surface area, but in my opinion 140mm fans and rads are the way to go as they can deliver more airflow at a lower noise profile (a 140mm fan at 60% RPM ≈ a 120mm fan at 100% RPM). I'm contemplating replacing the PE 360 in the front of the case with the SE 420, as it has a higher wattage dissipation rating from EK (581w vs 500w) and the 140mm fans would move more air at lower RPM. It's just that I have my rads set up pulling air out of the case, and the front of the TT View 71 has a shroud that may occlude the 140mm fans; I'm worried that heat would accumulate in the uppermost fan/rad area and be trapped there (see image). 

https://community.thermaltake.com/uploads/monthly_2019_02/6.jpg.27c46975068ff8cf516e4e18bc44303d.jpg

In fact even the mounting bracket would occlude the rad: 





And then I would need to buy 3 more 140mm fans at like $20-25 each, and this has already become more expensive than I would like. I will probably be done with it for a good while after this; the only thing I may do is upgrade the chipset next year when DDR5 hits, as the 8700K will be going on 4 years of age and probably in need of upgrading if I go from the 2080 Ti to a 3080 Ti, which at this point I may not even do, seeing as how I'm at 90 FPS in Metro Exodus with RT on, maxed out, and 80 FPS in Shadow of the Tomb Raider, also maxed out with RT on, at 3440x1440. I may be good here for a few years until all of the bugs and problems of the 3080 Ti are sorted (e.g. the high failure rate of the GDDR6 memory on the 2080 Ti and other problems). 

I'm also thinking about using liquid metal on the IHS of my delidded 8700K, but am worried that because the gallium is more on the free-flowing side, it may escape the cohesion area between the IHS and the monoblock and drip down below it. Are you using liquid metal on your CPU? 

I'm going to have it all apart for the radiator swap, might as well go in and do the CPU while I'm at it.
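As for the "140mm at 60% ≈ 120mm at 100%" rule of thumb above, it roughly checks out under the fan affinity laws, assuming (a real simplification) geometrically similar fans with the same maximum RPM, where airflow scales with RPM times diameter cubed:

```python
# Fan-affinity sanity check of the 140mm-vs-120mm claim.
# Assumes geometrically similar fans with equal max RPM (a simplification).

def rel_airflow(diameter_mm, rpm_frac):
    """Airflow relative to a 120mm fan at full speed: ~ rpm * diameter^3."""
    return rpm_frac * (diameter_mm / 120.0) ** 3

ratio = rel_airflow(140, 0.60) / rel_airflow(120, 1.00)
print(f"140mm @ 60% moves ~{ratio:.0%} of a 120mm @ 100%")
```

In practice 140mm fans usually top out at lower RPM than 120mm ones, so the real-world ratio varies, but the cube-law advantage of the larger rotor is why the rule of thumb holds at all.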



J7SC said:


> That's the standard 3DM 'detail - temp' chart from a HoF run, with temp scale on the Y axis - I find it quite readable and informative


I thought that Ray Tracing didn't work with SLI?


----------



## J7SC

Mooncheese said:


> (...) I thought that Ray Tracing didn't work with SLI?


 
...it does, as long as the app devs did their part.


----------



## z390e

Medizinmann said:


> fans at full speed is not acceptable full time…


My wife asked me to cool down a PC she had running that had bad fans. I threw in 3x Noctua 3000 RPM fans and she made me take them out the moment she heard them. From the door, outside, before she came in. LMAO.


----------



## Medizinmann

Mooncheese said:


> Yes, that's pretty much what I stated earlier: water temp can be deduced fairly accurately with no load on the GPU and no heat in the system. For my system with the current slim 420, the GPU idles at 26-27C @ 21C (~70F) ambient and 30-31C @ 23C (~73F) ambient.


Yeah, you are right - as I said, GPU idle temp is like 1°C above the water in the loop - but I need the extra sensor because my mobo won't work properly with the GPU temp as input for the fan controls. It works for a moment, but after the next reboot the fan speeds are stuck at some random speed like 60-80% - I don't know why. It works well with the extra sensor, and it was only like 7€...



> When I exit a game that is inducing full load and there is adequate heat soak (i.e. 45-47C), the GPU core is already at 35C within a few seconds of hitting Alt+F4 and looking at my readout in the task-bar. I then pulled up the MSI AB graph and timed it: it took 1 minute to drop from 35 to 33C and another 2.5 minutes to drop from 33 to 30C. I presume this isn't water temp per se, but that the GPU block itself has accumulated heat soak and is what is actually cooling down, as it has a good deal of copper mass to it, and the flow rate with the distribution plate, the 15 or so 90 degree bends and the single DDC is less than ideal. I know flow rate is part of the problem; the question is how much improvement remains to be had with more flow. I actually was seeing upwards of 47-49C @ 21C ambient and 50-55C at 23C ambient with the DDC at 55% RPM! Increasing it to 100% RPM actually dropped the GPU core temp by 5C, but it's got a mighty buzz to it that rivals the refrigerator in the kitchen (it's as though they are communicating with each other now, plotting an escape!).
> 
> I'm anticipating a drop of 5C at bare minimum; I will be surprised if it doesn't drop 7C. We have about the same amount of radiator surface area, but in my opinion 140mm fans and rads are the way to go, as they can deliver more airflow at a lower noise profile (a 140mm fan at 60% RPM ≈ a 120mm fan at 100% RPM). I'm contemplating replacing the PE 360 in the front of the case with the SE 420, as it has a higher dissipation rating from EK (581W vs 500W) and the 140mm fans would move more air at lower RPM. It's just that I have my rads set up pulling air out of the case, and the front of the TT View 71 has a shroud that may occlude the 140mm fans, so I'm worried that heat would accumulate in the uppermost fan/rad area and be trapped there (see image).
> 
> https://community.thermaltake.com/uploads/monthly_2019_02/6.jpg.27c46975068ff8cf516e4e18bc44303d.jpg
> 
> In fact even the mounting bracket would occlude the rad:
> 
> And then I would need to buy 3 more 140mm fans at $20-25 each, and this has already become more expensive than I would like. I will probably be done with it for a good while after this; the only thing I may do later is upgrade the platform next year when DDR5 hits, as the 8700K will be going on 4 years of age and probably in need of upgrading if I move from the 2080 Ti to a 3080 Ti. At this point I may not even do that, seeing as I'm at 90 FPS in Metro Exodus with RT maxed out and 80 FPS in Shadow of the Tomb Raider, also maxed out with RT on, at 3440x1440. I may be good here for a few years until all of the bugs and problems of the 3080 Ti are sorted out (e.g. the high failure rate of GDDR6 memory on the 2080 Ti, among other problems).


Don't talk about cost - I used 14 Noctua NF-A12x25 PWM at $29 each, plus 1x 140mm Noctua case fan and 2x 60mm Noctuas for RAM cooling - so the fans alone add up to $470. Add the CPU block (with 360mm rad), GPU cooler (with 240mm rad), an extra 120mm rad, extra quick-connects, coolant, conformal coat and LM - altogether a little over $1000 for my cooling solution alone, and it isn't even a fancy custom water loop with RGB or something... :blinksmil: :kookoo: :thumb:

The benefit is a silent desktop for my usual workload with great temps...and I hope to reuse many of the components for years to come - hopefully...:thumb:



> I'm also thinking about using liquid metal on the IHS of my delidded 8700K, but am worried that, because gallium is more on the free-flowing side, it may escape the contact area between the IHS and the monoblock and drip down below it. Are you using liquid metal on your CPU?


Yes, I do - but I didn't delid it, as tests by Der8auer showed not much of a benefit for the 3900X. Direct die would be a thing, but I am not that adventurous... Cooling for the GPU is in fact kind of direct die, though - and installation of the GPU block was adventure enough for me. Quite a shock when the screen was black at first boot - one thermal pad was the wrong height and therefore there was not enough contact... my bad...

I just used LM instead of other TIM - and I used a lot of conformal coat all around the CPU area - the same on the GPU, btw.



> I'm going to have it all apart for the radiator swap, might as well go in and do the CPU while I'm at it.


:thumb:

I am thinking of upgrading the 240mm rad to a 360mm - as that would fit - or I could upgrade the 360mm to a 420mm and reuse the 360mm in the top of the case... which would bring this setup to the equivalent of 2x 420mm rads - but that's it, at least in this case.

Greetings,
Medizinmann


----------



## Mooncheese

Medizinmann said:


> Yeah, you are right - as I said, GPU idle temp is like 1°C above the water in the loop - but I need the extra sensor because my mobo won't work properly with the GPU temp as input for the fan controls. It works for a moment, but after the next reboot the fan speeds are stuck at some random speed like 60-80% - I don't know why. It works well with the extra sensor, and it was only like 7€...
> 
> 
> Don't talk about cost - I used 14 Noctua NF-A12x25 PWM at $29 each, plus 1x 140mm Noctua case fan and 2x 60mm Noctuas for RAM cooling - so the fans alone add up to $470. Add the CPU block (with 360mm rad), GPU cooler (with 240mm rad), an extra 120mm rad, extra quick-connects, coolant, conformal coat and LM - altogether a little over $1000 for my cooling solution alone, and it isn't even a fancy custom water loop with RGB or something... :blinksmil: :kookoo: :thumb:
> 
> The benefit is a silent desktop for my usual workload with great temps...and I hope to reuse many of the components for years to come - hopefully...:thumb:
> 
> 
> Yes, I do - but I didn't delid it, as tests by Der8auer showed not much of a benefit for the 3900X. Direct die would be a thing, but I am not that adventurous... Cooling for the GPU is in fact kind of direct die, though - and installation of the GPU block was adventure enough for me. Quite a shock when the screen was black at first boot - one thermal pad was the wrong height and therefore there was not enough contact... my bad...
> 
> I just used LM instead of other TIM - and I used a lot of conformal coat all around the CPU area - the same on the GPU, btw.
> 
> :thumb:
> 
> I am thinking of upgrading the 240mm rad to a 360mm - as that would fit - or I could upgrade the 360mm to a 420mm and reuse the 360mm in the top of the case... which would bring this setup to the equivalent of 2x 420mm rads - but that's it, at least in this case.
> 
> Greetings,
> Medizinmann


You should upgrade! Why not? More radiator surface area is always better.


----------



## Mooncheese

J7SC said:


> ...GPUs are just under 760w combined
> 
> As to distro plates, as a user of a TT Core P5, I'm potentially interested in a custom-fitted one for the Core P5 with a D5 TT introduced for sale for a future build (first pic)...minus the RGB.
> 
> The second pic is a prototype TT showed @ CES whereby the WHOLE back is a distro plate, including mobo mounts - now that would allow for plenty of rad space  I think there are other firms already putting out such designs, such as the Singularity Computers' Spectre Case, but at a much higher price than TT is expected to charge. Obviously, one would need to read some reviews first re. sturdiness, flow etc.


Wow, the P5 is nice. I love the open-air concept, but if I take the tempered glass off my Thermaltake View 71 the noise goes up noticeably (temps come down though: I was trying to see how a 420 rad would fit in the front and had the front glass panel off today for 5 minutes or so, and the GPU core temp while mining crypto at 165W came down from 36 to 33C).



sultanofswing said:


> My biggest issue with the Distros is every in/out that they have is basically a 90 degree fitting.


I did some digging into how impactful 90 degree fittings are in terms of restricting flow, and I read somewhere that each 90 degree fitting is the equivalent of losing 1 inch / 25mm of head pressure. I just counted and I have 15 90-degree bends if we count the inlet ports on the distribution plate. So 15 inches, or roughly 375mm, of head pressure lost to this distro plate and hard tubing versus the soft tubing. That's not a whole lot - it's something to account for, but a DDC is rated at around 5.2m of head and a single D5 at around 3.9m. We are talking about taking a little over a third of a meter off of that. If you're running multiple D5s in serial, the loss is inconsequential (2x D5 is ~7.5m of head at 100% RPM). Speaking of which, my EK D5 Serial Revo and an additional D5 came in today; this thing is very nice and I'm looking forward to getting it in. Hopefully my Conductonaut, conformal coating and CE 420 rad come in by Friday.
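To put numbers on that rule of thumb, here's a quick back-of-envelope sketch. The "1 inch of head per 90-degree fitting" figure is the forum rule of thumb quoted above, not a measured value, and the pump head numbers are the max-head specs mentioned in the post:

```python
# Rough head-loss estimate for 90-degree fittings.
# Rule of thumb (from the post above): each 90-degree bend costs
# roughly 1 inch (0.0254 m) of pump head.

INCH_M = 0.0254  # metres per inch

def remaining_head(pump_head_m: float, n_bends: int,
                   loss_per_bend_m: float = INCH_M) -> float:
    """Pump head left after subtracting the estimated fitting losses."""
    return pump_head_m - n_bends * loss_per_bend_m

fittings = 15                 # 90-degree bends counted in the loop
loss = fittings * INCH_M      # ~0.38 m of head lost
print(f"estimated loss: {loss:.2f} m")
print(f"single DDC   (5.2 m max head): {remaining_head(5.2, fittings):.2f} m left")
print(f"single D5    (3.9 m max head): {remaining_head(3.9, fittings):.2f} m left")
print(f"2x D5 serial (~7.5 m head):    {remaining_head(7.5, fittings):.2f} m left")
```

This ignores the straight-tube and block restrictions, so treat it as an order-of-magnitude estimate only: the fittings cost well under 10% of a single pump's rated head.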







z390e said:


> My wife asked me to cool down a PC she had running that had bad fans. I threw in 3x Noctua 3000 RPM fans and she made me take them out the moment she heard them. From the door, outside, before she even came in. LMAO.


Apparently it's not just water cooling enthusiasts who are obsessed with silence


----------



## sultanofswing

@Mooncheese
I wish you had a water temp sensor. My card idles right at 3C over ambient, and that is from a fresh boot with the PC having sat asleep all day.
The trend continues no matter what the water temp is: the core is always 3C over water temp.


----------



## shrapz_nz

Hi all, hoping someone will have a magic solution for me here, as I am out of options from what I can see...

I have a Gigabyte 2080Ti Windforce (The non OC, crappy version) https://www.techpowerup.com/gpu-specs/gigabyte-rtx-2080-ti-windforce.b6415
I purchased this second hand about 6 months ago, has been working fine since.

Recently I noticed my FPS was getting low in games; checking the speeds in Tweak II (and Afterburner), I found that the card is not boosting and just sits at 1350MHz.
Any attempt to change settings in Tweak II results in the PC becoming unusable or crashing completely.
I have tried various suggested fixes but to no avail; hoping someone can point out something I may have missed. I have no receipt / don't know where it was purchased, so RMA is not possible.

*Attempted fixes* 

- Download original bios from card and reflash (tried this multiple times)
- Download same bios from online and reflash to ensure I didn't have a buggy bios
- Attempted to flash a bios from another card with the modded nvflash, however, always getting gpu mismatch. From reading the first post, looks like this is no longer possible
- Full reinstall of windows (more than once)
- Unplug the card from all power sources and leave it unplugged overnight, then reinstall Windows. This did actually solve the problem and I was able to set the power limit to 112% and benchmark without fault; on the next reboot it was back to 1350MHz
- Tried all sorts of settings in the NVControl panel
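For anyone retracing the reflash attempts above, the usual nvflash sequence looks roughly like the below. This is a dry-run sketch only: it prints the commands rather than executing them. `nvflash64`, `--save`, `--protectoff` and the `-6` mismatch-override flag are the commonly documented usage, and the `.rom` file names are placeholders; verify everything against your own copy of the tool (`nvflash64 --help`) before flashing anything, since a bad flash carries a real brick risk.

```python
# Dry-run sketch of a typical vBIOS backup-and-reflash sequence.
# Commands are printed, NOT executed; the .rom names are placeholders.

steps = [
    ["nvflash64", "--save", "backup.rom"],   # always back up the current vBIOS first
    ["nvflash64", "--protectoff"],           # disable EEPROM write protection
    ["nvflash64", "-6", "original.rom"],     # -6: override PCI subsystem-ID mismatch
]

for cmd in steps:
    print(" ".join(cmd))
```

Note that on cards with the newer encrypted firmware (as discussed later in the thread), cross-flashing is blocked regardless of flags.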

The card itself has an EK waterblock in a custom loop; it idles at 34°C and sits at 42°C under load (at 1350MHz), and has never seen a temperature over 50°C since I've had it.

Any ideas before I sell it for scrap?


----------



## Medizinmann

Mooncheese said:


> You should upgrade! Why not? More radiator surface area is always better.


Yeah - I might if I have time on my hands...:thumb:

Question is how much money I want to throw at it. Changing the 360mm in the front to a 420mm (and reusing the 360mm in the top to replace the 240mm) would be the biggest jump, but also quite expensive again - like $100 for the rad and then 4x $25 for new fans, or at least $30 for some good adapters, which isn't a 100% solution... and so on. But in the end I might do it some day... :thinking:

And my wife might kill me in the process if I tear the whole thing apart again - "for no good reason" - other than adding 180mm of rad space... :blinksmil: :axesmiley:

Greetings,
Medizinmann


----------



## Medizinmann

z390e said:


> My wife asked me to cool down a PC she had running that had bad fans. I threw in 3x Noctua 3000 RPM fans and she made me take them out the moment she heard them. From the door, outside, before she even came in. LMAO.


Yeah - my wife is also quite sensitive to noise from technical appliances, e.g. computers.
Or to put it short – a computer mustn’t emit any noise…

As we share an office space at home (the room is 5m x 5m), the goal was a desktop that is inaudible under normal workloads (it's a workstation for medical image reformatting and diagnostics) - so I used Noctua NF-A12x25 PWM fans and run them on a low-noise profile. High speeds only kick in when water-loop temps exceed 45°C - which never happens with my normal workload - and gaming isn't a normal workload during working hours…

Greetings,
Medizinmann


----------



## Medizinmann

Mooncheese said:


> Wow, the P5 is nice. I love the open-air concept, but if I take the tempered glass off my Thermaltake View 71 the noise goes up noticeably (temps come down though: I was trying to see how a 420 rad would fit in the front and had the front glass panel off today for 5 minutes or so, and the GPU core temp while mining crypto at 165W came down from 36 to 33C).


I decided on the high-airflow config (top and front panels off the case) over the "low noise" config (top and front panels installed), as I am able to run the fans at much lower speed - and therefore at even lower noise levels than in the low-noise config - while having much more headroom when I need extra cooling in "extreme" situations like gaming, when the whole setup pulls up to 800W constantly and needs to dissipate that into the room…



> Apparently it's not just water cooling enthusiasts who are obsessed with silence


Well most people with healthy ears are...

Greetings,
Medizinmann


----------



## J7SC

shrapz_nz said:


> Hi All, hoping someone will have a magic solution for me here as I am out of options from what I can see..
> 
> I have a Gigabyte 2080Ti Windforce (The non OC, crappy version) https://www.techpowerup.com/gpu-specs/gigabyte-rtx-2080-ti-windforce.b6415
> I purchased this second hand about 6 months ago, has been working fine since.
> 
> Recently I noticed my FPS was getting low in games; checking the speeds in Tweak II (and Afterburner), I found that the card is not boosting and just sits at 1350MHz.
> Any attempt to change settings in Tweak II results in the PC becoming unusable or crashing completely.
> I have tried various suggested fixes but to no avail; hoping someone can point out something I may have missed. I have no receipt / don't know where it was purchased, so RMA is not possible.
> 
> *Attempted fixes*
> 
> - Download original bios from card and reflash (tried this multiple times)
> - Download same bios from online and reflash to ensure I didn't have a buggy bios
> - Attempted to flash a bios from another card with the modded nvflash, however, always getting gpu mismatch. From reading the first post, looks like this is no longer possible
> - Full reinstall of windows (more than once)
> - Unplug the card from all power sources and leave it unplugged overnight, then reinstall Windows. This did actually solve the problem and I was able to set the power limit to 112% and benchmark without fault; on the next reboot it was back to 1350MHz
> - Tried all sorts of settings in the NVControl panel
> 
> The card itself has an EK waterblock in a custom loop; it idles at 34°C and sits at 42°C under load (at 1350MHz), and has never seen a temperature over 50°C since I've had it.
> 
> Any ideas before I sell it for scrap?


 
This is a known problem with some of the early, mostly reference-PCB-based cards. Likely the '1350' is the result of a detected short... it can sometimes be caused by too much or uneven cooler mount pressure - at the least it's worth taking the water block off, checking for obvious anomalies on the PCB, and carefully remounting. On some non-reference PCB cards a disconnected fan header can also cause the '1350', but that's unlikely to be the case here. All that said, most of the '1350'-issue cards were RMAed.

https://linustechtips.com/main/topic/1079253-rtx-2080ti-not-staying-at-boost/


----------



## Mooncheese

sultanofswing said:


> @Mooncheese
> I wish you had a water temp sensor. My card idles right at 3C over ambient, and that is from a fresh boot with the PC having sat asleep all day.
> The trend continues no matter what the water temp is: the core is always 3C over water temp.


I shut down the PC today, and when I returned and turned it back on the GPU was at 25C with 68F (20C) ambient. Now it's at 30C @ 72F (22C) ambient after being on for a few hours; presumably the PSU and DDC have warmed up the air a bit inside the case.

Speaking of which, I wanted to get everyone's opinion on what would be more optimal for temps. Right now I have the rads set up as exhaust and the rear 140mm fan as intake (so as to provide some airflow over the GPU backplate and RAM). If I were to turn the fans around I could actually replace the PE 360 in the front with the CE 420 (707W vs 500W rated) and keep the SE 420 in the ceiling as is (581W vs the 500W of the PE 360), but I'm afraid of dumping that heat right into the case and onto the distribution plate, all of the plumbing, the memory, the GPU backplate, etc. I wanted to run this by everyone to see what you think.
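Just comparing the rated numbers, the swap buys roughly 200W of headroom on paper. A quick sketch, using the EK dissipation ratings quoted above (marketing ratings, not measurements, so treat the totals as relative rather than absolute):

```python
# Back-of-envelope: total rated dissipation for the two radiator layouts,
# using the EK wattage ratings quoted in the post above.

current = {"PE 360 (front)": 500, "SE 420 (top)": 581}
proposed = {"CE 420 (front)": 707, "SE 420 (top)": 581}

cur_total = sum(current.values())    # 1081 W rated
new_total = sum(proposed.values())   # 1288 W rated
print(f"current:  {cur_total} W rated")
print(f"proposed: {new_total} W rated")
print(f"gain:     {new_total - cur_total} W of rated headroom")
```

Whether intake vs exhaust orientation eats into that gain is the real question, since those ratings assume cool ambient air feeding the rads.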



Medizinmann said:


> Yeah - my wife is also quite sensitive to noise from technical appliances, e.g. computers.
> Or to put it short – a computer mustn’t emit any noise…
> 
> As we share an office space at home (the room is 5m x 5m), the goal was a desktop that is inaudible under normal workloads (it's a workstation for medical image reformatting and diagnostics) - so I used Noctua NF-A12x25 PWM fans and run them on a low-noise profile. High speeds only kick in when water-loop temps exceed 45°C - which never happens with my normal workload - and gaming isn't a normal workload during working hours…
> 
> Greetings,
> Medizinmann


45C water temp? Good lord, what are you cooling? 



J7SC said:


> This is a known problem with some of the early, mostly reference-PCB-based cards. Likely the '1350' is the result of a detected short... it can sometimes be caused by too much or uneven cooler mount pressure - at the least it's worth taking the water block off, checking for obvious anomalies on the PCB, and carefully remounting. On some non-reference PCB cards a disconnected fan header can also cause the '1350', but that's unlikely to be the case here. All that said, most of the '1350'-issue cards were RMAed.
> 
> https://linustechtips.com/main/topic/1079253-rtx-2080ti-not-staying-at-boost/
> 
> https://www.youtube.com/watch?v=A0dRm5DqHWQ


That sucks! Sorry to hear about it. Having an expensive component such as 2080 Ti die on you is very nerve wracking.


----------



## J7SC

Mooncheese said:


> (...)
> 
> That sucks! *Sorry to hear about it.* Having an expensive component such as 2080 Ti die on you is very nerve wracking.


 
...yeah, but I did not have that issue - my post was a response to shrapz_nz post / question.


----------



## sultanofswing

Mooncheese said:


> I shut down the PC today, and when I returned and turned it back on the GPU was at 25C with 68F (20C) ambient. Now it's at 30C @ 72F (22C) ambient after being on for a few hours; presumably the PSU and DDC have warmed up the air a bit inside the case.
> 
> Speaking of which, I wanted to get everyone's opinion on what would be more optimal for temps. Right now I have the rads set up as exhaust and the rear 140mm fan as intake (so as to provide some airflow over the GPU backplate and RAM). If I were to turn the fans around I could actually replace the PE 360 in the front with the CE 420 (707W vs 500W rated) and keep the SE 420 in the ceiling as is (581W vs the 500W of the PE 360), but I'm afraid of dumping that heat right into the case and onto the distribution plate, all of the plumbing, the memory, the GPU backplate, etc. I wanted to run this by everyone to see what you think.
> 
> 
> 
> 45C water temp? Good lord, what are you cooling?
> 
> 
> 
> That sucks! Sorry to hear about it. Having an expensive component such as 2080 Ti die on you is very nerve wracking.


From my testing, running all radiator fans as intake produced the best results.
In my O11 Dynamic XL I ran all intakes with no exhaust fans at all and it worked amazingly.

Radiators like nice, cool ambient air from outside the case.


----------



## OrionBG

Hey guys, after almost 6 months of collecting parts, I have finally begun building my new system (there is a build log in the Builds section if interested: "Project Titan").

I'm going to be using two EVGA RTX 2080 Ti cards, one XC and the other an XC Ultra. They are both reference boards (supposedly, although I have found some small differences here and there on the PCBs).
They are both going to be water-cooled on their own separate loop. Both cards were manufactured before the BIOS change (or encryption, or whatever it was), so flashing should be easy.
What is the best BIOS currently to unlock as much performance as possible from these cards? I want to be able to push at least 500W per card (the more the better  )

Thanks.


----------



## sultanofswing

OrionBG said:


> Hey guys, after almost 6 months of collecting parts, I have finally begun building my new system (there is a build log in the Builds section if interested: "Project Titan").
> 
> 
> 
> I'm going to be using two EVGA RTX2080Ti cards, one XC and the other is an XC Ultra. They are both reference boards (supposedly, although I have found some small differences here and there on the PCBs).
> 
> They are both going to be water-cooled and have their separate loop. Both cards are manufactured before the BIOS change (or encryption or whatever it was) so flashing should be easy.
> 
> What is the best BIOS currently to unlock as much performance as possible from those cards? I want to be able to push at least 500W per card. (more the better  )
> 
> 
> 
> Thanks.


Galax 380w. 
You won't push 500 watts for daily use; to push that kind of power you'd need an XOC BIOS.

Sent from my GM1915 using Tapatalk


----------



## OrionBG

sultanofswing said:


> Galax 380w.
> You won't push 500 watts for daily use, to push that kind of power you'd need an XOC bios.
> 
> Sent from my GM1915 using Tapatalk


Yes, exactly!  Sooo, which XOC BIOS is the right one? 
Also, for daily use I'll most probably run close to stock; the power I need for benchmarking 3DMark ladders and so on...


----------



## Medizinmann

sultanofswing said:


> @Mooncheese
> I wish you had a water temp sensor. My card idles right at 3C over ambient, and that is from a fresh boot with the PC having sat asleep all day.
> The trend continues no matter what the water temp is: the core is always 3C over water temp.


This might have to do with calibration - with no or low load, my GPU usually hovers at or around 1°C above the water temp.

Greetings,
Medizinmann


----------



## Medizinmann

Mooncheese said:


> I shut down the PC today, and when I returned and turned it back on the GPU was at 25C with 68F (20C) ambient. Now it's at 30C @ 72F (22C) ambient after being on for a few hours; presumably the PSU and DDC have warmed up the air a bit inside the case.


I see idle temps around 26-27°C with 21-22°C ambient - if I push it and run all fans at full speed I can hold water temps around 2-3°C above ambient.



> Speaking of which, I wanted to get everyone's opinion on what would be more optimal for temps. Right now I have the rads set up as exhaust and the rear 140mm fan as intake (so as to provide some airflow over the GPU backplate and RAM). If I were to turn the fans around I could actually replace the PE 360 in the front with the CE 420 (707W vs 500W rated) and keep the SE 420 in the ceiling as is (581W vs the 500W of the PE 360), but I'm afraid of dumping that heat right into the case and onto the distribution plate, all of the plumbing, the memory, the GPU backplate, etc. I wanted to run this by everyone to see what you think.


I use the rads in the top and the front as intakes, plus one 140mm fan in the top blowing over the VRM heatsink and one 120mm rad in the back as exhaust - all set up push/pull.

PSU is in an extra compartment down below - case is Phanteks Eclipse p600s(http://www.phanteks.com/Eclipse-P600s.html).



> 45C water temp? Good lord, what are you cooling?


A 2080 Ti with the KFA2/Galax 380W BIOS, under load drawing a sustained 330W (386W peak), plus a 3900X with a slight OC at a sustained 180W (210W peak) - which results in around 510W (peaks around 600W) to cool. Other heat sources are the mobo and three 2TB NVMe SSDs (2x Gen3 and one Gen4).
With 100% fan speeds this would mean water temps around 5-6°C above ambient - but I run a silent profile, so my fan curve is set to increase fan speed rather slowly up to 32°C (max 40% of max RPM at that point) and then climb steeply to 100% at 52°C, which means 1800 RPM on all fans - at 45°C it is already at 70% - and after that is where it gets really noisy...
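That curve can be approximated as a piecewise-linear ramp through the points given above (a sketch only: the 20°C / 20% idle floor is an assumed starting point, and the real profile is presumably smoother between the breakpoints):

```python
# Piecewise-linear fan curve through the points described above:
# ~40% duty at 32 C, ~70% at 45 C, 100% at 52 C water temp.
# The 20 C / 20% idle floor is an assumption for illustration.

CURVE = [(20.0, 20.0), (32.0, 40.0), (45.0, 70.0), (52.0, 100.0)]

def fan_pct(water_c: float) -> float:
    """Interpolate fan duty (%) for a given water temperature (C)."""
    if water_c <= CURVE[0][0]:
        return CURVE[0][1]
    if water_c >= CURVE[-1][0]:
        return 100.0
    for (t0, p0), (t1, p1) in zip(CURVE, CURVE[1:]):
        if t0 <= water_c <= t1:
            return p0 + (p1 - p0) * (water_c - t0) / (t1 - t0)

print(fan_pct(28))   # quiet daily-use range
print(fan_pct(45))   # already loud
print(fan_pct(52))   # full tilt, 1800 RPM
```

Most motherboard fan controllers and tools like Aquasuite or FanControl let you enter exactly this kind of breakpoint table against a water temp sensor.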

When gaming I usually reach 40-43°C at an acceptable noise level - under my daily workload it's more like 28-29°C (in silence) with internet browsing, medical image reformatting and review, Office apps and such.

Greetings,
Medizinmann


----------



## edhutner

Hi all. I'm joining the 2080 ti club 
Recently got EVGA 2080 Ti XC and since yesterday it's watercooled with Heatkiller IV GPU block.
And, of course, now I am looking to increase the power limit.
I intended to use the GALAX/KFA2 380W BIOS, but I could not use nvflash because my XUSB firmware version (0x70100003) is newer.

Browsing the TechPowerUp vBIOS database for a newer BIOS with a good power limit, I found this one: https://www.techpowerup.com/vgabios/215860/galax-rtx2080ti-11264-190726 - it's for the GALAX RTX 2080 Ti HOF 10th year edition, 400W, with the same XUSB version as mine (0x70100003). As far as I know the Galax HOF board is a non-reference design, and I am afraid to flash it to my card.

Has anyone tried it on reference board?


----------



## sultanofswing

edhutner said:


> Hi all. I'm joining the 2080 ti club
> 
> Recently got EVGA 2080 Ti XC and since yesterday it's watercooled with Heatkiller IV GPU block.
> 
> And, of course, now I am looking to increase the power limit.
> 
> I intended to use the GALAX/KFA2 380W BIOS, but I could not use nvflash because my XUSB firmware version (0x70100003) is newer.
> 
> 
> 
> Browsing the TechPowerUp vBIOS database for a newer BIOS with a good power limit, I found this one: https://www.techpowerup.com/vgabios/215860/galax-rtx2080ti-11264-190726 - it's for the GALAX RTX 2080 Ti HOF 10th year edition, 400W, with the same XUSB version as mine (0x70100003). As far as I know the Galax HOF board is a non-reference design, and I am afraid to flash it to my card.
> 
> 
> 
> Has anyone tried it on reference board?


That bios will flash but won't hit 400w as the Galax card uses a different power plane.

Sent from my GM1915 using Tapatalk


----------



## edhutner

@sultanofswing thanks.
I'll be happy if it hits 380 🙂 Is there a risk of bricking it, or of losing some of the DP/HDMI ports?


----------



## J7SC

...you can also try the Gigabyte Aorus 2080 Ti Xtreme WB BIOS (they did an update in February '19). That card has a custom PCB with two 8-pin connectors rather than the three 8-pin connectors of the relevant Galax. I'm not sure about the HDMI ports if it's flashed onto another board, as I have two of those actual Aorus cards with the (earlier) BIOS as stock. Below are successive Superposition 8K runs I did earlier today (single card), with GPU-Z showing almost 380W... if you get stuck with other BIOS options, this might be worth a try... though water cooling is highly recommended.


----------



## Imprezzion

edhutner said:


> @sultanofswing thanks.
> I'll be happy if it hits 380 🙂 Is there a risk of bricking it, or of losing some of the DP/HDMI ports?


I haven't looked for it myself, as luckily I have the older firmware, but try to find an EVGA FTW3 Ultra / Hydro Copper BIOS with the new firmware, if one exists. That should do 373W.


----------



## AndrejB

I have an odd issue: when I OC my card, the PCIe speed doesn't drop to 1.1 at idle but stays at 2.0, and the memory stays at 810MHz instead of 405MHz. This only happens if I move the core +105 or more, or run the OC scanner; anything under that behaves normally.


----------



## spin5000

Does anyone know if the Gigabyte Gaming OC uses a reference PCB? It says "reference" in the OP, but the text is in yellow as opposed to blue like most others. What does the yellow signify? I looked everywhere but there's no legend or guide for the OP (why use colours to signify different things and then not tell the readers what those colours mean?).

According to a review or two, it seems to be a sort of clone of the reference board with some small changes. Are the changes insignificant enough that reference-card owners shouldn't have a problem flashing their cards with the Gigabyte Gaming OC's BIOS, or is it advised to stay away and stick only with truly reference boards?


----------



## kithylin

spin5000 said:


> Does anyone know if the Gigabyte Gaming OC uses a reference PCB? It says "reference" in the OP, but the text is in yellow as opposed to blue like most others. What does the yellow signify? I looked everywhere but there's no legend or guide for the OP (why use colours to signify different things and then not tell the readers what those colours mean?).
> 
> According to a review or two, it seems to be a sort of clone of the reference board with some small changes. Are the changes insignificant enough that reference-card owners shouldn't have a problem flashing their cards with the Gigabyte Gaming OC's BIOS, or is it advised to stay away and stick only with truly reference boards?


Go here: https://www.ekwb.com/configurator/ and select your video card all the way down to the specific model. The resulting parts-selection page will tell you at the top whether it's a reference PCB or a custom PCB.


----------



## edhutner

spin5000 said:


> Does anyone know if the Gigabyte Gaming OC uses a reference PCB? ...


As far as I know it is reference; the difference is only in the fan headers, I think.

I have flashed the GB Gaming OC BIOS (https://www.techpowerup.com/vgabios/209314/gigabyte-rtx2080ti-11264-190218) on my EVGA XC, which is also a reference board, and it is working without any issues.

P.S. I have not tested whether the fan output works, because mine is watercooled and doesn't use the card's fan headers.


----------



## J7SC

edhutner said:


> As far as I know it is reference; the difference is only in the fan headers, I think.
> 
> I have flashed the GB Gaming OC BIOS (https://www.techpowerup.com/vgabios/209314/gigabyte-rtx2080ti-11264-190218) on my EVGA XC, which is also a reference board, and it is working without any issues.
> 
> P.S. I have not tested whether the fan output works, because mine is watercooled and doesn't use the card's fan headers.


 
AFAIK, the GB Gaming OC BIOS has the same max wattage as the GB Aorus BIOS I referenced above. Any idea of the actual peak watts in GPU-Z when fully stressed in your water-cooled setup?


----------



## keikei

So, id Software said they would work on RT after the Doom launch. If it's anything like this, it would be the first time I'd turn it on. Wow!


----------



## Mooncheese

keikei said:


> So, id Software said they would work on RT after the Doom launch. If it's anything like this, it would be the first time I'd turn it on. Wow!
> 
> 
> https://www.youtube.com/watch?v=e6FVyg8hM7M&feature=emb_logo


Looks awesome, but I'm not a fan of the Gaussian blur! I usually use the Sharpen filter in Freestyle; this seems to do the opposite of that. Also, prepare to have your FPS cut in half; Doom Eternal at anything under 90 FPS is not going to feel good considering how fast-paced it is.


----------



## kithylin

Mooncheese said:


> Looks awesome but I'm not a fan of the Gaussian Blur! I usually use Sharpen in Freestyle filter, this seems to do the opposite of that. Also, prepare to have your FPS cut in half, Doom Eternal at anything under 90 FPS is not going to feel good considering how fast-paced it is.


That's why we have DLSS. Ray tracing + DLSS typically has very little to almost no performance impact, from the testing I have seen online, as long as the title supports both technologies.
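To put rough numbers on why DLSS claws the frames back: it renders internally at a lower resolution and upscales. A quick Python sketch; the 2x per-axis factor for "Performance" mode is a commonly cited figure, not something from this thread:

```python
def dlss_internal(width, height, scale):
    """Internal render resolution for a given per-axis upscale factor."""
    return int(width / scale), int(height / scale)

# 4K output with a DLSS "Performance"-style 2x per-axis upscale
w, h = dlss_internal(3840, 2160, 2)
shaded = (w * h) / (3840 * 2160)
print(f"renders {w}x{h}, ~{shaded:.0%} of native pixels")
```

So the shader workload drops to roughly a quarter of native 4K, which is where most of the recovered FPS comes from.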


----------



## edhutner

J7SC said:


> AFAIK, the GB Gaming OC BIOS has the same max wattage as the GB Aorus I referenced above. Any idea of the actual peak watts in GPU-Z when fully stressed in your water-cooled setup?


Testing with OCCT 6.0.0.b1 (GPU test with error detection, shader complexity 3, at 1920x1080), I get fluctuations between 350 and 370W, averaging about 358W.
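If anyone wants to pull the same average/peak numbers out of a sensor log instead of eyeballing GPU-Z, here's a rough Python sketch; the `Board Power Draw [W]` column name is an assumption, so check your own log's header:

```python
import csv
import io
import statistics

def summarize_power(log_file, column="Board Power Draw [W]"):
    """Return (average, peak) wattage from a CSV sensor log."""
    reader = csv.DictReader(log_file)
    watts = [float(row[column]) for row in reader if row[column].strip()]
    return statistics.mean(watts), max(watts)

# Small inline sample standing in for a real sensor log
sample = io.StringIO(
    "Date,Board Power Draw [W]\n"
    "12:00:01,352.1\n"
    "12:00:02,358.4\n"
    "12:00:03,369.7\n"
)
avg, peak = summarize_power(sample)
print(f"avg {avg:.1f} W, peak {peak:.1f} W")
```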


----------



## J7SC

edhutner said:


> Testing with occt 6.0.0.b1, gpu with error detection and shader complexity 3 at 1920x1080 I get fluctuations between 350-370W. Average value about 358W.


 
370W peak and ports working is great :thumb:


----------



## xkm1948

Anybody have experience with this 2080Ti AIO?

Alphacool Eiswolf 240 GPX Pro RTX 2080/2080Ti (2070/2080 Super) - black M02

https://www.aquatuning.us/water-coo...pro-rtx-2080/2080ti-2070/2080-super-black-m02

My EVGA 2080 Ti XC gets fairly hot if I move the power slider over the default 100%, and the two fans on the GPU would sometimes spin up to 100% during heavy load.


----------



## Laithan

I just picked up an EVGA 2080Ti FTW3 Ultra so I've made the jump & I'm in a new club 

The info on the first page of this thread was excellent and well formatted. I am somewhat surprised there isn't a modded vBIOS for this card... am I just missing it? I know it comes with 373W stock; however, I'm looking for a 400W or 450W BIOS for this GPU. Does one exist, or is this only going to be possible on the Kingpin?

Thanks in advance


----------



## kithylin

Laithan said:


> I just picked up an EVGA 2080Ti FTW3 Ultra so I've made the jump & I'm in a new club
> 
> The info in the first page of this thread was excellent and well formatted. I am sort of surprised there isn't a modded vBIOS for this card... am I just missing it? I know that it comes with 373W stock however I'm looking for a 400W or 450W BIOS for this GPU. Does one exist or is this only going to be possible on the Kingpin?
> 
> Thanks in advance


Nvidia disabled our ability to modify the BIOS ourselves on modern video cards back with the 1000 (Pascal) series. That's a thing of the past and no longer possible with today's Nvidia video cards. You can cross-flash a BIOS from a different vendor's card with a higher power limit, though; I'm just not sure about compatibility. Someone in this thread may be able to help you.


----------



## Mooncheese

kithylin said:


> That's why we have DLSS. Ray tracing + DLSS is typically very little to almost no performance impact from the testing I have seen online. As long as you have a title that supports both technologies.


Yes, but the title in question has to support DLSS 2.0 + RT; a mod that introduces path tracing / global illumination via ReShade will not use DLSS.

id Software has stated that they will implement RT later on.


----------



## Laithan

kithylin said:


> Nvidia disabled our ability to modify the bios's ourselves on modern video cards back with the 1000 (Pascal) series. That's a thing of the past and no longer possible with today's Nvidia video cards. You can cross-flash a bios from a different vendor's card with a higher power limit though but I'm not sure on compatibility. Someone in this thread may be able to help you.


Sorry, I didn't mean the traditional vBIOS modifications (I've made a few in my day). I am really only referring to the power limit increases we see for the Kingpin cards.
https://xdevs.com/guide/2080ti_kpe/#cbios

The stock BIOS on the FTW3 Ultra is 373W. I know there is a Galax BIOS @ 380W, but it's a reference-design BIOS. I am wondering if anyone has successfully found a BIOS with a higher power limit for the FTW3 Ultra.

Ty


----------



## Mooncheese

Laithan said:


> I just picked up an EVGA 2080Ti FTW3 Ultra so I've made the jump & I'm in a new club
> 
> The info in the first page of this thread was excellent and well formatted. I am sort of surprised there isn't a modded vBIOS for this card... am I just missing it? I know that it comes with 373W stock however I'm looking for a 400W or 450W BIOS for this GPU. Does one exist or is this only going to be possible on the Kingpin?
> 
> Thanks in advance


You don't need 400W; in fact, 350W is plenty, and 383W is about as much as you need, assuming your core is good for 2200 MHz stable and 8100-8200 MHz on the memory. My clocks don't really dip below 2070 MHz @ 1.013v on the 340W vBIOS (XC2 Ultra) at ~45C.

If you bring the core temp down you can use less voltage and, in turn, less wattage.

450W? You don't need that much; even 400W is overkill. People here are running the Gigabyte Waterforce Extreme solid at 2150 MHz with only 380W. Not many 2080 Tis are good for more than 2150 MHz; 2200 MHz on air/water is about as high as you will get. LN2 is another matter.

If your card is on the air cooler you're not going to want or need more than 373W, and that's assuming your core is good for 2150 MHz. Under a full water block it's a different story, but even then you won't need more wattage, since you can run higher frequencies with an undervolt and less power.

Here's what my XC2 @ 2070 MHz looks like in Superposition 4K Optimized @ 340W @ 1.013v: 




If you observe the wattage drawn, it's 300W on average with occasional spurts up to 320W, and Superposition induces higher-than-average power draw.

400-450W: I don't know where you read that you would need that much wattage, or that it would yield any benefit with the FTW3, especially on air.

In fact, I could have already flashed my card to the FTW3 BIOS @ 373W, but I just don't see the point or benefit, because the core doesn't want to do more than 2085 MHz even @ 1.069v. Unless you won the silicon lottery with an exceptional core that can do 2150-2200 MHz, you most definitely don't need any more power.


----------



## keikei

Mooncheese said:


> Looks awesome but I'm not a fan of the Gaussian Blur! I usually use Sharpen in Freestyle filter, this seems to do the opposite of that. Also, prepare to have your FPS cut in half, Doom Eternal at anything under 90 FPS is not going to feel good considering how fast-paced it is.



The performance hit will depend on whether id Software decides to do full-blown RT/DLSS options. At this rate, I'll take any usable form of RT. The 2080 Ti can do 4K/ultra at around 120fps in this game, so above 100fps is plausible. The tech has been getting better and better; we just have to be patient (myself included).


----------



## Imprezzion

Laithan said:


> Sorry, I didn't mean like the traditional vBios modifications (I've made a few in my day)  I am really only referring to the power limit increases as we see for the kingpin cards.
> https://xdevs.com/guide/2080ti_kpe/#cbios
> 
> The stock BIOS in the FTW3 Ultra is 373W.. I know there is a Galax BIOS @ 380W but it's a reference design BIOS. I am wondering if anyone has successfully found a BIOS with a higher power limit for the FTW3 Ultra.
> 
> Ty


I actually run the FTW3 Ultra BIOS on my reference Gainward Phoenix GS, which is cooled with a Kraken G12 + Kraken X52. It runs 2130 MHz @ 1.093v daily and can bench 2190 MHz at the same voltage. It just isn't very stable in demanding games over 2145 MHz and will occasionally DirectX-crash, so I just run a conservative 2130 MHz 24/7 lol.

373W is enough for any game I play to not hit power limits at all. CoD MW, Borderlands 3, GTA V, Halo MCC, anything like that won't really go over 110% (which is about 330-340W?). It does throttle slightly in benches like 3DMark or Superposition, but not to the point of costing any noticeable performance score-wise.

If I were you I'd stick with the original 373w BIOS for daily clocks under water or even air. Higher BIOS wattage will only help subzero or benching.

BTW, on reference at least, the KFA2 BIOS gives quite a lot more room than the 7W difference it has on paper. I get 20% lower power readings on KFA2 compared to FTW3, but fan speed can't go lower than 41% at idle with the KFA2 BIOS, and since I control my radiator fans through the card, that is loud as all heck at idle around 1400 RPM lol. If you don't care about noise or don't use the card's fan controller, you can use the KFA2 380W just fine in a FTW3.


----------



## Medizinmann

xkm1948 said:


> Anybody have experience with this 2080Ti AIO?
> 
> Alphacool Eiswolf 240 GPX Pro RTX 2080/2080Ti (2070/2080 Super) - black M02
> 
> https://www.aquatuning.us/water-coo...pro-rtx-2080/2080ti-2070/2080-super-black-m02
> 
> My EVGA 2080Ti XC gets fairly hot if I move the power slider over the default 100%. And the 2 fans on the GPU would sometime blow up to 100% during heavy load.



I am using an Alphacool Eiswolf 240 GPX Pro RTX 2080/2080Ti (2070/2080 Super) - black M02 on my Palit GamingPro OC RTX 2080 Ti and I am pretty happy with it - it runs great with a nice OC and the 380W KFA2/Galax BIOS.
I am running it in one loop with an Alphacool Eisbaer 360 CPU cooler and a 120mm add-on rad - all with Noctua NF-A12x25 PWM fans in push/pull. BTW: liquid metal as TIM on both the GPU die and CPU IHS, with a lot of conformal coating all around it...:thumb:

The be quiet! fans (Silent Wings II) that came with it are okay too and actually as silent as advertised - but the Noctuas are better and able to push/pull more air, especially under full load.

Greetings,
Medizinmann


----------



## xkm1948

Medizinmann said:


> I am using an Alphacool Eiswolf 240 GPX Pro RTX 2080/2080Ti (2070/2080 Super) - black M02 on my Palit GamingPro OC RTX 2080 Ti and I am pretty happy with it - runs great with an nice OC and the 380W KFA2/Galax BIOS.
> I am running it in one loop with an Alphacool Eisbaer 360 CPU cooler and a 120mm add-on rad - all with Noctua Noctua NF-A12x25 PWM in push/pull. BTW: LM as TIM on both(GPU die and CPU IHS) with a lot conformal coat all around it...:thumb:
> 
> The fans from bequit! (Silent Wings II) that came with it are okay too and actually as silent as advertised - but the Noctuas are better and able to push/pull more air especially under full load.
> 
> Greetings,
> Medizinmann


Damn nice. Great to know! I am pulling the trigger on this then!


----------



## sblantipodi

I still don't understand the naming scheme of the 3090.
Is it a way to sell the 3090 at the 2080 Ti's price and then a 3090 Ti at an even bigger price?

Are they completely crazy?


----------



## Medizinmann

sblantipodi said:


> I still don't understand the naming scheme of the 3090.
> Is it a way to sell the 3090 at the 2080ti price and make a 3090ti at even bigger price?
> 
> Are they completely crazy?


Define crazy?

As long as there are no real competitors…
I mean, even the 2080 Ti has actually gone up in price over the last 6-8 months – and I don't mean the last 3 with you know what… no, even long before.
In 07/2019 I was happy to get my Palit GamingPro OC for 930€ (including VAT and shipping) - now you pay at least 1130€.

As long as AMD isn't bringing something really competitive to match NVIDIA's high end, NVIDIA can charge ridiculous prices – the same as Intel did until AMD came out with Zen 2…

In the midrange NVIDIA had to lower prices with the advent of the 5700 XT, but in the high end…

It's a free market, and as long as people are willing to pay…

We can only hope that AMD's RDNA2, a.k.a. the 5900 XT or whatever it's called in the end, is good/competitive...

Greetings,
Medizinmann


----------



## fluidzoverclock

Hi guys, my MSI 2080 Ti has been stuttering randomly for a while now; I get random micro-freezes in all games, temps are fine (72C max), and the PSU is powerful enough (Corsair RM850x). The 2080 Ti is set to stock clocks. Today I ran the OCCT VRAM test with Unigine Heaven running in the background at the same time to check for VRAM instability. Immediately, millions of errors appeared in OCCT. I even lowered the clocks and I'm still seeing millions of errors. The display also turned off, and the driver recovered. The card appears to be unstable. Before I RMA the card, could somebody here please check something for me? Run Unigine Heaven and the OCCT VRAM test at the same time, with your 2080 Ti at stock, to see if OCCT shows errors. For all I know, the errors and display driver resets are a side effect of running both at once; I just need to rule that out, and I'd appreciate it.

Screenshot - https://i.imgur.com/nS87pQE.jpg
Video - https://streamable.com/3whhfv

Bullet points of the video : 

00:00 - 00:19 = The card is set to stock in MSI Afterburner.
00:19 - 00:53 = Running the VRAM test in OCCT with Unigine Heaven in the background displays millions of errors within seconds.
00:53 - 01:15 = Display driver nvlddmkm stopped responding and has successfully recovered. This happens over and over. Check the time of each event.
01:25 - 02:15 = I try lowering the GPU core clock in MSI Afterburner from default to -502 MHz and run the OCCT VRAM test again; still many errors.
02:20 = I lower the power limit from 100 to 80, and errors still appear.

I also lowered the VRAM speed in Afterburner to as low as it would go, and the display driver still crashed. https://i.imgur.com/nzVarAx.jpg


----------



## sultanofswing

fluidzoverclock said:


> Hi guys, my msi 2080ti card has been stuttering randomly for a while now, i get random micro freezes in all games, temps are fine (72c max), psu is powerful enough (corsair rm850x). The 2080ti is set to stock clocks. Today I ran OCCT VRAM test and unique heaven in the background at the same time to check for vram instability. Immediately millions of errors appeared on OCCT. I even lowered the clocks and Im stil seeing millions of errors. Also the display turned off, and the driver recovered. The card appears to be unstable. Before I rma the card, could somebody here please check something for me? Run unique heaven and occt vram test at the same time, with your 2080ti at stock, to see if occt shows errors.. for all I know, the errors, display driver resetting are a side effect of running both unique heaven and occt at the same time, I just need to rule that out and I appreciate it.
> 
> Screenshot - https://i.imgur.com/nS87pQE.jpg
> Video - https://streamable.com/3whhfv
> 
> Bullet points of the video :
> 
> 00:00 - 00:19 = The card is set to stock in msi afterburner.
> 00:19 - 00:53 = Running Vram test in OCCT, with Unique Heaven running in the background displays millions of errors within seconds.
> 00:53 - 01:15 = Display driver nvlddmkm stopped responding and has successfully recovered. This happens over and over. Check the time of each event.
> 01:25 - 02:15 - I try lowering the GPU core core down, using Msi afterburner, from default, to -502mhz, and run OCCT vram test again, still many errors.
> 02:20 = I lower the Power limit from 100 to 80, and still errors appear.
> 
> I also lowered the Vram speed in Afterburner, to as low as it would go, and the display driver still crashed. https://i.imgur.com/nzVarAx.jpg



DDU the driver and also make sure you are running separate power cables to the gpu and not daisy chained.


Sent from my iPhone using Tapatalk


----------



## fluidzoverclock

sultanofswing said:


> DDU the driver and also make sure you are running separate power cables to the gpu and not daisy chained.
> 
> 
> Sent from my iPhone using Tapatalk


Already DDU'd many times, reinstalled Windows a ton of times, and tested different drivers. The PSU is a single-rail 12V Corsair RM850x, but I'll try separate power cables to rule it out, thanks.


----------



## AndrejB

@fluidzoverclock


Yep, RMA time

This isn't even a completely stable oc/undervolt, yet no errors.

This is my 3rd card, had the same issues like you, display driver resetting in Apex (famous dxgi hung/removed error), took me a while to figure out it was the cards not the game or memory...


----------



## J7SC

AndrejB said:


> @fluidzoverclock
> 
> 
> Yep, RMA time
> 
> This isn't even a completely stable oc/undervolt, yet no errors.
> 
> This is my 3rd card, had the same issues like you, display driver resetting in Apex (famous dxgi hung/removed error), took me a while to figure out it was the cards not the game or memory...


 
I would agree with ^^ re. RMA. It may have been the case that Nvidia pushed 2080 Tis out the door early as a response to the mining collapse they were feeling. In any case, there was a select batch of "*2080 Ti test escapes*" which made it out the door and onto vendors' products. This could manifest in several different ways: a lock to a 1350 MHz boost (there can also be other reasons for that, btw), weird 'space invaders' memory artifacts - often erroneously associated with Micron-only memory (most early 2080 Tis had Micron VRAM supplied by Nvidia to vendors, but some Samsung also failed) - and also BSODs / 'micro-locks'. Suffice it to say that with water-cooled cards, Heaven 4 at a mid-to-high 2100s actual max boost clock should not be too much of a problem for a healthy card. One additional test you might want to run briefly is 3DM Port Royal, as it stresses a different segment of your GPU (tensor cores / DLSS) more than most benchmarks, including OCCT, Heaven, etc.; post whether the same issue recurs with your card.

A bit more on that whole test escape episode here: https://www.techspot.com/news/77445-nvidia-addresses-failing-geforce-rtx-2080-ti-cards.html

Another related source is this: https://www.youtube.com/watch?v=JIRfPlC15uc


----------



## Nineball_Seraph

Since I'm stable on my XC Ultra @ 2160 (on water) with my stock BIOS, is it worth trying the KFA2?


----------



## AndrejB

Nineball_Seraph said:


> Since im stable on my XC Ultra @2160 (on water) with my stock bios, is it worth trying the KFA2?


If you aren't hitting the power limit, then I don't see a reason.

I noticed that lowering the VCCSA voltage on my Aorus Master lowered my Strix's power consumption: same voltage, same test, 30W less.


----------



## Mooncheese

AndrejB said:


> If you aren't hitting the power limit, then I don't see a reason.
> 
> I noticed that lowering the VCCSA voltage on my aorus master, lowered my strix power consumption, same voltage, same test, 30w less.


Interesting; this is the first time I've ever heard of this. Is it an expected side effect? How is it even possible? I'm trying to mentally figure this out and I just can't. Are you sure it's not some kind of reporting error? Lowering VCCSA on the motherboard, which is an auxiliary voltage for the CPU, reduces the power consumption of the GPU? The only thing I could think of is that if a hungry CPU were consuming an inordinate amount of power, the 24-pin and 6-pin power cables might be at their limit and unable to deliver 75W over PCIe to the card - but that effect would work in the opposite direction, delivering less power to the card with more VCCSA, and it would take an insanely powerful CPU to tap out 24-pin and 6-pin power delivery.


----------



## Mooncheese

sblantipodi said:


> I still don't understand the naming scheme of the 3090.
> Is it a way to sell the 3090 at the 2080ti price and make a 3090ti at even bigger price?
> 
> Are they completely crazy?


3090? When was this announced? That would be confusing, because typically 90-class cards denote two GPUs sharing a PCB, e.g. the GTX 690.


----------



## AndrejB

Mooncheese said:


> Interesting, this is the first time I've ever heard of this, is this an expected side effect? How is this even possible? I'm trying to mentally figure this out and I just cant, are you sure it's not some kind of reporting error? Lowering VCCSA on the motherboard, which is an auxiliary voltage for the CPU, reduces the power consumption of the GPU? I mean the only thing that I could think is that if a hungry CPU was consuming an inordinate amount of power that somehow the 24 pin and 6 pin power cables were at their limit and couldn't deliver 75W over PCI-E to the card but that effect would work in the opposite direction, delivery less power to the card with more VCCSA and that would have be an insanely powerful CPU to tap out 24 and 6 pin power delivery.



You're completely right. The idea behind my statement came from experience with my first and second cards (this is my third), where if I ran too low a VCCSA voltage, the display driver would crash.

I see now that it may have come down to those cards being faulty or something, because these tests now show it's within margin of error (1.23 vs 1.2 vs 1.18).


----------



## Mooncheese

AndrejB said:


> You're completely right, the idea behind my statement was from experience with my first and second card (this is my third) where if I ran a too low of a VCCSA voltage the display driver would crash.
> 
> I see now that it may have been down to the cards being faulty or something, because these tests now show that it's within margin of error (1.23 vs 1.2 vs 1.18)


GPU power is nominal through all three runs (250W) and appears to be unaffected by VCCSA.

1965 MHz @ 0.950v is a fantastic undervolt, but if you're getting display driver CTDs, it may be because that's actually not enough voltage for that frequency at that temperature.

Have you tried using the MSI AB OC Scanner? I've found it fairly reliable for gauging the stability of an overclock. For example, I can pass benchmarks at 2100 MHz core / 8150 MHz memory @ 1.013-1.019v but will crash in games, and OC Scanner only gives this overclock a 69% confidence rating. At 2070 MHz core / 8150 MHz memory, same voltage, I don't crash in games, and OC Scanner gives it a 90% confidence rating. I believe 90% is actually the highest you can get; at 90% you're pretty much not going to get a crash in games.

For example, at a 69% confidence rating I will get a crash within 10 minutes of playing F1 2019 @ 3440x1440. At 90%, I've yet to experience a crash; I can play for 3 hours straight, even with the core getting up to 50C.
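The trial-and-error hunt for the highest stable offset can be framed as a simple bisection. A hedged Python sketch, where `is_stable` is a stand-in for whatever check you trust (an OC Scanner pass, an hour of F1 2019, etc.); the +105 limit in the demo is made up:

```python
def find_max_stable(lo, hi, is_stable, step=15):
    """Binary-search the highest core offset (MHz) that still passes
    a stability check, to within `step` MHz."""
    best = lo
    while hi - lo > step:
        mid = (lo + hi) // 2
        if is_stable(mid):
            best, lo = mid, mid  # passed: search higher
        else:
            hi = mid             # failed: search lower
    return best

# Stand-in predicate: pretend the card is stable up to +105 MHz
probe = lambda offset: offset <= 105
print(find_max_stable(0, 200, probe))  # -> 100
```

Each probe halves the search window, so you home in on the limit in a handful of stress runs instead of dozens.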


----------



## AndrejB

Mooncheese said:


> GPU power is nominal through all three runs (250W) and appears to be unaffected by VCCSA.
> 
> 1965 MHz @ .950w is a fantastic undervolt, but if youre getting display driver CTD's it may be because that's actually not enough voltage for that freq at that temperature.
> 
> Have you tried using MSI AB OC Scanner? I've found that to be fairly reliable in terms of gauging the stability of an overclock. For example, I can pass benchmarks at 2100 MHz core, 8150 MHz memory @ 1.013-1.019v but will crash in games and OC Scanner only gives this overclock a 69% Confidence Rating. @ 2070 MHz core, 8150 MHz memory, same voltage, I don't crash in games and OC Scanner gives this overclock a 90% Confidence Rating. I believe 90% is actually the highest you can get, if you get 90% youre pretty much not going to get a crash in games.
> 
> For example, at 69% Confidence Rating I will get a crash within 10 minutes of playing F1 2019 @ 3440x1440. With 90%, I've yet to experience a crash yet, I can play for 3 hours straight etc, even with the core getting up to 50C.


Yep, this card has a pretty good core; unfortunately, anything over 7950 on the mem gives a lower confidence rating in the OC Scanner test, even if OCCT doesn't report any VRAM errors. So the captured settings you saw are at 90% confidence.

Interestingly, this card hasn't crashed once in any game at anything I set (0.925v at 2070), but it stutters a bit. The previous two were pretty sensitive.

The only weirdness this card does is raise the idle PCIe speed from 1.1 to 2.0 (405 vs 810 MHz memory clock) after OC Scanner or after raising the core over +105.


----------



## Imprezzion

I might have to test my card as well lol. It has Samsung memory and isn't really that early of a card, but it does give me some grief with the memory OC. Most games are perfectly stable at 8100 MHz, but there are some games, World of Tanks for example, that will DXGI-crash at anything over 7600. Now, I know this is still above stock, and I played several hours of WoT at 7500 with no issues, but it's still weird that CoD MW Warzone and Borderlands 3 run for hours and hours at 8100 with zero crashes...

Off to the testing phase we go again lol. 

P.S. this is all at 2130 MHz / 1.093v with the EVGA FTW3 Ultra BIOS, a G12 + Kraken X52, and Enzotech copper VRAM heatsinks. The core never goes over 48C.


----------



## J7SC

Imprezzion said:


> I might have to test my card as well lol. It has Samsung memory and isn't really that early of a card but it does give me some grief with the memory OC. Like, most games are perfectly stable at 8100Mhz but there are some games, world of tanks for example, that will DXGI crash at anything over 7600. Now, I know this is still above stock, and I played several hours of WoT on 7500 with no issues, but it's still wierd that CoD MW Warzone and Borderlands 3 run for hours and hours on 8100 with zero crashes...
> 
> Off to the testing phase we go again lol.
> 
> P.s. this is all on 2130Mhz 1.093v with EVGA FTW3 Ultra BIOS with a G12 + Kraken X52 and Enzotech copper VRAM heatsinks. Core never goes over 48c.


 
For VRAM testing, 3DM TimeSpyEx GPU test 2 and 3DM Firestrike Ultra are quite demanding


----------



## Imprezzion

J7SC said:


> For VRAM testing, 3DM TimeSpyEx GPU test 2 and 3DM Firestrike Ultra are quite demanding


I ran those, and looped Superposition 1080p Extreme for quite a while while working from home and not using my PC. It survived several hours of loops even at 8100 MHz. I also just tested Superposition 1080p Extreme with the OCCT VRAM error scan. Nothing. No errors at 7500 or 8100. It did error at 8200 and crashed the entire system when pushing 8300, but it seems clean at 8100 for a 5-minute run, which should be plenty to show errors if the VRAM were physically broken.

Must just be an unstable OC then, or the WoT engine being a bit of a donkey lol.

Or, option 3: this reference-PCB card doesn't always play nice with the FTW3 Ultra BIOS and its memory timings. That could also be part of it lol. Shame I can't use the KFA2 BIOS, as the fans are way too loud at idle.


----------



## J7SC

Imprezzion said:


> I ran those and Superposition 1080p Extreme looped for a quite a while when working at home not using my PC. It survived several hours of loops even on 8100Mhz. Also just tested Superposition 1080p extreme with OCCT VRAM error scan. Nothing. No errors on 7500 or 8100. It did error on 8200 and crashed the entire system when pushing 8300 but seems clean on 8100 for a 5 minute run which should be plenty to show errors if the VRAM was physically broken.
> 
> Must just be a unstable OC then or WoT engine being a bit of a donkey lol.
> 
> Or, option 3, this reference PCB card doesn't always play Nice with the FTW3 Ultra BIOS and it's memory timings and such. That could also be part of it lol. Shame I can't use KFA2 BIOS as the fans are way too loud idle.


 
You can also try Superposition 4K and 8K (which use a lot of VRAM). Also, best VRAM performance (i.e. FPS and related efficiency) happens well below 'crash speed', i.e. 8300 in your post. That said, there are some apps/games which are just buggy!


----------



## Imprezzion

Yeah, 8100 does also seem to scale pretty well with scores in any bench, and I just did a 2-hour Borderlands 3 grind session with zero issues. No slowdowns, no high frametimes, no crashes. 2130/8100 @ 1.093v with the 373W power limit; it doesn't throttle (102-110% in game) and runs around 46-47C core temps.
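For anyone curious what that memory OC is worth on paper, the bandwidth math is straightforward. A small sketch, assuming the usual convention that Afterburner's ~7000 MHz readout corresponds to the stock 14 Gbps effective GDDR6 rate on the 2080 Ti's 352-bit bus:

```python
def bandwidth_gbs(afterburner_mhz, bus_bits=352):
    """Memory bandwidth in GB/s, assuming the Afterburner readout is
    half the effective GDDR6 data rate (7000 shown = 14 Gbps)."""
    effective_hz = afterburner_mhz * 2 * 1e6
    return effective_hz * bus_bits / 8 / 1e9

print(f"stock (7000): {bandwidth_gbs(7000):.0f} GB/s")
print(f"OC    (8100): {bandwidth_gbs(8100):.0f} GB/s")
```

The stock figure lands on the card's rated 616 GB/s, and 8100 works out to roughly 713 GB/s, a ~16% paper uplift, which lines up with memory-sensitive benches scaling nicely at that offset.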


----------



## JackCY

Mooncheese said:


> 3090? When was this announced? That would be confusing because typically 90 class cards denote to GPU's sharing a PCB, i.e. GTX 690.


That's what it will take to live up to the hype anyway.


----------



## fluidzoverclock

AndrejB said:


> @fluidzoverclock
> 
> 
> Yep, RMA time
> 
> This isn't even a completely stable oc/undervolt, yet no errors.
> 
> This is my 3rd card, had the same issues like you, display driver resetting in Apex (famous dxgi hung/removed error), took me a while to figure out it was the cards not the game or memory...


Thanks for this, it shows my card is bad.




J7SC said:


> I would agree with ^^ re. RMA. It may have been the case that NVidia pushed 2080 TIs out the door early as a response to the mining collapse they were feeling. In any case, there was a select batch of ""*2080 Ti test escapes*"" which made it out the door and onto vendors' products. This could manifest in several different ways. That includes a lock of 1350 MHz boost (there can also be other reasons for that, btw), space invaders weird memory artifacts - often erroneously associated with Micron-only memory (most early 2080 Ti had Micron VRAM supplied by Nvidia to vendors, but some Samsung also failed), and also BSOD / 'micro-locks'. Suffice it to say that with water-cooled cards, Heaven 4 in the mid-to-high 2100s actual max boost clock should not be too much of a problem with a healthy card. One additional test you might want to run briefly is 3DM Port Royal is it stresses a different segment of your GPU (tensor cores DLSS), more than most benchmarks, including OCCT, Heaven etc. and post whether the same issue recurs with your card.
> 
> A bit more here on that whole test escape episode here https://www.techspot.com/news/77445-nvidia-addresses-failing-geforce-rtx-2080-ti-cards.html
> 
> Another related source is this:
> 
> https://www.youtube.com/watch?v=JIRfPlC15uc


Appreciate your reply, thanks for the link.


----------



## sblantipodi

Am I the only one who runs a 2080 Ti on an old Haswell-E CPU?
Am I losing a lot even if I only play at 4K?

Should I upgrade my CPU?
Has anyone here upgraded from a similar CPU who can say whether there is a big performance uplift at 4K?


----------



## Laithan

sblantipodi said:


> am I the only one who runs a 2080Ti on an old haswell-e CPU?
> am I loosing a lot even if I play only 4K?
> 
> should I upgrade my CPU?
> are there someone who upgraded from a similar cpu and can tell that there is a big upgrade in terms of performance in 4K?


I'm running Ivy Bridge-E and it is plenty fast enough to drive this GPU; frametimes are very low.


----------



## kithylin

sblantipodi said:


> am I the only one who runs a 2080Ti on an old haswell-e CPU?
> am I loosing a lot even if I play only 4K?
> 
> should I upgrade my CPU?
> are there someone who upgraded from a similar cpu and can tell that there is a big upgrade in terms of performance in 4K?


If you can get your core clock speeds up to at least 4.0-4.4 GHz then you should be fine. Intel hasn't actually increased IPC much in at least 6 years. They just keep selling chips with higher default turbo/base clock speeds and claim "THE NEW CHIPS ARE 10% FASTER!".


----------



## GRABibus

sblantipodi said:


> am I the only one who runs a 2080Ti on an old haswell-e CPU?
> am I loosing a lot even if I play only 4K?
> 
> should I upgrade my CPU?
> are there someone who upgraded from a similar cpu and can tell that there is a big upgrade in terms of performance in 4K?


No, you are not alone.
Look at my sig.

I play 2560x1440.


----------



## ThrashZone

Hi,
He's asking everywhere; he's got the upgrade bug bad.


----------



## GRABibus

Deleted


----------



## Medizinmann

sblantipodi said:


> am I the only one who runs a 2080Ti on an old haswell-e CPU?
> am I loosing a lot even if I play only 4K?
> 
> should I upgrade my CPU?
> are there someone who upgraded from a similar cpu and can tell that there is a big upgrade in terms of performance in 4K?



Well, uhm - it depends...
Gamers Nexus did some benchmarks showing that some newer titles profit from 6- or 8-core CPUs - but this was tested at 1080p, and in the end there were only minor benefits to using more than 6 cores.

...and 4K is mostly GPU-bound anyway...

YouTuber JayzTwoCents also did some tests at 1440p and saw some benefits from newer CPUs, but diminishing returns at 4K.

So no - you presumably don't lose a lot when running at 4K - maybe a little...


Greetings,
Medizinmann


----------



## HeadlessKnight

kithylin said:


> If you can get your core clock speeds up to at least 4.0 - 4.4 Ghz then you should be fine. Intel hasn't actually increased IPC in at least 6 years. They just keep selling chips with higher default turbo/baseclock speeds and claim "THE NEW CHIPS ARE 10% FASTER!".


Skylake has a 5-10% IPC advantage over Haswell though. But you are correct that the newer Comet, Cascade and Coffee Lakes are not faster than Skylake clock for clock, since it is the same thing recycled each year with more cores; they are, however, faster than Haswell and OC higher.


----------



## kithylin

HeadlessKnight said:


> Skylake has a 5-10% IPC advantage over Haswell though. But you are correct that the new Comet, Cascade and Coffee Lakes are not faster than Skylake cfc though, since it is the same thing recycled each year with moar cores, but they are faster than Haswell and OC higher.


Yeah but.. in the big picture Intel has only increased it by +20% max over 6 years, if that. It's not that big of a deal. Just get any 3000-series Ivy Bridge chip and clock it up to a decent speed around 4 GHz and it's "fast enough" vs modern chips. There won't be any difference in FPS in games.


----------



## HeadlessKnight

kithylin said:


> Yeah but.. in the big overall picture of things Intel has only increased it by max +20% over 6 years, if that. It's not that big of a deal. Just get any 3000 series ivy bridge chip and clock it up to a decent clock speed around 4 Ghz and it's "Fast enough" vs modern chips. There won't be any difference in FPS in games.


True.


----------



## J7SC

kithylin said:


> Yeah but.. in the big overall picture of things Intel has only increased it by max +20% over 6 years, if that. It's not that big of a deal. Just get any 3000 series ivy bridge chip and clock it up to a decent clock speed around 4 Ghz and it's "Fast enough" vs modern chips. There won't be any difference in FPS in games.


 
...yeah, when I first got my 2x Aorus 2080 Tis, I plugged them into a Z170 test bench running a 5-year-old 6700K / 4.8 giggles / 3866 RAM. At 4K resolution, there was only a very minor aggregate FPS difference compared to the eventual HEDT build (the latter at a lower OC of 4.3), and that was presumably because of the HEDT's double the RAM channels. For 4K gaming, a decent-clocked older CPU should do more than fine. 1080p and perhaps 1440p might be a different story, I guess.


----------



## Silent Scone

Any reason why the GALAX XOC BIOS is removed from OP?


----------



## philhalo66

Looks like I lost the silicon lottery yet again. Can't even get 2055 MHz stable.


----------



## Nizzen

philhalo66 said:


> Looks like i lost the silicon lottery yet again. cant even get 2055 stable.


Model?


----------



## philhalo66

Nizzen said:


> Model?


EVGA RTX 2080 Ti FTW3 Ultra Hybrid


----------



## OrionBG

Silent Scone said:


> Any reason why the GALAX XOC BIOS is removed from OP?


+1. Does anybody have a link for this one?


----------



## ThrashZone

Silent Scone said:


> Any reason why the GALAX XOC BIOS is removed from OP?





OrionBG said:


> +1 Does anybody has a link for this one?


 @zhrooms


----------



## Nizzen

philhalo66 said:


> EVGA RTX 2080 Ti FTW3 Ultra Hybrid


That was unlucky!

My Asus 2080 Ti OC White Edition does 2085 MHz on the stock air cooler and stock BIOS.


----------



## sultanofswing

The Galax BIOS is on TPU.


----------



## philhalo66

Nizzen said:


> That was unlucky!
> 
> My Asus 2080ti oc white edition does 2085mhz on stock aircooler and stock bios.


I can't even get to 2000 MHz before it crashes. It boosts to 1965 but any higher and it crashes within an hour or so. The dumb part is that because I got the card on sale, if I wanted to step up to the air-cooled version I'd have to pay nearly $250.


----------



## J7SC

...looking at the various OC results here, it occurred to me that I had actually never run 3DMark Port Royal at bone stock to compare against my best full-tilt run. Context: dual 2080 Ti factory water-blocked cards on a GPU-dedicated loop with dual D5s and a total of 1080x55 mm of rad space. Also, ambient temps were about 5C higher for the bone-stock run....bone stock means no OC whatsoever on memory, GPU, power target, voltage etc., so 'fresh out of the box'.

- Max boost with default stock settings was 2025 MHz for both cards (single-card default stock is about 60 MHz higher). Port Royal result at default was '18450'.
- Max boost in full OC mode (including a 122%+ power target for a total of 379W per card) came to 2145 MHz, with the 'best/most efficient' VRAM overclock (Micron) at 2061. Port Royal result at full tilt was '20503'.

So score-wise, the full-tilt run was about 11% faster...for daily applications I'm not sure that matters much, though there may be some select 4K games where it could mean something.

(in pic below, upper and left is full oc, lower center and right is default)
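As a sanity check on those two scores, the relative gain is straightforward to compute; the numbers below are the ones quoted above, and the snippet itself is just illustrative.

```python
# Quick check of the stock vs. full-OC Port Royal scores quoted above.
stock = 18450     # default settings, both cards
full_oc = 20503   # 122%+ power target, 2145 MHz boost

gain_pct = (full_oc / stock - 1) * 100
print(f"Full-tilt run is {gain_pct:.1f}% faster")  # ~11.1%
```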


----------



## Laithan

Starting to get some "miles" on the 2080 Ti FTW3 Ultra (stock cooler) and it seems I'm not really thermally limited for the most part.. I guess water cooling could provide _some_ benefit but it doesn't seem like it would do a whole lot. Is a water block even worth it for a 2080 Ti? I assume power restrictions, not thermals, would be the limiting factor.. My guess is that a water block would likely just allow me to maintain a higher boost clock for a longer period of time but not actually provide additional OC headroom. 

Anyone that went from AIR to H2O on a 2080Ti care to comment?


----------



## J7SC

Laithan said:


> Starting to get some "miles" on the 2080Ti FTW3 Ultra (stock cooler) and it seems that I'm not really thermally limited for the most part.. I guess water cooling could provide _some_ benefit but it doesn't seem like it would really do a whole lot. Is a water block even worth it for a 2080Ti? I assume power restrictions would be the limiting factor for the most part not thermals.. My guess is that a water block would likely just allow me to maintain a higher boost clock for a longer period of time but not actually providing additional O/C headroom.
> 
> Anyone that went from AIR to H2O on a 2080Ti care to comment?


 
I can't comment directly as my cards are factory water-blocked (re. PL, up to 380W on the stock BIOS), but the table below, which I posted before from Tom's Hardware, shows fairly significant frequency differences for a 2080 Ti FE card (right column) at different temps.


----------



## Medizinmann

Laithan said:


> Starting to get some "miles" on the 2080Ti FTW3 Ultra (stock cooler) and it seems that I'm not really thermally limited for the most part.. I guess water cooling could provide _some_ benefit but it doesn't seem like it would really do a whole lot. Is a water block even worth it for a 2080Ti? I assume power restrictions would be the limiting factor for the most part not thermals.. My guess is that a water block would likely just allow me to maintain a higher boost clock for a longer period of time but not actually providing additional O/C headroom.
> 
> Anyone that went from AIR to H2O on a 2080Ti care to comment?


Well uhm it depends…
There are several tests on how much benefit water cooling might bring you...
JayzTwoCents, for example, did some nice tests with different GPUs over the last years.
Gamers Nexus has tests with a very systematic approach.
Linus also showed that if you can push the airflow high enough, you can reach similar results on air - but the noise... 

In short - with water cooling you can get more performance with less noise…
Is it worth it - that depends.
You can reach similar performance at stock settings, but you end up with higher noise levels when using air cooling alone.
...and you have some headroom for OC and higher power targets with water cooling - okay, we are talking about a 5-10% performance increase at best (not counting XOC cards, of course, with power targets over 1000W).

So overall, water cooling makes sense, especially if you like it quiet and are looking at higher power targets and OC.

While my 2080 Ti Palit GamingOC Pro is unbearable on air when OCed to the max with a 330W power target, it's quiet on water, and I can use the 380W Galax/KFA2 BIOS, seeing boosts up to 2145 MHz - against 2025 MHz on air.

But I had to add a water block and invest in water-cooling equipment - so you should also take the extra money spent into account.
So: less noise and a higher OC - but more power consumption and more expense.

Greetings
Medizinmann


----------



## sblantipodi

kithylin said:


> If you can get your core clock speeds up to at least 4.0 - 4.4 Ghz then you should be fine. Intel hasn't actually increased IPC in at least 6 years. They just keep selling chips with higher default turbo/baseclock speeds and claim "THE NEW CHIPS ARE 10% FASTER!".





GRABibus said:


> No you are not alone
> Look at my sig.
> 
> I play 2560x1440.


Nice to know, thanks.


----------



## Mooncheese

Hi everyone, sorry I've been busy for the past few days. I upgraded the radiators and redid 90% of the runs in the loop while leak testing (EK leak tester), and I had to tear apart the CPU monoblock and GPU blocks to clean them after the former soft tubing put plasticizer in my previous loop. I have liquid metal (Conductonaut) on both the CPU and GPU; peak GPU load temps dropped 7C (from 47-49C to 40-42C, spending more time around 40 instead of 47), and I brought the noise down considerably by replacing the single DDC with dual D5s.

But I ran into an issue where the former undervolt overclock @ 8150 MHz memory was no longer stable, and it crept in rather quickly. It was fine the evening I had it all together - I could do some benchmark runs and run around in The Witcher 3 and Shadow of the Tomb Raider for maybe an hour - but the following morning, while trying to resume benchmarking, it started failing in Port Royal very quickly (15 seconds into the bench).

Wanting to rule out inadequate contact on a memory module that might not have a temp sensor near it (with ICX there is one temp sensor per row of memory; I don't know if that averages across the row or only reads the nearest module), I drained the loop, pulled the card, disassembled it, and replaced the thermal pads on the memory with the ones supplied by Phanteks (the ones I was using were the same height but higher conductivity, from Fujipoly). I was worried that because liquid metal is thinner, the heatsink sat closer to the die and may have compacted the thermal pads, which are putty-like and don't rebound. (This was actually the second time draining the loop and pulling the card apart; the first time I hadn't put enough liquid metal on the die and heatsink.) I reassembled the card, leak tested, filled the loop - and the problem persisted.

So I figured maybe one of the modules wasn't making adequate contact, ran hot, and lost stability. I tried maxing the voltage slider in MSI AB and forgoing the undervolt on the frequency curve, and this worked great for a while! I ran the Port Royal stress test for 40 minutes with a 99.75% stability rating at around 1.050V (2130 MHz core, 8125 MHz memory), whereas before it would crash within about 20 seconds. I then re-ran all of the benchmarks with no problem, and played Metro Exodus last night, also without problems (albeit having to reduce the overclock by 50 MHz just for that title). MSI AB OC Scanner: 90% confidence rating.

This morning I managed to get one Superposition bench in at 2140 MHz on the core (2130 MHz felt stable, and 2160 results in a CTD and a 63% confidence rating in OC Scanner, so I was trying to find the max overclock), and then when I tried Time Spy it crashed. I restarted the PC and reverted to the last stable overclock and it still crashes. I then tried Port Royal and now it's crashing there within about a minute, whereas just last night it passed a 40-minute stress test with a 99.75% stability rating at the same overclock and voltage.

So now I'm worried that something may be burning out on the card that I can't see with the ICX sensors, or maybe there is too much mounting pressure on the GPU die? Is this possible? I have no artifacts to speak of; it just feels like I have a card that is losing its stability and will continue to do so until, I don't know, it dies?

I have conformal coating on the SMDs surrounding the GPU core, no liquid metal had escaped the core the last time I had it apart to inspect, and there is no liquid metal anywhere on the PCB. I have no idea what's going on with this card, so I figured I would ask you guys.

Love the temp drop, but I just lost all of my OC stability with it. I'm wondering if going back to normal thermal compound will address the issue, but logically I just don't see how it would.

I suppose I will dial back the overclock until it's stable and continue to monitor; it feels like the card is dying somehow.

Update:

Reducing the memory overclock from 8125 MHz to 8100 MHz restored stability - the question is for how long. It just ran the Port Royal stress test for 20 minutes @ 2130 MHz core / 8100 MHz memory; it could have gone for 40 minutes, but I didn't want to needlessly flog the card and wanted to cool the loop so I could try Time Spy again. If there is any instability, both Time Spy and Port Royal will make it clear within about a minute.

The old overclock of 2130 MHz core / 8125 MHz memory is now at an 83% confidence rating in OC Scanner, which is weird, because just an hour ago 2140 MHz core / 8125 MHz memory scored 90% and then passed Superposition; it was only when I attempted Time Spy that it crashed. After dialing the memory overclock back by 25 MHz it's stable again. I feel like the memory is going - where on the reference PCB is the memory voltage controller?

Not sure how this problem could have cropped up, I only changed the thermal compound to liquid metal, could it be excessive mounting pressure on the GPU die?
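For scale, here's a rough estimate of what those memory clocks mean in bandwidth terms, assuming the monitoring tool reports half the effective GDDR6 data rate (so the stock 7000 MHz reading corresponds to 14 Gbps per pin and the card's rated 616 GB/s); the helper is just a sketch under that assumption.

```python
# Rough GDDR6 bandwidth estimate for the memory clocks discussed above,
# assuming the tool's reading is half the effective data rate.
BUS_WIDTH_BITS = 352  # RTX 2080 Ti memory bus

def bandwidth_gbs(reported_mhz):
    data_rate_mts = reported_mhz * 2                   # MT/s per pin
    return data_rate_mts * BUS_WIDTH_BITS / 8 / 1000   # GB/s

print(bandwidth_gbs(7000))  # 616.0 GB/s (stock)
print(bandwidth_gbs(8125))  # 715.0 GB/s
print(bandwidth_gbs(8100))  # 712.8 GB/s
```

On those assumptions the 25 MHz dialed back costs only about 2 GB/s - a tiny price for the stability regained.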


----------



## kithylin

Mooncheese said:


> Not sure how this problem could have cropped up, I only changed the thermal compound to liquid metal, could it be excessive mounting pressure on the GPU die?


I read in your post that you have ICX sensors, and that's an EVGA thing. I just thought I might take a moment to note that you've most likely voided your warranty by using liquid metal on the card, per EVGA. About a year ago in the 1080 Ti owners' thread, someone commented that EVGA refused their RMA because they found liquid metal residue on the GPU core after taking it apart when it was sent back. Apparently they consider this "modification" of the PCB.


----------



## Mooncheese

kithylin said:


> I read in your post you have ICX sensors and that's a EVGA thing. I just thought I might take a moment and just give a little note to you that you've most likely voided your warranty using liquid metal on the card per EVGA. Back about a year ago in the 1080 Ti owner's thread someone commented EVGA Refused their RMA because they found liquid metal residue on the gpu core after taking it apart when it was sent back for RMA. Apparently they consider this "modification" of the PCB.


Thanks for the clarification. Anyhow, wrapping my head around the instability got me thinking more clearly, and I should have thought of this from the get-go: the backplate is screwed directly into the water block, so touching the GPU backplate while the card is running may have been transmitting static electricity through the backplate screws into the water block and into the GPU core - and seeing as liquid metal is conductive, that's basically touching the GPU core while it's running. Add wool socks and carpet to the mix, yeah. I used to remind myself to touch the chassis before feeling around inside the PC, but I may have forgotten to do that while touching the backplate.

Anyhow, I'm pulling the card out and reverting to standard TIM. I'm hoping the damage isn't permanent, but this will rule out electricity entering the core through the water block and anything that touches it. I should have thought of this earlier. There are little plastic washers used with the backplate and water-block screws precisely for this purpose, but with liquid metal on the GPU core, or on any components on the PCB that may be in contact with it, they don't prevent static electricity traveling through the screws into the water block.

Hopefully at this point I can at least stop the continual loss of overclockability where it is. I may not recoup the lost performance (around 150 MHz core), but hopefully I can stop it here, as this is probably what's happening.

I should have done this yesterday; this makes the 3rd or 4th time draining the loop, disassembling the GPU, and repasting in the span of 3 days. But at least I think I know what the problem is. Hopefully.


----------



## Imprezzion

Hmm, I finally found a situation in which I AM hitting power limits on my Phoenix GS with the EVGA FTW3 Hybrid BIOS.

I am running 2145-2115 MHz depending on temperature at 1.093V with 8000 memory. Normally most of my games don't hit the power limit, as I run max quality but only 1080p since I have a 144Hz 1080p monitor.

Now, recently I've been playing GTA V again for a bit, for example, and that game doesn't need 144 FPS to be smooth. And I modded it to heck anyway for eye candy and realism so..
Also, Modern Warfare and MW2 Remastered run at such high FPS that I can afford a bit more quality.

So, what's the point? Well.. resolution scaling is. If I run GTA V at 1080p 100%, it sits around 100-105% power limit. When I crank this up to 150% it already hits the limit, and at 200% "4K" downsampling it's.. bad  It just slams into the limiter and throttles super badly. Even at 373W. Same for MW and MW2R. Throttles as low as the low-2000 MHz clocks. 

The problem is, it downclocks to, like, 2085 MHz but drops voltage all the way to 1.018-1.043V, which is nowhere near enough to be stable at 2070-2085 MHz, so this throttling actually makes my card very, very unstable when running resolution scaling / downsampling. 

I am really tempted to just slap the HOF 2000W BIOS on this card. I know it works with this reference PCB - I ran it on the air cooler before - but I'm not sure I want to do it..
Running HOF XOC 2000W, even with the limiter at 22-23%, it will still be able to pull 440-460W. And at 1.125V it will go even higher.. 

It's cooled by a Kraken G12 + Kraken X52 and Alphacool / Enzotech copper heatsinks all over the VRM, MOSFETs, chokes and so forth. The PCB is a full reference one with all phases. 
The core cooling can handle it with ease; it barely touches 48C now slamming the 373W limiter with the radiator fans at, like, 44% and the pump on silent.

I don't really care if it breaks; I've got a 1080 Ti Lightning Z as a spare anyway.. shame about the money, but hey, hobbies are expensive 

But there's still a difference between "Hey, it could break and there's some risk, but the reference VRM is pretty beefy and built to last if cooled well, so.. it would be fun to try!" and "Well, 95% sure it will blow to bits, everyone who ever tried it blew it up, and the VRM has no chance of surviving that".

And no, I'm not full-cover blocking it. I want interchangeability in my cooling setup; that's why I went AIO. I swap cards WAY too often to warrant full-cover blocks.
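The limiter math above checks out: on a 2000W BIOS the power slider percentage scales the whole target, so even a low setting leaves a big cap. The figures are from the post; the function below is just a sketch of how such a slider is commonly understood to work.

```python
# How a power-limit slider on a 2000 W XOC BIOS maps to an actual cap,
# assuming the slider is a simple percentage of the BIOS power target.
def effective_limit_watts(bios_target_w, slider_pct):
    return bios_target_w * slider_pct / 100

for pct in (22, 23):
    print(f"{pct}% of 2000 W -> {effective_limit_watts(2000, pct):.0f} W")
# 22% -> 440 W, 23% -> 460 W, matching the 440-460 W range above
```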


----------



## sultanofswing

Imprezzion said:


> Hmm, i finally found a situation in which i AM hitting power limits on my Phoenix GS with EVGA FTW3 Hybrid BIOS.
> 
> I am running 2145-2115Mhz depending on temperatures on 1.093mV with 8000 memory. Normally, most of my games don't hit the power limit as I do run max quality but only 1080P as I have a 144Hz 1080P monitor.
> 
> Now, recently i've been playing GTA V again for a but for example and that game doesn't need 144FPS to be smooth. And i modded it to heck anyway for eyecandy and realism so..
> Also, Modern Warfare and MW2 Remastered have such a high FPS that I can do with a bit more quality.
> 
> So, what's the point? Well.. Resolution scaling is.. If I for example run GTA V on 1080P 100% it runs around 100-105% power limit. When I crank this up to 150% it already hits the limit and on 200% "4K" downsampling it's.. bad  It just slams into the limiter and throttles super bad. Even on 373w. Same for MW and MW2R. Throttles as low as low 2000Mhz clocks.
> 
> The problem is, it downclocks to like, 2085Mhz but drops voltage all the way to 1.018-1.043v which is nowhere near enough to be stable on 2070-2085Mhz so this throttling actually makes my card very very unstable when running resolution scaling / downsampling.
> 
> I am really tempted to just slap the HOF 2000w BIOS on this card. I know it works with this reference PCB, i ran it on the air cooler before, but i am not sure i want to do it..
> Running HOF XOC 2000w, even with a limiter on 22-23%, it will still be able to pull 440-460w. And, on 1.125v it will go even higher..
> 
> It's cooled by a Kraken G12 + Kraken X52 and Alphacool / Enzotech copper heatsinks all over the VRM, MOSFET, chokes and so forth. PCB is a full reference one with all phases.
> The core cooling can handle it with ease, it barely touches 48c now slamming the 373w limiter with the radiator fans on like, 44% and pump on silent.
> 
> I don't really care if it breaks, got a 1080Ti Lightning Z as a spare anyway.. shame of the money but hey, hobbies are expensive
> 
> But there's still a difference between: "Hey, it could break and it has some risk but the reference VRM is pretty beefy and built to last if cooled well so.. it would be fun to try!" and "Well, 95% sure it will blow to bits, everyone that ever tried it blew it up and the VRM has no chance to ever survive that".
> 
> And no, i'm not full cover blocking it. I want interchangability with my cooling setup. That's why I went AIO. I swap cards WAY too often to warrant full cover blocks.



48C is on the verge of not being able to get over 2100 stable.
I don't know why you guys keep wanting higher power limits when your temps are not good enough for the higher limit.

Anything over 40C and stability starts to go downhill fast, and throwing more voltage at it will make it even less stable.


Sent from my iPhone using Tapatalk


----------



## Imprezzion

sultanofswing said:


> 48c is on the verge of not able to get over 2100 stable.
> I don’t know why you guys keep wanting higher power limits when your temps are not good enough for the higher limit.
> 
> Anything over 40c and stability start to go downhill fast and throwing more voltage at it will make even less stable.
> 
> 
> Sent from my iPhone using Tapatalk


Not all cards and not all chips are the exact same, nor do they have the same temperature at which they become unstable. 

This specific card will not get any more stable with lower temperatures and doesn't get unstable at higher temps either - at least not at these clocks. There's no need for me to run it any cooler in daily gaming, as there's no difference in stability under 60C at 2130-2115 MHz. So why have more pump and rad fan noise? There's no need. 

At 33C, with the X52 rad in an ice box for a 3DMark & Superposition test, it can't hold 2160 MHz for any length of time without artifacting, and only above 65C-ish on the stock Phoenix GS air cooler did it really start to get unstable at 2130-2115 MHz. It can run 62-64C all day just fine in games at 2115 MHz, and 2130 at one higher temperature bin, around 56-57C, is just fine as well. 

Mind you, I'm not looking for core stability. I know where the stability points are in my curve and where it's happy. I just want to know how well-built the reference VRM is and whether it can handle a 450W constant load without nuking itself, as I haven't looked at it closely enough. 

Then again, time to YOLO. First I'm going to see whether the MSI Trio X BIOS will properly boot on a reference PCB. It has 406W. Curious to see if it works and how high the power draw goes in MW2R @ 4K downsampling.
If it still hits the limit, or it doesn't flash / boot, time for HOF XOC.


----------



## J7SC

Imprezzion said:


> Not all cards and not all chips are the exact same or have the same temperature on which they get unstable.
> 
> This specific card will not get any more stable with lower temperatures and doesn't get unstable at higher temps either. At least not on these clocks. There's no need for me to run it any cooler in daily gaming as there's no difference in stability under 60c at 2130-2115Mhz. So why have more pump noise and rad fan noise. There's no need.
> 
> At 33c with the X52 rad in a ice box for a 3DMark & Superposition test it can't hold 2160Mhz for any length of time without artifacting and only above 65c ish on the stock Phoenix GS aircooler did it really start to get unstable at 2130-2115Mhz. It can run 62-64c all day just fine in games on 2115Mhz. And 2130 @ 1 higher temperature bin, around 56-57c, is just fine as well.
> 
> Mind you, i'm not looking for core stability. I know where the stability points are in my Curve and where it's happy. I just wanna know how well-built the reference VRM is and if it can handle 450w constant load without nuking itself as I haven't looked at it closely enough.
> 
> Then again, time to YOLO. First of all i'm going to try if the MSI Trio X BIOS will properly boot on a reference PCB. It has 406w. Curious to see if it works and how high the power draw goes in MW2R @ 4K downsampling.
> If it's still hitting or it doesn't flash / boot, time for HOF XOC.


 
The MSI Trio X has 3x PCIe power connectors (1x 6-pin, 2x 8-pin / spoiler pic) addressed by its BIOS, whereas the reference PCB has 2x 8-pin...it very well might work, but not at the full 406W



Spoiler


----------



## Imprezzion

J7SC said:


> The MSI Trio X has 3x EPS connectors (1x 6 pin, 2x 8 pin / spoiler pic) addressed by its bios whereby the reference PCB has 2x 8 pin...it very well might work, but not for the full 406w
> 
> 
> 
> Spoiler


Point taken..  Should've remembered, as my 1080 Ti Lightning Z has 3 as well.. It booted up fine and everything works - RGB, fan controller - buuut, even though MSI AB reports about 111-117% power draw, it throttles like mad: in GPU-Z PerfCap is PWR at 100.0% all the time in games, and it's only letting the card use exactly 300W. So it's not applying the 135% limit from MSI AB correctly. Shame. It had promise.. 

At least HOF XOC works. 

EDIT: HOF BIOS @ 1.125V, 2160 MHz. Drops to 2130 MHz under full load around 52-53C; power draw is right around 400W. Perfectly stable so far and no heat-related stability issues at all.

See screenshot. I've been playing MW2CR @ max graphics, 4K downsampling, for like an hour or 2 now with no crashes or artifacts at all. 

I did lock the radiator fan speed at 50%, which is the max RPM (~1400-1450) at which they're quiet enough to live with in gaming. Ramping them to the full 2200 RPM will instantly drop temperatures like a rock to, like, the low 40s, so I may need better rad fans, as these obviously don't flow very well even at 1450 RPM - but I use them for the LED effects, not the performance of the fan 

The NZXT G12 VRM fan is running at 1150 RPM. The VRM heatsinks I put on are hot to the touch but nothing crazy; I can hold my finger on them just fine. The Phoenix GS backplate is thermal-padded to the card as well and is hot, but again nothing crazy - I can hold my hand on it for 10 seconds just fine, so not much more than 55-60C. The reference VRM seems to handle 400W pretty easily in terms of temperatures.

The VRAM is getting pretty hot though; those heatsinks are a definite no-touch lol, being stuck under the G12 with zero airflow. The big Enzotech heatsinks help but get saturated and can't shed the heat.. I might need to find a way to give the VRAM some airflow.

This is all at 22C ambient (see the drive temp sensor in HWiNFO; the SSD it monitors is directly in front of the bottom-most intake fan underneath the 280mm rad, so it gets a perfect intake ambient temp reading).


----------



## sultanofswing

Interesting results - it would be the first Turing card I've seen that doesn't scale with temps, but not all are created equal.

Mine starts to scale really well once I get below 30C. Above 40C (tested by blocking off the radiators) it becomes unstable quickly above 2145 MHz.

Here is some useful info also; it shows how Turing scales with temperature.


----------



## kx11

sultanofswing said:


> Interesting results, it would be the first Turing card that doesn't scale with temps I have seen but not all are created equal.
> 
> Mine start to scale really well once I get below 30c. Above 40c(tested by blocking off Radiators) it becomes unstable quick above 2145mhz.
> 
> Here is some useful info also. Shows how Turing scales with Temperatures





My KP with a good curve and a bit of extra voltage (60+) runs well; it starts at 2205 MHz and scales down to 2160 MHz with 8200 memory speed.


----------



## sultanofswing

kx11 said:


> my KP with a good curve and a bit of voltage ( 60+ ) runs good , starts from 2205mhz and scales down to 2160mhz w/ 8200memory speed
> 
> 
> 
> 
> 
> 
> https://www.youtube.com/watch?v=ZZQ6JAE6_2c



What's your curve look like.


----------



## kx11

sultanofswing said:


> What's your curve look like.



This works well for me.


----------



## Imprezzion

Lel, it broke.



Nah, the 2080 Ti didn't blow up. The Galax HOF XOC BIOS did break MSI Afterburner though.

For some reason, with that BIOS, as soon as I apply a saved profile or let it apply at Windows boot, it BSODs with the Nvidia driver with a SYSTEM_SERVICE_EXCEPTION. Manually doing an overclock and not saving a profile works fine.

I tried 3 different drivers with full DDU wipes, 4 different MSI AB versions, EVGA X1, disabling low-level IO and such; nothing works.

Just as a quick test to see if the BIOS was at fault, I flashed the KFA 380W one again and applied profiles just fine. No BSOD.

So now I gotta find a way to get MSI AB to apply an OC at boot without using a profile slot haha..

Or I have to manually do the curve and memory OC every boot. Not that that's such a big deal, but still..


----------



## J7SC

Imprezzion said:


> Lel it broke.
> 
> 
> 
> Nah the 2080Ti didn't blow up. The Galaxy HOF XOC BIOS did break MSI Afterburner though.
> 
> For some reason with that BIOS as soon as I apply a saved profile or let it apply at windows boot it BSOD's with the Nvidia driver with a system service exception. Manually doing a overclock and not saving a profile works fine.
> 
> I tried 3 different drivers with full DDU wipes, 4 different MSI AB versions, EVGA X1, disabling low level IO and such, nothing works.
> 
> Just as a quick test to see if the BIOS is at fault I flashed the KFA 380w one again, applied profiles just fine. No BSOD.
> 
> So, now I gotta find a way to get MSI AB to apply a OC at boot without using a profile slot haha..
> 
> Or, I have to manually do the curve and memory OC every boot. Not that that is such a big deal but still..


 
Have you tried completely uninstalling and then reinstalling MSI AB fresh? The uninstall should include the delete options for the folder, profiles etc. I used to switch BIOSes a lot on previous-gen cards, and every time, the saved MSI AB profiles wouldn't work anymore, if not outright crash. In Windows, every time I flashed a new BIOS onto the SAME card, the registry would enumerate it as a NEW GPU.


----------



## GRABibus

Imprezzion said:


> Lel it broke.
> 
> 
> 
> Nah the 2080Ti didn't blow up. The Galaxy HOF XOC BIOS did break MSI Afterburner though.
> 
> For some reason with that BIOS as soon as I apply a saved profile or let it apply at windows boot it BSOD's with the Nvidia driver with a system service exception. Manually doing a overclock and not saving a profile works fine.
> 
> I tried 3 different drivers with full DDU wipes, 4 different MSI AB versions, EVGA X1, disabling low level IO and such, nothing works.
> 
> Just as a quick test to see if the BIOS is at fault I flashed the KFA 380w one again, applied profiles just fine. No BSOD.
> 
> So, now I gotta find a way to get MSI AB to apply a OC at boot without using a profile slot haha..
> 
> Or, I have to manually do the curve and memory OC every boot. Not that that is such a big deal but still..


It's a well-known issue.
I also have the HOF XOC BIOS, and if I save an OC profile in MSI AB and launch it: immediate BSOD.
So I keep all my stable OC settings on paper. Just before playing a game or benchmark, I enter the values in MSI AB for core clock, memory clock, etc., and then click apply. Done this way, no BSOD.
With the EVGA FTW3 380W BIOS, no issues.


----------



## sultanofswing

kx11 said:


> sultanofswing said:
> 
> 
> 
> What's your curve look like.
> 
> 
> 
> 
> this works for me well
Click to expand...

Can you try something? Keep your current overclock but bring the 2190 and 2175 cells closer to your 2205 target.
So you have 2205 at the target voltage point.
2190 at the next voltage drop.
2175 at the next voltage drop.
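The cell adjustment can be sketched in a few lines. The voltage points and clocks below are hypothetical placeholders, not read from any real card; the sketch only illustrates pulling the two cells below the target up to one 15 MHz boost bin apart:

```python
# Hypothetical voltage/frequency points (mV -> MHz): a loose curve where
# the cells below the 2205 target sit much more than one boost bin lower.
curve = {1093: 2205, 1081: 2160, 1068: 2130, 1056: 2100}

def tighten(curve, target_mv, points=2, step=15):
    """Raise the `points` cells directly below the target voltage so each
    sits exactly one 15 MHz boost bin below the cell above it."""
    mvs = sorted(curve, reverse=True)          # highest voltage first
    i = mvs.index(target_mv)
    for n, mv in enumerate(mvs[i + 1:i + 1 + points], start=1):
        curve[mv] = max(curve[mv], curve[target_mv] - step * n)
    return curve

tightened = tighten(dict(curve), 1093)
# the two cells just below the target are now 2190 and 2175
```

Here `tighten` never lowers a cell, it only raises the ones that lag too far behind the target.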


----------



## ESRCJ

Have any of you experience degradation on your 2080 Tis? I was recently playing a game and it kept crashing, so I investigated further and found that my 2080 Ti could no longer pass the Time Spy Extreme stress test at my current settings (2100MHz at 1.006V and memory at 8200MHz, 380W Galax BIOS). I'll note that I actually used to pass that same frequency at 1.000V, but I had to bump it up a few months ago. I decided to bump up the vcore yet again to 1.012V and I'm still getting crashes in the stress test, but a little later on. Unless something on the software side has changed, it certainly seems as if my 2080 Ti has degraded, given that it's needing more voltage for the same frequency.

I'm a bit surprised that this card is degrading so quickly while operating at lower voltages than stock and temperatures being kept below 40C in all loads. I'm genuinely curious to know if anyone else has experienced any degradation.


----------



## reflex75

ESRCJ said:


> Have any of you experience degradation on your 2080 Tis? I was recently playing a game and it kept crashing, so I investigated further and found that my 2080 Ti could no longer pass the Time Spy Extreme stress test at my current settings (2100MHz at 1.006V and memory at 8200MHz, 380W Galax BIOS). I'll note that I actually used to pass that same frequency at 1.000V, but I had to bump it up a few months ago. I decided to bump up the vcore yet again to 1.012V and I'm still getting crashes in the stress test, but a little later on. Unless something on the software side has changed, it certainly seems as if my 2080 Ti has degraded, given that it's needing more voltage for the same frequency.
> 
> I'm a bit surprised that this card is degrading so quickly while operating at lower voltages than stock and temperatures being kept below 40C in all loads. I'm genuinely curious to know if anyone else has experienced any degradation.


All these huge chips degrade faster...
Mine lost 3 bins of 15MHz, which means 45MHz lower frequency at the same voltage!
(always checked with Port Royal at the same settings)


----------



## AndrejB

reflex75 said:


> ESRCJ said:
> 
> 
> 
> Have any of you experience degradation on your 2080 Tis? I was recently playing a game and it kept crashing, so I investigated further and found that my 2080 Ti could no longer pass the Time Spy Extreme stress test at my current settings (2100MHz at 1.006V and memory at 8200MHz, 380W Galax BIOS). I'll note that I actually used to pass that same frequency at 1.000V, but I had to bump it up a few months ago. I decided to bump up the vcore yet again to 1.012V and I'm still getting crashes in the stress test, but a little later on. Unless something on the software side has changed, it certainly seems as if my 2080 Ti has degraded, given that it's needing more voltage for the same frequency.
> 
> I'm a bit surprised that this card is degrading so quickly while operating at lower voltages than stock and temperatures being kept below 40C in all loads. I'm genuinely curious to know if anyone else has experienced any degradation.
> 
> 
> 
> All this huge chips are degrading faster...
> Mine lost 3 bins of 15Mhz, which means 45Mhz lower frequency at the same voltage!
> (always checking with Port Royal at the same settings)
Click to expand...

It could be the drivers as well.


----------



## J7SC

AndrejB said:


> It could be the drivers as well.


 
^^ This... Adjusting for ambient, I have not noticed any degradation so far, but I test with the same (older) drivers I have earlier data for. There was a relatively recent Nvidia driver update re. security which cost a bit of performance; ditto for some Win10 updates (all on the original GPU VBIOS). This doesn't mean degradation isn't a potential issue in some cases. The vid below also underscores that lower temps (extra cooling) can help delay degradation.


----------



## ESRCJ

I tried to see if I could find any explanation other than degradation for my lower frequencies per voltage, and I may have found the culprit. I've been running a modded BIOS for my Rampage VI Extreme Omega which uses an older microcode than the public version of the BIOS. After the release of the 9000 series Skylake-X CPUs, all X299 motherboards started reporting temperatures 5C higher per core on 7000 series CPUs; the increase is tied to a 5C bump in TJ max that came in when the 9000 series launched. If you run an older microcode (I don't remember which one), the issue is not present. It's more of an OCD thing for me, not wanting to see 5C higher across cores. Anyway, I switched to the public version of the BIOS with the newer microcode and my 2080 Ti "degradation" issue seems to be resolved. I ran the Time Spy Extreme stress test 3 times and 2100MHz at 1.000V is "stable" again. I'll run it another few times later on to confirm, and play some games.

I find it very odd that a microcode turned out to be the culprit and that it only started manifesting in a noticeable manner just recently.


----------



## Imprezzion

I did some further testing on the effect of temperature on my card's stability. I raised the starting point to 2175MHz @ 1.125v with the HOF XOC. Ran GTA V races all evening plus some Warzone and played around with the fan speed on my rad. This card doesn't get unstable at all at higher temperatures. It holds 2130MHz perfectly fine at 62C and 2145MHz just fine in the low to mid 50s.

Downside is, lower temps, 40-42C ish, don't make it any more stable either. It artifacts like mad at 2190 even at 35C.

Weird card, this thing.. but then again, 2145/8000 rock solid, I'd take that any day!

Now to see how long this thing lasts before it degrades to all heck or blows up on 1.125v with ~380-410w constant load wattage lol.
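The binning behaviour described above can be put in a tiny model. GPU Boost sheds clock in 15 MHz bins as temperature rises; the first threshold and the spacing below are illustrative assumptions (real Turing thresholds vary per card and BIOS), not measured values:

```python
def boosted_clock(base_mhz, temp_c, first_threshold_c=48, bin_mhz=15, bin_c=5):
    """Rough model of GPU Boost thermal binning: hold base_mhz at or below
    the first threshold, then shed one 15 MHz bin per `bin_c` degrees above
    it. Threshold and spacing are illustrative guesses, not measured data."""
    if temp_c <= first_threshold_c:
        return base_mhz
    bins = -(-(temp_c - first_threshold_c) // bin_c)   # ceiling division
    return base_mhz - bins * bin_mhz

boosted_clock(2145, 45)   # below the threshold: full 2145 MHz
boosted_clock(2145, 50)   # one bin shed: 2130 MHz
```

With these placeholder numbers a card set for 2145 would sit three bins lower (2100) by 62C; tune the constants to match what your own card actually reports.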


----------



## sultanofswing

Imprezzion said:


> I did some further testing on the effect of temperature on my cards stability. I raised the starting point to 2175Mhz @ 1.125v with the HOF XOC. Ran GTA V races all evening and some warzone and just played around with the fanspeed on my rad. This card doesn't get unstable at all at higher temperatures. It holds 2130Mhz perfectly fine on 62c and 2145Mhz just fine in the low to mid 50's.
> 
> Downside is, lower temps, 40-42c ish, don't make it any more stable either. It artifacts like mad on 2190 even at 35c.
> 
> Wierd card this thing.. but then again 2145/8000 rock solid, I'd take that any day!
> 
> Now to see how long this thing lasts before it degrades to all heck or blows up on 1.125v with ~380-410w constant load wattage lol.



Are you only raising the 2175 point in your curve or are you raising it and blending the rest in?
This is how I am doing mine. Also could you run Timespy with your current curve and post your graphics score?


----------



## Imprezzion

sultanofswing said:


> Are you only raising the 2175 point in your curve or are you raising it and blending the rest in?
> This is how I am doing mine. Also could you run Timespy with your current curve and post your graphics score?


I blend it, sort of. I use +120MHz core, which is 2130MHz base at idle temperatures, then I edit the curve for 1.125v afterwards, since this BIOS BSODs when applying a saved profile in MSI AB, so I have to redo it every boot. Seems to be a known issue with the HOF XOC on the reference PCB. Kinda quick and dirty. And yes, that does perform better and gets higher scores than just raising one curve point without blending. I'm going to sleep now, I'll do Time Spy tomorrow.


----------



## Mooncheese

sultanofswing said:


> 48c is on the verge of not able to get over 2100 stable.
> I don’t know why you guys keep wanting higher power limits when your temps are not good enough for the higher limit.
> 
> Anything over 40c and stability start to go downhill fast and throwing more voltage at it will make even less stable.
> 
> 
> Sent from my iPhone using Tapatalk


100% spot on, same observation here: at 49C, 2070MHz @ 1.019v is not stable and will not pass the Port Royal torture test, but 5C lower, at 43-44C, it's stable.



ESRCJ said:


> Have any of you experience degradation on your 2080 Tis? I was recently playing a game and it kept crashing, so I investigated further and found that my 2080 Ti could no longer pass the Time Spy Extreme stress test at my current settings (2100MHz at 1.006V and memory at 8200MHz, 380W Galax BIOS). I'll note that I actually used to pass that same frequency at 1.000V, but I had to bump it up a few months ago. I decided to bump up the vcore yet again to 1.012V and I'm still getting crashes in the stress test, but a little later on. Unless something on the software side has changed, it certainly seems as if my 2080 Ti has degraded, given that it's needing more voltage for the same frequency.
> 
> I'm a bit surprised that this card is degrading so quickly while operating at lower voltages than stock and temperatures being kept below 40C in all loads. I'm genuinely curious to know if anyone else has experienced any degradation.





reflex75 said:


> All this huge chips are degrading faster...
> Mine lost 3 bins of 15Mhz, which means 45Mhz lower frequency at the same voltage!
> (always checking with Port Royal at the same settings)


Well, I don't know if you guys saw my last post where I was complaining about losing 100MHz on the memory after experimenting with liquid metal; I go into it at some length in my recent radiator + serial D5 pump upgrade video, posted below. My card has lost some stability for sure, but I'm fairly certain most if not all of it came from some kind of shorting caused by the liquid metal on the GPU core: the core was no longer insulated from the backplate (the backplate screws go directly into the water block), so static electricity from the few times I stupidly touched the backplate to feel how hot it was, and potentially from other PCB components in direct contact with the water block, could reach the core.

My card went from +1150MHz stable on the memory down to +1050MHz within the span of a morning. I ruled everything out, disassembled the card, and replaced the thermal pads on the memory modules for fear that I had compressed the high-conductivity Fujipoly pads, which are putty-like and can flatten (although I can see the memory temps, there is only one sensor per row of modules, so I worried the instability came from a module away from the sensor). In the end I was forced to conclude that the source of the newfound instability was the liquid metal itself, even though absolutely none of it got anywhere other than the GPU core. My theory is that static electricity, either from touching the backplate without first grounding myself against the chassis or from another component touching the water block (there is one, in fact), was sending current into the core, where the memory controller resides. Not sure, but I've been running the Port Royal stress test every morning, followed by Time Spy, Superposition and then OC Scanner, and the phenomenon seems to have halted at +1025MHz on the memory. It's a good thing I switched back to standard TIM (Gelid GC Extreme; I have Kryonaut on hand and intend to switch to it when my 1/4" temp sensors come in next week, along with 1mm Gelid thermal pads to go back on the memory modules, because I did a bad paste job: there should only be a 3C difference between GC Extreme and Conductonaut but I'm seeing 4-5C).

Anyhow, I highly advise AGAINST liquid metal on the GPU core if you're running a water block, because the core is no longer insulated from the backplate. Lesson learned the hard way. At least I figured it out quickly (1-2 days) and stopped further degradation. I intend to upgrade to a 3080 Ti next year anyway.







If you want to skip my opinion on the "crisis" you can go straight to the 44:40 mark: a few moments after the pumps go from 40 to 100% RPM a few times in SOTTR, the temps drop 3C (the cooling difference is so pronounced it even cools the backplate; the pumps are set to kick in when the thermocouple attached to the backplate hits 38C, so when they come on they drop the temperature of the entire card). But that's nothing: in Port Royal, when the pumps come on, the core temp drops from 45 to 40C, which I also show via a historical graph. This is with 66F ambient.

https://imgur.com/a/hFQwpBi


----------



## ESRCJ

Looks like I spoke too soon. I fired up Gears 5 and it crashed. I'm back to crashing in the TimeSpy Extreme stress test as well. This is very odd, given that it passed without issues Friday morning, then became magically "unstable" tonight. Temps are basically identical.


----------



## MrTOOSHORT

ESRCJ said:


> Looks like I spoke too soon. I fired up Gears 5 and it crashed. I'm back to crashing in the TimeSpy Extreme stress test as well. This is very odd, given that it passed without issues Friday morning, then became magically "unstable" tonight. Temps are basically identical.



Maybe your RAM RTLs are going out of whack after reboots, making a stable RAM OC unstable at times. The Omega has that issue, so I've read. Check ASRock Timing Configurator. You could just lower your RAM OC for the time being and see if it helps; an unstable RAM OC can affect GPU overclocks.


----------



## ESRCJ

MrTOOSHORT said:


> ESRCJ said:
> 
> 
> 
> Looks like I spoke too soon. I fired up Gears 5 and it crashed. I'm back to crashing in the TimeSpy Extreme stress test as well. This is very odd, given that it passed without issues Friday morning, then became magically "unstable" tonight. Temps are basically identical.
> 
> 
> 
> Maybe your ram RTLs are going out of wack after your reboots. Making a stable ram oc unstable at times. Omega has that issue, so I've read. Check Asrock Timing Configurator. Could just lower your ram oc for the time being and see if it helps. Unstable ram oc can affect gpu overclocks.
Click to expand...

I ran my CPU and memory completely stock just to rule out anything on those ends and unfortunately the issue still persists. Perhaps my GPU is in fact degraded and I just got lucky with the stable stress test runs earlier. The only other thing I could think of is something on the software side, which I've already gone through uninstalling various programs to test that, but still no luck.


----------



## Imprezzion

Time Spy @ 2175/8000, 1.125v, auto fans, with my custom curve. Got up to about 51C with clocks alternating between 2145 and 2130 depending on the temperature. When under 48C it stays at 2145. It will remain stable well over 60C though at 2130 if I raise it.

Not the best result I've had; I did multiple runs and the best was a 16108 total score, but I didn't have monitoring running lol.

It won't validate as I run the 445.98 hotfix drivers. The official ones break shader caching, so I run the hotfix version, but that isn't 3DMark approved lol.


----------



## Mooncheese

Imprezzion said:


> Timespy @ 2175/8000 1.125v Auto fans with my custim curve. Got up to about 51c with clocks alternating between 2145 and 2130 depending on the temperature. When under 48c it will stay at 2145. It will remain stable well over 60c though at 2130 if I raise it.
> 
> Not the best result I had, i did multiple runs, the best was 16108 total score but didn't have monitoring running lol.
> 
> It won't validate as I run 445.98 Hotfix drivers. The official ones break Shader Caching so I run the hotfix version but that isn't 3DMark approved lol.


How are you guys running so much voltage? What BIOS is that and what card exactly? Are you no longer worried about the longevity of the card with the 3080 Ti due out soon? Last I heard, simply setting the voltage slider in MSI AB to +100 takes the avg. lifespan from something like 5 years down to 1, allegedly per Jacket Man himself, but I'm not sure how accurate or factual that is.


----------



## Laithan

FWIW, it is Maxwell but I ran 1.293v on my 980Ti's for over 5 years with zero degradation... (full cover block)


----------



## sultanofswing

Imprezzion said:


> Timespy @ 2175/8000 1.125v Auto fans with my custim curve. Got up to about 51c with clocks alternating between 2145 and 2130 depending on the temperature. When under 48c it will stay at 2145. It will remain stable well over 60c though at 2130 if I raise it.
> 
> Not the best result I had, i did multiple runs, the best was 16108 total score but didn't have monitoring running lol.
> 
> It won't validate as I run 445.98 Hotfix drivers. The official ones break Shader Caching so I run the hotfix version but that isn't 3DMark approved lol.


OK, as I expected. Not trying to call you out or anything along those lines, I just want to show you why I think your card is not affected by temperatures the way you say.

Now what I am about to explain could be totally wrong but it is what I have experienced on 4 2080ti's that I have owned and I will try to go into detail some.

Your graphics score for the clocks and voltage you are running is lower than average, and here is why I believe that.
When you set up a curve in MSI Afterburner, everyone has a different way, and depending on how it is done it can skew results quite heavily.
The higher your target voltage/frequency point sits above the surrounding points, the more the results will be skewed.
It is very hard to explain, so I will show some pictures to illustrate the point I am trying to make.

So here are 2 examples on the extreme end of the spectrum.

First example: voltage curve set for the card to run 2145MHz@1093mv with no extra voltage added.
Notice the graphics score and the clock it ran. This curve was set up by entering +90 in the offset slider (2130MHz); I then went to the 1093mv point and raised it to 2145MHz.
This keeps the voltage curve nice and tight. Using this exact method, if my card goes over 40C it will crash. It ran 2145MHz in the first graphics test and 2130 in the second.










Now the "incorrect" method.
Notice the graphics score and the clock ran. This curve was set up by just raising the 1093mv point and then doing some "minor" smoothing of the curve, with no extra voltage added.
Using this method I can run 2300MHz and it is not affected by temperature whatsoever. This setup ran 2220MHz in both graphics tests.










At the end of the day I don't have access to your system to see how it is set up, and this may not have anything to do with the results you see.
I just know that the less you smooth the voltage curve, the higher the clock you can run without stability issues, but the score vs. the clock ran doesn't add up.
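The score-vs-clock mismatch can be put into rough numbers, assuming the graphics score scales roughly linearly with the average core clock (a simplification; the figures below are hypothetical, not taken from either card):

```python
def implied_clock(graphics_score, ref_score, ref_mhz):
    """Back out the 'effective' average clock a score implies, assuming
    graphics score scales linearly with core clock (a rough simplification)."""
    return ref_mhz * graphics_score / ref_score

# Hypothetical numbers: a 'loose' curve reports 2220 MHz but scores the same
# 16100 as a tight curve known to genuinely hold 2130 MHz.
effective = implied_clock(16100, 16100, 2130)   # -> 2130.0
stretch = 2220 - effective                      # -> 90.0 MHz of hidden stretching
```

In other words, if the reported clock rises but the score doesn't, the difference between reported and implied clock is a rough measure of how much the card is internally stretching or skipping.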


----------



## changboy

Laithan said:


> FWIW, it is Maxwell but I ran 1.293v on my 980Ti's for over 5 years with zero degradation... (full cover block)


Nice logo btw, lol.


----------



## Imprezzion

sultanofswing said:


> OK, as I expected, Not trying to call you out or anything along the lines just wanted to show you why I think your card is not affected by temperatures like you say.
> 
> Now what I am about to explain could be totally wrong but it is what I have experienced on 4 2080ti's that I have owned and I will try to go into detail some.
> 
> Your graphics score for the clocks you are running and the voltage you are running are lower than average and here is why I believe that.
> When you set up a Curve in MSI afterburner everyone has a different way and depending on how it is done it can skew results quite heavily.
> With the curve the higher you have your target voltage/frequency point from the surrounding points the more the results will be skewed.
> It is very hard to explain so I will show some pictures to see if you kinda understand the point I am trying to make.
> 
> So here are 2 examples on the extreme end of the spectrum.
> 
> First example: voltage curve set for the card to run 2145MHz@1093mv with no extra voltage added.
> Notice the graphics score and the clock ran. This curve was setup by entering +90 in the offset slider(2130mhz), I then went to the 1093mv point and raised that to 2145mhz).
> This keeps the voltage curve nice and tight. Now using this exact method if my card goes over 40c it will crash. This ran 2145mhz in the first graphics test and 2130 in the second graphics test
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Now the "incorrect" Method.
> Notice the graphics score and the clock ran. This curve was setup by just raising the 1093mv slider and then doing some "minor" smoothing of the curve with no extra voltage added.
> Using this method I can run 2300mhz and it is not effected by the temperature whatsoever. This setup ran 2220mhz in both graphics test's.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> At the end of the day I don't have access to your system to see how it is setup and this may not have anything to do with the results that you see.
> I just know that the less you smooth the voltage curve the higher the clock you can run and not see any stability issues but the score vs clock ran doesn't add up.


This is the exact behavior my non-A Inno3D card showed. It would do 2200MHz at 1.093v with a bad curve but barely 2040MHz with a good one, on top of being a non-A and power throttling like mad, because even the best BIOS for non-A cards is still useless. I could never test that card's actual stable overclock as it would just randomly drop voltage to 1.012v or whatever and instantly crash lol. Best I got out of it was 1980MHz at 1.012v, as that didn't throttle.

Then again, my curve is smooth enough not to be "wrong". I also noticed another thing affecting my graphics score here: I had the Nvidia driver's 3D settings on high-quality filtering, with 2x supersampling transparency AA and ambient occlusion forced to high quality. That costs a few hundred points. I got to 16777 just by disabling the texture filtering in the driver, so those other 300 points aren't hard to find lol.


----------



## Laithan

changboy said:


> Nice logo btw, lol.


We violated the rule to never be in the same thread at the same time... Imagine a child seeing two Mickey Mouse(mice) in front of them? lol


----------



## kx11

so


LG C9 can output 4k 120hz on PC ??


----------



## Jpmboy

kx11 said:


> so
> LG C9 can output 4k 120hz on PC ??
> https://www.youtube.com/watch?v=lstTEfWp5Vs


As far as I can tell, he does not show the C9 info panel confirming it is actually receiving and displaying 4K120 over its HDMI input. The transcoder/transceiver for HDMI 2.0 will only accept a signal up to its stated bit rate. I suspect you'd have to drop the bit depth and play with the chroma subsampling; it would be close to the 18G ceiling, I think.

EDIT::
According to Forbes:

LG’s 2020 4K OLED And LCD TVs Don’t Support Full HDMI 2.1
It has been confirmed by LG that the FRL speeds in the EDID dumps of the C9 and CX are correct.
*The C9 has full 48Gbps ports capable of 4K 120Hz 4:4:4 12-bit.*
The CX however only has 40Gbps ports capable of 4K 120Hz 4:4:4 10-bit.
Here’s the official statement LG shared:
“While LG covered most of the HDMI 2.1 related specs in its 2019 TVs, including full bandwidth support in all of the HDMI ports for its 4K and 8K TVs, the market situation evolution indicated that real content that requires 48Gbps is not available in the market. 
Based on market situation, LG decided to re-allocate the hardware resources of 2020 chipsets optimizing for AI functions including CPU&GPU and supporting full bandwidth in only 2 ports of 2020 8K TV series (ZX series, NANO99, NANO97, NANO95). And the rest of the ports of 8K TVs and all HDMI 2.1 ports of 4K TVs have lower bandwidth than 48 Gbps but support up to 4K 120P 4:4:4/RGB 10bit. We apologize for not flagging this earlier to you.”
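The chroma and bit-depth juggling above is easy to sanity-check with arithmetic. The sketch below counts active pixels only (blanking overhead is ignored, so real requirements run somewhat higher) and uses approximate post-coding payload capacities: 8b/10b TMDS for HDMI 2.0's 18 Gbps, 16b/18b FRL for HDMI 2.1's 40/48 Gbps:

```python
# Luma+chroma samples per pixel for each chroma subsampling mode.
CHROMA_SAMPLES = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}

def video_gbps(width, height, hz, chroma, bit_depth):
    """Approximate uncompressed video data rate in Gbps (active pixels only;
    blanking is ignored, so real link requirements are somewhat higher)."""
    bits_per_pixel = CHROMA_SAMPLES[chroma] * bit_depth
    return width * height * hz * bits_per_pixel / 1e9

HDMI20_PAYLOAD = 18 * 8 / 10    # ~14.4 Gbps after 8b/10b TMDS coding
FRL40_PAYLOAD = 40 * 16 / 18    # ~35.6 Gbps (CX-class 40G ports)
FRL48_PAYLOAD = 48 * 16 / 18    # ~42.7 Gbps (C9-class 48G ports)

full = video_gbps(3840, 2160, 120, "4:4:4", 10)       # ~29.9 Gbps: needs HDMI 2.1
subsampled = video_gbps(3840, 2160, 120, "4:2:0", 8)  # ~11.9 Gbps: fits HDMI 2.0
```

That is why 4K120 4:4:4 needs an HDMI 2.1 link, while dropping to 4:2:0 8-bit is the only way to squeeze 4K120 through an HDMI 2.0 port.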


----------



## kx11

Jpmboy said:


> AFAIC tell he does not show the C9 info panel saying it is actually receiving and displaying 4K120 over it's HDMI. The transcoder/transceiver for HDMI2.0 will receive the signal only at up to it's stated signal density or bit rate. I suspect you'd have to drop bit rate and play with the chroma. It would be close to the 18G ceiling I think.
> 
> EDIT::
> According to Forbes:
> 
> LG’s 2020 4K OLED And LCD TVs Don’t Support Full HDMI 2.1
> It has been confirmed by LG that the FRL speeds in the EDID dumps of the C9 and CX are correct.
> *The C9 has full 48Gbps ports capable of 4K 120Hz 4:4:4 12-bit.*
> The CX however only has 40Gbps ports capable of 4K 120Hz 4:4:4 10-bit.
> Here’s the official statement LG shared:
> “While LG covered most of the HDMI 2.1 related specs in its 2019 TVs, including full bandwidth support in all of the HDMI ports for its 4K and 8K TVs, the market situation evolution indicated that real content that requires 48Gbps is not available in the market.
> Based on market situation, LG decided to re-allocate the hardware resources of 2020 chipsets optimizing for AI functions including CPU&GPU and supporting full bandwidth in only 2 ports of 2020 8K TV series (ZX series, NANO99, NANO97, NANO95). And the rest of the ports of 8K TVs and all HDMI 2.1 ports of 4K TVs have lower bandwidth than 48 Gbps but support up to 4K 120P 4:4:4/RGB 10bit. We apologize for not flagging this earlier to you.”





I did follow his guide since I have a C9 and it didn't work at all. I guess he got lucky that his display can sneak the 4K 120Hz 4:2:2 chroma signal through to the panel. These tricks work with a lot of monitors/TVs: I used to have an Acer 2K monitor that could run 4K 60Hz way back in 2013/14 through Nvidia custom resolutions, sometimes forcing it with a little tool called CRU to enable 4K 60fps at lower bandwidth just so the GPU driver would send the signal to the monitor.


----------



## Jpmboy

kx11 said:


> i did follow his guide since i have C9 and it didn't work at all , i guess he got lucky his display can sneak the 120hz 4k 4:2:2 chroma sample to the panel , these tricks happens with a lot of monitors/TVs , i used to have an Acer 2k monitor that can output 4k60hz way back in 2013/14 through Nvidia custom resolution and sometimes forcing a little tool called C.R.U. to enable 4k 60fps with lower bandwidth just so the gpu driver allows it to send the signal to the monitor


you might double check the hdmi cable itself. 4K120 should work on the C9... you have a 2019 model?


----------



## kx11

Jpmboy said:


> you might double check the hdmi cable itself. 4K120 should work on the C9... you have a 2019 model?



Yes, it's a 2019. The cable is good, 4K 60Hz HDR10 capable.


----------



## Mooncheese

Laithan said:


> FWIW, it is Maxwell but I ran 1.293v on my 980Ti's for over 5 years with zero degradation... (full cover block)


Interesting, so what is the reason behind this statement made by Nvidia? 

I'm just curious as to why Nvidia would state that increasing the voltage over stock would take the avg. lifespan from 5 years down to 1. 

https://www.reddit.com/r/nvidia/comments/aclfgo/will_overclocking_significantly_shorten_the/

https://www.techspot.com/article/1704-geforce-rtx-2080-overclocking/

For whatever reason, increasing the voltage on my card doesn't really increase stability. Wattage consumption goes up from around 300W to 330W on average and my temps go up 1-2C. My scores are higher in all of the benchmarks with more frequency and voltage, but last I tried it wasn't stable with +100 on the voltage slider in MSI AB. I can try again; I'm finally stable after the memory-clock degradation from experimenting with liquid metal, at +1025MHz on the memory, and I'm fairly certain the memory overclock was the cause of the instability when dabbling with +100 on the voltage slider and 2130MHz core (even though that setup got a 90% confidence rating in OC Scanner).

I just wanted to hear from everyone here about what their thoughts and observations about voltage and lifespan is before doing so. 

I'm curious to try the FTW3 BIOS as it goes to 1.093v and 373W (I'm currently on an XC2 Ultra). I am afraid the BIOS may render the ICX sensors unusable (not sure), or that it may brick the card, as this XC2 Ultra was manufactured in October 2019.

What is the method used to flash newer cards per chance? 

Thanks for any help with these questions.


----------



## Mooncheese

To hell with 4K @ 120 FPS, I will probably upgrade from my AW3418DW (3440x1440 @ 120 Hz IPS) to this: https://www.theverge.com/2020/1/7/2...y-g9-super-ultra-wide-gaming-monitor-ces-2020

It's hard for me to go back to 16:9 after experiencing curved 21:9. I have my ROG Swift PG278Q on an arm next to my AW3418DW and even though some games look stunning in 3D Vision (Tomb Raider, The Witcher 3 etc.) I hardly ever use it because the first thing I notice is going from curved 21:9 to flat 16:9. It's like trying to go back to 4:3.


----------



## Jpmboy

Mooncheese said:


> Interesting, so what is the reason behind this statement made by Nvidia?
> 
> I'm just curious as to why Nvidia would state that increasing the voltage over stock would take the avg. lifespan from 5 years down to 1.
> 
> https://www.reddit.com/r/nvidia/comments/aclfgo/will_overclocking_significantly_shorten_the/
> 
> https://www.techspot.com/article/1704-geforce-rtx-2080-overclocking/
> 
> For whatever reason increasing the voltage on my card doesn't increase the stability really. I notice that wattage consumption goes up from like 300w to 330w on average and my temps go up 1-2C. My scores are higher in all of the benchmarks with more freq. and voltage but last I tried it wasn't stable with +100 on the voltage slider in MSI AB. I can try again, I am finally stable with the memory clock degradation from experimenting with liquid metal @ +1025 MHz memory and I'm fairly certain that the memory overclock was the cause of the instability when dabbling with +100 on the voltage slider and 2130 MHz core (even though that presented a 90% Confidence rating in OC scanner).
> 
> I just wanted to hear from everyone here about what their thoughts and observations about voltage and lifespan is before doing so.
> 
> I'm curious to try FTW3 bios as it goes to 1.093v an 373w (currently with an XC2 Ultra). I am afraid that somehow the BIOS may render the ICX sensors unusable (not sure) or that it may brick the card as this XC2 Ultra was manufactured in Oct of 2019.
> 
> What is the method used to flash newer cards per chance?
> 
> Thanks for any help with these questions.


There is ZERO debate that higher voltage, higher current/power, and higher temperature shorten the lifespan of just about any semiconductor device (CPU, GPU, etc.). And depending on leakage (gate bypass), operating frequency may not scale positively with voltage (Hz vs. mV).
Frankly, unless you control the temperature and current draw (or power, TDP, watts) at a given frequency and voltage, taking any one of these outside the acceptable operating range (AOR) - which is basically any overclock - will reduce the projected lifespan of the part (CPU, GPU, etc.). Whether it goes from 5 years to 1 year, or never shows any decline at all, really depends on the quality of the silicon (crystal quality, including the defects necessary for it to work as a semiconductor at all) - i.e., the silicon lottery. So if your GPU is behaving differently after putting out 300+ watts at a higher-than-stock voltage to hold a higher-than-spec boost... lifespan is a risk we all take when overclocking.
Pascal does not scale with voltage unless the temperature is kept well below ambient, and Turing is the same. The last NV architecture to scale at ambient was Maxwell.
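Jpmboy's point about temperature can be made concrete with an Arrhenius-style acceleration factor, the standard first-order model for thermally driven wear-out. Here is a toy sketch in Python; the 0.7 eV activation energy and the example temperatures are illustrative assumptions, not NVIDIA figures:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(t_use_c: float, t_stress_c: float, ea_ev: float = 0.7) -> float:
    """Acceleration factor between a 'stress' and a 'use' junction temperature.

    AF = exp((Ea/k) * (1/T_use - 1/T_stress)), temperatures in kelvin.
    Ea = 0.7 eV is a commonly assumed activation energy for silicon wear-out
    mechanisms, not a measured value for any specific GPU.
    """
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_B) * (1.0 / t_use - 1.0 / t_stress))

# A water-cooled card at 45C vs an FE blower at 85C:
af = arrhenius_af(45.0, 85.0)
print(f"wear at 85C proceeds roughly {af:.0f}x faster than at 45C (toy model)")
```

Under these made-up numbers, dropping a card from 85C to 45C slows thermally activated wear by roughly an order of magnitude, which lines up with the anecdotes in this thread about blower-cooled cards dying first.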


----------



## sultanofswing

Mooncheese said:


> Interesting, so what is the reason behind this statement made by Nvidia?
> 
> I'm just curious as to why Nvidia would state that increasing the voltage over stock would take the avg. lifespan from 5 years down to 1.
> 
> https://www.reddit.com/r/nvidia/comments/aclfgo/will_overclocking_significantly_shorten_the/
> 
> https://www.techspot.com/article/1704-geforce-rtx-2080-overclocking/
> 
> For whatever reason increasing the voltage on my card doesn't increase the stability really. I notice that wattage consumption goes up from like 300w to 330w on average and my temps go up 1-2C. My scores are higher in all of the benchmarks with more freq. and voltage but last I tried it wasn't stable with +100 on the voltage slider in MSI AB. I can try again, I am finally stable with the memory clock degradation from experimenting with liquid metal @ +1025 MHz memory and I'm fairly certain that the memory overclock was the cause of the instability when dabbling with +100 on the voltage slider and 2130 MHz core (even though that presented a 90% Confidence rating in OC scanner).
> 
> I just wanted to hear from everyone here what their thoughts and observations about voltage and lifespan are before doing so.
> 
> I'm curious to try the FTW3 bios as it goes to 1.093v and 373w (currently with an XC2 Ultra). I am afraid that somehow the BIOS may render the ICX sensors unusable (not sure) or that it may brick the card, as this XC2 Ultra was manufactured in Oct of 2019.
> 
> What is the method used to flash newer cards per chance?
> 
> Thanks for any help with these questions.


Although I am not sure what Nvidia's reasoning on degradation is, my take is that it all correlates with temperature and current.
Obviously a card that runs hotter will have a shorter lifespan than one that runs cooler.
I also don't overclock for games, as it is pretty pointless. Think of it like this:
at stock clocks my card boosts to 2055 MHz; throwing an extra 90 MHz on it isn't really going to do much besides drive power consumption up a lot.
Now, if we were able to throw 1000 MHz or more at it, we would see meaningful gains that would be worth it.
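That tradeoff can be eyeballed with the classic first-order dynamic-power relation, P ≈ C·f·V² (static leakage ignored; the clocks and voltages below are example numbers for illustration, not measurements from any particular card):

```python
def dynamic_power_ratio(f0: float, v0: float, f1: float, v1: float) -> float:
    """Ratio of dynamic switching power between two operating points,
    using the first-order CMOS approximation P ~ f * V^2."""
    return (f1 / f0) * (v1 / v0) ** 2

# Example: 2055 MHz @ 1.043 V stock vs 2145 MHz @ 1.093 V overclocked.
perf_gain = 2145 / 2055 - 1                                   # extra clock (best case for fps)
power_gain = dynamic_power_ratio(2055, 1.043, 2145, 1.093) - 1
print(f"clock: +{perf_gain:.1%}, dynamic power: +{power_gain:.1%}")
```

A few percent more clock for roughly three times that in extra switching power is why small offsets rarely feel worth it.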

Here is the method I use to flash newer cards: a CH341A USB programmer and the AsProgrammer software.


----------



## krizby

Jpmboy said:


> There is ZERO debate that higher voltage, higher current/power, and higher temperature shorten the lifespan of just about any semiconductor device (CPU, GPU, etc.). And depending on leakage (gate bypass), operating frequency may not scale positively with voltage (Hz vs. mV).
> Frankly, unless you control the temperature and current draw (or power, TDP, watts) at a given frequency and voltage, taking any one of these outside the acceptable operating range (AOR) - which is basically any overclock - will reduce the projected lifespan of the part (CPU, GPU, etc.). Whether it goes from 5 years to 1 year, or never shows any decline at all, really depends on the quality of the silicon (crystal quality, including the defects necessary for it to work as a semiconductor at all) - i.e., the silicon lottery. So if your GPU is behaving differently after putting out 300+ watts at a higher-than-stock voltage to hold a higher-than-spec boost... lifespan is a risk we all take when overclocking.
> Pascal does not scale with voltage unless the temperature is kept well below ambient, and Turing is the same. The last NV architecture to scale at ambient was Maxwell.


So far I haven't seen any user reports of silicon degradation on Nvidia or AMD GPUs. On Intel and AMD CPUs degradation is quite real once voltage goes above 1.5v, so I guess at 1.0-1.2v there is no major concern about degradation.

That being said, I'm more worried about the life of the solder joints on a card that is constantly being stressed and thermal-cycled by overclockers. I reckon the high death rate of Turing in its early days was down to the BGA soldering on the VRAM.


----------



## Mooncheese

Jpmboy said:


> There is ZERO debate that higher voltage, higher current/power, and higher temperature shorten the lifespan of just about any semiconductor device (CPU, GPU, etc.). And depending on leakage (gate bypass), operating frequency may not scale positively with voltage (Hz vs. mV).
> Frankly, unless you control the temperature and current draw (or power, TDP, watts) at a given frequency and voltage, taking any one of these outside the acceptable operating range (AOR) - which is basically any overclock - will reduce the projected lifespan of the part (CPU, GPU, etc.). Whether it goes from 5 years to 1 year, or never shows any decline at all, really depends on the quality of the silicon (crystal quality, including the defects necessary for it to work as a semiconductor at all) - i.e., the silicon lottery. So if your GPU is behaving differently after putting out 300+ watts at a higher-than-stock voltage to hold a higher-than-spec boost... lifespan is a risk we all take when overclocking.
> Pascal does not scale with voltage unless the temperature is kept well below ambient, and Turing is the same. The last NV architecture to scale at ambient was Maxwell.


Ah I see, so unless the card is cooled well below ambient (e.g. on LN2), additional voltage is pointless on Turing.

In regards to shortening the lifespan via overclocking, what about undervolting? I'm at 2070 MHz @ 1.019v. The card does get up to 340-350W (XC2 Ultra, default vbios), but the VRM / MOSFETs typically don't get hotter than 40-43C (full water block, 1400w rad surface area, CE 420 + XE 360, GPU core ~43C, memory 40-50C).

Fairly certain voltage matters the most for longevity, nearly matched by temperature, followed by current. It's my understanding that if the power delivery is cooled properly, the amount of current doesn't matter?

My 1080 Ti showed zero signs of degradation and was only replaced by this 2080 Ti about 2 months ago. That's about 3 years of 24/7 usage (FE vbios, undervolt of 1.025v; when not gaming I mine crypto on cold nights at 65% PT) but temps of 40-45C. I see many "my 1080 Ti died" posts from users with factory air coolers, particularly the FE blower style (85C load).



sultanofswing said:


> Although I am not sure what Nvidia's reasoning on degradation is, my take is that it all correlates with temperature and current.
> Obviously a card that runs hotter will have a shorter lifespan than one that runs cooler.
> I also don't overclock for games, as it is pretty pointless. Think of it like this:
> at stock clocks my card boosts to 2055 MHz; throwing an extra 90 MHz on it isn't really going to do much besides drive power consumption up a lot.
> Now, if we were able to throw 1000 MHz or more at it, we would see meaningful gains that would be worth it.
> 
> Here is the method I use to flash newer cards: a CH341A USB programmer and the AsProgrammer software.


What if one doesn't have that equipment - what is the best flashing method for newer cards? My card is a reference PCB (XC2 Ultra) and I'm considering flashing to the FTW3 BIOS for the 373W cap, but I may just keep it as it is.


----------



## sultanofswing

Mooncheese said:


> Ah I see, so unless cooled below ambient on LN2 additional voltage is pointless on Turing.
> 
> In regards to shortening the lifespan via overclocking, what about undervolting? I'm at 2070 MHz @ 1.019v. The card does get up to 340-350W (XC2 Ultra, default vbios) but VRM / MOSFET typically don't get hotter than 40-43C (full water block, 1400w rad surface area, CE 420 + XE 360, GPU core ~43C, memory 40-50C)?
> 
> Fairly certain voltage matters the most nearly matched by temperature followed by current in regards to longevity. It's my understanding that if cooling power delivery correctly the amount of current is of no import?
> 
> My 1080 Ti has shown zero signs of degradation and was just replaced by this 2080 Ti about 2 months ago. We are talking about 3 years of 24/7 usage (FE vbios, undervolt of 1.025v, if not gaming I'm mining crypto over cold nights at 65% PT) but temps of 40-45C. Seeing many posts "my 1080 Ti died" from users with factory air coolers, particularly FE blower style (85C load).
> 
> 
> 
> What if one doesn't have this equipment, what is the best flashing method for newer cards? The card I have is reference PCB (XC2 Ultra) and I'm considering flashing to FTW3 for the 373w cap but may just keep it here.



If the BIOS you want to flash is compatible with your card's XUSB firmware version, use NVFlash64.


Sent from my iPhone using Tapatalk


----------



## Imprezzion

I have done some "way out of the safe zone" overclocks on both CPUs and GPUs, and only two ever suffered bad degradation. The first was my 3770K @ 4.95GHz with 1.496v. It lasted about 2-2.5 years before the BSODs started. It still works fine at stock but can't really hold even 4.5GHz anymore at any voltage.

Counter to that is my "at-the-time golden" 2500K that did 5.3GHz 24h Prime95 stable @ 1.54v and still lives to this day on that exact OC. It even had a few LN2/DICE sessions and survived those just fine as well.

The other one is an R9 290 flashed to a 290X, with an unlocked BIOS, no power limit, and more voltage (1.25v, I think?). It's pretty much dead. It worked for quite some time, 6-8 months, but had no proper cooling, just the reference blower @ 100%, and ran in the 90C range quite often. It still works, but only at 2D clocks. 3D applications artifact like mad now, with purple and green stripes. It's done for.

Then again, I have never had an Nvidia card degrade on me. Not the 780/780 Ti's with custom-edited BIOSes on Founders coolers, not the 980/980 Ti at 1.212v with no limits; my 1080/1080 Ti's on XOC BIOSes never degraded one bit. My Lightning Z on the LN2 BIOS with the stock cooler still works fine.

So, I'm very curious to see how far this 2080 Ti Phoenix GS can take me on the XOC BIOS: 1.125v, 400-430W power draw in most games, 2175-2130 MHz depending on load temperatures, 7800-8000 memory (GTA V and World of Tanks for some reason won't run 8000 while all other games do). Temperatures sit in the low 40s normally with my rad fans cranked, but if I want some more silence I turn them down and temps go to the mid 50s.


----------



## Mooncheese

Imprezzion said:


> I have done some "way out of the safe zone" overclocks on both CPUs and GPUs, and the only 2 that ever suffered bad degradation were my 3770K @ 4.95GHz with 1.496v. It lasted for about 2-2.5 years before the BSODs started. It still works fine on stock but can't really hold even 4.5GHz anymore on any voltage.
> 
> Counter to that is my "at-the-time golden" 2500K that did 5.3Ghz 24h Prime95 stable @ 1.54v and still lives to this day on that exact OC. Even had a few LN2/DICE sessions with it. Survived those just fine as well.
> 
> The other one is a R9 290 flashed to 290X and with unlocked BIOS with no power limit and more voltage (1.25v I thought?). It's pretty much dead. It worked for quite some time, 6-8 months, but had no proper cooling, just reference blower @ 100%, ran in the 90c range quite often. It works but only 2D clocks. 3D applications artifact like mad now with purple and green stripes. It's done for.
> 
> Then again, I have never had a nVidia degrade on me. Not even the 780/780 TI's with custom make BIOS edited stuff on founders coolers, not the 980/980 Ti with 1.212v and no limits, my 1080/1080Ti's on XOC never degraded one bit.. Lightning Z on LN2 BIOS on stock cooler still works fine.
> 
> So, I'm very curious to see how far this 2080Ti Phoenix GS can take me on XOC 1.125v 400-430w power draw in most games, 2175-2130 depending on load temperatures, 7800-8000 memory (GTA V and World of Tanks for some reason won't run 8000 while all other games do). Temperatures sit in the low 40's normally with my rad fans cranked but if I want some more silence I turn them down and temps go to mid 50's.


I actually experienced silicon degradation firsthand with a 780 Ti. I flashed a hotter vbios, something like 1.29v, to try to secure more than 1200 MHz, and it gradually lost stability one bin at a time at the increased voltage. It happened pretty quickly, actually, within a few months. Next thing I knew it was no longer stable even at a frequency that had been stable on the cooler vbios, and that's when I reverted to the original vbios. Temps had gone up maybe 5C with the additional voltage. I had a pair of 780 Ti's in SLI, cooled via NZXT Kraken G10's.


----------



## Imprezzion

Mooncheese said:


> I actually experienced silicon degradation firsthand with 780 Ti, I flashed a hotter vbios for like 1.29v or something to try and secure more than 1200 MHz and it gradually lost stability one bin at a time with the increased voltage. It happened pretty quick actually, like within a few months. Next thing I knew it was no longer stable at even a freq that was stable with the cooler vbios and that's when I just reverted to the original vbios. Temps had gone up like 5C or something with the additional voltage. I had a pair of 780 Ti in SLI and they were being cooled via NZXT Kraken G10's.


I never went that high on the 780 Ti's. 1.212v with boost disabled in the P-state tab and no power limit.

I think I still have a working non-Ti 780 lying around somewhere. It just doesn't have a cooler, as the EK block started leaking and I never replaced it. I got a Ti and hung the PCB on the wall, but it should still work. I have the bare block and mounts for an Accelero III, and I could strap some 120s to it, but I have no VRM or VRAM cooling for it.

I wonder if Arctic or Gelid still sells those 780 VRM blocks for the Accelero or Icy Vision somewhere locally..

EDIT: Yes, it is still hanging right here in my office lol, next to my blocks, above my X58A-UD7 R1.0, for which I should really get a 6-core 12-thread unlocked S1366 Xeon some day..


----------



## Medizinmann

Mooncheese said:


> To hell with 4K @ 120 FPS, I will probably upgrade from my AW3418DW (3440x1440 @ 120 Hz IPS) to this: https://www.theverge.com/2020/1/7/2...y-g9-super-ultra-wide-gaming-monitor-ces-2020
> 
> It's hard for me to go back to 16:9 after experiencing curved 21:9. I have my ROG Swift PG278Q on an arm next to my AW3418DW and even though some games look stunning in 3D Vision (Tomb Raider, The Witcher 3 etc.) I hardly ever use it because the first thing I notice is going from curved 21:9 to flat 16:9. It's like trying to go back to 4:3.


5120x1440 resolution, 240Hz refresh rate, 1ms response time, and both FreeSync 2 and G-Sync compatibility - and an MSRP around 1600€...
Sounds tempting - but 49" would be a problem for me, as I have 3 other 27" monitors to fit on my desk (standing vertically, I might add)...
But it might bring other 35" 240Hz 1440p displays down in price... :thumb:

Greetings,
Medizinmann


----------



## Nitethorn

Hey guys, just wanted to see if anyone here has the Zotac 2080 Ti AMP Extreme and has done a teardown on it? I'm planning to repaste the card for now and eventually put it under water to try to get it to run a little cooler. I was also thinking of rewiring the LEDs to a separate header while I was at it (the Spectra lighting on it does not work). If you've torn one down, any tips for me? The only GPU I've ever had apart was my old GTX 970 reference, and that was super easy. This thing looks a wee bit more complicated, but to be honest I haven't even tried yet.


----------



## Bart

Nitethorn said:


> Hey guys, just wanted to see if anyone here had the Zotac 2080 ti amp extreme and if you had done a tear down on it? I'm planning to repaste the card for now and eventually put it on WC to try and get it to run a little cooler. I was thinking of rewiring the LED's to a separate header while I was at it (spectra lighting on it does not work). If you've torn one down, any tips for me? Only GPU I've ever had apart was my old GTX 970 reference. That was super easy. This thing looks a wee bit more complicated but to be honest I haven't even tried yet.


I *think* mine is an amp extreme (deffo a Zotac Amp something but I can't find the box to check). It's sitting in a PC that's down right now, but I put a HeatKiller IV water block on it, nothing weird or alarming about the tear down. Just take your time obviously. It's the coolest running GPU I have ever owned, to a point where the temps are so low I thought it was reading wrong. In a cool basement, I couldn't make the card break 35C, even OCed to a good degree (+250 core +800 memory at one point). I could run Timespy Extreme stress tests for 2 hours, 35C was the top temp I saw. Mind = blown. Highly recommend Watercool Heatkillers if you go WC, block install instructions are very good, as are the included thermal pads. :thumb:


----------



## J7SC

Nitethorn said:


> Hey guys, just wanted to see if anyone here had the Zotac 2080 ti amp extreme and if you had done a tear down on it? I'm planning to repaste the card for now and eventually put it on WC to try and get it to run a little cooler. I was thinking of rewiring the LED's to a separate header while I was at it (spectra lighting on it does not work). If you've torn one down, any tips for me? Only GPU I've ever had apart was my old GTX 970 reference. That was super easy. This thing looks a wee bit more complicated but to be honest I haven't even tried yet.


 
Have a look at this (post-teardown) PCB view to get an idea of what to expect


----------



## BigFidel

From what I can gather, a TU102-300-A1 (non-A) chip cannot be flashed in any way to increase its power limit? I bought my 2080 Ti recently and had no idea about the A variant.


----------



## Imprezzion

BigFidel said:


> From what I can gather, a TU102-300-A1 (non-A) chip cannot be flashed in any way to increase its power limit? I bought my 2080 Ti recently and had no idea about the A variant.


There is a 310W BIOS for non-A chips that works, in the OP. I ran that on my Inno3D non-A card, but it still throttled, just less lol. Don't expect to run anything over ~1.043v, as higher will just throttle more. Best to keep it around 1.012-1.025v to keep throttling to a minimum. A decent card should do 2000+ MHz even at those voltages. Mine did 2040 MHz at 1.031v, sort of stable.

I did actually end up selling it and grabbing a secondhand Gainward Phoenix GS, which is an A chip, slapped a Kraken G12 + X52 on it and some copper sinks on the VRM / VRAM, flashed the HOF XOC unlimited 1.125v BIOS, and it now sits happily at 2175-2145 MHz depending on load temps, with no throttling, rock solid. **

What manufacturer are its VRAM chips? If they are Samsung, it might be worth keeping tho. VRAM at +800 MHz or more gives a very nice boost in overall performance, and if the core will do 2025 MHz or more with the 310W BIOS, I would keep it.


** Yes, I am aware XOC shouldn't be taken lightly and carries risks of degradation or even VRM damage; I just don't really care lol. If it breaks, shame.. we shall call it an expensive hobby then lol.


----------



## BigFidel

Imprezzion said:


> There is a 310w BIOS that works in the OP for the non-A chips. I ran that on my Inno3D non-A card but it still throttled. Just less lol. Don't expect to run anything over 1.043v ish as higher will just throttle more. Best to keep it around 1.012-1.025v to keep throttling to a minimum. A decent card should do 2000+Mhz even on those voltages. Mine did 2040Mhz on 1.031v sort of stable.
> 
> I did actually end up selling it again and grabbing a secondhand Gainward Phoenix GS which is a A chip, slapped a Kraken G12+X52 on it, some copper sinks for the VRM / VRAM, HOF XOC unlimited 1.125v BIOS and now sit happily at 2175-2145Mhz depending on load temps and no throttling and rock solid. **
> 
> What manufacturer VRAM chips does it have? If they are Samsung it might be worth keeping tho. VRAM at +800Mhz or more gives a very nice boost in overall performance and if the core will do 2025Mhz or more with the 310w BIOS i would keep it.
> 
> 
> ** Yes, I am aware XOC shouldn't be taken lightly and has risks involved for degradation or even VRM damage, I just don't really care lol. If it breaks, shame.. expensive hobby we shall call it then lol.


Samsung RAM.
I read briefly that apparently it also depends on the firmware of the card? Like, a newer 2080 Ti Black has firmware that won't allow the flash? Please tell me I read that all wrong.


----------



## Mooncheese

Anyone else excited for the 3080 Ti? Thoughts on the "Moore's Law Is Dead" leak? Seems pretty credible: 4x RT performance on top of a 50% uplift in rasterization, with a 220-230W TDP. Sounds almost unbelievable.


----------



## BigFidel

Mooncheese said:


> Anyone else excited for 3080 Ti? Thoughts on "Moores Law" leak? Seems pretty credible, 4x RT performance on top of 50% uplift in rasterization with 220-230W TDP, sounds unbelievable.


Honestly, given the time they've had since the last big improvement (the 1080 Ti), it isn't that unexpected. It's welcome, but not a surprise.


----------



## Imprezzion

BigFidel said:


> Samsung Ram.
> I read briefly that apparently it also depends on the firmware of the card? Like a newer 2080 ti black has a firmware that wont allow the flash? Please tell me I read that all wrong.


Don't really know if that applies to non-A chips, honestly. The easiest way to find out, I guess, is to check with nvflash64 which firmware it has, unless someone knows an easier way to check?


----------



## BigFidel

You can check with nvflash. Lol, I am such a newb at flashing this damn thing. The last time I flashed GPUs was in the X800/6800 days.
I would love to check right now.

I get this error.

Warning: image PCI subsystem ID 10DE.1E04 does not match adapter PCI subsystem ID 3842.2281


----------



## kithylin

Mooncheese said:


> Anyone else excited for 3080 Ti? Thoughts on "Moores Law" leak? Seems pretty credible, 4x RT performance on top of 50% uplift in rasterization with 220-230W TDP, sounds unbelievable.


If the rumored MSRP of the 3080 Ti FE is actually going to be $1800, then no, I'm not interested even remotely. I'm sure it will sell like wildfire anyway though.


----------



## Shawnb99

kithylin said:


> If the rumored MSRP of 3080 Ti FE is actually going to be $1800 then no.. I'm not interested even remotely. I'm sure it will sell like wildfire anyway though.



$1800? There’s no way it’ll be that much.


----------



## kithylin

Shawnb99 said:


> $1800? There’s no way it’ll be that much.


I don't see why not. The RTX 2080 Ti launched at +$300 over the 1080 Ti's launch price and was commonly found for $1200-$1500 everywhere around launch day. Another +$300 for the next card is $1800. And since Nvidia saw everyone buy the 2080 Ti out of stock at that price at launch, I have every reason to assume they will most likely add another +$300 this time around. The RTX 4080 Ti will most likely be $2000-$2200.


----------



## Shawnb99

kithylin said:


> I don't see why not. The RTX 2080 Ti launched for +$300 over the 1080 Ti's launch price, and was commonly found for $1200 - $1500 everywhere around launch day. +$300 more for the next card is $1800. And since Nvidia saw how everyone bought all of their 2080 Ti's out of stock at the old price, at launch, I have every reason to assume they will most likely increase it another +$300 this time around. The RTX 4080 Ti will most likely be $2000 - $2200.



If they do sell it at that price, then AMD will truly have them beat


----------



## philhalo66

Shawnb99 said:


> If they do sell it at that price, then AMD will truly have them beat


People paid $2,500 for RTX Titans; don't underestimate fanboys. Knowing Nvidia, it wouldn't surprise me if the RTX 3080 Ti was $2,500 and the Titan $3,500.


----------



## Medizinmann

Mooncheese said:


> Anyone else excited for 3080 Ti? Thoughts on "Moores Law" leak? Seems pretty credible, 4x RT performance on top of 50% uplift in rasterization with 220-230W TDP, sounds unbelievable.


Well - it is a monster - but the rumoured price seems monstrous too... :thinking:



kithylin said:


> If the rumored MSRP of 3080 Ti FE is actually going to be $1800 then no.. I'm not interested even remotely. I'm sure it will sell like wildfire anyway though.


Yep - at this price point - a definite NO...

I am still hoping for RDNA 2 to get prices down a little - hopefully.

But honestly, I am thinking about skipping the next generation altogether and upgrading to Nvidia's 4000 series or AMD's RDNA 3, or whatever it might be called then, in Q4 2021 or Q1 2022... as my 2080 Ti is already total overkill for my setup and my chances for gameplay are sporadic at best anyway...

Greetings,
Medizinmann


----------



## Imprezzion

BigFidel said:


> You can check with nvflash. Lol I am so newb to flashing this damn thing. Last time I flashed GPUs was in X800/6800 days.
> I would love to check right now.
> 
> I get this error.
> 
> Warning image PCI subsytem ID 10DE.1E04x does not match adapter PCI subsystem ID 3842.2281


That error is quite normal, as the BIOS is from a different vendor. There's a command for nvflash to check which firmware a card has without trying to flash anything, but God knows what that command is.. I'll check my nvflash64 command list.

EDIT: OK, it's nvflash64 --check.
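For anyone following along, a typical cross-vendor flash session with nvflash64 looks roughly like the sketch below. The file names are placeholders, flag support varies between nvflash builds, and flashing the wrong image can brick a card, so treat this as a rough outline and keep the backup:

```shell
# List detected adapters and confirm which GPU you're about to touch
nvflash64 --list

# Always back up the current vBIOS first
nvflash64 --save backup.rom

# Check the installed firmware (the command mentioned above)
nvflash64 --check

# Disable the EEPROM write protect, then flash a BIOS from another
# vendor; -6 acknowledges the PCI subsystem ID mismatch warning
nvflash64 --protectoff
nvflash64 -6 newbios.rom
```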


----------



## Shawnb99

The 2080 Ti's price came at the end of the mining craze that sent GPU prices skyrocketing, so Nvidia knew they could get away with $1000 and up for it.
Now we're in a recession with millions out of work for months; when things start to return to normal, we'll see great deals just to get people buying again.
So I highly doubt Nvidia would be dumb enough to nearly double the price of the 3080 Ti, monster or not.


----------



## philhalo66

Shawnb99 said:


> The 2080TI price came at the end of the mining craze that sent GPU prices skyrocketing so they knew they could get away with $1000 and up for the 2080TI.
> Now we’re in a recession with millions out of work for months, when things start to resume to normal we’ll end up seeing great deals just to get people buying again.
> So I highly doubt Nvidia would be dumb enough to almost double the price of a 3080TI monster or not.


I'm going to remain skeptical till I'm proven wrong. Nvidia is top dog, AMD has nothing that even comes close, and they know it.


----------



## kithylin

Shawnb99 said:


> The 2080TI price came at the end of the mining craze that sent GPU prices skyrocketing so they knew they could get away with $1000 and up for the 2080TI.
> Now we’re in a recession with millions out of work for months, when things start to resume to normal we’ll end up seeing great deals just to get people buying again.
> So I highly doubt Nvidia would be dumb enough to almost double the price of a 3080TI monster or not.


And I'll bookmark your comment here to come back in a few months and quote you with 3080 Ti prices. Not to be mean, just to remind folks "we toldcha so" when it comes out.


----------



## Shawnb99

kithylin said:


> And I'll bookmark your comment here to come back in a few months and quote you with 3080 Ti prices. Not to be mean but just to remind folks "We toldcha so." when it comes out.



Ok only if I can do the same to you when you’re wrong


----------



## kithylin

Shawnb99 said:


> Ok only if I can do the same to you when you’re wrong


I sincerely hope I'm wrong and Nvidia releases the 3080 Ti's for $1000 or even $800. I doubt it but it would be awfully nice if they did... *dream*


----------



## BigFidel

Do we have a 3080 Ti rumor thread? These last few posts seem more focused on that.


----------



## Medizinmann

sblantipodi said:


> Am I the only one who runs a 2080 Ti on an old Haswell-E CPU?
> Am I losing a lot even though I only play at 4K?
> 
> Should I upgrade my CPU?
> Has anyone upgraded from a similar CPU who can say whether there is a big performance uplift at 4K?


The most recent video about CPU bottlenecks from Gamers Nexus - in short, for 4K it shouldn't matter...





Greetings,
Medizinmann


----------



## Mooncheese

kithylin said:


> I don't see why not. The RTX 2080 Ti launched for +$300 over the 1080 Ti's launch price, and was commonly found for $1200 - $1500 everywhere around launch day. +$300 more for the next card is $1800. And since Nvidia saw how everyone bought all of their 2080 Ti's out of stock at the old price, at launch, I have every reason to assume they will most likely increase it another +$300 this time around. The RTX 4080 Ti will most likely be $2000 - $2200.


Ampere is launching in a completely different environment, one where Nvidia no longer enjoys an absence of competition.

Next-gen consoles are releasing within the same window, and their next-gen AMD APUs will be capable of 4K 60 FPS for $1000 or less. Seeing how powerful those APUs are, Nvidia is not taking any chances, hence the massive performance increase over Turing. Also, Intel is releasing GPUs next year. There is no way in hell the 80 Ti card is going to cost $1800. The "Moore's Law Is Dead" leak has GA102 at $1200, and that sounds credible.


----------



## Mooncheese

Shawnb99 said:


> The 2080TI price came at the end of the mining craze that sent GPU prices skyrocketing so they knew they could get away with $1000 and up for the 2080TI.
> Now we’re in a recession with millions out of work for months, when things start to resume to normal we’ll end up seeing great deals just to get people buying again.
> So I highly doubt Nvidia would be dumb enough to almost double the price of a 3080TI monster or not.


100% spot on; someone else is paying attention. They jacked up the price precisely because of crypto-driven mining demand, plus no real competition from AMD. They also renamed the entire product stack one GPU higher. The RTX "2070" (a TU106 SKU, no SLI) was only as fast as the outgoing 80 card, not the 80 Ti like every new 70 card going back to Kepler, yet launched at $700 - a $700 60-class card. Same goes for the "2080": unlike the 1080, which was 25% faster than the 980 Ti, or the 980, which was 25% faster than the 780 Ti, the "2080" was only as fast as the 1080 Ti, for... $900. Then we have the "2080 Ti", whose average 25% increase in rasterization performance makes it a good candidate for a renamed 80 card. But don't say this too loud here; this thread is Nvidia fanboy land.





philhalo66 said:


> Im going to remain skeptical till im proven wrong. Nvidia is top dog and AMD has nothing to even come close and they know it.


Dude, the APUs in the next-gen consoles absolutely spank. We are talking about the equivalent of a 3700X CPU AND a GTX 1070-1080 in compute terms, at 100W, on a single SoC at 7nm. What do you think this architecture is going to do with 3x the TDP? Next-gen AMD is going to be very fast, hence Nvidia being forced to step up its price-to-performance: it is no longer in a position to offer up gimped iterative products while introducing new whizz-bang proprietary features. It has killed off 3D Vision (as of 425.31, though we are still keeping it going), and G-Sync is on its way out in favor of FreeSync 2, which the entire monitor industry has adopted (moving inventory has proven challenging when you have to mark a panel up $300 because of the $300 G-Sync module; FreeSync 2 doesn't require a proprietary module, so the consumer is looking at a $400 monitor purchase instead of a $700 one). 










kithylin said:


> And I'll bookmark your comment here to come back in a few months and quote you with 3080 Ti prices. Not to be mean but just to remind folks "We toldcha so." when it comes out.


You're not going to be telling anyone that, because you fail to understand that Turing was anomalous in price-performance terms. It was released at the tail end of the crypto-mining craze with no competition, and was used as a springboard to launch new proprietary technologies that Nvidia badly needed (though RT and DLSS are really cool, to be honest) because its current pet technologies are on their way out: 3D Vision was killed as of 425.31, and G-Sync is being replaced as the industry standard by FreeSync 2, which can be done without a $300 module. Also, Intel is releasing new GPUs soon, and the next-gen consoles launch within the same window (Q4 2020). I've personally stated all of this like six times here within the past two months. If you want an echo chamber about how awesome Nvidia is, how wonderful their features are, how much they care about their consumer base, and how they will always be on top, the best, #1, then you should spend more time on the Nvidia subreddit. But if you try to deny all of the above here, you will be called out on it. This isn't an "Nvidia Fan Boy" sub-forum, even though almost everyone here has a 2080 Ti. 

A lot of us are tired of both NGreedia and Intel, it's time for some healthy competition, thank god we are finally getting that. Furthermore, unless you absolutely need Nvidia's products, we need to start actually supporting the competition. The "rename the entire product stack" scam of Turing is what we get with a monopoly. 

RTX "2070" was a $700 60 card: FACT. 

RTX "2080", if we accept the "2070" was a 60 card, and the fact that "2080" was no longer 25% faster than outgoing 80 Ti card (read: as fast as), was most likely also the actual 70 card, renamed the "2080" making for a $900 70 card at launch. 

RTX "2080 Ti", if we accept the above, was most likely a $1200 before-taxes 80 card. 

If you like these prices, keep waving the pom-poms for Jacket Man and his bull****. 

I don't, hence no pom-pom waving; I prefer to remain fact-oriented, and if that means stating information that may be unpleasant to fanboys, then so be it.

Pricing: 

Thanks to the massive economic inequality inherent in Neo-Feudalism / Fascism / Corporate Capitalism, where 70% of adults didn't even have $1000 on hand for an emergency BEFORE the fake-pandemic, the entire electronics industry is going to be reeling for years and will have to bring prices down to a level that more than the wealthy children of the .01% in this country can afford. 

Ask yourself if this makes sense: an $1800 GPU during the Greatest Depression / collapse of the current socio-economic paradigm. 

Do you understand market dynamics during a depression? 

There is a severe shortage of capital and prices tend to go down because of that. 

Nvidia now has to compete with next-gen consoles, where somewhere between $500 and $1000 gets the end consumer a 4K 60 FPS experience. Anyone doing that math is not going to opt to build a PC for the first time when 4K @ 60 FPS costs at minimum $2000 in parts. The entire Ampere product stack will not only be considerably faster than Turing, but prices will need to stay where they are, if not go down, for Nvidia to remain competitive during this new global downturn where no one has any money. Shawnb99 is again correct here.

https://finance.yahoo.com/news/survey-69-americans-less-1-171927256.html

https://www.fool.com/retirement/2016/09/25/nearly-7-in-10-americans-have-less-than-1000-in-sa.aspx

The current level of economic inequality actually exceeds that of the medieval era: 

https://www.forbes.com/sites/noahki...ottom-50-of-country-study-finds/#7498ffc93cf8

https://inequality.org/facts/income-inequality/


----------



## GosuPl

I'm seeing strange TDP behavior from the card. For example, in MSI AB or HOF AI Tuner / XtremeTuner I set max PT, core clock +150 and mem clock +1000. In 3DMark FSE / FSU, Time Spy Extreme, games etc., the GPU doesn't pull more than 280W.
G-Sync / V-Sync off.

RTX 2080 Ti HOF 10 TH OC Lab Edition.

Test bench spec:

i9-7980XE @ 4.8 GHz + 3.2 GHz mesh
X299 Rampage VI Extreme Encore
32GB DDR4 Trident Z Royal 3800 MHz CL16-16-16 T1, tRFC 340
Corsair AX1500i

The GPU is of course connected via 3x 8-pin.

Any ideas?
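One way to rule out a sensor/reporting quirk is to log the power readings and pull the peak and average out of the log, then compare against the vbios power limit. A minimal sketch, assuming a GPU-Z style CSV sensor log with a "Board Power Draw [W]" column (the column name is an assumption; match it to your log's header):

```python
import csv
from io import StringIO

def peak_power(log_text, column="Board Power Draw [W]"):
    """Return (peak, average) board power from a GPU-Z style CSV sensor log.

    The column name is an assumption; adjust it to your log's header.
    """
    reader = csv.DictReader(StringIO(log_text))
    vals = []
    for row in reader:
        # GPU-Z pads header names with spaces, so match on stripped keys.
        for key, value in row.items():
            if key is not None and key.strip() == column:
                try:
                    vals.append(float(value.strip()))
                except (ValueError, AttributeError):
                    pass  # skip blank or malformed cells
    if not vals:
        raise ValueError("column not found or empty: " + column)
    return max(vals), sum(vals) / len(vals)

# Tiny synthetic log for illustration:
sample = ("Date, Board Power Draw [W], GPU Temperature [C]\n"
          "2020-06-01 12:00:00, 275.3, 55\n"
          "2020-06-01 12:00:01, 281.0, 56\n"
          "2020-06-01 12:00:02, 268.7, 56\n")
print(peak_power(sample))
```

If the peak from the log sits hard against ~280W under every workload, that points at a power-limit cap rather than a sensor bug; if HWiNFO64 and GPU-Z disagree wildly on the same run, the reporting itself is suspect.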


----------



## sblantipodi

To all the people who say a 2080 Ti is more powerful than a PS5:
why haven't we seen anything like this on the 2080 Ti yet?

https://youtu.be/IIdn6yNdHMY

Power is nothing without control.

I'm sad to say it, but consoles prove that we are "dumb" at every new release xD
We spend 1300€ on a card that is outclassed by a console.


----------



## reflex75

sblantipodi said:


> To all people who says that a 2080Ti is more powerful than a PS5,
> why we haven't seen anything like this on 2080Ti yet?
> 
> https://youtu.be/IIdn6yNdHMY
> 
> power is nothing without control.
> 
> I'm sad to say it, but consoles proves that we are "dumb" at every new release xD
> we spend 1300€ for a card that is surclassed by a console.



1300€ 2080ti RTX is for playing Minecraft on PC:


----------



## sblantipodi

reflex75 said:


> 1300€ 2080ti RTX is for playing Minecraft on PC:


lol


----------



## kx11

sblantipodi said:


> To all people who says that a 2080Ti is more powerful than a PS5,
> why we haven't seen anything like this on 2080Ti yet?
> 
> https://youtu.be/IIdn6yNdHMY
> 
> power is nothing without control.
> 
> I'm sad to say it, but consoles proves that we are "dumb" at every new release xD
> we spend 1300€ for a card that is surclassed by a console.



28 FPS?!!

Nope, thanks.


----------



## kithylin

Mooncheese said:


> A lot of us are tired of both NGreedia and Intel, it's time for some healthy competition, thank god we are finally getting that. Furthermore, unless you absolutely need Nvidia's products, we need to start actually supporting the competition. The "rename the entire product stack" scam of Turing is what we get with a monopoly.


Since you seem keen on sharing facts with everyone, let's share some more. Right now there is literally zero competition for Nvidia. The fastest AMD card is the Radeon VII, and that only matches the GTX 1080 Ti's performance. As of this writing, there is no AMD Radeon video card that can match the RTX 2080 Ti in actual raster performance. None. There are rumors of AMD releasing some new "Nvidia Killer" card that will supposedly be faster than the 2080 Ti. Even if it is, then Ngreedia will just release the 3080 Ti and be on top again. It's been like this for the past 12 years: Nvidia has always been on top and hasn't had any direct competition at the top end for a very long time.

I might be wrong, I might be right; who knows when guessing about the future. But they could absolutely release a 3080 Ti for $1800-$2000 if they wanted to. There's nothing stopping them. Or they could release one for $1000; they could do that too.

Also, there's no reason to mention or discuss consoles here. No matter what consoles have or use, they're a custom-designed chip in a closed environment and have absolutely nothing whatsoever to do with desktop video cards or their performance. Consoles of any kind shouldn't even be mentioned in this thread; that's completely off topic.


----------



## Laithan

sblantipodi said:


> To all people who says that a 2080Ti is more powerful than a PS5,
> why we haven't seen anything like this on 2080Ti yet?
> 
> https://youtu.be/IIdn6yNdHMY
> 
> power is nothing without control.
> 
> I'm sad to say it, but consoles proves that we are "dumb" at every new release xD
> we spend 1300€ for a card that is surclassed by a console.



This is wrong in several ways. If I don't point this out, someone else will, so I might as well get it over with: that is an engine, which is *software*. Who said a 2080 Ti can't run it?


https://www.unrealengine.com/en-US/blog/a-first-look-at-unreal-engine-5

Unreal Engine 4 & 5 timeline
Unreal Engine 4.25 already supports next-generation console platforms from Sony and Microsoft, and Epic is working closely with console manufacturers and dozens of game developers and publishers using Unreal Engine 4 to build next-gen games.

Unreal Engine 5 will be available in preview in early 2021, and in full release late in 2021, supporting next-generation consoles, current-generation consoles, PC, Mac, iOS, and Android.

We’re designing for forward compatibility, so you can get started with next-gen development now in UE4 and move your projects to UE5 when ready. 

We will release Fortnite, built with UE4, on next-gen consoles at launch and, in keeping with our commitment to prove out industry-leading features through internal production, migrate the game to UE5 in mid-2021.


----------



## Medizinmann

sblantipodi said:


> To all people who says that a 2080Ti is more powerful than a PS5,
> why we haven't seen anything like this on 2080Ti yet?


Because this engine isn't out yet and that was a demo – we will have to wait and see how the real games look…



> power is nothing without control.
> 
> I'm sad to say it, but consoles proves that we are "dumb" at every new release xD
> we spend 1300€ for a card that is surclassed by a console.


First of all, you are comparing a GPU released 20 months ago to a console that isn't even out yet…
…and we don't really know if the console will deliver in games. 
…and we don't know what the engine will look like on a 2080 Ti, since this version isn't out either.

…and by the way, if you bought your 2080 Ti before the pandemic you could get it for around 900-1000€ – I paid 930€ for a Palit 2080 Ti in July 2019, and 1050€ for a KFA2 2080 Ti two months before that. If you bought it around or close to release, then as always you paid the early adopter's premium…

And a console is a console… my 2080 Ti has more uses for me than just gaming, e.g. supporting 4x 4K displays at once, usable compute power for other tasks, etc.

Greetings,
Medizinmann


----------



## rustyk

Shawnb99 said:


> The 2080TI price came at the end of the mining craze that sent GPU prices skyrocketing so they knew they could get away with $1000 and up for the 2080TI.
> Now we’re in a recession with millions out of work for months, when things start to resume to normal we’ll end up seeing great deals just to get people buying again.
> So I highly doubt Nvidia would be dumb enough to almost double the price of a 3080TI monster or not.


I haven't read the rest of the thread yet, just catching up, but I agree with you. Wouldn't like to say what they will charge, but I expect it will stay in line with current prices.
All the talk of $2K, $3K+ and 'they can charge what they want' is, bluntly, a bit stupid: if they charge too much they won't sell enough, and will therefore make less profit overall.
They price to maximise profit; it's not hard to understand.
Anyone who talks about 'fanboys' is just advertising their own twisted thinking, IMHO.


----------



## Mooncheese

Medizinmann said:


> Because this engine isn't out yet and it is a demo – we will have to wait and see how the real game will look…
> 
> 
> 
> First of all you compare a GPU released 20 months ago to a console that isn’t even out yet…
> …and we don’t really know if the console will deliver in games.
> …and we don’t know how the engine will look like on a 2080TI since this versions isn’t out either.
> 
> …and btw if you bought you 2080 TI before the pandemic you could get it around 900-1000€ - I paid 930€ for a Palit 2080TI in Juli 2019 and 1050€ for a KFA2 2080TI 2 month before…if you bought it around or close to the release – as always you paid the early adopters premium…
> 
> And a console is console is a console...my 2080TI has more uses for me than just gaming...like i.e. suporting 4x4k Displays at once, usable compute power for other tasks...etc.
> 
> Greetings,
> Medizinmann


Same, I paid $900 for my XC2 Ultra a few months ago in a local sale, no tax, with 2 years remaining on the warranty, Samsung B-die and a build date of Oct 2019. I wouldn't pay full price for this before tax ($1300+).


----------



## GosuPl

2160/16400 on air, which isn't even the limit of this GPU under these conditions.

Interestingly, even after reflashing the BIOS and playing with other BIOSes and all sorts of settings, with max TDP it still tops out at around 280W. I suspect three options:

1. A quirk from GALAX – the GPU is designed in a way that the software tools don't report the value correctly.
2. I have a bug with the power target and the card won't stretch its legs.
3. A problem with proper separation of the 12V lines from the PSU; I will try a different configuration.

The card is, of course, connected with separate 12V cables from the PSU, 3x 8-pin.

Has anyone else run into such a problem with the sensors reporting GPU power consumption in GPU-Z and HWiNFO64?


----------



## Medizinmann

Mooncheese said:


> Same, paid $900 for my XC2 Ultra a few months ago with a 2 years remaining on the warranty with Samsung B Die and a build date of Oct 2019 for $900 local sale / no tax.


The 930€ was from a well-known shop/reseller including VAT (in Germany 19%) + shipping 8,99€.
And from the end of Q3 2019 to January 2020 you could get a 2080 Ti with a decent cooler and an A-chip for around 950-1000€.

BTW: I just checked the price-development graph for my exact GPU (Palit GamingOC Pro 2080 Ti) – the price at release (21.08.2018) was 1299€; in 01/2019 it was 999€… 



> I wouldn't pay full price for this before tax ($1300+).


Well - define full price…see above…

1300€ at the beginning was the release price, and anybody buying then was paying the "early adopter's fee" – as I did when I ordered my X570 mobo and Ryzen CPU at release, paying prices slightly above MSRP, because I had a deadline to build my new PC/workstation...
I usually like to avoid the "early adopter's fee" or the "I must buy during the crisis fee"…

Right now prices have gone up considerably (same for CPUs, SSDs and mobos) because of problems in the supply chain – the cheapest 2080 Ti with a non-blower cooler is 1150-1250€ again (all prices here include VAT; no way around that besides the used market).
So let's hope supply will be better when the new GPUs arrive, and that AMD will have something with RDNA 2 that can at least compete up to Nvidia's new xx80... otherwise prices will stay pretty high for Nvidia's high-end GPUs – whether we like it or not…

We will see if the recession "helps" to bring prices down – but for the high end I doubt it, since the group of people able to shell out $1000+ for a GPU, at least $2000-3000 for a matching PC, plus a decent gaming monitor isn't the group that suffers most in this crisis. I'm always talking about the high-end stuff here – we might see prices come down for the low end and midrange, especially since we might see some real competition from AMD… but for the high end, I really doubt it.

Greetings,
Medizinmann


----------



## philhalo66

I paid full price for mine, but because of that I got the extended warranty, so now I'm covered for 10 years.


----------



## kithylin

philhalo66 said:


> I paid full price for mine. But because of that i got the extended warranty so now im covered for 10 years.


Buying something 6-10 months later is still paying "full price" for the card, and people can still get the full 10-year warranty even 2 years later most of the time, as long as the cards are still being sold new. Most companies (I know not all, but most) honor warranties from the date of purchase, not the date of manufacture. Just because someone bought the same card at a later date, as much as $400 cheaper than you, does not make it in any way different from your purchase.


----------



## TONSCHUH

Gigabyte Aorus X (Stock):


----------



## philhalo66

kithylin said:


> Buying something 6-10 months later is still paying "full price" for the video cards and people can still get the full 10 year warranty even 2 years later most of the time, as long as the cards are still being sold new. Most (I know not all, but most do) companies honor warranties from the date of purchase, not the date of manufacturer. Just because someone bought the same card at a later date for as much as -$400 cheaper than you does not mean it is in any way different from your purchase.


Most do not cover second-hand cards at all. EVGA is the only one I know of, and they only honor the original 3 years from the date of purchase. And no, you can't: you must buy the extended warranty within 30 days of purchase, and it must be bought from an authorized retailer.


----------



## TONSCHUH

I replaced my 2x Asus ROG GTX 980 Ti STRIX OC 6GB (watercooled with EK full-cover blocks, 1557/8250 MHz) with 1x Gigabyte RTX 2080 Ti AORUS EXTREME 11GB (air-cooled for now).

When I ran Fire Strike with the 980 Tis I scored ~25k, but with the 2080 Ti I barely reach ~22k, even with a further OC.

G-Sync / V-Sync are disabled. 

Am I missing something, or what is going on?

On FB, one guy got ~25k with 95 MHz less of an OC, *** ?

Any help would be greatly appreciated!

:|


----------



## philhalo66

TONSCHUH said:


> I replaced my 2x Asus ROG GTX980-ti-STRIX-OC-6GB (watercooled with EK-Full-Cover-Blocks ====>>> 1557/8250MHz) with 1x Gigabyte RTX2080-ti-AORUS-EXTREME-11GB (Air-Cooled for now).
> 
> When I ran Fire-Strike with the 980's I scored ~25k, but with the 2080-ti I barely reach ~22k, even with a further OC.
> 
> GSync / VSync are disabled.
> 
> Do I miss something or what is going on ?
> 
> On FB, one guy got ~25k with 95MHz less of an OC, *** ?
> 
> Any help would be greatly appreciated !
> 
> :|


what are your clocks and GPU load during the benchmark?


----------



## Imprezzion

Also, is that the graphics score or the total score? The CPU and its overclock matter if it's the total score.


----------



## Laithan

Also make sure your voltage and power sliders are set to MAX in MSI AB/X1


----------



## Mooncheese

Hi everyone, I just wanted to report the good news that my EVGA XC2 Ultra with a manufacture date of Oct 2019 successfully flashed to the FTW3 Ultra WB vbios. The card now has a 373W power limit (381W observed last night in HWiNFO64), the wattage-starvation-induced clock drops from 2070 MHz down to 2010-2025 MHz @ 340W are gone, and the card now holds 2085 MHz up to 48C, the hottest I've seen it get (considerably warmer ambient; this morning the core didn't exceed 43C in the Port Royal stress test, 36 minutes of constant 325-340W draw @ 68F). 

I was worried the card wouldn't flash because of its newer manufacture date (either Oct or May 2019 – I could be mistaken and it's actually from May 2019, hence the ability to flash the vbios with NVFLASH). I was also worried because I couldn't find a single instance of someone with an XC2 Ultra who had attempted the flash to the FTW3 bios: at best I feared the ICX sensors would no longer be recognized, even though the FTW3 has the exact same sensors, and at worst that I could brick the card. Others have successfully flashed the XC Ultra to FTW3, but the XC Ultra is a reference PCB – you can even flash a 2080 Ti FE to FTW3, but supposedly not the XC2 Ultra. 

Really happy with the performance, to be honest. It could also be that I recently upgraded the radiators from 420x28mm (EK SE 420) to 420x45mm (EK CE 420), and from 360x38mm (PE 360) to 360x60mm (XE 360), because the VRM / MOSFETs didn't run even 1-2C hotter with ~380W on tap (and used). Temps for the VRMs, going by 5 different sensors placed around both banks, were in the low-to-mid 40s, with one bank not exceeding 38C. Video memory, going by the sensor on each bank, doesn't exceed ~43C, with one bank at ~52C under sustained load after the water temp heats up a little. 

I got motivated to flash because my clocks were dipping from 2070 to 2010 MHz @ 340W @ 110-120 FPS with all settings on Ultra, Hairworks off, and HD Reworked alongside Phoenix Lighting last night (The Witcher 3 in 3D Vision). After successfully flashing the card, reinstalling the display driver and 3D Vision via 3D Fix Manager, and figuring out Chrome's notorious black/blank page bug (delete the GPU Cache folder at Users > Username > AppData > Local > Google > Chrome > User Data > ShaderCache if anyone hits this), I returned to the same save point in TW3: the card held 2085 MHz constant and my average FPS went from 57 to 60. We are talking about 75 MHz on the core here. 

The Time Spy score went up as well. This is while allowing the core to reach 48C, because the pumps (serial D5s via an EK Revo Serial D5 top) are only instructed to spin up when the thermocouple attached to the backplate hits 38C, and that happens at the very end of the demo that precedes the actual benchmark, by which point the core is well above 40C. Under 40C it will now do 2100 MHz @ 1.025V on the frequency curve, Port Royal stress-test stable; it dips to 2085 MHz somewhere above 40C, and I've yet to see it dip below that. So it picked up at least 15 MHz, and it has me wondering if the additional wattage has something to do with the newfound stability. Even with no wattage limitation present, the clocks would formerly dip down to 2055 MHz above 45C @ 1.019V with the factory 340W bios. Now they don't dip below 2085 MHz @ 1.025V even upwards of 48C, the highest I've seen it get. So I'm looking at a gain of at least 30 MHz just from having more wattage on hand. No temps look out of the ordinary or deviate from where they were previously, at least going by the 11 ICX sensors scattered about the card. It still doesn't exceed 43C @ 68F during a 36-minute Port Royal stress test. 

Here's the new Time Spy run. This is with the GPU hitting 48C at one point before the backplate becomes warm enough to tell the pumps to go from 40% to 100% RPM, at which point the temps drop 5-6C immediately (I've seen it drop from 46 to 40C). I have a pair of Barrow 1/4" fitting temp sensors that I intend to place in the distro plate, one right before the GPU and the other near the exit on the terminal, so I can see how hot the water is coming right out of the GPU versus after having gone through the relatively cool 170W 8700K monoblock and two 600W radiators – and for redundancy, in case one fails. This will also let the pumps come on sooner, since the water temp warms up and cools down faster than the backplate does (with the rad fans as intake and the rear 140mm fan as exhaust, cool air from the back of the case flows over the GPU backplate, and it takes at least 5 minutes of sustained 340W usage for the GPU to heat the backplate to 38C, whereas the water gets warmer much faster, especially at 40% pump speed with reduced transfer to the radiators: heat transfer to the radiators is a function of flow rate – the higher the flow, the more water is bringing heat to the radiators to be conveyed to the open air). 
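The "heat transfer is a function of flow rate" point is easy to sanity-check: at steady state the water's temperature rise across a block follows Q = ṁ·c_p·ΔT. A quick sketch, where the flow rates are assumed example figures (not measured from this loop) and the 340W load is the figure from the post above:

```python
# Water delta-T across a heat source: Q = m_dot * c_p * dT.
# Flow rates below are assumed examples (~1 L/min for a throttled-down D5,
# ~4 L/min for full speed); the 340W GPU load is from the post above.

def water_delta_t(watts, lpm):
    """Temperature rise (C) of water absorbing `watts` at `lpm` liters/min."""
    cp = 4186.0             # J/(kg*K), specific heat of water
    kg_per_s = lpm / 60.0   # ~1 L of water ~= 1 kg
    return watts / (kg_per_s * cp)

print(round(water_delta_t(340, 1.0), 2))  # low flow: ~4.9C rise across the GPU
print(round(water_delta_t(340, 4.0), 2))  # full flow: ~1.2C rise
```

So at low pump speed the water leaving the GPU runs several degrees hotter than the loop average, which is why a water-temp sensor right after the block reacts so much faster than a backplate thermocouple.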

I'm also delidding the CPU for the first time. Originally I sent it off to Silicon Lottery to have them bin and delid it, and was under the impression at the time that liquid metal is fairly robust and should outlive the processor. My temps have gone up quite a bit recently, with a really uneven spread, which is usually indicative of the liquid metal drying out on part of the die (it alloys into the IHS even when nickel-plated; it just takes longer than with copper). I tried running OCCT a few days ago, as I'm braving 5.1 GHz @ 1.386V (Turbo LLC, 1.2V VCCIO and VCCSA; Turbo is the equivalent of Very High on an Asus board in terms of LLC), and the temps on two cores went up to 92 and 85C, with the rest at around 65C. If a temp goes above 85C, OCCT terminates the bench, which is nicer than Prime95, which will let it hit 100C – or until you black-screen, get a full system shutdown, or hit the TJMAX 105C thermal safety cutoff.

It passed 17 minutes of Prime95 small FFT, but I had to terminate the test around that point because one of the cores was hitting 85C. I have yet to have any problems while benching and gaming, and it's been about 5 days now at 5.1 GHz. I'm running a dynamic offset (so volts drop to 0.800V while web browsing, etc.). I remember the last time I tried this it was not at all stable. I believe the newfound stability is because I upgraded the memory from 2x8GB @ 3200 MHz G.Skill Trident Z RGB to newer Royals (2x16GB @ 3600 MHz); I've even been able to tighten the timings with the newer memory (17-20-20-38, down from 18-22-22-42), though I did have to up the voltage to 1.45V. With the former memory I never ran a memory test – not because I didn't want to, but because I didn't have a spare USB stick (my only stick held the Windows installation media in case I had to restore from a system image or repair the OS). I decided to chance it this time and actually dialed in my timings with MemTest86. I'm sure the older memory would have failed MemTest86, since this same frequency and voltage were not possible with the older modules; they needed 1.45V at factory clocks because I was getting BSODs above 5.0 GHz with a -1 AVX offset. Upping the voltage to 1.45V allowed for 5.0 GHz with 0 AVX offset (I had to reduce the uncore from 45 to 42, and I am using slightly more VCCIO and VCCSA now, 1.2V up from 1.15V). 

Anyhow, I have on order: Rockit Cool's delid + relid tool ($24 used on Amazon), a Bitspower nickel-plated copper IHS (to rule out the concave center of the factory IHS as part of the problem), and some more Conductonaut (for the delid; I'm debating Kryonaut or Conductonaut between the IHS and monoblock). I intend to forgo relidding with RTV silicone (the mounting latch and cohesion from the liquid metal should suffice). 

Upgrades next week include the aforementioned relid of the CPU, the temp sensors, straightening up the runs from the distro block to the GPU and back with different-sized 90-degree fittings (Bykski, $4.50 from AliExpress, chrome-plated nickel; they look and feel better than EK at half the price), and swapping the TIM on the GPU core from Gelid GC Extreme to Kryonaut, as I believe I did a poor job (too much TIM, half of which was already partially dried on the spatula from the last time I used it). Given that Conductonaut was easily 5C cooler than Gelid GC Extreme is now (37C at the end of Superposition; now it's 40-41C), whereas testing from others has found only a 3C difference, I'm mostly confident Kryonaut will be 2, maybe even 3C cooler than now. 

Anyhow, here's the new Time Spy run (16,550). It's not the best, but remember this is with the core at 2085 MHz @ 1.025V and I'm still letting temps get up to 48C. I will rerun next week after my water temp sensors and Kryonaut arrive; it shouldn't exceed 41C even during the demo then. 16,600 shouldn't be out of the question with the same overclock. 

https://www.3dmark.com/spy/12032725



TONSCHUH said:


> I replaced my 2x Asus ROG GTX980-ti-STRIX-OC-6GB (watercooled with EK-Full-Cover-Blocks ====>>> 1557/8250MHz) with 1x Gigabyte RTX2080-ti-AORUS-EXTREME-11GB (Air-Cooled for now).
> 
> When I ran Fire-Strike with the 980's I scored ~25k, but with the 2080-ti I barely reach ~22k, even with a further OC.
> 
> GSync / VSync are disabled.
> 
> Do I miss something or what is going on ?
> 
> On FB, one guy got ~25k with 95MHz less of an OC, *** ?
> 
> Any help would be greatly appreciated !
> 
> :|


Did you purchase your 2080 Ti new? What are the observed clocks while running Fire Strike? Something is seriously off: a 1080 Ti does a 28k GPU score in Fire Strike 1.1 @ default clocks and 31k @ ~2000 MHz, while a 2080 Ti does 39k – and that was with older, lower clocks on the default vbios; now that I've flashed to the FTW3 vbios, all of my benches went up a few hundred points (Time Spy GPU went from 163xx to 165xx at the same clocks, just with another 40W on tap). A 1080 Ti obviously shouldn't be faster than a 2080 Ti in Fire Strike; something is off with your card. Is it stuck at 1350 MHz? Even with the default power target and clocks, a 2080 Ti does 34-35k @ ~1700 MHz average and ~280W. 

https://www.3dmark.com/fs/22228119


----------



## kithylin

philhalo66 said:


> most do not cover second hand cards at all. EVGA is the only one i know of and they only honor the original 3 year from date of purchase. And no you cant, you must buy the extended warranty within 30 days of purchase and it must be bought from an authorized retailer.


I never said anything about second hand. You made that up. Please do not make up words I did not write. Authorized dealers still sell brand new cards up to 1 year later (sometimes later even). It's entirely possible someone could buy a brand new (by then new old stock) video card from an authorized retailer, a full 1 year later for a cheaper price and still get full warranty and everything they are entitled to. Including paying for the 10 year warranty upgrade within 30 days after their purchase date. Just because you paid +$400 more because you bought it on the initial release day doesn't make your card any different than someone else buying it later.


----------



## philhalo66

kithylin said:


> I never said anything about second hand. You made that up. Please do not make up words I did not write. Authorized dealers still sell brand new cards up to 1 year later (sometimes later even). It's entirely possible someone could buy a brand new (by then new old stock) video card from an authorized retailer, a full 1 year later for a cheaper price and still get full warranty and everything they are entitled to. Including paying for the 10 year warranty upgrade within 30 days after their purchase date. Just because you paid +$400 more because you bought it on the initial release day doesn't make your card any different than someone else buying it later.


I was under the impression you were talking about used cards. I've never seen a single 2080 Ti anywhere in the US for anywhere near $1000, let alone $400 off. Did I say that my card was different? I literally just stated I got mine for full retail price; talk about putting words in people's mouths. I also didn't buy mine on release day; I got it about a month and change ago for $1600.


----------



## Warlord1981

Hello everyone!

Yesterday I replaced my Gigabyte RTX 2080 Gaming OC with a Gigabyte RTX 2080 Ti Gaming OC I had just received – same cooler and size.

You can find all card info on my screenshot. It has the latest BIOS with the increased power limits.

Previous non-Ti card had no problem with temps hitting max 70C with 120% power limit, +100 core and 60% fixed fans.

The Ti with everything stock (power limit 100%, stock clocks and stock fan curve) hits 80C. I have the NZXT S340 Elite case and have never had any issues with CPU/GPU temps. The room is air-conditioned at 26C. 

Another thing I noticed is that the fans seem to work less hard on the Ti card: 60% on the non-Ti made some noise, while 60% on the Ti is almost quiet.

Other tests running loops of Heaven Benchmark with lower than 100% power limits:

Power Limit: 95%
Temp Limit: 83C
Clocks: Stock
Fan Speed: 70% (fixed)
GPU Temp: 83C

Power Limit: 85%
Temp Limit: 80C
Clocks: Stock
Fan Speed: 70% (fixed)
GPU Temp: 78C 

Power Limit: 80%
Temp Limit: 79C
Clocks: Stock
Fan Speed: auto stock fan curve
GPU Temp: 73C 

Everything monitored and controlled via MSI Afterburner.

What do you advise? There is an issue with the card's temps, correct? Should I RMA the card? Can I change the thermal paste? Will that void my warranty?


----------



## TONSCHUH

Imprezzion said:


> And, is that graphics score or total score. CPU and overclock thereof matters if it's total score.


Old: https://www.3dmark.com/fs/13808500

New: https://www.3dmark.com/fs/22648027


----------



## TONSCHUH

philhalo66 said:


> what are your clocks and GPU load during the benchmark?


Have a look at the attached screenshot:


----------



## TONSCHUH

Laithan said:


> Also make sure your voltage and power sliders are set to MAX in MSI AB/X1


I used the OC tool from Gigabyte / Aorus and even tried the OC scan function, but it didn't change the outcome.

I also fired up Precision X1 to see whether the settings from the Gigabyte tool really get applied, and it looked like they do.


----------



## TONSCHUH

Mooncheese said:


> Did you purchase your 2080 Ti new? What are the observed clocks while running Firestrike? Something is seriously off, 1080 Ti does 28k GPU Firestrike 1.1 @ default clocks and 31k GPU @ ~2000 MHz and 2080 Ti does 39k GPU Firestrike 1.1 and this is with older lower clocks on the default vbios, now that I've flashed to FTW3 vbios all of my benches went up a few hundred points (Timespy GPU went from 163XX to 165XX same clocks just with another 40w on tap). 1080 TI obviously shouldn't be faster than 2080 Ti in Firestrike, something is off with your card. Is it stuck at 1350 MHz? Even with default power target and clocks 2080 Ti does like 34-35k @ ~1700 MHz avg and ~280W factory clocks.
> 
> https://www.3dmark.com/fs/22228119



Yeah, I bought it new.

Have a look at the clocks here:


The thing is, I get pretty much the same scores at stock clocks (1770) too.


----------



## Mooncheese

TONSCHUH said:


> Yeah, I bought it new.
> 
> Have a look at the clocks here:
> 
> 
> The thing is, that I get pretty much the same scores at stock clocks (1770) too.


Only the GPU score is relevant here. You didn't give us a link to your benchmark, so for whatever reason (probably the marijuana) I thought you were referring to the GPU score when you cited 25k or whatever. Give us the link to your Firestrike run or the GPU score. I'm seeing 2070 MHz and 311W in GPU-Z, so I don't see anything out of the ordinary here.


----------



## Mooncheese

Warlord1981 said:


> Hello everyone!
> 
> Yesterday I replaced my Gigabyte RTX 2080 Gaming OC with a Gigabyte RTX 2080 Ti Gaming OC I had just receive, same cooler and size.
> 
> You can find all card info on my screenshot. It has the latest BIOS with the increased power limits.
> 
> Previous non-Ti card had no problem with temps hitting max 70C with 120% power limit, +100 core and 60% fixed fans.
> 
> The Ti with everything stock (power limit 100%, stock clocks and stock fan curve) hits 80C. I have the NZXT S340 Elite case and never had any issues with temps of cpus/gpus. Room is air conditioned at 26C.
> 
> Also another thing I noticed is that fans are like working less on the Ti card, ie. 60% in non-Ti made some noise, while 60% on the Ti is almost quiet.
> 
> Other tests running loops of Heaven Benchmark with lower than 100% power limits:
> 
> Power Limit: 95%
> Temp Limit: 83C
> Clocks: Stock
> Fan Speed: 70% (fixed)
> GPU Temp: 83C
> 
> Power Limit: 85%
> Temp Limit: 80C
> Clocks: Stock
> Fan Speed: 70% (fixed)
> GPU Temp: 78C
> 
> Power Limit: 80%
> Temp Limit: 79C
> Clocks: Stock
> Fan Speed: auto stock fan curve
> GPU Temp: 73C
> 
> Everything monitored and controlled via MSI Afterburner.
> 
> What do you advise? There is an issue with the card's temp, correct? Should i RMA the card? Can i change the thermal paste? Will this void my warranty?


I'm seeing mixed info about this card: Optimum Tech's sample didn't exceed 65C under load, but there is another person on reddit similarly complaining about the card hitting 80C under load like yours. It could be that the TIM is due for replacement, not sure. As for the previous card only hitting 70C with presumably the same cooler: the previous card puts about 100W less into the same amount of thermal mass, so of course it's going to run cooler. If the card is brand new then changing the TIM won't help a whole lot, maybe 3C, but you could experiment both with changing the TIM and with running the fans on an aggressive profile, i.e. 100% RPM @ 65C.

If you can swing it, I highly recommend putting the card under a full water block with a single pump-res, soft tubing and a block on the CPU. You're looking at $400-500 depending on what radiator(s) you go with, but that card will then no longer exceed 50C, will do upwards of 2100 MHz core, and will be near dead silent while doing so.

But that card shouldn't exceed 65-67C as that's what it did in both Guru3D and Optimum Tech reviews: 

https://www.guru3d.com/articles-pages/gigabyte-geforce-rtx-2080-ti-gaming-oc-11g-review,8.html








Another person at 80C though: https://www.reddit.com/r/gigabyte/comments/awb2tb/gigabyte_gaming_oc_rtx_2080_ti_high_temps/

Not sure if it's bad QC at Gigabyte and they are misapplying TIM or what but that's a pretty big variance (67 to 80C).

Edit: 

Also the performance gain is quite remarkable putting the card under water. Looking further at Guru3D's Gigabyte 2080 Ti OC review, I can't believe that this card only does 13,800 GPU in Timespy (around 12,700 overall) as my XC2 Ultra, similarly reference PCB, does nearly 16,500 or nearly 3k more points here. This is around 20% faster. 

Just put the card under a water block. Whatever the loop costs will go towards your next upgrade, as your next GPU will also be under water: way faster, way quieter, with a way longer lifespan. The sooner you decide to do a loop, the more valuable it is, because all of your future upgrades will run at much lower temperatures, perform better, and last longer. If you have $500 to spare I HIGHLY recommend doing a basic loop ASAP.

https://www.guru3d.com/articles_pages/gigabyte_geforce_rtx_2080_ti_gaming_oc_11g_review,22.html

https://www.3dmark.com/spy/12032725

Not sure what settings were used for that Superposition run, but if it's 4K Optimized my card does nearly 14k there (100 FPS avg, 85 FPS Min, 129 max, 40C at end of bench). 

Not trying to brag, just trying to convince you to step up to water cooling, it's more than worth it. 

My 3080 Ti FTW3 Ultra will be nice and cool under 1400W of rad surface area some time next year, with no further investment (other than the GPU water block) needed. Next GPU after that, same story: I just need to buy a water block. It's a very good investment; I would go so far as to say that if you're a PC gaming enthusiast and you're not under a full loop, you're doing it wrong.


----------



## J7SC

TONSCHUH said:


> Yeah, I bought it new.
> 
> Have a look at the clocks here:
> 
> 
> The thing is, that I get pretty much the same scores at stock clocks (1770) too.


 
Hi - I have 2x Aorus 2080 Ti Xtr, though the factory full water-blocked ones. Still, AFAIK, they should have the same PCB. Given the screenshot you provided, I think there's an issue re. Power Limit...First, check this link out from TechPowerUp (card description, bios) and note the 366 w in the bios...

https://www.techpowerup.com/vgabios/207996/gigabyte-rtx2080ti-11264-181121-2


Comparing that to your screenshot, under 'full power limit', your card should not be 311w max but at least 366w (and at least 122%). For comparison, below are screenies of my two cards (stock bios) running Superposition 8K...ignore the clocks for now (heavily w-cooled), but check the blue-framed boxes.

Superposition 8K taxes the GPU and VRAM a lot, so perhaps you can run that... AFTER you uninstall all other GPU OC software AND load MSI Afterburner. When running Superposition, set the GPU-Z 'Power Consumption' sensor (%, watts) to record 'highest reading' and then post back, please.


----------



## rustyk

Warlord1981 said:


> Hello everyone!
> 
> Yesterday I replaced my Gigabyte RTX 2080 Gaming OC with a Gigabyte RTX 2080 Ti Gaming OC I had just receive, same cooler and size.
> 
> You can find all card info on my screenshot. It has the latest BIOS with the increased power limits.
> 
> Previous non-Ti card had no problem with temps hitting max 70C with 120% power limit, +100 core and 60% fixed fans.
> 
> The Ti with everything stock (power limit 100%, stock clocks and stock fan curve) hits 80C. I have the NZXT S340 Elite case and never had any issues with temps of cpus/gpus. Room is air conditioned at 26C.
> 
> Also another thing I noticed is that fans are like working less on the Ti card, ie. 60% in non-Ti made some noise, while 60% on the Ti is almost quiet.
> 
> Other tests running loops of Heaven Benchmark with lower than 100% power limits:
> 
> Power Limit: 95%
> Temp Limit: 83C
> Clocks: Stock
> Fan Speed: 70% (fixed)
> GPU Temp: 83C
> 
> Power Limit: 85%
> Temp Limit: 80C
> Clocks: Stock
> Fan Speed: 70% (fixed)
> GPU Temp: 78C
> 
> Power Limit: 80%
> Temp Limit: 79C
> Clocks: Stock
> Fan Speed: auto stock fan curve
> GPU Temp: 73C
> 
> Everything monitored and controlled via MSI Afterburner.
> 
> What do you advise? There is an issue with the card's temp, correct? Should i RMA the card? Can i change the thermal paste? Will this void my warranty?




Seems completely normal to me, not sure what you're expecting? As someone else said, the 2080TI is consuming more power and the heat needs to go somewhere.
Why do you think there's a problem? Just leave the fan on auto and enjoy. If you want to max out performance, then put it under water.


----------



## Warlord1981

Mooncheese said:


> I'm seeing mixed info about this card, Optimum Tech's sample didn't exceed 65C under load, but there is this other person on reddit similarly complaining of the card hitting 80C under load like you. It could be that the TIM is due for replacement, not sure. As far as previous card only hitting 70C with presumably the same cooler, yeah previous card is about 100W less going into same amount of thermal mass, of course it's going to be cooler. If the card is brand new then changing the TIM wont help a whole lot, maybe 3C, but you could experiment both with changing the TIM and running the fans on an aggressive profile, i.e. 100% RPM @ 65C.
> 
> If you can swing it I highly recommend putting the card under full water block with a single pump-res, soft tubing and a block on the CPU. Youre looking at like $400-500 depending on what radiator(s) you go with but that card will then no longer exceed 50C, will do upwards of 2100 MHz core and be near dead silent while doing so.
> 
> But that card shouldn't exceed 65-67C as that's what it did in both Guru3D and Optimum Tech reviews:
> 
> https://www.guru3d.com/articles-pages/gigabyte-geforce-rtx-2080-ti-gaming-oc-11g-review,8.html
> 
> 
> https://youtu.be/GuZUwFINj7Y
> 
> 
> Another person at 80C though: https://www.reddit.com/r/gigabyte/comments/awb2tb/gigabyte_gaming_oc_rtx_2080_ti_high_temps/
> 
> Not sure if it's bad QC at Gigabyte and they are misapplying TIM or what but that's a pretty big variance (67 to 80C).
> 
> Edit:
> 
> Also the performance gain is quite remarkable putting the card under water. Looking further at Guru3D's Gigabyte 2080 Ti OC review, I can't believe that this card only does 13,800 GPU in Timespy (around 12,700 overall) as my XC2 Ultra, similarly reference PCB, does nearly 16,500 or nearly 3k more points here. This is around 20% faster.
> 
> Just put the card under a water block. Whatever the loop costs will go towards your next upgrade, as your next GPU will also be under water, be way faster, way quieter, have a way longer lifespan. The sooner you decide to do a loop the more you valuable it is because all of your future upgrades will be at much lower temperatures, perform better, and last longer. If you have $500 to spare I HIGHLY recommend doing a basic loop ASAP.
> 
> https://www.guru3d.com/articles_pages/gigabyte_geforce_rtx_2080_ti_gaming_oc_11g_review,22.html
> 
> https://www.3dmark.com/spy/12032725
> 
> Not sure what settings were used for that Superposition run, but if it's 4K Optimized my card does nearly 14k there (100 FPS avg, 85 FPS Min, 129 max, 40C at end of bench).
> 
> Not trying to brag, just trying to convince you to step up to water cooling, it's more than worth it.
> 
> My 3080 Ti FTW3 Ultra will be nice and cool under 1400W of rad surface area some time next year, no further investment (other than the GPU water block) needed. Next GPU after that, same story, I just need to buy a water block. It's a very good investment, I would go so far as to say that if youre a PC gaming enthusiast and youre not under a full loop youre doing it wrong.


I'm thinking maybe Optimum & Guru3D had the first BIOS with the 260-290W limit, while mine has 300-366W, hence the higher temps? Also, their ambient temp was 20C... Here in Cyprus it's 40C outside today.

Water works wonders, I agree. I think I am a PC enthusiast in a way, but I have never done a water loop and I hesitate. I also think my case (NZXT S340 Elite) and my PSU (Corsair RM750x) limit me for a water loop.

Another important question: Will disassembling the card to install the waterblock void the warranty?


----------



## Warlord1981

rustyk said:


> Seems completely normal to me, not sure what you're expecting? As someone else said, the 2080TI is consuming more power and the heat needs to go somewhere.
> Why do you think there's a problem? Just leave the fan on auto and enjoy. If you want to max out performance, then put it under water.


I agree, but running at 80C for the first 30 minutes, when the limit is 84C, is like playing with fire, no?

Today I made an aggressive fan curve (at the cost of noise) and I am able to see ~75C.

Also, with two identical coolers, why is the Ti much quieter (and presumably performing worse)? Noise-wise, 80% fans on the Ti are like 60% on the non-Ti.


----------



## Mooncheese

J7SC said:


> Hi - I have 2x Aorus 2080 Ti Xtr, though the factory full water-blocked ones. Still, AFAIK, they should have the same PCB. Given the screenshot you provided, I think there's an issue re. Power Limit...First, check this link out from TechPowerUp (card description, bios) and note the 366 w in the bios...
> 
> https://www.techpowerup.com/vgabios/207996/gigabyte-rtx2080ti-11264-181121-2
> 
> 
> Comparing that to your screenshot, under 'full power limit', your card should not be 311w max but at least 366w (and at least 122%). For comparison, below are screenies of my two cards (stock bios) running Superposition 8K...ignore the clocks for now (heavily w-cooled), but check the blue-framed boxes.
> 
> Superposition 8K taxes the GPU and VRAM a lot, so perhaps you can run that...AFTER you uninstall all other GPU oc software AND load MSI Afterburner. When running Superpositon, set the GPUz 'Power Consumption (%, watts) to record 'highest reading' and then post back please.


Gaming OC is reference PCB, per Page 1 of this thread, but I could tell that just by looking at it as I've had my 2080 Ti apart multiple times now: https://www.overclock.net/forum/69-nvidia/1706276-official-nvidia-rtx-2080-ti-owner-s-club.html



rustyk said:


> Seems completely normal to me, not sure what you're expecting? As someone else said, the 2080TI is consuming more power and the heat needs to go somewhere.
> Why do you think there's a problem? Just leave the fan on auto and enjoy. If you want to max out performance, then put it under water.


Reviewers were getting 65-67C, 80C indicates either considerably higher ambient or a bad TIM job from the factory. 



Warlord1981 said:


> I'm thinking maybe Optimum & Guru had the first BIOS with the 260-290Watt. Mine has 300-366Watt, hence the higher temps? Also, their ambient temp was 20C.. Here in Cyprus today is 40C outside
> 
> Water makes wonders I agree. I think I am a PC enthusiast in a way, but i have never done a water loop and i hesitate. Also i think i am limited for a water loop by my case NZXT S340 Elite and my psu Corsair RM750x.
> 
> Another important question: Will disassembling the card to install the waterblock void the warranty?


It's possible. Guru3D has this card at 269W power consumption, and the overclocked performance isn't anything to write home about (1950 MHz core, which he states ultimately wasn't stable and needed to be dialed back, and only a 300 point increase in Timespy, so maybe 14,xxx Timespy GPU). Thing is, the Gaming OC is an A card that isn't limited to 280W; if both Optimum Tech and Guru3D turned up the PT (Guru3D did turn it up to 111%) then that's around 320W. If you're at 366W now, the additional ~46W will hike up temps a few degrees, but 80C is very high. Even at 366W this card shouldn't be more than a few degrees warmer than 67C, maybe 72-75C tops.



Warlord1981 said:


> I agree, but playing with 80C at the first 30mins when limit is 84C is like playing with fire?
> 
> Today i made an agressive fan curve (with the cost of noise) and I am able to see ~75C
> 
> Also, two same coolers, why on the Ti is much quieter (and less performance presumably) ? Noise-wise 80% fans on the Ti are like 60% on the non-Ti.


You say it's 40C outside (104F); that's extremely hot. How warm is the actual ambient where your PC is running? If it's 80-85F ambient, then 80C is totally par for the course with a 2080 Ti on air, no matter the cooler.

Optimum Tech, Guru3D and most temps thrown around here are with ambient down in the low 70s / high 60s (F), i.e. "room temp".

One reason the 2080 Ti sounds quieter may be that ambient has gone up, so the fans are pushing less air and therefore producing less noise. Hotter air means lower air density, which means less lift / airflow from a given airfoil. I was in Army aviation for 9 years and studied to be a pilot; we would cancel shooting ranges if it was too hot without any crosswind, because rotor blades have to work considerably harder to keep the aircraft aloft in lower air density. Hot and humid with no crosswind makes flying a helicopter difficult.

The same principle applies to any airfoil. Your 2080 Ti fans might sound quieter than the 2080 fans because you may have last heard the 2080 fans months ago, when ambient was lower, so they were pushing more air (greater air density) and were therefore louder. Simple experiment anyone can do: roll down your car windows during the winter and the amount of air entering the car is insane; roll them down during the summer and it's like a gentle breeze, considerably quieter, with far less wind buffeting. That's because lower air density means fewer air molecules entering your car: the hotter the medium, the fewer molecules can be packed into a given space.
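The air-density argument above can be put in rough numbers with the ideal gas law. This is just an illustrative sketch (dry air, sea-level pressure assumed), not a measurement of any particular card:

```python
# Rough sketch of the air-density effect described above, using the
# ideal gas law: rho = P / (R_specific * T). Dry air at sea-level
# pressure is assumed; the temperatures are illustrative.

P = 101_325.0     # ambient pressure, Pa
R_AIR = 287.05    # specific gas constant for dry air, J/(kg*K)

def air_density(temp_c: float) -> float:
    """Dry-air density in kg/m^3 at sea-level pressure."""
    return P / (R_AIR * (temp_c + 273.15))

rho_cool = air_density(20.0)   # a reviewer's ~20C room
rho_warm = air_density(32.0)   # a hot summer room

# At a fixed RPM, fan mass flow scales roughly with density, so the
# same fan curve moves a few percent less air (and heat) when it's hot.
print(f"20C: {rho_cool:.3f} kg/m^3")
print(f"32C: {rho_warm:.3f} kg/m^3")
print(f"mass flow change: {100 * (rho_warm / rho_cool - 1):.1f}%")
```

Worth noting the effect is only a few percent of airflow; the bigger contributor to an 80C reading in a hot room is simply the higher ambient baseline the cooler starts from.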


----------



## hadesx82

I recently purchased the EVGA version of this card for my new build (powering two 27" Razer Raptor monitors). My build is a Gigabyte Z390 ultra with 32gb X 2 of Corsair renegade ram and this graphics card. Overall I'm very happy with the card's performance but I have two issues that I'm dealing with:

1. When using any internet browser, the brightness levels are shifting ever so slightly from dark to light. Not sure if this is a setting on the card or on the monitors.
2. When running most 3d applications (including Call of Duty Modern Warfare, Heaven 3d test, etc.), If I press certain keys, it causes both monitors to go black for a few seconds. It's almost like they are trying to switch input signals.
In COD, this only happens if I press the tab or ESC key.

Any suggestions for the above problems? 

Thanks!


----------



## philhalo66

hadesx82 said:


> I recently purchased the EVGA version of this card for my new build (powering two 27" Razer Raptor monitors). My build is a Gigabyte Z390 ultra with 32gb X 2 of Corsair renegade ram and this graphics card. Overall I'm very happy with the card's performance but I have two issues that I'm dealing with:
> 
> 1. When using any internet browser, the brightness levels are shifting ever so slightly from dark to light. Not sure if this is a setting on the card or on the monitors.
> 2. When running most 3d applications (including Call of Duty Modern Warfare, Heaven 3d test, etc.), If I press certain keys, it causes both monitors to go black for a few seconds. It's almost like they are trying to switch input signals.
> In COD, this only happens if I press the tab or ESC key.
> 
> Any suggestions for the above problems?
> 
> Thanks!


That sounds like FreeSync; my MSI monitor does that unless I disable it on the monitor. If I were you I would go into the NVIDIA Control Panel and turn off G-Sync, and make sure FreeSync is turned off on your monitor too.


----------



## Warlord1981

Mooncheese said:


> Gaming OC is reference PCB, per Page 1 of this thread, but I could tell that just by looking at it as I've had my 2080 Ti apart multiple times now: https://www.overclock.net/forum/69-nvidia/1706276-official-nvidia-rtx-2080-ti-owner-s-club.html
> 
> 
> 
> Reviewers were getting 65-67C, 80C indicates either considerably higher ambient or a bad TIM job from the factory.
> 
> 
> 
> It's possible, Guru3D has this card at 269W power consumption and the overclocked performance isn't anything to write home about (1950 MHz core, which he states wasn't stable ultimately and needed to be dialed back, and only 300 point increase in Timespy, so maybe 14,xxx Timespy GPU). Thing is, Gaming OC is an A card that isn't limited to 280W, if both Optimum Tech and Guru3D turned up PT (Guru3D did turn up PT to 111%) then that's 320W. If youre at 366W now, then the additional 46w will hike up temps a few degrees, but 80C is very high. Even at 366W this card shouldn't be more than a few degrees warmer than 67C. Maybe 72-75C tops.
> 
> 
> 
> You say it's 40C outside (104F) that's extremely hot, how warm is actual ambient where your PC is running? If it's 80-85F ambient then 80C is totally par for the course with 2080 Ti on air, no matter the cooler.
> 
> Optimum Tech, Guru3D and most temps thrown around here is with ambient down in the low 70's high 60's "room temp".
> 
> A reason 2080 Ti sounds quieter may be because ambient has gone up and less air is being pushed by the fans and therefore, less noise is being produced. Hotter air = less air density = less lift / airflow with a given airfoil. I was in aviation in the Army for 9 years, and I studied to be a pilot, we would cancel shooting ranges if it was too hot without any cross wind because rotor blades have to work considerably harder to keep the aircraft aloft due to lower air density. Hot, humid with no cross wind makes flying a helicopter difficult. Same principle applies to any airfoil, your 2080 Ti fans might sound quieter than the 2080 fans because you may have last heard your 2080 fans months ago when ambient was lower and they were pushing more air due to greater air density and were therefore louder. Simple experiment that everyone can do, roll down your windows during the winter and the amount of air entering the car is insane. Roll down your windows during the summer and it's like a gentle breeze and considerably quieter with far less wind buffeting going on. This is because there is less air density and hence less actual constituent air molecules entering your car. Hotter the medium = less molecules can be packed into a given space.


Last time I heard my non-Ti was yesterday, and for sure it's louder.

80C is what I'm seeing with a 120% power limit (366W) and the stock fan curve.
With a 100% power limit (300W) and again the stock fan curve I am seeing 77C.
With a 100% power limit and my custom aggressive fan curve I am seeing 74C.

All these at full load without any overclock.

Room temp I am not sure of; however, I had my air conditioning at 26C all day, so for sure not hot like outside, but not as low as the 20C the reviewers had either.

Also, I have 3 monitors; of course only one is used for gaming (1440p 144Hz G-Sync). Because of the multi-monitor idle clocks (a common issue with NVIDIA cards), at idle I am around 51C, which triggers the fans, so I use NVIDIA Inspector's Multi Display Power Saver mode to drop clocks on the card and keep temps around 49C.

Will replacing my TIM void warranty with Gigabyte? I have some MX4 laying around.


----------



## Mooncheese

Warlord1981 said:


> Last time I heard my non-Ti was yesterday and for sure it's louder
> 
> 80C I am seeing with 120% powerlimit (366Watt) and stock fan curve.
> With 100% powerlimit (300Watt) and again stock fan curve I am seeing 77C.
> With 100% powerlimit and my custom agressive fan curve I am seeing 74C.
> 
> All these at full load without any overclock.
> 
> Temp in room I am not sure, however I had my aircondition at 26C all day so for sure not hot like outside but cool enough. Not as low as 20C however either, like the reviewers.
> 
> Also I have 3 monitors, of course only one used at gaming 1440p 144Hz GSync. Because of multi monitors (common isue with nvidia cards) at idle I have around 51C which triggers the fans, so I use Nvidia Inspector for Multi Display Power Saver mode to drop clocks on the card, and keep temps around 49C.
> 
> Will replacing my TIM void warranty with Gigabyte? I have some MX4 laying around.


Replacing the TIM doesn't void the warranty of any card; we are allowed to perform maintenance on them. You could even put it under a water block and they would be none the wiser / it wouldn't violate the terms of the warranty. I doubt it's the TIM if your ambient is 26C (79F), though. 80C sounds about right if it's 80F where the PC is. You can help reduce ambient by turning off the additional monitors, as they produce needless heat.

Alternatively, I recommend playing around with the voltage/frequency curve in MSI AB (Ctrl+F to pull up the curve editor). Try, say, 1950 MHz @ 1.0V to start, and turn the PT down to 300W. Undervolting means the card can hold a higher frequency with less wattage. Running the PT @ 366W with 80F ambient = 80C load. You don't need that much wattage, as your card is probably not even doing 2000 MHz @ 80C. My card pulls 360W @ 2085 MHz sustained with +1025 on the memory @ 45C, and I can do 2000 MHz sustained with 310-320W. You don't need the full 366W because you're probably not even at 2000 MHz, having dropped so many bins at 80C. Just turn the PT down to whatever 300W is and try to find a stable frequency on the curve @ 1.0V or even 0.950V. Throwing more current and voltage at the card with 80F ambient isn't going to be a winning strategy; right now you want to undervolt the card to bring temps and current draw per MHz down.

You should easily be able to drop 10C at 1900 MHz @ 0.950V @ 280W. Performance will be similar, as you're thermal throttling at 80C anyway. You're just going to extend the life of the card, reduce fan noise, reduce power consumption, reduce ambient (because the card produces less heat) and get comparable performance.
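The undervolting advice above can be sanity-checked with the usual first-order dynamic power approximation, P ∝ V² × f. The voltage and clock numbers below are the examples floated in the post (with ~1.043V assumed as a typical Turing stock peak voltage), not measured values:

```python
# First-order estimate of the power saved by the undervolt suggested
# above, using the dynamic power approximation P ~ V^2 * f (effective
# capacitance assumed constant). Values are the post's example numbers;
# the 1.043V stock voltage is an assumption, not a measurement.

def relative_power(v_new, f_new, v_old, f_old):
    """Ratio of new dynamic power to old, assuming P ~ V^2 * f."""
    return (v_new / v_old) ** 2 * (f_new / f_old)

# Proposed undervolt (0.950V @ 1900 MHz) vs. a stock-ish point:
ratio = relative_power(v_new=0.950, f_new=1900, v_old=1.043, f_old=1950)

watts_old = 366
watts_new = watts_old * ratio
print(f"estimated power ratio: {ratio:.2f}")      # roughly 0.8x
print(f"~{watts_old}W -> ~{watts_new:.0f}W")
```

That ~20% reduction lands in the same ballpark as the 366W-to-280W drop claimed in the post, which is why a small voltage cut buys so much: power falls with the square of voltage but only linearly with clock.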


----------



## Krzych04650

Warlord1981 said:


> Also I have 3 monitors, of course only one used at gaming 1440p 144Hz GSync. Because of multi monitors (common isue with nvidia cards) at idle I have around 51C which triggers the fans, so I use Nvidia Inspector for Multi Display Power Saver mode to drop clocks on the card, and keep temps around 49C.


The issue with this idle power draw is that the card is in 3D clock at all times as far as I am aware, I get it too, 64W idle power consumption with single ultrawide monitor.
You can fix it though. You can select any point on VF curve in MSI Afterburner (CTRL+F) and press L to force the card to clock to this exact point. Vertical yellow line will appear when you do this. If you save the profile while this is applied it will remember, so you don't need to reapply this every time. So lower the curve and then force card into lowest point, it should automatically drop memory clock when you do this as well. Save the profile and then you can go to settings in Afterburner and Profiles tab, where you can choose default profile for 3D clock. If you have Afterburner set up to launch on Windows start then it will automatically apply this profile every time, until you override it manually by applying any other profile of course. For me it goes to 825 MHz core and 810 memory and power is dropped from 64 to 26W. 
I am not sure if there can be any issues with functionality at such low clocks though, one would assume that the card has a good reason to be in 3D clocks, but then you look at it sitting at 3D clock on idle desktop with absolutely nothing going on, drawing 65W idle. 
This isn't some Turing specific thing though, I remember my two GTX 1080s drawing 50W idle each as well.
This can also be due to the power mode in NVIDIA Control Panel; I've read somewhere that if you choose Maximum Performance it never goes to 2D clocks. But if you choose any other profile then you get throttling in games with unusually low power draw, like Origins or Odyssey. Not sure if fixing the VF curve by pressing L in Afterburner overrides that or not.
But if your issue is multi-monitor specific then forcing low clock is probably the only option, unless there is some actual reason for the card to run 3D clocks in multi-monitor mode.
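For anyone who would rather not keep Afterburner running just for this, recent NVIDIA drivers expose clock locking directly through nvidia-smi. A sketch, assuming a reasonably new driver and admin/root rights; the clock and wattage values are examples, not tuned numbers:

```shell
# Alternative to the Afterburner "L" trick: lock the GPU core clock
# range with nvidia-smi (needs admin/root and a recent driver).

# Pin core clocks to a low range for desktop/idle use (MHz min,max):
nvidia-smi -lgc 300,825

# Optionally cap the board power limit as well (watts):
nvidia-smi -pl 300

# Restore default clock behaviour before gaming:
nvidia-smi -rgc
```

Same caveat as with the Afterburner method applies: a forced low clock can make the desktop sluggish in GPU-accelerated apps, so reset it before anything 3D.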


----------



## J7SC

Mooncheese said:


> Gaming OC is reference PCB, per Page 1 of this thread, but I could tell that just by looking at it as I've had my 2080 Ti apart multiple times now: https://www.overclock.net/forum/69-nvidia/1706276-official-nvidia-rtx-2080-ti-owner-s-club.html
> 
> (...)


Excuse me? :headscratch:
My response was to TONSCHUH, who had stated here that he has the Gigabyte RTX 2080 Ti *AORUS EXTREME* 11GB. That's the custom PCB (up to 366W on stock bios), also per the OP in this thread.


----------



## hadesx82

philhalo66 said:


> that sounds like freesync, my msi monitor does that unless i disable it on the monitor. If i were you i would go into the nvidia control panel and turn off Gsync and make sure freesync is turned off in your monitor too.


Thanks. I disabled FreeSync but the same issues are still there. In the Heaven 3D engine, keyboard inputs don't trigger it per se; it's any movement of the mouse.


----------



## TONSCHUH

Mooncheese said:


> Only the GPU score is relevant here, you didn't give us a link to your benchmark, for whatever reason (probably the marijuana) I thought that you were referring to the GPU score when you cited 25k or whatever. Give us the link to your Firestrike run or the GPU score, I'm seeing 2070 MHz and 311w in GPU-Z, I don't see anything out of the ordinary here.


I replied to the wrong person with it:

Old: https://www.3dmark.com/fs/13808500 ====>>> Graphics Score: 40979

New: https://www.3dmark.com/fs/22648027 ====>>> Graphics Score: 35613


Model-No: GV-N208TAORUS X-11GC (Rev. v1.0)


----------



## TONSCHUH

J7SC said:


> Hi - I have 2x Aorus 2080 Ti Xtr, though the factory full water-blocked ones. Still, AFAIK, they should have the same PCB. Given the screenshot you provided, I think there's an issue re. Power Limit...First, check this link out from TechPowerUp (card description, bios) and note the 366 w in the bios...
> 
> https://www.techpowerup.com/vgabios/207996/gigabyte-rtx2080ti-11264-181121-2
> 
> 
> Comparing that to your screenshot, under 'full power limit', your card should not be 311w max but at least 366w (and at least 122%). For comparison, below are screenies of my two cards (stock bios) running Superposition 8K...ignore the clocks for now (heavily w-cooled), but check the blue-framed boxes.
> 
> Superposition 8K taxes the GPU and VRAM a lot, so perhaps you can run that...AFTER you uninstall all other GPU oc software AND load MSI Afterburner. When running Superpositon, set the GPUz 'Power Consumption (%, watts) to record 'highest reading' and then post back please.


Thanks a lot for your help!!!

I will try this later on today.
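Side note for anyone checking their own card: the power-limit percentage in GPU-Z / Afterburner is just a multiplier on the board's base power target, which is how the 122% and 366W figures above fit together. A quick sketch (the 300W base target is inferred from those two numbers, not read from the BIOS):

```python
def power_limit_watts(base_w, percent):
    """Convert a power-limit slider percentage into a wattage cap."""
    return base_w * percent / 100

# Aorus Xtreme stock BIOS: 300 W base target, 122% max slider -> 366 W cap
print(power_limit_watts(300, 122))  # 366.0
```

So if GPU-Z reports 311W as the maximum, the card is being held to roughly a 104% limit on a 300W base, well short of what the stock BIOS allows.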


----------



## Mooncheese

J7SC said:


> Mooncheese said:
> 
> 
> 
> Gaming OC is reference PCB, per Page 1 of this thread, but I could tell that just by looking at it as I've had my 2080 Ti apart multiple times now: https://www.overclock.net/forum/69-nvidia/1706276-official-nvidia-rtx-2080-ti-owner-s-club.html
> 
> (...)
> 
> 
> 
> Excuse me?
> My response was to TONSCHUH, who had stated here that he has the Gigabyte RTX 2080 Ti *AORUS EXTREME* 11GB. That's the custom PCB (up to 366W on the stock BIOS), also per the OP in this thread.

I apologize; I had mixed up the person you were replying to with Warlord, who is having temp problems with the Gigabyte Gaming OC.


----------



## J7SC

Mooncheese said:


> I apologize; I had mixed up the person you were replying to with Warlord, who is having temp problems with the Gigabyte Gaming OC.


 
No problem 



TONSCHUH said:


> Thanks a lot for Your help !!!
> 
> I will try this later-on today.


 
...yeah, the score is unimportant for now (or for that matter running 4K instead of 8K Superposition, though 8K is obviously more stressful). Try a fresh boot into windows 10, let it settle down for a minute and then set your MSI AB to your best clocks...also leave voltage alone and set max Power Limit in MSI AB (should be around 122% for the Aorus 2080 Ti Xtr). Of course, have GPUz open on 'sensors' and show 'highest reading' enabled there for Power (% and watts), and perhaps also clocks and temp. 

It may very well be that the card will heat up enough that clocks and PL budget drop quickly, but with sensors on 'highest reading', you can quickly tell if it hit the top BIOS setting early on. BTW, once you water-cool the GPU ('generously', per your earlier post), your clocks can go up by as much as 60+ MHz, and the max PL can be sustained longer.

Finally, do you have the 'advanced version' of 3DM (i.e. with Port Royal and the DLSS feature test enabled)? The DLSS feature test is basically 2x Port Royal back to back, and a nice, long stress test. Below is a screenie of the 2x Aorus Xtr w-cooled, with the temp/time graph from the 3DM tab I ran earlier today; note the controlled temps below 30C for both cards.


----------



## Warlord1981

Mooncheese said:


> Replacing the TIM doesn't void the warranty of any card; we're allowed to perform maintenance on them. You could even put it under a water block and they'd be none the wiser / it wouldn't violate the terms of the warranty. I doubt it's the TIM if your ambient is 26C (79F), though. 80C sounds about right if it's 80F where the PC is. You can help reduce ambient by turning off the additional monitors, as they produce needless heat.
> 
> Alternatively, I recommend playing around with the volt/freq curve in MSI AB (Ctrl+F to pull up the graph). Try for, say, 1950 MHz @ 1.0V to start, and turn the PT down to 300W. Undervolting means the card can get away with a higher freq at less wattage. Running the PT @ 366W with 80F ambient = 80C load. You don't need that much wattage, as your card is probably not even doing 2000 MHz @ 80C. My card pulls 360W @ 2085 MHz sustained with +1025 on the memory @ 45C, and I can do 2000 MHz sustained with 310-320W. You don't need the full 366W because you're probably not even at 2000 MHz, having dropped so many bins at 80C. Just turn the PT down to whatever 300W is and try getting a stable freq on the curve @ 1.0V or even 0.950V. Throwing more current and voltage at the card with 80F ambient isn't going to be a winning strategy; right now you want to undervolt the card to bring the temps and the current draw per MHz down.
> 
> You should easily be able to drop 10C at 1900 MHz @ 0.950V @ 280W. Performance will be similar, as you're thermal throttling at 80C. You're just going to extend the life of the card, reduce fan noise, reduce power consumption, reduce ambient (because the card produces less heat), and have comparable performance.


Thank you for this! 

Here are my results with Heaven Benchmark 1 run:
(Btw my room ambient must be greater than 26C; the only room thermometer I found reads up to 26C and it maxes out immediately.)

Everything stock = 110 fps @ 80C by the end of the 1st run.
1900 MHz @ 0.950V w/ stock fan curve = 109.5 fps @ 77C at the end of the 1st run. The actual clock starts at 1890 and drops to 1845 by the end of the run.
1900 MHz @ 0.900V w/ stock fan curve = 109.6 fps @ 74C at the end of the 1st run. The actual clock starts at 1890 and drops to 1860 by the end of the run.
1900 MHz @ 0.875V w/ stock fan curve = 110 fps @ 72C at the end of the 1st run. The actual clock starts at 1890 and drops to 1860 by the end of the run.
1900 MHz @ 0.850V w/ stock fan curve = 110.8 fps @ 71C at the end of the 1st run. The actual clock starts at 1890 and drops to 1875 by the end of the run.

1900 MHz @ 0.825V crashed the test. So is this the floor of my card's voltage at full load?

So I took the best result and applied a custom fan curve:

1900 MHz @ 0.850V w/ custom fan curve = 110.2 fps @ 67C at the end of the 1st run. The actual clock starts at 1890 and drops to 1875 by the end of the run.

So is 0.850v safe?

PS. Power limit is at 100% on all tests.
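These numbers line up with the usual first-order rule that dynamic power scales roughly with V²·f, which is why a modest voltage drop buys so much temperature headroom at nearly the same clock. A rough sketch (the ~1950 MHz @ 1.043V stock boost baseline is an assumption for illustration, not a measurement from this card):

```python
# Rough relative dynamic power estimate: P ~ V^2 * f.
# Baseline operating point is illustrative, not measured.
def relative_power(volts, mhz, base_volts=1.043, base_mhz=1950):
    """Power draw relative to the assumed stock operating point."""
    return (volts / base_volts) ** 2 * (mhz / base_mhz)

# Undervolt result from above: 1900 MHz @ 0.850 V
saving = 1 - relative_power(0.850, 1900)
print(f"~{saving:.0%} less dynamic power")  # roughly a third less
```

Real cards add static leakage (which also falls with voltage and temperature), so treat this as a ballpark, not a prediction.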


----------



## TONSCHUH

J7SC said:


> Hi - I have 2x Aorus 2080 Ti Xtr, though the factory full water-blocked ones. Still, AFAIK, they should have the same PCB. Given the screenshot you provided, I think there's an issue re. Power Limit...First, check this link out from TechPowerUp (card description, bios) and note the 366 w in the bios...
> 
> https://www.techpowerup.com/vgabios/207996/gigabyte-rtx2080ti-11264-181121-2
> 
> 
> Comparing that to your screenshot, under 'full power limit', your card should not be 311w max but at least 366w (and at least 122%). For comparison, below are screenies of my two cards (stock bios) running Superposition 8K...ignore the clocks for now (heavily w-cooled), but check the blue-framed boxes.
> 
> Superposition 8K taxes the GPU and VRAM a lot, so perhaps you can run that...AFTER you uninstall all other GPU OC software AND load MSI Afterburner. When running Superposition, set the GPU-Z 'Power Consumption' (%, watts) sensors to record 'highest reading' and then post back please.


Ok, I tried it with Afterburner and it looks like the Gigabyte Aorus software was the problem; it's crap, unfortunately.

Thanks a lot for your (and the others') help!!!

I read on the web that Afterburner wouldn't work and that I should use Precision instead, which was obviously not right.

Originally I wanted to get the "WB" model, but it was hard to come by; I might have a look whether there is a nice EK block for it.


----------



## TONSCHUH

J7SC said:


> No problem
> 
> 
> 
> 
> ...yeah, the score is unimportant for now (or for that matter running 4K instead of 8K Superposition, though 8K is obviously more stressful). Try a fresh boot into windows 10, let it settle down for a minute and then set your MSI AB to your best clocks...also leave voltage alone and set max Power Limit in MSI AB (should be around 122% for the Aorus 2080 Ti Xtr). Of course, have GPUz open on 'sensors' and show 'highest reading' enabled there for Power (% and watts), and perhaps also clocks and temp.
> 
> It may very well be that the card will heat up enough that clocks and PL budget drop quickly, but with sensors on 'highest reading', you can quickly tell if it hit the top BIOS setting early on. BTW, once you water-cool the GPU ('generously', per your earlier post), your clocks can go up by as much as 60+ MHz, and the max PL can be sustained longer.
> 
> Finally, do you have the 'advanced version' of 3DM (ie with Port Royal and DLSS feature test enabled)? The DLSS feature test is basically 2x Port Royal back to back - and a nice, long stress test. Below is a screenie of 2x Aorus Xtr w-cooled, with temp / time graph from the 3DM tab I ran earlier today, noting the controlled temps at below 30 C for both cards


Wow, nice temps!!!

I have a custom loop with 1x 240x60mm + 2x 480x60mm rads + 2x D5 pumps + 1x monoblock for my Maximus IX Apex, and got around 38-40C with everything in a single loop (back when it was the 2x 980 Ti setup) and clocks maxed out.

If there is a nice block for the 2080 Ti and I have some spare cash again, I might get one (possibly at tax time).

Will have a look tomorrow at how far I can push it on air, and will keep you posted.


----------



## Bart

TONSCHUH said:


> If there is a nice block for the 2080 Ti and I have some spare cash again, I might get one (possibly at tax time).


One word my friend: Heatkiller. I have a Heatkiller IV GPU block from Watercool, and that thing flat out refuses to go higher than 35C, even when running OCed (+200-250 core +800 mem). Now that's in a cool basement with rad overkill (like most guys in here, LOL), but still. Heatkiller FTW!!


----------



## Imprezzion

And here I was thinking my Kraken X52's 240mm rad was pretty overkill for a 2080 Ti already, only to find out that 420-425W with the HOF XOC BIOS can still quite easily saturate that rad lol.

Not that my temps are bad per se; they usually sit around 47-51C with reasonable fan speeds on the rad. It is push-pull, but with pretty mediocre static-pressure fans (Cooler Master MF120Ls), because they are more for looks than cooling. But I noticed that by just ramping the fan speed from ~1300 RPM up to like 2100 RPM, the temperature drops like a brick to 39-42C, so the rad is super saturated even at 1300 RPM with 4 fans lol.

Would upgrading to a 280 or 360 Kraken AIO help? I've got plenty of room in the case for a 280 or 360 where the 240 is positioned now so..

Yes, I know you're all going to say get a custom block and a proper loop, but I switch GPUs so often I don't wanna have to buy a block every time lol. A Kraken G12 + X52 fits literally everything, even non-reference; all you need to make it work is some copper VRM/VRAM heatsinks if the card doesn't have a baseplate, and core temperatures are very close to a full-cover waterblock lol.


----------



## TONSCHUH

Bart said:


> TONSCHUH said:
> 
> 
> 
> If there is a nice block for the 2080 Ti and I have some spare cash again, I might get one (possibly at tax time).
> 
> 
> 
> One word my friend: Heatkiller. I have a Heatkiller IV GPU block from Watercool, and that thing flat out refuses to go higher than 35C, even when running OCed (+200-250 core +800 mem). Now that's in a cool basement with rad overkill (like most guys in here, LOL), but still. Heatkiller FTW!!

Oh ok, thanks a lot for the tip !

I will have a look at that block !


Edit:

It seems that there is only a Heatkiller IV for the reference (non-custom) PCBs.


----------



## J7SC

TONSCHUH said:


> Ok, I tried it with Afterburner and it looks like the Gigabyte Aorus software was the problem; it's crap, unfortunately. Thanks a lot for your (and the others') help!!!
> 
> I read on the web, that Afterburner wouldn't work and that I should use Precision instead, which was obviously not right.
> 
> Originally I wanted to get the "WB"-Model, but it was hard to come by, but I might have a look, if there is a nice EK-Block for it.


 
...looks like everything worked fine w/ values where they're supposed to be :thumb: 
I tried various other OC software for different gens of GPUs, and some work fine, but MSI AB is the gold standard IMO.

Re. aftermarket water blocks, given the custom PCB of the Aorus Xtreme 2080 Ti, check with various block vendors. But I know that Alphacool and EKWB both have specific versions for it, as do Bykski, Bitspower, Barrow and Phanteks. Pricing for blocks ranges from US$90 to $160 and up. There are others from Europe and Asia, but I don't think Heatkiller actually has one for this PCB (I really like and use some of their other w-cooling products). The highly-rated Aquacomputer one may/may not fit - you have to check with them. 

The collage below covers: 
a.) a condensed table from THW re. 2080 Ti FE clocks as related to temps, and 
b.) screen grabs from a YouTube vid by 'Declassified Systems' showing the Aorus 2080 Ti Xtreme *factory block* being modded...since I have those anyway, I can tell you that they perform very well but I don't know about the performance / quality of the aftermarket ones for the Aorus.


----------



## rustyk

Warlord1981 said:


> Thank you for this!
> 
> Here are my results with Heaven Benchmark 1 run:
> (Btw my room ambient must be greater than 26C. I found a room temperature thermometer that measures until 26C and it tops it immediately)
> 
> Everything stock = 110fps @ 80C by the end of 1st run.
> 1900MHz @ 0.950v w/ stock fan curve = 109.5fps @ 77C at the end of the 1st run. Actual clock starts 1890 at the beginning of run and at the end has fallen down to 1845.
> 1900MHz @ 0.900v w/ stock fan curve = 109.6fps @ 74C at the end of the 1st run. Actual clock starts 1890 at the beginning of run and at the end has fallen down to 1860.
> 1900MHz @ 0.875v w/ stock fan curve = 110fps @ 72C at the end of the 1st run. Actual clock starts 1890 at the beginning of run and at the end has fallen down to 1860.
> 1900MHz @ 0.850v w/ stock fan curve = 110.8fps @ 71C at the end of the 1st run. Actual clock starts 1890 at the beginning of run and at the end has fallen down to 1875.
> 
> 1900MHz @ 0.825v crashed the test. So this is the bottom line of my card's voltage at full load?
> 
> So I took the best result and put a custom fan curve:
> 
> 1900MHz @ 0.850v w/ custom fan curve = 110.2fps @ 67C at the end of the 1st run. Actual clock starts 1890 at the beginning of run and at the end has fallen down to 1875.
> 
> So is 0.850v safe?
> 
> PS. Power limit is at 100% on all tests.


This is a very good example of undervolting and thanks for supplying the data, it's inspired me to play around a bit too.
The reason I replied before was that I have exactly the same card as you (in fact I've had 3) and I think the temps are completely normal. Sometimes people see what they consider to be a high temp and look for a problem that isn't there.
I just ran a Superposition benchmark on a custom curve and auto fan; temps climbed up to 77C. In my experience this is normal. I don't know if the reviews were done on an open bench, or what settings they used. I've found that the auto fan curve will always let temps climb into that sort of region.

I also don't think you can rely on benchmarking to prove stability, as the load will vary depending on the game/engine etc etc. You could spend ages making your card stable based on a benchmark, then find that it crashes in certain games. I remember as far back as my GTX470, where I had different clock profiles for different games.


----------



## Warlord1981

rustyk said:


> This is a very good example of undervolting and thanks for supplying the data, it's inspired me to play around a bit too.
> The reason I replied before was that I have exactly the same card as you (in fact I've had 3) and I think the temps are completely normal. Sometimes people see what they consider to be a high temp and look for a problem that isn't there.
> I just ran a superposition benchmark on a custom curve and auto fan, temps climbed up to 77. In my experience, this is normal. I don't know if the reviews were on an open bench, or what settings they had. I've found that the auto curve will always let temps climb to that sort of region.
> 
> I also don't think you can rely on benchmarking to prove stability, as the load will vary depending on the game/engine etc etc. You could spend ages making your card stable based on a benchmark, then find that it crashes in certain games. I remember as far back as my GTX470, where I had different clock profiles for different games.



I'm glad this was helpful to you and hope you can achieve similar results.

I was about to return the card as I was feeling very uncomfortable with 80C, but Mooncheese opened my eyes with the undervolt magic, which helped me quite a lot!

To give you an idea of real-world gaming: applying my best result of 1900 MHz @ 0.850V w/ custom fan curve to COD: Warzone for half an hour, the GPU temp didn't exceed 64C.. such low temps I couldn't even see with the non-Ti in winter time.

Regarding stability, time will tell. If I get crashes or black screens I will raise the voltage by 0.025V.


----------



## rustyk

Warlord1981 said:


> I'm glad this was helpful to you and hope you can achieve similar results.
> 
> I was about to return the card as i was feeling very uncomfortable with 80C, but Mooncheese opened my eyes with the undervolt magic which helped me quite a lot!
> 
> To give you an idea of real-world gaming: applying my best result of 1900 MHz @ 0.850V w/ custom fan curve to COD: Warzone for half an hour, the GPU temp didn't exceed 64C.. such low temps I couldn't even see with the non-Ti in winter time.
> 
> Regarding stability, time will tell. If I get crashes or black screens i will raise the voltage by 0.025v.


Well, good luck. I know there are quite a few parameters at play in terms of voltage, power, temp and framerate. Personally I'm happy to use the auto overclock curve, leave the fan at auto and just enjoy. In my experience it just comes down to how much you want to tweak versus how much you want to play, but I think your card is normal. Enjoy!


----------



## tcclaviger

I owned a 2080 Ti for... 5 days, used it for 2, and it failed on day three. Do I win? 100% stock card; I always give them some time to die of a manufacturing flaw before I start going HAM on them lol.

Tomorrow my Kryographics NEXT 2080 Ti block arrives, so at least I didn't go through the trouble of installing it, getting the new tubing run, filled, bled etc., only to find this out.

Really disappointing for such an expensive card. It shouldn't even be possible to ship a 2080 Ti with a manufacturing flaw this bad. My 1080 Ti happily sucks 600 watts (power uncapped) under water without issue.

Fun pics of Control doing wonky things:


----------



## kithylin

tcclaviger said:


> I owned a 2080ti for .... 5 days, used it 2, and failed on day three do I win? 100% stock card, I always give them some time to die due to manufacturing flaw before i start going HAM on them lol.
> 
> Tomorrow my Kryographics NEXT 2080ti block arrives, so, at least I didn't go through the trouble of installing, getting the new tubing run, filled, bled etc only to find this out.
> 
> Really disappointing for such an expensive card. It shouldn't even be possible to ship a 2080ti with a manufacturing flaw this bad. My 1080ti sucks 600 watts (power uncapped) happily under water without issue.
> 
> Fun pics of Control doing wonky things:


If you read back the news from when this card released, there were hundreds if not thousands of reports across the internet of the early batches of 2080 Tis having memory issues. Steve from Gamers Nexus even had people send in cards to him for testing to verify they were bad. A pretty high percentage of 2080 Ti owners had to file an RMA for the first batch of cards due to these issues. Some people (a friend of mine I talk to daily) had to RMA their Founders Edition 4 times before getting a card that worked correctly. I guess it's possible you bought new-old-stock that was one of the early ones that never sold? At least hopefully you can file for a replacement somewhere.


----------



## tcclaviger

Yeah, EVGA will take care of me; it was purchased directly from them, and EVGA is pretty good about warranties from what I understand. I knew the issue existed and was always a risk when buying a 2080 Ti. Still sad to see it happen to people this late in the game.

Lots of silver-lining luck in the timing tbh. Just sad I already moved the beasty 1080 Ti to wifey's cooling loop, and I'm not going through all that trouble of moving it back while waiting. Radeon 5770 air-cooled relic coming out of the box for a few days, it seems lol.


----------



## J7SC

kithylin said:


> If you read back the news from when this card released there were hundreds if not thousands of reports of this all across the internet with the early batches of 2080 Ti's having memory issues. Steve from gamer's nexus even had people send in cards to him for testing to verify their cards were bad. There was a pretty high percentage of 2080 Ti owners that had to file for RMA for the first batch of cards due to this issues. Some people (A friend of mine I talk to daily) had to RMA his founder's edition 4 times before getting a card that worked correctly. I guess it's possible you bought a new old stock card that was one of the early ones that never sold? At least hopefully you can go file for a replacement somewhere.


 


tcclaviger said:


> Yeah, EVGA will take care of me, was purchased directly from them. EVGA is pretty good about warranties from what I understand. I knew the issue existed and was always a risk when buying a 2080ti. Still sad to see it happen to people this late in the game.
> 
> Lots of silver lining luck involved in the timing tbh, just sad I already moved the beasty 1080ti to wifey's cooling loop and I'm not going through all that trouble of moving it back while waiting. Radeon 5770 aircooled relic coming out of the box for a few days it seems lol.


 
...yeah, the (in)famous early-batch NVidia '_*test escapes*_'...

Glad that EVGA is taking care of it for you


----------



## kithylin

tcclaviger said:


> Yeah, EVGA will take care of me, was purchased directly from them. EVGA is pretty good about warranties from what I understand. I knew the issue existed and was always a risk when buying a 2080ti. Still sad to see it happen to people this late in the game.
> 
> Lots of silver lining luck involved in the timing tbh, just sad I already moved the beasty 1080ti to wifey's cooling loop and I'm not going through all that trouble of moving it back while waiting. Radeon 5770 aircooled relic coming out of the box for a few days it seems lol.


Sounds good. One of the benefits of doing an RMA right now is that you should (in theory at least) get a later-batch 2080 Ti that doesn't have the problems the early ones did.


----------



## TONSCHUH

Imprezzion said:


> And here I was thinking my Kraken X52's 240mm rad was pretty overkill for a 2080 Ti already, only to find out that 420-425W with the HOF XOC BIOS can still quite easily saturate that rad lol.
> 
> Not that my temps are bad per se; they usually sit around 47-51C with reasonable fan speeds on the rad. It is push-pull, but with pretty mediocre static-pressure fans (Cooler Master MF120Ls), because they are more for looks than cooling. But I noticed that by just ramping the fan speed from ~1300 RPM up to like 2100 RPM, the temperature drops like a brick to 39-42C, so the rad is super saturated even at 1300 RPM with 4 fans lol.
> 
> Would upgrading to a 280 or 360 Kraken AIO help? I've got plenty of room in the case for a 280 or 360 where the 240 is positioned now so..
> 
> Yes, I know you're all going to say get a custom block and a proper loop, but I switch GPUs so often I don't wanna have to buy a block every time lol. A Kraken G12 + X52 fits literally everything, even non-reference; all you need to make it work is some copper VRM/VRAM heatsinks if the card doesn't have a baseplate, and core temperatures are very close to a full-cover waterblock lol.


I had 14 of these here:

https://www.amazon.com.au/Bgears-b-Blaster-120mm-Bearing-Extreme/dp/B0043GKY1W

...but got these now (in blue, green and red), because they were on sale:

https://www.umart.com.au/CoolerMaster-12CM-Sickleflow-X-Blue_24452G.html

That's my sandwich (the 240x60mm is inside the case):


----------



## TONSCHUH

J7SC said:


> ...looks like everything worked fine w/ values where they're supposed to be :thumb:
> I tried various other OC software for different gens of GPUs, and some work fine, but MSI AB is the gold standard IMO.
> 
> Re. aftermarket water blocks, given the custom PCB of the Aorus Xtreme 2080 Ti, check with various block vendors. But I know that Alphacool and EKWB both have specific versions for it, as do Bykski, Bitspower, Barrow and Phanteks. Pricing for blocks ranges from US$90 to $160 and up. There are others from Europe and Asia, but I don't think Heatkiller actually has one for this PCB (I really like and use some of their other w-cooling products). The highly-rated Aquacomputer one may/may not fit - you have to check with them.
> 
> The collage below covers:
> a.) a condensed table from THW re. 2080 Ti FE clocks as related to temps, and
> b.) screen grabs from a YouTube vid by 'Declassified Systems' showing the Aorus 2080 Ti Xtreme *factory block* being modded...since I have those anyway, I can tell you that they perform very well but I don't know about the performance / quality of the aftermarket ones for the Aorus.


I'm not sure if a different BIOS, like one with 380W or even 1000W, will give me much better overclocks on air.

I might opt for an EK block, as most parts of my loop are EK and I only have some Bitspower fittings.


----------



## TONSCHUH

Ok, I played around a bit with the clocks etc. and that's pretty much it (I could still fine-tune it a bit, but I doubt it would result in much better scores, mainly because of the PWR limit):

https://www.3dmark.com/3dm/46940296? (any higher VRAM clock resulted in lower scores, mainly because of VRel)

For Superposition 8K, have a look at the attached screenshots:


----------



## tcclaviger

What to do while waiting for my RMA arrival?!

Plug in my ancient and tiny backup GPU, Radeon HD5770. Lol.

Aquacomputer block with 17w Fujipoly pads ready and waiting to receive a new 2080ti, active backplate still in the mail.

Bored overclocker...


----------



## GAN77

tcclaviger said:


> What to do while waiting for my RMA arrival?!
> 
> Plug in my ancient and tiny backup GPU, Radeon HD5770. Lol.
> 
> Aquacomputer block with 17w Fujipoly pads ready and waiting to receive a new 2080ti, active backplate still in the mail.
> 
> Bored overclocker...


Familiar picture :)
Since thermal paste is provided for the memory chips, I recommend applying it as a small pea on each chip. I recently opened my block and some of the chips were not making contact with the water block. On my first installation I had smeared thermal grease on the memory chips.


----------



## J7SC

TONSCHUH said:


> Ok, I played a bit around with the clocks etc and that's pretty much it (I could still fine-tune it a bit, but I doubt, that it would result in much better scores, mainly because of PWR):
> 
> https://www.3dmark.com/3dm/46940296? (any higher clocks resulted in lower scores, mainly because of VRel)
> 
> For Superposition 8k, have a look at the attached screenshots:


 
...nice :thumb: Your card will love w-cooling if / when you go for that


----------



## tcclaviger

GAN77 said:


> Familiar picture)
> Since thermal paste is provided for memory chips, I recommend applying thermal paste to the memory with a small pea. I recently opened my block and some of the chips were not in contact with the water block. At the first installation, I smeared thermal grease on a memory chip.


Thanks  Had been planning on doing so. I really look forward to seeing where the new card's limit is when it's thermally free to do whatever it wants 

Once I loaded the XOC BIOS on my 1080 Ti and let it eat at 1.2 volts, it came back at 2188 MHz with no power limitations and never rose over a laughable 40C. The card is a screamer (and pulls such stupid power levels that the coils make nooooooooooise when doing so). Out of sympathy I let it off the hook at 2000 MHz / 1.1V daily lol.


----------



## GAN77

tcclaviger said:


> Thanks  Had been planning on doing so, I really look forward to seeing where the new card's limit is when thermally free to do w/e it wants to do
> 
> Once I XOC bios loaded my 1080ti and let it eat at 1.2 volts it came back at 2188 with no power limitations, and never rose over a laughable 40c, card is a screamer (and pulls such stupid power levels the coils make nooooooooooise when doing so). Out of sympathy for it I let it off the hook at 2000/1.1v daily lol.



I noticed this water block loves flow rates above 0.6 gallons per minute. Also, with the active backplate there are no O-rings included for the connection ports; I reused old rings from a regular port. Be careful.

Have a good installation!


----------



## TONSCHUH

J7SC said:


> ...nice :thumb: Your card will love w-cooling if / when you go for that


Can't wait for it, but man, our Australian dollar sucks big time!

I plan on getting the block + backplate in July, if everything goes well.

What BIOS should I flash later on? The 1000W (XOC Strix OC) one?


----------



## Shawnb99

GAN77 said:


> I noticed. This water block loves flow rates above 0.6 gallons per minute. Also with an active rear panel there are no o-rings for the connection ports. I used old rings from a regular port. Be careful.
> 
> Have a good installation!



Everything loves a flow rate of at least 0.5 GPM; that's the lowest your flow should ever get. 1 GPM is what you should be striving for.


----------



## GAN77

Shawnb99 said:


> Everything loves a flow rate of at least 0.5GPM, that’s the lowest your flow should ever get. 1GPM is what you should be striving for.


My maximum flow rate can reach around 1.2 GPM. I was interested to find out the minimum effective flow rate for the Kryographics NEXT RTX 2080 Ti.


----------



## Shawnb99

GAN77 said:


> My maximum flow rate can reach around 1.2GPM. I was interested to find out the minimum effective consumption for kryographics NEXT RTX 2080 Ti.



You should keep it at maximum then; you'll likely see a degree or two better temps.
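To put rough numbers on the flow-rate question: the water's temperature rise through the block is ΔT = P / (ṁ·cp), so doubling flow from 0.5 to 1.0 GPM halves the rise, but beyond that the absolute gains shrink fast. A sketch (the 400W heat load is an assumed figure for a heavily overclocked 2080 Ti, not a measurement):

```python
# Coolant temperature rise across a GPU block: dT = P / (m_dot * c_p)
GPM_TO_KG_S = 3.785 / 60  # 1 US gal/min ~= 3.785 L/min of water ~= 3.785 kg/min
C_P_WATER = 4186          # specific heat of water, J/(kg*K)

def coolant_delta_t(watts, gpm):
    """Water temperature rise (K) through the block at a given flow rate."""
    return watts / (gpm * GPM_TO_KG_S * C_P_WATER)

for gpm in (0.5, 1.0, 1.2):
    print(f"{gpm} GPM: {coolant_delta_t(400, gpm):.2f} K rise at 400 W")
```

Note this only captures the coolant-side rise; the block's core-to-water resistance also improves a little with flow, which is where the "degree or two" at higher GPM mostly comes from.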


----------



## Regel

Hi all,

Reading this thread now, I seem to have made a mistake...

I've bought an ASUS 2080 Ti Turbo 11GB, which I now see is a non-A card. I intended to include it in a watercooling loop with a Heatkiller IV block.
As I've already opened the box, returning the card would cost at minimum 10% of the purchase price.

Is the overclocking headroom of this card severely limited, meaning I should take the loss and exchange it for something like a Zotac 2080 Ti AMP, or is the difference in performance marginal enough that I should just keep it?


----------



## Royalfrakk

Hi,

It's my first time tinkering with a GPU BIOS, first time custom loop (which I have just finished), first time overclocking.
I have the MSI Gaming X Trio 2080 Ti. I tried several BIOSes, mainly the 400/450W Galax, the 406W Gaming X Trio, the 520W EVGA and the 1000W Strix. When testing in games and benchmarks, the max power consumption is around 340-360W with the first three; the 520W one limits itself to 300 MHz (except in the OpenGL Kombustor test, where it draws ~500W), and the 1000W one doesn't seem to work right with the card.
I made a basic slider adjustment: +135 core, +1000 memory, power limit at the max available.
How can I achieve higher performance?
Thank you for any input!


----------



## Mooncheese

Regel said:


> Hi all,
> 
> Reading this thread now, I seem to have made a mistake...
> 
> I've bought an Asus 2080 Ti Turbo 11GB which I now see is a non-A card. I intended to include this in a watercool loop with a Heatkiller IV block.
> As I already opened the box, returning the card would cost at minimum 10% of the purchase price.
> 
> Is the overclocking possibility with this card severely limited and should I just take the loss and exchange it for something like a Zotac 2080 Ti AMP;
> or is the difference in performance rather marginal and I should just keep it?


Return it and wait for Ampere, due out in September.


----------



## TONSCHUH

Ok, I can confirm, that the "ASUS RTX 2080 Ti Strix OC Custom PCB (2x8-Pin) 1000W x 100% Power Target BIOS (1000W)"-BIOS from Page-1, works with the Gigabyte 2080-ti-AORUS-EXTREME-11GB (Model-No: GV-N208TAORUS X-11GC (Rev. v1.0)).

I used NVIDIA NVFlash 5.590.0 to flash it.

To see if it worked, I pushed the GPU a bit while still on air:


----------



## J7SC

TONSCHUH said:


> Ok, I can confirm, that the "ASUS RTX 2080 Ti Strix OC Custom PCB (2x8-Pin) 1000W x 100% Power Target BIOS (1000W)"-BIOS from Page-1, works with the Gigabyte 2080-ti-AORUS-EXTREME-11GB (Model-No: GV-N208TAORUS X-11GC (Rev. v1.0)).
> 
> I used NVIDIA NVFlash 5.590.0 to flash it.
> 
> To see, if it worked, I pushed the GPU a bit, while still on-Air:


 
Very nice! But do be careful with a 1000W BIOS on air; hopefully you can get water cooling going soon.
Also, with that Asus Strix BIOS, are all your ports and fans working the way they did before? I ask because others seem to have had issues with I/O and fan speeds on certain custom BIOSes.


----------



## TONSCHUH

J7SC said:


> Very nice !  But do be careful with a 1000w bios on air. Hopefully, you can get w-cooling going soon.
> Also, with that Asus Strix bios, are all your ports and fans working they way they did before ? I ask as others seem to have had some issues re. I/O and fan speeds with select custom bios.


Yeah, I won't push it too hard on air and will wait until I can get a block for it.

Two of my DisplayPorts don't work anymore, but since I use one DP for my main screen and HDMI for my second one, it's not a big deal. I haven't checked whether any of the HDMI ports stopped working as well.


----------



## gfunkernaught

TONSCHUH said:


> Ok, I can confirm, that the "ASUS RTX 2080 Ti Strix OC Custom PCB (2x8-Pin) 1000W x 100% Power Target BIOS (1000W)"-BIOS from Page-1, works with the Gigabyte 2080-ti-AORUS-EXTREME-11GB (Model-No: GV-N208TAORUS X-11GC (Rev. v1.0)).
> 
> I used NVIDIA NVFlash 5.590.0 to flash it.
> 
> To see, if it worked, I pushed the GPU a bit, while still on-Air:


Anyone trying to use these BIOSes on a reference PCB, beware: I tried them and, yes, I got [email protected] and about 6990 in Time Spy Extreme, but crashed within seconds in CoD:MW. After rebooting and logging in, my PC hard-locked. Rebooted, same thing. It seems I almost bricked my card. I'd like to see some 2080 Ti overclocks that are in the extreme range yet playable in games that use the RT cores.


----------



## TONSCHUH

gfunkernaught said:


> Anyone trying to use these bios on a reference PCB beware, I tried them out and yeah I got [email protected] and about 6990 in time spy extreme, but crashed within seconds in CodMW. After rebooting, after I logged in my PC hard locked. Rebooted, same thing. Almost bricked my card it seems. I'd like to see some 2080 ti overclocks that are in the extreme range and playable in games that use RT cores.


Be careful, mate!

Mine was stable in 8K Superposition, but I haven't tried 3DMark yet!


----------



## Imprezzion

gfunkernaught said:


> Anyone trying to use these bios on a reference PCB beware, I tried them out and yeah I got [email protected] and about 6990 in time spy extreme, but crashed within seconds in CodMW. After rebooting, after I logged in my PC hard locked. Rebooted, same thing. Almost bricked my card it seems. I'd like to see some 2080 ti overclocks that are in the extreme range and playable in games that use RT cores.


There's no reason to use it on a reference card, as the HOF XOC BIOS works far better on reference boards. Just make sure NOT to save a profile in Afterburner or apply it at boot; that will BSOD. Manually enter the OC after every boot and it works flawlessly. The only downside is the relatively high minimum fan speed of 34% (1000 RPM), though that's barely audible since I'm using that header to control the 4x120mm fans on my GPU radiator.


----------



## philhalo66

tcclaviger said:


> I owned a 2080ti for .... 5 days, used it 2, and failed on day three do I win? 100% stock card, I always give them some time to die due to manufacturing flaw before i start going HAM on them lol.
> 
> Tomorrow my Kryographics NEXT 2080ti block arrives, so, at least I didn't go through the trouble of installing, getting the new tubing run, filled, bled etc only to find this out.
> 
> Really disappointing for such an expensive card. It shouldn't even be possible to ship a 2080ti with a manufacturing flaw this bad. My 1080ti sucks 600 watts (power uncapped) happily under water without issue.
> 
> Fun pics of Control doing wonky things:


Control did that to me as well; I think the game is just a buggy mess.


----------



## rustyk

Mooncheese said:


> Return it and wait for Ampere, due out Sept.


Is that retail availability? And what's the cost and performance uplift?

It's not necessarily bad advice, but without knowing all the details (purchase cost etc.), it might even make sense to keep the non-A card, skip the water cooling, and upgrade when the dust settles.


----------



## rustyk

Regel said:


> Hi all,
> 
> Reading this thread now, I seem to have made a mistake...
> 
> I've bought an Asus 2080 Ti Turbo 11GB which I now see is a non-A card. I intended to include this in a watercool loop with a Heatkiller IV block.
> As I already opened the box, returning the card would cost at minimum 10% of the purchase price.
> 
> Is the overclocking possibility with this card severely limited and should I just take the loss and exchange it for something like a Zotac 2080 Ti AMP;
> or is the difference in performance rather marginal and I should just keep it?


If you're going to throw a load of money at a 2080 Ti now, trying to max out performance, it's probably a bad time.
What are you actually trying to achieve, and what's your use case? Depending on what you paid, it might make sense to keep it, use it and get the benefit now, then sell and upgrade at the appropriate time.

A friend of mine bought a non-A 2080, watercooled it and was still getting significant benefits from the lower temps and boost algorithms, even with the inherent power limitations.


----------



## gfunkernaught

Of all the BIOSes I've tried on my 2080 Ti, the KFA BIOS works the best. The XOC BIOSes, any edition, seem to me to be good only for benchmarks. My card is stable and performs best in ALL games with [email protected]. For Vulkan and DX11/12 titles that don't use ray tracing I can do [email protected]. The highest I got was [email protected] with the KP XOC BIOS, in the middle of winter in my garage; ambient was around 4°C and the GPU didn't exceed 19°C under load.


----------



## Mooncheese

rustyk said:


> Is that retail availability? Plus, what's the cost and performance uplift?
> 
> It's not necessarily bad advice, but without knowing all the detail (purchase cost etc.), it might even make sense to keep the Non-A card and not bother with water cooling, then upgrade when the dust settles.


Are you kidding? The 3080 Ti will easily be 50% faster than the 2080 Ti in rasterization alone, and some 2-4x faster in ray tracing, going by the leaks we have so far.

https://www.overclock.net/forum/379...e-s-law-dead-nvidia-ampere-massive-leaks.html

I would NOT buy any GPU right now, and I would NOT keep the 2080 Ti you have if you actually have the ability to return it for a refund.

Even if I didn't have a GPU to begin with, I wouldn't buy one right now. If you must buy one and can't content yourself with Android games or some other diversion, I would get something used, like a 1080 Ti for $300.


----------



## Mooncheese

Duplicate.


----------



## gfunkernaught

I would say never spend more than ~$650 on a GPU, but I doubt we will ever see a top-tier GPU from Nvidia at $500-$700 again, because Nvidia can do whatever they want now. Unless AMD makes a GPU that performs close to or the same as Nvidia's $1200 counterpart.


----------



## J7SC

Regel said:


> Hi all,
> 
> Reading this thread now, I seem to have made a mistake...
> 
> I've bought an Asus 2080 Ti Turbo 11GB which I now see is a non-A card. I intended to include this in a watercool loop with a Heatkiller IV block.
> As I already opened the box, returning the card would cost at minimum 10% of the purchase price.
> 
> Is the overclocking possibility with this card severely limited and should I just take the loss and exchange it for something like a Zotac 2080 Ti AMP;
> or is the difference in performance rather marginal and I should just keep it?


 
Did you already purchase the Heatkiller IV block as well? If so, I would probably keep the Asus 2080 Ti Turbo; if not, it's a tough call at 50/50.

While Ampere (consumer) cards are expected to drop sometime in the fall, the previous-gen (Turing RTX) launch certainly did not go smoothly, never mind the early breakages due to 'test escapes'...maybe waiting a bit after 'official availability' of the first Ampere consumer cards is not such a bad approach. Then there's the question not only of the delay between 'paper launch' and actual consumer availability at M$RP, but also of the launch sequence...for example "3080", then "3080 Ti", then "Titan Ampere"...perhaps that pushes Ampere closer to the Christmas holiday season, depending which one you want. Finally, re. the "3080 Ti" for example, Nvidia is likely to do their own FE fir$t for a bit before custom-PCB versions from vendors arrive. Another factor to consider is what 'big Navi' from AMD will bring in a similar time frame. As others said, depending on your use case, a well-cooled non-A 2080 Ti could keep you happy for quite a while until Ampere and/or big Navi are available in reasonable quantities, flavours and prices.


----------



## philhalo66

Anyone here got an FTW3 Hybrid model? The fan headers don't seem to work very well; I was wondering if anyone else is experiencing this problem.


----------



## Pichulec

Is there any BIOS for the non-A version that doesn't have fan problems? I have a blower-style card and the Palit dual BIOS only lets my fan go to a maximum of 2200 RPM.


----------



## Pichulec

Is there any BIOS for the non-A version with a 310W power limit that doesn't have fan problems? I have a blower-style card and the Palit dual BIOS only lets my fan go to a maximum of 2200 RPM.


----------






## Mooncheese

gfunkernaught said:


> I would say never spend more than $650ish on a gpu but I doubt we will ever see top-tier gpu from nvidia at $500-$700 ever again because nVidia can do whatever they want now. Unless AMD makes a gpu that can perform close to or the same as nvidia's $1200 counterpart.


The 3080 will be ~$700. They cannot do what they did with Turing again: not only does AMD have Big Navi coming out, but there are also the next-gen consoles and Intel releasing a discrete GPU next year. The price/performance of Turing was completely anomalous. It was released on the tail end of the crypto-mining craze (before BTC fell off a cliff) with the expectation that that massive surge in demand would continue, and it was also released in a vacuum, as AMD did not have a competing product.

All of this has been fairly meted out by now in this thread, I'm not going to waste a whole lot more energy reiterating what I've already said. 

The 3080 will be ~$700 and will be 25% faster than the 2080 Ti in rasterization and 2-3x faster in RT @ 200W.

The 3080 Ti will be $1000-1100 and will be 50% faster than the 2080 Ti in rasterization and 3x, possibly 4x, faster in RT @ 300W.

The 3070 will be $500 and will be as fast as the 2080 Ti in rasterization yet do RT at least 2x faster @ 170-180W.


----------



## philhalo66

Mooncheese said:


> 3080 will be ~$700. They cannot do what they did with Turing again as not only does AMD have Big Navi coming out but there's also the next-gen consoles and Intel releasing discrete GPU next year. The price / performance of Turing was completely anomalous, it was released on the tail-end of the crypto mining craze (before BTC fell off of a cliff) with the expectation that that massive surge in demand would continue, it was also released in a vacuum as AMD did not have a competing product.
> 
> All of this has been fairly meted out by now in this thread, I'm not going to waste a whole lot more energy reiterating what I've already said.
> 
> 3080 will be ~$700 and it will be 25% faster than 2080 Ti in Rasterization and 2-3x faster in RT @ 200W
> 
> 3080 Ti will be $1000-1100 and will be 50% faster than 2080 Ti in rasterizatin and 3, possibly 4x faster in RT @ 300W
> 
> 3070 will be $500 and will be as fast as 2080 Ti in rasterization yet do RT at least 2x faster @ 170-180W


Where is everyone pulling this 50% faster number from?


----------



## kithylin

Mooncheese said:


> 3080 will be ~$700. They cannot do what they did with Turing again as not only does AMD have Big Navi coming out but there's also the next-gen consoles and Intel releasing discrete GPU next year. The price / performance of Turing was completely anomalous, it was released on the tail-end of the crypto mining craze (before BTC fell off of a cliff) with the expectation that that massive surge in demand would continue, it was also released in a vacuum as AMD did not have a competing product.


Just to clear up something: AMD's target with their "Big Navi" card is to compete with RTX 2080 Ti, not RTX 3080 Ti. Nvidia's RTX 3080 Ti by default will be faster than this new AMD "Big Navi" card.



philhalo66 said:


> where is everyone pulling this 50% faster number from?


That was started by the Taipei Times in January 2020, and everyone else has just run with it ever since: http://www.taipeitimes.com/News/biz/archives/2020/01/02/2003728557 As of writing, I have not found any source direct from Nvidia actually making an official "50% faster" claim. I may be wrong, and if so please correct me, but I've searched the internet a lot and can't find an actual Nvidia source for that claim yet.


----------



## philhalo66

kithylin said:


> Just to clear up something: AMD's target with their "Big Navi" card is to compete with RTX 2080 Ti, not RTX 3080 Ti. Nvidia's RTX 3080 Ti by default will be faster than this new AMD "Big Navi" card.
> 
> 
> That was started by the "Taipei Times" in Jan 2020 and everyone else has just "Ran with it" even today: http://www.taipeitimes.com/News/biz/archives/2020/01/02/2003728557 At the time of writing my message here I have not found any source actually direct from Nvidia that has actually stated any official claims of "50% faster" that I'm aware of. I may be wrong and if I am someone please correct me but I've been searching the internet a lot and I can't find any "Actual source from Nvidia" on that claim as of yet.


I figured it was some BS rumor. I will be very surprised if it's even 25% faster.


----------



## gfunkernaught

Mooncheese said:


> gfunkernaught said:
> 
> 
> 
> I would say never spend more than $650ish on a gpu but I doubt we will ever see top-tier gpu from nvidia at $500-$700 ever again because nVidia can do whatever they want now. Unless AMD makes a gpu that can perform close to or the same as nvidia's $1200 counterpart.
> 
> 
> 
> 3080 will be ~$700. They cannot do what they did with Turing again as not only does AMD have Big Navi coming out but there's also the next-gen consoles and Intel releasing discrete GPU next year. The price / performance of Turing was completely anomalous, it was released on the tail-end of the crypto mining craze (before BTC fell off of a cliff) with the expectation that that massive surge in demand would continue, it was also released in a vacuum as AMD did not have a competing product.
> 
> All of this has been fairly meted out by now in this thread, I'm not going to waste a whole lot more energy reiterating what I've already said.
> 
> 3080 will be ~$700 and it will be 25% faster than 2080 Ti in Rasterization and 2-3x faster in RT @ 200W
> 
> 3080 Ti will be $1000-1100 and will be 50% faster than 2080 Ti in rasterizatin and 3, possibly 4x faster in RT @ 300W
> 
> 3070 will be $500 and will be as fast as 2080 Ti in rasterization yet do RT at least 2x faster @ 170-180W

Where did you get this data from? And yes, Nvidia can do what they did with Turing. Based on these numbers you posted, if they're accurate, they will do the exact same thing. Who's going to stop them at this point? Bitcoin was part of the equation, but it was mostly Nvidia pushing ray-tracing hype and marketing, and people, like me, buying into it and supporting their BS. Anything over $700 for a gaming GPU is absolute nonsense. The only thing I can see forcing Nvidia to be fair and competitive is the fact that ray tracing is no longer proprietary.


----------



## kithylin

Mooncheese said:


> All of this has been fairly meted out by now in this thread, I'm not going to waste a whole lot more energy reiterating what I've already said.
> 
> 3080 will be ~$700 and it will be 25% faster than 2080 Ti in Rasterization and 2-3x faster in RT @ 200W
> 
> 3080 Ti will be $1000-1100 and will be 50% faster than 2080 Ti in rasterizatin and 3, possibly 4x faster in RT @ 300W
> 
> 3070 will be $500 and will be as fast as 2080 Ti in rasterization yet do RT at least 2x faster @ 170-180W


Could you please share your source? Where did you see or hear this? Did you find out from any actual confirmed source / leak that these numbers are actually facts?


----------



## Laithan

I think the leaks in that YouTube video are likely true. From a business perspective it _has_ to be considerably faster than the 2080 Ti or it would be a flop. It will certainly drive 2080 Ti prices down, and that should be helpful for the 4K community overall. I'm satisfied with my 2080 Ti, especially when it sounds like $1400+ custom designs all over again for the 3080 Ti. That price point is difficult to get excited about (minus the mining craze).


----------



## kithylin

Laithan said:


> I think the leaks in that YouTube video are likely true. From a business perspective it _has_ to be considerably faster than the 2080Ti or it would be a flop. It will certainly drive the prices of a 2080Ti down and this should be helpful for the 4K community overall. I'm satisfied with my 2080Ti especially when it sounds like $1400+ custom designs all over again for 3080Ti. That price point is difficult to get excited about (minus the mining craze).


Actually, that's not true. Nvidia has released cards in the past where the top-spec card of the new family was only +30% faster than the top-spec card of the previous family (not including Titans), and they sold very well at retail. Based on Nvidia's product history, it is entirely possible that the new Ampere cards are only +25% to +30% faster than Turing and still sell as a successful, very profitable product. It does not have to be "50% faster" for Nvidia.

They have no competition at the high end right now (as of typing this), and their new Ampere cards will have zero competition at the high end when they release a few months from now. The smart thing from a business perspective would be to release new products only as much faster than the previous ones as people are willing to pay for. I'm sure that if Nvidia wanted, they could release a card that is just +15% or +20% faster for the first generation of 7nm cards (Ampere), then another +20% for the next family of revised 7nm products, to milk more money out of their customers. There's really no reason to give us a single card that's +40% faster in one gulp if they could split it up across two families for more profit.

They may not do that this time around, but as we get closer to the physical limits of silicon, where chips can't shrink any further (producing any CPU, GPU or memory beyond 3nm is currently considered physically impossible, for example), they'll have to stretch out their gains thinner and thinner at some point.
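Just to put numbers on the "split it up across two families" point above, here's a quick sanity check (the percentages are illustrative only, not leaked figures):

```python
# Illustrative only: one big generational jump vs. two smaller ones.
one_jump = 1.40          # a single new card that is +40% faster
two_jumps = 1.20 * 1.20  # two successive families at +20% each

print(f"one +40% jump:  {one_jump:.2f}x")
print(f"two +20% jumps: {two_jumps:.2f}x")
# Two +20% steps compound to +44% total, slightly more than one +40%
# jump, while selling customers two cards instead of one.
```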


----------



## J7SC

For now, it is all speculation and rumour, even if it turns out to be accurate (or not). The only thing we really have 'as Ampere fact' at this stage are the official numbers from the Ampere A100 ('enterprise' model), also per Anandtech below. While the general Ampere architecture will be applied to consumer and pro_sumer cards, there will obviously be different configurations of memory (HBM2 vs GDDR6, size), and of the number of Cuda, Tensor and RT cores. The A100 is (currently at least) the flagship, with 6912 Cuda cores...earlier rumour / speculation had the full-fat GPU at 8192 Cudas, but 6912 is it, for now at least (also depending on yields, like in some previous gens where the full theoretical size was never released).

*What caught my attention is the power consumption* (400w @ ~ 1.41 GHz) of the 7nm A100, a lot higher than previous (12nm) gen, though it is certainly more powerful. The A100 has 54 billion transistors and is physically still humongous, even if it is 7nm. It does suggest though that the consumer and pro_sumer 7nm models will likely be quite 'toasty' and will probably *benefit greatly* from water-cooling. There is the related question of die shrinks focusing heat on smaller areas that are harder to cool. Finally, it will be really interesting to see how the 7nm wafer yields shape up as that will also have an impact on pricing. Personally, I couldn't be happier with my two 2080 Tis for now...but when the time comes, I hope to see some nice custom PCB factory w-cooled models on the market, perhaps in early 2021.
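A quick back-of-the-envelope on the "die shrinks focus heat on smaller areas" point: the TU102 figures match the spec sheet at the top of the thread, while the ~826 mm² A100 die size is my assumption from public coverage, so treat this as a rough sketch:

```python
# Rough power-density comparison in W/mm²; figures are approximate.
tu102 = {"name": "TU102 (2080 Ti, 12nm)", "tdp_w": 250, "die_mm2": 754}
ga100 = {"name": "A100 (Ampere, 7nm)",    "tdp_w": 400, "die_mm2": 826}  # assumed die size

for chip in (tu102, ga100):
    density = chip["tdp_w"] / chip["die_mm2"]
    print(f'{chip["name"]}: {density:.2f} W/mm²')
```

Even with the A100 still being a physically huge die, the power per square millimetre goes up noticeably, which is why water cooling looks attractive for the 7nm consumer parts.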


----------



## gfunkernaught

Maybe water cooling will become standard when chips hit the 3nm limit mentioned above. I remember back in the day having a $350 graphics card for 5 years, then upgrading to the top-level card that cost $650 and being blown away, then holding onto that card for 5 years. Their products held their value long after release. Even the 1080 Ti (before the Bitcoin craze) was good value. Should've kept the one I had.

Side note/question:
Has anyone experienced this thermal throttling with the card under water? I have my curve set to start at [email protected], so that when the temp hits 40°C it drops to 2085 MHz, which is where I want it. That works most of the time, but occasionally two issues arise. First, the clock drops to the next lower step but the voltage does not. Second, the clock doesn't drop at all, becomes unstable and then crashes. Is there any way to force the card to never change the frequency set by the user, or is GPU Boost the ultimate decider?
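For what it's worth, the stepping you're describing can be pictured as a simple temperature-threshold model. This is only an illustrative sketch, not Nvidia's actual algorithm; the threshold temperatures and the 15 MHz bin size here are assumptions:

```python
# Toy model of GPU Boost-style clock binning by temperature.
# Real thresholds and bin sizes are not public; these are illustrative.
BIN_MHZ = 15                     # Turing boost bins are roughly 15 MHz apart
TEMP_STEPS_C = [40, 48, 56, 64]  # assumed temps where one bin is dropped

def effective_clock(requested_mhz: int, temp_c: float) -> int:
    """Drop one boost bin for each temperature threshold crossed."""
    bins_dropped = sum(1 for t in TEMP_STEPS_C if temp_c >= t)
    return requested_mhz - bins_dropped * BIN_MHZ

print(effective_clock(2100, 35))  # below all thresholds: full requested clock
print(effective_clock(2100, 41))  # one threshold crossed: one bin dropped
```

In this model the driver, not the user curve, always has the last word on the final bin, which matches the "GPU Boost is the ultimate decider" experience people report.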


----------



## J7SC

Unless you run an LN2 BIOS and special control software on cards like the KingPin or Galax HoF OCL, I think Nvidia's GPU Boost is as controlling as Nurse Ratched in _One Flew Over the Cuckoo's Nest_.


----------



## Regel

J7SC said:


> Did you already purchase the Heatkiller IV block as well ? If so, I would probably keep the Asus 2080 Ti Turbo, but if not, it would be a tough call at 50/50.
> 
> While Ampere (consumer) cards are expected to drop sometime in the fall, the previous gen (Turing RTX) launch certainly did not go smoothly, never mind early breakages due to 'test escapes'...may be waiting a bit after 'official availability' of the first Ampere consumer cards is not such a bad approach. Then there's the question of not only the delay between 'paper launch' and actual consumer availability at M$RP, but also the sequences of launch...for example "3080", then "3080 Ti" then "Titan Ampere"...perhaps that gets Ampere closer to the Christmas holiday season, depending which one you want. Finally, re. for example "3080 Ti", NVidia is likely to do their own FE fir$t for a bit, before custom PCB versions by vendors arrive. Another factor to consider is what big Navi' will bring from AMD around a similar time frame. As others stated, depending on your use case, a well cooled non-A 2080 Ti could keep you happy for quite a while until either Ampere and/or big Navi are available in reasonable quantities and flavours / prices.





rustyk said:


> If you're going to chuck a load of money at a 2080TI now, trying to max performance, it's probably a bad time.
> What are you actually trying to achieve and what's your use case? Depending on what you paid, it might make sense to keep it, use it and get the benefit now, then sell and upgrade at the appropriate time.
> 
> A friend of mine bought a non-A 2080, watercooled it and was still getting significant benefits from the lower temps and boost algorithms, even with the inherent power limitations.


Thank you both. I will just keep it for now and upgrade probably only in two years or so, probably skip the Ampere release.
I care mostly about video editing with a little gaming on the side.


----------



## keikei

Laithan said:


> I think the leaks in that YouTube video are likely true. From a business perspective it _has_ to be considerably faster than the 2080Ti or it would be a flop. It will certainly drive the prices of a 2080Ti down and this should be helpful for the 4K community overall. I'm *satisfied* with my 2080Ti especially when it sounds like $1400+ custom designs all over again for 3080Ti. That price point is difficult to get excited about (minus the mining craze).



Until the next-gen console ports arrive. You're going to want the RT, and devs will push the hardware at 30fps/4K. We're going to need better tech; Metro Exodus and RDR2 are already bringing our cards to their knees. I do agree on the pricing, considering Nvidia will most likely come out first. There is no incentive for them to lower prices if the performance is insane. We will buy it.


*I hope the new games will have unlocked frame rates. I'm finding too many locked at 60.


----------



## sultanofswing

J7SC said:


> Unless you run some LN2 bios and special control software on cards like the KingPin or Galax HoF OCL, I think NVidia GPU boost is as controlling as 'nurse Ratched' in _one flew over the cuckoo's nest_


Even the KINGPIN is susceptible to the GPU Boost algorithm. When I talked to Vince about it, he said they were able to disable GPU Boost, but it would throw the card into weird modes that made it not function properly.


----------



## gfunkernaught

sultanofswing said:


> Even the KINGPIN is susceptible to GPU boost algorithm. When I talked to Vince about it he said they were able to disable GPU boost but it would throw the card into weird modes that made the card not function properly.


So true. I tried the KP XOC BIOS and it did the exact same thing as the normal BIOS: temp hits 39-40°C, drop a clock step. It must be inherent and deeply hard-coded into the chip itself, which is probably why Vince had issues when he disabled it.


----------



## Mooncheese

kithylin said:


> Actually that's not true. Nvidia has released other video cards in the past where the top spec card of the new family is only +30% faster than the top spec card of the previous family (not including Titans) and they sold very well and they were released as a retail product. It is entirely possible based on the history of Nvidia products in the past that the new Ampere video cards could possibly only be +25% to +30% faster than Turing and still be sold as a successful product and be very profitable for the company. It does not have to be "50% faster" for Nvidia. They have no competition at the high end right now (As of typing this) and their new Ampere video cards will have zero competition on the high end when they release a few months from now. A smart thing to do from the business perspective would be to release new products only as fast over the previous one that people are willing to pay for. I'm sure if Nvidia wanted to do it they could release a card that is just +15% faster or +20% faster for first generation 7nm cards (Ampere) then the other +20% faster for the next family of 7nm (revised) products to milk more money out of their customers. There's really no reason to give us a single card that's all +40% faster in one gulp if they could split it up across two families for more profit. They may not do that this time around but as we get closer and closer to the physical limitations of silicon when they can't shrink chips any further (it's currently physically impossible to produce anything CPU, GPU, or Memory beyond 3nm for example) they'll have to stretch out their profits thinner and thinner at some point.


Except for Turing and the Kepler refresh (GTX 780, same architecture), when did this happen? Going back to Fermi, the generational leap has been 50%.

GTX 680, 50% faster than 580, as fast as 580 SLI
GTX 980, 50% faster than 780, as fast as 780 SLI
GTX 1080, 50% faster than 980, as fast as 980 SLI

GTX 980 Ti, 50% faster than 780 Ti, as fast as 780 Ti SLI 
GTX 1080 Ti, 60% faster than 980 Ti, as fast as 980 Ti SLI

GTX 970, as fast as 780 Ti @ 200W
GTX 1070, as fast as 980 Ti @ 200W


With the exception of the GTX 970, 980, 1070 and 1080, I've owned all of the cards in question (including GTX 580M SLI and GTX 680M SLI).

Turing is anomalous because NGreedia renamed the entire product stack one GPU tier higher in an attempt to justify an actual doubling of the asking price. FACT.

Example: 

RTX "2070": TU-106 (a 60-class SKU), no SLI (every 70-class card until now has had SLI; no 60-class card has), only as fast as the outgoing 80-class card, not the 80 Ti like every new 70-class card before it. MSRP: $600; historical price of a 60-class card: $350.
RTX "2080": only as fast as the outgoing 80 Ti, not 25-30% faster like every new 80-class card before it: $900; historical price of a 70-class card: $450.
RTX "2080 Ti": see above. $1200 before taxes; historical price of an 80-class card: $600.


----------



## J7SC

sultanofswing said:


> Even the KINGPIN is susceptible to GPU boost algorithm. When I talked to Vince about it he said they were able to disable GPU boost but it would throw the card into weird modes that made the card not function properly.





gfunkernaught said:


> SO true. I tried the KP XOC bios and it did the same exact thing as normal bios. Temp hits 39-40c, drop a clock step. It must be inherent and deeply hard coded into the chip itself which is probably why Vince had issues when he disabled it.


 
Yeah, the Nvidia boost algorithm is pretty much baked in. I did see a blog post elsewhere re. the _old "K-Boost"_ where this came up with a Galax HoF OCL and custom software, with pretty much the same (non-)result. In the old days, per Anandtech, "_K-Boost disables all of the power saving features that current video cards use... The major benefit of K-Boost is to help remove the sudden drops in voltage and clock speeds that happen dynamically when stressing video cards_."

...I still get the best overall results leaving my 2080 Tis completely alone (stock BIOS) and just using the MHz and PL sliders in MSI AB, without touching the voltage slider.

For those who know who nurse Ratched is...


----------



## Asmodian

Mooncheese said:


> Turing is anomalous because NGreedia renamed the entire product stack one GPU higher in an attempt to justify an actual doubling in asking price. FACT.
> 
> Example:
> 
> RTX "2070", TU-106 SKU (60 card SKU), no SLI (every 70 card up until here has had SLI, no 60 card has had SLI), only as fast as outgoing 80 card, not the 80 Ti card like every new 70 card before it. MSRP: $600, historical price of 60 card: $350
> RTX "2080", only as fast as the outgoing 80 Ti card, not 25-30% faster like every new 80 card before it: $900, historical price of 70 card: $450
> RTX "2080 Ti" see above. $1200 before taxes, historical price of 80 card: $600


A publicly traded company (legally required to try to create value for their shareholders) realized they could sell their products for more money so they did. Shocking! What scum! 

Nvidia is not a public benefit corporation. Trying to shame them into lowering their prices is not going to work, that takes competition. How do you want the world to work? Should we implement price controls on GPUs now? 

This moral outrage over prices is pretty ridiculous, as if we are all somehow entitled to relatively cheap GPUs. Don't like the price? Don't buy it. But this moral outrage just reeks of misplaced entitlement. Also, you are weirdly extrapolating from a few releases: "it was like this three times in a row so it must always be like this!" The world changes, things won't always happen the same way, and you are not entitled to a GPU X% better for the same price every two years just because it happened a few times in a row.


----------



## Laithan

Asmodian said:


> A publicly traded company (legally required to try to create value for their shareholders) realized they could sell their products for more money so they did. Shocking! What scum!
> 
> Nvidia is not a public benefit corporation. Trying to shame them into lowering their prices is not going to work, that takes competition. How do you want the world to work? Should we implement price controls on GPUs now?
> 
> This moral outrage over prices is pretty ridiculous, as if we are all somehow entitled to relatively cheap GPUs. Don't like the price? Don't buy it. But this moral outrage just reeks of misplaced entitlement. Also, you are weirdly extrapolating from a few releases: "it was like this three times in a row so it must always be like this!" The world changes, things won't always happen the same way, and you are not entitled to a GPU X% better for the same price every two years just because it happened a few times in a row.


I don't see anyone saying those things. The history of _*why*_ GPU prices were driven far beyond the accepted norm cannot be ignored and must be part of the equation. The mining craze was *THE* reason GPU prices inflated beyond "reasonable", and it had nothing to do with R&D, fabrication or cost of production... The way I see it, the only expectation the majority of the public really had was simply _the return of normal pricing_ as the GPU mining craze came to an end. As we know, this did not occur, and this reason alone, with no other influences or external bearings, is enough to define a reasonable expectation. There's no question that shareholders appreciated the mining craze... but NVIDIA keeping GPU prices inflated despite the mining craze being over is where I would direct the _*moral criticism*_. Many might define this as corporate greed and others as just a profitable business decision, but at the end of the day PC gaming as a whole becomes more and more threatened when consoles are giving you ENTIRE SYSTEMS for less than the price of just the GPU on PC. If that's not alarming, I'm not sure what is.


----------



## Asmodian

Laithan said:


> I don't see anyone saying those things.


"NGreeda" is not implying moral outrage?



Laithan said:


> NVIDIA keeping GPU prices inflated despite the mining craze being over is where I would direct the _*moral criticism*_. Many might define this as corporate greed and others just a profitable business decision but at the end of the day PC Gaming as a whole becomes more and more threatened when consoles are giving you ENTIRE SYSTEMS for less than the price of just the GPU on PC. If that's not alarming I'm not sure what is.


*The reason GPU prices are higher now is because Nvidia realized they can sell GPUs for more money.* The mining craze may be what clued them in, but it is this reality that sets the prices. It is only when they start selling so few GPUs that they make less money that the prices will drop. Pretty much every business operates the same way, though most have a lot more competition to keep things in check. There are no marketing departments that go "X% margins are too high; they would still sell fine at that price, but there is no reason we need to make that much, so we will charge less." This is simply not the way any company's marketing department thinks.

There is zero chance I will buy a console and Nvidia knows this. They will still happily sell you a mid range GPU that competes with the current consoles at a market competitive price but they also realized there was a market for stupidly expensive GPUs and are catering to it.

Either we regulate the industry in some way or companies do this, that is just how it works. Calling Nvidia "NGreedia" and referencing historical prices as if Nvidia had an obligation to provide GPUs at below market prices because the market price used to be lower is totally misunderstanding how the world works, or at least how markets work in the capitalist cesspool that is relevant to this issue.


----------



## Mooncheese

Laithan said:


> I don't see anyone saying those things. The history of _*why*_ GPU prices were driven far beyond the accepted norm cannot be ignored and must be part of the equation. The mining craze was *THE* reason GPU prices inflated beyond "reasonable", and it had nothing to do with R&D, fabrication or cost of production... The way I see it, the only expectation the majority of the public really had was simply _the return of normal pricing_ as the GPU mining craze came to an end. As we know, this did not occur, and this reason alone, with no other influences or external bearings, is enough to define a reasonable expectation. There's no question that shareholders appreciated the mining craze... but NVIDIA keeping GPU prices inflated despite the mining craze being over is where I would direct the _*moral criticism*_. Many might define this as corporate greed and others as just a profitable business decision, but at the end of the day PC gaming as a whole becomes more and more threatened when consoles are giving you ENTIRE SYSTEMS for less than the price of just the GPU on PC. If that's not alarming, I'm not sure what is.


Spot on, I couldn't have summed it up better myself; I've been saying the same thing. This whole retort of "if you don't like the prices then don't buy" completely misses the broad-side-of-the-barn fact that these prices, coupled with next-gen consoles that can offer a 4K @ 60 FPS experience for at or under $1k out the door (nearly the asking price of the 20-series "80 card" at launch), would, if left unchecked, literally spell doom for the entire PC enthusiast industry. Want to take a trip down memory lane? How large was PC gaming in the late 1990s and early 2000s? It was a niche hobby. Do we want this to return to a niche hobby with outrageous prices and the inability to compete with consoles? I don't.

Thank god NGreedia can't continue with their shenanigans, with Big Navi, the next-gen consoles, and Intel releasing a discrete GPU next year. Everyone in disbelief at the Moore's Law leaks from a price-performance perspective (RTX 3070 as fast as a 2080 Ti for $500, NO IT CAN'T BE) fails to understand that NGreedia can no longer get away with its shenanigans with competition in the market and crypto mining no longer presenting a demand for their discrete GPUs.

Turing is what a monopoly looks like, and if your only comment about monopolies is "if you don't like it don't buy it" you must be one of the .01% economic elite for whom money is no object. I CAN'T STAND this form of dismissive, classist elitism. It disgusts me. It's everything that is wrong with this culture, this socio-economic arrangement, and the vast majority of the problems plaguing humanity.

Basically NGreedia doubled the price of the entire product stack with their rename scam, and if you publicly complain about it you're entitled. What a farce!


----------



## Mooncheese

Asmodian said:


> "NGreeda" is not implying moral outrage?
> 
> 
> 
> *The reason GPU prices are higher now is because Nvidia realized they can sell GPUs for more money.* The mining craze may be what clued them in but it is this reality that sets the prices. It is only when they start selling less GPUs to the point they make less money that the prices will drop. Pretty much every business operates the same way, though most have a lot more competition to keep things in check. There are no marketing departments that go "X% margins are too high, they would still sell fine at that price but there is no reason we need to make that much so we will change less." This is simply not the way any companies' marketing department thinks.
> 
> There is zero chance I will buy a console and Nvidia knows this. They will still happily sell you a mid range GPU that competes with the current consoles at a market competitive price but they also realized there was a market for stupidly expensive GPUs and are catering to it.
> 
> Either we regulate the industry in some way or companies do this, that is just how it works. Calling Nvidia "NGreedia" and referencing historical prices as if Nvidia had an obligation to provide GPUs at below market prices because the market price used to be lower is totally misunderstanding how the world works, or at least how markets work in the capitalist cesspool that is relevant to this issue.


Yes Nvidia has learned about the Brave New World "market dynamics": 

https://www.cnbc.com/2018/01/18/few-americans-have-enough-savings-to-cover-a-1000-emergency.html

https://www.forbes.com/sites/noahki...ottom-50-of-country-study-finds/#1f3feb863cf8

https://inequality.org/facts/income-inequality/

Do you think the average citizen is needlessly upgrading their iPhone every year? Who can afford $1000 on a phone, which is now essentially a fashion item, a status symbol? No, the well-heeled, now comprising the .01% and the 10% service class that serves them, buy these needlessly every year. Hence the stand for the new iMac costing $1k. The electronics industry has shifted its product line to the only remaining demographic that can afford it: the .01% economic "elite" and the 10% service class that serves them. Take a look at the Steam Survey: more people have a GTX 1060 than anything else

https://store.steampowered.com/hwsurvey/videocard/

That should speak volumes about Nvidia's newfound demographic discovery. 

If you're not among the .01% economic elite or the 10% service class, you're pretty much SOL and should just "not buy it if you don't like the prices". Or you can eat Top Ramen for 6 months straight, or go without replacing your shoes... you get the idea.

Unfortunately for me, it seems that here in the 2080 Ti thread the vast majority of owners (who parted with $1350 after taxes for the 80-class card, knowingly or not) are somehow among the privileged economic elite in this country, and when you bring up how monopolies are harming the hobby, or how Nvidia lied about the names of their products to double the price, it's met with "if you don't like it don't buy it; I, on the other hand, can buy whatever I want and shall do so".

What I don't like is this: the 80 Ti card was always an awesome card price-performance-wise; it was always Titan-level performance for a little more than the 80 card ($700). They changed all that with Turing and their rename scam. Now I can part with another $500 for only a decent bump in performance coming from the previous-gen 80 Ti card.

I don't have money flowing out of my ears, I'm passionate about the hobby, and this is where my money goes, so yeah, I'm a bit indignant in this regard.


----------



## Mooncheese

Interesting article that brings up relevant points I attempted to bring up here: 

https://www.inc.com/minda-zetlin/apple-999-monitor-stand-overpriced-wwdc-john-ternus.html



----------



## Asmodian

Mooncheese said:


> Yes Nvidia has learned about the Brave New World "market dynamics":
> 
> https://www.cnbc.com/2018/01/18/few-americans-have-enough-savings-to-cover-a-1000-emergency.html
> 
> https://www.forbes.com/sites/noahki...ottom-50-of-country-study-finds/#1f3feb863cf8
> 
> https://inequality.org/facts/income-inequality/
> 
> Unfortunately for me, it seems that here in the 2080 Ti thread the vast majority of owners (who parted with $1350 after taxes for the 80-class card, knowingly or not) are somehow among the privileged economic elite in this country, and when you bring up how monopolies are harming the hobby, or how Nvidia lied about the names of their products to double the price, it's met with "if you don't like it don't buy it; I, on the other hand, can buy whatever I want and shall do so".
> 
> What I don't like is this: the 80 Ti card was always an awesome card price-performance-wise; it was always Titan-level performance for a little more than the 80 card ($700). They changed all that with Turing and their rename scam. Now I can part with another $500 for only a decent bump in performance coming from the previous-gen 80 Ti card.
> 
> I don't have money flowing out of my ears, I'm passionate about the hobby, and this is where my money goes, so yeah, I'm a bit indignant in this regard.


This is a very reasonable reaction and I have no issues at all with a post like yours. I agree with all of your points as well. It is the "NGreedia" and similar that frustrates me. As if this behavior is just a failing in the morals of the people at Nvidia, so if only they were better people we would have cheap GPUs.

Monopolies enforced by IP laws get very complicated, and society has no idea how to modernize these ideas, but it frustrates me when people act like this is something Nvidia is uniquely morally responsible for instead of the natural result of the market. Every single company that is big enough to make a good GPU would do the same. If you have a unique product people want, you charge through the roof for it. This is not always the right thing to do and I would love for society to figure out a better way, but it will take more thought than calling them names and only looking at the fastest GPU. We cannot think of this situation as a moral failing of a group of "them" if we want to figure out a solution. This is a failure of the system, not individuals.


----------



## kithylin

Mooncheese said:


> Except for Turing and Kepler refresh (GTX 780, same architecture) when did this happen? Going back until Fermi the generational leap has been 50%.
> 
> GTX 680, 50% faster than 580, as fast as 580 SLI
> GTX 980, 50% faster than 780, as fast as 780 SLI
> GTX 1080, 50% faster than 980, as fast as 980 SLI
> 
> GTX 980 Ti, 50% faster than 780 Ti, as fast as 780 Ti SLI
> GTX 1080 Ti, 60% faster than 980 Ti, as fast as 980 Ti SLI
> 
> GTX 970, as fast as 780 Ti @ 200W
> GTX 1070, as fast as 980 Ti @ 200W
> 
> 
> With the exception of the GTX 970, 980, 1070 and 1080, I've owned all of the cards in question (GTX 580M SLI and GTX 680M SLI).


You omitted and incorrectly compared a few cards there. If you're comparing the GTX 580 to the GTX 680, then you are comparing the "top of the stack" of each family. Carrying that forward (and backward), we would compare the GTX 780 Ti to the GTX 980 Ti, which was only a +40% gain in most games at the time.

Also if you go back a ways, the GTX 580 was only roughly +20% faster (less in some games) vs the GTX 480. The sane thing to do back then for most people was to stay on the 480 and wait to upgrade to the 680 instead.

And go back even further. The GTX 285 was only +30% faster than the 9800 GTX. And the 9800 GTX launch was kind of a crappy launch too. It was only roughly +25% faster than the 8800 Ultra. We weren't seeing large +50% gains with each family until the 600, 700, 900, 1000, and 2000 series.

Nvidia has launched several products that were not significantly better than the previous ones at different points in time. It's not a new concept for them. They could realistically launch a 3080 Ti that would only be, say, +30% faster in raster performance but 4x faster on the Tensor core component, and it would still sell like hotcakes.



Mooncheese said:


> RTX "2070", TU-106 SKU (60 card SKU), no SLI (every 70 card up until here has had SLI, no 60 card has had SLI), only as fast as outgoing 80 card, not the 80 Ti card like every new 70 card before it. MSRP: $600, historical price of 60 card: $350


By the way your statement about the 60 series cards is not true. GTX 260, GTX 460, GTX 560, GTX 660, and GTX 760 all supported SLI. GTX 260 and GTX 760 supported 2-way and 3-way SLI.


----------



## rustyk

Mooncheese said:


> Yes Nvidia has learned about the Brave New World "market dynamics":
> 
> https://www.cnbc.com/2018/01/18/few-americans-have-enough-savings-to-cover-a-1000-emergency.html
> 
> https://www.forbes.com/sites/noahki...ottom-50-of-country-study-finds/#1f3feb863cf8
> 
> https://inequality.org/facts/income-inequality/
> 
> Do you think the average citizen is needlessly upgrading their iPhone every year? Who can afford $1000 on a phone, which is now essentially a fashion item, a status symbol? No, the well-heeled, now comprising the .01% and the 10% service class that serves them, buy these needlessly every year. Hence the stand for the new iMac costing $1k. The electronics industry has shifted its product line to the only remaining demographic that can afford it: the .01% economic "elite" and the 10% service class that serves them. Take a look at the Steam Survey: more people have a GTX 1060 than anything else
> 
> https://store.steampowered.com/hwsurvey/videocard/
> 
> That should speak volumes about Nvidia's newfound demographic discovery.
> 
> If you're not among the .01% economic elite or the 10% service class, you're pretty much SOL and should just "not buy it if you don't like the prices". Or you can eat Top Ramen for 6 months straight, or go without replacing your shoes... you get the idea.
> 
> Unfortunately for me, it seems that here in the 2080 Ti thread the vast majority of owners (who parted with $1350 after taxes for the 80-class card, knowingly or not) are somehow among the privileged economic elite in this country, and when you bring up how monopolies are harming the hobby, or how Nvidia lied about the names of their products to double the price, it's met with "if you don't like it don't buy it; I, on the other hand, can buy whatever I want and shall do so".
> 
> What I don't like is this: the 80 Ti card was always an awesome card price-performance-wise; it was always Titan-level performance for a little more than the 80 card ($700). They changed all that with Turing and their rename scam. Now I can part with another $500 for only a decent bump in performance coming from the previous-gen 80 Ti card.
> 
> I don't have money flowing out of my ears, I'm passionate about the hobby, and this is where my money goes, so yeah, I'm a bit indignant in this regard.


I've seen you bring this up time and time again, on other forums as well, along with the 'NGreedia' and privileged-elite bit.
The issue I have is that you're talking about the highest of the high end, a luxury item. In a lot of other industries the 'elite' buy the high end (paying over the odds), then the tech trickles down the product stack and overall there is progress for everyone.
I still can't believe you bought a 2080 Ti, since propping up the second-hand market is important to those that buy new, then sell to recoup costs and upgrade to the next elite card.

Your fixation on Nvidia's naming scheme is silly, as it's arbitrary and a function of marketing. People read reviews, see how much it costs, then decide if the benefit is worth it to them or not.
Also, I've been PC gaming for 30+ years and people have always talked about the death of PC gaming, but it never materialises. And actually, so what if it did? The world changes.


----------



## jblanc03

*68°C watercooled! Is this normal???*

I have the RTX Titan with the EK Vector waterblock, and I used the Frostbite 2 thermal paste, which is supposed to be good, right?

I have the card in its own separate cooling loop, with two MCP5 pumps in series pushing through 1/2" ID tubing and two 480 radiators in push-pull config.

Before this card I was using the Maxwell GTX Titan and never saw temps rise above 50 degrees Celsius under load and overclocked!

So my question is: is 68 degrees Celsius and above normal for this card under load? Mind you, this is stock with no overclocking whatsoever.


----------



## kithylin

jblanc03 said:


> I have the RTX Titan with the EK Vector waterblock, and I used the Frostbite 2 thermal paste, which is supposed to be good, right?
> 
> I have the card in its own separate cooling loop, with two MCP5 pumps in series pushing through 1/2" ID tubing and two 480 radiators in push-pull config.
> 
> Before this card I was using the Maxwell GTX Titan and never saw temps rise above 50 degrees Celsius under load and overclocked!
> 
> So my question is: is 68 degrees Celsius and above normal for this card under load? Mind you, this is stock with no overclocking whatsoever.


If you're using the EVGA Frostbite 2 thermal paste, then it is probably quite poor compared to other, better pastes on the market. It hasn't been independently tested yet (it's still brand new), but the thermal conductivity on the back of the packaging is one of the lowest among thermal pastes commonly used for computer parts.

1.6 W/mK - EVGA FrostBite 2
8.9 W/mK - Arctic Silver 5
12.5 W/mK - Thermal Grizzly Kryonaut
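For scale, a one-dimensional conduction estimate shows what those W/mK numbers mean in practice (ΔT = Q·t / (k·A)). The 50 µm layer thickness and 250 W load are illustrative assumptions; the 754 mm² die area is the TU102 figure from the spec sheet, and real pastes also differ in how thinly they spread:

```python
# Steady-state conduction across a thin paste layer: dT = Q * t / (k * A).
# The 50 micron thickness and 250 W load are illustrative assumptions;
# 754 mm^2 is the TU102 die area from the spec sheet.

def paste_delta_t(k_w_per_mk, power_w=250.0, thickness_m=50e-6, area_m2=754e-6):
    """Temperature drop in kelvin across the paste layer."""
    return power_w * thickness_m / (k_w_per_mk * area_m2)

for name, k in [("FrostBite 2", 1.6), ("Arctic Silver 5", 8.9), ("Kryonaut", 12.5)]:
    print(f"{name} ({k} W/mK): {paste_delta_t(k):.1f} K across the layer")
```

Under these assumptions the 1.6 W/mK paste works out to roughly 10 K across the layer versus about 1.3 K for the 12.5 W/mK paste, which would be consistent with a several-degree difference in reported GPU temperature.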


----------



## jura11

jblanc03 said:


> I have the RTX Titan with the EK Vector waterblock, and I used the Frostbite 2 thermal paste, which is supposed to be good, right?
> 
> I have the card in its own separate cooling loop, with two MCP5 pumps in series pushing through 1/2" ID tubing and two 480 radiators in push-pull config.
> 
> Before this card I was using the Maxwell GTX Titan and never saw temps rise above 50 degrees Celsius under load and overclocked!
> 
> So my question is: is 68 degrees Celsius and above normal for this card under load? Mind you, this is stock with no overclocking whatsoever.



Hi there 

68°C under water is, in my opinion, way too high for any kind of water cooling.

I built a loop for a friend who is running an 8086K at a 5.2 GHz OC and an RTX 2080 Ti with the Galax 380W BIOS at a 2100 MHz OC on a single 360mm radiator, and his temperatures are 42-48°C under load; 48°C is the highest GPU temperature I have seen on that build in a prolonged gaming session.

I haven't used or tried Frostbite; I usually use NT-H1, Kryonaut or Thermalright TFX.

Make sure the waterblock has good contact on the core and VRM.

Hope this helps

Thanks, Jura


----------



## haleNpace

Hi, thanks to those who have shared their knowledge and support in this thread. It was quite easy to find information on the various types of these cards, and as a result I decided to upgrade from a 1080 FE.

There was a Click Frenzy sale and I purchased a Galax SG 1-Click OC. I was OK with the non-A chip; I played with the 1080 FE a lot and figured the extra grunt here without the OC headroom would be fine. I double-checked the order and the info here, and I just wanted the opinion of some experienced users.
In GPU-Z it lists my new card as 10DE 1E07. In the Nvidia BIOS tab it says 280W and the boost clock is "1545", so I am assuming it is really the non-A variant.

Mem type is listed as Samsung, if that helps?


----------



## Imprezzion

1E07 is A. 1E04 is non-A. It's an A chip with a reference BIOS.
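The check described here can be written as a tiny lookup on the PCI device ID that GPU-Z reports (shown above as "10DE 1E07"). Only the two device IDs mentioned in this thread are included; treat the table as an illustrative sketch, not an exhaustive list of TU102 variants:

```python
# Sketch of the device-ID check: the PCI vendor/device ID pair
# distinguishes the binned TU102 variants. Only the two IDs from
# this thread are listed; the table is illustrative, not exhaustive.

TU102_VARIANTS = {
    0x1E04: "TU102-300 (non-A)",
    0x1E07: "TU102-300A (A chip)",
}

def identify(vendor_id, device_id):
    if vendor_id != 0x10DE:  # Nvidia's PCI vendor ID
        return "not an Nvidia GPU"
    return TU102_VARIANTS.get(device_id, "unknown TU102 variant")

print(identify(0x10DE, 0x1E07))  # -> TU102-300A (A chip)
```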


----------



## haleNpace

Imprezzion said:


> 1E07 is A. 1E04 is non-A. It's an A chip with a reference BIOS.


Thank you, sir. It has a different part number than those listed on the first page, so I wasn't sure. I thought it might be misreported, as some cards had been in the past. The card is the V2 version and it lists different GPUs in the specs tab.

https://www.galax.com/en/graphics-card/20-series/galax-geforce-rtx-2080ti-sg-1clickoc-v2.html

I guess I was lucky? Thanks again.


----------



## Imprezzion

Any card labeled as OC from the manufacturer is an A chip, as the non-A can only be sold as a non-factory-overclocked card. If this is the OC model it has to be an A chip. You got lucky hehe.

Now, get to flashing the EVGA FTW3 Ultra A-chip BIOS and get clockin' to 2100 and beyond. If it's a card with the older compatible firmware, that is.


----------



## Yuke

Hey, 

I have a question about my thermals.

Basically, from the first day I got the *Gigabyte RTX 2080 Ti Gaming OC 11G*, the temps were out of control. I tried different PC cases (mid- and big-tower) and different Noctua fan setups (up to 8 additional fans). Pretty much wasted half the card's value on this bull****.

I undervolted the GPU to [email protected] and I still need 73% fan speed to keep it between 77°C and 82°C.

Here are some resources claiming very good temperatures (top tier):

https://www.guru3d.com/articles-pages/gigabyte-geforce-rtx-2080-ti-gaming-oc-11g-review,8.html

https://wccftech.com/review/gigabyte-geforce-rtx-2080-ti-gaming-oc-graphics-card-review/9/

https://youtu.be/GuZUwFINj7Y?t=259

https://youtu.be/BivkldMd2yE?t=88

-> Basically sub-70°C with lower fan speed, in some cases even when overclocked.

The last thing that comes to mind now is that they ****ed up the cooler installation in the factory, but checking for that would void the 3 years of warranty I have left.

So before opening the card up, I want to know if I maybe missed something I could still check, before killing my warranty...

Thanks.

PS: My card came with an already-upgraded BIOS (122% power limit) + (upgraded?) Samsung RAM, even though I bought it brand new here... maybe someone messed up the cooler installation afterwards :/


----------



## Medizinmann

GAN77 said:


> I noticed. This water block loves flow rates above 0.6 gallons per minute. Also with an active rear panel there are no o-rings for the connection ports. I used old rings from a regular port. Be careful.
> 
> Have a good installation!





Shawnb99 said:


> Everything loves a flow rate of at least 0.5GPM, that’s the lowest your flow should ever get. 1GPM is what you should be striving for.


Of course, a higher flow rate is always better... but...

Actually, if I interpret the numbers for my pump right (max. flow rate of 80 l/h, maybe a little more as I use two pumps), I am more on the 0.25-0.35 GPM side at most.

I never see GPU temps more than a few °C above water temps… with high fan speed on the rads, usually 40-43°C for the GPU after prolonged Time Spy loads using the Galax 380W BIOS, while also cooling an OCed 3900X, which adds up to around 550W to cool under full load. With my silent settings it goes up to 58°C, but that's with really low fan speeds.
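As a sanity check on those numbers, converting a pump's rated litres per hour to US gallons per minute is a one-liner (the 80 l/h figure is the rated pump spec quoted above):

```python
# Convert a pump's rated flow from litres/hour to US gallons/minute.

LITRES_PER_US_GALLON = 3.78541

def lph_to_gpm(litres_per_hour):
    return litres_per_hour / LITRES_PER_US_GALLON / 60

print(f"{lph_to_gpm(80):.2f} GPM")  # the rated 80 l/h is about 0.35 GPM
```

Note that two pumps in series add head pressure rather than doubling flow, which is why the real figure only sits "a little" above the single-pump rating.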

Greetings,
Medizinmann


----------



## Medizinmann

jblanc03 said:


> I have the RTX Titan with the EK Vector waterblock, and I used the Frostbite 2 thermal paste, which is supposed to be good, right?
> 
> I have the card in its own separate cooling loop, with two MCP5 pumps in series pushing through 1/2" ID tubing and two 480 radiators in push-pull config.
> 
> Before this card I was using the Maxwell GTX Titan and never saw temps rise above 50 degrees Celsius under load and overclocked!
> 
> So my question is: is 68 degrees Celsius and above normal for this card under load? Mind you, this is stock with no overclocking whatsoever.


I would agree with jura11 and say this is way too high...

I would assume there is a problem with the contact pressure applied.

BTW: I would do a repaste and go with Kryonaut, or better, Conductonaut (liquid metal).

Greetings,
Medizinmann


----------



## Imprezzion

Yuke said:


> Hey,
> 
> I have a question about my thermals.
> 
> Basically, from the first day I got the *Gigabyte RTX 2080 Ti Gaming OC 11G*, the temps were out of control. I tried different PC cases (mid- and big-tower) and different Noctua fan setups (up to 8 additional fans). Pretty much wasted half the card's value on this bull****.
> 
> I undervolted the GPU to [email protected] and I still need 73% fan speed to keep it between 77°C and 82°C.
> 
> Here are some resources claiming very good temperatures (top tier):
> 
> https://www.guru3d.com/articles-pages/gigabyte-geforce-rtx-2080-ti-gaming-oc-11g-review,8.html
> 
> https://wccftech.com/review/gigabyte-geforce-rtx-2080-ti-gaming-oc-graphics-card-review/9/
> 
> https://youtu.be/GuZUwFINj7Y?t=259
> 
> https://youtu.be/BivkldMd2yE?t=88
> 
> -> Basically sub-70°C with lower fan speed, in some cases even when overclocked.
> 
> The last thing that comes to mind now is that they ****ed up the cooler installation in the factory, but checking for that would void the 3 years of warranty I have left.
> 
> So before opening the card up, I want to know if I maybe missed something I could still check, before killing my warranty...
> 
> Thanks.
> 
> PS: My card came with an already-upgraded BIOS (122% power limit) + (upgraded?) Samsung RAM, even though I bought it brand new here... maybe someone messed up the cooler installation afterwards :/


Samsung RAM was a launch RAM option in some models. 122% is also what a Gigabyte OC model should have. The non-OC has 114% if I'm correct.

The thermal paste application from the factory is probably terrible lol. Just replace it with MX-4, Kryonaut, PK-3 or something similar. In most countries that would actually not void the warranty. You're in Germany, right? In the USA it doesn't void the warranty as they have right-to-repair laws. No idea if Germany has something similar. Here in Holland I believe we do, but don't quote me on that haha..


----------



## Mooncheese

rustyk said:


> I've seen you bring this up time and time again, on other forums as well, along with the 'NGreedia' and privileged-elite bit.
> The issue I have is that you're talking about the highest of the high end, a luxury item. In a lot of other industries the 'elite' buy the high end (paying over the odds), then the tech trickles down the product stack and overall there is progress for everyone.
> I still can't believe you bought a 2080 Ti, since propping up the second-hand market is important to those that buy new, then sell to recoup costs and upgrade to the next elite card.
> 
> Your fixation on Nvidia's naming scheme is silly, as it's arbitrary and a function of marketing. People read reviews, see how much it costs, then decide if the benefit is worth it to them or not.
> Also, I've been PC gaming for 30+ years and people have always talked about the death of PC gaming, but it never materialises. And actually, so what if it did? The world changes.


The problem I have is the rename-the-entire-product-stack-to-double-the-price scam.

80 Ti was never a card for the privileged "elite". 

80 Ti was always a performance bargain for those without money flowing out of their ears for the Titan card and who simply waited 6-12 months after Titan released. 

I don't have a problem with NGreedia selling a $1200 card to rich, impatient morons. 

I do have a problem when the only upgrade path from the previous 80 Ti card is a $1350 after-taxes 80 class card renamed the 80 Ti card.


----------



## Yuke

Imprezzion said:


> Samsung RAM was a launch RAM option in some models. 122% is also what a Gigabyte OC model should have. The non-OC has 114% if I'm correct.
> 
> The thermal paste application from the factory is probably terrible lol. Just replace it with MX4, Kryonaut, PK-3 or something similar. In most countries that would actually not void warranty. Your in Germany right? In USA it doesn't void warranty as they have right to repair laws. No idea if Germany has something similar. Here in Holland I believe we do but don't quote me on that haha..



Hey,

in the reviews they also have the "Gaming OC" version, capped at a 111% power limit; I've read that Gigabyte at some point increased the limit with a BIOS flash. I really hope it's a "bad paste application" issue that I can resolve... ordered a tube of Kryonaut yesterday. The worst thing is that I know I can do over +900 MHz on the VRAM if I don't undervolt... but the freaking temperatures (and NOISE, oh god the noise when the fan is over 73%) are preventing me from going nuts on the card...

Regarding warranty here: as far as I know it's a matter of goodwill, what we call "Kulanz". Basically most are OK with RMA/warranty claims if it doesn't look like you handled the merchandise with a sledgehammer... but you never know if whoever handles your issue is having a bad day...


----------



## Mooncheese

kithylin said:


> You omitted and incorrectly compared a few cards there. If you're comparing the GTX 580 to the GTX 680, then you are comparing the "top of the stack" of each family. Carrying that forwards (and backwards) we would compare the GTX 780 Ti to the GTX 980 Ti which was only a +40% gain in most games at the time.
> 
> Also if you go back a ways, the GTX 580 was only roughly +20% faster (less in some games) vs the GTX 480. The sane thing to do back then for most people was to stay on the 480 and wait to upgrade to the 680 instead.
> 
> And go back even further. The GTX 285 was only +30% faster than the 9800 GTX. And the 9800 GTX launch was kind of a crappy launch too. It was only roughly +25% faster than the 8800 Ultra. We weren't seeing large +50% gains with each family until the 600, 700, 900, 1000, and 2000 series.
> 
> Nvidia has launched several products that were not significantly better than the previous ones at different points in time. It's not a new concept for them. They could realistically launch the 3080 Ti that would only be say +30% faster for raster performance but +4x faster for the Tensor core component and it would still sell like hotcakes.
> 
> 
> By the way your statement about the 60 series cards is not true. GTX 260, GTX 460, GTX 560, GTX 660, and GTX 760 all supported SLI. GTX 260 and GTX 760 supported 2-way and 3-way SLI.


980 Ti only a 40% gain over 780 Ti, *****? You conveniently missed the bit where I said I owned all of the 80 Ti cards in question: 

From left to right, Single 780 Ti (EVGA SC 2) overclocked, 780 Ti stock clocks, 780 Ti SLI overclocked, 980 Ti overclocked (MSI 6G Gaming), 1080 Ti overclocked (FE), 2080 Ti overclocked (XC2 Ultra on FTW3 vbios).

Single 780 Ti: 13,551
Single 980 Ti: 21,381

= 38%

My mental math was incorrect here, I was thinking that the difference between 13 and 21 was roughly half of 13.5 (7) and so I attributed 50% to the performance increase when in fact it's closer to 40%. 
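For what it's worth, the percentage depends entirely on which card you take as the baseline, which is where numbers like 38%, 40% and 50% for the same pair of scores come from. A quick sketch using the two Graphics scores above (the function names are mine, just for illustration):

```python
def pct_uplift(old_score, new_score):
    """Gain of the new card over the old one, with the OLD card as baseline."""
    return (new_score - old_score) / old_score * 100

def pct_deficit(old_score, new_score):
    """Shortfall of the old card vs the new one, with the NEW card as baseline."""
    return (new_score - old_score) / new_score * 100

# Firestrike Graphics scores quoted above
gtx_780ti, gtx_980ti = 13551, 21381

print(f"980 Ti uplift over 780 Ti: {pct_uplift(gtx_780ti, gtx_980ti):.1f}%")
print(f"780 Ti deficit vs 980 Ti:  {pct_deficit(gtx_780ti, gtx_980ti):.1f}%")
```

Dividing the gap by the faster card's score gives the smaller figure; dividing by the slower card's score gives the larger one.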

Anyhow, overclocked 780 Ti SLI is 24,xxx and overclocked 980 Ti is 21,xxx; you get the idea. A single 980 Ti is nearly as fast as overclocked 780 Ti SLI. With the 2080 Ti we don't see this; we see something closer to comparing, say, a GTX 1080 to 980 Ti SLI (overclocked; the 980 Ti was massively underclocked from the factory @ 1250 MHz when it could easily do 1450-1500 MHz). 

1080 Ti is again nearly 50% faster than 980 Ti (31k vs 21k GPU). 

2080 Ti, going by Firestrike, which isn't exactly fair (because there are larger gains in other benchmarks more indicative of the real-world performance disparity, i.e. Timespy), is a 25% increase, from 31k to 39k GPU. 

The combined score with the 2080 Ti bench is skewed heavily in favor of 2080 Ti so at the very end I added an 8700k + 1080 Ti bench: 

https://www.3dmark.com/compare/fs/2...s/5667438/fs/12056672/fs/22717185/fs/20090084

Any performance comparison between generations that ignores the lithography changes is meaningless. Turing was basically the Kepler-refresh 700 series all over again (16nm to 12nm, essentially the same architecture). The node is now dropping to 10nm and 7nm respectively. A 3080 Ti absolutely WOULD NOT sell like hotcakes @ only a 30% increase in performance; otherwise NGreedia would 100% gimp the card to be only 30% faster. They aren't gimping it, "The way consumers should be owned", this time around because of the competition I've stated 15 thousand times right here in this thread: a resurgent AMD. If their next-gen console APU is the equivalent of a GTX 1070-1080 plus a Ryzen 3700X at only 100-130W, what do you think this same architecture is going to look like with a massive die at 300W? NGreedia obviously caught wind of the performance of Big Navi and can no longer gimp their products. They can also no longer continue to hike up the prices. We may not even see them try the "$800 for the 80 class card, but $900 launch price" shenanigans this time around; prices may be $100 lower right out of the gate. 

We won't have to wait very long though, Sept is only 3 months away, and I will absolutely return to this thread at that point in time to say I told you so.

Next gen consoles with AMD APU's means that games will natively support path tracing on AMD hardware and that NGreedia needs to put in the additional work driver side to get their hardware to play nice with the console ports. 

Vulkan is looking to supplant DX12 API as the dominant API. 

Next-gen consoles themselves mean any would-be volunteer for bamboozlement from Jacket Man will do the mental arithmetic (some will; but people buying expensive NGreedia GPUs are by and large rich morons for whom money is no object, the inverse relationship between wealth and intelligence in this society being understood) and conclude: "why would I pay $2500+ to build a PC when I can have the same level of performance (4K @ 60 FPS) at only $1000 AND have access to stellar console exclusives?" 

Jacket Man knows this. 

Hence no gimping and no price shenanigans this time around. 

NGreedia's continued survival in the gaming GPU market basically depends on it. 

I mean, there will always be rich morons needlessly upgrading so that they can have the highest score in 3DMark (see the 10900k thread), but NGreedia's continued financial health can't depend solely on that. 

All of this is happening in the worst economic time in our history. 

Think Greatest Depression. 

Think 40% unemployment. 

Think 70% of U.S. citizens without $500 on hand. 

NGreedia's rename-scam shenanigans are pretty much done for. They made this move on the tail end of the crypto bubble, and the decision to do so was influenced primarily by moving massive inventory of gaming GPUs to mining farms, in the U.S., China and elsewhere. Crypto isn't doing so well; people have moved on to ASIC miners. 

Ampere is releasing in a completely different economic landscape.


----------



## rustyk

Mooncheese said:


> ...
> I don't have a problem with NGreedia selling a $1200 card to rich, impatient morons.
> ...


Here we go, I knew it was just a matter of time. If you get so angry and judgemental when talking about high end gear, maybe you should take up a different hobby?
I'm being serious.


----------



## philhalo66

Yeah, I have to agree 68C is pretty darn hot for a custom loop. I have the hybrid 2080 Ti, and even with an overclock and running 4K games I top out at 56C with a 120mm AIO.


----------



## kithylin

Mooncheese said:


> 2080 Ti, going by Firestrike which isn't exactly fair (because there are larger gains in other benchmarks indicative of real world performance disparity, i.e. Timespy) and it's a 25% increase, from 31 to 39k GPU.


Just a note for you for the future. I would suggest you use userbenchmark to compare video cards and not firestrike. Userbenchmark takes tens of thousands of results from all sorts of different configurations (some overclocked, some stock) and computes an overall median average. Firestrike can vary wildly between systems and you're just looking at a couple samples from just a few people. Here's a link for you: https://gpu.userbenchmark.com/Compare/Nvidia-GTX-780-Ti-vs-Nvidia-GTX-980-Ti/2165vs3439

It's much better to get an overall view of how different cards actually compare in real-world performance.


----------



## Mooncheese

Asmodian said:


> This is a very reasonable reaction and I have no issues at all with a post like yours. I agree with all of your points as well. It is the "NGreedia" and similar that frustrates me, as if this behavior were just a failing in the morals of the people at Nvidia, so if only they were better people we would have cheap GPUs.
> 
> Monopolies enforced by IP laws get very complicated, and society has no idea how to modernize these ideas, but it frustrates me when people act like this is something Nvidia is uniquely morally responsible for instead of the natural result of the market. Every single company that is big enough to make a good GPU would do the same. If you have a unique product people want, you charge through the roof for it. This is not always the right thing to do and I would love for society to figure out a better way, but it will take more thought than calling them names and only looking at the fastest GPU. We cannot think of this situation as a moral failing of a group of "them" if we want to figure out a solution. This is a failure of the system, not individuals.


I agree to a certain extent, but the system is comprised of individuals. Nvidia actually priced Turing at such a level because of a rally in Nvidia shares driven by meteoric demand for Nvidia's gaming GPUs during the crypto bubble. Nvidia is now being sued because said market participants are claiming they did not know that $1bn worth of mining GPU sales was attributed as demand from the gaming public.

https://www.overclock.net/forum/225...king-1-billion-mining-gpu-revenue-gaming.html

Nvidia had to have considered that such a sharp divergence in pricing (especially when you acknowledge that the entire Turing stack was renamed one GPU tier higher in an attempt to justify a doubling of the historic price) would harm consumer confidence (read: consumers, not investors), and, as with the behavior of all unregulated corporations, they chose short-term gain (long-term losses manifest as a collapse in consumer confidence, especially pronounced in the presence of a resurgent market competitor). 

Read my sentiment here. 

Know that I've purchased every 80 Ti card going back to Kepler. 

I'm a partially disabled combat veteran. My income is $1,169 a month (60%), and after rent, utilities and food I'm left with around $200 a month. 90% of my recent upgrades (from 4930k to 8700k, from AIOs to a full 10/14 hard-tube acrylic loop, from 1080 Ti to 2080 Ti) were made possible with disability back-payment. You file a claim for some service-connected issue you have from 8.5 years of active-duty Army and one tour of Afghanistan (OEF 2004-2005, 25th Infantry Division), and it can take 6 months for them to make a decision, about half the time in your favor (I'm still trying to press for Anthrax vaccine injury, which was untested by the FDA before administration; many of us are suffering from a suite of symptoms that manifest as Gulf War Illness: fatigue, joint pain, multiple chemical sensitivity, etc.), and if they decide in your favor they present you with back-payment to the date you submitted the claim. 

Getting a 2080 Ti used was quite a challenge, and ultimately getting a good deal required finding one local, offline (Craigslist), in one instance with no original receipt nor warranty, just to get one for a reasonable price: $750 with water block the first time, a card which succumbed to memory failure (Micron) after switching from the Alienware cooler to the water block and running at more than 280W (the original vbios was limited to 280W @ 111% PT for whatever reason; this was TU-102-300A-K1-A1, so basically I had a 300A card with a non-300A vbios). The second 2080 Ti I managed to get an amazing deal on: again a second-hand XC2 Ultra, with 2.5 years remaining on the warranty (transferred over via EVGA), with Samsung B-die, and with the ICX sensors from the FTW3 (I can see mem, VRM and an additional GPU core temp probe outside of socket), which happens to be compatible with the left-over Phanteks Glacier block, for $900. 

So this has been a costly venture, and buying offline proved a costly gamble; the total cost of the upgrade from 1080 Ti to 2080 Ti was $1650. 

New, we are looking at $1450 after tax for the card itself (the XC2 demands a $50 price premium over the XC Ultra, well worth it for the ability to see the temps all about the card) and another $200 after tax and shipping for a water block, or about the same cost. Ultimately I paid $1650 coming from a 1080 Ti, the only upgrade path being a 2080 Ti plus a water block. 

That's nearly 9 months of saving for me and I can't do anything, I can never go out to eat anywhere, I can't do anything that requires spending money. I do all of my cooking in the home, I don't have a gym membership (I exercise outside in direct sunlight daily), I don't have a phone bill. 

9 months savings for the upgrade from 1080 Ti to 2080 Ti. 

Nvidia decided to sacrifice the entire segment of their consumer-base that makes a generational 80 Ti purchase @ $700 in a vain attempt to satisfy investor confidence. 

Because HERE'S THE REALITY: 

Investor confidence and the buying of shares skyrocketed until the tail end of 2017 and the crash of the crypto bubble in Nov 2017, by which I was also directly affected (I use my PC to heat my living space, and mining was a way to offset my heating bill and generate a small income, typically $100 a month @ $3-5 a day on Nicehash, up until the crash of 2017). Because of now-extant economic conditions (unprecedented unemployment, the offshoring of jobs, people with capital seeking more capital), people began buying up GPUs here in the U.S. hand over fist, with the expectation of a return on investment. I remember seeing it here in the forum and I remember castigating a certain member vigorously about it (KickAssCop, who at one point bragged about going into a local Fry's and buying 7 or 8 1080 Tis, "all of them left on the shelf", so that he could mine crypto). I told them they were going to harm the PC gaming consumer base, because no one who wanted to build a PC for gaming could buy one; if cards were available at all they were marked up, by scalpers at first and then by actual merchants (the 1080 Ti was going for like $900 on Amazon and Newegg); and they would drive up the prices of GPUs going into the future, which is exactly what happened with Turing. 

Crypto crashed at the point where Nvidia was fairly far along in the development of Turing-based GPUs (the 20 series started shipping in Sept 2018). After crypto crashed they probably decided to add new, already-planned features that would make the masses salivate, such as ray tracing and DLSS. It is true that research and development costs for Turing likely exceeded those of all previous architectures (DLSS in particular). It was emergency time at Nvidia, because they now had to satisfy the massive spike in stock-investor participation while GPU sales to crypto had plummeted, and along with them, sales from their gaming sector. So they had a big pow-wow, a "brainstorm", and Jensen Huang, the captain of industry that he is, probably said something along the lines of "it's emergency time for gaming GPUs, we need to introduce something that instills enough excitement to both create demand and justify an increase in price!" (they needed to double the price of the inventory because of the collapse in gaming GPU sales that followed the collapse in the value of BTC, and with it the demand for gaming GPUs intended to be used just for mining). They rightly understood that the consuming public would be outraged if they actually doubled the price of their products tier for tier, so they resorted to lying to the consuming public by renaming the entire product stack one tier higher. I.e., the RTX "2070": a 60-card SKU, no SLI, only as fast as the outgoing 80 card rather than the 80 Ti card, or about 30% less than the generational performance increase going back to Fermi, for double the asking price: $600 vs $350. 

GTC 2018, the sweat on the brow, Jensen's gait and body language conveying anxiety and apprehension, the constant need to repeat, in a near mumbling manner "it just works", "just buy it", "it just works", "just buy it". 

They were in quite the pickle, and at that point they were basically forced into defrauding the consumer base. The debacle that ensued was best summarized by AdoredTV's astute Turing analysis, where he pointed out the intentional vagueness of the charts Nvidia used at GTC to exaggerate the performance increase relative to the outgoing generation, and by Tom's Hardware's Nvidia-sponsored "article" about Turing, concluding (written by the editor, no less): "Just Buy It: Why Nvidia's Turing GPUs are worth the money" 






https://www.tomshardware.com/news/nvidia-rtx-gpus-worth-the-money,37689.html


Nvidia was also caught blacklisting review outlets and imposing a rather onerous review embargo (HardForum was hit by it) and an onerous NDA. 


Nvidia's newfound predicament (massive recent investor activity) dictated that they not only needed to keep profits as high as they were during the height of the crypto bubble, right up until Nov 2017, when crypto rallied to like $21k (1 BTC = $21k) and they literally couldn't keep up with demand (factories were running full bore; they were literally selling them hand over fist), but that said profits needed to continue along an upward trajectory and exceed that. 

With them not being able to rely on profit being generated through mass distribution they had to make up for that in profit per unit sold. 

Hence the need to double the price of their gaming GPU's. 

And now investor confidence is wavering because, going by Turing sales data, only a fraction of the consuming public can afford the cost of Nvidia's upper-tier gaming GPUs (hence the uncharacteristic and unprecedented Super variant release mid-cycle, which promised one tier higher performance at the lower-tier price). Some (but clearly not all) understood that the GPUs had doubled in price, and that to support Nvidia in doing this was to vote in favor of said behavior and similar future behavior. Not buying a new RTX 20 series GPU, especially if one didn't need to and with ample used 10 series GPUs in supply, is voting against this behavior. Just as I am urging all not to buy Intel 10th series CPUs (and continue to reward Intel for overpriced, dated mediocrity with high profit margins), I urged everyone not to buy the 20 series for a while. 

I only upgraded to the 2080 Ti because I had a lot of games that were at 60 FPS @ 3440x1440 and 2560x1440 (3D Vision; 60 FPS = 120 FPS 2D) and there were huge gains to be had. I was also curious about ray tracing and DLSS, I had already had my 1080 Ti for 3 years and wanted more power, and it was the only real meaningful upgrade to make to significantly improve my gaming experience with a CPU still future-resilient in 2020 (8700k @ 5.1 GHz, 0 AVX offset). 

And then there is the nature of the collapse in society / civilization in 2019, even before the onset of the SARS-CoV-2 "Plandemic", due to huge environmental crises: the vast majority of Australia on fire in 2018, the accelerated deforestation of the Amazon Rainforest, the hospice of the Great Barrier Reef, and the near-term melting of the Arctic sea ice in the summer, and with it some 50 gigatons of deep-sea frozen methane clathrates, molecule per molecule far stronger as a heat-trapping gas than CO2, and the cataclysmic setting into motion of 69 and counting feedback loops which, given present estimates, result in a near-term increase in near-surface air temperature of 4-6C (minimum time before event: 6 to 8 weeks after the Arctic sea ice melts completely at the summer extent, sometime in September). That would mean all cereal grasses (you know: corn, soy, wheat, sorghum, millet, rice, etc.) would no longer be able to germinate (they can't germinate at only another 4C), meaning the end of agriculture as we know it. We may be able to migrate agriculture closer to the poles, but that would require a large amount of land that isn't presently occupied by "civilization" and years of planning in advance, and we are going to deal with the human population bottleneck by way of famine long before any residual number of us survive and manage to migrate toward the poles. Also, the elites have plans, but you aren't being told about them because you're whom they intend to prune away. Yes. 
That's right, the rich and powerful rightly understand that we have a human overpopulation problem and, primarily, an overconsumption problem. If everyone lived and consumed at the level of a middle-class American (what's left of it, now comprising 10% of the U.S.; the service class is military, police, federal and even state-level bureaucracy; the .01% are Corporate / Shadow Govt.) we would need 5 planet Earths. 

They understand that industrial "civilization", guided by the dictates of corporate capitalism (the infinite-growth paradigm: that nature only exists as a resource, that industrial civilization is primary and the natural world secondary), is / was completely unsustainable, because when your economic paradigm views and treats the planet it depends on for survival as something that ought to be raped and ravaged, the planet invariably cannot survive that for very long. The BOE that is coming (Blue Ocean Event) will, in a truly poetic and karmic way, act as a catalyst for the super-organism that is Gaia, our planet (see James Lovelock's Gaia Hypothesis), to initiate an immune response to rid the super-organism of its disease (the disease: humans, as manifested in industrial "civilization"). Our human body is itself a super-organism comprised of smaller independent sentient forms of life: cells, and below that, at base, protons and atoms, quantum in nature (it's all quantum / conscious, from the proton / photon up to the molecule, the cell, the organelle, the organs, the body, the planet, the parent star(s), the solar system, the galaxy, the cluster, the supercluster; sorry for the tangent). The planet will mirror the body's immune response to a virus: putting on a fever to raise the temperature to a level where the invasive life (the virus, or the dominant species that has attained a Type 1 ability to capture power from the host organism; a Type 1 civilization on the Kardashev scale) can no longer survive. 

To me this is incredibly karmic. What is interesting is that evolution actually works this way: just as all pro-biota stand a much better chance at survival by working with the host (i.e. digestive bacteria), so too do organisms on planets. If any humans survive this, they will have learned the hard way that it is much better to treat your host with reverence and utmost respect, to limit the number of one's progeny, and to build an economy predicated on working with, rather than against, nature (see: Avatar). Speaking of Dyson spheres, we are at least 2k years behind where we ought to be, thanks to Standard Oil / the Rockefeller family dynasty and the need to keep reverse-engineered alien technology suppressed, because the public cannot know that we didn't even need to be on oil, gas, coal or even nuclear for the past 80 years (the Shadow Govt in the U.S. has had reverse-engineered anti-gravitic craft and near-limitless free energy since at least the mid-1950s).

Anyhow, here's some links, have fun. Sorry to go off on a tangent; needless to say we have other problems, and my purchasing decision was influenced by a prediction of a blue ocean event sometime between 2017 and 2021. I didn't want to wait for whatever was coming next because I didn't, and still don't, know how much time we have left before the "elites" pull the plug. I would venture to say that accelerating the melting of Arctic sea ice by way of the loss of the Aerosol Masking Effect (all air pollution exists as low-level ozone consisting mostly of, but not limited to, diesel particulate; think of it as an umbrella reducing insolation, the amount of sunlight striking the planet) is a way to kick-start that process, a process that will be supported by depopulation via mass mandatory vaccination. So we are looking at losing the food supply, plus mandatory vaccination, most likely next year. At the rate of heating (it was 80F in the Arctic as of yesterday, 86F in Siberia) we could see a BOE this summer, followed by potential famine as we lose the cereal grains needed to feed 7.8 billion people, followed by martial law being deployed and mandatory vaccinations distributed with the threat of relocation to an internment camp for those who refuse; these will be done by the military in case anyone attempts to defend themselves with force. And with everyone getting the vaccine, what are they putting into you? Because Anthony Fauci and NIAID (the National Institute of Allergy and Infectious Diseases) are on the payroll of Bill Gates, who just so happens to be in the Shadow Govt, whose operating system exists as a surveillance tool, and whose father was a renowned eugenicist. The Bill and Melinda Gates Foundation makes annual contributions to NIAID in the form of billions of dollars; not just NIAID but also the CDC and FDA are basically owned by Bill Gates and the pharmaceutical industry, who exist solely to perpetuate disease and illness (disease is profitable, good health is not). 

You should hear another member of the Shadow Govt's vocal sentiment on this matter, Henry Kissinger, who basically says that we need to get rid of the 90% of the world's population who are "useless eaters". 

They will primarily use famine as the means of population reduction, itself a result of the complete loss of Arctic sea ice in the summer and the setting into motion of 69 and counting positive feedback loops; then mandatory vaccines; and if that doesn't work, they have 5G that will be used to amplify disease in the already vaccinated. (EMF disrupts cellular communication; go ahead and tape your phone directly to your head for a few years and grow a brain tumor. 5G is as strong, EMF-wise, as holding a phone not just to your head but to your entire body from 100 feet away. They intend to put them on every city block, integrated into new light poles.) 

Their survival? The actual Shadow Govt have not only reverse-engineered anti-gravity but also what amount to teleportation devices that take them off-world to installations on other planets, some of which are habitable like Earth. They intend to come back in a few hundred years and repopulate if they can; that's assuming we don't end up like Mars, the last planet they populated, which at one point was a habitable desert world (with a span of rainforest near the poles, which had ice and hence glaciers and hence rivers) until our last global world war resulted in such cataclysm that we had to escape to Earth, or so it's theorized by Richard C. Hoagland. 

But the Shadow Govt isn't very big; many members of the upper 1% don't know of, and aren't privy to or a part of, this group. Said group has massive interconnected tunnels underneath the continental U.S. that will be used as a getaway when they decide to "pull the plug" by igniting the 69 and counting feedback loops that will result in an immediate, as in within 6 weeks' time, 4-6C increase in mean surface temperature. 

A "pandemic" is useful in this situation because it allows you to get ahead of the imminent collapse of the economy (due in 2020 primarily to the inability to keep increasing the federal deficit, a global Ponzi scheme). You can't otherwise declare martial law, give $4T to the banks, eliminate privacy (end-to-end encryption in text messaging) and create a sentiment conducive to and supportive of mandatory vaccination. 

The Bill and Melinda Gates Foundation was sued by Kenya because their tetanus vaccine rendered the women who got it infertile. That's the test phase. Then they need to create a pandemic. Very convenient when you just so happened to be engaged in exactly the catalyst, by way of joint research between Chinese and U.S. virologists in Wuhan, China, research being conducted there because it was deemed too dangerous to do in the U.S. and therefore illegal: gain of function / making superbugs. Then one "escapes" at your lab. And then the media, owned by said actors (Big Pharma and the health insurance industry), successfully spin the narrative and convince the public that the situation is a pandemic, accomplished via over-reporting (rewarding $16k per patient per COVID-19 diagnosis) and conflating instances of death with COVID-19, in conjunction with a massive underreporting of the rate of incidence, producing a mortality rate of 6% when in fact it should be closer to, on par with, seasonal flu. And now the people will scream for Bill Gates to save us, and he's got the vaccine ready. 

I can already hear it: 

"WE NEED MANDATORY VACCINES, WE NEED MANDATORY VACCINES, WE CAN'T HAVE THE ENTIRE ECONOMY SHUT DOWN FOR 4-5 MONTHS AGAIN!" 

The people will clamor at first, until they learn that refusal to take the vaccines results in a one-way trip to an internment camp, and that some of their friends who secretly questioned the official narrative and the lockdown are missing, or they witness them being taken off to an internment camp (a neighbor, for example); then the fear will set in, but by then it will be too late, as people have lost habeas corpus (already operant: you have no due process, because this has been declared a national emergency, and during times of emergency the federal govt. can, as of some bill passed during the Bush administration, do mass vaccinations by force or with the threat of relocation. They want to depopulate. Guess what they will feed you in the camps? Hint: it definitely won't be organic). The sodium fluoride in the water? That's not there for your teeth. The ADA was basically created by a confluence of industries in the 1950s who had an abundance of sodium fluorosilicate, which is sodium fluoride in dry form, hence silicate, and they intended to chemically lobotomize the masses, a trick they learned from the prominent Nazi scientists who came over after the war during Operation Paperclip (the forgiveness of war crimes and crimes against humanity, and the issuance of U.S. citizenship) and who were using it in the death camps during WW2. They learned early on that the best way to control the masses is through mass chemical lobotomization. 






Yeah, knowing all of this, and knowing that I only have a limited amount of time remaining on the planet I decided to go spend my money in pursuit of doing things I love, primary of which is playing video games. 

I have spent all of my money in pursuit of joy and wonder and excitement derived from this activity. 

There is no surviving this. 

99% of humanity will be eliminated. 

Highly unlikely anyone reading this is among the shadow govt who can stay underground (and offworld) for decades.


----------



## kithylin

Mooncheese said:


> Anyhow, here's some links, have fun. Sorry to go off on a tangent, needless to say we have other problems and my purchasing decision was influenced by a prediction of blue ocean event sometime between 2017 and 2021.


Perhaps instead of looking at video cards you may want to invest in a new tin foil hat there. Perhaps a bigger and thicker one.


----------



## Mooncheese

rustyk said:


> Here we go, I knew it was just a matter of time. If you get so angry and judgemental when talking about high end gear, maybe you should take up a different hobby?
> I'm being serious.


The wealthy are the most to blame for our present predicament. There's plenty of anger levied at them for reasons that extend far beyond corporate behavior and the ruthless terms dictated by the market. See: NAFTA. See: the Citizens United ruling. See: the PATRIOT Act, the mass surveillance done on U.S. citizens, at once illegal, now legal (because of the threat of Arabs in caves who "hate us because of our consumer freedoms"). This country is a joke now and people don't even understand what is going on / has happened. 



kithylin said:


> Just a note for you for the future. I would suggest you use userbenchmark to compare video cards and not firestrike. Userbenchmark takes tens of thousands of results from all sorts of different configurations (some overclocked, some stock) and computes an overall median average. Firestrike can vary wildly between systems and you're just looking at a couple samples from just a few people. Here's a link for you: https://gpu.userbenchmark.com/Compare/Nvidia-GTX-780-Ti-vs-Nvidia-GTX-980-Ti/2165vs3439
> 
> It's much better to get an overall view of how different cards actually compare in real-world performance.


Great advice and I agree, Firestrike is completely obsolete as a benchmark. Going by it, the 2080 Ti is only 25% faster than the 1080 Ti, when in reality it can be 40% faster (The Witcher 3), 50% faster (Red Dead Redemption 2) or even 60-70% faster (The Division 2). In fact, in the vast majority of my games I've seen what has to be a 50% increase in performance. No Man's Sky: 50%, The Witcher 3: 45%, Shadow of the Tomb Raider: 40%, etc. Time Spy seems more accurate here, as I see about a 55% difference between the 1080 Ti and 2080 Ti (10,500 vs 16,300 GPU score, for example). Which reminds me, I need to play something this afternoon. I'm probably going to play Zelda: BOTW on CEMU with the new ray tracing Reshade on 0.19.2 (no more shader cache needed, it's created nearly perfectly on the fly with Vulkan 1.2, which driver 335.15 supports). We are talking about real global illumination with this Reshade; it's not fake, as some wrongly state. It's running at like 75 FPS, no stutter, the game looks STUNNING with RT on, and I've yet to play through it. I've been putting off Zelda: BOTW from 2017 until now, when Vulkan support finally provides a near-perfect experience on Nvidia GPUs, and waiting has rewarded me with playing it from the beginning, running perfectly @ 3440x1440 with ray tracing. 
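For what it's worth, those uplift figures are just score ratios; a quick sketch of the arithmetic (the Time Spy GPU scores are the example figures from this post, nothing official):

```python
# Percentage uplift from two benchmark scores (example figures from the post).
def uplift_pct(old_score: float, new_score: float) -> float:
    """Return the percentage improvement of new_score over old_score."""
    return (new_score / old_score - 1.0) * 100.0

# Time Spy GPU scores quoted above: 1080 Ti ~10,500 vs 2080 Ti ~16,300.
print(f"{uplift_pct(10_500, 16_300):.1f}%")  # prints 55.2%
```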

What is everyone else playing at the moment?




philhalo66 said:


> yeah i have to agree 68C is pretty darn hot for a custom loop, i have the hybrid 2080 Ti and even with an overclock and running 4K games i top out at 56C with a 120MM Aio.


68C at what ambient? It's summer time, and a 15-20F increase outside translates into higher temps inside the room and inside the PC case. 

If it's not ambient, then it's a matter of inadequate mounting pressure between the heat-sink and die, misapplied TIM (either from the factory or after a repaste), inadequate fan speed, or an undersized cooling solution (i.e. trying to cool a 380W 2080 Ti with a 120x25mm AIO affixed only to the GPU core). 

Sorry, I wasn't paying attention to back-story here.


----------



## haleNpace

Imprezzion said:


> Any card labeled as OC from the manufacturer is a A chip as the non-A can only be sold as a non factory overclocked card. If this is the OC model it has to be A chip. You got lucky hehe.
> 
> Now, get to flashing the EVGA FTW3 Ultra A chip BIOS and get clockin' to 2100 and beyond . If it's a card with the older compatible firmware that is.


Isn’t that a card with 19 power phases? Mine only has 16?


----------



## Pichulec

Is flashing the Galax 380W BIOS to a non-A 2080 Ti possible? Is it going to work, or will the card be bricked?


----------



## Imprezzion

haleNpace said:


> Isn’t that a card with 19 power phases? Mine only has 16?


Doesn't matter really. The only thing that matters BIOS-wise is that the card has the same number of PCIe power connectors. If it doesn't, it won't read power draw properly, but even then it will still work. The worst that can happen is one or more display outputs not working anymore, but on my Phoenix GS they all work fine.


----------



## philhalo66

The FTW3 2080 Ti isn't really 16-phase anyway; the uP9512 voltage controller the FTW3 card uses is only an 8-phase controller, so no matter what EVGA claims it's only 8-phase. Buildzoid does a really good breakdown of the PCB.


----------



## Yuke

Yuke said:


> Hey,
> 
> i have a question about my thermals.
> 
> Basically, from the first day i got the *Gigabyte RTX2080Ti Gaming OC 11g*, the temps were out of control. I tried different PC-Cases (Mid and Big-Tower) and different Noctua Fan setups (up to 8 additional fans). Pretty much wasted half the cards value on this bull****.
> 
> I undervolted the GPU to [email protected] and i still need 73% Fanspeed to keep it between 77° and 82°C
> 
> Here are some ressources claiming very good temperatures (top tier):
> 
> https://www.guru3d.com/articles-pages/gigabyte-geforce-rtx-2080-ti-gaming-oc-11g-review,8.html
> 
> https://wccftech.com/review/gigabyte-geforce-rtx-2080-ti-gaming-oc-graphics-card-review/9/
> 
> https://youtu.be/GuZUwFINj7Y?t=259
> 
> https://youtu.be/BivkldMd2yE?t=88
> 
> -> Basically sub 70°C + less fanspeed - even in some cases when overclocked.
> 
> The last thing that comes to mind now is that they ****ed up cooler installation in the factory, which checking for would void my 3 years of warranty i have left.
> 
> So, before opening the card up, i want to know if i maybe missed something, i could still check out, before killing my warranty...
> 
> Thanks.
> 
> ps: My card came with already upgraded bios (122% powerlimit) + (upgraded?) samsung ram...even tho i bought it brand new here....maybe someone messed up the cooler installation afterwards :/


Wow, guys, just wow.

I reapplied the thermal paste and I'm down by 14°C, and gained two boost stages... that's how bad things were.

1. The screws were so loose that a 3-month-old baby could undo them (almost zero mounting pressure?)
2. The thermal paste was completely dried out

I reapplied Kryonaut via the spread method and tightened the cooler a bit more than before, and that's it... my undervolted GPU now runs at 63°C at 1900MHz / +600MHz on the RAM... what a ******* joke... I threw so much money at this bull**** before in the hope of "fixing" it.


----------



## rustyk

Yuke said:


> Wow, guys, just wow.
> 
> I reapplied the thermal paste and im down by 14°C and gained two boost stages....thats how bad things were.
> 
> 1. The screws were so loose that a 3 month old baby could undo them (almost zero pressure?)
> 2. The thermal paste was super crazy dry
> 
> I reapplied Kyronaut via spread method and tightened the cooler a bit more than before and thats its...my undervolted gpu runs now at 63°C at 1900Mhz/+600Mhz on the ram....what a ******* joke....threw so much money at this bull**** before in the hope of "fixing" it.


I have the same card; is there any way you could post a benchmark run or something with clock speeds, voltages, temps, etc.?
I'd always left the fan profile on auto, thinking it was a balance between noise and performance. Seems reasonable. Also, I've managed to boost benchmark scores by fixing higher % fan speeds, but now you've got me thinking about what's normal or not.
Actually, you're talking about undervolting and relatively low boost speeds, which makes me think you might be prioritising differently to me. I'm not sure how it's comparable without data from the card and a benchmark run.


----------



## kithylin

Yuke said:


> Wow, guys, just wow.
> 
> I reapplied the thermal paste and im down by 14°C and gained two boost stages....thats how bad things were.
> 
> 1. The screws were so loose that a 3 month old baby could undo them (almost zero pressure?)
> 2. The thermal paste was super crazy dry
> 
> I reapplied Kyronaut via spread method and tightened the cooler a bit more than before and thats its...my undervolted gpu runs now at 63°C at 1900Mhz/+600Mhz on the ram....what a ******* joke....threw so much money at this bull**** before in the hope of "fixing" it.


I'm pleased we could help you resolve your issue at least. :thumb:


----------



## Yuke

rustyk said:


> I have the same card, is there any way you could post a run of a benchmark or something with clockspeeds, voltages temps etc?
> I'd always left the fan profile on auto, thinking it was a balance between noise and performance. Seems reasonable. Also I've managed to boost benchmarks by fixing higher % fan speeds, but now you're got me thinking about what's normal or not.
> Actually, you're talking about undervolting and relatively low boost speeds, which makes me think you might be prioritising differently to me. I'm not sure how it's comparable without data from the card and a benchmark run.


I have 3DMark on Steam that I could run. Just tell me which benchmark and what settings... and yes, I went for a silent approach.

I increased clocks and voltages now that I have a bit of headroom. Fire Strike Ultra (the 4K benchmark) runs around 75°C when looping scene 1:

1935MHz (0.975V), +915MHz VRAM and 65% fan speed. Pretty ******* happy now... it's basically at the sound level of my Noctua case fans @ 1000rpm...


----------



## haleNpace

Imprezzion said:


> Doesn't matter really. The only thing that matters BIOS wise is that it has the same amount of PCI-E power connectors. If it doesn't it won't read power draw properly but even then it will still work. The worst that can happen is one or more display outputs not working anymore but on my Phoenix GS they all work fine.


I checked last night; my card doesn't have the USB-C connection, and from the flash attempt it has the newer firmware version, even though it stated compatibility in the nvflash readme. I was able to back up the BIOS but not write to it. Seems I missed the boat. My card is power limited to 280W, which isn't the best, but I won't complain. For a long time the 2080 Ti was significantly more expensive than the most expensive 2080 SUPER, which I wouldn't commit to. I was about to buy the 2070 SUPER, but I was lucky to pick this 300A card up at 2080 SUPER money in the Click Frenzy sale, something new to me. Maybe after the Ampere release I'd be more inclined to clip flash, but for now this will power my sim rig, and it's a fair step up from the 1080 FE. Thanks again for the info, gents.


Here's the BIOS: https://www.techpowerup.com/vgabios/221638/221638

Edit - I used the KFA2-compatible BIOS. nvflash reported it as outdated and also raised the xusb firmware flag. To its credit, nvflash aborted without issue.


----------



## AssassinWarlord

Hello there... an update for the first page:

Gainward:
Phoenix Non-A | 3 Fan | 2.5 Slot | 292mm | RGB | 16 Power Phases | 1545 MHz Boost | 250/310 W | Reference PCB | EAN 4260183364115 | PN 426018336-4115

That information is wrong. The card has only a 10+2 phase design, and the PN is not the same as the EAN. The PN is: NE6208TT20LC-150X.
It's also a near-reference PCB; not identical, but water blocks for reference-PCB cards can be used with this PCB.


----------



## Imprezzion

AssassinWarlord said:


> Hello there...update for first page:
> 
> Gainward:
> Phoenix Non-A | 3 Fan | 2.5 Slot | 292mm | RGB | 16 Power Phases | 1545 MHz Boost | 250/310 W | Reference PCB | EAN 4260183364115 | PN 426018336-4115
> 
> informations are wrong. Card has only a 10+2 Phase design. And PN is not the same as EAN. PN Code is: NE6208TT20LC-150X
> and is a near reference PCB, not identical but watercoolers for referenc pcb cards can also be used with this PCB


Euh, the photo shows TU102-300A as the chip, so it's actually an A chip.


----------



## AssassinWarlord

Oh... yes... but only a 10+2 phase design xD Why, Gainward... why?


----------



## Imprezzion

AssassinWarlord said:


> oh...yes...but only 10+2 phase design xD why gainward...why


I have the Phoenix GS model and I have to admit I didn't even check the number of phases, but I can tell you it handles a lot of abuse so far. I'm using the HOF XOC BIOS on it and flooring it at the full 1.125V at 2160-2130MHz depending on idle temp (25C) or load (45-48C). It runs as high as 410W peak power draw, most games sit in the 370-390W range, and it handles it like a champ. The VRM heatsinks get to the edge of too hot to touch, but that's like 60-70C at best.


----------



## Laithan

AssassinWarlord said:


> Hello there...update for first page:
> 
> Gainward:
> Phoenix Non-A | 3 Fan | 2.5 Slot | 292mm | RGB | 16 Power Phases | 1545 MHz Boost | 250/310 W | Reference PCB | EAN 4260183364115 | PN 426018336-4115
> 
> informations are wrong. Card has only a 10+2 Phase design. And PN is not the same as EAN. PN Code is: NE6208TT20LC-150X
> and is a near reference PCB, not identical but watercoolers for referenc pcb cards can also be used with this PCB


According to Buildzoid, "almost all 2080Ti's are only 8 phase....not using doublers....sending the PWM signal from voltage controller and shove it into 2 phases at the same time". The EVGA 2080 Ti FTW3 Ultra, as the example, has 16 phases but only an 8-phase controller, so it is truly only 8-phase. So if what Buildzoid says is true, either the reference cards (or maybe this Gainward is special, IDK) have more "true" phases than these custom boards (10 vs 8), or the reference card is doing the same thing and is actually running a 5-phase controller feeding 2 phases at a time to get 10. 

I suspect someone could validate this, but it sure would be odd if reference cards had a true 10-phase controller and custom cards only used an 8-phase controller. 





Spoiler



https://youtu.be/vHgDgDtva0E?t=249


----------



## Asmodian

Laithan said:


> I suspect someone could validate this but it sure would be odd if reference cards had a true 10 phase controller and custom cards only used an 8 phase controller.


The more real phases there are, the smoother the power delivery potentially is, but the power limit and VRM efficiency depend more on the number of power stages. A true 10-phase with 10 of MOSFET X and a true 8-phase with 16 of MOSFET X are each better in different ways. Especially with good filtering, 8 phases seems plenty smooth for the 2080 Ti, so the designs with two fewer real phases but more MOSFETs might be better in the ways that actually matter.
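As a rough illustration of why stage count matters for load handling (all numbers hypothetical; an idealized even split that ignores doubling/twinning behavior):

```python
# Idealized per-phase current for a GPU VRM (hypothetical figures).
def amps_per_phase(watts: float, vcore: float, phases: int) -> float:
    """Split total core current evenly across the real phases."""
    return (watts / vcore) / phases

# e.g. ~380 W at ~1.0 V core voltage:
print(amps_per_phase(380, 1.0, 8))   # 8 real phases  -> 47.5 A each
print(amps_per_phase(380, 1.0, 10))  # 10 real phases -> 38.0 A each
```

More MOSFETs per phase divide that per-phase current further, which is where the efficiency gain of the "twinned" designs comes from.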


----------



## J7SC

...Yeah, Nvidia frowned upon vendors using doublers this time around, so most cards use 'twinned' (8 > 16) VRM phases, not unlike what Asus is doing with several mobos these days... It also comes down to the VRM controller in use and how it is set up. I think only the Asus Strix/Matrix, Galax HOF and KingPin have 10-phase VRM controller setups; the others have 8 phases (before twinning). Then there are some weirder conditions that can kick in, for example with "10" phases, per the time-stamped Buildzoid vid below on the Asus Strix, to complement the above-posted vid:

https://youtu.be/N6A_UWwYgkA?t=702


----------



## wisepds

Hello guys... I changed the thermal paste on my Zotac 2080 Ti AMP Extreme to Thermal Grizzly Kryonaut, and it now runs 3°C cooler than with the original paste.



67°C at 2077MHz with a 112% OC (+600 memory clock) at 70% fan speed.


----------



## Nitethorn

wisepds said:


> Hello guys.. I have changed my Thermal paste on the Zotac 2080 Ti Amp Extreme graphic card, with Thermal Grizzly Kryonaut, now are 3ºC coldest than original thermal paste.
> 
> 
> 
> 67ºC at 2077 mhz with 112% OC (+600 mem clock) at 70% rpm.


I have that card and have been planning to repaste with TGK as well, but I'm slightly disappointed that it only made a 3C difference. When I repasted my old GTX 970, it made a solid 7-8C difference. Hmmm...


----------



## Laithan

Nitethorn said:


> I have that card and have been planning to repaste with TGK as well, but I'm slightly disappointed that it only made a 3C difference. When I repasted my old GTX 970, it made a solid 7-8C difference. Hmmm...


His previous paste was likely still in good condition and/or was better quality than what Zotac used. I'm a firm believer in Kryonaut and I use it on all my CPUs/GPUs as well. I don't think you can get much better without liquid metal, which I avoid.


----------



## Imprezzion

On GPUs LM actually makes a bigger difference because it's basically direct-die cooling, but I need to get some conformal coating / liquid electrical tape to seal the area around the die, just for safety. 

I went a different direction on my 2080 Ti now with overclocking / BIOS. I have been rock solid for a few weeks at HOF XOC 2160-2130MHz @ 1.125V, with max temps sitting in the high 40's under my Kraken X52+G12 combo with the pump maxed at 2800 RPM and rad fans around 1300 RPM. Not very quiet though.

I wanted to know how much quieter, and perhaps cooler, it would run with a more sensible BIOS (the EVGA FTW3 Ultra one) and an undervolt + overclock. I settled on 2040-2025MHz at 1.018V so far, and after looping the 3DMark Time Spy stress test for an hour to test stability and temps, it maxed out around 50-51C with the fans at like 900 RPM and the pump at 2000 RPM. Also perfectly stable so far. From what I read here, 2040-2025MHz at 1.018V is pretty good actually, and obviously those 3 FPS aren't going to be noticeable.
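For perspective, the clock gap between the two profiles is small in relative terms. A quick sketch, using the sustained load clocks quoted above and assuming FPS scales at most linearly with core clock (in practice it scales less than linearly):

```python
# Relative core-clock loss between the XOC and undervolted profiles (figures from this post).
xoc_mhz = 2130  # sustained load clock on the HOF XOC BIOS at 1.125 V
uv_mhz = 2025   # sustained load clock on the EVGA BIOS undervolt at 1.018 V

loss_pct = (1 - uv_mhz / xoc_mhz) * 100
print(f"{loss_pct:.1f}% lower core clock")  # ~4.9%, an upper bound on the FPS hit
```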

Also turned the CPU down from 5.1/4.8 at 1.34V to 5.0/4.7 at 1.28V (not VR VOut; that was 1.296V down to 1.232V in Prime95 29.8 with AVX) and dropped the radiator fans for the CPU loop down to 900 RPM under 80C, and now my rig is super quiet: totally inaudible with a headset on, and just a very slight air-moving noise without the headset under load. This is actually pretty nice, and there's obviously no noticeable performance loss. 

My RAM is still at a very high overclock at 4200-16-17-17-28-280-2T, so that helps keep it snappy.

Also, now I can run idle fan speeds for the CPU and GPU at 600-650 RPM, and that, combined with my PSU set to semi-fanless and the Power Saving power plan in Windows (so it aggressively downclocks at idle / light loads), means it's totally inaudible at idle and under light loads.

I kinda like it this way, even though it's not die-hard maxed-out overclocked lol.

I played a bit of World of Tanks, Battlefield 4 and Mafia 2 Remastered, and so far the GPU max temp was 47C (usually sitting around 44-45C) and the CPU max temp was 58-59C across the cores, also averaging mid 40's. I love it. Idle, everything is like ambient +3, so 24-25C lol.


----------



## Laithan

Imprezzion said:


> I have been rock solid for a few weeks at HOF XOC 2160-2130Mjz @ 1.125v now with max temps sitting in the high 40's under my Kraken X52+G12 combo with the pump maxed at 2800 RPM and rad fans around 1300 RPM.


Using the GPU-Z render test (just for consistency) on the FTW3 Ultra, I'm going from 2175MHz (below 47C) to 2160MHz (when it hits 47C-54C), and then it drops to 2145MHz as it levels out at 53C. The only identifiable changes in GPU-Z are the temps rising slightly. The render test is pretty consistent in how much GPU workload and power it consumes. The conclusion is that this throttling is purely THERMAL based (I know the temps are not high, but it seems they all throttle early for some reason), because it is repeatable. I am under the impression that this is perfectly normal for a stock AIR cooler... 

My question is: do you see this specific thermal throttling eliminated when your temps are kept under 47C? Because the results are repeatable, it would seem to be a thermal issue, but maybe it still happens regardless?
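The stepping I'm describing behaves like a simple temperature-to-bin lookup (thresholds taken from my observations above; purely illustrative, every card will differ):

```python
# Toy model of GPU Boost temperature binning (illustrative thresholds only).
def boost_clock(temp_c: float) -> int:
    """Return the sustained clock (MHz) for a given GPU temperature."""
    bins = [(47, 2175), (54, 2160)]  # (temp ceiling in C, clock in MHz)
    for ceiling, clock in bins:
        if temp_c < ceiling:
            return clock
    return 2145  # where it levels out once fully warmed up

print(boost_clock(40))  # 2175
print(boost_clock(50))  # 2160
print(boost_clock(56))  # 2145
```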


----------



## QuatroKiller

Hi all, trying to figure out why my radiator fan is locked at 100% and shows 0 RPM in Afterburner, GPU-Z, and Precision X1.

I have an NVIDIA 2080 Ti FE which I flashed with the EVGA 2080 Ti XC Hybrid BIOS.

The OC programs say 25% (or I can slide to 100%) yet they report 0 RPM. When I look at the fan it seems to be running at 100% at all times.

How can I get proper control and RPM reporting back on fan 2 (the rad fan)?

Thanks.


----------



## Imprezzion

Laithan said:


> Using the GPUz render test (just for consistency), FTW3 Ultra, I'm going from 2175Mhz (below 47C) to 2160Mhz (when it hits 47C-54C) and then drops to 2145Mhz as it levels out at 53C. The only identifyable changes in GPU-z are the temps rising slightly. The render test is pretty consistent as far as how much GPU workload and power it will consume. The conclusion is that this throttling is purely THERMAL based (I know the temps are not high, but seems they all throttle early for some reason) because this is repeatable. I am under the impression that this is perfectly normal for a stock AIR cooler...
> 
> My question is, do you see this specific thermal throttling eliminated when your temps are kept under 47C? It would seem that because the results are repeatable that it would be a thermal issue but maybe it still happens regardless?


Yeah, that's normal. It's part of the architecture of these cards. Mine starts at 2160 at idle temps and drops to 2130 above 45C-ish on the HOF XOC, and it does 2055-2025 on the EVGA. I can run fine at 2115-2070 on the EVGA at 1.093V, but that requires more fan speed to keep it under 54C so it doesn't drop another clock bin. 

The card is stable at 2085MHz at 1.093V with the EVGA BIOS and 2160MHz at 1.125V on XOC, but the higher clocks at idle temps sometimes crash a game before the card warms up and drops a clock bin, and it also sometimes crashes in game menus when load is low, temps drop, and the card clocks back up. For 2160MHz under load I need 2205MHz starting clocks, and that isn't stable enough on XOC. Neither is the 2130MHz I'd need on the EVGA BIOS to run 2080MHz under load. 

I kinda miss the GTX 6xx/7xx days, where we could just make a custom BIOS with KBE / KBT and disable thermal throttling altogether, locking a certain power state.


----------



## Laithan

Imprezzion said:


> Yeah that's normal.
> 
> I kinda miss the GTX 6xx/7xx days where we could just make a custom BIOS with KBE / KBT and disable thermal throttling all together just locking a certain power state.


Thanks, and yes... if we were allowed to modify the BIOS we could eliminate perfcaps again and let these GPUs eat... imagine buying a race car where the manufacturer doesn't let you modify the ECU. CPUs, on the other hand, have specific features to lock the speed and prevent throttling right in the BIOS... yet for some reason it has been taboo ever since Pascal.


----------



## kithylin

Imprezzion said:


> On GPU's LM actually makes a bigger difference because it's basically direct die cooling but I need to get some conformal coating / liquid electric tape stuff to seal the rest of the die surroundings just for safety.


The only thing to remember, folks, is that you can't use liquid metal on a GPU at all if the heatsink base that contacts the liquid metal is aluminum. Aluminum will dissolve to a powder within 2-5 days of contacting the liquid metal. You can use it if the heatsink base is copper, but it's still not advised. Copper takes longer, but it will absorb the liquid metal into the copper material and permanently alter the physical chemistry of the heatsink. With copper it takes a lot longer though, 6-12 months or so. But you will have to take the GPU apart and re-apply more liquid metal every so often as it gets absorbed into the copper over time. There's no risk of any of this with traditional thermal pastes (even normal Thermal Grizzly Kryonaut). Because the liquid metal changes the chemical makeup of the copper in the heatsink, you also won't get the same cooling performance from that heatsink with traditional pastes if you choose to go back to them later.


----------



## J7SC

Laithan said:


> Thanks, and yes.. If we were allowed to modify the BIOS we could eliminate perfcaps again and let these GPUs eat... imagine buying a *race car and the manufacturer doesn't let you modify the ECU*.. CPUs on the other hand have specific features to lock the speed and prevent throttling right in the BIOS... yet for some reason it is Taboo ever since Pascal.


Actually, I think that in some race series (including F1), the ECU is standardized and provided / monitored by the race control body... perhaps some shunt mods going on somewhere behind the scenes? 

As to the 2080 Ti temp <> MHz relationship, I posted the attached before (from THW). In my own water-cooled setup, I think 38C, or close to it, is the first speed bin step.


----------



## wisepds

Nitethorn said:


> I have that card and have been planning to repaste with TGK as well, but I'm slightly disappointed that it only made a 3C difference. When I repasted my old GTX 970, it made a solid 7-8C difference. Hmmm...


You know... 3°C is a good result. You have to remember it's a big chip with a lot of heat to dissipate... 3°C is fine by me; I have the coolest Zotac 2080 Ti AMP Extreme I've ever seen in any review on the internet.

I'm happy...


----------



## Medizinmann

kithylin said:


> The only thing to remember for folks is you can't use liquid metal on a GPU at all if the heatsink's base that contacts the liquid metal is aluminum. Aluminum will dissolve to a powder within a 2-5 days when contacting the liquid metal. And you can use it if the heatsink's base is copper but it's still not advised to do that. Copper takes longer but copper will absorb the liquid metal it's self in to the copper material and it will permanently alter the physical chemistry of the copper in the heatsink. With copper it takes a lot longer though. 6-12 months or so. But you will have to take the gpu apart and re-apply more liquid metal every so often as it gets absorbed into the copper material over time.


Well, if I go the LM route I would also use a water block, and one that is nickel-plated... der8auer has a lot of material on the use of LM.

And the changes with copper aren't that critical, but yes, you might need to reapply, as some LM is absorbed into the copper. 

But again: better to use nickel-plated copper.



> There's no risk of any of this with traditional thermal pastes (even normal Thermal Grizzly Kryonaut).


Yeah, but you also won't come close to the performance of LM.



> Due to the liquid metal changing the chemical makeup of the copper in the heatsink you also won't have the same cooling performance with that heatsink with traditional pastes if you choose to go back to traditional pastes later instead of liquid metal.


Yes, this is a decision one has to make: LM or "normal" paste - or (again) use a nickel-plated cold plate. 

Greetings,
Medizinmann


----------



## kithylin

Medizinmann said:


> Yeah, but you also won't come close to the perfromance of LM.


That's up for debate, actually. In several of the reviews I've seen online, liquid metal on a video card only runs 5-7C cooler than the best normal thermal paste. Folks have to decide if the risk is worth it for them. It can also drip down, contact the PCB, and fry a $1200 RTX 2080 Ti.


----------



## wisepds

kithylin said:


> That's up for debate actually. In several of the reviews I've seen online liquid metal on a video card only runs -5c to -7c cooler than the best normal thermal paste. Folks have to decide if the risk is worth it for them. It can also drip down and contact the PCB and fry a $1200 RTX 2080 Ti.


Just that... for me, it's too dangerous... one bad contact and... bye bye, money!


----------



## Imprezzion

kithylin said:


> The only thing to remember for folks is you can't use liquid metal on a GPU at all if the heatsink's base that contacts the liquid metal is aluminum. Aluminum will dissolve to a powder within a 2-5 days when contacting the liquid metal. And you can use it if the heatsink's base is copper but it's still not advised to do that. Copper takes longer but copper will absorb the liquid metal it's self in to the copper material and it will permanently alter the physical chemistry of the copper in the heatsink. With copper it takes a lot longer though. 6-12 months or so. But you will have to take the gpu apart and re-apply more liquid metal every so often as it gets absorbed into the copper material over time. There's no risk of any of this with traditional thermal pastes (even normal Thermal Grizzly Kryonaut). Due to the liquid metal changing the chemical makeup of the copper in the heatsink you also won't have the same cooling performance with that heatsink with traditional pastes if you choose to go back to traditional pastes later instead of liquid metal.


Luckily I cool mine with a Kraken X52, which has a copper cold plate. The Phoenix GS also has a copper cold plate, so no issues there with aluminum.

I know it also affects copper. My Swiftech H320 got really discolored and rough-feeling after 3 years of CLU. It didn't perform very well with normal paste afterwards, so I agree on that. But as this is a simple, easy-to-replace AIO, if the LM degrades the cold plate I'll just get a new one... /care


----------



## Medizinmann

kithylin said:


> That's up for debate actually. In several of the reviews I've seen online liquid metal on a video card only runs -5c to -7c cooler than the best normal thermal paste. Folks have to decide if the risk is worth it for them. It can also drip down and contact the PCB and fry a $1200 RTX 2080 Ti.


5-7°C less is not too shabby, but I have seen even higher numbers on GPUs.

And yes, it isn't without risks - I never said that - and anybody going that route should consider it.

And as always, use plenty of conformal coat on the PCB to protect it... ;-)

Greetings,
Medizinmann


----------



## Nitethorn

wisepds said:


> You know... 3ºC... are good result. You must think that is a big chip, a lot of heat to dissipate... 3ºC for me are ok, i have the coolest zotact 2080 Ti Amp extreme i have ever seen on any review of internet.
> 
> I'm happy...


Yeah, 3C is good, I didn't mean to say it wasn't. I had just hoped for a bigger difference, as my card with stock paste runs about 80C under full load and mid 70's when gaming. I know that's within spec, but I would really like to see it run a lot cooler, and I can't afford to put water on it yet.


----------



## QuatroKiller

QuatroKiller said:


> Hi All, trying to figure out why my Radiator Fan is locked at 100% and shows 0 RPM in Afterburner, GPUZ, and Precision X1.
> 
> I have a NVIDIA 2080Ti FE which i flashed with the EVGA 2080 TI XC Hybrid Bios.
> 
> The OC programs say 25% (or i can slide to 100%) yet they report out 0 RPM. When i look at the fan it seems to be running at 100% at all times.
> 
> How can i get proper control and RPM reporting back on Fan 2 (Rad Fan)?
> 
> Thanks.


Saw a ton of replies after my post related to other topics, hoping to get some experts who might know what is going on with my situation (quoted my post above, thanks)


----------



## kx11

windows 2004 build is cool for OC


----------



## J7SC

kx11 said:


> windows 2004 build is cool for OC


 
...Ohh, nice! Any direct comparison data vs the previous Win 10 build?


----------



## sultanofswing

I notice version 2004 feels way snappier than previous version myself.


Sent from my iPhone using Tapatalk


----------



## kx11

J7SC said:


> ...ohh, nice ! Any direct comp data to previous Win 10 build ?



i couldn't get a stable gameplay with these numbers on previous Win10 builds


----------



## J7SC

sultanofswing said:


> I notice version 2004 feels way snappier than previous version myself.
> 
> Sent from my iPhone using Tapatalk


 


kx11 said:


> i couldn't get a stable gameplay with these numbers on previous Win10 builds


 
Tx gents...I guess I got my weekend OC bench project lined up :rambo:


----------



## kx11

Trying out an online match to test the Windows 10 2004 build, and also the GPU OC.


----------



## Laithan

kx11 said:


> trying out an online match to test windows 10 2004 build , also gpu OC


Nice, pegged at 2220 the entire time. I can see it's under water, but is that the only reason you're not throttling, or do you have another trick up your sleeve?


----------



## Imprezzion

Well, I went ahead and installed 2004 through Windows Update, DDU'd the old drivers, got the 450.99 developer driver and enabled the HW scheduling stuff. It shows as WDDM 2.7 in GPU-Z with it enabled.

Man oh man, what a difference...

So far I've basically only tested with Borderlands 3, but it feels way smoother... like, by a lot.

And the best thing is, the better OC also seems real. My card held up through a full hour of BL3 Maliwan Takedown runs at 2040/7800 at 1.018V, which it wouldn't even do for 5 minutes on the old Windows / drivers without throwing a DirectX crash. It really seems to make a huge difference lol. I'll continue testing tomorrow to see if 1.093V gives me more than the 2085MHz it maxed out at before, and I might even flash back to the HOF XOC to see if 1.125V gives me more than the 2130 it maxed out at before. 

What I also noticed is that my general CPU temps were much lower than they usually are in Borderlands, and the core temperatures were much closer to each other than they usually are. 

I've got a lot of testing to do lol, but it looks super promising!


----------



## philhalo66

Man, seeing people on here with 2100MHz+, I totally lost the silicon lottery; can't even keep 2040 stable.


----------



## sultanofswing

philhalo66 said:


> man, seeing people on here with 2100MHz+ i totally lost the silicon lottery, cant even keep 2040 stable.



It’s all about keeping temps down. Keep the card below 45c and 2100mhz is easy.


Sent from my iPhone using Tapatalk


----------



## philhalo66

sultanofswing said:


> It’s all about keeping temps down. Keep the card below 45c and 2100mhz is easy.
> 
> 
> Sent from my iPhone using Tapatalk


It crashes at anything above 2025MHz. Even at 39C, 2100MHz crashes within 30 seconds.


----------



## Laithan

philhalo66 said:


> crashes at anything above 2025MHz. 39C 2100Mhz it crashes within 30 seconds.


Are you getting max voltage when it crashes? A GPU-Z log will show it. You could try using the curve to let it eat a bit more voltage at that speed.


----------



## kx11

Laithan said:


> Nice, pegged at 2220 the entire time. I can see it is under water, but is that the only reason you are not throttling or do you have another trick up your sleeve



a custom curve in MSI AB is my only trick 


and this is a KP card so ....


----------



## Laithan

kx11 said:


> a custom curve in MSI AB is my only trick
> 
> 
> and this is a KP card so ....


Ya, so there's a touch of Houdini there. You running a custom BIOS?

I'm coming over from Maxwell where we didn't have a curve to edit (but we had a BIOS lol), so I'm still just getting a feel for it.

I know it will be different for every GPU but is this angle similar to what you are using?


----------



## kx11

Laithan said:


> Ya, so there's a touch of Houdini there . You running a custom BIOS?
> 
> I'm coming over from Maxwell where we didn't have a curve to edit (but we has a BIOS lol) so I'm still just getting a feel for it.
> 
> I know it will be different for every GPU but is this angle similar to what you are using?



I'm running the stock OC BIOS, nothing extra.





The curve edit looks like this. I also used 70+ on the voltage slider; voltage control is set to third party in MSI AB.


----------



## mattxx88

Anyone tried the new GPU-Z sensor tab?

This is my 2080 Ti Trio OCed to 2130MHz and +1300 mem, after a Port Royal benchmark.


----------



## ht_addict

*I finally turned green*

I have been an AMD fan since the dawn of time, both CPU and GPU. Once I dipped into the Blue, but reverted back shortly after the build was complete. Never dipped my foot into the Green till now. Bored during these Covid times, I decided on a new build. Part of it was a Gigabyte Aorus 2080 Ti WB 11G. So it's back to using AB for overclocking. My question is: what is the typical GPU and memory overclock for a 2080 Ti?


----------



## Imprezzion

Real shame that 450.99 seems to have some glitches with HW Scheduling and full screen / borderless window games combined with a second screen with YouTube or whatever on it.

When playing a game and running YouTube on a second monitor YouTube lags like mad and even sometimes stops playing / loading as soon as the game is in focus / the primary window...

I'm still trying to find out how to fix it. Disabling HW Scheduling also removes the problem but k.


----------



## sultanofswing

@kx11
For the hell of it.


----------



## philhalo66

Laithan said:


> Are you getting max voltage when it crashes? GPU-z log will show it. You could try using the curve to let it eat a bit more voltage at that speed.


I maxed out the voltage slider and set 2100MHz, and the voltage drops from 1.075 to 1.050 and never goes above that, with temps at 37C during the GPU-Z render test, and it crashes within 2 minutes.


----------



## kithylin

philhalo66 said:


> i maxed out the voltage slider and set to 2100MHz and voltage drops from 1.075 to 1.0500 and never goes above it with temps at 37C during gpuz render and it crashes within 2 minutes.


You need to use the manual curve editor and create your own curve to force the voltage to stay up there along with the overclock. If your voltage is dropping then that's likely part of your problem.


----------



## philhalo66

kithylin said:


> You need to use the manual curve editor and create your own curve to force the voltage to stay up there along with the overclock. If your voltage is dropping then that's likely part of your problem.


Does EVGA Precision XOC have that? I tried to find it earlier, but the only curve I saw at all was the auto scan, which claims max stable was +40.


----------



## kx11

sultanofswing said:


> @*kx11*
> For the hell of it.
> 
> 
> picture sharing



wow very nice


----------



## Voodoo Rufus

Sup guys.

I just converted my 2080 Ti Founders to water cooling this weekend with a Heatkiller IV. Flashed it to the 380W Galax BIOS just fine, also. Now my load temps are <50C on the GPU and <40C on the water exiting the GPU. It's a dual-pump, single-loop setup with the GPU dumping its heat into the 280GTX radiator, which helped hugely over the old dual-loop setup pushing its heat into the thin Corsair 280 radiator.

My OC Scanner boost clocks are rock solid at 2040MHz while running 3DMark, and my ram is at 8100MHz. Power consumption is right at 380W.
Port Royal is at 9944
Time Spy is at 15030

My main objective was to simply quiet and tame this card. Now I've got a small OC itch to scratch and I want to maximize this card since I have the cooling power.

I have had issues trying to manually increase the GPU clocks. Even trying to match what the OC Scanner gives me just crashes out the Furmark bench. I'd like to squeak out a little more speed if possible. I have the cooling, but how much more could I reasonably get beyond the OC Scanner result? It seems to have done a nice job.


----------



## jblanc03

Can you post the exact resistor you ended up purchasing? I am trying to do the same thing you did. I am searching on Google for those resistors and having trouble finding them.

thanks


----------



## MrTOOSHORT

jblanc03 said:


> Can you send here the exact resistor you ended up purchasing? i am trying to do the same thing you did. I am searching on google for those resisitors. Having trouble finding them.
> 
> 
> thanks


Go to this thread for a good read:


*https://www.overclock.net/forum/69-nvidia/1714864-2080-ti-working-shunt-mod.html*


The first post also says where to get the shunts. You can remove or stack the shunts; up to you.
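For context on what stacking does electrically: soldering a second shunt on top of the stock one puts the two resistors in parallel, which lowers the resistance the card senses, so it under-reports power by that ratio. A quick sketch of the math (the 5 mOhm value is an assumption for illustration only; verify the actual shunt value on your board):

```python
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

def reported_power(actual_watts, stock_shunt, stacked_shunt=None):
    """Power the card *reports* after a shunt stack.

    The controller converts the sensed voltage drop to current
    assuming the stock shunt value, so lowering the real resistance
    makes it under-read by the ratio new/stock.
    """
    if stacked_shunt is None:
        return actual_watts
    effective = parallel(stock_shunt, stacked_shunt)
    return actual_watts * (effective / stock_shunt)

# Stacking an identical 5 mOhm shunt on a 5 mOhm stock shunt halves
# the sensed resistance, so ~380W of real draw reads as roughly half.
print(reported_power(380, 0.005, 0.005))
```

With an identical shunt stacked, the card believes it is at half its real draw, which is why the mod effectively doubles the headroom before the power limiter kicks in.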


----------



## Tyemcho

Hello everyone! I purchased two damaged RTX 2080 Tis. My goal is to change the memory chips from Micron to Samsung. I have a Trio and a Strix. Please, I need very clear pictures of PCBs from boards that have only Samsung memory, or just tell me the strap resistor layout. Thank you. P.S. I have already soldered 16Gbps Samsung chips onto both cards; GPU-Z shows 0MB when the driver is enabled.


----------



## edhutner

Hi @Tyemcho
Here I uploaded a couple of photos of mine before putting the water block on. It is an EVGA XC Gaming, reference PCB with Samsung memory. Please let me know when you download it and I will delete the gallery.
https://postimg.cc/gallery/dZr5D93


----------



## Circaflex

Been out of the game a while everyone, but buying a 2080ti. When I last looked at cards, reference design was what you wanted to go after, especially if you wanted to water cool. Is that still the case? Are reference PCBs still around after being out for so long on the market?


----------



## MrTOOSHORT

Circaflex said:


> Been out of the game a while everyone, but buying a 2080ti. When I last looked at cards, reference design was what you wanted to go after, especially if you wanted to water cool. Is that still the case? Are reference PCBs still around after being out for so long on the market?


Still around, and yes, the reference Founders is a good bet if you want to water cool.



*https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080-ti/*


----------



## Circaflex

MrTOOSHORT said:


> Still around and yes the reference founders is a good bet if you want to water cool.
> 
> 
> 
> *https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080-ti/*



Awesome, thanks. What does Founders Edition mean? Have there been any major improvements over reference? I saw the chart with the power phases being shown; is that making a big difference?


----------



## MrTOOSHORT

Circaflex said:


> Awesome, thanks. What does Founders Edition mean? Have there been any major improvements over reference? i saw the chart with the power phases being shown, is that making a big difference?


Founders is the new name for reference from Nvidia. The Founders 2080 Ti (reference) has a beefy PCB and power delivery, so it's a good choice for water cooling. There are also many blocks to choose from, from many vendors.


----------



## ThrashZone

Hi,
Founders is vanilla.
Cards like the 2080 Ti FTW3 Ultra have a big custom board, like the Kingpin...


----------



## Imprezzion

There are many AIB cards way cheaper than a Founders that use the Founders/reference PCB. Just check whichever card is cheap in your country and compare it with the list in the OP to see which PCB it uses. Do not under any circumstance buy a non-A though.

I use a Gainward Phoenix GS as I could get that very cheap secondhand locally with like, 17 months of warranty left and original box + receipt.


----------



## J7SC

Circaflex said:


> Awesome, thanks. What does Founders Edition mean? Have there been any major improvements over reference? i saw the chart with the power phases being shown, is that making a big difference?


 
...yeah, lots of good choices out there for the 2080 Ti (A chip). Since these cards react really well to w-cooling in terms of extra speed, you might even consider some of the available factory full-waterblock versions (or install a custom block yourself afterwards).


----------



## TK421

Does anyone with RTX series GPU notice a bug with memory clocks when using MSi AB?


If my PC goes to sleep, the memory clocks come back 200MHz lower regardless of what you do. It needs a restart to fix.




RTX 2080 Ti, Windows 10.


----------



## Imprezzion

TK421 said:


> Does anyone with RTX series GPU notice a bug with memory clocks when using MSi AB?
> 
> 
> If my PC goes to sleep before, the memory clocks will be set 200MHz lower regardless of what you do. It needs a restart to fix.
> 
> 
> 
> 
> RTX 2080 Ti, Windows 10.


Yup, mine does the same. It always drops to 7800 instead of 8000 after sleep. Haven't found a fix, but I almost never sleep my PC anyway, so I didn't look all that deep into it.


----------



## Medizinmann

Circaflex said:


> Awesome, thanks. What does Founders Edition mean? Have there been any major improvements over reference? i saw the chart with the power phases being shown, is that making a big difference?


Founders Edition is the reference design from NVIDIA, sold directly by NVIDIA themselves for an extra premium...
As already stated, you can choose any other GPU with a reference PCB design; there is a list on page one.
I personally use a Palit GamingOC Pro 2080 Ti I got relatively cheap (930€ incl. VAT), flashed it with the 380W BIOS from KFA/Galax, and put a waterblock on it.

Best regards,
Medizinmann


----------



## sblantipodi

Guys, I want to upgrade to the RTX 3080 Ti. Is this the right moment to sell my 2080 Ti?


----------



## ASUSfreak

A bit confused... 

I have ASUS 2080Ti OC so which one do I use to flash?

The bios provided by OP?
The bios from Techpowerup?

And if Techpowerup do I use the one with same bios model nr of mine? Or can I choose any ASUS 2080Ti bios thus selecting the one with most OC potential (speeds/powerlimit)?
Or maybe the ones on TechPowerUp ARE the stock BIOS of my card (like if I extracted the BIOS from my card)?

I have flashed Nvidia cards before, but it was a GTX 470 I believe, so it's been a while...

And that's why I'm asking before I flash my 2080 Ti lol...

Cause now it only does +100 on core...


----------



## sultanofswing

ASUSfreak said:


> A bit confused...
> 
> I have ASUS 2080Ti OC so which one do I use to flash?
> 
> The bios provided by OP?
> The bios from Techpowerup?
> 
> And if Techpowerup do I use the one with same bios model nr of mine? Or can I choose any ASUS 2080Ti bios thus selecting the one with most OC potential (speeds/powerlimit)?
> Or maybe the ones on Techpowerup ARE the stock bios of my card? 9like if I extracted my bios from my card)?
> 
> I have flashed nvidia cards before in my life but was a GTX470 I believe so it's been a while...
> 
> And that"s why I ask it befor I flash my 2080Ti lol...
> 
> Cause now it only does +100 on core...


The BIOS files on the first page are from TPU.
Assuming your card is an A-chip card, you can flash just about any BIOS you want that matches your xusb version.


----------



## Imprezzion

ASUSfreak said:


> A bit confused...
> 
> I have ASUS 2080Ti OC so which one do I use to flash?
> 
> The bios provided by OP?
> The bios from Techpowerup?
> 
> And if Techpowerup do I use the one with same bios model nr of mine? Or can I choose any ASUS 2080Ti bios thus selecting the one with most OC potential (speeds/powerlimit)?
> Or maybe the ones on Techpowerup ARE the stock bios of my card? 9like if I extracted my bios from my card)?
> 
> I have flashed nvidia cards before in my life but was a GTX470 I believe so it's been a while...
> 
> And that"s why I ask it befor I flash my 2080Ti lol...
> 
> Cause now it only does +100 on core...


A normal BIOS with a higher power limit will not allow you to run higher clocks per se. It just reduces power throttling but doesn't increase max overclocks.

The only real ways to improve max clocks are lowering load temps to under ~50C, or raising the voltage to 1.125v with the HOF XOC BIOS; but that BIOS should only be used under water and has risks involved, as it disables any and all limits and protections.


----------



## Circaflex

My card comes Thursday, and I'll be putting it under water immediately. Should I see how the stock BIOS acts or immediately flash an unlocked BIOS? A reference card is what I went with.


----------



## J7SC

Circaflex said:


> My card comes thursday, ill be putting it under water immediately, should i see how stock bios acts or immediately flash an unlocked bios? Reference card is what i went with.


 
...when the card arrives, just briefly test it out with the stock air cooler and stock BIOS to make sure everything works the way it should. Then mount the w-block... As to the BIOS flash, any time after the initial check, but if you plan to use a high-power-limit BIOS, it's probably better to wait until after you've mounted and tested the w-block.


----------



## Regel

Hi guys,

So I bought the Asus 2080 Ti Turbo, only to later find out it has a non-A chip, and I'm a bit miffed at the lack of power-limit headroom.
I've now flashed the Palit RTX 2080 Ti Dual (non-A) ROM from the OP to increase the power limit in Afterburner to 124%, which seems to have improved performance a bit.

Is there anything else I can do to squeeze out a bit more performance? It's under water so I have a lot of temperature headroom.


----------



## kithylin

Regel said:


> Hi guys,
> 
> So I bought the Asus 2080 Ti Turbo (non-A) only to later find out it is a non-A chip and being a bit miffed at the lack of headroom for power limit.
> I've now flashed the Palit RTX 2080 Ti Dual (Non-A) ROM from the OP to increase the power limit in Afterburner to 124%, which seems to have improved the performance a bit.
> 
> Is there anything else I can do to squeeze out a bit more performance? It's under water so I have a lot of temperature headroom.


You should understand for a moment that the % in Afterburner does not directly relate to the physical power limit. You could have one BIOS that allows 300 watts +20%, which would be 120% or 360W, while at the same time a different BIOS from someone else might be 350 watts +15%, which would be 115% or about 402 watts. The % in Afterburner is just an increase over whatever that BIOS's base power limit is. Different BIOSes have different base power limits to start with.
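To put that in numbers, here is a tiny sketch of the slider math, using the hypothetical wattages from the post above:

```python
def effective_watts(base_limit_w, afterburner_percent):
    """Translate an Afterburner power slider % into actual watts.

    The slider is relative to the BIOS's base power limit, so the
    same % means different watts on different BIOSes.
    """
    return base_limit_w * afterburner_percent / 100

# Similar slider percentages, very different real limits:
print(effective_watts(300, 120))  # 360.0
print(effective_watts(350, 115))  # 402.5
```

So comparing slider percentages between two cards is meaningless without knowing each BIOS's base limit.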


----------



## Imprezzion

The non-A Palit BIOS is 310W, and that's the highest non-A available. There's absolutely no way to push any more performance, as even at 1.043v it will just smash the limiter.

I had a cheap secondhand Inno3D non-A and sold it for the same price I bought it for within a week. Couldn't go over 1950 core at 1.000v and it still hit the limiter sometimes.

Bought a Gainward Phoenix GS secondhand which has an A chip, slapped a Kraken G12 with an X52 on it plus some copper VRM and VRAM sinks, and now with the EVGA FTW3 Ultra BIOS I can run whatever I want. 2130 @ 1.093v is just fine.


----------



## Regel

kithylin said:


> You should understand for a moment that the % in afterburner does not relate to the physical power limit. You could have one bios that allows 300 watts +20% which would be 120% or 360W while at the same time you could have a different bios from someone else that was 350 watts + 15% which would be 115% or 402 watts. The % in afterburner is just an increase over what ever that bios's base power limit is. Different bios's have a different base power limit to start with.


Yes, I understand. Perhaps I worded it a bit strangely, but that is why I chose the Palit BIOS.



Imprezzion said:


> The non-A palit BIOS is 310w and that's the highest non-A available. There's absolutely no way to push any more performance as even on 1.043v it will just smash the limiter.
> 
> I had a cheap secondhand Inno3D non-A and sold it for the same price I bought it for within a week. Couldn't go over 1950 core at 1.000v and it still hit the limiter sometimes.
> 
> Bought a Gainward Phoenix GS secondhand which has a A chip, slapped a Kraken G12 with X52 on it and some copper VRM and VRAM sinks on it and now with the EVGA FTW3 Ultra BIOS I can run whatever I want. 2130 @ 1.093v is just fine.


Right... figured as much. Do you notice a significant performance increase?
Unfortunately I bought mine new and would lose quite a bit of money if I'd resell it.


----------



## Serchio

Hi guys,

I am struggling with my RTX 2080Ti for quite some time and I would like to ask you for any ideas you might have.

I will start with my config first. 
PC1: Asus CH6, Ryzen 3900X, EVGA RTX2080Ti 2160MHz + 8099MHz, 2x8 DDR4 3733 CL14, 1TB Samsung PM981, Asus PG258Q, EVGA G3 750W - cpu and gpu are watercooled - gpu max temp is 45C.
PC2: MSI B450 Tomahawk MAX, Ryzen [email protected], MSI GTX1080, 2x8 DDR4 3200 CL14, 250GB Samsung 850 Pro, Seasonic 650W (bronze or gold) - no watercooling - GPU max temp below 70C.

The issue I have is that in some 1080p games the gameplay is smoother on PC2 than on PC1. The game I play the most is Apex Legends, where I experience some stuttering on PC1 while on PC2 everything is fine. I have made a few screenshots from Afterburner and LatencyMon to give you an overview of the issue - frametime in ms. It jumps quite a lot on PC1, from 4.5ms to 17ms, while on PC2 the frametime is almost constant at around 8-9ms.

I have tried a ton of things and optimizations I could find to tune PC1 for a constant frametime and get rid of the stuttering:
- system time resolution 1ms,
- disabling HPET in windows,
- Process Lasso with priority, CPU affinity, etc.,
- bios setting to disable multi-threading, stock settings etc,
- revert GPU OC, even downclocking the core down to 1700MHz,
- NVCP focused on high performance, stock settings, with G-Sync disabled - probably every combination possible,
- tried different drive - Samsung 850 Pro,
- clean installation of different releases of Windows 10, different nvidia drivers versions (using DDU each time) back to a version from September 2019,
- disabling unneeded services in Windows + all telemetry + Windows gaming services + DVR1 + DVR2
- using the same video settings for the game on both PC - everything set to low 
- more things that I am not able to remember right now - basically everything I was able to find.

What I have not tried yet because of the watercooling:
- to put the RTX 2080 Ti in the second PCIe slot (x8) - the card works in the first slot right now and GPU-Z shows that it runs at x16 Gen3
- to put RTX2080Ti into PC2
- different PSU
- different RAM

Do you have any other suggestions or ideas what I can try? I didn't need to do anything to the PC2 in order to get a better experience out of the box...

EDIT. I forgot to mention - all benchmarking tools (3DMark, Superposition) give me correct results for GPU, CPU and both combined...
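For anyone wanting to quantify spikes like this rather than eyeball the Afterburner graph, a list of frametimes can be reduced to an average plus a 99th-percentile worst case; a short sketch, with sample data made up to mirror the 4.5ms/17ms symptom described above:

```python
def frametime_stats(frametimes_ms):
    """Average frametime plus a '1% worst' style view.

    A smooth feed keeps the 99th-percentile frametime close to the
    average; a big gap between the two is exactly perceived stutter.
    """
    ordered = sorted(frametimes_ms)
    avg = sum(ordered) / len(ordered)
    # 99th percentile by nearest rank (no external dependencies)
    idx = min(len(ordered) - 1, int(round(0.99 * len(ordered))) - 1)
    return avg, ordered[idx]

# Mostly 4.5 ms frames with occasional 17 ms spikes:
sample = [4.5] * 98 + [17.0] * 2
avg, p99 = frametime_stats(sample)
print(round(avg, 2), p99)  # → 4.75 17.0
```

A low average with a high 99th percentile, as here, means the machine is fast on paper but still feels stuttery.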


----------



## Imprezzion

Regel said:


> Yes, I understand. Perhaps I worded it a bit strangely, but that is why I chose the Palit BIOS.
> 
> 
> 
> Right... figured as much. Do you notice a significant performance increase?
> Unfortunately I bought mine new and would lose quite a bit of money if I'd resell it.


Honestly? No. I play at 1080p 120Hz, or 144Hz if I can be bothered to hook up a DisplayPort cable, but at this resolution even a non-A easily handles everything. But if you spend the cash on water cooling and such, it would be a shame not to utilize it fully.

I would've kept it if I couldn't resell it for the same money. You can always try to resell it and find a used A chip with a reference / founders PCB and buy that?


----------



## tcclaviger

My RMA arrived 😛 In avatar pic happily at home, happy days!

The Kryographics Next + Active backplate really is all it's made out to be.

Non-A EVGA Black Edition; it pulls 38k graphics / 31k total score in Fire Strike, 9970 in Superposition, and plays Control with max options at 70+ FPS on UW1440p. Quite satisfied.

The Palit 310W BIOS is doing amazing things with curve clocking; it bounces around between 2050 and 2085, dropping to 2k on the dot only in Superposition.

GPU temps are water +12C after hours of playing Control; the rest of the card is pretty cool as well, slightly warm to the touch. I can leave my hand in place indefinitely without pain, so that's somewhere in the low 40s.

It's a good bit faster than my semi-golden 1080 Ti that has all power caps removed. If it proves itself for another month without issue I'll shunt mod it and flex on some A-bin cards lol.

I see a lot of people claim the EVGA Black + Palit bios won't hold over 2k, well, that's just false, it absolutely will:

https://youtu.be/PYgM9ykvhsI


----------



## Imprezzion

tcclaviger said:


> My RMA arrived 😛 In avatar pic happily at home, happy days!
> 
> The Kryographics Next + Active backplate really is all it's made out to be.
> 
> Non-a EVGA black edition, pull 38k gfx 31k total score in firestrike, 9970 in superposition, and playing Control, max options at 70+ on UW1440p. Quite satisfied.
> 
> Palit 310 bios doing amazing things + curve clocking, it bounces around between 2050 and 2085, only dropping to 2k on the dot, in only Superposition.
> 
> GPU temps are water + 12c after hours of playing Control, rest of the card is pretty cool as well, slightly warm to the touch, can leave hand in place without pain/burn indefinitely, so that's somewhere in the low 40s.
> 
> It's a good bit faster than my semi-golden 1080ti has all power caps removed. If it proves itself for another month without issue I'll shunt mod and flex on some a bin cards lol.
> 
> I see a lot of people claim the EVGA Black + Palit bios won't hold over 2k, well, that's just false, it absolutely will:
> 
> https://youtu.be/PYgM9ykvhsI


Depends on the resolution mostly. My non-A Inno3D would for example hold 2050 just fine in Borderlands 3 on 1080P max graphics settings but raising render resolution scaling to 150% (slightly above 1440p actual) would see it starting to hit the limiter and 200% scaling (4K effectively) smacked the limiter so hard it drops to 1920-1935 core lol. 

Voltage plays a big role as well. If the card can do 2000+ stable at 1.012v or less, it's less prone to limiting than 2000+ at 1.062v or even 1.093v, for example.

Also, with it being watercooled it will draw less power, as the fans and stock cooler LEDs and such are counted in the power draw! This can be as much as 20W at higher fan speeds!


----------



## Shawnb99

sblantipodi said:


> guys, want to upgrade to rtx3080ti, is this the right moment to sell my 2080ti?


What do you have to replace it with? The 3080 Ti might not be out for months, plus what if there's a 3090 Ti?


----------



## Meisgoot312

Regel said:


> Hi guys,
> 
> So I bought the Asus 2080 Ti Turbo (non-A) only to later find out it is a non-A chip and being a bit miffed at the lack of headroom for power limit.
> I've now flashed the Palit RTX 2080 Ti Dual (Non-A) ROM from the OP to increase the power limit in Afterburner to 124%, which seems to have improved the performance a bit.
> 
> Is there anything else I can do to squeeze out a bit more performance? It's under water so I have a lot of temperature headroom.


Hey Regel!

Out of curiosity, did you use nvflash or a BIOS debugger tool? I have an MSI Ventus GP non-A, but the MX BIOS chip prevents me from flashing either the Palit or Gigabyte BIOS onto it. I've tried using the BIOS debugger tool with the Palit BIOS, but it bugs out for me (disabling 2 of my output ports and bugging out MSI Afterburner). I'm thinking of trying again with the debugger tool, but I'm curious to see if you've done anything different for a successful flash (or if you happened to get an older BIOS chip).

Thanks!

EDIT: I'm an idiot. With the Palit vBIOS there was no way, since it used the old BIOS chip. But with the Gigabyte, I've been typing in 'y' instead of 'YES' for the Board ID mismatch override. I've flashed the Gigabyte vBIOS and am now experimenting.

EDIT2: For anyone curious, the Gigabyte non-A Windforce BIOS can be flashed onto any new-firmware card. It disabled one of my DP ports, but the extra 30-40W is worth it. Google Chrome doesn't like the Gigabyte BIOS, and I need to disable hardware acceleration to get it to not artifact.

EDIT3: It appears that my Time Spy score actually dropped by 3% even after seeing the power limit increase, curious. I think I tested my GPU while it was in cooler weather, but GPU temperature doesn't seem out of the ordinary. It actually looks like the default clocks are lower than my MSI Ventus, even if their vBIOS says they're supposed to be the same. My Ventus uses +235 on core clock to get the same score as the Gigabyte BIOS at ~+265; memory is the same.


----------



## tcclaviger

Imprezzion said:


> Depends on the resolution mostly. My non-A Inno3D would for example hold 2050 just fine in Borderlands 3 on 1080P max graphics settings but raising render resolution scaling to 150% (slightly above 1440p actual) would see it starting to hit the limiter and 200% scaling (4K effectively) smacked the limiter so hard it drops to 1920-1935 core lol.
> 
> Voltage plays a big role as well. If the card can do 2000+ stable on 1.012v or less it's less prone to limiting then 2000+ on 1.062v or even 1.093v for example.
> 
> Also with it being watercooled it will draw less power as the fans and stock cooler LEDs and such are counted in the power draw! This can be as much as 20w with higher fanspeed!


Makes sense. It does a 2085 set point, floating between 2055 and 2070 at 1.006 volts with 99% usage, as in Control. Memory is totally test stable at +1000; I can run benchmarks at +1200 without score drop, but it does throw some encoding data errors, so I leave it at +1000.

Even when pegging the limiter it only downclocks slightly, from 2070 to 2010-ish; I suspect being well cooled helps a ton.


----------



## JMTH

Has anyone had any video card issues on Z490? CPU/memory OC'd or at default, my ROG-STRIX-RTX2080TI-O11G-GAMING is running really badly. To compare, here are the results from my X99 platform baseline testing using stock cooling and no OC, then the Z490 testing with custom cooling (GPU temps 35/45C).

Unigine Heaven Basic - 6814.99 vs ~4200
Unigine Heaven Xtreme - 6638.2 vs 3713.4
Fire Strike - 23909 vs 18287

I have tried NVIDIA drivers 430.86, 441.87, 445.87, 446.14. They all produce around the same results.

I have 2 separate 8pin power cables from the Corsair AX1200i PSU to the card, as well as the extra 4pin PCI-E power on a separate cable to the motherboard.

MB: ASUS Maximus XII Extreme
CPU: i7-10700k
Memory: 32GB (2x16) G.Skill F4-4000C19D-32GTZR


----------



## tcclaviger

Interesting, JMTH. Is the Z490 perhaps showing some instability with CPU/RAM/graphics, causing it to perform poorly? Is the CPU stable in tests? Is the RAM stable in tests?

PCIe slot or lane assignment springs to mind: is it running in x8 mode or PCIe 2.0 mode?

Aside from those things, I would start examining software environment, what's running concurrently?


----------



## Laithan

I would capture a 0.01-second-interval GPU-Z log and see exactly what is going on with the GPU when it attempts to boost. Import it into Excel afterwards to view it more easily. Attach the log here if you want me to import it and take a look. I suspect it is the GPU itself.
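If Excel is a pain, the GPU-Z sensor log can also be summarized with a few lines of scripting, since it is just comma-separated text. A sketch, assuming columns named 'GPU Clock [MHz]' and 'GPU Load [%]' (header names vary between GPU-Z versions and may carry extra spaces, so check your log's first row):

```python
import csv
import io

def summarize_gpuz_log(text, clock_col="GPU Clock [MHz]", load_col="GPU Load [%]"):
    """Average clock and load over the samples where the GPU is actually busy."""
    rows = list(csv.DictReader(io.StringIO(text)))
    busy = [r for r in rows if float(r[load_col]) > 50]
    if not busy:
        return None
    n = len(busy)
    return (sum(float(r[clock_col]) for r in busy) / n,
            sum(float(r[load_col]) for r in busy) / n)

# Tiny made-up log in the same shape as a GPU-Z export:
log = """GPU Clock [MHz],GPU Load [%]
300,1
2070,98
2040,97
"""
print(summarize_gpuz_log(log))  # (2055.0, 97.5)
```

Filtering out the idle rows first matters: averaging the whole log, idle time included, makes any boost problem look much worse than it is.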


----------



## JMTH

tcclaviger said:


> Interesting JMTH. Is the z490 perhaps showing some instability with CPU/ram/gfx, cussing it to perform poorly? Is the CPU stable in tests, ram stable in tests?
> 
> Pcie slot or lanes assigned springs to mind, like is it running in 8x mode or pcie 2.0 mode?
> 
> Aside from those things, I would start examining software environment, what's running concurrently?


It's currently running at x16 Gen 3.0; see picture below. Not much else is running: HWiNFO64, NVIDIA Settings, GPU Tweak II, Intel RST, Windows Security, Firefox; that's about it.

OCs are stable in tests: Prime95, XTU, RealBench, AIDA64, stressapptest in Ubuntu, RamTest.



Laithan said:


> I would capture a .01 second interval GPU-z log and see exactly what is going on with the GPU when it attempts to boost. Import into Excel after to view more easily. Attach log here if you want me to import it and take a look. I suspect it is the GPU itself.


The GPU-Z log for the Fire Strike run is attached.


----------



## Laithan

JMTH said:


> Its currently running at 16x Gen 3.0, see picture below. Not much else is running, HWInfo64, NVIDIA Settings, GPUTweak II, Intel RST, Windows Security, Firefox, thats about it.
> 
> OC's are stable in tests Prime95, XTU, Realbench, Aida64, Stressapptest in Ubuntu, Ramtest.
> 
> 
> 
> Log attached for FireStrike also attached.


/*!
 * Power. Indicating perf is limited by total power limit.
 */
NV_GPU_PERF_POLICY_ID_SW_POWER = 1,
/*!
 * Thermal. Indicating perf is limited by temperature limit.
 */
NV_GPU_PERF_POLICY_ID_SW_THERMAL = 2,
/*!
 * Reliability. Indicating perf is limited by reliability voltage.
 */
NV_GPU_PERF_POLICY_ID_SW_RELIABILITY = 4,
/*!
 * Operating. Indicating perf is limited by max operating voltage.
 */
NV_GPU_PERF_POLICY_ID_SW_OPERATING = 8,
/*!
 * Utilization. Indicating perf is limited by GPU utilization.
 */
NV_GPU_PERF_POLICY_ID_SW_UTILIZATION = 16,
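Note that those values are bit flags, so a PerfCap reading in a GPU-Z log can combine several reasons at once (e.g. 20 = reliability voltage + utilization). A small decoder sketch using the values quoted above:

```python
# Flag values from the NVAPI perf-policy enum quoted above
PERFCAP_FLAGS = {
    1: "Power",
    2: "Thermal",
    4: "Reliability voltage",
    8: "Max operating voltage",
    16: "Utilization",
}

def decode_perfcap(value):
    """Expand a PerfCap bitmask into the list of reason names it contains."""
    return [name for bit, name in PERFCAP_FLAGS.items() if value & bit]

print(decode_perfcap(4))   # ['Reliability voltage']
print(decode_perfcap(20))  # ['Reliability voltage', 'Utilization']
```

Running each logged PerfCap value through a decoder like this makes it obvious which limiter the card is hitting at any moment.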


*Some observations:*
I am seeing something interesting at idle (PerfCap 16). From 08:52 to 14:54, roughly 6 minutes, your GPU was mostly idle at 1%-3% utilization, and yet your video memory gradually increased from 936MB to 1991MB the entire time. I assume Fire Strike was not loading textures for 6 minutes. If the entire log is not only Fire Strike (like if you were using Chrome or something else that lightly uses the GPU), this might be normal.

As my first step, I cleared out most of the idle time (leaving some for reference) to look only at boosting data.

First observation: GPU usage is very low. Fire Strike should be pushing GPU utilization up to 99%. The highest utilization you reached was 80%, and only for a very brief moment. Your average GPU utilization (with all idle time excluded) was 48.04% for that run... so you are getting roughly half the performance for that reason alone.

Second observation: you are hitting _*voltage reliability (PerfCap 4)*_ limits when boosting. The GPU is trying to run at 2175 MHz, which is quite high (maybe too much for this GPU) and indicates an overclock is being applied. I don't think a 2080 Ti will boost that high without an overclock or voltage curve applied; it starts boosting at 2085 MHz, and that's definitely not stock. I would remove all overclocking and re-test. The voltage is not consistent.

Third observation: you are only seeing a maximum of 1.068V (extremely briefly), with most of the time at 1.05V. 2175 MHz may be asking too much at this voltage (your GPU may allow up to 1.09V); the voltage curve could be used to get the GPU to its max voltage at those high speeds. The voltages are also jumping all over the place, and I'm surprised you didn't crash. On three different occasions you were running 2175 MHz @ 0.718V, but with only 17%/19% utilization.

Power consumption is low, but that's because GPU usage is only roughly half, so I don't see a power issue in the logs.
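For anyone who wants to repeat this filtering on their own log without Excel, here's roughly what I do, sketched in Python. I'm assuming GPU-Z's CSV export with a "GPU Load [%]" column; the exact column header can differ between GPU-Z versions, so check the first line of your log:

```python
import csv

def average_load(path, idle_threshold=5.0):
    """Average GPU load from a GPU-Z CSV log, excluding near-idle samples."""
    loads = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f, skipinitialspace=True):
            # GPU-Z logs values like "48.0 %"; strip the unit before parsing
            load = float(row["GPU Load [%]"].replace("%", "").strip())
            if load > idle_threshold:  # drop idle time, keep only boosting data
                loads.append(load)
    return sum(loads) / len(loads) if loads else 0.0
```

Dropping the idle rows like this is how I arrived at the ~48% average above.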


I'm not exactly sure what to make of all this, but I think it is the GPU itself. You should install it in another PC to prove whether or not the GPU is at fault. You could get a tad higher voltage with some curve tweaks, but you should still be getting far more GPU utilization regardless. If you truly are not applying an overclock to this GPU, something is very wrong. Did you cross-flash another BIOS perhaps?

Note the TABS on the spreadsheet for those who like detail


----------



## JMTH

Laithan said:


> [snip]
> 
> I'm not exactly sure what to make of all this... but I think it is the GPU itself. If you truly are not applying an overclock to this GPU something is very wrong. Did you cross flash another BIOS perhaps?

It does have an overclock on it using GPU Tweak II. It's +160 core, +2000 memory, 100% voltage, +25% power, or at least those are the settings in Tweak.

Unfortunately I do not have another PC to test with. At least not one that has water cooling.

I will remove the OC and make another log. It will probably be about the same since, as you noticed, the GPU is not even really being utilized.


----------



## tcclaviger

Try locking the speed using the Afterburner curve and the L key; see if it will actually lock at 1950 MHz @ 1V and hold it. If not, there's a hardware problem.


----------



## JMTH

Laithan said:


> *Some observations:*
> 
> [snip]
> 
> I'm not exactly sure what to make of all this... but I think it is the GPU itself. If you truly are not applying an overclock to this GPU something is very wrong. Did you cross flash another BIOS perhaps?


Here is the new run, with default GPU settings. I might try re-seating the card and see what happens. Will take an hour or so lol, have to drain and refill etc...
Thanks for taking a look!


----------



## Laithan

Don't drain yet (hope I'm not too late). I doubt this is a seating issue. The PCIe slot power also looks OK I think (75W max); you are getting a max of 46.8W, but again, full power is not expected when you aren't at full GPU utilization, so that doesn't alarm me. GPU utilization actually improved a little but still only reached a maximum of 85%.

For giggles, humor me for a minute please... 
Uninstall GPU Tweak. If you have an option to delete settings, do that (back them up if you need to). I have seen a rare issue where this may help. The most important part is to delete any saved profile settings. When the test is over you can go back to ASUS' utility if you really must.

Reboot.. ensure there is nothing running in the background that monitors or manipulates the GPU. 

Install MSI AB, enable voltage control in the settings first, then max out both the voltage and power % sliders as far as they can go and APPLY. Re-run Fire Strike with a GPU-Z log please & attach it here. This should ensure that you have enabled the maximum power and voltage for the GPU without the risk of a bad O/C profile being applied to the GPU, due to a very rare issue usually seen when an existing O/C profile is loaded onto a different GPU, or the same GPU with a different BIOS. It is just a software issue, easily fixed, so I hope this is your issue.


----------



## JMTH

Laithan said:


> Don't drain yet (hope not too late). I doubt this is a seating issue.
> 
> [snip]


Heh, was too late, but it didn't change anything. I thought I might have had a misalignment because of the hard tubing, but it was fine. I ran it another time before trying your suggestion; same result though.

Picture and file using MSI AB below.


----------



## MrTOOSHORT

hmmm 

I did try 3DMark11 with my 10900K, M12E and 2080 Ti this past weekend. GPU score looked low, 49xxx. I had to RMA the M12E, so I am back on the 9980XE and Apex for now. Tried 3DMark11 again: 53xxx GPU score. Physics and combined tests looked OK with the 10900K. Port Royal bench looked OK too, 104xx score. This is just at 2145 MHz or so on the 2080 Ti. I was running x8 PCIe 3.0.


----------



## Laithan

JMTH said:


> Heh was too late, but it didnt change anything. I thought I might of had misalignment because of the hard tubing, but it was fine. I ran it another time before trying your suggestion, same result though.
> 
> Picture and file using MSI AB below.


I'm way too tired right now, but I took a quick look and it looked a little better. You spent most of your time @ 2025 MHz, and the voltages were less erratic. Try throwing an O/C back on and see what you get.


----------



## Regel

Meisgoot312 said:


> Hey Regel!
> 
> Out of curiosity, did you use nvflash or a BIOS debugger tool? I have an MSI Ventus GP Non-A, but the MX BIOS chip prevents me from flashing either the Palit or Gigabyte BIOS on it. I've tried using the BIOS debugger tool with the PALIT Bios, but it bugs out for me (disabling 2 of my output ports and bugs out MSI afterburner). I'm thinking of trying again with the debugger tool, but I'm curious to see if you've done anything that's different for a successful flash (or if you happened to get an older BIOS chip).
> 
> Thanks!
> 
> EDIT: I'm an idiot, with the Palit vBIOS there was no way since it used the old BIOS chip. But with the gigabyte, I've been typing in 'y' instead of 'YES' for the Board ID mismatch override. I've flashed the gigabyte vBIOS and am now experimenting.
> 
> EDIT2: For anyone curious, the Gigabyte Non-A windforce can be flashed onto any new firmware card, it disabled one of my DP ports, but the extra 30-40W is worth it. Google Chrome doesn't like the gigabyte bios and I need to disable hardware acceleration in order to get it to not artifact.
> 
> EDIT3: It appears that my timespy score actually dropped by 3% even after seeing the power limit increase, curious. I think I tested my GPU while it was in cooler weather, but GPU temperature doesn't seem out of the ordinary. It actually looks like the default clocks are lower than my MSI Ventus, even if their VBIOS says that they're supposed to be the same. My Ventus uses +235 on core clock to get the same score as the gigabyte bios on ~+265, memory is the same.


I realize this is a bit late now after your edits, but I used nvflash. The one from the OP didn't work though, so I googled a modified 5.590 version.


----------



## Mylittlepwny2

Anybody here running a pair of 2080 Tis in SLI? Just got my second Kingpin card with the Hydro Copper block, and my second GPU is consistently about 8C hotter than my first GPU at load. At idle they're about equal. I have liquid metal on both GPUs; the first one runs 33-35C and the second runs 40-44C. Would adding a radiator between the GPUs to cool the water, so it's not going directly from GPU1 to GPU2, do much of anything to help?

I suppose I could check my paste job, but it looked damn good when I put the block on the second card, and I don't wanna tear it all apart if I don't have to. Any help would be appreciated!


----------



## Nizzen

Mylittlepwny2 said:


> Anybody here running a pair of 2080 TIs in SLI? Just got my second Kingpin card with the Hydrocopper block and my second GPU is consistantly about 8C hotter than my first GPU at load. At idle theyre about equal. I have liquid metal on both GPUs and the first one runs 33-35C and the second runs 40-44C. Would adding an radiator between the GPUs to cool the water so its not going directly from GPU1 to GPU2 do much of anything to help?
> 
> I suppose I could check my paste job but it looked damn good when I put the block on the second card and I dont wanna tear it all apart if I dont have to. Any help would be appreciated!


Second one gets the warmer water and some heat from the other card?

33-35c looks too cold? Sensor error?


----------



## Mylittlepwny2

Nizzen said:


> Second one gets the warmer water and some heat from the other card?
> 
> 33-35c looks too cold? Sensor error?


No, I'm pretty sure the sensor is accurate; it's lining up with all of Nvidia's boost algorithms. I have an absolutely massive loop. And yes, the temps from the first card affecting my card are no doubt at least part of the issue, and I'm going to snag a radiator to put between them, but 8C just seems like a massive delta. Gonna take it apart tomorrow and see if the 2nd card runs better when it's by itself.


----------



## Regel

Small update from my ASUS 2080Ti Turbo:

With the Palit BIOS I was able to easily get +200 MHz core clock and +1000 MHz memory clock. Temps under load are 45C at a water temp of 33C.
I didn't try to increase further yet.

I didn't play with the voltage at all, should I?


----------



## edhutner

Hello guys 
Here is a little mystery that I have sometimes while playing ACC. Look at the attached screenshot.
Do you see how the frame time is pulsing? (FPS is limited by the game and holding steady, but the frame time is pulsing.)
The screenshot is from an Assetto Corsa Competizione single-player hotlap session.
I tried a bunch of things to get the line straight:
- switching full screen off and on
- removing the OC from the GPU, reapplying the OC (MSI AB)
- closing all background services and apps
Nothing helps; the line keeps pulsing. The only way to get the frame time graph straight is to quit the in-game session and start it again. So I guess it's most probably a problem with the game, not the hardware.

However, if someone could offer some kind of theory or explanation I am ready to listen 

p.s. The GPU is an EVGA 2080 Ti XC Gaming, full-cover water cooled, BIOS from the Gigabyte Gaming OC (360W), power limit at max, auto OC without raising the voltage slider in MSI AB.


----------



## Meisgoot312

Regel said:


> I realize this is a bit late now after your edits, but I used nvflash. The one from the OP didn't work though, so I googled a modified 5.590 version.


Thanks Regel,

The Gigabyte BIOS has not been giving me good results so far (the clock speed numbers just don't add up), so I will try flashing the Palit BIOS later today. Thanks for letting me know!

EDIT: After testing both the Gigabyte and Palit BIOSes, it appears that my stock-BIOS MSI Ventus GP actually has the best scores, even at lower temperatures and power limits. This makes no sense to me.


----------



## jura11

Mylittlepwny2 said:


> Anybody here running a pair of 2080 TIs in SLI? Just got my second Kingpin card with the Hydrocopper block and my second GPU is consistantly about 8C hotter than my first GPU at load.
> 
> [snip]


Hi there 

Are the GPUs running in parallel or in series there?

I'm running a 4-GPU setup in series and the difference between the GPU temperatures is not as big. The RTX 2080 Ti usually sits at 36-38°C, the GTX 1080 Ti at 33-36°C, and the 2x GTX 1080s in the mid-30s; that's in rendering, like Iray, Blender Cycles, etc.

Adding a radiator between the GPUs, not sure if that would help; maybe a bit, but it depends on the loop.

Hope this helps 

Thanks, Jura


----------



## J7SC

Mylittlepwny2 said:


> Anybody here running a pair of 2080 TIs in SLI? Just got my second Kingpin card with the Hydrocopper block and my second GPU is consistantly about 8C hotter than my first GPU at load.
> 
> [snip]


 
...I'm running a pair of w-cooled 2080 Tis in NVLink/SLI, and the temp delta between the two is rarely more than 1C. That might be due to an unusual loop arrangement, but even with the more normal setup I had for various prior SLI builds, with one card's cooling exhausting into the second one as intake, I have never seen a temp delta of more than 3C, even with triple cards. I take it both cards have the same BIOS and the same synchronized MSI AB settings. Now, some cards run hotter than others depending on their binning (in addition to the primary card usually being more stressed), but 8C sounds potentially like a block paste or seating issue; not that it is fun tearing a build apart. Also, depending on how your build is configured (hard line vs soft etc.), can you switch the two GPUs for testing purposes w/o tearing it all apart?

Re. an extra rad: per the top pic below, I use separate loops for the CPU and GPUs, and the GPU loop has rads in between the cards. The GPU loop has 3x XSPC RX360/55 rads plus two D5 pumps. The layout sequence is 'reservoir - pump 1 - GPU 1 - rad 1 - rad 2 - pump 2 - GPU 2 - rad 3'. The D5s run at about 70% speed. Even with a measured total of 760W combined peak for the GPUs, temps stay well controlled... The 2nd pic below is a Port Royal run for the 2x 2080 Tis with a subsequent 3DMark DLSS feature test, which is like a further two Port Royal runs in series on its own. Finally, the temp difference between single 2080 Ti and dual 2080 Ti runs is usually no more than 1C to 2C.


----------



## Circaflex

For reference cards, is there a preferred BIOS out there? Or do I just pick one and go with it? Just got my card up and running under water.


----------



## Mylittlepwny2

Circaflex said:


> For reference cards, is there a preferred BIOS out there? or do i just pick one and go with it? just got my card up and running under water


I've heard the Galax or KFA2 BIOS is the best for daily usage on the reference models. It'll raise your max to 380W, which should give you some headroom. Alternatively, the Kingpin XOC BIOS can raise your max voltage to 1.125V and remove the power limit entirely, but some users report it makes their temps go up due to the increased power draw.


----------



## Mylittlepwny2

J7SC said:


> ...I'm running a pair of w-cooled 2080 Tis in NVLink/SLI, and the temp delta between the two rarely is more than 1C.
> 
> [snip]
> 
> 8 C sounds potentially like a block paste or seating issue - not that it is fun tearing a build apart.


Thank you for your answer! I decided to tear it all apart and I'm glad I did. Something must have been wrong with my paste job, as I redid the liquid metal and everything, and it's now operating at the exact same temp as my other card! Massive decrease in reported temps!


----------



## Circaflex

Mylittlepwny2 said:


> I've heard the Galax or KFA bios is the best for daily usage on the reference models. Itll raise your max to 380w which should give you some headroom. Alternatively the Kingpin XOC bios can raise your max voltage to 1.125 and remove power limit entirely but some users report it makes their temps go up due to increased power draw.


Thanks. I wonder if my card is one of the newer ones that cannot be flashed. When trying to turn protection off I get this error:

ERROR: Error reading PCI Configuration Space Register (0x00000032)
Detailed: The request is not supported.

edit: I was able to get it to work; however, this has confirmed my suspicions. What a bummer.


----------



## J7SC

Mylittlepwny2 said:


> Thank you for your answer! I decided to tear it all apart and im glad I did. Something must have bee. Wrong with my paste job as I redid the liquid metal and everything and its now operating ay the exact same temp as my other card! Massive decreases in reported temps!


 
^^ 'cool' :thumb:


----------



## TK421

Are there any unlimited-power BIOSes I can use for my reference 2080 Ti?


Some of the links on the page have been taken down.


----------



## MrTOOSHORT

TK421 said:


> are there any unlimited power bios I can use for my reference 2080ti?
> 
> 
> some of the links on the page have been taken down


Yes, this one: 1.125V load. Be careful with it; you should have a full-cover water block when using it:

*https://xdevs.com/guide/2080ti_kpe/*



> XOC unofficial BIOS, Version 90.02.17.40.88
> XOC overclocking version for RTX 2080 Ti KINGPIN card.
> 
> This archive is password protected to ensure you have read this guide carefully and agree with the risks involved of flashing custom binary into your KINGPIN. Password from the archive is ipromisenottoRMAthiscard.
> 
> The XOC BIOS has the following limitations, to restrict its use to benchmarking purposes only:
> 
> All fans are 100% speed (“jet engine mode”). Fan speed curves or adjustment in PX1 is disabled.
> Temp target is removed.
> Power limit is not there anymore.
> Card ID is generic in this BIOS (meaning you need to use nvflash with -6 parameter to flash into this BIOS or restore original BIOS).


----------



## TK421

MrTOOSHORT said:


> Yes this one, 1.125v load. Be careful with this one, you should have a full cover water block using it:
> 
> *https://xdevs.com/guide/2080ti_kpe/*



Is this fine to use with reference PCB? The KPE has 3 power connectors.


----------



## MrTOOSHORT

TK421 said:


> Is this fine to use with reference PCB? The KPE has 3 power connectors.


You said you have a reference 2080 Ti in the other post; it's fine for the reference 2080 Ti if you are on a full-cover water block. Use at your own risk.


----------



## Laithan

MrTOOSHORT said:


> You said you have a reference 2080ti in the other post, it's fine for the reference 2080ti if you are on a full cover water block. Use at your own risk.


I wonder if my FTW3 Ultra would play nicely with that BIOS. I use all the outputs, so I couldn't afford to lose any, hmmm...


----------



## MrTOOSHORT

Laithan said:


> I wonder if my FTW3 Ultra would play nicely with that BIOS  I use all the outputs so I couldn't afford to lose any hmmm...



Yes, I know it works for your card. MrFox, a bencher here at OCN, used it on his FTW3 and got some nice scores in the benchmark threads. Like I mentioned earlier, just use the XOC BIOS if your card is under a water block with a nice loop.


----------



## TK421

MrTOOSHORT said:


> You said you have a reference 2080ti in the other post, it's fine for the reference 2080ti if you are on a full cover water block. Use at your own risk.



Yes, I have the 2080 Ti XC Ultra, with the standard/reference/FE PCB.


The KPE has an extra 8-pin connector, which is my main concern.











Laithan said:


> I wonder if my FTW3 Ultra would play nicely with that BIOS  I use all the outputs so I couldn't afford to lose any hmmm...





MrTOOSHORT said:


> Yes I know it works for your card. MrFox, a bencher here at OCN, used that on his FTW3 and got some nice scores in the Benchmark threads. Like I mentioned earlier, just use the XOC bios if your card is under a waterblock with a nice loop.



How about using a 360mm AIO on the GPU (G12 bracket) and letting the VRM/VRAM be cooled by the stock EVGA baseplate? I have some fans to throw around the PCB area.


Still safe to use 1.093V with the KPE BIOS?



Btw it appears that MrFox isn't active on OCN anymore.


----------



## Circaflex

How new is your card? My reference can't be flashed unless I use the USB method mentioned in the first post.


----------



## Imprezzion

TK421 said:


> Yes I have the 2080TI XC Ultra, with standard/reference/FE PCB.
> 
> 
> The KPE has an extra 8pin connector, which is my main concern.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> How about if using 360mm AIO on the GPU (g12 bracket) and letting the VRM/VRAM be cooled with the stock EVGA baseplate? I have some fans to throw around the PCB area.
> 
> 
> Still safe to use 1.093v w/ KPE bios?
> 
> 
> 
> Btw it appears that MrFox isn't active on OCN anymore.


That's what I run. OK, it's a 240mm Kraken X52, not a 360, but yeah.

I don't use a baseplate, as the Gainward Phoenix GS doesn't have one, so I put a whole bunch of Alphacool/Enzotech copper sinks on the VRM and VRAM. I can run the 1.125V XOC BIOS just fine.

With sort-of-normal fan noise (1300-1400 RPM) on the rad fans, temps are in the high 50s for the core at 2160-2145 MHz, 1.125V, at around 390-410W power draw. The VRM gets barely warm to the touch.

I do use the backplate for the Phoenix, and it does have active cooling with thermal pads, so that helps a lot as well.

As far as my opinion goes, it's "safe enough" to use. There are always risks, but hey, this wouldn't be OCN without any of those risks hehe.


----------



## TK421

Imprezzion said:


> That's what I run. Ok it's a 240 Kraken X52 not a 360 but yeah.
> 
> I don't use a baseplate as the Gainward Phoenix GS doesn't have one so I put a whole bunch of alphacool / enzotech copper sinks on the VRM and VRAM. I can run 1.125v XOC BIOS just fine.
> 
> With sort of normal fan noise (1300-1400 RPM) on the rad fans temps are high 50's for the core at 2160-2145Mhz 1.125v at around 390-410w power draw. VRM gets barely warm to the touch.
> 
> I do use the backplate for the Phoenix and it does have active cooling with thermal pads so that helps a lot as well.
> 
> As far as my opinion goes, it's "safe enough" to use. There's always risks but hey, this wouldn't be OCN without any of those risks hehe.





Can you link me the copper heatsinks you use?


----------



## Imprezzion

TK421 said:


> can you link me theh copper heatsink you use?


I bought them locally in Holland, but I used the 14x14mm Enzotech VRAM heatsinks and, I believe, the 6x6mm Alphacool VRM heatsinks. ModMyMods had them in the US, but they appear to be out of stock.

The Enzotechs come with pre-applied thermal tape, but the Alphacools don't, so get a sheet of Akasa thermal tape or any other brand that's a bit decent.


----------



## JMTH

Anyone have any idea why/how Unigine Heaven would peg a CPU core at 100% and leave the GPU at ~30%? Rig below.


----------



## TK421

Imprezzion said:


> I bought them locally in Holland but I used the 14x14mm Enzotech VRAM heatsinks and I believe the 6x6mm Alphacool VRM heatsinks. Modmymods had them in US but they appear to be out of stock.
> 
> The Enzotechs come with pre-applied thermal tape but the Alphacools don't so get a sheet of Akasa thermal tape or any other brand that's a bit decent.



alright thanks









JMTH said:


> Anyone have any idea why/how Unigine Heaven would peg a core at 100% and leave the GPU at ~30%? Rig below.



what resolution is it set to?


----------



## JMTH

TK421 said:


> Imprezzion said:
> 
> 
> 
> I bought them locally in Holland but I used the 14x14mm Enzotech VRAM heatsinks and I believe the 6x6mm Alphacool VRM heatsinks. Modmymods had them in US but they appear to be out of stock.
> 
> The Enzotechs come with pre-applied thermal tape but the Alphacools don't so get a sheet of Akasa thermal tape or any other brand that's a bit decent.
> 
> 
> 
> 
> alright thanks
> 
> 
> 
> 
> 
> 
> 
> 
> 
> JMTH said:
> 
> 
> 
> Anyone have any idea why/how Unigine Heaven would peg a core at 100% and leave the GPU at ~30%? Rig below.
> 
> 
> 
> what resolution is it set to?

Just using the HWBOT basic settings.


----------



## zlatanselvic

Anyone have any recommendations of BIOS for the EVGA FTW3 Hydrocopper. I am on a custom loop with plenty of cooling. 

So far I'm limited to 2100Mhz - 2115Mhz with the stock voltage slider. Memory goes to +1200Mhz stable.


----------



## Mylittlepwny2

zlatanselvic said:


> Anyone have any recommendations of BIOS for the EVGA FTW3 Hydrocopper. I am on a custom loop with plenty of cooling.
> 
> So far I'm limited to 2100Mhz - 2115Mhz with the stock voltage slider. Memory goes to +1200Mhz stable.


So, first thing to note: the FTW3 BIOS is currently the fifth-best BIOS available on a card. The only four that are better (perhaps someone can correct me if I'm wrong, as it's been a good while since I looked) are: the Galax/KFA2 BIOS, which allows 380W but is for reference PCBs only (and is only a 7W increase anyway); the Galax HOF Labs 450W BIOS, which only works on the Galax HOF card; the Kingpin stock BIOS, which allows 520W; and, best of all, the pair of XOC BIOSes - the 2000W Galax Extreme BIOS, which I believe only works on the Galax HOF card, and the Kingpin XOC BIOS, which can work on the FTW3 card.

Before I bought my current SLI Kingpin cards I had SLI FTW3 cards, and installing the KP XOC BIOS worked, but my cards refused to get out of safe mode. Meaning they would start fine, but the moment I tried to run a 3D-intensive application they would crash down to 300 MHz and stay there. I tried everything, including disabling NVLink, installing only one card, and fresh Windows installations. Nothing worked for me, so I just flashed back. However, I have heard accounts from numerous people that the XOC BIOS DOES work on the FTW3 cards, so I'm not sure why I was so unlucky.

If you manage to get the XOC BIOS to work, your card will be able to use up to 1.125V (stock is 1.093V), so that might get you a couple of extra boost bins. But without the Classified tool (which only works on the KP cards) you won't be able to push any farther. Still, it might let you boost a bit higher for benchmarks. The biggest trick is to keep your cards below 37C: once you hit about 37C the card automatically downclocks 15 MHz, which can take away much of your gains if you can't stay below that level.
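The boost-bin behaviour described above can be put into a quick back-of-the-envelope script. Note that only the first ~37C step is from owner observation; the higher thresholds here are illustrative guesses, not NVIDIA's (unpublished) boost table:

```python
# Rough sketch of Turing GPU Boost thermal stepping: past certain
# temperature thresholds the card drops boost bins of 15 MHz each.
# Thresholds above the first are illustrative assumptions.

BIN_MHZ = 15  # size of one boost bin

def estimated_clock(base_boost_mhz: int, temp_c: float,
                    thresholds=(37, 46, 55, 64)) -> int:
    """Return an estimated effective clock after thermal bin drops."""
    bins_lost = sum(1 for t in thresholds if temp_c >= t)
    return base_boost_mhz - bins_lost * BIN_MHZ

# A card boosting to 2100 MHz below 37C:
print(estimated_clock(2100, 35))  # 2100 (no bins lost)
print(estimated_clock(2100, 38))  # 2085 (one 15 MHz bin lost)
```

This is why shaving a degree or two of water temperature around the 37-38C mark can be worth a whole bin in benchmarks.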


----------



## Imprezzion

Mylittlepwny2 said:


> So, first thing to note: the FTW3 BIOS is currently the fifth-best BIOS available on a card. The only four that are better (perhaps someone can correct me if I'm wrong, as it's been a good while since I looked) are: the Galax/KFA2 BIOS, which allows 380W but is for reference PCBs only (and is only a 7W increase anyway); the Galax HOF Labs 450W BIOS, which only works on the Galax HOF card; the Kingpin stock BIOS, which allows 520W; and, best of all, the pair of XOC BIOSes - the 2000W Galax Extreme BIOS, which I believe only works on the Galax HOF card, and the Kingpin XOC BIOS, which can work on the FTW3 card.
> 
> Before I bought my current SLI Kingpin cards I had SLI FTW3 cards, and installing the KP XOC BIOS worked, but my cards refused to get out of safe mode. Meaning they would start fine, but the moment I tried to run a 3D-intensive application they would crash down to 300 MHz and stay there. I tried everything, including disabling NVLink, installing only one card, and fresh Windows installations. Nothing worked for me, so I just flashed back. However, I have heard accounts from numerous people that the XOC BIOS DOES work on the FTW3 cards, so I'm not sure why I was so unlucky.
> 
> If you manage to get the XOC BIOS to work, your card will be able to use up to 1.125V (stock is 1.093V), so that might get you a couple of extra boost bins. But without the Classified tool (which only works on the KP cards) you won't be able to push any farther. Still, it might let you boost a bit higher for benchmarks. The biggest trick is to keep your cards below 37C: once you hit about 37C the card automatically downclocks 15 MHz, which can take away much of your gains if you can't stay below that level.


The HOF XOC works fine on reference cards and actually works better than the Kingpin BIOS, as the HOF XOC still allows idle clocks, idle voltage drops, and fan control.


----------



## zlatanselvic

Mylittlepwny2 said:


> So, first thing to note: the FTW3 BIOS is currently the fifth-best BIOS available on a card. The only four that are better (perhaps someone can correct me if I'm wrong, as it's been a good while since I looked) are: the Galax/KFA2 BIOS, which allows 380W but is for reference PCBs only (and is only a 7W increase anyway); the Galax HOF Labs 450W BIOS, which only works on the Galax HOF card; the Kingpin stock BIOS, which allows 520W; and, best of all, the pair of XOC BIOSes - the 2000W Galax Extreme BIOS, which I believe only works on the Galax HOF card, and the Kingpin XOC BIOS, which can work on the FTW3 card.
> 
> Before I bought my current SLI Kingpin cards I had SLI FTW3 cards, and installing the KP XOC BIOS worked, but my cards refused to get out of safe mode. Meaning they would start fine, but the moment I tried to run a 3D-intensive application they would crash down to 300 MHz and stay there. I tried everything, including disabling NVLink, installing only one card, and fresh Windows installations. Nothing worked for me, so I just flashed back. However, I have heard accounts from numerous people that the XOC BIOS DOES work on the FTW3 cards, so I'm not sure why I was so unlucky.
> 
> If you manage to get the XOC BIOS to work, your card will be able to use up to 1.125V (stock is 1.093V), so that might get you a couple of extra boost bins. But without the Classified tool (which only works on the KP cards) you won't be able to push any farther. Still, it might let you boost a bit higher for benchmarks. The biggest trick is to keep your cards below 37C: once you hit about 37C the card automatically downclocks 15 MHz, which can take away much of your gains if you can't stay below that level.





Thank you for the feedback. I am about to split my CPU and GPU loop due to my 10900K being a small volcano. I think that will bring some more headroom back to my card!


----------



## J7SC

zlatanselvic said:


> Thank you for the feedback, I am about to split my CPU and GPU loop due to my 10900k being a small volcano. I think that will bring some more headroom back to my card!


 
One other thing to watch out for with a custom XOC BIOS and 2x 2080 Ti is the combined peak power consumption (i.e. PSU headroom). I run a solid 1300W platinum PSU, but the two waterblocked 2080 Ti Aorus Xtremes pull a combined 760W. Add in 280W or so for the OC'd CPU, plus 4 pumps, 18 fans, etc., and it all adds up - one reason why I haven't tried any XOC BIOS with my NVLink setup yet.

I usually run split CPU and GPU loops anyway for a variety of reasons, but apart from splitting the loop, I assume you have plenty of rad space as well. With 1080/55 rad space for the two GPUs, I usually manage to stay below 38C.
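Writing out the PSU math above as a sketch: the GPU and CPU figures are from the post, the pump, fan, and motherboard wattages are rough assumptions, and the 20% headroom margin is a common rule of thumb rather than any hard spec:

```python
# Simple PSU headroom check for a dual-2080 Ti build, using the
# figures quoted above. Pump/fan/mobo wattages are assumptions.

def psu_headroom(psu_watts: float, loads: dict, margin: float = 0.20):
    """Return (total draw, True if total fits under psu*(1-margin))."""
    total = sum(loads.values())
    return total, total <= psu_watts * (1 - margin)

loads = {
    "2x 2080 Ti (XOC-ish)": 760,   # combined GPU draw from the post
    "OC'd CPU": 280,
    "4 pumps": 4 * 20,             # ~20W per D5-class pump (assumption)
    "18 fans": 18 * 3,             # ~3W per fan (assumption)
    "mobo/RAM/drives": 80,         # assumption
}
total, ok = psu_headroom(1300, loads)
print(total, ok)  # 1254W total, which exceeds 80% of 1300W (1040W)
```

Which illustrates the point: a 1300W unit technically covers the draw, but with an XOC BIOS on top there is little transient headroom left.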


----------



## zlatanselvic




J7SC said:


> One other thing to watch out for w/ custom XOC bios and 2x 2080 Ti is the combined peak power consumption (re. PSU headroom). I run a solid 1300w platinum PSU, but the 2080 Ti Aorus Xtr wb pull a combined 760w. Add in 280w or so for the oc'ed CPU plus 4 pumps, 18 fans etc and it all adds up - one reason why I haven't tried any XoC bios with my NVlink setup yet.
> 
> I usually run split CPU and GPU loops anyway for a variety of reasons, but apart from splitting the loop, I assume you have plenty of rad space as well. With 1080/55 rad space for the two GPUs, I usually manage to stay below 38C.




I haven't done SLI in many moons due to the lack of scaling and frame stutter. I have a 1300w EVGA supernova Platinum. I've transferred this thing over from the 295x2 days, still running like a champ. I'm in a Lian Li Dynamic XL case. I am beefing up to 2x 360mmx64mm radiators and have a waterway with pump+pump res combo. I am probably going to replumb the setup to have 1 pump cool each component. That way the loops are split. The hunt for frames is always a journey, and we all like to tinker, that's why we are here. 

Cheers and thanks again!


EDIT: Here's my rig


----------



## J7SC

zlatanselvic said:


> I haven't done SLI in many moons due to the lack of scaling and frame stutter. I have a 1300w EVGA supernova Platinum. I've transferred this thing over from the 295x2 days, still running like a champ. I'm in a Lian Li Dynamic XL case. I am beefing up to 2x 360mmx64mm radiators and have a waterway with pump+pump res combo. I am probably going to replumb the setup to have 1 pump cool each component. That way the loops are split. The hunt for frames is always a journey, and we all like to tinker, that's why we are here.
> 
> Cheers and thanks again!
> 
> 
> EDIT: Here's my rig


 
...very nice setup ! :thumb:

And yeah, I thought you were looking at dual 2080 Tis... I have been experimenting with CFR NVLink/SLI (rather than the usual AFR). CFR, *when it works*, is pretty good for scaling and frame times, but expectations are that it won't be ready for prime time until the gen after Ampere ('Hopper', rumoured to be mGPU).


----------



## tcclaviger

Nizzen said:


> Mylittlepwny2 said:
> 
> 
> 
> Anybody here running a pair of 2080 Tis in SLI? Just got my second Kingpin card with the Hydrocopper block, and my second GPU is consistently about 8C hotter than my first GPU at load. At idle they're about equal. I have liquid metal on both GPUs; the first one runs 33-35C and the second runs 40-44C. Would adding a radiator between the GPUs to cool the water, so it's not going directly from GPU1 to GPU2, do much of anything to help?
> 
> I suppose I could check my paste job, but it looked damn good when I put the block on the second card, and I don't wanna tear it all apart if I don't have to. Any help would be appreciated!
> 
> 
> 
> Second one gets the warmer water and some heat from the other card?
> 
> 33-35c looks too cold? Sensor error?

I have a 1080 Ti that reads ambient -2C at idle in my wife's loop, lol. Load temps are ambient +3C pulling 500 watts (XOC BIOS), impossebru!!

I would suggest just running the GPUs in parallel flow to keep water temps equalized for both cards, less restriction as well.

Adding a rad between them in the loop will do next to nothing for temps. The water, if flowing over 0.5 gpm, is very homogeneous in temperature and should only vary 2C or less between reservoir exit and reservoir entry, especially with as much radiator as it sounds like you have. The higher the flow rate, the smaller the gap between those two points.
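The "homogeneous water temperature" point falls straight out of the steady-state heat balance, delta-T = P / (mass flow x specific heat). A quick sketch with round numbers (water properties approximated, heat load assumed to all end up in the coolant):

```python
# Steady-state coolant temperature rise across a loop:
#   delta_T = P / (m_dot * c_p)
# P = heat load (W), m_dot = mass flow (kg/s), c_p ~ 4186 J/(kg*K).

GPM_TO_KG_S = 3.785 / 60  # 1 US gpm of water is ~3.785 kg/min

def coolant_delta_t(power_w: float, flow_gpm: float) -> float:
    m_dot = flow_gpm * GPM_TO_KG_S
    return power_w / (m_dot * 4186)

# A ~500W card at the 0.5 gpm minimum mentioned above:
print(round(coolant_delta_t(500, 0.5), 1))  # ~3.8C loop-wide spread
# Doubling the flow halves the spread:
print(round(coolant_delta_t(500, 1.0), 1))  # ~1.9C
```

So at a healthy 1+ gpm the whole loop really does sit within a couple of degrees, regardless of where a rad is placed in the order.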


----------



## kithylin

tcclaviger said:


> I have a 1080ti that reads ambient -2c ad idle in wife's loop, lol. Load temps are ambient +3c pulling 500 watts (xoc bios), impossebru!!


Just a quick note because this is mostly off topic: The XOC bios for 1080 Ti's has been documented to actually damage your card(s) if you use it to run higher voltages (any voltage above 1.093v for 1080 Ti) for a long duration. It should never be run for "Daily driver" gaming loads. It's temporary for benchmarks only then you need to switch back to a normal bios after you're done benchmarking.


----------



## tcclaviger

Yeah, I noticed horrible scaling over 1.093V, so I actually capped it at 1.08 or something like that; I don't use her PC much. The power limit removal is the best part of it.

Coincidentally, my 2080 Ti and 1080 Ti both live at the same clock speed, 2085 MHz. The fact that the 1080 never budges from that clock while the 2080 does made the upgrade a little underwhelming, but still decent.

Wish there was a non-A 2080 Ti XOC BIOS - not for voltage, just for power limits.


----------



## Imprezzion

That's where a shunt mod comes in for non-A models.

Speaking of PSUs: I actually had to upgrade my Seasonic Focus Plus Gold 750W when running just a single 2080 Ti, a 9900K @ 5.1, two water loops / AIOs with 12 fans, and loads of RGB all over the place.

With the stock power limit it kinda handled it, but even with just the EVGA FTW3 Ultra BIOS it could make my PSU shut down when running benches, and I even had one or two shutdowns in hard-to-run games like Division 2.

I swapped it for a Prime Gold 1000W as I could get it super cheap and it was compatible with my CableMod cables for the Focus, and even though it isn't some platinum or titanium beast, it easily handles even the 1.125V XOC BIOS at well over 400W of power draw now. I've never had a single shutdown since, even when benching 5.3GHz on the 9900K and 2170 on the GPU with the XOC BIOS.


----------



## Mylittlepwny2

tcclaviger said:


> I have a 1080 Ti that reads ambient -2C at idle in my wife's loop, lol. Load temps are ambient +3C pulling 500 watts (XOC BIOS), impossebru!!
> 
> I would suggest just running the GPUs in parallel flow to keep water temps equalized for both cards, less restriction as well.
> 
> Adding a rad between them in the loop will do next to nothing for temps. The water, if flowing over .5 gpm, is very homogeneous temperature and should only vary 2c or less between reservoir exit and reservoir entry, especially with a lot of radiator as it sounds like you have. The higher the flow rate, the smaller the gap will be between the 2 points of exit and return.


Hey, thanks for the response, but I did get the issue figured out! I ended up repasting the second card with LM again and temps are INSANELY down. The cards are within 1C of each other now and never go over 38C! Much improved from before, when the second GPU was hitting 50C+.


----------



## Mylittlepwny2

Imprezzion said:


> The HOF XOC works fine on reference cards and actually works better than the Kingpin BIOS, as the HOF XOC still allows idle clocks, idle voltage drops, and fan control.


Both my cards still downvolt and idle with the Kingpin XOC BIOS when I'm not using the Classified controller. For my cards it's basically just an increased voltage limit of 1.125V.

I will say, though, that Chrome with hardware acceleration doesn't like the XOC BIOS for some reason, so before using it I have to remember to disable hardware acceleration before rebooting to enable the second BIOS.

Other than that it seems like a normal old BIOS, just without any power limits, so my cards don't wig out when crossing the 520W limit the stock BIOS has.


----------



## JennyBeans

Question: what does a used 2080 Ti go for? I'm trying to hunt one down.


----------



## kithylin

JennyBeans said:


> Question: what does a used 2080 Ti go for? I'm trying to hunt one down.


Look for yourself: https://www.ebay.com/sch/i.html?_fr...&LH_BIN=1&_sop=15&rt=nc&LH_ItemCondition=3000


----------



## J7SC

Mylittlepwny2 said:


> Hey, thanks for the response, but I did get the issue figured out! I ended up repasting the second card with LM again and temps are INSANELY down. The cards are within 1C of each other now and never go over 38C! Much improved from before, when the second GPU was hitting 50C+.


 
...when you redid your second card, could you tell whether it was the paste application that had caused the higher temps before, or the overall block mounting?


----------



## Baasha

Guys need your help on something...

Just finished building my main rig so I finally installed my 2x RoG Strix 2080 Ti's in the 2nd rig with the 6950X.

In BFV, BF4, or really any BF games (and GTAV), I am getting VERY LOW GPU usage on both 2080 Ti's.

This does not happen in other games (RE 2, RDR 2, Shadow of War etc.) where I get 95%+ usage across both GPUs.

I tried using 451.22 since I upgraded to Build 2004 (20H1) and turned on HAGS (always had VRR turned on as well).

The new rig with 2x Titan RTX uses the SAME monitor (Alienware 4k oled 120hz) and that scales beautifully in all games including GTA V and all BF games.

The rig with the 2080 Ti's does not.

Really need help figuring this out since I just used DDU and reinstalled 446.14 WHQL but the problem persists.

PLEASE HELP!


----------



## dilster97

I have an RTX 2080 Ti (non-A) that was pulled from an HP workstation (reference PCB, blower cooler). It's currently under water and has the Gigabyte 310W BIOS on it, but it still runs into the power limit. I've been pondering the idea of shunt modding. I'm aware that the shunt-modding process differs between Pascal- and Turing-based GPUs.

So right now I'm just asking: has anyone done the shunt mod on a non-A GPU (or heard of it being done), and what was the reported power like afterwards?


----------



## keikei

Welp, bricked my card (had to reinstall the OS as well) playing Sekiro, just when I was coming back into it and making progress.


----------



## Imprezzion

dilster97 said:


> I have an RTX 2080 Ti (non-A) that was pulled from an HP workstation (reference PCB, blower cooler). It's currently under water and has the Gigabyte 310W BIOS on it, but it still runs into the power limit. I've been pondering the idea of shunt modding. I'm aware that the shunt-modding process differs between Pascal- and Turing-based GPUs.
> 
> So right now I'm just asking: has anyone done the shunt mod on a non-A GPU (or heard of it being done), and what was the reported power like afterwards?


Plenty of people have done it. There's a whole thread on here with detailed pictures and which resistors to get. I'm on mobile so I can't easily find the link right now, but just search '2080 Ti shunt' in the forum.
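For context on why the shunt mod raises the power limit: the card computes power from the voltage drop across shunt resistors of known value, so soldering a second resistor in parallel lowers the effective resistance and makes the card under-report its draw. A rough sketch of the arithmetic (the 5 mOhm values are the commonly cited ones, not measured from any particular PCB):

```python
# Shunt mod arithmetic: the GPU measures current as V_drop / R_shunt.
# Paralleling a resistor across the stock shunt lowers R, so the card
# sees less current and the real power limit scales up accordingly.

def parallel(r1_mohm: float, r2_mohm: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

def effective_limit(stock_limit_w: float, r_stock: float, r_added: float) -> float:
    """Real wattage reached when the card believes it is at stock_limit_w."""
    return stock_limit_w * r_stock / parallel(r_stock, r_added)

# 5 mOhm stock shunt with another 5 mOhm stacked on top (a common mod):
print(effective_limit(310, 5.0, 5.0))  # 620.0 -> reported 310W is really ~620W
```

This is also why the mod is permanent in effect: every power reading (including telemetry in monitoring tools) is scaled by the same factor afterwards.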


----------



## kithylin

Baasha said:


> Guys need your help on something...
> 
> Just finished building my main rig so I finally installed my 2x RoG Strix 2080 Ti's in the 2nd rig with the 6950X.
> 
> In BFV, BF4, or really any BF games (and GTAV), I am getting VERY LOW GPU usage on both 2080 Ti's.
> 
> This does not happen in other games (RE 2, RDR 2, Shadow of War etc.) where I get 95%+ usage across both GPUs.
> 
> I tried using 451.22 since I upgraded to Build 2004 (20H1) and turned on HAGS (always had VRR turned on as well).
> 
> The new rig with 2x Titan RTX uses the SAME monitor (Alienware 4k oled 120hz) and that scales beautifully in all games including GTA V and all BF games.
> 
> The rig with the 2080 Ti's does not.
> 
> Really need help figuring this out since I just used DDU and reinstalled 446.14 WHQL but the problem persists.
> 
> PLEASE HELP!


Pay attention to your CPU usage in that 6950X system. Specifically, switch Task Manager to show per-core usage. If a single CPU core is maxed out at 100%, or a large portion of the cores are at 80% or above, then you're seeing a CPU limitation. The 6950X is a 4-year-old processor on an ancient platform; I'm guessing it most likely isn't fast enough to power a pair of RTX 2080 Tis. What CPU is in your Titan RTX system that scales well?
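A quick, hypothetical way to script the same per-core check outside Task Manager (this sketch reads Linux's /proc/stat; on Windows, Task Manager's per-core graph shows the same information):

```python
# Sample per-core CPU utilisation by reading /proc/stat twice.
# A single core pinned near 100% while the GPU sits at ~30% load
# points to a CPU (or driver-thread) limitation.

import time

def read_cores():
    """Return {core: (busy, total)} jiffy counters from /proc/stat."""
    out = {}
    with open("/proc/stat") as f:
        for line in f:
            # per-core lines look like "cpu0 ...", the aggregate is "cpu ..."
            if line.startswith("cpu") and line[3:4].isdigit():
                name, *vals = line.split()
                vals = [int(v) for v in vals]
                idle = vals[3] + vals[4]          # idle + iowait
                out[name] = (sum(vals) - idle, sum(vals))
    return out

def per_core_usage(interval=0.5):
    """Percent busy per core over `interval` seconds."""
    a = read_cores(); time.sleep(interval); b = read_cores()
    return {c: 100.0 * (b[c][0] - a[c][0]) / max(1, b[c][1] - a[c][1])
            for c in a}

usage = per_core_usage()
print(max(usage.values()))  # the busiest core's utilisation
```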


----------



## OverCloke

Hi!

I'm trying to flash my MSI 2080 Ti Gaming X Trio from its current BIOS, 90.02.30.00.D9 (https://www.techpowerup.com/vgabios/213896/msi-rtx2080ti-11264-190528), to the 400W power limit BIOS, 90.02.0B.40.56 (https://www.techpowerup.com/vgabios/205495/msi-rtx2080ti-11264-180930). I can't flash the card because the XUSB firmware of the 400W BIOS is older than the installed one:

XUSB FW component of the input GPU firmware image is incompatible, please use
a newer version of GPU firmware image for this product.

Is there any way to avoid this problem, or any way to edit or find the same BIOS with a matching XUSB firmware and the 400W limit? Does anyone have an MSI 2080 Ti Gaming X Trio with a 400W BIOS?

My compatible BIOS list: https://www.techpowerup.com/vgabios/?did=10DE-1E07-1462-3715

Regards.


----------



## Baasha

kithylin said:


> Pay attention to your CPU usage in that 6950X system. Specifically, switch Task Manager to show per-core usage. If a single CPU core is maxed out at 100%, or a large portion of the cores are at 80% or above, then you're seeing a CPU limitation. The 6950X is a 4-year-old processor on an ancient platform; I'm guessing it most likely isn't fast enough to power a pair of RTX 2080 Tis. What CPU is in your Titan RTX system that scales well?


I had the Titan RTX in the 6950X until about 10 days ago and it worked/scaled beautifully well (see pics).

The only thing I changed in the system was the GPUs - went from 2x Titan RTX to 2x 2080 Ti's. 

Here's the weird thing: some games scale just fine and work really well - RE 2, RDR 2, Shadow of War, SWBF I & II. However, none of the BF games (in Origin) scale, and neither does Crysis 3. SWBF I & II are also Origin games, but those scale just fine.

Titan RTX SLI in 6950X system:


2080 Ti SLI in 6950X system:


The bad scaling happens in:

1.) BFV
2.) BF4
3.) BF1
4.) Crysis 3
5.) GTA V (stutters like crazy)

I'm running out of ideas here.

I used DDU and reinstalled 446.14 since 451.22 is not quite WHQL yet. Still no luck.

The Titan RTX SLI is now in the W-3175X system.


----------



## TONSCHUH

I ordered the EK block and backplate today, but it will take some time before I receive the parts, as the backplate is not in stock.


----------



## Mylittlepwny2

J7SC said:


> ...when you redid your second card, could you tell whether it was the paste application that had caused the higher temps before, or the overall block mounting?


I honestly couldn't tell. I have no idea why the card was overheating earlier. The paste job looked good and the block was clean of any obstruction. But apparently something was wrong, because the moment I redid it and put it back in, it was worlds better!


----------



## Medizinmann

JennyBeans said:


> Question: what does a used 2080 Ti go for? I'm trying to hunt one down.


Well, if you can postpone your purchase I would strongly recommend doing so. Prices are... uhm... a little high right now - much higher than they were 4-5 months ago (like 300€-400€ more, and I would assume a similar amount in $).
I would really wait for the release of the next gen if possible, and then buy a 2080 Ti, hopefully for cheaper - or consider buying next gen, or even AMD... just kidding... :thumb:

Best regards,
Medizinmann


----------



## keikei

@*JennyBeans:* Ampere is rumored for September. Once performance is known, you'll see a flood of used cards. If there are no deals to be had, expect mid-tier prices for 2080 Ti performance.


----------



## Yuke

JennyBeans said:


> Question: what does a used 2080 Ti go for? I'm trying to hunt one down.


Here in Germany prices are crazy; I could sell my 2080 Ti 11G OC for 850-900€ on eBay right now, which I'm considering after watching all the Ampere leaks... but sadly I only have a 970 as a replacement card right now.


----------



## ThrashZone

Hi,
In Canada it's a sellers market.


----------



## J7SC

Mylittlepwny2 said:


> I honestly couldn't tell. I have no idea why the card was overheating earlier. The paste job looked good and the block was clean of any obstruction. But apparently something was wrong, because the moment I redid it and put it back in, it was worlds better!


 
Tx - the main thing is that it is fixed. With large blocks trying to cover multiple spots and thermal pads, things can go awry by a tiny fraction at any time. While not a GPU but CPU issue, per spoiler, DerBauer delidded a TR3 3960x - just couldn't get it to seat right


Spoiler















Yuke said:


> Here in Germany prices are crazy; I could sell my 2080 Ti 11G OC for 850-900€ on eBay right now, which I'm considering after watching all the Ampere leaks... but sadly I only have a 970 as a replacement card right now.


 
...same in W. Canada... counterparts to my specific 2080 Tis have been back-ordered forever; I'm glad we got them when we did for a work-play build (and at a good price to boot) back in December '18. As to Ampere, the published stock power-limit wattage suggests a lot of heat, so I would wait for the custom-PCB, factory-full-waterblocked versions again... I'll probably consider a purchase (or not) in Q1 '21. When Ampere is fully available commercially, used 2080 Ti prices are likely to soften. But still, most game producers want to maximize revenue and cover the whole market... apart from a few lead titles, I don't think 2080 Tis will become useless overnight; they should stay in the top third tier of performance :2cents: ...

...a quick Ampere rumour summary by PC Gamer re. expected CUDA core counts for the Ampere models (don't forget the salt)


----------



## JennyBeans

Medizinmann said:


> Well, if you can postpone your purchase I would strongly recommend doing so. Prices are... uhm... a little high right now - much higher than they were 4-5 months ago (like 300€-400€ more, and I would assume a similar amount in $).
> I would really wait for the release of the next gen if possible, and then buy a 2080 Ti, hopefully for cheaper - or consider buying next gen, or even AMD... just kidding... :thumb:
> 
> Best regards,
> Medizinmann



Ah ok, I could wait I guess. My stream choked out trying to stream Shadow of the Tomb Raider; all the other games I usually stream are fine, so I'm guessing I was lacking with that one. My little 1070 Ti can only do so much, I guess.


----------



## Imprezzion

That 2600X isn't helping matters either; I'm afraid it will bottleneck the 2080 Ti, or whatever you plan to get, especially when you're streaming as well.


----------



## kx11

how good is this cooler ?!!


----------



## jura11

kx11 said:


> how good is this cooler ?!!


I would say this cooler is a great alternative to water cooling. I've run it on a Titan X SC with a 1469MHz OC and seen temperatures of 42-45°C, then tested it on a GTX 1080 Ti, where temperatures stayed under 50°C with a 2113MHz OC.

I haven't tried it on an RTX 2080 Ti at all.

Hope this helps 

Thanks, Jura


----------



## Medizinmann

kx11 said:


> how good is this cooler ?!!





jura11 said:


> I would say this cooler is a great alternative to water cooling. I've run it on a Titan X SC with a 1469MHz OC and seen temperatures of 42-45°C, then tested it on a GTX 1080 Ti, where temperatures stayed under 50°C with a 2113MHz OC.
> 
> I haven't tried it on an RTX 2080 Ti at all.
> 
> Hope this helps
> 
> Thanks, Jura


What brand?

Name?

Best regards,
Medizinmann


----------



## Medizinmann

JennyBeans said:


> Ah ok, I could wait I guess. My stream choked out trying to stream Shadow of the Tomb Raider; all the other games I usually stream are fine, so I'm guessing I was lacking with that one. My little 1070 Ti can only do so much, I guess.


I would also say you should consider upgrading your CPU - a 2600X could bottleneck the system while streaming - so I would postpone both until Q4 2020 and hope for the releases from NVIDIA and AMD to bring prices down... and then either upgrade to a Ryzen 4000 or look into the used option, maybe even a 3900X...

It is rumoured that the new "mid-tier" GPU from NVIDIA will cost around $999 (Canadian) and be faster than the 2080 Ti is right now - so you could hope to get a 2080 Ti cheap on the used market, or something equivalent from either NVIDIA or AMD for less...
Again, looking at new and used prices in the USA, Canada and Europe, I would strongly recommend waiting at least 3-4 months...

Best regards,
Medizinmann


----------



## chrisz5z

Medizinmann said:


> What brand?
> 
> Name?
> 
> Best regards,
> Medizinmann


https://www.raijintek.com/en/products_detail.php?ProductID=45


----------



## Medizinmann

chrisz5z said:


> https://www.raijintek.com/en/products_detail.php?ProductID=45


Question is: are the new brackets for the 2080 Ti available?

On most sites only compatibility up to the 1080 Ti is listed… :headscratch:

Best regards,
Medizinmann


----------



## TONSCHUH

J7SC said:


> TONSCHUH said:
> 
> 
> 
> Ok, I played around a bit with the clocks etc. and that's pretty much it (I could still fine-tune it a bit, but I doubt it would result in much better scores, mainly because of PWR):
> 
> https://www.3dmark.com/3dm/46940296? (any higher clocks resulted in lower scores, mainly because of VRel)
> 
> For Superposition 8k, have a look at the attached screenshots:
> 
> 
> 
> 
> ...nice :thumb: Your card will love w-cooling if / when you go for that

Ordered the EK block + backplate, but I'm not sure yet when I will get them, because the backplate was not in stock where I ordered them from.

👍😊👍


----------



## TONSCHUH

Medizinmann said:


> chrisz5z said:
> 
> 
> 
> https://www.raijintek.com/en/products_detail.php?ProductID=45
> 
> 
> 
> Question is - are the new brackets for 2080 TI available?
> 
> On most sites only compatibility up to the 1080 Ti is listed… :headscratch:
> 
> Best regards,
> Medizinmann

https://www.reddit.com/r/nvidia/com..._amd/?utm_medium=android_app&utm_source=share


----------



## geriatricpollywog

Just snatched this from EVGA's B-stock page for $997. It doesn't appear to be used, but there's some fraying on the fabric around the water tubing. Can't wait to plug it in! Anybody got experience with Kingpins?


----------



## J7SC

TONSCHUH said:


> Ordered the EK block + backplate, but I'm not sure yet when I will get them, because the backplate was not in stock where I ordered them from.


 
...well right now, global deliveries are still 'hit & miss' because of the 'human malware' virus and other trade impacts, but I'm looking forward to your update when both block and backplate have arrived and been mounted. What kind of water-cooling loop (rads, pumps, fans) are you going to integrate them into?




0451 said:


> Just snatched this from EVGA's B-stock page for $997. It doesn't appear to be used, but has some fraying on the fabric around the water tubing. Can't wait to plug it in! Anybody got experience with Kingpins?


congrats !...if this KP tests out 'ok' for you, it's a great deal, even in close-to-Ampere-release times. With all the money you saved, you might even consider the KP full-waterblock ('Hydrocopper')


----------



## geriatricpollywog

Thanks! My Vega broke, so I had a good excuse to get a new GPU. I was thinking about getting something in the $300 range and waiting for the next gen, but this was too good to pass up, and I'm sure I can get at least $800 for it after next gen is released. The EVGA Hydrocopper block is sold out and costs $350 when it's in stock; I found a Bykski block for $140 shipped.

Too bad my PCIe x16 slot is broken and this will have to go into the x8 slot. I may have to upgrade from Z270 if I'm too bottlenecked.


----------



## J7SC

0451 said:


> Thanks! My Vega broke so I had a good excuse to get a new GPU. I was thinking about getting something in the $300 range and waiting for the next gen, but this was too good to pass up and I’m sure I can get at least $800 for it after next gen is released. The EVGA Hydrocopper Block is sold out and costs $350 when its in stock. I found a Bykski block for $140 shipped.
> 
> Too bad my pcie x16 slot is broke and this will have to go into the x8 slot. I *may have to upgrade from z270 if I am too bottlenecked*.


 
...shouldn't be too much of an impact unless you run a lot of other peripherals on your mobo... I used to do a lot of sub-zero HWBot runs and have several suffering GPUs of various generations with custom cooling setups that now only run at 8x instead of 16x (across various-gen Intel and AMD mobos!)... on the one hand, you might actually get 16x back with your new KP 2080 Ti; on the other, 8x will only be noticeable with a fairly populated PCIe mobo and in benchmarks.

...as to quality water blocks, Aquacomputer, Watercool, etc. seem to lead the pack, but most block comparisons I have seen show no more than 5C variance between best and worst anyway... the main thing is to minimize the dreaded NVIDIA boost temp throttling past roughly 38C... THW temp-impact-on-MHz below


----------



## geriatricpollywog

My Kingpin is artifacting and crashing. I really hope it's the motherboard, because if I RMA this card EVGA will just refund me and I won't see another Kingpin. I think it's time for Z490.


----------



## J7SC

0451 said:


> My Kingpin is artifacting and crashing. I really hope its the motherboard because if I RMA this card, EVGA will just refund me and I won’t see another Kingpin. I think its time for Z490.


 
...bummer...any screenies of MSI AB clocks and temps at stock settings?


----------



## geriatricpollywog

Temps are in the 40s. It even crashes when I underclock and switch bios settings. I just spoke with EVGA tech support and they told me the RMA department has their own stock apart from the website and they would get me another Kingpin. Fingers crossed!


----------



## sultanofswing

Were you doing any overclocking with it using the classified tool?


----------



## geriatricpollywog

No, I literally took it out of the box and loaded some games. Just got the card today.


----------



## J7SC

0451 said:


> No, I literally took it out of the box and loaded some games. Just got the card today.


 
RMA process on a KP card that just arrived is certainly annoying, but if you get a fully functioning KP replacement w/o any extra $s, it's still a great deal (minus your frustration and your setup time)


----------



## Laithan

I've noticed an uptick in "less than reputable sellers" since things got crazy this year... I literally bought and returned two 2080 Tis before I finally got a third one that worked...


----------



## geriatricpollywog

Laithan said:


> I've noticed an uptick on "less than reputable sellers" since things got crazy this year.. I have literally bought and returned (2) 2080Ti's before I finally got the 3rd one that worked...


Right, but I got my card directly from EVGA, who are very reputable. I called tech support at 9:30pm last night and immediately got through to a US-based technician who, after verifying that I had tried everything, offered me an advanced RMA. They will send me another KPE with a return label. When I RMAed my EVGA power supply, they sent me a retail box with all the cables and only wanted back the bare PSU. I would recommend EVGA to anyone who will listen.


----------



## kithylin

0451 said:


> Right but I got my card directly from EVGA who are very reputable. I called tech support at 9:30pm last night and immediately got through to a US-based technician who after verifying that I tried everything, offered me an advanced RMA. They will send me another KPE with a return label. When I RMAed my EVGA power supply, they sent me a retail box with all the cables and only wanted back the bare PSU. I would recommend EVGA to anyone who will listen.


I've been a loyal EVGA customer since the GTX 470 days. I still have my pair of EVGA GTX 470 Hydro Copper FTW cards boxed up in the other room. I've been running an EVGA 850W P2 Platinum power supply since 2014 and it's carried me through quite a lot of system upgrades; the power supply stays while the boards and other parts keep changing. It even powered my 5.3 GHz bare-die 3570K system for 2.5 years at one point. It's very stable with very tight regulation and almost no ripple, great for overclocking. I'm considering upgrading to the 1kW T2 Titanium sometime this year.


----------



## keikei

Some _gud_ news regarding the CP 2077 preview shown running on a 2080 Ti. Whatever the next Ti is (or whatever Nvidia calls it), it can't come soon enough. https://wccftech.com/cyberpunk-2077...dlss-2-0-enabled-on-an-rtx-2080ti-powered-pc/


----------



## Mylittlepwny2

0451 said:


> Just snatched this from EVGA's B-stock page for $997. It doesn't appear to be used, but has some fraying on the fabric around the water tubing. Can't wait to plug it in! Anybody got experience with Kingpins?


Yep, I have 2 of the KP cards with the Hydro Copper blocks. Good luck finding the blocks; I haven't heard of anyone actually being able to buy one from EVGA.com in a LONG time. Both times they've seemingly come back in stock (EVGA notifications) they were instantly sold out, and no one I have ever talked to managed to get one. In fact, last time they came "in stock" a few weeks ago I ordered one (to have a spare) and my order ended up getting auto-cancelled by EVGA due to no availability. I can't be sure if they were actually in stock or if it was just a glitch in the system. EVGA, to their credit, gave me $100 in EVGA Bucks for the trouble once I called to check the status of my order.

Anyway, the point is the KP cards are the bee's knees. I can't ever go back to boring old Nvidia cards. Not only do these clock higher (both mine will game at over 2205 MHz if kept below 38C), but you can also install the XOC BIOS to completely remove the power limit and really let these things shine! $1000 for a KP card is an absolute steal. If you ever manage to find a Hydro Copper block, jump on it; in the RARE instance one comes up for sale on the used market, they often sell for over $700.


----------



## sultanofswing

Mylittlepwny2 said:


> Yep i have 2 of the KP cards with the hydrocopper blocks. Good luck finding the blocks. Havent heard of anyone actually being able to buy one from EVGA.com in a LONG time. Both times they've seemingly come back in stock (EVGA notifications) they are instantly sold out and no one i have ever talked to managed to get one. Infact last time they came "in stock" a few weeks ago I ordered one (to have a spare) and my order ended up getting auto cancelled by EVGA due to no availability. I cant be sure if they were actually in stock or if it was just a glitch in the system. EVGA to their credit gave me $100 in EVGA bucks for the trouble once I called to ascertain the status of my order.
> 
> Anyways point is the KP cards are the bees knees. I cant ever go back to the boring old Nvidia cards. Not only do these clock higher (both mine will game at over 2205 MHz if kept below 38C), but you can also install and run the XOC bios to completely remove the power limit and really let these things shine! $1000 for a KP card is an absolute steal. If you ever manage to find a hydrocopper block jump on it. In the RARE instance once comes up for sale on the used market they often sell for over $700.


Wish I could say the same about my Kingpin. It crashes at anything over 2145 MHz even with temps below 38C, and the memory shows artifacts above +1050 MHz.


----------



## Mylittlepwny2

sultanofswing said:


> Wished I could say the same about my Kingpin. Crashes at anything over 2145mhz even with temps below 38c and the memory will show artifacts above +1050mhz


What kind of cooling are you using? It also depends on what games you're playing. I hear COD and a few other games don't like overclocks at all, and 2130-2145 is all most people can manage even on otherwise seemingly stable overclocks. I don't play COD so I have no experience with that one.


----------



## sultanofswing

Mylittlepwny2 said:


> What kind of cooling are you using? Also depends on what games youre playing. I hear COD and a few other games doesnt like overclocks at all and 2130-2145 is all most people can manage even on otherwise seemingly stable overclocks. I dont play COD so i have no expererience with that one.


I am not talking about in games as I do not overclock when gaming. It's when running a benchmark like time spy.
My setup is listed in my sig.


----------



## geriatricpollywog

I like the Hydro Copper block, but will likely get the Bykski one instead. No way am I spending $350-700 on a GPU block that will be hidden under the card. Plus waterblocks are difficult to sell used since people who watercool are less likely to buy used parts.


----------



## J7SC

keikei said:


> Some_ gud _news regarding the CP 2077 preview shown using a 2080ti? Whatever the next Ti (or Nvidia calls it), can't come soon enough. https://wccftech.com/cyberpunk-2077...dlss-2-0-enabled-on-an-rtx-2080ti-powered-pc/
> 
> 
> https://www.youtube.com/watch?v=8xoTj0uK1dk


 
...yeah, I watched that yesterday w/ great interest, then poured a nice cold beer and watched the 'long vid' in the spoiler. Apparently there's some chance that Cyberpunk 2077 might do mGPU, not because current SLI/NVLink is such a hot seller, but because of mGPU plans by NVidia and AMD down the line


Spoiler










 



sultanofswing said:


> I am not talking about in games as I do not overclock when gaming. It's when running a benchmark like time spy.
> My setup is listed in my sig.


 
2145 is nothing to sneeze at...still, there's a lot of variance within the same GPU model from the same vendor. For example, my 2x 2080 Ti (Aorus Xtreme factory w-block) are only 3 digits apart in their serial numbers, yet one easily gets past 2205 at a relatively low GPU voltage, while the other is 45 MHz slower while needing an additional voltage step (both on stock BIOS).


----------



## Mylittlepwny2

sultanofswing said:


> I am not talking about in games as I do not overclock when gaming. It's when running a benchmark like time spy.
> My setup is listed in my sig.


Gotcha, yeah, I didn't see your sig down there. My apologies. Boy, that is strange. I assumed that since the KP cards are all binned they would be relatively similar in their overclock behavior. Then again, I also have liquid metal on my cards, so maybe that's a factor? Either way, 2205 is fine for both cards, though my first card can go as high as 2220 on most 3DMark tests. It's not 100% stable though. I have noticed that if I'm running NVLink (or SLI, or whatever you wanna call it) I have to downclock a couple of deltas; 2175 is about the most I can do when using SLI without manually increasing voltage.


----------



## J7SC

0451 said:


> I like the Hydro Copper block, but will likely get the Bykski one instead. No way am I spending $350-700 on a GPU block that will be hidden under the card. Plus waterblocks are difficult to sell used since people who watercool are less likely to buy used parts.


 
...between the KPE and the Galax HOF, you probably have the two top 2080 Tis on the market. The Galax version below, though, has both a full waterblock and air cooling all in one :drool: you just don't want to know the price


----------



## geriatricpollywog

It’s ugly, I hate it. Does the HOF even use binned dies and Samsung memory?

The KPE on the other hand looks like it was built by Skunkworks. Plus, EVGA really takes care of their customers and they make the unlocked bios easy to locate and download.


----------



## 113802

0451 said:


> It’s ugly, I hate it. Does the HOF even use binned dies and Samsung memory?
> 
> The KPE on the other hand looks like it was built by Skunkworks. Plus, EVGA really takes care of their customers and they make the unlocked bios easy to locate and download.


It looks very clean with the RGB disabled. Yes, they're cherry-picked dies. From the memory overclocks I've seen on the HOF I'd guess it uses Samsung memory, but the PCB shots I've seen showed Micron memory.


----------



## J7SC

WannaBeOCer said:


> It looks very clean when the RGB is disabled. Yes they're cherry picked dies, from memory overclocks I've seen on the HOF it would suggest it uses Samsung memory but PCB shots I've seen showed micron memory.


 
...I think the early 2080 Ti HOF (incl. OC Lab) had Micron, the later ones Samsung. In any case, the Galax 2080 Ti HOF OC Lab cards absolutely dominate the top-GPU categories at HWBot. And per the pics below, you can get the 'regular' (non-10th-anniversary) waterblock edition.

...I like EVGA's KPEs and Classifieds and have had 8 of them across several GTX generations...three had some issues though, of which two had to be RMA'ed. In any case, I look forward to what Galax and KPE bring to the table for the Ampere gen in terms of water-blocked cards, along with Aorus etc.


----------



## sultanofswing

Mylittlepwny2 said:


> Gotcha yeah didnt see your sig down there. My apologies. Boy that is strange. I assumed that since the KP cards are all binned they would be relatively similiar in their overclock behavior. Then again I also have Liquid metal on my cards so maybe thats a factor? Either way yeah 2205 is fine for both cards, though my first card can go as high as 2220 on most 3dmark tests. Its not 100% stable though. I have noticed though that if im running NVLINK or SLI or whatever you wanna call it that I have to downclock a couple deltas. 2175 is about the most i can do when using SLI without manually increasing voltage.


I can push the card well over 2200 MHz but it requires colder temps. I did once manage a successful Time Spy run at 2205 without chilled water back when I had my old 8700K setup, which was the best GPU score I have gotten so far.











I have not done much benchmarking lately.


----------



## Imprezzion

Scaling with voltages and clocks is so weird for my card, lol..

It's a Gainward Phoenix GS under a Kraken G12 + X52, with the EVGA FTW3 Ultra BIOS and Samsung memory.

It can run 2070-2040 MHz all day, stable in whatever I throw at it, with a 1.018v custom curve at around 45-47C load, but slapping the full 1.093v on it only gets it to 2115-2085 MHz at about the same temps (47-49C).

If I load up the HOF XOC BIOS at 1.125v, it will do 2170-2130 MHz at 50-53C.

I tried dropping temps; obviously the X52 can go way lower, but I don't have the fans going all that fast. That had zero effect on stability, though. Even dropping to about the 38C mark by running the fans at 80-100% doesn't improve stability at all.

Does my card just scale that badly past 2040 @ 1.018v?

What I also noticed is that power draw barely increases between 2040 @ 1.018v and 2085 @ 1.093v, since they both max out around 340W. With the XOC BIOS it does go higher, about 370-380W.
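FWIW, dynamic power scales roughly with frequency times voltage squared, so both states reading a flat ~340W smells like a power-target cap rather than odd silicon. A rough sanity check (pure back-of-envelope, ignoring static/leakage power):

```python
# Back-of-envelope: dynamic GPU power scales roughly as f * V^2.
# Ignores static/leakage power, so treat the ratio as an approximation.
def relative_power(f2_mhz, v2, f1_mhz, v1):
    """Expected power of state 2 relative to state 1 under P ~ f * V^2."""
    return (f2_mhz / f1_mhz) * (v2 / v1) ** 2

ratio = relative_power(2085, 1.093, 2040, 1.018)
print(round(ratio, 2))     # ~1.18 -> ~18% more power expected
print(round(340 * ratio))  # ~401 W, i.e. above a ~340 W power target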


----------



## TONSCHUH

Got my Block + Backplate and installed them. Will still have to drain / flush / change my loop, before I can do some testing. 😊


----------



## J7SC

TONSCHUH said:


> Got my Block + Backplate and installed them. Will still have to drain / flush / change my loop, before I can do some testing. 😊


 
...nice !


----------



## TONSCHUH

J7SC said:


> ...nice !


I'm up and running again, but I still have to bleed the air out.

I also had to order a D-RGB controller + D-RGB-to-RGB converter, because the EK block comes with D-RGB (5V / 3-pin), but my mobo only has Asus Aura Gen 1, which comes with RGB headers only (12V / 4-pin).

It would also have been nice if EK had released the monoblock for the Maximus IX Apex with RGB in some form, but I will not drill my own LED holes into it; it's too much trouble, having to pull everything apart again.

Now I only need the cash for the Thermaltake Core P8 + distro plate, because I'm over all the dust and hassle when I have to move / drain / maintain the system.

Core P8: https://www.thermaltake.com/news/view/index?id=838

Distro for it: https://www.thermaltake.com.au/pacific-core-p5-dp-d5-plus.html

I don't know if I should replace the TT pump with an EK D5, remove the pump and seal the spot with a cover, or just keep it unplugged and only use my EK dual D5s, or run all three together; the TT pump only moves 1135 L/h while the EK pumps do 1500 L/h.


----------



## caki

*aorus xtreme waterblock 2080ti pump issue*

I'd been using this card for 6 months without any issues, but yesterday the GPU started throttling like crazy, and I found out the culprit was the pump failing to circulate the coolant (radiators cold while the card sits around 85-90C at idle in Windows). When I restarted the PC I heard some air-bubble sounds coming from the card, and it was back to 30C idle when I checked again after the restart.

However, it got worse, to the point where the pump would not start at all after multiple power cycles, or would start, make the air-bubble sound for an instant, and stop again. Tilting the card and radiators seemed to solve the problem for a while until it started again. I didn't know that the radiator has to sit above the pump level, and it's impossible to arrange that in my chassis anyway, but I found a way to take the fans out of the chassis and hang them higher than the pump as a temporary solution. It solved the problem for 2 days, and I was glad the problem was gone, until minutes ago when it made a stubborn return.

Now, since my card is a non-reference PCB, I guess my chances of installing an aftermarket block are nil? Any way to bleed air from this system? I did find that the third (tapped) exit from the pump on the card has some kind of hole in it, like the ones in bleeder valves.


----------



## kithylin

caki said:


> been using this card since 6 months without any issues but yesterday gpu started throttling like crazy and i found out that the culprit was pump stopping to circulate the coolant (radiators cold while card is around 85-90 C at idle in windows). When i restarted the pc i heard some air bubble passing sounds coming from the card and it was back to 30C idle when i checked again after the restart. However it got worse to the point where it would not start at all after multiple power cycles or just start and make the air bubble sound for an instance and stop again. Tilting the card and radiators seemed to solve the problem for a while until it started again. I didn’t know that the radiator had to sit above the pump level and its impossible to implement in my chassis anyway even if i did but i found a way to take the fans out of the chassis and hang them higher than the pump as a temporary solution. It solved the problem for 2 days and i was glad that the problem was gone until minutes ago when it made a stubborn return. Now since my card is non reference pcb i guess chances of installing an aftermarket cooler block is impossible? Anyway to bleed air from this system? I could find the third exit (tapped) from the pump on the card has some kind of a hole in it. Like the ones in bleeder valves.


If you are using an AIO (it sounds like you are), they have to have air bubbles in them to function; they could potentially burst and leak if they didn't have a small air bubble inside. All AIOs need to be positioned with the radiator above the pump, CPU, or GPU. It's physics.


----------



## J7SC

TONSCHUH said:


> I'm up and running again, but I still have to bleed the air out.(...) Now I only need the cash for *the Thermaltake Core P8 + Distro*, because I'm over all the dust and hassle, when I have to move / drain / maintain the system.(...)


...looking forward to some before-and-after air <> water temp comparisons when you're done. BTW, I didn't even know there was a new TT Core (P8) out...looks related to the Core P5, but with extra items on the side and top...



Spoiler

















kithylin said:


> If you are using an AIO (it sounds like you are) they have to have air bubbles in them to function. They could potentially burst and leak if they didn't have a small air bubble in it. *All AIO's need to be positioned with the radiator above the pump*, CPU or GPU. It's physics.


^^That's good advice. While most of my HD builds are custom loop, I have some AIOs, among them a TT 240mm AIO that's been running on CPUs (same principle as a GPU AIO) since late 2012...it probably lasted this long because it was on 24/7/365. Still, the other day I turned it off to clean the rad and fans. When I turned it back on (w/ the rad in the same position above the setup), temps shot up...the pump is still working at the same rpm, but some bubbles must have come loose and lodged in the wrong place; time for some AIO surgery


----------



## cisco150

*Help, Evga 2080 TI XC Gaming Mod Bios. help me choose 24/7*

Hi guys, I wanted to know if this BIOS is safe to run 24/7 on my EVGA 2080 Ti XC Gaming: the EVGA RTX 2080 Ti Kingpin Custom PCB (3x8-Pin) 2000W x 100% Power Target BIOS (2000W). The card is water cooled with a 360 rad and a 240 rad, also cooling my i9-9900K, in a custom loop with a PWM pump and distro block.
Here are the BIOSes I tried: the EVGA Kingpin 2000W one above and the GALAX/KFA2 RTX 2080 Ti OC Reference PCB (2x8-Pin) 300W x 127% Power Target BIOS (380W). I also wanted to try, but haven't yet, the Galax RTX 2080 Ti HOF OC Lab Custom PCB (3x8-Pin) 2000W x 100% Power Target BIOS (2000W) and the ASUS RTX 2080 Ti Strix OC Custom PCB (2x8-Pin) 1000W x 100% Power Target BIOS (1000W).

Here are some pics of stats with the Kingpin BIOS. The only thing I see is that voltage sits at 1.125v; I know it's set to 1.125 by the BIOS. I get up to 2190 MHz on the core, and I haven't pushed the RAM past 7600 MHz. Temps are around 49-50C; those numbers were from running 3DMark Time Spy a couple of times. Any help would be great, thanks. Which BIOS would work best with this card? I have 32GB of system RAM, a Z390 Aorus Master, an i9-9900K, and a 1050W power supply.


----------



## JedixJarf

JennyBeans said:


> Question what does a used 2080 ti go for cause I'm trying to hunt down a used one


Got a reference Dell 2080 Ti for $800 on eBay + 10% back in eBay Bucks, slapped a block on it, and run it with the 360W BIOS.


----------



## JennyBeans

JedixJarf said:


> Got a reference Dell 2080 Ti for $800 on ebay + 10% back in ebay bucks, slapped a block on it and run it with the 360w bios.


rip same amount I spent on my 2070 super lol


----------



## rustyk

cisco150 said:


> Hi guys I wanted to know if this Bios is safe to run 24/7 on my card Evga 2080 Ti XC Gaming, Bios is the EVGA RTX 2080 Ti Kingpin Custom PCB (3x8-Pin) 2000W x 100% Power Target BIOS (2000W). The card is water cooled with a 360 rad and a 240 rad water cooling my i9900k with a custom loop and PWM pump with distro block..


I don't W/C my card, but is that a serious question? It's pretty obvious the cards are not designed to run 2kW of power, even if they could draw it. IMHO it's clearly designed for benchmarking, for 'experts' who are willing to take risks.

What's the standard power limit on your card? I'd just use the standard one and keep the card as cold as you can. By 24/7 I assume you mean gaming rather than benchmarking? Did you benchmark any different games when you tried the different BIOS?


----------



## Shawnb99

Mylittlepwny2 said:


> Yep i have 2 of the KP cards with the hydrocopper blocks. Good luck finding the blocks. Havent heard of anyone actually being able to buy one from EVGA.com in a LONG time. Both times they've seemingly come back in stock (EVGA notifications) they are instantly sold out and no one i have ever talked to managed to get one. Infact last time they came "in stock" a few weeks ago I ordered one (to have a spare) and my order ended up getting auto cancelled by EVGA due to no availability. I cant be sure if they were actually in stock or if it was just a glitch in the system. EVGA to their credit gave me $100 in EVGA bucks for the trouble once I called to ascertain the status of my order.
> 
> Anyways point is the KP cards are the bees knees. I cant ever go back to the boring old Nvidia cards. Not only do these clock higher (both mine will game at over 2205 MHz if kept below 38C), but you can also install and run the XOC bios to completely remove the power limit and really let these things shine! $1000 for a KP card is an absolute steal. If you ever manage to find a hydrocopper block jump on it. In the RARE instance once comes up for sale on the used market they often sell for over $700.


EVGA stopped making the blocks for the 2080 Ti KPE very shortly after release. As of now, unless someone is selling theirs, you won't be able to find one. Optimus has promised to come out with a block, but knowing them we'll be lucky if they release it by the end of the year and ship it 6 months later.

The fact that they made this model an AIO, then sold the block as an extra, and then stopped selling blocks so soon after release is likely why I'll never buy one of those cards. That, and they sell out so quickly that I'd never get the chance to buy one if I wanted to.


----------



## geriatricpollywog

It took almost 2 weeks, but the EVGA RMA department sent me a retail Kingpin instead of a small RMA box. I loaded Red Dead Redemption 2 and it runs smoothly with no artifacts at 2040 / 7000 MHz on stock settings. I'll play around with it later. This is in my new 10700K setup, which runs 5.2 all-core @ 1.31 vcore.


----------



## chibi

0451 said:


> It took almost 2 weeks, but the EVGA RMA department sent me a retail Kingpin instead of a small RMA box. I loaded Red Dead Redemption 2 and it runs smoothly with no artifacts, 2040 / 7000 mhz on stock settings. I'll play around with it later. This is in my new 10700k setup which runs 5.2 all core @ 1.31 vcore



Outstanding! Buy a B-stock Kingpin that sharts the bed and get a new retail RMA replacement. Good on EVGA; they're the only GPU brand I will ever buy that isn't Nvidia reference.


----------



## geriatricpollywog

chibi said:


> Outstanding! Buy b-stock kingping that shart the bed and get a new retail rma replacement. Good on EVGA, only brand GPU I will ever buy that isnt nvidia reference.


I was totally planning on keeping the first one I got, but it artifacted. I guess I'm lucky it did! I wouldn't recommend buying a Kingpin with the hope of RMAing it, because chances are they won't have one in stock and you'll end up with a refund instead. But I agree, EVGA's warranty service is second to none. B-stock is actually eligible for the EVGA Step-Up program, and I can upgrade to a 3080 Ti if it comes out in the next 3 months.


----------



## cisco150

rustyk said:


> I don't W/C my card but is that a serious question? It's pretty obvious that the cards are not designed to run 2KW of power even if they could. IMHO it's clearly designed for benchmarking for 'experts' who are willing to take risks.
> 
> What's the standard power limit on your card? I'd just use the standard one and keep the card as cold as you can. By 24/7 I assume you mean gaming rather than benchmarking? Did you benchmark any different games when you tried the different BIOS?


I tried the 380W one with Call of Duty MW at 3840x1600; temps stayed at 47C, and Time Spy hit 49C, peaking at 50C, on all 3 BIOSes with no trouble at all. I'm only seeing a wattage of 560 or so, and voltage stays @ 1.125v on the Kingpin BIOS; with only +115 on the core I was at 2190 MHz.


----------



## TONSCHUH

caki said:


> been using this card since 6 months without any issues but yesterday gpu started throttling like crazy and i found out that the culprit was pump stopping to circulate the coolant (radiators cold while card is around 85-90 C at idle in windows). When i restarted the pc i heard some air bubble passing sounds coming from the card and it was back to 30C idle when i checked again after the restart. However it got worse to the point where it would not start at all after multiple power cycles or just start and make the air bubble sound for an instance and stop again. Tilting the card and radiators seemed to solve the problem for a while until it started again. I didn't know that the radiator had to sit above the pump level and its impossible to implement in my chassis anyway even if i did but i found a way to take the fans out of the chassis and hang them higher than the pump as a temporary solution. It solved the problem for 2 days and i was glad that the problem was gone until minutes ago when it made a stubborn return. Now since my card is non reference pcb i guess chances of installing an aftermarket cooler block is impossible? Anyway to bleed air from this system? I could find the third exit (tapped) from the pump on the card has some kind of a hole in it. Like the ones in bleeder valves.


I contacted Gigabyte / Aorus and asked if I would be able to put the block from the Xtreme Waterforce WB Edition onto my air-cooled version and if they would sell me one, but they don't, and they told me I would straight-up lose my whole warranty if I pulled the GPU apart or altered it. So from their point of view, you are not even allowed to put a waterblock on their card, or even repaste it.


----------



## Laithan

cisco150 said:


> Hi guys I wanted to know if this Bios is safe to run 24/7 on my card Evga 2080 Ti XC Gaming, Bios is the EVGA RTX 2080 Ti Kingpin Custom PCB (3x8-Pin) 2000W x 100% Power Target BIOS (2000W).
> 
> Here are the bios i tried, EVGA RTX 2080 Ti Kingpin Custom PCB (3x8-Pin) 2000W x 100% Power Target BIOS (2000W), GALAX/KFA2 RTX 2080 Ti OC Reference PCB (2x8-Pin) 300W x 127% Power Target BIOS (380W and i wanted to try this one but haven't- Galax RTX 2080 Ti HOF OC Lab Custom PCB (3x8-Pin) 2000W x 100% Power Target BIOS (2000W) or ASUS RTX 2080 Ti Strix OC Custom PCB (2x8-Pin) 1000W x 100% Power Target BIOS (1000W).





rustyk said:


> I don't W/C my card but is that a serious question? It's pretty obvious that the cards are not designed to run 2KW of power even if they could. IMHO it's clearly designed for benchmarking for 'experts' who are willing to take risks.
> 
> What's the standard power limit on your card? I'd just use the standard one and keep the card as cold as you can. By 24/7 I assume you mean gaming rather than benchmarking? Did you benchmark any different games when you tried the different BIOS?


From the peanut gallery (I'm sure this has already been discussed many times)

I admit that I haven't used one of the "2000 Watt" BIOSes and measured how much they can actually pull; however, I've done a _*lot*_ of power testing with Maxwell. Regardless of the GPU on the PCB, there are simply electrical limits. I am not talking about the actual maximum a single 8-pin PCI-e cable can carry _electrically_ (324 Watts, I believe), but what I would consider a "safe" wattage to pull from a single PCI-e 8-pin, which is _*200 Watts*_. This factors in the PCB trace width where the power is ultimately carried. 200W is a lot of power for a single 8-pin and above what most STOCK BIOSes of recent GPU generations would define (150W was a common STOCK BIOS 8-pin value).

I struggled to find any test that would come even close to pulling 200W (even testing @ 4K); the only real way I could get that much power drawn was with *furmark*, which is of course not even close to real-world. I suspect some LN2/DICE examples w/ extreme voltages may reach the power limits. Regardless of who made the GPU, the 8-pin PCI-e power cable is an industry standard. The PCI-e slot itself can provide up to 75W of additional power. The traces/components on the PCB itself will of course vary.

*With a maximum of 200W per 8-pin PCI-e:*
8-pin PCI-e cable x 1 + the PCI-e slot itself = 275W of power (safe maximum)
8-pin PCI-e cables x 2 + the PCI-e slot itself = 475W of power (safe maximum)
8-pin PCI-e cables x 3 + the PCI-e slot itself = 675W of power (safe maximum)


So let's just say for giggles that we went with the maximum power an 8-pin can electrically carry which is 324 Watts.. then you would have:
*With a maximum of 324W per 8-pin PCI-e:*
8-pin PCI-e cable x 1 + the PCI-e slot itself = 399W of power (electrical maximum)
8-pin PCI-e cables x 2 + the PCI-e slot itself = 723W of power (electrical maximum)
8-pin PCI-e cables x 3 + the PCI-e slot itself = 1047W of power (electrical maximum)

So where is this magical 2000W coming from, anyway?
It may be defined in the BIOS as 2000W (might as well put 1 million, then), but I don't think it is electrically possible given either the physical wires themselves or the traces on the PCB.

Also, the question of running a GPU 24x7 is not one of power, it is one of _*voltage*_. I would not recommend running that XOC BIOS 24x7 because of the increased voltage (unless you back it down for daily use via the curve), not because of the power; the card only pulls what it needs when it is needed. The actual power used depends on the workload presented to the GPU.

Personally, as long as I could have profiles configured with different voltages for different needs, I would keep the XOC BIOS flashed at all times and just use software to limit the voltage for gaming. When benchmarking, load the max-voltage profile.
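The arithmetic above is easy to tabulate; note the per-cable figures (200W "safe", 324W electrical max, 75W from the slot) are the assumptions from this post, not an official spec:

```python
# Tabulate the power-budget math from the post above.
# 200 W "safe" vs 324 W electrical max per 8-pin, plus 75 W from the PCIe slot.
SLOT_W = 75

def board_budget(num_8pins, per_cable_w):
    """Total board power budget for a given 8-pin count and per-cable rating."""
    return num_8pins * per_cable_w + SLOT_W

for n in (1, 2, 3):
    print(f"{n}x 8-pin: safe {board_budget(n, 200)} W, "
          f"electrical max {board_budget(n, 324)} W")
```

Even the most generous row lands at 1047W for a 3x 8-pin card, nowhere near the 2000W the XOC BIOS advertises.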


----------



## TONSCHUH

J7SC said:


> ...looking forward to some 'before and after' air <> water temp comps when you're done. BTW, I didn't even know that there was a new TT Core (P8) out...looks related to core P5, but with extra items on the side and top...





Spoiler















That's what I've got so far, with the 7700K @ 5047MHz (5151MHz wasn't fully stable in Fire-Strike) in the same loop:

Gigabyte GeForce RTX 2080 Ti Aorus Xtreme 11G Graphics Card (XOC BIOS mod: ASUS RTX 2080 Ti Strix OC custom PCB (2x 8-pin) 1000W @ 100% power target) | Core Voltage: 1.0870V | GPU Clock (max stable): 1350MHz | Boost (max stable): 2145MHz | Memory Clock (max stable): 16000MHz | Max Temp: 32C (water-cooled)

Idle Temps GPU: 22C with AirCon and 28C without AirCon

Idle Temps CPU: 28C with AirCon and 30C without AirCon

I could still try to reduce the GPU Core Voltage, but doubt that it would change anything.

It's with +135 Core and +1000 (930) Mem.

+150 Core and/or slightly higher Mem clocks pretty much instantly kick me out of Fire-Strike.

Will still play around with it, when I have some more spare time, but I doubt that I could push it much higher.



----------



## 99999User

Would anyone kindly confirm Kingpin bios can be flashed onto Gigabyte Gaming OC 2080Ti successfully?


----------



## kithylin

Laithan said:


> So where is this magical 2000W coming from anyway


The bios allows it but to actually get power that high people have to solder on zombie boards for LN2 overclocking.









Yes I know the image isn't a 2080 Ti. I just used it as a reference. It's the same concept.


----------



## SirCanealot

kithylin said:


> If you are using an AIO (it sounds like you are) they have to have air bubbles in them to function. They could potentially burst and leak if they didn't have a small air bubble in it. All AIO's need to be positioned with the radiator above the pump, CPU or GPU. It's physics.


I have a Silverstone Raven 3 case (flipped vertically with the big 180mm fans at the bottom) and I've replaced the fans at the bottom with a Kraken G12 + G63 280mm rad, and I haven't had any problems with it so far. I also ran a 240mm rad on a 1080 Ti and the 2080 Ti and haven't had any issues like this. It runs around 41-43C max (temps are all over the place at this time of year in the UK!) with push-pull fans. So I don't know if it's a case of NEED, or if I've just been lucky and haven't had any issues so far.

I do wonder if the performance would be better with the rad above the pump, but I don't think this is possible sadly


----------



## caki

TONSCHUH said:


> I contacted Gigabyte / Aorus and asked if I would be able to put the block from the Xtreme Waterforce WB Edition onto my air-cooled version and if they would sell me one, but they don't, and they told me I would lose my whole warranty outright if I pulled the GPU apart or altered it. So from their point of view, you are not even allowed to put a water block on their card or re-paste it.


Well, I bought the card from another country, so I can't even pay them to repair my card. I ended up cutting the stock piping (and saw the shockingly cheap plastic hose inside the protective cover) to add a reservoir with a pump for circulation. I also disabled the stock water pump by cutting one of the power cables that feeds it. So far the temps are at least 10C lower, with even slightly higher clock settings (previously stable around 2100-2130, now 2170-2200). As a side note, the amount of coolant that came out of the stock AIO on this card was less than 150ml in volume, which I suspect to be the culprit!!

I had always heard about the quality of the hardware Gigabyte sells, but my experience with the 2080 Ti Aorus Xtreme Waterforce strongly convinced me otherwise. From now on, it's nothing but a reference card and a custom setup for me.

thx everyone for your help and kind suggestions.


----------



## rustyk

cisco150 said:


> I did the 380W one with Call of Duty MW at 3840x1600; the temp stayed at 47C, and I tried Timespy at 49C, peaking at 50C, on all 3 BIOSes with no trouble at all. I'm only seeing a wattage of 560 or so, and the voltage stays @ 1.125v on the Kingpin BIOS; with only a +115 core I was at 2190MHz


What about sustained FPS in games? Isn't that the whole point of a usable 24/7 overclock? 
People are obsessed with power draw and core clocks and synthetic benchmark scores.
I would bet real money that a decent proportion are cranking everything up to max, running Timespy and calling it a day. Meanwhile, they are artifacting and running wasteful amounts of power through the cards.


----------



## geriatricpollywog

Wait, I thought this was an overclocking forum. I’m confused.


----------



## kithylin

rustyk said:


> What about sustained FPS in games? Isn't that the whole point of a usable 24/7 overclock?
> People are obsessed with power draw and core clocks and synthetic benchmark scores.
> I would bet real money that a decent proportion are cranking everything up to max, running Timespy and calling it a day. Meanwhile, they are artifacting and running wasteful amounts of power through the cards.


A lot of people on this website are trying to hit world records with time spy and other tests with overclocks. Did you forget that's a thing? Hwbot.org is still there. That's a big part of this website and the community around here.


----------



## nikoli707

I have a pny blower 2080ti, flashed with a palit 310w bios, and an accelero iii with two noctua a12x25 fans, this is in an ncase m1 v6.

I'm playing around in MSI AB, trying to set a custom frequency curve, and it is giving me headaches. Basically I want the card to max out at [email protected] and hold it. I can sometimes get it to work and it will hold [email protected] indefinitely looping Unigine Heaven/Valley, but as soon as I restart the machine the curve graph changes on me, and it seems random. Is there any sort of Turing BIOS editor? I don't like having to manually change the curve each time I load Windows, and sometimes MSI AB doesn't like it when I hit apply; I have to load the profile and hit apply a few times in a row before it sticks.
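For what it's worth, the usual Afterburner workaround is the "flatten the curve" trick: raise the point at your target voltage to the clock you want, then flatten every point to its right so the card can't boost past it. A conceptual sketch of the idea (the curve points below are invented, not from any real card):

```python
# Conceptual sketch of the "flatten the curve" trick: clamp every point at or
# above the target voltage to the target clock. Curve points are invented.
def flatten_curve(curve: dict[float, int], target_v: float, target_mhz: int) -> dict[float, int]:
    """Return a copy of the V/F curve capped at (target_v, target_mhz)."""
    return {v: (target_mhz if v >= target_v else mhz) for v, mhz in curve.items()}

stock = {0.800: 1800, 0.900: 1950, 1.000: 2040, 1.093: 2100}
locked = flatten_curve(stock, 1.000, 2040)  # card now tops out at 2040 MHz
```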


----------



## JackCY

0451 said:


> Wait, I thought this was an overclocking forum. I’m confused.


Once upon a time.


----------



## geriatricpollywog

Or they are like Buildzoid and they game on 1700x / R9 Fury while their 10900k and 2080tis are used only for benching and modding.

When I drive around in circles at the racetrack I’m not actually trying to get anywhere XD


----------



## Laithan

kithylin said:


> The bios allows it but to actually get power that high people have to solder on zombie boards for LN2 overclocking.


Do you know if these replace the existing PCI-e connectors or complement them?


----------



## kithylin

Laithan said:


> Do you know if these replace the existing PCI-e connectors or complement them?


I haven't done this method myself; I've only seen it demonstrated in live streams and on YouTube. But from what little I understand about it, the zombie boards wire directly into the board's VRMs and typically supplement the power we would otherwise get from the PCI-Express power connectors. I don't think it would allow the card to draw more power over the PCIE connector itself (when using those zombie boards) - if it did, it would melt the motherboard and be counter-productive. I don't claim to understand the science behind all of that. I just know that if someone wants to get up near 2000 watts of power on a video card, that's how they do it. And those unlimited BIOSes with high power limits of 2000 watts are designed specifically for that scenario / use case.


----------



## J7SC

Laithan said:


> Do you know if these replace the existing PCI-e connectors or complement them?


 ...if you mean the 12v PCIe connectors originally on the card, usually the 'FrankenGPU' method replaces rather than supplements them - i.e. similar to below, I have seen this mod done before in person by an XoCer...very tedious :sad-smile


----------



## cisco150

Laithan said:


> From the peanut gallery (I'm sure this has already been discussed many times)
> 
> I admit that I haven't used one of the "2000 Watt" BIOS' and measured how much they can actually "pull" however I've done a _*lot*_ of power testing with Maxwell. Regardless of the GPU on the PCB, there are simply electrical limits. I am not talking about the actual maximum limit of a single 8-pin PCI-e cable _electrically_ (324 Watts I believe) but what I would consider a "safe" wattage to be pulling from a single PCI-e 8-pin is _*200 Watts*_. This is factoring the PCB trace width where the power is ultimately carried. 200W is a lot of power for a single 8-pin and above what most STOCK BIOS of recent generations of GPUs would define (150W was common for a STOCK BIOS 8-pin value).
> 
> I had to struggle to find any test that would come even close to pulling 200W (even testing @ 4K) and the only real way I could get that much power to be pulled was with *furmark* which is of course not even close to real-world. I suspect some LN2/DICE examples w/extreme voltages may reach the power limits. Regardless of who made the GPU the 8-pin PCI-e power cable is an industry standard. The PCI-e slot itself can provide up to 75W of additional power. The traces/components on the PCB itself will of course vary.
> 
> *With a maximum of 200W per 8-pin PCI-e:*
> 8-pin PCI-e cable x 1 + the PCI-e slot itself = 275W of power (safe maximum)
> 8-pin PCI-e cables x 2 + the PCI-e slot itself = 475W of power (safe maximum)
> 8-pin PCI-e cables x 3 + the PCI-e slot itself = 675W of power (safe maximum)
> 
> 
> So let's just say for giggles that we went with the maximum power an 8-pin can electrically carry which is 324 Watts.. then you would have:
> *With a maximum of 324W per 8-pin PCI-e:*
> 8-pin PCI-e cable x 1 + the PCI-e slot itself = 399W of power (electrical maximum)
> 8-pin PCI-e cables x 2 + the PCI-e slot itself = 723W of power (electrical maximum)
> 8-pin PCI-e cables x 3 + the PCI-e slot itself = 1047W of power (electrical maximum)
> 
> So where is this magical 2000W coming from anyway
> It may be defined in the BIOS as 2000W (might as well put 1 million then) but I don't think it is electrically possible by both the physical wires themselves or the traces on the PCB.
> 
> Also, the question of running a GPU 24x7 is not of power it is of _*voltage*_. I would not recommend running that XOC BIOS 24x7 because of the increased voltage (unless you back it down for daily use via curve) not because of the power as it only pulls what it needs when it is needed. The actual power utilized depends on the workload presented to the GPU.
> 
> Personally as long as I could have profiles configured with different voltages for different needs then I would keep the XOC BIOS flashed at all times and just use software to limit the voltage for gaming. When benchmarking then load the max voltage profile


Thanks for the reply. I'm not sure if the voltage can be controlled through MSI or the EVGA scanner. Do you know if this is possible?


----------



## Laithan

J7SC said:


> ...if you mean the 12v PCIe connectors originally on the card, usually the 'FrankenGPU' method replaces rather than supplements them - i.e. similar to below, I have seen this mod done before in person by an XoCer...very tedious :sad-smile


Gotcha... I may still be missing something, because they are using 6-pins, which I realize are almost identical to an 8-pin except for the sense pin and the additional ground, but there is more of a power limit on a 6-pin vs an 8-pin because of that ground... and even if there were (3) 8-pins, that's still only a little over 1000 watts total. Still wondering where 2000 watts is coming from, even with these FrankenGPUs.


----------



## Laithan

cisco150 said:


> Thanks for the reply. I'm not sure if the voltage can be controlled through MSI or the EVGA scanner. Do you know if this is possible?


Yes, the voltage slider can be enabled and you can lower it that way, or you can use the OC curve to specify rough voltages @ certain clocks (Pascal+ only).


----------



## geriatricpollywog

I’ve melted a CableMod 8-pin when pushing about 450-500 watts into a Vega 64, and that is definitely under 324 watts per connector.
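That tracks with the per-pin math: an 8-pin carries 12V on three live pins, so the load per pin climbs fast, and real cables never share current evenly. A rough sketch (assuming ideal, even sharing, which a melted connector clearly didn't have):

```python
# Rough per-pin current estimate for an 8-pin PCIe cable (3 live 12V pins).
# Assumes perfectly even current sharing, which real cables never achieve.
LIVE_PINS = 3
RAIL_V = 12.0

def amps_per_pin(watts: float) -> float:
    """Current each live 12V pin carries at a given draw, ideally shared."""
    return watts / RAIL_V / LIVE_PINS

# The 324W ceiling quoted earlier works out to 9A per pin; pushing ~450W
# through one connector means ~12.5A per pin even before uneven sharing.
```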


----------



## J7SC

Laithan said:


> Gotcha... I still may be missing something because they are using 6-pins, which I realize is almost identical to an 8-pin except for the sense pin and the additional ground but there is more of a power limit on a 6-pin vs an 8-pin because of that ground... and even if there were (3) 8-pins, that's still only over 1000watts total, still wondering where 2000watts is coming from even with these FrankenGPUs


 
...yeah, up to 4 x 6-pin (3x vcore, 1x vdimm). This was the first revision of EVGA's EPower board (FYI, Gigabyte also offered something similar).

Bit-tech.net (2012) wrote: _"According to EVGA's figures, the EPower board can run at a VCORE adjustment range of 800mV to 2000mV at a current of up to 400A, or a VDIMM adjustment range of 1000mV to 5000mV at up to 80A. To put those figures into context: EVGA recommends a 600W power supply with 42A on the 12V rail per EPower board used in the system. Before you get excited about the potential of the board, there's a catch: the system is hardly plug-and-play. Those who have bought the board are expected to solder it directly to the motherboard or graphics card of their choice, replacing the existing VRMs with a connection to the EPower board. With the installation instructions (PDF) advising users to cut PCB traces to disable on-board regulator modules, it's a hair-raising - and warranty-trashing - experience."_


----------



## Imprezzion

Well, I finally found the first game capable of hitting the 380w EVGA FTW3 limits on my Gainward lol. I run 2115-2085 depending on temps at 1.093v with a custom curve with 7800 memory and Black Desert Online on the "screenshot only" Ultra mode actually manages to tap the power limit from time to time. It usually runs 110-115% but some areas hit the limit slightly going as high as 131% and then throttling to 2025-2040 @ 1.043v shortly.

I mean, superposition always hit the limit for me but 3dmark or other games didn't so I figured I was fine but..

Well, might switch back to HOF XOC 1.125v 2145-2115Mhz again lol.


----------



## iamjanco

J7SC said:


> ...yeah, up to 4 x 6 pin (3x vcore, 1x vdimm). This was the first revision of EVGA's EPower board (fyi, Gigabyte also offered s.th. similar).
> 
> Bit-tech.net (2012) wrote. _"According to EVGA's figures, the EPower board can run at a VCORE adjustment range of 800mV to 2000mV at a current of up to 400A, or a VDIMM adjustment range of 1000mV to 5000mV at up to 80A. To put those figures into context: EVGA recommends a 600W power supply with 42A on the 12V rail per EPower board used in the system. Before you get excited about the potential of the board, there's a catch: the system is hardly plug-and-play. Those who have bought the board are expected to solder it directly to the motherboard or graphics card of their choice, replacing the existing VRMs with a connection to the EPower board. With the installation instructions (PDF) advising users to cut PCB traces to disable on-board regulator modules, it's a hair-raising - and warranty-trashing - experience."_


FWIW, the specs cited above from Bit-tech are actually for the previous version of the board. Specs for the *current version of the board* follow:



Code:


VMEM Output: Voltage adjustment range 600mV to 2300mV. Rated capacity is 80A. Maximum peak capacity - 90A at 1.9V output voltage.

VCORE: Voltage adjustment range is 600mV to 2000mV. Rated capacity is 600A. Maximum peak capacity - 620A at 1.85V output voltage.

The user's guide for the board can be *downloaded here* in PDF format.


----------



## keikei

I'm really looking forward to this game. https://wccftech.com/nvidia-geforce-rtx-death-stranding-pc-dlss-2-0-graphics-performance-60-fps-4k/


----------



## geriatricpollywog

It might be an RMA'd B-stock, but it's a Kingpin all right! +1500mhz on the memory. I can probably go higher but don't want to blow the memory controller. The core does not like more than 2150mhz, but I'll play around with the voltage and see if I can get 2200mhz.


----------



## SuperMumrik

Imprezzion said:


> Well, might switch back to HOF XOC 1.125v 2145-2115Mhz again lol.



Currently using this BIOS myself, but I wish someone would make a script to run at Afterburner startup. I tend to forget to set my OC.
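For what it's worth, many Afterburner builds accept a `-ProfileN` command-line switch, so a tiny script dropped into Task Scheduler can re-apply a saved profile at logon. A sketch (the install path and the switch itself are assumptions; check your Afterburner version's documentation):

```python
import subprocess
from pathlib import Path

# Assumptions: the default install path below and the -ProfileN switch both
# vary by Afterburner version - verify them against your own install.
AFTERBURNER = Path(r"C:\Program Files (x86)\MSI Afterburner\MSIAfterburner.exe")

def profile_command(profile: int) -> list[str]:
    """Build the command line that applies saved Afterburner profile 1-5."""
    if not 1 <= profile <= 5:
        raise ValueError("Afterburner stores five profiles")
    return [str(AFTERBURNER), f"-Profile{profile}"]

def apply_profile(profile: int) -> None:
    """Launch Afterburner with the chosen profile (Windows only)."""
    subprocess.run(profile_command(profile), check=False)

# Point a Task Scheduler "At log on" task at a script calling apply_profile(1)
# so the OC is re-applied automatically after every reboot.
```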


----------



## Medizinmann

Imprezzion said:


> Well, I finally found the first game capable of hitting the 380w EVGA FTW3 limits on my Gainward lol. I run 2115-2085 depending on temps at 1.093v with a custom curve with 7800 memory and Black Desert Online on the "screenshot only" Ultra mode actually manages to tap the power limit from time to time. It usually runs 110-115% but some areas hit the limit slightly going as high as 131% and then throttling to 2025-2040 @ 1.043v shortly.
> 
> I mean, superposition always hit the limit for me but 3dmark or other games didn't so I figured I was fine but..
> 
> Well, might switch back to HOF XOC 1.125v 2145-2115Mhz again lol.


For me, Gears 5 hits this limit frequently - if I can believe HWiNFO, even a little above it (387W) several times; in benchmarks I have seen 393W, and I have seen 330W sustained load over periods of hours while gaming (Gears 5) at 1440p with settings maxed out!
Gears 5 also loads the CPU heavily - overall power consumption is 500-550W and it's getting toasty in the room... :thumb:

This is with the 380W KFA2/Galax BIOS on my Palit Gaming Pro OC 2080 Ti with an Alphacool GPX Pro.

Best regards,
Medizinmann


----------



## Imprezzion

SuperMumrik said:


> Currently using this bios myself, but I wish someone would make a script to run with afterburner startup. I tend to forget to set my OC


That was such a pain lol. The first few times I got a BSOD I didn't know that MSI AB caused it lol. Freaked me out. Also, the 34% minimum fan speed isn't really low enough for me, as my card's controller runs my radiator fans - I soldered the PWM and RPM sense wires to a PWM-controlled hub fed by SATA power to control my radiator fans. 34%, however, is about 920-940RPM, which is slightly more audible than I'd like at idle. I usually run 22% on EVGA @ 700-ish RPM.

As for power draw. Yeah BDO sits around 350-360w @ 47-48c load. It spiked to 395w several times tho. 

With the HOF XOC I see about 370-380W constant, with peaks going well over 420W. Temps don't really get any higher as I just compensate with slightly higher fan speeds. I've seen it hit 50C once, but usually it's the same 48C load with 10-15% more fan speed (65% instead of 52%) on the radiator fans. On the full 100% blast setting the card runs 41-42C but doesn't gain any stability or higher clocks, so I just let it run around 48-50C a bit more quietly.


----------



## rustyk

kithylin said:


> A lot of people on this website are trying to hit world records with time spy and other tests with overclocks. Did you forget that's a thing? Hwbot.org is still there. That's a big part of this website and the community around here.


I'm well aware of that thanks, I mean the clue is even in the site name. 

I'm assuming you didn't actually look at the start of the conversation, as they were originally asking if the 2kW Kingpin BIOS was safe for 24/7 usage.
To me, that doesn't sound like the kind of question someone trying to set a world record overclock would be asking, it sounds like they just think the card will run 'faster' with more power.


----------



## SuperMumrik

Imprezzion said:


> That was such a pain lol. First few times I got a BSOD I didn't know that MSI AB caused it lol. Freaked me out. Also, 34% minimum fanspeed isn't really low enough for me as my cards controller runs my radiator fans as I soldered the PWM and RPM sense wires to a PWM controlled hub fed by SATA power to control my radiator fans. 34% however is about 920-940RPM which is slightly more audible then I'd like idle. I usually run 22% on EVGA @ 700 ish RPM.
> 
> As for power draw. Yeah BDO sits around 350-360w @ 47-48c load. It spiked to 395w several times tho.
> 
> With the HOF XOC I see about 370-380w constant with peaks going well over 420w. Temps don't really get any higher as I just compensate with slightly higher fanspeeds. I've seen it hit 50c once but usually the same 48c load but 10-15% more fanspeed (65% instead of 52%) on the radiator fans. On the full 100% blast setting the card runs 41-42c but doesn't gain any stability or higher clocks so I just let it run around 48-50c a but more quietly.



I knew of this before I flashed, but I just wanted a new personal best in Timespy :thumb:
I hope my next card is better binned. 2160-2130 seems to be the max OC on water, and it doesn't really scale with chilled water.

Saw peaks upwards of 800W from the wall during the Timespy 2nd test (including a few watts from the monitor), so no wonder it throttles like crazy on the 380W Galax BIOS.


----------



## Imprezzion

SuperMumrik said:


> I knew of this before I flashed, but I just wanted a new personal best i Timespy :thumb:
> I hope my next card is better binned. 2160-2130 seems to be max oc on water and it dosen't really scale with chilled water.
> 
> Saw peaks upwards of 800W from the wall during Timespy 2nd test (including a few Watts from the monitor) so no wonder it throttles like crazy on the 380W galax bios


Yeah I noticed with a hungry CPU and way too many RGB fans and strips the XOC BIOS completely overpowered even my Focus Plus Gold 750w. I bought a Prime Ultra 1000w as that fit my Cablemod Focus Plus cables lol.

It shut down on me twice in benches with the 9900K on 5.3Ghz HT on and the card at 2190-2160. It's a shame I can bench all day on 2190 but it won't stay stable in games even on 2160. It runs for a good 10-15 minutes, then throws a random DirectX error. Same with the memory. It's Samsung but not a very good bin. It does 7800 fine 24/7 and benches 8100 fine but anything over 7800 gives random black textures or crash to desktops during games.

I'm happy with it tho. It was just a cheap secondhand card and the Phoenix GS isn't a very high-end or highly binned card anyway so. I'm perfectly happy at 2085/7800 all day stable.


----------



## kithylin

rustyk said:


> I'm well aware of that thanks, I mean the clue is even in the site name.
> 
> I'm assuming you didn't actually look at the start of the conversation, as they were originally asking if the 2kw Kingpin BIOS was safe for 24/7 usage.
> To me, that doesn't sound like the kind of question someone trying to set a world record overclock would be asking, it sounds like they just think the card will run 'faster' with more power.


No I wasn't discussing the XOC bios or anything else other than the exact post that I quoted by you when I replied, that's all. That's what I quoted so that's what I was replying to.


----------



## rustyk

kithylin said:


> No I wasn't discussing the XOC bios or anything else other than the exact post that I quoted by you when I replied, that's all. That's what I quoted so that's what I was replying to.


Ok, that makes perfect sense, it's just that your reply seemed sarcastic to me, given that I was replying to someone who was asking about 24/7 clocks rather than absolute overclocking.

I'm pretty much an amateur but I've spent time 'tweaking' and running benchmarks, trying to improve my score, so I know it's a thing and it's fun too. I'm just a bit suspicious (rightly or wrongly) when people appear and start asking what the 'best bios' is, or asking for advice without stating what their objectives are.

I would try to understand what impacts performance, understand what was holding me back, research the various options, then ask questions, but that's just me.


----------



## 99999User

Could someone explain the difference between regular MSI Afterburner and the Extreme edition? Is ABX needed to max the voltage limit, or is regular good enough?


----------



## keikei

Max settings with RT on using 2080ti @ 1080p/30fps. No DLSS2.0 used.


----------



## kithylin

keikei said:


> Max settings with RT on using 2080ti @ 1080p/30fps. No DLSS2.0 used.
> 
> 
> https://www.youtube.com/watch?v=-SLjzncqf24


This video is so confusing. The commentator doesn't tell us exactly which machine they're running it on. I didn't hear or find any information stating it was running on a 2080 Ti, for example - where did you find that information? Also, the entire captured video shows "LR", "LRB" and such down in the bottom left and bottom right corners, indicating some sort of controller was used. And near the end of the video the commentator talks about this game running with ray tracing on consoles like the PS5 too.


----------



## ZealotKi11er

keikei said:


> Max settings with RT on using 2080ti @ 1080p/30fps. No DLSS2.0 used.
> 
> 
> https://www.youtube.com/watch?v=-SLjzncqf24


Are you sure it's 1080p? I was under the impression it's 4K/30.


----------



## keikei

ZealotKi11er said:


> Are you sure its 1080p? I was under the impression, its 4K/30.



He's on the res screen quickly, but it's there. Ubi does have time to optimize. 2021, right?


----------



## kithylin

keikei said:


> He's on the res screen quick, but its there. Ubi does have time to optimize. 2021 right?


Aha! Nice catch! I didn't see that in the video but I was skipping a little bit.


----------



## ZealotKi11er

1080p/30 is pathetic. They will just use DLSS 2.0 to get "resolution" back.


----------



## JackCY

ZealotKi11er said:


> 1080p/30 is pathetic. They will just use DLSS 2.0 to get "resolution" back.


How about turning off the ingame "FPS limit" set to 30?


----------



## keikei

kithylin said:


> Aha! Nice catch! I didn't see that in the video but I was skipping a little bit.



I'm not sure what Ass Cred uses for its game engine, but a Nov release is coming up soon. DF hasn't released a vid yet, so I'm looking forward to seeing how well the game runs. I'm getting heavy Witcher vibes from the game.




ZealotKi11er said:


> 1080p/30 is pathetic. They will just use DLSS 2.0 to get "resolution" back.



It's better than version 1.0, but it might be necessary. We want dem frames + RT on. I'm fine sacrificing some detail. Maybe it was the vid captures (I saw a handful of them), but the game in general didn't seem to use super high-res textures whatsoever.


----------



## ZealotKi11er

These settings are probably in place for streaming only. People that got to play the game did so over Parsec.


----------



## tps3443

Hey everyone I purchased a used Gigabyte 2080Ti Windforce on eBay. This video card was previously taken apart and reassembled by a child I suppose, and several of the screws are missing. Back plate screws, I/O shield bracket screws etc. etc. This GPU works, and is 100% functional as is. But, I would like to get these screws replaced. 

How do I find out what size they even are? I do not have digital calipers. Could I just order all of the sizes the card uses? How do I find that out? 

Any help is much appreciated.


----------



## managerman

Ok...A long story as short as I can make it...hopefully I don't have a dying card...but I thought I would get the opinion of the experts here.

Original system: 

2990WX with MSI MEG Creation x399 and (2) 2080TI FE in SLI....With custom watercooling loop....and 1600w EVGA Titanium power supply...

The original 2080 Ti's both "code 43'd" within two weeks of my owning them. Nvidia replaced them and all was good... I flashed the Galax 380W BIOS... all good. I have had no issues with the system or the cards until last week...

I upgraded my system to a 3990x with the Asus Zenith II Extreme Alpha MB....Fresh windows 10 install with latest nvidia drivers.

Current Problem: 

In SLI while running Timespy Extreme, Far Cry 5 benchmark, Shadow of the Tomb Raider benchmark, etc....shortly into the benchmark the system will black screen and reboot. This is with stock core and memory on both cards...

I also have had one of the cards, intermittently give me a Code 43....If I reinstall the drivers that goes away...but sometimes will come back...
I am getting no other obvious graphical anomalies...i.e. artifacting, stuttering, etc.

Next steps..

1. I am going to test each card individually to determine if I have the same issue when running them by themselves...
2. I may flash the cards back to the original bios and retest
3. Reinstall windows

My thoughts on the issue...

1. Bad card...plain and simple
2. Issue with the new motherboard (PCI slot, etc..)

Any thoughts would be appreciated...

Thanks,

-M


----------



## J7SC

managerman said:


> Ok...A long story as short as I can make it...hopefully I don't have a dying card...but I thought I would get the opinion of the experts here.
> 
> Original system:
> 
> 2990WX with MSI MEG Creation x399 and (2) 2080TI FE in SLI....With custom watercooling loop....and 1600w EVGA Titanium power supply...
> 
> The original 2080 Ti's both "code 43'd" within two weeks of my owning them. Nvidia replaced them and all was good...I flashed the Galax 380W bios...all good. I have had no issues with the system or the cards until last week...
> 
> I upgraded my system to a 3990x with the Asus Zenith II Extreme Alpha MB....Fresh windows 10 install with latest nvidia drivers.
> 
> Current Problem:
> 
> In SLI while running Timespy Extreme, Far Cry 5 benchmark, Shadow of the Tomb Raider benchmark, etc....shortly into the benchmark the system will black screen and reboot. This is with stock core and memory on both cards...
> 
> I also have had one of the cards, intermittently give me a Code 43....If I reinstall the drivers that goes away...but sometimes will come back...
> I am getting no other obvious graphical anomalies...i.e. artifacting, stuttering, etc.
> 
> Next steps..
> 
> 1. I am going to test each card individually to determine if I have the same issue when running them by themselves...
> 2. I may flash the cards back to the original bios and retest
> 3. Reinstall windows
> 
> My thoughts on the issue...
> 
> 1. Bad card...plain and simple
> 2. Issue with the new motherboard (PCI slot, etc..)
> 
> Any thoughts would be appreciated...
> 
> Thanks,
> 
> -M


...I think you're on the right track w/ isolating which card it possibly is, and also flashing back to the original vBios. A quick question: Is the 3990X overclocked (either manually or by PBO) ? <> I'm wondering about PSU limits. I run a x399 Creation / 2950x combo powered by an Antec HPC 1300w with 2x Aorus Xtr WB (factory water-blocked) cards which with their stock bios pull a combined 760w @ peak. Add in a big cooling system (5x thick 360s, 4 pumps, 20x 120mm fans) which probably take at least another 100w or so, and the oc'ed 2950x (4.325 all core) at < 280w. Now, from what I understand, an oc'ed 3990x 64core can easily hit 500w or so when oc'ed. Your 1600w should be able to handle most of it, but you never know.

...to proceed, follow your plan to flash the GPUs back to their original bios, set the 3990x to 'stock' and test each card out by itself, after a fresh driver install (after DDU uninstall). That should tell you whether either of the GPUs are the source of the trouble. Finally, I have had issues with black screening (and sometimes code 43) on other machines before, but it was never the same issue (even had a HDMI cable that seemed to work most of the time but sometimes not). Anyway, Good luck...
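The PSU headroom estimate above can be tallied quickly; the figures below are the rough ones mentioned in this post, not measured draws:

```python
# Quick tally of the system-power estimate above; numbers are the rough
# figures mentioned in this post, not measured draws.
PSU_W = 1600

draws_w = {
    "2x Aorus Xtreme WB GPUs (peak)": 760,
    "pumps + 20x 120mm fans": 100,
    "oc'd 2950X": 280,
}

total_w = sum(draws_w.values())      # 1140W
headroom_w = PSU_W - total_w         # 460W
# Swapping in an oc'd 3990X at ~500W instead of the 2950X's 280W would eat
# roughly half of that remaining headroom.
```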


----------



## sultanofswing

managerman said:


> Ok...A long story as short as I can make it...hopefully I don't have a dying card...but I thought I would get the opinion of the experts here.
> 
> Original system:
> 
> 2990WX with MSI MEG Creation x399 and (2) 2080TI FE in SLI....With custom watercooling loop....and 1600w EVGA Titanium power supply...
> 
> The original 2080 Ti's both "code 43'd" within two weeks of owning them. NVIDIA replaced them and all was good... I flashed the Galax 380W BIOS... all good. I have had no issues with the system or the cards until last week...
> 
> I upgraded my system to a 3990x with the Asus Zenith II Extreme Alpha MB....Fresh windows 10 install with latest nvidia drivers.
> 
> Current Problem:
> 
> In SLI while running Timespy Extreme, Far Cry 5 benchmark, Shadow of the Tomb Raider benchmark, etc....shortly into the benchmark the system will black screen and reboot. This is with stock core and memory on both cards...
> 
> I also have had one of the cards, intermittently give me a Code 43....If I reinstall the drivers that goes away...but sometimes will come back...
> I am getting no other obvious graphical anomalies... i.e. artifacting, stuttering, etc.
> 
> Next steps..
> 
> 1. I am going to test each card individually to determine if I have the same issue when running them by themselves...
> 2. I may flash the cards back to the original bios and retest
> 3. Reinstall windows
> 
> My thoughts on the issue...
> 
> 1. Bad card...plain and simple
> 2. Issue with the new motherboard (PCI slot, etc..)
> 
> Any thoughts would be appreciated...
> 
> Thanks,
> 
> -M


I'd start looking at a potential PSU issue. Highly unlikely but it's always possible.

Were the cards run overclocked 24/7?


----------



## Imprezzion

Just test the cards individually. Unclip it from the PCI-E slot and remove the PSU cables, but keep it in the loop for ease of testing.


----------



## managerman

sultanofswing said:


> I'd start looking at a potential PSU issue. Highly unlikely but it's always possible.
> 
> Were the cards run overclocked 24/7?


Could be a PSU issue... but I consider that a lower possibility.

No... the cards were only run a couple of hours a week.



Imprezzion said:


> Just test the cards individually. Unclip it from the PCI-E slot and remove the PSU cables, but keep it in the loop for ease of testing.


That's the plan...



J7SC said:


> ...I think you're on the right track with isolating which card it possibly is, and also flashing back to the original vBIOS. A quick question: is the 3990X overclocked (either manually or by PBO)? I'm wondering about PSU limits. I run an x399 Creation / 2950X combo powered by an Antec HPC 1300W with 2x Aorus Xtr WB (factory water-blocked) cards, which with their stock BIOS pull a combined 760W at peak. Add in a big cooling system (5x thick 360s, 4 pumps, 20x 120mm fans), which probably takes at least another 100W or so, and the OC'd 2950X (4.325 all-core) at under 280W. Now, from what I understand, an OC'd 3990X 64-core can easily hit 500W or so. Your 1600W should be able to handle most of it, but you never know.
> 
> ...to proceed, follow your plan: flash the GPUs back to their original BIOS, set the 3990X to stock, and test each card out by itself after a fresh driver install (following a DDU uninstall). That should tell you whether either of the GPUs is the source of the trouble. Finally, I have had issues with black-screening (and sometimes Code 43) on other machines before, but it was never the same root cause (I even had an HDMI cable that seemed to work most of the time, but sometimes not). Anyway, good luck...


Thanks for the responses, they are much appreciated... The more I think about it, it could be that the PCI-E riser cables are causing the problem, since the motherboard is PCI-E 4.0 and the cables are PCI-E 3.0... who knows. I was also able to borrow another 2080 Ti (Zotac), so that should help in the diagnosing process.

-M


----------



## kx11

Death Stranding performance @ 8k !!!


----------



## fluidzoverclock

When I'm browsing in Firefox, my 2080 Ti boosts from 300MHz/405MHz to 1350MHz/7000MHz, even on a page with no images/videos, and this causes the screen to stutter.
It is worse at 144Hz, but still exists at 60Hz.
If I don't touch the mouse and leave Firefox idle for a few seconds, then use the scroll wheel, the GPU boosts from 300/405 to 1350/7000 and the webpage stutters.
I tested using onboard graphics; it does not happen with my Intel integrated GPU (the iGPU never boosts, therefore no stuttering).
When I set Firefox to maximum performance in the NVCP, that stops the stuttering. If I set Firefox to Optimal or Adaptive (the default), the stutter returns.
It happens regardless of the NVIDIA driver installed; I tested at least 5 drivers. I also tested with two different monitors, one 60Hz and one 144Hz.

Bit of an odd issue and difficult to explain to someone who has never experienced it. It seems to be tied to the 2080ti boosting from its power saving state.

Has anyone run into the same issue?

Related threads - 

http://forums.mozillazine.org/viewtopic.php?f=38&t=3050033
https://translate.google.com/transl...ar-nar-jag-borjar-scrolla&prev=search&pto=aue

The video below shows the clocks boosting randomly when I scroll.


----------



## Laithan

fluidzoverclock said:


> When I'm browsing in Firefox, my 2080 Ti boosts from 300MHz/405MHz to 1350MHz/7000MHz, even on a page with no images/videos, and this causes the screen to stutter.
> It is worse at 144Hz, but still exists at 60Hz.
> If I don't touch the mouse and leave Firefox idle for a few seconds, then use the scroll wheel, the GPU boosts from 300/405 to 1350/7000 and the webpage stutters.
> I tested using onboard graphics; it does not happen with my Intel integrated GPU (the iGPU never boosts, therefore no stuttering).
> When I set Firefox to maximum performance in the NVCP, that stops the stuttering. If I set Firefox to Optimal or Adaptive (the default), the stutter returns.
> It happens regardless of the NVIDIA driver installed; I tested at least 5 drivers. I also tested with two different monitors, one 60Hz and one 144Hz.
> 
> Bit of an odd issue and difficult to explain to someone who has never experienced it. It seems to be tied to the 2080ti boosting from its power saving state.
> 
> Has anyone run into the same issue?
> 
> Related threads -
> 
> http://forums.mozillazine.org/viewtopic.php?f=38&t=3050033
> https://translate.google.com/transl...ar-nar-jag-borjar-scrolla&prev=search&pto=aue
> 
> The video below shows the clocks boosting randomly when I scroll.
> 
> https://youtu.be/zt9AHDFv0-s


I'm not surprised that a browser triggers the GPU to boost when it is GPU accelerated. This is one of the reasons for power management settings. Seems normal to me. Where is the stuttering? You'd need to show frametimes I think.

Any issues in games/benchmarks?


----------



## J7SC

Laithan said:


> I'm not surprised that a browser triggers the GPU to boost when it is GPU accelerated. This is one of the reasons for power management settings. Seems normal to me. Where is the stuttering? You'd need to show frametimes I think.
> 
> Any issues in games/benchmarks?


 
...^^^yeah that, and afaik, Google Chrome in particular (comparing my Chrome and Firefox) :2cents:


----------



## geriatricpollywog

This simple trick brought my Port Royal score to over 11,000!

https://www.3dmark.com/pr/285857


----------



## tcclaviger

So much FUD lately in here.

Wires will not suddenly explode if you pull past 325 watts on an 8-pin; the recommendation exists to keep wire temps down. I've pulled 750W over a 6-pin and an 8-pin... Use good wire, use a good PSU, don't benchmark in Death Valley...

You can ABSOLUTELY pull over 600 watts on a 1080 Ti or 2080 Ti running games and benchmarks.

If you're not able to achieve that except in FurMark, you're simply not OCing enough. Big power isn't some achievement; it's a side effect, nothing more.

Stop with the "200 watts per 8 pin" bull****.

Yes it's bull****.

No your Maxwell era testing is not relevant to Turing.

XOC bios files are totally safe for 24/7 if you understand what you are doing.

Yes there is a benefit.

No, you probably shouldn't be using it if you have to ask if it's safe.


----------



## JackCY

This is entirely normal. A lot of modern apps such as web browsers use 3D hardware-accelerated modes to render their UI, let alone browsers that support WebGL etc.; you can run a whole 3D engine/game in a browser. Sometimes the NV driver is idiotic, and its power-saving algorithms that determine core and memory clocks are over-aggressive in saving power, which leads to poor performance with dropped frames in apps or video playback. You have to select the correct power-saving mode or alter the load in such a way that it stays in a better mode. Blame NV; this has always been an issue.


----------



## Laithan

tcclaviger said:


> So much FUD lately in here.
> 
> Wires will not suddenly explode if you pull past 325 watts on an 8-pin; the recommendation exists to keep wire temps down.


Nope, but it certainly can catch on fire... (this does not mean it will; we're talking about exceeding the WIRE specification here).
And how does one keep wire temps down? By _*not*_ exceeding the wire specification (which is 324W maximum for a single 8-pin PCI-e).

Have a read....(if anyone knows, it is the miners)
https://www.gpuminingresources.com/p/psu-cables.html




tcclaviger said:


> You can ABSOLUTELY pull over 600 watts on a 1080 Ti or 2080 Ti running games and benchmarks.


I don't think anyone said that you couldn't? :thinking:
You never exceeded the _*wire specification*_ when doing that... 

324W x 2 = 648W
+ 
75W PCI-e = 723W

So technically 723W is the maximum SAFE power according to WIRE SPECIFICATION ONLY (and that's why you and others were able to pull over 600W). Did you take risks when doing so? I cannot answer that question because I don't know how wide the traces are on the PCBs in use...

What I was asking about was in reference to the 2K Watt BIOS "where is the *2000W* coming from" not "Where is the 600W coming from"....

I mentioned that the GPU is irrelevant when strictly referring to WIRE SPECIFICATION. PCI-e is a standard that isn't related to any GPU architecture.

*WIRE SPECIFICATIONS (16 AWG)* (I am not the creator of the specifications, these are established facts):
Each 12V pin on the 6-pin connector (there are 2) is rated for 9A. The 8-pin connector has three 12V pins; strictly, the spec drops to 8A per pin when all 3 are used, but most people stick with 9A each anyway. That is where the _*324W*_ comes from: 3 pins at 9A each on a 12V rail. You can calculate it yourself. This is the MAXIMUM SAFE RATING from the wire manufacturer when using 16 AWG short runs.
https://www.rapidtables.com/calc/electric/Amp_to_Watt_Calculator.html
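The arithmetic in this post is easy to sanity-check yourself; a minimal sketch, using only the figures stated above (16 AWG, 12V rail, 9A per 12V pin, 75W slot):

```python
# Rough check of the wire-spec figures discussed above.
# Assumes 16 AWG short runs at 9 A per 12V pin, as stated in the post.
VOLTS = 12.0
AMPS_PER_PIN = 9.0

def connector_watts(num_12v_pins: int) -> float:
    """Maximum wire-spec power for one PCIe auxiliary power connector."""
    return num_12v_pins * AMPS_PER_PIN * VOLTS

eight_pin = connector_watts(3)     # 8-pin has three 12V pins -> 324 W
six_pin = connector_watts(2)       # 6-pin has two 12V pins -> 216 W
slot = 75.0                        # PCIe slot maximum

total_2x8 = 2 * eight_pin + slot   # typical 2080 Ti: 2x 8-pin + slot
print(eight_pin, total_2x8)        # 324.0 723.0
```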




tcclaviger said:


> Stop with the "200 watts per 8 pin" bull****.
> 
> Yes it's bull****.


FYA, it's higher than any STOCK BIOS, even today. 150W/175W is typically the maximum defined in a GPU's BIOS (this applies to all modern GPUs) for a single 8-pin PCI-e connector. I didn't make this up; this is fact, so the context (electrical standards *vs* GPU BIOS limits) must be specifically pointed out. 324W is the electrical maximum of the wire specification @ 16AWG/12V.

For example: my EVGA 2080 Ti FTW Ultra BIOS = 380W... 75W for the PCI-e slot maximum and then 155W for each of the (2) 8-pin PCI-e connectors. This is considered one of the "higher" STOCK BIOSes in terms of power for the 2080 Ti, and it is still only using ~150W maximum per 8-pin PCI-e power connector... so the STOCK BIOSes haven't changed since Maxwell with regard to maximum PCI-e power values.

What I said was that 200W per PCI-e 8-pin is a safe maximum _*for a GPU*_ (not to be confused with the maximum electrical wire spec), and that it is more than what a STOCK BIOS defines. If I could modify the BIOS this way, it would give me 475W of power, not the 380W I am currently limited to, and I would be comfortable running that 24x7.

Again, 200W per 8-pin PCI-e is _*higher*_ than what STOCK BIOSes define for a single 8-pin PCI-e cable. It is not the max safe power for the WIRE SPECIFICATION itself, but I have found that 200W is a SAFE MAXIMUM for a GPU. I see no reason why 200W wouldn't be safe for Pascal+ either, unless they reduced the trace width. I did mention that _*PCB trace width*_ can be a limiting factor, and it is LIKELY WHY GPU manufacturers limit each 8-pin PCI-e connector to 150W/175W (that, or the lack of a standard among PSUs). I cannot comment on the power limits of the traces other than knowing that limits exist there. Can some GPUs' PCBs handle more? Of course; I think the custom PCBs must, especially the ones typically used for extreme overclocking. But EVERYONE is limited by WIRE specs, regardless of the GPU used.

In the case of this 2000W BIOS, each PCI-e 8-pin would have to be allocated 962W of power each + the 75W from the slot... 
^ *This* is where the bull**** is: 723W should be the maximum "safe" power before exceeding the electrical specification (and the PCB trace specification could be exceeded even sooner), and of course it should never, ever be for 24x7 use...
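A quick back-of-the-envelope check makes the point; this sketch only re-uses the numbers from this post (75W slot, two 8-pins, 324W wire spec per 8-pin):

```python
# Sanity check of the "2000 W BIOS" claim discussed above.
# Assumes the budget splits as: 75 W from the slot + the rest evenly
# over two 8-pin connectors.
BIOS_LIMIT = 2000.0
SLOT = 75.0
WIRE_SPEC_8PIN = 324.0  # 3 pins x 9 A x 12 V (16 AWG)

per_connector = (BIOS_LIMIT - SLOT) / 2
print(per_connector)                   # 962.5 W per 8-pin
print(per_connector / WIRE_SPEC_8PIN)  # roughly 3x over the wire spec
```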


----------



## tps3443

Hey guys, I am planning to start a pretty cool project in the next few posts, one that no one has attempted before!

I just purchased an Aorus Gaming Box eGPU 2080 Ti. I plan to pull the 2080 Ti from this box and run it in my desktop!

These boxes are $1,500 new, and out of stock. I found a deal on one for $650.

The 2080 Ti inside is a Waterforce with a 240mm radiator attached. I plan to pull everything and run it all in my PC!

Here are a few screenshots of the box, which Der8auer pulled apart at CES just to look inside!

I am going to shunt mod this video card too.

This Gaming Box will be here in a few days, and I am going to start the teardown on it.

This will be fun! I just need to find an I/O bracket for this GPU.


----------



## ht_addict

For those with a Gigabyte Aorus 2080 Ti 11G WB, what is the best BIOS to flash for max performance? Thanks.


----------



## J7SC

ht_addict said:


> For those with a Gigabyte Aorus 2080 Ti 11GB WB, what is the best BIOS to flash for max performance? Thanks.


 
...I've got two of those (Aorus 2080 Ti, factory water-blocked) and I just kept the stock BIOS, which, according to GPU-Z, peaks at 380W each anyhow. On the 'out there' XOC BIOS front, others with these cards reported that the Strix XOC 'worked', albeit with some I/O not quite functioning right but still usable overall. The KingPin XOC is apparently similar when applied to this Aorus model.


----------



## 99999User

I would avoid XOC; my Gigabyte 2080 Ti went dead after flashing the KingPin XOC... from memory, any non-XOC BIOS worked.


----------



## mgkhn

After my FE died, I bought the cheapest Ti, but it's the non-A version, locked to 280W. It's a reference PCB running on a custom loop, so temps don't matter. Can I flash it to an A-chip BIOS with a higher power limit (like EVGA or KFA2)?


----------



## Imprezzion

Nope, A and non-A aren't cross-flashable.

I got a bit of an issue with my 2080 Ti.

With the EVGA FTW3 Ultra BIOS and a custom curve (+150 core, then further tweaked for 2085MHz @ 1.093v @ 48C load), and with Chrome using HW acceleration, it will often sit at 1350MHz core @ 0.762v for long periods when watching videos. The card cannot handle that and crashes the driver every so often, resulting in solid green video on YouTube or even a short black screen on both my monitors before the driver recovers.

This does not happen with HW acceleration disabled, as it doesn't clock up then. Is it somehow possible to edit the curve in such a way that it boosts normally to 2085 @ 1.093v in games but also gives more voltage at 1350MHz? I tried a few times, but it either locks me out of 1.093v or just won't go past 1350.


----------



## Martin778

mgkhn said:


> After my FE died, I bought the cheapest Ti, but it's the non-A version, locked to 280W. It's a reference PCB running on a custom loop, so temps don't matter. Can I flash it to an A-chip BIOS with a higher power limit (like EVGA or KFA2)?


What clocks do you guys get with these modded BIOSes? I doubt it's worth the risk and effort; to me it feels like these cards are limited more by thermals than by the power limit. My original 2080 Ti KingPin can run 2100-2115MHz all day, but if it gets above 50°C it will start dropping clocks. At 65°C you'll probably be down 50-75MHz anyway. I think it's more worthwhile to slap an AiO kit on it than mod the BIOS, as you'll be fighting the temperature-vs-clock algorithm anyway.


----------



## kithylin

Martin778 said:


> What clocks do you guys get with these modded BIOSes? I doubt it's worth the risk and effort; to me it feels like these cards are limited more by thermals than by the power limit. My original 2080 Ti KingPin can run 2100-2115MHz all day, but if it gets above 50°C it will start dropping clocks. At 65°C you'll probably be down 50-75MHz anyway. I think it's more worthwhile to slap an AiO kit on it than mod the BIOS, as you'll be fighting the temperature-vs-clock algorithm anyway.


Just so you are aware: no one is "modding" a BIOS for any RTX 2080 Ti. We haven't been able to manually modify the BIOS on our video cards since Maxwell. With the 2080 Ti, people are just cross-flashing a different vendor's BIOS onto their cards.


----------



## Martin778

Modded as in, crossflashed indeed.


----------



## geriatricpollywog

Imprezzion said:


> Nope, A and non-A aren't cross-flashable.
> 
> I got a bit of an issue with my 2080 Ti.
> 
> With the EVGA FTW3 Ultra BIOS and a custom curve (+150 core, then further tweaked for 2085MHz @ 1.093v @ 48C load), and with Chrome using HW acceleration, it will often sit at 1350MHz core @ 0.762v for long periods when watching videos. The card cannot handle that and crashes the driver every so often, resulting in solid green video on YouTube or even a short black screen on both my monitors before the driver recovers.
> 
> This does not happen with HW acceleration disabled, as it doesn't clock up then. Is it somehow possible to edit the curve in such a way that it boosts normally to 2085 @ 1.093v in games but also gives more voltage at 1350MHz? I tried a few times, but it either locks me out of 1.093v or just won't go past 1350.


I'd suggest running OctaneBench. It consistently crashes at an unstable core or memory speed. Run it over and over, downclocking the core until it passes, then do the same for the memory. This should completely eliminate artifacts and instability.


----------



## Baasha

Looks like the Asus RoG Strix 2080 Ti GPUs have beefy VRMs and cooling.

What is the 'best' BIOS to flash to get the most performance? At stock voltage, the max I can go is +100 on the core and +1300 on the mem; it works well, but I'd like to get ~+150 on the core if possible.

I tried 'unlocking voltage control' in MSI Afterburner and set it to +100 on the voltage, but Fire Strike Ultra freezes within 5 seconds even at +130 on the core. Is my GPU just brick-walling at +100?

Would a beefier BIOS help?

Need some advice. Thx.


----------



## JackCY

Offsets are meaningless on their own; you have to state the real, live clocks.

Some Turing cards already come OC'd from the factory and will run 2GHz stock, while reference cards will sit at 1800MHz. An offset of +200 will work easily on one and not the other. The cards clock as they want at any particular time anyway; even for the same load they will clock differently, plus all the temperature down-clocking.

I remember one of the benches used to crash; I think it was Octane, but that was due to the bench or the GPU driver. Nowadays I don't ever have problems with it. Plus, Octane for me is about 15% lower in power demand than games running uncapped at full speed.

For me on Turing, core clock instability is easy to spot: it will freeze or crash a program. Memory errors instead cause visual artifacts such as colored spots, patterns or flashes.


----------



## geriatricpollywog

JackCY said:


> Offsets are meaningless on their own; you have to state the real, live clocks.
> 
> Some Turing cards already come OC'd from the factory and will run 2GHz stock, while reference cards will sit at 1800MHz. An offset of +200 will work easily on one and not the other. The cards clock as they want at any particular time anyway; even for the same load they will clock differently, plus all the temperature down-clocking.
> 
> I remember one of the benches used to crash; I think it was Octane, but that was due to the bench or the GPU driver. Nowadays I don't ever have problems with it. Plus, Octane for me is about 15% lower in power demand than games running uncapped at full speed.
> 
> For me on Turing, core clock instability is easy to spot: it will freeze or crash a program. Memory errors instead cause visual artifacts such as colored spots, patterns or flashes.


Octane helped me troubleshoot artifacting due to the core being 10mhz too high. It is my favorite bench for detecting both core and memory instability.


----------



## edhutner

I tried OctaneBench 4.0 and the core clock was 1350MHz without hitting any limit. Is that normal? In other benches and games my video card boosts above 2000.
edit: now it boosts correctly; maybe I misread the time frame when I checked the first run in the AB sensor log... I don't know.


----------



## TK421

Imprezzion said:


> Nope, a and non a isn't cross flashable.
> 
> I got a bit of an issue with my 2080 Ti.
> 
> With EVGA FTW3 Ultra BIOS and a custom curve (+150 core then further tweaked for 2085Mhz @ 1.093v @ 48c load) and with Chrome using HW acceleration it will often sit at 1350MHz core @ 0.762v for long times when watching videos. The card cannot handle that and crashes the driver ever so often resulting in solid green video on YouTube or even a short black screen on both my monitors before recovering the driver.
> 
> This does not happen with HW Acceleration disabled as it doesn't clock up then. Is it somehow possible to edit the curve in such a way it will boost normally to 2085 @ 1.093v in games but also gives more voltage at 1350Mhz? I tried a few times but it locks me out of 1.093v if I do that or it just won't go past 1350.





what's the power setting / preference on nvidia control panel?


----------



## Voodoo Rufus

I'm getting about 2035MHz on my Founders card using the 380W Galax bios. Levels off once warmed up to about 2000MHz. Can't seem to get it to a stable 2050. GPU temps right around 50C on water.


----------



## Imprezzion

TK421 said:


> what's the power setting / preference on nvidia control panel?


Prefer maximum performance.


----------



## krkseg1ops

I screwed up the thermal pads on my Gainward 2080 Ti GS recently, and now the GPU fans go into overdrive at 75C (stock air cooler) while the maximum power draw is reduced to 70-80% (down from 123%). It used to reach its maximum (85-88C) at 2400RPM max; now the fans are at 3000RPM and generating a small hurricane next to me. That basically translates into 30-40FPS in Battlefront 2 instead of 80-90. I replaced the thermal pads but didn't pay attention to the thickness, and now I suspect some of the VRMs are not touching the cooler. I heard you should only use 1mm pads on the top of the PCB (where it touches the cooler), with 0.5mm ones on the memory banks of the card. HOWEVER, I noticed the Gainward 2080 Ti GS has some 1.5mm thick pads on the memory banks around the core, while the power section only uses 0.5-1mm thickness. I'm getting confused and angry; I have already unscrewed the card more than 10 times and I cannot bring it back to its original performance (mostly because the old thermal pads were all glued to each other and I had to throw them out).
*Does anyone know what the pad thickness and composition/position should be for the Gainward 2080 Ti GS, or where I could possibly find a schematic for this?* I found one for an EKWB water block installation and it says only 0.5mm and 1mm pads should be used, although I think the thermal pad setup for a waterblock might be different than for the stock air cooler (as amazing as it is). I've ordered some 0.5mm and 1mm pads from Thermal Grizzly and I want to make sure I get it right this time, as I have had to resort to dropping the resolution to 1440p in many games due to my incompetence.


----------



## Mylittlepwny2

Martin778 said:


> What clock do you guys get with these modded BIOS'es? I doubt it's worth the risk and effort, to me it feels like these cards are more thermally challenged than by the Power Limit. My original 2080Ti Kingpin can run 2100-2115MHz all day but if it gets above 50*C it will start dropping clocks. At 65*C you'll probably be down 50-75MHz anyways. I think it's more worthwile to slap an AiO kit on it than mod the BIOS as you'll be fighting the temperature vs clock algorythm anyways.


Well, the NVIDIA boost algorithm starts dropping clocks around 38C (actually it drops a delta around 20C, but unless you're running chilled water you won't notice). Every 4-5C over 38 that you reach, it will continue to drop deltas, to say nothing of the losses to efficiency. Both my KP cards will do +150 on the core for a sustained clock speed of 2205MHz if they are kept below 38C, which isn't hard if I turn up my radiator fans. When running them in SLI they tend to heat up a bit more and will run about 40C at max load, and thus they lose 15MHz automatically.
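The behaviour described above can be sketched as a simple step model; this is only illustrative, using the numbers from this post (peak 2205MHz, ~38C threshold, ~15MHz lost per ~5C bin), and the real GPU Boost bins vary by card, BIOS and driver:

```python
# Rough, illustrative model of the boost-vs-temperature behaviour described
# above. The function name and the bin size are assumptions for the sketch;
# only the 2205 MHz / 38 C / 15 MHz figures come from the post.
def estimated_boost(temp_c: float, peak_mhz: int = 2205,
                    threshold_c: float = 38.0,
                    step_c: float = 5.0, drop_mhz: int = 15) -> int:
    if temp_c <= threshold_c:
        return peak_mhz
    # one bin lost as soon as the threshold is crossed, another per step
    bins = int((temp_c - threshold_c) // step_c) + 1
    return peak_mhz - bins * drop_mhz

print(estimated_boost(35))  # 2205 (below threshold, full boost)
print(estimated_boost(40))  # 2190 (one bin down, the SLI case above)
print(estimated_boost(50))  # 2160
```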


----------



## JackCY

krkseg1ops said:


> I screwed up the thermal pads on my Gainward 2080 Ti GS recently, and now the GPU fans go into overdrive at 75C (stock air cooler) while the maximum power draw is reduced to 70-80% (down from 123%). It used to reach its maximum (85-88C) at 2400RPM max; now the fans are at 3000RPM and generating a small hurricane next to me. That basically translates into 30-40FPS in Battlefront 2 instead of 80-90. I replaced the thermal pads but didn't pay attention to the thickness, and now I suspect some of the VRMs are not touching the cooler. I heard you should only use 1mm pads on the top of the PCB (where it touches the cooler), with 0.5mm ones on the memory banks of the card. HOWEVER, I noticed the Gainward 2080 Ti GS has some 1.5mm thick pads on the memory banks around the core, while the power section only uses 0.5-1mm thickness. I'm getting confused and angry; I have already unscrewed the card more than 10 times and I cannot bring it back to its original performance (mostly because the old thermal pads were all glued to each other and I had to throw them out).
> *Does anyone know what the pad thickness and composition/position should be for the Gainward 2080 Ti GS, or where I could possibly find a schematic for this?* I found one for an EKWB water block installation and it says only 0.5mm and 1mm pads should be used, although I think the thermal pad setup for a waterblock might be different than for the stock air cooler (as amazing as it is). I've ordered some 0.5mm and 1mm pads from Thermal Grizzly and I want to make sure I get it right this time, as I have had to resort to dropping the resolution to 1440p in many games due to my incompetence.


Pad thickness depends on the gap between the component and the cooler. The stock air cooler and a 3rd-party water block are entirely different; you should measure the stock pads when using the stock cooler.
I haven't used "K5 Pro", which nicely replaces pads, but some others have used it. It's a bit pricey, but no more than some good pads. I'm certainly not going to buy more pads now that I know it exists; it's a real pain to deal with differences in pad thickness and pad softness/compression.
Some pads are soft and compress, some are hard and do not; this also plays a role.

You've likely used pads that are too thick around the GPU area, which resulted in terrible contact with the GPU die. I had the same problem, as my pads did not compress the way the stock ones do; I took it apart and saw how bad the die contact was until I put thin enough pads in.

It's just too bad the K5 Pro is not easy to buy locally and has to be shipped internationally. Otherwise I would get it for one other project.

There are no schematics etc. for stock coolers. You could try asking the manufacturer, then wait 6 months to see if they ever reply at all.


----------



## Gandyman

*strange crashes*

Hey guys

Recently got an MSI Gaming Z Trio 2080 Ti and have been loving it. Playing all my old games at 144Hz 1440p without having to turn everything down to medium feels good.

So I recently restarted my Metro Exodus campaign with DXR and DLSS disabled, to get 144Hz 1440p Ultra. After beating the first area (Volga), I decided to restart the chapter, but this time with DXR/DLSS enabled, to see if the experience was more enjoyable with RT.
I experienced a fatal error crash to desktop after about 10 minutes. Thinking nothing of it, I rebooted my machine, played another 5-10 minutes, same crash. Did this half a dozen times. Puzzled, I turned DXR/DLSS off and played for about an hour without experiencing a single crash.

Thinking it may be a bug with Metro, I decided to open my only other DXR game, Shadow of the Tomb Raider. Enabled DXR/DLSS right off the get-go, crashed within 10 minutes. I can replicate this within 10 minutes on demand, sometimes even in the menu. Turned DXR/DLSS off and played just now for about 2.5 hours (got caught up in the story lol) without a single crash. To humor myself, I turned DLSS back on without RT (can't do that in Metro) and crashed again within 10 minutes.

Is this faulty RT or Tensor cores on the card? Buggy new features? Something bad I'm doing on my end? I disabled all overclocks (even the power limit) in Afterburner, as my first thought was that +100 on the core was too much once the RT/Tensor cores were also being used. I have a 9900K running at 4.8GHz, 100% verified as stable, yet to humor myself I tried again with the CPU and RAM set to stock (proper 3.6GHz stock with no XMP). Crashes in both games are still replicable.

TL;DR: Replicable crashes in multiple games after DLSS is enabled; faulty card or faulty user?

Cheers


----------



## AStaUK

Assuming you upgraded your GFX card and haven't done a fresh install, have you tried using DDU to remove all remnants of the previous card?


----------



## cisco150

Imprezzion said:


> Well, I finally found the first game capable of hitting the 380w EVGA FTW3 limits on my Gainward lol. I run 2115-2085 depending on temps at 1.093v with a custom curve with 7800 memory and Black Desert Online on the "screenshot only" Ultra mode actually manages to tap the power limit from time to time. It usually runs 110-115% but some areas hit the limit slightly going as high as 131% and then throttling to 2025-2040 @ 1.043v shortly.
> 
> I mean, superposition always hit the limit for me but 3dmark or other games didn't so I figured I was fine but..
> 
> Well, might switch back to HOF XOC 1.125v 2145-2115Mhz again lol.


Can you share the hof xoc bios.


----------



## Gandyman

AStaUK said:


> Assuming you upgraded your GFX card and haven't done a fresh install have you tried using DDU to remove all remnants of the previous card?


Yeah, I even did a fresh Windows install, as my old OS was a bit bloated and the hardware change made me less lazy, so I decided to do it. I also tried DDU today in case the fresh driver install bugged out or something.

I just bought BFV, and it's downloading now so I can test a third game with DXR.


----------



## Gandyman

Just remembered that Wolfenstein Youngblood also has RTX features.

It took a lot longer to crash this time, 60 minutes-ish, but it crashed to desktop nonetheless. Keep in mind I've had the new GPU a bit over a week now and have done tonnes of gaming without any crashes; the only consistent crashes to desktop are with RTX ON.

Anyone else had similar issues?

BFV still downloading.


----------



## maivorbim

Does anybody have the ASUS RTX 2080 Ti Strix OC Custom PCB (2x8-Pin) 260W x 231% Power Target BIOS (600W) mentioned on the first page? Thanks


----------



## JackCY

Bad driver or bad card if it's happening in multiple games/programs.

Do you also crash in Q2RTX or Octane RTX bench?

I just finished the first two Metro games and will move on to Exodus, but I don't think they've bothered yet to add DLSS "2.0" to Exodus, if ever, which is a damn shame.
The first two Metro games are wonky; they have some serious engine problems, even in the LL Redux engine used for both games: often periodic heartbeat FPS drops, and sometimes the PhysX or something craps out and crashes the whole thing in a certain mission, or so people write. I tried multiple configurations when I reached one impassable map; I think nothing worked at all. All I could do was rush it, as there seemed to be almost a timer after which the game engine crashes the map...

As far as I know, Control should have all the RTX and DLSS goodies, other games... not so much.


----------



## Gandyman

JackCY said:


> Bad driver or bad card if it's happening in multiple games/programs.
> 
> Do you also crash in Q2RTX or Octane RTX bench?


Just spent all night (it's almost 2 am) playing/testing and can confirm that it's DLSS that crashes. Q2RTX: no crash. Port Royal: no crash. Just finished the whole Volga area in Exodus with RT on but DLSS off, and no crashes. Crashes are 100% reproducible within 5-10 minutes once DLSS is turned on. Didn't test for as long, but it's the exact same thing in Shadow of the Tomb Raider: played about 3 hours with RT shadows on Ultra and DLSS off, it was perfect; turned DLSS on and got 3 crashes in 20 minutes. It took over an hour for Wolfenstein: Youngblood to crash with DLSS on; turned it off and played for a bit and it didn't crash, but I can't stand that game, it was such a chore to play, so that's hardly definitive. 
Bought BFV just to try this, then found out that you need a 2080 or lower to enable DLSS at my resolution... the 2080 Ti only gets DLSS at 4K, which is some very odd decision making, and a waste of money on my part. 

Are there any benchmarks with DLSS?

I use a PCIe riser cable; is there any chance that could be affecting DLSS only? Seems unlikely, but I think I'll mount it horizontally tomorrow and try, because I'm out of ideas.


----------



## bigjdubb

I'm thinking about selling my 2080 Ti and switching over to my Radeon VII. I rarely play games anymore, and it seems like it would be smart to unload it now, before Nvidia launches new cards. 

Have any of you purchased a used 2080 Ti lately? If so, how much did you pay? I don't know the current market value for one, so it's hard to decide whether it would be worth the hassle.


----------



## keikei

bigjdubb said:


> I'm thinking about selling my 2080ti and switching over to using my Radeon VII. I rarely play games anymore and it seems like it would be smart to unload it now before Nvidia launches new cards.
> 
> Have any of you purchased used 2080ti's lately? How much did you pay for it if you have? I don't know what the current market value is for one so it's hard to decide if it would be worth the hassle.



Ebay is the best method for real time price checks imo.


----------



## bigjdubb

I should have thought of eBay. It's been so long since I've used eBay that I forgot about it.


----------



## kithylin

bigjdubb said:


> I should have thought of ebay. It's been so long since I have used Ebay that I forgot about it.


Here's a customized eBay search list for you. I set it to USA only, used only, Buy It Now, sorted by lowest price + shipping first, and I manually filtered out all the "junk" (backplates, support brackets, boxes, fan assemblies, water blocks, broken/as-is cards, etc.) so it shows just the cards and nothing else. 

https://www.ebay.com/sch/i.html?_fr...ntract+-fan+-"As+is"+-cable+-bracket&LH_BIN=1
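If the saved link above ever dies, the same kind of filtered search can be rebuilt programmatically. A minimal sketch: `_nkw` (keywords) and `LH_BIN` (Buy It Now) appear in the original link, but the exclusion list itself is just an illustrative guess at the filters, not the exact ones used.

```python
from urllib.parse import urlencode

# Rebuild a filtered eBay search like the one linked above. The exclusion
# terms here are illustrative guesses; minus-prefixed words in `_nkw`
# exclude listings containing them.
exclusions = ["backplate", "bracket", "waterblock", "fan", "box", "cable"]
query = "rtx 2080 ti " + " ".join(f"-{term}" for term in exclusions)

# LH_BIN=1 restricts results to Buy It Now listings, as in the original link.
url = "https://www.ebay.com/sch/i.html?" + urlencode({"_nkw": query, "LH_BIN": 1})
print(url)
```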


----------



## Madness11

Hey guys, I have an issue: https://youtu.be/iX8mlBQmbS0 (52 sec). Is the card dying? It happens in all games (the flickering)... please tell me. (RTX 2080 Ti Zotac AMP)


----------



## geriatricpollywog

Madness11 said:


> Hey guys .. I have some issue , https://youtu.be/iX8mlBQmbS0 (52 sec) its card die ? Its happen in all games ( flickering... please tell me ) ( rtx 2080ti zotac amp)


Did you manually overclock or update the BIOS? Have you tried reinstalling Nvidia drivers and running the card stock?


----------



## JackCY

Gandyman said:


> Just spent all night (its almost 2 am) playing/testing and can confirm that its DLSS that crashes. Q2RTX no crash, Port Royale no crash. Just finished all of the Volga erea in Exodus with rt ON but DLSS off, and no crashes. Crashes are 100% reproducible in 5 - 10 mins once turning DLSS on. Didn't test for as long but exact same thing in Shadow of the Tomb raider. Played about 3 hours with the RT shadows on ultra, with DLSS off, was perfect. Turn DLSS on and got 3 crashes in 20 minutes. It took over a hour for Wolfenstien Youngblood to crash with DLSS on, turned it off and played for a bit and it didn't crash, but can't stand that game it was such a chore to play it, so that's hardly definitive.
> Bought BFV just to try this then found out that you need a 2080 or lower to enable DLSS ... 2080ti only gets DLSS for 4k .. which is some very odd decision making .. and a waste of money on my part.
> 
> Are there any benchmarks with DLSS?
> 
> I use PCIE riser cable is there any chance that could be effecting DLSS only? Seems unlikely but I think ill mount it horizontal tommorow and try because I'm out of ideas.


Been messing with Metro Exodus the last day, testing different settings for quality and performance.
That game is as unstable as its predecessors: same engine, same quirks/issues.
When it comes to outright crashes, for me it seemed to be the GPU core clock being too high at the temperatures this game can generate, but that's expected; it's an OC profile right at the edge, works elsewhere but not for ME. Sometimes the engine is idiotic: outside of "furmark"-style benches I wasn't able to get the card to sit on the power limiter at 290W, but launch Metro Exodus with no limiter and it sits there in the game menu at that power, ridiculous. Meanwhile gameplay itself doesn't even get that high; it gets high, sure, 250-260W, but not to 290W and hammering the limiter. The core-clock crashes were the sort where you play a little and it eventually crashes with a bug report.

The engine also sometimes fails to even start, same as the previous games did: launch and watch a black screen, nothing happening. With ME my guess is that Ansel fails to hook, or something like that breaks on launch, until it miraculously starts working again some launch attempts later, after messing with the config file.

DLSS is pretty much useless in ME: worse quality than shading-rate scaling or resolution scaling with sharpening applied, at least at 1440p.

DLSS seems to work for me so far.

If it crashes for you everywhere with DLSS, then sadly I would say some part of the GPU core is busted, given it does it even at stock clocks. Who knows how well they validate the chips for functionality. Card manufacturers... I think they just run a common bench like 3DMark at best and that's it, so no DLSS test either.


----------



## J7SC

There is a DLSS feature test in the advanced version of 3DMark... it's basically 2x Port Royal, with and without DLSS. Not the be-all and end-all, but at least it's something for folks to test their new 2080 Ti with.


----------



## Gandyman

JackCY said:


> Been messing with Metro Exodus last day, testing different settings for quality and performance.


Borrowed a friend's 2080 Super and played 6 hours without a single crash with DLSS on. Put my 2080 Ti back in: 3 crashes in 20 minutes. Turned DLSS off and played the rest of the night with no issues.

I tried as much negative offset on the core as Afterburner would allow and fans at 100%, with no change.

Could there be an issue with the 16 Gbps Micron RAM that MSI put on the Gaming Z, versus all the other 2080 Tis? That kind of thing is way beyond my knowledge.

Submitted a DOA RMA request with the help of my supplier; MSI said they would honor my testing as evidence and issue a replacement card. Should arrive Mon/Tue next week 

I guess if it does it with the new one then ... oops?


----------



## Madness11

geriatricpollywog said:


> Did you manually overclock or update the bios? Have you tried reinstalling Nvidia drivers and running the card stock?


Hi, no, the card runs completely stock. No OC, no BIOS update. Every game randomly has this flickering or flashing.


----------



## JackCY

Gandyman said:


> Borrowd a friends 2080 super, played 6 hours without a single crash with dlss on. Put my 2080ti back in, 3 crashes in 20 minutes. turned dlss off, played the rest of the night with no issues.
> 
> I tried with as much negative on the core as afterburner would allow and fans on 100% with no change.
> 
> Could there be a an issue with the 16gb/s Micron RAM that MSI put on the gaming Z over the all the other 2080tis? That kind of thing is way beyond my knowledge.
> 
> Submitted a DOA RMA request with the help of my supplier MSI said they would honor my testing as evidence and issue a replacement card. Should arrive Mon/Tues next week
> 
> I guess if it does it with the new one then ... oops?


Faulty tensor cores. There are other applications that can leverage those for non-gaming use; I don't have any, though. Those will likely also crash on your card.

At least you can get it replaced easily and directly. For me it would likely end up with the seller/large shop running Firestrike/Timespy a few times or as a stress test, deeming it stable, and sending it back as not defective. Only for the card to be sent back again with the same problem description, me complaining about their incompetence and improper testing, and them not even bothering to send it up the supply channel to be replaced. I've had something like that with DPC latency problems before; they just don't care about "niche" aspects/features of the products they sell, or even know how to test them properly, even when told how it was checked and determined to be faulty. Be it faulty hardware or faulty software, what's the difference with a GPU anyway: we pay for the hardware and the software (driver), and one is useless without the other working correctly.

Some cards have Micron and some have Samsung VRAM chips, from any brand; even the same model can have both, quite likely depending on when it was made. It's all about what's available from the supply chain.
They both clock about the same, but Samsung may have lower variance, and it's usually the one that gets binned for top-end cards.


----------



## geriatricpollywog

I switched to the XOC BIOS and had some fun with 3DMark. It didn't recognize my processor, so I am not in the hall of fame 
https://www.3dmark.com/pr/290965


----------



## MrTOOSHORT

0451 said:


> I switched to the XOC bios and had some fun with 3DMark. It didn't recognize my processor so I am not in the hall of fame
> https://www.3dmark.com/pr/290965


That's a great score, congrats!:thumb:


----------



## Gandyman

JackCY said:


> At least you can get it replaced easily and directly. For me it would likely end up with a... seller/large shop runs Firestrike/Timespy a few times or as a stress test, deems it stable and sends it back as not defective. Only for the card to be sent back again with the same problem explanation, complaining about their incompetence and improper testing and not even bothering to send it up the supply channel to be replaced. I've had something like that with the DPC latency problems before, they just don't care about "niche" aspects/features of the sold products or even know how to properly test them even when told how it was checked and determined that it's faulty, be it a faulty hardware or faulty software, what's the difference anyway with a GPU, we pay for hardware and the software (driver), one is useless without the other working correctly


I run a small PC repair/building store and have a good relationship with my supplier. I called him and said basically word for word what you said there: that if I send back this 2200 AUD GPU, they'll run Firestrike for 30 minutes, go 'seems fine', and send it back. He said he understands 100% and will see what he can do; he emailed MSI directly and kept me CC'd. He relayed that I had done over 30 hours of testing to determine that faulty tensor cores are the only logical conclusion. The RA director of MSI Australia emailed him back, said she would approve the RMA request, and sent him an authorization. 

So I guess I was lucky; it's more about who you know than who you are in life, I've found. You just need the right friends.

You watch, my new one will arrive tomorrow and do the same thing lol -- just my luck.

Incredibly off topic but what monitor are you using now? I really appreciate all the info you gave me on the LG thread.

Cheers


----------



## JackCY

Gandyman said:


> I run a small PC repair/building store and have a good relationship with my supplier. I called him basically said word for word what you said there, that if I send back this 2200 AUD GPU they will run firestrike for 30 mins and go 'seems fine' and send it back. He said he understands 100% and will see what he can do, he emailed MSI directly and kept me CC'd. He relayed that I had done over 30 hours of testing to determine that the tensor cores being faulty is the only logical conclusion. The RA director of MSI Australia emailed him back and said she would approve the RMA request and sent him an authorization.
> 
> So I guess I was lucky, more about who you know than who you are in life I've found. Just need the right friends.
> 
> You watch my new one will arrive tomorrow and do the same thing lol -- just my luck.
> 
> Incredibly off topic but what monitor are you using now? I really appreciate all the info you gave me on the LG thread.
> 
> Cheers


Yep. I've had a similarly good experience when dealing with a manufacturer directly, but only after a bad experience with the shop, or rather with the manufacturer's local service center, who kept sending back replacement cards (an old card at that point, out of production) that were physically not OK, or not even pluggable given how bent the I/O metal bracket was. Once you get high enough up the chain it's usually fine, but the system fails with the "peons" who, like most people, get worked to death at low wages and scrutinized for anything not done the cheapest way possible. And this has been a recurring theme for many brands, especially when they had to deal with extra RMAs, bulk RMAs from miners. Once I got hold of Taiwan HQ customer service on their forum... with photos of what the service center was sending back and how long the whole ordeal was taking (months), when the problem could have been resolved quickly and satisfactorily with a refund, or an offer of a sidegrade/upgrade when they no longer had the same card in good enough condition. They approved a refund, called the service center, and suddenly it was no problem; the service center's behavior turned around 180 degrees. I bet they were a bit surprised to get a talking-to from Taiwan HQ. Customers should not have to tell these stories publicly on forums, Twitter, etc. just to reach someone at the company competent enough to resolve problems.

Some shops are fine and will swap products or send them up the channel right away and make you wait, but that's better, because at least someone more competent than a shop technician may look at the card and test it. There's nothing wrong with a shop doing the testing, but when it's outside their expertise/knowledge/skill they should send it up the channel, or at least read what the customer writes the problem is, haha.

Another time, another shop: I sent a card back within 14 days, a regular return and refund, no issue with the card. They put the card on a shelf and lost the accompanying paperwork; 14 days go by, still no refund received. I write them and they reply, "we were wondering what this card is doing here, there is no document with it, is it broken?" I resent the document, no issue with the card, resolved. But again, I had to intervene due to an error on their side.

3 cards, 3 issues. So far not much luck with local shops, small or large.

No wonder that when I'm buying something like a monitor, which has a high return rate, I'd rather look at Amazon outside my country, or even German shops, for a more hassle-free return/exchange experience and lower costs of doing so.

I use an AOC Q3279VWFD8 31.5" 1440p75 IPS, and since I didn't need to play a lottery on it and have had no major problems so far, the thread for it is rather picture-free; the photos I took were a bit bad (focus, exposure) and I didn't feel like retaking them and then spending hours doing the viewing-angle composites etc. like I did for the AOC, Acer, Samsung, and LG, which are mostly in the comparison thread in my signature, and with some of those I did play the lottery a bit, to no avail. I've had the 850G; it was broken internally, and while the AUO VA is nicer than the older Samsung VA, it's still no match for an IPS at this size. For me 31.5" is simply too large for VA, and I find the other negatives of VA worse than a bit of corner glow with IPS. Not all IPS glow as crazily as the infamous AUO 27" AHVA (=IPS). Been there, tried all the available 27" and 31.5" 1440p100+ IPS and VA panels at the time, and settled for a 1440p75 monitor anyway. Nowadays there are more options; there wasn't a very up-to-date list before, so I kept the 2019 new options/releases in a thread, in my signature. There still isn't much, if anything, to recommend in 1440p100+; in 1080p there are some nicer models now, the 1080p144 24" IGZO IPS and the 1080p 240/280Hz IPS panels.

After having multiple 31.5" monitors and using this one for a while now, I really don't want to go down in size. Plus, for 1440p, 31.5" is the most usable to me, as it's standard PPI: no UI scaling necessary and text is readable.


----------



## newls1

Please don't make me read 1161 pages here to look for this answer... I have a Gigabyte 2080 Ti Gaming OC (reference design), currently on what I think is a 365W-limited BIOS. What BIOS can I flash that will actually work and increase my wattage limit? I'm at +140 core / +1000 mem and waterblocked.


----------



## JackCY

https://www.techpowerup.com/vgabios...X+2080+Ti&interface=&memType=&memSize=&since=

There is indeed a 366W VBIOS for it on TPU. I think the highest stock VBIOS I've seen was 380W. So there's not much you can do other than shunt-mod it or get some niche non-public VBIOS with a crazy limit, which is unlikely to be made for your card.
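Before hunting for a bigger VBIOS it's worth checking what the current one actually enforces; `nvidia-smi -q -d POWER` reports the default and maximum board power limits. A minimal parsing sketch, using made-up sample numbers rather than output from any specific card:

```python
import re

# Trimmed sample of `nvidia-smi -q -d POWER` output. The wattage values
# below are made-up placeholders, not measurements from a real card.
SAMPLE = """\
    Power Readings
        Power Draw                        : 245.10 W
        Power Limit                       : 366.00 W
        Default Power Limit               : 300.00 W
        Min Power Limit                   : 150.00 W
        Max Power Limit                   : 366.00 W
"""

def parse_power_limits(text):
    """Return a dict mapping each power field name to its value in watts."""
    fields = {}
    for key, value in re.findall(r"(\w[\w ]*\w)\s*:\s*([\d.]+) W", text):
        fields[key] = float(value)
    return fields

limits = parse_power_limits(SAMPLE)
# Headroom the slider can give you before a flash or shunt mod is needed.
headroom = limits["Max Power Limit"] - limits["Default Power Limit"]
print(limits["Max Power Limit"], headroom)
```

If "Max Power Limit" already matches the VBIOS you were planning to flash, the flash buys you nothing.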


----------



## J7SC

newls1 said:


> please dont make me read 1161 pages here to look for this answer....... I have a Gigabyte 2080ti gaming OC (ref design) currently @ i think 365w limited bios. What bios can I flash to that will actually work and increase my wattage limit. Im @ +140 / + 1000mem and waterblocked.....


 


JackCY said:


> https://www.techpowerup.com/vgabios...X+2080+Ti&interface=&memType=&memSize=&since=
> 
> There is indeed a 366W VBIOS for it on TPU. I think the highest stock VBIOS I saw was 380W. So there is not much you can do other than shunt it or get some niche non public VBIOS with a crazy limit which is unlikely made for your card.


 
I have the Aorus 2080 Ti Xtreme *WB* (factory water-block), and the stock BIOS on both cards regularly goes up to 380W peak in GPU-Z etc. ... technically, though, it might be called a '366W' BIOS. I'm not sure an extra 14W or so warrants a BIOS flash anyhow, but you can give that BIOS a try (it's on TechPowerUp, look for the 'WB' suffix), given that it's used on the Gigabyte/Aorus water-cooled cards.


----------



## newls1

J7SC said:


> I have the Aorus 2080 Ti Xtreme *WB* (factory water-block), and stock bios on both cards goes regularly up to 380W peak in GPUz etc,...technically though, it might be called '366' W bios. I'm not sure if an extra 14W or so would warrant a bios flash anyhow, but you can give that bios a try (it's on TechPowerUp, look for 'WB' suffix), given that it is used on the Gigabyte / Aorus w-cooled cards


Thanks for the reply, certainly appreciate the feedback. I actually sold the whole PC this afternoon, so I no longer need it, but I would have tried it for sure. Waiting on the Zen 3 release now and the RTX "3" cards.


----------



## kwalker99

alex1990 said:


> *MSI 2080ti Gaming X TRIO
> 
> BIOS 400W*
> 
> i got accept from ru support for upload this test rom
> 
> I tested this 3 weeks, all ok


Thanks for sharing this, I have it working perfectly on an EVGA 2080 Ti FTW3 Ultra ;-)


----------



## J7SC

newls1 said:


> thanks for the reply, certainly appreciate the feedback. I actually sold the whole pc this afternoon, so no longer need it but would have tried it for sure. Waiting on the Zen3 release now and RTX "3" cards


 
...should be an interesting last quarter for "3s" from AMD/Zen and Nvidia/Ampere... looking forward to taking a look at some custom-PCB / factory-waterblock cards again when they come out later on, to see what's what with Ampere. Of course they'll have UEFI BIOS  (sighing nostalgically for the days of DIY .txt-file BIOS mods)


----------



## Laithan

kwalker99 said:


> Thanks for sharing this, I have this working perfectly on a EVGA 2080TI FTW3 Ultra ;-)


Have you measured power with that BIOS and seen more than 380W? The way the BIOS values work, there is a maximum value associated with each PCIe power cable and with the PCIe slot itself (75W). All the maximum values added together give you the maximum power in watts that the BIOS will allow. The MSI Trio has a third PCIe power connector (6-pin), so the 400W total is likely going to include that third 6-pin cable (unless its allocation is used for something else). When flashing that BIOS to a GPU with only two PCIe power cables, you likely can't access the power allocated to the third connector within the BIOS. If more than 20W is allocated to the 6-pin (it will be), then the overall total power you're allowed should be less than the FTW3 Ultra's 380W BIOS.
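The budget arithmetic above can be sketched in a few lines. The per-rail split below is an illustrative assumption, not values dumped from the actual Trio ROM:

```python
# Hypothetical per-rail allocations (watts) for a 400W triple-connector BIOS.
# The exact split is an assumption for illustration; only the 400W total and
# the 75W slot figure come from the discussion above.
trio_bios = {"slot": 75, "8pin_1": 150, "8pin_2": 150, "6pin": 25}

total = sum(trio_bios.values())  # the advertised 400W limit

# On a card with only two 8-pin connectors, the 6-pin rail physically
# doesn't exist, so its allocation is stranded and unreachable.
reachable = total - trio_bios["6pin"]

print(total, reachable)  # reachable ends up below the FTW3 Ultra's 380W BIOS
```

With anything over 20W on the phantom 6-pin rail, the cross-flash nets less usable power than the card's own top BIOS, which is the point being made above.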


----------



## geriatricpollywog

Finally a valid result.

https://www.3dmark.com/pr/291406


----------



## kithylin

0451 said:


> Finally a valid result.
> 
> https://www.3dmark.com/pr/291406


It looks like you ranked 36th in the top 100 for Port Royal single GPU, congrats!


----------



## geriatricpollywog

kithylin said:


> It looks like you ranked 36'th in the top 100 for Port Royal 1x GPU, congrats!


Thank you! I cheated a little.


----------



## Nizzen

0451 said:


> Finally a valid result.
> 
> https://www.3dmark.com/pr/291406


Nice 
Do you have a Timespy result (non-Extreme)?


----------



## geriatricpollywog

Nizzen said:


> 0451 said:
> 
> 
> 
> Finally a valid result.
> 
> https://www.3dmark.com/pr/291406
> 
> 
> 
> Nice
> Do you have a Timespy result (non extreme)

No, it kept crashing when I tried to run Timespy on ice. I can only get one good run during an ice session before things start crashing. I'll try Timespy next time.


----------



## JackCY

Get a freezer, fill it with salty water, toss the rads into it and turn it to max freeze lol. How hard can it be.


----------



## geriatricpollywog

JackCY said:


> Get a freezer, fill it with salty water, toss the rads into it and turn it to max freeze lol. How hard can it be.


Lol I am trying to keep my ice sessions under 15 minutes to avoid condensation.


----------



## J7SC

0451 said:


> Lol I am trying to keep my ice sessions under 15 minutes to avoid condensation.


 
...for short bursts, you can even put some dry ice (DICE) into the ice-water bucket; years back I used to do that, and it got me another 8-10C drop compared to just ice water... just make sure no pets or small children are around (re: safety). It doesn't last long enough to really worry about condensation, though 'use at your own risk'. 

With Ampere and perhaps RDNA2 around the corner, you might as well squeeze every last bit out of the card now. I've been in the top 30 of the 3DMark Port Royal HoF, overall and 2x GPU, for well over a year now (without DICE; stock BIOS), but I expect that to change very soon with the new cards coming out and hitting the charts. As always, the hunt will be on soon after the release of the 3080 Ti or equivalent for 'secret sauce' XOC BIOS / cross-flashing. Time to get some popcorn


----------



## Laithan

J7SC said:


> ...for short bursts, you can even put some dry ice (DICE) into the ice-water bucket; years back, I used to do that, and it got me another 8C - 10C drop compared to just ice water...just make sure no pets or small children are around (re. safety). It doesn't last long enough to really worry about condensation, though 'use at your own risk'.
> 
> With Ampere and perhaps RDNA2 around the corner, you might as well squeeze every last bit out of the card now. I've been in the top 30 @ 3DM PortRoyal HoF overall & 2x GPUs for well over a year now (without DICE; stock bios) but expect that to change very soon with the new cards coming out and hitting the charts. As always, the hunt will be on soon after the release of the 3080 Ti or equivalent for 'secret sauce' XoC bios / cross-flashing. Time to get some popcorn


:yessir:

We can dream a little, right? I'm still staying positive (hoping) that someone can crack the encryption so we can get back to proper BIOS modding. Imagine buying an engine for your car that you can't modify, even if you were willing to give up the warranty.. :h34r-smi


----------



## geriatricpollywog

J7SC said:


> 0451 said:
> 
> 
> 
> Lol I am trying to keep my ice sessions under 15 minutes to avoid condensation.
> 
> 
> 
> 
> ...for short bursts, you can even put some dry ice (DICE) into the ice-water bucket; years back, I used to do that, and it got me another 8C - 10C drop compared to just ice water...just make sure no pets or small children are around (re. safety). It doesn't last long enough to really worry about condensation, though 'use at your own risk'.
> 
> With Ampere and perhaps RDNA2 around the corner, you might as well squeeze every last bit out of the card now. I've been in the top 30 @ 3DM PortRoyal HoF overall & 2x GPUs for well over a year now (without DICE; stock bios) but expect that to change very soon with the new cards coming out and hitting the charts. As always, the hunt will be on soon after the release of the 3080 Ti or equivalent for 'secret sauce' XoC bios / cross-flashing. Time to get some popcorn

I am already handling the faith snakes by using ice when the humidity is 80%, so I'm a bit apprehensive about dry ice. What was the duration of your ice benching?

I will probably wait for the next Kingpin, or do the EVGA Step-Up program if new GPUs are out by late September.


----------



## J7SC

Laithan said:


> :yessir:
> 
> We can dream a little right? I'm still remaining positive (hoping) that someone can crack the encryption so we can get back to proper BIOS modding. Imagine buying an engine for your car that you can't modify even if you were willing to give up the warranty.. :h34r-smi


 
It's been a long time, but I remember my GTX 600-series custom-PCB cards: save the BIOS (*.rom), rename it (*.txt), mod the PT et al. in Notepad, save and rename back (*.rom). Re-flash and off you went, until black-screening told you '...too far' :axesmiley I never broke or had to RMA any card I modded that way, though.




0451 said:


> I am already handling the faith snakes by using ice when the humidity is 80%, so I’m a bit apprehensive about dry ice. What was the duration of your ice benching?
> 
> I will probably wait for the next Kingpin or do the EVGA step up program if nee GPUs are out by late September.


 
...80% humidity is an issue for sure. From what I remember (this was many years back), two XOC GPUs could exhaust a typical 1-ft DICE brick in about the time it took to run 3DM11 Extreme. If you get a bit lower humidity, perhaps early in the morning, and do decide to go for it, try to get the 'rice kernel' type DICE... the bricks are more common (for example at my local Praxair), but the rice-kernel type works a bit better for adding to an existing (water) ice bucket. Just be set up for the bench and ready the moment you add DICE - it will be over quickly 

I don't mind waiting for the right kind of custom water-cooled-PCB 3080 Ti (3090? :headscrat ) or even skipping it and holding out for next-gen Hopper, depending on reviews etc. It's not like 2080 Tis will all of a sudden be obsolete; they just won't be king of the hill anymore, "just" in the upper third of the performance rankings


----------



## geriatricpollywog

J7SC said:


> It's been a long time, but I remember my GTX 600 series custom PCB cards:...save bios (*.rom), rename (*.txt), mod PT et al in Notepad, save and rename (*.rom). Re-flash and off you went, until black-screening told you '...too far' :axesmiley I never broke or had to RMA any card I modded that way though.
> 
> 
> 
> 
> 
> ...80% humidity is an issue for sure. From what I remember (this is many years back), two XoC GPUs could exhaust a typical 1ft DICE brick in about the time it took to run 3DM11-Xtr. If you get a bit lower humidity, perhaps early in the mornings, and do decide to go for it, try to get the 'rice kernel' type DICE...the bricks are more common (for example at my local Praxair) but the rice kernel type DICE works a bit better for adding to an existing (water) ice bucket. Just be set up for the bench and ready the moment you add DICE - it will be over quickly
> 
> I don't mind waiting for the right kind of custom w-cooled PCB 3080 Ti (3090 :headscrat ) or even skip and hold out for next-gen Hopper, depending on reviews etc. It is not like 2080 Tis will all of a sudden be obsolete, they just won't be king of the hill anymore but '''just''' in the upper 3rd of the performance rankings


I am pretty sure my next card will be a 3080 Ti KPE, and I'll pay full price for it next time. The 2080 Ti KPE is my first Nvidia card since the GeForce FX 5600, so I'm used to AMD cards, where the double 8-pin basically goes straight to the VRM without current monitoring and regulation. After playing around with a KPE card, I don't think I'd go back to AMD. This thing is so much fun to overclock, and it's so well made. TiN, one of the engineers behind the KPE, said on the EVGA forum that there are multiple performance criteria on RTX TU102 GPUs, that it's not all black and white like on the 1080 Ti or previous-generation chips, and to keep an eye on performance results, not just raw frequency. Even at my daily gaming OC, 2145 core / 8250 mem, I'm able to get into the top 100 HOF for several 3DMark tests, which is insane. And the card isn't even on my water loop.


----------



## Shawnb99

0451 said:


> I am pretty sure my next card will be a 3080ti KPE and I'll pay full price for it next time. The 2080ti KPE is my first NVidia card since the GeForce FX 5600 so I am used to AMD cards where the double 8-pin basically goes straight to the VRM without current monitoring and regulation. After playing around with a KPE card, I don't think I would go back to AMD. This thing is so much fun to overclock and it is so well-made. TiN, one of the engineers behind the KPE, said on the EVGA forum that there are multiple performance criteria on RTX TU102 GPUs, it's not all black and white like on 1080Ti or previous generation chips, and to keep an eye on performance results, not just raw frequency. Even at my daily gaming OC, 2145 core / 8250 mem, I am able to get into the top 100 HOF for several 3DMark tests which is insane. And the card isn't even on my water loop.



I’d buy a 3080 Ti KPE if I were actually able to buy one, but like always the stock will be sold out within minutes, and unless I stay up all day refreshing the page every 5 minutes, chances are I’ll never be able to buy one.
Not to mention they won’t be out for 6+ months after release, and finding a block for them is next to impossible: either no one makes one, or EVGA stops selling them shortly after release.

Since they make it so difficult to get one, I won’t even bother.


----------



## geriatricpollywog

Shawnb99 said:


> I’d buy a 3080 Ti KPE if I was actually able to buy one, but like always, stock will be sold out within minutes, and unless I stay up all day refreshing the page every 5 minutes, chances are I’ll never be able to buy one.
> Not to mention they won’t be out for 6+ months after release, and finding a block for them is next to impossible: either no one makes one, or EVGA stops selling them shortly after release.
> 
> Since they make it so difficult to get one, I won’t even bother


When I ordered mine from the b-stock page, the 2080ti FTW3 Hydrocopper and KPE were the same price ($950) and I actually tried to buy the FTW3 first but the transaction kept failing no matter which credit card I used. I switched to the KPE and my Paypal account and it went through. Mind you, this purchasing ordeal took place over a year after release and over a period of 2 hours so your pessimism is lost on me.


----------



## kwalker99

Laithan said:


> Have you measured power with that BIOS and seen more than 380W? The way the BIOS values work is there is a maximum value associated with each PCI-e power cable and the PCI-e slot itself (75W). All of the maximum values added together gives you the maximum power in WATTS that the BIOS will allow. That GPU MSI Trio has a 3rd PCI-e power (6-pin). This means that the total power (400W) is going to likely (unless it is used for something else) include that 3rd PCI-e 6-pin cable. When flashing that BIOS to a GPU with only 2 PCI-e power cables, you're likely unable to access the power that is allocated to the 3rd PCI-e power cable within the BIOS. If there is more than 20W allocated to the 6-pin (there will be) then the overall total power you are allowed should be less than the FTW3 Ultra 380W BIOS.


You are indeed correct, I feel a bit stupid... Each power connector has its own shunt resistor, so the TDP will be the sum of these plus the PCIe slot's 60W, and looking at HWMonitor, the card seems to have a phantom limb ;-) It thinks it has 50-60W on a 6-pin connector; even better, it changes under load lol

So does anyone have the 600W Asus Strix BIOS? Should be a good contender for the FTW3 ULTRA
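The per-rail budget arithmetic Laithan describes in the quoted post can be sketched like this. All wattage allocations below are illustrative assumptions, not values read from any real BIOS:

```python
# Each power input (PCIe slot + each cable) gets its own budget in the BIOS;
# the board power limit is simply the sum of the per-rail budgets.
RAIL_BUDGETS_TRIO_W = {
    "pcie_slot": 75,   # slot budget (up to 75 W by spec)
    "8pin_1": 150,
    "8pin_2": 150,
    "6pin": 25,        # hypothetical allocation for the Trio's 3rd connector
}

def total_limit(budgets):
    """Board power limit is the sum of the per-rail budgets."""
    return sum(budgets.values())

print(total_limit(RAIL_BUDGETS_TRIO_W))  # 400 W for the 3-connector board

# Cross-flashed onto a 2-connector card, the 6-pin budget is stranded:
two_connector = {k: v for k, v in RAIL_BUDGETS_TRIO_W.items() if k != "6pin"}
print(total_limit(two_connector))  # only 375 W is actually reachable
```

This is why a 400W BIOS from a 3-connector card can end up delivering less than a 380W BIOS designed for a 2-connector board.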


----------



## Laithan

kwalker99 said:


> You are indeed correct, I feel a bit stupid... Each power connector has its own shunt resistor, so the TDP will be the sum of these plus the PCIe slot's 60W, and looking at HWMonitor, the card seems to have a phantom limb ;-) It thinks it has 50-60W on a 6-pin connector; even better, it changes under load lol
> 
> So does anyone have the 600W Asus Strix BIOS? Should be a good contender for the FTW3 ULTRA


Please don't feel a bit stupid, it is all good! :thumb:

I was initially intrigued as well. We're here to help each other get the best performance that we paid for. It was a good topic to bring up that may benefit others that aren't aware that the MSI Trio has 3 power connectors (not meaning the 3 fans).

---

*Rant/info:* Cross-flashing is still a bit crazy to some degree. I am still sad to see that, years later, nobody can successfully crack the BIOS signing (or bypass it). We are forced to cross-flash as basically our only method of raising _artificial limits_. IMO cross-flashing a BIOS that isn't designed for your specific board design, memory type, etc. is more of a risk than a BIOS mod for your specific make/model GPU done responsibly/properly. What did NVIDIA solve? They made it worse IMO. BIOS modding wasn't killing GPUs (maybe some that ignored cooling, but there were protections in place for those people).

Cross-flashing can change memory voltage/timings, fan speeds, power consumption, and core voltages for the better in some cases, but can also hurt in others. If your board has better memory (e.g. Samsung vs Hynix vs Elpida) than the memory the flashed BIOS was built for, you may see performance or stability issues if that BIOS carries different timings for your memory. Some BIOSes have multiple memory profiles, but you get the point. If the max fan RPM is lower on the card the BIOS came from, you could lose some maximum RPM on your fans if they are capable of higher, a moot point for those with a block/pot.

The second-worst problem is losing display outputs. Since the selection and arrangement of outputs can vary on custom designs, a BIOS could, for example, expect an HDMI port where a DisplayPort is actually located. The worst problem is bricking the GPU, although they are usually pretty easy to recover. I know many people do this without any issues; my point is really just that NVIDIA didn't solve anything, they just made us all roll the dice. Modifying the specific/correct BIOS for your exact make/model GPU is a guaranteed fit; cross-flashing, well, YMMV.


----------



## Thomas Cawley

I have searched this entire thread but couldn’t find an answer. I have an EVGA 2080 Ti XC (not the Ultra); can I flash the BIOS? I tried following the instructions on the first page, but I get an error saying the IDs don’t match. Can anyone point me in the right direction please? Thanks


----------



## edhutner

Thomas Cawley said:


> I have searched this entire thread but couldn’t find an answer. I have an EVGA 2080 Ti XC (not the Ultra); can I flash the BIOS? I tried following the instructions on the first page, but I get an error saying the IDs don’t match. Can anyone point me in the right direction please? Thanks


I have the EVGA XC Gaming (11G-P4-2382-KR). If yours is the same, your card's XUSB firmware version is probably 0x70100003, and you cannot flash a BIOS with a lower version. In the TPU verified VGA BIOS database I found that only the Gigabyte Gaming OC has the same version; it is a reference design and has a higher power limit. Here it is:
https://www.techpowerup.com/vgabios/209314/gigabyte-rtx2080ti-11264-190218

I have successfully used it on mine. The gain is not huge, but it is a little better than the stock EVGA BIOS.


----------



## kithylin

Laithan said:


> Please don't feel a bit stupid, it is all good! :thumb:
> 
> I was initially intrigued as well. We're here to help each other get the best performance that we paid for. It was a good topic to bring up that may benefit others that aren't aware that the MSI Trio has 3 power connectors (not meaning the 3 fans).
> 
> ---
> 
> *Rant/info:* Cross-flashing is still a bit crazy to some degree. I am still sad to see that years later nobody can successfully crack (sign the BIOS, or bypass it). We are forced to cross-flash as basically our only method of increasing _artificial limits_. IMO cross flashing a BIOS that isn't designed for your specific board design, memory type, etc. is a bit more of a risk than a BIOS MOD for your specific make/model GPU when done responsibly/properly. What did NVIDIA solve? They made it worse IMO. BIOS modding wasn't killing GPUs...(maybe some that ignored cooling, but there were protections in place for those people).. Cross flashing can change memory voltage/timings, fan speeds, power consumption, core voltages for the better in some cases but can also hurt in other cases. If your board has better memory (ex: Samsung vs Hynix vs Elpida) than the memory used from the BIOS you've flashed, you may see some performance or stability concerns if the timing for that memory on the BIOS you are cross-flashing is different. Some BIOS' may have multiple memory profiles also but you get the point. If the max fan RPM is lower on the card you are using the BIOS from, then you would potentially lose some maximum RPM on your fans if they are capable of a higher rpm. A moot point for those with a block/pot. The 2nd to worst problem would be when you lose display outputs. Since the selection and arrangement can vary on the GPU outputs with custom designs, a BIOS could be for example expecting an HDMI port where a Display port is actually located. The worst problem would be bricking the GPU although they are usually pretty easy to recover. I know many people do this without any issues, my point is really just that NVIDIA didn't solve anything they just made us all roll the dice. Modifying the specific/correct BIOS for your exact make/model GPU is a guaranteed fit, cross flashing well YMMV.


You can always keep your stock bios and just shunt mod it instead.


----------



## Asmodian

kithylin said:


> You can always keep your stock bios and just shunt mod it instead.


Yes, I recommend this approach. Using the wrong BIOS can cause problems but a shunt mod, if done right, works great on any card.

However, I would still like to be able to edit the BIOS limits like we used to. A shunt mod is even better, but I can see why people would prefer editing their own BIOS over ordering specific resistors and breaking out a soldering iron. I didn't do a shunt mod until I was 'forced' to. I wonder if RMA rates improved after the BIOS encryption? You can do damage with bad edits, but people are still doing a lot of BIOS flashing looking for higher power limits.


----------



## H00AH

Hey guys, would someone kindly help me out? I want to know what firmware I can use on my card to increase its power limit. I have a Gigabyte 2080 Ti TURBO OC 11G blower card, rev 2, reference board, device ID 1E07. The power limit shows a maximum of 280W. I tried searching for a suitable firmware, but this isn't a popular card so I'm not sure which to use. I would like to get the most out of my 2080 Ti. It is watercooled, so temps are not an issue at all. Any help would be appreciated! Thanks!


----------



## Jonas Larsson

anyone have *ASUS RTX 2080 Ti Strix OC Custom PCB (2x8-Pin) 260W x 231% Power Target BIOS (600W)?*


----------



## keikei

Interesting. If I hold onto my card till end of its warranty (2 more yrs), I will basically get a free upgrade. Noice. https://www.guru3d.com/news-story/g...rce-ampere-rtx-300-series-launch-details.html


----------



## H00AH

Sup m8s, I'm getting an error: "Invalid firmware image detected." I've tried flashing 2 different ROMs and get the same error for both: the KFA 380W and the Windforce OC 366W ROM. I tried disabling the video card in Device Manager, but still get the same error. My card is the Gigabyte 2080 Ti TURBO OC 11G blower. Anyone have any ideas?


----------



## kwalker99

Asmodian said:


> Yes, I recommend this approach. Using the wrong BIOS can cause problems but a shunt mod, if done right, works great on any card.
> 
> However, I would still like to be able to edit the BIOS limits like we used to be able to do. A shunt mod is even better but I can see why people would prefer editing their own BIOS over ordering specify resistors and breaking out a soldering iron. I didn't do a shunt mod until I was 'forced' to. I wonder if RMA rates were improved after the BIOS encryption? You can do damage with bad edits but people are still doing a lot of BIOS flashing looking for higher power limits.


Shunt mods are really tricky with the RTX cards; the GPU is more intelligent than previous generations. Also, from my testing this week, temps are the biggest challenge: I can get up to 2130 under the current power limit of the FTW3 BIOS, but it drops down as the temp climbs over 50C. This is on water, btw, but it is a single loop with two rads and a full mobo block.


----------



## kwalker99

Jonas Larsson said:


> anyone have *ASUS RTX 2080 Ti Strix OC Custom PCB (2x8-Pin) 260W x 231% Power Target BIOS (600W)?*


I have seen many people ask, but no one has ever provided it publicly... I even question whether it really exists...


----------



## kwalker99

Laithan said:


> Please don't feel a bit stupid, it is all good! :thumb:
> 
> I was initially intrigued as well. We're here to help each other get the best performance that we paid for. It was a good topic to bring up that may benefit others that aren't aware that the MSI Trio has 3 power connectors (not meaning the 3 fans).
> 
> ---


I just thought the power was summed and had forgotten that each input is monitored separately.


----------



## Asmodian

kwalker99 said:


> Shunt mods are really tricky with the RTX cards; the GPU is more intelligent than previous generations. Also, from my testing this week, temps are the biggest challenge: I can get up to 2130 under the current power limit of the FTW3 BIOS, but it drops down as the temp climbs over 50C. This is on water, btw, but it is a single loop with two rads and a full mobo block.


I wouldn't call it really tricky; just use a 0.007+ ohm shunt instead of a 0.005 ohm one.

You cannot make a general statement like "I can get 2130 under the current power limit" without also specifying the workload. You cannot run FurMark at 2130 MHz under the current power limit (not that you would want to, but the point is that what you are running changes things a lot).

I can hit the 123% power limit of my shunt-modded FE at 2055 MHz and 1.05 V (>500W). It depends on the load. Do you ever see the "Performance Limit - Power" flag trigger? If yes, then you hit the power limit, even if you don't see drops in the reported clock speeds.
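The arithmetic behind a shunt mod is simple: the card computes power from the voltage drop across a shunt of assumed resistance, so a resistor stacked in parallel makes it under-read. A sketch using the 5 mΩ stock / 7 mΩ stacked values mentioned above; the 500 W load is just an example figure:

```python
# Two resistors in parallel: R_eff = (R1 * R2) / (R1 + R2)
def parallel(r1, r2):
    return (r1 * r2) / (r1 + r2)

R_STOCK = 0.005      # ohms, typical stock shunt (assumed)
R_STACKED = 0.007    # ohms, resistor soldered on top of it

r_eff = parallel(R_STOCK, R_STACKED)   # ~0.00292 ohm
read_ratio = r_eff / R_STOCK           # card now reads ~58% of true power

actual_w = 500                          # example real draw
reported_w = actual_w * read_ratio
print(f"effective shunt: {r_eff * 1000:.2f} mOhm")
print(f"card reports ~{reported_w:.0f} W while actually drawing {actual_w} W")
```

Using a higher-value stacked resistor (7 mΩ rather than another 5 mΩ) keeps the under-read milder, which is presumably the point of the recommendation above.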


----------



## 86Jarrod

kwalker99 said:


> I have seen many people ask, but no one has ever provided publicly... I even question if it really exists...


I hope it doesn't. To think the 3000 series is about to drop and people are still hoarding BIOSes in the overclocking community is just straight depressing.


----------



## tps3443

This is an update to my last post. I was able to pull the 2080 Ti from the AORUS Gaming Box 2080 Ti eGPU.
This is a watercooled 240mm AIO eGPU box intended for laptops, connecting via Thunderbolt 3.

I pulled the 2080 Ti with the 240mm rad and all!

I flashed the BIOS to a 310-watt non-A version and soldered on my shunts, one for each 8-pin power connector, for a total power draw of 434 watts.

This card is incredible!! It runs like 46-52C at 2,130MHz solid!

I managed a 17,250 Time Spy graphics score with it as a daily stable gaming OC, too. The OEM block and factory 240mm AIO are both just so darn good; they keep this card extremely cool while pulling massive power.

Here are a few pictures!!

In all honesty, it is the fastest 2080 Ti straight out of the box.

It also has Samsung GDDR6, so 16,100MHz on the memory was 100% stable.

I have already sold this eGPU for $1,300 used, and I reassembled it back to its normal state.

I hope this helps anyone who comes across one of these in the future, because it is simply a MONSTER.

I am in the process of buying a Galax HOF 2080 Ti WC Edition. “I am an enthusiast,” so I love high-end components.

Firestrike run 

https://www.3dmark.com/3dm/49284418?


Timespy run


----------



## J7SC

tps3443 said:


> This is a update to my last post. I was able to pull the 2080Ti, from the AORUS GAME BOX 2080Ti eGPU.
> This is a watercooled 240mm AIO eGPU box intended for laptops with a connection via Thunderbolt 3.
> 
> I pulled the 2080Ti with the 240mm rad and all!
> 
> I flashed the bios to a 310 watt NON-A version, and soldered on my 8 ohm shunts for each (8) pin power connected for a total power draw of 434 watts?
> 
> This card is incredible!! It runs like 46-52C at 2,130Mhz solid!
> 
> I managed a 17,250 timespy graphics with it as a daily stable gaming OC too. The OEM block, and factory OEM 240MM AIO are both just so darn good. They keep this card extremely cool, pulling massive power.
> 
> Here are a few pictures!!
> 
> In all honesty, it is the fastest 2080Ti straight out of the box.
> 
> It also has Samsung GDDR6. So 18,100Mhz on the memory was 100% stable.
> 
> I have already sold this eGPU for $1,300 used. And I reassembled it back to its normal state.
> 
> I hope this helps anyone who comes across one of these in the future. Because it is simply a MONSTER.
> 
> I am in the process of buying a Galax HOF 2080Ti WC Edition. “I am a enthusiast” so I love high end components.
> 
> 
> 
> Spoiler


 
Nice, and different w/ the eGPU approach :thumb: 
For a future rainy-day project, you could also try to flash the Aorus 2080 Ti Xtreme WB BIOS on it (the factory full-water-block BIOS, up to 380W before your shunt mod factors in). Adding a bigger cooler / AIO might get you even more performance (less downclocking, which starts at about 38C), though the standard 240mm setup that came with it is certainly doing a great job already.


----------



## geriatricpollywog

17,250 for Time Spy graphics is rather good for a daily stable build. You mean 8,100 memory, not 18,100, right?


----------



## tps3443

J7SC said:


> Nice, and different w/ the eGPU approach :thumb:
> For a future rainy day project, you could also try to flash the the Aorus 2080 Ti Xtr WB bios on it (factory full water-block bios, up to 380w before your shunt mod impacts). Adding a bigger cooler / AIO might get you even more performance (less down clocking which starts at about 38C), though your 240mm standard setup that came with it is certainly doing a great job already.


This GPU is different from the retail Aorus Xtreme Waterforce.

This eGPU is a reference board, non-A, so the shunt modification was mandatory; the highest BIOS power limit is only 310 watts.

The Aorus Xtreme Waterforce BIOS simply will not work; that GPU is a TU102A.

Although, this eGPU does use the exact same full-coverage OEM water block, radiator, pump, etc.


----------



## tps3443

0451 said:


> 17,250 for Timespy Graphics is rather good for a daily stable build. You mean 8100 memory not 18100 right?


16,100MHz effective, or 8,050MHz. 100% in-game stable.

I could, however, benchmark at 8,200, but it would artifact a little.
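For anyone tripped up by the memory numbers in this exchange: on these cards, Afterburner's displayed memory clock appears to be half of the quoted "effective" GDDR6 transfer rate (7,000 MHz shown at stock vs 14,000 effective on a 2080 Ti), so the conversion is a simple doubling. A sketch, assuming that relationship holds:

```python
# Afterburner shows 7,000 MHz at stock on a 2080 Ti; the marketing
# "effective" rate is double that (14,000). An offset applies to the
# displayed value, so:  effective = 2 * (displayed_stock + offset)
STOCK_AB_MHZ = 7000  # what Afterburner displays at stock (assumed)

def effective_rate(offset_mhz):
    """Effective (quoted) transfer rate from an Afterburner offset."""
    return 2 * (STOCK_AB_MHZ + offset_mhz)

print(effective_rate(0))     # 14000, stock
print(effective_rate(1050))  # 16100, the stable gaming clock quoted above
print(effective_rate(1200))  # 16400
```

So "+1,050", "8,050 MHz", and "16,100 effective" are all the same overclock expressed three ways.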


----------



## tps3443

Here’s a video from when I first unboxed it and put this beast in my desktop for initial testing before modification, haha. I’ve never seen anyone take one of these eGPUs apart before! So I thought it was cool!


----------



## J7SC

tps3443 said:


> This GPU is different from the retail Aorus waterforce extreme.
> 
> This eGPU is a reference board NON-A. So the shunt modification was mandatory as the highest bios power limit is only 310 watts.
> 
> Aorus XTREME waterforce bios simply will not work that GPU is a TU102A.
> 
> Although, this eGPU does use the exact same full coverage water OEM block, And radiator, pump etc. etc.


 
I see re. non-A for the eGPU. FYI only, I was referring to the 'A' full water-block version bios, not the AIO one, but that's obviously academic now. Either way, nice mod


----------



## tps3443

J7SC said:


> I see re. non-A for the eGPU. FYI only, I was referring to the 'A' full water-block version bios, not the AIO one, but that's obviously academic now. Either way, nice mod


These AIO blocks are really good, easily just as good as an EK block or something similar. They are full coverage! And the GPU and memory coverage is all solid copper. The only way cooling could get better would be with larger radiators, a better pump, and maybe a lot more fluid capacity.

This is nothing like those NZXT G12 setups.

If I ran this thing with 60-70% fans, it ran like 46C in RDR2 while pulling a steady 415+ watts.

I’d say the cooling capability is just as good as a Kingpin’s.


----------



## JackCY

tps3443 said:


> 18,100Mhz effective. Or 8,050Mhz. 100% in game stable.
> 
> I could however benchmark at 8,200 but it would artifact a little.


Everybody knows that to double a number you add a 1 in front of it 

It should be 18,050.


----------



## geriatricpollywog

JackCY said:


> Everybody knows that to double a number you add a 1 in front of it
> 
> It should be 18,050.


I am so confused right now.


----------



## tps3443

I ran the memory at +1,050MHz for gaming.


I just looked back at my post and saw I put 18,100, lol. I was listing incorrect information and getting this card mixed up with my old 2080 Super memory overclocks.

I DID NOT RUN THE MEMORY AT 18GHz lol. That would've been nice, but IT DID NOT HAPPEN LOL.

16,100 effective memory on this GPU lol.


----------



## geriatricpollywog

tps3443 said:


> I ran the memory at +1,050Mhz for gaming.
> 
> 
> I just looked back at my post and saw I put 18,100 lol. I am listing incorrect information. And I am getting this card mixed up with my old 2080 Super memory overclocks.
> 
> I DID NOT RUN THE MEMORY AT 18Ghz lol. That wouldve been nice but IT DID NOT HAPPEN LOL.
> 
> 16,100 effective memory on this GPU lol.


lol thank you. My KPE was breathing heavily at 18.00ghz


----------



## J7SC

0451 said:


> I am so confused right now.


 
...Hehe, I can identify with that. When I first got my Aorus WB cards in December '18, I thought I had lucked out on the GPU cores but that something was seriously wrong with the VRAM... the spec said it should be 14,140 MHz


----------



## tps3443

I’ve sold both of my 2080 Tis. I had an Aorus Gaming Box 2080 Ti, and an MSI 2080 Ti sealed in the box that I’m selling too.

I seriously want a Galax HOF 2080 Ti WC Edition. They only made like 100 of these cards. They come as a bare PCB in the retail packaging, with a 2,000-watt BIOS and manual voltage control up to 1.115 volts.

I may take a 2080 Ti Kingpin instead, I am not sure which one yet. I know it is a little late in the lifespan of Turing.

But in all honesty, a 2.1 or 2.2GHz 2080 Ti is simply a monster, and I am sure it is comparable to a stock 3080 Ti.

It’s gonna be tough trying to buy a 3080 Ti at launch: price gouging, no stock available.

So I’m thinking November would probably be the best time to buy an Ampere 3080 Ti, or in a year. The 2080 Ti is capable enough.

Most of the guys in this forum are running 2.1+ or 2.2+ GHz on a 2080 Ti, which is about 20-25% faster than a reference stock 2080 Ti. So that should be very close to or match a stock 3080 Ti.


----------



## geriatricpollywog

tps3443 said:


> I’ve sold both of my 2080Ti’s. I had a Aorus Game box 2080Ti, and a MSI 2080Ti sealed in the box that I’m
> selling too.
> 
> I seriously want a Galax HOF 2080Ti WC Edition. They only made like 100 of these cards. They come bare PCB in the retail packaging. 2,000 watt bios and manual voltage control up to 1.115 Volts.
> 
> I may take a 2080Ti king pin instead, I am not sure which one yet.. I know it is a little late in the life span of Turing.
> 
> But, in all honesty a 2.1 or 2.2Ghz 2080Ti is simply a monster and I am sure it is comparable to a stock 3080Ti.
> 
> It’s gonna be tough trying to buy a 3080Ti at launch. Price gouging, no stock available.
> 
> So, I’m thinking November would probably be the best time to buy a Ampere 3080Ti or in a year. 2080Ti is capable enough.
> 
> Most of the guys in this forum are running 2.1+ or 2.2+ GHz on a 2080Ti which is about 20-25% faster than a reference stock 2080Ti. So that should be very close or match a stock 3080Ti.


The KPE has completely unlimited voltage and power with the XOC bios and classified tool. I don’t know why anybody would consider a HOF when the KPE exists and is not some limited edition.


----------



## Avacado

Well said. I only OC my Ti's when folding; honestly, the damn card is such a monster, I've never had to OC her to get good performance. I am only @120 though.


----------



## J7SC

As stated before, 2080 Tis won't all of a sudden become obsolete overnight. It is also worth recalling those 'test escape' early 2080 Tis, with various cards bricking because of said 'test escapes'. These are new processes for Nvidia (7nm), and I am still reading that both TSMC and Samsung will produce some of the 3k-series Ampere, so waiting a bit is not an issue but perhaps prudent. 

Finally, if serious reviewers show Ampere to be a decent upgrade, I will look for custom-PCB factory water-blocked cards again... those would probably be on the market by early 2021, by which time my current 2x 2080 Ti setup will be just over three years old. But you never know: by the time Ampere hits the mainstream, more info about next-gen Hopper will come out


----------



## tps3443

0451 said:


> The KPE has completely unlimited voltage and power with the XOC bios and classified tool. I don’t know why anybody would consider a HOF when the KPE exists and is not some limited edition.


Because of pricing. People are asking insane amounts for used Kingpins, and I could purchase a Galax HOF WC that already includes the Bykski full-coverage block and an XOC BIOS for around $1,350, all nice and neat in its original retail packaging. The card is a monster.

I would love to have a Kingpin; it already has the 240mm radiator attached, which is good for 2,130MHz, maybe more.

But I have a strict budget of $1,500 for a used, good-condition 2080 Ti Kingpin.

At least the Galax has a high-power BIOS and voltage control right out of the box, and a good block too.

I personally like the Galax cards. I am easily open to a Kingpin for $1,500 though.


----------



## TK421

tps3443 said:


> Because of pricing. People are asking insane amounts for used Kingpins. And I could purchase a Galax HOF WC that already includes the Byski full coverage block and a XOC bios for around $1,350 all nice and neat in its original retail packaging. The card is a monster.
> 
> I would love to have a kingpin, it already has the 240mm radiator attached which is good for 2,130Mhz maybe more?
> 
> But, I have a strict budget of $1,500 bucks for a used good condition 2080Ti Kingpin.
> 
> At least the Galax has a high power bios, and voltage control right out of the box, and a good block too.
> 
> I personally like the Galax cards. I am easily open to a kingpin for $1,500 though.



Galax HOF for $1,350 where?

Has anyone experienced RTX VRAM dropping by 200MHz lately? I used the Death Stranding drivers and I'm having this issue randomly.


----------



## kx11

tps3443 said:


> Because of pricing. People are asking insane amounts for used Kingpins. And I could purchase a Galax HOF WC that already includes the Byski full coverage block and a XOC bios for around $1,350 all nice and neat in its original retail packaging. The card is a monster.
> 
> I would love to have a kingpin, it already has the 240mm radiator attached which is good for 2,130Mhz maybe more?
> 
> But, I have a strict budget of $1,500 bucks for a used good condition 2080Ti Kingpin.
> 
> At least the Galax has a high power bios, and voltage control right out of the box, and a good block too.
> 
> I personally like the Galax cards. I am easily open to a kingpin for $1,500 though.





If after-sales service is important to you, go with KiNGPiN; GALAX support is disastrous. Maybe KFA2 (the EU branch) will help you, but don't be optimistic


----------



## Baasha

Might have asked this before but hope someone can help.

I have 2x RoG Strix 2080 Ti in the Z490 rig and they don't OC very well. I am limited to +100 on the core but Mem is +1300 stable in all scenarios.

Is there a more powerful BIOS I can flash on to this card to get better OC'ing results? If so, which one? Also, if I use just one GPU, the hottest it gets is ~ 62C under load so I have plenty of room as far as heat is concerned (with SLI, the top card is pegged at 88C).


----------



## geriatricpollywog

kx11 said:


> tps3443 said:
> 
> 
> 
> Because of pricing. People are asking insane amounts for used Kingpins. And I could purchase a Galax HOF WC that already includes the Byski full coverage block and a XOC bios for around $1,350 all nice and neat in its original retail packaging. The card is a monster.
> 
> I would love to have a kingpin, it already has the 240mm radiator attached which is good for 2,130Mhz maybe more?
> 
> But, I have a strict budget of $1,500 bucks for a used good condition 2080Ti Kingpin.
> 
> At least the Galax has a high power bios, and voltage control right out of the box, and a good block too.
> 
> I personally like the Galax cards. I am easily open to a kingpin for $1,500 though.
> 
> 
> 
> 
> 
> 
> if after sale services are important to you go with KiNGPiN, GALAX support is disastrous, maybe KFA2 (the EU branch) will help you but don't be optimistic
Click to expand...

I can call EVGA at 9pm and talk to a US-based tech support person immediately with no wait. "Oh, your card isn't working? Just send over your card and power supply and we'll replace both."

Plus, the HOF card looks so chintzy with its white shroud and gold crown. The KPE looks like a stealth fighter jet.


----------



## Nizzen

tps3443 said:


> Because of pricing. People are asking insane amounts for used Kingpins. And I could purchase a Galax HOF WC that already includes the Byski full coverage block and a XOC bios for around $1,350 all nice and neat in its original retail packaging. The card is a monster.
> 
> I would love to have a kingpin, it already has the 240mm radiator attached which is good for 2,130Mhz maybe more?
> 
> But, I have a strict budget of $1,500 bucks for a used good condition 2080Ti Kingpin.
> 
> At least the Galax has a high power bios, and voltage control right out of the box, and a good block too.
> 
> I personally like the Galax cards. I am easily open to a kingpin for $1,500 though.


"Kingpin"/Classified GPUs became boring after the 780 Ti for non-LN2 users.

The 780 Ti Classified with EVBot was epic fun on water


----------



## geriatricpollywog

Nizzen said:


> tps3443 said:
> 
> 
> 
> Because of pricing. People are asking insane amounts for used Kingpins. And I could purchase a Galax HOF WC that already includes the Byski full coverage block and a XOC bios for around $1,350 all nice and neat in its original retail packaging. The card is a monster.
> 
> I would love to have a kingpin, it already has the 240mm radiator attached which is good for 2,130Mhz maybe more?
> 
> But, I have a strict budget of $1,500 bucks for a used good condition 2080Ti Kingpin.
> 
> At least the Galax has a high power bios, and voltage control right out of the box, and a good block too.
> 
> I personally like the Galax cards. I am easily open to a kingpin for $1,500 though.
> 
> 
> 
> "Kingpin"/Classified GPU's became boring after 780ti for non LN2 users.
> 
> 780ti classified with evbot was epic fun on water
Click to expand...

The new ones still have an EVBot connector. Have you tried overclocking the Turing version?


----------



## tps3443

Baasha said:


> Might have asked this before but hope someone can help.
> 
> I have 2x RoG Strix 2080 Ti in the Z490 rig and they don't OC very well. I am limited to +100 on the core but Mem is +1300 stable in all scenarios.
> 
> Is there a more powerful BIOS I can flash on to this card to get better OC'ing results? If so, which one? Also, if I use just one GPU, the hottest it gets is ~ 62C under load so I have plenty of room as far as heat is concerned (with SLI, the top card is pegged at 88C).


You do not have plenty of room as far as heat is concerned; 62C is too hot. You’ll never get better than 2,055 to 2,070MHz with that.

You need to get down under 56C to maintain 2.1GHz.

And get under 50C to hold a steady 2,130MHz.

I ran +220 on my basic reference non-A 2080 Ti and +1,050 on the memory. It would lock in at 2,130 and pull a steady 410-415+ watts in RDR2. 

I honestly regret selling it. I feel silly. It performed as well as anything else, and it ran like 48-51C. 

I pulled 17,400 Time Spy graphics as a suicide run, but I could maintain a solid 17,250 give or take.

If I had those Strix cards, I would watercool them both. And yes, they offer plenty of BIOS options with higher power limits. But you may not even be hitting a power limit yet. Have you checked GPU-Z to see what they are pulling? What BIOS is currently on the cards?
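The temperature thresholds quoted above fit the usual picture of GPU Boost dropping the clock in small bins as the core warms up. Below is a rough model fitted to the numbers in this post (2,130MHz under 50C, 2.1GHz under 56C, ~2,070 at 62C); the bin size, start temperature, and step are all assumptions for illustration, not published NVIDIA values:

```python
import math

BIN_MHZ = 15     # assumed size of one boost bin
START_C = 44     # assumed temperature where bins start dropping
STEP_C = 3       # assumed degrees per dropped bin

def expected_clock(cold_clock_mhz, temp_c):
    """Estimate the sustained clock after thermal bin drops."""
    bins_lost = max(0, math.ceil((temp_c - START_C) / STEP_C))
    return cold_clock_mhz - bins_lost * BIN_MHZ

# Reproduces the figures in this post from a hypothetical 2,160 MHz cold clock:
for t in (50, 56, 62):
    print(t, expected_clock(2160, t))   # 2130, 2100, 2070
```

The exact constants vary by card and BIOS; the takeaway is just that every few degrees saved buys back a boost bin, which is why sub-50C water temps matter so much for holding 2.1GHz+.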


----------



## tps3443

TK421 said:


> galax hof for 1350 whre?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> have anyone experienced rtx vram dropping by 200mhz lately? used the death stranding drivers and having this issue randomly


Are you running a custom curve with the MSI AB OC Scanner? I noticed OC Scanner will drop 200MHz on the memory for some reason. 


Also, the Galax HOF WC is local on Craigslist. I sold my Aorus Gaming Box, so I may get it if he would reply to my inquiries lol.

If not, I may watercool this MSI 2080 Ti whenever it shows up. I have no GPU at all right now.


----------



## kwalker99

jura11 said:


> Hi there
> 
> I bought it from the Aqua Computer store. It took 3 weeks, as these blocks are not kept in stock as far as I know; not sure if availability is better right now.
> 
> I have tried a few blocks, like the EK Vector RTX 2080Ti, Phanteks Glacier RTX 2080Ti and Bykski RTX 2080Ti WB.
> 
> With my Zotac RTX 2080Ti and the EK Vector I have seen these temperatures on this loop: idle at 18-21°C and load at 38-40°C. The Palit RTX 2080Ti with the Bykski WB has been in the low 40s during gaming; the highest temperatures I have seen are 40-43°C, and that's at 23°C ambient. The Palit RTX 2080Ti with the Phanteks Glacier WB has been in the 39-42°C range, again at 21-23°C ambient.
> 
> I just haven't tried the Heatkiller IV RTX 2080Ti block, which I can't find in stock over here.
> 
> Hope this helps
> 
> Thanks, Jura


Not sure if you have posted further info; what were your temps and clocks with the new Aqua Computer block?


----------



## TK421

tps3443 said:


> Are you running a custom curve with the MSI AB OC Scanner? I noticed OC Scanner will drop 200 MHz on the memory for some reason.
> 
> 
> Also, the Galax HOF WC is local on Craigslist. I sold my Aorus Gaming Box, so I may get this if he would reply to my inquiries, lol.
> 
> If not, I may watercool this MSI 2080 Ti whenever it shows up. I have no GPU at all right now.



I'm not, just a simple +60 on the core and +1200 on memory right now.


----------



## kithylin

TK421 said:


> I'm not, just a simple +60 on the core and +1200 on memory right now.


+60 on the core doesn't mean anything. What actual clock speed in MHz are you running?


----------



## TK421

kithylin said:


> +60 on the core doesn't mean anything. What actual clock speed in MHz are you running?




The core isn't the problem; it's the memory losing 200 MHz from the offset.


----------



## Baasha

tps3443 said:


> You do not have plenty of room as far as heat is concerned; 62C is too hot. You'll never get better than 2,055-2,070 MHz with that.
> 
> You need to get down under 56C to maintain 2.1 GHz.
> 
> And get under 50C to hold a steady 2,130 MHz.
> 
> I ran +220 on my basic reference non-A 2080 Ti and +1,050 on the memory. It would lock in at 2,130 MHz and pull a steady 410-415+ watts in RDR2.
> 
> I honestly regret selling it. I feel silly. It performed as well as anything else, and it ran at about 48-51C.
> 
> I pulled a 17,400 Time Spy graphics score as a suicide run, but I could maintain a solid 17,250, give or take.
> 
> If I had those Strix cards I would watercool them both. And yes, they offer plenty of BIOS options with higher power limits. But you may not even be hitting a power limit yet. Have you checked GPU-Z to see what they are pulling? What BIOS is currently on the cards?


I see. I guess it's a bit late to water-cool these GPUs since I will be upgrading to 3080 Ti's once they are released (hopefully next month).

The weird thing is, in MSI Afterburner, if I use the 'unlock voltage' setting and move the slider all the way to the maximum allowed (+100mV), it doesn't do ANYTHING as far as OC'ing is concerned. Even at +120 on the core clock, it freezes/crashes during benchmarking.

Why would that be? I mean, shouldn't more voltage allow for more headroom when it comes to OC? I have the stock "P-Mode" BIOS on the cards.

I had 2x Founders Edition 2080 Ti's when they were released; those cards did +150 on the core but only +1000 on the mem. They OC'd better than these gigantic, not to mention more expensive, RoG Strix 2080 Ti's.

Bah.. I guess I should've left well enough alone with the 2x FE 2080 Ti's and called it a day.


----------



## tps3443

TK421 said:


> I'm not, just a simple +60 on the core and +1200 on memory right now.


What kind of 2080Ti do you use?


----------



## tps3443

Baasha said:


> I see. I guess it's a bit late to water-cool these GPUs since I will be upgrading to the 3080 Ti's once they are released (hopefully next month).
> 
> The weird thing is, in MSI Afterburner, if I do the 'unlock voltage' setting and move the slider all the way to the maximum allowed (+100mV), it doesn't do ANYTHING as far as OC'ing is concerned. Even if I do +120 on the Core Clock, it freezes/crashes during benchmarking.
> 
> Why would that be? I mean, shouldn't more voltage allow for more headroom when it comes to OC? I have the stock "P-Mode" BIOS on the cards.
> 
> I had 2x Founder's Edition 2080 Ti's when they were released on those cards did +150 on the core but only +1000 on the Mem - they OC'd better than these gigantic, not to mention more expensive, RoG Strix 2080 Ti's.
> 
> Bah.. I guess I should've left well enough alone with the 2x FE 2080 Ti's and called it a day.


That's not true. It is worth watercooling them. You can get 20-25% on top of a stock 2080 Ti, which would land it dead slam in stock 3080 Ti territory!

You could easily sell those cards and buy hybrid versions to make it easier, or just shop around for a good deal on (2) waterblocks and a cheap DDC pump/reservoir combo.

Your CPU AIO looks to be expandable too, so why not add another radiator and (2) Strix blocks and you're good to go!

On another note, you do not need more voltage. Most 2080 Ti's can run 2,130 MHz at 1.050V or 1.043V, so increasing voltage beyond this just draws more power, possibly hitting the cards' power limit or running even hotter.

You could probably undervolt these cards. That will lower temperatures, draw less power, and produce less heat, and maybe get a solid 2,055-2,070 MHz locked in.
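The reasoning behind undervolting is the usual dynamic-power relation, roughly P ∝ f·V². A toy sketch with illustrative numbers (the baseline point and the voltage/clock pairs here are assumptions for the example, not measurements from any specific card):

```python
def rel_power(f_mhz, v, f0_mhz=1950.0, v0=1.093):
    """Dynamic GPU power relative to an assumed baseline operating
    point, using the rough scaling P ~ f * V^2."""
    return (f_mhz / f0_mhz) * (v / v0) ** 2

# Stock max-voltage boost vs. two undervolt points
print(f"2130 MHz @ 1.050 V -> {rel_power(2130, 1.050):.2f}x baseline")
print(f"2070 MHz @ 0.950 V -> {rel_power(2070, 0.950):.2f}x baseline")
```

Because voltage enters squared, even a modest voltage cut outweighs a clock increase; that is why a locked 2,070 MHz undervolt can draw noticeably less power than stock boost behavior at max voltage.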

There are custom Strix BIOSes available too, with 1,000-watt power limits I think. So running a waterblock with temps below 50C would net some incredible performance out of these cards! 

Either way, if you're upgrading, I understand leaving them be. But you could easily keep these cards for another 2 years and save your cash, especially with performance similar to a 3080 Ti.

But I am speculating again; we have no idea how fast a 3080 Ti may actually be. If the leaked benchmarks are true, though, I will happily run a 2.1-2.2 GHz 2080 Ti for a very long time!


----------



## Nizzen

tps3443 said:


> That’s not true. It is worth watercooling them. You can get 20-25% on top of a stock 2080Ti. Which would land it dead slam in stock 3080Ti territory!
> 
> You could easily sell those cards, and buy hybrid versions to make it easier. Or just shop around for a good deal on (2) waterblocks and a cheap DDC pump reservoir combo.
> 
> Your CPU AIO looks to be expandable too, so why not add another radiator and (2) Strix blocks and your good to go!
> 
> On another note, you do not need more voltage. Most 2080Ti’s can run 2,130Mhz on 1.050V or 1.043V. So, increasing voltage beyond this is just drawing more power, possibly hitting a power limit of the cards or running even hotter.
> 
> You could probably undervolt these cards. This will lower temperatures, draw less power, produce less heat. And maybe get a solid 2,055-2,070Mhz locked in.
> 
> There are custom STRIX bios available too with 1,000 watt power limits I think. So, running a waterblock with temps below 50C would net some incredible performance out of these cards!
> 
> Either way, if you’re upgrading I do understand leaving them be. But you can keep these cards for another 2 years easily and save your cash! Especially with similar performance to a 3080Ti.
> 
> But, I am speculating again. We have no idea how fast a 3080Ti may actually be. But, if the leaked benchmarks are true. Then I will happily run a 2.1-2.2Ghz 2080Ti for a very long time!


My ROG White 2080 Ti runs ~2,000 MHz on stock air and the stock BIOS; 25% faster would be something like 2,500 MHz on the core.
I can run benchmarks at 2,150-2,170 MHz with the room a bit cold, on the stock BIOS on air.

There is no way water gets you 20-25% more.


----------



## JackCY

TK421 said:


> have anyone experienced rtx vram dropping by 200mhz lately? used the death stranding drivers and having this issue randomly


The later NV drivers are atrocious as they focus on adding new features and messing with scheduling, the frame output is far more unstable than older drivers. Never heard of some magical memory downclocking. Unless it took you years to notice that in compute mode NV cards don't run max VRAM clocks.

442.50/59 is the last stable driver. All after that so far perform worse one way or another. At least for RTX Turing. It will take some time for them to polish the driver again after adding probably major changes to it.


----------



## J7SC

JackCY said:


> The later NV drivers are atrocious as they focus on adding new features and messing with scheduling, the frame output is far more unstable than older drivers. Never heard of some magical memory downclocking. Unless it took you years to notice that in compute mode NV cards don't run max VRAM clocks.
> 
> 442.50/59 is the last stable driver. All after that so far perform worse one way or another. At least for RTX Turing. It will take some time for them to polish the driver again after adding probably major changes to it.



I'm not particularly fond of the latest drivers either; for my setup, 445.87 was best. Mind you, I always run NVLink / 2 cards, so it may differ a bit compared to singles. I do look forward to MS Flight Simulator 2020, and I'm kind of wondering if Windows 10 2004's new GPU scheduling feature will make a difference with that.


----------



## geriatricpollywog

tps3443 said:


> There is an MSI 2080 Ti Lightning Z on eBay. First time ever seeing one on eBay used. Made with real carbon fiber, it is an awesome GPU. $2,000 OBO!! Holy cow!
> 
> The cooling seems lame for such a beefy, awesome card though. Any non-A with a power mod and water cooling would smoke it!


Non-A you say? Might as well be counterfeit.


----------



## tps3443

0451 said:


> Non-A you say? Might as well be counterfeit.


Nothing wrong with non-A 2080 Ti's. It does suck that Nvidia even released them as a partner card, but the silicon seems to be just as good.


----------



## JackCY

tps3443 said:


> Nothing wrong with NON-A 2080Ti’s. It does suck how Nvidia even released them as a partner card. But, the silicon seems to be just as good.


It's the very same thing; the only difference is that the "A" version allows manufacturers to apply higher-than-reference clocks out of the box. Is there some binning? Maybe, but that could just as well be done on the manufacturer's side, considering they are happy to use one PCB from low-end to top-end cards and only change the cooler and a couple of extra FETs.


----------



## Laithan

tps3443 said:


> Nothing wrong with NON-A 2080Ti’s. It does suck how Nvidia even released them as a partner card. But, the silicon seems to be just as good.


I thought the power cannot go as high (??)


----------



## tps3443

Laithan said:


> I thought the power cannot go as high (??)


You have to solder mod the non-A 2080 Ti's for increased power draw by stacking 8 mOhm shunts on top of the stock ones, and flash a BIOS with a higher power limit. I'm no soldering expert, but I have already soldered (2) 2080 Ti's without any issues.

You've got to do this on a 2080 Ti FE "A" variant too. The highest BIOS, I think, is 380 watts, which is just not quite enough; two stacked 8 mOhm shunts would increase that to around 530 watts of power draw. 

The non-A cards with a BIOS flash and stacked 8 mOhm resistors can reach 434 watts, which is plenty.
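The arithmetic behind shunt stacking is just parallel resistance: the card senses power across its shunts, so lowering the effective shunt resistance makes it under-read real draw by the same ratio. A sketch, assuming 5 mOhm stock shunts and a 267 W BIOS limit (both are assumptions for the example; verify the values on your own PCB before modding anything):

```python
def parallel(r1, r2):
    """Effective resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

stock_mohm = 5.0    # assumed stock shunt value
stacked_mohm = 8.0  # resistor soldered on top of the stock shunt

effective = parallel(stock_mohm, stacked_mohm)  # ~3.08 mOhm
scale = stock_mohm / effective                  # ~1.625x under-read

# A BIOS power "limit" now permits proportionally more real draw:
bios_limit_w = 267  # assumed BIOS limit for a non-A card
print(f"{bios_limit_w} W BIOS limit -> ~{bios_limit_w * scale:.0f} W real draw")
```

With these assumed values the result lands on roughly the 434 W figure mentioned above; a different stock shunt or BIOS limit shifts it accordingly.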


----------



## sblantipodi

I know it's pretty late to care about this now, but can someone explain to me why the fans on my RTX 2080 Ti Founders Edition work so badly?

I have always used a custom fan curve with Afterburner, but if I don't start Afterburner, the fans spin at very low RPM until 80°C, when they start spinning very fast.
They cycle fast and slow, fast and slow. The fan curve seems completely broken.

Is this a "bug" of my card, or is it a known issue?
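For reference, a software curve like Afterburner's is just piecewise-linear interpolation between the (temperature, fan duty) points you drag on the graph. A minimal sketch (the curve points below are made-up examples, not stock FE behavior):

```python
def fan_duty(temp_c, curve):
    """Piecewise-linear fan curve, in the style of Afterburner's editor.
    `curve` is a list of (temperature C, duty %) points sorted by temperature."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if temp_c <= t1:
            # linear interpolation between the two surrounding points
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return curve[-1][1]

curve = [(30, 30), (60, 50), (80, 100)]  # example points only
print(fan_duty(45, curve))  # halfway between the 30% and 50% points
```

A healthy card applies something like this smoothly; erratic fast/slow cycling with no curve loaded suggests a fan-controller fault rather than a curve problem.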


----------



## kithylin

sblantipodi said:


> I know that is pretty late to care about this now but is there someone who can explain me why the fans on my RTX2080Ti founders edition works so badly?
> 
> I always used a custom fan curve with afterburner but if I don't start after burner the fans spins at very low rpm until 80°C when they start spinning very fast.
> They move fast and slow, fast and slow. The fan curve seems to be completely broken.
> 
> Is this a "bug" of my card or it's a known issue?


A friend of mine bought a Founders Edition on launch day direct from Nvidia, and he too had issues with the fan speed controls. He ended up RMA'ing it through Nvidia and was sent a working card back.


----------



## sblantipodi

kithylin said:


> sblantipodi said:
> 
> 
> 
> I know that is pretty late to care about this now but is there someone who can explain me why the fans on my RTX2080Ti founders edition works so badly?
> 
> I always used a custom fan curve with afterburner but if I don't start after burner the fans spins at very low rpm until 80°C when they start spinning very fast.
> They move fast and slow, fast and slow. The fan curve seems to be completely broken.
> 
> Is this a "bug" of my card or it's a known issue?
> 
> 
> 
> A friend of mine bought a founder's edition on launch day direct from Nvidia and he too had issues with fan speed controls. They ended up RMA'ing it through nvidia and they were sent a working card back.
Click to expand...

So can my card be considered buggy?


----------



## geriatricpollywog

sblantipodi said:


> kithylin said:
> 
> 
> 
> 
> 
> sblantipodi said:
> 
> 
> 
> I know that is pretty late to care about this now but is there someone who can explain me why the fans on my RTX2080Ti founders edition works so badly?
> 
> I always used a custom fan curve with afterburner but if I don't start after burner the fans spins at very low rpm until 80°C when they start spinning very fast.
> They move fast and slow, fast and slow. The fan curve seems to be completely broken.
> 
> Is this a "bug" of my card or it's a known issue?
> 
> 
> 
> A friend of mine bought a founder's edition on launch day direct from Nvidia and he too had issues with fan speed controls. They ended up RMA'ing it through nvidia and they were sent a working card back.
> 
> Click to expand...
> 
> So can my card be considered buggy?
Click to expand...

Look at it as an opportunity to watercool 

Edit: iPhone smilies turn into ascii art


----------



## kithylin

sblantipodi said:


> So is my card can be considered buggy?


It wasn't uncommon for the early-release revision of the 2080 Ti, right around the initial launch, to have a host of problems. Buggy fan controllers were one; the more common one was defective RAM that would die and cause "Space Invaders" artifacts. If you are still within warranty, I would suggest contacting Nvidia Support for a replacement if possible.


----------



## gtz

kithylin said:


> It wasn't uncommon for the early release revision of the 2080 Ti right around the initial launch to have a host of problems. Buggy fan controllers was one situation. The more common one was defective ram that would die and cause "Space Invaders" artifacts. If you are still able to within warranty I would probably suggest contacting Nvidia Support for a replacement if possible.


Back in July I bought 2 RTX 2080 Ti's for two high-end builds (one a 10900KF system and the other a 3900X system). The one in the 3900X system went back within 2 weeks because it was artifacting as soon as you turned the system on. So this is still a problem to this day.


----------



## Shawnb99

The number of 2080 Ti's that died on us is the main reason I won't jump on a 3080 Ti day one. Plus, it's not like Optimus will have a block for it anytime this century.


----------



## J7SC

The early so-called 'test escapes' are a major reason why 2080 Tis have among the highest RMA rates of any GPU. It wasn't so much the RAM itself (the same RAM ICs were used on non-Ti 2080s) as a RAM-controller issue during fab that, ahem, escaped testing.

As with my 2080 Ti purchases, I'm going to wait a bit and see about (water-cooled, custom-PCB) Ampere after it's in actual customers' hands, if at all. Or wait for Hopper.


----------



## trivium nate

What waterblock, pump/res, and fluid for my 11G-P4-2383-KB? I really want an AIO cooler for my GPU like I have for my CPU; I'm new to making custom loops.


----------



## kithylin

gtz said:


> Back in July I bought 2 RTX2080Ti's for two high end builds (one with a 10900KF system and the other with a 3900X system). The one in the 3900X system came back within 2 weeks because it was artifacting as soon as you turned on the system. This is still a problem to this day.


I'm just guessing here, of course, and I don't really know the situation. But I think it's possible some sellers still have original new-old-stock 2080 Ti's from back near launch day sitting on a shelf, new in box, and are selling them today. Some companies may be trying to push out old stock right now to make room for Ampere / 3000-series cards.


----------



## Laithan

Shawnb99 said:


> The amount of 2080TI's that died on us is the main reason I won't jump on a 3080TI day one. Plus not like Optimus will have a block for it anytime this century


I doubt they would actually let this happen again... it was a PR nightmare for them...


----------



## JackCY

It is inevitable that larger dies and more complex products have higher failure rates: 2x the die area means 2x the components on it and 2x the chance of a defect, simple as that.
The RTX dies aren't small; the 2080 Ti is 3x the die area of the 5700 XT with almost double the transistors, and over 50% larger than the 1080 Ti.

The interesting number would be how many defective dies they threw out because they could not even bin them as the lowest TU102 bin used in the 2080 Ti. Yes, lowest: the 2080 Ti is the most cut-down version of the chip available.
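The die-area argument can be made concrete with the standard Poisson yield model, Y = exp(-D·A): at a fixed defect density, the fraction of defect-free dice falls exponentially with area. The die sizes below are the real figures; the defect density is an assumed illustrative value, not TSMC data:

```python
import math

def poisson_yield(area_mm2, defects_per_cm2):
    """Fraction of dice with zero defects: Y = exp(-D * A)."""
    return math.exp(-defects_per_cm2 * area_mm2 / 100.0)

D = 0.2  # defects per cm^2 -- assumed, for illustration only
for name, area_mm2 in (("TU102 (2080 Ti)", 754),
                       ("GP102 (1080 Ti)", 471),
                       ("Navi 10 (5700 XT)", 251)):
    print(f"{name}: {area_mm2} mm^2 -> {poisson_yield(area_mm2, D):.0%} defect-free")
```

Salvage binning recovers many single-defect dice (the 2080 Ti is itself a cut-down TU102), so real sellable yield is better than the zero-defect figure; the exponential penalty on big dies is the point.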


----------



## geriatricpollywog

JackCY said:


> It is inevitable that larger die and more complex products have higher failure rate. 2x die area, 2x components on it, 2x chance to be defective, simple as that.
> The RTX dies aren't small, 2080Ti is 3x the die area than 5700 XT and almost double the transistors. And over 50% larger than 1080Ti anyway.
> 
> The interesting number would be how many defective ones did they throw out that they could not bin as the lowest bin TU102 used in 2080Ti. Yes lowest, the 2080Ti is the most cut down version of the chip available.


Exactly; larger dies have more thermal expansion. Add time and temperature, and eventually all 2080 Tis will fail.

My first Kingpin had artifacts on day 1 at stock settings. I've had my RMA unit for a month and it's been good so far. I still have 11 months left on the warranty. EVGA only gives like-for-like replacements, or upgrades, as RMA replacements. If this thing dies I will ask for a 3080 Ti if no Kingpins are available. Hopefully by then the 3080 Ti KPE is out.


----------



## TONSCHUH

J7SC said:


> TONSCHUH said:
> 
> 
> 
> I'm up and running again, but I still have to bleed the air out.(...) Now I only need the cash for *the Thermaltake Core P8 + Distro*, because I'm over all the dust and hassle, when I have to move / drain / maintain the system.(...)
> 
> 
> 
> 
> ...looking forward to some 'before and after' air water temp comps when you're done. BTW, I didn't even know that there was a new TT Core (P8) out...looks related to core P5, but with extra items on the side and top...
> 
> 
> 
> 
> kithylin said:
> 
> 
> 
> If you are using an AIO (it sounds like you are) they have to have air bubbles in them to function. They could potentially burst and leak if they didn't have a small air bubble in it. *All AIO's need to be positioned with the radiator above the pump*, CPU or GPU. It's physics.
> 
> Click to expand...
> 
> 
> ^^That's good advice. While most of my HD builds are custom loop, I have some AIOs, among them a TT 240mm AIO running on CPUs (same principle as a GPU AIO) since late 2012... it probably lasted this long because it was on 24/7/365. Still, the other day I turned it off to clean the rad and fans. When I turned it back on (with the rad in the same position above the setup), temps shot up... the pump is still working at the same rpm, but some bubbles must have come loose and lodged in the wrong place; time for some AIO surgery.
Click to expand...

I ended up ordering a TT Level 20 XT, but it might be October before all the parts arrive.

🙂


----------



## trivium nate

Just ordered an 11G-P4-2384-RX to replace my 11G-P4-2383-KB. Now I'll have an AIO for my CPU and one for the GPU.


----------



## TK421

trivium nate said:


> Just ordered an 11G-P4-2384-RX to replace my 11G-P4-2383-KB. Now I'll have an AIO for my CPU and one for the GPU.



That 1 year warranty is going to hurt though.


----------



## trivium nate

Why?


----------



## geriatricpollywog

trivium nate said:


> Why?


Because it's not eligible for the Step-Up program. You can't trade up to a 3XXX-series card within 3 months of purchase like you can with the 3-year-warranty cards.


----------



## edhutner

Another dying 2080 Ti: an EVGA XC Gaming, fully watercooled, moderately overclocked (360 W GB BIOS, mem +908 = 7707 MHz, auto OC at about 2,040 MHz in gaming). I bought it brand new in April this year. It was perfectly fine for three or four months, and temperatures were fine, 45-46 degrees while gaming on the hottest summer days. In the last few weeks it started black-screening (NV driver restart), and it is getting more frequent every day. Now at completely stock settings I cannot get it to play for more than 20-30 minutes; sometimes the driver crashes even sitting at the desktop.


----------



## J7SC

edhutner said:


> Another dying 2080 ti. EVGA xc gaming, full watercooled, moderately overclocked (360w gb bios, mem+908=7707mhz, auto oc about 2040mhz in gaming). Bought it brand new april this year. Was perfectly fine for three-four months, temperatures are fine 45-46 degress while gaming in hottest summer days. Last few weeks started to do blackscreens (nv driver restart). And it is getting more often every day. Now with absolute stock settings I cannot get it to play more than 20-30 minutes sometimes even on sitting on desktop the driver crashes.


 
...20-30 min is when a typical loop reaches full temperature. I don't know where you are located (re: summer months / heat), but if you have a portable AC, try placing it near your system just to drop temps a few degrees and see if it still shuts down. It is also worth checking whether the water-cooling block is correctly seated and nothing has come loose (i.e. some of the screws). All of these are of course 'long-distance' suggestions without knowing anything else about the system / loop, but I hope you solve the issue.


----------



## trivium nate

that's ok


----------



## edhutner

@J7SC
It's not the temperature, for sure. Today I got two black screens while idling. I tried 90% power and -100 MHz on the mem and -50 MHz on the core. It didn't help at all; it just dies. I guess there may be something wrong with the VRMs and not the chip, because as far as I know the dying cards usually artifact, but mine does not.
I arranged a replacement with amazon.de. The new GPU will arrive soon. I hope it will be better.


----------



## kithylin

edhutner said:


> @J7SC
> It's not the temperature for sure. Today I got two black screen while idling. I tried 90% power and -100mhz on mem and -50mhz on core. It didnt help at all. It just dies. I guess that there may be something wrong on vrms and not ot the chip, because as far as I know the dying cards usually artifacts, but my does not.
> I arranged replacement with amazon.de. The new gpu will arrive soon. Hope that it will be better.


I'm not trying to say "you did it wrong" and I'm not saying you are incompetent; just getting that disclaimer in there first. But perhaps you positioned the thermal pads and/or block wrong and it didn't make proper contact with the card? I'm just guessing. I've put water blocks on cards quite a few times and I've still screwed up a couple of times; it happens to the best of us. Have you watched the GPU core temp? Is it getting hot? Can you watch VRM temps in GPU-Z or something else and see if the card's VRM is overheating?


----------



## edhutner

@kithylin
On my card I don't have a temperature reading for the VRMs. The GPU, of course, I monitor: max 45 degrees during gaming (ACC or Horizon).
Actually, this was my first GPU water block. I took extra time putting it on correctly with all the pads, etc. Last night I removed the water block to prepare the card for return, and visually everything seems just fine; the pads had been making good contact.

Now when the replacement arrives I will watercool it too. I have a new set of pads. I am thinking of sticking a thermistor sensor somewhere between the block and the card, but where exactly should I put it to catch the hottest VRM temperature?


----------



## MikeSanders

Is the Palit GamingPro non-OC still an A-chip?


----------



## gtz

please delete

posted on wrong thread


----------



## kithylin

edhutner said:


> @kithylin
> On my card I dont have temperature on vrms. On gpu of course I monitor it - max 45degrees during gaming (acc or horizon).
> Actually this was my first gpu water block. I took extra time putting it correct with all the pads and etc... Last night I removed the water block to prepare the card for return, and visually everything seems just fine, pads have been making a good contact.
> 
> Now when replacement arrives, I will watercool it too. I have new set of pads. I am thinking to stick a thermistor sensor somewhere in between the block and the card, but where exactly to put, in order to get the hottest vrm temperature?


You said in a post above that you have the EVGA XC Gaming 2080 Ti, so I looked up the PCB for you, cross-referenced a YouTube video by Buildzoid, and highlighted the VRMs for you. If you want to slip in a thermal probe, you could put it under the pad between the chips here.


----------



## FarmerJo

It's been a while since I've posted, and I've gone back a ways and can't find an answer to my question.

Has anyone been able to get the XOC HOF BIOS onto a newer 2080 Ti without getting the USBX error?
Thanks!


----------



## 2 A

I have a PNY XLR8 Gaming OC with the Galax 380W BIOS flashed, and I'm getting pretty decent numbers on it, but now I have a custom water loop and want to flash a higher wattage. Does the Strix 1000W BIOS work on this card, or is it strictly for Strix cards? The 600W BIOS is gone and cannot be downloaded here for some reason. If anyone has a higher-wattage BIOS suitable for this card, it would be greatly appreciated.


----------



## xkm1948

Anyone tried Alphacool's brand new Eiswolf 2 AIO for their 2080Ti?


----------



## JackCY

2 A said:


> I have a PNY XLR8 Gaming OC with the Galax 380W bios flashed and Im getting pretty decent numbers on it, but Now i have a custom water loop and want to flash a higher wattage. Does the strix 1000w bios work on this card or is it strictly for strix cards? The 600w bios is gone and cannot be downloaded here for some reason. If anyone have a higher wattage bios fitting for this card would be greatly appreciated.


Shunt mod it to 3 mOhm instead.


----------



## trivium nate

I was looking at this one for my card but didn't know if it all came together (like WB, pump, rad, tubing) or what; plus I didn't really want to take my card apart.

https://www.aquatuning.us/water-coo...2080/2080ti-2070/2080-super-black-m02?c=23047




xkm1948 said:


> Anyone tried Alphacool's brand new Eiswolf 2 AIO for their 2080Ti?
> 
> https://www.youtube.com/watch?v=IPnNVqoOjU4


----------



## 2 A

I'm a little hesitant to do that, even though I have access to excellent soldering equipment at work. You basically desolder 2 resistors and change them? Can you still limit power draw via software?

Do you know what happened to the 600W BIOS? It says it should work with my card. Can I find it elsewhere?


----------



## Medizinmann

xkm1948 said:


> Anyone tried Alphacool's brand new Eiswolf 2 AIO for their 2080Ti?
> 
> https://www.youtube.com/watch?v=IPnNVqoOjU4


Looks interesting - seems to be some improvement over the “old” GPX Pro – as more water channels cool a wider area of the block – looks more like a real full water block.

That said - I am pretty happy with the “old” Eiswolf GPX Pro on my 2080 TI.

Best regards,
Medizinmann


----------



## Medizinmann

trivium nate said:


> I was looking at this one for my card but didn't know if it all came together (like WB, pump, Rad, Tubing) or what plus I didn't really want to take my card apart
> 
> https://www.aquatuning.us/water-coo...2080/2080ti-2070/2080-super-black-m02?c=23047


It's an AIO – it comes prefilled, with the block with integrated pump, backplate, 240mm radiator, 2x 120mm fans, screws, cooling pads, TIM, etc.

You only need a screwdriver.

I wouldn't say it is easy, but it is doable; you need some time (1-2h).

I did it and used LM as the TIM on the GPU – with a lot of conformal coating around it – and I am pretty happy with the results.

The big plus is that you can integrate this with other parts from the series – I also installed an Eisbaer 360 CPU cooler and added an extra-thick 120mm rad in a spare space in my case.
I also added Noctua fans in push & pull on all rads to increase cooling performance…:thumb: 

Best regards,
Medizinmann


----------



## xkm1948

trivium nate said:


> I was looking at this one for my card but didn't know if it all came together (like WB, pump, Rad, Tubing) or what plus I didn't really want to take my card apart
> 
> https://www.aquatuning.us/water-coo...2080/2080ti-2070/2080-super-black-m02?c=23047



The Eiswolf 2, the one I posted, is the updated version of the Eiswolf GPX Pro.


----------



## arrow0309

Does anyone know anything about the non-"Z" Lightnings?
Any binning at all? Do they overclock the same, or is it just lottery / bad overclockers?
Samsung ICs?
Any reviews?
Because I can't find much about them, except that they have a 1,575 MHz boost clock and the same power limit and wattage as the beefier Lightning Z.


----------



## J7SC

FYI, after 'many days' of trying to download the required 95GB patch for the extended MS Flight Simulator 2020, I am having a grand old time at 4K with two w-cooled 2080 Tis...one of those GPUs probably would have been enough, but hey, 8704 Cudas with NVLink is even better...


----------



## Asmodian

J7SC said:


> MS Flight Simulator 2020, I am having a grand old time at 4K with two w-cooled 2080 Tis.


How is the scaling? I have assumed I would not go dual GPU again any time soon but does this actually support dual GPU properly?


----------



## J7SC

Asmodian said:


> How is the scaling? I have assumed I would not go dual GPU again any time soon, but does this actually support dual GPU properly?


 
...even a single GPU (2080 Ti @ 4K) wasn't a limiting issue for a sim, but MS FS 2020 has a setting that 'limits' it to 60 FPS overall, and my system seemed to hit that several times with dual 2080 Tis / NVLink on a 4K 40-inch / 72 Hz monitor (stock, no OC on the GPUs) at 4K 'ultra everything'. That doesn't directly answer what kind of scaling NVLink yields, but one 2080 Ti was fast and dual NVLink was faster, with 'wow' graphics. The few hiccups I encountered seem to have more to do with the MS server-supplied fine-grained geo detail on low-altitude fly-bys.

But to be clear, after 2.5 days of trying to download that 95GB patch on a fibre connection (for the Premium Deluxe version, with MS Azure servers overloaded), I haven't had more than a few hours of play on it yet. My real hope is that it also does 'CFR' (tile-based NVLink) instead of just the traditional AFR; I will find out in more detail on the weekend. Ditto for the new 'GPU scheduling' options.


----------



## aj_hix36

I managed to sell my EVGA 2080 Ti FTW3 Ultra for $1200 cash on Facebook Marketplace. I'm pretty stoked at the used prices in the US right now. Still has over a year on the warranty too. EVGA is the best.


----------



## Shawnb99

aj_hix36 said:


> I managed to sell my EVGA 2080 Ti FTW3 Ultra for $1200 cash on Facebook Marketplace. I'm pretty stoked at the used prices in the US right now. Still has over a year on the warranty too. EVGA is the best.




Nice price!


----------



## Nizzen

J7SC said:


> FYI, after 'many days' of trying to download the required 95GB patch for the extended MS Flight Simulator 2020, I am having a grand old time at 4K with two w-cooled 2080 Tis...one of those GPUs probably would have been enough, but hey, 8704 Cudas with NVLink is even better...


Do you have a better screenshot? 

Looks like I have to try this game if it supports SLI (nvlink) with 2x2080ti


----------



## geriatricpollywog

J7SC said:


> Asmodian said:
> 
> 
> 
> How is the scaling? I have assumed I would not go dual GPU again any time soon but does this actually support dual GPU properly?
> 
> 
> 
> 
> ...even single GPU (2080 Ti @ 4K) wasn't a limiting issue for a sim, but MS FS 2020 has a setting that 'limits' it to 60 FPS overall - and my system seemed to hit that several times with dual 2080 Tis / NVLink / 4K 40 inch / 72 Hz monitor (stock / no OC on the GPUs) on 4K 'ultra everything'. That doesn't directly answer what kind of scaling NVLink yields, though: one 2080 Ti was fast, but dual NVLink was faster...on 'wow graphics'. The few hiccups I encountered seem to have more to do with the MS server-supplied fine-grain geo detail during low-altitude fly-bys.
> 
> But to be clear, after 2.5 days of trying to download that 95GB patch on a fibre connection (for premium deluxe version, with MS Azure servers overloading :sozo: ) , I haven't had more than a few hours of play on it yet. My real hope is that it also does 'CFR' (tile-based NVLink) instead of just the traditional AFR, and will find out in more detail on the weekend. Ditto for the new 'GPU scheduling' options.
Click to expand...

I just ordered another K|NGP|N from EVGA’s website. It should be here tomorrow. Just realized I don’t have an NVLink bridge. Which one do you recommend?


----------



## J7SC

Nizzen said:


> Do you have a better screenshot?
> 
> Looks like I have to try this game if it supports SLI (nvlink) with 2x2080ti


 
...will try later on the weekend (still working before playing :-( ) . Yeah, those pics were just a quick proof-of-concept re. NVLink and were taken by a cell phone camera (one handed, the other was handling controls)...




0451 said:


> I just ordered another K|NGP|N from EVGA’s website. It should be here tomorrow. Just realized I don’t have an NVLink bridge. Which one do you recommend?


 
...just comes down to what is available re. NVLink bridges, and what kind of RGB, infinity mirrors etc you like to match your system. The key is to figure out your mobo 16x slot spacing as the bridges come in '3' and '4' slot versions; I happen to have both...ROG Asus, the only one @ the store back when I built the system...

EDIT: I want to add one thing though...
I really don't know yet about scaling advantages, or what NVLink format (AFR, AFR2, CFR) and exact settings to use. I am suggesting that dual-card owners can experience NVLink in this game, but I'm not sure it's worth buying a 2nd card for. I hope to have much more time on the weekend to really sink my teeth into MS FS 2020. The one thing that is a bit annoying about this sim is the incredibly long load time at start-up (presumably because of server-side graphics to be pulled down).


----------



## omarrana

Hi, I am new at overclocking.
I just bought the cheapest 2080 Ti card (Zotac RTX 2080 Ti Twin Fan) and converted it to water-cooling.

My power limit is 112%. Can I flash:
Palit RTX 2080 Ti Dual (Non-A) Reference PCB (2x8-Pin) 250W x 124% Power Target BIOS (310W) Official (Source)
└ Compatible with all TU102-300 (1E04) Non-A cards ┘

Since I have a non-A card, will this work or will it brick my card?
Please advise, thank you.


----------



## J7SC

...finally had some nice, uninterrupted 'flight time' with MS FlightSim 2020  ...the terrain detail seems unmatched by other flight sims, and if you live in a major metro area or near a well-known attraction, the accuracy is amazing - not perfect, but nevertheless still amazing.

...per earlier screenshots and also the attachments below, I tried both NVLink / SLI and a single card. If you already have a second card, then it's nice to try to optimize for it (still working on that a bit, i.e. AFR, AFR2, CFR, other), but overall I would say a 2nd 2080 Ti card is not necessary...the limiting factor becomes the MS server-supplied graphic detail, especially if you're flying real low (like I like to do...). Still, a nice 2080 Ti should be able to hit between 45 and 55 FPS (4K, ultra, high detail density and traffic). That said, it will use A LOT of VRAM (see graph below)...11GB is enough, but not much less than that.



Great sim though......and yes, I can see my kitchen window in one of the pics 















...enjoy some pics of Vancouver and surrounds, Gros Morne (Newfoundland) and a couple of Paris and Lisbon


----------



## trivium nate

Got my EVGA 11G-P4-2384-RX to replace my EVGA 11G-P4-2383-KB and I want to make sure I set it up right. The little fan wire that comes out of the GPU goes into the fan header, right? And the fan goes in front of the rad as intake? I have another fan behind that rad as well.

just making sure thanks


----------



## kx11

arrow0309 said:


> Anyone know anything about the Lightnings, non-"Z"?
> Any binning at all? Do they overclock the same, is it just the lottery, or are they bad overclockers?
> Samsung ICs?
> Any reviews?
> Cause I can't find much about them except they have a 1575 clock and the same PL & wattage as the beefier Lightning Z.



Nothing special about them, and they're way overpriced. They have a bug where the core gets locked to 1350MHz when the fans are removed. The 10th Anniversary Edition is what you should get, since those are binned and hand-picked.


----------



## Medizinmann

omarrana said:


> Hi, I am new at overclocking.
> I just bought the cheapest 2080 Ti card (Zotac RTX 2080 Ti Twin Fan) and converted it to water-cooling.
> 
> My power limit is 112%. Can I flash:
> Palit RTX 2080 Ti Dual (Non-A) Reference PCB (2x8-Pin) 250W x 124% Power Target BIOS (310W) Official (Source)
> └ Compatible with all TU102-300 (1E04) Non-A cards ┘
> 
> Since I have a non-A card, will this work or will it brick my card?
> Please advise, thank you.


As your GPU has a reference PCB, it should work without any problems.


BTW: If you brick it you can always try to reflash your old (hopefully backed-up) BIOS or the respective "original" BIOS for your GPU - you need another GPU or iGPU for this method, of course.

Best regards,
Medizinmann
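For anyone following along, the usual cross-flash routine with NVIDIA's nvflash tool looks roughly like this. This is a sketch from memory, not step-by-step instructions: the executable name, the need for `--protectoff`, and the BIOS filename (`palit_310w.rom` here is a placeholder) vary by nvflash build and card, and flashing is always at your own risk.

```shell
# 1. Back up the current BIOS first - this is the file you reflash if anything goes wrong
nvflash64 --save original_backup.rom

# 2. Disable the EEPROM write protection (some builds require this before writing)
nvflash64 --protectoff

# 3. Flash the new BIOS; -6 overrides the PCI subsystem ID mismatch check,
#    which is expected when flashing another vendor's BIOS onto your card
nvflash64 -6 palit_310w.rom

# 4. Reboot, then verify clocks and power limit in GPU-Z or similar
```

If the card ends up black-screening, boot from the iGPU or a second GPU and reflash `original_backup.rom` the same way.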


----------



## J7SC

Further to my earlier post about MS Flight Simulator 2020, I did get NVlink-CFR working - yippee ! Usage for each of the two 2080 Tis went from the 50%-60% range with regular NVlink-SLI (AFR, AFR2) in this sim to well above 90% for each, and unlike regular NVlink-SLI-AFR, CFR doesn't do micro-stuttering 

I got CFR going on an earlier driver, and there are some minor sign-in, sign out issues with MS FS 2020...but the smooth performance boost is nice already, especially with '4K / ultra /dense' settings. I'll update to the latest driver and retest (gotta play some more now, though ) 

Still, I don't think it is worth buying a 2nd 2080 Ti now if you haven't already got one just for this sim, and also with Ampere so close now


----------



## gtz

omarrana said:


> Hi, I am new at overclocking.
> I just bought the cheapest 2080 Ti card (Zotac RTX 2080 Ti Twin Fan) and converted it to water-cooling.
> 
> My power limit is 112%. Can I flash:
> Palit RTX 2080 Ti Dual (Non-A) Reference PCB (2x8-Pin) 250W x 124% Power Target BIOS (310W) Official (Source)
> └ Compatible with all TU102-300 (1E04) Non-A cards ┘
> 
> Since I have a non-A card, will this work or will it brick my card?
> Please advise, thank you.


I flashed a non-A Zotac with the Palit BIOS and it worked. You can always flash back to the original BIOS. The only thing that was wonky about the new BIOS: anytime I raised the power limit past 121% it would freeze the PC. But even with the limit at 120% as opposed to 113%, it gave me a nice boost.


----------



## Mylittlepwny2

I'm super curious to hear how you got it working!


----------



## Imprezzion

My Inno3D non-A did fine with the full power limit on that BIOS, for what it's worth. It'll still throttle in games, but less so. Everything over 1.043V is still a throttle festival over 2050MHz tho.


----------



## Nizzen

J7SC said:


> Further to my earlier post about MS Flight Simulator 2020, I did get NVlink-CFR working - yippee ! Usage for each of the two 2080 Tis went from the 50%-60% range with regular NVlink-SLI (AFR, AFR2) in this sim to well above 90% for each, and unlike regular NVlink-SLI-AFR, CFR doesn't do micro-stuttering
> 
> I got CFR going on an earlier driver, and there are some minor sign-in, sign out issues with MS FS 2020...but the smooth performance boost is nice already, especially with '4K / ultra /dense' settings. I'll update to the latest driver and retest (gotta play some more now, though )
> 
> Still, I don't think it is worth buying a 2nd 2080 Ti now if you haven't already got one just for this sim, and also with Ampere so close now


Please make a guide to get NVLink working in MS Flight Sim 2020


----------



## Imprezzion

Are there any tweaks I can do to get a higher memory speed stable on an A-chip reference PCB 2080 Ti?

I've been stable for months now on the EVGA FTW3 Ultra BIOS at a 1.093V curve OC at 2085MHz core (after temp drops; it starts at 2130MHz cold) and 7800 memory. I'd like my memory to go higher, and it is Samsung; however, anything over 7800MHz gives random DirectX errors and crashes. It doesn't artifact and usually runs for several hours even at 8100MHz, but it will eventually crash sooner or later above 7800MHz.


----------



## NBrock

Will the HOF XOC bios work on an FE card? Currently on a custom loop with some THICC radiators. Would also like to be able to test it out with a water chiller. If it will work is there anything I should note?


----------



## sultanofswing

NBrock said:


> Will the HOF XOC bios work on an FE card? Currently on a custom loop with some THICC radiators. Would also like to be able to test it out with a water chiller. If it will work is there anything I should note?


Yes it will work but just don't save profiles with MSI Afterburner or it will cause a BSOD.


----------



## Imprezzion

sultanofswing said:


> Yes it will work but just don't save profiles with MSI Afterburner or it will cause a BSOD.


Can't remember exactly, it's been a while since I dailied the HOF XOC BIOS, but my memory tells me saving the profile is fine, even with a 1.125V curve; it just can't be applied on boot. That causes the BSOD.
So, save it, but do not let MSI AB apply on boot; just manually load and apply the profile after every boot.

I ran that BIOS on a Kraken X62 with a G12 bracket and even then it barely hit 55°C at 1.125V, so you should be fine temp-wise.

Just try to keep it under 50°C so it doesn't drop too many clock bins from idle. Mine still dropped from 2175MHz to 2130MHz under load due to temps.
It would actually run 2175 perfectly stable in games, but I needed to set something like a 2220MHz curve at idle temps, and that wasn't stable enough to let the card warm up in games; it would generally crash with a DirectX error before clocks and temps stabilized. So effectively it only got me to 2130MHz, while on the EVGA FTW3 Ultra BIOS it runs 2115MHz idle / 2085MHz load - only a 45MHz gain for way more effort and risk for the VRM, as I'm not running full cover and the XOC BIOS would regularly see 400W+ power draw.


----------



## pewpewlazer

Imprezzion said:


> Can't remember exactly, it's been a while since I dailied the HOF XOC BIOS, but my memory tells me saving the profile is fine, even with a 1.125V curve; it just can't be applied on boot. That causes the BSOD.
> So, save it, but do not let MSI AB apply on boot; just manually load and apply the profile after every boot.
> 
> I ran that BIOS on a Kraken X62 with a G12 bracket and even then it barely hit 55°C at 1.125V, so you should be fine temp-wise.
> 
> Just try to keep it under 50°C so it doesn't drop too many clock bins from idle. Mine still dropped from 2175MHz to 2130MHz under load due to temps.
> It would actually run 2175 perfectly stable in games, but I needed to set something like a 2220MHz curve at idle temps, and that wasn't stable enough to let the card warm up in games; it would generally crash with a DirectX error before clocks and temps stabilized. So effectively it only got me to 2130MHz, while on the EVGA FTW3 Ultra BIOS it runs 2115MHz idle / 2085MHz load - only a 45MHz gain for way more effort and risk for the VRM, as I'm not running full cover and the XOC BIOS would regularly see 400W+ power draw.


Even manually loading a profile in Afterburner, my computer would instantly reboot when clicking "apply". If there's a way to fix that, someone please share the secret sauce.


----------



## Imprezzion

pewpewlazer said:


> Even manually loading a profile in Afterburner, my computer would instantly reboot when clicking "apply". If there's a way to fix that, someone please share the secret sauce.


Just tested it, you're right. It does reboot.. My memory failed me there lol. Then again, I must've tried 8 different BIOSes in total so hehe.. my bad.


----------



## sblantipodi

OK, if a 3070 is faster than a 2080 Ti and costs $500, does it make sense to sell a 2080 Ti?

How much can we get for a Ti now?

Prices seem pretty good on the 3070 and 3080, but 10GB of VRAM is too little imho.


----------



## Nizzen

sblantipodi said:


> OK, if a 3070 is faster than a 2080 Ti and costs $500, does it make sense to sell a 2080 Ti?
> 
> How much can we get for a Ti now?
> 
> Prices seem pretty good on the 3070 and 3080, but 10GB of VRAM is too little imho.


350$ for used 2080ti or so


----------



## Artah

Nizzen said:


> 350$ for used 2080ti or so


I just bought 2 overpriced 2080 Tis, ouchy, and they're not returnable. At least I'm guaranteed $700 for both of them, am I right?


----------



## J7SC

Nizzen said:


> 350$ for used 2080ti or so


 
I should buy a couple more then  ...especially since the hi-po Amperes (3090 KPE) have been announced / shown, but it may take a while before binning volume is reached for those to release


----------



## sblantipodi

What's the guess here, how much faster can be a 3080 over a 2080ti? And what about 3090?


----------



## keikei

sblantipodi said:


> What's the guess here, how much faster can be a 3080 over a 2080ti? And what about 3090?



Nvidia is *claiming* the 3080 is twice as fast as the 2080 (non-Ti), and the 3090 is 50% faster than the 2080 Ti.


----------



## kithylin

Artah said:


> I just bought 2 overpriced 2080 Tis, ouchy, and they're not returnable. At least I'm guaranteed $700 for both of them, am I right?


There's a used RTX 2080 Ti for $650 free shipping right now on ebay at the time of typing this. Possibly lower pretty soon as everyone is dumping em.


----------



## sblantipodi

keikei said:


> Nvidia is *claiming* the 3080 is twice as fast as the 2080 (non-Ti), and the 3090 is 50% faster than the 2080 Ti.


Are you sure the 3090 is only 50% faster than the 2080 Ti? Where did you see that?


----------



## sblantipodi

in this video you can see that a 3080 is 80% faster than a 2080, more or less.
if 2080Ti is 40% faster than a 2080, 3080 should be 40% faster than a 2080Ti.

am I wrong?


----------



## Drewminus

sblantipodi said:


> in this video you can see that a 3080 is 80% faster than a 2080, more or less.
> if 2080Ti is 40% faster than a 2080, 3080 should be 40% faster than a 2080Ti.
> 
> am I wrong?


Ehh not quite, if the 2080ti is 40% faster than the 2080, the 3080 at 80% faster than the 2080 will be ~29% faster than the 2080ti
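Drewminus's correction is just ratio arithmetic: speedups over the same baseline divide, they don't subtract. A quick sketch (the 40% and 80% figures are the thread's guesses from the video, not benchmarks):

```python
def relative_speedup(a_vs_base: float, b_vs_base: float) -> float:
    """Return how much faster A is than B, given each card's
    speedup factor over the same baseline card."""
    return a_vs_base / b_vs_base - 1.0

# Thread's numbers: 3080 = 1.8x a 2080, 2080 Ti = 1.4x a 2080
gain = relative_speedup(1.8, 1.4)
print(f"3080 vs 2080 Ti: +{gain:.0%}")  # ~+29%, not +40%
```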


----------



## sblantipodi

Drewminus said:


> Ehh not quite, if the 2080ti is 40% faster than the 2080, the 3080 at 80% faster than the 2080 will be ~29% faster than the 2080ti


When will the NDA be lifted so we can see the first reviews?


----------



## Imprezzion

I know myself.. I will probably upgrade as soon as we know more about BIOS and cross flashing and power limit impact on the 3xxx cards lol.

I mean, I play 1080p 144Hz and the only game so far that can't hit that is Division 2 on all max settings. But yeah, new toys. Even Warzone easily does 150-200FPS without ray tracing at 125% res scale, all maxed settings.


----------



## dantoddd

Looks like the 3090 is 50% faster than the Titan RTX, maybe 60% over the 2080 Ti.

Just curious, are you guys selling your cards and upgrading?

I'm thinking that i will keep my 2080Ti cause if you think about it, 2080Ti may be the most important Gaming Hardware since the N64. 5-6 years from now when RT and DLSS become the norm, looking back, 2080Ti was the card that started it. So i will keep it for historical value.


----------



## MrTOOSHORT

^^

I have a KPE, I'm handing it down to one of my sons. Plus it's a card you keep.


----------



## Bastiaan_NL

dantoddd said:


> Looks like the 3090 is 50% faster than the Titan RTX, maybe 60% over the 2080 Ti.
> 
> Just curious, are you guys selling your cards and upgrading?
> 
> I'm thinking that i will keep my 2080Ti cause if you think about it, 2080Ti may be the most important Gaming Hardware since the N64. 5-6 years from now when RT and DLSS become the norm, looking back, 2080Ti was the card that started it. So i will keep it for historical value.


I will keep mine and let it fold away. I'll have to wait anyway to see what the real tests/benchmarks will tell us. But I'd love to get my hands on a 3090!


----------



## keikei

dantoddd said:


> Looks like the 3090 is 50% faster than the Titan RTX, maybe 60% over the 2080 Ti.
> 
> Just curious, are you guys selling your cards and upgrading?
> 
> I'm thinking that i will keep my 2080Ti cause if you think about it, 2080Ti may be the most important Gaming Hardware since the N64. 5-6 years from now when RT and DLSS become the norm, looking back, 2080Ti was the card that started it. So i will keep it for historical value.



I'll be keeping mine as a backup. Its value has gone so low that resale doesn't make sense to me. Hey, it had a gud run. 2.5 yrs of dominance? Onto the next king, right!?


----------



## skline00

Got my 2080 Ti to team up with my 9900K over a year ago, when the 9900Ks were new.

Keeping both. I'm glad Nvidia is pushing the envelope with Ampere.

I'm interested in the performance of the Big Navi from AMD and where it stacks up against new Ampere cards, if at all.

If you paid top dollar for a 2080TI recently, I can understand your angst but it is still a VERY fast card.

Heck, my water-cooled 1080 Ti still runs well with my OC'd 5960X.

The title of our home forum "Overclock-The pursuit of performance" says it all.


----------



## J7SC

Even with plans to upgrade to 3090s (in part a business decision re. productivity) I'm keeping both 2080 Tis...it's not like they're any slower or faster than yesterday morning. Also, I got them working in NVl-SLI-CFR (checkerboard) for the game titles and sims I play... and consistently get 60+ fps / 4K / ultra in MS Flight Simulator on a 4K monitor maxed at 72 Hz

This performance allows me to wait and choose the right kind of upgrade cards (custom PCB, w-cooling) and a new 4K/120Hz monitor after the hype has died down and also AMD BigNavi and maybe Intel Xe have been released and compared. But even then, the 2080 Tis and the system they're housed in will stay - it was a special, fun build. Finally, as even the 2080 Tis were a business purchase, I wouldn't sell them anyway


----------



## keikei

J7SC said:


> Even with plans to upgrade to 3090s (in part a business decision re. productivity) I'm keeping both 2080 Tis...it's not like they're any slower or faster than yesterday morning. Also, I got them working in NVl-SLI-CFR (checkerboard) for the game titles and sims I play... and consistently get 60+ fps / 4K / ultra in MS Flight Simulator on a 4K monitor maxed at 72 Hz
> 
> This performance allows me to wait and choose the right kind of upgrade cards (custom PCB, w-cooling) and a new *4K/120Hz monitor* after the hype has died down and also AMD BigNavi and maybe Intel Xe have been released and compared. But even then, the 2080 Tis and the system they're housed in will stay - it was a special, fun build. Finally, as even the 2080 Tis were a business purchase, I wouldn't sell them anyway



You won't regret the purchase.


----------



## Shawnb99

MrTOOSHORT said:


> ^^
> 
> I have a KPE, I'm handing it down to one of my sons. Plus it's a card you keep.



Want to adopt me?


----------



## J7SC

keikei said:


> You won't regret the purchase.


The latest OLEDs look good...


----------



## Avacado

Same boat: with the release of the new cards, reselling the 2080 Tis is pointless. I'll let them fold; I might do some shifting, as I have one open PCIe slot left for a folding card, and adopt a 3080 for main use when the water blocks drop.


----------



## kithylin

sblantipodi said:


> in this video you can see that a 3080 is 80% faster than a 2080, more or less.
> if 2080Ti is 40% faster than a 2080, 3080 should be 40% faster than a 2080Ti.
> 
> am I wrong?


This is typical Nvidia "Marketing Fluff". There's no possible way that Nvidia could have a video card that is +80% faster than the previous model in just one generation due to physics. They're just claiming as such to bolster launch day sales so all their cards will sell out. At least 60% of the sales of each new GPU family happens in the first 90 days after launch. Anything they can do to get new cards to sell during launch period they will do, even if it's a complete lie. Don't believe what Nvidia said in the live stream event and don't expect anything to be anywhere near as fast as they claim either. 



sblantipodi said:


> When will the NDA be lifted so we can see the first reviews?


The new video cards will launch on the 17th of September. Most likely that's also when reviews will be allowed to go live.


----------



## Imprezzion

Even if the value tanked, I'd still rather take like €300-500 for mine (depending on whether the claims for the 3070 hold up) than just throw it in a box of hardware or into my secondary Black Desert AFK box and get nothing back.

I mean, hardware loses value anyway. I look at it like this: I bought mine a year ago for 900, modded it with a G12 + X62 (both of which I can keep, as I kept the stock cooler as well), and used it daily ever since. Was that worth a €500 loss? Sure it was. For me it was. Definitely had more than €500 worth of fun from it. And now there's going to be a new toy soon. Which is cheaper. Great, right?


----------



## jura11

More likely I will be keeping my Asus RTX 2080 Ti Strix hahaha, I won't be selling that card. I still have a GTX 1080 Ti and two GTX 1080s in my build, and I use them for rendering and other things.

I'll probably retire a GTX 1080 to another build and get an RTX 3080 or 3090 later on, but for now let's see what AMD and RTX 3xxx series performance will be.

Hope this helps 

Thanks, Jura


----------



## gfunkernaught

Hate to hijack the thread, because I understand the 30-series is the hot topic, but I just tried the Galax 2000W BIOS on my 2080 Ti and set a custom curve to what I had set before on the KFA 380W BIOS, which is [email protected], knowing that it will temp-throttle to 2085MHz, which is where I want it because it is stable, and I ran a few benches successfully. I never tried using these unlimited BIOSes this way before. Does anyone else use these high-power BIOSes with not-so-extreme overclocks, so that the card can run a stable OC with basically no power limit (or very far from the limit) and therefore no throttling?

EDIT: I forgot about the Afterburner profile lockup bug with these BIOSes. Has anyone found a workaround for this yet?


----------



## 2080tiowner

Hi all, I've an ASUS 2080 Ti ROG Strix with a non-A chip (custom PCB).

Can I flash it with the 310W BIOS for non-A chips (but for reference PCBs)?

Thanks for your help!


----------



## omarrana

Since used RTX 2080 Tis are now really cheap, instead of selling mine, is it a good idea to buy another one and do NVLink? Or should I just sell it, water-cooled, for 400-500 euros? I initially bought it second-hand, and with the water block it cost around 800 euros, so I am not at a big loss like people who bought it brand new for 1200.


----------



## omarrana

2080tiowner said:


> Hi all, I've an ASUS 2080 Ti ROG Strix with a non-A chip (custom PCB).
> 
> Can I flash it with the 310W BIOS for non-A chips (but for reference PCBs)?
> 
> Thanks for your help!


I did it with my Zotac RTX 2080 Ti, which is also non-A. It worked.


----------



## J7SC

omarrana said:


> Since used RTX 2080 Tis are now really cheap, instead of selling mine, is it a good idea to buy another one and do NVLink? Or should I just sell it, water-cooled, for 400-500 euros? I initially bought it second-hand, and with the water block it cost around 800 euros, so I am not at a big loss like people who bought it brand new for 1200.


 
I have been running 2x w-cooled 2080 Tis in NVlink (specifically in 'CFR' mode wherever possible) since late '18 and rarely have issues, though I use the cards also for productivity and only play a few games and sims (such as Metro Exodus, NFS, MS FS 2020 etc). Also, NVlink isn't identical to the 'old SLI'. Still, by definition, NVlink support by all apps / games will ALWAYS be less than for a single GPU. The flip-side is that especially with 4K, the fps gain can make a difference even w/o strong scaling. As already posted before, with NVl-SLI-CFR, I get between 55 and 65+ fps in MS FS 2020 / 4K / ultra / dense.

One thing to remember though is that you need a very strong PSU w/ dual 2080 Ti (especially w/ custom PCB versions). I actually reduced 'oc' for both the GPUs and TR CPU for MS FS 2020 (spoiler) as I consistently pull 1100w+ for the whole system  ...then again, strong PSUs are also a good idea w/ 3080 / 3090s



Spoiler
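A rough back-of-the-envelope for why dual 2080 Tis want a big PSU. The component wattages below are illustrative assumptions, not measurements from this or any specific build:

```python
# Rough PSU sizing for a dual-GPU build. All wattages are placeholder
# estimates chosen for illustration only.
loads_w = {
    "2x 2080 Ti (OC, custom PCB)": 2 * 330,
    "HEDT CPU (OC)": 280,
    "Pumps, fans, drives, board": 140,
}

total_w = sum(loads_w.values())   # steady-state estimate
headroom = 1.25                   # margin for transients / PSU efficiency sweet spot
recommended_w = total_w * headroom

print(f"Estimated draw: {total_w} W, recommended PSU: ~{recommended_w:.0f} W")
```

With these assumptions the steady-state estimate lands around 1100W, consistent with the 1100W+ system draw reported above, and the headroom factor is why 1300W+ units get recommended for this kind of rig.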


----------



## nyk20z3




----------



## 2080tiowner




omarrana said:


> I did it with my Zotac RTX 2080 Ti, which is also non-A. It worked.


Sure, but your Zotac is a reference PCB card; mine is a custom PCB, it's not the same...


----------






## Hanks552

2080tiowner said:


> Hi all, I've an ASUS 2080 Ti ROG Strix with a non-A chip (custom PCB).
> 
> Can I flash it with the 310W BIOS for non-A chips (but for reference PCBs)?
> 
> Thanks for your help!


I don't think so.


----------



## Hanks552

omarrana said:


> Since used RTX 2080 Tis are now really cheap, instead of selling mine, is it a good idea to buy another one and do NVLink? Or should I just sell it, water-cooled, for 400-500 euros? I initially bought it second-hand, and with the water block it cost around 800 euros, so I am not at a big loss like people who bought it brand new for 1200.


Well, I have 2 RTX 2080 Tis in NVLink, water-cooled and shunt-modded... I think it's not worth it.
If I could, I would sell both and buy a 3090... but at the same time I'm going to wait, because it's new technology, a new architecture. Remember that "cooking egg" thing?
So I might just wait a little bit. I'm upset about the new 3070, BUT I think the 3070 is not that strong; I think it's partly marketing. Because it has more RT cores it can handle ray tracing better, so they say it's faster... idk, let's see...


----------



## JackCY

nyk20z3 said:


> https://youtu.be/2upZSyQHwNA


I thought about posting this here but then I decided I don't wanna get burned at the stake.

:specool:


----------



## 2080tiowner

Hanks552 said:


> i don't think so



And is a shunt mod maybe possible? Thanks


----------



## sultanofswing

I am most interested in how the double floating-point CUDA cores are going to work out.
The 3090 has 5248 true CUDA cores, but each core can do double tasks, so 10496 "cores".

Does the same hold true for the 3070 and the 3080?
3070 claims 5888 (2944?)
3080 claims 8704 (4352?)
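The advertised Ampere counts do fall out of the announced SM counts if you assume 128 FP32 lanes per SM (Turing-style counting would be 64 per SM) - a quick sanity check on the numbers in the post:

```python
# Ampere GA10x: each SM has 128 FP32 lanes (64 dedicated FP32 + 64 shared
# FP32/INT32). Turing counted 64 FP32 cores per SM, hence the "doubled"
# marketing numbers. SM counts are from NVIDIA's launch specs.
SM_COUNTS = {"RTX 3090": 82, "RTX 3080": 68, "RTX 3070": 46}

for card, sms in SM_COUNTS.items():
    advertised = sms * 128    # NVIDIA's quoted CUDA core count
    turing_style = sms * 64   # the "(2944?)"-style halved figure
    print(f"{card}: {advertised} advertised / {turing_style} Turing-style")
```

So yes, the same doubling holds for all three: 10496/5248, 8704/4352 and 5888/2944.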


----------



## Nizzen

sultanofswing said:


> I am most interested in how the double floating-point CUDA cores are going to work out.
> The 3090 has 5248 true CUDA cores, but each core can do double tasks, so 10496 "cores".
> 
> Does the same hold true for the 3070 and the 3080?
> 3070 claims 5888 (2944?)
> 3080 claims 8704 (4352?)


Maybe there is an answer to this here 

https://www.reddit.com/r/buildapc/c...13cea68333&utm_source=embedly&utm_term=ilgi6c


----------



## pyromaniac1

So I have a Strix OC edition. I was hitting the power limit as well as V-rel/V-op with the stock BIOS, so I flashed the Strix 1000W XOC BIOS.
However, I'm barely consuming any more power. I'm very limited by V-rel. The max voltage I'm able to hit is 1.07. How do I ensure that I hit the maximum of 1.093?


----------



## jura11

pyromaniac1 said:


> So I have a Strix OC edition. I was hitting the power limit as well as V-rel/V-op with the stock BIOS, so I flashed the Strix 1000W XOC BIOS.
> However, I'm barely consuming any more power. I'm very limited by V-rel. The max voltage I'm able to hit is 1.07. How do I ensure that I hit the maximum of 1.093?


Hi there 

If I remember correctly, I couldn't hit 1.09V with the Asus XOC BIOS either; usually I could hit 1.05V to 1.07V max with a 2205-2220MHz OC.

In some benchmarks I could hit something like 440W to 480W with a 48% power limit.

Currently I'm on the Asus Matrix BIOS, which gave me the best results and best OC.

Hope this helps 

Thanks, Jura


----------



## pewpewlazer

pyromaniac1 said:


> So I have a Strix OC edition. I was hitting the power limit as well as V-rel/V-op with the stock BIOS, so I flashed the Strix 1000W XOC BIOS.
> However, I'm barely consuming any more power. I'm very limited by V-rel. The max voltage I'm able to hit is 1.07. How do I ensure that I hit the maximum of 1.093?


You can't. With the way GPU Boost works, the v-f curve basically ends up flat beyond 1.050v most of the time. You need to be able to edit the v-f curve to 'force' the card to run higher voltage like 1.093v. Except you can't do that with the Strix XOC BIOS.


----------



## pyromaniac1

pewpewlazer said:


> You can't. With the way GPU Boost works, the v-f curve basically ends up flat beyond 1.050v most of the time. You need to be able to edit the v-f curve to 'force' the card to run higher voltage like 1.093v. Except you can't do that with the Strix XOC BIOS.


I see. Which BIOS would you recommend then?


----------



## pyromaniac1

jura11 said:


> usually I could hit like 1.05v to 1.07v as max with 2205-2220MHz OC. In some benchmarks I could hit something like 440W to 480W with 48% power limit


What temps are you running at? And how are you able to draw that much power? I can't go beyond 330W it seems, even with the 1000W BIOS slider maxed out. I'm running the stock cooler on air though, and I'm hitting 75°C.
Does lower temps mean higher boosts = higher currents = more power? I've added like +135 to the core. After that it's not stable, and it seems it's on account of me not being able to pull more voltage.
Do I need any modded version of Afterburner? The voltage slider seems to do nothing.


----------



## AndrejB

pyromaniac1 said:


> What temps are you running at? And how are you able to draw that much power? I can't go beyond 330W it seems, even with the 1000W BIOS slider maxed out. I'm running the stock cooler on air though, and I'm hitting 75°C.
> Does lower temps mean higher boosts = higher currents = more power? I've added like +135 to the core. After that it's not stable, and it seems it's on account of me not being able to pull more voltage.
> Do I need any modded version of Afterburner? The voltage slider seems to do nothing.


As Jura said, you can't edit the voltage curve on that BIOS, and the Matrix BIOS seems to be the best for the Strix OC. 

The Galax 380W is an option as well, but I think the memory runs lower timings.

I use the matrix, then I open the voltage curve editor, then I increase the slider to +60 then I flatten the curve after 1v (or 0.95 in the summer). This gives me a max of 65c.
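For anyone unsure what "flatten the curve after 1v" means mechanically, here is a rough sketch of the arithmetic in Python. This is not a real Afterburner API and the point values are made up for illustration; the actual tool is the Ctrl+F voltage/frequency curve editor in MSI AB:

```python
# Illustrative sketch of the "offset then flatten after a voltage" trick.
# points: (millivolts, MHz) pairs, ascending by voltage (values are made up).

def flatten_curve(points, offset_mhz, cutoff_mv):
    """Apply a global frequency offset, then clamp every point above
    cutoff_mv to the frequency at the cutoff, so GPU Boost never has a
    reason to request a higher voltage than the cutoff."""
    shifted = [(mv, mhz + offset_mhz) for mv, mhz in points]
    # frequency of the highest point at or below the cutoff voltage
    cap = max(mhz for mv, mhz in shifted if mv <= cutoff_mv)
    return [(mv, min(mhz, cap)) for mv, mhz in shifted]

curve = [(900, 1900), (950, 1950), (1000, 2000), (1050, 2050), (1093, 2080)]
flat = flatten_curve(curve, offset_mhz=60, cutoff_mv=1000)
# every point above 1.000v now sits at the 1.000v frequency (2060 MHz here)
```

The effect is the same as dragging every curve point past the chosen voltage down to one flat line in the AB editor.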


----------



## jura11

pyromaniac1 said:


> What temps are you running at? And How are you able to draw that much power? I can't go beyond 330 it seems. Even with the 1000W bios slider maxxed out. I'm running the stock cooler on air though, and I'm hitting 75c.
> Does lower temps mean higher boosts = higher currents = more power? I've added like +135 to the core. After that it's not stable, and It seems it's on account of me not being able to pull more voltage.
> Do I need any modded version of afterburner? The voltage slider seems to do nothing.


Hi there 

My temperatures with such power draw are 40-42°C at normal ambient temperatures of around 22-25°C; at lower ambient temperatures I see 38-42°C as a max, and I've seen a max power draw of 440-480W on the XOC BIOS. What I didn't like on the XOC BIOS is that the VRAM runs at full speed the whole time. 

Currently running 2175MHz on the Matrix 360W BIOS 

Yes, frequency/voltage curve control doesn't work with the XOC BIOS, which is a bummer for me too. I use the XOC BIOS only for benchmarks, some tests, and some games 

Hope this helps 

Thanks, Jura


----------



## pyromaniac1

AndrejB said:


> As Jura said you can't edit the voltage curve on that bios and the matrix bios to be the best for Strix Oc
> 
> The galax 380w is an option as well but the memory I think has lower timings.
> 
> I use the matrix, then I open the voltage curve editor, then I increase the slider to +60 then I flatten the curve after 1v (or 0.95 in the summer). This gives me a max of 65c.


Can I get a picture of your curve please? And what's your stable boost at 65c? I'd assume you are then manually limiting your voltage to 1v?


----------



## AndrejB

pyromaniac1 said:


> Can I get a picture of your curve please? And what's your stable boost at 65c? I'd assume you are then manually limiting your voltage to 1v?


Here you go. Honestly, +60 on the Matrix is rock solid; anything over gets varied results in OC Scanner.


----------



## pyromaniac1

BTW is the 373w FTW3 bios flashable on the strix? 

The XOC bios and the KFA2 bios work well but they don't support fan stop.


----------



## Webster200x

> *Galax RTX 2080 Ti HOF OC Lab Custom PCB (3x8-Pin) 2000W x 100% Power Target BIOS (2000W)*


Using this on an FE card and it seems to be working. The card is watercooled, and even though it's a 2x8-pin card rather than the 3x8-pin the BIOS was made for, it still seems to be working as normal. Any other thoughts on this?


----------



## pewpewlazer

Webster200x said:


> Using this on an FE card and it seems to be working. The card is watercooled, and even though it's a 2x8-pin card rather than the 3x8-pin the BIOS was made for, it still seems to be working as normal. Any other thoughts on this?


I think it's a waste of time since you can't load saved Afterburner profiles. Computer will insta-reboot.


----------



## Webster200x

pewpewlazer said:


> I think it's a waste of time since you can't load saved Afterburner profiles. Computer will insta-reboot.


You can always overclock it manually after booting into Windows; that seems to work. And yes, I can confirm it does give you a BSOD when you try to load the MSI Afterburner profile.


----------



## pewpewlazer

Webster200x said:


> Can always overclock it manually after booting into windows that seems to work and yes I can confirm it does give you a bsod when you try to load the msi afterburner profile.


Maybe I'm abnormal, but I have multiple profiles I use depending on the game I'm playing. If I'm playing a game from 5 years ago where the card can run 100+ fps in its sleep, I don't exactly need the extra 5% performance. But I also don't need (and definitely do not WANT) the extra 100-200W worth of heat dump and associated fan noise either. So I run a profile that stays more in the 250w range for that.


----------



## acoustic

I tried getting cheeky and flashing the 520w Kingpin Hybrid BIOS to my FTW3 Ultra under the hybrid cooler, but couldn't get out of 2D clocks and power limit was reading all wonky. Was doing some testing to see how far behind the 3080 FTW3 Port Royal scores that Steve was getting on his livestream earlier this evening.

I ended up posting a 10494 on the stock FTW3 Ultra BIOS +105core/+1200mem/+124%power


----------



## Fuzzylogic

Bought a non-A card (Asus Turbo 2080 Ti) and a waterblock before I realized that I might have problems raising the power limit sufficiently. I've already flashed the card to the Palit 310W BIOS, but I'm still hitting the power limit at a lower point than I'd like. Am I going to brick the card attempting to force flash the XOC BIOSes? My understanding is that those BIOSes are only compatible with 1E07 cards. Before I shunt mod this thing, are there any other BIOS options I might have overlooked?


----------



## gfunkernaught

pewpewlazer said:


> Maybe I'm abnormal, but I have multiple profiles I use depending on the game I'm playing. If I'm playing a game from 5 years ago where the card can run 100+ fps in its sleep, I don't exactly need the extra 5% performance. But I also don't need (and definitely do not WANT) the extra 100-200 worth of heat dump and associated fan noise either. So I run a profile that stays more in the 250w range for that.


It would be nice if at least one profile auto loaded. But yeah manually setting a curve each time I boot does seem a bit much.


----------



## Imprezzion

Fuzzylogic said:


> Bought a Non-A card (Asus Turbo 2080 Ti) and waterblock before I recognized that I might have problems raising the power limit sufficiently. Already flashed the card to the Palit 310W BIOS, but I'm still power limiting below lower than I'd like to. Am I going to brick the card attempting to force flash the XOC BIOSes? My interpretation is that those BIOSes are only compatible with 1E07 cards. Before I shunt mod this thing, are there any other BIOS options I might have overlooked?


Nope. No other way. It won't brick it, you can always re-flash it with another card as long as device manager still sees it but nvflash simply doesn't allow flashing that BIOS unless you overwrite the mismatch, which is possible. But then the card will just code 43 in device manager and you have to re-flash with iGPU or different GPU plugged into your monitor.

Shunt is the only way to get a non-a to clock properly with less power limits.

Worked fine for my Inno3D non-A. I tested it shortly with a liquid metal shunt (just slapping Conductonaut on the resistors with some nail polish and hot glue around them) and it worked fine. Totally the wrong resistance, half the time it would lock in protection mode, but the concept worked.
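For reference, the back-of-the-envelope math behind why a shunt mod raises the effective power limit. This is a sketch only: the 5 mOhm stock value is an assumption (check your own board), and real mods behave messier than the ideal parallel-resistor formula, as the liquid-metal attempt above shows:

```python
# The controller measures power as a voltage drop across a known shunt
# resistor. Stacking a second resistor in parallel lowers the effective
# resistance, so the card under-reports its true draw by that ratio.

def parallel(r1, r2):
    """Effective resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

STOCK_SHUNT = 5.0   # mOhm -- assumed stock value, varies by board
added = 5.0         # mOhm resistor soldered on top of the stock one

r_eff = parallel(STOCK_SHUNT, added)   # 2.5 mOhm
scale = r_eff / STOCK_SHUNT            # 0.5: reported power is halved
true_draw = 300 / scale                # a reported "300W" is really ~600W
```

This is also why a wildly wrong added resistance can trip protection: the reported numbers no longer bear any sane relation to reality.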


----------



## sblantipodi

Do you guys think it makes sense to upgrade from a 2080 Ti to a 3080 that has only 10GB of VRAM?
Will 10GB last until next Easter, when the next-gen console games hit the PC market?


----------



## Dreamliner

sblantipodi said:


> Do you guys think that it have sense in upgrading a 2080ti with a 3080 that have 10GB only of VRAM?
> Will 10GB last until next easter when the next gen console games hit the pc market?


I've been reading about this issue. Apparently there is a leak that shows NVIDIA will be releasing a 3080 20GB card.

I sold my 1080Ti and was going to get a 3080, but now I think I am going to wait for one with more memory. It could all be for nothing though. I had a 970 during that whole 3.5GB fiasco and I still don't think those cards had any real-world consequences.

Of the current cards though, the 3080 is definitely the one to get. GDDR6X memory in the 3080 is much faster than GDDR6 memory in the 3070. Plus, 10GB vs 8GB. For once the higher end card is actually a better value. I do wonder how much more the 20GB version will be and how far behind a 2080Ti is...

I'm limping along on a 750Ti and the 30Hz 4K HDMI output on the second monitor is...rough.

Honestly, I'd probably put up a for-sale ad on FB and see what kind of responses you get. With the SOLD 2080Ti prices I'm seeing on eBay, it's like a $150 difference for a 3080... no brainer.


----------



## ProfeZZor X

I may just wait it out for the 3090 instead of the 3080... Sure, it may be a while before I get my hands on it (post scalpers), but I'm very happy with my current 2080ti founders at the moment... All this chasing a dangling carrot routine for the next best thing can get tiring and expensive. So, I think waiting for an 8K ready 3090 will be more than sufficient for me for the next 8 to 10 years.


----------



## MacMus

Do you guys think the 3080/3090 is worth moving to from a 2080 Ti?


----------



## mattxx88

MacMus said:


> do you guys think 3080/3090 is worth to move from 2080ti ?


This is up to you.
For my part, playing at 4K 144Hz, it is worth it. What makes me hesitant is the 10GB of VRAM only; I think I'll wait for a 20GB/Ti version.


----------



## Obenuts

Bought a used 2080 Ti. Initially it crashed during games if I raised the power limit above 110%, or if overclocked above +25MHz.
After 10 days I loaded my computer into my car for a long road trip... Now it crashes in games (after a few minutes) with the power limit above 90%...
Any advice?


----------



## Medizinmann

Obenuts said:


> Bought a used 2080ti. Initially it crashed during games if I raised the power limit above 110%, or if overclocker above +25mhz.
> After 10 days I loaded my computer on my car for a long road trip... Now it crashes in games (after few minutes) with power limit above 90%...
> Any advice?


Cooling?

Reseat/repaste cooler - would be my first idea...

Best regards,
Medizinmann


----------



## Medizinmann

sblantipodi said:


> Do you guys think that it have sense in upgrading a 2080ti with a 3080 that have 10GB only of VRAM?


I would wait.

There are rumours about 3080 variants with 20GB etc.

…and availability is sparse right now anyhow.

And an uplift of 25-30% wouldn't tempt me that much anyway.
And right now there aren't many water blocks etc. available.



> Will 10GB last until next easter when the next gen console games hit the pc market?


The answer is – as always – it depends…
8GB is enough for almost all games in 1440p and most games in 4K – but some profit from more – e.g. Flight Simulator – though even that will still run pretty decently with 8-10GB.

Again – I would wait.

Best regards,
Medizinmann


----------



## BigMack70

MacMus said:


> do you guys think 3080/3090 is worth to move from 2080ti ?


3080 no... only 15-20% faster vs an OC'd 2080 Ti, with less vram. If custom models come out that overclock well, then might be worth it, but at the moment you're better off overclocking your 2080 Ti than buying a 3080 IMO. 

3090 yes, if value/money is not a concern - should be 35%+ faster than 2080 Ti OC vs OC


----------



## Medizinmann

MacMus said:


> do you guys think 3080/3090 is worth to move from 2080ti ?


3080 - uhm, no - as already stated, the uplift isn't that great.

3090 - maybe, but it isn't available anyhow and is also very expensive.

I would wait.

If you want the fastest and greatest - wait for the HOF and Kingpin models and the watercooled options etc.

Best regards,
Medizinmann


----------



## Obenuts

Medizinmann said:


> Cooling?
> 
> Reseat/repaste cooler - would be my first idea...
> 
> Best regards,
> Medizinmann


Thanks for the reply, but gpu temperature is fine. Cpu and mobo temps are fine too.
I checked with GPU-Z... the moment the GPU frequency arrives at 1920mhz it crashes. Is there any way to limit the GPU clock to 1900mhz?


----------



## JM Popaleetus

I'm genuinely annoyed that EVGA is no longer selling their waterblocks for the 2000-series. Was going to shunt mod my FTW3 Ultra and put it under water instead of getting a 3080 or inevitable 3080 Ti.


----------



## J7SC

Obenuts said:


> Thanks for the reply, but gpu temperature is fine. Cpu and mobo temps are fine too.
> I checked with GPU-Z... the moment the GPU frequency arrives at 1920mhz it crashes. Is there any way to limit the GPU clock to 1900mhz?


MSI AB *should* be able to do it... quick question: are you running max PL on your card (i.e. 125% or whatever, depending on model)? If so, I would test and dial that back by about 10% first and see if it solves the issue (1920 at 100% and at 125% are not the same)


----------



## kithylin

JM Popaleetus said:


> I'm genuinely annoyed that EVGA is no longer selling their waterblocks for the 2000-series. Was going to shunt mod my FTW3 Ultra and put it under water instead of getting a 3080 or inevitable 3080 Ti.


I'm not surprised at all. The RTX 2080 Ti will be 2 years old here in 5 days and that's about how long EVGA takes to discontinue some of their products. You should still be able to get your paws on some EK blocks from Performance-PCS. PPCS still has 1080 Ti blocks new in stock too.


----------



## Medizinmann

Obenuts said:


> Thanks for the reply, but gpu temperature is fine. Cpu and mobo temps are fine too.
> I checked with GPU-Z... the moment the GPU frequency arrives at 1920mhz it crashes. Is there any way to limit the GPU clock to 1900mhz?


Define "temps are fine".
I would still say it's your best bet... 
...or it could even be power draw - you could try reseating the card in its PCIe slot and the power connectors. 

You can limit the GPU through Afterburner - just shape a frequency curve that stops/tops out at 1900 MHz.

Best regards,
Medizinmann
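Another option, sketched below: recent drivers expose a hard clock cap on Turing via nvidia-smi's `-lgc` / `--lock-gpu-clocks`. This is only a sketch of the command, not something from the posts above; it needs an elevated prompt, and availability depends on driver version:

```python
# Sketch: build the nvidia-smi command that locks GPU clocks to a range,
# capping boost at 1900 MHz. Run the resulting command from an elevated
# (admin) prompt; undo it later with: nvidia-smi -i 0 -rgc
import subprocess

def lock_clocks_cmd(min_mhz, max_mhz, gpu_index=0):
    """Return the argv list for nvidia-smi --lock-gpu-clocks (-lgc)."""
    return ["nvidia-smi", "-i", str(gpu_index),
            "-lgc", f"{min_mhz},{max_mhz}"]

cmd = lock_clocks_cmd(300, 1900)
# subprocess.run(cmd, check=True)  # uncomment to actually apply
```

Unlike an Afterburner curve, this survives profile mishaps, but it is a blunt cap rather than a tuned v/f curve.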


----------



## Brodda-Syd

chibi said:


> Hey everyone, I'm building a new 9900KS rig. Should I get the 2080 Ti FE card now, or wait for Ampere 3080 Ti? I don't have a spare GPU so it will have to work on the iGPU if wait.
> 
> Question - if this were you, would you buy into the 2080 Ti 1 year after release? Kind of at the end of its life cycle.


Hey, so what did you do????
Did you buy a RTX 2080ti or have you managed to get a RTX 3080?


----------



## chibi

Brodda-Syd said:


> Hey, so what did you do????
> Did you buy a RTX 2080ti or have you managed to get a RTX 3080?


I'm on a 1060 3GB card for now. Waiting for 3080 20GB model to release, or 3090.


----------



## mardon

I've got a Reference board (300A) and am waterblocked (EK). I only need 1x DP and 1x HDMI and don't need fan control. 

Is there a better BIOS than the Galax which will give me over 380w power limit?


----------



## gfunkernaught

This is still a 2080 Ti thread right? 😅
Has anyone found a way to disable thermal throttling on Turing? I'm not talking about temp limit, but the thermal throttle curve that exists even at lower temps. Like at 40c, etc.


----------



## TK421

mardon said:


> I've got a Reference board (300A) and am waterblocked (EK). I only need x1 DP and x1 HDMI and don't need fan control.
> 
> Is there a better BIOS than the Galax which will give me over 380w power limit?


Kingpin XOC: https://xdevs.com/guide/2080ti_kpe/#biost

gfunkernaught said:


> This is still a 2080 Ti thread right? 😅
> Has anyone found a way to disable thermal throttling on Turing? I'm not talking about temp limit, but the thermal throttle curve that exists even at lower temps. Like at 40c, etc.


unfortunately not


----------



## Mooncheese

gfunkernaught said:


> This is still a 2080 Ti thread right? 😅
> Has anyone found a way to disable thermal throttling on Turing? I'm not talking about temp limit, but the thermal throttle curve that exists even at lower temps. Like at 40c, etc.


Open MSI AB and unlink Temp Limit and Power Limit. What kind of temps are you seeing?


----------



## Mooncheese

Fuzzylogic said:


> Bought a Non-A card (Asus Turbo 2080 Ti) and waterblock before I recognized that I might have problems raising the power limit sufficiently. Already flashed the card to the Palit 310W BIOS, but I'm still power limiting below lower than I'd like to. Am I going to brick the card attempting to force flash the XOC BIOSes? My interpretation is that those BIOSes are only compatible with 1E07 cards. Before I shunt mod this thing, are there any other BIOS options I might have overlooked?


Nope, the only option you have with a non-300A is to shunt mod. From what I gather, NV was engaging in artificial binning, limiting non-300A dies to 280W. From what I hear, a shunted non-300A overclocks just as well as a 300A.

I believe NV had to price some 2080 Tis $100 lower to move TU102, as they weren't selling very well, and the only way to do that without dropping the price across the board - for fear of harming their 2080 S sales - was to artificially gimp a percentage of TU102 dies with this fake binning.

It's just sad really.

The 2080 Ti should never have cost $1200, just like the 3090 shouldn't cost $1500-1800.

But if people buy them, they will keep making overpriced garbage.

I bought my 2080 Ti used for $900 from Craigslist (no tax or shipping) with 2 years remaining on its transferable warranty. It has the iCX thermistors and overclocks well (signature). I love this card: easily 50% gains over my 1080 Ti, because the 2080 Ti overclocks better (11% overclock with the 1080 Ti and 25% with the 2080 Ti, both cards under water blocks).

I don't regret it, but I do feel for people that bought used for $1200 before tax plus another $250 for a water block and back-plate, or around $1500.

That's highway robbery.

Now the gains are even less: it's a 25% gain from 2080 Ti to 3090 at the same power draw, for $1500-1800 before taxes and a water block, or around $2300 with a high-end AIB card and block.

That's insane.

It's approaching nearly $100 for each 1% gain over the 2080 Ti.
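The dollars-per-percent figure roughly checks out, using only the numbers given in the post above:

```python
# Arithmetic check on the cost-per-percent claim (figures from the post).
upgrade_cost = 2300    # high-end AIB 3090 plus water block, per the post
gain_percent = 25      # claimed uplift over an overclocked 2080 Ti

cost_per_point = upgrade_cost / gain_percent   # dollars per 1% gained
# 2300 / 25 = 92 dollars for each 1% of performance
```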


----------



## sultanofswing

gfunkernaught said:


> This is still a 2080 Ti thread right? 😅
> Has anyone found a way to disable thermal throttling on Turing? I'm not talking about temp limit, but the thermal throttle curve that exists even at lower temps. Like at 40c, etc.


This is not removable. Vince (KINGPIN) and TiN reverse engineered the BIOS and removed it, but it caused other issues. It's just something you have to live with.


----------



## pewpewlazer

Now that Ampere is out in the wild (and somewhat disappointing IMO), now seemed like a brilliant time to shunt mod my 2080 Ti. Soldering the resistors on took all of a couple minutes, but having to remove and reinstall the full cover GPU block made me question why I was doing this...

Initial results quite good. 2175mhz @ 1.075v passed everything I tried except Fire Strike Ultra GT2 (which seems to crash at any clock speed above 2130 regardless of voltage). Cleared 10.6k in Port Royal, 17.4k Time Spy graphics score, and 8.2k Time Spy Extreme graphics score. Port Royal hangs around 400w, but Time Spy peaks around 450 (maybe closer to 500 actually) and Time Spy Extreme hit over 500w. I knew that would be the case, but still kind of nutty.

Then I got greedy and tried flashing one of the Aorus Extreme BIOSes that is supposedly faster clock-for-clock than the 380W Galax BIOS. I saw zero difference in performance, but also had zero luck getting it to do 2175mhz consistently.

Back on the Galax BIOS now and can't get it to hold 2175mhz to save my life. My load temps are in the 41-44*C range, which GPU Boost seems to treat as Boost Clock Roulette. +/-15mhz at random leading to beautiful instability. Even 2160 seems sketchy now. I'm assuming there are some "hot spots" that are hotter than the actual GPU temp, and trying to jam 1.075-1.093v at the die just makes them worse.

Oh well. At least the card should hold 2145mhz stable in games now instead of bouncing around like a ping pong ball, so that will be an improvement.


----------



## gfunkernaught

I don't know why people are constantly confusing the temp limit with temp throttling. The KINGPIN XOC BIOS DOES NOT remove temp throttling. Thermal throttling still exists at 40c with this BIOS; I tried it on my card multiple times. I think the temp throttle curve starts at like 20c or something like that. Changing or disabling the temp limit does nothing to the throttle curve (not referring to the v/f curve here). Does anyone know where I can find a good tutorial on how to shunt mod a reference 2080 Ti? I can't find any. I see ones for custom PCBs and TITANs, but none for the 2080 Ti reference board.


----------



## pewpewlazer

gfunkernaught said:


> I don't know why people are constantly confusing temp limit with temp throttling. KINGPIN XOC bios DOES NOT remove the temp throttling. Thermal throttling still exists at 40c with these bios. I tried them on my card multiple times. I think the temp throttle curve starts at like 20c or something like that. Changing or disabling the temp limit does nothing to the throttle curve. Not referring to the v/f curve here. Does anyone know where I can find a good tutorial on how to shunt mod a reference 2080 ti? I can't find any. I see ones for custom PCBs and TITANs, but none for 2080 ti reference.











2080 Ti - Working Shunt Mod (www.overclock.net)

> I finally got a chance to test some other shunts on my FE 2080 Ti. When using 5 mOhm resistors, as I did on my Titan X (Pascal), the 2080 Ti goes into emergency mode with a lot of loads (stuck at 300 MHz). My first try adding a shunt above 5 mOhms, 8 mOhms, did not trigger the cards safety...

Pretty sure it's the first hit if you Google "2080 Ti shunt mod" as well. Also, I believe the Titan RTX uses the 2080 Ti reference PCB.

And no, there's no way to disable "GPU Boost" temperature throttling. Anyone who confuses that with thermal throttling has no idea what they're talking about.


----------



## J7SC

gfunkernaught said:


> (...) Does anyone know where I can find a good tutorial on how to shunt mod a reference 2080 ti? I can't find any. I see ones for custom PCBs and TITANs, but none for 2080 ti reference.





pewpewlazer said:


> 2080 Ti - Working Shunt Mod (www.overclock.net)
> 
> I finally got a chance to test some other shunts on my FE 2080 Ti. When using 5 mOhm resistors, as I did on my Titan X (Pascal), the 2080 Ti goes into emergency mode with a lot of loads (stuck at 300 MHz). My first try adding a shunt above 5 mOhms, 8 mOhms, did not trigger the cards safety...
> 
> Pretty sure it's the first hit if you Google "2080 Ti shunt mod" as well. Also, I believe the Titan RTX uses the 2080 Ti reference PCB.
> And no, there's no way to disable "GPU Boost" temperature throttling. Anyone who confuses that with thermal throttling has no idea what they're talking about.


...Yeah, the Titan RTX owners' thread has a lot of info on shunt modding (especially early January 2019 to February 2019). In addition to soldering shunts, there are some posts by Callsignvega describing the 'conductive pen' method (somewhat different from liquid metal, which is the third, though less desirable, option)


----------



## kot0005

Does anyone have the Strix 2080 Ti OC model BIOS, please? I need both the quiet and performance mode BIOSes. I need the 1665MHz BIOS; I can't find it on TPU!


----------



## Imprezzion

Since I'll be holding on to my 2080 Ti a fair bit longer with all the issues surrounding the 3080/3090, I wanna tweak it some more.

It's a reference PCB A-chip with the EVGA FTW3 Ultra BIOS, so power isn't a problem. It holds 1.093v at 2115-2085Mhz just fine depending on load temps. It usually sits around 50c, as that's where I dialed in the clocks. The sample isn't the best, only doing 2100 stable, but it crashes when warming up as it boosts to 2145 initially and that isn't stable enough.

The question I wanna ask: is there some way to give the memory a bit more juice? BIOS mod? Shunts for the memory VRM? Something like that. 

It only does 7800Mhz stable. I can bench 8200 just fine and it will run for a good while with good performance scaling, but random Direct3D crashes occur above 7800. I wanna see if I can give the memory a small voltage boost to get like, 8000+ stable.


----------



## JackCY

gfunkernaught said:


> This is still a 2080 Ti thread right? 😅
> Has anyone found a way to disable thermal throttling on Turing? I'm not talking about temp limit, but the thermal throttle curve that exists even at lower temps. Like at 40c, etc.


Why would you want to disable a feature that actually helps?
The colder it is, the higher it can clock; the hotter it is, the lower it can clock. It adjusts on its own, without you needing to leave performance unused at lower temperatures just so you can maintain stability at higher ones.

What would be more useful is a hard clock limit at any temperature and load: a simple "this is max, never go above it".

Or is the lower-temp clock boosting higher making your card crash? Because that's the only reason for wanting to disable it.



Imprezzion said:


> Since I'll be holding on to my 2080 Ti a fair bit longer as well with all the issues surrounding the 3080/3090 I wanna tweak it some more.
> 
> It's a reference PCB A chip with EVGA FTW3 Ultra BIOS so power isn't a problem. It holds 1.093v 2115-2085Mhz just fine depending on load temps. Usually sits around 50c as that's where i dailed in the clocks. Sample isn't the best only doing 2100 stable but it crashes when warming up as it boosts to 2145 initially ad that isn't stable enough.
> 
> The question I wanna ask.. is there some way to give the memory a bit more juice? BIOS mod? Shunts for the memory VRM? Something like that.
> 
> They only do 7800Mhz stable. I can bench 8200 just fine and it will run for a good while with good performance scaling but random direct3d crashes occur above 7800. I wanna see if I can give the memory a small voltage boost to get like, 8000+ stable.


Sure, if you want to risk bricking the VRAM. Buildzoid has explained the Turing memory VRM well enough. It depends on the card, sure, but from what I've seen it's all set in hardware, and if you screw up it's bye-bye. It doesn't even have to be a high voltage, just the wrong voltage at the wrong time.


----------



## EarlZ

I've been using Port Royal to test memory performance gains, with the fan speed locked at 100%, which keeps the GPU under 65c (1965Mhz) from start to finish. These are my results, 2 runs per indicated clock speed:


400Mhz - 7231, 7247
600Mhz - 7269, 7298
800Mhz - 7316, 7316
1GHz - 7367, 7364

I am a bit surprised that it can reach 1GHz.
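Summarising those runs as percentage gains over the +400 MHz baseline (just arithmetic on the scores above):

```python
# EarlZ's Port Royal scores, averaged per memory offset, with the gain
# relative to the +400 MHz baseline.
runs = {400: (7231, 7247), 600: (7269, 7298),
        800: (7316, 7316), 1000: (7367, 7364)}

avg = {off: sum(s) / len(s) for off, s in runs.items()}
base = avg[400]
gain = {off: round(100 * (a / base - 1), 2) for off, a in avg.items()}
# roughly half a percent of score per extra 200 MHz of memory offset here
```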


----------



## Krzych04650

Managed to get a second 2080 Ti on the cheap, a Palit Gaming Pro OC. Cheap for a reason, because there is no receipt and some idiot painted the shroud red, but still. I will play around with SLI for 2 more years; I'm not ready to say goodbye, but I will probably get a 3090/3080 Ti later for games without mGPU support.

It seems like a terrible overclocker though. It has a 330W power limit and an air cooler, so it is not going to do great before I flash it with the 380W BIOS and add a waterblock, but in some lower-power-draw games like AC: Origins it is crashing at [email protected] and cannot do 2025 at 1.000V, so it seems like it is around 75 MHz weaker than my Sea Hawk X. My Sea Hawk ran through the entirety of Origins at 2100-2145 on some stupid 120mm AIO. This is certainly above average, but this Palit looks like it is not going to go past something like 2055 MHz fully stable. Crashing at [email protected] in a game like Origins, where power draw is like 300W with these settings and temps are around 55C, is really not a good start


----------



## krizby

Imprezzion said:


> Since I'll be holding on to my 2080 Ti a fair bit longer as well with all the issues surrounding the 3080/3090 I wanna tweak it some more.
> 
> It's a reference PCB A chip with EVGA FTW3 Ultra BIOS so power isn't a problem. It holds 1.093v 2115-2085Mhz just fine depending on load temps. Usually sits around 50c as that's where i dailed in the clocks. Sample isn't the best only doing 2100 stable but it crashes when warming up as it boosts to 2145 initially ad that isn't stable enough.
> 
> The question I wanna ask.. is there some way to give the memory a bit more juice? BIOS mod? Shunts for the memory VRM? Something like that.
> 
> They only do 7800Mhz stable. I can bench 8200 just fine and it will run for a good while with good performance scaling but random direct3d crashes occur above 7800. I wanna see if I can give the memory a small voltage boost to get like, 8000+ stable.


You can desolder the VRAM on the 2080 Ti and solder on the 16Gbps modules from the 2080 Super


----------



## Imprezzion

krizby said:


> You can de solder the VRAM on the 2080 Ti and solder on the 16Gbps modules of the 2080 Super


Now that's a level of YOLO even I wouldn't do lol.


----------



## Pepillo

Wrong post


----------



## gfunkernaught

JackCY said:


> Why would you want to disable a feature that actually helps?


If you have enough experience overclocking Turing, especially water cooled, you'd know why. It is quite annoying. For example, if I want to set a v/f curve of [email protected], I have to set the point to [email protected], because once the GPU hits 40c it should drop but doesn't always, causing a crash. It would be nice to have no thermal throttling at low temps, and no power limit. Let the user void the warranty and use the card as he/she wishes.


----------



## JackCY

Well, you're setting an offset on top of an auto boost. If you're crashing, it's simple: your OC is too high and unstable in certain situations. The temperatures at which the change in clock occurs are not hard set 100% the same all the time. It's auto boost.

You have to test your OC at all temperatures and loads the card may experience. Sometimes it's stable under heavy load, but once there is a light-load scene, or the game is closing, it crashes as the card suddenly auto-boosts too high to be stable. Again, too high an OC.

And a hard clock limiter would help here too, to tame the auto boost so it doesn't go nuts with clocks. Sure, I would love to have the clock/temperature curve adjustable beyond the useless adjustment it allows right now.


----------



## pewpewlazer

JackCY said:


> Well you're setting an offset on top of an auto boost. If you're crashing, *it's simple, your OC is too high and unstable in certain situations*. The temperatures at which the change in clock occurs are not hard set 100% the same all the time. It's auto boost.


It's simple. His overclock, my overclock, and plenty of other overclocks that would otherwise be stable, are not stable BECAUSE OF BOOST.



JackCY said:


> You have to test your OC at all temperatures and loads the card may experience. Sometimes it's a simple it's stable under heavy load but once there is a light load scene or game closing, it crashes as the card suddenly auto boost too high for it to be stable. Again, too high OC.


Again, not "too high OC". The issue is "too much GPU Boost".



JackCY said:


> And a hard clock limiter would help here too. To tame the auto boost not going nuts with clocks. Sure I would love to have the clock/temperature curve adjustable beyond the useless adjustment it allows right now.


A hard clock limiter and disabling boost entirely would accomplish the exact same thing.

Back on topic, question for the watercooled shunt modded folks: what kind of GPU/water temp deltas are you seeing under 1.068v+ 400w+ loads? At 28c water temp I'm seeing 44-45c peaks under heavy load (like TS Extreme GT2). Does that sound right? The thermal pads for the bottom ram chips hardly had any indication of contact/compression when I removed my block last weekend, and the EVGA HydroFlopper block seems to be notorious for bad contact, so I'm wondering if there's any potential gain in trimming down the waterblock standoffs and remounting.

Obviously these cards need to run 40c or cooler for GPU Boost not to make your life miserable. And with 22-23c ambient, there's no way I'm getting cooler water temps without adding another 2 or 3 radiators. And my 2x D5 setup is already inadequate for my current 4x rad setup...


----------



## 050

Hi! Quick question hopefully. I'm running my 2080 ti FE with the galax 380w bios water cooled (so under full load with the 127% power limit, it is staying at or below 55c). Despite keeping it fairly cool, I am only getting clock speeds up to around 2050mhz, (occasionally 2070) in games and benchmarks. This isn't too bad, but the voltage seems to almost never go above 1.050 volts, certainly not up to the 1.093 that I hear is the limit. Any idea why, when running loads that aren't power limited, the card isn't pulling a higher voltage and clocking up a bit more? Is it thermals? Or is it just silicon lottery, and aside from not being able to hit 2100mhz my card simply won't even request the 1.093v?

In Time Spy it does hit the peak power limit at points, and overall scores ~16400, which is not too bad. I'd like to push it a bit higher but certainly don't need to. I mostly want to know if I'm missing something that is preventing my card from requesting higher voltages.

Thanks for any advice!


----------



## pewpewlazer

050 said:


> Hi! Quick question hopefully. I'm running my 2080 ti FE with the galax 380w bios water cooled (so under full load with the 127% power limit, it is staying at or below 55c). Despite keeping it fairly cool, I am only getting clock speeds up to around 2050mhz, (occasionally 2070) in games and benchmarks. This isn't too bad, but the voltage seems to almost never go above 1.050 volts, certainly not up to the 1.093 that I hear is the limit. Any idea why, when running loads that aren't power limited, the card isn't pulling a higher voltage and clocking up a bit more? Is it thermals? Or is it just silicon lottery, and aside from not being able to hit 2100mhz my card simply won't even request the 1.093v?
> 
> In time spy it does hit the peak power limit at points, and overall scores ~16400 which is not too bad. I'd like to push it a bit higher but certainly don't need to. I mostly want to know if i'm missing something that is preventing my card from requesting higher voltages
> 
> Thanks for any advice!


If you're just overclocking with a core clock offset, ~1.05v is the highest voltage the card will run at that temperature based on the V-F curve. If you want it to use higher voltage, you'll have to manually set the v-f curve to run higher clocks at those higher voltages.
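The behaviour described here can be pictured as a simple lookup: a plain core offset shifts the whole stock curve up, but Boost still only runs up to whatever voltage point the current cap allows, so the top voltage points go unused. A toy model (the curve points below are made up for illustration, not real TU102 values):

```python
# Toy model of a Boost V-F curve: (voltage, clock_mhz) points.
# Point values are illustrative only, not actual TU102 numbers.
STOCK_CURVE = [(0.900, 1800), (0.975, 1905), (1.025, 1980), (1.050, 2010), (1.093, 2070)]

def with_offset(curve, offset_mhz):
    """A plain core offset shifts every point; the curve's shape is unchanged."""
    return [(v, f + offset_mhz) for v, f in curve]

def clock_at_cap(curve, voltage_cap):
    """Boost runs the fastest point at or below the current voltage cap."""
    return max(f for v, f in curve if v <= voltage_cap)

# With a flat +70 offset and the card capped at 1.050 V, you never see the
# 1.093 V point no matter how large the offset is.
curve = with_offset(STOCK_CURVE, 70)
print(clock_at_cap(curve, 1.050))  # 2080
```

Manually dragging the 1.093 V point higher than the rest of the curve is what makes the card actually request that voltage.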


----------



## Imprezzion

Exactly. I mean, I can run 2115Mhz @ 1.093v all day, but with it boosting to 2160Mhz cold it will crash before it heats up to the point of dropping, and even if it doesn't, playing a lighter-load game that doesn't heat up the card much, like Roblox or Among Us or Fall Guys, it will stay at 2160 and crash. So I have to run 2085Mhz, as that will boost to 2130Mhz and drop down, which is stable enough to not crash at light loads or while heating up.

I wish this card had a fanless mode.. I could set it up to idle at 45c so it doesn't drop at all, as my load temps with the radiator fans running are 49-51c, and I control the rad fans with the card using an EK GPU-PWM converter cable and a SATA-powered PWM splitter.


----------



## 050

I feel like an absolute fool, I realized I had taken it a bit too literally when people said that the Afterburner voltage slider did nothing on Turing cards. After sliding it up to 100, I see that the card can touch 1.093v correctly. So that's sorted! Thank you


----------



## Sync0r

Hi all, just joined the 2080 Ti owners club, cheapo ebay cards enabled this hah. Grabbed a Zotac Amp reference design card with an EK block. Reflashed the vbios with the HOF 2000w power limit and 1.125v voltage. Card now clocks to 2160Mhz Core and 8250Mhz memory, quite happy with that. Doesn't seem much different from the 3080 to be honest. Screenshot is of my old 2080 super, my 2080 Ti and a couple of random 3080s from Timespy results database, cpu was a 10900k for each.


----------



## Krzych04650

Well, I guess I underestimated the power draw of two 2080 Tis  I am getting shutdowns from the PSU, even though I have a Seasonic Prime 1200W. Power from the wall is fluctuating from 890 to 1030W, but I guess there are some spikes that are triggering the protection on the PSU. It also doesn't seem to be able to cool itself properly, because these shutdowns increase in frequency over time, so I would guess that's heat. It takes a constant 1100W load well though (Time Spy Extreme Stress Test + some CPU-Z stress test on top of that), so it must be those spikes in gaming workloads that make it trigger the protection and shut down. The same happened when I used a 750W unit for GTX 1080 SLI; that PSU was also working at 85% capacity, so power spikes could cause shutdowns. Not great timing with all the PSU shortages...
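The sustained-versus-transient distinction is easy to put numbers on: a PSU that holds 1100 W continuously can still trip its protection on millisecond spikes well above the average. A rough headroom sketch (the 1.4x transient multiplier is an assumption; Turing cards are known for large transients, but exact factors vary per card and can only be confirmed with a scope):

```python
# Rough PSU headroom check for transient spikes.
# spike_factor is an assumption (~1.3-1.5x average is commonly cited for
# high-end GPUs); real values vary by card, BIOS, and workload.
def worst_case_draw(avg_gpu_w, n_gpus, rest_of_system_w, spike_factor=1.4):
    """Estimate momentary system draw if all GPUs spike simultaneously."""
    return avg_gpu_w * spike_factor * n_gpus + rest_of_system_w

peak = worst_case_draw(avg_gpu_w=330, n_gpus=2, rest_of_system_w=250)
print(round(peak))  # ~1174 W momentary against a 1200 W rating: almost no margin
```

By that estimate the rig sits right at the PSU's limit during spikes even though the average looks fine, which matches the symptoms.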

This used Palit Gaming Pro OC that I got for a second card is not doing all that well either, it cannot even start any benchmark at 2100 MHz, let alone pass it. You can even see the silicon quality difference vs my Sea Hawk X when they are left on stock; the Sea Hawk is running like 75 mV lower for the same frequencies, and at 0.975V it can do what this Palit cannot do at all... Memory is a bit better though. Fortunately it doesn't really matter that much for SLI. I am seeing the same thing as I did with GTX 1080 SLI: real world gains from overclocking are half of what they are with a single card, and memory OC does more than core OC, at least in actual games, probably not 3DMark.


----------



## Voodoo Rufus

Make sure you're not overloading that breaker circuit with too many other devices. Your PSU should handle it.


----------



## Janosi

Hi guys, I need help.
I want to use a modded BIOS. This is my card: Palit RTX 2080 Ti GamingPro Specs. My card is TU102-300A-K1-A1.
Which BIOS could I use?


----------



## Krzych04650

Janosi said:


> Hi guys i need help
> I want to use modded bios. This is my card: Palit RTX 2080 Ti GamingPro Specs. my card is TU102-300A-K1-A1.
> which bios I could use?


This is 2x8pin reference PCB so use GALAX 380W BIOS.


----------



## Janosi

Krzych04650 said:


> This is 2x8pin reference PCB so use GALAX 380W BIOS.


thank you for the fast answer, the GALAX 380W BIOS works fine


----------



## J7SC

Krzych04650 said:


> This is 2x8pin reference PCB so use GALAX 380W BIOS.


...have 'enjoyed' that power consumption since December '18...2x 2080 Ti Aorus WB, stock bios, max combined draw between them is 760W on a regular basis (before oc'ed TR, peripherals...)


----------



## slamedcards

I think I got the worst 2080 ti yet... FTW3 barely cracking 2ghz pulling near power limit. Anyone rec a high power limit bios that works? Tried the kingpin one but it got stuck at 300 mhz. Thanks


----------



## jaqkar

Hi guys, I have an MSI Trio Gaming X 2080 Ti with BIOS version 90.02.30.00.D9. I see this is mentioned, so I was wondering if it's possible to flash these with NVFlash at present:
*MSI Gaming X Trio purchased in Q3 2018 shipped with the 0x70060003 firmware (Build Date: 2018-08-29), the same MSI card purchased a year later in Q3 2019 shipped with the firmware 0x70100003 (Build Date: 2018-12-28), this newer card also had a different EEPROM (WBond instead of ISSI).* * The BIOS and firmware can however be overwritten by force, using an external programming/test clip, bypassing NVFlash entirely.*


----------



## Sync0r

slamedcards said:


> I think I got the worst 2080 ti yet... FTW3 barely cracking 2ghz pulling near power limit. Anyone rec a high power limit bios that works? Tried the kingpin one but it got stuck at 300 mhz. Thanks
> 
> 
> View attachment 2460818


Try overclocking with Msi Afterburner and the graph. Press Ctrl+F and then up the 1.093v point to say 2050Mhz. Move core voltage and power limit to max. Then keep upping until unstable. This way you know you are giving it highest possible voltage for your highest clock to allow max stability.


----------



## Krzych04650

slamedcards said:


> I think I got the worst 2080 ti yet... FTW3 barely cracking 2ghz pulling near power limit. Anyone rec a high power limit bios that works? Tried the kingpin one but it got stuck at 300 mhz. Thanks
> 
> 
> View attachment 2460818


This is almost the same as my second 2080 Ti I just got, also barely cracking 2 GHz when at 70C. These are very high temps for OC though; you get like five 15 MHz drops from Boost along the way.

Try running some less power draw intensive games like AC Origins or lock your framerate so your GPU usage is 70ish percent and then try to find the point of crash. If you crash even without full utilization and power draw then there is no point in higher power limit, it is just how far this particular chip goes and only cooling can help you. My Palit cannot even start any benchmark at 2100 MHz. But again, 66C is very high for OC, even if your chip was capable of let's say 2145 MHz, you would still get downgraded to 2070 over time as the temp rises.
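The "five 15 MHz drops" behaviour Boost applies with temperature can be sketched as a simple staircase (the step temperatures below are placeholders; NVIDIA doesn't publish the real bin table, only the ~15 MHz step size is well established from observation):

```python
# Sketch of GPU Boost thermal binning: one 15 MHz bin lost per threshold crossed.
# Threshold temperatures are assumed placeholders, not published values.
BIN_MHZ = 15
THRESHOLDS_C = [40, 48, 56, 64, 70]  # assumed step points

def effective_clock(cold_clock_mhz: int, gpu_temp_c: float) -> int:
    bins_lost = sum(1 for t in THRESHOLDS_C if gpu_temp_c >= t)
    return cold_clock_mhz - bins_lost * BIN_MHZ

print(effective_clock(2145, 38))  # 2145: below every threshold, no drop
print(effective_clock(2145, 70))  # 2070: five bins down, as described above
```

This is why a curve that is stable at the warmed-up clock can still crash cold: the same curve sits several bins higher until the card crosses the first thresholds.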


----------



## sultanofswing

I still am not understanding why people are still trying to run max voltage and crazy high core clock offsets on a card that is running over 65c.
You are just shooting yourself in the foot.


----------



## Sync0r

sultanofswing said:


> I still am not understanding why people are still trying to run max voltage and crazy high core clock offsets on a card that is running over 65c.
> You are just shooting yourself in the foot.


Yeah best way to overclock these cards is to just keep temperature down first and foremost then you can push voltage.


----------



## Imprezzion

That. I can do it because my card runs at 48c with 1.093v 2100Mhz load under a G12 with a X52 on it but when I first tested it on stock air it also went well into the 70's on 1.093v and wouldn't get anywhere near 2100 stable lol.


----------



## TONSCHUH

Just a heads-up, I finally got a tracking number for my TT Level 20 XT, after waiting almost 3 months for it.

They will now deliver it directly to my door, instead of to the shop I bought it from first.

Still, as it's coming from Melbourne, I'm not sure when it will finally arrive.

I bought some mid-range custom cables from CableMod and a few other things, but without having the actual case in front of me, it's hard to tell what will make it into the new case.

🙂


----------



## Sync0r

Nice, I have the ThermalTake Core x9, I don't really care about looks, just performance these days, hence the cable management. The amount of rads you can fit in it is just amazing, I have 2x480mm, 1x420mm, 2x240mm all 60mm thick, almost all in push pull, water temps just sit at ambient. 2080 Ti under load hits 32c with 20c water at 1.125v and 420watt power draw from the card.


----------



## jura11

I just tried the Galax 450W BIOS on my Zotac RTX 2080Ti AMP and the difference in Port Royal or any benchmark is just too small at the same voltage or same offset.

The Zotac has literally been one of the poorest overclockers to date; it wouldn't do more than 2115MHz, but it will usually do a stable 2100MHz and 2070-2085MHz in RT games.

But I must say my Asus RTX 2080Ti Strix is still one of the best RTX 2080Tis I have tested. It will do 2205-2220MHz in benchmarks or some games, and in RT games like Control or BFV I'm running a 2160-2175MHz OC with +800MHz on VRAM.

Will probably flash the EVGA FTW3 BIOS on the Zotac RTX 2080Ti AMP and see.

Hope this helps 

Thanks, Jura


----------



## Sync0r

Potentially just silicon lottery I guess, mine will do 2160Mhz and 1250Mhz on the memory, I'm keeping it below the first temperature 15mhz bin reduction, using the Galax HOF 2000w limit vbios.


----------



## jura11

Sync0r said:


> Potentially just silicon lottery I guess, mine will do 2160Mhz and 1250Mhz on the memory, I'm keeping it below the first temperature 15mhz bin reduction, using the Galax HOF 2000w limit vbios.


I think so too, it's silicon lottery, but the Zotac RTX 2080Ti AMP has always been one of the poorer GPUs.

I tried several BIOSes on this GPU and it's still the same; temperatures are great at 36-38°C.

That's a good OC with the Galax 2000W BIOS. I haven't tried that BIOS on my Zotac RTX 2080Ti AMP but probably will.

Thanks, Jura


----------



## Imprezzion

Sync0r said:


> Potentially just silicon lottery I guess, mine will do 2160Mhz and 1250Mhz on the memory, I'm keeping it below the first temperature 15mhz bin reduction, using the Galax HOF 2000w limit vbios.


1.125v or 1.093v? Mine will do 2160 as well on HOF but only on 1.125v hehe. And I hate the fact that profiles BSOD in MSI AB. I'm too lazy to set it again every cold boot for those 60Mhz... 2100 @ 1.093v with the EVGA FTW3 Ultra is fine by me lol.


----------



## Sync0r

Imprezzion said:


> 1.125v or 1.093v? Mine will do 2160 as well on HOF but only on 1.125v hehe. And I hate the fact that profiles BSOD in MSI AB. I'm too lazy to set it again every cold boot for those 60Mhz... 2100 @ 1.093v with the EVGA FTW3 Ultra is fine by me lol.


Yes, 1.125v for above 2100Mhz. To be honest though, the frame rate difference between the stock 2070MHz with the HOF bios and clocking to 2160Mhz is about 1 fps lol. Memory makes a much bigger impact.


----------



## Imprezzion

Sync0r said:


> Yes, 1.125v for above 2100Mhz. To be honest though, the frame rate difference between the stock 2070MHz with the HOF bios and clocking to 2160Mhz is about 1 fps lol. Memory makes a much bigger impact.


Yeah, and that's where my card falls flat on its face. It even has Samsung but still only does +800 fully stable. +900 works and doesn't crash but has some weirdness now and then. Like, a black flicker or weirdly colored texture once or twice an hour while gaming. +1000 or more is just asking for either loads of black artifacts or just a straight up Direct3D crash.

I did do a few 3DMark and Superposition runs at 2190 core 1.125v and the scores do scale up to +1300 memory, but the artifacts are really bad at that point, if the bench even loads without crashing lol.


----------



## sultanofswing

If anyone has a 2080ti Kingpin and is interested, I have full Core and memory voltage control that works with MSI afterburner without having to use the Classified tool.


http://imgur.com/HHz7hoE




http://imgur.com/dRyD6nc




http://imgur.com/vBscT5Y




http://imgur.com/eRdYf03


----------



## gfunkernaught

Sync0r said:


> Nice, I have the ThermalTake Core x9, I don't really care about looks, just performance these days, hence the cable management. The amount of rads you can fit in it is just amazing, I have 2x480mm, 1x420mm, 2x240mm all 60mm thick, almost all in push pull, water temps just sit at ambient. 2080 Ti under load hits 32c with 20c water at 1.125v and 420watt power draw from the card.
> 
> View attachment 2461025
> 
> View attachment 2461026


What block does your gpu have on it? 32c on load is amazing. That case is amazing.


----------



## Edge0fsanity

So it's been a while since I've been in this thread. Bought 2 2080tis on launch day and put them on water w/380w galax bios. Ran great for years, but I sold them prior to the 3090 launch thinking I could get one on launch day (lol). I have since bought a used FTW3 and am leaving it on air while I wait for a 3090.

What is the best BIOS for this card for a higher PL? I still need control of the fans unfortunately, and my experience in the past is that other BIOSes will break EVGA fan control. The goal is to run this thing at 1.050v with 0 power throttling. The 373w stock bios has me power throttling back to 1.037v a lot even with the VF curve set.


----------



## xkm1948

Have my old 2080Ti running on Alphacool Eiswolf 2 AIO.

Not too bad TBH. Fairly quiet and plenty fast.


----------



## Imprezzion

Edge0fsanity said:


> So its been awhile since I've been in this thread. Bought 2 2080tis on launch day and put them on water w/380w galax bios. Ran great for years but I sold them prior to 3090 launch thinking I could get one on launch day(lol). I have since bought a used ftw3 and am leaving on air while I wait for a 3090.
> 
> What is the best bios for this card for a higher PL? I still need control of the fans unfortunately and my experience in the past is other bios's will break evga fan control. Goal is to run this thing at 1.050v with 0 power throttling. 373w stock bios has me power throttling back to 1.037v a lot even with the VF curve set.


How lol. I'm using a Gainward Phoenix GS with the FTW3 Ultra 373w BIOS and even on 1.093v 2100Mhz it never throttles. Comes close, it's at 360w power draw average but clocks don't ever drop and neither does voltage.

The HOF XOC BIOS will work fine on that card but you can't use MSI AB profiles. It will BSOD loading a profile so every boot needs manual OC and curve again.


----------



## Krzych04650

I just put my 2080 Ti's under water and my Sea Hawk X can do 2175 MHz sustained in benchmarks and games at 42C, while this stupid Palit that I've got for a second card cannot even launch a benchmark at 2100 MHz, realistically not getting above 2055 sustained. What a waste. The real difference in performance isn't much, 2175 is just 2.9% faster than 2055 from my testing, but still, an amazing card paired with a bottom of the barrel one.


----------



## Edge0fsanity

Imprezzion said:


> How lol. I'm using a Gainward Phoenix GS with the FTW3 Ultra 373w BIOS and even on 1.093v 2100Mhz it never throttles. Comes close, it's at 360w power draw average but clocks don't ever drop and neither does voltage.
> 
> The HOF XOC BIOS will work fine on that card but you can't use MSI AB profiles. It will BSOD loading a profile so every boot needs manual OC and curve again.


You're either playing games at low res, or they aren't demanding, or you're turning settings way down, or playing with fps locked fairly low, or some combination of that, to not throttle at 1.093v. Both of my reference 2080tis on water power throttled at times w/380w bios @ 1.050v set w/VF curve in AB.

sucks about the bsods on that bios, not gonna work for me if I have to constantly reset the VF curve.


----------



## pewpewlazer

Edge0fsanity said:


> You're either playing games at low res, or they aren't demanding, or you're turning settings way down, or playing with fps locked fairly low, or some combination of that, to not throttle at 1.093v. Both of my reference 2080tis on water power throttled at times w/380w bios @ 1.050v set w/VF curve in AB.
> 
> sucks about the bsods on that bios, not gonna work for me if I have to constantly reset the VF curve.


Yeah, there's no way anyone is running 1.093v with a 373w or 380w power limit unless it's at 1080p or something. Around 1.025v is about the highest you can run for high res demanding games without it smashing the PL and throttling constantly.

The 373w FTW3 or 380w GALAX BIOSes are the highest power limit BIOSes available that have a functioning V-F curve (that you can actually save and load with Afterburner). Higher than that you need to physically mod the card.
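Why 1.093v "smashes" a 380 W limit while ~1.025v fits can be estimated from the usual first-order scaling of dynamic power, P ∝ V²·f. A rough sketch (the 330 W reference point below is an assumed example, and the model ignores leakage and workload differences, so treat the output as a ballpark):

```python
# First-order power scaling estimate: dynamic power goes roughly as V^2 * f.
# Ignores leakage and workload variation; output is only a ballpark.
def scaled_power(p_ref_w, v_ref, f_ref_mhz, v_new, f_new_mhz):
    return p_ref_w * (v_new / v_ref) ** 2 * (f_new_mhz / f_ref_mhz)

# If a demanding 4K load pulls ~330 W at 1.025 V / 2010 MHz (assumed reference),
# the same load at 1.093 V / 2100 MHz lands past a 380 W limit:
print(round(scaled_power(330, 1.025, 2010, 1.093, 2100)))  # ~392 W
```

The voltage term dominates because it enters squared, which is why dropping one voltage point buys disproportionately more power-limit headroom than dropping one clock bin.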


----------



## Edge0fsanity

pewpewlazer said:


> Yeah there's no way anyone is running 1.093v with a 373w or 380w power limit unless it's at 1080p or something. Around 1.025v is around the highest you can run for high res demanding games without it smashing the PL and throttling constantly.
> 
> The 373w FTW3 or 380w GALAX BIOSes are the highest power limit BIOSes available that have a functioning V-F curve (that you can actually save and load with Afterburner). Higher than that you need to physically mod the card.


that sucks, was hoping to run one of the xoc bios's and do away with the limiter. 1.050v was the sweet spot on my other two cards. This one is looking to be about the same from early OC results.


----------



## Imprezzion

I am running 1080p 144hz yes. No fans connected to the board, on water of course. So no power draw there.

I run full eye candy on everything. Division 2, GTA V, Borderlands 3, Modern Warfare, none of those hit the power limit at all at 1.093v. It gets close, but doesn't hit it at all. Yes, cranking resolution scale up to 200% aka rendering 4k will hit it.


----------



## Sync0r

Imprezzion said:


> I am running 1080p 144hz yes. No fans connected to the board, on water ofcourse. So no power draw there.
> 
> I run full eye candy on everything. Division 2, GTA V, Borderlands 3, Modern Warfare, none of those hit the power limit at all at 1.093v. It gets close, but doesn't hit it at all. Yes, cranking resolution scale up to 200% aka rendering 4k will hit it.


What CPU and clock speed are you running? What is your GPU utilisation?


----------



## Imprezzion

Sync0r said:


> What CPU and clock speed are you running? What is your GPU utilisation?


99% constant usage, CPU is a 10900KF @ 5.1Ghz all core AVX0 5Ghz cache, 4200C15 on the memory.

The card's on 7900 for the memory btw.

Here, let me upload a MSI AB graph in a bit of division 2.










You can see my fan speed and fan tacho in this screenshot, but mind you, I'm not powering the fans from the board. I'm only using the RPM and PWM wires, which are both connected to an external Lamptron fan controller that drives the radiator fans. I can use the card's PWM and RPM signal and the MSI AB fan curve to control the radiator fans, but the power comes from the Lamptron over SATA.

Yes, while sitting in the game menu at 400+ FPS it will hit the power limit, that's why it shows 127% while I only have 124% available, but never in-game. Usually around 110-115% of the 124% available.


----------



## Delwyn

Hey guys, this question is still not answered about the MSI Gaming X Trio with the Q3 2019 BIOS 90.02.30.00.65 - apparently it can't be flashed to earlier versions, to the 406W version for example!
What is this **** from MSI? The 406W is *their own version*, just older, and you are not allowed to flash to it?

Has anybody tried with a new NVFlash version? How would one go about forcing it to the older firmware, which still allowed the flashing?
I'm not buying MSI ever again anyway, but this is really nasty.

It would not make a huge difference for me anyway, since I'm not on water and the temps are over 70°C at the 100% power limit (330W), but it should still be possible.


----------



## J7SC

Imprezzion said:


> I am running 1080p 144hz yes. No fans connected to the board, on water ofcourse. So no power draw there.
> 
> I run full eye candy on everything. Division 2, GTA V, Borderlands 3, Modern Warfare, none of those hit the power limit at all at 1.093v. It gets close, but doesn't hit it at all. Yes, cranking resolution scale up to 200% aka rendering 4k will hit it.


I have been following your GPU_v discussion - below is a 4K screenshot w/ MSI AB (open in separate tab, double click) for MS Flight Sim 2020 in SLI-'CFR' 4K / Ultra. Cards are not running full tilt, to get power consumption down... here around 680W for both combined in this mode, rather than the 760W on max OC and PL. The CPU is a TR2950X, also dialed back a bit re. power consumption. Otherwise the thing easily and consistently draws 1140W+ overall in MS FS 2020.

My question is this: In order to get max performance out of available PL etc,* isn't it better to run lower GPU_V* ? FPS are not an issue with this setup and app. IMO, getting to 1.093v would just take out available PL headroom ? Per pic, I leave the voltage slider completely untouched, and the GPU bios is stock...also cards are obviously heavily w-cooled.


----------



## Imprezzion

It would be. But if I find a game that does hit the power limit consistently I will just go back to 2145Mhz @ 1.125v with the HOF XOC BIOS.

It will take some time to set the OC every cold boot due to the BSOD bug with MSI AB profiles but I ran it for several weeks and it handled the BIOS and voltage and power just fine.


----------



## gfunkernaught

Imprezzion said:


> It would be. But if I find a game that does hit the power limit consistently I will just go back to 2145Mhz @ 1.125v with the HOF XOC BIOS.
> 
> It will take some time to set the OC every cold boot due to the BSOD bug with MSI AB profiles but I ran it for several weeks and it handled the BIOS and voltage and power just fine.


Which 2080 ti do you have? I ran those bios and ran into that bug as well. It's a shame. But you've never had an issue setting your OC every boot? Also what are your temps at 2145mhz? What games do you run? I'm curious to try it again and set the OC every boot. Might seem worth it if the performance increase warrants it. One more question, with these bios did you run into the thermal throttle at 40c? You know, that single step down? Did you set your curve to 2160mhz so when it warms up it falls to 2145mhz and stays there?


----------



## Imprezzion

gfunkernaught said:


> Which 2080 ti do you have? I ran those bios and ran into that bug as well. It's a shame. But you've never had an issue setting your OC every boot? Also what are your temps at 2145mhz? What games do you run? I'm curious to try it again and set the OC every boot. Might seem worth it if the performance increase warrants it. One more question, with these bios did you run into the thermal throttle at 40c? You know, that single step down? Did you set your curve to 2160mhz so when it warms up it falls to 2145mhz and stays there?


I have a Gainward Phoenix GS 2080Ti cooled with a Kraken G12 + Kraken X52 in push-pull and copper heatsinks on the VRM/VRAM.

Temps around 45-50c at 2145/8000 on 1.125v. Could be lower, but I don't have the fans maxed out, as 45-50c is a nice bracket to keep it in; the downclocking there is only 2 bins.

Curve was set to 2175Mhz at 25c ish idle. 

I mainly play warzone, division 2, world of tanks, borderlands 3 and GTA V.

It will pass benchmarks at 2190 but artifacts do happen sometimes. It can also pass hours of gaming at 2175 with no real issues but it has to be curved at 2205 for that and that is too much, it DirectX crashes before it warms up most of the time.

I might just flash back to XOC just because I can and I'm bored lol.


----------



## gfunkernaught

Imprezzion said:


> I have a Gainward Phoenix GS 2080Ti cooled with a Kraken G12 + Kraken X52 in push-pull and copper heatsinks on the VRM/VRAM.
> 
> Temps around 45-50c at 2145/8000 on 1.125v. Can be lower but I don't have the fans maxed out as 45-50c is a nice bracket to keep it with the downclocking as it's only 2 bins.
> 
> Curve was set to 2175Mhz at 25c ish idle.
> 
> I mainly play warzone, division 2, world of tanks, borderlands 3 and GTA V.
> 
> It will pass benchmarks at 2190 but artifacts do happen sometimes. It can also pass hours of gaming at 2175 with no real issues but it has to be curved at 2205 for that and that is too much, it DirectX crashes before it warms up most of the time.
> 
> I might just flash back to XOC just because I can and I'm bored lol.


That is amazing. I tried the HOF BIOS again last night just for fun and it crashed in the Bright Memory benchmark. It seems like I am in denial about the binning of my card. It seems like anything past 2085mhz will risk a crash, or running high voltage with low clocks because of throttling. Last winter, I put my PC in the garage in 4c weather and flashed the KPXOC bios. I got to [email protected] and load temp was 19c during a Time Spy Extreme benchmark, Port Royal as well; I scored over 10000 and was 64th on the board. But fired up a game, crashed within seconds.


----------



## Imprezzion

gfunkernaught said:


> That is amazing. I tried the HOF BIOS again last night just for fun and it crashed in the Bright Memory benchmark. It seems like I am in denial about the binning of my card. It seems like anything past 2085mhz will risk a crash, or running high voltage with low clocks because of throttling. Last winter, I put my PC in the garage in 4c weather and flashed the KPXOC bios. I got to [email protected] and load temp was 19c during a Time Spy Extreme benchmark, Port Royal as well; I scored over 10000 and was 64th on the board. But fired up a game, crashed within seconds.


Back on XOC. Here's my MSI AB graphs after like, 1 hour of World Of Tanks and 2 hours of Division 2 @ Max Quality + ReShade DX12 1080p 144Hz.
2130/8000 when warmed up, starting clocks in the curve 2175. I set it up with +120 core then offset the 1.125v point 30Mhz up. This gives a very clean curve and great performance overall compared to just setting the 1.125v point and leaving core at +0. 

As you can see it flat lines at 47c with ambient 19c and ~1450RPM on the radiator fans. (4x120mm Cooler Master MF120's on a Kraken X52 in push pull top exhaust)


----------



## gfunkernaught

Imprezzion said:


> Back on XOC. Here's my MSI AB graphs after like, 1 hour of World Of Tanks and 2 hours of Division 2 @ Max Quality + ReShade DX12 1080p 144Hz.
> 2130/8000 when warmed up, starting clocks in the curve 2175. I set it up with +120 core then offset the 1.125v point 30Mhz up. This gives a very clean curve and great performance overall compared to just setting the 1.125v point and leaving core at +0.
> 
> As you can see it flat lines at 47c with ambient 19c and ~1450RPM on the radiator fans. (4x120mm Cooler Master MF120's on a Kraken X52 in push pull top exhaust)
> 
> View attachment 2462258


This is with the HOF XOC bios right?


----------



## Imprezzion

gfunkernaught said:


> This is with the HOF XOC bios right?


Yup, HOF XOC BIOS. Benched a run on 2175 did fine, world of tanks did crash on 2175 after like 20 minutes. 2160 was fine so far.


----------



## tps3443

I’m back guys! I have owned (4) various 2080Ti’s over the past year. And I originally sold my last (2) 2080Ti’s to purchase a RTX3080 or RTX3090. Well, I used a 1660Ti to tide me over. And I never could get a 3080 or 3090.

So I traded some components for a RTX2080Ti FE day before yesterday. And I must say, it’s a good one too!

My Samsung GDDR6 on it is game stable at 16,550Mhz. I’ve never seen such amazing 2080Ti memory before.


Anyways, I’ve got it undervolted at 0.925v at 1,920Mhz until I get my water block on it, and then I will re-flash the Galax 380 watt bios and solder the 8 Ohm shunts for 532 watts total. These stock FE air coolers are useless for overclocking and trying to run a game lol. Maybe quick Time Spy runs yield results, but not games. Your card will just heat up and slowly clock down to the floor. 1,920Mhz seems to be the sweet spot for stock cooling. And it isn’t bad at all. It runs around 75C with the undervolt at 100% fan under a 100% load.
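For anyone curious about the shunt-mod arithmetic mentioned above: the card measures power via the voltage drop across small sense resistors, so soldering a second resistor in parallel makes it under-read by the ratio of stock to combined resistance. A sketch with illustrative values only (stock 2080 Ti shunts are in the milliohm range; measure your own board before trusting any numbers):

```python
# Shunt-mod math sketch: paralleling a resistor across the stock sense shunt
# lowers the sensed voltage drop, so the card under-reads its real power draw.
# Resistor values below are assumed for illustration; check your own board.
def parallel(r1_mohm, r2_mohm):
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

def effective_limit(bios_limit_w, r_stock_mohm, r_added_mohm):
    """Real power at which the card believes it has hit the BIOS limit."""
    return bios_limit_w * r_stock_mohm / parallel(r_stock_mohm, r_added_mohm)

# e.g. an assumed 5 mOhm stock shunt with 8 mOhm soldered on top of a 380 W BIOS:
print(round(effective_limit(380, 5, 8)))  # ~618 W real at a "380 W" reading
```

The exact wattage target therefore depends entirely on the actual stock and added shunt values, which is why reported figures for the same mod vary from card to card.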

My last NON-A 2080Ti was just as fast as a RTX3080. Around 17,250 Time Spy graphics score. And it held 2,130-2,145Mhz locked in games. It was amazing! I was limited to 420 watts of power, and limited by voltage too.

With the memory my new FE has on it, I’m expecting it to be even faster!

So anyways, the 2080Ti is awesome! It has 11GB of VRAM, and will match a RTX3080 once set up properly. So, no thanks 3000 series! Couldn’t buy you anyways.


----------



## acoustic

Just as fast as a RTX3080? A 17,250 graphics score is about 1500-2000 points lower than what a 3080 will put up. Great clocks on that non-A, but still nowhere near a 3080.


----------



## tps3443

acoustic said:


> Just as fast as a RTX3080? A 17,250 graphics score is about 1500-2000 points lower than what a 3080 will put up. Great clocks on that non-A, but still nowhere near a 3080.


Wait, what??

An RTX3080 FE gets 17,800 in Time Spy graphics, “Guru3D review”. That is 3% faster than my old non-A 2080Ti. About 500-600 points away from each other. And the TUF RTX3080 AIB model got 18,500 in Time Spy graphics, which is 7% faster.


A RTX3090 gets just under 20,000 stock, and that’s still only about 15% faster than a 2.1Ghz 2080Ti.


Nowhere near??

And even when they overclocked it, they achieved about 5% better performance over the FE. So that’s still only like 8 or 9% faster.


So yeah, they are close. Very close. The RTX3080 is faster, but when you watercool a 2080Ti and run it at just 2.1Ghz, they perform almost the same.


So with extra Vram, it is a no brainer.


----------



## acoustic

Must be the worst 3080 in history. My 3080 on stock BIOS with just max power limit is around 18800, with 24/7 clocks it's around 19400.

17800 is pretty low, even for an FE in my opinion. Where is that result from?

I came from a heavily OC'd 2080TI @ 2100/8000 and it's not even close in comparison at 3840x1600. Huge difference in performance, Metro Exodus is awesome with no DLSS completely maxed out.


----------



## Avacado

acoustic said:


> Must be the worst 3080 in history. My 3080 on stock BIOS with just max power limit is around 18800, with 24/7 clocks it's around 19400.
> 
> 17800 is pretty low, even for an FE in my opinion. Where are you drawing that result from?


I swear to god, every time I see your name, it reads "Autistic" not "Acoustic". Dammit Jim.


----------



## acoustic

Avacado said:


> I swear to god, every time I see your name, it reads "Autistic" not "Acoustic". Dammit Jim.


Makes sense since I'm a moron


----------



## tps3443

J7SC said:


> I have been following your GPU_v discussion - below is a 4K screenshot w/ MSI AB (open in separate tab, double click) for MS FlightSIm 2020 in SLI-'CFR' 4K / Ultra. Cards are not on full tilt to get power consumption down...here around 680W for both combined in this mode, rather than the 760W on max OC and PL. The CPU is a TR2950X, also dialed back a bit re. power consumption. Otherwise the thing easily and consistently draws 1140W+ overall in MS FS 2020
> 
> My question is this: In order to get max performance out of available PL etc,* isn't it better to run lower GPU_V* ? FPS are not an issue with this setup and app. IMO, getting to 1.093v would just take out available PL headroom ? Per pic, I leave the voltage slider completely untouched, and the GPU bios is stock...also cards are obviously heavily w-cooled.
> 
> 
> View attachment 2461684


I don’t even think my PSU could handle your (2) 2080Ti’s with my CPU.


acoustic said:


> Must be the worst 3080 in history. My 3080 on stock BIOS with just max power limit is around 18800, with 24/7 clocks it's around 19400.
> 
> 17800 is pretty low, even for an FE in my opinion. Where is that result from?
> 
> I came from a heavily OC'd 2080TI @ 2100/8000 and it's not even close in comparison at 3840x1600. Huge difference in performance, Metro Exodus is awesome with no DLSS completely maxed out.


The Guru3D review achieved 17,800 graphics. Looks normal to me for an FE. And with an overclocked AIB TUF model they achieved around 18,500.

So I’m just going by review data. And it is like 3-7%.

I wouldn’t even notice this in a game. Not to mention, I am going further than 2.1GHz with my 2080Ti. And the memory is abnormally good too.

I was voltage- and power-limited with a non-A that achieved 17,250.

An FE 2080Ti with some really good Samsung GDDR6 will close the gap even further.


Just saying: RTX3080, nothing to worry over. That’s why I went and got another 2080Ti.


----------



## pewpewlazer

tps3443 said:


> Wait, what??
> 
> An RTX3080 FE gets 17,800 in Time Spy graphics ("Guru3D review"). That is 3% faster than my old non-A 2080Ti, about 500-600 points apart. And the TUF RTX3080 AIB model got 18,500 Time Spy graphics, which is 7% faster.
> 
> And even when they overclocked it, they achieved about 5% better performance over the FE. So that’s still only like 8 or 9% faster.


You're comparing a stock air cooled 3080 to a water cooled 2080 Ti with some resistors soldered on it.

Take a soldering iron to the 3080, strap a water block on it, and say hello to the very real ~20% performance difference between the two cards.


----------



## tps3443

pewpewlazer said:


> You're comparing a stock air cooled 3080 to a water cooled 2080 Ti with some resistors soldered on it.
> 
> Take a soldering iron to the 3080, strap a water block on it, and say hello to the very real ~20% performance difference between the two cards.


I am well aware, and it does sound nice and all. But, you can’t physically buy an RTX3080.

I needed a GPU desperately. I had a turtle-power 1660Ti that cannot keep up with my demands, and I just cannot keep searching and waiting. An opportunity came up to grab a 2080Ti FE without coming out of pocket; I traded some parts.


Retailers are only meeting around 6% of total orders for RTX3080’s. So it is practically impossible to buy one.


----------



## tps3443

What are you guys getting on your 2080Ti memory overclocks? I am curious to see everyone’s results. I keep mine at 16,550MHz, 100% game stable.

Is this considered golden memory? 

I’m sure it’ll run a little further for just benchmarking and what not.


----------



## Sync0r

tps3443 said:


> Wait, what??
> 
> An RTX3080 FE gets 17,800 in Time Spy graphics ("Guru3D review"). That is 3% faster than my old non-A 2080Ti, about 500-600 points apart. And the TUF RTX3080 AIB model got 18,500 Time Spy graphics, which is 7% faster.
> 
> 
> An RTX3090 gets just under 20,000 stock, and that’s still only about 15% faster than a 2.1GHz 2080Ti.
> 
> 
> Nowhere near??
> 
> 
> And even when they overclocked it, they achieved about 5% better performance over the FE. So that’s still only like 8 or 9% faster.
> 
> 
> So yeah, they are close. Very close. The RTX3080 is faster, but when you watercool a 2080Ti and run it at just 2.1GHz, they perform almost the same.
> 
> 
> So with the extra VRAM, it is a no-brainer.


I was once in denial with my 2080 Ti until I managed to get a 3090. Couple of screenshots of Control, 2080Ti vs 3090, both not overclocked. It was a quick test; yes, you can overclock the 2080 Ti a lot more, but the same goes for the 3090.


----------



## Imprezzion

tps3443 said:


> What are you guys getting on your 2080Ti memory overclocks? I am curious and interested to see everyone’s results. I keep mine at 16,550MHz 100% game stable.
> 
> Is this considered golden memory?
> 
> I’m sure it’ll run a little further for just benchmarking and what not.


Mine with Samsung does 15600 max. It can do 16000 but will occasionally give random DirectX crashes. Anything over 16000 is pretty golden to me.


----------



## Medizinmann

tps3443 said:


> What are you guys getting on your 2080Ti memory overclocks? I am curious and interested to see everyone’s results. I keep mine at 16,550MHz 100% game stable.
> Is this considered golden memory?
> 
> I’m sure it’ll run a little further for just benchmarking and what not.


I would agree here with user Imprezzion - anything over 16000 is very good.
Both my 2080Tis can do 15600 regularly, and the one under water even does up to 16600 with extra cooling – but a lot of noise…

Best regards,
Medizinmann


----------



## tps3443

Imprezzion said:


> Mine with Samsung does 15600 max. It can do 16000 but will occasionally give random DirectX crashes. Anything over 16000 is pretty golden to me.


That's what I was thinking, just making sure I'm not crazy.

I am very familiar with the annoying DX errors and crashes to desktop on Turing. Gaming for hours is the only real way to test stability.

I really feel like it is a miracle though. Back when I had my last 2080Ti, I'm pretty sure I would have thought someone was full of it if they were supposedly running 16,550 on the GDDR6.


----------



## Medizinmann

tps3443 said:


> That's what I was thinking, just making sure I'm not crazy.
> 
> I am very familiar with the annoying DX errors and crashes to desktop on Turing. Gaming for hours is the only real way to test stability.
> 
> I really feel like it is a miracle though. Back when I had my last 2080Ti, I'm pretty sure I would have thought someone was full of it if they were supposedly running 16,550 on the GDDR6.


Again - it is possible for me under water with extra cooling - like everything running at max (pump, fans on rads etc.) and lower ambient temps (room temp was 15°C with open windows in winter) - but no chance with the "standard" air cooler...

...and only for like a Time Spy run or two - didn't test real gaming. So nothing really for everyday use...

So - if you get this stable - then it is indeed a golden sample.

Best regards,
Medizinmann


----------



## Imprezzion

I can run benches at 2190 core / 16400 memory but it artifacts pretty badly at that point. Scores scale and validate but it's a mess lol. In games I run stable at 2145-2130 / 15600 depending on which temperature bin it's in. That is on an AIO Kraken G12 + X52 and the HOF XOC BIOS @ 1.125v.


----------



## Bart

I can bench safely at +250 core, +850 memory, whatever that works out to in boost clock form. But I never OC when gaming; I just don't need an extra 10fps when I'm already doing 80-90.


----------



## Krzych04650

Does anyone know how to prevent DX12 benchmarks like Time Spy or Port Royal, or DX12 games, from using multiple GPUs? Normally you would just turn SLI off, but with explicit mGPU support I cannot find a way to disable that, and benchmarks always use two cards no matter what. I want to check the scaling. I've even searched my BIOS to see if my motherboard has a PCI-E slot disable feature, but it doesn't; it is very rare for a mobo to have one, from what I've read.


----------



## tps3443

Yeah, my last 2080Ti on watercooling would run 16,200 in benchmarks, and I ran 16,000 in games.

My first 2080Ti with Hynix memory only managed 15,600Mhz.

It just seems amazing to me how well this one does.


----------



## J7SC

tps3443 said:


> What are you guys getting on your 2080Ti memory overclocks? I am curious and interested to see everyone’s results. I keep mine at 16,550MHz 100% game stable.
> 
> Is this considered golden memory?
> 
> I’m sure it’ll run a little further for just benchmarking and what not.





Imprezzion said:


> Mine with Samsung does 15600 max. It can do 16000 but will occasionally give random DirectX crashes. Anything over 16000 is pretty golden to me.


...not sure I like the term 'golden' as it often means different things to different people, and it also depends on other factors such as custom bios, EVBot, type of (sub) ambient cooling etc. For my own setup, the highest *single*-GPU VRAM I can 'run' is an effective 16,834 MHz. That said, with that speed, scores and fps are already declining due to errors / efficiency.

For *both* of my (heavily cooled) cards in NVL/SLI with both cards having Micron btw, I get best and stable results at an effective 16,448 MHz - for example per below, from sustained Port Royal HoF runs (same for gaming and sims). So going higher -even when staying below 'crash' ranges - isn't always delivering the best results. Micron also runs a bit tighter VRAM timings but gets a bit hotter, afaik.

Checking other runs at PortRoyal HoF and leaving out the obvious LN2 subs, 3090s etc, a few 2080 Ti w/ Samsung can reach 17,000 MHz or even 17,200 MHz. Finally, there is one MSI at an effective VRAM speed of 18,760 MHz...likely the rare MSI Gaming '*Z*' Trio (spoiler)


Spoiler











At the end of the day, going from a well-running 2080 Ti to a 30*8*0 is probably not really worth it, IMO (never mind coming from 2x 2080 Ti). That said, going from a 2080 Ti to a 30*9*0 might very well make a noticeable difference, especially at 4K Ultra settings where an extra few fps can make the difference between playable and not playable...I have got my eyes on some not-even-released-yet 3090 custom models, but for now, it is all academic anyhow - given the total lack of any supply in my neck of the woods of RTX 3K...










The 2080 Tis in the PortRoyal HoF run above are factory full-waterblock and have a dedicated loop with 2 pumps and 1080 / 55 rads (here on the test bench before system integration) :


----------



## Imprezzion

My RAM is probably temperature limited at this point, as at 16000+ (+1000 or more) it usually runs fine for the first 20 minutes, then crashes. Problem is I'm not using a full-cover block; since I switch GPUs so often I just went for a universal AIO, so I can enjoy 40-44C core temps without having to buy new blocks for every GPU I switch lol.

I got my RAM cooled with some Enzotech full-copper 14x14mm heatsinks and the G12's fan, but most of the RAM is underneath the mounting plate close to the core, so it doesn't really get any airflow. The VRM is cooled by Alphacool copper VRM heatsinks (I think they are 6x6mm?) and they are directly underneath the fan. Even at 400+W 1.125v I can still touch the heatsinks / PCB behind them and it feels way less hot than it did with the stock air cooler, so I'm fine there, but the VRAM gets quite hot.

If I could find an EK 2080 Ti reference block with EK QDC fittings (or just a straight-up EK Phoenix 2080 Ti full-cover block) I would buy it, full-cover it with the EK Phoenix 280 kit I have, and swap the CPU to a new 360mm custom loop, but yeah, those blocks don't really seem to exist anymore lol.


----------



## tps3443

J7SC said:


> ...not sure I like the term 'golden' as it often means different things to different people, and it also depends on other factors such as custom bios, EVBot, type of (sub) ambient cooling etc. For my own setup, the highest *single*-GPU VRAM I can 'run' is an effective 16,834 MHz. That said, with that speed, scores and fps are already declining due to errors / efficiency.
> 
> For *both* of my (heavily cooled) cards in NVL/SLI with both cards having Micron btw, I get best and stable results at an effective 16,448 MHz - for example per below, from sustained Port Royal HoF runs (same for gaming and sims). So going higher -even when staying below 'crash' ranges - isn't always delivering the best results. Micron also runs a bit tighter VRAM timings but gets a bit hotter, afaik.
> 
> Checking other runs at PortRoyal HoF and leaving out the obvious LN2 subs, 3090s etc, a few 2080 Ti w/ Samsung can reach 17,000 MHz or even 17,200 MHz. Finally, there is one MSI at an effective VRAM speed of 18,760 MHz...likely the rare MSI Gaming '*Z*' Trio (spoiler)
> 
> 
> Spoiler
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> At the end of the day, going from a well-running 2080 Ti to a 30*8*0 is probably not really worth it, IMO (never mind coming from 2x 2080 Ti). That said, going from a 2080 Ti to a 30*9*0 might very well make a noticeable difference, especially at 4K Ultra settings where an extra few fps can make the difference between playable and not playable...I have got my eyes on some not-even-released-yet 3090 custom models, but for now, it is all academic anyhow - given the total lack of any supply in my neck of the woods of RTX 3K...
> 
> View attachment 2462780
> 
> 
> The 2080 Tis in the PortRoyal HoF run above are factory full-waterblock and have a dedicated loop with 2 pumps and 1080 / 55 rads (here on the test bench before system integration) :
> 
> View attachment 2462782


I had no idea a Gaming Z Trio 2080Ti existed, and it has the Samsung 16gbps modules too! Very nice. I am browsing around, and I can’t even find one for sale. I wouldn’t mind swapping my FE for one.


----------



## jura11

For VRAM OC: on my Asus RTX 2080Ti Strix with the Asus Matrix BIOS I'm running a daily OC of +800MHz on VRAM, or 8200MHz per MSI Afterburner (16400MHz effective). In some benchmarks I can run +845MHz on VRAM, and in some +830MHz seems to be the best-scoring value.

The Zotac RTX 2080Ti AMP, now with the EVGA RTX 2080Ti FTW3 BIOS, will do 8000MHz on VRAM, or 16k as max. For max OC per GPU, the Asus will easily do 2205MHz in most benchmarks; gaming clocks are 2175-2190MHz, like in Witcher 3 etc. In RT games like Control, BFV or Metro Exodus I run a 2145-2160MHz OC.

The Zotac RTX 2080Ti AMP with the EVGA FTW3 BIOS now does 2145MHz in Port Royal or any other benchmark. Sadly I can't test SLI or NvLink, because one of my 2080Ti's is in an x8 slot and the other is in an x4 slot; another GTX1080Ti is in slot 2, which is x8 as well, and in the x1 slot is a PCI-E SATA controller card.

I know there is a way to run SLI without the NvLink bridge or even an SLI bridge, I just need to find the time to try it hahaha

Hope this helps

Thanks, Jura


----------



## jura11

Whether I will be getting an RTX 3090 or 3080, let's see; right now I have no idea, availability is the issue now. As above, I want a custom 3090, probably an RTX 3090 Strix OC again, or a 3080 Strix OC.

I will keep my 2080Ti's for sure, like now; I use the GPUs for rendering.

Hope this helps 

Thanks, Jura


----------



## pewpewlazer

tps3443 said:


> What are you guys getting on your 2080Ti memory overclocks? I am curious and interested to see everyone’s results. I keep mine at 16,550MHz 100% game stable.
> 
> Is this considered golden memory?
> 
> I’m sure it’ll run a little further for just benchmarking and what not.


My Micron card runs +1000 (16,000MHz) all day. It can pass benchmarks around +1100 but performance consistency is questionable.

On air I could only do +750ish.

+1275 sounds pretty decent for Samsung, but certainly not a "golden sample".
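As a reference for the offsets vs. effective speeds being quoted: the 2080 Ti's GDDR6 runs 1750 MHz actual / 14000 MHz effective at stock, and an Afterburner offset is applied to the 7000 MHz double-data-rate clock it displays, so each +1 MHz of offset adds 2 MHz effective. A small converter (bus width from the spec sheet; this is the usual convention, worth double-checking against your own card):

```python
STOCK_EFFECTIVE_MHZ = 14000  # 2080 Ti GDDR6 stock effective speed
BUS_WIDTH_BITS = 352         # 2080 Ti memory bus

def effective_mhz(ab_offset_mhz):
    """Effective GDDR6 speed for a given MSI Afterburner memory offset."""
    return STOCK_EFFECTIVE_MHZ + 2 * ab_offset_mhz

def bandwidth_gb_s(eff_mhz, bus_bits=BUS_WIDTH_BITS):
    """Theoretical memory bandwidth in GB/s."""
    return eff_mhz * bus_bits / 8 / 1000

print(effective_mhz(1000))           # +1000 -> 16000, as quoted above
print(effective_mhz(1275))           # +1275 -> 16550, as quoted above
print(round(bandwidth_gb_s(14000)))  # 616 GB/s at stock, matching the spec
```

By the same math, a +1275 offset works out to roughly 728 GB/s of theoretical bandwidth, up from 616 GB/s stock.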


----------



## tps3443

pewpewlazer said:


> My micron card runs +1000 (16,000mhz) all day. Can pass benchmarks around +1100 but performance consistency is questionable.
> 
> On air I could only do +750ish.
> 
> +1275 sounds pretty decent for Samsung, but certainly not a "golden sample".


Keep in mind though,

I run +1275 on the memory in games at around 80C GPU temps. I am still on my FE cooler; the backplate will melt your flesh off lol. I am watercooling very soon, just waiting on my components. So maybe it'll do even better then, because right now it cooks and cooks for hours and hours during Metro Redux with ray tracing on ultra.

I am undervolted, running 1,920 solid for maximum performance inside of 320 watts.


----------



## slopokdave

So a few days ago Amazon had EVGA 2080 Ti XC Hydro Coppers in stock for $899; still a terrible price, but I wanted to give one a try, as I had never run a Hydro Copper.

First, I feel like these are kind of scammy. It's not an XC; it's actually a Black/Gaming board, so it's non-A. Very misleading that they label these as "XC" cards. You aren't going to get much of an overclock out of this card. Also, temps are meh at best. I had a Strix 2080 Ti with a Phanteks block before; temps were at most 55C in gaming. This Hydro Copper has trouble holding 65C.

I think EVGA was asking $1500 for these cards when they released? Wow....


----------



## Avacado

slopokdave said:


> So a few days ago Amazon had EVGA 2080 XC ti hydro coppers in stock for 899; still a terrible price, but I wanted to give one a try, I never ran a hydro copper.
> 
> First, I feel like these are kind of scammy. It's not an XC; its actually a black/gaming board. So it's Non-A. Very misleading that they label these as "XC" cards. You aren't going to get much of an overclock out of this card. Also, temps are meh at best. I had a Strix 2080 ti w/ Phanteks block before, temps at most 55c in gaming. This hydro copper has trouble holding 65C.
> 
> I think EVGA was asking $1500 for these cards when they released? Wow....


I'd rather run an aftermarket block. Last I checked there were 2 SKUs of Ti variants for $999 on B-Stock.


----------



## Imprezzion

It's Amazon, you have a solid return policy there. I'd return it honestly.

Just buy some random second-hand A board which has a block available, buy a block, still come out cheaper, and enjoy it.


----------



## slopokdave

Avacado said:


> Rather run an aftermarket block. Last I checked there were 2 SKU's of Ti variants for 999$ on B-Stock





Imprezzion said:


> It's amazon, you have a solid return policy there. I'd return it honestly.
> 
> Just buy some random second had A board which has a block available, buy a block, still be cheaper off and enjoy it.


No worries, agreed on both. I actually had a Strix 2080 ti before w/ Phanteks block and it ran much cooler. I was still able to return it to Amazon though and wait for a 3090. The EVGA is just to get me by until I can land a 3090 FE.


----------



## tps3443

slopokdave said:


> No worries, agreed on both. I actually had a Strix 2080 ti before w/ Phanteks block and it ran much cooler. I was still able to return it to Amazon though and wait for a 3090. The EVGA is just to get me by until I can land a 3090 FE.


I pulled a 2080Ti from that Aorus eGPU Thunderbolt gaming box. It was a watercooled non-A; it had a 240mm AIO hooked up to it from the factory. Obviously I had to stack the resistors on the back for more power. But that card ran like 48-52C at 2,100-2,130MHz in anything.

It had Samsung memory too! It was nothing fancy, but it ran super cool. I repasted the card, and the 120mm rad fans did spin at around 3,800RPM.

Non-A's are still fast! This card ran 17,250 in Time Spy graphics. And maybe check your block contact.




I am looking forward to watercooling my new 2080Ti FE that I just got. The memory overclocking on it is just absolutely amazing. Unfortunately the stock FE cooler has a hard time managing more than 1,920 for sustained loads.


----------



## tps3443

I played Death Stranding on my 2080Ti today for the first time. The game is just absolutely butter smooth. It's optimized well for PC!

100% GPU utilization all the time, 130 to 160+ FPS, maxed out at 1440P.

This game uses every single thread of my 7980XE. All 36 threads see 65%-100% utilization.


----------



## TONSCHUH

Sync0r said:


> Nice, I have the ThermalTake Core x9, I don't really care about looks, just performance these days, hence the cable management. The amount of rads you can fit in it is just amazing, I have 2x480mm, 1x420mm, 2x240mm all 60mm thick, almost all in push pull, water temps just sit at ambient. 2080 Ti under load hits 32c with 20c water at 1.125v and 420watt power draw from the card.
> 
> View attachment 2461025
> 
> View attachment 2461026


Wow, that looks pretty awesome too !

Cable management is not my strength either, but as you said, as long as it performs well, it's all good!

I was almost finished after bleeding the loop etc.; I only had one small hurdle with the custom CPU cables, because they weren't labelled, but after swapping them out the system started up with the jump-start plug connected to the 24-pin cable.

Later on, when I connected the 24-pin to the mobo, it wouldn't boot at all.

I got an error code 00, and the system turned on for 30 seconds and then gave me a red LED on my AX1200i, where the self-test button is.

It got weirder from there, because when I connected the jump-start plug to the 24-pin again, it would only turn on for 30 seconds and then give me the red LED again (!).

Prior to connecting the 24-pin to the mobo for the first time, I had been bleeding the loop for a couple of days with the jump-start plug in place, and now it wouldn't run for longer than 30 seconds and I needed to remove the plug to reset the PSU.

So I removed all cables from the PSU and ran the self-test, which it passed (!).

After even trying the original cables without a change in outcome, I removed plenty of cable ties etc. so that I was at least able to move the top rads slightly to the side, lift the GPU out from between the rads, and slide the mobo with its tray out the side of the case, without pulling everything apart and having to drain the whole loop again.

I removed the mono-block and CPU and, after some searching, found 2-3 bent and partially broken pins on the CPU socket.

I'm not exactly sure how I did that, but after the de-lid / re-lid / replacement of the standard IHS with a copper IHS, I might have tightened the centre screw (which held the IHS in place to cure the glue) a bit too much, because when I put the CPU back in its socket it wobbled around a bit, like it was slightly warped.

I tried to fix the pins and put everything back together, and even re-flashed the mobo BIOS after I didn't get any error code at all once the CMOS battery had been out for a bit.

The CPU didn't wobble in the socket anymore, so I tracked down a factory-refurbished mobo for ~AUD 350, which hasn't arrived yet.

New ones are going for up to AUD 1,906 (!), which is even more expensive than when I bought mine new back then.

When the mobo finally arrives, I'll cross my fingers that the CPU and PSU are still alive.

🙂


----------



## kuutale

So I have a problem:

My Asus RTX 2080 Ti won't clock down; it stays at a 1230MHz GPU clock and 14000MHz memory. I don't know when this started, but load is 0% and the card won't go idle.
I've tried power settings, and NVIDIA Inspector too.
The Windows power plan doesn't change anything.

Drivers were cleaned and freshly installed.
I have 2x 144Hz monitors, an LG and an Acer.

Can anyone advise what the hell is happening? I think I checked last month and the card was downclocking then, so what happened? I don't have any idea what to do.


----------



## jura11

kuutale said:


> So I have a problem:
> 
> My Asus RTX 2080 Ti won't clock down; it stays at a 1230MHz GPU clock and 14000MHz memory. I don't know when this started, but load is 0% and the card won't go idle.
> I've tried power settings, and NVIDIA Inspector too.
> The Windows power plan doesn't change anything.
> 
> Drivers were cleaned and freshly installed.
> I have 2x 144Hz monitors, an LG and an Acer.
> 
> Can anyone advise what the hell is happening? I think I checked last month and the card was downclocking then, so what happened? I don't have any idea what to do.


Hi there 

Try setting power management to Optimal in the NVIDIA Control Panel. I have the same GPU as you, and mine downclocks to idle speeds with Optimal power management; I'm running one 3440x1440 120Hz monitor and a second 1440p 144Hz AOC monitor.

What other background processes or programs are you running - iCue or something like that?

Hope this helps 

Thanks, Jura


----------



## kuutale

jura11 said:


> Hi there
> 
> Try set in Nvidia Control panel power management to Optimal, I have same GPU as you have and my GPU is downclock to idle speeds with Optimal power management and I'm running one 3440x1440 120Hz monitor and second 144hz 1440p AOC monitor
> 
> What other background processes or programs are you running, iCue or something like this?
> 
> Hope this helps
> 
> Thanks, Jura


I have an LG 34GK950F (3440x1440) and an Acer Predator XB271HU (2560x1440); both are 144Hz. I tried Optimal but it doesn't help.


----------



## acoustic

Unplug one monitor, and reboot. Multi-monitor / high-refresh-rate setups used to have an issue similar to this, if I remember correctly. Very curious why you're seeing this issue now.


----------



## jura11

Set NVIDIA power management to Optimal and reboot the PC, or as above, unplug one of the monitors and again reboot the PC.

I'm running Acer Predator X34p with AOC 27G2U

Hope this helps 

Thanks, Jura


----------



## kuutale

jura11 said:


> Set Nvidia power management to Optimal and reboot PC or as above unplug one of the monitors and again reboot PC
> 
> I'm running Acer Predator X34p with AOC 27G2U
> 
> Hope this helps
> 
> Thanks, Jura


I did this: with one monitor connected it downclocks, but with both connected it's the same problem. The only thing I remember is that I did a BIOS update 3-4 days ago, and I always check HWiNFO to see what a new BIOS does. I can't remember if that's when the problem began; before that I didn't notice it.

Maybe I need to uninstall the drivers and install them again.

I use DP cables on both monitors, if that helps solve the problem.


----------



## Imprezzion

Mine also doesn't downclock with a 144Hz and a 75Hz secondary. If I set my primary to 120Hz it does. One is DP the other HDMI.

I'll play around with power settings a bit but it doesn't really bother me lol.


----------



## kuutale

Imprezzion said:


> Mine also doesn't downclock with a 144Hz and a 75Hz secondary. If I set my primary to 120Hz it does. One is DP the other HDMI.
> 
> I'll play around with power settings a bit but it doesn't really bother me lol.


It doesn't bother me either, but in the past the card downclocked with these monitors. Why did it suddenly stop downclocking? I'm just interested in what happened.


----------



## J7SC

kuutale said:


> I did this: with one monitor connected it downclocks, but with both connected it's the same problem. The only thing I remember is that I did a BIOS update 3-4 days ago, and I always check HWiNFO to see what a new BIOS does. I can't remember if that's when the problem began; before that I didn't notice it.
> 
> Maybe I need to uninstall the drivers and install them again.
> 
> I use DP cables on both monitors, if that helps solve the problem.



Your Windows registry will enumerate each GPU based on its BIOS. For example, if you have a single card with a switch for two or three BIOSes (e.g. GTX 980 Cl), it will show up in the registry as two or three 'separate' GPUs. Same with a different BIOS you flashed onto the same card.

I would try to...
a.) uninstall the card in Device Manager and use DDU or
b.) at least uninstall all NVidia drivers in control panel / programs (and apps such as MSI AB, as it is bios specific, including profiles)
c.) reboot and install drivers again

Finally, keep in mind that the new BIOS may have different HDMI / DisplayPort settings


----------



## jura11

kuutale said:


> I did this: with one monitor connected it downclocks, but with both connected it's the same problem. The only thing I remember is that I did a BIOS update 3-4 days ago, and I always check HWiNFO to see what a new BIOS does. I can't remember if that's when the problem began; before that I didn't notice it.
> 
> Maybe I need to uninstall the drivers and install them again.
> 
> I use DP cables on both monitors, if that helps solve the problem.


Hi there 

I'm using DP cables / DP ports as well on my Asus RTX 2080Ti Strix. My monitors are 120Hz (main) and 144Hz (second), and my Asus RTX 2080Ti Strix downclocks as it should; I haven't tried two 144Hz monitors to see whether the GPU would still downclock.

But from other people's reports it seems this 144Hz bug is still around if you are using multiple 144Hz monitors, or a G-Sync or FreeSync 144Hz monitor.

Can you try turning off G-Sync on one of the monitors, or running the second monitor at 60Hz or even 120Hz, to see if the GPU will downclock or not?

I'm running my main monitor at 120Hz and secondary at 144Hz, and the GPU sits at 300MHz on the core and 405MHz on the VRAM.

Hope this helps 

Thanks, Jura


----------



## kuutale

jura11 said:


> Hi there
> 
> I'm using as well DP cables or DP ports on my Asus RTX 2080Ti Strix, although my monitors are 120hz and 144hz is second monitor and due this my Asus RTX 2080Ti Strix is downclocking as should, didn't tried two 144hz monitors if with them GPU will downclock or not
> 
> But from other people reports seems this 144hz bug is still here if you are using multiple 144hz monitors or if you are using G-Sync 144hz or Freesync 144hz monitor
> 
> Can you try turn off G-Sync on one of the monitors or try run second monitor at 60hz or even 120hz if GPU will downclock or not?
> 
> I'm running my monitors as main at 120hz and secondary at 144hz and GPU sits at 300MHz at core and 405MHz at VRAM
> 
> Hope this helps
> 
> Thanks, Jura


I uninstalled the drivers and the problem went away. Thanks for all the help!


----------



## tps3443

I looked into some of the RTX3070 reviews, and my last non-A 2080Ti was 27% faster in Time Spy than the RTX3070 from the Guru3D review. So, I must say that's pretty darn good for a 2-year-old video card. With 11GB of VRAM, I am hopeful about keeping my 2080Ti around for much longer.


----------



## Imprezzion

I fixed my downclocking issue lol. I had my monitors set up pretty weirdly, with my primary 144Hz on HDMI (I thought I had DP) and my secondary on a DP-to-HDMI converter. This meant my primary didn't expose the 144Hz refresh rate, so I had to set it manually as a custom resolution, and that prevented the card from downclocking.

I bought a proper DP cable and set it up with the 144Hz monitor on DP, which now natively detects 144Hz without a custom res, and my secondary directly on HDMI @ 75Hz custom res, and now it does downclock properly to 300MHz idle.

Also, I raised my clocks to 2145MHz load and 7900MHz VRAM on the 1.125v HOF XOC BIOS. Temps around 44-45C on water, and stable through like 8 hours of Division 2.

3DMark Fire Strike did 16690 graphics (16335 total with a 5.1GHz 10900KF) and Superposition 1080p Extreme did a 10121 score.


----------



## jaqkar

Hi guys! I have a 2080 Ti Gaming X Trio. Can one still get a BIOS that adds power? My current BIOS is 90.02.42.00.14. I read the entry below and am not sure if it's possible anymore:

Important Starting in the second half of 2019, many cards are now shipping with a newer XUSB FW Version ID, preventing a lot of BIOSes in the TechPowerUp BIOS Collection from being flashed to the card through NVFlash, so be aware when purchasing a new card, you might not be able to flash a higher power limit BIOS on it. Example: MSI Gaming X Trio purchased in Q3 2018 shipped with the 0x70060003 firmware (Build Date: 2018-08-29), the same MSI card purchased a year later in Q3 2019 shipped with the firmware 0x70100003 (Build Date: 2018-12-28), this newer card also had a different EEPROM (WBond instead of ISSI). The BIOS and firmware can however be overwritten by force, using an external programming/test clip, bypassing NVFlash entirely.
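As a rough illustration, the version gate described in that note boils down to comparing the XUSB FW IDs as plain integers. The sketch below is purely illustrative; `flash_allowed` is a made-up helper, and the real NVFlash certificate check is more involved than this. The two IDs are the ones quoted above:

```python
# XUSB firmware version IDs quoted in the note above
FW_Q3_2018 = 0x70060003  # MSI Gaming X Trio bought Q3 2018
FW_Q3_2019 = 0x70100003  # same model bought Q3 2019

def flash_allowed(card_fw: int, bios_fw: int) -> bool:
    """Sketch only: NVFlash rejects a BIOS image whose bundled XUSB
    FW component is older than what the card already runs."""
    return bios_fw >= card_fw

# An older-FW BIOS image can't go onto a newer-FW card...
print(flash_allowed(FW_Q3_2019, FW_Q3_2018))  # False
# ...but the other direction is fine.
print(flash_allowed(FW_Q3_2018, FW_Q3_2019))  # True
```

That is why a card bought in late 2019 refuses most of the older high-power-limit BIOSes in the TechPowerUp collection.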


----------



## tamm0r

tps3443 said:


> I pulled a 2080Ti from that Aorus eGPU thunderbolt gaming box. And it was a watercooled non-A, it had a 240MM AIO hooked up to it from the factory. Obviously I had to stack the resistors on the back for more power. But, that card ran like 48-52C at 2,100-2,130Mhz in anything.
> 
> It had Samsung memory too! It was nothing fancy, but ran super cool. I repasted the card, and the 120MM rad fans did spin around 3,800RPM.
> 
> NON-A’s are still fast! This card ran 17,250 in timespy graphics. And maybe check your block contact.


I have the same eGPU box and started to look into overclocking it. Somehow I'm unable to increase the power target no matter which tool I use (it's locked at 100%), and flashing another BIOS didn't work either. The stock BIOS [1] is limited to 250W, which is pretty low compared to the very similar AORUS GeForce RTX™ 2080 Ti XTREME WATERFORCE 11G [2].

You wrote "Obviously I had to stack the resistors on the back *for more power*" - what does that mean?

[1] Gigabyte RTX 2080 Ti VBIOS
[2] AORUS GeForce RTX™ 2080 Ti XTREME WATERFORCE 11G Specification | Graphics Card - GIGABYTE Global


----------



## pewpewlazer

tamm0r said:


> *You wrote "Obviously I had to stack the resistors on the back for more power" - what does that mean?*


It means this:

2080 Ti - Working Shunt Mod (www.overclock.net)

> I finally got a chance to test some other shunts on my FE 2080 Ti. When using 5 mOhm resistors, as I did on my Titan X (Pascal), the 2080 Ti goes into emergency mode with a lot of loads (stuck at 300 MHz). My first try adding a shunt above 5 mOhms, 8 mOhms, did not trigger the cards safety...


----------



## tamm0r

Ok, so there's no other (software-only) way to increase the power limit? Any idea why I couldn't flash another bios? Is it because of the non-A chip?


----------



## Imprezzion

Yeah, that's exactly the reason. The only BIOS for non-A cards is the 310W Palit one, which isn't really that much of an improvement.


----------



## jura11

pewpewlazer said:


> Have fun with those ATI AMD drivers ✌


I've been on AMD X570 for quite a while now, after switching from X99 and a 5960X, and the difference is night and day: no more issues and no more random BSODs that I could never resolve.

Regarding the AMD drivers, they're no worse than Nvidia's in some aspects and scenarios. I have one build running a Vega 64 and two RX 5700 XTs, which I use for OpenCL development and OpenCL rendering with LuxRender and a few other programs, and it has been flawless; I've never run into issues on it running Linux, OSX and Win 10.

Hope this helps 

Thanks, Jura


----------



## Krzych04650

Bart said:


> I'm ditching my 2080ti for a 6900XT without question, not that my 2080ti runs bad, but just to NOT buy Nvidia again. I'd pass on the 3080/3090 even if I could find one, but I'm not passing on the 6900XT. And my local nerd store tells me there will be stock, like actual cards, not just listings for vaporware.


Can you explain your reasoning behind this, I mean actual arguments? I am asking genuinely, not attacking. Assuming that they match NVIDIA in average raw performance, which is by the way how things should be and how they already were some years ago, and yet AMD could never really compete on the high end because of a nonexistent feature set and general incompetence, among other things. Now they are back with raw performance parity, but the rest didn't really change and the gap in feature set only widened over the years, so what reason can there be for getting a high-end AMD card "without question" suddenly? It just sounds as if something game-changing happened and I cannot really see what that is.


----------



## gfunkernaught

Krzych04650 said:


> Can you explain your reasoning behind this, I mean actual arguments? I am asking genuinely, not attacking. Assuming that they match NVIDIA in average raw performance, which is by the way how things should be and how they already were some years ago, and yet AMD could never really compete on the high-end because of nonexistent feature set and general incompetence, among other things. Now they are back with raw performance parity, but the rest didn't really change and the gap in feature set only widened over the years, so what reason can there be for getting high-end AMD card "without question" suddenly? It just sounds like if something game changing happened and I cannot really see what that is.


Maybe it is because of Nvidia's business practices? Would you support a company that had been decent but then decided to arbitrarily hike their prices and shift focus from quality to marketing?


----------



## CptAsian

gfunkernaught said:


> Maybe it is because of Nvidia's business practices? Would you support a company that had been decent but then decided to arbitrarily hike their prices and shift focus from quality to marketing?


They've been doing that for years though so I don't see how that could flip someone so strongly this generation. Could be that though, that's just how I feel about it.


----------



## Bart

Krzych04650 said:


> Can you explain your reasoning behind this, I mean actual arguments? I am asking genuinely, not attacking. Assuming that they match NVIDIA in average raw performance, which is by the way how things should be and how they already were some years ago, and yet AMD could never really compete on the high-end because of nonexistent feature set and general incompetence, among other things. Now they are back with raw performance parity, but the rest didn't really change and the gap in feature set only widened over the years, so what reason can there be for getting high-end AMD card "without question" suddenly? It just sounds like if something game changing happened and I cannot really see what that is.


No particular reasoning, I just want a new toy. Plus I'd rather support AMD than Nvidia. My Radeon VII is a GREAT card under water, and while not as fast as a 2080ti, it's faster than most people think once you tame the temps, the image quality is LEAGUES better than Nvidia in both 2D and 3D (better colors by a mile). So this isn't a "without question" purchasing decision. AMD has already shown they can make great hardware, so this 6800/6900 series performing well isn't a shock to anyone who's been paying attention. Feature set doesn't matter to me, and neither does marketing. People over-analyze this crap to death. I have 3 rigs with multiple GPUs (2080ti / Radeon VII / 1080TIs), so I don't need ANY new GPUs at all right now. These are all just toys to me, "need" is not part of the equation. I also don't give a rats butt about DLSS, ray tracing, or any other marketing gimmicks (not saying those things don't have uses, they just don't matter _to me right now_). So I don't compare feature sets, and I don't care about G-stink or FreeStink either, since I game with that crap off, and still don't get any screen tearing. I buy what I want, when I want, and this time around the 3080 really doesn't interest me at all. There's no supply, and Nvidia is already talking about the 3080ti to compete with the AMD stuff now, since they got taken a bit by surprise. I might buy a 3080ti just to compare to the 6900XT, who knows. It's all toys and Christmas is coming.


----------



## acoustic

Haha you called VRR crap and purposely turn it off. 😂😂


----------



## Bart

acoustic said:


> Haha you called VRR crap and purposely turn it off. 😂😂


Well you're supposed to turn that stuff off when benchmarking, and I forgot to re-enable it one day after running some benchmarks, and could not notice any difference at all when gaming. Gsync or freesync, I've tested both now, and my old eyes can see no benefit at all with frame pacing on or off. Maybe if I had a crappier system that was dropping frames, I'd see a difference, but I don't. YMMV of course.


----------



## acoustic

you think vrr is for "crappy" systems ..

alrighty


----------



## kithylin

Bart said:


> Well you're supposed to turn that stuff off when benchmarking, and I forgot to re-enable it one day after running some benchmarks, and could not notice any difference at all when gaming. Gsync or freesync, I've tested both now, and my old eyes can see no benefit at all with frame pacing on or off. Maybe if I had a crappier system that was dropping frames, I'd see a difference, but I don't. YMMV of course.


It's more for higher refresh rate monitors. Are you still playing on 1080p @ 60hz for a monitor in 2020?


----------



## Bart

kithylin said:


> It's more for higher refresh rate monitors. Are you still playing on 1080p @ 60hz for a monitor in 2020?


I'm playing at 3840 x 1200 @ 120hz, 43" HDR ultrawide.


acoustic said:


> you think vrr is for "crappy" systems ..
> 
> alrighty


Bad wording, but what I _meant_ was that Gsync/Freesync _can_ smooth out gameplay on systems that are struggling at lower fps. I didn't say that was its only use. I'm saying that playing games like Horizon Zero Dawn or Witcher 3, I don't notice any difference with it turned on or off. So I think it doesn't matter nearly as much as the marketing tells you IN MY USE CASE (non-competitive gamer).


----------



## kithylin

Bart said:


> I'm playing at 3840 x 1200 @ 120hz, 43" HDR ultrawide.
> 
> Bad wording, but what I _meant_ was that Gsync/Freesync _can_ smooth out gameplay on systems that are struggling at lower fps. I didn't say that was it's only use. I'm saying that playing games like Horizon Zero Dawn or Witcher 3, I don't notice any difference with it turned on or off. So I think it doesn't matter as nearly as much as the marketing tells you IN MY USE CASE (non-competitive gamer).


The big reason you don't notice this anymore is that older versions of Windows 10, from around when it first came out, still allowed games to access Exclusive Fullscreen Mode. In EFM, FreeSync and G-Sync were a lot more noticeable. Today's Windows 10 renders all games as borderless windowed in the back end even if we select "Full screen" in game, and older games that actually need EFM can't run at all, or have to be run in windowed mode if they support it; if not, you can't play them. In windowed or borderless windowed mode the sync technologies are barely noticeable at all.


----------



## TheRic89

kithylin said:


> The big reason you don't notice this anymore is because older versions of windows 10 near when it first came out used to still allow games to access Exclusive Fullscreen Mode. In EFM freesync and gsync technologies were a lot more noticable. Now with today's version of windows 10 they render all games as borderless windowed mode in the back-end even if we select "Full screen" in game and older games that actually need EFM can't even run at all or have to be run in windowed mode, if they support it.. if not, can't play those games. In windowed mode or borderless windowed mode the Sync Technologies are barely noticeable at all.


Interesting


----------



## Medizinmann

tamm0r said:


> That went surprisingly well  Once I have the power mod done I'll have to reduce the power limit again. The eGPU's PSU has "only" 450 W and it has to power the controller board and the GPU.


Great to hear.

BTW: This weekend I found a few minutes and flashed the 310W BIOS on my KFA2 2080 Ti Dual Black, over Thunderbolt in an eGPU enclosure. No problems so far, and now I can slide to 124% in MSI Afterburner; I didn't have time to test much yet. The question is whether this GPU will ever hit the 310W TDP, since it sits in an eGPU enclosure, is only air-cooled, and is a relatively small 2-slot card/cooler too; I will test it.
But with the Palit BIOS the card runs noticeably quieter and temps seem to be okay, which is already very nice.
RGB on this card is dead now, as the Palit BIOS doesn't support it, but I don't miss it, and it adds another 10-15W to the card's usable power budget...

Best regards,
Medizinmann


----------



## tps3443

Bart said:


> I'm ditching my 2080ti for a 6900XT without question, not that my 2080ti runs bad, but just to NOT buy Nvidia again. I'd pass on the 3080/3090 even if I could find one, but I'm not passing on the 6900XT. And my local nerd store tells me there will be stock, like actual cards, not just listings for vaporware.


There seem to be enough 3080s and 3090s, but it's a money grab! Bots buy them all up for resale and profit, so the bots directly control the market, making you think there's an actual shortage. And people who pay more than MSRP directly feed this horrible scalping behavior. The same thing is going on with the Ryzen 5900X and 5950X right now.

This has absolutely nothing to do with Nvidia..

You can bet your ass that bots and scalpers will hit the AMD RX 6000 cards worse than anything else..

It really sucks. But this is the world we live in!

I wish you the best of luck buying an RX 6000 GPU. Better get in line.. Any new part that is in demand will be bought out immediately and then resold for double the price. Any more that flow back in will also be bought out before you can even hit "Add to cart". It's all a money grab for anyone and everyone; people who don't even want a video card will buy as many as their wallets allow, just to resell them for profit.


----------



## tps3443

tamm0r said:


> That went surprisingly well  Once I have the power mod done I'll have to reduce the power limit again. The eGPU's PSU has "only" 450 W and it has to power the controller board and the GPU.
> 
> 
> 
> Thanks a lot for the kind offer but I can order them here for little money. The shipping costs alone from USA would be probably much higher


The eGPU's 450 watt PSU will provide just enough; it's a quality 450W unit inside the enclosure. The 310W Palit BIOS, plus the extra 40% from the stacked 8 mOhm resistors, allows a maximum power limit of 434W total. That's just enough to stay underneath the 450W PSU.

It'll run perfectly fine with +250 on the core and +1000 on the memory.

Mine always ran 2,130-2,145MHz in anything. It stayed around 48C.
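For reference, the arithmetic behind a shunt mod can be sketched in Python. The multiplier comes from standard parallel-resistor math; `stacked_shunt_multiplier` is a made-up helper, and the 5 mOhm stock / 12.5 mOhm stacked values below are illustrative picks that reproduce the +40% quoted above, not measurements from this card (with 5 mOhm stock shunts, the 8 mOhm parts mentioned earlier in the thread would give about +63% instead, so the effective values on this board evidently differ):

```python
def stacked_shunt_multiplier(r_stock_mohm: float, r_added_mohm: float) -> float:
    """Power-limit scaling when a resistor is soldered ("stacked")
    in parallel on top of a current-sense shunt.

    The card reads the voltage drop across the shunt pair; lowering
    the effective resistance makes it under-report current, so real
    power at the firmware limit scales by r_stock / r_parallel.
    """
    r_parallel = (r_stock_mohm * r_added_mohm) / (r_stock_mohm + r_added_mohm)
    return r_stock_mohm / r_parallel  # == (r_stock + r_added) / r_added

# Budget check from the post: 310 W Palit BIOS plus "an extra 40%"
bios_limit_w = 310
effective_w = bios_limit_w * 1.40
print(round(effective_w))  # 434, just under the enclosure's 450 W PSU

# A +40% limit corresponds to a stacked resistor of 2.5x the stock
# shunt value, since (r + 2.5r) / 2.5r = 1.4
print(round(stacked_shunt_multiplier(5.0, 12.5), 3))  # 1.4
```

The takeaway is that the added resistor's value relative to the stock shunt, not its absolute value, sets how far past the BIOS limit the card will really draw.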


----------



## rustyk

kithylin said:


> The big reason you don't notice this anymore is because older versions of windows 10 near when it first came out used to still allow games to access Exclusive Fullscreen Mode. In EFM freesync and gsync technologies were a lot more noticable. Now with today's version of windows 10 they render all games as borderless windowed mode in the back-end even if we select "Full screen" in game and older games that actually need EFM can't even run at all or have to be run in windowed mode, if they support it.. if not, can't play those games. In windowed mode or borderless windowed mode the Sync Technologies are barely noticeable at all.


I don't understand this. Why would it be less noticeable in windowed or borderless windowed mode?

Also, genuinely curious about which games don't work any more? Admittedly I haven't tested that many but I've not heard of this until now.


----------



## kithylin

rustyk said:


> I don't understand this. Why would it be less noticeable in windowed or borderless windowed mode?
> 
> Also, genuinely curious about which games don't work any more? Admittedly I haven't tested that many but I've not heard of this until now.


Because of the way games are coded and the way drivers work. Exclusive Fullscreen Mode had lower input latency and a smoother gaming experience, and it's been abandoned over the years. Playing games in windowed mode (borderless or not) has inherently slightly higher input latency by design; it's going to "feel slower", Variable Refresh Rate or not. And it's just a bunch of older games that don't work anymore: old versions of GRID, Need for Speed Carbon was one, old 2010 versions of World of Warcraft (for those weirdos among us who like to play old versions of the game on private servers), some things like that. Admittedly these are better off being played on a dedicated Windows XP computer anyway, but still.. they all used to work in early versions of Win10 and are broken in the latest version today.


----------



## Falkentyne

kithylin said:


> The big reason you don't notice this anymore is because older versions of windows 10 near when it first came out used to still allow games to access Exclusive Fullscreen Mode. In EFM freesync and gsync technologies were a lot more noticable. Now with today's version of windows 10 they render all games as borderless windowed mode in the back-end even if we select "Full screen" in game and older games that actually need EFM can't even run at all or have to be run in windowed mode, if they support it.. if not, can't play those games. In windowed mode or borderless windowed mode the Sync Technologies are barely noticeable at all.





rustyk said:


> I don't understand this. Why would it be less noticeable in windowed or borderless windowed mode?
> 
> Also, genuinely curious about which games don't work any more? Admittedly I haven't tested that many but I've not heard of this until now.


You can force older games to use EFM.
Either use the registry edit to disable optimizations, or, if the game ignores the registry, open the properties of the game's executable (.exe) file and check "Disable fullscreen optimizations."
Some DX11 games will require BOTH settings.

This always works.

THIS DOES NOT WORK ON DIRECTX 12 GAMES.


Alternatively, you can create a .reg file that will have the same effect:

Open a text editor.
In a new file, paste the following:

Windows Registry Editor Version 5.00
[HKEY_CURRENT_USER\System\GameConfigStore]
"GameDVR_FSEBehaviorMode"=dword:00000002
"GameDVR_HonorUserFSEBehaviorMode"=dword:00000001
"GameDVR_FSEBehavior"=dword:00000002
"GameDVR_DXGIHonorFSEWindowsCompatible"=dword:00000001

Save the file and make sure to change its extension to .reg.
It's worth mentioning that you'll also have to disable the Origin in-game overlay.
BTW, sometimes the registry settings get "reset"; I don't know why that happens. If it does, just run the .reg file again (it's easier to make your own .reg file than to add the values manually in regedit).

If this is too much work I attached a reg file I made.
WOW THIS CRAPPY FORUM WONT ALLOW ME TO ATTACH ANYTHING THAT ISN'T A PICTURE.
YOU GUYS ARE ON YOUR OWN.

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\System\GameConfigStore]
"GameDVR_FSEBehaviorMode"=dword:00000002
"GameDVR_Enabled"=dword:00000000
"GameDVR_FSEBehavior"=dword:00000002
"GameDVR_HonorUserFSEBehaviorMode"=dword:00000001
"GameDVR_DXGIHonorFSEWindowsCompatible"=dword:00000001
"GameDVR_EFSEFeatureFlags"=dword:00000000
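If anyone would rather script it than fiddle with attachments, here's a minimal Python sketch that writes the second registry block above out as a .reg file. The key/value lines are copied verbatim from the post; the filename is an arbitrary choice:

```python
from pathlib import Path

# Keys and values copied from the post above; regedit requires the
# version header, straight ASCII quotes, and dword values in hex.
REG_TEXT = """Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\\System\\GameConfigStore]
"GameDVR_FSEBehaviorMode"=dword:00000002
"GameDVR_Enabled"=dword:00000000
"GameDVR_FSEBehavior"=dword:00000002
"GameDVR_HonorUserFSEBehaviorMode"=dword:00000001
"GameDVR_DXGIHonorFSEWindowsCompatible"=dword:00000001
"GameDVR_EFSEFeatureFlags"=dword:00000000
"""

def write_reg(path: str) -> None:
    # ASCII encoding guards against smart quotes sneaking in,
    # which Registry Editor won't parse.
    Path(path).write_text(REG_TEXT, encoding="ascii")

write_reg("disable_fse_optimizations.reg")
```

Double-click the resulting file and confirm the prompt to merge the values.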


----------



## kithylin

Falkentyne said:


> You can force older games to use EFM.
> Either use the registry edit to disable optimizations, or if the game ignores the registry, go to the game running properties by finding the executable exe file, and click on "disable fullscreen optimizations."
> Some DX11 games will require BOTH settings.


Thank you for trying, but I've been all over that. There are dedicated "retro gaming" communities, and we've been through all of it. None of it works for older games: old DirectX 9 and DirectX 8 games that need EFM are completely dead in the latest version of Win10, and there's nothing that can be done to make them run anymore. This is all way off topic for this thread anyway. Sorry I ever brought it up, but we should end it here.


----------



## Imprezzion

At what core temperatures do you guys mostly see instability occur with high overclocks?

One of my rad fans has a busted bearing that howls and rattles above 1400RPM, so I turned the fan curve way down. Now my card runs 2085-2070 / 7800 at 1.093V, the same clocks I used to run at 40-44C, except now it sits at 53-55C with the fans at like 900-1000RPM. It's not showing any instability; I played like 4 hours straight of Division 2 and it runs just fine. 

I really expected to see instability at 55C at those clocks, but not at all lol.


----------



## jura11

@Imprezzion 

In my case it's not core temperature that drives instability; it's usually down to the game. In some games I can run 2190-2205MHz, in some 2160MHz or 2175MHz is the max, and in some DXR games like Metro Exodus and others I'm limited to 2130-2145MHz. For benchmarks I use a 2205MHz or 2220MHz OC profile.

In rendering I run both RTX 2080 Tis at 2055-2085MHz max; anything above that results in CUDA errors. Only Octane can run with a 2115-2130MHz OC. 

Hope this helps 

Thanks, Jura


----------



## tps3443

jura11 said:


> @Imprezzion
> 
> In my case no core temperatures what will influence my instability, usually its down to games, in some games I'm running 2190-2205MHz, in some 2160MHz or 2175MHz is max which I can run and in some DXR games like Metro Exodus and others I'm running 2130-2145MHz as max, for benchmarks I use 2205MHz or 2220MHz OC profile
> 
> In rendering I'm running on both RTX 2080Ti 2055-2085MHz as max, anything above that will result in CUDA errors,only Octane can be run with 2115-2130MHz OC
> 
> Hope this helps
> 
> Thanks, Jura


This seems to be the case with most watercooled 2080 Tis. I remember running Doom Eternal and I could manage 2,175MHz all day, but RDR2, being more demanding, would push me down to the 2,115-2,130 range. I think I was hitting power limits on my last 2080 Ti in some situations, though.

I think a water chiller is really the only way to stay above 2,200 in virtually anything.


----------



## Imprezzion

My card isn't a very good bin lol. It maxed in any title at 2085 on EVGA BIOS and 2130 on 1.125v XOC BIOS.

So, there is really no need to run like 40c with my fans cranked if it's just as stable at 55c with utter silence on the same clocks with just a slightly higher curve starting point to compensate for the temperature clock drops.

It's just a Kraken X52 cooling it tho, not a full cover block.


----------



## tps3443

I broke 17K Time Spy graphics with an air-cooled 2080 Ti Founders Edition, all inside a closed case.


https://www.3dmark.com/3dm/52921404?


----------



## Imprezzion

Wow nice. Highest I got on XOC under the X52 was 16700 something lol. That was on 2160 core with 8100 memory. It artifacts pretty bad but does complete the run.


----------



## tps3443

Imprezzion said:


> Wow nice. Highest I got on XOC under the X52 was 16700 something lol. That was on 2160 core with 8100 memory. It artifacts pretty bad but does complete the run.


Maybe the card is just good, I dunno. I've owned several 2080 Tis and never seen these numbers on air cooling, though.


----------



## Hauszer

Hey guys, I've had my Strix 2080 Ti with factory OC for almost 2 years now and it's been working flawlessly, until Season 5 of Modern Warfare two months ago, when I started to crash to desktop with Dev Error 6068. After some research it looked like underclocking the card's core clock might fix the issue, and sure enough a -40MHz underclock did the trick, for a while. 

Soon the problem started occurring again and I kept underclocking; I'm now at -400MHz and I just had the card crash so hard that I lost signal to my monitor and had to hard-reboot the machine. Clearly this can't go on, so I was wondering whether you guys have a sense of whether the card is dying, or if there's something I can do on my end to investigate or fix the issue. What I've done so far:

1. Underclocked the core clock using Afterburner.

2. Keep an eye on temps using Afterburner (which never got higher than about 64C for some reason).

3. Removed, inspected, and reseated the card ensuring I have dual power cables from my PSU instead of the old split cable.

4. Used DDU to completely remove and reinstall my drivers.

The machine is a 4790k 4GHz on MSI Z97 MPOWER MAX AC with 32GB 1600Mhz DDR3 running latest Windows 10 2004 build. I've started an RMA with ASUS but I'm all ears if you guys have any suggestions as to what I can try before I ship it back. Thanks much!


----------



## Imprezzion

Sounds like degradation of the core honestly. RMA is probably your best bet indeed.


----------



## dpoverlord

Just got an EVGA RMA on my 2080 Ti. It got me thinking: would it make more sense to sell it? Is it at least faster than the 3070?


----------



## J7SC

Hauszer said:


> Hey guys I've had my STRIX 2080ti with factory OC for almost 2 years now and it's been working flawlessly, until the 5th season of Modern Warfare 2 months ago where I started to CTD with Dev Error 6068. After some research it looked like underclocking the core clock of the card might fix the issue and sure enough a -40MHz underclock did the trick - for a while.
> 
> Soon the problem started occurring again and I kept underclocking, until now I'm at -400MHz and I just had the card crash so hard that I lost signal to my monitor and had to hard reboot my machine. Clearly this can't go on so I was wondering if you guys had a sense for whether I had a dying card or if there's some things I can do on my end to investigate or fix the issue? What I've done so far is:
> 
> 1.Underclock the core clock using Afterburner.
> 
> 2. Keep an eye on temps using Afterburner (which never got higher than about 64C for some reason).
> 
> 3. Removed, inspected, and reseated the card ensuring I have dual power cables from my PSU instead of the old split cable.
> 
> 4. Used DDU to completely remove and reinstall my drivers.
> 
> The machine is a 4790k 4GHz on MSI Z97 MPOWER MAX AC with 32GB 1600Mhz DDR3 running latest Windows 10 2004 build. I've started an RMA with ASUS but I'm all ears if you guys have any suggestions as to what I can try before I ship it back. Thanks much!


...I assume you are running the stock BIOS, and have tried with/without extra voltage on the MSI AB slider? In any case, with what you describe, @Imprezzion's advice makes sense.


----------



## hdtvnut

Having several problems with my Asus 2080 Ti OC just lately; asking for help. First: I updated GPU Tweak II to Asus' latest version, and I can't seem to apply my settings so they stay active and survive a reboot. When I try to "apply", all settings go back to "default".

The second one may be a hardware problem: with one or two PWM fans attached to the front headers of the GPU, under "user defined", "fan re-calibrate" fails and there is no fan response at the set temperatures. The ports themselves seem OK, because I can set a "manual" fan speed. I can't find any info whatsoever on these ports.


----------



## Hauszer

Ya thanks guys it definitely behaves like a slowly degrading piece of gear so I'm gonna RMA it. Wish me luck, I've heard nothing but bad things about ASUS RMA and it's not like they have piles of new 2080 ti's lying around. 🙄


----------



## Krzych04650

Hauszer said:


> Ya thanks guys it definitely behaves like a slowly degrading piece of gear so I'm gonna RMA it. Wish me luck, I've heard nothing but bad things about ASUS RMA and it's not like they have piles of new 2080 ti's lying around. 🙄


Yea, they are probably going to hold it for 2 months just to say that everything is fine with your card.


----------



## Medizinmann

dpoverlord said:


> Just got a EVGA RMA on my 2080ti. It got me thinking, would it make more sense to sell it? Is it faster than the 3070 at least?


It depends, but most of the time it is. In almost all titles that use rasterization only, it should be faster. It seems to be 1-2% slower in some ray-tracing titles (but very few; sometimes it's even 1-2% faster). The 2080 Ti is definitely faster in titles that benefit from more than 8GB of VRAM, e.g. MS Flight Simulator 2020.

In the end the answer is: yes, most of the time it is faster than an RTX 3070 FE.






…and most 2080Tis have some headroom for OC – more than the RTX 30xx it seems…

And you can't buy a RTX 3070/3080/3090 anyway - at least not for a decent price...

Best regards,
Medizinmann


----------



## AndrejB

Hauszer said:


> Ya thanks guys it definitely behaves like a slowly degrading piece of gear so I'm gonna RMA it. Wish me luck, I've heard nothing but bad things about ASUS RMA and it's not like they have piles of new 2080 ti's lying around. 🙄


I RMA'd my card twice with them (2 weeks' wait both times; Canada). First they sent me an old card with Micron memory that was giving DX errors in Apex, so I sent it back; then I got a nice one (OK core and Samsung memory).

Be sure to detail everything on the paper you put in the box; that's the only thing that gets read.
The second time around I even wrote not to send me an old (early-batch) card.


----------




## TONSCHUH

Got my system almost finished, at least the most important part.

It's not perfect yet, but performs well.

Specs:









My Rig - Pastebin.com







🙂


----------



## tps3443

dpoverlord said:


> Just got a EVGA RMA on my 2080ti. It got me thinking, would it make more sense to sell it? Is it faster than the 3070 at least?


My 2080 Ti FE leads the RTX 3070 FE by at least 25%, and with 11GB of VRAM it's a no-brainer for me.


----------



## Shark00n

Hey guys!

So I have an MSI RTX 2080 Ti Ventus OC. It's an A-bin GPU with a full watercooling block on it; gaming doesn't even break a sweat, staying under 50°C.
So I was trying to flash a vBIOS with a higher power limit, 380W, specifically this KFA2 one - KFA2 RTX 2080 Ti VBIOS

I've tried NVFlash64, with and without the '--protectoff' switch, but I'm not able to flash over the stock vBIOS; I get this message and an aborted process:


> XUSB FW component of the input GPU firmware image is imcompatible, please use
> a newer version of GPU firmware image for this product.
> 
> BIOS Cert 2.0 Verification Error, Update aborted.


Anyone have any insight into what I'm doing wrong? Is it the newer XUSB FW? My card is pretty new; I got it in early 2020. Is there still a way of flashing without special tools or equipment?

Also, I have 3 questions which I should probably clear up before actually flashing the vBIOS 😅😅
-With good temperatures, will it affect the working life of my card in any measurable way? To put it simply, will it crap out sooner?
-Should I check the external I/O on the new vBIOS so I don't lose any of my card's display connections?
-Is there a better-fitting vBIOS for this particular card?

I've been searching a lot thru this thread but could not clear this up.
Thanks a lot!


----------



## acoustic

2080 Tis made in 2019 (I believe) and later ship with a newer XUSB firmware and are not compatible with BIOSes from earlier cards. Unfortunately, most of the high power limit BIOSes are from older cards. I'm not sure if there are any higher-power-limit BIOSes for the newer XUSB cards; maybe someone else knows.


----------



## Shark00n

acoustic said:


> 2080TIs made in 2019 (I believe) use XUSB and are not compatible with earlier cards. Unfortunately, most of the high-end PL BIOS are from older cards. I'm not sure if there are any higher power-limit BIOS' made for XUSB cards, maybe someone else knows.


Yeah... Should've checked that before buying it. How could I miss that?
I've been checking through some other vBIOSes; maybe one of these is a good bet?








MSI RTX 2080 Ti VBIOS - 11 GB GDDR6, 1350 MHz GPU, 2000 MHz Memory (techpowerup.com)

Gigabyte RTX 2080 Ti VBIOS - 11 GB GDDR6, 1350 MHz GPU, 1768 MHz Memory (techpowerup.com)

They are both for non-reference PCBs, so I'm not feeling too confident...


----------



## mistershan

Hi guys. I am new to overclocking GPUs and I was thinking of trying to OC my 2080 Ti Strix. Can OC'ing harm the GPU? If so, that's okay, because I have a protection plan on it till January, and if it breaks I get full credit towards a 3000-series card. So maybe bonus points if we fry it? ha... What apps do I use? Do I need a water block?

This is my system:


i7 5820k 
Asus Strix 2080ti 
64gb Crucial DDR 4 2400 DIMM 
Asus Rampage V Extreme ATX2011E 
Crucial SSD 960 gb M500 
Corsair AX1200I Digital ATX PSU
CORSAIR HYDRO H105I LIQUID COOLER


----------



## Imprezzion

mistershan said:


> Hi guys. I am new to over clocking GPUs and I was thinking of trying to OC my 2080ti Strix. Can OC'ing harm the GPU? If so that's okay because I have a protection plan on it till January and if it breaks I get full credit towards a 3000 series card. So maybe bonus points if we fry it? ha... What apps do I use? Do I need a water block?
> 
> This is my system:
> 
> 
> i7 5820k
> Asus Strix 2080ti
> 64gb Crucial DDR 4 2400 DIMM
> Asus Rampage V Extreme ATX2011E
> Crucial SSD 960 gb M500
> Corsair AX1200I Digital ATX PSU
> CORSAIR HYDRO H105I LIQUID COOLER


Brutally honest opinion? Don't bother. The CPU is a big enough bottleneck as it is and you won't notice any real performance benefits from overclocking the GPU anyway.


----------



## gtz

mistershan said:


> Hi guys. I am new to over clocking GPUs and I was thinking of trying to OC my 2080ti Strix. Can OC'ing harm the GPU? If so that's okay because I have a protection plan on it till January and if it breaks I get full credit towards a 3000 series card. So maybe bonus points if we fry it? ha... What apps do I use? Do I need a water block?
> 
> This is my system:
> 
> 
> i7 5820k
> Asus Strix 2080ti
> 64gb Crucial DDR 4 2400 DIMM
> Asus Rampage V Extreme ATX2011E
> Crucial SSD 960 gb M500
> Corsair AX1200I Digital ATX PSU
> CORSAIR HYDRO H105I LIQUID COOLER


Yes, it's pretty safe to overclock; there are so many protections in place to prevent the card from killing itself. Just download MSI Afterburner, max out the power limit and temp limit sliders, set fan speed to around 75 percent (not 100, since fan power takes away from the overall power limit), then raise the core offset slider 25-50MHz at a time and run a stress test until it crashes.
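The stepping procedure above can be sketched as a simple loop. This is purely illustrative: `is_stable` is a stand-in for a real stress-test run, since the actual tuning happens in MSI Afterburner plus a benchmark like Heaven or 3DMark.

```python
# Illustrative sketch of the "raise offset until it crashes" procedure.
# Nothing here touches real hardware; is_stable() is a hypothetical stand-in.

def find_stable_offset(is_stable, start=0, stop=200, step=25):
    """Raise the core offset in `step` MHz increments until the
    stress test fails, then return the last offset that passed."""
    last_good = None
    for offset in range(start, stop + 1, step):
        if is_stable(offset):   # stand-in for a real stress-test run
            last_good = offset
        else:
            break               # first crash: stop stepping
    return last_good

# Example: pretend the card is stable up to +125 MHz.
print(find_stable_offset(lambda mhz: mhz <= 125))  # -> 125
```

In practice you would then back the final offset off by one step for daily stability, since "passes one stress run" is weaker than "stable in every game".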


----------



## mistershan

Imprezzion said:


> Brutally honest opinion? Don't bother. The CPU is a big enough bottleneck as it is and you won't notice any real performance benefits from overclocking the GPU anyway.


Yeah, going to get the 5950X when it's available. How much performance will I gain from a new CPU?


----------



## mistershan

How much would a used Asus Strix 2080 Ti be worth if I sold it?


----------



## tps3443

Shark00n said:


> Yeah... Should've checked that before buying it. How could I miss that
> Been checking thru some other vBIOS, maybe one of these is a good bet?
> 
> 
> 
> 
> 
> 
> 
> 
> MSI RTX 2080 Ti VBIOS - 11 GB GDDR6, 1350 MHz GPU, 2000 MHz Memory (techpowerup.com)
> 
> Gigabyte RTX 2080 Ti VBIOS - 11 GB GDDR6, 1350 MHz GPU, 1768 MHz Memory (techpowerup.com)
> 
> 
> 
> 
> 
> They are both for non-reference PCBs so I'm not feeling too confident..


Don't brick your card. It is easier to just solder on the 8 mΩ resistors (shunt mod). You'll gain 40% more power, and at 50C or less you'll hold that 2,130MHz in anything.

I could mail you (2) resistors if you like. It is very easy to do.

You can practice soldering on some old broken electronics if you want, and then move straight to the 2080 Ti. I've done numerous 2080 Tis, and other video cards now.
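For anyone wondering where the extra headroom comes from: the mod solders a resistor in parallel with each current-sense shunt, so the card under-reads its own power draw. A rough back-of-the-envelope sketch follows; the 5 mΩ stock shunt value, the 8 mΩ add-on, and the 250W reading are assumptions for illustration, and the real gain depends on which shunts you actually mod.

```python
# Back-of-the-envelope shunt-mod math (illustrative assumptions only:
# 5 mOhm stock shunts, 8 mOhm resistors soldered on top, 250W example limit).

def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

stock_shunt = 5.0   # mOhm, assumed stock current-sense shunt
added = 8.0         # mOhm, resistor soldered on top

r_eff = parallel(stock_shunt, added)            # ~3.08 mOhm
underread = r_eff / stock_shunt                 # card sees ~61.5% of true power
true_power_at_limit = 250 / underread           # a "250W" reading is ~406W actual

print(round(r_eff, 2), round(underread, 3), round(true_power_at_limit))
```

Note this works out to more than the 40% quoted above if every shunt is modded with these values; reported gains vary because people often mod only some of the shunts.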


----------



## pewpewlazer

Imprezzion said:


> Brutally honest opinion? Don't bother. The CPU is a big enough bottleneck as it is and you won't notice any real performance benefits from overclocking the GPU anyway.


Unless he's running the CPU at stock and/or gaming at 1080p, it won't be a crippling bottleneck and he will still realize performance gains from overclocking the GPU.

Even at a moderate 3440x1440, Watch Dogs Legion is the first game that's made me seriously consider the "need" (strong WANT) to upgrade my 4.5ghz 5820k. And frankly, if Zen 3 wasn't as good as it is, I'd have no problems sticking it out longer. Now if only it were possible to find a 5900x in stock anywhere...


----------



## Josef997

Hi guys

I want to ask about the 2080 Ti 1000W BIOS. I already flashed it and it's working great so far. My system is:


(custom water cooling)
CPU: i9-10900K
GPU: Asus Strix OC 2080 Ti
PSU: EVGA 1200
Motherboard: Asus Maximus XII Extreme Z490
RAM: Corsair Dominator 3600

When I checked the power draw, it never exceeded 480W. Is that because there isn't enough power to push it to the max limit? The temp never exceeded 55C (during gaming at full power). I have Asus 4K 144Hz monitors, which is why I want my card to reach whatever limit it can with custom water cooling. Also, I sometimes hear a little coil whine from the power draw, more when it gets hotter. The main question: do I need an extra power supply for the GPU to make it reach 1000W, or is that just a number the GPU will never reach?


----------



## mistershan

pewpewlazer said:


> Unless he's running the CPU at stock and/or gaming at 1080p, it won't be a crippling bottleneck and he will still realize performance gains from overclocking the GPU.
> 
> Even at a moderate 3440x1440, Watch Dogs Legion is the first game that's made me seriously consider the "need" (strong WANT) to upgrade my 4.5ghz 5820k. And frankly, if Zen 3 wasn't as good as it is, I'd have no problems sticking it out longer. Now if only it were possible to find a 5900x in stock anywhere...


You OC'd your 5820K? I've never done that and have no idea where to start. Have any good guides? Is your system stable at 4.5GHz? What's your mobo? ...and yes, I never game at 1080p. Only 3440x1440 and up.


----------



## Shark00n

tps3443 said:


> Dont brick your card. It is easier to just solder on the 8 Ohms. You’ll gain 40% more power, and at 50C or less you’ll hold that 2,130Mhz in anything.
> 
> I could mail you (2) resistors if you like. It is very easy to do.
> 
> You can practice soldering on some old broken electronics if you want. And then move straight to the 2080Ti. I’ve done numerous 2080Ti’s, and other video cards now


Thanks man.
Yeah, the mod seems easy enough. I'll definitely save it for when my warranty runs out and my WC loop needs a cleaning.
I'm from Europe, but thanks for the offer anyway!
Any particular kind of resistor I should look for, or higher quality? Will any regular 8 mΩ SMD resistor from eBay do the trick?


----------



## Audioboxer

Hi guys, what BIOS would be compatible with the EVGA 11G-P4-2380-KR to unlock a 130% power limit? It's the A version blower model.










Thanks


----------



## jura11

Josef997 said:


> Hi guys
> 
> I want to ask about 2080 ti 1000w bios, i already flash it and its working great till now my system is,
> 
> 
> ( custom water cooling)
> CPU: I9-10900K
> GPU Asus strix OC 2080 Ti
> PSU: 1200 EVGA
> motherboard: Asus maximus xii extreme Z490
> corsair ram: 3600 dominators
> 
> when i checked the power draw its not exceeded 480w is that because the power not enough to power it to max limit ? the temp not exceeded 55c ( during game with full power ) i do have asus 4k monitors 144 Hz that why i want my card to reached to the limit it can reach to with custom water cooling, also sometimes i hear the coil whine having little noise because of the power draw but when its become hotter, the main questions do i need extra power supply for GPU to make it reach to 1000w ? or that just number the gpu will never reach it.


The best BIOS for the Asus RTX 2080 Ti Strix that I can recommend, and that I've tested, is the Matrix BIOS. The XOC BIOS is okay, you just won't gain as much as you'd think; I'm running it as my secondary and my main BIOS is Matrix.

Yup, I can hear coil whine with the EK Vector waterblock too. You can try better thermal pads like Thermalright Odyssey, which I'm now using, and temperatures are slightly better.

Hope this helps 

Thanks, Jura


----------



## jura11

Audioboxer said:


> Hi guys, what bios would be compatible with the EVGA 11G-P4-2380-KR to unlock 130%? It's the A version Blower model.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Thanks


Are you using all the ports on your GPU? If not, then I would suggest the EVGA FTW3 BIOS, which is a 370-380W BIOS, or the Galax 380W BIOS.

Both are great BIOSes, I'm just not sure about the fan behaviour; I never run a GPU with the stock cooler.

Hope this helps 

Thanks, Jura


----------



## Audioboxer

jura11 said:


> Are you using all ports on your GPU? If not then I would suggest EVGA FTW3 BIOS, it is 370-380W BIOS or Galax 380W BIOS
> 
> Both are great BIOS just not sure on fan, I never run GPU with stock coolers
> 
> Hope this helps
> 
> Thanks, Jura


Thanks, I forgot to say I'm ditching the blower and going watercooled. As for ports, just using display port.


----------



## jura11

Audioboxer said:


> Thanks, I forgot to say I'm ditching the blower and going watercooled. As for ports, just using display port.


Yup, you should be okay there. DisplayPort should work; you might lose one of the DisplayPort or HDMI ports, but that should be okay.

Hope this helps 

Thanks, Jura


----------



## Alex TOPMAN

Hi, all
I'm a new owner of an MSI RTX 2080 Ti VENTUS OC with an Alphacool Eiswolf 240 GPX Pro RTX 2080/2080Ti - Black M02 cooling system on top.
After some days of testing it, I am looking for an OC BIOS (with a higher power limit) for this card model. Could someone help me find the right one?
Thank you.


----------



## Gallapagosisland

Hey everyone,
I have a 2080 Ti Founders card and am looking to flash. I tried it the other day and ended up bailing because I didn't know enough. The flashing utility was asking about firmware versions and I had no idea which options to use. My card has an EK water block on it and I'm looking to get the most performance out of the card.


----------



## Josef997

jura11 said:


> Best BIOS for Asus RTX 2080Ti Strix what I can say and what I tested is Matrix BIOS,XOC BIOS is okay just you will not gain as much you think, I'm running that BIOS as secondary and my main BIOS is Matrix
> 
> Yup I can hear coilwhine with EK Vector Waterblock, you can try use better thermal pads like Thermalright Odyssey which I'm now using and temperatures are slightly better
> 
> Hope this helps
> 
> Thanks, Jura


Yes, it's been great; so far I haven't had any problems and I'm really happy with the performance, it's an incredible increase. But this is interesting: do you mean I can have 2 BIOSes on one card and switch between them using P mode and Q mode?

I tried to flash the Matrix BIOS but it didn't work; it told me it's not the same BIOS, something like that. Should I try another version? If yes, what's the second-best version you'd advise me to go with? About the cooling, I am planning to add a small chiller to keep the temp at ambient; I don't want anything complicated, just to keep the temp from going too hot over long play sessions lol.

Thank you for your kind reply.

Thank you for your kind reply.


----------



## jura11

Josef997 said:


> Yes its great till now i didn't face any problem till now all is great and i am really with the performance its incredibly increase, but this is interesting do you mean i can add 2 bios for one card and switch between them by using p mode and Q- mode ?
> 
> i tried to flash bios for matrix it didn't works its show me its not same bios something like that, should i try another version if yes what's the second best version you advise me to go with, about the cooling i am planning to add small chiller to keep temp as ambient temp [email protected] i don't want complicated thing just to keep the temp normal not going too hot for long term of playing lol.
> 
> Thank you for your kind reply.


Hi there 

Sadly the Asus RTX 2080 Ti Strix is compatible only with Asus BIOSes because of the I/O ports; if you flash any other BIOS on it you will definitely lose DP and HDMI ports. That's okay if you are running a single monitor, but if you are on two monitors with DP then it's a pain to sort out.

Yes, I have the Matrix BIOS on the Q switch, and on the P switch I have the XOC BIOS. With both BIOSes the OC is pretty much the same: in benchmarks I use a 2205MHz or 2220MHz OC profile, for gaming a 2160-2175MHz OC profile, for rendering 2070-2085MHz.

Are you sure you have an A chip and not a non-A chip? Can you check that in GPU-Z? And can you post what errors you got when you tried to flash with NvFlash?

Hope this helps 

Thanks, Jura


----------



## Josef997

jura11 said:


> Hi there
> 
> Sadly Asus RTX 2080Ti Strix is compatible only with Asus because of I/O ports, if you flash there any other BIOS you will definitely loose DP and HDMi ports, its okay if you are running single monitor but if you are on two monitors with DP then its pita to sort it
> 
> Yes I have Matrix BIOS on Q switch and on P BIOS I have XOC BIOS, with both BIOS OC is pretty much same, in benchmarks I use 2205MHz or 2220MHz OC profile, for gaming 2160-2175MHz OC profile, for rendering 2070-2085MHz
> 
> Are you sure you have A chip or it is non A chip, can you check that in GPU-Z? Can you post what errors did you get when you are tried to flash with NvFlash?
> 
> Hope this helps
> 
> Thanks, Jura


Hi Jura,

I connect two monitors. Actually I was waiting for a disconnect, but the good news is there was no issue. At the beginning the 2nd monitor always went off, then I just swapped the DP connections for both monitors and it simply works perfectly; no idea why it behaves like that, but anyhow they are fine. That's great, I love the Matrix cards, they're something unique! For gaming it reaches 2160, not more than that; you are lucky to get the extra MHz. For example, on the Asus 1000W BIOS playing Call of Duty I run GPU +147, memory clock +1270 or +1250, power limit maxed; the temp is 51 to 55 max, and power draw is from 380 up to 480W maximum. I think that's the maximum any card can reach with 2x8-pin connectors + PCIe, from the comments I've read and from searching for info. Mine is an A chip, yes. I will post the error, but what I want to know is: when I switch between the modes, should the PC and card be off, or can I change it while it's on?

(Edit: after applying maximum power it's showing 2160MHz, 8250 memory, 1.075V, temps 51-55)

One last thing: because of the high power draw and heat there was a coil whine sound, but not too loud, a very low sound. Now, after cooling it down and keeping the fans at 100% to keep the water temp down, it's much better. I will post the error message when I try to flash it again!

thanks a lot for the info i really appreciate it.


----------



## Medizinmann

mistershan said:


> How much would an Asus Strix 2080ti used be worth if I sold it?


If you check ebay.com - around $900-$1,100... if it still has warranty on it, even more - prices are crazy these days...

You could sell it - but I would estimate you would need to wait at least 2-3 months for something decent to replace it...

Best regards,
Medizinmann


----------



## Medizinmann

Alex TOPMAN said:


> Hi, All
> I`m new owner of MSI RTX 2080 Ti VENTUS OC with Alphacool Eiswolf 240 GPX Pro RTX 2080/2080Ti - Black M02 cooling system on top.
> And after some days of testing it I am looking for OC BIOS (with higher powerlimit) for this card model. Could someone help me to find the right one?
> Thank you.


Well - this card uses the reference PCB - so the Galax/KFA2 380W BIOS from page one should be the right one - if you can flash it - see the important note on the first page of this thread...
Otherwise you might need to use an EEPROM programmer or do a shunt mod, which needs some soldering…

Best regards,
Medizinmann


----------



## Medizinmann

Gallapagosisland said:


> View attachment 2465555
> 
> 
> Hey everyone,
> I have a 2080TI Founders Card and am looking to flash. I tried it the other day and ended up bailing because I didnt know enough. The flashing utility was asking about firmware versions and I have no idea what options to use. My card has an EK water block on it and im looking to get the most performance out of the card.


Your best bet is IMHO the Galax/KFA2 Bios from page one with 380W power limit. You need to use the patched Flash Tool for NVidia FE.

Just follow the step-by-step guide on page one. Don't forget to make an backup of your BIOS - just in case...

Best regards,
Medizinmann


----------



## Imprezzion

The problem with a 5820K is just that many CPU reviews that use a 2080 Ti show a massive difference in averages and 99th percentiles from the CPU alone. Going from a 7700K to a 10900K is a way bigger FPS difference than any overclock on the GPU itself will give you.

I don't really understand how it works; I mean, even a 9900K, which I don't consider a bottleneck, often lags way behind a 10900K/59xx AMD CPU in many reviews, and I personally went from a 9900K to a 10900KF and saw a pretty sizable increase in minimum FPS (not so much average though).

Overclocking the GPU will of course have a benefit, but I'd focus on CPU and RAM first, then the GPU, with a 5820K setup. More profit to be had there.

For the Ventus OC BIOS question, the KFA2/Galax BIOS would be fine. What you can also try is the EVGA FTW3 ULTRA BIOS. It has better fan control with a lower minimum percentage (22% vs 34%), in case the KFA2 one has loud fans at idle with 34%.

It does have a different port configuration for HDMI/DP, so you might lose a port here and there. It works fine on my reference Gainward Phantom with all ports enabled.


----------



## Alex TOPMAN

Medizinmann said:


> Well - this card uses the reference PCB - so the Galax/KFA2 380W from page one should be the right one - if you can flash it - see important note fist page of this thread...
> Otherwise you might need to use a EEPROM-Programmer or do a power mod which needs some soldering…


Thanks for the advice. But I don't think a power mod is needed: 2x8-pin gives the card 150W x 2, plus 75W from the PCIe slot. Looks like that should be enough for a 380W PL.
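For reference, the spec numbers above work out as sketched below. Note that a 380W limit technically sits just above the 375W spec budget; in practice this is fine, because the connector ratings are conservative and quality 8-pin cables routinely deliver more.

```python
# Quick sanity check of the PCIe spec power budget mentioned above.
# These are spec minimums, not hard delivery limits.

EIGHT_PIN_W = 150   # PCIe 8-pin auxiliary connector, per spec
SLOT_W = 75         # PCIe x16 slot, per spec

budget = 2 * EIGHT_PIN_W + SLOT_W
print(budget)         # -> 375
print(budget >= 380)  # -> False: a 380W PL slightly exceeds the spec budget
```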


----------



## Medizinmann

Alex TOPMAN said:


> Thanks for advise. But I don`t think about power mod is needed. 2x8 pins gives to card 150Wx2 + 75W gives PCIe power line. Looks like should be enough for 380W PL.


Sorry my mistake – I meant “shunt mod”…
…and that has “nothing” to do with the power input from your PSU or your mobo…
2080 Ti - Working Shunt Mod

And I wouldn’t advise it either if you are able to flash the 380W BIOS.
But some newer GPUs with newer *XUSB FW Version ID *can’t be flashed using flash tool – see note on page one…

Best regards,
Medizinmann


----------



## Gallapagosisland

Medizinmann said:


> Your best bet is IMHO the Galax/KFA2 Bios from page one with 380W power limit. You need to use the patched Flash Tool for NVidia FE.
> 
> Just follow the step-by-step guide on page one. Don't forget to make an backup of your BIOS - just in case...
> 
> Best regards,
> Medizinmann


I followed it exactly and I keep getting this:


Code:


C:\nvFlash>nvflash64 --protectoff
NVIDIA Firmware Update Utility patched by Vipeax
Copyright (C) 1993-2019, NVIDIA Corporation. All rights reserved.


Adapter: GeForce RTX 2080 Ti  (10DE,1E07,10DE,12A4) H:--:NRM  S:00,B:01,D:00,F:00


EEPROM ID (9D,7014) : ISSI IS25WP080 1.65-1.95V 8192Kx1S, page

Setting EEPROM software protect setting...

Remove EEPROM write protect complete.


C:\nvFlash>nvflash64 -6 KFA2RTX2080Ti.rom
NVIDIA Firmware Update Utility patched by Vipeax
Copyright (C) 1993-2019, NVIDIA Corporation. All rights reserved.


Checking for matches between display adapter(s) and image(s)...

Adapter: GeForce RTX 2080 Ti  (10DE,1E07,10DE,12A4) H:--:NRM  S:00,B:01,D:00,F:00


EEPROM ID (9D,7014) : ISSI IS25WP080 1.65-1.95V 8192Kx1S, page

WARNING: Firmware image PCI Subsystem ID (10DE.12FAx
  does not match adapter PCI Subsystem ID (10DE.12A4).
WARNING: None of the firmware image compatible Board ID's
match the Board ID of the adapter.
  Adapter Board ID:        0049
  Firmware image Board ID: 007E

Please press 'y' to confirm override of PCI Subsystem ID's:
Overriding PCI subsystem ID mismatch

*** WARNING: Overriding the Board ID can be very dangerous. ***
Upgrading to an image with the wrong Board ID can render the video card
unusable.
Overriding the Board ID is only needed for extreme circumstances.
A mismatched Board ID almost always means the wrong firmware image is being
used for the specific video card.

Are you sure you want to continue?
Type "YES" to confirm (all caps):
YES

Overrding Board ID mismatch
Current      - Version:90.02.30.00.05 ID:10DE:1E07:10DE:12A4
               GPU Board (Normal Boardx
Replace with - Version:90.02.0B.00.A9 ID:10DE:1E07:10DE:12FA
               GPU Board (Normal Boardx

Update display adapter firmware?
Press 'y' to confirm (any other key to abort):
EEPROM ID (9D,7014) : ISSI IS25WP080 1.65-1.95V 8192Kx1S, page

 XUSB FW component of the input GPU firmware image is imcompatible, please use
 a newer version of GPU firmware image for this product.



BIOS Cert 2.0 Verification Error, Update aborted.


Nothing changed!



ERROR: Invalid firmware image detected.

 Nvflash CPU side error Code:2Error Message: Falcon In HALT or STOP state, abort uCode command issuing process.

C:\nvFlash>


----------



## acoustic

Your card must have been made in 2019? Looks like it has the XUSB that started in 2019. None of the BIOS listed are compatible as cards made prior did not have this change. Your only option is to shunt mod, if I'm not mistaken.


----------



## jura11

Josef997 said:


> Hi Jura,
> 
> I connect to monitors actually i was waiting for any disconnect but the good points there is no issue at the beginning the 2nd monitor always goes off then i just replace the place for the DP for both monitors and its simply works perfectly, no idea why its count as that but anyhow they are fine, That great i love the Matrix cards its something Unique!, for gaming its reached to 2160 not more than this you are lucky to get extra mhz for example i for Asus 1000w playing Call of duty GPU 147+ memory clock 1270 or 1250+ power limit all 100+ the temp 51 to 55 max temp power draw from 380 till 480 maximum power that the maximum i thinks any cards can reached with 2x8 pins connectors + PCie as i read from the comments and sercaching for the info , mine is A chip yes i will post the error but what i want to know if i shift between the mode the pc and card should be off or i can change while its On?
> 
> ( Edit here after apply maximum power its showing 2160 Mhz - 8250 1.075 v temp 51- 55)
> 
> last thing because of high power draw and heating there was a coil whine sound but not too high very low sound, now after cooling up and keep the fans 100% to keep the water temp down it's much better. i will post the error message when i will try to flash it again!
> 
> thanks a lot for the info i really appreciate it.


Hi there 

The Matrix BIOS seems to be the best for the Asus RTX 2080 Ti Strix. The XOC BIOS is good as well; the one thing I dislike about XOC is that the VRAM runs at full speed, and that caused some issues, at least in my case.

Have I been lucky with my Asus RTX 2080 Ti Strix? Hard to say. I tested 3 of them and mine is a somewhat middle, silver-or-bronze sample hahaha. One which I tested could do 2205MHz easily in gaming with the Matrix BIOS; with XOC that GPU would do 2220MHz or 2235MHz, but the water temperature must be 16-18°C.

For gaming I use just a normal +100MHz or +115MHz offset, which gives me 2160-2175MHz or 2175-2190MHz; on VRAM I use a +700MHz or +800MHz offset.

The stock Matrix BIOS boosts to 2070-2085MHz and the XOC BIOS boosts to 1995MHz I think, but don't quote me on that, I haven't run XOC for a while.

I never tried to switch BIOS on the Asus RTX 2080 Ti Strix while the GPU is powered or running; I usually switch off the PC and then switch the BIOS.

If you are using XOC, set the power limit to 50-55% as a max; I've seen a maximum power draw of 550W on my Asus RTX 2080 Ti Strix.

Hope this helps 

Thanks, Jura


----------



## mistershan

Medizinmann said:


> If you check ebay.com - around $900-$1100...if it has still warranty on it - even more - prices are crazy these days...
> 
> You could sell it - but I would estimate you would need to wait at least 2-3 month for something decent to replace it...
> 
> Best regards,
> Medizinmann


Those must be inflated. I tried to put my card up on Craigslist for 800 and got zero inquiries. When I put up 2 x 1080s a few years ago they sold in like an hour. I think people realize it would be better to wait and get a 3080. I guess inflated prices on eBay make sense for finding that rare person out there who needs a 2080 Ti for NVLink?


----------



## tps3443

Gallapagosisland said:


> I followed it exactly and i keep getting this:
> 
> 
> Code:
> 
> 
> C:\nvFlash>nvflash64 --protectoff
> NVIDIA Firmware Update Utility patched by Vipeax
> Copyright (C) 1993-2019, NVIDIA Corporation. All rights reserved.
> 
> 
> Adapter: GeForce RTX 2080 Ti  (10DE,1E07,10DE,12A4) H:--:NRM  S:00,B:01,D:00,F:00
> 
> 
> EEPROM ID (9D,7014) : ISSI IS25WP080 1.65-1.95V 8192Kx1S, page
> 
> Setting EEPROM software protect setting...
> 
> Remove EEPROM write protect complete.
> 
> 
> C:\nvFlash>nvflash64 -6 KFA2RTX2080Ti.rom
> NVIDIA Firmware Update Utility patched by Vipeax
> Copyright (C) 1993-2019, NVIDIA Corporation. All rights reserved.
> 
> 
> Checking for matches between display adapter(s) and image(s)...
> 
> Adapter: GeForce RTX 2080 Ti  (10DE,1E07,10DE,12A4) H:--:NRM  S:00,B:01,D:00,F:00
> 
> 
> EEPROM ID (9D,7014) : ISSI IS25WP080 1.65-1.95V 8192Kx1S, page
> 
> WARNING: Firmware image PCI Subsystem ID (10DE.12FAx
> does not match adapter PCI Subsystem ID (10DE.12A4).
> WARNING: None of the firmware image compatible Board ID's
> match the Board ID of the adapter.
> Adapter Board ID:        0049
> Firmware image Board ID: 007E
> 
> Please press 'y' to confirm override of PCI Subsystem ID's:
> Overriding PCI subsystem ID mismatch
> 
> *** WARNING: Overriding the Board ID can be very dangerous. ***
> Upgrading to an image with the wrong Board ID can render the video card
> unusable.
> Overriding the Board ID is only needed for extreme circumstances.
> A mismatched Board ID almost always means the wrong firmware image is being
> used for the specific video card.
> 
> Are you sure you want to continue?
> Type "YES" to confirm (all caps):
> YES
> 
> Overrding Board ID mismatch
> Current      - Version:90.02.30.00.05 ID:10DE:1E07:10DE:12A4
> GPU Board (Normal Boardx
> Replace with - Version:90.02.0B.00.A9 ID:10DE:1E07:10DE:12FA
> GPU Board (Normal Boardx
> 
> Update display adapter firmware?
> Press 'y' to confirm (any other key to abort):
> EEPROM ID (9D,7014) : ISSI IS25WP080 1.65-1.95V 8192Kx1S, page
> 
> XUSB FW component of the input GPU firmware image is imcompatible, please use
> a newer version of GPU firmware image for this product.
> 
> 
> 
> BIOS Cert 2.0 Verification Error, Update aborted.
> 
> 
> Nothing changed!
> 
> 
> 
> ERROR: Invalid firmware image detected.
> 
> Nvflash CPU side error Code:2Error Message: Falcon In HALT or STOP state, abort uCode command issuing process.
> 
> C:\nvFlash>







I was watching a video of Kingpin on YouTube; he was running a 2.2GHz 2080 Ti Kingpin in Port Royal and scored 10,060 at over 2,200MHz. Then I realized that my 2080 Ti Founders Edition on the stock air cooler scores nearly 200 points higher than that. I am getting around 10,230.

My point is, just solder the shunts on your card. Don't use LM; buy some 8 mΩ resistors and go for it! You'll gain 40% more juice, so you will have 448 watts available to the card.


----------



## Medizinmann

Gallapagosisland said:


> I followed it exactly and i keep getting this:
> 
> 
> Code:
> 
> 
> C:\nvFlash>nvflash64 --protectoff
> NVIDIA Firmware Update Utility patched by Vipeax
> Copyright (C) 1993-2019, NVIDIA Corporation. All rights reserved.
> 
> 
> Adapter: GeForce RTX 2080 Ti  (10DE,1E07,10DE,12A4) H:--:NRM  S:00,B:01,D:00,F:00
> 
> 
> EEPROM ID (9D,7014) : ISSI IS25WP080 1.65-1.95V 8192Kx1S, page
> 
> Setting EEPROM software protect setting...
> 
> Remove EEPROM write protect complete.
> 
> 
> C:\nvFlash>nvflash64 -6 KFA2RTX2080Ti.rom
> NVIDIA Firmware Update Utility patched by Vipeax
> Copyright (C) 1993-2019, NVIDIA Corporation. All rights reserved.
> 
> 
> Checking for matches between display adapter(s) and image(s)...
> 
> Adapter: GeForce RTX 2080 Ti  (10DE,1E07,10DE,12A4) H:--:NRM  S:00,B:01,D:00,F:00
> 
> 
> EEPROM ID (9D,7014) : ISSI IS25WP080 1.65-1.95V 8192Kx1S, page
> 
> WARNING: Firmware image PCI Subsystem ID (10DE.12FAx
> does not match adapter PCI Subsystem ID (10DE.12A4).
> WARNING: None of the firmware image compatible Board ID's
> match the Board ID of the adapter.
> Adapter Board ID:        0049
> Firmware image Board ID: 007E
> 
> Please press 'y' to confirm override of PCI Subsystem ID's:
> Overriding PCI subsystem ID mismatch
> 
> *** WARNING: Overriding the Board ID can be very dangerous. ***
> Upgrading to an image with the wrong Board ID can render the video card
> unusable.
> Overriding the Board ID is only needed for extreme circumstances.
> A mismatched Board ID almost always means the wrong firmware image is being
> used for the specific video card.
> 
> Are you sure you want to continue?
> Type "YES" to confirm (all caps):
> YES
> 
> Overriding Board ID mismatch
> Current      - Version:90.02.30.00.05 ID:10DE:1E07:10DE:12A4
> GPU Board (Normal Board)
> Replace with - Version:90.02.0B.00.A9 ID:10DE:1E07:10DE:12FA
> GPU Board (Normal Board)
> 
> Update display adapter firmware?
> Press 'y' to confirm (any other key to abort):
> EEPROM ID (9D,7014) : ISSI IS25WP080 1.65-1.95V 8192Kx1S, page
> 
> XUSB FW component of the input GPU firmware image is imcompatible, please use
> a newer version of GPU firmware image for this product.
> 
> 
> 
> BIOS Cert 2.0 Verification Error, Update aborted.
> 
> 
> Nothing changed!
> 
> 
> 
> ERROR: Invalid firmware image detected.
> 
> Nvflash CPU side error Code:2Error Message: Falcon In HALT or STOP state, abort uCode command issuing process.
> 
> C:\nvFlash>


Did you use the patched flash tool - I mean the one for the NVIDIA FE?








NVIDIA NVFlash with Board Id Mismatch Disabled (v5.590.0) Download

This is a patched version of NVIDIA's NVFlash. On Turing cards, NVFlash no longer allows overriding of the "board ID mismatch" message through comm

www.techpowerup.com

Best regards,
Medizinmann


----------



## Medizinmann

mistershan said:


> Those must be inflated. I tried to put my card up on Craigslist for $800 and got zero inquiries. When I put up 2x 1080s a few years ago, they sold in like an hour. I think people realize it would be better to wait and get a 3080. I guess inflated prices on eBay make sense for finding that rare person out there who needs a 2080 Ti for NVLink?


Of course they are inflated - and nobody in their right mind should buy any GPU right now if they can avoid it - and I don't know if you can really sell for these inflated prices - but if you price a little under them - like $700-$800 - I am pretty sure you will sell it.

You might need a few days…

I mean – look around: in many places 2080 Tis are sold out – for crazy prices...
...and when AMD can't deliver volume with their 6800/6800XT/6900XT, it will get even worse…

I am also pretty sure the situation might/will change after Xmas though…

Best regards,
Medizinmann


----------



## mistershan

Medizinmann said:


> Of course they are inflated - and nobody in their right mind should buy any GPU right now if they can avoid it - and I don't know if you can really sell for these inflated prices - but if you price a little under them - like $700-$800 - I am pretty sure you will sell it.
> 
> You might need a few days…
> 
> I mean – look around: in many places 2080 Tis are sold out – for crazy prices...
> ...and when AMD can't deliver volume with their 6800/6800XT/6900XT, it will get even worse…
> 
> I am also pretty sure the situation might/will change after Xmas though…
> 
> Best regards,
> Medizinmann


What do you mean, no one in their right mind would buy a video card now? Do you mean the 2000 series, or also the 3000 series and the new AMD ones? Is it because, now that there is real competition, it's best to wait for Nvidia's quick response with the 3080 Ti? Isn't the fact that quantities are so low usually a sign they are holding back for a reason?


----------



## kithylin

mistershan said:


> What do you mean, no one in their right mind would buy a video card now? Do you mean the 2000 series, or also the 3000 series and the new AMD ones? Is it because, now that there is real competition, it's best to wait for Nvidia's quick response with the 3080 Ti? Isn't the fact that quantities are so low usually a sign they are holding back for a reason?


The Nvidia 3000 series (and AMD's new 6000 series, launching tomorrow as of this writing) will be completely unobtainable due to high demand and low stock. Any that do show up for sale online within the next 6 months will be price-gouged to nearly double MSRP. On the other side, it would be unwise to buy anything from the 2000 series right now either, unless you get a really good deal somewhere - like under $500 for a 2080 Ti.

Stock of the new cards should normalize in a couple of months, and when it does, all older cards become obsolete and their prices will likely tank hard once the new cards are commonly available at MSRP. This is exactly how it has happened for the past 5-7 Nvidia/AMD GPU launches. No one thinking logically should be buying cards right now, old or new. Just wait a couple of months for everything to settle down, then buy one. The global video card market is always in crazy flux around a new GPU launch, for used and new cards alike.


----------



## Medizinmann

tps3443 said:


> I was watching a video of Kingpin on YouTube; he was running a 2.2 GHz 2080 Ti Kingpin in Port Royal and scored 10,060 at over 2,200 MHz. Then I realized that my 2080 Ti Founders Edition on the stock air cooler is scoring nearly 200 points higher than that. I am getting around 10,230.
> 
> My point is, just solder the shunts on your card. Don't use LM; buy some 8 mΩ resistors and go for it! You'll gain 40% more juice, so you will have 448 watts available to the card.


Well - not everybody is into soldering... and it isn't really necessary if you can flash a BIOS with a higher power budget.

I can hit 2145 MHz and up to 397W on the KFA2/Galax BIOS...

But – yes – if he can't flash, he can resort to soldering resistors on it.

Best regards,
Medizinmann


----------



## Medizinmann

mistershan said:


> What do you mean, no one in their right mind would buy a video card now? Do you mean the 2000 series, or also the 3000 series and the new AMD ones? Is it because, now that there is real competition, it's best to wait for Nvidia's quick response with the 3080 Ti? Isn't the fact that quantities are so low usually a sign they are holding back for a reason?


As user *kithylin* already wrote...

It isn't a good time to buy *any* GPU at all - if you can avoid it. If you are lucky and able to get one of the new GPUs (either NVIDIA Ampere or AMD RDNA 2) at MSRP - go for it, but your chances are slim to none, and I wouldn't pay inflated scalper prices if I could avoid it...

...the other thing is early-adopter pains with the new GPUs - i.e. problems with drivers, caps, etc.

And of course I wouldn't buy "old tech" (i.e. an RTX 2080 Ti) at much too highly inflated prices... so the best advice is to wait until everything normalizes - maybe sometime in the middle to the end of Q1/2021 or even later, depending on the supply chain...

Best regards,
Medizinmann


----------



## kithylin

Also, the other thing that will drive prices on "older" / previous-generation hardware down a lot is everyone flooding the market with used video cards over the next couple of months, once supply of the new ones becomes common. Most people try to get some money back out of their old hardware to recoup the cost of the upgrade. Most used video cards made within the past 4 years will probably drop in price over the next 6-8 months because of this. It's standard market economics: high supply = low prices. That used video card you see today for $500 (just a theoretical example) might be $400 or $350 in 6 months. And if you buy last gen's used hardware for $500 today, then 6 months from now, when new cards that are 50% faster become commonplace at $400, you'll wish you had waited and be stuck with buyer's remorse.

This is the same cycle every time a new video card is released. New cards come out. No one can buy them for 6-8 months. Then they're common. Then all the people who waited start upgrading and dump their old cards on eBay, etc. It's been happening with new video card releases since the GeForce 6000 days.


----------



## mistershan

kithylin said:


> Also, the other thing that will drive prices on "older" / previous-generation hardware down a lot is everyone flooding the market with used video cards over the next couple of months, once supply of the new ones becomes common. Most people try to get some money back out of their old hardware to recoup the cost of the upgrade. Most used video cards made within the past 4 years will probably drop in price over the next 6-8 months because of this. It's standard market economics: high supply = low prices. That used video card you see today for $500 (just a theoretical example) might be $400 or $350 in 6 months. And if you buy last gen's used hardware for $500 today, then 6 months from now, when new cards that are 50% faster become commonplace at $400, you'll wish you had waited and be stuck with buyer's remorse.
> 
> This is the same cycle every time a new video card is released. New cards come out. No one can buy them for 6-8 months. Then they're common. Then all the people who waited start upgrading and dump their old cards on eBay, etc. It's been happening with new video card releases since the GeForce 6000 days.


Ah... damn, a place by me had the 3090 FTW for $1,800 the other day. I should have bought it and sold my 2080 Ti before its price tanks. The guy at the store told me to wait because Nvidia is going to release a 3080 Ti that will be almost as powerful for like $800 less. Idk. I do video editing work, and the extra CUDA cores probably could have helped.


----------



## Krzych04650

mistershan said:


> Ah... damn, a place by me had the 3090 FTW for $1,800 the other day. I should have bought it and sold my 2080 Ti before its price tanks. The guy at the store told me to wait because Nvidia is going to release a 3080 Ti that will be almost as powerful for like $800 less. Idk. I do video editing work, and the extra CUDA cores probably could have helped.


The 3080 Ti is now almost certainly coming, most likely at $999. But if anyone wants to compare prices, compare directly - not the top 3090 model against the base price of a 3080 Ti. A 3080 Ti FTW3 is going to be $1,200.


----------



## tps3443

. Edit.


----------



## tps3443

Medizinmann said:


> Well - not everybody is into soldering...and it isn't really necessary if you can flash a BIOS with higher Power Budget.
> 
> I can hit 2145 Mhz and up to 397W on the KFA2/Galax BIOS...
> 
> But – yes – if he can’t – he can resort to this and solder resistors on it.
> 
> Best regards,
> Medizinmann



If he can flash, then sure - you don't have to take the video card out of your case, lol. But I can solder a resistor faster, and it is easier than delidding a CPU.

I flashed to the Galax 380W BIOS and soldered the shunts. You need all the power you can provide to get past 2,130-2,145 MHz. Anyways, my EKWB waterblock will be here this week; I'm looking forward to it. It is amazing what you can squeeze out of a 2080 Ti.


----------



## Shark00n

tps3443 said:


> If he can flash, then sure - you don't have to take the video card out of your case, lol. But I can solder a resistor faster, and it is easier than delidding a CPU.
> 
> I flashed to the Galax 380W BIOS and soldered the shunts. You need all the power you can provide to get past 2,130-2,145 MHz. Anyways, my EKWB waterblock will be here this week; I'm looking forward to it. It is amazing what you can squeeze out of a 2080 Ti.


I'm using the EK waterblock too, no backplate. Please tell us if the soldered resistors interfere in any way with the block itself.

Also, can you provide a link to exactly what kind of resistors are the best quality to use? Trying to find a source for them in Europe. Thanks!


----------



## Medizinmann

tps3443 said:


> If he can flash, then sure - you don't have to take the video card out of your case, lol. But I can solder a resistor faster, and it is easier than delidding a CPU.


Well - might be - but as I said, not everybody is into soldering - same goes for delidding CPUs...



> I flashed to the Galax 380W BIOS and soldered the shunts. You need all the power you can provide to get past 2,130-2,145 MHz. Anyways, my EKWB waterblock will be here this week; I'm looking forward to it. It is amazing what you can squeeze out of a 2080 Ti.


My GPU crashes beyond 2145 MHz - and that has nothing to do with the power budget; I would need much better cooling. I was able to do one Time Spy run @2160 MHz - but that was with lowered ambient (open windows last winter, freezing 13°C room temps... brr... ❄).

...and it isn't stable under "normal" circumstances like 22-23°C ambient.

E.g. user Imprezzion reported some success with the XOC BIOS and saw power draw over 400W - but he also reported instabilities and diminishing returns beyond 400W.

Best regards,
Medizinmann


----------



## tps3443

Medizinmann said:


> Well - might be - but as I said, not everybody is into soldering - same goes for delidding CPUs...
> 
> 
> 
> My GPU crashes beyond 2145 MHz - and that has nothing to do with the power budget; I would need much better cooling. I was able to do one Time Spy run @2160 MHz - but that was with lowered ambient (open windows last winter, freezing 13°C room temps... brr... ❄).
> 
> ...and it isn't stable under "normal" circumstances like 22-23°C ambient.
> 
> E.g. user Imprezzion reported some success with the XOC BIOS and saw power draw over 400W - but he also reported instabilities and diminishing returns beyond 400W.
> 
> Best regards,
> Medizinmann


Any idea what XOC BIOS works on a reference-PCB "A"-bin 2080 Ti?

I hear the Kingpin 2000 W BIOS works and will lock 2,160-2,175 MHz at 55-60°C GPU temps. I have read this twice on Google; just confirming it works on the reference PCB.

I am still playing with the 380W Galax BIOS right now. I have the shunts soldered too.


----------



## zorro20010

Guys, I can't flash my Gigabyte 2080 Ti Turbo with any BIOS; I always receive an error about the board ID. I have tried the patched NVFlash 5.590: nvflash biosname
How can I flash another BIOS?


----------



## gamer944

Hi guys.

I am having a weird problem... if you can find an explanation for me, I will be grateful.

I have an RTX 2080 Ti Aorus (not Xtreme), and each time I flash the Galax XOC BIOS, which removes all limits, my PC freezes in MSI Afterburner, same as on stock... the card works very well at stock and is 100% stable... but when I try to manipulate MSI Afterburner, the PC freezes and reboots on its own.

Could you explain this to me?


----------



## pewpewlazer

tps3443 said:


> Any idea what XOC BIOS works on a reference-PCB "A"-bin 2080 Ti?
> 
> I hear the Kingpin 2000 W BIOS works and will lock 2,160-2,175 MHz at 55-60°C GPU temps. I have read this twice on Google; just confirming it works on the reference PCB.
> 
> I am still playing with the 380W Galax BIOS right now. I have the shunts soldered too.


I believe all of the "XOC" BIOSes work on A-chip reference-PCB cards. I've flashed mine with the following:
- STRIX XOC BIOS = no v/f curve, so you end up with 1.05V under load; totally useless.
- HOF XOC BIOS = has v/f curve functionality, but attempting to save or load profiles in Afterburner will cause your computer to reboot, so also totally useless.
- KPE XOC BIOS = no v/f curve functionality, locked at 1.125V even on the desktop, so totally useless for daily use IMO.

In all cases, you lose at least one DisplayPort output.

If your card is shunt-modded, there's no reason to run an XOC BIOS unless you're hoping that last bit of voltage helps you run 3DMark 15 MHz faster. I believe my card ran 2175 MHz through Port Royal with the KPE XOC BIOS first try, but only made it about halfway through at 2190 MHz before crashing. I can get 2175 MHz through Port Royal at as low as 1.075V on a regular BIOS, so I gain nothing from the XOC BIOS.


----------



## gfunkernaught

CptAsian said:


> They've been doing that for years though so I don't see how that could flip someone so strongly this generation. Could be that though, that's just how I feel about it.


More like "I know y'all have been doing shady **** for years, but this is just embarrassing"... that.


----------



## gfunkernaught

Need some input from 2080 Ti owners using a custom v/f curve in AB. I set my curve when my card is idling at 26°C. I set a point of [email protected] For some reason, even though in-game the clock is shown to be running at 2085 MHz, the point has moved itself to [email protected] Probably after a reboot. Not sure why, other than it being the nature of GPU Boost. What is the trick to making sure the curve does not change on its own, other than a hard lock with Ctrl+L?
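The curve manipulation being discussed can be pictured with a small model. This is only an illustrative sketch of what "flattening" a v/f curve at a chosen point does conceptually - it is not the Afterburner API, and the curve points are made-up example values:

```python
# Illustrative model of flattening a v/f curve at a chosen point, which is
# what an undervolt lock effectively does. Not the Afterburner API; the
# curve points below are made-up example values.
def flatten_curve(curve, v_lock_mv, f_lock_mhz):
    """curve: list of (voltage_mV, freq_MHz) points in ascending voltage order.
    Every point at or above v_lock_mv is clamped to f_lock_mhz, so the GPU
    never requests more voltage than the locked point needs."""
    return [(v, f if v < v_lock_mv else f_lock_mhz) for v, f in curve]

stock = [(900, 1900), (950, 1975), (1000, 2040), (1050, 2100), (1093, 2130)]
locked = flatten_curve(stock, 1000, 2040)
# locked -> [(900, 1900), (950, 1975), (1000, 2040), (1050, 2040), (1093, 2040)]
```

GPU Boost then re-offsets the whole curve with temperature, which is why a point set at idle temps can appear to drift after a reboot or warm-up.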


----------



## gtz

Just bought myself a cheap 2080 Ti from Craigslist. The only drawback was that it was completely naked - no heatsink/cooler. The good thing is that it is a reference design, and I have 3 coolers to choose from (I have owned a lot of 2080 Tis, I know it's a problem lol). I sold my last 2080 Ti in hopes of grabbing a 6800 XT or 3080, but those are still hard to get, and this 2080 Ti was cheap. Will post once I decide which cooler to attach.


----------



## tps3443

gtz said:


> Just bought myself a cheap 2080 Ti from Craigslist. The only drawback was that it was completely naked - no heatsink/cooler. The good thing is that it is a reference design, and I have 3 coolers to choose from (I have owned a lot of 2080 Tis, I know it's a problem lol). I sold my last 2080 Ti in hopes of grabbing a 6800 XT or 3080, but those are still hard to get, and this 2080 Ti was cheap. Will post once I decide which cooler to attach.
> 
> View attachment 2466261


Same here! I'm on my 4th 2080 Ti in the past 6 months. I have ultimately decided that a 2.1+ GHz 2080 Ti is just as competitive as anything else available. And they do better in ray tracing than the AMD cards.

I just water-cooled this one yesterday; it has really good Samsung memory on it too.


----------



## gtz

The 2080 Ti is an impressive piece of silicon, and Nvidia really locked them down. My previous water-cooled one was a beast. I might reattempt to water-cool this one, since I had to use the EVGA blower-type cooler. I could not use the Zotac cooler since those fans use soldered mini 4-pin GPU connectors instead of the mini 14-pin, and I could not use the dual-fan EVGA because I could not find the midplate that covers the RAM and VRM. The EVGA blower with a spacer on the 4 main screws seems to do the job. Ran the 3DMark loop stress test and played 30 minutes of Doom Eternal, and the GPU works flawlessly. Not a bad GPU for $300. I did take a chance, but I picked up the GPU at the seller's house, so I figured I was safe.


----------



## tps3443

gtz said:


> The 2080 Ti is an impressive piece of silicon, and Nvidia really locked them down. My previous water-cooled one was a beast. I might reattempt to water-cool this one, since I had to use the EVGA blower-type cooler. I could not use the Zotac cooler since those fans use soldered mini 4-pin GPU connectors instead of the mini 14-pin, and I could not use the dual-fan EVGA because I could not find the midplate that covers the RAM and VRM. The EVGA blower with a spacer on the 4 main screws seems to do the job. Ran the 3DMark loop stress test and played 30 minutes of Doom Eternal, and the GPU works flawlessly. Not a bad GPU for $300. I did take a chance, but I picked up the GPU at the seller's house, so I figured I was safe.


Yeah, a properly set up 2080 Ti is very capable. Nvidia had no competition at launch, and the 2080 Ti was heavily held back.

There is absolutely no comparison between my 2080 Ti and something like a 3070.


----------



## Medizinmann

zorro20010 said:


> Guys, I can't flash my Gigabyte 2080 Ti Turbo with any BIOS; I always receive an error about the board ID. I have tried the patched NVFlash 5.590: nvflash biosname
> How can I flash another BIOS?
> View attachment 2466172


Looks like you have a non-A card (PCI ID 1E04) - you could flash the non-A 310W BIOS from Palit - see page one.


https://www.techpowerup.com/vgabios/208274/Palit.RTX2080Ti.11264.190131.rom



Best regards,
Medizinmann


----------



## Medizinmann

gamer944 said:


> Hi guys.
> 
> I am having a weird problem ... if you can find an explanation for me I will be grateful.
> 
> I have an RTX 2080 Ti Aorus (not Xtreme), and each time I flash the Galax XOC BIOS, which removes all limits, my PC freezes in MSI Afterburner, same as on stock... the card works very well at stock and is 100% stable... but when I try to manipulate MSI Afterburner, the PC freezes and reboots on its own.
> 
> Could you explain this to me?


The XOC BIOS doesn't work with Afterburner at boot - so you need to stop Afterburner from starting with Windows...

The other problem sounds like a driver problem...

Best regards,
Medizinmann


----------



## Medizinmann

tps3443 said:


> Any idea what XOC BIOS works on a reference-PCB "A"-bin 2080 Ti?
> 
> I hear the Kingpin 2000 W BIOS works and will lock 2,160-2,175 MHz at 55-60°C GPU temps. I have read this twice on Google; just confirming it works on the reference PCB.
> 
> I am still playing with the 380W Galax BIOS right now. I have the shunts soldered too.


User Imprezzion reported some success with the Galax HOF XOC BIOS - but not without problems:
you lose a DP port, and you can't use Afterburner as usual...

And as you already use the 380W BIOS with the shunt mod, there isn't really a point in trying, IMHO.

As I said, above a certain point there are really diminishing returns - somewhere beyond 420W or so... and you need very, very good cooling.
Using an XOC BIOS only really makes sense for LN2 OC.

Best regards,
Medizinmann


----------



## gtz

I had no intention of water cooling my newest 2080 Ti, but damn is it a good overclocker. First, let me say I got lucky with Samsung memory. Today was 40 degrees, and I decided to try the PC on my garage bench (with the garage door open) to see how far I could push it. Pretty freaking far - this card is unmodified and running a non-A chip. Got a graphics score of 16,800 in Time Spy. That is with an offset of +475 on the core and +1000 on the memory.

Like I mentioned, this was supposed to be a placeholder for either a 6900 XT or 3090 (or 3080 Ti if released), but if it can hold those same clocks water-cooled, I might go that route.


----------



## Hudson13

Finally made it here to the 2080 Ti group. I've had a Gigabyte blower for the last 6 months or so and just grabbed a Kingpin 2080 Ti (and I've got a water-damaged card on the way now also, to resurrect or convert to a donor). Despite the noise from the blower, I recall being able to run ~2060-2145 MHz core clock and +1400 memory on the Gigabyte card with the 300W power limit Aorus blower BIOS at ambient temps (had to stick to blower BIOSes because every other non-XOC BIOS I found either had that firmware mismatch issue in NVFlash or had a max fan speed below what the turbo needed). Picked up an Alphacool hybrid kit to put the card in, and sure enough, flubbed something up during the install, resulting in an unresponsive card: no output signal, and debug code d6 popped (no display device). Going to break out the multimeter again to see if I can figure anything out once the "donor" arrives, but I'm clueless with a probe.

In the short testing I did yesterday with the Kingpin (baselines done on the regular BIOS, testing done on the regular BIOS and XOC), I realized how much I miss that Gigabyte card. It didn't have _too_ much voltage/frequency wiggle room at ambient, but I messed with core+memory clocks and the voltages within the Classified Tool to go past stock performance. I trust myself a bit more now than when I was doing ambient benching with the Gigabyte card, so it would be nice to have a real side-by-side comparison.

I was making progress with the core and memory clocks with increasing voltage (and temp wasn't an issue, besides the pesky Vcore VRM + memory section of the card fiddling with 51°C), but I didn't get to the +1400 MHz memory clock that I hit easily on the turbo. Called it a night rather than trying to bench on an empty brain, so I didn't try every option I had planned with the memory clock, but it was interesting to see how much a chip/card can vary between models.

Had some good fun getting a feel for the Kingpin card, and I'm hoping to learn more here so I can at least make sure it lasts long enough to see another generation of Nvidia cards.


----------



## Hudson13

gtz said:


> I had no intention of water cooling my newest 2080 Ti, but damn is it a good overclocker. First, let me say I got lucky with Samsung memory. Today was 40 degrees, and I decided to try the PC on my garage bench (with the garage door open) to see how far I could push it. Pretty freaking far - this card is unmodified and running a non-A chip. Got a graphics score of 16,800 in Time Spy. That is with an offset of +475 on the core and +1000 on the memory.
> 
> Like I mentioned, this was supposed to be a placeholder for either a 6900 XT or 3090 (or 3080 Ti if released), but if it can hold those same clocks water-cooled, I might go that route.


16,800 graphics in TS is solid on air, but what frequency is the core clock sitting at under load? I'm clueless, so I don't know if it's intentional or just something that works, but early on when I was testing my 2080, I set the core boost to something like +250 or +300 and it boosted to 2145 MHz at peak. Worked great. But if I dropped to +125, it would then also peak at the same frequency with (as I recall) the same behavior. It's been a while now, and I don't recall being able to replicate it on another BIOS later, so it might have just been a temporary thing or a BIOS-specific bug. And maybe I just haven't seen enough solid OC results out of non-A 2080 Tis, which might make up a bit of that offset (since A chips would come with _some_ stock OC, I assume). Do you remember how much power the card was pulling in that test? The closest result I see in my chart to 16,800 is 16,793 graphics in Time Spy, and it looks like that was on the regular BIOS at +50/1200 (2115 MHz peak core clock).


----------



## tps3443

Hudson13 said:


> 16,800 graphics in TS is solid on air, but what frequency is the core clock sitting at under load? I'm clueless, so I don't know if it's intentional or just something that works, but early on when I was testing my 2080, I set the core boost to something like +250 or +300 and it boosted to 2145 MHz at peak. Worked great. But if I dropped to +125, it would then also peak at the same frequency with (as I recall) the same behavior. It's been a while now, and I don't recall being able to replicate it on another BIOS later, so it might have just been a temporary thing or a BIOS-specific bug. And maybe I just haven't seen enough solid OC results out of non-A 2080 Tis, which might make up a bit of that offset (since A chips would come with _some_ stock OC, I assume). Do you remember how much power the card was pulling in that test? The closest result I see in my chart to 16,800 is 16,793 graphics in Time Spy, and it looks like that was on the regular BIOS at +50/1200 (2115 MHz peak core clock).


This was my air-cooled 2080 Ti FE. It broke 17K Time Spy graphics on just air! That's with the 380W Galax BIOS and shunts soldered - all inside a closed case at normal ambient temps.

My GPU is water-cooled now, but of all (4) 2080 Tis I've owned, I have never seen over 17K on air. I thought it was amazing!


----------



## gtz

Hudson13 said:


> 16,800 graphics in TS is solid on air, but what frequency is the core clock sitting at under load? I'm clueless, so I don't know if it's intentional or just something that works, but early on when I was testing my 2080, I set the core boost to something like +250 or +300 and it boosted to 2145 MHz at peak. Worked great. But if I dropped to +125, it would then also peak at the same frequency with (as I recall) the same behavior. It's been a while now, and I don't recall being able to replicate it on another BIOS later, so it might have just been a temporary thing or a BIOS-specific bug. And maybe I just haven't seen enough solid OC results out of non-A 2080 Tis, which might make up a bit of that offset (since A chips would come with _some_ stock OC, I assume). Do you remember how much power the card was pulling in that test? The closest result I see in my chart to 16,800 is 16,793 graphics in Time Spy, and it looks like that was on the regular BIOS at +50/1200 (2115 MHz peak core clock).


You are correct - since it is a non-A chip, the offset needs to be greater. This is the best-scoring RTX 2080 Ti I've had. My first one was a Zotac, the second was an unknown brand, the third was another Zotac, the fourth was an EVGA (water-cooled that one; also non-A, and it scored around 16,500 in Time Spy on the stock BIOS with no shunt mod), and now this new EVGA. Those have been my personal ones; I have also built at least 5 high-end rigs with 2080 Tis for flips. Out of all the ones I've played with, this non-A scores the best. It has the stock BIOS and, as far as I can tell, is not shunt-modded. I texted the person I bought it from to ask if he modified it, because it performs extremely well, especially for a non-A chip. No response - I even specified I wasn't looking for a refund, just really impressed with this card.

I am going to water-cool my main rig again. I water-cooled it half-assed 2 months ago, but took everything apart, used the CPU pump/block combo for a client rig, and sold my old 2080 Ti with the block. I just assumed I was going to get a 5900X and a 6800 XT or 3080 - boy, was I wrong.

Just bought a new Barrow block from eBay for $70 (the owner claims he just opened it to inspect it), and spent $145 at TitanRig for a Barrow CPU block, an Alphacool reservoir/DDC pump combo, 12 compression fittings, and the silicone tube that goes inside hard-line tubing for easy bending. I already have the radiators and hard-line tubing. Just going to push my 3950X and 2080 Ti as far as I can.


----------



## tps3443

gtz said:


> You are correct - since it is a non-A chip, the offset needs to be greater. This is the best-scoring RTX 2080 Ti I've had. My first one was a Zotac, the second was an unknown brand, the third was another Zotac, the fourth was an EVGA (water-cooled that one; also non-A, and it scored around 16,500 in Time Spy on the stock BIOS with no shunt mod), and now this new EVGA. Those have been my personal ones; I have also built at least 5 high-end rigs with 2080 Tis for flips. Out of all the ones I've played with, this non-A scores the best. It has the stock BIOS and, as far as I can tell, is not shunt-modded. I texted the person I bought it from to ask if he modified it, because it performs extremely well, especially for a non-A chip. No response - I even specified I wasn't looking for a refund, just really impressed with this card.
> 
> I am going to water-cool my main rig again. I water-cooled it half-assed 2 months ago, but took everything apart, used the CPU pump/block combo for a client rig, and sold my old 2080 Ti with the block. I just assumed I was going to get a 5900X and a 6800 XT or 3080 - boy, was I wrong.
> 
> Just bought a new Barrow block from eBay for $70 (the owner claims he just opened it to inspect it), and spent $145 at TitanRig for a Barrow CPU block, an Alphacool reservoir/DDC pump combo, 12 compression fittings, and the silicone tube that goes inside hard-line tubing for easy bending. I already have the radiators and hard-line tubing. Just going to push my 3950X and 2080 Ti as far as I can.


I could get around 17,250 with my old non-A 2080 Ti water-cooled, shunt-modded, and flashed. That may not seem like a whole lot more, but being water-cooled, it is sustained performance.

My air-cooled Gigabyte non-A would manage 16,800 on air, but it had shunt mods and the 310W BIOS.

The card must have something done to it; 16,800 Time Spy graphics on air cooling sounds near impossible without a lot of extra power.

Breaking 17K with my A-bin FE on air cooling proved quite difficult, and it had "up to" 532 watts of available power that it could use if needed.


----------



## yoadknux

Hi guys. I arrived late to the party. I now own a 2080 Ti FTW3 (air) and I'd like to know what some typical overclock values for 24/7 stability are.
For now I went with an undervolt of 2040 core / 8000 memory @ 1V. Temps are 72-73°C through continuous runs (~60 scenes) of Fire Strike Ultra, Time Spy Extreme, and Port Royal, and ~70°C after 2 hours of Superposition stress.
From my experience overclocking the 2080 Ti, there isn't much point going to high voltages (anything above 1.062V), because the power limit will kick in too often; even if you think you have 2200 @ 1.093V, in reality you will get something like 2000. The solution to that is a shunt mod, but I don't want to shunt this card since it is air-cooled.
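For readers new to the shunt mod mentioned here: the card's power limiter measures current via the voltage drop across tiny shunt resistors, so soldering a resistor in parallel makes the card under-read its own draw. A rough sketch of the math in Python (the resistor values below are illustrative, not taken from any particular card):

```python
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def effective_power_limit(bios_limit_w, stock_shunt_ohm, added_shunt_ohm):
    # The limiter senses current as (voltage drop / stock shunt value).
    # With a second resistor soldered in parallel, the drop shrinks,
    # so the card under-reads power by the ratio stock/parallel.
    r_mod = parallel(stock_shunt_ohm, added_shunt_ohm)
    return bios_limit_w * stock_shunt_ohm / r_mod

# Illustrative values: 5 mOhm stock shunt, 8 mOhm soldered on top,
# 380 W BIOS limit:
print(round(effective_power_limit(380, 0.005, 0.008)))  # → 618
```

The BIOS limit stays nominal; the real wattage the limiter permits scales with the resistance ratio.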
So, I am interested in what to expect when I go to the stock voltage of 1.05V. Is it typical to achieve a 24/7-stable 2100? What about memory?
Thanks!
By the way, I have to say I'm a bit disappointed with EVGA. First, the card is pretty loud, especially compared to my previous cards, the Palit 2080S Gamerock and Aorus 1080 Ti. Second, the Precision software isn't as good as MSI Afterburner: the UI is more difficult to control, the monitoring isn't as good, and the voltage curve editor is ehhh. Third, the fan control is weird with this card. MSI AB can't control the fans at all; in fact, when I try, weird things happen and I have to open Precision to somehow "re-configure" the fans. I wonder if this is a BIOS issue - has anyone run into it?


----------



## tps3443

yoadknux said:


> Hi guys. I arrived late to the party. I now own a 2080ti FTW3 (air) and I'd like to know what are some typical overclock values for 24/7 stability.
> For now I went with an undervolt of 2040 core / 8000 memory @ 1V. Temps are 72-73c at continuous (~60 scenes) of FireStrike Ultra, TimeSpy Extreme and Port Royal, and ~70c after 2 hours of Superposition stress.
> From my experience with overclocking 2080ti, there isn't much point going to high voltages (anything above 1.062V) because the power limit will kick too often and even if you think you have 2200 @ 1.093V, in reality you will have something like 2000. The solution to that was shunt mod but I don't want to shunt this card as it is air cooled.
> So, I am interested about what to expect when I go to the stock voltage of 1.05V. Is it typical to achieve 24/7 stable 2100? What about memory?
> Thanks!
> By the way, I have to say I'm a bit disappointed with EVGA, first of all the card is pretty loud, especially compared to my previous cards, the Palit 2080s Gamerock and Aorus 1080ti. Second, the Precision software isn't as good as MSI Afterburner - the UI more difficult to control, the monitoring isn't as good as Afterburner, and the voltage curve is ehhh. Third, the fan control is so weird with this card. MSI AB can't control the fans at all, and in fact when I try to do it weird things happen and I have to open Precision to somehow "re-configure" the fans. I wonder if this is a bios issue - anyone ran into it?


Plenty of 2080 Tis can sustain 2,200 or even 2,250 MHz. It's all about how cool you can keep the card. The only reason you are undervolting is to run a higher frequency by keeping the card cooler: it consumes less power and in turn creates much less heat.

The colder a GPU runs, the less power it consumes. Certain BIOSes allow nearly unlimited power draw, or much higher voltages, but no matter what BIOS you have, or how much power you can send, you must keep the GPU cold.


I ran Time Spy Extreme last night; my card was at 2,160 MHz and pulled 463 watts of juice. It certainly wasn't running out of power, and it held the frequency without clocking down.


I guess what I'm saying is, if your card is running very cool, like 42°C, you don't even need to undervolt it at all. Just overclock it, and it'll lock in 2,160-2,175 MHz.


^ This is how my FE model is.


----------



## yoadknux

tps3443 said:


> People own 2080Ti’s and they can sustain 2,200 or even 2,250Mhz. It’s all about how cool you can keep the card. The only reason you are undervolting is to get your card cooler to run a higher frequency, by consuming less power and in turn creating much less heat.
> 
> The colder a GPU runs the less power it consumes. Or certain bios allow nearly unlimited power draw, or much higher voltages. But no matter what bios you have, or how much you can send you must keep the GPU cold.
> 
> 
> I ran timespy extreme last night, my card was at 2,160Mhz and pulled 463 watts of juice. It certainly wasn’t running out of power. And it held the frequency Without clocking down.
> 
> 
> I guess what I’m saying is, if your card is running very cool like 42C you don’t even need to undervolt it at all. Just overclock it, and it’ll lock down 2,160–2,175Mhz.
> 
> 
> ^ This is how my FE model is.


Hi, thanks for the comment!
My card is on air. When I run things like Fire Strike Ultra it will power limit even at 1 V, although that only happens above 65°C. 
At what voltage are you running 2160MHz?


----------



## tps3443

yoadknux said:


> Hi, thanks for the comment!
> My card is on air. When I run things like FireStrike Ultra it will power limit even at 1V, although it happens only at 65c+.
> At what voltage are you running 2160MHz?


Yeah, reducing the temperature greatly reduces power consumption. My card on the previous BIOS would run 2,130 MHz at 1.062 V and pull around 275-285 watts playing Death Stranding. On air cooling, undervolted, and at a much lower frequency, it easily pulled 80-100 watts more.

I run 1.100 V for 2,160 MHz.

I think I could get by with a little less voltage, but I haven't messed with it too much.

Water-cool that card and you'll get much lower power consumption and much higher frequencies that don't clock down in games.


You can get within roughly 3% of an RTX 3080 FE once you set up a 2080 Ti properly. And you'll get ray-tracing performance superior to something like a 6800 XT.

The GPU market is a mess these days. That's why I grabbed another 2080 Ti solely to water-cool it for maximum performance. I sold my last one to try to grab next gen, and regretted it.


----------



## pewpewlazer

yoadknux said:


> Hi guys. I arrived late to the party. I now own a 2080ti FTW3 (air) and I'd like to know what are some typical overclock values for 24/7 stability.
> For now I went with an undervolt of 2040 core / 8000 memory @ 1V. Temps are 72-73c at continuous (~60 scenes) of FireStrike Ultra, TimeSpy Extreme and Port Royal, and ~70c after 2 hours of Superposition stress.
> From my experience with overclocking 2080ti, there isn't much point going to high voltages (anything above 1.062V) because the power limit will kick too often and even if you think you have 2200 @ 1.093V, in reality you will have something like 2000. The solution to that was shunt mod but I don't want to shunt this card as it is air cooled.
> So, I am interested about what to expect when I go to the stock voltage of 1.05V. Is it typical to achieve 24/7 stable 2100? What about memory?


2040 @ 1.000v 72-73c load sounds pretty good. That would be somewhere around 2100-2130 at water cooled temps, which is quite good for that voltage.

"stock" voltage is up to 1.068v, although the curve will typically max out at 1.05v. With a 373w (FTW3) or 380w (Galax BIOS) power limit, ~1.025v is about the max you can consistently run for gaming in my experience without power throttling. Maybe a bit lower on air cooling.

This all varies by game and resolution of course. Running games at 4k will typically result in way higher power consumption than if you were running at 1080p. And every game/engine varies wildly. For example, in AC Valhalla or Watch Dogs Legion (with RTX on), my card will average a measly 300w even at 1.093v. But in something like Metro Exodus, average power draw is in the low 400s with peaks nearing 500w.

Memory is all over the map as well. My card has Micron, and topped out around +750mhz on stock cooling (EVGA XC Ultra in a case with ~800 RPM radiator fans for airflow). After throwing a water block on it, I can run up to +1100mhz. Samsung cards seem to be capable of +1400mhz and beyond if you get a good one.
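For context on those memory offsets, GDDR6 bandwidth is just the effective data rate times the bus width. A quick Python sketch (this assumes Afterburner's displayed memory clock is half the effective data rate, i.e. 7000 MHz shown = 14 Gbps, which is how it reports GDDR6 on these cards):

```python
def gddr6_bandwidth_gbs(ab_clock_mhz, bus_bits=352):
    # Afterburner reports half the effective data rate for GDDR6,
    # so double the clock, then multiply by the bus width in bytes.
    effective_gbps = ab_clock_mhz * 2 / 1000
    return effective_gbps * bus_bits / 8

print(gddr6_bandwidth_gbs(7000))         # stock → 616.0 GB/s
print(gddr6_bandwidth_gbs(7000 + 750))   # +750 offset → 682.0 GB/s
print(gddr6_bandwidth_gbs(7000 + 1100))  # +1100 under water, ~713 GB/s
```

So a +750 offset is roughly an 11% bandwidth bump over the stock 616 GB/s.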


----------



## yoadknux

pewpewlazer said:


> 2040 @ 1.000v 72-73c load sounds pretty good. That would be somewhere around 2100-2130 at water cooled temps, which is quite good for that voltage.
> 
> "stock" voltage is up to 1.068v, although the curve will typically max out at 1.05v. With a 373w (FTW3) or 380w (Galax BIOS) power limit, ~1.025v is about the max you can consistently run for gaming in my experience without power throttling. Maybe a bit lower on air cooling.
> 
> This all varies by game and resolution of course. Running games at 4k will typically result in way higher power consumption than if you were running at 1080p. And every game/engine varies wildly. For example, in AC Valhalla or Watch Dogs Legion (with RTX on), my card will average a measly 300w even at 1.093v. But in something like Metro Exodus, average power draw is in the low 400s with peaks nearing 500w.
> 
> Memory is all over the map as well. My card has Micron, and topped out around +750mhz on stock cooling (EVGA XC Ultra in a case with ~800 RPM radiator fans for airflow). After throwing a water block on it, I can run up to +1100mhz. Samsung cards seem to be capable of +1400mhz and beyond if you get a good one.


Thank you as well for the comment. So it seems the overclock I'm getting is quite typical for this card (on air). Memory can bench at +1500 too, but not for 24/7; I think at some point the memory modules overheat and that causes stability issues. It's actually very weird: the FTW3 has three memory sensors, and one is very cool (~50°C stressed) while the others are hot (~80°C stressed). When I changed the thermal paste, I noticed that only one side of the memory is connected to the heatsink through a thermal pad, while the rest of the modules sit hidden below a black front plate. I think this is a design flaw, because the plate traps heat and isn't connected to the main heatsink. This is my first EVGA card since the 970 and I'm actually a bit disappointed; it's a very loud card. Precision is also very buggy, and MSI AB can't control the fan curve on these cards (even worse, when I try, it disables two fans until I run Precision). Perhaps this card was meant to be "fully unlocked" under water, but at the stock configuration, I wouldn't call it great.


----------



## Imprezzion

Medizinmann said:


> User Imprezzion reported some success with Galax HOF XOC BIOS - but not without problems.
> You lose a DP Port and you can't use Afterburner as usual...
> 
> And as you already use the 380w BIOS with shunt mod – there isn’t IMHO really a point in trying.
> 
> As I said – above a certain point there are really diminishing returns – somewhere beyond 420W or so…and you really need very very good cooling.
> Using XOC BIOS only really makes sense for LN2-OC.
> 
> Best regards,
> Medizinmann


Yes, XOC works fine if you can live without profiles in MSI AB and having to redo your OC manually every boot.

Also, DP/HDMI may or may not work depending on your specific model of card. My Gainward Phoenix GS doesn't work with all ports; only HDMI and 2 of the DPs work. And it won't show a UEFI BIOS screen over DP, only HDMI, so I have to run legacy GOP in the BIOS for it to show on my main monitor.

I can run 2,145 MHz at 1.125 V perfectly stable on XOC. Power draw is about 400-420 W and temps around 48-50°C under AIO water. On a normal BIOS I am limited to 2,085 MHz at 1.093 V, and some games like GTA V at 4K push it into throttling even with the 373 W EVGA FTW3 Ultra BIOS.


----------



## tps3443

Imprezzion said:


> Yes, XOC works fine if you can live without profiles in MSI AB and having to do your OC every boot manually.
> 
> Also, DP/HDMI depending on your specific model of card may or may not work. My Gainward Phoenix GS doesn't work with all ports. Only HDMI and 2 DP's work. And it doesn't wanna show a UEFI BIOS screen over DP. Only HDMI so I have to run Legacy GOP in the BIOS for that to show on my main monitor.
> 
> I can run 2145Mhz on 1.125v perfectly stable on XOC. Power draw is about 400-420w and temps around 48-50c under AIO water. On a normal BIOS I am limited to 2085Mhz on 1.093v and some games like GTA V on 4K let it hit throtttling even with 373w EVGA FTW3 Ultra BIOS.


Yeah, I tested the Galax XOC BIOS. It'll pull some serious juice. Luckily my water loop keeps it very, very cool. I ran Time Spy GT2 at 4K and pulled over 520 watts.

With my 7980XE at 4.8 GHz and my 2080 Ti on the XOC BIOS combined, I am pushing the capacity of my 1,200-watt PSU in Death Stranding, and I am getting power shutdowns!



If I run the normal Galax 380-watt BIOS, I can easily manage 2,115 MHz sustained at 1.050 V.


This thing will walk a 6800 XT in ray-tracing performance, guys!


----------



## tps3443

For anyone running a 2080 Ti water-cooled with the 380-watt Galax BIOS and soldered shunts, I highly recommend locking your clocks at 1.093 V and 2,160 MHz. Use Shift+L in MSI Afterburner. This fixed all sorts of crashing issues I was getting.


The issue with running an offset like +150 MHz is that the GPU will try to run 2,160 MHz or even 2,130 MHz, and then slowly lower the voltage as it heats up, even at only 40-42°C. This is what causes the crashing. You may be running 2,130 MHz in a game 100% stable, and your GPU flickers the voltage down to 1.043 V and crashes your game. 

When you hit Shift+L you literally lock that voltage so it never drops during gameplay. The one downside is that MSI AB then holds 2,160 MHz even on the desktop. So just save the profile, and click profile "1" right before you play a game.

I have been going back and forth between the Galax 380-watt BIOS and the Galax 2,000-watt XOC BIOS, trying to find the best stability with the absolute maximum performance, and I have finally found the best option for my 2080 Ti.


----------



## Imprezzion

It doesn't have to be Shift+L at all. Just make sure your 1.093 V point is 2 clicks (so 2x15 MHz) higher than your next-lowest point. So, if you want to run +150, set a +120 offset, hit apply and watch the curve adjust, then raise ONLY the 1.093 V point to the desired clock, hit apply again, and save that as a profile. You never have to touch Afterburner again and it will always run 1.093 V.

I run a +105 offset with the 1.093 V point 2 clicks higher, for example.


----------



## pewpewlazer

tps3443 said:


> Anyone running a 2080Ti watercooled with the 380 watt Galax bios and running "Soldered Shunts", I highly recommend locking your clocks at 1.093MV and 2,160Mhz. Use "SHIFT+L in MSI AfterBurner. This fixed all sorts of crashing issues I was getting.
> 
> 
> The issues with running an offset like +150Mhz is the GPU will try to run 2,160Mhz or even 2,130mhz and then, the GPU will slowly lower voltage as the GPU heats up even only running 40-42C. And this is what causes the crashing. You may be running 2,130Mhz in a game 100% stable and your GPU flickers the voltage down to 1.043MV and it crashes your game.
> 
> When you hit SHIFT+L you literally LOCK that voltage so it'll never lower during game play. The one downside is that MSI AB locks 2,160Mhz even the desktop. So, just save the profile. And click profile "1" right before you play a game.
> 
> I have been going back and forward with the Galax 380 watt bios, and a Galaxy XOC 2K watt bios trying to find the best stability with the absolute maximum performance. And I have finally found the best option for my 2080Ti.


As long as your 1.093v point is set +15mhz higher than everything else on the curve, you shouldn't need to "L" that point on the curve.

Also, if you leave all the other points alone and only increase 1.093v (or whatever voltage point you're aiming for), the card will consistently perform worse. Why this is, I have no idea. But after shunting my card, I started jacking up only the 1.093v point on the curve, and my scores sucked. After some Google-Fu, I found a post (I think in this thread) saying to set the other points 1mhz below your voltage point of choice. Bottom line is that the offsets at other points on the curve have an effect on SOMETHING that impacts performance...

Just to make sure I'm not totally insane, I tested it again just now in TS Extreme GT1:

1.093v +180, all other points +165 = 49.47 fps (2160mhz actual clock)
1.093v +180, all other points +0 = 48.26 fps (2160mhz actual clock)
1.093v +180, all other points +105 = 48.81 fps (throttled to 2145mhz half way through due to higher water temps)
+165 offset = 49.36 fps (ran 2145mhz @ 1.043v, jumped to 2145mhz @ 1.093v towards the end)

My biggest issue overclocking this card is the damn temperature/load dependent shifting of the v-f curve. I've run 2130mhz or 2145mhz (depending on game) for HOURS without a hitch. But it always ends up crashing because either water temps have dropped just a fraction of a degree enough, or load has temporarily dropped off just long enough, that the card decides it's going to clock up 15mhz at the same exact voltage. And boom, insta-crash.

It seems the only way to get consistent maximum clocks is to keep your peak load temps pinned at 39-40°C. Hopefully with an extra two D5s I can get there...


----------



## tps3443

pewpewlazer said:


> As long as your 1.093v point is set +15mhz higher than everything else on the curve, you shouldn't need to "L" that point on the curve.
> 
> Also, if you leave all the other points alone and only increase 1.093v (or whatever voltage point you're aiming for), the card will consistently perform worse. Why this is, I have no idea. But after shunting my card, I started jacking up only the 1.093v point on the curve, and my scores sucked. After some Google-Fu, I had found a post (I think in this thread) saying to set the other points -1mhz below your voltage point of choice. Bottom line is that the offsets at other points on the curve have an affect on SOMETHING that impacts performance...
> 
> Just to make sure I'm not totally insane, I tested it again just now in TS Extreme GT1:
> 
> 1.093v +180, all other points +165 = 49.47 fps (2160mhz actual clock)
> 1.093v +180, all other points +0 = 48.26 fps (2160mhz actual clock)
> 1.093v +180, all other points +105 = 48.81 fps (throttled to 2145mhz half way through due to higher water temps)
> +165 offset = 49.36 fps (ran 2145mhz @ 1.043v, jumped to 2145mhz @ 1.093v towards the end)
> 
> My biggest issue overclocking this card is the damn temperature/load dependent shifting of the v-f curve. I've run 2130mhz or 2145mhz (depending on game) for HOURS without a hitch. But it always ends up crashing because either water temps have dropped just a fraction of a degree enough, or load has temporarily dropped off just long enough, that the card decides it's going to clock up 15mhz at the same exact voltage. And boom, insta-crash.
> 
> It seems the only way to get consistent maximum clocks is to keep your peak load temps exactly at 39-40*C. Hopefully with an extra 2x D5s I can get there...


So there's a workaround, setting a voltage curve that doesn't reduce performance? Could you elaborate more, please? I'd really appreciate it.

I've always known that setting a curve lowers performance slightly versus running an offset. It is weird, I know, trust me, and I really don't know exactly why it happens either; it is really stupid and frustrating lol. From how der8auer explained it, it is due to the silicon itself literally reacting differently to different frequencies, voltages, or straps, something like that. But from what I've gathered, it makes no sense why locking 1.093 V at 2,160 or 2,175 wouldn't perform just as well as running an offset. I wish I knew..


It's almost like there's no such thing as a perfect overclock with a reference-board A-bin 2080 Ti... The Galax XOC BIOS allows massive power draw and 1.125 V, which is awesome, but it is super unstable. I get power shutdowns for no reason, you can't save a profile in MSI AB, and I don't think my PSU is even the issue. The Galax 380-watt BIOS is only 1.093 V and likes to step the voltage down on its own when you run a standard offset on the core clock, and it's only 380 watts, so soldering the 8 ohms is still required.

So I really wish I had an actual Galax HOF WC 2,000-watt 2080 Ti lol. Maybe that would solve the problem: I could just send 1.093 V or more and 2,145 MHz or more and be happy.


I wish there were better BIOSes out there available for use with reference boards and A-bin TU102 boards.

I am seriously considering that water chiller lol. My current custom loop is sufficient; it's just not good enough to stay under 40°C with such high power draw, plus my power hog of a 7980XE in the mix too. That old beast sits idle on the desktop at 200+ watts.


----------



## pewpewlazer

tps3443 said:


> So there’s a work around by setting a voltage curve that doesn’t reduce performance? Could you elaborate more please? I really appreciate it.


Yes, sort of. The boost algorithm tends to level off the curve from 1.043-1.050v and beyond once the card gets up to load temps/power. And it will always try to run the highest clocks possible at the lowest voltage possible based on the curve.

So on my card, with the 380w GALAX BIOS, a +180 offset will usually result in the curve calling for a clock speed of 2160mhz at every point on the curve from 1.043v through 1.093v. So naturally, the boost algorithm will set the card to run 2160mhz @ 1.043v, because why run 1.093v if the curve says it can run the same clock at 1.043v? Only problem is, 2160mhz @ 1.043v = insta crash.

The "work around" is to set a +165 offset, then open the curve editor and drag up the 1.093v point to +180. So now once you get the card up to load temps, the curve should be calling for 2145mhz from 1.043v through 1.087v. But the 1.093v point will still be 2160mhz since its offset is still +180. So the card will sit at 1.093v since that voltage point corresponds to the highest clock according to the curve.
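The selection rule described above can be sketched as a toy model. The three-point curves below are illustrative, not real Afterburner data; a real curve has a point every 6-12 mV:

```python
def boost_pick(curve):
    # curve maps voltage -> curve clock (MHz). GPU Boost runs the
    # highest clock on the curve, at the lowest voltage point that
    # offers it.
    top = max(curve.values())
    volt = min(v for v, mhz in curve.items() if mhz == top)
    return volt, top

# Flat +180 curve: every point calls for 2160 MHz, so boost pairs
# 2160 MHz with 1.043 V - the unstable combination.
flat = {1.043: 2160, 1.062: 2160, 1.093: 2160}
# Workaround: +165 everywhere, only the 1.093 V point raised to +180.
worked = {1.043: 2145, 1.062: 2145, 1.093: 2160}

print(boost_pick(flat))    # → (1.043, 2160)
print(boost_pick(worked))  # → (1.093, 2160)
```

With the workaround curve, the 2160 MHz peak exists only at 1.093 V, so the algorithm has no cheaper voltage to pair it with.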



tps3443 said:


> It’s almost like there no such thing as a perfect overclock with a reference board “A” Bin 2080Ti... The Galaxy XOC bios allows massive power draw and 1.125MV which is awesome, but it is super unstable. I get power shut downs for no reason, and you can’t save a profile in MSI AB, and it isn’t my PSU at all as even being the issue.


If your computer is literally shutting off, I wouldn't be so quick to dismiss your PSU. When I got my card, still on stock cooler, maxing out the 338w stock power limit would trip the overcurrent protection on my 1000w Seasonic based PSU. AC power draw was only ~500w, the same as with my old GTX 1070 SLI setup. But the 2080 Ti would shut off my computer. So I ended up with a 1600w PSU for my 600w rig. Good times.

These cards can have some crazy high transient current draws, and combined with your power monster of a CPU, it's very possible a 1.125v unlimited power BIOS is more than your PSU can handle.
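On PSU sizing around those transients, here is a very rough back-of-the-envelope helper. The 2x transient factor and 20% headroom are assumptions for illustration, not measured figures:

```python
def psu_recommendation_w(gpu_avg_w, cpu_w, other_w=100,
                         transient_factor=2.0, headroom=0.2):
    # Turing cards can spike well above their average draw for
    # milliseconds at a time; budget the GPU at a multiple of its
    # average, add CPU and the rest of the system, then headroom.
    peak = gpu_avg_w * transient_factor + cpu_w + other_w
    return peak * (1 + headroom)

# A stock-power-limit 2080 Ti (338 W) next to a ~150 W CPU:
print(round(psu_recommendation_w(338, 150)))  # → 1111
```

Which lines up with the anecdote above: a "600 W rig" on paper can still trip OCP on a quality 1000 W unit.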


----------



## J7SC

pewpewlazer said:


> Yes, sort of. The boost algorithm tends to level off the curve from 1.043-1.050v and beyond once the card gets up to load temps/power. And it will always try to run the highest clocks possible at the lowest voltage possible based on the curve. (...)
> 
> These cards can have some crazy high transient current draws, and combined with your power monster of a CPU, it's very possible a 1.125v unlimited power BIOS is more than your PSU can handle.


Your comments on brief current spikes are appreciated. I run 2x 2080 Ti with their own big water loop, which usually keeps them below 40°C, and I never bothered with max-voltage or fixed-voltage MSI AB curves, because with two cards at *380 W each* (*stock* Aorus BIOS), plus an OC'ed Threadripper and various peripherals, the 1300 W PSU is getting quite a workout already. With the short spikes that happen with RTX cards, as you posted, I'd rather not go any higher via base OC, to avoid tripping the PSU's OCP.

My top card typically hits and stays at 1.043 V even well above 2205 (when cold); the second one hits 1.062 V at 2175 (when cold). Those voltages do not change when, for example, benching Port Royal; just the temps, and with them the clocks, do ( https://www.3dmark.com/pr/466677 starts at 2160 for both, along with 2056 VRAM).

Maxing GPU voltage from the start makes no sense to me; rather the opposite. Minimizing temps still has the best overarching results, imo.


----------



## geriatricpollywog

J7SC said:


> Your comments on brief voltage spikes are appreciated. I run 2x 2080 Ti with their own big water loop which usually keeps them below 40 C, and never bothered with max-v or fixed voltage MSI AB curves etc ...Because with two cards at max *380 W each *(*stock *Aorus bios) plus an oc'ed Threadripper and various peripherals, the 1300 W PSU is getting quite a workout already. With short spikes, which happen with RTX as you posted, I rather not go any higher via base oc to avoid PSU OCP.
> 
> My top card typically hits and stays at 1.043V even well above 2205 (when cold), the second one hits 1.062 V at 2175 (when cold). Those voltages do not change when for example benching PortRoyal, just the temps and with it the clocks do ( https://www.3dmark.com/pr/466677 starts at 2160 for both, along with 2056 VRAM).
> 
> Maxing GPUv from the start makes no sense to me, rather the opposite. Minimizing temps still has the best overarching results, imo.
> 
> View attachment 2467329


380w seems high. My KPE pulls 240-300w at load on stock 520w bios and +0 on voltage slider. I can make it pull over 400 watts by flipping both voltage dip switches on the back of the card, but this lowers the core speed due to increased temperature. Have you tried lowering the voltage?


----------



## J7SC

0451 said:


> 380w seems high. My KPE pulls 240-300w at load on stock 520w bios and +0 on voltage slider. I can make it pull over 400 watts by flipping both voltage dip switches on the back of the card, but this lowers the core speed due to increased temperature. Have you tried lowering the voltage?


...Nope, I let them drink their stock allotment, below 'only' 375.7 W per card (Aorus stock XTR WB BIOS), though I have seen 379.9 W before. I just don't touch the voltage slider at all; maybe I should try to undervolt after all...

...My biggest challenge has been lowering the total system power budget from 1150+ W to about 900 W in my current favorite app, MS FS 2020 on SLI-CFR... PL down to 112%, GPU OC down to 2100 MHz, and VRAM down to about 2000 MHz (16,000 'effective') works fine w/o any _appreciable_ loss in visuals/fps.


----------



## Imprezzion

I had a 750 W Focus Plus Gold trip several times with just a single 2080 Ti on the XOC BIOS. It did handle the EVGA FTW3 Ultra BIOS (373 W), but 1.125 V with 420-430 W of actual power draw was too much for it with a 9900K @ 5.1 and loads of fans + RGB lol. Upgraded to a 1000 W Prime Gold (same cables, and it was cheap) and never had issues again, even on XOC.


----------



## tps3443

pewpewlazer said:


> Yes, sort of. The boost algorithm tends to level off the curve from 1.043-1.050v and beyond once the card gets up to load temps/power. And it will always try to run the highest clocks possible at the lowest voltage possible based on the curve.
> 
> So on my card, with the 380w GALAX BIOS, a +180 offset will usually result in the curve calling for a clock speed of 2160mhz at every point on the curve from 1.043v through 1.093v. So naturally, the boost algorithm will set the card to run 2160mhz @ 1.043v, because why run 1.093v if the curve says it can run the same clock at 1.043v? Only problem is, 2160mhz @ 1.043v = insta crash.
> 
> The "work around" is to set a +165 offset, then open the curve editor and drag up the 1.093v point to +180. So now once you get the card up to load temps, the curve should be calling for 2145mhz from 1.043v through 1.087v. But the 1.093v point will still be 2160mhz since it's offset is still +180. So the card will sit at 1.093v since that voltage point corresponds to the highest clock according to the curve.
> 
> 
> 
> If your computer is literally shutting off, I wouldn't be so quick to dismiss your PSU. When I got my card, still on stock cooler, maxing out the 338w stock power limit would trip the overcurrent protection on my 1000w Seasonic based PSU. AC power draw was only ~500w, the same as with my old GTX 1070 SLI setup. But the 2080 Ti would shut off my computer. So I ended up with a 1600w PSU for my 600w rig. Good times.
> 
> These cards can have some crazy high transient current draws, and combined with your power monster of a CPU, it's very possible a 1.125v unlimited power BIOS is more than your PSU can handle.



I fixed the power shutdown issues. I was also running an XOC BIOS on my X299 Dark; I updated that to a standard BIOS and plugged in the supplemental 6-pin PCIe cable near the PCIe slots for extra power. No issues at all. I played Metro for about 2 hours straight pulling 450-500+ watts of GPU power.


Here is a Time Spy run:

I scored 17,098 in Time Spy
Intel Core i9-7980XE Processor, NVIDIA GeForce RTX 2080 Ti x 1, 32768 MB, 64-bit Windows 10
www.3dmark.com


----------



## tps3443

Imprezzion said:


> Yes, XOC works fine if you can live without profiles in MSI AB and having to do your OC every boot manually.
> 
> Also, DP/HDMI depending on your specific model of card may or may not work. My Gainward Phoenix GS doesn't work with all ports. Only HDMI and 2 DP's work. And it doesn't wanna show a UEFI BIOS screen over DP. Only HDMI so I have to run Legacy GOP in the BIOS for that to show on my main monitor.
> 
> I can run 2145Mhz on 1.125v perfectly stable on XOC. Power draw is about 400-420w and temps around 48-50c under AIO water. On a normal BIOS I am limited to 2085Mhz on 1.093v and some games like GTA V on 4K let it hit throtttling even with 373w EVGA FTW3 Ultra BIOS.


I finally resolved it. My X299 Dark had an XOC BIOS too; it performed fantastically but was quite old. I switched to version 1.24, and the issues are totally gone running the 2,000-watt XOC 2080 Ti BIOS now. I played Metro for hours on end at 1.1 V, 2,145 MHz all the way through.


----------



## tps3443

0451 said:


> 380w seems high. My KPE pulls 240-300w at load on stock 520w bios and +0 on voltage slider. I can make it pull over 400 watts by flipping both voltage dip switches on the back of the card, but this lowers the core speed due to increased temperature. Have you tried lowering the voltage?



How cool does your card run?


----------



## tps3443

J7SC said:


> ...nope, I let them drink their stock allotment, below 'only' 375.7 W per card (Aorus stock bios XTR WB), but I have seen 379.9 W before. I just don't touch the voltage slider at all - may be I should try to undervolt after all...
> 
> ...my biggest challenge has been to lower the total system power budget from 1150+ W to about 900 W with my current fav app: MS FS 2020 on SLI-CFR...PL down to 112 %, GPU oc down to 2100 MHz and VRAM down to about 2000 MHz (16,000 'effective) works fine w/o any _appreciable_ loss in visuals / fps
> 
> View attachment 2467334
> 
> 
> View attachment 2467343


I plan to get another 2080 Ti. I really want to run NVLink with two of them, and I will probably run the Galax XOC BIOS on both lol. My current card peaks around 45°C with a steady power draw around 500 watts. 

My system power consumption is pretty high because my 7980XE is a straight guzzler! And now my 2080 Ti is the exact same.


----------



## geriatricpollywog

tps3443 said:


> How cool does your card run?


It’s on the AIO, so not very.


----------



## Imprezzion

An AIO shouldn't be that bad? I mean, I can keep it under 50°C on XOC at 1.125 V with a Kraken X52 in push/pull. It does require like 1,600 RPM on all four fans, but yeah.

At 1.093 V / 373 W it runs like 50-53°C at 900 RPM and 41-43°C at 1,500 RPM.


----------



## tps3443

Imprezzion said:


> AIO shouldn't be that bad? I mean, I can keep it under 50c on XOC 1.125v with a Kraken X52 push pull. It does require like 1600RPM fanspeed on all fours but yeah.
> 
> On 1.093v 373w it runs like 50-53c on 900RPM and 41-43c on 1500RPM.


That’s not too much worse than my full custom loop with (2) EKWB 360 rads and an EKWB Vector 2080Ti acetal block. I run pull-only on both rads right now with XSPC 120mm 1,650RPM fans. I am cooling a 7980XE at 4.8GHz too, though. I pieced this loop together as cheaply as possible on Amazon, and I am very happy with its performance.

Also, I tried the +MHz offset method, then setting a single curve point 15-30MHz higher in MSI Afterburner, and it works amazingly. None of the reduced synthetic scores that running only a curve, or a Shift+L locked frequency, gives.

My daily setup is 17,400 in Time Spy graphics.
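The offset-plus-single-curve-point trick can be sketched in a few lines; a rough model, with the stock curve values invented purely for illustration (not any real card's V/F table):

```python
# Rough model of the MSI Afterburner trick: apply a global +MHz
# offset, then raise only the top voltage point a little higher
# so GPU Boost parks at that voltage. All voltages/clocks below
# are illustrative assumptions, not a real card's curve.

def apply_offset(curve, offset_mhz):
    """Shift every point of the V/F curve by a flat offset."""
    return {v: clk + offset_mhz for v, clk in curve.items()}

def raise_top_point(curve, extra_mhz):
    """Bump only the highest-voltage point so it holds the
    highest clock and Boost settles at that voltage."""
    top_v = max(curve)
    bumped = dict(curve)
    bumped[top_v] += extra_mhz
    return bumped

# Hypothetical stock curve: voltage (V) -> clock (MHz)
stock = {1.043: 1980, 1.068: 1995, 1.093: 2010}

curve = raise_top_point(apply_offset(stock, 150), 15)
target_v = max(curve, key=curve.get)  # voltage holding the peak clock
print(curve)      # {1.043: 2130, 1.068: 2145, 1.093: 2175}
print(target_v)   # 1.093 -> the card sits at this point
```

Because only the top voltage point holds the highest clock, Boost settles there instead of wandering down the curve, which is what keeps scores from dropping.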


----------



## tps3443

This is my overclocked and watercooled RTX2080Ti running against an RTX3080. I matched the settings exactly to the review's.

This really shows what you can get from a 2080Ti! It is amazing. This reflects a 37% uplift over a stock RTX2080Ti.





----------



## Falkentyne

tps3443 said:


> This is my overclocked and watercooled RTX2080Ti running against a RTX3080. I set the settings identical per the review settings.
> 
> This really shows what you can obtain from a 2080Ti! It is amazing. This reflects a 37% uplift over a stock RTX2080Ti
> 
> 
> 


I really have no idea why so many people panic sold their 2080 Ti's when the 3080 was announced. Shunt modded and overclocked, it literally loses no ground, and if you watercool it, it remains a top performer with more framebuffer memory, just with slightly worse raytracing. It just seems easier to put the 2080 Ti into safe mode from a bad shunt mod than it is to put the 3080 in safe mode.


----------



## c0nsistent




----------



## tps3443

Falkentyne said:


> I really have no idea why so many people panic sold their 2080 Ti's when the 3080 was announced. Shunt modded and overclocked, it literally loses no ground, and if you watercool it, it remains a top performer with more framebuffer memory, just with slightly worse raytracing. It just seems easier to put the 2080 Ti into safe mode from a bad shunt mod than it is to put the 3080 in safe mode.


Seriously. I’m shocked how well the 2080Ti does, and they support NVLink. Also, the ray tracing performance is practically the same (with my 2080Ti, anyways). Once you get that 2080Ti up over 2.1GHz sustained, it performs practically identically to an air-cooled RTX3080 FE.

I’ve soldered and desoldered shunts on more than a few 2080Ti’s. I’ve never put one in safe mode, though.


----------



## tps3443

c0nsistent said:


> View attachment 2467745


Awesome Score, that’s good performance. Which 2080Ti do you have?


----------



## c0nsistent

tps3443 said:


> Awesome Score, that’s good performance. Which 2080Ti do you have?


EVGA XC3 Ultra on air @ 100% fan. Memory: 8480MHz, core: ~2070-2100 due to temps on air.

Strix XOC BIOS, so vcore drops down to 1.05.


----------



## J7SC

Falkentyne said:


> I really have no idea why so many people panic sold their 2080 Ti's when the 3080 was announced. Shunt modded and overclocked, it literally loses no ground, and if you watercool it, it remains a top performer with more framebuffer memory, just with slightly worse raytracing. It just seems easier to put the 2080 Ti into safe mode from a bad shunt mod than it is to put the 3080 in safe mode.


Yeah, panic selling is never a good idea; and now there are folks who don't have a decent top-end card, as they cannot get an RTX3K (spoiler etc) after they dumped their 2080 Ti. I got my two 2080 Tis about two years ago (380W stock BIOS, factory full w-block) and they perform as strong as ever... the slower of the two at around 2160. It makes no sense to me to upgrade to a 3080.

I might take a look at 3090s / 7nm TSMC (if that rumour comes true) early in the new year though, ditto for an AMD 6900 XT, which seems impressive from what limited info there is so far (though I wonder about VRAM bandwidth at 4K with that one, and perhaps ray tracing, given 6800 XT results with the same VRAM config). 

Anyway, decent and well-cooled 2080 Tis leave little to be desired at this stage, imo



Spoiler


----------



## c0nsistent

J7SC said:


> Yeah, panic selling is never a good idea; and now there are folks who don't have a decent top-end card as they cannot get a RTX3K (spoiler etc) after they dumped their 2080 Ti. I got my two 2080 Tis about two years ago (380W stock bios, factory full w-block) and they perform as strong as ever...the slower of the two at around 2160. It makes no sense to me to upgrade to a 3080.
> 
> I might take a look at 3090s / 7nm TSMC (if that rumour comes true) though early in the new year, ditto for a AMD 6900 XT which seems impressive via what is limited info so far (though wondering about VRAM bandwidth at 4K with that one, and perhaps ray tracing, given 6800 XT results with the same VRAM config).
> 
> Anyway, decent and well-cooled 2080 Tis leave little to be desired at this stage, imo
> 
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2467746


Yeah, I own a 3080 along with my 2080 Ti, and with both OC'd on air the 3080 is maybe 10% faster at rasterization and 20% at raytracing. I did order some shunts for it, but I'm working on a hot glue method to avoid voiding the warranty on the 3080.


----------



## c0nsistent

Here's my Time Spy on air.


What is a solid BIOS to run on an EVGA XC3 that isn't XOC? Judging by my KillAWatt I think I'm pulling 400W @ 1.05v, but I've heard some of you say the KFA2 BIOS is no good, etc. Also, I'm on a more modern version of the BIOS, so it gets complicated finding a viable BIOS, which is why I stuck the Strix XOC on here.


----------



## J7SC

c0nsistent said:


> Yeah I own a 3080 along with my 2080 Ti and with both OC'd on air the 3080 is maybe 10% faster at rasterization and 20% at raytracing. I did order some shunts however for it but I'm working on a hot glue method to avoid voiding the warranty on the 3080


I've hard-modded GPUs before, but shunt modding a (then still air-cooled?) 3080 would make me a tad nervous. The thing I like about the dual 2080 Ti setup is that they can do 'NVL-SLI-_CFR_' with the right driver, unlike RTX3K. I use CFR for 4K / ultra in MS FS 2020 and some other apps.

Maybe I can convince our accounting to get one of these instead... then shunt mod the 4x A100s after w-cooling them


----------



## tps3443

c0nsistent said:


> View attachment 2467747
> 
> 
> Here's my timespy on air
> 
> 
> What is a solid BIOS to run on an EVGA XC3 that isn't XOC? Judging by my KillAWatt I think I'm pulling 400W @ 1.05v but I've heard some of you say the KFA2 BIOS is no good, etc. Also, I'm on a more modern version of the BIOS, so it gets complicated finding a viable BIOS which is why I stuck the Strix XOC on here.


I use the Galax HOF XOC 2KW BIOS, but the Galax 380 watt is your best bet. I straight-solder stacked shunts on mine and it can pull up to like 545 watts, and it can do 1.093V too, unlike the Strix XOC. Based on that 16,600 graphics score it looks like your card is out of power, mostly due to being air cooled and inefficient. Either way, I'd water cool those things, man. Then you can actually get that performance in games; on air it just isn't sustained performance.

I have gone past 17K Time Spy graphics on air cooling.


Here’s water though.
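For anyone wondering why stacking shunts raises the effective limit: the card infers power from the voltage drop across a known shunt resistance, so halving that resistance makes it under-read by roughly 2x. A minimal sketch, assuming 5 mΩ stock shunts (the exact value varies by board):

```python
# Why stacking shunts raises the effective power limit: the card
# computes power from the voltage drop across a known shunt
# resistance. Soldering an identical shunt on top halves the
# resistance, so the controller under-reads true power by ~2x.
# The 5 mOhm value is an assumption for illustration.

def parallel(r1, r2):
    """Resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

stock_shunt = 0.005                           # ohms (assumed)
stacked = parallel(stock_shunt, stock_shunt)  # 0.0025 ohms

bios_limit_w = 380                   # what the BIOS enforces
scale = stock_shunt / stacked        # 2.0: true watts per reported watt
true_draw_at_limit = bios_limit_w * scale
print(true_draw_at_limit)            # 760.0 W true draw at a "380 W" reading
```

In practice the card rarely reaches the full doubled figure, since other limits (voltage, temperature, workload) kick in first.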


----------



## gfunkernaught

pewpewlazer said:


> The "work around" is to set a +165 offset, then open the curve editor and drag up the 1.093v point to +180. So now once you get the card up to load temps, the curve should be calling for 2145mhz from 1.043v through 1.087v. But the 1.093v point will still be 2160mhz since it's offset is still +180. So the card will sit at 1.093v since that voltage point corresponds to the highest clock according to the curve.


But with those GALAX 380w BIOS, [email protected] would slam the power limit frequently if enough load is put on the gpu, especially using RT cores, so how would you sustain that desired offset? Plus temps? Does your GPU never go above 39c?


----------



## J7SC

gfunkernaught said:


> But with those GALAX 380w BIOS, [email protected] would slam the power limit frequently if enough load is put on the gpu, especially using RT cores, so how would you sustain that desired offset? Plus temps? Does your GPU never go above 39c?


...just ran a few quick tests...and as many of you suggest, apart from temp control (!), getting a higher PL limit via XOC bios or shunt is really the way to go. I don't bother as 2x 380W GPU stock bios plus oc'ed 16C/32T TR leaves just enough headroom on the 1300 W PSU when including the peripherals, and I use this system also for productivity.

The two identical-score runs for Port Royal below on the left for a single card are: One run with stock GPUv (1.043v) and no extra voltage, and another with a 15 MHz higher clock AND +25 on the MSI AB voltage slider (1.068v). Both runs ended up slamming into the PL @ a fraction below 380 W, and again, both with the same score. All runs temp-peaked at 38 C max or lower.

I used 2160 MHz GPU / 2056 VRAM for the base run on the single card as those are the regular settings I use for the dual cards / NVLink (with the dual card score on the right, which incidentally shows how nicely PortRoyal scales). The second single card run was at 2175. I might squeeze a few more MHz for a single PR run, but I doubt that the score would change very much, given that max PL is the real issue. If I upgrade in the new year to something like 3090s, I can start to play around with the 2080 Tis' bios via XOC bios etc in a different build configuration & PSU, but for now, I am more than happy as this setup has been performing flawlessly for two years now.
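The "just enough headroom on the 1300 W PSU" estimate works out roughly like this; the CPU and peripheral figures are assumptions for illustration:

```python
# A rough system power budget along the lines described above:
# two GPUs at their BIOS limit plus an overclocked HEDT CPU and
# peripherals against a 1300 W PSU. All numbers are illustrative.

gpu_limit_w = 380       # per-card BIOS power limit
gpus = 2
cpu_oc_w = 350          # assumed for an overclocked 16C/32T Threadripper
peripherals_w = 100     # fans, pumps, drives, board (assumption)
psu_w = 1300

draw = gpu_limit_w * gpus + cpu_oc_w + peripherals_w
headroom = psu_w - draw
print(draw, headroom)   # 1210 W draw, 90 W headroom -> "just enough"
```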


----------



## Mystic33

Hi i got recently on a second hand market a 2080ti amp extreme. What bios up 450 to 600W range could i flash that is compatible and safe? It has 2-8 pin power conectors


----------



## Imprezzion

As far as power limits on the 380W BIOS at 1.093v / 2100MHz go, I can chime in on that.

My card never really hits the limit, but that is because I don't run the fans off the card but off a fan controller, which saves a good few watts, and I also only run 1080p 144Hz, which is way less power demanding than 4K, for example.

I can "make" it hit power limits by running, for example, GTA V with 4K downsampling or BF V with 150% resolution scale and ray tracing enabled, but that doesn't hit 144FPS so I don't really daily those settings anyway.


----------



## Mystic33

Got this result, is it good on air?








I scored 16 151 in Time Spy


Intel Core i9-9900K Processor, NVIDIA GeForce RTX 2080 Ti x 1, 16384 MB, 64-bit Windows 10




www.3dmark.com


----------



## tps3443

Mystic33 said:


> got this result is it good on air?
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 151 in Time Spy
> 
> 
> Intel Core i9-9900K Processor, NVIDIA GeForce RTX 2080 Ti x 1, 16384 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


That is good on air cooling, and it’s literally identical to what I’ve achieved with my 2080Ti Founders Edition on stock air cooling, all inside a closed case with normal ambient temps. But it is very unrealistic, and that performance doesn’t translate to real-world everyday gaming performance.

I always recommend watercooling these 2080Ti’s. Reason being, that graphics score you worked so hard to achieve on air is like a bare-minimum baseline of what your daily gaming performance would be like on watercooling.

These graphics tests are very short, and it’s easy to maintain a high boost in them to reflect a high score. But when you watercool, you’re able to sustain this performance long term in games.

What are your ambient temps? My watercooled 2080Ti doesn’t even run 25C at idle, like your card does.

I can easily run 2,205MHz through Time Spy but not in games. I look forward to getting a water chiller very soon! My card has a 2,800 watt power limit with up to 1.125V, and it really needs to run cooler to take advantage of this.


----------



## tps3443

J7SC said:


> ...just ran a few quick tests...and as many of you suggest, apart from temp control (!), getting a higher PL limit via XOC bios or shunt is really the way to go. I don't bother as 2x 380W GPU stock bios plus oc'ed 16C/32T TR leaves just enough headroom on the 1300 W PSU when including the peripherals, and I use this system also for productivity.
> 
> The two identical-score runs for Port Royal below on the left for a single card are: One run with stock GPUv (1.043v) and no extra voltage, and another with a 15 MHz higher clock AND +25 on the MSI AB voltage slider (1.068v). Both runs ended up slamming into the PL @ a fraction below 380 W, and again, both with the same score. All runs temp-peaked at 38 C max or lower.
> 
> I used 2160 MHz GPU / 2056 VRAM for the base run on the single card as those are the regular settings I use for the dual cards / NVLink (with the dual card score on the right, which incidentally shows how nicely PortRoyal scales). The second single card run was at 2175. I might squeeze a few more MHz for a single PR run, but I doubt that the score would change very much, given that max PL is the real issue. If I upgrade in the new year to something like 3090s, I can start to play around with the 2080 Tis' bios via XOC bios etc in a different build configuration & PSU, but for now, I am more than happy as this setup has been performing flawlessly for two years now.
> 
> View attachment 2467756


I think you’ve got some good silicon in your 2080Ti’s. What settings do you run in MSI AB? I’d like to mirror your settings for a single card. I’m back on the Galax 380 watt BIOS, and my shunts are soldered too. But I’ve gotta really push, like 2,160, to get a score like yours in Port Royal (which isn’t game stable in everything).

Your setup makes me want (2) 2080Ti’s on water, instead of just my single card.


----------



## J7SC

Mystic33 said:


> got this result is it good on air?
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 16 151 in Time Spy
> 
> 
> Intel Core i9-9900K Processor, NVIDIA GeForce RTX 2080 Ti x 1, 16384 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


Your result is _very good_ 'on air'! And your 'start clock' vs 'average clock' is right in line with an air-cooled card: The first pic below is the same one I have posted a few times over the last year, a summary by Tom's Hardware of the impact of temps on clocks with a 2080 Ti.

The second pic is from one of my own 2x 2080 Ti Port Royal runs with two w-cooled cards on a dedicated loop with resilience against heatsoak; temps stayed low on a double run. Extensive water-cooling helps almost any 2080 Ti a lot.


----------



## J7SC

tps3443 said:


> I think you’ve got some good silicon in your 2080Ti’s. What settings do you run in MSI AB? I’d like to mirror your settings for a single card. I’m back on the Galax 380 watt bios, and my shunts are soldered too. But, I’ve gotta really push like 2,160 to get a score like yours In portroyal. “Which isn’t game stable in everything”
> 
> Your Setup makes me want (2) 2080Ti’s on water, instead of just my Single card.


MSI AB settings are per my post yesterday... usually zero extra voltage, no 'curve', PL slider to max (= about 380 W each), base GPU clock for both cards at 2160, VRAM for both at 2056. A single card in light rendering can hit 2235 MHz per other posts, but for PortRoyal etc I use 2160, 2175 or 2190 for a single card, with no appreciable difference in scores between those clocks due to the max-PL 'party pooper'. With that in mind, cooling is what I concentrated on the most for both cards, via a dedicated GPU loop with 3x RX360 'thick' rads and 2x D5s. I had all the w-cooling pieces from earlier builds anyway, and even still have some left over from older quad-SLI  

In my case, given the productivity as well as fun apps I use which work on NVL / SLI, 2x 2080 Ti made sense, but if you play a lot of different games, RTX 2K SLI can also get frustrating - it all comes down to your use model.


----------



## tps3443

J7SC said:


> MSI AB settings are per post yesterday...usually zero extra voltage, no 'curve', PL slider to max (= about 380 W each), base GPU clock for both cards at 2160, VRAM for both at 2056. Single card in light rendering can hit 2235 MHz / GPU per other posts, but for PortRoyal etc I use 2160, 2175 or 2190 for single card with no appreciable difference between those clocks in scores due to max PL 'party pooper'. With that in mind, cooling is what I concentrated on the most for both cards via a dedicated GPU loop with 3x RX360 'thick' rads and 2x D5s. I had all the w-cooling pieces from earlier builds anyway, even still have some left over from older quad-SLI
> 
> In my case, given the productivity as well as fun apps I use and which work on NVL / SLI, 2x 2080 Ti made sense, but if you play a lot of different games, RTX 2k SLI can also get frustrating - all gets down to your use model.


I think your build is awesome man. Did you have your cards up on eBay a while back? I thought I saw a similar port royal run with the (2) and it was (2) Aorus Waterforces for sale. I almost bought them lol. I want to grab another 2080Ti for nvlink and water cool that one too.


Anyways, I have considered going with more radiators. I hit 40C in Port Royal with a locked voltage of 1.093V. But I don’t think I will see a huge difference compared to my (2) EKWB 360 classic rads. I really want to just go water chiller for 24/7 usage.

Watercooling my 2080Ti has just changed it entirely for sure.


----------



## tps3443

@J7SC Hey, I set 2,160Mhz in Port Royal and here's what I got.

I am gonna try a lower voltage. But I don't think my Silicon is that great.









I scored 10 551 in Port Royal


Intel Core i9-7980XE Processor, NVIDIA GeForce RTX 2080 Ti x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com


----------



## J7SC

tps3443 said:


> I think your build is awesome man. Did you have your cards up on eBay a while back? I thought I saw a similar port royal run with the (2) and it was (2) Aorus Waterforces for sale. I almost bought them lol. I want to grab another 2080Ti for nvlink and water cool that one too.
> 
> 
> Anyways, I have considered going with more radiators. But, I don’t think I will see a huge difference compared to my (2) 360 ekwb classic rads. I really want to just go water chiller for 24/7 usage.
> 
> Watercooling my 2080Ti has just changed it entirely. I’m seeing like up to 37% gains vs stock performance numbers on a stock air cooled 2080Ti in gaming.


Nope, the cards have never been for sale - or even been removed / worked on since I first got them back in Dec '18... per the pic below, those 'white GPU cooling' tubes are actually hard copper with custom-made fittings, and I'd rather not take it all apart again unless prepping for a completely new build 

As mentioned, I do not even use custom v-curves or a custom BIOS - not because I doubt that they could help an individual card, but_ because after the GPU PL bios limit x2, the overall PSU limit is waiting_... For me, that is somewhat ironic, as I used to do a fair amount of sub-zero and constantly hard / soft - mod things, with two or more PSUs. But this is also a work / productivity system to a large extent, thus the TR CPU... the whole thing was built to 'set and forget'  With that in mind, that this system has stayed on the 3DM HoF for PR continuously since early '19 even surprised me...

...truth be told, overall, the w-cooling system helps a lot in that it avoids the typical 50 MHz +- clock loss for a 2080 Ti when things heat up. While these two specific GPUs are genuinely good silicon, the _cooling advantage would apply to any card, imo_, no matter how fast or slow out of the box. The earlier chart I posted from THW speaks to that, never mind my own posted test-temps. Whatever silicon and bios you're dealt, NVidia Boost is such that low temps get richly rewarded. By the time you shunt-mod or add a XOC bios (more watts = more heat), that is true even more so.
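That "~50 MHz clock loss when things heat up" can be pictured as GPU Boost stepping the clock down in ~15 MHz bins as the core crosses temperature thresholds. A toy model; the threshold temperatures below are assumptions, only the bin size reflects commonly observed Turing behaviour:

```python
# Toy model of GPU Boost thermal bins: the clock steps down in
# ~15 MHz increments as core temperature crosses thresholds.
# The threshold temperatures here are assumed for illustration.

BIN_MHZ = 15
# (temp threshold in C, bins dropped once exceeded) -- assumed values
THRESHOLDS = [(38, 1), (45, 2), (52, 3), (58, 4)]

def boost_clock(cold_clock_mhz, temp_c):
    """Clock after thermal bin drops, given the cold setpoint."""
    bins = 0
    for limit, dropped in THRESHOLDS:
        if temp_c > limit:
            bins = dropped
    return cold_clock_mhz - bins * BIN_MHZ

print(boost_clock(2160, 35))   # 2160 -> water-cooled, no loss
print(boost_clock(2160, 65))   # 2100 -> air-cooled, ~60 MHz lost
```

This is why keeping the card under roughly 40C lets it hold its cold setpoint indefinitely, while an air cooler bleeds several bins under sustained load.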














tps3443 said:


> @J7SC Hey, I set 2,160Mhz in Port Royal and here's what I got.
> 
> I am gonna try a lower voltage. But I don't think my Silicon is that great.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 10 551 in Port Royal
> 
> 
> Intel Core i9-7980XE Processor, NVIDIA GeForce RTX 2080 Ti x 1, 32768 MB, 64-bit Windows 10}
> 
> 
> 
> 
> www.3dmark.com


GRRREAT score! Sometimes less is more re. clocks and voltage. With NVidia Boost, there are all kinds of 'peaks and valleys' in scores that are, frustratingly, not the result of linear changes. On my cards, that is particularly true of VRAM. I rarely run just one card, but might get back to it next week just for the fun... after all, re. ambient temps, it is December in Canada


----------



## tps3443

J7SC said:


> Nope, the cards have never been for sale - or even been removed / worked on since I first got them back in Dec '18... per pic below, those 'white GPU cooling' tubes are actually hard copper with custom-made fittings and I rather not take it all apart again unless prepping for a completely new build
> 
> As mentioned, I do not even use custom v-curves or a custom BIOS - not because I doubt that they could help an individual card, but_ because after the GPU PL bios limit x2, the overall PSU limit is waiting_... For me, that is somewhat ironic, as I used to do a fair amount of sub-zero and constantly hard / soft - mod things, with two or more PSUs. But this is also a work / productivity system to a large extent, thus the TR CPU... the whole thing was built to 'set and forget'  With that in mind, that this system has stayed on the 3DM HoF for PR continuously since early '19 even surprised me...
> 
> ...truth be told, overall, the w-cooling system helps a lot in that it avoids the typical 50 MHz +- clock loss for a 2080 Ti when things heat up. While these two specific GPUs are genuinely good silicon, the _cooling advantage would apply to any card, imo_, no matter how fast or slow out of the box. The earlier chart I posted from THW speaks to that, never mind my own posted test-temps. Whatever silicon and bios you're dealt, NVidia Boost is such that low temps get richly rewarded. By the time you shunt-mod or add a XOC bios (more watts = more heat), that is true even more so.
> 
> 
> View attachment 2467914
> 
> 
> 
> 
> 
> GRRREAT score ! Sometimes less is more re. clocks and voltage. With NVidia Boost, there are all kinds 'of peaks and valleys' in scores that are, frustratingly, not a result of linear changes. On my cards, that is particularly true of VRAM. I rarely run just one card, but might get back to it next week just for the fun...after all, re. ambient temps, it is December in Canada


I can’t run a +150 or +165 offset and just leave it. Reason being, I could be playing a game at 2,145MHz for 45 minutes and momentarily the voltage drops to like 1.050V or 1.063V, and then boom, "SYSTEM HANG". So I have to run 1.093V too. Anyways, to maintain stability I set a +150 offset, and then bring the 1.093V point up 15-30MHz and click apply. This maintains performance and it doesn’t crash in games. I still want to get my temperatures lower. My front 360mm rad intakes cold air, and the top rad exhausts hot air out the top of my case. Should I maybe set the top 360 rad to intake too, so both rads get the coolest air flowing through them?


----------



## gfunkernaught

Here's my Port Royal score from a while ago. Used the Kingpin XOC BIOS, and my PC was set up in the garage in 4C ambient temperature. [email protected] Highest load GPU temp I saw was 19C. When I tried to play a game, though: instant crash.



https://www.3dmark.com/pr/172411


----------



## Imprezzion

Best I ever got in Fire Strike was a 16880 graphics score. Seems kinda low. That was with a 5.1GHz 9900K as well and HOF XOC @ 2145/8000 (actual load clocks when warmed up, ~42C; cold setpoint 2175).

I'll update my drivers tomorrow to the 456.98 hotfix, as according to reddit the newer ones suck performance-wise. I'm having some weird stutters in Division 2, so a full DDU update might be needed.

I'll run it after with the current EVGA FTW3 Ultra BIOS @ 1.093v 2085/7900, which are my "daily" settings. CPU is now a 5.1GHz 10900KF. 

What kind of graphics score should I be getting on those clocks? 16600 ish?


----------



## gfunkernaught

I'm seeing a lot of testing with Time Spy and Fire Strike here. If you're using a 2080 Ti, you really should be benching Time Spy Extreme at least; anything less doesn't push the GPU hard enough to really see the fruits of your overclock and cooling-system efforts. Port Royal is good too. Also try the Bright Memory benchmark, great for testing stability as well.


----------



## gfunkernaught

So in terms of everyday gaming, the GALAX 380W BIOS really can't be pushed beyond the 2085-2100MHz range, can it? It just seems that anything higher than that will slam the PL often.
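A back-of-envelope check on why ~2085-2100 MHz is about what a 380 W limit holds: dynamic power scales roughly with frequency × voltage², so a modest clock-and-voltage bump costs a disproportionate number of watts. The baseline figures below are assumptions for illustration:

```python
# Back-of-envelope: dynamic power scales roughly with
# frequency * voltage^2, so a small clock/voltage bump costs
# a disproportionate number of watts. Baseline numbers are
# assumptions for illustration only.

def scaled_power(p0_w, f0, v0, f1, v1):
    """Estimate power at (f1, v1) from a known point (f0, v0)."""
    return p0_w * (f1 / f0) * (v1 / v0) ** 2

# Assume ~380 W holds 2085 MHz at 1.093 V on a heavy load.
p = scaled_power(380, 2085, 1.093, 2145, 1.125)
print(round(p))   # ~414 W -> past what a 380 W BIOS will allow
```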


----------



## J7SC

gfunkernaught said:


> So in terms of every day gaming, the GALAX 380w bios really can't be pushed beyond the 2085-2100mhz range can it? Just seems that anything higher than that will slam the PL often.


...Have you tried the Aorus XTR WB BIOS? It's what I use natively on my cards, and while the BIOS states '366W' max, it reaches 380W. Way back in this thread, folks with other cards got it to work well (subject to their specific HDMI / DP port and RGB arrangements), though I'm not sure it'll make a dramatic difference over the Galax 380W, if any. I game at 2145 with the two-card setup at full PL, unless it is MS FS 2020... It works fine there at those speeds as well, with temps remaining low, but the whole system will draw 1100++ W for hours on end in my fav sim, so I down-clock to 2100MHz and PL to around 330W per card.


----------



## geriatricpollywog

Yeah, it makes no sense why people panic sold the 2080ti for $500 when it’s better than the 3070 in every measurable way. Even power consumption, in my case.









I scored 17 228 in Time Spy


Intel Core i7-10700K Processor, NVIDIA GeForce RTX 2080 Ti x 1, 32768 MB, 64-bit Windows 10




www.3dmark.com







https://www.3dmark.com/pr/291406


----------



## Imprezzion

Oh yeah, I did mean Time Spy, not Fire Strike... running it now on a fresh driver (456.98 hotfix). I do like to run normal TS, not Extreme, but that's more because I don't have a 4K monitor, so I don't wanna know 4K performance; I wanna know and compare 1080p performance.

And yes, on the EVGA FTW3 Ultra 373W BIOS it does hit the limiter in Time Spy quite heavily. I've seen it drop as low as 1.068v lol.

OK, runs done, scores horrible: 16219 graphics score, 16169 combined. It throttled so hard on this 373W BIOS that it dropped to barely 2000MHz average in test 2 lol.
Temps maxed at 53C with my rad fans at 900-1000RPM.

I have to run a different BIOS if I wanna have all the performance. I mean, in most games it doesn't throttle (as much), but even a simple bench like Time Spy brings it to its knees.

Problem is, I really cannot be bothered to manually do my OC every boot with the XOC BIOS. I did that for months and it annoyed me so much I went back to a working stock one.

Let's see if the 380W KFA2 one makes a difference. Probably not. Might just have to end up shunting it lol. It doesn't have warranty anyway due to modded heatsinks on the VRM / VRAM and an AIO mounted to it, and it's easy enough to replace these days if it does blow.

EDIT2: No real difference, 16390 this time. Less throttling (none really? it kinda reads like 20% lower) but still no difference in scores.

I went to HOF XOC again and cranked the clocks to what I would usually run, 2175 cold / 2145 warmed up and 8000 memory, and that gave a way, way better result.
Load temps: CPU around 67-68C all cores, GPU around 51-52C. Both of them running very quiet at reasonable fan speeds, around 800RPM for the CPU (4x 140mm on a 280 rad) and 1400RPM for the GPU (4x 120mm on a 240 rad). Pumps both maxed tho.


----------



## tps3443

Imprezzion said:


> Oh yeah I did mean time spy not fire strike.. running it now on a fresh driver (456.98 hotfix). I do like to run the normal TS not Extreme but that's more because I don't have a 4k monitor so I don't wanna know 4k performance, I wanna know and compare 1080p performance.
> 
> And yes, on EVGA FTW3 Ultra 373w BIOS it does hit the limiter in Time Spy quite heavily. I've seen it drop as low as 1.068v lol.
> 
> Ok, runs done scores horrible. 16219 graphics score 16169 combined. It throttled so hard on this 373w BIOS that it dropped to barely 2000Mhz average in test 2 lol.
> Temps maxed at 53c with my rad fans at 900-1000RPM.
> 
> I have to run a different BIOS if i wanna have all the performance. I mean, in most games it doesn't throttle (as much) but even a simple bench like time spy brings it to its knees.
> 
> Problem is, i really cannot be bothered to manually do my OC every boot with XOC BIOS. I did that for months and it annoyed me so much i went back to a working stock one.
> 
> Let's see if the 380w KFA2 one makes a difference. Probably not. Might just have to end up shunting it lol. It doesn't have warranty anyway due to modded heatsinks on VRM / VRAM and a AIO mounted to it and is easy enough to replace these days if it does blow..
> 
> EDIT2: No difference, same score. Less throttling (none really? it kinda reads like 20% lower) but still no difference in scores. 16390 this time.
> 
> I went to HOF XOC again and cranked the clocks to what i would usually run. 2175 cold 2145 warmed up and 8000 memory and that gave a way way better result.
> Load temps CPU around 67-68c all cores, load temp GPU around 51-52c. Both of them running very quiet and reasonable fanspeeds around 800RPM for the CPU (4x 140mm on a 280 rad) and 1400RPM for the GPU (4x 120mm on a 240 rad). Pumps both maxed tho.
> 
> View attachment 2468015


Are you using a positive offset and then bringing up one voltage point by 15 or 30MHz? I use like 60 or 75, and then I’ll bring the 1.10V or 1.118V voltage point up to 2,175MHz. 

That Galax XOC 2K BIOS loves high voltage. I have managed to get it stable in games and anything else too. It is fasttt. I can pull off a 17,600 in Time Spy graphics. My card has both shunts soldered on it currently. 

I go back and forth between the Galax XOC 2KW BIOS and the Galax 380 watt BIOS. I probably flash 20 times a week between the two lol.


----------



## tps3443

0451 said:


> Yeah, makes no sense why people panic sold the 2080ti for $500 when it’s better than the 3070 in every measurable way. Even power consumption in my case.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 17 228 in Time Spy
> 
> 
> Intel Core i7-10700K Processor, NVIDIA GeForce RTX 2080 Ti x 1, 32768 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> 
> 
> https://www.3dmark.com/pr/291406


I am totally with you, man, whoa! Nice scores. What kind of setup are you running? My temps are easily 10C higher than yours in those exact same tests.

Water chiller, or are you outside in the cold? Lol

I want to run 2,220MHz lol.


----------



## jura11

RTX 2080Tis are still okay if you're using your GPUs mainly for gaming at 1440p; at 4K or ultrawide an RTX 3090 or 3080 is the better choice.

The RTX 3090 is held back by its power limit; with something like a 480-520W power limit they will fly 

In rendering, where I use my GPUs, the RTX 3070 is faster than the RTX 2080Ti, and the RTX 3080 or 3090 will just outperform that GPU by a good margin 

It's been good fun to own the RTX 2080Ti, and I'm sure I will not be selling any of them hahaha 

Thanks, Jura


----------



## Imprezzion

tps3443 said:


> Are you using a positive offset and then bringing one voltage point up by 15 or 30 MHz? I use something like +60 or +75, and then bring the 1.10 V or 1.118 V voltage point up to 2,175 MHz.
> 
> That Galax XOC 2 kW BIOS loves high voltage. I have managed to get it stable in games and everything else too. It is fast. I can pull off a 17,600 in Time Spy graphics. My card currently has both shunts soldered.
> 
> I go back and forth between the Galax XOC 2 kW BIOS and the Galax 380 W BIOS. I probably flash between the two 20 times a week, lol.


Yeah I do. +135 offset with another +15 MHz on the curve at 1.125 V.
My card is just a bad clocker. It can do benches at 2190 with 1.125 V but artifacts pretty badly after about a minute, and games max out at 2160 for less demanding stuff and 2145 for the most demanding stuff like Division 2 / Warzone / Cold War.

I know why my score was so low, lol. I had a whole bunch of tweaks maxed in the NVIDIA 3D settings, like AF, transparency AA, ambient occlusion, texture filtering and such. Put them all back on default and now I get much better and more consistent scores.


----------



## J7SC

Imprezzion said:


> Oh yeah I did mean time spy not fire strike.. running it now on a fresh driver (456.98 hotfix). I do like to run the normal TS not Extreme but that's more because I don't have a 4k monitor so I don't wanna know 4k performance, I wanna know and compare 1080p performance.
> 
> And yes, on EVGA FTW3 Ultra 373w BIOS it does hit the limiter in Time Spy quite heavily. I've seen it drop as low as 1.068v lol. (...)


How is that 456.98 hotfix driver performance- and compatibility-wise compared to the previous one?


----------



## geriatricpollywog

tps3443 said:


> I am totally with you, man. Whoa, nice scores! What kind of setup are you running? My temps are easily 10C higher than yours in those exact same tests.
> 
> Water chiller, or are you outside in the cold? Lol
> 
> I want to run 2,220 MHz, lol.


I cheated a little. The AIO was in an ice bucket. I don’t have the card hooked up to my loop. Normal scores in PR on daily OC range from 10450 to 10520. Clock speeds are 2115/2130/2145 Summer/Fall/Winter.


----------



## J7SC

0451 said:


> I cheated a little. The AIO was in an ice bucket. I don’t have the card hooked up to my loop. Normal scores in PR on daily OC range from 10450 to 10520. Clock speeds are 2115/2130/2145 Summer/Fall/Winter.


What is the _regular bios_ max wattage on your 2080 Ti Kingpin, or is it 520 W+ no matter what the BIOS setting?


----------



## Imprezzion

J7SC said:


> How is that 456.98 hotfix driver performance- and compatibility-wise compared to the previous one?


So far pretty good. Very stable, good frametimes, no weirdness. Only tried 3 games so far (Division 2, Cold War and GTA V) and all 3 run great.

I mean, since the first 450.99 BETA for the Windows 2004 update there has barely been any difference between drivers for RTX 2xxx. The only thing I read is that the newest branch, 457.xx, performs pretty badly overall according to Reddit benchmarks. There are also no fixes in 457.xx that I require, so I'm staying on 456.98.


----------



## J7SC

Imprezzion said:


> So far pretty good. Very stable, good frametimes, no weirdness. Only tried 3 games so far (Division 2, Cold War and GTA V) and all 3 run great.
> 
> I mean, since the first 450.99 BETA for the Windows 2004 update there has barely been any difference between drivers for RTX 2xxx. The only thing I read is that the newest branch, 457.xx, performs pretty badly overall according to Reddit benchmarks. There are also no fixes in 457.xx that I require, so I'm staying on 456.98.


Thanks! I use 456.71 for recent benching, but usually stay on an older custom one (June) for NVL/SLI-_CFR_, along with the relevant NVInspector settings for CFR. Trying out a new driver takes a few minor extra steps with CFR, but it is worth it, e.g. in 4K/ultra MS FS 2020 imo:



Spoiler


----------



## geriatricpollywog

J7SC said:


> What is the _regular bios_ max wattage on your 2080 Ti Kingpin, or is it 520 W+ no matter what the BIOS setting?


It’s always 520 W according to TiN from EVGA. Mine pulls 240-300 W on daily OC, so the power limit doesn’t matter.


----------



## J7SC

0451 said:


> It’s always 520 W according to TiN from EVGA. Mine pulls 240-300 W on daily OC, so the power limit doesn’t matter.


Thanks


----------



## tps3443

jura11 said:


> The RTX 2080 Ti is still okay if you use your GPU mainly for gaming at 1440p; at 4K or ultrawide, an RTX 3090 or 3080 is the better choice.
> 
> The RTX 3090 is held back by its power limit; with something like a 480-520 W limit, they would fly.
> 
> In rendering, where I use my GPUs, the RTX 3070 is faster than the RTX 2080 Ti, and the RTX 3080 or 3090 will just outperform that GPU by a good margin.
> 
> It's been good fun owning the RTX 2080 Ti, and I'm sure I won't be selling any of them, hahaha.
> 
> Thanks, Jura



Man, in all honesty, I am crushing an RTX 3070 in pure gaming performance. Plus, they only have 8 GB of VRAM. My Metro benchmark performance is 37% faster than a stock air-cooled 2080 Ti. I ran the exact same settings reviewers use. And I am only a couple of FPS behind an RTX 3080.

That stock 2080 Ti is managing 76 FPS, and my card is maintaining a solid 104 FPS. You cannot deny this performance. It is beastly fast! And with 11 GB of VRAM, it is an insanely good value.
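As a sanity check on that percentage, the arithmetic works out (a trivial sketch; the 76 and 104 FPS figures are the ones quoted in this post):

```python
# Percent uplift of the watercooled card's 104 FPS over the stock
# air-cooled 2080 Ti's 76 FPS, as quoted above.
stock_fps = 76
oc_fps = 104

uplift_pct = (oc_fps / stock_fps - 1) * 100
print(f"{uplift_pct:.1f}% faster")  # → 36.8% faster, i.e. the ~37% quoted
```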


As far as it still being "just alright", you must be looking at the wrong information; this test puts my performance dead smack at stock RTX 3080 FE level. And in purely ray-tracing titles, I am easily outpacing the AMD 6800 XT or 6900 XT. So in 2020, the 2080 Ti is a very viable option that is just as competitive as anything else on the market today.

I could have purchased whatever GPU, in all honesty, if I had really put the time in to hunt one down. But I bought a water block instead, and this water block has put me into a new league altogether.

Now, if an RTX 3070 works for you and is fast enough for your needs, then that's great. It is a great GPU. But the 2080 Ti was probably the most watered-down GPU in history, lol. Nvidia had no competition at all back when it was released. So when you actually open them up, they step into another performance tier altogether.


----------



## jura11

tps3443 said:


> Man, in all honesty, I am crushing an RTX 3070, plus they only have 8 GB of VRAM. My Metro benchmark performance is 37% faster than a stock air-cooled 2080 Ti. I ran the exact same settings reviewers use. And I am only a couple of FPS behind an RTX 3080.
> 
> That stock 2080 Ti is managing 76 FPS, and my card is maintaining a solid 104 FPS.
> 
> 
> As far as it still being alright, that puts my performance dead smack at RTX 3080 levels. And in ray-tracing titles, I am easily outpacing the AMD 6800 XT or 6900 XT. So in 2020, the 2080 Ti is a viable option that is just as competitive as anything else available.


Yes, I agree with you there; where the RTX 3070, 3080 and 3090 show their muscles is in RT and rendering.

If you enable ray tracing, or when you use OptiX, the 3070 is quite a bit faster than the 2080 Ti.

I'm keeping my RTX 2080 Tis for sure; I still use them for rendering and I'm quite happy with them.

Hope this helps 

Thanks, Jura


----------



## tps3443

jura11 said:


> Yes, I agree with you there; where the RTX 3070, 3080 and 3090 show their muscles is in RT and rendering.
> 
> If you enable ray tracing, or when you use OptiX, the 3070 is quite a bit faster than the 2080 Ti.
> 
> I'm keeping my RTX 2080 Tis for sure; I still use them for rendering and I'm quite happy with them.
> 
> Hope this helps
> 
> Thanks, Jura


If you are rendering, it seems like (2) overclocked 2080 Tis wouldn't be a bad option. Doesn't NVLink work great outside of gaming?


----------



## tps3443

0451 said:


> I cheated a little. The AIO was in an ice bucket. I don’t have the card hooked up to my loop. Normal scores in PR on daily OC range from 10450 to 10520. Clock speeds are 2115/2130/2145 Summer/Fall/Winter.


You could buy one of those cheap Amazon water chillers and loop it directly into your Kingpin 2080 Ti. Then you'd have 24/7 numbers like that.

I have been researching water chillers a lot lately. And I really want one.


----------



## geriatricpollywog

tps3443 said:


> You could buy one of those cheap Amazon water chillers and loop it directly into your Kingpin 2080 Ti. Then you'd have 24/7 numbers like that.
> 
> I have been researching water chillers a lot lately. And I really want one.


I plan on selling the 2080 Ti unmolested to recoup the cost of my 3090 KPE. Plus, I’m OK with 270 watts average at 2145 MHz vs. 500 watts needed for another 120 MHz.


----------



## jura11

tps3443 said:


> If you are rendering, it seems like (2) overclocked 2080 Tis wouldn't be a bad option. Doesn't NVLink work great outside of gaming?


In rendering, yes, NVLink works beautifully, although I don't use it because I have two different RTX 2080 Tis (a Zotac RTX 2080 Ti AMP, which is a reference PCB, and an Asus RTX 2080 Ti Strix, which sits higher, so the NVLink bridge wouldn't fit; I wish they made a flexible bridge).

In rendering with OptiX or CUDA, sadly, the RTX 2080 Tis are unstable with an OC; I can run them at 2055-2085 MHz, but above that they will crash. With the RTX 3090 I can easily run that GPU at 2115-2130 MHz in rendering.

Hope this helps 

Thanks, Jura


----------



## jura11

tps3443 said:


> You could buy one of those cheap Amazon water chillers and loop it directly into your Kingpin 2080 Ti. Then you'd have 24/7 numbers like that.
> 
> I have been researching water chillers a lot lately. And I really want one.


I will probably be getting a chiller next year too, but let's see.

Thanks, Jura


----------



## J7SC

My national market (online + physical stores) still hasn't got Ampere 3K or even AMD RX 6K in stock, so it's all a bit academic for me. But around February '21, I'm going to see what's what re. new GPU models and availability for a potential new build. The 2x 2080 Tis, however, are still really strong performers, so no rush.

There is the VRAM 'question' that is bugging me, though, with the latest-gen GPUs - for example, the 10 GB for the 3080. Granted, it is GDDR6X, which is faster than regular GDDR6, but still... there seems to be not very much headroom for 4K ultra settings. On the AMD RX 6K front, their VRAM bandwidth limitation seems to kick in with some 4K titles, Infinity Cache, SAM or not.

...have a look at the graphs below - MS FS 2020, 4K / ultra '+' etc. This was one flight session, but over varying terrain, including low flights over large metro areas, which push VRAM usage up. The MSI AB numbers are more likely 'memory allocation' rather than actual memory usage, but still - it twice climbed beyond 10 GB during the same session. So it is the change, rather than just the level, which gives me some thought. My next card(s) should have at least 16 GB of VRAM...


----------



## geriatricpollywog

J7SC said:


> My national market (online + physical stores) still hasn't got Ampere 3K or even AMD RX 6K in stock, so it's all a bit academic for me. But around February '21, I'm going to see what's what re. new GPU models and availability for a potential new build. The 2x 2080 Tis, however, are still really strong performers, so no rush.
> 
> There is the VRAM 'question' that is bugging me, though, with the latest-gen GPUs - for example, the 10 GB for the 3080. Granted, it is GDDR6X, which is faster than regular GDDR6, but still... there seems to be not very much headroom for 4K ultra settings. On the AMD RX 6K front, their VRAM bandwidth limitation seems to kick in with some 4K titles, Infinity Cache, SAM or not.
> 
> ...have a look at the graphs below - MS FS 2020, 4K / ultra '+' etc. This was one flight session, but over varying terrain, including low flights over large metro areas, which push VRAM usage up. The MSI AB numbers are more likely 'memory allocation' rather than actual memory usage, but still - it twice climbed beyond 10 GB during the same session. So it is the change, rather than just the level, which gives me some thought. My next card(s) should have at least 16 GB of VRAM...
> 
> View attachment 2468045


What range of frame rates are you getting in FS2020 with 2 x 2080 Ti? Rural and city, clear skies and stormy, live traffic on and off?

Also, I may have asked this already, but does NVLink allow you to utilize 22 GB of VRAM across both cards? It’s too bad the 6800 XT is the only card with a sensible amount of VRAM. 10 GB is borderline, 24 GB is ridiculous overkill, and 16 GB is the Goldilocks zone.


----------



## J7SC

0451 said:


> What range of frame rates are you getting in FS2020 with 2 x 2080 Ti? Rural and city, clear skies and stormy, live traffic on and off?
> 
> Also, I may have asked this already, but does NVLink allow you to utilize 22 GB of VRAM across both cards? It’s too bad the 6800 XT is the only card with a sensible amount of VRAM. 10 GB is borderline, 24 GB is ridiculous overkill, and 16 GB is the Goldilocks zone.


I always play MS FS 2020 with 'live traffic' and 'live weather'...and this being the general Vancouver/BC area, that means clouds... Typically, lowest fps is high 40s (see caveat below), and when flying high with only scattered clouds, I have seen 80+ fps a few times at 4k / ultra. The caveat is that a first flight over new terrain is usually slower than a repeat flight as the latter at least partially draws on rolling cache (mine is set to 200 GB) instead of purely network servers.

The first pic below had a slight OC on, though these days I've dialed it back to 2100 / PL with around 330 W +- per card to get overall system power consumption down. The second pic is via MS FS 2020 'Dev mode'... the key is to get the MS FS 2020 setup to the point that 'limited by' rapidly switches between GPUs and CPU. BTW, dual 2080 Ti cards only work well in CFR with MS FS 2020... regular (AFR) SLI doesn't seem to be doing anything... and no, VRAM is not cumulative, so 11 GB instead of 22 GB. I have seen some reports whereby folks could get to cumulative VRAM with dual 2080 Tis, but it requires a specifically written app and also a 3rd card for display.










----------



## tps3443

It would be so cool if MSI would create an external USB device with dials so we could control the MSI AB software externally, in game and in real time.

For example, turn a dial to crank up the memory speed, another dial for GPU boost, a dial for voltage control.

That would be awesome! I’m the type of person that tunes maximum performance out of my GPUs and CPUs, and system RAM too, for running games and applications.


----------



## Imprezzion

Or a phone app that can do that would be fine with me as well.

I got myself a nice little DirectX crash after 2 hours of GTA V racing at 2160/8000. No matter what I do, my VRAM just doesn't wanna do 8000. 7900 is fine, but 8000 always crashes in certain games after a long play session somehow.


----------



## gtz

So I sold my 2080 Ti since I managed to snag a 6800 XT at MSRP. I had intentions of slapping a water block on it (which I bought, then ended up selling too as a combo item with the 2080 Ti), but alas, I snagged a 6800 XT.

Here are my thoughts on the 6800 XT vs the 2080 Ti (and the 3070 and 3080, for that matter). I will be comparing it against my fourth 2080 Ti, which was water-cooled, both overclocked to their max stable.

In raw performance, if you don't care about ray tracing and DLSS (more on this later), the 6800 XT wins, and can even get near the 3090 in some scenarios (notice I said some, like Shadow of the Tomb Raider). This is a feat; you can also tell by the Time Spy graphics score of 19,500 I can get. Now, this is where it shines and only shines: raw performance. If that is all you care about, then it is a no-brainer.

Ray-tracing performance is where the card is average. Notice I said average; I did not slam the card, but in most instances it loses out to the OC'd 2080 Ti (and by extension the 3070 and 3080). In games that are optimized (don't know if that is the right word) it performs decently, again using Shadow of the Tomb Raider. It still loses against the 2080 Ti (remember, this is an overclocked 2080 Ti; at normal clocks the 6800 XT would win, but again, I am running OC vs OC), but by a small margin. Control, however, is a popular game, and there the 6800 XT falls flat on its face. I looked at charts, and it performs roughly in the ballpark of a 2070S/2080, even slightly slower than the newly released 3060 Ti. I believe in a perfect world the 6800 XT should perform roughly similar to an OC'd 2080 Ti or 3070. This should be apparent from the Port Royal score of 9800 vs my old overclocked 2080 Ti's 10300.

DLSS: it has nothing. Hopefully FidelityFX will be comparable; until then, no such feature exists.

Final verdict: the 2080 Ti is still a beast. Do I regret my choice? Not one bit, and I am impressed at the level of performance that AMD was able to bring. I am especially happy with having a different card to play with. With all the above said, I am probably not going to pursue a 6900 XT, seeing as it is the exact same card but with 8 more CUs activated. The 3080 was never interesting to me because of the 10 GB VRAM; shoot, if it included 12 I would have been happy. If the 3090 ever drops in MSRP to around 1000-1200, or a 3080 Ti appears with 16-20 GB VRAM, I will try to hunt those down.


----------



## J7SC

Imprezzion said:


> Or a phone app that can do that would be fine with me as well.
> 
> I got myself a nice little DirectX crash after 2 hours of GTA V racing at 2160/8000. No matter what I do, my VRAM just doesn't wanna do 8000. 7900 is fine, but 8000 always crashes in certain games after a long play session somehow.


...there is such a thing as PCB heat-soak, which occurs after a card has been running full-tilt for a longer period. It also depends on your case and its air-flow volume and direction. I typically add a 120mm fan 'on the side' of the GPU(s), which helps the PCB as well as the back-plate.


----------



## tps3443

Imprezzion said:


> Or a phone app that can do that would be fine with me as well.
> 
> I got myself a nice little DirectX crash after 2 hours of GTA V racing at 2160/8000. No matter what I do, my VRAM just doesn't wanna do 8000. 7900 is fine, but 8000 always crashes in certain games after a long play session somehow.


I do not use a backplate, because I find my watercooled 2080 Ti FE with Samsung GDDR6 runs cooler without one. Maybe that'll help? Well, that, and I don't have a backplate, lol.

On another note about fast memory: sometimes you just have what you have. The chips overclock to a certain point, and that's it.

I hunted and searched for an MSI Gaming Z Trio 2080 Ti for so long, and I just couldn't find one. They have the better 15.5 Gbps Samsung memory modules on board, and those cards can run around 18 Gbps to 18.5 Gbps fairly easily. I still wish I had one. I had a 2080 Super FE and I ran it, burning hot, in-game stable at an effective 18,300 MHz, 100% stable.

But either way, my memory is game stable at +1,275 MHz, or 16,550 MHz effective, since watercooling my card. I have owned (4) 2080 Tis, and this is the best memory I have seen.

If you have (2) broken 2080 Supers, you can swap their memory modules onto your card. "It has been done", a lot of work though, lol.
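For reference, here is how those offset and effective numbers relate (a small sketch; the 7,000 MHz Afterburner baseline is the usual half-of-effective convention for the 2080 Ti's 14 Gbps GDDR6, and +1,275 is the offset from this post):

```python
# GDDR6 on the 2080 Ti: 14000 MT/s effective at stock. MSI Afterburner
# displays half the effective rate (7000 MHz) and applies its memory
# offset at that half rate.
AB_BASELINE = 7000  # MHz shown in Afterburner for stock 14 Gbps

def effective_rate(ab_offset_mhz):
    """Effective memory transfer rate (MT/s) after an Afterburner offset."""
    return (AB_BASELINE + ab_offset_mhz) * 2

print(effective_rate(1275))  # → 16550, the "16,550 MHz effective" above
```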


----------



## tps3443

@J7SC

Hey, I am trying the Galax overclocking software, and I must say, I am seeing better, more consistent performance. I am trying an alternative because I can't save profiles with the Galax XOC 2 kW BIOS. This seems to be working, though.










I scored 10 602 in Port Royal

Intel Core i9-7980XE Processor, NVIDIA GeForce RTX 2080 Ti x 1, 32768 MB, 64-bit Windows 10

www.3dmark.com


----------



## tps3443

The Galax HOF tuning software works with the 2080 Ti Galax XOC 2,000 W (2 kW) BIOS without a BSOD.

^ You can adjust the power limit down from 100%, change the voltage offset, voltage, boost frequency and memory frequency, and, MOST IMPORTANT OF ALL, even save profiles without a BSOD.

It's not as good as MSI AB, but you can save profiles reliably and boot to them in Windows daily.


----------



## Imprezzion

Hmm, good to know. I'll give it a shot. I don't really need the curve OC for 1.125 V anyway. As long as it allows a custom fan curve, it would work for me.


----------



## tps3443

Hey guys, I ran the Metro benchmark at 1080p. I am literally nipping at the heels of an RTX 3080. And I'm 30% faster than the review-sample RTX 2080 Ti!

2560x1440 results in a massive 37% increase in performance!!!





----------



## tps3443

Imprezzion said:


> Hmm, good to know. I'll give it a shot. I don't really need the curve OC for 1.125 V anyway. As long as it allows a custom fan curve, it would work for me.



Not really sure what's up with it, but my GPU is breaking 17,500 in Time Spy graphics very easily now, and breaking 10,600 in Port Royal. And it's stable in games at the exact same settings used to achieve these numbers. It literally LOCKS 2,145 MHz and doesn't back off. And it is running something like 6C cooler vs. the same settings in MSI AB. I can't really explain it. This Galax HOF software really sucks as far as monitoring goes. But damn, it's so stable!


----------



## Imprezzion

tps3443 said:


> Not really sure what's up with it, but my GPU is breaking 17,500 in Time Spy graphics very easily now, and breaking 10,600 in Port Royal. And it's stable in games at the exact same settings used to achieve these numbers. It literally LOCKS 2,145 MHz and doesn't back off. And it is running something like 6C cooler vs. the same settings in MSI AB. I can't really explain it. This Galax HOF software really sucks as far as monitoring goes. But damn, it's so stable!


Strange. I tried both Galax Tuner Plus and the HOF AI suite, and neither of them allows me to set a profile.

Both just BSOD, same as MSI AB.

Might be an incompatibility with my specific model of card? Mine is a Gainward Phoenix GS, which is reference-based, but probably not 100%.

It also only allows me to run 1.112-1.118 V but never 1.125 V. And it got stuck at 1410 MHz core in 3D several times.


----------



## J7SC

Imprezzion said:


> Strange. I tried both Galax Tuner Plus and the HOF AI suite, and neither of them allows me to set a profile.
> 
> Both just BSOD, same as MSI AB.
> 
> Might be an incompatibility with my specific model of card? Mine is a Gainward Phoenix GS, which is reference-based, but probably not 100%.
> 
> It also only allows me to run 1.112-1.118 V but never 1.125 V. And it got stuck at 1410 MHz core in 3D several times.


I'm still on MSI AB 4.6.1 Beta 2... but it works fine. I tried the Galax OC software back when I first got these cards because it could clock the VRAM higher back then. It wasn't as good as MSI AB overall, but workable. Perhaps running an actual Galax XOC BIOS would help with their software. That said, I'm happy that I can run low GPU voltage (pic below, lower right) with a milder OC, given PSU limitations for 2x cards and an OC'ed TR.












tps3443 said:


> @J7SC
> (...)
> 
> I scored 10 602 in Port Royal
> 
> Intel Core i9-7980XE Processor, NVIDIA GeForce RTX 2080 Ti x 1, 32768 MB, 64-bit Windows 10
> 
> www.3dmark.com


...it's not a race, you know... never mind stock BIOS and all that for productivity. Still, as a former HWBot'er, I keep a thumb drive


Spoiler


----------



## Imprezzion

I fixed it in a different way, haha. I flashed the EVGA Kingpin XOC BIOS, disconnected the PWM wire I had running from the card to control the fans, and wired that into the Kraken X52's pump fan controller.

Now I have a full XOC 1.125 V BIOS, and the fans are GPU-temperature-controlled through NZXT CAM. Works like a charm.

Did a quick 3DMark Fire Strike run to verify that the fan control works, and it did a 17123 graphics score @ +165 core / +900 memory, which is 2145-2130 / 7900 @ 1.112-1.125 V, with a max temp of 51C at 1400 RPM. Works great.

Only downside is that for some reason that BIOS keeps the VRAM at max clocks after coming off a load. With a cold boot it idles @ 405 MHz, but once it hits 3D clocks it doesn't clock down anymore.

I used AB 4.6.2 and the latest Precision X1; both showed the same behavior. Not a big deal.
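A custom fan curve like the one being set up here is just piecewise-linear interpolation between temperature/RPM points. A minimal sketch (the breakpoints below are made up for illustration, not the actual CAM profile):

```python
# Piecewise-linear fan curve: map GPU temperature (°C) to fan speed (RPM).
# Breakpoints are illustrative, not the real NZXT CAM profile.
CURVE = [(30, 800), (45, 1100), (55, 1400), (70, 2000)]

def fan_rpm(temp_c):
    """Interpolate linearly between curve points; clamp at both ends."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    if temp_c >= CURVE[-1][0]:
        return CURVE[-1][1]
    for (t0, r0), (t1, r1) in zip(CURVE, CURVE[1:]):
        if t0 <= temp_c <= t1:
            return r0 + (r1 - r0) * (temp_c - t0) / (t1 - t0)

print(fan_rpm(51))  # → 1280.0 RPM at the 51C max temp mentioned above
```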


----------



## J7SC

Imprezzion said:


> I fixed it in a different way, haha. I flashed the EVGA Kingpin XOC BIOS, disconnected the PWM wire I had running from the card to control the fans, and wired that into the Kraken X52's pump fan controller.
> 
> Now I have a full XOC 1.125 V BIOS, and the fans are GPU-temperature-controlled through NZXT CAM. Works like a charm.
> 
> Did a quick 3DMark Fire Strike run to verify that the fan control works, and it did a 17123 graphics score @ +165 core / +900 memory, which is 2145-2130 / 7900 @ 1.112-1.125 V, with a max temp of 51C at 1400 RPM. Works great.
> 
> Only downside is that for some reason that BIOS keeps the VRAM at max clocks after coming off a load. With a cold boot it idles @ 405 MHz, but once it hits 3D clocks it doesn't clock down anymore.
> 
> I used AB 4.6.2 and the latest Precision X1; both showed the same behavior. Not a big deal.


...good stuff! I take it your card has 2x 8-pin connectors, but with the Kingpin XOC (3x 8-pin) that probably won't make much difference (as 2/3rds of kaboom is still a lot). BTW, have you tried that Kingpin XOC vBIOS re. the VRAM speed issue after several hours of gaming you mentioned before?

...MSI AB is now up to version 4.6.3 Beta 2 - the regular non-beta can't be far behind, so something to add to the to-do list during the holidays.

...we were Christmas shopping earlier today and, while at it, checked a few more brick-and-mortar stores for RTX 3K and/or AMD RX 6K... nothing, nada - same as the major online retailers in N. America. This time around, Asia and the EU seem to have been served better, or maybe it's just excess demand...


----------



## tps3443

J7SC said:


> ...good stuff! I take it your card has 2x 8-pin connectors, but with the Kingpin XOC (3x 8-pin) that probably won't make much difference (as 2/3rds of kaboom is still a lot). BTW, have you tried that Kingpin XOC vBIOS re. the VRAM speed issue after several hours of gaming you mentioned before?
> 
> ...MSI AB is now up to version 4.6.3 Beta 2 - the regular non-beta can't be far behind, so something to add to the to-do list during the holidays.
> 
> ...we were Christmas shopping earlier today and, while at it, checked a few more brick-and-mortar stores for RTX 3K and/or AMD RX 6K... nothing, nada - same as the major online retailers in N. America. This time around, Asia and the EU seem to have been served better, or maybe it's just excess demand...


I think the low stock is a direct result of bots. They buy RTX and AMD RX cards immediately upon an in-stock alert, just for profit and scalping purposes. So it makes it seem like we have much lower stock than we really have. Then, after long enough, just about everyone has one, so the interest just slowly fades away. I mean, we have plenty of 3080s and 3090s and 6800s for sale; they just cost too much. It's almost like the bots control the inventory, and it slowly trickles out, because people pay the high prices.

I see these pictures of people selling them on Facebook, and they have a stack of like 50 cards in the photo. It's disgusting. I can't believe manufacturers even allow this to go on; they simply don't care who it's sold to. They just want it to sell.


----------



## J7SC

...yeah, this whole RTX 3K (and, for that matter, RX 6K) release was even worse than RTX 2K (when I also waited a few months). Apart from scalpers and bot-buying, there are also the miners, with direct sales by NVidia per the earlier post (!). Direct sales like that, with limited supply/yield and excess demand, lead to predictable market results.

Still, according to this thread, ASUS Strix 3090 OC poor overclocking performance, some chunks of orders obviously got through. But now we apparently are also back to chasing certain batch numbers for GPUs...

Not sure what Cyberpunk 2077 will mean re. our 2080 Tis, but so far I haven't run into any issues with apps, even at 4K/ultra.


----------



## tps3443

J7SC said:


> ...yeah, this whole RTX 3K (and, for that matter, RX 6K) release was even worse than RTX 2K (when I also waited a few months). Apart from scalpers and bot-buying, there are also the miners, with direct sales by NVidia per the earlier post (!). Direct sales like that, with limited supply/yield and excess demand, lead to predictable market results.
> 
> Still, according to this thread, ASUS Strix 3090 OC poor overclocking performance, some chunks of orders obviously got through. But now we apparently are also back to chasing certain batch numbers for GPUs...
> 
> Not sure what Cyberpunk 2077 will mean re. our 2080 Tis, but so far I haven't run into any issues with apps, even at 4K/ultra.


I think it was so hilarious that Nvidia released a Cyberpunk 2077 edition 2080 Ti, with a free copy of the game included. Even other RTX 2000-series purchases included a free copy. The game was delayed so many times that a new generation of GPUs exists now, lol. I am pretty sure the Cyberpunk developers were getting death threats from customers, they were so pissed off at the continuous delays.

I still wouldn't mind (2) of those Cyberpunk 2080 Tis; they looked so cool! Too bad air cooling sucks, though. But I guess it would still look really cool!

Anyways, I just flashed to the Kingpin XOC BIOS. First time trying this one.


----------



## Falkentyne

tps3443 said:


> I think the low stock is a direct result of bots. They buy RTX and AMD RX cards immediately upon an in-stock alert, just for profit and scalping purposes. So it makes it seem like we have much lower stock than we really have. Then, after long enough, just about everyone has one, so the interest just slowly fades away. I mean, we have plenty of 3080s and 3090s and 6800s for sale; they just cost too much. It's almost like the bots control the inventory, and it slowly trickles out, because people pay the high prices.
> 
> 
> I see these pictures of people selling them on Facebook, and they have a stack of like 50 cards in the photo. It's disgusting. I can't believe manufacturers even allow this to go on; they simply don't care who it's sold to. They just want it to sell.


I Think Mr Fox has a point.
It would be nice if people actually started becoming rational human beings and NO ONE bought from a scalper. Then those scalpers would have to declare bankruptcy or become homeless or something and get off their rich privileged elitist lifestyle, and good riddance. Or they would have to scramble to sell their cards at cost + shipping and compete with each other and the etailers.
Absolutely zero sympathy. Call me evil if you want.


----------



## kithylin

Falkentyne said:


> I Think Mr Fox has a point.
> It would be nice if people actually started becoming rational human beings and NO ONE bought from a scalper. Then those scalpers would have to declare bankruptcy or become homeless or something and get off their rich privileged elitist lifestyle, and good riddance. Or they would have to scramble to sell their cards at cost + shipping and compete with each other and the etailers.
> Absolutely zero sympathy. Call me evil if you want.


People on Reddit have already created scripted bots that "buy" the auctions from these scalpers with randomly created eBay accounts that never pay. They target specific individuals by their account names that are 'known' for selling scalped cards, and they repeatedly re-buy the listings every time a new listing goes up, to make sure the scalpers never actually make any money.


----------



## J7SC

Falkentyne said:


> I Think Mr Fox has a point.
> It would be nice if people actually started becoming rational human beings and NO ONE bought from a scalper. Then those scalpers would have to declare bankruptcy or become homeless or something and get off their rich privileged elitist lifestyle, and good riddance. Or they would have to scramble to sell their cards at cost + shipping and compete with each other and the etailers.
> Absolutely zero sympathy. Call me evil if you want.


Scalpers are annoying, whether for concert tickets, sports events or online GPU sales..._bots even more so_. 

But the problem seems to go much deeper / higher. For example, per the story below (as posted before), there are direct sales by NVidia to miners in huge volume, for GPUs that also have limited yields at the production level. In other words, it seems to go all the way to the top, not just your average scalper and bot. I have also heard that, especially for North America, the changes in trade rules / duties with Asia mean that several vendors are directing more of their limited volume at Asia and Europe, where there aren't the same duties to pay at port of landing.


----------



## tps3443

@Imprezzion

I love the Kingpin XOC bios so far. It works amazingly, and I think I will continue on with it. I’m maintaining 2,145MHz in games just like with the Galax XOC. My memory also stays locked at 8,250MHz like yours does, but that’s ok, a very minor issue. It is also kinda tough to see what your projected boost clocks will be without being able to see the curve graph, which is totally gone due to no voltage control.

When I used the Galax XOC bios before, I was saving the profile “file” to the desktop and loading the actual file from the desktop through the Galax HOF software as a workaround. Still pretty annoying to deal with.

Either way, we have this bios and it is 520 watts. So that should be plenty! I also have soldered and stacked 8 ohm resistors on my card, so I can pull a little extra wattage on top of that for like timespy extreme runs, even though I don’t really need it for anything else.

I am getting a 1/2HP water chiller next month, so I can squeeze another 80-100Mhz out of this card sustained in games. Maybe 2,220-2,235Mhz?

From what I understand, under 30C it can sustain 2,220MHz in games. I’d like to go even further. It’s very fun overclocking the 2080Ti nonetheless!


----------



## J7SC

^^I thought the KingPin XOC bios was not limited to 520W?

Also, for hard benching a 2080 Ti w/ XOC bios, you might as well go for a GPU pot (for DICE / LN2). This one and some CPU pots have been vegging for some years in a cardboard box now. Given the time of year, that box is sort of like the island of misfit toys...


----------



## Imprezzion

Nice! So the memory clock issue is BIOS-related.
I did some testing in Time spy and superposition with HOF XOC, Kingpin @ 2145/7900 and KFA2 380w @ 2085/7900.

Scores were roughly tied between Kingpin and HOF XOC, at about 17,100 in Time Spy and 10,300 in Superposition.

The 380w KFA2 was at 16,600 and 10,000, but stays cooler and has proper idle clocks and profiles.

Mine isn't shunt modded, and because it's a 3x8pin BIOS the power draw measurement in HWiNFO64 is all wonky, at like 500-600w load and 150-160w idle, so I don't know how much it actually draws lol.

With the KFA2 380w I was seeing about 350-355w in Time spy and 365-375w in superposition. If I run superposition at 1080p extreme or higher or time spy extreme / port royal it will hit throttling with the KFA2 380w BIOS tho.

I will probably stick with Kingpin if it remains stable in games tomorrow on my day off lol.

World of tanks is a real GPU OC killer due to the seriously high frame rates that game pushes (400+ mostly) so I'll run that for a bit.


----------



## tps3443

J7SC said:


> ^^I thought the KingPin XOC bios was not limited to 520W ?
> 
> Also, for hard benching a 2080 Ti w/ XOC bios, you might as well go for a GPU pot (for DICE / LN2). This one and some CPU pots have been vegging for some years in a cardboard box now. Given the time of year, that box is sort of like the island of misfit toys...
> 
> View attachment 2468406


I like benching and getting high scores, but I mostly like sustainable overclocks that are reliable for 24/7 usage. 


I would consider running a phase changer daily, but I’ve never found LN2 appealing. That would be cool to play with though! You could probably push like 2.4Ghz on a 2080Ti daily with a phase changer, seeing how a lot of guys bench at 2.7 to 2.9Ghz with LN2.


----------



## tps3443

Imprezzion said:


> Nice! So, it is BIOS related the memory clock issue.
> I did some testing in Time spy and superposition with HOF XOC, Kingpin @ 2145/7900 and KFA2 380w @ 2085/7900.
> 
> Scores tied with Kingpin and HOF XOC at 17100 roughly for Time spy and 10300 for superposition.
> 
> 380w KFA2 was at 16600 and 10000, but stays cooler and has proper idle clocks and profiles.
> 
> Mine isn't shunt modded and due to being a 3x8pin BIOS the power draw measurement using HWiNFO64 is all wonky at like 500-600w load and 150-160w idle so I don't know how much it actually draws lol.
> 
> With the KFA2 380w I was seeing about 350-355w in Time spy and 365-375w in superposition. If I run superposition at 1080p extreme or higher or time spy extreme / port royal it will hit throttling with the KFA2 380w BIOS tho.
> 
> I will probably stick with Kingpin if it remains stable in games tomorrow on my day off lol.
> 
> World of tanks is a real GPU OC killer due to the seriously high frame rates that game pushes (400+ mostly) so I'll run that for a bit.


Yes I think the Kingpin XOC is the best option of them all. It gives us high voltage, high wattage, and it can be loaded on startup. I have all of my bios in a folder and I just bounce between them almost like I’ve got a bios switch lol. But since trying this one, I am sold.


----------



## J7SC

tps3443 said:


> I like benching and getting high scores, but I mostly like sustainable overclocks that are reliable for 24/7 usage.
> 
> I would consider running a phase changer daily, but I’ve never found LN2 appealing. That would be cool to play with though! You could probably push like 2.4Ghz on a 2080Ti daily with a phase changer, seeing how a lot of guys bench at 2.7 to 2.9Ghz with LN2.


DICE is a nice intermediate step, better / cooler than a (single) phase change cooler but not as crazy as LN2. I've done both with the pot (and also own a single-phase); phase change cooler can be used for longer but sucks a lot of power and is noisy, apart from not reaching DICE temps. Problem with DICE is that we use Acetone to 'pull the cold down' and in anything but a well-ventilated space, it can get tricky. 

On another note...THW on Cyberpunk 2077


----------



## tps3443

J7SC said:


> DICE is a nice intermediate steps, better / cooler than a (single) phase change cooler but not as crazy as LN2. I've done both with the pot (and also own a single-phase); phase change cooler can be used for longer but sucks a lot of power and is noisy, apart from not reaching DICE temps. Problem with DICE is that we use Acetone to 'pull the cold down' and in anything but a well-ventilated space, it can get tricky.
> 
> On another note...THW on Cyberpunk 2077


I am very happy with the Cyberpunk 2077 in-game performance results. I play at 2560x1440, and it looks like my heavily overclocked 2080Ti will run the game great!

I fall just a little short of an RTX 3080 FE, and we have DLSS 2.0 support too! So with 11GB of VRAM and our heavy overclocks, we‘re in for some top-tier performance.


----------



## J7SC

...hoping for NVL-SLI (or even CFR) support in Cyberpunk 2077, but currently little to no info on that...not holding my breath though

And 'FYI' re. 3090 'supply', another twist here ...also note their quote about the average selling price


----------



## tps3443

J7SC said:


> ...hoping for NVL-SLI (or even CFR) support in Cyberpunk 2077, but currently little to no info on that...not holding my breath though
> 
> And 'FYI' re. 3090 'supply', another twist here ...also note their quote about the average selling price


I would look at Watch Dogs 2 NVLink performance scaling, compare it with NVLink scaling in the newer Watch Dogs, and go from there. They use the same engine, right?


----------



## GP4572

Hi Guys, first post here!
I got myself an ASUS 2080ti Turbo recently and watercooled it.
It has a confirmed non-A chip with a TDP limit of 250 +12% (274W).
I read somewhere that the earlier models had an A-chip and found this old BIOS for this card: link which has a TDP limit of 300W...
Can I flash this on my card?
If it doesn't work, can you reflash it?
What's the deal with the Palit 310W BIOS for non-A cards? Why does it work, despite all the rest having only 280W BIOS-es? They went against NV guidance or what?


----------



## Medizinmann

GP4572 said:


> Hi Guys, first post here!
> I got myself an ASUS 2080ti Turbo recently and watercooled it.
> It has a confirmed non-A chip with a TDP limit of 250 +12% (274W).
> I read somewhere that the earlier models had an A-chip and found this old BIOS for this card: link which has a TDP limit of 300W...
> Can I flash this on my card?


Should work...



> If it doesn't work, can you reflash it?


Yes - you might need a different method of video output if it fails badly...i.e. an iGPU or another GPU.



> What's the deal with the Palit 310W BIOS for non-A cards? Why does it work, despite all the rest having only 280W BIOS-es? They went against NV guidance or what?


If you are able to flash it (see the important note on page 1) - flash the Palit BIOS.
I have a KFA2 Dual 2080Ti with a non-A chip, on air in an eGPU enclosure, that works just fine with the Palit BIOS - I’ve seen it draw up to 319W with it (if I believe HWiNFO).

Best regards,
Medizinmann


----------



## GP4572

I tried the 300W Turbo BIOS out of curiosity, but it only said "mismatching GPU", so I suppose the rumor was true: the early Turbo models had an A-chip, whose BIOS is simply incompatible.

After this I tried the Palit non-A 310W BIOS and it runs fine. I gained about +100MHz on the core.


----------



## J7SC

tps3443 said:


> I would look at watch dogs 2 nvlink performance scaling and the newer watch dogs for nvlink scaling and compare that. They use the same engine right?


I don't have any of the Watch Dogs series, but Watch Dogs 2 fully supported NVLink/SLI; not so sure about Watch Dogs: Legion, as I can't find any NVL/SLI vids. Still, once I get Cyberpunk 2077, I might just try out the NVInspector NVL/SLI profile for the Watch Dogs series on Cyberpunk...I've been lucky before with strange profile fits.

On another note...per your and my earlier posts re. the MSI Tri Z 2080 Ti (rather than the X), I think I ended up just ahead of a set of those rare cards w/ a Port Royal HoF run this morning. While I had 45 MHz (3 steps) more GPU speed, he just had way more VRAM speed. I typically run the (Micron) VRAM at 2056 MHz for both my cards before getting into error correction. 2,056 MHz is 16,448 MHz effective in GDDR6 speak. Stock VRAM on my cards is 1,768 (14,144 'effective').

This fellow with what I assume were Trio 'Z' cards ran 2,345 MHz (18,760 'effective' - wow) with that newer VRAM typically applied to the 2080 Super cards! _Just goes to show you how much more power is locked up in 2080 Tis, at least with decent cooling_. I look forward to the spring when I can finally try out some XOC bios instead of the 380W stock I need to retain for now. I figure my cards would be happiest slurping around 430W-450W w/ the solid w-cooling I have; that doesn't push up the VRAM, but another 50W-70W per card would be nice.
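(For anyone wondering where these 'effective' numbers come from: GDDR6 moves 8 bits per pin per reported memory clock, so the effective rate is simply 8x the MHz figure tools show, and bandwidth follows from the bus width. A quick sketch of the arithmetic - the function names here are just for illustration:)

```python
# GDDR6 transfers 8 bits per pin per reported memory clock, so the
# "effective" speed is 8x the MHz figure tools like Afterburner show.

def effective_rate(mem_clock_mhz):
    """Effective GDDR6 data rate in MT/s."""
    return mem_clock_mhz * 8

def bandwidth_gbs(mem_clock_mhz, bus_width_bits=352):
    """Theoretical bandwidth in GB/s for a given bus width (352-bit on the 2080 Ti)."""
    return effective_rate(mem_clock_mhz) * 1e6 * (bus_width_bits / 8) / 1e9

print(effective_rate(1768))  # stock VRAM -> 14144 "effective"
print(effective_rate(2056))  # typical OC -> 16448 "effective"
print(effective_rate(2345))  # the Trio Z run -> 18760 "effective"
print(bandwidth_gbs(1750))   # stock spec clock -> 616.0 GB/s
```

The last line matches the 616 GB/s figure from the spec sheet on page 1.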


----------



## kithylin

J7SC said:


> I don't have any of the Watch Dog series, but Watch Dog 2 fully supported NVlink/SLI; not so sure about Watch Dogs: Legion as I can't find any NVL/SLI vids. Still, once I get Cyberpunk 77, I might just try out the NVInspector NVL/SLI profile for the Watch Dog Series on Cyberpunk...I've been lucky before with strange profile fits.


I would wholly expect 2077 to support SLI and NV-Link out of the box on launch day. It's a very big "Nvidia Showcase Game". Cyberpunk 2077 is already confirmed to support Nvidia RTX and DLSS on launch.


----------



## tps3443

Just for fun, I ran the Galax 380 watt bios stock with an undervolt, and this was after about 2 hours of gameplay. 99% GPU usage!

Who says an RTX 3070 is efficient? Lol.


The card still scores sub-16K in Time Spy graphics like this. I was able to reduce voltage to 0.918V and manage 32C after multiple hours of gaming, but I didn’t take a photo of that one. My card idles at like 27-28C, so I find it funny that it loads up at a peak of like 32-34C, just a little warmer than idle.

It’s amazing how well (2) 360 slim rads work. I will say that I spent a lot of time getting perfect GPU-to-block contact though.


----------



## Imprezzion

I played with that BIOS on my AIO cooled card as well and I noticed something quite positive. 

I used to use the EVGA FTW3 Ultra BIOS on it so I could keep the lower minimum fan percentage, but now that I have the fans controlled by the NZXT X52, the card's fan controller doesn't matter anymore.

On EVGA FTW3 Ultra I could run 2085Mhz core 7800Mhz memory @ 1.093v curve with it sitting either very close to or on the power limit. Any higher would cause DirectX crashes.

On KFA2 380w it reads at least 20% lower power numbers and doesn't throttle at all and, funny thing is, I can reach 2100/7900 without DirectX crashes!

I played an hour of World of Tanks and like 4-5 hours Division 2 (new season so yeah.. the grind is on again!) and didn't have any crashes or whatever. Usually 2100/7900 wouldn't even last 20 minutes in Division 2.

Max temps hovered around 47-48c @ 1400-1450RPM (46-49%) on the radiator fans, with the pump maxed out above 40c.

If I max the fans I can push way lower temps, well under 40c, but they are 2200RPM fans and it's 4 of them so that is unbearably loud lol. 1400 isn't silent either but with a headset on during games I can't hear it.


----------



## gfunkernaught

tps3443 said:


> Galaxy HOF tuning software works with the 2080Ti Galax XOC 2,000 watt (2kW) bios “without BSOD”
> 
> ^ You can adjust the power limit down from 100%, change voltage offset, voltage, boost frequency, and memory frequency, and MOST IMPORTANT OF ALL even save profiles “without a BSOD”
> 
> 
> It’s not as good as MSI AB, but you can save profiles reliably and boot to them in Windows daily.


Hey, what is the exact name of the program, or a link to the download? I am seeing multiple hits when googling "Galaxy HOF tuning software". I would love to try the HOF 2kW BIOS again.


----------



## tps3443

@Imprezzion 

Hey, I know you’ve been trying to get 16,000MHz memory game-stable. Buildzoid has a fairly easy hardmod that will increase the GDDR6 voltage. It’s similar to performing a shunt mod; you’ve gotta solder one area on the card, nothing too crazy. I am considering this myself. I’d love to get higher and more stable memory overclocking.


----------



## Imprezzion

Nah, not worth the risk now that it's next to impossible to replace the card if it does break.

It's not the numbers I'm after; it's just that memory overclocking has a large benefit on this card, and 7900 isn't anything special.

I'm going to need every gain I can get for Cyberpunk. I'm only running 1080p, but I do have a 144Hz screen.


----------



## J7SC

Imprezzion said:


> *Nah, not worth the risk now that it's next to impossible to replace the card if it does break.*
> 
> It's not the numbers I'm after it's just that memory overclocking has a large benefit on this card and 7900 isn't anything special.
> 
> I'm going to need every gain I can get for Cyberpunk. I'm only running 1080p but I do have a 144Hz screen..


...good point re. replacements. While vendors keep a certain number of older cards for RMA, they usually aren't the shining stars (and can be used / refurbished). I find that 2,000 MHz (16,000 effective) VRAM speeds are typically the sweet spot for my setup / apps.


----------



## tps3443

Imprezzion said:


> Nah, not worth the risk now that it's next to impossible to replace the card if it does break.
> 
> It's not the numbers I'm after it's just that memory overclocking has a large benefit on this card and 7900 isn't anything special.
> 
> I'm going to need every gain I can get for Cyberpunk. I'm only running 1080p but I do have a 144Hz screen..


Yeah, the memory overclocking does benefit. I’m gonna attempt to mod mine. I’ve already soldered on resistors, and de-soldered them, like (3) times on the same card, and I have soldered (3) other different 2080Ti’s, so I don’t see how this would be any different. The memory runs at 1.35V, and the mod just increases that voltage; the extra voltage helps with stability and overclocking the memory even further.

I think I’m gonna go ahead and buy Cyberpunk 2077 myself. I’ve been waiting far too long for this one.


----------



## tps3443

I’ve got Cyberpunk 2077. Just a heads up, the character creation is quite strange; yes, you customize your character’s penis size and shape. They do not hold back one bit!

The game runs amazing!!

Playing at 2560x1440, Ultra preset, with DLSS off / ray tracing off, it seems to easily manage 70-80FPS.

If I run the Ultra ray tracing preset, it enables DLSS on Auto and I manage 60-70+ FPS.

Very smooth gameplay! I am still tinkering with the graphics settings quite a bit. But it runs fantastic so far. My card feels strong!!!


----------



## Imprezzion

That gives me a lot of hope I can pull off a decent frame rate on 1080p with ray tracing enabled. I'd rather not use DLSS but if I have to to get good frames, so be it.


----------



## J7SC

Imprezzion said:


> That gives me a lot of hope I can pull off a decent frame rate on 1080p with ray tracing enabled. I'd rather not use DLSS but if I have to to get good frames, so be it.


Linus had some interesting (if slightly confusing) tips on RTX, DLSS and resolution yesterday...


----------



## tps3443

J7SC said:


> Linus had some interesting (if slightly confusing) tips on RTX, DLSS and resolution yesterday...


I’d be pissed if I had an AMD card...


----------



## J7SC

...this seems like a good Cyberpunk 2077 2080 Ti 'setup' vid re. various RTX and DLSS options






...speaking of RTX, managed to get past 21K in Port Royal here


----------



## geriatricpollywog

J7SC said:


> ^^I thought the KingPin XOC bios was not limited to 520W ?
> 
> Also, for hard benching a 2080 Ti w/ XOC bios, you might as well go for a GPU pot (for DICE / LN2). This one and some CPU pots have been vegging for some years in a cardboard box now. Given the time of year, that box is sort of like the island of misfit toys...


The Kingpin bioses (all 3 switch positions) are good for 520w. The XOC bios is unlimited. I use the non-XOC bios since the card runs at 2145 at 260-300w and doesn’t need a higher power limit until you start tuning the NVVD. Non XOC cards can benefit from an XOC bios because they have less efficient power delivery and require higher board power to achieve the same clocks as XOC cards.


----------



## tps3443

Guys, I’m blown away by Cyberpunk 2077 performance at 2560x1440. I can easily manage a very solid 65 FPS average with the Ultra RT preset. DLSS is set to Auto with this preset by default. I see minimums of maybe 57-59 FPS, but not too often.

If I run 2560x1440 with the Ultra preset, RT off and DLSS off, I can easily manage a good 76 FPS average, with minimums of around maybe 62-64, which are very uncommon.

I absolutely love the game, and my performance is incredible!! Based on my numbers, my overclocked and watercooled 2080Ti lands somewhere between a 6800XT and an RTX 3080.

DLSS is very good in this title! And my 2080Ti just kills it in 1440p. I couldn’t be any happier about my performance. I’m not left wanting more power, or sitting around fiddling with the graphics settings too much. I’ve pretty much set the Ultra RT preset at 1440p, and it’s solid! Always 60-70FPS about 98% of the time!



@J7SC

Have you tested Cyber Punk 2077 with Nvlink/SLI yet?


----------



## Medizinmann

tps3443 said:


> Id be pissed if I had a AMD card..


Why? I didn't see Linus talk about AMD GPUs in this video!?

When I see the tests it doesn't look that bad for AMD...
Cyberpunk 2077 PC GPU Benchmarks: Best Video Cards for 1080p, 1440p, & 4K - YouTube

If one could get a Radeon 6800XT at MSRP - but right now you can't buy any anyhow...

Best regards,
Medizinmann


----------



## Medizinmann

Imprezzion said:


> That gives me a lot of hope I can pull off a decent frame rate on 1080p with ray tracing enabled. I'd rather not use DLSS but if I have to to get good frames, so be it.


As we see in the tests - 1080p shouldn't be a problem.

Best regards,
Medizinmann


----------



## Medizinmann

tps3443 said:


> Guys I’m blown away by Cyberpunk2077 performance in 2560x1440P. I can easily manage a very solid 65 FPS average with Ultra RT preset. DLSS is set to auto with this preset by default. I see minimums I’d maybe 57-59 FPS. Not too often though.
> 
> If I run 2560x1440P with Ultra preset with RT off, and DLSS off I can easily manage a good 76 FPS average. with minimums of around maybe 62-64 which are very very uncommon.
> 
> I absolutely love the game, and my performance is incredible!! Based on my numbers with my overclocked and watercooled 2080Ti I have performance somewhere between a 6800XT and a RTX3080
> 
> DLSS is very good in this title! And my 2080Ti just kills it In 1440P. I couldn’t be any happier about my performance. Im not left wanting more power, and sitting around fiddling with the graphics settings too much. I’ve pretty much set UltraRT preset at 1440P, and it’s solid! Always 60-70FPS about 98% of the time!


That is nice.

I am also aiming at playable numbers in 1440p with max. settings.

Best regards,
Medizinmann


----------



## tps3443

Medizinmann said:


> Why? I didn't see Linus talk about AMD GPUs in this video!?
> 
> When I see the tests it doesn't look that bad for AMD...
> Cyberpunk 2077 PC GPU Benchmarks: Best Video Cards for 1080p, 1440p, & 4K - YouTube
> 
> If one could get a Radeon 6800XT at MSRP - but right now you can't buy any anyhow...
> 
> Best regards,
> Medizinmann


I am getting performance at around the 6900XT level, based on comparing reviewers’ Cyberpunk 2077 results to my own.

^ This is at 1440p without DLSS and without RT enabled, on Ultra.

You require DLSS to play with ray tracing on. So if I had an AMD card right now, it would be pretty choppy.

Also, if I want a high-refresh-rate experience, I can run Ultra “without“ RT and enable DLSS on Auto, with resolution still at 1440p, and I get 110fps.


At 1440p Ultra RT without DLSS I get like 37FPS, which is unplayable. So with an AMD card, I couldn’t run RT at all. Once I enable DLSS on its Auto default, it gives me a smooth 65FPS with the Ultra RT preset, with minimums of around 57-58 that are pretty uncommon.


----------



## tps3443

Medizinmann said:


> That is nice.
> 
> I am also aiming at playable numbers in 1440p with max. settings.
> 
> Best regards,
> Medizinmann


Very playable maxed out! I am shocked. Just hit that Ultra ray tracing preset at 1440p; you can even turn global illumination to Psycho. “This is totally maxed out.”

It pushes 65 everywhere, with minimums that still dip sub-60fps. They did a great job with the game: no skipping, no popping, no stuttering, no dropping GPU usage.

I’m not tinkering with the settings a lot because it runs so flawlessly.

Also, you can run ray tracing on Psycho too, but it is not advised; it will average about 54-55FPS, with sub-50 minimums every now and then. Still very, very playable.

The Ultra RT preset is technically maxed out, and they must have done a lot of optimization for the RTX 2000 series in this game - or my overclock just helps leaps and bounds, judging by the numbers I see from reviewers.

This chart here shows the Ultra preset at 1440p with DLSS and RT disabled. I easily average 76 FPS there, with minimums of 62-64FPS; I’ve never seen it drop below 60fps. Performance is just crazy good, man, as I am always around that 75-76 FPS range with Ultra, RT off / DLSS off.


I am just stoked about this game! I am sitting right between a stock 6900XT and a stock RTX 3080.

I am only able to sustain about 2,130MHz boost in this game, but it does the trick! From what I have seen, a stock air-cooled 2080Ti at 100% power will downclock to around 1,700MHz due to the power cap and heat. So 2,130MHz is literally about 25% faster, not even including the memory overclocking.
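(That ~25% figure checks out; a quick sanity check using the sustained clocks quoted above:)

```python
# Core-clock uplift of a sustained watercooled OC vs. a power/thermal
# throttled stock card, using the clocks quoted above (memory OC excluded).
watercooled_mhz = 2130
throttled_stock_mhz = 1700

uplift = watercooled_mhz / throttled_stock_mhz - 1
print(f"{uplift:.1%}")  # ~25.3%
```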

I feel like my GPU is a new release or something lol.


----------



## tps3443

Imprezzion said:


> That gives me a lot of hope I can pull off a decent frame rate on 1080p with ray tracing enabled. I'd rather not use DLSS but if I have to to get good frames, so be it.


You are going to require DLSS, unfortunately. But trust me, it looks amazing on Auto.

Just use the Ultra RT preset; DLSS will default to Auto, and with your overclock and setup you’ll average about 90 FPS at 1920x1080.

DLSS off will cut the performance in HALF. Literally! You lose nearly 50% FPS with DLSS off and RT on.

You have two options: you can run the Ultra preset with RT off and DLSS off and get amazing performance, probably a 100fps average at 1080p.


Or just run the Ultra RT preset, which enables DLSS on Auto by default, and you will get about 85-90FPS.

I am guessing at performance, but these numbers will be pretty dang realistic for what to expect. I believe your 2080Ti runs 2.1+GHz, right? So you’re good to go, man.


----------



## tps3443

0451 said:


> The Kingpin bioses (all 3 switch positions) are good for 520w. The XOC bios is unlimited. I use the non-XOC bios since the card runs at 2145 at 260-300w and doesn’t need a higher power limit until you start tuning the NVVD. Non XOC cards can benefit from an XOC bios because they have less efficient power delivery and require higher board power to achieve the same clocks as XOC cards.


I am actually getting better stability and performance in Cyberpunk 2077 using the Galax 380 watt 1.093V bios with my soldered shunt mod vs. running the Kingpin XOC 1.125V bios.

I get crashing with the XOC bios, not sure why. I got frustrated with it and just flashed back to my Galax 380 bios so I can play. I hold 2,115-2,130MHz steady at 1.093V. My performance is amazing to say the least. 40C temps.

The Galax 380 maintains stability without crashing, and runs cooler due to less voltage. I play for like 3-4 hours straight with no issues.


----------



## J7SC

tps3443 said:


> Guys I’m blown away by Cyberpunk2077 performance in 2560x1440P. I can easily manage a very solid 65 FPS average with Ultra RT preset. DLSS is set to auto with this preset by default. I see minimums I’d maybe 57-59 FPS. Not too often though.
> 
> @J7SC
> 
> Have you tested Cyber Punk 2077 with Nvlink/SLI yet?


...haven't had time yet to get Cyberpunk 2077. Late next week, maybe.


----------



## tps3443

delete


----------



## sultanofswing

I've been playing CP77 for hours on end at 2160MHz on my KPE 2080ti without issues, playing at 4K max settings with RT on and DLSS set to Performance mode, mostly staying in the 40s for FPS.
Decided to try HDR yesterday with my CX OLED and it looks pretty damn good.


----------



## J7SC

...how's the free-roaming experience in Cyberpunk 2077? Extensive or limited, i.e. can you take an air taxi around Night City, or drive around the badlands?

Just one more work-project to finish, then it's Cyberpunk 2077 download and play time for the 2080 Ti(s)


----------



## Imprezzion

I played like 2 hours now at 1080p on the 2080Ti, and I have to say I'm very impressed by the FPS it manages to push out and how smooth it feels, even at like 50-55FPS.

I was running 2070-2085 core 7900 memory on the KFA2 380w BIOS and with all settings maxed with no DoF and motion blur @ 90 FoV I'm seeing 55-70 FPS with Ray tracing on Psycho and DLSS Quality.

Without Ray tracing about 65-80, DLSS Quality 100-140.

I even tested Ray tracing Ultra with no DLSS. 40-55FPS and what I would still call a playable experience except for shooting feeling a bit mushy.

I'm going to stick to all max RT Psycho DLSS Quality for my playthrough.

I did notice that a clock speed that runs fine in any other game crashes within minutes in Cyberpunk though.

XOC BIOS @ 1.125v 2130-2145MHz runs any other game fine for hours and hours, but I need to drop all the way to 2100-2115 to pass Cyberpunk.

At that small a difference, I'd rather run a normal 1.093v 380w BIOS at 2070-2085MHz, with profiles in MSI AB and proper idle clocks.


----------



## tps3443

Imprezzion said:


> I played like 2 hours now on 1080p on the 2080Ti and I have to say I'm very impressed by the FPS it manages to push out and how smooth it feels even on like 50-55FPS.
> 
> I was running 2070-2085 core 7900 memory on the KFA2 380w BIOS and with all settings maxed with no DoF and motion blur @ 90 FoV I'm seeing 55-70 FPS with Ray tracing on Psycho and DLSS Quality.
> 
> Without Ray tracing about 65-80, DLSS Quality 100-140.
> 
> I even tested Ray tracing Ultra with no DLSS. 40-55FPS and what I would still call a playable experience except for shooting feeling a bit mushy.
> 
> I'm going to stick to all max RT Psycho DLSS Quality for my playthrough.
> 
> I did notice that a clock speed that any other game runs fine crashes within minutes in Cyberpunk tho.
> 
> XOC BIOS @ 1.125v 2130-2145Mhz runs any other game fine for hours and hours but I need to drop all the way to 2100-2115 to pass Cyberpunk.
> 
> At that small of a difference I'd rather run a normal 1.093v 380w BIOS at 2070-2085Mhz really with profiles in MSI AB and proper idle clocks.


Yep. I had to switch to the Galax 380 too lol. XOC was causing some issues.

Cyberpunk is very sensitive to memory overclocking on the GDDR6. I thought my memory was stable in the 16,400-16,500 range. Well, it’s not. No other game has ever given me any trouble or crashing at these memory frequencies, but Cyberpunk 2077 has proven unstable and forced me to drop down to the 16,200 range, and now I’m good to go.

Running 2,115-2,130MHz sustained at 1.093V just fine.

Also, there was an update that fixed a crashing issue that I actually thought was due to GPU instability.

So I would update to 1.04, reduce the GPU memory OC, and try 2,100-2,115 again. You should be ok.


----------



## J7SC

...I finally had time to download Cyberpunk 2077 (the Steam servers seemed to be wheezing a bit tonight).... Now trying out 4K / RTX / DLSS quality / balanced by running around the same bar and trying to avoid a loan shark that someone else owes moolah to. Screenshot below is FAR from final / representative (!); just dialing in base clocks.

...SLI / CFR on older drivers might not even be worth the trade-off (if it works at all). I will update on that soon. Visuals in Cyberpunk 2077 are stunning, and 2080 Ti RTX / DLSS performance is still very decent overall.


----------



## tps3443

J7SC said:


> ...I finally had time to download Cyberpunk 2077 (Steam server seemed to be a bit wheezing though tonight).... Now trying out 4K / RTX / DLSS quality / balanced by running around the same bar and trying to avoid a loan shark who someone else owes mulla. Screenshot below is FAR from final / representative (!); just dialing in base clocks.
> 
> ...SLI / CFR older drivers might not even be worth the trade-off (if it works at all). I will update on that soon. Visuals in Cyberpunk 2077 are stunning, though. 2080 Ti RTX / DLSS is still very decent overall, though
> 
> 
> View attachment 2469149


I’m telling you man! You’ve got some really good silicon in your 2080Ti’s for clocks like that, or your cooling just allows you to keep them stable. Your GPU is at 32C; that’s just crazy good! Your CPU utilization does look a little wonky though. My 7980XE seems to use all 36 available threads at an even, but also lower, utilization due to balancing the load across the cores; not sure what may be going on there with your Threadripper.

Anyways, what is your cooling setup again?

I need to improve, lol. I’m so jealous. I have (2) EKWB 360 Classics and I run around 41C at 1.093V holding 2,115MHz.


----------



## J7SC

^^ ..jealousy is misplaced, and the irony is that this is actually a TR2+ 2950X productivity workstation, not a purpose-built 'gamer'. Still, once I had these two factory w-cooled 2080 Tis and tested them, I knew I had 'decent' silicon. That's why I raided the store room for w-cooling parts, meaning the cooling system for this build was zero dollar$ in terms of new acquisitions.

Anyway, dual loops...

CPU loop = 2x XSPC RX 360 and 2x D5s;
GPU loop = 3x XSPC RX 360 and 2x D5s.
I still had a few more XSPC RX 360 rads and D5s left over, but 5x rads + 4x pumps in total was all I could fit on the TT Core P5 case & frame, along with the 1300W Antec HPC PSU. Nine older GentleTyphoon 3k rpm fans just for the GPU rads also help...


----------



## tps3443

J7SC said:


> ^^ ..jealous is misplaced, and irony is that this is actually a TR2+ 2950X productivity workstation, not a purpose-built 'gamer'. Still, once I had these two factory w-cooled 2080 Tis and tested them, I knew I had 'decent' silicon. That's why I raided the store room for w-cooling parts, meaning the cooling system for this build was zero dollar$ in terms of new acquisitions.
> 
> Anyway, dual loops...
> 
> CPU loop = 2x XSPC RX 360 and 2x D5s;
> GPU loop = 3x XSPC RX 360 and 2x D5s..
> I still had a few more XSPC RX 360 rads and D5s left over, but 5x rads - 4x pumps in total was all I could fit on the TT Core P5 case & frame, along with the 1300W Antec HPC PSU. Older 9x GentleTyphoon 3k rpm for just the GPUs also help...


I need to separate my loop, get a larger case, and add more radiators. Or I could simply run a chiller with an extra pump.

My 7980XE adds a lot of heat to the loop.

If I'm on the desktop and I hit run in R20, that's like 750-800 watts. So in games it's kind of like running a second 2080 Ti.

Do you think separating my loop would improve temps? If I designated a 360 rad to the 2080 Ti and one to the 7980XE? And maybe swap to push/pull Typhoon fans too.

I am running some XSPC 1,650RPM fans, but they don't have very high static pressure.


----------



## J7SC

tps3443 said:


> I need to separate my loop, get a larger case, and add more radiators. Or I could simply run a chiller with an extra pump.
> 
> My 7980XE adds a lot of heat to the loop.
> 
> If I’m on the desktop, and I hit run with R20 that’s like 750-800 watts. So in games it’s kinda like running a second 2080Ti.
> 
> Do you think separating my loop would improve temps? If I designated a 360 Rad to the 2080Ti, and designated one to the 7980XE? And maybe swap to push pull typhoon fans too.
> 
> I am running some xspc 1,650RPM fans, but they are not very high static pressure.


This may sound jaded, but it is actually all about business... we ONLY run XSPC RX 360s, and ONLY Swiftech D5s (re. spares for servers). With modern D5 speeds, separating CPU and GPU loops is not a necessity at all...

But for redundancy (as well as a bit of performance boost), I always spec separate loops and at least two D5s PER loop in case of failure (which we never had)...also, separate loops make both regular maintenance and component upgrades much less of a chore...


----------



## gfunkernaught

Cyberpunk 2077 seems to like overclocked 2080 Tis. I also realized that my GPU had more headroom than I previously thought. I've been seeing posts on here about reference 2080 Tis running [email protected] on the Galax (or in my case the KFA) 380W BIOS and figured let me try it. I never had luck like that in games, only benchmarks. Tried [email protected]: hours of Cyberpunk Ultra preset, RT Ultra, DLSS Quality at 2560x1440, the highest temp I saw was 40C, 39-60fps (vsync). I used to run [email protected] because of power limits and thermal throttling, but so far so good.


----------



## tps3443

gfunkernaught said:


> Cyberpunk 2077 seems to like overclocked 2080 Ti's. I also realized that my gpu had more headroom than I previously thought. I've been seeing posts on here about reference 2080 Tis running [email protected] on the Galax , or in my case the KFA, 380w bios and figured let me try it. I never had luck like that in games, only benchmarks. Tried [email protected], hours of Cyberpunk Ultra Preset, RT Ultra, DLSS Quality, @ 2560x1440, highest temp I saw was 40c, 39-60fps (vsync). I used to run [email protected] because of power limits and thermal throttling, but now so far so good.


Are you hitting a CPU bottleneck with DLSS on Auto? I run the same preset as you, with pretty much the same overclock and temps on the GPU. Your CPU may be causing the lower minimums. What kind of load does your 8700K see?

I ran RT Ultra and set DLSS to "Performance/High Performance" and it pushes my CPU to craziness.


----------



## J7SC

...running 4K / high textures / RTX ultra / DLSS balanced...starting to love Cyberpunk 2077's Night city in RTX glory


----------



## tps3443

J7SC said:


> ...running 4K / high textures / RTX ultra / DLSS balanced...starting to love Cyberpunk 2077's Night city in RTX glory
> 
> View attachment 2469251
> 
> 
> View attachment 2469252
> 
> 
> View attachment 2469253
> 
> 
> View attachment 2469254
> 
> 
> View attachment 2469255
> 
> 
> View attachment 2469256
> 
> 
> View attachment 2469257
> 
> 
> View attachment 2469258


I wish they would've implemented the Vulkan API so SLI would be supported. Anyway, that's got to be tough to run: 4K Ultra RT with Balanced DLSS.


----------



## BigMack70

Just put my 2080 Ti into my backup rig... curious to see how well it can run something like cyberpunk paired with a 5930k at 1080p. I'm guessing it's going to make me want to upgrade that CPU to a 5600X or 10600K.

First time I've ever kept my old GPU after an upgrade. I thought I'd sell it to help offset the cost of my 3090, but instead decided to toss it in for the old 1060 in the secondary. I figure it should last the duration of this console generation at 1080p. Turns out the 2080 Ti was a much better purchase than it appeared back at launch. I think it will prove to be the entry level needed to enjoy RTX features in a couple of years.


----------



## tps3443

BigMack70 said:


> Just put my 2080 Ti into my backup rig... curious to see how well it can run something like cyberpunk paired with a 5930k at 1080p. I'm guessing it's going to make me want to upgrade that CPU to a 5600X or 10600K.
> 
> First time I've ever kept my old GPU after an upgrade. Thought I'd sell it and help offset the cost of my 3090, but instead just decided to toss it in for the old 1060 in the secondary. Figure it should last the duration of this console generation @ 1080p. Turns out the 2080 Ti was a much better purchase than I think it appeared back at its launch. I think it will prove to be the entry-level needed to enjoy RTX features in a couple years.


Yeah, the 2080 Ti has held up well. Everyone on YouTube and Reddit seems to think we were beta testers, though, and that the 2080 Ti is obsolete: simply slotting in an RTX 3070 will achieve the exact same performance you've already been getting.


----------



## J7SC

...there's also the prior-use case for 2080 Ti owners: I already had two years of work + play use from my 2080 Tis. On top of that, even one doesn't have too much trouble with Cyberpunk 2077 (per the 4K RTX DLSS settings above). Without RTX (+ DLSS), the game doesn't look nearly as good.


----------



## BigMack70

tps3443 said:


> Yeah the 2080Ti has held up well. Everyone on YouTube and reddit seems to think we were Beta testers though. And a 2080Ti is obsolete, simply slotting a RTX3070 will achieve the exact same performance that you’ve already been getting.


IMO this is driven by two things:
1) Most people think anyone buying a GPU that's much over $500 is an elitist snob worthy of derision who is just a fool easily parted from his money

2) Most reviewers etc. don't compare cards at their potential, i.e. OC vs OC.

The 2080 Ti holds up well in part because it overclocks far better than Ampere does. I'd argue, for example, that a 3080 isn't even really an upgrade over a 2080 Ti at resolutions below 4k, if you have the 2080 Ti under water. It just responds much better to cooling and basic overclocking than the 3080 or 3090 does. You have to shove 600 watts through Ampere to get it to do anything meaningful over its stock performance in games, while the 2080 Ti just sings if you do nothing more than slap a water block (or heck, an AIO) on it.

And I don't trust 8GB of VRAM on the 3070, even at 1080p.

I do agree that everyone buying Turing or Ampere is a beta tester for ray tracing, however. I'm just happy to do that since it looks so impressive in the titles that do it well. It's clearly the next frontier of graphics, and is required for "next gen" visuals. I think pure rasterization has been pushed about as far as it will go.

Ultimately, I kept my 2080 Ti because there was no $200-300 upgrade for my 1060. I've always tried to have a flagship setup on my main rig, if I can afford it, and a GPU as close to $200 as possible in the secondary, but I don't see a meaningful $200 upgrade coming anytime soon. I don't want to replace a Pascal card with another GPU that lacks ray tracing support, but it's obvious that we're somewhere between 1-3 GPU architectures away from decent ray tracing performance hitting the market in a $200 GPU.

The only downside is that now my secondary rig is terribly unbalanced. A PC with a 5930K and 2080 Ti hooked up to a 1080p/60Hz monitor... yikes. I can already feel the hole burning in my wallet for a better mobo/CPU and monitor to go with it, even though it's basically a family/kids PC.


----------



## tps3443

Cyberpunk 2077


Some woman was leaning over a car talking to the driver. I killed her with the katana while she was outside of the vehicle, but the car's window was rolled down.. so it splattered her blood everywhere inside the vehicle. Like everywhere!!! So, I have been driving the car a while now, doing a few side missions. My interior is still full of that hooker's blood haha.


----------



## J7SC

I find the CorpoRat visuals the most intriguing (though maybe not that particular storyline). So far, 4K / max textures / RTX full / DLSS balanced is what I like the most, with 2160 MHz / 2145 MHz. It would be great if NVL/SLI would work; perhaps someone will come up with a solution for that, as Cyberpunk 2077 has a huge installed base already...



Spoiler

(screenshots)
@BigMack70 ...if your 5930K can OC a bit and you have decent system RAM and an SSD, all you really need is to hook it up to a 4K TV (if available). At 4K, the CPU matters less than the GPU. When I first got my 2080 Tis, I tested each on an OC'ed 6700K / 4.9 giggles test bench connected to a 4K TV... worked well enough.


----------



## gfunkernaught

tps3443 said:


> Are you hitting cpu bottleneck with DLSS on Auto? I run the same preset as you. With pretty much the same overclock and temps on the GPU. Your CPU may be causing the lower minimums. What kind of load does your 8700K see?
> 
> I ran RT Ultra and set DLSS to “Performance/high performance” and it pushes all of my CPU to craziness.


I didn't pay attention to my CPU load while I tried DLSS Auto. I stopped using Auto because the quality was too low for my liking. The lows were even lower when I ran 4K with DLSS Balanced, so I settled on 1440p and DLSS Quality. With this setting, I see my CPU load at 40-50% on all cores, but randomly. The GPU load goes between 80-99%. I could try 4K again, but a minimum fps in the mid 30s is just too low, especially when using a mouse.


----------



## Imprezzion

I'm seeing a flat 99% usage on the GPU at 1080p, RTX Psycho, DLSS Quality. CPU (10900KF @ 5.1) runs 30-40% on all threads.

I mean, the FPS is "good enough" and it looks sick, but aiming with a mouse is... less smooth than I'd want. But yeah, eye candy.. hehe.


----------



## BigMack70

J7SC said:


> @BigMack70 ...if your 5930K can oc a bit and you have decent system RAM and SSD, all you really need is to hook it up to a 4K tv (if available). At 4K, the CPU matters less than the GPU. When I first got my 2080 Tis, I tested each on an oc'ed 6700K / 4.9 giggles test-bench connected to a 4K tv...worked well enough


Running my 5930K at 4.2 GHz. It can do 4.5 but needs a ton of voltage, and I don't think the performance is worth it on a backup rig. Sadly I only own one 4K TV (LG C9) and that's what the main rig (9900KS / 3090) is on.

But I am tempted to save for a second TV instead of a new monitor... image quality on monitors for content consumption is way behind TVs, and any new TV would be 4K.


----------



## tps3443

BigMack70 said:


> Running my 5930k at 4.2 GHz. It can do 4.5 but needs a ton of voltage and I don't think the performance is worth it on a backup rig. Sadly only own one 4k TV (LG C9) and that's what the main rig (9900ks / 3090) is on.
> 
> But I am tempted to save for a second TV instead of a new monitor... Image quality on monitors for content consumption is way behind TVs, and any new TV would be 4k.


I personally like using a monitor, and the pixels per inch help image quality. Isn't 27-inch 1440p equivalent to 55-inch 4K or something like that, right?


----------



## tps3443

Imprezzion said:


> I'm seeing flat 99% usage on the GPU @ 1080p RTX Psycho DLSS Quality. CPU (10900KF @ 5.1) runs 30-40% all threads.
> 
> I mean, the FPS is "good enough" and it looks sick but aiming with a mouse is... Less smooth then I'd want but yeah, eye candy.. hehe.


Set the Ultra preset, turn ray tracing off, set DLSS to Balanced. I seem to get about 110-145fps like this at 2560x1440. The image quality is still good, and it runs very, very smooth!

Since you are running just 1080p, you could also set the RT Ultra preset and set DLSS to Balanced.

Balanced DLSS and Auto DLSS are the same thing, and they look amazing to me. Almost identical to Quality DLSS.

Or better yet, just run Ultra with RT off and DLSS off.

You've got several options for 90-100FPS or more that'll look very good.


I really like ray tracing, so Balanced DLSS gets me that 65FPS average using the Ultra RT preset.

Quality looks better, but only a little, and I see those dips to the low 50s.


----------



## J7SC

tps3443 said:


> I personally like using a monitor and the pixels per inch help image quality. Isn’t 1440P by 27 inches equivalent to 4K 55” or something right?


I use both a 40-inch 4K monitor and a 55-inch 4K IPS TV, and at the end of the day viewing distance is probably key, as long as either has a comparable & decent response time, colour etc. With DLSS, a TV might even 'improve' visual appearance relative to its pixel count per square inch.


----------



## Imprezzion

tps3443 said:


> Set Ultra Pre-set, turn Raytracing off, set DLSS to Balanced. I seem to get about 110-145fps like this at 2560x1440P. The image quality is still good. and it runs very very smooth!
> 
> Since you are running just 1080P, you could also set RT Ultra pre-set and set DLSS to balanced.
> 
> Balanced DLSS or “Auto DLSS are the same thing. And they look amazing to me. Almost identical to quality DLSS.
> 
> Or better yet, just run Ultra with RT off and DLSS off.
> 
> You’ve got several options for 90-100FPS or more that’ll look very good.
> 
> 
> I really like Raytracing, so balanced DLSS provides me that 65FPS average using the Ultra/RT preset.
> 
> Quality looks better but only a little better, and I see those dips to the low 50’s


I did do that for a while, everything maxed with no RT, and it's buttery smooth at 70-100FPS, but yeah, I missed my eye candy lol.

I set my in-game sensitivity one tick higher to compensate for the sluggish aim and I'm faring quite alright now with my trusty pistol / SMG hehe.

Just been pumping out side missions now to get some levels and cash and enjoy the scenery.

I did run into some weirdness though. In some indoor areas the RT reflections on windows and glass doors look really strange and clippy.. disabling RT reflections fixes it..

I tried a full re-install of the game, the 460.xx drivers and the 456.98 hotfix with a full DDU wipe, all that stuff, but it didn't make a difference.

What I also noticed is that 456.98 actually runs noticeably better than the game-ready 460.xx driver lol.


----------



## J7SC

Imprezzion said:


> I did do that for a while, everything maxed with no RT and it's buttery smooth 70-100FPS then but yeah, I missed my eye candy lol.
> (...)
> What I also noticed is that 456.98 actually runs noticably better then the game ready 460.xx driver lol.


...haven't tried 456.98 yet; in your experience, was it better or worse re. RTX visual quality than 460.xx? Also, I've been doing a bit of roaming while still progressing through all three segments... roaming (including flying via air taxi) is what I really want, rather than 'shooting' people. My frame rates approximately match where my 4K monitor is (w/o vsync); the only real difficulty was trying to drive that car in 'Nomad' with a keyboard and mouse... btw, best RTX and other eye candy, imo, is in Corpo Rat.


----------



## Krzych04650

tps3443 said:


> I personally like using a monitor and the pixels per inch help image quality. Isn’t 1440P by 27 inches equivalent to 4K 55” or something right?


55" 4K has 80 PPI, same as 27" 1080p, not 1440p. 27" 1440p is way sharper than that. That's the thing with these big screens: you are pushing 4K resolution just to get the PPI of a 1080p monitor. But the image quality of OLED is in a class of its own. I have an LG 55B7 TV for movies and an LG 38UC99 monitor for PC; both have their merits and disadvantages, and both are a big compromise vs the other in different categories, so there is no good way of going about this. It is down to personal preference, and it is probably best to have both a TV and a monitor, at least for now. From what I can tell, something that could merge the qualities of both into one is not even remotely close to releasing. Well, it could be released already, but why do something ambitious if you can milk some old stuff for a decade.
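The PPI numbers above are easy to check; a quick sketch (plain Python, using only the display sizes mentioned in this thread):

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch of a display, from resolution and diagonal size in inches."""
    # Pixel count along the diagonal divided by the physical diagonal length.
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(3840, 2160, 55)))   # 55" 4K    -> 80
print(round(ppi(1920, 1080, 27)))   # 27" 1080p -> 82 (about the same as the 55" TV)
print(round(ppi(2560, 1440, 27)))   # 27" 1440p -> 109 (way sharper)
```

As claimed, a 55" 4K panel and a 27" 1080p monitor land at essentially the same pixel density, while 27" 1440p is roughly a third denser.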


----------



## Imprezzion

J7SC said:


> ...haven't tried 456.98 yet - in your experience, was it better or worse re. RTX visual quality than 460.xx ? Also, I've been doing a bit roaming while still progressing through all three segments...roaming (including flying per air taxi) is what I really want, rather than 'shooting' people. My frame rates approximately match where my 4K monitor is (w/o vsync) - the only real difficulty was trying to drive that car in 'Nomad' with a keyboard and mouse...btw, best RTX and other eye-candy, imo, is in Corpo Rat


Haven't noticed a visual difference. What I did was make a quick save at a specific spot and walk a certain route, logging FPS standing still directly after loading the save and while walking around.

460.xx had 77FPS standing still at that spot; 456.98 had 81FPS in the exact same spot, and it felt smoother while walking the same route. Minimum FPS also seemed higher walking around.

Second thing: 460.xx performs way, way worse in other games I play, Division 2 for example. This is confirmed in the Nvidia reddit driver discussion / benchmark thread as well.
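Driver A/B runs like this are easier to compare from a frametime log than from eyeballing an overlay. A minimal sketch (Python; the input format is an assumption, just milliseconds per frame as tools like Afterburner or CapFrameX can export):

```python
from statistics import mean

def fps_stats(frametimes_ms):
    """Summarize a frametime log (ms per frame) into the two numbers
    usually compared between drivers: average FPS and 1% low FPS."""
    fps = [1000.0 / ft for ft in frametimes_ms]
    slowest = sorted(fps)                               # ascending: worst frames first
    one_percent = slowest[: max(1, len(slowest) // 100)]
    return round(mean(fps), 1), round(mean(one_percent), 1)

# Toy run: mostly ~13 ms frames (~77 FPS) with two 25 ms hitches
sample = [13.0] * 98 + [25.0, 25.0]
print(fps_stats(sample))   # (76.2, 40.0)
```

Running the same route on each driver and comparing the 1% lows captures the "felt smoother" part that averages hide.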


----------



## gfunkernaught

Imprezzion said:


> I'm seeing flat 99% usage on the GPU @ 1080p RTX Psycho DLSS Quality. CPU (10900KF @ 5.1) runs 30-40% all threads.
> 
> I mean, the FPS is "good enough" and it looks sick but aiming with a mouse is... Less smooth then I'd want but yeah, eye candy.. hehe.


Go into the game's profile in the Nvidia Control Panel and set Low Latency Mode to Ultra and V-Sync to enabled, but make sure you enable those settings on the Cyberpunk 2077 profile. Then disable the in-game v-sync. Also go into Controls and enable Advanced settings. There's a curve response setting or something (I forget what it's called; it's revealed once you enable Advanced Settings). Set that to "Raw". Mouse latency is much improved.

I've been running the game with the Psycho RT setting and still getting good performance. Everything else is maxed, res is 2560x1440, DLSS set to Quality.

Has anyone else noticed, in games that use ray tracing (Cyberpunk, Control, Quake 2 RTX, and others), the noise delay or "ghosting" effect on moving objects? When an object moves across a scene, it leaves a shadow trail of noise. I'm guessing those are the "rays" and the entry-level ray tracing capability of the 2080 Ti, since it was the first GPU to accelerate it. Now I see why 1440p is very popular when using RT: the lower pixel density masks this "noise" much better. At 4K the noise pattern becomes more apparent and actually annoying at times, because it is so obvious.

I have a new-found appreciation for Turing and the boatload of money I spent on it. DLSS quality will only improve from here, so expect a sharper image in Cyberpunk in future updates. The same thing happened with Control: at launch, Control had poor DLSS quality, then it got updated, and now the game looks like native 4K most of the time.


----------



## J7SC

Imprezzion said:


> Haven't noticed a visual difference. What I did is make a quick save at a specific spot and walked a certain route logging FPS standing still directly after loading the save and while walking around.
> 
> 460.xx had 77FPS standing still at that spot, 456.98 had 81FPS in the exact same spot and it felt smoother while walking the same route. Minimum FPS also seemed higher walking around.
> 
> Second thing is, 460.xx performs way way worse in other games I play like Division 2 for example. This is confirmed on Nvidia reddit driver discussion / benchmark thread as well.


Thanks. I'll try 456.98 and check it out on my system. THW also had a 'mod' for AMD CPUs I should try out (re. core/thread usage vs Intel, naughty...). And I'm still frustrated that NVL/SLI won't work, but another game that has SLI and uses the same engine is supposed to get RTX & DLSS. At that point, I might try playing around with updated NVInspector profiles.


----------



## sultanofswing

While it's an outdated record, so to speak, I finally took 1st place in Time Spy Extreme for a single 2080 Ti and a 10940X.
The KPE 2080 Ti drew 600 watts on that run. All done on water cooling with the ambient at 21C.
The CPU still has lots more headroom; it should do 5GHz all-core pretty easily.


https://www.3dmark.com/spy/16326400



The KPE took 1.3V like a champ on room-ambient water and still only hit 40C.


----------



## tps3443

sultanofswing said:


> While it's an outdated record per say I finally took 1st place in Time Spy Extreme for a single 2080ti, and a 10940x.
> KPE 2080ti drew 600 watts on that run. All done on watercooling with the Ambient at 21c.
> CPU still has lots more headroom, should do 5ghz all core pretty easy.
> 
> 
> https://www.3dmark.com/spy/16326400
> 
> 
> 
> KPE took 1.3v like a champ on room ambient water and still only hit 40c.


Nice


Best I’ve done is about 8,270 graphics score with my reference 2080 Ti and 7980XE in Time Spy Extreme. Not sure what my CPU score was, but I run 4.8GHz 24/7. It is lapped and delidded.

I would love to play with a real Kingpin 2080 Ti though. I still may grab one, or the Galax HOF WC edition if I could find one: the one that comes with no cooler mounted in the box ("bare PCB").

I don’t think my silicon is that good on my 2080 Ti. It’s decent, that’s all.

I am forced to run 2,100-2,115MHz in Cyberpunk; anything more causes crashing no matter the voltage or power dumped in. The card runs 39-40C all the time. The game is sensitive to boost frequencies.

Cyberpunk has practically told me that high frequencies are for benchmarking, unless I get a water chiller.

I find it amazing how 2080 Ti "TU102A" silicon can respond so differently across quality variations. I see guys running Cyberpunk at 2,130MHz sustained at 48-49C, but my TU102 cannot run that even at 40C.

From what I understand, the Kingpin has pretty well-binned silicon.


----------



## gfunkernaught

Looks like I spoke too soon about running [email protected] I was playing Metro Exodus and kept running into the power limit; the clock started out at 2130MHz, then settled at 2115MHz when the power limit wasn't triggered. I think I will probably stick to [email protected] since that is the only v/f point where the clock and voltage never change no matter what game I play. The only exception is, of course, GTest2 in Time Spy Extreme.


----------



## sultanofswing

tps3443 said:


> Nice
> 
> 
> Best I’ve done is about 8,270 graphics score with my reference 2080Ti and 7980xe On timespy extreme. Not sure what my cpu score was, but I run 4.8Ghz 24/7. It is lapped and delidded.
> 
> I would love to play with a Real kingpin 2080Ti though. I still may grab one. Or the Galax HOF WC edition if I could find one. The one that comes with no cooler mounted in the box “Bare PCB”
> 
> I don’t think my silicon is that good on my 2080Ti. It’s decent that’s all though.
> 
> I am forced to run 2,100- 2,115Mhz in Cyberpunk anymore causes crashing no matter the voltage or power dumped. Card runs 39-40C all the time. The game is sensitive to boost frequencies.
> 
> Cyberpunk has practically told me that high frequencies are for benchmarking, or unless I get a water chiller.
> 
> I find it amazing how 2080Ti “TU102A” silicon can respond so differently with different variation in quality. I see guys running Cyberpunk at 2,130Mhz sustained at 48-49C. but my TU102 cannot run that even with 40C temps.
> 
> From what I understand the kingpin has pretty well binned silicon.


My Kingpin does not like anything over 2160MHz above 39C. I can brute-force it with voltage, but once I cross that 39C threshold stability decreases rapidly.


----------



## geriatricpollywog

My 2080 Ti Kingpin is 100% stable at 2145-2160 depending on ambient temps, but anything over 2130 crashes Cyberpunk.


----------



## tps3443

geriatricpollywog said:


> My 2080ti Kingpin is 100% stable at 2145-2160 depending on ambient temps, but anything over 2130 crashes Cyberpunk.


So it's not just me then. My reference 2080 Ti on water usually runs 2,145MHz just fine in games. I can even force 2,160MHz to run stable in just about anything, really. But Cyberpunk likes to run at 2,115MHz usually.

2,100MHz is the bare-minimum boost.

I think it has something to do with the heavy ray tracing and DLSS being used. Maybe it affects our overclocks differently.


----------



## sultanofswing

I've played Cyberpunk at 2130 for a few hours; didn't try to push any higher. I play at 4K, and CP77 doesn't seem to make my card any hotter than any other game.


----------



## geriatricpollywog

tps3443 said:


> So its not just me then. My reference 2080Ti on water usually runs 2,145 mhz just fine in games. Or I can even force 2,160Mhz to run stable in just about anything really. But, Cyberpunk likes to run 2,115Mhz usually.
> 
> 2,100Mhz is bare minimum boost.
> 
> I think it has something to do with the heavy ray tracing and DLSS being used. Maybe it affects our overclocks differently.


I think it’s the RT cores. This is my first ray-tracing game. Power consumption is normally 270-300W, but Cyberpunk uses 300-320W. I can force 2200+ by turning off temperature protection and adding NVVD, but I start to see artifacts at 2160.


----------



## gfunkernaught

In case anyone has tried (or wants to try) the Galax HOF 2000W BIOS and saw someone suggest using the Galax HOF AI tuning software because it supposedly doesn't crash like Afterburner: it does. It crashes/BSODs when applying a "preset" to the card at startup. I believe this is a bug with this BIOS running on a reference card. The only way to use it is to manually apply an offset AFTER starting the program, whether that's Afterburner or HOF AI; it doesn't matter. Unless someone has a suggestion. Shame, really. I even tried running it as a service: crash.


----------



## J7SC

For my setup, CP '77 typically runs a degree or two hotter than other games and sims. It often starts out at 2160 MHz, but it usually settles back to 2145 MHz at 1.04x v, even with well-controlled temps, per the spoiler (the dedicated GPU loop is built for 2x 2080 Tis, but one will just idle in CP '77, for now). vBIOS is the stock one, up to 380W.



Spoiler


----------



## sultanofswing

gfunkernaught said:


> In case anyone has been trying out or wants to try out the Galax HOF 2000W BIOS and saw someone suggest that you use the Galaxy HOF AI tuning software because it doesn't crash like Afterburner, it does. It crashes/BSOD's when applying a "preset" onto the card at startup. I believe this is a bug with these BIOS running on a reference card. The only way to use these BIOS is to manually apply an offset AFTER starting the program, whether it be Afterburner or HOF AI. Doesn't matter. Unless someone has a suggestion. Shame really. I even tried running it as a service, crash.


It's well known that the available HOF BIOS causes those BSODs. There is a newer BIOS version that fixes the crashes/BSODs, but sadly no one is sharing it.


----------



## J7SC

Have you folks tried 'RTX Psycho' mode yet? I normally don't load GeForce Experience, but apparently that's the way to get to RTX Psycho mode. For now, I play 4K high textures / RTX max / DLSS balanced, and wonder if 'RTX Psycho' would even be worth it re. fps, after finding the vid below on YT.





 
*EDIT* - not sure if this vid by Digital Foundry has already been shared, but apart from being a great setup guide, it also touches on a few 'Psycho' settings


----------



## tps3443

J7SC said:


> Have you folks tried 'RTX Psycho' mode yet ? I normally don't load Geforce Experience, but apparently that's the way to get to the RTX Psycho mode. For now, I play 4K high textures / RTX max / DLSS balanced, and wonder if 'RTX Psycho' would be even worth it re. fps, after finding the vid below on YT
> 
> 
> 
> 
> 
> 
> *EDIT *- not sure if this vid by Digital Foundry has already been shared, but apart from a great setup guide, it also touches on a few 'Psycho' settings


I have tried Psycho ray tracing. It’s very demanding!
Also, running Psycho global illumination only works when you disable ray tracing altogether.

Psycho RT is just very heavy. You’d have to move to Performance or even High Performance DLSS to offset the demand of Psycho RT.

Running the Ultra RT preset with Quality DLSS at 2560x1440 is demanding enough. I manage between 48-62FPS; the average is about 56-57FPS.

I would stick to Ultra RT. I don’t see any difference with Psycho on.


----------



## Imprezzion

Psycho RT only applies to ambient light reflections: sun and moon, stuff like that. It does make a noticeable difference outdoors if you stand still and toggle it.

I can sort of run it at 1080p with Quality DLSS, but minimums drop into the 40s and that isn't smooth enough anymore.

I can't run DLSS any more aggressive than Quality at 1080p, as that really heavily impacts visuals since my starting res is already so low, so I'll stick to Ultra RT and DLSS Quality. It runs anywhere from 50-75 FPS, which is fine.


----------



## J7SC

I currently play CP '77 on a 40-inch 4K monitor (& probably sit too close) which runs 4K @ 60 Hz but can OC to about 72 Hz... depending on whether I'm indoors or outdoors in CP '77, my settings usually net between the mid 50s and high 60s (sometimes even low 70s). Until an NVL/SLI profile can be found, I would be happy with a consistent 55 (60 capped) fps at 4K / high textures / RTX max (though not Psycho) / DLSS balanced. Working on applying a few AMD-CPU-specific hex fixes, then I'll try the same settings but with DLSS quality /+


----------



## BigMack70

What kind of Cyberpunk frames do you guys get driving through the middle of the city with everything completely maxed out at 1080p / Quality DLSS? Can you get a fully locked 60 or better?

Looks like I'm in the 50-60fps range in that scenario, and I'm wondering how much of that is a CPU bottleneck from my 5930K @ 4.3.


----------



## kithylin

BigMack70 said:


> What kind of cyberpunk frames do you guys get driving through the middle of the city with everything completely maxed out at 1080p / quality DLSS? Can you get a fully locked 60 or better?
> 
> Looks like I'm in the 50-60fps range in that scenario, and wondering how much of that is a CPU bottleneck from my 5930k @ 4.3


Is your CPU at 100% load on all cores and threads while gaming? Does your CPU have one single thread at 100% while the rest are not loaded that high? If you answer yes to either, then your CPU is the bottleneck. Also, since you're using an Intel CPU, you should try the various CPU tweaks posted on the internet for CP2077 that optimize the game further for Intel systems. There's one that disables AVX, which might help if your CPU is being overloaded in the short term.

Are you watching task manager / CPU usage while gaming? If not you should be.
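For anyone who'd rather log this than stare at Task Manager, here's a rough sketch of the same check in Python, using the third-party `psutil` package (my assumption; any per-core monitor works the same way):

```python
# Sample per-core CPU usage while the game runs and flag a likely
# single-thread bottleneck: one core pegged, the rest mostly idle.

def sample_cores(interval_s=1.0):
    # Assumes the third-party `psutil` package (pip install psutil).
    import psutil
    return psutil.cpu_percent(interval=interval_s, percpu=True)

def looks_single_thread_bound(per_core, hot=95.0, cool=60.0):
    """True if the hottest core is pegged while the others average well below it."""
    hottest = max(per_core)
    others = list(per_core)
    others.remove(hottest)
    if not others:
        return False
    return hottest >= hot and sum(others) / len(others) < cool
```

If this keeps returning True while the game runs, the game is likely leaning on one main thread, which is the classic symptom people report on older CPUs.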


----------



## J7SC

...can't say much about 1080p fps as I have never run it (well, DLSS might have  ), but there's an addendum on processor core/thread use if you're running AMD (like my 2950X TR): THW describes the problem and the hex-fix solution *here* ...perhaps similar fixes will turn up for older Intel, i.e. the 5930K etc.


----------



## BigMack70

kithylin said:


> Is your CPU at 100% load on all cores and threads while gaming? Does your CPU have one single thread at 100% while the rest are not loaded that high? If you answer yes to either, then your CPU is the bottleneck. Also, since you're using an Intel CPU, you should try the various CPU tweaks posted on the internet for CP2077 that optimize the game further for Intel systems. There's one that disables AVX, which might help if your CPU is being overloaded in the short term.
> 
> Are you watching task manager / CPU usage while gaming? If not you should be.


That's not usually how CPU bottlenecks work in gaming, not on HT CPUs anyway. It's pretty common for a CPU limit to look like anywhere from 70-90% usage, without ever showing a single thread as pegged at 100%. Games just aren't that efficient at loading CPU threads.


----------



## J7SC

...fyi, another fairly exhaustive CP '77 benchmark comp PLUS useful setup tips *here*


----------



## gfunkernaught

Since Cyberpunk isn't a fast-paced shooter, 45-60 FPS using Psycho RT at 1440p and Quality DLSS is great, for me at least.


----------



## gfunkernaught

sultanofswing said:


> It's well known about the BSOD with the available HOF BIOS. There is a newer BIOS Version that fixes those crashes/BSOD's but sadly no one is sharing it.


How do you know there is a newer version if no one is sharing it? Couldn't have been posted on Galax's website. Was it announced somewhere and only an elite few get a copy attached to an NDA?


----------



## J7SC

FPS on the 2080 Ti is decent enough, especially with a cool-running card that holds OC clocks, though I would like to see a bit more overall consistency...I just wish I could get my Logitech Extreme 3D Pro working in CP '77 (re. driving, etc). I inadvertently ran over a few people in Night City whose buddies didn't take too kindly to that...


----------



## pewpewlazer

BigMack70 said:


> What kind of cyberpunk frames do you guys get driving through the middle of the city with everything completely maxed out at 1080p / quality DLSS? Can you get a fully locked 60 or better?
> 
> Looks like I'm in the 50-60fps range in that scenario, and wondering how much of that is a CPU bottleneck from my 5930k @ 4.3


I seem to hover in the 50ish fps range standing outside in the city. Changing graphics settings doesn't seem to do much of anything. I was wondering if it was a CPU bottleneck, but going from DLSS Balanced up to Performance gives me a nice boost, so I guess not. Does going to Balanced or Performance help you in those situations?



gfunkernaught said:


> How do you know there is a newer version if no one is sharing it? Couldn't have been posted on Galax's website. Was it announced somewhere and only an elite few get a copy attached to an NDA?


It's been talked about on multiple occasions that there's a "fixed" version that no one will post. The guy who started this thread "zhrooms" fooled me into wasting my time flashing the HOF XOC BIOS after his questionable post about Nvidia "neutering" Turing via power limits. He swore up and down the HOF XOC BIOS works with afterburner and doesn't BSOD... turns out that's because he has the "special" version that hasn't been publicly posted.


----------



## gfunkernaught

pewpewlazer said:


> I seem to hover in the 50ish fps range standing outside in the city. Changing graphics settings doesn't seem to do much of anything. I was wondering if it was a CPU bottleneck, but going from DLSS Balanced up to Performance gives me a nice boost, so I guess not. Does going to Balanced or Performance help you in those situations?
> 
> 
> 
> It's been talked about on multiple occasions that there's a "fixed" version that no one will post. The guy who started this thread "zhrooms" fooled me into wasting my time flashing the HOF XOC BIOS after his questionable post about Nvidia "neutering" Turing via power limits. He swore up and down the HOF XOC BIOS works with afterburner and doesn't BSOD... turns out that's because he has the "special" version that hasn't been publicly posted.


Hmm...mind if I put on my aluminum hat for a sec? K...
Nvidia wants people to brick their 2080 Tis, forcing an upgrade...no wait, that could open the door to losing custies to big red. Hmmm...

OK, hat's off. Surely these private BIOSes have NDAs attached, as did those MSI XOC BIOSes (fact-check me on that, but I do remember reading about special XOC BIOSes that only pro overclockers get from vendors). That being said, they could breach that contract just by talking about them. I guess it falls upon the vendor to hire leak-seekers to enforce the NDA.

Looks like the KFA2/Galax 380W BIOS is the best for reference cards. My issue with these BIOSes is that too high a voltage means slamming into the power limit. It's weird though, because in Cyberpunk, [email protected] never slams the power limit. Metro Exodus, however, does, and when it does, the boost algorithm does some weird downclocking, like [email protected], the lowest I've seen it go. If I do [email protected] or 1050mV, I don't see power limits, but I do see thermal throttling, which ends up at [email protected]; I think that's too much voltage for that clock, when I know for a fact that 1032mV is perfect for it. Is it possible to set multiple points on the curve, so that when a power-limit trigger hits, the boost algorithm adheres to the custom points? Have any experience with that?
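On why a small voltage drop buys so much power headroom: to a first approximation, dynamic power scales with V²·f. A toy sketch of that scaling (the voltages are the ones discussed above; the clock numbers here are hypothetical, picked only to illustrate the math):

```python
# First-order CMOS dynamic-power approximation: P is proportional to V^2 * f.
# Illustrative only -- real cards add leakage, VRAM and VRM losses on top.

def relative_power(v_new, f_new_mhz, v_ref, f_ref_mhz):
    """Power at (v_new, f_new) relative to a reference operating point."""
    return (v_new ** 2 * f_new_mhz) / (v_ref ** 2 * f_ref_mhz)

# e.g. dropping from 1.093V to 1.032V with a hypothetical 2100 -> 2040 MHz clock:
print(f"{relative_power(1.032, 2040, 1.093, 2100):.0%}")  # roughly 87%
```

So a ~6% voltage cut plus a ~3% clock cut takes roughly 13% off the dynamic power, which is often the difference between riding the limiter and staying under it.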


----------



## BigMack70

pewpewlazer said:


> I seem to hover in the 50ish fps range standing outside in the city. Changing graphics settings doesn't seem to do much of anything. I was wondering if it was a CPU bottleneck, but going from DLSS Balanced up to Performance gives me a nice boost, so I guess not. Does going to Balanced or Performance help you in those situations?


Sort of. It looks like my CPU limit is around ~70fps average. Or at least, that's about the framerate in the middle of the city when I run at 720p with Ultra Performance DLSS. Still hard to tell if I'd get a meaningful performance uplift @ 1080p with a better CPU. From what I'm seeing though, the 2080 Ti just isn't fast enough for max (Psycho) settings @ 1080p Quality DLSS with a 60fps lock, so even if I'm a bit off, I'm not way off.


----------



## pewpewlazer

gfunkernaught said:


> Ok hat's off. Surely these private bios have NDA's attached, as did those MSI XOC bios (fact check me on that but I do remember reading about special xoc bios that only pro oc'ers get from vendors). That being said, they could breach that contract just by talking about them. I guess it falls upon the vendor to hire leak-seekers to enforce the NDA.


Yes, but the existing HOF XOC BIOS that doesn't work, and the completely useless Asus "XOC" BIOS, were almost certainly leaked under NDA as well...

With Ampere, there's not even a "non-disclosure agreement" really. There's just a "don't let anyone have this, but please disclose its existence" agreement.

See: Steve "gamersnexus" and Jay "jayz2worthlesscents" on YouTube, openly bragging about how EVGA gave them BIOSes without power limits that they can't share with the actual consumers paying their hard-earned dollars for the cards.



gfunkernaught said:


> Looks like that the KFA/Galax 380W bios are the best for reference cards. My issue with these bios is that too high a voltage means slamming into the power limit. Its weird though because in Cyberpunk, [email protected] never slams the power limit. Metro Exodus, however, does, and when it does, the boost algo's do some weird downclocking. Like [email protected], the lowest I've seen it go. If I do [email protected] or 1050mv, I don't see power limits but I do see thermal throttling, which ends up at [email protected], which I think its too much voltage for that clock when I know for a fact that 1032mv is perfect for that clock. Is it possible to set multiple points on the curve? So that in the event of a power limit trigger, the boost algo will adhere to the custom points set? Have any experience with that?


The GALAX/KFA2 BIOS has always been the best choice for daily use. I've seen claims that the FE BIOS is faster, or the Aorus BIOS, or the FTW3 BIOS, but I've tested all of them, with and without shunt-modding my card, and found no conclusive difference in performance.

As far as power limits go, some of these newer titles are extremely confusing. Games like Metro (as you mentioned) or Control will plow well beyond 400W at 1.025V+ with RTX enabled. But meanwhile, Watch Dogs Legion barely cracks 300W at the same clocks with RTX enabled. CP2077 barely hits 380W at 3440x1440. Maybe it would at 4K?

Turn off RTX and things get even wackier. Time Spy Extreme can pull 500W, but games like AC Valhalla barely hit 300W at the same settings...


----------



## tps3443

For Cyberpunk 2077!


Try the GeForce Experience in-game filters, guys: ALT+F3.

Use “Sharpening”, especially with DLSS. Try about 15-25%; you will lose about 2-3 FPS, but it makes the textures really, really sharp. I was messing around with it. Makes the game look like the graphics are cranked up higher lol.

You can make balanced DLSS look like quality

Or make quality DLSS look like “No DLSS”

^ Sorta!

Sharpening does remove that soft cloudy disposition that the environment has. So a little sharpening goes a long way!


----------



## Imprezzion

tps3443 said:


> For Cyberpunk 2077!
> 
> 
> Try the geforce experience in game filters guys. ALT+F3
> 
> use “Sharpening” especially with DLSS try about 15-25% you will lose about 2-3fps but, it makes the textures really really sharp. I was messing around with it. Makes the game looks like the graphics are cranked up higher lol.
> 
> You can make balanced DLSS look like quality
> 
> Or make quality DLSS look like “No DLSS”
> 
> ^ Sorta!
> 
> Sharpening does remove that soft cloudy disposition that the environment has. So a little sharpening goes a long way!


Meh, I'll stick to trusty old ReShade for that. Way more control over the effects, and it even has a clock and FPS counter 

I don't like to run / install GeForce Experience at all. Especially not since in the early days of COD Warzone it caused crashes and such..


----------



## J7SC

Imprezzion said:


> Meh, I'll stick to trusty old reshade for that. Way more control over the effects and even has a clock and FPS counter
> 
> *I don't like to run / install GeForce Experience at all.* Especially not since in the early days of COD Warzone it caused crashes and such..


...same here, apart from the constant 'big-data vacuum' attached to GeForce Experience. One thing I started to play with this morning is crowd density, mostly because in some of the 'mall' and 'Night City streets' scenes, people walk through me and vice versa; just a bit too crowded, imo

...anyone tried the latest driver (460.89) yet ?

*EDIT*: ...saw an announcement re. a major CP '77 patch for December 21st release...wasn't entirely clear though if that will be mostly for consoles, or also affect PCs


----------



## tps3443

J7SC said:


> ...same here, apart from the constant 'big-data vacuum' attached to GeForce Experience. One thing I started to play with this morning is crowd density, mostly because in some of the 'mall' and 'Night City streets' scenes, people walk through me and vice versa; just a bit too crowded, imo
> 
> ...anyone tried the latest driver (460.89) yet ?
> 
> *EDIT*: ...saw an announcement re. a major CP '77 patch for December 21st release...wasn't entirely clear though if that will be mostly for consoles, or also affect PCs


I notice no performance loss from GeForce Experience. I normally don’t use it, but I like the recording feature; that’s the only reason I have it. And this is coming from someone who reinstalls Windows about every 2 months due to performance loss from Windows clogging up in general. I’m super OCD about peak performance. If I see 1.5% lower R20 or R15 scores and I can’t find the culprit, I just reinstall Windows.

I disable auto updates and the information-sharing options within GeForce Experience. It’s the only option to record in-game footage, right?


PS: I am using 460.89, no difference from 460.79. I also tried the 456 branch and noticed no performance benefit.


----------



## tps3443

I opened the door on my case just a little so fresh air is pulled in; temps are MUCH better now. The top rad is exhaust, so this helps bring fresh ambient air through that top rad instead of warm air from the front rad, which intakes. This is after hours and hours of gaming. I think the game has been open for like 4-5 hours now.

@J7SC

I am locked in at an absolute maximum temp of 34-35C after hours of gaming on my 2080 Ti. I do believe if I reduced my CPU power consumption to around what a Threadripper would consume, and if I lowered my GPU voltage to 1.043V like you run, my temps would actually be very comparable to your setup.




----------



## J7SC

tps3443 said:


> I opened the door on my case just a little so fresh air is pulled in, temps are a MUCH better now. The top rad is exhaust, so it helps bring fresh ambient air through that top rad, instead of warm air from the front rad which intakes. This is after hours and hours of gaming. I think the game has been open for like 4-5 hours now.
> 
> @J7SC
> 
> I am locked in at absolute maximum temp of 34-35C after hours of gaming on my 2080Ti, I do believe if I reduced my CPU power consumption to around what a Threadripper would consume, and if I lowered my GPU voltage to 1.043MV like you run, I think my temps would actually be very comparable to your setup.
> 
> 


...lower temps are always good for 2080 Tis, but I'm not clear on your logic here...my Threadripper CPU is on a completely separate loop from the 2080 Ti GPUs and doesn't impact their temps.


----------



## tps3443

J7SC said:


> ....lower temps are always good for 2080 Ti(s), but I'm not clear on your logic here...my Threadripper CPU is on a completely separate loop from the 2080 Ti GPUs and doesn't impact its / their temps.


You are totally right, I forgot. Anyway, I have realized I can run less voltage on my 2080 Ti. I was at 1.050V at 2,115MHz, which crashed after 2+ hours, so I moved to 1.062V. No difference in temps though since coming down from 1.093V. I also reduced my CPU overclock to 4.6GHz, which needs hardly any voltage, a whole lot less than what 4.8GHz needs. And still floating around this same 35C mark... I dunno, I think I’m at the limits of my water loop.


----------



## tps3443

What level are you guys in Cyberpunk 2077? I am level 13, doing strictly side missions. I have obtained a lot of cool weapons that you can’t use until level 22/28.


----------



## Imprezzion

Also 13, street cred 21. I've done some story stuff but mainly side missions yes. 

I'm using a pistol / mantis blade build using a lot of the time slowdown skills which so far on Hard is incredibly overpowered lol.


----------



## tps3443

Imprezzion said:


> Also 13, street cred 21. I've done some story stuff but mainly side missions yes.
> 
> I'm using a pistol / mantis blade build using a lot of the time slowdown skills which so far on Hard is incredibly overpowered lol.


Same here, I’m 13/21 street cred. I dunno if this helps or not, but I didn’t much like the smart guns earlier in the game and pretty much avoided them due to very low damage; I just found it difficult to kill people with one. Now I’ve started using them again, and they work so well!! I actually enjoy them now. There was a guy totally hidden behind a barricade, and I just slowly fired shot after shot and took him down in like 4-5 shots directly to the head with these auto-guided bullets, then I finished off his crew. The game is just great so far. I played for like 5-6 hours today.


----------



## KCDC

Just hit level 28 and I think 46 cred. I side-quested for a long time before digging into the main story; still tons to do. Lots of cool side quests. The story's fun, the AI is hilarious, driving is best with a bike, inventory sorting could be better. Wish the dialog would stop overlapping. Could use better-looking set gear, but maybe it gets better down the road; I look hilarious. The rest of the bugs aren't that big of a deal for me. It really is one of the best-looking RTX games I have ever witnessed, and I think worth it for the frame loss.

Also, after messing with settings a ton since release on a 4K 60Hz HDR panel, I ended up putting all settings on max with Ultra RTX and then just left DLSS on Auto, and that seems to give the best overall balance in my case once I stopped chasing frames. Once or twice I noticed a drop to near 30 in heavy cutscenes during the main quest, or where the crowd is immense, but for the most part it feels like I'm in the mid 40s to 60, where I have it capped. I get less of the DLSS upscale issues, glittering edges etc.

It seems to use Auto rather well, at least in a watercooled situation. Temps stay in the low 40s, around 2000MHz with a very light OC with RTX on; ambients around 24C. Not exactly sure what Auto is doing, probably jumping between DLSS modes based on the scene? I'd get 60s on Ultra Performance mode, but the DLSS upscaling glitter was driving me nuts after a while. Same with Performance mode.


----------



## Imprezzion

tps3443 said:


> Same Here, I’m 13/21 street credit. I dunno if this helps or not. But I didn’t much like the smart guns earlier in the game and pretty much avoided them due to very low damage, And I just found it difficult to kill people with one. Now I just started using them again, and they work so good!! I actually enjoy them now. There was a guy totally hidden behind a barricade and I just slowly fired shot after shot and took him down in like 4-5 shots directly to the head with these auto guided bullets, then I finished off his crew. The game is just great so far. I played for like 5-6 hours today.


You should really try the mantis blades with the time-slowdown skills. One of my mates, who is a much higher level (he has too much spare time... haha, level 26, street cred 50), has it built to the point he can slow down time 50% for 27 seconds, during which you can one-shot everything with the blades while remaining in stealth lol. It's hilarious to watch 😂


----------



## kairi_zeroblade

Hi guys, I would like to ask if a 2080 Ti would still be fine at this point in time..I am buying a used one, and it's a Gigabyte Aorus Extreme model to be exact..

I will be mainly using it for games..I am just confused and depressed/disappointed at our current situation with the RTX 30-series stock issues as well as the hyped-up prices..I can't get a 30-series card at a near-SRP price, nor anything that is not from the gray market with only 2 months of warranty locally..

And would you advise me to just wait for the newer ones coming out next year? Though I personally don't have high hopes, since I can't even buy one locally at a decent price, nor is the model/AIB I want even sold/listed..


----------



## iunlock

kairi_zeroblade said:


> Hi guys, I would like to ask if a 2080ti would still be fine at this point in time..I am buying a used one and its a Gigabyte Aorus Extreme model to be exact..
> 
> I will be mainly using it for games..I am just confused and depressed/disappointed at our current situation with the RTX 30 series stock issues as well the hyped up prices..I can't get a 30 series card at a near SRP price nor something that is not from the gray market that only has 2 months warranty locally..
> 
> And would you just advise me to just wait for the newer ones coming out next year, though I personally don't have high hopes since I can't even buy one locally at a decent price nor the model/AIB I want isn't even sold/listed..


Absolutely, the 2080 Ti is still an excellent choice despite the existence, or rather nonexistence, of the 30-series cards.

To keep it simple, a stock 2080 Ti is roughly equivalent to a stock 3070; however, in some titles it beats it with better fps. An overclocked 2080 Ti is even better and will provide you more than enough power for 1080p, 1440p and some 4K gaming, depending on your settings.

Then there's the price-per-performance factor and the fact that the 2080 Ti has more VRAM. Depending on what kind of deal you're getting on the 2080 Ti, even if you were to pay similar prices to some 30-series cards, there's still value in having a GPU now, as opposed to waiting several months with no guarantees.

I'd say go for it.


----------



## kairi_zeroblade

iunlock said:


> Absolutely, the 2080ti is an excellent choice still despite the existence, rather nonexistence of the 30 series cards.
> 
> To keep it simple, the 2080ti stock is ~equivalent to a stock 3070; however, in some title's it beats it with having better fps. An over clocked 2080ti is even better and will provide you more than enough power for 1080p, 1440p and some 4K gaming depending on your settings.
> 
> Then there's the price per performance factor and the fact that the 2080ti has more vram. Depending on what kind of deal you're getting on the 2080ti, even if you were to pay similar prices to some 30 series cards, there's still value in having a GPU now, as oppose to waiting several months that comes with no guarantees.
> 
> I'd say go for it.


Thanks for your reply!!

Well, since I already bought a 1440p screen, it (11GB VRAM) really sounds like enough..as to OC potential, yeah, I saw many YT vids and reviews showing that the 2080 Ti, if not power-limited and kept cool, would really go far, compared to the 30 series where normal people only see a 90-100MHz jump from stock, and even that doesn't guarantee a huge leap in performance..I am getting it for $300..

Though, does the Gigabyte Aorus Extreme have any issues?? I saw some reviewers complain about it being a turd heater despite the ginormous size of the heatsink and those 3 fans..does anybody here have that particular card??


----------



## iunlock

Wow, that's a steal. As far as the stock heatsink goes, the small complaints were due to the cold plate not making good contact with the GPU die. Repaste the card with a good-quality paste like Kryonaut and it runs great. I've done a couple of builds with that card.


----------



## kairi_zeroblade

iunlock said:


> Wow that's a steal. As far as the stock heat sink, the small complaints were due to the cold plate not having good contact with the GPU die. Repaste the card with a good quality paste like kyronaut and it runs great. I've done a couple builds with that card.


ohh..good thing I still have that paste..also, the card is registered for the 4-year warranty program from Gigabyte..

I should be fine down the road then?? But for how long?? It seems the card is still competitive in 4K..(well, that is in case I ever upgrade to one..) been back-reading, and seeing your CP2077 screens gives me the chills..hahaha


----------



## tps3443

kairi_zeroblade said:


> ohh..good thing I still have that paste..also the card is registered on the 4 year warranty program from gigabyte..
> 
> I should be fine down the road then?? but for how long?? seems the card is still competitive in 4k..(well that is if incase I ever upgrade to one..) been back reading and seeing your CP2077 screens give me the chills..hahaha


The 2080 Ti is a beast in late 2020, man. I run about 25% faster than an RTX 3070 in everything. Now, my card is overclocked and watercooled. But still, the 2080 Ti was really watered down and underclocked to begin with.

Port Royal, Time Spy graphics, built-in game benchmarks.

Let me know, I’ll show you an easy 25% gap in just about anything over an RTX 3070. Sometimes closer to 30%.

The 2080 Ti will last about as long as an RTX 3080 will running 4K games, because mine is only slightly slower than an RTX 3080 FE, around 2-7% slower. But that’s really up to you; 4K hurts performance with any GPU and requires a little optimization of the graphics settings in the more demanding titles for a smooth experience. DLSS 2.0 has saved us all with Cyberpunk 2077.


----------



## tps3443

Imprezzion said:


> You should really try the mantis blades with the time slowdown skills. One of my mates who is a much higher level (he has too much spare time... Haha level 26 street cred 50) has it build to the point he can slow down time 50% for 27 seconds in which you can one shot everything with the blades while remaining in stealth lol. It's hilarious to watch 😂


I keep hearing of people doing one-shot kills, or one-hit kills. Sometimes I wish I was able to do that. I still come across side missions quite often that are so tough to beat, I just have to stop or move on to something else, because I die like 10-15 times over and over and see no possible way. But I will be back! Lol


----------



## kairi_zeroblade

tps3443 said:


> 2080Ti is a beast in late 2020 man. I run about 25% faster than a RTX3070 in everything. Now my card is overclocked and watercooled. But still, the 2080Ti was really watered down and underclocked to begin with.
> 
> Port Royal, timespy graphics, built in game benchmarks.
> 
> Let me know, I’ll show you an easy 25% gap in just about anything over a RTX3070. Sometimes closer to 30%.
> 
> The 2080Ti will last about as long as a RTX3080 will running 4K Games. Because I am only slightly slower than a RTX3080 FE at around 2-7% slower. But that’s really up to you, 4K hurts performance with any GPU and requires a little optimization of the graphics settings in the more demanding titles for a smooth experience. DLSS 2.0 has saved us all with cyberpunk 2077.


Thank you for the response..seems not a bad deal at all, since it's $300 and the warranty is registered up to December 2023..so probably I get a FREE upgrade to what's new at that time from Gigabyte..

Yeah, I heard these are still monsters in their own league..hopefully the card I am getting isn't gimped in any way..(probably the reason the seller is selling it to me cheap)..

Thanks guys for the help...by tomorrow I might be an official member of this thread!!


----------



## Krzych04650

kairi_zeroblade said:


> Hi guys, I would like to ask if a 2080ti would still be fine at this point in time..I am buying a used one and its a Gigabyte Aorus Extreme model to be exact..
> 
> I will be mainly using it for games..I am just confused and depressed/disappointed at our current situation with the RTX 30 series stock issues as well the hyped up prices..I can't get a 30 series card at a near SRP price nor something that is not from the gray market that only has 2 months warranty locally..
> 
> And would you just advise me to just wait for the newer ones coming out next year, though I personally don't have high hopes since I can't even buy one locally at a decent price nor the model/AIB I want isn't even sold/listed..


Not only is it fine, it is also the best choice you can make today, especially at something like $400, not to mention lower. Only the 3090 has sensible gains over the 2080 Ti, but its price cannot be justified in any way, especially with a $999 3080 Ti incoming, and the gains on the 3080 are so small that it is not worth disassembling the loop and buying a waterblock for it. At 1440p it is 20% faster on average across many games, and this is before considering that it has zero OC potential while the 2080 Ti overclocks very well; basically any 2080 Ti can do a 16500 Time Spy graphics score, which is 23% faster than the 2080 Ti FE that is used in all the reviews to compare against the 3080 and 3090.

Not only that, but unlike Pascal, Turing is not left behind in any way and benefits from the same technologies as Ampere. It is hardly a new gen; the 3090 is more like a new 350W segment added above the 2080 Ti, and Turing and Ampere are more like one big family than separate generations. But again, the price is the main factor here: for the price some of these 2080 Tis go for, it is really hard to justify anything else. We are talking less than a third of the 3090's price.


----------



## tps3443

Krzych04650 said:


> Not only it is fine but also the best choice that you can make today, especially at something like $400, not to mention lower. Only 3090 has sensible gains over 2080 Ti, but it's price cannot be justified in any way, especially with $999 3080 Ti incoming, and gains on 3080 are so small that it is not worth disassembling the loop and buying waterblock for it. At 1440p it is 20% faster on average across many games and this is before considering that it has zero OC potential and 2080 Ti overclocks very well, basically any 2080 Ti can do 16500 Time Spy graphics score, which is 23% faster than 2080 Ti FE that is used in all the reviews to compare against 3080 and 3090. Not only that, but unlike Pascal, Turing is not left behind in any way and benefits from the same technologies as Ampere. It is hardly a new gen, 3090 is more like a new 350W segment added after 2080 Ti, Turing and Ampere are more like one big family than separate generations. But again, the price in main factor here, for the price that some of these 2080 Tis go it is really hard to justify anything else. We are talking less than 1/3rd of 3090 price.


Exactly this. ^



I can squeeze some really high Time Spy graphics scores, like 17,500. But that’s not always gaming-stable, depending on the game. I can, however, manage 17,250 consistently and go play Cyberpunk 2077 for as long as I like. And from what everyone is experiencing, if it runs Cyberpunk stable, it runs anything!

You can gain another GPU tier on top of a stock 2080 Ti.


----------



## tps3443

kairi_zeroblade said:


> Thank your for the response..seems not a bad deal at all since its 300$ and the warranty is registered upto December 2023..so probably I get a FREE upgrade on whats new by that time from Gigabyte..
> 
> Yeah I heard these are still monsters on their own league..hopefully the Card I am getting isn't limped in any way..(the reason the seller is selling it to me cheap probably)..
> 
> Thanks guys for the help...by tomorrow I might be an official member of this thread!!


$300 bucks?! I’d jump all over that!


----------



## kairi_zeroblade

tps3443 said:


> $300 bucks?! I’d jump all over that!


Yep seriously..hahaha that is why I am really tempted..



Krzych04650 said:


> Not only it is fine but also the best choice that you can make today, especially at something like $400, not to mention lower. Only 3090 has sensible gains over 2080 Ti, but it's price cannot be justified in any way, especially with $999 3080 Ti incoming, and gains on 3080 are so small that it is not worth disassembling the loop and buying waterblock for it. At 1440p it is 20% faster on average across many games and this is before considering that it has zero OC potential and 2080 Ti overclocks very well, basically any 2080 Ti can do 16500 Time Spy graphics score, which is 23% faster than 2080 Ti FE that is used in all the reviews to compare against 3080 and 3090. Not only that, but unlike Pascal, Turing is not left behind in any way and benefits from the same technologies as Ampere. It is hardly a new gen, 3090 is more like a new 350W segment added after 2080 Ti, Turing and Ampere are more like one big family than separate generations. But again, the price in main factor here, for the price that some of these 2080 Tis go it is really hard to justify anything else. We are talking less than 1/3rd of 3090 price.


well also not to mention the scarcity and the ridiculous prices of the 30 series cards..


----------



## tps3443

kairi_zeroblade said:


> Yep seriously..hahaha that is why I am really tempted..
> 
> 
> 
> well also to mention the scarcity and the sarcastic price over the 30 series cards..



I have owned (4) 2080 Tis. Great cards, they really can perform 25-30% beyond an RTX 3070 once set up correctly. A lot of people underestimated the 2080 Ti, listing it as a 13.8 TFLOP GPU, but that number is calculated from the stock rated clocks, not what the cards actually run. Even when the Xbox Series X was coming out, people were saying that a 2080 Ti would barely be faster, lol. Which was a joke. My 2080 Ti at 2,115 MHz is roughly 18.4 TFLOPS of compute power.

The card falls between a 6800 XT and an RTX 3080 in gaming performance. And obviously with ray tracing it’s going to perform much closer to the 3080, since the AMD cards don’t do so hot in RT titles.

But yeah, I’d buy that, man. It’s worth $600 easily, being an Aorus Xtreme model.

Put it in your PC and overclock it to death, memory and GPU both. Point some fans at it and you’ll probably manage 2,040 MHz to 2,055 MHz sustained in games.
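For anyone wondering where those TFLOP numbers come from: theoretical FP32 throughput is just 2 FLOPs (one fused multiply-add) per CUDA core per clock. A quick sketch, with the 2080 Ti's 4352 cores; the clocks shown are only examples, actual sustained clocks vary per card:

```python
# Rough FP32 throughput estimate: 2 FLOPs (FMA) per CUDA core per cycle.
def fp32_tflops(cuda_cores: int, clock_mhz: float) -> float:
    """Theoretical FP32 TFLOPS at a given core clock."""
    return 2 * cuda_cores * clock_mhz * 1e6 / 1e12

# RTX 2080 Ti: 4352 CUDA cores
print(round(fp32_tflops(4352, 1545), 2))  # rated boost clock -> ~13.45
print(round(fp32_tflops(4352, 2115), 2))  # a 2115 MHz OC -> ~18.41
```

So the spec-sheet figure and the "my card at 2,115 MHz" figure are the same formula, just different clocks.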


----------



## Krzych04650

tps3443 said:


> See a lot of people under estimated the 2080Ti, listing them as 13.8 Tflop GPU’s well this number is calculated from the 1,345Mhz base clock speed. And even when the Xbox series X was coming out. People were saying that a 2080Ti would barely be faster lol. Which was a joke. My 2080Ti at 2,115 is 19Tflops of compute power.


Yeah, that's a big point missed here, especially relative to the basically zero OC potential of Ampere cards. 2080 Ti FE reviews mention it running around 1830 MHz on average. Some games will run a bit higher and some will dip into the 1700s due to the power limit. If I lock my card at 1830 and then compare against my OC, I am seeing gains like 17% in Control, or 18% in The Witcher 3. So these are not just benchmark scores, these are real-world gains. Just for perspective, a 3080 needs LN2 and 2300 MHz+ clocks for a 15% gain in Time Spy. So the gap is really closing OC vs OC. Only the 3090 has decent gains vs a 2080 Ti: it is much more restrained out of the box than the 3080, so good models like the Strix or FTW3 have some OC headroom over the 3090 FE, and you could probably get a 30% lead over a 2080 Ti OC across the board, at a freaking 500 watts.

Not that I am glad about this, as the only reason the older generation holds up so well is an underwhelming new generation, but it seems the days of massive generational leaps are over. Hopefully NVIDIA will be forced to innovate now, because there is just no more headroom; Ampere already pushed power to the absolute limit, there cannot be any more gains made this way, they have to make an actual generational improvement now.


----------



## tps3443

Krzych04650 said:


> Yea that's a big point missed here, especially relative to basically zero OC potential of Ampere cards. 2080 Ti FE reviews mention it running average around 1830 MHz. Some games will run a bit higher and some will dip into 1700s due to power limit. If I lock my card at 1830 and then compare against my OC, I am seeing gains like 17% in Control, or 18% in the Witcher 3. So these are not just some benchmark scores, these are real world gains. Just for perspective, 3080 needs LN2 and 2300 MHz+ clock for 15% gain in Time Spy. So the gap is really closing OC vs OC. Only 3090 has decent gains vs 2080 Ti as it is much more restrained out of the box than 3080 so it has some OC gains vs 3090 FE with good models like Strix or FTW3, so you could probably get 30% lead over 2080 Ti OC across the board, at freaking 500 Watts.


Check this out. I ran the Metro benchmark at the exact same settings as this Guru3D review listed. My 2080 Ti was 37% faster than their stock 2080 Ti during 1440p testing, and only 1-2 FPS slower than an RTX 3080. Metro is one of those titles that nails the power limit hard. My card was reaching up to 520 watts in this game. So if you cool the card and feed it power, it keeps going!

My 2080 Ti is shunt modded with the 8 mΩ resistors stacked and soldered, EKWB Vector full block. And I run the Galax 380 watt BIOS, so I have 532 watts available. It hits about 35C full load the way I have it tweaked now.



PS: just google "RTX 3080 FE review Metro Guru3D," and he provides the settings he used for the in-game benchmark.


----------



## kairi_zeroblade

tps3443 said:


> Put it in your PC and overclock it to death, memory, and GPU. Point some fans at it and you’ll probably manage 2,040Mhz to 2,055MHz sustained in games.


That was my initial thought..haha as the reviews really pointed out how lackluster the cooling was..compared to the MSI Gaming X Trio..well I don't have plans on including it in my loop yet since I would require a water block for it..


----------



## tps3443

kairi_zeroblade said:


> This was my initial thoughts..haha as the reviews really pointed out how lackluster was the cooling..compared to the MSI Gaming X Trio..well I don't have plans on including it in my loop yet since I would require a water block for it..


Honestly, I’d just browse eBay. I paid $90 for my EKWB block brand new. I wanted all black for my loop. But most AIB blocks that are “non-reference” were dirt cheap.

I was seeing the MSI Trio X waterblocks for like $35-$45, but all reference blocks were double the price.


I love the all-black look, so it was actually tough to find one of these without it being clear plexi or something. But it certainly pays off by letting the 2080 Ti show its true potential.


----------



## J7SC

kairi_zeroblade said:


> This was my initial thoughts..haha as the reviews really pointed out how lackluster was the cooling..compared to the MSI Gaming X Trio..well I don't have plans on including it in my loop yet since I would require a water block for it..


I have two of the Aorus Xtr full factory waterblock cards (since December '18). They have a similar/same PCB as the air-cooled Aorus version, though a slightly different BIOS to take advantage of the cooler-running GPU. I dig the table below out (from THW) every once in a while to show the 'dramatic' impact of temps on 2080 Ti clocks.

As to RTX, oc'ed 2080 Ti performs a touch better than oc'ed 6800XT and 6900XT, from what I have seen in reviews


----------



## kairi_zeroblade

J7SC said:


> I have two of the Aorus Xtr full factory waterblock cards (since December '18). They have a similar / same PCB as the Aorus air-cooled version, though a slightly different bios to take advantage of the cooler running GPU. I dig the table below out (from THW) every once in a while to show the 'dramatic' impact of temps on 2080 Ti clocks
> 
> As to RTX, oc'ed 2080 Ti performs a touch better than oc'ed 6800XT and 6900XT, from what I have seen in reviews
> 
> View attachment 2470046


Problem is I can't find one (a waterblock) locally..we have limited supply/sources for water cooling components..


----------



## tps3443

J7SC said:


> I have two of the Aorus Xtr full factory waterblock cards (since December '18). They have a similar / same PCB as the Aorus air-cooled version, though a slightly different bios to take advantage of the cooler running GPU. I dig the table below out (from THW) every once in a while to show the 'dramatic' impact of temps on 2080 Ti clocks
> 
> As to RTX, oc'ed 2080 Ti performs a touch better than oc'ed 6800XT and 6900XT, from what I have seen in reviews
> 
> View attachment 2470046


Wow, 1,665 MHz at 77C? Imagine the non-A cards with that 250 watt limit. They would probably drop below even 1,665 MHz.


----------



## iunlock

kairi_zeroblade said:


> ohh..good thing I still have that paste..also the card is registered on the 4 year warranty program from gigabyte..
> 
> I should be fine down the road then?? but for how long?? seems the card is still competitive in 4k..(well that is if incase I ever upgrade to one..) been back reading and seeing your CP2077 screens give me the chills..hahaha





tps3443 said:


> 2080Ti is a beast in late 2020 man. I run about 25% faster than a RTX3070 in everything. Now my card is overclocked and watercooled. But still, the 2080Ti was really watered down and underclocked to begin with.
> 
> Port Royal, timespy graphics, built in game benchmarks.
> 
> Let me know, I’ll show you an easy 25% gap in just about anything over a RTX3070. Sometimes closer to 30%.
> 
> The 2080Ti will last about as long as a RTX3080 will running 4K Games. Because I am only slightly slower than a RTX3080 FE at around 2-7% slower. But that’s really up to you, 4K hurts performance with any GPU and requires a little optimization of the graphics settings in the more demanding titles for a smooth experience. DLSS 2.0 has saved us all with cyberpunk 2077.





tps3443 said:


> I keep hearing of people being one shot kill, or one hit kill. Sometimes I wish I was a able to do that. I still come across side missions quite often that are so tuff to beat, I just have to stop or move on to something else because I die like 10-15 times over and over and see no possible way. But, I will be back! Lol





kairi_zeroblade said:


> Thank your for the response..seems not a bad deal at all since its 300$ and the warranty is registered upto December 2023..so probably I get a FREE upgrade on whats new by that time from Gigabyte..
> 
> Yeah I heard these are still monsters on their own league..hopefully the Card I am getting isn't limped in any way..(the reason the seller is selling it to me cheap probably)..
> 
> Thanks guys for the help...by tomorrow I might be an official member of this thread!!





Krzych04650 said:


> Not only it is fine but also the best choice that you can make today, especially at something like $400, not to mention lower. Only 3090 has sensible gains over 2080 Ti, but it's price cannot be justified in any way, especially with $999 3080 Ti incoming, and gains on 3080 are so small that it is not worth disassembling the loop and buying waterblock for it. At 1440p it is 20% faster on average across many games and this is before considering that it has zero OC potential and 2080 Ti overclocks very well, basically any 2080 Ti can do 16500 Time Spy graphics score, which is 23% faster than 2080 Ti FE that is used in all the reviews to compare against 3080 and 3090. Not only that, but unlike Pascal, Turing is not left behind in any way and benefits from the same technologies as Ampere. It is hardly a new gen, 3090 is more like a new 350W segment added after 2080 Ti, Turing and Ampere are more like one big family than separate generations. But again, the price in main factor here, for the price that some of these 2080 Tis go it is really hard to justify anything else. We are talking less than 1/3rd of 3090 price.





tps3443 said:


> I have owned (4) 2080Ti’s. Great cards, they really can perform 25-30% beyond a RTX3070 once setup correctly. See a lot of people under estimated the 2080Ti, listing them as 13.8 Tflop GPU’s well this number is calculated from the 1,345Mhz base clock speed. And even when the Xbox series X was coming out. People were saying that a 2080Ti would barely be faster lol. Which was a joke. My 2080Ti at 2,115 is 19Tflops of compute power.
> 
> The card falls between a 6800XT and a RTX3080 in gaming performance. And obviously with ray tracing, it’s going to perform much closer towards the 3080, due to the AMD cards not doing so hot in RT titles.
> 
> But yeah I’d buy that man. It’s worth $600 easily being a Aorus Extreme model.
> 
> Put it in your PC and overclock it to death, memory, and GPU. Point some fans at it and you’ll probably manage 2,040Mhz to 2,055MHz sustained in games.





tps3443 said:


> Honestly, I’d just browse eBay. I paid $90 bucks for my EKWB block brand new. I wanted all black for my loop. But, most AIB blocks that are “NON Reference” were dirt cheap.
> 
> I was seeing the MSI Trio X waterblocks for like $35-$45 dollars But all reference blocks were Double the price.
> 
> 
> I love the all black look, so it was actually tough to find one of these without being a clear Plexi or something. But it certainly pays off by letting the 2080Ti show it’s true potential.


I agree. I always SMH when seeing the YT vids running the reference cards when doing comparisons. On one end it's like, yeah okay I guess, but on the other end it's like "oh my gosh, the 2080 Ti can do so much better than what you're showing..." - ie, always take the YT stuff with a huge grain of salt, because they are wayyyy off a lot of the time, as we all know...

The 2080 Ti OC'ed is pretty amazing. I still have my EVGA 2080 Ti FTW3 Ultra w/ the Hydro Copper block running at 2145 MHz+ in games, easily, as it is a good bin.

Here's the main gaming desktop; the only thing different now is that it's in maintenance 😀 mode with the 3080 in there. I just finished up a deep cleaning of the entire loop, and the CPU is currently running on a single 360 rad (the vertical rad in the back) until I decide if I want to keep this 3080 in and put a water block on it, which I have ...or just stick with the 3090 that arrives Monday. I think I'm leaning toward the 3090 since I actually do need the extra VRAM.










With all that said, there was no reason for me to swap out the 2080 Ti as it was perfectly fine, but since I do a lot of testing, benching, etc., I'm always collecting data to see how the hardware actually performs realistically, compared to the very loose YT vids that show cards in gimp mode lol...

@kairi_zeroblade, definitely grab that 2080Ti for $300...Dooo iiiitttt.


----------



## Brodda-Syd

ahnafakeef said:


> I love my 32" for a desk setup where it's literally an arm's length away from me. I wouldn't go any smaller or bigger for this distance.


Eureka!
Philips are selling a 32 inch 144Hz 4K display in China ONLY
Philips 329M1RV with 32″ IPS Panel, 4K Resolution and 144Hz Refresh Rate Details Emerge

I'm going to have to find someone that understands Mandarin so I can order one here:-
飞利浦329M1RV 32吋FastIPS 4K显示器G-SYNC 144Hz原生1ms HDR400-淘宝网

Hope they ship internationally


----------



## tps3443

Brodda-Syd said:


> Eureka!
> Philips are selling a 32 inch 144Hz 4K display in China ONLY
> Philips 329M1RV with 32″ IPS Panel, 4K Resolution and 144Hz Refresh Rate Details Emerge
> 
> I'm going to have to find someone that understands Mandarin so I can order one here:-
> 飞利浦329M1RV 32吋FastIPS 4K显示器G-SYNC 144Hz原生1ms HDR400-淘宝网
> 
> Hope they ship internationally


We can barely push 2560x1440P 165Hz lol.


----------



## J7SC

...new Hotfix 1.05 for CP '77 is out for consoles (and will be 'very soon' for PC?!). I have been doing mostly free-roaming over the last few days, trying out different DLSS settings for 4K high textures / full RTX. 'Balanced' is still my DLSS fav; no appreciable difference from 'Quality'. 'Auto' DLSS also works well, but 'Performance' and lower doesn't look so hot, at least on my 40-inch Philips / DSP monitor.

I also picked up 3 or 4 fps on average just by switching my 2950X from UMA to NUMA (the 2950X has some unique options for that). Still, it would be nice if a future patch allowed NVL/SLI so I could kick the second 2080 Ti in. Then again, I'm in the mid-50s fps (busy outside) to mid-to-high 60s fps (inside) with the above settings and just one 2080 Ti doing the lifting. I will probably try capping to 60 fps, given the monitor's refresh limits (4K/60 base, 72 OC)


----------



## nikoli707

I have a PNY blower 300 non-A 2080 Ti that I want more power out of. I got an EKWB FE block, and a tx240 + ex240 + CPU block setup.

Can someone link me to exactly the right shunt resistors I need to increase the power? I guess I need 3 of them, and I can see from pictures in search results where to shunt them.


----------



## tps3443

nikoli707 said:


> I have a Pny blower 300 non A 2080ti that i want more power out of. I got an ekwb fe block, and a tx240 + ex240 + cpu block setup.
> 
> Can someone link me to exactly the right shunt resistors i need to increase the power? i guess i need 3 of them, and i can see pictures from search where to shunt them.


You only need (2) resistors; do not shunt the PCIe resistor. I’d order like 4-5 of them just for future needs, or in case you mess one up or lose one during soldering. I have soldered a non-A 2080 Ti and it was a beast just like the rest. Also, I’d flash to the 310 watt Palit non-A BIOS. Then with the resistors and BIOS you’ll have 434 watts available total.






This is where I ordered mine. Don’t mind the picture, it’s the part number that counts. I have these soldered to my 2080 Ti FE on water right now.



https://www.mouser.com/ProductDetail/panasonic/erj-m1wsf8m0u/?qs=js9DCdkuA2qlcNS%2FvW306w%3D%3D&countrycode=US&currencycode=USD
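For the math behind the mod: stacking a resistor on top of an existing shunt puts the two in parallel, which lowers the resistance the power-monitoring circuit sees, so the card under-reads its own draw and the real power limit scales up by R_stock / R_parallel. A rough sketch; the 5 mΩ stock shunt value here is an assumption for illustration (boards differ, which is why reported multipliers vary):

```python
def parallel(r1_mohm: float, r2_mohm: float) -> float:
    """Combined resistance of two shunts stacked in parallel (milliohms)."""
    return r1_mohm * r2_mohm / (r1_mohm + r2_mohm)

def real_power_limit(bios_limit_w: float, stock_mohm: float, stacked_mohm: float) -> float:
    """The limit was tuned for the stock shunt, so actual power at the
    reported cap scales by stock / parallel resistance."""
    return bios_limit_w * stock_mohm / parallel(stock_mohm, stacked_mohm)

# Example: assumed 5 mOhm stock shunt with an 8 mOhm resistor stacked on top
print(round(parallel(5, 8), 2))            # ~3.08 mOhm effective
print(round(real_power_limit(380, 5, 8)))  # 380 W BIOS cap -> ~618 W actual
```

Measure your board's actual shunt value before trusting any multiplier; the card also loses accurate power telemetry once modded.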


----------



## tps3443

J7SC said:


> ...new Hotfix 1.05 for CP '77 is out for consoles (and will be 'very soon' for PC ?!). I have been doing mostly free-roaming over the last few days, and trying out different DLSS settings for 4K high textures / full RTX. 'Balanced' is still my DLSS fav; no appreciable difference to 'Quality'. 'Auto" DLSS also works well, but 'Performance' and lower doesn't look so hot at least on my 40 Inch Phillips / DSP monitor.
> 
> I also picked up 3 or 4 fps on average just by switching my 2950X from UMA to LMA (the 2950X has some unique options for that). Still, it would be nice if a future patch would allow for NVL/SLI so I can kick the second 2080 Ti in. Then again, I'm in the mid-50s fps (busy outside) to mid-to-high 60s fps (inside) with the above settings and just one 2080 Ti doing the lifting. I will probably try out capping to 60 fps, given the monitor's refresh limits (4K/60 base, 72 oc)


“Auto DLSS” = Balanced DLSS; that’s the default anyway. I really like Balanced DLSS too, but I run Quality DLSS every day with the Ultra RT preset because I am only running 2560x1440.

I wonder if Auto DLSS scales automatically depending on performance? Or if it just stays “Balanced mode” all the time..



I hope we get NVLink support for Cyberpunk 2077. I’d buy another in a heartbeat! That’s literally the cheapest fast setup you can buy. When it scales, it’s faster than any single GPU in the world. I’d love to have (2) in my custom loop.


----------



## iunlock

tps3443 said:


> “Auto DLSS” = Balanced DLSS. That’s the default anyways. I really like balanced DLSS too. But, I run Quality DLSS everyday with Ultra RT preset because I am only running 2560x1440P.
> 
> I wonder if Auto DLSS scales automatically depending on performance? Or if it just stays “Balanced mode” all the time..
> 
> 
> 
> I hope we get nvlink support for the Cyberpunk 2077. I’d buy another in a heart beat! Thats literally the cheapest fastest setup you can buy in the world. When it scales, it’s faster than any single GPU in the world. I’d love to have (2) in my custom loop.


With your settings you're running it with RT Reflections on too yea? (ie... Ultra Presets, All RT Off except Reflections ON, DLSS - Quality...)


----------



## J7SC

tps3443 said:


> “Auto DLSS” = Balanced DLSS. That’s the default anyways. I really like balanced DLSS too. But, I run Quality DLSS everyday with Ultra RT preset because I am only running 2560x1440P.
> 
> I wonder if Auto DLSS scales automatically depending on performance? Or if it just stays “Balanced mode” all the time..
> 
> 
> 
> I hope we get nvlink support for the Cyberpunk 2077. I’d buy another in a heart beat! Thats literally the cheapest fastest setup you can buy in the world. When it scales, it’s faster than any single GPU in the world. I’d love to have (2) in my custom loop.


...trying to test out some more 4K max textures RTX Ultra / DLSS auto vs balanced...but right now, it seems to be going through a painfully sllooowwww update (Hotfix 1.05 ?) for CP '77.

Yeah, NVL/SLI on 2x water-cooled 2080 Tis is great fun, especially with the earlier CFR drivers (no micro-stutter w/ CFR). In Microsoft Flight Simulator 2020, unbelievable performance at 4K / Ultra, ditto w/ Metro Exodus and various Crytek-engined games. Alas, NVIDIA dropped the (never advertised) CFR feature from drivers as of this summer, right before major new hardware and game releases

One thing to keep in mind w/ dual 2080 Tis (380W each) is that they're not the most efficient setup, and you need a sturdy PSU and cooling loop (which I think you've got).


----------



## tps3443

iunlock said:


> With your settings you're running it with RT Reflections on too yea? (ie... Ultra Presets, All RT Off except Reflections ON, DLSS - Quality...)


I have all ray tracing on, reflections included. No tweaked settings besides “Film Grain off”; I don’t like film grain.

I run the Ultra Ray Tracing preset, and then I turn DLSS to Quality instead of the default Auto (which is Balanced). So the game is run at full graphical fidelity at 2560x1440.

It runs very well too.


----------



## pewpewlazer

Cyberpunk 2077 patch/hotfix/whatever 1.05 is indeed out. Switched to the newest hotfix drivers as well. No notable performance changes from either the 1.05 update or the driver update. HDR is still a broken trainwreck in Cyberpunk.

Overall the game has been an absolute joy, and my now antiquated 2080 Ti paired with my dinosaur fossil of a CPU are managing a surprisingly tolerable 50-60 fps @ 3440x1440 with ultra RTX and DLSS Balanced (using DigitalFoundry's recommended settings otherwise).

At this point, the excitement of Ampere is all but gone. Similarly, the 5900X doesn't look as appealing as it did at launch. Both of those parts are sold out indefinitely, and my yesteryear build is still making ends meet. Might as well wait for Nvidia and Intel to make their next moves, and cross my fingers we can actually buy them.


----------



## tps3443

pewpewlazer said:


> Cyberpunk 2077 patch/hotfix/whatever 1.05 is indeed out. Switched to the newest hotfix drivers as well. No notable performance changes from either the 1.05 update or the driver update. HDR is still a broken trainwreck in Cyberpunk.
> 
> Overall the game has been an absolute joy, and my now antiquated 2080 Ti paired with my dinosaur fossil of a CPU are managing a surprisingly tolerable 50-60 fps @ 3440x1440 with ultra RTX and DLSS Balanced (using DigitalFoundry's recommended settings otherwise).
> 
> At this point, the excitement of Ampere is all but gone. Similarly the 5900x doesn't look as appealing as it did at launch. Both of said parts are sold out indefinitely, and my yesteryear build is still making ends meet. Might a well wait for Nvidia and Intel to make their next moves, and cross my fingers we can actually buy them.


With how fast your 2080 Ti is running, I see no reason to upgrade to another GPU, unless you’re going full-out expensive: $1,500+ RTX 3090 + water + overclock + shunt mod. Realistically, I think you would benefit more from a better CPU. I imagine Cyberpunk 2077 is hitting that 6/12 pretty hard, even at higher resolutions.

Why not find a dirt cheap 6950X 10/20? OC it to the 4.2-4.4 GHz range and just stay on that platform for another year or so until you do a major upgrade. A 6950X is still a great chip! If I had one I certainly wouldn’t feel inadequate.

CPUs really do last a good long time! My 7980XE is over 3 years old, and it’s just now getting into its actual usability, meaning here in late 2020 games are actually using its threads. Kinda stupid.. but it makes more sense to have one now than it ever did back in 2017 when it first came out.


Your 5820K is worth, I dunno, like 75 bucks? And I have seen some 6950Xs go for like $250.

That’s a $175 upgrade that offers nearly double the multithreaded performance from just a CPU swap. I’d be all over that if I was running X99.


----------



## J7SC

...yeah, also got the Hotfix 1.05 loaded now. My normal setting is 4K / max textures / max RTX / no film grain / balanced DLSS. Per spoiler, I used those settings to compare with 4K / max RTX / no DLSS (!) all the way to DLSS performance. Balanced (and perhaps auto) DLSS is probably the best, IMO.

FYI, some of the huge LED billboards change / rotate with different depictions. Pics are 4K (png > jpg). Right-click / new tab for best results



Spoiler

4K / text. max / RTX max / balanced DLSS

4K / text. max / RTX max / auto DLSS

4K / text. max / RTX max / quality DLSS

4K / text. max / RTX max / Performance DLSS

4K / text. max / RTX max / NO DLSS (!)


----------



## sultanofswing

Picked up a 3090 FTW3 Ultra. Honestly, not that impressed. Might just go back to my 2080 Ti and keep that 'til next gen.


----------



## pewpewlazer

tps3443 said:


> I imagine Cyberpunk 2077 is hitting that 6/12 pretty hard, even at higher resolutions.
> 
> Why not find a dirt cheap 6950X 10/20? OC it to 4.2Ghz-4.3Ghz-4.4Ghz range and just stay on that platform for another year or so until you do a major upgrade. A 6950X is still a great chip! If I had one I certainly wouldn’t feel inadequate.
> 
> Your 5820K is worth like I dunno 75 bucks? So, I have seen some 6950X’s go for like $250.
> 
> That’s a $175 dollar upgrade that offers nearly a double in multithreaded performance from just a CPU swap. I’d be all over that If I was running X99.


Eh, Cyberpunk isn't really hitting my 6c/12t all that hard. Maybe ~50% usage on average. Way better (or should I say worse in terms of multithreading) than BFV which was consistently 85%+ load. But it does seem that there is some single thread limitation going on in this game with my ancient Haswell...

If I could have found a 6950x that cheap earlier this year, I would have jumped on it. But the clock speed loss going to BWE kind of wipes out any IPC gains.

I bought an E5-1660v3 Xeon (basically a 5960x, unlocked 8c/16t HWE) earlier this year in hopes of squeezing some more life out of this platform. First one wouldn't even boot more than once. Got a refund on that, but ended up buying a second one, an ASUS X99 WS board, and 32gb of E-die in the process. My ASROCK board wouldn't overclock the uncore at all on the second chip for some reason, so it was 100% useless. So now I've had a NICE upgrade for my file server sitting around for months that I've been too lazy to install.

I'm finally ready to lay out some $$$ for a CPU/platform upgrade, and AMD finally came out with something worth buying... except it's un-buyable. Might as well wait for Rocket Lake. I'd go with a 10900k, but no PCIE 4...


----------



## tps3443

J7SC said:


> ...yeah, also got the Hotfix 1.05 loaded now. My normal setting is 4K / max textures / max RTX / no film grain / balanced DLSS. Per spoiler, I used those settings to compare with 4K / max RTX / no DLSS (!) all the way to DLSS performance. Balanced (and perhaps auto) DLSS is probably the best, IMO.
> 
> FYI, some of the huge LED billboards change / rotate with different depictions. Pics are 4K (png > jpg). Right-click / new tab for best results
> 
> 
> 
> Spoiler
> 
> 
> 
> 4K / text. max / RTX max / balanced DLSS
> View attachment 2470270
> 
> 
> 4K / text. max / RTX max / auto DLSS
> View attachment 2470271
> 
> 
> 4K / text. max / RTX max / quality DLSS
> View attachment 2470272
> 
> 
> 4K / text. max / RTX max / Performance DLSS
> View attachment 2470274
> 
> 
> 4K / text. max / RTX max / NO DLSS (!)
> View attachment 2470277


Nvidia did a killer job with DLSS. I just got done playing; it’s 4:16 AM, been playing too long! But dang, I love my new pistol. It rained for like the first time since the very beginning of the game, and it just looks so good.

I think Quality DLSS looks about 90% as good as no DLSS at all, and Auto DLSS “Balanced” looks almost as good as Quality, maybe 80% as good. I think Performance and Ultra Performance are just silly; you’d be better off turning ray tracing off if you wanted that extra performance, though maybe not at higher resolutions? Anyways, I love DLSS.

And we are lucky that DLSS is just as good on our older first-gen RTX cards as on the latest RTX 3000 series. Nvidia didn’t leave us behind, so I’m good with that.
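For context on what the DLSS modes actually do: the commonly cited DLSS 2.0 per-axis render scales are roughly 2/3 (Quality), 0.58 (Balanced), 0.50 (Performance), and 1/3 (Ultra Performance). These figures are not guaranteed by NVIDIA and can vary by title, but as a sketch:

```python
# Approximate DLSS 2.0 per-axis render scales (commonly cited values;
# treat them as assumptions, not a spec).
DLSS_SCALES = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 0.50,
    "Ultra Performance": 1 / 3,
}

def render_resolution(out_w: int, out_h: int, mode: str):
    """Internal resolution DLSS renders at before upscaling to the output."""
    s = DLSS_SCALES[mode]
    return round(out_w * s), round(out_h * s)

print(render_resolution(3840, 2160, "Quality"))      # (2560, 1440)
print(render_resolution(3840, 2160, "Performance"))  # (1920, 1080)
```

That's why 4K Performance mode roughly matches native 1080p cost, and why Quality at 1440p is so cheap for the card.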


----------



## tps3443

sultanofswing said:


> Picked up a 3090 FTW3 Ultra, Honestly not that impressed. Might just go back to my 2080ti and keep that til next gen.


Yep, that seems to be the case. If you have a very high clocked 2080 Ti, it’s tough to get a huge gain with anything else. And the cost is just crazy high right now, like $1,600 plus taxes for the cheapest 3090 around. Then I’d want a water block for it and all that good stuff. I’d be looking at nearly $2K for a 30% boost at best? It’s tough to consider.


----------



## tps3443

pewpewlazer said:


> Eh, Cyberpunk isn't really hitting my 6c/12t all that hard. Maybe ~50% usage on average. Way better (or should I say worse in terms of multithreading) than BFV which was consistently 85%+ load. But it does seem that there is some single thread limitation going on in this game with my ancient Haswell...
> 
> If I could have found a 6950x that cheap earlier this year, I would have jumped on it. But the clock speed loss going to BWE kind of wipes out any IPC gains.
> 
> I bought an E5-1660v3 Xeon (basically a 5960x, unlocked 8c/16t HWE) earlier this year in hopes of squeezing some more life out of this platform. First one wouldn't even boot more than once. Got a refund on that, but ended up buying a second one, an ASUS X99 WS board, and 32gb of E-die in the process. My ASROCK board wouldn't overclock the uncore at all on the second chip for some reason, so it was 100% useless. So now I've had a NICE upgrade for my file server sitting around for months that I've been too lazy to install.
> 
> I'm finally ready to lay out some $$$ for a CPU/platform upgrade, and AMD finally came out with something worth buying... except it's un-buyable. Might as well wait for Rocket Lake. I'd go with a 10900k, but no PCIE 4...


BFV seems to be a joke against my CPU. I thought it was an intensive game before; I believed it was the most CPU intensive game of any, lol. Back when I had an 8086K at 5.4 GHz, it would load all 12 threads to 100% sometimes. But now it uses 15 threads maximum on my 7980XE while the rest just sit idle at 0%. So 15 threads is all it can use.

Death stranding uses all 36 and Cyberpunk uses all 36 threads too.

Anyways, just a thought about the 6950X. There are some on eBay in the $250 range.

Also, Broadwell IPC is good; 4.3 GHz seems to match 4.5 or 4.6 GHz Haswell.


----------



## gfunkernaught

tps3443 said:


> I opened the door on my case just a little so fresh air is pulled in, temps are a MUCH better now. The top rad is exhaust, so it helps bring fresh ambient air through that top rad, instead of warm air from the front rad which intakes. This is after hours and hours of gaming. I think the game has been open for like 4-5 hours now.
> 
> @J7SC
> 
> I am locked in at absolute maximum temp of 34-35C after hours of gaming on my 2080Ti, I do believe if I reduced my CPU power consumption to around what a Threadripper would consume, and if I lowered my GPU voltage to 1.043MV like you run, I think my temps would actually be very comparable to your setup.
> 
> 
> photo host


If you want, you can try making your top and front rads intake, and if you have fans on the case's side panel, make them exhaust. I did that with my case and my load temps so far haven't exceeded 40C, with ambient around 19-20C. I cut two holes in my side panel window (Corsair 760T) and installed two 200mm fans as exhaust. Now I can cap my top and front fans at 60% and let the side fans ramp up, because they are quieter.


----------



## J7SC

sultanofswing said:


> Picked up a 3090 FTW3 Ultra, Honestly not that impressed. Might just go back to my 2080ti and keep that til next gen.


Considering the availability problems of custom 3090s, you might as well keep that card (though you could more than get your money back), and decent-clocking / well-cooled 2080 Tis perform well in CP '77...even with just one card running, I typically get close to the 4K monitor's limits anyway.



pewpewlazer said:


> Eh, Cyberpunk isn't really hitting my 6c/12t all that hard. Maybe ~50% usage on average. Way better (or should I say worse in terms of multithreading) than BFV which was consistently 85%+ load. But it does seem that there is some single thread limitation going on in this game with my ancient Haswell...
> 
> If I could have found a 6950x that cheap earlier this year, I would have jumped on it. But the clock speed loss going to BWE kind of wipes out any IPC gains.
> 
> I bought an E5-1660v3 Xeon (basically a 5960x, unlocked 8c/16t HWE) earlier this year in hopes of squeezing some more life out of this platform. First one wouldn't even boot more than once. Got a refund on that, but ended up buying a second one, an ASUS X99 WS board, and 32gb of E-die in the process. My ASROCK board wouldn't overclock the uncore at all on the second chip for some reason, so it was 100% useless. So now I've had a NICE upgrade for my file server sitting around for months that I've been too lazy to install.
> 
> I'm finally ready to lay out some $$$ for a CPU/platform upgrade, and AMD finally came out with something worth buying... except it's un-buyable. Might as well wait for Rocket Lake. I'd go with a 10900k, but no PCIE 4...


...fyi, I have both the Asus X99-E WS and the X79-E WS, and they are among my fav boards, especially for work-related apps. The latter runs a 4.8GHz 4960X 'ES' and is no slouch at 1080p; the former had my 5960X (4.5) in it but is now slated to do 'work' with a Xeon v4... looking for a trustworthy used 16c-20c HT chip. I also moved the 5960X to the Asus X99 Rampage Xtr because that has better (higher) RAM BIOS settings... I paired it with 32GB Sam-B at just under 3400MHz / CL14. That's my daily slugger / 1080p gamer.

CP '77 runs on the 4K 2950X TR setup w/ the 2080 Tis, and after changing the TR settings to NUMA from UMA a few days ago, it leaves little to be desired, at least at 4K. With 4K, your 5820K HEDT @ 4.5 giggles shouldn't really be a big disadvantage, as 4K places more emphasis on the GPU anyhow.


----------



## tps3443

gfunkernaught said:


> If you want you can try making your top and front rads intake and if you have side fans on the case's side panel, make them exhaust. I did that with my case and my load temps so far haven't exceeded 40c, considering the ambient temp is about 19-20c. I cut two holes in my side panel window (Corsair 760T) and install two 200mm fans as exhaust. Now I can cap my top and front fans at 60% and let the side fans ramp up because they are quieter.


I have gotten my GPU temps down to the 35C range at full load at 2,115MHz. I just tilted my door open about 4 inches at the top, and I lowered my GPU voltage from 1.093V to around 1.050V. I have an Asus TUF GT501 case; my door is tempered glass, so I can't cut any holes.

If I reduce my CPU overclock from 4.8Ghz to 4.6Ghz I can see around 33-34C GPU temps. My CPU uses a lot of power! But it’s not really worth it. I’d rather run 4.8Ghz on my CPU daily.


So far my water loop is set up very well, and I am amazed how well (2) slim 360s work. I am not using the best radiator fans right now, but I think I could get even better temps if I replaced my 1,650RPM XSPC fans.

I am gonna get some 2,200-2,800RPM high static pressure fans to replace them.

I don’t mind noise. I like performance.


----------



## Imprezzion

I got 2 separate "loops" right now. I wanted to run a full EK Phoenix loop with 1 360 and a 280 with both CPU and GPU but the 360 Phoenix rad pump combo went EOL before I could get one..

I settled for a Kraken on the GPU and a Phoenix 280 on the CPU.

This has the downside that the CPU rad is my front intake and the GPU rad is top exhaust, so it has to deal with the hot air from a 5.2GHz 10900KF... so yeah.

GPU temps are around 45-49c on acceptable noise levels. I do mind noise lol.

I did test it numerous times with the GPU rad removed from the case altogether, running 100% fan speed push/pull, and it goes as low as 38-41C, but that didn't improve stability at all for the card. Couldn't get even one 15MHz bin higher. Same applies the other way around: 2085MHz @ 1.093V is rock solid even at near 60C. It doesn't get unstable at higher temps at all. Even on the stock Phoenix GS air cooler at like 73C it held 2085MHz just fine in BF5, if I apply the clock after it warms up.

My card is a weird one lol.


----------



## J7SC

Imprezzion said:


> I got 2 separate "loops" right now. I wanted to run a full EK Phoenix loop with 1 360 and a 280 with both CPU and GPU but the 360 Phoenix rad pump combo went EOL before I could get one..
> 
> I settled for a Kraken on the GPU and a Phoenix 280 on the CPU.
> 
> This has the downside of the CPU rad being my front intake and the GPU rad is top outtake and it has to deal with the hot air from a 5.2Ghz 10900KF so yeah..
> 
> GPU temps are around 45-49c on acceptable noise levels. I do mind noise lol.
> 
> I did test it numerous times with the GPU rad removed from the case all together and running 100% fan speed push pull and it goes as low as 38-41c but that didn't improve stability at all for the card. Couldn't get even 1 15Mhz bin higher.. same applies the other way around, 2085Mhz @ 1.093 is rock solid even at near 60c. Doesn't get unstable at higher temps at all. Even on the stock Phoenix GS air cooler at like 73c it held 2085Mhz just fine in BF5 if I apply the clock after it warmed up.
> 
> My card is a weird one lol.


I am a dual-loop fan and almost always use them in complex builds. Works great, especially with one set of quality QDs per loop...


----------



## TONSCHUH

kairi_zeroblade said:


> Thanks for your reply!!
> 
> Well since I already bought a 1440P screen it really sounds enough..(11GB VRAM) as to OC potential yeah saw many YT vids and reviews that the 2080ti if not power limited and ran cool would really go far as compared to the 30 series where somehow normal people only see a 90-100mhz jump from stock and that still didn't guarantee a huge leap in performance..I am getting it for 300$..
> 
> Though, does the Gigabyte Aorus Extreme have any issues?? saw some reviewers complain about it being a turd heater despite the ginormous size of heatsink and those 3 fans..anybody here has those particular cards??


I have one and it maxed out at 68-70°C on air with an overclock.

I also got an EK block for it, which maxed out at 32°C under water.

Specs: https://pastebin.com/q2D0simH

🙂


----------



## gfunkernaught

tps3443 said:


> I lowered my GPU voltage down from 1.093MV to around 1.050V


So you're running [email protected] with about 35c on load?



tps3443 said:


> I don’t mind noise. I like performance.


I don't mind noise either, especially when using headphones, but if you can get the same performance/stability at lower noise, why not?




Imprezzion said:


> 2085Mhz @ 1.093 is rock solid even at near 60c


Isn't that voltage too high for 2085MHz? Have you tried like 1032mV?


----------



## J7SC

CP '77 can really use up the hours... Hopefully I don't end up like some of the cyber-candy 'zombies' at Lizzie's bar... I switched to DLSS 'auto' (w/ 4K / high textures / max RTX base) now. Maybe it's just a placebo effect, but with Hotfix 1.05 it seems to give me improved / really good visuals and fps with that setting.

I also took a look at some Hotfix 1.05 console play of CP '77 on YT... I'm sure glad about the PC version...


----------



## Imprezzion

gfunkernaught said:


> Isn't that voltage too high for 2085mhz? Have you tried like 1032mv?


Yes, it won't even start a game at 1.037V 2085MHz. My card just needs that much voltage. It will do 2130 @ 1.125V with XOC, but not in Cyberpunk; it barely holds together at 2100MHz, even at 38C.
This just isn't a good clocker.


----------



## gfunkernaught

Imprezzion said:


> Yes, won't even start a game at 1.037v 2085Mhz. My card just needs that much voltage. It will do 2130 @ 1.125v with XOC but not in Cyberpunk. It barely holds together at 2100Mhz. Even at 38c.
> This just isn't a good clocker.


Which BIOS do you have on your card right now?


----------



## Imprezzion

gfunkernaught said:


> Which BIOS do you have on your card right now?


KFA2 380W. I used the EVGA FTW3 Ultra BIOS for quite a long time; it responds the same. The Gigabyte WF3 OC BIOS also responds the same.

HOF XOC does about 2130 @ 1.125v, EVGA Kingpin XOC does about 2145 @ 1.125v.

I can bench the card @ 2190 / 8100 @ 1.125v with the HOF XOC BIOS as long as I keep it under 40c but it artifacts like mad.


----------



## tps3443

Imprezzion said:


> Yes, won't even start a game at 1.037v 2085Mhz. My card just needs that much voltage. It will do 2130 @ 1.125v with XOC but not in Cyberpunk. It barely holds together at 2100Mhz. Even at 38c.
> This just isn't a good clocker.


It’s all about temperature. With the door on my closed case, my GPU would normally reach 38-39C, so 2,115MHz is only stable with around 1.093V. Like 100% stable; I’d bet my life on it.

But if I remove my case door, I can get my 2080Ti down to 35C, and this allows the GPU to run at 2,115MHz with 1.050V forever, literally. I just minimized the game and was resetting MSI AB and tinkering with it with the game minimized.

If I closed the door, and continued on playing, it would crash as soon as I hit 38C lol.

Silicon quality just varies so much. I watched a YouTube video of a 2080Ti running 2,145MHz at 48-49C with like 1.075V in Cyberpunk 2077, and it did not budge off that 2,145MHz the entire time he played. And I am thinking, how is that even possible?! That is super silicon lol.

^ I wish my 2080Ti would respond so well to temperatures.

Fortunately I can run 2,100-2,115Mhz forever in the game without crashing. So I suppose that’s good enough. And I am satisfied with it. Performance is really good too.

With this same profile, my Time Spy graphics score nets 17,250. So there’s really not much more to ask for, I suppose.


----------



## Imprezzion

I mean, you're on 2115 and I'm on 2085, so the difference is minimal. I get 17050 in Time Spy graphics (with 8000 memory; I run 7900 daily).

Then again, my CPU and RAM are monsters. 10900KF @ OCTVB 5.3 single core 5.2 all core AVX0 1.356v and 2x16GB Trident-Z Neo @ 4400 17-17-17-36-320-2T so that is incredible.


----------



## gfunkernaught

tps3443 said:


> Fortunately I can run 2,100-2,115Mhz forever in the game without crashing. So I suppose that’s good enough. And I am satisfied with it. Performance is really good too.


Do you ever hit the power limit at those speeds? I can run [email protected] in Cyberpunk for hours without a single change in clock speed under load. If I play Metro Exodus the power limit gets hit and my clocks drop to [email protected] That high voltage is probably from the voltage offset slider I have at 100% to enable 1093mV in the v/f curve editor.
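For anyone new to the curve editor: locking a point effectively flattens the v/f curve at the chosen voltage, so the card never boosts past that clock. A rough sketch of the idea (purely illustrative data, not Afterburner's actual API or any card's real curve):

```python
# Illustrative v/f curve "flatten": every point at or above the locked
# voltage is pinned to the target clock, so boost stops climbing there.
def lock_curve(curve, lock_mv, lock_mhz):
    """curve is a list of (millivolts, MHz) points, ascending by voltage."""
    return [(mv, lock_mhz if mv >= lock_mv else mhz) for mv, mhz in curve]

# Hypothetical points loosely shaped like a 2080 Ti boost curve
stock = [(950, 1935), (1000, 1995), (1050, 2050), (1093, 2100)]
print(lock_curve(stock, 1050, 2115))
# points at >= 1050 mV are pinned to 2115 MHz; lower points untouched
```

That is also why a 100% voltage-offset slider matters: it exposes the higher-voltage points (up to 1093 mV) so there is something on the right side of the curve to pin.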


----------



## tps3443

Imprezzion said:


> I mean, you're on 2115 and I'm on 2085 so the difference is minimal. I get 17050 in Time Spy graphics (with 8000 memory, I run 7900 daily)
> 
> Then again, my CPU and RAM are monsters. 10900KF @ OCTVB 5.3 single core 5.2 all core AVX0 1.356v and 2x16GB Trident-Z Neo @ 4400 17-17-17-36-320-2T so that is incredible.


Yeah, my CPU and RAM do pretty well. I get a 17,200 overall score, about 16,500 physics, with my old timer at 4.8GHz.

I run my Samsung B-die at DDR4-4000 CL15-15-15-30-275 at 1T.

My XMP profile defaults to 1.5V at 4000Mhz CL15-16-16. Crazy good memory.

I can benchmark CL12-12-12 at 4000MHz, but it’s not actually stable. I would have to watercool the memory and revisit it; DDR4 seems to be very sensitive beyond the 40C range.

I might actually watercool my memory; I’d kinda forgotten all about it. Either way, my CPU’s IMC is nearing its limits. It seems to handle around 4160MHz at around CL14-14-14 daily, but things become unstable beyond 50C. I really need to get back into some RAM tuning; it’s been a few months, since before I did a custom loop on my full system.
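For reference, those timings translate to absolute latency straightforwardly: first-word latency is roughly CL divided by the memory clock (data rate ÷ 2 for DDR). A quick sketch:

```python
# First-word CAS latency in nanoseconds for DDR memory:
# cycles (CL) divided by the memory clock in MHz, times 1000.
def cas_latency_ns(data_rate_mts: float, cl: int) -> float:
    mem_clock_mhz = data_rate_mts / 2   # DDR transfers twice per clock
    return cl / mem_clock_mhz * 1000    # cycles / MHz -> nanoseconds

print(cas_latency_ns(4000, 15))  # CL15 @ DDR4-4000 -> 7.5 ns
print(cas_latency_ns(4000, 12))  # CL12 @ DDR4-4000 -> 6.0 ns
print(cas_latency_ns(4160, 14))  # CL14 @ DDR4-4160
```

So the CL12 benchmark profile is chasing about 1.5 ns over the daily CL15 profile, while 4160 CL14 lands in between; it puts the stability trade-offs above in perspective.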


----------



## Imprezzion

My RAM just has the stock heatspreaders but they aren't the best ever B-Die bin. Only 3600C16.

I do have a 140mm fan aimed directly on them which I made a mount for that hangs the fan from my top radiator.

They stay just under or at 40c in normal usage at 1.45v and go to like 45-46c at 1.60v. 

I tested this RAM OC for hours and hours on HCI, TM5, and Prime95 and it's solid, but the IMC is at its end. It already takes 1.40V SA and 1.35V IO to pass stress testing, and 4600C18 is impossible to stabilize without going way overboard on the SA.

I am, however, going to update my motherboard BIOS once more, as there's a newer version with better TVB support, and I'm going to try to squeeze 5.4GHz 1-2 core TVB out of this CPU if it can handle it lol. That is, if the C-states want to behave and not freeze at idle clocks like they have been doing an awful lot when the CPU comes down from a heavy load.


----------



## tps3443

gfunkernaught said:


> Do you ever hit the power limit at those speeds? I can run [email protected] in cyberpunk for hours without a single change in clockspeed while on load. If I play Metro Exodus the power limit gets hit and my clocks go down to [email protected] That high voltage is probably the voltage offset slider I have at 100% to enable 1093mv in the v/f curve editor.


My card has the 8ohm resistors soldered on, so I never hit a power limit.

I can sustain 2,145MHz in Metro too. It’s just that Cyberpunk 2077 needs 2,115 for long-term stability.


----------



## gfunkernaught

tps3443 said:


> My card has the 8ohm resistors soldered on. So I never hit a power limit.
> 
> I can sustain 2,145Mhz in Metro too. It’s just Cyberpunk 2077 needs 2,115 for long term stability.


Ah gotcha. I am debating a shunt mod. Waiting to see what happens this spring. 

[email protected] is not stable; the video card crashed while playing Cyberpunk, the video driver crashed. Card's temp was around 37C too. No video output, but I could tell I was at the desktop since the keyboard was responding, so I rebooted blindly using the command prompt. I've done it so many times, it's like walking in the dark.

There was this one scene in the game where the fps dropped to 27fps. This was after it crashed and I rebooted; I had set my clock back to a stable [email protected], with psycho RT. I've never seen the game drop below 30fps even at 4K psycho RT with everything else maxed out. Idk if it was because of the 2130MHz overclock, or if that particular scene was especially heavy.


----------



## tps3443

gfunkernaught said:


> Ah gotcha. I am debating a shunt mod. Waiting to see what happens this spring.
> 
> [email protected] is not stable, video card crashed while playing Cyberpunk, video driver crashed. Card's temp was around 37c too. No video output but I could tell I was at the desktop, keyboard was responding, so I rebooted blindly using the command prompt, v'done it so many times, like walking in the dark.
> 
> There was this one scene in the game, where the fps dropped to 27fps. This is after it crashed and i rebooted, I set my clock back to a stable [email protected], with psycho RT. I've never seen the game drop below 30fps even at 4k psycho rt and everything else maxed out. Idk if it was because the 2130mhz overclock, OR that particular scene was especially heavy.


To make it easy, you only need 1 shunt soldered. I’ve done so many 2080Tis that doing both 8-pin shunts just isn’t needed at all, especially with the 380W Galax BIOS. That way you won’t have to remove the waterblock; you can just solder the back 8ohm resistor, directly under the backplate, and you’ll maintain like 80-90% TDP even under the heaviest of loads.


With no shunts soldered and just running the 380W BIOS, the power usage in Time Spy GT2 or Metro is so intense that it literally forces the card to clock down. Solder a single 8ohm resistor on the back and this issue is gone.

I have run both resistors, a single resistor, and none at all, and running just 1 is just as good.

Just be careful with your solder: if it touches any nearby components on the PCB and shorts the resistor, your PC will “NOT” power on with the GPU installed. No burning, no melting, no smoke, no explosions or anything crazy lol. Just fix the solder and move on.
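For anyone wondering why stacking a resistor raises the effective limit: the stacked resistor sits in parallel with the card's current-sense shunt, so the sense circuit sees less resistance and under-reads the true draw. A back-of-envelope sketch (the 5 mΩ stock value and milliohm-range stacked value are the commonly cited figures for this class of mod, assumptions here, not measurements from this thread):

```python
# Stacking a resistor on top of a current-sense shunt puts the two in
# parallel, lowering the resistance the sense circuit measures across.
def parallel(r1: float, r2: float) -> float:
    return r1 * r2 / (r1 + r2)

stock_shunt = 0.005   # ohms: ~5 mOhm, typical sense resistor (assumption)
stacked     = 0.008   # ohms: ~8 mOhm stacked on top (assumption)

effective = parallel(stock_shunt, stacked)
reported_fraction = effective / stock_shunt  # share of real power the card "sees"

print(f"effective shunt: {effective * 1000:.2f} mOhm")
print(f"card reports ~{reported_fraction:.0%} of actual power draw")
```

With those assumed values the card reports only about 62% of its real draw, which is why a 380 W BIOS stops being the bottleneck and the reported TDP sits comfortably in the 80-90% range under load.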


----------



## tps3443

Cyberpunk 1.05 runs well! I have noticed a lot of improvements just by playing it; the frame rate seems more consistent too.


----------



## tps3443

@Imprezzion

This is the memory I run. Really good stuff! I run this 24/7 for gaming and working.

X299 always has really high latency with average DDR4, and gaming performance suffers for it. But you can overcome these issues with really good memory.

I am nearly at 1 point per 1MHz in R15. Anyways, these are daily numbers.


----------



## gfunkernaught

tps3443 said:


> To make it easy you only need 1 shunt soldered. I’ve done so many 2080Ti’s, that both 8 pin shunts just isn’t needed at all, especially with the 380 watt Galax bios. Then you won’t have to remove the waterblock. You can just solder the back 8ohm resistor, directly under the backplate. and you’ll maintain like 80-90% TDP even under the heaviest of loads.
> 
> 
> With no shunts soldered and just running the 380 watt bios in timespy GT2 or Metro the power usage is so intense that is literally forces the card to clock down. Solder on a single 8 Ohm resistors on the back and this issue is gone.
> 
> I have ran both resistors, single resistors, none at all. And running just 1 is just as good.
> 
> Just be careful with your solder, if it touches any nearby components on the PCB and shorts the resistor, your PC will “NOT” power on with the GPU installed. No burning, no melting, No smoke, or explosions or anything crazy lol. Just fix the solder and move on.


As I read this, I found a used Kingpin XOC 2080 Ti for sale for $750... the temptation!

I've seen instructions for the RTX Titan shunt mod. Are they the same for the 2080 Ti? Could you maybe show me exactly where to put the resistor on the back? Not having to remove the block is good news.


----------



## tps3443

gfunkernaught said:


> As I read this, I found a used Kingpin XOC 2080 To for sale $750...the temptation!
> 
> Ive seen instructions for the RTX titan shunt mod. Are they the same for the 2080 ti? Could you maybe show me exactly where to put the resistor on the back? Not having to remove the block is good news.


Titan RTX is the exact same. The Kingpin is a great deal! I saw it too, I think, but I would need another waterblock. But yes, it’s tempting!

I have soldered a shunt on the back of my 2080Ti without even removing my GPU from the system. I had an XOC bios on there and I originally removed both shunts due to thinking “I’d keep this XOC bios”. But then I decided to go back to the Galax 380 bios, so I did a quick shunt solder with the card still in the case.

Very easy to do. Here’s a picture of my card for reference. You only need (1)


----------



## yoadknux

For those of you who run on air, what kind of load temps are you seeing? My overclocked FTW3 at 124% power (373W) gets a max GPU temp of 70c on Cyberpunk, about 4-5c less if run on stock, and I wonder if it's hot compared to other air cooled 2080Tis.


----------



## gfunkernaught

tps3443 said:


> Titan RTX Is the exact same. The kingpin is a great deal! I saw it too I think. But I would
> need another waterblock. But yes it’s tempting!
> 
> I have soldered a shunt on the back of my 2080Ti without even removing my GPU from the system. I had an XOC bios on there and I originally removed both shunts due to thinking “I’d keep this XOC bios”. But then I decided to go back to the Galax 380 bios, so I did a quick shunt solder with the card still in the case.
> 
> Very easy to do. Here’s a picture of my card for reference. You only need (1)


Cue the "AAAAND IT'S GONE!"... the listing vanished from Micro Center. It's alright; like you said, I'd have to buy another waterblock.
Thanks for uploading those pics! I'm off to Mouser to get some resistors. Much appreciated!


----------



## tps3443

@gfunkernaught

There is one in here for sale for $800.

@0451 is selling his Kingpin for $800. I’d PM him, maybe work something out; I think he bought a 3090.

That’s what I thought you were talking about.


----------



## Imprezzion

I decided to check the local webshops for 2080 Tis as well... shouldn't have. The cheapest new one is like €1200.

It baffles me how those shops think they're ever going to sell those cards at that price.


----------



## geriatricpollywog

tps3443 said:


> @gfunkernaught
> 
> There is one in here for sale for $800 bucks.
> 
> @0451 is selling his kingpin for $800 I’d PM him maybe work something out. Think he bought a 3090.
> 
> Thats what I thought you were talking about.


Yeah, that’s the asking price. Not sure how much it would cost to ship to France, but it was about $30 to ship a bare PCB to the UK. The Kingpin weighs about 7lb.


----------



## J7SC

Imprezzion said:


> I decided to check the local webshops for 2080 TI's as well.. shouldn't have... The cheapest new one is like €1200..
> 
> It baffles me as to how those shops think they're ever going to sell those cards like that..


...that's with 21% VAT included, no? I checked caseking.de the other day, and the handful of 2080 Tis (qty = 3) were all sold a week later. I suppose if you want more than 8GB VRAM for 4K, the 2080 Ti is preferable to the 3070, never mind the supply constraints on the other RTX 3000 cards.


----------



## Krzych04650

It generally seems like the 2080 Ti craze on the used market has ended. The 3000 series is nowhere to be found and even retail prices have hiked, so nobody is selling their 2080 Ti anymore, and those who do no longer have to adjust the price against 3000-series pricing, because that market basically doesn't exist. When I bought my second 2080 Ti, I got it for 2300 in my currency, and there were about three times more offers than now. Now there are some questionable ones around 3000 and the rest are 3500+, which is closing in on custom-model 3080 territory, and there isn't much to choose from. Good that I got mine when people still believed that Ampere was good and that they could simply buy it.


----------



## tps3443

0451 said:


> Yeah, that’s the asking price. Not sure how much it would cost to ship to France but it was about $30 to ship a bare pcb to the UK. The Kingpin weighs about 7lb.


Well, it’s an amazing deal to me, even with near $100 shipping. Kingpins are still very expensive to buy, and I am not near a Micro Center.

If I had the money, I would buy it in a heartbeat!

I've been wanting a Kingpin for a while.


----------



## pewpewlazer

Finally made my way into the "sub 40c load temp" club and pulled off 2190mhz for Port Royal (haven't tried anything else yet). tl;dr is flow rate matters for temps and temps matter for clocks (duh).

Over the last year, my water loop had spiraled out of control in pursuit of lower water temps with minimal noise. My simple 420 + 240 loop in a mid-tower ATX case quickly ballooned into a full sized ATX case sporting 2x 420 rads and a pair of D5s, with an additional 2x 560 rads mounted externally and hooked up with some Koolance QD4 quick disconnects. I achieved the <5*C water temp delta I dreamed of, but my GPU temps seemingly tanked. I remembered the card running 10-12*C above water temps when I first got it... now I was seeing 15-16*C under heavy loads.

Seeing as I was running all this off what amounted to a single D5 at full speed (2x D5 @ 3400 RPM), flow rate surely had to have been bad. I'm well beyond the point of no return now, so I decided to add ANOTHER pair of D5s to the loop.

Ended up deciding the reasonable thing to do would be to go the dual loop route. Settled on a 420 + 280 for the CPU in the case, and the 2x 560 external setup for the GPU. Each loop gets 2x D5s. The GPU definitely needs a third radiator, but I'm out of tubing and have no idea where I'd put it. Probably about time I figure out how to build some sort of external enclosure to fit all this crap in...

Final result: WOW. Load temps on my graphics card are not only back to where I remembered them, but better. With the pumps at full speed, I'm seeing GPU temps that are only 8*C above water temps, even in TS Extreme pulling well over 400w. With the pumps backed down to ~2300 RPM, I'm still seeing temps that are only 10, maybe 11*C above water temps.
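The flow-rate effect described above has a simple first-order model: the coolant's temperature rise through the loop is ΔT = P / (ṁ·c_p), and the block's water-to-die delta also shrinks as flow climbs. A rough sketch with illustrative numbers (flow rates are assumptions, not measured from this loop):

```python
# Coolant temperature rise through a loop: dT = P / (m_dot * c_p).
# Doubling flow halves this rise; the block's convective water-to-die
# delta also shrinks with flow, which is why the GPU gap dropped.
def coolant_rise_c(power_w: float, flow_lpm: float) -> float:
    c_p = 4186.0              # J/(kg*K), specific heat of water
    m_dot = flow_lpm / 60.0   # L/min -> kg/s (1 L of water ~ 1 kg)
    return power_w / (m_dot * c_p)

for flow in (0.5, 1.0, 2.0):  # hypothetical loop flow rates, L/min
    print(f"{flow} L/min: {coolant_rise_c(400, flow):.1f} C rise at 400 W")
```

At 400 W, going from 0.5 to 2.0 L/min cuts the loop's own rise from roughly 11.5 °C to under 3 °C, which matches the experience of doubling up pumps shrinking the die-to-water delta.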

I've never managed to get through Port Royal at 2190mhz before, not even on the KPE XOC BIOS with 1.125v, so I decided to give it a try again now that I have good temps. To my amazement, it passed first try. Peak temps 35*C. The real shocker was looking in afterburner afterwards and reading it only ran *1.081v* instead of 1.093v.

10,719 - would probably be an easy 10.8k if I ran the mem at 8100 and set texture filtering to high performance in NVCP. But these latest hotfix drivers get the big fat "not approved", so I won't bother.









I scored 10 719 in Port Royal - Intel Core i7-5820K Processor, NVIDIA GeForce RTX 2080 Ti x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)


----------



## tps3443

pewpewlazer said:


> Finally made my way into the "sub 40c load temp" club and pulled off 2190mhz for Port Royal (haven't tried anything else yet). tl;dr is flow rate matters for temps and temps matter for clocks (duh).
> 
> Over the last year, my water loop had spiraled out of control in pursuit of lower water temps with minimal noise. My simple 420 + 240 loop in a mid-tower ATX case quickly ballooned into a full sized ATX case sporting 2x 420 rads and a pair of D5s, with an additional 2x 560 rads mounted externally and hooked up with some Koolance QD4 quick disconnects. I achieved the <5*C water temp delta I dreamed of, but my GPU temps seemingly tanked. I remembered the card running 10-12*C above water temps when I first got it... now I was seeing 15-16*C under heavy loads.
> 
> Seeing as I was running all this off what amounted to a single D5 at full speed (2x D5 @ 3400 RPM), flow rate surely had to have been bad. I'm well beyond the point of no return now, so I decided to add ANOTHER pair of D5s to the loop.
> 
> Ended up deciding the reasonable thing to do would be to go the dual loop route. Settled on a 420 + 280 for the CPU in the case, and the 2x 560 external setup for the GPU. Each loop gets 2x D5s. The GPU definitely needs a third radiator, but I'm out of tubing and have no idea where I'd put it. Probably about time I figure out how to build some sort of external enclosure to fit all this crap in...
> 
> Final result: WOW. Load temps on my graphics card are not only back to where I remembered them, but better. With the pumps at full speed, I'm seeing GPU temps that are only 8*C above water temps, even in TS Extreme pulling well over 400w. With the pumps backed down to ~2300 RPM, I'm still seeing temps that are only 10, maybe 11*C above water temps.
> 
> I've never managed to get through Port Royal at 2190mhz before, not even on the KPE XOC BIOS with 1.125v, so I decided to give it a try again now that I have good temps. To my amazement, it passed first try. Peak temps 35*C. The real shocker was looking in afterburner afterwards and reading it only ran *1.081v* instead of 1.093v.
> 
> 10,719 - would probably be an easy 10.8k if I ran the mem at 8100 and set texture filtering to high performance in NVCP. But these latest hotfix drivers get the big fat "not approved", so I won't bother.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 10 719 in Port Royal - Intel Core i7-5820K Processor, NVIDIA GeForce RTX 2080 Ti x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)


I know, it’s so crazy how temperature directly controls how much voltage something needs. I’ve managed to get my 2080Ti into the 35C range with my (2) 360s playing Cyberpunk. Obviously this isn’t nearly as much power draw as Time Spy or Port Royal, but it makes a big difference. It took a lot of work to get here: noisy fans, door pretty much off, and reduced system voltages for other components too.

I have also been bitten by this watercooling bug, and it just amazes me how much money I could dump into cooling alone lol.

Just out of curiosity, what swayed you away from just going with chilled water? Is it something you'd consider, even after going full multiple-rad, multiple-pump?

I am gonna order some of those EKWB Meltemi fans that are 38mm thick, (6) total, 1,200-3,500RPM PWM. I’m excited to try them out. And I am going dual D5.

I only use (2) 360s, so I may be wasting my time with these.


----------



## geriatricpollywog

pewpewlazer said:


> not even on the KPE XOC BIOS with 1.125v


NVIDIA GeForce RTX 2080 Ti video card benchmark result - Intel Core i7-10700K Processor,Micro-Star International Co., Ltd. MEG Z490 UNIFY (MS-7C71) (3dmark.com) 

Are you running a KPE? What settings on the Classified tool?


----------



## J7SC

...yeah, 2080 Tis love great cooling! Bone-stock BIOS Aorus (single and dual) Port Royal below; primary card at 1.043V. Dual-loop cooling makes sense, especially with two cards at 380W each. The GPU-specific loop has dual D5s and triple XSPC 360/55s. Second pic below is temps for the dual 2080 Ti Port Royal run.


----------



## geriatricpollywog

J7SC said:


> ...yeah, 2080 Tis love great cooling ! Bone stock bios Aorus (single and dual) Port Royal below; primary card at 1.043v. Dual loop cooling makes sense, especially with two cards and each at 380W. GPU-specific loop has dual D5s and triple XSPC 360/55s. Second pic below is temps for dual 2080 Ti Port Royal
> 
> View attachment 2470732
> 
> 
> View attachment 2470733


Your NVLink setup is dope. I moved to a 3090, but my Flight Simulator is not as smooth as it was on the 2080ti and the new card uses 50% more power.


----------



## J7SC

0451 said:


> Your NVLink setup is dope. I moved to a 3090, but my Flight Simulator is not as smooth as it was on the 2080ti and the new card uses 50% more power.


Cheers - that NVLink setup actually works for me most of the time, including in MS FlightSim 2020, Metro Exodus, and various Crytek titles in 'NVLink CFR'. Unfortunately, it does not work (or not yet) for Cyberpunk 2077. That's because 'CFR', undocumented as it was in the first place, only existed between November '19 and June '20, the latter well before drivers recognized Ampere 3090s. I am under no illusion that NVLink 3090s would hit an insane 80 to 110+ fps in 4K / Ultra MS FS 2020 / NVLink CFR. The thing is that with CFR, there's no micro stutter (unlike regular SLI AFR)...

I am looking for a pair of specific custom triple 8-pin 3090s and w-blocks, but with the pair of 2080 Tis I already have, I don't feel shortchanged at all until I find what I am looking for...

BTW everyone...


----------



## pewpewlazer

tps3443 said:


> I know it’s so crazy How temperature directly controls how much voltage something needs. I’ve managed to get my 2080Ti in the 35C range with my (2) 360’s playing cyberpunk. Obviously this isn’t nearly as much power draw as Timespy or port royal. But it makes a big difference. It took a lot of work to get here. Noisy fans, door pretty much off, and reduced system voltages for other components too.
> 
> I have also been bitten by this watercooling bug. And it just amazes me how much money I could dump in to just cooling lol
> 
> Just out of curiosity, what Swayed you from just going with chilled water? Is this something you consider? Even after going full multiple rad, multiple pump?
> 
> I am gonna order some of those EKWB Meltemi fans that are 38mm thick (6) total 1,200RPM- 3,500RPM PWM. I’m excited to try these out. And I am going dual D5.
> 
> I only use (2) 360’s So I may be wasting my time going on with these


Problems with chilled water:
1. Condensation
2. Noise
3. Noise

Insulating is a PITA. And I can't even fathom how you insulate a full cover GPU block for 24/7 use...

And I doubt there's any sort of "chiller" in existence that is similarly quiet to my current setup, let alone one maintaining that noise level while achieving sub-ambient temperatures with a 400W+ heat load...

For reference, I run my fans at 700-750 RPM, and running my D5s at ~3400 RPM was an unwelcome compromise.

PS those "Meltemi" fans are kind of... meh...








EK-Meltemi 120ER Fan Review


EK Water Blocks adds to their fan portfolio with the massive 38 mm thick Meltemi. Available in their -ER (extended range) version, the EK-Meltemi aims to take their popular Vardar fan as a base and build upon it with a bigger, badder motor, and promises high static pressure optimization for PC...




www.techpowerup.com





If you're running 120mm fans and willing to pay insane prices for fans, the NF-A12x25 are the only thing worth considering IMO.



0451 said:


> NVIDIA GeForce RTX 2080 Ti video card benchmark result - Intel Core i7-10700K Processor,Micro-Star International Co., Ltd. MEG Z490 UNIFY (MS-7C71) (3dmark.com)
> 
> Are you running a KPE? What settings on the Classified tool?


No, just an EVGA XC Ultra. Reference PCB card.


----------



## zkareemz

deleted


----------



## J7SC

pewpewlazer said:


> Problems with chilled water:
> 1. Condensation
> 2. Noise
> 3. Noise
> 
> Insulating is a PITA. And I can't even fathom how you insulate a full cover GPU block for 24/7 use...
> 
> And I doubt there's any sort of "chiller" in existence that is similary quiet to my current setup, let alone maintaining that noise level while achieving sub-ambient temperatures with a 400w+ heat load...
> 
> For reference, I run my fans at 700-750 RPM, and running my D5s at ~3400 RPM was an unwelcomed compromise.
> 
> PS those "Meltemi" fans are kind of... meh...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> If you're running 120mm fans and willing to pay insane prices for fans, the NF-A12x25 are the only thing worth considering IMO.
> (...)


...yeah, I have a 1000 W phase cooler I haven't used in years, given the noise. 

Re. fans, I had collected something like 12x Gentle Typhoons (3K RPM model) from server builds 7+ years ago...they still work great and are _relatively quiet_ (for 3K RPM fans), though they also have a custom shroud I built for airflow management, which reduces the sound as a nice side-effect.

For new fans, I would also go with the Noctua NF-A12x25 (chromax) for max performance and durability. For a budget build, I hear good things (beyond just the low price) about the Arctic P12s below, but haven't seen anything yet re. long-term reliability.


----------



## sultanofswing

Been running the Arctics for a year now with no issues; only time will tell.


----------



## Imprezzion

I know one thing.. the Cooler Master MasterFan 120s I use are absolutely terrible quality. The bearings are horrendous and barely last 3 months, but they are super cheap and very nice to look at RGB-wise.

I must've gone through like 15 of them the last year and a half.. almost ran out of spares lol...


----------



## pewpewlazer

J7SC said:


> ...yeah, I have a 1000 W phase cooler I haven't used in years, given the noise.
> 
> Re. fans, I had collected s.th. like 12x GentlyTyphoons (3K rpm model) from server builds 7+ years ago...they still work great and are _relatively quiet_ (for 3K rpm fans) though they also have a custom shroud I built for airflow management which reduces the sound as a nice side-effect.
> 
> For new fans, I would also go with Noctua nf-a12x25 (chromax) for max performance and durability. For a budget built, I hear good things (apart from the low price) about the Arctic P12s below, but haven't seen anything yet re. long-term reliability


Gentle Typhoons... that's some old school cool right there! Definitely out of my price range at the time, but the Yate Loon D12SLs were no slouch either. I even paid top dollar (read: about 5 bucks each) for authentic Yate Loon models and not the knockoffs.

I just looked through your build log (which I love BTW - neat to see oldschool stuff like the MCRES and MCP655 mixed with a modern build) to see what this "shroud" was. Are you referring to the box that appears to be hiding 2x 360 rads on the front side of your "case"? Is there anything special going on inside there or is it basically just forming a wind tunnel of sorts between the two radiators?

As far as the Arctics go, they're great fans (for the price at least). I remember seeing a huge fan roundup a few years back and this "Arctic F12" fan caught my eye. Unbelievably good performance, especially given the bargain basement price tag. So when I noticed the new P14 PWM, I decided to give them a go. I didn't do any direct performance comparisons, but from the couple semi-decent tests that have appeared online, they seem to perform extremely well.

I've seen some complaints about PWM motor noise/humming at certain RPM ranges with the Arctics. I run mine at low speeds though and haven't noticed anything like that. My previous Corsair ML140s made an EXTREMELY irritating noise, almost like a buzzing, even at 550-600 RPM. I'm running the Arctics at 700-750 RPM and they're much quieter to my ears.

Don't expect Noctua quality. This should be a given when you're only paying 7-8 bucks a pop for a fan (if buying a 5 pack). I've ended up with 30 of these things, bought at 3 separate times. The fans I ordered in February 2020 were rather different from the fans I ordered originally in August 2019. I'm assuming there was a revision at some point. The newer fans have a longer pigtail for the "PST" feature, and a slightly different PWM response curve. The most recent batch I ordered in July 2020 seems to be identical to my February 2020 fans. Just keep in mind if you order Arctics now, and decide to get some more a year down the road, they might not be identical...


----------



## J7SC

pewpewlazer said:


> Gentle Typhoons... that's some old school cool right there! Definitely out of my price range at the time, but the Yate Loon D12SLs were no slouch either. I even paid top dollar (read: about 5 bucks each) for authentic Yate Loon models and not the knockoffs.
> 
> I just looked through your build log (which I love BTW - neat to see oldschool stuff like the MCRES and MCP655 mixed with a modern build) to see what this "shroud" was. Are you referring to the box that appears to be hiding 2x 360 rads on the front side of your "case"? Is there anything special going on inside there or is it basically just forming a wind tunnel of sorts between the two radiators?
> (...)


Tx...and yeah, the rads on the front (and for that matter the rear) are mounted 'on their side' and form a wind-tunnel w/ the baffles, behind which are 6 fans you can't see. Most of the build log pics also don't have the glass cover of the TT Core P5 on because of reflections; that cover also helps. The whole thing is set up to draw fresh air from the left (by a window) and move it through the baffles and exhaust on the right. The only fan that doesn't follow that pattern is the thin 120mm 'on top' used to cool the VRM and RAM. When I first turned the finished system on, there was so much air moving from left to right that it blew a bunch of magazines off a nearby table...


----------



## tps3443

pewpewlazer said:


> Problems with chilled water:
> 1. Condensation
> 2. Noise
> 3. Noise
> 
> Insulating is a PITA. And I can't even fathom how you insulate a full cover GPU block for 24/7 use...
> 
> And I doubt there's any sort of "chiller" in existence that is similary quiet to my current setup, let alone maintaining that noise level while achieving sub-ambient temperatures with a 400w+ heat load...
> 
> For reference, I run my fans at 700-750 RPM, and running my D5s at ~3400 RPM was an unwelcomed compromise.
> 
> PS those "Meltemi" fans are kind of... meh...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> If you're running 120mm fans and willing to pay insane prices for fans, the NF-A12x25 are the only thing worth considering IMO.


----------



## tps3443

Looking at the blade style and spacing of those Arctic P12 fans, I imagine they work really well on radiators. The blades are tightly spaced for good static pressure. I wish I'd come across those when I bought my current fans.

They sell them in 5 packs, so you're forced to buy 10 or more lol.

I'm gonna be placing a new water cooling parts order next month. Looking forward to it. I may go with clear tubing and a purple pastel coolant; I'm kinda bored with my matte black EKWB tubing. I may go dual VPP755s since they are only about $60 each, and the Alphacool dual plexi pump top is only $80.


----------



## sultanofswing

I have 16 of the Arctic P12s in my current build. At 1200 RPM they do have a bit of a hum to them; at 1400 RPM they are almost silent.
My fan curve basically sets them to run from 1200-1500 RPM.
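A fan curve like the 1200-1500 RPM one described above is just piecewise-linear interpolation between a few (temperature, RPM) breakpoints. Here is a minimal sketch in Python; the temperature breakpoints are made-up values for illustration, not anyone's actual settings:

```python
# Piecewise-linear fan curve: flat below the first point,
# flat above the last, linear in between.
CURVE = [(30, 1200), (40, 1500)]  # (temp_c, rpm) breakpoints (illustrative)

def fan_rpm(temp_c, curve=CURVE):
    """Interpolate a target RPM from a sorted list of (temp, rpm) points."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    if temp_c >= curve[-1][0]:
        return curve[-1][1]
    for (t0, r0), (t1, r1) in zip(curve, curve[1:]):
        if t0 <= temp_c <= t1:
            return r0 + (r1 - r0) * (temp_c - t0) / (t1 - t0)

print(fan_rpm(35))  # midpoint of the ramp -> 1350.0
```

Adding more breakpoints (e.g. a silent idle plateau, then a steep ramp) is just a longer `CURVE` list.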


----------



## tps3443

sultanofswing said:


> I have 16 of the Arctic P12's in my current build. At 1200 RPM they do have a bit of a hum to them, At 1400 RPM they are almost silent.
> My fan curve sets them to run from 1200-1500 RPM basically.


I want to try those 38 mm thick EKWB Meltemi fans. I found some for $25 apiece, which is still kinda crazy. They even include the 42 mm long radiator screws in the box for the added thickness. Still deciding if I want the 0-RPM-stop 500-1,200 RPM or the 3,500 RPM models. Gonna order some early next month though. This is mostly about aesthetics.


Anyways, nice build you have there.


----------



## J7SC

der8auer did a thorough test of 120mm fans about a year ago for the Caseking TV channel (thus no English version, though I left the URL info in the pic below)...the Arctic P12 PWM PST is in the top 3 overall (blue bar is airflow volume, orange bar is sound / dBA).










I also have 9x Sunon 'server fans' left over; they make the Gentle Typhoon look thin (which it isn't). Those Sunons really move a lot of air given their high RPM, but they are just too noisy for a machine in a regular room.


----------



## tps3443

0451 said:


> Your NVLink setup is dope. I moved to a 3090, but my Flight Simulator is not as smooth as it was on the 2080ti and the new card uses 50% more power.


I am telling you. I really like his system with the matching Waterforce cards too. Plus his silicon is really good on both of them. I wonder if Aorus does some light binning with those Aorus Waterforce WB cards.


----------



## J7SC

tps3443 said:


> I am telling you. I really like his system with the matching waterforce cards too. Plus his silicon is really good on both of them. I wonder if Aorus does some lite binning with those Aorus Waterforce WB cards.


...not sure if this is still the case w/ the current top-model RTX 3K, given production bottlenecks and strong demand, but with top-model RTX 2K such as the 2080 Ti, NVIDIA would do the base binning, and a vendor would have to buy a given volume of 'so-so' GPU chips from NVIDIA before an allotment of the good stuff would be made available to vendors such as Gigabyte / Aorus.

Vendors would then allot the top bins to their more expensive models...for the Gigabyte / Aorus 2080 Ti, that would be the three Xtreme versions (air, AIO, WB). I don't think there would have been further binning within the Xtreme lines, but I am not sure. While the three seem to share the same PCB, the WB model does carry a different BIOS due to the full-block factory water-cooling. Fun fact: my two Xtr WB cards are only two serial numbers apart, but one is 30-45 MHz faster than the other, and that at slightly lower GPUv and the same temps...


----------



## tps3443

J7SC said:


> ...not sure if this is still the case w/ current top model RTX 3K, given production bottlenecks and strong demand, but with top model RTX 2K such as 2080 Ti, NVidia would do the base binning, and the vendor would have to buy a given volume of 'so-so' GPU chips from NVidia before an allotment of the good stuff would be made available to the vendors, such as Gigabyte / Aorus.
> 
> Vendors would then allot the top bins to their more expensive models...for the Gigabyte / Aorus 2080 Ti, that would be the three Xtreme versions (air, AIO, WB). I don't think there would have been further binning within the Xtreme lines, but I am not sure. While the three seem to have the same PCB, the WB model does carry a different bios though due to full-block factory water-cooling. Fun fact: My two Xtr WB cards are only two serial number slots apart, but one is 30 - 45 MHz faster than the other, and that at slightly lower GPUv and same temps...


Have you seen the new Aorus RTX 3090 Waterforce WB model? It looks pretty good.

Would you replace your (2) with those?


----------



## J7SC

tps3443 said:


> Have you seen the new Aorus RTX3090 Waterforce WB model? It looks pretty good.
> 
> Would you replace your (2) with those?


When the time comes, I think so...just which of the Aorus Xtr, though? For now, with the Aorus 3090 Xtr, this is the first time (compared to their 1080x and 2080 Ti top offerings) that the WB doesn't have the top PCB and clocks, per spoiler pic 1 below. The air-cooled one has 3x 8-pin, for example, instead of 2x 8-pin, and also a higher clock. However, they may yet add a 3x 8-pin WB to that as well - or one can just get an aftermarket block (Bykski etc. already offer one).

_Edit _- After over 1.5 yrs @ HoF / PR with the stock BIOS, I think I am about to 'fall off HoF' (was as high as 15th, now 97th ). With more and more shunt-modded and/or hi-po 3090 SLI out there, there's not much 2080 Tis can do. Then again, if you check the left part of pic 2 (from today), a pair of decent 2080 Tis is not too far off a pair of stock, not-so-well-tuned 3090s - so no rush to 'retire' the 2080 Tis, not least as they are the faster solution for FlightSim 2020 (via CFR).



Spoiler


----------



## Brko

Hello there. I am late, but better late than never  2 days ago I got my first Turing card - an MSI RTX 2080 Ti Gaming X Trio. Getting used to it. It replaced my old GTX 1080, and I am simply blown away by what an upgrade it is for 1440p (2K) resolution.


----------



## gfunkernaught

So I was playing Black Ops Cold War, everything maxed, 4K, DLSS ultra quality (via editing the config file), using an OC of [email protected], which would throttle down to 2145MHz once it got warm enough, in my case 37c. As I was playing, the average temp was 36c, and I was monitoring the frequency: it held 2145MHz the whole time, about an hour or so. Then the game crashed. Event Viewer says "Display driver nvlddmkm stopped responding and has successfully recovered." That means the OC failed. Now, is that because it needed more voltage, or was 36c too "warm" for [email protected]? Anyone have some insight? I'm thinking it probably needed more voltage, but I am limited to 1093mv using the KFA 380w BIOS.
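For catching exactly when the clock sags or a crash hits, nvidia-smi can dump clocks, temperature and power as CSV at a fixed interval, which is easier to review afterwards than an overlay. A rough Python wrapper as a sketch; the query fields are standard nvidia-smi ones, but verify them against `nvidia-smi --help-query-gpu` on your driver version:

```python
import csv
import io
import subprocess
import time

# Fields to poll; standard nvidia-smi query names (check your driver version).
QUERY = "timestamp,clocks.gr,temperature.gpu,power.draw"

def parse_sample(line):
    """Parse one CSV line from nvidia-smi (noheader,nounits mode)
    into (timestamp, clock_mhz, temp_c, power_w)."""
    ts, clk, temp, power = next(csv.reader(io.StringIO(line)))
    return ts.strip(), int(clk), int(temp), float(power)

def log_gpu(interval_s=1.0):
    """Poll the GPU once per interval and print clock/temp/power."""
    while True:
        out = subprocess.check_output(
            ["nvidia-smi", f"--query-gpu={QUERY}",
             "--format=csv,noheader,nounits"], text=True)
        for line in out.strip().splitlines():
            ts, clk, temp, power = parse_sample(line)
            print(f"{ts}  {clk} MHz  {temp} C  {power} W")
        time.sleep(interval_s)

# To start logging (runs until interrupted):
# log_gpu()
```

Redirect the output to a file and the last lines before a driver reset show whether the card was still holding its bin or had already stepped down.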


----------



## Imprezzion

Usually that is too high of a core clock for the card, or too high of a memory clock. Both can cause driver crashes, although for me memory instability tends to show up as DirectX crashes more often.

I've been rock solid @ 2085-2070MHz core, 7900 memory (Samsung) @ 1.093v with the KFA2 380w BIOS in Cyberpunk, which is currently the most demanding game overclock-wise that I play, together with Division 2.

I am considering stripping the card down, removing the G12 + X52, and swapping the PK-3 for some Conductonaut, but I'm not sure it will actually drop temps. The poor rad is already super heat-saturated, so I'm not sure how much better paste can do.

I'm sitting around 46-48c now, and that is where this card is happy; dropping temps doesn't allow me to run more clock speed at all. I tried it with the rad in a bucket of ice water and salt just to see what would happen if the card ran way cooler, and yes, at like 33c it can pass 3DMark / Superposition at 2190MHz, but it's not any more stable long-term in games. It will still crash at 2115-2100MHz after like 20 minutes, so I don't think the few degrees Conductonaut might get me is going to do anything for my actual daily clocks. Maaaaybe it will allow me to run an XOC BIOS at 1.125v with slightly higher clocks like 2130 at best, but is that worth all the work? Probably not..

The load temps are super steady, like a perfect flat line at 48c, so the mount with the PK-3 seems to be pretty much perfect, as there's no variance in temps at all..


----------



## gfunkernaught

Imprezzion said:


> Usually that is too high of a core clock for the card or too high of a memory clock. Both can cause driver crashes although I see memory crashes happen for me with DirectX crashes more often.
> 
> I've been rock solid @ 2085-2070Mhz core 7900 memory (samsung) @ 1.093v with the KFA2 380w BIOS in Cyberpunk which is currently the most demanding game overclock wise that I play together with Division 2.
> 
> I am considering stripping the card down and removing the G12 + X52 to swap the PK-3 for some Conductonaut but I'm not sure it will actually drop temps. The poor rad is already super temp saturated so I'm not sure how much better paste will actually drop temps.
> 
> I'm sitting around 46-48c now and that is where this card is happy and dropping temps doesn't allow me to run more clock speed at all. I tried it with the rad in a bucket with ice water and salt just to see what would happen if the card ran way cooler and yes, at like 33c it can pass 3DMark / Superposition at 2190Mhz but it's not any more stable long term in games. It will still crash at 2115-2100Mhz after like 20 minutes so I don't think for daily usage those few degrees Conductonaut might get me is going to do anything for my actual clocks. Maaaaybe it will allow me to run an XOC BIOS 1.125v at slightly higher clocks like 2130 at best but is that worth all the work? Probably not..
> 
> The load temps are super steady, like a perfect flat line on 48c, so the mount with the PK-3 seems to be pretty much perfect as there's no variance in temps at all..


Right, I know the driver will crash if the clock is too high, but it's usually more detailed than that. I'm just trying to figure out if it needs more voltage, or needs to be cooler, or both.

Oh, and 1093mv for 2085MHz seems like way too much voltage. Which 2080 Ti do you have? Try setting a v/f curve of [email protected] and see if that works. My card can do [email protected] for hours and hours, but the temps are mostly the same as when I do [email protected], so I choose the latter.

As far as XOC BIOSes go, I ran those in my garage once last winter and hit [email protected], which got 10759 in Port Royal, and [email protected] scored 7148 in Time Spy Extreme. Max temp was 19c on the GPU; ambient was 4c. But when I opened Shadow of the Tomb Raider and got to the main menu, it crashed within seconds. Here are the links to my results.









I scored 7 148 in Time Spy Extreme

Intel Core i7-8700K Processor, NVIDIA GeForce RTX 2080 Ti x 1, 16384 MB, 64-bit Windows 10

www.3dmark.com

https://www.3dmark.com/pr/172411



I currently use the EKWB Vector and have been contemplating getting a new block, maybe the German "aquacool" block or even the Corsair block. I've read many accounts of the Vector not making good contact with the board, which makes some sense to me: I see the GPU hit 40c sometimes, yet the air coming out of the radiators doesn't feel very warm, while the back of the GPU, where the backplate is, gets super hot. So I'm thinking a good amount of heat coming from the board isn't being transferred properly.


----------



## J7SC

gfunkernaught said:


> Right I know that the driver will crash at the clock is too high but it's usually more detailed than that. I'm just trying to figure out if it needs more voltage or needs to be cooler or both.
> 
> Oh and 1093mv for 2085mhz that seems like way too much voltage. Which 2080 Ti do you have? Try setting a v/f curve of [email protected] and see if that works. My card can do [email protected] for hours and hours but the temps are mostly the same as when I do [email protected], so I choose the latter.
> 
> As far as XOC bios go, I ran those in my garage once last winter and hit [email protected] and got 10759 in Port Royal and [email protected] scored 7148 in Time Spy Extreme. Max temp was 19c on the gpu, ambient was 4c. But when I opened shadow of the tomb raider and got to the main menu, crashed within seconds. Here are the links to my results.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I currently use the ekwb vector and have been contemplating getting a new block, maybe the german aquacool block or even the corsair block. I've read many accounts of the vector not making good contact with the board, which makes some sense to me because I see the gpu hit 40c sometimes, yet the air coming out of the radiators doesn't feel very warm, but the back of the gpu where the backplate is super hot. So I'm thinking a good amount of heat coming from the board isn't being transferred properly.


The problem is that higher voltage means higher temps (and also leaves you less room in the power-limit budget for higher clocks), unless you do "XOC BIOS + cold garage winter runs" - not ideal for gaming for an hour+. If you have voltage control, I would try reducing the GPUv a couple of notches and see if 2145 MHz remains stable even after an hour of game play, and also record your temps. There definitely is a speed step down in NV Boost at around 38 C, but not all monitoring software updates fast enough for that to necessarily show up. Also, at least on my cards, there appear to be even earlier speed steps down (at around 28 C - 32 C?).
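To make the step-down behavior concrete: GPU Boost moves the clock in roughly 15 MHz bins, dropping one bin each time a temperature threshold is crossed. NVIDIA doesn't publish the thresholds, so the values in this toy model are assumptions chosen to mimic the ~28 C / ~32 C / ~38 C steps described above, not real firmware data:

```python
# Toy model of GPU Boost temperature step-downs.
# The ~15 MHz bin matches observed Turing boost granularity;
# the thresholds below are ASSUMED for illustration only.
BIN_MHZ = 15
TEMP_STEPS_C = [28, 32, 38, 48, 58]  # assumed step-down thresholds

def boost_clock(base_boost_mhz, temp_c):
    """Drop one boost bin for each temperature threshold crossed."""
    bins_lost = sum(1 for t in TEMP_STEPS_C if temp_c >= t)
    return base_boost_mhz - bins_lost * BIN_MHZ

for temp in (25, 30, 36, 40, 50):
    print(temp, "C ->", boost_clock(2160, temp), "MHz")
```

The practical point of the model: an OC that is "stable at 2145" at 36 C is really being tested at a different effective bin than the same offset at 40 C, which is why logging temps alongside clocks matters.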


----------



## gfunkernaught

J7SC said:


> The problem is that higher voltage means higher temps (and also leaves you less room in the PL budget for higher clocks), unless you do the "XOC bios + cold garage in the winter runs" - not ideal for gaming for an hour+. If you have voltage control, I would try to reduce the GPUv a couple of notches and see if 2145 MHz remains stable even after an hour of game play, and also record your temps. There definitely is a speed step down in NVBoost at around 38 C, but not all monitoring software updates fast enough for that to necessarily show up. Also, at least on my cards, there appear to be even earlier speed steps down (at around 28C - 32 C?)


I think it is voltage related. [email protected] + 38c is probably not stable for my card. I've never had a crash when gaming for long runs with [email protected]. Unless I can keep the GPU below 35c, it will crash.


----------



## Imprezzion

gfunkernaught said:


> Right I know that the driver will crash at the clock is too high but it's usually more detailed than that. I'm just trying to figure out if it needs more voltage or needs to be cooler or both.
> 
> Oh and 1093mv for 2085mhz that seems like way too much voltage. Which 2080 Ti do you have? Try setting a v/f curve of [email protected] and see if that works. My card can do [email protected] for hours and hours but the temps are mostly the same as when I do [email protected], so I choose the latter.
> 
> As far as XOC bios go, I ran those in my garage once last winter and hit [email protected] and got 10759 in Port Royal and [email protected] scored 7148 in Time Spy Extreme. Max temp was 19c on the gpu, ambient was 4c. But when I opened shadow of the tomb raider and got to the main menu, crashed within seconds. Here are the links to my results.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I currently use the ekwb vector and have been contemplating getting a new block, maybe the german aquacool block or even the corsair block. I've read many accounts of the vector not making good contact with the board, which makes some sense to me because I see the gpu hit 40c sometimes, yet the air coming out of the radiators doesn't feel very warm, but the back of the gpu where the backplate is super hot. So I'm thinking a good amount of heat coming from the board isn't being transferred properly.


I got a Gainward Phoenix GS with a Kraken X52 + G12 strapped to it.

I mean, this card is just simply a bad clocker. 2085 is all she's got at 1.093v. If I drop down to, say, 1.050v, it doesn't even start a game; it will just hard-lock in the menu screen of Halo MCC lol.

It does better on XOC with 1.125v; it will hold 2145-2130MHz in general, but it won't do it in Cyberpunk. I can play Division 2 or Borderlands 3 or Warzone for days at 2145 with no issues, but Cyberpunk won't run stable without dropping down to 2115-2100MHz.

I would've replaced the card with a secondhand one already, and would've binned a few secondhand cards, if I could get them that is; most of them are too new and don't have a flashable BIOS due to the newer firmware.

So, I have to find a well-priced secondhand card which will fit a G12 + X52 (it fits non-reference PCBs just fine) and which is old enough to allow a BIOS flash that actually clocks better.

EDIT: I only did the temperature testing with the XOC BIOS.. with the ice bucket and such.. never with the normal KFA2 BIOS..
So, I just cranked the fans up to 100% / 2000RPM and played about an hour of Cyberpunk. It remains stable at 2115MHz core, 7900MHz memory @ 1.093v, average temp 37c, max 38c. If I Conductonaut it, I might be able to pull another 2-3c off, so yeah, my card IS very temp sensitive. 2115MHz is an almost instant crash @ 48c, but runs forever at 38c. Time to disassemble it and get the Conductonaut out


----------



## Audioboxer

jura11 said:


> Are you using all ports on your GPU? If not then I would suggest EVGA FTW3 BIOS, it is 370-380W BIOS or Galax 380W BIOS
> 
> Both are great BIOS just not sure on fan, I never run GPU with stock coolers
> 
> Hope this helps
> 
> Thanks, Jura


Just a follow-up on this: the EVGA FTW3 BIOS flashed fine for me; the power limit is now up to 373W. The KFA2 380W seemed to moan about the XUSB FW Version ID and refused to flash.

Has anyone else seen this before? A 2080 Ti that will accept the 373W EVGA FTW3 BIOS but refuses to flash the KFA2 380W?

*edit* - Answered all my own questions by reading more of the topic!


----------



## tps3443

Check this thing out, guys! It's so tempting!!


That's a REAL Galax HOF WC 2KW 2080 Ti. It has an XOC 2KW BIOS and a full external voltage controller. Comes with the Hall of Fame waterblock and the HOF air cooler.

Not a bad deal, considering the RTX 3080s are pretty weak upgrades, if it's an upgrade at all.. And an RTX 3090 is about $1700 for the cheapest one available.


I am considering this!! I need to sell my 2080 Ti and grab it!!










GALAX RTX 2080 ti PLUS Hall of Fame (includes HOF 3 Fan cooler & Waterblock) | eBay


The waterblock was applied with kryonaut thermal paste.



www.ebay.com


----------



## Imprezzion

Awww RIP, it held 2130 @ 1.093v quite well at lower temps now that I've Conductonauted the GPU and raised my radiator fan speeds quite a bit, but it did crash after like 30 minutes of GTA V Online racing. I'll drop to 2115MHz I guess..

The extra noise of my fans being at 2K RPM in games doesn't really bother me. They aren't very loud at 2K RPM, and my headset and the game sound drown it out pretty well.


----------



## J7SC

Imprezzion said:


> Awww RIP, it held 2130 @ 1.093v quite well at lower temps now that I Conductonauted the GPU and raised the fan speed of my radiators quite a bit but it did crash after like 30 minutes of GTAV Online racing. I'll drop to 2115Mhz I guess..
> 
> The extra noise of my fans being on 2k RPM in games doesn't really bother me. They aren't very loud at 2k RPM and my headset and the game sound drowns it out pretty well.


What kind of rads are you running (and is it a GPU-only loop, or a CPU + GPU loop)?



tps3443 said:


> Check this thing out guys! Its so tempting !!
> 
> Thats a REAL Galax HOF WC 2KW 2080Ti. It has a XOC 2KW bios, and full external voltage controller.. Comes with Hall of fame waterblock, HOF and air cooler.
> Not a bad deal. I mean considering the RTX3080’s are pretty weak upgrades. if it’s an upgrade at all.. And an RTX3090 is about $1700 for the cheapest one available.
> I am considering this!! I need to sell my 2080Ti and grab it!!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> GALAX RTX 2080 ti PLUS Hall of Fame (includes HOF 3 Fan cooler & Waterblock) | eBay
> 
> 
> The waterblock was applied with kryonaut thermal paste.
> 
> 
> 
> www.ebay.com


Nice find...trying to figure out whether this is a HoF 'OC Lab' (which usually were provided just to select sub-zero XOCers) or a regular HoF...either is a superb card, though I'd be a bit nervous about whether it led a hard life via the external voltage controller or was babied, since the seller isn't accepting returns.


----------



## Imprezzion

J7SC said:


> What kind of rads are you running (and GPU only loop, or CPU + GPU loop) ?


AIO. The CPU is running a 280 EK loop based on the EK Phoenix kit, and the GPU is running a push-pull Kraken X52 lol.


----------



## jura11

Audioboxer said:


> Just a follow up on this, the EVGA FTW3 BIOS flashed fine for me, power usage is now up to 373W. The KFA2 380W seemed to moan about the XUSB FW Version ID and refused to flash.
> 
> Has anyone else seen this before? A 2080Ti that will accept the 373W EVGA FTW3 BIOS but refuse to flash the KFA2 380W?
> 
> *edit* - Answered all my own questions by reading more of the topic!


Hi there 

I think the EVGA 373W is one of the better BIOSes for the RTX 2080 Ti; it works great on reference GPUs, and I'm using it on my Zotac RTX 2080 Ti AMP. The best BIOS for the Asus RTX 2080 Ti Strix / Matrix is the 366W one, or the XOC BIOS, which I'm using daily; I just capped the power limit to 44% for rendering and some benchmarks.

I have used the Galax 380W BIOS too, and I'm not sure why, but I didn't like it. Now with the EVGA BIOS my Zotac RTX 2080 Ti AMP will do a stable 2070-2085MHz in rendering; previously, with the Galax 380W, I needed to run 2025MHz as the max, and anything above that would crash. In gaming it does 2100-2115MHz with the Galax and 2130-2145MHz with the EVGA. The Strix, on the other hand, will do 2175-2190MHz in gaming; in Witcher 3 the Strix is stable at 2205MHz and holds that till I reach 36-38°C, then it will downclock to 2190MHz.

On this XUSB error: I think your GPU already has the newer XUSB firmware, and I'm not sure why we can't downgrade it to an older one. Did you try every NvFlash version, and the --protectoff command, etc.?

Hope this helps 

Thanks, Jura
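For reference, the usual nvflash sequence for a cross-flash like the ones discussed above looks like the commands below. Flag spellings can vary slightly between nvflash builds, and the BIOS filename is just a placeholder, so treat this as a sketch: verify against the help output of the version you download, and always keep the backup.

```shell
# Run from an elevated prompt, with display driver ideally unloaded.
nvflash64 --save backup.rom      # back up the current BIOS first
nvflash64 --protectoff           # disable the EEPROM write protect
nvflash64 -6 kfa2_380w.rom       # flash, overriding the PCI subsystem ID mismatch
nvflash64 --protecton            # re-enable write protect, then reboot
```

If the tool refuses with a firmware/XUSB version complaint (as above), a different nvflash build sometimes behaves differently, but newer cards may simply not accept the older image.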


----------



## gfunkernaught

Imprezzion said:


> 2115Mhz is almost instant crash @ 48c, but runs forever on 38c. Time to disassemble it and get the Conductonaut out


Have you tried IC Diamond? I tried Thermonaut, and vs. IC Diamond it runs 2-3c warmer.


----------



## J7SC

...don't know how true / pertinent this is, but I've read several times that IC Diamond's particles can scratch the surface of an IHS, especially if it's remounted several times. While I have used liquid metal before, that takes extra precautions as well re. prep...typically, I use Gelid Extreme or MX-4 on CPUs and GPUs for regular applications. Here's an older but still relevant chart from THW.



Spoiler


----------



## tps3443

Imprezzion said:


> Awww RIP, it held 2130 @ 1.093v quite well at lower temps now that I Conductonauted the GPU and raised the fan speed of my radiators quite a bit but it did crash after like 30 minutes of GTAV Online racing. I'll drop to 2115Mhz I guess..
> 
> The extra noise of my fans being on 2k RPM in games doesn't really bother me. They aren't very loud at 2k RPM and my headset and the game sound drowns it out pretty well.


Liquid metal doesn't work that great on GPUs because it's already direct die contact.


----------



## Imprezzion

J7SC said:


> ...don't know how true / pertinent this is, but I read several times that IC Diamond's particles can scratch the surface of an IHS, especially if moved / removed several times. While I have used liquid metal before, that does take extra precautions as well re. prep...typically, I use Gelid Extreme or MX4 on CPUs and GPUs for regular applications. Here's an older but still relevant chart from THW
> 
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2471229


And that chart is why I have two 30-gram Prolimatech PK-3 packages here, and I absolutely swear by it. In my own testing it outperforms Kryonaut most of the time: Kryonaut is technically better, but it's terrible to apply, spread, and get a good mount with, whereas PK-3 is the easiest stuff ever. The rice-grain method always works with it, its viscosity makes it super easy to work with, and it cleans off easily with 99% alcohol as well. PK-3 is, IMHO, the best regular paste available of the last few years.

Back on topic: I did of course conformal-coat the area around the die with the small components on the 2080 Ti, and with Conductonaut I noticed way less of a difference on the GPU than I did on the CPU, but it still made the temps about 1-3°C lower and noticeably more stable: fewer spikes, and mostly just one steady temperature.

Well, now at 36-38°C it behaves a lot better and actually holds 2130-2115MHz fine. It did do 2145-2130 in Cyberpunk and World of Tanks, but it crashed with a DirectX error in GTAV, so one bin further down and it was fine for the rest of the evening.
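The temperature sensitivity described here is GPU Boost dropping 15 MHz bins as the core warms. A rough sketch of that behaviour, where the threshold temperature and degrees-per-bin are assumptions that vary card to card (the numbers below just happen to line up with the 2130-at-38°C / 2085-at-48°C observations in this thread):

```python
def estimated_stable_clock(base_clock_mhz: int, temp_c: float,
                           cool_temp_c: float = 38.0,
                           bin_mhz: int = 15, degs_per_bin: float = 5.0) -> int:
    """Estimate the boost clock after GPU Boost temperature binning.

    Turing downclocks in 15 MHz steps as the core warms past a
    card-specific threshold; cool_temp_c and degs_per_bin here are
    rough assumptions, not measured values.
    """
    if temp_c <= cool_temp_c:
        return base_clock_mhz
    bins_lost = int((temp_c - cool_temp_c) // degs_per_bin) + 1
    return base_clock_mhz - bins_lost * bin_mhz

print(estimated_stable_clock(2130, 36))  # cool card holds the full clock
print(estimated_stable_clock(2130, 48))  # a few bins lower when warm
```

This is also why a paste swap that only gains a few degrees can still be worth a full 15 MHz bin of sustained clock.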


----------



## Audioboxer

jura11 said:


> Hi there
> 
> I think the EVGA 373W is one of the better BIOSes for the RTX 2080 Ti; it works great on reference-PCB GPUs, and I'm using it on my Zotac RTX 2080 Ti AMP. The best BIOS for the Asus RTX 2080 Ti Strix/Matrix is the 366W one, or the XOC BIOS, which I'm using daily; I just cap the power limit to 44% for rendering or some benchmarks.
> 
> I have used the Galax 380W BIOS too, and I'm not sure why I didn't like it. Now with the EVGA BIOS my Zotac RTX 2080 Ti AMP is stable in rendering at 2070-2085MHz; previously with the Galax 380W I needed to run 2025MHz as the maximum, and anything above that would crash. In gaming it does 2100-2115MHz, and with the EVGA BIOS 2130-2145MHz. The Strix, on the other hand, will do 2175-2190MHz in gaming; in Witcher 3 the Strix is stable at 2205MHz and holds that until it reaches 36-38°C, then downclocks to 2190MHz.
> 
> On the XUSB error: I think your GPU already has a newer XUSB firmware, and I'm not sure why we can't downgrade it to an older one. Did you try every NvFlash version, and commands like --protectoff?
> 
> Hope this helps
> 
> Thanks, Jura


Yeah, I went back to the EVGA 373W, this one: EVGA RTX 2080 Ti VBIOS. No idea if it's much different from the EVGA one I ran at first, EVGA RTX 2080 Ti VBIOS. It works fine anyway.

I'm back to 8000MHz on memory and 2070-2085MHz on core depending on temperature. Some games manage 2085 consistently, but a few others push my temps to 45-46°C, which drops me down to 2070MHz. Cyberpunk for one, which seems to be incredibly sensitive to OCing and pushes cards to their limits.

I can probably get the temp down a bit more, but the trade-off would be exponentially more fan noise. I'm using a G12 and an X62 AIO with some Noctua fans. Not really worth it for 15MHz more.

I'm really at my limits, I guess; anything I try to push 2100MHz ends up unstable, or I can't get a good memory overclock working with it.
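For reference on that 8000 MHz memory figure: Afterburner reads half the effective GDDR6 data rate (7000 MHz stock = 14 Gbps), and bandwidth scales linearly across the 2080 Ti's 352-bit bus. A quick sketch of the arithmetic:

```python
def gddr6_bandwidth_gbs(afterburner_mhz: float, bus_width_bits: int = 352) -> float:
    """Memory bandwidth in GB/s from the clock Afterburner reports.

    Afterburner shows half the effective GDDR6 data rate, so the
    per-pin rate in Gbps is the reading times two, divided by 1000.
    """
    effective_gbps_per_pin = afterburner_mhz * 2 / 1000  # Gbps per pin
    return effective_gbps_per_pin * bus_width_bits / 8   # bits -> bytes

print(gddr6_bandwidth_gbs(7000))  # stock: 616.0 GB/s
print(gddr6_bandwidth_gbs(8000))  # at +1000 MHz: 704.0 GB/s
```

So the overclock above is worth roughly an extra 88 GB/s over stock.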


----------



## Imprezzion

Audioboxer said:


> Yeah I went back to the EVGA 373W, this one EVGA RTX 2080 Ti VBIOS No idea if it's much different than the EVGA I ran at first EVGA RTX 2080 Ti VBIOS Works fine anyway.
> 
> I'm back to 8000MHz on memory and 2070-2085MHz on core depending on temperature. Some games manage 2085 consistently, but a few others push my temps to 45~46 which drops me down to 2070MHz. Cyberpunk for one which seems to be incredibly sensitive to OCing and pushes cards to their limits.
> 
> I can probably get the temp down a bit more, but the trade off will be exponentially more fan noise. I'm using a G12 and a x62 AIO with some noctua fans. Not really worth it for 15MHz more.
> 
> Really at my limits I guess, anything to try and push 2100MHz ends up unstable or I can't get a good memory overclock working with it.


Your setup is basically identical to mine, except I'm using an X52, not an X62, and I'm on the KFA2 380W BIOS, not the EVGA one. I did run the EVGA FTW3 Ultra BIOS for a loooong time because I had my fans controlled through the card, and the EVGA BIOS has a better fan percentage available at idle. But I'm using the Kraken now to control the fans, with CAM synced to GPU temp, so now I can use the KFA2. The KFA2 BIOS reads power wrong on my card, way lower than actual, so it gives me far more room power-target wise. EVGA did throttle now and then at 373W, while the KFA2 BIOS barely reads 300W.

Temps are the same as well; I usually run 45-47°C with normal fan speed (1150-1200RPM) on a push-pull Cooler Master MasterFan 120mm setup.

I changed my paste from PK-3 to Conductonaut with a bit more mounting pressure (extra washer), and I have the fans at 2000RPM now. This easily keeps my card under 40°C in 18°C ambients, and it will run 2115MHz stable now. At 47°C it could only do 2085MHz stable. So yeah, it does make a difference.
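A temp-synced fan curve like the CAM setup described here is just linear interpolation between breakpoints. A minimal sketch, with illustrative breakpoints loosely based on the RPM figures in this thread rather than anyone's actual curve:

```python
def fan_rpm(temp_c: float,
            curve=((30, 1150), (40, 1500), (47, 2000))) -> float:
    """Linearly interpolate a target fan RPM from GPU temperature.

    `curve` is a sequence of (temp_c, rpm) breakpoints, sorted by
    temperature; below the first point the floor RPM is held, above
    the last point the ceiling RPM is held.
    """
    if temp_c <= curve[0][0]:
        return curve[0][1]
    for (t0, r0), (t1, r1) in zip(curve, curve[1:]):
        if temp_c <= t1:
            return r0 + (r1 - r0) * (temp_c - t0) / (t1 - t0)
    return curve[-1][1]

print(fan_rpm(25))  # idle: stays at the 1150 RPM floor
print(fan_rpm(47))  # hot: ramps to the 2000 RPM ceiling
```

Syncing this to GPU temp instead of CPU/liquid temp is the whole point of the Kraken+CAM arrangement above.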


----------



## Audioboxer

Imprezzion said:


> Your setup is basically identical to mine, except I'm using an X52, not an X62, and I'm on the KFA2 380W BIOS, not the EVGA one. I did run the EVGA FTW3 Ultra BIOS for a loooong time because I had my fans controlled through the card, and the EVGA BIOS has a better fan percentage available at idle. But I'm using the Kraken now to control the fans, with CAM synced to GPU temp, so now I can use the KFA2. The KFA2 BIOS reads power wrong on my card, way lower than actual, so it gives me far more room power-target wise. EVGA did throttle now and then at 373W, while the KFA2 BIOS barely reads 300W.
> 
> Temps are the same as well; I usually run 45-47°C with normal fan speed (1150-1200RPM) on a push-pull Cooler Master MasterFan 120mm setup.
> 
> I changed my paste from PK-3 to Conductonaut with a bit more mounting pressure (extra washer), and I have the fans at 2000RPM now. This easily keeps my card under 40°C in 18°C ambients, and it will run 2115MHz stable now. At 47°C it could only do 2085MHz stable. So yeah, it does make a difference.


Nice, I'll try to get my temps down a bit more; it's Cyberpunk that tends to push 45-47 degrees. That's with RT on at ultrawide 1440p. I'm around 60FPS but can drop to the mid 50s. RT off and I'm laughing: 80-100FPS.

The KFA2 BIOS is just out of bounds for me, it won't flash. Lots of other BIOSes will, with crazy 1000-2000W power limits, and there is even the one in the first post with seemingly no power limit. But the EVGA seems to be the most stable and allows some voltage-curve modding as well. Though right now I'm not using the curve, just the sliders, as they do fine auto-setting voltage and staying stable at 2070-2085 depending on temp.

I'm running Kryonaut, so it might just be my air-intake setup and running the fans lower than 2000RPM. My ambient is about 25 degrees on a quiet fan curve, pump at 2000RPM.










I left my backplate on as I have some decent air circulation above the 2080ti, but I don't think that should matter much. If anything it should act as a heatsink.


----------



## J7SC

I use 'helper fans' to push air over the back-plates of the water-cooled cards, though the airflow arrangement is such that I probably don't even need it, given the various 120mms drawing air over the cards into the rads. Still, I had extra fans as this build was really just an idea to use up various unused w-cooling equipment and fans...turns out that w/ 2080 Tis, that paid extra dividends as they reward cooling. Even with both cards at full bench-marking settings (760 W combined for both GPUs), they usually stay in the 30s, unless ambient is really high (mid 20s C).

I am not sure chasing the last MHz is worth it. In simple GPU rendering tests, the top card will hit 2235MHz, the bottom one 2205MHz (and the VRAM is most efficient at around 8222). But those GPU core numbers are largely academic; typically they run 30-45MHz lower in sustained, demanding games, long benchmarks, etc., and that is with a big GPU loop that doesn't heat-soak. For Microsoft Flight Simulator 2020, where I run SLI/CFR, I dial both cards down to 2100/8017 and reduce the power limit to about 330W... while they will go higher, I am trying to keep the whole system (including the OC'ed TR CPU) below 1000W if I play for hours on end, which can happen with Flight Simulator 2020.

For Cyberpunk 2077, I usually settle at around 2145 with a single 2080 Ti (for now, SLI doesn't work in CP '77, but who knows what patches might come down the pipe). These days, after some more tuning, I play at 4K / max textures / DLSS 'Quality' / RTX 'max', or even 'Psycho', depending on whether there are combat scenes. GPU utilization typically hovers between 97% and 99%. Temps stay low (low-to-mid 30s) because the single GPU now gets the triple 360/55 rad loop to itself. I also take the power limit down to about 350W for CP '77, because the hours fly by fast with it as well! I tested all the way down to 2100/8017; while that does cost a few fps, at the end of the day the difference isn't worth torturing my card(s) for. As stated, the best possible outcome for CP '77 would be an NVLink/SLI fix.
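The sub-1000 W budgeting described above is simple addition of per-component targets. A minimal sketch, where the 150 W allowance for the rest of the system (board, RAM, pumps, fans, drives) is an assumption, not a measurement:

```python
def system_draw_w(gpu_limits_w, cpu_w, rest_w=150):
    """Rough DC-side system power estimate.

    gpu_limits_w: per-card power targets in watts; rest_w is an
    assumed allowance for motherboard, RAM, pumps, fans, and drives.
    """
    return sum(gpu_limits_w) + cpu_w + rest_w

# Two cards dialled down to ~330 W each plus an overclocked Threadripper:
print(system_draw_w([330, 330], cpu_w=180))  # 990 W, just under the 1000 W target
```

Worth noting this is DC-side draw; the wall-side number will be higher by the PSU's conversion losses.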


----------



## tps3443

My 2080Ti runs 2,115MHz in Cyberpunk, stable forever. 2,130MHz just isn't stable at all, and I have totally given up on trying to achieve any more than 2,115MHz. I can run other games at higher frequencies, but I don't even worry about that now either. My system is stable through absolutely anything; if my life depended on it, my system is stable lol. And that's more important than an extra 15MHz. Crashes really hurt the experience, so chasing extra MHz is just silly to me. It had really become the definition of insanity: trying the same thing expecting different results. So I'm done with all that.

If I get a water chiller, I will revisit that extreme OC. But other than that, 2,115MHz is the absolute maximum, and it'll run that for however long you want it to, in anything possible.


----------



## J7SC

tps3443 said:


> My 2080Ti runs 2,115MHz in Cyberpunk, stable forever. 2,130MHz just isn't stable at all, and I have totally given up on trying to achieve any more than 2,115MHz. I can run other games at higher frequencies, but I don't even worry about that now either. My system is stable through absolutely anything; if my life depended on it, my system is stable lol. And that's more important than an extra 15MHz. Crashes really hurt the experience, so chasing extra MHz is just silly to me. It had really become the definition of insanity: trying the same thing expecting different results. So I'm done with all that.
> 
> If I get a water chiller, I will revisit that extreme OC. But other than that, 2,115MHz is the absolute maximum, and it'll run that for however long you want it to, in anything possible.


...yeah, as mentioned, the difference between 2115 and 2145 isn't really worth the hassle; better to enjoy a comfortable daily setting. Speaking of enjoyment, I switched to 4K / max textures / DLSS 'Quality' / RTX 'Psycho' for some (non-combat) scenes in CP '77's Corpo Hotel... stunning, IMO


----------



## Audioboxer

Nice clocks you guys, some great silicon lottery! I won't be going that high 

I have managed to steal myself a few more FPS by doing a manual overclock on my 3900XT, so this is good for Cyberpunk. Almost consistently over 60FPS and often 65~70FPS with ultra/RT on (only medium).

Could maybe do better at 16:9 1440p, but 21:9 1440p is niiiiice in this game.


----------



## J7SC

Audioboxer said:


> Nice clocks you guys, some great silicon lottery! I won't be going that high
> 
> I have managed to steal myself a few more FPS by doing a manual overclock on my 3900XT, so this is good for Cyberpunk. Almost consistently over 60FPS and often 65~70FPS with ultra/RT on (only medium).
> 
> Could maybe do better at 16:9 1440p, but 21:9 1440p is niiiiice in this game.


Yeah, CPU optimization is not to be underestimated. My 16C/32T 2950X (4.3GHz all-core) is a joy to work with. There are some games where its special NUMA game mode makes a big difference, Cyberpunk 2077 among them; I picked up close to 3fps, all else being equal, just by switching from UMA to NUMA, especially in busy outdoor scenes...


----------



## Audioboxer

J7SC said:


> Yeah, CPU optimization is not to be underestimated. My 16C/32T 2950X (4.3 all-c) is a joy to work with. There are some games though whereby its special NUMA game mode makes a big difference, Cyberpunk 2077 among them. I picked up close to 3 fps +- w/ all else being equal just by switching from UMA to NUMA, especially in busy outside scenes...
> 
> View attachment 2471341


Cyberpunk is definitely a CPU heavy game in combination with RT killing GPUs. Especially in the city. If anyone has absolutely taken their GPU to the edge with OCing, don't sleep on your CPU with this game.

A 2080Ti with a decent OC should help you get over 60FPS stable with RT on (1080~1440p), but if you're dipping under 60 have a look at squeezing a bit more out of your CPU if you can.


----------



## tps3443

I can run 2560x1440p with DLSS on the fastest performance mode and GPU usage still maintains 98%, getting about 145FPS average at Ultra detail with RT off. It hammers my 4.8GHz 7980XE pretty hard though, all 36 threads running very high usage, but GPU usage never drops, so I'm very satisfied with that. I see a lot of people who run DLSS on Quality at an even higher resolution like 3440x1440, and they still get GPU usage dropping due to their CPU just falling on its face.

I am particular about GPU usage sustaining full load, and I hate DX11 games that just can't use a CPU. Fortunately, Cyberpunk can!


----------



## Audioboxer

tps3443 said:


> I can run 2560x1440P and DLSS on high performance “The fastest DLSS” and GPU usage still maintains 98% getting about 145 FPS average Ultra detail/RT off.. It hammers my 4.8Ghz 7980XE pretty hard though. All 36 threads running very high usage. But GPU usage never drops. so I’m very satisfied with that. I see a lot of people who run DLSS on Quality, at an even higher resolution like 3440x1440 and they still get GPU usage dropping due to their CPU just falling On its face.
> 
> I am particular over GPU usage sustaining full usage. And I hate DX11 games that just can’t use a CPU. Fortunately Cyberpunk can use one!


Nice! I find DLSS gets too blurry on anything but quality. You're tempting me to try going lower again for more FPS lol.

But yeah this game can really bottleneck on CPU.


----------



## gfunkernaught

J7SC said:


> ...don't know how true / pertinent this is, but I read several times that IC Diamond's particles can scratch the surface of an IHS, especially if moved / removed several times. While I have used liquid metal before, that does take extra precautions as well re. prep...typically, I use Gelid Extreme or MX4 on CPUs and GPUs for regular applications. Here's an older but still relevant chart from THW


I have seen some scratches on the die, but I think they were from using a paper towel to wipe off the paste; could definitely be from the diamonds too. But the performance is great.


----------



## tps3443

Audioboxer said:


> Nice! I find DLSS gets too blurry on anything but quality. You're tempting me to try going lower again for more FPS lol.
> 
> But yeah this game can really bottleneck on CPU.


I don't run the game at those settings; that was only for academic purposes and my own curiosity. I keep reading that people can't get over 70-85fps no matter what they do because of CPU limitations, which is not the case at all with a proper setup.

I run the Ultra RT preset at 1440p with Quality DLSS; it's butter!


----------



## J7SC

Audioboxer said:


> Nice! I find DLSS gets too blurry on anything but quality. You're tempting me to try going lower again for more FPS lol.
> 
> But yeah this game can really bottleneck on CPU.


I've got just enough horsies to play at 4K / RTX max / DLSS Quality. 4K stresses the CPU less than, say, 1080p or even 1440p, but still, I can tell that CP '77 started development 8+ years ago as far as CPU usage goes... but what a great job they did with RTX / DLSS in CP '77! This is really the first game for me where playing at 4K without RTX (with DLSS Quality) would be a very different experience.

The 2950X is not a gamer CPU; it's part of a hybrid work-play setup for me with a 4K 40-inch monitor (anything less than 4K looks a bit off). But with the right OC presets and fast, tight RAM, the 2950X can really hold its own. I've seen more than one core hit >4.6GHz with PBO.

...some more of my fav 4K / RTX Psycho / DLSS Quality shots...


----------



## Imprezzion

I also did just that: CPU and RAM improvements. I'm now at 5.2GHz all-core, no AVX offset, on the 10900KF, and I raised the RAM from 4200C16 to 4400C17 with way better RTL/IO as well. It might've made some difference, but I really can't back it up with numbers, so..

The CPU is on the edge of what I would consider 24/7-safe temp-wise: in Prime95 AVX FMA3 it can get to like 92-93°C, but that is worst-case, full-on AVX. The hottest I've seen it in games is 71°C during loading, and that is before the radiator fans really start to ramp up. They only run at about 950-1050RPM until 75°C core, then ramp toward 100% at 85°C.

Also, due to the RAM at 4400 I need to run uncomfortable levels of SA and IO, 1.40V SA and 1.35V IO, but yeah.. it _should_ handle it? I guess?


----------



## tps3443

Imprezzion said:


> I also did just that. CPU and RAM improvements. I'm now at 5.2Ghz all core no AVX offset on the 10900KF and I raised the RAM from 4200C16 to 4400C17 with way better RTL/IO as well and it might've made some difference but I really can't back it up with numbers so..
> 
> The CPU is on the edge of what I would consider 24/7 temp wise as in Prime95 AVX FMA3 it can get to like 92-93c but that is worst-case full-on AVX. Hottest i've seen it get in games is 71c during loading and that is before the radiator fans really start to ramp up. They only ramp up to about 950-1050RPM until 75c core where they go to 100% @ 85c.
> 
> Also, due to the RAM @ 4400 I need to run uncomfortable levels of SA and IO @ 1.40v SA 1.35v IO but yeah.. it _should_ handle it? I guess?


Only one way to find out! Lol. But yeah, I juice my system with some high voltages too; I don't always follow what the internet says is an acceptable safe voltage for CPUs. I test for degradation every now and then. I think if the system is stable and the temps are not too crazy, a chip can last a very long time. I have been running my 7980XE at 4.8GHz daily for a year straight; before that, it ran at 4.6GHz for 2 years under someone else lol.

Poor thing. And it still holds better-than-Silicon-Lottery numbers.

Silicon is TOUGH AS NAILS MAN! Haha burn it up!


----------



## J7SC

Imprezzion said:


> I also did just that. CPU and RAM improvements. I'm now at 5.2Ghz all core no AVX offset on the 10900KF and I raised the RAM from 4200C16 to 4400C17 with way better RTL/IO as well and it might've made some difference but I really can't back it up with numbers so..
> 
> The CPU is on the edge of what I would consider 24/7 temp wise as in Prime95 AVX FMA3 it can get to like 92-93c but that is worst-case full-on AVX. Hottest i've seen it get in games is 71c during loading and that is before the radiator fans really start to ramp up. They only ramp up to about 950-1050RPM until 75c core where they go to 100% @ 85c.
> 
> Also, due to the RAM @ 4400 I need to run uncomfortable levels of SA and IO @ 1.40v SA 1.35v IO but yeah.. it _should_ handle it? I guess?


1.4V SA? That would make me a trifle nervous. Then again, I only have one 14nm Skylake CPU, an older 6700K (binned engineering sample) running 16GB of DDR4-4000 CL16 with SA at 1.116V. Maybe the newer Intel chips handle higher SA voltage better?


----------



## yoadknux

tps3443 said:


> Check this thing out guys! Its so tempting !!
> 
> 
> Thats a REAL Galax HOF WC 2KW 2080Ti. It has a XOC 2KW bios, and full external voltage controller.. Comes with Hall of fame waterblock, HOF and air cooler.
> 
> Not a bad deal. I mean considering the RTX3080’s are pretty weak upgrades. if it’s an upgrade at all.. And an RTX3090 is about $1700 for the cheapest one available.
> 
> 
> I am considering this!! I need to sell my 2080Ti and grab it!!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> GALAX RTX 2080 ti PLUS Hall of Fame (includes HOF 3 Fan cooler & Waterblock) | eBay
> 
> 
> The waterblock was applied with kryonaut thermal paste.
> 
> 
> 
> www.ebay.com


You already have a 2080 Ti, why buy another?
The 2080 Ti was succeeded by the 3080. Getting a 2080 Ti to replace your 2080 Ti is, IMO, pointless, especially when the overall cost isn't lower, as in this listing.
This is a collector's item, yes, but you use hardware for performance, not for collecting. Any reasonable mid-range 3080 at stock will outperform ANY 2080 Ti in all games, regardless of OC. And the 3080 will only get better from here with driver updates, whereas the 2080 Ti will not (just look at the 1080 Ti and 2080: they were equal at launch, and now in some games the 2070 Super beats the 1080 Ti).


----------



## Audioboxer

Imprezzion said:


> I also did just that. CPU and RAM improvements. I'm now at 5.2Ghz all core no AVX offset on the 10900KF and I raised the RAM from 4200C16 to 4400C17 with way better RTL/IO as well and it might've made some difference but I really can't back it up with numbers so..
> 
> The CPU is on the edge of what I would consider 24/7 temp wise as in Prime95 AVX FMA3 it can get to like 92-93c but that is worst-case full-on AVX. Hottest i've seen it get in games is 71c during loading and that is before the radiator fans really start to ramp up. They only ramp up to about 950-1050RPM until 75c core where they go to 100% @ 85c.
> 
> Also, due to the RAM @ 4400 I need to run uncomfortable levels of SA and IO @ 1.40v SA 1.35v IO but yeah.. it _should_ handle it? I guess?


Beast of an overclock lol.

I miss my Intel days of running a 4960X at 4.8GHz stable on air lol. This AMD chip let me shift over to PCIe 4.0 and was better value for money than Intel, but overclocking is kind of dead on Zen 2. I'm jumping between a highly tweaked PBO and a manual all-core clock, and I think PBO with some tweaking might at least match the all-core clock, with the benefit of not running a constant voltage. While some of the older Intel chips could handle a fixed voltage like a pro, these 7nm chips seem to cause headaches with it.

The new Ryzen chips seem a bit more promising, matching Intel for single-core/gaming, but again don't seem to be great overclockers.

If you view overclocking as a sport, or for fun, Intel still seems to be the way to go.


----------



## tps3443

yoadknux said:


> You already have a 2080ti, why buy another?
> The 2080ti was succeeded by the 3080. Getting a 2080ti to replace your 2080ti, is imo, pointless. Especially when the overall cost isn't lower, like in this listing.
> This is a collectors item yes, but you use hardware for performance, not for collecting. Any reasonable mid-range 3080 at stock will outperform ANY 2080ti in all games regardless of OC. And the 3080 will only get better from now on with driver updates, whereas the 2080ti will not (just look at the 1080ti and 2080, they were equal at launch, now for some games the 2070s beats 1080ti).


My 2080Ti is practically as fast as a stock 3080 in gaming. I have seen my own in-game benchmarks, and enough comparisons on YouTube, to know. The RTX 3080 just doesn't interest me at all.

I have always wanted a real Galax HOF WC 2080Ti with the real waterblock, just to see what it does, and to enjoy the BIOS that was designed for that card, with voltage control.

I am certainly happy with how a 2080Ti performs; I just wanted that specific 2080Ti to play with. It may not make the most sense to you, but we have DLSS 2.0 support too, and on watercooling a 2080Ti is practically a 3080. That should give you an idea of the performance I get from my card.

I guess what I'm saying is, that's a really cool GPU to tide me over until some video cards are released that offer a real performance gain. Anyone who has a 2080Ti on water already knows that there isn't much available that'll give a solid performance boost.


----------



## gfunkernaught

@J7SC
Found this video about IC Diamond. The scratches I saw on my die and block were most likely from the paper towel I used to wipe it off after letting it soak in 99% isopropyl; even after the soak, it is still very viscous. I will find some other material to wipe it off with, maybe microfiber? Idk. But I have tried many TIMs on my GPU, and IC Diamond performs the best. I guess it does better when making direct contact with the die.

IC Diamond scratch test Youtube.


----------



## acoustic

That chip isn't gonna


tps3443 said:


> My 2080Ti is practically as fast as a stock 3080 In gaming. I have seen my own in game benchmarks, or even comparisons on YouTube to know. The RTX3080 just doesn’t interest me at all.
> 
> I have always wanted a real Galax HOF WC 2080Ti with the real waterblock. Just to see what it does. And enjoy the bios that was designed for that card with voltage control.
> 
> I am certainly happy with how a 2080Ti performs. I just wanted that specific 2080Ti to play with. It may not make the most sense for you but we have DLSS 2.0 support too, and on watercooling, a 2080Ti is practically a 3080. That must give you an idea to the performance I get from my card.
> 
> 
> I guess what I’m saying is, that’s a really cool GPU to tie me over until we have some
> video cards released, that’ll offer a real performance gain. Anyone who has a 2080Ti on water already knows that there isn’t much available that’ll give you a solid performance boost.


You're comparing a shunt-modded, watercooled, heavily overclocked 2080 Ti to a 3080 FE, which is severely power-limited and runs hot. That's completely apples and oranges. The difficulty of getting a 3080 is certainly annoying, but that doesn't take away from the fact that it's an upgrade, and at 3840x1600 and 4K it's certainly welcome.. hell, maybe even required.

I had a 2080 Ti FTW3 Ultra @ 2100/8000, for reference. It was an amazing card, and if I hadn't gotten my hands on a 3080 I wouldn't have been upset; but the fact is, the 3080 walks all over the 2080 Ti once it's tweaked.


----------



## J7SC

acoustic said:


> That chip isn't gonna
> 
> 
> You compare a shunt-modded, watercooled, heavily overclocked 2080TI, to a 3080FE which is severely power-limited and runs hot. That's completely apples and oranges. The difficulty of getting a 3080 is certainly annoying, but that doesn't take away from the fact it's an upgrade, and at 3840x1600 and 4K, it's certainly welcomed .. hell, maybe even required.
> 
> I had a 2080TI FTW3 Ultra @ 2100/8000, for reference. It was an amazing card, and if I didn't get my hands on a 3080, I wouldn't be upset; but the fact is the 3080 walks all over the 2080TI once it's tweaked.


I think it's largely academic anyway, since there's hardly any stock of new 2080 Tis left. FYI, I have run into situations (spoiler) where the 10GB of memory on the standard 3080 would be a _potential_ issue in 4K; the solution to that seems to be the upcoming 3080 Ti with 16? 20? GB of VRAM (or a 3090). At the end of the day though, a standard 3080 wouldn't be enough of an upgrade if you already have a well-OC'ed 2080 Ti, IMO; but for a new purchase, I'd be thinking about a 3080 Ti / 3090, or even a 6900 XT.

As to @tps3443 's desire for a 2080 Ti Galax HOF, I can understand that... subject to finding out more about the 'prior use' of a used one, it is a superb card, and a collectible (like the KPE) for some GPU aficionados



Spoiler


----------



## Krzych04650

acoustic said:


> You compare a shunt-modded, watercooled, heavily overclocked 2080TI, to a 3080FE which is severely power-limited and runs hot. That's completely apples and oranges. The difficulty of getting a 3080 is certainly annoying, but that doesn't take away from the fact it's an upgrade, and at 3840x1600 and 4K, it's certainly welcomed .. hell, maybe even required.
> 
> I had a 2080TI FTW3 Ultra @ 2100/8000, for reference. It was an amazing card, and if I didn't get my hands on a 3080, I wouldn't be upset; but the fact is the 3080 walks all over the 2080TI once it's tweaked.


This would make sense if the 3080 had any overclocking potential, but it doesn't. Getting a 15-18% real-world performance gain from overclocking versus the FE is common on a 2080 Ti, which is around 22-23% in Time Spy. A 3080, meanwhile, needs LN2 and 2300MHz+ clocks to gain 15% in Time Spy, so probably only 10-11% in games versus its FE. And the 3080 FE, by the way, is not severely power-limited the way the 2080 Ti FE is; it is probably the best reference card ever made, running very close to its maximum possible performance. You have maybe 5% overclocking potential on a 3080 at the very best, and no amount of power is going to fix that, because it simply doesn't scale. That kind of argument can be made for the 3090, because that one actually is severely power-limited, in a very similar way to the 2080 Ti FE, with clocks dropping into the 1700s; good 3090 models can reach 18-20% faster than a 3080 FE, so a 3080 Ti will actually make enough sense. But the gains on a 3080 are literally not worth disassembling the loop for. And I am not one of those malcontents who thinks their 7-year-old AMD GPU is still fine because of fine wine; I was ready to buy two 3090s, but the high end of this generation is just so bad it is embarrassing, and they killed SLI on top of that.
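The overclocking-gain percentages thrown around here are just score ratios. A small sketch of the arithmetic, with hypothetical Time Spy graphics scores rather than measured results:

```python
def oc_gain_pct(stock_score: float, oc_score: float) -> float:
    """Percent gain of an overclocked result over the stock result."""
    return (oc_score / stock_score - 1) * 100

# Hypothetical scores for a 2080 Ti FE vs a heavily tuned card:
print(round(oc_gain_pct(13000, 16000), 1))  # ~23%, the kind of gap described above
```

The same function shows why a 5% ceiling on the 3080 is such a small number by comparison: the ratio barely moves.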


----------



## tps3443

acoustic said:


> That chip isn't gonna
> 
> 
> You compare a shunt-modded, watercooled, heavily overclocked 2080TI, to a 3080FE which is severely power-limited and runs hot. That's completely apples and oranges. The difficulty of getting a 3080 is certainly annoying, but that doesn't take away from the fact it's an upgrade, and at 3840x1600 and 4K, it's certainly welcomed .. hell, maybe even required.
> 
> I had a 2080TI FTW3 Ultra @ 2100/8000, for reference. It was an amazing card, and if I didn't get my hands on a 3080, I wouldn't be upset; but the fact is the 3080 walks all over the 2080TI once it's tweaked.


I would happily buy a 3080 and watercool it, but the performance boost is pretty negligible; it's just not worth it. I would consider a 3090, but the price is a little concerning.

As for a tweaked 3080 that walks all over my 2080Ti, I think you may be referring to 10-15% at best?


----------



## yoadknux

tps3443 said:


> *My 2080Ti is practically as fast as a stock 3080 In gaming.* *on watercooling, a 2080Ti is practically a 3080. That must give you an idea to the performance I get from my card.*


I don't know dude. 460.89 Driver analysis by babeltechreviews compares a 2080ti head-on with a 3080, both on stock. On almost all benchmarks and games, a 2080ti seems to be about as strong as 73% of the 3080. This is too big of a gap to overcome, even with water cooling, shunt modding, or whatever. Whether or not the 3080 also has overclocking potential is a different discussion, but it misses my initial claim - that selling a 2080ti, in order to purchase a more expensive 2080ti, that would cost the same (or even more) as a 3080, is in my opinion a waste. A 980Ti lightning is also a "rare and collectible" card, with insane OC potential, but nobody would purchase that now, because people are after performance, and in a year or two thats how people will look at the HOF card.

The 15% you mentioned isn't really negligible; that's as if someone with a 2080 Super came here and posted that his card performs the same as a 2080 Ti, and people here would be all over him. That's maybe not a good enough reason to upgrade, but the cards aren't equal.

There really is a limit to how much we can do. I know many of us hand-picked everything - from finding the best thermal compound to optimal thickness of pads to getting a block and custom loop and draining it a hundred times, getting the best fans and setting them to just the highest amount of bearable noise so we could stay at 45c, then creating curves in AB and even shunt modding... But at the end of the day, new tech was released and it is better than anything we can do with our cards, and there is nothing wrong with that. I'm personally very happy with the performance my 2080ti offers, but I'm not trying to justify that based on a false belief that my card is equal to current high-end tech.


----------



## pewpewlazer

tps3443 said:


> I would happily buy a 3080 and watercool it, but the performance boost is very negligible It’s just not worth it. I would consider a 3090, but the price is a little concerning.
> 
> As for a tweaked 3080 that walks all over my 2080Ti, I think you may be referring to 10-15% at best?


It's more like 15-20%.

Depends on application and resolution of course. Ampere does extremely well at high resolutions. The gains vs Turing will be larger at 4k than at 1440p. And some games run REALLY well on Ampere. Take Borderlands 3 for example: at 4k/ultra, a stock 3080 has a ~20% advantage over my 2080 Ti @ 2160/8000 based on every review I've seen.

I'd like to upgrade to a 3080, and while I'm content being stuck with my 2080 Ti, I'm not going to pretend it matches a 3080. BEST CASE a water cooled/shunt modded 2080 Ti matches a STOCK 3080 on air cooling. Water cool and shunt mod that 3080 and the gap appears again.


----------



## J7SC

...That an OC'ed 3080 beats an OC'ed 2080 Ti isn't a big surprise, but the margins just aren't big enough for my taste, and the 10 GB limit of the 3080 is a deal-breaker; my next card(s) will have at least 16 GB of VRAM. As it stands, I may end up with a pair of 3090s in some months via a business project. By that time, the first more substantive leaks about the Ampere successor ('Hopper'?) will be flying.

I do like the high-OC-clock AMD 6800 XT and 6900 XT, but their memory bandwidth limitations show at 4K, where my main machine operates for productivity and games. All I really want, for now, is a consistent 4K Ultra 60 fps. The thing that bothers me about the 3090s is their lack of efficiency... reading the 3090 thread, I can't believe the number of people who think that even 520W BIOSes aren't enough, though 2080 Tis have their 1000W XOCs as well. How I long for the days of the GeForce 600 series, using Notepad to write my own BIOS limits :-(

The one thing I found almost 'cruel' was *NVidia's decision to disable the SLI-CFR option* available for RTX 2000/Titan RTX only, afaik in drivers from November '19 to early June this year. This option was 'undocumented' to begin with and presumably enabled only for developers to work on mGPU down the line, where the future is. Intel Xe multi-tile also has it, working superbly according to those who have seen it. I am using CFR wherever I can (Flight Simulator 2020, Metro Exodus, CryEngine games) and the gains are very impressive, as is the lack of micro-stutter. In Flight Simulator 2020, I have seen over 70 fps at 4K Ultra with my two 2080 Tis. This mGPU technology has phenomenal potential, including for Turing, yet NVidia killed it shortly before the Ampere launch... maybe they will bring it back, who knows. 'Current' CFR, afaik, would not work with Ampere cards, as the necessary drivers won't recognize the 3090 with its SLI capabilities. For now, I am more than satisfied with my 2080 Tis. Most everything I am throwing at them at 4K works splendidly, though there isn't much headroom on a high-OC single 2080 Ti with Cyberpunk 2077 and most of the 4K goodies turned on.

All that said, tech progress is a good thing (especially when one can actually buy it, and at reasonable prices), and one day, we'll look back at that silly equipment that wouldn't do RTX-cubed / 8K / 360 fps.


----------



## Krzych04650

pewpewlazer said:


> It's more like 15-20%.
> 
> Depends on application and resolution of course. Ampere does extremely well at high resolutions. The gains vs Turing will be larger at 4k than at 1440p. And some games run REALLY well on Ampere. Take Borderlands 3 for example: at 4k/ultra, a stock 3080 has a ~20% advantage over my 2080 Ti @ 2160/8000 based on every review I've seen.
> 
> I'd like to upgrade to a 3080, and while I'm content being stuck with my 2080 Ti, I'm not going to pretend it matches a 3080. BEST CASE a water cooled/shunt modded 2080 Ti matches a STOCK 3080 on air cooling. Water cool and shunt mod that 3080 and the gap appears again.


Claiming that a 2080 Ti OC matches a 3080 is most certainly too much to say because, like you said, that is the absolute best-case scenario for 2080 Ti OC, but it is just as valid as using examples from games that NVIDIA itself cherry-picked as the best possible cases for Ampere. The only reasonable way to really look at this is to average across multiple games, and here, taking 15% OC potential on the 2080 Ti and 5% on the 3080 translates into around a 12% lead at 1440p and a 19% lead at 4K, OC vs OC. On ultrawides it would be somewhere in between, like 15%. This is based on averages from the TPU review, so the best-case scenarios for Ampere are taken into account, like SOTTR, Control, Borderlands 3, or NVIDIA's favorite, DOOM Eternal, but there are also some that do not show huge gains, like Death Stranding, so I think this is a very balanced list and a very realistic average.

For me personally it would be 15%, as 3840x1600 sits almost exactly between 1440p and 4K in terms of pixel count. Even if Ampere did have SLI support, that would still not be enough. And good 3090 models are 50% more expensive than what I paid for my 2080 Ti Sea Hawk X right after the 2080 Ti launch, 50% more than an already hugely price-gouged card at the time. If a year ago someone had told me that I was going to skip a GPU generation, I would have told them to get out of my face and go buy a console, but Ampere managed to be bad enough for that to actually happen.
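For what it's worth, the "almost exactly between" pixel-count claim is easy to check (a quick sketch; nothing here beyond the standard resolution numbers):

```python
# Pixel counts for the three resolutions under discussion
res_1440p = 2560 * 1440  # 3,686,400 px
res_uw = 3840 * 1600     # 6,144,000 px (the 3840x1600 ultrawide)
res_4k = 3840 * 2160     # 8,294,400 px

midpoint = (res_1440p + res_4k) / 2  # arithmetic midpoint: 5,990,400 px

# The ultrawide lands about 2.6% above the midpoint, so "in between"
# is a fair characterization, if not exact.
print(res_uw, midpoint, round((res_uw / midpoint - 1) * 100, 1))
```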


----------



## tps3443

yoadknux said:


> I don't know dude. 460.89 Driver analysis by babeltechreviews compares a 2080ti head-on with a 3080, both on stock. On almost all benchmarks and games, a 2080ti seems to be about as strong as 73% of the 3080. This is too big of a gap to overcome, even with water cooling, shunt modding, or whatever. Whether or not the 3080 also has overclocking potential is a different discussion, but it misses my initial claim - that selling a 2080ti, in order to purchase a more expensive 2080ti, that would cost the same (or even more) as a 3080, is in my opinion a waste. A 980Ti lightning is also a "rare and collectible" card, with insane OC potential, but nobody would purchase that now, because people are after performance, and in a year or two thats how people will look at the HOF card.
> 
> The 15% you mentioned isn't really negligible, that's as if someone with a 2080s would come here and post that his card performs the same as a 2080Ti, and people here would be all over him - that's maybe not a good enough reason to upgrade, but the cards aren't equal.
> 
> There really is a limit to how much we can do. I know many of us hand-picked everything - from finding the best thermal compound to optimal thickness of pads to getting a block and custom loop and draining it a hundred times, getting the best fans and setting them to just the highest amount of bearable noise so we could stay at 45c, then creating curves in AB and even shunt modding... But at the end of the day, new tech was released and it is better than anything we can do with our cards, and there is nothing wrong with that. I'm personally very happy with the performance my 2080ti offers, but I'm not trying to justify that based on a false belief that my card is equal to current high-end tech.












I see what you mean by saying a stock 2080 Ti is 73% of a stock RTX 3080. But you must understand: in every synthetic or built-in game benchmark I run with my 2080 Ti, I am only about 1-5% slower than a stock RTX 3080. Time Spy graphics is 2% slower, Port Royal 6% slower, and Unigine 2% slower too.

This is an updated chart for all of the new and old GPUs for Time Spy graphics. My stable daily OC manages a 27% increase on top of the 2080 Ti that's in the chart, so around 17,250 in Time Spy graphics. Sure, I could squeeze out more, but I'm talking 100% stable here.

All I said originally is that my overclocked and water-cooled 2080 Ti practically offers the same performance as a stock 3080.


I don't have any other built-in game benchmarks to run. All I had was Metro to test, but it's a great comparison.


The RTX 3080 is a better card for sure at the lower price, obviously. But all I was getting at is this: the performance of an RTX 3080 is amazing, and if we can achieve that, then we're doing well enough not to need an upgrade.


----------



## tps3443

@yoadknux

I guess the easiest way to put it would be this:

A stock 2080 Ti gets around 13,600 in Time Spy graphics. This is the same 2080 Ti that's used in comparisons against the RTX 3080, the one that's only 73% of an RTX 3080, as you said.

Well, my daily Time Spy graphics score is 27% faster than that 13,600.

Not a suicide run. Not an outside-in-the-cold run. These are the daily stable settings I run in all of my games, software, etc.


This was all I was trying to say. Yes, Time Spy graphics is a synthetic benchmark, but it's still a good estimate of the performance to expect out of a GPU. And my daily Time Spy graphics score is only 3% behind a stock RTX 3080.


And I can squeeze out even more than that 27%, but it's not 100% stable in everything.


The RTX 3080 is fast. It runs like a graphics card that gets 17,800 in Time Spy graphics. That's very quick! And my 2080 Ti is right behind it.
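The arithmetic here is easy to verify (a quick sketch using the 13,600 stock score and the 17,800 3080 figure quoted in this post):

```python
stock_2080ti = 13600            # stock 2080 Ti Time Spy graphics score quoted above
daily_oc = stock_2080ti * 1.27  # +27% daily overclock -> about 17,272
stock_3080 = 17800              # RTX 3080 Time Spy graphics figure quoted above

# Gap between the overclocked 2080 Ti and a stock 3080: roughly 3%,
# i.e. "right behind it"
gap_pct = (stock_3080 / daily_oc - 1) * 100
print(round(daily_oc), round(gap_pct, 1))
```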


----------



## tps3443

@yoadknux

Ok another post, another comparison.

Let's look at Port Royal. The RTX 3080 FE does very well in this test; it beats a stock 2080 Ti by about 44%.

2080 Ti: 36.9 FPS
3080: 53 FPS

^ That's a substantial difference at 44% faster.

Although, my 2080 Ti averages 49 FPS in this test.

If 49 FPS isn't right behind 53 FPS, then I dunno what is. Because that 44% gap is now an 8% gap!
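Those percentages check out against the quoted FPS numbers (a quick sketch):

```python
fps_stock_2080ti = 36.9  # stock 2080 Ti Port Royal average, as quoted
fps_oc_2080ti = 49.0     # the overclocked 2080 Ti above
fps_3080 = 53.0          # RTX 3080 FE

stock_gap = (fps_3080 / fps_stock_2080ti - 1) * 100  # ~43.6% -> "about 44%"
oc_gap = (fps_3080 / fps_oc_2080ti - 1) * 100        # ~8.2% -> "an 8% gap"
print(round(stock_gap), round(oc_gap))
```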


----------



## pewpewlazer

So I stumbled into the 3090 thread last night for whatever reason, and saw someone had posted a link to this "ThermSpy" application:








Thermspy - www.techpowerup.com: "I found a link to a Nvidia program called Thermspy. If anyone is interested I will post a link to it if allowed. I have attached an image from my card's readout."





Apparently this is some sort of internal Nvidia tool dating back to the Fermi era.

What's interesting is that it reports two different numbers for the graphics clock speed. It looks like this: *Graph: XXXX (YYYY) MHz*. The "YYYY" number inside the parentheses matches the clock value reported by Afterburner and other monitoring tools. The "XXXX" value is what the guys over in the 3090 thread are referring to as the "internal clock". This number is slightly lower (or a lot lower) than what Afterburner reports, it moves around a tiny bit, and it does not adhere to the idea of 15 MHz clock bins.

We know that offsetting the entire VF curve will result in higher performance than only offsetting one single voltage point, even at the same clock speed, so maybe this "internal clock" has something to do with it? Let's test:

Setting ONLY the 1.093v point to +180 resulted in a core clock of 2175, with ThermSpy reporting 2015 (2175) = 10,264 in Port Royal
Setting the entire curve to +165, then bumping up the 1.093v point to +180 resulted in a core clock of 2175, with ThermSpy reporting 2146 (2175) = 10,533 in Port Royal
Setting the entire curve to +50 resulted in a core clock of 2040, with ThermSpy reporting 2025 (2040) = 10,063 in Port Royal
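One way to eyeball the three runs is score per MHz, using either the internal or the reported clock (a rough sketch; the run labels are mine). Neither ratio comes out constant, which fits the conclusion that the internal number isn't simply "the" real clock:

```python
# (internal MHz, reported MHz, Port Royal score) for the three runs above
runs = {
    "1.093v point only, +180": (2015, 2175, 10264),
    "curve +165, point +180":  (2146, 2175, 10533),
    "curve +50":               (2025, 2040, 10063),
}

# Points per MHz by each clock reading; if either number were the one
# true effective clock, its ratio would be roughly constant across runs.
for name, (internal, reported, score) in runs.items():
    print(name, round(score / internal, 2), round(score / reported, 2))
```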





















So whatever this "internal clock" number being reported by ThermSpy is, it's not THE one and only real clock. Otherwise my +50 offset score would have roughly matched my single point offset score.

All I know is I wasted a couple hours messing around with this and still have no idea what voodoo magic happens inside these cards...


----------



## tps3443

pewpewlazer said:


> So I stumbled into the 3090 thread last night for whatever reason, and saw someone had posted a link to this "ThermSpy" application:
> 
> 
> 
> 
> 
> 
> 
> 
> Thermspy
> 
> 
> I found a link to a Nvidia program called Thermspy. If anyone is interested i will post a link to it if allowed. i have attached a image from my cards readout.
> 
> 
> 
> 
> www.techpowerup.com
> 
> 
> 
> 
> 
> Apparently this is some sort of internal Nvidia tool dating back to the Fermi era.
> 
> What's interesting is that it reports two different numbers for the graphics clock speed. Looks like this - *Graph: XXXX (YYYY) MHz*. The "YYYY" number inside the parentheses matches the clock value reported by Afterburner and other monitoring tools. The "XXXX" value is what the guys over in the 3090 thread are referring to as "internal clock". This number is slightly lower (or a lot lower) than what Afterburner reports, it moves around a tiny bit, and it does not adhere to the idea of 15mhz clock bins.
> 
> We know that offsetting the entire VF curve will result in higher performance than only offsetting one single voltage point, even at the same clock speed, so maybe this "internal clock" has something to do with it? Let's test:
> 
> Setting ONLY the 1.093v point to +180 resulted in a core clock of 2175, with ThermSpy reporting 2015 (2175) = 10,264 in Port Royal
> Setting the entire curve to +165, then bumping up the 1.093v point to +180 resulted in a core clock of 2175, with ThermSpy reporting 2146 (2175) = 10,533 in Port Royal
> Setting the entire curve to +50 resulted in a core clock of 2040, with ThermSpy reporting 2025 (2040) = 10,063 in Port Royal
> 
> View attachment 2471679
> View attachment 2471681
> View attachment 2471680
> 
> 
> So whatever this "internal clock" number being reported by ThermSpy is, it's not THE one and only real clock. Otherwise my +50 offset score would have roughly matched my single point offset score.
> 
> All I know is I wasted a couple hours messing around with this and still have no idea what voodoo magic happens inside these cards...


Very interesting. I always wondered why setting a straight undervolt causes a performance loss. I am sure it probably acts the same way on first-gen RTX cards.

But I use the same method: running a +offset and then setting that specific voltage point an additional +15 MHz.

So what if that ThermSpy internal clock is something like the "shader clock" that old Nvidia cards used to have?


----------



## Medizinmann

Krzych04650 said:


> Claiming that 2080 Ti OC matches 3080 is most certainly too much to say, because like you said this is the absolute best case scenario for 2080 Ti OC, but it is just as valid as using examples from games that NVIDIA itself cherrypicked as the best possible cases for Ampere. The only reasonable way to really look at this in average across multiple games, and here taking 15% OC potential on 2080 Ti and 5% on 3080 translates into around 12% lead at 1440p and 19% lead at 4K, OC vs OC. On ultrawides it would be somewhere in-between like 15%. And this is based on averages from TPU review so the best case scenarios for Ampere are taken into account, like SOTTR, Control, Borderlands 3 or NVIDIA's favorite - DOOM Eternal, but there are also some that do not have a huge gains, like Death Stranding, so this is a very balanced list I think and very realistic average.
> 
> For me personally it would be 15% as 3840x1600 is exactly in between of 1440p in 4K in terms of pixel count. Even if Ampere did have SLI support that would still not be enough. And good 3090 models are 50% more expensive than what I paid for my 2080 Ti Sea Hawk X right after 2080 Ti launch, 50% more than already hugely price gouged card at the time. If a year ago someone told me that I am going to be skipping GPU generation I would tell them to get out of my face and go buy a console, but Ampere manged to be bad enough for that to actually happen.


Well, I wouldn't say that Ampere is bad per se; the 3080 FE is/would be actually pretty good, if one could buy any at MSRP...
The point here is, you would get better performance than a 2080 Ti at a much lower price point, but since you won't get any at this price point...

...and of course, an upgrade from an existing 2080 Ti with a water block and a hefty OC, to get 10-15% more performance, doesn't make much sense...

And the 3090 only makes sense in scenarios where you really need much more VRAM, and that isn't gaming.

Conclusion: I would also definitely wait for next gen, or at least for the refresh versions and much, much better prices...

Best regards,
Medizinmann


----------



## BigMack70

Medizinmann said:


> Well - I wouldn't say that Ampere is bad per se - the 3080 FE is/would be actually pretty good - if one could buy any at MSRP...
> The point here is - you would get better performance than a 2080 Ti at a much lower price point - but since you won't get any at this price point...
> 
> ...and/but of course, an upgrade from an existing 2080 TI with water block and hefty OC - to get 10-15% more perfomance – doesn’t make much sense…
> 
> And the 3090 only makes sense in some scenarios where you really need much more VRAM – an that isn’t gaming.
> 
> Conclusion - I also would definitely wait for next gen or at least for the refresh versions and much much better prices…
> 
> Best regards,
> Medizinmann


I swapped my 2080Ti for a 3090 on my primary rig to get hdmi 2.1 for my C9 OLED, but yeah this is pretty much my conclusion as regards performance. If you have a 2080Ti on water with a decent OC, the 3080 makes no sense to me; it's not enough of a performance jump. The 3090 is closer to a true upgrade, at least at high resolution, but not worth the money in most cases. 

IMO Ampere is a big flop, even more disappointing than Turing was over Pascal, and worth skipping for Turing owners unless HDMI 2.1 is meaningful to you or you just like upgrading every generation no matter what.


----------



## sultanofswing

BigMack70 said:


> I swapped my 2080Ti for a 3090 on my primary rig to get hdmi 2.1 for my C9 OLED, but yeah this is pretty much my conclusion as regards performance. If you have a 2080Ti on water with a decent OC, the 3080 makes no sense to me; it's not enough of a performance jump. The 3090 is closer to a true upgrade, at least at high resolution, but not worth the money in most cases.
> 
> IMO Ampere is a big flop, even more disappointing then Turing was over Pascal, and worth skipping for Turing owners unless hdmi 2.1 is meaningful or you just like upgrading no matter what each generation.


This is exactly the only reason I got a 3090; the 3080 wasn't a big enough jump, the 3090 was the only gain worth going for, and I have a CX OLED, so I wanted HDMI 2.1 just like you.
Now, after owning a 3090: it's utter garbage. How did we go from a 250W TDP card to this monster that at stock clocks is nearing 500 watts, doesn't overclock worth a crap because the 500W power limit is too low, and then still clocks like doo doo?
I knew Samsung was going to be a big fail, just like all their phones.


----------



## Imprezzion

If availability and price are good and the 3080 Ti's performance is closer to the 3090 than to the 3080, I might upgrade.. might.. but my card is doing just fine for now lol. Especially since I'm on 1080p high refresh rate and am not upgrading the resolution any time soon, and basically the only game that doesn't run 100+ FPS @ max settings is Cyberpunk with RT Psycho and DLSS Quality.

I actually recommended a secondhand 2080 Ti to a friend of mine who wanted an upgrade from his FE 1080 for Cyberpunk; a 3070 would've been fine but was more expensive, and he managed to score a great deal on an MSI 2080 Ti Gaming X, so yeah..

Even though he's not a big overclocking expert, I did check the age of the card, and it should be old enough to have the older flashable XUSB firmware. Which BIOS is recommended for stock cooling on a 2080 Ti Gaming X, just to not run into any power limits?


----------



## sultanofswing

Imprezzion said:


> If availability and price is good and performance for the 3080 Ti is closer to 3090 then it is to 3080 I might upgrade.. might.. but my card is doing just fine for now lol. Especially since I'm on 1080p high refresh rate and am not upgrading the resolution any time soon and basically the only game that doesn't run 100+ FPS @ max settings is Cyberpunk with RT Psycho and DLSS Quality.
> 
> I actually recommended a secondhand 2080 Ti to a friend of mine who wanted an upgrade from his FE 1080 for Cyberpunk and a 3070 would've been fine but was more expensive and he managed to score a great deal on a MSI 2080 Ti Gaming X so yeah..


If you have no need for HDMI 2.1, then wait. EVGA cards are dying left and right, and every single one of the 30-series cards is at the edge of what it can do stock.


----------



## tps3443

Imprezzion said:


> If availability and price is good and performance for the 3080 Ti is closer to 3090 then it is to 3080 I might upgrade.. might.. but my card is doing just fine for now lol. Especially since I'm on 1080p high refresh rate and am not upgrading the resolution any time soon and basically the only game that doesn't run 100+ FPS @ max settings is Cyberpunk with RT Psycho and DLSS Quality.
> 
> I actually recommended a secondhand 2080 Ti to a friend of mine who wanted an upgrade from his FE 1080 for Cyberpunk and a 3070 would've been fine but was more expensive and he managed to score a great deal on a MSI 2080 Ti Gaming X so yeah..
> 
> Even tho he's not a big overclocking expert I did check the age of the card and it should be old enough to have the older flashable XUSB firmware. Which BIOS is recommended for stock cooling on a 2080 Ti Gaming X just to not run into any power limits?


Price gouging has absolutely ruined the RTX 3060 Ti and RTX 3070. The GPU market is a mess right now. Nonetheless, your friend can get more out of that 2080 Ti than out of an 8GB card with minimal OC potential.


----------



## BigMack70

sultanofswing said:


> This is exactly the only reason I got a 3090, 3080 wasn't a big enough jump, 3090 was the only gain worth going for and I have a CX OLED so I wanted the hdmi 2.1 just as you.
> Now after owning a 3090 it's utter garbage, Like how did we go from a 250w tdp card to this monster that at stock clocks is nearing 500 watts and doesn't overclock worth a crap because the 500w power limit is too low and then they still clock like doo doo.
> I knew Samsung was going to be a big fail, just like all their phones.


I think the 3090 is a good "set it and forget it" card, and absolutely terrible for tweaking. It's the first card I've owned in a long time that I'm not going to put under water, because there's just no point. The performance is a decent enough step over the 2080 Ti, the noise levels are fine, and you have to put a ridiculous amount of power through the card to get a meaningful performance jump over stock, so I'm just leaving it be.

The 3090 just falls apart as a product when thinking about overclocking. The 2080 Ti sings under water, doubly so if you flash a custom BIOS on it. It's not too hard to lock it at 2100-2200 MHz at a reasonable power load of 500W or less.

Good luck doing that on the 3090... It looks like it needs around a 700-800W BIOS to lock in a max OC without dropping due to power limit, and I have no confidence in any of the PCBs to run that long term. The Samsung node is hot garbage.

It's funny, I wasn't terribly impressed with the 2080Ti when it launched, but now I am thinking "this is an awesome card that can probably get by for a couple generations until RT performance is ready for prime time".


----------



## sultanofswing

BigMack70 said:


> I think the 3090 is a good "set it and forget it" card, and absolutely terrible for tweaking. It's the first card I've owned in a long time that I'm not going to put under water, because there's just no point. The performance is ok enough compared to the 2080Ti, and the noise levels are fine, and you have to put a ridiculous amount of power through the card to get a meaningful performance jump over stock, so I'm just leaving it be.
> 
> The 3090 just falls apart as a product when thinking about overclocking. The 2080Ti sings under water, doubly so if you flash a custom BIOS on it. It's not too hard to lock at 2100-2200 MHz at reasonable power load of 500W or less.
> 
> Good luck doing that on the 3090... It looks like it needs around a 700-800W BIOS to lock in a max OC without dropping due to power limit, and I have no confidence in any of the PCBs to run that long term. The Samsung node is hot garbage.
> 
> It's funny, I wasn't terribly impressed with the 2080Ti when it launched, but now I am thinking "this is an awesome card that can probably get by for a couple generations until RT performance is ready for prime time".


100% fully agree with your assessment; the cards need a 600-650 watt BIOS stock. I still have my KINGPIN 2080 Ti and it's the best card I've ever owned; sure, it was bloody expensive, but on water it will do 2200 MHz at about 598 watts.
Sure, the 3090 performs decently, but if you look at it in terms of its power consumption, it's the most inefficient GPU I've ever seen. I think my microwave may be more efficient.


----------



## BigMack70

sultanofswing said:


> 100% fully agree with your assessment, the cards need to have a 600-650 watt bios STOCK. I still have my KINGPIN 2080ti and it's the best card i've ever owned, sure it was bloody expensive but on water it will do 2200mhz at about 598 watts.
> Sure the 3090 performs decent but if you look at it in terms of it's power consumption it's the most inefficient GPU I've ever seen. I think my Microwave may be more efficient.


Something must have gone wrong in either the design or the manufacturing of Ampere. These cards are pushed so far past their peak-efficiency point at stock that it's absurd. I can't remember a stock GPU ever being pushed so far beyond its comfort zone that it takes a 3-4% increase in power for every 1% increase in performance. At equal power draw, Ampere is probably only about 10% faster than Turing.
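To put that scaling in concrete terms, here is a rough model (illustrative only; the 3.5 ratio is just the midpoint of the 3-4% figure above, and the wattages are hypothetical):

```python
def extra_power_needed(base_watts, perf_gain_pct, pwr_per_perf=3.5):
    """Rough extra wattage for a given performance gain, assuming
    ~3.5% more power per 1% more performance (midpoint of the 3-4%
    figure above). Purely illustrative, not measured data."""
    return base_watts * (perf_gain_pct * pwr_per_perf) / 100

# e.g. a hypothetical 350W card chasing +5% performance would need
# roughly +61W on this model
print(round(extra_power_needed(350, 5)))
```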


----------



## sultanofswing

BigMack70 said:


> Something had to have gone wrong in either the design or the manufacturing of Ampere. They have these cards pushed so far past the point of their peak efficiency at stock it's absurd. I can't remember a stock GPU ever being pushed so far beyond its comfort zone at stock that from stock it takes a 3-4% increase in power for every 1% increase in performance. At equal power draw with Turing Ampere is probably like 10% faster.


It really is sad, so bad that EVGA now has to deal with so many of these cards that have just gone up in smoke. I don't know the reason why, as my 3090 FTW3 Ultra has been fine, but I don't trust it now.


----------



## tps3443

@sultanofswing 

@BigMack70 

Sorry guys, I had to say it. I am reading your posts, and I am literally dying over here. Hilarious, lol.


----------



## sultanofswing

tps3443 said:


> @sultanofswing
> 
> @BigMack70
> 
> Sorry guys I had to say it. So, I am reading your post. And I am literally dying over here. Hilarious Lol.


Sad thing is it's true.


----------



## tps3443

Anyone familiar with the Galax HOF voltage control software?

From what I understand it is 100% unobtainable; only a few elite overclockers actually have it, for use with the Galax XOC 2KW BIOS on a real HOF 2080 Ti, as it is a fully custom application designed just for the Galax HOF 2080 Ti.

I may get this HOF 2080 Ti, and the seller says it includes the rare application and the very unicorn XOC BIOS on a USB drive with the card.

I've got someone interested in my 2080 Ti (I posted it up locally), so I'm gonna take a side step to a more exotic 2080 Ti with better overclocking features. I would probably just ride this HOF out until next-gen GPUs.

This rare HOF voltage-control application has me more intrigued than ever right now. Apparently you can control absolutely everything in real time.


----------



## J7SC

For what it is worth, the 3090s are the best cards available right now for the 4K consumer (and perhaps also the 6900 XT up to 1440p). Ampere's OC headroom, though, seems more limited compared to Turing, unless folks shunt-mod and/or put 500W+++ BIOSes on them, some without appropriate cooling. I am not surprised at the several reports of cards dying. As much as I love OC-ing myself and have done 'questionable things', hopefully this won't become an Ampere RMA debacle, _reflected in the pricing of next gen GPUs_....

As my two 2080 Tis (factory w-blocked) are in a work-play machine, I never bothered with custom curves, XOC BIOS etc., just extensive w-cooling. Their combined peak of 780 W got me a best (so far) of 21060 in Port Royal, and I am not sure where that slots in, efficiency per unit of score, say compared to Ampere. That said, I first entered the PR HoF in March '19 and didn't disappear from it until a few hours ago (spoiler)... I plan to hold on to the 2080 Tis for quite a while, even if 3090s materialize here, as I run two primary machines anyhow. The unobtanium / supply problems of custom 3090s (and other GPUs, CPUs) over the last few months took all the fun out of planning a new build for late this year, anyway.


Spoiler


----------



## tps3443

J7SC said:


> For what it is worth, the 3090s are the best cards that are available right now for the 4K consumer (and perhaps also the 6900XT up to 1440p). Ampere's oc headroom though seems to be more limited compared to Turing, unless folks shunt-mod and/or put 500W+++ bios on them, some w/o appropriate cooling. I am not surprised at several reports of cards dying. As much as I love oc-ing myself and have done 'questionable things', hopefully, this won't become an Ampere RMA debacle, _reflected in the pricing of next gen GPUs_....
> 
> As my two 2080 Ti (factory w-blocked) are in a work-play machine, I never bothered with custom curves, XOC bios etc - just extensive w-cooling. Their combined peak @ 780 W got me a best (so far) of 21060 in PortRoyal, and I am not sure where that slots in in terms of efficiency per unit of score, say compared to Ampere. That said, I first entered PR HoF in March '19 and didn't 'disappear until a few hours ago (spoiler)...I plan to hold on to the 2080 Tis for quite a while, even if 3090s materialize here, as I run two primary machines anyhow. The unobtanium / supply problems of custom 3090s (and other GPUs, CPUs) over the last few months took all the fun out of planning a new build for late this year, anyway.
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2471742


Your setup is pretty efficient! But your cooling is top notch too.

The 3090s just consume tons of power to get their speed. I think the 3090 is decent at stock, though. It has plenty of juice, and the 24GB of VRAM is just incredible.


You managed 21K in Port Royal at 780 watts. That’s a great score, and not too crazy on the power either.

A 3090 Kingpin would consume 595-600 watts to achieve about 15,700 in Port Royal at full overclock.

^ That’s a good score for a single card, though.
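Out of curiosity, the efficiency comparison above can be put into rough numbers. A quick sketch, using only the peak wattages quoted in this thread (not identically measured, so treat it as ballpark):

```python
# Rough Port Royal points-per-watt comparison using the figures quoted above.
# Wattages are peak draws reported by the posters, so this is only ballpark.
def points_per_watt(score, watts):
    return score / watts

# Two 2080 Tis combined: 21060 points at ~780 W peak
turing = points_per_watt(21060, 780)

# Single 3090 Kingpin: ~15700 points at ~595-600 W (use 600)
ampere = points_per_watt(15700, 600)

print(f"2x 2080 Ti: {turing:.1f} pts/W")  # ~27.0 pts/W
print(f"3090 KPE:   {ampere:.1f} pts/W")  # ~26.2 pts/W
```

By this crude metric the heavily tuned Turing pair and the Kingpin 3090 land surprisingly close.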


----------



## J7SC

Quick question: I never tried undervolting as I don't use the v-curve in MSI AB, and its regular voltage slider won't go negative. Is there any other (non-v-curve way) to undervolt a 2080 Ti ? Also, if the cards are synced in MSI AB, will an undervolt adjustment affect both cards in relative (ie delta per card) terms, or just set one common value for both ?


----------



## Sketchus

Sorry to jump in with a question, but I've searched and maybe missed an answer. I have a Gaming X Trio with the newer xusb version, so I have been unable to flash. Aside from using the tool to bypass this, are there any increased-power-limit BIOSes that will work with my card?

Thanks.


----------



## tps3443

Sketchus said:


> Sorry to jump in with a question but I've searched and maybe missed an answer. I have a Gaming X Trio with the newer xusb version, so have been unable to flash. Aside from using the tool to bypass this is there any increased power limit bios' that will work with my card?
> 
> Thanks.


Solder the 8 ohm resistors.
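For context on why soldering resistors raises the power limit: the card measures current through small sense (shunt) resistors, and adding a resistor in parallel lowers the effective resistance, so the card under-reads its own draw. A sketch of the math with purely illustrative values (the 5 mΩ stock shunt here is a common figure, not a verified value for this card):

```python
# Parallel-resistor math behind a shunt mod (illustrative values only).
def parallel(r1, r2):
    return (r1 * r2) / (r1 + r2)

r_stock = 0.005  # hypothetical 5 milliohm stock shunt

# Stacking an equal-value resistor on top halves the resistance, so the
# card reads half the current -> the power limit is effectively doubled.
r_mod = parallel(r_stock, 0.005)
scale = r_stock / r_mod  # factor by which real power exceeds reported power
print(f"effective shunt: {r_mod * 1000:.2f} mOhm, power scale: {scale:.2f}x")
```

The scale factor depends entirely on the ratio of the added resistor to the stock shunt, which is why different cards use different mod values.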


----------



## tps3443

J7SC said:


> Quick question: I never tried undervolting as I don't use the v-curve in MSI AB, and its regular voltage slider won't go negative. Is there any other (non-v-curve way) to undervolt a 2080 Ti ? Also, if the cards are synced in MSI AB, will an undervolt adjustment affect both cards in relative (ie delta per card) terms, or just set one common value for both ?


Man, I think your cards already run like they’re undervolted, lol.

Just add a positive offset equal to your normal overclock minus 15MHz. Hit Enter, then Apply.

Then open the V/F curve editor, select the single point at your preferred voltage, and raise that one point by 15MHz. Hit Enter, close the curve editor, and hit Apply.


The card will then run at that specific voltage point.

I guess you’d adjust them both separately?


----------



## pewpewlazer

J7SC said:


> Quick question: I never tried undervolting as I don't use the v-curve in MSI AB, and its regular voltage slider won't go negative. Is there any other (non-v-curve way) to undervolt a 2080 Ti ? Also, if the cards are synced in MSI AB, will an undervolt adjustment affect both cards in relative (ie delta per card) terms, or just set one common value for both ?


I guess you could reduce the power limit slider and let the cards throttle on their own. Otherwise, VF curve is the only way to "undervolt".

When I ran a GTX 1070 SLI setup and tried using the VF curve, I had to adjust both separately. Set curve and apply clocks, click settings in Afterburner, change the "master graphics processor" selection up top to GPU2, close settings, change VF curve for card #2 and apply. 

As an added bonus, I found out that my two cards (identical EVGA 1070 FEs ordered at the same time) had slightly different VF curves, so I couldn't even use identical settings between the two. Needless to say, I gave up pretty quickly.


----------



## Imprezzion

Well, someone got lucky... a guy I know bought an EVGA FTW3 Hybrid 2080 Ti, got it now, and went to overclock it..

The thing does 2145Mhz @ 1.037v in superposition on stress loop.. without even throttling.. like.. That thing is amazing haha. It can do some seriously high core clocks as that was at like 63c even.. not even super cold.. 

I wonder if we can push 2200Mhz out of that thing with better push-pull rad fans @ 100% and 1.093v curve...


----------



## tps3443

Imprezzion said:


> Well, someone got lucky... guy I know bought a EVGA FTW3 Hybrid 2080 Ti, got it now, and went to overclock it..
> 
> The thing does 2145Mhz @ 1.037v in superposition on stress loop.. without even throttling.. like.. That thing is amazing haha. It can do some seriously high core clocks as that was at like 63c even.. not even super cold..
> 
> I wonder if we can push 2200Mhz out of that thing with better push-pull rad fans @ 100% and 1.093v curve...


I know, it’s insane; that’s the card that runs 2,160-2,205 in games. That’s really good! The silicon varies so much between 2080Tis. I have seen people on YouTube with certain TU102s running like 2,175MHz at 38-39C. My card would never do that.

These cards just vary so much. That’s why I want to get this Galax HOF WC 2KW edition. It has the waterblock, the rare BIOS, and the rare voltage software.

^ I am just bored with my silicon, and want to try another 2080Ti.

I know @J7SC runs like 2,145 at 1.043v. He has really good cards too.

My 2080Ti can benchmark at high frequencies, but it needs some high voltage too: 1.093v to 1.125v for 2,130-2,190MHz.

And those frequencies just aren’t game stable, so I am stuck at the 2,115MHz range for long gaming sessions.


----------



## sultanofswing

Can you guys test something? Download ThermSpy, the program from the first post on this page.
Run the overclock settings you normally use with that program open, and see if the internal clock matches the clock you are asking for.


----------



## tps3443

sultanofswing said:


> So can you guys test something. Download this program from the 1st post on this page Thermspy
> Run your overclock settings that you normally run with that program running and see if the internal clock matches what clock your are asking for.


I’m going to once I get off work. I’m curious; I think my internal clock is very close to, or the same as, my stated boost clock.

My daily game-stable settings net me around 17,200-17,275 in Time Spy graphics.

So I’m gonna guess that’s probably 2,085-2,100 internal clock, with 2,115-2,130 external clock?

I have no idea.


----------



## sultanofswing

tps3443 said:


> Im going to once I get off work. I’m curious. I think my internal clock is very close to, or the same as my stated boost clock.
> 
> My daily game stable setting net me around 17,200-17,275 in timespy graphics.
> 
> So I am gonna guess that’s probably 2,085—2,100 internal clock? with 2,115-2,130 external clock?
> 
> I have no idea.


Yea if your daily overclock nets 17k+ in Timespy I'd say the internal clock is pretty close.


----------



## J7SC

pewpewlazer said:


> I guess you could reduce the power limit slider and let the cards throttle on their own. Otherwise, VF curve is the only way to "undervolt".
> 
> When I ran a GTX 1070 SLI setup and tried using the VF curve, I had to adjust both separately. Set curve and apply clocks, click settings in Afterburner, change the "master graphics processor" selection up top to GPU2, close settings, change VF curve for card #2 and apply.
> 
> As an added bonus, I found out that my two (identical EVGA 1070 FE cards ordered at the same time) cards had slightly different VF curves, so I couldn't even use identical settings between the two. Needless to say, I gave up pretty quickly.


Yeah, my two cards also have slightly different curves... the GPUs have serial numbers just two apart and BIOSes with the same date (though not identical after a hex comparison). I am happy with the voltages either way, but I like to 'tinker' with this setup within reason (partial work machine). Also, per @tps3443, still working on a project, but I just did a quick sample run in Valley @ 2175 on the GPUs and 8223 VRAM... the cards are close on GPU voltage under load, but not identical.


----------



## pewpewlazer

sultanofswing said:


> So can you guys test something. Download this program from the 1st post on this page Thermspy
> Run your overclock settings that you normally run with that program running and see if the internal clock matches what clock your are asking for.


Also, please share what BIOS you're running. I've been wondering if different BIOSes behave differently, but I don't exactly feel like playing musical BIOSes again. No matter what I do on the GALAX 380w BIOS, the "internal clock" is always at least ~15mhz lower than reported, even at stock settings.

Here's my latest "gaming" OC profile I've been testing. Played an hour or two of Watch Dogs Legion with it earlier, which seems to be one of the most OC sensitive games I've found. So far, so good. Entire curve was set +165, then I bumped up 1.081v and beyond to +180. Screen shot below was taken during TSE GT2.


----------



## jura11

Is the RTX 3090 worth it, guys? I would say yes. I use my PC for rendering, and in rendering the RTX 3090 is roughly 2x faster than my RTX 2080Ti Strix or Zotac RTX 2080Ti AMP. In rendering my Strix can do a stable 2070-2085MHz max, the Zotac RTX 2080Ti AMP will do 2055MHz, and the RTX 3090 in OptiX or CUDA rendering will easily do 2145-2160MHz.

Hope this helps 

Thanks, Jura


----------



## tps3443

J7SC said:


> Yeah, my two cards also have slightly different curves... GPUs have serial numbers just two apart and the same date bios (though not identical after hex comparison). I am happy with the voltskies either way, but like to 'tinker' with this setup within reason (partial work machine). Also per @tps3443 , still working on a project, but I just did a quick sample run with Valley @ 2175 GPUs and 8223 VRAM...the cards are close @ GPUv under load, but not identical
> 
> View attachment 2471899


1.068V at 2,175 is still amazing, while 1.043 is just absurdly good.


My 2080Ti is like, hey man, I will run an absolute minimum of 2,115MHz in anything, but I always need 1.093v or I will lock up and crash, lol!


----------



## jura11

tps3443 said:


> 1.068V at 2,175 is still amazing, while 1.043 is just absurdly good.
> 
> 
> My 2080Ti is like, hey man, I will run an absolute minimum of 2,115MHz in anything, but I always need 1.093v or I will lock up and crash, lol!


2205MHz needs 1.09v as an absolute minimum, 2220MHz also needs about 1.09v, and anything below 2205MHz needs 1.05v. At 1.0v my Strix will do 2100-2115MHz in gaming, and it is rock stable at those clocks in Control.

Usually if it's stable in Cyberpunk 2077, Control, or Metro Exodus, then it's stable in any game out there.

Great tests for stability are The Witcher 3 and the Assassin's Creed games.

Hope this helps 

Thanks, Jura


----------



## J7SC

tps3443 said:


> 1.068V at 2,175 is still amazing, while 1.043 is just absurdly good.
> 
> 
> My 2080Ti is like, hey man, I will run an absolute minimum of 2,115MHz in anything, but I always need 1.093v or I will lock up and crash, lol!


What I find a bit strange is that the two cards (again, serial # only 2 apart) have different BIOS versions with the same date. In fact, if you check the TechPowerUp BIOS DB, there are a pile more for the Aorus Xtr WB... comparing my cards' BIOSes with a hex editor, there are several differences. Still, I plan to flash my card #2 with the BIOS from my card #1 and see what happens.

@jura11


Spoiler: Planned Infrastructure Upgrade for 3090s


----------



## sultanofswing

J7SC said:


> What I find a bit strange is that the two cards (again, serial # only 2 apart) have different bios versions, with the same date. In fact, if you check TechPowerupbios DB, there a a pile more for Aorus Xtr WB...comparing my cards' bios with a hex editor, there are several differences. Still, I plan to flash my card #2 with the bios from my card #1 and see what happens
> 
> @jura11
> 
> 
> Spoiler: Planned Infrastructure Upgrade for 3090s
> 
> 
> 
> 
> View attachment 2471923


You should have flashed both of them with the same BIOS a long time ago!


----------



## J7SC

sultanofswing said:


> You should have flashed both of them with the same BIOS a long time ago!


...probably, but I was trying to figure out what the differences are - I presume the vendor has some sort of fine-grain differentiation binning going on (or not), thus the differences in same-date / same model bios versions ? Both 'stock & original' bios cards pull 380 W under full load.

In any case, this being a part-work machine means that I've been treating it as such, but when new GPUs arrive, I'm looking forward to treating these 2080 Ti cards to some XOC BIOS... might do the cross-flash before then, though. The pic below is originally from March 2019. The WinMerge comparison is just an excerpt and shows a few of the rows with colour-marked differences (there were a heck of a lot more, all told).


----------



## tps3443

This is my watercooled Founders edition with the Galax 380 watt bios and the 8 ohm resistors soldered in place too.





----------



## J7SC

Nice desktop ! Hopefully I get to play some more Cyberpunk 2077 later tonight. BTW, you have a really nice core setup with the 7980 XE / DDR 4000 / X299 Dark.
Do you have room for another rad ? Not that 2x 360MM are not enough, but w/ an oc'ed 18 core and shunt-modded 2080 Ti , extra cooling could be useful


----------



## tps3443

J7SC said:


> Nice desktop ! Hopefully I get to play some more Cyberpunk 2077 later tonight. BTW, you have a really nice core setup with the 7980 XE / DDR 4000 / X299 Dark.
> Do you have room for another rad ? Not that 2x 360MM are not enough, but w/ an oc'ed 18 core and shunt-modded 2080 Ti , extra cooling could be useful


Yea the good memory really saves the platform, and helps so much with IPC. I run the DDR4 4000 at 15-15-15-30-275-1T. But it is XMP 4000 CL15 already default. The CPU runs really cool, it is delidded and lapped, and all that good stuff. Plus this platform has a 110C TJmax on the processors before any sort of throttling happens. But I never get close to that anyways. The 7980XE's hottest core never goes beyond 70C looping R20. And in games it runs like 35-45C "if that", usually because it just never gets loaded up by a game.

CPU and GPU run really cool. But my definition of cool lately is like below 35C, lol. And I just can't reliably keep my 2080Ti that cool with fluctuating ambient temps and such.

But right now at this point I am a little disappointed in my 2080Ti. It just cannot clock very well at all. My silicon just isn't the best.

I have given up on chasing super low temperatures, I have closed my case back up, reduced the turbine noise a little, and both CPU and GPU under a full gaming load run about 40C each.


----------



## tps3443

jura11 said:


> 2205MHz with 1.09v is absolutely minimum, 2220MHz too needs like 1.09v and anything below 2205MHz needs 1.05v, at 1.0v my Strix will do 2100-2115MHz in gaming, in Control is rock stable at such clocks
> 
> Usually if its stable in Cyberpunk 2077 or Control or Metro Exodus then is stable any game out
> 
> Great test for stability is Witcher 3, Assassin's Creed games
> 
> Hope this helps
> 
> Thanks, Jura


What kind of cooling do you run? 2,220 is pretty amazing. What are your temps?


----------



## J7SC

...thought you might like this *rumor (add salt!)* ...the 2080 Ti may have to tide us over



Spoiler


----------



## jura11

tps3443 said:


> What kind of cooling do you run? 2,220 is pretty amazing. What are you temps?


Hi there 

My cooling is 4x 360mm radiators plus a MO-RA3 360mm radiator, running 4x D5 pumps (a mixture of Lowara and XSPC pumps); pretty happy with the performance of the loop.

Temperatures usually sit in the 36-38°C range. The highest temperature I have seen is 42°C at 26-28°C ambient; in benchmarks I usually see 31-33°C max.

Hope this helps 

Thanks, Jura


----------



## acoustic

If it's still on Samsung, I'm not sure I will bother. I can't see how their 5nm process won't have the same issues their 8nm has, when we can clearly see how awful Samsung 8nm is (looking at Ampere).


----------



## jura11

J7SC said:


> ...thought you might like this *rumor (add salt !*) ...2080 Ti may have to tide over
> 
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2471998


Yup, read that, and I will probably upgrade too hahaha. If that's true then this GPU will be a monster in rendering and RT.

I planned to get a 6900XT as well but sadly couldn't find it in stock; it's not for me but for my brother's build.

Hope this helps 

Thanks, Jura


----------



## jura11

acoustic said:


> If it's still on Samsung, I'm not sure I will bother. I can't see how their 5nm process wont have the same issues their 8nm has, when we can clearly see how awful Samsung 8nm is (looking at Ampere)


Ampere and the Samsung process have been the biggest disappointment for me; I expected a bit better OC potential.

Hope this helps 

Thanks, Jura


----------



## sultanofswing

Took my 3090 out and went back to the KPE 2080Ti. Can't deal with that inefficient crap Samsung made.


----------



## Nizzen

sultanofswing said:


> Took my 3090 out and went back to the KPE 2080ti, Can't deal with that inefficient Crap Samsung made.


_using 1.3v on 2080ti kingpin_

🤣

I can't wait for a 7nm/5nm Nvidia GPU.

Using both 3090 and 2080ti in different computers.


----------



## Laithan

You'll be waiting about a year after release date ; )


----------



## J7SC

jura11 said:


> Yup read that and will probably upgrade too hahaha, if that's true then this GPU will be monster in rendering and RT
> 
> Planned get 6900XT as well but sadly couldn't find it in stock, its not for me but for my brother build
> 
> Hope this helps
> 
> Thanks, Jura


I really like the 6900 XT up to 1440 resolution, but at 4K, the more limited memory bus width seems to kick in, even with InfinityCache. Its current ray tracing and DLSS issues can perhaps be addressed by AMD (and Microsoft) through drivers and firmware, but the memory bandwidth is a tough one to crack. So for now, 3090s are still the king of the hill @ 4K

BTW, I like your cooling system... a lot... I run 5x XSPC RX 360s and 4x D5s in a dual loop, and temps stay low. But your MoRas on top of all that?! How many MoRas? You seem to be ready for AD102 Lovelace (per the above _rumor_). Still hoping to see mGPU Hoppers sooner rather than later, but presumably there will be a 'Super' version of the current Amperes first, then maybe AD102s, unless the rumor turns out to be partial or complete nonsense.


----------



## Audioboxer

tps3443 said:


> 1.068V at 2,175 is still amazing, while 1.043 is just absurdly good.
> 
> 
> My 2080Ti is like, hey man, I will run an absolute minimum of 2,115MHz in anything, but I always need 1.093v or I will lock up and crash, lol!


lol, I need to pump around 1.093v through to get 2085 stable, and it often drops to 2070. Temperature seems to be a nuisance for me: under heavy load I'm around 45-46 degrees and struggle to stay below that. I usually get the drops to 2070 once I go over 45 degrees.

Memory is at 8000. Really not sure there are many gains from OCing it. Wish I could get my core into the 21xx range. Need better cooling, I guess.
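For what it's worth, the gain from a memory OC can be estimated from raw bandwidth. A sketch, assuming Afterburner's readout is half the effective (quad-pumped) GDDR6 data rate, so the stock 7000 MHz reading corresponds to the 2080 Ti's spec-sheet 14 Gbps per pin:

```python
# Estimate 2080 Ti memory bandwidth from the Afterburner clock readout.
# Assumes Afterburner shows half the effective (quad-pumped) data rate,
# i.e. stock 7000 MHz -> 14 Gbps per pin, on a 352-bit bus.
BUS_BITS = 352

def bandwidth_gbps(afterburner_mhz):
    effective_gbps_per_pin = afterburner_mhz * 2 / 1000  # Gbps per pin
    return BUS_BITS / 8 * effective_gbps_per_pin         # GB/s

stock = bandwidth_gbps(7000)  # 616 GB/s, matching the spec sheet
oc = bandwidth_gbps(8000)     # 704 GB/s
print(f"stock: {stock:.0f} GB/s, at 8000: {oc:.0f} GB/s "
      f"(+{(oc / stock - 1) * 100:.1f}%)")
```

So a jump from 7000 to 8000 is about 14% more bandwidth on paper; how much of that shows up in frames depends on how memory-bound the game is.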


----------



## Avacado

Audioboxer said:


> lol I need to pump around 1.093v through to get 2085 stable, and it often drops to 2070. Temp seems to be a nuisance for me, under heavy load I'm around 45~46 degrees and struggle to stay below. Usually get the drops to 2070 once I go over 45 degrees.
> 
> Memory at 8000. Really not sure there is much gains from OCing it. Wish I could get my core into the 21xx range. Need better cooling I guess.


Don't feel bad, I have 3 of them and only one boosts core >2100. Has nothing to do with cooling.


----------



## jura11

J7SC said:


> I really like the 6900 XT up to 1440 resolution, but at 4K, the more limited memory bus width seems to kick in, even with InfinityCache. Its current ray tracing and DLSS issues can perhaps be addressed by AMD (and Microsoft) through drivers and firmware, but the memory bandwidth is a tough one to crack. So for now, 3090s are still the king of the hill @ 4K
> 
> BTW, I like your cooling system...a lot  ....I run 5x XSPC RX 360s and 4x D5s in a dual loop, and temps are and stay low...But Your MoRa*s *on top of all that ?! How many MoRas ? You seem to be ready for AD 102s Lovelace (per above _rumor_). Still hoping to see mGPU Hoppers sooner rather than later, but presumably there will be a 'Super' version of the current Amperes first, then may be AD 102s, unless the rumor turns out to be partial or complete nonsense


The 6900XT does seem memory-starved at higher resolutions like you said, mainly at 4K and above. I expected memory OC would help with that, but sadly memory OC is pretty limited on RDNA GPUs, with an artificial cap at 3000MHz for the 6900XT. RT performance matters depending on whether you play games with it; personally I sometimes prefer old-school rasterisation over RT, and I hate DLSS: on ultrawide it's just a blurry mess, and I don't like it in Cyberpunk 2077.

My plan too has been to run more radiators, but sadly the Caselabs M8 with pedestal supports a max of 4x 360mm radiators, although I could fit a 280mm radiator at the front, which I will probably do if I can find HWLabs stock here in the UK.

I like your loop as well; I've seen the pictures over here and always liked your setup.

I'm running only a single MO-RA3 360, which helps a lot with a lower water delta T, and mainly means I can run the fans under 800RPM most of the time. Right now the loop is cooling an RTX 3090 GamingPro with a Bykski waterblock, an Asus RTX 2080Ti Strix with an EKWB Vector RTX 2080Ti Strix waterblock, and a Zotac RTX 2080Ti AMP with an Aquacomputer Kryographics RTX 2080Ti block and active backplate. In rendering, temperatures won't break 36°C on any of the GPUs: the Strix runs warmest at 36°C, the Zotac RTX 2080Ti AMP with the EVGA FTW3 BIOS runs coldest at 32°C, and the Palit RTX 3090 GamingPro runs somewhere in the middle.

I really need to see how Lovelace AD102 will perform in rendering, and then I will decide if it's worth upgrading or would be another sidegrade. I'm also expecting Super versions of the 3080 and 3090, and possibly a 20GB version of the 3080.

For now the 3090 seems great in rendering, performing better than I expected too.

Hope this helps and thanks there

Thanks, Jura


----------



## Bart

jura11 said:


> My plan too has been running more radiators, but sadly Caselabs M8 with pedestal supports max 4*360mm radiators, although I could fit at front 280mm radiator which I will probably fit if I can find HWLabs here in UK


I'm working on putting 6 360s in mine, it will take 5 360s at a bare minimum, not 4. Heck I could stuff 4 rads in an M8 WITHOUT the pedestal, LOL! Maybe you need to re-jig your layout or something, find a way to fit more rads in there. What do you have in there that's preventing you from using all the 360 mounts?


----------



## tps3443

I am probably gonna add some radiators externally. And I am gonna order a dual D5 top.

@J7SC

@jura11 


Any ideas what the best cheap external radiator mount is? I may just ghetto rig something up.

I am looking at that Alphacool 1080/1260mm, 45mm thick, and it’s only about $100 USD, which seems like a really good deal.


----------



## Avacado

tps3443 said:


> I am looking at that Alphacool 1080/1260MM 45MM thick and it’s only about $100.00 USD. Which seems like a really good deal.


Haha, literally just ordered that, sitting in a box in the basement.


----------



## tps3443

Avacado said:


> Haha, literally just ordered that, sitting in a box in the basement.


I saw it, and it’s such a great buy! The Phobya G-Changer is like $200 or more, and the MO-RA3s are $300 if you can find them.


----------



## J7SC

tps3443 said:


> I am probably gonna add some radiators externally. And I am gonna order a dual D5 top.
> 
> @J7SC
> 
> @jura11
> 
> 
> *Any ideas what the best cheap external radiator mount is? I may just ghetto rig something up.*
> 
> I am looking at that Alphacool 1080/1260MM 45MM thick and it’s only about $100.00 USD. Which seems like a really good deal.


...ghetto rig you say  ? Also, moar rads are rad


Spoiler


----------



## tps3443

J7SC said:


> ...ghetto rig you say  ? Also, moar rads are rad
> 
> 
> Spoiler


OK, I have a cart put together on Amazon. I’ve been working on this for too long.

I am going to replace all of my case fans with Nidec Servo Gentle Typhoon 2,150RPM 120MM fans. So I will run (6) in my case. (3) on top rad, (3) on front rad. Then I am gonna get the Alphacool Plexi dual d5 pump top. Then (2) Alphacool VPP755 V3 pumps, then a single Alphacool 1080 45MM thick radiator, and I will use some mixed 120MM fans I already own for the 1080 rad.

I will Velcro or mount that 1080 rad underneath my desk on the floor.


Are two VPP755 V3 pumps good enough? I hear the head pressure is like a D5’s, but they use less power.

I like the silver color of them, and they’re only like $62 dollars each.


----------



## Bart

Gentle Typhoons FTW, wise choice! My old tinnitus-riddled ears love those fans.


----------



## tps3443

Bart said:


> Gentle Typhoons FTW, wise choice! My old tinnitus-riddled ears love those fans.


They are awesome fans. I have always been cheap with fans. Unfortunately they only have 5 in stock, and I can’t seem to find any more Typhoon 2,150s anywhere else, so this got me thinking about going with the lower-RPM models.

These fans offer superior static pressure to anything else.

So what about the 1,450 models or even the 1,850 models?

I really appreciate any input on this.


I currently have (2) EKWB 360s with some really cheap $2-$4 standard fans, a 2080Ti at 2,115, and a 7980XE at 4.8.
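On choosing between the RPM models, the standard fan affinity laws give a rough way to compare them: airflow scales linearly with RPM, static pressure with RPM squared, and drive power (a loose proxy for noise) with RPM cubed. A sketch that treats the fans as identical apart from speed, which ignores blade and motor differences:

```python
# Fan affinity laws: compare lower-RPM Gentle Typhoon models against the
# 2150 RPM one, treating them as identical apart from speed (a simplification).
def relative(rpm, ref_rpm=2150):
    ratio = rpm / ref_rpm
    return {
        "airflow": ratio,        # ~ RPM
        "pressure": ratio ** 2,  # ~ RPM^2
        "power": ratio ** 3,     # ~ RPM^3 (rough proxy for noise/drive power)
    }

for rpm in (1850, 1450):
    r = relative(rpm)
    print(f"{rpm} RPM: {r['airflow']:.0%} airflow, "
          f"{r['pressure']:.0%} pressure, {r['power']:.0%} power")
```

By this estimate the 1850 model keeps roughly three quarters of the 2150's static pressure, while the 1450 drops to under half, which is worth weighing against the noise reduction.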


----------



## Bart

tps3443 said:


> They are awesome fans. I have always been cheap with fans. Unfortunately they only have 5 in stock. And I can’t seem to find anymore Typhoon 2,150’s anywhere else. So this got me thinking about going with the lower RPM models.
> 
> These fans offer superior static pressure to anything else.
> 
> So what about the 1,450 models or even the 1,850 models?
> 
> I really appreciate any input on this.
> 
> 
> I have (2) ekwb 360’s currently with some really cheap $2-$4 dollar standard fans, and a 2080TI at 2,115, and a 7980XE at 4.8.


I have loads of the 1850RPM PWM models, no complaints, but I also tend to over-spec my builds in terms of rad space, so I'm never in a situation where I'm heat-soaked and need 100% of the fans.


----------



## tps3443

So I wanted to share this for anyone curious. I thought my Control performance was spectacular, so I started investigating it a little. I found a video on YouTube of an overclocked RTX3080 running at 2560x1440, paused the video, and did my best to replicate the scene at the same spot. And here are the results!

RTX3080 @2,085Mhz Core/ 20,204Mhz memory overclock. "CONTROL" 106FPS W/ 9900K @5.1Ghz

RTX2080Ti @ 2,115Mhz Core/ 16,222Mhz memory overclock "CONTROL" 91FPS W/ 7980XE @ 4.8Ghz

Both settings are mirrored identically: resolution, DLSS, scene, even GPU usage is solid on both, everything! The overclocked RTX3080 at 2,085MHz is 16.5% faster than my 2080Ti at 2,115MHz.

Hope this helps! 

Please reply away, I would love to hear everyone's thoughts on this.
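For anyone who wants to sanity-check the margin quoted above, the arithmetic is just a ratio of the two frame rates:

```python
# Verify the quoted margin between the two Control runs.
fps_3080 = 106    # overclocked RTX 3080 @ 2085 MHz (from the YouTube video)
fps_2080ti = 91   # RTX 2080 Ti @ 2115 MHz

margin = (fps_3080 / fps_2080ti - 1) * 100
print(f"RTX 3080 is {margin:.1f}% faster")  # 16.5% faster
```

106 over 91 works out to the stated 16.5%, so the comparison is internally consistent even if the methodology (matching a paused video frame) is rough.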


----------



## Krzych04650

tps3443 said:


> So I wanted to share this for anyone curious. I thought my Control performance was spectacular. So I started investigating it a little. So I found a video on YouTube of a overclocked RTX3080 running 2560x1440P resolution, I paused the video and did my best to replicate the scene to where I was at. And here are the results!
> 
> RTX3080 @2,085Mhz Core/ 20,204Mhz memory overclock. "CONTROL" 106FPS W/ 9900K @5.1Ghz
> 
> RTX2080Ti @ 2,115Mhz Core/ 16,222Mhz memory overclock "CONTROL" 91FPS W/ 7980XE @ 4.8Ghz
> 
> Both settings are mirrored identically, resolution, DLSS, scene, even GPU usage is solid on both, everything!! The overclocked RTX3080 at 2,085Mhz is 16.5% faster than my 2080Ti at 2,115mhz
> 
> Hope this helps!
> 
> Please reply away, I would love to hear everyone's thoughts on this.


Seems about right. The margin is higher than it would normally be (like 12%) because of RT and the fact that Control is one of those games that really likes Ampere. This is basically as big a difference as there can be between the 2080 Ti and 3080 at this resolution. It would jump to a 20% difference at ultrawide and 25% at 4K.


----------



## J7SC

Cyberpunk 2077 <> Bladerunner ! 

(4K RTX Ultra / DLSS Quality Setting)


----------



## Ironcobra

Is anyone playing RDR2 or Cyberpunk getting frametime spikes once in a while, either when loading a new area or getting a notification? I'm getting a big spike about once a minute and it's driving me nuts. It happens with or without overclocking, and I can't figure out if it's just the game engines or what. I tried reinstalling Windows with no change, replaced the memory and PSU, and got a Samsung 980 Pro. On a 5800X and an Aorus Master.


----------



## J7SC

^I haven't observed that on my setup (TR HEDT) yet. How much system RAM are you running (assuming you refer to a 2080 Ti GPU per this thread) ?


----------



## Ironcobra

J7SC said:


> ^I haven't observed that on my setup (TR HEDT) yet. How much system RAM are you running (assuming you refer to a 2080 Ti GPU per this thread) ?


Yes, running 16GB of TridentZ and an EVGA XC Ultra 2080Ti. I'm getting very similar spikes in most of my games.


----------



## jura11

Bart said:


> I'm working on putting 6 360s in mine, it will take 5 360s at a bare minimum, not 4. Heck I could stuff 4 rads in an M8 WITHOUT the pedestal, LOL! Maybe you need to re-jig your layout or something, find a way to fit more rads in there. What do you have in there that's preventing you from using all the 360 mounts?


Hi there 

I don't think I can fit 6x 360mm radiators in the Caselabs M8 with pedestal. I'm running 2x 360mm radiators on top and another 2x 360mm in the pedestal, and that's it. The PSU compartment holds a Superflower 8Pack 2000W PSU, which is quite a large unit, and in the motherboard compartment I can't fit a radiator on the bottom, no way; maybe on top.

I'm okay with that; previously I ran a 4-GPU setup with no issues, and the water delta T in rendering was 3-5°C with all GPUs rendering, at a power draw of 1200-1295W.


Hope this helps 

Thanks, Jura
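Those delta-T numbers line up with basic loop physics: the coolant flow needed to hold a given water delta T at a given heat load is flow = P / (c · ΔT), with water's specific heat c ≈ 4186 J/(kg·K). A sketch using mid-range figures from the post above:

```python
# Coolant flow required to hold a target water delta T at a given heat load.
C_WATER = 4186  # J/(kg*K), specific heat of water

def flow_lpm(watts, delta_t_c):
    kg_per_s = watts / (C_WATER * delta_t_c)  # mass flow from P = m*c*dT
    return kg_per_s * 60                      # water is ~1 kg/L, so L/min

# ~1250 W rendering load at a 4 C water delta T (mid-range of the 3-5 C above)
print(f"{flow_lpm(1250, 4):.1f} L/min")  # ~4.5 L/min
```

That is comfortably within what a bank of D5 pumps delivers, which is consistent with the low delta T reported.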


----------



## J7SC

Ironcobra said:


> Yes running 16gb tridentz and evga xc ultra 2080ti. Im getting very similar spikes in most of my games.


16 GB should be fine. Apart from GPU oc (VRAM ?), how's the system RAM oc'ed ? 

Really not sure, as I don't have the issue you describe, but on system RAM I used to run tFAW @ 16... backing that off a bit can help with spikes, ditto for tRAS. I don't want to send anyone on a wild goose chase, but backing off the VRAM (if heavily oc'ed) and also tFAW etc. in system RAM is worth at least a test to see if the problem persists.


----------



## jura11

Ironcobra said:


> Is anyone playing rdr2 or cyberpunk getting frametime spikes once in a while either when loading new area or getting a notification. Im getting a big spike like once a minute or so and its driving me nuts, happens with or with overclocking and I cant figure if its just the game engines or what. Tried reinstalling windows no change, replaced memory psu and got a samsung 980 pro. On a 5800x and aurous master.


No spikes on my end; running a 3900X with 64GB G.SKILL TridentZ Neo 3600MHz and absolutely no issues

I would check drivers first. Are you running any RGB software, Razer devices or Razer software, etc.? Can you check CPU usage? 

Can you also check whether the GPU is running at PCIe Gen 3 rather than Gen 4?

Thanks, Jura


----------



## Bart

jura11 said:


> I don't think I can fit 6*360mm radiators in Caselabs M8 with pedestal, running 2*360mm radiators on top, another 2*360mm radiators in pedestal and that's it, in PSU compartment have Superflower 8pack 2000w PSU which is quite large PSU and in motherboard compartment I can't fit radiator on bottom, no way, maybe on top
> 
> I'm okay with that there, previously I have run 4*GPUs setup and no issues and water delta T in rendering have been in 3-5°C with all GPUs rendering, power draw have been 1200-1295W during the rendering


Yeah fitting a 360 in the bottom of that mobo compartment is a chore, especially if you have multiple GPUs. I have two GPUs, and I _barely_ managed to squeak a slim 360 plus fans in that bottom mount, but it was a heck of a tight fit. But I managed to fit two 88mm thick Monstas in the pedestal and 4 slim 360s and a 240 in the main case. But it was more of an exercise just to see if I could, not for any practical reason.


----------



## J7SC

Bart said:


> Yeah fitting a 360 in the bottom of that mobo compartment is a chore, especially if you have multiple GPUs. I have two GPUs, and I _barely_ managed to squeak a slim 360 plus fans in that bottom mount, but it was a heck of a tight fit. But I managed to fit two 88mm thick Monstas in the pedestal and 4 slim 360s and a 240 in the main case. * But it was more of an exercise just to see if I could, not for any practical reason.*


HeHe ...I operated in the same vein ...once I figured that water-cooling 2x oc'ed 2080 Ti plus an oc'ed Threadripper meant getting rid of 1150 W - 1200 W heat energy / peak, I took a look at the 'island of old w-cooling equipment' in the storage area. I used up 5 of 6 older XSPC RX 360/55 rads and 4x D5s (dual loop, btw) I had from various prior builds...I very much tried to fit the sixth rad on the TT Core P5, but no cigar.

...one problem was the _sound_, what with 3K GentleTyphoons and ML 120s. I got that (mostly) under control with baffles that also help direct airflow. The one remaining problem (I suspect you might also have) is the sheer weight of that thing... water-cooling that much is heavy! Then again, temps are great, even over the long haul...


----------



## yoadknux

tps3443 said:


> So I wanted to share this for anyone curious. I thought my Control performance was spectacular. So I started investigating it a little. So I found a video on YouTube of a overclocked RTX3080 running 2560x1440P resolution, I paused the video and did my best to replicate the scene to where I was at. And here are the results!
> 
> RTX3080 @2,085Mhz Core/ 20,204Mhz memory overclock. "CONTROL" 106FPS W/ 9900K @5.1Ghz
> 
> RTX2080Ti @ 2,115Mhz Core/ 16,222Mhz memory overclock "CONTROL" 91FPS W/ 7980XE @ 4.8Ghz
> 
> Both settings are mirrored identically, resolution, DLSS, scene, even GPU usage is solid on both, everything!! The overclocked RTX3080 at 2,085Mhz is 16.5% faster than my 2080Ti at 2,115mhz
> 
> Hope this helps!
> 
> Please reply away, I would love to hear everyone's thoughts on this.


That sounds about right. And on stock, the 3080 will probably be about 10% faster. By the way, with the STABLE/gaming overclock, how much are you getting at Superposition 1080p Extreme?
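The 16.5% figure in the quoted comparison checks out. A minimal sketch (the helper name is mine, not from the thread):

```python
def pct_faster(fps_a: float, fps_b: float) -> float:
    """Percent by which fps_a exceeds fps_b."""
    return (fps_a / fps_b - 1.0) * 100.0

# RTX 3080 @ 106 FPS vs RTX 2080 Ti @ 91 FPS in Control (numbers from the quote above)
delta = pct_faster(106, 91)
print(f"{delta:.1f}%")  # → 16.5%
```

Note the ratio direction matters: 106/91 gives "3080 is 16.5% faster", while 91/106 would give "2080 Ti is 14.2% slower".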


----------



## pewpewlazer

yoadknux said:


> That sounds about right. And on stock, the 3080 will probably be about 10% faster. By the way, with the STABLE/gaming overclock, how much are you getting at Superposition 1080p Extreme?


Superposition is an incredibly inconsistent and borderline worthless benchmark in my experience. But at my game-stable OC (2160/8000) I scored 10,446 in SPOS 1080p Extreme. I scored something like 10,300-something on my first run, 10,4xx on my second run at 2160/8050, and 10,446 on my third run back at the original 2160/8000.

SPOS 1080 extreme is an odd choice to compare, considering how poorly (relatively) Ampere does at low resolutions.

Bottom line is that a water-cooled/overclocked 2080 Ti is 0-20% slower than a stock 3080, depending on a lot of things. But a 10%+ deficit will require either a terrible OC on the Turing card, or comparing at 4K in an Ampere-friendly game/benchmark.


----------



## J7SC

^ Yeah, I don't think Superposition 1080p Extreme is an ideal test for stability; I also observed fairly high swings of ±100 with the same settings... The highest I ever got was 10541 back in March '19 (haven't tested it w/ newer drivers). Superposition does run in SLI CFR, btw, though I never tried that w/ 1080p, just '4K Optimized' and '8K Optimized'. Also, as @pewpewlazer pointed out, 1080p isn't really a good test for Ampere; who runs 1080p after buying a 3080 or 3090?

For hard-core 'game stable' testing, an hour of Cyberpunk 2077 (4K RTX / high DLSS setting) is probably the toughest I play for a single card (as an aside, there may be progress re. CP '77 w/ CFR, but w/o RTX). For two 2080 Tis, the toughest so far is an hour in Microsoft Flight Sim 2020 / CFR <> I usually dial both cards back to 2100 / 8000 / 110% PL because that's the only way I get the system as a whole to drop below 1000 W...


----------



## jura11

Bart said:


> Yeah fitting a 360 in the bottom of that mobo compartment is a chore, especially if you have multiple GPUs. I have two GPUs, and I _barely_ managed to squeak a slim 360 plus fans in that bottom mount, but it was a heck of a tight fit. But I managed to fit two 88mm thick Monstas in the pedestal and 4 slim 360s and a 240 in the main case. But it was more of an exercise just to see if I could, not for any practical reason.


I planned to put a radiator in the motherboard compartment but decided against it because it would be a proper PITA to do, hahaha. Previously I ran a 4x GPU setup; now I'm running just a 3x GPU setup with dual RTX 2080 Tis and an RTX 3090, plus reservoir etc., and 12 HDDs in total. Not sure reworking the whole loop is worth it 

The current loop has great temperatures, and am I happy with it? Yes, 100%

I would just recommend getting an Aqua Computer Aquaero, Quadro or Octo 

Hope this helps 

Thanks, Jura


----------



## Krzych04650

jura11 said:


> I planned put radiator in motherboard compartment but decided against it because it will be proper pita do that hahaha, previously I have run 4*GPUs setup, now running just 3*GPUs setup with dual RTX 2080Ti and RTX 3090 plus reservoir etc and running in total 12 HDD's , reworking whole loop not sure if its worth it
> 
> Current loop have great temperatures and if I'm happy, yes 100%
> 
> I just would recommend get Aquacomputer Aquaero or Quadro or Octo
> 
> Hope this helps
> 
> Thanks, Jura


Just out of curiosity, since you have these 3 cards: assuming someone wanted, say, a 3080 Ti and 2x 2080 Ti in the same system, and to use one or the other depending on which is faster in a given game, how would that work? Which card does the driver default to, the one the display is connected to? Or would you need to somehow disable the 3080 Ti whenever you want to use 2080 Ti SLI? Can you just switch between them at the software level without physically disconnecting anything?


----------



## jura11

Krzych04650 said:


> Just out of curiosity, since you have these 3 cards. Assuming someone would want to have let's say 3080 Ti and 2x2080 Ti in the same system and use either/or depending on what is faster in what game. How would that work? Like to what card the driver defaults to, to the one you have the display connected to? Or would you need to somehow disable 3080 Ti whenever you want to use 2080 Ti SLI? Can you just switch between them on software level without the need of physically disconnecting?


I really don't know how this can be done. I use the GPUs for rendering most of the time, and for gaming I use the RTX 3090, because the two RTX 2080 Tis are different (a Strix and a Zotac; the Strix is taller than the Zotac, so I can't use an NVLink bridge, and sadly there is no flexible NVLink bridge), and one Ti is in an x8 slot while the other runs at x4

Yes, usually it defaults to the GPU the display is connected to. I think in a few games or benchmarks you can select which GPU you want to use 

At the software level, I'm not sure if there is some sort of switch. In theory you could use a KVM switch, or plug DisplayPort cables into both GPUs (2080 Ti and RTX 3080 Ti) 

Hope this helps 

Thanks, Jura


----------



## J7SC

jura11 said:


> I really don't know how this can be done, I use GPUs for rendering most of the time and for gaming I use RTX 3090 because both RTX 2080Ti's are different(Strix and Zotac and mainly Strix is higher than Zotac and I can't use NvLink bridge, sadly there is no flexible NvLink bridge) and one of Ti is in x8 slot and other one is running at x4 slot
> 
> Yes usually its defaults to display connected to, I think in few games or benchmarks you can select which GPUs you want to use
> 
> On software level, not sure if there some sort of switch, you can in theory KVM switch or plug display port cables to both GPUs(2080Ti or RTX 3080Ti)
> 
> Hope this helps
> 
> Thanks, Jura


Whatever happened to all that broadly available mGPU touted with the Win 10 intro, such as 2x 3090s and 1x 6900 XT all in one system, happily cooperating?


----------



## jura11

J7SC said:


> Whatever happened to all that broadly available mGPU touted w/ Win 10 intro, such as 2x 3090s and 1x 6900 XT all in one, happily cooperating ?


Not sure on that; I literally have no idea hahaha 

Thanks, Jura


----------



## Krzych04650

J7SC said:


> Whatever happened to all that broadly available mGPU touted w/ Win 10 intro, such as 2x 3090s and 1x 6900 XT all in one, happily cooperating ?


What happened is that the very low-level APIs that touted this feature actually ended up being the exact reason for the death of multi-GPU in general, instead of doing what you described. When it was possible to implement and fix it at the driver level and create custom profiles, so both NVIDIA and the community could do something with it, it did well enough; but low-level APIs completely removed that possibility, and multi-GPU became entirely dependent on game developers, which, in a world where even releasing a functional game is too much to ask, could not go very well.

Not that mGPU is that great in the first place. If you look at the few games that actually have it, like SOTTR or RDR2, they aren't really any less messed up than implicit implementations; if anything, they're below average in comparison. In SOTTR, TAA has artifacts in mGPU mode, and modern games are built around TAA, so you get a ton of aliasing and shimmer without it. RDR2 doesn't have that problem, but you have to stay locked to your FPS target 99% of the time, because frametimes go mad when you're not. This was always partially true of SLI, but never to this extent.

CFR looks much better to me. It feels way smoother, as if it had less input lag than a single card, probably because running a single card at 99% usage causes an exponential input-lag increase, and CFR never really hits that; it's like running a single card at 90% usage, since CFR has no side effects. It is also way more compatible, bringing multi-GPU to games that could never have it before. It can even scale nicely if you throw some "third party" load on the GPUs from outside the game, like heavy ReShade or SGSSAA. But then we go back to low-level APIs and having no ability to tinker with the game. Even without that, I'd take 1.35-1.4x scaling in most games over 1.90x+ in only a few games any day. Guess we will never know what happened to it.
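The trade-off above can be made concrete as an expected uplift across a whole game library: per-game scaling weighted by how many games actually support the mode. The support fractions below are illustrative assumptions, not measurements from the thread:

```python
def expected_scaling(scale_when_supported: float, support_fraction: float) -> float:
    """Average multi-GPU uplift across a library; unsupported games fall back to 1.0x."""
    return scale_when_supported * support_fraction + 1.0 * (1.0 - support_fraction)

# CFR: ~1.35-1.4x (midpoint 1.375) in most games (assume 80% support).
# AFR SLI: 1.90x+ but only in a few games (assume 15% support).
cfr = expected_scaling(1.375, 0.80)
afr = expected_scaling(1.90, 0.15)
print(f"CFR expected: {cfr:.2f}x, AFR expected: {afr:.2f}x")
```

Under these assumptions the broadly compatible 1.375x mode wins comfortably, which is the point being made: compatibility dominates peak scaling.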


----------



## J7SC

Krzych04650 said:


> What happened is these very low level APIs that taunted this feature actually ended up being the exact reason for the death of multi GPU in general, instead doing what you described. (...)
> 
> CFR looks much better to me. It feels way smother and like if it had less input lag than single card, probably because running single card at 99% usage causes exponential input lag increase, and CFR never really hits that, so it is like running single card at 90% usage since CFR has no side effects. It is also way more compatible, bringing multi GPU to games that never could have it. It can even scale nicely if you throw some "third party" load on the GPUs from outside of the game, like heavy Reshade or SGSSAA. But then we go back to low level APIs and having no ability to tinker with the game. But even without that I'd take that 1.35-1.4x scaling in most games over 1.90x+ only in few games any day. Guess we will never know what happened to it.


I agree with you on CFR. Apart from my own experience with it, I've seen a fellow on YT post Metro Exodus with 2x Titan RTX with DX12 CFR, RTX etc. It seemed like slightly less micro-stutter than even the single-card run he showed side by side for comparison.

I use CFR all the time for Microsoft Flight Simulator 2020: 4K Ultra at high frame rates, no problem. Today, I even had it working for a bit in Cyberpunk 2077 at 4K Ultra native (no RTX, no DLSS), but in DX12 Cyberpunk 2077 it was just too spotty, and trying to change a setting and repeat that feat crashed. That's going to take some more work to figure out and get to the same stability as DX11 Flight Simulator 2020. Overall though, I find it disturbing that a) NVidia never officially documented CFR with RTX 2K, and b) then killed it off before the RTX 3K Ampere release


----------



## Krzych04650

J7SC said:


> I agree with you on CFR, Apart from my own experience with it, I have seen a fellow @ YT post Metro Exodus with 2x Titan RTX with DX 12 CFR, RTX etc...seemed like slightly less micro-stutter than even a single card run he showed side-by-side for comparison.
> 
> I use CFR all the time for Microsoft FlightSim 2020 - 4K Ultra at high frame rates, no problem. Today, I even had it working for a bit in CyberPunk 2077 at 4K Ultra Native (no RTX, no DLSS) but in DX 12 Cyberpunk 2077 it was just too spotty, and trying to change a setting and repeat that feast crashed. That's going to take some more work to figure out and get to the same stability as DX11 FlightSim 2020. Overall though, I find it disturbing that a.) NVidia never officially documented the CFR w/ RTX 2K, b.) then killed it off before RTX 3K Ampere release


Generally I found much more success with CFR in DX11 than DX12. On DX12 it almost always ends up crashing, including Metro Exodus.

We can only guess, but generally the story is that it leaked through the drivers accidentally and was certainly never meant to be seen at this stage, so we don't really know what happened after that; maybe they killed it or maybe it is still in development, who knows. Still, I do not like this, neither this nor ditching implicit SLI on Ampere, especially that. It is really troubling, because say what you will about NVIDIA, but they were always the ones pushing high-end PC gaming with their features; if they become stagnant like Intel, it is going to have a very negative impact. They are already dropping the ball on the hardware side...


----------



## tps3443

@J7SC 

I have seen Cyberpunk 2077 crash in the menu immediately due to instability, and I have seen it crash 2-4 hours into the game due to instability.


I will say that if it can run Cyberpunk for 6-8 hours it is 100% stable.

My system seems to run right through for 24 hours straight without any issues at all since reducing GPU clocks to 2,115MHz and reducing my GDDR6 memory overclock to 16,250


----------



## J7SC

Krzych04650 said:


> Generally I found much more success with CFR in DX11 than DX12. On DX12 it almost always ends up crashing, including Metro Exodus.
> 
> We can only guess but generally the story is that it leaked through drivers accidentally and it was certainly never meant to be seen at this stage, so we don't really know what happened after that, maybe they killed it or maybe it is still in development, who knows. Still I do not like this, neither this nor ditching implicit SLI on Ampere, especially that. It is really troubling because say what you will about NVIDIA but they were always the ones pushing high-end PC gaming with their features, if they become stagnant like Intel then it is going to have a very negative impact. They are already dropping the ball on hardware side...


...by coincidence, I saw this today:


Spoiler: tile GPU patent














 
There's no doubt in my mind that AMD, NVidia and Intel are all eventually going to go the 'tile / chiplet' route, if only from a yield-cost-benefit POV. The tricky bit is the connectivity fabric <> making the chiplets appear as a homogeneous GPU unit, and there too big progress is already being made (e.g. Intel's upcoming multi-tile high-end consumer Xe HPG in recent tests). 

The undocumented NVidia tile / CFR driver I referenced before is 'proto-Neanderthal' by comparison, but still, what a performer _when_ it works... What bugs me is that 2x 2080 Ti have locked within them _a lot more performance with tile drivers_ (never mind 2x 3090s, if they would even have a hidden tile driver). Anyway, meanwhile back at the farm... the vid of the CFR tile driver for 2x Titan RTX vs the single Titan RTX driver in Metro Exodus RTX DX12 that I referenced earlier. Let it run for a while and compare 'stuttering' vs the single card


----------



## jura11

Is there any kind of flexible NVLink bridge? I'd love to test my RTX 2080 Tis in SLI 

Thanks, Jura


----------



## J7SC

jura11 said:


> Is there any kind of flexible NvLink bridge? Would love to test RTX 2080Ti's in SLI
> 
> Thanks, Jura


EDIT... didn't initially see your 'flexible' qualifier... I've only seen rigid '3-slot' and '4-slot' versions, and I have and have used both. Asus, MSI, Gigabyte and the other 'usual suspects' all put them out; still available. Maybe pick up a rigid set, Dremel one in half and lay traces by hand?


----------



## tps3443

@J7SC

Isn’t most GPU stuttering from multi GPU usually from CPU bottleneck?


It seems like multi GPU would work great on high frequency, high core count CPU with X16 to each GPU no?

I have little experience with multi GPU. I ran it back with Vanilla Titan‘s briefly but I had a i5 6600K running those cards so it struggled lol.


----------



## J7SC

tps3443 said:


> @J7SC
> 
> Isn’t most GPU stuttering from multi GPU usually from CPU bottleneck?
> 
> 
> It seems like multi GPU would work great on high frequency, high core count CPU with X16 to each GPU no?
> 
> I have little experience with multi GPU. I ran it back with Vanilla Titan‘s briefly but I had a i5 6600K running those cards so it struggled lol.


...Quad SLI, for example, would show outright losses compared to 2x or 3x SLI unless you cranked the CPU and VRAM way up, back in the bad old days ('12). The crossing point where mGPU starts to bear fruit is somewhere between 1440p and 4K, IMO.


----------



## pewpewlazer

tps3443 said:


> @J7SC
> 
> Isn’t most GPU stuttering from multi GPU usually from CPU bottleneck?
> 
> 
> It seems like multi GPU would work great on high frequency, high core count CPU with X16 to each GPU no?
> 
> I have little experience with multi GPU. I ran it back with Vanilla Titan‘s briefly but I had a i5 6600K running those cards so it struggled lol.


I never noticed the "micro-stuttering" myself, but I also wasn't running V-Sync/G-Sync back then. But I thought it was entirely a result of the "AFR" approach to SLI and had nothing to do with CPUs or bottlenecks...

Driver-based CFR seemed like the holy grail in this brave new world of Explicit Multi-GPU (I can think of one thing "explicit" about it... a four-letter word that starts with "S"). I was so hopeful that SLI might make a valiant comeback when I first saw videos of this DX12 CFR in action in Metro Exodus. But it seems Nvidia buried that one. 

Not that it matters at the moment, you can't even give Nvidia $1,500 for ONE graphics card even if you wanted to, let alone two of them.


----------



## tps3443

pewpewlazer said:


> I never noticed the "micro stutering" myself, but I also wasn't running V-Sync/G-Sync back then. But I thought it was entirely a result of the "AFR" approach to SLI and had nothing to do with CPUs or bottlenecks...
> 
> Driver based CFR seemed like the holy grail in this brave new world of Explicit Multi-GPU (I can think of one thing "Explicit" about it... a 4 letter word that starts with "S"). I was so hopeful that SLI might make a valiant comeback when I first saw videos of this DX12 CFR in action in Metro Exodus. But it seems Nvidia buried that one.
> 
> Not that it matters at the moment, you can't even give Nvidia $1,500 for ONE graphics card even if you wanted to, let alone two of them.


I know, this market we are in is ridiculous. Anytime I buy a new GPU, I count heavily on selling my current GPU, usually to offset at least 30-50%, or sometimes more, of the cost of the new one.

Well, we can't even do that. The scalping has gotten so bad; it's just a thing now. Everyone talks crap about it, but yet more and more people are doing it..

Where are the video cards even at Lol?


----------



## J7SC

pewpewlazer said:


> I never noticed the "micro stutering" myself, but I also wasn't running V-Sync/G-Sync back then. But I thought it was entirely a result of the "AFR" approach to SLI and had nothing to do with CPUs or bottlenecks...
> 
> Driver based CFR seemed like the holy grail in this brave new world of Explicit Multi-GPU (I can think of one thing "Explicit" about it... a 4 letter word that starts with "S"). I was so hopeful that SLI might make a valiant comeback when I first saw videos of this DX12 CFR in action in Metro Exodus. But it seems Nvidia buried that one.
> 
> Not that it matters at the moment, you can't even give Nvidia $1,500 for ONE graphics card even if you wanted to, let alone two of them.


There isn't very much a single well-running (i.e. 2100 / 8000) 2080 Ti can't do in the regular app space, never mind two of them with 'select' CFR on some of my fav apps. CFR won't even work with Ampere, for now at least. The use case for purchasing 3090s has to be stronger than it is in my case... though 24 GB of VRAM would come in handy for some work apps.

I actually went through that calculation a couple of days ago when I had a line on 2x MSI Suprim X 3090s and a 5950X in stock with the outlet here we buy a lot of commercial computer equipment from. The prices were MSRP, no scalping...I'll go through that calculation again in a few months once we know more about Ampere 'Quadro'.

All that said, a few years ago I would have just taken the Suprim X deal. But since I do not really need them thanks to the 2080 Tis, AND vendors such as NVidia (and AMD for that matter) have created such debacles in the marketplace from various angles (including direct sales to miners and ignorance re. bots and scalpers), I am in no hurry to make any decision that isn't based on a solid use case. Screw their communication and marketing departments (_8K gaming_...); eventually, even some of us enthusiasts have a learning curve.


----------



## tps3443

J7SC said:


> There isn't very much a single well running (ie 2100 / 8000) 2080 Ti can't do in the regular app space, never mind two of them with 'select' CFR' on some on my fav apps. CFR won't even work with Ampere - for now at least. The use case for purchasing 3090s has to be stronger than it is in my case...though 24 GB of VRAM would come in handy for some work apps.
> 
> I actually went through that calculation a couple of days ago when I had a line on 2x MSI Suprim X 3090s and a 5950X in stock with the outlet here we buy a lot of commercial computer equipment from. The prices were MSRP, no scalping...I'll go through that calculation again in a few months once we know more about Ampere 'Quadro'.
> 
> All that said, a few years ago I would have just taken the Suprim X deal, but since I do not really need them thanks to the 2080 Tis AND vendors such as NVidia (and AMD for that matter) have created such debacles in the marketplace from various angles (including direct sales to miners, ignorance re.bots and scalpers), I am in no hurry to make any decision which isn't based on a solid use-case basis. Screw their communication and marketing departments (_8K gaming_...), eventually, even some of us enthusiasts have a learning curve.


SLI 3090s and a 5950X is definitely a beast system. But you must look at the bigger picture: your CPU is still a very viable option, man.

I am surprised how well my old 7980XE holds up against a 5950X though, easily matching it or even outpacing it by around 10% in multithreaded loads. I use multithreaded more than single-threaded, so I'm OK with slower IPC. And I think this is the bigger picture: a lot of times newer just isn't better, or upgrading is just a headache and very stressful. And it shouldn't be that way at all. So it's good to recognize that even if we can upgrade, or have the money to upgrade, it doesn't always make sense to. 

This has brought me to the conclusion that it doesn't really make sense to upgrade my system either, even though I could if I wanted to. It's almost like I am waiting for something cool to happen. Maybe Nvidia/AMD will release an awesome new GPU that's actually obtainable. Or maybe a new platform will launch that actually makes older platforms obsolete.


----------



## J7SC

^^yeah, I love the 2950X... it's a good sample (16c/32t, 4.3 GHz all-core at below 1.3V load, single-core at 4.675 GHz) and has a great IMC. Unlike the 24c and 32c TRs of that generation, the 2950X also features the UMA/NUMA switch (essentially making it act like two 8c/16t CPUs). It seems to work very well with my 2080 Tis; I've never had a single issue with that work-play system. Per the spoiler, the 2950X and 7980XE are not that far apart, even before taking the original purchase prices into account. Though for straight gaming / OC, the latter makes more sense.

For now, I don't really need to upgrade anything, even if the new-project-bug has bitten  BTW, thinking of picking up a HP VR - anyone have any feedback re. that in combo with 2080 Tis ?



Spoiler














 

@jura11 ...quick question for you: have you ever tried to run your 3090 with an NVidia driver from before the 450 branch, such as 445.98? I'm asking because that is one of the last with the hidden CFR option, but well before Ampere was released; I wonder whether the 3090 would be recognized by the 44x driver branch


----------



## tps3443

J7SC said:


> ^^yeah I love the 2950X...it's a good sample (16c/32T 4.3 all-core at below 1.3v load, single core at 4.675 Mhz) and has a great IMC. Unlike the 24c and 32c TRs of that genre, the 2950X also features the UMA / NUMA switch (essentially making it like two 8c/16t). It seems to work very well with my 2080 Tis..never had a single issue with that work-play system. Per spoiler, 2950X and 7980XE are not that far apart, even before taking the original purchase prices into account. Though for straight gaming / oc, the latter makes more sense.
> 
> For now, I don't really need to upgrade anything, even if the new-project-bug has bitten  BTW, thinking of picking up a HP VR - anyone have any feedback re. that in combo with 2080 Tis ?
> 
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2472683
> 
> 
> 
> 
> 
> @jura11 ...quick question for you: Have you ever tried to run your 3090 with a NVidia driver before the 450 branch, such as 445.98 ? I am asking as that is one of the last with the hidden CFR option, but well before Ampere was released - I wonder if the 3090 would be recognized or not by the 44x driver branch


When the high-core-count CPUs first started trickling in during 2017, they really were not that useful at the time. The average person looked at a 16-core as overkill or even stupid lol. But here in 2020/2021 we can actually use these CPUs. Most applications have benefited from them for a while now, but games have only just started really using such processors.

Death Stranding is a good example. This game uses the CPU like no other! If you haven't tried it, I really recommend it.

Some say it's poorly optimized; I say it is optimized very well. It averages 56% CPU usage across all 36 of my threads at 2560x1440p, and peaks much higher. 1080p runs 71% average on 36 threads while peaking close to 90% across the cores during gameplay.


----------



## J7SC

I had a couple of good, long gaming sessions with my 2080 Tis... after two years of use, they're still an absolute joy, and there's no sign of degradation in GPU or VRAM clocks, even with up to 380 W per card on the stock BIOS. The cards are very well cooled though, and have probably only seen > 40°C once or twice in two years, in the middle of a heat wave with ambient above the mid-twenties.

Speaking of cooling, I wonder which OEM made the Aorus 2080 Ti Xtr WB 'factory' blocks... Bitspower? Bykski? The pic below shows the block with the covers removed. Anybody have any ideas re. the OEM producer? While I normally use Aqua Computer and/or Heatkiller aftermarket blocks, I'd like to add the Aorus Xtr WB OEM producer to my list for future reference.









Declassified Systems / YT


----------



## Ironcobra

J7SC said:


> 16 GB should be fine. Apart from GPU oc (VRAM ?), how's the system RAM oc'ed ?
> 
> Really not sure as I don't have the issue you describe, but on system RAM, I used to run tFAW @ 16...backing that off a bit can help re. spikes, ditto for tRAS. ...don't want to send anyone on a wild goose chase, but backing off VRAM (if heavily oc'ed) and also tFAW etc in system RAM is worth at least a test to see if the problem persists


RAM makes no difference, OC'd or not; I ordered a new kit from Amazon just to check if the RAM made a difference, and nothing. I'm thinking I need to do a trace log or something, but I have no idea how to read them.


----------



## tps3443

Ironcobra said:


> Ram makes no difference ocd or not, ordered a new kit from amazon just to check if the ram made a difference and nothing. Im thinking I need to do a trace log or something but I have no idea how to read them.


Maybe you are getting high CPU usage, causing your GPU usage to fall off that 98-99%, and this is causing the skips or latency spikes.

Cap the frame rate to 10-15 FPS below your average so the game will use less CPU, or just increase the graphics settings some.

Maybe the 6/12 is seeing some very high usage? I have 18/36 threads at 4.8GHz with DDR4 4000 CL15-15-15 in quad channel, and this game uses some CPU for sure. The game is butter smooth, but there are times when it asks for a lot of CPU! All threads go up to like 50-60% during heavy action; it does remain smooth though.

Also, are you sure it's not the autosave feature? That's the only time I see a latency spike.
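The capping rule of thumb above is simple enough to spell out as code (a sketch; the function name and the 30 FPS floor are my own choices, not from the thread):

```python
def suggested_fps_cap(avg_fps: float, headroom: float = 12.5) -> int:
    """Cap 10-15 FPS below your measured average (midpoint 12.5 by default)
    so the CPU isn't pegged trying to feed the GPU at its limit."""
    return max(30, int(avg_fps - headroom))  # never cap below 30 FPS

print(suggested_fps_cap(75))  # e.g. a 75 FPS average → cap at 62
```

The point of the headroom is that a cap slightly under your average keeps frametimes flat instead of letting the CPU spike trying to chase the uncapped maximum.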


----------



## Ironcobra

tps3443 said:


> Maybe you are getting high CPU usage. Causing your GPU usage to fall off of that 98-99% and this is causing the skips or latency spikes.
> 
> Cap the frame rate to 10-15FPS below your average so the game will use less cpu usage. Or just increase the graphics some.
> 
> Maybe 6/12 is getting some very high usage? I have 18/36 threads at 4.8Ghz with DDR4 4000 CL15-15-15 in quad channel, and this game uses some CPU for sure. The game
> is butter smooth, but there are times when it ask for a lot of CPU! All threads go up to like 50-60% during heavy action, it does remain smooth Though.
> 
> Also you sure it’s not the auto save feature?That’s the only time I see a latency spike.


Definitely an autosave thing; I can reproduce it every time with F5. Seems like it's an issue across all platforms.


----------



## J7SC

Ironcobra said:


> Definitely a autosave thing, I can reproduce it every time with F5, seems like its an issue across all platforms.


Autosave in CP '77 was an issue, though I thought a recent hotfix was supposed to address that. I mostly do manual saves in CP '77 and haven't noticed any issues with auto or manual saves - yet.


----------



## Imprezzion

A weird thing I'm seeing in Cyberpunk is a surprisingly low power draw percentage. GPU load is a full 99% all the time, but power is very spiky and jumps around from like 55% to 80% constantly.

In, for example, Division 2 and World of Tanks, power draw is pretty much flat at 85-88% all the time.

Now, my performance in Cyberpunk is fine. I run the 1.06 patch with several mods: ImprovedRT, Cyber Engine Tweaks, a DLSS mod that makes reflections in mirrors better, and quite a heavy ReShade preset I made myself. I run the game itself at 1080p, all maxed out with Psycho RT and DLSS Quality, and still get 55-80 FPS, but the low power draw also happens when I disable all the mods and run vanilla with Ultra RT.
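Worth remembering that those percentages are relative to the card's power limit; a quick conversion (assuming a hypothetical 320 W limit, so substitute whatever your BIOS actually allows):

```python
def pct_to_watts(pct, power_limit_w=320.0):
    """Convert a monitoring readout's power-draw % to watts.
    The 320 W default is a placeholder; use your own BIOS's limit."""
    return pct / 100.0 * power_limit_w

# 55-80% of a 320 W limit is roughly 176-256 W: full GPU 'load' with
# modest power draw is plausible when shaders stall on RT or memory work.
print(pct_to_watts(55), pct_to_watts(80))
```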


----------



## tps3443

Hey guys, I adjusted my radiator fans and set up push/pull on both of them. Check this out! I also have a new water pump coming and a (3rd) 360mm radiator to add to my custom loop. I hope to improve upon this even more.

17,600 Time Spy graphics!!

The only video card that you can LITERALLY "squeeze the future out of" lol


----------



## gfunkernaught

@tps3443
Nice!

Can you do a timespy extreme run?


----------



## tps3443

gfunkernaught said:


> @tps3443
> Nice!
> 
> Can you do a timespy extreme run?


Yeah, let me get my 2nd shunt soldered back on first. I am only running the back shunt, which works great for just about anything, but Time Spy Extreme really needs both shunts as it'll exceed my 456 watt limit.
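For anyone following along, the way a stacked shunt raises the effective limit is just parallel-resistor math. A quick sketch with hypothetical values (a 5 mΩ stock shunt with an 8 mΩ resistor stacked on top; not necessarily the exact parts used above):

```python
def parallel(r1_ohm, r2_ohm):
    """Resistance of two resistors in parallel."""
    return r1_ohm * r2_ohm / (r1_ohm + r2_ohm)

def effective_limit_w(bios_limit_w, stock_shunt_ohm, stacked_ohm):
    """The card senses current as a voltage drop across the shunt.
    Lowering the shunt resistance makes it under-read, so the BIOS
    power limit is only reached at a proportionally higher real draw."""
    r_new = parallel(stock_shunt_ohm, stacked_ohm)
    return bios_limit_w * stock_shunt_ohm / r_new

# e.g. a 380 W BIOS limit with 8 mOhm stacked on a 5 mOhm shunt:
# effective_limit_w(380, 0.005, 0.008) -> about 617 W of real headroom
```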


----------



## J7SC

I had posted on Cyberpunk 2077 via TR with UMA vs NUMA mode before...with their Hotfixes now, UMA (left column) has better CPU core distribution across 16 c / 32 t, but the improvements are not overly dramatic. Interestingly, Hotfix also seems to have shifted more emphasis onto the higher-numbered cores. FYI, I play at 4K RTX max / DLSS quality anyhow so CPU is less important, but still - a step forward.

2080 Ti on the other hand never went below 98% with those settings 












*EDIT* FYI: New Nvidia driver out 461.09 WHQL - performance seems fine, plus it apparently has a few more security enhancements


----------



## tps3443

J7SC said:


> I had posted on Cyberpunk 2077 via TR with UMA vs NUMA mode before...with their Hotfixes now, UMA (left column) has better CPU core distribution across 16 c / 32 t, but the improvements are not overly dramatic. Interestingly, Hotfix also seems to have shifted more emphasis onto the higher-numbered cores. FYI, I play at 4K RTX max / DLSS quality anyhow so CPU is less important, but still - a step forward.
> 
> 2080 Ti on the other hand never went below 98% with those settings
> 
> 
> View attachment 2473291
> 
> 
> 
> *EDIT* FYI: New Nvidia driver out 461.09 WHQL - performance seems fine, plus it apparently has a few more security enhancements


That is some really strange CPU usage. It doesn’t use the logical processors?


----------



## J7SC

tps3443 said:


> That is some really strange CPU usage. It doesn’t use the logical processors?


...which of the three columns are you referring to ?


----------



## gfunkernaught

tps3443 said:


> Yeah, let me get my 2nd shunt soldered back on first. I am only running the back shunt which works great for just about anything. But Timespy extreme really needs both shunts as it’ll exceed my 456 watt limit.


Yeah man, no rush; dismantling/draining the loop can be time-consuming. I actually put too much paste on my block after the shunt mod, so I'm probably going to do the same and put on the 2nd shunt. What paste do you use, btw?


----------



## tps3443

gfunkernaught said:


> Yea man no rush, dismantling/draining the loop can be time consuming. I actually put too much paste on my block since the shunt mod, so I'm prob going to do the same, putting on the 2nd shunt. What paste do you use btw?


I am using that Kingpin KPx paste. It works really well.


----------



## Imprezzion

tps3443 said:


> I am using that kingpin kpx paste. It works really well.


How does KPX compare to PK3, MX4 and Kryonaut, which are basically the current top three easily available pastes? (Not counting IC Diamond, as it's a pain to apply/remove and hard to get.)

I've almost run out of PK3 and the big 30-gram syringes aren't available anymore, so I need an alternative lol.

I mean, I can't keep lapping and Liquid Metalling everything haha 😂


----------



## edhutner

Guys, I sold my 2080 Ti and upgraded to an MSI 3080 Suprim X.
I still have my Heatkiller IV (for the reference 2080 Ti) with backplate; if anyone in Europe is interested, please PM me.
This waterblock has been doing a great job on my EVGA XC 2080 Ti.

P.S. I don't have privileges to post in the marketplace, sorry if I'm breaking rules...


----------



## gfunkernaught

Has anyone ever messed with the temp curve in AB? I can't tell if raising that line to the top makes a difference. Currently I'm running the Bright Memory bench on a loop, and the clock is running at 2145MHz (2122MHz according to ThermSpy). Temps are 38-39C. I ran into an anomaly before when running Time Spy Extreme GT2 on a loop. It ran my set clock of 2145MHz (again, 2120-2129MHz according to ThermSpy) until the temps hit 41C. Then the clock dropped to 2130MHz (2115-ish MHz), but when it dropped to that bin, the power limit was also in effect, varying the clock. At one point the clock landed on 2115MHz but at a voltage of 1050mv, and then it crashed; the whole time before that, the voltage was 1093mv. The highest temp during that run was 42C.
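Those numbers line up with Turing's boost clock moving in 15 MHz steps; a simplified sketch of the binning (a toy model, not NVIDIA's actual GPU Boost algorithm):

```python
BIN_MHZ = 15  # GPU Boost moves the core clock in 15 MHz steps

def snap_to_bin(clock_mhz):
    """Round a requested clock down to the nearest boost bin."""
    return (clock_mhz // BIN_MHZ) * BIN_MHZ

def after_drops(clock_mhz, n_drops):
    """Each temperature or power-limit event sheds whole bins."""
    return snap_to_bin(clock_mhz) - n_drops * BIN_MHZ

# A 2145 MHz clock shedding one bin at a temp threshold lands on
# 2130 MHz, and a second drop lands on 2115 MHz.
```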


----------



## J7SC

Imprezzion said:


> How does KPX compare to PK3, MX4 and Kryonaut which are basically the current top 3 easily available pastes? (Not counting IC Diamond as it's a pain to apply / remove and hard to get).
> 
> I almost ran out of PK3 and the big 30 gram syringes aren't available anymore* so I need an alternative lol.
> 
> I mean, I can't keep lapping and Liquid Metalling everything haha* 😂


...I used to use LM on CPUs and GPUs as well, glad those days are (mostly) over. With LM on a GPU, I built up a barrier of MX4 in the GPU groove (after covering exposed PCB bits with nail polish) as an additional safety. Re: alternatives, >> *this* is a great and thorough series by THW (also covers GPUs)


----------



## Medizinmann

Imprezzion said:


> How does KPX compare to PK3, MX4 and Kryonaut which are basically the current top 3 easily available pastes? (Not counting IC Diamond as it's a pain to apply / remove and hard to get).


High-end thermal paste comparison - GC-Extreme VS Kryonaut VS KPx - YouTube



> I almost ran out of PK3 and the big 30 gram syringes aren't available anymore so I need an alternative lol.


You could try Kryonaut Extreme...
Thermal Grizzly High Performance Cooling Solutions - Kryonaut Extreme (thermal-grizzly.com)
New Best Thermal Paste? Thermal Grizzly Kryonaut Extreme - Overview and Thoughts - YouTube



> I mean, I can't keep lapping and Liquid Metalling everything haha 😂


I used LM on my CPU and GPU - and a lot of conformal coat around them...

Best regards,
Medizinmann


----------



## Imprezzion

The CPU is LM, yeah. I lapped it flat on a glass pane to a 3000-grit mirror finish, down to bare copper, same with the waterblock, so that needs LM.

The GPU I haven't done with LM yet, since I don't have any conformal coating anymore and temps are "fine". But with the amount of systems I build and maintain, from local businesses to friends' systems, I go through paste fast. And I'm not putting LM on a bunch of i5 4460 stock-cooler workstations lol.


----------



## tps3443

Imprezzion said:


> How does KPX compare to PK3, MX4 and Kryonaut which are basically the current top 3 easily available pastes? (Not counting IC Diamond as it's a pain to apply / remove and hard to get).
> 
> I almost ran out of PK3 and the big 30 gram syringes aren't available anymore so I need an alternative lol.
> 
> I mean, I can't keep lapping and Liquid Metalling everything haha 😂


KPX spreads easily and has a much higher thermal conductivity than the thermal paste options that you listed.

Even just Thermalright TF8 has a higher thermal conductivity, and it's fairly cheap too.

I used to use liquid metal on the IHS all the time, but I quit doing it.
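For context, the manufacturer-rated conductivities roughly back this up (spec-sheet numbers as I recall them; each vendor rates with its own method, so treat these as a rough ranking, not a measurement):

```python
# Vendor-quoted thermal conductivity in W/(m*K); rough guide only,
# since the figures are not measured on a common test rig.
rated_w_mk = {
    "Kingpin KPx": 18.0,
    "Thermalright TF8": 13.8,
    "Thermal Grizzly Kryonaut": 12.5,
    "Arctic MX-4": 8.5,
}

for paste, k in sorted(rated_w_mk.items(), key=lambda kv: -kv[1]):
    print(f"{paste}: {k} W/mK")
```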


----------



## acoustic

I picked up some KPx because you guys regard it so highly. Going to repaste my 9900K (currently has AS5), and I'll likely tear down my 3080 and repaste that as well. When I put the Hybrid cooler on my 3080 FTW3, I didn't have the KPx yet and forgot to bring my Kryonaut with me, so I ended up using the paste that was pre-installed on the cooler.

Temps are great, so I doubt it's much of an issue, but I don't know what kind of paste it was, and because I'm a psycho who loves wasting his time for 1-2C gains... I might do it today.


----------



## Medizinmann

tps3443 said:


> KPX spreads easy and has a much higher thermal conductivity than those thermal paste options that you listed.


But availability in Europe is, let's say, bad to nonexistent...



> Even just Thermalright TF8 has a higher thermal conductivity and it’s fairly cheap too.


Availability in Europe isn't great either (much better than KPX though), and pricing is okay, but Kryonaut is cheaper, readily available and, if we trust the tests, just as good or even better.
Thermal Paste Round-up: CPU Water Cooling (tomshardware.com)
High-end thermal paste comparison - GC-Extreme VS Kryonaut VS KPx - YouTube

Best regards,
Medizinmann


----------



## jura11

Medizinmann said:


> But availability in Europe is - let's say bad or none...
> 
> 
> 
> Availabilty in Europe isn't great either (much better than KPX though) - and pricing is okay, but Kryonaut is cheaper, readily available and if we trusts the tests - just as good or even better.
> Thermal Paste Round-up: CPU Water Cooling (tomshardware.com)
> High-end thermal paste comparison - GC-Extreme VS Kryonaut VS KPx - YouTube
> 
> Best regards,
> Medizinmann


Hi there 

You can always order Thermalright TFX or ZF-EX through AliExpress; they're great TIMs, and I use them on my RTX 2080Ti's and RTX 3090 as well

Hope this helps 

Thanks, Jura


----------



## Falkentyne

Medizinmann said:


> High-end thermal paste comparison - GC-Extreme VS Kryonaut VS KPx - YouTube
> 
> 
> 
> You could try Kryonaut extreme...
> Thermal Grizzly High Performance Cooling Solutions - Kryonaut Extreme (thermal-grizzly.com)
> New Best Thermal Paste? Thermal Grizzly Kryonaut Extreme - Overview and Thoughts - YouTube
> 
> 
> 
> I used LM on my CPU and GPU - and alot of conformal coat around them...
> 
> Best regards,
> Medizinmann


Thermalright TFX (or the clone Thermagic TF-EX), or the almost-as-good TF8, are the best non-LM pastes for GPUs, especially ones with very convex cores like Ampere. I assume Turing is flatter (but I don't know for sure).


----------



## tryout1

How is the viscosity of these pastes? I need a new paste anyway, since I'm just waiting for my Heatkiller IV, and I'm thinking of trying one on my Clevo notebook too, because Kryonaut pumps out pretty fast on the latter.


----------



## Falkentyne

tryout1 said:


> How is the viscosity of these pastes? I need a new paste anyways cause i just wait for my Heatkiller IV and thinking about my clevo notebook i'm just thinking of trying cause kryonaut pumps out pretty fast on the latter.


TFX is about as thick as Arctic Ceramique. It's way thicker than Kryonaut. It's also way more stable long term than Kryonaut, making it much better on laptops.


----------



## J7SC

tryout1 said:


> How is the viscosity of these pastes? I need a new paste anyways cause i just wait for my Heatkiller IV and thinking about my clevo notebook i'm just thinking of trying cause kryonaut pumps out pretty fast on the latter.


The 'stickiest' thermal paste I have used is Gelid GC-Extreme - just a bit harder to apply than my usual stand-bys such as MX4 and Noctua NH


----------



## tryout1

OK, will try the GC-Extreme then; that's at least one I can get for testing on amazon.de. Kryonaut works fine if there is enough pressure, but on the notebook, yeah, there are better solutions. Hope the Heatkiller IV rocks on my new 2080 Ti; after the whole EU pricing and availability debacle I got one for 400€, a Gigabyte Gaming OC, which seems to run 2040/2055 core at 1.043v and 8200-8400 mem. I already applied the KFA2 380W BIOS on it; for sure I'm getting 50-60MHz alone out of it due to temps.


----------



## gfunkernaught

Anyone interested in buying a used EKWB Vector full coverage water block for a reference 2080 Ti?


----------



## tps3443

acoustic said:


> I picked up some KPx because you guys have so highly regarded it. Going to repaste my 9900K (currently has AS5) and I'll likely tear down my 3080 and repaste that as well. When I put the Hybrid cooler on my 3080 FTW3, I didn't have the KPx yet and forgot to bring my Kryonaut with me. I ended up using the paste that was pre-installed on the cooler.
> 
> Temps are great, so I doubt it's much of an issue, but the fact that I don't know what kind of paste it was, and because I'm a psycho who loves wasting his time for 1-2c gains .. I might do it today.


I wouldn't repaste your 3080; they do an amazing job nowadays with thermal pad and paste application from the factory.


When I repasted my 2080Ti FE (when it was air cooled), I was amazed at how well they did from the factory after taking it apart: the thermal pads looked just like my super expensive Fujipoly pads, and the paste looked similar to Kryonaut. This was a brand new 2080Ti FE (late model with Samsung GDDR6) when I took it apart. After I repasted with Thermalright TFX and put it back together, I noticed no temperature improvement at all.

^ Video cards didn't use to be like this from the factory. Props to Nvidia for doing an amazing job at padding and pasting the FE 2080Ti model. I was very impressed!


My card is watercooled now and runs at like 38C maximum, but they still did a good job from the factory on my FE card.


----------



## acoustic

tps3443 said:


> I wouldn’t repaste your 3080 they do an amazing job now days with thermal pad and paste application from the factory.
> 
> 
> When I repasted my 2080Ti FE “When it was air cooled” I was amazed at how well they did from the factory after taking it apart, the thermal pads look just like my super expensive Fujipoly pads, and the paste looked similar to Kryonaut. This was a brand new 2080Ti FE “Late model with Samsung GDDR6” when I took it apart. After I repasted with Thermalright TFX and put it back together, I noticed no temperature improvement at all.
> 
> ^ Video cards did not use to be like this from the factory. Props to nvidia for doing an amazing job at padding and pasting the FE 2080Ti model. I was very impressed!
> 
> 
> My card is watercooled now and runs like 38C maximum. but still they did a good job from the factory on my FE card.


Just to clarify, my card is a 3080 FTW3 Ultra (air cooled), but I bought the Hybrid cooler that they sell separately and installed it myself. When I realized I forgot to bring my Kryonaut, I used the solid-looking stuff they pre-install on the cooler side; I'm not sure what paste that is, honestly, it was pink and very flat. I was impressed with the paste application when I took the card apart, though. Just like on my 2080Ti FTW3 Ultra, when I took that cooler off to put the Hybrid on, it was very even and not over-pasted and clumped up.


----------



## Carillo

Hey. Does anyone have, or know where i can find the 2080 TI Strix 600 watt bios ?


----------



## tps3443

Carillo said:


> Hey. Does anyone have, or know where i can find the 2080 TI Strix 600 watt bios ?


The Strix 1000 watt BIOS is on page 1.


----------



## Carillo

Thanks, but that’s not what I’m asking for.


----------



## jura11

Carillo said:


> Hey. Does anyone have, or know where i can find the 2080 TI Strix 600 watt bios ?


I don't think there is a 600W RTX 2080Ti Strix BIOS. I only know about the Matrix, which is 360W, the OC White, which I think is similar, and the XOC BIOS, which is 1000W

Hope this helps 

Thanks, Jura


----------



## Carillo

jura11 said:


> I don't think there is 600W RTX 2080Ti Strix BIOS, I know only about Matrix which is 360W, OC White I think is similar and XOC BIOS which is 1000W
> 
> Hope this helps
> 
> Thanks, Jura


There is a broken link on page 1 for a 600 watt Strix BIOS, so I assumed it exists?


----------



## jura11

Carillo said:


> There is a broken link on page 1 for a 600 watt Strix bios. So i assumed it exist ?


I really don't know; I never tried or had that 600W BIOS. I've only run the Matrix and XOC BIOS on my Asus RTX 2080Ti Strix

Hope this helps 

Thanks, Jura


----------



## J7SC

@Carillo Further to what jura11 wrote, IF your 2080 Ti has 3x 8-pin PCIe and very good (water) cooling, the 520 W BIOS from the 2080 Ti Kingpin might work...not sure about I/O, fans etc. though.


----------



## JustinThyme

tps3443 said:


> My 2080Ti runs 2,115Mhz in Cyberpunk stable forever. 2,130Mhz just isnt stable at all. And I have totally given up on trying to achieve anymore than 2,115Mhz. I can run other games at higher frequencies, but I don’t even worry with that either now. My system is stable through absolutely anything. If my life depended on it, my system is stable lol. And that’s more important that an extra 15Mhz. Crashes really hurt the experience. So chasing extra MHz is just silly to me. It’s really become the definition of insanity, I am trying the same thing expecting different results. So I’m done with all that.
> 
> Unless I get a water chiller, I will revisit that extreme OC. But other than that, 2,115Mhz is the absolute maximum. And it’ll run that for however long you want it too In anything possible.


What cards are you running? I get 2160 sustained out of a pair of Strix O11G OC cards under water, no chiller, and they top out at 40C. That's with HK IV blocks with passive backplates.

I tried the Matrix XOC and all I got were higher temps. 2160 is my max stable with either. Memory at +800.


----------



## tps3443

JustinThyme said:


> What cards are you running? I get 2160 sustained out of a pair of Stix O11G OC cards under water, no chiller and they top out at 40C. That’s with HK IV blocks with passive back plates.
> 
> I tried the matrix XOC and all I got were higher temps. 2160 is my max stable with either. Memory at +800.


Silicon just varies within TU102.

I run 2,130MHz/16,350 now on my FE, fully sustained, never downclocking, at 38C max in Cyberpunk 2077. I did a little tweaking with my water loop, adding push/pull. My card is at its limits, but it's extremely stable in anything imaginable.

Any more than 2,130MHz and it'll crash within minutes in Cyberpunk. Sure, I can run 2,175MHz sustained in Time Spy and even yield a 17,600 graphics score, but that frequency just isn't stable for hours on end during gaming.


I am getting my (3rd) 360mm radiator today, and an EK D5 pump. So I am hoping increasing water pressure while adding another radiator helps lower temps.

If I can keep my card under the 35C range, then I can run higher than 2,130MHz sustained.


I am using a $20 rattle-box water pump from Amazon currently. I have been running 2,130/16,350 daily for about a week now, no issues at all.


----------



## Imprezzion

Mine does exactly the same. It does 2100MHz all day in any game but Cyberpunk; it cannot go over 2085MHz at all in that game regardless of temps. Mine normally runs around 38-39C as well if I max out my fans on the rad, but it doesn't do any higher clocks at 39C than it can at 47C in that game.

I usually just let it run a super quiet fan profile, which lets it run 44-47C depending on the load, at either 2085 or 2070MHz with 7900 VRAM. Those extra 15MHz don't bother me. My card is a bad bin anyway; both memory and core are nothing special for an A chip. (Benches 2175 / 8100 @ 1.125v with mad artifacts halfway down the run.)


----------



## JustinThyme

tps3443 said:


> Silicon just varies within TU102.
> 
> I run 2,130Mhz/16,350 now on my FE, full sustained never down clocks at 38C max in cyberpunk 2077. I did a little tweaking with my water loop adding push/pull. My card is at the limits, but it’s extremely stable in anything imaginable.
> 
> Anymore than 2,130Mhz and it’ll crash within minutes in cyberpunk. Sure, I can run 2,175Mhz sustained in timespy and even yield 17,600 graphics score. But the frequency just isn’t stable for hours on in during gaming.
> 
> 
> I am getting my (3rd) 360MM radiator today, and ek D5 pump. So I am hoping increasing water pressure while adding another radiator helps lowers temps.
> 
> If I can keep my card under 35C range then I can run higher than 2,130Mhz sustained.
> 
> 
> I am using a $20 dollar rattle box water pump from Amazon currently. I have been running “2,130/16,350” for about a week now daily. no issues at all.


Yeah, the lottery always comes into play; that's why I buy custom cards. They bin their chips, and the better ones go into the OC cards and the lower ones into the more budget-oriented cards. FE is always a roll of the dice. The first throttle comes in at 45C, so as long as you stay under that it's all good. Running 2100+ and staying under 35C is a pipe dream unless you either leave your PC on the balcony in the winter or have a chiller, and then you have to watch out for and prep everything against condensation. The last option is keeping your heat off or very low in the winter.

A 15C delta is a good cooler; a 10-12C delta is an excellent cooler. So if you have excellent cooling with an overkill loop that can keep the liquid temp under 25C, you might have a shot. Mine have a delta of 10C, with my ambient staying around 23-24C and my liquid topping out at 30C, but that's with a GTR 420, GTS 360, 480XE and 360SE along with 3 D5 pumps in serial pumping 620L/hr. Fans ramp with an Aquaero controller, with the high end of my curve being 30C; as I get close to that, the fans start ramping up. If I'm slamming both cards and the 18-core CPU, it will be loud as hell, as all 21 fans will be running full bore.

I'm about to yank 2 rads and rearrange, leaving the 2 best in the case and adding a MO-RA 420 that has another 2 D5 pumps mounted to the side of it. Don't know how fast I'll be running them, and don't know how much restriction that monster is going to add. So long as I can keep my current flow rate I'm happy; that equates to around 3 Gal/min. All pumps are PWM controlled, so easy enough to adjust. Some people ramp their pumps up and down with temps, whereas I tend to keep mine at a steady rate, just offset from each other a little for acoustic noise cancellation. Takes some playing with, but when you get them offset where they are happy, they are near silent running close to full bore. Right now my lowest is set at 90%, one at 96%, and the first in the loop runs 100%. Unless you stick your head up in my case and turn everything else off, you can't hear them.
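Quick sanity check on the flow figure above (assuming US gallons):

```python
L_PER_US_GAL = 3.785  # liters per US gallon

def lph_to_gpm(liters_per_hour):
    """Convert a pump's L/hr rating to US gallons per minute."""
    return liters_per_hour / L_PER_US_GAL / 60.0

# 620 L/hr works out to about 2.7 GPM, consistent with the
# "around 3 Gal/min" estimate above.
print(round(lph_to_gpm(620), 2))
```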


----------



## tps3443

JustinThyme said:


> Yeah the lottery always comes into play, that’s why I buy custom cards. They bin their chips and the better go into the OC cards and the lower into the more budget oriented. FE is always a roll of the dice. The first throttle comes in at 45C so as long as you stay under that it’s all good. Running 2100+ and staying under 35C is a pipe dream unless you either leave your PC on the balcony in the winter or have a chiller then you have to watch out for and prep everything for condensation. Or last option is you keep your heat off or very low in the winter. 15C delta is a good cooler 10-12C delta is an excellent cooler. So if you have excellent cooling with an overkill loop that can keep the liquid temp under 25C you might have a shot. Mine have a delta of 10C with my ambient staying around 23-24C and my liquid tops out at 30C but that’s with a GTR 420, GTS 360, 480XE and 360SE along with 3 D5 pumps in serial pumping 620L/hr. Fans ramp with aquaero controller with my high end of the curve being 30C. As I get close to the the fans start ramping up. If I’m slamming both cards and 18 core CPU it will be loud as hell as all 21 fans will be running full bore. I’m about to yank 2 rads and rearrange leaving the 2 best in the case and adding a MO-RA 420 that has another 2 D5 pumps mounted to the side of it. Don’t know how fast I’ll be running them but don’t know how much restriction that monster is going to add. So long as I can keep my current flow rate I’m happy, that equates to around 3 Gal/min. All pumps are PWM controlled so easy enough to adjust. Some ramp their pumps up and down with temps where I tend to keep mine at a steady rate just offset from each other a little for acoustic noise cancellation. Takes some playing with but when you get them offset where they are happy they are near silent running close to full bore. Right now my lowest is set at 90%, one at 96% and the first in the loop runs 100%. 
Unless you run you head up in my case and turn everything else off you can’t hear them.


That's a crazy water loop you have! I am getting there. I just got started in custom loop stuff a few months back, and I am slowly adding and adding and spending and spending! But it's very fun!! So I see why you guys have all of this stuff lol. You just started way before me haha.

Anyways, I am certainly impressed by my GPU temperature performance. My custom loop cannot hold a candle to yours for sure, and my fans are 1,650RPM all push/pull, so they make some noise too, that's for sure! But I pretty much run with no door lately, and with my 7980XE running default speeds, my GPU was managing around 36C last night during testing. When I set my 7980XE back to 4.8GHz, my GPU creeps back up to 38C, maybe hitting 39C for a millisecond and going back to 38C.

However, I did lightly lap my 2080Ti's die with a small piece of flat broken glass with lapping film and diamond compound when I first set this GPU up on an EKWB waterblock, so I think that has helped astronomically with its load temperatures. I installed the waterblock probably about four times alone trying to get my temps right, adjusting thermal pad thickness and trying to get as perfect die contact as possible.

I am considering doing my 7980XE die as well. But because I use liquid metal on that, I am worried about the LM making its way into the actual silicon over time. I am still considering it, I dunno.


----------



## J7SC

^...at the end of the day, good cooling has its rewards - especially w/ 2x 2080 Ti playing Flight Simulator 2020 _for hours_


----------



## JustinThyme

I'm too chicken sheet to lap dies. For a 7980XE, if it's delidded and the pigeon poop replaced with Conductonaut, you are good. You can overdo a loop with too many rads, making them work against each other so you end up pulling hot air from one into the other. I had a difficult time when I first got my cards and had to wait for decent blocks. A certain well-known retailer talked me into buying Barrow junk. Made like crap: warped, sharp edges with burrs in the screw holes, and the water terminals were positioned so far left that an NVLink bridge wouldn't fit on. Pulled them back off, and the only thing making contact was the die. Memory wasn't even touching, secondary VRMs not touching, and about half of the primary not touching. Sent that garbage back and pushed hard enough that they paid for return shipping and didn't charge me a restocking fee. That rep also told a lie, saying another manufacturer bought their designs from Barrow, which started a big old sheet show. I ran the cards on air for a while but had to leave the cover off, as they dump the heat into the case, and that was making my liquid temp go up more than when I added them to the loop. I ran Phanteks blocks at first and was surprised at how well they worked. The only caveat was that the original backplate had to be used, which actually is worse, as it makes no contact and just holds heat in. Finally Heatkiller came out with blocks for the cards, about the same time as EK did. I chose the Heatkiller as they have excellent thermals and IMO look a lot better than what EK has to offer. It's that crossover to get to the VRMs on the EK: another plate, not very well finished, and a bunch of screws sticking out the front.

Custom loops will run you into the poor house. My biggest piece of advice is to do it right the first time. Pay extra for the good fittings and rads. My choice of fittings is Bitspower; they are pricier than most but made to last forever. Barrow is just junk, and while a good bit of EK products are decent, their fittings suck! I don't like the way they try to push them on you when buying a block, with the clause that only their fittings are guaranteed to work with their blocks. G1/4 is G1/4; that part is seldom an issue. It's rotary fittings and compression fittings and cheap nickel-plated anything where the plating flakes off. I have some Bitspower fittings more than 10 years old and still as new, no issues whatsoever. Monsoon makes some cool looking stuff but it's harder to install, and if you don't get them just right.....spewing liquid.


----------



## jura11

JustinThyme said:


> What cards are you running? I get 2160 sustained out of a pair of Stix O11G OC cards under water, no chiller and they top out at 40C. That’s with HK IV blocks with passive back plates.
> 
> I tried the matrix XOC and all I got were higher temps. 2160 is my max stable with either. Memory at +800.


Hi there

In my case, the Matrix BIOS is the best BIOS for the Asus RTX 2080Ti Strix; I have the XOC BIOS flashed on the Quiet switch and the Matrix BIOS on the Performance switch

Temperatures in my case are great, and in most benchmarks the RTX 2080Ti Strix will hold 2205MHz or 2220MHz. For gaming, 2175MHz or 2190MHz is the absolute max it can pass and be rock stable. In RT games I'm running a 2130-2145MHz OC, but in non-RT games like Witcher 3 and RDR2 it holds 2175-2190MHz and 900MHz on VRAM with temperatures topping out at 36-38°C; I've seen max temperatures of 42°C, but that's in very hot weather hahaha

The current RTX 3090 GamingPro with a Bykski waterblock is running the XOC BIOS with the power limit capped at 65%, and temperatures are 38°C max

My PC is used for rendering most of the time; I would say gaming is secondary

Hope this helps 

Thanks, Jura


----------



## J7SC

jura11 said:


> Hi there
> 
> In my case Matrix BIOS is best BIOS for Asus RTX 2080Ti Strix, I have flashed on Quiet switch XOC BIOS and on performance switch Matrix BIOS
> 
> Temperatures in my case are great and in most benchmarks RTX 2080Ti Strix will hold 2205MHz or 2220MHz, for gaming 2175MHz or 2190MHz is absolutely max which it can pass and is rock stable, just in RT games I'm running 2130-2145MHz OC but in non RT games like Witcher 3, RDR2 it holds 2175-2190MHz and 900MHz on VRAM with temperatures topping out at 36-38°C and 9, seen max temperatures 42°C that's in very hot weather hahaha
> 
> Current RTX 3090 GamingPro with Bykski waterblock is running XOC BIOS with power limit capped at 65% and temperatures are in 38°C as max
> 
> My PC is used for rendering most of the time and I would say gaming is secondary
> 
> Hope this helps
> 
> Thanks, Jura


...if you ever need even more cooling, der8auer just got his two-stage / cascade phase cooler for CPU and/or GPU...up to -110 C. Not something you want to sit next to for hours on end, or even plug into your regular wall outlet.

I did run a smaller single-stage phase cooler ( -50 C) for several years on a 4790 K and selectively on a 780 Ti, sound was almost bearable and performance outstanding.



Spoiler


----------



## tps3443

I installed my new pump today. I finally bought a legit water pump. Anyways, the company I bought my parts from decided not to send my new tubing, my extra fittings, and other stuff, so I couldn’t install my 3rd 360mm in the bottom of my case today. I need my PC up and running daily, so I couldn’t wait.

This case isn’t meant to hold a 360 radiator in the bottom, but I am gonna cut holes in the PSU shroud for mounting it. It certainly has enough room down there for it.

I am still thinking how I will mount the fans. I could exhaust them in to the PSU shroud, or maybe let the bottom 360mm intake air from inside of the PSU shroud.

The D5 has dropped temps about 1-2°C, possibly more? I have not tested anything properly yet. You can see my tiny little old pump beside it in the photos! “It was a joke”! haha. That’s actually what I used. So maybe a 2-3°C temp drop from the pump upgrade alone is not impossible here at all.

All of my parts are decent though: (2) EKWB 360 CoolStream SE radiators, EKWB black tubing, an EKWB D5 pump, an EKWB Vector block for the 2080 Ti, an Alphacool CPU block (“replacing soon”), some little cheap FreezeMod reservoir (“just something to prime the system”), and these 3/8x5/8 Barrow fittings (“I hate them”, “they seize up together”).

My 7980XE is delidded with LM, and the IHS is lapped and my cold plate is lapped too. The 2080 Ti is running 8 mOhm shunts stacked and soldered in place with a 380 watt Galax BIOS, so it has plenty of power available to draw. The die is lightly lapped on the 2080 Ti too. 

The CPU is over 3.5 years old, and my GPU is nearing 30 months old. It’s impressive that this system still competes with virtually anything else on the market, besides an RTX 3090 of course. The cost of entry to upgrade is just too steep for me right now. So yeah, I have really grown to love watercooling!

The old days of benchmarking on air cooling were pretty fun, but with watercooling you can not only squeeze out more, but put down “SUSTAINED” performance in games for hours on end. And that’s all the difference in the world for me. I love it.


----------



## thefordmccord

I figured I'd post some results with my card.
EVGA 2080ti Black Blower Card
Non-A BIOS
Flashed with Palit 310 watt BIOS
Shunt Modded
Barrow block
Core 2,130 MHz 
Mem 8132 MHz

Firestrike: 33,322
Time Spy: 16,698
Port Royal: 10,455

(All runs on AMD Ryzen 9 5900X, NVIDIA GeForce RTX 2080 Ti, 32768 MB, 64-bit Windows 10; full results on www.3dmark.com)


----------



## tryout1

Just got my Heatkiller IV too and did some straight benching; looks like it helped out a bit for Time Spy.
NVIDIA GeForce RTX 2080 Ti video card benchmark result - AMD Ryzen 9 3900X, ASUSTeK COMPUTER INC. ROG CROSSHAIR VII HERO (3dmark.com) 
Time Spy GPU Score 17,161 @ 2085-2130 MHz core / 8400 MHz mem

I think I have to shunt mod this bad boy


----------



## J7SC

tryout1 said:


> Just got my Heatkiller IV too and did just some straight benching, looks like it helped out a bit for Time Spy
> NVIDIA GeForce RTX 2080 Ti video card benchmark result - AMD Ryzen 9 3900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VII HERO (3dmark.com)
> Time Spy GPU Score 17161 @ 2085-2130Mhz Core / 8400 Mhz Mem
> 
> I think i have to shunt mod this bad boy


Nice ! 

FYI, shunting a card 'is a personal thing', but your results are already excellent anyway. With shunting you might gain even higher benchmark scores, but for gaming etc. it is probably not really worth it, IMO. The best thing you can do for a 2080 Ti (great cooling) you seem to have already done with w-cooling and the Heatkiller IV. If anything, you might want to add another rad to get from the reported 43°C in Time Spy down below 38°C, which would get you another 15 MHz.


----------



## jura11

J7SC said:


> ...if you ever need even more cooling  , DerBauer just got his two stage / cascade phase cooler for CPU and/or GPU...up to -110 C. Not something you want to sit next to for hours on end, or even plug into your regular wall outlet.
> 
> I did run a smaller single-stage phase cooler ( -50 C) for several years on a 4790 K and selectively on a 780 Ti, sound was almost bearable and performance outstanding.
> 
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2474084


Yup, I have seen that phase cooling. I don't think I have space for something like that; maybe later on I will go with a chiller, but for now my radiator space is more than enough. 

The last couple of days I have been rendering a big project; power draw from the wall has been 1500-1565W and temperatures on the GPUs have been 36-38°C most of the time. The highest I've seen is 40-41°C on my Strix; on the Zotac RTX 2080 Ti AMP with the FTW3 BIOS I have seen a constant 38°C, and the RTX 3090 has been hovering around 38°C most of the time with one spike to 40°C. 

Fans have mostly been running at 750-850 RPM; only on the MO-RA3 360mm have I been running the fans at 1300 RPM, and I still couldn't hear them at that RPM. 

Hope this helps 

Thanks, Jura


----------



## tryout1

J7SC said:


> Nice !
> 
> FYI, shunting a card 'is a personal thing', but your results are already excellent, anyway. With shunting, you might gain even higher benchmark scores, but in gaming etc, it is probably not really worth it, IMO. The best thing you can do for a 2080 Ti - great cooling - you seem to have already done with w-cooling and Heatkiller IV. If anything, you might want to add another rad to get below the reported 43 C in Time Spy towards below 38 C which would get you another 15 MHz


Yeah I know. I juiced out some more core MHz to get an even higher score, and sadly for the H500P case I'm using I'm already maxed out with 2x 360 EK PEs (running 6x eLoops at 900 rpm). Like you said, for gaming: I just tried 1 hr of Quake 2 RTX at 2085/8200 @ 1v, but my goal is to get the card to just 200-250w power draw because power here is expensive, and the games I play ran flawlessly even on my previous 2080. CP2077 is the only AAA game I play atm, but I have to admit I haven't had this much fun overclocking in a really long time; this card scales extremely well.

NVIDIA GeForce RTX 2080 Ti video card benchmark result - AMD Ryzen 9 3900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VII HERO (3dmark.com)


----------



## tps3443

tryout1 said:


> Yeah i know, i juiced out some more core mhz to get even higher score and sadly for the H500P case i'm using i'm already maxed with 2x 360 EK PE's (running 6x eloops 900rpm), like you said for gaming i just tried 1hr of Quake 2 RTX at 2085/8200 @1v but my goal is to get the card to just 200-250w power draw cause power here is expensive and the games i play ran even flawless on my previous 2080, only CP2077 is the AAA game i play atm but i have to admit i never had this much fun overclocking in a really long time this card scales extremely well.
> 
> NVIDIA GeForce RTX 2080 Ti video card benchmark result - AMD Ryzen 9 3900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VII HERO (3dmark.com)



These 2080 Tis perform amazingly once set up properly. My card is decently efficient in Cyberpunk: I run 1.093 V and it holds 2,130 MHz. I have done a little optimizing on my custom loop, so I am peaking at 37°C max temperatures.


----------



## J7SC

tryout1 said:


> Yeah i know, i juiced out some more core mhz to get even higher score and sadly for the H500P case i'm using i'm already maxed with 2x 360 EK PE's (running 6x eloops 900rpm), like you said for gaming i just tried 1hr of Quake 2 RTX at 2085/8200 @1v but my goal is* to get the card to just 200-250w power draw cause power here is expensive *and the games i play ran even flawless on my previous 2080, only CP2077 is the AAA game i play atm but i have to admit i never had this much fun overclocking in a really long time this card scales extremely well.
> 
> NVIDIA GeForce RTX 2080 Ti video card benchmark result - AMD Ryzen 9 3900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VII HERO (3dmark.com)


Power is more affordable where I live (not to mention derived 96% from renewables / hydro), but with 2x 2080 Tis I still reduce PL and MHz a bit in apps such as Microsoft Flight Sim 2020 (SLI CFR)...I'd rather not have the system at over 1100 W for hours on end, even though the cooling can handle it. Besides, the difference in fps between 2100 and 2175 MHz is really not that pronounced, at least with two cards...btw, pics below are flying through a snowy downtown at early dawn  

As to CP 2077, I have had it in SLI-CFR for some brief gameplay with older CFR drivers, but for now it's too buggy (e.g. when trying to get to the menu, never mind DLSS). The single 2080 Ti clocked high is good enough for 4K / RTX Ultra / DLSS Quality. BTW, CP 2077's light effects still blow me away, e.g. when outside with both direct sunlight and indirect electronic lighting effects.


----------



## tps3443

tryout1 said:


> Just got my Heatkiller IV too and did just some straight benching, looks like it helped out a bit for Time Spy
> NVIDIA GeForce RTX 2080 Ti video card benchmark result - AMD Ryzen 9 3900X,ASUSTeK COMPUTER INC. ROG CROSSHAIR VII HERO (3dmark.com)
> Time Spy GPU Score 17161 @ 2085-2130Mhz Core / 8400 Mhz Mem
> 
> I think i have to shunt mod this bad boy


Once you do it, you won’t drop down to 2,085 MHz in GT2, and games like Metro will just chug on through without hitting a power limit. Just solder the one on the back. 

I can run through Time Spy at 2,175 MHz sustained and it won’t flinch.


----------



## tryout1

J7SC said:


> Power is more affordable where I live (not to mention derived 96% from renewables / hydro), but with 2x 2080 Tis, I still reduce PL and MHz a bit in apps such as Microsoft Flight Sim 2020 (SLI CFR)...I rather not have the system at over 1100 W for hours on end, even though the cooling can handle it. Besides, the difference in fps between 2100 and 2175 is really not that pronounced, at least with two cards...btw, pics below are flying through a snowy downtown at early dawn
> 
> As to CP 2077, I have had at SLI-CFR for some brief game play with older CFR drivers, but for now, it's too buggy (ie when trying to get to the menu, never mind DLSS). The single 2080 Ti clocked high is good enough for 4K / RTX Ultra / DLSS Quality. BTW, CP 2077's light effects still blow me away, i.e. when outside with both direct sun light and indirect electronic lighting effects.


Yeah, CP2077 is one hell of a good-looking game besides the occasional bugs I encountered, and I'm about 95% done with the game incl. sidequests. It even ran quite well on my previous 2080, and like you said, you won't notice 75 MHz on the core that much. I only had to lower clocks to 2070 MHz for 1v, which nets me around 250w; I tried 1.043v at 2130 MHz, which according to my testing didn't even add 1 avg fps, more like 0.5 fps. At WQHD the card doesn't seem to be bandwidth-starved anyway; on the contrary, the 2080 scaled very well with mem, and from 2010 to 2115 (max stable) I only got 1-3 fps at most.
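The trade-off tryout1 describes (~250 W at 2070 MHz / 1 V vs. 2130 MHz at 1.043 V) can be sanity-checked with the usual dynamic-power rule of thumb, P roughly proportional to frequency times voltage squared. A minimal sketch; the 250 W / 2070 MHz / 1.000 V reference point is taken from the post above, everything else is an estimate that ignores static leakage:

```python
# Back-of-envelope CMOS scaling: dynamic power ~ f * V^2.
# Reference point (250 W @ 2070 MHz / 1.000 V) is from the post;
# real cards also have voltage-dependent leakage this ignores.

def scale_power(p_ref_w: float, f_ref_mhz: float, v_ref: float,
                f_mhz: float, v: float) -> float:
    """Estimate power at (f, v) from a measured reference point."""
    return p_ref_w * (f_mhz / f_ref_mhz) * (v / v_ref) ** 2

est = scale_power(250.0, 2070.0, 1.000, 2130.0, 1.043)
print(round(est))   # -> 280
```

By this estimate the 1.043 V point draws roughly 30 W more for under 3% extra clock, which is exactly why the undervolt pays off when electricity is expensive.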



tps3443 said:


> Once you do it, you won’t drop down to 2,085Mhz in GT2. And games like Metro will just chug on through without hitting a power limit. Just solder the one on the back.
> 
> I can run through timespy at 2,175Mhz sustained and it won’t flinch.


Yeah, I'll skip it for now and maybe try the Kingpin BIOS; according to a hwluxx user it should work on reference boards. At 1v I stay a tiny bit below the power throttling point too, which is nice. Hell, if I tell you, my current game will be World of Warcraft again (shoutout to my cousin lol), which ofc works at a 144 fps cap with at most 67% GPU load at WQHD, and RTX in the game seems broken: no GPU load increase, but fps literally gets divided by 2.


----------



## tps3443

tryout1 said:


> Yeah CP2077 is one hell of a good looking game besides the occasional bugs i encountered and i'm about 95% done with the game incl. sidequests. It even ran quite well on my previous 2080 and like you said between 75mhz on core you won't notice that much, i only had to lower clocks to 2070mhz for 1v which nets me around 250w, i tried 1.043 at 2130mhz which according to my testing didn't increase 1 avg fps more like 0.5 fps, at WQHD the card doesn't seem to be bandwidth starved anyways, on the contrary the 2080 scaled very well with mem and after 2010 to 2115 (was max stable) i only got like 1-3 fps at most.
> 
> 
> 
> Yeah i skip for now and maybe try the Kingpin bios, according to a hwluxx user it should work on reference boards, at 1v i stay a tiny bit below power throttling point too which is nice. Hell if i tell you my current game will be World of Warcraft again (shoutout to my cousin lol) which ofc works at 144fps capped with at most 67% GPU Load at WQHD ofc and RTX in the game seems broken, no gpu load increase but fps get's literally divided by 2.


You can only run the “Kingpin XOC BIOS”, but you can’t control voltage at all. It will constantly force 1.112 V and only drop if your card gets too hot.

And at that voltage the power draw is pretty high and your temps will be higher too, all resulting in going backwards with performance.

The only option to have unlimited power with a reference-board 2080 Ti is to run the Galax 380 watt BIOS with the back shunt soldered for a 450W+ power limit, or with both shunts soldered for a 530W+ power limit.
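For anyone wondering why soldering a resistor on top of a shunt raises the limit: the card infers current from the voltage drop across a known sense resistor, so lowering the real resistance makes it under-read power. A rough sketch; the 5 mOhm stock value and 8 mOhm mod value are the figures commonly cited for these cards (the thread's "8 ohm" parts are presumably 8 mOhm), not something verified on this exact PCB:

```python
# Back-of-envelope shunt-mod math. Assumptions: ~5 mOhm stock sense
# shunts, an 8 mOhm resistor stacked in parallel on top (the common mod).

def parallel(r1_mohm: float, r2_mohm: float) -> float:
    """Effective resistance of two resistors in parallel (milliohms)."""
    return r1_mohm * r2_mohm / (r1_mohm + r2_mohm)

def true_power_at_limit(reported_limit_w: float,
                        r_stock_mohm: float = 5.0,
                        r_mod_mohm: float = 8.0) -> float:
    """The firmware converts shunt voltage drop to watts assuming the
    stock resistance, so with a lower real resistance the true draw at
    the reported limit is higher by the ratio R_stock / R_effective."""
    r_eff = parallel(r_stock_mohm, r_mod_mohm)
    return reported_limit_w * r_stock_mohm / r_eff

print(round(parallel(5.0, 8.0), 2))        # effective shunt resistance, mOhm
print(round(true_power_at_limit(380.0)))   # true watts at a "380 W" limit
```

With every sense rail modded this way, a 380 W firmware limit corresponds to roughly 600 W of real draw; the 450W+/530W+ figures above are lower because only one or two of the card's several shunts get the treatment, so the under-reading applies to just those rails.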

The absolute worst that can happen when soldering an 8 mOhm resistor is that your solder may touch nearby components on the 2080 Ti, and then your PC will NOT power on at all because the resistor is grounded. No smoke, no flames. This has only happened to me once, and I easily fixed it.

I have been through it all with (4) 2080 Tis. Either live with power throttling under very heavy loads, or solder the resistors while using the Galax 380 watt BIOS.

Not many applications need more than 380 watts, so you’re usually OK running between 2,100-2,145 MHz. But games like Metro, and some others I can’t think of right now, will suck your power dry and clock the card down to 2,085 MHz or possibly even worse.

I don’t have this issue with the shunts soldered. My card rolls through whatever, even playing Metro at 450W+ power draw from just the GPU.

Also, I advise against LM for shorting the shunts: its consistency changes from day to day, and so your power limit varies. Soldering is safe, reliable, and lasts virtually forever with absolutely zero maintenance.

Hope this helps! I would not attempt to flash a standard Kingpin Bios to your reference board. Be advised.


----------



## J7SC

@tryout1 ...if you don't want to shunt mod (there are pros and cons with that), the 2x 8-pin Galax 380W (OP) seems to be the 'go-to' custom BIOS for most folks here. A step beyond is the Strix 1000W XOC 2x 8-pin BIOS, also in the original OP. Needless to add, extreme caution (apart from great cooling) is required when using an XOC BIOS, as they typically have all kinds of safeties disabled. Depending on your PCB, you might also find some I/O ports and fan control not working right; only one way to find out with your particular model. I agree with tps3443 though that 380W is 'plenty' (my cards' stock BIOS max, btw), but I thought that you wanted to reduce power to 250 W or below? I take it the custom BIOS is just for benching?


----------



## KCDC

@J7SC Did that Glacier swap for both cards a while back. Versus the stock blocks they're pretty much the same temp-wise; I'm not a tester so can't give exact results. I swapped due to a loop corrosion issue, and the stock blocks are not easy to tear down because the screws are under that cover... On top of that, I can take the terminals off a lot more easily instead of re-doing my tubing between the two. You don't suffer from that due to how you have yours configured.

The thermal pads on the stock blocks are huge, at least 2mm. The Glaciers are 1mm, but my Thermal Grizzly 1mm pads were still too thick since they don't squish as easily (got high temps with those), so I had to use the stock pads the Phanteks came with.

Overall, I like these because I can actually clean the blocks when I tear down. Just thought I'd chime in after saying I'd do it a long time ago haha.

Edited typo


----------



## J7SC

KCDC said:


> @J7SC Did that glacier swap for both cards a while back. VS the stock blocks, they're pretty much the same as the stocks, temp-wise. I'm not a tester so can't give exact results. I swapped due to a loop corrosion issue and the stock blocks are not easy to tear down due to the screws being under that cover... On top of that, I can take the terminals off a lot easier instead of re-doing my tubing between the two. You dont suffer from that due to how you have yours configured.
> 
> The thermal pads on the stock are huge, at least 2mm. Glaciers are 1mm, but my thermal grizzly 1mm were still too thick since they don't squish as easily, got high temps with those, so had to use the stock pads the phanteks came with.
> 
> Overall, I like these because I can actually clean the blocks when I tear down. Just thought I'd chime in after saying I'd do it a long time ago haha.
> 
> Edited typo


Good to hear from you! What was the loop corrosion issue you referenced? Multi-metal? I check my Aorus XTR WBs regularly, but so far so good (also given a different loop config, per your post). So far I haven't noticed any degradation in loop consistency, clocks, or temps, two years+ on.

That said, I have been keeping 'half an eye' on RTX 3090s and this evening, I actually saw 2x Asus TUF 3090 OC at MSRP (w/o tariff here) at our regular biz wholesaler - It took willpower not to make that call (standing orders and all that)...truth be told though, those two 2080 Ti Aorus Xtr WBs have been nothing but a joy.

BTW, do you 'do' Microsoft Flightsim 2020 ? There's a way for SLI-CFR for RTX 2k series, making a mockery out of 4K Ultra...

...per Declassified Systems /YT


----------



## KCDC

I remember that post, but it's still too much work just to take a block apart haha, especially since this is now my main WFH station. Needed something easier to take apart.

The corrosion issue is 100% my fault; I let a loop go for too long without a teardown because I couldn't keep my system down for a period of time. Also probably put too much trust in Primochill's utopia, but I don't want to blame them. Had it been my usual play station, another story. Anyway, when I took the CPU block apart: massive pitting and corrosion. I didn't even attempt to take the Aorus blocks apart, just ordered the Glaciers. Clean and happy now; I can monitor it easily, and I'm using clear pre-mix just to be safe.

After doing Redshift and Octane benchmarks, I'm slightly faster than a 3090 but with double the power draw haha. Either way, I'll stick with these a bit for obvious reasons.

Flight Sim is definitely awesome but not really my thing, long-term. Since it took so much space I got rid of it. It did run pretty well on both cards from what I saw.


----------



## KCDC

I will do my best to wait a gen, but if I can score two 3090s down the road that I can get waterblocks for... who knows what a glass of wine will do...


----------



## kithylin

KCDC said:


> I remember that post but it's still too much work just to take a block apart haha, especially since this is now my main wfh station. Needed something easier to take apart
> 
> The corrosion issue is 100% my fault, I let a loop go for too long without a teardown due to not being able to keep my system down for a period of time. Also probably put too much trust in Primochill's utopia but I don't want to blame them. Had it been my usual play station, another story. Anyway, when I took the cpu block apart, massive pitting and corrosion. I didn't even attempt to take the aorus blocks apart, just ordered the glaciers. Clean and happy now and can monitor it easily and using clear pre-mix just to be safe.
> 
> After doing redshift and octane benchmarks, I'm slightly faster than a 3090 but with double the power draw haha. Either way, I'll stick with these a bit due to obvious reasons.
> 
> Flight Sim is definitely awesome but not really my thing, long-term. Since it took so much space I got rid of it. It did run pretty well on both cards from what I saw.


Just a note though: if a loop is planned correctly (using similar metals throughout), water loops do not have to be drained and torn down frequently just to avoid corrosion. In fact, if you use all similar metals you should never have corrosion in the loop at all. I have several computers that have run 2-3 years on the same loop without ever being drained, refilled, or having the blocks cleaned, and when I finally get around to it there's no corrosion inside at all.


----------



## J7SC

KCDC said:


> I remember that post but it's still too much work just to take a block apart haha, especially since this is now my main wfh station. Needed something easier to take apart
> 
> The corrosion issue is 100% my fault, I let a loop go for too long without a teardown due to not being able to keep my system down for a period of time. Also probably put too much trust in Primochill's utopia but I don't want to blame them. Had it been my usual play station, another story. Anyway, when I took the cpu block apart, massive pitting and corrosion. I didn't even attempt to take the aorus blocks apart, just ordered the glaciers. Clean and happy now and can monitor it easily and using clear pre-mix just to be safe.
> 
> After doing redshift and octane benchmarks, I'm slightly faster than a 3090 but with double the power draw haha. Either way, I'll stick with these a bit due to obvious reasons.
> 
> Flight Sim is definitely awesome but not really my thing, long-term. Since it took so much space I got rid of it. It did run pretty well on both cards from what I saw.


 
...you're right on the performance per watt for two 2080 Tis, they certainly haul **s, but you better have a big funnel for the flow of electron$. 

FlightSim 2020 and Cyberpunk 2077 are the only non-productivity apps I use the pair for right now...


----------



## KCDC

It was due to an Optimus copper block vs. the rest in my system, which were nickel.

edit: I'm trying to quote, I haven't used this new OCN system, sigh. Bear with me


----------



## KCDC

@*kithylin*
The only thing I can think of is the Optimus being a copper block vs. the Aorus blocks. I was also using the same GTR rads for 3-4 years over multiple builds, and always used Primochill utopia mixed with distilled at the correct ratio. It was only when I used the Optimus that I saw corrosion. It was enough for me to just go back to all nickel so I didn't need to worry again; since then, zero issues. Not blaming the brands, just myself for not paying attention.


----------



## KCDC

Just figured out quotes, ignore me.


----------



## J7SC

KCDC said:


> It was due to a optimus copper block vs the rest in my system which were nickel.
> 
> edit, im trying to quote, I haven't used this new OCN system sigh. bare with me


Oh wow - I've been mixing (coated) copper and nickel for years w/ no ill effect. I usually just make sure that there is no copper - aluminum contact (never mind other metals).

I have used Thermaltake 1000 liquids for YEARS on all our commercial and my private builds (multi-metal compatible, anti-corrosive, biocide), and do so with very tight (= commercial) monitoring - no problems so far, after 9+ years. I am sure there will be a horde of folks telling me that Thermaltake 1000 is 'no good', but they should just switch to the Intel-AMD-NVidia w-cooling echo chambers.

Yes, I use Feser and Mayhems too, and I didn't have any problems with those either...

I even used that TT 1000 recipe to fix an old AIO that had been running for 8+ years > here


----------



## KCDC

J7SC said:


> Oh wow - I've been mixing (coated) copper and nickel for years w/ no ill-effect. I usually just make sure that there is no copper - aluminum (never mind other metals).
> 
> I use Thermaltake 1000 liquids for YEARS on all our commercial and my private builds (multi-metal compatible, anti-corrosive, biocide), and do so with very tight (= commercial) monitoring - no problems so far, after 9 years+ . I am sure there will be a hoard of folks telling me that Thermaltake 1000 is 'no good', but they should just switch to the Intel-AMD-NVidia w-cooling echo channels.
> 
> Yes, I use Feser and Mayhems, and I didn't have any probs with those either...
> 
> I even used that TT 1000 recipe to fix an old AIO that had been running for 8+ years > here


See, that's why I am confused as to how it happened at all. I won't make this thread about my issue, but I was very pissed. I always made sure I kept a proper ratio of utopia and distilled.


----------



## J7SC

..kudos for your self-control. But I wonder if it is a plating issue, à la EK...et al.


----------



## Imprezzion

I thought about shunting my card as well, but from testing clocks in general I don't see how it would provide a benefit to me lol. At 1.093v I just can't get higher clocks than 2100-2085 depending on the temperature. The card usually sits between 43-47°C under a Kraken X52 + G12 bracket. I can get much lower temps if I raise the fan speed on the rad (it will go as low as 37-40°C max, with considerably more noise, which I don't mind), but all that does is prevent the drop from 2100 to 2085. It never power throttles in games at 1080p 144Hz anyway; it's purely limited by voltage.

I can squeeze out 2145-2130 MHz @ 1.125v, but that already requires the XOC BIOS, so no need to shunt there either... and it is also unstable in Cyberpunk. 

It will run any other game forever at 1.125v 2145-2130 MHz, except Cyberpunk...


----------



## tps3443

I added another radiator to my setup last night. Has anyone run a radiator like this? Not sure how well it’ll work. Anyways, I think it is a very odd place to run the coolant through. Maybe it’ll be ok? 


I haven’t done any testing yet. I was bleeding air out last night, and then it was just too late to proceed.


----------



## acoustic

Without vents at the bottom, those fans are doing nothing. There is no place for those fans to pull air from. I'd say that's probably going to hurt your loop performance more than it will help.

Your best bet is to prop the rad up on the edges so there's a gap, and then leave the side of the case open so it can draw air from the outside.


----------



## Avacado

I'd be shocked if that GPU got any coolant. Liquid follows the path of least resistance, and you just made two huge shunts: there's no reason for the coolant to do a hard 90 into the GPU block when it can go straight through to the rad.


----------



## tps3443

acoustic said:


> Without vents at the bottom, those fans are doing nothing. There is no place for those fans to pull air from. I'd say that's probably going to hurt your loop performance more than it will help.
> 
> Your best bet is to prop the rad up on the edges so there's a gap, and then leave the side of the case open so it can draw air from the outside.


Yes I have the bottom shroud vented.


----------



## acoustic

tps3443 said:


> Yes I have the bottom shroud vented.


Is that vented to the outside? Otherwise it's the same deal. I had based my post off your pics on the last page that didn't show any vents, my bad.


----------



## tps3443

Avacado said:


> I'd be shocked if that GPU got any coolant, liquid follows the path of least resistance, you just made 2 huge shunts, no reason for the coolant to do a hard 90 into the GPU block when it can go straight through to the rad.


The GPU is running as cool as it was before. But I was also worried about this; I just wanted to test it and see what would happen. Temps seem to be the same as before: 37°C full load at 2,130 MHz in Cyberpunk. I’m just not seeing an improvement, so I am most likely gonna re-route the bottom 360 rad tubing.



acoustic said:


> Is that vented to the outside? Otherwise it's the same deal. I had based my post off your pics on the last page that didn't show any vents, my bad.


I had to drill holes in the bottom shroud. It is pulling air from the bottom of the case, which has some additional holes and vents to allow outside air to come into the PSU area.


----------



## tps3443

Avacado said:


> I'd be shocked if that GPU got any coolant, liquid follows the path of least resistance, you just made 2 huge shunts, no reason for the coolant to do a hard 90 into the GPU block when it can go straight through to the rad.


Temperatures seem to be more or less exactly the same lol. I never thought one could run a radiator like this, but I suppose it couldn’t hurt to try. I am going to re-route it a different way; still deciding where. 

Wouldn't SLI GPUs be a similar issue?


----------



## tps3443

Ok I fixed it guys!


----------



## tps3443

When I first started this custom loop I could manage 2,100 MHz stable in CP77. Then it became 2,115 MHz, then I managed 2,130 MHz. And here we are, rocking 2,145 MHz stable!! This is awesome!

2,145 MHz is 100% stable in Cyberpunk 2077!!

I bounce between 34-35°C; it doesn’t come off 35°C lol. Awesome!!!


----------



## Krzych04650

Avacado said:


> I'd be shocked if that GPU got any coolant, liquid follows the path of least resistance, you just made 2 huge shunts, no reason for the coolant to do a hard 90 into the GPU block when it can go straight through to the rad.


I don't think you understand how it works. What he did is basically the same as a parallel loop, only upside down and with the radiator at the end instead of the CPU block, but it works the same. He got 37°C with this setup and 35°C with serial, which is about what you'd expect, and if he had two cards there would be no penalty at all: probably something like 38°C on both cards with parallel vs. 39/37 with serial. Fluid did flow into the GPU whenever the resistance was lower than through the radiator, in real time; it is not like "the GPU has more resistance so it gets no flow", the flow still spreads. Maybe not the best way to do it in this particular case, but it still worked well.
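The point that the high-restriction branch still gets flow rather than none is just the hydraulic analogue of parallel resistors: flow divides in inverse proportion to restriction. A toy illustration; the restriction numbers are made up and real blocks are nonlinear, so this only shows the qualitative split:

```python
# Toy model of coolant splitting between two parallel paths (GPU block
# vs. straight-through radiator), treating each as a linear restriction.

def split_flow(total_lph: float, r_a: float, r_b: float) -> tuple[float, float]:
    """Divide a total flow between two parallel restrictions r_a, r_b
    (arbitrary units). The lower-restriction path gets more flow, but
    the higher-restriction branch never drops to zero."""
    g_a, g_b = 1.0 / r_a, 1.0 / r_b          # hydraulic conductances
    q_a = total_lph * g_a / (g_a + g_b)
    return q_a, total_lph - q_a

# e.g. a GPU block 4x as restrictive as the bypass still gets 20% of the flow
gpu, bypass = split_flow(240.0, r_a=4.0, r_b=1.0)
print(gpu, bypass)   # 48.0 192.0
```

That reduced but nonzero branch flow is why the GPU still got cooled in the odd layout, just slightly worse (37°C parallel vs. 35°C serial) than when all the flow is forced through the block.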


----------



## Avacado

Krzych04650 said:


> I don't think you understand how it works. What he did is basically the same as parallel loop, only upside down and the radiator is at the end instead CPU block, but it works the same. He got 37C temp with this setup and 35C with serial, which is basically how it works, and if he had two cards then there would be no penalty at all, probably like even 38C on both cards with parallel and 39/37 with serial. Fluid did flow into the GPU whenever the resistance was lower than through radiator, in real time, it is not like "GPU has more resistance so it gets no flow", it still spreads. Maybe not the best way to resolve it in this particular case, but worked well still.


Right, you're right, I don't have the faintest idea of the difference between serial and parallel configurations (sarcasm). His final configuration, post #12602, is by far the better option. As long as his temps stayed roughly the same: no harm, no foul.


----------



## Avacado

tps3443 said:


> Ok I fixed it guys!


Looks good, consider switching the bottom tubes. Run the farther one to the GPU inlet and the closer one to the top rad.


----------



## tps3443

Avacado said:


> Looks good, consider switching the bottom tubes. Run the farther one to the GPU inlet and the closer one to the top rad.


I ran it like this. It stays at 35°C max and is able to sustain 2,145 MHz in Cyberpunk 2077. I am new to custom loops, and I just know that it worked either way, though the first setup I had was not very effective. But I didn’t really lose anything. I do know that it was extremely hard to bleed the air out of the radiator, and it took a lot of squeezing and shaking, because the air needed to exit the radiator and tubing and then pass through the GPU block, and the air bubbles just didn’t want to cooperate. I eventually got them out, but it was a pain. 

I personally don’t know much about serial vs. parallel; I only hooked it up like that because it was super easy to do.


----------



## J7SC

Imprezzion said:


> I thought about shunting my card as well but in testing clocks in general I don't see how it will *provide a benefit to me lol.* On 1.093v I just can't get higher clocks then 2100-2085 depending on the temperature. The card sits between 43-47c usually under a Kraken X52 + G12 bracket. I can get much lower temps if I raise the fanspeed on the rad, it will go as low as 37-40c max, with considerably more noise which I don't mind, but all that does is prevent the drop from 2100 to 2085. It never power throttles in games on 1080p 144Hz anyway. It's purely limited by voltage.
> 
> I can squeeze out 2145-2130Mhz @ 1.125v but that already has the XOC BIOS so no need to shunt that either.. and that is also unstable in Cyberpunk.
> 
> It will run any other game forever on 1.125v 2145-2130Mhz except Cyberpunk...


I think you're right about there being no benefit. Cyberpunk 2077 does stress RTX more than anything else I have tried. Anyhow, have a look at the two pics below... all at 4K max textures / RTX Ultra / DLSS Quality (I switch DLSS between Quality and Balanced depending on 'fight' scenes... Balanced adds about 12 to 15 fps). GPU is on stock BIOS and at the speeds below settles at 1.043v.

With a cold start and initial splash screen clocks at 2190 MHz, it settles to 2160 MHz once at max loop temp. I then ran the same scene with the GPU down-clocked by 100 MHz (!), yet the difference was only 2 fps. Chasing MHz brings little benefit in this app... as much as I like oc'ing and wringing every last bit of 24/7 performance out of my system, I'll concentrate some more on getting SLI-CFR to work (w/o DLSS) at 4K.


----------



## tps3443

J7SC said:


> I think you're right about the no / benefit. Cyberpunk 2077 does stress RTX more than anything else I have tried. Anyhow, have a look at the two pics below...all at 4K max textures / RTX Ultra / DLSS Quality (I switch DLSS between quality and balanced, depending on 'fight' scenes...balanced adds about 12 fps to 15 fps). GPU is on stock bios and at the speeds below settles at 1.043v.
> 
> With a cold start and initial splash screen clocks at 2190 MHz, it settles to 2160 MHz once at max loop temp. I then ran the same scene with the GPU down-clocked by 100 MHz (!). That's a 100 MHz less, yet the difference was only 2 fps. Chasing MHz for little benefit in this app... as much as I like oc'ing and wringing every last bit of 24/7 performance out of my system, I'll concentrate some more on getting SLI-CFR to work (w/o DLSS) at 4K.
> 
> View attachment 2474929
> 
> 
> View attachment 2474931


Well, 4K is a different beast, so you're not gonna see such a difference.

The RTX 3090 is only 5 FPS faster than an RTX 3080 at 4K in CP77, per Techspot's own review of the game's performance. But the 3090 costs over twice as much as a 3080. Your 100 MHz increase in overclock offers about half of that for free lol.


----------



## tps3443

Overclocking certainly helps the 2080Ti. @J7SC


Look at this chart. It is pitiful! This is the 2560x1440P Ultra preset with RT off/DLSS off.

They listed a 2080Ti at 48-56FPS lol. The difference is night and day.

This is an old video I uploaded running the exact same settings. My performance is in another league entirely. It's crazy.


Here's the chart first.






And here's a video I made on an XOC BIOS back around early December.

Keep in mind there is a 2-3% performance loss due to recording, so performance is actually better by a few more frames. Overclocking and squeezing every last drop has literally turned the card into a beast.

Video


----------



## Imprezzion

I put the XOC BIOS back on mine out of pure boredom lol. 

Set the curve to 2130Mhz (VRAM 7900) @ the normal temperature I run (about 46c), played Cyberpunk @ 1080p all max RT Psycho DLSS Quality for a bit, game crashed 10 minutes in. Dropped 1 bin to 2115Mhz and it worked for half an hour. After that I switched to Division 2 as we did some weekly raids and such which it ran fine for like 4 hours straight.

I might get away with 2115Mhz in Cyberpunk but half an hour isn't long enough to be sure it's stable in CP77.

My FPS in CP77 is great tho. I do run some mods on it like Cyber Engine Tweaks, optimized DLSS, optimized RT (mostly .ini tweaks from Nexus Mods) and a custom ReShade I made myself, as my monitor is a terrible TN panel with horrible colors / black values, so I prefer to run ReShade with AmbientLight, HDR / Tonemap and adaptive sharpening filters.

And they help. I average about 70 FPS in open world driving around and combat sees it drop to 55-60 worst-case which is perfectly playable and it looks amazing.


----------



## tps3443

Imprezzion said:


> I put the XOC BIOS back on mine out of pure boredom lol.
> 
> Set the curve to 2130Mhz (VRAM 7900) @ the normal temperature I run (about 46c), played Cyberpunk @ 1080p all max RT Psycho DLSS Quality for a bit, game crashed 10 minutes in. Dropped 1 bin to 2115Mhz and it worked for half an hour. After that I switched to Division 2 as we did some weekly raids and such which it ran fine for like 4 hours straight.
> 
> I might get away with 2115Mhz in Cyberpunk but half an hour isn't long enough to be sure it's stable in CP77.
> 
> My FPS in CP77 is great tho. I do run some mods on it like cyber engine Tweaker, optimized DLSS, optimized RT (mostly .ini tweaks from nexus mods) and a custom reshade I made myself as my monitor is a terrible TN panel with horrible colors / black values so I prefer to run ReShade with AmbientLight, HDR / Tonemap and adaptive sharpening filters.
> 
> And they help. I average about 70 FPS in open world driving around and combat sees it drop to 55-60 worst-case which is perfectly playable and it looks amazing.



Yeah, that's really not bad then, because I think you're running an AIO, right? Back when I made that video I was using my cheap Amazon water pump the size of my thumb, and only a push fan configuration on my radiators.

I have since upgraded to an EKWB D5 Inertia and push/pull fans on (2) of the 360s, and I have added another 360mm radiator entirely.

It has been a long road with my cooling system since making that video.

If you want to run 2,130MHz, you can never exceed 38C.

If you want to run 2,145MHz, you can never exceed 36C.

I too need true stability. I have about 70 hours in Cyberpunk 2077, and maintaining these temperatures is no joke when it comes to overclocking. All I can say is, if you can keep your card underneath 36C it'll run 2,145MHz forever.

This may seem silly, but if you can't run 2,130 or 2,145, your card is too hot. That's the name of the game with the cancerous firmware we have on our GPUs.
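The rule of thumb above can be written as a tiny lookup (a sketch of the anecdotal temp/clock breakpoints quoted in this post only; the above-38C fallback of 2115 MHz is my assumption from earlier posts, and real GPU Boost bins vary per card and BIOS):

```python
def max_stable_clock_mhz(gpu_temp_c: float) -> int:
    """Rough 2080 Ti stable-clock rule of thumb from this thread.

    Breakpoints are the anecdotal ones quoted above, not an official
    GPU Boost table. The >38C fallback is an assumption.
    """
    if gpu_temp_c < 36:
        return 2145
    if gpu_temp_c < 38:
        return 2130
    return 2115  # assumed fallback once the card runs hotter than 38C

print(max_stable_clock_mhz(35))  # 2145
```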


----------



## Imprezzion

Haha I can do that if my ambient is like 12c with the fans on 2k RPM push pull lol. Let's not hehe.


----------



## Medizinmann

tps3443 said:


> And all I can say is, if you can keep your card underneath 36C it’ll run 2,145Mhz forever.


Amen...


----------



## Medizinmann

Imprezzion said:


> Haha I can do that if my ambient is like 12c with the fans on 2k RPM push pull lol. Let's not hehe.


I could do this at 20°C ambient - I actually did - with all fans and the pump running at 100% all the time I get 2145MHz stable - but the noise is unbearable for me.

I guess I would need to add another 360mm rad and a much better pump...

I settled for 2115-2130 MHz and less noise - I don't think the last 15MHz are worth it.

Best regards,
Medizinmann


----------



## J7SC

Imprezzion said:


> I put the XOC BIOS back on mine out of pure boredom lol.
> 
> Set the curve to 2130Mhz (VRAM 7900) @ the normal temperature I run (about 46c), played Cyberpunk @ 1080p all max RT Psycho DLSS Quality for a bit, game crashed 10 minutes in. Dropped 1 bin to 2115Mhz and it worked for half an hour. After that I switched to Division 2 as we did some weekly raids and such which it ran fine for like 4 hours straight.
> 
> I might get away with 2115Mhz in Cyberpunk but half an hour isn't long enough to be sure it's stable in CP77.
> 
> My FPS in CP77 is great tho*. I do run some mods on it like cyber engine Tweaker, optimized DLSS, optimized RT (mostly .ini tweaks from nexus mods) *and a custom reshade I made myself as my monitor is a terrible TN panel with horrible colors / black values so I prefer to run ReShade with AmbientLight, HDR / Tonemap and adaptive sharpening filters.
> 
> And they help. I average about 70 FPS in open world driving around and combat sees it drop to 55-60 worst-case which is perfectly playable and it looks amazing.


I have to check out those Cyber Engine Tweaks, optimized DLSS and RT .ini tweaks from Nexus Mods. As mentioned, I am still hoping for 4K / RTX Ultra and no DLSS via SLI-CFR; we'll see. Below is a single-card run comparing 4K / RTX Ultra / DLSS Balanced (left) with the same at 1440p. At 1440p RTX Ultra / DLSS Performance I have seen as much as 103 fps, but the visual quality is just too weak. Besides, I play CP 2077 on a non-G-Sync 4K 40-inch monitor via DSP and oc'ed to 70 Hz, so those graphics settings don't make sense. BTW, cooling is via a dedicated GPU loop w/ three 360/55 rads and two D5s built to handle 2x oc'ed 2080 Tis, so with only one card doing any work, it is a non-issue.


----------



## tps3443

Medizinmann said:


> I could do this at 20°C ambient - I actually did - with all fans and the pump running at 100% all the time I get 2145MHz stable - but the noise is unbearable for me.
> 
> I guess I would need to add another 360mm rad and a much better pump...
> 
> I settled for 2115-2130 MHz and less noise - I don't think the last 15MHz are worth it.
> 
> Best regards,
> Medizinmann


My ambient is about 21C every day, usually. 100% fans, 100% pump.

I am using (12) of these fans, the cheap little 38CFM ones. But (3) of my fans are the noise makers, at 1800RPM.

I can't hear my D5 even at 100%. Maybe my fans drown it out.




----------



## Imprezzion

You guys all show up here with multi-rad custom loops while I'm sitting here looking at a Kraken X52, a G12 bracket and a bunch of eBay copper VRM / VRAM heatsinks (Enzotech and Alphacool ones) doing their best to cool 400w @ 1.125v..

The fans on it are just Cooler Master MF120's in push-pull.. at max pump speed and the fans around 70% @ ~1450-1500 RPM, which is quite tolerable, I sit around 44-47c in 20c ambients. Lower is possible: if I leave the window open and run the full 2000RPM I can get it to do like 37-40c, and I once did a 3DMark Time Spy run with the rad in the famous ice-and-salt bucket, which resulted in 2190/8200 passing Time Spy @ 19-22c load. But yeah, I'm not dropping my rad into an ice bucket all day to play Cyberpunk haha.


----------



## kithylin

KCDC said:


> See , thats why I am confused as to how it happened at all. I won't make this about my issue on this thread but I was very pissed. I always made sure I kept a proper ratio of utopia and distilled.


Personally I've been using copper blocks, brass radiators (brass is copper-based) and nickel-plated EK GPU blocks, and I only use straight distilled water with nothing else in the loop, and I've never had any issues myself. Multiple years on a fill-up. This past time it's been about 14 months on my current loop setup and I'm getting some brown build-up on the CPU block, but that's a cheaparse Chinese block for AM4 by.. I don't even know, Bkyski or something like that. It was $30, I was short on money at the time with a computer that was completely dead and was just trying to get it going, and I didn't have an AM4 block at the time. I have an all-copper Heatkiller 3.0 with AM4 mount now that I've been meaning to swap into it.... I've just been lazy.

That "brown build-up" is only in the CPU block though. My GPU block is clear plexi and it's spotless, so it's not corrosion flowing through the loop.. just something with that crappy AM4 block. So I guess I'm saying.. it's possible to have bad plating? That AM4 block looks like nickel, but at that price point I doubt it is. Sorry, this is all off-topic though.. I won't respond on this anymore here as I don't want to derail the thread. I just thought I'd share my comment that it's possible to have a badly plated block sometimes.


----------



## J7SC

Imprezzion said:


> You guys all show up here with multi-rad custom loops while I'm sitting here looking at a Kraken X52, a G12 bracket and a bunch of ebay copper VRM / VRAM heatsinks (enzotech and alphacool ones) doing their best to cool 400w @ 1.125v..
> 
> The fans on it are just Cooler Master MF120's in push-pull.. at max pump speed and the fans around 70% @ ~1450-1500 RPM which is quite tolerable I sit around 44-47c in 20c ambients. Lower is possible if I leave the window open and run the full 2000RPM I can get it to do like, 37-40c and I once did a 3DMark Time Spy run with the rad in the famous ice and salt bucket which resulted in 2190/8200 passing Time Spy @ 19-22c load. But yeah, I'm not dropping my rad into a ice bucket all day to play cyberpunk haha.


You have a point of sorts, but I haven't purchased any new water-cooling equipment in 6+ years (the exception being a bit of copper tubing and new cooling liquids). Everything else is old, trusted stuff from other (private and commercial) builds that got repurposed, whether it is the XSPC RX360 rads, Swiftech D5 pumps, fittings or other tubing.

Still, one of these days I'll build a system with one (or two?) new Watercool MoRa 420 Pros... Speaking of cooling, nothing like a nice icy morning flight to cool off after a few hours of Cyberpunk '77.


----------



## tps3443

Imprezzion said:


> You guys all show up here with multi-rad custom loops while I'm sitting here looking at a Kraken X52, a G12 bracket and a bunch of ebay copper VRM / VRAM heatsinks (enzotech and alphacool ones) doing their best to cool 400w @ 1.125v..
> 
> The fans on it are just Cooler Master MF120's in push-pull.. at max pump speed and the fans around 70% @ ~1450-1500 RPM which is quite tolerable I sit around 44-47c in 20c ambients. Lower is possible if I leave the window open and run the full 2000RPM I can get it to do like, 37-40c and I once did a 3DMark Time Spy run with the rad in the famous ice and salt bucket which resulted in 2190/8200 passing Time Spy @ 19-22c load. But yeah, I'm not dropping my rad into a ice bucket all day to play cyberpunk haha.


A Kraken G12 is always a great affordable option for decent GPU overclocking; I was gonna run one. But you must understand, my system pulls a lot of power while gaming. This is playing Cyberpunk 2077 at 765 watts, with my CPU sitting in the low 30's and my GPU in the mid 30's. There is no way to cool 750 watts with a Kraken.

My cooling needs to be overkill for the CPU mostly. If I run R20 it hits like 850 watts just from CPU power.


I still think you can knock 8-10C off of your load temps fairly easily with a full custom loop, even with slower-turning fans. All (15) of my fans spin at 1200RPM at 38CFM, except for (3) which are 1,650RPM XSPC.
(2) of those are in the front and (1) is in the top now.

In all honesty I need to fix my fan setup. None of them are pressure optimized and all of my fans are like $4 each.

I am considering dumping them all and going for just (9) Noctua P12 redux 1700RPM fans instead, with all of them on pull.


----------



## kithylin

tps3443 said:


> This is playing cyberpunk 2077 at 765 watts. So with my CPU sitting in the low 30’s and my GPU in the mid 30’s. There is no way to cool 750 watts with a Kraken.


A single RTX 2080 Ti couldn't possibly be pulling 750 watts just for the video card, no matter how far you overclock it. Even an RTX 3090 doesn't pull that kind of power single-card. That's total system consumption. Perhaps you should have a look in HwInfo and see what portion of that is actually your video card. I would guess around 350W maybe 400W.


----------



## JustinThyme

tps3443 said:


> I ran it like this. It stays at 35C max temps. And it is able to sustain 2,145. In Cyberpunk 2077, I am new to custom loops. And I just know that it worked either way. But not very effective the first setup I had. But I didn’t really lose anything. I know that it was extremely hard to bleed the air out of the radiator, and took a lot of squeezing and shaking. Because the air needed to exit the radiator and tubing, and then pass through the GPU block. And the air bubbles just didn’t want to corporate. I eventually got them out, but it was a pain.
> 
> I personally don’t know much between serial and parallel. But I only hooked it up like that because it was super easy to do so.


Parallel will leave the liquid taking the path of least resistance. If it's two identical GPU blocks, no worries, but when doing CPU and GPU in the same loop, one or the other is going to get more flow, starving the other and making it run hotter. A good number of CPU blocks are made with little restriction, so that's where most of your coolant will go. Some are very restrictive, and then you will end up with a nice cool GPU and a hot CPU. If you go serial, the flow is the same at any point in the loop.
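A rough way to picture that flow split: treat each block as a hydraulic resistance, analogous to parallel resistors, where for the same pressure drop each branch gets flow proportional to 1/R. The sketch below uses made-up resistance values for illustration, not measured block specs:

```python
def parallel_flow_fraction(r_block: float, r_other: float) -> float:
    """Share of total flow through one branch of a parallel pair.

    Models each water block as a hydraulic resistance: for the same
    pressure drop across both branches, flow in a branch scales with
    1/R. Resistance values are illustrative, not real block specs.
    """
    return (1 / r_block) / (1 / r_block + 1 / r_other)

# A low-restriction CPU block (R=1) in parallel with a more
# restrictive GPU block (R=3):
cpu_share = parallel_flow_fraction(1.0, 3.0)
gpu_share = parallel_flow_fraction(3.0, 1.0)
print(f"CPU gets {cpu_share:.0%}, GPU gets {gpu_share:.0%}")  # CPU gets 75%, GPU gets 25%
```

With those numbers the restrictive block is starved to a quarter of the flow, which is the effect described above; in a serial loop both blocks see 100% of whatever the pump delivers.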


----------



## tps3443

kithylin said:


> A single RTX 2080 Ti couldn't possibly be pulling 750 watts just for the video card, no matter how far you overclock it. Even an RTX 3090 doesn't pull that kind of power single-card. That's total system consumption. Perhaps you should have a look in HwInfo and see what portion of that is actually your video card. I would guess around 350W maybe 400W.


I think you misread. I don't just have a 2080Ti plugged into the wall.

^That is total system power draw at the wall playing Cyberpunk 2077. My entire system is in the same custom loop, and I was only explaining how my custom loop is beneficial for my situation. That's all.

The highest I have seen on just my 2080Ti alone is a little beyond 530 watts in Time Spy GT2 ("2080Ti power usage alone").

Lastly, the RTX 3090 is capable of exceeding 650+ watts all by itself when you allow it to stretch its legs. If I had one it would be watercooled, and I would solder on the resistors for tricking the power sensing. So certainly beyond 600 watts.
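For anyone curious how soldering on resistors "tricks the power": the card measures current via the voltage across a tiny shunt resistor, and adding a second resistor in parallel lowers the effective resistance, so the card under-reports current and power. A sketch, assuming a hypothetical 5 mΩ stock shunt (actual 2080 Ti / 3090 shunt values may differ):

```python
def parallel_r(r1_mohm: float, r2_mohm: float) -> float:
    """Effective resistance of two shunt resistors in parallel (milliohms)."""
    return (r1_mohm * r2_mohm) / (r1_mohm + r2_mohm)

def reported_power_w(true_power_w: float, r_stock: float, r_added: float) -> float:
    # The card converts shunt voltage to current assuming the stock
    # resistance, so its reading scales by R_eff / R_stock.
    return true_power_w * parallel_r(r_stock, r_added) / r_stock

# Hypothetical 5 mOhm stock shunt with a 5 mOhm resistor soldered on top:
print(reported_power_w(600, 5.0, 5.0))  # 300.0 -> the card "sees" half the real draw
```

Equal-value piggyback resistors halve the reading, effectively doubling the usable power limit, which is why shunted cards can pull well past their BIOS cap.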


----------



## kithylin

tps3443 said:


> I think you misread. I don’t just have a 2080Ti plugged in to the wall.
> 
> ^That is “total system power draw“ at the wall playing Cyberpunk2077. My entire system is in the same custom loop, and I was only explaining how my custom loop is beneficial for my situation. That’s all.
> 
> Also, my card has shunts soldered, so I have exceeded 530 watts in timespy GT2 quite easily. “2080Ti power usage alone”
> 
> Lastly the RTX3090 is very capable of exceeding 650+ watts all by itself when you allow it to stretch its legs.


There's nothing to misread. You said specifically, "There is no way to cool 750 watts with a Kraken," which means you are assuming an RTX 2080 Ti pulls 750 watts of power and that a Kraken would need to dissipate 750 watts of heat from it.


----------



## tps3443

JustinThyme said:


> parallel will leave the liquid taking the path of least resistance. If it’s two identical GPU blocks no worries but when doing CPU and GPU in the same loop one or the other is going to get more flow starving then other making it run hotter. A good bit of CPU blocks are made with little restriction so that’s where most of your coolant will go. Some are very restrictive and you will end up with a nice cool GPU and hot CPU. If you go serial then the flow is the same at any point in the loop.



I have seen this first-hand, so I totally understand it. When I was running the EKWB black tubing, I went straight from the D5 pump into a 360, into another 360, and then into the GPU. I noticed my CPU temps were noticeably higher like this, but my GPU was running very cool.

Now I am going D5 straight to CPU, then rads, and into the GPU. So now I am seeing much cooler CPU temperatures; I noticed it immediately after running a baseline with R15/R20. Fortunately I added another 360mm radiator, so GPU temps are lower under heavy loads. But I can only wonder now if I should go straight to the GPU first lol.


----------



## Imprezzion

I just did a 1.5-hour session of Cyberpunk doing some side missions with the XOC BIOS; it didn't crash at 2115MHz core / 7900 memory @ 1.125v, sitting at a flat 44c the entire time. Power draw according to MSI AB was "24-25%", so 375-400w consistently, if that is even accurate.
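For reference, Afterburner reports power as a percentage of the BIOS power-limit target, so converting to watts is just a multiply. The 1600 W limit below is only inferred from the 24-25% ≈ 375-400 W figures in this post, not a confirmed value for that XOC BIOS:

```python
def power_from_percent(ab_percent: float, bios_limit_w: float) -> float:
    """Convert Afterburner's 'power %' readout to watts.

    AB's percentage is relative to the BIOS power-limit target.
    The 1600 W figure used below is inferred from this post's
    numbers (24-25% ~ 375-400 W); the real XOC limit may differ.
    """
    return ab_percent / 100 * bios_limit_w

print(power_from_percent(25, 1600))  # 400.0 W
print(power_from_percent(24, 1600))  # 384.0 W
```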

The sole reason I don't run a full custom loop is that, before the whole shortage, I liked to just buy random second-hand cheap GPUs and "bin" them until I found a golden one. If that is a custom PCB or a different model card, that means getting new blocks and refilling the loop every time I do that..

That's why I run a mostly universal setup, which before the arrival of the G12 was an Accelero IV, which fits anything if you mod it a bit..

Same for the CPU, even tho a custom loop for that is more universal. The EK Phoenix kit I have is easy to maintain and fits every possible CPU, as it's just a normal EK Supremacy block with a rad + res + pump combo.


----------



## J7SC

Imprezzion said:


> I just did a 1.5 hour session of Cyberpunk just doing some side missions with the XOC BIOS, it didn't crash at 2115Mhz core 7900 memory @ 1.125v sitting at a flat 44c the entire time. Power draw according to MSI AB was "24-25%" so 375-400w consistently if that is even accurate.
> 
> The sole reason I don't run a full custom loop is because before the whole shortage I like to just buy random second hand cheap GPU's and "bin" them until I find a golden one. If that is a custom PCB or different model card that means getting new blocks and refilling the loop every time I do that..
> 
> That's why I run a mostly universal setup which before the arrival of the G12 was a Accellero IV which fits anything if you mod it a bit..
> 
> Same for the CPU even tho a custom loop for that is more universal. The EK Phoenix kit I have is easy to maintain, fits every possible CPU as it's just a normal EK Supremacy block with a rad + res + pump combo.


^That makes sense. I have a pile of the Swiftech universal GPU coolers (spoiler) which work particularly well if the GPUs in question have a separate cold plate covering the RAM and parts of the VRM... all that is needed then is a 120 mm fan. I got into universal GPU cooling after buying 4 EK and other custom blocks for earlier GPUs years back, which then had no further use (though that's a lot of copper weight I might want to cash in on).

My dual 2080 Ti + TR setup was/is configured primarily for productivity, and it came with factory blocks at a decent price (back then...) - and as mentioned, I already had all the other water-cooling items like rads, fittings, pumps, tubes, etc. I might yet upgrade to 3090s down the line, but not because of Cyberpunk - because of the 24 GB of VRAM...



Spoiler


----------



## pewpewlazer

Imprezzion said:


> You guys all show up here with multi-rad custom loops while I'm sitting here looking at a Kraken X52, a G12 bracket and a bunch of ebay copper VRM / VRAM heatsinks (enzotech and alphacool ones) doing their best to cool 400w @ 1.125v..
> 
> The fans on it are just Cooler Master MF120's in push-pull.. at max pump speed and the fans around 70% @ ~1450-1500 RPM which is quite tolerable


I'm over here wishing my 750 RPM fans were quieter...


----------



## tps3443

Redoing my bottom panel. Setting it up as push/intake.

I'm off tomorrow, so I did a full gut/clean.


It's all for "big beasty", my 2080Ti lol.

I am most likely going to lose some performance and gain some at the same time: I am only running a push configuration, with 1200RPM 15dB fans on all (3) rads.

I am optimizing this entire system for a quieter setup. I have (3) 360's, so this shouldn't be a problem.


----------



## J7SC

tps3443 said:


> Redoing my bottom panel. Setting up as push/intake.
> 
> Im off tomorrow. So I did a full gut/clean
> 
> 
> Its all for “big beasty“ my 2080Ti lol.
> 
> I am most likely going to lose performance and gain performance at the same time. So I am only running “Push configuration only with 1200RPM 15dB fans on “All (3) rads”
> 
> I am optimizing this entire system for a more quiet setup. I have (3) 360’s so this shouldn’t be a problem


That's a lot of fans  I like (re) building and updating an existing build ...looks like a good day-off project. Have fun !


----------



## tps3443

J7SC said:


> That's a lot of fans  I like (re) building and updating an existing build ...looks like a good day-off project. Have fun !



Just switching everything up a little. I literally just did this the other day; 3AM and still up haha.

But yeah, I had to stuff all (3) 360's back in. I went for a silent setup this time around. Gonna try it out and see how I like it, and see if the silence is worth the slightly higher temps.


----------



## Imprezzion

Well, it was a nice ride, 2080 Ti, but with how high second-hand prices are at the moment, and the fact that I was able to order an MSI 3080 Suprim X which "appeared" to be in stock the moment I ordered it, it might get sold after tomorrow if my 3080 actually gets sent to me.

I think I might keep it on air if the Suprim X's cooler is as good as the MSI air coolers were on the 2xxx cards...


----------



## tps3443

Imprezzion said:


> Well, it was a nice ride 2080 Ti, but with how high second hand prices are at the moment and the fact I was able to order a MSI 3080 Suprim X which "appeared" to be in stock the moment I ordered it it might get sold after tomorrow if my 3080 actually gets send to me.
> 
> I think I might keep it on air if the Suprim X's cooler is as good as the MSI air coolers were on the 2xxx cards...


Yeah, the used 2080Ti market is crazy lol. But it makes sense I suppose. With an overclock they pretty much still slot in with the high-end GPUs, and you can't really get anything substantially faster for less than $1,000 anyway lol.

But absolutely on the 3080; you should see a slight performance boost and break even, or maybe even make money on the upgrade haha. Sounds like a win-win to me.

I would probably keep the 3080 on air too. There's not a tremendous amount to gain out of these cards. Samsung 8nm is already boosting 2+GHz at default, and they seem to overclock like TSMC 12nm on the 2080Ti. So nothing crazy.


----------



## Medizinmann

tps3443 said:


> My ambient is about 21C everyday usually. 100% fans, 100% pump.
> 
> I am using (12) of these fans. The cheap little 38CFM fans. But (3) of my fans are the noise makers at 1800RPM.
> 
> I can't hear my D5 even at 100%. Maybe my fans drown it out.


Well - I use 14x Noctua NF-A12x25 PWM fans (rad space: 360mm + 240mm + 120mm) - and yes, @100% fan speed I don't hear the pump either - but I can hear the pump @100% if I don't ramp up the fans to 100%...

But nonetheless - this is far too much noise for my ears... I prefer a less noisy setup...

Best regards,
Medizinmann


----------



## Imprezzion

tps3443 said:


> Yeah the used 2080Ti market is crazy lol. But it makes sense I suppose. With an overclock they pretty much still slot in to the high end GPU’s and you can‘t really get anything substantially faster for less than a $1000 bucks anyways lol.
> 
> But absolutely on the 3080, you should see a slight performance boost and break even, or maybe even make money to upgrade haha lol. Sounds like a win win to me.
> 
> I would probably keep the 3080 on air too. Not a tremendous amount to gain out of these cards. Samsung 8NM is already boosting 2+Ghz default, and they seem to overclock like TSMC 12NM on the 2080Ti. So nothing crazy.


I'm just a little worried that with the 430w BIOS a Suprim X 3080 has, the temps on air are going to be substantial and it will thermal throttle the boost quite hard, like 5-6 bins hard.. in the 3080 topic someone with that specific card mentioned 70+c on air even at 80% fan speed with an overclock and the 430w BIOS, which costs way too many clock bins for me lol.

We'll see if I actually get it tomorrow.


----------



## fraefm95

Hey,

Wrong forum.

Sorry


----------



## Medizinmann

tps3443 said:


> Just switching everything up a little. I literally just did this the other day. 3AM still up haha.
> 
> But yeah I had to stuff all (3) 360’s back in. I went for a silent setup this time around. Gonna try it out and see how I like it. Maybe see if the silence is worth the slightly higher temps.


I'm sure you will like it - more silent... you can make a fan curve that still ramps up when you play with headphones though - that is how I do it.

If only I had the time to tinker - I would also add a better pump and another rad...

Best regards,
Medizinmann


----------



## JustinThyme

You really want to reduce noise and get better temps? This is how I went about it. I'd always gone straight from the pump to the CPU, then into a rad and back down into a pair of 2080Tis. I pulled two rads out of my case and went with this instead (still two rads in the case though).


----------



## tryout1

J7SC said:


> @tryout1 ...if you don't want to shunt mod (as there are pros and cons with that), the 2x 8 pin Galax 380W (OP) seems to be the 'go-to' custom bios for most folks here. A step beyond is the Strix 1000W XOC 2x 8 pin bios, also in the original OP. Needless to add, extreme caution (apart from great cooling) is required when using an XOC bios as they typically have all kinds of safeties disabled. Depending on your PCB, you might also find some I/O ports and fan control not working right, only one way to find out with your particular model. I agree with tps3443 though that 380W is 'plenty' (my cards' stock bios max, btw), but I thought that you wanted to reduce power to 250 W or below ? I take it custom bios is just for benching ?


Sorry for the relatively late reply; I was busy putting in my 5900X and other stuff. At the moment I'm using the Galax 380W BIOS, which works fine so far. My card is a reference-design card (Gigabyte Gaming OC), so I wonder if that Strix 1000W XOC BIOS even works. About the temps, I don't really worry; so far the max temp I saw was 44 °C when perma-benching @380w with my 2 rads running at just 900-950rpm. And yes, I agree with both of you that 380W is more than plenty for 24/7 usage; I just wanted more power for some ****s and giggles in benchmarking, as I'm basically stuck at about 17380 GPU score even with "high performance" applied in NVCP. So far I'm still rocking that 2070/8100 @ 1v, which runs perfectly stable in CP2077 for hours; 2085 crashed on me tho after about 10 mins, so I think that's a good sweet spot between perf/power, and it should be more than enough for WQHD for 1-2 years, cause I think I'm gonna skip RTX 3xxx altogether.


----------



## J7SC

Imprezzion said:


> I'm just a little worried with the 430w BIOS a Suprim X 3080 has the temps on air are going to be substantial and it will thermal boost limit quite hard, like 5-6 bins hard.. in the 3080 topic someone with that specific card mentioned 70+c on air even on 80% fanspeed with a overclock and 430w BIOS which costs way too much clock bins for me lol.
> 
> We'll see if I actually get it tomorrow.


Well, hopefully the 3080 will show up. I know we just posted about 'universal GPU' blocks, but in case that doesn't fit the MSI 3080 Suprim X, there are several affordable blocks out for it... I checked because I have been debating a pair of 3090s (though not 100% seriously yet re. the business case for it). Still, there were actually 2x 3090 Suprim X for sale (at MSRP!) locally at a retailer we do business with, but by the time I had checked re. water-blocks, the cards were gone. In the process though I learned that there are several blocks (EU, Asia) available for the 3080 / 3090 Trio that also fit the Suprim X, per listing. To dissipate 430W of heat w/o costing too many bins, I think water (either via a universal or custom GPU block) might be necessary, never mind the GPU fan noise at the speeds you want.


----------



## J7SC

tryout1 said:


> Sorry for the relatively late reply, I was busy putting in my 5900X and other stuff. At the moment I'm using the Galax 380W BIOS, which works fine so far. My card is a reference-design card (Gigabyte Gaming OC), so I wonder if that Strix 1000W XOC BIOS would even work. I don't really worry about temps; the max I've seen was 44 °C when permabenching at 380W with my two rads running at just 900-950 RPM. And yes, I agree with both of you that 380W is more than plenty for 24/7 usage; I just wanted more power for ****s and giggles when benchmarking, as I'm basically stuck at about 17,380 GPU score even with "High Performance" applied in NVCP. So far I'm still rocking that 2070/8100 at 1V, which runs perfectly stable in CP2077 for hours; 2085 crashed on me after about 10 minutes, so I think that's a good sweet spot between performance and power, and it should be more than enough for WQHD for 1-2 years, because I think I'm going to skip RTX 3xxx altogether.


I am certainly not recommending the 1000W XOC BIOS as I have not tried it, but others with the same card I have (Aorus Xtreme, 2x 8-pin, non-reference PCB) had it working, apart from one I/O port and perhaps fan / memory speed control. I take it your 2080 Ti doesn't have a dual-BIOS switch? That would make experimenting with the XOC BIOS a lot easier...


----------



## tryout1

J7SC said:


> I am certainly not recommending the 1000W XOC BIOS as I have not tried it, but others with the same card I have (Aorus Xtreme, 2x 8-pin, non-reference PCB) had it working, apart from one I/O port and perhaps fan / memory speed control. I take it your 2080 Ti doesn't have a dual-BIOS switch? That would make experimenting with the XOC BIOS a lot easier...


Nah, sadly not. I mean, I have a USB flasher at home and a SOIC8 clip, but honestly, just for some points? Plus, if something goes wrong, I have to detach the waterblock, reflash, etc.


----------



## tps3443

Imprezzion said:


> I'm just a little worried that with the 430W BIOS on a Suprim X 3080, the temps on air are going to be substantial and it will thermally limit the boost quite hard, like 5-6 bins hard. In the 3080 topic, someone with that specific card mentioned 70+ °C on air even at 80% fan speed with an overclock and the 430W BIOS, which costs way too many clock bins for me lol.
> 
> We'll see if I actually get it tomorrow.


Water cool it.


tryout1 said:


> Nah, sadly not. I mean, I have a USB flasher at home and a SOIC8 clip, but honestly, just for some points? Plus, if something goes wrong, I have to detach the waterblock, reflash, etc.


I've got the same setup. That's why I never worried too much about flashing. I was into the high-end DTR laptop scene lol.


----------



## jura11

tryout1 said:


> Sorry for the relatively late reply, I was busy putting in my 5900X and other stuff. At the moment I'm using the Galax 380W BIOS, which works fine so far. My card is a reference-design card (Gigabyte Gaming OC), so I wonder if that Strix 1000W XOC BIOS would even work. I don't really worry about temps; the max I've seen was 44 °C when permabenching at 380W with my two rads running at just 900-950 RPM. And yes, I agree with both of you that 380W is more than plenty for 24/7 usage; I just wanted more power for ****s and giggles when benchmarking, as I'm basically stuck at about 17,380 GPU score even with "High Performance" applied in NVCP. So far I'm still rocking that 2070/8100 at 1V, which runs perfectly stable in CP2077 for hours; 2085 crashed on me after about 10 minutes, so I think that's a good sweet spot between performance and power, and it should be more than enough for WQHD for 1-2 years, because I think I'm going to skip RTX 3xxx altogether.


Hi there

The 1000W XOC BIOS should work on your RTX 2080 Ti; I have been using that BIOS on my Asus RTX 2080 Ti Strix and my Zotac RTX 2080 Ti AMP as well. Just a bit of caution: you will lose one DP and the HDMI port, and I recommend capping the power limit at 44-50%. On my Strix I limit the power to 44% if I do renders or benchmarks.

On my loop, temperatures with the XOC BIOS are in the 30s; only in warmer ambient temperatures would they be in the high 30s to low 40s.

Hope this helps

Thanks, Jura


----------



## jura11

J7SC said:


> I am certainly not recommending the 1000W XOC BIOS as I have not tried it, but others with the same card I have (Aorus Xtreme, 2x 8-pin, non-reference PCB) had it working, apart from one I/O port and perhaps fan / memory speed control. I take it your 2080 Ti doesn't have a dual-BIOS switch? That would make experimenting with the XOC BIOS a lot easier...


The XOC BIOS on the RTX 2080 Ti Strix is as safe as can be; I have been running that BIOS since the date it was released to the public with absolutely no issues.

Yes, you will lose a DP and HDMI port with that BIOS, because the I/O layout is different; Asus always has a different I/O layout.

Hope this helps

Thanks, Jura


----------



## tps3443

JustinThyme said:


> You really want to reduce noise and get better temps? This is how I went about it. I'd always gone straight from the pump to the CPU, then into a rad, and back down into a pair of 2080 Tis. Pulled two rads out of my case and went with this instead (still two rads in the case, though)
> View attachment 2475152


I need better fans. My current fans spin at 1,200 RPM and push around 38 CFM with really low static pressure.

I am considering going with the Noctua NF-P12s; they are only $13.90 each.

I'm still a little undecided on whether I should grab a fixed-RPM model like the 1,300 RPM one, or just go for the 1,700 RPM PWM version and let it spin down.

But my current fans offer almost zero static pressure, maybe around 0.75 mm H₂O. If I went with the 1,300 RPM NF-P12, I would get over double the static pressure at the same noise level.


----------



## JustinThyme

Go for higher static pressure and use a good controller. PWM control will let you throttle up when needed and stay near-silent when not. The Noctua industrial 2000 RPM fans I use have a static pressure of just over 4 mm H₂O, and most of the time they don't run anywhere near 2000 RPM. I have them on Aquaero control with liquid temperature as the reference. The liquid temp has to get up there, and it takes a good bit of extremely heavy load, and they still don't reach max RPM.


----------



## tps3443

Here's my final setup for the new quiet build. Using push only: a 7980XE detuned to 4.7 GHz with x30 mesh, and the 2080 Ti detuned to 2,115 MHz / 16,300.

The GPU is hitting around 43 °C. My CPU always runs extremely cool anyway, so that's not really a concern.

The system is almost totally silent, and so far I like it! It's actually really strange at first, playing a game through my desk speakers with such a quiet PC; all you can hear is the audio from the game.

I am only using some 30+ CFM fans on push, with (3) 360 mm radiators, so I am pretty surprised by the performance.


----------



## J7SC

If I were buying new 120 mm fans, it would be either the Arctic P12 PWM (budget-oriented) or the Noctua NF-A12x25 (not so budget-oriented, never mind 'that' colour).

However, I am still using the GentleTyphoon '3K RPM' fans I got ages ago. They work very well with the XSPC RX360 rads re. airflow and pressure, and for their 83 CFM rating they are actually bearable in terms of noise (36.5 dB), even before sound baffles. The Gentle Typhoons beat the cr*p out of, for example, the Corsair ML120 at 2.4K RPM in terms of lower dB and higher CFM.


----------



## Imprezzion

I just wanted RGB... So I got the Cooler Master MF120s and MF140s. The 120s actually have pretty good static pressure and not too much noise; however, the bearings are horrendously bad quality, and a fan drops a bearing at least once a month, with a nice rattle accompanying it..


----------



## Medizinmann

Concerning fans...

I would always recommend Noctua's NF-A12x25 PWM fans, that is, if you don't need RGB and colour doesn't matter...

I got 14 of them, in a push/pull config on my rads...

BTW: Noctua announced black and white variants coming out this year, but I don't know when they will be available, especially with the current situation...

Best regards,
Medizinmann


----------



## tryout1

tps3443 said:


> Water cool it.
> 
> 
> I've got the same setup. That's why I never worried too much about flashing. I was into the high-end DTR laptop scene lol.


Yeah, me too; I knew I saw that name somewhere, nbr.com I guess lol. I shunt-modded the 1070 MXM in my old P775DM3-G and then flashed a 200W BIOS directly onto it. Right now I'm using a P775TM1-G with a 2080 and an 8700K @ 4.8 GHz as my living-room machine, or for WoW when my girlfriend is here.



jura11 said:


> Hi there
> 
> The 1000W XOC BIOS should work on your RTX 2080 Ti; I have been using that BIOS on my Asus RTX 2080 Ti Strix and my Zotac RTX 2080 Ti AMP as well. Just a bit of caution: you will lose one DP and the HDMI port, and I recommend capping the power limit at 44-50%. On my Strix I limit the power to 44% if I do renders or benchmarks
> 
> On my loop, temperatures with the XOC BIOS are in the 30s; only in warmer ambient temperatures would they be in the high 30s to low 40s
> 
> Hope this helps
> 
> Thanks, Jura


Thanks Jura, I'll try that when I have some time to spend. With one screen it doesn't matter, just for benchmarking's sake, to be honest, if it works. 44-50% would be about 440-500W, I guess?
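For what it's worth, the slider math checks out: the power-limit percentage scales linearly off the BIOS base power target, so 44-50% of a 1000W BIOS lands at roughly 440-500W. A trivial sketch of the arithmetic (helper name is made up for illustration):

```python
# Hypothetical sketch: converting a power-limit slider percentage into watts.
# Assumes the power target scales linearly from the BIOS base limit
# (1000 W for the XOC BIOS discussed above).

def power_target_watts(base_limit_w: float, slider_percent: float) -> float:
    """Effective power target implied by a power-limit slider setting."""
    return base_limit_w * slider_percent / 100.0

for pct in (44, 50):
    print(f"{pct}% of 1000 W -> {power_target_watts(1000, pct):.0f} W")
```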


----------



## tps3443

Medizinmann said:


> Concerning Fans...
> 
> I would always recommend Noctua's NF-A12x25 PWM fans, that is, if you don't need RGB and colour doesn't matter...
> 
> I got 14 of them, in push&pull-config on my rads...
> 
> BTW: Noctua announced black and white variants coming out this year – but don’t know when they will be available – especially with the current situation…
> 
> Best regards,
> Medizinmann


I am tempted. I need (9) of them, and that's a $269 expense lol. But yeah, I want them. I might just order them.

22 dB at 2,000 RPM is pretty damn impressive.

But I am tempted to just get the Noctua NF-P12 instead. They are 25 dB at 1,700 RPM with similar static pressure.

The funny thing about sound, though: 22 dB and 25 dB may look similar on paper, but decibels are logarithmic; a 3 dB increase doubles the sound power (and it takes roughly +10 dB to be perceived as twice as loud), so small dB differences matter more than they look.
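As a hedged sketch of that decibel arithmetic (standard formula, helper name made up): a level change of X dB corresponds to a sound-power ratio of 10^(X/10), so +3 dB doubles acoustic power, while roughly +10 dB is what listeners typically judge as "twice as loud":

```python
# Illustrative decibel arithmetic. A level change of delta_db corresponds
# to a sound-power ratio of 10**(delta_db / 10).

def power_ratio(delta_db: float) -> float:
    """Sound-power ratio implied by a level change of delta_db decibels."""
    return 10 ** (delta_db / 10)

print(round(power_ratio(3), 2))   # ~2.0: +3 dB doubles acoustic power
print(round(power_ratio(10), 2))  # 10.0: +10 dB, roughly 'twice as loud' perceptually
```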


The Noctua A12x25s are beastly. And I couldn't care less about the color; obviously I wish they were black or even gray. But they perform and they are very, very quiet. That's why they're $30 a pop, and I totally understand it.


----------



## Renegade5399

tps3443 said:


> Here's my final setup for the new quiet build.


That's sharp looking!



Medizinmann said:


> Concerning Fans...
> 
> I would always recommend Noctua's NF-A12x25 PWM fans, that is, if you don't need RGB and colour doesn't matter...
> 
> I got 14 of them, in push&pull-config on my rads...
> 
> BTW: Noctua announced black and white variants coming out this year – but don’t know when they will be available – especially with the current situation…
> 
> Best regards,
> Medizinmann


A person after my own heart. I do something similar with my fans, though I use PWM Deltas hooked to a Commander Pro for speed control. I ordered special whips so the power comes straight from a Molex plug while the speed-control signal comes from the Commander Pro.



jura11 said:


> The XOC BIOS on the RTX 2080 Ti Strix is as safe as can be; I have been running that BIOS since the date it was released to the public with absolutely no issues
> 
> Yes, you will lose a DP and HDMI port with that BIOS, because the I/O layout is different; Asus always has a different I/O layout
> 
> Hope this helps
> 
> Thanks, Jura


With my card I found the GALAX HOF OC Lab BIOS does not do this.


----------



## J7SC

Imprezzion said:


> *I just wanted RGB*... So I got the cooler master MF120's and 140's. The 120's actually have pretty good static pressure and not too much noise however the bearings are horrendously bad quality and a fan drops a bearing at least once a month with a nice rattle accompanying it..


...'just wanted RGB'? Then nirvana is near


Spoiler: RGB fans RGB fans RGB fans RGB fans














 
For hardcore benching w/ an open window, I've got something like 9 of these old Sunon 120mm x 38mm fans from a past server build... 108 CFM / 3.1K RPM / 42 dBA... that's past my daily pain threshold, though, compared to the Gentle Typhoons...

BTW, quick tip for fan mounting: I tightly wrap black electrical tape around the radiator / fan boundary to eliminate any air escaping... a non-scientific observation, but both the sound and temps marginally improved.



Spoiler: Gentle Typhoon <> Sunon


----------



## tps3443

J7SC said:


> ...'just wanted RGB' ? Then nirvana is near
> 
> 
> Spoiler: RGB fans RGB fans RGB fans RGB fans
> 
> 
> 
> 
> View attachment 2475311
> 
> 
> 
> 
> For hardcore benching w/ an open window, I've got something like 9 of these old Sunon 120mm x 38mm fans from a past server build... 108 CFM / 3.1K RPM / 42 dBA... that's past my daily pain threshold, though, compared to the Gentle Typhoons...
> 
> BTW, quick tip for fan mounting: I tightly wrap black electrical tape around the radiator / fan boundary to eliminate any air escaping... a non-scientific observation, but both the sound and temps marginally improved.
> 
> 
> 
> Spoiler: Gentle Typhoon <> Sunon
> 
> 
> 
> 
> View attachment 2475312


I have never owned Gentle Typhoons, and this is why the Noctua A12x25s really have my attention. They are essentially a refined Gentle Typhoon design, and I know they're quiet and push a massive amount of air.

I am seriously tempted to just drop the $270 on (9) of them. I feel kinda crazy doing so.

Any other options? I mean, I'm after all-out performance, as quietly as possible.

Right now I am turning 1,000-1,200 RPM at 20 dB with some "non-pressure-optimized" fans, while the Noctua A12x25 is 2,000 RPM at 22 dB. I kinda wish they were black, but I can get over it... should I just get it over with and buy them? lol
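One more consideration when buying nine of anything that makes noise: identical incoherent sources sum as L_total = L_single + 10·log10(N), the standard acoustics rule of thumb. A rough sketch (helper name made up for illustration):

```python
import math

# Illustrative sketch: combined sound level of N identical incoherent sources.
# Nine fans are not nine times as loud; they add about 10*log10(9) ~ 9.5 dB.

def combined_db(single_db: float, n_sources: int) -> float:
    """Summed sound level of n identical incoherent noise sources."""
    return single_db + 10 * math.log10(n_sources)

print(round(combined_db(22.0, 9), 1))   # nine 22 dB fans -> ~31.5 dB total
```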


----------



## J7SC

tps3443 said:


> I have never owned Gentle Typhoons, and this is why the Noctua A12x25s really have my attention. They are essentially a refined Gentle Typhoon design, and I know they're quiet and push a massive amount of air.
> 
> I am seriously tempted to just drop the $270 on (9) of them. I feel kinda crazy doing so.
> 
> Any other options? I mean, I'm after all-out performance, as quietly as possible.
> 
> Right now I am turning 1,000-1,200 RPM at 20 dB with some "non-pressure-optimized" fans, while the Noctua A12x25 is 2,000 RPM at 22 dB. I kinda wish they were black, but I can get over it... should I just get it over with and buy them? lol


...it really depends on your future build plans as well. As already stated, if I had to replace the 120 mm GentleTyphoons and buy new fans, it would be between the Arctic P12 PWM and the Noctua A12x25s (the latter for the absolute best cooling / dB ratio).

...but check out the Watercool MoRa 420 Pro below, around US$300. Again, can you see yourself doing future builds with an external MoRa 420 Pro, connected via QDs for transfer to other builds? That, with 4+4 (push / pull) Noctua 200 mm fans (not cheap), would be a high-performing, very quiet, long-lasting, transferable option over the long run



Spoiler: Watercool MoRa 420 Pro price


----------



## Imprezzion

Back in the good old days when Enermax was still relevant, I used to swear by their Apollish fans with the removable blades. Super quiet, unkillable bearings, and seriously easy to clean, but yeah, like the GTs, they don't exist anymore lol.

I need some good 140mm rad fans with RGB compatible with MSI Mystic Light Sync..


----------



## jura11

Personally I like the Arctic Cooling P12 PWM or P14 PWM, and the Phanteks PH-F120MP or PH-F140MP for radiators. I use Arctic Cooling P12 PWMs on a MO-RA3 360mm, running them at max speed, and they're pretty quiet on the radiator.

Silent Wings 3 are other favourite fans of mine; Noctuas I really like, but they are just expensive.

Hope this helps

Thanks, Jura


----------



## tps3443

jura11 said:


> Personally I like the Arctic Cooling P12 PWM or P14 PWM, and the Phanteks PH-F120MP or PH-F140MP for radiators. I use Arctic Cooling P12 PWMs on a MO-RA3 360mm, running them at max speed, and they're pretty quiet on the radiator
> 
> Silent Wings 3 are other favourite fans of mine; Noctuas I really like, but they are just expensive
> 
> Hope this helps
> 
> Thanks, Jura



I know they are expensive. But I am about to just throw in the towel and say it:

Quiet operation + performance is costly. If I am buying fans, I may as well wrap this up with the A12s lol. I will use them for years. But dang, I hate that brown color.


----------



## tps3443

J7SC said:


> ...it really depends on your future build plans as well. As already stated, if I had to replace the 120 mm GentleTyphoons and buy new fans, it would be between the Arctic P12 PWM and the Noctua A12x25s (the latter for the absolute best cooling / dB ratio).
> 
> ...but check out the Watercool MoRa 420 Pro below, around US$300. Again, can you see yourself doing future builds with an external MoRa 420 Pro, connected via QDs for transfer to other builds? That, with 4+4 (push / pull) Noctua 200 mm fans (not cheap), would be a high-performing, very quiet, long-lasting, transferable option over the long run
> 
> 
> 
> Spoiler: Watercool MoRa 420 Pro price
> 
> 
> 
> 
> View attachment 2475319



Do you think I'd need another D5? Or is one single EKWB D5 Inertia acceptable at 100%?

I'm not sure if I'm ready yet. But I might do it lol. Isn't there a cheaper, smaller one?


----------



## J7SC

tps3443 said:


> Do you think I'd need another D5? Or is one single EKWB D5 Inertia acceptable at 100%?
> 
> I'm not sure if I'm ready yet. But I might do it lol. Isn't there a cheaper, smaller one?


One D5 should work fine... I've even seen builds w/ 4x water-cooled 3090s and 2x water-cooled CPUs plus a MoRa on a single D5. That said, I always use 2x D5 per loop 'out of habit' re. fail-over insurance for commercial builds. Then again, I have never had a D5 fail during operation.

Another advantage of 2x D5, though, is very consistent pressure and flow... single D5s can (rarely) drop pressure rapidly w/ air pockets in the system, more so compared to DDCs, as D5s have a larger diameter. Single D5 = fine, dual D5 = perfect (IMO)


----------



## edhutner

I also like the Arctic P12 PWM, probably the best price/performance/noise combination.


----------



## J7SC

edhutner said:


> I also like the Arctic P12 PWM, probably the best price/performance/noise combination.


...yeah, the new IceGiant CPU cooler also seems to use them (photo in spoiler, per caseking.de). So far I have heard nothing but good things about the Arctic P12 PWM... I'd just like more longevity info re. bearing life etc. as it becomes available.



Spoiler


----------



## Imprezzion

Should I Conductonaut my 2080 Ti or not... like, how much would I realistically gain over PK-3? I am at 44-47 °C load now with the 380W BIOS @ 2100-2085 MHz under a Kraken X52. I would like to stay below 45 °C on the XOC BIOS without having the fans at full blast. Is Conductonaut going to give me that much thermal improvement? Anyone tried?

Also, with how close the components around the die are, I assume I need to isolate them from the Conductonaut just in case. But all I have handy here is duct tape, zip ties, and silicone gasket maker lol... let's not do that..


----------



## J7SC

Imprezzion said:


> Should I Conductonaut my 2080 Ti or not... like, how much would I realistically gain over PK-3? I am at 44-47 °C load now with the 380W BIOS @ 2100-2085 MHz under a Kraken X52. I would like to stay below 45 °C on the XOC BIOS without having the fans at full blast. Is Conductonaut going to give me that much thermal improvement? Anyone tried?
> 
> Also, with how close the components around the die are, I assume I need to isolate them from the Conductonaut just in case. But all I have handy here is duct tape, zip ties, and silicone gasket maker lol... let's not do that..


...with any LM, isolating the nearby components is a very good idea, even if it's just w/ nail polish instead of the professional options. I have also used LM on GPUs in the past with an outside 'barrier' of non-conductive MX-4. Finally, the orientation of your 2080 Ti (horizontal vs. vertical) can also play a small role re. isolation requirements


----------



## Imprezzion

Horizontal mount, but the Kraken has quite high mounting pressure, and getting it into the case requires some weird maneuvers; I've got to flip the card a few times, so I'm way too careful for that.

I can try the TIM barrier, though; I have plenty of cheap(er) TIM here. Conformal coating is super hard to get here for some reason, as is "liquid electrical tape," which I used to use in the past when volt-modding older stuff..

EDIT: went to get a baseline for XOC 1.125V temps using PK-3 first. 2115-2100 core, 7900 memory, NVIDIA low latency ultra, DX12 Division 2 @ 1080p all maxed. Temps were 49-51 °C at 70% fan speed on the rad fans (1450 RPM) with 100% pump speed. Power draw was an absolutely insane 430-460W continuously, measured in software, but I tend to believe it temp-wise, as the 380W BIOS never gets over 44 °C at the same fan speed.

On one hand I'm like, my God, that power draw is insane; on the other hand, the VRM is properly cooled (the IR thermometer showed about 65-70 °C surface temps, which is fine), and the card seems to love it. It's stable, usage nice and flat at 100%, and with NVIDIA low latency Ultra the frame times are super low and consistent..

Well, 49-51 °C is the baseline at least. We'll see what Conductonaut can do about that. If anything.


----------



## tps3443

Imprezzion said:


> Horizontal mount, but the Kraken has quite high mounting pressure, and getting it into the case requires some weird maneuvers; I've got to flip the card a few times, so I'm way too careful for that.
> 
> I can try the TIM barrier, though; I have plenty of cheap(er) TIM here. Conformal coating is super hard to get here for some reason, as is "liquid electrical tape," which I used to use in the past when volt-modding older stuff..
> 
> EDIT: went to get a baseline for XOC 1.125V temps using PK-3 first. 2115-2100 core, 7900 memory, NVIDIA low latency ultra, DX12 Division 2 @ 1080p all maxed. Temps were 49-51 °C at 70% fan speed on the rad fans (1450 RPM) with 100% pump speed. Power draw was an absolutely insane 430-460W continuously, measured in software, but I tend to believe it temp-wise, as the 380W BIOS never gets over 44 °C at the same fan speed.
> 
> On one hand I'm like, my God, that power draw is insane; on the other hand, the VRM is properly cooled (the IR thermometer showed about 65-70 °C surface temps, which is fine), and the card seems to love it. It's stable, usage nice and flat at 100%, and with NVIDIA low latency Ultra the frame times are super low and consistent..
> 
> Well, 49-51 °C is the baseline at least. We'll see what Conductonaut can do about that. If anything.



That voltage is pretty extreme. Why are you sending so much juice to the card?

Any more than 1.093V doesn't help me. I just require lower temperatures to achieve more MHz.

Running at 34-36 °C will maintain 2,145 MHz in Cyberpunk 2077 forever, with hour-after-hour stability.

Did you try liquid metal on the die? Are temps any better? Usually with a GPU you're already running direct-die, so there's little to no benefit over normal thermal paste.

I am working on possibly getting 2,160 MHz daily-stable. I'm considering a MO-RA3 radiator, and I think if I can get down below 34 °C, I'm there.

Liquid metal might knock 1 °C off. But it's going to take several applications before you reach an equilibrium, once the LM has soaked fully into your AIO cold plate.

Also, just use Kapton tape around the die. Totally safe. I have done this too many times to count.


----------



## Falkentyne

Imprezzion said:


> Should I Conductonaut my 2080 Ti or not... like, how much would I realistically gain over PK-3? I am at 44-47 °C load now with the 380W BIOS @ 2100-2085 MHz under a Kraken X52. I would like to stay below 45 °C on the XOC BIOS without having the fans at full blast. Is Conductonaut going to give me that much thermal improvement? Anyone tried?
> 
> Also, with how close the components around the die are, I assume I need to isolate them from the Conductonaut just in case. But all I have handy here is duct tape, zip ties, and silicone gasket maker lol... let's not do that..


Use nail polish to coat the resistors around the die, then use a foam dam barrier (polyurethane air-conditioner foam, something very highly compressible and thin like this):


https://www.amazon.com/gp/product/B002GKC2US/



Amazing that this used to cost only $4.

You can also use Super 33+ electrical tape instead of the nail polish, but you need to make sure the seal is perfect.
Kapton tape works ONLY if it's high quality (meaning 3M high-temp polyimide tape; you can use 1/4" strips and lay them side by side, but 3/4" is better, just a lot more expensive. I would NOT use any Kapton tape except 3M, period).

If you want to be OCD, use three coats of nail polish first, then Super 33+ (or 3M Kapton) tape on top afterwards.


----------



## Imprezzion

Voltage does help my card. It's really not a good bin / sample at all. There's no way, even with seriously low temps into the 30s, that the card will do 2100 or higher stable. It runs for a good while but throws random driver crashes or DirectX crashes every 10-30 minutes.

I tested this by dumping the rad into a bucket of ice water (circulated through the rad with a pond pump), with load temps in Superposition 1080p Extreme looped sitting at 30-32 °C, and even then it crashed about 15 minutes into a run @ 1.093V 2115 MHz. It will sometimes let me bench as high as 2190 MHz, but it's incredibly unstable and fails to complete a run 50% of the time.

I pulled every trick out of the book short of shunt-modding it, and it just will not do it. It's rock solid at 2070 MHz 1.093V, and most of the time it will hold 2085 MHz, but I have had a few crashes at 2085 MHz in Cyberpunk and World of Tanks (FPS uncapped, so it runs at like 300-400 FPS at a very high sustained load).

I also tested the reverse: can it sustain 2070 MHz at higher temps? Yes. Temperature doesn't seem to matter at all; it was just as stable at 70-72 °C as it was at 44 °C. (I did have to apply the overclock after heating up, otherwise it would idle at 2160 MHz, which is terribly unstable.) It will hold it just fine for hours in Division 2 with the rad fans at 500 RPM and the pump at the lowest possible PWM.

I have not done liquid metal yet, purely because I had every intention to sell the card as-is and get a 3080. But after ordering over 5 grand worth of cards from "in stock" shops and having every single order cancelled, I realize that isn't going to happen any time soon, so I just might do it tomorrow. I just have to figure out a way to cover up the components around the die... I have Conductonaut left over from the CPU mount, so might as well..

Maybe on XOC 1.125V temps make more of a difference, and I might get away with 2130 @ <45 °C? We'll see. It's just that with a 460W continuous load the Kraken is really starting to struggle, needing a LOT of fan speed to keep the water temps under control and not rising slowly... at least 70% / 1450 RPM, but really it just wants the full 2050-2100 RPM the fans will do..


----------



## kithylin

A friend of mine is considering the XOC BIOS on their Founders Edition RTX 2080 Ti, but they found an NVIDIA developer blog post from a while back stating that if you increase the voltage on the RTX 2080 Ti above the limits in most BIOSes, the card would only last about 12 months before sustaining damage. Is there anyone here who has used the XOC BIOS on their card with higher voltages for more than 12 months who could comment? Is there any truth to that statement?


----------



## Imprezzion

I have run the card on the XOC BIOS at 1.125V on and off for over a year, flashing between that BIOS and the 380W KFA2 one, and no ill effects yet. Then again, NVIDIA's warning probably assumes normal air cooling, in which case the card would run well above 80 °C, not sub-50 under "water," even if this is just an AIO lol.

A little naked shot before going liquid metal. How do you like my handiwork with random heatsinks for the VRM and VRAM? I had to modify a lot of them to fit the massive Kraken G12 bracket, but it works lol.

Don't mind the random EK GPU 4-pin to PWM cable hanging from the card... it only has the RPM tach signal wire hooked up on the other end, so I can monitor the radiator fan RPM through the card lol.


----------



## kithylin

Imprezzion said:


> I have run the card on the XOC BIOS at 1.125V on and off for over a year, flashing between that BIOS and the 380W KFA2 one, and no ill effects yet. Then again, NVIDIA's warning probably assumes normal air cooling, in which case the card would run well above 80 °C, not sub-50 under "water," even if this is just an AIO lol.
> 
> A little naked shot before going liquid metal. How do you like my handiwork with random heatsinks for the VRM and VRAM? I had to modify a lot of them to fit the massive Kraken G12 bracket, but it works lol.
> 
> Don't mind the random EK GPU 4-pin to PWM cable hanging from the card... it only has the RPM tach signal wire hooked up on the other end, so I can monitor the radiator fan RPM through the card lol.


If you're asking me personally, then my personal opinion is that I really dislike it when people do that sort of thing to a video card. Now that you've put those little heatsinks on it like that, it will be rather difficult for anyone to safely convert it back to air cooling or to a full-cover water block without damaging the card. And these are still really expensive cards, so seeing that sort of modification leaves me feeling a little sad inside. But that's just me.


----------



## Imprezzion

Yeah, well, risk taken, bad at math. Card is dead. I applied nail polish and a strip of thin tape over the components around the die, very carefully applied the Conductonaut, and remounted it; no video output, code 98 on the mobo debug. The other GPU (R9 280X) works fine.

Removed the block again; no spillage at all, but not great contact either, so I re-did it, and still 98, no video.

Somehow the card got damaged in some weird way.

I'll try a remount with normal paste and a deep clean with 96% alcohol and see what happens..

EDIT: works fine (at least in the BIOS) after a deep clean with some 96% denatured alcohol and cotton swabs. Lots of cotton swabs... and just applying normal paste (PK-3). Dodged a bullet there.

I have never had any good luck with liquid metal on GPUs... I've used it on CPUs problem-free ever since Liquid Ultra came out, but GPUs are a disaster...

Now I've got to redo all the fan wiring and the RGB wiring and see what happens temp-wise. If it's worse, I guess I have to remount yet again..


----------



## tps3443

Imprezzion said:


> Yeah, well, risk taken, bad at math. Card is dead. I applied nail polish and a strip of thin tape over the components around the die, very carefully applied the Conductonaut, and remounted it; no video output, code 98 on the mobo debug. The other GPU (R9 280X) works fine.
> 
> Removed the block again; no spillage at all, but not great contact either, so I re-did it, and still 98, no video.
> 
> Somehow the card got damaged in some weird way.
> 
> I'll try a remount with normal paste and a deep clean with 96% alcohol and see what happens..
> 
> EDIT: works fine (at least in the BIOS) after a deep clean with some 96% denatured alcohol and cotton swabs. Lots of cotton swabs... and just applying normal paste (PK-3). Dodged a bullet there.
> 
> I have never had any good luck with liquid metal on GPUs... I've used it on CPUs problem-free ever since Liquid Ultra came out, but GPUs are a disaster...
> 
> Now I've got to redo all the fan wiring and the RGB wiring and see what happens temp-wise. If it's worse, I guess I have to remount yet again..


Man lol. Is your card dead? Or is it working?! 

I wouldn’t bother with LM on the GPU.

Hope your card's ok.


----------



## J7SC

Imprezzion said:


> Yeah well, risk taken, bad at math. Card is dead. I applied nail polish and a strip of thin tape over the components around the die, very carefully applied the Conductonaut and remounted it, no video output code 98 on mobo debug. Other GPU (R9 280x) works fine.
> 
> Removed the block again, no spillage at all but not great contact either so re-did it again, still 98 no video.
> 
> Somehow the card got damaged in some weird way.
> 
> I'll try a remount with normal paste and deep clean with alcohol 96% and see what happens..
> 
> EDIT: works fine (at least in the BIOS) after a deep clean with some 96% denatured alcohol and cotton swabs. Lots of cotton swabs... And just applying normal paste (PK-3). Dodged a bullet there.
> 
> I have never had any good luck with liquid metal on GPU's.. used it on CPU's problem free ever since Liquid Ultra came out but GPU's are a disaster...
> 
> Now I gotta redo all the fan wiring and the RGB wiring and see what happens temp-wise. If it's worse I guess I have to remount yet again..


...close call! Sometimes even slightly uneven mounting pressure can cause '98', but impossible to say in this case... main thing is that you seem to have dodged a bullet and are seeing a video signal again. Hopefully it will also load '3D' accelerated apps fine... hope it all works out
...


----------



## tps3443

kithylin said:


> A friend of mine is considering the XOC bios on their founder's edition RTX 2080 Ti, but they found an Nvidia Developer blog post from a while back stating that if we increase the voltage on the RTX 2080 Ti above the limits in most BIOSes, the video card would only last about 12 months before sustaining damage. Is there anyone in here that has used the XOC bios on their card with higher voltages for more than 12 months that could comment on it? Is there any truth to that statement by the developer?


The reference 2080 Ti PCB can handle 600+ watts daily just fine. I have been abusing my 2080 Ti for quite a while now, and that goes for my last three 2080 Tis too. I have countless 500+ watt runs on it, and hours and hours of Metro at nearly 500 watts of power draw while using the XOC bios.

The only caution I will advise about the Galax XOC bios is that you will draw about 100-110+ watts from your PCIe slot. I can't remember how much GPU-Z reported, but it was high. It was so much power consumption I was getting system shutdowns. Fortunately my X299 Dark motherboard has an additional 6-pin PCIe power connector, strictly for supplying extra juice to the PCIe slots when multiple GPUs are installed. Once I plugged that 6-pin power connector in, it fixed my shutdown issues altogether, even with a single 2080 Ti.
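For anyone wondering how the slot ends up over spec, here's a rough back-of-the-envelope sketch. All wattage numbers are illustrative assumptions, not measurements: the point is simply that whatever the auxiliary connectors don't carry has to come through the slot.

```python
# Rough illustration (assumed numbers): why a very high BIOS power limit
# can push the PCIe slot rail well past its nominal 75 W budget.

PCIE_SLOT_SPEC_W = 75    # nominal PCIe slot 12V power budget
EIGHT_PIN_SPEC_W = 150   # nominal budget per 8-pin connector

def slot_draw(total_board_w, per_eight_pin_w, n_connectors=2):
    """Whatever the 8-pin connectors don't carry comes through the slot."""
    return total_board_w - n_connectors * per_eight_pin_w

# Example: ~530 W total board power with each 8-pin carrying ~210 W
# (itself over spec) leaves ~110 W on the slot, over the 75 W budget.
print(slot_draw(530, 210))
```

That extra slot current is exactly what a board-side auxiliary 6-pin (like the one on the X299 Dark) is there to relieve.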

Maybe you'll damage a motherboard, worst case scenario. But as for damaging a 2080 Ti, I think you'll be just fine. They're tough as nails. I will say I abuse my card pretty hard, but it runs cool, and that's all that really matters. My temperatures are always low, so I trust any of these "compatible XOC" BIOSes on my 2080 Ti for long-term daily usage.

The difference between 1.093V and 1.125V is really negligible. That's like running 1.062V instead of 1.093V. My card has both shunts soldered in place with a Galax 380 watt bios, so it can sustain around 530 watts of power usage. I have never had an issue with this combination, or any other bios.

If your buddy can keep 1.125V cool, then sure, go for it!


----------



## tps3443

delete.


----------



## Imprezzion

J7SC said:


> ...close call ! Sometimes, even slightly uneven mounting pressure can cause '98', but impossible to say in this case...main thing is that you seemed to have dodged a bullet and are seeing a video signal again. Hopefully it will also load '3D' accelerated apps fine...hope it all works out
> ...


Been playing Division 2 all evening @ 2100 / 7900, 1.125v with PK-3. Temps 1-2C lower than before this whole fiasco. 48-49C @ 70% / 1450 RPM.










At first boot it scared me a lot, as at idle it was at like 68C, so I was convinced I broke the die or something. But no, the pump had gotten an air bubble stuck in it. I noticed this when I saw the pump RPM at 4700-4800 instead of the normal 2900. I shook stuff around a bit, the bubble came loose, and it dropped straight to 26C.

I do think it was mount-related, as the Kraken is a terribly complicated system to mount properly, especially with all those heatsinks in the way..


----------



## J7SC

Imprezzion said:


> Been playing Division 2 all evening @ 2100 / 7900 1.125v with PK-3. Temps 1-2c lower then before this whole fiasco. 48-49c @ 70% 1450RPM.
> 
> View attachment 2476122
> 
> 
> At first boot it scared me a lot as idle it was at like 68c so I was convinced I broke the die or something but no, the pump had gotten a air bubble stuck in it. I noticed this when I saw the pump RPM being 4700-4800 RPM and not the normal 2900. I shook stuff around a bit and it came loose and dropped straight to 26c.
> 
> I do think it was mount related as the Kraken is a terribly complicated system to properly mount especially with all those heatsinks in the way..


----------



## EarlZ

I just noticed that HWiNFO64 has some voltage readings for the GPU, and I am wondering if they are accurate, as they are very low. This is with a Gigabyte Gaming OC 2080 Ti. I am using a Cooler Master V1000 that I've had since 2014.











And I see that my board is reading the 12V rail at this value:


----------



## Falkentyne

EarlZ said:


> I just noticed that HWinfo64 has some voltage readings on the GPU and I am wondering if these are accurate as they are very low. This with a Gigabyte Gaming OC 2080Ti, I am using a Cooler Master V1000 and I've bought this since 2014.
> 
> 
> View attachment 2476157
> 
> 
> And I see that my board is detecting the 12v rail as this value
> 
> View attachment 2476158


Probably an error. Contact Mumak (Martin) to have him correct the values. It seems off by 1V; I've seen similar offset errors before.


----------



## Imprezzion

J7SC said:


>


There's one thing I kinda don't get.. The screenshot I showed is perfectly stable in Division 2, but it will almost instacrash in CP77..

So, XOC BIOS, 1.125v, 2115-2100MHz core, 7900 memory. It draws "23%" power, which is 460W theoretically if that's accurate at all, and gets up to like 50C. If I load up a CP77 save and just stand still where it loads, it's at 75 FPS but crashes quite fast.

Now, I went to the normal 380W KFA2 BIOS and set 2085-2070MHz @ 1.093v. This draws about 340-360W according to HWiNFO64, and the percentage MSI AB shows backs this up, so much, much lower. Temps are also way down, to like 44C, so it is definitely consuming way less power. Load the same save and stand still: 75 FPS.. how...

I verified this with both 3DMark Time Spy and Superposition, with both BIOSes set to run 2085/7900 under load. The XOC 1.125v BIOS draws a TON more power and gets way hotter, but the scores are identical.. well within margin of error / run-to-run variance... In Time Spy it's 16885 for the XOC, 16832 for the 380W.. Superposition is 10019 for the XOC, 10002 for the 380W.. but it stays way cooler and draws a ton less power.. and game FPS is identical.. and it doesn't crash like mad in CP77 lol.
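One thing that makes readings like the "23%" above less confusing: Afterburner and GPU-Z report power as a percentage of the BIOS power limit, so the same wattage reads very differently on different BIOSes. A quick sketch (the 2000 W limit for the XOC BIOS is an assumption for illustration):

```python
# Convert a reported power percentage into watts, given the power limit
# baked into the flashed BIOS. The 2000 W XOC limit is an assumed figure.

def percent_to_watts(percent, bios_limit_w):
    return percent * bios_limit_w / 100

print(percent_to_watts(23, 2000))  # XOC BIOS: a "23%" reading is ~460 W
print(percent_to_watts(95, 380))   # 380 W BIOS: a "95%" reading is ~361 W
```

So a tiny-looking percentage on an XOC BIOS can mean more actual draw than a near-maxed percentage on a stock-limit BIOS.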


----------



## J7SC

I don't think there's anything wrong with your card! Depending on the settings used, CP 2077 is quite a GPU torture chamber. While I can run 2175 or even 2190 in other apps and games with my top (heavily w-cooled) card, CP 2077 is 15-30 MHz below that, at least at 4K / RTX ultra


----------



## EarlZ

Falkentyne said:


> Probably an error. Contact Mumak (Martin) for him to correct the values. Seems off by 1v. I've seen similar offset errors before.


Thanks, I just posted this on his forums.


----------



## tps3443

Imprezzion said:


> There's one thing i kinda don't get.. The screenshot I showed is perfectly stable in Division 2 but it will almost instacrash in CP77..
> 
> So, XOC BIOS, 1.125v 2115-2100Mhz core 7900 memory. It draws "23%" power which is 460w theoretically if it's accurate at all and gets up to like 50c. If I load up a CP77 save and just stand still where it loads it's at 75 FPS but crashes quite fast.
> 
> Now, I went to the normal 380w KFA2 BIOS and set 2085-2070Mhz @ 1.093v. This draws about 340-360w according to HWINFO64 and the percentage MSI AB shows backs this up so much much lower. Temps are also way down to like 44c so it is definitely consuming way less power. Load the same.save and stand still, 75 FPS.. how...
> 
> I verified this with both 3DMark Time Spy and Superposition with both BIOS set to run 2085/7900 load. The XOC 1.125v BIOS draws a TON more power and gets way hotter but the scores are identical.. well within margin of error / run to run variance... In Time Spy it's 16885 for the XOC, 16832 for the 380w.. Superposition is 10019 for the XOC 10002 for the 380w.. but it stays way cooler and draws a ton less power.. and game FPS is identical.. and it doesn't crash like mad in CP77 lol.



Are you using an undervolt profile or just setting a standard offset?

Our 2080 Tis have an internal clock and an external clock.

Your card may say it's running 2,100, but it could actually only be running 2,070MHz or even less.

Another member on here set an undervolt of 2,100MHz at 1.000V and the card was actually only running 2,025MHz internally.


----------



## Falkentyne

tps3443 said:


> Are you using a undervolt profile or setting just a standard offset?
> 
> Our 2080Ti’s have an internal clock and external clock.
> 
> You card may say it’s running 2,100. But it could actually only be running 2,070mhz or something or even less.
> 
> 
> Another member on here set a undervolt of 2,100Mhz at 1.000MV and the card was actually only running 2,025Mhz internally.


Did you see my last message on OCN in the dt channel?


----------



## Imprezzion

tps3443 said:


> Are you using a undervolt profile or setting just a standard offset?
> 
> Our 2080Ti’s have an internal clock and external clock.
> 
> You card may say it’s running 2,100. But it could actually only be running 2,070mhz or something or even less.
> 
> 
> Another member on here set a undervolt of 2,100Mhz at 1.000MV and the card was actually only running 2,025Mhz internally.


I set it for both BIOSes like this: I apply an offset first to raise the curve to 30MHz under what I want it to be, then drag the 1.093 or 1.125v point to where I want it and hit apply. For the XOC this is +90 and then 2130 @ 1.125v at idle temps. For the normal 380W it's +135 and then 2115 @ 1.093v at idle temps.

I read about the internal clock thing but I never checked it in real time on my card.


----------



## EarlZ

@Falkentyne

I get the same low-value readings from GPU-Z, which Martin suggested checking. At 10.9-11.1V, shouldn't this cause massive instability?


----------



## Medizinmann

Imprezzion said:


> There's one thing i kinda don't get.. The screenshot I showed is perfectly stable in Division 2 but it will almost instacrash in CP77..
> 
> So, XOC BIOS, 1.125v 2115-2100Mhz core 7900 memory. It draws "23%" power which is 460w theoretically if it's accurate at all and gets up to like 50c. If I load up a CP77 save and just stand still where it loads it's at 75 FPS but crashes quite fast.
> 
> Now, I went to the normal 380w KFA2 BIOS and set 2085-2070Mhz @ 1.093v. This draws about 340-360w according to HWINFO64 and the percentage MSI AB shows backs this up so much much lower. Temps are also way down to like 44c so it is definitely consuming way less power. Load the same.save and stand still, 75 FPS.. how...
> 
> I verified this with both 3DMark Time Spy and Superposition with both BIOS set to run 2085/7900 load. The XOC 1.125v BIOS draws a TON more power and gets way hotter but the scores are identical.. well within margin of error / run to run variance... In Time Spy it's 16885 for the XOC, 16832 for the 380w.. Superposition is 10019 for the XOC 10002 for the 380w.. but it stays way cooler and draws a ton less power.. and game FPS is identical.. and it doesn't crash like mad in CP77 lol.


I would say this is a story of diminishing returns....

Meaning – much more power draw for negligibly better results – as the temps go up too fast, preventing higher boost frequencies.

If you can get temps down – like below 36°C – you might see more benefit. Well, it's an XOC BIOS meant for LN2 OC...

Best regards,
Medizinmann


----------



## Imprezzion

Medizinmann said:


> I would say - this is a story of diminishing returns....
> 
> Meaning – much more power draw but negligible better results – as the temps go up to fast - preventing higher boost frequencies.
> 
> If can get temps down - like below 36°C - you might see more benefit - well it's a XOC-BIOS meant for LN2-OC...
> 
> Best regards,
> Medizinmann


Technically I can for benching, just put the rad in an ice bucket, but for 24/7 gaming I can't get lower than about 40-42C even with the fans and pump at 100%, and that has zero benefit over running a nice and quiet 47-50C, as it doesn't clock any higher stable at 42C.


----------



## tryout1

tps3443 said:


> Are you using a undervolt profile or setting just a standard offset?
> 
> Our 2080Ti’s have an internal clock and external clock.
> 
> You card may say it’s running 2,100. But it could actually only be running 2,070mhz or something or even less.
> 
> 
> Another member on here set a undervolt of 2,100Mhz at 1.000MV and the card was actually only running 2,025Mhz internally.


Yeah, that happens if you drag the curve wrong in Afterburner, for example. Basically some people leave the offset at 0 and just raise the single point they want, like 2085 @ 1v. To fix this you have to offset the whole curve and then tinker with the voltage point you want.

Thankfully 2070/8300 @ 1v is perfectly stable so far for me in CP2077; the most I could squeeze out was 2160 at 1.093v.

Protip though, which I still find amusing and almost nobody knows: click the point you want to modify in the curve editor, press and hold Shift, drag your mouse all the way to the right, release Shift, then press Enter to set the offset directly (or Shift+Enter to set the core clock) and boom, easy straight line. Sometimes you have to press Enter twice.
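For the curious, what that "straight line" flattening amounts to is clamping every curve point at or above your chosen voltage to the target clock, so the card never requests more voltage than you picked. A toy model (not Afterburner's actual internals; the point values are made up for illustration):

```python
# Conceptual model of flattening an Afterburner V/F curve: points at or
# above the chosen voltage are clamped to the target clock, capping the
# voltage the card will ever request under load.

def flatten_curve(curve, target_mv, target_mhz):
    """curve: list of (voltage_mV, clock_MHz) points, ascending voltage."""
    return [(mv, target_mhz if mv >= target_mv else mhz) for mv, mhz in curve]

stock = [(900, 1905), (1000, 1995), (1093, 2055), (1125, 2085)]
flat = flatten_curve(stock, 1000, 2070)
print(flat)  # everything from 1000 mV up now runs the target clock
```

If instead you only drag the single point up and leave the rest of the curve alone, the card can silently fall back to a lower point, which is the internal-vs-reported clock mismatch described above.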


----------



## EarlZ

Nice tip on the OC curve, gonna try that out


----------



## fluidzoverclock

Heya, has anyone here got an MSI Trio X 2080 Ti? By any chance does it show 8-pin #1 power as 0.0W in GPU-Z when the card is idle and the NVIDIA power plan is set to optimal/adaptive? Or does it show a slightly higher idle draw, 0.1W+ for example? I understand this happens because of the card changing power states, but do we all see 0.0W?

I have highlighted it in green in the image below.


----------



## J7SC

My 2080 Tis have a new 'playmate' now - picked up a Strix 3090 (at MSRP, no tariff). It likes to drink the watts, though, even on the stock BIOS








As I run two separate 4K systems, the 2080 Tis will stay in service. Flight Simulator 2020 in particular will be an interesting comparison: 1x 3090 vs 2x 2080 Ti in SLI-CFR...


----------



## tps3443

J7SC said:


> My 2080 Tis have a new 'playmate' now- picked up a Strix 3090 (at MSRP, no tariff). It likes to drink the watts, though, even on stock bios
> View attachment 2476869
> 
> As I run two separate 4K systems, the 2080 Tis will stay in service. Flight Simulator 2020 in particular will be an interesting comparison: 1x 3090 vs 2x 2080 Ti in SLI-CFR...


Yep, I had a buyer for my 2080 Ti FE lined up to hand over $800 cash, without the EKWB Quantum Vector waterblock; I was gonna re-install the Founders Edition air cooler for them. On top of the $800, I had set aside another $1,000 or so, so I had $1,800 total to work with for any 3090 model I could find. I was gonna solder the 5 ohm resistors and install a waterblock on it after getting it, of course.

I simply couldn't find one, and I wasn't gonna spend a penny more than $1,800.

So I have just decided to hang on to my 2080 Ti for a while. I mean, at a solid 2,145MHz/16,300MHz in Cyberpunk 2077 it still performs like a beast at 36C max temps.

I gave up on finding a 3090 and just decided it would be a waste.


----------



## Diesel Powered

So I have the Zotac ArcticStorm WB. It is a custom-PCB card. Is it possible to flash it with the 380W vBIOS? The card averages 35-41C while gaming and runs between 2040 and 2085MHz. Any help or advice on this would be appreciated.


----------



## J7SC

tps3443 said:


> Yep I had a buyer for my 2080Ti FE lined up to hand over $800 cash “Without EKWB Quantum Vector waterblock” I was gonna re-install the Founders Edition air cooler for them. And on top of the $800, I had set aside $1,000 bucks or so. So I had $1,800 to work with total for any 3090 model that I could find. I was gonna solder the 5 ohm resistors, and install a waterblock on it after getting it of course.
> 
> 
> I simply couldn’t find one, and I wasn’t gonna spend a penny more than $1,800.
> 
> So I have just decided to hang on to my 2080Ti for a while. I mean at a solid 2,145Mhz/16,300Mhz in Cyberpunk 2077 it still performs like a beast at 36C max temps.
> 
> I gave up on finding a 3090, and I just decided it would be a waste.


Makes sense - as mentioned, I am going to keep the 2080 Tis anyhow for my primary 4K setup, but I've wanted a powerful card for a big-screen TV (IPS HDR) in another room for almost a year now. I managed to get the Strix at MSRP (and no tariff here) more or less by happy accident... it also makes for a great space heater at now over 500W on the stock BIOS


----------



## JustinThyme

J7SC said:


> Makes sense - as mentioned, I am going to keep the 2080 Tis anyhow for my primary 4K setup, but I wanted to have a powerful card for a big-screen tv (IPS HDR) in another room for gaming for almost a year now. I managed to get the Strix at MSRP (and no tariff here) more or less by happy accident...also it makes for a great space heater at now over 500W / stock bios


Well, right now the only thing that beats a pair of 2080 Tis is a pair of 3090s. I'm holding out. Everything I have loaded supports SLI natively, or I can force it using NVIDIA Inspector. If it can't be run in SLI, I have no use for it.


----------



## J7SC

JustinThyme said:


> Well right now the only thing that beats a pair of 2080TIs is a pair of 3090s. Im holding out. Everything I have loaded supports SLI natively or I can force it using NVidia Inspector. If it cant be run in SLI I have no use for it.


...I've been SLI'ing since '12  And as mentioned, I'm keeping my pair of 2080 Tis in SLI, which have served me well for over two years now; the Strix 3090 is for a different purpose and system. That said, a single 3090 can beat 2x 2080 Tis in some apps, or match them at a net 250W less. But most important for me is the 24GB of usable VRAM (though 16GB would have been enough)


----------



## JustinThyme

For all those saying crypto mining is dead.....even the miners are getting desperate due to lack of GPUs









Chinese Miners Using RTX 30-Series Laptops for Mining Power: "Nothing is safe from cryptominers" (www.tomshardware.com)


----------



## Laithan

If manufacturers (stock levels) and retailers (scalping) don't do something significant to thwart these issues, this entire industry will continue to be held hostage... I am afraid this is only the beginning and it will just keep getting worse over time.

R.I.P. PC gaming and building


----------



## J7SC

Laithan said:


> If manufacturers (stock) and retailers (scalping) don't do something significant to thwart these issues this entire industry will continue to be held hostage... I am afraid that this is only the beginning and will just continue to get even worse over time.
> 
> R.I.P. PC gaming and building


...I'm beginning to feel even luckier re. my Strix 3090 find last week (at MSRP, no tariff), but anyway, yeah, this whole trend is not good for the industry as a whole. I'm not about to forecast mining coin values, not least as it is a bit of a manipulated casino, but the last time this became a really serious issue was following the sanctions on Russia re. Crimea and E. Ukraine. Huge movements of coin funds outside the regular banking channels (and with them, partial control) followed, especially via (tiny) Cyprus-based banks and to a lesser extent Malta, along with subsequent speculation bubbles. With Covid-19, a new administration with a different view on foreign funny money, ongoing political tensions (China, Russia) and also a big increase in public-sector indebtedness the world over, folks are looking for alternatives etc. The question is how long this will last, short or long.

The other underlying issue seems to be 'most eggs in one basket' (TSMC) when it gets to latest-gen CPU and GPU wafer production. Apart from Samsung in S. Korea, there seem to be an awful lot of eggs in one basket in Taiwan, in a part of the world that is seeing increased political and military tensions. TSMC deserves a lot of credit for developing the know-how (though many of the actual EUV machines are from places like Holland), and combined with Intel's miss on wafer shrinkage, this too is worrisome... it takes something like 4 years to put in a brand-new plant for this kind of chip wafer production.

Finally, there's Nvidia... one of my personal pet peeves was/is the games being played around the SLI-CFR drivers, which resulted in more demand for next-gen RTX 3K. The same may be taking place with the upcoming 'resizable BAR' drivers, which will only work on RTX 3K though apparently could work on RTX 2K if it were enabled in the driver (along with mobo UEFI firmware updates for both).

For now, I'm holding on to all my older but still decent hardware


----------



## gfunkernaught

I'm all for making money on the side, but the whole idea of "solving complex equations and getting rewarded" just sounds like a load of bs created by people that don't want to work for a living. The people with lots of extra cash buying up GPUs and PSUs to mine are the ones ruining our hobby, just like commercial drone pilots lobbying to keep the skies commercial. And there were those reckless pilots that helped that agenda. But back to mining: as long as mining can occur on current hardware, the PC gaming industry will die and streaming will take over. #murica


----------



## pewpewlazer

I'm sure the "mining" situation isn't helping with availability for us mere mortals who want to buy a single graphics card to play video games (or 3DMark) with. But it's not like there's an abundance of all other electronics out there. I'm pretty sure you can't mine Ethereum on a PS5 or Xbox Series S/X, but good luck finding those in stock anywhere. Zen 3 chips are still constantly out of stock everywhere, and we know people aren't scooping them up for their booming CPU based mining farms. Hell, even automobile manufacturers are having to cut production due to chip shortages (in the middle of a _pandemic_ when tons of people are locked down and "working from home" nonetheless).


----------



## gfunkernaught

My understanding of how mining works is that if your hardware is x86, it can mine. So a PS5 can technically mine.

Again, I have no problem with people doing what they need to do to make money to feed themselves and their families. Just not those who _already have_ tons of money building giant warehouses for mining. People that lost their jobs due to the lockdown, maybe they spent their savings on mining rigs; I can see that. But some guy that didn't want to work or actually do something for his community just decides to build a massive mining operation. Idk... I never trusted the whole bitcoin craze. The fact is, it is basically meaningless unless it can be exchanged for real-world currency. I guess if people believe it is valuable, then it will be. I want to know how many gallons of oil you can buy with bitcoin. Or other natural resources, for that matter.


----------



## Laithan

Does it really require explanation? GPU shortages are tied to mining, and scalpers are hitting multiple recently released tech products, GPUs being one of them.


----------



## gfunkernaught

A perfect storm.


----------



## edhutner

I saw an article saying some have started mining with laptops.. As for consoles, it probably wouldn't be hard to mine with them either. They have browsers, and that should be enough to open a web page with a JavaScript miner...


----------



## gfunkernaught

I feel bad for those laptops. They could have been in the hands of people that would actually use them for work or gaming or both. Now they're just being chewed up for mining. It's a shame.


----------



## JustinThyme

J7SC said:


> ...I've been SLI'ing since '12  And as mentioned, I'm keeping my pair of 2080 Ti / SLI which have served me well for over two years now; the Strix 3090 is for a different purpose and system. That said, a single 3090 can beat 2x 2080 Tis in some apps, or match them but at a net 250 W less. But most importantly for me is the 24 GB of usable VRAM (though 16 GB would have been enough)


The only thing a single 3080 or 3090 can beat a pair of 2080 Tis at is games where you can't force an SLI profile with NVIDIA Inspector. Everywhere else they fall behind: GPU compute and all things SLI. The closest I've seen is a 3090 that was 7K points behind my pair of 2080 Tis in Time Spy. I hear a lot of crying about how SLI is dead, and it's generally coming from the cheap seats. There's an entire forum dedicated to sharing the codes to force SLI on just about any game. There are a few I run that way, but most of what I have supports it natively. Still there with every NVIDIA driver update. The only bench that doesn't support it is Superposition, and it can be forced as mentioned above. Crappy bench anyway, too limited in choices. I run a 2K 1440p monitor because there are no widescreens that support 4K, and with Superposition you get a choice of 1080p or 4K; even Heaven has more options.

I'll be waiting until something better comes along. The biggest reason I said two 3090s is that the 3080 can't run SLI. Maybe when the Tis are released. There's just nothing there to make me go bat-sheet crazy and pay 150% or more to a scalper, and I'm not going to leave my machine up with a bot running trying to find one at MSRP when I'm happy with what I have. It takes everything I throw at it and begs for more. Ultra settings barely get them warmed up; extended runs in the summer and I might hit 40C.


----------



## gfunkernaught

Wait can you force SLI with two 3090s as well? I didn't know SLI could be forced.


----------



## J7SC

gfunkernaught said:


> Wait can you force SLI with two 3090s as well? I didn't know SLI could be forced.


...you can 'force' SLI via NVInspector on some apps but not others... I use NVInspector to force several apps for my SLI 2080 Tis (w/ CFR wherever possible), but it depends on the app... And yes, theoretically, you should be able to force a pair of 3090s as well, again depending on the app.


----------



## kithylin

JustinThyme said:


> The only thing that a single 3080 or 3090 can beat a pair of 2080s at is games that you can’t force SLI profile on with Nvidia Inspector. All else they fall behind. GPU compute and all things SLI. Closest I’ve seen is a 3090 that was 7K points behind my pair of 2080Ti in Time spy. I hear a lot of crying of how SLI is dead and it’s generally coming from the cheap seats. There’s an entire forum dedicated to sharing the codes to force SLI on just about any game. There’s a few I run that way but most of what I have supports it natively. Still there with every Nvidia driver update. Only bench that doesn’t support it is Superposition and it can be forced as mentioned above. Crappy bench anyway. Too limited in choices. I run 2K 1440P monitor because there are no wide screens that support 4K. With that you get a choice of 1080P or 4K, even Heaven has more options.
> 
> I’ll be waiting until something better comes along. Biggest reason I said two 3090s is the 3080 can’t run it. Maybe when the Tis are released. Just nothing there to make me go bat sheet crazy and paying 150% or more to a scalper and I’m not going to leave my machine up with a bot running trying to find one at MSRP when I’m happy with what I have. Takes everything I throw at it and begs for more. Ultra settings barely get them warmed up. Extended runs in the summer and I might make 40C.


Could you please share a link to this forum that shares NVIDIA profiles? I would greatly like to find and read it.

Additionally, there is this: Nvidia: we won't create any SLI driver profiles after Jan 2021. Nvidia will no longer be creating new SLI profiles for any games; it's up to the community to figure out our own profiles if we want it to work now. So could you please share a link to that NVIDIA Inspector profile forum you spoke of?


----------



## Krzych04650

kithylin said:


> Could you please share a link to this forum that shares nvidia profiles? I would greatly like to find and read it.
> 
> Additionally there is this: Nvidia: we won't create any SLI driver profiles after Jan 2021 Nvidia will no longer be creating any new SLI profiles for any games. It's up to the community and figuring out our own profiles if we want it to work now. So Could you please share a link to that nvidia inspector profile forum you spoke of?


Regular profiles - 3DCenter Forum - SLI - Kompatibilitätsbits - Sammelthread (forum-3dcenter.org)
CFR - 3DCenter Forum - Einzelnen Beitrag anzeigen - SLI - Kompatibilitätsbits - Sammelthread (forum-3dcenter.org)

That CFR list is not all that good though; there are many, many more games that work, and also a few games on this list that are marked as functional aren't actually working (for example due to frequent crashing)


----------



## J7SC

Krzych04650 said:


> Regular profiles - 3DCenter Forum - SLI - Kompatibilitätsbits - Sammelthread (forum-3dcenter.org)
> CFR - 3DCenter Forum - Einzelnen Beitrag anzeigen - SLI - Kompatibilitätsbits - Sammelthread (forum-3dcenter.org)
> 
> That CFR list is not all that good though, there are many many more games that work and also a few games from this list that are listed as functional aren't actually working (for example due to frequent crashing)


Great list!... FYI, for me personally, MSFS2020 works great with CFR (though I don't think CFR would work with 3090s given the age of the last CFR driver, but not sure). I even got CFR working in Cyberpunk 2077 'a bit', but it is hit and miss... gameplay itself worked, but trying to change graphics settings crashes the system. Other CFR titles work as well, but those two are where I spent my (sparse) game time.

At the end of the day, I think CFR was really a precursor to / dev tool for mGPU tiling down the line. Either way, with 2x 4K stations in separate rooms, one with dual 2080 Tis, the other with a single 3090 (at least for now), I am enjoying the best of both worlds.


----------



## pewpewlazer

JustinThyme said:


> I hear a lot of crying of how SLI is dead and it’s generally coming from the cheap seats. There’s an entire forum dedicated to sharing the codes to force SLI on just about any game. There’s a few I run that way but most of what I have supports it natively.


How many DX12 or Vulkan games have you successfully "forced" SLI on? How many games have you played with RTX enabled while using SLI?

The reality is, SLI *IS* dead. CFR was its saving grace, but that too was killed off.

And I'm not some _cheap seat_ who can't afford a second graphics card. I gladly would have bought a second card if it worked. But it doesn't. Not in the games that need it most.


----------



## JustinThyme

pewpewlazer said:


> How many DX12 or Vulkan games have you successfully "forced" SLI on? How many games have you played with RTX enabled while using SLI?
> 
> The reality is, SLI *IS* dead. CFR was its saving grace, but that too was killed off.
> 
> And I'm not some _cheap seat_ who can't afford a second graphics card. I gladly would have bought a second card if it worked. But it doesn't. Not in the games that need it most.


I could list the 50+ that I always run with RTX enabled, split 50/50 between DX11 and DX12, but it's a waste of time for someone who capitalized and bolded that SLI is dead. What I'm telling you is it's far from dead, and I don't bother with titles that either (a) don't support it natively or (b) I can't force. Look a few posts above at the forums listed where it can be forced. Hey, it's not my problem that you choose to play games where it's not supported due to lazy devs or can't be forced. Have you even tried a second card, or are you going purely on assumptions from what you read from jackwagonsRUS?

I run two cards and I run them in SLI successfully, no reason to be hatin'. If you wish to run one, that's your choice. I choose to run two. At least you have a decent monitor, but I guess a single card can't keep up with a 200Hz refresh rate.

The other thing you miss completely by thinking about nothing but gaming is compute power. There are other uses for GPUs than gaming, and I use that too. Mostly 3D CAD rendering.

So stick with your single card if that's what makes you happy, but it's not nice to be dissing folks who choose to run two or more that are fully supported for what they play and what they do. It's not like I'm the only one; browse the forums listed above, and that's just a small dent in the number of people who run SLI.

Instead of insisting it's dead, search for what's natively supported. You may actually be surprised. A lot of top-tier titles.


----------



## kithylin

pewpewlazer said:


> How many DX12 or Vulkan games have you successfully "forced" SLI on? How many games have you played with RTX enabled while using SLI?
> 
> The reality is, SLI *IS* dead. CFR was its saving grace, but that too was killed off.
> 
> And I'm not some _cheap seat_ who can't afford a second graphics card. I gladly would have bought a second card if it worked. But it doesn't. Not in the games that need it most.


The entire world isn't about "current" titles that support DirectX 12 or Vulkan. Oftentimes people only speak of "current AAA games" that support SLI, which is a small and short list (Linus Tech). Sometimes folks forget that the NVIDIA drivers still include SLI profiles for almost a thousand older games that all support SLI too. One of my friends, for example, still plays the original Skyrim with about 340-odd mods that cause his two RTX 2080 Tis to run at 80%-90% GPU usage just at 1080p. There are lots of uses for SLI other than just games released within the past 3 years. And it's not always just for 1440p or 4K either.


----------



## Krzych04650

kithylin said:


> The entire world isn't about "Current" titles that support DirectX12 or Vulkan. Often times people only speak of "Current AAA games" that support SLI which is a small and short list (Linus Tech). Sometimes folks forget that the nvidia drivers still include SLI profiles for almost a thousand older games that all support SLI too. One of my friends for example still plays the original Skyrim with about 340 some odd mods that cause his two RTX 2080 Ti's to run at 80% - 90% GPU usage just at 1080p. There's lots of uses for SLI other than just games released within the past 3 years. And it's not always just for 1440p or 4K either.


Yeah, this is probably the biggest thinking problem in the community overall: judging everything based on just a few of the latest games, DX12/Vulkan in this case. Benchmarks are mostly made only in the latest games as well, and then you hear many people saying nonsense like "single-threaded performance doesn't matter anymore", or that SLI is not needed anymore because you don't need that much power. This is the kind of console-kid mentality that only plays a few of the latest, most "trendy" games, doesn't have any real library of games, certainly doesn't know anything about the performance characteristics of the games he is playing, and doesn't spend any effort setting them up. It is sad to see this kind of mentality being so common on PC, and what's even worse is that this kind of impotent talk turned out to be an actual industry direction, with manufacturers like NVIDIA not only prematurely abandoning pre-DX12/Vulkan altogether but also disabling what was already established.


----------



## J7SC

^^ I just posted this a few minutes ago at another forum in response to a similar theme: 
"...I am indeed happy with the 2x 2080 Ti SLI-CFR in FS2020, though I'm completely switching to 4K here, so a second system just got a 3090 for everything, including FS2020 (comparison soon with 2x 2080 Tis in the primary system in the FS2020 thread). Both sets are also used for productivity, btw.

*All but one app / sim / game I run regularly still support SLI* (about half of which even SLI-CFR), but there is no doubt that it will get increasingly difficult as newer drivers come out with patches etc. which you can't use. NVInspector also helps a lot with finding custom SLI solutions, as many games have a 'root' code that goes back to earlier, happier SLI days. For example, the underlying Crytek engine loves SLI + CFR. What really ticks me off:

a.) SLI-CFR, undocumented as it was and likely just a precursor / dev tool for upcoming mGPU, showed just how well SLI can work... CFR doesn't really do micro-stutter, for example. CFR is sometimes faster, sometimes a bit slower than regular SLI (AFR), but overall it really showed what *'could have been' with your existing* hardware.

b.) With that in mind, it's disgusting that NVIDIA simply turned off CFR again right before Ampere launched, never mind shifting SLI support out of their dev realm. Yet as we all know, if you want to play a demanding 4K Ultra title at decent frame rates (and/or with ray tracing and DLSS), you need to get a top-of-the-line GPU... only you can't even buy one most of the time now, unless you either pay scalper prices or get just plain lucky...


----------



## Laithan

I have been using SLI since the Voodoo2 days, through more generations than I can count... There's something special about using two GPUs, from much more than just a performance perspective. The aesthetics look _REALLY_ nice, and the fact that you can (almost) double your performance above and beyond current GPU generations is something special. I was most recently using two 980 Tis (modded) to run @ 4K, and it worked very well, faster than a single 1080 Ti by far...

Most of what I tried to run SLI with actually worked, and worked quite well IMO. The major issues I ran into were with id Software titles (the Wolfenstein series, Doom, etc.), and the biggest disappointment for me was RDR2... where I just could NOT get SLI to work no matter what I tried... and I didn't give up easy... apparently those who DID get SLI working with RDR2 were using Pascal... so it seems NVIDIA was playing similar games long before they removed CFR support. NVIDIA is also clearly trying to force obsolescence by not including SLI fingers on anything except a $1500 MSRP card... total BS. SLI is not dead, but NVIDIA is certainly playing a huge role in trying to kill it. I don't think anyone can dispute that when you can't even SLI a 3080/3070/3060 even if you wanted to.

My opinion of NVIDIA has absolutely tanked, starting when they locked us all out of BIOS modifications (as if that was causing RMAs... no it wasn't)... and the mining craze, where they showed the first signs of not caring about the consumer... Now with these BS games they are playing, especially the artificial price inflation, catering to miners and doing essentially NOTHING about GPU stock for the 30xx series, they are on the fecal roster. I was glad to see that team RED is able to compete, and if I could, I would buy a 6x00 over NVIDIA in a heartbeat, but of course there are issues on that side also... especially when people are home due to the pandemic... a slap in the face... (and an even bigger slap to scalpers).

I finally broke down and got myself a 2080 Ti for around $900 over a year ago (EVGA FTW3 Ultra) so that I could play RDR2 @ 4K, and if it weren't for the used prices I would have definitely considered a 2nd 2080 Ti, but that's just not going to happen right now... so we truly are "held hostage", and the consumer cannot fix this problem... except by NOT buying @ these hyper-inflated prices, which most don't have the willpower to do... so we're sadly going to see inflated prices become the norm once again...

I am very happy with my 2080 Ti and I'm more than happy to keep it around for the next 2 or so years at least... Nobody is getting any money from me during all this... nobody...


----------



## pewpewlazer

JustinThyme said:


> I could note the 50+ that I always run RTX enabled with 50/50 on DX 11 or 12 but its a waste of time for someone who Capitalized and Bolded that SLI is dead.


There aren't even 50+ games with RTX on the market. Nice try.

If there really are 50+ games you're playing with ray tracing and SLI at the same time, then SURELY you can name AT LEAST ONE?



kithylin said:


> The entire world isn't about "Current" titles that support DirectX12 or Vulkan. Often times people only speak of "Current AAA games" that support SLI which is a small and short list (Linus Tech). Sometimes folks forget that the nvidia drivers still include SLI profiles for almost a thousand older games that all support SLI too. One of my friends for example still plays the original Skyrim with about 340 some odd mods that cause his two RTX 2080 Ti's to run at 80% - 90% GPU usage just at 1080p. There's lots of uses for SLI other than just games released within the past 3 years. And it's not always just for 1440p or 4K either.


Skyrim with ~340 mods is beyond a niche use case.



Krzych04650 said:


> Yea this is probably the biggest thinking problem in community overall, judging everything based on just few latest games, DX12/Vulkan in this case.


DX12/Vulkan is what all games in the foreseeable future will be using.



Krzych04650 said:


> or that SLI is not needed anymore because you don't need this much power.


With the advent of ray tracing, SLI is needed now more than ever. But DX12/Vulkan have both brought ray tracing while simultaneously killing off SLI.



Krzych04650 said:


> This is the kind of console kid mentality who only plays few latest most "trendy" games, does not have any real library of games and certainly does not know anything about the performance characteristics of games he is playing and doesn't spend any effort setting them up. It is sad to see this kind of mentality being so common on PC, and whats even worse is that this kind of impotent talk turned out to be an actual industry direction with manufacturers like NVIDIA not only prematurely abandoning pre-DX12/Vulkan altogether but also disabling what was already established.


I was too young to really appreciate the glory days of PC gaming, but I still remember playing Starcraft when it was the hot new thing, and waiting overnight for the Q3A demo to download over dialup. I ran multi-GPU setups almost exclusively from 2005 until 2018.

Call me a "console kid" if you want, but SLI is dead. D E A D. I wish it wasn't, but unless we see a DX13 or a newer version of Vulkan that somehow places multi-GPU support back into the hands of the GPU manufacturers/driver developers, it's game over in the long run.


----------



## J7SC

pewpewlazer said:


> There aren't even 50+ games with RTX on the market. Nice try.
> 
> If there really are 50+ games you're playing with ray tracing and SLI at the same time, then SURELY you can name AT LEAST ONE?
> 
> 
> 
> Skyrim with ~340 mods is beyond a niche use case.
> 
> 
> 
> DX12/Vulkan is what all games in the forseeable future will be using.
> 
> 
> 
> With the advent of ray tracing, SLI is needed now more than ever. But DX12/Vulkan have both brought ray tracing while simultaneously killing off SLI.
> 
> 
> 
> I was too young to really appreciate the glory days of PC gaming, but I still remember playing Starcraft when it was the hot new thing, and waiting overnight for the Q3A demo to download over dialup. I ran multi-GPU setups almost exclusively from 2005 until 2018.
> 
> Call me a "console kid" if you want, but SLI is dead. D E A D. I wish it wasn't, but unless we see a DX13 or newer version of Vulkan that somehow places the multi-GPU support back into the hands of the GPU manufactures/driver developers, it's game over in the long run.


...your points are well taken... though I could actually name one DX12 / RTX / SLI game (spoiler), not to mention the 30 sec of Cyberpunk 2077 I spent in SLI-CFR for 2x 2080 Tis before it crashed (not during gameplay, but when hitting 'Esc' for graphics settings), but that really doesn't add up to 50 or so titles with both DX12 / RTX and SLI of any kind...

Without continuous, advertised support from NVIDIA for any kind of SLI (rather, they released the opposite message), a strong, negative message is being sent to devs re. SLI...



Spoiler


----------



## kithylin

pewpewlazer said:


> Skyrim with ~340 mods is beyond a niche use case.





pewpewlazer said:


> DX12/Vulkan is what all games in the forseeable future will be using.


Not everyone is interested in the "latest and greatest AAA games". Some people are perfectly happy re-playing some of the same older games they have owned for years, but on much higher settings with more and more and more mods. Some people honestly could not care at all whether DX12 or Vulkan is SLI-supported or not. Some people are GPU-limited in games released more than 5 years ago, even with a single 2080 Ti, games that have already had native SLI support along with performance and driver optimizations for a long time now. It's not a "niche use case". Personally I play a lot of indie / non-mainstream games myself. I find almost no interest at all in most new games released within the past few years. I know I'm not alone in this line of thinking either.


----------



## Laithan

kithylin said:


> I find almost no interest at all in most new games that are released within the past few years. I know I'm not alone in this line of thinking either.


Wow.. really?!?

I can't believe you don't like false promises, unfinished games, crashes, bugs, micro-transactions, loot boxes, pay-2-win, excessive DLC, piss-poor innovation and monetized cosmetics.... but I guess...


----------



## JustinThyme

pewpewlazer said:


> There aren't even 50+ games with RTX on the market. Nice try.
> 
> If there really are 50+ games you're playing with ray tracing and SLI at the same time, then SURELY you can name AT LEAST ONE?
> 
> 
> 
> Skyrim with ~340 mods is beyond a niche use case.
> 
> 
> 
> DX12/Vulkan is what all games in the forseeable future will be using.
> 
> 
> 
> With the advent of ray tracing, SLI is needed now more than ever. But DX12/Vulkan have both brought ray tracing while simultaneously killing off SLI.
> 
> 
> 
> I was too young to really appreciate the glory days of PC gaming, but I still remember playing Starcraft when it was the hot new thing, and waiting overnight for the Q3A demo to download over dialup. I ran multi-GPU setups almost exclusively from 2005 until 2018.
> 
> Call me a "console kid" if you want, but SLI is dead. D E A D. I wish it wasn't, but unless we see a DX13 or newer version of Vulkan that somehow places the multi-GPU support back into the hands of the GPU manufactures/driver developers, it's game over in the long run.


Nice try taking things out of context. I didn't say they all support RTX now, did I? I said I run 50+ with everything enabled on ultra settings, split 50/50 between DX11 and DX12. Like I said, you run your single-card rig and be happy in your world of denial. You don't run it, evidently have never run it, and are basing your replies on never having run it, so you really don't have a toe to stand on, let alone a foot. Just because you read it somewhere on the internet doesn't make you a French model. Show me the results, yours, not someone else's, because rest assured, as soon as I'm finished with the cooling upgrade (I just tore my rig down to add a MORA 3 420 and some other goodies), I'm going to post the difference between one and two, and I look forward to more denial.

In the long run, all there are are lazy devs who won't take the extra time because they know there are sheep who read BS and believe that SLI doesn't work. If it's not native and can't be forced, then I don't want it. And once again you totally left out the compute-power aspect. I can't even say nice try; more like lame.


----------



## J7SC

Laithan said:


> Wow.. really?!?
> 
> I can't believe you don't like false promises, unfinished games, crashes, bugs, micro-transactions, loot boxes, pay-2-win, excessive DLC, piss poor innovation and monetary cosmetics.... but I guess...


...I'd like to add constant big-data snooping and otherwise unnecessary / non-optional network-server integration!


----------



## JustinThyme

kithylin said:


> Not everyone is interested in the "Latest and greatest AAA games". Some people are perfectly happy re-playing some of the same older games they have owned for years but on much higher settings with more and more and more mods. Some people could honestly not care at all about whatever DX12 or Vulkan whatever is SLI supported or not. Some people are GPU limited in games released more than 5 years ago even with a single 2080 Ti. Games that have already had native SLI support along with performance and driver optimizations for a long time now. It's not a "niche use case". Personally I play a lot of Indie / non-mainstream games myself. I find almost no interest at all in most new games that are released within the past few years. I know I'm not alone in this line of thinking either.


No, you aren't alone, but there are still fairly new releases that support SLI and RTX. With few exceptions I'm not much into most of the newer junk.


----------



## JustinThyme

J7SC said:


> ...your points are well taken...though I could actually name one DX 12 / RTX / SLI game (spoiler), not to mention the 30 sec of Cyberpunk 2077 I spent in SLI-CFR fpr 2x 2080 Tis before I crashed (not during game play, but hitting 'esp' for graphics settings,) --- but that really doesn't add to 50 or so titles w/ both DX 12 / RTX and SLI of any kind...
> 
> W/o not only continuous but advertised support by NVidia for any kind of SLI- rather than the opposing message they released- it is sending a strong, negative message to devs re. SLI...
> 
> 
> 
> Spoiler


One of my favorite more recent games.......


----------



## pewpewlazer

JustinThyme said:


> Nice try of taking things out of context. I didnt say they all support RTX now did I?


Um, you literally said this:



JustinThyme said:


> I could note the *50+ that I always run RTX enabled*





JustinThyme said:


> Just because you don't run it and evidently have never run it and basing your replies without ever having run it then you really don't have a toe to stand on let alone a foot.


You evidently did not read my replies at all, because I stated I ran numerous multi-GPU setups over the years. If you require specifics:

7800 GT SLI
7900 GTO SLI
8800 GTS SLI
3870 CF
4850 CF
5770 CF
GTX 660 SLI
GTX 1070 SLI

After being continually kicked in the dick with the 1070s, I wised up and went single card with Turing.


----------



## JustinThyme

pewpewlazer said:


> Um, you literally said this:
> 
> 
> 
> 
> 
> You evidently did not read my replies at all, because I stated I ran numerous multi-GPU setups over the years. If you require specifics:
> 
> 7800 GT SLI
> 7900 GTO SLI
> 8800 GTS SLI
> 3870 CF
> 4850 CF
> 5770 CF
> GTX 660 SLI
> GTX 1070 SLI
> 
> After being continually kicked in the dick with the 1070s, I wised up and went single card with Turing.


Sorry you couldn't figure out how to make it work. Mine works just fine and my dick never felt better.


----------



## Laithan

Ding ding ding.. 15th round is over.

We go to the judge's scorecard for a decision..

_*Unanimous decision*_: SCALPERS win the fight... (this time)

Let's all just be happy we have SOMETHING in our boxes to use right now 
When a stinkin' used GTX 970 is up to $250..... <sigh>

🤦‍♂️

Stay safe everyone


----------



## Krzych04650

Laithan said:


> Ding ding ding.. 15th round is over.
> 
> We go to the judge's scorecard for a decision..
> 
> _*Unanimous decision*_: SCALPERS win the fight... (this time)
> 
> Let's all just be happy we have SOMETHING in our boxes to use right now
> When a stinkin GTX 970 used is up to $250..... <sigh>
> 
> 🤦‍♂️
> 
> Stay safe everyone


Heh, it is funny how old cards are suddenly so valuable again. For example, used GTX 1080 prices are approaching what I sold my 1080s for when Turing launched, and that was already under mining and overpriced-new-gen circumstances back then. And these are not some unrealistic listings; I am talking bids with 10+ users bidding, so people are actually paying that. No wonder why, though: major retailers here don't even bother to list the 3000 series anymore, there is no point. I am really glad I made a firm decision to get a second 2080 Ti at the right moment; now good used 2080 Tis go for more than 3080 MSRP.

And it is not only GPUs and scalpers; I was trying to get my hands on an LG 38WN95 for months and eventually gave up. Even today, a few months later, it has never been in stock anywhere. There are people who pre-ordered it in June and still didn't get it.

And what is probably most concerning about this is that manufacturers seem to be perfectly content with the current situation.

Really, anyone who has a good and complete setup right now should consider themselves lucky. It is a complete disaster trying to buy anything today.


----------



## J7SC

...speaking of old cards, how about some 'new old stock', kind of? 2080 Ti owners should consider themselves very lucky...

Verge:


----------



## Avacado

J7SC said:


> ...speaking of old cards, how about some 'new old stock', kind of ? 2080 TI owners should consider themselves very lucky...
> 
> Verge:


Saw that; some people want $1600 for a 2060 KO. They've lost their effing minds.


----------



## JustinThyme

The miners have resorted to using laptops with discrete 30-series GPUs now.

That's getting desperate.

Laptop farm


----------



## JustinThyme

Avacado said:


> Saw that, some people want 1600$ for a 2060 KO, they lost their effing minds.


All the prices are going up. 2080 Tis that I paid $999 for at launch from Micro Center (Strix O11G OC) are now through the roof. Cheap is like $1500; Newegg is selling them for $1890. With those kinds of prices, if I were in the market for a card ATM I'd cave to the scalpers on the 3090s; they are about the same.


----------



## Laithan

JustinThyme said:


> The miners have resorted to using laptops with discrete 30xxx now.
> 
> Thats getting desperate.
> 
> Laptop farm
> 


(ง ͠° ͟ل͜ ͡°)ง

_*<jaw dropped... speechless>*_


----------



## zipder

Hi guys, I'm new here and I need some help. I've got a used 2080 Ti Founders Edition with the 300A variant. I want to flash a new BIOS to my GPU because the default fan setting on the FE is too loud for me and I can't set it lower than 41%.
After trying to flash this BIOS, *GALAX/KFA2 RTX 2080 Ti OC Reference PCB (2x8-Pin) 300W x 127% Power Target BIOS (380W)*, with *NVFlash Modified*, I got an error message.
Any ideas?
Thank you









My GPU


----------



## Krzych04650

zipder said:


> Hi guys , I’m new here and I need some help . I’ve got an used 2080 TI Founders Edition with the 300A Variant . I want to flash new BIOS to my GPU because of the default fan setting from the FE is too loud for me and I can’t set it lower than 41% .
> After trying flash this BIOS* GALAX/KFA2 RTX 2080 Ti OC Reference PCB (2x8-Pin) 300W x 127% Power Target BIOS (380W) *with *NVFlash Modified  *i got error messenger .
> Any idea ?
> Thank you
> 
> 
> My GPU


Would that even help, though? I have both of my cards flashed with the 380W BIOS and their minimum fan speed setting is the same as it was before flashing: 35% for MSI and 40% for Palit. And the reason why you cannot go lower than 41% is probably the RPM range of the FE fans; it may be that they cannot spin slower.


----------



## zipder

Krzych04650 said:


> Would that even help though? I have both of my cards flashed with 380W BIOS and their minimum fan speed setting is the same as it was before flashing, 35% for MSI and 40% for Palit. And the reason why you cannot go lower than 41% is probably the RPM range of FE fans, it may be that they cannot spin slower.


Oh OK, so I don't think flashing a BIOS from another manufacturer will help.
My main problem is: my GPU fans keep ramping up and down between 1200 and 1500 RPM even when I set fan control to the minimum of 41%. They spin at 1500 RPM for about 30 s, then drop to 1200 RPM for 2 s, then go up to 1500 RPM again. When I play games or watch movies/YouTube it's not a big problem, but when I just need to browse the web or type in Word, it's really annoying.
Should I update to the newest NVIDIA BIOS (90.02.30.00.05)? Currently I'm on 90.02.0B.00.0E.


----------



## Krzych04650

zipder said:


> Oh OK , so i don't think flash new BIOS from another manufacturer will help .
> My main problem is : My GPU fans keep ramps up and down between 1200 and 1500 rpm even i set it to min 41% fan control . It rotates 1500 rpm for about 30s , then go down at 1200rpm for 2s then go up to 1500rpm again . When i play game or watching movie / youtube is not a big problem . But when i am just need to browsing web or typing word , it really annoying
> Should i update to newest Nvidia (90.02.30.00.05) bios ? Currently i'm on 90.02.0B.00.0E


Can you show a screenshot of your fan curve? I don't know if this is normal for the FE, but such fluctuations shouldn't happen at idle unless you have your curve set too aggressively.


----------



## J7SC

Krzych04650 said:


> Can you show a screenshot with your fan curve? I don't know if this is normal for FE but such fluctuations shouldn't happen at idle unless you have your curve set to aggressively.


^^ Yeah, and maybe just figure out at which fan speed gaming is well cooled and lock that % at Windows startup with MSI AB... fan speeds zipping up/down on their own annoy me far more than a higher but steady RPM. My 2080 Tis are water-cooled anyhow, but on my 3090, 72% is that magic number: quiet enough, but also able to deal with most loads. Fans on those get loud beyond 72%, though for hard loads and a bit of benching I just kick them up to 97% fixed. Water-cooling the Strix soon, I hope


----------



## zipder

Krzych04650 said:


> Can you show a screenshot with your fan curve? I don't know if this is normal for FE but such fluctuations shouldn't happen at idle unless you have your curve set to aggressively.












I'm using an Arctic Accelero Xtreme IV cooler, not the stock cooler. Not sure if that has some impact on my problem?


----------



## Laithan

You may want to try stretching out that fan curve so it's not as steep.


----------



## zipder

My problem is maybe solved. I updated my FE to the newest NVIDIA BIOS 90.02.30.00.05, and the fans have been stable at 41% / 1500 RPM so far. No more ramping up and down. Still noisy though... 😅


----------



## JustinThyme

zipder said:


> My problem is maybe solved . Updated my FE to newest Nvidia BIOS 90.02.30.00.05 and now fans are stable at 41% - 1500rpm untill now . No more ramps up and down . Still noisy tho .... 😅


The only way to lower temps and keep noise in check is to bite the bullet and go under water.


----------



## dcide

Hi. Maybe one of you guys can help me with this: I need the value (size/rating) of the highlighted SMD capacitor on a 2080 Ti reference PCB.









Thanks in advance.


----------



## Imprezzion

zipder said:


> My problem is maybe solved . Updated my FE to newest Nvidia BIOS 90.02.30.00.05 and now fans are stable at 41% - 1500rpm untill now . No more ramps up and down . Still noisy tho .... 😅


If you're using an Accelero this behavior is completely normal; the fan control on that thing sucks. Flashing a BIOS with a lower minimum fan percentage definitely helps; the PWM % for an Accelero has to be way lower than 41%.

The Galax/KFA2 is also limited to 40%; what you need is the EVGA FTW3 Ultra 373W BIOS. That has a higher power limit and much, much better fan control.


----------



## zipder

Imprezzion said:


> If you're using a Accelero this behavior is completely normal, the fan control on that thing sucks. Flashing a BIOS with a lower fan percentage definitely helps, the PWM % for a Accelero has to be way lower then 41%.
> 
> The Galax/KFA2 is also limited to 40%, what you need is the EVGA FTW3 Ultra 373w BIOS. That has a higher power limit and much much better fan control.


The EVGA FTW3 Ultra uses a custom PCB; I don't think it will work with my FE.
And as I mentioned above, I can't flash a BIOS from another manufacturer at all, even when I use NVFlash Modified with Board ID Mismatch Disabled. 😕


----------



## Imprezzion

zipder said:


> EVGA FTW3 Ultra uses custom PCB , i don't think it will work with my FE.
> And as i mentioned above , i can't flash BIOS from another manufacturer at all , even when i use NVFlash Modified with Board Id Mismatch Disabled. 😕


I have run every BIOS in existence for the 2080 Ti on my Gainward Phoenix GS, which is a reference PCB (not Founders). It takes the KFA2 one, but also EVGA, Gigabyte, ASUS and MSI custom PCB BIOSes just fine, with the only exception being non-working HDMI/DP ports if the port config is different on the BIOS, but I assume a Founders will eat those BIOSes just fine as well.

As for the error: you have to type YES, not an 'n'. That's why it doesn't work lol.
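For anyone else hitting the same error, the flow described above looks roughly like this. This is only a sketch: the executable name (`nvflash64`), the `--save`/`--protectoff`/`-6` switches, and the ROM filenames are assumptions based on common NVFlash builds, and the modified build linked earlier in the thread may behave differently.

```shell
# Hypothetical NVFlash session (run from an elevated prompt; filenames are examples)

nvflash64 --save backup_2080ti.rom   # 1. always back up the current BIOS first
nvflash64 --protectoff               # 2. disable the EEPROM write protection
nvflash64 -6 evga_xc_2080ti.rom      # 3. flash, overriding the subsystem ID mismatch

# When the board/firmware ID mismatch confirmation appears, the answer is
# case-sensitive: type YES in capitals. A lowercase "yes" (or "n") aborts
# the flash with the error shown earlier in the thread.
```

After a successful flash, rebooting (and optionally re-enabling the write protect with `--protecton`, if your build supports it) completes the process.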


----------



## zipder

Imprezzion said:


> I have ran every BIOS in existence for the 2080 Ti on my Gainward Phoenix GS which is a reference PCB (not founders). It takes the KFA2 one but also EVGA, Gigabyte, ASUS and MSI custom PCB BIOS just fine with the only exception being non-working HDMI/DP ports of the port config is different on the BIOS but I assume a Founders will eat those BIOS just fine as well.
> 
> As for the error, you have to type YES not a n. That's why it doesn't work lol.


Finally I've found out why it didn't work: I typed ''yes'' instead of ''YES''
// That screenshot I took from the internet; I was too lazy 😂
Thank you so much for telling me that. 🤝
Flashed the EVGA XC version, and those fans are spinning at 700 RPM now, much quieter


----------



## Falkentyne

Imprezzion said:


> I have ran every BIOS in existence for the 2080 Ti on my Gainward Phoenix GS which is a reference PCB (not founders). It takes the KFA2 one but also EVGA, Gigabyte, ASUS and MSI custom PCB BIOS just fine with the only exception being non-working HDMI/DP ports of the port config is different on the BIOS but I assume a Founders will eat those BIOS just fine as well.
> 
> As for the error, you have to type YES not a n. That's why it doesn't work lol.


I got excited until I realized I was in the 2080 Ti thread.
The 3080 FE/3090 FE still errors out if you type "YES" when trying to flash another AIB's BIOS (like the Kingpin or Strix XOC2 BIOSes on a 3090 FE, for example), right?


----------



## Imprezzion

Falkentyne said:


> I got excited until I realized I was in the 2080 Ti thread.
> The 3080 FE/3090 FE still errors out if you type "YES" when trying to flash another AIB's bios (like the Kingpin or Strix XOC2 bioses on 3090 FE for example) right?


Never tried that; I mean, someone has to make a 5.670 nvflash without the verification first.

I did try it the other way around, funnily enough. I tried to flash the FE 3080 BIOS to my Gigabyte 3080 Gaming OC, as it's the only BIOS besides the stock Gigabyte one that has 370/370W @ 100%. But with the whole 12-pin vs 2x8-pin thing, I'm not sure it would work even if it allowed me to flash it.


----------



## Falkentyne

Imprezzion said:


> Never tried that, I mean, someone has to make a 5.670 nvflash without the verification first.
> 
> I did try it the other way around funnily enough. I tried to flash the FE 3080 BIOS to my Gigabyte 3080 Gaming OC as it's the only BIOS besides the stock Gigabyte one that has 370/370W @ 100%. But, with the whole 12 pin vs 2x8 pin I'm not sure it would work even if it allowed me to flash it.


It didn't work even with the overrides? (I assume that card has dual bios switch). Or the nvflash wouldn't even allow the override option?


----------



## Imprezzion

Falkentyne said:


> It didn't work even with the overrides? (I assume that card has dual bios switch). Or the nvflash wouldn't even allow the override option?


I was using 5.670 NVFlash; it would let me enter the file name and such, but then said it cannot flash: board ID mismatch (the FE's is one letter different), and there's no mismatch override in 5.670 yet.


----------



## J7SC

Imprezzion said:


> I was using 5.670 NVFlash; it would let me enter the file name and such, but then said it cannot flash: board ID mismatch (the FE's is one letter different), and there's no mismatch override in 5.670 yet.


...did you solve that ?


----------



## Falkentyne

Imprezzion said:


> I was using 5.670 NVFlash; it would let me enter the file name and such, but then said it cannot flash: board ID mismatch (the FE's is one letter different), and there's no mismatch override in 5.670 yet.


There was a mismatch-ID-patched NVFlash for Ampere (which STILL will not flash a modded BIOS, however) that someone posted and linked, but the post got deleted before anyone could download it. And no one knows where it is. I'd like a copy of it...


----------



## acoustic

I had it but unfortunately forgot to save it before I wiped Windows. Was the post deleted by moderators or something? I thought this forum was for overclockers


----------



## J7SC

tps3443 said:


> Are you using an undervolt profile or just setting a standard offset?
> 
> Our 2080 Tis have an internal clock and an external clock.
> 
> Your card may say it's running 2,100 MHz, but it could actually only be running 2,070 MHz or even less.
> 
> Another member on here set an undervolt of 2,100 MHz at 1.000 V and the card was actually only running 2,025 MHz internally.


...are you still Cyberpunking away w/ your 2080 Ti ? As to 'internal clock and external clock', you're referring to what is also depicted in HWInfo / GPU, I take it


----------



## Elfsberg

Hey, people 

Maybe this is a long shot but it's worth the trouble.  
So I just got my 2080 Ti, and it was damaged during freight. It was bought used, so there's no warranty. The kid who sold it won't accept responsibility, so I guess I'll have to make it work.

Anyhow:
I immediately put the card in my computer and it crashed right away at the login screen.
This is what happens: Computer freezes to black screen on Windows log in screen, all the time. The card can post and even log in to Windows in Safe Mode. I checked the device manager and it's reporting a faulty "nvidia type c port policy controller". 
I have tried:
installing official drivers through Safe Mode and disabling the banged(!) "USB-C port policy controller" device through the device manager.
Fresh install of Windows, twice. Same thing. DDU in Safe Mode. Two motherboards and three PSUs. HDMI and all of the DisplayPorts.

Here's a funny thing:
If I put the card in another computer with AMD software already installed the card will boot to Windows in 1440p (instead of whatever low resolution Safe Mode is).

Anyone been through this?

Maybe this is not the place to look for this kind of help. Where can I look for help?

I suppose some kind of hardware repair is needed, but maybe there is a software workaround to deactivate the USB-C circuitry. I imagine reprogramming the vBIOS is possible, but that's way out of my league.

Thanks


----------



## Falkentyne

Elfsberg said:


> Hey, people
> 
> Maybe this is a long shot but it's worth the trouble.
> So I just got my 2080 Ti, and it was damaged during freight. It was bought used, so there's no warranty. The kid who sold it won't accept responsibility, so I guess I'll have to make it work.
> 
> Anyhow:
> I immediately put the card in my computer and it crashed right away at the login screen.
> This is what happens: Computer freezes to black screen on Windows log in screen, all the time. The card can post and even log in to Windows in Safe Mode. I checked the device manager and it's reporting a faulty "nvidia type c port policy controller".
> I have tried:
> installing official drivers through Safe Mode and disabling the banged(!) "USB-C port policy controller" device through the device manager.
> Fresh install of Windows, twice. Same thing. DDU in Safe Mode. Two motherboards and three PSUs. HDMI and all of the DisplayPorts.
> 
> Here's a funny thing:
> If I put the card in another computer with AMD software already installed the card will boot to Windows in 1440p (instead of whatever low resolution Safe Mode is).
> 
> Anyone been through this?
> 
> Maybe this is not the place to look for this kind of help. Where can I look for help?
> 
> I suppose some kind of hardware repair is needed, but maybe there is a software workaround to deactivate the USB-C circuitry. I imagine reprogramming the vBIOS is possible, but that's way out of my league.
> 
> Thanks


How was it damaged during freight? Is that even possible?


----------



## Laithan

Couldn't you just claim damage through the shipping company? As the buyer, it isn't your fault if it was damaged during shipping. If it wasn't packaged properly, well, that may make the claim harder, but I would still file one. Just search for "file a claim xxx shipping company". Make sure you have the original box, take pictures, etc.


----------



## Elfsberg

Laithan said:


> Couldn't you just claim damage through the shipping company? As the buyer, it isn't your fault if it was damaged during shipping. If it wasn't packaged properly, well, that may make the claim harder, but I would still file one. Just search for "file a claim xxx shipping company". Make sure you have the original box, take pictures, etc.





Falkentyne said:


> How was it damaged during freight? Is that even possible?


The kid who sold it put it in a box a little too small, and the bracket by the connectors got bent. The card wasn't placed straight, but rather askew, to fit.
I'm making assumptions, but the handling of the package, even if treated properly, could well have damaged the card. The USB-C connector is the one closest to the bend.
FUBAR... He used a lot of protective materials; this was purely a mistake.

I have claimed on the insurance included in the shipping; we'll see how it goes. But even if I get my money back, I want a 2080 Ti, and they are damn hard to come by at all.


----------



## J7SC

Elfsberg said:


> The kid who sold it put it in a box a little too small, and the bracket by the connectors got bent. The card wasn't placed straight, but rather askew, to fit.
> I'm making assumptions, but the handling of the package, even if treated properly, could well have damaged the card. The USB-C connector is the one closest to the bend.
> FUBAR... He used a lot of protective materials; this was purely a mistake.
> 
> I have claimed on the insurance included in the shipping; we'll see how it goes. But even if I get my money back, I want a 2080 Ti, and they are damn hard to come by at all.


While you probably will never know for sure, it could very well have happened during shipping... About 7 months ago, during the height of Covid-19 related transport issues, I shipped an older but rare custom card from W. Canada to E. USA (as a gift to a friend, but insured for about half its true value). I packaged it carefully in the original box, with bubble wrap around the anti-static bag inside. Anyway, expedited shipping, and tracking initially showed good progress, then everything went dead. Eventually it showed the package had been redirected to a completely different part of the USA, then it was 'lost' for over a week. When it finally arrived several weeks later (instead of the 3 to 5 days paid for), something had apparently torn the box a little on the outside, and on the card itself several caps on the back of the GPU had been shaved off, as the GPU came without a backplate. Judging by the pictures my friend forwarded, I suspect something very heavy must have been placed on that box. Anyway, at least the insurance paid part of the value...

Now, the easiest (but not cheap) way is to ship your card to the manufacturer and 'limit' the $ amount for repairs you are willing to shell out (you would still have to pay for shipping, though). Also, if this card had been part of an SLI set, you might have had a chance... Years back, I had several GTX 6xx GPUs, and one of them had a PCB issue which affected I/O: black screen all the time. However, it worked just fine when it wasn't the primary card connected to the monitor, but instead the secondary or tertiary one (SLI, tri-SLI).

One thing definitely worth trying is to flash a BIOS onto your 2080 Ti which isn't made for it, e.g. an XOC one from Asus (see the OP in this thread). Many folks reported that flashing a 'foreign' BIOS worked on their specific model, other than losing some I/O: sometimes a DisplayPort, sometimes HDMI, and I believe sometimes USB-C. That would be a software 'fix' of sorts. You might have to flash several different ones before you find the right combo, if at all; worth a try, though. Flashing can be done with any other (preferably NVIDIA) GPU, say a 1070, in the primary slot and the sick 2080 Ti in a secondary slot, just making sure you flash the right GPU index.

If all else fails, and only after you have tried all the other options, you can also try the 'bake your GPU in the oven' method to re-flow the solder in case of cracked PCB traces around the USB-C. If you google it, you'll find several videos and write-ups on optimal temps, baking time, etc. If you do that, remove all the plastic parts first, and even then there are obvious risks. I did it with a card that had a little plastic switch for the BIOS selector on a multi-BIOS GPU... Overall I'm at 1 for 2 GPUs with that baking method. Also remember to either clean your oven thoroughly afterwards (toxic fumes and residue sticking to the sides), or use a little toaster oven out of regular service.

Best of luck getting this sorted !


----------



## Elfsberg

That's a good idea and kind of what I was looking for. Thanks!

Disabling CSM, or whatever it's called, made the card bootable to regular Windows 10, so that's a good thing, perhaps.

I tried flashing a few different BIOSes last night.
A few different FE and brand 2080 Ti ones, some 2080 Super, 2060, 1080 Ti... (it's just the 2080 Ti that has a USB-C output that..)
I used the regular NVFlash and the Board Mismatch version with some combination of -j, -6 and --protectoff, and the only thing that did not report a GPU mismatch or subsystem mismatch was the EVGA 2080 Ti FTW3.
I continued experimenting, and after the EVGA BIOS went onto my GPU I couldn't even manage to get the original back. Lol
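For anyone keeping track, the combinations tried above can be written down as argument lists. This is an illustrative sketch only: the helper function and the `nvflash64` binary name are assumptions, the flag names (`-6`, `--protectoff`, `--save`) are the ones discussed in this thread, and nothing here actually executes a flash.

```python
# Illustrative only: assemble NVFlash argument lists without running anything.
def nvflash_cmd(rom=None, backup=None, protect_off=False, override_mismatch=False):
    cmd = ["nvflash64"]
    if protect_off:
        cmd.append("--protectoff")   # lift the EEPROM write protection first
    if backup:
        cmd += ["--save", backup]    # always keep a copy of the stock BIOS
    if override_mismatch:
        cmd.append("-6")             # acknowledge the PCI subsystem ID mismatch
    if rom:
        cmd.append(rom)
    return cmd

# e.g. the EVGA FTW3 attempt described above:
print(nvflash_cmd(rom="evga_ftw3.rom", override_mismatch=True))
```

The point of saving the stock ROM first is exactly the problem hit above: without a backup, there is no "original" to flash back.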

And then suddenly the card stopped POSTing... and my 1080 Ti stopped POSTing, too? I went to bed a little disappointed.
Now I'm running an AMD card that works just fine.

I'ma continue working on this tomorrow.
Edit:

1080 Ti back in action, too. Thank god!


----------



## Elfsberg

OK. Somehow it now starts again. Maybe the sudden death was caused by the button on my PSU; hybrid mode or something. (Seasonic Focus GX 850 W...)

With the EVGA FTW3 BIOS the card has stopped crashing, so that's something.

I tried flashing the 2080 Super BIOS again but I get the PCI subsystem mismatch error.

Does anyone know how to alter the .rom?

I tried NiBiTor but I can't get it to work.

Thanks


----------



## jura11

Elfsberg said:


> OK. Somehow it now starts again. Maybe the sudden death was caused by the button on my PSU; hybrid mode or something. (Seasonic Focus GX 850 W...)
> 
> With the EVGA FTW3 BIOS the card has stopped crashing, so that's something.
> 
> I tried flashing the 2080 Super BIOS again but I get the PCI subsystem mismatch error.
> 
> Does anyone know how to alter the .rom?
> 
> I tried NiBiTor but I can't get it to work.
> 
> Thanks


Hi there 

Why are you flashing an RTX 2080 Super BIOS on an RTX 2080 Ti? You shouldn't do that, because you can brick your RTX 2080 Ti.

First, download NvFlash with mismatch support (the latest version has it); second, only flash RTX 2080 Ti BIOSes there, don't try flashing others. If you don't know how to use NvFlash, don't use it.

Hope this helps 

Thanks, Jura


----------



## jura11

End of an era for me with the RTX 2080 Tis: I sold both of mine (an Asus ROG RTX 2080 Ti Strix and a Zotac RTX 2080 Ti AMP) literally a few days ago. When the RTX 3xxx series was released I would have been happy to get £400-£600 per GPU, but I sold them for £2000, and I just couldn't pass on that. It will help us a lot in the current situation. Sadly, I had planned to get another RTX 3090, but not now; there's no point getting one when prices are jacked up, and mainly this money will help us with rent etc., as my brother lost his job just recently.

Hope this helps

Thanks, Jura


----------



## AndrejB

What are the VRAM/hotspot temps like on the Strix 2080 Ti? (I know the Strix doesn't have VRAM sensors, but I'd like to know what's hitting 90 °C.)


----------



## Elfsberg

jura11 said:


> Hi there
> 
> Why are you flashing an RTX 2080 Super BIOS on an RTX 2080 Ti? You shouldn't do that, because you can brick your RTX 2080 Ti.
> 
> First, download NvFlash with mismatch support (the latest version has it); second, only flash RTX 2080 Ti BIOSes there, don't try flashing others. If you don't know how to use NvFlash, don't use it.
> 
> Hope this helps
> 
> Thanks, Jura


Thanks, but maybe you missed the posts above stating the card is already as good as dead. Flashing 2080 Ti BIOSes isn't working correctly either, seemingly no matter which version of NVFlash and which commands I use.
I have flashed other cards before, and even bricked and restored them. Your suggested "don't try if you don't know how" makes learning impossible; humanity would die out thinking like that.

The 2080 Super has no USB-C, so it's worth a shot.


----------



## Elfsberg

AndrejB said:


> What are the VRAM/hotspot temps like on the Strix 2080 Ti? (I know the Strix doesn't have VRAM sensors, but I'd like to know what's hitting 90 °C.)


Judging by GPU-Z, it's the memory.


----------



## AndrejB

A nice improvement after re-padding and re-pasting.

For anyone interested, the stock Strix cooler uses 1 mm pads for the memory.


----------



## sultanofswing

Thinking about throwing my KPE 2080ti up for sale here soon. What's a fair price to ask? Going to list it with the Bykski Waterblock already installed.


----------



## kithylin

sultanofswing said:


> Thinking about throwing my KPE 2080ti up for sale here soon. What's a fair price to ask? Going to list it with the Bykski Waterblock already installed.


I hope I don't seem like a moderator here, but there is a marketplace section of the forums, and they do appraisals over there. This thread really isn't the place for that.


----------



## CptAsian

sultanofswing said:


> Thinking about throwing my KPE 2080ti up for sale here soon. What's a fair price to ask? Going to list it with the Bykski Waterblock already installed.





kithylin said:


> I hope I don't seem like a moderator here but.. there is a marketplace section of the forums and they do appraisals over there. This really isn't the place for that in this thread.


There is an appraisals section but note that it's broken at the moment for whatever reason. People can post new threads but nobody can reply, so that makes the whole thing a bit useless.

Bug thread about it: Bug: Appraisal Forum Blocked


----------



## jura11

sultanofswing said:


> Thinking about throwing my KPE 2080ti up for sale here soon. What's a fair price to ask? Going to list it with the Bykski Waterblock already installed.


I sold my RTX 2080 Ti Strix for £1000 with the EKWB Vector waterblock, and the Zotac RTX 2080 Ti AMP with an Aqua Computer Kryographics waterblock and active backplate for the same price.

Hope this helps 

Thanks, Jura


----------



## J7SC

jura11 said:


> I sold my RTX 2080 Ti Strix for £1000 with the EKWB Vector waterblock, and the Zotac RTX 2080 Ti AMP with an Aqua Computer Kryographics waterblock and active backplate for the same price.
> 
> Hope this helps
> 
> Thanks, Jura


Hi Jura - what sales channel (ie eBay, local ad) did you use if that's not private info ?


----------



## jura11

J7SC said:


> Hi Jura - what sales channel (ie eBay, local ad) did you use if that's not private info ?


Hi there 

It's not private info or anything; I used eBay, and both GPUs went to the same person.

I think right now is a good time to sell an RTX 2080 Ti because of the scarce availability of the RTX 3090, 3080, or any GPU, at least here in the UK.

When the RTX 3090 launched, prices of RTX 2080 Tis plummeted to something like £400 for a 2080 Ti Strix. I bought two 2080 Tis for friends' builds around then, an Asus RTX 2080 Ti Strix for £440 and an EVGA RTX 2080 Ti FTW3 for £400, but currently prices for used RTX 2080 Tis are jacked up.

To put it in perspective, I sold my GTX 1080 Ti for £450.

Hope this helps 

Thanks, Jura


----------



## haleNpace

Hi Gents, 

I have a 300A, and my power limit is 280 W. The card is a Galax SG OC V2. Is there any way around the newer XUSB FW ID, or is the programming clip still the only way? Just curious before I order one. No, I don't want to shunt mod, for the same reason I am not a surgeon: my hands shake, a lot lol.
Thank you in advance.


----------



## Laithan

Sadly, nobody to this day has been able to crack and sign a custom BIOS, so yeah, a hardware programmer is the only way I am aware of. I really find this somewhat amazing, knowing how many things do get hacked, and how many super smart people hack other things arguably far more complex... but we still cannot sign a custom BIOS, nor has anyone leaked the tools to do it.


----------



## haleNpace

Thanks Laithan. The card is a year old, and even though a lot of people trash the '80 Ti, I get a lot of use out of it in the games I play at 60 FPS on the Q7 TV. I guess I'm just being greedy lol, but I found the power limit strange for a 300A. I think I will finally put the water block on and see how that goes first; it's still sitting in the box due to other interests.

The card overclocks well considering it's on air: +220 on the core clock and +1250 (762.2 GB/s) on the memory. I haven't really tinkered with the last few % of potential, just found a stable OC, and after almost a year it's still stable. I ran a few benchmarks but mainly just game with it. It still gets to 2100 MHz, then drops off pretty steadily to the 2040-1860 range depending on the game if it manages to reach 60 °C, which is usually a benchmark temp, not something I see in regular usage. I'll put the block on and see how that goes. It'll only be the GPU on that loop; everything else is separate. Thanks again.
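As a side note, the 762.2 GB/s figure is just bus width times effective data rate. A quick sanity check against the stock spec-sheet numbers (a sketch; the helper name is illustrative):

```python
# Memory bandwidth = (bus width in bytes) x (effective data rate).
# Stock 2080 Ti: 352-bit bus at 14 Gbps effective -> 616 GB/s, matching the
# spec sheet; the 762.2 GB/s reading above implies roughly 17.3 Gbps.
def bandwidth_gbs(bus_bits: int, rate_gbps: float) -> float:
    return bus_bits / 8 * rate_gbps

print(bandwidth_gbs(352, 14.0))  # 616.0 GB/s at stock
print(762.2 / (352 / 8))         # ~17.3 Gbps effective after the overclock
```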


----------



## J7SC

haleNpace said:


> Thanks Laithan. The card is a year old, and even though a lot of people trash the '80 Ti, I get a lot of use out of it in the games I play at 60 FPS on the Q7 TV. I guess I'm just being greedy lol, but I found the power limit strange for a 300A. I think I will finally put the water block on and see how that goes first; it's still sitting in the box due to other interests. The card overclocks well considering it's on air: +220 on the core clock and +1250 (762.2 GB/s) on the memory. I haven't really tinkered with the last few % of potential, just found a stable OC, and after almost a year it's still stable. I ran a few benchmarks but mainly just game with it. It still gets to 2100 MHz, then drops off pretty steadily to the 2040-1860 range depending on the game if it manages to reach 60 °C, which is usually a benchmark temp, not something I see in regular usage. I'll put the block on and see how that goes. It'll only be the GPU on that loop; everything else is separate. Thanks again.


...I still very much enjoy my 2080 Tis in one of two work-play 4K systems, even with a 3090 in the other. Given the current market conditions re. availability and pricing, 2080 Tis are still a great catch...enjoy !


----------



## Bastouny

Hello,

(sorry for my poor english, i speak french)
I have spent a lot of hours on the internet trying to find my answer, but I can't find it...
Please! Help!

I have an NVIDIA RTX 2080 Ti Founders Edition and I want to flash its BIOS in order to increase its Power Limit (320 W) (and also to cool it by water-cooling).

The KFA2 RTX 2080 Ti OC BIOS (380 W) seems to be the best choice for a reference PCB (KFA2 RTX 2080 Ti VBIOS), but I read here ([Official] NVIDIA RTX 2080 Ti Owner's Club) that you have to be careful that the XUSB-FW is identical in the BIOS and on the card, otherwise I will not be able to flash it.

Here is my current BIOS: NVIDIA RTX 2080 Ti VBIOS and here is my XUSB-FW Version ID: 0x70090003

Here is the XUSB-FW Version ID of KFA2 RTX 2080 Ti OC BIOS: 0x70060003

=> so I think this BIOS will not be compatible with my card...

So I looked for another BIOS but with an identical XUSB-FW Version (0x70090003). I found the one for the Gigabyte RTX 2080 Ti Gaming OC (Gigabyte RTX 2080 Ti VBIOS) but with a slightly lower Power Limit (366 W) than the KFA2.

Do you think I can flash my card with Gigabyte BIOS safely?
Am I likely to brick the card? If so, is it reversible?
Can you confirm that it is important to choose a BIOS from a card with the same PCB, with the same number of power supply phases?
*Do you have any other BIOS suggestions that will work fine with my card?*


----------



## AndrejB

Bastouny said:


> Hello,
> 
> (sorry for my poor english, i speak french)
> I spend a lot of hours on the internet to find my answer but I can't find ...
> Please! Help!
> 
> I have an NVIDIA RTX 2080 Ti Founders Edition and I want to flash its BIOS in order to increase its Power Limit (320 W) (and also to cool it by water-cooling).
> 
> KFA2 RTX 2080 Ti OC BIOS (380 W) seems to be the best choice for a reference PCB (KFA2 RTX 2080 Ti VBIOS) but I read here ([Official] NVIDIA RTX 2080 Ti Owner's Club) that you have to be careful that the XUSB-FW is identical on the BIOS and on my card otherwise I will not be able to modify my BIOS.
> 
> Here is my current BIOS: NVIDIA RTX 2080 Ti VBIOS and here is my XUSB-FW Version ID: 0x70090003
> 
> Here is the XUSB-FW Version ID of KFA2 RTX 2080 Ti OC BIOS: 0x70060003
> 
> => so I think this BIOS will not be compatible with my card...
> 
> So I looked for another BIOS but with an identical XUSB-FW Version (0x70090003). I found the one for the Gigabyte RTX 2080 Ti Gaming OC (Gigabyte RTX 2080 Ti VBIOS) but with a slightly lower Power Limit (366 W) than the KFA2.
> 
> Do you think I can flash my card with Gigabyte BIOS safely?
> Am I likely to brick the card? if so, is it reversible?
> Can you confirm that it is important to choose a BIOS from a card with the same PCB, with the same number of power supply phases?
> *Do you have any other BIOS suggestions that will work fine with my card?*


I've never heard of anyone bricking their card because of the XUSB; nvflash will report a mismatch and refuse to flash.

Be sure to follow the instructions in the first post and you should be fine; it'll either flash or it won't. No way around it as far as I know, unfortunately.
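The rule being applied is simple enough to sketch: the flash tool compares the XUSB-FW version ID on the card against the one in the ROM and refuses on mismatch. A minimal illustration using the version IDs quoted above (the helper name is made up):

```python
# Sketch of the compatibility rule discussed above: a ROM whose XUSB-FW
# version ID differs from the card's is refused by the flash tool.
def xusb_fw_compatible(card_fw: int, rom_fw: int) -> bool:
    return card_fw == rom_fw

CARD_FE  = 0x70090003  # RTX 2080 Ti FE, as reported above
ROM_KFA2 = 0x70060003  # KFA2 380 W BIOS
ROM_GIGA = 0x70090003  # Gigabyte Gaming OC 366 W BIOS

print(xusb_fw_compatible(CARD_FE, ROM_KFA2))  # False: flash will be refused
print(xusb_fw_compatible(CARD_FE, ROM_GIGA))  # True: flash can proceed
```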


----------



## Bastouny

AndrejB said:


> I never heard of anyone bricking their card because of xusb, this is because nvflash will report a mismatch and won't flash.
> 
> Be sure to follow the instructions in the first post and you should be fine, it'll either flash or it won't. No way around it as far as I know, unfortunately.


I understood that I have to find a BIOS with the same XUSB-FW version.

My question is mainly about which BIOS will be compatible with my card, allow unlocking the power limit, and therefore improve performance.
Also, can you confirm that it is important to choose a BIOS from a card with the same PCB, with the same number of power phases?


----------



## Medizinmann

Bastouny said:


> Hello,
> 
> (sorry for my poor english, i speak french)
> 
> I spend a lot of hours on the internet to find my answer but I can't find ...
> Please! Help!


You came to the right place...;-)



> I have an NVIDIA RTX 2080 Ti Founders Edition and I want to flash its BIOS in order to increase its Power Limit (320 W) (and also to cool it by water-cooling).
> 
> KFA2 RTX 2080 Ti OC BIOS (380 W) seems to be the best choice for a reference PCB (KFA2 RTX 2080 Ti VBIOS) but I read here ([Official] NVIDIA RTX 2080 Ti Owner's Club) that you have to be careful that the XUSB-FW is identical on the BIOS and on my card otherwise I will not be able to modify my BIOS.
> 
> Here is my current BIOS: NVIDIA RTX 2080 Ti VBIOS and here is my XUSB-FW Version ID: 0x70090003
> 
> Here is the XUSB-FW Version ID of KFA2 RTX 2080 Ti OC BIOS: 0x70060003
> 
> => so I think this BIOS will not be compatible with my card...
> 
> So I looked for another BIOS but with an identical XUSB-FW Version (0x70090003). I found the one for the Gigabyte RTX 2080 Ti Gaming OC (Gigabyte RTX 2080 Ti VBIOS) but with a slightly lower Power Limit (366 W) than the KFA2.


The Gigabyte BIOS should be fine, and 14 W less doesn't make much difference...



> Do you think I can flash my card with Gigabyte BIOS safely?


Yes..



> Am I likely to brick the card?


No...



> if so, is it reversible?


It depends; flashing a BIOS always carries a little risk, but you can reflash most of the time. You just need a machine with a 2nd GPU or iGPU, and you can try to recover with a BIOS flasher clip if that fails.

But again, such a failure isn't likely as long as you don't cut power in the process...



> Can you confirm that it is important to choose a BIOS from a card with the same PCB, with the same number of power supply phases?


Yes...see also 1st page.



> *Do you have any other BIOS suggestions that will work fine with my card?*


You could still try the KFA2 380W BIOS...be sure to use the modified Flashtool for NVIDIA FE cards...








NVIDIA NVFlash with Board Id Mismatch Disabled (v5.590.0) Download
This is a patched version of NVIDIA's NVFlash. On Turing cards, stock NVFlash no longer allows overriding of the "board ID mismatch" message through the command line.
www.techpowerup.com





But - as already said - the Gigabyte BIOS for the Gaming OC should be fine.

Best regards,
Medizinmann


----------



## Bastouny

Medizinmann said:


> You could still try the KFA2 380W BIOS...be sure to use the modified Flashtool for NVIDIA FE cards...
> 
> 
> 
> 
> 
> 
> 
> 
> NVIDIA NVFlash with Board Id Mismatch Disabled (v5.590.0) Download
> This is a patched version of NVIDIA's NVFlash. On Turing cards, stock NVFlash no longer allows overriding of the "board ID mismatch" message through the command line.
> www.techpowerup.com


Thank you very much for your answers. You also suggest the KFA2 380 W BIOS, but as I indicated, it shouldn't work with my card because of the different XUSB-FW version.
Do you have a 2080 Ti Founders Edition?
Have you ever flashed your BIOS, and if so, with which one?


----------



## Medizinmann

Bastouny said:


> Thank you very much for your answers. You also offer me the KFA2 380W BIOS but as I indicated it should not be able to be used with my card because of the XUSB-FW version which is different.
> Do you have a 2080 Ti Founder Edition?
> Have you ever flashed your BIOS, and if so, with which one?


As I said, you could try; it might work or it might be rejected. But the Gigabyte Gaming OC 366 W BIOS seems fine anyhow, and 14 W less isn't anything to worry about... and it is very unlikely that you brick your GPU in the process if you don't turn off the power while flashing...

I own a KFA2 non-A card that I successfully flashed with the Palit 310 W BIOS, and a Palit Gaming OC that I successfully flashed with the KFA2 380 W BIOS.

Best regards,
Medizinmann


----------



## Bastouny

Medizinmann said:


> As I said, you could try; it might work or it might be rejected. But the Gigabyte Gaming OC 366 W BIOS seems fine anyhow, and 14 W less isn't anything to worry about... and it is very unlikely that you brick your GPU in the process if you don't turn off the power while flashing...
> 
> I own a KFA2 non-A card that I successfully flashed with the Palit 310 W BIOS, and a Palit Gaming OC that I successfully flashed with the KFA2 380 W BIOS.
> 
> Best regards,
> Medizinmann


Thank you very much for your help


----------



## J7SC

Bastouny said:


> Thank you very much for your help


...just a quick additional comment on the Gigabyte Gaming OC 366 W BIOS... It has the same advertised power limit (366 W) as the stock BIOS on my two Gigabyte Aorus Xtreme WF WB cards (the full factory water block model, also 2x PCIe). While the Aorus Xtreme WF WB BIOS is different from the Gaming OC one, that BIOS (also rated at 366 W) pretty consistently pulls almost 380 W at max PL anyhow, at least without fans connected, as is the case with the factory water-cooled models. Since you mentioned that you will water-cool your GPU (without the fans plugged into the GPU's PCB), this may be relevant to your plans. BTW, here is the TechPowerUp link > to the Aorus 2080 Ti Xtr WF WB bios


----------



## Bastouny

J7SC said:


> ...just a quick additional comment on the Gigabyte Gaming OC 366 W BIOS... It has the same advertised power limit (366 W) as the stock BIOS on my two Gigabyte Aorus Xtreme WF WB cards (the full factory water block model, also 2x PCIe). While the Aorus Xtreme WF WB BIOS is different from the Gaming OC one, that BIOS (also rated at 366 W) pretty consistently pulls almost 380 W at max PL anyhow, at least without fans connected, as is the case with the factory water-cooled models. Since you mentioned that you will water-cool your GPU (without the fans plugged into the GPU's PCB), this may be relevant to your plans. BTW, here is the TechPowerUp link > to the Aorus 2080 Ti Xtr WF WB bios


Hi.
What worries me about the Aorus 2080 Ti Xtr WF WB is that its PCB is not a 16-power-phase reference PCB (like my 2080 Ti Founders Edition) but a 19-power-phase custom PCB.


----------



## J7SC

Bastouny said:


> Hi.
> What worries me about the Aorus 2080 Ti Xtr WF WB is that its PCB is not a 16-power-phase reference PCB (like my 2080 Ti Founders Edition) but a 19-power-phase custom PCB.


...fair enough, though other folks with my 19-phase cards have run the XOC 1 kW BIOS from the Strix (2x PCIe) and the KPE (3x PCIe). But it pays to be cautious with those expensive cards...


----------



## Laur'nt1664

Hello,

I devoured this post and the answers to the main questions!
A really well written and detailed post, top-notch and rich in information!

And while I'm here, I'm looking for a compatible BIOS with an increased power limit for my RTX 2080 Ti Aorus Xtreme 11G under a waterblock, which caps at 366 W on the stock BIOS (300 W × 122%)!

I'd like to reach at least 400 W.

Do you have any leads?

Cheers!


----------



## Laithan

Laur'nt1664 said:


> Hello,
> 
> I devoured this post along with the answers to the main questions!
> A really well-written and detailed post, top-notch and rich in information!
> 
> While I'm here, I'm looking for a BIOS with an increased power limit for my RTX 2080 Ti AORUS Xtreme 11G under a waterblock, which caps at 366W on the stock BIOS (300W x 122%).
> 
> I'd like to reach at least 400W.
> 
> Do you have any leads?
> 
> Cheers!


English translation... (maybe)

Hello,

I devoured this post as well as the answers to the main questions!
A really well-written and detailed post, top-notch and rich in information!

And while passing through, I'm looking for a BIOS with an increased power limit for my RTX 2080 Ti AORUS Xtreme 11G under a waterblock, which caps at 366W on the stock BIOS (300W x 122%).

I would like to reach at least 400W.

Do you have any leads?

Cheers!
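As an aside, the 366W figure is just arithmetic: the BIOS's default power target multiplied by the maximum value of the power-limit slider. A quick Python sketch (the 300W / 122% numbers come from the post above; any other inputs would be assumptions):

```python
# Effective power cap = BIOS default power target x max power-limit slider.
def max_power_cap(default_watts: float, max_slider_percent: float) -> float:
    """Return the highest sustained board power the BIOS will allow."""
    return default_watts * (max_slider_percent / 100.0)

# Aorus Xtreme stock BIOS: 300 W default target, slider maxes out at 122%.
print(max_power_cap(300, 122))  # -> 366.0
```

Which is why the slider alone can never get a 300W-default BIOS to 400W; only a BIOS with a higher default target and/or slider ceiling can.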


----------



## J7SC

Laithan said:


> English translation... (maybe)
> 
> Hello,
> 
> I devoured this post as well as the answers to the main questions!
> A really well-written and detailed post, top-notch and rich in information!
> 
> And while passing through, I'm looking for a BIOS with an increased power limit for my RTX 2080 Ti AORUS Xtreme 11G under a waterblock, which caps at 366W on the stock BIOS (300W x 122%).
> 
> I would like to reach at least 400W.
> 
> Do you have any leads?
> 
> Cheers!


@Laithan 'Merci beaucoup'

This thread has links to the Strix XOC (1000W) bios in the OP, which folks w/ the Aorus Xtr 11G got working (minus one I/O, as far as I remember)... caution advised, though, even w/ a water block


----------



## Dudko02753

Hi guys, I have a problem. I have two RTX 2080 Tis, one is a Palit RTX 2080 Ti Dual and the other is a Palit RTX 2080 Ti GamingPro OC, and I can't enable SLI.

Can anyone help me please?


----------



## Bastouny

Stupid question, but did you connect the two cards correctly with an SLI Bridge?


----------



## kithylin

Dudko02753 said:


> Hi guys, I have a problem. I have two RTX 2080 Tis, one is a Palit RTX 2080 Ti Dual and the other is a Palit RTX 2080 Ti GamingPro OC, and I can't enable SLI.
> 
> Can anyone help me please?


What motherboard are you using in your computer?


----------



## Dudko02753

kithylin said:


> What motherboard are you using in your computer?


Yes, I connected them with NVLink. The mobo is an ASUS Maximus XI Hero, the CPU an i9 9900KF, memory T-Force Extreme 32GB 4000MHz


----------



## Dudko02753

Bastouny said:


> Stupid question, but did you connect the two cards correctly with an SLI Bridge?


Yes, it's connected. I tried two different NVLink bridges


----------



## J7SC

EDIT - never mind re. bridges if you tried 2 different ones already. Have you flipped the cards around (1st card into 2nd slot, 2nd card into 1st slot)? That *may* help if the lead card has a physical PCB trace issue, though that's rare


----------



## Dudko02753

J7SC said:


> ^^^what he said - there's no indication of even the SLI option in the NV ...either the SLI bridge is missing or faulty.


It's not showing any options for SLI, and the NVLink bridge is new


----------






## kithylin

Dudko02753 said:


> It's not showing any options for SLI, and the NVLink bridge is new







ASUS ROG MAXIMUS XI HERO (WI-FI) | Gaming Motherboard | ASUS USA (rog.asus.com)




If you never downloaded them, make sure you get the chipset drivers for your board for Windows 10 64-bit, install them, reboot, then see if SLI shows up.


----------



## Dudko02753

kithylin said:


> ASUS ROG MAXIMUS XI HERO (WI-FI) | Gaming Motherboard | ASUS USA (rog.asus.com)
> 
> 
> 
> 
> If you never downloaded them, make sure you get the chipset drivers for your board for Windows 10 64-bit, install them, reboot, then see if SLI shows up.


OK, I'll try after I wake up, because I'm just coming home from work now 😃


----------



## haleNpace

Hey gents, still trying to find a workaround for my Galax. A couple of questions for the die-hard owners. When I originally tried to flash a BIOS last year, it exited with nothing changed because the error said it was older than the BIOS on the card. I've been keeping an eye on the BIOS database, and there is an MSI Gaming X Trio (custom PCB, MSI RTX 2080 Ti VBIOS ) with a BIOS compiled in March 2020, which is newer than the BIOS on my Galax SG OC 1-Click v2 (reference PCB, GALAX RTX 2080 Ti VBIOS ). My card doesn't have the USB I/O physically on the board but still lists the port under the device specs in Windows.

Will a custom PCB BIOS work on a reference PCB? I have read here that it can, but that it has led to fan compatibility issues and loss of performance in some instances. How do I identify that without flashing?
Is the MSI card a version with the newer eeprom based on the date of the bios?
How does a casual user like myself identify the difference when looking at the details on the GPU database page?
Does not having the port physically on the board narrow my options for a bios?
Apart from the firmware, eeprom and bios build date what other things should I be looking at?

Thank you in advance, apologies if you’ve answered all these questions 641 times already.

edit - I also wrote to Galax asking for a 300W BIOS, and they told me they didn't have one at this stage; if one becomes available it would be posted to their website.

One thing I have learned from all of this for others in my situation is to water cool the card. It’s the easiest option to get the performance you’re looking for.


----------



## Krzych04650

Dudko02753 said:


> Yes, I connected them with NVLink. The mobo is an ASUS Maximus XI Hero, the CPU an i9 9900KF, memory T-Force Extreme 32GB 4000MHz
> View attachment 2484412


Are you using an NVMe drive or any other device that takes PCIe lanes? SLI requires at least x8 per card, if I recall correctly.


----------



## kithylin

Krzych04650 said:


> Are you using nvme drive or any other device that takes pcie lanes? SLI requires at least x8 per card if I recall correctly.


If I'm reading the specifications page right, that motherboard should be able to run at least one NVMe drive and still run SLI at the same time.
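A simplified sketch of why that works, assuming the usual layout for this class of board (the CPU's 16 lanes bifurcate x8/x8 across the two GPU slots, while M.2 drives hang off the chipset and don't touch the GPU lanes; this is an illustrative model, not a full chipset map):

```python
# Toy model of CPU lane bifurcation on a two-GPU-slot consumer board:
# one GPU gets the full x16, two GPUs split it x8/x8.
def gpu_link_widths(num_gpus: int, cpu_lanes: int = 16) -> list[int]:
    if num_gpus == 1:
        return [cpu_lanes]
    if num_gpus == 2:
        return [cpu_lanes // 2] * 2
    raise ValueError("model only covers 1-2 CPU-attached GPUs")

widths = gpu_link_widths(2)
print(widths)                       # -> [8, 8]
print(all(w >= 8 for w in widths))  # SLI's x8-per-card requirement is met
```

So a chipset-attached NVMe drive shouldn't break SLI here; the x8/x8 split already satisfies the minimum.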


----------



## Dudko02753

kithylin said:


> If I'm reading the specifications page right that motherboard should be able to run at least one NVME drive and still run SLI at the same time.


Yes, I use a Samsung 980 EVO NVMe. The cards are running at x8, here is a picture


----------



## Dudko02753

OK, I reinstalled the chipset drivers and it's the same, no SLI options


----------



## J7SC

Dudko02753 said:


> OK, I reinstalled the chipset drivers and it's the same, no SLI options


...you probably already thought of that, but do you have (or can you borrow) two older NVIDIA GPUs you can try in SLI (e.g. 980s)? You mentioned you already tried different SLI bridges, and I am wondering if there is some sort of GPU PCB issue affecting the SLI signal on one of the 2080 Ti GPUs...


----------



## Dudko02753

J7SC said:


> ...you probably already thought of that, but do you have (or can you borrow) two older NVIDIA GPUs you can try in SLI (e.g. 980s)? You mentioned you already tried different SLI bridges, and I am wondering if there is some sort of GPU PCB issue affecting the SLI signal on one of the 2080 Ti GPUs...


Yes, I tried; GTX 780 SLI works fine


----------



## Dudko02753

Dudko02753 said:


> Yes, I tried; GTX 780 SLI works fine


I also tried a different mobo, an ASRock X99 Extreme4 with an i7 5960X, and it's the same, no SLI options


----------



## J7SC

Dudko02753 said:


> *Yes, I tried; GTX 780 SLI works fine*


...then you know it must be something physical on at least one of the 2080 Ti PCBs... maybe try removing the coolers and carefully inspecting whether you see anything odd (discolored bits, scratched traces). Start where the SLI bridge connects to the GPUs ('top left').


----------



## Dudko02753

OK, tomorrow I'll check the PCBs and let you know... thanks for the help guys


----------



## Dudko02753

OK, I checked the PCBs and everything is fine, no damage at all... what now? I tried an old driver, same thing


----------



## EarlZ

Are the memory Tjunction temps read by HWiNFO64 on the 2080 Super/Ti based on an on-die sensor, or are they just some fancy offset based on the GPU temp?


----------



## AndrejB

EarlZ said:


> Are the memory Tjunction temps read by HWiNFO64 on the 2080 Super/Ti based on an on-die sensor, or are they just some fancy offset based on the GPU temp?


It's a sensor; I dropped it 10°C by changing the thermal pads.


----------



## Dudko02753

The 780s work normally in SLI


----------



## JustinThyme

Everything still works normally in SLI, contrary to the TPU BS that NVIDIA would drop support as of Jan 1, 2021. Well, it's April and here are the latest drivers. The only thing is some games don't support it natively, but most can be forced with NVIDIA Inspector profiles.


----------



## Krzych04650

JustinThyme said:


> Everything still works normally in SLI, contrary to the TPU BS that NVIDIA would drop support as of Jan 1, 2021. Well, it's April and here are the latest drivers. The only thing is some games don't support it natively, but most can be forced with NVIDIA Inspector profiles.
> 
> 
> View attachment 2485160


Support is dropped in a sense that from now on no new profiles will be created by NVIDIA, and Ampere is not compatible at all, but existing SLI implementations will work on supported GPUs same as they always did.


----------



## JustinThyme

Krzych04650 said:


> Support is dropped in a sense that from now on no new profiles will be created by NVIDIA, and Ampere is not compatible at all, but existing SLI implementations will work on supported GPUs same as they always did.


On Ampere it's only there IF you buy 3090s; it's nonexistent on 3080s (no connector for a bridge)... who knows what the Tis or future releases will bring.
Profiles are still there; I don't know about anything new. What I do know is NVIDIA doesn't have to create the profiles. It can be done with NVIDIA Inspector. If you aren't a whiz at it, there are dozens of forums where profiles for games without native support have already been shared. Just a C&P away. Everything I have loaded either supports it natively, or I've created a profile or gotten one from elsewhere. Sometimes it's as simple as copying it from a game that has native support.


----------



## J7SC

Krzych04650 said:


> Support is dropped in a sense that from now on no new profiles will be created by NVIDIA, and Ampere is not compatible at all, but existing SLI implementations will work on supported GPUs same as they always did.


...I still enjoy my 4K system w/ the 2x 2080 TIs and will for many more years, but I am glad I got the 3090 for another 4K system when (and at the price) I did, given what's been going on in the market... I use multiple systems anyway also in part for productivity, and I needed one with at least 16 GB of VRAM. 

I do think that mGPU will make a comeback, but as 'tiles' (think Ryzen CPU) transparent to the apps, which see it as one GPU, per recent Intel and AMD leaks (I'm sure NVIDIA is the same). This will eliminate the need for custom SLI profiles, which was always a bit of a drag. As posted before, I run my 2x 2080 Ti in CFR wherever possible, and that is 'kinda, sorta' halfway to mGPU


----------



## JustinThyme

J7SC said:


> ...I still enjoy my 4K system w/ the 2x 2080 TIs and will for many more years, but I am glad I got the 3090 for another 4K system when (and at the price) I did, given what's been going on in the market... I use multiple systems anyway also in part for productivity, and I needed one with at least 16 GB of VRAM.
> 
> I do think that mGPU will make a comeback, but as 'tiles' (think Ryzen CPU) and transparent to the apps who see it as one GPU, per recent Intel and AMD leaks (I'm sure NVidia is the same). This will eliminate the need for custom SLI profiles which was always a bit of a drag. As posted before, I run my 2x 2080 Ti in CFR wherever possible, and that is 'kinda, sorta' half way to mGPU


Yeah, I'm sitting this one out. My pair of Strix O11G OC cards gives me everything I need and then some, 22GB VRAM between the two. I wouldn't buy a card right now for anything. They have flat out gone nuts!!! 2080 Tis are going for double what I paid for mine at launch, and I got 15% off MSRP to boot. They are going for $2K, which is over the MSRP for the 3090. Now if I was still sitting on 1080 Tis (still have a pair of those in an overkill HTPC) I might consider it at MSRP. Saw a funny one on eBay with the Strix 3090 OC for over $5K!!! I'm like, cheese and rice man! Then others list a little under the rest (generally about $3100), but when you look it's $100 shipping. Right now is the worst time ever to build a PC. All the platforms are at end of life, forget the GPU aspect of it: AM4, 1151, X299, TRX are all finished. The only thing new is 1200, and my bet is that it will see what's coming out now and one more, then it too will be done. Nah, I'll sit where I am until some sense is made of this madness! I'm not hurting in the $$ department by any means, but DAMN!! Build a decent PC and it costs as much as a freakin' car!! Even 1080 Tis are through the roof. $1200... really? I paid $1400 for two from Microcenter.


----------



## kithylin

JustinThyme said:


> Yeah, I'm sitting this one out. My pair of Strix O11G OC cards gives me everything I need and then some, 22GB VRAM between the two. I wouldn't buy a card right now for anything. They have flat out gone nuts!!! 2080 Tis are going for double what I paid for mine at launch, and I got 15% off MSRP to boot. They are going for $2K, which is over the MSRP for the 3090. Now if I was still sitting on 1080 Tis (still have a pair of those in an overkill HTPC) I might consider it at MSRP. Saw a funny one on eBay with the Strix 3090 OC for over $5K!!! I'm like, cheese and rice man! Then others list a little under the rest (generally about $3100), but when you look it's $100 shipping. Right now is the worst time ever to build a PC. All the platforms are at end of life, forget the GPU aspect of it: AM4, 1151, X299, TRX are all finished. The only thing new is 1200, and my bet is that it will see what's coming out now and one more, then it too will be done. Nah, I'll sit where I am until some sense is made of this madness! I'm not hurting in the $$ department by any means, but DAMN!! Build a decent PC and it costs as much as a freakin' car!! Even 1080 Tis are through the roof. $1200... really? I paid $1400 for two from Microcenter.


I too have a 1080 Ti in one of my computers, and it's worth more used today than I paid for it new in 2017; this is all nuts right now. I'm just happy at least the Ryzen 5800X I bought a few weeks ago was at MSRP. Now if only GPUs would come down too.


----------



## Krzych04650

J7SC said:


> ...I still enjoy my 4K system w/ the 2x 2080 TIs and will for many more years, but I am glad I got the 3090 for another 4K system when (and at the price) I did, given what's been going on in the market... I use multiple systems anyway also in part for productivity, and I needed one with at least 16 GB of VRAM.
> 
> I do think that mGPU will make a comeback, but as 'tiles' (think Ryzen CPU) and transparent to the apps who see it as one GPU, per recent Intel and AMD leaks (I'm sure NVidia is the same). This will eliminate the need for custom SLI profiles which was always a bit of a drag. As posted before, I run my 2x 2080 Ti in CFR wherever possible, and that is 'kinda, sorta' half way to mGPU


I am enjoying my 2080 Tis as well; getting a second 2080 Ti on the cheap (450 EUR) right after the 3090 reviews dropped was a great decision, especially in hindsight, looking at wth is going on right now. CFR in particular was transformative for the setup: it adds support to unsupported games, it fixes games with bad native profiles, it does not have issues like frame pacing or artifacts, and the hit rate is really high. Only scaling is low (with some exceptions where you put on some heavy external load like SGSSAA), but it is generally enough to match a 3090, essentially meaning that the games where this setup is slower than a 3090 are mostly RT/DLSS games that I can still play well on one 2080 Ti; I recently played through Control maxed out with all RT enabled at 3840x1600. Sure, a 3090 would be much better for that particular case, but so is 2080 Ti SLI in many others, essentially pulling RTX 4090+ performance when scaling is good. And having such a system in the current circumstances only makes things even sweeter.

I am not considering dual systems, but I have thought about having a 3080 Ti and 2x 2080 Ti in one system and using either depending on the game, for the ultimate performance at all times. From what I've heard so far, though, the driver does not have such an ability and I cannot just toggle between them, so ideally I would need some kind of physical switch or a mobo with the ability to disable slots, as I doubt that disabling/enabling in the device manager would handle that. I need to ask about it in a few places; it would be the absolute endgame if it worked



kithylin said:


> I too have a 1080 Ti in one of my computers and It's worth more today used than I paid for it new in 2017, this is all nuts right now. I'm just happy at least the Ryzen 5800X I bought a few weeks ago was at MSRP. Now if only GPU's will come down too.


Yea, I've been looking at the used market recently and GTX 1080s are literally more expensive than what I sold them for when Turing launched in 2018, and prices were already very high back then, with many of the cards maintaining 100% of their value; e.g. you could buy a 1080 and then sell it a year later, used, for the same price if not more. What is going on today is even a few steps above that: mid-range cards listed for 3x MSRP and still selling. This was the tactic with the 3060 here, I think; shops didn't even bother listing at MSRP like with previous launches and started from 800 EUR right away, and it actually worked, to the extent that you could actually buy one for a good few days with next-day delivery as you normally would, only you had to pay high-end 3080 MSRP for a random 3060


----------



## Laithan

Sad times... so much greed and decline of society in general. Our hobby has been destroyed, and sadly all the people PAYING these prices are supporting it. Mr. Wallet always has the last word. I wouldn't buy anything right now (or in the past year) and guess I will have my 2080 Ti for a few more years... with zero regrets. I paid $900 for it and I even think THAT was expensive, but I enjoy 4K without SLI issues, so I am happy. I just feel bad for those stuck between a rock and a hard place. NVIDIA and AMD don't give a crap about us, don't think they do... There has been enough time for them to do something about it by now, and they are just sitting back loving the artificial inflation... it is no longer about the consumers who made them who they are in the first place.

Rant off..


----------



## JustinThyme

Laithan said:


> Sad times... so much greed and decline of society in general. Our hobby has been destroyed, and sadly all the people PAYING these prices are supporting it. Mr. Wallet always has the last word. I wouldn't buy anything right now (or in the past year) and guess I will have my 2080 Ti for a few more years... with zero regrets. I paid $900 for it and I even think THAT was expensive, but I enjoy 4K without SLI issues, so I am happy. I just feel bad for those stuck between a rock and a hard place. NVIDIA and AMD don't give a crap about us, don't think they do... There has been enough time for them to do something about it by now, and they are just sitting back loving the artificial inflation... it is no longer about the consumers who made them who they are in the first place.
> 
> Rant off..


I'm with you there. I refuse to support it. Problem is, so many are buying them at outrageous prices that it simply fuels the feeding frenzy. My call would be for all geeks to unite and leave the scalpers holding the bag with a bunch of cards they can't sell. I'm used to two things: a short period after every release where the same happens, and idiots camped outside an Apple store every time a new iPhone is released, even though they had the current version the night before. Not like those are cheap either. They keep sending me emails to upgrade to a 12, but I have an 11 Pro Max that I upgraded to from an 8s that was getting to the point it was going to need a new battery anyway. Otherwise I'd still have that one. My kids have 4s and don't want new ones; had the batteries replaced in those once. I don't mind paying a reasonable MSRP, but I'll be damned if I'm paying $3K+ for a card that retails for $1800 (a 3090 OC is the only way I'd go). Saw one on fleabay that made me spit out my coffee: $5K!! **** off dude, and it was in Japan with $100 shipping. They don't have bird emojis or I'd be posting that! I'm really not getting this madness or anyone who supports it. If you got your goods for MSRP then that's as good as you can do, but 50% markup minimum, and that $5K was just nucking futs!

I honestly think the whole chip shortage story is BS. They've been cranking out chips for years and now there is a shortage? Suppliers say it's logistics and blame it on the Wuhan Crud, and the card manufacturers are blaming the chip manufacturers. Well, there are no shortages on fleabay. You can buy any card you want and have it overnighted for 100% over MSRP, with a million listings.







NVIDIA RTX 3090 Pre-order & In Stock Tracker - NowInStock.net (www.nowinstock.net)






Here, want a white Strix?








ASUS ROG STRIX NVIDIA RTX 3090 White OC Edition Graphics Card | SHIP SAME DAY | eBay (www.ebay.com)


----------



## User00000

I need some urgent help with my Gigabyte 2080 Ti OC. For some reason, all of a sudden the max clock is locked at 1350MHz and the memory at 1750MHz. No matter what I do, there is no way to return it to its previous normal state. I have tried a clean driver uninstall, reflashing the BIOS, scanning the GPU memory for errors (none), re-seating the GPU, reinstalling the MB BIOS... etc. I have absolutely tried everything under the sun and nothing works. The odd thing is, when using the Gigabyte autoscan the clock goes up, but after it finishes it goes back to the locked state.


----------



## J7SC

User00000 said:


> I need some urgent help with my Gigabyte 2080 Ti OC. For some reason, all of a sudden the max clock is locked at 1350MHz and the memory at 1750MHz. No matter what I do, there is no way to return it to its previous normal state. I have tried a clean driver uninstall, reflashing the BIOS, scanning the GPU memory for errors (none), re-seating the GPU, reinstalling the MB BIOS... etc. I have absolutely tried everything under the sun and nothing works. The odd thing is, when using the Gigabyte autoscan the clock goes up, but after it finishes it goes back to the locked state.


...normally, dropping to and staying at the 1350 MHz 'fail safe' is not a good sign. Were there any physical changes you made to your 2080 Ti, such as changing fan arrangements or adding a w-block? I ask because with some 2080 Tis, disconnecting the GPU fans from their GPU PCB header would lock the card to the 1350 MHz fail safe (some MSI models come to mind). But again, what steps/changes, if any, preceded this locked 1350 MHz?


----------



## User00000

J7SC said:


> ...normally, dropping to and staying at the 1350 MHz 'fail safe' is not a good sign. Were there any physical changes you made to your 2080 Ti, such as changing fan arrangements or adding a w-block? I ask because with some 2080 Tis, disconnecting the GPU fans from their GPU PCB header would lock the card to the 1350 MHz fail safe (some MSI models come to mind). But again, what steps/changes, if any, preceded this locked 1350 MHz?


Yeah, the card was removed and a water block attached. You reckon maybe I need to connect the fan header?


----------



## User00000

J7SC said:


> ...normally, dropping to and staying at the 1350 MHz 'fail safe' is not a good sign. Were there any physical changes you made to your 2080 Ti, such as changing fan arrangements or adding a w-block? I ask because with some 2080 Tis, disconnecting the GPU fans from their GPU PCB header would lock the card to the 1350 MHz fail safe (some MSI models come to mind). But again, what steps/changes, if any, preceded this locked 1350 MHz?


Can the unlock be done by the user with special software, or must the card be sent back?


----------



## J7SC

User00000 said:


> Yea the card was removed and water block attached. Maybe will need to attach the fan header you reckon?


...that's the first thing I would check. Also, sometimes over-tightening and/or uneven pressure of the block on the GPU / PCB can result in a lock to 1350 MHz. Not saying that's the case here, but if you still have the issue after trying the fan header item I mentioned, and then remounting the air cooler brings everything back to normal, you can look at a careful remount of the water block.

...as to your just-posted question, I don't know; it depends on the root cause(s). Also, check out this vid, noting that NVIDIA actually admitted to a manufacturing problem with early 2080 Tis, so even if the card is out of warranty you might get lucky... though again, it depends on the root cause, as 1350 can have different causes


----------



## User00000

J7SC said:


> ...that's the first thing I would check. Also, sometimes over-tightening and/ot uneven pressure of the block on the GPU / PCB can result in a lock to 1350 MHz. Not saying that's the case here, but if you - after you first try out that fan header item I mentioned and still have the issue - then remount the air-cooler and everything is back to normal, you can look at a careful remount of the water block
> 
> ...as to your just-posted question, I don't know, depends on the root cause(s). Also, check out this vid, noting that NVidia actually admitted to the problem w/ early 2080 Ti re. a manufacturing problem, so even if the card is out of warranty, you might get lucky...though again, it depends on the root cause as 1350 can have different causes


 thanks!


----------



## Toopy

Finally added thermal pads on the rear of my 2080 Ti Inno3D OC X3, a backplate sandwich mod. 3mm Thermalright pads fit perfectly.
Dropped memory temps by 12°C. Kicking myself for not doing it sooner.

Also got some small heatsinks with thermal tape on them from Aliexpress








2.48US $ | 8x30x8mm Ram Heatsink Chipset Aluminum Heatsink With Thermal Conductive Tape | Fans & Cooling - AliExpress (www.aliexpress.com)




The Inno3D runs a sub-plate for the mem and VRM (circled in red). I stuck the heatsinks to the edge of the sub-plate (blue) along both sides of the card.
This dropped mem temps about a further 14°C, and the added heatsinks get very hot, too hot to touch for longer than a few seconds.
I don't know why Inno3D didn't make the sub-plate with fins here to begin with; they are right in the airflow from the fans.











Basically from 96°C to 69°C


----------



## long2905

Toopy said:


> Finally added thermal pads on the rear of my 2080ti Inno3d oc x3, backplate sandwich mod. 3mm Thermalright fit perfectly.
> dropped memory temps by 12 degrees C. Kicking myself for not doing it sooner.
> 
> Also got some small heatsinks with thermal tape on them from Aliexpress
> 
> 
> 
> 
> 
> 
> 
> 
> 2.48US $ | 8x30x8mm Ram Heatsink Chipset Aluminum Heatsink With Thermal Conductive Tape | Fans & Cooling - AliExpress (www.aliexpress.com)
> 
> 
> 
> 
> The inno3d runs a sub-plate for the mem and vrm (circled in red), Stuck the heatsinks to the edge of the sub-plate (blue) along both sides of the card.
> This dropped mem temps about 14 degrees C and the added heatsinks are very hot, too hot to touch for longer than a few seconds.
> I don't know why inno3d didn't make the sub-plate with fins here to begin with, they are in the air flow from the fans.
> 
> Basically from 96c to 69c


Do you happen to have more pictures of your mods with the card fully disassembled? I'd like to refer to them and see whether I can do something similar with my card.


----------



## Toopy

long2905 said:


> do you happen to take more pictures of your mods with the card fully disassembled? I would like to refer to it and see if i can do something similar with my card or not.


I'm afraid I didn't. The Inno3D X3 has a metal backplate; I figured if I could drag some heat through to it from the mem, it would act as a heatsink.
From stock there were thermal pads on the rear of the GPU and VRMs; I just added some more here (pic from TechPowerUp).
The backplate is very hot now around the memory, so that's functioning well.










As for the sub-plate, the sides/edges were accessible on each side of the card without taking off the cooler. I measured the available space and bought the corresponding heatsinks.
I was a little doubtful as to how well this would work, considering the size and contact of the thermal interface from the edge of the sub-plate to the heatsink; it turns out it worked amazingly well.
Some more pics below. As I said, the added heatsinks get extremely hot, so they are doing their job; I don't have my thermal camera on hand, but a thermal probe measures them at 57.2°C.
I didn't remove the sub-plate, so the thermal pads on the mem/VRM are the stock ones, and they are transferring heat to the sub-plate very well.










The heatsinks were a perfect fit on the PCIe side.









You can see the added thermal pad/s between the backside of the card and the backplate here










I had to remove the main cooler to get to the backplate screws, so I repasted the GPU, which is also running cooler, probably due to higher-quality thermal paste.

EDIT: Found some pictures of the subplate









So far this has enabled me to OC an extra 200MHz on the memory. Still testing


----------



## Bastouny

Hi all,

I would like to set up an SLI :

RTX 2080 Ti Founders Edition
RTX 2080 Ti KFA2 OC White
(both under water cooling)

1) Can you confirm that SLI will work? (I think so, after having searched the internet a lot.)

2) How is the overclock done? Do the cards synchronize their frequencies? Is it possible to overclock the cards independently? If it is preferable for them to be at the same frequencies, what can I do with MSI Afterburner? If I overclock my Founders, will the KFA2 automatically apply the same frequencies when launching a game, or not?

Thanks for your help


----------



## kithylin

Bastouny said:


> Hi all,
> 
> I would like to set up an SLI :
> 
> RTX 2080 Ti Founders Edition
> RTX 2080 Ti KFA2 OC White
> (both under water cooling)
> 
> 1) Can you confirm to me that the SLI will work? (I think so after having searched the internet a lot)
> 
> 2) How is the overclock done? Does the card synchronize their frequencies? Is it possible to overclock the cards independently? If it is preferable for them to be at the same frequencies, what can I do with MSI Afterburner, I overclock my Founders and the KFA2 will automatically apply the same frequencies when launching a game? or not ?
> 
> Thanks for your help


The cards do not sync their frequencies at all, and you end up with bad microstutter if you don't sync them yourself. You'll want to use MSI Afterburner's curve editor (Ctrl+F) and set a fixed clock at a fixed voltage to have the best experience.
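For readers unfamiliar with the curve editor, the "fixed clock @ fixed voltage" idea can be modeled in a few lines (illustrative only; the curve points below are made up, not read from any real card):

```python
# Toy model of what the Afterburner curve editor does when you lock a
# fixed clock at a fixed voltage: every point at or above the chosen
# voltage is clamped to the chosen frequency, so the card never boosts
# past that clock regardless of Boost headroom.
def flatten_curve(curve, lock_mv, lock_mhz):
    """curve: list of (millivolts, mhz) points, sorted by voltage."""
    return [(mv, mhz if mv < lock_mv else lock_mhz) for mv, mhz in curve]

# Hypothetical stock-ish curve points (not from a real BIOS).
stock = [(800, 1800), (900, 1950), (1000, 2070), (1050, 2100)]
locked = flatten_curve(stock, lock_mv=900, lock_mhz=1950)
print(locked)  # points from 900 mV upward are pinned at 1950 MHz
```

Doing this on both cards with the same lock point is what keeps the pair in step.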


----------



## Bastouny

kithylin said:


> The cards do not sync their frequencies at all, and you end up with bad microstutter if you don't sync them yourself. You'll want to use MSI Afterburner's curve editor (Ctrl+F) and set a fixed clock at a fixed voltage to have the best experience.


Hi,

Thank you for your answer. I am rather new to MSI Afterburner and to GPU overclocking in general. You indicate that the cards must run at identical frequencies.

So how does NVIDIA Boost work with SLI? It varies the frequency depending on temperature and the power limit.
Also, how should I adjust my power limit, which differs between the two cards? On top of that, the value is shown in % rather than watts.
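On the % vs watts point: the slider is a percentage of each card's own base power target, so the same percentage means different wattages on different cards. A minimal sketch (the base values below are assumptions for illustration; read your card's actual base target from GPU-Z):

```python
# Afterburner's power-limit slider is relative to the board's base power
# target, so identical percentages give different wattages per card.
def limit_watts(base_w, percent):
    return base_w * percent / 100

# Assumed base targets, illustrative only; verify with GPU-Z.
fe_base, kfa2_base = 260, 300
print(limit_watts(fe_base, 100))    # 260.0
print(limit_watts(kfa2_base, 100))  # 300.0
print(limit_watts(fe_base, 123))    # 319.8
```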


----------



## Krzych04650

Bastouny said:


> Hi all,
> 
> I would like to set up SLI:
> 
> RTX 2080 Ti Founders Edition
> RTX 2080 Ti KFA2 OC White
> (both under water cooling)
> 
> 1) Can you confirm that SLI will work? (I think so, after searching the internet a lot.)
> 
> 2) How is the overclock done? Do the cards synchronize their frequencies? Is it possible to overclock them independently? If it's preferable for them to run at the same frequencies, what should I do in MSI Afterburner: if I overclock my Founders, will the KFA2 automatically apply the same frequencies when launching a game, or not?
> 
> Thanks for your help


Whether SLI will work depends only on whether you can connect the cards with a bridge, whether your motherboard supports it, and whether you have at least 8 PCIe lanes per card. NVLink bridges are not flexible, so the cards have to be exactly the same width.

Overclocking is done the same as with a single card, except you need to create two curves. Choosing a profile in MSI Afterburner will then apply the corresponding curve to each card. There is a frequency-sync option, but it is buggy.

There are two important things to keep in mind when overclocking SLI, though.

First, the gain from an OC in SLI behaves differently than with a single card, and you gain much less. For example, for the same OC I get a 15% gain on a single card but only around 6% in SLI; I've seen that with both 2080 Ti SLI and 1080 SLI. There can be exceptions like The Witcher 3, which scales perfectly with memory clock: there I get 11%, but still less than the 18% on a single card. So all in all, core clock doesn't do that much in SLI.

You also really don't want to be hitting the power/thermal limit or introducing any other variables like that. The trick with SLI is to tune the cards to be as "calm" as possible: absolutely no hitting the power limit, no crazy OC, etc. So max out your memory clock and aim for a middling core OC like 2025 MHz at lower voltage. You should also flash both cards to the 380W BIOS; 320W on the FE is going to be a problem.
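Those scaling figures can be boiled down to a quick ratio, using only the percentages quoted in this post (a throwaway calculation, nothing more):

```python
# Fraction of the single-card OC gain that survives in SLI, per the
# numbers above: 15% single vs ~6% SLI in general, 18% vs 11% in
# The Witcher 3 (memory-clock-bound).
def oc_carryover(single_gain_pct, sli_gain_pct):
    return sli_gain_pct / single_gain_pct

print(round(oc_carryover(15, 6), 2))   # 0.4  -> ~40% of the gain survives
print(round(oc_carryover(18, 11), 2))  # 0.61 -> Witcher 3 does better
```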


----------



## Bastouny

Krzych04650 said:


> Whether SLI will work depends only on whether you can connect the cards with a bridge, whether your motherboard supports it, and whether you have at least 8 PCIe lanes per card. NVLink bridges are not flexible, so the cards have to be exactly the same width.
> 
> Overclocking is done the same as with a single card, except you need to create two curves. Choosing a profile in MSI Afterburner will then apply the corresponding curve to each card. There is a frequency-sync option, but it is buggy.
> 
> There are two important things to keep in mind when overclocking SLI, though.
> 
> First, the gain from an OC in SLI behaves differently than with a single card, and you gain much less. For example, for the same OC I get a 15% gain on a single card but only around 6% in SLI; I've seen that with both 2080 Ti SLI and 1080 SLI. There can be exceptions like The Witcher 3, which scales perfectly with memory clock: there I get 11%, but still less than the 18% on a single card. So all in all, core clock doesn't do that much in SLI.
> 
> You also really don't want to be hitting the power/thermal limit or introducing any other variables like that. The trick with SLI is to tune the cards to be as "calm" as possible: absolutely no hitting the power limit, no crazy OC, etc. So max out your memory clock and aim for a middling core OC like 2025 MHz at lower voltage. You should also flash both cards to the 380W BIOS; 320W on the FE is going to be a problem.


Thank you for the clarification.

To be honest, I don't have the KFA2 yet; it's a used card (with the same PCB) that I spotted. I am desperately looking for a second Founders Edition... if anyone has one for sale, I'd gladly take it!

On the other hand, I will not be able to flash my FE with the 380W BIOS because the XUSB-FW version is not identical.


----------



## kithylin

Bastouny said:


> Hi,
> 
> Thank you for your answer. I am rather new to MSI Afterburner and to GPU overclocking in general. You indicate that the cards must run at identical frequencies.
> 
> So how does NVIDIA Boost work with SLI? It varies the frequency depending on temperature and the power limit.
> Also, how should I adjust my power limit, which differs between the two cards? On top of that, the value is shown in % rather than watts.


This is part of why everyone on the internet claims "SLI IS DEAD". It's virtually impossible to synchronize the clocks on both cards with the 2000 and 3000 series because of how NVIDIA Boost works. I've heard people in this thread talking about some sort of XOC BIOS for the 2000 series? If that exists and it can run with no power limit, then you might be able to get the clocks synced: put that BIOS on both cards, use full-cover water blocks on both, keep both cards under 60°C with your loop, and set the same clocks via two curves, and it _MIGHT_ work. The GTX 1080 Ti is the last NVIDIA card I'm aware of where we could lock in a fixed clock for multiple cards in SLI with an unlocked XOC BIOS and Afterburner. And if you don't sync the clocks, you get microstutter, and it's not a fun experience.

On top of that, SLI support from games is practically zero. Almost all new games released within the past two years have no SLI support at all. NVIDIA has already confirmed they will be dropping SLI support from their drivers some time this year; it was supposed to happen this spring, but it could happen any time now. In the future, the only way to get SLI / dual cards working will be in DirectX 12 / Vulkan games via mGPU, which is developer-dependent: the game developers have to code support into their games, and if they don't, the game won't use multiple cards at all.


----------



## Krzych04650

kithylin said:


> This is part of why everyone on the internet claims "SLI IS DEAD". It's virtually impossible to synchronize the clocks on both cards with the 2000 and 3000 series because of how NVIDIA Boost works. I've heard people in this thread talking about some sort of XOC BIOS for the 2000 series? If that exists and it can run with no power limit, then you might be able to get the clocks synced: put that BIOS on both cards, use full-cover water blocks on both, keep both cards under 60°C with your loop, and set the same clocks via two curves, and it _MIGHT_ work. The GTX 1080 Ti is the last NVIDIA card I'm aware of where we could lock in a fixed clock for multiple cards in SLI with an unlocked XOC BIOS and Afterburner. And if you don't sync the clocks, you get microstutter, and it's not a fun experience.


Clocks do synchronize, as they always have. The only situation where they don't is explicit mGPU in games, SOTTR for example, but that is not SLI. A lot of bad things can be said about the viability of SLI, and it will really die once the 4000 series launches and a single GPU matches the best SLI setup you can build, but there is no need to invent additional imaginary problems like clock synchronization to make it look worse; it is entering its last year of life anyway.


----------



## klepp0906

Anyone know if it's possible to flash the EVGA Hydro Copper BIOS onto an XC Ultra? I have the Hydro Copper block and it's cool as a cucumber, but I'm leery of trying the BIOS since it's non-reference.

There was a mod on the EVGA forums who mentioned he used the FTW3 BIOS on his XC without problems (which was also a reference-to-non-reference flash), but I don't want to take a single piece of anecdotal evidence to heart without some reinforcement.


----------



## Medizinmann

klepp0906 said:


> Anyone know if it's possible to flash the EVGA Hydro Copper BIOS onto an XC Ultra? I have the Hydro Copper block and it's cool as a cucumber, but I'm leery of trying the BIOS since it's non-reference.
> 
> There was a mod on the EVGA forums who mentioned he used the FTW3 BIOS on his XC without problems (which was also a reference-to-non-reference flash), but I don't want to take a single piece of anecdotal evidence to heart without some reinforcement.


Why not just use the Galax/KFA2 380W BIOS for reference PCBs from page one?

Best regards,
Medizinmann


----------



## kithylin

klepp0906 said:


> Anyone know if it's possible to flash the EVGA Hydro Copper BIOS onto an XC Ultra? I have the Hydro Copper block and it's cool as a cucumber, but I'm leery of trying the BIOS since it's non-reference.
> 
> There was a mod on the EVGA forums who mentioned he used the FTW3 BIOS on his XC without problems (which was also a reference-to-non-reference flash), but I don't want to take a single piece of anecdotal evidence to heart without some reinforcement.


We're on page 643 of this thread, and there have been literally thousands of posts here about people flashing all sorts of different BIOSes onto stock and AIB 2080 Tis. I think we have a little more data than just one person by now.


----------



## klepp0906

kithylin said:


> We're on page 643 of this thread, and there have been literally thousands of posts here about people flashing all sorts of different BIOSes onto stock and AIB 2080 Tis. I think we have a little more data than just one person by now.


So you're suggesting what? That I go through 643 pages? Or do you simply not know the answer?

I flashed the Galax 380W BIOS many moons ago and have used it ever since. However, that was on a 16-power-phase reference board; the FTW3/Hydro Copper are custom 19-phase boards. I'm not interested in making a paperweight.


----------



## klepp0906

Medizinmann said:


> Why not just use the Galax/KFA2 380W BIOS for reference PCBs from page one?
> 
> Best regards,
> Medizinmann


Ah, missed your post. I am using that one. However, I'm having some issues with LED sync, and figured that on the very slight chance an EVGA BIOS helps, I might as well try it, even at the cost of 7 watts. Plus I feel better (marginally, to be fair) having an EVGA BIOS on an EVGA card.


----------



## klepp0906

Well, I YOLO'd it. Fortunately all is well, and, though I hadn't even looked beforehand, the boost clock is much higher on the Hydro Copper version than on the Galax, so it was a worthwhile change anyway, irrespective of the few watts of power target lost (I assume).

It also fixed my obnoxious LED-sync issue, so there's that. I'm going to call it a win (outside of paying abhorrent prices for a 2080 Ti only to have it shat on by a 3080, but that's a rant for another day). The bright side is I couldn't get one even if I wanted to at the moment ;p


----------



## tps3443

klepp0906 said:


> Well, I YOLO'd it. Fortunately all is well, and, though I hadn't even looked beforehand, the boost clock is much higher on the Hydro Copper version than on the Galax, so it was a worthwhile change anyway, irrespective of the few watts of power target lost (I assume).
> 
> It also fixed my obnoxious LED-sync issue, so there's that. I'm going to call it a win (outside of paying abhorrent prices for a 2080 Ti only to have it shat on by a 3080, but that's a rant for another day). The bright side is I couldn't get one even if I wanted to at the moment ;p


Nothing wrong with a 2080 Ti right now (if the price is reasonable, anyway).

My 2080 Ti FE is watercooled, has 8 mΩ shunts soldered on and a 532 W power limit, sits in a loop with three 360 mm radiators, and runs a steady 2,145 MHz core / 16,200 MHz memory.

It is easily 25-30% faster than a stock 2080 Ti or RTX 3070 in synthetics alone, and I can usually match a stock 3080 in all of the in-game benchmarks I have run.

But the GPU market is crap, and not everyone will watercool a 2080 Ti. Regardless, they are still super watered down from the factory.

For a nearly three-year-old GPU with 11 GB of VRAM, matching, or usually running within 10% of, a healthy RTX 3080 is impressive.

I recently came into some money and took a close look at the GPU market again today; it is sickening, lol. Why would I sell it? We can't anyway, lol. Looks like I am going to be with it much longer than I thought. At least it's fast!
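As an aside on those shunt numbers: stacking a shunt on top of the stock one is just a parallel-resistor calculation, since the card under-reads its power draw by the resulting ratio. A quick sketch, assuming the commonly cited 5 mΩ stock shunts (an assumption; verify the values on your own card before trusting any of this):

```python
# Shunt-mod arithmetic: an extra shunt soldered on top of the stock one
# sits in parallel with it, lowering the resistance the card measures
# across, so reported power understates real draw by a fixed ratio.
def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

stock_mohm, added_mohm = 5.0, 8.0        # stock value is an assumption
eff = parallel(stock_mohm, added_mohm)   # ~3.08 mOhm effective
multiplier = stock_mohm / eff            # real watts per reported watt
print(round(eff, 2), round(multiplier, 3))  # 3.08 1.625
print(round(320 * multiplier))           # a 320 W reported cap -> ~520 W real
```

Which is roughly how a BIOS-level cap in the low 300s ends up as a 500-plus-watt real limit.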


----------



## Medizinmann

klepp0906 said:


> well - i yolo'd it. fortunately all is well and i hadnt even looked but the boost clock is much higher on the hydrocopper version than the galax so it was a worthwhile change anyways irrespective of the few watt power target loss (i assume).
> 
> also fixed my obnoxious ledsync issue so theres that. I'm gonna call it a win. (outside of paying abhorrent prices for a 2080ti only to have it shat on by a 3080, but thats a rant for another day) The bright side is i couldnt get one even if i wanted to at the moment ;p


So you were successful?
Congratulations…  

Best Regards,
Medizinmann


----------



## superino

Guys, I need a little hand (sorry for my English).
For some time I have noticed micro clicks. Checking with LatencyMon, I see the latency skyrocket; the driver with the highest DPC routine execution time is nvlddmkm.sys, the NVIDIA Windows Kernel Mode Driver. It does this even if I'm watching a video, and sometimes even when opening the browser.
I have tried everything: changing the motherboard BIOS, different BIOSes on the 2080 Ti, different drivers, different monitors, a reformat, removing all additional hardware; nothing helps.
Sometimes it works great without me touching anything.
The video card is a Gigabyte Xtreme WB.
Do you think it could be the AX1600i power supply?
Any advice?



PHP:


CONCLUSION
_________________________________________________________________________________________________________
Your system appears to be having trouble handling real-time audio and other tasks. You are likely to
experience buffer underruns appearing as drop outs, clicks or pops. One or more DPC routines that belong to a
driver running in your system appear to be executing for too long. One problem may be related to power
management, disable CPU throttling settings in Control Panel and BIOS setup. Check for BIOS updates.
LatencyMon has been analyzing your system for  0:01:40  (h:mm:ss) on all processors.


_________________________________________________________________________________________________________
SYSTEM INFORMATION
_________________________________________________________________________________________________________
Computer name:                                        DESKTOP-04HLD86
OS version:                                           Windows 10, 10.0, version 2009, build: 19043 (x64)
Hardware:                                             System Product Name, ASUS
CPU:                                                  GenuineIntel Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz
Logical processors:                                   16
Processor groups:                                     1
RAM:                                                  32699 MB total


_________________________________________________________________________________________________________
CPU SPEED
_________________________________________________________________________________________________________
Reported CPU speed:                                   360 MHz

Note: reported execution times may be calculated based on a fixed reported CPU speed. Disable variable speed
settings like Intel Speed Step and AMD Cool N Quiet in the BIOS setup for more accurate results.


_________________________________________________________________________________________________________
MEASURED INTERRUPT TO USER PROCESS LATENCIES
_________________________________________________________________________________________________________
The interrupt to process latency reflects the measured interval that a usermode process needed to respond to a
hardware request from the moment the interrupt service routine started execution. This includes the scheduling
and execution of a DPC routine, the signaling of an event and the waking up of a usermode thread from an idle
wait state in response to that event.

Highest measured interrupt to process latency (µs):   78,10
Average measured interrupt to process latency (µs):   2,068341

Highest measured interrupt to DPC latency (µs):       76,10
Average measured interrupt to DPC latency (µs):       0,805393


_________________________________________________________________________________________________________
 REPORTED ISRs
_________________________________________________________________________________________________________
Interrupt service routines are routines installed by the OS and device drivers that execute in response to a
hardware interrupt signal.

Highest ISR routine execution time (µs):              110,9250
Driver with highest ISR routine execution time:       dxgkrnl.sys - DirectX Graphics Kernel, Microsoft
Corporation

Highest reported total ISR routine time (%):          0,011080
Driver with highest ISR total time:                   dxgkrnl.sys - DirectX Graphics Kernel, Microsoft
Corporation

Total time spent in ISRs (%)                          0,016718

ISR count (execution time <250 µs):                   91396
ISR count (execution time 250-500 µs):                0
ISR count (execution time 500-1000 µs):               0
ISR count (execution time 1000-2000 µs):              0
ISR count (execution time 2000-4000 µs):              0
ISR count (execution time >=4000 µs):                 0


_________________________________________________________________________________________________________
REPORTED DPCs
_________________________________________________________________________________________________________
DPC routines are part of the interrupt servicing dispatch mechanism and disable the possibility for a process
to utilize the CPU while it is interrupted until the DPC has finished execution.

Highest DPC routine execution time (µs):              2746,682778
Driver with highest DPC routine execution time:       nvlddmkm.sys - NVIDIA Windows Kernel Mode Driver,
Version 466.47 , NVIDIA Corporation

Highest reported total DPC routine time (%):          0,020366
Driver with highest DPC total execution time:         Wdf01000.sys - Runtime framework driver modalità kernel,
Microsoft Corporation

Total time spent in DPCs (%)                          0,043568

DPC count (execution time <250 µs):                   138414
DPC count (execution time 250-500 µs):                0
DPC count (execution time 500-1000 µs):               2
DPC count (execution time 1000-2000 µs):              2
DPC count (execution time 2000-4000 µs):              1
DPC count (execution time >=4000 µs):                 0


_________________________________________________________________________________________________________
 REPORTED HARD PAGEFAULTS
_________________________________________________________________________________________________________
Hard pagefaults are events that get triggered by making use of virtual memory that is not resident in RAM but
backed by a memory mapped file on disk. The process of resolving the hard pagefault requires reading in the
memory from disk while the process is interrupted and blocked from execution.

NOTE: some processes were hit by hard pagefaults. If these were programs producing audio, they are likely to
interrupt the audio stream resulting in dropouts, clicks and pops. Check the Processes tab to see which
programs were hit.

Process with highest pagefault count:                 ekrn.exe

Total number of hard pagefaults                       3819
Hard pagefault count of hardest hit process:          1438
Number of processes hit:                              43


_________________________________________________________________________________________________________
 PER CPU DATA
_________________________________________________________________________________________________________
CPU 0 Interrupt cycle time (s):                       1,589787
CPU 0 ISR highest execution time (µs):                110,9250
CPU 0 ISR total execution time (s):                   0,267751
CPU 0 ISR count:                                      91067
CPU 0 DPC highest execution time (µs):                2746,682778
CPU 0 DPC total execution time (s):                   0,625584
CPU 0 DPC count:                                      122848
_________________________________________________________________________________________________________
CPU 1 Interrupt cycle time (s):                       0,225563
CPU 1 ISR highest execution time (µs):                1,560556
CPU 1 ISR total execution time (s):                   0,000107
CPU 1 ISR count:                                      329
CPU 1 DPC highest execution time (µs):                50,366111
CPU 1 DPC total execution time (s):                   0,002218
CPU 1 DPC count:                                      406
_________________________________________________________________________________________________________
CPU 2 Interrupt cycle time (s):                       0,347436
CPU 2 ISR highest execution time (µs):                0,0
CPU 2 ISR total execution time (s):                   0,0
CPU 2 ISR count:                                      0
CPU 2 DPC highest execution time (µs):                68,963889
CPU 2 DPC total execution time (s):                   0,042879
CPU 2 DPC count:                                      5676
_________________________________________________________________________________________________________
CPU 3 Interrupt cycle time (s):                       0,257127
CPU 3 ISR highest execution time (µs):                0,0
CPU 3 ISR total execution time (s):                   0,0
CPU 3 ISR count:                                      0
CPU 3 DPC highest execution time (µs):                74,193889
CPU 3 DPC total execution time (s):                   0,001072
CPU 3 DPC count:                                      257
_________________________________________________________________________________________________________
CPU 4 Interrupt cycle time (s):                       0,252055
CPU 4 ISR highest execution time (µs):                0,0
CPU 4 ISR total execution time (s):                   0,0
CPU 4 ISR count:                                      0
CPU 4 DPC highest execution time (µs):                19,947222
CPU 4 DPC total execution time (s):                   0,003886
CPU 4 DPC count:                                      1481
_________________________________________________________________________________________________________
CPU 5 Interrupt cycle time (s):                       0,253463
CPU 5 ISR highest execution time (µs):                0,0
CPU 5 ISR total execution time (s):                   0,0
CPU 5 ISR count:                                      0
CPU 5 DPC highest execution time (µs):                24,323889
CPU 5 DPC total execution time (s):                   0,000596
CPU 5 DPC count:                                      172
_________________________________________________________________________________________________________
CPU 6 Interrupt cycle time (s):                       0,253687
CPU 6 ISR highest execution time (µs):                0,0
CPU 6 ISR total execution time (s):                   0,0
CPU 6 ISR count:                                      0
CPU 6 DPC highest execution time (µs):                18,333889
CPU 6 DPC total execution time (s):                   0,003010
CPU 6 DPC count:                                      1044
_________________________________________________________________________________________________________
CPU 7 Interrupt cycle time (s):                       0,241716
CPU 7 ISR highest execution time (µs):                0,0
CPU 7 ISR total execution time (s):                   0,0
CPU 7 ISR count:                                      0
CPU 7 DPC highest execution time (µs):                14,250556
CPU 7 DPC total execution time (s):                   0,000181
CPU 7 DPC count:                                      46
_________________________________________________________________________________________________________
CPU 8 Interrupt cycle time (s):                       0,254513
CPU 8 ISR highest execution time (µs):                0,0
CPU 8 ISR total execution time (s):                   0,0
CPU 8 ISR count:                                      0
CPU 8 DPC highest execution time (µs):                25,532222
CPU 8 DPC total execution time (s):                   0,004767
CPU 8 DPC count:                                      1761
_________________________________________________________________________________________________________
CPU 9 Interrupt cycle time (s):                       0,229353
CPU 9 ISR highest execution time (µs):                0,0
CPU 9 ISR total execution time (s):                   0,0
CPU 9 ISR count:                                      0
CPU 9 DPC highest execution time (µs):                15,646111
CPU 9 DPC total execution time (s):                   0,001849
CPU 9 DPC count:                                      675
_________________________________________________________________________________________________________
CPU 10 Interrupt cycle time (s):                       0,253738
CPU 10 ISR highest execution time (µs):                0,0
CPU 10 ISR total execution time (s):                   0,0
CPU 10 ISR count:                                      0
CPU 10 DPC highest execution time (µs):                22,487778
CPU 10 DPC total execution time (s):                   0,002139
CPU 10 DPC count:                                      683
_________________________________________________________________________________________________________
CPU 11 Interrupt cycle time (s):                       0,237090
CPU 11 ISR highest execution time (µs):                0,0
CPU 11 ISR total execution time (s):                   0,0
CPU 11 ISR count:                                      0
CPU 11 DPC highest execution time (µs):                13,155556
CPU 11 DPC total execution time (s):                   0,001506
CPU 11 DPC count:                                      638
_________________________________________________________________________________________________________
CPU 12 Interrupt cycle time (s):                       0,25480
CPU 12 ISR highest execution time (µs):                0,0
CPU 12 ISR total execution time (s):                   0,0
CPU 12 ISR count:                                      0
CPU 12 DPC highest execution time (µs):                21,337778
CPU 12 DPC total execution time (s):                   0,003421
CPU 12 DPC count:                                      1192
_________________________________________________________________________________________________________
CPU 13 Interrupt cycle time (s):                       0,234321
CPU 13 ISR highest execution time (µs):                0,0
CPU 13 ISR total execution time (s):                   0,0
CPU 13 ISR count:                                      0
CPU 13 DPC highest execution time (µs):                15,487778
CPU 13 DPC total execution time (s):                   0,000397
CPU 13 DPC count:                                      103
_________________________________________________________________________________________________________
CPU 14 Interrupt cycle time (s):                       0,264178
CPU 14 ISR highest execution time (µs):                0,0
CPU 14 ISR total execution time (s):                   0,0
CPU 14 ISR count:                                      0
CPU 14 DPC highest execution time (µs):                40,029444
CPU 14 DPC total execution time (s):                   0,003282
CPU 14 DPC count:                                      1053
_________________________________________________________________________________________________________
CPU 15 Interrupt cycle time (s):                       0,241375
CPU 15 ISR highest execution time (µs):                0,0
CPU 15 ISR total execution time (s):                   0,0
CPU 15 ISR count:                                      0
CPU 15 DPC highest execution time (µs):                22,657778
CPU 15 DPC total execution time (s):                   0,001274
CPU 15 DPC count:                                      384
________________________________________________________________________________________________________
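Since reports like the one above are long, the worst DPC offender can be pulled out programmatically as a quick triage step. A throwaway sketch (it assumes the exact label text and the comma decimal separator seen in this report; not a general LatencyMon parser):

```python
import re

# Extract the worst DPC time and the driver blamed for it from a
# LatencyMon text report. This report uses a comma decimal separator.
def worst_dpc(report):
    time_us = float(
        re.search(r"Highest DPC routine execution time \(µs\):\s*([\d,\.]+)", report)
        .group(1)
        .replace(",", ".")
    )
    driver = re.search(
        r"Driver with highest DPC routine execution time:\s*(\S+)", report
    ).group(1)
    return driver, time_us

sample = """Highest DPC routine execution time (µs):              2746,682778
Driver with highest DPC routine execution time:       nvlddmkm.sys - NVIDIA Windows Kernel Mode Driver"""
drv, t = worst_dpc(sample)
print(drv, t)    # nvlddmkm.sys 2746.682778
print(t > 1000)  # spikes this high are generally enough to cause audible dropouts
```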


----------



## Medizinmann

superino said:


> Guys, I need a little hand (sorry for my English).
> For some time I have noticed micro clicks. Checking with LatencyMon, I see the latency skyrocket; the driver with the highest DPC routine execution time is nvlddmkm.sys, the NVIDIA Windows Kernel Mode Driver. It does this even if I'm watching a video, and sometimes even when opening the browser.
> I have tried everything: changing the motherboard BIOS, different BIOSes on the 2080 Ti, different drivers, different monitors, a reformat, removing all additional hardware; nothing helps.
> Sometimes it works great without me touching anything.
> The video card is a Gigabyte Xtreme WB.
> Do you think it could be the AX1600i power supply?
> Any advice?
> 
> 
> 
> PHP:
> 
> 
> CONCLUSION
> _________________________________________________________________________________________________________
> Your system appears to be having trouble handling real-time audio and other tasks. You are likely to
> experience buffer underruns appearing as drop outs, clicks or pops. One or more DPC routines that belong to a
> driver running in your system appear to be executing for too long. One problem may be related to power
> management, disable CPU throttling settings in Control Panel and BIOS setup. Check for BIOS updates.
> LatencyMon has been analyzing your system for  0:01:40  (h:mm:ss) on all processors.
> 
> 
> _________________________________________________________________________________________________________
> SYSTEM INFORMATION
> _________________________________________________________________________________________________________
> Computer name:                                        DESKTOP-04HLD86
> OS version:                                           Windows 10, 10.0, version 2009, build: 19043 (x64)
> Hardware:                                             System Product Name, ASUS
> CPU:                                                  GenuineIntel Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz
> Logical processors:                                   16
> Processor groups:                                     1
> RAM:                                                  32699 MB total
> 
> 
> _________________________________________________________________________________________________________
> CPU SPEED
> _________________________________________________________________________________________________________
> Reported CPU speed:                                   360 MHz
> 
> Note: reported execution times may be calculated based on a fixed reported CPU speed. Disable variable speed
> settings like Intel Speed Step and AMD Cool N Quiet in the BIOS setup for more accurate results.
> 
> 
> _________________________________________________________________________________________________________
> MEASURED INTERRUPT TO USER PROCESS LATENCIES
> _________________________________________________________________________________________________________
> The interrupt to process latency reflects the measured interval that a usermode process needed to respond to a
> hardware request from the moment the interrupt service routine started execution. This includes the scheduling
> and execution of a DPC routine, the signaling of an event and the waking up of a usermode thread from an idle
> wait state in response to that event.
> 
> Highest measured interrupt to process latency (µs):   78,10
> Average measured interrupt to process latency (µs):   2,068341
> 
> Highest measured interrupt to DPC latency (µs):       76,10
> Average measured interrupt to DPC latency (µs):       0,805393
> 
> 
> _________________________________________________________________________________________________________
> REPORTED ISRs
> _________________________________________________________________________________________________________
> Interrupt service routines are routines installed by the OS and device drivers that execute in response to a
> hardware interrupt signal.
> 
> Highest ISR routine execution time (µs):              110,9250
> Driver with highest ISR routine execution time:       dxgkrnl.sys - DirectX Graphics Kernel, Microsoft Corporation
> 
> Highest reported total ISR routine time (%):          0,011080
> Driver with highest ISR total time:                   dxgkrnl.sys - DirectX Graphics Kernel, Microsoft Corporation
> 
> Total time spent in ISRs (%)                          0,016718
> 
> ISR count (execution time <250 µs):                   91396
> ISR count (execution time 250-500 µs):                0
> ISR count (execution time 500-1000 µs):               0
> ISR count (execution time 1000-2000 µs):              0
> ISR count (execution time 2000-4000 µs):              0
> ISR count (execution time >=4000 µs):                 0
> 
> 
> _________________________________________________________________________________________________________
> REPORTED DPCs
> _________________________________________________________________________________________________________
> DPC routines are part of the interrupt servicing dispatch mechanism and disable the possibility for a process
> to utilize the CPU while it is interrupted until the DPC has finished execution.
> 
> Highest DPC routine execution time (µs):              2746,682778
> Driver with highest DPC routine execution time:       nvlddmkm.sys - NVIDIA Windows Kernel Mode Driver, Version 466.47, NVIDIA Corporation
> 
> Highest reported total DPC routine time (%):          0,020366
> Driver with highest DPC total execution time:         Wdf01000.sys - Kernel Mode Driver Framework Runtime, Microsoft Corporation
> 
> Total time spent in DPCs (%)                          0,043568
> 
> DPC count (execution time <250 µs):                   138414
> DPC count (execution time 250-500 µs):                0
> DPC count (execution time 500-1000 µs):               2
> DPC count (execution time 1000-2000 µs):              2
> DPC count (execution time 2000-4000 µs):              1
> DPC count (execution time >=4000 µs):                 0
> 
> 
> _________________________________________________________________________________________________________
> REPORTED HARD PAGEFAULTS
> _________________________________________________________________________________________________________
> Hard pagefaults are events that get triggered by making use of virtual memory that is not resident in RAM but
> backed by a memory mapped file on disk. The process of resolving the hard pagefault requires reading in the
> memory from disk while the process is interrupted and blocked from execution.
> 
> NOTE: some processes were hit by hard pagefaults. If these were programs producing audio, they are likely to
> interrupt the audio stream resulting in dropouts, clicks and pops. Check the Processes tab to see which
> programs were hit.
> 
> Process with highest pagefault count:                 ekrn.exe
> 
> Total number of hard pagefaults                       3819
> Hard pagefault count of hardest hit process:          1438
> Number of processes hit:                              43
> 
> 
> _________________________________________________________________________________________________________
> PER CPU DATA
> _________________________________________________________________________________________________________
> CPU 0 Interrupt cycle time (s):                       1,589787
> CPU 0 ISR highest execution time (µs):                110,9250
> CPU 0 ISR total execution time (s):                   0,267751
> CPU 0 ISR count:                                      91067
> CPU 0 DPC highest execution time (µs):                2746,682778
> CPU 0 DPC total execution time (s):                   0,625584
> CPU 0 DPC count:                                      122848
> _________________________________________________________________________________________________________
> CPU 1 Interrupt cycle time (s):                       0,225563
> CPU 1 ISR highest execution time (µs):                1,560556
> CPU 1 ISR total execution time (s):                   0,000107
> CPU 1 ISR count:                                      329
> CPU 1 DPC highest execution time (µs):                50,366111
> CPU 1 DPC total execution time (s):                   0,002218
> CPU 1 DPC count:                                      406
> _________________________________________________________________________________________________________
> CPU 2 Interrupt cycle time (s):                       0,347436
> CPU 2 ISR highest execution time (µs):                0,0
> CPU 2 ISR total execution time (s):                   0,0
> CPU 2 ISR count:                                      0
> CPU 2 DPC highest execution time (µs):                68,963889
> CPU 2 DPC total execution time (s):                   0,042879
> CPU 2 DPC count:                                      5676
> _________________________________________________________________________________________________________
> CPU 3 Interrupt cycle time (s):                       0,257127
> CPU 3 ISR highest execution time (µs):                0,0
> CPU 3 ISR total execution time (s):                   0,0
> CPU 3 ISR count:                                      0
> CPU 3 DPC highest execution time (µs):                74,193889
> CPU 3 DPC total execution time (s):                   0,001072
> CPU 3 DPC count:                                      257
> _________________________________________________________________________________________________________
> CPU 4 Interrupt cycle time (s):                       0,252055
> CPU 4 ISR highest execution time (µs):                0,0
> CPU 4 ISR total execution time (s):                   0,0
> CPU 4 ISR count:                                      0
> CPU 4 DPC highest execution time (µs):                19,947222
> CPU 4 DPC total execution time (s):                   0,003886
> CPU 4 DPC count:                                      1481
> _________________________________________________________________________________________________________
> CPU 5 Interrupt cycle time (s):                       0,253463
> CPU 5 ISR highest execution time (µs):                0,0
> CPU 5 ISR total execution time (s):                   0,0
> CPU 5 ISR count:                                      0
> CPU 5 DPC highest execution time (µs):                24,323889
> CPU 5 DPC total execution time (s):                   0,000596
> CPU 5 DPC count:                                      172
> _________________________________________________________________________________________________________
> CPU 6 Interrupt cycle time (s):                       0,253687
> CPU 6 ISR highest execution time (µs):                0,0
> CPU 6 ISR total execution time (s):                   0,0
> CPU 6 ISR count:                                      0
> CPU 6 DPC highest execution time (µs):                18,333889
> CPU 6 DPC total execution time (s):                   0,003010
> CPU 6 DPC count:                                      1044
> _________________________________________________________________________________________________________
> CPU 7 Interrupt cycle time (s):                       0,241716
> CPU 7 ISR highest execution time (µs):                0,0
> CPU 7 ISR total execution time (s):                   0,0
> CPU 7 ISR count:                                      0
> CPU 7 DPC highest execution time (µs):                14,250556
> CPU 7 DPC total execution time (s):                   0,000181
> CPU 7 DPC count:                                      46
> _________________________________________________________________________________________________________
> CPU 8 Interrupt cycle time (s):                       0,254513
> CPU 8 ISR highest execution time (µs):                0,0
> CPU 8 ISR total execution time (s):                   0,0
> CPU 8 ISR count:                                      0
> CPU 8 DPC highest execution time (µs):                25,532222
> CPU 8 DPC total execution time (s):                   0,004767
> CPU 8 DPC count:                                      1761
> _________________________________________________________________________________________________________
> CPU 9 Interrupt cycle time (s):                       0,229353
> CPU 9 ISR highest execution time (µs):                0,0
> CPU 9 ISR total execution time (s):                   0,0
> CPU 9 ISR count:                                      0
> CPU 9 DPC highest execution time (µs):                15,646111
> CPU 9 DPC total execution time (s):                   0,001849
> CPU 9 DPC count:                                      675
> _________________________________________________________________________________________________________
> CPU 10 Interrupt cycle time (s):                       0,253738
> CPU 10 ISR highest execution time (µs):                0,0
> CPU 10 ISR total execution time (s):                   0,0
> CPU 10 ISR count:                                      0
> CPU 10 DPC highest execution time (µs):                22,487778
> CPU 10 DPC total execution time (s):                   0,002139
> CPU 10 DPC count:                                      683
> _________________________________________________________________________________________________________
> CPU 11 Interrupt cycle time (s):                       0,237090
> CPU 11 ISR highest execution time (µs):                0,0
> CPU 11 ISR total execution time (s):                   0,0
> CPU 11 ISR count:                                      0
> CPU 11 DPC highest execution time (µs):                13,155556
> CPU 11 DPC total execution time (s):                   0,001506
> CPU 11 DPC count:                                      638
> _________________________________________________________________________________________________________
> CPU 12 Interrupt cycle time (s):                       0,25480
> CPU 12 ISR highest execution time (µs):                0,0
> CPU 12 ISR total execution time (s):                   0,0
> CPU 12 ISR count:                                      0
> CPU 12 DPC highest execution time (µs):                21,337778
> CPU 12 DPC total execution time (s):                   0,003421
> CPU 12 DPC count:                                      1192
> _________________________________________________________________________________________________________
> CPU 13 Interrupt cycle time (s):                       0,234321
> CPU 13 ISR highest execution time (µs):                0,0
> CPU 13 ISR total execution time (s):                   0,0
> CPU 13 ISR count:                                      0
> CPU 13 DPC highest execution time (µs):                15,487778
> CPU 13 DPC total execution time (s):                   0,000397
> CPU 13 DPC count:                                      103
> _________________________________________________________________________________________________________
> CPU 14 Interrupt cycle time (s):                       0,264178
> CPU 14 ISR highest execution time (µs):                0,0
> CPU 14 ISR total execution time (s):                   0,0
> CPU 14 ISR count:                                      0
> CPU 14 DPC highest execution time (µs):                40,029444
> CPU 14 DPC total execution time (s):                   0,003282
> CPU 14 DPC count:                                      1053
> _________________________________________________________________________________________________________
> CPU 15 Interrupt cycle time (s):                       0,241375
> CPU 15 ISR highest execution time (µs):                0,0
> CPU 15 ISR total execution time (s):                   0,0
> CPU 15 ISR count:                                      0
> CPU 15 DPC highest execution time (µs):                22,657778
> CPU 15 DPC total execution time (s):                   0,001274
> CPU 15 DPC count:                                      384
> ________________________________________________________________________________________________________
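For anyone saving these LatencyMon reports to a text file, the headline numbers can be pulled out programmatically instead of eyeballing the wall of text. A quick sketch in Python; the regex patterns are assumptions based on the report format above, including the comma decimal separator from a non-English locale:

```python
import re

def worst_dpc(report: str):
    """Return (driver, max DPC time in µs) from the text of a LatencyMon report.

    LatencyMon writes a comma as the decimal separator on some locales,
    so normalize it before converting to float.
    """
    time_m = re.search(r"Highest DPC routine execution time \(µs\):\s+([\d,.]+)", report)
    drv_m = re.search(r"Driver with highest DPC routine execution time:\s+(\S+)", report)
    if not (time_m and drv_m):
        return None
    return drv_m.group(1), float(time_m.group(1).replace(",", "."))

sample = (
    "Highest DPC routine execution time (µs):              2746,682778\n"
    "Driver with highest DPC routine execution time:       nvlddmkm.sys - NVIDIA Windows Kernel Mode Driver\n"
)
print(worst_dpc(sample))  # → ('nvlddmkm.sys', 2746.682778)
```

On the report above, this flags `nvlddmkm.sys` (the NVIDIA kernel-mode driver) as the worst DPC offender, which is why the driver-reinstall suggestions follow.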


You could try this...
We have been installing nVidia drivers WRONG! - YouTube 

Best Regards,
Medizinmann


----------



## superino

Medizinmann said:


> You could try this...
> We have been installing nVidia drivers WRONG! - YouTube
> 
> Best Regards,
> Medizinmann


No luck, it's always the same problem. Thanks anyway


----------



## kithylin

superino said:


> nothing to do, always the same problem. Thanks anyway


Did you download and install the drivers for your onboard audio from the company that made your motherboard? Or did you just leave the default after installing Windows? This can sometimes be affected by using the wrong driver. What is the exact model of motherboard you have? I can help you and go find the download for you.


----------



## tps3443

Still rocking my 2080 Ti FE. 380+ W Galax BIOS, it has some really good Samsung memory, I soldered and stacked the 8 ohm resistors for up to 532 W, EKWB full block, and three 360 mm radiators. It's my (Poor Man's RTX 3080) lol.
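For anyone curious how the shunt-stacking mentioned above raises the power ceiling: the VRM controller computes current from the voltage drop across tiny sense shunts, so soldering a resistor in parallel lowers the resistance it sees and makes it under-read current by the same ratio. A sketch of the arithmetic; the 5 mΩ shunt, 8 mΩ stacked resistor, and 380 W limit are illustrative assumptions, not measured values from the card above:

```python
def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def effective_power_limit(bios_limit_w: float, r_shunt: float, r_stack: float) -> float:
    """Stacking r_stack across a sense shunt of r_shunt makes the controller
    under-read current by parallel/r_shunt, so the real power ceiling scales
    by the inverse ratio: bios_limit * (1 + r_shunt / r_stack)."""
    return bios_limit_w * r_shunt / parallel(r_shunt, r_stack)

# Illustrative only: 380 W BIOS limit, 5 mΩ shunts, 8 mΩ stacked on top.
print(effective_power_limit(380.0, 0.005, 0.008))
```

With those assumed values the scale factor is 1 + 5/8 = 1.625, i.e. roughly 617 W actually drawn when the card believes it is at 380 W.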


----------



## Palaiouras

Hi everyone,

I have an Nvidia RTX 2080 Ti FE and an MSI RTX 2080 Ti Sea Hawk EK X. What BIOS do you advise me to use on each card to get the best result?
Both cards are watercooled.

Thank you


----------



## Medizinmann

Palaiouras said:


> Hi everyone,
> 
> I have an Nvidia RTX 2080 Ti FE and an MSI RTX 2080 Ti Sea Hawk EK X. What BIOS do you advise me to use on each card to get the best result?
> Both cards are watercooled.
> 
> Thank you


IMHO your best option for the RTX 2080Ti FE is still the 380W KFA2/Galax BIOS from Page 1 and the 406W BIOS for the Sea Hawk EK X directly beneath it...

Best Regards,
Medizinmann


----------



## tryout1

So I have a slightly weird question. A friend of mine wants to put together a gaming PC, but as you may have guessed the prices are a bit iffy in the GPU market. I found a 2080 Ti Sea Hawk EK X for good money though, 800€. Is there anything speaking against putting an air cooler on this card, like a Raijintek Morpheus e.g.?
Technically I could even give him my 2080 Ti and use the Sea Hawk, but I know I have a pretty good sample so I don't want to gamble too much.

Gesendet von meinem GM1913 mit Tapatalk


----------



## Medizinmann

tryout1 said:


> So I have a slightly weird question. A friend of mine wants to put together a gaming PC, but as you may have guessed the prices are a bit iffy in the GPU market. I found a 2080 Ti Sea Hawk EK X for good money though, 800€. Is there anything speaking against putting an air cooler on this card, like a Raijintek Morpheus e.g.?
> Technically I could even give him my 2080 Ti and use the Sea Hawk, but I know I have a pretty good sample so I don't want to gamble too much.
> 
> Gesendet von meinem GM1913 mit Tapatalk


Well - why not use it watercooled?

With a small loop and a 280 or 360 mm radiator....

But yes - you can put an air cooler like the Morpheus II on it.

Best Regards,
Medizinmann


----------



## Laithan

Turing performance is all about power and temperature management, so I would definitely keep it water cooled. There are some AIO solutions out there that make this easier. Swiftech used to have a nice setup with a rad/pump/reservoir combo... all you needed was lines and connectors, and the loop was expandable too.


----------



## Medizinmann

Laithan said:


> Turing performance is all about power and temperature management, so I would definitely keep it water cooled. There are some AIO solutions out there that make this easier. Swiftech used to have a nice setup with a rad/pump/reservoir combo... all you needed was lines and connectors, and the loop was expandable too.


Yep - as the card comes with a water block attached to it - and there is also a 406W Power BIOS available for the Sea Hawk EK X.

I would definitely keep it under water.

Best Regards,
Medizinmann


----------



## tryout1

Thx for the replies, i will tell him the options, still cheaper and better than a 3070 imho.


----------



## J7SC

Medizinmann said:


> Yep - as the card comes with a water block attached to it - and there is also a 406W Power BIOS available for the Sea Hawk EK X.
> 
> I would definitely keep it under water.
> 
> Best Regards,
> Medizinmann





tryout1 said:


> Thx for the replies, i will tell him the options, still cheaper and better than a 3070 imho.


...yeah, keeping a custom PCB / high-W 2080 Ti under water makes a lot of sense. I have two Aorus 2080 Ti Xtr WB (factory w-block) and Turing, just like Ampere, runs best when w-cooled.


----------



## tps3443

Hey guys! So I have done some changes to my PC. All in pursuit of that perfect beast build

I successfully went full Der8auer direct die, with an Optimus Signature V2 block with the bowed cold plate for direct-die use. Direct die is working very well! A full 5 GHz is reliable, or 4.9 GHz with no AVX2 offsets at all.


I was just amazed to see a 4+ year old platform throw out performance this good. Which will be awesome for SLI 2080Ti’s!!


R15 single threaded was 224
R15 multithreaded was 4,982
R20 managed 11,692 multi threaded.
R20 managed 510 single thread.

^ That's 10900K single-thread performance out of a 4-year-old dinosaur lol.

Anyways, I have another 2080 Ti (A bin). I need to grab a waterblock and an NVLink bridge and I will run them both. Unfortunately my 2nd 2080 Ti has Hynix memory, but it'll be OK I guess. At least it's an A-bin model.


Anyways, now that my CPU single thread is where it needs to be, two 2080 Tis will be awesome (when it works, that is)! Going to run both with the 380 W BIOS and solder on the 8 ohm resistors for heavy benching on both cards. I honestly tried to grab a single 3080 Ti on launch day; it just didn't work out. So I'm going to continue using my 2080 Ti

*@J7SC do you have any NVlink 2 slot adapters you’d be willing to sell? I am space constrained with a bottom 360 rad in my case. *


----------



## J7SC

tps3443 said:


> Hey guys! So I have done some changes to my PC. All in pursuit of that perfect beast build
> (...)
> 
> Anyways, now that my CPU single thread is where it needs to be (2) 2080Ti’s will be awesome (When it works that is)! Going to run both with (380 watt bios) and solder on the 8 ohm resistors for heavy benching both cards. I honestly tried to grab a single 3080Ti during launch day. It just didn’t work out. So i’m going to continue on using my 2080Ti
> 
> *@J7SC do you have any NVlink 2 slot adapters you’d be willing to sell? I am space constrained with a bottom 360 rad in my case. *


...sorry, don't have any NVlink 2-slot adapters myself


----------



## Falkentyne

tps3443 said:


> Hey guys! So I have done some changes to my PC. All in pursuit of that perfect beast build
> (...)
> 
> R15 single threaded was 224
> R15 multithreaded was 4,982
> R20 managed 11,692 multi threaded.
> R20 managed 510 single thread.
> 
> ^ Thats 10900K single thread performance out of a 4Y/O dinosaur lol.
> (...)


Single-thread performance hasn't changed from Kaby Lake to Comet Lake. It's the exact same IPC.
I'm not even sure whether Skylake and Kaby Lake were identical as well.


----------



## tps3443

Falkentyne said:


> Single thread performance hasn't changed from Kaby Lake to Comet Lake. It's the exact same IPC.
> I'm not even sure if Skylake and Kaby lake were identical or not also.


Absolutely. It's just been a tough road getting near the frequencies that Comet Lake runs at.

Now that 5 GHz is attainable I can see that the IPC is the same. Kinda silly, honestly.


----------



## gtz

tps3443 said:


> Hey guys! So I have done some changes to my PC. All in pursuit of that perfect beast build
> 
> I successfully went full on Der8auer direct die, and an Optimus Signature V2 block with the bowed cold plate for direct die usage. Anyways, direct die is working very very good! And a full on 5Ghz is reliable. Or 4.9Ghz with no avx2 offsets at all.
> (...)


Congrats

Nice to see the delid kit came in handy. A 7980XE at 5.0 and tuned RAM is a monster.


----------



## gilor8080

gigabyte 2080 ti gaming oc + G12 + X53, overclocked
TIME SPY:
*16 114*

I scored 16 114 in Time Spy (Intel Core i9-10900F, NVIDIA GeForce RTX 2080 Ti x 1, 16384 MB, 64-bit Windows 10) - www.3dmark.com


----------



## tps3443

gtz said:


> Congrats
> 
> Nice to see the delid kit came in handy. A 7980XE at 5.0 and tuned RAM is a monster.


Yes absolutely! I should have never re-glued the IHS back on to begin with though. And the liquid metal I applied previously was mostly gone.

I am not 100% sure which part offered the most improvement, the new Signature V2 waterblock or the direct die. My system temperatures are just stupid, to say the least. Then I went and ordered ten fans rated at 2.75 mm H2O static pressure (Arctic BioniX P120), which reduce sustained load temps even further. They perform nearly identically to the Noctua A12x25s, just not nearly as premium, while being only slightly louder.


Almost every core temperature of the 7980XE sits within 0-1 °C of the others, with only one core 3-4 °C hotter than the rest.
(I can live with that!)

In R23 I can actually manage to beat the prior-gen monster AMD 2990WX, and nearly double the OG 1950X Threadripper lol. So yeah, these HCC Skylake-X chips really are monsters for how dang old they are.

All of these numbers are daily stable. No suicide runs at all here. (If anyone is wondering)

I use this machine for work, and school, and long gaming sessions. So stability is important for me.


----------



## tps3443

J7SC said:


> ...sorry, don't have any NVlink 2-slot adapters myself



Hey, I finally got around to installing my 2nd 2080 Ti, and I have both running on full power BIOS. I am using a Quadro RTX 6000/8000 2-slot NVLink bridge; SLI enabled in the driver just fine. 

You seem to have a lot of expertise and experience with this type of setup. So I was wondering if you could share some essentials from your own trial and error with this combination.

Should I use nvidia inspector? Which version? Are the pre-saved profiles actually useable? 

I'd really appreciate it. I am hoping you can save me some headache. I know you even had Flight Sim working really well in SLI/NVLink mode.


----------



## J7SC

tps3443 said:


> Hey, I finally got around to installing my 2nd 2080 Ti, and I have both running on full power BIOS. I am using a Quadro RTX 6000/8000 2-slot NVLink bridge; SLI enabled in the driver just fine.
> 
> You seem to have a lot of expertise and experience with this type of setup. So I was wondering if you could share some essentials from your own trial and error with this combination.
> 
> Should I use nvidia inspector? Which version? Are the pre-saved profiles actually useable?
> 
> I‘d really appreciate it. I am
> hoping you can save me some headache. I know you had even Flightsim working really well in SLI/nvlink mode.


...'grats on the 2nd 2080 Ti  

No real secret to running them in SLI... though for some apps, including FS2020, I use the 'CFR' ("checkerboard / tile") mode that was available in drivers from about November '19 to the end of May '20. SLI-CFR has advantages over regular SLI (AFR), such as little to no micro-stutter, but unfortunately NVidia saw fit to discontinue the (always undocumented) support for CFR; the last driver I'm aware of that has it was released at the end of May '20. It still works great in some titles though...

Below is the NVinspector setting to use for CFR if you have downloaded said drivers


----------



## kithylin

J7SC said:


> ...'grats on the 2nd 2080 Ti
> 
> No real secret to run them in SLI...though for some apps, including FS2020, I use the 'CFR' ("checkerboard / tile") driver that was available until about a year ago (November '19 - end of May '20). SLI-CFR has advantages over regular SLI (AFR) such as little to no micro-stutter, but unfortunately, NVidia saw fit to discontinue the in-the-first-place undocumented support for CFR with the last driver I am aware of having a release date of the end of May '20...still works great in some titles though...
> 
> Below is the NVinspector setting to use for CFR if you have downloaded said drivers


This is the first I have ever heard of Nvidia removing CFR from the drivers. Just because something isn't an option in the Nvidia control panel doesn't mean it was removed from the drivers. I tried a quick Google search for Nvidia and CFR, limited to posts between May 2020 and today, and I found no one else commenting on what you claimed and no official statements. What gave you the idea that Nvidia removed CFR from the drivers?


----------



## J7SC

kithylin said:


> This is the first I have ever heard of Nvidia removing CFR from drivers. Just because something isn't an option in the nvidia control panel doesn't mean it was removed from the drivers. I tried a quick google search regarding Nvidia and CFR and limited it to posts between may 2020 and today and I find no one else commenting on what you claimed and no official statements. What gave you the idea that Nvidia removed CFR from the drivers?


interesting tone...  

anyway...CFR was never an option in the NVidia control panel to begin with; it was undocumented and NVidia Inspector was needed. And subsequent driver tests confirmed that the last driver tree it worked in was 44x.xx, and as such it never worked on 3090s... also, a bit more > here


----------



## kithylin

J7SC said:


> interesting tone...
> 
> anyway...CFR was never an option in the NVidia control panel to begin with, it was undocumented and NVidia Inspector was needed. And subsequent driver tests confirmed that the last gen drivers it worked was driver tree 44x.xx and as such never for 3090s...also, a bit more > here


That's one user posting about trying to use it in one game, and it may just be that that one title released an update that breaks SLI via CFR for their game. Battlefield 5 did that at one point: they released an update that made all previous methods of enabling SLI with the game stop working. Citing one user's forum post is not enough information (from multiple sources) to confirm that it is completely removed from the drivers for all games. My "tone" is because you're stating something as fact to the public without enough information to back up these "facts". CFR may yet still work for SLI in other titles even with the current drivers.


----------



## J7SC

kithylin said:


> That's 1 user posting something about trying to use it in 1 game and it may just be that 1 title released an update that makes their game not compatible with SLI vis CFR anymore. Battlefield 5 did that at one point in the past. They released an update that made all previous methods of enabling SLI with the game no longer work. You citing one user's forum post is not enough information (from multiple sources) for it to be a confirmation that it is completely removed from the drivers for all games. My "tone" is because you're stating something as fact to the public but there's not enough information to back up these "facts" as you claimed. CFR may yet still work for SLI for other game titles even with the current drivers.


I have tested CFR w/ every new driver gen...goodbye


----------



## kithylin

J7SC said:


> I have tested CFR w/ every new driver gen...goodbye


I suppose we'll just have to take you at your word for it then since no one else but you and that one random other person claims it doesn't work. Maybe CFR doesn't work with RTX cards but still does with 1000 series cards or the older cards that are still in the driver. There's many possible scenarios.


----------



## tps3443

J7SC said:


> interesting tone...
> 
> anyway...CFR was never an option in the NVidia control panel to begin with, it was undocumented and NVidia Inspector was needed. And subsequent driver tests confirmed that the last gen drivers it worked was driver tree 44x.xx and as such never for 3090s...also, a bit more > here



I just tried Flight Sim totally maxed out at 2560x1440. With only the driver set to force both cards in the NVCP, Flight Sim was showing about 65% GPU usage on each card.

I ran standard Time Spy with both cards at stock boost, and my PC was pulling 1,100-1,193 watts from the wall. (This is with a de-tuned 4.8 GHz all-core OC on my 7980XE.)

Once I get the voltage dialed up on these cards to 1.093 V and the boost upwards of 2,100-2,145 MHz, I am probably going to be just within my PSU's limits. The Guru3D review said the Seasonic Prime Platinum 1200 could safely pull over 1,300 watts for hours on end without any issues. So maybe it'll be OK long-term. Either way, I will just keep the CPU at 4.8 GHz 24/7. Once the OC on these cards is dialed in, I will have nothing left.



Time Spy graphics score was around 28,500. The memory was +725 on each 2080Ti, core clocks both at default. Power limit is at 127% on each card. 1,890-1,920MHz boost?


----------



## kithylin

tps3443 said:


> I just tried Flight Simulator totally maxed out at 2560x1440. With only the driver settings set to force both cards in the NVCP, flight sim was showing about 65% GPU usage on each card.
> 
> I ran standard Time Spy with both cards at stock boost, and my PC was pulling 1,100-1,193 watts from the wall. (This is with a de-tuned 4.8GHz all-core OC on my 7980XE.)
> 
> Once I get the voltage dialed up on these cards to 1.093V and the boost upwards of 2,100MHz-2,145MHz, I am probably going to be just within my PSU's limits. The Guru3D review said the Seasonic Prime 1200 Platinum could safely pull over 1,300 watts for hours on end without any issues. So maybe it'll be OK long-term. Either way, I will just keep the CPU at 4.8GHz 24/7. Once the OC on these cards is dialed in, I will have nothing left.


Just so you're aware: I looked it up, and your power supply drops to about 88% efficiency at 100% load. At a full 1,200W DC load that works out to roughly 1,364 watts drawn at the outlet (1,200 / 0.88), which would be within normal operating parameters for your unit and not exceeding its design specifications at all.
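A quick sketch of that math (assuming the usual convention that efficiency = DC output / AC input, as the 80 PLUS ratings define it; the 1,200 W and 88% figures are the ones discussed here):

```python
# Wall-side (AC) draw for a PSU delivering a given DC load to the components.
# Efficiency is taken as DC output / AC input (the 80 PLUS convention).

def ac_draw_watts(dc_load_w: float, efficiency: float) -> float:
    """AC power pulled from the outlet to deliver dc_load_w of DC power."""
    return dc_load_w / efficiency

# A 1200 W unit at 88% efficiency, loaded to its full DC rating:
print(round(ac_draw_watts(1200, 0.88)))  # → 1364 (watts at the outlet)
```

The point being: the wall meter reading sits above the PSU's DC rating even when the unit is operating entirely within spec.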


----------



## J7SC

tps3443 said:


> I just tried Flight Simulator totally maxed out at 2560x1440. With only the driver settings set to force both cards in the NVCP, flight sim was showing about 65% GPU usage on each card.
> 
> I ran standard Time Spy with both cards at stock boost, and my PC was pulling 1,100-1,193 watts from the wall. (This is with a de-tuned 4.8GHz all-core OC on my 7980XE.)
> 
> Once I get the voltage dialed up on these cards to 1.093V and the boost upwards of 2,100MHz-2,145MHz, I am probably going to be just within my PSU's limits. The Guru3D review said the Seasonic Prime 1200 Platinum could safely pull over 1,300 watts for hours on end without any issues. So maybe it'll be OK long-term. Either way, I will just keep the CPU at 4.8GHz 24/7. Once the OC on these cards is dialed in, I will have nothing left.
> 
> Time Spy graphics score was around 28,500. The memory was +725 on each 2080Ti, core clocks both at default. Power limit is at 127% on each card. 1,890-1,920MHz boost?


...PSU gets taxed by 2x 2080 Ti, at least w/ 380W max PL per card....add a 7980 XE, and your lights might flicker  

...I dug up this MSI AB screen from last fall re. FS 2020 / CFR...both cards are w-cooled btw ...at 4K 'ultra' settings, usage was actually decent (the initial squiggles are FS 2020 'loading').


----------



## tps3443

kithylin said:


> Just so you're aware: I looked it up, and your power supply drops to about 88% efficiency at 100% load. At a full 1,200W DC load that works out to roughly 1,364 watts drawn at the outlet (1,200 / 0.88), which would be within normal operating parameters for your unit and not exceeding its design specifications at all.




Well, I attempted Time Spy Extreme last night, with my CPU de-tuned to only 4.8GHz and both 2080Ti's running stock with only a minimal +725MHz on each card's memory. My wall meter went from 1,000 watts to 1,100 watts, briefly hit 1,210 watts, and within seconds my entire PC went down.

^ I had the same thinking as you. But my theory isn’t holding up.

These cards are not even running the full 1.093V yet, or boosting high at all. GPU clocks are stock on both.

This 1,200 watt PSU just isn't enough for me anymore. Also, I am using a very cheap wall meter, so I am not sure how accurate it actually is.

With both cards stock, I am already cruising through at a fraction under 1200 watts in the standard timespy test.

My single 2080Ti can consume over 520 watts in Time Spy Extreme with a monstrously good 8,100-8,200 graphics score. So now that I have two of these 2080Ti's, it makes sense why my PC goes down in this test so quickly.


This was an older picture of my single 2080Ti in Time Spy Extreme at 2,145MHz. (The card is scoring just below a stock, power-starved 3080Ti FE.)

Now, they aren’t using anywhere near that power level currently.





Last but not least, my PSU is 90.7% efficient at 1,345 watts, and 90.7% of that is around 1,220 watts of actual DC power to my components.

No matter what you feel my PSU can handle, or how these numbers look, this system is totally out of power. I would love to continue using this PSU; I really didn't expect to need a new one. But these cards are barely getting any voltage at all, the boost offset is at +0 on both, and they are both power-crashing my entire PC with my current 1,200 watt PSU.


----------



## tps3443

J7SC said:


> ...PSU gets taxed by 2x 2080 Ti, at least w/ 380W max PL per card....add a 7980 XE, and your lights might flicker
> 
> ...I dug up this MSI AB screen from last fall re. FS 2020 / CFR...both cards are w-cooled btw ...at 4K 'ultra' settings, usage was actually decent (the initial squiggles are FS 2020 'loading').
> 
> View attachment 2515199


My performance isn't nearly that good, man. I have some work to do. I need to get the cards dialed in and overclocked properly once I get a larger PSU, and I also need to adjust NVIDIA Inspector to see if I can get my GPU usage up some.

I may try a prior driver and see how AFR does. 65% GPU usage on each card isn't enough.


----------



## J7SC

tps3443 said:


> My performance isn't nearly that good, man. I have some work to do. I need to get the cards dialed in and overclocked properly once I get a larger PSU, and I also need to adjust NVIDIA Inspector to see if I can get my GPU usage up some.
> 
> I may try a prior driver and see how AFR does. 65% GPU usage on each card isn't enough.


...don't forget that I'm running 4K ultra everything - relatively less emphasis on CPU compared to 1440p or lower. Still, no matter what resolution, FS2020 really hammers one CPU core hard (close to 100% all the time), though I have seen it use more than eight cores at lower usage - just one 'over the top'. If you can set your CPU w/ a staggered OC for 2, 4, 6, 8 etc. cores, that will probably help you in FS2020 w/o pushing PSU load via the CPU too high, what with the 2x 2080 Tis' appetite for juice...


----------



## tps3443

J7SC said:


> ...don't forget that I'm running 4K ultra everything - relatively less emphasis on CPU compared to 1440 or lower. Still, no matter what resolution, FS2020 really hammers one CPU core hard (close to 100% all the time) though I have seen it use more than eight cores at lower usage - just one 'over the top'. If you can set your CPU w/ a staggered oc for 2,4,6,8 etc cores, that will probably help you in FS2020 w/o pushing PSU load via CPU too high, what with the 2x 2080 Tis appetite for juice...



I thought that too, but it just wasn't running right at all. I was only getting like 50 FPS with two stock 2080Ti's. I increased the graphical load using the in-game resolution scaling maxed out, or on a really high setting, and the GPU usage didn't change at all.

It was a weird glitch, I think. With more graphical demand, GPU usage simply didn't move.


----------



## tps3443

So I downloaded RDR2, and this game is amazing! SLI performance is ridiculously good! Both 2080Ti's are sitting at 97% GPU usage while playing. Smoothest experience I have ever gotten out of this title personally. Maxed out with 140-150FPS at 2560x1440 is very easily done, and this is with the cards running around 1,905MHz-1,920MHz.

With both 2080Ti's stock and only the power limit maxed out, I am sucking down 1,070-1,100 watts according to my power meter. If I attempt to increase just the voltage slider to 100%, I get power shutdowns in the game. (Overclocking is not an option.)

So with that being said, I am going to get a 1,600 watt PSU as soon as financially possible. They are absurdly expensive right now, and I am not really sure why.

The performance in this title alone is unbelievable! Performance looks to be double.


----------



## J7SC

tps3443 said:


> So I downloaded RDR2, and this game is amazing! SLI performance is ridiculously good! Both 2080Ti's are sitting at 97% GPU usage while playing. Smoothest experience I have ever gotten out of this title personally. Maxed out with 140-150FPS at 2560x1440 is very easily done, and this is with the cards running around 1,905MHz-1,920MHz.
> 
> With both 2080Ti's stock and only the power limit maxed out, I am sucking down 1,070-1,100 watts according to my power meter. If I attempt to increase just the voltage slider to 100%, I get power shutdowns in the game. (Overclocking is not an option.)
> 
> So with that being said, I am going to get a 1,600 watt PSU as soon as financially possible. They are absurdly expensive right now, and I am not really sure why.
> 
> The performance in this title alone is unbelievable! Performance looks to be double.


RDR2 on Vulkan and SLI is  ...as to PSU, my 2x 2080 Ti (max 380W per card) plus oc'ed TR can also really suck back the wattskis...I use an Antec HPC Platinum 1300W continuous power in my 2x 2080 Ti rig and so far, so good - but not much headroom left once peripherals (ie 4x D5 etc) are added in.


----------



## acoustic

You're crazy to push the PSU that hard. Just asking for a magic _poof_ moment and being left with two dead 2080TIs while looking for a new CPU and board.


----------



## Krzych04650

tps3443 said:


> So I downloaded RDR2, and this game is amazing! SLI performance is ridiculously good! Both 2080Ti's are sitting at 97% GPU usage while playing. Smoothest experience I have ever gotten out of this title personally. Maxed out with 140-150FPS at 2560x1440 is very easily done, and this is with the cards running around 1,905MHz-1,920MHz.





J7SC said:


> RDR2 on Vulkan and SLI is  ...as to PSU, my 2x 2080 Ti (max 380W per card) plus oc'ed TR can also really suck back the wattskis...I use an Antec HPC Platinum 1300W continuous power in my 2x 2080 Ti rig and so far, so good - but not much headroom left once peripherals (ie 4x D5 etc) are added in.


Do you have G-Sync working in RDR2 with Vulkan mGPU? From what I have briefly tested, and also saw on videos, variable refresh rate does not work with mGPU in this game, and since the game's frametime goes crazy when you become GPU bound, you have to cap the framerate below that point and never let it drop, just like you would before the VRR days.

Also interesting whether DLSS is going to work with mGPU in RDR2 once it gets released. In SOTTR you can enable it and it works, though mGPU in general is broken there, so... Funny how games with actual native mGPU support work worse than the old implicit SLI implementations.



tps3443 said:


> With both 2080Ti's stock and only the power limit maxed out, I am sucking down 1,070-1,100 watts according to my power meter. If I attempt to increase just the voltage slider to 100%, I get power shutdowns in the game. (Overclocking is not an option.)
> 
> So with that being said, I am going to get a 1,600 watt PSU as soon as financially possible. They are absurdly expensive right now, and I am not really sure why.
> 
> The performance in this title alone is unbelievable! Performance looks to be double.


Yea I had to go from 1200W to 1600W PSU as well, and I've waited like 45 days for AX1600i to arrive. Order was postponed multiple times.



tps3443 said:


> Hey, I finally got around to installing my 2nd 2080Ti, and I have both running on full-power BIOSes. I am using a Quadro RTX6000/8000 2-slot NVLink bridge, and SLI is enabled in the driver just fine.
> 
> You seem to have a lot of expertise and experience with this type of setup. So I was wondering if you could share some essentials from your own trial and error with this combination.
> 
> Should I use nvidia inspector? Which version? Are the pre-saved profiles actually useable?
> 
> I'd really appreciate it. I am hoping you can save me some headache. I know you had even Flight Sim working really well in SLI/NVLink mode.


As already mentioned, CFR on the 441.41 driver is something to keep in mind; it has a really high hit rate with DX11 titles, both ones that never had any SLI support and ones that have bad profiles. I played the whole Dark Souls trilogy with it, for example (they have negative scaling with the native SLI profiles), as well as Ori and the Will of the Wisps, which never had any SLI support. Scaling is typically 1.35x, so at DX11 it will only allow you to tie a 3090, but throwing in some external load like SGSSAA seems to add to the scaling; I've had some games going even up to 1.7x scaling with CFR.

Here is a basic list of CFR-tested games, but there are a ton more of them: 3DCenter Forum - Einzelnen Beitrag anzeigen - SLI - Kompatibilitätsbits - Sammelthread (forum-3dcenter.org)
Here is a list of custom or fixed AFR profiles; guys still keep them coming, so that's a very useful thread to search through: 3DCenter Forum - SLI - Kompatibilitätsbits - Sammelthread (forum-3dcenter.org)

You already have the proper platform, so no need to go over x16/x16 etc.

Generally it is best not to hit 99% usage on both GPUs; depending on the game, frametime consistency can suffer.

It is also best to use SLI for heavy GPU effects and resolution increases rather than chasing framerate at low resolutions. This also depends on the game, but you are going to hit engine/CPU limitations a lot, and your scaling and frametimes will suffer, especially since your resolution is just 1440p. You will basically need to use DSR/supersampling/resolution scale all the time on such a setup to take proper advantage of it without bottlenecks.

Another important thing is that real-world gains from overclocking in SLI are far smaller than with a single card, typically half or less. Memory overclocking still mostly does what it did, but the core does not scale that well. Given that you really don't want to hit any power limits on a frametime-sensitive setup like SLI, it is best to opt for a middle-of-the-road, slightly undervolted setting for SLI, like 2025 MHz, instead of pushing 2100+. The trick is basically to keep the cards as "calm" as possible.

Using SLI in older CPU-bottlenecked DX9 titles can also be ill-advised at times, as it can deepen the bottleneck. The first Witcher is a good example: theoretically you are getting good scaling in GPU-bound places, but you are getting huge drops and frametime problems in CPU-bound ones, much more so than on a single card.

One quirk about SLI that not a lot of people know is that double-buffered V-Sync does not work with it. So games like AC Black Flag or Quantum Break that were randomly locking your FPS to half the refresh rate whenever they felt like it will no longer do this; it can be bypassed with Fast Sync anyway, though.

There are probably more things to say but this is what I remember off the top of my head.
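A back-of-the-envelope sketch of what those scaling factors mean in practice (the 1.35x and ~1.7x numbers are the rough CFR estimates quoted above; the 60 FPS baseline is just an assumed example):

```python
# Estimated two-card framerate from a single-card baseline and a measured
# scaling factor, plus the per-frame time budget at a given framerate.

def sli_fps(single_card_fps: float, scaling: float) -> float:
    """Expected SLI framerate for a given scaling factor."""
    return single_card_fps * scaling

def frametime_ms(fps: float) -> float:
    """Frame time budget in milliseconds at a given framerate."""
    return 1000.0 / fps

baseline = 60.0                   # hypothetical single 2080 Ti result
print(sli_fps(baseline, 1.35))    # typical DX11 CFR scaling: ~81 FPS
print(sli_fps(baseline, 1.7))     # best case with SGSSAA-style extra load: ~102 FPS
print(frametime_ms(81.0))         # ~12.3 ms per frame at the typical result
```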


----------



## tps3443

Krzych04650 said:


> Do you have G-Sync working in RDR2 with Vulkan mGPU? From what I have briefly tested, and also saw on videos, variable refresh rate does not work with mGPU in this game, and since the game's frametime goes crazy when you become GPU bound, you have to cap the framerate below that point and never let it drop, just like you would before the VRR days.
> 
> Also interesting whether DLSS is going to work with mGPU in RDR2 once it gets released. In SOTTR you can enable it and it works, though mGPU in general is broken there, so... Funny how games with actual native mGPU support work worse than the old implicit SLI implementations.
> 
> 
> 
> Yea I had to go from 1200W to 1600W PSU as well, and I've waited like 45 days for AX1600i to arrive. Order was postponed multiple times.
> 
> 
> 
> As already mentioned, CFR on the 441.41 driver is something to keep in mind; it has a really high hit rate with DX11 titles, both ones that never had any SLI support and ones that have bad profiles. I played the whole Dark Souls trilogy with it, for example (they have negative scaling with the native SLI profiles), as well as Ori and the Will of the Wisps, which never had any SLI support. Scaling is typically 1.35x, so at DX11 it will only allow you to tie a 3090, but throwing in some external load like SGSSAA seems to add to the scaling; I've had some games going even up to 1.7x scaling with CFR.
> 
> Here is a basic list of CFR-tested games, but there are a ton more of them: 3DCenter Forum - Einzelnen Beitrag anzeigen - SLI - Kompatibilitätsbits - Sammelthread (forum-3dcenter.org)
> Here is a list of custom or fixed AFR profiles; guys still keep them coming, so that's a very useful thread to search through: 3DCenter Forum - SLI - Kompatibilitätsbits - Sammelthread (forum-3dcenter.org)
> 
> You already have the proper platform, so no need to go over x16/x16 etc.
> 
> Generally it is best not to hit 99% usage on both GPUs; depending on the game, frametime consistency can suffer.
> 
> It is also best to use SLI for heavy GPU effects and resolution increases rather than chasing framerate at low resolutions. This also depends on the game, but you are going to hit engine/CPU limitations a lot, and your scaling and frametimes will suffer, especially since your resolution is just 1440p. You will basically need to use DSR/supersampling/resolution scale all the time on such a setup to take proper advantage of it without bottlenecks.
> 
> Another important thing is that real-world gains from overclocking in SLI are far smaller than with a single card, typically half or less. Memory overclocking still mostly does what it did, but the core does not scale that well. Given that you really don't want to hit any power limits on a frametime-sensitive setup like SLI, it is best to opt for a middle-of-the-road, slightly undervolted setting for SLI, like 2025 MHz, instead of pushing 2100+. The trick is basically to keep the cards as "calm" as possible.
> 
> Using SLI in older CPU-bottlenecked DX9 titles can also be ill-advised at times, as it can deepen the bottleneck. The first Witcher is a good example: theoretically you are getting good scaling in GPU-bound places, but you are getting huge drops and frametime problems in CPU-bound ones, much more so than on a single card.
> 
> One quirk about SLI that not a lot of people know is that double-buffered V-Sync does not work with it. So games like AC Black Flag or Quantum Break that were randomly locking your FPS to half the refresh rate whenever they felt like it will no longer do this; it can be bypassed with Fast Sync anyway, though.
> 
> There are probably more things to say but this is what I remember off the top of my head.


Hey, thank you so much for the information. This will greatly help me out! I am definitely going to get a 1,600 watt PSU ASAP. I really want this setup running as reliably as I can get it.

I am currently running the newest driver, 471.11. I always confirm whether G-Sync is working with my monitor's built-in FPS counter; with any game, the monitor's FPS counter is actually the current refresh rate. Last night I enabled this counter in RDR2, and it was moving up and down like a frame rate, which confirmed that variable refresh was working. If G-Sync were not working, the monitor's OSD counter would just sit at 165 and not move.

As for overclocking, I do have a 532 watt power limit on each card. I am all for the smooth experience, but I'm hard-headed too, so we all know I'm gonna try and push 2,100-2,130 on both cards simultaneously. One of the 2080Ti's has bad Micron memory, so unfortunately I am limited to only +725 on the memory for both cards.


Also, for SLI the best BIOS may just be the Galax HOF 2KW BIOS. While running stock, it'll send 1.1V+ and 2,055MHz or possibly more to each card with unlimited power. This is something else I have considered trying. (You can't save profiles with the HOF XOC BIOS, so I'd be running default settings.) I have not tried this BIOS on both cards yet, but it would essentially provide plenty of power to each card, and overclocking wouldn't be required at all.


----------



## J7SC

tps3443 said:


> Hey, thank you so much for the information. This will greatly help me out! I am definitely going to get a 1,600 watt PSU ASAP. I really want this setup running as reliably as I can get it. (...)
> 
> Also, *for SLI the best BIOS may just be the Galax HOF 2KW BIOS*. While running stock, it'll send 1.1V+ and 2,055MHz or possibly more to each card with unlimited power. This is something else I have considered trying. (You can't save profiles with the HOF XOC BIOS, so I'd be running default settings.) I have not tried this BIOS on both cards yet, but it would essentially provide plenty of power to each card, and overclocking wouldn't be required at all.


...HoF 2KW bios x 2 GPUs ? Yowzers  ...while the top card in my setup has an outside bench limit of just over 2200 MHz and the second one around 2175 (both on 380 W max vbios), I run them at 2115 or so in FS 2020 AND with a PL of no more than 110% (max is 130%). That seems to work fine as the cards run really cool, and w/o hiccup in FS 2020.

BTW, many/most Crytek-engined games like SLI-CFR; and also check YT for Metro Exodus CFR - works great w/ DX12 RTX. Still, regular SLI (AFR) is obviously far less hassle overall, especially for older games (never mind single cards)



Spoiler


----------



## tps3443

J7SC said:


> ...HoF 2KW bios x 2 GPUs ? Yowzers  ...while the top card in my setup has an outside bench limit of just over 2200 MHz and the second one around 2175 (both on 380 W max vbios), I run them at 2115 or so in FS 2020 AND with a PL of no more than 110% (max is 130%). That seems to work fine as the cards run really cool, and w/o hiccup in FS 2020.
> 
> BTW, many/most Crytek-engined games like SLI-CFR; and also check YT for Metro Exodus CFR - works great w/ DX12 RTX. Still, regular SLI (AFR) is obviously far less hassle overall, especially for older games (never mind single cards)
> 
> 
> 
> Spoiler
> 
> 
> 
> 
> View attachment 2515377


I have these at 127% power on each card, which is roughly "up to" 530 watts per card with the 8 ohm resistors soldered on. At stock clocks they managed 19,030 in Port Royal. I can't overclock them and run Port Royal yet (the PC will just power off). Anyway, I imagine it'll be close to, or maybe a little less than, your 21K+ score. One of my cards has some crap GDDR6 lol.

Also, I am running x8/x8 because I am using this shorter 2-slot NVLink bridge. In order to get x16/x16 I have to use a 4-slot NVLink bridge with the correct PCIe slots utilized.

Is it worth going x16/x16? I would have to remove my bottom ultra-slim XSPC 360mm radiator. I'm tempted to just go test bench; I'm kinda sick of being cramped in a case.


----------



## J7SC

tps3443 said:


> I have these at 127% power on each card, which is roughly "up to" 530 watts per card with the 8 ohm resistors soldered on. At stock clocks they managed 19,030 in Port Royal. I can't overclock them and run Port Royal yet (the PC will just power off). Anyway, I imagine it'll be close to, or maybe a little less than, your 21K+ score. One of my cards has some crap GDDR6 lol.
> 
> Also, I am running x8/x8 because I am using this shorter 2-slot NVLink bridge. In order to get x16/x16 I have to use a 4-slot NVLink bridge with the correct PCIe slots utilized.
> 
> Is it worth going x16/x16? I would have to remove my bottom ultra-slim XSPC 360mm radiator. I'm tempted to just go test bench; I'm kinda sick of being cramped in a case.


I've never run my cards in x8/x8... but from what I've read, the loss would be minor with 2x RTX 2000-series cards...

My 2x 2080 Ti Port Royal score was mid-21k the last time I ran it, afaik, but it has been a while since I benched that pair.


----------



## kithylin

acoustic said:


> You're crazy to push the PSU that hard. Just asking for a magic _poof_ moment and being left with two dead 2080TIs while looking for a new CPU and board.


Just a note for you: major-brand power supplies like the Seasonic Platinum lines in the 1000W+ region are designed for constant 100% load. If it's a 1300W unit then it can handle 1300W just fine. Another note on how power supplies work: there is a slight efficiency loss. Say for example we have a 1300W unit that is 88% efficient @ 100% load. That means 12% of the input power is lost to heat in conversion, so delivering the full 1300W of DC means about 1,477 watts drawn at the AC side at the outlet (1300 / 0.88). And that would be running the power supply at 100% capacity and not overloading it, even though it's rated as a 1300 watt power supply. This is not "pushing it hard". It's not going to make it go _poof_ and it's not going to destroy it. That scenario I just described is within normal operating parameters.


----------



## acoustic

kithylin said:


> Just a note for you: major-brand power supplies like the Seasonic Platinum lines in the 1000W+ region are designed for constant 100% load. If it's a 1300W unit then it can handle 1300W just fine. Another note on how power supplies work: there is a slight efficiency loss. Say for example we have a 1300W unit that is 88% efficient @ 100% load. That means 12% of the input power is lost to heat in conversion, so delivering the full 1300W of DC means about 1,477 watts drawn at the AC side at the outlet (1300 / 0.88). And that would be running the power supply at 100% capacity and not overloading it, even though it's rated as a 1300 watt power supply. This is not "pushing it hard". It's not going to make it go _poof_ and it's not going to destroy it. That scenario I just described is within normal operating parameters.


I'm well aware of how it works; you didn't need to mansplain it to me, brotha lol.

Regardless, when you're right on the verge of causing the entire PC to shut down, and just moving your voltage sliders to 100% causes shutdowns, you're going to have a bad time. The PSU is protecting itself, but eventually that heavy a load will ring the bell on that PSU.


----------



## tps3443

kithylin said:


> Just a note for you: major-brand power supplies like the Seasonic Platinum lines in the 1000W+ region are designed for constant 100% load. If it's a 1300W unit then it can handle 1300W just fine. Another note on how power supplies work: there is a slight efficiency loss. Say for example we have a 1300W unit that is 88% efficient @ 100% load. That means 12% of the input power is lost to heat in conversion, so delivering the full 1300W of DC means about 1,477 watts drawn at the AC side at the outlet (1300 / 0.88). And that would be running the power supply at 100% capacity and not overloading it, even though it's rated as a 1300 watt power supply. This is not "pushing it hard". It's not going to make it go _poof_ and it's not going to destroy it. That scenario I just described is within normal operating parameters.


You can't pull past 1,200 watts on the 12V rail, though. You can only exceed that toward 1,300+ watts when an unrealistic load is drawing from the 3.3V, the 5V, and the 12V rails simultaneously.

My PC shuts off right when my wall meter hits 1,210 watts. So the 12V rail is going down hard when my CPU and GPUs literally kill it in seconds.

I will never underestimate PSU requirements again, but I used to think the exact same way (a high-quality PSU can handle more power than it's rated for). Well, this is a real-world scenario, and my components are starved lol. 1,200 watts just doesn't make the cut at all, and it really sucks.

I actually bought this PSU remembering reviews of it pulling 1,345 watts and still running OK lol. And I thought, wow! Great PSU! It should be fine then, even with (2) full-power 2080Ti's or if I exceed the 1,200 watts... lol. Boy was I wrong!
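A rough budget check along these lines (every per-component wattage below is an illustrative guess for a rig like this one, not a measurement):

```python
# Sanity-check an estimated DC load against a PSU's 12 V rail capacity.
# Component figures are assumptions: two shunt-modded 2080 Tis at ~530 W
# each plus an overclocked 7980XE and some allowance for the rest.

RAIL_12V_LIMIT_W = 1200  # 12 V capacity of the 1200 W unit discussed here

load_w = {
    "2080 Ti #1 (127% PL)": 530,
    "2080 Ti #2 (127% PL)": 530,
    "7980XE @ 4.8 GHz": 350,
    "fans / pumps / board": 60,
}

total = sum(load_w.values())
headroom = RAIL_12V_LIMIT_W - total
print(f"total 12 V load: {total} W, headroom: {headroom} W")
if headroom < 0:
    print("over budget -> expect OCP shutdowns, especially on transient spikes")
```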


----------



## acoustic

Can't blame the PSU really. You just need more wattage. You're asking a lot out of that poor thing.. lol


----------



## kithylin

tps3443 said:


> I would love to continue using this PSU; I really didn't expect to need a new one. But these cards are barely getting any voltage at all, the boost offset is at +0 on both, and they are both power-crashing my entire PC with my current 1,200 watt PSU.


I know it's kind of off-topic, but if you wanted to keep using that power supply and the two 2080 Ti's, you could always switch to AMD. That older Intel CPU is an insane power hog. A 5950X would give the same (or better) performance and use less than half the power of that old 7980XE even overclocked, which would give you a lot more headroom from your power supply for the video cards.


----------



## pewpewlazer

tps3443 said:


> You can't pull past 1,200 watts on the 12V rail, though. You can only exceed that toward 1,300+ watts when an unrealistic load is drawing from the 3.3V, the 5V, and the 12V rails simultaneously.
> 
> My PC shuts off right when my wall meter hits 1,210 watts. So the 12V rail is going down hard when my CPU and GPUs literally kill it in seconds.
> 
> I will never underestimate PSU requirements again, but I used to think the exact same way (a high-quality PSU can handle more power than it's rated for). Well, this is a real-world scenario, and my components are starved lol. 1,200 watts just doesn't make the cut at all, and it really sucks.
> 
> I actually bought this PSU remembering reviews of it pulling 1,345 watts and still running OK lol. And I thought, wow! Great PSU! It should be fine then, even with (2) full-power 2080Ti's or if I exceed the 1,200 watts... lol. Boy was I wrong!


Your components aren't "starved"; they're actually well fed. So well fed that they're sucking down enough juice (due to transient load spikes, which your power meter won't register) to trigger your power supply's over-current protection, which shuts your computer off. The issue is the way Seasonic designed your power supply, not the wattage rating.

I was in the same boat. Bought a very highly acclaimed 1000w Seasonic based power supply thinking "_gee, I'll never need to upgrade this thing again!_". Needless to say I was NOT a happy camper when my computer consistently shut off at ~550w AC power draw after upgrading to a single 2080 Ti.


----------



## JustinThyme

kithylin said:


> I know it's kind of off-topic, but if you wanted to keep using that power supply and the two 2080 Ti's, you could always switch to AMD. That older Intel CPU is an insane power hog. A 5950X would give the same (or better) performance and use less than half the power of that old 7980XE even overclocked, which would give you a lot more headroom from your power supply for the video cards.


I think the PSU upgrade would be more cost-effective. Changing platforms now, onto all end-of-life products, is a poor decision. The only thing that is new and might see one more upgrade is socket 1200, which doesn't offer much. I'm holding out for DDR5 before I change platforms. As for OC on a 5950X, there's not much to be had; I've not seen anyone run one at 5GHz+ on all cores.

The 7980XE is a decent chip so long as it's been delidded and the pigeon-poop TIM replaced with liquid metal. The 99XX and 109XX don't need delidding; they do fine as is.


----------



## tps3443



pewpewlazer said:


> Your components aren't "starved"; they're actually well fed. So well fed that they're sucking down enough juice (due to transient load spikes, which your power meter won't register) to trigger your power supply's over-current protection, which shuts your computer off. The issue is the way Seasonic designed your power supply, not the wattage rating.
> 
> I was in the same boat. Bought a very highly acclaimed 1000w Seasonic based power supply thinking "_gee, I'll never need to upgrade this thing again!_". Needless to say I was NOT a happy camper when my computer consistently shut off at ~550w AC power draw after upgrading to a single 2080 Ti.


OK, well, I could run a sustained 1,100+ watt load during RDR2 gameplay fairly easily. But after the components heated up and reached equilibrium, and something wild or crazy happened in the game, like a large gunfight with 10-15 NPCs, I would get a power shutdown.

I could sustain 1,100-1,140 watts on my wall meter playing for about 30 minutes to an hour. Then I could get the law or bounty hunters to chase me, and after I racked up so many of them shooting at me at once, the game begins to use my CPU more (which consumes more power than all of my other components).

My system idles at 410 watts on the desktop lol.

Anyways, I have sold off both 2080 Ti’s I owned lol. So it’s okay. I did buy an EVGA 1600 watt PSU though, which was a waste, because I *purchased a 3090 Kingpin today for MSRP*. (The reason behind selling the 2080 Ti’s.)

It’ll be here in a few days. After owning way too many 2080 Ti’s, it’s a solid upgrade!


----------



## tps3443

Guys, I have been in this club too long lol! I was a true believer in the 2080 Ti’s potential performance (I preached it all the time / defended it all the time lol), and in its potential to be a viable card that can compete and still be like the 3rd or 4th fastest GPU in the world (even right now), once set up properly of course. I have had seven 2080 Ti’s in total. One in particular I had on water for a year with a reliable power draw of up to 530 watts! (That was the best 2080 Ti I have ever owned personally.) It left my possession yesterday, shipped off to the new owner @sirplayalot; that card was a beast! I put 400 hours into Rust on it in just these past couple of months alone lol.

Anyways, I purchased a 3090 kingpin for MSRP!

(The 3090 is in my opinion, the only card worth moving up to from a well clocked 2080Ti)

I’m excited to say the least, I’m looking forward to running the new card.


----------



## JustinThyme

tps3443 said:


> Guys I have been in this club too long lol! I was a true believer in the 2080Ti potential performance (I preached it all the time/Defended it all the time lol), and it’s full potential to be a viable card that can compete and still be like the 3rd or 4th fastest GPU in the world (Even right now) Once setup properly of course). I have had (7) 2080Ti’s in total. One in particular I have had on water for a year with a capable power draw of up-to 530 watts reliably! (That was the best 2080Ti I have ever owned personally) It left my possession yesterday, as it was shipped off to the new owner @sirplayalot that card was a beast! I put 400 hours in rust on it in just these past couple months alone lol.
> 
> Anyways, I purchased a 3090 kingpin for MSRP!
> 
> (The 3090 is in my opinion, the only card worth moving up to from a well clocked 2080Ti)
> 
> I’m excited to say the least, I’m looking forward to running the new card.


I’d only go 3090 Strix if they ever become readily available at MSRP, and I’ll be buying two of them. They are the only ones capable of SLI with NVLink connections. I’m seeing the scalpers’ prices dropping on Fleabay. They put them up with no bids, then drop the prices. They are under $3K now. Saw one go a few days ago listed as open box for $2,400. When I see that I’m a bit hesitant. Open box can mean a lot of things, like "I ran the crap out of it mining for 6 months and now want to dump it for a higher price while I can."

I’ll most likely be sitting on what I have until something that does better is released. Holding my breath for Sapphire Rapids and DDR5.


----------



## tps3443

kithylin said:


> I know it's kind of off topic but.. if you wanted to keep using that power supply and the two 2080 Ti's you could always switch to AMD. That older Intel CPU is an insane power hog. A 5950X would be the same (or faster) performance and use less than half of the power of that old 7980XE even overclocked, which would give you a lot more headroom out of your power supply for the video cards.



It’s not worth it for me. I run an NVMe RAID setup that uses a lot of PCIe lanes, and I also use some applications that can leverage my lower memory latency and more-than-double memory bandwidth. The 2080 Ti’s are gone. I’m happy with my platform. My DDR4-4000 CL15 memory and direct-die 5GHz 7980XE are beastly powerful. (I couldn’t care less about power draw lol)

I could also just enable some power saving features in the BIOS and in Windows and save tons of power.


There is word of an EVGA X570 Dark motherboard coming, so I imagine several users would consider getting a 5950X with a board like that.


----------



## JustinThyme

tps3443 said:


> It’s not worth it for me, I run a NVME raid setup that’s uses a lot of PCI-e lanes. And I also use some applications that can leverage my lower memory latency and more than double memory bandwidth. The 2080Ti’s are gone. I’m happy with my platform. My DDR4 4000CL15 memory, and direct die 5Ghz 7980XE is beastly powerful. (I could care less about power draw lol)
> 
> I could also just enable some power saving features in the bios, and in windows and save tons of power.
> 
> 
> There is word of evga X570 Dark motherboard coming. So I imagine several users would consider getting a 5950X with a board like that.


Only problem is AM4 is done. I’m going to wait it out and see what the next gen brings from both sides before I change up.


----------



## tps3443

JustinThyme said:


> Id only go 3090 Strix if they ever become readily available at MSRP and Ill be buying two of them. They are the only ones capable of SLI with Nvlink connections. Im seeing the scalpers prices dropping on Fleabay. They have them up with no bids then drop the prices. They are under $3K now. Saw one go a few days ago listed as open box for $2400. When I see that Im a bit hesitant. Open box can mean a lot of things like I ran the crap out of it mining for 6 months and now want to dump it for a higher price while I can.
> 
> Ill most likely be sitting on what I have until something that does better than what I have is released. Holding my breathe for sapphire rapids and DDR5.


Absolutely. Yeah, I was running NVLink 2080 Ti’s before buying the 3090 KP. The scalability is awesome when set up right! I was using the RTX 8000 NVLink bridge too, so the 100 GB/s link between the cards was working well.


----------



## tps3443

JustinThyme said:


> Only problem is AM4 is done. Im going to wait it out to see what the next gen brings from both sides before I change up.


Right, and this makes sense. I’m certainly not moving to a 5950X. But I know some enthusiasts who are holding back on the platform solely because of the available motherboard options.


----------



## Medizinmann

JustinThyme said:


> Only problem is AM4 is done. Im going to wait it out to see what the next gen brings from both sides before I change up.


Yeah, mostly - there might be a Zen 3 refresh in Q4 2021, with probably 100 MHz or so higher clocks, but nothing really significant.

But concerning the next step, i.e. AM5 and all the others with DDR5…
…the question is: how will the availability of DDR5 memory be?

I personally will get myself a 5950XT when it comes out, if it’s a good upgrade, or at least upgrade to a 5950X, and wait and see how DDR5 is doing…

Best regards,
Medizinmann


----------



## tps3443

Looks like the 5950X is on AMD’s website for $799, in stock.


----------



## JustinThyme

My local Microcenter shows 25+ in stock for $799.
Even Best Buy has it for $790.


----------



## tps3443

Nice to see inventory coming back to normal and prices trying to become reasonable. It only takes about 8 months to a year after a new product launch lol.


----------



## Medizinmann

tps3443 said:


> Nice to see inventory coming back to normal, and prices trying to become reasonable. It only takes about 8 months to a year after a new product launch lol.


Here in Germany it (the 5950X) is also already a little below MSRP (which is €803.19 incl. VAT), with pricing between €739-769 - I’ve even seen a sale for €729...

GPUs are also coming down - but are still around 100% over MSRP for the 3080 and below. 3090s are 10% and the 3080 Ti is 15-25% over MSRP right now.

Best regards,
Medizinmann


----------



## tps3443

Medizinmann said:


> Here in Germany it(5950X) is also already a little below MSRP (which is 803,19€ inkl. VAT) with pricing between 739-769€ - even seen a sale for 729€...
> 
> GPUs are also coming down - but are still around 100% over MSRP for 3080 and below. 3090 are 10% and 3080 Ti is 15-25% over MSRP right now.
> 
> Best regards,
> Medizinmann


I have noticed some GPU MSRPs simply don’t make sense at all vs others.

Like the FTW3 3080 Ti is like $1,800 USD, and a 3090 Kingpin is even $2,089 MSRP.

Understandable if both were 3090’s, but one is essentially a cut-down 12GB GPU with an MSRP that’s simply too close to the higher tier in price.


----------



## tps3443

double post.


----------



## kithylin

tps3443 said:


> I have noticed some GPU MSRP’s simply don’t make sense at all vs others.
> 
> Like the FTW3 3080Ti is like $1800 USD. And a 3090 Kingpin even is $2,089 MSRP.
> 
> Understandable if both were 3090’s but one is essentially a cut down 12GB gpu with an MSRP that’s simply too close to the higher tier in price.


Blame Nvidia for that. The official MSRPs for the Nvidia reference / Founders Edition cards are $1,199 for the 3080 Ti and $300 more, $1,499, for the RTX 3090. So what you’re seeing there for aftermarket versions is actually to be expected. It makes perfect sense once you see what Nvidia wants the cards to sell for.


----------



## acoustic

tps3443 said:


> I have noticed some GPU MSRP’s simply don’t make sense at all vs others.
> 
> Like the FTW3 3080Ti is like $1800 USD. And a 3090 Kingpin even is $2,089 MSRP.
> 
> Understandable if both were 3090’s but one is essentially a cut down 12GB gpu with an MSRP that’s simply too close to the higher tier in price.


The 3080 Ti FTW3 is $1,399 MSRP, not $1,800. Quite a ways off from $2,089, or $2,300, for the 3090 KP.


----------



## tps3443

acoustic said:


> The 3080ti FTW3 is $1399 MSRP, not $1800. Quite a far way off of $2089 or $2300 for the 3090 KP.


Sorry about that, I fixed it. I was talking about some AIB RTX 3090 models’ MSRPs vs some other AIB 3090s’ MSRPs. The prices seem really similar for the high-end model 3090’s.

The 3090 Kingpin is $2,089. The cheapest 3090 available from EVGA is only $490 less than a 3090 Kingpin. Compare that to last gen, where the 2080 Ti Kingpin was over $900 more expensive than the cheapest EVGA 2080 Ti Black Edition - nearly an entire 2080 Ti on top of the price tag just for a 2080 Ti Kingpin.

And this is exactly why I think the 3090 KP is probably the best deal right now, honestly (and that sounds just idiotic hearing myself say it). But it’s a GPU this fast with a 24GB memory buffer, especially with even the cheapest of cheap 3090’s fetching $2,300 on eBay lol.

Now with the above being said, if a 3090 KP is available at MSRP, of course that’s a deal lol. Even the 3090 Strix OC is $2,100 or more MSRP? Beastly 3090’s are expensive!! Although even the best 3090 available in America is still only about $500 more than the cheapest AIB 3090.

I remember when the 3090 KP was getting teased, almost everyone online was expecting $2,599-$2,899 or even more MSRP lol.


----------



## tps3443

kithylin said:


> Blame Nvidia for that. The official MSRP from Nvidia reference cards / founder's edition are $1199 for the 3080 Ti and +$300 more to $1499 for the RTX 3090. So what you're seeing there for aftermarket versions is actually to be expected. It actually does make perfect sense when you see what Nvidia wants the cards to market for.


Yeah, it’s not too bad. I don’t think the RTX 3000 series MSRPs are a bad deal at all. I think the high-end, high-power models are certainly worth their value.


----------



## acoustic

Yeah, the KP is a great deal this gen. I was in queue for one, but got a 3080 Ti STRIX at Microcenter by accident while shopping for some Bitspower fittings. They also had a block and backplate for that STRIX, so it made life easy.

The KP is definitely a great choice this gen with how bloated other AIBs’ prices are.


----------



## tps3443

Double post.


----------



## tps3443

acoustic said:


> Yeah the KP is a great deal this gen. I was in queue for one, but got a 3080TI STRIX at Microcenter on accident while shopping for some Bitspower fittings. They also had a block and backplate for that STRIX so it made life easy.
> 
> KP is definitely a great choice this gen with how bloated other AIBs.


Yeah, the high-end AIB models are worth it for sure this generation, no matter the card tier: 3070/80/80 Ti/90. Nvidia was super skimpy on power with the 3080 Ti especially! (So I can see the Strix model paying off here.)

The RTX 2080 to RTX 2080 Super saw a 25 watt power increase for just some faster memory and a tiny jump in CUDA cores (literally the same cards).

Now the 3080 Ti only got a 30 watt jump over a standard 3080 lol, which makes no sense at all. (No room for the card to perform at all.)


----------



## JustinThyme

I may consider upgrading to a pair of Strix 3090s when the prices come down out of never-never land. Slowly but surely it’s happening. I don’t think it had as much to do with the launch, as I didn’t have problems at the launch of the 2080 Ti’s. When China is hoarding 80% of the silicon and they hover over Taiwan like they own it, that’s the problem. Now that crypto mining is banned in China (because they don’t want people making any side money, typical communist regime), the demand is dropping sharply.

I even had a guy offer up a used 3090 Strix card I was watching for $2,400, but I refused it. He said he bought it at Best Buy, which means he paid MSRP for it and ran it a few months. I’m like, I’ll pay you what you paid for it including the sales tax and cost of shipping - which isn’t that far from me - but not any more. If you want, we can take it off eBay to save you that hit, and I’ll drive the 45 minutes to come get it and pay in cash. I’ve not heard back from him, but I see the price is still dropping, as it is pretty much across the board, with some idiots still listing them as high as $5,000. I’m like, when elephants fly!


----------



## Kaltenbrunner

I have a 2070, so there’s not much to step up to. A used 2080 Ti still costs almost as much as a 3070. I guess a lot of 2080 Ti owners can’t upgrade even if they wanted to, further slowing resales.


----------



## JustinThyme

Kaltenbrunner said:


> I have a 2070, so there's not much to step up to. A used 2080 Ti still cost's almost as much as a 3070. I guess a lot of the 2080 ti owner can't upgrade if they wanted to, farther slowing up re-sales.


Yeah, the GPU market got out of hand. People were poking fun at 2080 Ti owners until they started selling for more than their original MSRP. I paid $750 each at launch. I haven’t run a 3070, but I don’t think it will outperform a 2080 Ti.


----------



## Mystic33




----------



## tps3443

JustinThyme said:


> Yeah the GPU market got out of hand. People were poking fun at 2080Ti owners until they starting selling for more than their original MSRP. I paid $750 each at launch. I havent run a 3070 but dont think it will outperform a 2080Ti.




My last 2080 Ti on an EKWB full block, hard-modded and BIOS-flashed, at a sustained 2,130 MHz core / 16,200 memory, would beat an RTX 3070 by a minimum of 26% and an average of 28-30% in literally any synthetic benchmark. Games actually showed an even bigger gap.

Port Royal +30%, Time Spy +30%, Unigine Heaven +30%.

All graphics scores, or average frame rate.

My 2080 Ti would also beat a standard 2080 Ti by even more lol.
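Comparisons like the ones above boil down to a simple score ratio. A minimal sketch, using illustrative placeholder scores rather than real benchmark runs:

```python
# How a "+30%" style comparison is computed: just a ratio of graphics scores.
# The scores below are hypothetical placeholders, not actual results.
def percent_faster(score_a: float, score_b: float) -> float:
    """How much faster card A is than card B, as a percentage."""
    return (score_a / score_b - 1.0) * 100.0

oc_2080ti_score = 13000.0  # hypothetical overclocked 2080 Ti graphics score
rtx_3070_score = 10000.0   # hypothetical RTX 3070 graphics score

print(f"{percent_faster(oc_2080ti_score, rtx_3070_score):.0f}% faster")  # prints "30% faster"
```

The same helper works for any pair of graphics scores or average frame rates.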


----------



## tps3443

Kaltenbrunner said:


> I have a 2070, so there's not much to step up to. A used 2080 Ti still cost's almost as much as a 3070. I guess a lot of the 2080 ti owner can't upgrade if they wanted to, farther slowing up re-sales.


With a good watercooled and heavily overclocked 2080 Ti you can perform just a few percent below an RTX 3080 in normal, non-ray-traced games. And it’ll beat up a 3070 by up to 30%, depending on your OC of course. There isn’t anything that amazing to upgrade to after a well-clocked, high-power 2080 Ti though. I blame this damn 2080 Ti for being so damn fast lol; 2130/16200 daily sustained speeds in gaming are just incredible.

I finally found a 3090 Kingpin at MSRP, which was literally the only option that made me sell off my (2) 2080 Ti’s.

I’m still testing to see how I like it, but it’s just tough to upgrade and see a large boost with anything really nowadays.


The GPU market is a little silly as well. But there are some good options out there. I play at 2560x1440 165Hz, so I really don’t even feel a difference with the games I play, coming from a single fast OC’d 2080 Ti to my stock 3090 KP. I have to look at the FPS/OSD lol and say, ok yeah! It’s faster lol. But the average person probably wouldn’t know.


----------



## TheOpenfield

zhrooms said:


> Important Starting in the second half of 2019, many cards are now shipping with a newer XUSB FW Version ID, preventing a lot of BIOSes in the TechPowerUp BIOS Collection from being flashed to the card through NVFlash, so be aware when purchasing a new card, you might not be able to flash a higher power limit BIOS on it. _Example:_ MSI Gaming X Trio purchased in Q3 2018 shipped with the 0x70060003 firmware (Build Date: 2018-08-29), the same MSI card purchased a year later in Q3 2019 shipped with the firmware 0x70100003 (Build Date: 2018-12-28), this newer card also had a different EEPROM (WBond instead of ISSI). The BIOS and firmware can however be overwritten by force, using an external programming/test clip, bypassing NVFlash entirely.


Didn't have any luck flashing the Palit or GB Non-A BIOS on my 2019 Q4 Ventus GP (new BIOS with 257/280W). Are there any 300W+ Non-A BIOSes available with the newer XUSB? I guess the Palit one didn't receive any updates.

310W could already help out a lot; with a shunt mod this card could really shine in benchmarks. Even at 280W I can get past a 16,500 GPU score in Time Spy with an average clock well above 2 GHz.








I scored 14 327 in Time Spy (AMD Ryzen 5 3600, NVIDIA GeForce RTX 2080 Ti, 64 GB RAM, 64-bit Windows 10) - www.3dmark.com





Depending on the game, 280W already yields 2100+ MHz (BFV, Horizon Zero Dawn), while others will drop just below 2 GHz (Hunt: Showdown, Witcher 3 UHD). For a daily setup, this is already plenty, and still somewhat efficient and easy to cool.

Also as a side note:


zhrooms said:


> *Ventus GP* *Non-A* | 2 Fan | 2.5 Slot | 268mm | N/A | 16 Power Phases | 1545 MHz Boost | 250/280 W | Reference PCB | EAN 4719072646288 | PN V371-088R


This seems to be incorrect (at least for later models). It has a cut-down 10-phase for Vcore and 2-phase for memory, missing a total of 4 phases. Also, they downgraded the smart power stages from 70A to 60A parts (also found on the 2080(S)). Really sad to see all these cuts on a formerly 1,000+ € card...









Shouldn't really matter with a full cover water block though.


----------



## bryanduc

Hello, I have a question. I have an SLI setup of GIGABYTE AORUS GeForce RTX 2080 Ti Xtreme WaterForce WB cards.
I would like to know if I could flash the Galax RTX 2080 Ti HOF OC Lab BIOS (custom PCB, 3x 8-pin, 2000W) without worries? If not, which BIOS should I use to raise the power limit to the max? I have no problem with temperature; I have 3x 360mm radiators in my loop.


----------



## JustinThyme

bryanduc said:


> Hello I have a question, I have a sli from GIGABYTE AORUS GEFORCE RTX 2080 TI XTREME WATERFORCE WB
> I would like to know if I could flash the bios Galax RTX 2080 Ti HOF OC Lab Custom PCB (3x8-Pin) 2000W without worries? if not what bios to increase the power limit to the max? I have no problem with the temperature I have 3x360mm radiator in my loop


Honestly, depending on where you are now, upping the power limit may be futile. I have a pair of the Strix O11G OC cards and flashed them to the Matrix XOC BIOS and gained nothing but heat. I also tried just the Matrix BIOS, as they are identical cards. Still no improvement over the stock BIOS. The chip hits its limits without throttling on the stock BIOS with the power limit at 125%. Both cards run 2130, so I just flashed the stock BIOS back on them. If you are in the same area, that’s about as much as you will get. Maybe make it to 2160, but I’d doubt it. All I did was raise the power limit and clocks; I left voltage at default, as raising it is actually counterproductive in my case. Keeping the cards cool, as in under 45°C (the first throttle point), is what works for me. Both cards are under water and never pass 40°C.


----------



## Laithan

Mystic33 said:


>


Is it April? lol

Seriously, there is a lot wrong with this; please do not run the card this way. The VRMs need cooling too, not just the GPU/memory (they require ACTIVE cooling). None of this is sufficient... or correct. If you insist on budget water cooling, at least grab an NZXT Kraken or something similar.


----------



## bryanduc

JustinThyme said:


> Honestly depending on where you are now upping the power limit may be futile. I have a pair of the Strix O11G OC cards and flashed to the Matrix XOC BIOs and gained nothing but heat. I also tried just the Matrix BIOS as they are identical cards. Still no improvements over the stock BIOS. The chip hits its limits without throttling with the stock BIOS and power limit at 125%. Both cards run 2130 so I just flashed the stock BIOs back on them. If you are in the same area that’s about as much as you will get. Maybe make it to 2160 but I’d doubt it. All I did was raise the power limit and clocks. Left voltage at default as raising that is actually counter productive in my instance. Keeping the cards cool as in under 45C that is the first throttle point is what works for me. Both cards are under water and never pass 40C.


Thank you for your response. Currently, via Afterburner, my OC maxes out at 2100 MHz with the temperature at 42°C. I have analyzed my cards closely and I am blocked by the power limit. I would have liked to test the BIOS I mentioned above; the real question is whether it is compatible with my GPU.


----------



## JustinThyme

That I can’t tell you. It’s a risky venture. That’s why I stayed with the Matrix XOC, as the Strix and Matrix are identical cards in every respect; the Matrix is just supposed to have better-binned components, and the XOC removes power limits. I’m running mine now with the stock BIOS and get the same boost clock as a Matrix. The only thing that sucks about ASUS is you have to use software. IMO, the software should load what you want, but once that’s done you shouldn’t be required to leave it running in the background hogging system resources.
I can tell you the Galax BIOS would not load on my cards. You’ll just have to wait for someone with the same card who has tried it, or just try it. Most of the time they will fail to load if not compatible, but sometimes you get a brick. I’m steady at 2130 and can reach 2160, but I’m not stable there without hitting power limits. I don’t increase voltage either, as that results in heat and pushes you into the power limits. Even at 2160 it’s not power limits; it’s the limitation of the chips.


----------



## TheOpenfield

Laithan said:


> Seriously, there is a lot wrong with this, please do not run the card this way. VRMs need cooling also not just the GPU/Memory (requires ACTIVE cooling). None of this is sufficient... or correct. If you insist on budget water cooling at least grab an NZXT Kraken or something similar.


Give it a small breeze and it should do just fine. The Kraken does nothing more special than that.


----------



## Laithan

TheOpenfield said:


> Give it a small breeze and it should do just fine. The Kraken does nothing more special than that.


Yeah, I suppose the fact that every single cooler has direct contact for the VRMs, memory and MOSFETs is not necessary, because a "small breeze should do just fine"... LMAO... these things will go over 100°C without active cooling. The Kraken at least places a fan directly over the components, and that is not a "small breeze".


----------



## JustinThyme

I’m not a fan of any AIO. Custom loop or bust.


----------



## pewpewlazer

Laithan said:


> Yeah I suppose the fact that every single cooler has direct contact for vrms, memory and mosfets is not necessary because a "small breeze should do just fine"...LMAO... *these things will go over 100C without active cooling.. *


Proof???


----------



## TheOpenfield

Laithan said:


> Yeah I suppose the fact that every single cooler has direct contact for vrms, memory and mosfets is not necessary because a "small breeze should do just fine"...LMAO... these things will go over 100C without active cooling.. The kraken at least places a fan directy over the components and is not a "small breeze"


Yup. Because on air, there is another ~300W heat source just nearby, and its heat is dissipated all over the PCB.

The VRM loss should be around 30W. That Zotac AMP! Extreme has a whole lot of surface area for the power delivery; combined with heatsinks and, yes, a small breeze, temps should be in spec.
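As a rough sanity check on that ~30W figure, a minimal sketch - note the 90% conversion efficiency is an assumed typical value for a GPU buck converter under heavy load, not a measured spec of this card's power stages:

```python
# Estimate VRM conversion loss for a GPU core drawing ~300 W,
# assuming ~90% buck-converter efficiency (an assumption, not a datasheet value).
def vrm_loss(p_out_watts: float, efficiency: float = 0.90) -> float:
    """Power dissipated in the VRM itself: input power minus delivered power."""
    p_in = p_out_watts / efficiency
    return p_in - p_out_watts

loss = vrm_loss(300.0, 0.90)
print(f"Estimated VRM loss: {loss:.1f} W")  # ~33 W at 90% efficiency
```

At 90% efficiency the loss lands in the low 30s of watts, which lines up with the ~30W ballpark above; a couple of points of efficiency either way moves it by roughly 7 W.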


----------



## Laithan

Final EVGA VRM Torture Test: VRM Thermals Not the Killer of Cards - www.gamersnexus.net

"Two EVGA GTX 1080 FTW cards have now been run through a few dozen hours of testing, each passing through real-world, synthetic, and torture testing. We've been following this story since its onset, initially validating preliminary thermal results with thermal imaging, but later stating that we..."





VRMs can certainly go above 100°C with no cooling at all. In this example, over 100°C was measured with the stock cooler installed and no pads. It will not likely burn, as they are rated for even higher temps, so it is an efficiency issue more than anything (and one that worsens with overclocking). Poor efficiency means power loss, so there is benefit to keeping them cool (and as we know, a 2080 Ti is already power starved). I am just pointing out that they should also be actively cooled; I assumed "a small breeze" meant passive cooling only, from case airflow.


----------



## tcclaviger

Did a thing the other day. Not done yet, but not sure how much further it'll actually go.

An EVGA Black non-A card doing the lord's work; it will last me until next gen for sure.

Bloated Win 10 Pro install and all. I will be doing some more serious work in a bit to see if I can run down an 11,000 score.

Hellfire, I'm fine being 2nd to you; I always seem to be just on your heels across benchmark databases lol.









UNIGINE Superposition benchmark score - detailed score page at benchmark.unigine.com


----------



## Medizinmann

JustinThyme said:


> I’m not a fan of any AIO. Custom loop or bust.


I am very happy with an Eiswolf GPX Pro 240 in combination with an Eisbaer 360 plus an extra 120mm rad in the loop, and 12 Noctua fans in push/pull - it has worked flawlessly since 08/2019.

I am using the 380W Galax BIOS on my Palit GamingPro OC 2080 Ti and get a stable 2130 MHz....

But in the end this isn't much cheaper than a custom loop, to be honest... 🤔

Best regards,
Medizinmann


----------



## JustinThyme

Medizinmann said:


> I am very happy with an Eiswolf GPX pro 240 in combination with an Eisbaer 360 + an extra 120mm rad in this loop and 12 Noctua fans in push&pull - works flawlessly since 08/2019.
> 
> I am using the 380W Galax BIOS on my Palit Gamingpro oc 2080 ti and get stable 2130 Mhz....
> 
> But in the end this isn't much cheaper than a custom loop - to be honest... 🤔
> 
> Best regards,
> Medizinmann


I’m getting 2130 on the Strix O11G OC with the stock BIOS, no voltage increase, and power at 125%. That’s just the limitation of the chips. I went with the Matrix XOC with no power limit and didn’t get any better, just hotter, so I flashed it back.


----------



## Medizinmann

JustinThyme said:


> Im getting 2130 on the Strix O11G OC with stock BIOS, no voltage increase and power to 125%. Thats just the limitations of the chips. Went with Matrix XOC with no power limit and didn't get any better, just hotter so flashed it back


Well, yes and no - when I got my GPU I put the card immediately under water and tested a lot; some time later (4 months) I discovered the possibility to flash.

The card came with a firmware with a power limit of 280W; later a 330W power limit came out, but I flashed directly to the KFA2 380W BIOS, which results in a power limit of 398W, if I can trust HWiNFO’s readings…

And I can measure differences. But you are right - over 330-340W you get into the realm of diminishing returns, hard... and you need a lot more cooling to profit from it. I did some test runs, and with all cooling at full throttle, an open window in winter, and an ambient temperature of around 13°C in the room, I could get 2160 stable and 2175 for test runs... under normal conditions (22-24°C ambient) I can get 2145 stable, but with more noise than I like…

In the end I also only use around a 320W power limit for everyday use now... especially since most games I play already run at well over 100+ fps…

But that wasn’t the point of my previous post - I just wanted to point out that some kinds of AIO can work; but even when they work, they aren’t that much more budget friendly, and I must admit the solution doesn’t look very compelling either. But my case is closed and I rarely look inside, and it was easier this way... ;-)

Best regards,
Medizinmann


----------



## JustinThyme

I can also hit 2160 when it’s cold with the stock BIOS, but I leave it at 2130 as I don’t enjoy freezing my tush over 30 MHz. I’m good with 2130 24x7 with room ambient at 24°C, nice and quiet. I don’t hit power limits unless I increase the voltage; leave it at standard and it’s all good. In the beginning, when I failed, I kept increasing voltage, and all that did was increase power consumption and produce more heat. These chips are quite different from the 1080 Tis, where an increase in voltage, while raising temps a bit, got better performance. With the 2080 Ti it’s a complete reversal for me; it’s about what the chip can do at what temperature. I’m not a fan of AIOs and seldom take the easy way out - I take the best way out - but I understand others don’t share my ideology.
Happy to hear your solution works for you.


----------



## TheOpenfield

tcclaviger said:


> Did a thing the other day, not done yet, but not sure how much further it'll actually go.
> 
> EVGA Black non-A card doing the lord's work, will last until next gen for me for sure.
> 
> Bloated Win 10 Pro install and all, will be doing some more serious work in a bit to see if I can run down an 11,000 score.
> 
> Hellfire I'm fine being 2nd too, I always seem to be just on his heels across BM databases lol.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> UNIGINE Superposition benchmark score
> 
> 
> UNIGINE Superpsition detailed score page
> 
> 
> 
> 
> benchmark.unigine.com
> 
> 
> 
> 
> View attachment 2518807


Well done for a Non-A. No physical modifications, just the 310W BIOS, I suppose?

I got just under 10k with my 280W-max card in one of my first runs; I could probably just barely hit 10k with some tweaking. But more than that is just not possible at 280W.


----------



## JustinThyme

My crappy run: stock BIOS, not OC’d much.


----------



## tcclaviger

TheOpenfield said:


> Well done for a Non-A. No physical modifications, just the 310W BIOS I suppose?
> 
> I got just under 10k with my 280W max. card in one of the first runs, could probably just barely hit 10k with some tweaking. But more than that is just not possible at 280W.
> View attachment 2519038


Nah, it has a shunt mod on the 8-pin cable shunts - 8 milliohm resistors - with the 310 watt, 124% BIOS. Sadly nothing I do will push it past 10,333. More core speed doesn't help (2205 is stable at 1.06V), and lower temps don't matter at this point; it's essentially walling on the maxed-out GDDR clock before score regression. I found the shunt mod isn't worth all that much when the card is kept under 40°C at all times; the extra power helps stabilize clocks but doesn't improve performance all that much. I use a locked curve at 1.06V and 2160 for gaming, and it is quite happy to just sit at 2160 and never move lol.

Cooled by a Kryographics Next block and an active backplate.

Just happy to see where it reached; it helps with my decision to skip the current gen, and possibly next gen if the market doesn't settle down.
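For anyone curious how a shunt mod like this skews the power limit, here's a minimal sketch of the math. The stacked resistor sits in parallel with the stock shunt, so the card under-reads current; the 8 mΩ value comes from the post above, while the 5 mΩ stock shunt is an assumed typical value, not verified for this specific card:

```python
# Sketch: effect of stacking a resistor across a current-sense shunt.
# Assumed: 5 mOhm stock shunt (typical, unverified), 8 mOhm stacked resistor.
def parallel(r1: float, r2: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return (r1 * r2) / (r1 + r2)

r_stock = 0.005            # ohms, assumed stock shunt value
r_added = 0.008            # ohms, stacked resistor from the mod
r_eff = parallel(r_stock, r_added)

# The card derives current from the voltage drop across the shunt while
# still assuming r_stock, so it under-reads power by r_eff / r_stock.
scale = r_eff / r_stock            # ~0.615 with these values
reported_limit = 310.0             # watts, the flashed BIOS cap
actual_limit = reported_limit / scale

print(f"effective shunt: {r_eff * 1000:.2f} mOhm")
print(f"real power at the reported 310 W cap: {actual_limit:.0f} W")
```

Under these assumed values the "310 W" cap corresponds to roughly 500 W of real draw, which is why a modded card's clocks stabilize even though the software readout barely moves.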


----------



## JustinThyme

tcclaviger said:


> Nah, it's has shunt mod on the 8 pin cable shunts, 8 milliohm resistors with 310 watt 124% bios. Sadly nothing I do will push it past 10333, more core speed doesn't help (2205 is stable at 1.06v), lower temps don't matter at this point, it's essentially walling on the maxed out GDDR clock before score regression. The shunt mod isn't worth all that much when the card is kept under 40c at all times I found, the extra power helps stabilize clocks, but doesn't improve performance all that much. I use a locked curve at 1.06v and 2160 for gaming and it is quite happy to just sit at 2160 and never move lol.
> 
> Cooled by Kryographics Next block and active backplate.
> 
> Just happy to see where it reached, helps with my decision to skip current gen and possibly next gen if the market doesn't settle itself down.


Yeah, that's why I'm skipping this generation. The 2080 Ti takes everything I throw at it/them (SLI) without missing a beat. Given the out-of-control pricing, which has only just begun to drop, I'd run an old 8800 GTS before I'd pay that much for a GPU. With China banning crypto mining, and cards and silicon out of Taiwan becoming more available as a result, prices are dropping, and the miners in China are trying to dump their mining cards on Fleabay as "open box"; they omit to tell you it's been running full tilt 24x7 since they got it.


----------



## Laithan

Exactly... unless you find a 30xx series card brand new for a fair price (not happening), the chance of it having been used for mining is higher than ever before. I am going to completely skip this generation as well. When the 2080 Ti was the fastest GPU we were all happy with it, and I'm still quite happy with mine now.


----------



## JustinThyme

That makes two of us. I paid less for 2x Strix 2080 Tis than a 3090 is going for, and that's the only card I'd be interested in, as I'll continue to run SLI.


----------



## Krzych04650

Same here; I got a second 2080 Ti used, stupid cheap, about two weeks after the Ampere launch event. It was a bit of a tough call: going 3090 over 2080 Ti SLI you gain a lot in RT games but lose a lot in SLI games, because the average gain over a 2080 Ti in non-RT games is mediocre at best. There wasn't really a good way of going about it, and since the market was so messed up that 2080 Ti SLI was somehow way cheaper and easier than a 3090, that's what I did.

It is mostly the lack of SLI support on Ampere that held me back, though. I probably would have gotten 3080/Ti SLI by now if it were a possibility, but since it isn't, you gain some, you lose some; it doesn't make much sense.

The 2080 Ti is still doing well even in RT titles with DLSS and isn't behind on any important feature, and the performance of two of them in SLI-supported games is way beyond what Ampere can do. CFR also makes a big difference, since it expands the number of SLI-supported games dramatically, and even at its lowest scaling it is still at least enough to match Ampere. The 3080 Ti is just too little too late at this point.

Next generation I definitely won't skip, because the gains over the 2080 Ti are going to be huge regardless, and it may overlap with an upgrade to 5120x2160 if those 40-inchers get Mini LED and 120 Hz+ next time around (they'd better). Hopefully the rumors about massive MCM GPUs with something like 2.5x the performance of a 3090 are at least half true...


----------



## JustinThyme

Krzych04650 said:


> Same here, got second 2080 Ti used stupid cheap like two weeks after Ampere launch event. It was all a bit of a tough situation because by going 3090 over 2080 Ti SLI you gain a lot in RT games but lose a lot in SLI games because average gain over 2080 Ti in non-RT games is mediocre at best, so there wasn't really a good way of going about this, and since the situation was so messed up that going with 2080 Ti SLI was somehow actually way cheaper and easier than with 3090, this is what I did.
> 
> It is mostly lack of SLI support on Ampere that held me back though, I probably would have gotten 3080/Ti SLI by now if it was a possibility, but since it isn't, you gain some you lose some, doesn't make much of sense.
> 
> 2080 Ti is still doing well even in RT titles with DLSS and isn't behind on important feature set, and performance of two of them in SLI supported games is way beyond what Ampere can do. CFR also makes a big difference since it expands the amount of SLI supported games dramatically and even at it's lowest scaling it is still at least enough to match Ampere. 3080 Ti is just too little too late at this point.
> 
> Next generation I definitely won't skip because gains over 2080 Ti are going to be huge regardless, and this may overlap with the upgrade to 5120x2160 resolution if those 40 inchers get Mini LED and 120Hz+ next time around, so they better be. Hopefully the rumors about massive MCM GPUs coming out this time with like 2.5x the performance of 3090 are at least half true...


The 3090 still has SLI; it's the only one, all the rest don't even have the bridge connector. That's the only reason that, if prices weren't in the clouds, I would consider any 30XX, and it would be a pair of Strix 3090s. They are starting to come down and becoming more available, but some of the scalpers on Fleabay didn't get the memo and have prices over $4,000 each; I saw one over $5K. I'm in no way financially bound; I'm an old fart a half step away from early retirement. It's more a matter of principle that I'm not going to pay for a Bugatti and get an Acura, especially when what I have already does very well and is still "Better than 99% of all results" in 3DMark.

A lot of people are hung up on the "SLI is dead" line from reading crap published by idiots. Yeah, a lot of devs are too lazy to code for it, but there are plenty of tech geeks who know how to force it where it's not coded into the title. And looking past gaming, the processing power for other applications is even more so still very alive. There was a rumor that NVIDIA would no longer support SLI past Jan 2021; well, it's almost August and the latest drivers still support it. If you look, there are plenty of forums with profiles to import with NVIDIA Inspector to support titles that don't support it natively; often there are already profiles from other titles that will work with those not yet coded.

I have enough to keep me busy anyhow. I don't sit gaming on binges for days; it's definitely less than 20% of my use, for when I'm bored out of my mind and the weather isn't friendly enough to jump on my HD and get some wind therapy.


----------



## acoustic

NV didn't say they'd kill SLI support, just that they'd no longer create new profiles. It'll work for a while, but eventually it's going to phase out.

My issue with the NVinspector bits is that it can be hit or miss. Works great for some, doesn't work well for others, and then there's the grey zone where the scaling is great but the frametimes suffer, or there's weird microstutter.

I wish they had just kept mGPU support in the hands of NV/AMD. SLI was such a great tool, and it would be sick to run SLI with the crazy bandwidth that PCIE5.0 will be capable of.


----------



## Krzych04650

JustinThyme said:


> 3090 still has SLI. It’s the only one. All the rest don’t even have the bridge connection and they only reason if the prices were not in the clouds I would consider any 30XX and it would be a pair of Strix 3090s. They are starting to come down and becoming more available. Some of the scalpers on Fleabay didn’t get the Memo though and have prices over $4000 each, saw one over $5K. I’m in no way financially bound, I’m an old fart a half step away from an early retirement. It’s more of a principle that I’m not going to pay for a Bugatti and get an Acura especially when what I have already does very well and is still in the “Better than 99% of all results” in 3D marks. A lot of people are hung up on thr “SLI is dead” from reading crap published by idiots. Yeah a lot of Devs are too lazy to code for it but there’s plenty of tech geeks that know how to force it where it’s not coded into the title. Then looking past the rest of life that doesn’t include gaming the processing power for other applications is even more so still very alive. There was rumor that Nvidia would no longer support SLI past Jan 2021. Well it’s almost August and the latest drivers still support it. There a plenty of forums if you look with profiles to import with Nvidia Inspector to support titles that don’t support it natively. Often there are already profiles from titles that will work with those not already coded. I have enough to keep me busy anyhow. I don’t sit gaming on binges for days though. Definitely less than 20% of my use when I’m bored out of my mind and the weather isn’t friendly enough to jump on my HD and go get some wind therapy.


The story was that they would no longer create new profiles starting from Jan 2021, but would maintain existing ones for compatible devices, so Turing will keep working with SLI; it's not as if they are removing existing functionality. But another part of the announcement was that Ampere does not support implicit SLI, and that includes the 3090, despite the NVLink bridge; it works only in DX12/Vulkan multi-GPU titles. I haven't seen any proof or video of it working anywhere else.



acoustic said:


> I wish they had just kept mGPU support in the hands of NV/AMD. SLI was such a great tool, and it would be sick to run SLI with the crazy bandwidth that PCIE5.0 will be capable of.


Yeah, that's what killed it in the end: low-level APIs. If you ask the devs of SLI-compatible games, they will tell you their games don't have any multi-GPU functionality, because that isn't even a thing on DX11; whatever works now is an individual effort by NVIDIA through their drivers, and devs are not responsible for it and don't guarantee it works properly. It was always sort of a hack, and now that ability has been taken away. Well, CFR shows it is still possible, but NVIDIA is not what it once was and isn't going to make such ambitious moves; they already won this generation thanks to the risk they took with RTX and DLSS, and they don't care.

There is also the problem of pricing. They created a situation where the xx80 card is $700 and everything higher is $1200-1500, so while getting two flagships was reasonably possible for many when they cost $650, there aren't many left who can buy two $1500 cards, and letting people pair two 3080s instead of buying one $1500 3090 is just not good for their business. They want you to buy one card for $1500, not two. If that card at least justified itself with performance it would be fine, but these cards are not only not faster in absolute terms, they are relatively slower than, say, the 1080 Ti was in its time. So this is by no means a replacement for SLI; you just get a normal single card with lower generational gains than before. Looking at 1080 Ti vs 3080 Ti comparisons, only now has 1080 Ti SLI been matched by a single card; only now can you reach performance that was possible 4-5 years ago. That is exactly what SLI was for, and saying that these days you get one big GPU instead of two smaller ones is just so, so wrong. Hopefully MCM changes that.


----------



## kithylin

Krzych04650 said:


> The story was that they will no longer create new profiles starting from Jan 2021, but will maintain ones for existing compatible devices, so Turing will work with SLI forever, it is not like they are removing existing functionality. But another part of the announcement was that Ampere does not support implicit SLI, and that includes 3090, despite NVLink bridge. It works only in DX12/Vulkan multiGPU titles. I haven't seen any proof or video of it working anywhere else.


Here's a 3DMark result from someone running two RTX 3090s in SLI mode in an old test, 3DMark 11. Yes, I know it's kind of pointless to test these cards in an old benchmark, but I'm linking it because these old benchmarks pre-date DirectX 12, mGPU and all of that; the only possible way to get SLI working in them is implicit SLI, AKA the old way it used to work. See here: I scored X88 986 in 3DMark Vantage Extreme

So it appears the RTX 3090s actually do support old implicit SLI in any older title that already has an existing SLI profile.

Also: just because NVIDIA isn't creating new SLI profiles for new games doesn't mean we, the community, can't figure out settings that work and create our own profiles.


----------



## JustinThyme

kithylin said:


> Here's a result from 3dmark of someone running two RTX 3090's in SLI mode in an old test, 3DMark 11. Yes I know that's kind of pointless to test these cards in an old benchmark but I'm linking here because these old benchmarks pre-dated DirectX-12, mGPU and all of that stuff. The only possible way to get SLI working in these old benchmarks is by using Implicit SLI, AKA the old way it used to work. See here: I scored X88 986 in 3DMark Vantage Extreme
> 
> So it appears the RTX 3090's actually do support old implicit SLI in any older existing title that already has an existing SLI profile.
> 
> Also: Just because Nvidia isn't creating new SLI profiles for new games doesn't mean we the community can't figure out some settings that work and create our own profiles for games.


Exactly. I've already got profiles for the 2080 Tis for titles that don't natively support it. The GTA V and Tomb Raider profiles work in some of them; for others, there are people better at it than me who have already made profiles, you just have to search them out.

But for now I'm not bothering. I'll wait until my next build, which will probably be Sapphire Rapids or an AMD counterpart, whichever way the wind blows when PCIe 5 and DDR5 are both prevalent. Alder Lake isn't it.


----------



## TheOpenfield

tcclaviger said:


> Nah, it's has shunt mod on the 8 pin cable shunts, 8 milliohm resistors with 310 watt 124% bios. Sadly nothing I do will push it past 10333, more core speed doesn't help (2205 is stable at 1.06v), lower temps don't matter at this point, it's essentially walling on the maxed out GDDR clock before score regression. The shunt mod isn't worth all that much when the card is kept under 40c at all times I found, the extra power helps stabilize clocks, but doesn't improve performance all that much. I use a locked curve at 1.06v and 2160 for gaming and it is quite happy to just sit at 2160 and never move lol.
> 
> Cooled by Kryographics Next block and active backplate.
> 
> Just happy to see where it reached, helps with my decision to skip current gen and possibly next gen if the market doesn't settle itself down.


Yup, even with my 280W-max non-A card, shunt modding just isn't "worth it" for me. Yes, I might be able to hold 2.1 GHz in every single game, but even in the rare cases where it occasionally drops to around 2 GHz, that's still less than a 5% difference. Nothing one would ever "feel" in-game.

For example, in Hunt Showdown at 4K it will hover around 2 GHz most of the time; for a locked 2.1 GHz even 380W will not always be enough. Coming from good old Maxwell, where constant clock speeds made practical sense and were easily achievable, for Turing/Ampere it really isn't necessary in practice.


----------



## kithylin

TheOpenfield said:


> Jup, even with my 280W max. non-A card - shunt modding just isn't "worth it" for me. Yes, I might be able to hold 2.1 GHz in every single game - but even for the rare cases, where it will occasionally drop to around 2 GHz, it's still less than 5% difference. Nothing one would ever "feel" ingame.
> 
> For example, in Hunt Showdown 4K, it will hover around 2 GHz most of the time. For a locked 2.1 GHz even 380W will not allways be enough. Coming from good old Maxwell, where constant clock speeds made practical sense and are easily archivable, for Turing/Ampere it really isn't necessary in practice.


Well, considering we can't actually buy anything faster than the 2080 Ti right now, some people might be stuck on them for a long time, at least another year or longer. It might be worthwhile for them to get every last % out of their cards that they can.


----------



## TheOpenfield

I also like pushing things to the limit for benchmarking purposes, but for a daily system it just doesn't make practical sense. Especially when you aren't able to get another card, shunt modding is not the best idea (though if successful with 8 mOhm shunts, I wouldn't expect any further problems if the cooling is sufficient). Going from stock (257W, ~1750 MHz boost) to OC (280W, 2-2.1 GHz boost, 8000 mem) yields ~15% and is definitely worth it. The shunt mod can provide another 0-5% (even less if you are able to flash the 310W BIOS on a non-A chip) combined with immense power draw; the only things you will "feel" are more heat and a higher power bill.
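The percentages above can be sanity-checked from the clock ratios alone. A minimal sketch, using the round numbers from the post and assuming linear clock-to-performance scaling (an optimistic upper bound, since memory-limited workloads scale worse):

```python
# Back-of-envelope gains from sustained boost clocks. The clock ratio
# is an upper bound on the real performance gain.

def gain(base_mhz: float, oc_mhz: float) -> float:
    """Percentage gain of oc_mhz over base_mhz."""
    return (oc_mhz / base_mhz - 1) * 100

stock_boost = 1750.0   # typical sustained boost at the 257 W stock limit
oc_boost = 2050.0      # midpoint of the 2.0-2.1 GHz range at 280 W
shunt_boost = 2150.0   # optimistic sustained clock after a shunt mod

print(f"stock -> OC:  +{gain(stock_boost, oc_boost):.1f}%")   # ~17%
print(f"OC -> shunt:  +{gain(oc_boost, shunt_boost):.1f}%")   # ~5%
```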


----------



## tcclaviger

Just dumped a bunch of results on the 3DMark site under the username Claviger...

Took #1 for 2080 Ti / 5950X single card in FS-E and a bunch of top-10s, including the #1 CPU-test spots at 16x/8x/4x/2x/1x, except "max-thread" (TDC-limited in my PBO settings).

Seeing the difference between the 3080 and 2080 Ti... very meh. Even at MSRP I still wouldn't jump. The gain is very reminiscent of 1080 Ti > 2080 Ti, and I only did that because I needed the 1080 Ti for the wife's computer. The difference isn't enough to drive my 3440x1440 locked at 100 Hz in the games that can't currently manage it; it would just push them from 70-ish to 80-ish.

Fully agree, the shunt mod isn't exactly... practical, and I would absolutely not do it on air cooling.

Kind of fun seeing an X470 + PBO/CO 5950X and a shunt-modded non-A card surpass the latest and greatest in a variety of things, especially with an oddball triple-rank 48GB RAM setup lol.


----------



## JustinThyme

kithylin said:


> Well considering we can't actually buy anything faster than the 2080 Ti right now.. some people might be stuck on them for a long time, at least another year or longer. It might be worthwhile to them to get every single last % out of their cards that they can.


That's why I buy top-tier gear in the first place. The Strix O11G OC does 2130 / +1000 on Samsung memory all day long without ever going above 40C. I've run the Matrix BIOS and Matrix XOC BIOS and they make little to no difference; I got it to 2160 but not stable. No throttling or power limits, just the limits of the silicon. I actually scored lower in 3DMark with the XOC BIOS at 2160 than with the stock BIOS at 2130. That 30 MHz isn't going to break any records, and a shunt mod isn't worth voiding the warranty either. The best you can do is get the silicon as cool as possible and push it to the limits of your cooling solution; for me it's not enough to hit a throttle point. Watercooled with an overkill loop and an 8C GPU delta, and I'm happy with the results.

30XX prices are coming down; I saw a 3090 go on Fleabay today for pretty much MSRP, BNIB. Right now Fleabay is getting flooded with them, but buyer beware! "Open box" is a very loose term, as in: opened it several months ago, been mining with it 24x7 until it got banned in China, and now I'm gonna throw it back in the box and say I ran it for a week, or give no further explanation than "open box". Then there are idiots trying to sell the same cards for $5K! LMAO.


----------



## Laithan

The mining issues/prices aren't all bad... after all they allowed me to feel great about paying $900 for my 2080Ti FTW3 Ultra... shame to fame

lol!


----------



## Medizinmann

kithylin said:


> Well considering we can't actually buy anything faster than the 2080 Ti right now.. some people might be stuck on them for a long time, at least another year or longer. It might be worthwhile to them to get every single last % out of their cards that they can.


Well, right now here in Germany you can buy the 3090 and 3080 Ti, though at least 15-20% over MSRP. Availability seems to be pretty good; I can get same-day delivery, e.g.:
Palit RTX 3080 Ti 12GB GameRock 12GB LHR Grafikkarte - 12GB GDDR6X, 3x DisplayPort/HDMI bei notebooksbilliger.de

Best regards,
Medizinmann


----------



## kithylin

Medizinmann said:


> Well - right now here in Germany you can buy 3090 and 3080 TI - but at least 15-20% over MSRP....but availability seems to be pretty good – I can get same day delivery…i.e.:
> Palit RTX 3080 Ti 12GB GameRock 12GB LHR Grafikkarte - 12GB GDDR6X, 3x DisplayPort/HDMI bei notebooksbilliger.de
> 
> Best regards,
> Medizinmann


I was wrong. I wasn't looking in the right place. I see 3080's, 3080 Ti's, and 3090's for sale here too now.


----------



## J7SC

Medizinmann said:


> Well - right now here in Germany you can buy 3090 and 3080 TI - but at least 15-20% over MSRP....but availability seems to be pretty good – I can get same day delivery…i.e.:
> Palit RTX 3080 Ti 12GB GameRock 12GB LHR Grafikkarte - 12GB GDDR6X, 3x DisplayPort/HDMI bei notebooksbilliger.de
> 
> Best regards,
> Medizinmann


...Yeah, it depends where / in which region you look. I check Newegg.com/.ca and Caseking.de at least weekly. In Europe there seem to be more 3090 models available now, though not the custom-PCB ones I would want. In North America 3090s are still much harder to find (none at Newegg.com last night apart from the overpriced 'not sold by Newegg.com' crowd); I haven't really checked for 3080 Tis, though.

3090 prices, IMO, had started to back off a bit (though less so than lower-tier Ampere) but seem to have stabilized more or less, at best at a new, higher MSRP. It wasn't just excess demand for GPU dies; other PCB components were in short supply as well, which is still an issue (see the automotive cutbacks).

In any event, I managed to get a 3090 Strix OC at a lowish MSRP (no extra tariff here in Canada) about 6 months ago, and a few months later a custom 6900 XT at a similarly good deal (that model's MSRP went up by C$250 last week, perhaps related to currency swings). They have joined my 2x 2080 Ti SLI/CFR setup, as I use these systems for productivity/work too, and we switched everything to 4K minimum, so it was time to retire the 'daily' X79 / 2x 980 Classified setup. RIP, good friend!


----------



## jura11

I wouldn't jump ship either, but I got my first RTX 3090 GamingPro for a very good price, well below MSRP, and the other one for £100 over MSRP, although that GPU had been destined for a friend's build, which he pulled out of.

My Asus RTX 2080 Ti Strix was a very good OC'er; it would do 2205-2220 MHz with the Matrix BIOS in benchmarks, and 2160-2175 MHz on a daily basis in nearly every game. There were, I think, only two games that would CTD, and in those two I ran a 2145 MHz OC. The Zotac RTX 2080 Ti AMP, on the other hand, was similar to what I have now with the Palit RTX 3090 GamingPros: not a bad GPU, it just won't OC as much as the other GPUs I've tested in the past.

Plus I sold them for a good price in the end. I had planned to get an RTX 3090 Strix OC but couldn't find it in stock at all 😞

Hope this helps

Thanks, Jura


----------



## TheOpenfield

By the way, these are the peak values for my "280W" non-A card after ~20h of training a deep neural network.

~370W peak power draw logged via GPU-Z/HWiNFO, so not really a "spike" but a sustained (>50 ms) peak load. That's pretty much why XOC/shunt mods are a bad idea for most professional applications (bravo, who would have guessed).
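A minimal sketch of that spike-vs-sustained distinction: given (timestamp, watts) samples from any logger, check whether the draw stayed above a threshold for at least 50 ms. The function name and thresholds are illustrative, not any tool's API:

```python
# Distinguish a transient spike from a sustained peak in a power log.
# samples: iterable of (timestamp_seconds, watts), in time order.

def max_sustained(samples, threshold_w, min_duration_s=0.05):
    """Return True if power stayed >= threshold_w for min_duration_s."""
    start = None
    for t, w in samples:
        if w >= threshold_w:
            if start is None:
                start = t              # run above threshold begins
            if t - start >= min_duration_s:
                return True            # held long enough: sustained peak
        else:
            start = None               # run broken, reset
    return False

# 10 ms sampling: 60 ms above 370 W counts as sustained, a single
# 10 ms blip would not.
log = [(i * 0.01, 370.0 if 5 <= i <= 11 else 250.0) for i in range(20)]
print(max_sustained(log, 370.0))  # True
```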


----------



## Kaltenbrunner

The used prices on these are such a kick in the teeth. And what's the typical warranty these days, 1, 2, 3 years? I wonder how many used cards on eBay are cleaned up, put back in the box, re-vacuum-sealed and sold as new??


----------



## kithylin

Kaltenbrunner said:


> The used prices on these are such a kick in the teeth. And whats the typical warranty these days, 1,2,3 years? I wonder how used cards on ebay, are cleaned up, put back in the box, and then re-vacuum sealed and sold as new ??


They're not. People are not doing that. If it's sealed with the original shrink-wrap then it's still brand new; lots of people still have new-old-stock cards, even GTX 1080 Tis and RTX 2080 Tis, that they're selling. What you have to worry about is sellers that show just the box with no shrink wrap and then list it as "Brand New".


----------



## Laithan

I have had personal experience (albeit not with a GPU) of shady sellers taking the time and effort to re-shrink-wrap items to appear new. It is a far easier form of counterfeiting, with cheap materials and machines available to do it. I am not arguing whether it is or isn't being done with GPUs, only that it is rather easy to do and there is precedent for it on eBay. IMO I would just skip the 30xx series and hope the same thing doesn't happen with the 40xx series.

(NVIDIA/AMD/RESELLERS, take note at how the Steam Deck took measures to avoid scalpers from buying up all stock by using a verified pre-order system)


----------



## kithylin

Laithan said:


> I have had personal experience (albeit not with a GPU) of shady sellers taking the time and effort to RE-shrink wrap items to appear as new. It is a far easier form of counterfeit with cheap materials and machines to do this. I am not arguing that it is being done or not with GPUs, only that it is really rather easy to do and there is prescedence of this being done on eBay. IMO I would just skip the 30xx series and hope the same thing doesn't happen with the 40xx series.
> 
> (NVIDIA/AMD/RESELLERS, take note at how the Steam Deck took measures to avoid scalpers from buying up all stock by using a verified pre-order system)


People generally don't do that on eBay today, in 2021, because sellers know how the eBay system has changed. Today, if someone lists an item as new (or used), a buyer can force a return for any reason: they can initiate a return through eBay, and eBay will automatically judge in favor of the buyer and allow the return no matter what. Condition doesn't matter; nothing matters. Buyers always get refunds for the full amount plus any shipping, for new and used items, and when a buyer does this the seller cannot combat it, stop it, or prevent it from going through. Once the tracking number shows the return delivered, eBay automatically credits the buyer and charges it through to the seller's bank account. Because of how this is set up, most sellers don't do shady stuff like that anymore. That's not to say there aren't some stupid sellers who will try anyway, but they'll learn pretty fast how eBay works if they think they can scam people there; buyers will just force refunds on them and they won't make any money.


----------



## JustinThyme

It's not as simple as that; eBay isn't Walmart. There are a lot of exclusions, and you can't just say you want your money back and have them give it to you. Pretty much, the item has to be as described and delivered in a timely fashion. If it's listed as new and it's clearly not, you can get a refund. If it's listed as open box, which is a pretty broad term, that one is arguable. If it's listed as used, then it just has to work, even if it's been mining 24x7 for the past year. My issue with buying new products from eBay sellers is that even if they are new, they aren't authorized retailers, so registering a warranty may be difficult if not impossible. If they are slick enough and the card looks new and has been resealed as if new, or they put "opened for pictures only" in the description, then there's an open door. The only thing the eBay money-back guarantee helps with is sellers that don't accept returns, and everything has time frames that are not negotiable.







----------



## JustinThyme

I did buy a P4800X enterprise Optane drive recently that has enough write endurance to last a lifetime. Good price from a long-standing seller with 100% feedback and 60-day returns. If I wanted to send it back over buyer's remorse, he could accept the return, offer me a partial refund to keep it, or tell me to pi$$ off; if it's damaged in shipping, not as described, or doesn't work, that's where the eBay money-back guarantee comes in. I got an excellent deal: when I pulled the SMART data on an item listed as open box that looked new, it had only 32GB of writes. The longevity on these enterprise drives is unreal; I couldn't exceed the write limits in my lifetime if I tried: 41 PBW, with 2 million hours MTBF. $725 for a 750GB drive is a steal when these things go for double that for a 375GB new, and more than triple for the same drive new. I was surprised I was the only bidder; I think it's because of his listing header, as I stumbled across it and it wasn't showing up in my search parameters.
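For scale, the 41 PBW endurance figure works out like this (the daily write volume is a hypothetical number for illustration, far above typical desktop use):

```python
# How long 41 PB of write endurance lasts at a given daily write rate.

PETABYTE = 1e15
endurance_bytes = 41 * PETABYTE
daily_writes = 500e9          # assume a very heavy 500 GB/day workload

years = endurance_bytes / daily_writes / 365
print(f"{years:,.0f} years to exhaust 41 PBW at 500 GB/day")  # ~225 years
```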


----------



## Nizzen

JustinThyme said:


> I did buy a p4800X enterprise Optane drive recently that has enough write MTBF to last a lifetime. Good price from a long standing seller with 100% feedback and 60 day returns. If I wanted to send it back over buyers remorse he could accept the return, offer me a partial refund to keep it or tell me to pi$$ off. If it’s damaged in shipping, not as described, or doesn’t work that’s where the EBay money back guarantee comes from. I got an excellent deal and when I pulled the smart data on an item listed as open box that looked new it’s only had 32GB of writes. The longevity on the enterprises drives is unreal. I couldn’t exceed the write limits in my lifetime if I tried. 41PBW or 2 million hours. $725 for a 750GB drive is a steal on these things that are going for double that for a 375GB new and more than triple for the same drive new. I was surprised I was the only bidder. I think it’s because of his listing header. I stumbled across it as it wasn’t showing up in the search parameters.


I thought for a moment you'd bought the new Optane DC P5800X PCIe 4.0 

I'm using Optane for OS and programs on two PCs here. Never going to downgrade from the insane 4K random read @ QD=1 for OS/programs. I'm using the 900p and 905p.


----------



## J7SC

Nizzen said:


> I thought for a moment you bought the new Optane dc5800x pci-e 4.0
> 
> I 'm using Optane for Os and programs on 2x pc here. Never going to downgrade the insame 4k random read @ QD=1 for os/programs. I'm using 900p and 905p


The Optane DC P5800X PCIe 4.0 drives are my dream drives!


----------



## Nizzen

J7SC said:


> Optane dc5800x pci-e 4.0 are my dream-drives !


Ohh yes!









Intel Optane SSD DC P5800X Review: The Fastest SSD Ever Made ("The fastest just got faster") - www.tomshardware.com





The 900p and 905p are still pretty good, with ~320 MB/s random read @ QD=1.


----------



## JustinThyme

The only difference is that the P5800X is a PCIe 4.0 drive, and I don't have 4.0; it would still work, but it would perform about the same as the P4800X. I have a 905P 480GB U.2, a 905P 960GB AIC, two 905P 380GB M.2s, the P4800X 750GB I just bought, and a 900P 280GB AIC sitting in the cabinet, as I'm all out of PCIe slots. Still have 2 of 8 SATA ports open though, LOL; the rest are full of Samsung drives for media storage. I'll be sitting on this rig at least until Sapphire Rapids and see who's doing what then, because when it's time for a new build it's a NEW build. I'll be able to reuse my case, 1600W PSU and watercooling parts, except for the CPU and GPU blocks, but everything else will be replaced, as that's slated to be DDR5 and PCIe 5.0 with up to 120 cores.


----------



## BTCHSLP

Maybe i read over, but got anybody tried this flash ?

EVGA *XC* | 2 Fan | 2 Slot | 268mm | RGB | 16 Power Phases | 1635 MHz Boost | 260/338 W | Reference PCB | EAN 4250812429551 | PN 11G-P4-2382-KR
to
MSI *Sea Hawk X* Hybrid | 1 Fan | 2 Slot | 268mm | LED | 16 Power Phases | 1755 MHz Boost | 300/330 W | Reference PCB | EAN 4719072596910 | PN V371-008R


Is there any reason not to do this flash?
The EVGA card has a full-cover waterblock and backplate.


----------



## kithylin

BTCHSLP said:


> Maybe I read over it, but has anybody tried this flash?
> 
> EVGA *XC* | 2 Fan | 2 Slot | 268mm | RGB | 16 Power Phases | 1635 MHz Boost | 260/338 W | Reference PCB | EAN 4250812429551 | PN 11G-P4-2382-KR
> to
> MSI *Sea Hawk X* Hybrid | 1 Fan | 2 Slot | 268mm | LED | 16 Power Phases | 1755 MHz Boost | 300/330 W | Reference PCB | EAN 4719072596910 | PN V371-008R
> 
> 
> Is there any reason not to do this flash?
> The EVGA card has a full-cover waterblock and backplate.


Other than the fact that an increase of +8 watts isn't even worth it? That won't net you even +1 MHz of overclock.


----------



## JustinThyme

BTCHSLP said:


> Maybe I read over it, but has anybody tried this flash?
> 
> EVGA *XC* | 2 Fan | 2 Slot | 268mm | RGB | 16 Power Phases | 1635 MHz Boost | 260/338 W | Reference PCB | EAN 4250812429551 | PN 11G-P4-2382-KR
> to
> MSI *Sea Hawk X* Hybrid | 1 Fan | 2 Slot | 268mm | LED | 16 Power Phases | 1755 MHz Boost | 300/330 W | Reference PCB | EAN 4719072596910 | PN V371-008R
> 
> 
> Is there any reason not to do this flash?
> The EVGA card has a full-cover waterblock and backplate.


Where are you at on power limit and clocks now? I didn't do that card or VBIOS, but I did flash the Matrix XOC onto a pair of Strix cards. Upping the power limit mostly just got me more heat. I got to 2160, but when the cards hit 45C under water, that's the first throttle point, and it kicked back to 2145. I flashed back to the stock BIOS and get 2130, leaving the voltage at stock and running the slider to 125% on the power limit. It stays a solid 2130 and never reaches 40C or hits the power limit. That 15MHz and a hotter card just isn't worth it to me.


----------



## BTCHSLP

kithylin said:


> Other than the fact that an increase of +8 watts isn't even worth it? That won't net you even +1 MHz of overclock.


The boost clock is different, or did I miss something?
XC: 1635 MHz Boost / 260W
SeaHawk: 1755 MHz Boost / 300W



JustinThyme said:


> Where are you at on power limit and clocks now? I didn't do that card or VBIOS, but I did flash the Matrix XOC onto a pair of Strix cards. Upping the power limit mostly just got me more heat. I got to 2160, but when the cards hit 45C under water, that's the first throttle point, and it kicked back to 2145. I flashed back to the stock BIOS and get 2130, leaving the voltage at stock and running the slider to 125% on the power limit. It stays a solid 2130 and never reaches 40C or hits the power limit. That 15MHz and a hotter card just isn't worth it to me.


I received my card in 2 days, but I'm searching for a fast VBIOS so that I don't need any OC tools.


----------



## kithylin

BTCHSLP said:


> The boost clock is different, or did I miss something?
> XC: 1635 MHz Boost / 260W
> SeaHawk: 1755 MHz Boost / 300W


The boost clock set on cards doesn't mean anything. All Nvidia cards will self-boost over it, and you can manually overclock above the "preset boost clocks". Factory boost clocks don't really matter on Nvidia cards anymore; the power limit is the only thing that matters. And +8 watts won't boost you even +1 MHz higher than the BIOS you already have. If all you want is a higher boost clock, just set your fans to 100% and the card will automatically boost higher without having to flash any BIOS.


----------



## BTCHSLP

Ah, understood.
Cooler card -> higher boost. Thank god I'm cooling it with water.


----------



## kithylin

BTCHSLP said:


> Ah, understood.
> Cooler card -> higher boost. Thank god I'm cooling it with water.


If you're water cooled, then you're likely already far above whatever boost clocks either of those two BIOSes have programmed in, simply because your card runs much colder than an air-cooled one. So the other BIOS wouldn't bring you anything you don't already have.


----------



## JustinThyme

BTCHSLP said:


> The boost clock is different, or did I miss something?
> XC: 1635 MHz Boost / 260W
> SeaHawk: 1755 MHz Boost / 300W
> 
> 
> 
> I received my card in 2 days, but I'm searching for a fast VBIOS so that I don't need any OC tools.


I run ASUS; there is no OC without software, unfortunately. Yes, the boost is different, but my stock boost is 1650, and with the motion of a slider it's 1810, which yields me 2130 with the stock BIOS. The XOC BIOS, which has no power limit, still needs the software.
GPU-Z shows both the stock and the OC clocks; when I'm doing much of nothing they hover around 400.
What I was getting at is that with the XOC BIOS it takes more juice, which adds more heat, and the first throttle point that will drop you 15MHz is 45C, then 60C for another 15MHz, and so on. With XOC I gained 15MHz and was always riding the edge of thermal throttling, nowhere near a non-existent power limit, even with very good Heatkiller GPU blocks. Raising the power limit does nothing for thermal throttling. If you are using a chiller then maybe it will get you somewhere, but otherwise it's minimal gains at best. I still have to use the software, XOC BIOS or not. Give it a try is all I can say. I'm only sharing my results in that it did nothing for me on the Strix 2080Ti O11G OC, which is the same card as the Matrix with a different BIOS, and supposedly the Matrix chips are binned tighter. Other than that, the PCB and components are identical in every way. This is pretty much what a stock Matrix will do.


----------



## widezu69

Hi folks. Noob here, and apologies in advance if I sound stupid due to a simple oversight.

I have a Strix OC and have tried the Matrix BIOS and the XOC BIOS, and both times the flash was successful, but I get a black screen when Windows loads the driver. I have video output and can get into my PC's BIOS, and I can also use my computer without any drivers installed, in safe mode, etc. But as soon as I install any display drivers, the screen goes black. There is output, because my TV isn't showing "no source attached"; it's just all black.

Any thoughts and suggestions would be very welcome. Thanks.


----------



## Laithan

Sometimes a BIOS not designed for that specific card can result in certain display outputs not working. It sounds like you are using HDMI. Can you try DP?


----------



## widezu69

Laithan said:


> Sometimes a BIOS not designed for that specific card can result in certain display outputs not working. It sounds like you are using HDMI. Can you try DP?


Thanks for your reply. Yeah, I'm on HDMI, and I currently don't have anything I can run DP to. I would have thought the Strix XOC 1000W BIOS wouldn't have issues with different outputs, because I'm not flashing to a different vendor.


----------



## jura11

Hi @widezu69 

I have run both the XOC BIOS and the Matrix BIOS on my Asus RTX 2080 Ti Strix without a single issue. Most of the time I have run the DP ports only; HDMI I have used only for a TV, with no issues. Are you running a TV, or is that a monitor on HDMI?

Hope this helps 

Thanks, Jura


----------



## widezu69

jura11 said:


> Hi @widezu69
> 
> I have run both the XOC BIOS and the Matrix BIOS on my Asus RTX 2080 Ti Strix without a single issue. Most of the time I have run the DP ports only; HDMI I have used only for a TV, with no issues. Are you running a TV, or is that a monitor on HDMI?
> 
> Hope this helps
> 
> Thanks, Jura


Hey Jura, yeah, it's an LG OLED TV (GX model) via HDMI. I also did the flashing without uninstalling the drivers first, but I don't know if that is necessary. I'm at a loss here.


----------



## J7SC

widezu69 said:


> Hey Jura, yeah, it's an LG OLED TV (GX model) via HDMI. I also did the flashing without uninstalling the drivers first, but I don't know if that is necessary. I'm at a loss here.


Have you tried flashing your card back to the original Strix OC BIOS, and does it work fine? BTW, for flashing the VBIOS, you do want to disable the card first in Device Manager.


----------



## widezu69

J7SC said:


> Have you tried to flash your card back to the original Strix OC bios - and does it work fine ? BTW, for flashing the vbios, you do want to disable the card first in device manager.


Yeah I've flashed it back both times and things go back to normal. I'll try again tomorrow with the card switched off in device manager.


----------



## J7SC

widezu69 said:


> Yeah I've flashed it back both times and things go back to normal. I'll try again tomorrow with the card switched off in device manager.


Good luck trying to get it going tomorrow with the custom vbios. Since you can successfully flash back, it is less of a 'holding your breath' kind of thing.

I also wanted to mention a bit of a strange if positive observation re. HDMI cables... I just picked up an LG C1 OLED (48") and connected it to work-play setups with a 3090 and a 6900 XT (4K 120 Hz HDR and all that jazz). However, when I plugged another work system with 2x 2080 Ti and the same HDMI 2.1 cable into the OLED, the display options in Win 10 actually opened up 4K / 100 Hz for the 2080 Tis... a pleasant surprise that was.


----------



## jura11

widezu69 said:


> Hey Jura, yeah it's an LG OLED TV (GX model) via HDMI. I also did the flashing without uninstalling the drivers first but I don't know if that is necessary? I'm at a loss here.


Hi there 

Not sure if a better HDMI cable would help; I would probably check the AVS forum, those guys are more knowledgeable than I am on TVs and HDMI cables, or ask the guys over here on this forum.

If it does that with the stock BIOS, then I suspect the HDMI port/cable is playing up, or possibly the motherboard BIOS or some setting on the TV.

Hope this helps and good luck 

Thanks, Jura


----------



## widezu69

J7SC said:


> Good luck trying to get it going tomorrow with the custom vbios. Since you can successfully flash back, it is less of a 'holding your breath' kind of thing.
> 
> I also wanted to mention a bit of a strange if positive observation re. HDMI cables... I just picked up an LG C1 OLED (48") and connected it to work-play setups with a 3090 and a 6900 XT (4K 120 Hz HDR and all that jazz). However, when I plugged another work system with 2x 2080 Ti and the same HDMI 2.1 cable into the OLED, the display options in Win 10 actually opened up 4K / 100 Hz for the 2080 Tis... a pleasant surprise that was.


Thanks.

Yes, the thing with the GX/CX and G1/C1 series TVs is that they support up to 4K 120Hz over HDMI 2.0, but only at 8-bit 4:2:0. If I can remove the power limits on my 2080 Ti and get performance close to a stock 3080, I can probably hold off upgrading to an HDMI 2.1 GPU for a while.
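
The reason 4K120 fits over HDMI 2.0 only at 8-bit 4:2:0 comes down to bandwidth. A rough check, ignoring blanking intervals and link-encoding details (so real-world requirements are somewhat higher than these raw numbers):

```python
# Uncompressed video data rate in Gbit/s, ignoring blanking/overhead.
# 4:2:0 chroma subsampling carries 1.5 samples per pixel; 4:4:4 carries 3.
def video_gbps(width, height, fps, bits_per_sample, samples_per_pixel):
    return width * height * fps * bits_per_sample * samples_per_pixel / 1e9

HDMI20_PAYLOAD = 14.4  # Gbit/s usable on HDMI 2.0 (18 Gbit/s TMDS, 8b/10b)

print(video_gbps(3840, 2160, 120, 8, 1.5))   # ~11.9 -> fits within HDMI 2.0
print(video_gbps(3840, 2160, 120, 10, 3.0))  # ~29.9 -> needs HDMI 2.1
```

So 4K120 at 8-bit 4:2:0 squeezes under the HDMI 2.0 ceiling, while 10-bit 4:4:4 (for proper HDR without chroma loss) is roughly double the available payload, which is exactly the jump HDMI 2.1 was designed for.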


----------



## TheOpenfield

J7SC said:


> However, when I plugged in another work system w/ 2x 2080 Ti and the same HDMI 2.1 cable into the OLED, the display option in Win 10 actually opened up 4K / 100 Hz for the 2080 Tis...pleasant surprise that was


Great! Does Gsync work as well and in what range? 4K 100 Hz (4:4:4?) with Gsync ootb would be amazing (currently waiting for the 42" LG OLED).


----------



## J7SC

TheOpenfield said:


> Great! Does Gsync work as well and in what range? 4K 100 Hz (4:4:4?) with Gsync ootb would be amazing (currently waiting for the 42" LG OLED).


GSync was on and 4K 100 Hz 'ok' but I only ran the 2080 Tis for a short while on that LG OLED and HDMI 2.1 cable before moving them over to another workstation area (Philips 40 inch)


----------



## widezu69

I'm using my 65 GX @ 4K120 4:2:0 with my 2080 Ti and G-sync appears to be working across 40‐120 fps.


----------



## Audioboxer

Got one of these 2080 Tis: EVGA RTX 2080 Ti VBIOS. It's a blower, so the main reason I got it was that I knew I would be watercooling, and last year it was the cheapest 2080 Ti I could find to sneak into the EVGA Step-Up queue (I'm still in it, LOL).

Anyway, my card died about 2 weeks ago, but I've got my RMA back now and am back in action. This replacement seems to be a bit better binned, running 2100 on the core and 8000 on the RAM just now. To get here I just flashed an FTW3 BIOS that has a power cap of 373W and maxed the power slider out. My old card was more comfortable at 2055 max on the core.

Right now my MSI Afterburner curve is at 1.081V at 2100. I guess I can push a bit more, up to 1.093V? At that stage it's really the hard limit, isn't it? Or is there a better BIOS that can be flashed to push the card harder?










I left Heaven running for like 2 hours and these were my temps










Considering the state of the EU step up queue for a 3080, I'll be lucky to get my turn this year, so trying to push my 2080Ti as much as I can.


----------



## Audioboxer

Tried to flash the KFA2.RTX2080Ti.11264.180910.rom BIOS and I can't, due to the XUSB FW version. I can, however, as above, flash the FTW3 BIOS. Is there any other BIOS with 380W instead of 373W? I know it's not much, but I'd like to squeeze everything out of this card.

I guess my question is: are there any other BIOSes with reference PCB support that are higher than 373W?


----------



## JustinThyme

Audioboxer said:


> Got one of these 2080 Tis: EVGA RTX 2080 Ti VBIOS. It's a blower, so the main reason I got it was that I knew I would be watercooling, and last year it was the cheapest 2080 Ti I could find to sneak into the EVGA Step-Up queue (I'm still in it, LOL).
> 
> Anyway, my card died about 2 weeks ago, but I've got my RMA back now and am back in action. This replacement seems to be a bit better binned, running 2100 on the core and 8000 on the RAM just now. To get here I just flashed an FTW3 BIOS that has a power cap of 373W and maxed the power slider out. My old card was more comfortable at 2055 max on the core.
> 
> Right now my MSI Afterburner curve is at 1.081V at 2100. I guess I can push a bit more, up to 1.093V? At that stage it's really the hard limit, isn't it? Or is there a better BIOS that can be flashed to push the card harder?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I left Heaven running for like 2 hours and these were my temps
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Considering the state of the EU step up queue for a 3080, I'll be lucky to get my turn this year, so trying to push my 2080Ti as much as I can.


Max temps look a bit high for water cooling, especially for Heaven, which is a bit light in the loafers. What block? I have two Strix OC cards that do 2160 and never reach 40C max looping Time Spy Extreme.


----------



## widezu69

Have tried to flash my Strix OC again in different configurations: GPU disabled, iGPU, no drivers, etc. Yet every time the driver initialises, the screen goes black and the PC locks up. This is all connected via HDMI. Guess I'll have to try again connected to a regular monitor via DP. 🤷‍♂️


----------



## kithylin

widezu69 said:


> Have tried to flash my Strix OC again in different configurations - GPU disabled, iGPU, no drivers etc. Yet every time I initialise the driver, screen goes black and PC locks up. This is all connected via HDMI. Guess I'll have to try again connected to a regular monitor via DP. 🤷‍♂️


Or you can get a $5 DisplayPort-to-HDMI adapter off Amazon.


----------



## Audioboxer

JustinThyme said:


> Max temps look a bit high for water cooling, especially for Heaven, which is a bit light in the loafers. What block? I have two Strix OC cards that do 2160 and never reach 40C max looping Time Spy Extreme.


Corsair 2080 Ti block. I've got a 5950X in my loop which gets quite toasty, so I think my heat output is pretty average. 40~45C is common for most games; Heaven just seems to make my CPU scream as well, so a lot of heat gets dumped into the loop, lol.


----------



## kithylin

Audioboxer said:


> Corsair 2080Ti block. I've got a 5950x in my loop which gets quite toasty, so I think my heat output is pretty average. 40~45 is common for most games, Heaven just seems to make my CPU scream as well so a lot of heat dumped into the loop lol.


Instead of your blocks.. I think we should be asking what your radiator setup is like?


----------



## Audioboxer

kithylin said:


> Instead of your blocks.. I think we should be asking what your radiator setup is like?


2x 360 and 1x 120, but they're slim Corsair XR5s (so rebranded HWLabs). I can swap one for an XR7 at some point, but I'm honestly not worried about my temps. The 5950X is 60~65C gaming and the 2080 Ti normally 40~45C. Idle is like 25~30C on the 2080 Ti and 35~40C on the 5950X.
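
As a rough sanity check on that radiator setup: the community rule of thumb of ~100 W dissipated per 120 mm of radiator at moderate fan speeds (a heuristic, not a datasheet value) suggests plenty of headroom. The load figures below are assumptions for an unlocked 2080 Ti plus a 5950X at full tilt:

```python
# Rough radiator-capacity heuristic: ~100 W of heat dissipated per
# 120 mm radiator section at moderate fan speeds and ~10C water/air delta.
# The per-section figure is a community rule of thumb, not a measured spec.
def radiator_capacity_w(sections_120mm, watts_per_section=100):
    return sections_120mm * watts_per_section

# 2x 360 + 1x 120 = 7 sections; assumed load: ~300 W GPU + ~200 W CPU
capacity = radiator_capacity_w(7)
load = 300 + 200
print(capacity, load, capacity >= load)  # 700 500 True
```

With fans capped around 1000 RPM the effective per-section figure drops, which is consistent with the slightly elevated water temps reported under combined load.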


----------



## kithylin

Audioboxer said:


> 2x360 and 1x120. But they're slim Corsair XR5s (so rebranded HWLabs). Can swap one for an XR7 at some point, but I'm honestly not worried about my temps. 5950x is 60~65 gaming and 2080Ti normally 40~45. Idle is like 25~30 on 2080Ti and 35~40 on 5950x.


You're going to be temperature-limited when overclocking your video card then. 45C is quite hot for an overclocked and water-cooled 2080 Ti, and running hot makes the card run at a lower clock frequency, given how Nvidia scales their GPUs with temps.


----------



## Audioboxer

kithylin said:


> You're going to be temperature-limited when overclocking your video card then. 45C is quite hot for an overclocked and water-cooled 2080 Ti, and running hot makes the card run at a lower clock frequency, given how Nvidia scales their GPUs with temps.


Without changing the BIOS away from the FTW3 one, at 124% power limit I've just settled at 2100 with 1.050V. Memory is sitting at +1000. I occasionally see 2115, but yeah, as soon as I'm into the low 40s it sort of just locks in at 2100.

Better than my last card anyway; it struggled to be stable at 2080MHz on the core.


----------



## JustinThyme

45C is the first throttle point; you want to keep them under it. If your loop is getting that warm, as stated, you might want to look into pulling that liquid temp down. I run HeatKiller blocks and never see more than 40C, usually around 38C, but my loop temp with 2x 2080 Tis and a 10980XE with a healthy OC never sees higher than 32C, usually 30C. My delta on the GPU blocks is 8C, and I only get up to 40C when my office is a little warmer than average. I have my fans controlled by Aquacomputer devices, and my max liquid temp is set to 32C, so when I get near that, my fans ramp up close to full bore; if I hit 32C they are at full bore, with a lot of radiator cross-section.


----------



## TheOpenfield

Depending on the water temperature, 45C GPU is alright. A delta of up to 15K water/GPU is pretty normal (with paste). I wouldn't worry about missing out on 15 MHz on the core; setting the fan speed to your acoustic liking is probably more important.


----------



## JustinThyme

TheOpenfield said:


> Depending on the water temperature, 45C GPU is alright. A delta of up to 15K water/GPU is pretty normal (with paste). I wouldn't worry about missing out on 15 MHz on the core, setting up the fan speed to your acoustic liking is probably more important


A delta of 15C+ is normal for low-budget blocks; 10C or less is preferable and expected from top-tier coolers. With a proper loop and premium components you can have your cake and eat it too. This, however, takes not only a top-tier block but adequate radiator surface area. If you choose a budget block with less rad surface area and set the fans to a lower acoustic signature, you are going to miss out on a lot more than 15MHz. Throttle points are at every 15C starting at 45C. You hit 60C (which is very common for air-cooled) and there goes another 15MHz, and if you get on the warm side and hit 75C (again, not uncommon under heavy load on air) you are now throttled down 45MHz. The same goes if you don't have the rad surface area and run the fans at low speed; the liquid temp keeps climbing over time. If you are good with this, then don't waste your money on a higher-end card. Get a cheaper card that will deliver the same as the throttled performance with less heat; a 1660 should do it. Then you can run nearly silent, but your performance may not be to your liking.


----------



## J7SC

TheOpenfield said:


> Depending on the water temperature, 45C GPU is alright. A delta of up to 15K water/GPU is pretty normal (with paste). I wouldn't worry about missing out on 15 MHz on the core, *setting up the fan speed to your acoustic liking is probably more important*


*...yes, it is*  ...an older system (TR 2950X, 2x 2080 Ti Aorus Xtreme WaterForce WB) that I'm just about to rebuild cooling-wise has always had superb temps, even with the factory OEM blocks. Those blocks have never been off and still perform as well as on day one / Dec 2018... but that setup also has 5x 360/60 rads total in a dual loop, 4x D5s, and, among other performance fans, 9x GentleTyphoon 3k rpm fans (Molex only).

At the time, it did what I wanted it to > keep the GPUs below 38C, which is actually the first noticeable temp bin, at 23C ambient, even at their combined 760W max. This thing was tuned for performance and stayed in the 3DMark Port Royal HoF for 1.5 yrs, as high as 15th (spoiler), on the stock BIOS and blocks. HOWEVER, while the GT 3k rpm fans are actually quieter and have a more pleasant sound than, for example, the Corsair MLs at 2.4k rpm also in that system, the thing just makes too much of a racket sound-wise, and also air-movement-wise. This is now an issue, as that system is moving to our master bedroom.

It is being updated / reduced to a dual Revo XTop D5 pump, 3x 360/60, and push-pull Arctic P12s now. I tried a few Arctic P12 PST PWM fans out on another system (3090 etc.) back in January and really like them; you only hear a gentle whoosh of moving air while they deliver top-notch cooling. Not quite the 'sound of silence', but close!



Spoiler


----------



## TheOpenfield

In my case, it's pretty simple actually: either ~3K water/ambient at max fans for benchmarks, or ~7K at ~300 rpm and nearly silent. An easy choice for me. The resulting GPU temperature, with 12K water/GPU, will exceed or stay close to 45C during summer, but I really couldn't care less about an additional 15 MHz. BTW, there is another clock step below 40 degrees, at least with my GPU (and @J7SC's as well, it seems).


----------



## JustinThyme

There are no other clock steps; that's simply loss of overclock due to increased temps. The first hardware-forced throttle point is 45C. My rig is quiet and I still don't hit 40C. That simply translates into rad surface area. If you are happy with what you have, that's great. I'm happy with the best mine can do, and a throttle due to poor design on my part doesn't give me a warm fuzzy. Back to budget vs optimal.

Yeah, my setup blows.








I scored 20 184 in Port Royal
Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 2080 Ti x 2, 65536 MB, 64-bit Windows 10
www.3dmark.com

I scored 26 605 in Time Spy
Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 2080 Ti x 2, 65536 MB, 64-bit Windows 10
www.3dmark.com


----------



## kithylin

JustinThyme said:


> There are no other clock steps, that’s simply loss of overclock due to increased temps. The first hardware forced throttle point is 45C. My rig is quiet and I still don’t hit 40C. That simply translates into rad surface area. If you are happy with what you have that’s great. I’m happy with the best mine can do and a throttle due to a poor design on my part doesn’t give me a warm fuzzy. Back to budget vs optimal.


The first forced throttle point with the GTX 1080 Ti was around 25-28 degrees C on the GPU core. Did they change this to 45C with the RTX 2080 Ti?


----------



## TheOpenfield

@JustinThyme Why is your Vmem clock so low?


JustinThyme said:


> optimal...


...is a very subjective thing. My system has to be silent above all.


----------



## JustinThyme

TheOpenfield said:


> @JustinThyme Why is your Vmem clock so low?
> 
> 
> ...is a very subjective thing. My system has to be silent above all.


Sounds like you need a chiller outside. I'm not going to say mine is silent, but the more rad space you have, the lower the speeds you can run the fans at. What I will say is I can't hear a thing over the speakers or the cans on my head.


----------



## JustinThyme

kithylin said:


> The first forced throttle point with the GTX 1080 Ti was around 25-28 degrees C on the GPU core. Did they change this to 45C with the RTX 2080 Ti?


My 1080 Tis never throttled either; they also stayed below 40C. It may depend on which card you have and the silicon lottery.


----------



## kithylin

JustinThyme said:


> My 1080TIs never throttled either. They also stayed below 40C. May be dependent on what card you have and the silicon lottery.


Then you never got the cards cold enough to see it. The people on liquid nitrogen and extreme chiller setups confirmed that the 1080 Ti's clocks still stepped with temperature below 30C. Just because your cards aren't getting cold enough to see it doesn't mean it doesn't happen. Has anyone here tried running a 2080 Ti where they can keep it below 30C, and can let us know whether it steps down there too as the first bin?


----------



## J7SC

TheOpenfield said:


> In my case, it's pretty simple actually: Either ~3K water/ambient at max. fans for benchmarks, or ~7K with ~300 rpm and nearly silent. An easy choice for me. The resulting GPU temperature with 12K water/GPU will exceed or stay close to 45C during summer, but I really couldn't care less about an additional 15 MHz. BTW, there is another clock step below 40 degrees, at least with my GPU (and @J7SC s as well as it seems).


...yeah, there's definitely one below 40C (probably more temp bin steps than that)... on my 2950X / 2x 2080 Ti it is 38C, at least according to the monitoring software, in normal use cases. _The reason 38C was important is that it's the one you try to stay under with a typical ambient of 20-25C._

In the wintertime, with the heat off and the system by the window at ~12C 'inside' my home office (brrrr), there were speed-step temp bins below 38C as well, i.e. into the low 20s... I haven't run Port Royal for a while on the 2x 2080 Ti (there are Ampere & Big Navi work machines to bench), though there has been some subsequent driver improvement for RTX 2000 as well, afaik. Still, the 2080 Tis have still 'got it'... very nice cards, really, even now, especially when water-cooled and set to quiet...

This was my last PR run from December 2020


----------



## JustinThyme

Nope, never run LN2.
Unfortunately there is no table listed by Nvidia. Only max themral profiles. I’m not going to reinstall 1080Tis to find out, one of them is sold but I will find out this winter when the outside temp drops. Thing is how do you tell the difference between better clocks due to sub ambient temps and a set throttle point? I know there are no throttle points on CPUs until you hit TDP but I also know that when I ran those sub ambient I was able to get much better clocks. It is after all why people even go there. Otherwise just keep them below the TDP and you are good right? Well I can’t speak for anyone else but that’s never been my experience. Just like why my 10980XE laughed at an H150 AIO and I guess it had a throttle point because it wouldn’t go past 4.4GHz on the bench but the throttle point was removed when I pulled the AIO and out a 480XE with push pull fans and a 140Xres on the bench top and with that throttle point removed it clocks to 5.1GHz all cores. This winter I’ll see what other throttle points can be removed with sub ambient coolant.

I’m afraid common sense just tells me that lower temps get you better clocks, it’s the physics of silicon.

I did note the 45C, 60C, and 75C drops of 15MHz on the 2080 Tis while I was waiting for water blocks to be made and had them on air. It was instantaneous: cross the threshold and the clocks dropped.
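
The stepped behavior described above (a fixed clock drop each time a temperature threshold is crossed) can be sketched as a simple lookup. The thresholds and the 15 MHz step are the figures reported in this thread; the actual GPU Boost tables are finer-grained and not publicly documented, so treat this purely as an illustration:

```python
# Illustrative model of the GPU Boost temperature bins reported in this
# thread: -15 MHz at each of 45C, 60C, and 75C. The real boost algorithm
# uses many more bins and also factors in power and voltage limits.
THRESHOLDS_C = (45, 60, 75)
STEP_MHZ = 15

def clock_offset_mhz(gpu_temp_c: float) -> int:
    """Clock penalty relative to the sub-45C boost clock."""
    bins_crossed = sum(gpu_temp_c >= t for t in THRESHOLDS_C)
    return -STEP_MHZ * bins_crossed

print(clock_offset_mhz(40))  # 0    (under the first throttle point)
print(clock_offset_mhz(46))  # -15  (crossed 45C)
print(clock_offset_mhz(76))  # -45  (crossed 45C, 60C, and 75C)
```

This matches the observation that going from air (60-75C) to water (under 45C) recovers 30-45 MHz before any manual overclocking at all.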


----------



## kithylin

JustinThyme said:


> I did note the 45C, 60C and 75c drops by 15MHz on the 2080Tis while I was waiting for water blocks to be made and had them on air. It was instantaneous. Cross the threshold and the clocks dropped.


I want to preface my comment below with a statement: what I'm about to write is not my personal opinion or "fanboyism" either way. I'm not going to write that either card is better than the other for any reason; I'm just going to state facts.

Disclaimer aside: this seems to be something in how Nvidia designed their video cards; lower temps = higher clocks. Whereas AMD Radeon cards seem not to care, and tend to run high clocks regardless of temps (high temps around 70C or low temps around 40C = same clocks either way), at least in my personal experience. But my experience is limited to playing with 1080 Ti, RTX 2080 Ti, and RX 6700 XT cards.


----------



## JustinThyme

I can't speak for AMD, as they haven't had a competing card in a very long time. The last AMD card I ran was passively cooled, so nice and quiet, and was AGP; actually, I think it was still ATI, before AMD bought them out. In my experience, everything that has current running through it does better at lower temps, with the exception of batteries. They have a fairly narrow window, and 25C is the magic number: go up 10C and the lifespan is cut in half; go down 10C and you only get half the run time.


----------



## jura11

Hi @kithylin and @JustinThyme 

The first drop in frequency is around 33-36°C on RTX GPUs; on the GTX 1080 it was around 45°C, and on the GTX 1080 Ti I think it was around 37-38°C, like on RTX.

On my RTX 2080 Ti Strix I have seen the first drop in frequency around 35-36°C, and then it would stay at those clocks until the low 40s (42°C maybe).

On the current RTX 3090, you will see the first 15MHz drop in frequency around 33-36°C, then the GPU will drop another 15MHz bin around 36-38°C. After that I'm not sure, because the highest temperature I have seen on my GPUs is 42°C, and that's in baking hot weather, hahaha. If temperatures drop below the mid-30s, your frequency will rise another 15MHz.

I have previously done lots of tests on temperatures and frequency drops, and somewhere here is my post about it.

The lowest temperature I have seen on my RTX 3090s and 2080 Tis was 29°C, with outside temperatures below freezing, and I think that's when I scored best on the GPUs. During the winter, hopefully, I can do a few tests.

Hope this helps 

Thanks, Jura


----------



## JustinThyme

You guys are getting whacked drops. I run mine up to full load at the clocks I have set, and it's 2130 from idle, starting at 22C, up to 38C, and 40C when it's hot.
It was a little cool in the room tonight, but I started with Valley, then went to Heaven. Starting temp was 24C on the cards; 38C was the peak. You can see a nice flat line of 2130 on both cards.


----------



## JustinThyme

J7SC said:


> ...yeah, there's one definitely below 40 C (probably more temp bin steps than that)...on my 2950X/2x2080Ti, it is 38 C, at least according to the monitoring software, in normal use-cases. _The reason why 38 C was important is because that's the one you try to stay under with the typical ambient at 20 C - 25 C_.
> 
> In the wintertime, with the heat off and the system by the window at ~ 12 C 'inside' my home office (brrrr), there were speed step temp bins below 38 C as well, ie. low 20s....haven't run PortRoyal for a while on the 2x2080 Ti (there are Ampere & BigNavi work machines to bench) though there has been some subsequent driver improvement afaik for RTX2K as well. Still, 2080 Tis still 'got it'...very nice cards, really, even now especially when w-cooled and set to quiet...
> 
> This was my last PR run from December 2020
> 
> View attachment 2522068


My last PR run was in May; you topped me by a little, I was running 2130. Can't see all of yours.









I scored 20 184 in Port Royal
Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 2080 Ti x 2, 65536 MB, 64-bit Windows 10
www.3dmark.com


----------



## dante`afk

wrong thread sry


----------



## Audioboxer

JustinThyme said:


> 45C is the first throttle point; you want to keep them under it. If your loop is getting that warm, as stated, you might want to look into pulling that liquid temp down. I run HeatKiller blocks and never see more than 40C, usually around 38C, but my loop temp with 2x 2080 Tis and a 10980XE with a healthy OC never sees higher than 32C, usually 30C. My delta on the GPU blocks is 8C, and I only get up to 40C when my office is a little warmer than average. I have my fans controlled by Aquacomputer devices, and my max liquid temp is set to 32C, so when I get near that, my fans ramp up close to full bore; if I hit 32C they are at full bore, with a lot of radiator cross-section.


Sorry for the delay getting back to you, but yeah, I have noticed the throttle point around 43~45.

Under heavy load my water temps tend to get to 33~36, sometimes 37~38 because of summer, so I guess around 43 on the 2080Ti isn't too bad. I do slightly prefer silence over performance, so I have to admit I'm far from ever running fans at full speed lol. Probably top out at 1000RPM. I'm running Arctic P12 fans. Cheaped out a bit on the fans, but to be fair they're decent performers for the cost and pretty quiet at 1000. A bit noisier if they're allowed to go to 1500~1600.

I've got the core pretty much running at 2100 in all games and with memory at 8000 I'm quite happy. In the winter I'm sure temps will drop quite a bit, the UK gets very cold lol.

My 2080Ti is in the step up queue, so 3080 at some point. Still love the 2080Ti, beast of a card.


----------



## JustinThyme

Audioboxer said:


> Sorry for the late getting back to you but yeah I have noticed the throttle point around 43~45.
> 
> Under heavy load my water temps tend to get to 33~36, sometimes 37~38 because of summer, so I guess around 43 on the 2080Ti isn't too bad. I do slightly prefer silence over performance so I have to admit I'm far from ever running fans at full speed lol. Probably top out at 1000RPM. I'm running Arctic P12 fans. Cheaped out a bit on the fans but to be fair they're decent performers for the cost and pretty quiet at 1000. Bit noisier if they get let to go to 1500~1600.
> 
> I've got the core pretty much running at 2100 in all games and with memory at 8000 I'm quite happy. In the winter I'm sure temps will drop quite a bit, the UK gets very cold lol.
> 
> My 2080Ti is in the step up queue, so 3080 at some point. Still love the 2080Ti, beast of a card.


My fans don’t normally make it to 1000 RPM although they are 2000 RPM fans. Just a lot of rad surface area. I don’t see myself upgrading to the 30XX. Prices are too high, and the only thing that beats my pair of 2080Tis is a pair of 3090s, and $5K+ just isn’t worth it to me.


----------



## Audioboxer

JustinThyme said:


> My fans don’t normally make it to 1000rpms although they are 2000rpm fans. Just a lot of rad surface area. I don’t see myself updating to the 30XX. Prices too high and the only thing that beats my pair of 2080Tis is a pair of 3090s and $5K+ just isn’t worth it to me.


Yeah yourself and others have me thinking longer term about getting a chunky boy in there (thicker rad) and someone else even advised 3x360 is fine for a D5 pump. I originally went 2x360 then added a 120 in for fun at the rear of a Lian Li XL. I could go with the 3x360 setup though using the side.

I'm happy enough just now but whenever it's my turn for step up and I have to drain the loop anyway I'm going to consider an XR7 (or equivalent) and maybe consider 3x360.

I was lucky with my 2080Ti given I managed to buy it before the 3xxx launched, before the really crazy price madness, and in enough time for my step up window not to expire before EVGA added the 3080 to step up. However, as you can see, we're a year on and I'm still in the queue. EU shortages of these cards have been worse than in the US.

When it was in for an RMA I had heard EVGA sometimes replaces 2080Tis with a 3070. So glad that didn't happen. The 2080Ti can trade some blows with a 3080; the 3070 is not an equivalent card if you ask me.


----------



## TheOpenfield

TheOpenfield said:


> Well done for a Non-A. No physical modifications, just the 310W BIOS I suppose?
> 
> I got just under 10k with my 280W max. card in one of the first runs, could probably just barely hit 10k with some tweaking. But more than that is just not possible at 280W.
> View attachment 2519038


Small update: passed 10k with 280W


----------



## widezu69

Sorry to butt in. So now, I've tried to boot the XOC bios on my Strix OC while only connected to a monitor via DisplayPort to try to eliminate any variables. Still the same thing. Screen goes black (but doesn't say disconnected) and the computer becomes unresponsive. GPUz is showing I have the correct bios so I'm assuming the flashing was successful.

Is there a specific version of nvflash I should be using? Or is there anything else that I have to do that I might have missed? I'm using DCH drivers if that makes any difference?

Edit: Changing from DCH to standard didn't make a difference.


----------



## JustinThyme

There are a lot of XOC BIOS files. I tried the one from the ASUS Matrix. Yeah when you flash it goes black. If GPUZ reports it then that’s what you have.


----------



## Laithan

Just to be sure, can you step through the details you see?

Assume it just happens once, since you are able to run GPU-Z. After you reboot, everything is fine? At what point exactly do you see it?

FYI - You run NVFLASH from an *admin* command prompt? Modern versions of NVFLASH should automatically disable the video driver temporarily before flashing, right after running NVFLASH (this used to be a manual step long ago but is no longer needed). A black screen is expected for a few seconds while it does this. You will need to confirm the flash with a "Y" manually (technically it could be scripted, but let's ignore that). Once flashing is complete it _should_ re-enable the GPU in device manager (and you will see a black screen again for a few seconds, all normal). It is possible that the flashing completes but NVFLASH doesn't re-enable/recover your Windows session properly.

Agree with JustinThyme that you may want to download a known working XOC BIOS for that model if you are not already using that same one.


----------



## widezu69

Hi. Thanks for your reply. Here's what I'm doing:

I've downloaded the XOC bios from the first page of this thread. I'm using the latest version of NVFlash.
I perform all the commands required to flash the new bios. Screen turns off briefly, back on, nvflash does its thing, then flashes to black again then back on.
Flashing is successful, as I can verify in GPUz that the new bios is there. Everything is fine up to now.

Then, when I reboot the computer, I get past the post screen, then see the sign-in screen very briefly as Windows loads the display driver, and then the screen turns black and the computer becomes unresponsive.

I've also tried this without any display drivers installed. Flashing goes fine, then as soon as I attempt to install drivers, computer locks up and screen goes black (but not disconnected).

I've also tried with a Matrix vbios, same thing.

Thanks in advance for any pointers.


----------



## Laithan

That does help clarify the issue, thanks. 

Do you have any overclock apps running @ startup and applying a previously saved O/C profile?
Any other apps that communicate with the GPU such as RGB that you can disable for now?
Can you get into safe mode and run DDU and reinstall the display drivers?


----------



## widezu69

Yeah, I'm able to DDU the drivers and reinstall them whenever. I've tried all different kinds of combinations: flashing with/without drivers, installing afterwards, installing before, etc. I do have GPU Tweak II installed with saved profiles, and Armoury Crate which controls Aura. I'll get rid of those one at a time and give it a try.


----------



## J7SC

widezu69 said:


> Yeah I'm able to DDU the drivers and reinstall them whenever. I've tried all different kinds combinations, flashing with/without drivers, installing afterwards, installing before etc etc. I do have GPU Tweak II installed with saves profiles and Armory Crate which controls Aura. I'll get rid of those one at a time and give it a try.


...can't recall if you already went there...nor am I recommending it, but I'm wondering whether you will get to the point where a completely new, virgin Win 10 install is the only option left (ie. on a new, separate SSD)


----------



## widezu69

J7SC said:


> ...can't recall if you already went there...nor am I recommending that, but I'm wondering when you will get to a completely new virgin Win 10 install as the only option left (ie. on a new, separate SSD)


I'm planning on rebuilding next month so if it comes to that, that will be the time. Hoping it won't get to that.


----------



## J7SC

widezu69 said:


> I'm planning on rebuilding next month so if it comes to that, that will be the time. Hoping it won't get to that.


...years back, I had a similar problem after loading a custom XOC bios (from Asus) onto EVGA Classified cards...the black screen in Windows back then was due to a higher *min*-voltage requirement of the XOC...as I had an EVBot for the EVGA, I could get around it. But I have no idea if that is what you're facing now, wild goose chase et al...also, I temporarily fixed it beyond EVBot by auto-loading an MSI AB profile with an extra 0.05V or so for the GPU


----------



## 98uk

Hi chaps,

I'm tempted to trade my 2070 super for his EVGA 2080ti Black soon.

However, I only have a Seasonic G series 650w.

My system is:

CPU: Ryzen 5800x
Mem: Gskill 3600mhz CL15
Resolution: 3440x1440 @ 144hz

The CPU isn't overvolted, only the curve tweaked and PBO enabled.

Would I need a new PSU to run a 2080ti?


----------



## Krzych04650

98uk said:


> Hi chaps,
> 
> I'm tempted to trade my 2070 super for his EVGA 2080ti Black soon.
> 
> However, I only have a Seasonic G series 650w.
> 
> My system is:
> 
> CPU: Ryzen 5800x
> Mem: Gskill 3600mhz CL15
> Resolution: 3440x1440 @ 144hz
> 
> The CPU isn't overvolted, only the curve tweeked and PBO enabled.
> 
> Would I need a new PSU to run a 2080ti?


I don't think so. That CPU doesn't draw much power, and that 2080 Ti Black, assuming it is not the XC, is a Non-A chip with a 250W stock power limit and 280W max, so you won't make it draw too much power even if you overclock. It would be a close call if you were running something like an overclocked HEDT system with a 380W 2080Ti; that would probably trip a 650W PSU in games that load the CPU and GPU heavily at the same time. But you are going to draw way less than that, probably 200W less.


----------



## JustinThyme

Depends on your ancillaries. Is the machine water cooled? How many fans, etc.? I would think you would be OK, but I don’t like running a PSU near its limits. 80% tops. If you intend on upgrading to more power, may as well go ahead and get the PSU out of the way.

Don’t know if that PSU has power monitoring or not. My Corsair AX1600i does. Yeah, that’s a lot of PSU, but I’m running a 10980XE overclocked to 4.8GHz on all 18 cores with 2x Strix 2080Ti OC cards that are also OCd, plus a custom loop with 3 D5 pumps (about to be 5 of them) pulling roughly 20 watts each. When I load up everything to the hilt I’m pulling around 1400W. Just running something like a Time Spy bench I’ll peak out at 1100W easy. I can monitor it with a Corsair Nexus mounted on my keyboard: power in, power out and efficiency of the PSU, along with several other things. Nice gadget to have.


----------



## 98uk

JustinThyme said:


> Depends on your ancillaries. Is the machine water cooled? How many fans etc. I would think you would be OK but I don’t like running a PSU to its limits. 80% tops. If you intend on upgrading to more power may as well go ahead and get the PSU out of the way. Don’t know if that PSU has power monitoring or not. My Corsair AX1600I does. Yeah that’s a lot of PSU but I’m running a 10980XE overclocked to 4.8GHz all 18 cores with 2x Strix 2080Ti OC cards that are also OCd and a custom loop with 3 D5 pumps about to be 5 of them pulling roughly 20 watts each. When I load up everything to the hilt I’m pulling around 1400W. Just running something like time spy bench I’ll peak out at 1100W easy. I can monitor it with a Corsair Nexus mounted to my keyboard. I’m monitoring power in, power out and efficiency of the PSU along with several other things. Nice gadget to have.


It's a pretty simple machine. It has 3 X 120mm, 2 X 140mm case fans and a Be Quiet! air cooler. Case is a Fractal Meshify C.

Not much else to it really bar a bit of LED from the soundcard.

I would get the PSU sorted, but at a later date. I can't afford it all right now, hence I wanted to know if it would be okay in the interim.

Sounds like it should be?


----------



## TheOpenfield

You will be just fine with that PSU. Probably limited to 280W GPU anyway (as mentioned earlier), so even "max OC" shouldn't pose any problems. You will probably stay below 400W continuous load with your whole system, with enough room to spare for peaks etc.


----------



## Medizinmann

98uk said:


> It's a pretty simple machine. It has 3 X 120mm, 2 X 140mm case fans and a Be Quiet! air cooler. Case is a Fractal Meshify C.
> 
> Not much else to it really bar a bit of LED from the soundcard.
> 
> I would get the PSU sorted, but at a later date. I can't afford it all right now, hence I wanted to know if it would be okay in the interim.
> 
> Sounds like it should be?


I would also say this shouldn't be a big problem - a 5800X + motherboard + fans etc. should use around 150-180W max., plus 300-320W max. for the GPU, which should still leave you enough room on a 650W PSU.

Best regards,
Medizinmann


----------



## JustinThyme

98uk said:


> It's a pretty simple machine. It has 3 X 120mm, 2 X 140mm case fans and a Be Quiet! air cooler. Case is a Fractal Meshify C.
> 
> Not much else to it really bar a bit of LED from the soundcard.
> 
> I would get the PSU sorted, but at a later date. I can't afford it all right now, hence I wanted to know if it would be okay in the interim.
> 
> Sounds like it should be?


Yeah, it should hold fine. I’d just keep an eye on how hard you are pushing it though. From an electronics engineer’s perspective, regardless of what the label says, pretty much everything is designed to run at no more than 80% of its capacity continuously. Seasonic makes decent PSUs.
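As a quick sketch of the headroom math being discussed (the wattage figures below are the thread's rough estimates, not measurements):

```python
# Back-of-envelope PSU headroom check for the system discussed above.
# All component draws are estimates quoted in this thread, not measurements.
def psu_headroom(psu_watts, draws, derate=0.80):
    """Return (total draw, continuous budget, remaining headroom)."""
    total = sum(draws.values())
    budget = psu_watts * derate  # the "80% tops" rule of thumb
    return total, budget, budget - total

draws = {
    "5800X + board + RAM": 180,   # upper estimate from the thread
    "2080 Ti (280W cap)":  280,   # Non-A card at its max power limit
    "fans/storage/misc":    40,   # assumed ancillaries
}
total, budget, headroom = psu_headroom(650, draws)
print(total, budget, headroom)  # 500 520.0 20.0
```

So even at the pessimistic end of the estimates, the build sits right at the 80% line rather than over it.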


----------



## 98uk

Just to confirm, the card is working great, even with a 112% power limit. No issues at all.


----------



## JustinThyme

98uk said:


> Just to confirm, card is working great, even with 112% power limit. No issues at all


Do you have the ability to run HWiNFO and see your power draw?


----------



## 98uk

JustinThyme said:


> Do you have the ability to run HWiNFO and see your power draw?


I'll give it a go next time I'm on my PC. Would GPU-Z not show me the same thing?

Have to say, the black runs hot and loud. I had a Gigabyte 2070 Super with a triple fan cooler before and it was almost silent. Kinda miss that!

Monitoring temps in some benchmarks, at 100% fan, core is about 68c, with mem and hotspot both being 85c or so.


----------



## JustinThyme

No, GPU-Z won’t show it. HWiNFO is a good monitoring app that you just launch when you need it. It will give you all the information about your rig that can possibly be collected: CPU frequency and power draw, and the same for GPUs. You can see the current value as well as the min, max and average. GPU-Z shows me what my cards are rated for and what they are set at; HWiNFO shows you exactly what they’re doing at any point in time.

85C on VRAM is definitely on the warm side. The hottest spot on my cards never passes 40C.


----------



## TheOpenfield

98uk said:


> Have to say, the black runs hot and loud. I had a Gigabyte 2070 Super with a triple fan cooler before and it was almost silent. Kinda miss that!


~300W on air is just no fun. Might consider undervolting or reducing the power limit. Something around 1800 MHz at 0.8V should give you less than 200W in-game. Even a poor 2080 Ti cooler design should handle that rather quietly.
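A rough sanity check of that undervolt claim, using the common approximation that dynamic power scales with frequency times voltage squared (the baseline clock, voltage and wattage below are illustrative assumptions, not measured values):

```python
# Estimate power after an undervolt using P ~ f * V^2.
# Baseline figures are illustrative, not measurements of any specific card.
def scaled_power(p0, f0_mhz, v0, f1_mhz, v1):
    """Scale a baseline power p0 from (f0, v0) to (f1, v1)."""
    return p0 * (f1_mhz / f0_mhz) * (v1 / v0) ** 2

# Assume ~300 W at a stock-ish 2000 MHz / 1.05 V operating point:
p = scaled_power(300, 2000, 1.05, 1800, 0.80)
print(round(p))  # 157 -- consistent with "less than 200 W in-game"
```

Leakage and static power mean the real number lands somewhat higher, but the estimate shows why a modest voltage cut saves so much more than the clock drop alone would suggest.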


----------



## Audioboxer

So my PC just shut down last night after a few hours of playing New World, mid-session. Never ran the beta, and didn't have any crashes or issues for a few days now. Last night was after about 3 hours of play. Not a black screen, it just completely turned off.

I'm on a loop: 4 rads, CPU, GPU and memory block. So temps tend to be quite cool. With a high ambient my 2080Ti would be around 41~45, this is with 1.093v and a 123% power limit. Fan curve low to moderate; I aim for as silent as I can get. The noisiest thing is my 2080Ti coil whine at 1.093v/123% lol.

Given there are a few rumbles about issues with this game again, I'm exploring undervolting. Seems my 2080Ti is pretty average: I aimed for 0.975v and as high a clock as I could get, but 2000~2025 seems unstable. Seems 1.0v is holding 2010 stable, which is still decent; temps in game are around 37~39 with a water temp of 30~31. If this holds I might try to creep back to 2025~2050 with 1.0v.

This is an EVGA refurb; my other EVGA 2080Ti died a few months ago on the desktop and had to be RMA'd. These are the EVGA 2080Ti blowers, one of the last models of the 2080Ti they released IIRC. Based on the reference-PCB Black edition, but they are A-chips and can be BIOS flashed beyond 280W.

Anyway, I'll keep my findings with New World updated in here. I know it was EVGA 3xxx cards that were hit hardest by failure rates with the game. The power draw spikes on the EVGA cards are concerning, especially if you're already running 123%+ and the game overshoots that even higher. I have an 850W ROG Thor power supply. I'm guessing with a shutdown it was my power supply tripping, and the only way I can assume that happened was New World overshooting my 2080Ti.


----------



## kithylin

Audioboxer said:


> So my PC just shutdown last night after a few hours of playing New World, during playing it. Never ran the beta, didn't have any crashes or issues for a few days now. Last night was after about 3 hours of play. Not a black screen, just completely turned off.


Around 90% of the time this is your power supply hitting OCP (Over Current Protection) and shutting off. Something caused you to exceed the power your power supply can handle, and it shut the system off. If you haven't seen the video by JYZ yet, I would suggest you take his advice: use a frame rate cap for this game and/or vsync, and consider using MSI Afterburner to reduce the power limit on your card to 80% or 70% to resolve this.


----------



## Audioboxer

kithylin said:


> Around 90% of the time this is usually your power supply hitting OCP / Over Current Protection and shutting off. Something caused you to exceed the power that your power supply can handle and shut the system off. If you haven't seen the video by JYZ yet then I would suggest you take his advice and remember to use a frame rate cap for this game and/or vsync and consider using MSI Afterburner to reduce the power limit on your card down to 80% or 70% to resolve this.


Yeah this seems about right, thankfully not looking like the card is about to blow up. I've been playing every other game including RDR2 at 1.093v, 123% and so on, not breaking a sweat.

I do have a 5950x with 270/150/190 and a curve. So technically it can draw up to 270w, but outside of some benches about 250w seems to be the most you're going to draw. Usually less than that.

So IMO, for an 850w supply to trip, it would seem to need one hell of a spike. I have been playing for a few days, though last night's session was the longest at once.

Unfortunately, until now I've never monitored wattage used by the GPU; I only just turned it on for the in-game RivaTuner stats. At 100% with a 1.0v max curve and 1000 on memory it seems to be hitting about 285w. Maintenance now, so I'll see later how high above 285w it will try to go.


----------



## kithylin

Audioboxer said:


> So IMO for an 850w supply to trip it would seem to need one hell of a spike.


Unfortunately that's all it takes: 1 hell of a spike. Your report is the first time I've read of anyone with an RTX 2080 Ti experiencing these issues in that game but it doesn't surprise me. People with 3090's have reported playing that game perfectly fine for hours and weeks and days and then all of a sudden _POP_ a random spike that they never experienced before and their card is dead. So.. be careful and take steps to protect your card. Remove all overclocks. Drop power usage below 100%. Run a frame rate cap, etc. That game is murdering good video cards.


----------



## Audioboxer

kithylin said:


> Unfortunately that's all it takes: 1 hell of a spike. Your report is the first time I've read of anyone with an RTX 2080 Ti experiencing these issues in that game but it doesn't surprise me. People with 3090's have reported playing that game perfectly fine for hours and weeks and days and then all of a sudden _POP_ a random spike that they never experienced before and their card is dead. So.. be careful and take steps to protect your card. Remove all overclocks. Drop power usage below 100%. Run a frame rate cap, etc. That game is murdering good video cards.


I can confirm it can draw over the wattage the BIOS aims for at the % chosen. I saw a spike to 320W at 100%. But the other issue in play is I can confirm it will happily go to 104~105%, so it creeps over the power limit set. I think that is what is causing the spikes. As you can imagine, if the slider is at 124% and voltage is set at 1.093v, chances are it is pulling nearer 400w than the 373w the BIOS aims for at 124%.

Very wattage hungry game but as Jay pointed out it's those bursts that take you over power limit that might be causing issue with some cards.

The behaviour does seem a bit more consistent/less erratic at 100% than it does above that. I tested up to 112%, and I'm of the mindset right now that if you feed this game over 100% it will take it and then some. The last few days I was running 124% and 1.093v. Right now I'm at 100% and I've tested undervolting/leaving it on the default curve.
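The power-target arithmetic here can be sketched out (assuming a 300 W base board-power limit, which is roughly what the quoted 373 W at 124% implies; the exact base depends on the flashed BIOS):

```python
# Sketch of the power-limit slider arithmetic discussed above.
# BASE_W is an assumption inferred from the quoted numbers, not a spec.
BASE_W = 300  # assumed base board-power limit in watts

def target_watts(slider_pct):
    """Board-power target the BIOS aims for at a given slider percentage."""
    return BASE_W * slider_pct / 100

print(target_watts(100))                 # 300.0
print(target_watts(124))                 # 372.0 -- close to the 373 W quoted
# With the observed ~104-105% overshoot stacked on top of the slider:
print(round(target_watts(124) * 1.05))   # 391 -- heading toward 400 W
```

Which is why running the slider at 100% still leaves meaningful margin even when the game briefly overshoots the target.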


----------



## Falkentyne

kithylin said:


> Unfortunately that's all it takes: 1 hell of a spike. Your report is the first time I've read of anyone with an RTX 2080 Ti experiencing these issues in that game but it doesn't surprise me. People with 3090's have reported playing that game perfectly fine for hours and weeks and days and then all of a sudden _POP_ a random spike that they never experienced before and their card is dead. So.. be careful and take steps to protect your card. Remove all overclocks. Drop power usage below 100%. Run a frame rate cap, etc. That game is murdering good video cards.


Someone posted a hwinfo64 cap of their 3090 running the Kingpin/eVGA (XOC?) 520W Bios on New World.
It was pulling 306 amps at just 460W total board power. 306 amps!!!


----------



## Falkentyne

kithylin said:


> I think you were reading that wrong, or HWINFO is doing the math wrong, one of the two. 460 watts @ 12vDC (what video cards run at.. well most of the parts of the video card that's high current, some small things pull off the 3.3v rail via the PCIE slot, but most of the GPU is on the 12v rail(s)) is 38.33 amps, not 306 amps. Amps = Watts / Volts when running DC current, which our computers operate at. 460 / 12 = 38.33333333333333 repeating.


That's probably per power stage.

Buildzoid mentioned on his teardowns that you can go way above 200 amps on these cards, and he calculates the # of watts of heat at 150 amps, 200 amps, 250 amps, 300 amps and so on.
There's no way it's only pulling 38 amps. MAYBE on one power stage or phase it is. Not on all combined. 

Even Intel CPUs can pull 300 amps. You just can't cool them on air. I've been above 280 amps on my 10900K. And I have direct VRM access and can read the registers.

This is my RKL at a pedestrian 4.7 GHz running Prime95: 152 amps.
I've been above 200, and once you get to 250 it gets uncoolable.
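Both figures in this exchange can be right at once, because current depends on which side of the VRM you measure. A sketch (the core voltage and the wattage split below are illustrative assumptions):

```python
# Why "38 A" and "~300 A" describe the same card: input rail vs core rail.
board_power = 460.0  # W, total board power from the HWiNFO capture

# On the 12 V input side (PSU cables + PCIe slot):
input_current = board_power / 12.0
print(round(input_current, 1))  # 38.3 -- the PSU-side figure

# The VRM down-converts most of that to the GPU core rail, which sits
# around ~1 V under load, so output-side current is far higher:
core_power = 320.0               # W, assumed share reaching the core rail
core_current = core_power / 1.05  # A, at an assumed ~1.05 V Vcore
print(round(core_current))       # 305 -- same ballpark as the 306 A reading
```

Same power, roughly ten times the current, because the voltage is roughly a tenth; that's also why VRM current sensors report numbers that look impossible next to the wattage.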


----------



## kithylin

Falkentyne said:


> That's probably per power stage.
> 
> Buildzoid mentioned on his teardowns that you can go way above 200 amps on these cards, and he calculates the # of watts of heat at 150 amps, 200 amps, 250 amps, 300 amps and so on.
> There's no way it's only pulling 38 amps. MAYBE on one power stage or phase it is. Not on all combined.
> 
> Even Intel CPU's can pull 300 amps. You just can't cool them on air  I've been above 280 amps on my 10900k. And I have direct VRM access and can read the registers.
> 
> this is my RKL at a pedestrian 4.7 ghz running prime95. 152 amps.
> Been above 200 and once you get to 250 it gets uncoolable.


I'm probably wrong and shouldn't have written anything. Could you delete where you quoted me and I'll delete my message too please?


----------



## Falkentyne

kithylin said:


> I'm probably wrong and shouldn't have written anything. Could you delete where you quoted me and I'll delete my message too please?


Hey I'm not upset. We're all friends here, right?

Take a look at this video. Go to 12 minutes in (sorry, corrected).


----------



## fat4l

Hi guys.
Before reading 600 pages I'd like to ask: what is the best bios for the Asus Strix 2080Ti OC?

Can we flash this bios? I watercool the card and also like the curve editor, so the Strix XOC I think is a no-go. Is this a compatible bios?
"Galax RTX 2080 Ti HOF OC Lab Custom PCB (3x8-Pin) 2000W x 100% Power Target BIOS (2000W)"

Thanks


----------



## jura11

fat4l said:


> Hi guys.
> Before me reading 600 pages I'd like to ask:
> ...what is the best bios for Asus Strix 2080Ti OC ?
> 
> Can we flash this bios? I watercool the card and also like the curve editor so strix XOC I think is a no go. Is this compatible bios?
> "Galax RTX 2080 Ti HOF OC Lab Custom PCB (3x8-Pin) 2000W x 100% Power Target BIOS (2000W)"
> 
> Thanks


Hi there 

For the Strix, personally I would use the Matrix BIOS, which is one of the better BIOSes for the Strix. With that BIOS I could keep 2205MHz in benchmarks, and in gaming I would keep 2160-2175MHz in most games; in more demanding games I used a 2145MHz profile.

If you use a BIOS from Galax or EVGA etc., you will lose the DP port.

Hope this helps 

Thanks, Jura


----------



## Medizinmann

fat4l said:


> Hi guys.
> Before me reading 600 pages I'd like to ask:
> ...what is the best bios for Asus Strix 2080Ti OC ?
> 
> Can we flash this bios? I watercool the card and also like the curve editor so strix XOC I think is a no go. Is this compatible bios?
> "Galax RTX 2080 Ti HOF OC Lab Custom PCB (3x8-Pin) 2000W x 100% Power Target BIOS (2000W)"
> 
> Thanks


You could try the ROG Strix 1000W XOC BIOS, which is explicitly an XOC BIOS for the ROG Strix!

But I would agree with jura11: the Asus ROG Matrix BIOS is in my eyes the best, as the XOC BIOS only brings diminishing returns.

Best regards,
Medizinmann


----------



## tcclaviger

JustinThyme said:


> There are no other clock steps, that’s simply loss of overclock due to increased temps. The first hardware forced throttle point is 45C. My rig is quiet and I still don’t hit 40C. That simply translates into rad surface area. If you are happy with what you have that’s great. I’m happy with the best mine can do and a throttle due to a poor design on my part doesn’t give me a warm fuzzy. Back to budget vs optimal.
> 
> yeah my set up blows.
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 20 184 in Port Royal
> 
> 
> Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 2080 Ti x 2, 65536 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I scored 26 605 in Time Spy
> 
> 
> Intel Core i9-10980XE Extreme Edition Processor, NVIDIA GeForce RTX 2080 Ti x 2, 65536 MB, 64-bit Windows 10
> 
> 
> 
> 
> www.3dmark.com


Running a chiller @ 10-15c water.

Can confirm 1080ti stepped down 1 freq bin at roughly 30c.

Can confirm my 2080ti holds set point freq at 32c (hottest it gets).

There's some interaction between power budget and temp stepping, it seems, on Turing that wasn't present previously. I have seen the 2080ti step down 1 step in specific scenarios, from 2205 to 2190, but it's not temp causing it (Port Royal triggers it, for example; I set 2265 and bench at 2250).

Both cards shunt modded.
Both cards strong OC reference designs.
Both cards on higher power limit bios to extend the power limit so they essentially never hit it (536 watts 2080ti, 999 watts 1080ti), even though the 2080ti does occasionally kiss it.

Look at the attachment for GPU core speed and GPU temp. No such step point exists on the 2080ti; if you're losing a step between 28c and 45c, you're hitting the power cap and it's dumping 1 step to stay within budget.








@ 5950x and 2080ti with 1 GPU on the 3dmark browser for "Claviger" results, I may have a little time to test this...

EDIT: Just thought about this though: I have a non-A die; it may be the A die that steps, not sure on that part.
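The stepping described above can be sketched as a toy model (the 15 MHz bin is the well-known GPU Boost step size; the clocks are the ones quoted in this post):

```python
# Toy model of GPU Boost clock bins: clocks move in 15 MHz steps, so any
# throttle event (temp or power) drops the clock by whole multiples of 15.
BIN_MHZ = 15

def step_down(clock_mhz, bins=1):
    """Clock after dropping the given number of boost bins."""
    return clock_mhz - bins * BIN_MHZ

print(step_down(2205))  # 2190 -- the single-bin drop described above
print(step_down(2265))  # 2250 -- set point vs. observed bench clock
```

Which matches the observation: a one-bin drop to stay inside the power budget, not a temperature threshold.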


----------



## Kaltenbrunner

I see the scalpers on or for Newegg are selling a bunch of 2080/S/Ti; when did that start?? I'm happy, I'd love a 2080ti, but guess what the prices start at? $1100USD. Then I see a bunch around 1800-1900, and that's still just the 2080/S. I see a bunch of Ti's for around $2500.

For whatever reason, real or not, there's even one for ~$7800.


----------



## J7SC

Kaltenbrunner said:


> I see the scalpers on or for Newegg, are selling a bunch of 2080/S/Ti, when did that start ?? I'm happy, I'd love a 2080ti, but guess what the prices start at ? $1100USD, then I see a bunch around 1800-1900, at thats still just 2080/S. I see a bunch of Ti's for around $2500.
> 
> For what ever reason, real or not, there's even 1 for ~$7800


 "nuts"


----------



## iamjanco

...normal these days and not surprising.


----------



## Laithan

Hope they pass a law to make botting illegal... it is a topic in the USA. Holding this crap hostage is maddening...


----------



## J7SC

Laithan said:


> Hope they pass a law to make botting illegal... it is a topic in USA. Holding this crap hostage is maddening...


...it is maddening. Presumably, some scammers are fishing for desperate Christmas shoppers with a decent GPU on their list? On the other hand, the above examples are particularly wild outliers (or else maybe I should sell my 2x 2080 Tis for their weight in gold?).

...I'm still wondering why Newegg allows the outside scam artists to appear under their (mostly trusted) brand, other than perhaps because of the fees Newegg collects.

...all in all, I managed to snag both a 6900XT and a 3090 Strix at the prior, lower MSRPs this year (and no tariffs here) for some work system updates as well. It was not easy, but not as difficult as the experience other folks seem to have had.


----------



## Kaltenbrunner

The used market, at least on eBay, is absurd. Used cards are hardly any lower than new ones. And broken cards are going for 70-80% of new, off the top of my head; I never actually worked it out.

So I can't even find a used 2080ti for a sane price.


----------



## kithylin

Kaltenbrunner said:


> The used market, at least on ebay is absurd. Used cards are hardly any lower than new ones, not that I remember. And broken cards are going for 70-80%, off the top of mt head that is, I never actually worked it out.
> 
> So can't even find a used 2080ti for a sane price.


It's not just the 2080 Ti's, it's mostly all video cards. It's kind of off topic, but random: GTX 980 Ti's are still going for $400, and old Nvidia Titan Blacks from the 700 series are still going for $300. 980 Ti's should be half that or less by now, and the old cards should be < $100 by now.


----------



## widezu69

Hi team. I'm still trying to get to the bottom of why my Strix OC does not like any bios flashing. Every time I flash something that's not the bios that came with the card, it will lock up on reboot with a black screen when it tries to load the driver. Safe Mode works, and so does the normal Windows desktop without any Nvidia driver installed. I'm trying to rule out as many things as I can. I've tried flashing a new bios with a fresh install of Windows without anything else installed, and still no luck. My only other guess is that it might be happening because I'm running on water, so the stock cooler & fans are not plugged in, and the bios locks up when it doesn't detect a cooler.

My theory is that perhaps the first bios flash should be performed while the original cooler is attached?

To those with a Strix OC card - have you had luck flashing while on water? Did you perform your first flash with the stock cooler on? Or did you do flashing after you watercooled the card?


----------



## Medizinmann

widezu69 said:


> Hi team. I'm still trying to get to the bottom of why my Strix OC does not like any bios flashing. Every time I flash something on there that's not the bios that came with the card, it will lock up on reboot with a black screen when it tries to load the driver. Safe Mode works, so does normal Windows desktop without any nvidia driver installed. I'm trying to rule out as many things as I can. I've tried flashing a new bios with a fresh install of Windows without anything else installed and still not luck. My only other guess is that it might be happening because I'm running on water so the stock cooler & fans are not plugged in and the bios locks up when it doesn't detect a cooler.
> 
> My theory is that perhaps the first bios flash should be performed while the original cooler is attached?


I doubt it - I am pretty sure the issue is with something else.



> To those with a Strix OC card - have you had luck flashing while on water? Did you perform your first flash with the stock cooler on? Or did you do flashing after you watercooled the card?


I can only speak for my Palit GamingPro OC - I flashed it under water (some time in late 2019) to the KFA2 380W BIOS and it has been running fine ever since.

Best regards,
Medizinmann


----------



## Laithan

I doubt the cooler makes any difference at all. If you are using a BIOS from another GPU, those ports may be getting disabled. That's a known issue with some of the BIOSes that were not made specifically for your GPU make/model.


----------



## Krzych04650

widezu69 said:


> My only other guess is that it might be happening because I'm running on water so the stock cooler & fans are not plugged in and the bios locks up when it doesn't detect a cooler.
> 
> My theory is that perhaps the first bios flash should be performed while the original cooler is attached?


Something similar happened to me after installing blocks on both my 2080 Tis. The PC would boot, but at very low resolution, as if there were no driver, and Device Manager showed Error 46 for both GPUs. Simply disabling and re-enabling both cards in Device Manager fixed it and the driver kicked in properly, so it only happened on the first boot after putting the blocks on. Both cards had been flashed with the 380W BIOS while they were still air-cooled, though, and it did not happen then.


----------



## J7SC

Krzych04650 said:


> Something similar happened to me after installing blocks on both my 2080 Ti's. After installing blocks the PC would boot but to very low res like if there was no driver and there was Error 46 for both GPUs in device manager. Simply disabling and re-enabling both cards in device manager worked just fine though and driver kicked in properly, so it happened only on first boot after putting blocks on. Both cards were flashed with 380W BIOS when they were still air cooled though and then it did not happen.


...yeah, when you flash a different vbios, card and driver should be disabled / unloaded first. Also, each vbios (or even each setting of a multi-bios card) will create its own 'GPU entry' in the registry and that may also cause some issues on first boot after flashing (or switching) bios.
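For reference, the flash-then-reboot sequence being described usually looks something like the outline below. This is a hypothetical sketch: `original.rom` and `new_bios.rom` are placeholder filenames, and exact flags vary between nvflash versions, so check your copy's help output first.

```shell
# Hypothetical nvflash session outline (run from an elevated prompt).
# Commands are kept in a list rather than executed, since flashing is
# hardware-specific; filenames are placeholders.
steps=(
  "nvflash64 --save original.rom    # 1. back up the BIOS currently on the card"
  "nvflash64 --protectoff           # 2. disable the EEPROM write protection"
  "nvflash64 -6 new_bios.rom        # 3. flash; -6 overrides the ID-mismatch check"
)
printf '%s\n' "${steps[@]}"
# 4. Reboot; if the driver comes up at low resolution on first boot,
#    disable/re-enable the card in Device Manager so it re-reads the
#    new 'GPU entry' in the registry.
```

As noted above, unloading the display driver first (or letting nvflash do it) avoids the card being in use mid-flash.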


----------



## Audioboxer

With a 2x 8-pin 2080 Ti, is there any point trying to flash higher than a 373W BIOS, or is around 380W probably the "limit"?


----------



## Laithan

Two 8-pins are easily capable of drawing/handling >400W combined, and these chips are very often power limited... but then you have the increased heat, so IMO if you don't have them on water I don't know how much benefit any additional power (above 373W) would actually give you. If you can keep them cool enough, there can be some gains from additional power via a BIOS with a higher power limit and/or shunt mods.


----------



## Audioboxer

Laithan said:


> (2) 8-pins are easily capable of drawing/handling > 400W (combined) and these chips are very often power limited... but then you have the increased heat so IMO if you don't have these on water I don't know how much benefit any additional power (above 373W) would actually give you. If you can keep them cool enough there can be some gains with additional power via BIOS with a higher power limit and/or shunt mods.


I'm watercooled, which is why I was asking. I see myself hitting the 373W limit now and then on my FTW3 BIOS.

Will have a look around for a BIOS with a power limit above that.

*edit *- GALAX RTX 2080 Ti VBIOS This has flashed fine for me on a reference card (2x 8-pin), so I'll keep an eye on whether that extra 27W gets utilised at all!

*edit2* - Hmmm, the curve seems to be showing weird behaviour for me with this BIOS, often refusing to go above 2000MHz on core. Some games it's fine, others not so much, and the stability-testing apps are having issues with it as well.

*edit3 *- Above problem covered here: [Official] NVIDIA RTX 2080 Ti Owner's Club


----------



## J7SC

Audioboxer said:


> I'm watercooled so that's why I was asking  See myself hit the 373w limit now and then on my FTW3 bios.
> 
> Will have a look around for BIOS with power limits above that.
> 
> *edit *- GALAX RTX 2080 Ti VBIOS This has flashed fine for me on a reference card, 2 pin, so I'll keep an eye on whether that extra 27w is utilised at all!
> 
> *edit2* - Hmmm, the curve seems to be displaying weird behaviour for me with this BIOS. Often refusing to go above 2000mhz on core. Some games its fine, others not so much. Then the stability testing apps are having issues with it as well.
> 
> *edit3 *- Above problem covered here [Official] NVIDIA RTX 2080 Ti Owner's Club


I'm not sure a few extra watts are worth the hassle, to be honest. I have 2x factory water-cooled 2080 Ti Aorus Waterforce Extremes running in an older Threadripper system, and depending on settings I have seen 379.x W per GPU on the stock vbios. Also, turning off RGB (if fed by your GPU) might yield you another 'usable' 10-15W.

There are some 1kW XOC BIOSes out there for the 2080 Ti (OP page?), including 2x 8-pin ones (Strix), but for anything other than benching under temp-controlled conditions, that's likely not healthy over the longer term, i.e. gaming.

Somewhat related... unexpected benefits!
We have 'arctic inflow + La Niña' weather with lots of snow and temps around -12°C at night, which is unusual for us here on the Pacific Coast, directly by the water. Anyhow, our heat (a central steam plant feeding lots of buildings) went out when two pipes burst recently, from about 10pm to 1am before it was fixed. Threadripper and 2x 380W 2080 Tis to the rescue! I had moved that system to our master bedroom to make way for some new work-play builds in my home office... running 'Unigine Valley' a few times, the 1200W or so of system heat energy raised our bedroom temp to about 23°C in just a few minutes


----------



## Audioboxer

J7SC said:


> I'm not sure if a few extra watts are worth the hassle, to be honest. I have 2x factory w-cooled 2080 Ti Aorus Waterforce Extreme running in an older Threadripper system, and depending on settings, I have seen 379.x W per GPU on the stock vbios. Also, turning off RGB (if fed by your GPU) might yield you another 'usable' 10W-15W.
> 
> There are some 1KW XoC bios out there for the 2080 Ti (OP page ?)...including 2x 8 pin ones (Strix), but for all but benching under temp-controlled conditions, that's likely not healthy over the longer term, ie. gaming.
> 
> Somewhat related...unexpected benefits !
> We have 'arctic inflow + la nina' weather with lots of snow and temps around -12 C at night. That is unusual for us here on the Pacific Coast, directly by the water. Anyhow, our heat (central steam plant feeding lots of buildings) went out when two pipes burst recently...10 pm > to about 1am before it was fixed. Threadripper and 2x 380W 2080 Ti to the rescue ! -- I had moved that system to our master bedroom to make way for some new work-play builds in my home office...running 'Unigine Valley' a few times, and the 1200W or so of system heat energy generated raised our bedroom temp to about 23 C in just a few minutes
> View attachment 2540499


I just went back to the 373W lol. After doing more reading, it's my XUSB FW that prevents the flashing of some BIOSes because I have a newer card, and that Galax 400W BIOS has issues with 2x 8-pin cards. For whatever reason, instead of 400W the BIOS caps out at 350W. This explains my core clock boost issues under load.

Back up at 2100MHz on core and +1000 on memory with the 373W FTW3 BIOS, and I'll just be patient until I get my 3080 lol.


----------



## Krzych04650

J7SC said:


> Somewhat related...unexpected benefits !
> We have 'arctic inflow + la nina' weather with lots of snow and temps around -12 C at night. That is unusual for us here on the Pacific Coast, directly by the water. Anyhow, our heat (central steam plant feeding lots of buildings) went out when two pipes burst recently...10 pm > to about 1am before it was fixed. Threadripper and 2x 380W 2080 Ti to the rescue ! -- I had moved that system to our master bedroom to make way for some new work-play builds in my home office...running 'Unigine Valley' a few times, and the 1200W or so of system heat energy generated raised our bedroom temp to about 23 C in just a few minutes


I have a somewhat similar situation. I use a portable AC to cool/warm my room, but it is placed in an unused, not thermally insulated room in the attic to avoid noise. In winter the temperature in that room drops to something like 5°C, which is too cold for the AC to still produce heat, since it extracts it from ambient. But I am also running external watercooling for my PC with a MO-RA3 placed in the same room as the AC, again to avoid noise, and it keeps the room from cooling down too much. It is a little hell up there in the middle of summer though, like 35-40°C. 

If I were okay with having the PC noise in my room, I would probably not need any heating in winter, provided I play a few hours a day  But then summer would come and it would be double the problem: heat from the weather together with heat from the PC to deal with.

But there are certainly ways to make use of PC heat during winter; it is not just a joke made at the expense of inefficient chips


----------



## Medizinmann

Audioboxer said:


> With a 2 pin 2080Ti is there any point trying to flash higher than a 373W BIOS or is around 380W probably the "limit"?


If one adheres to the standards, one 8-pin connector can draw 150W and the PCIe slot can supply up to 75W. That said, there is always a tolerance of around 10-15%, so some napkin math puts the top limit at roughly 430W with two 8-pins - but it depends on your board, your power supply, and the quality of your cables.
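Spelling that napkin math out (the 150W-per-8-pin and 75W-slot figures are the spec values cited above; the ~15% tolerance is a rule of thumb, not part of the spec):

```shell
# Power budget for a 2x 8-pin card, using the figures from the post.
pin8=150   # W per 8-pin PCIe connector (spec)
slot=75    # W deliverable through the PCIe slot (spec)

spec_total=$(( 2 * pin8 + slot ))             # 375 W by the book
with_tolerance=$(( spec_total * 115 / 100 ))  # ~431 W with ~15% headroom

echo "spec: ${spec_total} W, with tolerance: ~${with_tolerance} W"
```

Which is why ~430W is about where a dual 8-pin card tops out on paper, and why the 373-380W BIOSes discussed in this thread already sit near that ceiling.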

All that said, under normal circumstances you won't go much beyond 380W without LN2 - and anyway, beyond 320W you are already well into the land of diminishing returns... 

If I/we can trust the readings from HWiNFO64, I have seen my 2080 Ti (Palit GamingPro OC with the Galax 380W BIOS) take up to 398W peak and 370W sustained – testing with 3DMark Time Spy and playing Gears 5 on a cold winter day with the windows open and 15°C ambient ❄.

Best regards,
Medizinmann


----------



## Crosslhs82

Hi Everyone 
New to the forum.
Last August I bought a used EVGA RTX 2080 Ti FTW3 Ultra with an EKWB Vector nickel/plexi waterblock
and EK backplate for $900, set back to stock air cooling.
It runs great during gaming, with temps of 59-62°C depending on ambient, for hours at 2055-2070MHz, occasionally going over if it's cooler in the room, with a slight undervolt via MSI AB.
I think it's [email protected] to (1.025v currently).
I dropped it a bit when I thought something was wrong when it hit 379W running the Time Spy below.

I'm trying to figure out if spending approximately $400 to set it up on its own external custom loop would get me anything in performance.
I was thinking of an Alphacool XT45 280mm x 45mm, a Corsair XD5 pump, plus fittings, tubing, and my Noctua 140mm IPPCs, possibly in push/pull.
I have 2x 140mm Noctua IPPCs in the front hooked to the Corsair Commander Core, with those fans on a custom curve set to the temp of the GPU.
I would pretty much do the same thing with the fans and the pump, controlling their speeds in iCUE.

In the summer a portable A/C keeps the room at a comfy 21-24°C.

I don't want to mod the BIOS or shunt it.
My specs are:
AMD R9 5900X 
Asus ROG Strix X470-F Gaming 
Corsair H150i Elite Capellix 360mm 
32GB (2x16) G.Skill Trident Z GTZR 3600 16-16-16-36 
Samsung 970 Evo Plus M.2 500GB 
Samsung 870 Evo 1TB
Samsung 860 Evo 250GB 
Seagate Barracuda 3TB 
Corsair RM850x Gold PSU 
Thermaltake F31 Suppressor with the external loop pass-through
Asus TUF 27" 1440p 165Hz monitor 





Asus ROG STRIX X470-F GAMING Performance Results - UserBenchmark (www.userbenchmark.com)

I scored 16 013 in Time Spy - AMD Ryzen 9 5900X, NVIDIA GeForce RTX 2080 Ti x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)

Would it just be worth it for the custom loop experience and bragging rights, would it be a waste of time and $$$, or is there some legitimate performance in it?

Could anybody give me some possible expectations to help????
Thank You For Your Time!!!!!!!!! and Experience!!!!!!!!!!


----------



## J7SC

Crosslhs82 said:


> Hi Everyone
> New to the forum.
> Last Aug I bought a used Evga rtx 2080 ti Ftw3 ultra with a Ekwb Vector Nickle/Plexi wb
> And Ek backplate for $900 set back to stock air cooling.
> It runs great during gaming with temps 59 to 62c depending on ambient for hours @2055 to 2070mhz occasionally going over if it's cooler in the room with a slight undervolt with Msi AB
> I think its [email protected] to (1.025v currently)
> I dropped it a bit when I thought something was wrong when it hit 379w running Time Spy below.
> 
> I'am trying to figure out if spending approximately $400 to set it up for external custom loop on it's own would get me in perfomance.
> I was thinking of a Alphacool xt45 280mmx45mm, A Corsair xd5 pump, plus fittings,tubing and my Noctua 140mm Ippcs possibly in push/pull.
> I Have 2x140mm Noctua IPPC's in the front hooked to the the Corsair Commander Core with those fans on a custom curve set to the temp of the Gpu.
> I would pretty much do the same thing with the fans and the pump to control their speeds in Icue.
> 
> In the summer a portable a/c keeps the room @ a comfy 21-24c.
> 
> I Don't want to Mod the Bios or shunt it.
> My specs are.
> Amd R9 5900x
> Asus Rog Strix X470 F Gaming
> Corsair H150i Elite Capellix 360mm
> 32GB 2X16 G.SKILL TRIDENT Z GTZR 3600 16 16 16 36
> Samsung 970 Evo Plus M.2 500gb
> Samsung 870 Evo 1tb
> Samsung 860 Evo 250gb
> Seagate Baracuda 3tb
> Corsair Rm850x Gold psu
> Thermaltake f31 Suppressor with the external loop pass through
> Asus Tuf 27"1440p 165hz monitor
> 
> 
> 
> 
> 
> Asus ROG STRIX X470-F GAMING Performance Results - UserBenchmark (www.userbenchmark.com)
> 
> I scored 16 013 in Time Spy - AMD Ryzen 9 5900X, NVIDIA GeForce RTX 2080 Ti x 1, 32768 MB, 64-bit Windows 10 (www.3dmark.com)
> 
> Would it just be worth the custom loop experience, and Bragging Rights or would it be a waste of time and $$$ ? Or Some Legitimate Performance?
> 
> Could anybody give me some possible expectations to help????
> Thank You For Your Time!!!!!!!!! and Experience!!!!!!!!!!


I run 2x water-cooled 2080 Tis in a custom loop (380W each), but they have a factory-installed waterblock, so no direct comparison with prior air cooling. However, they do react well to ambient temp changes as the boost algorithm has temps as a major input. I also have a 3090 converted from air to a custom water loop (up to 520W), and the w-cooling makes a solid difference.

If you build a custom loop for your 2080 Ti that can be adapted to future GPU upgrades (with just a new GPU block), then I reckon that it might be worth it, otherwise, probably not.


----------



## Medizinmann

Crosslhs82 said:


> Would it just be worth the custom loop experience, and Bragging Rights or would it be a waste of time and $$$ ? Or Some Legitimate Performance?
> 
> Could anybody give me some possible expectations to help????
> Thank You For Your Time!!!!!!!!! and Experience!!!!!!!!!!


You can expect a bit more performance and (when done right) a lot less noise - the noise was the main reason for me to put my 2080 Ti and 3900X under water...
Is it worth it? Hard to say. I wouldn't expect a lot more performance - the stock BIOS of your EVGA FTW3 Ultra is already pretty good, and anything beyond 380W lands you in the realm of diminishing returns anyway - but you can expect a little more consistency at higher clocks.

Best regards,
Medizinmann


----------



## Medizinmann

J7SC said:


> I run 2x water-cooled 2080 Tis in a custom loop (380W each), but they have a factory-installed waterblock, so no direct comparison with prior air cooling. However, they do react well to ambient temp changes as the boost algorithm has temps as a major input. I also have a 3090 converted from air to a custom water loop (up to 520W), and the w-cooling makes a solid difference.
> 
> If you build a custom loop for your 2080 Ti that can be adapted to future GPU upgrades (with just a new GPU block), then I reckon that it might be worth it, otherwise, probably not.


As he wrote, he already has a waterblock - from that standpoint he could consider it and reuse the loop later for upgrades...

Best regards,
Medizinmann


----------



## J7SC

Medizinmann said:


> As he wrote - he already has a waterblock - from that standpoint of view he could consider it and reuse the loop later for upgrades...
> 
> Best regards,
> Medizinmann


Yes, I did read that... I was referring to the rest of the loop, as there are some 'hybrid' AIO-style aftermarket GPU setups... if one is building a custom loop, it might as well have lots of headroom to a) get the most from the boost algorithm and b) allow for future upgrades... custom 3090s are already at 450W/520W, and NVIDIA Ada Lovelace (RTX 4k?) is rumoured to be above that 🥴.


----------



## Crosslhs82

Hi Guys, 

Thank you for your responses.

So what kind of clocks do you guys think it could sustain, without any modding, with the parts (listed above) and my ambient conditions? 

As stated, I already have the WB. I would need the fittings/quick-disconnect fittings, PWM fan extensions, pump, and rad (would that Alphacool XT45 280mm x 45mm be enough to benefit?), then rad mounting hardware for the side of the case with steel-backed magnets, or mounted to the table the system is on.

The Corsair I have is about 1 year old now and is non-expandable.
The Commander Core I mentioned is the RGB/PWM controller for the H150i - six ports of each, three of each still available - so that would give me full software control over pump and fan speeds.

I have looked into some of the watercooling kits that would provide a CPU block for later on,
and I have priced the parts that I would like at close to about the same price. But at this time the H150i is doing a great job of keeping the 5900X in check, especially with the PPT/TDC/EDC limits I have set.
Having the GPU on an external loop could give it more headroom.
My case can handle all of it no problem, as the HDD/5.25" bays have been removed, but I feel both would be better separated.

I'm in no hurry to rush into the latest and greatest, so skipping a few generations of cards is OK by me with this 2080 Ti; that also applies to the CPU.
So both running cool and consistent would hopefully carry me a few gens.

Thank You Again !!!!!!!!!


----------



## Laithan

I think you'll be fine for a few gens easy; the 2080 Ti is still a beast for 4K and below... I have the same GPU as you but with the stock AIR cooler, and I'm able to hit 2100MHz. Since I don't have a waterblock on this GPU I can't comment from experience about the before-and-after; however, my guess is that we are power limited more than heat limited. At least that's what I see - the perfcap reason is almost always *power*. The motivation to spend any additional money right now should probably not be performance, but the other reasons stated above.


----------



## Demonkevy666

Crosslhs82 said:


> Hi Guy's
> 
> Thank You for your Response's
> 
> So what kind of clocks do you guy's think under the parts (listed above) and ambient conditions it could sustain without any modding?
> 
> As stated I already have the Wb I would need the fittings/quick disconnect fittings, pwm fan extensions, pump, rad ( would that Alphacool xt45 280mmx45mm be enough To benefit ) then rad mounting hardware for side of case with steel backed magnets or mounted to the table the system is on.
> 
> The Corsair I have is about 1yr old now and is non expandable.
> The commander core I mentioned is the rgb/pwm controller for the h150i 6 ports each 3 available each, so that would give me full software control over pump and fan speeds.
> 
> I have looked into some of the water cooling kits that would provide cpu block for later on,
> I have priced the parts that I would like close to about the same price, But at this time the H150i is doing a Great Job of keeping the 5900x in check especially with the ppt tdc edc limits I have set.
> Having the gpu on a external loop could give more headroom for it.
> My case can handle all of it no problem as Hdd/5.25 bays have been removed but feel both would be better separated.
> 
> I'am in No hurry to rush into the latest and greatest so skipping a few generations of cards is ok by me with this 2080 ti, that also applies to cpu as well.
> So both running cool and consistent would carry me a few gens hopefully.
> 
> Thank You Again !!!!!!!!!


What temps do you get with your 5900X on that H150i?

Because I got a sucky 5600X where I have a horrible core temp difference between some cores in Prime95: cores 0/4 run around 62-65°C, cores 1/5 run around 75°C, and cores 3/4 run around 80°C. I've tried remounting my H150i multiple times, but it seems my chip's IHS just makes horrible contact with the single chiplet. The IO die has never broken 40°C so far.

I would recommend all copper, *no nickel plating*, plus distilled water to maximize efficiency.

I'm probably going to do an all-custom watercooled build soon as well, for a new build. My H150i is using thin 15mm ID-COOLING RGB fans in a push/pull configuration in a Corsair 4000D Airflow case. It gives about the same temps as having the three Corsair ML 25mm fans in push. The ID-COOLING fans run at a lower RPM and aren't as loud as the MLs.

I'm also running an EVGA RTX 2080 Ti FTW3 Ultra - I scored it refurbished at Micro Center for $863 - in this system with the 5600X, with zero problems with the card. I think my whole system cost less than what some of these cards are currently going for online lol


----------



## kithylin

Demonkevy666 said:


> cause I got a sucky 5600x where I have a horrible core temp difference between some cores on Prime95, Cores 0/4 run around 62-65C cores 1/5 run around 75-75 and cores 3/4 run around 80C I've tried remounting my H150i multi-times but it seems my chips IHS just makes horrible contact with the single chiplet. the IO die has never broke 40C so far.


That's probably because you're still using Prime95. Most people stopped using it years ago; it's not a good test to load modern hardware correctly. It's an artificial test that doesn't represent any real-world scenario. Try modern tests like the AIDA64 stress test or Cinebench - both can be set to loop repeatedly for hours.


----------



## Crosslhs82

Thanks again for responding, guys.
I do appreciate all the info and advice!!!!!

That's good news, possibly skipping a few gens!!!!!!!!
To be exact, the WB is the EK Quantum Vector Nickel Plexi.

As far as the 5900X goes on temps, it's been great with the stock H150i fans.
All my coolers have always been top-mounted as exhaust - well, except the first 2, if you want to count the H60/H80i on the FX series, which is where I introduced myself to Noctua IPPC 3000RPM fans - so noise really isn't a concern for me, as I got used to those fans and knew they were there for a reason.

In a full Cinebench R23 run it stays at 68-69°C to 72-76°C depending on how much PPT/TDC/EDC I give it and the ambient temp. (Still playing to see what I can get, to know where it runs best between temps and performance.)
** The Give And Take **

I live in Denver, CO, so there have been a few cold nights since I got the 5900X
(Mar 6th-ish), and both had great temps.
One night I played Ghost Recon Breakpoint; the 2080 stayed at 47-48°C and held 2100MHz most of the night.
The 5900X stays in the upper 40s/low 50s gaming.
From the Time Spy link above, the temps on both were 63-64°C, and that may have been before playing with the mobo limits.
With Curve Optimizer at -20 all-core.
PPT 185 / TDC 170 / EDC 165/170 (currently).
I haven't tried per-core, that's a lot of testing LOL.
So from the sounds of it, a steady 2100 to 2150-ish MHz may not be too much of a problem then?
Hmmmmm

The previous owner let me try it out - of course, he also had $800 cash plus a picture of my driver's license - but in about 3 hours' time I was satisfied enough to meet for the rest.

I'm trying to remember who it was in a YT video that said to use a nickel WB on the CPU if doing it for the 5000 series.

I really like the thought of this being separated and quick to disconnect.

I still have my Zotac GTX 1070 Ti I could put in during the teardown, setup, and priming.
I have my old PSU to help with the priming and leak tests, so I won't have that rushed feeling while doing it.

When I first started thinking about the loop, parts availability had me looking at 3 different sites to find all the same brand of fittings, or one site had the pump but no same-size fittings; hopefully some of that has straightened out.
I do have a Micro Center where I buy most of my parts, depending on what it is (anymore).
From looking at their prices for fittings, I will be ordering most if not all of this online.
They did lose some business from me when they started getting the EVGA 3060 Tis and the 3070s, showing them as in stock, only for me to drive 25 miles one way and find they were putting those cards into in-store builds ONLY!!!!!!!!!! Then why list them as in stock on the website?
Ticked me OFF!!!!! That's when I called about the 2080 and worked the deal, and so far it's great!!!!!! 
So they will still get some of my business, just not all of it, since Best Buy will price match also.
OK, off that soapbox!!!!! LOL

I don't know when I will do this just yet, but I would like to have some custom loop experience, and this should be a simple loop.
What would be better: the pump close to the rad, or close to the WB? Pushing to the rad, or to the WB?
I could do it either way.
The pump inside would cut down on the electrical coming out of the case; it would just be the PWM fan extension and the tubing.
Outside would need everything: power, tubes, extensions, and the temp sensor that would go to the mobo header, so maybe an extension there.
Lots to learn and ask!!!!!!!!!!! 

For whatever reason, P95 has never been in agreement with my system, and I have almost quit using it.
I can run AIDA64 Extreme for hours, any test, no issue; OCCT, no issue; CB R23 back to back; the RealBench 8-hour stress test; gaming - but P95 can usually start failing workers in minutes at stock BIOS and RAM settings on this system.
One thing I just remembered about my mobo: in the early days of its 1000- and 2000-series BIOSes, there was a problem with having HWiNFO64 or other monitoring software and Corsair Link 4 or iCUE running at the same time as P95.
So I might look into that. I will be back around the PC Friday night to see if that has anything to do with it again.

Thank You All Again !!!!!!!!!!


----------



## Demonkevy666

kithylin said:


> That's probably because you're still using Prime95. Most people stopped using that years ago. It's not a good test to load modern hardware correctly. It's an artificial test that doesn't represent any real-world scenario. Try modern tests like AIDA64 stress test or CineBench. Both of those can be programmed to loop repeatedly for hours.


Not my problem; temps are still widespread even in Cinebench R23 - the lowest core temp is 58°C on one core and 72°C on the highest. Besides, Prime95 is the ONLY program that stresses AVX2 on the Ryzen 5000 series.


----------



## Crosslhs82

Have you tried the PBO limits?
When I first put the 5900X in, my temps were higher until I set those limits.


----------



## kithylin

Demonkevy666 said:


> Not my problem temp are still widespread, even on cinebenchr23 lowest core temp is 58C on one core and 72C on the highest. Besides Prime95 is the ONLY program that stresses the AVX2 on Ryzen 5,000 series.


Then your issue is poor thermal paste application, or the cooler is not mounted correctly. Re-apply the thermal paste and re-mount the cooler, and make sure it's an even and flat mount this time. What you are experiencing is not typical: I only have a 2-4°C difference between the hottest and coolest cores on my 5800X system, even under stress at 100% load - normally, at least. I did see 2 cores get 10°C hotter than the other cores in Prime95 when I did try it, which led me to stop using it, as no other benchmark test caused that in my system.


----------



## Demonkevy666

kithylin said:


> Then your issue is poor thermal paste application or cooler is not mounted correctly. Re-apply thermal paste and re-mount the cooler and make sure it's an even and flat mount this time. What you are experiencing is not typical. I only have a 2-4c difference between the hottest cores and coolest cores on my 5800X system, even under stress and 100% load. Normally at least. I did see 2 cores get +10c hotter than the other cores in Prime95 though when I did try it. Which led me to stop using it as no other benchmark test caused that in my system.


I've already done all that it just has bad IHS contact to the chiplet.


----------



## kithylin

Demonkevy666 said:


> I've already done all that it just has bad IHS contact to the chiplet.


All Ryzen chips are soldered. I have never heard anyone else report anything like that, and I look through the AMD forums here, Reddit, and the LTT forums a couple of times a week.


----------



## Medizinmann

kithylin said:


> All the chips for Ryzen are soldered. I have never heard of anyone else reporting anything like that and I look through the AMD forums here, reddit, and on LTT forums a couple times a week.


There have been reports about bad soldering and therefore bad contact between the CPU die and IHS - but for Intel CPUs; I've never read anything like this about Ryzen.
He might also consider looking into the positioning of his cold plate, as this might have an impact - albeit this usually only applies to the higher core count CPUs with more than one chiplet, like the 3900X/5900X and 3950X/5950X, where the most heat-generating parts are somewhat off-center…

Drop Temperatures on RYZEN 3000 CPUs: der8auer RYZEN 3000 OC Bracket - YouTube 

Best regards,
Medizinmann


----------



## kithylin

Medizinmann said:


> There have been reports about bad soldering and therefore bad contact of CPU-die and IHS - but for Intel CPUs - never read anything like this about Ryzen.
> He might also consider looking into the positioning of his cold plate – as this might have an impact – albeit this usually only applies to the higher core count CPUs with more than one chiplet like 3900X/5900X and 3950X/5950X – where the most heat generating parts are somewhat of center…
> 
> Drop Temperatures on RYZEN 3000 CPUs: der8auer RYZEN 3000 OC Bracket - YouTube
> 
> Best regards,
> Medizinmann


I just did a double-take and checked the thread name to be sure... yeah, we're supposed to be discussing the RTX 2080 Ti in here. Why are people discussing CPUs? I didn't even realize what thread I was in when writing my earlier replies. I shouldn't have even written most of that. Maybe we should get back on topic.


----------



## Medizinmann

kithylin said:


> I just did a double-take and checked the thread name to be sure... yeah, we're supposed to be discussing the RTX 2080 Ti in here. Why are people discussing CPUs? I didn't even realize what thread I was in when writing my earlier replies. I shouldn't have even written most of that. Maybe we should get back on topic.


😅😂🤣

Well - we ended up here because of the discussion - "Is it worth it to do a custom loop for an EVGA 2080 Ti" - a few posts back - and as always it depends - and as an obvious thing - one should include her/his CPU in the loop...and then someone came up with their problems cooling a 5600X and recommended non-nickel-plated cold plates and such...

But we might as well end this side route here - I agree very much...😉

Best regards,
Medizinmann


----------



## Crosslhs82

Hi Guys,
Thanks Again for Your Time and Experience.

If/when I do this, it would most likely be at a time when I'm on vacation, so I have plenty of time.
So End of July or the Week of Xmas.

I'm aware of the mixing of metals and of using the right inhibitors because of the nickel plating.
Any thoughts on using EKoolant clear or Mayhems X1 clear? With this EK Quantum Vector.
Thank You Again 
Have A Great Weekend!!!!!!


----------



## J7SC

Crosslhs82 said:


> Hi Guys,
> Thanks Again for Your Time and Experience.
> 
> If/when I do this, it would most likely be at a time when I'm on vacation, so I have plenty of time.
> So End of July or the Week of Xmas.
> 
> I'm aware of the mixing of metals and of using the right inhibitors because of the nickel plating.
> Any thoughts on using EKoolant clear or Mayhems X1 clear? With this EK Quantum Vector.
> Thank You Again
> Have A Great Weekend!!!!!!


Mayhems X1 clear, IMO...


----------



## Crosslhs82

Thank You 

I spent several hrs over multiple evenings reading other forums and found that Mayhems was mentioned a lot whenever nickel plating was also mentioned.

Would it be best to come out of the pump/res combo (XD5) straight to the rad first and then to the GPU, or does it not make a difference once the loop is saturated?


----------



## kithylin

Crosslhs82 said:


> Hi Guys,
> Thanks Again for Your Time and Experience.
> 
> If/when I do this, it would most likely be at a time when I'm on vacation, so I have plenty of time.
> So End of July or the Week of Xmas.
> 
> I'm aware of the mixing of metals and of using the right inhibitors because of the nickel plating.
> Any thoughts on using EKoolant clear or Mayhems X1 clear? With this EK Quantum Vector.
> Thank You Again
> Have A Great Weekend!!!!!!


On the subject of mixing metals: nickel and nickel plated copper blocks are safe to use with copper and bronze radiators and water blocks without the need for any inhibitors. They sit close enough together on the galvanic corrosion table that they don't react with each other enough for home users to be concerned. There is a reaction between nickel and copper/bronze, but it is so slow that it would take a very long time (we're talking 5+ years) of daily contact to cause an issue. Most people that do custom water loops drain and refill their loops with fresh water/coolant at least once a year, sometimes more often than that. It's a non-issue for home users.

The main problem is people that are mixing aluminum in with a loop that has Nickel, Nickel plated copper, copper, or bronze parts.

Also do bear in mind: there is no man-made coolant that transfers heat better than pure distilled water. Anything you use from any company that is not pure water will transfer heat to the radiators slightly less efficiently, slightly raising the temps of the components in the loop.

Personally I have a nickel plated video card block, a pure copper CPU block, bronze radiators and a mix of nickel/copper fittings. I use nothing but 100% straight distilled water in my loop, and I flush and refill it at the end of every year. I have run all of my computers this way going back to at least the early 2000's. I have never had corrosion issues in any parts or any water or any computer system yet. This also has the added advantage that I can get more "coolant" (distilled water) for under $1.00 a gallon (normally $0.80/gal at our local Kroger) at any grocery store or Walmart in town. And it's not a toxic chemical that I have to worry about spilling. It's just water. I have a hybrid wolf dog that lives with us in the house, so I can't have something toxic in my computer that could potentially leak out and worry about her licking it. So no "coolants" from Mayhems or any other company for me. If you have any pets or young children in the house then remember to think about that too.
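To put rough numbers on that galvanic corrosion table, here's a quick sketch using representative anodic-index values; the exact figures vary by source and environment, so treat them as illustrative, not authoritative:

```python
# Rough galvanic-compatibility check. Anodic-index values below are
# representative engineering numbers (in volts); exact figures vary.
ANODIC_INDEX = {
    "gold": 0.00,
    "silver": 0.15,
    "nickel": 0.30,
    "copper": 0.35,
    "brass": 0.40,
    "aluminum": 0.95,
}

def galvanic_gap(metal_a: str, metal_b: str) -> float:
    """Potential gap between two metals; a bigger gap corrodes faster."""
    return abs(ANODIC_INDEX[metal_a] - ANODIC_INDEX[metal_b])

# Common rule of thumb: keep the gap under ~0.25 V in a controlled
# environment (like a sealed loop).
print(galvanic_gap("nickel", "copper"))    # tiny gap: slow, a non-issue
print(galvanic_gap("aluminum", "copper"))  # large gap: avoid mixing
```

Nickel vs copper comes out at roughly a 0.05 V gap, while aluminum vs copper is around 0.60 V, which is exactly why aluminum is the one metal to keep out of a copper/nickel loop.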


----------



## Crosslhs82

Thank You Both

Sounds like I would be good to go as the rad I like is full copper. The extra ports on the rad will help draining.
Ok, will plan on trying it without Mayhems for a while, then inspect to see if anything is needed.

Again Thank You Guy's for Your Time and Experience!!!!!!!!!!!!
Jim


----------



## Crosslhs82

Correction: Thank All 3 of You for Your Time and Experience!!!!!!!!!

No young children, just our 15yr old son, fish, and LUNCH for your Wolf dog when his Rabbit chews on Cables LOL 

Thanks Again !!!!
Have A Great Weekend!!!!!!!!!!


----------



## TheOpenfield

Water cooling isn't really worth it from a performance perspective. Judging by your current temps, you will only gain 1-2 clock steps (30 MHz) and a few extra watts of headroom due to lower leakage and no fans powered by the PCB.

Only commit to water cooling if you want to reduce noise and, most importantly, enjoy playing around with hardware. Otherwise it's not worth it monetarily.

Here is a 280W card (water cooled) with 5900X (also water cooled, power limits removed) for reference: I scored 16 327 in Time Spy
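For anyone wondering where "clock steps" come from: GPU Boost on Turing moves in fixed 15 MHz bins and sheds bins as the core warms up. A rough sketch of the arithmetic; the degrees-per-bin figure is an assumption and varies per card and bios:

```python
# GPU Boost drops clocks in fixed 15 MHz bins as the core warms up.
BIN_MHZ = 15      # Turing clock step size
DEG_PER_BIN = 8   # assumed: ~1 bin lost per 8 C; varies per card/bios

def boost_gain_mhz(temp_air_c: float, temp_water_c: float) -> int:
    """Whole-bin MHz recovered by dropping the core temperature."""
    bins = int((temp_air_c - temp_water_c) / DEG_PER_BIN)
    return bins * BIN_MHZ

# Going from ~63 C on air to ~45 C under water:
print(boost_gain_mhz(63, 45))  # 2 bins -> 30 MHz
```

Which is why even a big temperature drop only buys the 1-2 steps mentioned above.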


----------



## Crosslhs82

Thank You for the info.
That's a pretty good comparison, looking at both Time Spy runs, with this card undervolted and @63c during the run.
Most of the time gaming it stays @58-62c, depending on the ambient temp and what's going on game-wise.

I might be able to get a little more on the mem as it's @+525 now in MSI AB.

Ya!!! Parts are still available, some on 1 site and the rest from other places at this time.

Hmmm, maybe keep it the way it is until it needs repasting, or just do it at that time since it would already be apart; of course parts availability @that time would still play a factor.

The previous owner set it back up on air to sell it. When I asked him which TIM he used, he said the Corsair TM50; would Kingpin be any better? Knowing on CPUs it's only a 1-3c difference, does the same still apply with GPUs?

Thank You Again !!!!!!!
I Hope Everybody is Having A Great Weekend


----------



## J7SC

Crosslhs82 said:


> Thank You for the info.
> That's a pretty good comparison, looking at both Time Spy runs, with this card undervolted and @63c during the run.
> Most of the time gaming it stays @58-62c, depending on the ambient temp and what's going on game-wise.
> 
> I might be able to get a little more on the mem as it's @+525 now in MSI AB.
> 
> Ya!!! Parts are still available, some on 1 site and the rest from other places at this time.
> 
> Hmmm, maybe keep it the way it is until it needs repasting, or just do it at that time since it would already be apart; of course parts availability @that time would still play a factor.
> 
> The previous owner set it back up on air to sell it. When I asked him which TIM he used, he said the Corsair TM50; would Kingpin be any better? Knowing on CPUs it's only a 1-3c difference, does the same still apply with GPUs?
> 
> Thank You Again !!!!!!!
> I Hope Everybody is Having A Great Weekend


Water-cooling does make a difference, not only re. (significant) gains in peak speed but also in terms of keeping sustained temps lower, as long as your loop is voluminous and well equipped re. fans. As mentioned before, both my 2080 Tis are the factory water-blocked ones with a stock bios of 380W max each (so a total of 760W for _just the GPUs_ combined). That's a lot of heat energy to transfer...


----------



## Crosslhs82

Sounds like a new slow air fryer LOL

WELL, have a game changer.
So I live in an Apt and can not change the wiring or circuit breakers.
Last night, while gaming with the portable A/C unit (8000 BTU, 8 amp) running, I tripped the breaker for that room only.
It did it 1 time last August after putting in the 2080; it never tripped with the 1070-Ti.

After it tripped, I raised the thermostat so the unit kicks in less, checked the hot water heat thermostat (which was still at about 70f) and turned it down to the lowest point, then finished playing the rest of the night.


The room and my son's room have to have portable A/C units to help, or you sweat to death in the summer, as the main A/C can not keep up with the heat. The management replaced 1 unit because it burnt itself out from the compressor running constantly trying to keep up.

So I could try selling the 8000 BTU and buying a 6000 BTU, as I did in my son's room, which helped stop the breaker trips with his.
We found out from those trips that his bedroom, the living room, bathroom, and half the kitchen are on his breaker.
So the 6000 BTU helped with that.

So with summer on its way, when that A/C will be kicking in more:
COULD dropping the temps of the 2080 with an undervolt help drop the power draw?

When it was cold that 1 night, it maintained 2100mhz most of the night @47-48c, but I didn't look at the watts, so I currently don't know which way to turn now 🤔?????????
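For what it's worth, the breaker arithmetic is simple: amps = watts / volts, and a 15 A breaker is only meant to carry about 80% (12 A) continuously. A sketch with illustrative numbers; the PC wattage here is a guess, not a measurement:

```python
# Rough load check for a single 120 V / 15 A apartment circuit.
BREAKER_A = 15
CONTINUOUS_LIMIT_A = 0.8 * BREAKER_A  # 12 A continuous-load rule of thumb

def amps(watts: float, volts: float = 120.0) -> float:
    return watts / volts

ac_unit = 8.0          # the 8 amp portable A/C
gaming_pc = amps(600)  # assumed ~600 W at the wall while gaming
total = ac_unit + gaming_pc

print(total, total > CONTINUOUS_LIMIT_A)  # 13.0 True -> nuisance trips
```

Swapping to a 6000 BTU unit or shaving PC watts both work the same way: getting that total back under 12 A.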


----------



## acoustic

It won't lower anything enough to help with your issue. If you're able to access the breaker panel, check to see if they're using 14 or 12 gauge wire. If it's 12 gauge, then tell them you want a 20A breaker installed and that you'll pay for it. Breakers are cheap.

That sucks. I had wiring issues in the last home I rented... I bought a house now, and immediately changed out the breakers to 20A when I realized they used 12g Romex.
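On the undervolting question itself: dynamic power scales roughly with V² × f, so dropping voltage at the same clock does cut watts, just not by enough on its own to save a marginal circuit. A back-of-envelope sketch; the voltages and wattage below are illustrative, not measured from any particular card:

```python
# First-order dynamic power model: P is proportional to V^2 * f.
def scaled_power(p_watts: float, v_old: float, v_new: float,
                 f_old: float, f_new: float) -> float:
    """Estimate the new power draw after a voltage/clock change."""
    return p_watts * (v_new / v_old) ** 2 * (f_new / f_old)

# e.g. a 300 W card dropped from 1.043 V to 0.975 V at the same 1950 MHz:
print(round(scaled_power(300, 1.043, 0.975, 1950, 1950)))  # ~262 W
```

That's about a 13% cut in this example, roughly 0.3 A at the wall: helpful, but not a fix for a tripping breaker by itself.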


----------



## Crosslhs82

Thank You for the Info !!!!!!!

All I can do is ask and see, but I expect the worst, as this management is something else.
Personally: having teeth pulled with nothing during or after should give you an idea.

Well, I could try raising the thermostat a few degrees and put up with higher temps/lower clocks while I game/tweak settings/stress test and bench.
I ran Heaven Bench on Ultra before leaving this morning without MSI AB; it ran up to 68c per GPU-Z, pulling 306w, clocks @1915-1965.
In Superposition (measured by HWiNFO64, as I wasn't able to watch what was going on) it was very close in clocks/temp @307w. Compare that to running MSI AB with the undervolt: an average of 2040-2070mhz, 58-63c gaming, pulling 350+ watts.
I will have to try gaming when I get back Friday, undervolted with AB; just lower the clocks and the fan curve and see what happens.

Will also look into another 6000 BTU 6 amp A/C.
It helped on my son's circuit.
2 amps doesn't sound like much but it did make a difference.

Thanks Again


----------



## kithylin

Personally I don't know if I should be writing my comment here as I don't actually own a 2080 Ti. I leaped over the 20 series and went from my 1080 Ti -> 3070 Ti, but this is more of a generalized comment. I have water cooled all of my video cards since the GTX 400 days with every new upgrade I have owned. In general the main benefits from putting a video card in the water loop are: reduced fan noise for a quieter computer, reduced temps, and a more stable clock speed.

I'll use an example about reduced fan noise: by default my 1080 Ti, when I first bought it, would have to run the fans at 100% constantly just to maintain 80~85c when gaming (EVGA 1080 Ti SC Black Edition). Which was rather loud and noisy. I could definitely hear it even in the hallway, and being in the same room with the machine was just unbearable. In my custom water loop I have 4 radiators, 15 fans and a smart fan controller. In this setup the fans would typically only run 15% - 20% to maintain the card at 38c, for example. Which is barely a whisper and much more bearable.

About clock speeds: the default air cooled configuration on my 1080 Ti would run the card around 1800 Mhz core speed (and it would bounce around, sometimes 1700 mhz, sometimes 1800 mhz) due to the high temps. After custom water cooling it I could run it up at 2126 Mhz due to the lower temps. And when I was gaming it would maintain 2126 Mhz at all times and never change. Which led to more consistent FPS performance in games.

Now, a friend of mine I talk to daily does own a 2080 Ti, and he claims he managed to get his card stable at about +125 Mhz core speed in a custom water loop vs air, and he has an Nvidia Founder's Edition 2080 Ti. He was also able to get the memory clock up to +700 Mhz from his max on air of +525 Mhz. So it did improve things for him. And it's much quieter. He has a large 1260mm radiator (a single 9 x 140mm panel radiator) that stands up vertically and sits on his desk.

There are a lot of benefits to custom water cooling a video card. But there are also high initial investment costs with a custom water loop. For me, I've been able to re-use the same 2 pumps from 2010 to today. I've had to replace 1 radiator so far and about half the fans, but the core of my water cooling system is still serving me well all this time later. I just keep putting newer video cards in the same water cooling system. It was an investment back then but not much of an investment as time went on.


----------



## Crosslhs82

Thank You, I do Appreciate Everything.......
Everybody's Time, Experience, and Opinions!!!!!!!!

Wow, that's a lot of rad.
You made me look at rads again LOL.
The Alphacool UT60, even in push/pull, could go in the front later.
But the H150i is only 1yr old, so maybe later.

Can kicking my mem clock up to +600 cause it to use more power? I did it before gaming.
That breaker trip really throws a chain across the power lines.😭


----------



## Medizinmann

Crosslhs82 said:


> Hi Guys,
> Thanks Again for Your Time and Experience.
> 
> If/when I do this, it would most likely be at a time when I'm on vacation, so I have plenty of time.
> So End of July or the Week of Xmas.
> 
> I'm aware of the mixing of metals and of using the right inhibitors because of the nickel plating.
> Any thoughts on using EKoolant clear or Mayhems X1 clear? With this EK Quantum Vector.
> Thank You Again
> Have A Great Weekend!!!!!!


I use Alphacool Eiswasser clear - but I am pretty sure any of the above is also fine...

And yes - I would also prefer something clear - as pigments AFAIR/AFAIK are always prone to causing problems....

Best regards,
Medizinmann


----------



## Crosslhs82

Thank You

I will have to experiment with settings since the breaker trip.
I just cringe at the thought of what is more or less pulling the plug out of the wall while the system is running, before going any further.
Thanks Again
Have A Great Day!!!!!


----------



## Krzych04650

Can anyone check if Fast Sync is working for them on driver 511.23 or newer? It stopped working for me with that driver and it still isn't fixed in newer ones. This has happened a few times before and rolling back drivers always helped, but now rolling back means losing DLDSR, so I cannot do that.

BTW, fun fact: DLSS 2.3.2 works with multi-GPU in SOTTR. Not only that, but it fixes all the problems TAA has with mGPU and increases scaling from 1.82x to 1.96x. Shows that this is perfectly possible.

Too bad the game has other problems like volumetric lighting flicker and stutter, so I still cannot use mGPU. I think it worked fine at some earlier point, but then the game got a lot of updates and it broke. Funny how almost all of these DX12 mGPU implementations are far worse than the implicit DX11 SLI ones.


----------



## Panchovix

Just wanted to share that I got a used 2080Ti Zotac AMP Extreme this week; by default it boosts (on stock) to like 2055-2070Mhz.

But as soon as I add +45 I get VRel limit and it crashes lol, pretty unlucky.

Also just +1200 on VRAM 

Saw the 3DMark leaderboard and got kinda sad lmao, but just luck of the draw I guess lol


----------



## Medizinmann

Panchovix said:


> Just wanted to share that I got a used 2080Ti Zotac AMP Extreme this week; by default it boosts (on stock) to like 2055-2070Mhz.
> 
> But as soon as I add +45 I get VRel limit and it crashes lol, pretty unlucky.
> 
> Also just +1200 on VRAM
> 
> Saw the 3DMark leaderboard and got kinda sad lmao, but just luck of the draw I guess lol


That is within expected specs...it isn't bad.

Depending on how much effort and money you are willing to spend - you could try repasting it and/or put it under water/watercool it...

Best regards,
Medizinmann


----------



## Panchovix

Medizinmann said:


> That is within expected specs...it isn't bad.
> 
> Depending on how much effort and money you are willing to spend - you could try repasting it and/or put it under water/watercool it...
> 
> Best regards,
> Medizinmann


Changed some paste/pads and temps dropped by like 15% lol; it did help stabilize higher clocks. The VRAM though, +1000 is my max, even though it's a lot colder.

EDIT: Flashed the EVGA XC Ultra VBIOS, and it's way better; it can do 2130-2145Mhz under load, and undervolts to 2040Mhz at 0.975V. I guess the Zotac VBIOS is just not good. Also, fan stop finally.


----------



## Crosslhs82

Hi Everybody !!!!!
I hope you're all having a Great Weekend.

Well, I purchased the parts, minus the pump, which I will get a little later:
All Alphacool compression fittings and quick disconnects, and the Alphacool UT60 280mm x 60mm rad, for $268.00 from Modmymod.
The pump I plan on is the Corsair XD5 pump/res combo plus a fill bottle, which will cost me $173.00 direct from Corsair.
Making it $441.00 

I will post back later towards the end of July, when I will be on Vacation and will have plenty of time to get it Done !!!!!!


----------



## z390e

Panchovix said:


> Just wanted to share that I got a used 2080Ti Zotac AMP Extreme this week; by default it boosts (on stock) to like 2055-2070Mhz.


Out of curiosity, how much did you pay, if you don't mind me asking?

Getting ready to sell my Strix 2080 Ti OC 11GB and wondering what the right price to put it at is.


----------



## Crosslhs82

Hi
While you're waiting on Panchovix's response:
In Aug of last yr I paid $900 for my EVGA RTX 2080 Ti FTW3 Ultra with an EK Quantum Vector Nickel Plexi waterblock and EK backplate.

Hoping it may help.


----------



## Panchovix

z390e said:


> Out of curiosity, how much did you pay, if you don't mind me asking?
> 
> Getting ready to sell my Strix 2080 Ti OC 11GB and wondering what the right price to put it at is.


500 USD this past week, though I'm from Chile so prices are higher; it was surprisingly cheap even so, as 3070s are still going for 1000 USD or so.

Being someone who likes to collect GPUs, this is a kinda expensive hobby lol (I still have an AMD HD 5750 from like 12 years ago, an R9 390, a 1660 Super in my father's PC, etc)


----------



## Minarthitep

Hi
Owner of an MSI RTX 2080 Ti Gaming Trio (not X nor Z) here; I successfully flashed it with the Lightning Z VBIOS. Memory is Micron.
Prior to that, I was running MSI AB with a core OC up to 2065 MHz (no mem OC), and now I am on the stock Lightning Z VBIOS with a 380w PL.

Performance is on par, and I have done a few rounds of Firestrike / Timespy since I flashed the bios. Core temp reaches 79°C and power 330w, but I am a bit concerned about the hotspot / memory temp readings in GPU-Z. So far, they reach up to 102°C (memory clock is stock at 1750; since flashing the VBIOS, I have ditched MSI Afterburner): should I worry or not?

Also, I'm reading power on 8-Pin #2 at only 6w peak (the 6-Pin and 8-Pin #1 reach 150w)...could that be normal?
Right now, I am hesitating whether to go back to the stock bios on the card and check the initial readings.
Just wanted some feedback.
Thanks for any reply


----------



## Medizinmann

z390e said:


> Out of curiosity, how much did you pay, if you don't mind me asking?
> 
> Getting ready to sell my Strix 2080 Ti OC 11GB and wondering what the right price to put it at is.


In Germany prices for used ones start around 700€ and new 1100€ (as always VAT included).

Interestingly - models with factory waterblock like i.e. Gigabyte AORUS Waterforce XTREME are among the cheaper ones – probably because most users aren’t into custom loops…

Best regards,
Medizinmann


----------



## Panchovix

Minarthitep said:


> Hi
> Owner of an MSI RTX 2080 Ti Gaming Trio (not X nor Z) here; I successfully flashed it with the Lightning Z VBIOS. Memory is Micron.
> Prior to that, I was running MSI AB with a core OC up to 2065 MHz (no mem OC), and now I am on the stock Lightning Z VBIOS with a 380w PL.
> 
> Performance is on par, and I have done a few rounds of Firestrike / Timespy since I flashed the bios. Core temp reaches 79°C and power 330w, but I am a bit concerned about the hotspot / memory temp readings in GPU-Z. So far, they reach up to 102°C (memory clock is stock at 1750; since flashing the VBIOS, I have ditched MSI Afterburner): should I worry or not?
> 
> Also, I'm reading power on 8-Pin #2 at only 6w peak (the 6-Pin and 8-Pin #1 reach 150w)...could that be normal?
> Right now, I am hesitating whether to go back to the stock bios on the card and check the initial readings.
> Just wanted some feedback.
> Thanks for any reply


Do both cards use 2x8-pin + 1x6-pin? If I remember right, the 2080Ti Gaming Z Trio is one of the few with that connector arrangement; if both are the same, IMO that reading is not normal, or at least it is not balanced as it should be.

And that VRAM temp is kinda high, but I'm not sure how that would change vs the stock VBIOS; GDDR6 (not X) max temp is 100°C if I'm not wrong (from 100°C and above you're in degradation temps), and GDDR6 does not have the same protections as GDDR6X (vs the 105°/110°C downclock on GDDR6X, though GDDR6 seems to downclock at 110°C as well); have you changed the pads or paste at some point?

With this 2080Ti (Zotac AMP) stock temps were about 70°C core and 86°C VRAM, but the previous owner never changed the paste or pads since 2018, so I changed the paste (it was dry like this, luckily I have the pics lol), added some pads on the backplate, left the stock VRAM pads since I still can't find out if they're 2mm or 2.5mm (but cleaned them, they were nasty af), and used compressed air to clean the card/fans, which were nasty af as well.



Spoiler: Core/Heatsink after first opening of the card ever

After doing that, the max temp is 60°C at 335W or so (though my fan curve is degrees = fan speed, so at 60°C it runs 60% fan speed), and VRAM max temp is 66-67°C; while playing lighter games it gets to 55°C or lower sometimes.


----------



## Minarthitep

Panchovix said:


> Do both cards use 2x8-pin + 1x6-pin? If I remember right, the 2080Ti Gaming Z Trio is one of the few with that connector arrangement; if both are the same, IMO that reading is not normal, or at least it is not balanced as it should be.
> 
> And that VRAM temp is kinda high, but I'm not sure how that would change vs the stock VBIOS; GDDR6 (not X) max temp is 100°C if I'm not wrong (from 100°C and above you're in degradation temps), and GDDR6 does not have the same protections as GDDR6X (vs the 105°/110°C downclock on GDDR6X, though GDDR6 seems to downclock at 110°C as well); have you changed the pads or paste at some point?
> 
> With this 2080Ti (Zotac AMP) stock temps were about 70°C core and 86°C VRAM, but the previous owner never changed the paste or pads since 2018, so I changed the paste (it was dry like this, luckily I have the pics lol), added some pads on the backplate, left the stock VRAM pads since I still can't find out if they're 2mm or 2.5mm (but cleaned them, they were nasty af), and used compressed air to clean the card/fans, which were nasty af as well.
> 
> 
> 
> Spoiler: Core/Heatsink after first opening of the card ever
> 
> 
> 
> 
> View attachment 2554779
> 
> View attachment 2554780
> 
> 
> 
> 
> After doing that, the max temp is 60°C at 335W or so (though my fan curve is degrees = fan speed, so at 60°C it runs 60% fan speed), and VRAM max temp is 66-67°C; while playing lighter games it gets to 55°C or lower sometimes.


Thanks for the reply.
The Gaming Trio and X Trio are 2x8-pin + 1x6-pin and the Lightning Z is 3x8-pin. Power is not balanced on either card, as someone pointed out somewhere back in this thread.
Did some testing with the stock VBIOS flashed back yesterday, after pulling the card out of the motherboard, torquing the backplate screws and putting some nice Lego bricks underneath to prevent sagging.
PL is 330w on the stock VBIOS and I am reaching 305w peak power consumption according to GPU-Z (vs 325~330w with the Lightning Z VBIOS). Memory temp was 88°c, which is fairly correct. GPU temp was down by 5°c due to the lower power limit (73°c).
Same issue with power delivery, but I checked and I have one PCIe 8-pin missing - I think that is the issue, even if MSI did not implement power balancing on the card. Will check later if I can plug in some extra cable or if a new power supply will be needed.
Will then flash back the Lightning Z VBIOS and see the results.
Not a bad idea to check the thermal paste and redo it sometime...


----------



## Demonkevy666

I have an opportunity to buy an EVGA GeForce RTX 2080 Ti XC Ultra Gaming 11G, or a second of what I already have, an EVGA GeForce RTX 2080 Ti FTW3 Ultra; they'd be matching cards. The price difference is that the FTW3 is a little over twice the price of the XC Ultra.

I'd like to run SLI with these and maybe do NVLink stuff, but I am wondering: is it better to have two of the same card, or does it not matter?

I want to do some 2D/3D rendering/animation/cel shading sometime later, so these two cards may move over to a Threadripper Pro 5000 series, maybe. Or wait and see what Zen 4 brings.

*Please don't reply to me about SLI being pointless.*


----------



## kithylin

Demonkevy666 said:


> I want to do some 2D/3D rendering/animation/cel shading sometime later, so these two cards may move over to a Threadripper Pro 5000 series, maybe. Or wait and see what Zen 4 brings.
> 
> *Please don't reply to me about SLI being pointless.*


I just wanted to comment a moment: SLI isn't pointless, it's just situationally specific. If you're on an AM4 platform you won't see any scaling with a second card at all, or very little. You'll need at least one of the HEDT platforms (Threadripper or Intel's options). Secondly: you still won't see any scaling at all unless you're gaming at 4K. Neither 1080p nor 1440p scales with SLI at all at the moment with our current hardware. I've tested this myself.


----------



## Demonkevy666

kithylin said:


> I just wanted to comment a moment: SLI isn't pointless, it's just situationally specific. If you're on an AM4 platform you won't see any scaling with a second card at all, or very little. You'll need at least one of the HEDT platforms (Threadripper or Intel's options). Secondly: you still won't see any scaling at all unless you're gaming at 4K. Neither 1080p nor 1440p scales with SLI at all at the moment with our current hardware. I've tested this myself.


Oh thanks. I forgot to mention the screens will most likely be upgraded to 4K (43" main) plus some multi-monitor (21"-24" or 32") for 3D CAD work/2D animation, if in the future I go Threadripper. I was looking at the AM4 boards that actually support NVLink/SLI and there are only 5 AM4 boards that have true support at all lol. They're all $350+ boards too, but they all do 8x/8x. PCGamer claimed the board I have does that, but it does not; it does 16x/4x (MSI MPG Gaming Edge WiFi).


----------



## kithylin

Demonkevy666 said:


> Oh thanks. I forgot to mention the screens will most likely be upgraded to 4K (43" main) plus some multi-monitor (21"-24" or 32") for 3D CAD work/2D animation, if in the future I go Threadripper. I was looking at the AM4 boards that actually support NVLink/SLI and there are only 5 AM4 boards that have true support at all lol. They're all $350+ boards too, but they all do 8x/8x. PCGamer claimed the board I have does that, but it does not; it does 16x/4x (MSI MPG Gaming Edge WiFi).


I know this is the 2080 Ti thread, but my testing was with a pair of stupidly overclocked 1080 Ti's (both custom water cooled, both running 2156 Mhz core and +700 memory, which is about the same as an air cooled RTX 2080 for each card, so just about a pair of 2080's). For my testing I also have an AMD X570 AM4 platform with a Ryzen 5800X that runs @ 5050 Mhz for 1-2 core loads and 4875 Mhz for 2-4 core loads here at my house, so one of the fastest AM4 options at the moment, paired with DDR4 @ 3800 Mhz (1:1 with IF for the CPU) at tight ram timings of 14-16-14-25-1T. With this system and the 1080 Ti pair I saw zero scaling over a single card at 1080p in all games I tested. I used Nvidia DSR to set it to 1440p and saw _SOME_ scaling, about +10% with the second card, and it wasn't until trying 4K that I saw about +30% scaling with a second card. Which is still abysmal. And this was with fully SLI-supported games. I even tried some things and saw an improvement of about +10% in SLI scaling performance by moving my OS from an NVME drive to a SATA SSD and removing the NVME drive from the system. So any SLI configuration with top-spec cards from the 10-series onward is going to max out AM4's PCI-Express bandwidth. SLI on AM4 will be a platform bottleneck.

Finally, in the end I went to a friend's house and we set up both cards in his spare system, an i9-10980XE @ 5.3 Ghz all-core with quad channel DDR4 @ 4200 Mhz, with both of my 1080 Ti's in 16x/16x configuration and a 4K screen, and we finally saw scaling of around +60% with the second card in some games; ARK: Survival Evolved scaled +80% at 4K on that computer. It depends on the game. So... you can read reviews if you want, but this was my own personal first-hand experience. I assume RTX 2080 Ti's will have a similar "requirement". They're even faster, so I expect they'll probably want the same or an even faster board/CPU. So SLI can be useful and offer benefits, if you have the system and configuration to benefit from it.
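To make those scaling numbers concrete: "+60% with the second card" means the pair delivers 1.6x one card, i.e. each card is only about 80% utilized. A tiny sketch of that arithmetic:

```python
# Convert "+X% with a second card" into per-card SLI efficiency.
def sli_efficiency(scaling_pct: float) -> float:
    """Fraction of each card's potential used when running two cards."""
    return (1 + scaling_pct / 100) / 2

print(sli_efficiency(30))  # the AM4 4K result: only 65% per card
print(sli_efficiency(60))  # HEDT 16x/16x: 80% per card
print(sli_efficiency(80))  # ARK at 4K: 90% per card
```
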


----------



## Panchovix

Demonkevy666 said:


> I'd like to run SLI with these and maybe to Nvlink link stuff, but I am wondering If it's better to have to of the same card or does it not matter?


SLI will work, the brand doesn't matter as long as the GPU is the same one (In this case, 2 2080Tis)


----------



## Demonkevy666

kithylin said:


> I know this is the 2080 Ti thread but my testing was using a pair of stupidly overclocked 1080 Ti's (both custom water cooled and both ran at 2156 Mhz core and +700 memory, which is about the same as a air cooled RTX 2080 for each card, so just about a pair of 2080's). In my testing I also have an AMD X570 AM4 platform with a Ryzen 5800X that runs @ 5050 Mhz for 1-2 core loads, 4875 Mhz for 2-4 core loads here at my house, so one of the fastest AM4 options at the moment and paired with DDR4 @ 3800 Mhz (1:1 with IF for the CPU) @ tight ram timings at 14-16-14-25-1T. With this system and the 1080 Ti pair I saw zero scaling over single card in 1080p in all games I tested. And I used Nvidia DSR and set it to 1440p and saw _SOME_ scaling, about +10% over second card and it wasn't until trying 4K that I saw about +30% scaling with a second card. Which is still abysmal. And this was with fully SLI-Supported games. I even tried some things and I saw an improvement of about +10% in SLI Scaling performance by moving my OS from an NVME drive to a SATA SSD and removing the NVME drive from the system. So any SLI configuration with top-spec cards from the 10-series and forward is going to max out AM4's PCI-Express bandwidth. SLI on AM4 will be a platform bottleneck.
> 
> In the end I went to a friend's house and we set up both cards in his spare system: an i9-10980XE @ 5.3 GHz all-core, quad-channel DDR4 @ 4200 MHz, both of my 1080 Ti's in x16/x16, and a 4K screen. There we finally saw scaling of around +60% with the second card in some games, and ARK: Survival Evolved scaled +80% at 4K on that machine. It depends on the game. So... you can read reviews if you want, but this was my own first-hand experience. I assume the RTX 2080 Ti's will have a similar "requirement"; they're even faster, so I expect they'll want the same or an even faster board/CPU. So SLI can be useful and offer benefits, if you have the system and configuration to benefit from it.


Finding games is hard these days, and even harder when you include ray tracing. I've got Control; at 1080p with ray tracing on, this 2080 Ti gets around 72-75 fps, and in heavy fights it might drop to about 65 fps. With DSR set to 4K it runs at 26-27 fps. lol
I just bought a few games that are supposed to support SLI and have ray tracing in them: Hellblade: Senua's Sacrifice, Deliver Us the Moon, and one I know is tough on all cards because of its dual-reality rendering, The Medium.

Guess the best thing about getting the FTW3 version is that I can find Hydro Copper blocks that will fit it.


----------



## kithylin

Demonkevy666 said:


> Finding games is hard these days, and even harder when you include ray tracing. I've got Control; at 1080p with ray tracing on, this 2080 Ti gets around 72-75 fps, and in heavy fights it might drop to about 65 fps. With DSR set to 4K it runs at 26-27 fps. lol
> I just bought a few games that are supposed to support SLI and have ray tracing in them: Hellblade: Senua's Sacrifice, Deliver Us the Moon, and one I know is tough on all cards because of its dual-reality rendering, The Medium.
> 
> Guess the best thing about getting the FTW3 version is that I can find Hydro Copper blocks that will fit it.


I may be wrong (someone please correct me if I am!), but I'm 90% sure that ray tracing does not scale with SLI regardless of configuration. At the moment I think RTX is single-card only. Also, Nvidia stopped adding new SLI profiles to their driver packages some time in spring 2021, I think. It might have been fall 2020. Nvidia will never be adding new SLI profiles to future driver packages. That doesn't mean we can't make it work manually with Nvidia Inspector, though, if you find a community post somewhere listing which bits to add to a game's profile. Also, all of the older SLI profiles are still in the drivers for the older games; there are profiles for several thousand games in the Nvidia packages by now. Again, I'm not saying SLI is bad. I just thought you might want all of the information before you make a purchase.


----------



## Demonkevy666

kithylin said:


> I may be wrong (someone please correct me if I am!), but I'm 90% sure that ray tracing does not scale with SLI regardless of configuration. At the moment I think RTX is single-card only. Also, Nvidia stopped adding new SLI profiles to their driver packages some time in spring 2021, I think. It might have been fall 2020. Nvidia will never be adding new SLI profiles to future driver packages. That doesn't mean we can't make it work manually with Nvidia Inspector, though, if you find a community post somewhere listing which bits to add to a game's profile. Also, all of the older SLI profiles are still in the drivers for the older games; there are profiles for several thousand games in the Nvidia packages by now. Again, I'm not saying SLI is bad. I just thought you might want all of the information before you make a purchase.


DX12 Multi-GPU scaling up and running on Deus Ex: Mankind Divided - PC Perspective

This user gave an example of why:



> Anonymous on November 1, 2016 at 7:44 pm
> MDA = App controlled, doesn't care what GPU
> LDA implicit = App hands control over to driver (DX11)
> LDA explicit = App controlled with matching GPUs
> Nvidia favors LDA explicit mode. You can find Tom Petersen saying so in Ryan's GTX 1080 interview.


Pretty much explains that there are *4 different ways to work with mGPU*, which makes it even _worse than old SLI or CrossFire in DX11_. He just forgot to add implicit and explicit to the first two; also, I think they're supposed to be switched around (implicit and explicit).

I know even mGPU in DX12 is limited by the way Nvidia has worked their hand around it, meaning they only support mGPU when it's two of their cards that support SLI, so the game can pick up the old driver flags from DX11.
There is only one game I know of that doesn't care: Ashes of the Singularity. It's said to use explicit mode, allowing two different GPU vendors in mGPU, with the game engine doing all the work.
It's ridiculous that Nvidia is being this underhanded with multi-card setups. Feels like a lack of consumer choice at this point.


----------



## TheOpenfield

Just wondering: Has anybody here experienced a change in PCIe link speed and memory freq at idle, when changing the core offset over a certain threshold?

In my case, going from +225 to +240 MHz (one clock step), my PCIe link stays at 5.0 GT/s at all times (instead of 2.5 GT/s) and the memory clock at 200 MHz (instead of 100 MHz). This increases idle power consumption by 2 W (from 13.5 W to 15.5 W). I made a short clip of said behavior (streaming with NVENC):














Changing the memory offset doesn't affect the issue at all; it is only linked to the core offset. The sad part: +240 is exactly my rock-stable OC setting (2115 MHz @ 1.050 V).
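If anyone wants to log this behavior instead of eyeballing it in a monitoring tool, here's a minimal sketch that parses the relevant fields from `nvidia-smi`'s CSV query output. The field names come from `nvidia-smi --help-query-gpu`; the exact invocation and driver support are assumptions, so adjust for your setup:

```python
# Hedged sketch: watch idle PCIe link gen and memory clock via nvidia-smi.
# pcie.link.gen.current of 1 corresponds to 2.5 GT/s, 2 to 5.0 GT/s.
import subprocess

QUERY = "pcie.link.gen.current,clocks.current.memory,power.draw"

def parse_idle_state(csv_line: str) -> dict:
    """Parse one CSV line like '1, 100 MHz, 13.50 W' into numbers."""
    gen, mem, power = [field.strip() for field in csv_line.split(",")]
    return {
        "pcie_gen": int(gen),
        "mem_mhz": float(mem.split()[0]),
        "power_w": float(power.split()[0]),
    }

def read_gpu_state() -> dict:
    """Query the first GPU's current state (requires the NVIDIA driver)."""
    out = subprocess.check_output(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader"],
        text=True,
    )
    return parse_idle_state(out.splitlines()[0])

if __name__ == "__main__":
    # Per the post above: +225 offset idles at gen 1 / 100 MHz (~13.5 W),
    # while +240 sticks at gen 2 / 200 MHz (~15.5 W).
    print(parse_idle_state("1, 100 MHz, 13.50 W"))
```

Polling that in a loop while stepping the core offset would pinpoint exactly where the idle state stops dropping.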


----------



## Crosslhs82

Hi Everybody 
I wanted to give an Update.

While purchasing the parts for my loop, I decided to go ahead and add the 5900x to the loop.
When done my Corsair H150i Elite Capellix will go into my son's rig on a 5600x.

So the full loop will consist of:
1- Alphacool NexXxos XT45 420 rad 45mm External
1- Alphacool NexXxos Ut60 280 60mm
In case in front ( I have enough fans for P/Pull if needed on this rad )
All fans will be The Noctua NF-A14 IPPC'S 3000rpm on both Rads
All Alphacool fittings
Corsair Xd5 pump
Corsair Xc7 pro cpu block
Corsair Commander Core Xt
for Icue control
EK Quantum Vector Nickel Plexi WB
Most parts already received and the rest in the next 2 weeks or less.
Will post some results when finished.
Take Care Everybody!!!!!


----------



## Crosslhs82

All main parts came in.
Getting Anxious!!!! Come on July LOL
Got my 420 rad set up how and where it will be.
Used 4x 120mm M3x0.5 all-thread rod,
4x stainless steel straws cut to 70mm,
4x 25lb pull disc magnets, and 16 M3x0.5 thumb nuts.
When finished it will have 3 of my
Noctua NF-A14 IPPCs with ModRight fan grills, the ones that have the chrome or stainless steel mesh look.


----------



## J7SC

Crosslhs82 said:


> All main parts came in.
> Getting Anxious!!!! Come on July LOL
> Got my 420 rad set up how and where it will be.
> Used 4x 120mm M3x0.5 all-thread rod,
> 4x stainless steel straws cut to 70mm,
> 4x 25lb pull disc magnets, and 16 M3x0.5 thumb nuts.
> When finished it will have 3 of my
> Noctua NF-A14 IPPCs with ModRight fan grills, the ones that have the chrome or stainless steel mesh look.
> View attachment 2558010


... love those rad mounts as I have been thinking about a future project involving a heavily modified CM Stryker case w/ a big external rad on the back !


----------



## Crosslhs82

I hope it gives you some good ideas for it!!!!!!!

I wish Alphacool would have gone with M4's instead of M3's.
But those stainless steel straws and the thumb nuts really steadied them up.










My Dad had a metal lathe years ago, which could have come in real handy here with some solid rod turned down and threaded.


----------



## Crosslhs82

Going this way if I feel I need another rad I can put a 360 in the top as well.

Hoping I get some good temps for the 5900x and the 2080ti.


----------



## J7SC

Crosslhs82 said:


> I hope it gives you some good ideas for it!!!!!!!
> 
> I wish Alphacool would have gone with M4's instead of M3's.
> But those stainless steel straws and the thumb nuts really steadied them up.
> 
> View attachment 2558017
> 
> 
> My Dad had a metal lathe years ago, which could have come in real handy here with some solid rod turned down and threaded.


...coincidentally, I was looking at the Stryker case last night, which is 'parked' in our sun-room...no hurry, as I'm fully content with my latest build, but sooner or later it will be time for another project. Maybe Intel this time (i.e. Raptor Lake or Meteor Lake) after 3x AMDs in a row.

I would leave the front, rear and side panels of the CM Stryker off, replacing them with custom-cut acrylic (which needs a mounting solution)...ditto for the back, which would carry something like dual 480s or so - time will tell. But your mounting solution would come in handy front, side and back!


----------



## Crosslhs82

Cool !!!!!!!

I almost went with some angle aluminum but decided this was Way Easier!!!!!


----------



## Panchovix

Decided to do some benchmarks yesterday night, and wow, 2080 Tis surely overclock pretty well, like 30% more than stock in some cases lol

















I scored 16 352 in Time Spy
I scored 7 644 in Time Spy Extreme
I scored 10 358 in Port Royal
(AMD Ryzen 7 5800X, NVIDIA GeForce RTX 2080 Ti, 64-bit Windows 10 - results on www.3dmark.com)


----------



## gaevulk_5558

Hello everyone, can you tell me the measurements of the thermal pads for an MSI 2080 Ti X Trio?
I am attaching a photo of the cooler, in case you need it.


----------



## Crosslhs82

Don't know, but hopefully this may help some.
If not, you might try the MSI forums.

https://www.reddit.com/r/nvidia/comments/mtwski


----------



## Crosslhs82

I rinsed my 420 and 280 rads and drained them through a Keurig coffee filter, and found no debris.
Maybe Alphacool changed their ways, or I just got lucky.

July still can't get here fast enough for Vacation 
LOL


----------



## gaevulk_5558

In case anyone has the same problem with MSI RTX 2080 TI XTRIO

(PAD SIZE)

I solved it using the THERMAL PUTTY TG-PP10


----------



## J7SC

...w-cooled 2x 2080 Ti coming out of retirement for an afternoon, at the price of 760W (total for the GPUs, before CPU and the rest)


----------



## Crosslhs82

Dammmm!!!!!!!!!!


----------



## adversary

Does anyone make a waterblock for the Gigabyte 2080 Ti Gaming OC?

One which would cover all the thermal pad contact points that the stock air cooler already covers (as that card has plenty of them).

tnx


----------



## kithylin

adversary said:


> Does anyone make a waterblock for the Gigabyte 2080 Ti Gaming OC?
> 
> One which would cover all the thermal pad contact points that the stock air cooler already covers (as that card has plenty of them).
> 
> tnx


If you can provide an exact Gigabyte part number from the card itself (there are sometimes multiple versions of the same model name from Gigabyte; they're well known for that sort of thing), then I could check EK's Cooling Configurator website for you.


----------



## Medizinmann

adversary said:


> Does anyone make a waterblock for the Gigabyte 2080 Ti Gaming OC?
> 
> One which would cover all the thermal pad contact points that the stock air cooler already covers (as that card has plenty of them).
> 
> tnx


You could, for example, go to either Alphacool or EK and look into their configurator - whichever kind of waterblock is easier to obtain in your region.
HWConfig - English
Liquid cooling compatibility list | EKWB

But as user kithylin wrote, you/we need the exact model of your GPU to find the right parts...

Best regards,
Medizinmann


----------



## Wicaebeth

Hi everyone, is it totally safe to flash the GALAX 380W BIOS to an MSI Duke OC on air? Here's my bench on the default BIOS: I scored 15 592 in Time Spy


----------



## Panchovix

Wicaebeth said:


> Hi everyone, is it totally safe to flash the GALAX 380W BIOS to an MSI Duke OC on air? Here's my bench on the default BIOS: I scored 15 592 in Time Spy


Temps seem decent, so sure, why not? On my Zotac AMP! the 380W VBIOS works pretty well, and its cooler may be worse than the Duke OC's.


----------



## kithylin

Wicaebeth said:


> Hi everyone, is it totally safe to flash the GALAX 380W BIOS to an MSI Duke OC on air? Here's my bench on the default BIOS: I scored 15 592 in Time Spy


I feel a disclaimer is necessary here given your words: if you are flashing ANY other BIOS onto your card, from any other vendor or any other card, then there is a risk that the other BIOS could damage, brick, or otherwise cause your card to never work again. When you're cross-flashing, nothing is "totally safe". The only "totally safe" thing is to leave the BIOS the card came with.


----------



## Medizinmann

Wicaebeth said:


> Hi everyone, is it totally safe to flash the GALAX 380W BIOS to an MSI Duke OC on air? Here's my bench on the default BIOS: I scored 15 592 in Time Spy


As user kithylin already wrote - when cross-flashing nothing is "totally safe" - but to stay totally safe you're in the wrong forum anyway... 😉 

Flashing the Galax 380W BIOS should work, though - as the Duke OC uses a reference PCB - and cross-flashing my Palit Gaming OC (running under water) with the Galax BIOS worked very well for me.

I can't say how well this will work on air, as the Galax OC 2080 Ti uses a 2-fan design while the Duke OC uses 3 - some people reported that not all fans run that well when cross-flashing (one would run too fast, too slow, or not at all) - you should at least keep an eye on that!

Best regards,
Medizinmann


----------



## Wicaebeth

Oh, so I have flashed it; here's what I got on air:
I scored 15 027 in Time Spy - default BIOS
I scored 15 592 in Time Spy - default BIOS - 2030 MHz | 0.95V | higher = crash due to the power limitation, or the voltage doing ****.
I scored 16 372 in Time Spy - BIOS flashed to 380W Galax - 2130 MHz | 1.037V
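For reference, here's quick arithmetic on the relative gains between those three runs (scores copied from the post above):

```python
# Percent gains between the Time Spy runs quoted above.
scores = {
    "default BIOS": 15027,
    "default BIOS, 2030 MHz @ 0.95 V": 15592,
    "380W Galax BIOS, 2130 MHz @ 1.037 V": 16372,
}
baseline = scores["default BIOS"]
for name, score in scores.items():
    gain = (score / baseline - 1) * 100
    print(f"{name}: {score} ({gain:+.1f}% vs default)")
```

So the cross-flash is worth roughly 9% over the untouched stock run here, and about 5% over the best the stock power limit allowed.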


----------



## Medizinmann

Wicaebeth said:


> Oh, so I have flashed it; here's what I got on air:
> I scored 15 027 in Time Spy - default BIOS
> I scored 15 592 in Time Spy - default BIOS - 2030 MHz | 0.95V | higher = crash due to the power limitation, or the voltage doing ****.
> I scored 16 372 in Time Spy - BIOS flashed to 380W Galax - 2130 MHz | 1.037V


Looks good! I got similar results with a graphics score around 16600.

How are the fans holding up?

Does RGB still work?

Best regards,
Medizinmann


----------



## Wicaebeth

Medizinmann said:


> Looks good! I got similar results with a graphics score around 16600.
> 
> How are the fans holding up?
> 
> Does RGB still work?
> 
> Best regards,
> Medizinmann


Hey, the RGB still works and all three fans spin; no problems at all with this BIOS.


----------



## Laithan

FYI, it isn't whether they are all just spinning; are they all controllable, and do they ramp up/down as expected? Especially the 3rd fan. Use GPU-Z to monitor.


----------



## Vaderkos

Hello,
Does anyone know the thermal pad thicknesses for the *Palit RTX 2080 Ti Gaming Pro OC*? I'm looking for replacements and cannot find any information about the thickness and expected W/mK. I have found that it has a reference PCB, but cannot find any info on the sizes even for that.

Sorry if this question is stupid


----------



## Laithan

Vaderkos said:


> Hello,
> Does anyone know the thermal pad thicknesses for the *Palit RTX 2080 Ti Gaming Pro OC*? I'm looking for replacements and cannot find any information about the thickness and expected W/mK. I have found that it has a reference PCB, but cannot find any info on the sizes even for that.
> 
> Sorry if this question is stupid


It can be hard to find the specs sometimes, and it is important to get it right. You can try contacting Palit and asking them; sometimes they will actually reply with the info. If not, maybe a Palit-specific forum, if there is one. Outside of that, you can measure them yourself, but if you didn't want to wait you would have to order multiple pad thicknesses to have what you need handy.


----------



## TheOpenfield

Wicaebeth said:


> Oh, so I have flashed it; here's what I got on air:
> I scored 15 027 in Time Spy - default BIOS
> I scored 15 592 in Time Spy - default BIOS - 2030 MHz | 0.95V | higher = crash due to the power limitation, or the voltage doing ****.
> I scored 16 372 in Time Spy - BIOS flashed to 380W Galax - 2130 MHz | 1.037V


You should definitely max out the VRAM too. I score similar GPU points with my 280W BIOS (better cooling though, and possibly a better bin).








(Result on www.3dmark.com)


----------



## Medizinmann

Vaderkos said:


> Hello,
> Does anyone know the thermal pad thicknesses for the *Palit RTX 2080 Ti Gaming Pro OC*? I'm looking for replacements and cannot find any information about the thickness and expected W/mK. I have found that it has a reference PCB, but cannot find any info on the sizes even for that.
> 
> Sorry if this question is stupid


Well - the question isn't stupid - but I am sure your only chance is to measure them yourself, do it by trial and error, or use something like K5-PRO!
More FPS for just $10 - K5 Pro Viscous Thermal Paste - YouTube
K5-PRO 60g viscous thermal paste for thermal pad replacement PS3, PS4, XBOX, iMAC | eBay

K5-PRO is actually the best solution from a cooling perspective - but it is a mess to remove, if you ever need to some day...

Best regards,
Medizinmann


----------



## Demonkevy666

Guys, my EVGA RTX 2080 Ti FTW3 has stopped working.

My father connected a powered antenna to the HDTV the RTX 2080 Ti was connected to, and then it stopped giving a signal over HDMI to the TV on power-up. Now the PCIe power plug has a red LED lit even though it's fully plugged in, on both PCIe power plugs. Nothing is broken from what I can see and there are no burn marks anywhere; I'm baffled as to why it just stopped working.


----------



## Crosslhs82

Just A Quick Update
Next Sunday I will start my Custom Loop Adventure!!!!!!

It has been Very Hard sitting on all the parts, just waiting for this Vacation.
I could swear at times I heard them saying Install Me!!!!! Now !!!!! LOL

I will try to get some pics and beginning stats maybe by Friday the 29th.
Pretty much Tuesday the 26th will be all on my son's rig: doing a BIOS update, putting in his 5600X, and installing my Corsair H150i Elite Capellix along with a RAM upgrade and 2 more
Corsair ML 140's (he's got 2 already on his H110i). So like I said, Tuesday is all his; hopefully all goes well, and my loop should be purging and leak testing till I get my 4 Noctuas out of his rig.

I Hope Everyone Has Had A Great Weekend!!!!!!!!!
Later


----------



## kithylin

Crosslhs82 said:


> Just A Quick Update
> Next Sunday I will start my Custom Loop Adventure!!!!!!
> 
> It has been Very Hard sitting on all the parts, just waiting for this Vacation.
> I could swear at times I heard them saying Install Me!!!!! Now !!!!! LOL
> 
> I will try to get some pics and beginning stats maybe by Friday the 29th.
> Pretty much Tuesday the 26th will be all on my son's rig: doing a BIOS update, putting in his 5600X, and installing my Corsair H150i Elite Capellix along with a RAM upgrade and 2 more
> Corsair ML 140's (he's got 2 already on his H110i). So like I said, Tuesday is all his; hopefully all goes well, and my loop should be purging and leak testing till I get my 4 Noctuas out of his rig.
> 
> I Hope Everyone Has Had A Great Weekend!!!!!!!!!
> Later


You're hooked now. Once ya go custom water cooling and see the performance of it, you might never go back to air again. The initial setup cost is a lot higher than air, and sometimes the maintenance is more than air cooling, but the performance makes it all worth it.


----------



## Crosslhs82

I am Really Looking Forward To It!!!!!!!!!


----------



## J7SC

Crosslhs82 said:


> I am Really Looking Forward To It!!!!!!!!!


...you're gonna have some fun - 2080 Tis like w-cooling ! 
I got mine back in December '18, and while other systems here have superseded the 2080 Ti setup for pure gaming, I still use it to power our media-room setup, including for select gaming. A well-cooled 2080 Ti can still pack a real punch (never mind 2x)....


----------



## Crosslhs82

I like the Blue!!!!

Has the magnet idea got you any farther on the other case?


----------



## Medizinmann

kithylin said:


> You're hooked now. Once ya go custom water cooling and see the performance of it, you might never go back to air again. The initial setup cost is a lot higher than air, and sometimes the maintenance is more than air cooling, but the performance makes it all worth it.


You forgot noise - IMHO it's much easier to get noise down with water cooling.

Best regards,
Medizinmann


----------



## Crosslhs82

Good thing I'm used to the Noctua IPPCs,
so I may be double-checking whether the system is running or not ---LOL


----------



## J7SC

Crosslhs82 said:


> I like the Blue!!!!
> 
> Has the magnet idea got you any farther on the other case?


...other custom case project awaiting decisions on next upgrade (winter?)


----------



## Crosslhs82

Ok 
Hoping it gave you some ideas to work with.


----------



## z390e

Getting ready to flash the ASUS RTX 2080 Ti Strix OC Custom PCB (2x 8-pin) 1000W x 100% power target XOC BIOS (1000W) onto my Strix OC 2080 Ti as a test run for BIOS updates on other cards. I tried reading through the thread but didn't see anyone specifically doing that one; anyone have any gotchas they saw?

Going to flash 208990.rom onto 10DE 1E07 - 1043 866A, which is currently using 90.02.17.00.47.

Do most people just set the power limit to ~40% first, then dial it in?


----------



## pewpewlazer

The Strix XOC BIOS has no V/F curve, which makes it pretty useless: your voltage will be at the mercy of GPU Boost. My experience was that you typically end up at 1.05V. Gains vs the 380W Galax BIOS were almost nonexistent for me, maybe 15-30 MHz, with questionable stability.


----------



## Crosslhs82

Hi Everybody
I Hope everyone is having A Great Weekend!!!!!!

I was wondering if these small imperfections on my thermal pads will be OK????
This is my 1st time tearing a GPU apart and working with thermal pads.
Up next: Corsair XTM50 grease for the WB.


----------



## Crosslhs82

Oops forgot the pic.


----------



## J7SC

Crosslhs82 said:


> Oops forgot the pic.


...the small imperfections (pattern ?) shouldn't be a problem. FYI, a little trick I've been using for many years is to put a little bit of thermal paste (ie. MX4/5 etc) onto the pads before closing it up. That's not a necessity by any stretch, but it works for me...


----------



## Crosslhs82

Thanks, I will keep that in mind in case the results are not to my liking.

So I take it you didn't see anything that stood out that I should have changed before installing the block?


----------



## J7SC

Crosslhs82 said:


> Thanks, I will keep that in mind in case the results are not to my liking.
> 
> So I take it you didn't see anything that stood out that I should have changed before installing the block?


...nope, other than thermal paste on the GPU die is missing 😄


----------



## Crosslhs82

THANK YOU !!!!!
Don't worry I didn't forget it.
The pic was taken before paste went on.
Pasted like I've seen in YT vids: an X with a line straight across, with Corsair XTM50.

Got the H150i out and cleaned up to go into my son's on Tuesday, 280x60 with 3 of the 4 fans mounted (left the bottom fan off till I connect the hoses).
CPU block mounted, and working on the backplate as we speak.
Should be no reason not to be purging/leak testing by tomorrow evening.


----------



## Demonkevy666

Demonkevy666 said:


> Guys, my EVGA RTX 2080 Ti FTW3 has stopped working.
> 
> My father connected a powered antenna to the HDTV the RTX 2080 Ti was connected to, and then it stopped giving a signal over HDMI to the TV on power-up. Now the PCIe power plug has a red LED lit even though it's fully plugged in, on both PCIe power plugs. Nothing is broken from what I can see and there are no burn marks anywhere; I'm baffled as to why it just stopped working.


Video of the issue.


----------



## J7SC

Demonkevy666 said:


> Video of the issue.


...can you try a different PSU known to be good (ie. 12v rail)? Red power LED is telling, but you need to figure out if it is internal to the card, or an 'external' PSU issue.


----------



## Demonkevy666

J7SC said:


> ...can you try a different PSU known to be good (ie. 12v rail)? Red power LED is telling, but you need to figure out if it is internal to the card, or an 'external' PSU issue.


I have tried another power supply and different power plugs; the same thing happens. I opened the card up, but there isn't a single broken or burnt part anywhere on it.
I think it might be broken at the PCIe power connector, inside the connector itself. Although it seems impossible to

----------



## J7SC

Demonkevy666 said:


> I have tried another power supply and different power plugs; the same thing happens. I opened the card up, but there isn't a single broken or burnt part anywhere on it.
> I think it might be broken at the PCIe power connector, inside the connector itself. Although it seems impossible to


...a broken connector could be part of the problem; even a faulty HDMI cable (hard to believe, but I ran into that once). Finally, have you checked for dirt/debris inside the PCIe connector on the card? 

FYI, Buildzoid's YouTube (sample > here) also has good info on how to check power stages etc., even those which don't 'look' burned. Alternatively, you could bring it to a specialist shop and have all the components tested.


----------



## Crosslhs82

Hi Everybody 
System Up and Running.
With 2/3rds of my 420 acting as a passive rad till I get my fans out of my son's rig tomorrow or Wednesday.
Ran a CB R23; the 5900X topped out @ 70-71c, which is lower by 5-6c.

As for the 2080 Ti, I just started Unigine Heaven; temp holding 46-47c @ 2085mhz steady, with the 5900X at 58-59c,
FPS 118- Dammmit tripped the breaker ahggg.

Round 2
Lowered the power slider to 105.
Upped the A/C 1 degree.
Heaven now continuously looping for a good 15 mins; temps 62-63c on the 5900X @ 4.9GHz-----
2080 Ti doing 295w-320w, 51-52c now,
2070-2085mhz, 118-200fps @ 1440p ultra, scene depending.
Here's a few pics.
Keep in mind this is not a show case.
I tried doing what I could as routing goes, but I can only do so much.
I DO WANT ANOTHER CASE NOW.
I have had this one since the FX-6350/Gigabyte 990FX UD3 days.

So hoping the other 2 fans on the 420 rad will help a little bit more when I put those on.

From what I can see, my 2080 Ti hot spot temp is within 1-2 degrees of what it was with the previous owner's pad and paste job on the air cooler.
I don't know what pads he used, but I went with TG Minus8 pads, 8 W/mK
( good / bad idea ? )
Also having thoughts of getting a 360 into the mix, maybe when I find a case I like without spending 3-5 Benjamins.
I can still put one in this case no problem, as long as it's 30-ish mm thick.

Will have more later.
Good Night Everybody!!!!!!!!!!


----------



## Crosslhs82

Hope Everyone is having A Great Day!!!!!!!

Still with 2/3 of the 420 passive, running Heaven I have the 2080 Ti steady @ 2100mhz / mem +475, still undervolted.
I'm afraid to let the beast run at full power,
because of the breaker trips.

I got some game time in last night; temps on the GPU stayed between 46-48c the whole time. That is at least 10-14c lower than on air.

I didn't get all of what I was hoping for from this full loop Yet, but the GPU is liking it and the 5900X is at least maintaining.
Loop water temp is 35.4c and room temp is 24-25c.

Sorry for not being able to get pics of the GPU powered and lit up, as when this unit is powered on it's in a spot where pictures would be a big task in themselves.


----------



## Crosslhs82

Hi Everyone

Man, those 4 fans to finish this loop off helped!!!!
The 2080 now runs 40-43c during gaming,
holding 2115mhz,
while the 5900X is now 53-56c.
Running CB R23, the 5900X ran at 67c, which is 10-11c lower than with the Corsair H150i.
The 2080 is running 20-23c lower.
I would Say Ya I'm HOOKED !!!!!!!!!


----------



## Crosslhs82

Hi Everybody
I was able to get some (7 actually) fairly decent pictures (from an angle) of the system running.
It's doing the Rainbow thing, but I think it's not too bad.
If I run Aura Sync (Asus lighting services), Ghost Recon Wildlands won't open, citing Easy Anti-Cheat, so I haven't run it for a long time (haven't even tried Breakpoint). I don't want to run G.Skill's software at the same time along with Corsair iCUE either, so the GPU is plugged into the 3-pin RGB header on the Asus mobo: Rainbow, G.Skill RAM Rainbow, and I have the Corsair set to Rainbow, but nothing is in sync.

Still playing with settings and seeing what I can and can't get away with.
Gaming tonight after some settings changes, CPU and GPU are within 10c of each other during in-game temps.

Later Everybody!!!!!!!!!!


----------



## Crosslhs82

So I installed Armoury Crate, and the GPU block and RAM are able to be in sync, but not with the Corsair. GR Breakpoint / GRW now open, but Wildlands has suffered in fps and CPU utilization for some unknown reason.
Like 35fps compared to 76-89fps, and 4% CPU util running the benchmark.
So I gave myself something else to figure out, or I'll uninstall Armoury Crate.
No real big loss; I've been without RGB for a long time.


----------



## kithylin

Crosslhs82 said:


> So I installed Armoury Crate, and the GPU block and RAM are able to be in sync, but not with the Corsair. GR Breakpoint / GRW now open, but Wildlands has suffered in fps and CPU utilization for some unknown reason.
> Like 35fps compared to 76-89fps, and 4% CPU util running the benchmark.
> So I gave myself something else to figure out, or I'll uninstall Armoury Crate.
> No real big loss; I've been without RGB for a long time.


A lot of people get this wrong: do not look at overall CPU utilization. Switch Task Manager's performance view for CPU to show all cores. It's likely you have one core or thread maxed out somewhere even if the overall utilization is 4%. So look for something like that.
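As a toy illustration of why the overall number hides this (the per-core values below are made up, not measured; in practice you'd read them from Task Manager's all-cores view or a library such as psutil):

```python
# Toy illustration: overall CPU % can hide a single maxed-out core.
def bottlenecked_cores(per_core: list[float], threshold: float = 95.0) -> list[int]:
    """Return indices of cores pegged at or above `threshold` percent."""
    return [i for i, load in enumerate(per_core) if load >= threshold]

# 16 threads, one pegged at 100% -> overall utilization looks tiny.
per_core = [100.0] + [1.0] * 15
overall = sum(per_core) / len(per_core)
print(f"overall {overall:.1f}%, pegged cores: {bottlenecked_cores(per_core)}")
```

Here overall utilization reads about 7%, yet core 0 is completely saturated, which is exactly the single-thread bottleneck kithylin describes.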


----------



## Crosslhs82

Thank you, will look.
But a 50 fps loss, hmm.


----------



## Crosslhs82

It's very strange: CPU util in-game shows the CPU up to 65% and the GPU at 97%, but still 50fps.
Uninstalled Armoury, verified game files, then uninstalled and reinstalled GRW, but left AI Suite 3 for easier quick fan control of the 420 rad, as the Thermaltake SATA-powered 10-port fan hub was giving no signal to the Commander Core XT or the mobo, so I just used chassis fan headers 1, 2 and 3 to power those 3 140's.
I will uninstall AI Suite and see what happens.

If that doesn't fix it, well, I guess I will have to remove the fps readout from MSI AB so I don't give myself scabs from scratching my head Lol


----------



## Crosslhs82

I was able to spend a little time today trying to fix the fps in GRW.
I downloaded an Armoury Crate uninstall utility from Asus, and it found enough leftover files that GRW is now back to what it was before installing Armoury Crate.

Have A Great Weekend Everyone!!!!!!!!!!!


----------



## dk_mic

Hi,
I wanted to reduce energy consumption on my card, so I played around with various undervolts and overclocks (watercooled MSI Gaming X Trio) and wanted to share.
I'm always blown away by how inefficiently the stock V/F curve is dialed in.

Superposition 4K:
I can remove ~120 W from stock power draw and keep ~95% of the performance.


| Setting | Watts | Avg FPS | Min FPS | Max FPS | Score | FPS/Watt |
| --- | --- | --- | --- | --- | --- | --- |
| 1800 MHz @ 800 mV, 8000 MHz mem | 183.5 | 89 | 70 | 112 | 11965 | 0.485 |
| 2010 MHz @ 925 mV, 8000 MHz mem (current daily) | 254 | 97 | 75 | 121 | 12993 | 0.382 |
| OC +145 MHz, 8000 MHz mem (for benching, but stable) | 345 | 103 | 81 | 130 | 13810 | 0.299 |
| Stock, 7000 MHz mem | 305 | 94 | 74 | 119 | 12615 | 0.308 |
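For what it's worth, the fps/watt figures are just average FPS divided by board power; recomputing them from dk_mic's numbers (a quick sanity check, rounding is mine):

```python
# Recompute the efficiency figures: fps/watt = average FPS / board power.
runs = {
    "1800 MHz @ 800 mV":         (183.5, 89),
    "2010 MHz @ 925 mV (daily)": (254.0, 97),
    "OC +145 MHz":               (345.0, 103),
    "stock":                     (305.0, 94),
}
for name, (watts, avg_fps) in runs.items():
    # Matches the posted values: 0.485, 0.382, 0.299, 0.308 fps/W respectively.
    print(f"{name}: {avg_fps / watts:.3f} fps/W")
```

The 800 mV undervolt delivers over 50% more frames per watt than stock, which is why the stock V/F curve looks so wasteful.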

Screenshots attached: undervolt, current daily, OC, stock.
How do you run your 2080 Ti these days?


----------



## Crosslhs82

Nice.
Pretty much locked @ 2100-2115
@ 1.006 V, even on water, because if I let it loose I trip my circuit breaker whenever my portable A/C unit's compressor happens to be running at the same time.


----------



## Crosslhs82

Hi everyone,
I was forced to either roll with the low flow rate from the Corsair XD5 or put in a different pump.

Well, I did that and more!


New Thermaltake Core X71
EK Kinetic TBE 300 D5
360mm top
420mm front
280mm bottom
Still the EK Quantum Vector nickel plexi on the 2080 Ti
and the XC7 Pro on the 5900X,
definitely pumping at the 1 GPM rate.


















At present, idle: CPU 30-31°C / GPU 27.5°C
Coolant temp 24°C

During a CB R23 run:
Coolant temp 26.8°C
CPU temp 67.75°C
Ambient temp 24°C


Running CB R23 and Unigine Heaven together:
Ambient temp has risen to 25°C
CPU @ 70-71°C
GPU @ 40-41°C
Coolant temp 31-31.5°C

Gaming last night for a couple of hours:
Ambient 24°C
CPU 51-54°C
GPU 38-41°C, mostly 38°C

Have A Great Weekend!!!!!!!!!!!!!!!


----------



## TheOpenfield

dk_mic said:


> How do you run your 2080 Ti these days?


Maxed out at the 280W PL. Reasonably efficient.


----------



## Uzishanar

I'd like a bit of help if possible. I've got the MSI RTX 2080 Ti Ventus GP OC with BIOS 90.02.42.00.42. I followed the exact steps to flash new firmware with nvflash to unlock 380W (mine is currently limited to 290W), but had no success due to the BIOS not matching. I also used --protectoff.

Is anyone aware of an A1 (1E07) BIOS version 90.02.42.00.42 with 380W? Or am I missing something like a dumb kid?


----------



## Laithan

Try the -f flag.
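For anyone landing here later, the usual sequence looks roughly like this. This is a sketch from memory, not a tested procedure: flag behavior differs between nvflash versions, `newbios.rom` is a placeholder filename, and flashing a mismatched BIOS can brick the card, so back up first and check your version's `nvflash --help` output:

```shell
# Save a backup of the current vBIOS before touching anything
nvflash --save backup.rom

# Disable the EEPROM write protection
nvflash --protectoff

# Flash the new image; -6 overrides a PCI subsystem/board ID mismatch,
# which is the usual cause of "BIOS not matching" refusals
nvflash -6 newbios.rom
```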


----------



## Sarge198

Thank you, OP, for the BIOS and flashing procedure information.
I'm currently running an NVIDIA RTX 2080 Ti Founders Edition, and the KFA2 RTX 2080 Ti BIOS seems to make a considerable difference in games like Fallout 4 running at high refresh rates.









I do have a question about memory temperature: I have seen it get as high as 107°C, at which point the fans sporadically spike to 5000 RPM.
Has anyone else experienced this behavior with their 2080 Ti? Is this normal operation? How might it affect the lifespan of the card?
I'm guessing the VRAM needs new thermal pads, or maybe it's lacking them altogether? I've never had to take one of these cards apart.

















I have the card mounted vertically in a Sliger SM580, it also has plenty of airflow.


----------



## kithylin

Sarge198 said:


> I do have a question about memory temperature: I have seen it get as high as 107°C, at which point the fans sporadically spike to 5000 RPM.
> Has anyone else experienced this behavior with their 2080 Ti? Is this normal operation? How might it affect the lifespan of the card?
> I'm guessing the VRAM needs new thermal pads, or maybe it's lacking them altogether? I've never had to take one of these cards apart.


The VRAM chips are rated for 125°C, so you are still within spec. Hot, but within spec. These are older cards now; if you never have, then taking the card apart to replace the thermal pads with something newer/better might be a good idea. It's up to you, but I don't think there's actually anything wrong at the moment.


----------



## neobr

Hi guys,

After over 3 years of good service, I'm thinking about changing the thermal pads on my MSI Ventus OC. The memory temperature is still at 80-90°C even on hot days, but there is a good amount of oil coming out of the pads. Should I be concerned?

It has the reference PCB, but I would like to know the proper thickness to order some pads. Has anyone disassembled this particular board and/or know where I can get this information?

Thanks!


----------



## Medizinmann

neobr said:


> Hi guys,
> 
> After over 3 years of good service, I'm thinking about changing the thermal pads on my MSI Ventus OC. The memory temperature is still at 80-90°C even on hot days, but there is a good amount of oil coming out of the pads. Should I be concerned?


As user *kithylin *already wrote - these chips are rated for well above 100°C – which means your temps are safely in spec.



> It has the reference PCB, but I would like to know the proper thickness to order some pads. Has anyone disassembled this particular board and/or know where I can get this information?
> 
> Thanks!


The closest thing I could find is this:
Eisblock_Aurora2080Ti_FE_manual.pdf (alphacool.com) 

But I don't know if the thermal pads are exactly the same as with the stock air cooler...

You could use this:
CS LABS K5 PRO viscous thermal paste for thermal pad replacement, 60g (6x 10g pack; iPhone, Apple iMac, Sony PS4 and PS3, Xbox, Acer Aspire etc.) : Amazon.com.br 

...which will work much better than any thermal pad, but might be a mess to remove some day... ;-)
More FPS for just $10 - K5 Pro Viscous Thermal Paste - YouTube 

Best regards,
Medizinmann


----------



## kithylin

I just wanted to add a note for people reading this thread in the future: even when they were brand new, it was very common for air-cooled RTX 2080 Tis (almost all of them, except a few select rare models) to run their VRAM chips in the 100-110°C range, and this was perfectly normal and acceptable. If you're still getting those temps today on an air-cooled 2080 Ti, it's perfectly fine.

EDIT: Fixed some typos. That's what I get for trying to write forum comments when I'm falling asleep, derp. I woke up and re-read my comments in the morning and realized I wrote a bunch of nonsense. Is fixed now.


----------



## neobr

Medizinmann said:


> As user *kithylin *already wrote - these chips are rated for well above 100°C – which means your temps are safely in spec.
> 
> 
> 
> The closest thing I could find is this:
> Eisblock_Aurora2080Ti_FE_manual.pdf (alphacool.com)
> 
> But I don't know if the thermal pads are exactly the same as with the stock air cooler...
> 
> You could use this:
> CS LABS K5 PRO viscous thermal paste for thermal pad replacement, 60g (6x 10g pack; iPhone, Apple iMac, Sony PS4 and PS3, Xbox, Acer Aspire etc.) : Amazon.com.br
> 
> ...which will work much better than any thermal pad, but might be a mess to remove some day... ;-)
> More FPS for just $10 - K5 Pro Viscous Thermal Paste - YouTube
> 
> Best regards,
> Medizinmann





kithylin said:


> I just wanted to add a note for people reading this thread in the future: even when they were brand new, it was very common for air-cooled RTX 2080 Tis (almost all of them, except a few select rare models) to run their VRAM chips in the 100-110°C range, and this was perfectly normal and acceptable. If you're still getting those temps today on an air-cooled 2080 Ti, it's perfectly fine.
> 
> EDIT: Fixed some typos. That's what I get for trying to write forum comments when I'm falling asleep, derp. I woke up and re-read my comments in the morning and realized I wrote a bunch of nonsense. Is fixed now.


I really appreciate your words and tips guys.

I put the wrong focus in my post when I mentioned temperature; I know the temps are really good after more than 3 years of work. The question was actually about the oil that gradually seeps out of the thermal pads. When the card was mounted vertically I occasionally had to clean it, as it ran down to the slot; now that it's mounted horizontally, I suppose this should stop for a while.

Logic says I should worry when the fluid stops coming out? Does that make any sense? I know it's not conductive, but my fear is that some modules end up losing contact with the thermal pads and I'm being tricked by the sensor saying that everything is fine, or something like that...

On the other hand, I'm afraid to open it, make the change, and end up with higher temperatures due to some detail in the quality of the material and/or my ability to do it well.

Thanks, and sorry about my poor english


----------



## Medizinmann

neobr said:


> I really appreciate your words and tips guys.
> 
> I put the wrong focus in my post when I mentioned temperature; I know the temps are really good after more than 3 years of work. The question was actually about the oil that gradually seeps out of the thermal pads. When the card was mounted vertically I occasionally had to clean it, as it ran down to the slot; now that it's mounted horizontally, I suppose this should stop for a while.
> 
> Logic says I should worry when the fluid stops coming out? Does that make any sense?


The oil is nasty but nothing you should worry too much about. You can wipe it off and leave it at that.



> I know it's not conductive but my fear is that some modules end up losing contact with the thermal pads and I am being tricked by the sensor saying that everything is fine or something like that...


Well, it is a sign of ageing of the thermal pads, and yes, the more oil they lose, the more brittle they will get; at some point they might lose their ability to transfer enough heat.
But as long as the temps are okay, I wouldn't worry too much.



> On the other hand, I'm afraid to open it, make the change and end up with higher temperatures due to some detail in the quality of the material and/or my ability to do it well


Well, it isn't open heart surgery, but I know what you are talking about; when I changed the air cooler for a liquid cooler on my GPU, I also held my breath and crossed my fingers and such...



> Thanks, and sorry about my poor english


As far as I'm concerned, your English is perfectly fine. We aren't all native speakers around here, you know... 😉 

Best regards,
Medizinmann


----------



## ilgello

Fixed it


----------



## kithylin

ilgello said:


> Fixed it


What was it that you fixed?


----------



## J7SC

...while newer-gen GPUs have moved into the home office, I still have my 2x 2080 Ti Aorus Xtreme Waterforce WB and X399 MSI Creation TR setup. It's now four years since I got the two 2080 Tis, but they're still running strong. In fact, a 340W OC'd TR and 2x 380W GPUs really help warm things up with -11°C outside... I did a few Superposition 8K runs tonight at 2160 MHz on each core and 2031 MHz on the VRAM:


----------



## kithylin

J7SC said:


> ...while newer-gen GPUs have moved into the home office, I still have my 2x 2080 Ti Aorus Xtreme Waterforce WB and X399 MSI Creation TR setup. It's now four years since I got the two 2080 Tis, but they're still running strong. In fact, a 340W OC'd TR and 2x 380W GPUs really help warm things up with -11°C outside... I did a few Superposition 8K runs tonight at 2160 MHz on each core and 2031 MHz on the VRAM:


It's really a shame that SLI doesn't work anymore. I tried dual 1080 Tis back in spring 2022 on my Ryzen X570 system, and it didn't gain even a single FPS in games with the second card (compared to a single one), even at 1440p. SLI is officially dead now, forever. I miss the SLI days.


----------



## J7SC

kithylin said:


> It's really a shame that SLI doesn't work anymore. I tried dual 1080 Tis back in spring 2022 on my Ryzen X570 system, and it didn't gain even a single FPS in games with the second card (compared to a single one), even at 1440p. SLI is officially dead now, forever. I miss the SLI days.


I agree, and I always liked the project build symmetry of SLI / NVLink... still, an RTX 4090 has 76.3 billion transistors (vs. 18.6 billion for the RTX 2080 Ti) and can score over 17,000 in Superposition 8K...


----------



## kithylin

J7SC said:


> I agree, and I always liked the project build symmetry of SLI / NVLink... still, an RTX 4090 has 76.3 billion transistors (vs. 18.6 billion for the RTX 2080 Ti) and can score over 17,000 in Superposition 8K...


It's kind of off topic, but kind of similar: I went through the same thing building a Windows XP retro computer a couple of years ago. I was looking at something like dual GTX 580s, but that's SLI, which doesn't work in all games, and even where it does work it may not work right, and all of that nonsense, when I realized a single GTX 780 is faster and works with everything. So I understand what you're saying about the newer cards. SLI is kind of dead today, even when it does work on supported hardware, sadly.


----------



## J7SC

...refurbished my December 2018 2x 2080 Ti Threadripper build... new BIOS and all that for the mobo, plus various other bits... the 2080 Tis are OC'ing to the same level as four years ago... I am adding a water-cooled 3090 Strix (for 3x GPU) to make a rendering station.


----------



## -CreepPorVida-

Having trouble with my Aorus Xtreme: I'm getting a PWR PerfCap reason while only using 80% TDP.

The power slider on the stock BIOS in Afterburner is set to +22%, but the card can't seem to push past 1V.


----------



## J7SC

-CreepPorVida- said:


> Having trouble with my Aorus Xtreme: I'm getting a PWR PerfCap reason while only using 80% TDP.
> 
> The power slider on the stock BIOS in Afterburner is set to +22%, but the card can't seem to push past 1V.
> 
> View attachment 2590974
> View attachment 2590975
> View attachment 2590976


Can you rerun with GPU-Z set to 'max' for Board Power Draw, like below, as well as max PCIe slot power draw? Also, what kind of PSU do you have, and are you using two separate PCIe 8-pins (rather than one cable with two dongles)?

Below is my duo of Aorus Xtreme WF WB (factory water-block) cards with GPU-Z readings after a bench run. These cards couldn't even reach full clocks due to power limits; these Aorus Xtreme cards should reach between 370W and 380W each. In the case below, I am closing in on the limit of the 1300W Platinum PSU, as there's also a 340W OC'd Threadripper and various peripherals running (note my edgy PerfCap reason). Anyway, if you can show max board power and the other values set to max in GPU-Z, that would help, along with your PSU and PCIe connection info.


----------



## -CreepPorVida-

J7SC said:


> Can you rerun with GPU-Z set to 'max' for Board Power Draw, like below, as well as max PCIe slot power draw? Also, what kind of PSU do you have, and are you using two separate PCIe 8-pins (rather than one cable with two dongles)?
> 
> Below is my duo of Aorus Xtreme WF WB (factory water-block) cards with GPU-Z readings after a bench run. These cards couldn't even reach full clocks due to power limits; these Aorus Xtreme cards should reach between 370W and 380W each. In the case below, I am closing in on the limit of the 1300W Platinum PSU, as there's also a 340W OC'd Threadripper and various peripherals running (note my edgy PerfCap reason). Anyway, if you can show max board power and the other values set to max in GPU-Z, that would help, along with your PSU and PCIe connection info.
> 
> View attachment 2590980


I will need to double-check the cabling (I upgraded from a 5700 XT that had 2x 6-pin), but I think it's 2x 8-pin (where 2 pins are kind of loose and you slot them in next to the 6-pin). My PSU is 750W, I have a Ryzen 3900X, and I will screenshot the power draw shortly.


----------



## -CreepPorVida-

J7SC said:


> Can you rerun with GPU-Z set to 'max' for Board Power Draw, like below, as well as max PCIe slot power draw? Also, what kind of PSU do you have, and are you using two separate PCIe 8-pins (rather than one cable with two dongles)?
> 
> Below is my duo of Aorus Xtreme WF WB (factory water-block) cards with GPU-Z readings after a bench run. These cards couldn't even reach full clocks due to power limits; these Aorus Xtreme cards should reach between 370W and 380W each. In the case below, I am closing in on the limit of the 1300W Platinum PSU, as there's also a 340W OC'd Threadripper and various peripherals running (note my edgy PerfCap reason). Anyway, if you can show max board power and the other values set to max in GPU-Z, that would help, along with your PSU and PCIe connection info.
> 
> View attachment 2590980


Here is the screenshot; you can see 340W board power draw. The other weird thing this card does: my fans are set to 48% but are running at 4000 RPM.


----------



## J7SC

-CreepPorVida- said:


> Here is the screenshot; you can see 340W board power draw. The other weird thing this card does: my fans are set to 48% but are running at 4000 RPM.
> View attachment 2590984


...whoa, hotspot at 107°C while the fans show 4000 RPM? The boost algorithm will cut full voltage anyway at that temp. I would stop pushing the card for now, then take it apart and check the thermal paste, thermal pads, and fan connections. By the looks of it, there is an air-cooler mounting problem. Your 340W is otherwise OK.


----------



## -CreepPorVida-

J7SC said:


> ...whoa, hotspot at 107°C while the fans show 4000 RPM? The boost algorithm will cut full voltage anyway at that temp. I would stop pushing the card for now, then take it apart and check the thermal paste, thermal pads, and fan connections. By the looks of it, there is an air-cooler mounting problem. Your 340W is otherwise OK.


Yeah, so I got this card from my friend yesterday and traded him cash + my 5700 XT; hotspot on AMD cards is a bit different. Is there a guide to remounting? Should I buy specific pads? The last time I did this was on a 980 Ti whose AIO blew, and I put a Morpheus II on it, but it was a major pain.


----------



## J7SC

-CreepPorVida- said:


> Yeah, so I got this card from my friend yesterday and traded him cash + my 5700 XT; hotspot on AMD cards is a bit different. Is there a guide to remounting? Should I buy specific pads? The last time I did this was on a 980 Ti whose AIO blew, and I put a Morpheus II on it, but it was a major pain.


...for thermal pad size, you just need to google it (mine are different, as they have the factory water-blocks). You can carefully take it apart and see if the pads are reusable; the hotspot is either a paste or a mount issue. I typically use Gelid GC-Extreme paste for the die and completely cover the die area (it's a bit thicker and stays more liquid over time), though Arctic MX-6 should also work fine. 

...as for pads, all my other cards (right up to the 4090) have custom water-blocks and thermal _putty_ (TG-10 from DigiKey) for the VRAM and even the mosfets... thermal putty is like used chewing gum and conforms to the available space, and I have nothing but good things to say about it (on some of my cards, over 1.5 years with the same perfect temps). Alternatively, once you know the correct thermal pad thickness, you can order pads such as the Thermalright Odyssey Extreme from Amazon, which is available in various thicknesses (0.5mm to 2mm). I even put a line of thermal paste on top of the thermal pads when I'm not using thermal putty...


----------



## -CreepPorVida-

J7SC said:


> ...for thermal pad size, you just need to google it (mine are different, as they have the factory water-blocks). You can carefully take it apart and see if the pads are reusable; the hotspot is either a paste or a mount issue. I typically use Gelid GC-Extreme paste for the die and completely cover the die area (it's a bit thicker and stays more liquid over time), though Arctic MX-6 should also work fine.
> 
> ...as for pads, all my other cards (right up to the 4090) have custom water-blocks and thermal _putty_ (TG-10 from DigiKey) for the VRAM and even the mosfets... thermal putty is like used chewing gum and conforms to the available space, and I have nothing but good things to say about it (on some of my cards, over 1.5 years with the same perfect temps). Alternatively, once you know the correct thermal pad thickness, you can order pads such as the Thermalright Odyssey Extreme from Amazon, which is available in various thicknesses (0.5mm to 2mm). I even put a line of thermal paste on top of the thermal pads when I'm not using thermal putty...


Thanks for the help. I'm gonna find the pad size and get some to have on hand just in case **** goes sideways. I'll grab some of the GC as well, repaste it, and follow up here in a few days with the results.


----------



## J7SC

-CreepPorVida- said:


> Thanks for the help. I'm gonna find the pad size and get some to have on hand just in case **** goes sideways. I'll grab some of the GC as well, repaste it, and follow up here in a few days with the results.


OK, good luck. Just take your time with the cleaning and mounting, making sure to cross-tighten (gently!) the initial 4 screws surrounding the die area. BTW, if you have a digital micrometer, you can probably work out the pad thickness if the originals are still on there and you can find a pad area that isn't all squished or otherwise damaged.


----------



## QTargetQ

Hello, I have a question: which BIOS would be best for my Zotac 2080 Ti ArcticStorm? I have tried lots of BIOSes, but most of them like to crash; the best results, at least for now, are either stock or the EVGA Hydro Copper with +130 on the core, but I can't say anything about stability yet, I have to run some more tests.


----------



## J7SC

QTargetQ said:


> Hello, I have a question: which BIOS would be best for my Zotac 2080 Ti ArcticStorm? I have tried lots of BIOSes, but most of them like to crash; the best results, at least for now, are either stock or the EVGA Hydro Copper with +130 on the core, but I can't say anything about stability yet, I have to run some more tests.


You might also want to try the vBIOS of the Gigabyte / Aorus 2080 Ti Xtreme Waterforce WB (I use two of those, per above), with up to 380W. You do need to check I/O compatibility though, re. the HDMI and DisplayPort outputs.


----------

